# News aggregator

### Reminder: 10 days to submit talks to CUFP 2016

### Douglas M. Auclair (geophf): May 2016 1Liners

**One-liners**

- May 24th, 2016: Given

  ```haskell
  f :: a -> [a] -> b
  g :: c -> a
  ```

  write `h :: c -> [c] -> b`, point-free, in terms of `f` and `g`, where
  `h x y = f (g x) (map g y)`.

- May 16th, 2016: The next three #1Liners are of a piece, using

  ```haskell
  data CmpV a =
    Vec { len  :: Int, top :: a, elts :: [a]
        , cmp  :: CmpV a -> CmpV a -> Ordering }
  ```

  - Give the point-free definition of `twoVs :: CmpV a -> CmpV b -> ([a], [b])`.

  - Given

    ```haskell
    instance Ord (CmpV a) where
      compare v1 = uncurry (cmp v1) . (v1,)
    ```

    make `compare` point-free by removing `v1` from the above definition.

  - An `Ord` instance needs an `Eq` instance:

    ```haskell
    instance Eq a => Eq (CmpV a) where
      v1 == v2 = elts v1 == elts v2
    ```

    Point-free-itize `(==)`.

- May 16th, 2016: You have the following lambda, where `fn :: a -> a -> Bool`:

  ```haskell
  \x y -> x == y || fn x y
  ```

  Point-free-itize it.

  - obadz @obadzz, without any fancy uses of the `(a ->)` Applicative :)

    ```haskell
    curry $ foldr (||) False . flip map [(==), fn] . flip uncurry
    ```

  - obadz @obadzz, with fancy use of the `(a ->)` Applicative :)

    ```haskell
    curry $ liftA2 (||) (uncurry (==)) (uncurry fn)
    ```

  - Noah Luck Easterly @walkstherain

    ```haskell
    curry $ uncurry (||) . (uncurry (==) &&& uncurry fn)
    ```

- May 5th, 2016: `sames :: Eq a => [a] -> [a] -> Int` counts equal values at the
  same indices in two lists. What is the point-free definition?

  - joomy @cattheory

    ```haskell
    sames = curry (length . filter (uncurry (==)) . uncurry zip)
    ```

  - bazzargh @bazzargh, and then Fatih Karakurt @karakfa

    ```haskell
    ((length . filter id) .) . zipWith (==)
    ```

  - me: a `sum`/`fromEnum`/`zipWith` variant (garbled in the source)

  - Noah Luck Easterly @walkstherain

    ```haskell
    sames = ((sum . map fromEnum) .) . zipWith (==)
    ```

    `getSum . foldMap (Sum . fromEnum)` seems better than `foldr (bool id succ) 0`,
    but both satisfy `[Bool] -> Int`.

  - Андреев Кирилл @nonaem00

    ```haskell
    let it = (length .) . (filter id .) . zipWith (==)
    ```

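The May 24th and `CmpV` puzzles go unanswered in the digest above. Here is one possible set of point-free solutions as a sketch: the pointful reading of `twoVs` (pairing the `elts` of both vectors) is my assumption, and `compareV`/`eqV` are stand-ins for the class methods so the file compiles without instances:

```haskell
import Control.Monad (join)
import Data.Function (on)

-- The record from the May 16th puzzles.
data CmpV a =
  Vec { len :: Int, top :: a, elts :: [a]
      , cmp :: CmpV a -> CmpV a -> Ordering }

-- May 24th: h x y = f (g x) (map g y), point-free,
-- here parameterized over f and g.
h :: (a -> [a] -> b) -> (c -> a) -> c -> [c] -> b
h f g = (. map g) . f . g

-- Assumed pointful reading: twoVs v1 v2 = (elts v1, elts v2).
twoVs :: CmpV a -> CmpV b -> ([a], [b])
twoVs = (. elts) . (,) . elts

-- compare v1 = uncurry (cmp v1) . (v1,) with v1 removed:
-- in the ((->) r) monad, join f x = f x x, so join cmp v1 v2 = cmp v1 v1 v2.
compareV :: CmpV a -> CmpV a -> Ordering
compareV = join cmp

-- v1 == v2 = elts v1 == elts v2, point-free via Data.Function.on.
eqV :: Eq a => CmpV a -> CmpV a -> Bool
eqV = (==) `on` elts
```

In GHCi, `h (:) id 1 [2,3]` yields `[1,2,3]`, matching the pointful definition.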

### Roman Cheplyaka: Predicting a coin toss

I flip a coin and it comes up heads. What is the probability it will come up heads the next time I flip it?

“Fifty percent,” you say. “The coin tosses are independent events; the coin doesn’t have a memory.”

Now I flip a coin ten times, and ten times in a row it comes up heads. Same question: what is the probability it will come up heads the next time?

You pause for a second, if only because you are not used to getting heads ten times in a row.

But, after some hesitation, you convince yourself that this is no different from the first experiment. The coin still has got no memory, and the chances are still 50-50.

Or you become suspicious that something is not right with the coin. Maybe it is biased, or maybe it has two heads and no tails. In that case, your answer may be something like 95% for heads, where the remaining 5% account for the chance that the coin is only somewhat biased and tails are still possible.

This sounds paradoxical: coin tosses are independent, yet the past outcomes influence the probability of the future ones. We can explain this by switching from frequentist to Bayesian statistics. Bayesian statistics lets us model the coin bias (the probability of getting heads on a single toss) itself as a random variable, which we shall call \(\theta\). It is random simply because we don’t know its true value, not because it varies from one experiment to another. Consequently, we will update its probability distribution after every experiment because we gain more information, not because the experiment affects the coin itself.

Let \(X_i\in\{H,T\}\) be the outcome of the \(i\)th toss. If we know \(\theta\), we automatically know the distribution of \(X_i\):

\[p(X_i=H|\theta)=\theta.\]

As before, the coin has no memory, so for any given \(\theta\), the tosses are independent: \(p(X_i \wedge X_j|\theta)=p(X_i|\theta)p(X_j|\theta)\). But they are independent only when conditioned on \(\theta\), and that resolves the paradox. If we don’t assume that we know \(\theta\), then \(X\)s *are* dependent, because the earlier observations affect what we know about \(\theta\), and \(\theta\) affects the probability of the future observations.

A model with conditionally independent variables is called a Bayesian network or probabilistic graphical model, and it can be represented by a directed graph as shown on the right. The arrows point from causes to effects, and the absence of an edge indicates conditional independence.

Based on our evidence of 10 heads in a row, Bayes’ theorem lets us estimate the distribution of \(\theta\). All we need is a prior distribution – what did we think about the coin before we tossed it?

For coin tossing and other binary problems, it is customary to take the Beta distribution as the prior, as it makes the calculations very easy. Often such a choice is justified, but in our case it would be a terrible one. Almost all coins we encounter in our lives are fair. To center the Beta distribution at \(\theta=0.5\) with low variance, we would need to set its parameters, \(\alpha\) and \(\beta\), to large equal numbers. The resulting distribution would assign non-trivial probability only to small deviations from \(\theta=0.5\), and it would be barely affected by our striking evidence.

Instead, let’s engineer our prior distribution from scratch. Double-sided coins may be rare in everyday life, but they are easy to buy on eBay. When someone approaches us out of the blue and starts flipping coins, there’s a fair chance they’ve got one of those. Still, we believe in humanity, so let’s assign a point probability of just \(1\%\) each to \(\theta=0\) and \(\theta=1\). What about biased coins, such as coins with \(\theta=0.83\)? It turns out they are unlikely to exist. Nevertheless, Bayesian statistics teaches us to be reluctant to assign zero probabilities to events, since then no amount of evidence can prove us wrong. So let’s take \(0.1\%\) and spread it uniformly across the interval \([0;1]\). The remaining \(97.9\%\) will be the probability of a fair coin.

Formally, our prior distribution over \(\theta\) can be specified by its probability density as

\[ p(\theta)=0.979\delta(\theta-0.5)+0.01\delta(\theta)+0.01\delta(\theta-1)+0.001, \]

where \(\delta\) is the Dirac delta function used to specify point probabilities.

Let \(D\) refer to the event that \(X_i=H\), \(i=1,2,\ldots,10\). Then \(p(D|\theta)=\theta^{10}\). By Bayes’ theorem,

\[ p(\theta|D)=\frac{p(D|\theta)p(\theta)}{\int_0^1 p(D|\theta)p(\theta)d\theta} = \frac{\theta^{10}p(\theta)}{\int_0^1 \theta^{10}p(\theta)d\theta}. \]

Now we need to do a bit of calculation by hand:

\[ \begin{multline} \int_0^1 \theta^{10}p(\theta)d\theta=0.979\cdot0.5^{10}+0.01\cdot 1^{10}+0.01 \cdot 0^{10} + 0.001\int_0^1 \theta^{10}d\theta \\ = 9.56\cdot 10^{-4} + 0.01 + 9.09\cdot 10^{-5}=0.0110; \end{multline} \] \[ p(\theta|D)=0.087\delta(\theta-0.5)+0.905\delta(\theta-1)+0.091\theta^{10}. \]

Thus, we are \(90.5\%\) sure that the coin is double-headed, but we also allow \(8.7\%\) for pure coincidence and \(0.8\%\) for a biased coin.
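The arithmetic above is easy to check mechanically. A small sketch in plain Haskell (the constant names are mine) that recomputes the evidence and the posterior masses:

```haskell
-- Prior masses from the article: fair, double-headed, double-tailed,
-- plus 0.1% spread uniformly over [0,1].
pFair, pHeads2, pTails2, pUniform :: Double
pFair    = 0.979
pHeads2  = 0.01
pTails2  = 0.01
pUniform = 0.001

-- p(D) = integral of theta^10 p(theta) dtheta; the uniform component
-- contributes 0.001 * 1/11, since the integral of theta^10 over [0,1] is 1/11.
evidence :: Double
evidence = pFair * 0.5 ** 10 + pHeads2 * 1 ** 10 + pTails2 * 0 ** 10
         + pUniform / 11

-- Posterior masses of the three hypotheses (postBiased is the total
-- weight of the continuous 0.091 * theta^10 term).
postFair, postHeads2, postBiased :: Double
postFair   = pFair * 0.5 ** 10 / evidence
postHeads2 = pHeads2 / evidence
postBiased = pUniform / 11 / evidence
```

These come out to roughly 0.087, 0.905 and 0.008, matching the \(8.7\%\), \(90.5\%\) and \(0.8\%\) in the text.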

Now back to our question: how likely is it that the next toss will produce heads?

\[ \begin{multline} p(X_{11}=H|D) = \int_0^1 p(X_{11}=H|D,\theta)p(\theta|D)d\theta = \int_0^1 \theta \, p(\theta|D)d\theta \\ = 0.087\cdot 0.5+0.905\cdot 1+0.091\cdot \int_0^1\theta^{11}d\theta = 0.956. \end{multline} \]

Very likely indeed. Notice, by the way, how we used the conditional independence above to replace \(p(X_{11}=H|D,\theta)\) with \(p(X_{11}=H|\theta)=\theta\).
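The same mechanical check works for the predictive probability, plugging in the rounded posterior weights from the text (the integral of \(\theta^{11}\) over \([0,1]\) is \(1/12\)):

```haskell
-- p(X11 = H | D) = 0.087 * 0.5 + 0.905 * 1 + 0.091 * (1/12),
-- using the rounded posterior weights derived in the article.
nextHeads :: Double
nextHeads = 0.087 * 0.5 + 0.905 * 1 + 0.091 / 12
```

This evaluates to about 0.956, as in the text.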

Bayesian statistics is a powerful tool, but the prior matters. Before you reach for the conjugate prior, consider whether it actually represents your beliefs.

A couple of exercises:

- How does our prior distribution change after a single coin toss (either heads or tails)?
- How does our prior distribution change after ten heads and one tails?

### Package takeover (also: the future of Netwire)

### Philip Wadler: Naomi Klein: The Best is Yet to Come

Amidst all the bad news, Naomi Klein shines a ray of light.

Taken together, the evidence is clear: The left just won. Forget the nomination—I mean the argument. Clinton, and the 40-year ideological campaign she represents, has lost the battle of ideas. The spell of neoliberalism has been broken, crushed under the weight of lived experience and a mountain of data.

What for decades was unsayable is now being said out loud—free college tuition, double the minimum wage, 100 percent renewable energy. And the crowds are cheering. With so much encouragement, who knows what’s next? Reparations for slavery and colonialism? A guaranteed annual income? Democratic worker co-ops as the centerpiece of a green jobs program? Why not? The intellectual fencing that has constrained the left’s imagination for so long is lying twisted on the ground. ...

Looking beyond this election cycle, this is actually good news. If Sanders could come this far, imagine what a left candidate who was unburdened by his weaknesses could do. A political coalition that started from the premise that economic inequality and climate destabilization are inextricable from systems of racial and gender hierarchy could well build a significantly larger tent than the Sanders campaign managed to erect.

And if that movement has a bold plan for humanizing and democratizing new technology networks and global systems of trade, then it will feel less like a blast from the past, and more like a path to an exciting, never-before-attempted future. Whether coming after one term of Hillary Clinton in 2020, or one term of Donald Trump, that combination—deeply diverse and insistently forward-looking—could well prove unbeatable.

### Philip Wadler: Scientists for EU

Scientists for EU wrote a cogent letter explaining how the EU advances UK science, and how UK science advantages the UK. You can sign it here.

*Please read this letter, add your name and share this link with other scientists.*

Scientific advance and innovation are critically dependent on collaboration. To remain a world-leading science nation, we must be team players.

The EU leads the world in science output, is beating the US in science growth – and is rapidly increasing investment in research. The EU is a science superpower. Our place in this team has boosted our science networking, access to talent, shared infrastructure and UK science policy impact. The economy of scale streamlines bureaucracy and brings huge added value for all. International collaborations have 40% more impact than domestic-only research.

Strong science is key for our economy and quality of life. It creates a virtuous cycle, leveraging investment from industry, raising productivity and creating high-value jobs for our future. In fact, 20% of UK jobs currently rely on some science knowledge. Science brings better medicines, cleaner energy, public health protections, a safer environment, new technologies and solutions to global challenges.

If we leave the EU, the UK will lose its driving seat in this world-leading team. Free-flow of talent and easy collaboration would likely be replaced by uncertainty, capital flight, market barriers and costly domestic red-tape. This would stifle our science, innovation and jobs.

It is no surprise that a recent survey showed 93% of research scientists and engineers saying the EU is a “major benefit” to UK research. The surprise is that many voters are still unaware that UK science and its benefits would be demoted by a vote to leave.

We, the undersigned, urge you to seriously consider the implications for UK science when you vote in the referendum on UK membership of the EU.

### Philip Wadler: A week of cycling ideas from Copenhagen

People for Bikes is visiting Copenhagen to learn about its famous cycling infrastructure. A week of stories is archived here, including The Great Copenhagen Loading Zone Compromise and How bikes make Copenhagen's neighbourhood business districts thrive.

### CUFP 2016 Call For Tutorials

### LOPSTR'16: Final Call for Papers and *Deadline Extension*

### Two Lectureships (Assistant Professorships) at St Andrews

### CMM-to-ASM: Register allocation wierdness

### [ANN] aivika-lattice-0.1

### Philip Wadler: David Turner Festschrift

### Philip Wadler: A busy summer

- Joy of Coding, 17 June 2016, De Doelen, Rotterdam; keynote speaker.
- ACSD, 22—24 June 2016, Torun; keynote speaker.
- BETTY Summer School, 27 June—1 July 2016, Limassol, Cyprus; lecturer.
- International Summer School on Metaprogramming, 8—12 August 2016, Cambridge; lecturer.
- Lambda World, 1 October 2016, Cadiz; keynote speaker.

### left to right pipeline of functions

### Allow extra commas in module declarations or lists?

### CFP: DEADLINE APPROACHING - MSWiM 2016

### Neil Mitchell: js-jquery 3.0.0 now out

The js-jquery Haskell library bundles the minified jQuery JavaScript code into a Haskell package, so it can be depended upon by Cabal packages and incorporated into generated HTML pages. It does so in a way that doesn't require each Haskell package to bundle its own extra data files, and in a way that meets the licensing requirements of distributions such as Debian.

I've just released version 3.0.0, following on from jQuery 3.0.0 a few days ago. This release breaks compatibility with IE6-8, so if that's important to you, insert an upper bound on the package.
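For example, a package that needs to keep IE6-8 support could constrain itself to the pre-3.0 series in its `.cabal` file; the surrounding `build-depends` entries here are illustrative:

```cabal
build-depends: base      >= 4.7  && < 5
             , js-jquery >= 1.11 && < 3
```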