# News aggregator

### Worked up examples of Simon Marlow's code hot-swapping

### [Feedback requested]: -fhelpful-import-errors

### [TFPIE 2016] 1st call for papers

### ghc commit and ticket mails

### Generalising Categories - categories of tuples

### problem with GHC bug wiki

### Catch-22 at GHC bug wiki.

### Gabriel Gonzalez: How to build library-agnostic streaming sources

The Haskell ecosystem has numerous libraries for effectful stream programming, including, but not limited to:

- List
- conduit
- enumerator
- io-streams
- iterIO
- iteratee
- list-t
- logict
- machines
- pipes

Unfortunately, this poses a problem for library writers. Which streaming library should they pick when providing effectful streaming components?

Sometimes the correct answer is: none of the above! We can often build streaming and effectful generators using only the base and transformers libraries!

The trick is to build polymorphic generators using only the MonadPlus and MonadTrans type classes. These generators can then be consumed by any library that provides an implementation of ListT that implements MonadPlus and MonadTrans.

#### MonadPlus

I like to think of MonadPlus as the "list building" type class. This is because you can assemble a list using return, mzero, and mplus:

```
>>> import Control.Monad (MonadPlus(..))
>>> mzero :: [Int]
[]
>>> return 1 :: [Int]
[1]
>>> return 1 `mplus` return 2 :: [Int]  -- [1] ++ [2]
[1, 2]
```

In other words, mzero is analogous to [], mplus is analogous to (++), and return builds a singleton list.

However, many things other than lists implement MonadPlus, including every implementation of ListT in the wild. Therefore, if we build collections using MonadPlus operations these collections will type-check as ListT streams as well.
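To make this concrete, here is a minimal sketch (using only base) of a single MonadPlus-polymorphic value instantiated at the list type; the same `pair` would also type-check at any ListT that implements MonadPlus:

```haskell
import Control.Monad (MonadPlus(..))

-- One polymorphic "collection"; nothing here mentions lists
pair :: MonadPlus m => m Int
pair = return 1 `mplus` return 2

-- Instantiating it at the list type recovers an ordinary list
asList :: [Int]
asList = pair

main :: IO ()
main = print asList  -- [1,2]
```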

Let's provide the following helper function to convert a list to this more general MonadPlus type:

```haskell
select :: MonadPlus m => [a] -> m a
select  []    = mzero
select (x:xs) = return x `mplus` select xs

-- or: select = foldr (\x m -> return x `mplus` m) mzero
```
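As a quick sanity check, we can exercise select at the list instance, where it behaves like the identity function; for example, it preserves mzero and distributes over mplus:

```haskell
import Control.Monad (MonadPlus(..))

select :: MonadPlus m => [a] -> m a
select = foldr (\x m -> return x `mplus` m) mzero

main :: IO ()
main = do
    let xs = [1, 2] :: [Int]
        ys = [3]    :: [Int]
    -- select distributes over mplus
    print (select (xs `mplus` ys) == (select xs `mplus` select ys :: [Int]))
    -- select preserves mzero
    print ((select mzero :: [Int]) == mzero)
```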

Note that this select function has some nice mathematical properties:

```haskell
select (xs `mplus` ys) = (select xs) `mplus` (select ys)

select mzero = mzero

-- This assumes the distributivity law for `MonadPlus`
select . (f >=> g) = (select . f) >=> (select . g)

select . return = return
```

#### MonadTrans

Using select and liftIO (from MonadIO), we can build list comprehensions that interleave effects, like this:

```haskell
example :: (MonadIO m, MonadPlus m) => m ()
example = do
    x <- select [1, 2, 3]
    liftIO (putStrLn ("x = " ++ show x))
    y <- select [4, 5, 6]
    liftIO (putStrLn ("y = " ++ show y))
```

You can read the above code as saying:

- Let x range from 1 to 3
- Print x every time it selects a new value
- For each value of x, let y range from 4 to 6
- Print y every time it selects a new value

Notice how example doesn't depend on any particular streaming library, so we can run it with diverse ListT implementations, all of which implement MonadPlus and MonadIO:

```
>>> Pipes.runListT example  -- This requires `pipes-4.1.4`
x = 1
y = 4
y = 5
y = 6
x = 2
y = 4
y = 5
y = 6
x = 3
y = 4
y = 5
y = 6
>>> _ <- Control.Monad.Logic.observeAllT example
<Exact same output>
>>> _ <- ListT.toList example
<Exact same output>
```

However, we can use this trick for more than just list comprehensions. We can build arbitrary lazy and effectful streams this way!

#### Generators

Here's an example of a generator that lazily emits lines read from standard input:

```haskell
import Control.Monad (MonadPlus(..))
import Control.Monad.IO.Class (MonadIO(..))
import System.IO (isEOF)

stdinLn :: (MonadIO m, MonadPlus m) => m String
stdinLn = do
    eof <- liftIO isEOF
    if eof
        then mzero
        else liftIO getLine `mplus` stdinLn
```

You can read the above code as saying:

- Check if we are at the end of the file
- If we're done, then return an empty list
- If we're not done, prepend a getLine onto a recursive call to stdinLn

We can prove this works by writing a program that forwards these lines to standard output:

```haskell
echo :: (MonadIO m, MonadPlus m) => m ()
echo = do
    str <- stdinLn
    liftIO (putStrLn str)
```

Now we can run echo using any of our favorite ListT implementations and it will do the right thing, streaming lines lazily from standard input to standard output in constant space:

```
>>> Pipes.runListT echo
Test<Enter>
Test
ABC<Enter>
ABC
42<Enter>
42
<Ctrl-D>
>>>
```

The exception is the transformers library, whose ListT implementation does not stream or run in constant space.

#### Combinators

We can also implement lazy variations on Control.Monad combinators using this interface.

For example, we can implement a lazy variation on replicateM using just select and replicate:

```haskell
replicateM' :: MonadPlus m => Int -> m a -> m a
replicateM' n m = do
    m' <- select (replicate n m)
    m'

-- or: replicateM' n = join . select . replicate n
```

We can use this lazy replicateM' to lazily echo 10 lines from standard input to standard output:

```haskell
example :: (MonadIO m, MonadPlus m) => m ()
example = do
    str <- replicateM' 10 (liftIO getLine)
    liftIO (putStrLn str)
```

We can implement a lazy mapM and forM, too, except now their implementations are so trivial that they don't even deserve their own functions:

```haskell
mapM' :: Monad m => (a -> m b) -> m a -> m b
mapM' = (=<<)

forM' :: Monad m => m a -> (a -> m b) -> m b
forM' = (>>=)

example :: (MonadIO m, MonadPlus m) => m ()
example = mapM' (liftIO . print) (replicateM' 10 (liftIO getLine))
```

Similarly, a lazy sequence just becomes join.
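A quick sketch at the list instance shows why:

```haskell
import Control.Monad (MonadPlus(..), join)

select :: MonadPlus m => [a] -> m a
select = foldr (\x m -> return x `mplus` m) mzero

-- a lazy `sequence` is just `join`
sequence' :: Monad m => m (m a) -> m a
sequence' = join

main :: IO ()
main = print (sequence' (select [return 1, return 2]) :: [Int])  -- [1,2]
```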

#### Conclusion

The following streaming libraries already provide their own implementation of ListT compatible with the above trick:

- List
- list-t
- LogicT
- pipes

The other streaming libraries do not currently provide a ListT implementation, but there is no reason why they couldn't! For example, conduit could implement an internal ListT type of its own and use that as an intermediate type for converting the above abstraction to the public Source type:

```haskell
convert :: Monad m => Data.Conduit.Internal.ListT m a -> Source m a
```

This polymorphic API obviously does not capture all possible streaming patterns. For example, you cannot implement something like take using this API. However, I suspect that a significant number of streaming components can be written using this dependency-free interface.

Edit: Thank you to Twan van Laarhoven, who pointed out that you can sometimes use MonadIO instead of MonadTrans, which produces nicer constraints. I updated this article to incorporate his suggestion.

### LANGUAGE ConstraintKinds not needed to use ConstraintKinds?

### Lee Pike: Viva La Resistance! A Resistance Game Solver

**Update (Feb 14, 2016):** see bottom for an improved strategy.

At a December workshop, I played The Resistance, a game in which there are two teams, the resistance and the spies. The spies know everyone’s identity; each resistance player knows only their own. Overall, the goal of the spies is to remain undetected; the goal of the resistance is to discover who the spies are.

Play proceeds in rounds in which a player nominates a subgroup to go on a “mission”. The nomination is then voted on. If the vote succeeds, every member of a mission plays either a “success” or “failure” card for the mission. One or two failure cards (depending on the mission size) causes the mission to fail. The cards played in a mission are public, but it is secret who played which card. (If the vote fails, the next player nominates a subgroup.)

The spies’ goal is to fail missions without being detected, and the resistance goal is to have missions succeed. So the spies generally wish to play failure cards while in a mission. Furthermore, spies always want some spy in the mission to spoil it. The resistance wants no spies to go on missions. The problem for the resistance is that when a mission fails, they know one or more of the subgroup is a spy, they just don’t know which one.

The spies win the game if they can cause three missions to fail before the resistance can cause three missions to succeed.

The game is simple but engaging, and similar in spirit to the game Mafia (Werewolf).

#### The Problem

I played the game with computer scientists from top universities and research labs. We debated strategy and were generally pretty bad at the game, despite having a lot of fun. In particular, the spies seemed to win a lot.

The problem is: what is the optimal approach to game play?

#### The Approach

The problem is a great fit for Bayesian analysis. In each round, we learn partial information about the players from the group outcome. Only spies play failure cards. We can use that information to update our belief in the probability that a player is a spy.

Suppose there are four players, and two spies. Initially each player has the same probability of being a spy, 1/2. Now suppose that players a, b, and c go on a mission, and return the card set {failure, failure, success}. How do we update the spy probabilities of the group?

Bayes’ Theorem states that

P(A|B) = P(B|A) · P(A) / P(B)

In our case, “A” is the event that a particular player is a spy, and “B” is the event that we observed a particular set of mission cards. We wish to compute P(A|B), the updated probability that a player is a spy given the cards played in the mission.

So for each mission, we apply Bayes’ Theorem to each player, including the players *not* in the mission—if the spy probabilities increase (or decrease) for the mission players, then they decrease (or increase) for the non-mission players.

From the cards {failure, failure, success}, we know that two of a, b, c are spies (and so d is definitely not a spy, since there are two spies, total). Let’s compute the updated spy probability for player a.

P(A) = 1/2, the original spy probability for player a (or any other player). To calculate P(B), we first determine every possible assignment of players in the mission to spies and non-spies (there are 2³ = 8 such assignments). In the mission, there are three possibilities consistent with two failure cards:

- a and b are spies
- a and c are spies
- b and c are spies

For each combination, we multiply the probabilities for the assignments. So in case (1), we have

1/2 · 1/2 · 1/2 = 1/8

for assigning players a and b to being spies and c to being a non-spy. Then we sum the probabilities for all three combinations. In our example, we get

1/8 + 1/8 + 1/8 = 3/8

So P(B) = 3/8.

To compute P(B|A), we assume that player a is a spy, and now recompute the probability that {b, c} contains the remaining one spy. Using the same approach as for computing P(B), we get P(B|A) = 1/2. Now we can apply Bayes’ Theorem:

P(A|B) = (1/2 · 1/2) / (3/8) = 2/3 ≈ 0.66

So player a’s probability of being a spy shot up to 0.66, and so did player b’s and c’s.

Player d’s spy probability drops to 0. The updated spy probabilities for any player not in the mission can be computed just as we did for the mission players, except we take the total number of spies and subtract the number of failure cards observed. In this case, however, since we know all the spies were in the mission group, d’s spy probability must be 0. (Another way of thinking about it is that there is an invariant: the sum of the players’ spy probabilities must always equal the total number of spies in the game.)
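To make the arithmetic concrete, here is a small Haskell sketch of the worked example above (my own illustration, not the solver's actual code): four players with prior 1/2, a three-player mission, and two failure cards observed.

```haskell
import Data.List (subsequences)

-- prior spy probability for every player
prior :: Double
prior = 1 / 2

-- probability that exactly k of n mission players (each with the same
-- prior) are spies: sum over all subsets of size k
pExactly :: Int -> Int -> Double
pExactly k n =
    sum [ prior ^ length s * (1 - prior) ^ (n - length s)
        | s <- subsequences [1 .. n], length s == k ]

pB, pBgivenA, pAgivenB :: Double
pB       = pExactly 2 3          -- 3/8: two spies among {a, b, c}
pBgivenA = pExactly 1 2          -- 1/2: one remaining spy among {b, c}
pAgivenB = pBgivenA * prior / pB -- 2/3, by Bayes' Theorem

main :: IO ()
main = print pAgivenB  -- 2/3 ≈ 0.66
```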

#### The Strategies

How should the game be played by the resistance and spies, respectively, to increase the odds of winning?

For the resistance, picking the players the least likely to be spies is optimal. Let’s call this the **pick-lowest** strategy.

One possible optimization is to always pick yourself for a mission. The rationale is that if you are part of the resistance and you pick yourself for a mission, then you have at least one guaranteed non-spy. So even if your probability of being a spy as known to the group is higher than others’, you have perfect knowledge of yourself. Let’s call this the **pick-myself** strategy.

For the spies, there are a few options. By default, the spies could always play a failure card (**fail-always**). But a spy might also play a success card to avoid detection; doing so is especially advantageous in the early rounds; one strategy is to never fail the first round (**succeed-first**). If the spies can collude to ensure only one plays a failure card during a mission, that provides the least amount of information to the resistance (**fail-one**).

There are other strategies and other combinations of strategies, but this is a good representative sample.

Which are the best strategies?

#### Findings

To discover the best strategies, we run Monte Carlo simulations of the strategies for each number of players (with different numbers of players, there are different numbers of spies and missions). I found a few interesting results:

- The best chance of winning for the resistance, and the closest odds between the resistance and spies, is with six players. At six players, the resistance has a 30-40% chance of winning under different strategies. The worst configuration is eight players, with not more than a 14% chance of winning for the resistance. During actual game play, it seemed that the odds favored the spies.
- The best strategy for the resistance is the **pick-lowest** strategy. The strategy may be counter-intuitive, but consider this: the pick-myself strategy provides the spies an opportunity to always include themselves in a mission when picking one. The pick-myself strategy is an instance of a local maximum (i.e., local knowledge of knowing yourself to be part of the resistance) that is non-optimal.
- Moreover, voting becomes a no-op. The game includes a voting round in which players vote on the proposed players for a mission. But if the resistance agrees on an optimal strategy, any deviation from the strategy by a player is because the player is a spy. If a player does deviate, the resistance votes against the proposal (and the resistance outnumbers the spies, so will win the vote), and we now have complete assurance the proposer is a spy. **The spies have no choice but to follow the optimal strategy of the resistance.**
- Of the spy strategies listed above, the best is the **fail-one** strategy, which is intuitive. The succeed-first strategy is another example of a local maximum that is non-optimal; while it protects that particular spy from detection, it is more valuable for the spies in general to fail the mission.

The related Mafia game has some analytical results, giving bounds on the ratio of (their version of) the resistance to spies. I have not done that, nor have I done Monte Carlo analysis to determine what proportion of resistance players to spies, and what mission sizes, gives a more even chance of winning. In Mafia, it is noted that in actual game play, the resistance wins more often than simulations/analytical analysis would suggest, with different attributions (e.g., people are bad at lying over iterative rounds).

#### Play Along at Home

I have implemented the Bayesian solver in a web server hosted on Amazon Web Services. You can use the solver when playing with others.

http://ec2-54-173-249-238.compute-1.amazonaws.com/

An easier-to-remember link is http://tiny.cc/theresistance1.

If you want to run Monte Carlo simulations, you will have to download the code and run it locally, however.

#### Implementation

https://github.com/leepike/theresistance

#### Update (Feb 14, 2016)

It has been pointed out by Eike Schulte in the comments, and by Iavor Diatchki, that by including some additional information, the strategy might be improved. This is indeed the case. The intuition is that if a group has previously included a spy, that group should not be selected again, even if it is the lowest-probability group. For example, with five players, consider the following rounds:

- Round 0: players [0,1] are selected and there is one fail card.
- Round 1: players [2,3,4] are selected and there is one fail card.
- Round 2: players [2,3] are selected and there are no fail cards.

At this point, players [0,1] each have a 0.5 probability of being a spy and players [2,3,4] each have a 1/3 probability. So in round 3, we do not want to select players [2,3,4] even though they have the lowest probabilities. Instead, we select the group with the lowest spy probability that has not already included a spy. The server has been updated to include this strategy. The strategy does better; for example, at 6 players, we have just over a 50% chance of winning!

### FP Complete developers survey

### Reconsidering -Wall and -Wcompat

### Dominic Steinitz: The Lie Derivative

In Proposition 58 of Chapter 1 of the excellent book O’Neill (1983), the author demonstrates that the Lie derivative of one vector field with respect to another is the same as the Lie bracket (of the two vector fields), although he calls the Lie bracket just the bracket and does not define the Lie derivative, preferring just to use its definition without giving it a name. The proof relies on a prior result where he shows that, for a vector field $V$ with $V_p \neq 0$, a coordinate system can be chosen at $p$ so that $V = \partial/\partial x^1$.

Here’s a proof that seems clearer (to me at any rate) and avoids having to distinguish the cases where the vector field is zero or non-zero. These notes give a similar proof but, strangely for undergraduate level, elide some of the details.

#### A Few Definitions

Let $\phi : M \rightarrow N$ be a smooth mapping and let $A$ be a covariant tensor of order $k \geq 1$ on $N$; then define the **pullback** of $A$ by $\phi$ to be

$$(\phi^* A)(v_1, \ldots, v_k) = A(\mathrm{d}\phi(v_1), \ldots, \mathrm{d}\phi(v_k))$$

For a tensor of order $0$, that is, a function $f$, the **pullback** is defined to be $\phi^* f = f \circ \phi$.

Standard manipulations show that $\phi^* A$ is a smooth (covariant) tensor field and that $\phi^*$ is $\mathbb{R}$-linear and that $(\psi \circ \phi)^* = \phi^* \circ \psi^*$.

Let $\phi : M \rightarrow N$ be a *diffeomorphism* and $Y$ a vector field on $N$; we define the **pullback** of this field to be

$$\phi^* Y = (\mathrm{d}\phi)^{-1} \circ Y \circ \phi$$

Note that the pullback of a vector field only exists in the case where $\phi$ is a diffeomorphism; in contradistinction, in the case of pullbacks of purely covariant tensors, the pullback always exists.

For the proof below, we only need the pullback of functions and vector fields; the pullback for tensors of order $k \geq 1$ is purely to give a bit of context.

From O’Neill (1983), Chapter 1, Definition 20: let $\phi : M \rightarrow N$ be a smooth mapping. Vector fields $X$ on $M$ and $Y$ on $N$ are $\phi$-**related**, written $X \sim_\phi Y$, if and only if $\mathrm{d}\phi \circ X = Y \circ \phi$.

By Lemma 21, Chapter 1 of O’Neill (1983), $X$ and $Y$ are $\phi$-related if and only if $X(f \circ \phi) = (Yf) \circ \phi$ for every smooth function $f$ on $N$.

Recalling that $\phi^* Y = (\mathrm{d}\phi)^{-1} \circ Y \circ \phi$ and since

$$\mathrm{d}\phi \circ (\mathrm{d}\phi)^{-1} \circ Y \circ \phi = Y \circ \phi$$

we see that the fields $\phi^* Y$ and $Y$ are $\phi$-related: $\phi^* Y \sim_\phi Y$. Thus we can apply the Lemma.

Although we don’t need this, we can express the immediately above equivalence in a way similar to the rule for covariant tensors:

$$(\phi^* Y)(\phi^* f) = \phi^*(Yf)$$

First let’s calculate the Lie derivative of a function $f$ with respect to a vector field $V$, where $\phi_t$ is its flow:

$$\mathcal{L}_V f \triangleq \lim_{t \rightarrow 0} \frac{\phi_t^* f - f}{t} = \lim_{t \rightarrow 0} \frac{f \circ \phi_t - f}{t} = Vf$$

Analogously defining the Lie derivative of $W$ with respect to $V$

$$\mathcal{L}_V W \triangleq \lim_{t \rightarrow 0} \frac{\phi_t^* W - W}{t}$$

we have

$$(\mathcal{L}_V W)f = \lim_{t \rightarrow 0} \frac{(\phi_t^* W)f - Wf}{t}$$

Since $\phi_t^* W \sim_{\phi_t} W$, we have $(\phi_t^* W)(f \circ \phi_t) = (Wf) \circ \phi_t$, and substituting $f \circ \phi_{-t}$ for $f$ gives

$$(\phi_t^* W)f = \left(W(f \circ \phi_{-t})\right) \circ \phi_t$$

Thus

$$\begin{aligned}
(\mathcal{L}_V W)f
&= \lim_{t \rightarrow 0} \frac{\left(W(f \circ \phi_{-t})\right) \circ \phi_t - Wf}{t} \\
&= \lim_{t \rightarrow 0} \left(W\!\left(\frac{f \circ \phi_{-t} - f}{t}\right)\right) \circ \phi_t
 + \lim_{t \rightarrow 0} \frac{(Wf) \circ \phi_t - Wf}{t} \\
&= -W(Vf) + V(Wf) \\
&= [V, W]f
\end{aligned}$$

as required.
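As a sanity check (my own example, not from the post), take $V = \partial_x$ on $\mathbb{R}^2$, whose flow is $\phi_t(x, y) = (x + t, y)$, and $W = x\,\partial_y$. Since the flow is a translation, $\mathrm{d}\phi_{-t}$ acts as the identity on the coordinate vector fields, and

```latex
\phi_t^* W \big|_{(x,y)} = (x + t)\,\partial_y
\qquad\Longrightarrow\qquad
\mathcal{L}_V W
  = \lim_{t \rightarrow 0} \frac{\phi_t^* W - W}{t}
  = \partial_y
% agreeing with the bracket:
[V, W]f = \partial_x(x\,\partial_y f) - x\,\partial_y(\partial_x f) = \partial_y f
```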

#### Bibliography

O’Neill, B. 1983. *Semi-Riemannian Geometry with Applications to Relativity*. Pure and Applied Mathematics 103. Elsevier Science. https://books.google.com.au/books?id=CGk1eRSjFIIC.

### Process proposal: Require explicit user-oriented timelines in library proposals

### LambdaCube: DSL in the wild

After a few months of radio silence, the first public version of the new LambdaCube 3D DSL is finally available on Hackage. We have also updated our website at the same time, so if you want to get your hands dirty, you can head over to our little Getting Started Guide right away. The rest of this post will provide some context for this release.

The summer tour was a fun but exhausting experience, and we needed a few weeks of rest afterwards. This paid off nicely, as development continued with renewed energy in the autumn, and we’ve managed to keep up the same pace ever since. The past few months have been quite eventful!

First of all, our team has a new member: Andor Pénzes. Andor took it upon himself to improve the infrastructure of the project, which was sorely needed as there was no manpower left to do it before. In particular, this means that we finally have continuous integration set up with Travis, and LambdaCube 3D can also be built into a Docker image.

It is also worth noting that this release is actually the second version of the DSL. The sole purpose of the first version was to explore the design space and learn about the trade-offs of various approaches in implementing a Haskell-like language from scratch given our special requirements. It would be impossible to list all the changes we made, but there are a few highlights we’d like to point out:

- The speed of reduction is greatly improved.
- Reduction is based on partial evaluation.
- We have a much more expressive type system with a faster inference algorithm.
- Pattern match compilation is based on new research.

We had an all-team meeting in December and after some discussion we came up with a detailed roadmap (disclaimer: this is a living internal document) for the first half of 2016. Without the gory details, this is what you should expect in the coming months:

- A new release is planned for every 2-3 weeks. In the current roadmap, every release would bring improvements across several areas, e.g. compiler internals, language features, editor usability, backend performance, new target platforms.
- We have explicitly left some time for improving documentation (guides and references) and keeping it up-to-date with the releases.
- As a feature milestone, we’d like to get to a point where it’s possible to write a small game for a mobile platform by the summer (we already have a working iOS example, but it’s far from production ready).

Everything said, this is an early release intended for a limited audience. If you happen to be an adventurous Haskell programmer interested in computer graphics – especially the realtime kind – and its applications, this might be a good time for you to try LambdaCube 3D. Everyone else is welcome, of course, but you’re on your own for the time being. In any case, we’re happy to receive any kind of feedback.