News aggregator

Worked up examples of Simon Marlow's code hot-swapping

haskell-cafe - Tue, 02/16/2016 - 9:32pm Mentioned here: Are there any out there? Most stuff I've found for this is Plugins or something else. I suggested on Twitter it might be similar to the dynamic linking trick game devs use but I don't really know anything about what Simon specifically did. If anyone could please point to examples or deeper explanation that would be a great help, thank you. --- Chris Allen _______________________________________________ Haskell-Cafe mailing list Haskell-Cafe< at >
Categories: Offsite Discussion

[Feedback requested]: -fhelpful-import-errors

glasgow-user - Tue, 02/16/2016 - 7:12pm
Hi, I raised a ticket to request a new feature: -fhelpful-import-errors. This flag should enable helpful errors if there are typos in the internal imports of your projects. As suggested by `thomie`, I created a design proposal at and am now looking for feedback. Thank you for your time.
Categories: Offsite Discussion

[TFPIE 2016] 1st call for papers

haskell-cafe - Tue, 02/16/2016 - 5:36pm
Trends in Functional Programming in Education (TFPIE 2016) Call for papers The 5th International Workshop on Trends in Functional Programming in Education, TFPIE 2016, will be held on June 7, 2016 at the University of Maryland College Park in the USA. It is co-located with the Symposium on Trends in Functional Programming (TFP 2016) which takes place from June 8 - 10. *** Goal *** The goal of TFPIE is to gather researchers, teachers and professionals that use, or are interested in the use of, functional programming in education. TFPIE aims to be a venue where novel ideas, classroom-tested ideas and work-in-progress on the use of functional programming in education are discussed. The one-day workshop will foster a spirit of open discussion by having a review process for publication after the workshop. The program chair of TFPIE 2016 will screen submissions to ensure that all presentations are within scope and are of
Categories: Offsite Discussion

[TFPIE 2016] 1st call for papers

General haskell list - Tue, 02/16/2016 - 5:35pm
Trends in Functional Programming in Education (TFPIE 2016) Call for papers The 5th International Workshop on Trends in Functional Programming in Education, TFPIE 2016, will be held on June 7, 2016 at the University of Maryland College Park in the USA. It is co-located with the Symposium on Trends in Functional Programming (TFP 2016) which takes place from June 8 - 10. *** Goal *** The goal of TFPIE is to gather researchers, teachers and professionals that use, or are interested in the use of, functional programming in education. TFPIE aims to be a venue where novel ideas, classroom-tested ideas and work-in-progress on the use of functional programming in education are discussed. The one-day workshop will foster a spirit of open discussion by having a review process for publication after the workshop. The program chair of TFPIE 2016 will screen submissions to ensure that all presentations are within scope and are of
Categories: Incoming News

ghc commit and ticket mails

haskell-cafe - Tue, 02/16/2016 - 2:53pm
Hi cafe, I'd like to share some information about GHC development. The GHC developers, many of them volunteers, do an impressive amount of work; you can follow and learn from their activities [1][2]. (You can also subscribe to the mailing lists [3][4].) [1] [2] [3] [4] Regards, Takenobu
Categories: Offsite Discussion

Generalising Categories - categories of tuples

haskell-cafe - Tue, 02/16/2016 - 7:21am
Hi All I've been doing some work on generalising categories, basically so the composition operator (.) works not just for functions, but for things like tuples of functions, whilst still retaining type inference. Control.Category is a bit too restrictive for such instances. You can see the details in my blog post here: Any comments appreciated, I hope people find it interesting. Also, a quick plug: I'm looking for work, so if there's any Haskell (and/or functional programming) work going around Sydney, Australia, I'd be interested. :-) If people really think this is useful, tell me, and I'll try to put it up as a package. Alternatively, if it's already been done, I'd appreciate a pointer in that direction. Regards, Clinton Mead
Categories: Offsite Discussion

problem with GHC bug wiki

haskell-cafe - Mon, 02/15/2016 - 4:29pm
Can anyone tell me how to contact the keepers of the GHC bug wiki about a problem with the wiki itself--not a GHC ticket? Thanks in advance, Doug
Categories: Offsite Discussion

Catch-22 at GHC bug wiki.

haskell-cafe - Mon, 02/15/2016 - 4:26pm
Having forgotten my login name, I have had to register at the GHC bug wiki under a different name with a different email address. I see no way for me to clean up the mess. Can anyone tell me how to reach the wiki keepers to get help?
Categories: Offsite Discussion

Gabriel Gonzalez: How to build library-agnostic streaming sources

Planet Haskell - Mon, 02/15/2016 - 9:33am

The Haskell ecosystem has numerous libraries for effectful stream programming, including, but not limited to:

  • List
  • conduit
  • enumerator
  • io-streams
  • iterIO
  • iteratee
  • list-t
  • logict
  • machines
  • pipes

Unfortunately, this poses a problem for library writers. Which streaming library should they pick when providing effectful streaming components?

Sometimes the correct answer is: none of the above! We can often build streaming and effectful generators using only the base and transformers libraries!

The trick is to build polymorphic generators using only the MonadPlus and MonadTrans type classes. These generators can then be consumed by any library that provides an implementation of ListT that implements MonadPlus and MonadTrans.


I like to think of MonadPlus as the "list building" type class. This is because you can assemble a list using return, mzero, and mplus:

>>> import Control.Monad (MonadPlus(..))
>>> mzero :: [Int]
[]
>>> return 1 :: [Int]
[1]
>>> return 1 `mplus` return 2 :: [Int] -- [1] ++ [2]
[1,2]

In other words, mzero is analogous to [], mplus is analogous to (++), and return builds a singleton list.

However, many things other than lists implement MonadPlus, including every implementation of ListT in the wild. Therefore, if we build collections using MonadPlus operations these collections will type-check as ListT streams as well.
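To make this concrete, here is a minimal sketch (base only; `pair` is a name invented for this example) showing one MonadPlus-built value consumed at two different instance types:

```haskell
import Control.Monad (MonadPlus(..))

-- Built from `return` and `mplus` alone, so it commits to no container:
-- it inhabits [], Maybe, and any ListT that implements MonadPlus.
pair :: MonadPlus m => m Int
pair = return 1 `mplus` return 2

main :: IO ()
main = do
    print (pair :: [Int])      -- the list instance: [1,2]
    print (pair :: Maybe Int)  -- Maybe keeps the first success: Just 1
```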

Let's provide the following helper function to convert a list to this more general MonadPlus type:

select :: MonadPlus m => [a] -> m a
select [] = mzero
select (x:xs) = return x `mplus` select xs

-- or: select = foldr (\x m -> return x `mplus` m) mzero

Note that this select function has some nice mathematical properties:

select (xs `mplus` ys) = (select xs) `mplus` (select ys)
select mzero = mzero

-- This assumes the distributivity law for `MonadPlus`
select . (f >=> g) = (select . f) >=> (select . g)
select . return = return
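At the list instance, these equations can be spot-checked in a few lines; this is only a sanity check on examples, not a proof:

```haskell
import Control.Monad (MonadPlus(..))

select :: MonadPlus m => [a] -> m a
select = foldr (\x m -> return x `mplus` m) mzero

main :: IO ()
main = do
    -- select distributes over mplus ((++) at the list type)
    print ((select ([1, 2] `mplus` [3]) :: [Int]) == select [1, 2] `mplus` select [3])
    -- select maps mzero to mzero
    print ((select (mzero :: [Int]) :: [Int]) == mzero)
    -- select . return = return
    print ((select (return 5 :: [Int]) :: [Int]) == return 5)
```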

Using select and liftIO (from MonadIO), we can build list comprehensions that interleave effects, like this:

example :: (MonadIO m, MonadPlus m) => m ()
example = do
    x <- select [1, 2, 3]
    liftIO (putStrLn ("x = " ++ show x))
    y <- select [4, 5, 6]
    liftIO (putStrLn ("y = " ++ show y))

You can read the above code as saying:

  • Let x range from 1 to 3
  • Print x every time it selects a new value
  • For each value of x, let y range from 4 to 6
  • Print y every time it selects a new value

Notice how example doesn't depend on any particular streaming library, so we can run it with diverse ListT implementations, all of which implement MonadPlus and MonadIO:

>>> Pipes.runListT example -- This requires `pipes-4.1.4`
x = 1
y = 4
y = 5
y = 6
x = 2
y = 4
y = 5
y = 6
x = 3
y = 4
y = 5
y = 6
>>> _ <- Control.Monad.Logic.observeAllT example
<Exact same output>
>>> _ <- ListT.toList example
<Exact same output>

However, we can use this trick for more than just list comprehensions. We can build arbitrary lazy and effectful streams this way!


Here's an example of a generator that lazily emits lines read from standard input:

import Control.Monad (MonadPlus(..))
import Control.Monad.IO.Class (MonadIO(..))
import System.IO (isEOF)

stdinLn :: (MonadIO m, MonadPlus m) => m String
stdinLn = do
    eof <- liftIO isEOF
    if eof
        then mzero
        else liftIO getLine `mplus` stdinLn

You can read the above code as saying:

  • Check if we are at the end of the file
  • If we're done, then return an empty list
  • If we're not done, prepend a getLine onto a recursive call to stdinLn

We can prove this works by writing a program that forwards these lines to standard output:

echo :: (MonadIO m, MonadPlus m) => m ()
echo = do
    str <- stdinLn
    liftIO (putStrLn str)

Now we can run echo using any of our favorite ListT implementations and it will do the right thing, streaming lines lazily from standard input to standard output in constant space:

>>> Pipes.runListT echo

The exception is the transformers library, whose ListT implementation does not stream or run in constant space.


We can also implement lazy variations on Control.Monad combinators using this interface.

For example, we can implement a lazy variation on replicateM using just select and replicate:

replicateM' :: MonadPlus m => Int -> m a -> m a
replicateM' n m = do
    m' <- select (replicate n m)
    m'

-- or: replicateM' n = join . select . replicate n

We can use this lazy replicateM' to lazily echo 10 lines from standard input to standard output:

example :: (MonadIO m, MonadPlus m) => m ()
example = do
    str <- replicateM' 10 (liftIO getLine)
    liftIO (putStrLn str)

We can implement lazy variants of mapM and forM too, except now their implementations are so trivial that they don't even deserve their own functions:

mapM' :: Monad m => (a -> m b) -> m a -> m b
mapM' = (=<<)

forM' :: Monad m => m a -> (a -> m b) -> m b
forM' = (>>=)

example :: (MonadIO m, MonadPlus m) => m ()
example = mapM' (liftIO . print) (replicateM' 10 (liftIO getLine))

Similarly, a lazy sequence just becomes join.
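A quick check of these combinators at the list instance (standing in for any ListT); `sequence'` is a name invented for this sketch:

```haskell
import Control.Monad (MonadPlus(..), join)

select :: MonadPlus m => [a] -> m a
select = foldr (\x m -> return x `mplus` m) mzero

-- The lazy sequence is just join.
sequence' :: Monad m => m (m a) -> m a
sequence' = join

main :: IO ()
main = do
    -- mapM' f = (f =<<); at the list type this is concatMap
    print (((\x -> select [x, x * 10]) =<< select [1, 2]) :: [Int])
    -- sequence' flattens a stream of streams
    print ((sequence' (select [select [1, 2], select [3]])) :: [Int])
```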


The following streaming libraries already provide their own implementation of ListT compatible with the above trick:

  • List
  • list-t
  • LogicT
  • pipes

The other streaming libraries do not currently provide a ListT implementation, but there is no reason why they couldn't! For example, conduit could implement an internal ListT type of its own and use that as an intermediate type for converting the above abstraction to the public Source type:

convert :: Monad m
=> Data.Conduit.Internal.ListT m a -> Source m a

This polymorphic API obviously does not capture all possible streaming patterns. For example, you can not implement something like take using this API. However, I suspect that a significant number of streaming components can be written using this dependency-free interface.

Edit: Thank you to Twan van Laarhoven, who pointed out that you can sometimes use MonadIO instead of MonadTrans, which produces nicer constraints. I updated this article to incorporate his suggestion.

Categories: Offsite Blogs

LANGUAGE ConstraintKinds not needed to use ConstraintKinds?

glasgow-user - Mon, 02/15/2016 - 2:32am
I recently upgraded to GHC 8 and started using call stacks via ImplicitParams. For that I wind up using 'type Stack = (?stack :: CallStack)' and so ConstraintKinds (I see that in the future GHC will do this by default). So now I can have a file like: module T where import qualified Log as Log f :: Log.Stack => IO () f = Log.warn "blah blah" I noticed that haskell-src-exts now refuses to parse this file, saying 'MultiParamTypeClasses language extension is not enabled.'. I assume it's a bug in haskell-src-exts in that it should require LANGUAGE ConstraintKinds instead, but then GHC itself doesn't want ConstraintKinds. Instead, it wants FlexibleContexts. From the docs, FlexibleContexts seems to be about the contexts in instance heads. Is this intentional? I'll go ahead and file a bug for haskell-src-exts, but the GHC behaviour here seems odd as well. What extension should haskell-src-exts require to parse this?
Categories: Offsite Discussion

Lee Pike: Viva La Resistance! A Resistance Game Solver

Planet Haskell - Sun, 02/14/2016 - 9:54pm

Update (Feb 14, 2016): see bottom for an improved strategy.

The Game

At a December workshop, I played The Resistance, a game in which there are two teams, the resistance and the spies. The spies know everyone’s identity; each resistance member knows only their own. Overall, the goal of the spies is to remain undetected; the goal of the resistance is to discover who the spies are.

Play proceeds in rounds in which a player nominates a subgroup to go on a “mission”. The nomination is then voted on. If the vote succeeds, every member of the mission plays either a “success” or “failure” card for the mission. One or two failure cards (depending on the mission size) cause the mission to fail. The cards played in a mission are public, but who played which card is secret. (If the vote fails, the next player nominates a subgroup.)

The spies’ goal is to fail missions without being detected, and the resistance goal is to have missions succeed. So the spies generally wish to play failure cards while in a mission. Furthermore, spies always want some spy in the mission to spoil it. The resistance wants no spies to go on missions. The problem for the resistance is that when a mission fails, they know one or more of the subgroup is a spy, they just don’t know which one.

The spies win the game if they can cause three missions to fail before the resistance can cause three missions to succeed.
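To illustrate the failure rule, a tiny sketch in Haskell (the names here are invented for this example, not from the post):

```haskell
data Card = Success | Failure deriving (Eq, Show)

-- A mission fails when the number of failure cards played reaches the
-- mission's threshold (one for most missions, two for certain larger ones).
missionFails :: Int -> [Card] -> Bool
missionFails threshold cards =
    length (filter (== Failure) cards) >= threshold

main :: IO ()
main = do
    print (missionFails 1 [Success, Failure, Success])  -- True
    print (missionFails 2 [Success, Failure, Success])  -- False
```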

The game is simple but engaging, and similar in spirit to the game Mafia (Werewolf).

The Problem

I played the game with computer scientists from the top universities and research labs. We debated strategy and were generally pretty bad at the game, despite having a lot of fun. In particular, the spies seemed to win a lot.

The problem is: what is the optimal approach to game play?

The Approach

The problem is a great fit for Bayesian analysis. In each round, we learn partial information about the players from the group outcome. Only spies play failure cards. We can use that information to update our belief in the probability that a player is a spy.

Suppose there are four players, a, b, c and d, and two spies. Initially each player has the same probability of being a spy, 2/4 = 0.5. Now suppose that a, b and c go on a mission, and return the set of cards {fail, fail, success}. How do we update the spy probabilities of the group?

Bayes’ Theorem states that

P(A|B) = P(B|A) · P(A) / P(B)

In our case, “A” is the event that a particular player is a spy, and “B” is the event that we observed a particular set of mission cards. We wish to compute P(A|B), the updated probability that a player is a spy given the cards played in the mission.

So for each mission, we apply Bayes’ Theorem to each player, including the players not in the mission—if the spy probabilities increase (or decrease) for the mission players, then they decrease (or increase) for the non-mission players.

From the cards {fail, fail, success}, we know that two of a, b, c are spies (and so d is definitely not a spy, since there are two spies, total). Let’s compute the updated spy probability for player a.

P(A) = 0.5, the original spy probability for player a (or any other player). To calculate P(B), we first determine every possible assignment of players in the mission to spies and non-spies (there are 2³ = 8 such assignments, of which three contain exactly two spies). In the mission, there are three possibilities:

  1. spies a and b
  2. spies a and c
  3. spies b and c

For each combination, we multiply the probabilities for the assignments. So in case (1), we have

0.5 × 0.5 × (1 − 0.5) = 0.125

for assigning players a and b to being spies and c to being a non-spy. Then we sum the probabilities for all three combinations. In our example, we get

0.125 + 0.125 + 0.125 = 0.375

So P(B) = 0.375.

To compute P(B|A), we assume that player a is a spy, and now recompute the probability that {b, c} contains the remaining one spy. Using the same approach as for computing P(B), we get P(B|A) = 0.5. Now we can apply Bayes’ Theorem:

P(A|B) = P(B|A) · P(A) / P(B) = (0.5 × 0.5) / 0.375 = 2/3 ≈ 0.66

So player a’s probability of being a spy shot up to 0.66, and so did player b’s and c’s.

Player d’s spy probability drops to 0. The updated spy probabilities for any player not in the mission can be computed just as we did for the mission players, except we take the total number of spies and subtract the number of failure cards observed. In this case, however, since we know all the spies were in the mission group, d’s spy probability must be 0. (Another way of thinking about it is that there is an invariant: the sum of the players’ spy probabilities must always equal the total number of spies in the game.)
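The worked example can be checked mechanically. The sketch below (names invented for this illustration) enumerates spy/non-spy assignments for a mission under a uniform, independent prior p and applies Bayes' Theorem for a mission player:

```haskell
import Data.List (subsequences)

-- P(exactly k spies among n mission players), each independently a spy
-- with prior probability p: sum over all k-subsets of the mission.
pCards :: Double -> Int -> Int -> Double
pCards p n k =
    sum [ p ^ length s * (1 - p) ^ (n - length s)
        | s <- subsequences [1 .. n], length s == k ]

-- Bayes' Theorem, P(A|B) = P(B|A) * P(A) / P(B), for one mission player.
posterior :: Double -> Int -> Int -> Double
posterior p n k = pBgivenA * p / pB
  where
    pB       = pCards p n k             -- P(B): k spies among the n players
    pBgivenA = pCards p (n - 1) (k - 1) -- P(B|A): the player is a spy, so
                                        -- k-1 spies remain among the rest

main :: IO ()
main = print (posterior 0.5 3 2)  -- the worked example: 2/3
```

This reproduces P(B) = 0.375, P(B|A) = 0.5 and the 2/3 posterior from the text.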

The Strategies

How should the game be played by the resistance and spies, respectively, to increase the odds of winning?

For the resistance, picking the players the least likely to be spies is optimal. Let’s call this the pick-lowest strategy.

One possible optimization is to always pick yourself for a mission. The rationale is that you know whether you are part of the resistance, so if you pick yourself for a mission, you have at least one guaranteed non-spy. Even if the group's estimate of your spy probability is higher than others', you have perfect knowledge of yourself. Let's call this the pick-myself strategy.

For the spies, there are a few options. By default, the spies could always play a failure card (fail-always). But a spy might also play a success card to avoid detection; doing so is especially advantageous in the early rounds; one strategy is to never fail the first round (succeed-first). If the spies can collude to ensure only one plays a failure card during a mission, that provides the least amount of information to the resistance (fail-one).

There are other strategies and other combinations of strategies, but this is a good representative sample.

Which are the best strategies?


To discover the best strategies, we run Monte-Carlo simulations of the strategies for each number of players (with different numbers of players, there are different numbers of spies and different mission sizes). I found a few interesting results:

  • The best chance of winning for the resistance, and the closest odds between the resistance and the spies, is with six players. At six players, the resistance has a 30-40% chance of winning under different strategies. The worst configuration is eight players, with no more than a 14% chance of winning for the resistance. During actual game play, it did seem that the odds favored the spies.
  • The best strategy for the resistance is the pick-lowest strategy. This may be counter-intuitive, but consider: the pick-myself strategy gives the spies an opportunity to always include themselves in a mission when nominating one. The pick-myself strategy is an instance of a local maximum (i.e., exploiting the local knowledge that you yourself are part of the resistance) that is non-optimal.
  • Moreover, voting becomes a no-op. The game includes a voting round in which players vote on the proposed players for a mission. But if the resistance agrees on an optimal strategy, any deviation from that strategy by a player can only mean the player is a spy. If a player does deviate, the resistance votes against the proposal (and the resistance outnumbers the spies, so it will win the vote), and we now have complete assurance that the proposer is a spy. The spies have no choice but to follow the optimal strategy of the resistance.
  • Of the spy strategies listed above, the best is the fail-one strategy, which is intuitive. The succeed-first strategy is another example of a non-optimal local maximum; while it protects that particular spy from detection, it is more valuable for the spies in general to fail the mission.

The related Mafia game has some analytical results, giving bounds on the proportion of (their version of) the resistance to spies. I have not done that analysis, nor have I run Monte-Carlo simulations to determine what proportion of resistance players to spies, and what mission sizes, give a more even chance of winning. For Mafia, it is noted that in actual game play the resistance wins more often than simulations/analytical analysis would suggest, with various explanations offered (e.g., people are bad at lying over iterated rounds).

Play Along at Home

I have implemented the Bayesian solver in a webserver hosted on Amazon Web Services. You can use the solver when playing with others.

An easier-to-remember link is

If you want to run Monte Carlo simulations, you will have to download the code and run it locally, however.


Update (Feb 14, 2016)

It has been pointed out by Eike Schulte in the comments and by Iavor Diatchki that by including some additional information, the strategy might be improved. This is indeed the case. The intuition is that if a group has previously included a spy, that group should not be selected again, even if it is the lowest-probability group. For example, with five players, consider the following rounds:

  • Round 0: players [0,1] are selected and there is one fail card.
  • Round 1: players [2,3,4] are selected and there is one fail card.
  • Round 2: players [2,3] are selected and there are no fail cards.

At this point, players [0,1] have a 0.5 probability of being a spy and players [2,3,4] have a 1/3 probability. So in round 3, we do not want to select players [2,3,4] even if they have the lowest probabilities. So we select a group with the lowest spy probability that has not already included a spy. The server has been updated to include this strategy. The strategy does better; for example, at 6 players, we have just over a 50% chance of winning!

Categories: Offsite Blogs

FP Complete developers survey

haskell-cafe - Sun, 02/14/2016 - 9:33pm
Last year FP Complete ran a large survey of Haskell developers. I can only find a synopsis here but my memory tells me they also had a blog post with a more in-depth analysis of the numbers. I can't find it on their website. Does anyone know where it is, or am I imagining it? Tom
Categories: Offsite Discussion

Reconsidering -Wall and -Wcompat

glasgow-user - Sun, 02/14/2016 - 6:12pm
tl;dr. GHC has a new set of warnings, -Wcompat, intended to give users advance notice of coming library changes. We want to know whether you think this set should be included in -Wall. See the Wiki [4] and voice your opinion via the linked poll. Hello everyone, GHC 8.0.1 will include a new warning group, -Wcompat, which arose out of the MonadFail proposal discussion [1] late last year. This warning set is intended to provide a means of informing users of coming changes in GHC's core libraries. We would like to solicit the community's feedback on whether this new flag set should be implied by -Wall. This proposal is motivated by concern expressed by some that -Wcompat would see little usage unless it is placed in one of the warning sets typically used during development. One such set is -Wall, which enables a generous fraction of GHC's warning collection and is intended [2] for use during development. Unfortunately, despite the (albeit only recently stated) intent of the flag, -Wall i
Categories: Offsite Discussion

Dominic Steinitz: The Lie Derivative

Planet Haskell - Sun, 02/14/2016 - 1:25am

In proposition 58 of Chapter 1 of the excellent book O’Neill (1983), the author demonstrates that the Lie derivative of one vector field with respect to another is the same as the Lie bracket (of the two vector fields), although he calls the Lie bracket just the bracket and does not define the Lie derivative, preferring to use its definition without giving it a name. The proof relies on a prior result where he shows that, for a vector field X with X_p ≠ 0, a co-ordinate system can be given at the point p so that X = ∂/∂x₁.

Here’s a proof that seems clearer (to me at any rate) and avoids having to distinguish the cases where the vector field is zero or non-zero. These notes give a similar proof but, strangely for undergraduate level, elide some of the details.

A Few Definitions

Let φ : M → N be a smooth mapping and let A be a covariant tensor of order s ≥ 1 on N; then define the pullback of A by φ to be

(φ*A)(v₁, …, vₛ) = A(dφ v₁, …, dφ vₛ)

For a tensor of order 0, that is, a smooth function f on N, the pullback is defined to be φ*f = f ∘ φ.

Standard manipulations show that φ*A is a smooth (covariant) tensor field, that φ* is ℝ-linear, and that (ψ ∘ φ)* = φ* ∘ ψ*.

Let φ : M → N be a diffeomorphism and Y a vector field on N; we define the pullback of this field to be

φ*Y = dφ⁻¹ ∘ Y ∘ φ

Note that the pullback of a vector field only exists in the case where φ is a diffeomorphism; in contradistinction, in the case of pullbacks of purely covariant tensors, the pullback always exists.

For the proof below, we only need the pullback of functions and vector fields; the pullback for tensors of order s ≥ 1 is given purely to provide a bit of context.

From O’Neill (1983) Chapter 1 Definition 20, let φ : M → N be a smooth mapping. Vector fields X on M and Y on N are φ-related, written X ∼_φ Y, if and only if dφ ∘ X = Y ∘ φ.

The Alternative Proof

By Lemma 21 Chapter 1 of O’Neill (1983), X and Y are φ-related if and only if X(f ∘ φ) = (Yf) ∘ φ for all smooth functions f.

Recalling that φₜ⁻¹ = φ₋ₜ and since

φₜ*Y = dφₜ⁻¹ ∘ Y ∘ φₜ = dφ₋ₜ ∘ Y ∘ φₜ

we see that the fields Y and φₜ*Y are φ₋ₜ-related: Y ∼_{φ₋ₜ} φₜ*Y. Thus we can apply the Lemma.

Although we don’t need this, we can express the immediately above equivalence in a way similar to the rule for covariant tensors: (φₜ*Y)(φₜ*f) = φₜ*(Yf).

First let’s calculate the Lie derivative of a function f with respect to a vector field X, where φₜ is its flow:

L_X f = d/dt|_{t=0} (φₜ*f) = d/dt|_{t=0} (f ∘ φₜ) = Xf

Analogously defining the Lie derivative of Y with respect to X

L_X Y = d/dt|_{t=0} (φₜ*Y)

we have

(L_X Y)f = d/dt|_{t=0} ((φₜ*Y)f)

Since (φₜ*Y)f = Y(f ∘ φ₋ₜ) ∘ φₜ we have

(L_X Y)f = d/dt|_{t=0} (Y(f ∘ φ₋ₜ) ∘ φₜ) = X(Yf) − Y(Xf) = [X, Y]f

as required.
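The final computation can be typeset compactly; the middle step splits the t-derivative over the two places t appears (this is a reconstruction of the standard argument, with φₜ the flow of X):

```latex
\begin{align*}
(L_X Y)f
  &= \left.\tfrac{d}{dt}\right|_{t=0} \bigl((\phi_t^{*} Y)f\bigr) \\
  &= \left.\tfrac{d}{dt}\right|_{t=0} \bigl(Y(f \circ \phi_{-t}) \circ \phi_t\bigr) \\
  &= Y\Bigl(\left.\tfrac{d}{dt}\right|_{t=0}(f \circ \phi_{-t})\Bigr) + X(Yf) \\
  &= -Y(Xf) + X(Yf) = [X, Y]f
\end{align*}
```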


O’Neill, B. 1983. Semi-Riemannian Geometry with Applications to Relativity. Pure and Applied Mathematics 103. Elsevier Science.

Categories: Offsite Blogs

Process proposal: Require explicit user-oriented timelines in library proposals

libraries list - Sat, 02/13/2016 - 4:11pm
tl;dr. I think proposals should include a user-oriented timeline, e.g. [2] Hello everyone, Recently I've been doing some thinking about library change roadmap and in particular our process for considering changes. While this process has without question improved remarkably in the last few years, I think there is still some room for improvement in communicating future plans, in particular to users. From the user's perspective there are three important points in time (let's call them milestones) associated with a library proposal (using MonadFail as an example), A. When can I start asking for warnings? This is the time when we add warnings notifying users of the coming change to -Wcompat (e.g. this is 8.0 in the case of MonadFail) B. When can I start conveniently acting upon these warnings? This is the point where enough time has passed that the user can take action on the warning in a manner consistent with the three-release policy (e.g. 8.4 in the case of MonadFail?)
Categories: Offsite Discussion

LambdaCube: DSL in the wild

Planet Haskell - Sat, 02/13/2016 - 2:11pm

After a few months of radio silence, the first public version of the new LambdaCube 3D DSL is finally available on Hackage. We have also updated our website at the same time, so if you want to get your hands dirty, you can head over to our little Getting Started Guide right away. The rest of this post will provide some context for this release.

The summer tour was a fun but exhausting experience, and we needed a few weeks of rest afterwards. This paid off nicely, as development continued with renewed energy in the autumn, and we’ve managed to keep up the same pace ever since. The past few months have been quite eventful!

First of all, our team has a new member: Andor Pénzes. Andor took it upon himself to improve the infrastructure of the project, which was sorely needed as there was no manpower left to do it before. In particular, this means that we finally have continuous integration set up with Travis, and LambdaCube 3D can also be built into a Docker image.

It is also worth noting that this release is actually the second version of the DSL. The sole purpose of the first version was to explore the design space and learn about the trade-offs of various approaches in implementing a Haskell-like language from scratch given our special requirements. It would be impossible to list all the changes we made, but there are a few highlights we’d like to point out:

  • The speed of reduction is greatly improved.
  • Reduction is based on partial evaluation.
  • We have a much more expressive type system with a faster inference algorithm.
  • Pattern match compilation is based on new research.

We had an all-team meeting in December and after some discussion we came up with a detailed roadmap (disclaimer: this is a living internal document) for the first half of 2016. Without the gory details, this is what you should expect in the coming months:

  • A new release is planned for every 2-3 weeks. In the current roadmap, every release would bring improvements across several areas, e.g. compiler internals, language features, editor usability, backend performance, new target platforms.
  • We have explicitly left some time for improving documentation (guides and references) and keeping it up-to-date with the releases.
  • As a feature milestone, we’d like to get to a point where it’s possible to write a small game for a mobile platform by the summer (we already have a working iOS example, but it’s far from production ready).

Everything said, this is an early release intended for a limited audience. If you happen to be an adventurous Haskell programmer interested in computer graphics – especially the realtime kind – and its applications, this might be a good time for you to try LambdaCube 3D. Everyone else is welcome, of course, but you’re on your own for the time being. In any case, we’re happy to receive any kind of feedback.

Categories: Offsite Blogs

New type of expressions containing (error ...) includes noisy implicit parameter

haskell-cafe - Sat, 02/13/2016 - 10:50am
Prelude> let myList = [1, 2, 3 :: Integer]
Prelude> let myList' = myList ++ undefined
Prelude> :t myList
myList :: [Integer]
Prelude> :t myList'
myList' :: (?callStack::GHC.Stack.Types.CallStack) => [Integer]

This is on by default and, as far as I've been able to tell, unavoidable in a default GHCi 8.0 REPL session. I'm glad I caught this before our book goes to print in a couple of months. We'd managed to avoid talking about implicit parameters in 1,100+ pages of book, but now we're forced to acknowledge their existence in the 4th of 32 chapters. This slipped past the radar more stealthily than the earlier stages of BBP did for 7.10; I was hearing about BBP on the GHC Trac pretty early on, for months on end. Was the thinking that people still used implicit parameters for anything or taught them? On the one hand, this is a nice change and something I personally attempted (and failed) to make easier in GHC 7.10. But the implementation making the types noisy rankles and didn't seem necessary when I investigated it
Categories: Offsite Discussion

conduit: Inexhaustible source

haskell-cafe - Fri, 02/12/2016 - 10:40pm
Hi, I've got a conduit thing that yields infinitely many values and never exits, which I've given the type ConduitM () o m Void - a bit like Source m o = ConduitM () o m () except that it can't exit due to the Void. (One side-question: why is Source m o not ConduitM Void o m ()?) I would now like to get the first item it yields; I'm currently using Data.Conduit.List.head but of course this returns a Maybe o in case the upstream thing exits. Is there a way to do this without that Maybe? I can't see anything obvious, but nor can I think of a terribly good reason why not. One thing that I was pondering was a kind of fuse operator with a type like ... ConduitM a b m r1
Categories: Offsite Discussion

typeclass for transformation of Euclidean points

haskell-cafe - Fri, 02/12/2016 - 5:45pm
Hi. What I'd like to have for a program I am working on is a generic typeclass for objects that can be treated like points on a 2D plane, and a few simple transform functions (e.g., addition, rotation around an origin) which I could combine to make more complex transformations. Of course, it would be trivial to do this myself, but it seems like the sort of thing somebody would have thought of already. Does anybody know of a package providing this? (I didn't see an obvious one when searching with hoogle.)
Categories: Offsite Discussion

-XGADTs changes type for program that does notcontain GADT

haskell-cafe - Fri, 02/12/2016 - 2:47pm
Dear Cafe, the following type-correct program does not contain GADT syntax. When I activate the GADTs language pragma, it no longer typechecks.

{-# language GADTs, ScopedTypeVariables #-}

data M a b

dw :: (b -> b -> b) -> M a b -> M a b -> M a b
dw f x y = x

data Bar a b s = Bar { this :: M a (M b s), that :: M b (M a s) }

f :: forall p q s . (s -> s -> s) -> Bar p q s -> Bar p q s -> Bar p q s
f g x y =
  let -- diff :: forall p . M p s -> M p s -> M p s
      diff a b = dw g a b
  in Bar { this = dw diff (this x) (this y)
         , that = dw diff (that x) (that y) }

I can fix this by declaring the type for `diff` as indicated in the comment. Otherwise, `diff` is not polymorphic (enough), as the error message shows. This behaviour is consistent over ghc-7.8, 7.10 and 8-rc, so it's unlikely to be a bug. But it does seem to go against the claim of a "conservative extension of HM" (Sect 6.6 of the ICFP'06 paper, Sect 4.9 of MS-CIS-05-26)
Categories: Offsite Discussion