News aggregator

Philip Wadler: John McCarthy presents Recursive Functions of Symbolic Expressions

Planet Haskell - Sun, 04/24/2016 - 9:58am
“John McCarthy presents Recursive Functions of Symbolic Expressions and Their Computation by Machine, Part I”, Painting, 1662, Ferdinand Bol. From Classic Programmer Paintings.
Categories: Offsite Blogs

Edward Z. Yang: Hindley-Milner with top-level existentials

Planet Haskell - Sun, 04/24/2016 - 2:05am

Content advisory: This is a half-baked research post.

Abstract. Top-level unpacking of existentials is easy to integrate into Hindley-Milner type inference. Haskell should support it. It's possible this idea can work for internal bindings of existentials as well (à la F-ing modules), but I have not worked out how to do it.

Update. And UHC did it first!

Update 2. And rank-2 type inference is decidable (and rank-1 existentials are an even weaker system), although the algorithm for rank-2 inference requires semiunification.


The difference between Hindley-Milner and System F. Although in informal discussion Hindley-Milner is commonly described as a “type inference algorithm”, it is properly described as a type system which is more restrictive than System F. Both type systems allow polymorphism via universal quantification of variables, but in System F this polymorphism is explicit and can occur anywhere, whereas in Hindley-Milner the polymorphism is implicit and can only occur at the “top level” (in a so-called “polytype” or “type scheme”). This restriction of polymorphism is the key which makes inference (via Algorithm W) for Hindley-Milner decidable (and practical), whereas inference for System F is undecidable.

-- Hindley-Milner
id :: a -> a
id = λx. x

-- System F
id :: ∀a. a -> a
id = Λa. λ(x : a). x

Existential types in System F. A common generalization of System F is to equip it with existential types:

Types τ ::= ... | ∃a. τ
Terms e ::= ... | pack <τ, e>_τ | unpack <a, x> = e in e

In System F, it is technically not necessary to add existentials as a primitive concept, as they can be encoded using universal quantifiers by saying ∃a. τ = ∀r. (∀a. τ → r) → r.

Existential types in Hindley-Milner? This strategy will not work for Hindley-Milner: the encoding requires a higher-rank type, which is precisely what Hindley-Milner rules out for the sake of inference.

In any case, it is a fool's game to try to infer existential types: there's no best type! HM always infers the most general type for an expression: e.g., we will infer f :: a -> a for the function f = \x -> x, and not Int -> Int. But the whole point of data abstraction is to pick a more abstract type, which is not going to be the most general type and, consequently, is not going to be unique. What should be abstract, what should be concrete? Only the user knows.

Existential types in Haskell. Suppose that we are willing to write down explicit types when existentials are packed; can Hindley-Milner do the rest of the work? That is to say, do we have complete and decidable inference for the rest of the types in our program?

Haskell is an existence (cough cough) proof that this can be made to work. In fact, there are two ways to go about doing it. The first is what you will see if you Google for “Haskell existential type”:

{-# LANGUAGE ExistentialQuantification #-}
{-# LANGUAGE RankNTypes #-}

data Ex f = forall a. Ex (f a)

pack :: f a -> Ex f
pack = Ex

unpack :: Ex f -> (forall a. f a -> r) -> r
unpack m k = case m of Ex x -> k x

Ex f is isomorphic to ∃a. f a, and similar to the System F syntax, they can be packed with the Ex constructor and unpacked by pattern-matching on them.
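To make the packing concrete, here is a small usage sketch; the Counter interface and all names in it are hypothetical, not from the post:

```haskell
{-# LANGUAGE ExistentialQuantification #-}

data Ex f = forall a. Ex (f a)

-- A hypothetical interface: a counter with a hidden state type a.
data Counter a = Counter
  { start  :: a         -- initial state
  , step   :: a -> a    -- advance the counter
  , render :: a -> Int  -- observe it
  }

-- Pack an Int-backed counter; the representation type is now abstract.
intCounter :: Ex Counter
intCounter = Ex (Counter 0 (+ 1) id)

-- Clients can only go through the interface, never the hidden type.
runTwice :: Ex Counter -> Int
runTwice (Ex (Counter s f r)) = r (f (f s))

main :: IO ()
main = print (runTwice intCounter)  -- prints 2
```

Swapping the representation (say, to a list whose length is the count) requires no change to runTwice, which is the point of the abstraction.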

The second way is to directly use the System F encoding using Haskell's support for rank-n types:

{-# LANGUAGE RankNTypes #-}

type Ex f = forall r. (forall a. f a -> r) -> r

pack :: f a -> Ex f
pack x = \k -> k x

unpack :: Ex f -> (forall a. f a -> r) -> r
unpack m k = m k
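The two representations are interconvertible; a quick sketch (the names toChurch, fromChurch, and hiddenLength are mine):

```haskell
{-# LANGUAGE ExistentialQuantification #-}
{-# LANGUAGE RankNTypes #-}

-- The data-based encoding
data ExD f = forall a. ExD (f a)

-- The Church/rank-2 encoding from the post
type ExC f = forall r. (forall a. f a -> r) -> r

toChurch :: ExD f -> ExC f
toChurch (ExD x) = \k -> k x

fromChurch :: ExC f -> ExD f
fromChurch m = m ExD

-- A consumer that only needs the polymorphic interface:
hiddenLength :: ExC [] -> Int
hiddenLength m = m length

main :: IO ()
main = print (hiddenLength (toChurch (ExD "hello")))  -- prints 5
```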

The boxy types paper demonstrated that you can do inference, so long as all of your higher rank types are annotated. Although, perhaps it was not as simple as hoped, since impredicative types are a source of constant bugs in GHC's type checker.

The problem

Explicit unpacks suck. As anyone who has tried programming with existentials in Haskell can attest, the use of existentials can still be quite clumsy due to the necessity of unpacking an existential (casing on it) before it can be used. That is to say, the syntax let Ex x = ... in ... is not allowed, and it is an easy way to get GHC to tell you its brain exploded.

Leijen investigated the problem of handling existentials without explicit unpacks.

Loss of principal types without explicit unpacks, and Leijen's solution. Unfortunately, the naive type system does not have principal types. Leijen gives an example where there is no principal type:

wrap :: forall a. a -> [a]
key :: exists b. Key b

-- What is the type of 'wrap key'?
--   [exists b. Key b]?
--   exists b. [Key b]?

Neither type is a subtype of the other. In his paper, Leijen suggests that the existential should be unwrapped as late as possible (since you can go from the first type to the second, but not vice versa), and thus, the first type should be preferred.

The solution

A different approach. What if we always lift the existential to the top level? This is really easy to do if you limit unpacks to the top-level of a program, and it turns out this works really well. (The downside is that dynamic use of existentials is not supported.)

There's an existential in every top-level Haskell algebraic data type. First, I want to convince you that this is not all that strange of an idea. To do this, we look at Haskell's support for algebraic data types. Algebraic data types in Haskell are generative: each data type must be given a top-level declaration and is considered a distinct type from any other data type. Indeed, Haskell users use this generativity in conjunction with the ability to hide constructors to achieve data abstraction in Haskell. Although there is not actually an existential lurking about—generativity is not data abstraction—generativity is an essential part of data abstraction, and HM has no problem with this.

Top-level generativity corresponds to existentials that are unpacked at the top-level of a program (ala F-ing modules). We don't need existentials embedded inside our Haskell expressions to support the generativity of algebraic data types: all we need is the ability to pack an existential type at the top level, and then immediately unpack it into the top-level context. In fact, F-ing modules goes even further: existentials can always be lifted until they reach the top level of the program. Modular programming with applicative functors (the ML kind) can be encoded using top-level existentials which are immediately unpacked as they are defined.

The proposal. So let us suggest the following type system, Hindley-Milner with top-level existentials (where a* denotes zero to many type variables):

Term variables ∈ f, x, y, z
Type variables ∈ a, b, c

Programs                  prog ::= let f = e in prog
                                 | seal (b*, f :: σ) = (τ*, e) in prog
                                 | {- -}
Type schemes (polytypes)     σ ::= ∀a*. τ
Expressions                  e ::= x | \x -> e | e e
Monotypes                    τ ::= a | τ -> τ

There is one new top-level binding form, seal. We can give it the following typing rule:

Γ ⊢ e :: τ₀[b* → τ*]
a* = free-vars(τ₀[b* → τ*])
Γ, b*, (f :: ∀a*. τ₀) ⊢ prog
---------------------------------------------
Γ ⊢ seal (b*, f :: ∀a*. τ₀) = (τ*, e) in prog

It also elaborates directly to System F with existentials:

seal (b*, f :: σ) = (τ*, e) in prog ===> unpack <b*, f> = pack <τ*, e>_{∃b*. σ} in prog
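For intuition, here is a hypothetical program in the proposed surface syntax together with its elaboration (the identity-function example is mine, not from the post; here a* is empty):

```
-- Surface syntax: seal the identity function at Int behind
-- an abstract type b.
seal (b, f :: b -> b) = (Int, \x -> x) in
let g = f in
{- -}

-- Its elaboration into System F with existentials:
unpack <b, f> = pack <Int, \x -> x>_{∃b. b -> b} in
let g = f in
{- -}
```

Within prog, f can be applied but its argument and result types are the abstract b, not Int.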

A few observations:

  1. In conventional presentations of HM, let-bindings are allowed to be nested inside expressions (and are generalized to polytypes before being added to the context). Can we do something similar with seal? This should be possible, but the bound existential type variables must be propagated up.
  2. This leads to a second problem: naively, the order of quantifiers must be ∃b. ∀a. τ and not ∀a. ∃b. τ, because otherwise we cannot add the existential to the top-level context. However, there is a "skolemization" trick (c.f. Shao and F-ing modules) by which you can just make b a higher-kinded type variable which takes a as an argument, e.g., ∀a. ∃b. b is equivalent to ∃b'. ∀a. b' a. This trick could serve as the way to support inner seal bindings, but the encoding tends to be quite involved (as you must close over the entire environment.)
  3. This rule is not very useful for directly modeling ML modules, as a “module” is usually thought of as a record of polymorphic functions. Maybe you could generalize this rule to bind multiple polymorphic functions?
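A worked instance of the skolemization trick from observation 2, in the post's notation (the example type is mine):

```
∀a. ∃b. τ          ⇝   ∃b'. ∀a. τ[b := b' a]

∀a. ∃b. (a -> b)   ⇝   ∃b'. ∀a. (a -> b' a)
```

The higher-kinded b' closes over the universally quantified a, which is why the existential can now float to the top.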

Conclusion. And that's as far as I've worked it out. I am hoping someone can tell me (1) who came up with this idea already, and (2) why it doesn't work.

Categories: Offsite Blogs

Mark Jason Dominus: Steph Curry: fluke or breakthrough?

Planet Haskell - Sat, 04/23/2016 - 6:00pm

[ Disclaimer: I know very little about basketball. I think there's a good chance this article contains at least one basketball-related howler, but I'm too ignorant to know where it is. ]

Randy Olson recently tweeted a link to a New York Times article about Steph Curry's new 3-point record. Here is Olson’s snapshot of a portion of the Times’ clever and attractive interactive chart:

(Skip this paragraph if you know anything about basketball. The object of the sport is to throw a ball through a “basket” suspended ten feet (3 meters) above the court. Normally a player's team is awarded two points for doing this. But if the player is sufficiently far from the basket—the distance varies but is around 23 feet (7 meters)—three points are awarded instead. Carry on!)

Stephen Curry

The chart demonstrates that Curry this year has shattered the single-season record for three-point field goals. The previous record, set last year, is 286, also by Curry; the new record is 402. A comment by the authors of the chart says

The record is an outlier that defies most comparisons, but here is one: It is the equivalent of hitting 103 home runs in a Major League Baseball season.

(The current single-season home run record is 73, and 73 × 402/286 ≈ 103.)

I found this remark striking, because I don't think the record is an outlier that defies most comparisons. In fact, it doesn't even defy the comparison they make, to the baseball single-season home run record.

Babe Ruth

In 1919, the record for home runs in a single season was 29, hit by Babe Ruth. The 1920 record, also by Ruth, was 54. To make the same comparison as the authors of the Times article, that is the equivalent of hitting 136 home runs in a Major League Baseball season.

No, far from being an outlier that defies most comparisons, I think what we're seeing here is something that has happened over and over in sport, a fundamental shift in the way the game is played; in short, a breakthrough. In baseball, Ruth's 1920 season was the end of what is now known as the dead-ball era. The end of the dead-ball era was caused by the confluence of several trends (shrinking ballparks), rule changes (the spitball), and one-off events (Ray Chapman, the Black Sox). But an important cause was simply that Ruth realized that he could play the game in a better way by hitting a crapload of home runs.

The new record was the end of a sudden and sharp upward trend. Prior to Ruth's 29 home runs in 1919, the record had been 27, a weird fluke set way back in 1887 when the rules were drastically different. Typical single-season home run records in the intervening years were in the 11 to 16 range; the record exceeded 20 in only four of the intervening 25 years.

Ruth's innovation was promptly imitated. In 1920, the #2 hitter hit 19 home runs and the #10 hitter hit 11, typical numbers for the nineteen-teens. By 1929, the #10 hitter hit 31 home runs, which would have been record-setting in 1919. It was a different game.

Takeru Kobayashi

For another example of a breakthrough, let's consider competitive hot dog eating. Between 1980 and 1990, champion hot-dog eaters consumed between 9 and 16 hot dogs in 10 minutes. In 1991 the time was extended to 12 minutes and Frank Dellarosa set a new record, 21½ hot dogs, which was not too far out of line with previous records, and which was repeatedly approached in the following decade: through 1999 five different champions ate between 19 and 24½ hot dogs in 12 minutes, in every year except 1993.

But in 2000 Takeru Kobayashi (小林 尊) changed the sport forever, eating an unbelievably disgusting 50 hot dogs in 12 minutes. (50. Not a misprint. Fifty. Roman numeral Ⅼ.) To make the Times’ comparison again, that is the equivalent of hitting 149 home runs in a Major League Baseball season.
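Each of these Times-style equivalents is just the ratio of the new record to the old one, scaled onto baseball's 73-home-run record; a small sketch of the arithmetic (equivalentHR is my name):

```haskell
-- Scale a record-breaking jump (old record -> new record) onto
-- baseball's single-season home run record of 73, as the Times
-- comparison does.
equivalentHR :: Double -> Double -> Double
equivalentHR old new = new / old * 73

main :: IO ()
main = mapM_ (print . round . uncurry equivalentHR)
  [ (286,  402)  -- Curry's three-point record
  , (29,   54)   -- Ruth, 1920
  , (24.5, 50)   -- Kobayashi, 2000
  ]
```

This prints 103, 136, and 149: the three equivalents used in the text.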

At that point it was a different game. Did the record represent a fundamental shift in hot dog gobbling technique? Yes. Kobayashi won all five of the next five contests, eating between 44½ and 53¾ each time. By 2005 the second- and third-place finishers were eating 35 or more hot dogs each; had they done this in 1995 they would have demolished the old records. A new generation of champions emerged, following Kobayashi's lead. The current record is 69 hot dogs in 10 minutes. The record-setters of the 1990s would not even be in contention in a modern hot dog eating contest.

Bob Beamon

It is instructive to compare these breakthroughs with a different sort of astonishing sports record, the bizarre fluke. In 1967, the world record distance for the long jump was 8.35 meters. In 1968, Bob Beamon shattered this record, jumping 8.90 meters. To put this in perspective, consider that in one jump, Beamon advanced the record by 55 cm, the same amount that it had advanced (in 13 stages) between 1925 and 1967.

Progression of the world long jump record
The cliff at 1968 is Bob Beamon

Did Beamon's new record represent a fundamental shift in long jump technique? No: Beamon never again jumped more than 8.22m. Did other jumpers promptly imitate it? No, Beamon's record was approached only a few times in the following quarter-century, and surpassed only once. Beamon had the benefit of high altitude, a tail wind, and fabulous luck.

Joe DiMaggio

Another bizarre fluke is Joe DiMaggio's hitting streak: in the 1941 baseball season, DiMaggio achieved hits in 56 consecutive games. For extensive discussion of just how bizarre this is, see The Streak of Streaks by Stephen Jay Gould. (“DiMaggio’s streak is the most extraordinary thing that ever happened in American sports.”) Did DiMaggio’s hitting streak represent a fundamental shift in the way the game of baseball was played, toward high-average hitting? Did other players promptly imitate it? No. DiMaggio's streak has never been seriously challenged, and has been approached only a few times. (The modern runner-up is Pete Rose, who hit in 44 consecutive games in 1978.) DiMaggio also had the benefit of fabulous luck.

Is Curry’s new record a fluke or a breakthrough?

I think what we're seeing in basketball is a breakthrough, a shift in the way the game is played analogous to the arrival of baseball’s home run era in the 1920s. Unless the league tinkers with the rules to prevent it, we might expect the next generation of players to regularly lead the league with 300 or 400 three-point shots in a season. Here's why I think so.

  1. Curry's record wasn't unprecedented. He's been setting three-point records for years. (Compare Ruth’s 1920 home run record, foreshadowed in 1919.) He's continuing a trend that he began years ago.

  2. Curry’s record, unlike DiMaggio’s streak, does not appear to depend on fabulous luck. His 402 field goals this year are on 886 attempts, a 45.4% success rate. This is in line with his success rate every year since 2009; last year he had a 44.3% success rate. Curry didn't get lucky this year; he had 40% more field goals because he made almost 40% more attempts. There seems to be no reason to think he couldn't make the same number of attempts next year with equal success, if he wants to.

  3. Does he want to? Probably. Curry’s new three-point strategy seems to be extremely effective. In his previous three seasons he scored 1786, 1873, and 1900 points; this season, he scored 2375, an increase of 475, three-quarters of which is due to his three-point field goals. So we can suppose that he will continue to attempt a large number of three-point shots.

  4. Is this something unique to Curry or is it something that other players might learn to emulate? Curry’s three-point field goal rate is high, but not exceptionally so. He's not the most accurate of all three-point shooters; he holds the 62nd–64th-highest season percentages for three-point success rate. There are at least a few other players in the league who must have seen what Curry did and thought “I could do that”. (Kyle Korver maybe? I'm on very shaky ground; I don't even know how old he is.) Some of those players are going to give it a try, as are some we haven’t seen yet, and there seems to be no reason why some shouldn't succeed.

A number of things could sabotage this analysis. For example, the league might take steps to reduce the number of three-point field goals, specifically in response to Curry’s new record, say by moving the three-point line farther from the basket. But if nothing like that happens, I think it's likely that we'll see basketball enter a new era of higher offense with more three-point shots, and that future sport historians will look back on this season as a watershed.

[ Addendum 20160425: As I feared, my Korver suggestion was ridiculous. Thanks to the folks who explained why. Reason #1: He is 35 years old. ]

Categories: Offsite Blogs

wxhaskell: Play sound not working on OSX?

haskell-cafe - Sat, 04/23/2016 - 1:08pm
Is it possible that play . sound might not work on OSX, or am I doing something wrong? This advice from around 2006 is all I could find: "Did you build wxWidgets with configuring --enable-sound build option?" Is this still valid? If so, how do I do that? Tilmann

import Graphics.UI.WX

main :: IO ()
main = start $ do
  f <- frame []
  p <- panel f []
  set f [layout := minsize (Size 100 100) $ widget p]
  play (sound "alarm.wav")

Using OSX 10.9.5, GHC 7.10.2, WX
Categories: Offsite Discussion

[ANN] Aivika: A parallel distributed discrete eventsimulation library

haskell-cafe - Sat, 04/23/2016 - 10:43am
Hi Cafe! I’m glad to announce the first add-on module [3] of the Aivika simulation library that allows creating and running parallel distributed discrete event simulations. It is destined for clusters and supercomputers, although it works on a stand-alone notebook too. My module implements an optimistic strategy that implies transparent rollbacks. It is mainly inspired by ideas of the Time Warp algorithm, whose first publication dates to the early '80s. Please note that this is the first version. It is mainly experimental, but it already shows good results. For example, when using the reproducible random number generators on different nodes, I always get the same simulation results. Furthermore, for testing purposes, I rewrote one of my examples with discrete processes and a shared resource so that it uses too many messages, just to see how it works. The add-on module allows doing everything that the generalized version [2] of Aivika can do in the part related to discrete event simulation. Moreover, the pa
Categories: Offsite Discussion

Generators? Iterators?

haskell-cafe - Sat, 04/23/2016 - 6:28am
Hi. I'm more in the shallow end of the Haskell pool right now, so forgive me if this is really basic... In Haskell, what is the class for things that can provide an infinite sequence? I mean, things that produce a value as well as a function for producing the next value? Clearly RandomGen does this, but what is the next level of abstraction above RandomGen (i.e., RandomGen minus the randomness aspect)?
Categories: Offsite Discussion

Should webassembly be a target for GHC?

haskell-cafe - Sat, 04/23/2016 - 5:28am
I'm interested in WebAssembly's recent momentum. Is this a way we could get a foot in the web? I'm curious, just putting the thought out there. Cheers, Dimitri
Categories: Offsite Discussion

Oliver Charles: Announcing transformers-eff

Planet Haskell - Fri, 04/22/2016 - 6:00pm

In my last post, I spent some time discussing a few different approaches to dealing with computational effects in Haskell - namely monad transformers, free monads, and the monad transformer library. I presented an approach to systematically building mtl-like type classes based on the idea of lifting languages for a given effect into larger monad transformer stacks. This approach felt so mechanical to me I set about exploring a way to formalise it, and am happy to announce a new experimental library – transformers-eff.

transformers-eff takes inspiration from the work of algebraic effects and handlers, and splits each effect into composable programs for introducing effects and handlers that eliminate these effects. As the name indicates, this work is also closely related to monad transformer stacks, as they provide the implementation of the specific effects. I believe the novelty in my approach is that we can do this entirely within the system of monad transformers, and this observation makes it very convenient to create re-usable effects.

Core API

Before looking at an example, I want to start by presenting the core API. First, we have the Eff monad transformer:

data Eff (f :: * -> *) (m :: * -> *) (a :: *)

If you squint, you’ll see that Eff has the familiar shape of a monad transformer - it transforms a given monad m, providing it access to effects described by f. As Eff f m is itself a monad, it’s possible to stack Effs together. The type parameter f is used to indicate which effects this Eff transformer talks about.

Next, the library provides a way to eliminate Eff by translating it into a concrete monad transformer:

translate :: (Monad m, Monad (t m), MonadTrans t)
          => (forall x r. f x -> ContT r (t m) x)
          -> Eff f m a
          -> t m a

Translations are defined by a single function that is very similar to the type of “lifts” we saw in my previous blog post. The difference here is that the homomorphism maps into ContT, which allows the translation to adjust control flow. For many effects it will be enough to simply lift directly into this, but it can be useful to inspect the continuation, for example to build non-deterministic computations.

Finally, we have one type class method:

interpret :: (Monad m) => f a -> m a

However, this type class is fairly constrained in its instances, so you should read m as actually being some sort of monad transformer stack containing Eff f.


Let’s dive in and look at some examples.

Reader effects

Last post we spent a lot of time looking at various representations of the reader monad, so let’s see how this looks under transformers-eff.

We already have a definition for our language, r -> a, as we saw last week. While we could work directly with this, we’ll be interpreting into ReaderT so I’ll use the Reader newtype for a little extra readability. Given this language, we just need to write a translation into a concrete monad transformer, which will be ReaderT:

effToReaderT :: Monad m => Eff (Reader e) m a -> ReaderT e m a
effToReaderT = translate (\r -> lift (hoist generalize r))

This is a little dense, so let’s break it down. When we call translate, we have to provide a function with the type:

forall a m. Reader r a -> ContT _ (ReaderT r m) a

The ReaderT r m part is coming from the type we gave in the call to translate, that is, the type of effToReaderT. We don’t really need to concern ourselves with continuations for this effect, as reading from a fixed environment does not change the flow of control, so we’ll begin with lift. We now have to produce a ReaderT r m a from a Reader r a. If we notice that Reader r a = ReaderT r Identity a, we can make use of the tools in the mmorph library, which lets us map that Identity to any m via hoist generalize.
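To see what hoist generalize is doing, here is a self-contained sketch that reimplements both pieces for ReaderT using only the transformers package (the names hoistReaderT and fromPure are mine; the post uses mmorph's more general hoist and generalize):

```haskell
{-# LANGUAGE RankNTypes #-}

import Control.Monad.Trans.Reader
import Data.Functor.Identity

-- mmorph's generalize: lift a pure Identity computation into any monad.
generalize :: Monad m => Identity a -> m a
generalize = return . runIdentity

-- mmorph's hoist, specialised to ReaderT: map over the base monad.
hoistReaderT :: (forall x. m x -> n x) -> ReaderT r m a -> ReaderT r n a
hoistReaderT f (ReaderT g) = ReaderT (f . g)

-- Reader r a = ReaderT r Identity a, so this is the lift in question.
fromPure :: Monad m => Reader r a -> ReaderT r m a
fromPure = hoistReaderT generalize

main :: IO ()
main = runReaderT (fromPure (ReaderT (Identity . (+ 1)))) 41 >>= print  -- prints 42
```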

We still need a way to easily introduce these effects into our programs, and that means writing an mtl type class. However, the instances require almost no work on our behalf and we only have to provide two, making this a very quick process:

class (Monad m) => EffReader env m | m -> env where
  liftReader :: Reader env a -> m a

instance Monad m => EffReader env (Eff (Reader env) m) where
  liftReader = interpret

instance {-# OVERLAPPABLE #-} EffReader env m => EffReader env (Eff effects m) where
  liftReader = lift . liftReader

I then provide a user-friendly API built on this lift operation:

ask :: EffReader e m => m e
ask = liftReader (Reader id)

Finally, most users are probably more interested in running the effect rather than just translating it to ReaderT, so let’s provide a convenience function to translate and run all in one go:

runReader :: Eff (Reader r) m a -> r -> m a
runReader eff r = runReaderT (effToReaderT eff) r

In total, the reader effect is described as:

class (Monad m) => EffReader env m | m -> env where
  liftReader :: Reader env a -> m a

instance Monad m => EffReader env (Eff (Reader env) m) where
  liftReader = interpret

instance {-# OVERLAPPABLE #-} EffReader env m => EffReader env (Eff effects m) where
  liftReader = lift . liftReader

ask :: EffReader e m => m e
ask = liftReader (Reader id)

effToReaderT :: Monad m => Eff (Reader e) m a -> ReaderT e m a
effToReaderT = translate (\r -> lift (hoist generalize r))

A logging effect

We also looked at a logging effect last week, and this can also be built using transformers-eff:

data LoggingF message a = Log message
  deriving (Functor)

class (Monad m) => EffLog message m | m -> message where
  liftLog :: Free (LoggingF message) a -> m a

instance Monad m => EffLog message (Eff (Free (LoggingF message)) m) where
  liftLog = interpret

instance {-# OVERLAPPABLE #-} EffLog message m => EffLog message (Eff effects m) where
  liftLog = lift . liftLog

log :: EffLog message m => message -> m ()
log = liftLog . liftF . Log

runLog :: (MonadIO m) => Eff (Free (LoggingF message)) m a -> (message -> IO ()) -> m a
runLog eff io = runIdentityT (translate (iterM (\(Log msg) -> liftIO (io msg))) eff)

The interpretation here is given an IO action to perform whenever a message is logged. I could have implemented this in a few ways - perhaps lifting the whole computation into ReaderT (message -> IO ()), but instead I have just used IdentityT as the target monad transformer, and added a MonadIO constraint onto m. Whenever a message is logged, we’ll directly call the given IO action. As you can also see, I’ve used a free monad as the source language for the effect. This example demonstrates that we are free to mix a variety of tools (here free monads, MonadIO and the identity transformer) in order to get the job done.
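Stripped of the Eff machinery, the free-monad logging language and an IO interpreter can be sketched self-containedly. This is my variant, not the post's code: I give LoggingF an explicit continuation field so the interpreter can resume after each Log, and collectLogs is a hypothetical pure interpreter that is handy for testing:

```haskell
{-# LANGUAGE DeriveFunctor #-}

-- The logging language: one instruction carrying a message and
-- the rest of the computation.
data LoggingF message k = Log message k
  deriving (Functor)

-- A minimal free monad over a functor.
data Free f a = Pure a | Free (f (Free f a))

instance Functor f => Functor (Free f) where
  fmap g (Pure a)  = Pure (g a)
  fmap g (Free fa) = Free (fmap (fmap g) fa)

instance Functor f => Applicative (Free f) where
  pure = Pure
  Pure g  <*> x = fmap g x
  Free fg <*> x = Free (fmap (<*> x) fg)

instance Functor f => Monad (Free f) where
  Pure a  >>= g = g a
  Free fa >>= g = Free (fmap (>>= g) fa)

logMsg :: message -> Free (LoggingF message) ()
logMsg msg = Free (Log msg (Pure ()))

-- Interpret by running an IO action for each message.
runLogIO :: (message -> IO ()) -> Free (LoggingF message) a -> IO a
runLogIO _  (Pure a)         = return a
runLogIO io (Free (Log m k)) = io m >> runLogIO io k

-- A pure interpreter: collect the messages and the final result.
collectLogs :: Free (LoggingF message) a -> ([message], a)
collectLogs (Pure a)         = ([], a)
collectLogs (Free (Log m k)) = let (ms, a) = collectLogs k in (m : ms, a)

main :: IO ()
main = runLogIO putStrLn (logMsg "hello" >> logMsg "world")
```

Swapping runLogIO for collectLogs reinterprets the same program without touching it, which is the separation of programs and handlers the post describes.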

What does this approach bring? Fewer type class instances

We saw above that when we introduced our EffLog type class, it was immediately available for use alongside EffReader effects - and we didn’t have to do anything extra! To me, this is a huge win - I frequently find myself frustrated with the amount of work required when composing many different projects together with mtl, and this is not just a theoretical frustration. To provide just one example from today, I wanted to use ListT with some Yesod code that required MonadLogger. There is obviously no MonadLogger instance for ListT, and it’s almost unsolvable to provide such an instance without using orphan instances - neither one of those libraries should need to depend on the other, so we’re stuck! If you stay within Eff, this problem doesn’t occur.

Many will be quick to point out that in mtl it doesn’t necessarily make sense to have all transformers compose due to laws (despite the lack of any laws actually being stated…), and I’m curious if this is true here. In this library, due to the limitation on having to write your effectful programs based on an underlying algebra, I’m not sure it’s possible to introduce the problematic type class methods like local and catch.

One effect at a time

In the mtl approach a single monad transformer stack might be able to deal with a whole selection of effects in one go. However, I’ve found that this can actually make it quite difficult to reason about the flow of code. To provide an example, let’s consider this small API:

findOllie :: (MonadDb m, MonadPlus m) => m Person
findOllie = do
  x <- dbLookup (PersonId 42)
  guard (personName x == "Ollie")
  return x

type QueryError = String

dbLookup :: (MonadDb m, MonadError QueryError m) => PersonId -> m Person

data DbT m a
instance Monad m => Monad (DbT m)
instance Monad m => MonadDb (DbT m)

runDb :: (MonadIO m) => DbT m a -> m a

If we just try and apply runDb to findOllie, we’ll get

runDb findOllie :: (MonadError QueryError m, MonadIO m, MonadPlus m) => m Person

We still need to take care of MonadError and MonadPlus. For MonadError I’ll use ExceptT, and for MonadPlus I’ll use MaybeT:

runMaybeT (runExceptT (runDb findOllie)) :: IO (Maybe (Either QueryError Person))

Next, let’s consider a few scenarios. Firstly, the case where everything succeeds -

> runMaybeT (runExceptT (runDb findOllie))
Just (Right Person ...)

However, that query could fail, which would cause an error

> runMaybeT (runExceptT (runDb findOllie))
Just (Left "Table `person` not found")

Still as expected. Finally, person 42 might not actually be me, in which case we get

> runMaybeT (runExceptT (runDb findOllie))
Just (Left "")

Huh? What’s happened here is that we’ve hit the MonadPlus instance for ExceptT, and because our QueryError is a String we have a Monoid instance, so we were given an “empty” error. This is not at all what we were expecting!
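This instance-selection surprise is easy to reproduce with just the transformers library (a minimal sketch; surprise is a hypothetical name, and I use Identity as the base monad so the result is pure):

```haskell
import Control.Monad (guard)
import Control.Monad.Trans.Except (runExceptT)
import Control.Monad.Trans.Maybe (runMaybeT)
import Data.Functor.Identity (runIdentity)

-- guard uses the Alternative instance of the outermost transformer,
-- ExceptT String, whose 'empty' is Left mempty = Left "", rather
-- than MaybeT's Nothing.
surprise :: Maybe (Either String Int)
surprise = runIdentity (runMaybeT (runExceptT (guard False >> return 42)))

main :: IO ()
main = print surprise  -- prints: Just (Left "")
```

Note the Alternative instance for ExceptT e m only exists because String has a Monoid instance, which is exactly how the empty error sneaks in.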

While this example is a contrived one, I am very nervous that this accidental choice of instances could happen deep within another section of code, for example where I expect to do some local error handling and accidentally eliminate a chance of failure that I was expecting to deal with elsewhere.

In transformers-eff this is not possible, as each Eff deals with one and only one effect at a time. This could be done with mtl by introducing a separate type class for failure and only adding an instance for MaybeT, but then we are working around the problem by convention, and I would much rather bake that into the types.

Fast code

The underlying implementation of Eff is built on top of continuations, and due to aggressive inlining, GHC is able to work some serious magic. In fact, in all the benchmarks I’ve produced so far, Eff is as fast as transformers, and even comes out slightly faster in one (though within the same order of magnitude).

Compatible with the rest of Hackage

As Eff is just another monad transformer, you can stack it with other monad transformers. Note that by doing this you may lack the type class instances you need, so explicit lifting might be necessary. I mainly expect this to be useful by putting Eff “on the top” - for example, I can use Eff locally within a Snap monad computation, provided I eventually run back down to just Snap. This is the same pattern as locally using transformers.

Categories: Offsite Blogs

JLAMP special issue for PLACES

General haskell list - Fri, 04/22/2016 - 5:51pm
-------------------------------- Call for papers: Special Issue of JLAMP for PLACES (Programming Language Approaches to Concurrency and Communication-cEntric Software) -------------------------------- Submission deadline: July 29th 2016 -------------------------------- -------------------------------- This special issue of the Journal of Logical and Algebraic Methods in Programming (JLAMP) is devoted to the topics of the 9th International Workshop on Programming Language Approaches to Concurrency and Communication-cEntric Software (PLACES 2016), which took place in April 2016 in Eindhoven as part of ETAPS. This is however an *open call* for papers, therefore both participants of the workshop and other authors are encouraged to submit their contributions. Themes: Modern hardware platforms, from the very small to the very large, inc
Categories: Incoming News

ANN: shine and shine-varying: Lightweight declarative 2D graphics à la gloss using GHCJS (and a FRP interface)

General haskell list - Fri, 04/22/2016 - 4:08pm
Shine wraps JavaScript’s drawing functions in a declarative API, hiding the boilerplate canvas code. The API mimics the one provided by gloss (Picture, animate, play…), but some types are different to better adapt to js code (ex. images, fonts, inputs). Example: main = runWebGUI $ \ webView -> do ctx <- fixedSizeCanvas webView 800 600 let concentricCircles = foldMap Circle [1,10..100] draw ctx concentricCircles -- one-shot drawing The only direct dependency is ghcjs-dom, so the resulting jsexe should be relatively lightweight. I also wrote shine-varying, a FRP interface to shine in terms of Vars plus some utility Vars. Example (translation of the resize-yogi Elm example): resizeImage img (x',y') = Translate (x/2) (y/2) -- Pictures are centered on (0,0), so we need to move it $ Image (Stretched x y) img -- Scale the picture to the given position where x = fromIntegral x' -- mousePosition is Integral y = fromIntegral y' main = runWebGUI $ \ webView -> d
Categories: Incoming News

Automatically Deriving Numeric Type Class Instances

haskell-cafe - Fri, 04/22/2016 - 3:23pm
Is it possible to automatically derive instances of Numeric type classes like Num, Fractional, Real, Floating, etc? I currently have two datatypes, Pair and Triple, that are defined like this: data Pair a = Pair a a data Triple a = Triple a a a I wrote these pretty trivial instances for Num and Floating: instance Num a => Num (Pair a) where (+) = liftA2 (+) (*) = liftA2 (*) abs = liftA abs negate = liftA negate signum = liftA signum fromInteger = pure . fromInteger instance Fractional a => Fractional (Pair a) where (/) = liftA2 (/) recip = liftA recip fromRational = pure . fromRational and practically identical instances for Triple as well. Is there anyway to have GHC derive these instances and the other numeric type classes? Thanks, Jake _______________________________________________ Haskell-Cafe mailing list Haskell-Cafe< at >
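For reference, a self-contained, runnable version of the pattern in the question follows; note that `liftA abs` in the original is just `fmap abs`, and that `(-)` comes for free from the `negate` default. As of this writing GHC has no built-in deriving for the numeric classes, so the boilerplate must still be repeated per type (the identical Triple instance differs only in the type name).

```haskell
{-# LANGUAGE DeriveFunctor #-}
import Control.Applicative (liftA2)

data Pair a = Pair a a deriving (Show, Eq, Functor)

instance Applicative Pair where
  pure x = Pair x x
  Pair f g <*> Pair x y = Pair (f x) (g y)

-- Every Num operation is lifted pointwise through the Applicative.
instance Num a => Num (Pair a) where
  (+)         = liftA2 (+)
  (*)         = liftA2 (*)
  abs         = fmap abs
  negate      = fmap negate
  signum      = fmap signum
  fromInteger = pure . fromInteger

main :: IO ()
main = print (Pair 1 2 + Pair 10 20 * Pair 2 2)  -- Pair 21 42
```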
Categories: Offsite Discussion

Haskell in Leipzig 2016: Call for Papers

General haskell list - Fri, 04/22/2016 - 12:21pm
                             Haskell in Leipzig                             September 14-15, 2016                             HTWK Leipzig, Germany                 == About HaL == The workshop series “Haskell in Leipzig”, now in its 11th year, brings together Haskell developers, Haskell researchers, Haskell enthusiasts and Haskell beginners to listen to talks, take part in tutorials, and join in interesting conversations. Everything related to Haskell is on topic, whether it is about current research, practical applications, interesting ideas off the beaten track, education, or art, and topics may extend to functional programming in general and its connections to other programming paradigms as well. This year, HaL is colocated with two related conferences,  * the Workshop on Functional and (Constraint) Logic Programming (WFLP) and  * the Workshop on (Co
Categories: Incoming News

Haskell in Leipzig 2016: Call for Papers

haskell-cafe - Fri, 04/22/2016 - 11:55am
                             Haskell in Leipzig                             September 14-15, 2016                             HTWK Leipzig, Germany                 == About HaL == The workshop series “Haskell in Leipzig”, now in its 11th year, brings together Haskell developers, Haskell researchers, Haskell enthusiasts and Haskell beginners to listen to talks, take part in tutorials, and join in interesting conversations. Everything related to Haskell is on topic, whether it is about current research, practical applications, interesting ideas off the beaten track, education, or art, and topics may extend to functional programming in general and its connections to other programming paradigms as well. This year, HaL is colocated with two related conferences,  * the Workshop on Functional and (Constraint) Logic Programming (WFLP) and  * the Workshop on (C
Categories: Offsite Discussion

I have a question about Haskell

haskell-cafe - Fri, 04/22/2016 - 4:13am
Hi i have a problem in my code! here is my code: here is error message: in very bottom function (polyEvaluate), why is not working “result <- map (\(a,b) -> a+b) powers” ??? in Prelude, it is working Thanks!_______________________________________________ Haskell-Cafe mailing list Haskell-Cafe< at >
Categories: Offsite Discussion

ANN: New committee members

General haskell list - Thu, 04/21/2016 - 8:43pm
The committee has selected its new member following the March self-nomination period. The new (returning) member is: * Gershom Bazerman Thank you to everyone who submitted a self-nomination. Note that if you have self-nominated in the past, but not been picked, please self-nominate again in the future.
Categories: Incoming News

[RV 2016] RV 2016, Sept 23-30 2016, Madrid,Spain - 3rd CFP

General haskell list - Thu, 04/21/2016 - 1:26pm
RV 2016 16th International Conference on Runtime Verification September 23-30, Madrid, Spain <> Scope Runtime verification is concerned with monitoring and analysis of software and hardware system executions. Runtime verification techniques are crucial for system correctness, reliability, and robustness; they are significantly more powerful and versatile than conventional testing, and more practical than exhaustive formal verification. Runtime verification can be used prior to deployment, for testing, verification, and debugging purposes, and after deployment for ensuring reliability, safety, and security and for providing fault containment and recovery as well as online system repair. Topics of interest to the conference include: - specification l
Categories: Incoming News

Philip Wadler: Pedal on Parliament

Planet Haskell - Thu, 04/21/2016 - 3:25am

Come join Pedal on Parliament! Gather in the Meadows from 11am on Saturday 23 April; the procession sets off at noon.

A few years ago, I took my son with me to ICFP in Copenhagen. We had a blast cycling around the city, and marvelled that there were bike paths everywhere. When I lived in Morningside, my cycle to work was along quiet roads, but even so it felt safer when I arrived on the bike path through the Meadows. Now that I live near the Cameo, I'm even happier to get off the busy road and onto a path. And I look forward to the future, because Edinburgh is a city that invests in cycling and has a plan on the table that includes a cycle path from the Meadows to the Canal, which will run past my flat.

Getting more people cycling will cut pollution, benefit health, and increase quality of life. Studies show that people don't cycle because they feel sharing the road with cars is unsafe, so investment in cycle paths can make a huge difference. If people in the UK cycled and walked as much as people do in Copenhagen, the NHS would save around £17 billion within twenty years. The video below makes the case brilliantly.

Scotland has set a goal that 10% of all travel should be by cycle or foot (the buzzword is active travel), but only spends about 2% of its budget on active travel. The City of Edinburgh has pledged to up its active travel budget by 1% a year until it reaches 10%. Pedal on Parliament is our chance to support the positive steps in Edinburgh, and encourage the rest of the country to take action.

[Embedded YouTube video]
Categories: Offsite Blogs

I've wanted a Haskell shirt for awhile

haskell-cafe - Thu, 04/21/2016 - 1:27am
Does anyone know where I can get one? Failing that, I made one to celebrate today. Disclaimer: I don't intend to be spammy, but I totally understand and respect if some people don't like this. I was just happy and wanted to share. Please don't hate. _______________________________________________ Haskell-Cafe mailing list Haskell-Cafe< at >
Categories: Offsite Discussion

Bryn Keller: Mac OS X C++ Development

Planet Haskell - Wed, 04/20/2016 - 6:00pm

I recently switched to a MacBook Pro. I have customers that use Linux and Mac, and I wanted to work in a similar environment. Also recently (a few months before the MacBook) I started working with C++ again after a long hiatus.

I had thought that the Mac, being a Unix, would be relatively close to Linux, and that things I was building for Linux would be much more likely to work there than on Windows. That might still be true, but it turns out that there are several things on Mac that are not obvious, and seriously complicate native code development compared with Linux. These are my notes on those differences and how to deal with them. Hopefully, it may be useful for other migrants to Mac as well.


Xcode

Apple includes something called Xcode. This is apparently a combination of a platform SDK and an IDE with a user interface similar to iTunes. You have to have it, but you don’t have to use the IDE part. It must be installed from the App Store. Don’t fight it, just install it and move on.

Command line tools

You definitely want the Xcode command line tools. Run:

xcode-select --install

to install them. This will give you git as well.


Homebrew

There are actually two package managers for Mac OS X, MacPorts and Homebrew, and as a developer you’ll definitely need one of them. I use brew, because other people I know recommended it, and it’s been nice so far. You need it to install libraries and tools that don’t come with the Mac, most notably gcc, cmake, and so on.

Clang and gcc

Apple ships the clang compiler with Mac OS X, so this is considered the standard compiler for the platform. This means that some libraries (notably Qt) only support building with clang.

Some C/C++ projects assume (incorrectly) that everybody builds with gcc. For this reason (I guess), Apple did a really odd thing: they ship a gcc executable, which is actually clang in disguise:

$ gcc
clang: error: no input files

This (I guess) works sometimes, since many flags work the same in both compilers. However, it is deeply confusing and causes problems as well. For example, gcc supports OpenMP, a powerful parallel computing tool, and crucial for the work I’m doing. Recent versions of clang support it as well, but Apple’s fork of clang that ships with Macs does not. So to use OpenMP, I have to have the real gcc. This will cause other problems down the road, we’ll get to them in a bit.

You’ll want to install gcc with brew:

brew install gcc gdb

Since clang is already masquerading as gcc, the Homebrew folks came up with a workaround - the gcc package installs executables called gcc-5 and g++-5 instead of gcc and g++. I added the following in my profile to encourage build systems to use these compilers instead of clang.

export HOMEBREW_CC=gcc-5
export HOMEBREW_CXX=g++-5
export CC=gcc-5
export CXX=g++-5

Note the Homebrew-specific ones. Homebrew generally installs binary, precompiled packages rather than compiling on your machine, but you can pass --build-from-source to it to make it recompile. If you do that, it will honor the HOMEBREW_CC and HOMEBREW_CXX environment variables and use those to do the build.

I also aliased cmake to ensure that cmake uses gcc-5 and g++-5 by default as well:

alias cmake='/usr/local/bin/cmake -DCMAKE_C_COMPILER=$CC -DCMAKE_CXX_COMPILER=$CXX'

Compatibility

C++, unlike C, doesn’t specify a binary interface standard. This means that libraries compiled with different C++ compilers can have problems interoperating. So there’s that to consider when you use things that were compiled with clang (such as anything you download using brew without recompiling) together with things you’ve been building with g++-5.

The most pressing example of this is related to the C++ standard library. There are, on Mac (and elsewhere too I suppose), at least two implementations: libstdc++, and libc++. By default, clang uses libc++ and gcc-5 uses libstdc++. In practice, this means that if you install a C++ based library with brew, you will be able to compile against it with g++-5, but when you get to the link stage, it will fail with lots of missing symbols. If this happens,

brew reinstall --build-from-source <package>

can often fix the problem. However, there are brew packages (e.g. pkg-config) that will fail to compile under g++-5, so there can be cases where this doesn’t work. One example: I was trying to build mxnet directly using the brew package for OpenCV support, and it failed with the aforementioned link errors. I tried to reinstall opencv with --build-from-source with brew, and it started recompiling all of opencv’s (many) dependencies, including pkg-config, which for some reason fails to compile. So in the end I had to pull opencv as well and build it manually, after which mxnet built fine too.

Next time

These were some important things to be aware of when starting to develop in C++ on Macs. In the next installment, we’ll talk about dynamic libraries, install names, and other such loveliness.

Categories: Offsite Blogs

Types for Data.Sequence pattern synonyms

libraries list - Wed, 04/20/2016 - 5:39pm
As discussed, I plan to add pattern synonyms Empty, :<|, and :|> to make working with sequences more convenient. The remaining question is what types they should have. From the Data.Sequence standpoint, the best thing would be to make them work *only* for Seq, so that the Seq type will be inferred from their use. However, modules using multiple sequence types might benefit from more flexibility, via ad hoc classes. This, however, requires that something else pin down the sequence type, and could cause more confusing error messages. I'm leaning toward the simple, monomorphic approach, but I figured I should ask here in case anyone disagrees strongly. {-# LANGUAGE PatternSynonyms, ScopedTypeVariables, MultiParamTypeClasses, FunctionalDependencies, FlexibleInstances, ViewPatterns #-} module Patt where import Data.Sequence as S import Prelude as P class FastFront x xs | xs -> x where fcons :: x -> xs -> xs fviewl :: xs -> ViewLS x xs class FastRear x xs | xs -> x where fsnoc :: xs -> x -> xs fviewr :
Categories: Offsite Discussion