News aggregator

Looking for a new release manager for cabal

haskell-cafe - Sun, 02/22/2015 - 3:01pm
(bcc: haskell-cafe) Hi, After about 3 years of cabal releases I'm looking for someone else to take over the responsibility of cabal release manager. As release manager I try to keep on top of pull requests, make releases, and make sure bugs get triaged and bugfixes get released. Anyone interested?
Categories: Offsite Discussion

PPDP 2015: 2nd call for papers

General haskell list - Sun, 02/22/2015 - 2:16pm
======================================================================
Call for papers
17th International Symposium on Principles and Practice of Declarative Programming
PPDP 2015
Special Issue of Science of Computer Programming (SCP)
Siena, Italy, July 14-16, 2015 (co-located with LOPSTR 2015)
======================================================================

SUBMISSION DEADLINE: 20 MARCH, 2015

PPDP 2015 is a forum that brings together researchers from the declarative programming communities, including those working in the logic, constraint and functional programming paradigms, but also embracing languages, database languages, and knowledge representation languages. The goal is to stimulate research in the use of logical formalisms and methods for specifying, performing, and analyzing computations, including mechanisms for mobility, modularity, concurrency, object-orientation, security,
Categories: Incoming News

LOPSTR 2015: 2nd Call for Papers

General haskell list - Sun, 02/22/2015 - 2:13pm
============================================================
25th International Symposium on Logic-Based Program Synthesis and Transformation
LOPSTR 2015
Special Issue of Formal Aspects of Computing
University of Siena, Siena, IT, July 13-15, 2015 (co-located with PPDP 2015)

DEADLINES
Abstract submission: April 6, 2015
Paper/Extended abstract submission: April 13, 2015
============================================================

The aim of the LOPSTR series is to stimulate and promote international research and collaboration on logic-based program development. LOPSTR is open to contributions in logic-based program development in any language paradigm. LOPSTR has a reputation for being a lively, friendly forum for presenting and discussing work in progress. Formal proceedings are produced only after the symposium so that authors can incorporate this feedback
Categories: Incoming News

Roman Cheplyaka: Examples of monads in a dynamic language

Planet Haskell - Sun, 02/22/2015 - 2:00pm

In Monads in dynamic languages, I explained what the definition of a monad in a dynamic language should be and concluded that there’s nothing precluding them from existing. But I didn’t give an example either.

So, in case you are still wondering whether non-trivial monads are possible in a dynamic language, here you go. I’ll implement a couple of simple monads — Reader and Maybe — with proofs.

And all that will take place in the ultimate dynamic language — the (extensional) untyped lambda calculus.

The definitions of the Reader and Maybe monads are not anything special; they are the same definitions as you would write, say, in Haskell, except Maybe is Church-encoded.

What I find fascinating about this is that, even though the language is untyped, which allows more things to go wrong than in a typed one, the monad laws still hold. You can still write monadic code and reason about it in the untyped lambda calculus the same way you would in a typed language.

Reader

  return x = λr.x
  a >>= k  = λr.k(ar)r

Left identity

  return x >>= k
    { inline return }
  = λr.x >>= k
    { inline >>= }
  = λr.k((λr.x)r)r
    { β-reduce }
  = λr.kxr
    { η-reduce }
  = kx

Right identity

  a >>= return
    { inline return }
  = a >>= λx.λr.x
    { inline >>= }
  = λr.(λx.λr.x)(ar)r
    { β-reduce }
  = λr.ar
    { η-reduce }
  = a

Associativity

  a >>= f >>= g
    { inline 1st >>= }
  = λr.f(ar)r >>= g
    { inline 2nd >>= }
  = λr.g((λr.f(ar)r)r)r
    { β-reduce }
  = λr.g(f(ar)r)r

  a >>= (λx. f x >>= g)
    { inline 2nd >>= }
  = a >>= λx.λr.g(fxr)r
    { inline 1st >>= }
  = λr.(λx.λr.g(fxr)r)(ar)r
    { β-reduce }
  = λr.g(f(ar)r)r

Maybe

  return x = λj.λn.jx
  a >>= k  = λj.λn.a(λx.kxjn)n

Left identity

  return x >>= k
    { inline return }
  = λj.λn.jx >>= k
    { inline >>= }
  = λj.λn.(λj.λn.jx)(λx.kxjn)n
    { β-reduce }
  = λj.λn.kxjn
    { η-reduce }
  = kx

Right identity

  a >>= return
    { inline return }
  = a >>= λx.λj.λn.jx
    { inline >>= }
  = λj.λn.a(λx.(λx.λj.λn.jx)xjn)n
    { β-reduce }
  = λj.λn.a(λx.jx)n
    { η-reduce }
  = λj.λn.ajn
    { η-reduce }
  = a

Associativity

  a >>= f >>= g
    { inline 1st >>= }
  = (λj.λn.a(λx.fxjn)n) >>= g
    { inline 2nd >>= }
  = λj.λn.(λj.λn.a(λx.fxjn)n)(λx.gxjn)n
    { β-reduce }
  = λj.λn.a(λx.fx(λx.gxjn)n)n

  a >>= (λx. f x >>= g)
    { inline 2nd >>= }
  = a >>= (λx.λj.λn.fx(λx.gxjn)n)
    { inline 1st >>= }
  = λj.λn.a(λx.(λx.λj.λn.fx(λx.gxjn)n)xjn)n
    { β-reduce }
  = λj.λn.a(λx.fx(λx.gxjn)n)n
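The Church-encoded Maybe above transliterates almost directly into Haskell, which makes the definitions easy to play with. The following is a sketch of mine, not from the post; the names M, ret, bind, nothing, and run are made up for illustration:

```haskell
{-# LANGUAGE RankNTypes #-}

-- Church-encoded Maybe: a value is its own eliminator, taking a
-- "just" continuation and a "nothing" default.
type M a = forall r. (a -> r) -> r -> r

nothing :: M a
nothing = \_ n -> n

-- return x = λj.λn.jx
ret :: a -> M a
ret x = \j _ -> j x

-- a >>= k = λj.λn.a(λx.kxjn)n
bind :: M a -> (a -> M b) -> M b
bind a k = \j n -> a (\x -> k x j n) n

-- Convert back to the ordinary Maybe to observe results.
run :: M a -> Maybe a
run m = m Just Nothing
```

For example, `run (bind (ret 5) (\x -> ret (x + 1)))` evaluates to `Just 6`, while binding through `nothing` short-circuits to `Nothing`, exactly as the λ-calculus definitions predict.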
Categories: Offsite Blogs

Robert Harper: OPLSS 2015 Announcement

Planet Haskell - Sun, 02/22/2015 - 1:40pm
We are pleased to announce the preliminary program for the 14th Annual Oregon Programming Languages Summer School (OPLSS) to be held June 15th to 27th, 2015 at the University of Oregon in Eugene.

This year’s program is titled Types, Logic, Semantics, and Verification and features the following speakers:

  • Amal Ahmed, Northeastern University
  • Nick Benton, Microsoft Cambridge Research Lab
  • Adam Chlipala, Massachusetts Institute of Technology
  • Robert Constable, Cornell University
  • Peter Dybjer, Chalmers University of Technology
  • Robert Harper, Carnegie Mellon University
  • Ed Morehouse, Carnegie Mellon University
  • Greg Morrisett, Harvard University
  • Frank Pfenning, Carnegie Mellon University

The registration deadline is March 16, 2015.

Full information on registration and scholarships is available at

Please address all inquiries to

Best regards from the OPLSS 2015 organizers!

Robert Harper

Greg Morrisett

Zena Ariola

Filed under: Research, Teaching Tagged: OPLSS15
Categories: Offsite Blogs

Is working with numeric formulas on Haskell a pain for anyone else?

Haskell on Reddit - Sun, 02/22/2015 - 12:47pm

Since I started using Haskell, I can't ever implement any kind of mathematical formula without having a bad time. For example:

bin :: (Num a) => a -> [a]
bin 0 = []
bin 1 = []
bin x = b : bin (x - l / (2 - b))
  where l = 2 ^ floor (log x / log 2)
        b = floor (x * 2 / (l * 3))

This is an enumeration of the set of binary strings (0, 1, 00, 01, 10, 11, 100...). Plug this definition into GHC and it will tell you some things you didn't know about your mother:

binstr.hs:2:5:
    Could not deduce (Eq a) arising from the literal ‘0’
    from the context (Num a)
      bound by the type signature for bin :: Num a => a -> [a]
      at binstr.hs:1:8-26
    Possible fix:
      add (Eq a) to the context of
        the type signature for bin :: Num a => a -> [a]
    In the pattern: 0
    In an equation for ‘bin’: bin 0 = []

binstr.hs:5:13:
    Could not deduce (RealFrac a) arising from a use of ‘floor’
    from the context (Num a)
      bound by the type signature for bin :: Num a => a -> [a]
      at binstr.hs:1:8-26
    Possible fix:
      add (RealFrac a) to the context of
        the type signature for bin :: Num a => a -> [a]
    In the second argument of ‘(^)’, namely ‘floor (log x / log 2)’
    In the expression: 2 ^ floor (log x / log 2)
    In an equation for ‘l’: l = 2 ^ floor (log x / log 2)

binstr.hs:5:20:
    Could not deduce (Floating a) arising from a use of ‘log’
    from the context (Num a)
      bound by the type signature for bin :: Num a => a -> [a]
      at binstr.hs:1:8-26
    Possible fix:
      add (Floating a) to the context of
        the type signature for bin :: Num a => a -> [a]
    In the first argument of ‘(/)’, namely ‘log x’
    In the first argument of ‘floor’, namely ‘(log x / log 2)’
    In the second argument of ‘(^)’, namely ‘floor (log x / log 2)’

binstr.hs:5:26:
    Could not deduce (Fractional a) arising from a use of ‘/’
    from the context (Num a)
      bound by the type signature for bin :: Num a => a -> [a]
      at binstr.hs:1:8-26
    Possible fix:
      add (Fractional a) to the context of
        the type signature for bin :: Num a => a -> [a]
    In the first argument of ‘floor’, namely ‘(log x / log 2)’
    In the second argument of ‘(^)’, namely ‘floor (log x / log 2)’
    In the expression: 2 ^ floor (log x / log 2)

binstr.hs:6:9:
    Could not deduce (Integral a) arising from a use of ‘floor’
    from the context (Num a)
      bound by the type signature for bin :: Num a => a -> [a]
      at binstr.hs:1:8-26
    Possible fix:
      add (Integral a) to the context of
        the type signature for bin :: Num a => a -> [a]
    In the expression: floor (x * 2 / (l * 3))
    In an equation for ‘b’: b = floor (x * 2 / (l * 3))
    In an equation for ‘bin’:
        bin x = b : bin (x - l / (2 - b))
          where
            l = 2 ^ floor (log x / log 2)
            b = floor (x * 2 / (l * 3))

I'm experienced enough to know what it means before even reading it: I probably got some "floating/integral" usage wrong in the pipeline. So what do I do now? I read the type of every single numeric function I used in order to fix the plumbing. After at least an hour of playing around, I get this:

bin :: (Integral a) => a -> [a]
bin 0 = []
bin 1 = []
bin x = b : bin (x - (div l (2 - b)))
  where l = 2 ^ floor (logBase 2 (fromIntegral x))
        b = div (x * 2) (l * 3)

fbin :: (RealFrac a, Floating a) => a -> [a]
fbin 0 = []
fbin 1 = []
fbin x = b : fbin (x - l / (2 - b))
  where l = 2 ^ floor (log x / log 2)
        b = flr (x * 2 / (l * 3))
        flr = fromIntegral . floor

Which is not only uglier, but I also needed 2 versions. Even in C++ I can write a single version that works in a minute. I honestly can't think of a single programming language where writing such a function would be any harder. I have a feeling there is something really, really wrong with Haskell's numeric typeclasses. For example: Num, Integral, Integer, RealFrac, Fractional, Enum - at this point, it seems like we are just listing arbitrary properties in a way that is not just confusing, but absolutely meaningless from a mathematical point of view, ugly in a software engineering sense, and sometimes plainly wrong. For example, why is "Float" not an instance of "Integral", when, in reality, Floats can absolutely always be used where Integers can?

That is not how numbers work. I want to be able to write my mathematical formulas without having to plumb a lot of conversions all around. I don't want to have a gigantic list of type constraints. I don't want to think about how Haskell's authors made sense of numeric types. I just want a "Real" type that I can use. The compiler should be able to do the rest. Who designed that scheme? What is the justification behind it?
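For what it's worth, the Integral version doesn't actually need a floating-point detour at all: the leading power of two can be computed with plain integer arithmetic. This is a sketch of that alternative, not something from the thread, and binI is a made-up name:

```haskell
-- Purely Integral variant of the poster's bin: no Floating, no
-- fromIntegral. The leading power of two l is found by doubling
-- instead of via 2 ^ floor (logBase 2 x).
binI :: Integral a => a -> [a]
binI 0 = []
binI 1 = []
binI x = b : binI (x - div l (2 - b))
  where
    -- Largest power of two that is <= x.
    l = last (takeWhile (<= x) (iterate (* 2) 1))
    b = div (x * 2) (l * 3)
```

It produces the same enumeration as the floating-point version (e.g. `binI 5 == [0,1]`, `binI 7 == [1,1]`) and sidesteps the Floating/RealFrac constraints entirely.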

submitted by SrPeixinho
[link] [47 comments]
Categories: Incoming News

Haskell implementation of longest path algorithm

haskell-cafe - Sun, 02/22/2015 - 11:34am
Compared to the Nim code [] for a longest path algorithm, Haskell [] looks horrendously verbose and ugly, even if you ignore the pragmas and imports. Is this idiomatic Haskell style? Could it be clearer, but has to be written that way for performance - although it still takes 3.7x as long as the Nim version []? I posted this to the beginners group last week but got no reply, so I'm trying again here.
Categories: Offsite Discussion

Haskell in Artificial Intelligence

Haskell on Reddit - Sun, 02/22/2015 - 4:43am

Haskell noob here. I am most of the way through LYAH. I discovered Xmonad recently and love it. I'm looking to use Haskell in more areas (currently I am solving Project Euler problems in Haskell) and want to find out: is Haskell popular in AI?

Given that Lisp was the language of AI from the 50s through the 90s, and was the first language to support higher-order functions and other functional elements, I imagine the FP gurus in AI must have liked Haskell.

Are there any Haskell books like Paradigms of Artificial Intelligence Programming: Case Studies in Common Lisp (by Peter Norvig)?

submitted by sivapvarma
[link] [14 comments]
Categories: Incoming News

Graphics.UI.FLTK.LowLevel.FLTKHS - Sun, 02/22/2015 - 2:47am
Categories: Offsite Blogs

Need your encouragement

Haskell on Reddit - Sat, 02/21/2015 - 9:41pm

I'm an out-of-work (former) Java developer who reconnected with my love for programming by learning Haskell. I'm using the 'Learn You a Haskell for Great Good' book and it was a slow but very satisfying journey until I hit the chapter on folds and horses etc.

I also bought Hutton's Haskell textbook, and reading it gave me a very good alternative introduction (especially compared to LYAH).

But to be honest, the learning curve got steep on me really fast after encountering lambdas, folds... or so it seems. The worst part is that this is feeding into my feeling of worthlessness, and I think I'm no good at anything.

On one hand I think I'm not smart enough for this and should just learn python or something and earn a living to support my family. On the other hand there is the idealist in me that abhors everything about the 10-12 years I spent as a java developer and all its unnecessary, over-engineered complexity (language + ecosystem) that imperative languages seem to share and Haskell so far has been everything that I dreamed about in a prog language.

My wife, who supports my lame ass, asked me to look at Haskell jobs and there is not much. So I feel dispirited and disappointed in myself. What the fuck am I doing with my life... :sigh:

(EDIT: background: I have a master's in CS (database systems) and an undergrad in Civil Engg. I started as a web dev in ColdFusion, then moved to Java and programmed for a decade until leaving for reasons I would only call disenchantment with my work in general... same CRUD bullshit and lack of respect for developers)

submitted by vgn
[link] [45 comments]
Categories: Incoming News

Proposal: Functor and friends for the wrappers in Data.Monoid

libraries list - Sat, 02/21/2015 - 5:34pm
I propose to add Functor, Applicative, Monad, Foldable, Traversable, and maybe even MonadFix instances to the wrapper newtypes in the Data.Monoid module, the same way as in the semigroups package. Basically:

  instance Functor Sum where
    fmap f (Sum x) = Sum (f x)

  instance Foldable Sum where
    foldMap f (Sum a) = f a

  instance Traversable Sum where
    traverse f (Sum a) = Sum <$> f a

  instance Applicative Sum where
    pure = Sum
    a <* _ = a
    _ *> a = a
    Sum f <*> Sum x = Sum (f x)

  instance Monad Sum where
    return = Sum
    _ >> a = a
    Sum a >>= f = f a

  instance MonadFix Sum where
    mfix f = fix (f . getSum)
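For the record, the proposed instances compile as-is against a locally defined copy of the wrapper. In this sketch Sum' is a stand-in newtype, not Data.Monoid's Sum, and return delegates to pure as modern GHC requires:

```haskell
import Control.Monad.Fix (MonadFix (..))
import Data.Function (fix)

-- Local stand-in for Data.Monoid's Sum wrapper.
newtype Sum' a = Sum' { getSum' :: a } deriving (Eq, Show)

instance Functor Sum' where
  fmap f (Sum' x) = Sum' (f x)

instance Foldable Sum' where
  foldMap f (Sum' a) = f a

instance Traversable Sum' where
  traverse f (Sum' a) = Sum' <$> f a

instance Applicative Sum' where
  pure = Sum'
  a <* _ = a
  _ *> a = a
  Sum' f <*> Sum' x = Sum' (f x)

instance Monad Sum' where
  return = pure          -- proposal wrote `return = Sum`
  _ >> a = a
  Sum' a >>= f = f a

instance MonadFix Sum' where
  mfix f = fix (f . getSum')
```

With these, a single-element wrapper behaves like Identity: `fmap (+1) (Sum' 1)` gives `Sum' 2`, and `foldMap`/`traverse` visit the one stored value.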
Categories: Offsite Discussion

Edward Kmett: Free Monoids in Haskell

Planet Haskell - Sat, 02/21/2015 - 5:08pm

It is often stated that Foldable is effectively the toList class. However, this turns out to be wrong. The real fundamental member of Foldable is foldMap (which should look suspiciously like traverse, incidentally). To understand exactly why this is, it helps to understand another surprising fact: lists are not free monoids in Haskell.

This latter fact can be seen relatively easily by considering another list-like type:

  data SL a = Empty | SL a :> a

  instance Monoid (SL a) where
    mempty = Empty
    mappend ys Empty = ys
    mappend ys (xs :> x) = (mappend ys xs) :> x

  single :: a -> SL a
  single x = Empty :> x

So, we have a type SL a of snoc lists, which are a monoid, and a function that embeds a into SL a. If (ordinary) lists were the free monoid, there would be a unique monoid homomorphism from lists to snoc lists. Such a homomorphism (call it h) would have the following properties:

  h [] = Empty
  h (xs <> ys) = h xs <> h ys
  h [x] = single x

And in fact, this (together with some general facts about Haskell functions) should be enough to define h for our purposes (or any purposes, really). So, let's consider its behavior on two values:

  h [1] = single 1

  h [1,1..]                 -- [1,1..] is an infinite list of 1s
    = h ([1] <> [1,1..])
    = h [1] <> h [1,1..]

This second equation can tell us what the value of h is at this infinite value, since we can consider it the definition of a possibly infinite value:

  x = h [1] <> x
    = fix (single 1 <>)

  h [1,1..] = x

(single 1 <>) is a strict function, so the fixed point theorem tells us that x = ⊥.

This is a problem, though. Considering some additional equations:

  [1,1..] <> [n] = [1,1..]  -- true for all n

  h [1,1..] = ⊥

  h ([1,1..] <> [1])
    = h [1,1..] <> h [1]
    = ⊥ <> single 1
    = ⊥ :> 1
    ≠ ⊥

So, our requirements for h are contradictory, and no such homomorphism can exist.

The issue is that Haskell types are domains. They contain these extra partially defined values and infinite values. The monoid structure on (cons) lists has infinite lists absorbing all right-hand sides, while the snoc lists are just the opposite.
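Both halves of that picture are easy to observe. On finite lists the required homomorphism does exist (it is just a left fold), and the cons-side absorption by infinite lists is visible in GHCi. This sketch is mine: the Semigroup instance, the Eq/Show deriving, and the names h and absorbed are additions for testing, not from the post:

```haskell
data SL a = Empty | SL a :> a deriving (Eq, Show)

instance Semigroup (SL a) where
  ys <> Empty     = ys
  ys <> (xs :> x) = (ys <> xs) :> x

instance Monoid (SL a) where
  mempty = Empty

single :: a -> SL a
single x = Empty :> x

-- On *finite* lists, a monoid homomorphism into snoc lists exists:
h :: [a] -> SL a
h = foldl (:>) Empty

-- For cons lists, an infinite left operand absorbs the right one;
-- take only serves to make the result finite and printable.
absorbed :: [Int]
absorbed = take 5 (repeat 1 <> [2])
```

Here `h (xs ++ ys) == h xs <> h ys` holds for any finite xs and ys, while `absorbed` never shows the 2: the right-hand `[2]` of `repeat 1 <> [2]` is unreachable, which is exactly the absorption the post describes.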

This also means that finite lists (or any method of implementing finite sequences) are not free monoids in Haskell. They, as domains, still contain the additional bottom element, and it absorbs all other elements, which is incorrect behavior for the free monoid:

  pure x <> ⊥ = ⊥

  h ⊥ = ⊥

  h (pure x <> ⊥)
    = [x] <> h ⊥
    = [x] ++ ⊥
    = x:⊥
    ≠ ⊥

So, what is the free monoid? In a sense, it can't be written down at all in Haskell, because we cannot enforce value-level equations, and because we don't have quotients. But, if conventions are good enough, there is a way. First, suppose we have a free monoid type FM a. Then for any other monoid m and embedding a -> m, there must be a monoid homomorphism from FM a to m. We can model this as a Haskell type:

  forall a m. Monoid m => (a -> m) -> FM a -> m  

Where we consider the Monoid m constraint to be enforcing that m actually has valid monoid structure. Now, a trick is to recognize that this sort of universal property can be used to define types in Haskell (or, GHC at least), due to polymorphic types being first class; we just rearrange the arguments and quantifiers, and take FM a to be the polymorphic type:

  newtype FM a = FM { unFM :: forall m. Monoid m => (a -> m) -> m }  

Types defined like this are automatically universal in the right sense. [1] The only thing we have to check is that FM a is actually a monoid over a. But that turns out to be easily witnessed:

  embed :: a -> FM a
  embed x = FM $ \k -> k x

  instance Monoid (FM a) where
    mempty = FM $ \_ -> mempty
    mappend (FM e1) (FM e2) = FM $ \k -> e1 k <> e2 k

Demonstrating that the above is a proper monoid delegates to instances of Monoid being proper monoids. So as long as we trust that convention, we have a free monoid.
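As a quick sanity check, the encoding really does eliminate into arbitrary monoids. The sketch below is mine, not from the post: it adds the Semigroup instance that modern GHC requires, and the eliminations toListFM and sumFM are made-up example names:

```haskell
{-# LANGUAGE RankNTypes #-}

import Data.Monoid (Sum (..))

newtype FM a = FM { unFM :: forall m. Monoid m => (a -> m) -> m }

embed :: a -> FM a
embed x = FM $ \k -> k x

instance Semigroup (FM a) where
  FM e1 <> FM e2 = FM $ \k -> e1 k <> e2 k

instance Monoid (FM a) where
  mempty = FM $ \_ -> mempty

-- Eliminate into a concrete monoid by choosing an embedding a -> m.
toListFM :: FM a -> [a]
toListFM (FM e) = e (: [])

sumFM :: Num a => FM a -> a
sumFM (FM e) = getSum (e Sum)
```

The same value `embed 1 <> embed 2 <> embed 3` extracts to `[1,2,3]` under the list monoid and to `6` under the additive monoid; the universal property is just the choice of `k`.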

However, one might wonder what a free monoid would look like as something closer to a traditional data type. To construct that, first ignore the required equations, and consider only the generators; we get:

  data FMG a = None | Single a | FMG a :<> FMG a  

Now, the proper FM a is the quotient of this by the equations:

  None :<> x = x = x :<> None
  x :<> (y :<> z) = (x :<> y) :<> z

One way of mimicking this in Haskell is to hide the implementation in a module, and only allow elimination into Monoids (again, using the convention that Monoid ensures actual monoid structure) using the function:

  unFMG :: forall a m. Monoid m => FMG a -> (a -> m) -> m
  unFMG None _ = mempty
  unFMG (Single x) k = k x
  unFMG (x :<> y) k = unFMG x k <> unFMG y k

This is actually how quotients can be thought of in richer languages; the quotient does not eliminate any of the generated structure internally, it just restricts the way in which the values can be consumed. Those richer languages just allow us to prove equations, and enforce properties by proof obligations, rather than conventions and structure hiding. Also, one should note that the above should look pretty similar to our encoding of FM a using universal quantification earlier.

Now, one might look at the above and have some objections. For one, we'd normally think that the quotient of the above type is just [a]. Second, it seems like the type is revealing something about the associativity of the operations, because defining recursive values via left nesting is different from right nesting, and this difference is observable by extracting into different monoids. But aren't monoids supposed to remove associativity as a concern? For instance:

  ones1 = embed 1 <> ones1
  ones2 = ones2 <> embed 1

Shouldn't we be able to prove these are the same, because of an argument like:

  ones1 = embed 1 <> (embed 1 <> ...)
    ... reassociate ...
        = (... <> embed 1) <> embed 1
        = ones2

The answer is that the equation we have only specifies the behavior of associating three values:

  x <> (y <> z) = (x <> y) <> z  

And while this is sufficient to nail down the behavior of finite values, and finitary reassociating, it does not tell us that infinitary reassociating yields the same value back. And the "... reassociate ..." step in the argument above was decidedly infinitary. And while the rules tell us that we can peel any finite number of copies of embed 1 to the front of ones1 or the end of ones2, it does not tell us that ones1 = ones2. And in fact it is vital for FM a to have distinct values for these two things; it is what makes it the free monoid when we're dealing with domains of lazy values.

Finally, we can come back to Foldable. If we look at foldMap:

  foldMap :: (Foldable f, Monoid m) => (a -> m) -> f a -> m  

we can rearrange things a bit, and get the type:

  Foldable f => f a -> (forall m. Monoid m => (a -> m) -> m)  

And thus, the most fundamental operation of Foldable is not toList, but toFreeMonoid, and lists are not free monoids in Haskell.
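To make that last point concrete, here is a small sketch of the rearrangement; toFreeMonoid is my name for it, and the FM definition is repeated from the post (with the Semigroup instance modern GHC requires) so the block stands alone:

```haskell
{-# LANGUAGE RankNTypes #-}

newtype FM a = FM { unFM :: forall m. Monoid m => (a -> m) -> m }

instance Semigroup (FM a) where
  FM e1 <> FM e2 = FM $ \k -> e1 k <> e2 k

instance Monoid (FM a) where
  mempty = FM $ \_ -> mempty

-- foldMap with its arguments flipped: every Foldable maps into
-- the free monoid, and toList is just one particular elimination.
toFreeMonoid :: Foldable f => f a -> FM a
toFreeMonoid xs = FM $ \k -> foldMap k xs
```

Choosing the list monoid as the eliminator, `unFM (toFreeMonoid xs) (: [])`, recovers toList, but any other monoid works equally well, which is the sense in which foldMap, not toList, is fundamental.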

[1]: What we are doing here is noting that (co)limits are objects that internalize natural transformations, but the natural transformations expressible by quantification in GHC are already automatically internalized using quantifiers. However, one has to be careful that the quantifiers are actually enforcing the relevant naturality conditions. In many simple cases they are.

Categories: Offsite Blogs

Munich Haskell Meeting, 2015-02-24 < at > 19:30

haskell-cafe - Sat, 02/21/2015 - 4:43pm
Dear all, Next week, our monthly Munich Haskell Meeting will take place again on Tuesday, February 24 at Cafe Puck at 19h30. For details see here: If you plan to join, please add yourself to this dudle so we can reserve enough seats! It is OK to add yourself to the dudle anonymously or pseudonymously. Everybody is welcome! cu,
Categories: Offsite Discussion