# News aggregator

### Looking for a new release manager for cabal

### PPDP 2015: 2nd call for papers

### LOPSTR 2015: 2nd Call for Papers

### Roman Cheplyaka: Examples of monads in a dynamic language

In Monads in dynamic languages, I explained what the definition of a monad in a dynamic language should be and concluded that there’s nothing precluding them from existing. But I didn’t give an example either.

So, in case you are still wondering whether non-trivial monads are possible in a dynamic language, here you go. I’ll implement a couple of simple monads — Reader and Maybe — with proofs.

And all that will take place in the ultimate dynamic language — the (extensional) untyped lambda calculus.

The definitions of the Reader and Maybe monads are not anything special; they are the same definitions as you would write, say, in Haskell, except Maybe is Church-encoded.
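For comparison, here is a sketch of those same definitions written in Haskell (the names `returnR`, `bindR`, `returnM`, `bindM`, and `MaybeC` are mine; Reader is a plain function from the environment, and Maybe is Church-encoded as its own eliminator):

```haskell
{-# LANGUAGE RankNTypes #-}

-- Reader: a computation that reads an environment r.
returnR :: a -> (r -> a)
returnR x = \_ -> x

bindR :: (r -> a) -> (a -> r -> b) -> (r -> b)
bindR a k = \r -> k (a r) r

-- Church-encoded Maybe: a value is its own eliminator, taking a
-- "Just" continuation j and a "Nothing" default n.
type MaybeC a = forall b. (a -> b) -> b -> b

returnM :: a -> MaybeC a
returnM x = \j _ -> j x

bindM :: MaybeC a -> (a -> MaybeC b) -> MaybeC b
bindM a k = \j n -> a (\x -> k x j n) n
```

Erasing the types recovers exactly the λ-terms used in the proofs below.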

What I find fascinating about this is that even though the language is untyped, which allows more things to go wrong than in a typed one, the monad laws still hold. You can still write monadic code and reason about it in the untyped lambda calculus just as you would in a typed language.

**Reader**

```
return x = λr.x
a >>= k  = λr.k(ar)r
```

Left identity:

```
return x >>= k
  { inline return }
= λr.x >>= k
  { inline >>= }
= λr.k((λr.x)r)r
  { β-reduce }
= λr.kxr
  { η-reduce }
= kx
```

Right identity:

```
a >>= return
  { inline return }
= a >>= λx.λr.x
  { inline >>= }
= λr.(λx.λr.x)(ar)r
  { β-reduce }
= λr.ar
  { η-reduce }
= a
```

Associativity:

```
a >>= f >>= g
  { inline 1st >>= }
= λr.f(ar)r >>= g
  { inline 2nd >>= }
= λr.g((λr.f(ar)r)r)r
  { β-reduce }
= λr.g(f(ar)r)r

a >>= (λx. f x >>= g)
  { inline 2nd >>= }
= a >>= λx.λr.g((fx)r)r
  { inline 1st >>= }
= λr.(λx.λr.g(fxr)r)(ar)r
  { β-reduce }
= λr.g(f(ar)r)r
```

**Maybe**

```
return x = λj.λn.jx
a >>= k  = λj.λn.a(λx.kxjn)n
```

Left identity:

```
return x >>= k
  { inline return }
= λj.λn.jx >>= k
  { inline >>= }
= λj.λn.(λj.λn.jx)(λx.kxjn)n
  { β-reduce }
= λj.λn.kxjn
  { η-reduce }
= kx
```

Right identity:

```
a >>= return
  { inline return }
= a >>= λx.λj.λn.jx
  { inline >>= }
= λj.λn.a(λx.(λx.λj.λn.jx)xjn)n
  { β-reduce }
= λj.λn.a(λx.jx)n
  { η-reduce }
= λj.λn.ajn
  { η-reduce }
= a
```

Associativity:

```
a >>= f >>= g
  { inline 1st >>= }
= (λj.λn.a(λx.fxjn)n) >>= g
  { inline 2nd >>= }
= λj.λn.(λj.λn.a(λx.fxjn)n)(λx.gxjn)n
  { β-reduce }
= λj.λn.a(λx.fx(λx.gxjn)n)n

a >>= (λx. f x >>= g)
  { inline 2nd >>= }
= a >>= (λx.λj.λn.fx(λx.gxjn)n)
  { inline 1st >>= }
= λj.λn.a(λx.(λx.λj.λn.fx(λx.gxjn)n)xjn)n
  { β-reduce }
= λj.λn.a(λx.fx(λx.gxjn)n)n
```

### Robert Harper: OPLSS 2015 Announcement

The **14th Annual Oregon Programming Languages Summer School** (OPLSS) is to be held **June 15th to 27th, 2015** at the University of Oregon in Eugene.

This year’s program is titled Types, Logic, Semantics, and Verification and features the following speakers:

- Amal Ahmed, Northeastern University
- Nick Benton, Microsoft Cambridge Research Lab
- Adam Chlipala, Massachusetts Institute of Technology
- Robert Constable, Cornell University
- Peter Dybjer, Chalmers University of Technology
- Robert Harper, Carnegie Mellon University
- Ed Morehouse, Carnegie Mellon University
- Greg Morrisett, Harvard University
- Frank Pfenning, Carnegie Mellon University

The registration deadline is **March 16, 2015.**

Full information on registration and scholarships is available at https://www.cs.uoregon.edu/research/summerschool/.

Please address all inquiries to summerschool@cs.uoregon.edu.

Best regards from the OPLSS 2015 organizers!

Robert Harper

Greg Morrisett

Zena Ariola

Filed under: Research, Teaching. Tagged: OPLSS15

### Is working with numeric formulas on Haskell a pain for anyone else?

Since I started using Haskell, I can't ever implement any kind of mathematical formula without having a bad time. For example:

```haskell
bin :: (Num a) => a -> [a]
bin 0 = []
bin 1 = []
bin x = b : bin (x - l / (2 - b))
  where l = 2 ^ floor (log x / log 2)
        b = floor (x * 2 / (l * 3))
```

This is an enumeration of the set of binary strings (0, 1, 00, 01, 10, 11, 100...). Plug this definition into GHC and it will tell you some things you didn't know about your mother:

```
binstr.hs:2:5:
    Could not deduce (Eq a) arising from the literal ‘0’
    from the context (Num a)
      bound by the type signature for bin :: Num a => a -> [a]
      at binstr.hs:1:8-26
    Possible fix:
      add (Eq a) to the context of
        the type signature for bin :: Num a => a -> [a]
    In the pattern: 0
    In an equation for ‘bin’: bin 0 = []

binstr.hs:5:13:
    Could not deduce (RealFrac a) arising from a use of ‘floor’
    from the context (Num a)
      bound by the type signature for bin :: Num a => a -> [a]
      at binstr.hs:1:8-26
    Possible fix:
      add (RealFrac a) to the context of
        the type signature for bin :: Num a => a -> [a]
    In the second argument of ‘(^)’, namely ‘floor (log x / log 2)’
    In the expression: 2 ^ floor (log x / log 2)
    In an equation for ‘l’: l = 2 ^ floor (log x / log 2)

binstr.hs:5:20:
    Could not deduce (Floating a) arising from a use of ‘log’
    from the context (Num a)
      bound by the type signature for bin :: Num a => a -> [a]
      at binstr.hs:1:8-26
    Possible fix:
      add (Floating a) to the context of
        the type signature for bin :: Num a => a -> [a]
    In the first argument of ‘(/)’, namely ‘log x’
    In the first argument of ‘floor’, namely ‘(log x / log 2)’
    In the second argument of ‘(^)’, namely ‘floor (log x / log 2)’

binstr.hs:5:26:
    Could not deduce (Fractional a) arising from a use of ‘/’
    from the context (Num a)
      bound by the type signature for bin :: Num a => a -> [a]
      at binstr.hs:1:8-26
    Possible fix:
      add (Fractional a) to the context of
        the type signature for bin :: Num a => a -> [a]
    In the first argument of ‘floor’, namely ‘(log x / log 2)’
    In the second argument of ‘(^)’, namely ‘floor (log x / log 2)’
    In the expression: 2 ^ floor (log x / log 2)

binstr.hs:6:9:
    Could not deduce (Integral a) arising from a use of ‘floor’
    from the context (Num a)
      bound by the type signature for bin :: Num a => a -> [a]
      at binstr.hs:1:8-26
    Possible fix:
      add (Integral a) to the context of
        the type signature for bin :: Num a => a -> [a]
    In the expression: floor (x * 2 / (l * 3))
    In an equation for ‘b’: b = floor (x * 2 / (l * 3))
    In an equation for ‘bin’:
        bin x = b : bin (x - l / (2 - b))
          where
            l = 2 ^ floor (log x / log 2)
            b = floor (x * 2 / (l * 3))
```

I'm experienced enough to know what it means before even reading it: I probably got some "floating/integral" usage wrong in the pipeline. So what do I do now? **I read the type of every single numeric function I used in order to fix the plumbing.** After at least an hour of playing around, I get this:

Which is not only uglier, but I also needed 2 versions. **Even in C++ I can write a single version that works in a minute.** I honestly can't think of a single programming language in which writing such a function would be any harder. I have a feeling there is something really, really wrong with Haskell's numeric typeclasses. For example: Num, Integral, Integer, RealFrac, Fractional, Enum - at this point, it seems like we are just listing arbitrary properties in a way that is not just confusing, but absolutely meaningless from a mathematical point of view, ugly in a software engineering sense and sometimes plainly wrong. For example, why is "Float" not an instance of "Integral", when, in reality, Floats can absolutely always be used where Integers can?

That is not how numbers work. I want to be able to write my mathematical formulas without having to plumb a lot of conversions all around. I don't want to have a giant list of type constraints. I don't want to think about how Haskell's authors made sense of numeric types. I just want a "Real" type that I can use. The compiler should be able to do the rest. Who designed this scheme? What is the justification behind it?
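For reference, one way to make the original definition typecheck (a sketch, not the poster's actual two-version fix, which isn't shown above) is to give up on polymorphism, fix the type to Double, and convert explicitly at each use of floor; note the logBase trick remains fragile under floating-point rounding for large inputs:

```haskell
-- The original bin, made to compile by fixing a = Double and
-- round-tripping through Integer at each use of floor.
bin :: Double -> [Double]
bin 0 = []
bin 1 = []
bin x = b : bin (x - l / (2 - b))
  where
    l = 2 ^^ (floor (logBase 2 x) :: Integer)
    b = fromIntegral (floor (x * 2 / (l * 3)) :: Integer)
```

With this, `map bin [2..7]` enumerates `[0]`, `[1]`, `[0,0]`, `[0,1]`, `[1,0]`, `[1,1]` (as Doubles).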

submitted by SrPeixinho [link] [47 comments]

### Haskell implementation of longest path algorithm

### Haskell in Artificial Intelligence

Haskell noob here. I am mostly through LYAH. I discovered Xmonad recently and love it. I'm looking to use Haskell in more areas (currently I am solving Project Euler problems in Haskell) and want to find out if Haskell is popular in AI.

Given that Lisp was the language of AI from the 50s through the 90s, and was the first language to support higher-order functions and other functional elements, I imagine the FP gurus in AI must have liked Haskell.

Are there any Haskell books like Paradigms of Artificial Intelligence Programming: Case Studies in Common Lisp (by Peter Norvig)?

submitted by sivapvarma [link] [14 comments]

### Graphics.UI.FLTK.LowLevel.FLTKHS

### b.hatena.ne.jp

### b.hatena.ne.jp

### Need your encouragement

I'm an out-of-work (former) Java developer who reconnected with my love for programming by learning Haskell. I'm using the 'Learn You a Haskell for Great Good' book and it was a slow but very satisfying journey until I hit the chapter on folds and horses etc.

I also bought Hutton's Haskell textbook, and reading it gave me a very good alternative intro (especially compared to LYAH).

But to be honest, the learning curve got steep on me really fast after encountering lambdas, folds... or so it seems. The worst part is that this is feeding into my feeling of worthlessness, and I think I'm no good at anything.

On one hand I think I'm not smart enough for this and should just learn python or something and earn a living to support my family. On the other hand there is the idealist in me that abhors everything about the 10-12 years I spent as a java developer and all its unnecessary, over-engineered complexity (language + ecosystem) that imperative languages seem to share and Haskell so far has been everything that I dreamed about in a prog language.

My wife, who supports my lame ass, asked me to look at Haskell jobs and there is not much. So I feel dispirited and disappointed in myself. What the fuck am I doing with my life... :sigh:

(EDIT: background: I have a master's in CS (database systems) and an undergrad in Civil Engineering. I started as a web dev in ColdFusion, then moved to Java and programmed for a decade until leaving for reasons I would only call disenchantment with my work in general... same CRUD bullshit and lack of respect for developers.)

submitted by vgn [link] [45 comments]

### Yesod: Webprogrammierung in Haskell

### Proposal: Functor and friends for the wrappers in Data.Monoid

### Edward Kmett: Free Monoids in Haskell

It is often stated that Foldable is effectively the toList class. However, this turns out to be wrong. The real fundamental member of Foldable is foldMap (which should look suspiciously like traverse, incidentally). To understand exactly why this is, it helps to understand another surprising fact: lists are not free monoids in Haskell.

This latter fact can be seen relatively easily by considering another list-like type:

```haskell
data SL a = Empty | SL a :> a

instance Monoid (SL a) where
  mempty = Empty
  mappend ys Empty     = ys
  mappend ys (xs :> x) = (mappend ys xs) :> x

single :: a -> SL a
single x = Empty :> x
```

So, we have a type SL a of snoc lists, which are a monoid, and a function that embeds a into SL a. If (ordinary) lists were the free monoid, there would be a unique monoid homomorphism from lists to snoc lists. Such a homomorphism (call it h) would have the following properties:

```haskell
h []         = Empty
h (xs <> ys) = h xs <> h ys
h [x]        = single x
```

And in fact, this (together with some general facts about Haskell functions) should be enough to define h for our purposes (or any purposes, really). So, let's consider its behavior on two values:

```haskell
h [1] = single 1

h [1,1..] = h ([1] <> [1,1..])   -- [1,1..] is an infinite list of 1s
          = h [1] <> h [1,1..]
```

This second equation can tell us what the value of h is at this infinite value, since we can consider it the definition of a possibly infinite value:

```haskell
x = h [1] <> x
  = fix (single 1 <>)

h [1,1..] = x
```

(single 1 <>) is a strict function, so the fixed point theorem tells us that x = ⊥.

This is a problem, though. Considering some additional equations:

```
[1,1..] <> [n] = [1,1..]   -- true for all n

h [1,1..] = ⊥

h ([1,1..] <> [1]) = h [1,1..] <> h [1]
                   = ⊥ <> single 1
                   = ⊥ :> 1
                   ≠ ⊥
```

So, our requirements for h are contradictory, and no such homomorphism can exist.

The issue is that Haskell types are domains. They contain these extra partially defined values and infinite values. The monoid structure on (cons) lists has infinite lists absorbing all right-hand sides, while the snoc lists are just the opposite.

This also means that finite lists (or any method of implementing finite sequences) are not free monoids in Haskell. They, as domains, still contain the additional bottom element, and it absorbs all other elements, which is incorrect behavior for the free monoid:

```
pure x <> ⊥ = ⊥

h ⊥ = ⊥

h (pure x <> ⊥) = [x] <> h ⊥
                = [x] ++ ⊥
                = x:⊥
                ≠ ⊥
```

So, what is the free monoid? In a sense, it can't be written down at all in Haskell, because we cannot enforce value-level equations, and because we don't have quotients. But, if conventions are good enough, there is a way. First, suppose we have a free monoid type FM a. Then for any other monoid m and embedding a -> m, there must be a monoid homomorphism from FM a to m. We can model this as a Haskell type:

```haskell
forall a m. Monoid m => (a -> m) -> FM a -> m
```

Where we consider the Monoid m constraint to be enforcing that m actually has valid monoid structure. Now, a trick is to recognize that this sort of universal property can be used to *define* types in Haskell (or, GHC at least), due to polymorphic types being first class; we just rearrange the arguments and quantifiers, and take FM a to be the polymorphic type:
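Concretely, a presumed reconstruction of that type (it matches the `FM` constructor used in the instances below; `foldFM` is my name for the unwrapping direction, and RankNTypes is required):

```haskell
{-# LANGUAGE RankNTypes #-}

-- FM a internalizes the universal property: a value is the data of
-- its own interpretation into every monoid.
newtype FM a = FM (forall m. Monoid m => (a -> m) -> m)

-- Eliminating into any particular monoid is just unwrapping.
foldFM :: Monoid m => (a -> m) -> FM a -> m
foldFM k (FM e) = e k
```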

Types defined like this are automatically universal in the right sense. [1] The only thing we have to check is that FM a is actually a monoid over a. But that turns out to be easily witnessed:

```haskell
embed :: a -> FM a
embed x = FM $ \k -> k x

instance Monoid (FM a) where
  mempty = FM $ \_ -> mempty
  mappend (FM e1) (FM e2) = FM $ \k -> e1 k <> e2 k
```

Demonstrating that the above is a proper monoid delegates to instances of Monoid being proper monoids. So as long as we trust that convention, we have a free monoid.
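A quick usage sketch, updated for modern GHC (where Monoid requires a Semigroup instance, a split that postdates this post); the value names are mine:

```haskell
{-# LANGUAGE RankNTypes #-}
import Data.Monoid (Sum (..))

newtype FM a = FM (forall m. Monoid m => (a -> m) -> m)

embed :: a -> FM a
embed x = FM $ \k -> k x

instance Semigroup (FM a) where
  FM e1 <> FM e2 = FM $ \k -> e1 k <> e2 k

instance Monoid (FM a) where
  mempty = FM $ \_ -> mempty

-- One free-monoid value, interpreted into two different monoids:
sample :: FM Int
sample = embed 1 <> embed 2 <> embed 3

asList :: [Int]
asList = case sample of FM e -> e (: [])

asSum :: Int
asSum = case sample of FM e -> getSum (e Sum)
```

Here asList is [1,2,3] and asSum is 6: the same value, consumed through two different monoid structures.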

However, one might wonder what a free monoid would look like as something closer to a traditional data type. To construct that, first ignore the required equations, and consider only the generators; we get:

```haskell
data FMG a = None | Single a | FMG a :<> FMG a
```

```
None :<> x = x = x :<> None
x :<> (y :<> z) = (x :<> y) :<> z
```

One way of mimicking this in Haskell is to hide the implementation in a module, and only allow elimination into Monoids (again, using the convention that Monoid ensures actual monoid structure) using the function:

```haskell
unFMG :: forall a m. Monoid m => FMG a -> (a -> m) -> m
unFMG None       _ = mempty
unFMG (Single x) k = k x
unFMG (x :<> y)  k = unFMG x k <> unFMG y k
```

This is actually how quotients can be thought of in richer languages; the quotient does not eliminate any of the generated structure internally, it just restricts the way in which the values can be consumed. Those richer languages just allow us to prove equations, and enforce properties by proof obligations, rather than conventions and structure hiding. Also, one should note that the above should look pretty similar to our encoding of FM a using universal quantification earlier.
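A small check (value names mine) that unFMG makes finite reassociation unobservable, which is exactly the quotient being enforced by restricted consumption:

```haskell
data FMG a = None | Single a | FMG a :<> FMG a

unFMG :: Monoid m => FMG a -> (a -> m) -> m
unFMG None       _ = mempty
unFMG (Single x) k = k x
unFMG (x :<> y)  k = unFMG x k <> unFMG y k

-- Two generator trees that the quotient identifies:
leftNested, rightNested :: FMG Int
leftNested  = (Single 1 :<> Single 2) :<> Single 3
rightNested = Single 1 :<> (Single 2 :<> Single 3)
```

Eliminating either value into lists with `(: [])` yields [1,2,3], even though the trees differ.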

Now, one might look at the above and have some objections. For one, we'd normally think that the quotient of the above type is just [a]. Second, it seems like the type is revealing something about the associativity of the operations, because defining recursive values via left nesting is different from right nesting, and this difference is observable by extracting into different monoids. But aren't monoids supposed to remove associativity as a concern? For instance:

```haskell
ones1 = embed 1 <> ones1
ones2 = ones2 <> embed 1
```

Shouldn't we be able to prove these are the same, because of an argument like:

```
ones1 = embed 1 <> (embed 1 <> ...)
  ... reassociate ...
      = (... <> embed 1) <> embed 1
      = ones2
```

The answer is that the equation we have only specifies the behavior of associating three values:

```haskell
x <> (y <> z) = (x <> y) <> z
```

And while this is sufficient to nail down the behavior of *finite* values, and *finitary* reassociating, it does not tell us that *infinitary* reassociating yields the same value back. And the "... reassociate ..." step in the argument above was decidedly infinitary. And while the rules tell us that we can peel any finite number of copies of embed 1 to the front of ones1 or the end of ones2, they do not tell us that ones1 = ones2. And in fact it is vital for FM a to have distinct values for these two things; it is what makes it the free monoid when we're dealing with domains of lazy values.

Finally, we can come back to Foldable. If we look at foldMap:

```haskell
foldMap :: (Foldable f, Monoid m) => (a -> m) -> f a -> m
```

we can rearrange things a bit, and get the type:

```haskell
Foldable f => f a -> (forall m. Monoid m => (a -> m) -> m)
```

And thus, the most fundamental operation of Foldable is not toList, but toFreeMonoid, and lists are not free monoids in Haskell.
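Spelling that rearrangement out as code (a sketch; toFreeMonoid is not a library function, and FM is the universally quantified free-monoid type, redefined here to keep the snippet self-contained):

```haskell
{-# LANGUAGE RankNTypes #-}

newtype FM a = FM (forall m. Monoid m => (a -> m) -> m)

-- foldMap with its arguments flipped: every Foldable container
-- maps into the free monoid over its elements.
toFreeMonoid :: Foldable f => f a -> FM a
toFreeMonoid xs = FM $ \k -> foldMap k xs
```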

[1]: What we are doing here is noting that (co)limits are objects that internalize natural transformations, but the natural transformations expressible by quantification in GHC are already automatically internalized using quantifiers. However, one has to be careful that the quantifiers are actually enforcing the relevant naturality conditions. In many simple cases they are.