News aggregator

Yesod Web Framework: Proposal: Changes to the PVP

Planet Haskell - Tue, 04/08/2014 - 9:55am

As I mentioned two posts ago, there was a serious discussion on the libraries mailing list about the Package Versioning Policy (PVP).

This blog post presents some concrete changes I'd like to see to the PVP to make it better for both general consumers of Hackage, and for library authors as well. I'll start off with a summary of the changes, and then give the explanations:

  1. The goal of the PVP needs to be clarified. Its purpose is not to ensure reproducible builds of non-published software, but rather to provide for more reliable builds of libraries on Hackage. Reproducible builds should be handled exclusively through version freezing, the only known technique to actually give the necessary guarantees.

  2. Upper bounds should not be included on non-upgradeable packages, such as base and template-haskell (are there others?). Alternatively, we should establish some accepted upper bound on these packages, e.g. many people place base < 5 on their code.

  3. We should be distinguishing between mostly-stable packages and unstable packages. For a package like text, if you simply import Data.Text (Text, pack, reverse), or some other sane subset, there's no need for upper bounds.

    Note that this doesn't provide a hard-and-fast rule like the current PVP, but is rather a matter of discretion. Communication between library authors and users (via documentation or other means) would be vital to making this work well.

  4. For a package version A.B.C, a bump in A or B indicates some level of breaking change. As an opt-in approach, package authors are free to associate meaning with A and B beyond what the PVP requires. Libraries which use these packages are free to rely on the guarantees provided by package authors when placing upper bounds.

    Note that this is very related to point (3).
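
As a concrete illustration of point (2), the conventional bound looks like this in a library's .cabal file (the stanza fragment is made up for illustration):

```cabal
build-depends: base >= 4 && < 5
             , template-haskell
```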

While I (Michael Snoyman) am the author of this proposal, the following people have reviewed the proposal and support it:

  • Bryan O'Sullivan
  • Felipe Lessa
  • Roman Cheplyaka
  • Vincent Hanquez
Reproducible builds

There are a number of simple cases that can result in PVP-compliant code not being buildable. These aren't just hypothetical cases; in my experience as both a package author and Stackage maintainer, I've seen these come up.

  • Package foo version 1.0 provides an instance for MonadFoo for IO and Identity. Version 1.1 removes the IO instance for some reason. Package bar provides a function:

    bar :: MonadFoo m => Int -> m Double

    Package bar compiles with both version 1.0 and 1.1 of foo, and therefore (following the PVP) adds the constraint foo >= 1.0 && < 1.2 to its cabal file.

    Now a user decides to use the bar package. The user never imports anything from foo, and therefore has no listing for foo in the cabal file. The user code depends on the IO instance for MonadFoo. When compiled with foo 1.0, everything works fine. However, when compiled with foo 1.1, the code no longer compiles.

  • Similarly, instead of typeclass instances, the same situation can occur with module export lists. Consider version 1.0 of foo which provides:

    module Foo (foo1, foo2) where

    Version 1.1 removes the foo2 export. The bar package reexports the entire Foo module, and then a user package imports the module from bar. If the user package uses the foo2 function, it will compile when foo-1.0 is used, but not when foo-1.1 is used.
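The instance-removal case can be condensed into one file to make the failure mode concrete (MonadFoo, bar, and the values involved are the hypothetical names from the example above, inlined into a single module):

```haskell
-- Stand-in for what foo-1.0 exports; foo-1.1 deletes the MonadFoo IO instance.
class MonadFoo m where
  fooVal :: m Double

instance MonadFoo IO where
  fooVal = return 1.5

-- bar's function: polymorphic in m, so it compiles against foo-1.0 and
-- foo-1.1 alike, and the bound foo >= 1.0 && < 1.2 is honest for bar itself.
bar :: (Functor m, MonadFoo m) => Int -> m Double
bar n = fmap (* fromIntegral n) fooVal

-- The user's code never mentions foo, yet it instantiates bar at IO and so
-- silently relies on foo's MonadFoo IO instance; against foo-1.1 this stops
-- compiling even though bar itself still builds.
main :: IO ()
main = bar 2 >>= print
```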

In both of these cases, the issue is the same: transitive dependencies are not being clamped down. The PVP makes an assumption that the entire interface for a package can be expressed in its version number, which is not true. I see three possible solutions to this:

  1. Try to push even more of a burden onto package authors, and somehow make them guarantee that their interface is completely immune to changes elsewhere in the stack. This kind of change was proposed on the libraries list. I'm strongly opposed to some kind of change like this: it makes authors' lives harder, and makes it very difficult to provide backwards compatibility in libraries. Imagine if transformers 0.4 adds a new MonadIO instance; the logical extreme of this position would be to disallow a library from working with both transformers 0.3 and 0.4, which will split Hackage in two.

  2. Modify the PVP so that instead of listing just direct dependencies, authors are required to list all transitive dependencies as well. So it would be a violation to depend on bar without explicitly listing foo in the dependency list. This would work, but it would be incredibly difficult to maintain. It would also greatly increase the time it takes for a new version of a deep dependency to be usable, due to the number of authors who would have to bump version bounds.

  3. Transfer responsibility for this to package users: if you first built your code against foo 1.0, you should freeze that information and continue building against foo 1.0, regardless of the presence of new versions of foo. Not only does this increase reproducibility, it's just common sense: it's entirely possible that new versions of a library will introduce a runtime bug, performance regression, or even fix a bug that your code depends on. Why should the reliability of my code base be dependent on the actions of some third party that I have no control over?
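
As a sketch of option (3) with today's tooling, freezing can be expressed through a cabal.config constraints file checked in alongside the project (package names and versions here are illustrative):

```cabal
constraints: foo == 1.0,
             bar == 2.3.1
```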

Non-upgradeable packages

There are some packages which ship with GHC and cannot be upgraded. I'm aware of at least base and template-haskell, though perhaps there are others (haskell98 and haskell2010?). In the past, there was good reason to place upper bounds on base, specifically with the base 3/4 split. However, we haven't had that experience in a while, and don't seem to be moving towards doing that again. In today's world, we end up with the following options:

  • Place upper bounds on base to indicate "I haven't tested this with newer versions of GHC." This then makes it difficult for users to test out that package with newer versions of GHC.
  • Leave off upper bounds on base. Users may then try to install a package onto a version of GHC on which the package hasn't been tested, which will result in either (1) everything working (definitely the common case based on my Stackage maintenance), or (2) getting a compilation error.

I've heard two arguments to push us in the direction of keeping the upper bounds in this case, so I'd like to address them both:

  • cabal error messages are easier to understand than GHC error messages. I have two problems with that:
    • I disagree: cabal error messages are terrible. (I'm told this will be fixed in the next version of cabal.) Take the following output as a sample:

      cabal: Could not resolve dependencies:
      trying: 4Blocks-0.2
      rejecting: base- (conflict: 4Blocks => base>=2 && <=4)
      rejecting: base-, , , , , , , , , , , , , , (global constraint requires installed instance)

      I've seen a number of users file bug reports not understanding that this message means "you have the wrong version of GHC."

    • Even if the error messages were more user-friendly, they make it more difficult to fix the actual problem: the code doesn't compile with the new version of GHC. Oftentimes, I've been able to report an error message to a library author and, without necessarily even downloading the new version of GHC, he/she has been able to fix the problem.

  • Using upper bounds in theory means that cabal will be able to revert to an older version of the library that is compatible with the new version of GHC. However, I find it highly unlikely that there's often, if ever, a case where an older version of a library is compatible with a later version of GHC while a newer version is not.

Mostly-stable, and finer-grained versioning

I'll combine the discussion of the last two points. I think the heart of the PVP debates really comes from mostly-stable packages. Let's contrast with the extremes. Consider a library which is completely stable, never has a breaking change, and has stated with absolute certainty that it never will again. Does anyone care about upper bounds on this library? They're irrelevant! I'd have no problem with including an upper bound, and I doubt even the staunchest PVP advocates would really claim it's a problem to leave it off.

On the other hand, consider an extremely unstable library, which is releasing massively breaking changes on a weekly basis. I would certainly agree in that case that an upper bound on that library is highly prudent.

The sticking point is the middle ground. Consider the following code snippet:

import Data.Text (Text, pack)

foo :: Text
foo = pack "foo"

According to the PVP as it stands today, this snippet requires an upper bound of < 1.2 on the text package. But let's just play the odds here: does anyone actually believe there's a real chance that the next iteration of text will break this code snippet? I highly doubt it; this is a stable subset of the text API, and I doubt it will ever be changing. The same can be said of large subsets of many other packages.
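
To make the contrast concrete, the difference comes down to a single bound in the consumer's .cabal file (stanza fragment only; versions are illustrative):

```cabal
-- what the PVP requires today for the snippet above:
build-depends: text >= 1.1 && < 1.2

-- what this proposal would allow, given text's stable subset:
build-depends: text >= 1.1
```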

By putting in upper bounds in these cases, we run a very real risk of bifurcating Hackage into "those demanding the new text version for some new feature" vs "those who haven't yet updated their upper bounds to allow the new version of text."

The PVP currently takes an extremely conservative viewpoint on this, with the goal of solving just one problem: making sure code that compiles now continues to compile. As I demonstrated above, it doesn't actually solve that problem completely. And in addition, in this process, it has created other problems, such as this bifurcation.

So my proposal is that, instead of creating rigid rules like "always put an upper bound no matter what," we allow some common sense into the process, and also let package authors explicitly say "you can rely on this API not changing."

Categories: Offsite Blogs

Noam Lewis: Generate Javascript classes for your .NET types

Planet Haskell - Tue, 04/08/2014 - 9:36am

We open-sourced another library: ClosureExterns.NET (on github and nuget). It generates Javascript classes from .NET-based backend types, to preserve type “safety” (as safe as Javascript gets) across front- and backend. As a bonus you get Google closure annotations. The type annotations are understood by WebStorm (and other editors) and improve your development experience. Also, if you use Google Closure to compile or verify your code, it will take these types into account. We use it extensively with C#. We haven’t tried it with F#, but it’s supposed to work with any .NET type.

ClosureExterns.NET makes it easier to keep your frontend models in sync with your backend. The output is customizable – you can change several aspects of the generated code. For example you can change the constructor function definition, to support inheritance from some other Javascript function. For more details see ClosureExternOptions.

Getting Started

First, install it. Using nuget, install the package ClosureExterns.

Then, expose a method that generates your externs. For example, a console application:

public static class Program
{
    public static void Main()
    {
        var types = ClosureExternsGenerator.GetTypesInNamespace(typeof(MyNamespace.MyType));
        var output = ClosureExternsGenerator.Generate(types);
        Console.Write(output);
    }
}

You can also customize the generation using a ClosureExternsOptions object.

Example input/output

Input:

class B
{
    public int[] IntArray { get; set; }
}

Output:

var Types = {};

// ClosureExterns.Tests.ClosureExternsGeneratorTest+B
/** @constructor */
Types.B = function() {};

/** @type {Array.<number>} */
Types.B.prototype.intArray = null;

For a full example see the tests.

Tagged: .NET, c#, Javascript
Categories: Offsite Blogs

Why is this algorithm so much slower in Haskell than in C?

Haskell on Reddit - Tue, 04/08/2014 - 7:39am

Code in Haskell: compiled with ghc -O2 filename
Code in C: compiled with gcc -O3 --std=c99 -lm filename

I need to generate fifty 40-bit numbers where each bit is set with probability p (in my examples p is set to 0.001). In the Haskell code this is represented by fractions of 10000 for speedup, so sorry if it's a bit unclear. It's part of a bigger algorithm, but according to a profiler it's the slowest part. My question is: why is the Haskell version so much slower than the C one? (~4sec vs ~0.25sec)

submitted by Envielox
[link] [34 comments]
Categories: Incoming News

Syntax highlighted markdown in this subreddit

Haskell on Reddit - Tue, 04/08/2014 - 5:28am

Any chance of getting codeblocks syntax highlighted as Haskell in this subreddit? The standard way these days is you write

``` haskell
x = foo `bar` mu
  where y = 2
```

E.g. this works on Github and Hakyll. Right now in this subreddit, if you write the above, you get:

haskell x = foo `bar` mu where y = 2

Which generates:

<p><code>haskell x = foo `bar` mu where y = 2 </code></p>

Meaning some nifty JavaScript could find such elements identified by <p><code>…</code></p> and then haskell followed by a newline to be highlighted as Haskell code. If reddit ever explicitly supports code fences, then we can just turn this off.

Question is: can moderators add scripts to a subreddit due to the security issue? Maybe if a reddit admin approved it?

submitted by cookedbacon
[link] [10 comments]
Categories: Incoming News

Danny Gratzer: Bargain Priced Coroutines

Planet Haskell - Mon, 04/07/2014 - 6:00pm
Posted on April 8, 2014

The other day I was reading the 19th issue of the Monad.Reader and there was a fascinating post on coroutines.

While reading some of the code I noticed that it, like most things in Haskell, can be reduced to 5 lines with a library that Edward Kmett has written.

Consider the type of a trampoline as described in this article

newtype Trampoline m a = Tramp {runTramp :: m (Either (Trampoline m a) a)}

So a trampoline is a monadic computation of some sort returning either a result, a, or another computation to run to get the rest.

Now this looks strikingly familiar. A computation returning Trampoline m a is really a computation returning a tree of Tramp m a’s terminating in a pure value.

This sounds like a free monad!

import Control.Monad.Trans.Free
import Control.Monad.Identity

type Trampoline = FreeT Identity

Recall that FreeT is defined as

data FreeF f a b = Pure a | Free (f b)

newtype FreeT f m a = FreeT { runFreeT :: m (FreeF f a (FreeT f m a)) }

This is isomorphic to what we were looking at before. As an added bonus, we've saved the tedium of defining our own monad and applicative instance for Trampoline.

We can now implement bounce and pause to define our trampolines. bounce must take a computation and unwrap it by one level, leaving either a value or another computation.

This is just a matter of rejiggering the FreeF into an Either

bounce :: Functor m => Trampoline m a -> m (Either (Trampoline m a) a)
bounce = fmap toEither . runFreeT
  where toEither (Pure a) = Right a
        toEither (Free m) = Left $ runIdentity m

pause requires some thought. The trick is to realize that if we wrap a computation in one layer of Free, then when it's unwrapped by bounce we'll get back the rest of the computation.


pause :: Monad m => Trampoline m ()
pause = FreeT $ return (Free . Identity $ return ())
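
To see a trampoline in action, here's a self-contained sketch that hand-rolls the same shape (my own Tramp newtype and Monad instance standing in for FreeT, so it runs with just base; all names here are mine, not the article's):

```haskell
newtype Tramp m a = Tramp { runTramp :: m (Either (Tramp m a) a) }

instance Monad m => Functor (Tramp m) where
  fmap f (Tramp m) = Tramp $ fmap (either (Left . fmap f) (Right . f)) m

instance Monad m => Applicative (Tramp m) where
  pure = Tramp . pure . Right
  tf <*> tx = tf >>= \f -> fmap f tx

instance Monad m => Monad (Tramp m) where
  return = pure
  Tramp m >>= f = Tramp $ m >>= either (pure . Left . (>>= f)) (runTramp . f)

liftT :: Functor m => m a -> Tramp m a
liftT = Tramp . fmap Right

-- One Left layer whose continuation does nothing: the analogue of pause.
pauseT :: Monad m => Tramp m ()
pauseT = Tramp . pure . Left $ pure ()

-- Drive a trampoline to completion, one bounce at a time.
runAll :: Monad m => Tramp m a -> m a
runAll t = runTramp t >>= either runAll pure

demo :: Tramp IO ()
demo = do
  liftT $ putStr "Hello, "
  pauseT
  liftT $ putStrLn "world"

main :: IO ()
main = runAll demo
```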

So that’s 6 lines of code for trampolines. Let’s move on to generators.

A generator doesn’t yield just another computation, it yields a pair of a computation and a freshly generated value. We can account for this by changing that Identity functor.

type Generator c = FreeT ((,) c)

Again we get free functor, applicative and monad instances. We need two functions, yield and runGen. yield is going to take one value and stick it into the first element of the pair.

yield :: Monad m => g -> Generator g m ()
yield g = FreeT . return $ Free (g, return ())

This just sticks a good old boring m () in the second element of the pair.

Now runGen should take a generator and produce an m (Maybe g, Generator g m a). This can be done again by pattern matching on the underlying FreeF.

runGen :: (Monad m, Functor m) => Generator g m a -> m (Maybe g, Generator g m a)
runGen = fmap toTuple . runFreeT
  where toTuple (Pure a) = (Nothing, return a)
        toTuple (Free (g, rest)) = (Just g, rest)

Now, last but not least, let's build consumers. These wait for a value rather than generating one, so a function looks like the right functor. Since a consumer also needs to notice when its generator has run dry, we await a Maybe c:

type Consumer c = FreeT ((->) (Maybe c))

Now we want await and runCon. await to wait for a value and runCon to supply one. These are both fairly mechanical.

-- await isn't spelled out in the excerpt; this is the mechanical definition
await :: Monad m => Consumer c m (Maybe c)
await = FreeT . return $ Free return

runConsumer :: Monad m => Maybe c -> Consumer c m a -> m a
runConsumer food = (>>= go) . runFreeT
  where go (Pure a) = return a
        go (Free f) = runConsumer food $ f food

runCon :: (Monad m, Functor m) => Maybe c -> Consumer c m a -> m (Either a (Consumer c m a))
runCon food c = runFreeT c >>= go
  where go (Pure a) = return . Left $ a
        go (Free f) = do
          result <- runFreeT $ f food
          return $ case result of
            Pure a -> Left a
            free   -> Right . FreeT . return $ free

runCon is a bit more complex than I’d like. This is to essentially ensure that if we had some code like

Just a <- await
lift $ do foo
          bar
          baz
Just b <- await

We want foo, bar, and baz to run with just one await. You’d expect that we’d run as much as possible with each call to runCon. Thus we unwrap not one, but two layers of our FreeT and run them, then rewrap the lower layer. The trick is that we make sure never to duplicate side effects by using good old return.

We can sleep easy that this is sound since return a >>= f is f a by the monad laws. Thus, our call to return can’t do anything detectable or too interesting.

While this is arguably more intuitive, I don’t particularly like it so we can instead write

runCon :: (Monad m, Functor m) => Maybe c -> Consumer c m a -> m (Either a (Consumer c m a))
runCon food = fmap go . runFreeT
  where go (Pure a) = Left a
        go (Free f) = Right (f food)

Much simpler, but now our above example wouldn’t run foo and friends until the second call of runCon.

Now we can join generators to consumers in a pretty naive way:

-- starve isn't shown in the post; feeding Nothing until the consumer finishes
starve :: (Functor m, Monad m) => Consumer c m a -> m a
starve c = runCon Nothing c >>= either return starve

(>~>) :: (Functor m, Monad m) => Generator c m () -> Consumer c m a -> m a
gen >~> con = do
  (cMay, rest) <- runGen gen
  case cMay of
    Nothing -> starve con
    Just c  -> runCon (Just c) con >>= use rest
  where use _    (Left a)  = return a
        use rest (Right c) = rest >~> c

And now we can use it!

addGen :: Generator Int IO ()
addGen = do
  lift $ putStrLn "Yielding 1"
  yield 1
  lift $ putStrLn "Yielding 2"
  yield 2

addCon :: Consumer Int IO ()
addCon = do
  lift $ putStrLn "Waiting for a..."
  Just a <- await
  lift $ putStrLn "Waiting for b..."
  Just b <- await
  lift . print $ a + b

main = addGen >~> addCon

When run this prints

Yielding 1
Waiting for a...
Yielding 2
Waiting for b...
3

Now, this all falls out of playing with what functor we give to FreeT. So far, we’ve gotten trampolines out of Identity, generators out of (,) a, and consumers out of (->) a.

Categories: Offsite Blogs

Announcing GlomeTrace-0.3 and GlomeView-0.3

Haskell on Reddit - Mon, 04/07/2014 - 3:26pm

Home page:



Screen shot:

One of the things that's been bugging me about Glome was that it was pretty good at rendering images, but it wasn't really a very usable library for general computational geometry tasks because there wasn't a way to figure out exactly what any given ray hit.

This new version of Glome adds the ability to tag objects so that you get a list of tags associated with any objects that a ray hit. (Geometry can be hierarchical, so you may have more than one tag.)

As an example of why this is useful, GlomeView has been modified so that once the scene has rendered, you can click on the image and a ray is traced out into the scene wherever you click. Several of the objects are tagged with strings, and that string gets printed when you click the object.

Textures can pass tags through as well, so it's possible to click on the reflection of an object rather than the object itself, and GlomeView will correctly identify the thing you clicked on.

In addition to tags, I also re-worked textures to make them much more general. It's now possible to define your own lighting model. At the same time, I added a new feature to the standard lighting model that allows you to create portals such that any ray that hits the surface of the portal gets transformed in any user-defined way and re-cast somewhere else in the same (or even a different) scene.

edit: here's a link to the 0.2 announcement from several months ago:

submitted by elihu
[link] [1 comment]
Categories: Incoming News

Haskell Job Opportunity

Haskell on Reddit - Mon, 04/07/2014 - 11:10am

Eureka Genomics is a gene sequencing company developing low cost genotyping tests for animals. We are in the process of hardening our codebase from research prototypes to production-quality applications. We are looking to hire a few more Haskell developers to refactor existing Perl and C++ programs into reliable, maintainable Haskell. We would prefer to keep our team local to Houston.

If interested, please send your resume to

submitted by EGBio
[link] [15 comments]
Categories: Incoming News

F# compiler, library and tools now open for community contribution

Lambda the Ultimate - Mon, 04/07/2014 - 10:22am

F# is the first MS language to go open source. The F# team is now going further into the Open World to allow community contributions to the core language, library and tool set. This means the F# team will now take pull requests :)

From a recent blog post on the topic:

"Prior to today (April 3, 2014), contributions were not accepted to the core implementation of the F# language. From today, we are enabling the community to contribute to the F# language, library and tools, and to the Visual F# Tools themselves, while maintaining the integrity and unity of the F# language itself.

In more detail:
•Contributions can now be made to the core F# compiler, library and tools implementation.
•Proposed changes will be rigorously moderated by ourselves and other community contributors from Microsoft Research and the F# community.
•The full tests for the F# compiler and library are now available.
•In time, the full source code and test suite for the Visual F# Tools will be made available."

Categories: Offsite Discussion

Functional Software Engineering Posts

Haskell on Reddit - Mon, 04/07/2014 - 3:42am

Symphonic Solutions Ltd ( is a disruptive technology company that leverages functional paradigms to enable engineered innovation.

In real terms, we build high performance, low latency platforms that integrate to a plethora of third party API’s and apply intelligent manipulation and analysis to the data via a DSL. We help turn ‘Big Data’ into ‘Useful Information’. We also don’t use Hadoop - which is nice.

We’re looking for software engineers who understand the functional domain extremely well. Ideally you will be a polyglot software engineer with skills and at least twelve months (commercial) experience in Scala (with Akka) || Clojure || Haskell || NodeJS. We run all of these in production so it’s important you understand the difference between experimentation and production quality code. You should also be able to at least read one of either Python, Bash or C. We don’t allow Ruby anywhere on our platform so you should be comfortable about that. We’re looking for people who want to live the functional dream, not people who think it’s a ‘nice to have’ on their C.V. please.

You should understand the sympathetic orchestration of human, software, operating system and hardware. You should also be able to translate complex functional requirements into simple, well composed programs which work in concert to produce the desired result. Experience in working within a micro-services architecture will be looked on very favourably.

We are flexible on working locations, including home working, with a London office based near Edgware Road station. During the early phases of engineering design some time in the office will be a necessity however and you should be comfortable about that. You should be able to work independently and within a wider team as well as manage your time such that your responsibilities in each sprint are delivered to a high quality and in a timely manner. You’ll be expected and empowered to make software design choices specific to your responsibilities in concert with the platform engineering team.

You should be happy to work within an agile project management framework which supports Kanban. You should be familiar with working with JIRA, Confluence and Git. You will be expected to produce documentation around your sprint responsibilities which deliver a high level of knowledge retention and transfer.

Your programs will be run on SmartOS platforms and so you should be able to build technology agnostic programs. In simple terms you should be happy not to rely on any operating system specific functionality in order to write your programs. You will be trained in technologies such as DTrace to better aid your diagnostic efforts. You will be provided with local build environments based off Vagrant which you should be happy to manage yourself, though training and support from engineering will be available at all times.

If this interests you, in the first instance, please answer the following questions :

Question 1 Explain the relationship between the human and the machine. Describe the relationship in as much detail as you feel comfortable with.

Question 2 Explain tail recursion to a 5 year old.

Question 3 Explain how you would manage state in a functional manner to a 5 year old.

We expect that you have public GitHub/Bitbucket repo’s sharing your work and/or research with functional languages and programs, so please also send your GitHub/Bitbucket handle with your answers. Please send your answers and whatever you have of a C.V. to

submitted by khushildep
[link] [21 comments]
Categories: Incoming News

BudHac 2014

Haskell on Reddit - Mon, 04/07/2014 - 3:00am
Categories: Incoming News

The ghc-vis User Guide - Sun, 04/06/2014 - 3:53pm
Categories: Offsite Blogs