News aggregator

Local mirrors of hackage

haskell-cafe - Wed, 02/24/2016 - 6:23pm
I have a computer that I work on which isn't directly connected to the internet, that I want to use Haskell on. In order to do that I want a local version of hackage. I was previously able to do this using, but since hackage 2 came around this no longer works. Doing this would involve running something on a computer that *is* attached to the internet, and then transferring a bunch of files over to the other computer. Can someone point me to a recipe for doing this? Victor
Categories: Offsite Discussion

Hedis 0.7.0 release warning

haskell-cafe - Wed, 02/24/2016 - 5:47pm
Hi! I've released hedis 0.7.0 with one change which might affect some of you, so I thought it makes sense to notify cafe. To fix the issue #23 [0] we decided that it's ok to sacrifice pipelining between runRedis calls, so from now on, there won't be pipelining in that scenario. I want to thank Kirill Zaborsky for the fix in pr #45 [1] Best regards, Kostia [0]: [1]: _______________________________________________ Haskell-Cafe mailing list Haskell-Cafe< at >
Categories: Offsite Discussion

Free and cofree

haskell-cafe - Wed, 02/24/2016 - 1:06pm
Hi all, A little while back I gave a talk at YOW Lambda Jam and wrote some posts on combining free monads and cofree comonads (inspired by some posts by Ed Kmett and Dan Piponi). I shared them on reddit, but recently realised that there might be some folks on this list who might be interested in the material. For those interested people, it's all here: The "coming soon" posts will turn up eventually, although not until I'm done preparing my proposal for this year's Lambda Jam :) Cheers, Dave
Categories: Offsite Discussion

JP Moresmau: Haskell Trough of Disillusionment

Planet Haskell - Tue, 02/23/2016 - 12:00pm
I'm going through a rough patch with Haskell. I still love the language, of course, but some things in the ecosystem bother me, as they seriously impact both the fun of writing Haskell and my productivity.

Sometimes it's the lack of a good development environment that gets me. I failed with EclipseFP to build a community and gather enough support, and it doesn't seem that other efforts go that much further. I contribute to Leksah and haskell-ide-engine, and there are plugins now for Atom and other modern editors, but when I do a spot of Android development I see what a good IDE is like and how much I miss in Haskell.

But today it's more the open-source library issues that irk me. It's great that we have loads of libraries, and they're open source and usually good quality. But of course the maintainers are all volunteers, and sometimes have better things to do. There are a few libraries that I use in my code that now actually stop me from progressing: I have provided enhancements or bug fixes that I need for my projects as pull requests, and they languish in the maintainers' inboxes for months. So what am I to do? Hound the maintainers? Fork the library to apply my patches? Rewrite my code so it doesn't use that library but another, better-maintained one? Not use libraries at all and write everything myself? And of course if I offer to take over maintainership I'll end up being overloaded and will perpetuate the problem. I suppose the best approach is to offer to be one of MANY maintainers for the library, so that I can merge my changes and release on Hackage if the other maintainers are busy or uninterested. I'm not sure how that can work in the general case, though, if loads of people are maintainers for loads of libraries. I believe that having one person with the vision and the drive for a project is best, but for little libraries it may not matter much.

Categories: Offsite Blogs

Playing with OverloadedLabels in GHC 8 RC2,how to do this?

haskell-cafe - Tue, 02/23/2016 - 10:29am
Hi all, I'm playing with the OverloadedLabels extension in GHC 8 RC2. I have been able to define simple record accessors, like in this gist: After realizing that with OverloadedLabels a single symbol can be used to extract two different types from the same record, I tried to define an instance that says: "if a symbol can be used to extract a String from my record, then it can also be used to extract a Text value". Here's my attempt (using a dummy Text type):

{-# LANGUAGE OverloadedLabels #-}
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE FlexibleInstances #-}
{-# LANGUAGE FlexibleContexts #-}
{-# LANGUAGE UndecidableInstances #-}
{-# LANGUAGE MultiParamTypeClasses #-}
{-# LANGUAGE MagicHash #-}

module Main where

import GHC.OverloadedLabels
import GHC.Prim

newtype Text = Text { getText :: String } deriving Show

data Person = Person { _id :: Int, _name :: String }

instance IsLabel "name" (Person -> String) where
  fromLabel _ = _name

ins
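The excerpt above is truncated, so for reference here is a minimal compilable sketch of the same kind of IsLabel instance. Note this uses the modern signature fromLabel :: a (GHC 8.2 and later) rather than the RC2-era Proxy#-based one, and the example data and names are illustrative, not the original poster's:

```haskell
{-# LANGUAGE OverloadedLabels #-}
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE FlexibleInstances #-}
{-# LANGUAGE MultiParamTypeClasses #-}

import GHC.OverloadedLabels (IsLabel (..))

data Person = Person { _id :: Int, _name :: String }

-- #name, used at type Person -> String, selects the _name field
instance IsLabel "name" (Person -> String) where
  fromLabel = _name

main :: IO ()
main = putStrLn (#name (Person 1 "Alice"))  -- prints "Alice"
```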
Categories: Offsite Discussion

Typeclassopedia exercises - my own answers published.

haskell-cafe - Tue, 02/23/2016 - 5:32am
Hi all, I’ve published my own answers to some of the exercises in Brent Yorgey’s Typeclassopedia: Github hosted HTML preview: Github source repository: Thanks to Conal Elliott and Phil Ruffwind for their help. -db
Categories: Offsite Discussion

Functional Jobs: Full-Stack Developer (Haskell/PureScript/ES6) at Canopy Education Inc. (Full-time)

Planet Haskell - Mon, 02/22/2016 - 8:31pm

Canopy Education Inc. is looking for a product-focused full-stack developer to help engineer the democratization of the college admissions process and higher education attainment.

There aren't many industries left that haven't been significantly disrupted by technology in some way, but you're reading about one right here! You will find many opportunities to apply high-leverage computer science (think machine learning, probabilistic reasoning, etc.) as well as plenty of opportunities for the more human side of the problem, as the current admissions process is a huge source of stress and confusion for students and parents alike. If we execute correctly, your work will impact the entire next generation of college graduates-to-be.

You will join a company whose culture centers around authenticity, excellence, and balance. You'll find that everyone likes to keep things simple and transparent. You'll find everyone to be goal-oriented and hands-off as long as you are a self-starter. And you'll feel right at home if you don't like sitting in front of a computer all day for hours on end.

But for those times we do sit in front of a computer, we are a polyglot functional programming shop, with emphasis on Haskell, PureScript, and ES6. Our infrastructure and non-mission-critical tooling tends to be in whatever works best for the task at hand: sometimes that's one of our main languages, other times it's Ruby or bash—basically, it's a team decision based on whatever sits at the intersection of appropriateness, developer joy, quality, and velocity.

As an early-stage company headquartered in Cambridge, MA, we have a strong preference for key members of our team to be located in the Boston metro area; however, given that our company has its roots in remote work, we are open to remote arrangements given one year of continuous employment and/or executive approval.


You know you are right for this position if:

  • You have at least five years of professional software engineering experience, and at least two years of experience with a high-level programming language that's used in industry, like Haskell, Clojure, OCaml, Erlang, F#, or similar.
  • You have some front-end experience with JS or a functional language that compiles to JS, like PureScript, Elm, ClojureScript, or similar. Experience with ES6 and the React ecosystem is a bonus.
  • You are a self-starter and internally motivated, with a strong desire to be part of a successful team that shares your high standards.
  • You have great written communication skills and are comfortable with making big decisions over digital presence (e.g. video chat).
  • You have polyglot experience along several axes (dynamic/static, imperative/functional, lazy/strict, weird/not-weird).
  • You are comfortable with a PaaS like Heroku, or even a BaaS like Firebase. Your preferred database is Postgres. You have basic but passable sysadmin skills.
  • You are fluent with git.
  • You instrument before you optimize. You test before you ship. You listen before you conclude. You measure before you cut. Twice.

We offer a competitive salary and a full suite of benefits, some of them unconventional, but awesome for the right person:

  • Medical, dental, vision insurance and 401k come standard.
  • Flexible hours with a 4-hour core - plan the rest of your workday as you wish, just give us the majority of your most productive hours. Productivity ideas: avoid traffic, never wait in line at the grocery store, wake up without an alarm clock.
  • Goal-based environment (as opposed to grind-based or decree-based environment; work smarter, not harder). We collaborate on setting goals, but you set your own process for accomplishing those goals. You will be entrusted with a lot of responsibility and you might even experience fulfillment and self-actualization as a result.
  • Daily physical activity/mindfulness break + stipend: invest a non-core hour to make yourself more awesome by using it for yoga, tap-dance lessons, a new bike, massage, a surfboard - use your imagination! Just don’t sit at a computer all day! Come back to work more relaxed and productive and share your joy with the rest of the team. Note: You must present and share proof of your newly enriched life with the team in order to receive the stipend.
  • Equipment/setup budget so you can tool up the way you want. A brand new 15" MBP is standard issue if you have no strong opinions.

Remember: We’re a startup. You’re an early employee. We face challenges. We have to ship. Your ideas matter. You will make a difference.

Get information on how to apply for this position.

Categories: Offsite Blogs

Ken T Takusagawa: [zomcxiel] Idiomatic language features

Planet Haskell - Mon, 02/22/2016 - 5:56pm

When describing an algorithm, it would be preferable to use pseudocode that is a real programming language, or something very close to a real programming language: this way the pseudocode can easily be executed to verify that it actually works.  (Inspired by the bizarre prefix syntax for object attributes in the pseudocode for the CLR textbook 2nd edition, which, in retrospect, resembles Haskell record accessor syntax. In the 3rd edition (CLRS), object attributes were changed to use the dot syntax as seen in many object-oriented languages.)

The pseudocode should be easily translatable into other languages, so it should avoid language features idiomatic to the chosen real language, features that would take a lot of effort or awkwardness to replicate in another language.  However, if some idiomatic feature is "necessary" for the algorithm in question, for an open-ended definition of "necessary", go ahead and use it.  For example, an algorithm may be much more difficult to describe without lazy evaluation as in Haskell.  There is not a black-and-white distinction about what features are acceptable for pseudocode; this is why there cannot be a standardized language for pseudocode.  (The original motivation was to have a standardized pseudocode language for describing algorithms on Wikipedia.  I suspect Wikipedia's use of pseudocode is also mired in political difficulties of not wanting to show favoritism toward any particular real programming language.)

Enumerate the idiomatic features of various programming languages.

generally: Object-oriented features, exceptions
C: pointer arithmetic, stdarg, longjmp, null terminated strings
Haskell: Lazy evaluation, monads, closures
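The kind of laziness-dependent definition alluded to above might look like this in Haskell (my example, not the post's): a corecursive infinite list that only makes sense because prefixes are demanded lazily, and which would need explicit restructuring in a strict language.

```haskell
-- An infinite list of Fibonacci numbers, defined in terms of itself;
-- lazy evaluation makes this terminate when only a prefix is demanded.
fibs :: [Integer]
fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

main :: IO ()
main = print (take 10 fibs)  -- [0,1,1,2,3,5,8,13,21,34]
```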

Categories: Offsite Blogs

Fw: new important message

haskell-cafe - Mon, 02/22/2016 - 1:40pm
Hello! New message, please read <> dek5< at >
Categories: Offsite Discussion

Wolfgang Jeltsch: Generic programming in Haskell

Planet Haskell - Mon, 02/22/2016 - 12:07pm

Generic programming is a powerful way to define a function that works in an analogous way for a class of types. In this article, I describe the latest approach to generic programming that is implemented in GHC. This approach goes back to the paper A Generic Deriving Mechanism for Haskell by José Pedro Magalhães, Atze Dijkstra, Johan Jeuring, and Andres Löh.

This article is a writeup of a Theory Lunch talk I gave on 4 February 2016. As usual, the source of this article is a literate Haskell file, which you can download, load into GHCi, and play with.


Parametric polymorphism allows you to write functions that deal with values of any type. An example of such a function is the reverse function, whose type is [a] -> [a]. You can apply reverse to any list, no matter what types the elements have.

However, parametric polymorphism does not allow your functions to depend on the structure of the concrete types that are used in place of type variables. So values of these types are always treated as black boxes. For example, the reverse function only reorders the elements of the given list. A function of type [a] -> [a] could also drop elements (like the tail function does) or duplicate elements (like the cycle function does), but it could never invent new elements (except for ⊥) or analyze elements.

Now there are situations where a function is suitable for a class of types that share certain properties. For example, the sum function works for all types that have a notion of binary addition. Haskell uses type classes to support such functions. For example, the Num class provides the method (+), which is used in the definition of sum, whose type Num a => [a] -> a contains a respective class constraint.

The methods of a class have to be implemented separately for every type that is an instance of the class. This is reasonable for methods like (+), where the implementations for the different instances differ fundamentally. However, it is unfortunate for methods that are implemented in an analogous way for most of the class instances. An example of such a method is (==), since there is a canonical way of checking values of algebraic data types for equality. It works by first comparing the outermost data constructors of the two given values and, if they match, comparing the individual fields. Only when the data constructors and all the fields match are the two values considered equal.

For several standard classes, including Eq, Haskell provides the deriving mechanism to generate instances with default method implementations whose precise functionality depends on the structure of the type. Unfortunately, there is no possibility in standard Haskell to extend this deriving mechanism to user-defined classes. Generic programming is a way out of this problem.


For generic programming, we need several language extensions. The good thing is that only one of them, DeriveGeneric, is specific to generic programming. The other ones have uses in other areas as well. Furthermore, DeriveGeneric is a very small extension. So the generic programming approach we describe here can be considered very lightweight.

We state the full set of necessary extensions with the following pragma:

{-# LANGUAGE DefaultSignatures, DeriveGeneric, FlexibleContexts, TypeFamilies, TypeOperators #-}

Apart from these language extensions, we need the module GHC.Generics:

import GHC.Generics

Our running example

As our running example, we pick serialization and deserialization of values. Serialization means converting a value into a bit string, and deserialization means parsing a bit string in order to get back a value.

We introduce a type Bit for representing bits:

data Bit = O | I deriving (Eq, Show)

Furthermore, we define the class of all types that support serialization and deserialization as follows:

class Serializable a where

  put :: a -> [Bit]

  get :: [Bit] -> (a, [Bit])

There is a canonical way of serializing values of algebraic data types. It works by first encoding the data constructor of the given value as a sequence of bits and then serializing the individual fields. To show this approach in action, we define an algebraic data type Tree, which is a type of labeled binary trees:

data Tree a = Leaf | Branch (Tree a) a (Tree a) deriving Show

An instantiation of Serializable for Tree that follows the canonical serialization approach can be carried out as follows:

instance Serializable a => Serializable (Tree a) where

  put Leaf                     = [O]
  put (Branch left root right) = [I] ++ put left ++ put root ++ put right

  get (O : bits) = (Leaf, bits)
  get (I : bits) = (Branch left root right, bits''')
    where
      (left, bits')    = get bits
      (root, bits'')   = get bits'
      (right, bits''') = get bits''
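As a quick sanity check, the hand-written instance can be exercised in a self-contained sketch; the Serializable Bit instance for the element type and the main action are my additions, not part of the article:

```haskell
data Bit = O | I deriving (Eq, Show)

data Tree a = Leaf | Branch (Tree a) a (Tree a) deriving (Eq, Show)

class Serializable a where
  put :: a -> [Bit]
  get :: [Bit] -> (a, [Bit])

-- a trivial instance so we can store Bit values in the tree
instance Serializable Bit where
  put b            = [b]
  get (b : bits)   = (b, bits)

-- the canonical instance from the article
instance Serializable a => Serializable (Tree a) where
  put Leaf           = [O]
  put (Branch l x r) = [I] ++ put l ++ put x ++ put r
  get (O : bits) = (Leaf, bits)
  get (I : bits) = (Branch l x r, bits''')
    where (l, bits')   = get bits
          (x, bits'')  = get bits'
          (r, bits''') = get bits''

main :: IO ()
main = do
  let t = Branch Leaf O (Branch Leaf I Leaf) :: Tree Bit
  print (put t)                   -- [I,O,O,I,O,I,O]
  print (fst (get (put t)) == t)  -- True: serialization round-trips
```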

Of course, it quickly becomes cumbersome to provide such an instance declaration for every algebraic data type that should use the canonical serialization approach. So we want to implement the canonical approach once and for all and make it easily usable for arbitrary types that are amenable to it. Generic programming makes this possible.


An algebraic data type is essentially a sum of products where the terms “sum” and “product” are understood as follows:

  • A sum is a variant type. In Haskell, Either is the canonical type constructor for binary sums, and the empty type Void from the void package is the nullary sum.

  • A product is a tuple type. In Haskell, (,) is the canonical type constructor for binary products, and () is the nullary product.
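As a small illustration of this sum-of-products view (my example, not the article's): Maybe a is the binary sum of the nullary product and a single field, i.e. it is isomorphic to Either () a.

```haskell
-- Maybe a viewed as a sum of products: Nothing ~ Left (), Just x ~ Right x
toRep :: Maybe a -> Either () a
toRep Nothing  = Left ()
toRep (Just x) = Right x

fromRep :: Either () a -> Maybe a
fromRep (Left ()) = Nothing
fromRep (Right x) = Just x

main :: IO ()
main =
  -- the two conversions are mutually inverse
  print (map (fromRep . toRep) [Nothing, Just (42 :: Int)])  -- [Nothing,Just 42]
```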

The key idea of generic programming is to map types to representations that make the sum-of-products structure explicit and to implement canonical behavior based on these representations instead of the actual types.

The GHC.Generics module defines a number of type constructors for constructing representations:

data V1 p

infixr 5 :+:
data (:+:) f g p = L1 (f p) | R1 (g p)

data U1 p = U1

infixr 6 :*:
data (:*:) f g p = f p :*: g p

newtype K1 i a p = K1 { unK1 :: a }

newtype M1 i a f p = M1 { unM1 :: f p }

All of these type constructors take a final parameter p. This parameter is relevant only when dealing with higher-order classes. In this article, however, we only discuss generic programming with first-order classes. In this case, the parameter p is ignored. The different type constructors play the following roles:

  • V1 is for the nullary sum.

  • (:+:) is for binary sums.

  • U1 is for the nullary product.

  • (:*:) is for binary products.

  • K1 is a wrapper for fields of algebraic data types. Its parameter i was used to provide some information about the field at the type level, but is now obsolete.

  • M1 is a wrapper for attaching meta information at the type level. Its parameter i denotes the kind of the language construct the meta information refers to, and its parameter a provides access to the meta information.

The GHC.Generics module furthermore introduces a class Generic, whose instances are the types for which a representation exists. Its definition is as follows:

class Generic a where

  type Rep a :: * -> *

  from :: a -> (Rep a) p

  to :: (Rep a) p -> a

A type Rep a is the representation of the type a. The methods from and to convert from values of the actual type to values of the representation type and vice versa.

To see all this in action, we make Tree a an instance of Generic:

instance Generic (Tree a) where

  type Rep (Tree a) =
    M1 D D1_Tree
       ( M1 C C1_Tree_Leaf U1
           :+:
         M1 C C1_Tree_Branch
            ( M1 S NoSelector (K1 R (Tree a))
                :*:
              M1 S NoSelector (K1 R a)
                :*:
              M1 S NoSelector (K1 R (Tree a))
            )
       )

  from Leaf = M1 (L1 (M1 U1))
  from (Branch left root right) =
    M1 (R1 (M1 (M1 (K1 left) :*: M1 (K1 root) :*: M1 (K1 right))))

  to (M1 (L1 (M1 U1))) = Leaf
  to (M1 (R1 (M1 (M1 (K1 left) :*: M1 (K1 root) :*: M1 (K1 right))))) =
    Branch left root right

The types D1_Tree, C1_Tree_Leaf, and C1_Tree_Branch are type-level representations of the type constructor Tree, the data constructor Leaf, and the data constructor Branch, respectively. We declare them as empty types:

data D1_Tree
data C1_Tree_Leaf
data C1_Tree_Branch

We need to make these types instances of the classes Datatype and Constructor, which are part of GHC.Generics as well. These classes provide a link between the type-level representations of type and data constructors and the meta information related to them. This meta information particularly covers the identifiers of the type and data constructors, which are needed when implementing canonical implementations for methods like show and read. The instance declarations for the Tree-related types are as follows:

instance Datatype D1_Tree where

  datatypeName _ = "Tree"

  moduleName _ = "Main"

instance Constructor C1_Tree_Leaf where

  conName _ = "Leaf"

instance Constructor C1_Tree_Branch where

  conName _ = "Branch"

Instantiating the Generic class as shown above is obviously an extremely tedious task. However, it is possible to instantiate Generic completely automatically for any given algebraic data type, using the deriving syntax. This is what the DeriveGeneric language extension makes possible.

So instead of making Tree a an instance of Generic by hand, as we have done above, we could have declared the Tree type as follows in the first place:

data Tree a = Leaf | Branch (Tree a) a (Tree a) deriving (Show, Generic)

Implementing canonical behavior

As mentioned above, we implement canonical behavior based on representations. Let us see how this works in the case of the Serializable class.

We introduce a new class Serializable' whose methods provide serialization and deserialization for representation types:

class Serializable' f where

  put' :: f p -> [Bit]

  get' :: [Bit] -> (f p, [Bit])

We instantiate this class for all the representation types:

instance Serializable' U1 where

  put' U1 = []

  get' bits = (U1, bits)

instance (Serializable' r, Serializable' s) => Serializable' (r :*: s) where

  put' (rep1 :*: rep2) = put' rep1 ++ put' rep2

  get' bits = (rep1 :*: rep2, bits'')
    where
      (rep1, bits')  = get' bits
      (rep2, bits'') = get' bits'

instance Serializable' V1 where

  put' _ = error "attempt to put a void value"

  get' _ = error "attempt to get a void value"

instance (Serializable' r, Serializable' s) => Serializable' (r :+: s) where

  put' (L1 rep) = O : put' rep
  put' (R1 rep) = I : put' rep

  get' (O : bits) = let (rep, bits') = get' bits in (L1 rep, bits')
  get' (I : bits) = let (rep, bits') = get' bits in (R1 rep, bits')

instance Serializable' r => Serializable' (M1 i a r) where

  put' (M1 rep) = put' rep

  get' bits = (M1 rep, bits')
    where
      (rep, bits') = get' bits

instance Serializable a => Serializable' (K1 i a) where

  put' (K1 val) = put val

  get' bits = (K1 val, bits')
    where
      (val, bits') = get bits

Note that in the case of K1, the context mentions Serializable, not Serializable', and the methods put' and get' call put and get, not put' and get'. The reason is that the value wrapped in K1 has an ordinary type, not a representation type.

Accessing canonical behavior

We can now apply canonical behavior to ordinary types using the methods from and to from the Generic class. For example, we can implement functions defaultPut and defaultGet that provide canonical serialization and deserialization for all instances of Generic:

defaultPut :: (Generic a, Serializable' (Rep a)) => a -> [Bit]
defaultPut = put' . from

defaultGet :: (Generic a, Serializable' (Rep a)) => [Bit] -> (a, [Bit])
defaultGet bits = (to rep, bits')
  where
    (rep, bits') = get' bits

We can use these functions in instance declarations for Serializable. For example, we can make Tree a an instance of Serializable in the following way:

instance Serializable a => Serializable (Tree a) where

  put = defaultPut

  get = defaultGet

Compared to the instance declaration we had initially, this one is a real improvement, since we do not have to implement the desired behavior of put and get by hand anymore. However, it still contains boilerplate code in the form of the trivial method declarations. It would be better to establish defaultPut and defaultGet as defaults in the class declaration:

class Serializable a where

  put :: a -> [Bit]
  put = defaultPut

  get :: [Bit] -> (a, [Bit])
  get = defaultGet

However, this is not possible, since the types of defaultPut and defaultGet are less general than the types of put and get, as they put additional constraints on the type a. Luckily, GHC supports the language extension DefaultSignatures, which allows us to give default implementations that have less general types than the actual methods (and consequently work only for those instances that are compatible with these less general types). Using DefaultSignatures, we can declare the Serializable class as follows:

class Serializable a where

  put :: a -> [Bit]
  default put :: (Generic a, Serializable' (Rep a)) => a -> [Bit]
  put = defaultPut

  get :: [Bit] -> (a, [Bit])
  default get :: (Generic a, Serializable' (Rep a)) => [Bit] -> (a, [Bit])
  get = defaultGet

With this class declaration in place, we can make Tree a an instance of Serializable as follows:

instance Serializable a => Serializable (Tree a)

With the minor extension DeriveAnyClass, which is provided by GHC starting from Version 7.10, we can even use the deriving keyword to instantiate Serializable for Tree a. As a result, we only have to write the following in order to define the Tree type and make it an instance of Serializable:

data Tree a = Leaf | Branch (Tree a) a (Tree a)
  deriving (Show, Generic, Serializable)

So finally, we can use our own classes like the Haskell standard classes regarding the use of deriving clauses, except that we have to additionally derive an instance declaration for Generic.

Specialized implementations

Usually, not all instances of a class should or even can be generated by means of generic programming, but some instances have to be crafted by hand. For example, making Int an instance of Serializable requires manual work, since Int is not an algebraic data type.

However, there is no problem with this, since we still have the opportunity to write explicit instance declarations, despite the presence of a generic solution. This is in line with the standard deriving mechanism: you can make use of it, but you are not forced to do so. So we can have the following instance declaration, for example:

instance Serializable Int where

  put val = replicate val I ++ [O]

  get bits = (length is, bits')
    where
      (is, O : bits') = span (== I) bits

Of course, the serialization approach we use here is not very efficient, but the instance declaration illustrates the point we want to make.
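Pulling the article together, here is a condensed, self-contained version of the whole construction that can be compiled and run as-is. The Bit element type's instance, the error case for exhausted input in the sum instance, and the main action are my additions; everything else follows the article's code:

```haskell
{-# LANGUAGE DefaultSignatures #-}
{-# LANGUAGE DeriveGeneric #-}
{-# LANGUAGE FlexibleContexts #-}
{-# LANGUAGE TypeOperators #-}

import GHC.Generics

data Bit = O | I deriving (Eq, Show, Generic)

class Serializable a where
  put :: a -> [Bit]
  default put :: (Generic a, Serializable' (Rep a)) => a -> [Bit]
  put = put' . from

  get :: [Bit] -> (a, [Bit])
  default get :: (Generic a, Serializable' (Rep a)) => [Bit] -> (a, [Bit])
  get bits = let (rep, bits') = get' bits in (to rep, bits')

class Serializable' f where
  put' :: f p -> [Bit]
  get' :: [Bit] -> (f p, [Bit])

instance Serializable' U1 where
  put' U1   = []
  get' bits = (U1, bits)

instance (Serializable' r, Serializable' s) => Serializable' (r :*: s) where
  put' (rep1 :*: rep2) = put' rep1 ++ put' rep2
  get' bits = let (rep1, bits')  = get' bits
                  (rep2, bits'') = get' bits'
              in (rep1 :*: rep2, bits'')

instance (Serializable' r, Serializable' s) => Serializable' (r :+: s) where
  put' (L1 rep) = O : put' rep
  put' (R1 rep) = I : put' rep
  get' (O : bits) = let (rep, bits') = get' bits in (L1 rep, bits')
  get' (I : bits) = let (rep, bits') = get' bits in (R1 rep, bits')
  get' []         = error "unexpected end of input"

instance Serializable' r => Serializable' (M1 i c r) where
  put' (M1 rep) = put' rep
  get' bits = let (rep, bits') = get' bits in (M1 rep, bits')

instance Serializable a => Serializable' (K1 i a) where
  put' (K1 val) = put val
  get' bits = let (val, bits') = get bits in (K1 val, bits')

-- canonical behavior gives put O = [O], put I = [I]
instance Serializable Bit

data Tree a = Leaf | Branch (Tree a) a (Tree a)
  deriving (Eq, Show, Generic)

instance Serializable a => Serializable (Tree a)

main :: IO ()
main = do
  let t = Branch Leaf O (Branch Leaf I Leaf) :: Tree Bit
  print (put t)                   -- [I,O,O,I,O,I,O]
  print (fst (get (put t)) == t)  -- True: round-trip succeeds
```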

Tagged: functional programming, generic programming, GHC, Haskell, Institute of Cybernetics, literate programming, parametric polymorphism, talk, Theory Lunch, type class, type family, void (Haskell package)
Categories: Offsite Blogs

funny comics about Monad

haskell-cafe - Mon, 02/22/2016 - 7:07am
check this one!
Categories: Offsite Discussion

FP Complete: Testing GHC with Stackage

Planet Haskell - Mon, 02/22/2016 - 6:00am

For the past few releases of GHC (I think 7.10.2 and 7.10.3), I've tried to help out with the testing effort by using Stackage Nightly as a "stress test." Stackage Nightly provides a large (1810 packages at last count) collection of real-world Haskell packages to test compatibility. For minor version upgrades of GHC, this stress test is especially good, since virtually no packages should stop working with a minor version bump (besides things like conflicting identifier or module names). The test does end up providing quite a few false positives for a new major GHC release, but it is still quite informative.

Until now, each time I ran the test was a manual procedure. It didn't take that much time to set up, but (1) did take some non-zero part of my time, and (2) made it impossible for others to run this test (or even automate it) without my help. I got this set up as the ghc-rc-stackage project on Github, and I wanted to put up a post explaining how to use it in case GHC developers or testers want to play around with this.

Quickstart instructions

From a Linux machine with Docker installed, run the following from any directory:

docker pull fpco/ghc-rc-stackage
docker run --rm -it \
  -v `pwd`/build:/build -v `pwd`/fake-home:/fake-home \
  -e USERID=`id -u` \
  fpco/ghc-rc-stackage \
  /stackage/

This will populate a build subdirectory with various content, including binary artifacts, generated documentation, and, most importantly, build logs. Those will be present in build/logs/nightly. Pay attention to the console output to see which packages failed so you know which logs to pay attention to.

This script will try to build the most recent Stackage Nightly snapshot with the current GHC release candidate selected for the repo (at time of writing, this is GHC 8.0.1 rc2).

Diving deeper

In order to perform these builds, we have a Docker image that contains:

  • An Ubuntu base system
  • A bunch of system libraries needed for building Stackage packages
  • The relevant GHC release candidate
  • The stack and stackage-curator binaries necessary for running a Stackage build
  • A script (/stackage/ that downloads the YAML configuration for the most recent Stackage Nightly and calls the stackage-curator executable with appropriate command line arguments to kick off the build.

The docker run line above does some magic to bind-mount appropriate directories and set the USERID environment variable used by that script, which will in turn be used to set appropriate permissions.


Making local modifications to this setup is easy, just:

  • clone the Github repository
  • modify the Dockerfile and as desired
  • run docker build --tag fpco/ghc-rc-stackage .
  • run the same docker run command from above

These steps are covered in the project's


This is intended to provide basic functionality. The only planned enhancement I have right now is bumping the links for future release candidates. As usual, if you have ideas for improvement, pull requests are very much welcome!

Categories: Offsite Blogs

Compile times and separate compilation

haskell-cafe - Sat, 02/20/2016 - 11:56am
Random thought about compile times: could separate compilation be made even more fine-grained by taking it to the level of individual top-level identifiers, rather than modules? This would probably help slow recompiles a lot. Tom
Categories: Offsite Discussion

ANN: tasty-discover

General haskell list - Sat, 02/20/2016 - 3:45am
Hi folks, happy to announce a first iteration of a test discovery and runner tool for the tasty framework. It's a small program based on a fork of hspec-discover and some tasty-th magic, which:

  • discovers all test modules under your test suite `hs-source-dirs`
  • parses the files for test names with the prefix `prop_` and `case_`
  • generates the boilerplate and runs the tests

It has the potential to be a flexible test discovery tool for many test libraries. Best, Luke
Categories: Incoming News

Brent Yorgey: The network reliability problem

Planet Haskell - Fri, 02/19/2016 - 7:51pm

Let G = (V, E) be a directed graph with vertex set V and edge set E. Multiple edges between the same pair of vertices are allowed. For concreteness’ sake, think of the vertices as routers, and the edges as (one-way) connections. Let I = [0, 1] denote the set of probabilities, and let p : E → I be a function which assigns some probability to each edge. Think of p(e) as the probability that a single message sent along the edge e from the source router will successfully reach the target router on the other end.

Suppose that when a router receives a message on an incoming connection, it immediately resends it on all outgoing connections. For u, v ∈ V, let P(u, v) denote the probability that, under this “flooding” scenario, at least one copy of a message originating at u will eventually reach v.

For example, consider a simple network consisting of two parallel links from s to t, with success probabilities p1 and p2.

A message sent from s along the first link has probability p1 of arriving at t, and a message sent along the second link has probability p2 of arriving at t. One way to think about computing the overall probability is to compute the probability that it is not the case that the message fails to traverse both links, that is, P(s, t) = 1 - (1 - p1)(1 - p2). Alternatively, expanding the product shows that P(s, t) = p1 + p2 - p1*p2 as well. Intuitively, since the two events are not mutually exclusive, if we add them we are double-counting the situation where both links work, so we subtract the probability of both working.

The question is, given some graph G and some specified nodes s and t, how can we efficiently compute P(s, t)? For now I am calling this the "network reliability problem" (though I fully expect someone to point out that it already has a name). Note that it might make the problem a bit easier to restrict to directed acyclic graphs; but the problem is still well-defined even in the presence of cycles.
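To make the problem statement concrete, here is a brute-force sketch (my own illustration for this digest, not the efficient solution the post goes on to promise): enumerate every subset of edges, treat each edge as independently "working" with its given probability, and sum the probabilities of the subsets in which t is reachable from s. The `Edge` representation and the names `reachable` and `reliability` are invented for the sketch; the running time is exponential in the number of edges, so this is only viable for tiny graphs.

```haskell
import Data.List (nub)

type Vertex = Int
type Edge   = (Vertex, Vertex, Double)  -- (source, target, success probability)

-- Vertices reachable from a start vertex using only the given working edges.
reachable :: [(Vertex, Vertex)] -> Vertex -> [Vertex]
reachable es s = go [s] [s]
  where
    go [] seen = seen
    go (v:vs) seen =
      let next = [ t | (u, t) <- es, u == v, t `notElem` seen ]
      in  go (vs ++ next) (nub (seen ++ next))

-- P(s, t): probability that at least one copy of a message from s reaches t,
-- computed by summing over all 2^|E| "which edges worked" scenarios.
reliability :: [Edge] -> Vertex -> Vertex -> Double
reliability edges s t = sum
  [ product [ if keep then p else 1 - p | ((_, _, p), keep) <- zip edges mask ]
  | mask <- sequence (replicate (length edges) [True, False])
  , let working = [ (u, v) | ((u, v, _), True) <- zip edges mask ]
  , t `elem` reachable working s
  ]

main :: IO ()
main = do
  -- Two parallel links from vertex 0 to vertex 1, as in the example above.
  let p1 = 0.9; p2 = 0.8
  print (reliability [(0, 1, p1), (0, 1, p2)] 0 1)
  print (1 - (1 - p1) * (1 - p2))  -- agrees with the above, up to rounding
```

Running it on the two-link example reproduces the closed form 1 - (1 - p1)(1 - p2) from the post.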

This problem turned out to be surprisingly more difficult and interesting than it first appeared. In a future post or two I will explain my solution, with a Haskell implementation. In the meantime, feel free to chime in with thoughts, questions, solutions, or pointers to the literature.

Categories: Offsite Blogs

Munich Haskell Meeting, 2016-02-23 @ 19:30

haskell-cafe - Fri, 02/19/2016 - 2:10pm
Dear all, Next week, our monthly Munich Haskell Meeting will take place again on Tuesday, February 23 at Cafe Puck at 19h30. For details see here: If you plan to join, please add yourself to this dudle so we can reserve enough seats! It is OK to add yourself to the dudle anonymously or pseudonymously. Everybody is welcome! cu,
Categories: Offsite Discussion

ARRAY 2016 Call for Papers

General haskell list - Fri, 02/19/2016 - 1:40pm
*************************************************************************** CALL FOR PAPERS ARRAY 2016 3rd ACM SIGPLAN International Workshop on Libraries, Languages and Compilers for Array Programming Santa Barbara, CA, USA June 14, 2016 part of PLDI 2016 37th Annual ACM SIGPLAN Conference on Programming Language Design and Implementation June 13-17, 2016 *************************************************************************** Focus and Description: Array-oriented programming is a powerful abstraction for compactly implementing numerically intensive algorithms. Many modern languages now provide som
Categories: Incoming News

Haskell ITA meetup in Florence, Italy (2016-03-26)

haskell-cafe - Fri, 02/19/2016 - 1:33pm
Hello everyone, I'd like to announce that on March 26 (2016-03-26) we're going to host the third meetup of the Haskell ITA user group ( near Florence, Italy. This time we're going to focus more on the practical part rather than the talks: we're going to form small groups in order to create some small project or contribute to an open-source library (Stack was proposed, but any suggestion is welcome). If you decide to attend, it's important that you have the development environment already configured; you can use this guide (in Italian) to do it: There will be people from all over Italy, and we'll use the Italian language during the event. Here's the link for the registration: Italian version will follow --- Sign up here: This is the third event of the Italian Haskell programmers' group. Unlike the two previous events, we want to focus more on the practical part.
Categories: Offsite Discussion

Manual type-checking in graphs: Avoidable?

haskell-cafe - Fri, 02/19/2016 - 4:50am
I use FGL, which (roughly) defines type Gr a b as a graph on nodes of type a and edges of type b. Suppose you wanted a graph that described which people own which hamsters, knowing only their name. You would have to make node and edge types like this: data GraphNode = Person String | Hamster String data GraphEdge = Has where the strings represent their names. Suppose then you wanted to write a function that, given a person, returns the names of all their hamsters. To make sure the call makes sense, the function would have to first check that the input is in fact a person. Since persons and hamsters are both constructors of the same type, you can't let Haskell's robust, beautiful type-checking system distinguish them for you; you've got to write something like "case n of Person _ -> True; _ -> False". Is there some way around writing such manual checks?
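The situation described above can be sketched self-containedly (using a plain edge list in place of FGL's Gr type; the names `isPerson` and `hamsterNames` are invented for illustration):

```haskell
-- The node and edge types from the question: the type system cannot tell
-- a Person node from a Hamster node, so queries must check at runtime.
data GraphNode = Person String | Hamster String deriving (Eq, Show)
data GraphEdge = Has deriving (Eq, Show)

-- The manual check the post complains about.
isPerson :: GraphNode -> Bool
isPerson (Person _) = True
isPerson _          = False

-- Names of all hamsters a given node Has, but only if the node is a
-- person; a nonsensical query (a hamster owning hamsters) yields Nothing.
hamsterNames :: [(GraphNode, GraphEdge, GraphNode)] -> GraphNode -> Maybe [String]
hamsterNames edges n
  | isPerson n = Just [ name | (owner, Has, Hamster name) <- edges, owner == n ]
  | otherwise  = Nothing

main :: IO ()
main = do
  let g = [ (Person "Ada", Has, Hamster "Nibbles")
          , (Person "Ada", Has, Hamster "Whiskers") ]
  print (hamsterNames g (Person "Ada"))      -- Just ["Nibbles","Whiskers"]
  print (hamsterNames g (Hamster "Nibbles")) -- Nothing
```

The runtime guard in `hamsterNames` is exactly the kind of manual check the question asks how to avoid.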
Categories: Offsite Discussion

Confusing behavior in Haskell networking libraries

haskell-cafe - Fri, 02/19/2016 - 1:22am
I was trying to write a web scraper, so I used scalpel <>. The website I wanted to scrape blocks my IP (I run a Tor exit node), so I decided to use proxychains <> (specifically, version 3.1-6 according to Debian). I ran into the following weird behavior: if I tell proxychains to run DNS through the proxy, things are fine, but if I tell it to run DNS in the clear or the URL I'm trying to connect to is an IP address (e.g. manually resolved), I always get timeouts (much faster than it should take). (don't resolve DNS over the proxy) % proxychains stack exec -- test-scalpel "" ProxyChains-3.1 ( |R-chain|-<>-<><>-<--timeout (resolve DNS over the proxy) % proxychains stack exec -- test-scalpel "" ProxyChains-3.1 ( |DNS-request| |R-chain|-<>-<><>-<><>-OK |DNS-response| ifco
Categories: Offsite Discussion