News aggregator

How to test your idea (predicates, key-functions) rather than the prelude function?

Haskell on Reddit - Fri, 09/25/2015 - 4:56am

Hi /r/haskell,

I absolutely love the idea of property-based testing, but I have a hard time coming up with good tests. A lot of the work I do is stringing maps, filters and sorts together. The thing is that I feel I am not testing the ideas behind composing the functions, but rather the Prelude functions themselves.

To give an example to guide the discussion: I had to write a program that, given a blacklist of ids and a list of sets of ids, removes the sets that contain one or more of the ids in the blacklist. (The original is written in Python, but is reproduced here in Haskell. It's essentially the same.)

    import qualified Data.Set as Set

    blacklist = Set.fromList [2,3]

    idset = [Set.fromList [2,5,6], Set.fromList [5,4,6]]

    blacklist_filtering = filter (Set.null . Set.intersection blacklist)

    main = print $ blacklist_filtering idset
    -- result: [fromList [4,5,6]]

So how would one test this? We could test that applying the blacklist filter again to the already-filtered list gives the same result. However, does this really test my logic, or is it more a test of filter? I think it is more a test of filter. One could argue that in a language like Python, which has side effects and state, such a test is still useful. However, I know that my function behaves as a function without side effects.

What would be a good test for blacklist_filtering, both in a functional language and in a non-functional language like Python (which has property-based testing support via the hypothesis package)? Or is this something you normally wouldn't test?
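One direction (an editor's sketch, not from the original post) is to state the filter's defining properties directly with QuickCheck: no surviving set overlaps the blacklist, and every dropped set does. The parameterized blacklist_filtering' and the property names below are illustrative:

    import qualified Data.Set as Set
    import Test.QuickCheck

    -- Assumed variant of blacklist_filtering with the blacklist as a parameter.
    blacklist_filtering' :: Ord a => Set.Set a -> [Set.Set a] -> [Set.Set a]
    blacklist_filtering' bl = filter (Set.null . Set.intersection bl)

    -- No surviving set overlaps the blacklist.
    prop_noOverlap :: [Int] -> [[Int]] -> Bool
    prop_noOverlap bl sets =
      all (Set.null . Set.intersection bl') (blacklist_filtering' bl' sets')
      where
        bl'   = Set.fromList bl
        sets' = map Set.fromList sets

    -- Every dropped set really did overlap the blacklist.
    prop_droppedOverlap :: [Int] -> [[Int]] -> Bool
    prop_droppedOverlap bl sets =
      all (not . Set.null . Set.intersection bl')
          [s | s <- sets', s `notElem` blacklist_filtering' bl' sets']
      where
        bl'   = Set.fromList bl
        sets' = map Set.fromList sets

    main :: IO ()
    main = do
      quickCheck prop_noOverlap
      quickCheck prop_droppedOverlap

Together these pin down the behaviour exactly: a set survives if and only if it misses the blacklist, so a failure points at the predicate rather than at filter.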

Furthermore, this question isn't limited to filter. You could ask the same about sort, takeWhile, et cetera: almost everything that takes a predicate or a key function. To expand: sort is also idempotent, but sorting the same list again doesn't really test the key function you are sorting on. (Assuming you're using the Prelude sort and not a custom one.)
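In the same spirit (again an editor's sketch, with a made-up byUrgency key), a property can state what the key function is for, rather than re-sorting:

    import Data.List (sortOn)
    import Test.QuickCheck

    -- Hypothetical domain type and key function whose logic we want to test.
    newtype Task = Task { priority :: Int } deriving (Show, Eq)

    byUrgency :: Task -> Int
    byUrgency = negate . priority

    -- The property says what the key is *for*: highest priority comes out first.
    prop_highestFirst :: [Int] -> Bool
    prop_highestFirst ps =
      let sorted = map priority (sortOn byUrgency (map Task ps))
      in and (zipWith (>=) sorted (drop 1 sorted))

    main :: IO ()
    main = quickCheck prop_highestFirst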

submitted by exarge
Categories: Incoming News

Call for Contributions: BOB 2016 - Berlin, Feb 19, 2016

Haskell on Reddit - Fri, 09/25/2015 - 3:49am
BOB Conference 2016
"What happens when we use what's best for a change?"
Berlin, February 19, 2016
Call for Contributions
Deadline: October 30, 2015

Do you drive advanced software engineering methods, implement ambitious architectures, and stay open to cutting-edge innovation? Attend this conference, meet people who share your goals, and get to know the best software tools and technologies available today. We strive to offer a day full of new experiences and impressions that you can use to immediately improve your daily life as a software developer.

If you share our vision and want to contribute, submit a proposal for a talk or tutorial!


We are looking for talks about best-of-breed software technology, e.g.:

  • functional programming
  • reactive programming
  • persistent data structures and databases
  • types
  • formal methods for correctness and robustness
  • ... everything really that isn't mainstream, but you think should be.

Presenters should provide the audience with information that is practically useful for software developers. This could take the form of e.g.:

  • experience reports
  • introductory talks on technical background
  • demos and how-tos

We accept proposals for presentations of 45 minutes (40 minutes of talk + 5 minutes of questions), as well as 90-minute tutorials for beginners. The language of presentation should be either English or German.

Your proposal should include (in your presentation language of choice):

  • an abstract of max. 1500 characters.
  • a short bio/cv
  • contact information (including at least email address)
  • a list of 3-5 concrete ideas of how your work can be applied in a developer's daily life
  • additional material (websites, blogs, slides, videos of past presentations, ...)

Submit here:


NOTE: The conference fee will be waived for presenters, but travel expenses will not be covered.


The program committee offers shepherding to all speakers. Shepherding provides speakers with assistance in preparing their sessions, as well as a review of the talk slides.

Program Committee

(more information here:

  • Matthias Fischmann, zerobuzz UG
  • Matthias Neubauer, SICK AG
  • Nicole Rauch, Softwareentwicklung und Entwicklungscoaching
  • Michael Sperber, Active Group
  • Stefan Wehr, factis research
Scientific Advisory Board
  • Annette Bieniusa, TU Kaiserslautern
  • Peter Thiemann, Uni Freiburg
submitted by 34798s7d98t6
Categories: Incoming News

ANN: CfN for new Haskell Prime language committee

General haskell list - Thu, 09/24/2015 - 10:56pm
Dear Haskell Community,

In short, it's time to assemble a new Haskell Prime language committee. Please refer to the CfN at for more details.

Cheers, hvr
Categories: Incoming News

Monad of no `return` Proposal (MRP): Moving `return` out of `Monad`

libraries list - Thu, 09/24/2015 - 10:43pm
Hello *,

Concluding AMP and MFP, we (David and I) proudly present you the final installment of the Monad trilogy:

Monad of no `return` Proposal
=============================

TLDR: To complete the AMP, turn the `Monad(return)` method into a top-level binding aliasing `Applicative(pure)`.

Current Situation
-----------------

With the implementation of the Functor-Applicative-Monad Proposal (AMP)[1] and (at some point) the MonadFail proposal (MFP)[2], the AMP class hierarchy becomes

    class Functor f where
      fmap :: (a -> b) -> f a -> f b

    class Functor f => Applicative f where
      pure  :: a -> f a
      (<*>) :: f (a -> b) -> f a -> f b

      (*>) :: f a -> f b -> f b
      u *> v = …

      (<*) :: f a -> f b -> f a
      u <* v = …

    class Applicative m => Monad m where
      (>>=) :: m a -> (a -> m b) -> m b

      return :: a -> m a
      return = pure

      (>>) :: m a -> m b -> m b
      m >> k = …
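As an illustration of what the proposal means in practice (an editor's sketch, not part of the quoted email): under the proposed hierarchy a Monad instance defines only (>>=), and `return` needs no definition of its own, since it is just `pure`:

    newtype Identity a = Identity { runIdentity :: a }

    instance Functor Identity where
      fmap f (Identity a) = Identity (f a)

    instance Applicative Identity where
      pure = Identity
      Identity f <*> Identity a = Identity (f a)

    instance Monad Identity where
      Identity a >>= f = f a
      -- no `return` here: today it defaults to `pure`, and under MRP
      -- it would become a plain top-level alias for `pure`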
Categories: Offsite Discussion

ANNOUNCE: polymap

General haskell list - Thu, 09/24/2015 - 7:53pm
I'm excited to announce the first release of a package I've been working on over the last week called polymap, a library providing type-safe polygonal maps whose sides are defined by a kind-level list of types zipped with a storage type for each side. I've tried to match the interface exposed by the containers package as closely as possible. For example:

    import Data.Set (Set)
    import Data.PolyMap.Nat (first, second, third)
    import qualified Data.PolyMap as PM

    mEmpty :: PM.PolyMap '[ '(String, Set), '(Int, Set) ]
    mEmpty = PM.empty

    mOne :: PM.SimplePolyMap '[String, Int] Set
    mOne = PM.singleton ("one", length "one")

    main = do
      print mEmpty                             -- empty PolyMap
      print mOne                               -- PolyMap with one Relation
      print mTwo                               -- PolyMap with two Relations
      print (PM.member first "one" mTwo)       -- True
      print (PM.notMember first "asdf" mTwo)   -- True
      --print (PM.notMember second "asdf" mTwo) -- will not typecheck
Categories: Incoming News

STM implementation

haskell-cafe - Thu, 09/24/2015 - 4:22pm
Hi all,

I'm considering the idea of hacking into the STM implementation, but the source in rts/STM.c seems rather dense. Is there some documentation about the internals, or some place to start to understand what's going on?

Thank you,
Nicola
Categories: Offsite Discussion

Noam Lewis: Two implementations of DHM type inference

Planet Haskell - Thu, 09/24/2015 - 3:51pm

Here are two simple implementations of Damas-Hindley-Milner type inference.

First is my Haskell version of a region-based optimized type checker, as explained by Oleg Kiselyov in his excellent review of the optimizations to generalization used in OCaml. Oleg gives an SML implementation, which I've Haskellized rather mechanically (using ST instead of mutable references, etc.). The result is a bit ugly, but it does include all the optimizations explained by Oleg (both lambda-depth / region / level fast generalization and instantiation, plus path compression on linked variables, and avoiding expensive occurs checks by delaying them to whenever we traverse the types anyway).

Second, here's my much shorter and more elegant implementation using the neat unification-fd package by Wren Romano. It's less optimized, though: currently I'm not doing regions or other optimizations. I'm not entirely satisfied with how it looks: I'm guessing this isn't how the author of unification-fd intended generalization to be implemented, but it works. Generalization does the expensive lookup of free metavariables in the type environment. The instantiate function is also a bit clunky. The unification-fd package itself does inexpensive occurs checks as above, and path compression, but doesn't provide an easy way to keep track of lambda-depth, so I skipped that part. Perhaps the Variable class should include a parameterized payload per variable, which could be used for (among other things) keeping track of lambda-depth.
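For readers unfamiliar with the level trick, here is a minimal sketch (an editor's illustration in the spirit of Oleg's article, not Noam's actual code) of how lambda-depth levels make generalization cheap: each unbound metavariable records the level at which it was created, and generalization quantifies exactly the variables born deeper than the current binding, with no scan of the type environment:

    import Data.IORef

    type Level = Int

    data Ty
      = TCon String
      | TArr Ty Ty
      | TVar (IORef TV)

    data TV
      = Unbound Int Level  -- fresh metavariable, tagged with its creation level
      | Link Ty            -- solved: a (path-compressed) pointer to its solution

    -- Quantify a metavariable iff it was created deeper than the
    -- level of the let-binding being generalized.
    shouldGeneralize :: Level -> TV -> Bool
    shouldGeneralize here (Unbound _ lvl) = lvl > here
    shouldGeneralize _    (Link _)        = False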

Tagged: Haskell
Categories: Offsite Blogs

The Incredible Proof Machine

Haskell on Reddit - Thu, 09/24/2015 - 9:40am
Categories: Incoming News

Darcs: darcs hacking sprint 9 report

Planet Haskell - Thu, 09/24/2015 - 9:35am
After an absence of a year and a half, the Darcs Hacking Sprint returned!

Once again, the event occurred at the IRILL (Innovation and Research Initiative for Free Software) in Paris, from September 18th to 20th.

The sprint had 7 participants: Daniil Frumin, Eric Kow, Florent Becker, Ganesh Sittampalam, Guillaume Hoffmann, Thomas Miedema and Vinh Dang.

Darcs and GHC 8

Thomas Miedema is a Haskell and GHC hacker, and came on the first day of the sprint. Since Darcs aims to support the various GHC versions out there, Thomas helped us prepare for GHC 8, the next major version. He explained to us one issue of GHC 8 that got triggered by Darcs: a bug with the PatternSynonyms extension. Fortunately it seems that the bug will be fixed in GHC HEAD. (The first release candidate is planned for December.)

Thomas explaining PatternSynonyms to Eric and Ganesh

Diving into SelectChanges and PatchChoices code

On the first day I (Guillaume) claimed the "rollback takes ages" bug, which made me look into the SelectChanges and PatchChoices code. The result is that I haven't yet fixed the bug, but I discovered that patch matching was unnecessarily strict, which I could fix easily. Internally, there are two interesting patch types when it comes to matching:
  • NamedPatch: represents a patch file in _darcs/patches, that is, its info and its contents
  • PatchInfoAnd: represents the info of a patch as read from an inventory file (from _darcs/inventories or _darcs/hashed_inventory), plus a lazy reference to its corresponding NamedPatch.
Now, getting the NamedPatch for some patch is obviously more costly than getting a PatchInfoAnd. You may even have to download the patch file in order to read it (in the case of lazy repositories). Moreover, the majority of matchers only need the patch info (or metadata), not its actual contents. Only two matchers (hunk and touch) need to actually read the patch file, while matching on a patch name, for instance (probably the most common operation), does not.

So, before the sprint, as soon as you wanted to match on a patch, you had to open (and maybe download) its file, even when this was useless. With my change (mostly in Darcs.Patch.Match) we gained a little more laziness, and the unreasonably slow command "rollback -p ." went from 2 minutes to ~15 seconds on my laptop. I hope to push this change into Darcs 2.10.2.
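The shape of the win, in a minimal hypothetical sketch (not darcs's actual types): keep the metadata strict and the contents lazy, so that metadata-only matchers never force, or download, the patch file:

    import Data.List (isInfixOf)

    data PatchInfo = PatchInfo { patchName :: String }

    -- The contents field stands in for the real NamedPatch; because it is
    -- lazy, a name matcher never forces it, so the patch file is neither
    -- read nor downloaded.
    data PatchInfoAnd = PatchInfoAnd
      { piaInfo     :: PatchInfo
      , piaContents :: String
      }

    matchesName :: String -> PatchInfoAnd -> Bool
    matchesName s = (s `isInfixOf`) . patchName . piaInfo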

Eric, Guillaume and Vinh
Now, the real source of the "rollback -p ." slowness is that patch selection is done on FLs (forward lists), while commands like rollback and obliterate naturally work backwards in time, on RLs (reverse lists). Currently, an RL is inverted and then given to the patch selection code, which is not convenient at all! Moreover, the actual representation of the history of a Darcs repository is (close to being) an RL. So it seems like a proper fix for the bug is to generalize the patch selection code to also work on RLs, which may involve a good amount of typeclassing in the relevant modules. I think this will be too big/risky to port to the 2.10 branch, so it will wait for Darcs 2.12.
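For readers who haven't seen the darcs internals, FL and RL are direction-tagged patch sequences, roughly like the following simplified sketch (the real darcs types additionally carry phantom context witnesses):

    -- Simplified sketch of darcs's directed sequences. An FL lists patches
    -- forwards in time (oldest first); an RL lists them backwards (newest
    -- first), which is the natural order for rollback and obliterate.
    data FL p = NilFL | p :>: FL p
    data RL p = NilRL | RL p :<: p

    infixr 5 :>:
    infixl 5 :<: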
Ganesh's new not-yet-officially-named stash command
A few days before the sprint, Ganesh unveiled his "stash" branch. It features a refactoring that makes it possible to suspend patches (i.e., put them into a state where they have no effect on the working copy) without changing their identity (which is what currently happens with the darcs rebase command). This makes it possible to implement a git-stash-like feature.
The sprinters (IRL and on IRC) discussed the possible name of the command that should encapsulate this stash feature. More importantly, on the last day we discussed what the actual UI of such a feature would be. As always when a new feature is coming to darcs, we want to make the UI as darcsy as possible :-)
Coming back to the code: Ganesh's refactoring, though extensive, will also simplify the existing types for suspended patches. We decided to go with it.

Dan's den
Dan demonstrating den (on the left: Florent)

Daniil Frumin was this year's Google Summer of Code student for Darcs. Mentored by Ganesh, he brought improvements to Darcsden, many of which are already deployed. Among them: it is possible to launch a local instance of Darcsden (using an executable called den), not unlike Mercurial's "serve" command.
Dan tells more about his work and this sprint in his latest blog post.
A better website and documentation

As a newcomer to the project, Vinh took a look at the documentation, especially the website of the project. He implemented changes to make the front page less intimidating and more organized. He also had a fresh look at our "quickstart" and proposed improvements which we felt were much needed!
Florent's projects

For this sprint, Florent was more an external visitor than a Darcs hacker. He talked about one of his current projects: Pijul, a version control system with another approach. Check out their website!
Conclusion and the next sprint

In the end this sprint turned out to be more productive and crowded than we initially thought! It had been a long time since the previous one, so we had a lot of things to share at first. Sprints do make synchronization between contributors more effective. They are also a moment when we can concentrate more on the Darcs codebase, and spend more time tackling issues.

Avenue d'Italie, Paris

We would like to thank the IRILL people for hosting the sprint for the third time, and our generous donors for making travel to sprints easier.

We already have a time and a place for the next sprint: Sevilla, Spain, in January 2016! The exact dates will be announced later, but you can already start planning and tell us if you're going.

Thomas, Eric and Ganesh

From left to right: Vinh, Florent, Dan, Ganesh and Eric
Categories: Offsite Blogs

Daniil Frumin: Darcsden improvements and Darcs sprint

Planet Haskell - Thu, 09/24/2015 - 7:06am

This post is intended to be a short summary of my SoC project, as well as my recent trip to Darcs sprint.


I am finishing up this post on the train back from the Autumn 2015 Darcs sprint. Today (Sunday, September 20) was a very fun day, full of darcs chatting and coding. By the end of the day we had heard a number of presentations:

  • Ganesh described his work on the "stash" command for darcs (naming subject to change!). It involves some refactoring of the rebase code. I hope we will hear more from him on that, because the internal workings are actually quite interesting — I believe it's the first time singleton types and DataKinds have been used in the darcs codebase;
  • Florent Becker gave a presentation about Pijul and the theory behind it — A Categorical Theory of Patches by Samuel Mimram and Cinzia Di Giusto, see arXiv:1311.3903;
  • Vinh Dang talked about his improvements on the darcs wiki (it’s about time to organize the website), his goal was to make it more accessible to the newcomers;
  • Yours truly gave a small presentation, an outline of which you will find below:
Looking back

I have spent this summer hacking on DarcsDen as part of the Google Summer of Code program.

My basic goal was to create a "local" version of darcsden. Installing darcsden was not a trivial task (and installation is probably still not very easy!); it uses third-party software like Redis and CouchDB. During my coding process I modified darcsden so that it can now be a good choice for a local (or lightweight single-user) darcs UI. The local darcsden version can be used without any databases, tracking the repositories in the local file system. This way darcsden can be used by a developer on her local computer, like darcsum (for working with/comparing repositories), as well as a replacement for darcsweb/cgit — a single-user web front end for darcs repositories.

Besides that a user of a local version can use darcsden’s interactive UI for recording new patches, as well as a command-line tool den for a quick way of browsing the repositories.

Installing darcsden-local is currently not as easy as I want it to be, but I hope that soon you will be able to install it just by running cabal install darcsden or brew install darcsden. For now, one can do the following:

  1. darcs get --lazy
  2. cabal install . or stack install

This should install the darcsden binary and all the related css/js files. You can start darcsden by running darcsden --local. If you open your web browser you should see a list of repositories in the current directory.

However, you might have repositories scattered all over the place, and scanning your whole system for darcs repositories is just inefficient. For this purpose, darcsden keeps a list of repositories in a file inside your ~/.darcs directory. You can manage that list either by hand or with the command-line den tool (a sample session follows the list):

  • den $PATH — add $PATH to the list of repositories in ~/.darcs/darcsden_repos (if it’s not already present there), start darcsden server in the background and launch the browser pointing to $PATH;
  • den — the same as den .;
  • den --add $PATH — add $PATH to the list of repositories in ~/.darcs/darcsden_repos;
  • den --remove $PATH — remove $PATH from the list of repositories in ~/.darcs/darcsden_repos.
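A sample session using only the flags above might look like this (the path is illustrative):

    $ den --add ~/src/myproject      # register the repository
    $ den ~/src/myproject            # serve it and open it in the browser
    $ den --remove ~/src/myproject   # unregister it again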

In order to further customize darcsden, one can tweak the configuration file located at ~/.darcs/darcsden.conf. Apart from the usual darcsden settings one may pay attention to the following variables:

  • homeDir (default .): points to the "root" directory containing repositories. If the list file ~/.darcs/darcsden_repos is not present, darcsden will recursively search for repositories in that directory
  • unLocal, pwLocal: the username and the password of the "local" user

The user/password credentials are required for editing the repositories and recording new patches. However, the den binary should automatically pick them up from the config file and log you in.

Once you are logged in, and you have unrecorded changes in the repository, you can use darcsden UI to record a new patch.

DarcsDen record

Below you can see an example of recording and merging patches from a branch.

DarcsDen merge

Darcsden allows you to create forks/branches of your repositories, and it keeps track of the patch dependencies in your branches.

More "internal" changes:

  • Instead of having to specify some parts of the configuration in DarcsDen.Settings, darcsden now uses runtime flags: --hub for the hub-specific modifications, --local for the local backend, and no flag for the default behaviour
  • The flags actually choose what are called instances — something a bit less fine-grained than settings. Instances allow you to pick the backend, override settings, and modify the look of the front page.
  • HTTP testing using wreq. The previous test suite used Selenium and had bit-rotted; the wreq-based suite is easier to run and perhaps slightly easier to maintain.
  • HTTP auth, which is used as part of the local instance; the den tool utilizes it to log the user in automatically.
  • Support for repositories inside directories and nested repositories.
  • All the backend code that is used for handling repositories and meta-data on the file system.
  • Functionality for downloading zipped dist archives of darcs repositories.
  • Assorted mini-fixes
What now?

During the sprint I hacked together some code for viewing suspended patches along the regular ones. The next step would be to have a similar interface for managing the suspended patches.

We have also discussed the possibility of adding rewrite rules implementing short-cut fusion for the directed types in Darcs. To see if it's really worth it, we would have to bring the benchmarking suite back to life (or at least check on it!).

It was a really exciting weekend for me and I was delighted to meet some of my IRC friends. As it turns out, it is a small world: despite being from different parts of it, we have a bunch of common IRL friends and professors. As the French would (probably not) say, très bien. The next darcs sprint will probably be in January, and probably in Europe again.

Tagged: darcs, haskell
Categories: Offsite Blogs

Yesod Web Framework: The true root of the PVP debate

Planet Haskell - Thu, 09/24/2015 - 6:45am

I recently wrote a new Stack feature and blogged about it. The feature adds support for PVP bounds in cabal files. I did my very best to avoid stirring up anything controversial. But as usual when the PVP comes up, a disagreement broke out on Reddit about version bounds. It essentially comes down to two different ways to view an upper bound on a package:

  • We've tested, and know for certain that the new version of a dependency is incompatible with our package
  • I can't guarantee that any new versions of a dependency will be compatible with our package

If you look through the history of PVP debates, you'll see that this argument comes up over and over again. I'm going to make a bold statement: if a core feature of how we do package management is so easily confused, there's a problem with how we're using the feature. I made an offhand comment about this on Twitter:

Instead of cabal file version ranges, we should have a set of "built with" versions per package. Let tooling generate and interpret the data

— Michael Snoyman (@snoyberg) September 24, 2015

Based on the positive feedback to that tweet, I'm moving ahead with making a more official proposal.

How PVP bounds work today

Here's the theory of how you're supposed to write PVP bounds in a cabal file:

  • Test your package against a range of dependencies. For example, let's say we tested with text-1.1.2 and text-
  • Modify the build-depends in your .cabal file to say text >= 1.1.2 && < 1.3 (see the stanza after this list), based on the fact that it's known to work with at least version 1.1.2, and not known to work with anything beyond major version 1.2.
  • Next time you make a release, go through this whole process again.
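Concretely, the resulting cabal stanza (an illustrative fragment) would contain:

    library
      build-depends: text >= 1.1.2 && < 1.3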

PVP detractors will respond with a few points:

  • You don't know that your package won't work with text-1.1.1
  • You don't know that your package won't work with text-1.3
  • You don't know for certain that your package will work with text-1.1.3, text-, or text-1.2.1 (yes, it should based on PVP rules, but mistakes can happen).
  • Repeating this testing/updating process manually each time you make a code change is tedious and error-prone.
Collect the real information

If you notice, what we did above was derive the cabal file's metadata (version bounds) from what we actually know (the versions the package is known to work with). I'm going to propose a change: let's capture that real information instead of the proxy data. The data could go into the cabal file itself, a separate metadata file, or a completely different database. In fact, the data doesn't even need to be provided by the package author. Stackage Nightly, for instance, would be a wonderful source of this information.
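To make the proposal concrete, a "built with" record might look something like this (the format is entirely hypothetical; as noted above, the data could just as well live in the .cabal file or a central database):

    # Hypothetical build-success records: one entry per dependency set
    # that a build server (or Stackage Nightly) verified to compile.
    mypackage-1.0.0:
      built-with:
        - [text-1.1.2, bytestring-]
        - [text-, bytestring-]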

A dependency solver - perhaps even cabal-install's - would then be able to extract exactly the same version bound information we have today from this data. We could consider it an intersection between the .cabal file version bounds and the external data. Or, we could ignore .cabal file bounds and jump straight to the database. Or we could even get more adventurous, e.g. preferring known-good build plans (based on build plan history).

In theory, this functionality - if done as a separate database from the .cabal files themselves - means that on-Hackage revisions would be much less important, possibly even a feature that no one needs in the future.


And here's the best part: this doesn't require authors to do anything. We can automate the entire process. There could even be build servers sitting and churning constantly, trying to find combinations that build together. We've already seen how difficult it is to get authors to adopt a policy. The best policy is one that can be automated and machine-run.


Problems I've thought of so far:

  • Some packages (notably Edward Kmett's) have a versioning scheme which expresses more information than the PVP itself, and therefore the generated version bounds from this scheme may be too strict. But that won't necessarily be a problem, since a build server will be able to just test new versions as they come out.
  • This very blog post may start a flame war again, which I sincerely hope doesn't happen.

In order for this to really happen, we need:

  1. Broad support for the idea
  2. Changes to the cabal-install dependency solver (or an alternate replacement)
  3. Central infrastructure for tracking the build successes
  4. Tooling support for generating the build success information

And to be blunt: this is not a problem that actually affects me right now, or most people I work with (curation is in a sense a simplified version of this). If the response is essentially "don't want it," let's just drop it. But if people relying on version bounds today think that this may be a good path forward, let's pursue it.

Categories: Offsite Blogs

first that generalizes for Bifunctor and Arrow?

Haskell on Reddit - Thu, 09/24/2015 - 6:28am

We have 'first' in Arrow, which generalizes over the arrow but is baked to tuples. And we have 'first' in Bifunctor, which generalizes over the container but is baked to plain functions. Is there one / is it practical to have one that generalizes over both?
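To see the overlap concretely (an editor's sketch, not from the original post): specialized to plain functions and pairs, the two agree.

    import qualified Control.Arrow as A
    import qualified Data.Bifunctor as B

    -- Arrow's first: works for any Arrow, but always on a pair.
    f1 :: (Int, String) -> (Int, String)
    f1 = A.first (+1)

    -- Bifunctor's first: works for any Bifunctor, but always with a plain function.
    f2 :: (Int, String) -> (Int, String)
    f2 = B.first (+1)

    main :: IO ()
    main = print (f1 (1, "x") == f2 (1, "x"))  -- True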

submitted by literon
Categories: Incoming News