News aggregator

haskell (and beyond) in education

haskell-cafe - Thu, 08/14/2014 - 6:29pm
I am collecting some data on FP used to introduce programming, i.e., as a first course. Naturally the Haskell link is the first. I was just wondering if there are more extreme cases of this, e.g., are there any universities using Idris/Agda to *introduce* programming/math/proofs, etc.?
Categories: Offsite Discussion

'import ccall unsafe' and parallelism

glasgow-user - Thu, 08/14/2014 - 4:54pm
Greetings everybody, I happen to be a bit confused with regard to unsafe foreign imports and parallelism. Assume the following C function:

foreign import ccall unsafe "cfun" cfun :: CInt -> IO ()

Now, cfun does some work:

go xs = unsafePerformIO $ do
  forM_ xs cfun
  return somethingUnhealthy

And I'd like to parallelize this:

parMap rdeepseq go [costly, costly]

However, due to the way GHC handles unsafe imports, namely blocking everything else whenever 'cfun' is called, I happen to have only one active 'go'. Let's assume 'cfun' is cheap and would suffer from 'ccall safe' more than I'd be willing to pay. Is there any fix possible? Viele Gruesse, Christian

PS: The real problem happens to use a bunch of different Judy arrays, each of which lives in its own thread; 300 Judy arrays, 300 threads, each with up to 20 million inserts. But I think the basic problem can be reduced to "how to parallelize 'ccall unsafe's".
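For illustration, the blocking difference between unsafe and safe imports can be shown with a self-contained binding to libm's sin (an editorial sketch; the post's cfun and Judy-array setup are not reproduced here):

```haskell
{-# LANGUAGE ForeignFunctionInterface #-}

import Foreign.C.Types (CDouble)

-- An 'unsafe' call has very low per-call overhead, but it blocks the
-- calling capability for its duration, so long-running C work
-- serializes Haskell threads sharing that capability.
foreign import ccall unsafe "math.h sin"
  sinUnsafe :: CDouble -> CDouble

-- A 'safe' call costs more per call, but the RTS can keep running
-- other Haskell threads (and garbage collect) while the C code runs.
foreign import ccall safe "math.h sin"
  sinSafe :: CDouble -> CDouble

main :: IO ()
main = print (sinUnsafe 0, sinSafe 0)
```

Note that with the threaded RTS and multiple capabilities (compile with -threaded, run with +RTS -N), unsafe calls made from threads on different capabilities can still proceed in parallel; the serialization described above bites when the callers share a capability.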
Categories: Offsite Discussion

Haskell Freecell Library

Haskell on Reddit - Thu, 08/14/2014 - 3:22pm

I created a library for playing freecell in Haskell. It includes a rudimentary solver that works (some of the time!). Improvements are welcome. You can also, of course, use it just to play text-based freecell (more fun than you think).

Be gentle, please, I'm new.

submitted by tdees40
[link] [10 comments]
Categories: Incoming News

Haskell SDL2 FRP games working on Android

haskell-cafe - Thu, 08/14/2014 - 1:45pm
Hi Café, Answering recent questions posted on this list regarding FRP, I would like to announce that we have a working Android game written in Haskell, using SDL2 and Yampa. A short article describing the implications that this could have, together with the libraries and backend we used, is here: Changes made to existing libraries will be published as we move towards releasing the game on Google Play, and so will examples of Android programs written in Haskell. All the best Ivan
Categories: Offsite Discussion

Understanding ((->) r) as a functor.

Haskell on Reddit - Thu, 08/14/2014 - 1:29pm

I'm reading "learn you a haskell" and after the chapter on applicative functors I had some confusion:

In Haskell the Functor instance of ((->) r) is defined as:

instance Functor ((->) r) where
  fmap f g = \x -> f (g x)

A category is a set of objects K together with a set of morphisms M, i.e., a pair (K, M). As far as I understand it, a Functor is a categorical homomorphism. So the set of all functions together with function composition should form a homomorphism between categories.

I'm assuming that in this category, types of kind * are the objects and the morphisms are normal Haskell functions.

As far as I can gather, (->) is considered to be the morphism between objects (since a -> b still has kind *) and function composition is the morphism between functions. However, this doesn't make much sense to me, since in this case the pair (((->) r), <$>) would be the Functor, not the set of functions. ((->) r) is something completely different and separate from functions? It's obviously a Functor, but why the claim "functions ARE functors" makes sense is still beyond me. To me, functions are morphisms of a category whose Functor is the type class associated with functions. You don't call (Just 5) a functor; you call Maybe a functor.

What am I missing? If someone could explain this to me like I was 5, no 3, I would be extremely happy.
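For what it's worth, the instance quoted above is just function composition, which can be checked directly (a minimal editorial sketch):

```haskell
-- For ((->) r), fmap is composition: fmap f g = f . g
double :: Int -> Int
double = (* 2)

-- fmap post-composes (+ 1) onto double, i.e. (+ 1) . double
incDouble :: Int -> Int
incDouble = fmap (+ 1) double

main :: IO ()
main = print (incDouble 3)  -- (3 * 2) + 1, prints 7
```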

submitted by yyttr3
[link] [4 comments]
Categories: Incoming News

Announcing Stackage Server

Haskell on Reddit - Thu, 08/14/2014 - 10:02am
Categories: Incoming News

Cost semantics for functional languages

Lambda the Ultimate - Thu, 08/14/2014 - 5:53am

There is an ongoing discussion in LtU (there, and there) on whether RAM and other machine models are inherently a better basis to reason about (time and) memory usage than lambda-calculus and functional languages. Guy Blelloch and his colleagues have been doing very important work on this question that seems to have escaped LtU's notice so far.

A portion of the functional programming community has long been of the opinion that we do not need to refer to machines of the Turing tradition to reason about execution of functional programs. Dynamic semantics (which are often perceived as more abstract and elegant) are adequate, self-contained descriptions of computational behavior, which we can elevate to the status of (functional) machine model -- just like "abstract machines" can be seen as just machines.

This opinion has been made scientifically precise by various brands of work, including for example implicit (computational) complexity, resource analysis and cost semantics for functional languages. Guy Blelloch developed a family of cost semantics, which correspond to annotations of operational semantics of functional languages with new information that captures more intensional behavior of the computation: not only the result, but also running time, memory usage, degree of parallelism and, more recently, interaction with a memory cache. Cost semantics are a self-contained way to think of the efficiency of functional programs; they can of course be put in correspondence with existing machine models, and Blelloch and his colleagues have proved a vast number of two-way correspondences, with the occasional extra logarithmic overhead -- or, from another point of view, provided provably cost-effective implementations of functional languages in imperative languages and conversely.
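The flavor of such a cost semantics can be sketched with a toy evaluator that returns, alongside the value, the total work and the parallel depth of the computation (an editorial illustration in the work/depth style, not Blelloch's actual formalism):

```haskell
-- A tiny expression language.
data Expr = Lit Int | Add Expr Expr

-- eval returns (value, work, depth): work is the total number of
-- evaluation steps, and depth is the length of the critical path
-- assuming the two operands of Add are evaluated in parallel
-- (work adds up; depth takes the max of the branches, plus one).
eval :: Expr -> (Int, Int, Int)
eval (Lit n)   = (n, 1, 1)
eval (Add a b) =
  let (va, wa, da) = eval a
      (vb, wb, db) = eval b
  in (va + vb, wa + wb + 1, max da db + 1)

main :: IO ()
main = print (eval (Add (Lit 1) (Lit 2)))
```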

This topic has been discussed by Robert Harper in two blog posts, Language and Machines which develops the general argument, and a second post on recent joint work by Guy and him on integrating cache-efficiency into the model. Harper also presents various cost semantics (called "cost dynamics") in his book "Practical Foundations for Programming Languages".

In chronological order, three papers that are representative of the evolution of this work are the following.

Parallelism In Sequential Functional Languages
Guy E. Blelloch and John Greiner, 1995.
This paper is focused on parallelism, but is also one of the earliest works carefully relating a lambda-calculus cost semantics to several machine models.

This paper formally studies the question of how much parallelism is available in call-by-value functional languages with no parallel extensions (i.e., the functional subsets of ML or Scheme). In particular we are interested in placing bounds on how much parallelism is available for various problems. To do this we introduce a complexity model, the PAL, based on the call-by-value lambda-calculus. The model is defined in terms of a profiling semantics and measures complexity in terms of the total work and the parallel depth of a computation. We describe a simulation of the A-PAL (the PAL extended with arithmetic operations) on various parallel machine models, including the butterfly, hypercube, and PRAM models and prove simulation bounds. In particular the simulations are work-efficient (the processor-time product on the machines is within a constant factor of the work on the A-PAL), and for P processors the slowdown (time on the machines divided by depth on the A-PAL) is proportional to at most O(log P). We also prove bounds for simulating the PRAM on the A-PAL.

Space Profiling for Functional Programs
Daniel Spoonhower, Guy E. Blelloch, Robert Harper, and Phillip B. Gibbons, 2011 (conference version 2008)

This paper clearly defines a notion of ideal memory usage (the set of store locations that are referenced by a value or an ongoing computation) that is highly reminiscent of garbage collection specifications, but without making any reference to an actual garbage collection implementation.

We present a semantic space profiler for parallel functional programs. Building on previous work in sequential profiling, our tools help programmers to relate runtime resource use back to program source code. Unlike many profiling tools, our profiler is based on a cost semantics. This provides a means to reason about performance without requiring a detailed understanding of the compiler or runtime system. It also provides a specification for language implementers. This is critical in that it enables us to separate cleanly the performance of the application from that of the language implementation. Some aspects of the implementation can have significant effects on performance. Our cost semantics enables programmers to understand the impact of different scheduling policies while hiding many of the details of their implementations. We show applications where the choice of scheduling policy has asymptotic effects on space use. We explain these use patterns through a demonstration of our tools. We also validate our methodology by observing similar performance in our implementation of a parallel extension of Standard ML.

Cache and I/O efficient functional algorithms
Guy E. Blelloch, Robert Harper, 2013 (see also the shorter CACM version)

The cost semantics in this last work incorporates more notions from garbage collection, to reason about cache-efficient allocation of values -- in that it relies on work on formalizing garbage collection that has been mentioned on LtU before.

The widely studied I/O and ideal-cache models were developed to account for the large difference in costs to access memory at different levels of the memory hierarchy. Both models are based on a two level memory hierarchy with a fixed size primary memory (cache) of size \(M\), an unbounded secondary memory, and assume unit cost for transferring blocks of size \(B\) between the two. Many algorithms have been analyzed in these models and indeed these models predict the relative performance of algorithms much more accurately than the standard RAM model. The models, however, require specifying algorithms at a very low level requiring the user to carefully lay out their data in arrays in memory and manage their own memory allocation.

In this paper we present a cost model for analyzing the memory efficiency of algorithms expressed in a simple functional language. We show how many algorithms written in standard forms using just lists and trees (no arrays) and requiring no explicit memory layout or memory management are efficient in the model. We then describe an implementation of the language and show provable bounds for mapping the cost in our model to the cost in the ideal-cache model. These bounds imply that purely functional programs based on lists and trees with no special attention to any details of memory layout can be asymptotically as efficient as the carefully designed imperative I/O efficient algorithms. For example we describe an \(O(\frac{n}{B} \log_{M/B} \frac{n}{B})\) cost sorting algorithm, which is optimal in the ideal cache and I/O models.

Categories: Offsite Discussion

Cyclic Lists, Purely

Haskell on Reddit - Thu, 08/14/2014 - 1:38am
Categories: Incoming News

ANNOUNCE: hayland

haskell-cafe - Thu, 08/14/2014 - 1:19am
I just pushed the first release of the Haskell bindings to Wayland I've been working on to Hackage.

SHAMELESS SPAM: I'm looking for a PhD position in (mathematical) type theory (preferably in Europe). Contact me for my (interesting!) background.

What is Wayland? For those of you who have been living under a rock: X11 is probably going to be ditched soon (porting work for GNOME, KDE, and E is already well underway), and the graphics stack reworked, in favor of "wayland". There will no longer be a monolithic server between the compositor (aka window manager) and the window clients. In the new design, the compositor *is* the central server. The new architecture allows for proper isolation of clients (a HUGE security improvement over X11), as well as major cleanups of code (enabling embedded use).

Why bind to C code? In theory, Wayland should only be a protocol specification (which happens to have a reference implementation) that could be implemented in Haskell. However, the current stat
Categories: Offsite Discussion

Closed Type Family Simplification

haskell-cafe - Thu, 08/14/2014 - 1:02am
When a closed type family has only one instance it seems like it should never fail to simplify, yet this doesn't appear to be the case. When I define (in GHC 7.8.3) the closed type family

type family (:.:) f g a where
  (:.:) f g a = f (g a)

I get errors such as 'Could not deduce (Object c3 ((:.:) f g a) ~ Object c3 (f (g a)))' (where Object is a Constraint family), indicating that f (g a) is not being substituted for (:.:) f g a as desired. Any idea why this happens?
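For reference, here is the family in a self-contained form (a sketch; the Constraint family from the report is not reproduced). In direct uses like this the single equation does reduce, which is what makes the reported failure surprising:

```haskell
{-# LANGUAGE TypeFamilies, TypeOperators #-}

-- A closed type family with a single, catch-all equation.
type family (:.:) f g a where
  (:.:) f g a = f (g a)

-- This only typechecks if (:.:) Maybe [] Int simplifies to Maybe [Int].
x :: (:.:) Maybe [] Int
x = Just [1, 2, 3]

main :: IO ()
main = print x
```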
Categories: Offsite Discussion

Does the lambda calculus have provisions for I/O? State can be done with free variables.

haskell-cafe - Wed, 08/13/2014 - 10:39pm
Hi: Does the lambda calculus have provisions for I/O? State can be done with free variables.
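On the "state" half of the question: state can be threaded through pure functions by explicit passing, which is expressible in the plain lambda calculus (a minimal state-passing sketch; I/O proper needs an extra device on top, such as lazy streams, continuations, or a monadic interpretation):

```haskell
-- A "stateful" computation is just a function from a state to a
-- (result, new state) pair -- plain lambda terms, no side effects.
tick :: Int -> (Int, Int)
tick s = (s, s + 1)

-- Threading the state by hand through two ticks.
runTwice :: Int -> ((Int, Int), Int)
runTwice s0 =
  let (a, s1) = tick s0
      (b, s2) = tick s1
  in ((a, b), s2)

main :: IO ()
main = print (runTwice 0)
```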
Categories: Offsite Discussion

FP Complete: Announcing Stackage Server

Planet Haskell - Wed, 08/13/2014 - 6:00pm
A New Service

A couple months ago I made a post explaining Stackage server, its motivations and use-cases, and that it would be available in the coming months. It's now officially available in beta!

Stackage server.

As a quick recap: the essence of Stackage is that rather than publishing at the granularity of packages, like Hackage, it publishes at the granularity of package sets: either everything builds together, or nothing is published until it does. We call these published things “snapshots.”

Note: A snapshot is an exact image that can be reproduced at any time. Given a snapshot's digest hash, you can always download the same index and install packages based on that index. Subsequently generated snapshots have no effect on previous ones.

I've been using it for a couple months for every project I've worked on, private and public. It's perfect for application developers and teams who want to share a common always-building package set. Provided you're using one of the 500+ packages we're publishing in the snapshots, there will always be a build plan for the package you want to use in your project.

And if your library is in Stackage, as explained in the previous post, you will get a heads up on Github if your updates or other people's updates cause a build failure related to your library.

How it Works

Snapshots are built every couple of days. It takes about 16 hours to complete a build. You can view the build progress at

There are two types of snapshots published by FP Complete:

  1. Exclusive: this excludes packages not specified in the Stackage configuration. This means anything that you try to install from this snapshot will have a build plan.
  2. Inclusive: this includes Hackage packages not known to build. If you try to install a package not tracked by Stackage, it may or may not build.

You can use whichever suits your needs. If you want everything to always build, the former is an attractive choice. If you need to use a package not currently on Stackage, the latter makes sense.

Try it Right Now

Choose a snapshot. Each snapshot applies to a specific GHC version. For example, the latest (as of writing) GHC 7.8 build. You'll see something like this:

To use, copy the following to your ~/.cabal/config:

remote-repo: stackage:

Note: Remove or comment out any existing remote-repo line.

Run the following to update your packages:

$ cabal update

If you have already installed some packages, it's better to clear out your package set. See this page in the FAQ for how to do that.


How does this interact with sandboxes? Good question. Here's the rundown:

  • hsenv: Yes, works fine. Edit the .hsenv/cabal/config file and off you go.
  • cabal sandbox: Not yet! There is an open issue about this. But I have tried cabal sandboxes inside hsenv, which worked.

We've added this to the FAQ on the wiki. Contributions to this wiki page are welcome!


Personally, I'm very satisfied with this service so far. I just use my existing tools with a different remote-repo.

Others familiar with Nix have asked how the two compare. They are very similar solutions in terms of versioning and curation (although Stackage has full-time maintenance); the main advantage of Stackage is that it just uses existing tools, so you don't have to learn a new tool and way of working to get a better user experience.

We'd like feedback on a few points:

  • Is the inclusive/exclusive separation useful?
  • Is the process of using Stackage in an existing system (clearing the package set and starting fresh) easy?
  • Should we establish a convention for storing Stackage snapshot hashes in projects' source-control repositories?

And any other feedback you come up with while using it.

Stackage for businesses

As part of my last announcement for Stackage I mentioned there will also be custom services for businesses looking to build their development platform on Stackage.

These commercial services include:

  1. Premium support - FP Complete will quickly respond and make improvements or fixes to the public Stackage server as they need to happen.
  2. Private snapshots with premium support - very helpful for commercial users looking to add proprietary or custom libraries.
  3. Validated pre-compiled build images based on public or private snapshots. These can be used on developer systems or automated build systems.
  4. Packaged Jenkins server using the pre-compiled build images.

All these additional commercial services are meant to be helpful add-ons and we look forward to hearing more about what features you think would be beneficial to you. For more information email us at:

Categories: Offsite Blogs

Openings for brilliant Haskell devs for exciting project "as close to a space station as you can get" < at > Swedish spinoff, DSLs/safety-critical/embedded software, and more

haskell-cafe - Wed, 08/13/2014 - 10:28am
Dear All, I’d like to share this exciting job opportunity (formatted text below), as this forum might be right for the very skilled developers we are looking for here. The LinkedIn posting is on the following webpage, e.g. in case the email does not display correctly: … Best wishes, Johan Glimming < at > LinkedIn: — Openings for brilliant Haskell devs for exciting project "as close to a space station as you can get" < at > Swedish spinoff, DSLs/safety-critical/embedded software, and more As we grow our business further we look for extraordinary Haskell developers to further strengthen our exceptionally skilled teams in our spinoff and startup Functor Group AB, Kista/Stockholm, Sweden, with subsidiaries. < at >functors +FunctorGroups You would have a
Categories: Offsite Discussion

Odd FFI behavior

glasgow-user - Wed, 08/13/2014 - 5:44am
I have some strange behavior with GHC 7.6.3 on Ubuntu 14.04 LTS when using the FFI, and I am looking for some ideas on what could be going on. Fundamentally, adding wait calls (delays coded in the C) changes the behavior of the C code, in that returned status codes have proper values when there are delays and return errors when there are no delays. But these same calls result in proper behavior on the Aardvark's serial bus, return proper data, etc. Only the status gets messed up. The module calls a thin custom C layer over the Aardvark C layer, which dynamically loads a DLL and makes calls into it. The thin layer just makes the use of c2hs easier. It is always possible there is some kind of memory issue, but there is no pattern to the mishap. It is random. Adding delays just changes the probability of bad status. I made a C version of my application calling the same custom C layer, and there are no problems. This sort of indicates the problem is with the FFI. Because the failures are not general in that they ta
Categories: Offsite Discussion