News aggregator

7 PhD studentships in Nottingham

Haskell on Reddit - Tue, 02/24/2015 - 2:59am

We are currently advertising 7 PhD studentships in Nottingham for students who qualify as 'home' applicants. Applications in the area of the Functional Programming Lab are encouraged! For further details, see

submitted by grahamhutton
[link] [2 comments]
Categories: Incoming News

Type first coding

Haskell on Reddit - Tue, 02/24/2015 - 1:37am

Is it possible to write the types as precisely as possible, have undefined functions which all interlock with each other, and then later fill in the blanks?

I was wondering whether type-first coding can be a replacement for test-first coding.
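As an illustration of the idea (a hypothetical sketch, not from the post): write the signatures first, leave every body `undefined` so GHC can check that the pieces interlock, and then fill in the blanks one at a time.

```haskell
newtype UserId = UserId Int deriving (Eq, Show)
data User = User { uid :: UserId, name :: String } deriving (Eq, Show)

-- Step 1: only the types exist; bodies are undefined but everything
-- typechecks together:
--
--   fetchUser :: UserId -> Maybe User
--   fetchUser = undefined
--
--   greet :: User -> String
--   greet = undefined

-- Step 2: fill in the blanks, keeping the same signatures.
fetchUser :: UserId -> Maybe User
fetchUser (UserId 1) = Just (User (UserId 1) "alice")
fetchUser _          = Nothing

greet :: User -> String
greet u = "hello, " ++ name u

-- The composition already typechecked back in step 1:
greetById :: UserId -> Maybe String
greetById k = greet <$> fetchUser k
```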

submitted by maybefbi
[link] [15 comments]
Categories: Incoming News

The Future of Community.Haskell.Org

haskell-cafe - Tue, 02/24/2015 - 12:55am
This message is intended to kick off a discussion on the state of the Community.Haskell.Org server and possible future plans. Included below is the text of a blog post on the infra blog. We would especially like input and feedback from those who use or rely on any of its services or accounts. We don't want to leave you in the lurch, and want to make sure that you feel your needs can be taken care of smoothly, even as we look to wind things down. Feedback is welcome via email (to admin< at >h.o), as a post on the blog, or via the reddit discussion. - - - community.haskell.org is a server in our ecosystem that comparatively few know about these days. It actually was, to my knowledge, a key part of how the whole community infrastructure got set up way back when. The sparse homepage still even says: "This server is run
Categories: Offsite Discussion

GHC 7.10 will use Plan FTP

libraries list - Tue, 02/24/2015 - 12:20am
Colleagues, You will remember (see this email<>) that the Haskell community has been engaged in a debate about proposed changes to the Haskell Prelude, the Plan-List or Plan-FTP debate<>. We decided to hold an open survey to get feedback from the community, and Simon Marlow and Simon PJ were asked to make a decision in the light of that feedback. The survey closed on 21 Feb, so it's time to decide. Bottom line: we recommend Plan FTP. There are strong arguments in both directions. For the record, our own personal instincts were both in the conservative, Plan-List direction. But the survey gave some pretty compelling feedback, and was far more unequivocal than we expected. Some details: * Over 800 people participated in the survey. That's a lot for an 11-day window. Many wrote several paragraphs of supporting information. Thank you for participating so actively! * Overall, 82% of
Categories: Offsite Discussion

Mon, 02/23/2015 - 8:19pm
Categories: Offsite Blogs

Thiago Negri: Peg Solitaire

Planet Haskell - Mon, 02/23/2015 - 7:13pm
I'm proud to announce my new game: Peg Solitaire.

Peg solitaire is a classic board puzzle game for one player involving movement of pegs on a board with holes.

The objective is to empty the entire board, except for a solitary peg, by making valid moves. A valid move is to jump a peg orthogonally over an adjacent peg into a hole two positions away, and then to remove the jumped peg. Thus the valid moves, in each of the four orthogonal directions, are:
  • Jump to the right;
  • Jump to the left;
  • Jump up;
  • Jump down.
The first challenge is to solve the puzzle; then you can challenge yourself and try to solve it with the minimum number of moves. Each jump counts as a move, but multiple consecutive jumps with the same peg count as a single move, so you can keep jumping with the same peg and still have it count as one movement. The game displays the count of how many moves you have made at the top of the screen.
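The jump rule described above can be sketched in a few lines of Haskell (a hypothetical illustration, not the game's actual code; it tracks only peg positions and ignores the board outline):

```haskell
import qualified Data.Set as Set

type Pos   = (Int, Int)
type Board = Set.Set Pos  -- positions currently holding a peg

-- Jump the peg at `from` in direction (dr, dc): there must be an
-- adjacent peg to jump over and an empty hole two positions away;
-- the jumped peg is removed.
jump :: Board -> Pos -> (Int, Int) -> Maybe Board
jump pegs from@(r, c) (dr, dc)
    | from `Set.member` pegs
    , over `Set.member` pegs
    , dest `Set.notMember` pegs =
        Just (Set.insert dest (Set.delete over (Set.delete from pegs)))
    | otherwise = Nothing
  where
    over = (r + dr, c + dc)
    dest = (r + 2 * dr, c + 2 * dc)
```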

  • Five board types to select from: English (Standard), European (French), Wiegleb, Asymmetrical (Bell) and Diamond;
  • Display game move count;
  • Quick replay same board;
  • Give up current game;
  • Restart current game;
  • Shows the system status bar, so you can keep track of other things while playing, for example the current time.
The game is entirely free and displays ads; there are no in-app purchase options.

It's built with web technologies (HTML, JavaScript, CSS) and deployed as a native application via Apache Cordova, and it's translated into six languages. Really exciting!

I tried to match the look and feel of each platform individually, using a flat design for iOS, Material Design for Android and a dark theme for Windows Phone. Take a look; I hope it's pretty obvious which screenshot belongs to which platform:

There are still updates to come. iOS has one in the queue for the review process, and I'm delivering another one for Windows Phone this week.

Take a look at the game: download it, play it, and leave a rating or feedback:

Hope to see you soon!
Categories: Offsite Blogs

Jasper Van der Jeugt: Writing an LRU Cache in Haskell

Planet Haskell - Mon, 02/23/2015 - 6:00pm

In-memory caches form an important optimisation for modern applications. This is one area where people often tend to write their own implementation (though usually based on an existing idea). The reason for this is mostly that having a one-size-fits-all cache is really hard, and people often want to tune it for performance according to their usage pattern, or use a specific interface that works really well for them.

However, this sometimes results in less-than-optimal design choices. I thought I would take some time and explain how an LRU cache can be written in a reasonably straightforward way (the code is fairly short), while still achieving great performance. Hence, it should not be too much trouble to tune this code to your needs.

The data structure usually underpinning an LRU cache is a Priority Search Queue, where the priority of an element is the time at which it was last accessed. A number of Priority Search Queue implementations are provided by the psqueues package, and in this blogpost we will be using its HashPSQ data type.

Disclaimer: I am one of the principal authors of the psqueues package.

This blogpost is written in literate Haskell, so you should be able to plug it into GHCi and play around with it. The raw file can be found here.

A pure implementation

First, we import some things, including the Data.HashPSQ module from psqueues.

> {-# LANGUAGE BangPatterns #-}
> module Data.SimpleLruCache where
> import           Control.Applicative ((<$>))
> import           Data.Hashable       (Hashable, hash)
> import qualified Data.HashPSQ        as HashPSQ
> import           Data.IORef          (IORef, newIORef, atomicModifyIORef')
> import           Data.Int            (Int64)
> import           Data.Maybe          (isNothing)
> import qualified Data.Vector         as V
> import           Prelude             hiding (lookup)

Let’s start with our datatype definition. Our Cache type is parameterized by k and v, which represent the types of our keys and values respectively. The priorities of our elements will be the logical time at which they were last accessed, or the time at which they were inserted (for elements which have never been accessed). We will represent these logical times by values of type Int64.

> type Priority = Int64

The cTick field stores the “next” logical time – that is, the value of cTick should be one greater than the maximum priority in cQueue. At the very least, we need to maintain the invariant that all priorities in cQueue are smaller than cTick. A consequence of this is that cTick should increase monotonically. This is violated in the case of an integer overflow, so we need to take special care of that case.

> data Cache k v = Cache
>     { cCapacity :: !Int       -- ^ The maximum number of elements in the queue
>     , cSize     :: !Int       -- ^ The current number of elements in the queue
>     , cTick     :: !Priority  -- ^ The next logical time
>     , cQueue    :: !(HashPSQ.HashPSQ k Priority v)
>     } deriving (Eq, Show)

Creating an empty Cache is easy; we just need to know the maximum capacity.

> empty :: Int -> Cache k v
> empty capacity
>     | capacity < 1 = error "Cache.empty: capacity < 1"
>     | otherwise    = Cache
>         { cCapacity = capacity
>         , cSize     = 0
>         , cTick     = 0
>         , cQueue    = HashPSQ.empty
>         }

Next, we will write a utility function to ensure that the invariants of our datatype are met. We can then use that in our lookup and insert functions.

> trim :: (Hashable k, Ord k) => Cache k v -> Cache k v
> trim c

The first thing we want to check is whether our logical time has reached the maximum value it can take. If this is the case, we can either reset all the ticks in our queue, or we can clear it. We choose the latter here, since that is simply easier to code, and we are talking about a scenario that should not happen very often.

>     | cTick c == maxBound = empty (cCapacity c)

Then, we just need to check if our size is still within bounds. If it is not, we drop the oldest item – that is the item with the smallest priority. We will only ever need to drop one item at a time, because our cache is number-bounded and we will call trim after every insert.

>     | cSize c > cCapacity c = c
>         { cSize  = cSize c - 1
>         , cQueue = HashPSQ.deleteMin (cQueue c)
>         }
>     | otherwise = c

Insert is pretty straightforward to implement now. We use the insertView function from Data.HashPSQ which tells us whether or not an item was overwritten.

insertView
    :: (Hashable k, Ord p, Ord k)
    => k -> p -> v -> HashPSQ k p v -> (Maybe (p, v), HashPSQ k p v)

This is necessary, since we need to know whether or not we need to update cSize.

> insert :: (Hashable k, Ord k) => k -> v -> Cache k v -> Cache k v
> insert key val c = trim $!
>     let (mbOldVal, queue) = HashPSQ.insertView key (cTick c) val (cQueue c)
>     in c
>         { cSize  = if isNothing mbOldVal then cSize c + 1 else cSize c
>         , cTick  = cTick c + 1
>         , cQueue = queue
>         }

Lookup is not that hard either, but we need to remember that in addition to looking up the item, we also want to bump the priority. We can do this using the alter function from psqueues: that allows us to modify a value (bump its priority) and return something (the value, if found) at the same time.

alter
    :: (Hashable k, Ord k, Ord p)
    => (Maybe (p, v) -> (b, Maybe (p, v)))
    -> k -> HashPSQ.HashPSQ k p v -> (b, HashPSQ.HashPSQ k p v)

The b in the signature above becomes our lookup result.

> lookup
>     :: (Hashable k, Ord k) => k -> Cache k v -> Maybe (v, Cache k v)
> lookup k c = case HashPSQ.alter lookupAndBump k (cQueue c) of
>     (Nothing, _) -> Nothing
>     (Just x, q)  ->
>         let !c' = trim $ c {cTick = cTick c + 1, cQueue = q}
>         in Just (x, c')
>   where
>     lookupAndBump Nothing       = (Nothing, Nothing)
>     lookupAndBump (Just (_, x)) = (Just x, Just ((cTick c), x))

That basically gives a clean and simple implementation of a pure LRU Cache. If you are only writing pure code, you should be good to go! However, most applications deal with caches in IO, so we will have to adjust it for that.
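To see the eviction order this gives us, here is a small standalone sketch against psqueues itself (not part of the cache module): after key "a" is re-inserted at a newer tick, deleteMin evicts "b", the least recently used entry.

```haskell
import qualified Data.HashPSQ as HashPSQ

-- Simulate two inserts and a "bump" of key "a", then evict the minimum.
lruDemo :: Maybe String
lruDemo =
    let q0 = HashPSQ.insert "a" (0 :: Int) () HashPSQ.empty
        q1 = HashPSQ.insert "b" 1 () q0
        q2 = HashPSQ.insert "a" 2 () q1   -- bump "a" to tick 2
        q3 = HashPSQ.deleteMin q2         -- evicts "b" (tick 1)
    in  case HashPSQ.findMin q3 of
            Just (k, _, _) -> Just k
            Nothing        -> Nothing
```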

A simple IO-based cache

Using an IORef, we can update our Cache to be easily usable in the IO Monad.

> newtype Handle k v = Handle (IORef (Cache k v))

Creating one is easy:

> newHandle :: Int -> IO (Handle k v)
> newHandle capacity = Handle <$> newIORef (empty capacity)

Our simple interface only needs to export one function. cached takes the key of the value we are looking for, and an IO action which produces the value. However, we will only actually execute this IO action if it is not present in the cache.

> cached
>     :: (Hashable k, Ord k)
>     => Handle k v -> k -> IO v -> IO v
> cached (Handle ref) k io = do

First, we check the cache using our lookup function from above. This uses atomicModifyIORef', since our lookup might bump the priority of an item, and in that case we modify the cache.

>     lookupRes <- atomicModifyIORef' ref $ \c -> case lookup k c of
>         Nothing      -> (c, Nothing)
>         Just (v, c') -> (c', Just v)

If it is found, we can just return it.

>     case lookupRes of
>         Just v -> return v

Otherwise, we execute the IO action and call atomicModifyIORef' again to insert it into the cache.

>         Nothing -> do
>             v <- io
>             atomicModifyIORef' ref $ \c -> (insert k v c, ())
>             return v

Contention

This scheme already gives us fairly good performance. However, that can degrade a little when lots of threads are calling atomicModifyIORef' on the same IORef.

atomicModifyIORef' is implemented using a compare-and-swap, so conceptually it works a bit like this:

atomicModifyIORef' :: IORef a -> (a -> (a, b)) -> IO b
atomicModifyIORef' ref f = do
    x <- readIORef ref
    let (!x', !y) = f x
    -- Atomically write x' if value is still x
    swapped <- compareAndSwap ref x x'
    if swapped
        then return y
        else atomicModifyIORef' ref f  -- Retry

We can see that this can lead to contention: if we have a lot of concurrent atomicModifyIORef's, we can get into a retry loop. It cannot cause a deadlock (i.e., it should still eventually finish), but it will still bring our performance to a grinding halt. This is a common problem with IORefs which I have also personally encountered in real-world scenarios.

A striped cache

A good solution around this problem, since we already have a Hashable instance for our key anyway, is striping the keyspace. We can even reuse our Handle in quite an elegant way. Instead of just using one Handle, we create a Vector of Handles instead:

> newtype StripedHandle k v = StripedHandle (V.Vector (Handle k v))

The user can configure the number of handles that are created:

> newStripedHandle :: Int -> Int -> IO (StripedHandle k v)
> newStripedHandle numStripes capacityPerStripe =
>     StripedHandle <$> V.replicateM numStripes (newHandle capacityPerStripe)

Our hash function then determines which Handle we should use:

> stripedCached
>     :: (Hashable k, Ord k)
>     => StripedHandle k v -> k -> IO v -> IO v
> stripedCached (StripedHandle v) k =
>     cached (v V.! idx) k
>   where
>     idx = hash k `mod` V.length v

Because our access pattern is now distributed among the different Handles, we should be able to avoid the contention problem.


We have implemented a very useful data structure for many applications, with two variations and decent performance. Thanks to the psqueues package, the implementations are very straightforward, small in code size, and it should be possible to tune the caches to your needs.

Many variations are possible: you can use real timestamps (UTCTime) as priorities in the queue and have items expire after a given amount of time. Or, if modifications of the values v are allowed, you can add a function which writes the updates to the cache as well as to the underlying data source.
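For the timestamp variation, the staleness check might look something like this (a hypothetical helper; the name and the TTL parameter are not from the post):

```haskell
import Data.Time.Clock (NominalDiffTime, UTCTime, addUTCTime, getCurrentTime)

-- An entry is stale when its last-access time plus the time-to-live
-- is already in the past.
isExpired :: NominalDiffTime -> UTCTime -> UTCTime -> Bool
isExpired ttl now lastAccess = addUTCTime ttl lastAccess < now
```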

For embedding the pure cache into IO, there are many alternatives to using IORefs: for example, we could have used MVars or TVars. There are also other strategies for reducing contention besides striping.
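As a sketch of the MVar route (assumed names, not from the post): an MVar serialises updates with a lock instead of a retry loop, at the cost of blocking writers.

```haskell
import Control.Concurrent.MVar (MVar, newMVar, modifyMVar)

-- A generic handle around any pure state s; the Cache from the post
-- would be one possible instantiation.
newtype MHandle s = MHandle (MVar s)

newMHandle :: s -> IO (MHandle s)
newMHandle s = MHandle <$> newMVar s

-- Apply a pure state transition and return its result. Unlike
-- atomicModifyIORef', the transition function runs exactly once.
modifyState :: MHandle s -> (s -> (s, b)) -> IO b
modifyState (MHandle mv) f =
    modifyMVar mv (\s -> let (s', b) = f s in return (s', b))
```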

You could even write a cache which is bounded by its total size on the heap, rather than by the number of elements in the queue. If you want a single bounded cache for use across your entire application, you could allow it to store heterogeneously-typed values, and provide multiple strongly-typed interfaces to the same cache. However, implementing these things is a story for another time.

Thanks to the dashing Alex Sayers for proofreading and suggesting many corrections and improvements.

Categories: Offsite Blogs

GHC Weekly News - 2015/02/23

Haskell on Reddit - Mon, 02/23/2015 - 5:57pm
Categories: Incoming News

The GHC Team: GHC Weekly News - 2015/02/23

Planet Haskell - Mon, 02/23/2015 - 5:52pm

Hi *,

It's once again time for your sometimes-slightly-irregularly-scheduled GHC news! This past Friday marked the end of the FTP vote for GHC 7.10, there's an RC on the way (see below), we've closed out a good set of patches and tickets from users and pushed them into HEAD, and to top it off - it's your editor's birthday today, so that's nice!

Quick note: as said above, GHC HQ is expecting to make a third release candidate for GHC 7.10.1 in early March, since the delay has allowed us to pick up some more changes and bugfixes. We plan on the final release being close to the end of March (previously the end of February).

This week, GHC HQ met up again to discuss and write down our current priorities and thoughts:

  • After discussing our current timetable - as we're currently hovering around the ICFP deadline - we're hoping to make our third GHC 7.10.1 release candidate on Friday, March 13th, with the final release on Friday, March 27th. This was the main take away from our meeting today.

We've also had a little more list activity this week than we did before:

Some noteworthy commits that went into ghc.git in the past week include:

Closed tickets the past week include: #9266, #10095, #9959, #10086, #9094, #9606, #9402, #10093, #9054, #10102, #4366, #7604, #9103, #10104, #7765, #7103, #10051, #7056, #9907, #10078, #10096, #10072, #10043, #9926, #10088, #10091, #8309, #9049, #9895, and #8539.

Categories: Offsite Blogs

Austin Seipp: The Future of Community.Haskell.Org

Planet Haskell - Mon, 02/23/2015 - 4:50pm

community.haskell.org is a server in our ecosystem that comparatively few know about these days. It actually was, to my knowledge, a key part of how the whole community infrastructure got set up way back when. The sparse homepage still even says: "This server is run by a mysterious group of Haskell hackers who do not wish to be known as a Cabal, and is funded from money earned by mentors in the Google Summer-of-Code programme." At a certain point after this server was created, it ceased to be run by a "mysterious group of Haskell hackers" and instead became managed officially by the Committee that we know today. You can see the original announcement email in the archives.

The community server, first set up in 2007, played a key role back before the current set of cloud-based services we know today was around. It provided a shared host which could supply many of the services a software project needs -- VCS hosting, public webspace for documentation, issue trackers, mailing lists, and so forth.

Today, the server is somewhat of a relic of another time. People prefer to host projects in places like GitHub, Bitbucket, or darcs hub. Issue trackers likewise tend to be associated with these hosts, and there are other free, hosted issue trackers around as well. When folks want a mailing list, they tend to reach for Google Groups.

Meanwhile, managing a big box full of shell accounts has become a much more difficult, riskier proposition. Every shell account is a security vulnerability waiting to happen, and there are more adversarial "script kiddie" hackers than ever looking to claim new boxes to spam and otherwise operate from.

Managing a mailman installation is likewise more difficult. There are more spammers out there, with better methods, and unmaintained lists can quickly turn into ghost towns filled with pages of linkspam and nothing else. The same sad fate befalls unmaintained Tracs.

As a whole, the internet is a more adversarial world for small, self-hosted services, especially those whose domain names have some "google juice". We think it would be good to, to the extent possible, get out of the business of doing this sort of hosting. And indeed, very few individuals tend to request accounts these days, since there are now so many nicer, better ways of getting the benefits that this server was once rare in providing.

So what next? Well, we want to "end of life" most of these services, but in as painless a way as possible. This means finding what few tracs, if any, are still active, and helping their owners migrate. Similarly for mailing lists. Of course we will find a way to continue to host their archives for historical purposes.

Similarly, we will attempt to keep source repositories accessible for historical purposes, but would very much like to encourage owners to move to more well supported code hosting. One purpose that, until recently, was hard to serve elsewhere was the hosting of private darcs repositories with shared access -- such as academics might use to collaborate on a work in progress. However, that capability is now also provided elsewhere. At this point, we can't think of anything in this regard that is not better provided elsewhere -- but if you can, please let us know.

On webspace, a little more leniency may be in order. For one, it is possible to provide restricted accounts that can control web-accessible files but have no other rights. For another, while many open source projects now host documentation through GitHub Pages or similar, and there are likewise many services for personal home pages, it nonetheless seems a nice thing to allow projects to host their resources on a system that is not under the control of a for-profit third party that, ultimately, is responsible to its bottom line and not to its users.

But all this is open for discussion! The server was put together to serve the open source community of Haskell developers, and its direction needs to be determined based on feedback regarding current needs. What do you think? What would you like to see continue to be provided? What do you feel is less important? Are there other good hosted services that should be mentioned as alternatives?

And, of course, are you interested in rolling up your sleeves to help with any of the changes discussed? This could mean simply helping out with sorting out the mailman and trac situation, inventorying the active elements and collaborating with their owners. Or, it could mean a more sustained technical involvement. Whatever you have to offer, we will likely have a use for it. As always, you can email admin@h.o or hop on the #haskell-infrastructure freenode channel to get involved directly.

Categories: Offsite Blogs

GHC 7.10 will use Plan FTP

Haskell on Reddit - Mon, 02/23/2015 - 4:27pm
Categories: Incoming News

Is there a repository of implementations of program on the lambda calculus?

Haskell on Reddit - Mon, 02/23/2015 - 3:39pm

I've been challenging myself to program everything on the bare-bones lambda calculus these days. There are no primitives; I have to implement everything from scratch, which is hard since resources on the subject are very scarce. It is easy to find basic functions such as add, sub, div, etc., for Church-encoded numbers. Anything more complex, such as sin, cos, fractionals, binary encodings, etc., is nowhere to be found. Why is there no book/repository of lambda calculus implementations? I know there is not really any practical demand for those functions, but, I mean, we have more resources about coconuts. If anyone knows good ones, please share them here!
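For reference, this is the sort of encoding the post is talking about, transcribed into Haskell (an illustration, not from the post): Church numerals with no numeric primitives, plus successor and addition.

```haskell
{-# LANGUAGE RankNTypes #-}

-- A Church numeral applies a function n times to a starting value.
type Church = forall a. (a -> a) -> a -> a

czero :: Church
czero _ z = z

csucc :: Church -> Church
csucc n s z = s (n s z)

cadd :: Church -> Church -> Church
cadd m n s z = m s (n s z)

-- Convert to an ordinary Int only to observe the result.
toInt :: Church -> Int
toInt n = n (+ 1) 0
```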

submitted by SrPeixinho
[link] [8 comments]
Categories: Incoming News

Suggestions for a semestral project in Haskell

Haskell on Reddit - Mon, 02/23/2015 - 2:04pm

I'm taking an undergrad class on logic & functional programming and need to pick a topic for my semestral project, so I'd like to ask here for a topic suggestion.

The project should be mostly about algorithms & data structures, and not really about advanced Haskell itself, meaning no lenses, conduits, or even concurrency.

I've looked through the Purely Functional Data Structures and Pearls of Functional Algorithm Design, but everything seems either too easy or too difficult to pick as a project. I don't really have much experience with the algorithmic nature of Haskell, as I've only programmed web apps in it, and it's hard to judge based on how difficult the problem would be in an imperative language (some things that are hundreds of lines of C are 5 lines of Prolog.)

What are some interesting problems you'd recommend for one semester worth of undergrad work? :)

submitted by progfu
[link] [5 comments]
Categories: Incoming News

Announcing BufferBuilder - Encode JSON 2-5x faster than Aeson

Haskell on Reddit - Mon, 02/23/2015 - 12:52pm

Hi all,

/u/implicit_cast and I wrote a new library for fast encoding of JSON (and UTF-8, and buffers of bytes in general). Our benchmarks show that it is about five times faster than Aeson when encoding from records straight into JSON, and about twice as fast if you use it to encode Aeson Values directly.

Here is a blog post describing the rationale and backstory:

And here are the packages on Hackage:

submitted by chadaustin
[link] [23 comments]
Categories: Incoming News

hpc - ignore some expressions

haskell-cafe - Mon, 02/23/2015 - 12:36pm
Hi,

I am generating code coverage reports using hpc, and it works great. The reports would be much more helpful if I could somehow tell hpc to treat certain expressions as "covered", though. As an example, let's think of "impossible" cases of a function:

    f x | check x   = result x
        | otherwise = error "impossible"

I know there are techniques to get rid of (or minimise the number of) the impossible cases. I can also see how this information can be relevant, i.e. seeing whether `check` always holds or not in the report. However, at least for one version of the report, I want an error call like the one in the above example to be treated as covered. Or maybe a different colour could be used to annotate "ignored"? Did anyone need/want anything like this? Is it possible?

My guess is that this is somewhat possible using manually generated .tix files and merging them with the .tix file that is generated in the usual way. However, this feels too ad hoc and fragile, and seems to be too much effort for a simple goal.

Best, O
Categories: Offsite Discussion

Trying to create a good monad homework assignment for undergrads

Haskell on Reddit - Mon, 02/23/2015 - 9:59am

Does anyone have any suggestions? I'm gonna assume they'll have a basic understanding.

submitted by gin-writ
[link] [10 comments]
Categories: Incoming News

Throwing an Exception Back with Continuations?

Haskell on Reddit - Mon, 02/23/2015 - 2:09am

I'm playing around with the pipes library and I'd like to have a producer which runs a certain interaction between an inner producer and an inner consumer (except that the producer and consumer really both send messages back and forth, so those aren't great names for them). The outer producer intercepts some (but not all) of the messages being sent to the inner consumer, sends them down the outer pipeline to the outer consumer, and then sends the outer consumer's response back as the simulated inner response. It sounds like a sort of nested call/cc operation, but I'm not sure if it makes sense, or if it's possible to get it working without excessive use of the continuation monad; and even then I'm not completely sure whether it will work or not.

inner: innerProducer --> interceptor --> innerConsumer
outer: outerProducer --> outerConsumer

With continuations, I think I want the interceptor to sometimes throw an "exception" with its current continuation and the message to be sent. The outer producer catches this exception, sends the message, awaits a response from downstream, then sends the response back through the continuation. I'm pretty sure this can't fit exclusively in the ExceptionT monad but I could be wrong.

I haven't worked with continuations before so I have no idea if this works or not. Does this make any sense?

edit: Monad morphisms might help too but I'm not sure what the limits of those are.

submitted by expwnent
[link] [5 comments]
Categories: Incoming News