News aggregator

Joachim Breitner: 11 ways to write your last Haskell program

Planet Haskell - Thu, 10/02/2014 - 7:47am

At my university, we recently held an exam that covered a bit of Haskell, and a simple warm-up question at the beginning asked the students to implement last :: [a] -> a. We did not demand a specific behaviour for last [].

This is a survey of various solutions, only covering those that are actually correct. I elided some variation in syntax (e.g. guards vs. if-then-else).

Most wrote the naive and straightforward code:

last [x] = x
last (x:xs) = last xs

Then quite a few seemed to be uncomfortable with pattern-matching and used conditional expressions. There was some variety in finding out whether a list is empty:

last (x:xs) | null xs == True = x
            | otherwise = last xs

last (x:xs) | length (x:xs) == 1 = x
            | otherwise = last xs

last (x:xs) | length xs == 0 = x
            | otherwise = last xs

last xs | length xs > 1 = last (tail xs)
        | otherwise = head xs

last xs | length xs == 1 = head xs
        | otherwise = last (tail xs)

last (x:xs) | xs == [] = x
            | otherwise = last xs

The last one is not really correct, as it has the stricter type Eq a => [a] -> a. Also we did not expect our students to avoid the quadratic runtime caused by using length in every step.

The next class of answers used length to pick out the right element, either using (!!) directly, or simulating it with head and drop:

last xs = xs !! (length xs - 1)

last xs = head (drop (length xs - 1) xs)

There were two submissions that spelled out an explicit left folding recursion:

last (x:xs) = lastHelper x xs
  where lastHelper z [] = z
        lastHelper z (y:ys) = lastHelper y ys

And finally there were a few code-golfers who just plugged together some other functions:

last x = head (reverse x)
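
Not among the submissions, but for comparison, a fold captures the same recursion in one line (myLast is used here to avoid clashing with the Prelude):

```haskell
-- A fold-based variant: foldl1 threads the "latest element seen" through
-- the list as its accumulator, so the final accumulator is the last element.
myLast :: [a] -> a
myLast = foldl1 (\_ y -> y)

main :: IO ()
main = print (myLast [1, 2, 3 :: Int])  -- prints 3
```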

Quite a lot of ways to write last!

Categories: Offsite Blogs

Recommend webserver/framework for a Report server in Haskell.

Haskell on Reddit - Thu, 10/02/2014 - 7:06am

I'm working on a report server (business intelligence type) and I'm wondering which web server or web framework would be best to use. I know the best approach is to try them all, but there are so many that some advice would be useful.

I'm rewriting our company warehouse, which is a set of SQL queries, R scripts and a big makefile orchestrating everything. It works pretty well but it's hard to maintain, so I decided to start from scratch in Haskell.

The goal of the application is to generate and "serve" reports, cache them and eventually aggregate them within a dashboard. "Reports" are basically "tables" (lists of tuples) which can be rendered as a CSV file, a table in an HTML page, a graph (SVG?), etc., and/or saved as a table in the warehouse database. A warehouse table can then be reused by another report and/or saved as a new warehouse table or rendered as a report.

So my plan is to integrate a full ETL within a web server (so every file or table are made on demand and cached for future reuse).

My server needs these layers:

- query/data extraction
- data transformation
- caching
- rendering (data -> display conversion)
- serving

I know how to do the first two layers and I'm starting to work on the "rendering/serving" ones, which could be interleaved or not.

At the moment I'm considering writing all the rendering myself (using some popular packages of course, like blaze-html, Chart and/or diagrams) and using a simple web server like Warp. However, it might be easier to use a more advanced web framework, which could for example make the "renderer dispatching" easier.
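
To make "renderer dispatching" concrete, here is a hedged sketch with made-up types (Report and Format are illustrative only, not from any package):

```haskell
import Data.List (intercalate)

-- Hypothetical types: a report here is just rows of cells.
type Report = [[String]]

data Format = CSV | HTML

-- Dispatch on the requested output format.
render :: Format -> Report -> String
render CSV  rows = unlines (map (intercalate ",") rows)
render HTML rows = "<table>" ++ concatMap tr rows ++ "</table>"
  where
    tr cells = "<tr>" ++ concatMap (\c -> "<td>" ++ c ++ "</td>") cells ++ "</tr>"

main :: IO ()
main = putStr (render CSV [["region", "total"], ["north", "42"]])
```

A framework's routing layer would then map a URL such as /report/sales.csv or /report/sales.html to the matching Format before calling render.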

Any suggestions?

submitted by maxigit
Categories: Incoming News

Haskell Weekly News: Issue 308

haskell-cafe - Thu, 10/02/2014 - 5:47am
Welcome to issue 308 of the HWN, an issue covering crowd-sourced bits of information about Haskell from around the web. This issue covers from September 21 to 27, 2014.

Quotes of the Week

* No one < at >remember'd fun stuff in IRC -- sadness!

Top Reddit Stories

* The GHC source code contains 1088 TODOs, please help bring that number down
  Domain: self.haskell, Score: 137, Comments: 101
  Original: [1] On Reddit: [2]
* [Elm] "Controlling Time and Space: understanding the many formulations of FRP" by Evan Czaplicki
  Domain:, Score: 77, Comments: 25
  Original: [3] On Reddit: [4]
* Visualisation of Haskell Performance (thesis, pdf)
  Domain:, Score: 72, Comments: 14
  Original: [5] On Reddit: [6]
* shell-conduit: Write shell scripts in Haskell with Conduit
  Domain:, Score: 70,
Categories: Offsite Discussion

Yesod Web Framework: Updating auto-update

Planet Haskell - Thu, 10/02/2014 - 5:24am

A few weeks back, Kazu and I received an email from Martin Bailey, who was interested in working with us to further optimize Warp. The subject at the time was reduced buffer copying and allocations. That subject is very interesting in itself, and once finalized, will get its own blog post as well. But that's not the subject of this blog post.

After working on some ideas, Kazu benchmarked Warp, and found a massive performance degradation which had already slipped onto Hackage. The problem only appears under highly concurrent requests (which is exactly what Kazu's benchmark was testing). That bug has now been resolved, and users are recommended to upgrade to the newest versions of auto-update and warp. (We also included some other performance boosts here, care of Ryan Newton's atomic-primops package.) This blog post covers the problem, and its solution.

It's about time

One of the required response headers according to the HTTP spec is the Date header. As you might guess, this consists of the date, as well as the current time on the server. This has a very specific format specified in the spec. A naive approach to filling in this header would be, for each request, to run:

now <- getCurrentTime
let value = formatTimeHTTP now
    headers' = ("Date", value) : headers

However, when you're writing a server that handles hundreds of thousands of requests per second, having to get and format the current time on each request is incredibly expensive. Instead, quite some time ago, Kazu introduced a date formatting worker thread, which essentially looks like this:

let getNowFormatted = formatTimeHTTP <$> getCurrentTime
nowFormatted <- getNowFormatted >>= newIORef
forkIO $ forever $ do
    threadDelay 1000000
    getNowFormatted >>= writeIORef nowFormatted
return $ readIORef nowFormatted

We fork a single thread that recalculates the formatted time every second, and updates a mutable reference. This means that, regardless of how many clients connect, we will always run this computation at most once per second. And reading the current time requires no system calls, formatting, or allocation: it's just a memory read.

Watt's the matter

The problem with this approach is that, even if there are zero requests, we're still going to run this computation once a second. This doesn't hurt our performance too much (it's a relatively cheap operation). However, it does mean that our process never stops working, which is bad for power consumption. So we needed a way to let that worker thread turn off when it wasn't needed anymore, and start up again on demand.

If this sounds familiar, it's because we blogged about it just two months ago. But there's an implementation detail I want to point out about auto-update. When I started writing it, I aimed to have the worker thread completely shut down when not needed, and then for a new worker thread to be spawned on demand. This resulted in some strange corner cases. In particular, it was possible for two different callers to spawn two different worker threads at the same time. We used atomicModifyIORef to ensure that only one of those threads became the "official" worker, and the other one would shut down right away. There was also a lot of tricky business around getting async exceptions right.

The code wasn't too difficult to follow, and was still pretty short. You can view it on Github.

The core of the problem

Unfortunately, before release, we didn't do enough performance testing of auto-update in highly concurrent settings. When we threw enough cores at the problem, we quickly ran into a situation where multiple Haskell threads would end up making concurrent requests for the auto-update value, and each of those threads would fork a separate worker thread. Only one of those would survive, but the forking itself was killing our performance! (I had one benchmark demonstrating that, for one million requests across one thousand threads on four cores, over ten thousand worker threads were forked.)

There was another problem as well. We made heavy use of atomicModifyIORef to get this working correctly. Compare this to our original dedicated thread solution: each data access was a simple, cheap readIORef. By introducing atomicModifyIORef at each request site, we introduced memory barrier issues immediately.

The solution

After iterating a few times, Kazu, Martin and I came up with a fairly elegant solution to the problem. As noted, the original dedicated thread was in many ways ideal. A lot of our problems were coming from trying to fork threads on demand. So we came up with a compromise: why not have a single, dedicated worker thread, but allow it to go to sleep/block when it's no longer needed?

Blocking semantics meant we needed to move beyond IORefs over to either MVars or STM (we elected for the former, but it would still be interesting to compare performance with the latter). But we still want to be able to have incredibly cheap lookup in the common case of a value being precomputed. So we ended up with three mutable variables:

  1. An IORef containing a Maybe a. If a precomputed value is available, it's present in there as Just a. Otherwise, there's a Nothing value. This means that, in the normal case, a requester only needs to (a) do a memory read, and (b) case-analyze the value.

  2. An MVar baton to tell the worker thread that someone is waiting for a value. In the case that the IORef contains Nothing, the requester fills this MVar. Before it starts working, the worker thread always tries to takeMVar this baton.

  3. An MVar for passing back result values. This may seem redundant with the IORef, but is necessary for the Nothing case. A requester fills the baton, and then blocks in readMVar on this reference until the value is available. The worker thread updates both the IORef and this MVar at the same time, and then goes to sleep until it needs to compute again, at which point the process loops.
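
A minimal sketch of this three-variable scheme (an illustration, not the actual auto-update source: exception handling and the configurable update frequency are omitted, and a fixed one-second freshness window is assumed):

```haskell
import Control.Concurrent
import Control.Monad (forever)
import Data.IORef

-- Sketch only; see the auto-update package for the real implementation.
mkAutoUpdate :: IO a -> IO (IO a)
mkAutoUpdate compute = do
    cache  <- newIORef Nothing  -- (1) precomputed value, if fresh
    baton  <- newEmptyMVar      -- (2) "someone is waiting" signal
    result <- newEmptyMVar      -- (3) channel for blocked requesters
    _ <- forkIO $ forever $ do
        takeMVar baton               -- sleep until a requester asks
        x <- compute
        writeIORef cache (Just x)
        putMVar result x
        threadDelay 1000000          -- value stays fresh for one second
        writeIORef cache Nothing     -- invalidate the fast path
        _ <- tryTakeMVar result      -- and the slow path
        return ()
    return $ do
        mx <- readIORef cache
        case mx of
            Just x  -> return x            -- common case: one memory read
            Nothing -> do
                _ <- tryPutMVar baton ()   -- wake the worker (idempotent)
                readMVar result            -- block until the value arrives

main :: IO ()
main = do
    counter <- newIORef (0 :: Int)
    get <- mkAutoUpdate (modifyIORef counter (+ 1) >> readIORef counter)
    a <- get
    b <- get
    print (a, b)  -- both reads see the single computed value: (1,1)
```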

Since the requesters never try to fork threads, there's no possibility of running into the high contention problem of multiple forks. The worst case scenario is that many requesters see a Nothing value at the same time, and all end up blocking on the same MVar. While less efficient than just reading an IORef, this isn't nearly as expensive as what we had previously.

There's another big advantage: the normal case no longer uses any atomicModifyIORef (or any other synchronized memory access), meaning we've avoided memory barriers during periods of high concurrency.

And finally: the code is drastically simpler.

And now that we've fixed our regression, and gotten back high performance together with better power consumption, we can finally return to Martin's original idea of better buffer management. More on that soon hopefully :).

Categories: Offsite Blogs

How to detect the function body in a source file

Haskell on Reddit - Thu, 10/02/2014 - 5:00am

I am looking for the easiest way to determine the start and end of a function in a Haskell source file, given a line number. No tools I could think of provide that. I had a short look at the output of parseFileContents from haskell-src-exts, but that seemed like overkill for the given purpose.
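
For illustration only, a crude indentation-based heuristic (hypothetical helper names, no real parsing, and it assumes top-level bindings start in column 0) could look like:

```haskell
-- Hypothetical heuristic: any non-blank line starting in column 0 is taken
-- to begin a top-level definition; a body extends to the line before the
-- next such line. Line numbers are 1-based throughout.
isTopLevel :: String -> Bool
isTopLevel l = not (null l) && head l /= ' '

functionSpan :: [String] -> Int -> (Int, Int)
functionSpan ls n = (start, end)
  where
    numbered = zip [1 ..] ls
    start = last (1 : [i | (i, l) <- take n numbered, isTopLevel l])
    end   = case [i | (i, l) <- numbered, i > start, isTopLevel l] of
                []      -> length ls
                (i : _) -> i - 1

main :: IO ()
main = print (functionSpan ["f x = x", "  + 1", "", "g y = y"] 2)  -- prints (1,3)
```

This mis-handles pragmas, comments and where-less layout oddities, which is exactly why a real parser is usually the answer.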

Does anyone have a suggestion? Thanks in advance.

submitted by __gilligan
Categories: Incoming News

Specializing type signatures in haskell (re: Traversable/Foldable in Prelude)

Haskell on Reddit - Thu, 10/02/2014 - 4:32am

I recently read Neil Mitchell's article about why Traversable/Foldable shouldn't be in Prelude, and although I agree with many of the points therein, I think there might be some middle ground to be found.

What if it were possible to annotate (for documentation purposes) some specializations of type signatures, and have those specializations show up in Hoogle and in ghci?

For example we could have something like:

mapM :: (Traversable t, Monad m) => (a -> m b) -> t a -> m (t b)
{-# SPECIALIZED :: Monad m => (a -> m b) -> [a] -> m [b] #-}

and when doing :t in ghci both types should show up. This would be similar to what /u/edwardkmett has done in a lot of the documentation for lens, but in a way that could be supported by the compiler and Haddock.
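
Until something like this exists, a specialized type can be pinned down manually with an ordinary annotation (listMapM is just an illustrative name, not a standard function):

```haskell
-- Illustrative only: a manually specialized alias for mapM on lists.
-- The proposal would surface this specialization in the docs automatically.
listMapM :: Monad m => (a -> m b) -> [a] -> m [b]
listMapM = mapM

main :: IO ()
main = print =<< mapM return [listMapM Just [1, 2, 3 :: Int]]
```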

That way beginners could see type signatures that make more sense to them, without losing the generality that more experienced Haskell programmers want.

The drawback with this is of course that there are some situations where the more general types would introduce ambiguity.

submitted by dnaq
Categories: Incoming News

Reflection functional dependency as an associated type?

haskell-cafe - Wed, 10/01/2014 - 11:14pm
I was trying to make a bridge like this:

class MonadStateTF m where
    type StateOf m :: *

instance (MonadState s m) => MonadStateTF m where
    type StateOF m = s

to allow writing constraints on state monads without a dangling type parameter:

type NumMonad m = (MonadStateTF m, Num (StateOf m))

Unfortunately, the declaration type StateOF m = s complains that "s" isn't mentioned on the left hand side, rather than somehow making use of the fundep.

Brandon
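
One workaround, sketched under the assumption that mtl's MonadState is in scope: give the associated type per concrete transformer rather than deriving it from the fundep, and carry the MonadState constraint alongside it in the synonym.

```haskell
{-# LANGUAGE TypeFamilies, ConstraintKinds, FlexibleContexts #-}
import Control.Monad.State

-- Sketch: associate the state type with each concrete monad directly,
-- instead of trying to extract it from the MonadState fundep.
class MonadStateTF m where
    type StateOf m :: *

instance MonadStateTF (StateT s m) where
    type StateOf (StateT s m) = s

-- The dangling-parameter-free constraint synonym from the question:
type NumMonad m = (MonadStateTF m, MonadState (StateOf m) m, Num (StateOf m))

tick :: NumMonad m => m ()
tick = modify (+ 1)

main :: IO ()
main = print (execState tick (41 :: Int))  -- prints 42
```

The price is one MonadStateTF instance per base monad, since the class can no longer be defined once for every MonadState instance.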
Categories: Offsite Discussion

A prototype concept for teaching type-class polymorphic functions

haskell-cafe - Wed, 10/01/2014 - 11:01pm
Hey there, For those teaching Haskell and making teaching tools, you might be interested in this mode of presentation of a function: It's a little slow because the implementation is naive (it abuses my server), however the concept is: see what fmap really does by showing what you get when applying it to a variety of different types, including “strange” ones like Const and the function type. A given function has visually evident ways of applying to different data structures. The value changes and so does the type. Challenge: take this ball and run with it to illustrate return and >>= for a number of monads. Maybe, [], LogicT, Either, etc. Given that I've been using Haskell for 6 years and haven't yet written a monad tutorial, I don't want to break that clean sheet now. If that's too hard, there're plenty of simpler classes to experiment with. Some types require special handling to display in an intuitive way, like the ((->) r) example. But ignoring that, if y
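
The core idea can be reproduced in a few lines: apply fmap at several functors and watch both value and type change (a sketch of the concept, not the tool itself).

```haskell
import Data.Functor.Const (Const (..))

-- fmap at a variety of types, including the "strange" ones mentioned above:
-- Const ignores the function entirely, and the function functor turns
-- fmap into composition.
main :: IO ()
main = do
    print (fmap (+ 1) (Just 2))                                       -- Just 3
    print (fmap (+ 1) [1, 2, 3])                                      -- [2,3,4]
    print (getConst (fmap (+ 1) (Const "same" :: Const String Int)))  -- "same"
    print (fmap (+ 1) (* 2) (10 :: Int))                              -- 21
```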
Categories: Offsite Discussion

Broken link in wiki for Ivor

haskell-cafe - Wed, 10/01/2014 - 10:01pm
The theorem provers page has a link for Ivor at the bottom: This current link is dead: This new link works:
Categories: Offsite Discussion

Improving Haskell-related documentation

Haskell on Reddit - Wed, 10/01/2014 - 8:32pm

A number of people, with various levels of experience with Haskell, seem to find some Haskell-related documentation to be overly terse, such that a mere "follow the type signature(s)" approach doesn't help. For example, in a recent blog post, Neil Mitchell wrote:

When I want to use &&& or *** I just pick randomly, see if it type checks, then try again. When other people I know want to use these functions they just use an explicit lambda. No one thinks of referring to the documentation, since the documentation presents a unification problem (which most of the people I know could solve), not an intuition.
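
The intuition Neil asks for fits in one line each, at least for the function instance (a reading offered here, not official documentation): &&& sends one input to both functions and pairs the results, while *** applies one function to each half of a pair.

```haskell
import Control.Arrow ((&&&), (***))

-- (&&&): duplicate the input, run both functions, pair the results.
-- (***): run one function on each component of a pair.
main :: IO ()
main = do
    print ((negate &&& show) (3 :: Int))            -- (-3,"3")
    print ((negate *** show) (3 :: Int, 4 :: Int))  -- (-3,"4")
```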

Based on your recent experiences, what Haskell-related documentation is particularly in need of expansion, not only in terms of the 'what' ("Intuitively, the x function helps us [etc]") but also the 'why' ("The x function can be useful when we need to [etc]")?

submitted by flexibeast
Categories: Incoming News

GHC 7.10 GHC-API Changes and proposed changes.

haskell-cafe - Wed, 10/01/2014 - 7:34pm
There are a number of changes in GHC 7.10 that will make it easier for tool writers to use the GHC API. These include:

1. More parser entrypoints, to explicitly parse fragments [Andrew Gibiansky]

   The following parsers are now provided: parseModule, parseImport, parseStatement, parseDeclaration, parseExpression, parseTypeSignature, parseFullStmt, parseStmt, parseIdentifier, parseType, parseHeader

2. No more landmines in the AST [Alan Zimmerman]

   In the past it was difficult to work with the GHC AST as any generic traversals had to carefully tiptoe around an assortment of panic and undefined expressions. These have now been removed, allowing standard traversal libraries to be used.

There is a third change currently being discussed, and I would like feedback on its desirability.

3. Introduce an annotation structure to the ParsedSource to record the location of uncaptured keywords.

   At the moment the location of let / in / if / then / else / do etc is not captured in t
Categories: Offsite Discussion

Journal of Functional Programming - Call for PhD Abstracts

General haskell list - Wed, 10/01/2014 - 6:26pm
============================================================
 CALL FOR PHD ABSTRACTS
 Journal of Functional Programming
 Deadline: 31st October 2014
============================================================

PREAMBLE:

Many students complete PhDs in functional programming each year, but there is currently no common location in which to promote and advertise the resulting work. The Journal of Functional Programming would like to change that!

As a service to the community, JFP recently launched a new feature, in the form of a regular publication of abstracts from PhD dissertations that were completed during the previous year. The abstracts are made freely available on the JFP website, i.e. not behind any paywall, and do not require any transfer of copyright, merely a license from the author.

The first round of published abstracts can be downloaded from Please submit dissertation abstracts according to the instructio
Categories: Incoming News
