News aggregator

Daniil Frumin: Cabal sandbox status in your ZSH prompt

Planet Haskell - Fri, 01/17/2014 - 3:46am

Some time ago I made a simple script for my zsh setup that lets me see whether I am in a cabalized sandbox environment or not. I posted it to Haskell-cafe a month ago, but completely forgot to post it to this blog.

The result of checking for the sandbox is cached, which is probably unnecessary; it updates only when the user performs a cabal command or changes a directory.


After you place the contents of the script in your .zshrc file (or in a similar location), you should update your $PROMPT to use $(sandbox_prompt). The prompt I am using, by the way, is

HOST_COLOR=%{$fg_bold[yellow]%}
local ret_status="%(?:%{$fg_bold[green]%}:%{$fg_bold[red]%})%?%{$reset_color%}"
PROMPT=$'\n$(ssh_connection)$HOST_COLOR%n@%m%{$reset_color%}$(my_git_prompt)$(sandbox_prompt) : %~\n[${ret_status}] %# '

and it is based on the oh-my-solarized theme.

Tagged: cabal, haskell, zsh
Categories: Offsite Blogs

Edward Z. Yang: PEPM’14: The HERMIT in the Stream

Planet Haskell - Fri, 01/17/2014 - 1:36am

POPL is almost upon us! I’ll be live-Tumblr-ing it when the conference comes upon us proper, but in the meantime, I thought I’d write a little bit about one paper in the colocated PEPM'14 program: The HERMIT in the Stream, by Andrew Farmer, Christian Höner zu Sierdissen and Andy Gill. This paper presents an implementation of an optimization scheme for fusing away uses of the concatMap combinator in the stream fusion framework, which was developed using the HERMIT optimization framework. The HERMIT project has been chugging along for some time now, and a stream of papers on various applications of the framework has been trickling out (as anyone who was at the Haskell implementors workshop can attest).

“But wait,” you may ask, “don’t we already have stream fusion?” You’d be right: but while stream fusion is available as a library, it has not replaced the default fusion system that ships with GHC: foldr/build fusion. What makes a fusion scheme good? One important metric is the number of list combinators it supports. Stream fusion nearly dominates foldr/build fusion, except for the case of concatMap, a problem which has resisted resolution for seven years and has prevented GHC from switching to using stream fusion as its default.
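The shape of the problem is easier to see with the stream representation written out. Below is a minimal sketch of stream fusion (after Coutts, Leshchinskiy and Stewart's formulation, not code from the paper): combinators like mapS keep the step function non-recursive, which GHC can fuse away, while concatMapS must thread a value-dependent inner stream through its state, which is exactly what defeats the simplifier.

```haskell
{-# LANGUAGE ExistentialQuantification #-}

-- A step either finishes, yields an element, or skips (updating state).
data Step s a = Done | Yield a s | Skip s

-- A stream is a non-recursive step function plus an existential state.
data Stream a = forall s. Stream (s -> Step s a) s

stream :: [a] -> Stream a
stream xs0 = Stream next xs0
  where
    next []     = Done
    next (x:xs) = Yield x xs

unstream :: Stream a -> [a]
unstream (Stream next s0) = go s0
  where
    go s = case next s of
      Done       -> []
      Skip s'    -> go s'
      Yield x s' -> x : go s'

-- mapS only wraps the step function; composing such combinators keeps
-- everything non-recursive, so fusion can eliminate the intermediate Stream.
mapS :: (a -> b) -> Stream a -> Stream b
mapS f (Stream next s0) = Stream next' s0
  where
    next' s = case next s of
      Done       -> Done
      Skip s'    -> Skip s'
      Yield x s' -> Yield (f x) s'

-- concatMapS must carry a whole inner Stream (produced from a value!) in
-- its state, so the inner state type is hidden behind an existential and
-- the simplifier cannot statically unfold it.
concatMapS :: (a -> Stream b) -> Stream a -> Stream b
concatMapS f (Stream next s0) = Stream next' (s0, Nothing)
  where
    next' (s, Nothing) = case next s of
      Done       -> Done
      Skip s'    -> Skip (s', Nothing)
      Yield x s' -> Skip (s', Just (f x))
    next' (s, Just (Stream nexti si)) = case nexti si of
      Done        -> Skip (s, Nothing)
      Skip si'    -> Skip (s, Just (Stream nexti si'))
      Yield x si' -> Yield x (s, Just (Stream nexti si'))
```

For instance, `unstream (concatMapS (\x -> stream [x, x]) (stream [1,2,3]))` produces `[1,1,2,2,3,3]`, but the `Just (f x)` state is opaque to the optimizer.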

As it turns out, we’ve known how to optimize concatMap for a long time; Duncan Coutts gave a basic outline in his thesis. The primary contribution of this paper was a prototype implementation of this optimization, including an elucidation of the important technical details (increasing the applicability of the original rule, necessary modifications to the simplifier, and rules for desugaring list comprehensions). The paper also offers some microbenchmarks and real world benchmarks arguing for the importance of optimizing concatMap.

I was glad to see this paper, since it is an important milestone on the way to replacing foldr/build fusion with stream fusion in the GHC standard libraries. It also seems the development of this optimization was greatly assisted by the use of HERMIT, which seems like a good validation for HERMIT (though the paper does not go into very much detail about how HERMIT assisted in the process of developing this optimization).

There is something slightly unsatisfying with the optimization as stated in the paper, which can be best articulated by considering the paper from the perspective of a prospective implementor of stream fusion. She has two choices:

  • She can try to use the HERMIT system directly. However, HERMIT induces a 5-20x compilation slowdown, which is quite discouraging for real use. This slowdown is probably not fundamental, and will be erased in due time, but that is certainly not the case today. The limited implementation of stream fusion in the prototype (they don’t implement all of the combinators, just enough so they could run their numbers) also recommends against direct use of the system.
  • She can directly incorporate the rules as stated into a compiler. This would require special-case code to apply the non-semantics preserving simplifications only to streams, and essentially would require a reimplementation of the system, with the guidance offered by this paper. But this special-case code is of limited applicability beyond its utility for concatMap, which is a negative mark.

So, it seems, at least from the perspective of an average GHC user, we will have to wait a bit longer before stream fusion is in our hands. Still, I agree that the microbenchmarks and ADPFusion case study show the viability of the approach, and the general principle of the novel simplification rules seems reasonable, if a little ad hoc.

One note if you’re reading the nofib performance section: the experiment was done comparing their system to foldr/build, so the delta is mostly indicative of the benefit of stream fusion (in the text, they point out which benchmarks benefitted the most from concatMap fusion). Regardless, it’s a pretty cool paper: check it out!

Categories: Offsite Blogs

Yesod Web Framework: Update: The ST monad and conduit

Planet Haskell - Fri, 01/17/2014 - 12:40am

I wanted to post an update on yesterday's blog post about using the ST monad in conduit. On Reddit, gereeter pointed out some serious problems with the approach I laid out. I think the problems come down to two categories:

  • Users can (ab)use the conduitSwapBase abstraction to perform arbitrary ST actions anywhere in their code, which is clearly wrong:

    unsafePerform = runPipe . conduitSwapBase . lift
  • If a Conduit gets run multiple times from some point in the middle of its computation, the abstraction breaks. This can occur in at least two circumstances:

    • Using a backtracking base monad such as the list monad.
    • Using connect-and-resume and then connecting the same intermediate ResumableSource multiple times.

So it's quite clear at this point that the ST instance used by conduitSwapBase is wrong and should be removed. For now, I think I'm going to remove the entire conduitSwapBase machinery since, while it may be somewhat useful for lifting IO to MonadIO and other such things, such functionality is already present in conduit.

All that said... what about my original problem? Are we really going to give up on the dream of having a referentially transparent signature for freqSink? Actually, I realized I was being incredibly dense yesterday. Let's look again at the initial freqSink I provided:

freqSink :: Consumer ByteString IO (V.Vector Int)
freqSink = do
    freq <- lift $ VM.replicate 256 0
    CB.mapM_ $ \w -> do
        let index = fromIntegral w
        oldCount <- freq index
        VM.write freq index (oldCount + 1)
    lift $ V.freeze freq

My complaint about this was that the type signature says IO, when in reality it works for both IO and ST. However, I'd simply forgotten what the actual type signatures in the vector package look like, e.g.:

new :: (PrimMonad m, Unbox a) => Int -> m (MVector (PrimState m) a)

In other words, none of the code I wrote above ever implied the IO constraint. So I can trivially generalize the type signature of freqSink above to:

freqSink :: PrimMonad m => Consumer ByteString m (V.Vector Int)

This now gives exactly the right implication about the code: it runs in any prim monad (which happens to be IO or ST), and while it is externally referentially transparent, it makes use of some state transformer operations under the surface. Looking at the critiques of the conduitSwapBase approach above, let's see if they're resolved:

  • There's definitely no way to implement unsafePerform, since we haven't added any ST-specific code to conduit!
  • Putting a backtracking monad as the base monad is now impossible, since it wouldn't be an instance of PrimMonad.
  • Connect-and-resume will maintain the intermediate state for each time the ResumableSource is reused. However, that's always the case when using IO or ST as a base monad.

So overall, this seems like a much saner approach to the problem.
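The "internally mutable, externally pure" pattern at the heart of this fix can also be demonstrated with no conduit or vector dependencies at all. Here is my own dependency-free translation of the freqSink idea (not code from the post), using STUArray from the array package: all mutation happens inside runSTUArray, so the result is an ordinary pure function.

```haskell
import Control.Monad (forM_)
import Data.Array.ST (newArray, readArray, writeArray, runSTUArray)
import Data.Array.Unboxed (UArray, (!))
import Data.Word (Word8)

-- Tally octet frequencies. The mutable array never escapes the ST
-- computation, so the function is referentially transparent despite
-- the in-place updates.
freqPure :: [Word8] -> UArray Int Int
freqPure ws = runSTUArray $ do
    freq <- newArray (0, 255) 0
    forM_ ws $ \w -> do
        let index = fromIntegral w
        oldCount <- readArray freq index
        writeArray freq index (oldCount + 1)
    return freq
```

The rank-2 type of runSTUArray plays the same role as the PrimMonad signature above: it is what guarantees the mutable state cannot leak out.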

Thanks again to gereeter for the feedback on the previous blog post and catching my mistakes!

Categories: Offsite Blogs

How can I replicate something like Cloact?

Haskell on Reddit - Fri, 01/17/2014 - 12:15am

So recently, I started learning ClojureScript in conjunction with Cloact and ClojureScript's go-like asynchronous CSP functionality.

It is very expressive, but I miss my types!

Similar to when I learned Haskell, it is somewhat trial and error for me, finding correlations between compilation messages, and what I did.

As simple as Clojure's destructuring and its handling of vectors, hashes (maps), and sets are, there are times when I end up facepalming over differences like iterating a vector as if it were a map--and I get no error in the browser console!

So, the question is: How can I do something like...

[:div {:id some-var}
  "Some text"
  [:div.some-class.other-class
    "This time no attrs, but with an atomic var and class"
    @atom-var]]

Which produces a representation like..

<div id="val-of-some-var">
  Some text
  <div class="some-class other-class">
    This time no attrs, but with an atomic var and class
    <span>atom-var-value</span>
  </div>
</div>

(Though Cloact puts a lot more things in when it builds the DOM-tree inside of Facebook's React)

A free monad seems to be the obvious way to get something similar, though I see the conciseness suffering with ritualistic syntax.

For example:

div $ do
    attr "id" someVar
    childText "Some Text"
    subRender $ div $ do
        classes ["some-class", "other-class"]
        childText "This time no attrs, but with an atomic var and class"
        readAtom atomVar
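For the static fragment above, a full free monad may not even be necessary. Here is a hand-rolled sketch of one possible encoding of such an API, where Html, element, attr and childText are all hypothetical names invented for illustration (not an existing library): the monad is just a writer accumulating attributes and child nodes.

```haskell
-- A child node is either a nested element or text.
data Node = Element String [(String, String)] [Node]
          | Text String

-- Html accumulates the attributes and children of the current element.
newtype Html a = Html { runHtml :: (a, [(String, String)], [Node]) }

instance Functor Html where
  fmap f (Html (a, as, ns)) = Html (f a, as, ns)

instance Applicative Html where
  pure a = Html (a, [], [])
  Html (f, as1, ns1) <*> Html (a, as2, ns2) =
    Html (f a, as1 ++ as2, ns1 ++ ns2)

instance Monad Html where
  Html (a, as, ns) >>= k =
    let Html (b, as', ns') = k a
    in Html (b, as ++ as', ns ++ ns')

-- Wrap a body's accumulated attrs/children into a single element node.
element :: String -> Html () -> Html ()
element tag body =
  let ((), as, ns) = runHtml body
  in Html ((), [], [Element tag as ns])

attr :: String -> String -> Html ()
attr k v = Html ((), [(k, v)], [])

childText :: String -> Html ()
childText t = Html ((), [], [Text t])

render :: Node -> String
render (Text t) = t
render (Element tag as ns) =
  "<" ++ tag
      ++ concatMap (\(k, v) -> " " ++ k ++ "=\"" ++ v ++ "\"") as
      ++ ">" ++ concatMap render ns ++ "</" ++ tag ++ ">"

renderHtml :: Html () -> String
renderHtml body = concatMap render ns
  where ((), _, ns) = runHtml body
```

With this, `element "div" (attr "id" "x" >> childText "hi")` renders as `<div id="x">hi</div>`; a free monad becomes interesting once you want to interpret the same tree in multiple ways (e.g. diffing against a live DOM, as Cloact/React do).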

The way that Cloact seems to do things is that each part of the tree is actively rebuilt judiciously with functions, which is managed through cause and effect with atomic vars.

So, as a reiteration, how can I (or anyone else that actually has time) do something like this in a language that does more than mere checks on matching (), [], {}?

I feel like cloact (and a few other tools like the async core library I mentioned) can make me productive. So far, what comes out of the compilation is pretty thick (in developer mode), but I have the same confidence in it continuing to work (once shown to) as I do for things I write in Haskell, unlike arbitrarily structured jQuery, which quickly becomes unmaintainable.

submitted by levischuck
[link] [4 comments]
Categories: Incoming News

Ur/Web in production : haskell - Thu, 01/16/2014 - 10:10pm
Categories: Offsite Blogs

leonidas/codeblog - Thu, 01/16/2014 - 7:44pm
Categories: Offsite Blogs

Can't install yesod-platform due to vector package

haskell-cafe - Thu, 01/16/2014 - 6:43pm
Hi all, I'm trying to install Yesod as suggested by this page, with the following commands:

cabal update
cabal install yesod-platform yesod-bin --max-backjumps=-1 --reorder-goals

But I constantly get the following error:

... 136 warnings and 6 errors generated.
Failed to install vector-
cabal: Error: some packages failed to install:
aeson- depends on vector- which failed to install.
...

The error appears to come from the fact that cabal cannot install the vector package. Any suggestions to fix the problem? I'm on Maverick, Xcode5, Haskell Platform 2013.2.0.0 64bit.

Thanks, Ed
Categories: Offsite Discussion

Silencing 'cabal configure' warning

haskell-cafe - Thu, 01/16/2014 - 3:52pm
Hi all, When running 'cabal configure' I get a warning that my package database is old:

[killy< at >xerxes : /dane/projekty/singletons] cabal configure
Warning: The package list for '' is 25 days old.
Run 'cabal update' to get the latest list of available packages.

Is there a way to silence this warning other than running 'cabal update'? I don't want to update my package database and getting a warning every time I configure a package is irritating.

Janek
Categories: Offsite Discussion

Ur/Web in production

Haskell on Reddit - Thu, 01/16/2014 - 2:46pm
Categories: Incoming News

Jan Stolarek: Yet another web overview

Planet Haskell - Thu, 01/16/2014 - 7:27am

I haven’t done a single web overview in 2013. It’s time to fix that since in the last few weeks I came across a couple of interesting online resources that I’d like to share:

  • Fun with type functions (2011) – Simon PJ’s presentation of the tutorial paper with the same title. Covers associated data types and type families (see “Associated Types With Class” for an in-depth presentation) + some stuff found in Data Parallel Haskell (read “Data Parallel Haskell: a status report” for more details). The whole presentation feels like a teaser as it ends quite quickly and skips some really interesting examples found in the paper.
  • Types a la Milner (2012) by Benjamin C. Pierce (author of the book “Types and Programming Languages”). The talk covers a bit of programming languages history, type systems in general (“well-typed programs don’t go wrong”), type inference in the presence of polymorphism and using types to manage security of personal information. I found the type inference and historical parts very interesting.
  • The trouble with types (2013) by Martin Odersky (creator of Scala). Talk covers the role of types in programming, presents the spectrum of static type systems and then focuses on innovations in the type system of Scala.
  • I also found an interesting blog hosted on GitHub. Despite having only 10 posts, the blog has lots of stuff on practical type level programming in Haskell. Highly recommended.
Categories: Offsite Blogs

Yesod Web Framework: The ST monad and conduit

Planet Haskell - Thu, 01/16/2014 - 6:40am

I've recently been working on some higher level analysis utilities built on top of conduit. A major component of this work is performing high-performance numerical calculations on large streams of data. So I was particularly intrigued when I saw a StackOverflow question that touched on the same points. I'd like to elaborate on my answer to that question, and demonstrate a possible addition to the conduit library.

The issue at play in that question is the desire to tally up the frequency of each octet in a stream of data. If you look through the answers, it quickly becomes apparent that using some kind of a mutable packed data structure (either Vector or Array) provides drastically better performance than immutable data structures. For our purposes, let's stick with the vector library, though the discussion here applies equally well to array.

The actions we need to perform are very straightforward: read in the entirety of the data from some source (let's say standard input), and perform a mutating action for each and every octet that we receive. We could read the entire stream of data using lazy I/O, but as readers of this blog are likely aware, I'd rather avoid lazy I/O when possible. So in my answer, I used conduit to read in the stream of data. The answer looks like this:

import Control.Monad.Trans.Class (lift)
import Data.ByteString (ByteString)
import Data.Conduit (Consumer, ($$))
import qualified Data.Conduit.Binary as CB
import qualified Data.Vector.Unboxed as V
import qualified Data.Vector.Unboxed.Mutable as VM
import System.IO (stdin)

freqSink :: Consumer ByteString IO (V.Vector Int)
freqSink = do
    freq <- lift $ VM.replicate 256 0
    CB.mapM_ $ \w -> do
        let index = fromIntegral w
        oldCount <- freq index
        VM.write freq index (oldCount + 1)
    lift $ V.freeze freq

main :: IO ()
main = (CB.sourceHandle stdin $$ freqSink) >>= print

We now have a reusable freqSink component that will consume a stream of ByteStrings to produce a Vector Int of the frequencies. The Sink creates a new mutable vector to hold the frequency values, maps over all the input octets, and for each octet updates the mutable vector. Finally, it freezes the mutable vector into an immutable one and returns it.

I like almost everything about this, except for two characters: IO. Our freqSink function sets the base monad to be IO, implying that freqSink may perform actions that have an impact on the outside world. However, we know that this isn't the case: by analyzing the code, we see that all of the mutating changes are contained within the little world that freqSink creates for itself. In other words, this function is referentially transparent, but the type signature is saying otherwise.

Fortunately, Haskell already has the perfect solution for this kind of a problem: the ST monad. All we need to do is swap IO for ST s and freqSink will be properly annotated as being referentially transparent. But when we make this change, we get the following error message:

Couldn't match type `ST s0' with `IO'

The problem is that, while freqSink is referentially transparent, sourceHandle is not. Since the source is capable of performing arbitrary IO, it has to live in the IO base monad, and since the two components live in the same processing pipeline, freqSink must match that base monad as well. While this all works, it's still quite disappointing.

But perhaps we can have our cake and eat it too. We want freqSink's type signature to be referentially transparent, which means it needs to live in ST. What we need is some way to turn an ST-based Sink into an IO-based Sink. And there's a function that lets us do just that: unsafeSTToIO. This ends up looking like:

import Control.Monad.Morph (hoist)
import Control.Monad.ST (ST)
import Control.Monad.ST.Unsafe (unsafeSTToIO)
import Control.Monad.Trans.Class (lift)
import Data.ByteString (ByteString)
import Data.Conduit (Consumer, ($$))
import qualified Data.Conduit.Binary as CB
import qualified Data.Vector.Unboxed as V
import qualified Data.Vector.Unboxed.Mutable as VM
import System.IO (stdin)

freqSink :: Consumer ByteString (ST s) (V.Vector Int)
freqSink = do
    freq <- lift $ VM.replicate 256 0
    CB.mapM_ $ \w -> do
        let index = fromIntegral w
        oldCount <- freq index
        VM.write freq index (oldCount + 1)
    lift $ V.freeze freq

main :: IO ()
main = (CB.sourceHandle stdin $$ hoist unsafeSTToIO freqSink) >>= print

This once again works, and freqSink's type signature now indicates that it is referentially transparent. However, we've put two heavy burdens on users of our freqSink function:

  • They need to know about hoist and understand how to use it.
  • They need to pull in an unsafe function and know in which circumstances it's safe to use it.

What we really want is to provide a general purpose function which is completely safe. We want to contain the concept of "I can safely swap out the base monad of this Conduit with some other base monad." So I've just pushed a new commit to conduit adding the conduitSwapBase helper function (name up for debate). Let's start by seeing how it solves our present problem:

import Control.Monad.ST (ST)
import Control.Monad.Trans.Class (lift)
import Data.ByteString (ByteString)
import Data.Conduit (Consumer, ($$))
import qualified Data.Conduit.Binary as CB
import Data.Conduit.Util (conduitSwapBase)
import qualified Data.Vector.Unboxed as V
import qualified Data.Vector.Unboxed.Mutable as VM
import System.IO (stdin)

freqSinkST :: Consumer ByteString (ST s) (V.Vector Int)
freqSinkST = do
    freq <- lift $ VM.replicate 256 0
    CB.mapM_ $ \w -> do
        let index = fromIntegral w
        oldCount <- freq index
        VM.write freq index (oldCount + 1)
    lift $ V.freeze freq

freqSink :: Monad m => Consumer ByteString m (V.Vector Int)
freqSink = conduitSwapBase freqSinkST

main :: IO ()
main = (CB.sourceHandle stdin $$ freqSink) >>= print

I renamed the previous function to freqSinkST, leaving its type exactly as it was. In addition, we now have a new freqSink, which can live in any base monad. The type signature makes it completely clear that this function is referentially transparent. And all we needed to do was use conduitSwapBase to perform this conversion.

Once that conversion is performed, we can easily combine freqSink with our IO-based sourceHandle. Or for that matter, it could be combined with a completely pure source, or a source living in the Maybe monad.

I believe this function could be used to clean up the compression/decompression functions in zlib-conduit and (at least some of) the functions in blaze-builder-conduit.

As it stands right now, conduitSwapBase will allow the following base transformations to be applied:

  • ST can be converted to ~~any other monad~~. EDIT See update below.
  • Identity can be converted to any other monad.
  • IO can be converted to any instance of MonadIO.
  • For many transformers (all instances of MonadTransControl actually), if the base monad m1 can be converted to m2, then the transformer t m1 can be converted to t m2.

This addition allows us to keep more type safety in our codebase, while still allowing safe interleaving of IO actions with pure code. I'm happy with the addition so far, I'm curious to hear further ideas from the community.

UPDATE: As pointed out on Reddit, a backtracking base monad can break referential transparency for ST. I've pushed a new commit that constrains the types of monads that can be converted to. In particular, it works for monads which are processed in a linear, non-branching manner. This includes Identity, IO and Maybe, and transformers like ReaderT and ErrorT.

I'm currently calling this concept MonadLinear, but I have a strong feeling that there's a better abstraction already in existence.

Categories: Offsite Blogs

The ST monad and conduit

Haskell on Reddit - Thu, 01/16/2014 - 6:38am
Categories: Incoming News

Jan Stolarek: Verifying weight biased leftist heaps using dependent types (a draft)

Planet Haskell - Thu, 01/16/2014 - 5:03am

I recently wrote a paper for the 26th International Conference on Computer Aided Verification (CAV’14), Vienna, Austria. The paper is titled “Verifying weight biased leftist heaps using dependent types”. In this paper I have taken the weight biased leftist heap data structure presented by Okasaki in “Purely Functional Data Structures” and created its Agda implementation, which uses dependent types to statically guarantee that rank and priority invariants are maintained. My verification is based on techniques demonstrated by Altenkirch, McKinna and McBride in “Why Dependent Types Matter”, but my focus is on things that were left out in that paper. In the end my paper is an intermediate level tutorial on verification in Agda, but also a case study of a purely functional data structure implemented in a dependently typed setting. Worth noting is the fact that despite the title of Okasaki’s book, his implementation is not purely functional, as it uses exceptions to deal with edge cases like taking the smallest element from an empty heap.
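For readers without Agda at hand, the data structure itself is small. Here is a sketch of Okasaki's weight biased leftist heap in plain Haskell (my own transcription, not the paper's code): the rank invariant (each left subtree at least as large as its sibling) is maintained by construction in makeNode, but unlike in the Agda version nothing in the types enforces it.

```haskell
-- A node stores the subtree size, the element, and two children.
data Heap a = Empty | Node Int a (Heap a) (Heap a)

size :: Heap a -> Int
size Empty          = 0
size (Node s _ _ _) = s

-- Smart constructor: put the larger subtree on the left, preserving
-- the weight biased leftist (rank) invariant.
makeNode :: a -> Heap a -> Heap a -> Heap a
makeNode x a b
  | size a >= size b = Node (size a + size b + 1) x a b
  | otherwise        = Node (size a + size b + 1) x b a

-- Merge walks down the right spines, which the invariant keeps short.
merge :: Ord a => Heap a -> Heap a -> Heap a
merge h Empty = h
merge Empty h = h
merge h1@(Node _ x l1 r1) h2@(Node _ y l2 r2)
  | x <= y    = makeNode x l1 (merge r1 h2)
  | otherwise = makeNode y l2 (merge h1 r2)

insert :: Ord a => a -> Heap a -> Heap a
insert x = merge (Node 1 x Empty Empty)

-- Partial, like Okasaki's original: the empty-heap edge case raises an
-- error, which is exactly what the paper's dependent types rule out.
findMin :: Heap a -> a
findMin (Node _ x _ _) = x
findMin Empty          = error "findMin: empty heap"

deleteMin :: Ord a => Heap a -> Heap a
deleteMin (Node _ _ l r) = merge l r
deleteMin Empty          = error "deleteMin: empty heap"
```

In the Agda version these partial functions instead take a proof that the heap is non-empty (or index Heap by its size), turning the runtime error into a type error.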

Draft version of the paper can be downloaded here. It comes with companion source code that contains a thorough discussion of concepts presented in the paper as well as others that didn’t make it into publication due to space limitations. Companion code is available at GitHub (tag “blog-post-draft-release” points to today’s version). The paper is mostly finished. It should only receive small corrections and spelling fixes. However, if you have any suggestions or comments please share them with me – submission deadline is in three weeks so there is still time to include them.

Categories: Offsite Blogs