News aggregator

Don Stewart (dons): Haskell dev roles with Strats @ Standard Chartered

Planet Haskell - 5 hours 14 min ago

The Strats team at Standard Chartered is growing. We have 10 more open roles currently, in a range of areas:

  • Haskell dev to build hedging-effectiveness analytics and hedging services.
  • Haskell devs for derivatives pricing services. Generic roles using Haskell.
  • Web-experienced Haskell devs for frontends to analytics services written in Haskell. PureScript and/or data visualization and user interface skills desirable.
  • Haskell dev for trading algorithms and strategy development.
  • Dev/ops role to extend our continuous integration infrastructure (Haskell+git)
  • Contract analysis and manipulation in Haskell for trade formats (FpML + Haskell).
  • Haskell dev for low latency (< 100 microsecond) components in soft real-time non-linear pricing charges service.

You would join an existing team of 25 Haskell developers in Singapore or London. Generally our roles involve working directly with traders to automate their work and improve their efficiency. We use Haskell for all tasks, either GHC Haskell or our own (“Mu”) implementation. This is a rare chance to join a large, experienced Haskell dev team.

We offer permanent or contractor positions, at Director and Associate Director level, with very competitive compensation. Demonstrated experience in typed FP (Haskell, OCaml, F#, or another typed functional language) is required.

All roles require some physical presence in either Singapore or London, and we offer flexibility within these constraints (with work from home available). No financial background is required or assumed.

More info about our development process is in the 2012 PADL keynote, and a 2013 HaskellCast interview.

If this sounds exciting to you, please send your PDF resume to me – donald.stewart <at> sc.com


Tagged: jobs
Categories: Offsite Blogs

Well-Typed.Com: Sharing, Memory Leaks, and Conduit and friends

Planet Haskell - Thu, 09/29/2016 - 12:20am
TL;DR: Sharing conduit values leads to memory leaks. Make sure to disable the full laziness optimization in the module with your top-level calls to runConduit or ($$) (skip to the end of the conclusion for some details on how to do this). Similar considerations apply to other streaming libraries and indeed any Haskell code that uses lazy data structures to drive computation.

Motivation

We use large lazy data structures in Haskell all the time to drive our programs. For example, consider

main1 :: IO ()
main1 = forM_ [1..5] $ \_ ->
    mapM_ print [1 .. 1000000]

It’s quite remarkable that this works and that this program runs in constant memory. But this stands on a delicate cusp. Consider the following minor variation on the above code:

ni_mapM_ :: (a -> IO b) -> [a] -> IO ()
{-# NOINLINE ni_mapM_ #-}
ni_mapM_ = mapM_

main2 :: IO ()
main2 = forM_ [1..5] $ \_ ->
    ni_mapM_ print [1 .. 1000000]

This program runs, but unlike main1, it has a maximum residency of 27 MB; in other words, this program suffers from a memory leak. As it turns out, main1 was running in constant memory because the optimizer was able to eliminate the list altogether (due to the fold/build rewrite rule), but it is unable to do so in main2.

But why is main2 leaking? In fact, we can recover constant space behaviour by recompiling the code with -fno-full-laziness. The full laziness transformation is effectively turning main2 into

longList :: [Integer]
longList = [1 .. 1000000]

main3 :: IO ()
main3 = forM_ [1..5] $ \_ ->
    ni_mapM_ print longList

The first iteration of the forM_ loop constructs the list, which is then retained to be used by the next iterations. Hence, the large list is retained for the duration of the program: the aforementioned space leak.

The full laziness optimization is taking away our ability to control when data structures are not shared. That ability is crucial when we have actions driven by large lazy data structures. One particularly important example of such lazy structures that drive computation are conduits or pipes. For example, consider the following conduit code:

import qualified Data.Conduit as C

countConduit :: Int -> C.Sink Char IO ()
countConduit cnt = do
    mi <- C.await
    case mi of
      Nothing -> liftIO (print cnt)
      Just _  -> countConduit $! cnt + 1

getConduit :: Int -> C.Source IO Char
getConduit 0 = return ()
getConduit n = do
    ch <- liftIO getChar
    C.yield ch
    getConduit (n - 1)

Here countConduit is a sink that counts the characters it receives from upstream, and getConduit n is a conduit that reads n characters from the console and passes them downstream. Suppose we connect these two conduits and run them inside an exception handler that retries when an error occurs:

retry :: IO a -> IO a
retry io = catch io (\(_ :: SomeException) -> retry io)

main :: IO ()
main = retry $ C.runConduit $ getConduit 1000000 C.=$= countConduit 0

we again end up with a large memory leak, this time of type Pipe and ->Pipe (conduit’s internal type):

Although the values that stream through the conduit come from IO, the conduit itself is fully constructed and retained in memory. In this blog post we examine what exactly is being retained here, and why. We will also suggest a simple workaround: it usually suffices to avoid sharing at the very top-level calls to runConduit or ($$). Note that these problems are not specific to the conduit library, but apply equally to all other similar libraries.

We will not assume any knowledge of conduit but start from first principles; however, if you have never used any of these libraries before this blog post is probably not the best starting point; you might for example first want to watch my presentation Lazy I/O and Alternatives in Haskell.

Lists

Before we look at the more complicated case, let’s first consider another program using just lists:

main :: IO ()
main = retry $ ni_mapM_ print [1..1000000]

This program suffers from a space leak for similar reasons to the example with lists we saw in the introduction, but it’s worth spelling out the details here: where exactly is the list being maintained?

Recall that the IO monad is effectively a state monad over a token RealWorld state (if that doesn’t make any sense to you, you might want to read ezyang’s article Unraveling the mystery of the IO monad first). Hence, ni_mapM_ (just a wrapper around mapM_) is really a function of three arguments: the action to execute for every element of the list, the list itself, and the world token. That means that

ni_mapM_ print [1..1000000]

is a partial application, and hence we are constructing a PAP object. Such a PAP object is a runtime representation of a partial application of a function; it records the function we want to execute (ni_mapM_), as well as the arguments we have already provided. It is this PAP object that we give to retry, and which retry retains until the action completes because it might need it in the exception handler. The long list in turn is being retained because there is a reference from the PAP object to the list (as one of the arguments that we provided).
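To make this retention concrete, here is a small self-contained sketch (the demo action and its names are ours, not from the post): retry must hold on to the io closure it was given, since the exception handler may need to re-run it, and that closure in turn retains whatever the partial application references.

```haskell
import Control.Exception (SomeException, catch, throwIO)
import Data.IORef

retry :: IO a -> IO a
retry io = io `catch` \e -> (e :: SomeException) `seq` retry io

-- Hypothetical demo: the action fails on its first attempt and succeeds
-- on the second.  The *same* closure (retained by retry, closing over
-- ref) is re-executed by the exception handler.
demo :: IO Int
demo = do
    ref <- newIORef (0 :: Int)
    retry $ do
      n <- readIORef ref
      writeIORef ref (n + 1)
      if n == 0 then throwIO (userError "transient") else return n
```

Here demo returns 1: the first run increments the counter and throws, the retried run succeeds.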

Full laziness does not make a difference in this example; whether or not that [1 .. 10000000] expression gets floated out makes no difference.

Reminder: Conduits/Pipes

Just to make sure we don’t get lost in the details, let’s define a simple conduit-like or pipe-like data structure:

data Pipe i o m r =
    Yield o (Pipe i o m r)
  | Await (Either r i -> Pipe i o m r)
  | Effect (m (Pipe i o m r))
  | Done r

A pipe or a conduit is a free monad which provides three actions:

  1. Yield a value downstream
  2. Await a value from upstream
  3. Execute an effect in the underlying monad.

The argument to Await is passed an Either; we give it a Left value if upstream terminated, or a Right value if upstream yielded a value.1

This definition is not quite the same as the one used in real streaming libraries and ignores various difficulties (in particular exception safety, as well as other features such as leftovers); however, it will suffice for the sake of this blog post. We will use the terms “conduit” and “pipe” interchangeably in the remainder of this article.

Sources

The various Pipe constructors differ in their memory behaviour and the kinds of memory leaks that they can create. We therefore consider them one by one. We will start with sources, because their memory behaviour is relatively straightforward.

A source is a pipe that only ever yields values downstream.2 For example, here is a source that yields the values [n, n-1 .. 1]:

yieldFrom :: Int -> Pipe i Int m ()
yieldFrom 0 = Done ()
yieldFrom n = Yield n $ yieldFrom (n - 1)

We could “run” such a pipe as follows:

printYields :: Show o => Pipe i o m () -> IO ()
printYields (Yield o k) = print o >> printYields k
printYields (Done ())   = return ()

If we then run the following program:

main :: IO ()
main = retry $ printYields (yieldFrom 1000000)

we get a memory leak. This memory leak is very similar to the memory leak we discussed in section Lists above, with Done () playing the role of the empty list and Yield playing the role of (:).
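For experimenting with sources it can also be useful to run them purely; the following collectYields helper is our own sketch (not part of the post) and only handles pipes built from Yield and Done:

```haskell
-- the simplified Pipe type from the post
data Pipe i o m r =
    Yield o (Pipe i o m r)
  | Await (Either r i -> Pipe i o m r)
  | Effect (m (Pipe i o m r))
  | Done r

yieldFrom :: Int -> Pipe i Int m ()
yieldFrom 0 = Done ()
yieldFrom n = Yield n $ yieldFrom (n - 1)

-- collect everything a pure source yields (our helper, for testing)
collectYields :: Pipe i o m () -> [o]
collectYields (Yield o k) = o : collectYields k
collectYields (Done ())   = []
collectYields _           = error "collectYields: Await/Effect not supported"
```

For example, collectYields (yieldFrom 3) evaluates to [3,2,1].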

Sinks

A sink is a conduit that only ever awaits values from upstream; it never yields anything downstream.2 The memory behaviour of sinks is considerably more subtle than the memory behaviour of sources and we will examine it in detail. As a reminder, the constructor for Await is

data Pipe i o m r = Await (Either r i -> Pipe i o m r) | ...

As an example of a sink, consider this pipe that counts the number of characters it receives:

countChars :: Int -> Pipe Char o m Int
countChars cnt = Await $ \mi -> case mi of
    Left  _ -> Done cnt
    Right _ -> countChars $! cnt + 1

We could “run” such a sink by feeding it a bunch of characters; say, 10000000 of them:

feed :: Char -> Pipe Char o m Int -> IO ()
feed ch = feedFrom 10000000
  where
    feedFrom :: Int -> Pipe Char o m Int -> IO ()
    feedFrom _ (Done r)  = print r
    feedFrom 0 (Await k) = feedFrom 0     $ k (Left 0)
    feedFrom n (Await k) = feedFrom (n-1) $ k (Right ch)

If we run this as follows and compile with optimizations enabled, we once again end up with a memory leak:

main :: IO ()
main = retry $ feed 'A' (countChars 0)

We can recover constant space behaviour again by disabling full laziness; however, the effect of full laziness on this example is a lot more subtle than the example we described in the introduction.

Full laziness

Let’s take a brief moment to describe what full laziness is, exactly. Full laziness is one of the optimizations that ghc applies by default when optimizations are enabled; it is described in the paper “Let-floating: moving bindings to give faster programs”. The idea is simple; if we have something like

f = \x y ->
      let e = ..  -- expensive computation involving x but not y
      in ..

full laziness floats the let binding out over the lambda to get

f = \x ->
      let e = ..
      in \y -> ..

This potentially avoids unnecessarily recomputing e for different values of y. Full laziness is a useful transformation; for example, it turns something like

f x y = ..
  where
    go = .. -- some local function

into

f x y = ..

f_go .. = ..

which avoids allocating a function closure every time f is called. It is also quite a notorious optimization, because it can create unexpected CAFs (constant applicative forms; top-level definitions of values); for example, if you write

nthPrime :: Int -> Int
nthPrime n = allPrimes !! n
  where
    allPrimes :: [Int]
    allPrimes = ..

you might expect nthPrime to recompute allPrimes every time it is invoked; but full laziness might move that allPrimes definition to the top-level, resulting in a large space leak (the full list of primes would be retained for the lifetime of the program). This goes back to the point we made in the introduction: full laziness is taking away our ability to control when values are not shared.
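A common, if admittedly fragile, trick to insist on recomputation is to give the binding a dummy argument so that it is no longer a constant expression. This is our own illustration (with a naive sieve standing in for the elided allPrimes), and it only discourages, not prevents, the float:

```haskell
nthPrime :: Int -> Int
nthPrime n = allPrimes () !! n
  where
    -- the unit argument makes allPrimes a function rather than a
    -- constant, discouraging (though not guaranteeing against) a
    -- float to a shared top-level CAF
    allPrimes :: () -> [Int]
    allPrimes () = sieve [2 ..]

    -- naive trial-division sieve, for illustration only
    sieve :: [Int] -> [Int]
    sieve (p:xs) = p : sieve [x | x <- xs, x `mod` p /= 0]
    sieve []     = []
```

Under optimization, GHC may still float the body out; the robust fix remains -fno-full-laziness, as discussed in the conclusion.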

Full laziness versus sinks

Back to the sink example. What exactly is full laziness doing here? Is it constructing a CAF we weren’t expecting? Actually, no; it’s more subtle than that. Our definition of countChars was

countChars :: Int -> Pipe Char o m Int
countChars cnt = Await $ \mi -> case mi of
    Left  _ -> Done cnt
    Right _ -> countChars $! cnt + 1

Full laziness is turning this into something more akin to

countChars' :: Int -> Pipe Char o m Int
countChars' cnt =
    let k = countChars' $! cnt + 1
    in Await $ \mi -> case mi of
         Left  _ -> Done cnt
         Right _ -> k

Note how the computation of countChars' $! cnt + 1 has been floated over the lambda; ghc can do that, since this expression does not depend on mi. So in memory the countChars 0 expression from our main function (retained, if you recall, because of the surrounding retry wrapper), develops something like this. It starts off as a simple thunk:

Then when feed matches on it, it gets reduced to weak head normal form, exposing the top-most Await constructor:

The body of the await is a function closure pointing to the function inside countChars (\mi -> case mi ..), which has countChars $! (cnt + 1) as an unevaluated thunk in its environment. Evaluating it one step further yields

So where for a source the data structure in memory was a straightforward “list” consisting of Yield nodes, for a sink the situation is more subtle: we build up a chain of Await constructors, each of which points to a function closure which in its environment has a reference to the next Await constructor. This wouldn’t matter of course if the garbage collector could clean up after us; but if the conduit itself is shared, then this results in a memory leak.

Without full laziness, incidentally, evaluating countChars 0 yields

and the chain stops there; the only thing in the function closure now is cnt. Since we don’t allocate the next Await constructor before running the function, we never construct a chain of Await constructors and hence we have no memory leak.

Depending on values

It is tempting to think that if the conduit varies its behaviour depending on the values it receives from upstream the same chain of Await constructors cannot be constructed and we avoid a memory leak. For example, consider this variation on countChars which only counts spaces:

countSpaces :: Int -> Pipe Char o m Int
countSpaces cnt = Await $ \mi -> case mi of
    Left  _   -> Done cnt
    Right ' ' -> countSpaces $! cnt + 1
    Right _   -> countSpaces $! cnt

If we substitute this conduit for countChars in the previous program, do we fare any better? Alas, the memory behaviour of this conduit, when shared, is in fact far, far worse.

The reason is that both countSpaces $! cnt + 1 and countSpaces $! cnt can be floated out by the full laziness optimization. Hence, every Await constructor will now have a function closure in its payload containing two thunks, one for each alternative way to continue the conduit. What’s more, both of these thunks are retained as long as we retain a reference to the top-level conduit.
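By analogy with countChars', we would expect full laziness to rewrite countSpaces roughly as follows; this is our reconstruction, together with a small pure driver (also ours) to check that the observable behaviour is unchanged:

```haskell
-- the simplified Pipe type from the post
data Pipe i o m r =
    Yield o (Pipe i o m r)
  | Await (Either r i -> Pipe i o m r)
  | Effect (m (Pipe i o m r))
  | Done r

-- our guess at the result of full laziness: both recursive calls are
-- floated out of the lambda, so each Await closure holds two thunks
countSpaces' :: Int -> Pipe Char o m Int
countSpaces' cnt =
    let kSpace = countSpaces' $! cnt + 1
        kOther = countSpaces' $! cnt
    in Await $ \mi -> case mi of
         Left  _   -> Done cnt
         Right ' ' -> kSpace
         Right _   -> kOther

-- a pure driver (ours): feed a string, then signal termination
feedPure :: String -> Pipe Char o m Int -> Int
feedPure _      (Done r)  = r
feedPure []     (Await k) = feedPure [] (k (Left 0))
feedPure (c:cs) (Await k) = feedPure cs (k (Right c))
feedPure _      _         = error "feedPure: Yield/Effect not supported"
```

For example, feedPure "a b c" (countSpaces' 0) evaluates to 2, exactly as the original countSpaces would.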

We can neatly illustrate this using the following program:

main :: IO ()
main = do
    let count = countSpaces 0
    feed ' ' count
    feed ' ' count
    feed ' ' count
    feed 'A' count
    feed 'A' count
    feed 'A' count

The first feed ' ' explores a path through the conduit where every character is a space; so this constructs (and retains) one long chain of Await constructors. The next two calls to feed ' ' however walk over the exact same path, and hence memory usage does not increase for a while. But then we explore a different path, in which every character is a non-space, and hence memory usage will go up again. Then during the second call to feed 'A' memory usage is stable again, until we start executing the last feed 'A', at which point the garbage collector can finally start cleaning things up:

What’s worse, there is an infinite number of paths through this conduit. Every different combination of space and non-space characters will explore a different path, leading to combinatorial explosion and terrifying memory usage.

Effects

The precise situation for effects depends on the underlying monad, but let’s explore one common case: IO. As we will see, for the case of IO the memory behaviour of Effect is actually similar to the memory behaviour of Await. Recall that the Effect constructor is defined as

data Pipe i o m r = Effect (m (Pipe i o m r)) | ...

Consider this simple pipe that prints the numbers [n, n-1 .. 1]:

printFrom :: Int -> Pipe i o IO ()
printFrom 0 = Done ()
printFrom n = Effect $ print n >> return (printFrom (n - 1))

We might run such a pipe using3:

runPipe :: Show r => Pipe i o IO r -> IO ()
runPipe (Done r)   = print r
runPipe (Effect k) = runPipe =<< k

In order to understand the memory behaviour of Effect, we need to understand how the underlying monad behaves. For the case of IO, IO actions are state transformers over a token RealWorld state. This means that the Effect constructor actually looks rather similar to the Await constructor. Both have a function as payload; Await a function that receives an upstream value, and Effect a function that receives a RealWorld token. To illustrate what printFrom might look like with full laziness, we can rewrite it as

printFrom :: Int -> Pipe i o IO ()
printFrom n =
    let k = printFrom (n - 1)
    in case n of
         0 -> Done ()
         _ -> Effect $ IO $ \st -> unIO (print n >> return k) st

If we visualize the heap (using ghc-vis), we can see that it does indeed look very similar to the picture for Await:

Increasing sharing

If we cannot guarantee that our conduits are not shared, then perhaps we should try to increase sharing instead. If we can avoid allocating these chains of pipes, but instead have pipes refer back to themselves, perhaps we can avoid these memory leaks.

In theory, this is possible. For example, when using the conduit library, we could try to take advantage of monad transformers and rewrite our feed source and our count sink as:

feed :: Source IO Char
feed = evalStateC 1000000 go
  where
    go :: Source (StateT Int IO) Char
    go = do
        st <- get
        if st == 0
          then return ()
          else do put $! (st - 1) ; yield 'A' ; go

count :: Sink Char IO Int
count = evalStateC 0 go
  where
    go :: Sink Char (StateT Int IO) Int
    go = do
        mi <- await
        case mi of
          Nothing -> get
          Just _  -> modify' (+1) >> go

In both definitions go refers back to itself directly, with no arguments; hence, it ought to be self-referential, without any long chain of sources or sinks ever being constructed. This works; the following program runs in constant space:

main :: IO ()
main = retry $ print =<< (feed $$ count)

However, this kind of code is extremely brittle. For example, consider the following minor variation on count:

count :: Sink Char IO Int
count = evalStateC 0 go
  where
    go :: Sink Char (StateT Int IO) Int
    go = withValue $ \_ -> modify' (+1) >> go

withValue :: (i -> Sink i (StateT Int IO) Int)
          -> Sink i (StateT Int IO) Int
withValue k = do
    mch <- await
    case mch of
      Nothing -> get
      Just ch -> k ch

This seems like a straightforward variation, but this code in fact suffers from a memory leak again4. The optimized core version of this variation of count looks something like this:

count :: ConduitM Char Void (StateT Int IO) Int
count = ConduitM $ \k ->
    let countRec = modify' (+ 1) >> count
    in unConduitM await $ \mch ->
         case mch of
           Nothing -> unConduitM get k
           Just _  -> unConduitM countRec k

In the conduit library, ConduitM is a codensity transformation of an internal Pipe datatype; the latter corresponds more or less to the Pipe datastructure we’ve been describing here. But we can ignore these details: the important point here is that this has the same typical shape that we’ve been studying above, with an allocation inside a lambda but before an await.
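For readers unfamiliar with the codensity transformation, here is a rough, simplified sketch with our own names (the real ConduitM differs in detail): a computation is passed “the rest of the pipe” as a continuation.

```haskell
{-# LANGUAGE RankNTypes #-}

-- the simplified Pipe type from the post
data Pipe i o m r =
    Yield o (Pipe i o m r)
  | Await (Either r i -> Pipe i o m r)
  | Effect (m (Pipe i o m r))
  | Done r

-- a ConduitM computation receives "the rest of the pipe" as a
-- continuation; this reassociates binds, but note that allocations
-- can still end up inside the continuation-taking lambda
newtype ConduitM i o m r = ConduitM
  { unConduitM :: forall b. (r -> Pipe i o m b) -> Pipe i o m b }

-- return a result: just invoke the continuation
doneC :: r -> ConduitM i o m r
doneC r = ConduitM ($ r)

-- recover a plain Pipe by supplying Done as the final continuation
toPipe :: ConduitM i o m r -> Pipe i o m r
toPipe (ConduitM f) = f Done

-- run a pipe that consists of a single Done node (for illustration)
runPure :: Pipe i o m r -> r
runPure (Done r) = r
runPure _        = error "runPure: only Done supported here"
```

This is why the core for count above has the shape `ConduitM $ \k -> … let countRec = … in …`: the let sits inside the continuation-taking lambda, which is exactly the shape that full laziness then rearranges.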

We can fix it by writing our code as

count :: Sink Char IO Int
count = evalStateC 0 go
  where
    go :: Sink Char (StateT Int IO) Int
    go = withValue goWithValue

    goWithValue :: Char -> Sink Char (StateT Int IO) Int
    goWithValue _ = modify' (+1) >> go

withValue :: (i -> Sink i (StateT Int IO) Int)
          -> Sink i (StateT Int IO) Int
withValue k = do
    mch <- await
    case mch of
      Nothing -> get
      Just ch -> k ch

Ironically, it would seem that full laziness here could have helped us by floating out that modify' (+1) >> go expression for us. The reason that it didn’t is probably related to the exact way the k continuation is threaded through in the compiled code (I simplified a bit above). Whatever the reason, tracking down problems like these is difficult and incredibly time consuming; I’ve spent many many hours studying the output of -ddump-simpl and comparing before and after pictures. Not a particularly productive way to spend my time, and this kind of low-level thinking is not what I want to do when writing application level Haskell code!

Composed pipes

Normally we construct pipes by composing components together. Composition of pipes can be defined as

(=$=) :: Monad m => Pipe a b m r -> Pipe b c m r -> Pipe a c m r
{-# NOINLINE (=$=) #-}
_         =$= Done r    = Done r
u         =$= Effect d  = Effect $ (u =$=) <$> d
u         =$= Yield o d = Yield o (u =$= d)
Yield o u =$= Await d   = u =$= d (Right o)
Await u   =$= Await d   = Await $ \ma -> u ma =$= Await d
Effect u  =$= Await d   = Effect $ (=$= Await d) <$> u
Done r    =$= Await d   = Done r =$= d (Left r)

The downstream pipe “is in charge”; the upstream pipe only plays a role when downstream awaits. This mirrors Haskell’s lazy “demand-driven” evaluation model.

Typically we only run self-contained pipes that don’t have any Awaits or Yields left (after composition), so we are only left with Effects. The good news is that if the pipe components don’t consist of long chains, then their composition won’t either; at every Effect point we wait for either upstream or downstream to complete its effect; only once that is done do we receive the next part of the pipeline and hence no chains can be constructed.

On the other hand, of course composition doesn’t get rid of these memory leaks either. As an example, we can define a pipe equivalent to the getConduit from the introduction

getN :: Int -> Pipe i Char IO Int
getN 0 = Done 0
getN n = Effect $ do
    ch <- getChar
    return $ Yield ch (getN (n - 1))

and then compose getN and countChars to get a runnable program:

main :: IO ()
main = retry $ runPipe $ getN 1000000 =$= countChars 0

This program suffers from the same memory leaks as before because the individual pipeline components are kept in memory. As in the sink example, memory behaviour would be much worse still if there were different paths through the conduit network.
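As a sanity check, this kind of composed pipeline can be reproduced purely; the following self-contained sketch uses our own names, with Identity standing in for IO:

```haskell
import Data.Functor.Identity (Identity (..))

-- the simplified Pipe type and composition operator from the post
data Pipe i o m r =
    Yield o (Pipe i o m r)
  | Await (Either r i -> Pipe i o m r)
  | Effect (m (Pipe i o m r))
  | Done r

(=$=) :: Monad m => Pipe a b m r -> Pipe b c m r -> Pipe a c m r
_         =$= Done r    = Done r
u         =$= Effect d  = Effect $ (u =$=) <$> d
u         =$= Yield o d = Yield o (u =$= d)
Yield o u =$= Await d   = u =$= d (Right o)
Await u   =$= Await d   = Await $ \ma -> u ma =$= Await d
Effect u  =$= Await d   = Effect $ (=$= Await d) <$> u
Done r    =$= Await d   = Done r =$= d (Left r)

-- a source yielding n, n-1, .., 1, with result 0 (in this simplified
-- setting both sides of (=$=) must share one result type)
yieldFrom :: Int -> Pipe i Int m Int
yieldFrom 0 = Done 0
yieldFrom n = Yield n $ yieldFrom (n - 1)

-- a sink counting the values it receives
countAll :: Int -> Pipe Int o m Int
countAll cnt = Await $ \mi -> case mi of
    Left  _ -> Done cnt
    Right _ -> countAll $! cnt + 1

-- run a self-contained pure pipe (only Done/Effect remain)
runIdentityPipe :: Pipe i o Identity r -> r
runIdentityPipe (Done r)   = r
runIdentityPipe (Effect k) = runIdentityPipe (runIdentity k)
runIdentityPipe _          = error "pipe not self-contained"
```

Running runIdentityPipe (yieldFrom 5 =$= countAll 0) yields 5: after composition only Done (and possibly Effect) nodes remain, as described above.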

Conclusions

At Well-Typed we’ve been developing an application for a client to do streaming data processing. We’ve been using the conduit library to do this, with great success. However, occasionally memory leaks arise that are difficult to fix, and even harder to track down; of course, we’re not the first to suffer from these problems; for example, see ghc ticket 9520.

In this blog post we described how such memory leaks arise. Similar memory leaks can arise with any kind of code that uses large lazy data structures to drive computation, including other streaming libraries such as pipes or streaming, but the problem is not restricted to streaming libraries.

The conduit library tries to avoid these intermediate data structures by means of fusion rules; naturally, when this is successful the problem is avoided. We can increase the likelihood of this happening by using combinators such as folds etc., but in general the intermediate pipe data structures are difficult to avoid.

The core of the problem is that in the presence of the full laziness optimization we have no control over when values are not shared. While it is possible in theory to write code in such a way that the lazy data structures are self-referential and hence keeping them in memory does not cause a memory leak, in practice the resulting code is too brittle and writing code like this is just too difficult. Just to provide one more example, in our application we had some code that looked like this:

go x@(C y _) =
    case y of
      Constr1 -> doSomethingWith x >> go
      Constr2 -> doSomethingWith x >> go
      Constr3 -> doSomethingWith x >> go
      Constr4 -> doSomethingWith x >> go
      Constr5 -> doSomethingWith x >> go

This worked and ran in constant space. But after adding a single additional clause to this pattern match, suddenly we reintroduced a memory leak again:

go x@(C y _) =
    case y of
      Constr1 -> doSomethingWith x >> go
      Constr2 -> doSomethingWith x >> go
      Constr3 -> doSomethingWith x >> go
      Constr4 -> doSomethingWith x >> go
      Constr5 -> doSomethingWith x >> go
      Constr6 -> doSomethingWith x >> go

This was true even when that additional clause was never used; it had nothing to do with the change in the runtime behaviour of the code. Instead, when we added the additional clause some limit got exceeded in ghc’s bowels and suddenly something got allocated that wasn’t getting allocated before.

Full laziness can be disabled using -fno-full-laziness, but sadly this throws out the baby with the bathwater. In many cases, full laziness is a useful optimization. In particular, there is probably never any point allocating a thunk for something that is entirely static. We saw one such example above; it’s unexpected that when we write

go = withValue $ \_ -> modify' (+1) >> go

we get memory allocations corresponding to the modify' (+1) >> go expression.

Fortunately, there is a simple workaround. Any internal sharing in the conduit is (usually) fine, as long as we don’t retain the conduit from one run to the next. So it’s the argument to the top-level calls to runConduit or ($$) that we need to worry about (or the equivalent “run” functions from other libraries). This leads to the following recommendation:

Conduit code typically looks like

runMyConduit :: Some -> Args -> IO r
runMyConduit some args =
    runConduit $ stage1 some =$= stage2 args ... =$= stageN

You should put all top-level calls to runConduit into a module of their own, and disable full laziness in that module by declaring

{-# OPTIONS_GHC -fno-full-laziness #-}

at the top of the file. This means the computation of the conduit (stage1 =$= stage2 .. =$= stageN) won’t get floated to the top and the conduit will be recomputed on every invocation of runMyConduit (note that this relies on runMyConduit having some arguments; if it doesn’t, you should add a dummy one).

It is not necessary to disable full laziness anywhere else. In particular, the conduit stages themselves (stage1 etc.) can be defined in modules where full laziness is enabled as usual.

There is a recent proposal for adding a pragma to ghc that might make it possible to disable full laziness on specific expressions, but for now the above is a reasonable workaround.

Addendum 1: ghc’s “state hack”

Let’s go back to the section about sinks; if you recall, we considered this example:

countChars :: Int -> Pipe Char o m Int
countChars cnt =
    let k = countChars $! cnt + 1
    in Await $ \mi -> case mi of
         Left  _ -> Done cnt
         Right _ -> k

feedFrom :: Int -> Pipe Char o m Int -> IO ()
feedFrom n (Done r)  = print r
feedFrom 0 (Await k) = feedFrom 0 $ k (Left 0)
feedFrom n (Await k) = feedFrom (n - 1) $ k (Right 'A')

main :: IO ()
main = retry $ feedFrom 10000000 (countChars 0)

We explained how countChars 0 results in a chain of Await constructors and function closures. However, you might be wondering, why would this be retained at all? After all, feedFrom is just an ordinary function, albeit one that computes an IO action. Why shouldn’t the whole expression

feedFrom 10000000 (countChars 0)

just be reduced to a single print 10000000 action, leaving no trace of the pipe at all? Indeed, this is precisely what happens when we disable ghc’s “state hack”; if we compile this program with -fno-state-hack it runs in constant space.

So what is the state hack? You can think of it as the opposite of the full laziness transformation; where full laziness transforms

\x -> \y -> let e = <expensive> in ..

  ~~>

\x -> let e = <expensive> in \y -> ..

the state hack does the opposite

\x -> let e = <expensive> in \y -> ..

  ~~>

\x -> \y -> let e = <expensive> in ..

though only for arguments y of type State# <token>. In general this is not sound, of course, as it might duplicate work; hence, the name “state hack”. Joachim Breitner’s StackOverflow answer explains why this optimization is necessary; my own blog post Understanding the RealWorld provides more background.

Let’s leave aside the question of why this optimization exists, and consider the effect on the code above. If you ask ghc to dump the optimized core (-ddump-stg), and translate the result back to readable Haskell, you will realize that it boils down to a single line change. With the state hack disabled the last line of feedFrom is effectively:

feedFrom n (Await k) = IO $ unIO (feedFrom (n - 1) (k (Right 'A')))

where IO and unIO just wrap and unwrap the IO monad. But when the state hack is enabled (the default), this turns into

feedFrom n (Await k) = IO $ \w -> unIO (feedFrom (n - 1) (k (Right 'A'))) w

Note how this floats the recursive call to feedFrom into the lambda. This means that

feedFrom 10000000 (countChars 0)

no longer reduces to a single print statement (after an expensive computation); instead, it reduces immediately to a function closure, waiting for its world argument. It’s this function closure that retains the Await/function chain and hence causes the memory leak.

Addendum 2: Interaction with cost-centres (SCC)

A final cautionary tale. Suppose we are studying a memory leak, and so we are compiling our code with profiling enabled. At some point we add some cost centres, or use -fprof-auto perhaps, and suddenly find that the memory leak disappeared! What gives?

Consider one last time the sink example. We can make the memory leak disappear by adding a single cost centre:

feed :: Char -> Pipe Char o m Int -> IO ()
feed ch = feedFrom 10000000
  where
    feedFrom :: Int -> Pipe Char o m Int -> IO ()
    feedFrom n p = {-# SCC "feedFrom" #-}
      case (n, p) of
        (_, Done r)  -> print r
        (0, Await k) -> feedFrom 0     $ k (Left 0)
        (_, Await k) -> feedFrom (n-1) $ k (Right ch)

Adding this cost centre effectively has the same result as specifying -fno-state-hack; with the cost centre present, the state hack can no longer float the computations into the lambda.

Footnotes
  1. The ability to detect upstream termination is one of the characteristics that sets conduit apart from the pipes package, in which this is impossible (or at least hard to do). Personally, I consider this an essential feature.

  2. Sinks and sources can also execute effects, of course; since we are interested in the memory behaviour of the individual constructors, we treat effects separately.

  3. runPipe is (close to) the actual runPipe we would normally use; we connect pipes that await or yield into a single self contained pipe that does neither.

  4. For these simple examples the optimizer can actually work its magic and the memory leak doesn’t appear, unless evalStateC is declared NOINLINE. Again, for larger examples problems arise whether it’s inlined or not.

Categories: Offsite Blogs

FP Complete: Updated Hackage mirroring

Planet Haskell - Tue, 09/27/2016 - 6:00am

As we've discussed on this blog before, FP Complete has been running a Hackage mirror for quite a few years now. In addition to a straight S3-based mirror of raw Hackage content, we've also been running some Git repos providing the same content in an arguably more accessible format (all-cabal-files, all-cabal-hashes, and all-cabal-metadata).

In the past, we did all of this mirroring using Travis, but had to stop doing so a few months back. Also, a recent revelation showed that the downloads we were making were not as secure as I'd previously believed (due to lack of SSL between the Hackage server and its CDN). Finally, there's been off-and-on discussion for a while about unifying on one Hackage mirroring tool. After some discussion among Duncan, Herbert, and myself, all of these goals ended up culminating in this mailing list post.

This blog post details the end result of these efforts: what code is running, where it's running, how secret credentials are handled, and how we monitor the whole thing.

Code

One of the goals here was to use the new hackage-security mechanism in Hackage to validate the package tarballs and cabal file index downloaded from Hackage. This made it natural to rely on Herbert's hackage-mirror-tool code, which supports downloads, verification, and uploading to S3. There were a few minor hiccups getting things set up, but overall it was surprisingly easy to integrate, especially given that Herbert's code had previously never been used against Amazon S3 (it had been used against the Dreamhost mirror).

I made a few downstream modifications to the codebase to make it compatible with officially released versions of Cabal, Stackify it, and in the process generate Docker images. I also included a simple shell script for running the tool in a loop (based on Herbert's README instructions). The result is the snoyberg/hackage-mirror-tool Docker image.

After running this image (we'll get to how it's run later), we have a fully populated S3 mirror of Hackage guaranteeing a consistent view of Hackage (i.e., all package tarballs are available, without CDN caching issues in place). The next step is to use this mirror to populate the Git repositories. We already have all-cabal-hashes-tool and all-cabal-metadata-tool for updating the appropriate repos, and all-cabal-files is just a matter of running a tar xf on the tarball containing .cabal files. Putting all of this together, I set up the all-cabal-tool repo, containing:

  • run-inner.sh will:
    • Grab the 01-index.tar.gz file from the S3 mirror
    • Update the all-cabal-files repo
    • Use git archive in that repo to generate and update the 00-index.tar.gz file*
    • Update the all-cabal-hashes and all-cabal-metadata repos using the appropriate tools
  • run.sh uses the hackage-watcher to run run-inner.sh each time a new version of 01-index.tar.gz is available. It's able to do a simple ETag check, saving on bandwidth, disk IO, and CPU usage.
  • Dockerfile pulls in all of the relevant tools and provides a commercialhaskell/all-cabal-tool Docker image
  • You may notice some other code in that repo. I did have the intention of rewriting the Bash scripts and other Haskell code into a single Haskell executable for simplicity, but didn't get around to it yet. If anyone's interested in taking up the mantle on that, let me know.

* About this 00/01 business: 00-index.tar.gz is the original package format, without hackage-security, and is used by previous cabal-install releases, as well as Stack and possibly some other tools too. hackage-mirror-tool does not mirror this file since it has no security information, so generating it from the known-secure 01-index.tar.gz file (via the all-cabal-files repo) seemed the best option.

In setting up these images, I decided to split them into two pieces instead of combining them so that the straight Hackage mirroring bits would remain unaffected by the rest of the code, since the Hackage mirror (as we'll see later) will be available for users outside of the all-cabal* set of repos.

At the end of this, you can see that we're no longer using the original hackage-mirror code that powered the FP Complete S3 mirror for years. Unification achieved!

Kubernetes

As I mentioned, we previously ran all of this mirroring code on Travis, but had to move off of it. Anyone who's worked with me knows that I hate being a system administrator, so it was a painful few months where I had to run this code myself on an EC2 machine I set up personally. Fortunately, FP Complete runs a Kubernetes cluster these days, and that means I don't need to be a system administrator :). As mentioned, I packaged up all of the code above in two Docker images, so running them on Kubernetes is very straightforward.

For the curious, I've put the Kubernetes deployment configurations in a Gist.

Credentials

We have a few different credentials that need to be shared with these Docker containers:

  • AWS credentials for uploading
  • GPG key for signing tags
  • SSH key for pushing to Github

One of the other nice things about Kubernetes (besides allowing me to not be a sysadmin) is that it has built-in secrets support. I obviously won't be sharing those files with you, but if you look at the deployment configs I shared before, you can see how they are being referenced.

Monitoring

One annoyance I've had in the past is, if there's a bug in the scripts or some system problem, mirroring will stop for many hours before I become aware of it. I was determined to not let that be a problem again. So I put together the Hackage Mirror status page. It compares the last upload date from Hackage itself against the last modified time on various S3 artifacts, as well as the last commit for the Git repos. If any of the mirrors fall more than an hour behind Hackage itself, it returns a 500 status code. That's not technically the right code to use, but it does mean that normal HTTP monitoring/alerting tools can be used to watch that page and tell me if anything has gone wrong.
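The freshness check at the heart of that status page can be sketched in a few lines of Haskell. This is an illustrative reconstruction, not the actual code on Github; the function name and the representation of the one-hour threshold are invented:

```haskell
import Data.Time (UTCTime (..), diffUTCTime, fromGregorian, secondsToDiffTime)

-- | Hypothetical core of the status check: given Hackage's last upload
-- time and the last-modified times of the mirror artifacts, serve 200
-- if every mirror is within an hour of Hackage, and 500 otherwise.
mirrorStatus :: UTCTime -> [UTCTime] -> Int
mirrorStatus hackageLastUpload mirrors
    | all fresh mirrors = 200
    | otherwise         = 500
  where
    -- A mirror is fresh if it lags Hackage by at most one hour.
    fresh t = hackageLastUpload `diffUTCTime` t <= 3600
```

Any off-the-shelf HTTP monitoring tool can then alert on the 500.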

If you're curious to see the code powering this, it's available on Github.

Official Hackage mirror

With the addition of the new hackage-security metadata files to our S3 mirror, one nice benefit is that the FP Complete mirror is now an official Hackage mirror, and can be used natively by cabal-install without having to modify any configuration files. Hopefully this will be useful to end users.

And strangely enough, just as I finished this blog post, I got my first "mirrors out of sync" 500 error message ever, proving that the monitoring itself works (even if the mirroring had a bug).

What's next?

Hopefully nothing! I've spent quite a bit more time on this in the past few weeks than I'd hoped, but I'm happy with the end result. I feel confident that the mirroring processes will run reliably, I understand and trust the security model from end to end, and there's less code and machines to maintain overall.

Thank you!

Many thanks to Duncan and Herbert for granting me access to the private Hackage server to work around CDN caching issues, and to Herbert for the help and quick fixes with hackage-mirror-tool.

Categories: Offsite Blogs

Ken T Takusagawa: [rotqywrk] foldl foldr

Planet Haskell - Mon, 09/26/2016 - 5:03pm

foldl: (x * y) * z

foldr: x * (y * z)

Also a nice reference: https://wiki.haskell.org/Foldr_Foldl_Foldl'
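A non-associative operator such as subtraction makes the difference concrete:

```haskell
-- foldl associates to the left, foldr to the right, so (-)
-- distinguishes them:
--
--   foldl (-) 0 [1,2,3] = ((0 - 1) - 2) - 3 = -6
--   foldr (-) 0 [1,2,3] = 1 - (2 - (3 - 0)) =  2
leftAssoc, rightAssoc :: Int
leftAssoc  = foldl (-) 0 [1, 2, 3]
rightAssoc = foldr (-) 0 [1, 2, 3]
```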

Categories: Offsite Blogs

Functional Jobs: Senior Backend Engineer at Euclid Analytics (Full-time)

Planet Haskell - Mon, 09/26/2016 - 3:53pm

We are looking to add a senior individual contributor to the backend engineering team! Our team is responsible for creating and maintaining the infrastructure that powers the Euclid Analytics Engine. We leverage a forward thinking and progressive stack built in Scala and Python, with an infrastructure that uses Mesos, Spark and Kafka. As a senior engineer you will build out our next generation ETL pipeline. You will need to use and build tools to interact with our massive data set in as close to real time as possible. If you have previous experience with functional programming and distributed data processing tools such as Spark and Hadoop, then you would make a great fit for this role!

RESPONSIBILITIES:

  • Partnering with the data science team to architect and build Euclid’s big data pipeline
  • Building tools and services to maintain a robust, scalable data service layer
  • Leverage technologies such as Spark and Kafka to grow our predictive analytics and machine learning capabilities in real time
  • Finding innovative solutions to performance issues and bottlenecks

REQUIREMENTS:

  • At least 3 years industry experience in a full time role utilizing Scala or other modern functional programming languages (Haskell, Clojure, Lisp, etc.)
  • Database management experience (MySQL, Redis, Cassandra, Redshift, MemSQL)
  • Experience with big data infrastructure including Spark, Mesos, Scalding and Hadoop
  • Excited about data flow and orchestration with tools like Kafka and Spark Streaming
  • Have experience building production deployments using Amazon Web Services or Heroku’s Cloud Application Platform
  • B.S. or equivalent in Computer Science or another technical field

Get information on how to apply for this position.

Categories: Offsite Blogs

Derek Elkins: Quotient Types for Programmers

Planet Haskell - Thu, 09/22/2016 - 11:21pm
Introduction

Programmers in typed languages with higher order functions and algebraic data types are already comfortable with most of the basic constructions of set/type theory. In categorical terms, those programmers are familiar with finite products and coproducts and (monoidal/cartesian) closed structure. The main omissions are subset types (equalizers/pullbacks) and quotient types (coequalizers/pushouts) which would round out limits and colimits. Not having a good grasp on either of these constructions dramatically shrinks the world of mathematics that is understandable, but while subset types are fairly straightforward, quotient types are quite a bit less intuitive.

Subset Types

In my opinion, most programmers can more or less immediately understand the notion of a subset type at an intuitive level.
A subset type is just a type combined with a predicate on that type that specifies which values of the type we want. For example, we may have something like { n:Nat | n /= 0 } meaning the type of naturals not equal to #0#. We may use this in the type of the division function for the denominator. Consuming a value of a subset type is easy, a natural not equal to #0# is still just a natural, and we can treat it as such. The difficult part is producing a value of a subset type. To do this, we must, of course, produce a value of the underlying type — Nat in our example — but then we must further convince the type checker that the predicate holds (e.g. that the value does not equal #0#). Most languages provide no mechanism to prove potentially arbitrary facts about code, and this is why they do not support subset types. Dependently typed languages do provide such mechanisms and thus either have or can encode subset types. Outside of dependently typed languages the typical solution is to use an abstract data type and use a runtime check when values of that abstract data type are created.
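As a sketch of that abstract-data-type workaround in Haskell (the names here are invented for illustration):

```haskell
-- In a real library this would live in its own module that exports
-- only the type, the smart constructor, and the accessor -- not the
-- NonZero data constructor itself.
newtype NonZero = NonZero Int

-- | Smart constructor: the runtime check stands in for the proof
-- obligation a dependently typed language would discharge statically.
mkNonZero :: Int -> Maybe NonZero
mkNonZero 0 = Nothing
mkNonZero n = Just (NonZero n)

-- | Consuming is easy: a non-zero Int is still just an Int.
getNonZero :: NonZero -> Int
getNonZero (NonZero n) = n

-- | Division whose denominator is known to be non-zero.
safeDiv :: Int -> NonZero -> Int
safeDiv n d = n `div` getNonZero d
```

Consuming stays trivial via getNonZero, while producing forces a check through mkNonZero, mirroring the asymmetry described above.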

Quotient Types

The dual of subset types are quotient types. My impression is that this construction is the most difficult basic construction for people to understand. Further, programmers aren’t much better off, because they have little to which to connect the idea. Before I give a definition, I want to provide the example with which most people are familiar: modular (or clock) arithmetic. A typical way this is first presented is as a system where the numbers “wrap-around”. For example, in arithmetic mod #3#, we count #0#, #1#, #2#, and then wrap back around to #0#. Programmers are well aware that it’s not necessary to guarantee that an input to addition, subtraction, or multiplication mod #3# is either #0#, #1#, or #2#. Instead, the operation can be done and the mod function can be applied at the end. This will give the same result as applying the mod function to each argument at the beginning. For example, #4+7 = 11# and #11 mod 3 = 2#, and #4 mod 3 = 1# and #7 mod 3 = 1# and #1+1 = 2 = 11 mod 3#.

For mathematicians, the type of integers mod #n# is represented by the quotient type #ZZ//n ZZ#. The idea is that the values of #ZZ // n ZZ# are integers except that we agree that any two integers #a# and #b# are treated as equal if #a - b = kn# for some integer #k#. For #ZZ // 3 ZZ#, #… -6 = -3 = 0 = 3 = 6 = …# and #… = -5 = -2 = 1 = 4 = 7 = …# and #… = -4 = -1 = 2 = 5 = 8 = …#.

Equivalence Relations

To start to formalize this, we need the notion of an equivalence relation. An equivalence relation is a binary relation #(~~)# which is reflexive (#x ~~ x# for all #x#), symmetric (if #x ~~ y# then #y ~~ x#), and transitive (if #x ~~ y# and #y ~~ z# then #x ~~ z#). We can check that “#a ~~ b# iff there exists an integer #k# such that #a-b = kn#” defines an equivalence relation on the integers for any given #n#. For reflexivity we have #a - a = 0n#. For symmetry we have if #a - b = kn# then #b - a = -kn#. Finally, for transitivity we have if #a - b = k_1 n# and #b - c = k_2 n# then #a - c = (k_1 + k_2)n# which we get by adding the preceding two equations.

Any relation can be extended to an equivalence relation. This is called the reflexive-, symmetric-, transitive-closure of the relation. For an arbitrary binary relation #R# we can define the equivalence relation #(~~_R)# via “#a ~~_R b# iff #a = b# or #R(a, b)# or #b ~~_R a# or #a ~~_R c and c ~~_R b# for some #c#“. To be precise, #~~_R# is the smallest relation satisfying those constraints. In Datalog syntax, this looks like:

eq_r(A, A).
eq_r(A, B) :- r(A, B).
eq_r(A, B) :- eq_r(B, A).
eq_r(A, B) :- eq_r(A, C), eq_r(C, B).

Quotient Types: the Type Theory view

If #T# is a type, and #(~~)# is an equivalence relation, we use #T // ~~# as the notation for the quotient type, which we read as “#T# quotiented by the equivalence relation #(~~)#”. We call #T# the underlying type of the quotient type. We then say #a = b# at type #T // ~~# iff #a ~~ b#. Dual to subset types, to produce a value of a quotient type is easy. Any value of the underlying type is a value of the quotient type. (In type theory, this produces the perhaps surprising result that #ZZ# is a subtype of #ZZ // n ZZ#.) As expected, consuming a value of a quotient type is more complicated. To explain this, we need to explain what a function #f : T // ~~ -> X# is for some type #X#. A function #f : T // ~~ -> X# is a function #g : T -> X# which satisfies #g(a) = g(b)# for all #a# and #b# such that #a ~~ b#. We call #f# (or #g#, they are often conflated) well-defined if #g# satisfies this condition. In other words, any well-defined function that consumes a quotient type isn’t allowed to produce an output that distinguishes between equivalent inputs. A better way to understand this is that quotient types allow us to change what the notion of equality is for a type. From this perspective, a function being well-defined just means that it is a function. Taking equal inputs to equal outputs is one of the defining characteristics of a function.

Sometimes we can finesse needing to check the side condition. Any function #h : T -> B# gives rise to an equivalence relation on #T# via #a ~~ b# iff #h(a) = h(b)#. In this case, any function #g : B -> X# gives rise to a function #f : T // ~~ -> X# via #f = g @ h#. In particular, when #B = T# we are guaranteed to have a suitable #g# for any function #f : T // ~~ -> X#. In this case, we can implement quotient types in a manner quite similar to subset types, namely we make an abstract type and we normalize with the #h# function as we either produce or consume values of the abstract type. A common example of this is rational numbers. We can reduce a rational number to lowest terms either when it’s produced or when the numerator or denominator get accessed, so that we don’t accidentally write functions which distinguish between #1/2# and #2/4#. For modular arithmetic, the mod by #n# function is a suitable #h#.
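Here is a minimal Haskell sketch of this pattern for arithmetic mod #3# (names invented for illustration); the smart constructor plays the role of #h#, collapsing equivalent integers to a canonical representative:

```haskell
-- As with subset types, a real library would export only the type and
-- the smart constructor, not the Mod3 data constructor.
newtype Mod3 = Mod3 Int
  deriving (Eq, Show)

-- | The normalization function h: every integer is sent to its
-- canonical representative in {0, 1, 2}.
mkMod3 :: Int -> Mod3
mkMod3 n = Mod3 (n `mod` 3)

-- | Operations normalize their result, so well-definedness holds by
-- construction: equivalent inputs always yield equal outputs.
addMod3 :: Mod3 -> Mod3 -> Mod3
addMod3 (Mod3 a) (Mod3 b) = mkMod3 (a + b)
```

For example, mkMod3 4 and mkMod3 7 are equal, and adding them gives the same answer as reducing 11 mod 3.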

Quotient Types: the Set Theory view

In set theory such an #h# function can always be made by mapping the elements of #T# to the equivalence classes that contain them, i.e. #a# gets mapped to #{b | a ~~ b}# which is called the equivalence class of #a#. In fact, in set theory, #T // ~~# is usually defined to be the set of equivalence classes of #(~~)#. So, for the example of #ZZ // 3 ZZ#, in set theory, it is a set of exactly three elements: the elements are #{ 3n+k | n in ZZ}# for #k = 0, 1, 2#. The equivalence classes are said to partition the underlying set: every element belongs to exactly one equivalence class. Elements of these equivalence classes are called representatives of the equivalence class. Often a notation like #[a]# is used for the equivalence class of #a#.

More Examples

Here is a quick run-through of some significant applications of quotient types. I’ll give the underlying type and the equivalence relation and what the quotient type produces. I’ll leave it as an exercise to verify that the equivalence relations really are equivalence relations, i.e. reflexive, symmetric, and transitive. I’ll start with more basic examples. You should work through them to be sure you understand how they work.

Integers

Integers can be presented as pairs of naturals #(n, m)# with the idea being that the pair represents “#n - m#”. Of course, #1 - 2# should be the same as #2 - 3#. This is expressed as #(n_1, m_1) ~~ (n_2, m_2)# iff #n_1 + m_2 = n_2 + m_1#. Note how this definition only relies on operations on natural numbers. You can explore how to define addition, subtraction, multiplication, and other operations on this representation in a well-defined manner.
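A Haskell sketch of this representation (names invented; Int stands in for the naturals, with components assumed non-negative):

```haskell
-- An "integer" is a pair of naturals (n, m), read as n - m.
type PairInt = (Int, Int)

-- | The equivalence relation: (n1, m1) ~ (n2, m2) iff n1 + m2 == n2 + m1.
-- Note it only uses addition on naturals, never subtraction.
eqPairInt :: PairInt -> PairInt -> Bool
eqPairInt (n1, m1) (n2, m2) = n1 + m2 == n2 + m1

-- | Addition is componentwise; one can check it is well-defined,
-- i.e. it sends equivalent inputs to equivalent outputs.
addPairInt :: PairInt -> PairInt -> PairInt
addPairInt (n1, m1) (n2, m2) = (n1 + n2, m1 + m2)
```

For instance, (1,2) and (2,3) both represent -1, and addition respects that identification.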

Rationals

Rationals can be presented very similarly to integers, only with multiplication instead of addition. We also have pairs #(n, d)#, usually written #n/d#, in this case of an integer #n# and a non-zero natural #d#. The equivalence relation is #(n_1, d_1) ~~ (n_2, d_2)# iff #n_1 d_2 = n_2 d_1#.

(Topological) Circles

We can extend the integers mod #n# to the continuous case. Consider the real numbers with the equivalence relation #r ~~ s# iff #r - s = k# for some integer #k#. You could call this the reals mod #1#. Topologically, this is a circle. If you walk along it far enough, you end up back at a point equivalent to where you started. Occasionally this is written as #RR//ZZ#.

Tori

Doing the previous example in 2D gives a torus. Specifically, we have pairs of real numbers and the equivalence relation #(x_1, y_1) ~~ (x_2, y_2)# iff #x_1 - x_2 = k# and #y_1 - y_2 = l# for some integers #k# and #l#. Quite a bit of topology relies on similar constructions, as will be expanded upon in the section on gluing.

Unordered pairs

Here’s an example that’s a bit closer to programming. Consider the following equivalence relation on arbitrary pairs: #(a_1, b_1) ~~ (a_2, b_2)# iff #a_1 = a_2 and b_1 = b_2# or #a_1 = b_2 and b_1 = a_2#. This just says that a pair is equivalent to either itself, or a swapped version of itself. It’s interesting to consider what a well-defined function is on this type.1
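To make the exercise concrete, here is a hedged sketch (it partly anticipates footnote 1) contrasting a function that is well-defined on this quotient with one that is not:

```haskell
-- A function on unordered pairs must agree on (a, b) and (b, a).
-- This one is well-defined:
unorderedSum :: (Int, Int) -> Int
unorderedSum (a, b) = a + b

-- This one is NOT well-defined on the quotient: it distinguishes
-- (1, 2) from (2, 1), which the quotient identifies.
firstComponent :: (Int, Int) -> Int
firstComponent (a, _) = a
```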

Gluing / Pushouts

Returning to topology and doing a bit more involved construction, we arrive at gluing or pushouts. In topology, we often want to take two topological spaces and glue them together in some specified way. For example, we may want to take two discs and glue their boundaries together. This gives a sphere. We can combine two spaces into one with the disjoint sum (or coproduct, i.e. Haskell’s Either type.) This produces a space that contains both the input spaces, but they don’t interact in any way. You can visualize them as sitting next to each other but not touching. We now want to say that certain pairs of points, one from each of the spaces, are really the same point. That is, we want to quotient by an equivalence relation that would identify those points. We need some mechanism to specify which points we want to identify. One way to accomplish this is to have a pair of functions, #f : C -> A# and #g : C -> B#, where #A# and #B# are the spaces we want to glue together. We can then define a relation #R# on the disjoint sum via #R(a, b)# iff there’s a #c : C# such that #a = tt "inl"(f(c)) and b = tt "inr"(g(c))#. This is not an equivalence relation, but we can extend it to one. The quotient we get is then the gluing of #A# and #B# specified by #C# (or really by #f# and #g#). For our example of two discs, #f# and #g# are the same function, namely the inclusion of the boundary of the disc into the disc. We can also glue a space to itself. Just drop the disjoint sum part. Indeed, the circle and torus are examples.

Polynomial ring ideals

We write #RR[X]# for the type of polynomials with one indeterminate #X# with real coefficients. For two indeterminates, we write #RR[X, Y]#. Values of these types are just polynomials such as #X^2 + 1# or #X^2 + Y^2#. We can consider quotienting these types by equivalence relations generated from identifications like #X^2 + 1 ~~ 0# or #X^2 - Y ~~ 0#, but we want more than just the reflexive-, symmetric-, transitive-closure. We want this equivalence relation to also respect the operations we have on polynomials, in particular, addition and multiplication. More precisely, we want that if #a ~~ b# and #c ~~ d# then #ac ~~ bd# and similarly for addition. An equivalence relation that respects all operations is called a congruence. The standard notation for the quotient of #RR[X, Y]# by a congruence generated by both of the previous identifications is #RR[X, Y]//(X^2 + 1, X^2 - Y)#. Now if #X^2 + 1 = 0# in #RR[X, Y]//(X^2 + 1, X^2 - Y)#, then for any polynomial #P(X, Y)#, we have #P(X, Y)(X^2 + 1) = 0# because #0# times anything is #0#. Similarly, for any polynomial #Q(X, Y)#, #Q(X, Y)(X^2 - Y) = 0#. Of course, #0 + 0 = 0#, so it must be the case that #P(X, Y)(X^2 + 1) + Q(X, Y)(X^2 - Y) = 0# for all polynomials #P# and #Q#. In fact, we can show that all elements in the equivalence class of #0# are of this form. You’ve now motivated the concrete definition of a ring ideal and given its significance. An ideal is an equivalence class of #0# with respect to some congruence. Let’s work out what #RR[X, Y]//(X^2 + 1, X^2 - Y)# looks like concretely. First, since #X^2 - Y = 0#, we have #Y = X^2# and so we see that values of #RR[X, Y]//(X^2 + 1, X^2 - Y)# will be polynomials in only one indeterminate because we can replace all #Y#s with #X^2#s. Since #X^2 = -1#, we can see that all those polynomials will be linear (i.e. of degree 1) because we can just keep replacing #X^2#s with #-1#s, i.e. #X^(n+2) = X^n X^2 = -X^n#. 
The end result is that an arbitrary polynomial in #RR[X, Y]//(X^2 + 1, X^2 - Y)# looks like #a + bX# for real numbers #a# and #b# and we have #X^2 = -1#. In other words, #RR[X, Y]//(X^2 + 1, X^2 - Y)# is isomorphic to the complex numbers, #CC#.

As a reasonably simple exercise, given a polynomial #P(X) : RR[X]#, what does it get mapped to when embedded into #RR[X]//(X - 3)#, i.e. what is #[P(X)] : RR[X]//(X - 3)#?2

Free algebras modulo an equational theory

Moving much closer to programming, we have a rather broad and important example that a mathematician might describe as free algebras modulo an equational theory. This example covers several of the preceding examples. In programmer-speak, a free algebra is just a type of abstract syntax trees for some language. We’ll call a specific abstract syntax tree a term. An equational theory is just a collection of pairs of terms with the idea being that we’d like these terms to be considered equal. To be a bit more precise, we will actually allow terms to contain (meta)variables. An example equation for an expression language might be Add(#x#,#x#) = Mul(2,#x#). We call a term with no variables a ground term. We say a ground term matches another term if there is a consistent substitution for the variables that makes the latter term syntactically equal to the ground term. E.g. Add(3, 3) matches Add(#x#,#x#) via the substitution #x |->#3. Now, the equations of our equational theory give rise to a relation on ground terms #R(t_1, t_2)# iff there exists an equation #l = r# such that #t_1# matches #l# and #t_2# matches #r#. This relation can be extended to an equivalence relation on ground terms, and we can then quotient by that equivalence relation.

Let’s consider a worked example. We can consider the theory of monoids. We have two operations (types of AST nodes): Mul(#x#,#y#) and 1. We have the following three equations: Mul(1,#x#) =#x#, Mul(#x#, 1) =#x#, and Mul(Mul(#x#,#y#),#z#) = Mul(#x#, Mul(#y#,#z#)). We additionally have a bunch of constants subject to no equations. In this case, it turns out we can define a normalization function, what I called #h# far above, and that the quotient type is isomorphic to lists of constants. Now, we can extend this theory to the theory of groups by adding a new operation, Inv(#x#), and new equations: Inv(Inv(#x#)) =#x#, Inv(Mul(#x#,#y#)) = Mul(Inv(#y#), Inv(#x#)), and Mul(Inv(#x#),#x#) = 1. If we ignore the last of these equations, you can show that we can normalize to a form that is isomorphic to a list of a disjoint sum of the constants, i.e. [Either Const Const] in Haskell if Const were the type of the constant terms. Quotienting this type by the equivalence relation extended with that final equality, corresponds to adding the rule that a Left c cancels out Right c in the list whenever they are adjacent.
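The monoid half of this example fits in a few lines of Haskell (names invented for illustration); the normalization function is the #h# from earlier, and its image is exactly lists of constants:

```haskell
-- Abstract syntax trees for the theory of monoids over some constants.
data Term = One | Mul Term Term | Const String

-- | The normalization function: two ground terms are equal in the
-- quotient iff they normalize to the same list of constants. Units
-- disappear and nesting is flattened, so the unit and associativity
-- equations hold automatically on the normal forms.
normalize :: Term -> [String]
normalize One       = []
normalize (Const c) = [c]
normalize (Mul x y) = normalize x ++ normalize y
```

For instance, Mul One (Mul (Const "a") (Const "b")) and Mul (Mul (Const "a") (Const "b")) One both normalize to ["a","b"].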

This overall example is a fairly profound one. Almost all of abstract algebra can be viewed as an instance of this or a closely related variation. When you hear about things defined in terms of “generators and relators”, it is an example of this sort. Indeed, those “relators” are used to define a relation that will be extended to an equivalence relation. Being defined in this way is arguably what it means for something to be “algebraic”.

Postscript

The Introduction to Type Theory section of the NuPRL book provides a more comprehensive and somewhat more formal presentation of these and related concepts. While the quotient type view of quotients is conceptually different from the standard set theoretic presentation, it is much more amenable to computation as the #ZZ // n ZZ# example begins to illustrate.

  1. It’s a commutative function.

  2. It gets mapped to its value at #3#, i.e. #P(3)#.

Categories: Offsite Blogs

Michael Snoyman: Proposed conduit reskin

Planet Haskell - Thu, 09/22/2016 - 6:00pm

In a few different conversations I've had with people, the idea of reskinning some of the surface syntax of the conduit library has come up, and I wanted to share the idea here. I call this "reskinning" since all of the core functionality of conduit would remain unchanged in this proposal, we'd just be changing operators and functions a bit.

The idea here is: conduit borrowed the operator syntax of $$, =$ and $= from enumerator, and it made sense at the beginning of its lifecycle. However, for quite a while now conduit has evolved to the point of having a unified type for Sources, Conduits, and Sinks, and the disparity of operators adds more confusion than it may be worth. So without further ado, let's compare a few examples of conduit usage between the current skin:

import Conduit
import qualified Data.Conduit.Binary as CB

main :: IO ()
main = do
    -- copy files
    runResourceT $ CB.sourceFile "source.txt" $$ sinkFile "dest.txt"

    -- sum some numbers
    print $ runIdentity $ enumFromToC 1 100 $$ sumC

    -- print a bunch of numbers
    enumFromToC 1 100 $$ mapC (* 2) =$ takeWhileC (< 100) =$ mapM_C print

With a proposed reskin:

import Conduit2
import qualified Data.Conduit.Binary as CB

main :: IO ()
main = do
    -- copy files
    runConduitRes $ CB.sourceFile "source.txt" .| sinkFile "dest.txt"

    -- sum some numbers
    print $ runConduitPure $ enumFromToC 1 100 .| sumC

    -- print a bunch of numbers
    runConduit $ enumFromToC 1 100 .| mapC (* 2) .| takeWhileC (< 100) .| mapM_C print

This reskin is easily defined with this module:

{-# LANGUAGE FlexibleContexts #-}
module Conduit2
    ( module Conduit
    , module Conduit2
    ) where

import Conduit hiding (($$), (=$), ($=), (=$=))
import Data.Void (Void)

infixr 2 .|
(.|) :: Monad m
     => ConduitM a b m ()
     -> ConduitM b c m r
     -> ConduitM a c m r
(.|) = fuse

runConduitPure :: ConduitM () Void Identity r -> r
runConduitPure = runIdentity . runConduit

runConduitRes :: MonadBaseControl IO m
              => ConduitM () Void (ResourceT m) r
              -> m r
runConduitRes = runResourceT . runConduit

To put this in words:

  • Replace the $=, =$, and =$= operators - which are all synonyms of each other - with the .| operator. This borrows intuition from the Unix shell, where the pipe operator denotes piping data from one process to another. The analogy holds really well for conduit, so why not borrow it? (We call all of these operators "fusion.")
  • Get rid of the $$ operator - also known as the "connect" or "fuse-and-run" operator - entirely. Instead of having this two-in-one action, separate it into .| and runConduit. The advantage is that no one needs to think about whether to use .| or $$, as happens today. (Note that runConduit is available in the conduit library today, it's just not very well promoted.)
  • Now that runConduit is a first-class citizen, add in some helper functions for two common use cases: running with ResourceT and running a pure conduit.

The goals here are to improve consistency, readability, and intuition about the library. Of course, there are some downsides:

  • There's a slight performance advantage (not benchmarked recently unfortunately) to foo $$ bar versus runConduit $ foo =$= bar, since the former combines both sets of actions into one. We may be able to gain some of this back with GHC rewrite rules, but my experience with rewrite rules in conduit has been less than reliable.
  • Inertia: there's a lot of code and material out there using the current set of operators. While we don't need to ever remove (or even deprecate) the current operators, having two ways of writing conduit code in the wild can be confusing.
  • Conflicting operator: doing a quick Hoogle search reveals that the parallel package already uses .|. We could choose a different operator instead (|. for instance seems unclaimed), but generally I get nervous any time I'm defining new operators.
  • For simple cases like source $$ sink, code is now quite a few keystrokes longer: runConduit $ source .| sink.

Code wise, this is a trivial change to implement. Updating docs to follow this new convention wouldn't be too difficult either. The question is: is this a good idea?

Categories: Offsite Blogs

Automating Ad hoc Data Representation Transformations

Lambda the Ultimate - Thu, 09/22/2016 - 12:29pm

Automating Ad hoc Data Representation Transformations by Vlad Ureche, Aggelos Biboudis, Yannis Smaragdakis, and Martin Odersky:

To maximize run-time performance, programmers often specialize their code by hand, replacing library collections and containers by custom objects in which data is restructured for efficient access. However, changing the data representation is a tedious and error-prone process that makes it hard to test, maintain and evolve the source code.

We present an automated and composable mechanism that allows programmers to safely change the data representation in delimited scopes containing anything from expressions to entire class definitions. To achieve this, programmers define a transformation and our mechanism automatically and transparently applies it during compilation, eliminating the need to manually change the source code.

Our technique leverages the type system in order to offer correctness guarantees on the transformation and its interaction with object-oriented language features, such as dynamic dispatch, inheritance and generics.

We have embedded this technique in a Scala compiler plugin and used it in four very different transformations, ranging from improving the data layout and encoding, to retrofitting specialization and value class status, and all the way to collection deforestation. On our benchmarks, the technique obtained speedups between 1.8x and 24.5x.

This is a realization of an idea that has been briefly discussed here on LtU a few times, whereby a program is written using high-level representations, and the user has the option to provide a lowering to a more efficient representation after the fact.

This contrasts with the typical approach of providing efficient primitives, like primitive unboxed values, and leaving it to the programmer to compose them efficiently up front.

Categories: Offsite Discussion

Philip Wadler: Lambdaman, supporting Bootstrap

Planet Haskell - Wed, 09/21/2016 - 6:16pm

After watching talks or videos of Propositions as Types, folk ask me how they can get their own Lambdaman t-shirt. In the past, I tried to make the design available through various services, but they always rejected it as a copyright violation. (It's not; it's fair use.) Thanks to a little help from my friends, CustomInk has agreed to print the design as a Booster. Sign up now; orders will be printed on October 15. Any profits (there will be more if there is a bigger order) go to Bootstrap, an organisation run by Shriram Krishnamurthi, Matthias Felleisen, and the PLT group that teaches functional programming to middle and high school students. Orders have already surpassed our goal of fifty shirts!
Categories: Offsite Blogs

FP Complete: Practical Haskell: Simple File Mirror (Part 2)

Planet Haskell - Wed, 09/21/2016 - 6:00am

This is part 2 of a three part series. If you haven't seen it already, I'd recommend starting with the first part, which covers communication protocols and streaming of data. This second part will cover network communication and some basic concurrency in Haskell.

Simple HTTP client

We saw previously how to send and receive binary data using the conduit library. We're going to build on this with a conduit-aware network library. This first example will make a very simplistic, hard-coded HTTP request and send the entire response from the server to standard output.

#!/usr/bin/env stack
-- stack --resolver nightly-2016-09-10 --install-ghc runghc --package classy-prelude-conduit
{-# LANGUAGE NoImplicitPrelude #-}
{-# LANGUAGE OverloadedStrings #-}
import ClassyPrelude.Conduit
import Data.Conduit.Network (runTCPClient, appSource, appSink, clientSettings)

main :: IO ()
main = runTCPClient settings $ \appData -> do
    yield request $$ appSink appData
    appSource appData $$ stdoutC
  where
    settings = clientSettings 80 "httpbin.org"

request :: ByteString
request = encodeUtf8 $ unlines
    [ "GET /get?foo=bar&baz=bin HTTP/1.1"
    , "Host: httpbin.org"
    , "User-Agent: Practical Haskell"
    , "Connection: close"
    , ""
    ]

The runTCPClient creates the actual TCP connection, and provides access to it via the appData value. This value allows us to send data to the server (via appSink) and get data from the server (via appSource). We can also get information about the connection such as the locally used port number, which we're not using in this example.

We've hard-coded a settings value that states we should connect to host httpbin.org* on port 80. We've also hard-coded an HTTP request body, which is thoroughly uninteresting.

Once our connection has been established, we send our hard-coded request to the server with yield request $$ appSink appData. When that's complete, we stream all data from the server to standard output with appSource appData $$ stdoutC.

The output from this looks very much like you'd expect it to:

HTTP/1.1 200 OK
Server: nginx
Date: Wed, 21 Sep 2016 07:38:30 GMT
Content-Type: application/json
Content-Length: 224
Connection: close
Access-Control-Allow-Origin: *
Access-Control-Allow-Credentials: true

{
  "args": {
    "baz": "bin",
    "foo": "bar"
  },
  "headers": {
    "Host": "httpbin.org",
    "User-Agent": "Practical Haskell"
  },
  "origin": "31.210.186.0",
  "url": "http://httpbin.org/get?foo=bar&baz=bin"
}

* Side note: anyone playing with HTTP client software should definitely check out httpbin.org, it's a great resource.

Upgrading to TLS

On a small tangent, it's trivial to adapt the above program to work over secure HTTPS instead of plaintext HTTP. All we need to do is:

  • Use the Data.Conduit.Network.TLS module from the network-conduit-tls library
  • Swap runTLSClient for runTCPClient, and tlsClientConfig for clientSettings
  • Change port 80 to port 443

The code looks as follows. To convince yourself that this is real: go ahead and run it and see what the url value in the response body looks like.

#!/usr/bin/env stack
{- stack --resolver nightly-2016-09-10 --install-ghc runghc
   --package classy-prelude-conduit
   --package network-conduit-tls
-}
{-# LANGUAGE NoImplicitPrelude #-}
{-# LANGUAGE OverloadedStrings #-}
import ClassyPrelude.Conduit
import Data.Conduit.Network (appSink, appSource)
import Data.Conduit.Network.TLS (runTLSClient, tlsClientConfig)

main :: IO ()
main = runTLSClient settings $ \appData -> do
    yield request $$ appSink appData
    appSource appData $$ stdoutC
  where
    settings = tlsClientConfig 443 "httpbin.org"

request :: ByteString
request = encodeUtf8 $ unlines
    [ "GET /get?foo=bar&baz=bin HTTP/1.1"
    , "Host: httpbin.org"
    , "User-Agent: Practical Haskell"
    , "Connection: close"
    , ""
    ]

Echo server

Let's play with the server side of things. We're going to implement an echo server, which will receive a chunk of data from the client and then send it right back.

#!/usr/bin/env stack
-- stack --resolver nightly-2016-09-10 --install-ghc runghc --package classy-prelude-conduit
{-# LANGUAGE NoImplicitPrelude #-}
{-# LANGUAGE OverloadedStrings #-}
import ClassyPrelude.Conduit
import Data.Conduit.Network (appSink, appSource, runTCPServer, serverSettings)

main :: IO ()
main = runTCPServer settings $ \appData ->
    appSource appData $$ appSink appData
  where
    settings = serverSettings 4200 "*"

This listens on port 4200, on all network interfaces ("*"). We start our server with runTCPServer, which grabs a listening socket and waits for connections. For each connection, it forks a new thread, and runs the provided application. In this case, our application is trivial: we connect the source to the sink, automatically piping data from the connection back to itself.

To stress a point above: this is a fully multithreaded server application. You can make multiple telnet connections to the server and interact with each of them independently. This is a lot of bang for very little buck.

For those of you concerned about the inefficiency of forking a new thread for each incoming connection: Haskell's runtime is built on top of green threads, making the act of forking very cheap. There are more details available in a talk I gave on "Haskell for fast, concurrent, robust services" (relevant slide and video link).

Full duplex

The examples so far have all been half duplex, meaning they have always been either sending or receiving data. Let's implement a full duplex application: a simple telnet client replacement. We need to wait for any input from standard input, while at the same time waiting for any input from the socket. We're going to take advantage of Haskell threading to handle this case too:

#!/usr/bin/env stack
-- stack --resolver nightly-2016-09-10 --install-ghc runghc --package classy-prelude-conduit
{-# LANGUAGE NoImplicitPrelude #-}
{-# LANGUAGE OverloadedStrings #-}
import ClassyPrelude.Conduit
import Data.Conduit.Network (appSink, appSource, runTCPClient, clientSettings)

main :: IO ()
main = runTCPClient settings $ \appData -> race_
    (stdinC $$ appSink appData)
    (appSource appData $$ stdoutC)
  where
    settings = clientSettings 4200 "localhost"

The race_ function is a wonderful helper for concurrency, which says "run these two actions, see which one finishes first, kill the other one, and ignore any results (the _ at the end of the name)." It has a sibling function, concurrently, for running two things until they both complete. You can implement a surprisingly large number of common concurrency solutions using just these two functions. For more information, see the library package tutorial on haskell-lang.org.
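As a tiny standalone illustration of the difference between the two combinators (a sketch using the async package directly; the classy-prelude used in this series re-exports the same functions):

```haskell
import Control.Concurrent (threadDelay)
import Control.Concurrent.Async (concurrently, race_)

main :: IO ()
main = do
  -- race_: runs both actions, returns as soon as the faster one
  -- finishes (here after roughly one second), and kills the other.
  race_ (threadDelay 1000000 >> putStrLn "fast finished")
        (threadDelay 5000000 >> putStrLn "slow finished")
  -- concurrently: runs both actions to completion and returns
  -- both results as a pair.
  (a, b) <- concurrently (return (1 :: Int)) (return "done")
  print (a, b)
```

Only "fast finished" is printed by the first half, since the five-second action is cancelled before it gets a chance to speak.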

You may be terrified of the performance characteristics of this: we've introduced two blocking threads, when theoretically callback-based I/O would be far more efficient! Not to worry: in Haskell, the runtime system uses a fully callback based system under the surface, using whatever system calls are relevant for your operating system. When a Haskell green thread makes a "blocking" I/O call, what actually happens is the runtime puts the thread to sleep, installs a callback handler to wait for data to be available, and when the callback is triggered, wakes the green thread up again.

The details of the Haskell runtime are well described in the paper Mio: A High-Performance Multicore IO Manager for GHC. Fortunately, for most real world cases, you can write the naive, easy-to-conceptualize I/O operations based on blocking semantics, and automatically get the great performance you'd want from event/callback based system calls.

Client and server in same process

Just to prove that we can: let's throw our client and server into a single process, using the same concurrency approach we've had until now.

#!/usr/bin/env stack
-- stack --resolver nightly-2016-09-10 --install-ghc runghc --package classy-prelude-conduit
{-# LANGUAGE NoImplicitPrelude #-}
{-# LANGUAGE OverloadedStrings #-}
import ClassyPrelude.Conduit
import Data.Conduit.Network (appSink, appSource, runTCPClient, clientSettings, runTCPServer, serverSettings)

main :: IO ()
main = race_ server client

server :: IO ()
server = runTCPServer settings $ \appData ->
    appSource appData $$ appSink appData
  where
    settings = serverSettings 4200 "*"

client :: IO ()
client = do
    -- Sleep for 1 second (1 million microseconds) to give the server a
    -- chance to start up. There are definitely better ways to do
    -- this, but this is good enough for our example.
    threadDelay 1000000
    runTCPClient settings $ \appData -> race_
        (stdinC $$ appSink appData)
        (appSource appData $$ stdoutC)
  where
    settings = clientSettings 4200 "localhost"

This isn't a particularly useful application (stdinC $$ stdoutC would do the same thing without wasting a network connection), but it does show how easy it is to combine various pieces of code in Haskell for concurrent applications.

Next time on Practical Haskell

We've so far figured out how to deal with our simple file mirror's communication protocol, and how to do network communication. All that's left is combining these two things together and wrapping it up with a command line interface. Stay tuned!

Categories: Offsite Blogs

Brent Yorgey: The generic-random library, part 1: simple generic Arbitrary instances

Planet Haskell - Tue, 09/20/2016 - 4:27pm

In a previous post I pointed out that we know all the theory to make nice, principled, practical random generators for recursive algebraic data types; someone just needed to step up and do the work. Well, Li-yao Xia took up the challenge and produced a brilliant package, generic-random, available on Hackage right now for you to use!

However, although the package does include some Haddock documentation, it is probably difficult for someone with no experience or background in this area to navigate. So I thought it would be worth writing a few blog posts by way of a tutorial and introduction to the package.

> {-# LANGUAGE GADTSyntax           #-}
> {-# LANGUAGE DeriveGeneric        #-}
> {-# LANGUAGE FlexibleContexts     #-}
> {-# LANGUAGE UndecidableInstances #-}
>
> import GHC.Generics
> import Test.QuickCheck
>
> import Generic.Random.Generic

The problem

First, a quick recap of the problem we are trying to solve: the obvious, naive way of generating random instances of some recursive algebraic data type often produces really terrible distributions. For example, one might generate really tiny structures most of the time and then occasionally generate a humongous one. For more background on the problem, see this post or this one.

A first example: generating generic Arbitrary instances

As a first example, consider the following algebraic data type:

> data Foo where
>   Bar  :: Char -> Int -> String -> Foo
>   Baz  :: Bool -> Bool -> Foo
>   Quux :: [Woz] -> Foo
>   deriving (Show, Generic)
>
> data Woz where
>   Wiz :: Int -> Woz
>   Waz :: Bool -> Woz
>   deriving (Show, Generic)

You have probably noticed by now that this is not recursive (well, except for the embedded lists). Patience! We’ll get to recursive ADTs in due time, but it turns out the library has some nice things to offer for non-recursive ADTs as well, and it makes for an easier introduction.

Now, suppose we wanted to use QuickCheck to test some properties of a function that takes a Foo as an argument. We can easily make our own instances of Arbitrary for Foo and Woz, like so:

instance Arbitrary Foo where
  arbitrary = oneof
    [ Bar <$> arbitrary <*> arbitrary <*> arbitrary
    , Baz <$> arbitrary <*> arbitrary
    , Quux <$> arbitrary
    ]

instance Arbitrary Woz where
  arbitrary = oneof
    [ Wiz <$> arbitrary
    , Waz <$> arbitrary
    ]

This works reasonably well:

λ> sample (arbitrary :: Gen Foo)
Baz True True
Baz False True
Baz True True
Quux []
Baz False True
Bar '<' 3 "zy\\\SOHpO_"
Baz False True
Bar '\SOH' 0 "\"g\NAKm"
Bar 'h' (-9) "(t"
Quux [Wiz (-2),Waz False]
Baz False True

The only problem is that writing those instances is quite tedious. There is no thought required at all. Isn’t this exactly the sort of thing that is supposed to be automated with generic programming?

Why yes, yes it is. And the generic-random package can do exactly that. Notice that we have derived Generic for Foo and Woz. We can now use the genericArbitrary function from Generic.Random.Generic to derive completely standard Arbitrary instances, just like the ones we wrote above:

> instance Arbitrary Foo where
>   arbitrary = genericArbitrary
>
> instance Arbitrary Woz where
>   arbitrary = genericArbitrary

λ> sample (arbitrary :: Gen Foo)
Quux []
Bar '\159' (-2) ""
Baz True True
Baz False False
Baz True True
Baz True False
Quux [Wiz 9,Wiz 7,Waz True,Waz True,Waz False]
Quux [Wiz (-10),Waz False,Waz False,Waz True,Waz True,Wiz (-14),Wiz 13,Waz True,Wiz (-8),Wiz 12,Wiz (-13)]
Bar '\130' 10 "FN\222j?\b=\237(\NULW\231+ts\245"
Bar 'n' 14 ""
Bar '\205' 4 "\SYN"

Seems about the same, except we wrote way less code! Huzzah!

If we want certain constructors to occur more frequently, we can also control that using genericArbitraryFrequency, which takes a list of Ints (each Int specifies the weight for one constructor).
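For instance, a weighted version of the Foo instance might look like this (a sketch: the weights below are made up, and are matched to Bar, Baz, and Quux in declaration order, as described in this post):

```haskell
-- Hypothetical weights: Bar and Baz come up often, Quux rarely.
instance Arbitrary Foo where
  arbitrary = genericArbitraryFrequency [4, 4, 1]
```

Sampling from this generator should produce roughly four Bar values and four Baz values for every Quux.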

A few notes:

  • Using the Generic.Random.Generic module is the quickest and simplest way to generate random instances of your data type, if it works for your use case.

  • It has some limitations, namely:

    • It only generates Arbitrary instances for QuickCheck. It can’t create more general random generators.

    • It probably won’t work very well for recursive data types.

However, these limitations are addressed by other parts of the library. Intrigued? Read on!

Recursive types, the simple way

Let’s now consider a simple recursive type:

> data Tree a where
>   Leaf   :: a -> Tree a
>   Branch :: Tree a -> Tree a -> Tree a
>   deriving (Show, Generic)
>
> treeSize :: Tree a -> Int
> treeSize (Leaf _)     = 1
> treeSize (Branch l r) = 1 + treeSize l + treeSize r

We can try using genericArbitrary:

instance Arbitrary a => Arbitrary (Tree a) where
  arbitrary = genericArbitrary

The problem is that this tends to generate some tiny trees and some enormous trees, with not much in between:

λ> map treeSize <$> replicateM 50 (generate (arbitrary :: Gen (Tree Int)))
[1,1,1,269,1,1,1,1,1,11,7,3,5,1,1,1,7,1,1,1,3,3,83,5,1,1,3,111,265,47,1,3,19,1,11,1,5,3,15,15,1,91,1,13,4097,119,1,15,5,3]

And this is not a problem specific to trees; this kind of thing is likely to happen for any recursive type.

Before we get to more interesting/complicated tools, it’s worth noting that generic-random provides a simple mechanism to limit the size of the generated structures: the genericArbitrary' function works like genericArbitrary but uses QuickCheck’s sized mechanism to cut off the recursion when it gets too big. The available size is partitioned among recursive calls, so it does not suffer from the exponential growth you might see if only the depth was limited. When the size counter reaches zero, the generator tries to terminate the recursion by picking some finite, non-recursive value(s). The parameter to genericArbitrary' is a natural number specifying how deep the finite, recursion-terminating values can be. Z (i.e. zero) means the generator will only be willing to terminate the recursion with nullary constructors. In our case, Tree does not have any nullary constructors, so we should not use Z: if we do, the generator will be unable to terminate the recursion when the size reaches zero and we will get the same behavior as genericArbitrary. Instead, we should use S Z, which means it will be able to pick the depth-1 term Leaf x (for some arbitrary x) to terminate the recursion.

Let’s try it:

> instance (Arbitrary a, Generic a, BaseCases Z (Rep a)) => Arbitrary (Tree a) where
>   arbitrary = genericArbitrary' (S Z)

λ> sample (arbitrary :: Gen (Tree Int))
Leaf 0
Branch (Leaf 0) (Branch (Leaf 0) (Branch (Leaf 0) (Leaf 0)))
Branch (Leaf (-1)) (Leaf 1)
Leaf (-3)
Leaf 7
Branch (Leaf (-4)) (Branch (Branch (Leaf 1) (Leaf (-1))) (Leaf (-1)))
Branch (Leaf (-2)) (Branch (Leaf 1) (Branch (Leaf 0) (Branch (Leaf 0) (Leaf 0))))
Leaf 14
Branch (Branch (Leaf 2) (Leaf 2)) (Branch (Branch (Branch (Leaf 1) (Branch (Branch (Leaf 0) (Branch (Leaf 0) (Leaf 0))) (Branch (Leaf 0) (Leaf 0)))) (Branch (Branch (Branch (Leaf 0) (Leaf 0)) (Leaf 0)) (Leaf 0))) (Leaf (-3)))
Leaf 4
Leaf 9

Ah, that’s much better.

Finally, genericArbitraryFrequency' is the same as genericArbitraryFrequency but limits the recursion depth as genericArbitrary' does.

If you have a recursive data type you want to use with QuickCheck, it’s worth trying this, since it is quick and simple. The main problem with this approach is that it does not generate a uniform distribution of values. (Also, it is limited in that it is specifically tied to QuickCheck.) In this example, although you can’t necessarily tell just by looking at the sample random trees, I guarantee you that some kinds of trees are much more likely to be generated than others. (Though I couldn’t necessarily tell you which kinds.) This can be bad if the specific trees that will trigger a bug are in fact unlikely to be generated.

Next time, we’ll look at how we can actually have efficient, size-limited, uniform random generators using Boltzmann samplers.


Categories: Offsite Blogs

Roman Cheplyaka: How to prepare a good pull request

Planet Haskell - Sun, 09/18/2016 - 2:00pm
  1. A pull request should have a specific goal and have a descriptive title. Do not put multiple unrelated changes in a single pull request.

  2. Do not include any changes that are irrelevant to the goal of the pull request.

    This includes refactoring or reformatting unrelated code and changing or adding auxiliary files (.gitignore, .travis.yml etc.) in a way that is not related to your main changes.

  3. Make logical, not historical commits.

    Before you submit your work for review, you should rebase your branch (git rebase -i) and regroup your changes into logical commits.

    Logical commits achieve different parts of the pull request goal. Each commit should have a descriptive commit message. Logical commits within a single pull request rarely overlap with each other in terms of the lines of code they touch.

    If you want to amend your pull request, I’d rather you rewrite the branch and force-push it instead of adding new (historical) commits or creating a new pull request. Note, however, that other maintainers may disagree with me on this one.

  4. Make clean commits. Run git diff or git show on your commits. It will show you issues like trailing whitespace or missing newlines at the end of the file.

.gitignore

My .gitignore policy is that the project-specific .gitignore file should only contain patterns specific for this project. For instance, if a test suite generates files *.out, this pattern belongs to the project’s .gitignore.

If a pattern is standard across a wide range of projects (e.g. *.o, or .stack-work for Haskell projects), then it belongs to the user-specific ~/.gitignore.

stack.yaml

(This section is specific to Haskell.)

My policy is to track stack.yaml inside the repo for applications, but not for libraries.

The rationale is that for an application, stack.yaml provides a useful bit of metainformation: which snapshot the app is guaranteed to build with. Additionally, non-programmers (or non-Haskell programmers) may want to install the application, and the presence of stack.yaml makes it easy for them.

These benefits do not apply to libraries. And the cost of including stack.yaml is:

  • The snapshot version gets out of date quickly, so you need to update this file regularly.
  • This file is often changed temporarily (e.g. to test a specific version of a dependency), and if it is tracked, you need to pay attention not to commit those changes by accident.
Categories: Offsite Blogs

Tom Schrijvers: Doctoral or Post-Doctoral Position in Programming Languages Theory & Implementation

Planet Haskell - Fri, 09/16/2016 - 7:08pm
I am looking for a new member to join my research team in either a doctoral or post-doctoral position.

You can find more details here.
Categories: Offsite Blogs

wren gayle romano: Visiting Nara over the next week

Planet Haskell - Thu, 09/15/2016 - 10:39pm

I announced this on twitter a while back, but tomorrow I'm flying out to Nara Japan. I'll be out there all week for ICFP and all that jazz. It's been about a decade since last time I was in the Kansai region, and I can't wait. As I've done in the past, if you want to meet up for lunch or dinner, just comment below (or shoot me a tweet, email, etc).



comments
Categories: Offsite Blogs

Douglas M. Auclair (geophf): August 2016 1HaskellADay 1Liners

Planet Haskell - Thu, 09/15/2016 - 8:29am
  • August 20th, 2016: maybeify :: (a, Maybe b) -> Maybe (a, b)
    Define maybeify. Snaps for elegance.
    • Hardy Jones @st58 sequence
    • Bruno @Brun0Cad mapM id
    • Thomas D @tthomasdd {-# LANGUAGE TupleSections #-}
      mabeify (x,mY) = maybe Nothing (return . (x,)) mY
    • Андреев Кирилл @nonaem00 import "category-extras" Control.Functor.Strong
      maybeify = uncurry strength
    • bazzargh @bazzargh I can't beat 'sequence', but: uncurry (fmap.(,))
    • Nick @crazy_fizruk distribute (from Data.Distributive)
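These one-liners all denote the same function; here is a quick sketch (using the sequence solution, which relies on the Traversable instance for pairs) to convince yourself of the behaviour:

```haskell
maybeify :: (a, Maybe b) -> Maybe (a, b)
maybeify = sequence  -- Traversable instance for ((,) a)

main :: IO ()
main = do
  print (maybeify ('x', Just 1))                -- Just ('x',1)
  print (maybeify ('x', Nothing :: Maybe Int))  -- Nothing
```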
Categories: Offsite Blogs

Brent Yorgey: Meeting people at ICFP in Nara

Planet Haskell - Thu, 09/15/2016 - 7:53am

In less than 24 hours I’m getting on a plane to Japan (well, technically, Dallas, but I’ll get to Japan eventually). As I did last year, I’m making an open offer here: leave a comment on this post, and I will make a point of finding and meeting you sometime during the week! One person took me up on the offer last year and we had a nice chat over dinner.


Categories: Offsite Blogs

Manuel M T Chakravarty: This is the video of my Compose :: Melbourne keynote. I am...

Planet Haskell - Wed, 09/14/2016 - 9:00pm

This is the video of my Compose :: Melbourne keynote. I am making the case for purely functional graphics programming with Haskell playgrounds, including live programming a little game (for the second half of the talk).

Some of the code is hard to read in the video; you may like to refer to the slides.

Categories: Offsite Blogs

Jens Petersen: Stackage LTS 7 is released

Planet Haskell - Wed, 09/14/2016 - 12:01pm
The Stackage curator team is pleased to announce the initial release of Stackage LTS 7 for ghc-8.0.1. Released over 3.5 months after LTS 6.0, the biggest change introduced with LTS 7.0 is the move from ghc-7.10.3 to ghc-8.0.1.  There is also the usual large number of (major) version number bumps: in order to achieve this we used a new one-time pruning step in Stackage Nightly development dropping all packages with constraining Stackage upper-bounds.  Nevertheless LTS 7.0 has 1986 packages which is almost as many as LTS 6.0 with 1994 packages (latest 6.17 at the time of writing has 2002 packages) thanks to all the work of the community of Haskell maintainers. We are going to do the major pruning step earlier for current Stackage Nightly now that LTS 7 is branched from Nightly, which will give package maintainers even more time to get ready for LTS 8.
Categories: Offsite Blogs

The haskell-lang.org team: Updates for September 14, 2016

Planet Haskell - Wed, 09/14/2016 - 10:00am

The biggest update to the site is the addition of three new targeted "next steps" tutorials on the get started page for using Stack, aimed at Play (using the REPL), Script (single-file programs), and Build (full projects). Hopefully this will help people with different goals all get started with Haskell quickly.

In addition, we have included a few new tutorials:

Plus a few other minor edits throughout the site.

The complete diff can be found here.

Categories: Offsite Blogs

Jan Stolarek: Moving to University of Edinburgh

Planet Haskell - Wed, 09/14/2016 - 6:45am

I wanted to let you all know that after working for 8 years as a Lecturer at the Institute of Information Technology (Lodz University of Technology, Poland), I have received a sabbatical leave to focus solely on research. Yesterday I began my work as a Research Associate at the Laboratory for Foundations of Computer Science, University of Edinburgh. This is a two-year post-doc position. I will be part of the team working on the Skye project under supervision of James Cheney. This means that from now on I will mostly focus on developing the Links programming language.

Categories: Offsite Blogs