News aggregator

CFP: IEEE International Conference on Cloud and Autonomic Computing (CAC 2015)

haskell-cafe - Wed, 04/29/2015 - 10:15pm
[ Less than two weeks to deadline. Apologies if you receive multiple copies of this email.] IEEE International Conference on Cloud and Autonomic Computing (CAC 2015) (pending IEEE support), Cambridge, MA, USA, September 21-25, 2015. Co-located with the Ninth IEEE International Conference on Self-Adaptive and Self-Organizing Systems (SASO 2015) and with the 15th IEEE Peer-to-Peer Computing Conference. Call for Papers. Overview: Enterprise-scale cloud platforms and service systems present common and cross-cutting challenges in maximizing power efficiency and performance while maintaining predictable and reliable behavior, while at the same time responding appropriately to environmental and system changes such as hardware failures and varying workloads. Autonomic computing systems address the challenges of managing these environments by integrating monitoring, decision-processing, and actuation capabilities to autonomously manage resources and applications according to high-level policies.
Categories: Offsite Discussion

Danny Gratzer: Compiling With CPS

Planet Haskell - Wed, 04/29/2015 - 6:00pm
Posted on April 30, 2015 Tags: compilers, haskell

Hello folks. It’s been a busy month so I haven’t had much of a chance to write, but I think now’s a good time to talk about another compiler-related subject: continuation passing style conversion.

When you’re compiling a functional language (in a sane way) your compiler mostly consists of phases which run over the AST and simplify it. For example, in a language with pattern matching, it’s almost certainly the case that we can write something like

case x of
  (1, 2) -> True
  (_, _) -> False

Wonderfully concise code. However, it’s hard to compile nested patterns like that. In the compiler, we might simplify this to

case x of
  (a, b) ->
    case a of
      1 -> case b of
             2 -> True
             _ -> False
      _ -> False

note to future me, write a pattern matching compiler

We’ve transformed our large nested pattern into a series of simpler, unnested patterns. The benefit here is that this maps much more directly onto a series of conditionals (or jumps).
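For concreteness, here is a runnable transcription of the flattened match (the concrete pair type and the `main` are my additions, not from the post):

```haskell
matchPair :: (Int, Int) -> Bool
matchPair x =
  case x of
    (a, b) ->
      case a of
        1 -> case b of
               2 -> True   -- matched (1, 2)
               _ -> False
        _ -> False

main :: IO ()
main = print (matchPair (1, 2), matchPair (1, 3))  -- (True,False)
```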

Now one of the biggest decisions in any compiler is what to do with expressions. We want to get rid of complicated nested expressions because chances are our compilation target doesn’t support them. In my second to last post we transformed a functional language into something like SSA. In this post, we’re going to walk through a different intermediate representation: CPS.

What is CPS

CPS is a restriction of how a functional language works. In CPS we don’t have nested expressions anymore. We instead have a series of lets which telescope out, each binding a “flat” expression. This is the process of “removing expressions” from our language. A compiler is probably targeting something with a much weaker notion of expressions (like assembly), and so we change our tree-like structure into something more linear.

Additionally, no functions return. Instead, they take a continuation and, when they’re about to return, they pass their value to it instead. This means that conceptually, all functions are transformed from a -> b to (a, b -> void) -> void. Logically, this is actually a reasonable thing to do: it corresponds to mapping a proposition b to ¬ ¬ b. What’s cool here is that since each function call calls a continuation instead of returning its result, we can imagine each function as just transferring control over to some other part of the program instead of returning. This leads to a very slick and efficient way of implementing CPSed function calls, as we’ll see.
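As a rough sketch of that type transformation (my own, with an ordinary result type `r` standing in for void):

```haskell
-- Uncurried CPS shape: a -> b becomes (a, b -> r) -> r.
toCPS :: (a -> b) -> ((a, b -> r) -> r)
toCPS f (a, k) = k (f a)  -- never "return": hand the result to k

main :: IO ()
main = print (toCPS (+ 1) (41 :: Int, id))  -- 42
```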

This means we’d change something like

fact n = if n == 0 then 1 else n * fact (n - 1)

into

fact n k =
  if n == 0
    then k 1
    else let n' = n - 1
         in fact n' (\r -> let r' = n * r in k r')
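If you want to play with this, the CPSed factorial runs as-is in Haskell once we give it a type (the signature and `main` are my additions; an ordinary Haskell function stands in for the continuation):

```haskell
factCPS :: Int -> (Int -> r) -> r
factCPS n k =
  if n == 0
    then k 1
    else let n' = n - 1
         in factCPS n' (\r -> let r' = n * r in k r')

main :: IO ()
main = print (factCPS 5 id)  -- 120
```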

To see what’s going on here, note that we:

  1. Added an extra argument to fact, its return continuation
  2. In the first branch, we pass the result to the continuation instead of returning it
  3. In the next branch, we lift the nested expression n - 1 into a flat let binding
  4. We add an extra argument to the recursive call, the continuation
  5. In this continuation, we multiply the result of the recursive call by n (note that we do close over n here; this lambda is a real lambda)
  6. Finally, we pass the final result to the original continuation k.

The only tree-style-nesting here comes from the top if expression, everything else is completely linear.

Let’s formalize this process by converting Simply Typed Lambda Calculus (STLC) to CPS form.


First things first, we specify an AST for normal STLC.

data Tp = Arr Tp Tp | Int deriving Show

data Op = Plus | Minus | Times | Divide

-- The Tp in App refers to the return type, it's helpful later
data Exp a = App (Exp a) (Exp a) Tp
           | Lam Tp (Scope () Exp a)
           | Num Int -- No need for binding here since we have Minus
           | IfZ (Exp a) (Exp a) (Exp a)
           | Binop Op (Exp a) (Exp a)
           | Var a

We’ve supplemented our lambda calculus with natural numbers and some binary operations because it makes things a bit more fun. Additionally, we’re using bound to deal with bindings for lambdas. This means there’s a terribly boring monad instance lying around that I won’t bore you with.

To convert to CPS, we first need to figure out how to convert our types. Since CPS functions never return, we want them to go to Void, the unoccupied type. However, since our language doesn’t allow Void outside of continuations, and doesn’t allow functions that don’t go to Void, let’s bundle them up into one new type Cont a, which is just a function from a -> Void. This presents us with a problem, though: how do we turn an Arr a b into this style of function? It seems like our function should take two arguments, a and b -> Void, so that it can produce a Void of its own. However, this requires products, since currying isn’t possible with the restriction that all functions return Void! Therefore, we supplement our CPS language with pairs and projections for them.

Now we can write the AST for CPS types and a conversion between Tp and it.

data CTp = Cont CTp | CInt | CPair CTp CTp

cpsTp :: Tp -> CTp
cpsTp (Arr l r) = Cont $ CPair (cpsTp l) (Cont (cpsTp r))
cpsTp Int = CInt
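To make the translation concrete, here is a self-contained version applied to the type Int -> Int (the deriving clauses and `main` are mine):

```haskell
data Tp  = Arr Tp Tp | Int deriving (Show, Eq)
data CTp = Cont CTp | CInt | CPair CTp CTp deriving (Show, Eq)

cpsTp :: Tp -> CTp
cpsTp (Arr l r) = Cont (CPair (cpsTp l) (Cont (cpsTp r)))
cpsTp Int       = CInt

-- Int -> Int becomes a continuation taking (Int, Int -> Void):
main :: IO ()
main = print (cpsTp (Arr Int Int))  -- Cont (CPair CInt (Cont CInt))
```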

The only interesting thing here is how we translate function types, but we talked about that above. Now for expressions.

We want to define a new data type that encapsulates the restrictions of CPS. In order to do this we factor our data types into “flat expressions” and “CPS expressions”. Flat expressions are things like values and variables, while CPS expressions contain things like “jump to this continuation” or “branch on this flat expression”. Finally, there are let expressions to perform various operations on expressions.

data LetBinder a = OpBind Op (FlatExp a) (FlatExp a)
                 | ProjL a
                 | ProjR a
                 | Pair (FlatExp a) (FlatExp a)

data FlatExp a = CNum Int | CVar a | CLam CTp a (CExp a)

data CExp a = Let a (LetBinder a) (CExp a)
            | CIf (FlatExp a) (CExp a) (CExp a)
            | Jump (FlatExp a) (FlatExp a)
            | Halt (FlatExp a)

Lets let us bind the results of a few “primitive operations” over values and variables to a fresh variable. This is where things like “incrementing a number” happen. Additionally, in order to create a pair or access its elements we need to use a Let.

Notice that application here is spelled Jump, hinting that it really is just a jmp and doesn’t deal with the stack in any way. Since all calls are jumps, we cannot overflow the stack as we could with a normal calling convention. To seal off the chain of function calls we have Halt: it takes a FlatExp and returns it as the result of the program.

Expressions here are also parameterized over variables but we can’t use bound with them (for reasons that deserve a blogpost-y rant :). Because of this we settle for just ensuring that each a is globally unique.

So now instead of having a bunch of nested Exps, we have flat expressions which compute exactly one thing and linearize the tree of expressions into a series of flat ones with let binders. It’s still not quite “linear” since both lambdas and if branches let us have something tree-like.
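To see how such a linearized program actually runs, here is a tiny evaluator for just the let/halt fragment (a simplified, string-variable sketch of my own, not code from the post):

```haskell
data Op        = Plus | Times deriving Show
data FlatExp   = CNum Int | CVar String deriving Show
data LetBinder = OpBind Op FlatExp FlatExp deriving Show
data CExp      = Let String LetBinder CExp | Halt FlatExp deriving Show

-- Flat expressions are just looked up or returned; no evaluation needed.
flat :: [(String, Int)] -> FlatExp -> Int
flat _   (CNum i) = i
flat env (CVar x) = maybe (error ("unbound " ++ x)) id (lookup x env)

-- Running a CExp walks the telescope of lets, extending the environment.
run :: [(String, Int)] -> CExp -> Int
run env (Halt f) = flat env f
run env (Let x (OpBind op l r) rest) =
  let v = case op of
            Plus  -> flat env l + flat env r
            Times -> flat env l * flat env r
  in run ((x, v) : env) rest

-- (1 + 2) * 10, flattened into a chain of lets:
example :: CExp
example =
  Let "a" (OpBind Plus  (CNum 1)  (CNum 2)) $
  Let "b" (OpBind Times (CVar "a") (CNum 10)) $
  Halt (CVar "b")

main :: IO ()
main = print (run [] example)  -- 30
```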

We can now define conversion to CPS with one major helper function

cps :: (Eq a, Enum a) => Exp a -> (FlatExp a -> Gen a (CExp a)) -> Gen a (CExp a)

This takes an expression, a “continuation” and produces a CExp. We have some monad-gen stuff going on here because we need unique variables. The “continuation” is an actual Haskell function. So our function breaks an expression down to a FlatExp and then feeds it to the continuation.

cps (Var a) c = c (CVar a)
cps (Num i) c = c (CNum i)

The first two cases are easy since variables and numbers are already flat expressions, they go straight into the continuation.

cps (IfZ i t e) c = cps i $ \ic -> CIf ic <$> cps t c <*> cps e c

For IfZ we first recurse on i. Then, once we have a flattened computation representing i, we use CIf and recurse on both branches.

cps (Binop op l r) c =
  cps l $ \fl ->
  cps r $ \fr ->
  gen >>= \out ->
  Let out (OpBind op fl fr) <$> c (CVar out)

Like before, we use cps to recurse on the left and right sides of the expression. This gives us two flat expressions which we use with OpBind to compute the result and bind it to out. Now that we have a variable for the result we just toss it to the continuation.

cps (Lam tp body) c = do
  [pairArg, newCont, newArg] <- replicateM 3 gen
  let body' = instantiate1 (Var newArg) body
  cbody <- cps body' (return . Jump (CVar newCont))
  c (CLam (cpsTp tp) pairArg $
      Let newArg (ProjL pairArg) $
      Let newCont (ProjR pairArg) $
      cbody)

Converting a lambda is a little bit more work. It needs to take a pair, so a lot of the work is projecting out the left component (the argument) and the right component (the continuation). With those two things in hand, we recurse into the body using the continuation supplied as an argument. The actual code makes this process look a little out of order. Remember that we only use cbody once we’ve bound the projections to newArg and newCont respectively.

cps (App l r tp) c = do
  arg <- gen
  cont <- CLam (cpsTp tp) arg <$> c (CVar arg)
  cps l $ \fl ->
    cps r $ \fr ->
    gen >>= \pair ->
    return $ Let pair (Pair fr cont) (Jump fl (CVar pair))

For application we first create a lambda representing the current continuation. We then evaluate the left and right sides of the application using recursive calls. Now that we have a function to jump to, we create a pair of the argument and the continuation and bind it to a name. From there, we just jump to fl, the function. Turning the continuation into a lambda is a little strange; it’s also why we needed the type annotation on App. The lambda uses the return type of the application and constructs a continuation that maps a to c a. Note that c a is a Haskell expression with the type CExp a.

convert :: Exp Int -> CExp Int
convert = runGen . flip cps (return . Halt)

With this, we’ve written a nice little compiler pass to convert expressions into their CPS forms. By doing this we’ve “eliminated expressions”. Everything is now flat and evaluation basically proceeds by evaluating one small computation and using the result to compute another and another.

There are still some things left to compile out before this is machine code, though:

  • Closures - these can be converted to explicitly pass records with closure conversion
  • Hoist lambdas out of nested scope - this gets rid of anonymous functions, something we don’t have in C or assembly
  • Make allocation explicit - Allocate a block of memory for a group of let statements and have them explicitly move the results of their computations to it
  • Register allocation - Cleverly choose whether to store some particular variable in a register or load it in as needed.

Once we’ve done these steps we’ve basically written a compiler. All of them, however, are made much simpler by the fact that our conversion to CPS has already compiled out expressions and (crucially) function calls.

Wrap Up

CPS conversion is a nice alternative to something like STG machines for lazy languages or SSA for imperative ones. As far as I’m aware, the main SML compiler (SML/NJ) compiles code in this way, as does Ur/Web if I’m not mistaken. Additionally, the course entitled “Higher Order, Typed Compilation” taught here at CMU uses CPS conversion to make compiling SML really quite pleasant.

In fact, someone (Andrew Appel?) once wrote a paper noting that SSA and CPS are actually the same. The key difference is that in SSA we merge multiple blocks together using the phi function, while in CPS we just let multiple source blocks jump to the same destination block (continuation). You can see this in our conversion of IfZ to CPS: instead of using phi to merge in the two branches, they both just use the continuation to jump to the remainder of the program. This makes things a little simpler, since no special merging construct is needed.

Finally, if you’re compiling a language like Scheme with call/cc, using CPS conversion makes the whole thing completely trivial. All you do is define call/cc at the CPS level as

call/cc (f, c) = f ((λ (x, c') → c x), c)

So instead of using the continuation supplied to us in the expression we give to f, we use the one for the whole call/cc invocation! This causes us not to return into the body of f but instead to carry on with the rest of the program as if f had returned whatever value x is. This is how my old Scheme compiler did things; I put off figuring out how to implement call/cc for a week before I realized it was a 10-minute job!
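The same trick is visible at the Haskell level with the standard Cont monad from transformers (my example, not the author's Scheme compiler):

```haskell
import Control.Monad.Trans.Cont (callCC, evalCont)

escapes :: Int
escapes = evalCont $ do
  r <- callCC $ \k -> do
    _ <- k 42     -- invoke the captured continuation: jump straight out...
    return 0      -- ...so this line never runs
  return (r + 1)  -- the program carries on as if callCC returned 42

main :: IO ()
main = print escapes  -- 43
```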

Hope this was helpful!

Categories: Offsite Blogs

Diagrams: Diagrams + Cairo + Gtk + Mouse picking, Reloaded

Planet Haskell - Wed, 04/29/2015 - 6:00pm

by Brent Yorgey on April 30, 2015

Tagged as: cairo, GTK, mouse, coordinates, transformation, features, 1.3.

Diagrams + Cairo + Gtk + Mouse picking, reloaded

A little over a year ago, Christopher Mears wrote a nice article on how to match up mouse clicks in a GTK window with parts of a diagram. The only downside was that to make it work, you had to explicitly construct the diagram in such a way that its coordinate system precisely matched the coordinates of the window you wanted to use, so that there was essentially no "translation" to do. This was unfortunate, since constructing a diagram in a particular global coordinate system is not a very "diagrams-y" sort of thing to do. However, the 1.3 release of diagrams includes a new feature that makes matching up mouse clicks and diagrams much easier and more idiomatic, and I thought it would be worth updating Chris's original example to work more idiomatically in diagrams 1.3. The complete code is listed at the end.

First, here's how we construct the house. This is quite different from the way Chris did it; I have tried to make it more idiomatic by focusing on local relationships of constituent pieces, rather than putting everything at absolute global coordinates. We first create all the constituent pieces:

> -- The diagram to be drawn, with features tagged by strings.
> prettyHouse :: QDiagram Cairo V2 Double [String]
> prettyHouse = house
>   where
>     roof    = triangle 1   # scaleToY 0.75 # centerY # fc blue
>     door    = rect 0.2 0.4 # fc red
>     handle  = circle 0.02  # fc black
>     wall    = square 1     # fc yellow
>     chimney = fromOffsets [0 ^& 0.25, 0.1 ^& 0, 0 ^& (-0.4)]
>             # closeTrail # strokeT # fc green
>             # centerX
>             # named "chimney"
>     smoke = mconcat
>       [ circle 0.05 # translate v
>       | v <- [ zero, 0.05 ^& 0.15 ]
>       ]
>       # fc grey

We then put the pieces together, labelling each by its name with the value function. Diagrams can be valuated by any monoid; when two diagrams are combined, the value at each point will be the mappend of the values of the two component diagrams. In this case, each point in the final diagram will accumulate a list of Strings corresponding to the pieces of the house which are under that point. Note how we make use of combinators like vcat and mconcat, alignments like alignB, snugL and snugR, and the use of a named subdiagram (the chimney) to position the components relative to each other. (You can click on any of the above function names to go to their documentation!)

>     house = vcat
>       [ mconcat
>         [ roof    # snugR # value ["roof"]
>         , chimney # snugL # value ["chimney"]
>         ]
>         # centerX
>       , mconcat
>         [ handle # translate (0.05 ^& 0.2) # value ["handle"]
>         , door   # alignB # value ["door"]
>         , wall   # alignB # value ["wall"]
>         ]
>       ]
>       # withName "chimney" (\chim ->
>           atop (smoke # moveTo (location chim) # translateY 0.4
>                       # value ["smoke"]
>                )
>         )

Now, when we render the diagram to a GTK window, we can get diagrams to give us an affine transformation that mediates between the diagram's local coordinates and the GTK window's coordinates. I'll just highlight a few pieces of the code; the complete listing can be found at the end of the post. We first create an IORef to hold the transformation:

> gtk2DiaRef <- (newIORef mempty :: IO (IORef (T2 Double)))

We initialize it with the identity transformation. We use the renderDiaT function to get not only a rendering action but also the transformation from diagram to GTK coordinates; we save the inverse of the transformation in the IORef (since we will want to convert from GTK to diagram coordinates):

> let (dia2gtk, (_,r)) = renderDiaT Cairo
>       (CairoOptions "" (mkWidth 250) PNG False)
>       prettyHouse
>
> -- store the inverse of the diagram -> window coordinate transformation
> -- for later use in interpreting mouse clicks
> writeIORef gtk2DiaRef (inv dia2gtk)

(Note that if it is possible for the first motion notify event to happen before the expose event, then such mouse motions will be computed to correspond to the wrong part of the diagram, but who cares.) Now, when we receive a mouse click, we apply the stored transformation to convert to a point in diagram coordinates, and pass it to the sample function to extract a list of house components at that location.

> (x,y) <- eventCoordinates
>
> -- transform the mouse click back into diagram coordinates.
> gtk2Dia <- liftIO $ readIORef gtk2DiaRef
> let pt' = transform gtk2Dia (p2 (x,y))
>
> liftIO $ do
>   putStrLn $ show (x,y) ++ ": "
>            ++ intercalate " " (sample prettyHouse pt')

The final product ends up looking and behaving identically to the video that Chris made.

Finally, here's the complete code. A lot of it is just boring standard GTK setup.

> import Control.Monad            (void)
> import Control.Monad.IO.Class   (liftIO)
> import Data.IORef
> import Data.List                (intercalate)
> import Diagrams.Backend.Cairo
> import Diagrams.Backend.Cairo.Internal
> import Diagrams.Prelude
> import Graphics.UI.Gtk
>
> main :: IO ()
> main = do
>   -- Ordinary Gtk setup.
>   void initGUI
>   w  <- windowNew
>   da <- drawingAreaNew
>   w `containerAdd` da
>   void $ w `on` deleteEvent $ liftIO mainQuit >> return True
>
>   -- Make an IORef to hold the transformation from window to diagram
>   -- coordinates.
>   gtk2DiaRef <- (newIORef mempty :: IO (IORef (T2 Double)))
>
>   -- Render the diagram on the drawing area and save the transformation.
>   void $ da `on` exposeEvent $ liftIO $ do
>     dw <- widgetGetDrawWindow da
>
>     -- renderDiaT returns both a rendering result as well as the
>     -- transformation from diagram to output coordinates.
>     let (dia2gtk, (_,r)) = renderDiaT Cairo
>           (CairoOptions "" (mkWidth 250) PNG False)
>           prettyHouse
>
>     -- store the inverse of the diagram -> window coordinate transformation
>     -- for later use in interpreting mouse clicks
>     writeIORef gtk2DiaRef (inv dia2gtk)
>
>     renderWithDrawable dw r
>     return True
>
>   -- When the mouse moves, show the coordinates and the objects under
>   -- the pointer.
>   void $ da `on` motionNotifyEvent $ do
>     (x,y) <- eventCoordinates
>
>     -- transform the mouse click back into diagram coordinates.
>     gtk2Dia <- liftIO $ readIORef gtk2DiaRef
>     let pt' = transform gtk2Dia (p2 (x,y))
>
>     liftIO $ do
>       putStrLn $ show (x,y) ++ ": "
>                ++ intercalate " " (sample prettyHouse pt')
>     return True
>
>   -- Run the Gtk main loop.
>   da `widgetAddEvents` [PointerMotionMask]
>   widgetShowAll w
>   mainGUI
>
> -- The diagram to be drawn, with features tagged by strings.
> prettyHouse :: QDiagram Cairo V2 Double [String]
> prettyHouse = house
>   where
>     roof    = triangle 1   # scaleToY 0.75 # centerY # fc blue
>     door    = rect 0.2 0.4 # fc red
>     handle  = circle 0.02  # fc black
>     wall    = square 1     # fc yellow
>     chimney = fromOffsets [0 ^& 0.25, 0.1 ^& 0, 0 ^& (-0.4)]
>             # closeTrail # strokeT # fc green
>             # centerX
>             # named "chimney"
>     smoke = mconcat
>       [ circle 0.05 # translate v
>       | v <- [ zero, 0.05 ^& 0.15 ]
>       ]
>       # fc grey
>     house = vcat
>       [ mconcat
>         [ roof    # snugR # value ["roof"]
>         , chimney # snugL # value ["chimney"]
>         ]
>         # centerX
>       , mconcat
>         [ handle # translate (0.05 ^& 0.2) # value ["handle"]
>         , door   # alignB # value ["door"]
>         , wall   # alignB # value ["wall"]
>         ]
>       ]
>       # withName "chimney" (\chim ->
>           atop (smoke # moveTo (location chim) # translateY 0.4
>                       # value ["smoke"]
>                )
>         )
Categories: Offsite Blogs

Agda and cpphs 1.19

libraries list - Wed, 04/29/2015 - 3:42pm
Hi, The current version of Agda ( in Hackage doesn't install with the current version of cpphs (1.19) in Hackage due to the following restriction: build-tools: cpphs >= 1.18.6 && < 1.19. Although I'm a maintainer of Agda, using the Hackage web interface I couldn't increase the upper bound for cpphs to 1.20 because this upper bound is inside a cabal flag. Since this problem has been repeatedly reported by Agda's users, could some Hackage trustee please fix it? Thanks,
Categories: Offsite Discussion

Philip Wadler: Fractal Maps

Planet Haskell - Wed, 04/29/2015 - 3:15pm

Sky Welch's Fractal Maps updates Alasdair Corbett's Mandelbrot Maps. It renders faster, and displays the two curves side-by-side (rather than displaying one large and one tiny). Mandelbrot Maps is the most popular Mandelbrot app in Google Play, with over 10,000 downloads. I expect Fractal Maps to catch up soon. Try it today! (Disclaimer/boast: both Sky and Alasdair produced their software as part of UG4 projects under my supervision.)
Categories: Offsite Blogs

ANN: Uinitd v0.1.0.0, userspace init system

Haskell on Reddit - Wed, 04/29/2015 - 2:18pm

This is my first utility written in Haskell, and really the first stateful Haskell program I have written. I mainly made this for myself since I run multiple computers with different window managers and wanted something better than messing about with rc files.

Writing this in Haskell was quite enjoyable, especially since this is the first time I have worked with a monad stack (using reader, journal, and state). The biggest issues I had, and the biggest timesink, was by far dealing with IO Exceptions. Much of the code is still messy and deserves a refactor, but I'd love any comments on what I have and what I can do better with.

As far as the program itself goes, all the core functionality is present. I'm testing it on three separate systems and haven't encountered any major bugs yet. Some useful features are currently missing like service deletion and interactive mode, but I'll do a major refactor/cleaning before adding more.

submitted by Watley
Categories: Incoming News

Thinking About Performance

Haskell on Reddit - Wed, 04/29/2015 - 11:49am
Categories: Incoming News

Proposal: Add "fma" to the RealFloat class

libraries list - Wed, 04/29/2015 - 8:21am
This proposal is very much in the spirit of the earlier proposal on adding new float/double functions; for instance see here: "fma" (a.k.a. fused-multiply-add) is one of those functions, and it is the workhorse in many HPC applications. The idea is to multiply two floats and add a third with just one rounding, thus preserving more precision. There are a multitude of applications for this operation in engineering data-analysis, and modern processors come with custom implementations and a lot of hardware to support it natively. I created a ticket along these lines already: Edward suggested that the matter should further be discussed here. I think the proposal is rather straightforward, and should be noncontroversial. To wit, we shall add a new method to the RealFloat class:

class (RealFrac a, Floating a) => RealFloat a where
    ...
    fma :: a -> a -> a -> a

The intention is that
Categories: Offsite Discussion

Wiki user

haskell-cafe - Wed, 04/29/2015 - 7:07am
Hello, The Haskell Wiki says automatic registration has been disabled, and that I should send an e-mail. Could you please create a wiki account for me? The username I'd like to have is: akrasner. Thanks in advance!
Categories: Offsite Discussion

Looking for retainers of PINNED objects

glasgow-user - Wed, 04/29/2015 - 5:37am
Hi all, I'm profiling a fairly large program which seems to have a space leak. The heap profiling (-hc) shows that PINNED objects are accumulated over time. In order to check the retainers of the objects, I ran the retainer profiling. Unfortunately it didn't output anything with -hr -hcPINNED. Also, this is just a guess though, the retainer profiling without any filters (I mean just -hr) doesn't seem to include PINNED objects at all. Is there a way to check what retains the PINNED objects? Thanks, Mitsutoshi _______________________________________________ Glasgow-haskell-users mailing list Glasgow-haskell-users< at >
Categories: Offsite Discussion

Announcing: stackage-install

Haskell on Reddit - Wed, 04/29/2015 - 4:08am
Categories: Incoming News

FP Complete: Announcing: stackage-install

Planet Haskell - Wed, 04/29/2015 - 3:30am

Hot on the heels of yesterday's release of stackage-upload, I'm happy to announce the release of stackage-install. This tool was actually not something we'd planned on writing, but Greg Weber came up with the idea for this addition, so I went ahead with it. What's exciting is that, combined with stackage-update, users of Haskell packages now have a simple workflow that ensures all packages are downloaded over a secure connection.

As with stackage-upload, I've copied below the content of the README file; if you see errors please send a pull request to update the content. This tool is pretty simple right now, but can be easily extended. If others are interested in collaborating on this project, please be in touch.

stackage-install provides a wrapper around the cabal install command, which will download packages more securely. Initially, this means downloading over an HTTPS connection from FP Complete's Amazon S3 mirror of Hackage, though more hardening is planned for the future (see future improvements below).

To install, simply run cabal update && cabal install stackage-install. Usage is intended to overlap well with cabal install. Whenever you would have run cabal install foo, you can now run stackage-install foo (or stk install foo with stackage-cli installed), which will perform the following steps:

  1. Run cabal fetch --dry-run ... to get cabal's build plan
  2. Download the relevant packages from S3, and place them in the locations that cabal-install expects
  3. Run cabal install ...

If you have a modified remote-repo in your ~/.cabal/config file, this tool will not provide proper hardening. Most users do not modify their remote-repo, so this shouldn't be an issue most of the time.

There are some combinations of cabal install arguments which may not translate well to this tool. One known issue is that passing --dry-run is not supported, but others may apply as well.

This tool necessarily has to call cabal-install twice, once to calculate the dependencies, and then to install them. It's theoretically possible that cabal-install could come up with different build plans between the two calls, in which case the second call may download some packages insecurely. I've opened cabal issue #2566 about disabling downloading in cabal.

Why not fix cabal?

Hopefully cabal will get fixed soon; the discussion has already started. It's unfortunately unclear how long that discussion will take, and I received a specific request to write this tool. Since it's a small amount of code, I went ahead with this as an interim solution.

That said, some of the future enhancements discussed below are not planned for cabal, in which case this tool will continue to remain relevant for people looking for additional security beyond transport security.

Why Stackage?

See the same question and its answer on stackage-update.

Future enhancements
  • Check hashes of all packages downloaded against a collection of package hashes
  • Verify signatures from authors against the signature archive
Categories: Offsite Blogs

"Prime sieve" and Haskell demo

haskell-cafe - Wed, 04/29/2015 - 1:40am
Categories: Offsite Discussion

Edward Kmett: Domains, Sets, Traversals and Applicatives

Planet Haskell - Wed, 04/29/2015 - 1:37am

Last time I looked at free monoids, and noticed that in Haskell lists don't really cut it. This is a consequence of laziness and general recursion. To model a language with those properties, one needs to use domains and monotone, continuous maps, rather than sets and total functions (a call-by-value language with general recursion would use domains and strict maps instead).

This time I'd like to talk about some other examples of this, and point out how doing so can (perhaps) resolve some disagreements that people have about the specific cases.

The first example is not one that I came up with: induction. It's sometimes said that Haskell does not have inductive types at all, or that we cannot reason about functions on its data types by induction. However, I think this is (technically) inaccurate. What's true is that we cannot simply pretend that our types are sets and use the induction principles for sets to reason about Haskell programs. Instead, one has to figure out what inductive domains would be, and what their proof principles are.

Fortunately, there are some papers about doing this. The most recent (that I'm aware of) is Generic Fibrational Induction. I won't get too into the details, but it shows how one can talk about induction in a general setting, where one has a category that roughly corresponds to the type theory/programming language, and a second category of proofs that is 'indexed' by the first category's objects. Importantly, it is not required that the second category is somehow 'part of' the type theory being reasoned about, as is often the case with dependent types, although that is also a special case of their construction.

One of the results of the paper is that this framework can be used to talk about induction principles for types that don't make sense as sets. Specifically:

  newtype Hyp = Hyp ((Hyp -> Int) -> Int)  

the type of "hyperfunctions". Instead of interpreting this type as a set, where it would effectively require a set that is isomorphic to the power set of its power set, they interpret it in the category of domains and strict functions mentioned earlier. They then construct the proof category in a similar way as one would for sets, except instead of talking about predicates as subsets, we talk about sub-domains instead. Once this is done, their framework gives a notion of induction for this type.

This example is suitable for ML (and suchlike), due to the strict functions, and sort of breaks the idea that we can really get away with only thinking about sets, even there. Sets are good enough for some simple examples (like flat domains where we don't care about ⊥), but in general we have to generalize induction itself to apply to all types in the 'good' language.

While I haven't worked out how the generic induction would work out for Haskell, I have little doubt that it would, because ML actually contains all of Haskell's data types (and vice versa). So the fact that the framework gives meaning to induction for ML implies that it does so for Haskell. If one wants to know what induction for Haskell's 'lazy naturals' looks like, they can study the ML analogue of:

  data LNat = Zero | Succ (() -> LNat)  

because function spaces lift their codomain, and make things 'lazy'.
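To make the lifting concrete, here is a small sketch in Haskell itself (the names `omega` and `atLeast` are mine, for illustration, not from the paper):

```haskell
-- The lazy naturals, with the laziness made explicit as a thunk.
data LNat = Zero | Succ (() -> LNat)

-- The "infinite" natural is a legitimate inhabitant of the domain:
omega :: LNat
omega = Succ (\_ -> omega)

-- Functions can still inspect finite prefixes of omega and terminate.
atLeast :: Int -> LNat -> Bool
atLeast n _ | n <= 0 = True
atLeast _ Zero       = False
atLeast n (Succ k)   = atLeast (n - 1) (k ())
```

Any induction principle for this type has to account for inhabitants like `omega`, which is exactly where the sub-domain (rather than subset) predicates come in.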


The other example I'd like to talk about hearkens back to the previous article. I explained how foldMap is the proper fundamental method of the Foldable class, because it can be massaged to look like:

  foldMap :: Foldable f => f a -> FreeMonoid a  

and lists are not the free monoid, because they do not work properly for various infinite cases.

I also mentioned that foldMap looks a lot like traverse:

  foldMap  :: (Foldable t,    Monoid m)      => (a -> m)   -> t a -> m
  traverse :: (Traversable t, Applicative f) => (a -> f b) -> t a -> f (t b)

And of course, we have Monoid m => Applicative (Const m), and the functions are expected to agree in this way when applicable.
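The agreement can be written out directly; this is essentially `foldMapDefault` from Data.Traversable, obtained by instantiating the Applicative in traverse to `Const m`:

```haskell
import Data.Functor.Const (Const(..))
import Data.Monoid (Sum(..))

-- Recover foldMap from traverse via the Monoid m => Applicative (Const m)
-- instance: pure is mempty, <*> is mappend.
foldMapViaTraverse :: (Traversable t, Monoid m) => (a -> m) -> t a -> m
foldMapViaTraverse f = getConst . traverse (Const . f)

-- e.g. foldMapViaTraverse Sum [1,2,3] = Sum 6
```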

Now, people like to get in arguments about whether traversals are allowed to be infinite. I know Ed Kmett likes to argue that they can be, because he has lots of examples. But, not everyone agrees, and especially people who have papers proving things about traversals tend to side with the finite-only side. I've heard this includes one of the inventors of Traversable, Conor McBride.

In my opinion, the above disagreement is just another example of a situation where we have a generic notion instantiated in two different ways, and intuition about one does not quite transfer to the other. If you are working in a language like Agda or Coq (for proving), you will be thinking about traversals in the context of sets and total functions. And there, traversals are finite. But in Haskell, there are infinitary cases to consider, and they should work out all right when thinking about domains instead of sets. But I should probably put forward some argument for this position (and even if I don't need to, it leads somewhere else interesting).

One example that people like to give about finitary traversals is that they can be done via lists. Given a finite traversal, we can traverse to get the elements (using Const [a]), traverse the list, then put them back where we got them by traversing again (using State [a]). Usually when you see this, though, there's some subtle cheating in relying on the list to be exactly the right length for the second traversal. It will be, because we got it from a traversal of the same structure, but I would expect that proving the function is actually total to be a lot of work. Thus, I'll use this as an excuse to do my own cheating later.
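Here is one way to spell out that construction (the names are mine, and the partial pattern in `pop` is exactly the cheating just mentioned):

```haskell
import Data.Functor.Const (Const(..))

-- A minimal State applicative, to keep the sketch self-contained.
newtype State s a = State { runState :: s -> (a, s) }

instance Functor (State s) where
  fmap f (State g) = State $ \s -> let (a, s') = g s in (f a, s')

instance Applicative (State s) where
  pure a = State $ \s -> (a, s)
  State f <*> State x = State $ \s ->
    let (g, s')  = f s
        (a, s'') = x s'
    in (g a, s'')

-- Phase 1: collect the elements using Const [a].
contents :: Traversable t => t a -> [a]
contents = getConst . traverse (Const . (: []))

-- Phase 3: put elements back using State [b]. The match in pop is only
-- safe because the list has exactly the right length.
refill :: Traversable t => t a -> [b] -> t b
refill t bs = fst $ runState (traverse (const pop) t) bs
  where pop = State $ \(b : bs') -> (b, bs')

-- Phase 2 in the middle: traverse the list of contents.
listTraverse :: (Applicative f, Traversable t) => (a -> f b) -> t a -> f (t b)
listTraverse f t = fmap (refill t) (traverse f (contents t))
```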

Now, the above uses lists, but why are we using lists when we're in Haskell? We know they're deficient in certain ways. It turns out that we can give a lot of the same relevant structure to the better free monoid type:

  newtype FM a = FM (forall m. Monoid m => (a -> m) -> m) deriving (Functor)

  instance Applicative FM where
    pure x = FM ($ x)
    FM ef <*> FM ex = FM $ \k -> ef $ \f -> ex $ \x -> k (f x)

  instance Monoid (FM a) where
    mempty = FM $ \_ -> mempty
    mappend (FM l) (FM r) = FM $ \k -> l k <> r k

  instance Foldable FM where
    foldMap f (FM e) = e f

  newtype Ap f b = Ap { unAp :: f b }

  instance (Applicative f, Monoid b) => Monoid (Ap f b) where
    mempty = Ap $ pure mempty
    mappend (Ap l) (Ap r) = Ap $ (<>) <$> l <*> r

  instance Traversable FM where
    traverse f (FM e) = unAp . e $ Ap . fmap pure . f

So, free monoids are Monoids (of course), Foldable, and even Traversable. At least, we can define something with the right type that wouldn't bother anyone if it were written in a total language with the right features, but in Haskell it happens to allow various infinite things that people don't like.

Now it's time to cheat. First, let's define a function that can take any Traversable to our free monoid:

  toFreeMonoid :: Traversable t => t a -> FM a
  toFreeMonoid f = FM $ \k -> getConst $ traverse (Const . k) f
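A quick self-contained check that this does what one expects (with the Foldable method inlined as `fmFoldMap`, a name of my own):

```haskell
{-# LANGUAGE RankNTypes #-}
import Data.Functor.Const (Const(..))
import Data.Monoid (Sum(..))

newtype FM a = FM (forall m. Monoid m => (a -> m) -> m)

toFreeMonoid :: Traversable t => t a -> FM a
toFreeMonoid f = FM $ \k -> getConst $ traverse (Const . k) f

-- Folding the universal-property encoding is just application.
fmFoldMap :: Monoid m => (a -> m) -> FM a -> m
fmFoldMap f (FM e) = e f
```

So `fmFoldMap Sum . toFreeMonoid` sums a structure's elements, as `foldMap Sum` would.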

Now let's define a Monoid that's not a monoid:

  data Cheat a = Empty | Single a | Append (Cheat a) (Cheat a)

  instance Monoid (Cheat a) where
    mempty = Empty
    mappend = Append

You may recognize this as the data version of the free monoid from the previous article, where we get the real free monoid by taking a quotient. Using this, we can define an Applicative that's not valid:

  newtype Cheating b a = Cheating { prosper :: Cheat b -> a } deriving (Functor)

  instance Applicative (Cheating b) where
    pure x = Cheating $ \_ -> x
    Cheating f <*> Cheating x = Cheating $ \c -> case c of
      Append l r -> f l (x r)

Given these building blocks, we can define a function to relabel a traversable using a free monoid:

  relabel :: Traversable t => t a -> FM b -> t b
  relabel t (FM m) = prosper (traverse (const hope) t) (m Single)
    where hope = Cheating $ \c -> case c of
            Single x -> x

And we can implement any traversal by taking a trip through the free monoid:

  slowTraverse :: (Applicative f, Traversable t) => (a -> f b) -> t a -> f (t b)
  slowTraverse f t = fmap (relabel t) . traverse f . toFreeMonoid $ t

And since we got our free monoid via traversing, all the partiality I hid in the above won't blow up in practice, rather like the case with lists and finite traversals.

Arguably, this is worse cheating. It relies on the exact association structure to work out, rather than just number of elements. The reason is that for infinitary cases, you cannot flatten things out, and there's really no way to detect when you have something infinitary. The finitary traversals have the luxury of being able to reassociate everything to a canonical form, while the infinite cases force us to not do any reassociating at all. So this might be somewhat unsatisfying.

But, what if we didn't have to cheat at all? We can get the free monoid by tweaking foldMap, and it looks like traverse, so what happens if we do the same manipulation to the latter?

It turns out that lens has a type for this purpose, a slight specialization of which is:

  newtype Bazaar a b t = Bazaar { runBazaar :: forall f. Applicative f => (a -> f b) -> f t }  

Using this type, we can reorder traverse to get:

  howBizarre :: Traversable t => t a -> Bazaar a b (t b)
  howBizarre t = Bazaar $ \k -> traverse k t

But now, what do we do with this? And what even is it? [1]

If we continue drawing on intuition from Foldable, we know that foldMap is related to the free monoid. Traversable has more indexing, and instead of Monoid uses Applicative. But the latter are actually related to the former; Applicatives are monoidal (closed) functors. And it turns out, Bazaar has to do with free Applicatives.

If we want to construct free Applicatives, we can use our universal property encoding trick:

  newtype Free p f a = Free { gratis :: forall g. p g => (forall x. f x -> g x) -> g a }  

This is a higher-order version of the free p, where we parameterize over the constraint we want to use to represent structures. So Free Applicative f is the free Applicative over a type constructor f. I'll leave the instances as an exercise.
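One hedged solution to the exercise, specialized to p = Applicative (the fully generic version would additionally need a way to say that p entails Functor):

```haskell
{-# LANGUAGE RankNTypes, ConstraintKinds, KindSignatures, FlexibleInstances #-}
import Data.Kind (Constraint)

newtype Free (p :: (* -> *) -> Constraint) f a =
  Free { gratis :: forall g. p g => (forall x. f x -> g x) -> g a }

-- Since p g = Applicative g implies Functor g, we can delegate fmap:
instance Functor (Free Applicative f) where
  fmap f (Free e) = Free $ \k -> fmap f (e k)

-- The Applicative structure is likewise inherited pointwise from g:
instance Applicative (Free Applicative f) where
  pure x = Free $ \_ -> pure x
  Free ef <*> Free ex = Free $ \k -> ef k <*> ex k
```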

Since the free monoid is a monad, we'd expect Free p to be a monad, too. In this case, it is a McBride style indexed monad, as seen in The Kleisli Arrows of Outrageous Fortune.

  type f ~> g = forall x. f x -> g x

  embed :: f ~> Free p f
  embed fx = Free $ \k -> k fx

  translate :: (f ~> g) -> Free p f ~> Free p g
  translate tr (Free e) = Free $ \k -> e (k . tr)

  collapse :: Free p (Free p f) ~> Free p f
  collapse (Free e) = Free $ \k -> e $ \(Free e') -> e' k

That paper explains how these are related to Atkey style indexed monads:

  data At key i j where
    At :: key -> At key i i

  type Atkey m i j a = m (At a j) i

  ireturn :: IMonad m => a -> Atkey m i i a
  ireturn = ...

  ibind :: IMonad m => Atkey m i j a -> (a -> Atkey m j k b) -> Atkey m i k b
  ibind = ...

It turns out, Bazaar is exactly the Atkey indexed monad derived from the Free Applicative indexed monad (with some arguments shuffled) [2]:

  hence :: Bazaar a b t -> Atkey (Free Applicative) t b a
  hence bz = Free $ \tr -> runBazaar bz $ tr . At

  forth :: Atkey (Free Applicative) t b a -> Bazaar a b t
  forth fa = Bazaar $ \g -> gratis fa $ \(At a) -> g a

  imap :: (a -> b) -> Bazaar a i j -> Bazaar b i j
  imap f (Bazaar e) = Bazaar $ \k -> e (k . f)

  ipure :: a -> Bazaar a i i
  ipure x = Bazaar ($ x)

  (>>>=) :: Bazaar a j i -> (a -> Bazaar b k j) -> Bazaar b k i
  Bazaar e >>>= f = Bazaar $ \k -> e $ \x -> runBazaar (f x) k

  (>==>) :: (s -> Bazaar i o t) -> (i -> Bazaar a b o) -> s -> Bazaar a b t
  (f >==> g) x = f x >>>= g

As an aside, Bazaar is also an (Atkey) indexed comonad, and the one that characterizes traversals, similar to how indexed store characterizes lenses. A Lens s t a b is equivalent to a coalgebra s -> Store a b t. A traversal is a similar Bazaar coalgebra:

  s -> Bazaar a b t
    ~ s -> forall f. Applicative f => (a -> f b) -> f t
    ~ forall f. Applicative f => (a -> f b) -> s -> f t
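The steps in that chain are just unwrapping plus reordering of arguments; spelled out as code (the conversion names are mine):

```haskell
{-# LANGUAGE RankNTypes #-}

newtype Bazaar a b t =
  Bazaar { runBazaar :: forall f. Applicative f => (a -> f b) -> f t }

-- The van Laarhoven representation of a traversal:
type Traversal s t a b = forall f. Applicative f => (a -> f b) -> s -> f t

-- A traversal determines a Bazaar coalgebra...
toCoalgebra :: Traversal s t a b -> s -> Bazaar a b t
toCoalgebra tr s = Bazaar $ \k -> tr k s

-- ...and vice versa; the two are mutually inverse.
fromCoalgebra :: (s -> Bazaar a b t) -> Traversal s t a b
fromCoalgebra co k s = runBazaar (co s) k
```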

It so happens that Kleisli composition of the Atkey indexed monad above (>==>) is traversal composition.

Anyhow, Bazaar also inherits Applicative structure from Free Applicative:

  instance Functor (Bazaar a b) where
    fmap f (Bazaar e) = Bazaar $ \k -> fmap f (e k)

  instance Applicative (Bazaar a b) where
    pure x = Bazaar $ \_ -> pure x
    Bazaar ef <*> Bazaar ex = Bazaar $ \k -> ef k <*> ex k

This is actually analogous to the Monoid instance for the free monoid; we just delegate to the underlying structure.

The more exciting thing is that we can fold and traverse over the first argument of Bazaar, just like we can with the free monoid:

  bfoldMap :: Monoid m => (a -> m) -> Bazaar a b t -> m
  bfoldMap f (Bazaar e) = getConst $ e (Const . f)

  newtype Comp g f a = Comp { getComp :: g (f a) } deriving (Functor)

  instance (Applicative f, Applicative g) => Applicative (Comp g f) where
    pure = Comp . pure . pure
    Comp f <*> Comp x = Comp $ liftA2 (<*>) f x

  btraverse :: (Applicative f) => (a -> f a') -> Bazaar a b t -> f (Bazaar a' b t)
  btraverse f (Bazaar e) = getComp $ e (Comp . fmap ipure . f)

This is again analogous to the free monoid code. Comp is the analogue of Ap, and we use ipure in btraverse. I mentioned that Bazaar is a comonad:

  extract :: Bazaar b b t -> t
  extract (Bazaar e) = runIdentity $ e Identity
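Extracting immediately after howBizarre just runs the suspended traversal with Identity, so it rebuilds the original structure. A minimal self-contained check:

```haskell
{-# LANGUAGE RankNTypes #-}
import Data.Functor.Identity (Identity(..))

newtype Bazaar a b t =
  Bazaar { runBazaar :: forall f. Applicative f => (a -> f b) -> f t }

howBizarre :: Traversable t => t a -> Bazaar a b (t b)
howBizarre t = Bazaar $ \k -> traverse k t

extract :: Bazaar b b t -> t
extract (Bazaar e) = runIdentity $ e Identity

-- extract . howBizarre is the identity: traverse Identity rebuilds t.
```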

And now we are finally prepared to not cheat:

  honestTraverse :: (Applicative f, Traversable t) => (a -> f b) -> t a -> f (t b)
  honestTraverse f = fmap extract . btraverse f . howBizarre

So, we can traverse by first turning our Traversable into some structure that's kind of like the free monoid, except having to do with Applicative, traverse that, and then pull a result back out. Bazaar retains the information that we're eventually building back the same type of structure, so we don't need any cheating.

To pull this back around to domains, there's nothing about this code to object to if done in a total language. But, if we think about our free Applicative-ish structure, in Haskell, it will naturally allow infinitary expressions composed of the Applicative operations, just like the free monoid will allow infinitary monoid expressions. And this is okay, because some Applicatives can make sense of those, so throwing them away would make the type not free, in the same way that even finite lists are not the free monoid in Haskell. And this, I think, is compelling enough to say that infinite traversals are right for Haskell, just as they are wrong for Agda.

For those who wish to see executable code for all this, I've put up files here and here. The latter also contains some extra goodies at the end that I may talk about in further installments.

[1] Truth be told, I'm not exactly sure.

[2] It turns out, you can generalize Bazaar to have a correspondence for every choice of p:

  newtype Bizarre p a b t = Bizarre { bizarre :: forall f. p f => (a -> f b) -> f t }  

hence and forth above go through with the more general types. This can be seen here.
