# News aggregator

### Zélus : A Synchronous Language with ODEs


Timothy Bourke, Marc Pouzet

2013

Zélus is a new programming language for modeling systems that mix discrete logical time and continuous time behaviors. From a user's perspective, its main originality is to extend an existing Lustre-like synchronous language with Ordinary Differential Equations (ODEs). The extension is conservative: any synchronous program expressed as data-flow equations and hierarchical automata can be composed arbitrarily with ODEs in the same source code.

A dedicated type system and causality analysis ensure that all discrete changes are aligned with zero-crossing events so that no side effects or discontinuities occur during integration. Programs are statically scheduled and translated into sequential code that, by construction, runs in bounded time and space. Compilation is effected by source-to-source translation into a small synchronous subset which is processed by a standard synchronous compiler architecture. The resultant code is paired with an off-the-shelf numeric solver.

We show that it is possible to build a modeler for explicit hybrid systems à la Simulink/Stateflow on top of an existing synchronous language, using it both as a semantic basis and as a target for code generation.

Synchronous programming languages (à la Lucid Synchrone) are language designs for reactive systems with discrete time. Zélus extends them gracefully to hybrid discrete/continuous systems, to interact with the physical world, or simulate it -- while preserving their strong semantic qualities.

The paper is short (6 pages) and centered around examples rather than the theory -- I enjoyed it. Not being familiar with the domain, I was unsure what the "zero-crossings" mentioned in the introduction are, but there is a good explanation further down in the paper:

The standard way to detect events in a numeric solver is via zero-crossings where a solver monitors expressions for changes in sign and then, if they are detected, searches for a more precise instant of crossing.
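To make the scheme concrete, here is a hypothetical sketch in Haskell (illustrative only, not Zélus or any real solver's API) of monitor-then-refine: watch an expression for a sign change over a step, then bisect to localize the crossing instant.

```haskell
-- Detect a zero-crossing of f on [lo, hi]: if the sign changes,
-- bisect until the crossing instant is localized to within 1e-9.
crossing :: (Double -> Double) -> Double -> Double -> Maybe Double
crossing f lo hi
  | signum (f lo) == signum (f hi) = Nothing  -- no sign change observed
  | otherwise                      = Just (bisect lo hi)
  where
    bisect a b
      | b - a < 1e-9 = mid
      | signum (f mid) == signum (f a) = bisect mid b  -- crossing is in the right half
      | otherwise                      = bisect a mid  -- crossing is in the left half
      where mid = (a + b) / 2
```

A real solver interleaves this search with numeric integration, but the two phases -- cheap sign monitoring, then a precise root search -- are the same.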

The Zélus website has a 'publications' page with more advanced material, and an 'examples' page with case studies.

### amazonka - Comprehensive Amazon Web Services SDK

Despite there being a few pre-existing AWS libraries on Hackage, I've recently worked on open-sourcing an approach to auto-generating the full SDK in the same manner as Amazon's official Java, .NET, Ruby, and Python bindings.

I wrote a little overview about the motivation and reasoning here.

While they're not ready for prime time, I hope releasing them into the wild will provide me with constructive feedback and additional motivation to get them to a level that rivals the SDKs available for other languages, and make Haskell even more viable for those of us interested in writing infrastructure- and cloud-related tooling.

Until either Hackage builds the documentation, or I manually upload it, you can view the collective Haddock here.

The full suite of supported services is:

- amazonka-autoscaling
- amazonka-cloudformation
- amazonka-cloudfront
- amazonka-cloudsearch-domains
- amazonka-cloudsearch
- amazonka-cloudtrail
- amazonka-cloudwatch-logs
- amazonka-cloudwatch
- amazonka-codedeploy
- amazonka-cognito-identity
- amazonka-cognito-sync
- amazonka-config
- amazonka-datapipeline
- amazonka-directconnect
- amazonka-dynamodb
- amazonka-ec2
- amazonka-elasticache
- amazonka-elasticbeanstalk
- amazonka-elastictranscoder
- amazonka-elb
- amazonka-emr
- amazonka-iam
- amazonka-importexport
- amazonka-kinesis
- amazonka-kms
- amazonka-lambda
- amazonka-opsworks
- amazonka-rds
- amazonka-redshift
- amazonka-route53-domains
- amazonka-route53
- amazonka-s3
- amazonka-sdb
- amazonka-ses
- amazonka-sns
- amazonka-sqs
- amazonka-storagegateway
- amazonka-sts
- amazonka-support
- amazonka-swf


### The GHC Team: GHC Weekly News - 2014/11/21

Hi *,

To get things back on track, we have a short post following up the earlier one this week. It's been busy today so I'll keep it short:

- The STABLE freeze Austin announced two weeks ago is happening now, although at this point a few things we wanted to ship are just 98% ready. So it may wait until Monday.

- HEAD has been very busy the past two days as many things are now trying to merge as closely to the window as possible. Some notes follow.

- Gergo Erdi merged the implementation of pattern synonym type signatures: https://www.haskell.org/pipermail/ghc-devs/2014-November/007369.html

- HEAD now has support for using the 'deriving' clause for arbitrary classes (see #5462).

- HEAD now has a new flag -fwarn-missing-exported-sigs, which fixes #2526. See https://phabricator.haskell.org/D482

- HEAD now has 64bit iOS and SMP support for ARM64, thanks to Luke Iannini. See #7942.

- HEAD no longer ships haskell98, haskell2010, old-locale or old-time, per our decision to drop support for haskell98 and haskell2010. GHC 7.10 compatible releases of old-locale and old-time have been released on hackage. See https://www.haskell.org/pipermail/ghc-devs/2014-November/007357.html and https://www.haskell.org/pipermail/ghc-devs/2014-November/007383.html

- base now exports a new module for Natural numbers called Numeric.Natural following Herbert Valerio Riedel's recent proposal.

- HEAD should finally be compatible with LLVM 3.5, AKA #9142. The patch from Ben Gamari is at https://phabricator.haskell.org/D155

- Your author has been busy and delayed due to some bad travel experiences the past week, so the 7.8.4 RC1 hasn't landed this past week. Hopefully it will be out by the end of this week still.

Since the last update was only a few days ago, you'd think we haven't closed a lot of tickets, but we have! Thomas Miedema has been very very persistent about closing tickets and cleaning them up, which is greatly appreciated: #9810, #8324, #8310, #9396, #9626, #9776, #9807, #9698, #7942, #9703, #8584, #8968, #8174, #9812, #9209, #9220, #9151, #9201, #9318, #9109, #9126, #8406, #8102, #8093, #8085, #8068, #8094, #9590, #9368, #2526, #9569, #8149, #9815, #5462, #9647, #8568, #9293, #7484, #1476, #9824, #9628, #7942

### Danny Gratzer: Bidirectional Type Checkers for λ→ and λΠ

This week I learned that my clever trick for writing a type checker actually has a proper name: bidirectional type checking. In this post I’ll explain what exactly that is and we’ll use it to write a few fun type checkers.

First of all, let’s talk about one of the fundamental conflicts when designing a statically typed language: how much information must we demand from the user? Clearly we can go too far in either direction. Even people who are supposedly against type inference support at least *some* inference. I’m not aware of a language that requires you to write an annotation on every single subexpression, literals included.

Clearly inferring the types of some expressions is necessary. On the other hand, if we leave out *all* type annotations then it becomes a lot harder for a human reader to figure out what’s going on! I, at least, need to see signatures for top level functions or I become grumpy.

So inside a type checker we always have two sorts of processes:

- I know this must have the type T, I’ll check to make sure this is the case
- I have no idea what the type of this expression is, I’ll examine the expression to figure it out

In a bidirectional type checker, we acknowledge these two phases by explicitly separating the type checker into two functions

```haskell
inferType :: Expr -> Maybe Type
checkType :: Type -> Expr -> Maybe ()
```

Our type checker thus has two directions: one where we use the type to validate the expression (the type flows in), and one where we synthesize the type from the expression (the type flows out). That’s all that this is!

It turns out that a technique like this is surprisingly robust. It handles everything from subtyping to simple dependent types! To see how this actually plays out I think it’d be best to just dive in and do something with it.

#### Laying Out Our Language

Now when we’re building a bidirectional type checker we really want our AST to explicitly indicate inferrable vs checkable types. Clearly the parser might not care so much about this distinction, but prior to type checking it’s helpful to create this polarized tree.

For a simple language you can imagine

```haskell
data Ty = Bool
        | Arr Ty Ty
        deriving (Eq, Show)

data IExpr = Var Int
           | App IExpr CExpr
           | Annot CExpr Ty
           | If CExpr IExpr IExpr
           | ETrue
           | EFalse

data CExpr = Lam CExpr
           | CI IExpr
```

This is just simply typed lambda calculus with booleans. We’re using DeBruijn indices so we need not specify a variable for Lam. The IExpr type is for expressions we can *infer* types for, while CExpr is for types we can *check*.

Much of this is intuitive: we can always infer the types of variables, inferring the types of lambdas is hard, etc. Something worth noting is CI. For any inferrable expression, we can make it checkable by inferring a type and checking that it’s equal to what we expected. This is actually how Haskell works: GHC infers a type without bothering with your signature and then just checks that you were right in the first place!

Now that we’ve separated out our expressions, we can easily define our type checker.

```haskell
type Env = [Ty]

(?!) :: [a] -> Int -> Maybe a
xs ?! i = if i < length xs then Just (xs !! i) else Nothing

inferType :: Env -> IExpr -> Maybe Ty
inferType env (Var i) = env ?! i
inferType env (App l r) =
  case inferType env l of
    Just (Arr lTy rTy) -> checkType env r lTy >> return rTy
    _ -> Nothing
inferType env (Annot e an) = checkType env e an >> return an
inferType _ ETrue = return Bool
inferType _ EFalse = return Bool
inferType env (If i t e) = do
  checkType env i Bool
  lTy <- inferType env t
  rTy <- inferType env e
  guard (lTy == rTy)
  return lTy

checkType :: Env -> CExpr -> Ty -> Maybe ()
checkType env (Lam ce) (Arr l r) = checkType (l : env) ce r
checkType env (CI e) t = inferType env e >>= guard . (t ==)
checkType _ _ _ = Nothing
```

So our type checker doesn’t have many surprises in it. The environment is easy to maintain since DeBruijn indices are easily stored in a list.

Now that we’ve seen how a bidirectional type checker more or less works, let’s kick it up a notch.

#### Type Checking Dependent Types

Type checking a simple dependently typed language is actually not nearly as bad as you’d expect. The first thing to realize is that dependent types have only one syntactic category: types are just terms, so Bool, Star, and Pi all live in the expression language.

We maintain the distinction between inferrable and checkable values, resulting in

```haskell
data IExpr = Var Int
           | App IExpr CExpr
           | Annot CExpr CExpr
           | ETrue
           | EFalse
           | Bool
           | Star
           -- New stuff starts here
           | Pi CExpr CExpr
           | Const String
           | Free Int
           deriving (Eq, Show, Ord)

data CExpr = Lam CExpr
           | CI IExpr
           deriving (Eq, Show, Ord)
```

So you can see we’ve added 4 new expressions, all inferrable. Star is just the kind of types, as it is in Haskell. Pi is the dependent function type; it’s like Arr, except the return type can *depend* on the supplied value.

For example, you can imagine a type like

```haskell
replicate :: (n : Int) -> a -> List n a
```

which says something like “give me an integer n and a value, and I’ll give you back a list of length n”.

Interestingly, we’ve introduced constants. These are necessary simply because without them this language is unbelievably boring. Constants would be defined in the environment and they represent constant, irreducible terms. You should think of them almost like constructors in Haskell. For example, one can imagine 3 constants

```haskell
Nat  :: Star
Zero :: Nat
Succ :: (_ : Nat) -> Nat
```

which serve to define the natural numbers.

Last but not least, we’ve added “free variables” as an explicit constructor; the type checker will generate these when it goes under a binder.

Now, an important piece of a type checker is comparing types for equality. In STLC, equivalent types are syntactically equal, so that was solved with deriving Eq. Here we need a bit more subtlety: indeed, now we need to check arbitrary expressions for equality! This is hard. We’ll reduce things as much as possible and then just check syntactic equality. This means that if True then a else b would equal a as we’d hope, but \x -> if x then a else a wouldn’t.
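The reduce-then-compare idea can be seen in miniature on a toy expression language with just booleans, variables, and if (a hypothetical simplification, far smaller than the real Val type below):

```haskell
-- Toy illustration of "normalize, then compare syntactically".
data E = T | F | V String | If E E E deriving (Eq, Show)

-- Reduce an if whose condition is a literal; leave stuck ifs alone.
norm :: E -> E
norm (If c t e) = case norm c of
  T  -> norm t
  F  -> norm e
  c' -> If c' (norm t) (norm e)   -- stuck: condition is a variable
norm e = e

eqE :: E -> E -> Bool
eqE l r = norm l == norm r
```

Here eqE (If T (V "a") (V "b")) (V "a") holds, but If (V "x") (V "a") (V "a") stays stuck and is not judged equal to V "a" -- exactly the behavior described above.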

Now in order to facilitate this check we’ll define a type for fully reduced expressions. Since we’re only interested in checking equality on these terms we can toss the inferrable vs checkable division out the window.

```haskell
data VConst = CAp VConst Val
            | CVar String
            | CFree Int

data Val = VStar
         | VBool
         | VTrue
         | VFalse
         | VConst VConst
         | VArr Val Val
         | VPi Val (Val -> Val)
         | VLam (Val -> Val)
         | VGen Int
```

Now since we have constants, we can have chains of application that we can’t reduce; that’s what VConst is. Notice that this handles the case of just having a constant nicely.

The value dichotomy uses a nice trick from the “Simply Easy!” paper: we use HOAS to have functions that reduce *themselves* when applied. The downside of this is that we need VGen to peek inside the now-opaque VLam and VPi. The idea is that we’ll generate a unique Int and apply the functions to VGen i.

Now in order to conveniently generate these fresh integers I used monad-gen (it’s not self promotion if it’s useful :). Equality checking comes to

```haskell
-- *Whistle and fidget with hands*
instance Enum Val where
  toEnum = VGen
  fromEnum _ = error "You're a bad person."

eqTerm :: Val -> Val -> Bool
eqTerm l r = runGen $ go l r
  where go VStar VStar = return True
        go VBool VBool = return True
        go VTrue VTrue = return True
        go VFalse VFalse = return True
        go (VArr f a) (VArr f' a') = (&&) <$> go f f' <*> go a a'
        go (VLam f) (VLam g) = gen >>= \v -> go (f v) (g v)
        go (VPi a f) (VPi a' g) =
          (&&) <$> go a a' <*> (gen >>= \v -> go (f v) (g v))
        go (VGen i) (VGen j) = return (i == j)
        go (VConst c) (VConst c') = case (c, c') of
          (CVar v, CVar v') -> return (v == v')
          (CAp f a, CAp f' a') ->
            (&&) <$> go (VConst f) (VConst f') <*> go a a'
          _ -> return False
        go _ _ = return False
```

Basically we just recurse and return True or False at the leaves. Note that since VPi carries both an argument type and a body, we compare both, probing the bodies with a fresh VGen.

Now that we know how to check equality of values, we actually need to map terms into those values. This involves basically writing a little interpreter.

```haskell
inf :: [Val] -> IExpr -> Val
inf _ ETrue = VTrue
inf _ EFalse = VFalse
inf _ Bool = VBool
inf _ Star = VStar
inf _ (Free i) = VConst (CFree i)
inf _ (Const s) = VConst (CVar s)
inf env (Annot e _) = cnf env e
inf env (Var i) = env !! i
inf env (Pi l r) = VPi (cnf env l) (\v -> cnf (v : env) r)
inf env (App l r) =
  case inf env l of
    VLam f -> f (cnf env r)
    VConst c -> VConst . CAp c $ cnf env r
    _ -> error "Impossible: evaluated ill-typed expression"

cnf :: [Val] -> CExpr -> Val
cnf env (CI e) = inf env e
cnf env (Lam c) = VLam $ \v -> cnf (v : env) c
```

The interesting cases are for Lam, Pi, and App. For App we actually do reductions wherever we can; otherwise we know that we’ve just got a constant, so we slap that on the front. Lam and Pi are basically the same: they wrap the evaluation of the body in a function and evaluate it based on whatever is fed in. This is critical, otherwise App’s reductions get much more complicated.

We need one final thing. You may have noticed that all Val’s are closed, there’s no free DeBruijn variables. This means that when we go under a binder we can’t type check open terms since we’re representing types as values and the term we’re checking shares variables with its type.

This means that our type checker, when it goes under a binder, is going to substitute the now-free variable for a fresh Free i. Frankly, this kinda sucks. I poked about for a better solution, but this is what “Simply Easy!” does too.

To do these substitutions we have

```haskell
ibind :: Int -> IExpr -> IExpr -> IExpr
ibind i e (Var j) | i == j = e
ibind i e (App l r) = App (ibind i e l) (cbind i e r)
ibind i e (Annot l r) = Annot (cbind i e l) (cbind i e r)
ibind i e (Pi l r) = Pi (cbind i e l) (cbind i e r)
ibind _ _ e = e -- Non-recursive cases

cbind :: Int -> IExpr -> CExpr -> CExpr
cbind i e (Lam b) = Lam (cbind (i + 1) e b)
cbind i e (CI c) = CI (ibind i e c)
```

This was a bit more work than I anticipated, but now we’re ready to actually write the type checker!

Since we’re doing bidirectional type checking, we’re once again going to have two functions, inferType and checkType. Our environment is now a record:

```haskell
data Env = Env { locals    :: M.Map Int Val
               , constants :: M.Map String Val }
```

The inferring stage is mostly the same.

```haskell
inferType :: Env -> IExpr -> GenT Int Maybe Val
inferType _ (Var _) = lift Nothing -- The term is open
inferType (Env _ m) (Const s) = lift $ M.lookup s m
inferType (Env m _) (Free i) = lift $ M.lookup i m
inferType _ ETrue = return VBool
inferType _ EFalse = return VBool
inferType _ Bool = return VStar
inferType _ Star = return VStar
inferType env (Annot e ty) = do
  checkType env ty VStar
  let v = cnf [] ty
  checkType env e v >> return v
inferType env (App f a) = do
  ty <- inferType env f
  case ty of
    VPi aTy body -> do
      checkType env a aTy
      return (body $ cnf [] a)
    _ -> lift Nothing
inferType env (Pi ty body) = do
  checkType env ty VStar
  i <- gen
  let v = cnf [] ty
      env' = env{locals = M.insert i v (locals env)}
  checkType env' (cbind 0 (Free i) body) VStar
  return VStar
```

The biggest difference is that now we have to compute some types on the fly. For example, in Annot we check that we are in fact annotating with a type, then we reduce it to a value. This order is critical! Remember that cnf requires well typed terms.

Beyond this there are two interesting cases, there’s App which nicely illustrates what a pi type means and Pi which demonstrates how to deal with a binder.

For App we start in the same way, we grab the (function) type of the function. We can then check that the argument has the right type. To produce the output type however, we have to normalize the argument as far as we can and then feed it to body which computes the return type. Remember that if there’s some free variable in a then it’ll just be represented as VConst (CFree ...).

Pi checks that we’re quantifying over a type first off. From there it generates a fresh free variable and updates the environment before recursing. We use cbind to replace all occurrences of the now unbound variable for an explicit Free.

checkType is pretty trivial after this. Lam is almost identical to Pi and CI is just eqTerm.

```haskell
checkType :: Env -> CExpr -> Val -> GenT Int Maybe ()
checkType env (CI e) v = inferType env e >>= guard . eqTerm v
checkType env (Lam ce) (VPi argTy body) = do
  i <- gen
  let ce' = cbind 0 (Free i) ce
      env' = env{locals = M.insert i argTy (locals env)}
  checkType env' ce' (body $ VConst (CFree i))
checkType _ _ _ = lift Nothing
```

And that’s it!

#### Wrap Up

So let’s circle back to where we started: bidirectional type checking! Hopefully we’ve seen how structuring a type checker around these two core functions yields something quite pleasant.

What makes this really interesting though is how well it scales. You can use this style of type checker to handle subtyping, [dependent] pattern matching, and heaps of other interesting features.

At 400 lines though, I think I’ll stop here :)

### Mark Jason Dominus: Within this instrument, resides the Universe

When opportunity permits, I have been trying to teach my ten-year-old daughter rudiments of algebra and group theory. Last night I posed this problem:

Mary and Sue are sisters. Today, Mary is three times as old as Sue; in two years, she will be twice as old as Sue. How old are they now?

I have tried to teach Ms. 10 that these problems have several phases. In the first phase you translate the problem into algebra, and then in the second phase you manipulate the symbols, almost mechanically, until the answer pops out as if by magic.

There is a third phase, which is pedagogically and practically essential. This is to check that the solution is correct by translating the results back to the context of the original problem. It's surprising how often teachers neglect this step; it is as if a magician who had made a rabbit vanish from behind a screen then forgot to take away the screen to show the audience that the rabbit had vanished.

Ms. 10 set up the equations, not as I would have done, but using four unknowns, to represent the two ages today and the two ages in the future:

$$\begin{align} MT & = 3ST \\ MY & = 2SY \\ \end{align} $$

($MT$ here is the name of a single variable, not a product of $M$ and $T$; the others should be understood similarly.)

“Good so far,” I said, “but you have four unknowns and only two equations. You need to find two more relationships between the unknowns.” She thought a bit and then wrote down the other two relations:

$$\begin{align} MY & = MT + 2 \\ SY & = ST + 2 \end{align} $$

I would have written two equations in two unknowns:

$$\begin{align} M_T & = 3S_T\\ M_T+2 & = 2(S_T + 2) \end{align} $$

but one of the best things about mathematics is that there are many
ways to solve each problem, and no method is privileged above any
other except perhaps for reasons of practicality. Ms. 10's translation
is different from what I would have done, and it requires more work in
phase 2, but it is correct, and I am not going to tell her to do it my
way. The method works *both* ways; this is one of its best features.
If the problem can be solved by thinking of it as a problem in two
unknowns, then it can *also* be solved by thinking of it as a problem
in four or in eleven unknowns. You need to find more relationships,
but they must exist and they can be found.

Ms. 10 may eventually want to learn a technically easier way to do it, but to teach that right now would be what programmers call a premature optimization. If her formulation of the problem requires more symbol manipulation than what I would have done, that is all right; she needs practice manipulating the symbols anyway.
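For the record, phase 2 on the two-equation formulation takes only a couple of steps:

$$\begin{align} M_T & = 3S_T \\ 3S_T + 2 & = 2(S_T + 2) \\ S_T & = 2, \qquad M_T = 6 \end{align} $$

so Mary is six and Sue is two, as below.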

She went ahead with the manipulations, reducing the system of four
equations to three, then two and then one, solving the one equation to
find the value of the single remaining unknown, and then substituting
that value back to find the other unknowns. One nice thing about these
simple problems is that when the solution is correct you can see it at
a glance: Mary is six years old and Sue is two, and in two years they
will be eight and four. Ms. 10 loves picking values for the unknowns
ahead of time, writing down a random set of relations among those
values, and then working the method and seeing the correct answer pop
out. I remember being endlessly delighted by almost the same thing
when I was a little older than her. In *The Dying Earth* Jack Vance
writes of a wizard who travels to an alternate universe to learn from
the master “the secret of renewed youth, many spells of the ancients,
and a strange abstract lore that Pandelume termed ‘Mathematics.’”

“I find herein a wonderful beauty,” he told Pandelume. “This is no science, this is art, where equations fall away to elements like resolving chords, and where always prevails a symmetry either explicit or multiplex, but always of a crystalline serenity.”

After Ms. 10 had solved this problem, I asked if she was game for something a little weird, and she said she was, so I asked her:

Mary and Sue are sisters. Today, Mary is three times as old as Sue; in two years, they will be the same age. How old are they now?

“WHAAAAAT?” she said. She has a good number sense, and immediately
saw that this was a strange set of conditions. (If they aren't the
same age now, how can they be the same age in two years?) She asked
me what would happen. I said (truthfully) that I wasn't sure, and
suggested she work through it to find out. So she set up
the equations as before and worked out the solution, which is obvious
*once you see it*: Both girls are zero years old today, and zero is
three times as old as zero. Ms. 10 was thrilled and delighted, and
shared her discovery with her mother and her aunt.
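In the two-unknown formulation, the collapse is immediate:

$$\begin{align} M_T & = 3S_T \\ M_T + 2 & = S_T + 2 \end{align} $$

The second equation gives $M_T = S_T$, so $3S_T = S_T$ and both ages must be zero.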

There are some powerful lessons here. One is that the method works even when the conditions seem to make no sense; often the results pop out just the same, and can sometimes make sense of problems that seem ill-posed or impossible. Once you have set up the equations, you can just push the symbols around and the answer will emerge, like a familiar building approached through a fog.

But another lesson, only hinted at so far, is that mathematics has its own way of understanding things, and this is not always the way that humans understand them. Goethe famously said that whatever you say to mathematicians, they immediately translate it into their own language and then it is something different; I think this is exactly what he meant.

In this case it is not too much of a stretch to agree that Mary is three times as old as Sue when they are both zero years old. But in the future I plan to give Ms. 10 a problem that requires Mary and Sue to have negative ages—say that Mary is twice as old as Sue today, but in three years Sue will be twice as old—to demonstrate that the answer that pops out may not be a reasonable one, or that the original translation into mathematics can lose essential features of the original problem. The solution that says that $M = -2$ and $S = -1$ is mathematically irreproachable, and if the original problem had been posed as “Find two numbers such that…” it would be perfectly correct. But translated back to the original context of a problem that asks about the ages of two sisters, the solution is unacceptable. This is the point of the joke about the spherical cow.

### i.imgur.com

### How should I represent my Haskell knowledge to potential employers? : haskell

### Help Explain code Haskell

This is a piece of code I got from a Haskell forum online that shuffles the tuples (a,b) of a list, but the function is too complicated and I can't understand anything written there. I would be grateful if it could be explained to me, and for suggestions on how I can implement the function to be more understandable; I am new to Haskell. Thanks

```haskell
import Control.Monad
import Control.Monad.ST
import Data.Array.ST
import Data.STRef
import System.Random

shuffle :: [a] -> IO [a]
shuffle [] = return []
shuffle ds = getStdRandom (shuffle' ds)

shuffle' :: [a] -> StdGen -> ([a], StdGen)
shuffle' xs gen = runST (do
    g <- newSTRef gen
    -- draw a random index in the given range, threading the
    -- generator through the STRef
    let randomRST lohi = do
          (a, s') <- liftM (randomR lohi) (readSTRef g)
          writeSTRef g s'
          return a
    ar <- newArray n xs
    xs' <- forM [1..n] $ \i -> do
      j <- randomRST (i, n)   -- Fisher-Yates: swap position i with a later j
      vi <- readArray ar i
      vj <- readArray ar j
      writeArray ar j vi
      return vj
    gen' <- readSTRef g
    return (xs', gen'))
  where
    n = length xs
    -- local helper shadowing Data.Array.ST's newArray
    newArray :: Int -> [a] -> ST s (STArray s Int a)
    newArray k ys = newListArray (1, k) ys
```

submitted by enokleo
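Since the question asks for something easier to follow: here is a deliberately simple alternative sketch that trades the ST-array efficiency for clarity. It threads an explicit list of random draws instead of a generator (a pedagogical illustration; the names pick and shuffleWith are hypothetical, not a library API):

```haskell
-- Remove the element at index i, returning it and the remainder.
pick :: Int -> [a] -> (a, [a])
pick i xs = let (front, x : back) = splitAt i xs
            in (x, front ++ back)

-- Repeatedly pick a random position, move that element to the front
-- of the result, and recurse on the rest. O(n^2), but easy to read.
-- 'rs' supplies one random Int per step.
shuffleWith :: [Int] -> [a] -> [a]
shuffleWith _ [] = []
shuffleWith [] xs = xs   -- ran out of randomness: keep remaining order
shuffleWith (r : rs) xs =
  let (x, rest) = pick (r `mod` length xs) xs
  in  x : shuffleWith rs rest
```

In real code the draws would come from System.Random, e.g. shuffleWith . randoms <$> newStdGen; keeping the randomness as a plain list makes the shuffle itself pure and easy to test.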

### Lucid: templating DSL for HTML


### Nix: The Cabal Purgatory (HOWTO)

### Philip Wadler: Bright Club

Tuesday 25 November at The Stand Comedy Club, I will be part of the line up at Bright Club, speaking on the subject of 'Turingery'. Bright Club is stand-up by academics---we are trained professionals; don't try this at home! Doors open 7:30pm, show starts 8:30pm. The Stand is at 5 York Place, Edinburgh, EH1 3EB. Tickets £5 at the door or online.

### What are some well regarded projects in the Haskell community?

When learning a new language or system I always find that it's best to do far more reading than writing in order to get a feel for the right approach and conventions. I was wondering if there is an opinion about what are some very well written projects that could be studied.

tl;dr What are some examples of beautiful Haskell?

submitted by wreel

### Why is Aeson's withObject in continuation passing style?

The type of Aeson's withObject uses continuation passing style

```haskell
withObject :: String -> (Object -> Parser a) -> Value -> Parser a
```

And similarly for withText, withArray, and so on. It seems to me that String -> Value -> Parser Object (available as \s -> withObject s return) would be more useful.

Why is withObject given this type? Have I missed something, and is the CPS version easier to use for some reason?
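One way the CPS shape can pay off, sketched here with a toy Value type and Parser synonym (hypothetical stand-ins, not Aeson's real definitions): the type-mismatch error and its context label are produced in exactly one place, while the caller's parsing stays a single pipeline.

```haskell
-- Toy stand-ins for Aeson's types.
data Value = Object [(String, Value)] | Str String | Num Double

type Parser a = Either String a

-- CPS style: the continuation runs only on the Object payload,
-- and the mismatch error is produced here, once.
withObject :: String -> ([(String, Value)] -> Parser a) -> Value -> Parser a
withObject _    k (Object o) = k o
withObject name _ _          = Left ("expected Object for " ++ name)

-- The "direct" alternative is recoverable from it for free:
asObject :: String -> Value -> Parser [(String, Value)]
asObject name = withObject name Right

-- With CPS a whole parser reads as one pipeline:
parseName :: Value -> Parser String
parseName = withObject "Person" $ \o ->
  case lookup "name" o of
    Just (Str s) -> Right s
    _            -> Left "no name field"
```

So the CPS version is the more general of the two: \s -> withObject s return gives you the direct one back, while going the other way requires re-plumbing the error handling at every use site.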

submitted by tomejaguar