News aggregator

Announcement: Bang, a drum DSL for Haskell

haskell-cafe - Mon, 06/09/2014 - 10:06pm
Hi haskell-cafe, I've been working on "Bang" on and off for about a month or two now, and finally thought it was "good enough" to open-source and show to the public! I've been having fun with it, and hopefully some of you will, too. It currently supports basically all of the primitive transformations that I could think of doing on a drum sequence, such as reversing, mirroring, and changing duration and tempo. There are a bunch of operators built in that make it easy to write things like polyrhythms, and infinite sequences of notes are possible as well. The source code and basic tutorial are up on Github: https://github.com/5outh/Bang And the package + more detailed documentation is up on Hackage: http://hackage.haskell.org/package/Bang Let me know if you have any questions, comments, and/or suggestions. I'd love to hear them. Thanks! - 5outh

_______________________________________________
Haskell-Cafe mailing list
Haskell-Cafe< at >haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe
Categories: Offsite Discussion

Problems using GHC 7.8.2 with options -staticlib and -threaded on OSX

haskell-cafe - Mon, 06/09/2014 - 7:36pm
Hi, I'm having trouble building my Haskell library with GHC 7.8.2 and Cabal 1.20.0.2 on OSX 10.9.3 with both -staticlib and -threaded enabled. The library I'm building exports several functions using FFI to be called from ObjectiveC in an Xcode project. This works fine with just -staticlib enabled, but now I realize I need to call into my Haskell library from multiple OS-threads simultaneously. But when I add -threaded to ghc-options I get an error when linking:

error: libtool: can't locate file for: -lpthread
error: libtool: file: -lpthread is not an object file (not allowed in a library)

The linker command that fails looks like this (... replaces a bunch of -l and -L options):

libtool -static -o liba.a dist/build/HsCocoa.o ... -lCffi -lpthread

In /usr/lib I have a libpthread.dylib that links to libSystem.dylib, but no libpthread.a. I was under the impression that libpthread was included with the system install. Is this supposed to work or am I misunderstanding something basic? :) Any hel
Categories: Offsite Discussion

Oliver Charles: A Category for Correct-By-Construction Serializers and Deserializers

Planet Haskell - Mon, 06/09/2014 - 6:00pm

Frequently in computer programming we need to work with data in different representations, and we need to work with the data on both sides of said representation. For example, we might have some Haskell data types in memory, which we later serialize to disk. When the user restarts our application, we need to reload this data back into Haskell data types, to allow them to resume work.

Haskell provides us with machinery for doing this serialization via the binary library, which gives us the Binary type class with two methods:

class Binary t where
  put :: t -> Put
  get :: Get t

get deserializes a sequence of bytes into Haskell values, while put operates in the reverse direction - transforming Haskell values to a sequence of bytes.

Usually, we want the implementations of these methods to be mutual inverses - the computation in get should restore data serialized with put, and vice versa. Unfortunately, nothing in the type system nor the structure of these methods gives us this guarantee - it’s all down to the programmer. I don’t trust myself, so I set out to investigate a more reliable approach.
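To make the failure mode concrete, here is a minimal self-contained sketch (the Codec type and intCodec are our own illustration, not part of the binary library): a pair of functions that typechecks perfectly well even though decoding is not the inverse of encoding.

```haskell
-- Illustration only: Codec stands in for binary's put/get pair.
data Codec a = Codec { enc :: a -> String, dec :: String -> a }

-- Typechecks fine, but dec is NOT the inverse of enc -- nothing stops us.
intCodec :: Codec Int
intCodec = Codec show (negate . read)

main :: IO ()
main = print (dec intCodec (enc intCodec 42))  -- prints -42: round trip broken
```

The type system is perfectly happy with intCodec; only testing (or construction) can catch the broken round trip.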

Ideally, we would like to build up serializers and deserializers from smaller pieces, such that each piece carries its own inverse. For example, we could pair up serialization for a String with its own inverse:

type Serializer a = (Get a, a -> Put)

string :: Serializer String
string = (get, put)

As long as String has a Binary instance where get and put are correctly specified, we know that string is going to function as we expect in both directions.

We’re on to something here, but currently this only works for single Strings. What if I have a pair of Strings that I want to serialize? From what we’ve seen so far, there’s no way to combine our bidirectional serializers. Earlier I mentioned that we would like to work with small pieces and compose them - lets see if we can solve this problem for just serialization first.

Serialization consumes data. If we have data to serialize, the application of one serializer should consume some of this data, leaving us with slightly less data that we have to continue serializing. By repeated application of serializers, we will eventually have considered the entire structure and will have nothing left to do. This consumption of a structure bit-by-bit suggests that serialization will be a type changing operation, as a record with a field removed is certainly not the same type as its larger record. So let’s try and incorporate that:

data Serializing a b = Serializing (a -> PutM b)

pString2 :: Serializing (String, String) String
pString2 = Serializing $ \(a, b) -> do put a; return b

pString1 :: Serializing String ()
pString1 = Serializing $ \a -> put a

Composition of Serializing should now be clear - we just need to compose them one after another:

(.) :: Serializing b c -> Serializing a b -> Serializing a c
(Serializing g) . (Serializing f) = Serializing (f >=> g)

putTwoStrings :: Serializing (String, String) ()
putTwoStrings = pString1 . pString2

We’ve built a serializer that can serialize a tuple of two strings, and we did so piece-by-piece. While what we have so far is not entirely satisfactory, it seems like we’re heading in the right direction. We’ll come back to this later, but first let’s see if the same ideas translate to deserializers.

Our serializer consumed data, so deserialization is naturally the opposite of consuming data - that is, deserialization produces data. When we deserialize we’ll start with nothing, and we’ll deserialize a few bytes into part of our structure one step at a time. Each step of deserialization should take the smaller structure and expand it into a larger structure - eventually leading us to the desired structure. Again, this will be a type changing operation, and we can encode all of this just as we did with Serializing:

data Deserializing a b = Deserializing (a -> Get b)

getString1 :: Deserializing () String
getString1 = Deserializing $ \() -> get

getString2 :: Deserializing String (String, String)
getString2 = Deserializing $ \s -> do { s' <- get; return (s, s') }

(.) :: Deserializing b c -> Deserializing a b -> Deserializing a c
(Deserializing g) . (Deserializing f) = Deserializing (f >=> g)

getTwoStrings :: Deserializing () (String, String)
getTwoStrings = getString2 . getString1

As you can see, it’s pretty much exactly the same idea. The only difference is that now each of our deserializers return a slightly bigger structure, whereas our serializers would move our structure to something smaller.

Just to prove that what we have so far works, we can try this in GHCI:

> let bytes = case putTwoStrings of Serializing p -> runPut (p ("Hello", "World!"))
> case getTwoStrings of Deserializing g -> runGet (g ()) (LBS.pack bytes)
("Hello","World!")

To carry on working towards our goal, we need to pair the Serializer up with its Deserializer. Unfortunately, what we have so far won’t work:

data Serializer a b = Serializer (a -> Get b) (a -> PutM b)

Notice here how the types both move from a to b - that’s certainly not going to work, as the shape of the data is changing in opposite directions! In Get, a is “smaller” than b, whereas for Put a is “larger” than b. In order to work around this, we just need to swap the order of types in one of these functions - I’ve swapped the order for Put:

data Serializer a b = Serializer (a -> Get b) (b -> PutM a)

This makes sense - if put will shrink our structure, then get can move from this smaller structure back to the original structure. We can express our string1 and string2 serializers now:

string2 :: Serializer (String, String) String
string2 = Serializer
  (\(a, b) -> do put a; return b)
  (\s -> do { s' <- get; return (s', s) })

string1 :: Serializer String ()
string1 = Serializer put (\() -> get)

We were able to compose things before, and we can certainly compose things here…

(.) :: Serializer b c -> Serializer a b -> Serializer a c
(Serializer g g') . (Serializer f f') = Serializer (f >=> g) (g' >=> f')

twoStrings :: Serializer (String, String) ()
twoStrings = string1 . string2

However, this has a rather significant problem - can you spot it? Take time to think about this and see if you can work out what’s going wrong.

Did you find it? If we fire up GHCI and have a play with our twoStrings serializer, let’s see what we get…

> let bytes = case twoStrings of Serializer _ p -> runPut (p ("A", "B"))
> case twoStrings of Serializer g _ -> runGet (g ()) bytes
("B","A")

Oh no - that’s not what we wanted at all! The problem is that the order of effects are being reversed. When we put data, we put the first tuple element first, and then the second. However, we’re reading data in the opposite order - expecting the second element to be first in the stream, which is clearly not correct. For (String, String) the deserializer works but the tuple is in the wrong order - for other data types this would lead to a runtime exception.

With the current definition of Serializer, there’s simply no way around this - the types won’t let us run effects in different orders. The reason for this is that we can only access the underlying Get computation by having the smaller structure around first. However, we can be sneaky and change things around just enough to let us run Get in a different order. Now the Get computation is no longer a function, but is a computation that returns a function:

data Serializer a b = Serializer (Get (a -> b)) (b -> PutM a)

With this change we do have access to any Get computation we like, and we are free to run them in a different order:

(.) :: Serializer b c -> Serializer a b -> Serializer a c
(Serializer g g') . (Serializer f f') = Serializer
  (do buildF <- g
      buildG <- f
      return (buildF . buildG))
  (g' >=> f')

Now it’s clear that both our Put and our Get computations are sequenced in the same order - nice! It turns out that our composition comes with a sane definition of identity too, which means our Serializer can be used with Category:

instance Category Serializer where
  (Serializer g g') . (Serializer f f') = Serializer
    (do buildF <- g
        buildG <- f
        return (buildF . buildG))
    (g' >=> f')

  id = Serializer (return id) return

Serializing Through Heterogeneous Lists

We have finally reached a nice core to our solution, but the surface API isn’t really working out. We had to write different Serializers for both (String, String) and String, which is certainly not desirable. Ultimately, we would like to be able to work with just one Serializer for String, and compose them however we please.

Unfortunately, working with tuples is causing us the real pain here. The reason for this is that tuples don’t really have any structure that would allow us to work with them in any sort of principled manner. Instead, what we can do is use a heterogeneous list, which we can recurse on just like an ordinary linked list. So, we introduce a type for heterogeneous lists:

data List :: [*] -> * where
  Nil :: List '[]
  Cons :: a -> List as -> List (a ': as)

And now we can use the new poly-kinded Category to upgrade Serializer to work with these lists:

data Serializer :: [*] -> [*] -> * where
  Serializer
    :: (Get (List a -> List b))
    -> (List b -> PutM (List a))
    -> Serializer a b

instance Category Serializer where
  (Serializer g g') . (Serializer f f') = Serializer
    (do mkB <- g
        mkA <- f
        return (mkB . mkA))
    (g' >=> f')

  id = Serializer (return id) return

This was quite a detour, and has this really helped us? Indeed it has, as we can now write a much more general Serializer for String:

string :: Serializer as (String ': as)
string = Serializer
  (do a <- get; return (Cons a))
  (\(Cons a as) -> do put a; return as)

The type of string now indicates that this Serializer can serialize anything that starts with a String, and likewise when deserializing it expects a String to be the first element. This composes exactly as we’d expect:

twoStrings :: Serializer as (String ': String ': as)
twoStrings = string . string

All we need to do is unwrap the List resulting from a Get or wrap up data in a List for Put and we’re good to go:

> let bytes = case twoStrings of Serializer _ p -> runPut (void $ p ("A" `Cons` ("B" `Cons` Nil)))
> case twoStrings of Serializer g _ -> runGet (($ Nil) <$> g) bytes
Cons "A" (Cons "B" Nil)

Destructuring Data Via Prisms

The API we’ve built works really well if we already have data decomposed into a List, but we don’t normally have this luxury. This means we need a way to convert from a data type to it’s constituent parts, and this is exactly the functionality that Prisms in the lens library provide us with. While Prisms can be a little hard to get your head around, it can be illuminating to experiment with them in GHCI:

> review _Cons (10, [])
[10]
> review _Cons (10, [20])
[10,20]
> review _Cons (1, [2..5])
[1,2,3,4,5]
> preview _Cons [10, 20, 30]
Just (10,[20,30])
> preview _Cons [10]
Just (10,[])
> preview _Cons []
Nothing

Prisms have two main operations: review and preview. review lets us construct some data out of its parts - above we use _Cons with (10, [20]), which is the same as (10 : [20]) - resulting in the list [10, 20]. preview lets us go the other way, which is the same idea as pattern matching on a constructor. If we preview _Cons on non-empty lists, then the pattern matching succeeds and the list is separated into its head and tail. However, we can’t pattern match with _Cons on an empty list, so preview returns Nothing - which corresponds to a pattern match failure.

Armed with Prism, we’re almost entirely ready to go! The only problem is that Prism normally works with tuples, which we’ve already seen aren’t a great fit for our needs. It’s entirely mechanical to convert between tuples and List, so we simply move between them with a type class. Combining this all together, we have the following:

class ListIso a b | a -> b, b -> a where
  _HList :: Iso' b (List a)

usePrism :: ListIso a b => Prism' d b -> Serializer a '[d]
usePrism p = Serializer get put
  where
    put (Cons d Nil) = do
      Just tuple <- return (preview p d)
      return (tuple ^. _HList)

    get = return $ \hlist -> Cons (review p (hlist ^. from _HList)) Nil

Now we are free to use this on our data types, just as we’d expect:

instance ListIso '[a, b] (a, b) where
  _HList = iso
    (\(a, b) -> Cons a (Cons b Nil))
    (\(Cons a (Cons b Nil)) -> (a, b))

data PairOfStrings = PairOfStrings String String
makePrisms ''PairOfStrings

pairOfStrings :: Serializer '[] '[PairOfStrings]
pairOfStrings = usePrism _PairOfStrings . string . string

Choices

If you look closely at our definition of usePrism you might have seen something suspicious. Here’s the relevant code:

usePrism = ...
  where
    put (Cons d Nil) = do
      Just tuple <- return (Lens.preview p d)
      return (tuple ^. _HList)

In our put definition, we are assuming that Lens.preview is always returning a Just value. However, we saw earlier that this isn’t necessarily the case - the Prism corresponds to one of potentially many constructors. If we try and use usePrism with a prism that doesn’t match our expectations, then things go horribly wrong:

data Strings
  = PairOfStrings String String
  | ThreeStrings String String String

makePrisms ''Strings

> case pairOfStrings of Serializer _ p -> runPut (void $ p (ThreeStrings "Uh" "Oh" "!" `Cons` Nil))
"*** Exception: Pattern match failure in do expression at ...

What we need to do is to allow for choice - if we have multiple possible prisms, then we need to consider each one. This corresponds to exhaustive pattern matching in case analysis.

It turns out choice is relatively straightforward to add in. Get is already an instance of MonadPlus, so we get choice there for free. Put however is a little more involved, as it doesn’t have an instance of MonadPlus. The best solution I’ve found thus far is to wrap up our Put computation inside Maybe, but this isn’t entirely satisfactory. Unfortunately binary doesn’t quite export enough to have a less expensive solution (PairS doesn’t have its constructor exported).

Monoid is a sensible type class to use for alternatives - choice is associative, and there is a sane identity (an always-failing Serializer). Thus the final definition of Serializer and its associated type classes are:

data Serializer :: [*] -> [*] -> * where
  Serializer
    :: (Get (List a -> List b))
    -> (List b -> Maybe (PutM (List a)))
    -> Serializer a b

instance Category Serializer where
  (Serializer g g') . (Serializer f f') = Serializer
    (g >>= \b -> f >>= \a -> return (b . a))
    (\a -> do
       putF <- g' a
       let (b, lbs) = runPutM putF
       putG <- f' b
       return (putLazyByteString lbs >> putG))

  id = Serializer (return id) (return . return)

instance Monoid (Serializer a b) where
  mempty = Serializer mzero (const mzero)

  (Serializer g p) `mappend` (Serializer g' p') =
    Serializer (g `mplus` g') (\i -> p i `mplus` p' i)

Armed with this final definition of Serializer, we’re almost ready to provide a complete definition of serializing our Strings type. We need to provide a little extra information however, which allows us to disambiguate constructors. This is because when deserializing, if I read two strings I don’t necessarily know which constructor to choose (yes, if we considered EOF this could be done; I’m going for brevity). You can find the definition of disambiguate in the full code listing.
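The post defers disambiguate to the full listing; purely as a hedged sketch of what such a combinator could look like against the final Serializer type above (the Word8 tag, getWord8/putWord8, and the guard-based failure are our assumptions, not the author's actual code):

```haskell
-- Hypothetical sketch: write a tag byte when serializing; when
-- deserializing, fail (Get is MonadPlus) unless the same tag comes back,
-- so mconcat can fall through to the next alternative.
disambiguate :: Word8 -> Serializer as as
disambiguate tag = Serializer
  (do tag' <- getWord8
      guard (tag' == tag)   -- mismatch aborts this branch of the choice
      return id)
  (\as -> Just (putWord8 tag >> return as))
```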

Thus the final user-facing code is just:

strings :: Serializer '[] '[Strings]
strings = mconcat
  [ usePrism _PairOfStrings . disambiguate 1 . string . string
  , usePrism _ThreeStrings . disambiguate 2 . string . string . string
  ]

And just to prove it all works…

> let Just putter = case strings of Serializer _ p -> p (ThreeStrings "A" "B" "C" `Cons` Nil)
      bytes = runPut (void putter)
> case strings of Serializer g _ -> runGet (($ Nil) <$> g) bytes
Cons (ThreeStrings "A" "B" "C") Nil

> let Just putter = case strings of Serializer _ p -> p (PairOfStrings "Hello" "World!" `Cons` Nil)
      bytes = runPut (void putter)
> case strings of Serializer g _ -> runGet (($ Nil) <$> g) bytes
Cons (PairOfStrings "Hello" "World!") Nil

Final Thoughts

We’ve seen that it’s possible to build correct-by-construction serializers and deserializers, and we got there by breaking down our problem into small parts and finding a good way to combine the parts together. Hopefully, I’ve illustrated some of the problems that arise from a naive solution, and how these problems guided us towards an implementation that is both more correct and more flexible.

Serializer is still not perfect however. With the idea of choice above, there’s no way to indicate exhaustive pattern matching. For example, in strings we are considering both constructors, yet put strings returns Maybe Put. This isn’t particularly satisfactory, because it should now always be possible to serialize this data type! On a similar note, it becomes harder to get compile time checks about exhaustive pattern matching, because we’re no longer doing case analysis explicitly. This is an interesting problem to me, and one that I would still like to solve.

There is also a bit more work that we might want to consider doing with Get and Put, which is to use a different concept of choice. There are other options than using Maybe - for example we could use lists which would inform us of all possible serializations for a data type, which might provide better debugging information than simply using the first one that matches.
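The asymmetry between these two notions of choice can be seen in base's own Alternative instances (a generic, self-contained illustration, unrelated to the post's Serializer type):

```haskell
import Control.Applicative ((<|>))

main :: IO ()
main = do
  -- Maybe-style choice commits to the first success:
  print (Just "pair" <|> Just "triple")        -- Just "pair"
  -- List-style choice keeps every alternative around:
  print (["pair"] <|> ["triple"] :: [String])  -- ["pair","triple"]
```

A list-based Serializer would correspond to the second behaviour: every constructor that matches contributes a possible serialization.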

I’d like to conclude by mentioning that the ideas here aren’t particularly new. In 2010 Rendel and Ostermann presented a solution using a category of partial isomorphisms and product functors from this category to Hask, which lead to various libraries on Hackage such as invertible-syntax, and boomerang. At ZuriHack, Martijn van Steenbergen presented the latest version of JsonGrammar, which uses a free category to describe operations on a JSON AST, and also illustrated how one can use prisms to provide a modern vocabulary for partial isomorphisms. json-grammar uses a stack-prism data type, which achieves the same goal as using heterogeneous lists, but does require another Template Haskell call (makeStackPrisms).

While I’m happy with the solution so far, I haven’t finished playing around with this. It’s unclear to me how this plays with recursive data types (for example, does this work for lists? What about trees?), and I need to learn more about stack-prism to see if using heterogenous lists impedes composition (as Sjoerd Visscher has warned me!). Hopefully I’ll be able to start using what I have so far in production, iron out the last problems, and release this to Hackage in the near future.

Thanks for reading! A full code listing can be found on Github.

Categories: Offsite Blogs

New Functional Programming Job Opportunities

haskell-cafe - Mon, 06/09/2014 - 5:00pm
Here are some functional programming job opportunities that were posted recently: Sr. Software Engineer at Clutch Analytics http://functionaljobs.com/jobs/8720-sr-software-engineer-at-clutch-analytics Cheers, Sean Murphy FunctionalJobs.com
Categories: Offsite Discussion

Is there a functional concept similar to Union-Find?

Haskell on Reddit - Mon, 06/09/2014 - 3:43pm

This is union-find.

Obviously, one could do a naive translation, such as simply using a monad for effects or doing incremental updates, but it seems to me that in functional programming, it would make sense to make it a monoid and stuff like that.

submitted by tailcalled
[link] [7 comments]
Categories: Incoming News

Mike Izbicki: lens you an applicative for great haskell?

Planet Haskell - Mon, 06/09/2014 - 9:30am

Welcome back for round 2 of adventures in typeparams. In our last episode, we lensified the Functor type class. In this episode, we’re going to lensify the Applicative type class and plunge head first into type lens parsing.

Okay… down to business.

> {-# LANGUAGE TemplateHaskell #-}
> {-# LANGUAGE ScopedTypeVariables #-}
> {-# LANGUAGE KindSignatures #-}
> {-# LANGUAGE TypeFamilies #-}
> {-# LANGUAGE MultiParamTypeClasses #-}
> {-# LANGUAGE UndecidableInstances #-}
> {-# LANGUAGE FlexibleInstances #-}
> {-# LANGUAGE RankNTypes #-}
> {-# LANGUAGE OverloadedStrings #-}

We’ve got a few more imports today. Our work from last time has been uploaded to hackage and is in the Data.Params.Functor module. For parsing, we’ll be torturing the attoparsec library.

> import Control.Category
> import Prelude hiding ( (.), id, Functor(..), Applicative(..) )
> import qualified Prelude as P
> import Data.Params
> import Data.Params.Functor
> import qualified Control.Applicative as Ap
> import qualified Data.Attoparsec.Text as A
> import Data.Attoparsec.Text (parse,Parser,Result)
> import Data.Monoid
> import Data.Text (Text,pack)

Functor review

As a quick warm up, let’s talk about the infix fmap operator <$>. The fmap function has type:

fmap :: Functor lens tb
  => TypeLens Base lens
  -> (a -> GetParam lens tb)
  -> SetParam lens a tb
  -> tb

All this <$> operator does is move fmap’s lens parameter to the end of the parameter list. This restructuring will help us chain our operators together and will be a common theme throughout the post. The operator is defined as:

> infixl 4 <$>
> (f <$> t) lens = fmap lens f t

We can use the operator like:

ghci> length <$> (Left $ Right "test") $ _a._b
Left (Right 4)

It will also be useful to have an operator just for specifying the type lens. Since a lens specifies the location “at” which we are operating, we call our new operator @@. It is defined as:

> infixr 0 @@
> (@@) :: (TypeLens p q -> b) -> TypeLens p q -> b
> (@@) = id

And used like:

ghci> length <$> (Left $ Right "test") @@ _a._b
Left (Right 4)

The fourth lens law states that we must provide both prefix and infix versions of every combinator. Therefore we also introduce the function:

> at :: TypeLens q p -> (TypeLens q p -> t) -> t
> at lens f = f lens

ghci> at (_a._b) $ length <$> (Left $ Right "test")
Left (Right 4)

Lens you an Applicative

We’re ready to see our new Applicative class:

> class Functor lens tb => Applicative lens tb where
>
>   pure :: GetParam lens tb -> TypeLens Base lens -> tb
>
>   ap ::
>     ( tf ~ SetParam lens (a -> b) tb
>     , ta ~ SetParam lens a tb
>     , a ~ GetParam lens ta
>     , b ~ GetParam lens tb
>     )
>     => TypeLens Base lens
>     -> tf
>     -> ta
>     -> tb

The functions pure and ap have the exact same meaning and laws as their counterparts in the standard libraries. The only difference is the addition of the TypeLens parameter and corresponding constraints.
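For reference, these are the usual Applicative laws the lensified versions are expected to satisfy once the lens argument is threaded through (written here in standard, lens-free notation):

```haskell
pure id <*> v = v                            -- identity
pure f <*> pure x = pure (f x)               -- homomorphism
u <*> pure y = pure ($ y) <*> u              -- interchange
pure (.) <*> u <*> v <*> w = u <*> (v <*> w) -- composition
```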

The Left and Right Applicative instances for the Either class are defined as:

> instance Applicative p a => Applicative (Param_a p) (Either a b) where
>   pure a lens = Left $ pure a (zoom lens)
>
>   ap lens (Right a) _ = Right a
>   ap lens (Left f) (Right a) = Right a
>   ap lens (Left f) (Left b) = Left $ ap (zoom lens) f b

> instance Applicative p b => Applicative (Param_b p) (Either a b) where
>   pure b lens = Right $ pure b (zoom lens)
>
>   ap lens (Left a) _ = Left a
>   ap lens (Right f) (Left a) = Left a
>   ap lens (Right f) (Right b) = Right $ ap (zoom lens) f b

And just like with Functors, we have to define the base case for our recursive definitions:

> instance Applicative Base t where
>   pure a _ = a
>   ap _ f = f

Now, to get the Applicative notation we all know and love, we redefine the <*> operator. It is just a thin wrapper around the ap function. Like the <$> operator, we just move the lens parameter to the end:

> infixl 4 <*>
> (tf <*> ta) lens = ap lens (tf lens) ta

Easy as cake!

Let’s try it out!

We’ll start with the doubly nested Either. For nested Eithers, the lens we use specifies what the success constructors are. Any other constructors will act as errors.

Here’s an example without an error:

> fact1 :: Either (Either a String) b
> fact1 = (++) <$> Left (Right "haskell") <*> Left (Right " rocks!") @@ _a._b

ghci> fact1
Left (Right "haskell rocks!")

Here we have one possible way of signaling an error:

> fact2 :: Either (Either a String) String
> fact2 = (++) <$> Left (Right "python") <*> Right "error" @@ _a._b

ghci> fact2
Right "error"

And here we have the other way:

> fact3 :: Either (Either String String) b
> fact3 = (++) <$> Left (Right "c++") <*> Left (Left "error") @@ _a._b

ghci> fact3
Left (Left "error")

Of course, Applicatives are much more useful when our functions have many arguments. Let’s create a function that concatenates four strings together into a phrase:

> cat4 :: String -> String -> String -> String -> String
> cat4 a b c d = a ++ " " ++ b ++ " " ++ c ++ " " ++ d

And create a phrase with no errors:

> phrase1 :: Either (Either a String) b
> phrase1 = cat4
>   <$> Left (Right "haskell")
>   <*> Left (Right "is")
>   <*> Left (Right "super")
>   <*> Left (Right "awesome")
>   @@ _a._b

ghci> phrase1
Left (Right "haskell is super awesome")

And a phrase with two errors:

> phrase2 :: Either (Either String String) String
> phrase2 = cat4
>   <$> Left (Right "python")
>   <*> Right "error"
>   <*> Left (Right "is")
>   <*> Left (Left "error")
>   @@ _a._b

ghci> phrase2
Right "error"

Notice that in phrase2 we had two different causes of errors. The error with the fewest number of terms will always win. As a proof by example, let’s shuffle around our previous errors. We still get the same result:

> phrase3 :: Either (Either String String) String
> phrase3 = cat4
>   <$> Left (Right "python")
>   <*> Left (Left "error")
>   <*> Left (Right "is")
>   <*> Right "error"
>   @@ _a._b

ghci> phrase3
Right "error"

Pure bloods only

This is cool, but it’s not yet very generic. Every time we want a success, we have to manually specify the constructors we want to use. We can avoid this tedium using the pure function. Its type signature is:

pure :: Applicative lens tb => GetParam lens tb -> TypeLens Base lens -> tb

The important thing to notice is that the last parameter takes a TypeLens. This follows our magic formula. We can substitute it into our phrase1 variable like:

> phrase1' :: Either (Either a String) b
> phrase1' = cat4
>   <$> (pure "haskell" @@ _a._b)
>   <*> (pure "is" @@ _a._b)
>   <*> (pure "super" @@ _a._b)
>   <*> (pure "awesome" @@ _a._b)
>   @@ _a._b

But this is nasty! We have to specify the same TypeLens everywhere we want to use the pure function.

Thankfully, we don’t have to do this. The whole point of lenses is to create ridiculous new combinators that reduce boilerplate! So let’s do that! The “ap minus” combintator will automatically apply the lens for us:

> infixl 4 <*>-
> (tf <*>- ta) lens = (tf <*> ta lens) lens

The minus sign signifies that the right side is “minus a lens” and so we should give it one automatically. Using this combinator, we can rewrite our phrase to look like:

> phrase1'' :: Either (Either a String) b
> phrase1'' = cat4
>   <$> (pure "haskell" @@ _a._b)
>   <*>- pure "is"
>   <*>- pure "super"
>   <*>- pure "awesome"
>   @@ _a._b

In order to get rid of the first lens application, we’ll need to perform the same trick to <$>:

> infixl 4 <$>-
> (f <$>- t) lens = (f <$> t lens) lens

And we get the beautiful:

> phrase1''' :: Either (Either a String) b
> phrase1''' = cat4
>   <$>- pure "haskell"
>   <*>- pure "is"
>   <*>- pure "super"
>   <*>- pure "awesome"
>   @@ _a._b

Combinatorics with combinators

There’s two more Applicative combinators needed for parsing: *> and <* . They use the same definition in the standard libraries, but with a third lens parameter:

> infixl 4 <*
> (u <* v) lens = pure const <*> u <*> v @@ lens

> infixl 4 *>
> (u *> v) lens = pure (const id) <*> u <*> v @@ lens

Now we need to create all of the “minus” operators. Remember that the minus sign points to the variable that will have the lens automatically applied for us:

> infixl 4 <*-
> infixl 4 -<*-
> infixl 4 -<*
> (u <*- v) lens = ( u <* v lens ) lens
> (u -<*- v) lens = ( u lens <* v lens ) lens
> (u -<* v) lens = ( u lens <* v ) lens

> infixl 4 *>-
> infixl 4 -*>-
> infixl 4 -*>
> (u *>- v) lens = ( u *> v lens ) lens
> (u -*>- v) lens = ( u lens *> v lens ) lens
> (u -*> v) lens = ( u lens *> v ) lens

Confused? Just remember: when you master these new combinators, all the n00bs will worship your l33t h4sk311 5ki115.

Parsing time

Now that we’ve constructed our torture chamber, it’s time to put attoparsec on the rack. We’ll use the built-in “blind” Functor and Applicative instances to define our lensified ones as:

> mkParams ''Parser

> instance Functor p a => Functor (Param_a p) (Parser a) where
>   fmap' lens f parser = P.fmap (fmap' (zoom lens) f) parser

> instance Applicative (Param_a Base) (Parser a) where
>   pure a lens = Ap.pure $ pure a (zoom lens)
>   ap lens tf ta = tf Ap.<*> ta

And now we’re ready to start parsing. We’ll start simple. The attoparsec library provides a function called string that matches a specified string. We’ll use it to create a Parser that matches the phrase “haskell rocks”:

> chain1 :: TypeLens Base (Param_a Base) -> Parser Text
> chain1 = A.string "haskell" *> A.string " rocks"

ghci> parse (chain1 @@ _a) "haskell rocks"
Done "" " rocks"

In the above example, we chose to *not* specify the lens in the chain1 variable. This means that if we want to chain it with another parser, we should use the minus then operator like:

> chain2 :: TypeLens Base (Param_a Base) -> Parser Text
> chain2 = chain1 -*> A.string "!"

ghci> parse (chain2 @@ _a) "haskell rocks!"
Done "" "!"

If we choose to compose on the right, then we’ll need to move the minus sign to the right:

> chain3 :: TypeLens Base (Param_a Base) -> Parser Text
> chain3 = A.string "¡" *>- chain2

ghci> parse (chain3 @@ _a) "¡haskell rocks!"
Done "" "!"

We have to use minus operators whenever we chain more than two parsers together. In the example below, the first *> takes three parameters (two parsers and a lens). It gets the lens from the minus of the first -*> operator. That operator also needs a lens, which it gets from the next -*>, and so on.

> chain4 :: TypeLens Base (Param_a Base) -> Parser Text
> chain4 = A.string "do"
>       *> A.string " you"
>     -*>  A.string " get"
>     -*>  A.string " it"
>     -*>  A.string " yet?"

ghci> parse (chain4 @@ _a) "do you get it yet?"
Done "" " yet?"

If we need to apply a lens to both sides, then we use the -*>- operator:

> chain5 :: TypeLens Base (Param_a Base) -> Parser Text
> chain5 = chain3 -*> A.string " ... " -*>- chain4

ghci> parse (chain5 @@ _a) "¡haskell rocks! ... do you get it yet?"
Done "" " yet?"

Stacking parsers

Everything in the last section we could have done without type lenses. But now we’re going to lift the Parser into an arbitrary data type and work with it.

As a concrete example, we’ll put our Parser inside a Maybe. The Maybe Applicative instance is:

> instance Applicative p a => Applicative (Param_a p) (Maybe a) where
>     pure a lens = Just $ pure a (zoom lens)
>
>     ap lens Nothing  _        = Nothing
>     ap lens (Just f) Nothing  = Nothing
>     ap lens (Just f) (Just b) = Just $ ap (zoom lens) f b
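For intuition, the standard library already offers this kind of Applicative stacking via Data.Functor.Compose. The following base-only sketch uses lists standing in for Parser; the names stacked and eaten are purely illustrative:

```haskell
import Data.Functor.Compose (Compose (..))

-- Compose nests one Applicative inside another, so an outer Nothing
-- short-circuits the entire computation, just as in the instance above.
stacked :: Compose Maybe [] Int
stacked = (+) <$> Compose (Just [1, 2]) <*> Compose (Just [10])

-- A Nothing on the outer layer eats the whole inner computation.
eaten :: Compose Maybe [] Int
eaten = (+) <$> Compose (Just [1, 2]) <*> Compose Nothing
```

Here getCompose stacked yields Just [11,12], while getCompose eaten collapses to Nothing.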

And for convenience we’ll use the following parseMaybe function. It has the same effect as the parse function provided by attoparsec, but does everything from within a Maybe.

> parseMaybe :: Maybe (Parser a) -> Text -> Maybe (Result a)
> parseMaybe parser str = flip parse str <$> parser @@ _a

Next, we lensify our parser combinators. This string function lifts the string function provided by the attoparsec library into an arbitrary parameter specified by our type lens:

> string c lens = pure (A.string c) (zoom lens)

Back to parsing.

Let’s just repeat the same 5 parse chains from above, but now within the Maybe context. Notice two things:

  1. The A.string function provided by the attoparsec library did not take a type parameter, but our new string function does. This means there are a lot more minus combinators!
  2. Instead of specifying our lens to focus on the _a parameter, we must focus on the _a._a parameter to hit the parser.
> chain1' :: TypeLens Base (Param_a (Param_a Base)) -> Maybe (Parser Text)
> chain1' = string "haskell" -*>- string " rocks"

ghci> parseMaybe (chain1' @@ _a._a) "haskell rocks"
Just Done "" " rocks"


> chain2' :: TypeLens Base (Param_a (Param_a Base)) -> Maybe (Parser Text)
> chain2' = chain1' -*>- string "!"

ghci> parseMaybe (chain2' @@ _a._a) "haskell rocks!"
Just Done "" "!"


> chain3' :: TypeLens Base (Param_a (Param_a Base)) -> Maybe (Parser Text)
> chain3' = string "¡" -*>- chain2'

ghci> parseMaybe (chain3' @@ _a._a) "¡haskell rocks!"
Just Done "" "!"


> chain4' :: TypeLens Base (Param_a (Param_a Base)) -> Maybe (Parser Text)
> chain4' = string "do" -*>- string " you" -*>- string " get" -*>- string " it" -*>- string " yet?"

ghci> parseMaybe (chain4' @@ _a._a) "do you get it yet?"
Just Done "" " yet?"


> chain5' :: TypeLens Base (Param_a (Param_a Base)) -> Maybe (Parser Text)
> chain5' = chain3' -*>- string " ... " -*>- chain4'

ghci> parseMaybe (chain5' @@ _a._a) "¡haskell rocks! ... do you get it yet?"
Just Done "" " yet?"

Again, there’s nothing special about being nested inside a Maybe. We could be nested inside any monstrous data type of your choosing. Yay!

But in the example we’ve chosen, what happens if we add a Nothing into the chain? Nothing takes over and eats the whole Parser. It doesn’t matter whether the Parser was failing or succeeding; the answer is Nothing.
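This annihilating behavior is exactly the ordinary Maybe Applicative at work, as this base-only sketch (with illustrative names) shows:

```haskell
-- In the standard Maybe Applicative, a single Nothing anywhere in a
-- *> chain makes the whole result Nothing, whatever the other values are.
consumed :: Maybe Int
consumed = Just 1 *> Nothing *> Just 3

-- With no Nothing in the chain, *> simply keeps the rightmost value.
survives :: Maybe Int
survives = Just 1 *> Just 2 *> Just 3
```

Here consumed is Nothing while survives is Just 3.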

> chain6 :: TypeLens Base (Param_a (Param_a Base)) -> Maybe (Parser Text)
> chain6 = string "python" -*> Nothing

ghci> parseMaybe (chain6 @@ _a._a) "python"
Nothing

ghci> parseMaybe (chain6 @@ _a._a) "haskell"
Nothing

Circuit parsing teaser

Now we’re ready for some super coolness. We’re going to design a parsing circuit that parses three separate parse streams simultaneously!

Here is our Circuit definition:

> data Circuit x y z
>     = Circuit (Maybe x) (Maybe y) (Maybe z)
>     | CircuitFail
>     deriving (Show)
>
> mkParams ''Circuit

The x, y, and z type params will hold the Parsers. These Parsers are wrapped within a Maybe. A value of Nothing indicates that the parser will not consume any input. A value of (Just parser) means that it will consume input.

The Functor instances are rather interesting because of the Maybe wrapper. We must compose _a with the zoomed lens to make the types work out:

> instance Functor p x => Functor (Param_x p) (Circuit x y z) where
>     fmap' lens f CircuitFail     = CircuitFail
>     fmap' lens f (Circuit x y z) = Circuit (fmap' (_a . zoom lens) f x) y z
>
> instance Functor p y => Functor (Param_y p) (Circuit x y z) where
>     fmap' lens f CircuitFail     = CircuitFail
>     fmap' lens f (Circuit x y z) = Circuit x (fmap' (_a . zoom lens) f y) z
>
> instance Functor p z => Functor (Param_z p) (Circuit x y z) where
>     fmap' lens f CircuitFail     = CircuitFail
>     fmap' lens f (Circuit x y z) = Circuit x y (fmap' (_a . zoom lens) f z)

The Applicative instances are where all the action is. In each case, the pure function is fairly straightforward. It looks just like the other ones we’ve seen, except that it applies _a to the zoomed lens and gives default values of Nothing to the other parsers. The ap function calls ap on the appropriate parser and uses the First Monoid instance on the other two.

> instance
>     ( Applicative p x
>     , Monoid y
>     , Monoid z
>     ) => Applicative (Param_x p) (Circuit x y z)
>   where
>     pure x lens = Circuit (pure x @@ (_a . zoom lens)) Nothing Nothing
>
>     ap lens CircuitFail _ = CircuitFail
>     ap lens _ CircuitFail = CircuitFail
>     ap lens (Circuit x1 y1 z1) (Circuit x2 y2 z2) = Circuit
>         (ap (_a . zoom lens) x1 x2)
>         (getFirst $ First y1 <> First y2)
>         (getFirst $ First z1 <> First z2)
>
> instance (Monoid x, Applicative p y, Monoid z) => Applicative (Param_y p) (Circuit x y z) where
>     pure a lens = Circuit Nothing (pure a @@ _a . zoom lens) Nothing
>
>     ap lens CircuitFail _ = CircuitFail
>     ap lens _ CircuitFail = CircuitFail
>     ap lens (Circuit x1 y1 z1) (Circuit x2 y2 z2) = Circuit
>         (getFirst $ First x1 <> First x2)
>         (ap (_a . zoom lens) y1 y2)
>         (getFirst $ First z1 <> First z2)
>
> instance (Monoid x, Monoid y, Applicative p z) => Applicative (Param_z p) (Circuit x y z) where
>     pure a lens = Circuit Nothing Nothing (pure a @@ _a . zoom lens)
>
>     ap lens CircuitFail _ = CircuitFail
>     ap lens _ CircuitFail = CircuitFail
>     ap lens (Circuit x1 y1 z1) (Circuit x2 y2 z2) = Circuit
>         (getFirst $ First x1 <> First x2)
>         (getFirst $ First y1 <> First y2)
>         (ap (_a . zoom lens) z1 z2)
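The First trick in these instances is just the standard Data.Monoid.First, which keeps the leftmost Just. A tiny base-only demonstration (names are illustrative):

```haskell
import Data.Monoid (First (..))

-- First keeps the leftmost Just, which is how the Circuit instances
-- decide which of the two untouched parser slots to carry forward.
pick :: Maybe String
pick = getFirst (First (Just "left") <> First (Just "right"))

-- When the left side is Nothing, the right side falls through.
fallThrough :: Maybe String
fallThrough = getFirst (First Nothing <> First (Just "right"))
```

Here pick is Just "left" and fallThrough is Just "right".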

We write a nice wrapper so we can parse our circuits:

> parseCircuit
>     :: Circuit (Parser x) (Parser y) (Parser z)
>     -> Text
>     -> Text
>     -> Text
>     -> Circuit (Result x) (Result y) (Result z)
> parseCircuit CircuitFail _ _ _ = CircuitFail
> parseCircuit (Circuit x y z) str1 str2 str3 = Circuit
>     ( parseMaybe x str1 )
>     ( parseMaybe y str2 )
>     ( parseMaybe z str3 )

And now here is a simple circuit for us to play with:

> circ1 :: Circuit (Parser Text) (Parser Text) (Parser Text)
> circ1 = Circuit
>     (string (pack "haskell") @@ _a._a)
>     (string (pack "is"     ) @@ _a._a)
>     (string (pack "fun"    ) @@ _a._a)

When we try to parse our circuit, we just match each word in parallel:

ghci> parseCircuit circ1 "haskell" "is" "fun"
Circuit (Just Done "" "haskell") (Just Done "" "is") (Just Done "" "fun")

In this example, we compose our circuit only on the first parameter:

ghci> parseCircuit (circ1 *> circ1 @@ _x._a) "haskell" "is" "fun"
Circuit (Just Partial _) (Just Done "" "is") (Just Done "" "fun")

Notice that (above) we no longer finished after matching the word “haskell”. We’ve got a whole ‘nother haskell to go. Oh Joy!

Here, we match completely:

ghci> parseCircuit (circ1 *> circ1 @@ _x._a) "haskellhaskell" "is" "fun"
Circuit (Just Done "" "haskell") (Just Done "" "is") (Just Done "" "fun")

In our Circuit type, every parser is—at least so far—acting completely independently. That means one parser can fail while the others succeed:

ghci> parseCircuit circ1 "python " "is" "fun"
Circuit (Just Fail "python " [] "Failed reading: takeWith") (Just Done "" "is") (Just Done "" "fun")

Let’s create another simple circuit to play with. In this one, only the first parser performs any actions. The other two are no-ops:

> circ2 :: Circuit (Parser Text) (Parser y) (Parser z)
> circ2 = Circuit
>     (string (pack " with lenses") @@ _a._a)
>     Nothing
>     Nothing

We can compose circ1 and circ2 exactly as you would suspect. Our original string is now only a partial match:

ghci> parseCircuit (circ1 *> circ2 @@ _x._a) "haskell" "is" "fun"
Circuit (Just Partial _) (Just Done "" "is") (Just Done "" "fun")

But this matches perfectly:

ghci> parseCircuit (circ1 *> circ2 @@ _x._a) "haskell with lenses" "is" "fun"
Circuit (Just Done "" " with lenses") (Just Done "" "is") (Just Done "" "fun")

And this fails:

ghci> parseCircuit (circ1 *> circ2 @@ _x._a) "haskell without lenses" "is" "fun"
Circuit (Just Fail " without lenses" [] "Failed reading: takeWith") (Just Done "" "is") (Just Done "" "fun")

We can simplify the code of circ2 even further (and make it more generic) using the pure function:

> circ3 :: Circuit (Parser Text) (Parser y) (Parser z)
> circ3 = pure (string (pack " with lenses") @@ _a) @@ _x

circ3 behaves exactly like circ2 when sequenced with circ1:

ghci> parseCircuit (circ1 *> circ3 @@ _x._a) "haskell with lenses" "is" "fun"
Circuit (Just Done "" " with lenses") (Just Done "" "is") (Just Done "" "fun")

And that’s enough for today. GHC needs to rest. It’s tired.

Tune in next time…

We’ve still got so many tantalizing questions to answer:

  1. What is that CircuitFail gizmo doing?
  2. How do I use Alternative to branch my parser?
  3. Can a Circuit’s parser depend on the other parsers in the Circuit?
  4. Do fiber optic burritos taste good??!?!

Stay tuned to find out!

Categories: Offsite Blogs

ICFP 2014 accepted papers

Haskell on Reddit - Mon, 06/09/2014 - 9:05am
Categories: Incoming News

GHC bug squashing at ZuriHac 2014

Haskell on Reddit - Mon, 06/09/2014 - 7:52am
Categories: Incoming News

Tutorial on proving Functor laws

Haskell on Reddit - Mon, 06/09/2014 - 7:01am
Categories: Incoming News

Memory pools for ghc?

Haskell on Reddit - Mon, 06/09/2014 - 4:30am

I was looking at the binary-trees game and it seems that the C and C++ implementations become 4x faster by using memory pools.

In Haskell, it seems that it should be possible to create a type for an expression where a temporary memory pool is utilized, but after evaluation, the whole memory pool is cleared, instead of using the garbage collector to scan the memory.

A sort of garbage collection will happen by tracing the result, but the user gets to explicitly indicate points in the program flow where lots of garbage exists and thus lots of GC can be avoided.

Because Haskell in general does not allow mutable storage, no reference from an old generation can point into this pool, so for pure code, this should be safe, and fast.

What I am thinking of is a function like this:

data PoolSpec = PoolSpec { ... }

poolEval :: NFData a => PoolSpec -> a -> a
poolEval spec val = do
    -- Make the memory allocator use a new pool, or just mark the
    -- current nursery position if enough memory is available in the
    -- current generation.
    let mark = pushMemoryMark spec
    -- In new pool, evaluate the result
    let result = deepseq val
    -- Release the pool. Copy result out of the pool.
    popMemoryMark mark result

data Mark = ...

-- Create a new memory pool with the current memory pool
-- as parent.
pushMemoryMark :: Mark

-- Copy a result value from the current memory pool to the
-- parent memory pool, destroy the current memory pool,
-- and set the parent memory pool to be the current memory
-- pool
popMemoryMark :: NFData a => Mark -> a -> a
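As an aside, the existing deepseq package can at least approximate the *when* half of this today, even though it offers no pool semantics. The sketch below is only an analogy for poolEval, not an implementation of it:

```haskell
import Control.DeepSeq (NFData, force)
import Control.Exception (evaluate)

-- Force full evaluation of a pure value at a chosen program point, so
-- the intermediate garbage it produced becomes collectible immediately
-- afterwards. No pool is involved: this controls *when* evaluation
-- happens, not *where* allocation goes.
evalHere :: NFData a => a -> IO a
evalHere = evaluate . force
```

For example, `r <- evalHere (sum [1 .. 100 :: Int])` fully evaluates the sum at that point before continuing.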

How incompatible are these functions with the current GHC RTS, and what would, say, a webserver gain by having this?

submitted by hastor
[link] [12 comments]
Categories: Incoming News

Apple invents “Maybe”

Haskell on Reddit - Sun, 06/08/2014 - 4:28pm

There must be a Haskeller infiltrating Apple. Their new programming language, Swift, is strongly typed, and the types are inferred by the compiler. Swift is probably the first programming language developed simultaneously with a high-powered IDE. It includes lazy properties, explicit closures, implicit storage allocation, and innovative invocation of function parameters. Swift also defines “optional” values. For example:

if let upper = john.residence?.address?.buildingIdentifier()?.uppercaseString {
    println("John's uppercase building identifier is \(upper).")
}

The “residence?” is an “optional value” that yields ‘nil’ or a value. As shown above, optional values can be cascaded. Very nice syntactic sugar for Maybe.
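For comparison, the same chaining falls out of Maybe’s monadic bind in Haskell. The record and field names below (residence, address, buildingIdentifier) are assumptions mirroring the Swift snippet, not a real API:

```haskell
import Data.Char (toUpper)

-- Hypothetical records standing in for the Swift properties.
newtype Person    = Person    { residence :: Maybe Residence }
newtype Residence = Residence { address :: Maybe Address }
newtype Address   = Address   { buildingIdentifier :: Maybe String }

-- Swift's a?.b?.c chain is Maybe's monadic bind in disguise: any
-- Nothing along the way makes the whole result Nothing.
upperId :: Person -> Maybe String
upperId p = map toUpper <$> (residence p >>= address >>= buildingIdentifier)
```

With a fully populated Person, upperId returns the upper-cased identifier wrapped in Just; a Nothing anywhere in the chain yields Nothing.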

It reminds me of the 1960s and ’70s, when we were all busily designing programming languages for specific purposes. Swift is a language designed to interface to Apple’s APIs and Cocoa frameworks. There probably is something we can learn from Swift - programs are compact, safe, fast, and legible - with understandable compiler error messages.

Excerpt From: Apple Inc. “The Swift Programming Language.” iBooks. https://itun.es/us/jEUH0.l

submitted by crmills_2000
[link] [58 comments]
Categories: Incoming News

Joachim Breitner: ZuriHac 2014

Planet Haskell - Sun, 06/08/2014 - 1:34pm

I’m writing this on the train back from the ZuriHac Haskell Hackathon in Zürich, generously sponsored by Better and Google. My goal for this event was to attract new people to work on GHC, the Haskell compiler, so I announced a „GHC bugsquashing project“. I collected a few seemingly simple tickets that have a good effort/reward ratio for beginners and encouraged those who showed up to pick one to work on.

Roughly six people started, and four actually worked on GHC on all three days. The biggest hurdle for them was to get GHC built for the first time, especially for those using a Mac or Windows. They also had to learn to avoid recompilation of the whole compiler, which takes an annoying amount of time (~30 minutes for most people). But once such hurdles were taken, all of them managed to find their way around the source code to the places they needed to touch and were able to produce patches, some of which are already merged into GHC master. When I wasn’t giving tips and hints I was working on various small tickets myself, but nothing of great impact. I very much hope that this event will pay off and one or two of the newcomers end up being regular contributors to GHC.

We took breaks from our respective projects to listen to interesting talks by Edward Kmett and Simon Marlow, and on Saturday evening we all went to the shores of the Zurisee and had a nice barbecue. It was a good opportunity to get into contact with more of the attendees (the hacking itself was spread across multiple smaller office rooms), and I was happy to hear about people having read my recent Call Arity paper, and even found it valuable.

Thanks to the organizers and sponsors for this nice opportunity!

Categories: Offsite Blogs