News aggregator

Oliver Charles: 24 Days of GHC Extensions: DeriveGeneric

Planet Haskell - Mon, 12/15/2014 - 6:00pm

Yesterday, Andraz showed us a variety of extensions that come with GHC to help us avoid writing boilerplate code. We saw that GHC can automatically derive instances for Functor, Traversable, and Foldable, along with the usual classes Eq, Ord, Show, and so on. However, as exciting as this is, you might have been left a little worried that this is where it stops.

All of the classes mentioned so far exist in the base library, which makes it feasible to extend GHC to derive code for them automatically. But what if you have a type class that’s not in base? It would be disappointing if there were no way to avoid boilerplate short of extending the compiler, and with GHC 7.2 we got a new extension that solves exactly this problem.

> {-# LANGUAGE DeriveGeneric #-}
> {-# LANGUAGE FlexibleContexts #-}
> {-# LANGUAGE FlexibleInstances #-}
> {-# LANGUAGE UndecidableInstances #-}
> {-# LANGUAGE MultiParamTypeClasses #-}
> {-# LANGUAGE FunctionalDependencies #-}
> {-# LANGUAGE TypeOperators #-}
> {-# LANGUAGE ViewPatterns #-}
> import GHC.Generics

The (relatively) new DeriveGeneric extension allows us to use a paradigm of programming called data-type generic programming. In this style of programming, we are able to write our functions over arbitrary data types - provided they have the right “shape”. To get an idea of what it means for things to have the same shape, let’s start by looking at the humble Either data type:

data Either a b = Left a | Right b

Another data type that’s similar could be a validation data type:

> data Valid e a = Error e | OK a
>   deriving (Generic)

In fact, this data type is more than just similar - we can say it is isomorphic to Either. In this sense, isomorphic is just a fancy word for a pair of functions that translate between Valid and Either without losing information - a structure-preserving mapping in each direction. If that sounds scary, we can easily write it up in code:

> toEither :: Valid e a -> Either e a
> toEither (Error e) = Left e
> toEither (OK a) = Right a
>
> fromEither :: Either e a -> Valid e a
> fromEither (Left e) = Error e
> fromEither (Right a) = OK a

See - easy!
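To make the isomorphism claim concrete, here is a self-contained sketch (restating the definitions above) that checks both composites really are identities:

```haskell
data Valid e a = Error e | OK a deriving (Eq, Show)

toEither :: Valid e a -> Either e a
toEither (Error e) = Left e
toEither (OK a)    = Right a

fromEither :: Either e a -> Valid e a
fromEither (Left e)  = Error e
fromEither (Right a) = OK a

-- both round trips are identities, witnessing the isomorphism
roundTripA :: Valid e a -> Valid e a
roundTripA = fromEither . toEither

roundTripB :: Either e a -> Either e a
roundTripB = toEither . fromEither
```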

As you can imagine, there are lots of different data types that are isomorphic, and the insight behind data-type generic programming is that (most) data-types can be built up out of simpler pieces. For Either and Valid, they are both built out of the same parts:

  1. Each data-type has two constructors
  2. Each constructor has one field

GHC.Generics is the library behind the DeriveGeneric extension, and it gives us the following pieces to build data types:

  • Fields
  • Type parameterized fields
  • Field products - which allow us to make a constructor with multiple fields
  • Constructor sums - which allow a data-type to have multiple constructors

The library also goes a little further than this, providing program-specific information such as the names of types and fields. The latter can be useful when working with serialisation formats such as JSON and XML.
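As a sketch of how those pieces surface in GHC.Generics (the Pair type here is my own toy example, not from the post): K1 wraps a single field, :*: combines fields into a product, :+: combines constructors into a sum, and M1 attaches the name metadata.

```haskell
{-# LANGUAGE DeriveGeneric #-}
import GHC.Generics

data Pair = Pair Int Bool deriving (Generic, Show)

-- Stripping the M1 metadata wrappers, Rep Pair is essentially
--   K1 R Int :*: K1 R Bool
-- and a two-constructor type would have a :+: at the outermost level.
shape :: Pair -> (Int, Bool)
shape p = case from p of
  M1 (M1 (M1 (K1 i) :*: M1 (K1 b))) -> (i, b)
```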

So far, we’ve skimmed the idea of isomorphic types, and seen that GHC.Generics gives us a set of basic parts to build types from. It would be tedious to write the conversion functions ourselves, but by using DeriveGeneric, GHC does all the heavy lifting behind the scenes for us.

As you can see above, we used deriving (Generic) on Valid, which means we already get some transformations that we can play with:

.> from (Error "ENOCHRISTMASPRESENTS")
M1 {unM1 = L1 (M1 {unM1 = M1 {unM1 = K1 {unK1 = "ENOCHRISTMASPRESENTS"}}})}

.> from (OK ["Books", "Calculators"])
M1 {unM1 = R1 (M1 {unM1 = M1 {unM1 = K1 {unK1 = ["Books","Calculators"]}}})}

While the output is a little dense, notice how the contents of the data type are present in the K1 data type, and we choose a side of the Valid data type with L1 (“left” for Error) or R1 (“right” for OK).

Now that we’ve got a generic representation, we’re able to start writing generic functions over any data type that conforms to the shape we want. As a small example, let’s write a generic function that tries to get the error out of an error-containing data type. As there are a number of different representation types (M1, L1, etc.), we use a family of type classes to navigate the structure:

> class GetError rep e | rep -> e where
>   getError' :: rep a -> Maybe e
>
> instance GetError f e => GetError (M1 i c f) e where
>   getError' (M1 m1) = getError' m1
>
> instance GetError l e => GetError (l :+: r) e where
>   getError' (L1 l) = getError' l
>   getError' (R1 _) = Nothing
>
> instance GetError (K1 i e) e where
>   getError' (K1 e) = Just e
>
> getError :: (Generic (errorLike e a), GetError (Rep (errorLike e a)) e)
>          => errorLike e a -> Maybe e
> getError = getError' . from

A little daunting, but I’ll be mean and leave it as an exercise to the reader to determine how this code works. However, just to prove it does work - let’s have a play in GHCi:

.> getError (Error "Oh no!")
Just "Oh no!"

.> getError (OK "Phew")
Nothing

.> getError (Left "More explosions!")
Just "More explosions!"

.> getError (Right "Oh, false alarm")
Nothing

Now that’s my idea of some Christmas magic.

Undoubtedly, generic programming is not a simple concept, and it will take time to get used to it if you’re new to it. Andres Löh, a firm supporter of generic programming, has a lovely Skills Matter talk that goes into more detail about this very extension.

This post is part of 24 Days of GHC Extensions - for more posts like this, check out the calendar.

Categories: Offsite Blogs

Trying to optimise my discrete Voronoi algorithm without success, could use some pointers!

Haskell on Reddit - Mon, 12/15/2014 - 2:26pm

Part of the work on the next iteration of my character rendering routines requires an approximation of discrete Voronoi diagrams. So far I’ve been unable to come up with a sufficiently performant implementation. Here’s what I’ve got at the moment:

There are three implementations in the above snippet:

  • directVoronoi: reference implementation where we check each input point against each tile and choose the closest one.
  • simpleVoronoi: we create an empty field which is seeded with the input points, then for every empty tile we find the value of the closest taken tile; this is the fastest solution in most cases.
  • propagationVoronoi: we start out with a seeded field again, but instead of naively checking every empty tile, we start from the filled ones and propagate their values to their empty neighbours until the whole field is done.

In theory the third approach should be the fastest, since it does a limited amount of work on each tile, but it looks like the benefits are lost to book-keeping overhead. If anyone has any idea how else to attack this problem I’d be happy to hear!
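For concreteness, here is a hedged sketch (not cobbpg's actual code - the names voronoiBFS, Point and the Map-based representation are all my own) of what a propagation-style implementation can look like: a multi-source BFS where every seed floods outward one ring at a time, and a tile keeps the first value that reaches it. Note this approximates "closest seed" under the grid metric induced by the neighbourhood (Manhattan for 4-neighbours), and an immutable Map is unlikely to be competitive with mutable arrays - which is exactly where the book-keeping overhead mentioned above comes in.

```haskell
import           Data.List (foldl')
import qualified Data.Map.Strict as M
import qualified Data.Set as S

type Point = (Int, Int)

voronoiBFS :: Int -> Int -> [(Point, a)] -> M.Map Point a
voronoiBFS w h seeds = go (M.fromList seeds) (map fst seeds)
  where
    inBounds (x, y) = x >= 0 && x < w && y >= 0 && y < h
    neighbours (x, y) = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    go field [] = field
    go field frontier = go field' frontier'
      where
        -- tiles adjacent to the frontier that are still empty
        claims = [ (n, v)
                 | p <- frontier
                 , Just v <- [M.lookup p field]
                 , n <- neighbours p
                 , inBounds n
                 , not (M.member n field) ]
        -- first claim wins; later claims for the same tile are ignored
        field'    = foldl' (\m (n, v) -> M.insertWith (\_ old -> old) n v m) field claims
        frontier' = S.toList (S.fromList (map fst claims))
```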

submitted by cobbpg
Categories: Incoming News

Bill Atkins: Simple Combinators for Manipulating CGPoint/CGSize/CGRect with Swift

Planet Haskell - Mon, 12/15/2014 - 1:27pm
One of the most painful things about Objective-C was having to modify CGPoint, CGSize or CGRect values. The clunky struct interface made even simple modifications verbose and ugly, since struct expressions were read-only:

    CGRect imageBounds = self.view.bounds;
    imageBounds.size.height -= self.footer.bounds.size.height;
    self.imageView.bounds = imageBounds;
Even though we have auto-layout, I often find myself doing this kind of arithmetic with points, size or rects. In Objective-C, it required either generating dummy variables so you can modify members (as above), or really messy struct initialization syntax:
    self.imageView.bounds = (CGRect) {
        .origin = self.view.bounds.origin,
        .size = CGSizeMake(self.view.bounds.size.width,
                           self.view.bounds.size.height - self.footer.bounds.size.height)
    };
Fortunately, none of this boilerplate is necessary with Swift. Since Swift lets you extend even C structures with new methods, I wrote a handful of combinators that eliminate this kind of code. The above snippet can now be replaced with:
    self.imageView.bounds = self.view.bounds.mapHeight { $0 - self.footer.size.height }
I can easily enlarge a scroll view's content size to hold its pages:

    self.scrollView.contentSize = self.scrollView.bounds.size.mapWidth { $0 * CGFloat(pages.count) }
I can do calculations that previously would've required dozens of lines of code in just one or two:
    let topHalfFrame = self.view.bounds.mapHeight { $0 / 2 }
    let bottomHalfFrame = topHalfFrame.mapY { $0 + topHalfFrame.size.height }
These two lines will give me two frames that each take up half of the height of their parent view.
In cases where I simply need to set a value, I use the primitive "with..." functions:
Note that these methods can all be chained to create complex expressions.
The code for these methods is trivial, yet they give you a huge boost in expressive power.
    extension CGPoint {
        func mapX(f: (CGFloat -> CGFloat)) -> CGPoint {
            return self.withX(f(self.x))
        }

        func mapY(f: (CGFloat -> CGFloat)) -> CGPoint {
            return self.withY(f(self.y))
        }

        func withX(x: CGFloat) -> CGPoint {
            return CGPoint(x: x, y: self.y)
        }

        func withY(y: CGFloat) -> CGPoint {
            return CGPoint(x: self.x, y: y)
        }
    }

    extension CGSize {
        func mapWidth(f: (CGFloat -> CGFloat)) -> CGSize {
            return self.withWidth(f(self.width))
        }

        func mapHeight(f: (CGFloat -> CGFloat)) -> CGSize {
            return self.withHeight(f(self.height))
        }

        func withWidth(width: CGFloat) -> CGSize {
            return CGSize(width: width, height: self.height)
        }

        func withHeight(height: CGFloat) -> CGSize {
            return CGSize(width: self.width, height: height)
        }
    }

    extension CGRect {
        func mapX(f: (CGFloat -> CGFloat)) -> CGRect {
            return self.withX(f(self.origin.x))
        }

        func mapY(f: (CGFloat -> CGFloat)) -> CGRect {
            return self.withY(f(self.origin.y))
        }

        func mapWidth(f: (CGFloat -> CGFloat)) -> CGRect {
            return self.withWidth(f(self.size.width))
        }

        func mapHeight(f: (CGFloat -> CGFloat)) -> CGRect {
            return self.withHeight(f(self.size.height))
        }

        func withX(x: CGFloat) -> CGRect {
            return CGRect(origin: self.origin.withX(x), size: self.size)
        }

        func withY(y: CGFloat) -> CGRect {
            return CGRect(origin: self.origin.withY(y), size: self.size)
        }

        func withWidth(width: CGFloat) -> CGRect {
            return CGRect(origin: self.origin, size: self.size.withWidth(width))
        }

        func withHeight(height: CGFloat) -> CGRect {
            return CGRect(origin: self.origin, size: self.size.withHeight(height))
        }
    }
Categories: Offsite Blogs

Lennart Augustsson: A commentary on 24 days of GHC extensions

Planet Haskell - Mon, 12/15/2014 - 11:36am
Ollie Charles has continued his excellent series of Christmas Haskell blogs. This year he talks about 24 Days of GHC Extensions. His blog posts are an excellent technical introduction to various Haskell extensions. Reading them inspired me to write some non-technical comments about the various extensions; giving a little bit of history and personal comments about them.

Day 1: View Patterns

View patterns have a long history. As far as I know, views were first suggested by Phil Wadler in 1987, Views: a way for pattern matching to cohabit with data abstraction. As the title of the paper suggests, it was about being able to pattern match on abstract data types. Variations of Phil's suggestion have been implemented several times (I did it both in LML and for Haskell in hbc). In all these early designs the conversion between the concrete type and the view type was implicit, whereas in the ViewPatterns extension the conversion is explicit.

The addition of view patterns was partly spurred by the fact that F# has a very neat way of introducing pattern matching for abstract types, called active patterns. Since Simon Peyton Jones is in the same research lab as Don Syme it's natural that Simon couldn't let F# have this advantage over Haskell. :)
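A tiny illustration of the explicit conversion style (my own example, not from the post): the function to the left of -> runs on the scrutinee, and its result is what gets matched.

```haskell
{-# LANGUAGE ViewPatterns #-}

-- the view function (reverse) is named explicitly in the pattern
lastTwo :: [a] -> Maybe (a, a)
lastTwo (reverse -> y : x : _) = Just (x, y)
lastTwo _                      = Nothing
```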

Day 2: Pattern Synonyms

Pattern synonyms are a more recent addition. The first time I remember hearing the idea was in the paper Abstract Value Constructors, and ever since then I have wished that Haskell had something like it. Patterns were one of the few constructs in Haskell that could not be named and reused. The first time they were added to Haskell was in the SHE preprocessor.

At a Haskell hackathon in Cambridge, August 2011, Simon Peyton Jones, Simon Marlow and I met to hash out how they could be added to GHC. I wanted to go beyond simple pattern synonyms. The simplest kind is uni-directional synonyms, which can only occur in patterns. The simply bi-directional synonyms can be used both in patterns and expressions, but are limited in what they can do. The explicitly bi-directional synonyms allow the full power of both patterns and expressions. With explicitly bi-directional synonyms and view patterns we can finally implement views as envisioned by Phil, but now broken down into smaller reusable pieces. The result of our discussions at the hackathon can be found here. But we were all too busy to implement any of it. All the hard implementation work, and fixing all the additional snags found, was done by Gergő Érdi.
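The three flavours described above can be sketched like this (illustrative names and modern GHC syntax, not code from the post):

```haskell
{-# LANGUAGE PatternSynonyms #-}
{-# LANGUAGE ViewPatterns #-}

-- Uni-directional: usable only as a pattern.
pattern Head :: a -> [a]
pattern Head x <- (x : _)

-- Simply bi-directional: the same equation works both ways.
pattern Single :: a -> [a]
pattern Single x = [x]

-- Explicitly bi-directional: pattern side and expression side
-- are given separately, here combined with a view pattern.
pattern Snoc :: [a] -> a -> [a]
pattern Snoc xs x <- (unsnoc' -> Just (xs, x)) where
  Snoc xs x = xs ++ [x]

unsnoc' :: [a] -> Maybe ([a], a)
unsnoc' [] = Nothing
unsnoc' xs = Just (init xs, last xs)
```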

I find this a very exciting extension, and you can take it even further, like Associated Pattern Synonyms in class declarations.

Day 3: Record Wildcards

The record wildcard extension allows you to open a record and get access to all the fields without using qualified names for them. The first time I encountered this idea was in Pascal, which has a with-statement. It looks like with expr do stmt, where the expr is a record value and inside the stmt all the fields of the record can be accessed directly. The construct later appeared in Ada as well, where it's called use.

So having something similar in Haskell doesn't seem so far-fetched. But it was actually the dual - constructing values using record wildcard notation - that inspired me to make this extension.

In 2006 I was developing the Paradise EDSL which is a DSL embedded in Haskell for describing computations and user interfaces. I wanted a way to make the EDSL less verbose and that's why record wildcards came about. Here's a simple example to illustrate the idea. We want to be able to input a coordinate record.

    data Coord = Coord { x :: Double, y :: Double }

    coord :: P Coord
    coord = do
      x <- input 0.0
      y <- input 0.0
      return Coord{..}

This says that x and y are input, and that their default value is 0. We need an input for every field in the Coord record, and at the end we construct the record itself. Without the record wildcard the construction would have been Coord{x=x,y=y}. This isn't too bad, but for hundreds of inputs it gets tedious.

I made a quick and dirty implementation of this in GHC, but it was too ugly to get into the official release. Instead Dan Licata reimplemented (and extended) it under Simon PJ's supervision into what we have today.

I'm actually quite ambivalent about this extension. It's incredibly convenient (especially in the pattern form), but it introduces new bound names without an explicit mention of the name in the binder. This makes it harder to read code. The same objection can be made about the Haskell import declaration when it lacks an import list.

Day 4: Bang Patterns

I don't have much to say about BangPatterns. One day they simply appeared in a new GHC version. :)

The Clean language has long had something similar, but in Clean the annotation goes on the type instead of on the pattern. In Haskell it seems like a nice duality to have it on the pattern, since there are also lazy patterns, introduced by ~.
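A small illustration of the pattern-level annotation (my own example): the bang forces the accumulator at each step, so no chain of thunks builds up.

```haskell
{-# LANGUAGE BangPatterns #-}

-- strict accumulator: !acc is evaluated before each recursive call
sumTo :: Int -> Int
sumTo = go 0
  where
    go !acc 0 = acc
    go !acc n = go (acc + n) (n - 1)
```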

Day 5: Rebindable Syntax

Rebindable syntax is almost like going back to the older Haskell reports. They lacked an exact specification of where the identifiers introduced by desugaring were bound. Of course, it was resolved so that they always refer to the Prelude identifiers. But when experimenting with other preludes it's very useful to be able to override this, which is exactly what RebindableSyntax allows you to do.

In my opinion, this extension should be a little different. It ought to be possible to give a module name for where the names should be found. So normally this would be Prelude, but I'd like to be able to say RebindableSyntax=MyPrelude, and then all names introduced by desugaring will be found in MyPrelude (even if they are not in scope).
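A minimal sketch of what the extension enables (my own example): with RebindableSyntax, do-notation desugars to whatever (>>=) and return are in scope, here hand-written Maybe versions rather than the Prelude's.

```haskell
{-# LANGUAGE RebindableSyntax #-}
import Prelude hiding ((>>=), return)

-- our own monad-like operations; do-notation below uses these
(>>=) :: Maybe a -> (a -> Maybe b) -> Maybe b
Just x  >>= f = f x
Nothing >>= _ = Nothing

return :: a -> Maybe a
return = Just

safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing
safeDiv x y = return (x `div` y)

calc :: Maybe Int
calc = do
  a <- safeDiv 10 2
  safeDiv a 0
```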

Day 6: List comprehensions

This bundles together a number of list comprehension extensions.

First, MonadComprehensions, which allows list comprehension syntax for any monad. This is simply going back to Haskell 1.4, where monad comprehensions were introduced. The monad comprehension syntax had been used by Phil Wadler before that. Monad comprehensions were removed after 1.4 because they were deemed too confusing for beginners. They were brought back to GHC by George Giorgidze et al.
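For example (my own sketch), comprehension syntax works over Maybe once the extension is on:

```haskell
{-# LANGUAGE MonadComprehensions #-}

-- a comprehension over an arbitrary monad, here Maybe
addBoth :: Maybe Int -> Maybe Int -> Maybe Int
addBoth mx my = [ x + y | x <- mx, y <- my ]
```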

The ParallelListComp extension allows zip-like operations to be expressed in the comprehension. The idea originates from Galois in the design of Cryptol. John Launchbury, Jeff Lewis, and Thomas Nordin were turning crypto networks into recursive equations and wanted a nicer notation than using zipWith. (Thanks to Andy Adams-Moran for the history lesson.)
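As a quick sketch (my own example), parallel qualifier branches are zipped together rather than nested:

```haskell
{-# LANGUAGE ParallelListComp #-}

-- the two branches run in lockstep, like zipWith (*)
dots :: [Int]
dots = [ x * y | x <- [1, 2, 3] | y <- [10, 20, 30] ]
```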

The origins of TransformListComp are simple: it's just a way to bring more of the power of SQL into list comprehensions. It's an extension that introduces far too much special syntax for my taste, but the things you can do are quite neat.

Categories: Offsite Blogs

FP Complete: Stackage: Survey results, easier usage, and LTS Haskell 0.X

Planet Haskell - Mon, 12/15/2014 - 10:00am

There was tremendous response to our Stackage survey, so I'd like to say: thank you, everyone who participated - the feedback was invaluable. Additionally, in the past two weeks, I think we've added around 100 new packages to Stackage based on everyone's pull requests, so again, thank you to everyone who got involved. You can view the survey results yourself. Of particular interest to me were the freeform responses, which gave us a lot of valuable information.

Chris Done and I went over the results together, and by far the strongest impression we got was that the Stackage setup process was too onerous. The lack of direct cabal-install support and the need to choose from among six possible snapshots were particularly problematic. Furthermore, some people found the homepage confusing and didn't understand from it why they should use Stackage. There was also a fear that, by using Stackage, you'd end up missing out on some important packages, either because those packages aren't present or because it's unclear how to add new packages.

So today, we're announcing a number of changes to address these concerns, as well as to pave the way for the upcoming LTS Haskell release. These changes are still a work in progress, so please give us feedback (or feel free to send a pull request as well).

Simplified choices

In order to use Stackage, you first had to choose GHC 7.8, GHC 7.8 + Haskell Platform, or GHC 7.6. You then had to decide whether you wanted exclusive or inclusive. Once we add LTS Haskell, that's another choice in the mix. So we've decided to simplify the options advertised on the homepage to two:

Each of these will be compatible with only one version of GHC (7.8.3 for now). Another piece of feedback is that users are by far using inclusive more commonly than exclusive, so we're going to default to giving inclusive instructions.

One important thing to keep in mind is that this will not affect current users at all. All currently hosted snapshots will remain available in perpetuity. We're talking about discovery for new users here.

Simplified setup

Until now, we've recommended setting up Stackage by changing your remote-repo setting to point to the appropriate URL. In October, Greg Weber came up with the idea of generating a cabal.config file to specify constraints instead of using a manual URL. We've decided to make this the preferred Stackage setup method. This provides a number of benefits for you immediately:

  • Works directly with cabal, without needing a bleeding-edge version with remote-repo support
  • It's fully supported by cabal sandboxes
  • It's easy to tweak your version requirements if desired
  • You keep Hackage server as your package source, which may be desired by some

The downsides with this are:

  • There are a few bugs in cabal-install around cabal.config support, see the issue for more information
  • This approach only works for "inclusive"-style snapshots. However, as we're now recommending inclusive as the default mechanism, this makes sense. The cabal.config file contains an optional remote-repo line which you can uncomment to get back exclusive-style snapshots.
  • There are some concerns about Hackage server's reliability. If you'd like a more reliable server, FP Complete offers an alternative remote-repo, hosted on Amazon S3.

As a result of this change, getting set up with Stackage is now as easy as downloading a cabal.config file, placing it in your project directory, and running cabal install. Our homepage has easy-to-use instructions for this as well.
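To illustrate (the package names and versions below are invented for the example, not taken from a real snapshot), such a cabal.config file is nothing more than a block of version constraints that cabal-install picks up automatically from the project directory:

```
constraints: aeson ==0.7.0.6,
             text ==1.1.1.3,
             transformers ==0.3.0.0
```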

More focused homepage

Speaking of the homepage: we've updated it to:

  • Give easy-to-use installation instructions
  • Give a clear, concise explanation of what Stackage is and the problems it solves
  • Provide a link for installation instructions, for people without a Haskell toolchain installed
  • Provide information on how to deal with a package not included in Stackage
  • Provide a link for package authors to get involved with Stackage

Relevant Github issue with more details

More informative snapshot pages

The snapshot pages now list all packages in the snapshot, together with version numbers, synopses, and documentation links (if available). The setup instructions are also much simpler on each snapshot page.

We've also set up nicer URLs for the commonly used snapshots. In particular:

  • /nightly will take you to the latest nightly
  • /nightly/2014-12-15 will take you to the December 15, 2014 nightly
  • /lts will take you to the latest LTS
  • /lts/1 will take you to the latest LTS in the 1 series
  • /lts/1.3 will take you to LTS 1.3

Relevant Github issue with more details

More informative package pages

We've streamlined the package pages to provide the most pertinent information. Have a look for yourself. Of particular interest, we now have inline links for Haddock documentation. You can now very easily start browsing docs from just looking at a package page.

Relevant Github issue with more details

New installation instructions

There's now a dedicated installation instructions page targeted at people without a Haskell installation. This page is controlled by a Markdown file on Github, and pull requests to improve content are very much welcome!

LTS repo has started, updated Stackage codebase

I've created the LTS Haskell repo. I'm populating it with 0.X releases now as pre-release testing. To reiterate: LTS Haskell is not launched yet, and I will be holding off on an official 1.0 until January 1. So if you have packages you want in LTS, you still have two weeks to get them in.

I've also done a major overhaul of the Stackage codebase itself to make for far more reliable builds. There are lots of details involved, but they're probably not too terribly interesting to most readers. The important takeaways are:

  • Each build is now fully represented by a YAML file that contains a lot of useful metadata
  • There are completely automated executables to create new LTS and nightly releases
  • The codebase is well set up to create reusable binary package databases, if anyone's interested in doing that (I know we'll be using it at FP Complete)
Stopping future GHC 7.6/Haskell Platform builds (?)

This decision is still up for discussion, but my plan is to discontinue Stackage daily builds for GHC 7.6 and GHC 7.8 + Haskell Platform. The reasons are:

  • It puts a large burden on package authors to maintain their packages with old dependencies, which is precisely the opposite of what we want to do with LTS Haskell
  • Very few people are using GHC 7.6
  • There are some fundamental problems with the current Haskell Platform package set. I hope these are addressed - hopefully by unifying with LTS Haskell. But currently, the package sets based on Haskell Platform are inherently buggy, as they use package versions with known deficiencies.

If you're using a Haskell Platform installed toolchain now, I recommend trying out the new installation instructions to get a toolchain that will be fully compatible with both LTS Haskell and Stackage Nightly.

Future: GHC upgrade policy

One future policy decision we'll need to make is: when do we upgrade to a new version of GHC. My proposed plan is that, once we get a successful nightly build with a new GHC version, we stop generating nightlies for the old version. For LTS Haskell, we'll use whatever version of GHC is used by the nightlies at the time we take a snapshot.

The upshot of this will be that, at any given time, there will be at most two supported GHC versions by the official Stackage snapshots: whatever nightly supports, and the version used by LTS, which may be one version behind.

Categories: Offsite Blogs

A typeclass for substitutions

Haskell on Reddit - Mon, 12/15/2014 - 9:58am

A while back I built a little type inferencer for a concatenative language in the style of Cat.

I ended up with a few different datatypes to represent stack types vs. scalar types vs. type equations, etc. I'm not sure if that all ended up being reasonable, but what really puzzled me was trying to write code for substitutions without a ton of duplication (by "substitution" I mean substituting e.g. a type variable with its inferred type).

The code for it is here - I ended up with a typeclass Subst l r t, meant to indicate that within a t I can substitute an l for an r. It has a few problems, viz:

  • It requires MultiParamTypeClasses
  • It's not as extensible as I'd like it to be. E.g., it seems natural to be able to have instance (Subst l r t) => Subst l r (t, t), but the MultiParamTypeClasses rules seem to forbid this
  • I haven't seen this sort of thing done elsewhere before, so I naturally assume there's a better way to do it :)

Anyone have tips or code samples to solve this problem less hackily?
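For readers without the linked code handy, here is a minimal reconstruction of the shape being described (the names, the toy Ty type, and the instances are illustrative, not the poster's actual code). Note that the pair instance mentioned above is accepted once FlexibleInstances is enabled:

```haskell
{-# LANGUAGE MultiParamTypeClasses #-}
{-# LANGUAGE FlexibleInstances #-}

-- "within a t, substitute an l for an r"
class Subst l r t where
  subst :: l -> r -> t -> t

newtype Var = Var String deriving (Eq, Show)

data Ty = TyVar Var | Fun Ty Ty deriving (Eq, Show)

-- replace a type variable with a type, everywhere inside a type
instance Subst Var Ty Ty where
  subst v r (TyVar v') | v == v'   = r
                       | otherwise = TyVar v'
  subst v r (Fun a b)  = Fun (subst v r a) (subst v r b)

-- the pair instance the post asks about
instance (Subst l r t, Subst l r u) => Subst l r (t, u) where
  subst v r (a, b) = (subst v r a, subst v r b)
```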

submitted by babblingbree
Categories: Incoming News

Data.Foldable - Mon, 12/15/2014 - 9:01am
Categories: Offsite Blogs

Data.Functor - Mon, 12/15/2014 - 9:00am
Categories: Offsite Blogs

LambdaCon 2015: The first FP conference in Italy (28th of March, 2015)

Haskell on Reddit - Mon, 12/15/2014 - 5:23am

Hello fellow Haskellers,

This has been around for a while but never properly advertised here, so I will do it. I'm quite thrilled that Bologna (a lovely city in the north of Italy) will host LambdaCon, the first conference on FP in Italy.

Despite the location, all the talks will be in English, and I (Alfredo) will speak about "Using Haskell Professionally", the spiritual follow-up to my Road To Haskell.

So if you fancy a trip to our beautiful country to listen to a couple of talks and have a beer with me, please consider doing so!

The opening keynote will be by Bartosz Milewski - one more reason to come!


submitted by CharlesStain
Categories: Incoming News

Why QuickCheck is not more widely adopted in other languages?

Haskell on Reddit - Mon, 12/15/2014 - 5:22am

I've recently been doing some testing using QuickCheck, and even though I've known about it for a long time, only recently have I started to appreciate its power. I catch so many bugs without manually crafting examples, and I spot edge cases without even running my program.

I've started to think of QuickCheck as an essential tool, and that led me to wonder: why isn't it adopted in other languages? I know Erlang has it too, but other than that I can't see a lot of adoption elsewhere.

To me it seems like the idea is not specific to Haskell. I think we could even roll our own QuickCheck implementation on top of standard unit testing libraries, just by using standard OOP constructs (although the syntax may not be as nice as in Haskell).
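The "roll our own" idea can be sketched in miniature (my own toy code, nothing like the real library): enumerate a pile of inputs and check a property against each one. Real QuickCheck adds random generation, a typeclass of generators, and shrinking of counterexamples.

```haskell
-- a property: reversing twice is the identity
prop_reverse :: [Int] -> Bool
prop_reverse xs = reverse (reverse xs) == xs

-- crude deterministic "generator": every list up to length 3 over {0,1,2}
inputs :: [[Int]]
inputs = concatMap lists [0 .. 3]
  where
    lists :: Int -> [[Int]]
    lists 0 = [[]]
    lists n = [ x : xs | x <- [0, 1, 2], xs <- lists (n - 1) ]

-- "quickCheck" in miniature: does the property hold on all inputs?
checkAll :: ([Int] -> Bool) -> Bool
checkAll p = all p inputs
```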

So I'm wondering why it's not adopted by other languages. Any ideas? Is there anything that makes it Haskell-specific, or more useful in Haskell? Note that I'm using QC even to test IO code, and I find it very useful for non-pure code too.

submitted by semanticistZombie
Categories: Incoming News

I find Haskell has a philosophical, poetic quality to it. Who agrees with me?

Haskell on Reddit - Mon, 12/15/2014 - 3:25am
data Maybe a = Just a | Nothing

This can be read as "maybe 'something' equals just 'something' or nothing," which I find quite existential, philosophical and slightly poetic. Don't you agree?

Do you have any favourite examples like this of your own? I'd love to see some fine Haskell poetry.

submitted by The_Prodigal_Coder
Categories: Incoming News