News aggregator

Thoughts on Hakyll vs. Octopress for an experienced Haskeller but HTML newbie

Haskell on Reddit - Thu, 04/24/2014 - 11:31pm

I'm starting a personal website/blog, just the typical thing you see a lot of programmers keeping. I'm looking at using Github Pages, but as I don't want to hand-code everything, I'll be using a static site generator. I'm choosing between Hakyll and Octopress, but I wanted to get some opinions before I put in the effort of learning and installing one.

I'll start by saying that I'm comfortable with Monads, EDSLs, and Haskell in general. Conversely, I have next to no HTML/CSS skill. What I'm really wondering is, how much hand-HTML does Hakyll require you to do?

In particular:

  1. Does Hakyll have any support for visual "themes"? If not, are there libraries/examples of visual layouts I could choose from?

  2. Does Hakyll integrate well with Github pages?

  3. How hard is it to get things like a Facebook like button, twitter sidebar, comments, etc. working in Hakyll? Is there anything equivalent to Octopress's plugins?

  4. Octopress advertises itself as dealing well with code snippets. Does Hakyll deal well with code within pages?

I realize that many of these things will not be built into Hakyll, so cases where it's not built in but there are lots of easy examples to take from are welcome.

Recommendations of one over the other are welcome, but I'm more looking for feedback and knowledge from people who have used Hakyll (and Octopress, though that's more tangential to this subreddit).


EDIT: Generalized to "comments" instead of just "disqus comments"

submitted by jmite
[link] [15 comments]
Categories: Incoming News

Douglas M. Auclair (geophf): 'T' is for Theorem-proving

Planet Haskell - Thu, 04/24/2014 - 8:36pm

'T' is for Theorem-proving
So, you've been theorem-proving all your life.
Let's just get that one out there.
When you're given a math problem that says:
"Solve x + 1 = 2 (for x) (of course)"
And you say, "Why, x is 1, of course," quick as thinking, you've just proved a theorem.
Here, let me show you:
x + 1 = 2    (basis)
   -1 = -1   (reflexive)
x + 0 = 1    (addition)
x     = 1    (identity)
Q.E.D. ('cause you rock)
So, yeah. Theorem proving is this, correction: theorem proving is simply this: going from step 1, to step 2 to step 3 until you've got to where you want to be.
How do you go from step 1 to step 2, etc? The same way you do everything! You follow the rules you're given.
Let's prove a theorem, right now.
So, in classical logic, we have a theorem that says
p → p
That is, if you've got a p, that implies (that you've got a) p.
But proving that? How do we go about doing that?
Well, in classical logic, you're given three basic axioms (thanks to sigfpe for his article "Adventures in Classic Land"):
1. p → (q → p)
2. (p → (q → r)) → ((p → q) → (p → r))
3. ¬¬p → p
So, p → p. How do we get there? Let's start with axiom 1 above.
1. p → ((p → p) → p)                                  axiom 1 (where q = (p → p))
2. (p → ((p → p) → p)) → ((p → (p → p)) → (p → p))    axiom 2 (where q = (p → p) and r = p)
3. (p → (p → p)) → (p → p)                            modus ponens
4. p → (p → p)                                        axiom 1 (where q = p)
5. p → p                                              modus ponens
Q.E.D. (a.k.a. whew!)
I used something called modus ponens in the above proof. It says, basically, that if you've proved something that you're depending on, you can drop it. Or, logically:
p → q, p
∴ q
Or: we have "p implies q" and we've proved p, so now we can just use q.
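As a Haskell-flavored aside (not part of the original argument): under the Curry-Howard correspondence, the first two axioms are the familiar K and S combinators, and modus ponens is function application, so the five-step proof above can be written as a program. A sketch:

```haskell
-- Axiom 1 is the K combinator: p -> (q -> p)
axiom1 :: p -> (q -> p)
axiom1 p _ = p

-- Axiom 2 is the S combinator: (p -> (q -> r)) -> ((p -> q) -> (p -> r))
axiom2 :: (p -> (q -> r)) -> ((p -> q) -> (p -> r))
axiom2 f g p = f p (g p)

-- Modus ponens is just application, so the proof of p -> p is
-- axiom2 applied to axiom1 twice, exactly as in the five steps above.
identity :: p -> p
identity = axiom2 axiom1 axiom1
```

(Axiom 3, double negation, has no such computational reading, which is fine here: the proof never uses it.)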
Now, there's been a lot of thought put into theory, coming up with theorems and proving them, and this has been over a long time, from Aristotle up to now. The big question for a long time was that ...
Well, theorem proving is a lot of hard work. Is there any way to automate it?
So there's been this quest to make theorem proving mechanical.
But to make theorem-proving mechanical, you have to mechanize everything, the axioms, the rules, and the process. Up until recent history, theory has been a lot of great minds spouting truths, and 'oh, here's a new way to look at the world!'
And that has been great (it has), but it hasn't helped mechanize things.
Then along came Frege. What Frege did was to give us a set of symbols that represented logical connectives and then symbols that represented things.
And there you had it: when you have objects and their relations, you have an ontology, a knowledge-base. Frege provided the tools to represent knowledge as discrete things that could be manipulated by following the rules of the (symbolized) relations.
He gave abstraction to knowledge and a uniform way of manipulating those abstractions, so, regardless of the underlying knowledge, be it about money or chickens or knowledge itself, it could be represented and then manipulated.
That way you could go from step 1 to step 2 to step 3, etc, until you arrived at your conclusion, or, just as importantly, arrived at a conclusion (including one that might possibly say what you were trying to prove was inadmissible).
Since Frege, there has been (a lot of) refinement to his system, but we've been using his system ever since because it works so well. He was the one who came up with the concept of types ('T' is for Types), and from that we've improved logic to be able to deliver this blog post to you on a laptop that is, at base, a pile of sand that is constantly proving theorems in a descendant of Frege's logic.
Let's take a look at one such mechanized system. It's a logical framework called Twelf. The first example proof from the Princeton team that uses this system is 'implies true,' and it goes like this:
imp_true: pf B -> pf (A imp B) = ...
That's the declaration. It's saying: the proof of B, or pf B, implies the proof of A implies B, or pf (A imp B).
How can you claim such a thing?
Well, you prove it.
imp_true: pf B -> pf (A imp B) =
   [p1: pf B]   % you have the proof of B
                % ... but now what? for we don't have the proof that A implies B.
So the 'now what' is our theorem-proving adventure. Mechanized.
We don't have the proof that A implies B, but we have a logical loophole (called a hole), which simply asserts a proof for any claim we make:
hole: {C} pf C.
Which says, mechanized, what I just said in words. So, with that:
imp_true: pf B -> pf (A imp B) = [p1: pf B] (hole (A imp B)).
And there you go.
... that is, if you're fine with a big, gaping loophole in your proof of such a thing.
If you are satisfied, then, well, here, sit on this rusty nail until you get unsatisfied.
So there.
Okay, so we have an implication, but we need to prove that.
So we introduce the implication into the proof, which is defined as:
imp_i: (pf A -> pf B) -> pf (A imp B)
SO! we have that, and can use it:
imp_true: pf B -> pf (A imp B) = [p1 : pf B] (imp_i ([p2: pf A] (hole B))).
So, we have the proof of A and that leads to B. But wait! We already have B, don't we? It's p1 (that is [p1 : pf B])
imp_true: pf B -> pf (A imp B) = [p1 : pf B] (imp_i ([p2 : pf A] p1)).
This is what theorem-proving is: you start with what you want, e.g.:
imp_true: pf B -> pf (A imp B)
which is "implies-true is that if you have B proved, then you have the proof that (A implies B) is true, too."
Then you take what you've got:
pf B
And give it a value:
[p1 : pf B]
Then you apply your rules (in this case, implication introduction, or 'imp_i') to fill the holes in your proof, to get:
[p1: pf B] (imp_i [p2: pf A] p1)
Which is the demonstration of your proof, and that's why we say 'Q.E.D.': 'quod erat demonstrandum,' 'that which was to be demonstrated.'
Ta-dah! Now you have all the tools you need to prove theorems.
Now go prove Fermat's Last Theorem.
(I am so mean!)
Okay, okay, so maybe Fermat's Last Theorem is a bit much, but if you want to try your hand at proving some theorems, there's a list of some simple theorems to prove.
Categories: Offsite Blogs

Lenses don't compose backwards.

Haskell on Reddit - Thu, 04/24/2014 - 8:34pm

Lenses don't compose backwards. I know that there are a lot of people who believe this, but it's just not true. I'm not sure where this misunderstanding comes from, but I suspect it's from thinking of lenses as fancy field accessors. After all, a field accessor composes left to right, doesn't it? If I have t :: (a,(b,c)) and I want to get out the b element, I have to do fst.snd :: (a,(b,c)) -> b. The lens way to do this is all wonky, by comparison: while _1 corresponds to fst and _2 to snd, I have to write _2._1, which is backwards!

Except all of that is wrong. Lenses are not accessors. The right way to think about lenses is as focusers. A lens "focuses" on a particular location in a structure, to do something in that location. Let's look at another kind of focuser first: map functions. Consider the following three map functions:

map      :: (a -> b) -> [a] -> [b]
mapTree  :: (a -> b) -> Tree a -> Tree b
mapMaybe :: (a -> b) -> Maybe a -> Maybe b

Now of course these are all familiar to you and are the fmaps of the obvious functors, but look at what they do abstractly: they take an action on elements --- an a -> b --- and lift it to actions on structures containing those elements --- [a] -> [b], etc. That is to say, they focus simultaneously on all of the a elements in a structure, and then let you act on those elements. Now look at the type of a lens:

_1 :: Functor f => (a -> f b) -> (a,c) -> f (b,c)

If we ignore the functor stuff briefly, we can see that it looks an awful lot like the type of a map function. In fact, with the right choice of f, it is a map function! Namely, it would be mapFst :: (a -> b) -> (a,c) -> (b,c)

Lenses, like map functions, focus on elements. However, unlike map functions, lenses have the capacity to focus on individual elements, rather than all elements simultaneously, and they don't simply make it possible to act on the elements but also to retrieve them. The functor stuff in that type gives us options: pick f = Identity and we get a map function, pick f = Const a and we get back an accessor, etc.
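Concretely, a short sketch (over1 and view1 are made-up names, and _1 is written out by hand) showing both instantiations:

```haskell
import Data.Functor.Const (Const(..))
import Data.Functor.Identity (Identity(..))

-- the lens from the post, written out
_1 :: Functor f => (a -> f b) -> (a, c) -> f (b, c)
_1 f (a, c) = fmap (\b -> (b, c)) (f a)

-- pick f = Identity and the lens is mapFst
over1 :: (a -> b) -> (a, c) -> (b, c)
over1 f = runIdentity . _1 (Identity . f)

-- pick f = Const a and the lens is an accessor, i.e. fst
view1 :: (a, c) -> a
view1 = getConst . _1 Const
```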

But, how does this explain why lenses don't in fact compose backwards? Well, if lenses are analogous to maps, but more general, then we ought to look at how maps compose. Here's a perfectly good composition of map functions:

map.mapTree.mapMaybe :: (a -> b) -> [Tree (Maybe a)] -> [Tree (Maybe b)]

Notice how the left-most map function is associated with the outermost structure, and so forth. This makes complete sense from the perspective of maps-as-focusers: the first map focuses on elements of a [], the second on elements of a Tree, and the third on elements of a Maybe. When we finally feed it some action to perform, it pushes that action down into the focused locations, and performs it. Nothing mysterious.

But now let's look at some special other map functions:

mapHead :: (a -> a) -> [a] -> [a]
mapHead f (x:xs) = f x : xs

mapTop :: (a -> a) -> Tree a -> Tree a
mapTop f (Branch x l r) = Branch (f x) l r

mapJust :: (a -> b) -> Maybe a -> Maybe b
mapJust f (Just x) = Just (f x)

We can compose these together like so:

mapHead.mapTop.mapJust :: (a -> a) -> [Tree (Maybe a)] -> [Tree (Maybe a)]

This is also not a magical thing to do. It's just map, mapTree, and mapMaybe specialized to apply in only one location instead of all locations. So these map functions focus on only one place. This composite map mapHead.mapTop.mapJust is not mysterious at all, and definitely isn't "composed backwards".

If you agree with me so far, then you now know why lenses don't compose backwards. Why? Because, remember, if we pick our lens's f correctly, we get a map. And guess what? That's just what we've done:

_head :: Functor f => (a -> f a) -> [a] -> f [a]
_top  :: Functor f => (a -> f a) -> Tree a -> f (Tree a)
_just :: Functor f => (a -> f b) -> Maybe a -> f (Maybe b)

_head._top._just :: Functor f => (a -> f a) -> [Tree (Maybe a)] -> f [Tree (Maybe a)]

_head at f = Identity is just mapHead, _top is just mapTop, and _just is just mapJust.

The composite lens just peeks into the head of the list, into the top tree node's label, and into the element of the Maybe, and applies a function. It pushes down one layer of structure at a time, outside in. But that's not backwards at all, since these are just those very same maps focused on only one element instead of all of them, with a slightly fancier type!
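As a runnable sketch (over is a helper name not in the post, and the lenses are kept partial in exactly the same way as the one-place maps above):

```haskell
import Data.Functor.Identity (Identity(..))

data Tree a = Branch a (Tree a) (Tree a) | Tip deriving (Eq, Show)

-- one-place lenses, partial just like mapHead/mapTop/mapJust
_head :: Functor f => (a -> f a) -> [a] -> f [a]
_head f (x:xs) = fmap (: xs) (f x)

_top :: Functor f => (a -> f a) -> Tree a -> f (Tree a)
_top f (Branch x l r) = fmap (\y -> Branch y l r) (f x)

_just :: Functor f => (a -> f b) -> Maybe a -> f (Maybe b)
_just f (Just x) = fmap Just (f x)

-- picking f = Identity recovers the plain map behaviour
over :: ((a -> Identity b) -> s -> Identity t) -> (a -> b) -> s -> t
over l f = runIdentity . l (Identity . f)
```

For example, `over (_head . _top . _just) (+1)` updates the outside-in focused element of a `[Tree (Maybe a)]`.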

So how can maps compose normally, but when we rename them and make their types slightly more general, they suddenly compose "backwards"? They can't.

Lenses don't compose backwards.

submitted by psygnisfive
[link] [35 comments]
Categories: Incoming News

How to get Haskell value out of request url using Yesod

Haskell on Reddit - Thu, 04/24/2014 - 7:18pm

Suppose I have a Yesod route like:


Is there a handler that will parse a request to /foo/data to give me a SomeDataType? Surely Yesod itself must parse the route and bind a #SomeDataValue to a handler. I want to do something like that myself. Where should I look?

submitted by 2piix
[link] [2 comments]
Categories: Incoming News

Ken T Takusagawa: [ffuprlwq] Counting

Planet Haskell - Thu, 04/24/2014 - 6:45pm

import Control.Monad (liftM2);

countWith :: [a] -> [[a]];
countWith l = [] : liftM2 (flip (:)) (countWith l) l;

Enumerates all the length-zero lists, then all the length-1 lists, then the length-2 lists, and so on, for a given list of elements (the "digits"). Vaguely inspired by Perl's .. operator which creates a list, e.g., for("a".."zzz"). Unlike Perl, the lists increase little-endianly.

> countWith "01"
["", "0", "1", "00", "10", "01", "11", "000", "100", "010", "110", "001", "101", "011", "111", "0000", "1000", "0100", "1100", "0010", "1010", "0110", "1110", "0001", "1001", "0101", "1101", "0011", "1011", "0111", "1111", "00000", "10000", "01000", "11000", "00100", "10100", "01100", "11100", "00010" ...

Here is a more understandable way of writing it. The list is used as the nondeterminism monad.

countWith l = [] : do { t <- countWith l; h <- l; return $ h:t; };
Categories: Offsite Blogs

Danny Gratzer: You Could Have Invented GHC.Generics

Planet Haskell - Thu, 04/24/2014 - 6:00pm
Posted on April 25, 2014

In Haskell right now there seem to be two main approaches to data-generic programming. There's the whole Typeable/Data approach, which is a bit magic. Lately, however, there's been a new kid on the block: GHC.Generics.

In this post we’ll step through the intuition for the library and (hopefully) help shed some light on why it exists and how to use it.


Let’s imagine you, our young and brilliant Haskell hacker cranking out some code. You’ve probably gone the typesafe route and have lots and lots of types to encode invariants.

However, this proliferation of types is cramping your style a bit: you're forced to create a new function over each type, each of which seems to do exactly the same thing!

mapFoo  :: (a -> b) -> Foo a -> Foo b
mapBar  :: (a -> b) -> Bar a -> Bar b
mapQuux :: (a -> b) -> Quux a -> Quux b

But, we’re clever enough to notice that this is obviously just fmap! So we can scrap all of this with fmap and -XDeriveFunctor.

But what about other functions? There are a lot of things that are basically mechanical to define over each type: serialization, field selection, and so on and so on. Each of these operations has something in common: they deal with the structure of the types rather than their actual representation.

Selecting the first field from

data Foo a = Foo a a a
data Bar a = Bar a a a

is almost identical! The only difference is in the name. So, let’s figure out a way to talk about the structure of our types.

Dissecting an Algebraic Type

Now, when we go to dissect some type data Foo = ... we have two things to consider

  1. A list of constructors
  2. A list of fields for each constructor

Let’s start with (2) since it’s simpler. For types that are of the form

data SomeType = OneConstructor field1 field2 field3 ...

we can almost think of them as really, really big tuples.

type SomeType' = (field1, field2, field3, ...)

But, since we want to encode different numbers of fields in just one type, let’s transform this further into

type SomeType'' = (field1, (field2, (field3, ...)))

There we have it, we can encode lists of fields as a deeply nested group of tuples.

We can now imagine something like

{-# LANGUAGE TypeFamilies #-}

type family TupleForm a

data Foo a = Foo a a
type instance TupleForm (Foo a) = (a, a)

data Bar a = Bar a a
type instance TupleForm (Bar a) = (a, a)

class Tuple a where
  toT :: a -> TupleForm a
  fromT :: TupleForm a -> a

instance Tuple (Foo a) where ...
instance Tuple (Bar a) where ...

Now we can write generic functions by only writing them for the TupleForm of Foo and Bar. For example,

gfst :: (TupleForm a ~ (b, c), Tuple a) => a -> b
gfst = fst . toT
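As a runnable sketch filling in the elided instances (the post leaves them as ...; only Foo is shown, and the instance bodies are the obvious mechanical ones):

```haskell
{-# LANGUAGE TypeFamilies #-}

type family TupleForm a

data Foo a = Foo a a
type instance TupleForm (Foo a) = (a, a)

class Tuple a where
  toT :: a -> TupleForm a
  fromT :: TupleForm a -> a

-- the instance is purely mechanical: pack and unpack the fields
instance Tuple (Foo a) where
  toT (Foo x y) = (x, y)
  fromT (x, y) = Foo x y

gfst :: (TupleForm a ~ (b, c), Tuple a) => a -> b
gfst = fst . toT
```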

Now that we understand fields, let’s move on to constructors!

A list of constructors is dual to a list of fields, representing OR rather than AND. We can make a bit of a leap from this to thinking that our representations of the two should be dual too. So what would be the dual of (a, b)? Why, that would be Either a b!

This means for a type

data SomeType = Bar Int | Baz Char | Quux ()
type SomeType' = Either Int (Either Char ())

This covers almost every case; we just need to make sure we represent no-argument constructors as constructors of one argument: (). Take a moment to think why.

A Procedure for Reifying

Let’s now outline an algorithm for turning some arbitrary type to the corresponding generic version.

For a type C, with constructors C1, C2, C3.. and fields C1^1, C1^2, C2^1…

  1. Change each set Cx^* to the TupleForm, call this TupleForm Tx
  2. Nest the Tx’s in Either’s, Either T1 (Either T2 (Either T3 ...))

And that’s it, let’s practice on some data types to check that it works.

data Test = Foo Int Char | Bar Int Bool Char | Quux
type Test' = Either (Int, Char) (Either (Int, (Bool, Char)) ())

data Maybe a = Just a | Nothing
type Maybe' a = Either a ()
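These correspondences can be witnessed by hand; for example, for Maybe (toMaybe' and fromMaybe' are made-up names, using the Prelude's Maybe):

```haskell
-- round-trip witnesses for Maybe' a = Either a ()
toMaybe' :: Maybe a -> Either a ()
toMaybe' (Just a) = Left a
toMaybe' Nothing  = Right ()

fromMaybe' :: Either a () -> Maybe a
fromMaybe' (Left a)   = Just a
fromMaybe' (Right ()) = Nothing
```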

So we can see that this transformation is pretty mechanical! There’s one hiccup though: what do we do with recursive types?

We’ll handle it the same way that GHC.Generics does, we just don’t transform the recursive arguments into the generic representation lest we end up with an infinite tree.

So [a] should look like

type List a = Either (a, [a]) ()

Building a Library

Now if we want to build this into a library, we’d like to provide a few of our own data types rather than hijacking Either and (,).

{-# LANGUAGE TypeOperators #-}

data (:*:) a b = a :*: b       -- Like (,) a b
data (:+:) a b = InL a | InR b -- Like Either a b
data U = U                     -- Like (), U is for Unit

Now all our transformation are the same, but the results are prettier thanks to the type level operators

type List a = (a :*: [a]) :+: U
type Maybe' a = a :+: U

Now to facilitate generic programming, we’ll lug one more parameter through each of these constructors and add another two types to wrap meta information and constants respectively

data (:*:) a b p = a p :*: b p         -- Like (,) a b
data (:+:) a b p = L1 (a p) | R1 (b p) -- Like Either a b
data U p = U                           -- Like (), U is for Unit
newtype M1 i c f p = M1 (f p)          -- i and c are meta info
newtype K1 i c p = K1 c

Now because we’re expecting all our arguments to (:*:) and (:+:) to be of kind * -> * we use K1 to wrap a normal type like Int so that it can take an argument.

M1 is a bit odd, it’s used to store information about our data entirely in phantom types. We can imagine having a bunch of types that represent different things, like whether the tree of constructors represents such and such data type or what constructor we’re dealing with. It’s not terribly relevant to the rest of this post, but useful in some odd cases.
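To see the parameterized constructors in action, here is a hand-rolled representation of Maybe (a sketch: the () filling the i slot is just a placeholder, and toRep/fromRep are illustrative names):

```haskell
{-# LANGUAGE TypeOperators #-}

data (:+:) a b p = L1 (a p) | R1 (b p)
data U p = U
newtype K1 i c p = K1 c

-- Maybe a reified as (K1 () a :+: U) p
toRep :: Maybe a -> (K1 () a :+: U) p
toRep (Just a) = L1 (K1 a)
toRep Nothing  = R1 U

fromRep :: (K1 () a :+: U) p -> Maybe a
fromRep (L1 (K1 a)) = Just a
fromRep (R1 U)      = Nothing
```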

Now we can repeat the transformation we discussed earlier, just using the new constructors instead. We can imagine wrapping this whole thing up in a class like this

class Generic a where
  type family Rep a :: * -> *
  from :: a -> Rep a p
  to :: Rep a p -> a

This is very much in the spirit of our Tuple type class, but now our type family returns something of kind * -> *, leaving room for our extra p parameter.

The Real Deal

As clever readers will have noticed, the above type class is precisely what GHC.Generics exports! We have successfully come full circle and arrived at GHC.Generics' API.

The only difference between us and GHC.Generics is that their Generic class can be derived automatically, by an algorithm almost identical to ours. One slight difference: rather than a "list" of :*:'s or :+:'s they build a tree, but this makes little difference to most programs.

To wrap things up, let’s finish by showcasing making a simple generic debugging dumper.

To begin with, we’ll define a class GDump and will make instances for the GHC.Generics types

class GDump a where
  gdump :: a -> String

instance GDump (U1 p) where
  gdump U1 = "()"

instance Show c => GDump (K1 i c p) where
  gdump (K1 c) = show c

instance (GDump (f p), GDump (g p)) => GDump ((:*:) f g p) where
  gdump (a :*: b) = "(" ++ gdump a ++ " :*: " ++ gdump b ++ ")"

instance (GDump (f p), GDump (g p)) => GDump ((:+:) f g p) where
  gdump (L1 a) = "(Left " ++ gdump a ++ ")"
  gdump (R1 a) = "(Right " ++ gdump a ++ ")"

instance (GDump (f p)) => GDump (M1 a b f p) where
  gdump (M1 f) = gdump f

And now we can create a class for “normal” values and use -XDefaultSignatures to give the default implementation a Generic constraint

class Dump a where
  dump :: a -> String

  default dump :: (Generic a, GDump (Rep a ())) => a -> String
  dump a = gdump (from' a)
    where
      from' :: Generic a => a -> Rep a ()
      from' = from -- A hack to stop the type checker from whining about p

And now we can just use this default implementation.

instance Show a => Dump (Maybe a)

Using this we can print suitably boring representations of generic types, for free!
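Assembled into one self-contained module (a sketch using the real GHC.Generics names; Pair is an invented example type):

```haskell
{-# LANGUAGE DefaultSignatures #-}
{-# LANGUAGE DeriveGeneric #-}
{-# LANGUAGE FlexibleContexts #-}
{-# LANGUAGE FlexibleInstances #-}
{-# LANGUAGE TypeOperators #-}

import GHC.Generics

-- dump the generic representation types
class GDump a where
  gdump :: a -> String

instance GDump (U1 p) where
  gdump U1 = "()"

instance Show c => GDump (K1 i c p) where
  gdump (K1 c) = show c

instance (GDump (f p), GDump (g p)) => GDump ((f :*: g) p) where
  gdump (a :*: b) = "(" ++ gdump a ++ " :*: " ++ gdump b ++ ")"

instance (GDump (f p), GDump (g p)) => GDump ((f :+: g) p) where
  gdump (L1 a) = "(Left " ++ gdump a ++ ")"
  gdump (R1 b) = "(Right " ++ gdump b ++ ")"

instance GDump (f p) => GDump (M1 i c f p) where
  gdump (M1 f) = gdump f

-- the user-facing class, with a generic default
class Dump a where
  dump :: a -> String
  default dump :: (Generic a, GDump (Rep a ())) => a -> String
  dump a = gdump (from' a)
    where
      from' :: Generic b => b -> Rep b ()
      from' = from -- pin the p parameter to ()

instance Show a => Dump (Maybe a)

data Pair = Pair Int Bool deriving Generic
instance Dump Pair
```

With this, `dump (Just 'x')` and `dump (Pair 3 True)` print the boring structural views of their arguments, for free.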

Categories: Offsite Blogs

Christopher Done: Prescience

Planet Haskell - Thu, 04/24/2014 - 6:00pm
Categories: Offsite Blogs

Joachim Breitner: gtk-vector-screenshot screencast

Planet Haskell - Thu, 04/24/2014 - 4:07pm

I recently installed Piwik on my website, to get some idea about how it is being used. Piwik is a very nice and shiny tool, although it is slightly scary to watch people walk through your website, click for click. Anyways, I now know that I get around 50 visitors per day.

But I also learn where they are coming from, and this way I noticed that “Shnatsel” did a very nice screencast of my gtk-vector-screenshot tool – if I had not created it myself I would think it was fake.

Categories: Offsite Blogs

Typeable and Data in Haskell - Thu, 04/24/2014 - 12:32pm
Categories: Offsite Blogs

Functional Jobs: functional software developer at OpinionLab (Full-time)

Planet Haskell - Thu, 04/24/2014 - 12:10pm

OpinionLab is seeking a Software Developer with strong agile skills to join our Chicago, IL based Product Development team in the West Loop.

As a member of our Product Development team, you will play a critical role in the architecture, design, development, and deployment of OpinionLab's web-based applications and services. You will be part of a high-visibility agile development team empowered to deliver high-quality, innovative, and market leading voice-of-customer (VoC) data acquisition and feedback intelligence solutions. If you thrive in a collaborative, fast-paced, get-it-done environment and want to be a part of one of Chicago's most innovative companies, we want to speak with you!

Key Responsibilities include:

  • Development of scalable data collection, storage, processing & distribution platforms & services.
  • Architecture and design of a mission critical SaaS platform and associated APIs.
  • Usage of and contribution to open-source technologies and framework.
  • Collaboration with all members of the technical staff in the delivery of best-in-class technology solutions.
  • Proficiency in Unix/Linux environments.
  • Work with UX experts in bringing concepts to reality.
  • Bridge the gap between design and engineering.
  • Participate in planning, review, and retrospective meetings (à la Scrum).

Desired Skills & Experience:

  • BDD/TDD, Pair Programming, Continuous Integration, and other agile craftsmanship practices
  • Desire to learn Clojure (if you haven't already)
  • Experience with both functional and object-oriented design and development within an agile environment
  • Polyglot programmer with mastery of one or more of the following languages: Lisp (Clojure, Common Lisp, Scheme), Haskell, Scala, Python, Ruby, JavaScript
  • Knowledge of one or more of: AWS, Lucene/Solr/Elasticsearch, Storm, Chef

Get information on how to apply for this position.

Categories: Offsite Blogs

Tour of the Haskell Syntax - Thu, 04/24/2014 - 11:58am
Categories: Offsite Blogs

Pointwise Lenses

Haskell on Reddit - Thu, 04/24/2014 - 10:09am
Categories: Incoming News

Chris Reade: Repa Laplace and SOR

Planet Haskell - Thu, 04/24/2014 - 9:54am
Using the Repa Library for Laplace’s Equation and SOR

This describes the result of some experiments to enhance an existing elegant (Haskell parallel array) Laplace solver by using successive over-relaxation (SOR) techniques. The starting point was a Laplace solver based on stencil convolution, using the Haskell Repa library (Regular Parallel Arrays), as reported in a paper by Lippmeier, Keller and Peyton Jones. (There is also a published 2011 paper by the first two authors (Lippmeier and Keller 2011).)

Background Laplace’s Equation and Iterative solvers

The Laplace equation

∂²u/∂x² + ∂²u/∂y² = 0

is usually abbreviated to simply ∇²u = 0, where u is a scalar potential (such as temperature) over a smooth surface.

For numerical solutions we will restrict attention to a rectangular surface with some fixed boundary values for u. (The boundary is not necessarily the border of the rectangle, as we can allow for fixed values at internal regions as well.) The problem is made discrete for finite difference methods by imposing a grid of points over the surface, and here we will assume this is spaced evenly (Δx in the x direction and Δy in the y direction). Approximating solutions numerically amounts to approximating a fixed point for the grid values u(i,j) satisfying the boundary values and also, for non-boundary points (i,j), satisfying

u(i,j) = (Δy²·(u(i-1,j) + u(i+1,j)) + Δx²·(u(i,j-1) + u(i,j+1))) / (2·(Δx² + Δy²))

If we also assume Δx = Δy then this simplifies to

u(i,j) = (u(i-1,j) + u(i+1,j) + u(i,j-1) + u(i,j+1)) / 4

Iterating to find this fixed point from some suitable starting values is called relaxation. The simplest (Jacobi) method involves iterating the calculation of new values from previous values until the iterations are close enough to converging.

This method, whilst very simple, can be extremely slow to converge. One improvement is given by the Gauss-Seidel method, which assumes the next set of values in each iteration is calculated row by row in a scan across the columns, allowing some new values to be used earlier during the same iteration step (substituting the newly calculated values for the old ones in the previous column and the previous row).

Unfortunately, this assumption about the order of evaluation of array elements interferes with opportunities to use parallel computation of the array elements (as we wish to do using Repa array primitives).

Successive over-relaxation (SOR)

Successive over-relaxation is a method used to speed up convergence of the iterations by using a weighted sum of the previous iteration with the new one at each relaxation step. Using ω as the weight, we first calculate a candidate value u'(i,j) (using the relaxation equation above), then calculate

u''(i,j) = (1 - ω)·u(i,j) + ω·u'(i,j)

Values of ω less than 1 will slow down the convergence (under-relaxation). A value of ω between 1 and 2 should speed up convergence. In fact the optimal value for ω will vary for each iteration, but we will work with constant values only here. It is also important to note that starting with values of ω above 1 will generally cause oscillations (divergence) if just the Jacobi scheme is used, so successive over-relaxation relies on a speed-up of the basic iteration such as that provided by the Gauss-Seidel scheme.
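As a scalar sketch (plain Haskell rather than Repa; the names are illustrative), one SOR update at a single grid point looks like:

```haskell
-- u is the current grid, read as a function of coordinates;
-- omega is the relaxation weight
sorUpdate :: Double -> (Int -> Int -> Double) -> Int -> Int -> Double
sorUpdate omega u i j = (1 - omega) * u i j + omega * relaxed
  where
    -- the basic four-neighbour average from the Laplace discretisation
    relaxed = (u (i-1) j + u (i+1) j + u i (j-1) + u i (j+1)) / 4
```

With omega = 1 this reduces to the plain relaxation step.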

The original stencil version using Repa

In the paper a stencil mapping technique is used to get fast performance using the Repa array primitives. The code uses version 3 of the Repa library.

The main relaxation step is expressed as

relaxLaplace arr
  = computeP
  $ szipWith (+) arrBoundValue   -- boundary setting
  $ szipWith (*) arrBoundMask    -- boundary clearing
  $ smap (/ 4)
  $ mapStencil2 (BoundConst 0)
      [stencil2| 0 1 0
                 1 0 1
                 0 1 0 |] arr

We will be taking this as a starting point, so we review details of this (working from the bottom up).

A stencil is described in a quoted form to express which neighbours are to be added (using zeros and ones) with the element being scanned assumed to be the middle item. Thus this stencil describes adding the north, west, east, and south neighbours for each item. The mapStencil2 is a Repa array primitive which applies the stencil to all elements of the previous iteration array (arr). It is also supplied with a border directive (BoundConst 0) which just says that any stencil item arising from indices outside the array is to be treated as 0. This stencil mapping will result in a non-manifest array, in fact, in a partitioned array split up into border parts and non-border parts to aid later parallel computation. After the stencil map there is a mapping across all the results to divide by 4 using smap (which improves over the basic Repa array map for partitioned arrays). These calculations implement the Jacobi scheme.

The subsequent two szipWith operations are used to reset the fixed boundary values. (The primitive szipWith again improves over the basic Repa array zipWith by catering explicitly for partitioned arrays.) More specifically, szipWith (*) is used with an array (arrBoundMask) which has zero for boundary positions and 1 for other positions – thus setting the boundary items to 0. After this, szipWith (+) is used with an array (arrBoundValue) which has the fixed initial values for the boundary items and zero for all other items. Thus the addition reinstates the initial boundary values.

These array calculations are all delayed so a final computeP is used to force the parallel evaluations of items to produce a manifest array result. This technique allows a single pass with inlining and fusion optimisations of the code to be possible for the calculation of each element. Use of computeP requires the whole calculation to be part of a monad computation. This is a type constraint for the primitive to exclude the possibility of nested calls of computeP overlapping.

A significant advantage of this stencil implementation is that programming directly with array indices is avoided (these are built into the primitives once and for all and are implemented to avoid array bound checks being repeated unnecessarily).

Unfortunately, though, this implementation is using the Jacobi scheme for the iteration steps which is known to have very slow convergence.

We would like to improve on this by using successive over-relaxation, but as pointed out earlier, this will not work with the Jacobi scheme. Furthermore, the Gauss-Seidel improvement will not help because it is based on retrieving values from an array still being updated. This is not compatible with functional array primitives and stencils and prevents simple exploitation of parallel array calculations.

To the rescue – the red-black scheme for calculating iterations.

Red-black Iterations

This scheme is well known and based on the observation that on a red-black checkerboard, for any red square, the north, west, east, and south neighbours are all black and conversely neighbours of black squares are all red. This leads to the simple idea of calculating updates to all the red squares first, then using these updated values to calculate the new black square values.

Implementation using stencils and repa

The set up

We split the original array into a red array and a black array with simple traverse operations. We assume the original array (arr) has an even number of columns, so the numbers of red and black cells are the same. As a convention we take the (0,0) item to be red, so for the red array (r) we want

r(i,j) = arr(i, 2*j + (i `mod` 2))


arr has i <- 0..n-1, j <- 0..2m-1
r   has i <- 0..n-1, j <- 0..m-1

The traverse operation from the Repa library takes as arguments the original array, a mapping to express the change in shape, and a lookup function to calculate values in the new array (given a get operation for the original array and a coordinate (i,j) in the new array).

> projectRed :: Array U DIM2 Double -> Array D DIM2 Double
> projectRed arr =
>   traverse arr
>            (\ (e :. i :. j) -> (e :. i :. (j `div` 2)))
>            (\ get (e :. i :. j) -> get (e :. i :. 2*j + (i `mod` 2)))

Here (and throughout) we have restricted the type to work with two dimensional arrays of Double, although a more general type is possible. Notice also that the argument array is assumed to be manifest (U) and the result is a delayed array (D).

Similarly for the black array (b) we want

b(i,j) = arr(i, 2*j + ((i+1) `mod` 2))

with the same extent (shape) as for r, hence

> projectBlack :: Array U DIM2 Double -> Array D DIM2 Double
> projectBlack arr =
>   traverse arr
>            (\ (e :. i :. j) -> (e :. i :. (j `div` 2)))
>            (\ get (e :. i :. j) -> get (e :. i :. 2*j + ((i+1) `mod` 2)))
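As a quick sanity check on these two column mappings (plain Haskell, independent of Repa; the helper names are ours), for every row the red and black columns together cover 0..2m-1 exactly once:

```haskell
import Data.List (sort)

-- The two column mappings behind projectRed and projectBlack:
redIdx, blackIdx :: Int -> Int -> Int
redIdx   i j = 2*j + (i `mod` 2)
blackIdx i j = 2*j + ((i+1) `mod` 2)

-- For each row i, the red and black columns partition 0..2m-1.
coversRow :: Int -> Int -> Bool
coversRow m i =
  sort (map (redIdx i) [0..m-1] ++ map (blackIdx i) [0..m-1]) == [0 .. 2*m-1]
```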

We can also use these same functions to set up boundary mask and value arrays separately for red and black from the starting array (arrInit), initial mask (arrBoundMask), and boundary values (arrBoundValue)

do redBoundMask    <- computeP $ projectRed   arrBoundMask
   blackBoundMask  <- computeP $ projectBlack arrBoundMask
   redBoundValue   <- computeP $ projectRed   arrBoundValue
   blackBoundValue <- computeP $ projectBlack arrBoundValue
   redInit         <- computeP $ projectRed   arrInit
   blackInit       <- computeP $ projectBlack arrInit

This is part of a monad computation with each step using a computeP (parallel array computation) to create manifest versions of each of the arrays we need before beginning the iterations. These calculations are independent, so the sequencing is arbitrary.

The finish

At the end of the iterations we will reverse the split into red and black by combining using the traverse2 operation from the Repa library. We need

arr(i,j) = r(i, j `div` 2)   when even(i+j)
         = b(i, j `div` 2)   otherwise


arr has i <- 0..n-1, j <- 0..2m-1
r   has i <- 0..n-1, j <- 0..m-1
b   has i <- 0..n-1, j <- 0..m-1

The traverse2 operation takes two arrays (here – of the same extent), a mapping to express the new extent (when given the two extents of the argument arrays), and a function to calculate values in the new array (when given the respective get operations for the original arrays (here – get1 and get2) and a coordinate (i,j) in the new array).

> combineRB r b =
>   traverse2 r b
>             (\ (e :. i :. j) _ -> (e :. i :. 2*j))
>             (\ get1 get2 (e :. i :. j) ->
>                 (if even (i+j) then get1 else get2) (e :. i :. j `div` 2)
>             )

Stencils in the iteration step

We use stencils as in the original, but when we consider a stencil on black cells which are to be combined as neighbours of the red cell r(i,j) we need the following shape, where b(i,j) corresponds to the middle item

0 1 0
1 1 0
0 1 0

BUT this is only when processing EVEN rows from the black array. For odd rows we need a different shape

0 1 0
0 1 1
0 1 0

That is, we need to apply one of two different stencils depending on whether the row is even or odd. Similarly in processing the red array to combine red neighbours of b(i,j) we need the same shaped stencils but the first shape above is used on odd rows of red cells and the second shape on even rows. We define and name the stencils as leftSt and rightSt

> leftSt :: Stencil DIM2 Double
> leftSt = [stencil2| 0 1 0
>                     1 1 0
>                     0 1 0 |]

> rightSt :: Stencil DIM2 Double
> rightSt = [stencil2| 0 1 0
>                      0 1 1
>                      0 1 0 |]

Critically, we will need an efficient way to apply alternate stencils as we map across an array. At first it may seem that we would have to rebuild a version of the primitive mapStencil2 to accommodate this, but that would drag us into the complex array-representation handling built into that primitive. On reflection, though, the combination of lazy evaluation and the compiler's inlining/fusion optimisations should allow us simply to apply both stencils to all elements and then choose the results we actually want. Laziness should ensure that the unwanted stencil applications are never actually calculated, and the administration of choosing should be simplified away by compiler optimisations.

To apply our stencils we will use the Repa array primitive mapStencil2, which takes as arguments a border-handling directive, a stencil, and a (manifest) array, and produces a resulting (delayed) array.

mapStencil2 :: Boundary Double
            -> Stencil DIM2 Double
            -> Array U DIM2 Double
            -> Array D DIM2 Double

The choice of stencil will depend on the position (actually the evenness of the row). As we cannot refer to the indices when using the stencil mapping operation we are led to using traverse2 after mapping both stencils to select results from the two arrays produced. Our alternate stencil mapping operation will have a similar type to mapStencil2 except that it expects two stencils rather than one.

altMapStencil2 :: Boundary Double
               -> Stencil DIM2 Double
               -> Stencil DIM2 Double
               -> Array U DIM2 Double
               -> Array D DIM2 Double
altMapStencil2 !bd !s1 !s2 !arr =
  traverse2 (mapStencil2 bd s1 arr)
            (mapStencil2 bd s2 arr)
            (\ e _ -> e)
            (\ get1 get2 (e :. i :. j) ->
                (if even i then get1 else get2) (e :. i :. j)
            )

This function needs to be inlined (with a pragma) and the bang annotations on arguments are there to improve optimisation opportunities.
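The row-parity selection performed by the traverse2 inside altMapStencil2 can be modelled on plain lists (an illustrative sketch of ours, not the Repa code):

```haskell
-- List model of the even/odd row selection done by the traverse2 in
-- altMapStencil2: rows with even index come from the first array,
-- rows with odd index from the second.
altRowsModel :: [[a]] -> [[a]] -> [[a]]
altRowsModel xs ys =
  [ if even i then x else y | (i, (x, y)) <- zip [(0 :: Int) ..] (zip xs ys) ]
```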

The iteration step

The main relaxLaplace iteration step involves the following (where r and b are the previous red and black arrays)

  • Calculating a new version of r using stencils over b

      r1 = smap (/4) $ altMapStencil2 (BoundConst 0) leftSt rightSt b

  • Over-relaxation of this, taking weighted sums (using r and r1)

      r2 = szipWith weightedSum r r1
           where weightedSum old new = (1-omega)*old + omega*new

  • Reset the boundaries to get the new version of r (r')

      r' = szipWith (+) redBoundValue     -- boundary resetting
         $ szipWith (*) redBoundMask r2   -- boundary clearing

  • Do similar steps for calculating b', but starting with stencils over r' (not r) and switching the left and right stencils

This is combined in the following monad computation (where r and b are the old arrays). The monad is necessary because we want to use computeP to ensure that the final returned arrays are manifest.

do r' <- computeP $ relaxStep r b redBoundValue redBoundMask leftSt rightSt
   b' <- computeP $ relaxStep b r' blackBoundValue blackBoundMask rightSt leftSt
   ...

which uses

relaxStep !arrOld !arrNbs !boundValue !boundMask !stencil1 !stencil2 =
  szipWith (+) boundValue
  $ szipWith (*) boundMask
  $ szipWith weightedSum arrOld
  $ smap (/4)
  $ altMapStencil2 (BoundConst 0) stencil1 stencil2 arrNbs

weightedSum !old !new = (1-omega)*old + omega*new

The first argument for relaxStep is the old (red or black) array we are calculating an update for, and the second argument is the neighbour array to use stencils on. The old array is needed for the over-relaxation step with weightedSum.

It is also worth pointing out that the over-relaxation step is done before the two boundary-resetting steps, but it could just as easily be done after the boundary resetting, as this makes no difference to the final array produced.
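The reason the order does not matter is that the weights (1-omega) and omega sum to 1, so the weighted sum commutes with the affine boundary map. A per-element check (our illustration, not code from the article):

```haskell
-- Per-element versions of the two orderings. Because (1-w) + w = 1, the
-- weighted sum commutes with the affine map x -> bv + bm*x, so both give
-- the same result for any omega, mask, and boundary value.
relaxThenReset, resetThenRelax
  :: Double -> Double -> Double -> Double -> Double -> Double
relaxThenReset w bv bm old new = bv + bm * ((1-w)*old + w*new)
resetThenRelax w bv bm old new = (1-w)*(bv + bm*old) + w*(bv + bm*new)
```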

Laplace with Red-Black and Stencils

Finally the main function (solveLaplace) sets up the initial arrays and passes them to the function with the main loop (iterateLaplace)

> solveLaplace ::
>   Monad m
>   => Int                  -- ^ Number of iterations to use.
>   -> Double               -- ^ Weight for over-relaxing (>0.0 and <2.0).
>   -> Array U DIM2 Double  -- ^ Boundary value mask.
>   -> Array U DIM2 Double  -- ^ Boundary values.
>   -> Array U DIM2 Double  -- ^ Initial state. Should have even number of columns.
>   -> m (Array U DIM2 Double)
> solveLaplace !steps !omega !arrBoundMask !arrBoundValue !arrInit =
>   do redBoundMask    <- computeP $ projectRed   arrBoundMask
>      blackBoundMask  <- computeP $ projectBlack arrBoundMask
>      redBoundValue   <- computeP $ projectRed   arrBoundValue
>      blackBoundValue <- computeP $ projectBlack arrBoundValue
>      redInit         <- computeP $ projectRed   arrInit
>      blackInit       <- computeP $ projectBlack arrInit
>      iterateLaplace steps omega redInit blackInit
>                     redBoundValue blackBoundValue redBoundMask blackBoundMask


iterateLaplace !steps !omega !redInit !blackInit
               !redBoundValue !blackBoundValue !redBoundMask !blackBoundMask
  = go steps redInit blackInit
    where
      go 0 !r !b = computeP $ combineRB r b -- return final combined array
      go n !r !b
        = do r' <- computeP
                   $ relaxStep r b redBoundValue redBoundMask leftSt rightSt
             b' <- computeP
                   $ relaxStep b r' blackBoundValue blackBoundMask rightSt leftSt
             go (n - 1) r' b'

      {-# INLINE relaxStep #-}
      relaxStep !arrOld !arrNbs !boundValue !boundMask !stencil1 !stencil2
        = szipWith (+) boundValue
          $ szipWith (*) boundMask
          $ szipWith weightedSum arrOld
          $ smap (/4)
          $ altMapStencil2 (BoundConst 0) stencil1 stencil2 arrNbs

      {-# INLINE weightedSum #-}
      weightedSum !old !new = (1-omega)*old + omega*new

{-# INLINE iterateLaplace #-}

Performance and a New Version

The number of calculations in an iteration of red-black is comparable to the original stencil implementation (with just the weighting operations added in). Although there are two arrays to update, they are half the size of the original, so we would expect optimised performance to be only fractionally slower than the original. The speedup in progress towards convergence can be dramatic (one iteration per 8 of the original for our test examples), so in principle this would be a big improvement. Unfortunately, optimisation did not seem to achieve the same speedups as in the original, and the code was roughly 12 times slower.

After much experimentation it looked as though the inner traverse2 operation of altMapStencil2 was inhibiting the optimisations (fusions with stencil mapping code and subsequent maps and zips).

A better performance was achieved by separating the stencil mapping from the traverse2 for alternate row selection and delaying the alternate row selection until after all the other operations. The new version drops altMapStencil2 and instead simply uses altRows

> altRows :: forall r1 r2 a . (Source r1 a, Source r2 a)
>         => Array r1 DIM2 a -> Array r2 DIM2 a -> Array D DIM2 a
>
> altRows !arr1 !arr2 =   -- assumes argument arrays with the same shape
>   traverse2 arr1 arr2
>             (\ e _ -> e)
>             (\ get1 get2 e@(_ :. i :. _) ->
>                 if even i then get1 e else get2 e
>             )
>
> {-# INLINE altRows #-}

The function relaxStep in the following revised version of iterateLaplace has altRows done last, with mapStencil2 applied first (using a different stencil in each alternative)

> iterateLaplace ::
>   Monad m
>   => Int
>   -> Double
>   -> Array U DIM2 Double
>   -> Array U DIM2 Double
>   -> Array U DIM2 Double
>   -> Array U DIM2 Double
>   -> Array U DIM2 Double
>   -> Array U DIM2 Double
>   -> m (Array U DIM2 Double)
>
> iterateLaplace !steps !omega !redInit !blackInit
>                !redBoundValue !blackBoundValue !redBoundMask !blackBoundMask
>   = go steps redInit blackInit
>     where
>       go 0 !r !b = computeP $ combineRB r b -- return final combined array
>       go n !r !b
>         = do r' <- computeP
>                    $ relaxStep r b redBoundValue redBoundMask leftSt rightSt
>              b' <- computeP
>                    $ relaxStep b r' blackBoundValue blackBoundMask rightSt leftSt
>              go (n - 1) r' b'
>
>       {-# INLINE relaxStep #-}
>       relaxStep !arrOld !arrNbs !boundValue !boundMask !stencil1 !stencil2
>         = altRows (f stencil1) (f stencil2)
>           where
>             {-# INLINE f #-}
>             f s = szipWith (+) boundValue
>                   $ szipWith (*) boundMask
>                   $ szipWith weightedSum arrOld
>                   $ smap (/4)
>                   $ mapStencil2 (BoundConst 0) s arrNbs
>
>       {-# INLINE weightedSum #-}
>       weightedSum !old !new = (1-omega)*old + omega*new
>
> {-# INLINE iterateLaplace #-}

This now seems only a little slower to run than the original stencil solution (about 4 times slower, hence about 3 times faster than the previous red-black version). Thus, with an approximately 8-fold speedup in convergence, this does give an overall improvement.

In order to compile this version, it was necessary to use not just the ghc -O2 flag, but also -fsimpl-tick-factor=1000. The need for this flag was indicated by a ghc compiler bug message.
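A build invocation consistent with this might look as follows (a sketch, not taken from the article; the file name is the wrapper mentioned in the text, and -threaded/-rtsopts are the usual flags for running Repa's parallel computations):

```shell
# Hypothetical build line; adjust the file name to your project.
# -threaded and -rtsopts allow the parallel array computations to be
# run on multiple cores, e.g. with +RTS -N4.
ghc -O2 -fsimpl-tick-factor=1000 -threaded -rtsopts --make newWrapper.hs
```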

All the code above with birdfeet ( >) in this (incomplete) literate haskell document is essentially that in the module RedBlackStencilOpt.hs (which has some extra preliminaries and imports and checking of arguments in functions which were elided here for clarity). This code can be found here along with the module RedBlackStencil.hs which contains the first version. These can both be loaded by the wrapper newWrapper.hs which is just an adaptation of the original wrapper to allow for passing the extra parameter.

Acknowledgements and References

There are many numerical methods books covering relevant background mathematics, but online, there are also lecture notes. In particular, on Computational Numerical Analysis of Partial Differential Equations by J.M.McDonough and on Numerical Solution of Laplace Equation by G.E.Urroz. T. Kadin has some tutorial notes with a diagram for the red-black scheme (and an implementation using Fortran 90). There is a useful Repa tutorial online and more examples on parallel array fusion are reported in (Lippmeier et al. 2012).

Lippmeier, Ben, and Gabriele Keller. 2011. “Efficient Parallel Stencil Convolution in Haskell.” In Proceedings of the 4th ACM Symposium on Haskell, 59–70. Haskell ’11. New York, NY, USA: ACM. doi:10.1145/2034675.2034684.

Lippmeier, Ben, Manuel Chakravarty, Gabriele Keller, and Simon Peyton Jones. 2012. “Guiding Parallel Array Fusion with Indexed Types.” SIGPLAN Not. 47 (12) (September): 25–36. doi:10.1145/2430532.2364511.

Categories: Offsite Blogs

Template project for web dev?

Haskell on Reddit - Thu, 04/24/2014 - 8:54am

I'm looking for a sort of template application/architecture for web dev using haskell. The scotty-starter repository is a nice example of this - though lacking any form of database persistence. I would love to know if there are any similar projects for other frameworks or more complete projects, but my google-fu is failing me here, hence this post.

The goal is to have a small sample application which demonstrates a good architectural style/convention to follow.

submitted by Madsn
[link] [8 comments]
Categories: Incoming News