News aggregator

First call for papers IFL 2014

haskell-cafe - Thu, 05/15/2014 - 10:56am
Hello, Please, find below the first call for papers for IFL 2014. Please forward these to anyone you think may be interested. Apologies for any duplicates you may receive. best regards, Jurriaan Hage Publicity Chair of IFL --- CALL FOR PAPERS 26th SYMPOSIUM ON IMPLEMENTATION AND APPLICATION OF FUNCTIONAL LANGUAGES - IFL 2014 NORTHEASTERN UNIVERSITY/BOSTON, USA OCTOBER 1-3, 2014 We are pleased to announce that the 26th edition of the IFL series will be held at Northeastern University in Boston, USA. The symposium will be held from 1st to 3rd of October 2014. Scope ----- The goal of the IFL symposia is to bring together researchers actively engaged in the implementation and application of functional and function-based programming languages. IFL 2014 will be a venue for researchers to present and discuss new ideas and concepts, work in progress, and publication-ripe results related to the implementation and application of functional languages and function-based programming.
Categories: Offsite Discussion

[JOB] Compiler engineer at Jane Street

General haskell list - Thu, 05/15/2014 - 10:24am
Jane Street is looking to hire an experienced compiler engineer to work on improving the OCaml compiler. The focus would be on performance-related improvements to the compiler and runtime system. The job would also include working on other aspects of the compiler and the supporting toolchain including our internal development tools. We're particularly interested in people with experience in areas like optimization, GC and language runtimes, and are happy to consider candidates who are not (yet) OCaml experts. The position would be full-time, and could be based in either London or New York. If you're interested (or know someone I should reach out to), please email me directly, at yminsky< at >
Categories: Incoming News

Problem while installing multiple haskell platform

Haskell on Reddit - Thu, 05/15/2014 - 8:27am

I have been trying to configure multiple Haskell Platforms through a script. I managed to isolate the installation directories using --prefix, but they all refer to the same package database at $HOME/.ghc/arch-os-7.6.3/package.conf.d.

How do I change the default path of .ghc while installing the Haskell Platform, so that I can maintain a separate package database per installation?

Any help would be really appreciated.

submitted by skymax786
Categories: Incoming News

ANN: fsnotify-0.1

haskell-cafe - Thu, 05/15/2014 - 8:03am
A new version of fsnotify, a cross-platform filesystem notification library, is available. The main user-visible changes since the last version are:

* ability to stop a listening job
* ability to set a polling interval (when polling is used)
* a couple of small semantic changes (see the changelog for details)
* improved documentation

Plus, there was a fair bit of internal refactoring, including a new test suite. There's still a long-standing bug about wrong event types being generated on OS X. We know how to fix it (basically, use kqueue instead of OS X's own API), but I don't have time to do it. So if anyone is interested, help would be welcome!

Thanks to Miguel Mitrofanov for providing access to an OS X host, which was extremely valuable while working on and testing this version.

Roman
Categories: Offsite Discussion

Haskell.SG meetup in May

Haskell on Reddit - Thu, 05/15/2014 - 7:51am
Categories: Incoming News

Chris Reade: Multigrid Methods with Repa

Planet Haskell - Thu, 05/15/2014 - 7:31am
Multigrid Methods with Repa

Introduction

We implement the multigrid and full multigrid schemes using the Haskell parallel array library Repa. Whilst this file is a literate Haskell program it omits some preliminaries. The full code can be found on GitHub at multiGrid.

Repa (short for ‘regular parallel arrays’) is a Haskell library for efficient, purely functional computation with arrays. It allows parallel computation to be expressed with a single primitive (computeP). This is possible because we only use immutable arrays, so calculations are deterministic without any need to worry about locks, mutual exclusion, race conditions, etc.

The multigrid and full multigrid schemes are used for approximating solutions of partial differential equations but they are strategies rather than solvers. They can be used to drastically improve more basic solvers provided convergence and error analysis are considered for the basic solver.

We only consider linear partial differential equations, and the Poisson equation in particular. Linear partial differential equations can be summarised in the form

A u = f

Here, A is a linear operator (such as the Laplace operator, or higher or lower order partial derivatives, or any linear combination of these), f is given, and u is the function we want to solve for; both are defined on a region Ω. The multigrid scheme starts with a fine grid with grid spacing h on which we want to obtain an approximate solution by finite difference methods. The scheme involves the use of a coarser grid (e.g. with spacing 2h) and, recursively, a stack of coarser and coarser grids to apply error correction on approximations. The full multigrid scheme also uses coarser grids to improve initial approximations used by multigrid.

Outline of Coarse Grid Error Correction

Assume we have a basic method to approximate solutions, which we call ‘approximately solve’. Then the scheme for coarse grid correction of errors is

  1. Take an initial guess u0 on the fine grid (spacing h) and use it to approximately solve A u = f, getting a new approximation v.

    We want to estimate the errors e = u - v.

    We do not have to calculate them directly, but the errors satisfy A e = A u - A v = f - A v.

    The negatives of the values A v - f are called the residuals (r = f - A v) and these we can calculate.

  2. Compute residuals r = f - A v.
  3. Move to the coarse grid and (recursively) solve A e = r.

    (This is a problem of the same form as we started with, but it requires solving with restrictions of A and r for the coarse grid.)

    We use zeros as the initial guess for the errors, and this recursion results in an approximation e' for the errors.

  4. Interpolate the coarse errors e' onto the fine grid to get e, and form a corrected guess v + e.

  5. Approximately solve again (on the fine grid), but now starting with v + e, to get a final approximation.

So a basic requirement of the multigrid scheme is to be able to move values to a coarser grid (restriction) and from a coarser grid to a finer grid (interpolation). We will write functions for these below. First we give a quick summary of Repa and operations we will be using, which experienced Repa users may prefer to skip.

Background Repa Summary

Repa array types have an extent (shape) and an indication of representation, as well as a content type. For example, Array U DIM2 Double is the type of a two dimensional array of Doubles. The U indicates a ‘manifest’ representation, which means that the contents are fully calculated rather than delayed. A delayed representation is expressed with D rather than U. The only other representation type we use explicitly is (TR PC5), which is the type of a partitioned array resulting from a stencil mapping operation. Non-manifest arrays are useful for enabling the compiler to combine (fuse) operations to optimise after inlining code. Some operations require the argument array to be manifest, so to make a delayed array manifest we can use

  • computeP, which employs parallel evaluation on the array elements. The type of computeP requires that it is used within a monad. This is to prevent us accidentally writing nested calls of computeP.

The extent DIM2 abbreviates Z :. Int :. Int (where Z represents a base shape for the empty zero dimensional array). The integers give the sizes in each dimension. We can have higher dimensions (adding more with :.) but will only be using DIM2 here. Values of an extent type are used to express the sizes of an array in each dimension and also as indexes into an array, where indexing starts at 0.

We only use a few other Repa operations

  • fromListUnboxed takes an extent and a list of values to create a new manifest array.

  • traverse takes 3 arguments to produce a new delayed array. The first is the (old) array. The second is a function which produces the extent of the new array when given the extent of the old array. The third is a function to calculate any item in the new array [when supplied with an operation (get) to retrieve values from the old array and an index in the new array for the item to calculate].

  • szipWith is analogous to zipWith for lists, taking a binary operation to combine two arrays. (There is also a zipWith for arrays, but the szipWith version is tuned for partitioned arrays.)

  • smap is analogous to map for lists, mapping an operation over an array. (There is also a map for arrays, but the smap version is tuned for partitioned arrays.)

  • mapStencil2 takes a boundary handling option (we only use BoundClamp here), a stencil, and a two dimensional array. It maps (convolves) the stencil over the array to produce a new (partitioned) array.

There is a convenient way of writing down stencils that makes them easy to visualize. We will see examples later, but this format requires the following pragmas

> {-# LANGUAGE TemplateHaskell #-}
> {-# LANGUAGE QuasiQuotes #-}

Interpolation (Prolongation) and Restriction

To interpolate from coarse to fine we want a (linear) mapping that is full rank, e.g. the identity plus averaging of neighbours.

We can restrict by injection, or we can take some weighting of neighbours as well. A common requirement is that restriction is also linear and, when both maps are expressed as matrix operations, a constant times the transpose of the interpolation.

Repa examples of restriction and interpolation in 2D.

We will be using DIM2 grids and work with odd extents so that border points remain on the border when moving between grids. A common method for restriction is to take a weighted average of the fine neighbours of each coarse grid point, and we can easily express this as a stencil mapping. We calculate one sixteenth of the results from mapping this stencil

> restStencil :: Stencil DIM2 Double
> restStencil = [stencil2| 1 2 1
>                          2 4 2
>                          1 2 1 |]

I.e. we map this stencil, then divide by 16, and take the coarse array of items where both the row and column are even. Taking the coarse grid items can be done with an array traversal

> {-# INLINE coarsen #-}
> coarsen :: Array U DIM2 Double -> Array D DIM2 Double
> coarsen !arr
>   = traverse arr   -- i+1 and j+1 to deal with odd extents for arr correctly
>       (\ (e :. i :. j) -> (e :. (i+1) `div` 2 :. (j+1) `div` 2))
>       (\ get (e :. i :. j) -> get (e :. 2*i :. 2*j))

Here the second argument for traverse (the function to calculate the new extent from the old) adds one before the division to ensure that odd extents are dealt with appropriately.
The inline pragma and bang pattern (!) on the argument are needed for good optimisation.
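The extent arithmetic in coarsen is easy to check with a one dimensional list sketch. The helper coarsen1D below is hypothetical (plain lists rather than Repa arrays), just to illustrate the index calculation: a fine grid of length n coarsens to length (n+1) `div` 2 by keeping the even-indexed points.

```haskell
-- Hypothetical 1D analogue of coarsen (lists instead of Repa arrays).
-- The new length is (n+1) `div` 2, so an odd length 2k+1 coarsens to k+1,
-- mirroring the (i+1) `div` 2 extent calculation in the traverse above.
coarsen1D :: [Double] -> [Double]
coarsen1D xs = [ xs !! (2*i) | i <- [0 .. ((length xs + 1) `div` 2) - 1] ]

-- e.g. coarsen1D [0,1,2,3,4,5,6,7,8] keeps the even-indexed points [0,2,4,6,8],
-- and a 513-point fine grid coarsens to 257 points.
```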

Coarsening after mapping the above stencil works well with odd extents but is not so good for even extents. For completeness we allow for even extents as well and in this case it is more appropriate to calculate one quarter of the results from mapping this stencil

> sum4Stencil :: Stencil DIM2 Double
> sum4Stencil = [stencil2| 0 0 0
>                          0 1 1
>                          0 1 1 |]

We treat mixed even and odd extents this way as well so we can express restriction for all cases as

> {-# INLINE restrict #-}
> restrict :: Array U DIM2 Double -> Array D DIM2 Double
> restrict !arr
>   | odd n && odd m
>     = coarsen
>       $ smap (/16)
>       $ mapStencil2 BoundClamp restStencil arr
>   | otherwise
>     = coarsen
>       $ smap (/4)
>       $ mapStencil2 BoundClamp sum4Stencil arr
>   where _ :. m :. n = extent arr

For interpolation in the case of odd extents we want to distribute coarse to fine values according to the “give-to” stencil

1/4 1/2 1/4
1/2  1  1/2
1/4 1/2 1/4

This means that the fine value at the centre becomes the coarse value at the corresponding position (if you picture the coarse grid spaced out to overlay the fine grid). The fine values around this central value inherit proportions of the coarse value as indicated by the give-to stencil (along with proportions from other neighbouring coarse values).

Odd Extent Interpolation

For the even extent case we simply make four copies of each coarse value using a traverse

> {-# INLINE inject4 #-}
> inject4 :: Source r a => Array r DIM2 a -> Array D DIM2 a
> inject4 !arr
>   = traverse arr   -- mod 2s deal with odd extents
>       (\ (e :. i :. j) -> (e :. 2*i - (i `mod` 2) :. 2*j - (j `mod` 2)))
>       (\ get (e :. i :. j) -> get (e :. i `div` 2 :. j `div` 2))

Even Extent Interpolation

Again, we just use the latter version to cover cases with mixed even and odd extents. There is no primitive for “give-to” stencils but it is easy enough to define what we want with a traverse.

> {-# INLINE interpolate #-}
> interpolate :: Array U DIM2 Double -> Array D DIM2 Double
> interpolate !arr
>   | odd m && odd n
>     = traverse arr
>         (\ (e :. i :. j) -> (e :. 2*i - (i `mod` 2) :. 2*j - (j `mod` 2)))
>         (\ get (e :. i :. j) ->
>            case () of
>              _ | even i && even j -> get (e :. i `div` 2 :. j `div` 2)
>                | even i    -> (0.5)*(get (e :. i `div` 2 :. j `div` 2)
>                                    + get (e :. i `div` 2 :. (j `div` 2)+1))
>                -- odd i
>                | even j    -> (0.5)*(get (e :. i `div` 2 :. j `div` 2)
>                                    + get (e :. (i `div` 2)+1 :. j `div` 2))
>                -- odd i and j
>                | otherwise -> (0.25)*(get (e :. i `div` 2 :. j `div` 2)
>                                     + get (e :. i `div` 2 :. (j `div` 2)+1)
>                                     + get (e :. (i `div` 2)+1 :. j `div` 2)
>                                     + get (e :. (i `div` 2)+1 :. (j `div` 2)+1))
>         )
>   | otherwise = inject4 arr
>   where _ :. n :. m = extent arr

The uses of mod 2 in the new size calculation are there to ensure that odd extents remain odd.
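A one dimensional list sketch may help picture this give-to interpolation. The helper interpolate1D below is hypothetical (not part of the Repa code): coarse points land at even fine indices, and each odd fine index receives half of each of its two coarse neighbours, i.e. the 1D give-to stencil 1/2 1 1/2.

```haskell
-- Hypothetical 1D analogue of give-to interpolation (stencil 1/2 1 1/2).
-- A coarse grid of length n interpolates to a fine grid of length 2n-1,
-- so odd lengths stay odd, as with the 2D extent calculation.
interpolate1D :: [Double] -> [Double]
interpolate1D xs = [ value i | i <- [0 .. 2 * (length xs - 1)] ]
  where value i | even i    = xs !! (i `div` 2)
                | otherwise = 0.5 * (xs !! (i `div` 2) + xs !! (i `div` 2 + 1))
```

For example, interpolate1D [0,2,4] gives [0,1,2,3,4]: the coarse values are kept and the gaps are filled with averages.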

An alternative interpolation calculation

As an aside, we found that the above interpolation for odd extents can be implemented with a stencil convolution using sum4Stencil (which we defined above) after applying inject4 and then dividing by 4.
So we could define (for odd extents)

interpolate' :: Monad m => Array U DIM2 Double -> m (Array (TR PC5) DIM2 Double)
interpolate' arr =
  do fineArr <- computeP $ inject4 arr
     return $ smap (/4) $ mapStencil2 BoundClamp sum4Stencil fineArr

We have to make the intermediate fineArr manifest (using computeP) before mapping the stencil over it which is why a monad is used. The final array is not manifest but a structured array resulting from the stencil map. By not making this manifest, we allow the computation to be combined with other computations improving inlining optimisation opportunities.

To illustrate the effect of interpolate', suppose the coarse array content is just

a b c
d e f
g h i

Then the injected array content will be

a a b b c c
a a b b c c
d d e e f f
d d e e f f
g g h h i i
g g h h i i

After mapping the stencil and dividing by 4 we have (assuming top left is (0,0))

at (1,1)  (a+b+d+e)/4             (odd row, odd column)
at (2,1)  (d+e+d+e)/4 = (d+e)/2   (even row, odd column)
at (1,2)  (b+b+e+e)/4 = (b+e)/2   (odd row, even column)
at (2,2)  (e+e+e+e)/4 = e         (even row, even column)

This even handles the bottom and right boundaries as in the original interpolation.

Slightly more generally, after inject4 and for any w, x, y, z, convolution with the stencil

0 0 0
0 w x
0 y z

will produce the same as interpolating with the “give-to” stencil

z     (z+y)       y
(z+x) (z+y+x+w) (y+w)
x     (x+w)       w

Conversely, after inject4, the give-to stencil has to have this form for a stencil convolution to produce the same results.

Since the stencil version does require making an intermediate array manifest it is not clear at face value which is more efficient, so we will stick with interpolate.

We note that for even extents, the combination of an interpolation followed by a restriction is an identity (inject4 followed by coarsen). For odd extents, the combination preserves internal values but not values on the border. This will not be a problem if the boundary conditions of the problem are used to update the borders of the array.
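The identity for interior values can be checked in the same one dimensional list setting (hypothetical helpers again, not the Repa code): interpolation followed by restriction by injection recovers the coarse values exactly, since the coarse values sit unchanged at the even fine indices.

```haskell
-- Hypothetical 1D check that restriction after interpolation is an identity.
interpolate1D :: [Double] -> [Double]
interpolate1D xs = [ if even i then xs !! (i `div` 2)
                       else 0.5 * (xs !! (i `div` 2) + xs !! (i `div` 2 + 1))
                   | i <- [0 .. 2 * (length xs - 1)] ]

inject1D :: [Double] -> [Double]   -- restriction by injection: keep even indices
inject1D xs = [ xs !! (2*i) | i <- [0 .. ((length xs + 1) `div` 2) - 1] ]

-- inject1D (interpolate1D cs) == cs for any coarse grid cs
```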

MultiGrid with Coarse Grid Correction Scheme

The problem to solve for u is

A u = f

where A is a linear operator. This linear operator could be represented as a matrix, and applied as a matrix multiplication, but this is not the most efficient way of implementing a solver. In the multigrid scheme described above we need knowledge of A to implement an approximate solver, and also to calculate residuals. There are ways to implement these latter two operations with some mathematical analysis of the linear operator using finite differences. We will therefore be using an approximator and a residual calculator directly, rather than a direct representation of A, as parameters.

Let us assume that the approximate solver is a single step improvement which we want to iterate a few times. We can write down the iterator (iterateSolver)

> {-# INLINE iterateSolver #-}
> iterateSolver !opA !steps !arrInit
>   = go steps arrInit
>   where
>     go 0 !arr = return arr
>     go n !arr
>       = do arr' <- computeP $ opA arr
>            go (n - 1) arr'

In the definition of multiGrid we make use of a function to create an array of zeros of a given shape (extent)

> {-# INLINE zeroArray #-}
> zeroArray :: Shape sh => sh -> Array U sh Double
> zeroArray !sh = fromListUnboxed sh $ replicate (size sh) 0.0

and a function to determine when we have the coarsest grid (the base case for odd extents will be 3 by 3)

> {-# INLINE coarsest #-}
> coarsest :: Array U DIM2 Double -> Bool
> coarsest !arr = m < 4 where (Z :. _ :. m) = extent arr

We also use some global parameters to indicate the number of iteration steps (to simplify expressions by passing fewer arguments)

> steps1, steps2 :: Int
> steps1 = 5   -- number of iterations for first step in multiGrid
> steps2 = 5   -- number of iterations for last step in multiGrid

To write down a first version of multiGrid, we temporarily ignore boundary conditions. We will also assume that the grid spacing is the same in each dimension and use just one parameter (h) as the grid spacing.

{- version ignoring boundary conditions -}
multiGrid approxOp residualOp h f uInit =
  if coarsest uInit
  then
    do computeP $ approxOp h f uInit
  else
    do v   <- iterateSolver (approxOp h f) steps1 uInit
       r   <- computeP $ residualOp h f v      -- calculate fine residuals
       r'  <- computeP $ restrict r            -- move to coarser grid
       err <- multiGrid approxOp residualOp (2*h) r' $ zeroArray (extent r')
       vC  <- computeP $ szipWith (+) v
                       $ interpolate err       -- correct with errors on fine grid
       iterateSolver (approxOp h f) steps2 vC  -- solve again with improved approximation

The parameters are

  1. approxOp – an approximate solver to be iterated.
  2. residualOp – a residual calculator.
  3. h – the grid spacing.
  4. f – a representation of the (negative of the) function on the right hand side of the equation.
  5. uInit – an array representing a suitable first guess to start from.

Note that both approxOp and residualOp need to be passed h and f as well as an array when they are used. Also, the recursive call of multiGrid requires the two function parameters to work at each coarseness of grid. The grid spacing doubles and the array of residuals has to be converted to the coarser grid for the recursive call.

Next we consider how we deal with the boundary conditions. We will only deal with the Dirichlet type of boundary conditions here.

MultiGrid with Dirichlet boundary conditions

For A u = f as above, Dirichlet boundary conditions have the form u = g for some function g on the boundary of the region Ω. We will take the standard approach of representing the boundary conditions using a mask array (with 0’s indicating boundary positions and 1’s at non-boundary positions) along with a boundary values array which has 0’s at non-boundary positions. This will enable us to adapt these two arrays for different coarsenesses of grid.

We are assuming we are working in just two dimensions, so Ω is a region of the plane. We can thus assume two dimensional arrays represent f, u, the boundary mask, and the boundary values.
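The mask-and-values representation can be sketched in one dimension with plain lists. The helper applyBoundary below is hypothetical, mirroring the szipWith pattern used in the code that follows: multiplying by the mask zeroes the boundary entries, and adding the values array then installs the boundary data.

```haskell
-- Hypothetical 1D sketch of resetting Dirichlet boundaries:
-- mask has 0 at boundary positions and 1 inside; values has the boundary
-- data at boundary positions and 0 inside.
applyBoundary :: [Double] -> [Double] -> [Double] -> [Double]
applyBoundary mask values u = zipWith (+) values (zipWith (*) mask u)

-- e.g. applyBoundary [0,1,1,1,0] [2,0,0,0,7] [9,9,9,9,9] == [2,9,9,9,7]
```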

We can now define multiGrid with boundary conditions as:

multiGrid approxOp residualOp h f boundMask boundValues uInit =
  if coarsest uInit
  then
    do computeP $ approxOp h f boundMask boundValues uInit
  else
    do v  <- iterateSolver (approxOp h f boundMask boundValues) steps1 uInit
       r  <- computeP $ residualOp h f boundMask v   -- calculate fine residuals
       boundMask' <- computeP $ coarsen boundMask    -- move to coarser grid
       r' <- computeP $ szipWith (*) boundMask' $ restrict r
       let zeros = zeroArray (extent r')
       err <- multiGrid approxOp residualOp (2*h) r' boundMask' zeros zeros
       vC <- computeP $ szipWith (+) v
                      $ szipWith (*) boundMask
                      $ interpolate err              -- correct with errors on fine grid
       iterateSolver (approxOp h f boundMask boundValues) steps2 vC

In the base case (when the grid size is 3 by 3) there is only one value at the centre point which needs to be evaluated (assuming the surrounding 8 points are given by the boundary conditions). This will be solved exactly with a single step of the approximate solver. The recursive call uses zeros for the boundValues (redundantly) as well as for the initial guess. We note that the residual calculation makes use of the mask but not the boundary values, as these should be zero for residuals. The restriction of residuals for the coarse grid also requires applying the mask adapted for the coarse grid. The interpolation of errors uses the (fine version of the) mask, but the boundary values will already be set in v, which is added to the errors.

Full MultiGrid

Before looking at specific examples, we can now write down the algorithm for the full multigrid scheme which extends multigrid as follows.

Before calling multiGrid we precalculate the initial guess using a coarser grid and interpolate the result. The precalculation on the coarser grid involves the same full multigrid process recursively on coarser and coarser grids.

fullMG approxOp residualOp h f boundMask boundValues =
  if coarsest boundValues
  then
    do computeP $ approxOp h f boundMask boundValues boundValues
       -- an approximation with 1 interior point will be exact,
       -- using boundValues as initial guess
  else
    do -- recursively precalculate for the coarser level
       f'  <- computeP $ restrict f
       boundMask'   <- computeP $ coarsen boundMask
       boundValues' <- computeP $ coarsen boundValues
       v' <- fullMG approxOp residualOp (2*h) f' boundMask' boundValues'
       -- move to finer level
       v  <- computeP $ szipWith (+) boundValues
                      $ szipWith (*) boundMask
                      $ interpolate v'
       -- solve for finer level
       multiGrid approxOp residualOp h f boundMask boundValues v

Note that in the base case we need an initial guess for the coarsest initial array. We simply use the boundValues array for this. The interpolation of v requires resetting the boundary values.

Improved inlining using modules

The previous (higher order) versions of multiGrid and fullMG, which take approxOp and residualOp as arguments, can be substantially improved by importing the operations from a module and specialising the definitions for the imported functions. Thus we define a module which imports a separate module (defining approxOp and residualOp) and which exports

> multiGrid :: Monad m =>
>              Double
>              -> Array U DIM2 Double
>              -> Array U DIM2 Double
>              -> Array U DIM2 Double
>              -> Array U DIM2 Double
>              -> m (Array U DIM2 Double)
> multiGrid !h !f !boundMask !boundValues !uInit
>   = if coarsest uInit
>     then
>       do computeP $ approxOp h f boundMask boundValues uInit
>     else
>       do v  <- iterateSolver (approxOp h f boundMask boundValues) steps1 uInit
>          r  <- computeP $ residualOp h f boundMask v
>          boundMask' <- computeP $ coarsen boundMask
>          r' <- computeP $ szipWith (*) boundMask' $ restrict r
>          let zeros = zeroArray (extent r')
>          err <- multiGrid (2*h) r' boundMask' zeros zeros
>          vC <- computeP $ szipWith (+) v
>                         $ szipWith (*) boundMask
>                         $ interpolate err
>          iterateSolver (approxOp h f boundMask boundValues) steps2 vC

and the main full multigrid operation

> fullMG :: Monad m =>
>           Double
>           -> Array U DIM2 Double
>           -> Array U DIM2 Double
>           -> Array U DIM2 Double
>           -> m (Array U DIM2 Double)
> fullMG !h !f !boundMask !boundValues
>   = if coarsest boundValues
>     then do computeP $ approxOp h f boundMask boundValues boundValues
>     else do
>       f' <- computeP $ restrict f
>       boundMask'   <- computeP $ coarsen boundMask
>       boundValues' <- computeP $ coarsen boundValues
>       v' <- fullMG (2*h) f' boundMask' boundValues'
>       v  <- computeP $ szipWith (+) boundValues  -- (+) to reset boundary values
>                      $ szipWith (*) boundMask
>                      $ interpolate v'
>       multiGrid h f boundMask boundValues v

These versions perform significantly better when optimised.

Poisson’s Equation

For the two dimensional case, with u = u(x,y), Poisson’s equation has the form

∇²u = g(x,y)

for (x,y) in the region Ω, subject to the Dirichlet boundary condition giving the values of u on the boundary of Ω.

So the linear operator here is the Laplace operator ∇² = ∂²/∂x² + ∂²/∂y²

We need to make use of finite difference analysis to implement the approximation step and the residual calculation for Poisson’s equation.

Finite difference analysis with the central difference operator leads to the following (five point difference) formula approximating Poisson’s equation

u(i-1,j) + u(i+1,j) + u(i,j-1) + u(i,j+1) - 4 u(i,j) = h² g(i,j)

Rewriting the above (with f = -g, as we store the negative of the right hand side) as

u(i,j) = ( u(i-1,j) + u(i+1,j) + u(i,j-1) + u(i,j+1) + h² f(i,j) ) / 4

gives us an algorithm (approxOp) for the approximating iteration step. This is known as the Jacobi method. We map a suitable stencil to add the four neighbours of each item, then we add corresponding elements of the array f multiplied by h², then we divide by 4 and reset the boundary with the mask and values.

> {-# INLINE approxOp #-}
> approxOp :: Double
>             -> Array U DIM2 Double
>             -> Array U DIM2 Double
>             -> Array U DIM2 Double
>             -> Array U DIM2 Double
>             -> Array (TR PC5) DIM2 Double
> approxOp !h !f !boundMask !boundValues !arr
>   = szipWith (+) boundValues
>     $ szipWith (*) boundMask
>     $ smap (/4)
>     $ szipWith (+) (smap (*hSq) f)
>     $ mapStencil2 (BoundConst 0)
>         [stencil2| 0 1 0
>                    1 0 1
>                    0 1 0 |] arr
>   where hSq = h*h

We have chosen to leave the resulting array in a non-manifest form so that inlining can combine this operation with any subsequent operations to optimise the combination.
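To see the Jacobi step in miniature, here is a hypothetical one dimensional version using plain lists (not the Repa code): with the three point discretisation, each interior point becomes the average of its neighbours plus h²f/2, while the boundary stays fixed (at 0 in this sketch).

```haskell
-- Hypothetical 1D Jacobi sweep for u'' = g with f = -g (as in the text)
-- and zero Dirichlet boundaries.  Interior update:
--   u_i := (u_{i-1} + u_{i+1} + h^2 * f_i) / 2
jacobi1D :: Double -> [Double] -> [Double] -> [Double]
jacobi1D h f u =
  [ if i == 0 || i == n-1
      then 0   -- boundary fixed
      else (u !! (i-1) + u !! (i+1) + h*h * (f !! i)) / 2
  | i <- [0 .. n-1] ]
  where n = length u
```

Iterating jacobi1D h f from any starting guess converges (slowly) to the discrete solution; e.g. with f constantly 2 (so g = -2) on the unit interval, the iterates approach u(x) = x(1-x).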

To calculate residuals from an estimation v, we can use the same five point formula to approximate A v. Rearranging, the residuals r = f - A v satisfy

r(i,j) = f(i,j) + ( v(i-1,j) + v(i+1,j) + v(i,j-1) + v(i,j+1) - 4 v(i,j) ) / h²

This leads us to implement residualOp with a five point stencil that adds the four neighbours and subtracts four times the middle item. After this we divide all items by h² and then add f and reset the boundary.

> {-# INLINE residualOp #-}
> residualOp :: Double
>               -> Array U DIM2 Double
>               -> Array U DIM2 Double
>               -> Array U DIM2 Double
>               -> Array (TR PC5) DIM2 Double
> residualOp !h !f !boundMask !v
>   = szipWith (*) boundMask
>     $ szipWith (+) f
>     $ smap (*hFactor)
>     $ mapStencil2 (BoundConst 0)
>         [stencil2| 0  1 0
>                    1 -4 1
>                    0  1 0 |] v
>   where hFactor = 1/(h*h)

As noted earlier, the boundary is reset simply with the mask. The boundary values should all be zero for residuals, so we can drop any resetting of boundary values as the mask will have done that already.

The above two functions (approxOp and residualOp) are put in a module PoissonOps to be imported by the module MultiGrid_Poisson.

Setting up to run an example

This example is taken from (Iserles 2009, p.162). On the unit square we solve

∇²u = x² + y²

The Dirichlet boundary conditions are

u(x,0) = 0,   u(x,1) = x²/2,   u(0,y) = sin(πy),   u(1,y) = e^π sin(πy) + y²/2
We define the following to calculate values from the number of grids to be used.

> fineGridSpaces :: Int -> Int
> fineGridSpaces gridStackSize
>   = 2^gridStackSize
> fineGridShape :: Int -> DIM2
> fineGridShape gridStackSize
>   = (Z :. n :. n) where n = 1 + fineGridSpaces gridStackSize

We will set a global value for the sides of the area (the unit square) and also set a stack size of 9 grids, so the finest will have 2⁹+1 by 2⁹+1 (i.e. 513 by 513) grid points

> distance :: Double
> distance = 1.0       -- length of sides for square area
> gridStackSize :: Int
> gridStackSize = 9    -- number of grids to be used including finest and coarsest
> intervals = fineGridSpaces gridStackSize
> shapeInit = fineGridShape gridStackSize
> hInit :: Double
> hInit = distance / fromIntegral intervals   -- initial finest grid spacing

The initial arrays are defined using

> boundMask :: Array U DIM2 Double
> boundMask =
>   fromListUnboxed shapeInit $ concat
>     [ edgeRow,
>       take ((intervals-1)*(intervals+1)) (cycle mainRow),
>       edgeRow
>     ]
>   where edgeRow = replicate (intervals+1) 0.0
>         mainRow = 0.0 : replicate (intervals-1) 1.0 Prelude.++ [0.0]
>
> coordList :: [Double]
> coordList = map ((hInit*) . fromIntegral) [0..intervals]
>
> boundValues :: Array U DIM2 Double
> boundValues =
>   fromListUnboxed shapeInit $ concat
>     [ map (\j -> sin (pi*j)) coordList,
>       concat $ map mainRow $ tail $ init coordList,
>       map (\j -> exp pi * sin (pi*j) + 0.5*j^2) coordList
>     ]
>   where mainRow i = replicate intervals 0.0 Prelude.++ [0.5*i^2]
>
> fInit :: Array U DIM2 Double
> fInit =   -- negative of RHS of Poisson Equation
>   fromListUnboxed shapeInit $ concat $ map row coordList
>   where row i = map (item i) coordList
>         item i j = -(i^2 + j^2)
>
> uInit :: Array U DIM2 Double
> uInit = boundValues

We are now in a position to calculate

> test1 = multiGrid hInit fInit boundMask boundValues uInit


> test2 = fullMG hInit fInit boundMask boundValues

as well as comparing with iterations of the basic approximation operation on its own

> solverTest :: Monad m => Int -> m (Array U DIM2 Double)
> solverTest n = iterateSolver (approxOp hInit fInit boundMask boundValues) n uInit

We note that this particular example of Poisson’s equation does have a known exact solution

u(x,y) = e^(πx) sin(πy) + (x² y²)/2

So we can calculate an array with values for the exact solution and look at the errors. We just look at the maximum error here.

> exact :: Array U DIM2 Double
> exact =
>   fromListUnboxed shapeInit $ concat $ map row coordList
>   where row i = map (item i) coordList
>         item i j = exp (pi*i) * sin (pi*j) + 0.5*i^2*j^2
>
> maxError :: Monad m => m (Array U DIM2 Double) -> m Double
> maxError test =
>   do ans <- test
>      err :: Array U DIM2 Double
>          <- computeP
>             $ R.map abs
>             $ R.zipWith (-) ans exact
>      return $ foldAllS max 0.0 err

A note on errors

The Jacobi method used for the approximation step is known to converge very slowly, but fullMG significantly improves convergence rates. The literature analysing multigrid and full multigrid shows that the errors are reduced rapidly if the approximation method smooths the high frequency errors quickly (error components with a wavelength less than half the grid size). Unfortunately the Jacobi method does not do this. Gauss-Seidel and SOR methods do smooth high frequency errors, but their usual implementation relies on in-place updating of arrays, which is problematic for parallel evaluation. However, although multiGrid with the Jacobi approxOp does not reduce errors rapidly, fullMG does (even with only gridStackSize = 7 and steps1 = steps2 = 3). This shows that the improved initial guess using fullMG outweighs the slow reduction of errors in multiGrid.

Some results

Clearly this code is designed to be run on multi-cores to use the parallelism. However, even using a single processor, we can see the impressive performance of Repa with optimisations on some simple tests of the Poisson example (running on a 2.13 GHz Intel Core 2 Duo).

Results for fullMG (steps1 = 5, steps2 = 5)

Grid Stack   Fine Grid    CPU Time (ms)   Max Error
    6          65*65              22      2.1187627917047536e-3
    7         129*129             83      5.321669730857792e-4
    8         257*257            276      1.3324309721163274e-4
    9         513*513           1417      3.332619984242058e-5
   10        1025*1025          6017      8.332744698691386e-6
   11        2049*2049         26181      2.0832828990791086e-6

So increasing the grid stack size by 1 roughly quadruples the size of the fine grid, increases runtime by a similar factor, and improves the error by a factor of 4 as well.
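That factor-of-4 error reduction can be checked directly from the Max Error column of the table above:

```haskell
-- Max errors from the fullMG table, grid stack sizes 6 through 11.
fullMGErrors :: [Double]
fullMGErrors =
  [ 2.1187627917047536e-3
  , 5.321669730857792e-4
  , 1.3324309721163274e-4
  , 3.332619984242058e-5
  , 8.332744698691386e-6
  , 2.0832828990791086e-6
  ]

-- Ratio of each error to the next finer grid's error; each is close to 4.
errorRatios :: [Double]
errorRatios = zipWith (/) fullMGErrors (tail fullMGErrors)
```

Evaluating errorRatios gives five values, all within 2% of 4.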

As a comparison, multiGrid alone (with the Jacobi approxOp) has high errors and increasing grid stack size does not help.

Results for multiGrid (steps1 = 5, steps2 = 5)

Grid Stack   CPU Time (ms)   Max Error
    6               14       1.116289597967647
    7               49       1.1682431240072333
    8              213       1.1912130219418806
    9             1118       1.2018233367410431
   10             4566       1.206879281001907
   11            20696       1.209340584664126

To see the problems with the basic Jacobi method on its own, we note that after 400 iteration steps (with a grid stack size of 6) the max error is still 5.587133822631921. Increasing the grid stack size just makes this worse.

Compiler options

As we observed in previous work (Repa Laplace and SOR), the large amount of inlining exhausts GHC's simplifier tick limit, producing a warning about a possible compiler bug. This is resolved by adding the compiler option -fsimpl-tick-factor=1000 (along with -O2).

Some variations

We also tried two variations on approxOp.

The first adds an extra relaxation step, which combines the old value with the new value at each point using a weighting omega between 0 and 1. This slightly increases the runtime but does not improve the errors for fullMG. Its purpose is to improve the smoothing of errors, but as we have seen, although this may be significant for multiGrid it is not for fullMG; the improved starting points are much more significant for the latter. Note that successive over-relaxation (SOR), with omega between 1 and 2, is not suitable for use with the Jacobi method in general, as it is likely to diverge.

omega :: Double
omega = 2/3

{-# INLINE approxOp #-}
approxOp :: Double -> Array U DIM2 Double -> Array U DIM2 Double
         -> Array U DIM2 Double -> Array U DIM2 Double
         -> Array (TR PC5) DIM2 Double
approxOp !h !f !boundMask !boundValues !arr
  = szipWith weightedSum arr         -- extra relaxation step
  $ szipWith (+) boundValues
  $ szipWith (*) boundMask
  $ smap (/4)
  $ szipWith (+) (smap (*hSq) f)
  $ mapStencil2 (BoundConst 0)
      [stencil2| 0 1 0
                 1 0 1
                 0 1 0 |] arr
  where hSq = h*h
        weightedSum old new = (1-omega) * old + omega * new

The second variation is based on the ‘modified 9-point scheme’ discussed in Iserles (2009, p. 162). It has slower runtimes than the 5-point Jacobi scheme, as it involves two stencil mappings (one across f as well as u). It gave no noticeable improvement in error reduction for fullMG.

-- modified 9 point scheme
approxOp :: Double -> Array U DIM2 Double -> Array U DIM2 Double
         -> Array U DIM2 Double -> Array U DIM2 Double
         -> Array (TR PC5) DIM2 Double
approxOp !h !f !boundMask !boundValues !arr
  = R.szipWith (+) boundValues
  $ R.szipWith (*) boundMask
  $ R.smap (/20)
  $ R.szipWith (+)
      ( R.smap (*hFactor)
      $ mapStencil2 (BoundConst 0)
          [stencil2| 0 1 0
                     1 8 1
                     0 1 0 |] f )
  $ mapStencil2 (BoundConst 0)
      [stencil2| 1 4 1
                 4 0 4
                 1 4 1 |] arr
  where hFactor = h*h/2

This algorithm was obtained by rearranging the modified 9-point discretisation so that each point is updated as

u[i,j] = ( 4*(u[i-1,j] + u[i+1,j] + u[i,j-1] + u[i,j+1])
         + u[i-1,j-1] + u[i-1,j+1] + u[i+1,j-1] + u[i+1,j+1]
         + (h^2/2)*(8*f[i,j] + f[i-1,j] + f[i+1,j] + f[i,j-1] + f[i,j+1]) ) / 20

which is exactly what the two stencils above compute.

Parallel Processing

The important point to note is that this code is ready to run using parallel processing. Results using multi-core processors will be posted in a separate blog.

Acknowledgements and References

Iserles (2009) gives some analysis of the multigrid method, and this book also provided the example used (and the modified 9-point scheme). There are numerous other finite difference methods books covering multigrid, and some devoted to it. Online there are tutorial slides by Briggs (part 1 and part 2) and some by Stüben, amongst many others. A paper by Lippmeier, Keller, and Peyton Jones discussing Repa's design and its optimisation of stencil convolution in Haskell can be found here.

Iserles, Arieh. 2009. A First Course in the Numerical Analysis of Differential Equations. 2nd ed. Cambridge University Press.

Categories: Offsite Blogs

possible bug in latest hackage Elf (Elf-0.27)

haskell-cafe - Thu, 05/15/2014 - 7:18am
Hi list, with below test program:

    /* test1.c: build with -g -O0 */
    #include <stdio.h>

    static const char appmsg[] = "hello, world";

    int main(int argc, char* argv[]) {
        fputs(appmsg, stdout);
        return 0;
    }

--- elf-test1.hs

    module Main where

    import qualified Data.ByteString as B
    import qualified Data.ByteString.Char8 as C
    import Control.Monad
    import Data.Elf

    testelf = "/tmp/test1"
    testelfsym = C.pack "appmsg"

    lookupSymbol1 _ [] = Nothing
    lookupSymbol1 sym (t:ts) =
      case (snd (steName t)) of
        Nothing    -> lookupSymbol1 sym ts
        Just sname -> if sname == sym then Just t else lookupSymbol1 sym ts

    lookupSymbol _ [] = Nothing
    lookupSymbol sym (t:ts) =
      case (lookupSymbol1 sym t) of
        Nothing -> lookupSymbol sym ts
        t1      -> t1

    test1 elf symtab symbol = mapM_ (print) (elfSections elf)
    test2 elf symtab symbol = lookupSymbol symbol symtab
    test3 elf symtab symbol = lookupSymbol symbol symtab >>= \et -> findSymbolDefinition et

    mainloop elf symtab symbo
Categories: Offsite Discussion

Theory Lunch (Institute of Cybernetics, Tallinn): Many choices from few parameters

Planet Haskell - Thu, 05/15/2014 - 6:48am

At the end of March I gave a talk about how to obtain a wide range of Bernoulli distributions with as few parameters as possible. This originates from the needs of simulations of complex systems on cellular automata machines. The solution I described comes from Mark Smith’s doctoral thesis, written under the supervision of Tommaso Toffoli.

I wrote a post on this on my blog on cellular automata.


Categories: Offsite Blogs

Philip Wadler: Elm's time-travelling debugger

Planet Haskell - Thu, 05/15/2014 - 6:22am
The Elm language now supports interactive debugging and hot swapping.
So what does a debugger look like for a high-level language like Elm? What is possible when you have purity, FRP, and hot-swapping? At Elm Workshop 2013, Laszlo Pandy presented the Elm Debugger. Inspired by talks like Bret Victor’s Inventing on Principle, Laszlo implemented a debugger that lets you travel backwards and forwards in time. It lets you change history. On a deeper level, it lets you visualize and interact with a program’s meaning. It lets you see how a program changes over time.

Evan Czaplicki, Elm's inventor, explains why the features of Elm make this surprisingly easy.
It is possible to change a program so much that it is no longer compatible with previous versions. If we try to hot-swap with incompatible code, it will lead to runtime errors. The programmer will be left wondering if their new code introduced a bug or if it was just a case of bad hot-swapping. Perhaps they start hunting for a bug that does not exist. Perhaps they ignore a bug that does exist. This is not Interactive Programming, this is a buggy IDE.
To make hot-swapping reliable, we must know when programs are incompatible. The more precise we can be, the more reliable hot-swapping can be. There are two major categories of incompatibilities:
  • The API has changed. If the types of the arguments to a function change, it is no longer compatible with the rest of the program.
  • It is not possible to copy the old state into the new program. Perhaps there are now many more values and it is unclear how they relate to our new program. Perhaps functions are tightly coupled with state, making it hard to change either independently.
The ability to diagnose and respond to these issues depends on the language you are working with. We will now see how both cases can be addressed with features like static types, immutability and purity, and predictable structure. Previously: Bret Victor's Inventing on Principle.
Categories: Offsite Blogs

Yesod is Fun

Haskell on Reddit - Thu, 05/15/2014 - 6:15am
Categories: Incoming News

language-puppet: 7 Startups - part 3 - An interpreter for the rules

Planet Haskell - Thu, 05/15/2014 - 4:44am
In the [previous episode](/blog/2014/05/12/7-startups-part-2-game-rules-definition/) I implemented the game rules, but did not test them. I also had some reservations about some code I wrote, but predicted it would be mostly right, even without tests. Today's episode is about pretty printing and operational!

## Minor modifications since last time

* I refactored the `getCardFunding` and `getCardVictory` functions so that they are now pure. I toyed with the idea of having a monad morphism (I learned today it was called like that) to integrate `Reader GameState` actions in the `MonadState GameState` functions, but this was not warranted as the functions are so simple.
* I refactored the neighborhood relationship so that it encodes more invariants. A player now must have a left and a right neighbor. They might be invalid though.
* I refactored the type of the interfaces between the game rules and the players, so that you can't pass empty lists where they are forbidden. I was later told this type already existed in semigroups.

## Why pretty printing?

I hinted heavily last time that there would be a dedicated pretty printer. An example of such an implementation is in the ansi-wl-pprint package. It introduces functions and combinators that let you easily create a `Doc` value that will look neat on your screen. Unfortunately, in order to properly support all text-based backends (IRC, XMPP, email, console) it doesn't seem to be possible to reuse an existing printer. For example, the color set between all these backends is quite distinct, and some are even capable of printing pictures. I tried to engineer one that would be at the same time flexible, easy to use and good-looking on all backends. Time will tell if this was a success. I will not give a dissertation on the subject, and have copied the interface from other pretty printing libraries. I will just give some implementation details here.
### Basic pretty printing types

Speaking of stealing from other pretty printers, I really should have looked at their code too! Here are my basic types:

```haskell
newtype PrettyDoc = PrettyDoc { getDoc :: Seq PrettyElement }
                    deriving (Eq, Monoid)

data PrettyElement = RawText T.Text
                   | NewLine
                   | Space
                   | ...
```

So you basically have all “elements” in `PrettyElement`, and they can be appended in a monoidal fashion in a `PrettyDoc`, which is just a `newtype` for `Seq PrettyElement`. This is a very inelegant decision, and I will be sure to refactor it for the next episode! Looking at another implementation, it is clear that a single type was required, and that the monoidal structure could be achieved by adding `Empty` and `Cat` constructors. There is a reason I wrote my type like this though, and it is related to how I intended to solve the problem of backends with poor or no support for multiline messages, but this will be featured in another episode!

### Specific design choices

I decided to directly encode the game entities as part of the pretty printing types. That should be obvious from the list of elements. A `VictoryPoint`, a `Neighbor` or even a `CardType` are directly representable, so that the backends can optimize their rendering. Other than that, the code is pretty boring.

### A pretty-pretty printer?

My first backend will be the console, as it will not have any networking or concurrency problems to solve. I used the aforementioned ansi-wl-pprint package, and wrote a pretty instance for `PrettyElement` and `PrettyDoc`. This leads to strange code such as `print (PP.pretty (pe something))`.

## Implementing the GameMonad

During the last episode, I wrote all the rules in an abstract monad that is an instance of `GameMonad`, meaning it featured a few functions for interacting with the players.
I took a typeclass approach so that I could start writing the rules without worrying about the actual implementation of this abstract monad. Now that the rules are written, it is time to give them a try. In order to do so, I ditched the typeclass, and expressed it in terms of `ProgramT`, from the operational package. It only takes a few steps to refactor:

### The instructions GADT

You must start by writing all the operations that must be supported as a GADT. We previously had:

```haskell
type NonInteractive m = ( MonadState GameState m, Monad m
                        , MonadError Message m, Functor m, Applicative m )

class NonInteractive m => GameMonad m where
    playerDecision :: Age -> Turn -> PlayerId -> [Card] -> GameState
                   -> m (PlayerAction, Exchange)
    askCard        :: Age -> PlayerId -> [Card] -> GameState -> Message -> m Card
    tellPlayer     :: PlayerId -> Message -> m ()
    generalMessage :: Message -> m ()
```

And now have:

```haskell
data GameInstr a where
    PlayerDecision :: Age -> Turn -> PlayerId -> NonEmpty Card
                   -> GameInstr (PlayerAction, Exchange)
    AskCard        :: Age -> PlayerId -> NonEmpty Card -> Message -> GameInstr Card
    TellPlayer     :: PlayerId -> Message -> GameInstr ()
    GeneralMessage :: Message -> GameInstr ()
    ActionsRecap   :: M.Map PlayerId (PlayerAction, Exchange) -> GameInstr ()
    ThrowError     :: Message -> GameInstr a
    CatchError     :: GameMonad a -> (Message -> GameMonad a) -> GameInstr a
```

So … there have been some choices going on here. First of all, we need to support all the features we previously had, namely `MonadState`, `MonadError` and four game-specific functions. You can spot these four functions quite easily (along with a new one, which will be covered in a minute). We get `MonadState` and `MonadError` in the following way:

```haskell
type GameMonad = ProgramT GameInstr (State GameState)

instance MonadError PrettyDoc (ProgramT GameInstr (State GameState)) where
    throwError = singleton . ThrowError
    catchError a handler = singleton (CatchError a handler)
```

I decided to use the monad transformer `ProgramT` over a base `State GameState` monad, but encode the error part with the provided instructions. It would have been easier to encode the state part that way, except I don’t know how to write an instance for `ProgramT` (see this post comment). The interaction functions no longer have a `GameState` in their types, because the interpreter will have access to the state when decoding these instructions, so it is not necessary to pass it here too.

### Mechanically refactor all mentions of GameMonad

Now all you have to do is to replace all type signatures that looked like:

```haskell
GameMonad m => a -> b -> m c
```

with:

```haskell
a -> b -> GameMonad c
```

### Write an interpreter

I decided to write a generic interpreter that takes a group of functions in some monad `m` and a base `GameState`, and gives you a function that computes any `GameMonad a` expression in the `m` monad. The implementation is pretty obvious, and not very interesting, but it should be easy to write backends now. Perhaps of interest is the fact that the game state is explicitly passed as a parameter all over the place, so it can be passed to the backends at the interpreter level.

### A pure backend

The easiest backend to write is a pure one, with no real player interaction. I could have used `Identity` as the base monad, but instead opted for `State StdGen`. That way, I can easily have the “players” play random cards, which will help with testing. The implementation is also nothing special, but made me write a lot of code to support it. In particular, the `allowableActions` function is pretty tricky, and is not entirely satisfying. Given a game state, a player name and a list of cards in his hand, it gives a list of all the non-obviously-stupid legal actions that are available.
It does so in the most direct way, enumerating all possible combinations of resources, neighbor resources, exchanges, etc. that would work. Then it removes all duplicates, and the actions that are obviously too expensive. Fortunately, all this code will also be used by the other backends.

## So … are there bugs yet?

I wrote a simple test that checks for errors. Theoretically, the pure backend should always result in games that end well (we should get a `Right …` instead of a `Left rr`). So I wrote a simple property-based test that gets an arbitrary seed and number of players (between 3 and 7), runs a pure game and checks its result. And there were runtime errors!

* The `Monoid` instance for `AddMap` had an infinite loop.
* The `allowableActions` function sometimes returned no valid actions. I forgot to always add the possibility of dropping a card …

To prevent the second case from happening again, I wrote the “prepend drop actions” part before the big case statement, and modified the type of the `askCardSafe` function so that it can’t accept an empty list. This means that if I introduce another bug in `allowableActions`, I should get a `Left …` instead of a runtime exception.

There also was a “rule” bug, due to the fact that I had not understood a rule correctly. Basically, I use a fictional 7th round to emulate the efficiency capability, but there should be no “hand rotation” before that turn. I fixed it wrong once, and then properly. However, I did not discover nor fix this bug because of tests.

## The console backend

Before writing the console backend I needed a bit of code for pretty printing stuff. Once this was done, the backend was quickly written. The opponents still play randomly, which explains the kind of results depicted below, but it is a genuine pleasure to finally play!
![Crushing victory](/images/7startups-console1.png)

I also realized when using the console backend that the messaging functions, while generic, would probably not work well on all backends. I decided to include more specialized functions, such as `ActionsRecap`, which can be passed a map of all the actions the players undertook in a turn. The current version also lacks a way of getting the results of the poacher wars between the ages, but that should be trivial to add.

## Next time

Next time should get more interesting, as I will try to write an interesting backend. It will be a bit harder to design because I want players using distinct backends to be able to participate in the same game.
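As a postscript, the operational pattern used for GameInstr above can be illustrated with a self-contained toy. This hand-rolled Program type mimics, in simplified form, what the operational package's ProgramT provides; the Say/Ask instructions and the runPure interpreter are invented for illustration and only echo the structure of the real GameInstr and pure backend:

```haskell
{-# LANGUAGE GADTs #-}

-- Instructions as a GADT, as in GameInstr: each constructor records
-- the result type the interpreter must produce for it.
data Instr a where
  Say :: String -> Instr ()
  Ask :: Instr Int

-- A free-monad-like program over those instructions.
data Program a where
  Return :: a -> Program a
  Bind   :: Instr b -> (b -> Program a) -> Program a

bindP :: Program a -> (a -> Program b) -> Program b
bindP (Return a) k = k a
bindP (Bind i c) k = Bind i (\b -> c b `bindP` k)

instance Functor Program where
  fmap f (Return a) = Return (f a)
  fmap f (Bind i k) = Bind i (fmap f . k)

instance Applicative Program where
  pure = Return
  pf <*> pa = pf `bindP` \f -> fmap f pa

instance Monad Program where
  return = pure
  (>>=)  = bindP

-- Lift one instruction into a program (the operational package
-- exports a function of the same name).
singleton :: Instr a -> Program a
singleton i = Bind i Return

-- A pure interpreter: answers every Ask with a fixed value and
-- collects the Say messages, much like a pure test backend.
runPure :: Int -> Program a -> (a, [String])
runPure _ (Return a)       = (a, [])
runPure n (Bind (Say s) k) = let (a, ss) = runPure n (k ()) in (a, s : ss)
runPure n (Bind Ask k)     = runPure n (k n)

demo :: Program Int
demo = do
  x <- singleton Ask
  singleton (Say ("got " ++ show x))
  return (x * 2)
```

Here `runPure 21 demo` evaluates to `(42, ["got 21"])`: the program is just data, and the interpreter alone decides what Ask and Say mean, which is exactly what makes swapping in console or network backends easy.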
Categories: Offsite Blogs