# News aggregator

### [sac_svt_2017_publicity] SAC SVT 2017 - First Call for Papers - Deadline September 15

General haskell list - Tue, 07/05/2016 - 9:10am
[Please accept our apologies for potential duplicates] ================================================== 32nd Annual ACM Symposium on Applied Computing Software Verification and Testing Track April 3 - 7, 2017, Marrakech, Morocco More information: http://antares.sip.ucm.es/svt2017/ and http://www.sigapp.org/sac/sac2017/ =================================================== Important dates ——————— * September 15, 2016: Papers and SRC submission * November 10, 2016: Paper and SRC notification * November 25, 2016: Camera-ready copies ACM Symposium on Applied Computing ————————————————— The ACM Symposium on Applied Computing (SAC) has gathered scientists from different areas of computing over the last thirty years. The forum represents an opportunity to interact with different communities sharing an interest in applied computing. SAC 2017 is sponsored by the ACM Special
Categories: Incoming News

### Gabriel Gonzalez: Auto-generate service API endpoints from records

Planet Haskell - Tue, 07/05/2016 - 7:45am

Haskell has pretty cool support for code generation from data type definitions using GHC generics. So I thought: "why not generate a service from a data type?".

The basic idea is pretty simple. Given a data type definition like this:

data Command
    = Create { filepath :: FilePath, contents :: String }
    | Delete { filepath :: FilePath }

... we'll auto-generate two API endpoints:

• /create?filepath=:string&contents=:string
• /delete?filepath=:string

Each endpoint accepts query parameters matching the fields of its respective constructor:

$ curl 'localhost:8080/create?filepath=test.txt&contents=ABC'
"File created"
$ cat test.txt
ABC
$ curl 'localhost:8080/delete?filepath=test.txt'
"File deleted"
$ cat test.txt
cat: test.txt: No such file or directory

The complete code to build the server looks like this:

{-# LANGUAGE DeriveAnyClass #-}
{-# LANGUAGE DeriveGeneric #-}

import Server.Generic
import System.Directory (removeFile)

data Command
    = Create { filepath :: FilePath, contents :: String }
    | Delete { filepath :: FilePath }
    deriving (Generic, ParseRecord)

handler :: Command -> IO String
handler (Create file text) = do
    writeFile file text
    return "File created"
handler (Delete file) = do
    removeFile file
    return "File deleted"

main :: IO ()
main = serveJSON 8080 handler

You can test it yourself by running:

$ stack build server-generic
$ stack runghc AboveExample.hs

... and then in a separate terminal you can hit each endpoint with curl as illustrated above.

GHC Generics

The Haskell magic is in this one line of code:

deriving (Generic, ParseRecord)

This auto-generates code that tells the server how to marshal the route and query parameters into our Command data type. All we have to do is supply a handler that pattern matches on the incoming Command to decide what to do:

handler :: Command -> IO String
handler (Create file text) = do
    writeFile file text
    return "File created"
handler (Delete file) = do
    removeFile file
    return "File deleted"

You can read that as saying:

• "If a client hits the /create endpoint, create the specified file"
• "If a client hits the /delete endpoint, delete the specified file"

As an exercise, you can try modifying the handler to respond with the name of the file that was created or deleted.
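One way to approach that exercise is to factor the response message into a pure function (a sketch; message is a name invented here, and the IO handler from above would return message cmd after performing the file operation):

```haskell
-- Hypothetical sketch of the exercise: echo back the name of the
-- affected file. The Command type is repeated from the post above.
data Command
    = Create { filepath :: FilePath, contents :: String }
    | Delete { filepath :: FilePath }

-- Pure response builder; the IO handler would wrap this.
message :: Command -> String
message (Create file _) = "Created " ++ file
message (Delete file)   = "Deleted " ++ file
```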

However, you're not limited to query parameters. You can also parse data from path tokens by just omitting field labels from the data type:

{-# LANGUAGE DeriveGeneric #-}

import Server.Generic

data Command
    = Add Double Double
    | Multiply Double Double
    deriving (Generic)

instance ParseRecord Command

handler :: Command -> IO Double
handler (Add x y) = return (x + y)
handler (Multiply x y) = return (x * y)

main :: IO ()
main = serveJSON 8080 handler

If you run the above code, you get a server that has two endpoints:

• /add/:double/:double
• /multiply/:double/:double

You can run the server and test that they work:

$ curl 'localhost:8080/add/2/3'
5
$ curl 'localhost:8080/multiply/2/3'
6

This library also intelligently handles optional and repeated fields in data type definitions. For example, suppose we serve this data type:

{-# LANGUAGE DeriveGeneric #-}
{-# LANGUAGE DeriveAnyClass #-}

import Server.Generic

data Example = Example
    { list :: [Int]
    , optional :: Maybe Int
    , first :: First Int
    , last :: Last Int
    } deriving (Generic, ParseRecord, ToJSON)

handler :: Example -> IO Example
handler = return -- Serve decoded value back to client as JSON

main :: IO ()
main = serveJSON 8080 handler

... then the server will echo back the decoded type as JSON, correctly handling absent or repeated fields:

$ curl 'localhost:8080/example'
{"list":[],"first":null,"last":null,"optional":null}
$ curl 'localhost:8080/example?optional=1&list=1&list=2&first=1&first=2&last=1&last=2'
{"list":[1,2],"first":1,"last":2,"optional":1}

Also, these repeated and optional annotations work for path components, too, in case you were wondering:

{-# LANGUAGE DeriveGeneric #-}
{-# LANGUAGE DeriveAnyClass #-}

import Server.Generic

data Example = Example [Int] (Maybe Text)
    deriving (Generic, ParseRecord, ToJSON)

handler :: Example -> IO Example
handler = return

main :: IO ()
main = serveJSON 8080 handler

The above server does "the right thing" and doesn't need to be told where the Ints end and the Text begins:

$ curl 'localhost:8080/example'
[[],null]
$ curl 'localhost:8080/example/1/2/foo'
[[1,2],"foo"]
$ curl 'localhost:8080/example/1/2/3'
[[1,2,3],null]
$ curl 'localhost:8080/example/foo'
[[],"foo"]

The server backtracks while parsing the route, which is how it works out where the Ints end and the Text begins.

Types

The whole thing is strongly typed, which means several things in the context of service programming.

For example, if you define a data type that expects an Int field, then by golly your handler will get an Int field or the server will automatically reject the request for you. You don't have to worry about checking that the field is present nor do you need to validate that the parameter decodes to an Int correctly. If you want the parameter to be optional then you need to make that explicit by marking the field as type Maybe Int.

You also don't have to handle fields that belong to other endpoints. Each endpoint only gets exactly the fields it requested; no more, no less. If a given endpoint gets the wrong set of path tokens or query parameters then the server rejects the request for you.

This is also strongly typed in the sense that more logic is pushed into the type and less logic goes in the handler. If you want just the first or last occurrence of a query parameter, you just annotate the type with First or Last, respectively. The more logic you push into declarative type-level programming the more you distill your handler to focus on business logic.
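The First and Last wrappers here behave like their Data.Monoid namesakes; a quick base-only illustration of the "keep the first/last occurrence" semantics that the type annotation buys you:

```haskell
import Data.Monoid (First(..), Last(..))

-- First keeps the earliest Just it sees; Last keeps the latest.
firstOf :: [Int] -> Maybe Int
firstOf = getFirst . foldMap (First . Just)

lastOf :: [Int] -> Maybe Int
lastOf = getLast . foldMap (Last . Just)
```

With repeated query parameters decoded into a list, a First-annotated field behaves like firstOf and a Last-annotated field like lastOf.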

Caveats

I wrote this library to provide a quick and easy way to spin up Haskell web services, but the library could still use some improvement. I'm not really a web developer, so I only sort of know what I'm doing and could use help from people more knowledgeable than me.

The most notable deficiency is that the library does not take care to serve proper HTTP status codes for different types of errors. Every failed request returns a 404 status code.

Also, if the route is malformed, the response is a completely unhelpful "404 Not Found" that doesn't indicate how to fix the request.

Another blatant deficiency is that the server completely ignores the request method. I wasn't sure how to design this to work within the framework of data type generic programming.

If you have ideas about how to improve things I would greatly welcome any contributions.

Conclusions

People familiar with Haskell will recognize that this library resembles the servant library in some respects. The high-level difference is that this is a subset of servant that is much simpler but also significantly less featureful. For example, servant can also generate client-side bindings and Swagger resource declarations and servant also permits a much greater degree of customization.

This library focuses primarily on simple quick-and-dirty services; for anything more polished and production-ready you will probably want to try other Haskell service libraries. I just wanted to make it as easy as possible for people to get started with back-end development in Haskell and also show off how cool and powerful GHC generics can be.

Categories: Offsite Blogs

### Neil Mitchell: More space leaks: Alex/Happy edition

Planet Haskell - Tue, 07/05/2016 - 6:43am

Summary: Alex and Happy had three space leaks, now fixed.

Using the techniques described in my previous blog post I checked happy and alex for space leaks. As expected, both had space leaks. Three were clear and unambiguous space leaks, two were more nuanced. In this post I'll describe all five, starting with the obvious ones.

1: Happy - non-strict accumulating fold

Happy contains the code:

indexInto :: Eq a => Int -> a -> [a] -> Maybe Int
indexInto _ _ [] = Nothing
indexInto i x (y:ys)
    | x == y = Just i
    | otherwise = indexInto (i+1) x ys

This code finds the index of an element in a list, always being first called with an initial argument of 0. However, as it stands, the first argument is a classic space leak - it chews through the input list, building up an equally long chain of +1 applications, which are only forced later.

The fix is simple, change the final line to:

let j = i + 1 in j `seq` indexInto j x ys

Or (preferably) switch to using the space-leak free Data.List.elemIndex. Fixed in a pull request.
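Putting that together, the repaired function in full, which can be checked against Data.List.elemIndex (a sketch; the actual pull request may differ in detail):

```haskell
import Data.List (elemIndex)

-- Strict accumulator: force the index before recursing, so no chain
-- of (+1) thunks builds up while walking the list.
indexInto :: Eq a => Int -> a -> [a] -> Maybe Int
indexInto _ _ [] = Nothing
indexInto i x (y:ys)
    | x == y    = Just i
    | otherwise = let j = i + 1 in j `seq` indexInto j x ys
```

When called with an initial index of 0, this agrees with elemIndex.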

2: Happy - sum using foldr

Happy also contained the code:

foldr (\(a,b) (c,d) -> (a+c,b+d)) (0,0) conflictList

The first issue is that the code is using foldr to produce a small atomic value, when foldl' would be a much better choice. Even after switching to foldl' we still have a space leak because foldl' only forces the outer-most value - namely just the pair, not the Int values inside. We want to force the elements inside the pair so are forced into the more painful construction:

foldl' (\(a,b) (c,d) ->
    let ac = a + c; bd = b + d
    in ac `seq` bd `seq` (ac,bd))
    (0,0) conflictList

Not as pleasant, but it does work. In some cases people may prefer to define the auxiliary:

let strict2 f !x !y = f x y
in foldl' (\(a,b) (c,d) -> strict2 (,) (a+c) (b+d)) (0,0) conflictList

Fixed in a pull request.
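The same strict pair fold can also be written with BangPatterns as a self-contained sketch (sumPairs is a name invented for illustration; conflictList stands in for any list of Int pairs):

```haskell
{-# LANGUAGE BangPatterns #-}
import Data.List (foldl')

-- Sum a list of pairs componentwise, forcing both running sums at
-- every step so neither accumulates a chain of thunks.
sumPairs :: [(Int, Int)] -> (Int, Int)
sumPairs = foldl' step (0, 0)
  where
    step (!a, !b) (c, d) = (a + c, b + d)
```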

3: Alex - lazy state in a State Monad

Alex features the code:

N $ \s n _ -> (s, addEdge n, ())

Here N roughly corresponds to a state monad with 2 fields, s and n. In this code n is a Map, which operates strictly, but the n itself is not forced until the end. We solve the problem by forcing the value before returning the triple:

N $ \s n _ -> let n' = addEdge n in n' `seq` (s, n', ())

Fixed in a pull request.

4: Alex - array freeze

Alex calls the Data.Array.MArray.freeze function, to convert an STUArray (unboxed mutable array in the ST monad) into a UArray (unboxed immutable array). Unfortunately the freeze call in the array library uses an amount of stack proportional to the size of the array. Not necessarily a space leak, but not ideal either. Looking at the code, it's also very inefficient, constructing and deconstructing lots of intermediate data. Fortunately under normal optimisation a rewrite rule fires for this type to replace the call with one to freezeSTUArray, which is much faster and has bounded stack, but is not directly exported.

Usually I diagnose space leaks under -O0, on the basis that any space leak problems at -O0 may eventually cause problems anyway if an optimisation opportunity is lost. In this particular case I had to -O1 that module.

5: Happy - complex fold

The final issue occurs in a function fold_lookahead which, when given lists of triples, does an mconcat on all values that match in the first two components. Using the extra library, that can be written as:

map (\((a,b),cs) -> (a,b,mconcat cs)) .
groupSort .
map (\(a,b,c) -> ((a,b),c))

We first turn the triple into a pair where the first two elements are the first component of the pair, call groupSort, then mconcat the result. However, in Happy this construction is encoded as a foldr doing an insertion sort on the first component, followed by a linear scan on the second component, then individual mappend calls. The foldr construction uses lots of stack (more than 1Mb), and also uses an O(n^2) algorithm instead of O(n log n).

Alas, the algorithms are not identical - the resulting list is typically in a different order. I don't believe this difference matters, and the tests all pass, but it does make the change more dangerous than the others. Fixed in a pull request.
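For reference, the same grouping can be sketched without the extra package using Data.Map, which ships with GHC (foldLookahead is a hypothetical name; like the groupSort version, it returns results in key order rather than the original order):

```haskell
import qualified Data.Map.Strict as Map

-- Group triples on their first two components and mconcat the thirds,
-- in O(n log n). (flip mappend) keeps earlier occurrences on the left.
foldLookahead :: (Ord a, Ord b, Monoid c) => [(a, b, c)] -> [(a, b, c)]
foldLookahead xs =
    [ (a, b, c)
    | ((a, b), c) <- Map.toList (Map.fromListWith (flip mappend)
                                     [ ((a, b), c) | (a, b, c) <- xs ])
    ]
```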

The result

Thanks to Simon Marlow for reviewing and merging all the changes. After these changes Happy and Alex on the sample files I tested them with use < 1Kb of stack. In practice the space leaks discovered here are unlikely to materially impact any real workflows, but they probably go a bit faster.

Categories: Offsite Blogs

### Philip Wadler: Roseburn to Leith Walk Cycleway: the website

Planet Haskell - Tue, 07/05/2016 - 3:38am
There is a new website in support of the Roseburn to Leith Walk Cycleway. I especially like their promise to be evidence-based and to cover both sides, something increasingly rare in political discourse.
We are an informal association of local residents working to get the best for our community from the Proposed Cycle Route.  We've spent hundreds of hours gathering the facts.
1. Even if you don't cycle, the Cycle Route has big benefits for you
2. Common concerns don't match the facts
3. The Council's latest revised design has addressed key concerns.
Our promise

Our site is evidence-based, including the disadvantages.  We are ready to engage in discussion with anyone.  Tell us about errors or omissions and we'll fix them as soon as we can.
Our goals
• Get the best for all residents. This is not a website advocating cycling.  It's true that many of the founders do cycle, and many of us also drive.
• Make decisions based on the facts.  We are concerned how many people are voicing fears and criticisms yet don't seem to know about some of the most important data and studies.
• Listen to all views, and bring people together for discussion.  We don't accept the whole notion of classifying people as 'cyclists' and 'drivers'.  We are all ordinary people who use a variety of means of travel appropriate to different situations.
Categories: Offsite Blogs

### Yesod Web Framework: New http-client and mono-traversable releases

Planet Haskell - Tue, 07/05/2016 - 12:15am

I'm happy to announce the release of a number of packages today, but mostly this comes down to upgrades to http-client and mono-traversable. This blog post will cover the main changes you should be aware of if you are using one of these two packages. In addition to these two packages, the following related packages are being released as well: http-client-tls, http-client-openssl, http-conduit, mono-traversable-instances, minlen, chunked-data, conduit-combinators, mutable-containers, classy-prelude, classy-prelude-conduit, classy-prelude-yesod.

http-client

http-client is an HTTP client library leveraged by a number of other packages, including http-conduit, pipes-http, and wreq. This release is mostly about addressing issue #193, about a controversial design decision to throw runtime exceptions on non-successful HTTP response statuses. If you want a quick explanation of what's changed and what you should do:

• If the HTTP server you're talking to gives a non-success HTTP response status (e.g., 404 not found), you will no longer get a runtime exception by default.

• If you want the old behavior, switch to the parseUrlThrow function
• The parseUrl function remains with the old behavior, but is deprecated in favor of parseRequest
• The IsString instance has also switched to the non-throwing behavior
• In an effort to make the remaining exceptions more useful, and avoid people accidentally relying on the old exception behavior, there's a new structure to the HttpException type. In particular, almost all exceptions are now contained in the HttpExceptionContent type, and will be wrapped up with the HttpExceptionRequest constructor, which provides information on the Request used to generate the exception. Hopefully this will make for much more useful error messages.

Based on the feedback I've received, this should bring the default behavior for http-client into line with what people expect, and will hopefully have a minimal migration impact for existing users relying on the old behavior.

mono-traversable

The mono-traversable package provides a typeclass hierarchy based around the idea of monomorphic containers. This allows, for example, a unified foldMap function that works on lists, ByteStrings, and unboxed vectors, as well as abstractions over sequences (functions like break and drop) and containers (Maps and Sets).
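The monomorphic-container idea can be reconstructed in miniature with a type family that names each container's element type. This toy is not the real mono-traversable API, just a sketch of the underlying technique:

```haskell
{-# LANGUAGE TypeFamilies #-}

import Data.Monoid (Sum(..))

-- Toy reconstruction: the element type is a function of the container
-- type, so monomorphic containers fit the same class as lists.
type family Element c
type instance Element [a]   = a
type instance Element Ascii = Char

class MonoFoldableToy c where
    ofoldMapToy :: Monoid m => (Element c -> m) -> c -> m

instance MonoFoldableToy [a] where
    ofoldMapToy = foldMap

-- A stand-in for a monomorphic container like ByteString or Text.
newtype Ascii = Ascii String

instance MonoFoldableToy Ascii where
    ofoldMapToy f (Ascii s) = foldMap f s

-- One length function now works across both kinds of container.
olengthToy :: MonoFoldableToy c => c -> Int
olengthToy = getSum . ofoldMapToy (const (Sum 1))
```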

I laid out a plan for a cleanup of the mono-traversable package. Most of the changes here were minor, and will not affect end users. Of import:

• The mono-traversable package itself has much reduced dependencies, by putting a number of instances into the new mono-traversable-instances package.
• A few typeclasses that used to live in chunked-data have moved to mono-traversable
• The Data.NonNull module is much simpler, and no longer based on Data.MinLen (which lives on in the minlen package)

classy-prelude

Mostly, classy-prelude is just inheriting the upstream changes in mono-traversable. The only other point I'd like to make about this classy-prelude release is that it is switching over to the new safe-exceptions package, which I recently announced on the FP Complete blog.

Mega repo

To simplify maintenance, and address a common problem of people not knowing which repo to report issues to, I've combined a few repos together into a mono-traversable mega-repo:

• classy-prelude (and -conduit and -yesod)
• chunked-data
• mutable-containers

I've updated the old repo locations with a final commit pointing to the new location.

Categories: Offsite Blogs

### Instantiate data type

haskell-cafe - Mon, 07/04/2016 - 9:43pm
Hi all, I have a data type looking like this: data Foo e a where Foo :: e → Foo e a I would like to instantiate it to make it equivalent to: data Bar a where A :: String → Bar Int B :: Maybe a → Bar a How can I do that? With a functional dependency? I probably need to change the definition of Foo. _______________________________________________ Haskell-Cafe mailing list To (un)subscribe, modify options or view archives go to: http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe Only members subscribed via the mailman list are allowed to post.
Categories: Offsite Discussion

### Ken T Takusagawa: [vuqzjhmd] Heptagon

Planet Haskell - Mon, 07/04/2016 - 3:51pm

Rainbow-shaded regular heptagon.  The source code below draws it in two different ways. It is unfortunate that the JuicyPixels Codec.Picture generateImage mapPixels encodePng pipeline (implemented in the function heptagonpic) is not lazy.  A 32000 by 32000 image ran out of memory.  The alternative route, outputting a Netpbm PPM (implemented in the function heptagon) then piping standard output to pnmtopng worked just fine.

There is a curious optical illusion of radial spikes, especially in the red and cyan.

Categories: Offsite Blogs

### Proposal: ArgumentDo

glasgow-user - Mon, 07/04/2016 - 11:31am
Hi glasgow-haskell-users, I have written a wiki page about a proposed extension called ArgumentDo. It's a small syntactic extension that allows "do" expressions, lambdas and a few other kinds of expressions to be used as function arguments, without parentheses. https://ghc.haskell.org/trac/ghc/wiki/ArgumentDo Any feedback is appreciated. In particular, since the idea has received mixed support (see the "Discussion" section on the wiki page), I'd like to make sure that there is enough support for this feature to justify an implementation in GHC. Regards, Takano Akio _______________________________________________ Glasgow-haskell-users mailing list Glasgow-haskell-users< at >haskell.org http://mail.haskell.org/cgi-bin/mailman/listinfo/glasgow-haskell-users
Categories: Offsite Discussion

### SIGPLAN Programming Languages Mentoring Workshop < at >SPLASH'16

General haskell list - Mon, 07/04/2016 - 8:40am
SIGPLAN Programming Languages Mentoring Workshop < at > SPLASH'16 Amsterdam, The Netherlands (part of SPLASH 2016) Tuesday, November 1st, 2016 http://2016.splashcon.org/track/splash-2016-plmw We are pleased to invite students interested in programming languages to the Programming Languages Mentoring Workshop (PLMW) at SPLASH. The goal of this workshop is to introduce senior undergraduate and early graduate students to research topics in programming languages as well as provide career mentoring advice. We have recruited leaders from the programming languages community to provide overviews of current research topics and give students valuable advice about how to thrive in graduate school, search for a job, and cultivate habits and skills that will help them in research careers. This workshop is part of the activities surrounding SPLASH, and takes place the day before the main conference. A key goal of the workshop is to make SPLASH more accessible to newcomers. Through the generous donation of our sponsors, we
Categories: Incoming News

### Dominic Steinitz: Modelling an Ecosystem via Hamiltonian Monte Carlo

Planet Haskell - Mon, 07/04/2016 - 6:31am
Introduction

Recall from the previous post that the Hare growth parameter undergoes Brownian motion so that the further into the future we go, the less certain we are about it. In order to ensure that this parameter remains positive, let’s model the log of it to be Brownian motion.

where the final equation is a stochastic differential equation, with W being a Wiener process.

By Itô we have

Again, we see that the populations become noisier the further into the future we go.

Inference

Now let us infer the growth rate using Hamiltonian Monte Carlo and the domain specific probabilistic language Stan (Stan Development Team (2015b), Stan Development Team (2015a), Hoffman and Gelman (2014), Carpenter (2015)). Here’s the model expressed in Stan.

functions {
  real f1 (real a, real k1, real b, real p, real z) {
    real q;
    q = a * p * (1 - p / k1) - b * p * z;
    return q;
  }
  real f2 (real d, real k2, real c, real p, real z) {
    real q;
    q = -d * z * (1 + z / k2) + c * p * z;
    return q;
  }
}

data {
  int<lower=1> T;  // Number of observations
  real y[T];       // Observed hares
  real k1;         // Hare carrying capacity
  real b;          // Hare death rate per lynx
  real d;          // Lynx death rate
  real k2;         // Lynx carrying capacity
  real c;          // Lynx birth rate per hare
  real deltaT;     // Time step
}

parameters {
  real<lower=0> mu;
  real<lower=0> sigma;
  real<lower=0> p0;
  real<lower=0> z0;
  real<lower=0> rho0;
  real w[T];
}

transformed parameters {
  real<lower=0> p[T];
  real<lower=0> z[T];
  real<lower=0> rho[T];
  p[1] = p0;
  z[1] = z0;
  rho[1] = rho0;
  for (t in 1:(T-1)) {
    p[t+1] = p[t] + deltaT * f1 (exp(rho[t]), k1, b, p[t], z[t]);
    z[t+1] = z[t] + deltaT * f2 (d, k2, c, p[t], z[t]);
    rho[t+1] = rho[t] * exp(sigma * sqrt(deltaT) * w[t] - 0.5 * sigma * sigma * deltaT);
  }
}

model {
  mu ~ uniform(0.0, 1.0);
  sigma ~ uniform(0.0, 0.5);
  p0 ~ lognormal(log(100.0), 0.2);
  z0 ~ lognormal(log(50.0), 0.1);
  rho0 ~ normal(log(mu), sigma);
  w ~ normal(0.0, 1.0);
  for (t in 1:T) {
    y[t] ~ lognormal(log(p[t]), 0.1);
  }
}

Let’s look at the posteriors of the hyper-parameters for the Hare growth parameter.

Again, the estimate for mu is pretty decent. For our generated data, and given that our observations are quite noisy, the estimate for sigma is maybe not too bad either.

Appendix: The R Driving Code

install.packages("devtools")
library(devtools)
install_github("libbi/RBi", ref="master")
install_github("sbfnk/RBi.helpers", ref="master")

rm(list = ls(all.names=TRUE))
unlink(".RData")

library('RBi')
try(detach(package:RBi, unload = TRUE), silent = TRUE)
library(RBi, quietly = TRUE)
library('RBi.helpers')
library('ggplot2', quietly = TRUE)
library('gridExtra', quietly = TRUE)

endTime <- 50

PP <- bi_model("PP.bi")
synthetic_dataset_PP <- bi_generate_dataset(endtime=endTime,
                                            model=PP,
                                            seed="42",
                                            verbose=TRUE,
                                            add_options = list(noutputs=500))
rdata_PP <- bi_read(synthetic_dataset_PP)

df <- data.frame(rdata_PP$P$nr,
                 rdata_PP$P$value,
                 rdata_PP$Z$value,
                 rdata_PP$P_obs$value)

ggplot(df, aes(rdata_PP$P$nr, y = Population, color = variable), size = 0.1) +
    geom_line(aes(y = rdata_PP$P$value, col = "Hare"), size = 0.1) +
    geom_line(aes(y = rdata_PP$Z$value, col = "Lynx"), size = 0.1) +
    geom_point(aes(y = rdata_PP$P_obs$value, col = "Observations"), size = 0.1) +
    theme(legend.position="none") +
    ggtitle("Example Data") +
    xlab("Days") +
    theme(axis.text=element_text(size=4),
          axis.title=element_text(size=6,face="bold")) +
    theme(plot.title = element_text(size=10))
ggsave(filename="diagrams/LVdata.png", width=4, height=3)

library(rstan)
rstan_options(auto_write = TRUE)
options(mc.cores = parallel::detectCores())

lvStanModel <- stan_model(file = "SHO.stan", verbose=TRUE)
lvFit <- sampling(lvStanModel,
                  seed=42,
                  data=list(T = length(rdata_PP$P_obs$value),
                            y = rdata_PP$P_obs$value,
                            k1 = 2.0e2,
                            b = 2.0e-2,
                            d = 4.0e-1,
                            k2 = 2.0e1,
                            c = 4.0e-3,
                            deltaT = rdata_PP$P_obs$time[2] - rdata_PP$P_obs$time[1]),
                  chains=1)
samples <- extract(lvFit)

gs1 <- qplot(x = samples$mu, y = ..density.., geom = "histogram") + xlab(expression(mu))
gs2 <- qplot(x = samples$sigma, y = ..density.., geom = "histogram") + xlab(expression(sigma))
gs3 <- grid.arrange(gs1, gs2)
ggsave(plot=gs3, filename="diagrams/LvPosteriorStan.png", width=4, height=3)

synthetic_dataset_PP1 <- bi_generate_dataset(endtime=endTime,
                                             model=PP,
                                             init = list(P = 100, Z = 50),
                                             seed="42",
                                             verbose=TRUE,
                                             add_options = list(noutputs=500))
rdata_PP1 <- bi_read(synthetic_dataset_PP1)

synthetic_dataset_PP2 <- bi_generate_dataset(endtime=endTime,
                                             model=PP,
                                             init = list(P = 150, Z = 25),
                                             seed="42",
                                             verbose=TRUE,
                                             add_options = list(noutputs=500))
rdata_PP2 <- bi_read(synthetic_dataset_PP2)

df1 <- data.frame(hare = rdata_PP$P$value,
                  lynx = rdata_PP$Z$value,
                  hare1 = rdata_PP1$P$value,
                  lynx1 = rdata_PP1$Z$value,
                  hare2 = rdata_PP2$P$value,
                  lynx2 = rdata_PP2$Z$value)

ggplot(df1) +
    geom_path(aes(x=df1$hare, y=df1$lynx, col = "0"), size = 0.1) +
    geom_path(aes(x=df1$hare1, y=df1$lynx1, col = "1"), size = 0.1) +
    geom_path(aes(x=df1$hare2, y=df1$lynx2, col = "2"), size = 0.1) +
    theme(legend.position="none") +
    ggtitle("Phase Space") +
    xlab("Hare") +
    ylab("Lynx") +
    theme(axis.text=element_text(size=4),
          axis.title=element_text(size=6,face="bold")) +
    theme(plot.title = element_text(size=10))
ggsave(filename="diagrams/PPviaLibBi.png", width=4, height=3)

PPInfer <- bi_model("PPInfer.bi")
bi_object_PP <- libbi(client="sample", model=PPInfer, obs = synthetic_dataset_PP)
bi_object_PP$run(add_options = list("end-time" = endTime,
                                    noutputs = endTime,
                                    nsamples = 2000,
                                    nparticles = 128,
                                    seed=42,
                                    nthreads = 1),
                 verbose = TRUE,
                 stdoutput_file_name = tempfile(pattern="pmmhoutput", fileext=".txt"))
bi_file_summary(bi_object_PP$result$output_file_name)

mu <- bi_read(bi_object_PP, "mu")$value
g1 <- qplot(x = mu[2001:4000], y = ..density.., geom = "histogram") + xlab(expression(mu))
sigma <- bi_read(bi_object_PP, "sigma")$value
g2 <- qplot(x = sigma[2001:4000], y = ..density.., geom = "histogram") + xlab(expression(sigma))
g3 <- grid.arrange(g1, g2)
ggsave(plot=g3, filename="diagrams/LvPosterior.png", width=4, height=3)

df2 <- data.frame(hareActs = rdata_PP$P$value, hareObs = rdata_PP$P_obs$value)

ggplot(df, aes(rdata_PP$P$nr, y = value, color = variable)) +
    geom_line(aes(y = rdata_PP$P$value, col = "Phyto")) +
    geom_line(aes(y = rdata_PP$Z$value, col = "Zoo")) +
    geom_point(aes(y = rdata_PP$P_obs$value, col = "Phyto Obs"))

ln_alpha <- bi_read(bi_object_PP, "ln_alpha")$value

P <- matrix(bi_read(bi_object_PP, "P")$value, nrow=51, byrow=TRUE)
Z <- matrix(bi_read(bi_object_PP, "Z")$value, nrow=51, byrow=TRUE)

data50 <- bi_generate_dataset(endtime=endTime,
                              model=PP,
                              seed="42",
                              verbose=TRUE,
                              add_options = list(noutputs=50))
rdata50 <- bi_read(data50)

df3 <- data.frame(days = c(1:51),
                  hares = rowMeans(P),
                  lynxes = rowMeans(Z),
                  actHs = rdata50$P$value,
                  actLs = rdata50$Z$value)

ggplot(df3) +
    geom_line(aes(x = days, y = hares, col = "Est Phyto")) +
    geom_line(aes(x = days, y = lynxes, col = "Est Zoo")) +
    geom_line(aes(x = days, y = actHs, col = "Act Phyto")) +
    geom_line(aes(x = days, y = actLs, col = "Act Zoo"))

Bibliography

Carpenter, Bob. 2015. “Stan: A Probabilistic Programming Language.” Journal of Statistical Software.

Hoffman, Matthew D., and Andrew Gelman. 2014. “The No-U-Turn Sampler: Adaptively Setting Path Lengths in Hamiltonian Monte Carlo.” Journal of Machine Learning Research.

Stan Development Team. 2015a. Stan Modeling Language User’s Guide and Reference Manual, Version 2.10.0. http://mc-stan.org/.

———. 2015b. “Stan: A C++ Library for Probability and Sampling, Version 2.10.0.” http://mc-stan.org/.

Categories: Offsite Blogs

### Octree and Bounding Boxes

haskell-cafe - Mon, 07/04/2016 - 3:03am
I've forked the octree library and added some bounding box functionality. The problem is, it turns out kd-trees are better for my use case. I'd rather not junk the code, but I can't see a use case for the octree library as it stands. Below is the link to the branch where the code is; is there a use case for point octrees needing bounding boxes? https://github.com/mlitchard/octree/tree/test-reorg/Data/Octree/BoundingBox
Categories: Offsite Discussion

### Gabriel Gonzalez: list-transformer - A beginner-friendly ListT

Planet Haskell - Sun, 07/03/2016 - 10:48am

Currently, Hackage has four implementations of "ListT-done-right" that I'm aware of:

• LogicT
• pipes (which provides a ListT type)
• list-t
• List

However, I felt that all of these libraries were more complex than they needed to be, so I tried to distill them down to the simplest library possible. I want to encourage more people to use ListT, so I'm releasing the beginner-friendly list-transformer library.

There are a few design choices I made to improve the new user experience for this library:

First, the ListT data type is not abstract and users are encouraged to use constructors to both assemble and pattern match on ListT values. This helps them build an intuition for how ListT works under the hood since the type is small and not difficult to use directly:

newtype ListT m a = ListT { next :: m (Step m a) }

data Step m a = Cons a (ListT m a) | Nil

Second, the API is tiny in order to steer users towards leaning on Haskell's "collections API" as much as possible. Specifically, I try to direct people towards these type classes:

Right now there are only three functions in the API that are not class methods:

• runListT
• fold
• foldM

Everything else is a method for one of the standard type classes and I avoid introducing new type classes.

Third, the API does not provide convenient helper functions for fully materializing the list. In other words, there is no utility function of this type:

toList :: ListT m a -> m [a]

A determined user can still force the list either indirectly via the fold function or by manually recursing over the ListT type. The goal is not to forbid this behavior, but rather to gently encourage people to preserve streaming. The API promotes the intuition that you're supposed to transform and consume results one-by-one instead of in large batches.
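For a determined user, that manual recursion is short. Here is a base-only sketch that repeats the two type definitions from above; fromList is a helper invented for the demonstration:

```haskell
import Data.Functor.Identity (Identity, runIdentity)

-- The two public types, as defined in the post.
newtype ListT m a = ListT { next :: m (Step m a) }
data Step m a = Cons a (ListT m a) | Nil

-- Materialise the whole list; this defeats streaming on purpose.
toList :: Monad m => ListT m a -> m [a]
toList l = do
    step <- next l
    case step of
        Nil       -> return []
        Cons x xs -> fmap (x :) (toList xs)

-- Hypothetical helper to build a ListT from a plain list.
fromList :: Monad m => [a] -> ListT m a
fromList = foldr (\x xs -> ListT (return (Cons x xs))) (ListT (return Nil))
```

For example, with m instantiated to Identity, runIdentity (toList (fromList xs)) gives back xs.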

Fourth, the library comes with a long inline tutorial, which is longer than the actual code. I think the tutorial could still use some improvement, so if you would like to contribute to improving it, please do!

Conclusion

You can find the list-transformer library either on Hackage or on Github.

Hopefully this encourages more people to give ListT a try and provides a stepping stone for understanding more complex streaming abstractions.

Categories: Offsite Blogs

Lambda the Ultimate - Sat, 07/02/2016 - 11:16pm

As many of you know, the email functionality of the website has not been working for a very, very long time. In addition, all new users are still being approved by me, to combat spam. All this means manual work by me, prompted by frustrated emails from new members. Alas, given other commitments, I find that the backlog is growing and I simply cannot find the time to handle these emails (i.e., approve the user, set an initial password, let them know and ask them to change it). If any member wants to help me with this, I would be grateful. This will involve getting extra admin privileges on the site, after which I can forward the requests in the pipeline to you to approve.

Thanks!

Categories: Offsite Discussion


### How to make Repa do stencils unboxed?

haskell-cafe - Sat, 07/02/2016 - 4:34am
Hello all, TL;DR: Repa appears to be calculating my stencil operation using boxed arithmetic, which is very slow. How do I make it do unboxed arithmetic? I am currently working on a project where I need to simulate a 2D environment that exhibits wave behavior. I'm accomplishing this by directly discretizing the wave equation and turning it into a cellular automaton. The math here is relatively simple. Let's make the example concrete with the physical system of a rubber sheet, which exhibits wave behavior. To simulate a rubber sheet, you divide it up into grid cells. Then, you keep track of the velocity and height of each cell. At each iteration of the simulation, you update the velocity of each cell based on the force exerted upon it by its neighboring cells. This is simply a linear multiple of the distance between the cell and its neighbors (i.e. how stretched the sheet is). For example, if cell x is at height X, and its neighbors a, b, c, d are at heights A, B, C, D, the tension on X is, for some constant L, T = L*((A-X
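The tension formula in the question is cut off, but the standard discretization just sums the height differences to the neighbouring cells. As a minimal illustration of keeping every element unboxed, here is a one-dimensional toy version using UArray from the array package rather than Repa (in Repa one would aim for a manifest unboxed array, e.g. Array U); the constant l, the clamped boundary, and all helper names are my assumptions:

```haskell
import Data.Array.Unboxed

-- Heights and velocities live in unboxed arrays of Doubles
zeros, bump :: UArray Int Double
zeros = listArray (0, 4) (replicate 5 0)
bump  = zeros // [(2, 1.0)]          -- a single raised cell in the middle

-- One iteration of the discretized wave update, 1-D with clamped edges
step :: Double                                  -- hypothetical constant l
     -> (UArray Int Double, UArray Int Double)  -- (heights, velocities)
     -> (UArray Int Double, UArray Int Double)
step l (h, v) = (h', v')
  where
    (lo, hi) = bounds h
    -- tension on cell i: l times the summed height differences to neighbours
    tension i
      | i == lo || i == hi = 0                  -- clamped boundary cells
      | otherwise          = l * ((h ! (i-1) - h ! i) + (h ! (i+1) - h ! i))
    v' = listArray (lo, hi) [v ! i + tension i | i <- [lo .. hi]]
    h' = listArray (lo, hi) [h ! i + v' ! i    | i <- [lo .. hi]]

main :: IO ()
main = print (elems (snd (step 0.25 (bump, zeros))))
```

UArray guarantees the Doubles are stored unboxed; in Repa the analogous requirement is that the stencil result is computed into a manifest unboxed representation rather than a delayed one.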
Categories: Offsite Discussion

### Philip Wadler: Where we find ourselves

Planet Haskell - Sat, 07/02/2016 - 1:00am
Jonathan Freedland at the Guardian sums up the state of affairs. It’s gripping, of course. Game of Thrones meets House of Cards, played out at the tempo of a binge-viewed box-set. ...
This week’s antics of Gove and Johnson are a useful reminder. For the way one has treated the other is the way both have treated the country. ...
[Johnson] didn’t believe a word of his own rhetoric, we know that now. His face last Friday morning, ashen with the terror of victory, proved it. That hot mess of a column he served up on Monday confirmed it again: he was trying to back out of the very decision he’d persuaded the country to make.  ...
When doctrine is kept distilled, pure and fervently uncontaminated by reality, it turns into zealotry. ...
So we have the appalling sight of Gove on Friday, proclaiming himself a proud believer in the UK even though it was obvious to anyone who cared to look that a leave vote would propel Scotland towards saying yes in a second independence referendum. The more honest leavers admit – as Melanie Phillips did when the two of us appeared on Newsnight this week – that they believe the break-up of the union is a price worth paying for the prize of sovereignty. ...
They did it with lies, whether the false promise that we could both halt immigration and enjoy full access to the single market or that deceitful £350m figure, still defended by Gove, which tricked millions into believing a leave vote would bring a cash windfall to the NHS. They did it with no plan, as clueless about post-Brexit Britain as Bush and Blair were about post-invasion Iraq.
Senior civil servants say Brexit will consume their energies for years to come, as they seek to disentangle 40 years of agreements. It will be the central focus of our politics and our government, a massive collective effort demanding ingenuity and creativity. Just think of what could have been achieved if all those resources had been directed elsewhere. Into addressing, for instance, the desperate, decades-long needs – for jobs, for housing, for a future – of those towns that have been left behind by the last 30 years of change, those towns whose people voted leave the way a passenger on a doomed train pulls the emergency cord.
Categories: Offsite Blogs

### Philip Wadler: Roseburn to Leith Walk Cycleway: A vs B

Planet Haskell - Sat, 07/02/2016 - 12:01am

Edinburgh City Council has released its report on consultation on the Roseburn to Leith Walk cycleway, and Henry Whaley has written a comment for the Edinburgh Evening News.

A few shopkeepers in the area are concerned that the cycleway will have a negative impact on their shops, a well-known cycling fallacy. Whaley describes the current impasse:
The Council have responded to specific concerns from some shopkeepers and residents by reinstating the loading bay on the north side of Roseburn Terrace, increasing the right turn lane and eliminating the floating bus stop whilst maintaining the cyclepath to form a much improved ‘Option A’.
The Council are also assessing an alternative ‘Option B’, which would take the cyclepath away from Roseburn Terrace on an indirect and complicated route involving three road crossings, while restricting the space for motor vehicles at the already tight Roseburn Street junction and not widening the Roseburn Terrace pavements. It’s another option, but one that is worse for most people.

And from the City Council:
Transport Convener Councillor Lesley Hinds said: "We had an overwhelming response to the consultation and the exercise has been extremely helpful to officers working on the proposed Roseburn to Leith Walk cycleway and street improvements.
"Thanks to the feedback received, we've been able to make deliverable adjustments to a number of aspects of the scheme. In terms of the Roseburn section, local concerns have prompted us to present an alternative route (Option B) via Roseburn Place and Roseburn Street for consideration by committee members. However, we remain in favour of Option A because it will enhance the street environment in Roseburn Terrace and is more direct for cyclists - involving one road crossing rather than the three that would be required for Option B.
"After further planned consultation with businesses, community councils and the Council's Active Travel Forum, the project team will consolidate feedback and finalise the preliminary design scheme for presentation to the Transport and Environment Committee on 30 August 2016."
Categories: Offsite Blogs

### type level Min function for GHC.TypeLits?

haskell-cafe - Fri, 07/01/2016 - 6:28pm
Hi Cafe, I'm trying to do some basic type level tricks, I wonder whether it is possible to get vZipWith in the below code snippet working:

{-# LANGUAGE DataKinds #-}
{-# LANGUAGE PolyKinds #-}
{-# LANGUAGE KindSignatures #-}
{-# LANGUAGE TypeOperators #-}
{-# LANGUAGE TypeFamilies #-}
{-# LANGUAGE ScopedTypeVariables #-}
{-# LANGUAGE RankNTypes #-}
{-# LANGUAGE GADTs #-}
{-# LANGUAGE StandaloneDeriving #-}
{-# LANGUAGE UndecidableInstances #-}
{-# OPTIONS_GHC -fplugin GHC.TypeLits.Normalise #-}

import GHC.TypeLits
import Data.Proxy

data Vect :: (Nat -> * -> *) where
  VNil :: Vect 0 a
  VCons :: a -> Vect n a -> Vect (1+n) a

deriving instance (Show a) => Show (Vect n a)

vlength :: forall n a . KnownNat n => Vect n a -> Integer
vlength _ = natVal (Proxy :: Proxy n)

vconcat :: Vect n a -> Vect m a -> Vect (n+m) a -- Need type checker plugin
vconcat VNil ys = ys
vconcat xs VNil = xs
vconcat (VCons x xs) ys = VCons x (vconcat xs ys)

type family Min3 (t :: Bool) (m :: Nat) (n :: Nat) :: Nat type
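One way to get the type-level Min the post is building towards is a closed type family driven by the `<=?` comparison from GHC.TypeLits. This is a sketch of that technique only (recent base versions also ship a Min in Data.Type.Ord), independent of the Vect code above:

```haskell
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE TypeFamilies #-}
{-# LANGUAGE TypeOperators #-}
{-# LANGUAGE UndecidableInstances #-}

import Data.Proxy (Proxy (..))
import GHC.TypeLits

-- Select the smaller of two Nats via the built-in <=? comparison family
type family Min (m :: Nat) (n :: Nat) :: Nat where
  Min m n = MinAux (m <=? n) m n

-- Branch on the Boolean result of the comparison
type family MinAux (b :: Bool) (m :: Nat) (n :: Nat) :: Nat where
  MinAux 'True  m n = m
  MinAux 'False m n = n

main :: IO ()
main = do
  print (natVal (Proxy :: Proxy (Min 3 5)))
  print (natVal (Proxy :: Proxy (Min 7 2)))
```

Because both literals are concrete, GHC reduces Min fully and the KnownNat constraint behind natVal is solved without any plugin.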
Categories: Offsite Discussion

### Position as lab docent Software Engineering at the University of Amsterdam

General haskell list - Fri, 07/01/2016 - 6:23pm
Dear all, I would like to draw your attention to the following job opening at the University of Amsterdam: http://www.uva.nl/en/about-the-uva/working-at-the-uva/vacancies/item/16-267-lecturer-software-engineering.html This is a full-time teaching position in our MSc Software Engineering programme that is rated among the best ICT Master programmes in the Netherlands: http://www.uva.nl/en/education/master-s/master-s-programmes/item/software-engineering.html The deadline for application is formally today, but we will (silently) accept applications until Wednesday July 6. If you have any questions, do not hesitate to contact me. Best regards, Clemens Grelck
Categories: Incoming News

### Joachim Breitner: When to reroll a six

Planet Haskell - Fri, 07/01/2016 - 1:47pm

This is a story about counter-intuitive probabilities and how a small bit of doubt turned out to be very justified.

It begins with the game “To Court the King” (German: „Um Krone und Kragen“). It is a nice game with dice and cards, where you start with a few dice, and use your dice rolls to buy additional cards, which give you extra dice or special powers to modify the dice that you rolled. You can actually roll your dice many times, but every time, you have to set aside at least one die, which you can no longer change or reroll, until eventually all dice have been set aside.

A few years ago, I played this game a lot, both online (on yucata.de) and in real life. It soon became apparent that it is almost always better to go for the cards that give you an extra die, instead of those that let you modify the dice. Intuitively, this is because every additional die allows you to re-roll your dice once more.

I concluded that if I have a certain number of dice (say, n), and I want as high a sum as possible at the end, then it may make sense to reroll as many dice as possible, setting aside only those showing a 6 (because that is the best you can get) or, if no die shows a 6, a single die with the best score. Except for small numbers of dice (2 or 3), where even a 4 or 5 is worth keeping, this seemed to be a simple, obvious and correct strategy to maximize the expected outcome of this simplified game.

It is definitely simple and obvious. But some doubt that it was correct remained. Having one more die still in the game (i.e. not set aside) definitely improves your expected score, because you can reroll the dice more often. How large is this advantage? If it ever exceeded 6, it would make sense to reroll a 6. The thought was strange, but I could not dismiss it.

So I did what one does these days if one has a question: I posed it on the mathematics site of StackExchange. That was January 2015, and nothing happened.

I tried to answer it myself a month later, or at least work towards an answer, and did so by brute force. Using a Haskell library for probabilistic calculations, I could write some code that simply calculated the various expected values of n dice for up to n = 9 (beyond that, my unoptimized code would take too long):

1: 3.50000 (+3.50000)
2: 8.23611 (+4.73611)
3: 13.42490 (+5.18879)
4: 18.84364 (+5.41874)
5: 24.43605 (+5.59241)
6: 30.15198 (+5.71592)
7: 35.95216 (+5.80018)
8: 41.80969 (+5.85753)
9: 47.70676 (+5.89707)

Note that this calculation, although printed as floating point numbers, is performed using fractions of unbounded integers, so there are no rounding issues that could skew the result.
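For concreteness, that kind of brute-force calculation can be sketched in a few lines of plain Haskell with exact Rationals and no probability library. All names here are mine, and the sketch hard-codes the simple strategy (keep every 6, otherwise keep only the single best die), which is why its values for 2 and 3 dice come out slightly below the optimal ones in the table, where keeping a 4 or 5 can pay off:

```haskell
import Data.Ratio

-- Expected final score of the simplified game under the fixed strategy:
-- set aside every 6; if none, set aside one die with the highest value.
-- Exact Rational arithmetic, so no rounding can skew the result.
expected :: Int -> Rational
expected = (map go [0 ..] !!)    -- memoize on the number of dice
  where
    go 0 = 0
    go n = sum (map value (rolls n)) / fromIntegral ((6 :: Integer) ^ n)
    -- every ordered outcome of rolling n six-sided dice
    rolls n = sequence (replicate n [1 .. 6 :: Int])
    value roll
      | k > 0     = fromIntegral (6 * k) + expected (length roll - k)
      | otherwise = fromIntegral (maximum roll) + expected (length roll - 1)
      where k = length (filter (== 6) roll)    -- dice to set aside

main :: IO ()
main = mapM_ (\n -> print (fromRational (expected n) :: Double)) [1 .. 4]
```

Since enumerating 6^n outcomes per step grows fast, this naive version is only usable for small n, matching the post's remark that the unoptimized code gave out beyond n = 9.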

The result supported the hypothesis that there is no point in rerolling a 6: the value of an additional die grows and approaches 6 from below but – judging from these numbers – is never going to reach it.

Then again nothing happened. Until 14 months later, when a certain Byron Schmuland came along, found this an interesting puzzle, and set a 500-point bounty for whoever solved this problem. This attracted a bit of attention, and a few not very successful attempts at solving it. Eventually it reached Twitter, where Roman Cheplyaka linked to it.

Coincidentally, a day later a user named joriki came along, and he had a very good idea: why not make our lives easier and think about dice with fewer sides, looking at 3 sides instead of 6. This way, and using a more efficient implementation (but still properly using rationals), he could do a similar calculation for up to 50 dice. And it was very lucky that he went to 50 and not just 25, because up to 27 dice the results were very much as expected, approaching the value +3 from below. But then it surpassed +3 and became +3.000000008463403.

In other words: if you roll 28 dice and exactly two of them show a 3, you get a better expected score if you set aside only one 3, and not both of them. The advantage is minuscule, but that does not matter – it is there.

From then on, the results behaved strangely. Between 28 and 34 dice, the additional value was larger than 3; from 35 on, it was again lower than 3. It oscillated. Something similar could be observed when the game is played with coins.

Eventually, joriki improved his code and applied enough tricks that he could solve it for the 6-sided die as well: the difference between the expected values for 199 dice and for 198 dice is larger than 6 (by about 10^-21...)!

Unfortunately, the optimizations that allowed him to calculate these numbers in a reasonable amount of time relied on assuming my original hypothesis (that never rerolling a 6 is optimal), which holds only while n < 199. This meant that for n > 199, the code did not yield correct results.

What is the moral of the story? Don’t trust common sense when it comes to statistics; don’t judge a sequence from just a few initial numbers; and if you have an interesting question, post it online and wait 16 months.

Categories: Offsite Blogs