News aggregator

Help to choose a library name

haskell-cafe - Mon, 06/27/2016 - 8:38am
Hi community, I'm writing a reactive programming library (yet another). I need it for the game Nomyx, but couldn't find the features I wanted in the existing libraries. The library is currently called Nomyx-Events, but I'd like to find a cool name that is not related to Nomyx... Some propositions:
- Nomev
- Noa
Some French names:
- Imprevu (French for unforeseen, as in "unforeseen event")
- Rendez-vous
- Dejavu
I like Imprevu a lot. How does it sound to English native speakers? Thanks
Categories: Offsite Discussion

The Haddock documentation is not showing up on Hackage

General haskell list - Mon, 06/27/2016 - 5:28am
Hi, I uploaded a package named enchant to Hackage last week, but the Haddock documentation is not showing up yet. The Status field says "Docs pending" and "Build status unknown". https://hackage.haskell.org/package/enchant-0.1.0.0 enchant uses c2hs as a build tool to generate the FFI binding and requires libenchant-dev to be installed on the machine. I wonder how I can tell these build requirements to the Hackage server. Regards, Kwang Yul Seo
Categories: Incoming News

Philip Wadler: Brexit implies Techxit?

Planet Haskell - Mon, 06/27/2016 - 4:48am
In the wake of the EU referendum, there appears to be considerable information about its consequences that many might wish to have seen before the vote. Some of this concerns the negative impact of Brexit on technology firms. Among others, the BBC has a summary. I was particularly struck by one comment in the story, made by start-up mentor Theo Priestley. Mr Priestley thinks that in the event of a Scottish independence referendum that leads to reunification with the EU, it's possible some start-ups could move north of the border, perhaps to rekindle "Silicon Glen" - a 1980s attempt to compete in the semiconductor industry.
Categories: Offsite Blogs

Deadline extended: 21st International Conference on Engineering of Complex Computer Systems (ICECCS 2016), Dubai, United Arab Emirates, November 6-8 2016

General haskell list - Mon, 06/27/2016 - 1:41am
ICECCS is an A-ranked conference by the Computing Research and Education Association of Australasia (CORE) 2014 ranking (http://portal.core.edu.au/conf-ranks/?search=ICECCS&by=all&source=CORE2014&sort=atitle&page=1 ). Please kindly consider submitting papers to the conference, and please encourage your colleagues and students to submit too. --------------------------------------------------------------- 21st International Conference on Engineering of Complex Computer Systems (ICECCS 2016) || November 6-8, Dubai, United Arab Emirates || http://www.aston.ac.uk/eas/about-eas/academic-groups/computer-science/iceccs-2016/ Overview --------------------- Over the past several years, we have seen a rapidly rising emphasis on designing, implementing and managing complex computer systems to help us deal with an increasingly volatile, globalised and complex world. These systems are critical for dealing with the Grand Challenge problems we are facing in the 21st century, including health care, urbanization, education, energy, fi
Categories: Incoming News

[ANN] vty-5.7 released

General haskell list - Sun, 06/26/2016 - 9:53pm
Hi, On the heels of Vty 5.6, version 5.7 is now on Hackage. This release adds support for changing the behavior of mouse and paste modes both at Vty startup time and during application execution. In addition to no longer being on by default, these modes can be enabled or disabled at any time, and Vty can be queried to tell whether they are supported. See the CHANGELOG for details. http://hackage.haskell.org/package/vty-5.7 https://github.com/coreyoconnor/vty/releases/tag/5.7 Enjoy!
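A minimal sketch of how an application might toggle these modes at runtime, assuming the 5.7 API surface described in the announcement and CHANGELOG (outputIface, supportsMode, setMode):

import Control.Monad (when)
import Graphics.Vty

main :: IO ()
main = do
  vty <- mkVty =<< standardIOConfig
  let out = outputIface vty
  -- Mouse and paste modes are now off by default; enable them where
  -- the terminal supports them.
  when (supportsMode out Mouse)          $ setMode out Mouse True
  when (supportsMode out BracketedPaste) $ setMode out BracketedPaste True
  e <- nextEvent vty  -- mouse and paste events now arrive as ordinary events
  shutdown vty
  print e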
Categories: Incoming News

Bryn Keller: Python Class Properties

Planet Haskell - Sun, 06/26/2016 - 6:00pm

Class properties are a feature that people coming to Python from other object-oriented languages expect, and expect to be easy. Unfortunately, it's not. In many cases you don't actually want class properties in Python anyway - after all, you can have first-class module-level functions as well, and you might very well be happier with one of those.

I sometimes see people claim that you can’t do class properties at all in Python, and that’s not right either. It can be done, and it’s not too bad. Read on!

I’m going to assume here that you already know what class (sometimes called “static”) properties are in languages like Java, and that you’re somewhat familiar with Python metaclasses.

To make this feature work, we have to use a metaclass. In this example, we'll suppose that we want to be able to access a list of all the instances of our class, as well as a reference to the most recently created instance. It's artificial, but it gives us a reason to have both read-only and read-write properties. We define a metaclass, that is, a class that extends type.

class Extent(type):
    @property
    def extent(self):
        ext = getattr(self, '_extent', None)
        if ext is None:
            self._extent = []
            ext = self._extent
        return ext

    @property
    def last_instance(self):
        return getattr(self, '_last_instance', None)

    @last_instance.setter
    def last_instance(self, value):
        self._last_instance = value

Please note that if you want to do something like this for real, you may well need to protect access to these shared class properties with synchronization tools like RLock and friends to prevent different threads from overwriting each others’ work willy-nilly.

Next we create a class that uses that metaclass. The syntax is different in Python 2.7, so you may need to adjust if you’re working in an older version.

class Thing(object, metaclass=Extent):
    def __init__(self):
        self.__class__.extent.append(self)
        self.__class__.last_instance = self

Another note for real code: these references (the extent and the last_instance) will keep your objects from being garbage collected, so if you actually want to keep extents for your classes, you should do so using something like weakref.

Now we can try out our new class:

>>> t1 = Thing()
>>> t2 = Thing()
>>> Thing.extent
[<__main__.Thing object at 0x101c5d080>, <__main__.Thing object at 0x101c5d2b0>]
>>> Thing.last_instance
<__main__.Thing object at 0x101c5d2b0>
>>>

Great, we have what we wanted! There are a couple of things to remember, though:

  • Class properties are inherited!
  • Class properties are not accessible via instances, only via classes.

Let’s see an example that demonstrates both. Suppose we add a new subclass of Thing called SuperThing:

>>> class SuperThing(Thing):
...     @property
...     def extent(self):
...         return self.__class__.extent
...
>>> s = SuperThing()

See how we created a normal extent property that just reads from the class property? So we can now do this:

>>> s.extent
[<__main__.Thing object at 0x101c5d080>, <__main__.Thing object at 0x101c5d2b0>, <__main__.SuperThing object at 0x101c5d2e8>]

Whereas if we were to try that with one of the original Things, it wouldn’t work:

>>> t1.extent
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: 'Thing' object has no attribute 'extent'

We can of course still access either one via classes:

>>> t1.__class__.extent
[<__main__.Thing object at 0x101c5d080>, <__main__.Thing object at 0x101c5d2b0>, <__main__.SuperThing object at 0x101c5d2e8>]
>>> s.__class__.extent
[<__main__.Thing object at 0x101c5d080>, <__main__.Thing object at 0x101c5d2b0>, <__main__.SuperThing object at 0x101c5d2e8>]
>>>

Also note that the extent for each of these classes is the same, which shows that class properties are inherited.

Did you spot the bug in Thing? It only manifests when we have subclasses like SuperThing. We inherited the __init__ from Thing, which adds each new instance to the extent, and sets last_instance. In this case, self.__class__.extent was already initialized, on Thing, and so we added our SuperThing to the existing list. For last_instance, however, we assigned directly, rather than first reading and appending, as we did with the list property, and so SuperThing.last_instance will be our s, and Thing.last_instance will be our t2. Tread carefully, it’s easy to make a mistake with this kind of thing!

Hopefully this has been a (relatively) simple example of how to build your own class properties, with or without setters.

Categories: Offsite Blogs

Using streams to clarify (?) the signature of Data.Text.replace

haskell-cafe - Sun, 06/26/2016 - 3:39pm
In the "text" package, the signature of Data.Text.replace <http://hackage.haskell.org/package/text-1.2.2.1/docs/Data-Text.html#v:replace> always sends me looking into the haddocks: replace :: Text -> Text -> Text -> Text Which argument is the text to replace, which is the replacement and which is the text that should be scanned? Imagine a generalized version of replace that 1) works on streams, and 2) allows replacing a sequence of texts (like, say, chapter headers) instead of replacing the same text repeatedly. It could have the following signature: replace' :: Stream (Stream (Of Text) m) m () Do you find easy to intuit, just by looking at that signature, which is the function of each argument? _______________________________________________ Haskell-Cafe mailing list To (un)subscribe, modify options or view archives go to: http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe Only members subscribed via the mailman list are allowed to post.
Categories: Offsite Discussion

Dominic Steinitz: Ecology, Dynamical Systems and Inference via PMMH

Planet Haskell - Sun, 06/26/2016 - 6:53am
Introduction

In the 1920s, Lotka (1909) and Volterra (1926) developed a model of a very simple predator-prey ecosystem.

Although simple, it turns out that the Canadian lynx and snowshoe hare are well represented by such a model. Furthermore, the Hudson's Bay Company kept records of how many pelts of each species were trapped for almost a century, giving a good proxy for the population of each species.

We can capture the fact that we do not have a complete model by describing our state of ignorance about the parameters. In order to keep this as simple as possible, let us assume that the log parameters undergo Brownian motion. That is, we know the parameters will jiggle around, and the further into the future we look the less certain we are about what values they will have taken. By making the log parameters undergo Brownian motion, we can also capture our modelling assumption that birth, death and predation rates are always positive. A similar approach is taken in Dureau, Kalogeropoulos, and Baguelin (2013), where the (log) parameters of an epidemiological model are taken to be Ornstein-Uhlenbeck processes (which is biologically more plausible, although it adds to the complexity of the model, something we wish to avoid in an example such as this).

Andrieu, Doucet, and Holenstein (2010) propose a method to estimate the parameters of such models (Particle Marginal Metropolis-Hastings, aka PMMH), and the domain-specific probabilistic language LibBi (Murray (n.d.)) can be used to apply this (and other inference methods).

For the sake of simplicity, in this blog post, we only model one parameter as being unknown and undergoing Brownian motion. A future blog post will consider more sophisticated scenarios.

A Dynamical System Aside

The above dynamical system is structurally unstable (more on this in a future post), a possible indication that it should not be considered as a good model of predator–prey interaction. Let us modify this to include carrying capacities for the populations of both species.

Data Generation with LibBi

Let’s generate some data using LibBi.

// Generate data assuming a fixed growth rate for hares rather than
// e.g. a growth rate that undergoes Brownian motion.
model PP {
  const h         = 0.1;    // time step
  const delta_abs = 1.0e-3; // absolute error tolerance
  const delta_rel = 1.0e-6; // relative error tolerance

  const a  = 5.0e-1 // Hare growth rate
  const k1 = 2.0e2  // Hare carrying capacity
  const b  = 2.0e-2 // Hare death rate per lynx
  const d  = 4.0e-1 // Lynx death rate
  const k2 = 2.0e1  // Lynx carrying capacity
  const c  = 4.0e-3 // Lynx birth rate per hare

  state P, Z     // Hares and lynxes
  state ln_alpha // Hare growth rate - we express it in log form for
                 // consistency with the inference model
  obs P_obs      // Observations of hares

  sub initial {
    P ~ log_normal(log(100.0), 0.2)
    Z ~ log_normal(log(50.0), 0.1)
  }

  sub transition(delta = h) {
    ode(h = h, atoler = delta_abs, rtoler = delta_rel, alg = 'RK4(3)') {
      dP/dt = a * P * (1 - P / k1) - b * P * Z
      dZ/dt = -d * Z * (1 + Z / k2) + c * P * Z
    }
  }

  sub observation {
    P_obs ~ log_normal(log(P), 0.1)
  }
}

We can look at phase space starting with different populations and see they all converge to the same fixed point.

Data Generation with Haskell

Since at some point in the future, I plan to produce Haskell versions of the methods given in Andrieu, Doucet, and Holenstein (2010), let’s generate the data using Haskell.

> {-# OPTIONS_GHC -Wall #-}
> {-# OPTIONS_GHC -fno-warn-name-shadowing #-}

> module LotkaVolterra (
>     solLv
>   , solPp
>   , h0
>   , l0
>   , baz
>   , logBM
>   , eulerEx
>   ) where

> import Numeric.GSL.ODE
> import Numeric.LinearAlgebra

> import Data.Random.Source.PureMT
> import Data.Random hiding ( gamma )
> import Control.Monad.State

Here’s the unstable model.

> lvOde :: Double ->
>          Double ->
>          Double ->
>          Double ->
>          Double ->
>          [Double] ->
>          [Double]
> lvOde rho1 c1 rho2 c2 _t [h, l] =
>   [
>     rho1 * h - c1 * h * l
>   , c2 * h * l - rho2 * l
>   ]
> lvOde _rho1 _c1 _rho2 _c2 _t vars =
>   error $ "lvOde called with: " ++ show (length vars) ++ " variable"

> rho1, c1, rho2, c2 :: Double
> rho1 = 0.5
> c1   = 0.02
> rho2 = 0.4
> c2   = 0.004

> deltaT :: Double
> deltaT = 0.1

> solLv :: Matrix Double
> solLv = odeSolve (lvOde rho1 c1 rho2 c2)
>                  [50.0, 50.0]
>                  (fromList [0.0, deltaT .. 50])

And here’s the stable model.

> ppOde :: Double ->
>          Double ->
>          Double ->
>          Double ->
>          Double ->
>          Double ->
>          Double ->
>          [Double] ->
>          [Double]
> ppOde a k1 b d k2 c _t [p, z] =
>   [
>     a * p * (1 - p / k1) - b * p * z
>   , -d * z * (1 + z / k2) + c * p * z
>   ]
> ppOde _a _k1 _b _d _k2 _c _t vars =
>   error $ "ppOde called with: " ++ show (length vars) ++ " variable"

> a, k1, b, d, k2, c :: Double
> a  = 0.5
> k1 = 200.0
> b  = 0.02
> d  = 0.4
> k2 = 50.0
> c  = 0.004

> solPp :: Double -> Double -> Matrix Double
> solPp x y = odeSolve (ppOde a k1 b d k2 c)
>                      [x, y]
>                      (fromList [0.0, deltaT .. 50])

> gamma, alpha, beta :: Double
> gamma = d / a
> alpha = a / (c * k1)
> beta  = d / (a * k2)

> fp :: (Double, Double)
> fp = ((gamma + beta) / (1 + alpha * beta), (1 - gamma * alpha) / (1 + alpha * beta))

> h0, l0 :: Double
> h0 = a * fst fp / c
> l0 = a * snd fp / b

> foo, bar :: Matrix R
> foo = matrix 2 [a / k1, b, c, negate d / k2]
> bar = matrix 1 [a, d]

> baz :: Maybe (Matrix R)
> baz = linearSolve foo bar

This gives a stable fixed point of

ghci> baz
Just (2><1)
 [ 120.00000000000001
 , 10.0 ]

Here’s an example of convergence to that fixed point in phase space.

The Stochastic Model

Let us now assume that the Hare growth parameter undergoes Brownian motion so that the further into the future we go, the less certain we are about it. In order to ensure that this parameter remains positive, let’s model the log of it to be Brownian motion.

\[
\begin{aligned}
\frac{dP}{dt} &= \alpha P \Big(1 - \frac{P}{k_1}\Big) - b P Z \\
\frac{dZ}{dt} &= -d Z \Big(1 + \frac{Z}{k_2}\Big) + c P Z \\
d\alpha &= \alpha \sigma \, dW_t
\end{aligned}
\]

where the final equation is a stochastic differential equation with \(W_t\) being a Wiener process.

By Itô we have

\[
d(\log \alpha) = -\frac{\sigma^2}{2}\,dt + \sigma\,dW_t
\]

We can use this to generate paths for \(\alpha\):

\[
\alpha_{t+\Delta t} = \alpha_t \exp\Big(\sigma\sqrt{\Delta t}\,\xi - \frac{\sigma^2 \Delta t}{2}\Big)
\]

where \(\xi \sim \mathcal{N}(0,1)\).

> oneStepLogBM :: MonadRandom m => Double -> Double -> Double -> m Double
> oneStepLogBM deltaT sigma rhoPrev = do
>   x <- sample $ rvar StdNormal
>   return $ rhoPrev * exp(sigma * (sqrt deltaT) * x - 0.5 * sigma * sigma * deltaT)

> iterateM :: Monad m => (a -> m a) -> m a -> Int -> m [a]
> iterateM f mx n = sequence . take n . iterate (>>= f) $ mx

> logBMM :: MonadRandom m => Double -> Double -> Int -> Int -> m [Double]
> logBMM initRho sigma n m =
>   iterateM (oneStepLogBM (recip $ fromIntegral n) sigma) (return initRho) (n * m)

> logBM :: Double -> Double -> Int -> Int -> Int -> [Double]
> logBM initRho sigma n m seed =
>   evalState (logBMM initRho sigma n m) (pureMT $ fromIntegral seed)

We can see the further we go into the future the less certain we are about the value of the parameter.

Using this we can simulate the whole dynamical system which is now a stochastic process.

> f1, f2 :: Double -> Double -> Double ->
>          Double -> Double ->
>          Double
> f1 a k1 b p z =  a * p * (1 - p / k1) - b * p * z
> f2 d k2 c p z = -d * z * (1 + z / k2) + c * p * z

> oneStepEuler :: MonadRandom m =>
>                 Double ->
>                 Double ->
>                 Double -> Double ->
>                 Double -> Double -> Double ->
>                 (Double, Double, Double) ->
>                 m (Double, Double, Double)
> oneStepEuler deltaT sigma k1 b d k2 c (rho1Prev, pPrev, zPrev) = do
>   let pNew = pPrev + deltaT * f1 (exp rho1Prev) k1 b pPrev zPrev
>   let zNew = zPrev + deltaT * f2 d k2 c pPrev zPrev
>   rho1New <- oneStepLogBM deltaT sigma rho1Prev
>   return (rho1New, pNew, zNew)

> euler :: MonadRandom m =>
>          (Double, Double, Double) ->
>          Double ->
>          Double -> Double ->
>          Double -> Double -> Double ->
>          Int -> Int ->
>          m [(Double, Double, Double)]
> euler stateInit sigma k1 b d k2 c n m =
>   iterateM (oneStepEuler (recip $ fromIntegral n) sigma k1 b d k2 c)
>            (return stateInit)
>            (n * m)

> eulerEx :: (Double, Double, Double) ->
>            Double -> Int -> Int -> Int ->
>            [(Double, Double, Double)]
> eulerEx stateInit sigma n m seed =
>   evalState (euler stateInit sigma k1 b d k2 c n m) (pureMT $ fromIntegral seed)

We see that the populations become noisier the further into the future we go.

Notice that the second order effects of the system are now to some extent captured by the fact that the growth rate of Hares can drift. In our simulation, this is demonstrated by our increasing uncertainty about the growth rate the further we look into the future.

Inference

Now let us infer the growth rate using PMMH. Here’s the model expressed in LibBi.

// Infer growth rate for hares
model PP {
  const h         = 0.1;    // time step
  const delta_abs = 1.0e-3; // absolute error tolerance
  const delta_rel = 1.0e-6; // relative error tolerance

  const a  = 5.0e-1 // Hare growth rate - superfluous for inference
                    // but a reminder of what we should expect
  const k1 = 2.0e2  // Hare carrying capacity
  const b  = 2.0e-2 // Hare death rate per lynx
  const d  = 4.0e-1 // Lynx death rate
  const k2 = 2.0e1  // Lynx carrying capacity
  const c  = 4.0e-3 // Lynx birth rate per hare

  state P, Z     // Hares and lynxes
  state ln_alpha // Hare growth rate - we express it in log form for
                 // consistency with the inference model
  obs P_obs      // Observations of hares

  param mu, sigma // Mean and standard deviation of hare growth rate
  noise w         // Noise

  sub parameter {
    mu ~ uniform(0.0, 1.0)
    sigma ~ uniform(0.0, 0.5)
  }

  sub proposal_parameter {
    mu ~ truncated_gaussian(mu, 0.02, 0.0, 1.0);
    sigma ~ truncated_gaussian(sigma, 0.01, 0.0, 0.5);
  }

  sub initial {
    P ~ log_normal(log(100.0), 0.2)
    Z ~ log_normal(log(50.0), 0.1)
    ln_alpha ~ gaussian(log(mu), sigma)
  }

  sub transition(delta = h) {
    w ~ normal(0.0, sqrt(h));
    ode(h = h, atoler = delta_abs, rtoler = delta_rel, alg = 'RK4(3)') {
      dP/dt = exp(ln_alpha) * P * (1 - P / k1) - b * P * Z
      dZ/dt = -d * Z * (1 + Z / k2) + c * P * Z
      dln_alpha/dt = -sigma * sigma / 2 - sigma * w / h
    }
  }

  sub observation {
    P_obs ~ log_normal(log(P), 0.1)
  }
}

Let’s look at the posteriors of the hyper-parameters for the Hare growth parameter.

The estimate for \(\mu\) is pretty decent. For our generated data, and given that our observations are quite noisy, the estimate for \(\sigma\) may not be too bad either.

Appendix: The R Driving Code

All code, including the R below, can be downloaded from github, but make sure you use the straight-libbi branch and not master.

install.packages("devtools") library(devtools) install_github("sbfnk/RBi",ref="master") install_github("sbfnk/RBi.helpers",ref="master") rm(list = ls(all.names=TRUE)) unlink(".RData") library('RBi') try(detach(package:RBi, unload = TRUE), silent = TRUE) library(RBi, quietly = TRUE) library('RBi.helpers') library('ggplot2', quietly = TRUE) library('gridExtra', quietly = TRUE) endTime <- 50 PP <- bi_model("PP.bi") synthetic_dataset_PP <- bi_generate_dataset(endtime=endTime, model=PP, seed="42", verbose=TRUE, add_options = list( noutputs=500)) rdata_PP <- bi_read(synthetic_dataset_PP) df <- data.frame(rdata_PP$P$nr, rdata_PP$P$value, rdata_PP$Z$value, rdata_PP$P_obs$value) ggplot(df, aes(rdata_PP$P$nr, y = Population, color = variable), size = 0.1) + geom_line(aes(y = rdata_PP$P$value, col = "Hare"), size = 0.1) + geom_line(aes(y = rdata_PP$Z$value, col = "Lynx"), size = 0.1) + geom_point(aes(y = rdata_PP$P_obs$value, col = "Observations"), size = 0.1) + theme(legend.position="none") + ggtitle("Example Data") + xlab("Days") + theme(axis.text=element_text(size=4), axis.title=element_text(size=6,face="bold")) + theme(plot.title = element_text(size=10)) ggsave(filename="diagrams/LVdata.png",width=4,height=3) synthetic_dataset_PP1 <- bi_generate_dataset(endtime=endTime, model=PP, init = list(P = 100, Z=50), seed="42", verbose=TRUE, add_options = list( noutputs=500)) rdata_PP1 <- bi_read(synthetic_dataset_PP1) synthetic_dataset_PP2 <- bi_generate_dataset(endtime=endTime, model=PP, init = list(P = 150, Z=25), seed="42", verbose=TRUE, add_options = list( noutputs=500)) rdata_PP2 <- bi_read(synthetic_dataset_PP2) df1 <- data.frame(hare = rdata_PP$P$value, lynx = rdata_PP$Z$value, hare1 = rdata_PP1$P$value, lynx1 = rdata_PP1$Z$value, hare2 = rdata_PP2$P$value, lynx2 = rdata_PP2$Z$value) ggplot(df1) + geom_path(aes(x=df1$hare, y=df1$lynx, col = "0"), size = 0.1) + geom_path(aes(x=df1$hare1, y=df1$lynx1, col = "1"), size = 0.1) + geom_path(aes(x=df1$hare2, y=df1$lynx2, col = "2"), size = 0.1) + theme(legend.position="none") + ggtitle("Phase Space") + xlab("Hare") + ylab("Lynx") + theme(axis.text=element_text(size=4), axis.title=element_text(size=6,face="bold")) + theme(plot.title = element_text(size=10)) ggsave(filename="diagrams/PPviaLibBi.png",width=4,height=3) PPInfer <- bi_model("PPInfer.bi") bi_object_PP <- libbi(client="sample", model=PPInfer, obs = synthetic_dataset_PP) bi_object_PP$run(add_options = list( "end-time" = endTime, noutputs = endTime, nsamples = 4000, nparticles = 128, seed=42, nthreads = 1), ## verbose = TRUE, stdoutput_file_name = tempfile(pattern="pmmhoutput", fileext=".txt")) bi_file_summary(bi_object_PP$result$output_file_name) mu <- bi_read(bi_object_PP, "mu")$value g1 <- qplot(x = mu[2001:4000], y = ..density.., geom = "histogram") + xlab(expression(mu)) sigma <- bi_read(bi_object_PP, "sigma")$value g2 <- qplot(x = sigma[2001:4000], y = ..density.., geom = "histogram") + xlab(expression(sigma)) g3 <- grid.arrange(g1, g2) ggsave(plot=g3,filename="diagrams/LvPosterior.png",width=4,height=3) df2 <- data.frame(hareActs = rdata_PP$P$value, hareObs = rdata_PP$P_obs$value) ggplot(df, aes(rdata_PP$P$nr, y = value, color = variable)) + geom_line(aes(y = rdata_PP$P$value, col = "Phyto")) + geom_line(aes(y = rdata_PP$Z$value, col = "Zoo")) + geom_point(aes(y = rdata_PP$P_obs$value, col = "Phyto Obs")) ln_alpha <- bi_read(bi_object_PP, "ln_alpha")$value P <- matrix(bi_read(bi_object_PP, "P")$value,nrow=51,byrow=TRUE) Z <- matrix(bi_read(bi_object_PP, "Z")$value,nrow=51,byrow=TRUE) data50 
<- bi_generate_dataset(endtime=endTime, model=PP, seed="42", verbose=TRUE, add_options = list( noutputs=50)) rdata50 <- bi_read(data50) df3 <- data.frame(days = c(1:51), hares = rowMeans(P), lynxes = rowMeans(Z), actHs = rdata50$P$value, actLs = rdata50$Z$value) ggplot(df3) + geom_line(aes(x = days, y = hares, col = "Est Phyto")) + geom_line(aes(x = days, y = lynxes, col = "Est Zoo")) + geom_line(aes(x = days, y = actHs, col = "Act Phyto")) + geom_line(aes(x = days, y = actLs, col = "Act Zoo")) Bibliography

Andrieu, Christophe, Arnaud Doucet, and Roman Holenstein. 2010. “Particle Markov chain Monte Carlo methods.” Journal of the Royal Statistical Society. Series B: Statistical Methodology 72 (3): 269–342. doi:10.1111/j.1467-9868.2009.00736.x.

Dureau, Joseph, Konstantinos Kalogeropoulos, and Marc Baguelin. 2013. “Capturing the time-varying drivers of an epidemic using stochastic dynamical systems.” Biostatistics (Oxford, England) 14 (3): 541–55. doi:10.1093/biostatistics/kxs052.

Lotka, Alfred J. 1909. “Contribution to the Theory of Periodic Reactions.” The Journal of Physical Chemistry 14 (3): 271–74. doi:10.1021/j150111a004.

Murray, Lawrence M. n.d. “Bayesian State-Space Modelling on High-Performance Hardware Using LibBi.”

Volterra, Vito. 1926. “Variazioni e fluttuazioni del numero d’individui in specie animali conviventi.” Memorie Della R. Accademia Dei Lincei 6 (2): 31–113. http://www.liberliber.it/biblioteca/v/volterra/variazioni_e_fluttuazioni/pdf/volterra_variazioni_e_fluttuazioni.pdf.


Categories: Offsite Blogs

[ANN] vty-5.6 released

General haskell list - Sat, 06/25/2016 - 9:59pm
Hi, I'm pleased to announce the release of version 5.6 of the Vty library, a terminal user interface programming library. This version of the library adds some great new features: * Support for mouse events in most terminals: those that implement mouse control sequences as described at http://invisible-island.net/xterm/ctlseqs/ctlseqs.html#h2-Mouse-Tracking * Support for bracketed paste mode, a special mode for receiving operating system clipboard pastes without treating paste contents as normal input: http://cirw.in/blog/bracketed-paste Vty 5.6 can be found on Hackage at http://hackage.haskell.org/package/vty-5.6 and on GitHub at https://github.com/coreyoconnor/vty Enjoy!
Categories: Incoming News

Munich Haskell Meeting,2016-06-29 < at > 19:30 Augustiner-Keller

haskell-cafe - Sat, 06/25/2016 - 6:29pm
Dear all, Next week, our monthly Munich Haskell Meeting will take place again on Wednesday, June 29 at Augustiner-Keller, Arnulfstr., at 19:30. **Please note the different day and location!** For details see here: http://muenchen.haskell.bayern/dates.html (Yes, we got a new domain!) If you plan to join, please add yourself to this Dudle so we can reserve enough seats! It is OK to add yourself to the Dudle anonymously or pseudonymously. https://dudle.inf.tu-dresden.de/haskell-munich-jun-2016/ Everybody is welcome! cu,
Categories: Offsite Discussion

José Pedro Magalhães: Marie Curie individual fellowship application text

Planet Haskell - Sat, 06/25/2016 - 5:22am
Back in 2014 I applied for a Marie Curie fellowship. This was towards the end of my postdoc at Oxford, and then I got my current position at Standard Chartered before I heard back from the European Commission on the results of the Marie Curie. It was accepted with a score of 96%, which is something I'm very proud of, and also very thankful to everyone who helped me prepare the submission.
However, it is now clear that I will not be taking that position, as I am settled in London and at Standard Chartered. I still quite like the proposal, though, and I remember that, when writing it, I wanted to see examples of successful Marie Curie proposals to get an idea of what they should look like. As such, I'm making the text of my own Marie Curie fellowship application available online. I hope this can help others to write successful applications, and maybe some of its ideas can be taken on by other researchers. Feel free to adapt any of the ideas in the proposal (but please give credit where it is due, and remember that the European Commission uses plagiarism detection software). It's available on my website, and linked below. I made the LaTeX template for the application available before.

José Pedro Magalhães. Models of Structure in Music (MoStMusic). Marie Sklodowska-Curie Individual Fellowship application, 2014.
Categories: Offsite Blogs

User Requirements Survey for CT Software (Beta)

haskell-cafe - Fri, 06/24/2016 - 4:54pm
Hello, My name is Joie Murphy and I am a Summer Research Student at the US National Institute of Standards and Technology (NIST), working with Drs. Spencer Breiner and Eswaran Subrahmanian. We are currently gathering user requirements for category theoretic software to be developed by or with NIST in the future. This questionnaire will give us insight about your past or present use of CT software and your ideal uses for such software. Providing us with the information on how you would like to use this type of software will help us to make the right design choices in development. The survey is available on Google Forms: http://goo.gl/forms/vfOgR26dHnKynHU23 If you have any colleagues who might be willing to fill out this survey, you can forward our message or you can provide us with their contact information at the end of the survey. If you have any questions or concerns, please feel free to contact us by replying to this email. We would like to thank you for your participation in this initial step of the
Categories: Offsite Discussion

Backtracking when building a bytestring

haskell-cafe - Fri, 06/24/2016 - 10:23am
Hi, I'm working on a network protocol that involves sending frames of data prefixed by their length and a checksum. The only realistic way to work out the length of a frame is to actually write the bytes out, although the length and checksum take up a fixed number of bytes. If I were working in C I'd be filling a buffer, leaving space for the length/checksum, and then going back to fill them in at the end. So this is _almost_ what ByteString.Builder does, except for the backtracking bit. Does anyone know if there's an implementation of a thing that's a bit like ByteString.Builder but also allows for this kind of backtracking? Ideally I want to be able to batch up a number of frames into a single buffer, and deal gracefully with overflowing the buffer by allocating some more, much as Builder does. I can't think of a terribly good way of doing this using the existing Builder implementation as it stands, although it looks quite easy to modify it to add this functionality, so I might just do that locally if need be.
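A minimal sketch of the obvious two-pass workaround (render the frame body first, then prefix its length and checksum), rather than the single-buffer backtracking asked for above; the checksum function is left abstract:

import qualified Data.ByteString as BS
import qualified Data.ByteString.Builder as B
import qualified Data.ByteString.Lazy as BL
import Data.Monoid ((<>))
import Data.Word (Word32)

-- Render the body to learn its length, then emit length, checksum, body.
-- This costs an extra copy per frame, which is exactly what in-place
-- backtracking would avoid.
frame :: (BS.ByteString -> Word32) -> B.Builder -> B.Builder
frame checksum body =
  let bytes = BL.toStrict (B.toLazyByteString body)
  in  B.word32BE (fromIntegral (BS.length bytes))
   <> B.word32BE (checksum bytes)
   <> B.byteString bytes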
Categories: Offsite Discussion

Philip Wadler: A sad day

Planet Haskell - Fri, 06/24/2016 - 4:08am
I don't know what to say. I feel let down, and I feel the UK has let the world down, as well as itself.

Categories: Offsite Blogs

up-to-date glfw examples

haskell-cafe - Thu, 06/23/2016 - 9:02pm
Hi, I've been looking, but the examples are all fairly old and are broken in both their glfw and opengl imports. Just looking for very basic examples that work with the current versions of glfw-b and opengl. Thank you, Brian
Categories: Offsite Discussion

JLAMP special issue for PLACES (2nd Call for Papers)

General haskell list - Thu, 06/23/2016 - 8:48pm
-------------------------------- 2nd Call for papers: Special Issue of JLAMP for PLACES (Programming Language Approaches to Concurrency and Communication-cEntric Software) -------------------------------- Submission deadline: July 29th 2016 -------------------------------- http://www.journals.elsevier.com/journal-of-logical-and-algebraic-methods-in-programming/call-for-papers/special-issue-on-programming-language-approaches/ -------------------------------- This special issue of the Journal of Logical and Algebraic Methods in Programming (JLAMP) is devoted to the topics of the 9th International Workshop on Programming Language Approaches to Concurrency and Communication-cEntric Software (PLACES 2016), which took place in April 2016 in Eindhoven as part of ETAPS. This is however an *open call* for papers, therefore both participants of the workshop and other authors are encouraged to submit their contributions. Themes: Modern hardware platforms, from the very small to the very large,
Categories: Incoming News

Haskell in Leipzig 2016: Final Call for Papers

haskell-cafe - Thu, 06/23/2016 - 5:00pm
Haskell in Leipzig
September 14-15, 2016
HTWK Leipzig, Germany
http://hal2016.haskell.org/

This is the third and last call for submissions for the Haskell in Leipzig workshop. In case you were hesitating whether you want to submit something, the answer is „Yes!“. I’m looking forward to receiving your contribution by July 1st (next Friday). == About HaL == The workshop series “Haskell in Leipzig”, now in its 11th year, brings together Haskell developers, Haskell researchers, Haskell enthusiasts and Haskell beginners to listen to talks, take part in tutorials, and join in interesting conversations. Everything related to Haskell is on topic, whether it is about current research, practical applications, interesting ideas off the beaten track, education, or art, and topics may extend to function
Categories: Offsite Discussion

wren gayle romano: Self-improvement goals, overcoming perfectionism, and dissertating

Planet Haskell - Thu, 06/23/2016 - 2:43pm

This year's self-improvement goal was to get back into blogging regularly. Part of that goal was just to get back into writing regularly; the other part was specifically to publish more regularly.

I've done fairly well on the first half, actually. I'd hoped to do better, but then all year I've had to deal with spoon-draining circumstances, so I've probably done about as well as I can without sacrificing my health. One of my other self-improvement goals has been to take my health seriously, to listen to my body rather than pushing it beyond its limits. I'm on-track for improving at both of these, I just need to stop beating myself up over it.

For the second half, the publishing bit, that I've done poorly. I'd like to blame the spoon vortex here too, but really I think the biggest problem is my perfectionism. Perfectionism greatly amplifies the problem of lacking spoons: both the editing itself, as well as the emotional fallout of missing the mark or of having taken the entire day to hit it, both of these cost spoons. The real aim behind my goal to publish regularly wasn't to have more words to my name, but rather to “get out there” more, to be more productive in-and-of-itself rather than to have more products. So I've started thinking: the real target for this self-improvement goal should not be publishing regularly, but rather should be (working to) overcome perfectionism.

If perfectionism is a problem of fear, then the thing I must address is that fear. So how to do it? One of the suggestions in that article is to let yourself fail. Not to lower your unreasonable standards (the party-line for what to do), but rather to allow yourself to not meet those standards. One of my standards is to be thought provoking, and hence to focus overmuch on essays. To try and break free from this, I'm thinking to start posting summaries of my daily dissertation progress. A nanowrimo sort of thing, though without the focus on word-count per se. I've read a few articles suggesting one should start their day by summarizing the previous day's progress, but I've never tried it. So here goes nothing :)



Categories: Offsite Blogs

PEPM 2017 Call for Papers

haskell-cafe - Thu, 06/23/2016 - 9:16am
CALL FOR PAPERS Workshop on PARTIAL EVALUATION AND PROGRAM MANIPULATION (PEPM 2017) http://conf.researchr.org/home/PEPM-2017 Paris, France, January 16th - 17th, 2017 (co-located with POPL 2017) PEPM is the premier forum for discussion of semantics-based program manipulation. The first ACM SIGPLAN PEPM symposium took place in 1991, and meetings have been held in affiliation with POPL every year since 2006. PEPM 2017 will be based on a broad interpretation of semantics-based program manipulation, reflecting the expanded scope of PEPM in recent years beyond the traditionally covered areas of partial evaluation and specialization. Specifically, PEPM 2017 will include practical applications of program transformations such as refactoring tools, and practical implementation techniques such as rule-based transformation systems. In addition, the scope of PEPM covers manipulation and transformations of program and system representations such as structural and semantic models that occur in the context of model-d
Categories: Offsite Discussion

Disciple/DDC: Type 'Int' does not match type 'Int'

Planet Haskell - Thu, 06/23/2016 - 6:48am
I joke to myself that the last project I complete before I retire will be a book entitled "Anti-patterns in compiler engineering", which will be a complete summary of my career as a computer scientist.

I spent a couple of days last week undoing one particular anti-pattern which was causing bogus error messages like: "Type 'Int' does not match type 'Int'". In DDC the root cause of these messages was invariably this data type:
data Bound n
        = UIx   Int          -- An anonymous, deBruijn variable.
        | UName n            -- A named variable or constructor.
        | UPrim n (Type n)   -- A primitive value (or type) with its type (or kind).
A value of type Bound n represents the bound occurrence of a variable or constructor, where n is the underlying type used for names. In practice n is often Text or String. The data type has three constructors: UIx for occurrences of anonymous variables, UName for named variables and constructors, and UPrim for names of primitives. We use the Bound type for both terms and types.

The intent was that when type checking an expression, to determine the type (or kind) of a Bound thing in UIx or UName form, we would look it up in the type environment. However, as the types (and kinds) of primitives are fixed by the language definition, we would have their types attached directly to the UPrim constructor and save ourselves the cost of environment lookup. For example, we would represent the user defined type constructor 'List' as (UName "List"), but the primitive type constructor 'Int' as (UPrim "Int" kData), where 'kData' refers to the kind of data types.

The pain begins the first time you accidentally represent a primitive type constructor in the wrong form. Suppose you're parsing type constructor names from a source file, and happen to represent Int as (UName "Int") instead of (UPrim "Int" kData). Both versions are pretty printed as just "Int", so dumping the parsed AST does not reveal the problem. However, internally in the compiler the types of primitive operators like add and mul are all specified using the (UPrim "Int" kData) form, and you can't pass a value (UName "Int") to a function expecting a (UPrim "Int" kData). The uninformative error message produced by the compiler is simply "Type 'Int' does not match type 'Int'". Disaster.
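A self-contained toy version of the trap (the kData and pretty printer here are stand-ins, not DDC's real definitions):

data Type n = KData deriving (Eq, Show)

kData :: Type n
kData = KData

data Bound n
        = UIx   Int
        | UName n
        | UPrim n (Type n)
        deriving (Eq, Show)

-- Both forms pretty print as just the constructor name...
pretty :: Bound String -> String
pretty (UIx i)     = "^" ++ show i
pretty (UName n)   = n
pretty (UPrim n _) = n

parsed, expected :: Bound String
parsed   = UName "Int"        -- what the buggy parser produced
expected = UPrim "Int" kData  -- what the rest of the compiler uses

-- ...so pretty parsed == pretty expected, yet parsed /= expected,
-- which surfaces as "Type 'Int' does not match type 'Int'".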

The first time this happens it takes an hour to find the problem, but when found you think "oh well, that was a trivial mistake, I can just fix this instance". You move on to other things, but next week it happens again, and you spend another hour -- then a month later it happens again and it takes two hours to find. In isolation each bug is fixable, but after a couple of years this recurring problem becomes a noticeable drain on your productivity. When new people join the project they invariably hit the same problem, and get discouraged because the error message on the command line doesn't give any hints about what might be wrong, or how to fix it.

A better way to handle names is to parameterise the data types that represent your abstract syntax tree with separate types for each different sort of name: for the bound and binding occurrences of variables, for bound and binding occurrences of constructors, and for primitives. If the implementation is in Haskell we can use type families to produce the type of each name based on a common index type, like so:
type family BindVar l
type family BoundVar l
type family BindCon l
type family BoundCon l
type family Prim l

data Exp l
        = XVar  (BoundVar l)
        | XCon  (BoundCon l)
        | XPrim (Prim l)
        | XLam  (BindVar l) (Exp l)
DDC now uses this approach for the representation of the source AST. To represent all names by a single flat text value we define a tag type to represent this variation, then give instances for each of the type families:
data Flat = Flat

type instance BindVar Flat = Text
type instance BoundVar Flat = Text
type instance BindCon Flat = Text
type instance BoundCon Flat = Text
type instance Prim Flat = Text

type ExpFlat = Exp Flat
On the other hand, if we want a form that allows deBruijn indices for variables, and uses separate types for constructors and primitives we can use:
data Separate = Separate

data Bind = BAnon | BName Text
data Bound = UIx Int | UName Text
data ConName = ConName Text
data PrimName = PrimName Text

type instance BindVar Separate = Bind
type instance BoundVar Separate = Bound
type instance BindCon Separate = ConName
type instance BoundCon Separate = ConName
type instance Prim Separate = PrimName

type ExpSeparate = Exp Separate
It's also useful to convert between the above two representations. We might use ExpSeparate internally during program transformation, but use ExpFlat as an intermediate representation when pretty printing. To interface with legacy code we can also instantiate BoundVar with our old Bound type, so the new generic representation is strictly better than the old non-generic one using a hard-wired Bound.
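For instance, a hypothetical converter (not taken from DDC) from the separate form down to the flat form, assuming the definitions above are in scope; anonymous binders and deBruijn indices are rendered textually for printing:

import Data.Text (Text, pack)

flattenExp :: Exp Separate -> Exp Flat
flattenExp xx
 = case xx of
    XVar  u             -> XVar  (flattenBound u)
    XCon  (ConName tx)  -> XCon  tx
    XPrim (PrimName tx) -> XPrim tx
    XLam  b x           -> XLam  (flattenBind b) (flattenExp x)

flattenBind :: Bind -> Text
flattenBind BAnon      = pack "_"
flattenBind (BName tx) = tx

flattenBound :: Bound -> Text
flattenBound (UIx ix)   = pack ("^" ++ show ix)
flattenBound (UName tx) = tx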

Compiler engineering is full of traps of representation. Decisions taken about how to represent the core data structures permeate the entire project, and once made are very time consuming to change. Good approaches are also difficult to learn. Suppose we inspect the implementation of another compiler and the developers have set up their core data structures in some particular way. Is it set up like that because it's a good way to do so, or because it's a bad way of doing so that is now too difficult to change? For the particular case of variable binding, using a type like Bound above is a bad way of doing it. Using the generic representation is strictly better. Let this be a warning to future generations.
Categories: Offsite Blogs