# News aggregator

### Help to choose a library name

### The Haddock documentation is not showing up on the Hackage

### Philip Wadler: Brexit implies Techxit?

### Deadline extended: 21st International Conference on Engineering of Complex Computer Systems (ICECCS 2016), Dubai, United Arab Emirates, November 6-8 2016

### [ANN] vty-5.7 released

### Bryn Keller: Python Class Properties

Class properties are a feature that people coming to Python from other object-oriented languages expect, and expect to be easy. Unfortunately, it’s not. In many cases you don’t actually want class properties in Python anyway: since you can have first-class module-level functions as well, you might very well be happier with one of those.

I sometimes see people claim that you can’t do class properties at all in Python, and that’s not right either. It can be done, and it’s not too bad. Read on!

I’m going to assume here that you already know what class (sometimes called “static”) properties are in languages like Java, and that you’re somewhat familiar with Python metaclasses.

To make this feature work, we have to use a metaclass. In this example, we’ll suppose that we want to be able to access a list of all the instances of our class, as well as a reference to the most recently created instance. It’s artificial, but it gives us a reason to have both read-only and read-write properties. We define a metaclass, which is simply a class that extends `type`.

```python
class Extent(type):
    @property
    def extent(self):
        ext = getattr(self, '_extent', None)
        if ext is None:
            self._extent = []
            ext = self._extent
        return ext

    @property
    def last_instance(self):
        return getattr(self, '_last_instance', None)

    @last_instance.setter
    def last_instance(self, value):
        self._last_instance = value
```

Please *note* that if you want to do something like this for real, you may well need to protect access to these shared class properties with synchronization tools like RLock and friends to prevent different threads from overwriting each other’s work willy-nilly.

Next we create a class that uses that metaclass. The syntax is different in Python 2.7, so you may need to adjust if you’re working in an older version.

```python
class Thing(object, metaclass=Extent):
    def __init__(self):
        self.__class__.extent.append(self)
        self.__class__.last_instance = self
```

Another *note* for real code: these references (the extent and the last_instance) *will* keep your objects from being garbage collected, so if you actually want to keep extents for your classes, you should do so using something like weakref.

Now we can try out our new class:

```
>>> t1 = Thing()
>>> t2 = Thing()
>>> Thing.extent
[<__main__.Thing object at 0x101c5d080>, <__main__.Thing object at 0x101c5d2b0>]
>>> Thing.last_instance
<__main__.Thing object at 0x101c5d2b0>
```

Great, we have what we wanted! There are a couple of things to remember, though:

- Class properties are inherited!
- Class properties are not accessible via instances, only via classes.

Let’s see an example that demonstrates both. Suppose we add a new subclass of Thing called SuperThing:

```
>>> class SuperThing(Thing):
...     @property
...     def extent(self):
...         return self.__class__.extent
...
>>> s = SuperThing()
```

See how we created a *normal* extent property that just reads from the class property? So we can now do this:

```
>>> s.extent
[<__main__.Thing object at 0x101c5d080>, <__main__.Thing object at 0x101c5d2b0>, <__main__.SuperThing object at 0x101c5d2e8>]
```

Whereas if we were to try that with one of the original Things, it wouldn’t work:

```
>>> t1.extent
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: 'Thing' object has no attribute 'extent'
```

We can of course still access either one via classes:

```
>>> t1.__class__.extent
[<__main__.Thing object at 0x101c5d080>, <__main__.Thing object at 0x101c5d2b0>, <__main__.SuperThing object at 0x101c5d2e8>]
>>> s.__class__.extent
[<__main__.Thing object at 0x101c5d080>, <__main__.Thing object at 0x101c5d2b0>, <__main__.SuperThing object at 0x101c5d2e8>]
```

Also note that the extent for each of these classes is the same, which shows that class properties are inherited.

Did you spot the bug in Thing? It only manifests when we have subclasses like SuperThing. We inherited the __init__ from Thing, which adds each new instance to the extent, and sets last_instance. In this case, self.__class__.extent was already initialized, on Thing, and so we added our SuperThing to the existing list. For last_instance, however, we assigned directly, rather than first reading and appending, as we did with the list property, and so SuperThing.last_instance will be our s, and Thing.last_instance will be our t2. Tread carefully; it’s easy to make a mistake with this kind of thing!
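To make the pitfall concrete, here is a self-contained repro of the behaviour just described (a sketch assuming Python 3; the class bodies repeat the definitions from earlier in this post):

```python
class Extent(type):
    @property
    def extent(self):
        # Lazily create the shared list on the class that first asks for it.
        ext = getattr(self, '_extent', None)
        if ext is None:
            self._extent = []
            ext = self._extent
        return ext

    @property
    def last_instance(self):
        return getattr(self, '_last_instance', None)

    @last_instance.setter
    def last_instance(self, value):
        self._last_instance = value

class Thing(object, metaclass=Extent):
    def __init__(self):
        self.__class__.extent.append(self)
        self.__class__.last_instance = self

class SuperThing(Thing):
    pass

t1, t2 = Thing(), Thing()
s = SuperThing()

# The extent list was lazily created on Thing, so the inherited attribute
# lookup appends s to that same list:
assert SuperThing.extent is Thing.extent
assert len(Thing.extent) == 3

# But last_instance was *assigned*, which creates a fresh _last_instance
# attribute on SuperThing instead of updating Thing's:
assert Thing.last_instance is t2
assert SuperThing.last_instance is s
```

The asymmetry is exactly the read-then-mutate versus plain-assignment distinction: reading walks up the class hierarchy, assigning does not.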

Hopefully this has been a (relatively) simple example of how to build your own class properties, with or without setters.
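As a postscript on the weakref caveat mentioned above, here is one hedged way (a sketch, not code from the post; the `WeakExtent`/`WeakThing` names are hypothetical) to keep an extent without pinning instances in memory:

```python
import gc
import weakref

class WeakExtent(type):
    @property
    def extent(self):
        ext = getattr(self, '_extent', None)
        if ext is None:
            # Weak references: membership lapses once an instance dies.
            self._extent = weakref.WeakSet()
            ext = self._extent
        return ext

class WeakThing(metaclass=WeakExtent):
    def __init__(self):
        self.__class__.extent.add(self)

t1 = WeakThing()
t2 = WeakThing()
assert len(WeakThing.extent) == 2

del t1
gc.collect()  # CPython frees on refcount zero anyway, but be explicit
assert len(WeakThing.extent) == 1
```

A `WeakSet` gives up ordering and the "most recent" semantics, so a real design would need to decide which guarantees matter.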

### Using streams to clarify (?) the signature of Data.Text.replace

### Dominic Steinitz: Ecology, Dynamical Systems and Inference via PMMH

In the 1920s, Lotka (1909) and Volterra (1926) developed a model of a very simple predator-prey ecosystem.

Although simple, it turns out that the Canadian lynx and snowshoe hare are well represented by such a model. Furthermore, the Hudson’s Bay Company kept records of how many pelts of each species were trapped for almost a century, giving a good proxy for the population of each species.

We can capture the fact that we do not have a complete model by describing our state of ignorance about the parameters. In order to keep this as simple as possible, let us assume that the log parameters undergo Brownian motion. That is, we know the parameters will jiggle around and the further into the future we look the less certain we are about what values they will have taken. By making the log parameters undergo Brownian motion, we can also capture our modelling assumption that birth, death and predation rates are always positive. A similar approach is taken in Dureau, Kalogeropoulos, and Baguelin (2013), where the (log) parameters of an epidemiological model are taken to be Ornstein-Uhlenbeck processes (which is biologically more plausible although it adds to the complexity of the model, something we wish to avoid in an example such as this).

Andrieu, Doucet, and Holenstein (2010) propose a method to estimate the parameters of such models (Particle Marginal Metropolis Hastings aka PMMH) and the domain specific probabilistic language LibBi (Murray (n.d.)) can be used to apply this (and other inference methods).
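LibBi does the heavy lifting below, but the shape of PMMH itself is small enough to sketch. The following is a hypothetical Python illustration (not the model from this post) on a toy one-dimensional Gaussian state-space model: a bootstrap particle filter estimates the marginal likelihood, and a random-walk Metropolis step accepts or rejects parameter proposals using that estimate.

```python
import math
import random

def pf_loglik(theta, ys, n, rng):
    """Bootstrap particle filter estimate of log p(ys | theta) for the toy
    model x_t = theta * x_{t-1} + N(0, 0.5^2),  y_t = x_t + N(0, 0.5^2)."""
    xs = [rng.gauss(0.0, 1.0) for _ in range(n)]
    ll = 0.0
    for y in ys:
        xs = [theta * x + rng.gauss(0.0, 0.5) for x in xs]           # propagate
        ws = [math.exp(-0.5 * ((y - x) / 0.5) ** 2) for x in xs]     # weight
        ll += math.log(sum(ws) / n + 1e-300)
        xs = rng.choices(xs, weights=ws, k=n)                        # resample
    return ll

def pmmh(ys, n_iters, n_particles, seed=42):
    rng = random.Random(seed)
    theta = 0.5
    ll = pf_loglik(theta, ys, n_particles, rng)
    chain = []
    for _ in range(n_iters):
        prop = theta + rng.gauss(0.0, 0.1)             # random-walk proposal
        ll_prop = pf_loglik(prop, ys, n_particles, rng)
        # Metropolis-Hastings acceptance, using the *estimated* likelihoods;
        # the key PMMH result is that this still targets the right posterior.
        if math.log(rng.random()) < ll_prop - ll:
            theta, ll = prop, ll_prop
        chain.append(theta)
    return chain

# Short chain purely for illustration.
chain = pmmh([0.5, 0.9, 0.7, 1.1], n_iters=200, n_particles=50)
```

The priors, proposal scales and model are all placeholders; the point is only the structure: particle filter inside, Metropolis outside.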

For the sake of simplicity, in this blog post, we only model one parameter as being unknown and undergoing Brownian motion. A future blog post will consider more sophisticated scenarios.

A Dynamical System Aside

The above dynamical system is structurally unstable (more on this in a future post), a possible indication that it should not be considered as a good model of predator–prey interaction. Let us modify this to include carrying capacities for the populations of both species.

Data Generation with LibBi

Let’s generate some data using LibBi.

```
// Generate data assuming a fixed growth rate for hares rather than
// e.g. a growth rate that undergoes Brownian motion.
model PP {
  const h         = 0.1;    // time step
  const delta_abs = 1.0e-3; // absolute error tolerance
  const delta_rel = 1.0e-6; // relative error tolerance

  const a  = 5.0e-1 // Hare growth rate
  const k1 = 2.0e2  // Hare carrying capacity
  const b  = 2.0e-2 // Hare death rate per lynx
  const d  = 4.0e-1 // Lynx death rate
  const k2 = 2.0e1  // Lynx carrying capacity
  const c  = 4.0e-3 // Lynx birth rate per hare

  state P, Z       // Hares and lynxes
  state ln_alpha   // Hare growth rate - we express it in log form for
                   // consistency with the inference model
  obs P_obs        // Observations of hares

  sub initial {
    P ~ log_normal(log(100.0), 0.2)
    Z ~ log_normal(log(50.0), 0.1)
  }

  sub transition(delta = h) {
    ode(h = h, atoler = delta_abs, rtoler = delta_rel, alg = 'RK4(3)') {
      dP/dt = a * P * (1 - P / k1) - b * P * Z
      dZ/dt = -d * Z * (1 + Z / k2) + c * P * Z
    }
  }

  sub observation {
    P_obs ~ log_normal(log(P), 0.1)
  }
}
```

We can look at phase space starting with different populations and see they all converge to the same fixed point.

Data Generation with Haskell

Since at some point in the future, I plan to produce Haskell versions of the methods given in Andrieu, Doucet, and Holenstein (2010), let’s generate the data using Haskell.

```haskell
{-# OPTIONS_GHC -Wall                    #-}
{-# OPTIONS_GHC -fno-warn-name-shadowing #-}

module LotkaVolterra (
    solLv
  , solPp
  , h0
  , l0
  , baz
  , logBM
  , eulerEx
  ) where

import Numeric.GSL.ODE
import Numeric.LinearAlgebra
import Data.Random.Source.PureMT
import Data.Random hiding ( gamma )
import Control.Monad.State
```

Here’s the unstable model.

```haskell
lvOde :: Double ->
         Double ->
         Double ->
         Double ->
         Double ->
         [Double] ->
         [Double]
lvOde rho1 c1 rho2 c2 _t [h, l] =
  [
    rho1 * h - c1 * h * l
  , c2 * h * l - rho2 * l
  ]
lvOde _rho1 _c1 _rho2 _c2 _t vars =
  error $ "lvOde called with: " ++ show (length vars) ++ " variable"

rho1, c1, rho2, c2 :: Double
rho1 = 0.5
c1   = 0.02
rho2 = 0.4
c2   = 0.004

deltaT :: Double
deltaT = 0.1

solLv :: Matrix Double
solLv = odeSolve (lvOde rho1 c1 rho2 c2)
                 [50.0, 50.0]
                 (fromList [0.0, deltaT .. 50])
```

And here’s the stable model.

```haskell
ppOde :: Double ->
         Double ->
         Double ->
         Double ->
         Double ->
         Double ->
         Double ->
         [Double] ->
         [Double]
ppOde a k1 b d k2 c _t [p, z] =
  [
    a * p * (1 - p / k1) - b * p * z
  , -d * z * (1 + z / k2) + c * p * z
  ]
ppOde _a _k1 _b _d _k2 _c _t vars =
  error $ "ppOde called with: " ++ show (length vars) ++ " variable"

a, k1, b, d, k2, c :: Double
a  = 0.5
k1 = 200.0
b  = 0.02
d  = 0.4
k2 = 50.0
c  = 0.004

solPp :: Double -> Double -> Matrix Double
solPp x y = odeSolve (ppOde a k1 b d k2 c)
                     [x, y]
                     (fromList [0.0, deltaT .. 50])

gamma, alpha, beta :: Double
gamma = d / a
alpha = a / (c * k1)
beta  = d / (a * k2)

fp :: (Double, Double)
fp = ((gamma + beta) / (1 + alpha * beta), (1 - gamma * alpha) / (1 + alpha * beta))

h0, l0 :: Double
h0 = a * fst fp / c
l0 = a * snd fp / b

foo, bar :: Matrix R
foo = matrix 2 [a / k1, b, c, negate d / k2]
bar = matrix 1 [a, d]

baz :: Maybe (Matrix R)
baz = linearSolve foo bar
```

This gives a stable fixed point of

```
ghci> baz
Just (2><1)
 [ 120.00000000000001
 ,               10.0 ]
```

Here’s an example of convergence to that fixed point in phase space.
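As a quick cross-check of that fixed point, the 2×2 linear system solved by `baz` above can be inverted by hand. This is a hypothetical Python check (not part of the original post), mirroring the constants from the Haskell code:

```python
# Solve [a/k1, b; c, -d/k2] [p; z] = [a; d] by Cramer's rule,
# using the same constants as the Haskell code above.
a, k1, b, d, k2, c = 0.5, 200.0, 0.02, 0.4, 50.0, 0.004

det = (a / k1) * (-d / k2) - b * c
p = (a * (-d / k2) - b * d) / det
z = ((a / k1) * d - c * a) / det

# Agrees with the ghci output: p ~ 120, z ~ 10.
assert abs(p - 120.0) < 1e-9
assert abs(z - 10.0) < 1e-9
```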

The Stochastic Model

Let us now assume that the Hare growth parameter undergoes Brownian motion so that the further into the future we go, the less certain we are about it. In order to ensure that this parameter remains positive, let’s model the log of it to be Brownian motion.

$$\begin{aligned}
\frac{\mathrm{d}P}{\mathrm{d}t} &= \alpha_t P \left(1 - \frac{P}{k_1}\right) - b P Z \\
\frac{\mathrm{d}Z}{\mathrm{d}t} &= -d Z \left(1 + \frac{Z}{k_2}\right) + c P Z \\
\mathrm{d}(\ln \alpha_t) &= -\frac{\sigma^2}{2}\,\mathrm{d}t + \sigma\,\mathrm{d}W_t
\end{aligned}$$

where the final equation is a stochastic differential equation with $W_t$ being a Wiener process.

By Itô we have

$$\alpha_t = \alpha_0 \exp\left(\sigma W_t - \frac{\sigma^2 t}{2}\right)$$

We can use this to generate paths for $\alpha_t$:

$$\alpha_{t + \Delta t} = \alpha_t \exp\left(\sigma \sqrt{\Delta t}\, Z - \frac{\sigma^2 \Delta t}{2}\right)$$

where $Z \sim \mathcal{N}(0, 1)$.

```haskell
oneStepLogBM :: MonadRandom m => Double -> Double -> Double -> m Double
oneStepLogBM deltaT sigma rhoPrev = do
  x <- sample $ rvar StdNormal
  return $ rhoPrev * exp(sigma * (sqrt deltaT) * x - 0.5 * sigma * sigma * deltaT)

iterateM :: Monad m => (a -> m a) -> m a -> Int -> m [a]
iterateM f mx n = sequence . take n . iterate (>>= f) $ mx

logBMM :: MonadRandom m => Double -> Double -> Int -> Int -> m [Double]
logBMM initRho sigma n m =
  iterateM (oneStepLogBM (recip $ fromIntegral n) sigma) (return initRho) (n * m)

logBM :: Double -> Double -> Int -> Int -> Int -> [Double]
logBM initRho sigma n m seed =
  evalState (logBMM initRho sigma n m) (pureMT $ fromIntegral seed)
```

We can see the further we go into the future, the less certain we are about the value of the parameter.
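For comparison, the same one-step discretisation can be sketched outside Haskell. This is a hypothetical standard-library Python version of `oneStepLogBM`/`logBM`, not code from the post:

```python
import math
import random

def one_step_log_bm(delta_t, sigma, rho_prev, rng):
    # rho_{t+dt} = rho_t * exp(sigma * sqrt(dt) * Z - sigma^2 * dt / 2),
    # with Z a standard normal draw.
    z = rng.gauss(0.0, 1.0)
    return rho_prev * math.exp(sigma * math.sqrt(delta_t) * z
                               - 0.5 * sigma * sigma * delta_t)

def log_bm(init_rho, sigma, n, m, seed):
    # n steps per unit of time, over m units of time.
    rng = random.Random(seed)
    path = [init_rho]
    for _ in range(n * m):
        path.append(one_step_log_bm(1.0 / n, sigma, path[-1], rng))
    return path

path = log_bm(0.5, 0.1, 100, 5, 2)
assert len(path) == 501
assert all(rho > 0 for rho in path)  # the exponential keeps rates positive
```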

Using this we can simulate the whole dynamical system which is now a stochastic process.

```haskell
f1, f2 :: Double -> Double -> Double ->
          Double -> Double ->
          Double
f1 a k1 b p z = a * p * (1 - p / k1) - b * p * z
f2 d k2 c p z = -d * z * (1 + z / k2) + c * p * z

oneStepEuler :: MonadRandom m =>
                Double ->
                Double ->
                Double -> Double ->
                Double -> Double -> Double ->
                (Double, Double, Double) ->
                m (Double, Double, Double)
oneStepEuler deltaT sigma k1 b d k2 c (rho1Prev, pPrev, zPrev) = do
  let pNew = pPrev + deltaT * f1 (exp rho1Prev) k1 b pPrev zPrev
  let zNew = zPrev + deltaT * f2 d k2 c pPrev zPrev
  rho1New <- oneStepLogBM deltaT sigma rho1Prev
  return (rho1New, pNew, zNew)

euler :: MonadRandom m =>
         (Double, Double, Double) ->
         Double ->
         Double -> Double ->
         Double -> Double -> Double ->
         Int -> Int ->
         m [(Double, Double, Double)]
euler stateInit sigma k1 b d k2 c n m =
  iterateM (oneStepEuler (recip $ fromIntegral n) sigma k1 b d k2 c)
           (return stateInit)
           (n * m)

eulerEx :: (Double, Double, Double) ->
           Double -> Int -> Int -> Int ->
           [(Double, Double, Double)]
eulerEx stateInit sigma n m seed =
  evalState (euler stateInit sigma k1 b d k2 c n m) (pureMT $ fromIntegral seed)
```

We see that the populations become noisier the further into the future we go.

Notice that the second order effects of the system are now to some extent captured by the fact that the growth rate of Hares can drift. In our simulation, this is demonstrated by our decreasing knowledge of the growth rate the further we look into the future.

Inference

Now let us infer the growth rate using PMMH. Here’s the model expressed in LibBi.

```
// Infer growth rate for hares
model PP {
  const h         = 0.1;    // time step
  const delta_abs = 1.0e-3; // absolute error tolerance
  const delta_rel = 1.0e-6; // relative error tolerance

  const a  = 5.0e-1 // Hare growth rate - superfluous for inference
                    // but a reminder of what we should expect
  const k1 = 2.0e2  // Hare carrying capacity
  const b  = 2.0e-2 // Hare death rate per lynx
  const d  = 4.0e-1 // Lynx death rate
  const k2 = 2.0e1  // Lynx carrying capacity
  const c  = 4.0e-3 // Lynx birth rate per hare

  state P, Z       // Hares and lynxes
  state ln_alpha   // Hare growth rate - we express it in log form for
                   // consistency with the inference model
  obs P_obs        // Observations of hares

  param mu, sigma  // Mean and standard deviation of hare growth rate

  noise w          // Noise

  sub parameter {
    mu ~ uniform(0.0, 1.0)
    sigma ~ uniform(0.0, 0.5)
  }

  sub proposal_parameter {
    mu ~ truncated_gaussian(mu, 0.02, 0.0, 1.0);
    sigma ~ truncated_gaussian(sigma, 0.01, 0.0, 0.5);
  }

  sub initial {
    P ~ log_normal(log(100.0), 0.2)
    Z ~ log_normal(log(50.0), 0.1)
    ln_alpha ~ gaussian(log(mu), sigma)
  }

  sub transition(delta = h) {
    w ~ normal(0.0, sqrt(h));
    ode(h = h, atoler = delta_abs, rtoler = delta_rel, alg = 'RK4(3)') {
      dP/dt = exp(ln_alpha) * P * (1 - P / k1) - b * P * Z
      dZ/dt = -d * Z * (1 + Z / k2) + c * P * Z
      dln_alpha/dt = -sigma * sigma / 2 - sigma * w / h
    }
  }

  sub observation {
    P_obs ~ log_normal(log(P), 0.1)
  }
}
```

Let’s look at the posteriors of the hyper-parameters for the Hare growth parameter.

The estimate for $\mu$ is pretty decent. For our generated data, and given our observations are quite noisy, maybe the estimate for $\sigma$ is not too bad also.

Appendix: The R Driving Code

All code including the R below can be downloaded from GitHub but make sure you use the *straight-libbi* branch and *not* master.

Andrieu, Christophe, Arnaud Doucet, and Roman Holenstein. 2010. “Particle Markov chain Monte Carlo methods.” *Journal of the Royal Statistical Society. Series B: Statistical Methodology* 72 (3): 269–342. doi:10.1111/j.1467-9868.2009.00736.x.

Dureau, Joseph, Konstantinos Kalogeropoulos, and Marc Baguelin. 2013. “Capturing the time-varying drivers of an epidemic using stochastic dynamical systems.” *Biostatistics (Oxford, England)* 14 (3): 541–55. doi:10.1093/biostatistics/kxs052.

Lotka, Alfred J. 1909. “Contribution to the Theory of Periodic Reactions.” *The Journal of Physical Chemistry* 14 (3): 271–74. doi:10.1021/j150111a004.

Murray, Lawrence M. n.d. “Bayesian State-Space Modelling on High-Performance Hardware Using LibBi.”

Volterra, Vito. 1926. “Variazioni e fluttuazioni del numero d’individui in specie animali conviventi.” *Memorie Della R. Accademia Dei Lincei* 6 (2): 31–113. http://www.liberliber.it/biblioteca/v/volterra/variazioni_e_fluttuazioni/pdf/volterra_variazioni_e_fluttuazioni.pdf.

### [ANN] vty-5.6 released

### Munich Haskell Meeting, 2016-06-29 @ 19:30 Augustiner-Keller

### José Pedro Magalhães: Marie Curie individual fellowship application text

However, it is now clear that I will not be taking that position, as I am settled in London and at Standard Chartered. I still quite like the proposal, though, and, when writing it, I remember having wanted to see examples of successful Marie Curie proposals to have an idea of what they would look like. As such, I'm making the text of my own Marie Curie fellowship application available online. I hope this can help others to write successful applications, and maybe some of its ideas can be taken on by other researchers. Feel free to adapt any of the ideas in the proposal (but please give credit when it is due, and remember that the European Commission uses plagiarism detection software). It's available on my website, and linked below. I made the LaTeX template for the application available before.

José Pedro Magalhães. Models of Structure in Music (MoStMusic). Marie Sklodowska-Curie Individual Fellowship application, 2014.

### User Requirements Survey for CT Software (Beta)

### Backtracking when building a bytestring

### Philip Wadler: A sad day

### up-to-date glfw examples

### JLAMP special issue for PLACES (2nd Call for Papers)

### Haskell in Leipzig 2016: Final Call for Papers

### wren gayle romano: Self-improvement goals, overcoming perfectionism, and dissertating

This year's self-improvement goal was to get back into blogging regularly. Part of that goal was just to get back into *writing* regularly; the other part was specifically to publish more regularly.

I've done fairly well on the first half, actually. I'd hoped to do better, but then all year I've had to deal with spoon-draining circumstances, so I've probably done about as well as I can without sacrificing my health. One of my other self-improvement goals has been to take my health seriously, to listen to my body rather than pushing it beyond its limits. I'm on-track for improving at both of these, I just need to stop beating myself up over it.

For the second half, the publishing bit, that I've done poorly. I'd like to blame the spoon vortex here too, but really I think the biggest problem is my perfectionism. Perfectionism greatly amplifies the problem of lacking spoons: both the editing itself, as well as the emotional fallout of missing the mark or of having taken the entire day to hit it, both of these cost spoons. The real aim behind my goal to publish regularly wasn't to have more words to my name, but rather to “get out there” more, to be more productive in-and-of-itself rather than to have more products. So I've started thinking: the real target for this self-improvement goal should not be publishing regularly, but rather should be (working to) overcome perfectionism.

If perfectionism is a problem of fear, then the thing I must address is that fear. So how to do it? One of the suggestions in that article is to let yourself fail. Not to lower your unreasonable standards (the party-line for what to do), but rather to allow yourself to not meet those standards. One of my standards is to be thought provoking, and hence to focus overmuch on essays. To try and break free from this, I'm thinking to start posting summaries of my daily dissertation progress. A nanowrimo sort of thing, though without the focus on word-count per se. I've read a few articles suggesting one should start their day by summarizing the previous day's progress, but I've never tried it. So here goes nothing :)


### PEPM 2017 Call for Papers

### Disciple/DDC: Type 'Int' does not match type 'Int'

I spent a couple of days last week undoing one particular anti-pattern which was causing bogus error messages like: "Type 'Int' does not match type 'Int'". In DDC the root cause of these messages was invariably this data type:

```haskell
data Bound n
        = UIx   Int             -- An anonymous, deBruijn variable.
        | UName n               -- A named variable or constructor.
        | UPrim n (Type n)      -- A primitive value (or type) with its type (or kind).
```

A value of type Bound n represents the bound occurrence of a variable or constructor, where n is the underlying type used for names. In practice n is often Text or String. The data type has three constructors: UIx for occurrences of anonymous variables, UName for named variables and constructors, and UPrim for names of primitives. We use the Bound type for both terms and types.

The intent was that when type checking an expression, to determine the type (or kind) of a Bound thing in UIx or UName form, we would look it up in the type environment. However, as the types (and kinds) of primitives are fixed by the language definition, we would have their types attached directly to the UPrim constructor and save ourselves the cost of environment lookup. For example, we would represent the user-defined type constructor 'List' as (UName "List"), but the primitive type constructor 'Int' as (UPrim "Int" kData), where 'kData' refers to the kind of data types.

The pain begins the first time you accidentally represent a primitive type constructor in the wrong form. Suppose you're parsing type constructor names from a source file, and happen to represent Int as (UName "Int") instead of (UPrim "Int" kData). Both versions are pretty printed as just "Int", so dumping the parsed AST does not reveal the problem. However, internally in the compiler the types of primitive operators like add and mul are all specified using the (UPrim "Int" kData) form, and you can't pass a value of type (UName "Int") to a function expecting a (UPrim "Int" kData). The uninformative error message produced by the compiler is simply "Type 'Int' does not match type 'Int'". Disaster.

The first time this happens it takes an hour to find the problem, but when found you think "oh well, that was a trivial mistake, I can just fix this instance". You move on to other things, but next week it happens again, and you spend another hour -- then a month later it happens again and it takes two hours to find. In isolation each bug is fixable, but after a couple of years this reoccurring problem becomes a noticeable drain on your productivity. When new people join the project they invariably hit the same problem, and get discouraged because the error message on the command line doesn't give any hints about what might be wrong, or how to fix it.

A better way to handle names is to parameterise the data types that represent your abstract syntax tree with separate types for each different sort of name: for the bound and binding occurrences of variables, for bound and binding occurrences of constructors, and for primitives. If the implementation is in Haskell we can use type families to produce the type of each name based on a common index type, like so:

```haskell
type family BindVar  l
type family BoundVar l
type family BindCon  l
type family BoundCon l
type family Prim     l

data Exp l
        = XVar  (BoundVar l)
        | XCon  (BoundCon l)
        | XPrim (Prim l)
        | XLam  (BindVar l) (Exp l)
```

DDC now uses this approach for the representation of the source AST. To represent all names by a single flat text value we define a tag type to represent this variation, then give instances for each of the type families:

```haskell
data Flat = Flat

type instance BindVar  Flat = Text
type instance BoundVar Flat = Text
type instance BindCon  Flat = Text
type instance BoundCon Flat = Text
type instance Prim     Flat = Text

type ExpFlat = Exp Flat
```

On the other hand, if we want a form that allows deBruijn indices for variables, and uses separate types for constructors and primitives we can use:

```haskell
data Separate = Separate

data Bind     = BAnon   | BName Text
data Bound    = UIx Int | UName Text
data ConName  = ConName  Text
data PrimName = PrimName Text

type instance BindVar  Separate = Bind
type instance BoundVar Separate = Bound
type instance BindCon  Separate = ConName
type instance BoundCon Separate = ConName
type instance Prim     Separate = PrimName

type ExpSeparate = Exp Separate
```

It's also useful to convert between the above two representations. We might use ExpSeparate internally during program transformation, but use ExpFlat as an intermediate representation when pretty printing. To interface with legacy code we can also instantiate BoundVar with our old Bound type, so the new generic representation is strictly better than the old non-generic one using a hard-wired Bound.

Compiler engineering is full of traps of representation. Decisions taken about how to represent the core data structures permeate the entire project, and once made are very time consuming to change. Good approaches are also difficult to learn. Suppose we inspect the implementation of another compiler and find that the developers have set up their core data structures in some particular way. Is it set up like that because it's a good way to do so, or is it set up like that because it's a bad way of doing so, but now it's too difficult to change? For the particular case of variable binding, using a type like Bound above is a bad way of doing it. Using the generic representation is strictly better. Let this be a warning to future generations.