News aggregator

Stefan Jacholke: Fun Blocks Prototype

Planet Haskell - Wed, 06/29/2016 - 4:00pm

After investigating the constituent technologies that will be used for the project, work began on the initial prototype. For the initial version, we want a simple user interface with which simple CodeWorld applications can be built. Thus we want a version where a user can:

  • Drag and drop blocks from the toolbox
  • Receive simple error feedback
  • Generate code for a valid CodeWorld application
  • Run the generated CodeWorld application
  • Save and load projects

Some more advanced features which are omitted for this release are:

  • Support for functions
  • Support for lists
  • Advanced CodeWorld applications (animations, events)

Simply put, we want a visual block-based interface which can be used to build CodeWorld applications.

This project is educational in nature. We want to introduce programming to younger students and make it easy for them to understand the concepts behind it. For more advanced projects, students may use the regular CodeWorld text UI.

The initial prototype can be found at code.world/funblocks

Blockly

This project relies heavily on Blockly for the user interface. Blockly provides an editor where blocks can be dragged and dropped on a canvas. These blocks can then be interlocked with one another through a snapping behavior.

Blockly provides many features that make it easy to adapt it to a new language. It even incorporates simple type checking; luckily, there is also a project that improves on Blockly's type checking.

For the user interface, the choice was made to keep it clean and simple. The idea is to have a simple toolbox on the left side with blocks that can be dragged onto the canvas on the right.

Since the project uses Blockly, blocks can be snapped into each other. A block has a single output and may have multiple inputs, each having a type.

In the image above we can see a Translated block, which takes a picture and two numbers and outputs another picture, translated according to the given x and y values.

Each block may have zero or more inputs and one output type. All blocks are typed, and the functions in CodeWorld work on immutable data structures. A function may, for example, take a color as input and produce a modified color. This works well with a visual block interface, and we can see clearly how data flows through the program.
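As a rough idea of what such a block stands for in the generated code (assuming the uncurried translated(picture, x, y) form of the educational CodeWorld dialect, as in the generated code shown later in this post), it corresponds to something like:

main = drawingOf(translated(solidCircle(2), 3, 4))

Here the two number inputs become the x and y offsets applied to the picture input.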

Blockly provides a lot of features out of the box, though sometimes changes are necessary specifically for this project.

The largest change has been to use Anthony's fork of Blockly, which provides some useful typing and polymorphism features, though it also diverges from the main Blockly branch and is almost 300 commits behind.

Other changes have been required as well, such as:

  • implementing the let blocks
  • changing default methods of validation for numbers and function names
  • adding error highlighting

and other small changes and additions. Thus we maintain a separate fork of Blockly for this project.

CodeGen

When we build the program below:

we obtain the output code of:

main = drawingOf(rotated(colored(solidRectangle(7,5), white),(60 + 70) * 0.9) & solidCircle(4))

This looks rather ugly, but code formatting will be addressed at a later stage.

Currently, the regular Blockly code generation paradigm is followed: a code generation function is supplied for each block, which converts that block into code. For a single block, we first convert all of its input blocks to code and then combine the results into a single piece of code. This process is repeated for all top-level blocks.

For each block, a generation function is assigned, which looks something like this:

blockSector :: GeneratorFunction
blockSector block = do
  startangle <- valueToCode block "STARTANGLE" CNone
  endangle   <- valueToCode block "ENDANGLE" CNone
  radius     <- valueToCode block "RADIUS" CNone
  return $ none $ "sector(" ++ startangle ++ "," ++ endangle ++ "," ++ radius ++ ")"

This approach has some disadvantages, such as:

  • It is difficult to ensure the generated code is consistent and that all types match
  • It is difficult to find errors when they occur
  • A block has to be defined for every CodeWorld function

The first problem is alleviated with a different version of Blockly that supports polymorphic types. I'm using an updated version by Anthony that also supports automatic coloring of blocks. Hence the strategy currently taken is to prevent errors through the block-based editor.

This version of Blockly fits the project well. Since we don't have side effects, the type of the if block should be the same as that of its then and else expressions. In order to have a single if block, it must be polymorphic, and this is provided by the alternate Blockly.

Above we can see the if block matching the Text type; below we can see the if block matching the Picture type.

In any case, the generated code is then displayed and syntax-highlighted using CodeMirror.

The generated code is sent to the CodeWorld server, which compiles the code into JavaScript. The compiled code is then run in the browser and displayed on an HTML canvas.

Error handling

The first priority is to prevent a large class of errors from ever occurring. This is done by only allowing certain connections between blocks, validating input, providing default blocks, and so on. This makes it easier for the user to construct valid programs.

However, errors do happen. What happens when a block contains a disconnected input?

In this version, we visually indicate at which block the error occurs and display an error message for the block. This can be seen in the picture below.

Some other errors, such as runtime errors, will be addressed at a later stage. Luckily, we currently don't have missing patterns or partial functions to check for.

Project Management

Project management falls in line with the way things are currently done with the regular CodeWorld site.

Blockly supports exporting the current workspace as XML. This XML is then sent to the server and a hash is returned. This hash identifies a CodeWorld FunBlocks application. For example code.world/funblocks#PN2GoG8W2OMugJrzTw0GKYA loads a drawing of a coordinate plane.

Google's authentication API can be used to log in. Once a user is logged in, they can also save and load their projects by name.

Issues / Problems

A large majority of the project is in JavaScript; this includes the Blockly library.

Originally I intended to keep the majority of the code in Haskell; however, this is difficult to do. Currently, there is a large amount of additional code for Blockly and a further JavaScript file that handles user interface initialization and logic.

Most of the code that defines the various types of blocks is in a Haskell file. However, some more standard blocks, such as the math number block (which includes some validation), are modified from the regular Blockly code base. The let definition is also modified Blockly code.

Thus organisationally, similar features and code are split between the main project and the Blockly fork.

Another current issue is the renaming of caller blocks for Let definitions. When a Let definition is renamed, the caller blocks should be renamed as well. This is currently disabled, as Blockly sometimes calls the text input validator for the wrong block, which then tries to rename all of foo's blocks to bar.

The block type unification for the Let definition is also currently producing strange results and the issues on this are covered in the issue tracker.

Further Work

Currently, we are only able to build very simple programs. For more complicated applications such as animations, we require first-class functions. We also need to cover more advanced data types; a starting point would be lists and tuples.

For CodeWorld simulations, the user is required to supply their own world data type, and a user-friendly way of building these in the blocks UI is needed.

Some more advanced functional features such as pattern matching would make this easier, and it remains to be seen how this will be implemented.

Everything is currently on track, which might mean we will get to spend time on some of the more advanced features later.

GHCJS

Working on the project, I'm exposed to GHCJS and JavaScript a lot, and I really like what GHCJS is doing. JavaScript has a tendency to let almost anything sort of work, and I spend a lot of time debugging and tracking down JS errors when working on Blockly code.

Straight after starting out I decided to try to keep the bulk of the work in Haskell, which meant using GHCJS. The GHCJS ecosystem lacks good documentation, and sometimes accomplishing things that are simple in JS requires a lot of time spent navigating through many files of GHCJS code, plus a bit of luck. One such example is when you want to assign a higher-order function in JS:

Blockly.FunBlocks["block_number"] = function(block){...}

where we want to assign a function that performs the code generation for the block indexed as "block_number".

This involved looking at the GHCJS callback source files; even though there are Stack Overflow questions regarding similar issues, the GHCJS project is fast-moving and the libraries keep changing.
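For what it's worth, here is a rough sketch of the kind of glue this ends up requiring, using the Callback machinery from ghcjs-base; the installGenerator helper is illustrative only, and the exact module names and signatures may differ between GHCJS versions:

{-# LANGUAGE JavaScriptFFI #-}

import GHCJS.Foreign.Callback (Callback, syncCallback1')
import GHCJS.Types (JSString, JSVal)

-- Assumed binding: assigns a JS function to Blockly.FunBlocks[name].
foreign import javascript unsafe
  "Blockly.FunBlocks[$1] = $2"
  js_setGenerator :: JSString -> Callback (JSVal -> IO JSVal) -> IO ()

-- Wrap a Haskell code generator as a synchronous JS callback (one that
-- returns a value) and install it for the given block type.
installGenerator :: JSString -> (JSVal -> IO JSVal) -> IO ()
installGenerator name gen = do
  cb <- syncCallback1' gen
  js_setGenerator name cb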

Haskell Summer of Code

I would like to say thanks for being part of this year's Haskell Summer of Code, as I'm enjoying working on the project quite a bit. Thanks go to all of those who made it possible as well.

I think the way the project is managed is quite good. We (my mentor, Chris Smith, and I) communicate primarily through email and GitHub. Working on GitHub helps a lot, and the issue system greatly helps with managing the project. It helps when you want an idea or goal to work on, or when you want to fix something quickly. It's too easy to forget about a minor bug if it isn't written down somewhere.

Chris has also been a great help in testing new features, providing feedback, opening issues, reporting bugs (of which there are sometimes a lot; I apologize!) and providing overall direction for the project.

Categories: Offsite Blogs

FP Complete: Announce: safe-exceptions, for async exception safety

Planet Haskell - Tue, 06/28/2016 - 6:00pm

This blog post is an initial announcement of a new package, safe-exceptions (and Github repo). This is a follow-up to a number of comments I made in last week's blog post. To quote the README:

Safe, consistent, and easy exception handling

Runtime exceptions - as exposed in base by the Control.Exception module - have long been an intimidating part of the Haskell ecosystem. This package, and this README for the package, are intended to overcome this. By providing an API that encourages best practices, and explaining the corner cases clearly, the hope is to turn what was previously something scary into an aspect of Haskell everyone feels safe using.

This is an initial release of the package. I fully expect the library to expand in the near future, and in particular there are two open issues for decisions that need to be made in the short term. I'm releasing the package in its current state since:

  1. I think it's useful as-is
  2. I'm hoping to get feedback on how to improve it

On the second point, I've created a survey to get feedback on the interruptible/uninterruptible issue and the throw naming issue. Both are described in this blog post.

I'm hoping this library can bring some sanity and comfort to people dealing with IO and wanting to ensure proper exception handling! Following is the content of the README, which can also be read on Github.

Goals

This package provides additional safety and simplicity versus Control.Exception by having its functions recognize the difference between synchronous and asynchronous exceptions. As described below, synchronous exceptions are treated as recoverable, allowing you to catch and handle them as well as clean up after them, whereas asynchronous exceptions can only be cleaned up after. In particular, this library prevents you from making the following mistakes:

  • Catching and swallowing an asynchronous exception
  • Throwing an asynchronous exception synchronously
  • Throwing a synchronous exception asynchronously
  • Swallowing asynchronous exceptions via failing cleanup handlers
Quickstart

This section is intended to give you the bare minimum information to use this library (and Haskell runtime exceptions in general) correctly.

  • Import the Control.Exception.Safe module. Do not import Control.Exception itself, which lacks the safety guarantees that this library adds. The same applies to Control.Monad.Catch.
  • If something can go wrong in your function, you can report this with throw. (For naming compatibility, throwIO and throwM are provided as synonyms.)
  • If you want to catch a specific type of exception, use catch, handle, or try.
  • If you want to recover from anything that may go wrong in a function, use catchAny, handleAny, or tryAny.
  • If you want to launch separate threads and kill them externally, you should use the async package.
  • Unless you really know what you're doing, avoid the following functions:
    • catchAsync
    • handleAsync
    • tryAsync
    • impureThrow
    • throwTo
  • If you need to perform some allocation or cleanup of resources, use one of the following functions (and don't use the catch/handle/try family of functions):

    • onException
    • withException
    • bracket
    • bracket_
    • finally
    • bracketOnError
    • bracketOnError_

Hopefully this will be able to get you up-and-running quickly.
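As a tiny, self-contained sketch of the above (the file name is made up), combining try for a specific exception type with bracket for guaranteed cleanup:

{-# LANGUAGE ScopedTypeVariables #-}

import Control.Exception.Safe (bracket, try)
import System.IO (IOMode (ReadMode), hClose, hGetLine, openFile)

main :: IO ()
main = do
  -- try only catches synchronous exceptions; asynchronous ones propagate.
  result <- try (bracket (openFile "config.txt" ReadMode) hClose hGetLine)
  case result of
    Left (e :: IOError) -> putStrLn ("recovering from: " ++ show e)
    Right firstLine     -> putStrLn firstLine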

Request to readers: if there are specific workflows that you're unsure of how to accomplish with this library, please ask so we can develop a more full-fledged cookbook as a companion to this file.

Terminology

We're going to define three different versions of exceptions. Note that these definitions are based on how the exception is thrown, not based on what the exception itself is:

  • Synchronous exceptions are generated by the current thread. What's important about these is that we generally want to be able to recover from them. For example, if you try to read from a file, and the file doesn't exist, you may wish to use some default value instead of having your program exit, or perhaps prompt the user for a different file location.

  • Asynchronous exceptions are thrown by either a different user thread, or by the runtime system itself. For example, in the async package, race will kill the longer-running thread with an asynchronous exception. Similarly, the timeout function will kill an action which has run for too long. And the runtime system will kill threads which appear to be deadlocked on MVars or STM actions.

    In contrast to synchronous exceptions, we almost never want to recover from asynchronous exceptions. In fact, this is a common mistake in Haskell code, and from what I've seen has been the largest source of confusion and concern amongst users when it comes to Haskell's runtime exception system.

  • Impure exceptions are hidden inside a pure value, and exposed by forcing evaluation of that value. Examples are error, undefined, and impureThrow. Additionally, incomplete pattern matches can generate impure exceptions. Ultimately, when these pure values are forced and the exception is exposed, it is thrown as a synchronous exception.

    Since they are ultimately thrown as synchronous exceptions, when it comes to handling them, we want to treat them in all ways like synchronous exceptions. Based on the comments above, that means we want to be able to recover from impure exceptions.
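To make the first and last bullets concrete, here is a small sketch: a synchronous IOError from a missing file and an impure error forced inside IO are both recoverable in the same way (the file name is made up):

{-# LANGUAGE ScopedTypeVariables #-}

import Control.Exception.Safe (catch, tryAny)

main :: IO ()
main = do
  -- Synchronous: fall back to a default value if the file is missing.
  contents <- readFile "no-such-file.txt" `catch` \(_ :: IOError) -> return ""
  putStrLn ("read " ++ show (length contents) ++ " characters")
  -- Impure: the error is only raised when the value is forced; once it is,
  -- it behaves like a synchronous exception and tryAny can recover from it.
  res <- tryAny (return $! (error "boom" :: Int))
  print res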

Why catch asynchronous exceptions?

If we never want to be able to recover from asynchronous exceptions, why do we want to be able to catch them at all? The answer is for resource cleanup. For both sync and async exceptions, we would like to be able to acquire resources - like file descriptors - and register a cleanup function which is guaranteed to be run. This is exemplified by functions like bracket and withFile.

So to summarize:

  • All synchronous exceptions should be recoverable
  • All asynchronous exceptions should not be recoverable
  • In both cases, cleanup code needs to work reliably
Determining sync vs async

Unfortunately, GHC's runtime system provides no way to determine if an exception was thrown synchronously or asynchronously, but this information is vitally important. There are two general approaches to dealing with this:

  • Run an action in a separate thread, don't give that thread's ID to anyone else, and assume that any exception that kills it is a synchronous exception. This approach is covered in the School of Haskell article catching all exceptions, and is provided by the enclosed-exceptions package.

  • Make assumptions based on the type of an exception, assuming that certain exception types are only thrown synchronously and certain only asynchronously.

Both of these approaches have downsides. For the downsides of the type-based approach, see the caveats section at the end. The problems with the first are more interesting to us here:

  • It's much more expensive to fork a thread every time we want to deal with exceptions
  • It's not fully reliable: it's possible for the thread ID of the forked thread to leak somewhere, or the runtime system to send it an async exception
  • While this works for actions living in IO, it gets trickier for pure functions and monad transformer stacks. The latter issue is solved via monad-control and the exceptions packages. The former issue, however, means that it's impossible to provide a universal interface for failure for pure and impure actions. This may seem esoteric, and if so, don't worry about it too much.

Therefore, this package takes the approach of trusting type information to determine if an exception is asynchronous or synchronous. The details are less interesting to a user, but the basics are: we leverage the extensible exception system in GHC and state that any exception type which is a child of SomeAsyncException is an async exception. All other exception types are assumed to be synchronous.
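Concretely, marking an exception type as asynchronous means routing its Exception instance through SomeAsyncException, using two helpers from base's Control.Exception; the Timeout type below is just an illustration:

import Control.Exception
  (Exception (..), asyncExceptionFromException, asyncExceptionToException)

-- An illustrative exception type that this library would classify as
-- asynchronous, because its SomeException representation is wrapped
-- in SomeAsyncException.
data Timeout = Timeout deriving Show

instance Exception Timeout where
  toException   = asyncExceptionToException
  fromException = asyncExceptionFromException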

Handling of sync vs async exceptions

Once we're able to distinguish between sync and async exceptions, and we know our goals with sync vs async, how we handle things is pretty straightforward:

  • If the user is trying to install a cleanup function (such as with bracket or finally), we don't care if the exception is sync or async: call the cleanup function and then rethrow the exception.
  • If the user is trying to catch an exception and recover from it, only catch sync exceptions and immediately rethrow async exceptions.
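A sketch of that second rule in code (an illustration of the idea only, not the library's actual implementation):

import qualified Control.Exception as E

-- Recover only from synchronous exceptions: anything whose SomeException
-- representation unwraps to a SomeAsyncException is rethrown untouched.
catchSyncOnly :: E.Exception e => IO a -> (e -> IO a) -> IO a
catchSyncOnly action handler =
    action `E.catch` \se@(E.SomeException _) ->
      case E.fromException se :: Maybe E.SomeAsyncException of
        Just _  -> E.throwIO se        -- asynchronous: rethrow, never recover
        Nothing ->                     -- synchronous: offer it to the handler
          case E.fromException se of
            Just e  -> handler e
            Nothing -> E.throwIO se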

With this explanation, it's useful to consider async exceptions as "stronger" or more severe than sync exceptions, as the next section will demonstrate.

Exceptions in cleanup code

One annoying corner case is: what happens if, while running a cleanup function after an exception was thrown, the cleanup function itself throws an exception? For this, we'll consider action `onException` cleanup. There are four different possibilities:

  • action threw sync, cleanup threw sync
  • action threw sync, cleanup threw async
  • action threw async, cleanup threw sync
  • action threw async, cleanup threw async

Our guiding principle is: we cannot hide a more severe exception with a less severe exception. For example, if action threw a sync exception, and then cleanup threw an async exception, it would be a mistake to rethrow the sync exception thrown by action, since it would allow the user to recover when that is not desired.

Therefore, this library will always throw an async exception if either the action or the cleanup throws an async exception. Other than that, the behavior is currently undefined as to which of the two exceptions will be thrown. The library reserves the right to throw away either of the two thrown exceptions, or to generate a completely new exception value.

Typeclasses

The exceptions package provides an abstraction for throwing, catching, and cleaning up from exceptions for many different monads. This library leverages those type classes to generalize our functions.

Naming

There are a few choices of naming that differ from the base libraries:

  • throw in this library is for synchronously throwing within a monad, as opposed to in base where throwIO serves this purpose and throw is for impure throwing. This library provides impureThrow for the latter case, and also provides convenience synonyms throwIO and throwM for throw.
  • The catch function in this package will not catch async exceptions. Please use catchAsync if you really want to catch those, though it's usually better to use a function like bracket or withException which ensure that the thrown exception is rethrown.
Caveats

Let's talk about the caveats to keep in mind when using this library.

Checked vs unchecked

There is a big debate and difference of opinion regarding checked versus unchecked exceptions. With checked exceptions, a function states explicitly exactly what kinds of exceptions it can throw. With unchecked exceptions, it simply says "I can throw some kind of exception." Java is probably the most famous example of a checked exception system, with many other languages (including C#, Python, and Ruby) having unchecked exceptions.

As usual, Haskell makes this interesting. Runtime exceptions are most assuredly unchecked: all exceptions are converted to SomeException via the Exception typeclass, and function signatures do not state which specific exception types can be thrown (for more on this, see next caveat). Instead, this information is relegated to documentation, and unfortunately is often not even covered there.

By contrast, approaches like ExceptT and EitherT are very explicit in the type of exceptions that can be thrown. The cost of this is that there is extra overhead necessary to work with functions that can return different types of exceptions, usually by wrapping all possible exceptions in a sum type.

This library isn't meant to settle the debate on checked vs unchecked, but rather to bring sanity to Haskell's runtime exception system. As such, this library is decidedly in the unchecked exception camp, purely by virtue of the fact that the underlying mechanism is as well.

Explicit vs implicit

Another advantage of the ExceptT/EitherT approach is that you are explicit in your function signature that a function may fail. However, the reality of Haskell's standard libraries is that many, if not the vast majority, of IO actions can throw some kind of exception. In fact, once async exceptions are considered, every IO action can throw an exception.

Once again, this library deals with the status quo of runtime exceptions being ubiquitous, and gives the rule: you should consider the IO type as meaning both that a function modifies the outside world, and may throw an exception (and, based on the previous caveat, may throw any type of exception it feels like).

There are attempts at alternative approaches here, such as unexceptionalio. Again, this library isn't making a value statement on one approach versus another, but rather trying to make today's runtime exceptions in Haskell better.

Type-based differentiation

As explained above, this library makes heavy usage of type information to differentiate between sync and async exceptions. While the approach used is fairly well respected in the Haskell ecosystem today, it's certainly not universal, and definitely not enforced by the Control.Exception module. In particular, throwIO will allow you to synchronously throw an exception with an asynchronous type, and throwTo will allow you to asynchronously throw an exception with a synchronous type.

The functions in this library prevent that from happening via exception type wrappers, but if an underlying library does something surprising, the functions here may not work correctly. Further, even when using this library, you may be surprised by the fact that throw Foo `catch` (\Foo -> ...) won't actually trigger the exception handler if Foo looks like an asynchronous exception.

The ideal solution is to make a stronger distinction in the core libraries themselves between sync and async exceptions.

Deadlock detection exceptions

Two exception types which are handled surprisingly are BlockedIndefinitelyOnMVar and BlockedIndefinitelyOnSTM. Even though these exceptions are thrown asynchronously by the runtime system, for our purposes we treat them as synchronous. The reasons are twofold:

  • There is a specific action taken in the local thread - blocking on a variable which will never change - which causes the exception to be raised. This makes their behavior very similar to synchronous exceptions. In fact, one could argue that a function like takeMVar is synchronously throwing BlockedIndefinitelyOnMVar.
  • By our standards of recoverable vs non-recoverable, these exceptions certainly fall into the recoverable category. Unlike an intentional kill signal from another thread or the user (via Ctrl-C), we would like to be able to detect that we entered a deadlock condition and do something intelligent in an application.
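As a small illustration of that second point (a sketch; exactly when the runtime detects the deadlock depends on garbage collection):

import Control.Concurrent.MVar (newEmptyMVar, takeMVar)
import Control.Exception.Safe (tryAny)

main :: IO ()
main = do
  mv <- newEmptyMVar
  -- Nothing will ever fill mv, so the runtime system eventually raises
  -- BlockedIndefinitelyOnMVar here; this library treats it as synchronous,
  -- so tryAny lets us detect the deadlock and carry on.
  res <- tryAny (takeMVar mv :: IO ())
  print res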
Possible future changes

Interruptible vs uninterruptible masking

This discussion is now being tracked at: https://github.com/fpco/safe-exceptions/issues/3

In Control.Exception, allocation functions and cleanup handlers in combinators like bracket are masked using the (interruptible) mask function, in contrast to uninterruptibleMask. There have been some debates about the correctness of this in the past, notably a libraries mailing list discussion kicked off by Eyal Lotem. It seems that the general consensus is:

  • uninterruptibleMask is a better choice
  • But changing the core library like this would potentially break too many programs

In its current version, this library uses mask (interruptible) for allocation functions and uninterruptibleMask for cleanup handlers. This is a debatable decision (and one worth debating!). Some alternatives would be:

  • Use uninterruptibleMask for both allocation and cleanup pieces
  • Match Control.Exception's behavior
  • Provide two versions of each function, or possibly two modules
Naming of the synchronous monadic throwing function

We may decide to rename throw to something else at some point. Please see https://github.com/fpco/safe-exceptions/issues/4

Categories: Offsite Blogs

Roman Cheplyaka: Install Fedora Linux on an encrypted SSD

Planet Haskell - Tue, 06/28/2016 - 2:00pm

I just replaced the SSD in my laptop with a bigger one and installed a fresh Fedora Linux on it, essentially upgrading from F23 to F24.

Here are a few notes which could be useful to others and myself in the future.

Verifying the downloaded image

How do you verify the downloaded image? You verify the checksum.

How do you verify the checksum? You check its gpg signature.

How do you verify the authenticity of the gpg key? You could just check the fingerprint against the one published on the website above, but this is hardly better than trusting the checksum, since they both come from the same source.

Here’s a better idea: if you already have a Fedora system, you have the keys at /etc/pki/rpm-gpg.

In my case, I imported /etc/pki/rpm-gpg/RPM-GPG-KEY-fedora-24-primary (yes, my F23 system already contained the F24 signing keys), and was able to check the checksum signature.

This protects you against a scenario when getfedora.org is compromised and the checksums/signatures/keys are replaced there.

Installing from a USB partition

It turned out that the only optical disc in my house was damaged, and I didn't have a USB stick big enough for the Fedora image either.

I did have an external USB drive with some free space on it, but it contained a lot of data, so I couldn’t just make it one big ISO partition.

There are several instructions on how to create bootable USB partitions, but most of them look fragile and complicated.

Luckily, Fedora makes this super easy.

  1. Install the RPM package livecd-tools (which is a packaged version of this repo)
  2. Create a partition big enough for the ISO and format it. Unlike many other instructions that tell you to use FAT, this one works with ext[234] just fine.
  3. livecd-iso-to-disk Fedora-Workstation-Live-x86_64-24-1.2.iso /dev/sdb1
Setting up disk encryption

I was impressed by how easy it was to set up full disk encryption. I just checked the box “Encrypt my data” in the installer, and it used a very sensible partitioning scheme close to what I used to set up manually before:

  • Unencrypted /boot partition
  • Encrypted partition with LVM on top of it
    • Three logical volumes on the encrypted LVM: root, /home, and swap.

The only thing that I had to do was to enable TRIM support:

  1. For LVM: set issue_discards = 1 in /etc/lvm/lvm.conf.
  2. For cryptsetup: change none to discard in /etc/crypttab.
  3. Enable weekly trims: systemctl enable fstrim.timer && systemctl start fstrim.timer
Categories: Offsite Blogs

Functional Jobs: Senior Software Engineer (Haskell) at Front Row Education (Full-time)

Planet Haskell - Tue, 06/28/2016 - 10:35am
Position

Senior Functional Web Engineer to join fast-growing education startup transforming the way 3+ million K-8 students learn Math and English.

What you will be doing

Architect, design and develop new applications, tools and distributed systems for the Front Row ecosystem in Haskell, Flow, PostgreSQL, Ansible and many others. You will get to work on your deliverable end-to-end, from the UX to the deployment logic.

Mentor and support more junior developers in the organization

Create, improve and refine workflows and processes for delivering quality software on time and without incurring debt

Work as part of a very small (there's literally half a dozen of us!), world-class team of engineers with a track record of rapidly delivering valuable software to millions of users.

Work closely with Front Row educators, product managers, customer support representatives and account executives to help the business move fast and efficiently through relentless automation.

Why you should join Front Row

Our mission is important to us, and we want it to be important to you as well: millions of students learn math using Front Row every month. Our early results show that students improve twice as much while using Front Row as their peers who aren't using the program.

As an experienced engineer, you will have a massive impact on the company, product, and culture; you’ll have a ton of autonomy and responsibility; you’ll have equity to match the weight of this role. If you're looking for an opportunity to both grow and do meaningful work, surrounded and supported by like-minded professionals, this is THE place for you.

You will be working side by side with well-known, world-class personalities in the Haskell and Functional Programming community whose work you've likely used. Front Row is an active participant in the Open Source community and a contributor to some of the most popular Haskell libraries.

A lot of flexibility: while we all work towards the same goals, you’ll have a lot of autonomy in what you work on. You can work from home up to one day a week, and we have a very flexible untracked vacation days policy

The company and its revenue are growing at a rocketship pace. Front Row is projected to make a massive impact on the world of education in the next few years. It's a once in a lifetime opportunity to join a small organization with great odds of becoming the Next Big Thing.

Must haves
  • You have experience doing full-stack web development. You understand HTTP, networking, databases and the world of distributed systems.
  • You have functional programming experience.
  • Extreme hustle: you’ll be solving a lot of problems you haven’t faced before without the resources and the support of a giant organization. You must thrive on getting things done, whatever the cost.
  • Soft skills: we want you to move into a leadership position, so you must be an expert communicator
Nice-to-haves
  • You have led a software development team before
  • You have familiarity with a functional stack (Haskell / Clojure / Scala / OCaml etc)
  • You understand and have worked all around the stack before, from infrastructure automation all the way to the frontend
  • You're comfortable with the Behavior-Driven Development style
  • You have worked at a very small startup before: you thrive on having a lot of responsibility and little oversight
  • You have worked in small and effective Agile/XP teams before
  • You have delivered working software to large numbers of users before
Benefits
  • Competitive salary
  • Generous equity option grants
  • Medical, Dental, and Vision
  • Catered lunch and dinner 4 times a week
  • Equipment budget
  • (onsite only) One flexible work day per week
  • (onsite only) Working from downtown SF, very accessible location
  • Professional yet casual work environment

Get information on how to apply for this position.

Categories: Offsite Blogs

Instances for (->) a (b :: * -> *)?

haskell-cafe - Tue, 06/28/2016 - 8:36am
Jim Pryor wrote: Then one defines a new type: newtype TwoArrow a b x = TwoArrow{unTA:: a -> b -> x} instance MyClass (TwoArrow a b) where ... Ditto for the composition. Alas, one is stuck with adding the dummy conversions TwoArrow/unTA at various places. The first instance of the curly-braces notation on this list in more than a decade! I think you are trying to build a monad with several pieces of environment. Assuming that just making a record with two different pieces (and making that record the single environment) doesn't work for you, you can find many solutions on Hackage. For example, various extensible effects libraries offer the desired functionality right out of the box. Or, if you really want to define a new class, why not to do something more general, like class Monad m => MonadMReader var r m | var m -> r where ask :: var -> m r to be used like data Var1 = Var1; data Var2 = Var2 do x <- ask Var1 y <- ask Var2 return $ x + y (and implement it, that is, define the insta
Categories: Offsite Discussion

Deadline extended: 21st International Conference on Engineering of Complex Computer Systems (ICECCS 2016), Dubai, United Arab Emirates, November 6-8 2016

General haskell list - Tue, 06/28/2016 - 4:06am
ICECCS is an A-ranked conference by the Computing Research and Education Association of Australasia (CORE) 2014 ranking (http://portal.core.edu.au/conf-ranks/?search=ICECCS&by=all&source=CORE2014&sort=atitle&page=1 ). Please kindly consider submitting papers to the conference, and please encourage your colleagues and students to submit too. --------------------------------------------------------------- 21st International Conference on Engineering of Complex Computer Systems (ICECCS 2016) || November 6-8, Dubai, United Arab Emirates || http://www.aston.ac.uk/eas/about-eas/academic-groups/computer-science/iceccs-2016/ Overview --------------------- Over the past several years, we have seen a rapid rising emphasis on design, implement and manage complex computer systems to help us deal with an increasingly volatile, globalised complex world. These systems are critical for dealing with the Grand Challenge problems we are facing in the 21st century, including health care, urbanization, education, energy, fi
Categories: Incoming News

Proposal for containers: Add 'lookup' function to Data.Set

libraries list - Mon, 06/27/2016 - 10:45pm
WHAT It is proposed to add a ‘lookup' function on 'Set' in the "containers" package. Feedback during the next two weeks is welcome. The function is almost indentical to the 'member' function but, in addition, returns the value stored in the set. WHY The point of this proposal is to facilitate program-wide data sharing. The 'lookup' function gives access to a pointer to an object already stored in a Set and equal to a given argument. The 'lookup' function is a natural extension to the current 'lookupLT', 'lookupGT', 'lookupLE' and 'lookupGE' functions, with obvious semantics. Example use case: In a parser, the memory footprint can be reduced by collapsing all equal strings to a single instance of each string. To achieve this, one needs a way to get a previously seen string (internally, a pointer) equal to a newly parsed string. Amazingly, this is very difficult with the current "containers" library interface. One current option is to use a Map instead, e.g., 'Map String String' which stores twice as
Categories: Offsite Discussion

Help to choose a library name

haskell-cafe - Mon, 06/27/2016 - 8:38am
Hi community, I'm writing a reactive programming library (yet another). I need it for the game Nomyx, but couldn't find the features I wanted from the existing libraries. Now the library is called Nomyx-Events. But I'd like to find a cool name that is not related to Nomyx... Some propositions: - Nomev - Noa Some French names: - Imprevu (French for unforseen, like in "unforseen event"). - Rendez-vous - Dejavu I like a lot Imprevu. How does it sound to English native speakers? Thanks _______________________________________________ Haskell-Cafe mailing list To (un)subscribe, modify options or view archives go to: http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe Only members subscribed via the mailman list are allowed to post.
Categories: Offsite Discussion

The Haddock documentation is not showing up on the Hackage

General haskell list - Mon, 06/27/2016 - 5:28am
Hi, I uploaded a package named enchant on the Hackage last week, but the Haddock documentation is not showing up yet. The Status field says "Docs pending" and "Build status unknown". https://hackage.haskell.org/package/enchant-0.1.0.0 enchant uses c2hs as a build tool to generate the FFI binding and requires libenchant-dev to be installed on the machine. I wonder how I can tell these build requirements to the Hackage server. Regards, Kwang Yul Seo _______________________________________________ Haskell mailing list Haskell< at >haskell.org http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell
Categories: Incoming News

Philip Wadler: Brexit implies Techxit?

Planet Haskell - Mon, 06/27/2016 - 4:48am
In the wake of the EU referendum, there appears to be considerable information about its consequences that many might wish to have seen before the vote. Some of this concerns the negative impact of Brexit on technology firms. Among others, the BBC has a summary. I was particularly struck by one comment in the story, made by start-up mentor Theo Priestley (pictured above). And Mr Priestley thinks that in the event of a Scottish independence referendum that leads to reunification with the EU, it's possible some start-ups could move north of the border, perhaps to rekindle "Silicon Glen" - a 1980s attempt to compete in the semiconductor industry.
Categories: Offsite Blogs

[ANN] vty-5.7 released

General haskell list - Sun, 06/26/2016 - 9:53pm
Hi, On the heels of version Vty 5.6, version 5.7 is now on Hackage. This release adds support for changing the behavior of mouse and paste modes both at Vty startup time and during application execution. In addition to no longer being on by default, they can be enabled or disabled at any time and Vty can be queried to tell whether they are supported. See the CHANGELOG for details. http://hackage.haskell.org/package/vty-5.7 https://github.com/coreyoconnor/vty/releases/tag/5.7 Enjoy!
Categories: Incoming News

Bryn Keller: Python Class Properties

Planet Haskell - Sun, 06/26/2016 - 6:00pm

Class properties are a feature that people coming to Python from other object-oriented languages expect, and expect to be easy. Unfortunately, it’s not. In many cases, you don’t actually want class properties in Python - after all, you can have first class module-level functions as well, you might very well be happier with one of those.

I sometimes see people claim that you can’t do class properties at all in Python, and that’s not right either. It can be done, and it’s not too bad. Read on!

I’m going to assume here that you already know what class (sometimes called “static”) properties are in languages like Java, and that you’re somewhat familiar with Python metaclasses.

To make this feature work, we have to use a metaclass. In this example, we’ll suppose that we want to be able to access a list of all the instances of our class, as well as reference to the most recently created instance. It’s artificial, but it gives us a reason to have both read-only and read-write properties. We define a metaclass, which is again a class that extends type.

class Extent(type):
    @property
    def extent(self):
        ext = getattr(self, '_extent', None)
        if ext is None:
            self._extent = []
            ext = self._extent
        return ext

    @property
    def last_instance(self):
        return getattr(self, '_last_instance', None)

    @last_instance.setter
    def last_instance(self, value):
        self._last_instance = value

Please note that if you want to do something like this for real, you may well need to protect access to these shared class properties with synchronization tools like RLock and friends to prevent different threads from overwriting each others’ work willy-nilly.

Next we create a class that uses that metaclass. The syntax is different in Python 2.7, so you may need to adjust if you’re working in an older version.

class Thing(object, metaclass=Extent):
    def __init__(self):
        self.__class__.extent.append(self)
        self.__class__.last_instance = self

Another note for real code: these references (the extent and the last_instance) will keep your object from being garbage collected, so if you actually want to keep extents for your classes, you should do so using something like weakref.

Now we can try out our new class:

>>> t1 = Thing()
>>> t2 = Thing()
>>> Thing.extent
[<__main__.Thing object at 0x101c5d080>, <__main__.Thing object at 0x101c5d2b0>]
>>> Thing.last_instance
<__main__.Thing object at 0x101c5d2b0>
>>>

Great, we have what we wanted! There are a couple of things to remember, though:

  • Class properties are inherited!
  • Class properties are not accessible via instances, only via classes.

Let’s see an example that demonstrates both. Suppose we add a new subclass of Thing called SuperThing:

>>> class SuperThing(Thing):
...     @property
...     def extent(self):
...         return self.__class__.extent
...
>>> s = SuperThing()

See how we created a normal extent property that just reads from the class property? So we can now do this:

>>> s.extent
[<__main__.Thing object at 0x101c5d080>, <__main__.Thing object at 0x101c5d2b0>, <__main__.SuperThing object at 0x101c5d2e8>]

Whereas if we were to try that with one of the original Things, it wouldn’t work:

>>> t1.extent
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: 'Thing' object has no attribute 'extent'

We can of course still access either one via classes:

>>> t1.__class__.extent
[<__main__.Thing object at 0x101c5d080>, <__main__.Thing object at 0x101c5d2b0>, <__main__.SuperThing object at 0x101c5d2e8>]
>>> s.__class__.extent
[<__main__.Thing object at 0x101c5d080>, <__main__.Thing object at 0x101c5d2b0>, <__main__.SuperThing object at 0x101c5d2e8>]
>>>

Also note that the extent for each of these classes is the same, which shows that class properties are inherited.

Did you spot the bug in Thing? It only manifests when we have subclasses like SuperThing. We inherited the __init__ from Thing, which adds each new instance to the extent, and sets last_instance. In this case, self.__class__.extent was already initialized, on Thing, and so we added our SuperThing to the existing list. For last_instance, however, we assigned directly, rather than first reading and appending, as we did with the list property, and so SuperThing.last_instance will be our s, and Thing.last_instance will be our t2. Tread carefully, it’s easy to make a mistake with this kind of thing!

Hopefully this has been a (relatively) simple example of how to build your own class properties, with or without setters.

Categories: Offsite Blogs

Using streams to clarify (?) the signature ofData.Text.replace

haskell-cafe - Sun, 06/26/2016 - 3:39pm
In the "text" package, the signature of Data.Text.replace <http://hackage.haskell.org/package/text-1.2.2.1/docs/Data-Text.html#v:replace> always sends me looking into the haddocks: replace :: Text -> Text -> Text -> Text Which argument is the text to replace, which is the replacement and which is the text that should be scanned? Imagine a generalized version of replace that 1) works on streams, and 2) allows replacing a sequence of texts (like, say, chapter headers) instead of replacing the same text repeatedly. It could have the following signature: replace' :: Stream (Stream (Of Text) m) m () Do you find easy to intuit, just by looking at that signature, which is the function of each argument? _______________________________________________ Haskell-Cafe mailing list To (un)subscribe, modify options or view archives go to: http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe Only members subscribed via the mailman list are allowed to post.
Categories: Offsite Discussion

Dominic Steinitz: Ecology, Dynamical Systems and Inference via PMMH

Planet Haskell - Sun, 06/26/2016 - 6:53am
Introduction

In the 1920s, Lotka (1909) and Volterra (1926) developed a model of a very simple predator-prey ecosystem.

Although simple, it turns out that the Canadian lynx and snowshoe hare are well represented by such a model. Furthermore, the Hudson Bay Company kept records of how many pelts of each species were trapped for almost a century, giving a good proxy of the population of each species.

We can capture the fact that we do not have a complete model by describing our state of ignorance about the parameters. In order to keep this as simple as possible let us assume that log parameters undergo Brownian motion. That is, we know the parameters will jiggle around and the further into the future we look the less certain we are about what values they will have taken. By making the log parameters undergo Brownian motion, we can also capture our modelling assumption that birth, death and predation rates are always positive. A similar approach is taken in Dureau, Kalogeropoulos, and Baguelin (2013) where the (log) parameters of an epidemiological model are taken to be Ornstein-Uhlenbeck processes (which is biologically more plausible although adds to the complexity of the model, something we wish to avoid in an example such as this).

Andrieu, Doucet, and Holenstein (2010) propose a method to estimate the parameters of such models (Particle Marginal Metropolis Hastings aka PMMH) and the domain specific probabilistic language LibBi (Murray (n.d.)) can be used to apply this (and other inference methods).

For the sake of simplicity, in this blog post, we only model one parameter as being unknown and undergoing Brownian motion. A future blog post will consider more sophisticated scenarios.

A Dynamical System Aside

The above dynamical system is structurally unstable (more on this in a future post), a possible indication that it should not be considered as a good model of predator–prey interaction. Let us modify this to include carrying capacities for the populations of both species.

Data Generation with LibBi

Let’s generate some data using LibBi.

// Generate data assuming a fixed growth rate for hares rather than
// e.g. a growth rate that undergoes Brownian motion.
model PP {
  const h = 0.1;              // time step
  const delta_abs = 1.0e-3;   // absolute error tolerance
  const delta_rel = 1.0e-6;   // relative error tolerance

  const a  = 5.0e-1           // Hare growth rate
  const k1 = 2.0e2            // Hare carrying capacity
  const b  = 2.0e-2           // Hare death rate per lynx
  const d  = 4.0e-1           // Lynx death rate
  const k2 = 2.0e1            // Lynx carrying capacity
  const c  = 4.0e-3           // Lynx birth rate per hare

  state P, Z                  // Hares and lynxes
  state ln_alpha              // Hare growth rate - we express it in log form for
                              // consistency with the inference model
  obs P_obs                   // Observations of hares

  sub initial {
    P ~ log_normal(log(100.0), 0.2)
    Z ~ log_normal(log(50.0), 0.1)
  }

  sub transition(delta = h) {
    ode(h = h, atoler = delta_abs, rtoler = delta_rel, alg = 'RK4(3)') {
      dP/dt = a * P * (1 - P / k1) - b * P * Z
      dZ/dt = -d * Z * (1 + Z / k2) + c * P * Z
    }
  }

  sub observation {
    P_obs ~ log_normal(log(P), 0.1)
  }
}

We can look at phase space starting with different populations and see they all converge to the same fixed point.

Data Generation with Haskell

Since at some point in the future, I plan to produce Haskell versions of the methods given in Andrieu, Doucet, and Holenstein (2010), let’s generate the data using Haskell.

> {-# OPTIONS_GHC -Wall #-}
> {-# OPTIONS_GHC -fno-warn-name-shadowing #-}

> module LotkaVolterra (
>     solLv
>   , solPp
>   , h0
>   , l0
>   , baz
>   , logBM
>   , eulerEx
>   ) where

> import Numeric.GSL.ODE
> import Numeric.LinearAlgebra

> import Data.Random.Source.PureMT
> import Data.Random hiding ( gamma )
> import Control.Monad.State

Here’s the unstable model.

> lvOde :: Double ->
>          Double ->
>          Double ->
>          Double ->
>          Double ->
>          [Double] ->
>          [Double]
> lvOde rho1 c1 rho2 c2 _t [h, l] =
>   [
>     rho1 * h - c1 * h * l
>   , c2 * h * l - rho2 * l
>   ]
> lvOde _rho1 _c1 _rho2 _c2 _t vars =
>   error $ "lvOde called with: " ++ show (length vars) ++ " variable"

> rho1, c1, rho2, c2 :: Double
> rho1 = 0.5
> c1 = 0.02
> rho2 = 0.4
> c2 = 0.004

> deltaT :: Double
> deltaT = 0.1

> solLv :: Matrix Double
> solLv = odeSolve (lvOde rho1 c1 rho2 c2)
>                  [50.0, 50.0]
>                  (fromList [0.0, deltaT .. 50])

And here’s the stable model.

> ppOde :: Double ->
>          Double ->
>          Double ->
>          Double ->
>          Double ->
>          Double ->
>          Double ->
>          [Double] ->
>          [Double]
> ppOde a k1 b d k2 c _t [p, z] =
>   [
>     a * p * (1 - p / k1) - b * p * z
>   , -d * z * (1 + z / k2) + c * p * z
>   ]
> ppOde _a _k1 _b _d _k2 _c _t vars =
>   error $ "ppOde called with: " ++ show (length vars) ++ " variable"

> a, k1, b, d, k2, c :: Double
> a = 0.5
> k1 = 200.0
> b = 0.02
> d = 0.4
> k2 = 50.0
> c = 0.004

> solPp :: Double -> Double -> Matrix Double
> solPp x y = odeSolve (ppOde a k1 b d k2 c)
>                      [x, y]
>                      (fromList [0.0, deltaT .. 50])

> gamma, alpha, beta :: Double
> gamma = d / a
> alpha = a / (c * k1)
> beta  = d / (a * k2)

> fp :: (Double, Double)
> fp = ((gamma + beta) / (1 + alpha * beta), (1 - gamma * alpha) / (1 + alpha * beta))

> h0, l0 :: Double
> h0 = a * fst fp / c
> l0 = a * snd fp / b

> foo, bar :: Matrix R
> foo = matrix 2 [a / k1, b, c, negate d / k2]
> bar = matrix 1 [a, d]

> baz :: Maybe (Matrix R)
> baz = linearSolve foo bar

This gives a stable fixed point of

ghci> baz
Just (2><1)
 [ 120.00000000000001
 ,               10.0 ]

Here’s an example of convergence to that fixed point in phase space.

The Stochastic Model

Let us now assume that the Hare growth parameter undergoes Brownian motion so that the further into the future we go, the less certain we are about it. In order to ensure that this parameter remains positive, let’s model the log of it to be Brownian motion.

$$
\begin{aligned}
\frac{\mathrm{d}P}{\mathrm{d}t} &= \alpha P \Big(1 - \frac{P}{k_1}\Big) - b P Z \\
\frac{\mathrm{d}Z}{\mathrm{d}t} &= -d Z \Big(1 + \frac{Z}{k_2}\Big) + c P Z \\
\mathrm{d}\alpha_t &= \sigma \alpha_t \, \mathrm{d}W_t
\end{aligned}
$$

where the final equation is a stochastic differential equation with $W_t$ being a Wiener process.

By Itô we have

$$
\mathrm{d}(\log \alpha_t) = -\frac{\sigma^2}{2} \, \mathrm{d}t + \sigma \, \mathrm{d}W_t
$$

We can use this to generate paths for $\alpha_t$:

$$
\alpha_{t + \Delta t} = \alpha_t \exp\Big(\sigma \sqrt{\Delta t} \, \xi - \frac{\sigma^2}{2} \Delta t\Big)
$$

where $\xi \sim N(0, 1)$.

> oneStepLogBM :: MonadRandom m => Double -> Double -> Double -> m Double
> oneStepLogBM deltaT sigma rhoPrev = do
>   x <- sample $ rvar StdNormal
>   return $ rhoPrev * exp(sigma * (sqrt deltaT) * x - 0.5 * sigma * sigma * deltaT)

> iterateM :: Monad m => (a -> m a) -> m a -> Int -> m [a]
> iterateM f mx n = sequence . take n . iterate (>>= f) $ mx

> logBMM :: MonadRandom m => Double -> Double -> Int -> Int -> m [Double]
> logBMM initRho sigma n m =
>   iterateM (oneStepLogBM (recip $ fromIntegral n) sigma) (return initRho) (n * m)

> logBM :: Double -> Double -> Int -> Int -> Int -> [Double]
> logBM initRho sigma n m seed =
>   evalState (logBMM initRho sigma n m) (pureMT $ fromIntegral seed)

We can see the further we go into the future the less certain we are about the value of the parameter.

Using this we can simulate the whole dynamical system which is now a stochastic process.

> f1, f2 :: Double -> Double -> Double ->
>          Double -> Double ->
>          Double
> f1 a k1 b p z = a * p * (1 - p / k1) - b * p * z
> f2 d k2 c p z = -d * z * (1 + z / k2) + c * p * z

> oneStepEuler :: MonadRandom m =>
>                 Double ->
>                 Double ->
>                 Double -> Double ->
>                 Double -> Double -> Double ->
>                 (Double, Double, Double) ->
>                 m (Double, Double, Double)
> oneStepEuler deltaT sigma k1 b d k2 c (rho1Prev, pPrev, zPrev) = do
>     let pNew = pPrev + deltaT * f1 (exp rho1Prev) k1 b pPrev zPrev
>     let zNew = zPrev + deltaT * f2 d k2 c pPrev zPrev
>     rho1New <- oneStepLogBM deltaT sigma rho1Prev
>     return (rho1New, pNew, zNew)

> euler :: MonadRandom m =>
>          (Double, Double, Double) ->
>          Double ->
>          Double -> Double ->
>          Double -> Double -> Double ->
>          Int -> Int ->
>          m [(Double, Double, Double)]
> euler stateInit sigma k1 b d k2 c n m =
>   iterateM (oneStepEuler (recip $ fromIntegral n) sigma k1 b d k2 c)
>            (return stateInit)
>            (n * m)

> eulerEx :: (Double, Double, Double) ->
>            Double -> Int -> Int -> Int ->
>            [(Double, Double, Double)]
> eulerEx stateInit sigma n m seed =
>   evalState (euler stateInit sigma k1 b d k2 c n m) (pureMT $ fromIntegral seed)
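For concreteness, a single simulated path can be generated in ghci like this (starting near the fixed point computed above, with an arbitrary volatility and seed); the path has n * m = 500 points:

ghci> let samplePath = eulerEx (log 0.5, 120.0, 10.0) 0.05 100 5 42
ghci> length samplePath
500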

We see that the populations become noisier the further into the future we go.

Notice that the second order effects of the system are now to some extent captured by the fact that the growth rate of Hares can drift. In our simulation, this is demonstrated by our decreasing lack of knowledge the further we look into the future.

Inference

Now let us infer the growth rate using PMMH. Here’s the model expressed in LibBi.

// Infer growth rate for hares
model PP {
  const h = 0.1;              // time step
  const delta_abs = 1.0e-3;   // absolute error tolerance
  const delta_rel = 1.0e-6;   // relative error tolerance

  const a  = 5.0e-1           // Hare growth rate - superfluous for inference
                              // but a reminder of what we should expect
  const k1 = 2.0e2            // Hare carrying capacity
  const b  = 2.0e-2           // Hare death rate per lynx
  const d  = 4.0e-1           // Lynx death rate
  const k2 = 2.0e1            // Lynx carrying capacity
  const c  = 4.0e-3           // Lynx birth rate per hare

  state P, Z                  // Hares and lynxes
  state ln_alpha              // Hare growth rate - we express it in log form for
                              // consistency with the inference model
  obs P_obs                   // Observations of hares

  param mu, sigma             // Mean and standard deviation of hare growth rate
  noise w                     // Noise

  sub parameter {
    mu ~ uniform(0.0, 1.0)
    sigma ~ uniform(0.0, 0.5)
  }

  sub proposal_parameter {
    mu ~ truncated_gaussian(mu, 0.02, 0.0, 1.0);
    sigma ~ truncated_gaussian(sigma, 0.01, 0.0, 0.5);
  }

  sub initial {
    P ~ log_normal(log(100.0), 0.2)
    Z ~ log_normal(log(50.0), 0.1)
    ln_alpha ~ gaussian(log(mu), sigma)
  }

  sub transition(delta = h) {
    w ~ normal(0.0, sqrt(h));
    ode(h = h, atoler = delta_abs, rtoler = delta_rel, alg = 'RK4(3)') {
      dP/dt = exp(ln_alpha) * P * (1 - P / k1) - b * P * Z
      dZ/dt = -d * Z * (1 + Z / k2) + c * P * Z
      dln_alpha/dt = -sigma * sigma / 2 - sigma * w / h
    }
  }

  sub observation {
    P_obs ~ log_normal(log(P), 0.1)
  }
}

Let’s look at the posteriors of the hyper-parameters for the Hare growth parameter.

The estimate for $\mu$ is pretty decent. For our generated data the growth rate was held fixed, and given that our observations are quite noisy, maybe the estimate for $\sigma$ is not too bad either.

Appendix: The R Driving Code

All code, including the R below, can be downloaded from GitHub, but make sure you use the straight-libbi branch and not master.

install.packages("devtools")
library(devtools)
install_github("sbfnk/RBi", ref = "master")
install_github("sbfnk/RBi.helpers", ref = "master")

rm(list = ls(all.names = TRUE))
unlink(".RData")

library('RBi')
try(detach(package:RBi, unload = TRUE), silent = TRUE)
library(RBi, quietly = TRUE)

library('RBi.helpers')

library('ggplot2', quietly = TRUE)
library('gridExtra', quietly = TRUE)

endTime <- 50

PP <- bi_model("PP.bi")

synthetic_dataset_PP <- bi_generate_dataset(endtime = endTime,
                                            model = PP,
                                            seed = "42",
                                            verbose = TRUE,
                                            add_options = list(noutputs = 500))

rdata_PP <- bi_read(synthetic_dataset_PP)

df <- data.frame(rdata_PP$P$nr,
                 rdata_PP$P$value,
                 rdata_PP$Z$value,
                 rdata_PP$P_obs$value)

ggplot(df, aes(rdata_PP$P$nr, y = Population, color = variable), size = 0.1) +
    geom_line(aes(y = rdata_PP$P$value, col = "Hare"), size = 0.1) +
    geom_line(aes(y = rdata_PP$Z$value, col = "Lynx"), size = 0.1) +
    geom_point(aes(y = rdata_PP$P_obs$value, col = "Observations"), size = 0.1) +
    theme(legend.position = "none") +
    ggtitle("Example Data") +
    xlab("Days") +
    theme(axis.text = element_text(size = 4),
          axis.title = element_text(size = 6, face = "bold")) +
    theme(plot.title = element_text(size = 10))

ggsave(filename = "diagrams/LVdata.png", width = 4, height = 3)

synthetic_dataset_PP1 <- bi_generate_dataset(endtime = endTime,
                                             model = PP,
                                             init = list(P = 100, Z = 50),
                                             seed = "42",
                                             verbose = TRUE,
                                             add_options = list(noutputs = 500))

rdata_PP1 <- bi_read(synthetic_dataset_PP1)

synthetic_dataset_PP2 <- bi_generate_dataset(endtime = endTime,
                                             model = PP,
                                             init = list(P = 150, Z = 25),
                                             seed = "42",
                                             verbose = TRUE,
                                             add_options = list(noutputs = 500))

rdata_PP2 <- bi_read(synthetic_dataset_PP2)

df1 <- data.frame(hare = rdata_PP$P$value,
                  lynx = rdata_PP$Z$value,
                  hare1 = rdata_PP1$P$value,
                  lynx1 = rdata_PP1$Z$value,
                  hare2 = rdata_PP2$P$value,
                  lynx2 = rdata_PP2$Z$value)

ggplot(df1) +
    geom_path(aes(x = df1$hare,  y = df1$lynx,  col = "0"), size = 0.1) +
    geom_path(aes(x = df1$hare1, y = df1$lynx1, col = "1"), size = 0.1) +
    geom_path(aes(x = df1$hare2, y = df1$lynx2, col = "2"), size = 0.1) +
    theme(legend.position = "none") +
    ggtitle("Phase Space") +
    xlab("Hare") +
    ylab("Lynx") +
    theme(axis.text = element_text(size = 4),
          axis.title = element_text(size = 6, face = "bold")) +
    theme(plot.title = element_text(size = 10))

ggsave(filename = "diagrams/PPviaLibBi.png", width = 4, height = 3)

PPInfer <- bi_model("PPInfer.bi")

bi_object_PP <- libbi(client = "sample", model = PPInfer, obs = synthetic_dataset_PP)

bi_object_PP$run(add_options = list("end-time" = endTime,
                                    noutputs = endTime,
                                    nsamples = 4000,
                                    nparticles = 128,
                                    seed = 42,
                                    nthreads = 1),
                 ## verbose = TRUE,
                 stdoutput_file_name = tempfile(pattern = "pmmhoutput", fileext = ".txt"))

bi_file_summary(bi_object_PP$result$output_file_name)

mu <- bi_read(bi_object_PP, "mu")$value
g1 <- qplot(x = mu[2001:4000], y = ..density.., geom = "histogram") + xlab(expression(mu))
sigma <- bi_read(bi_object_PP, "sigma")$value
g2 <- qplot(x = sigma[2001:4000], y = ..density.., geom = "histogram") + xlab(expression(sigma))
g3 <- grid.arrange(g1, g2)
ggsave(plot = g3, filename = "diagrams/LvPosterior.png", width = 4, height = 3)

df2 <- data.frame(hareActs = rdata_PP$P$value,
                  hareObs = rdata_PP$P_obs$value)

ggplot(df, aes(rdata_PP$P$nr, y = value, color = variable)) +
    geom_line(aes(y = rdata_PP$P$value, col = "Phyto")) +
    geom_line(aes(y = rdata_PP$Z$value, col = "Zoo")) +
    geom_point(aes(y = rdata_PP$P_obs$value, col = "Phyto Obs"))

ln_alpha <- bi_read(bi_object_PP, "ln_alpha")$value

P <- matrix(bi_read(bi_object_PP, "P")$value, nrow = 51, byrow = TRUE)
Z <- matrix(bi_read(bi_object_PP, "Z")$value, nrow = 51, byrow = TRUE)

data50 <- bi_generate_dataset(endtime = endTime,
                              model = PP,
                              seed = "42",
                              verbose = TRUE,
                              add_options = list(noutputs = 50))

rdata50 <- bi_read(data50)

df3 <- data.frame(days = c(1:51),
                  hares = rowMeans(P),
                  lynxes = rowMeans(Z),
                  actHs = rdata50$P$value,
                  actLs = rdata50$Z$value)

ggplot(df3) +
    geom_line(aes(x = days, y = hares,  col = "Est Phyto")) +
    geom_line(aes(x = days, y = lynxes, col = "Est Zoo")) +
    geom_line(aes(x = days, y = actHs,  col = "Act Phyto")) +
    geom_line(aes(x = days, y = actLs,  col = "Act Zoo"))

Bibliography

Andrieu, Christophe, Arnaud Doucet, and Roman Holenstein. 2010. “Particle Markov chain Monte Carlo methods.” Journal of the Royal Statistical Society. Series B: Statistical Methodology 72 (3): 269–342. doi:10.1111/j.1467-9868.2009.00736.x.

Dureau, Joseph, Konstantinos Kalogeropoulos, and Marc Baguelin. 2013. “Capturing the time-varying drivers of an epidemic using stochastic dynamical systems.” Biostatistics (Oxford, England) 14 (3): 541–55. doi:10.1093/biostatistics/kxs052.

Lotka, Alfred J. 1909. “Contribution to the Theory of Periodic Reactions.” The Journal of Physical Chemistry 14 (3): 271–74. doi:10.1021/j150111a004.

Murray, Lawrence M. n.d. “Bayesian State-Space Modelling on High-Performance Hardware Using LibBi.”

Volterra, Vito. 1926. “Variazioni e fluttuazioni del numero d’individui in specie animali conviventi.” Memorie Della R. Accademia Dei Lincei 6 (2): 31–113. http://www.liberliber.it/biblioteca/v/volterra/variazioni_e_fluttuazioni/pdf/volterra_variazioni_e_fluttuazioni.pdf.


Categories: Offsite Blogs

[ANN] vty-5.6 released

General haskell list - Sat, 06/25/2016 - 9:59pm
Hi,

I'm pleased to announce the release of version 5.6 of the Vty library, a terminal user interface programming library. This version of the library adds some great new features:

* Support for mouse events in most terminals: those that implement mouse control sequences as described at http://invisible-island.net/xterm/ctlseqs/ctlseqs.html#h2-Mouse-Tracking
* Support for bracketed paste mode, a special mode for receiving operating system clipboard pastes without treating paste contents as normal input: http://cirw.in/blog/bracketed-paste

Vty 5.6 can be found on Hackage at http://hackage.haskell.org/package/vty-5.6 and on GitHub at https://github.com/coreyoconnor/vty

Enjoy!
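A minimal sketch of how the new modes might be switched on and their events consumed, assuming the vty 5.x names as I recall them (standardIOConfig, mkVty, setMode on the output interface, nextEvent, and the EvMouseDown/EvPaste constructors); the release's Hackage docs are the authoritative reference:

import Graphics.Vty

main :: IO ()
main = do
  cfg <- standardIOConfig
  vty <- mkVty cfg
  let out = outputIface vty
  setMode out Mouse True            -- ask the terminal to report mouse events
  setMode out BracketedPaste True   -- ask the terminal for bracketed pastes
  update vty (picForImage (string defAttr "click, paste, or press q to quit"))
  let loop = do
        ev <- nextEvent vty
        case ev of
          EvKey (KChar 'q') [] -> return ()
          EvMouseDown col row _ _ -> do
            update vty (picForImage (string defAttr ("mouse down at " ++ show (col, row))))
            loop
          EvPaste bytes -> do
            update vty (picForImage (string defAttr ("pasted " ++ show bytes)))
            loop
          _ -> loop
  loop
  shutdown vty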
Categories: Incoming News

Munich Haskell Meeting, 2016-06-29 @ 19:30, Augustiner-Keller

haskell-cafe - Sat, 06/25/2016 - 6:29pm
Dear all,

Next week, our monthly Munich Haskell Meeting will take place again on Wednesday, June 29 at Augustiner-Keller, Arnulfstr., at 19:30. **Please note the different day and location!**

For details see here: http://muenchen.haskell.bayern/dates.html (Yes, we got a new domain!)

If you plan to join, please add yourself to this dudle so we can reserve enough seats! It is OK to add yourself to the dudle anonymously or pseudonymously. https://dudle.inf.tu-dresden.de/haskell-munich-jun-2016/

Everybody is welcome!

cu,
Categories: Offsite Discussion

José Pedro Magalhães: Marie Curie individual fellowship application text

Planet Haskell - Sat, 06/25/2016 - 5:22am
Back in 2014 I applied for a Marie Curie fellowship. This was towards the end of my postdoc at Oxford, and then I got my current position at Standard Chartered before I heard back from the European Commission on the results of the Marie Curie. It was accepted with a score of 96%, which is something I'm very proud of, and also very thankful to everyone who helped me prepare the submission.
However, it is now clear that I will not be taking that position, as I am settled in London and at Standard Chartered. I still quite like the proposal, though, and, when writing it, I remember having wanted to see examples of successful Marie Curie proposals to have an idea of what they would look like. As such, I'm making the text of my own Marie Curie fellowship application available online. I hope this can help others to write successful applications, and maybe some of its ideas can be taken on by other researchers. Feel free to adapt any of the ideas in the proposal (but please give credit when it is due, and remember that the European Commission uses plagiarism detection software). It's available on my website, and linked below. I made the LaTeX template for the application available before.

José Pedro Magalhães. Models of Structure in Music (MoStMusic). Marie Sklodowska-Curie Individual Fellowship application, 2014.
Categories: Offsite Blogs

User Requirements Survey for CT Software (Beta)

haskell-cafe - Fri, 06/24/2016 - 4:54pm
Hello,

My name is Joie Murphy and I am a Summer Research Student at the US National Institute of Standards and Technology (NIST), working with Drs. Spencer Breiner and Eswaran Subrahmanian.

We are currently gathering user requirements for category theoretic software to be developed by or with NIST in the future. This questionnaire will give us insight about your past or present use of CT software and your ideal uses for such software. Providing us with the information on how you would like to use this type of software will help us to make the right design choices in development.

The survey is available on Google Forms: http://goo.gl/forms/vfOgR26dHnKynHU23

If you have any colleagues who might be willing to fill out this survey, you can forward our message or you can provide us with their contact information at the end of the survey. If you have any questions or concerns, please feel free to contact us by replying to this email.

We would like to thank you for your participation in this initial step of the
Categories: Offsite Discussion

Backtracking when building a bytestring

haskell-cafe - Fri, 06/24/2016 - 10:23am
Hi,

I'm working on a network protocol that involves sending frames of data prefixed by their length and a checksum. The only realistic way to work out the length of a frame is to actually write the bytes out, although the length and checksum take up a fixed number of bytes. If I were working in C I'd be filling a buffer, leaving space for the length/checksum, and then go back and fill them in at the end. So this is _almost_ what ByteString.Builder does, except for the backtracking bit.

Does anyone know if there's an implementation of a thing that's a bit like ByteString.Builder but also allows for this kind of backtracking? Ideally I want to be able to batch up a number of frames into a single buffer, and deal gracefully with overflowing the buffer by allocating some more, much as Builder does. I can't think of a terribly good way of doing this using the existing Builder implementation as it stands, although it looks quite easy to modify it to add this functionality so I might just do that locally if need
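For comparison, the straightforward alternative that gives up the single-buffer backtracking the poster is asking for is to render each frame body to a lazy ByteString first and then prepend the fixed-size header. A minimal sketch, assuming a hypothetical frame helper, a 4-byte big-endian length field, and a placeholder byte-sum checksum standing in for whatever the real protocol specifies:

import           Data.ByteString.Builder
import qualified Data.ByteString.Lazy as BL
import           Data.Monoid ((<>))
import           Data.Word (Word32)

-- Placeholder checksum (a plain byte sum); a real protocol would
-- substitute its own algorithm here.
checksum :: BL.ByteString -> Word32
checksum = BL.foldl' (\acc b -> acc + fromIntegral b) 0

-- Render the frame body first, then prepend the fixed-size header:
-- a 4-byte big-endian length followed by a 4-byte checksum.
frame :: Builder -> Builder
frame body =
  let bytes = toLazyByteString body
  in  word32BE (fromIntegral (BL.length bytes))
   <> word32BE (checksum bytes)
   <> lazyByteString bytes

main :: IO ()
main = BL.putStr (toLazyByteString (frame (stringUtf8 "hello")))

This forces each frame into memory before the header is written, which is exactly the extra copy a backtracking Builder would avoid, but it batches naturally (frames are just Builders that can be concatenated) and needs no changes to the bytestring package.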
Categories: Offsite Discussion