# News aggregator

### Writing JavaScript games in Haskell

### Haskell development role in Strats at Standard Chartered | Control.Monad.Writer

### Functional Programming at Bloomberg

### Don Stewart (dons): Haskell development role in Strats at Standard Chartered

The Strats team at Standard Chartered has an open position for a typed functional programming developer to join the team in London.

This is a “front office” finance role – meaning you will work on the trading floor, directly with traders, building software to automate their work and improve their efficiency. The role is highly development focused, and you will use Haskell for almost all tasks: data analysis, market data publishing, database access, web services, desktop GUIs, large parallel tasks, quantitative models, solvers, everything. This is a fast paced role – code you write today will be deployed within hours to hundreds of users and has to work.

You will join an expert team in London, and demonstrable experience in typed FP (Haskell, OCaml, F# etc) is required. We have around 2.5 million lines of Haskell, and our own Haskell compiler. In this context we look for skill and taste in typed functional programming to capture and abstract over complex, messy systems.

Experience writing typed APIs to external systems such as databases, web services, pub/sub platforms is very desirable. We like working code, so if you have Hackage or github libraries, we want to see them. We also like StackOverflow answers, blog posts, academic papers, or other arenas where you can show broad FP ability. A PhD in computer science is a strong advantage.

The role requires physical presence on the trading floor in London. Remote work isn’t an option. Ideally you have some project and client management skills — you will talk to users, understand their problems, and then implement and deliver what they really need. No financial background is required.

More info about our development process is in the 2012 PADL keynote, and a 2013 HaskellCast interview.

If this sounds exciting to you, please send your resume to me – donald.stewart <at> sc.com.

Role posted 2015-03-16

Tagged: jobs, london

### Graphics and FPL: Getting Into Netwire

### [libraries proposal] Generalizing map?

### Dimitri Sabadie: Getting Into Netwire

Nowadays, there is a cool concept out there in the *Functional Programming* world which is called *FRP*. It stands for *Functional Reactive Programming* and is a pretty decent way to make *event-driven* programs.

The problem with FRP is that, beside Wikipedia, Haskell.org and a few other resources, like Conal Elliott’s papers, we lack learning materials. Getting into FRP is really not trivial and because of the lack of help, you’ll need to be brave to get your feet wet.

Because I found it **hard** to learn from scratch, and because I think it’s a good thing to pass knowledge on, I decided to write a few articles about it so that people can learn via an easier path.

I’ll be talking about netwire, which is not the *de-facto* library to use in Haskell, because, eh… we don’t have any. However, netwire exposes a lot of very interesting concepts and helped me to understand more general abstractions. I hope it’ll help you as well. :)

In a traditional event-driven codebase, you’d find constructions such as event polling (explicitly retrieving the last occurring events), callbacks and *event handlers*. GLFW is a very famous example of callback use in event-driven programming. Functions such as glfwSetWindowCloseCallback require you to pass a callback that will be called when the event occurs. While that approach seems nice, it’s actually error-prone and an ill-formed design:

- you eventually end up with spaghetti code
- debugging callbacks is a true nightmare as the codebase grows
- because of the first point, the code doesn’t compose – or in very minimal ways – and is almost impossible to test against
- you introduce side-effects that might introduce nasty bugs difficult to figure out
- debugging is like hitting your head on a tree log

However, it’s not black or white. Callbacks are mandatory. They’re useful, and we’ll see that later on.

#### What FRP truly is

FRP is a new way to look at event-driven programming. Instead of representing reactions through callbacks, we consume events over time. Instead of building a callback to pass to a function such as setPushButtonCallback, we consume and transform events over time. The main idea of FRP can be summed up with the following concepts:

- *behaviors*: a behavior is a value that reacts to time
- *events*: events are just values that have occurrences in time
- *switching*: the act of changing behavior

According to Wikipedia, *a behavior is the range of actions and mannerisms made by individuals, organisms, systems, or artificial entities in conjunction with themselves or their environment, which includes the other systems or organisms around as well as the (inanimate) physical environment*. If we try to apply that to a simple version of FRP that only takes into account the time as external stimulus, a behavior is any kind of value that consumes time. What’s that? Well…

A behavior is a simple function from time (Double) to a given value. Let’s take an example. Imagine you want to represent a cube rotating around the *X axis*. You can represent the actual rotation with a Behavior Rotation, because the angle of rotation is linked to the time:

```haskell
rotationAngle = Behavior $ \t -> rotate xAxis t
```

Pretty simple, see?! However, it’d be more convenient if we could choose the type of time. We don’t really know what the time will be in the final application. It could be the current UTC time, it could be an integral time (think of a stepped discrete simulation), it could be the monotonic time, the system time, something we don’t even know yet. So let’s make our Behavior type more robust:

```haskell
newtype Behavior t a = Behavior { stepBehavior :: t -> a }
```

Simple change, but nice improvement.

That is the typical way to picture a *behavior*. However, we’ll see later that the real implementation is quite different from this naive one. Keep on reading.
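As a quick sanity check of this naive representation, here’s a tiny self-contained sketch — using a made-up sine behavior instead of the rotation, since rotate and xAxis aren’t defined here:

```haskell
newtype Behavior t a = Behavior { stepBehavior :: t -> a }

-- A behavior whose value oscillates with time.
wave :: Behavior Double Double
wave = Behavior $ \t -> sin t

-- Sampling it at two instants is just function application.
atStart, atQuarter :: Double
atStart   = stepBehavior wave 0
atQuarter = stepBehavior wave (pi / 2)
```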

An event is *something happening at some time*. Applied to FRP, an event is a pair of a time – giving the time of occurrence – and a carried value.

For instance, we could create an event that yields a rotation of 90° around X at 35 seconds:

```haskell
rotate90XAt35s :: Event Float Rotation
rotate90XAt35s = Event (35, rotate xAxis $ 90 * pi / 180)
```

Once again, that’s the naive way to look at events. Keep in mind that events have time occurrences and carry values.

#### Behavior switch

You switch behaviors all the time. Currently, you’re reading this paper, but you may go grab some food, go to bed, go to school or do whatever you like afterwards. You’re already used to behavior switching because that’s what you do every day of your life.

However, applying that to FRP is another thing. The idea is to express this:

*“Given a first behavior, I’ll switch to another behavior when a given event occurs.”*

This is how we express that in FRP:

```haskell
switch :: Behavior t a -> Event t (Behavior t a) -> Behavior t a
```

Let me decrypt switch for you.

The first parameter, a Behavior t a, is the initial behavior. For instance, currently, you’re reading. That could be the first behavior you’d pass to switch.

The second parameter, an Event t (Behavior t a), is an event that yields a new Behavior t a. Do you start to get it? No? Well then:

```haskell
switch reading finished
```

reading is the initial behavior, and finished is an event that occurs when you’re done reading. switch reading finished is then a behavior equal to reading until finished happens. When it does, switch reading finished extracts the behavior from the event and uses it instead.

I tend to think switch is a bad name, and I like naming it until:

```haskell
reading `until` finished
```

Nicer, isn’t it?! :)
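With the naive Behavior and Event representations, one possible sketch of until — an assumed implementation, switching once the event’s occurrence time is reached — could look like this:

```haskell
import Prelude hiding (until)

newtype Behavior t a = Behavior { stepBehavior :: t -> a }
newtype Event t a    = Event { getEvent :: (t, a) }

-- Behave like the first behavior before the event's occurrence
-- time, and like the carried behavior from then on.
until :: (Ord t) => Behavior t a -> Event t (Behavior t a) -> Behavior t a
until b (Event (t0, b')) = Behavior $ \t ->
  if t < t0 then stepBehavior b t else stepBehavior b' t
```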

#### Stepping

Stepping is the act of passing the input – i.e. the time t in our case – down to the Behavior t a and extracting the resulting a value. Behaviors are commonly connected to each other and form a *reactive network*.

That operation is also often referred to as *reactimation* in certain implementations, though it is more complex than just stepping the world. You don’t have to focus on that yet; just keep the reactimate function in mind. You might come across it at some point.

Everything you have read in this paper so far was just pop-science, to give you the main idea of what FRP is. The next part covers a more serious, real-world implementation and use of FRP, especially efficient implementations.

#### Let’s build an FRP library!

The first common error a lot of programmers make is trying to write algorithms or libraries to solve a problem they don’t fully understand yet. Let’s then start with an example so that we can figure out the problem.

#### Initial program

##### Setting up

Let’s say we want to be able to control a camera with the keyboard:

- W would go forward
- S would go backward
- A would left strafe
- D would right strafe
- R would go up
- F would go down

Let’s write the Input type:

```haskell
data Input
  = W
  | S
  | A
  | D
  | R
  | F
  | Quit
    deriving (Eq,Read,Show)
```

Straightforward. We also have a function that polls events from IO:

```haskell
pollEvents :: IO [Input]
pollEvents = fmap treatLine getLine
  where
    treatLine = concatMap (map fst . reads) . words
```

We use [Input] because several events may happen at the same time (imagine two keys pressed at once). The implementation is deliberately dumb in order to abstract over event polling. In your program, you wouldn’t use getLine but a function from SDL or similar.
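To see what the parsing helper does on its own, here is treatLine extracted as a standalone sketch: every whitespace-separated word is parsed with the derived Read instance, and unparsable words are silently dropped.

```haskell
data Input = W | S | A | D | R | F | Quit deriving (Eq, Read, Show)

-- The pure part of pollEvents: parse each word, keep successful
-- parses, discard the rest.
treatLine :: String -> [Input]
treatLine = concatMap (map fst . reads) . words
```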

And the camera:

```haskell
newtype Camera = Camera { _cameraPosition :: V3 Float } deriving (Eq,Read,Show)

makeLenses ''Camera
```

V3 is a type from linear, so you’ll need that library and an import of Linear.V3 to make Camera compile. You’ll also need lens, the GHC TemplateHaskell extension, and an import of Control.Lens.

Ok, let’s react to events!

##### First attempt: the regular and naive one

The idea is to use some kind of state we’ll update on each event. In our case the state is pretty simple:

```haskell
data AppSt = AppSt {
    _appCamera :: Camera
  } deriving (Eq,Read,Show)

makeLenses ''AppSt

updateAppSt :: AppSt -> Input -> Maybe AppSt
updateAppSt appst input = case input of
  W    -> Just $ appst & appCamera . cameraPosition . _z -~ 0.1
  S    -> Just $ appst & appCamera . cameraPosition . _z +~ 0.1
  A    -> Just $ appst & appCamera . cameraPosition . _x -~ 0.1
  D    -> Just $ appst & appCamera . cameraPosition . _x +~ 0.1
  F    -> Just $ appst & appCamera . cameraPosition . _y -~ 0.1
  R    -> Just $ appst & appCamera . cameraPosition . _y +~ 0.1
  Quit -> Nothing
```

There’s a lot of boilerplate in updateAppSt, but that doesn’t matter much. The idea is to modify the application state and return it for all inputs but Quit, for which we return Nothing to express the wish to quit the application.
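To show how a frame of this naive approach threads state through a batch of inputs, here’s a self-contained sketch with simplified stand-ins for the article’s types (no lens; the camera is reduced to its z position, and stepFrame is a hypothetical helper):

```haskell
import Control.Monad (foldM)

data Input = W | S | Quit deriving (Eq, Read, Show)

-- Simplified stand-in for the article's AppSt/Camera pair.
newtype AppSt = AppSt { camZ :: Float } deriving (Eq, Show)

updateAppSt :: AppSt -> Input -> Maybe AppSt
updateAppSt st input = case input of
  W    -> Just st { camZ = camZ st - 0.1 }
  S    -> Just st { camZ = camZ st + 0.1 }
  Quit -> Nothing

-- One frame: fold a batch of inputs through the state.
-- The Maybe monad short-circuits as soon as Quit shows up.
stepFrame :: AppSt -> [Input] -> Maybe AppSt
stepFrame = foldM updateAppSt
```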

I’ve been using that idea for a while. It’s simple and neat, because we don’t spread IO actions through our program logic, which then remains pure. That’s a very good way of doing it, and in most cases it’ll even be sufficient. However, the idea suffers from a serious issue: *it doesn’t scale*.

Who has only one camera? No one. You have a camera – maybe more than one – plus lights, objects, terrains, AI, sounds, assets, maps and so on and so forth. Our little AppSt type would explode as we add objects. That *doesn’t scale at all*. You could still go on and put all your objects in your AppSt – I did that once, and it was a pretty harsh experience.

Furthermore, imagine you want to add a new behavior to the camera, like handling mouse cursor movement – with Input augmented accordingly, of course. You’d need to change and add lines in our updateAppSt function. Imagine how messy updateAppSt would become… It would, basically, gather all reactions into a single function. Not neat.

#### Adding FRP

FRP enables us to use our reactive values as if they were regular values. You can add reactive values, you can subtract them, combine them in any way you want. The semantics of your values should carry over to the reactive versions.

Typically, with FRP, you don’t have event handlers anymore. The codebase can then grow sanely without having to accumulate big states every now and then. FRP applications scale and compose pretty well.

Let’s start with a simple FRP implementation for our example.

#### Naive FRP implementation

Let’s start with the behavior:

```haskell
newtype Behavior t a = Behavior { stepBehavior :: t -> a }
```

How could we implement our camera’s behavior with that? We actually can’t, since we don’t have any way to pass events.

*“I guess we could build a Behavior t Camera by passing our Input to the initial function?”*

Something like this?

```haskell
camera inputs = Behavior $ \_ -> -- implement the behavior with inputs
```

That could be a way to go, yeah. However, how would you implement switching with that? Remember the type of until:

```haskell
until :: Behavior t a -> Event t (Behavior t a) -> Behavior t a
```

camera is not a behavior; it’s a function from events to a behavior. You have to apply the events to camera in order to get its behavior. Once you’ve done that, you cannot pass events to the next behavior. What a pity. That is more a configured behavior than a behavior consuming inputs / events.

With the current Behavior t a implementation, a behavior network basically reduces to a function t -> a. Then the only stimulus you get from outside is… *time*. We lack something.

*“A way to forward events?”*

Yes! But more importantly, a way to extend our Behavior t a with inputs! Don’t get me wrong, we are not talking about a reactive *value* here. We are talking about a reactive *relationship*: newtype Behavior t a b = Behavior { stepBehavior :: t -> a -> b }.

That’s way better! Our new behavior represents a relationship between two reactive objects. The b is our old a, and the new a is the input! Let’s see what we can do with that.

```haskell
camera :: Behavior t [Input] Camera
camera = Behavior (const treatEvents)
  where
    treatEvents events
      | W `elem` events = Camera $ V3 0 0 (-0.1)
      | S `elem` events = Camera $ V3 0 0 0.1
      | otherwise       = Camera $ V3 0 0 0
```

That is not exactly what we intended to express. Here, if we press the W key, we just put the camera at V3 0 0 (-0.1), while we’d like to move it forward. That’s because we need switching.

The idea is the following: the initial behavior is idleing. We just don’t do anything. Then, we switch to a given behavior when a given event occurs. We’ll need recursion so that we can *ping-pong* between behaviors. Let’s write the idleing behavior:

```haskell
idleing = Behavior (const snd)
```

That behavior requires as input our Input events list and a Camera and simply returns the Camera. Pretty nice.

How do we switch then? We need Event. Consider this:

```haskell
newtype Event t a = Event { getEvent :: (t,a) }
```

In order to switch, we need a to be a behavior. In the first place, we’ll create several Event t [Input] and pass them as input to the behavior network. How could we change the [Input] into something more interesting? Simple: Functor!

```haskell
instance Functor (Event t) where
  fmap f (Event e) = Event (fmap f e)
```

**Note**: because Event t a is a newtype, you could instead use the GHC GeneralizedNewtypeDeriving extension to let GHC derive the instance for you.

Then, we can use the Functor instance to change the type of the carried value of the event. That’s great because we don’t change the occurrence time, only the carried value. Transforming events is really important in FRP. We can then transform the [Input] into a single behavior:

```haskell
inputToBehavior i = case i of
  W -> Behavior $ \_ (_,cam) -> cam & cameraPosition . _z -~ 0.1
  S -> Behavior $ \_ (_,cam) -> cam & cameraPosition . _z +~ 0.1
  A -> -- and so on
  D -> -- and so forth
  F -> -- ;)
  R -> -- ...
  _ -> Behavior $ \_ (_,cam) -> cam
```

Pretty simple, see? When we press W, we go forward forever. We could have implemented the function above with another until call in order to go back to idleing, making some kind of behavior loop.

However, switching is fairly poorly implemented here. It’s not very efficient, and requires a ton of boilerplate.

#### Auto

There is a very cool type out there called Auto, used to implement automatons:

```haskell
newtype Auto a b = Auto { runAuto :: a -> (b,Auto a b) }
```

An Auto a b is a function-like structure. It has an input and an output. The difference with a regular function is that it also has a secondary output: another Auto a b, which is the next automaton to be used.

Auto a b captures pretty cool concepts, such as *locally defined state*. It’s also a great ally when implementing switching in an FRP system, because we can state that Behavior ≃ Auto: a Behavior is a function from the environment state to the next reactive value, and it also has another output representing what to do “next”.
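Local state is the key trick: the next automaton closes over whatever it needs to remember. A tiny sketch (the total example is mine, not from the article):

```haskell
newtype Auto a b = Auto { runAuto :: a -> (b, Auto a b) }

-- An automaton that sums its inputs; the running total lives
-- only in the next automaton it returns, no mutable variable.
total :: Int -> Auto Int Int
total acc = Auto $ \x ->
  let acc' = acc + x
  in (acc', total acc')
```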

Let’s then change our Behavior type to make it look like a bit more like Auto:

```haskell
newtype Behavior t a b = Behavior { stepBehavior :: t -> a -> (b,Behavior t a b) }
```

Yeah, that’s it! That’s a pretty good start!

#### Useful abstractions

Before going on, I’d like to introduce those scary abstractions you may be afraid of. They’re actually not scary: they’re **all simple**. At least for our purposes here.

**Note**: I know we could simply use the GeneralizedNewtypeDeriving extension, but I want to detail all the implementations, so we’ll see how to write all these nice abstractions by hand.

##### Arrow

Arrows are a generalization of functions along the axis of computation. A computation has inputs and outputs. So does a behavior.

Although arrows are not that widely used in Haskell, they’re ultra simple (both the abstraction itself and the common combinators built over it) and useful in some cases.

In order to implement arrows, we need to provide code for both arr, whose type is arr :: (Arrow a) => (b -> c) -> a b c, and first, whose type is first :: (Arrow a) => a b c -> a (b,d) (c,d). arr lifts a regular function into the arrowized version, and first turns an arrow taking a value into an arrow taking a pair, applying the function to the *first* component of the pair. Let’s implement that:

```haskell
arr f = fix $ \r -> Behavior $ \t a -> (f a,r)

first f = Behavior $ \t (b,d) ->
  let (c,fn) = stepBehavior f t b
  in ((c,d),first fn)
```

##### Category

A category basically exposes two concepts: composition and identity. In our case, the identity represents a constant behavior in time and the composition composes two behaviors in time. Let’s implement Category by providing implementation for both id and (.):

```haskell
instance Category (Behavior t) where
  id = arr id

  x . y = Behavior $ \t a ->
    let (yr,yn) = stepBehavior y t a
        (xr,xn) = stepBehavior x t yr
    in (xr,xn . yn)
```
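Composition then reads right-to-left, just like plain functions. A self-contained sketch (arrB plays the role of arr, renamed so we don’t need a full Arrow instance here):

```haskell
import Prelude hiding (id, (.))
import Control.Category

newtype Behavior t a b = Behavior { stepBehavior :: t -> a -> (b, Behavior t a b) }

-- Lift a plain function into a behavior that never changes.
arrB :: (a -> b) -> Behavior t a b
arrB f = Behavior $ \_ a -> (f a, arrB f)

instance Category (Behavior t) where
  id = arrB (\a -> a)
  x . y = Behavior $ \t a ->
    let (yr, yn) = stepBehavior y t a
        (xr, xn) = stepBehavior x t yr
    in (xr, xn . yn)

-- Double, then increment, as one composed behavior.
doubleThenInc :: Behavior t Int Int
doubleThenInc = arrB (+ 1) . arrB (* 2)
```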

**Note**: because the Prelude exports specialized versions of id and (.) – the function ones – you should hide them (import Prelude hiding (id, (.))) in order to implement Category.

##### Semigroup

A semigroup is a pretty cool algebraic structure used in Haskell to represent “anything that associates”. It exposes an associative binary function over a set. In the case of behaviors, if two behaviors output semigroup values, we can associate the behaviors to build a single one.

A Semigroup is implemented via a single typeclass method, (<>). Let’s do that for behaviors:

```haskell
instance (Semigroup b) => Semigroup (Behavior t a b) where
  x <> y = Behavior $ \t a ->
    let (xr,xn) = stepBehavior x t a
        (yr,yn) = stepBehavior y t a
    in (xr <> yr,xn <> yn)
```

Simple and neat.

##### Functor

You might already know that one, since I talked about it a few lines ago, but let’s write the instance for our Behavior:

```haskell
instance Functor (Behavior t a) where
  fmap f b = Behavior $ \t a ->
    let (br,bn) = stepBehavior b t a
    in (f br,fmap f bn)
```

Pretty cool eh?

##### Applicative

A well-known one, too. Let’s see how we could implement Applicative:

```haskell
instance Applicative (Behavior t a) where
  pure = arr . const

  f <*> x = Behavior $ \t a ->
    let (fr,fn) = stepBehavior f t a
        (xr,xn) = stepBehavior x t a
    in (fr xr,fn <*> xn)
```

##### Profunctor

This one is special. You don’t have to know what a profunctor is, but you should, because profunctors are pretty simple to use in Haskell and very useful. I won’t explain what they are here – have a look at this article for further details.

If you do know them, here’s the implementation for dimap:

```haskell
instance Profunctor (Behavior t) where
  dimap l r x = Behavior $ \t a ->
    let (xr,xn) = stepBehavior x t (l a)
    in (r xr,dimap l r xn)
```

#### Inhibition

##### Bare concept

Behaviors consume the environment state and produce outputs. However, sometimes they just don’t: they don’t output anything. That could be the case for a behavior that only emits during a certain period of time, or for a signal function that’s only defined on a given interval: what should we output for values that lie outside it?

Such a scenario is called *inhibition*. There are several ways to implement inhibition. The simplest and most famous one is to use Maybe as a wrapper over the output, i.e. a Behavior t a (Maybe b). If the Maybe b is Nothing, the output is undefined, and the behavior inhibits.

However, using a bare Maybe exposes the user directly to inhibition. There’s another way to do that:

```haskell
newtype Behavior t a b = Behavior { stepBehavior :: t -> a -> Maybe (b,Behavior t a b) }
```

Here we are. We have behaviors that can inhibit. If a behavior doesn’t inhibit, it returns Just (output,nextBehavior); otherwise it outputs Nothing and inhibits forever.

**Exercise**: try to reimplement all the above abstractions with the new type of Behavior.

We can add a bunch of other interesting functions:

##### Inhibition-related combinators

```haskell
dead :: Behavior t a b
dead = Behavior $ \_ _ -> Nothing

one :: b -> Behavior t a b
one x = Behavior $ \_ _ -> Just (x,dead)
```

dead is a behavior that inhibits forever. That is, it doesn’t produce any value at any time.

one x produces x once, and then inhibits forever. That’s a nice combinator to use when you want to *pulse* values in time. We’ll see later that it’s very useful to represent discrete events, like key presses or mouse motion.

However, inhibiting can be useful. For instance, we can implement a new kind of behavior switching using inhibition. Let’s try to implement a function that takes two behaviors and switches to the latter when the former starts inhibiting:

```haskell
revive :: Behavior t a b -> Behavior t a b -> Behavior t a b
revive x y = Behavior $ \t a -> case stepBehavior x t a of
  Just (xr,xn) -> Just (xr,revive xn y)
  Nothing      -> stepBehavior y t a

(~>) :: Behavior t a b -> Behavior t a b -> Behavior t a b
(~>) = revive
```

(~>) is a handy alias for revive. a ~> b is a behavior that acts as a until it inhibits, and as b afterwards. Simple and useful.

In *netwire*, revive – or (~>) – is (-->). There’s also an operator that does the opposite thing: (>--). a >-- b is a until b starts producing – i.e. until b doesn’t inhibit anymore.
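Putting the combinators together, here’s a self-contained sketch chaining two one-shot behaviors with revive (runSteps is a hypothetical driver I added, stepping with dummy time/input until inhibition):

```haskell
newtype Behavior t a b = Behavior { stepBehavior :: t -> a -> Maybe (b, Behavior t a b) }

dead :: Behavior t a b
dead = Behavior $ \_ _ -> Nothing

one :: b -> Behavior t a b
one x = Behavior $ \_ _ -> Just (x, dead)

revive :: Behavior t a b -> Behavior t a b -> Behavior t a b
revive x y = Behavior $ \t a -> case stepBehavior x t a of
  Just (xr, xn) -> Just (xr, revive xn y)
  Nothing       -> stepBehavior y t a

-- Step a behavior at most n times, collecting outputs until it inhibits.
runSteps :: Int -> Behavior () () b -> [b]
runSteps 0 _ = []
runSteps n b = case stepBehavior b () () of
  Just (r, b') -> r : runSteps (n - 1) b'
  Nothing      -> []
```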

**Exercise**: write the implementation of (>~), our version of netwire’s (>--).

Now that you have a better idea of how you could implement a behavior, let’s talk about netwire’s.

netwire’s behavior type is called Wire. It’s actually:

```haskell
Wire s e m a b
```

s is the *session* time – basically a type used to extract time. e is the inhibition value. m is an inner monad – yes, you can use monadic code within netwire, which is not really that useful actually, except for Reader, I guess. And a and b are respectively the inputs and outputs.

*“What is that inhibition type?”*

Yeah, netwire doesn’t use Maybe for inhibition. Picture Wire as:

```haskell
newtype Wire s e m a b = Wire { stepWire :: s -> a -> m (Either e (b,Wire s e m a b)) }
```

Instead of using Maybe (b,Wire s e m a b), it uses Either. Some functions require e to be a Monoid. I guess netwire uses that to accumulate values during inhibition. I don’t see decent use cases for such a feature, but it’s there. I tend to use this kind of wire in all my uses of netwire:

```haskell
Wire s () Identity a b -- I think this is the most common type of wire
```

Keep in mind that although you can set m to IO, that’s not what netwire – and FRP – was designed for.

#### Events

What about events? Well, netwire exposes events as a home-made Maybe:

```haskell
data Event a
  = Event a
  | NoEvent
    deriving (Eq,Show)

instance Functor Event where
  fmap f e = case e of
    Event a -> Event (f a)
    NoEvent -> NoEvent
```

That’s actually enough, because we can attach Event a to time occurrences with Behavior t b a. You’ll find functions using Wire s e m (Event a) b every now and then, for instance. You’ll get used to that as you write toy examples – and real-world ones, of course.

#### In the end

What a trek… As you can see, we got pretty close to understanding netwire’s implementation. There are a few concepts I haven’t covered – like intrinsic switches, continuable switches, deferred switches… – but I don’t pretend to have written a comprehensive FRP article. You’ll have to dig in a bit more ;)

I’ll write another article about FRP and netwire to implement the camera example with netwire so that you can have a concrete example.

### Tom Schrijvers: PPDP 2015: Extended Deadline

Call for papers 17th International Symposium on Principles and Practice of Declarative Programming PPDP 2015

Special Issue of Science of Computer Programming (SCP)

Siena, Italy, July 14-16, 2015 (co-located with LOPSTR 2015)

http://costa.ls.fi.upm.es/ppdp15

!!!!!! EXTENDED DEADLINE: April 6, 2015 !!!!!!

PPDP 2015 is a forum that brings together researchers from the declarative programming communities, including those working in the logic, constraint and functional programming paradigms, but also embracing languages, database languages, and knowledge representation languages. The goal is to stimulate research in the use of logical formalisms and methods for specifying, performing, and analyzing computations, including mechanisms for mobility, modularity, concurrency, object-orientation, security, verification and static analysis. Papers related to the use of declarative paradigms and tools in industry and education are especially solicited. Topics of interest include, but are not limited to:

- Functional programming
- Logic programming
- Answer-set programming
- Functional-logic programming
- Declarative visual languages
- Constraint Handling Rules
- Parallel implementation and concurrency
- Monads, type classes and dependent type systems
- Declarative domain-specific languages
- Termination, resource analysis and the verification of declarative programs
- Transformation and partial evaluation of declarative languages
- Language extensions for security and tabulation
- Probabilistic modeling in a declarative language and modeling reactivity
- Memory management and the implementation of declarative systems
- Practical experiences and industrial application

This year the conference will be co-located with the 25th International Symposium on Logic-Based Program Synthesis and Transformation (LOPSTR 2015).

The conference will be held in Siena, Italy. Previous symposia were held at Canterbury (UK), Madrid (Spain), Leuven (Belgium), Odense (Denmark), Hagenberg (Austria), Coimbra (Portugal), Valencia (Spain), Wroclaw (Poland), Venice (Italy), Lisboa (Portugal), Verona (Italy), Uppsala (Sweden), Pittsburgh (USA), Florence (Italy), Montreal (Canada), and Paris (France). You might have a look at the contents of past PPDP symposia.

Papers must describe original work, be written and presented in English, and must not substantially overlap with papers that have been published or that are simultaneously submitted to a journal, conference, or workshop with refereed proceedings. Work that already appeared in unpublished or informally published workshop proceedings may be submitted (please contact the PC chair in case of questions).

After the symposium, a selection of the best papers will be invited to extend their submissions in the light of the feedback solicited at the symposium. The papers are expected to include at least 30% extra material over and above the PPDP version. Then, after another round of reviewing, these revised papers will be published in a special issue of SCP with a target publication date by Elsevier of 2016.

Important Dates

- Abstract Submission: 30 March, 2015
- Paper submission: 6 April, 2015
- Notification: 14 May, 2015
- Camera-ready: To be announced
- Symposium: 14-16 July, 2015

Authors should submit an electronic copy of the full paper in PDF. Papers should be submitted to the submission website for PPDP 2015. Each submission must include on its first page the paper title; authors and their affiliations; abstract; and three to four keywords. The keywords will be used to assist the program committee in selecting appropriate reviewers for the paper. Papers should consist of the equivalent of 12 pages under the ACM formatting guidelines. These guidelines are available online, along with formatting templates or style files. Submitted papers will be judged on the basis of significance, relevance, correctness, originality, and clarity. They should include a clear identification of what has been accomplished and why it is significant. Authors who wish to provide additional material to the reviewers beyond the 12-page limit can do so in clearly marked appendices: reviewers are not required to read such appendices.

Invited speakers:

- Patrick Cousot (NYU, jointly with LOPSTR)
- Martin Hofmann (LMU, Munich)
- Dale Miller (INRIA and Ecole Polytechnique, jointly with LOPSTR)

Program Committee

- Michael Adams, University of Utah, USA
- Puri Arenas, Complutense University of Madrid, Spain
- Amir Ben-Amram, Tel-Aviv Academic College, Israel
- Ines Castro, Universidade do Porto, Portugal
- Patrick Cousot, New York University, USA
- Gregory Duck, National University of Singapore, Singapore
- Fabio Fioravanti, University of Chieti-Pescara, Italy
- Thom Frühwirth, University of Ulm, Germany
- Roberto Giacobazzi, University of Verona, Italy
- Michael Hanus, CAU Kiel, Germany
- Andy King, University of Kent, UK
- F. López-Fraguas, Complutense University of Madrid, Spain
- Ian Mackie, University of Sussex, UK
- Dale Miller, INRIA and LIX/Ecole Polytechnique, France
- Torsten Schaub, University of Potsdam, Germany
- Tom Schrijvers, KU Leuven, Belgium
- Frank D. Valencia, CNRS and LIX, Ecole Polytechnique, France
- German Vidal, Universitat Politecnica de Valencia, Spain
- Marina Vos, University of Bath, UK
- Nobuko Yoshida, Imperial College London, UK

Program Chair

Elvira Albert Complutense University of Madrid C/ Profesor Garcia Santesmases E-28040 Madrid, Spain Email: elvira@sip.ucm.es

Symposium Chair

Moreno Falaschi Department of information engineering and mathematics University of Siena, Italy Email: moreno.falaschi@unisi.it

### Distributing Haskell on a cluster

### I want to make a Vim clone in Haskell. Where do I start?

So, since I've been seeing the "choose a long-term project" thing everywhere, I've decided to start writing a Vim-ish text editor in Haskell.

Any ideas? (like on architecture, curses libraries, whatever - I haven't written anything bigger than 100 loc yet, barely managed to finish LYAH properly)

Also, is there a toy implementation? Like there's this one, but it's in Rust.

submitted by octatoan

### is build breakage data about the main Haskellplatform projects available?

### Proposal: add chunksOf to Data.Sequence

### Dynamic loadable modules?

Is there an easy way to do dynamically loadable Haskell modules? For example, I have the modules A, B, C and they all export something of type Something. Based on the arguments with which the application is invoked, I’d like to import a specified module.

The alternative is of course to create a datatype like:

```haskell
data LoadModule = ModuleA Something | ModuleB Something | etc...

type Modules = [LoadModule]
```

Here I would transform the given input into a datatype through which I could find the desired one via constructor pattern matching. However, I’m going to have around 40+ such modules, and there is a lot more boilerplate with this approach.
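A lighter-weight alternative (a sketch with hypothetical names, assuming each module’s exported value can be listed once) is a plain lookup table keyed by the argument string, which avoids the 40-constructor datatype entirely:

```haskell
-- Hypothetical payload standing in for the question's Something.
newtype Something = Something { somethingName :: String }

moduleA, moduleB :: Something
moduleA = Something "A"
moduleB = Something "B"

-- One registry instead of one constructor per module.
registry :: [(String, Something)]
registry = [("A", moduleA), ("B", moduleB)]

loadModule :: String -> Maybe Something
loadModule key = lookup key registry
```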

submitted by alt_account10
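Not from the thread, but one boilerplate-light alternative can be sketched: if each module exports a value of a shared type, a single lookup table keyed by name replaces the 40-constructor datatype (true runtime loading would instead need something like the GHC API or the `plugins` package). All names here (`Something`, `moduleA`, `moduleB`) are illustrative:

```haskell
-- Sketch: select a "module" at runtime via a name-indexed table
-- instead of one datatype constructor per module.
import qualified Data.Map as Map
import System.Environment (getArgs)

newtype Something = Something { runSomething :: IO () }

moduleA, moduleB :: Something
moduleA = Something (putStrLn "running A")
moduleB = Something (putStrLn "running B")

-- The only per-module boilerplate is one entry in this table.
modules :: Map.Map String Something
modules = Map.fromList [("A", moduleA), ("B", moduleB)]

main :: IO ()
main = do
  args <- getArgs
  case args of
    [name] | Just m <- Map.lookup name modules -> runSomething m
    _ -> putStrLn "usage: prog <module-name>"
```

Adding a new module then means adding one exported value and one table entry, with no changes to the dispatch logic.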

### package ParsecToken

### Question: Type System and N-Dimensional Vectors

AFAIK, in Haskell, the two main ways to encode vectors (in the mathematical sense) are n-dimensional tuples `(t, t)` or lists `[t]`. Lists, of course, do not carry size information as part of their type. Tuples do, but the size must be constant - hardcoded into the type. My question is: is there any way to statically check vectors of arbitrary size? For instance, could we define a function like zip which statically checks that its arguments are the same length, but doesn't specify the length? Something like `myZip :: Vec t n -> Vec t n -> Vec (t, t) n`, where `t` is the type of data held in the vector and `n` is the length. Thanks.

submitted by swisskid22
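The answer is yes: with GADTs and DataKinds, the length can be tracked in the type. A minimal sketch (the names `Nat`, `Vec`, and `vzip` are illustrative; packages such as `vector-sized` provide polished versions of the same idea):

```haskell
{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

-- Type-level natural numbers, promoted by DataKinds.
data Nat = Z | S Nat

-- A vector indexed by its length.
data Vec (n :: Nat) a where
  VNil  :: Vec 'Z a
  VCons :: a -> Vec n a -> Vec ('S n) a

-- Both arguments share the same length index n, so zipping
-- vectors of different lengths is a compile-time type error.
vzip :: Vec n a -> Vec n b -> Vec n (a, b)
vzip VNil         VNil         = VNil
vzip (VCons x xs) (VCons y ys) = (x, y) `VCons` vzip xs ys
```

With this definition, `vzip (VCons 1 VNil) (VCons 'a' (VCons 'b' VNil))` is rejected by the type checker, exactly the static length check the question asks for.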

### Brent Yorgey: Anafunctors

This is part four in a series of posts on avoiding the axiom of choice (part one, part two, part three).

In my previous post, we considered the “Axiom of Protoequivalence”—that is, the statement that every fully faithful, essentially surjective functor (*i.e.* every *protoequivalence*) is an equivalence—and I claimed that in a traditional setting this is equivalent to the axiom of choice. However, intuitively it feels like AP “ought to” be true, whereas AC must be rejected in constructive logic.

One way around this is by generalizing functors to *anafunctors*, which were introduced by Makkai (1996). The original paper is difficult going, since it is full of tons of detail, poorly typeset, and can only be downloaded as seven separate postscript files. There is also quite a lot of legitimate depth to the paper, which requires significant categorical sophistication (more than I possess) to fully understand. However, the basic ideas are not too hard to grok, and that’s what I will present here.

It’s important to note at the outset that anafunctors are much more than just a technical device enabling the Axiom of Protoequivalence. More generally, if everything in category theory is supposed to be done “up to isomorphism”, it is a bit suspect that functors have to be defined for objects *on the nose*. Anafunctors can be seen as a generalization of functors, where each object in the source category is sent not just to a single object, but to an entire *isomorphism class* of objects, without privileging any particular object in the class. In other words, anafunctors are functors whose “values are specified only up to unique isomorphism”.

Such functors represent a many-to-many relationship between objects of $\mathbb{C}$ and objects of $\mathbb{D}$. Normal functors, as with any function, may of course map multiple objects of $\mathbb{C}$ to the same object in $\mathbb{D}$. The novel aspect is the ability to have a single object of $\mathbb{C}$ correspond to multiple objects of $\mathbb{D}$. The key idea is to add a class of “specifications” which mediate the relationship between objects in the source and target categories, in exactly the same way that a “junction table” must be added to support a many-to-many relationship in a database schema, as illustrated below:

On the left is a many-to-many relation between a set of shapes and a set of numbers. On the right, this relation has been mediated by a “junction table” containing a set of “specifications”—in this case, each specification is simply a pair of a shape and a number—together with two mappings (one-to-many relations) from the specifications to both of the original sets, such that a specification maps to a shape $s$ and number $n$ if and only if $s$ and $n$ were originally related.
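The junction-table picture can be modeled directly in a few lines of Haskell. This sketch is not from the post; the names (`Shape`, `Spec`, `related`) and the particular relation are illustrative:

```haskell
-- A many-to-many relation between shapes and numbers, mediated by
-- a set of specifications with two projection functions.
data Shape = Circle | Square | Triangle deriving (Eq, Show)

-- Each specification is simply a pair of a shape and a number.
type Spec = (Shape, Int)

specs :: [Spec]
specs = [(Circle, 1), (Circle, 2), (Square, 2)]

-- The two one-to-many mappings out of the junction table.
toShape :: Spec -> Shape
toShape = fst

toNumber :: Spec -> Int
toNumber = snd

-- A shape and a number are related iff some specification maps to both.
related :: Shape -> Int -> Bool
related s n = any (\sp -> toShape sp == s && toNumber sp == n) specs
```

Here `Circle` is related to both `1` and `2`, and `2` is related to both `Circle` and `Square`: exactly the many-to-many behavior the junction table exists to support.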

In particular, an *anafunctor* $F : \mathbb{C} \to \mathbb{D}$ is defined as follows.

- There is a class $S$ of *specifications*.
- There are two functions $\overleftarrow{(\ )} : S \to \mathrm{Ob}\,\mathbb{C}$ and $\overrightarrow{(\ )} : S \to \mathrm{Ob}\,\mathbb{D}$ mapping specifications to objects of $\mathbb{C}$ and $\mathbb{D}$.

$S$, $\overleftarrow{(\ )}$, and $\overrightarrow{(\ )}$ together define a many-to-many relationship between objects of $\mathbb{C}$ and objects of $\mathbb{D}$. $D \in \mathbb{D}$ is called a *specified value of $F$ at $C$* if there is some specification $s \in S$ such that $\overleftarrow{s} = C$ and $\overrightarrow{s} = D$, in which case we write $F_s(C) = D$. Moreover, $D$ is *a value of $F$ at $C$* (not necessarily a *specified* one) if there is some $s$ for which $F_s(C) = D$.

The idea now is to impose additional conditions which ensure that $F$ “acts like” a regular functor $\mathbb{C} \to \mathbb{D}$.

- Functors are defined on all objects; so we require each object of $\mathbb{C}$ to have at least one specification $s$ which corresponds to it—that is, $\overleftarrow{(\ )}$ must be surjective.
- Functors transport morphisms as well as objects. For each $s, t \in S$ (the middle of the below diagram) and each $f : \overleftarrow{s} \to \overleftarrow{t}$ in $\mathbb{C}$ (the left-hand side below), there must be a morphism $F_{s,t}(f) : \overrightarrow{s} \to \overrightarrow{t}$ in $\mathbb{D}$ (the right-hand side):
- Functors preserve identities: for each $s \in S$ we should have $F_{s,s}(\mathit{id}_{\overleftarrow{s}}) = \mathit{id}_{\overrightarrow{s}}$.
- Finally, functors preserve composition: for all $s, t, u \in S$ (in the middle below), $f : \overleftarrow{s} \to \overleftarrow{t}$, and $g : \overleftarrow{t} \to \overleftarrow{u}$ (the left side below), it must be the case that $F_{s,u}(g \circ f) = F_{t,u}(g) \circ F_{s,t}(f)$:

Our initial intuition was that an anafunctor should map objects of $\mathbb{C}$ to isomorphism classes of objects in $\mathbb{D}$. This may not be immediately apparent from the definition, but is in fact the case. In particular, the identity morphism $\mathit{id}_C$ maps to isomorphisms between specified values of $C$; that is, under the action of an anafunctor, an object together with its identity morphism “blow up” into an isomorphism class (aka a *clique*). To see this, let $s, t \in S$ be two different specifications corresponding to $C$, that is, $\overleftarrow{s} = \overleftarrow{t} = C$. Then by preservation of composition and identities, we have $F_{s,t}(\mathit{id}_C) \circ F_{t,s}(\mathit{id}_C) = F_{t,t}(\mathit{id}_C) = \mathit{id}_{\overrightarrow{t}}$, so $F_{s,t}(\mathit{id}_C)$ and $F_{t,s}(\mathit{id}_C)$ constitute an isomorphism between $\overrightarrow{s}$ and $\overrightarrow{t}$.

There is an alternative, equivalent definition of anafunctors, which is somewhat less intuitive but usually more convenient to work with: an anafunctor $F : \mathbb{C} \to \mathbb{D}$ is a *category* $\mathbb{S}$ of specifications together with a span of *functors* $\mathbb{C} \stackrel{\overleftarrow{F}}{\longleftarrow} \mathbb{S} \stackrel{\overrightarrow{F}}{\longrightarrow} \mathbb{D}$ where $\overleftarrow{F}$ is fully faithful and (strictly) surjective on objects.

Note that in this definition, $\overleftarrow{F}$ must be *strictly* (as opposed to *essentially*) surjective on objects, that is, for every $C \in \mathbb{C}$ there is some $S \in \mathbb{S}$ such that $\overleftarrow{F}(S) = C$, rather than only requiring $\overleftarrow{F}(S) \cong C$. Given this strict surjectivity on objects, it is equivalent to require $\overleftarrow{F}$ to be full, as in the definition above, or to be (strictly) surjective on the class of all morphisms.

We are punning on notation a bit here: in the original definition of anafunctor, $S$ is a set and $\overleftarrow{(\ )}$ and $\overrightarrow{(\ )}$ are functions on objects, whereas in this more abstract definition $\mathbb{S}$ is a category and $\overleftarrow{F}$ and $\overrightarrow{F}$ are functors. Of course, the two are closely related: given a span of functors $\mathbb{C} \leftarrow \mathbb{S} \to \mathbb{D}$, we may simply take the objects of $\mathbb{S}$ as the class of specifications $S$, and the actions of the functors $\overleftarrow{F}$ and $\overrightarrow{F}$ on objects as the functions from specifications to objects of $\mathbb{C}$ and $\mathbb{D}$. Conversely, given a class of specifications $S$ and functions $\overleftarrow{(\ )}$ and $\overrightarrow{(\ )}$, we may construct the category $\mathbb{S}$ with $\mathrm{Ob}\,\mathbb{S} = S$ and with morphisms $\overleftarrow{s} \to \overleftarrow{t}$ in $\mathbb{C}$ acting as morphisms $s \to t$ in $\mathbb{S}$. From $\mathbb{S}$ to $\mathbb{C}$, we construct the functor given by $\overleftarrow{(\ )}$ on objects and the identity on morphisms, and the other functor maps $f : s \to t$ in $\mathbb{S}$ to $F_{s,t}(f) : \overrightarrow{s} \to \overrightarrow{t}$ in $\mathbb{D}$.

Every functor $F : \mathbb{C} \to \mathbb{D}$ can be trivially turned into an anafunctor $\mathbb{C} \stackrel{\mathit{id}}{\longleftarrow} \mathbb{C} \stackrel{F}{\longrightarrow} \mathbb{D}$. Anafunctors also compose. Given compatible anafunctors $F : \mathbb{C} \to \mathbb{D}$ and $G : \mathbb{D} \to \mathbb{E}$ with specification classes $S$ and $T$, consider the action of their composite on objects: each object of $\mathbb{C}$ may map to multiple objects of $\mathbb{E}$, via objects of $\mathbb{D}$. Each such mapping corresponds to a zig-zag path $C \longleftarrow s \longrightarrow D \longleftarrow t \longrightarrow E$. In order to *specify* such a path it suffices to give the pair $(s, t)$, which determines $C$, $D$, and $E$. Note, however, that not every pair in $S \times T$ corresponds to a valid path, but only those which agree on the middle object $D \in \mathbb{D}$. Thus, we may take $\{ (s, t) \mid s \in S,\ t \in T,\ \overrightarrow{s} = \overleftarrow{t} \}$ as the set of specifications for the composite $G \circ F$, with $\overleftarrow{(s,t)} = \overleftarrow{s}$ and $\overrightarrow{(s,t)} = \overrightarrow{t}$. On morphisms, $(G \circ F)_{(s,t),(u,v)}(f) = G_{t,v}(F_{s,u}(f))$. It is not hard to check that this satisfies the anafunctor laws.
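The object part of this composition is easy to play with concretely. This sketch is not from the post: it stands in for classes with lists, for equality of objects with an `Eq` instance, and ignores the action on morphisms; the names (`Ana`, `compose`) are illustrative.

```haskell
-- The object part of an anafunctor: a set of specifications together
-- with the two projections to source and target objects.
data Ana s c d = Ana
  { anaSpecs :: [s]      -- the class of specifications
  , anaLeft  :: s -> c   -- specification to source object
  , anaRight :: s -> d   -- specification to target object
  }

-- Compose by keeping exactly the pairs (s, t) that agree on the
-- middle object, projecting out to the two outer objects.
compose :: Eq d => Ana t d e -> Ana s c d -> Ana (s, t) c e
compose g f = Ana
  { anaSpecs = [ (s, t) | s <- anaSpecs f, t <- anaSpecs g
                        , anaRight f s == anaLeft g t ]
  , anaLeft  = \(s, _) -> anaLeft f s
  , anaRight = \(_, t) -> anaRight g t
  }
```

The list comprehension is exactly the "junction of junctions": each surviving pair records one zig-zag path through a shared middle object.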

If you know what a pullback is, note that the same thing can also be defined at a higher level in terms of spans. $\mathbf{Cat}$, the category of all (small) categories, is complete, and in particular has pullbacks, so we may construct a new anafunctor from $\mathbb{C}$ to $\mathbb{E}$ by taking a pullback of $\overrightarrow{F}$ and $\overleftarrow{G}$ and then composing appropriately.

One can go on to define ananatural transformations between anafunctors, and show that together these constitute a $2$-category $\mathbf{AnaCat}$ which is analogous to the usual $2$-category $\mathbf{Cat}$ of (small) categories, functors, and natural transformations; in particular, there is a fully faithful embedding of $\mathbf{Cat}$ into $\mathbf{AnaCat}$, which moreover is an equivalence if AC holds.

To work in category theory based on set theory and classical logic, while avoiding AC, one is therefore justified in “mixing and matching” functors and anafunctors as convenient, but discussing them all as if they were regular functors (except when defining a particular anafunctor). Such usage can be formalized by turning everything into an anafunctor, and translating functor operations and properties into corresponding operations and properties of anafunctors.

However, as I will argue in some future posts, there is a better solution, which is to throw out set theory as a foundation of category theory and start over with homotopy type theory. In that case, thanks to a generalized notion of equality, regular functors act like anafunctors, and in particular AP holds.

References

Makkai, Michael. 1996. “Avoiding the Axiom of Choice in General Category Theory.” *Journal of Pure and Applied Algebra* 108 (2): 109–73.