News aggregator

Memory consumption issues under heavy network throughput/concurrency loads

haskell-cafe - Tue, 07/15/2014 - 6:18pm
I have been testing solutions in several languages for running a network daemon that accepts hundreds of thousands of websocket connections and moves messages between them. Originally I used this websocket lib, http://jaspervdj.be/websockets/, but upon discovering a rather severe memory leak issue even when sending just a basic ping, switched to Michael Snoyman's first stab at a websocket impl for Yesod here: https://github.com/yesodweb/yesod/commit/66437453f57e6a2747ff7c2199aa7ad25db5904c#diff-dea2b092b8e392f12cc93acb345745e1R58. When under mild load (5k connections, each pinging once every 10+ seconds), memory usage remained stable and acceptable at around 205 MB. Switching to higher load (pinging every second or less), memory usage spiked to 560 MB and continued to 'leak' slowly. When I quit the server with the profile diagnostics, it indicated that hundreds of MB were "lost due to fragmentation". In addition, merely opening the connections and dropping them, repeatedly, made the base memory usage go up.
Categories: Offsite Discussion

Is there any way to compile a program in an embedded language?

Haskell on Reddit - Tue, 07/15/2014 - 11:38am

Haskell is very good for designing embedded DSLs - everyone knows that. Once that language is complete, though, you now will have to design a compiler for it, if you want any serious computation. I'm aware of projects like PyPy, which gives you a JIT compiler for free, given a correct interpreter. Is there something in those lines for Haskell?

submitted by SrPeixinho
[link] [6 comments]
Categories: Incoming News

What is the state of the art in testing code generation?

haskell-cafe - Tue, 07/15/2014 - 11:26am
Tom Ellis wrote: First I should point out Magic Haskeller (which generates typed (Haskell) terms and then accepts those that satisfy the given input-output relationship). http://nautilus.cs.miyazaki-u.ac.jp/~skata/MagicHaskeller.html It is MagicHaskeller on Hackage. The most straightforward method is to generate random terms and filter the well-typed ones. This is usually a bad method since many or most randomly generated terms will be ill-typed. One should generate well-typed terms from the beginning, without any rejections. That is actually not difficult: one merely needs to take the type checker (which one has to write anyway) and run it backwards. Perhaps it is not as simple as I made it sound, depending on the type system (for example, type inference for the simply-typed lambda-calculus without any annotations requires guessing. One has to guess all the types correctly, otherwise the process becomes slow). It has been done, in a mainstream functional language: OCaml. The code can be re-written for
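As a rough illustration of the "generate only well-typed terms" idea, here is a toy, depth-bounded enumerator for a minimal STLC. All names and simplifications here are mine, not MagicHaskeller's code:

data Ty = TInt | Ty :-> Ty deriving (Eq, Show)
data Tm = Var Int | Lam Ty Tm | App Tm Tm | Lit Int deriving Show

-- gen ctx ty n enumerates terms of type ty in context ctx, up to depth n;
-- every candidate is well typed by construction, so nothing is rejected.
gen :: [Ty] -> Ty -> Int -> [Tm]
gen ctx ty n =
     [ Var i   | (i, t) <- zip [0 ..] ctx, t == ty ]
  ++ [ Lit 0   | ty == TInt ]
  ++ [ Lam a b | n > 0, a :-> r <- [ty], b <- gen (a : ctx) r (n - 1) ]
  ++ [ App f x | n > 0, a <- [TInt]     -- crude guess of argument types
               , f <- gen ctx (a :-> ty) (n - 1)
               , x <- gen ctx a (n - 1) ]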
Categories: Offsite Discussion

Designing a Lisp based on the ideas in "Total Functional Programming".

Haskell on Reddit - Tue, 07/15/2014 - 11:25am

I'm thinking about how one could apply the ideas in this paper to design a unityped language similar to Lisp. There are no algebraic datatypes or pattern matching - just lists, numbers and predicates. The type system would be much simpler. I'm thinking of STLC with only one type, Term, in which Lists and Numbers are embedded, plus a few mathematical primitives and a fold/unfold operator on the List.

data Term
  = Cons Term Term
  | Nil
  | Num Float
  | Lam Term
  | Var Int
  | Fold
  ... etc

fold :: (Term → Term → Term) → Term → Term → Term
fold f i (Cons x xs) = f x (fold f i xs)
fold f i Nil         = i

unfold :: (Term → Term) → Term → Term
unfold f x = go (f x)
  where go (Cons x xs) = Cons x (unfold f xs)
        go x           = x

If I'm not mistaken, the STLC part makes for a total language, and the fold/unfold operators enable most practical algorithms.
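For instance, summing a list of numbers in that language might look like the sketch below, assuming the Term type and fold above (the helper name is mine):

sumTerm :: Term -> Term
sumTerm = fold add (Num 0)
  where add (Num x) (Num y) = Num (x + y)
        add _       _       = Nil   -- ill-formed inputs collapse to Nil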

Questions:

  1. Is that indeed a total language?

  2. Is there any important practical algorithm that would be missing from that language?

  3. Is there a decision algorithm that will find the complexity of an operation before evaluating it?

submitted by SrPeixinho
[link] [20 comments]
Categories: Incoming News

Hackage view source

Haskell on Reddit - Tue, 07/15/2014 - 9:20am

Where did the option to view the source and more detailed documentation go on Hackage? It's no longer on the right and I can't find it.

Edit: I can see it on this page: http://hackage.haskell.org/package/base-4.7.0.0/docs/Prelude.html

But not here. http://hackage.haskell.org/package/com-1.2.3.1

submitted by Tyr42
[link] [8 comments]
Categories: Incoming News

I have an application... but am lost on how to deploy it. bamse? Inno Setup? Help?!

Haskell on Reddit - Tue, 07/15/2014 - 8:26am

At first, in my ultra naivety, I just thought you could take the exe file produced by GHC and give that to someone (without any Haskell-related software on their system (cabal, ghc, etc.)) and they could run it. Well, that was wrong.

I did some searching and the haskell wiki came up and gave me a list of things that could create windows installers.

(http://www.haskell.org/haskellwiki/Windows#Shipping_Installable_Applications)

I first tried bamse (http://hackage.haskell.org/package/bamse) because I figured it would be the easiest seeing as I could install it with cabal. It failed the install though (failed installing com package).

I next tried Inno Setup, but I don't think I included all the right dependencies or something, because when I tried running the installed application there was this error: "user error (unknown GLUT entry glutInit)"

I tried finding some examples for Haskell applications, because I have no idea what dependencies I have to include, but I had no luck.

If anyone could help in any way, I'd really appreciate it!

edit: I'm using Graphics.Rendering.OpenGL.GL and Graphics.UI.GLUT, which is where I think the error is coming from. I'm also using Data.Sequence, so do I just have to include all those dlls in my Inno Setup thing?

submitted by flippflopp
[link] [11 comments]
Categories: Incoming News

Feature Proposal: GHC Flag for implicit external Prelude

haskell-cafe - Tue, 07/15/2014 - 4:23am
Hey all, I would like to propose a very minor flag to add to GHC. I would like GHC to have a --with-prelude flag, which would specify an alternate Prelude to use instead of the default Haskell prelude. This would have an effect similar to -XNoImplicitPrelude and an additional import MyNewPrelude in the source file. It might be a *little* different semantically, as a qualified import would disable the original implicit import, just like it does with the default Haskell prelude. The benefit this would have is that this would give alternate preludes a more first-class status. Instead of having to import an alternative prelude everywhere, you could just have a ghc-options: --with-prelude=... flag in your *.cabal file, and have a different prelude be used. This is important for my own work, as I highly prefer other preludes for my non-library development; I think this is a feature which will be very useful as Haskell develops and we try to figure out how to get rid of the warts in the current Prelude. In or
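For comparison, this is roughly the per-module workaround available today; the proposed flag would in effect apply it project-wide (MyNewPrelude stands in for whatever alternate prelude is preferred, as in the proposal):

{-# LANGUAGE NoImplicitPrelude #-}
module Example where

import MyNewPrelude  -- the proposed --with-prelude flag would make this
                     -- import implicit instead of repeating it per module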
Categories: Offsite Discussion

Alessandro Vermeulen: Notes on the Advanced Akka course

Planet Haskell - Tue, 07/15/2014 - 4:00am

The Advanced Akka course is provided by Typesafe and is aimed at teaching advanced usages of Akka. The course covers the basics of Akka, Remoting, Clustering, Routers, CRDTs, Cluster Sharding, and Akka Persistence. The following post starts with a general introduction to Akka and presents the takeaways from the course as we experienced them.

A general overview of Akka

Readers already familiar with Akka can skip this section.

According to the Akka site this is Akka:

Akka is a toolkit and runtime for building highly concurrent, distributed, and fault tolerant event-driven applications on the JVM.

Akka achieves this by using Actors.

Actors are very lightweight concurrent entities.

Each Actor has a corresponding mailbox stored separately from the Actor. The Actors together with their mailboxes reside in an ActorSystem. Additionally, the ActorSystem contains the Dispatcher which executes the handling of a message by an actor. Each Actor only handles a single message at a time.

In Akka everything is remote by design and philosophy. In practice this means that each Actor is identified by its ActorRef. This is a reference to the actor which provides Location Transparency.

Actors communicate with each other by sending messages to another Actor through an ActorRef. This sending of the message takes virtually no time.

In addition to ActorRef there also exists an ActorSelection, which contains a path to one or more actors. Each time a message is sent, the path is traversed until the actor is found (or not). No message is sent back when the actor is not found, however.

States: Started - Stopped - Terminated. If an actor enters the Stopped state it first stops its child actors before entering the Terminated state.

Best-practices

Import the context.dispatcher instead of the global Scala ExecutionContext. It is the ExecutionContext managed by Akka. Using the global context causes the Actors to be run in the global Thread pool.

You should not use PoisonPill as it will be removed from future versions of Akka since it is not specific enough. Roll your own message to make sure the appropriate actions for graceful shutdown are done. Use context.stop to stop your actor.

Place your business logic in a separate trait and mix it in to the actor. This increases testability as you can easily unit test the trait containing the business logic. Also, you should put the creation of any child actors inside a separate method so the creation can be overridden from tests.

Remoting

With the Remoting extension it is possible to communicate with other Actor Systems. This communication is often done through ActorSelections instead of ActorRef.

Remoting uses Java serialisation by default which is slow and fragile in light of changing definitions. It is possible and recommended to use another mechanism such as Google Protobuf.

Clustering

Akka has a simple perspective on cluster management with regards to split-brain scenarios. Nodes become dead when they are observed as dead and they cannot resurrect. The only way a node can come up again is if it registers itself again.

When a net split happens the other nodes are marked as unreachable. When using a Singleton, this means that only the nodes that can reach the singleton will access it. The others will not decide on a new Singleton in order to prevent a split-brain scenario.

Another measure against split-brain is contacting the seed nodes in order; the first seed node is required to be up.

FSM

There is a library for writing finite state machines called FSM. For larger actors it can be useful to use the FSM. Otherwise stick to pure become and unbecome.

FSM also has an interval timer for scheduling messages. However, the use of stay() resets the interval timer, so you could end up never executing what is scheduled at the end of the timer.

Routers

There are two different kinds of routers: Pools and Groups. A Pool is in charge of its own children: they are created and killed by the pool. Groups are configured with an ActorSelection that defines the actors to which the group should send its messages. There are several implementations: Consistent Hash, Random, Round Robin, Broadcast, Scatter-Gather First, and Smallest Mailbox. The names are self-explanatory.

Synchronisation of data with CRDTs

Synchronising data between multiple nodes can be done by choosing your datatype carefully. If the timestamps and events are generated in one place, no duplicate entries occur. Therefore merging a map from a different node into your map is easily done by copying the entries you don't already have into your own data.

This can be implemented by letting each member node broadcast which data-points they have. Each node can then detect which information is lacking and request the specific data from the node that claimed to have the data. At some future point in time all nodes will be in sync. This is called eventual consistency.

Singleton

If you have a singleton cluster manager proxy it only starts when the cluster is formed. A cluster is formed if a member connects. The proxy will then pass on the buffered messages.

Cluster Sharding

Sharding is a way to split up a group of actors in a cluster. This can be useful if the group is too large to fit in the memory of a single machine. The Cluster Sharding feature takes care of the partitioning of the actors using a hash you have to define with a function shardResolver. The sharded actors can be messaged with a unique identifier using ClusterSharding(system).shardRegion("Counter"), which proxies the message to the correct actor. ClusterSharding.start is to sharding what the Manager is to Singletons.

It is recommended to put the sharding functions into a singleton object for easy re-use of your shards, containing the functions to start the sharding extension, proxy to the shard, etc. It is also convenient to add tell and initialise helper functions to respectively send a message and initialise the actor by its unique id.

Akka Persistence

Akka persistence uses a Journal to store which messages were processed. One of the supported storage mechanisms is Cassandra. It is also possible to use a file-based journal which, of course, is not recommended.

In the current version of Akka there are two approaches to persistence: command sourcing and event sourcing. Simply put, in command sourcing each message is first persisted and then offered to the actor to do with as it pleases, whereas in event sourcing only the results of actions are persisted. The latter is preferred and will be the only remaining method in following versions.

Both methods support storing a snapshot of the current state and recovering from it.

Command Sourcing

The main problem with command sourcing lies in that all messages are replayed. This includes requests for information from dead actors, which wastes resources for nothing. Moreover, in case of errors, the last message that killed the actor is also replayed, probably killing the actor again in the process.

Event Sourcing

With event sourcing one only stores state changing events. Events are received by the receiveRecover method. External side-effects should be performed in the receive method. The code for the internal side-effect of the event should be the same in both the receive and receiveRecover methods. The actor or trait for this will be named PersistentActor.

Actor offloading

One can use Akka Persistence to “pause” long living actors, e.g. actors that have seen no activity lately. This frees up memory. When the actor is needed again it can be safely restored from the persistence layer.

Tidbits

Akka 3 is to be released “not super soon”. It will contain typed actors. The consequence of this is that the sender field will be removed from the actor. Therefore, for request-response, the ActorRef should be added to the request itself.

Concluding

The Advanced Akka course gives a lot of insights and concrete examples of how to use the advanced Akka features of clustering, sharding and persisting data across multiple nodes in order to create a system that really is highly available, resilient and scalable. It also touches on the bleeding edge functionalities, the ideas and concepts around it and what to expect next in this growing ecosystem.

Categories: Offsite Blogs

Type-based lift

Haskell on Reddit - Tue, 07/15/2014 - 2:40am
Categories: Incoming News

Howto detect infinite loop?

Haskell on Reddit - Tue, 07/15/2014 - 2:18am
import qualified Data.List as List
import qualified Data.List.Ordered as Lord

-- Lord.member :: Ord a => a -> [a] -> Bool
Lord.member 1 [2..] -- False

-- List.elem :: Eq a => a -> [a] -> Bool
List.elem 1 [2..] -- infinite loop

List.elem must crawl through the whole list, but I cannot tell that from the type signature. Passing an infinite data structure into such a function is obviously an error (here). How do I find out that evaluating a function with some infinite data structure would turn my execution into an infinite loop? Could I tell that from the strictness of the function, and if so, can I find out a function's strictness?
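For intuition, here is roughly why a membership test on a sorted list can bail out early while List.elem cannot. This is a sketch of the idea, not the library's actual code:

memberOrd :: Ord a => a -> [a] -> Bool
memberOrd x = go
  where go (y:ys)
          | y < x     = go ys
          | y == x    = True
          | otherwise = False   -- y > x: x cannot occur later in a sorted list
        go [] = False

-- memberOrd 1 [2..] terminates with False because the first element
-- already exceeds 1; elem has no ordering to exploit, so it keeps looking.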

submitted by mirpa
[link] [17 comments]
Categories: Incoming News

Why no `instance (Monoid a, Applicative f)=> Monoid (f a)` for IO?

glasgow-user - Mon, 07/14/2014 - 11:55pm
It seems to me that this should be true for all `f a` like:

instance (Monoid a, Applicative f)=> Monoid (f a) where
  mappend = liftA2 mappend
  mempty = pure mempty

But I can't seem to find the particular `instance (Monoid a)=> Monoid (IO a)` anywhere. Would that instance be incorrect, or does it live somewhere else? FWIW I noticed this when I started thinking about an instance I wanted for 'contravariant':

instance (Monoid a, Applicative f)=> Monoid (Op (f a) b) where
  mempty = Op $ const $ pure mempty
  mappend (Op f) (Op g) = Op (\b-> liftA2 mappend (f b) (g b))

at which point I realized (I think) all `f a` are monoidal, and so we ought to be able to get the instance above with just a deriving Monoid. Brandon
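Specialised to IO, the instance being asked about would behave like this small sketch (ioMappend is my own name for illustration):

import Control.Applicative (liftA2)
import Data.Monoid (Monoid, mappend)

-- runs both actions in order and combines their results monoidally
ioMappend :: Monoid a => IO a -> IO a -> IO a
ioMappend = liftA2 mappend

-- ghci> ioMappend (return "foo") (return "bar")
-- "foobar"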
Categories: Offsite Discussion

Mark Jason Dominus: Types are theorems; programs are proofs

Planet Haskell - Mon, 07/14/2014 - 6:00pm

[ Summary: I gave a talk Monday night on the Curry-Howard isomorphism; my talk slides are online. ]

I sent several proposals to !!con, a conference of ten-minute talks. One of my proposals was to explain the Curry-Howard isomorphism in ten minutes, but the conference people didn't accept it. They said that they had had four or five proposals for talks about the Curry-Howard isomorphism, and didn't want to accept more than one of them.

The CHI talk they did accept turned out to be very different from the one I had wanted to give; it discussed the Coq theorem-proving system. I had wanted to talk about the basic correspondence between pair types, union types, and function types on the one hand, and reasoning about logical conjunctions, disjunctions, and implications on the other hand, and the !!con speaker didn't touch on this at all.

But mathematical logic and programming language types turn out to be the same! A type in a language like Haskell can be understood as a statement of logic, and the statement will be true if and only if there is actually a value with the corresponding type. Moreover, if you have a proof of the statement, you can convert the proof into a value of the corresponding type, or conversely if you have a value with a certain type you can convert it into a proof of the corresponding statement. The programming language features for constructing or using values with function types correspond exactly to the logical methods for proving or using statements with implications; similarly pair types correspond to logical conjunction, union types to logical disjunction, and exceptions to logical negation. I think this is incredible. I was amazed the first time I heard of it (Chuck Liang told me sometime around 1993) and I'm still amazed.
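A couple of tiny, standard examples of the correspondence (not taken from the talk slides):

-- A ∧ B → A: a proof of "from A and B we get A" is just the first projection
projLeft :: (a, b) -> a
projLeft (a, _) = a

-- (A → B) ∧ (B → C) → (A → C): implication is transitive
trans :: (a -> b, b -> c) -> (a -> c)
trans (f, g) = g . f

-- A ∨ B → (A → C) → (B → C) → C: case analysis on a disjunction
caseOf :: Either a b -> (a -> c) -> (b -> c) -> c
caseOf (Left a)  f _ = f a
caseOf (Right b) _ g = g b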

Happily Philly Lambda, a Philadelphia-area functional programming group, had recently come back to life, so I suggested that I give them a longer talk about the Curry-Howard isomorphism, and they agreed.

I gave the talk yesterday, and the materials are online. I'm not sure how easy they will be to understand without my commentary, but it might be worth a try.

If you're interested and want to look into it in more detail, I suggest you check out Sørensen and Urzyczyn's Lectures on the Curry-Howard Isomorphism. It was published as an expensive yellow-cover book by Springer, but free copies of the draft are still available.

Categories: Offsite Blogs

Danny Gratzer: Examining Hackage: extensible-effects

Planet Haskell - Mon, 07/14/2014 - 6:00pm
Posted on July 15, 2014

I had a few people tell me after my last post that they would enjoy a write up on reading extensible-effects so here goes.

I’m going to document my process of reading through and understanding how extensible-effects is implemented. Since this is a fairly large library (about 1k lines of code), we’re not going over all of it. Rather we’re just reviewing the core modules and enough of the extra ones to get a sense for how everything is implemented.

If you’re curious or still have questions, the modules that we don’t cover should serve as a nice place for further exploration.

Which Modules

extensible-effects comes with quite a few modules, my find query reveals

$ find src -name "*.hs"
src/Data/OpenUnion1.hs
src/Control/Eff/Reader/Strict.hs
src/Control/Eff/Reader/Lazy.hs
src/Control/Eff/Fresh.hs
src/Control/Eff/Cut.hs
src/Control/Eff/Exception.hs
src/Control/Eff/State/Strict.hs
src/Control/Eff/State/Lazy.hs
src/Control/Eff/Writer/Strict.hs
src/Control/Eff/Writer/Lazy.hs
src/Control/Eff/Coroutine.hs
src/Control/Eff/Trace.hs
src/Control/Eff/Choose.hs
src/Control/Eff/Lift.hs
src/Control/Eff.hs

Whew! Well I’m going to take a leap and assume that extensible-effects is similar to the mtl in the sense that there are a few core modules, and then a bunch of “utility” modules. So there’s Control.Monad.Trans and then Control.Monad.State and a bunch of other implementations of MonadTrans.

If we assume extensible-effects is formatted like this, then we need to look at

  1. Data.OpenUnion1
  2. Control.Eff

And maybe a few other modules to get a feel for how to use these two. I’ve added Data.OpenUnion1 because it’s imported by Control.Eff so is presumably important.

Since Data.OpenUnion1 is at the top of our dependency DAG, we’ll start with it.

Data.OpenUnion1

So we’re starting with Data.OpenUnion1. If the authors of this code have stuck to normal Haskell naming conventions, that’s an open union of type constructors, stuff with the kind * -> *.

Happily, this module has an export list so we can at least see what’s public.

module Data.OpenUnion1( Union (..)
                      , SetMember
                      , Member
                      , (:>)
                      , inj
                      , prj
                      , prjForce
                      , decomp
                      , unsafeReUnion
                      ) where

So we’re looking at a data type Union, which we export everything for. Two type classes SetMember and Member, a type operator :>, and a handful of functions, most likely to work with Union.

So let’s figure out exactly what this union thing is

data Union r v = forall t. (Functor t, Typeable1 t) => Union (t v)

So Union r v is just a wrapper around some functor applied to v. This seems a little odd; what’s this r thing? The docs hint that Member t r should always hold.

Member is a type class of two parameters with no members. In fact, grepping the entire source reveals that the entire definition and all instances for Member in this code base are

infixr 1 :>
data ((a :: * -> *) :> b)

class Member t r
instance Member t (t :> r)
instance Member t r => Member t (t' :> r)

So this makes it a bit clearer, :> acts like a type level cons and Member just checks for membership!
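To make that concrete, a small illustration (the effect names are just examples from later modules):

type MyEffects = State Int :> Reader String :> ()

-- Member (State Int)     MyEffects   -- holds: it's the head of the list
-- Member (Reader String) MyEffects   -- holds: found further down the list
-- Member (Writer String) MyEffects   -- no instance: it's not in the list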

Now Union makes a bit more sense, especially in light of the inj function

inj :: (Functor t, Typeable1 t, Member t r) => t v -> Union r v
inj = Union

So Union takes some t in r and hides it away in an existential applied to v. Now this is kinda like having a great nested bunch of Eithers with every t applied to v.

Dual to inj, we can define a projection from a Union to some t in r. This will need to return something wrapped in Maybe since we don’t know which member of r our Union is wrapping.

prj :: (Typeable1 t, Member t r) => Union r v -> Maybe (t v)
prj (Union v) = runId <$> gcast1 (Id v)

prj does some evil Typeable casts, but this is necessary since we’re throwing away all our type information with that existential. That Id runId pair is needed since gcast1 has the type

-- In our case, `c ~ Id`
gcast1 :: (Typeable t', Typeable t) => c (t a) -> Maybe (c (t' a))

They’re just defined as

newtype Id a = Id { runId :: a } deriving Typeable

so just like Control.Monad.Identity.

Now let’s try to figure out what this SetMember thing is.

class Member t r => SetMember set (t :: * -> *) r | r set -> t
instance SetMember set t r => SetMember set t (t' :> r)

This is unhelpful, all we have is the recursive step with no base case! Resorting to grep reveals that our base case is defined in Control.Eff.Lift so we’ll temporarily put this class off until then.

Now the rest of the file is defining a few functions to operate over Unions.

First up is an unsafe “forced” version of prj.

infixl 4 <?>

(<?>) :: Maybe a -> a -> a
Just a <?> _ = a
_      <?> a = a

prjForce :: (Typeable1 t, Member t r) => Union r v -> (t v -> a) -> a
prjForce u f = f <$> prj u <?> error "prjForce with an invalid type"

prjForce is really exactly what it says on the label, it’s a version of prj that throws an exception if we’re in the wrong state of Union.

Next is a way of unsafely rejiggering the type level list that Union is indexed over.

unsafeReUnion :: Union r w -> Union t w
unsafeReUnion (Union v) = Union v

We need this for our last function, decomp. This function partially unfolds our Union into an Either

decomp :: Typeable1 t => Union (t :> r) v -> Either (Union r v) (t v)
decomp u = Right <$> prj u <?> Left (unsafeReUnion u)

This provides a way to actually do some sort of induction on r by breaking out each type piece by piece with some absurd case for when we don’t have a :> b.
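A sketch of what that induction looks like in practice, peeling one effect off at a time (illustrative only; State and Reader are effects we will meet later):

describe :: Union (State Int :> Reader String :> ()) v -> String
describe u = case decomp u of
  Right (State _ _) -> "a State Int effect"
  Left u'           -> case decomp u' of
    Right (Reader _) -> "a Reader String effect"
    Left _           -> "unreachable: no effects are left in the list"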

That about wraps up this little Union library, let’s move on to see how this is actually used.

Control.Eff

Now let’s talk about the core of extensible-effects, Control.Eff. As always we’ll start by taking a look at the export list

module Control.Eff( Eff (..)
                  , VE (..)
                  , Member
                  , SetMember
                  , Union
                  , (:>)
                  , inj
                  , prj
                  , prjForce
                  , decomp
                  , send
                  , admin
                  , run
                  , interpose
                  , handleRelay
                  , unsafeReUnion
                  ) where

So right away we can see that we’re re-exporting stuff from Data.OpenUnion1 as well as several new things, including the infamous Eff.

The first definition we come across in this module is VE. VE is either a simple value or a Union applied to a VE!

data VE r w = Val w | E !(Union r (VE r w))

Right away we notice that “pure value or X” pattern we see with free monads and other abstractions over effects.

We also include a quick function to try to extract a pure value from Vals

fromVal :: VE r w -> w
fromVal (Val w) = w
fromVal _ = error "extensible-effects: fromVal was called on a non-terminal effect."

Now we’ve finally reached the definition of Eff!

newtype Eff r a = Eff { runEff :: forall w. (a -> VE r w) -> VE r w }

So Eff bears a striking resemblance to Cont. There are two critical differences though, first is that we specialize our return type to something constructed with VE r. The second crucial difference is that by universally quantifying over w we sacrifice a lot of the power of Cont, including callCC!
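For comparison, the textbook definition of the continuation monad (in the transformers library Cont is a synonym over ContT, but this is the shape):

newtype Cont r a = Cont { runCont :: (a -> r) -> r }

-- Eff fixes the answer type to VE r w and then hides w behind a forall,
-- which is what rules out tricks like callCC that need to choose w.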

Next in Control.Eff is the instances for Eff

instance Functor (Eff r) where
    fmap f m = Eff $ \k -> runEff m (k . f)
    {-# INLINE fmap #-}

instance Applicative (Eff r) where
    pure = return
    (<*>) = ap

instance Monad (Eff r) where
    return x = Eff $ \k -> k x
    {-# INLINE return #-}

    m >>= f = Eff $ \k -> runEff m (\v -> runEff (f v) k)
    {-# INLINE (>>=) #-}

Notice that these are all really identical to Conts instances. Functor adds a function to the head of the continuation. Monad dereferences m and feeds the result into f. Exactly as with Cont.

Next we can look at our primitive function for handling effects

send :: (forall w. (a -> VE r w) -> Union r (VE r w)) -> Eff r a
send f = Eff (E . f)

I must admit, this tripped me up for a while. Here’s how I read it, “provide a function, which when given a continuation for the rest of the program expecting an a, produces a side effecting VE r w and we’ll map that into Eff”.

Remember how Union holds functors? Well each of our effects must act as a functor and wrap itself in that union. By being open, we get the “extensible” in extensible-effects.

Next we look at how to remove effects once they’ve been added to our set of effects. In mtl-land, this is similar to the collection of runFooT functions that are used to gradually strip a layer of transformers away.

The first step towards this is to transform the CPS-ed effectful computation Eff, into a more manageable form, VE

admin :: Eff r w -> VE r w
admin (Eff m) = m Val

This is a setup step so that we can traverse the “tree” of effects that our Eff monad built up for us.

Next, we know that we can take an Eff with no effects and unwrap it into a pure value. This is the “base case” for running an effectful computation.

run :: Eff () w -> w
run = fromVal . admin

Concerned readers may notice that we’re using a partial function, this is OK since the E case is “morally impossible” since there is no t so that Member t () holds.

Next is the function to remove just one effect from an Eff

handleRelay :: Typeable1 t
            => Union (t :> r) v -- ^ Request
            -> (v -> Eff r a)   -- ^ Relay the request
            -> (t v -> Eff r a) -- ^ Handle the request of type t
            -> Eff r a
handleRelay u loop h = either passOn h $ decomp u
  where passOn u' = send (<$> u') >>= loop

Next to send, this function gave me the most trouble. The trick was to realize that decomp will leave us in one of two cases.

  1. Some effect producing a v, Union r v
  2. A t producing a v, t v

If we have a t v, then we’re all set since we know exactly how to map that to a Eff r a with h.

Otherwise we need to take this effect and add it back into our computation. send (<$> u') takes the rest of the computation, that continuation, and feeds it the v that we know our effects produce. This gives us the type Eff r v, where that outer Eff r contains our most recent effect as well as everything else. Now to convert this to an Eff r a we need to transform that v into an a. The only way to do that is to use the supplied loop function, so we just bind to that.

Last but not least is a function to modify an effect somewhere in our effectful computation. We will see this used later with things like local from Control.Eff.Reader, for example.

To do this we want something like handleRelay but without removing t from r. We also need to generalize the type so that t can be anywhere in r. Otherwise we’d have to prematurely solidify our stack of effects to use something like modify.

interpose :: (Typeable1 t, Functor t, Member t r)
          => Union r v
          -> (v -> Eff r a)
          -> (t v -> Eff r a)
          -> Eff r a
interpose u loop h = maybe (send (<$> u) >>= loop) h $ prj u

Now this is almost identical to handleRelay except instead of using decomp which will split off t and only works when r ~ t :> r', we use prj! This gives us a Maybe and since the type of u doesn’t need to change we just recycle that for the send (<$> u) >>= loop sequence.

That wraps up the core of extensible-effects, and I must admit that when writing this I was still quite confused as to how to actually use Eff to implement new effects. Reading a few examples really helped clear things up for me.

Control.Eff.State

The State monad has always been the sort of classic monad example so I suppose we’ll start here.

module Control.Eff.State.Lazy( State (..)
                             , get
                             , put
                             , modify
                             , runState
                             , evalState
                             , execState
                             ) where

So we’re not reusing the State from Control.Monad.State but providing our own. It looks like

data State s w = State (s -> s) (s -> w)

So what is this supposed to do? Well, that s -> w looks like a continuation of sorts: it takes the state s and produces the resulting value. The s -> s looks like something that modify should use.

Indeed this is the case

modify :: (Typeable s, Member (State s) r) => (s -> s) -> Eff r ()
modify f = send $ \k -> inj $ State f $ \_ -> k ()

put :: (Typeable e, Member (State e) r) => e -> Eff r ()
put = modify . const

we grab the continuation from send and add a State effect on top which uses our modification function f. The continuation that State takes ignores the value it’s passed, the current state, and instead feeds the program computation the () it’s expecting.

get is defined in a similar manner, but instead of modifying the state, we use State’s continuation to feed the program the current state.

get :: (Typeable e, Member (State e) r) => Eff r e
get = send (inj . State id)

So we grab the continuation, feed it to a State id which won’t modify the state, and then inject that into our open union of effects.

Now that we have the API for working with states, let’s look at how to remove that effect.

runState :: Typeable s
         => s                     -- ^ Initial state
         -> Eff (State s :> r) w  -- ^ Effect incorporating State
         -> Eff r (s, w)          -- ^ Effect containing final state and a return value
runState s0 = loop s0 . admin
  where loop s (Val x) = return (s, x)
        loop s (E u)   = handleRelay u (loop s) $
                         \(State t k) -> let s' = t s in loop s' (k s')

runState first preps our effect to be pattern matched on with admin. We then start loop with the initial state.

loop has two components, if we have run into a value, then we don’t interpret any effects, just stick the state and value together and return them.

If we do have an effect, we use handleRelay to split out the State s from our effects. To handle the case where we get a VE w, we just loop with the current state. However, if we get a State t k, we update the state with t and pass the new state s' to the continuation k.

From runState we can derive evalState and execState.

evalState :: Typeable s => s -> Eff (State s :> r) w -> Eff r w
evalState s = fmap snd . runState s

execState :: Typeable s => s -> Eff (State s :> r) w -> Eff r s
execState s = fmap fst . runState s
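To see the whole State interface in action, here is a tiny usage sketch (incr is my own example, not part of the library):

incr :: Member (State Int) r => Eff r Int
incr = do
  n <- get
  put (n + 1 :: Int)   -- record the new state
  return n             -- return the value we saw before incrementing

-- run (runState (0 :: Int) (incr >> incr))  ==  (2, 1)
-- final state 2, and the second incr returns the pre-increment value 1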

That wraps up the interface for Control.Eff.State. The nice bit is this makes it a lot clearer how to use send, handleRelay and a few other functions from the core.

Control.Eff.Reader

Now we’re on to Reader. The interesting thing here is that local highlights how to use interpose properly.

As always, we start by looking at what exactly this module provides

module Control.Eff.Reader.Lazy( Reader (..)
                              , ask
                              , local
                              , reader
                              , runReader
                              ) where

The definition of Reader is refreshingly simple

newtype Reader e v = Reader (e -> v)

Keen readers will note that this is just half of the State definition which makes sense; Reader is half of State.

ask is defined almost identically to get

ask :: (Typeable e, Member (Reader e) r) => Eff r e
ask = send (inj . Reader)

We just feed the continuation for the program into Reader. A simple wrapper over this gives our equivalent of asks

reader :: (Typeable e, Member (Reader e) r) => (e -> a) -> Eff r a
reader f = f <$> ask

Next up is local, which is the most interesting bit of this module.

local :: (Typeable e, Member (Reader e) r)
      => (e -> e) -> Eff r a -> Eff r a
local f m = do
  e <- f <$> ask
  let loop (Val x) = return x
      loop (E u)   = interpose u loop (\(Reader k) -> loop (k e))
  loop (admin m)

So local starts by grabbing the view of the environment we’re interested in, e. From there we define our worker function which looks a lot like runState. The key difference is that instead of using handleRelay we use interpose to replace each Reader effect with the appropriate environment. Remember that interpose is not going to remove Reader from the set of effects, just update each Reader effect in the current computation.

Finally, we simply rejigger the computation with admin and feed it to loop.

In fact, this is very similar to how runReader works!

runReader :: Typeable e => Eff (Reader e :> r) w -> e -> Eff r w
runReader m e = loop (admin m)
  where loop (Val x) = return x
        loop (E u)   = handleRelay u loop (\(Reader k) -> loop (k e))

Control.Eff.Lift

Now between Control.Eff.Reader and Control.Eff.State I felt I had a pretty good handle on most of what I’d read in extensible-effects. There was just one remaining loose end: SetMember. Don’t remember what that was? It was a class in Data.OpenUnion1 that was conspicuously absent of detail or use.

I finally found where it seemed to be used! In Control.Eff.Lift.

First let’s poke at the exports of this module

module Control.Eff.Lift( Lift (..)
                       , lift
                       , runLift
                       ) where

This module is designed to lift an arbitrary monad into the world of effects. There’s a caveat though, since monads aren’t necessarily commutative, the order in which we run them in is very important. Imagine for example the difference between IO (m a) and m (IO a).

So to ensure that Eff can support lifted monads we have to do some evil things. First we must require that we never have two lifted monads and that we always run the monad last. This is a little icky but its usefulness outweighs such ugliness.

To ensure condition 1, we need SetMember.

instance SetMember Lift (Lift m) (Lift m :> ())

So we define a new instance of SetMember. Basically this says that any Lift is a SetMember ... r iff Lift m is the last item in r.

To ensure condition number two we define runLift with the more restrictive type

runLift :: (Monad m, Typeable1 m) => Eff (Lift m :> ()) w -> m w

We can now look into exactly how Lift is defined.

data Lift m v = forall a. Lift (m a) (a -> v)

So this Lift acts sort of like a “suspended bind”. We postpone actually binding the monad and simulate doing so with a continuation a -> v.

We can define our one operation with Lift, lift.

lift :: (Typeable1 m, SetMember Lift (Lift m) r) => m a -> Eff r a
lift m = send (inj . Lift m)

This works by suspending the rest of the program in our faux binding, to be unwrapped later in runLift.

runLift :: (Monad m, Typeable1 m) => Eff (Lift m :> ()) w -> m w
runLift m = loop (admin m)
  where loop (Val x) = return x
        loop (E u)   = prjForce u $ \(Lift m' k) -> m' >>= loop . k

The one interesting difference between this function and the rest of the run functions we’ve seen is that here we use prjForce. The reason for this is that we know that r is just Lift m :> (). This drastically simplifies the process and means all we’re essentially doing is transforming each Lift into >>=.
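A tiny usage sketch to close the loop (helloEff is my own example name, not from the library):

helloEff :: SetMember Lift (Lift IO) r => Eff r ()
helloEff = lift (putStrLn "hello from IO")

-- runLift helloEff :: IO ()
-- runLift pins r to Lift IO :> (), and the suspended binds run in IO.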

That wraps up our tour of the module and with it, extensible-effects.

Wrap Up

This post turned out a lot longer than I’d expected, but I think it was worth it. We’ve gone through the coroutine/continuation based core of extensible-effects and walked through a few different examples of how to actually use them.

If you’re still having some trouble putting the pieces together, the rest of extensible effects is a great collection of useful examples of building effects.

I hope you had as much fun as I did with this one!

Thanks to Erik Rantapaa for reading a much longer post than I led him to believe it would be.

<script type="text/javascript"> /* * * CONFIGURATION VARIABLES: EDIT BEFORE PASTING INTO YOUR WEBPAGE * * */ var disqus_shortname = 'codeco'; // required: replace example with your forum shortname /* * * DON'T EDIT BELOW THIS LINE * * */ (function() { var dsq = document.createElement('script'); dsq.type = 'text/javascript'; dsq.async = true; dsq.src = '//' + disqus_shortname + '.disqus.com/embed.js'; (document.getElementsByTagName('head')[0] || document.getElementsByTagName('body')[0]).appendChild(dsq); })(); </script> <noscript>Please enable JavaScript to view the comments powered by Disqus.</noscript> comments powered by Disqus
Categories: Offsite Blogs

Doctest, quickcheck and typeclass

haskell-cafe - Mon, 07/14/2014 - 5:39pm
Hi all, Is there a way, with doctest and quickcheck, to declare tests at the typeclass level (such as laws) and run them automatically at the instance level? For example:
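For context, the kind of thing being asked for looks roughly like this: a law stated once against the class and checked per instance with QuickCheck. This is a sketch of the goal, not a doctest-level answer:

import Data.Monoid (Monoid, mappend, mempty)
import Test.QuickCheck (quickCheck)

-- a Monoid law written once, against the class
monoidLeftId :: (Eq a, Monoid a) => a -> Bool
monoidLeftId x = mappend mempty x == x

-- checked separately for each instance of interest:
-- ghci> quickCheck (monoidLeftId :: [Int] -> Bool)
-- +++ OK, passed 100 tests.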
Categories: Offsite Discussion

Proposal: add -XPolyKinds to Data.Monoid

libraries list - Mon, 07/14/2014 - 5:37pm
At the moment, the Monoid instance for Data.Proxy.Proxy is only defined for types with kind *.

Prelude Data.Proxy Data.Monoid> (Proxy :: Proxy String) <> Proxy
Proxy
Prelude Data.Proxy Data.Monoid> :set -XDataKinds
Prelude Data.Proxy Data.Monoid> (Proxy :: Proxy 5) <> Proxy

<interactive>:8:20:
    No instance for (Monoid (Proxy 5)) arising from a use of ‘<>’
    In the expression: (Proxy :: Proxy 5) <> Proxy
    In an equation for ‘it’: it = (Proxy :: Proxy 5) <> Proxy

Enabling -XPolyKinds while compiling the Data.Monoid module is the only change necessary to make the Monoid instance for Proxy polykinded like Proxy itself is. I don't believe adding a language pragma would have any other effect on that module, and I believe the only effect of changing that instance would be allowing more correct programs to compile. This is my first proposal to this list, so I'm starting with something that I hope isn't controversial. Let me know if I should do anything different pr
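For reference, the instance in question is roughly the following; with -XPolyKinds enabled the same code becomes kind-polymorphic because Proxy's argument kind generalises:

-- roughly the instance in Data.Monoid (base 4.7); Proxy ignores its argument,
-- so the Monoid structure is trivial
instance Monoid (Proxy s) where
  mempty      = Proxy
  mappend _ _ = Proxy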
Categories: Offsite Discussion

How to walk under binders "for free" in 'bound'?

haskell-cafe - Mon, 07/14/2014 - 4:51pm
Dear list, I'm trying to implement a small dependently typed language (based on [1]) using the generalised De Bruijn indices from the 'bound' library [2]. My main two datatypes are: Representing my two types of binders, and: Representing terms. My problem lies with implementing type-inference for lambda-binders. Originally I started with: Where `toScope . inferType1 ctx' . fromScope` entails doing three subsequent traversals: - `fromScope` traverses the term in `s`, which quotients placements of 'F' distributing them to the leaves - `inferType ctx'` then traverses the result of `fromScope` to infer the type - `toScope` transforms the inferred type to a generalised De Bruijn form, requiring a full tree traversal. Also, the new context, `ctx'`, is defined in term of a traversal (fmap). Also note that, for every Scope I walk under, `ctx'` gets an additional traversal. Doing so many traversals seems/is expensive, so after some thinking I improved `inferType2` to: Where the only traversal needed to get
Categories: Offsite Discussion