News aggregator

Snap projects to learn from

Haskell on Reddit - Mon, 08/11/2014 - 4:05pm

Can you recommend reliable open-source Snap projects?

I have some knowledge of Haskell (mostly gained from Learn You a Haskell), and I have experience creating web apps with Django.

The idea is to see good examples of web development using Haskell that we can learn from. (The focus of my interest would be project structuring, testing and session handling.)

submitted by danbal
[link] [6 comments]
Categories: Incoming News

Dan Burton: Similarities: Monoid, MonadPlus, Category

Planet Haskell - Mon, 08/11/2014 - 3:41pm
This is perhaps obvious to anyone who has thoroughly studied category theory, but the similarities between Monoid, MonadPlus, and Category, have really struck me lately. I’m going to take a smidgeon of artistic license to present this train of thought. … Continue reading →
Categories: Offsite Blogs

How to write reverse of this function?

Haskell on Reddit - Mon, 08/11/2014 - 12:01pm

I made a function in Haskell that takes in a number and returns a list of integers from 1 to that number.

gen x = [1..x]

gen 4 -> [1, 2, 3, 4]

As I was learning about monads, I noticed that I could bind a list into this function, as it has the type (a -> m a).

[2, 3, 6, 1, 4] >>= gen -> [1, 2, 1, 2, 3, 1, 2, 3, 4, 5, 6, 1, 1, 2, 3, 4]

Then I wanted to write a function that would reverse the (>>= gen), and I came up with this function:

revgen (x:xs) = case (null xs) of
    False -> case ((head xs) - x) of
        1         -> 1 + revgen xs
        otherwise -> 0 : revgen xs
    True  -> 1 : []

But it doesn't work, I think because of the revgen returning both numbers and lists. But I can't come up with a way to do this any other way with one function. Also the nested cases look ugly as hell, but I guess they work as intended because at least it compiles.
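For reference, one working shape for such an inverse (a hypothetical sketch, assuming the input really is a concatenation of runs [1..n1] ++ [1..n2] ++ ... as produced by (>>= gen)): a run ends exactly where the next element fails to increase by one.

-- Hypothetical inverse of (>>= gen); assumes well-formed input
revgen :: [Int] -> [Int]
revgen []  = []
revgen [x] = [x]
revgen (x:xs@(y:_))
    | y == x + 1 = revgen xs      -- still climbing inside a run
    | otherwise  = x : revgen xs  -- run peaked at x; record it

revgen [1, 2, 1, 2, 3, 1, 2, 3, 4, 5, 6, 1, 1, 2, 3, 4] -> [2, 3, 6, 1, 4]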

submitted by SpacemanInBikini
[link] [10 comments]
Categories: Incoming News

Updating code to work with vinyl 0.4

haskell-cafe - Mon, 08/11/2014 - 10:43am
Hi,

I'm looking for some help migrating to vinyl-0.4 from an earlier version. I'll first outline how I currently have things set up and then ask a few questions. For reference, the whole project is at [1]. You can just skip to the questions at the end if you want to.

I have a small project which goes out to a few sites, fetches some similar XML data, parses some fields and returns a data type. My original attempt was to generate a data type per site, but that was very ugly: I couldn't reuse functions, accessors, anything. So I then turned to vinyl and have the following solution. I have a bunch of common fields defined like this:

file_url ∷ "file_url" ::: String
file_url = Field

parent_id ∷ "parent_id" ::: Maybe Integer
parent_id = Field

-- and so on.

I'm using HXT to process the XML I get from each site, so I have created many HXT arrows creating each field like this:

parent_idA ∷ (Functor (cat XmlTree), ArrowXml cat) ⇒ cat XmlTree (PlainRec '["parent_id" ::: Maybe Integer]
Categories: Offsite Discussion

OTAS Quant 1: Position Keeping

Haskell on Reddit - Mon, 08/11/2014 - 10:09am
Categories: Incoming News

Additive/Multiplicative Inverse of Algebraic Types?

Haskell on Reddit - Mon, 08/11/2014 - 9:35am

I apologize if this is way off topic, but I've been poring over this for a while and I think this might be a good place to ask: has there been any research on 'inverse types'? Either in Haskell or in general.

I'm trying to learn more about abstract algebra and algebraic data types, and from what I can tell the types form a semiring (though if there's a more accurate/complete structure please let me know!). What I'm wondering is if it's possible to extend this semiring to a full ring, or even a field. I've heard of a Grothendieck construction for groups and suspect it is also applicable to semirings, but I haven't really investigated that area very much.

Additionally I can't seem to find any intuitive expression of the additive or multiplicative inverse of a type offhand, but I'm curious if there even is one. Additive inverses seem to make a little bit of sense perhaps in a resource consumption sort of way, but it's shaky at best. And I have no idea what the reciprocal of a type would represent either, though I did see a 'quotient type' mentioned at the end of the clowns and jokers paper.

submitted by asampson
[link] [30 comments]
Categories: Incoming News

Easy Haskell Profiling

Haskell on Reddit - Mon, 08/11/2014 - 9:19am
Categories: Incoming News

WLP/WFLP - Call for Participation - September 15-17

General haskell list - Mon, 08/11/2014 - 9:17am
Call for Participation

28th Workshop on (Constraint) Logic Programming (WLP 2014)
co-located with
23rd International Workshop on Functional and (Constraint) Logic Programming (WFLP 2014)

Lutherstadt Wittenberg, Germany, September 15 - 17
Early registration deadline: August 26

The international workshops on functional and logic programming aim at bringing together researchers interested in functional programming, logic programming, as well as their integration. The workshops on (constraint) logic programming serve as the scientific forum of the annual meeting of the Society of Logic Programming (GLP e.V.) and bring together researchers interested in logic programming, constraint programming, and related areas like databases, artificial intelligence, and operations research.

Invited Speakers:
  • Michael Hanus: Declarative Multi-paradigm Programming (Tutorial).
  • Sven Thiele: Answer Set Programming for Systems Biology.

Accepted Papers and Preliminary Program: http://dbs.informatik.uni-halle.de/wlp14/prog
Categories: Incoming News

Type families on unboxed types

Haskell on Reddit - Mon, 08/11/2014 - 4:24am

I have:

type family Flip a where
    Flip (a -> b) = b -> a

> :kind! Flip (Int -> Bool)
Flip (Int -> Bool) :: *
= Bool -> Int

but it doesn't work for unboxed values:

> :kind! Flip (Int# -> Bool)
Flip (Int# -> Bool) :: *
= Flip (Int# -> Bool)

> import Data.Type.Equality
> :t Refl :: (Flip (Int# -> Bool) :~: (Bool -> Int#))

<interactive>:1:1: Warning:
    Couldn't match type ‘Flip (Int# -> Bool)’ with ‘Bool -> Int#’
    Expected type: Flip (Int# -> Bool) :~: (Bool -> Int#)
      Actual type: (Bool -> Int#) :~: (Bool -> Int#)
    In the expression: Refl :: Flip (Int# -> Bool) :~: (Bool -> Int#)

Refl :: (Flip (Int# -> Bool) :~: (Bool -> Int#))
  :: (Bool -> Int#) :~: (Bool -> Int#)

Is there any way to fix that?

submitted by haskellthrowaway
[link] [8 comments]
Categories: Incoming News

Safely Composable Type-Specific Languages

Lambda the Ultimate - Mon, 08/11/2014 - 12:27am

Cyrus Omar, Darya Kurilova, Ligia Nistor, Benjamin Chung, Alex Potanin, and Jonathan Aldrich, "Safely Composable Type-Specific Languages", ECOOP 2014.

Programming languages often include specialized syntax for common datatypes (e.g. lists) and some also build in support for specific specialized datatypes (e.g. regular expressions), but user-defined types must use general-purpose syntax. Frustration with this causes developers to use strings, rather than structured data, with alarming frequency, leading to correctness, performance, security, and usability issues. Allowing library providers to modularly extend a language with new syntax could help address these issues. Unfortunately, prior mechanisms either limit expressiveness or are not safely composable: individually unambiguous extensions can still cause ambiguities when used together. We introduce type-specific languages (TSLs): logic associated with a type that determines how the bodies of generic literals, able to contain arbitrary syntax, are parsed and elaborated, hygienically. The TSL for a type is invoked only when a literal appears where a term of that type is expected, guaranteeing non-interference. We give evidence supporting the applicability of this approach and formally specify it with a bidirectionally typed elaboration semantics for the Wyvern programming language.

Categories: Offsite Discussion

Type class with unbox :: Prim -> Prim# method

Haskell on Reddit - Sun, 08/10/2014 - 9:31pm

Is there any way to make a type class that looks something like:

class Unbox ty where
    type Unboxed ty :: #
    unbox :: ty -> Unboxed ty

instance Unbox Int where
    type Unboxed Int = Int#
    unbox (I# i) = i

This fails with an unsurprising error. To make the kind of Unboxed ty be *, I used the fact that (Int -> Int#) :: *; the following version compiles:

class Unbox ty where
    type Unboxed ty
    unbox :: Unboxed ty

instance Unbox Int where
    type Unboxed Int = Int -> Int#
    unbox (I# i) = i

instance Unbox Float where
    type Unboxed Float = Float -> Float#
    unbox (F# f) = f

But isn't actually usable:

> :t unbox
... :: Unbox a => Unboxed a

> :t unbox (5 :: Int)
... :: (Unbox a, Unboxed a ~ (Int -> t)) => t

> :t unbox (5 :: Int) :: Int#

<interactive>:1:1: Warning:
    No instance for (Unbox a0) arising from a use of ‘unbox’
    The type variables ‘k0’, ‘a0’ are ambiguous
    Note: there is a potential instance available:
      instance [overlap ok] Unbox Int
        -- Defined at /tmp/tmp.4JSrL7ihDP.hs:211:10
    In the expression: unbox (5 :: Int) :: Int#

<interactive>:1:1: Warning:
    Couldn't match expected type ‘Int -> Int#’ with actual type ‘Unboxed a0’
    The type variables ‘k0’, ‘a0’ are ambiguous
    The function ‘unbox’ is applied to one argument,
    but its type ‘Unboxed a0’ has none
    In the expression: unbox (5 :: Int) :: Int#

Comments welcome

submitted by haskellthrowaway
[link] [5 comments]
Categories: Incoming News

(Nearly) first-class unboxed values: Retrospect

Haskell on Reddit - Sun, 08/10/2014 - 9:20pm

As described in the paper Unboxed values as first class citizens in a non-strict functional language, GHC allows user-defined data types to contain unboxed values.

This seems to have worked great in Haskell and allows adding optimizations like -ffast-math as a library. The paper appeared in 1991; have any other languages adopted this view of unboxed values? If not, is it because it only benefits non-strict languages? Something else?

submitted by haskellthrowaway
[link] [2 comments]
Categories: Incoming News

Irritating Referential Transparency behavior?

haskell-cafe - Sun, 08/10/2014 - 7:20pm
Hello all,

I came across something strange. This code compiles:

------------------------------------------------------------
data LProcess a = LP {lstep :: Writer [String] (Event a -> ([Event a], LProcess a))}
------------------------------------------------------------

exProcess2 :: ProcState -> Location -> LProcess EvtData
exProcess2 pState loc = LP {lstep = xxx}
  where
    xxx = myStep pState
    myStep _ = return $ \(E t l _) -> ([E (t+1) l Wakeup], (proc' pState))
    proc' state = exProcess2 (trc state) loc

but when I replace xxx by myStep pState, as in

exProcess2 pState loc = LP {lstep = myStep pState}
  where
    xxx = myStep pState
    myStep _ = return $ \(E t l _) -> ([E (t+1) l Wakeup], (proc' pState))
    proc' state = exProcess2 (trc state) loc

I get

    No instance for (Monad m0) arising from a use of `myStep'
    The type variable `m0' is ambiguous
    Possible fix:
      add a type signature that fixes these type variable(s) I
Categories: Offsite Discussion

Functional Jobs: Big Data Engineer / Data Scientist at Recruit IT (Full-time)

Planet Haskell - Sun, 08/10/2014 - 6:32pm
  • Are you a Big Data Engineer who wants to work on innovative cloud and real-time data analytic technologies?
  • Do you have a passion for turning data into meaningful information?
  • Does working on a world-class big data project excite you?

Our client is currently in a growth phase and looking for passionate and creative Data Scientists who can design, develop, and implement robust, scalable big data solutions. This is a role where you will need to enjoy being on the cusp of emerging technologies, and have a genuine interest in breaking new ground.

Your skills and experience will cover the majority of the following:

  • Experience working across real-time data analytics, machine learning, and big data solutions
  • Experience working with large data sets and cloud clusters
  • Experience with various NoSQL technologies and Big Data platforms including; Hadoop, Cassandra, HBASE, Accumulo, and MapReduce
  • Experience with various functional programming languages including; Scala, R, Clojure, Erlang, F#, Caml, Haskell, Common Lisp, or Scheme

This is an excellent opportunity for someone who is interested in a change in lifestyle, and where you would be joining other similar experienced professionals!

New Zealand awaits!

Get information on how to apply for this position.

Categories: Offsite Blogs

Gabriel Gonzalez: Equational reasoning at scale

Planet Haskell - Sun, 08/10/2014 - 6:13pm

Haskell programmers care about the correctness of their software and they specify correctness conditions in the form of equations that their code must satisfy. They can then verify the correctness of these equations using equational reasoning to prove that the abstractions they build are sound. To an outsider this might seem like a futile, academic exercise: proving the correctness of small abstractions is difficult, so what hope do we have to prove larger abstractions correct? This post explains how to do precisely that: scale proofs to large and complex abstractions.

Purely functional programming uses composition to scale programs, meaning that:

  • We build small components that we can verify correct in isolation
  • We compose smaller components into larger components

If you saw "components" and thought "functions", think again! We can compose things that do not even remotely resemble functions, such as proofs! In fact, Haskell programmers prove large-scale properties exactly the same way we build large-scale programs:

  • We build small proofs that we can verify correct in isolation
  • We compose smaller proofs into larger proofs

The following sections illustrate in detail how this works in practice, using Monoids as the running example. We will prove the Monoid laws for simple types and work our way up to proving the Monoid laws for much more complex types. Along the way we'll learn how to keep the proof complexity flat as the types grow in size.

Monoids

Haskell's Prelude provides the following Monoid type class:

class Monoid m where
    mempty  :: m
    mappend :: m -> m -> m

-- An infix operator equivalent to `mappend`
(<>) :: Monoid m => m -> m -> m
x <> y = mappend x y

... and all Monoid instances must obey the following three laws:

mempty <> x = x -- Left identity

x <> mempty = x -- Right identity

(x <> y) <> z = x <> (y <> z) -- Associativity

For example, Ints form a Monoid:

-- See "Appendix A" for some caveats
instance Monoid Int where
    mempty  = 0
    mappend = (+)

... and the Monoid laws for Ints are just the laws of addition:

0 + x = x

x + 0 = x

(x + y) + z = x + (y + z)

Now we can use (<>) and mempty instead of (+) and 0:

>>> 4 <> 2
6
>>> 5 <> mempty <> 5
10

This appears useless at first glance. We already have (+) and 0, so why are we using the Monoid operations?

Extending Monoids

Well, what if I want to combine things other than Ints, like pairs of Ints? I want to be able to write code like this:

>>> (1, 2) <> (3, 4)
(4, 6)

Well, that seems mildly interesting. Let's try to define a Monoid instance for pairs of Ints:

instance Monoid (Int, Int) where
    mempty = (0, 0)
    mappend (x1, y1) (x2, y2) = (x1 + x2, y1 + y2)

Now my wish is true and I can "add" binary tuples together using (<>) and mempty:

>>> (1, 2) <> (3, 4)
(4, 6)
>>> (1, 2) <> (3, mempty) <> (mempty, 4)
(4, 6)
>>> (1, 2) <> mempty <> (3, 4)
(4, 6)

However, I still haven't proven that this new Monoid instance obeys the Monoid laws. Fortunately, this is a very simple proof.

I'll begin with the first Monoid law, which requires that:

mempty <> x = x

We will begin from the left-hand side of the equation and try to arrive at the right-hand side by substituting equals-for-equals (a.k.a. "equational reasoning"):

-- Left-hand side of the equation
mempty <> x

-- x <> y = mappend x y
= mappend mempty x

-- `mempty = (0, 0)`
= mappend (0, 0) x

-- Define: x = (xL, xR), since `x` is a tuple
= mappend (0, 0) (xL, xR)

-- mappend (x1, y1) (x2, y2) = (x1 + x2, y1 + y2)
= (0 + xL, 0 + xR)

-- 0 + x = x
= (xL, xR)

-- x = (xL, xR)
= x

The proof for the second Monoid law is symmetric:

-- Left-hand side of the equation
x <> mempty

-- x <> y = mappend x y
= mappend x mempty

-- mempty = (0, 0)
= mappend x (0, 0)

-- Define: x = (xL, xR), since `x` is a tuple
= mappend (xL, xR) (0, 0)

-- mappend (x1, y1) (x2, y2) = (x1 + x2, y1 + y2)
= (xL + 0, xR + 0)

-- x + 0 = x
= (xL, xR)

-- x = (xL, xR)
= x

The third Monoid law requires that (<>) is associative:

(x <> y) <> z = x <> (y <> z)

Again I'll begin from the left side of the equation:

-- Left-hand side
(x <> y) <> z

-- x <> y = mappend x y
= mappend (mappend x y) z

-- x = (xL, xR)
-- y = (yL, yR)
-- z = (zL, zR)
= mappend (mappend (xL, xR) (yL, yR)) (zL, zR)

-- mappend (x1, y1) (x2 , y2) = (x1 + x2, y1 + y2)
= mappend (xL + yL, xR + yR) (zL, zR)

-- mappend (x1, y1) (x2 , y2) = (x1 + x2, y1 + y2)
= ((xL + yL) + zL, (xR + yR) + zR)

-- (x + y) + z = x + (y + z)
= (xL + (yL + zL), xR + (yR + zR))

-- mappend (x1, y1) (x2, y2) = (x1 + x2, y1 + y2)
= mappend (xL, xR) (yL + zL, yR + zR)

-- mappend (x1, y1) (x2, y2) = (x1 + x2, y1 + y2)
= mappend (xL, xR) (mappend (yL, yR) (zL, zR))

-- x = (xL, xR)
-- y = (yL, yR)
-- z = (zL, zR)
= mappend x (mappend y z)

-- x <> y = mappend x y
= x <> (y <> z)

That completes the proof of the three Monoid laws, but I'm not satisfied with these proofs.

Generalizing proofs

I don't like the above proofs because they are disposable, meaning that I cannot reuse them to prove other properties of interest. I'm a programmer, so I loathe busy work and unnecessary repetition, both for code and proofs. I would like to find a way to generalize the above proofs so that I can use them in more places.

We improve proof reuse in the same way that we improve code reuse. To see why, consider the following sort function:

sort :: [Int] -> [Int]

This sort function is disposable because it only works on Ints. For example, I cannot use the above function to sort a list of Doubles.

Fortunately, programming languages with generics let us generalize sort by parametrizing sort on the element type of the list:

sort :: Ord a => [a] -> [a]

That type says that we can call sort on any list of as, so long as the type a implements the Ord type class (a comparison interface). This works because sort doesn't really care whether or not the elements are Ints; sort only cares if they are comparable.
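For example, the generalized sort from Data.List handles both element types:

>>> import Data.List (sort)
>>> sort [3, 1, 2 :: Int]
[1,2,3]
>>> sort [3.0, 1.0, 2.0 :: Double]
[1.0,2.0,3.0]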

Similarly, we can make the proof more "generic". If we inspect the proof closely, we will notice that we don't really care whether or not the tuple contains Ints. The only Int-specific properties we use in our proof are:

0 + x = x

x + 0 = x

(x + y) + z = x + (y + z)

However, these properties hold true for all Monoids, not just Ints. Therefore, we can generalize our Monoid instance for tuples by parametrizing it on the type of each field of the tuple:

instance (Monoid a, Monoid b) => Monoid (a, b) where
    mempty = (mempty, mempty)

    mappend (x1, y1) (x2, y2) = (mappend x1 x2, mappend y1 y2)

The above Monoid instance says that we can combine tuples so long as we can combine their individual fields. Our original Monoid instance was just a special case of this instance where both the a and b types are Ints.

Note: The mempty and mappend on the left-hand side of each equation are for tuples. The memptys and mappends on the right-hand side of each equation are for the types a and b. Haskell overloads type class methods like mempty and mappend to work on any type that implements the Monoid type class, and the compiler distinguishes them by their inferred types.

We can similarly generalize our original proofs, too, by just replacing the Int-specific parts with their more general Monoid counterparts.

Here is the generalized proof of the left identity law:

-- Left-hand side of the equation
mempty <> x

-- x <> y = mappend x y
= mappend mempty x

-- `mempty = (mempty, mempty)`
= mappend (mempty, mempty) x

-- Define: x = (xL, xR), since `x` is a tuple
= mappend (mempty, mempty) (xL, xR)

-- mappend (x1, y1) (x2, y2) = (mappend x1 x2, mappend y1 y2)
= (mappend mempty xL, mappend mempty xR)

-- Monoid law: mappend mempty x = x
= (xL, xR)

-- x = (xL, xR)
= x

... the right identity law:

-- Left-hand side of the equation
x <> mempty

-- x <> y = mappend x y
= mappend x mempty

-- mempty = (mempty, mempty)
= mappend x (mempty, mempty)

-- Define: x = (xL, xR), since `x` is a tuple
= mappend (xL, xR) (mempty, mempty)

-- mappend (x1, y1) (x2, y2) = (mappend x1 x2, mappend y1 y2)
= (mappend xL mempty, mappend xR mempty)

-- Monoid law: mappend x mempty = x
= (xL, xR)

-- x = (xL, xR)
= x

... and the associativity law:

-- Left-hand side
(x <> y) <> z

-- x <> y = mappend x y
= mappend (mappend x y) z

-- x = (xL, xR)
-- y = (yL, yR)
-- z = (zL, zR)
= mappend (mappend (xL, xR) (yL, yR)) (zL, zR)

-- mappend (x1, y1) (x2 , y2) = (mappend x1 x2, mappend y1 y2)
= mappend (mappend xL yL, mappend xR yR) (zL, zR)

-- mappend (x1, y1) (x2 , y2) = (mappend x1 x2, mappend y1 y2)
= (mappend (mappend xL yL) zL, mappend (mappend xR yR) zR)

-- Monoid law: mappend (mappend x y) z = mappend x (mappend y z)
= (mappend xL (mappend yL zL), mappend xR (mappend yR zR))

-- mappend (x1, y1) (x2, y2) = (mappend x1 x2, mappend y1 y2)
= mappend (xL, xR) (mappend yL zL, mappend yR zR)

-- mappend (x1, y1) (x2, y2) = (mappend x1 x2, mappend y1 y2)
= mappend (xL, xR) (mappend (yL, yR) (zL, zR))

-- x = (xL, xR)
-- y = (yL, yR)
-- z = (zL, zR)
= mappend x (mappend y z)

-- x <> y = mappend x y
= x <> (y <> z)

This more general Monoid instance lets us stick any Monoids inside the tuple fields and we can still combine the tuples. For example, lists form a Monoid:

-- Exercise: Prove the monoid laws for lists
instance Monoid [a] where
    mempty  = []

    mappend = (++)

... so we can stick lists inside the right field of each tuple and still combine them:

>>> (1, [2, 3]) <> (4, [5, 6])
(5, [2, 3, 5, 6])
>>> (1, [2, 3]) <> (4, mempty) <> (mempty, [5, 6])
(5, [2, 3, 5, 6])
>>> (1, [2, 3]) <> mempty <> (4, [5, 6])
(5, [2, 3, 5, 6])

Why, we can even stick yet another tuple inside the right field and still combine them:

>>> (1, (2, 3)) <> (4, (5, 6))
(5, (7, 9))

We can try even more exotic permutations and everything still "just works":

>>> ((1, [2, 3]), ([4, 5], 6)) <> ((7, [8, 9]), ([10, 11], 12))
((8, [2, 3, 8, 9]), ([4, 5, 10, 11], 18))

This is our first example of a "scalable proof". We began from three primitive building blocks:

  • Int is a Monoid
  • [a] is a Monoid
  • (a, b) is a Monoid if a is a Monoid and b is a Monoid

... and we connected those three building blocks to assemble a variety of new Monoid instances. No matter how many tuples we nest the result is still a Monoid and obeys the Monoid laws. We don't need to re-prove the Monoid laws every time we assemble a new permutation of these building blocks.

However, these building blocks are still pretty limited. What other useful things can we combine to build new Monoids?

IO

We're so used to thinking of Monoids as data, so let's define a new Monoid instance for something entirely un-data-like:

-- See "Appendix A" for some caveats
instance Monoid b => Monoid (IO b) where
    mempty = return mempty

    mappend io1 io2 = do
        a1 <- io1
        a2 <- io2
        return (mappend a1 a2)

The above instance says: "If b is a Monoid, then an IO action that returns a b is also a Monoid". Let's test this using the getLine function from the Prelude:

-- Read one line of input from stdin
getLine :: IO String

String is a Monoid, since a String is just a list of characters, so we should be able to mappend multiple getLine statements together. Let's see what happens:

>>> getLine -- Reads one line of input
Hello<Enter>
"Hello"
>>> getLine <> getLine
ABC<Enter>
DEF<Enter>
"ABCDEF"
>>> getLine <> getLine <> getLine
1<Enter>
23<Enter>
456<Enter>
"123456"

Neat! When we combine multiple commands we combine their effects and their results.

Of course, we don't have to limit ourselves to reading strings. We can use readLn from the Prelude to read in anything that implements the Read type class:

-- Parse a `Read`able value from one line of stdin
readLn :: Read a => IO a

All we have to do is tell the compiler which type a we intend to Read by providing a type signature:

>>> readLn :: IO (Int, Int)
(1, 2)<Enter>
(1,2)
>>> readLn <> readLn :: IO (Int, Int)
(1,2)<Enter>
(3,4)<Enter>
(4,6)
>>> readLn <> readLn <> readLn :: IO (Int, Int)
(1,2)<Enter>
(3,4)<Enter>
(5,6)<Enter>
(9,12)

This works because:

  • Int is a Monoid
  • Therefore, (Int, Int) is a Monoid
  • Therefore, IO (Int, Int) is a Monoid

Or let's flip things around and nest IO actions inside of a tuple:

>>> let ios = (getLine, readLn) :: (IO String, IO (Int, Int))
>>> let (getLines, readLns) = ios <> ios <> ios
>>> getLines
1<Enter>
23<Enter>
456<Enter>
"123456"
>>> readLns
(1,2)<Enter>
(3,4)<Enter>
(5,6)<Enter>
(9,12)

We can very easily reason that the type (IO String, IO (Int, Int)) obeys the Monoid laws because:

  • String is a Monoid
  • If String is a Monoid then IO String is also a Monoid
  • Int is a Monoid
  • If Int is a Monoid, then (Int, Int) is also a Monoid
  • If (Int, Int) is a Monoid, then IO (Int, Int) is also a Monoid
  • If IO String is a Monoid and IO (Int, Int) is a Monoid, then (IO String, IO (Int, Int)) is also a Monoid

However, we don't really have to reason about this at all. The compiler will automatically assemble the correct Monoid instance for us. The only thing we need to verify is that the primitive Monoid instances obey the Monoid laws, and then we can trust that any larger Monoid instance the compiler derives will also obey the Monoid laws.

The Unit Monoid

The Haskell Prelude also provides the putStrLn function, which echoes a String to standard output with a newline:

putStrLn :: String -> IO ()

Is putStrLn combinable? There's only one way to find out!

>>> putStrLn "Hello" <> putStrLn "World"
Hello
World

Interesting, but why does that work? Well, let's look at the types of the commands we are combining:

putStrLn "Hello" :: IO ()
putStrLn "World" :: IO ()

Well, we said that IO b is a Monoid if b is a Monoid, and b in this case is () (pronounced "unit"), which you can think of as an "empty tuple". Therefore, () must form a Monoid of some sort, and if we dig into Data.Monoid, we will discover the following Monoid instance:

-- Exercise: Prove the monoid laws for `()`
instance Monoid () where
    mempty = ()

    mappend () () = ()

This says that empty tuples form a trivial Monoid, since there's only one possible value (ignoring bottom) for an empty tuple: (). Therefore, we can derive that IO () is a Monoid because () is a Monoid.

Functions

Alright, so we can combine putStrLn "Hello" with putStrLn "World", but can we combine naked putStrLn functions?

>>> (putStrLn <> putStrLn) "Hello"
Hello
Hello

Woah, how does that work?

We never wrote a Monoid instance for the type String -> IO (), yet somehow the compiler magically accepted the above code and produced a sensible result.

This works because of the following Monoid instance for functions:

instance Monoid b => Monoid (a -> b) where
    mempty = \_ -> mempty

    mappend f g = \a -> mappend (f a) (g a)

This says: "If b is a Monoid, then any function that returns a b is also a Monoid".

The compiler then deduced that:

  • () is a Monoid
  • If () is a Monoid, then IO () is also a Monoid
  • If IO () is a Monoid then String -> IO () is also a Monoid

The compiler is a trusted friend, deducing Monoid instances we never knew existed.

Monoid plugins

Now we have enough building blocks to assemble a non-trivial example. Let's build a key logger with a Monoid-based plugin system.

The central scaffold of our program is a simple main loop that echoes characters from standard input to standard output:

main = do
    hSetEcho stdin False
    forever $ do
        c <- getChar
        putChar c

However, we would like to intercept key strokes for nefarious purposes, so we will slightly modify this program to install a handler at the beginning of the program that we will invoke on every incoming character:

install :: IO (Char -> IO ())
install = ???

main = do
    hSetEcho stdin False
    handleChar <- install
    forever $ do
        c <- getChar
        handleChar c
        putChar c

Notice that the type of install is exactly the correct type to be a Monoid:

  • () is a Monoid
  • Therefore, IO () is also a Monoid
  • Therefore Char -> IO () is also a Monoid
  • Therefore IO (Char -> IO ()) is also a Monoid

Therefore, we can combine key logging plugins together using Monoid operations. Here is one such example:

type Plugin = IO (Char -> IO ())

logTo :: FilePath -> Plugin
logTo filePath = do
    handle <- openFile filePath WriteMode
    hSetBuffering handle NoBuffering
    return (hPutChar handle)

main = do
    hSetEcho stdin False
    handleChar <- logTo "file1.txt" <> logTo "file2.txt"
    forever $ do
        c <- getChar
        handleChar c
        putChar c

Now, every key stroke will be recorded to both file1.txt and file2.txt. Let's confirm that this works as expected:

$ ./logger
Test<Enter>
ABC<Enter>
42<Enter>
<Ctrl-C>
$ cat file1.txt
Test
ABC
42
$ cat file2.txt
Test
ABC
42

Try writing your own Plugins and mixing them in with (<>) to see what happens. "Appendix C" contains the complete code for this section so you can experiment with your own Plugins.
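For example, here is one hypothetical Plugin of my own (not from the post) that ignores files entirely and echoes every keystroke to standard error:

-- Hypothetical plugin: echo every keystroke to stderr
logToStderr :: Plugin
logToStderr = return (hPutChar stderr)

Mixing it in is just another (<>):

handleChar <- logTo "file1.txt" <> logToStderr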

Applicatives

Notice that I never actually proved the Monoid laws for the following two Monoid instances:

instance Monoid b => Monoid (a -> b) where
    mempty = \_ -> mempty
    mappend f g = \a -> mappend (f a) (g a)

instance Monoid a => Monoid (IO a) where
    mempty = return mempty

    mappend io1 io2 = do
        a1 <- io1
        a2 <- io2
        return (mappend a1 a2)

The reason why is that they are both special cases of a more general pattern. We can detect the pattern if we rewrite both of them to use the pure and liftA2 functions from Control.Applicative:

import Control.Applicative (pure, liftA2)

instance Monoid b => Monoid (a -> b) where
    mempty = pure mempty

    mappend = liftA2 mappend

instance Monoid b => Monoid (IO b) where
    mempty = pure mempty

    mappend = liftA2 mappend

This works because both IO and functions implement the following Applicative interface:

class Functor f => Applicative f where
    pure  :: a -> f a
    (<*>) :: f (a -> b) -> f a -> f b

-- Lift a binary function over the functor `f`
liftA2 :: Applicative f => (a -> b -> c) -> f a -> f b -> f c
liftA2 f x y = (pure f <*> x) <*> y

... and all Applicative instances must obey several Applicative laws:

pure id <*> v = v

((pure (.) <*> u) <*> v) <*> w = u <*> (v <*> w)

pure f <*> pure x = pure (f x)

u <*> pure y = pure (\f -> f y) <*> u

These laws may seem a bit ad hoc, but this paper explains that you can reorganize the Applicative class into this equivalent type class:

class Functor f => Monoidal f where
    unit :: f ()
    (#)  :: f a -> f b -> f (a, b)

Then the corresponding laws become much more symmetric:

fmap snd (unit # x) = x -- Left identity

fmap fst (x # unit) = x -- Right identity

fmap assoc ((x # y) # z) = x # (y # z) -- Associativity
  where
    assoc ((a, b), c) = (a, (b, c))

fmap (f *** g) (x # y) = fmap f x # fmap g y -- Naturality
  where
    (f *** g) (a, b) = (f a, g b)

I personally prefer the Monoidal formulation, but you go to war with the army you have, so we will use the Applicative type class for this post.
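To make the connection concrete, here is a small sketch (mine, not from the paper) showing that any Applicative already supplies the Monoidal operations; unitA and pairA are hypothetical names standing in for unit and (#):

import Control.Applicative (liftA2)

unitA :: Applicative f => f ()
unitA = pure ()

pairA :: Applicative f => f a -> f b -> f (a, b)
pairA = liftA2 (,)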

All Applicatives possess a very powerful property: they can all automatically lift Monoid operations using the following instance:

instance (Applicative f, Monoid b) => Monoid (f b) where
    mempty = pure mempty

    mappend = liftA2 mappend

This says: "If f is an Applicative and b is a Monoid, then f b is also a Monoid." In other words, we can automatically extend any existing Monoid with some new feature f and get back a new Monoid.

Note: The above instance is bad Haskell because it overlaps with other type class instances. In practice we have to duplicate the above code once for each Applicative. Also, for some Applicatives we may want a different Monoid instance.

We can prove that the above instance obeys the Monoid laws without knowing anything about f and b, other than the fact that f obeys the Applicative laws and b obeys the Monoid laws. These proofs are a little long, so I've included them in Appendix B.

Both IO and functions implement the Applicative type class:

instance Applicative IO where
    pure = return

    iof <*> iox = do
        f <- iof
        x <- iox
        return (f x)

instance Applicative ((->) a) where
    pure x = \_ -> x

    kf <*> kx = \a ->
        let f = kf a
            x = kx a
        in  f x

This means that we can kill two birds with one stone. Every time we prove the Applicative laws for some functor F:

instance Applicative F where ...

... we automatically prove that the following Monoid instance is correct for free:

instance Monoid b => Monoid (F b) where
    mempty = pure mempty

    mappend = liftA2 mappend

In the interest of brevity, I will skip the proofs of the Applicative laws, but I may cover them in a subsequent post.

The beauty of Applicative Functors is that every new Applicative instance we discover adds a new building block to our Monoid toolbox, and Haskell programmers have already discovered lots of Applicative Functors.

Revisiting tuples

One of the very first Monoid instances we wrote was:

instance (Monoid a, Monoid b) => Monoid (a, b) where
    mempty = (mempty, mempty)

    mappend (x1, y1) (x2, y2) = (mappend x1 x2, mappend y1 y2)

Check this out:

instance (Monoid a, Monoid b) => Monoid (a, b) where
    mempty = pure mempty

    mappend = liftA2 mappend

This Monoid instance is yet another special case of the Applicative pattern we just covered!

This works because of the following Applicative instance in Control.Applicative:

instance Monoid a => Applicative ((,) a) where
    pure b = (mempty, b)

    (a1, f) <*> (a2, x) = (mappend a1 a2, f x)

This instance obeys the Applicative laws (proof omitted), so our Monoid instance for tuples is automatically correct, too.
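As a quick sanity check (my own expansion, in the style of the earlier proofs), the Applicative-based mappend for tuples reduces to the original field-wise definition:

liftA2 mappend (x1, y1) (x2, y2)

-- liftA2 f x y = (pure f <*> x) <*> y
= (pure mappend <*> (x1, y1)) <*> (x2, y2)

-- pure b = (mempty, b)
= ((mempty, mappend) <*> (x1, y1)) <*> (x2, y2)

-- (a1, f) <*> (a2, x) = (mappend a1 a2, f x)
= (mappend mempty x1, mappend y1) <*> (x2, y2)

-- Monoid law: mappend mempty x = x
= (x1, mappend y1) <*> (x2, y2)

-- (a1, f) <*> (a2, x) = (mappend a1 a2, f x)
= (mappend x1 x2, mappend y1 y2)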

Composing applicatives

In the very first section I wrote:

Haskell programmers prove large-scale properties exactly the same way we build large-scale programs:

  • We build small proofs that we can verify correct in isolation
  • We compose smaller proofs into larger proofs

I don't like to use the word compose lightly. In the context of category theory, compose has a very rigorous meaning, indicating composition of morphisms in some category. This final section will show that we can actually compose Monoid proofs in a very rigorous sense of the word.

We can define a category of Monoid proofs: the objects are proofs that particular types obey the Monoid laws, and the morphisms are Applicatives, each of which transforms a proof that some type b is a Monoid into a proof that f b is a Monoid.

So in our Plugin example, we began from the proof that () was a Monoid and then composed three Applicative morphisms to prove that Plugin was a Monoid. I will use the following diagram to illustrate this:

+-----------------------+
|                       |
|  Legend:  * = Object  |
|                       |
|     v                 |
|     |   = Morphism    |
|     v                 |
|                       |
+-----------------------+

* `()` is a `Monoid`

v
| IO
v

* `IO ()` is a `Monoid`

v
| ((->) String)
v

* `String -> IO ()` is a `Monoid`

v
| IO
v

* `IO (String -> IO ())` (i.e. `Plugin`) is a `Monoid`

Therefore, we were literally composing proofs together.

Conclusion

You can equationally reason at scale by decomposing larger proofs into smaller reusable proofs, the same way we decompose programs into smaller and more reusable components. There is no limit to how many proofs you can compose together, and therefore there is no limit to how complex of a program you can tame using equational reasoning.

This post only gave one example of composing proofs within Haskell. The more you learn the language, the more examples of composable proofs you will encounter. Another common example is automatically deriving Monad proofs by composing monad transformers.

As you learn Haskell, you will discover that the hard part is not proving things. Rather, the challenge is learning how to decompose proofs into smaller proofs, and you can cultivate this skill by studying category theory and abstract algebra. These mathematical disciplines teach you how to extract common and reusable proofs and patterns from what appears to be disposable and idiosyncratic code.

Appendix A - Missing Monoid instances

These Monoid instances from this post do not actually appear in the Haskell standard library:

instance Monoid b => Monoid (IO b)

instance Monoid Int

The first instance was recently proposed here on the Glasgow Haskell Users mailing list. However, in the short term you can work around its absence by writing the lifted Monoid instances by hand, inserting a sufficient number of pures and liftA2s.

For example, suppose we wanted to provide a Monoid instance for Plugin. We would just newtype Plugin and write:

newtype Plugin = Plugin { install :: IO (String -> IO ()) }

instance Monoid Plugin where
    mempty = Plugin (pure (pure (pure mempty)))

    mappend (Plugin p1) (Plugin p2) =
        Plugin (liftA2 (liftA2 (liftA2 mappend)) p1 p2)

This is exactly the instance the compiler would otherwise have derived for us.

Alternatively, you could define an orphan Monoid instance for IO, but this is generally frowned upon.

There is no default Monoid instance for Int because there are actually two possible instances to choose from:

-- Alternative #1
instance Monoid Int where
    mempty = 0

    mappend = (+)

-- Alternative #2
instance Monoid Int where
    mempty = 1

    mappend = (*)

So instead, Data.Monoid sidesteps the issue by providing two newtypes to distinguish which instance we prefer:

newtype Sum a = Sum { getSum :: a }

instance Num a => Monoid (Sum a)

newtype Product a = Product { getProduct :: a}

instance Num a => Monoid (Product a)
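For example, the newtype selects the instance at the use site:

>>> getSum (Sum 2 <> Sum 3)
5
>>> getProduct (Product 2 <> Product 3)
6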

An even better solution is to use a semiring, which allows two Monoid instances to coexist for the same type. You can think of Haskell's Num class as an approximation of the semiring class:

class Num a where
    fromInteger :: Integer -> a

    (+) :: a -> a -> a

    (*) :: a -> a -> a

    -- ... and other operations unrelated to semirings

Note that we can also lift the Num class over the Applicative class, exactly the same way we lifted the Monoid class. Here's the code:

instance (Applicative f, Num a) => Num (f a) where
    fromInteger n = pure (fromInteger n)

    (+) = liftA2 (+)

    (*) = liftA2 (*)

    (-) = liftA2 (-)

    negate = fmap negate

    abs = fmap abs

    signum = fmap signum

This lifting guarantees that if a obeys the semiring laws then so will f a. Of course, you will have to specialize the above instance to every concrete Applicative because otherwise you will get overlapping instances.
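For example, here is the specialization to functions (a sketch of my own, using the pure and liftA2 for ((->) a) from Control.Applicative), which gives the usual pointwise arithmetic on functions:

instance Num b => Num (a -> b) where
    fromInteger = pure . fromInteger

    (+) = liftA2 (+)
    (*) = liftA2 (*)
    (-) = liftA2 (-)

    negate = fmap negate
    abs    = fmap abs
    signum = fmap signum

>>> (sin + cos) 0
1.0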

Appendix B

These are the proofs to establish that the following Monoid instance obeys the Monoid laws:

instance (Applicative f, Monoid b) => Monoid (f b) where
    mempty = pure mempty

    mappend = liftA2 mappend

... meaning that if f obeys the Applicative laws and b obeys the Monoid laws, then f b also obeys the Monoid laws.

Proof of the left identity law:

mempty <> x

-- x <> y = mappend x y
= mappend mempty x

-- mappend = liftA2 mappend
= liftA2 mappend mempty x

-- mempty = pure mempty
= liftA2 mappend (pure mempty) x

-- liftA2 f x y = (pure f <*> x) <*> y
= (pure mappend <*> pure mempty) <*> x

-- Applicative law: pure f <*> pure x = pure (f x)
= pure (mappend mempty) <*> x

-- Eta conversion
= pure (\a -> mappend mempty a) <*> x

-- mappend mempty x = x
= pure (\a -> a) <*> x

-- id = \x -> x
= pure id <*> x

-- Applicative law: pure id <*> v = v
= x

Proof of the right identity law:

x <> mempty

-- x <> y = mappend x y
= mappend x mempty

-- mappend = liftA2 mappend
= liftA2 mappend x mempty

-- mempty = pure mempty
= liftA2 mappend x (pure mempty)

-- liftA2 f x y = (pure f <*> x) <*> y
= (pure mappend <*> x) <*> pure mempty

-- Applicative law: u <*> pure y = pure (\f -> f y) <*> u
= pure (\f -> f mempty) <*> (pure mappend <*> x)

-- Applicative law: ((pure (.) <*> u) <*> v) <*> w = u <*> (v <*> w)
= ((pure (.) <*> pure (\f -> f mempty)) <*> pure mappend) <*> x

-- Applicative law: pure f <*> pure x = pure (f x)
= (pure ((.) (\f -> f mempty)) <*> pure mappend) <*> x

-- Applicative law : pure f <*> pure x = pure (f x)
= pure ((.) (\f -> f mempty) mappend) <*> x

-- `(.) f g` is just prefix notation for `f . g`
= pure ((\f -> f mempty) . mappend) <*> x

-- f . g = \x -> f (g x)
= pure (\x -> (\f -> f mempty) (mappend x)) <*> x

-- Apply the lambda
= pure (\x -> mappend x mempty) <*> x

-- Monoid law: mappend x mempty = x
= pure (\x -> x) <*> x

-- id = \x -> x
= pure id <*> x

-- Applicative law: pure id <*> v = v
= x

Proof of the associativity law:

(x <> y) <> z

-- x <> y = mappend x y
= mappend (mappend x y) z

-- mappend = liftA2 mappend
= liftA2 mappend (liftA2 mappend x y) z

-- liftA2 f x y = (pure f <*> x) <*> y
= (pure mappend <*> ((pure mappend <*> x) <*> y)) <*> z

-- Applicative law: ((pure (.) <*> u) <*> v) <*> w = u <*> (v <*> w)
= (((pure (.) <*> pure mappend) <*> (pure mappend <*> x)) <*> y) <*> z

-- Applicative law: pure f <*> pure x = pure (f x)
= ((pure f <*> (pure mappend <*> x)) <*> y) <*> z
  where
    f = (.) mappend

-- Applicative law: ((pure (.) <*> u) <*> v) <*> w = u <*> (v <*> w)
= ((((pure (.) <*> pure f) <*> pure mappend) <*> x) <*> y) <*> z
  where
    f = (.) mappend

-- Applicative law: pure f <*> pure x = pure (f x)
= (((pure f <*> pure mappend) <*> x) <*> y) <*> z
  where
    f = (.) ((.) mappend)

-- Applicative law: pure f <*> pure x = pure (f x)
= ((pure f <*> x) <*> y) <*> z
  where
    f = (.) ((.) mappend) mappend

-- (.) f g = f . g
= ((pure f <*> x) <*> y) <*> z
  where
    f = ((.) mappend) . mappend

-- Eta conversion
= ((pure f <*> x) <*> y) <*> z
  where
    f x = (((.) mappend) . mappend) x

-- (f . g) x = f (g x)
= ((pure f <*> x) <*> y) <*> z
  where
    f x = (.) mappend (mappend x)

-- (.) f g = f . g
= ((pure f <*> x) <*> y) <*> z
  where
    f x = mappend . (mappend x)

-- Eta conversion
= ((pure f <*> x) <*> y) <*> z
  where
    f x y = (mappend . (mappend x)) y

-- (f . g) x = f (g x)
= ((pure f <*> x) <*> y) <*> z
  where
    f x y = mappend (mappend x y)

-- Eta conversion
= ((pure f <*> x) <*> y) <*> z
  where
    f x y z = mappend (mappend x y) z

-- Monoid law: mappend (mappend x y) z = mappend x (mappend y z)
= ((pure f <*> x) <*> y) <*> z
  where
    f x y z = mappend x (mappend y z)

-- (f . g) x = f (g x)
= ((pure f <*> x) <*> y) <*> z
  where
    f x y z = (mappend x . mappend y) z

-- Eta conversion
= ((pure f <*> x) <*> y) <*> z
  where
    f x y = mappend x . mappend y

-- (.) f g = f . g
= ((pure f <*> x) <*> y) <*> z
  where
    f x y = (.) (mappend x) (mappend y)

-- (f . g) x = f (g x)
= ((pure f <*> x) <*> y) <*> z
  where
    f x y = (((.) . mappend) x) (mappend y)

-- (f . g) x = f (g x)
= ((pure f <*> x) <*> y) <*> z
  where
    f x y = ((((.) . mappend) x) . mappend) y

-- Eta conversion
= ((pure f <*> x) <*> y) <*> z
  where
    f x = (((.) . mappend) x) . mappend

-- (.) f g = f . g
= ((pure f <*> x) <*> y) <*> z
  where
    f x = (.) (((.) . mappend) x) mappend

-- Lambda abstraction
= ((pure f <*> x) <*> y) <*> z
  where
    f x = (\k -> k mappend) ((.) (((.) . mappend) x))

-- (f . g) x = f (g x)
= ((pure f <*> x) <*> y) <*> z
  where
    f x = (\k -> k mappend) (((.) . ((.) . mappend)) x)

-- Eta conversion
= ((pure f <*> x) <*> y) <*> z
  where
    f = (\k -> k mappend) . ((.) . ((.) . mappend))

-- (.) f g = f . g
= ((pure f <*> x) <*> y) <*> z
  where
    f = (.) (\k -> k mappend) ((.) . ((.) . mappend))

-- Applicative law: pure f <*> pure x = pure (f x)
= (((pure g <*> pure f) <*> x) <*> y) <*> z
  where
    g = (.) (\k -> k mappend)
    f = (.) . ((.) . mappend)

-- Applicative law: pure f <*> pure x = pure (f x)
= ((((pure (.) <*> pure (\k -> k mappend)) <*> pure f) <*> x) <*> y) <*> z
  where
    f = (.) . ((.) . mappend)

-- Applicative law: ((pure (.) <*> u) <*> v) <*> w = u <*> (v <*> w)
= ((pure (\k -> k mappend) <*> (pure f <*> x)) <*> y) <*> z
  where
    f = (.) . ((.) . mappend)

-- Applicative law: u <*> pure y = pure (\k -> k y) <*> u
= (((pure f <*> x) <*> pure mappend) <*> y) <*> z
  where
    f = (.) . ((.) . mappend)

-- (.) f g = f . g
= (((pure f <*> x) <*> pure mappend) <*> y) <*> z
  where
    f = (.) (.) ((.) . mappend)

-- Applicative law: pure f <*> pure x = pure (f x)
= ((((pure g <*> pure f) <*> x) <*> pure mappend) <*> y) <*> z
  where
    g = (.) (.)
    f = (.) . mappend

-- Applicative law: pure f <*> pure x = pure (f x)
= (((((pure (.) <*> pure (.)) <*> pure f) <*> x) <*> pure mappend) <*> y) <*> z
  where
    f = (.) . mappend

-- Applicative law: ((pure (.) <*> u) <*> v) <*> w = u <*> (v <*> w)
= (((pure (.) <*> (pure f <*> x)) <*> pure mappend) <*> y) <*> z
  where
    f = (.) . mappend

-- Applicative law: ((pure (.) <*> u) <*> v) <*> w = u <*> (v <*> w)
= ((pure f <*> x) <*> (pure mappend <*> y)) <*> z
  where
    f = (.) . mappend

-- (.) f g = f . g
= ((pure f <*> x) <*> (pure mappend <*> y)) <*> z
  where
    f = (.) (.) mappend

-- Applicative law: pure f <*> pure x = pure (f x)
= (((pure f <*> pure mappend) <*> x) <*> (pure mappend <*> y)) <*> z
  where
    f = (.) (.)

-- Applicative law: pure f <*> pure x = pure (f x)
= ((((pure (.) <*> pure (.)) <*> pure mappend) <*> x) <*> (pure mappend <*> y)) <*> z

-- Applicative law: ((pure (.) <*> u) <*> v) <*> w = u <*> (v <*> w)
= ((pure (.) <*> (pure mappend <*> x)) <*> (pure mappend <*> y)) <*> z

-- Applicative law: ((pure (.) <*> u) <*> v) <*> w = u <*> (v <*> w)
= (pure mappend <*> x) <*> ((pure mappend <*> y) <*> z)

-- liftA2 f x y = (pure f <*> x) <*> y
= liftA2 mappend x (liftA2 mappend y z)

-- mappend = liftA2 mappend
= mappend x (mappend y z)

-- x <> y = mappend x y
= x <> (y <> z)

Appendix C: Monoid key logging

Here is the complete program for a key logger with a Monoid-based plugin system:

import Control.Applicative (pure, liftA2)
import Control.Monad (forever)
import Data.Monoid
import System.IO

instance Monoid b => Monoid (IO b) where
    mempty = pure mempty

    mappend = liftA2 mappend

type Plugin = IO (Char -> IO ())

logTo :: FilePath -> Plugin
logTo filePath = do
    handle <- openFile filePath WriteMode
    hSetBuffering handle NoBuffering
    return (hPutChar handle)

main = do
    hSetEcho stdin False
    handleChar <- logTo "file1.txt" <> logTo "file2.txt"
    forever $ do
        c <- getChar
        handleChar c
        putChar c
Categories: Offsite Blogs

Gabriel Gonzalez: managed-1.0.0: A monad for managed resources

Planet Haskell - Sun, 08/10/2014 - 5:58pm

I'm splitting off the Managed type from the mvc library into its own stand-alone library. I've wanted to use this type outside of mvc for some time now, because it's an incredibly useful Applicative that I find myself reaching for in my own code whenever I need to acquire resources.

If you're not familiar with the Managed type, it's simple:

-- The real implementation uses smart constructors
newtype Managed a =
    Managed { with :: forall r . (a -> IO r) -> IO r }

-- It's a `Functor`/`Applicative`/`Monad`
instance Functor Managed where ...
instance Applicative Managed where ...
instance Monad Managed where ...

-- ... and also implements `MonadIO`
instance MonadIO Managed where ...

Here's an example of mixing the Managed monad with pipes to copy one file to another:

import Control.Monad.Managed
import System.IO
import Pipes
import qualified Pipes.Prelude as Pipes

main = runManaged $ do
    hIn  <- managed (withFile "in.txt"  ReadMode)
    hOut <- managed (withFile "out.txt" WriteMode)
    liftIO $ runEffect $
        Pipes.fromHandle hIn >-> Pipes.toHandle hOut

However, this is not much more concise than the equivalent callback-based version. The real value of the Managed type is its Applicative instance, which you can use to lift operations from values that it wraps.
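For example, here is a sketch of my own using the library's managed smart constructor: the Applicative instance lets you acquire several resources as a single Managed value.

import Control.Applicative (liftA2)
import Control.Monad.Managed
import System.IO

-- Both handles are acquired together and released afterwards
twoHandles :: Managed (Handle, Handle)
twoHandles = liftA2 (,)
    (managed (withFile "in.txt" ReadMode))
    (managed (withFile "out.txt" WriteMode))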

Equational reasoning

My previous post on equational reasoning at scale describes how you can use Applicatives to automatically extend Monoids while preserving the Monoid operations. The Managed Applicative is no different and provides the following type class instance that automatically lifts Monoid operations:

instance Monoid a => Monoid (Managed a)

Therefore, you can treat the Managed Applicative as yet another useful building block in your Monoid tool box.
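For example, here is a sketch (mine, assuming the Monoid instance for IO from that post, which is not in base): () is a Monoid, so IO () is, so Char -> IO () is, and the Managed layer preserves that, so two plugins combine with a single (<>).

-- Assumes the imports from the examples above plus Data.Monoid
logBoth :: Managed (Char -> IO ())
logBoth = logHandle "file1.txt" <> logHandle "file2.txt"
  where
    logHandle path = do
        h <- managed (withFile path WriteMode)
        return (hPutChar h)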

However, Applicatives can do more than extend Monoids; they can extend Categorys, too. Given any Category, if you extend it with an Applicative you can automatically derive a new Category. Here's the general solution:

import Control.Applicative
import Control.Category
import Prelude hiding ((.), id)

newtype Extend f c a b = Extend (f (c a b))

instance (Applicative f, Category c)
      => Category (Extend f c) where
    id = Extend (pure id)

    Extend f . Extend g = Extend (liftA2 (.) f g)

So let's take advantage of this fact to extend one of the pipes categories with simple resource management. All we have to do is wrap the pull-based pipes category in a bona-fide Category instance:

import Pipes

newtype Pull m a b = Pull (Pipe a b m ())

instance Monad m => Category (Pull m) where
    id = Pull cat

    Pull p1 . Pull p2 = Pull (p1 <-< p2)

Now we can automatically define resource-managed pipes by Extending them with the Managed Applicative:

import Control.Monad.Managed
import qualified Pipes.Prelude as Pipes
import System.IO

fromFile :: FilePath -> Extend Managed (Pull IO) () String
fromFile filePath = Extend $ do
    handle <- managed (withFile filePath ReadMode)
    return (Pull (Pipes.fromHandle handle))

toFile :: FilePath -> Extend Managed (Pull IO) String X
toFile filePath = Extend $ do
    handle <- managed (withFile filePath WriteMode)
    return (Pull (Pipes.toHandle handle))

All we need is a way to run Extended pipes and then we're good to go:

runPipeline :: Extend Managed (Pull IO) () X -> IO ()
runPipeline (Extend mp) = runManaged $ do
    Pull p <- mp
    liftIO $ runEffect (return () >~ p)

If we compose and run these Extended pipes they just "do the right thing":

main :: IO ()
main = runPipeline (fromFile "in.txt" >>> toFile "out.txt")

Let's check it out:

$ cat in.txt
1
2
3
$ ./example
$ cat out.txt
1
2
3

We can even reuse existing pipes, too:

reuse :: Monad m => Pipe a b m () -> Extend Managed (Pull m) a b
reuse = Extend . pure . Pull

main = runPipeline $
    fromFile "in.txt" >>> reuse (Pipes.take 2) >>> toFile "out.txt"

... and reuse does the right thing:

$ ./example
$ cat out.txt
1
2

What does it mean for reuse to "do the right thing"? Well, we can specify the correctness conditions for reuse as the following functor laws:

reuse (p1 >-> p2) = reuse p1 >>> reuse p2

reuse cat = id

These two laws enforce that reuse is "well-behaved" in a rigorous sense.
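Indeed, the first law follows mechanically from the definitions; here is a quick derivation of my own, in the style of the previous post:

reuse p1 >>> reuse p2

-- f >>> g = g . f
= reuse p2 . reuse p1

-- reuse = Extend . pure . Pull
= Extend (pure (Pull p2)) . Extend (pure (Pull p1))

-- Extend f . Extend g = Extend (liftA2 (.) f g)
= Extend (liftA2 (.) (pure (Pull p2)) (pure (Pull p1)))

-- Applicative homomorphism law, applied twice
= Extend (pure (Pull p2 . Pull p1))

-- Pull p1 . Pull p2 = Pull (p1 <-< p2)
= Extend (pure (Pull (p2 <-< p1)))

-- p2 <-< p1 = p1 >-> p2
= Extend (pure (Pull (p1 >-> p2)))

-- reuse = Extend . pure . Pull
= reuse (p1 >-> p2)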

This is just one example of how you can use the Managed type to extend an existing Category. As an exercise, try to take other categories and extend them this way and see what surprising new connectable components you can create.

Conclusion

Experts will recognize that Managed is a special case of Codensity or ContT. The reason for defining a separate type is:

  • simpler inferred types,
  • additional type class instances, and
  • a more beginner-friendly name.

Managed is closely related in spirit to the Resource monad, which is now part of resourcet. The main differences between the two are:

  • Resource preserves the open and close operations
  • Managed works for arbitrary callbacks, even unrelated to resources

This is why I view them as complementary Monads.

Like all Applicatives, the Managed type is deceptively simple. This type does not do much in isolation, but it grows in power the more you compose it with other Applicatives to generate new Applicatives.

Categories: Offsite Blogs