# News aggregator

### Network.CGI maintenance - character encodings

### Gabriel Gonzalez: Equational reasoning at scale

Haskell programmers care about the correctness of their software and they specify correctness conditions in the form of equations that their code must satisfy. They can then verify the correctness of these equations using equational reasoning to prove that the abstractions they build are sound. To an outsider this might seem like a futile, academic exercise: proving the correctness of small abstractions is difficult, so what hope do we have to prove larger abstractions correct? This post explains how to do precisely that: scale proofs to large and complex abstractions.

Purely functional programming uses composition to scale programs, meaning that:

- We build small components that we can verify correct in isolation
- We compose smaller components into larger components

If you saw "components" and thought "functions", think again! We can compose things that do not even remotely resemble functions, such as proofs! In fact, Haskell programmers prove large-scale properties exactly the same way we build large-scale programs:

- We build small proofs that we can verify correct in isolation
- We compose smaller proofs into larger proofs

The following sections illustrate in detail how this works in practice, using Monoids as the running example. We will prove the Monoid laws for simple types and work our way up to proving the Monoid laws for much more complex types. Along the way we'll learn how to keep the proof complexity flat as the types grow in size.

#### Monoids

Haskell's Prelude provides the following Monoid type class:

```haskell
class Monoid m where
    mempty  :: m

    mappend :: m -> m -> m

-- An infix operator equivalent to `mappend`
(<>) :: Monoid m => m -> m -> m
x <> y = mappend x y
```

... and all Monoid instances must obey the following laws:

```haskell
mempty <> x = x                -- Left identity
x <> mempty = x                -- Right identity

(x <> y) <> z = x <> (y <> z)  -- Associativity
```

For example, Ints form a Monoid:

```haskell
-- See "Appendix A" for some caveats
instance Monoid Int where
    mempty  = 0
    mappend = (+)
```

... and the Monoid laws for Ints are just the laws of addition:

```haskell
0 + x = x                  -- Left identity
x + 0 = x                  -- Right identity

(x + y) + z = x + (y + z)  -- Associativity
```
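As a sanity check, we can mechanically test these laws on a handful of sample values. The sketch below uses the Sum newtype from Data.Monoid (since, as Appendix A notes, Int itself has no Monoid instance in the standard library); spot-checking is not a proof, but it catches mistakes cheaply:

```haskell
import Data.Monoid (Sum(..))

-- Spot-check the Monoid laws for `Sum Int` over a small sample of
-- values.  This is not a proof, just a mechanical sanity check.
samples :: [Sum Int]
samples = map Sum [-3 .. 3]

leftIdentity, rightIdentity, associativity :: Bool
leftIdentity  = all (\x -> mempty <> x == x) samples
rightIdentity = all (\x -> x <> mempty == x) samples
associativity = and
    [ (x <> y) <> z == x <> (y <> z)
    | x <- samples, y <- samples, z <- samples ]
```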

Now we can use (<>) and mempty instead of (+) and 0:

```haskell
>>> 4 <> 2
6
>>> 5 <> mempty <> 5
10
```

This appears useless at first glance. We already have (+) and 0, so why are we using the Monoid operations?

#### Extending Monoids

Well, what if I want to combine things other than Ints, like pairs of Ints? I want to be able to write code like this:

```haskell
>>> (1, 2) <> (3, 4)
(4, 6)
```

Well, that seems mildly interesting. Let's try to define a Monoid instance for pairs of Ints:

```haskell
instance Monoid (Int, Int) where
    mempty = (0, 0)

    mappend (x1, y1) (x2, y2) = (x1 + x2, y1 + y2)
```

Now my wish is granted and I can "add" binary tuples together using (<>) and mempty:

```haskell
>>> (1, 2) <> (3, 4)
(4, 6)
>>> (1, 2) <> (3, mempty) <> (mempty, 4)
(4, 6)
>>> (1, 2) <> mempty <> (3, 4)
(4, 6)
```

However, I still haven't proven that this new Monoid instance obeys the Monoid laws. Fortunately, this is a very simple proof.

I'll begin with the first Monoid law, which requires that:

```haskell
mempty <> x = x
```

We will begin from the left-hand side of the equation and try to arrive at the right-hand side by substituting equals-for-equals (a.k.a. "equational reasoning"):

```haskell
-- Left-hand side of the equation
mempty <> x

-- x <> y = mappend x y
= mappend mempty x

-- mempty = (0, 0)
= mappend (0, 0) x

-- Define: x = (xL, xR), since `x` is a tuple
= mappend (0, 0) (xL, xR)

-- mappend (x1, y1) (x2, y2) = (x1 + x2, y1 + y2)
= (0 + xL, 0 + xR)

-- 0 + x = x
= (xL, xR)

-- x = (xL, xR)
= x
```

The proof of the second Monoid law is symmetric:

```haskell
-- Left-hand side of the equation
x <> mempty

-- x <> y = mappend x y
= mappend x mempty

-- mempty = (0, 0)
= mappend x (0, 0)

-- Define: x = (xL, xR), since `x` is a tuple
= mappend (xL, xR) (0, 0)

-- mappend (x1, y1) (x2, y2) = (x1 + x2, y1 + y2)
= (xL + 0, xR + 0)

-- x + 0 = x
= (xL, xR)

-- x = (xL, xR)
= x
```

The third Monoid law requires that (<>) is associative:

```haskell
(x <> y) <> z = x <> (y <> z)
```

Again I'll begin from the left-hand side of the equation:

```haskell
-- Left-hand side
(x <> y) <> z

-- x <> y = mappend x y
= mappend (mappend x y) z

-- x = (xL, xR)
-- y = (yL, yR)
-- z = (zL, zR)
= mappend (mappend (xL, xR) (yL, yR)) (zL, zR)

-- mappend (x1, y1) (x2, y2) = (x1 + x2, y1 + y2)
= mappend (xL + yL, xR + yR) (zL, zR)

-- mappend (x1, y1) (x2, y2) = (x1 + x2, y1 + y2)
= ((xL + yL) + zL, (xR + yR) + zR)

-- (x + y) + z = x + (y + z)
= (xL + (yL + zL), xR + (yR + zR))

-- mappend (x1, y1) (x2, y2) = (x1 + x2, y1 + y2)
= mappend (xL, xR) (yL + zL, yR + zR)

-- mappend (x1, y1) (x2, y2) = (x1 + x2, y1 + y2)
= mappend (xL, xR) (mappend (yL, yR) (zL, zR))

-- x = (xL, xR)
-- y = (yL, yR)
-- z = (zL, zR)
= mappend x (mappend y z)

-- x <> y = mappend x y
= x <> (y <> z)
```

That completes the proof of the three Monoid laws, but I'm not satisfied with these proofs.

#### Generalizing proofs

I don't like the above proofs because they are disposable, meaning that I cannot reuse them to prove other properties of interest. I'm a programmer, so I loathe busy work and unnecessary repetition, both for code and proofs. I would like to find a way to generalize the above proofs so that I can use them in more places.

We improve proof reuse in the same way that we improve code reuse. To see why, consider the following sort function:

```haskell
sort :: [Int] -> [Int]
```

This sort function is disposable because it only works on Ints. For example, I cannot use the above function to sort a list of Doubles.

Fortunately, programming languages with generics let us generalize sort by parametrizing sort on the element type of the list:

```haskell
sort :: Ord a => [a] -> [a]
```

That type says that we can call sort on any list of as, so long as the type a implements the Ord type class (a comparison interface). This works because sort doesn't really care whether or not the elements are Ints; sort only cares that they are comparable.

Similarly, we can make the proof more "generic". If we inspect the proof closely, we will notice that we don't really care whether or not the tuple contains Ints. The only Int-specific properties we use in our proof are:

```haskell
0 + x = x                  -- Left identity
x + 0 = x                  -- Right identity

(x + y) + z = x + (y + z)  -- Associativity
```

However, these properties hold true for all Monoids, not just Ints. Therefore, we can generalize our Monoid instance for tuples by parametrizing it on the type of each field of the tuple:

```haskell
instance (Monoid a, Monoid b) => Monoid (a, b) where
    mempty = (mempty, mempty)

    mappend (x1, y1) (x2, y2) = (mappend x1 x2, mappend y1 y2)
```

The above Monoid instance says that we can combine tuples so long as we can combine their individual fields. Our original Monoid instance was just a special case of this instance where both the a and b types are Ints.

Note: The mempty and mappend on the left-hand side of each equation are for tuples. The memptys and mappends on the right-hand side of each equation are for the types a and b. Haskell overloads type class methods like mempty and mappend to work on any type that implements the Monoid type class, and the compiler distinguishes them by their inferred types.
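To see the generalized instance in action, here is a small sketch where each field carries a different Monoid: the String fields concatenate while the numeric fields add (using the Sum newtype from Data.Monoid, since Int has no Monoid instance in the standard library):

```haskell
import Data.Monoid (Sum(..))

-- Each field uses its own Monoid: the `String`s concatenate and the
-- `Sum Int`s add.
example :: (String, Sum Int)
example = ("ab", Sum 1) <> ("cd", Sum 2)
-- ("abcd", Sum {getSum = 3})
```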

We can similarly generalize our original proofs, too, by just replacing the Int-specific parts with their more general Monoid counterparts.

Here is the generalized proof of the left identity law:

```haskell
-- Left-hand side of the equation
mempty <> x

-- x <> y = mappend x y
= mappend mempty x

-- mempty = (mempty, mempty)
= mappend (mempty, mempty) x

-- Define: x = (xL, xR), since `x` is a tuple
= mappend (mempty, mempty) (xL, xR)

-- mappend (x1, y1) (x2, y2) = (mappend x1 x2, mappend y1 y2)
= (mappend mempty xL, mappend mempty xR)

-- Monoid law: mappend mempty x = x
= (xL, xR)

-- x = (xL, xR)
= x
```

... the right identity law:

```haskell
-- Left-hand side of the equation
x <> mempty

-- x <> y = mappend x y
= mappend x mempty

-- mempty = (mempty, mempty)
= mappend x (mempty, mempty)

-- Define: x = (xL, xR), since `x` is a tuple
= mappend (xL, xR) (mempty, mempty)

-- mappend (x1, y1) (x2, y2) = (mappend x1 x2, mappend y1 y2)
= (mappend xL mempty, mappend xR mempty)

-- Monoid law: mappend x mempty = x
= (xL, xR)

-- x = (xL, xR)
= x
```

... and the associativity law:

```haskell
-- Left-hand side
(x <> y) <> z

-- x <> y = mappend x y
= mappend (mappend x y) z

-- x = (xL, xR)
-- y = (yL, yR)
-- z = (zL, zR)
= mappend (mappend (xL, xR) (yL, yR)) (zL, zR)

-- mappend (x1, y1) (x2, y2) = (mappend x1 x2, mappend y1 y2)
= mappend (mappend xL yL, mappend xR yR) (zL, zR)

-- mappend (x1, y1) (x2, y2) = (mappend x1 x2, mappend y1 y2)
= (mappend (mappend xL yL) zL, mappend (mappend xR yR) zR)

-- Monoid law: mappend (mappend x y) z = mappend x (mappend y z)
= (mappend xL (mappend yL zL), mappend xR (mappend yR zR))

-- mappend (x1, y1) (x2, y2) = (mappend x1 x2, mappend y1 y2)
= mappend (xL, xR) (mappend yL zL, mappend yR zR)

-- mappend (x1, y1) (x2, y2) = (mappend x1 x2, mappend y1 y2)
= mappend (xL, xR) (mappend (yL, yR) (zL, zR))

-- x = (xL, xR)
-- y = (yL, yR)
-- z = (zL, zR)
= mappend x (mappend y z)

-- x <> y = mappend x y
= x <> (y <> z)
```

This more general Monoid instance lets us stick any Monoids inside the tuple fields and we can still combine the tuples. For example, lists form a Monoid:

```haskell
-- Exercise: Prove the Monoid laws for lists
instance Monoid [a] where
    mempty  = []
    mappend = (++)
```

... so we can stick lists inside the right field of each tuple and still combine them:

```haskell
>>> (1, [2, 3]) <> (4, [5, 6])
(5, [2, 3, 5, 6])
>>> (1, [2, 3]) <> (4, mempty) <> (mempty, [5, 6])
(5, [2, 3, 5, 6])
>>> (1, [2, 3]) <> mempty <> (4, [5, 6])
(5, [2, 3, 5, 6])
```

Why, we can even stick yet another tuple inside the right field and still combine them:

```haskell
>>> (1, (2, 3)) <> (4, (5, 6))
(5, (7, 9))
```

We can try even more exotic permutations and everything still "just works":

```haskell
>>> ((1, [2, 3]), ([4, 5], 6)) <> ((7, [8, 9]), ([10, 11], 12))
((8, [2, 3, 8, 9]), ([4, 5, 10, 11], 18))
```

This is our first example of a "scalable proof". We began from three primitive building blocks:

- Int is a Monoid
- [a] is a Monoid
- (a, b) is a Monoid if a is a Monoid and b is a Monoid

... and we connected those three building blocks to assemble a variety of new Monoid instances. No matter how many tuples we nest the result is still a Monoid and obeys the Monoid laws. We don't need to re-prove the Monoid laws every time we assemble a new permutation of these building blocks.
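For example, here is a sketch of one such permutation, again substituting Sum from Data.Monoid for the hypothetical Monoid instance for Int; the compiler assembles the nested instance entirely on its own:

```haskell
import Data.Monoid (Sum(..))

-- Nesting the building blocks: a pair whose right field is itself a
-- pair of a list and another pair.  The compiler derives the Monoid
-- instance for the whole shape from the three primitive instances.
nested :: (Sum Int, ([Int], (String, Sum Int)))
nested = (Sum 1, ([2], ("a", Sum 3))) <> (Sum 4, ([5], ("b", Sum 6)))
-- (Sum 5, ([2, 5], ("ab", Sum 9)))
```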

However, these building blocks are still pretty limited. What other useful things can we combine to build new Monoids?

#### IO

We're so used to thinking of Monoids as data, so let's define a new Monoid instance for something entirely un-data-like:

```haskell
-- See "Appendix A" for some caveats
instance Monoid b => Monoid (IO b) where
    mempty = return mempty

    mappend io1 io2 = do
        a1 <- io1
        a2 <- io2
        return (mappend a1 a2)
```

The above instance says: "If b is a Monoid, then an IO action that returns a b is also a Monoid". Let's test this using the getLine function from the Prelude:

```haskell
-- Read one line of input from stdin
getLine :: IO String
```

String is a Monoid, since a String is just a list of characters, so we should be able to mappend multiple getLine statements together. Let's see what happens:

```haskell
>>> getLine  -- Reads one line of input
Hello<Enter>
"Hello"
>>> getLine <> getLine
ABC<Enter>
DEF<Enter>
"ABCDEF"
>>> getLine <> getLine <> getLine
1<Enter>
23<Enter>
456<Enter>
"123456"
```

Neat! When we combine multiple commands we combine their effects and their results.

Of course, we don't have to limit ourselves to reading strings. We can use readLn from the Prelude to read in anything that implements the Read type class:

```haskell
-- Parse a `Read`able value from one line of stdin
readLn :: Read a => IO a
```

All we have to do is tell the compiler which type a we intend to Read by providing a type signature:

```haskell
>>> readLn :: IO (Int, Int)
(1, 2)<Enter>
(1,2)
>>> readLn <> readLn :: IO (Int, Int)
(1,2)<Enter>
(3,4)<Enter>
(4,6)
>>> readLn <> readLn <> readLn :: IO (Int, Int)
(1,2)<Enter>
(3,4)<Enter>
(5,6)<Enter>
(9,12)
```

This works because:

- Int is a Monoid
- Therefore, (Int, Int) is a Monoid
- Therefore, IO (Int, Int) is a Monoid

Or let's flip things around and nest IO actions inside of a tuple:

```haskell
>>> let ios = (getLine, readLn) :: (IO String, IO (Int, Int))
>>> let (getLines, readLns) = ios <> ios <> ios
>>> getLines
1<Enter>
23<Enter>
456<Enter>
"123456"
>>> readLns
(1,2)<Enter>
(3,4)<Enter>
(5,6)<Enter>
(9,12)
```

We can very easily reason that the type (IO String, IO (Int, Int)) obeys the Monoid laws because:

- String is a Monoid
- If String is a Monoid then IO String is also a Monoid
- Int is a Monoid
- If Int is a Monoid, then (Int, Int) is also a Monoid
- If (Int, Int) is a Monoid, then IO (Int, Int) is also a Monoid
- If IO String is a Monoid and IO (Int, Int) is a Monoid, then (IO String, IO (Int, Int)) is also a Monoid

However, we don't really have to reason about this at all. The compiler will automatically assemble the correct Monoid instance for us. The only thing we need to verify is that the primitive Monoid instances obey the Monoid laws, and then we can trust that any larger Monoid instance the compiler derives will also obey the Monoid laws.

#### The Unit Monoid

The Haskell Prelude also provides the putStrLn function, which writes a String to standard output followed by a newline:

```haskell
putStrLn :: String -> IO ()
```

Is putStrLn combinable? There's only one way to find out!

```haskell
>>> putStrLn "Hello" <> putStrLn "World"
Hello
World
```

Interesting, but why does that work? Well, let's look at the types of the commands we are combining:

```haskell
putStrLn "Hello" :: IO ()
putStrLn "World" :: IO ()
```

Well, we said that IO b is a Monoid if b is a Monoid, and b in this case is () (pronounced "unit"), which you can think of as an "empty tuple". Therefore, () must form a Monoid of some sort, and if we dig into Data.Monoid, we will discover the following Monoid instance:

```haskell
-- Exercise: Prove the Monoid laws for `()`
instance Monoid () where
    mempty = ()

    mappend () () = ()
```

This says that empty tuples form a trivial Monoid, since there's only one possible value (ignoring bottom) for an empty tuple: (). Therefore, we can derive that IO () is a Monoid because () is a Monoid.

#### Functions

Alright, so we can combine putStrLn "Hello" with putStrLn "World", but can we combine naked putStrLn functions?

```haskell
>>> (putStrLn <> putStrLn) "Hello"
Hello
Hello
```

Woah, how does that work?

We never wrote a Monoid instance for the type String -> IO (), yet somehow the compiler magically accepted the above code and produced a sensible result.

This works because of the following Monoid instance for functions:

```haskell
instance Monoid b => Monoid (a -> b) where
    mempty = \_ -> mempty

    mappend f g = \a -> mappend (f a) (g a)
```

This says: "If b is a Monoid, then any function that returns a b is also a Monoid".

The compiler then deduced that:

- () is a Monoid
- If () is a Monoid, then IO () is also a Monoid
- If IO () is a Monoid then String -> IO () is also a Monoid

The compiler is a trusted friend, deducing Monoid instances we never knew existed.
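The same function instance works for any Monoid result, not just IO (). Here is a small pure sketch: both functions receive the same argument, and because their results are lists (a Monoid), those results are concatenated:

```haskell
-- Both functions receive the same list and their results are
-- concatenated, because the result type `[Int]` is a Monoid.
edges :: [Int] -> [Int]
edges = take 2 <> drop 3
-- edges [1, 2, 3, 4, 5] == [1, 2, 4, 5]
```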

#### Monoid plugins

Now we have enough building blocks to assemble a non-trivial example. Let's build a key logger with a Monoid-based plugin system.

The central scaffold of our program is a simple main loop that echoes characters from standard input to standard output:

```haskell
main = do
    hSetEcho stdin False

    forever $ do
        c <- getChar
        putChar c
```

However, we would like to intercept key strokes for nefarious purposes, so we will slightly modify this program to install a handler at the beginning of the program that we will invoke on every incoming character:

```haskell
install :: IO (Char -> IO ())
install = ???

main = do
    hSetEcho stdin False

    handleChar <- install

    forever $ do
        c <- getChar
        handleChar c
        putChar c
```

Notice that the type of install is exactly the correct type to be a Monoid:

- () is a Monoid
- Therefore, IO () is also a Monoid
- Therefore Char -> IO () is also a Monoid
- Therefore IO (Char -> IO ()) is also a Monoid

Therefore, we can combine key logging plugins together using Monoid operations. Here is one such example:

```haskell
type Plugin = IO (Char -> IO ())

logTo :: FilePath -> Plugin
logTo filePath = do
    handle <- openFile filePath WriteMode
    return (hPutChar handle)

main = do
    hSetEcho stdin False

    handleChar <- logTo "file1.txt" <> logTo "file2.txt"

    forever $ do
        c <- getChar
        handleChar c
        putChar c
```

Now, every key stroke will be recorded to both file1.txt and file2.txt. Let's confirm that this works as expected:

```
$ ./logger
Test<Enter>
ABC<Enter>
42<Enter>
<Ctrl-C>
$ cat file1.txt
Test
ABC
42
$ cat file2.txt
Test
ABC
42
```

Try writing your own Plugins and mixing them in with (<>) to see what happens. "Appendix C" contains the complete code for this section so you can experiment with your own Plugins.

#### Applicatives

Notice that I never actually proved the Monoid laws for the following two Monoid instances:

```haskell
instance Monoid b => Monoid (a -> b) where
    mempty = \_ -> mempty

    mappend f g = \a -> mappend (f a) (g a)

instance Monoid a => Monoid (IO a) where
    mempty = return mempty

    mappend io1 io2 = do
        a1 <- io1
        a2 <- io2
        return (mappend a1 a2)
```

The reason why is that they are both special cases of a more general pattern. We can detect the pattern if we rewrite both of them to use the pure and liftA2 functions from Control.Applicative:

```haskell
import Control.Applicative (pure, liftA2)

instance Monoid b => Monoid (a -> b) where
    mempty  = pure mempty
    mappend = liftA2 mappend

instance Monoid b => Monoid (IO b) where
    mempty  = pure mempty
    mappend = liftA2 mappend
```

This works because both IO and functions implement the following Applicative interface:

```haskell
class Functor f => Applicative f where
    pure  :: a -> f a

    (<*>) :: f (a -> b) -> f a -> f b

-- Lift a binary function over the functor `f`
liftA2 :: Applicative f => (a -> b -> c) -> f a -> f b -> f c
liftA2 f x y = (pure f <*> x) <*> y
```

... and all Applicative instances must obey several Applicative laws:

```haskell
pure id <*> v = v                                 -- Identity

((pure (.) <*> u) <*> v) <*> w = u <*> (v <*> w)  -- Composition

pure f <*> pure x = pure (f x)                    -- Homomorphism

u <*> pure y = pure (\f -> f y) <*> u             -- Interchange
```

These laws may seem a bit ad hoc, but this paper explains that you can reorganize the Applicative class into this equivalent type class:

```haskell
class Functor f => Monoidal f where
    unit :: f ()

    (#)  :: f a -> f b -> f (a, b)
```

Then the corresponding laws become much more symmetric:

```haskell
fmap snd (unit # x) = x                       -- Left identity

fmap fst (x # unit) = x                       -- Right identity

fmap assoc ((x # y) # z) = x # (y # z)        -- Associativity
  where
    assoc ((a, b), c) = (a, (b, c))

fmap (f *** g) (x # y) = fmap f x # fmap g y  -- Naturality
  where
    (f *** g) (a, b) = (f a, g b)
```

I personally prefer the Monoidal formulation, but you go to war with the army you have, so we will use the Applicative type class for this post.
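For reference, one direction of that equivalence is easy to sketch: any Applicative gives rise to the Monoidal operations. The names unit and (#) below are taken from the hypothetical Monoidal class above:

```haskell
import Control.Applicative (liftA2)

-- One direction of the equivalence: every Applicative yields the
-- Monoidal operations.  `unit` produces a trivial effect, and `(#)`
-- pairs up the results of two effects.
unit :: Applicative f => f ()
unit = pure ()

(#) :: Applicative f => f a -> f b -> f (a, b)
(#) = liftA2 (,)

-- e.g. for Maybe:  Just 1 # Just "a" == Just (1, "a")
```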

All Applicatives possess a very powerful property: they can all automatically lift Monoid operations using the following instance:

```haskell
instance (Applicative f, Monoid b) => Monoid (f b) where
    mempty  = pure mempty
    mappend = liftA2 mappend
```

This says: "If f is an Applicative and b is a Monoid, then f b is also a Monoid." In other words, we can automatically extend any existing Monoid with some new feature f and get back a new Monoid.

Note: The above instance is bad Haskell because it overlaps with other type class instances. In practice we have to duplicate the above code once for each Applicative. Also, for some Applicatives we may want a different Monoid instance.

We can prove that the above instance obeys the Monoid laws without knowing anything about f and b, other than the fact that f obeys the Applicative laws and b obeys the Monoid laws. These proofs are a little long, so I've included them in Appendix B.

Both IO and functions implement the Applicative type class:

```haskell
instance Applicative IO where
    pure = return

    iof <*> iox = do
        f <- iof
        x <- iox
        return (f x)

instance Applicative ((->) a) where
    pure x = \_ -> x

    kf <*> kx = \a ->
        let f = kf a
            x = kx a
        in  f x
```

This means that we can kill two birds with one stone. Every time we prove the Applicative laws for some functor F:

```haskell
instance Applicative F where ...
```

... we automatically prove that the following Monoid instance is correct for free:

```haskell
instance Monoid b => Monoid (F b) where
    mempty  = pure mempty
    mappend = liftA2 mappend
```

In the interest of brevity, I will skip the proofs of the Applicative laws, but I may cover them in a subsequent post.

The beauty of Applicative Functors is that every new Applicative instance we discover adds a new building block to our Monoid toolbox, and Haskell programmers have already discovered lots of Applicative Functors.

#### Revisiting tuples

One of the very first Monoid instances we wrote was:

```haskell
instance (Monoid a, Monoid b) => Monoid (a, b) where
    mempty = (mempty, mempty)

    mappend (x1, y1) (x2, y2) = (mappend x1 x2, mappend y1 y2)
```

Check this out:

```haskell
instance (Monoid a, Monoid b) => Monoid (a, b) where
    mempty  = pure mempty
    mappend = liftA2 mappend
```

This Monoid instance is yet another special case of the Applicative pattern we just covered!

This works because of the following Applicative instance in Control.Applicative:

```haskell
instance Monoid a => Applicative ((,) a) where
    pure b = (mempty, b)

    (a1, f) <*> (a2, x) = (mappend a1 a2, f x)
```

This instance obeys the Applicative laws (proof omitted), so our Monoid instance for tuples is automatically correct, too.
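Here is a small sketch of that instance at work, using Sum from Data.Monoid for the first field: the first components are mappended while the function in the second component is applied to the second value, much like a writer:

```haskell
import Data.Monoid (Sum(..))

-- The `((,) a)` Applicative behaves like a writer: the first
-- components are combined with `mappend` while the function in the
-- second component is applied.
applied :: (Sum Int, Int)
applied = (Sum 1, (+ 10)) <*> (Sum 2, 5)
-- (Sum 3, 15)
```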

#### Composing applicatives

In the very first section I wrote:

Haskell programmers prove large-scale properties exactly the same way we build large-scale programs:

- We build small proofs that we can verify correct in isolation
- We compose smaller proofs into larger proofs

I don't like to use the word compose lightly. In the context of category theory, compose has a very rigorous meaning, indicating composition of morphisms in some category. This final section will show that we can actually compose Monoid proofs in a very rigorous sense of the word.

We can define a category of Monoid proofs:

- Objects are types and their associated Monoid proofs
- Morphisms are Applicative Functors
- The identity morphism is the Identity applicative
- The composition operation is composition of Applicative Functors
- The category laws are isomorphisms instead of equalities

So in our Plugin example, we began with the proof that () was a Monoid and then composed three Applicative morphisms to prove that Plugin was a Monoid. I will use the following diagram to illustrate this:

```
+------------------------+
|                        |
| Legend:  * = Object    |
|                        |
|          v             |
|          |  = Morphism |
|          v             |
|                        |
+------------------------+

* `()` is a `Monoid`

v
| IO
v

* `IO ()` is a `Monoid`

v
| ((->) Char)
v

* `Char -> IO ()` is a `Monoid`

v
| IO
v

* `IO (Char -> IO ())` (i.e. `Plugin`) is a `Monoid`
```

Therefore, we were literally composing proofs together.

#### Conclusion

You can equationally reason at scale by decomposing larger proofs into smaller reusable proofs, the same way we decompose programs into smaller and more reusable components. There is no limit to how many proofs you can compose together, and therefore there is no limit to how complex a program you can tame using equational reasoning.

This post only gave one example of composing proofs within Haskell. The more you learn of the language, the more examples of composable proofs you will encounter. Another common example is automatically deriving Monad proofs by composing monad transformers.

As you learn Haskell, you will discover that the hard part is not proving things. Rather, the challenge is learning how to decompose proofs into smaller proofs, and you can cultivate this skill by studying category theory and abstract algebra. These mathematical disciplines teach you how to extract common and reusable proofs and patterns from what appears to be disposable and idiosyncratic code.

#### Appendix A - Missing Monoid instances

These Monoid instances from this post do not actually appear in the Haskell standard library:

```haskell
instance Monoid b => Monoid (IO b)
instance Monoid Int
```

The first instance was recently proposed on the Glasgow Haskell Users mailing list. In the short term you can work around its absence by writing the desired Monoid instance by hand, inserting a sufficient number of pures and liftA2s.

For example, suppose we wanted to provide a Monoid instance for Plugin. We would just newtype Plugin and write:

```haskell
newtype Plugin = Plugin { install :: IO (String -> IO ()) }

instance Monoid Plugin where
    mempty = Plugin (pure (pure (pure mempty)))

    mappend (Plugin p1) (Plugin p2) =
        Plugin (liftA2 (liftA2 (liftA2 mappend)) p1 p2)
```

This is exactly the instance the compiler would have derived for us.

Alternatively, you could define an orphan Monoid instance for IO, but this is generally frowned upon.

There is no default Monoid instance for Int because there are actually two possible instances to choose from:

```haskell
-- Alternative #1
instance Monoid Int where
    mempty  = 0
    mappend = (+)

-- Alternative #2
instance Monoid Int where
    mempty  = 1
    mappend = (*)
```

So instead, Data.Monoid sidesteps the issue by providing two newtypes to distinguish which instance we prefer:

```haskell
newtype Sum a = Sum { getSum :: a }
instance Num a => Monoid (Sum a)

newtype Product a = Product { getProduct :: a }
instance Num a => Monoid (Product a)
```
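For example, a quick sketch of the two newtypes in action, picking the Monoid by wrapping values in the matching constructor:

```haskell
import Data.Monoid (Sum(..), Product(..))

-- Pick the Monoid by wrapping values in the matching newtype:
total :: Int
total = getSum (Sum 2 <> Sum 3)               -- additive: 5

scaled :: Int
scaled = getProduct (Product 2 <> Product 3)  -- multiplicative: 6
```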

An even better solution is to use a semiring, which allows two Monoid instances to coexist for the same type. You can think of Haskell's Num class as an approximation of the semiring class:

```haskell
class Num a where
    fromInteger :: Integer -> a

    (+) :: a -> a -> a
    (*) :: a -> a -> a

    -- ... and other operations unrelated to semirings
```

Note that we can also lift the Num class over the Applicative class, exactly the same way we lifted the Monoid class. Here's the code:

```haskell
instance (Applicative f, Num a) => Num (f a) where
    fromInteger n = pure (fromInteger n)

    (+) = liftA2 (+)
    (*) = liftA2 (*)
    (-) = liftA2 (-)

    negate = fmap negate
    abs    = fmap abs
    signum = fmap signum
```

This lifting guarantees that if a obeys the semiring laws then so will f a. Of course, you will have to specialize the above instance to every concrete Applicative because otherwise you will get overlapping instances.
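Here is a sketch of one such specialization, to functions (relying on the standard Applicative instance for ((->) a)); every numeric operation then applies pointwise:

```haskell
import Control.Applicative (liftA2)

-- The lifted Num instance, specialized to functions to avoid
-- overlapping instances: every operation is applied pointwise.
instance Num b => Num (a -> b) where
    fromInteger = pure . fromInteger
    (+)    = liftA2 (+)
    (*)    = liftA2 (*)
    (-)    = liftA2 (-)
    negate = fmap negate
    abs    = fmap abs
    signum = fmap signum

-- `(+ 1) * (* 2)` is the function \x -> (x + 1) * (x * 2)
combined :: Int -> Int
combined = (+ 1) * (* 2)
```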

#### Appendix B

These are the proofs to establish that the following Monoid instance obeys the Monoid laws:

```haskell
instance (Applicative f, Monoid b) => Monoid (f b) where
    mempty  = pure mempty
    mappend = liftA2 mappend
```

... meaning that if f obeys the Applicative laws and b obeys the Monoid laws, then f b also obeys the Monoid laws.

Proof of the left identity law:

```haskell
mempty <> x

-- x <> y = mappend x y
= mappend mempty x

-- mappend = liftA2 mappend
= liftA2 mappend mempty x

-- mempty = pure mempty
= liftA2 mappend (pure mempty) x

-- liftA2 f x y = (pure f <*> x) <*> y
= (pure mappend <*> pure mempty) <*> x

-- Applicative law: pure f <*> pure x = pure (f x)
= pure (mappend mempty) <*> x

-- Eta conversion
= pure (\a -> mappend mempty a) <*> x

-- Monoid law: mappend mempty x = x
= pure (\a -> a) <*> x

-- id = \x -> x
= pure id <*> x

-- Applicative law: pure id <*> v = v
= x
```

Proof of the right identity law:

```haskell
x <> mempty

-- x <> y = mappend x y
= mappend x mempty

-- mappend = liftA2 mappend
= liftA2 mappend x mempty

-- mempty = pure mempty
= liftA2 mappend x (pure mempty)

-- liftA2 f x y = (pure f <*> x) <*> y
= (pure mappend <*> x) <*> pure mempty

-- Applicative law: u <*> pure y = pure (\f -> f y) <*> u
= pure (\f -> f mempty) <*> (pure mappend <*> x)

-- Applicative law: ((pure (.) <*> u) <*> v) <*> w = u <*> (v <*> w)
= ((pure (.) <*> pure (\f -> f mempty)) <*> pure mappend) <*> x

-- Applicative law: pure f <*> pure x = pure (f x)
= (pure ((.) (\f -> f mempty)) <*> pure mappend) <*> x

-- Applicative law: pure f <*> pure x = pure (f x)
= pure ((.) (\f -> f mempty) mappend) <*> x

-- `(.) f g` is just prefix notation for `f . g`
= pure ((\f -> f mempty) . mappend) <*> x

-- f . g = \x -> f (g x)
= pure (\x -> (\f -> f mempty) (mappend x)) <*> x

-- Apply the lambda
= pure (\x -> mappend x mempty) <*> x

-- Monoid law: mappend x mempty = x
= pure (\x -> x) <*> x

-- id = \x -> x
= pure id <*> x

-- Applicative law: pure id <*> v = v
= x
```

Proof of the associativity law:

```haskell
(x <> y) <> z

-- x <> y = mappend x y
= mappend (mappend x y) z

-- mappend = liftA2 mappend
= liftA2 mappend (liftA2 mappend x y) z

-- liftA2 f x y = (pure f <*> x) <*> y
= (pure mappend <*> ((pure mappend <*> x) <*> y)) <*> z

-- Applicative law: ((pure (.) <*> u) <*> v) <*> w = u <*> (v <*> w)
= (((pure (.) <*> pure mappend) <*> (pure mappend <*> x)) <*> y) <*> z

-- Applicative law: pure f <*> pure x = pure (f x)
= ((pure f <*> (pure mappend <*> x)) <*> y) <*> z
  where
    f = (.) mappend

-- Applicative law: ((pure (.) <*> u) <*> v) <*> w = u <*> (v <*> w)
= ((((pure (.) <*> pure f) <*> pure mappend) <*> x) <*> y) <*> z
  where
    f = (.) mappend

-- Applicative law: pure f <*> pure x = pure (f x)
= (((pure f <*> pure mappend) <*> x) <*> y) <*> z
  where
    f = (.) ((.) mappend)

-- Applicative law: pure f <*> pure x = pure (f x)
= ((pure f <*> x) <*> y) <*> z
  where
    f = (.) ((.) mappend) mappend

-- (.) f g = f . g
= ((pure f <*> x) <*> y) <*> z
  where
    f = ((.) mappend) . mappend

-- Eta conversion
= ((pure f <*> x) <*> y) <*> z
  where
    f x = (((.) mappend) . mappend) x

-- (f . g) x = f (g x)
= ((pure f <*> x) <*> y) <*> z
  where
    f x = (.) mappend (mappend x)

-- (.) f g = f . g
= ((pure f <*> x) <*> y) <*> z
  where
    f x = mappend . (mappend x)

-- Eta conversion
= ((pure f <*> x) <*> y) <*> z
  where
    f x y = (mappend . (mappend x)) y

-- (f . g) x = f (g x)
= ((pure f <*> x) <*> y) <*> z
  where
    f x y = mappend (mappend x y)

-- Eta conversion
= ((pure f <*> x) <*> y) <*> z
  where
    f x y z = mappend (mappend x y) z

-- Monoid law: mappend (mappend x y) z = mappend x (mappend y z)
= ((pure f <*> x) <*> y) <*> z
  where
    f x y z = mappend x (mappend y z)

-- (f . g) x = f (g x)
= ((pure f <*> x) <*> y) <*> z
  where
    f x y z = (mappend x . mappend y) z

-- Eta conversion
= ((pure f <*> x) <*> y) <*> z
  where
    f x y = mappend x . mappend y

-- (.) f g = f . g
= ((pure f <*> x) <*> y) <*> z
  where
    f x y = (.) (mappend x) (mappend y)

-- (f . g) x = f (g x)
= ((pure f <*> x) <*> y) <*> z
  where
    f x y = (((.) . mappend) x) (mappend y)

-- (f . g) x = f (g x)
= ((pure f <*> x) <*> y) <*> z
  where
    f x y = ((((.) . mappend) x) . mappend) y

-- Eta conversion
= ((pure f <*> x) <*> y) <*> z
  where
    f x = (((.) . mappend) x) . mappend

-- (.) f g = f . g
= ((pure f <*> x) <*> y) <*> z
  where
    f x = (.) (((.) . mappend) x) mappend

-- Lambda abstraction
= ((pure f <*> x) <*> y) <*> z
  where
    f x = (\k -> k mappend) ((.) (((.) . mappend) x))

-- (f . g) x = f (g x)
= ((pure f <*> x) <*> y) <*> z
  where
    f x = (\k -> k mappend) (((.) . ((.) . mappend)) x)

-- Eta conversion
= ((pure f <*> x) <*> y) <*> z
  where
    f = (\k -> k mappend) . ((.) . ((.) . mappend))

-- (.) f g = f . g
= ((pure f <*> x) <*> y) <*> z
  where
    f = (.) (\k -> k mappend) ((.) . ((.) . mappend))

-- Applicative law: pure f <*> pure x = pure (f x)
= (((pure g <*> pure f) <*> x) <*> y) <*> z
  where
    g = (.) (\k -> k mappend)
    f = (.) . ((.) . mappend)

-- Applicative law: pure f <*> pure x = pure (f x)
= ((((pure (.) <*> pure (\k -> k mappend)) <*> pure f) <*> x) <*> y) <*> z
  where
    f = (.) . ((.) . mappend)

-- Applicative law: ((pure (.) <*> u) <*> v) <*> w = u <*> (v <*> w)
= ((pure (\k -> k mappend) <*> (pure f <*> x)) <*> y) <*> z
  where
    f = (.) . ((.) . mappend)

-- Applicative law: u <*> pure y = pure (\k -> k y) <*> u
= (((pure f <*> x) <*> pure mappend) <*> y) <*> z
  where
    f = (.) . ((.) . mappend)

-- (.) f g = f . g
= (((pure f <*> x) <*> pure mappend) <*> y) <*> z
  where
    f = (.) (.) ((.) . mappend)

-- Applicative law: pure f <*> pure x = pure (f x)
= ((((pure g <*> pure f) <*> x) <*> pure mappend) <*> y) <*> z
  where
    g = (.) (.)
    f = (.) . mappend

-- Applicative law: pure f <*> pure x = pure (f x)
= (((((pure (.) <*> pure (.)) <*> pure f) <*> x) <*> pure mappend) <*> y) <*> z
  where
    f = (.) . mappend

-- Applicative law: ((pure (.) <*> u) <*> v) <*> w = u <*> (v <*> w)
= (((pure (.) <*> (pure f <*> x)) <*> pure mappend) <*> y) <*> z
  where
    f = (.) . mappend

-- Applicative law: ((pure (.) <*> u) <*> v) <*> w = u <*> (v <*> w)
= ((pure f <*> x) <*> (pure mappend <*> y)) <*> z
  where
    f = (.) . mappend

-- (.) f g = f . g
= ((pure f <*> x) <*> (pure mappend <*> y)) <*> z
  where
    f = (.) (.) mappend

-- Applicative law: pure f <*> pure x = pure (f x)
= (((pure f <*> pure mappend) <*> x) <*> (pure mappend <*> y)) <*> z
  where
    f = (.) (.)

-- Applicative law: pure f <*> pure x = pure (f x)
= ((((pure (.) <*> pure (.)) <*> pure mappend) <*> x) <*> (pure mappend <*> y)) <*> z

-- Applicative law: ((pure (.) <*> u) <*> v) <*> w = u <*> (v <*> w)
= ((pure (.) <*> (pure mappend <*> x)) <*> (pure mappend <*> y)) <*> z

-- Applicative law: ((pure (.) <*> u) <*> v) <*> w = u <*> (v <*> w)
= (pure mappend <*> x) <*> ((pure mappend <*> y) <*> z)

-- liftA2 f x y = (pure f <*> x) <*> y
= liftA2 mappend x (liftA2 mappend y z)

-- mappend = liftA2 mappend
= mappend x (mappend y z)

-- x <> y = mappend x y
= x <> (y <> z)
```

#### Appendix C: Monoid key logging

Here is the complete program for a key logger with a Monoid-based plugin system:

import Control.Applicative (pure, liftA2)
import Control.Monad (forever)
import Data.Monoid
import System.IO

instance Monoid b => Monoid (IO b) where
    mempty  = pure mempty
    mappend = liftA2 mappend

type Plugin = IO (Char -> IO ())

logTo :: FilePath -> Plugin
logTo filePath = do
    handle <- openFile filePath WriteMode
    return (hPutChar handle)

main = do
    hSetEcho stdin False
    handleChar <- logTo "file1.txt" <> logTo "file2.txt"
    forever $ do
        c <- getChar
        handleChar c
        putChar c
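The associativity equation proved in Appendix B can also be spot-checked at concrete values. Here is a minimal sketch: the `App` newtype is illustrative (not from the post), used so the `liftA2`-based instance does not clash with the instances that modern `base` already provides for `IO`:

```haskell
import Control.Applicative (liftA2)

-- Illustrative wrapper carrying the liftA2-based Monoid from the proof.
newtype App a = App { runApp :: IO a }

instance Semigroup a => Semigroup (App a) where
    App x <> App y = App (liftA2 (<>) x y)

instance Monoid a => Monoid (App a) where
    mempty = App (pure mempty)

main :: IO ()
main = do
    let x = App (pure "x")
        y = App (pure "y")
        z = App (pure "z")
    -- Both groupings should produce the same action with the same result.
    lhs <- runApp ((x <> y) <> z)
    rhs <- runApp (x <> (y <> z))
    print (lhs == rhs, lhs)  -- (True,"xyz")
```

A test like this is no substitute for the proof, but it is a cheap sanity check that the instance was transcribed correctly.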

### This just in, from my local GHC/Cabal checkout... (re: Cabal Hell)

Transcript: http://lpaste.net/107785

Commentary: Here's what the test is doing:

- It builds and installs a package p-1.0, which contains "module P where p = 1"
- It builds and installs a package q, depending on p, which contains "module Q where import P; q = p + 10"
- It builds and installs a package p-1.1, which contains "module P where p = 1"
- It **non-destructively reinstalls** package q, but this time building against p-1.1 instead of p-1.0
- It builds an executable r (building against p-1.1 and q-1.0)
- With the same package database, we specify constraint that p==1.0, and rebuild executable r, which prints the value of q (building against p-1.0 and the other instance of q-1.0)
- It tests the two executables to make sure they behave as expected

There is still a minor pile of kinks to work out (in particular, only cabal configure understands this; cabal install's dependency solver also has to be clued in), but from here, it should be possible to do away with one of the major use-cases for Cabal sandboxes, put everything in the same package database, and support a 'cabal upgrade' command.

P.S. It's also possible to link q-1.0(p-1.0) and q-1.0(p-1.1) together in the same program, although I don't think this feature will be too useful without Backpack-like holes.

submitted by ezyang[link] [13 comments]

### Failure compiling ghc-mtl with ghc-7.8.{2,3}

### Creating a Point

### How many Haskell programmers are there?

### ANN: hackage-diff - Compare the public API of different versions of a Hackage library

### Mark Jason Dominus: Similarity analysis of quilt blocks

As I've discussed elsewhere, I once wrote a program to enumerate all the possible quilt blocks of a certain type. The quilt blocks in question are, in quilt jargon, sixteen-patch half-square triangles. A half-square triangle, also called a “patch”, is two triangles of fabric sewn together, like this:

Then you sew four of these patches into a four-patch, say like this:

Then to make a sixteen-patch block of the type I was considering, you take four identical four-patch blocks, and sew them together with rotational symmetry, like this:

It turns out that there are exactly 72 different ways to do this. (Blocks equivalent under a reflection are considered the same, as are blocks obtained by exchanging the roles of black and white, which are merely stand-ins for arbitrary colors to be chosen later.) Here is the complete set of 72:

It's immediately clear that some of these resemble one another, sometimes so strongly that it can be hard to tell how they differ, while others are very distinctive and unique-seeming. I wanted to make the computer classify the blocks on the basis of similarity.

My idea was to try to find a way to get the computer to notice which blocks have distinctive components of one color. For example, many blocks have a distinctive diamond shape in the center.

Some have a pinwheel like this:

which also has the diamond in the middle, while others have a different kind of pinwheel with no diamond:

I wanted to enumerate such components and ask the computer to list which blocks contained which shapes; then group them by similarity, the idea being that that blocks with the same distinctive components are similar.

The program suite uses a compact notation of blocks and of shapes that makes it easy to figure out which blocks contain which distinctive components.

Since each block is made of four identical four-patches, it's enough just to examine the four-patches. Each of the half-square triangle patches can be oriented in two ways:

Here are two of the 12 ways to orient the patches in a four-patch:

Each 16-patch is made of four four-patches, and you must imagine that the
four-patches shown above are in the *upper-left* position in the
16-patch. Then symmetry of the 16-patch block means that triangles with the
same label are in positions that are symmetric with respect to the
entire block. For example, the two triangles labeled b are on
opposite sides of the block's northwest-southeast diagonal. But there
is no symmetry of the full 16-patch block that carries triangle d to
triangle g, because d is on the edge of the block, while g is in the interior.

Triangles must be colored opposite colors if they are part of the same patch, but other than that there are no constraints on the coloring.

A block might, of course, have patches in both orientations:

All the blocks with diagonals oriented this way are assigned descriptors made from the letters bbdefgii.

Once you have chosen one of the 12 ways to orient the diagonals in the four-patch, you still have to color the patches. A descriptor like bbeeffii describes the orientation of the diagonal lines in the squares, but it does not describe the way the four patches are colored; there are between 4 and 8 ways to color each sort of four-patch. For example, the bbeeffii four-patch shown earlier can be colored in six different ways:

In each case, all four diagonals run from northwest to southeast. (All other ways of coloring this four-patch are equivalent to one of these under one or more of rotation, reflection, and exchange of black and white.)

We can describe a patch by listing the descriptors of the eight triangles, grouped by which triangles form connected regions. For example, the first block above is:

b/bf/ee/fi/i

because there's an isolated white b triangle, then a black parallelogram made of a b and an f patch, then a white triangle made from the two white e triangles then another parallelogram made from the black f and i, and finally in the middle, the white i. (The two white e triangles appear to be separated, but when four of these four-patches are joined into a 16-patch block, the two white e patches will be adjacent and will form a single large triangle: )

The other five bbeeffii four-patches are, in the same order they are shown above:

b/b/e/e/f/f/i/i
b/b/e/e/fi/fi
b/bfi/ee/f/i
bfi/bfi/e/e
bf/bf/e/e/i/i

All six have bbeeffii, but grouped differently depending on the colorings. The second one (b/b/e/e/f/f/i/i) has no regions with more than one triangle; the fifth (bfi/bfi/e/e) has two large regions of three triangles each, and two isolated triangles. In the latter four-patch, the bfi in the descriptor has three letters because the patch has a corresponding distinctive component made of three triangles.

I made up a list of the descriptors for all 72 blocks; I think I did this by hand. (The work directory contains a blocks file that maps blocks to their descriptors, but the Makefile does not say how to build it, suggesting that it was not automatically built.) From this list one can automatically extract a list of descriptors of interesting shapes: an interesting shape is two or more letters that appear together in some descriptor. (Or it can be the single letter j, which is exceptional; see below.) For example, bffh represents a distinctive component. It can only occur in a patch that has a b, two fs, and an h, like this one:

and it will only be significant if the b, the two fs, and the h are the same color:

in which case you get this distinctive and interesting-looking hook component.

There is only one block that includes this distinctive hook component; it has descriptor b/bffh/ee/j, and looks like this: . But some of the distinctive components are more common. The ee component represents the large white half-diamonds on the four sides. A block with "ee" in its descriptor always looks like this:

and the blocks formed from such patches always have a distinctive half-diamond component on each edge, like this:

(The stippled areas vary from block to block, but the blocks with ee in their descriptors always have the half-diamonds as shown.)

The blocks listed at http://hop.perl.plover.com/quilt/analysis/images/ee.html all have the ee component. There are many differences between them, but they all have the half-diamonds in common.

Other distinctive components have similar short descriptors. The two pinwheels I mentioned above are gh and fi, respectively; if you look at the list of gh blocks and the list of fi blocks you'll see all the blocks with each kind of pinwheel.

Descriptor j is an exception. It makes an interesting shape all by itself, because any block whose patches have j in their descriptor will have a distinctive-looking diamond component in the center. The four-patch looks like this:

so the full sixteen-patch looks like this:

where the stippled parts can vary. A look at the list of blocks with component j will confirm that they all have this basic similarity.

I had made a list of the descriptors for each of the 72 blocks, and from this I extracted a list of the descriptors for interesting component shapes. Then it was only a matter of finding the component descriptors in the block descriptors to know which blocks contained which components; if two blocks share two different distinctive components, they probably look somewhat similar.

Then I sorted the blocks into groups, where two blocks were in the same group if they shared two distinctive components. The resulting grouping lists, for each block, which other blocks have at least two shapes in common with it. Such blocks do indeed tend to look quite similar.
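The grouping step just described can be sketched in a few lines. This is a simplified illustration (not the original program suite, whose descriptors carry more structure): a block is a list of connected-region descriptors, and two blocks count as similar when they share at least two distinctive components.

```haskell
import Data.List (intersect)

-- A block descriptor is a list of connected-region strings, e.g. the
-- b/bffh/ee/j block becomes ["b","bffh","ee","j"].
type Block = [String]

-- Distinctive components: two or more letters together, or the
-- exceptional single letter "j".
distinctive :: Block -> [String]
distinctive = filter (\c -> length c >= 2 || c == "j")

-- Two blocks are "similar" if they share at least two distinctive components.
similar :: Block -> Block -> Bool
similar a b = length (distinctive a `intersect` distinctive b) >= 2

main :: IO ()
main = do
    print (similar ["b","bf","ee","fi","i"] ["bf","bf","ee","j"])  -- shares bf and ee: True
    print (similar ["b","bffh","ee","j"] ["bfi","bfi","e","e"])    -- no shared components: False
```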

This strategy was actually the second thing I tried; the first thing didn't work out well. (I forget just what it was, but I think it involved finding polygons in each block that had white inside and black outside, or vice versa.) I was satisfied enough with this second attempt that I considered the project a success and stopped work on it.

The complete final results were:

- This tabulation of blocks that are somewhat similar
- This tabulation of blocks that are distinctly similar (This is *the* final product; I consider this a sufficiently definitive listing of “similar blocks”.)
- This tabulation of blocks that are extremely similar

And these tabulations of all the blocks with various distinctive components: bd bf bfh bfi cd cdd cdf cf cfi ee eg egh egi fgh fh fi gg ggh ggi gh gi j

It may also be interesting to browse the work directory.

### ghc 7.8 and ghc-mtl

I'm trying to build ghc-mtl-1.2.1.0 with ghc-7.8.3, getting these errors. Any ideas on how to fix those? What exactly causes them?

submitted by ihamsa[link] [2 comments]

### Export a type and its record-syntax functions, but not the constructor

I have the follow data type:

data X = X { getName :: String, getAddress :: Address }

When exporting this, I would like to export the type X and the functions getName and getAddress without exporting the data constructor X.

Is there a simple syntax to do this without having to name every single function and type in the module export statement?

I tried googling this, but I am only finding examples of exporting just the type, or exporting certain fields of the constructor, i.e.:

module ... ( X(getName, getAddress) ) where ...

will do what I want, but that means I have to explicitly name every function and type. Is there any syntactic sugar out there that would do this?
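For reference, the explicit form from the question does work as described. A minimal self-contained sketch (Address is swapped for String, and the smart constructor mkX is a hypothetical addition, since hiding the data constructor usually means exporting some other way to build values):

```haskell
-- The export list names the type, its field accessors, and a smart
-- constructor, but not the data constructor X itself.
module Main (main, X (getName, getAddress), mkX) where

data X = X { getName :: String, getAddress :: String }

-- Hypothetical smart constructor exported in place of X.
mkX :: String -> String -> X
mkX = X

main :: IO ()
main = putStrLn (getName (mkX "Ada" "Cambridge"))
```

Note that the wildcard form `X(..)` would export the data constructor along with the fields, which is exactly what the question wants to avoid.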

submitted by Die-Nacht[link] [5 comments]

### hackage-diff: Compare the public API of different versions of a Hackage library

A couple of days ago, in the GHC 7.8.3 release thread, Michael Snoyman and Austin Seipp had the following exchange:

Now, what would be nice if we had some kind of report of all API changes between releases, but that's really a general purpose tool that would make life nicer for many different package users, not just ghc the library.

Can somebody just create a kickstarter for this or something already? I've been wishing for this tool for forever.

I found myself agreeing with that a lot. I constantly wonder what exactly has changed between two releases of a library. I thought, maybe I can write a tool like that, how hard can it be?

Slightly harder than I thought, as usual. But yeah, here it is, my first shot at the problem:

https://github.com/blitzcode/hackage-diff

It actually works pretty well in practice! Please let me know if this is useful for you, or if you can think of a way to make it nicer.

submitted by SirRockALot1[link] [31 comments]

### Being in the functional side, like a badass!!!

This is probably a weird question, not quite related to programming skills but rather to code aesthetics. I'm looking for some kind of Haskell snippet that could make a good forearm tattoo. I need something meaningful, and somehow quite a sight (I chose Haskell because of its elegant syntax!). Any advice? (I'm not looking for some sort of tattoo guru... and I don't want to post anything in some kind of tattoo subreddit. I simply want to know what Haskell users think about it!)

submitted by Dyamon[link] [39 comments]

### Thoughts on Elixir?

Has anybody in the Haskell community been using or playing with Elixir? It's a dynamically typed, compiled functional language on top of the Erlang VM, adding support for macros, list comprehensions, and features from Ruby and Python to core Erlang functionality, plus a Ruby-friendly syntax. I am curious how favorably it compares to Haskell for developing projects. Is there a significant level of overlap (more so than with Erlang) between Elixir and Haskell? I am completely new to functional programming, and am considering learning both Haskell and Elixir. My reasons are that the training might be complementary and synergistic, and that I might ultimately have more options in undertaking a future project in a functional style. Haskell is obviously mature and undergoing a bit of a renaissance lately, while Elixir is still in beta (not 1.0-ready), but Elixir seems poised to be the new Ruby of functional development, at least if you drink their Kool-Aid. Any thoughts on whether learning Haskell and Elixir together is not so great an idea? Any thoughts on the quality of Elixir as a functional language compared to Haskell?

submitted by RaymondWies[link] [10 comments]

### No `base-4.7.0.1` in hackage

### Cabal question

### Dominic Steinitz: Fun with (Kalman) Filters Part I

Suppose we wish to estimate the mean of a sample drawn from a normal distribution. In the Bayesian approach, we know the prior distribution for the mean (it could be a non-informative prior) and then we update this with our observations to create the posterior, the latter giving us improved information about the distribution of the mean. In symbols

Typically, the samples are chosen to be independent, and all of the data is used to perform the update; but, given independence, there is no particular reason to do that: updates can be performed one at a time and the result is the same, nor is the order of the updates important. Being a bit imprecise, we have

The standard notation in Bayesian statistics is to denote the parameters of interest as θ and the observations as x. For reasons that will become apparent in later blog posts, let us change notation and label the parameters as x and the observations as y.

Let us take a very simple example of a prior where is known and then sample from a normal distribution with mean and variance for the -th sample where is known (normally we would not know the variance but adding this generality would only clutter the exposition unnecessarily).

The likelihood is then

As we have already noted, instead of using this with the prior to calculate the posterior, we can update the prior with each observation separately. Suppose that we have obtained the posterior given samples (we do not know this is normally distributed yet but we soon will):

Then we have

Writing

and then completing the square we also obtain
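Concretely, for this normal case the update takes a precision-weighted form. The following is a reconstruction (consistent with the updateEstimate code later in the post), writing $\hat{x}_n$ and $\sigma_n^2$ for the posterior mean and variance after $n$ samples and $c_{n+1}^2$ for the variance of the next observation $y_{n+1}$:

```latex
\frac{1}{\sigma_{n+1}^2} = \frac{1}{\sigma_n^2} + \frac{1}{c_{n+1}^2},
\qquad
\hat{x}_{n+1} = \sigma_{n+1}^2 \left( \frac{y_{n+1}}{c_{n+1}^2} + \frac{\hat{x}_n}{\sigma_n^2} \right)
```

That is, precisions add, and the new mean is the precision-weighted average of the new observation and the old estimate.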

More Formally

Now let’s be a bit more formal about conditional probability and use the notation of -algebras to define and where , is as before and . We have previously calculated that and that and the tower law for conditional probabilities then allows us to conclude . By Jensen’s inequality, we have

Hence is bounded in and therefore converges in and almost surely to . The noteworthy point is that if if and only if converges to 0. Explicitly we have

which explains why we took the observations to have varying and known variances. You can read more in Williams’ book (Williams 1991).

A Quick Check

We have reformulated our estimation problem as a very simple version of the celebrated Kalman filter. Of course, there are much more interesting applications of this but for now let us try “tracking” the sample from the random variable.

> {-# OPTIONS_GHC -Wall #-}
> {-# OPTIONS_GHC -fno-warn-name-shadowing #-}
> {-# OPTIONS_GHC -fno-warn-type-defaults #-}
> {-# OPTIONS_GHC -fno-warn-unused-do-bind #-}
> {-# OPTIONS_GHC -fno-warn-missing-methods #-}
> {-# OPTIONS_GHC -fno-warn-orphans #-}

> module FunWithKalmanPart1 (
>     obs
>   , nObs
>   , estimates
>   , uppers
>   , lowers
>   ) where
>
> import Data.Random.Source.PureMT
> import Data.Random
> import Control.Monad.State

> var, cSquared :: Double
> var = 1.0
> cSquared = 1.0
>
> nObs :: Int
> nObs = 100

> createObs :: RVar (Double, [Double])
> createObs = do
>   x  <- rvar (Normal 0.0 var)
>   ys <- replicateM nObs $ rvar (Normal x cSquared)
>   return (x, ys)
>
> obs :: (Double, [Double])
> obs = evalState (sample createObs) (pureMT 2)
>
> updateEstimate :: (Double, Double) -> (Double, Double) -> (Double, Double)
> updateEstimate (xHatPrev, varPrev) (y, cSquared) = (xHatNew, varNew)
>   where
>     varNew  = recip (recip varPrev + recip cSquared)
>     xHatNew = varNew * (y / cSquared + xHatPrev / varPrev)
>
> estimates :: [(Double, Double)]
> estimates = scanl updateEstimate (y, cSquared) (zip ys (repeat cSquared))
>   where
>     y  = head $ snd obs
>     ys = tail $ snd obs
>
> uppers :: [Double]
> uppers = map (\(x, y) -> x + 3 * (sqrt y)) estimates
>
> lowers :: [Double]
> lowers = map (\(x, y) -> x - 3 * (sqrt y)) estimates

Bibliography

Williams, David. 1991. *Probability with Martingales*. Cambridge University Press.