# Blogs

## Is logging really this tough with Haskell?

I quote: http://programming.reddit.com/info/5ze57/comments

> Haskell is too weird for me (if I want to add a logging statement, I don't want to rewrite ten functions to use monads instead of the types they previously had).

## md5 :: LazyByteString -> MD5Ctx

# Introduction

In June 2007 I developed a Haskell MD5 implementation intended to be competitive with the 'C' version while still remaining readable. It is a lazy ByteString implementation that makes heavy use of inlining, unboxing and strictness annotations. During this process I [re?]discovered some quirks of GHC optimizations and Data.Binary.

# The Facts

The first relatively operational version used unsafePerformIO to extract the Word32s from the ByteString. Minor modifications were made to use Data.Binary instead - this is, after all, an intended use of Data.Binary.

There are four incarnations: a manually unrolled version and a rolled (foldl') version, each in two variants - one that uses Data.Binary to extract the Word32s and one that uses unsafePerformIO + Ptr operations.

I standardized the benchmark as the time it takes to hash a 200MB file (in seconds) and show performance relative to the standard md5sum.

### GHC 6.6.1 results

```
MD5.unroll.unsafe:  5s     6x
MD5.unroll.Binary: 10s    12x
MD5.rolled.Binary: 45s    58x
MD5.rolled.unsafe: 27s    35x
md5sum:             0.78s  1x (by definition)
```

### UnsafeIO v. Binary

Sadly, changing the (getNthWord32 :: Int -> ByteString -> Word32) function from unsafePerformIO to Data.Binary made the hash take twice as long. I had hoped Data.Binary was optimized to the same degree as its unsafePerformIO counterpart.
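For illustration, here is a plausible reconstruction (not the actual pureMD5 source) of what the Data.Binary variant of such a function looks like: `getNthWord32` reads the nth little-endian Word32 out of a lazy ByteString.

```
import qualified Data.ByteString.Lazy as L
import Data.Binary.Get (runGet, getWord32le)
import Data.Word (Word32)

-- Read the nth little-endian Word32 from a lazy ByteString
-- (MD5 treats its input as little-endian 32-bit words).
getNthWord32 :: Int -> L.ByteString -> Word32
getNthWord32 n = runGet getWord32le . L.drop (fromIntegral (4 * n))
```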

### Manual unrolling? This isn't (insert hated language/compiler)

As you can see, the manual unrolling saved quite a bit of time. Why did unrolling save computation time? I had optimistically hoped GHC would eliminate the unneeded arrays/accesses on its own (among other optimizations).

### Profiling funkiness

When profiling (-p -auto-all), the 'Binary' version attributes much of the memory to 'applyMD5Rounds', while the 'unsafe' version attributes more to the ff, gg, hh and ii functions. I am guessing the 'unsafe' version is correct... is the profile of the program using Data.Binary wrong?

### Dreams of Supero

I am not trying to be Data.Binary/GHC centric. As a matter of fact, I look forward to seeing what YHC/Supero would do - I just need to be able to get it (and know how to **use** it).

## Observations from core (-ddump-simpl)

I am not patient or knowledgeable enough to know if my understanding of core is correct... but that doesn't stop me from interpreting it. It looks like the programs are boxing/unboxing the context between every fold of foldl'! If this is true, it explains some of the performance issues. Folding over rounds would cost 64 box/unboxes per block in the rolled version (once for every byte hashed). Folding between each round would cost one box/unbox per block even in the unrolled version (32 million box/unboxings when hashing 200MB). If this is true, it is an obvious place for performance improvement.
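The usual remedy for that kind of boxing is to make the folded context strict in all of its state words, so GHC's strictness analysis can keep them unboxed across iterations of foldl'. A generic sketch of the idea (this is illustrative only, not the pureMD5 source; `Ctx`, `step` and `runCtx` are toy names):

```
import Data.List (foldl')
import Data.Word (Word32)

-- Strict fields let GHC unbox the four state words across fold
-- steps instead of re-boxing the whole context every iteration.
data Ctx = Ctx !Word32 !Word32 !Word32 !Word32

-- A toy "round": rotate the state words, mixing in the input word.
step :: Ctx -> Word32 -> Ctx
step (Ctx a b c _) w = Ctx (a + w) a b c

runCtx :: [Word32] -> Ctx
runCtx = foldl' step (Ctx 0 0 0 0)
```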

# Minor Alterations With Unexpected Results

Eliminating md5Finalize resulted in a sub-2s run (>2x speed increase, ~3x slower than 'C'). Finalize only runs once (at the end of the hash computation) and is profiled at 1 tick. The only explanation I can see is that md5Finalize's use of md5Update complicates the optimization. Inlining md5Update doesn't help, and neither does making/using identical copies of md5Update (md5Update') and all sub-functions.

Edit: Benchmark includes nanoMD5 too!

### GHC 6.8.0.20070914 Results

```
                    Time    v. 'C'   v. GHC 6.6.1
md5.unroll.unsafe   5s      6x       1x
md5.roll.unsafe     17s     21x      0.63x
md5.nano            0.9s    1.15x    -
```

So I see a significant improvement in the rolled version thanks to everyone involved in GHC.

Any suggestions on how to close the remaining performance gap would be welcome.

### Style

Yes, I know, it looks ugly! I don't feel like cleaning it up much right now. If someone voices that they care, that would be a motivator.

# Summary (new from edit)

# Footer (for the really curious)

Hardware: 2.2GHz AMD x86_64 (Athlon 3400+) with 1GB of memory. Software: Linux 2.6.17, GHC (6.6.1 and 6.8.0 as indicated), md5sum 5.97.

Flags: `-O2 -funfolding-use-threshold66 -funfolding-creation-threshold66 -funfolding-update-in-place -fvia-c -optc -funroll-all-loops -optc-ffast-math`

## CODE

EDIT: Now on darcs:

```
darcs get http://code.haskell.org/~tommd/pureMD5
```

- TomMD's blog

## The algorithm for Discordian text encryption

```
module Crypt_Discordian where

import List

vowel_list = "aeiouAEIOU"

is_vowel c = c `elem` vowel_list

move_vowels lis = move_vowels' lis [] []

move_vowels' [] c v = v ++ c
move_vowels' (x:xs) c v
    | is_vowel x = move_vowels' xs c (x:v)
    | otherwise  = move_vowels' xs (x:c) v

remove_spaces str = filter (\x -> x /= ' ') str

encrypt str = List.sort $ move_vowels $ remove_spaces str
```

```
{-
The algorithm for Discordian text encryption is given at:

    http://www.principiadiscordia.com/book/78.php

After implementing this, I realized that all the early steps are a
farce. But anyway, here is the algorithm in case you don't enjoy
tilting your head to read a page:

Step 1. Write out message (HAIL ERIS) and put all vowels at the end
        (HLRSAIEI)
Step 2. Reverse order (IEIASRLH)
Step 3. Convert to numbers (9-5-9-1-19-18-12-8)
Step 4. Put into numerical order (1-5-8-9-9-12-18-19)
Step 5. Convert back to letters (AEHIILRS)

This cryptographic cypher code is GUARANTEED TO BE 100% UNBREAKABLE
.. so says the Principia Discordia. But I think we can generate and
test to break it.

Many thanks to kosmikus and Pseudonym for their help in developing
this module.
-}
```

```
{-
Discordians are a group of people who believe that spirituality is as
much about disharmony as it is about harmony. So, in their primary
book, Principia Discordia:

    http://principiadiscordia.com/

you never know what is serious and what is play. The Wikipedia has
more to say on them:

    http://en.wikipedia.org/wiki/Discordian
-}
```
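As a quick sanity check, the worked example from the Principia can be reproduced. The snippet below is a self-contained restatement of `encrypt` (using Data.List rather than the legacy List module, so it compiles on its own):

```
import Data.List (sort)

-- Self-contained restatement of Crypt_Discordian.encrypt:
-- drop spaces, move vowels to the front (both groups reversed), sort.
encrypt :: String -> String
encrypt = sort . moveVowels . filter (/= ' ')
  where
    moveVowels = go [] []
    go c v []     = v ++ c
    go c v (x:xs)
      | x `elem` "aeiouAEIOU" = go c (x:v) xs
      | otherwise             = go (x:c) v xs

main :: IO ()
main = putStrLn (encrypt "HAIL ERIS")  -- prints AEHIILRS, matching Step 5
```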

- metaperl's blog

## My Haskell Emacs configuration

I generally like some of the features that come with haskell-mode for Emacs. But I cannot reconcile my style with that of the "smart" indentation mode. I'm not too keen on the default indentation function of Emacs either, at least, not for Haskell code. I use the basic tab-to-tab-stop function but it needs a little tweaking to bring it down to reasonable levels of indentation.

Tabs should never be used.

```
(setq-default indent-tabs-mode nil)
```

I like align-regexp, a neat tool from Emacs 22.

```
(global-set-key (kbd "C-x a r") 'align-regexp)
```

Basic haskell-mode loading:

```
(add-to-list 'load-path "path/to/haskell-mode")
(load-library "haskell-site-file")
(add-to-list 'auto-mode-alist '("\\.hs\\'" . haskell-mode))
```

My indentation settings. I wrote my own "newline and indent" function which brings any code you split onto a new line back up to the same indentation level it was at previously.

```
(remove-hook 'haskell-mode-hook 'turn-on-haskell-indent)

;; Just use tab-stop indentation, 2-space tabs
(require 'cl)  ; for `loop'
(add-hook 'haskell-mode-hook
          (lambda ()
            (turn-on-haskell-doc-mode)
            (turn-on-haskell-simple-indent)
            (setq indent-line-function 'tab-to-tab-stop)
            (setq tab-stop-list
                  (loop for i from 2 upto 120 by 2 collect i))
            (local-set-key (kbd "RET") 'newline-and-indent-relative)))

(defun newline-and-indent-relative ()
  (interactive)
  (newline)
  (indent-to-column (save-excursion
                      (forward-line -1)
                      (back-to-indentation)
                      (current-column))))
```

And just in case you were wondering how to tweak syntax highlighting colors in your .emacs, here's what I do:

```
(set-face-bold-p 'font-lock-builtin-face nil)
(set-face-bold-p 'font-lock-comment-face nil)
(set-face-bold-p 'font-lock-function-name-face nil)
(set-face-bold-p 'font-lock-keyword-face nil)
(set-face-bold-p 'font-lock-variable-name-face nil)

(set-face-foreground 'font-lock-builtin-face "cyan")
(set-face-foreground 'font-lock-comment-face "pale green")
(set-face-foreground 'font-lock-function-name-face "green")
(set-face-foreground 'font-lock-variable-name-face "pale green")
```

I think it's fairly self-explanatory. You can play around with the values and find what you like. `M-x list-colors-display` is a handy tool to list all the possible colors along with examples.

- mrd's blog

## Strongly Specified Functions

### How to make writing easy functions hard.

Kidding aside, a strongly specified function in Coq not only computes but simultaneously provides a proof of some property. The construction of the proof is integrated with the body of the function. This differentiates it from "weakly" specified functions, which are written separately from the proof.

Proving properties about functions is one of the main practical uses of Coq. Unfortunately, it's more difficult than simply writing code. But it is more interesting (and useful) than, say, simple proofs about the monad laws. One thing I learned, the hard way, is that it's better to start small.

The first example to talk about is reverse. It has a simple inductive definition, so defining it isn't a big deal, as it can be for some functions. It uses basic structural induction on lists, so I don't need to define my own well-founded induction principle. Hopefully, it illustrates the meaning of "integrated computation and proof."

I've split the property up into two inductive definitions. The first one is the actual "reversed" relation between two lists. Notice that it is not necessarily in `Set`. The second definition is what actually gets constructed by the function. It says: in order to construct a reversed list of `xs`, you need to supply a list `xs'` and a proof of `reversed xs xs'`.
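The post's original listing is not reproduced here, but from the description the two definitions might look something like this (a reconstruction, not the author's exact code; the constructor names `reversed_nil`, `reversed_cons` and `rev_intro` are guesses apart from `reversed_nil`, which the text mentions):

```
Section Reverse.
Variable A : Set.

(* The "reversed" relation between two lists; it lives in Prop, not Set. *)
Inductive reversed : list A -> list A -> Prop :=
| reversed_nil  : reversed nil nil
| reversed_cons : forall (x : A) (xs xs' : list A),
    reversed xs xs' -> reversed (x :: xs) (xs' ++ x :: nil).

(* What the function constructs: a list xs' plus a proof of reversed xs xs'. *)
Inductive reverse_spec (xs : list A) : Set :=
| rev_intro : forall xs' : list A, reversed xs xs' -> reverse_spec xs.

End Reverse.
```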

The definition is slightly different from other functions you may have seen in Coq (or not). It actually looks more like a theorem proof than a function. This is a good thing.

I use the `refine` tactic extensively now. This tactic allows you to specify an exact proof term, like `exact`, but also to leave "jokers" to be filled in by further elaboration. The two spots I've left correspond to the base case and the inductive case, which you should be able to figure out from the `match`. Side note: the `return` type-annotation is necessary; otherwise type reconstruction won't figure it out.

At this point, I've told Coq that there is a `nil` case and a `cons` case. The `nil` case is easy, so I just supplied an exact proof term. You can read that as saying: "The reverse of `nil` is `nil`, and a proof of this is `reversed_nil`."

For the inductive case, I will use `refine` again to drill deeper into the function and the proof. Notice that the recursive call is made with `match`. This allows me to separate the recursively computed value `xs'` from the usage of the induction hypothesis `Hxs'`. There is a little trap here -- if you use automation prior to making the recursive call you may get a "Proof completed" message. But you won't be able to save the proof, because the termination checker will yell at you. The same thing happens if you incorrectly invoke the recursion.

The "Proof completed" message isn't bogus -- it's just that the function the `auto` tactic had proven was not the function for which we were looking.

The `refine` statement provides the necessary hint to the theorem prover. I construct the returned data in a separate `refine` just to illustrate how you can drill down with the tool. This just leaves the simple proof that `xs'++(x::nil)` is, in fact, the reverse.

`Recursive Extraction` pulls out all the computation-related bits (the stuff in `Set`) while leaving behind the proof-related bits. If you want it in Haskell, as always, change the language with `Extraction Language Haskell.`

(All examples were written using Coq 8.1pl1)

- mrd's blog

## Revisiting board-building in chess

So, in my last blog, we looked at the beginnings of a chess engine. That was nearly 2 months ago. I've been kicking around some possibilities for the last few months, trying to come up with a somewhat elegant solution. Ultimately, I've wound up completely refactoring what I started with. I'll be posting more on that here within the next few weeks, but I'm really excited by how one piece came out, so I'd like to show that part at least.

Last time, I mentioned FEN notation for the state of a chess game. Let's look at that in a bit more detail. Here's the FEN notation for the start of a game:

```
rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1
```

You can see, there are 6 space-separated fields in this string. What I want to write is this function:

```
loadFEN :: String -> GameState
```

...which constructs the state of the game from the FEN. Now, this is notoriously tedious in most languages, but the solution in Haskell looks quite elegant to me.

So, let's see. Each field in the FEN is going to tell us something about how to construct the GameState. The first field tells us where the pieces are, the second tells us whose turn it is, the third deals with castling statuses, etc. So we can start with an empty state, and apply multiple functions to it, based on the fields, to build up a new state. That sounds like a fold pattern to me!

However, this seems to be a bit tricky at first. We'll want to apply a different function for each field. That is, the first function will look at that first field and build up the actual squares on the board with pieces. The second function will parse the indicator of whose turn it is, and make that change to the state, etc. So let's start by zipping these functions up with the fields they correspond with:

```
loadFEN fen = foldl apply emptyGameState (zip funcs fields)
    where fields = words fen
          funcs  = [parseBoard, parseCastleStatus]
```

Here, I've just listed the functions for the first 2 fields, to give us an idea. The only mystery that's left is this apply function. Let's look at its type:

```
apply :: GameState -> (String -> GameState -> GameState, String) -> GameState
```

That is, it takes the current state and a pair consisting of a function and the field to apply it to, and returns a new state. Well, that's not so bad:

```
apply state (function, field) = function field state
```

Running this through the @pl formatter on lambdabot in #haskell gives us an ugly mess:

```
apply = (`ap` snd) . flip (flip . fst)
```

Hmm, this is interesting though, look at the flipping going on. What if we took the input in a different order? Suppose we wrote this:

```
apply (function, field) state = function field state
```

Running this through @pl gives the very elegant:

```
apply = ap fst snd
```

Thus, the final code winds up being:

```
loadFEN :: String -> GameState
loadFEN fen = foldl apply emptyGameState (zip funcs fields)
    where fields = words fen
          apply  = flip (ap fst snd)
          funcs  = [parseBoard, parseCastleStatus]
```

Very nifty! Now all the real work gets moved off to smaller functions, which just have to worry about the particular field they're designed to work with. Not only that, but if I only want to deal with the first few fields for now, they're the only ones I have to parse, and I'll still wind up with a valid GameState.
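To see the whole pattern end to end, here is a minimal, self-contained toy version. `GameState`, `parseBoard` and `parseTurn` below are simplified stand-ins for the real definitions (the actual chess engine's types are more involved):

```
import Control.Monad (ap)

-- Toy stand-ins for the real engine's types and field parsers.
data GameState = GameState { board :: String, turn :: Char }
    deriving Show

emptyGameState :: GameState
emptyGameState = GameState "" 'w'

parseBoard :: String -> GameState -> GameState
parseBoard field st = st { board = field }

parseTurn :: String -> GameState -> GameState
parseTurn (c:_) st = st { turn = c }
parseTurn _     st = st

loadFEN :: String -> GameState
loadFEN fen = foldl apply emptyGameState (zip funcs fields)
  where fields = words fen
        -- In the reader monad, ap fst snd = \(f, x) -> f x,
        -- so: apply st (f, field) = f field st
        apply  = flip (ap fst snd)
        funcs  = [parseBoard, parseTurn]
```

Because zip truncates, any FEN fields beyond the parsers listed in `funcs` are simply ignored until a parser is written for them.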

- chessguy's blog

## Coq and The Monad Laws: The Third

## The Third Monad Law

The previous two articles introduced Coq and the first two Monad Laws. I am discussing the third one separately because it will take longer to prove.

The proof for the third law will proceed at first like the others, induction and some simplification.

```
Theorem third_law :
forall (A B C : Set) (m : list A)
(f : A -> list B) (g : B -> list C),
bind B C (bind A B m f) g =
bind A C m (fun x => bind B C (f x) g).
Proof.
induction m.
(* base case *)
simpl.
trivial.
(* inductive case *)
intros f g.
simpl.
unfold bind.
unfold bind in IHm.
```

Which brings us to this state in the interactive theorem prover.

```
1 subgoal
A : Set
B : Set
C : Set
a : A
m : list A
IHm : forall (f : A -> list B) (g : B -> list C),
flat_map g (flat_map f m) =
flat_map (fun x : A => flat_map g (f x)) m
f : A -> list B
g : B -> list C
============================
flat_map g (f a ++ flat_map f m) =
flat_map g (f a) ++ flat_map (fun x : A => flat_map g (f x)) m
```

At this point, if we could rewrite `flat_map g (f a ++ flat_map f m)` into `flat_map g (f a) ++ flat_map g (flat_map f m)` then we would be able to apply the Inductive Hypothesis and be home free.

The "cut" tactic allows you to make an assumption, and then later come back and prove your assumption correct. Using "cut",

```
cut (flat_map g (f a ++ flat_map f m) =
flat_map g (f a) ++ flat_map g (flat_map f m)).
intro Distrib.
rewrite Distrib.
rewrite IHm.
reflexivity.
```

the original goal is easily solved. But Coq has generated an additional subgoal: we must now prove that this cut is correct.

```
1 subgoal
A : Set
B : Set
C : Set
a : A
m : list A
IHm : forall (f : A -> list B) (g : B -> list C),
flat_map g (flat_map f m) =
flat_map (fun x : A => flat_map g (f x)) m
f : A -> list B
g : B -> list C
============================
flat_map g (f a ++ flat_map f m) =
flat_map g (f a) ++ flat_map g (flat_map f m)
```

We'll proceed by induction on `f a`, which has inductive type `list B`.

```
induction (f a).
(* base case *)
simpl.
reflexivity.
(* inductive case *)
simpl.
rewrite IHl.
rewrite app_ass.
reflexivity.
Qed.
End ListMonad.
```

All done. We only needed the associativity property of list append, which I found by querying `SearchAbout app.`

## Summary

Here is a much shorter proof which takes advantage of some of Coq's automated tactics.

```
Theorem third_law' :
forall (A B C : Set) (m : list A)
(f : A -> list B) (g : B -> list C),
bind B C (bind A B m f) g =
bind A C m (fun x => bind B C (f x) g).
Proof.
induction m; simpl; intuition.
replace (bind B C (f a ++ bind A B m f) g)
with (bind B C (f a) g ++ bind B C (bind A B m f) g);
[ rewrite IHm
| induction (f a); simpl; auto;
rewrite app_ass; rewrite IHl
]; auto.
Qed.
```

On a final note, Coq has the ability to extract code into several different languages.

```
Extraction Language Haskell.
Recursive Extraction bind ret.
```

results in

```
module Main where
import qualified Prelude
data List a = Nil | Cons a (List a)
app l m = case l of
Nil -> m
Cons a l1 -> Cons a (app l1 m)
flat_map f l = case l of
Nil -> Nil
Cons x t -> app (f x) (flat_map f t)
ret :: a1 -> List a1
ret a = Cons a Nil
bind :: (List a1) -> (a1 -> List a2) -> List a2
bind m f = flat_map f m
```

- mrd's blog

## Coq and The Monad Laws: The First and Second

## The First Monad Law

In the previous article, I gave a brief introduction to Coq and the List Monad. I listed three Theorems but did not prove them. Now I will show how to prove the first two Theorems.

Here is the context of our code:

```
Section ListMonad.
Require Import List.
Definition ret (A : Set) (a : A) := a :: nil.
Definition bind (A B : Set) (m : list A) (f : A -> list B) :=
flat_map f m.
```

The first Theorem, restated:

```
Theorem first_law : forall (A B : Set) (a : A) (f : A -> list B),
bind A B (ret A a) f = f a.
```

At this point, Coq will enter an interactive proof mode. In `coqtop`, the prompt will change accordingly. In ProofGeneral Emacs, a new window will pop up. In either case, you will be presented with a view of the current assumptions and goals that looks like this:

```
1 subgoal
============================
forall (A B : Set) (a : A) (f : A -> list B), bind A B (ret A a) f = f a
```

There are no assumptions, and one goal to prove.

```
Proof.
intros A B a f.
```

Coq proofs can proceed with "tactics", which instruct the proof engine to perform a certain operation, or series of operations. In this case, we are introducing assumptions named `A,B,a,f` drawn from the antecedents of the goal. Now the proof view looks like this:

```
1 subgoal
A : Set
B : Set
a : A
f : A -> list B
============================
bind A B (ret A a) f = f a
```

The "unfold" tactic takes the definition of the given argument and tries to substitute it directly into the goal.

```
unfold bind.
unfold ret.
unfold flat_map.
```

Now we have a vastly simplified goal.

```
1 subgoal
A : Set
B : Set
a : A
f : A -> list B
============================
f a ++ nil = f a
```

Everyone knows that appending `nil` to a list returns the same list. But how do we explain this to Coq? Search and find out whether Coq already knows about this fact.

```
SearchRewrite (_ ++ nil).
```

`SearchRewrite` is used to search for theorems of the form `a = b` which can be used to rewrite portions of your goal. I use the wildcard `_` to indicate that I want to find any theorem involving appending `nil` to anything. Coq finds:

```
app_nil_end: forall (A : Set) (l : list A), l = l ++ nil
```

We can use this with the "rewrite" tactic. We want to rewrite our code, which looks like `l ++ nil`, into `l`. That is a right-to-left rewrite, which is specified like so:

```
rewrite <- app_nil_end.
```

And finally we have a goal `f a = f a`, which Coq can trivially solve with

```
reflexivity.
```

Or any of the more automated tactics, like "trivial" or "auto".

```
Qed.
```

The first monad law is discharged and proven. You can run `Check first_law.` to see the new theorem.

## The Second Monad Law

This second proof will proceed by induction on the structure of the list parameter `m`. The "induction" tactic causes Coq to try a proof by induction. The proof will be split into two parts, based on a case breakdown of the list type.

```
Theorem second_law : forall (A : Set) (m : list A),
bind A A m (ret A) = m.
Proof.
induction m.
```

The first subgoal is the base case for `nil`, and the second is the inductive case for `cons`:
```
2 subgoals
A : Set
============================
bind A A nil (ret A) = nil
subgoal 2 is:
bind A A (a :: m) (ret A) = a :: m
```

The base case is easy, the "simpl" tactic can take care of the unfolding and simplification.

```
simpl.
reflexivity.
```

The inductive case is also straightforward, because after the "simpl" step the Inductive Hypothesis is already applicable. The appropriate `IHm` assumption is provided by the "induction" tactic.

```
simpl.
rewrite IHm.
reflexivity.
Qed.
```

## Summary

The full proofs, minus the commentary:

```
Theorem first_law : forall (A B : Set) (a : A) (f : A -> list B),
bind A B (ret A a) f = f a.
Proof.
intros A B a f.
unfold bind.
unfold ret.
unfold flat_map.
rewrite <- app_nil_end.
reflexivity.
Qed.
Theorem second_law : forall (A : Set) (m : list A),
bind A A m (ret A) = m.
Proof.
induction m.
(* base case *)
simpl.
reflexivity.
(* inductive case *)
simpl.
rewrite IHm.
reflexivity.
Qed.
```

Next time, the Third Monad Law, and a more complicated proof.

- mrd's blog

## Coq and The Monad Laws: Introduction

Over the past few months I've been spending some time with Coq, the proof assistant. I am still very much a beginner, so as an exercise I decided to try and prove the three Monad laws for the implementation of the List Monad.

In Haskell, the List Monad looks like this:

```
instance Monad [] where
return x = [x]
m >>= f = concatMap f m
```

Coq is a proof assistant for a constructive logic, so it can also be used as a (dependently-typed) functional programming language. You can open an interactive session and type the lines in using `coqtop`. However, I use the ProofGeneral Emacs-based interface and save my code in a file like "listmonad.v". You can then step through the lines using `C-c C-n`. There also exists a custom GUI called CoqIDE.

In Coq, I defined the Monad methods like so:
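The section and definitions are the same ones restated at the start of the proof articles:

```
Section ListMonad.
Require Import List.

Definition ret (A : Set) (a : A) := a :: nil.

Definition bind (A B : Set) (m : list A) (f : A -> list B) :=
  flat_map f m.
```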

I opened up a new section (but did not close it yet). The definitions are fairly unremarkable, except that now the types A and B are explicit parameters. In Coq, a single colon is used for type annotations, and a double colon is used for list consing. `flat_map` is Coq's version of `concatMap`. I did not know that before, but found it by using `SearchAbout list.` at the interactive Coq prompt.

I can see what Coq thinks of my definitions:

```
Coq < Check ret.
ret
: forall A : Set, A -> list A
Coq < Check bind.
bind
: forall A B : Set, list A -> (A -> list B) -> list B
```

Now we are ready to state the three Monad Laws. First, in Haskell notation:

```
1. return a >>= f = f a
2. m >>= return = m
3. (m >>= f) >>= g = m >>= (\x -> f x >>= g)
```

The first two Monad laws, in essence, assert that return is an identity operation on the left and right sides of bind. The third says that bind is associative.
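Before proving them in Coq, the laws can also be spot-checked in Haskell on concrete values. These are example-based checks only, not proofs, which is exactly why Coq is interesting here:

```
-- The list-monad methods, written as plain functions
-- (return and >>= from the instance above).
ret :: a -> [a]
ret x = [x]

bind :: [a] -> (a -> [b]) -> [b]
bind m f = concatMap f m

main :: IO ()
main = do
  let m = [1, 2, 3] :: [Int]
      f x = [x, x + 10]
      g x = [x * 2]
  print (bind (ret 5) f == f 5)                            -- law 1
  print (bind m ret == m)                                  -- law 2
  print (bind (bind m f) g == bind m (\x -> bind (f x) g)) -- law 3
```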

In Coq:

```
Theorem first_law :
forall (A B : Set) (a : A) (f : A -> list B),
bind A B (ret A a) f = f a.
Theorem second_law :
forall (A : Set) (m : list A),
bind A A m (ret A) = m.
Theorem third_law :
forall (A B C : Set) (m : list A)
(f : A -> list B) (g : B -> list C),
bind B C (bind A B m f) g =
bind A C m (fun x => bind B C (f x) g).
```

Entering any of these Theorems into Coq will engage the interactive proof mode. In the next article, I will examine the proofs of the first two laws.

- mrd's blog

## Haskell and XML

I installed HXT this week to use in a small project which dealt with some XML data. It is a slick library, and it has introduced me to the joy of Arrows. I scavenged for tutorial information, and there are some useful pages out there -- the HaskellWiki page on HXT, and Neil Bartlett's recent blog entry.

However, when it came down to getting some practical work done, I found myself scouring the HXT Arrow API documentation. While it is pretty neat to figure out which combinator to use solely based on the examination of types, I would have preferred a selection of code examples.

That is what Practical Examples of HXT in Action is attempting to be. I've concocted 3 articles so far, and I'm hoping to add more. The current examples demonstrate: arrow syntax, basic useful combinators, a little development methodology, optional and list data, and simple integration with Network.HTTP.

- mrd's blog