Applications

Haskell Bookmarks

Submitted by metaperl on Thu, 10/06/2005 - 2:14am.
Web Apps Framework

CGI and database options for Haskell.

Drupal said I needed 10 words, so here are some more.

I am confused about the implementation of (\\)

Submitted by metaperl on Wed, 10/05/2005 - 5:05pm.

From the Haskell Online Report we have:

(\\) :: Eq a => [a] -> [a] -> [a]
(\\) = foldl (flip delete)

Flipping delete changes its type signature to the following:


*Main> :t (flip delete)
flip delete :: forall a. (Eq a) => [a] -> a -> [a]

And foldl itself is usually obvious: it folds the list from the left, i.e. from the beginning. It takes the seed and the first element of the list and produces a result; that result is the new seed to be used with the next element of the list, and so on until a final result is produced.

But what is the flipped delete doing in this fold? It receives an entire list (the accumulator) in the position where delete normally receives just an element.
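
Tracing a small example by hand (my own expansion, not from the Report) shows what the fold is doing:

import Data.List (delete, (\\))

-- Tracing [1,2,3,2] \\ [2,3], with f = flip delete:
--   foldl f [1,2,3,2] [2,3]
-- = foldl f (delete 2 [1,2,3,2]) [3]
-- = foldl f [1,3,2] [3]
-- = foldl f (delete 3 [1,3,2]) []
-- = foldl f [1,2] []
-- = [1,2]
check :: Bool
check = ([1,2,3,2] \\ [2,3]) == [1,2]

So the accumulator is the first list, whittled down by one deletion per element of the second list; flip is there so that the accumulator lands in delete's list position rather than its element position.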

Recursion and strong static typing can help you build functions!

Submitted by metaperl on Tue, 10/04/2005 - 5:57pm.

There is a synergy between recursion and strong static typing that I never
realized, and here it is:

If you express a problem's solution recursively, every choice made on every
branch of the recursion, at any point in the problem space, has to match
the type signature of the promised outcome.

Take this function to delete a node from a tree for example:


delete :: Ord a => a -> Tree a -> Tree a

So one thing we know for dead certain is that delete will be returning a
Tree. So, if we code delete with a number of cases for different scenarios,
each of those cases must return a Tree.

Given this, one way to find the solution is to take a look at
all the possible trees that we know about and ask when each would apply as
a solution. It kind of gets you going towards thinking about the mechanics
of the problem. First let's rewrite the type signature as the function head:


delete val (Node v t1 t2)

and now let's ask the pivotal question once again: "what sorts of trees can
I think of?" Well, we are looking at two of them in the second argument
to delete: t1 and t2. When is t1 the solution to this problem? Well,
that's the same as asking when we would return only the left subtree after
hitting a node. And when would we do that? Well, the answer is:
"if val == v and there is no right subtree, then we
simply need to return t1." Nice! Codewise, it looks like this:

(val == v) && (isNil t2) = t1

Now of course we have the "mirror" situation: when do we return only t2?
And the answer is very similar to the other one:

(val == v) && (isNil t1) = t2

Ok, so we've nailed down two cases just by taking a look at the head
of the rule. Notice that both of those cases had to do with being at a
node where val == v. The only difference was which of the two subtrees
was missing.

So obviously, we want to start thinking about when val == v and both
subtrees are there. Well, in that case, we need to remove the root node and
join the two subtrees, maintaining the required binary tree ordering. So
that clause looks like this, relying on the fall-through behavior of
guards:

otherwise = join t1 t2

So we have thoroughly cased out all of the scenarios where we have reached
the node of interest. What happens if the current node has a value greater
than the one we want to delete? Well, the node we are looking for, if it
exists at all, must be down the left subtree:

val < v = Node v (delete val t1) t2

and conversely, if the current node's value is less than the one we are
looking for, then we need to try the deletion down the right subtree:

val > v = Node v t1 (delete val t2)

So now that we have cased all deletion scenarios out, we have our
final delete function:

delete val (Node v t1 t2)
| val < v = Node v (delete val t1) t2
| val > v = Node v t1 (delete val t2)
| isNil t2 = t1
| isNil t1 = t2
| otherwise = join t1 t2
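
For reference, here is a self-contained sketch of the definitions this function assumes (my reconstruction; SJT's actual Tree, isNil and join may differ in detail):

data Tree a = Nil | Node a (Tree a) (Tree a)

isNil :: Tree a -> Bool
isNil Nil = True
isNil _   = False

-- join merges two ordered subtrees by promoting the smallest
-- element of the right subtree to the new root.
join :: Ord a => Tree a -> Tree a -> Tree a
join t1 t2 = Node mini t1 (delete mini t2)
  where
    mini = leftmost t2
    leftmost (Node v Nil _) = v
    leftmost (Node _ l _)   = leftmost l
    leftmost Nil            = error "join: empty right subtree"

delete :: Ord a => a -> Tree a -> Tree a
delete _ Nil = Nil   -- value absent: nothing to delete (my choice)
delete val (Node v t1 t2)
  | val < v   = Node v (delete val t1) t2
  | val > v   = Node v t1 (delete val t2)
  | isNil t2  = t1
  | isNil t1  = t2
  | otherwise = join t1 t2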

conclusion

SJT says that one should start with an informal description of a problem, then come up with your types. I add to that: once you have your types, create type signatures for your functions; once you have your type signatures, build the head of each function with pattern matching, and then use the required types to express the various cases.

SJT's book has a limited definition of relation... why?

Submitted by metaperl on Tue, 10/04/2005 - 1:58pm.

In 16.9 of SJT's book "The Craft of Functional Programming", he says: "A binary relation relates together certain elements of a set."

My problem with this is that the elements do not have to come from the same set. A relation is a subset of the cross product of sets A and B, where A and B need not be the same set (or even the same type).

Next he gives a type definition:

type Relation a = Set (a,a)

And I think it should be:

type Relation a b = Set (a,b)
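
With two type parameters, a heterogeneous relation can be written down directly. A minimal sketch using Data.Set (SJT's own Set type may differ):

import Data.Set (Set, fromList)

type Relation a b = Set (a, b)

-- A relation between names and ages: a subset of String x Int.
-- (The data here is made up purely for illustration.)
ages :: Relation String Int
ages = fromList [("ada", 36), ("alan", 41)]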

does Haskell optimize list surgery?

Submitted by metaperl on Mon, 10/03/2005 - 12:46pm.

does Haskell optimize the list dissection in something like this:


take n lis ++ [listElem] ++ drop (n+1) lis
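
For comparison, the same surgery can be done in a single traversal (a sketch of mine; it sidesteps rather than answers the optimization question):

-- Replace the element at index n in one pass over the list.
replaceAt :: Int -> a -> [a] -> [a]
replaceAt _ _ []     = []
replaceAt 0 x (_:ys) = x : ys
replaceAt n x (y:ys) = y : replaceAt (n - 1) x ys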

interspersing a space in a string

Submitted by metaperl on Mon, 10/03/2005 - 12:44pm.

Cale Gibbard came up with a way to intersperse using the list monad:


"hello" >>= (\x -> [x,' '])

but what if you don't want a trailing space?
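
One answer worth noting: Data.List already provides intersperse, which puts the separator only between elements, so there is no trailing space:

import Data.List (intersperse)

-- intersperse ' ' "hello" == "h e l l o"  (no trailing space)
main :: IO ()
main = putStrLn (intersperse ' ' "hello")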

Creating a code table for a Huffman tree

Submitted by metaperl on Sun, 10/02/2005 - 3:04am.

Our ultimate goal is to have a list in which each element gives the route on a tree to a particular letter. E.g.

('t',[L])

means go left from the top to code for a t

Similarly:

('a',[R,L])

means go right then left from the top to code for an a

So, we want to be able to do this:


*Main> codeTable (codes "tab")
[('t',[L]),('a',[R,L]),('b',[R,R])]

and the question is how?

Well, step one is to ask at what granularity we need to take the input data. And the answer is: we can take the input one node at a time.

Questions two and three are: what should our final and transition actions look like? Well, let's start with the final action. The final action for any particular leaf is going to yield something like this:

('a',[R,L])

We can restate that as: the final action, as we move through the tree, is to take a leaf and return the character there as well as the entire path to it.

Having stated the final action, we can see what our transition action must be: we need to build up the entire path to each element as we make the transitions.

The final code from SJT's implementation shows these final and transition actions as the solution to our problem:

-- Making a table from a Huffman tree.

codeTable :: Tree -> Table

codeTable = convert []

-- Auxiliary function used in conversion to a table. The first argument is
-- the HCode which codes the path in the tree to the current Node, and so
-- codeTable is initialised with an empty such sequence.

convert :: HCode -> Tree -> Table

-- Action Final
convert cd (Leaf c n) = [(c,cd)]

-- Action Transition
convert cd (Node n t1 t2)
= (convert (cd++[L]) t1) ++ (convert (cd++[R]) t2)
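
For context, the types this code refers to look roughly like this (following SJT's chapter; the details may differ):

data Bit   = L | R                deriving (Eq, Show)
type HCode = [Bit]                -- a path from the root of the tree
data Tree  = Leaf Char Int        -- a character and its frequency
           | Node Int Tree Tree   -- combined frequency and subtrees
type Table = [(Char, HCode)]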

How to think about programming in Haskell

Submitted by metaperl on Sun, 10/02/2005 - 2:54am.

This is a node I intend to expand over time.

use recursion and types to your advantage
This is central to Haskell programming; in fact,
it is the most important part of your reasoning process. Getting to know your data type and pulling it apart to handle it case by case is the way of Haskell development.

Now that we have established the utility of recursion and strong static typing in building functions, let's use them. Let's see how we can write functions which obey our type rules by looking around for various instances of the output type and asking when they would make sense in the function definition. This might be a seat-of-the-pants approach, but hey, if it works, do it!

input chunking
Earlier in my blog I noted that it is important to realize the granularity of a decision. In that problem, one makes a decision at a point in the string not just on a single character but on a chunk of the input string that is the same length as the search string.
final product and transition
When you move over input, your functions relate to one of the aspects of this input: its initial phase, its intermediate phases, or its terminal phase. In recursive functions, the terminal phase is handled by what is known as a base case, and this base case relates directly to the final desired output: it must return or represent that output directly. The intermediate phase is also important and must be handled properly.

This example of creating a table of codes for a huffman tree shows the concept of final product and transition in action.

can you use primitive recursion?
Primitive recursion is characterized by a base case and an inductive case. For example:

sum :: Num a => [a] -> a
sum [] = 0
sum (x:xs) = x + sum xs
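
A side note of mine, not from the post: primitive recursion over lists is exactly the pattern that foldr packages up, so the same function can be written as a fold:

-- sum via foldr: 0 is the base case, (+) is the inductive step.
sum' :: Num a => [a] -> a
sum' = foldr (+) 0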

More Haskell and Dylan data

Submitted by metaperl on Sun, 10/02/2005 - 1:09am.

The Computer Language Shootout is a good way to compare two languages. I like the Haskell way of breaking a complex task down into functions.

Overloading functions and Haskell

Submitted by metaperl on Sat, 10/01/2005 - 6:54am.

Both Haskell and Dylan have strong track records in the ICFP contests. Dylan is a completely object-oriented language based around generic functions. According to Bruce Hoult, "...you can think of it as being a function that can accept arguments of any type, and examines the types of the actual arguments in order to decide what specialized function to call." More extensively, the Dylan book says:

Dylan is a dynamic language — everything in Dylan is defined in terms of a dynamic execution model. As we saw in Section 5.5, page 63, the execution model of how a method is chosen when a generic function is called with a particular set of arguments is highly dynamic: the arguments are evaluated; the types of the arguments are determined; the applicable methods are found and sorted according to specificity; and, finally, the most specific, applicable method is called. This model implies that values and types can change, and that methods can be added right up until the generic function is called, and any of these changes still have an effect on which method is ultimately chosen. This dynamism — the model that value, number, and type of arguments; return values; applicable method; and method choice and execution are all determined at the last possible moment — is what gives the Dylan language its power.

This research paper by Simon Peyton Jones shows how pure Haskell lacks this power.

Haskell is a very minimalist language consisting of referentially transparent (context-insensitive) strongly-typed functions and hardly anything else. It is a robust set of building blocks that requires clear thought and the ability to concisely specify a problem without a bunch of hemming and hawing. When you lack state and plumb infinite data structures at will, you must be willing to rely on your ability to describe problems rather than hand-hold them, and Haskell forces this degree of conceptualization.
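
For contrast, Haskell's flavor of overloading goes through type classes, where the specialized function is chosen from static types rather than examined at call time. A minimal sketch of mine, not from the paper:

-- Each instance supplies the "specialized function"; which one runs
-- is resolved from the argument's static type, not dynamically.
class Describable a where
  describe :: a -> String

instance Describable Int where
  describe n = "an Int: " ++ show n

instance Describable Bool where
  describe b = "a Bool: " ++ show b

main :: IO ()
main = do
  putStrLn (describe (42 :: Int))
  putStrLn (describe True)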

On the other hand, Dylan (DYnamic LANguage) is a powerful language. Again from the Dylan book we have the following. Substitute the word "Haskell" everywhere you see "C" below, and substitute "function application" for "primitive machine operations".

Dylan is a powerful language: Many of the built-in, or primitive, language operations are high-level operations, such as the method-dispatch mechanism, the collection facility, and the exception mechanism. Because of Dylan's powerful features, it can be hard for the programmer to develop an efficiency model — a model of the absolute or relative cost of different approaches to a problem.

In contrast, in other languages, such as C, every language construct can be explained directly in terms of a small number of machine instructions. Although it may be easy to understand the performance of a C program in terms of a simple model, programming in C is more work for the programmer — the higher-level abstractions are not provided, and must often be built from scratch.

For example, a C programmer expects that the run-time cost of calling a function is the cost of possibly saving registers on a stack, passing the arguments, executing a machine instruction for jumping to a subroutine, and then executing a return instruction at the end of the function; if it is a call through a function pointer, or a C++ virtual function, the cost of an indirect jump must be added. In Dylan, the story is more complicated, because Dylan has a more sophisticated execution model: A call to a generic function might be much more expensive in a dynamic situation, because computing the most specific method could take much longer than would execution of the method itself.