For some reason, I keep picking up "The Haskell Road to Logic, Maths and Programming".
All my life I have avoided proofs and mathematics, but perhaps now is the moment to meet my Maker. I keep going back to this book.
Somehow it is infectious: a beginner's introduction to how thought is structured, and a beginner's introduction to what math is all about.
I had been wondering what I was going to do with Haskell once I finished learning it. I am a Perl professional by trade and even though some interplay has been going on, I never really knew why I was learning Haskell. I just liked the conciseness and elegance of the language and the mathematical purity.
The most important thing is to have passion behind what you do. If I want to go deeper into math for math's sake, then so be it... no need to artificially create projects for myself. Just enjoy Haskell and math. It is a very, very good fit.
The Haskell equations f x y = ... used in the definition of a function f are genuine mathematical equations. They state that the left hand side and the right hand side of the equation have the same value. This is very different from the use of = in imperative languages like C or Java. In a C or Java program, the statement x = x*y does not mean that x and x*y have the same value, but rather it is a command to throw away the old value of x and put the value of x*y in its place. It is a so-called destructive assignment statement: the old value of a variable is destroyed and replaced by a new one.
Reasoning about Haskell definitions is a lot easier than reasoning about programs that use destructive assignment. In Haskell, standard reasoning about mathematical equations applies. E.g. after the Haskell definitions x = 1 and y = 2, the Haskell definition x = x + y will raise an error "multiply defined". ...
= in Haskell has the meaning "is by definition equal to"...
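You can watch this happen in a source file. A small sketch: the first two lines are definitions, and the third is rejected outright rather than treated as an update (GHC's message is along the lines of "Multiple declarations of 'x'"):

x :: Int
x = 1

y :: Int
y = 2

-- x = x + y   -- rejected: 'x' is already defined; = defines, it never updates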
This was a huge landslide victory for me, because I quit trying to write programs to get data here, data there, values here, values there. Instead, I simply began to rewrite the original function as a new definition.
I became so confident that I was able to write a program to return all the leaves of a tree. And here it is:
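A minimal sketch of the idea, assuming a simple Tree type with values at the leaves:

data Tree a = Leaf a | Branch (Tree a) (Tree a)

-- collect every leaf value, left to right, by rewriting the tree as a list
leaves :: Tree a -> [a]
leaves (Leaf x)     = [x]
leaves (Branch l r) = leaves l ++ leaves r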
Looking back at "Algorithms in Haskell" by Rabhi and "Craft of FP" by Simon Thompson, they do both make this same statement, but somehow it never really hit me right.
I would like to create an infinite lazy list whose contents consist of:
[ x, f x, f (f x), f (f (f x)) ... ] -- ad infinitum.
How might I do this?
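It turns out the Prelude's iterate does exactly this, and the hand-rolled version is a one-liner too:

-- iterate f x == [x, f x, f (f x), ...]
powers :: [Integer]
powers = iterate (*2) 1

-- the same thing written out by hand
iter :: (a -> a) -> a -> [a]
iter f x = x : iter f (f x)

-- take 5 powers == [1,2,4,8,16]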
I am being told that I have overlapping pattern matches. I'm pretty sure it has to do with the otherwise clause and the following subString (x:xs) clause, but I don't know how to fix it:
prefix [] ys = True
prefix xs [] = False
prefix [x] [y] = (x == y)
prefix (x:xs) (y:ys) = (x == y) && prefix xs ys

subString xs ys
    | prefix xs ys = True
    | otherwise = subString xs ys'
    where ys' = tail ys
subString (x:xs) [] = False
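One fix that seems to work: make the base case its own clause and put it first, so the catch-all guarded clause can never shadow it (and tail is never applied to an empty list):

subString :: Eq a => [a] -> [a] -> Bool
subString xs [] = null xs
subString xs ys
    | prefix xs ys = True
    | otherwise = subString xs (tail ys)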
- Web Apps Framework
- CGI and database options for Haskell.
Drupal said I needed 10 words, so here are some more.
From the Haskell Online Report we have:
(\\) :: Eq a => [a] -> [a] -> [a]
(\\) = foldl (flip delete)
Flipping delete changes its type signature to the following:
*Main> :t (flip delete)
forall a. (Eq a) => [a] -> a -> [a]
foldl is usually straightforward: it folds the list from the left, i.e. from the beginning. It takes the seed element and the first element of the list and produces a result; that result becomes the new seed to be used with the next element of the list, and so on until a final result is produced.
But what is the flipped version of delete doing in this case? It receives an entire list (the accumulator) in the position where delete normally receives just an element.
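Unrolling a small example shows what is going on: the accumulator is the list being whittled down, and flip just puts delete's arguments in the order foldl hands them over.

[1,2,3,4] \\ [2,4]
    == foldl (flip delete) [1,2,3,4] [2,4]
    == flip delete (flip delete [1,2,3,4] 2) 4
    == delete 4 (delete 2 [1,2,3,4])
    == [1,3]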
There is a synergy between recursion and strong static typing that I never realized, and here it is:
If you express a problem's solution recursively, every choice made on every branch of the recursion, at any point in the problem space, has to match the type signature of the promised outcome.
Take this function to delete a node from a tree for example:
delete :: Ord a => a -> Tree a -> Tree a
So one thing we know for dead certain is that delete will be returning a Tree. So, if we code delete with a number of cases for different scenarios, each of those cases must return a Tree.
Given this, one way to find the solution is to take a look at all the possible trees that you know about and ask when each would apply as a solution. It kind of gets you going towards thinking about the mechanics of the problem. First, let's rewrite the type signature as the function head:
delete val (Node v t1 t2)
and now let's ask the pivotal question once again: "what sorts of trees can I think of?" Well, we are looking at two of them in the second argument to delete: t1 and t2. When is t1 the solution to this problem? Well, that's the same as asking when we would return only the left subtree after hitting a node. And when would we do that? Well, the answer is: "if val == v and there is no right subtree, then we simply need to return t1." Nice! Codewise, it looks like this:
| (val == v) && (isNil t2) = t1
Now of course we have the "mirror" situation: when do we return only t2? And the answer is very similar to the other one:
| (val == v) && (isNil t1) = t2
Ok, so we've nailed down two cases just by taking a look at the head of the rule. Let's see: both of those cases had to do with being at a node where val == v. The only difference was whether it was the left or the right subtree that was missing.
So obviously, we want to start thinking about when val == v and both subtrees are there. Well, in that case we need to remove the root node and join the two subtrees, maintaining the required binary tree ordering. So that clause looks like this, using otherwise as the fallback:
| otherwise = join t1 t2
So we have thoroughly cased out all of the scenarios where we have reached the node of interest. What happens if the current node has a value greater than the one we want to delete? Well, the node we are looking for must be somewhere down the left subtree:
| val < v = Node v (delete val t1) t2
and conversely, if the current node's value is less than the one we are looking for, then we need to delete down the right subtree:
| val > v = Node v t1 (delete val t2)
So now that we have cased out all the deletion scenarios, we have our final delete function:
delete val (Node v t1 t2)
| val < v = Node v (delete val t1) t2
| val > v = Node v t1 (delete val t2)
| isNil t2 = t1
| isNil t1 = t2
| otherwise = join t1 t2
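For completeness, a minimal sketch of the pieces this assumes: the Tree type, isNil, and one simple way to write join that grafts the right subtree onto the rightmost position of the left, which preserves the ordering (SJT's actual join may differ):

data Tree a = Nil | Node a (Tree a) (Tree a)

isNil :: Tree a -> Bool
isNil Nil = True
isNil _ = False

-- assumes every value in the left tree is smaller than every value in the right
join :: Tree a -> Tree a -> Tree a
join Nil t2 = t2
join (Node v l r) t2 = Node v l (join r t2)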
SJT says that one should start with an informal description of a problem and then come up with your types. I add to that: once you have your types, create type signatures for your functions; once you have your type signatures, build the head of each function with pattern matching; and then use the required types to express the various cases.
In 16.9 of SJT's book "Craft of Functional Programming" he says "A binary relation relates together certain elements of a set"
My problem with this is that the elements do not have to come from the same set. A relation is a subset of the Cartesian product of sets A and B, where A and B need not be the same set.
Next he gives a type definition
type Relation a = Set (a,a)
And I think it should be:
type Relation a b = Set (a,b)
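With two parameters, a heterogeneous relation typechecks. A hypothetical example using Data.Set:

import Data.Set (Set, fromList)

type Relation a b = Set (a, b)

-- a subset of String x Int: names related to ages
ages :: Relation String Int
ages = fromList [("ada", 36), ("alan", 41)]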
Does Haskell optimize the list dissection in something like this?
take n lis ++ [listElem] ++ drop (n+1) lis
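As far as I know, GHC does not turn this into an in-place update: lists are immutable, so the first n cons cells are rebuilt, although the suffix produced by drop is shared. For reference, the idiom wrapped up as a hypothetical helper:

-- replace the element at index n (O(n): rebuilds the prefix, shares the suffix)
replaceAt :: Int -> a -> [a] -> [a]
replaceAt n x xs = take n xs ++ [x] ++ drop (n+1) xs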
Cale Gibbard came up with a use of the list monad for interspersing:
"hello" >>= (\x -> [x, ' '])
but what if you don't want a trailing space?
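Data.List already covers that case: intersperse puts the separator only between elements, so there is no trailing space.

import Data.List (intersperse)

main :: IO ()
main = putStrLn (intersperse ' ' "hello")   -- prints "h e l l o"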