Discussion

Quantum Programming

Submitted by Greg Buchholz on Fri, 12/02/2005 - 4:48pm.

Lately I've been pondering the programming of quantum computers. The fact that we don't have the hardware yet seems like an awfully bad excuse for not having languages and emulators/simulators. I really don't know much about quantum computing (book recommendations, anyone?), but it seems to be based on a somewhat nondeterministic model. Is it going to be like programming in Prolog, but with backtracking that doesn't cost anything in terms of efficiency? Curious.
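
To make the "nondeterministic model" hunch a bit more concrete, here is a toy amplitude-vector sketch (entirely my own, and wildly simplified): unlike Prolog-style backtracking over yes/no alternatives, a quantum state carries complex amplitudes that can interfere and cancel.

import Data.Complex

-- a single qubit as amplitudes over the basis states |0> and |1>
type Amplitude = Complex Double
type Qubit     = (Amplitude, Amplitude)

-- the Hadamard gate, the usual way to put a qubit into superposition
hadamard :: Qubit -> Qubit
hadamard (a, b) = ((a + b) / sqrt 2, (a - b) / sqrt 2)

-- measurement probabilities for |0> and |1>
probs :: Qubit -> (Double, Double)
probs (a, b) = (magnitude a ^ 2, magnitude b ^ 2)

-- probs (hadamard (1, 0))            ~ (0.5, 0.5)
-- probs (hadamard (hadamard (1, 0))) ~ (1.0, 0.0)   -- the amplitudes interfere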

Haskell is not an academic toy, but it is not corporate ready either

Submitted by metaperl on Sat, 11/05/2005 - 6:15am.

Today in #haskell, Sylvan said that Haskell had a reputation as an academic toy and that books should start off with Monadic I/O and other "practical" concerns to eliminate that stereotype.

Cale responded by saying that GHCi's REPL provided adequate I/O facilities.

I responded by saying that the most practical thing you could do in learning Haskell is to obtain the ability to decompose a problem into a set of functions. Everything else about Haskell follows from pure lazy functions, and until you are comfortable and conversant in this paradigm, you really can't develop any new and useful code. You will simply be an ignorant little monkey, aping things you saw in a tutorial, which cripples your ability to analyze and synthesize strongly-typed, lazy functions.

I did not say this then, but I also have this response. The impression of Haskell will not change with another book of practical examples. It will only change when Haskell has huge corporate-ready libraries for every huge corporate issue: when Haskell has a CPAN locked and loaded and ready to go. When Haskell has something as widely used as Mailman. When every website you see is done with Haskell instead of PHP. When you can deploy CORBA apps with Haskell. When you can cache and pre-compute queries for a SQL database in Haskell. When you have the speed and flexibility and eager persistent data loading of mod_perl for Haskell. When you have the best or second best or third best or any high-quality, fully functional, efficient streaming media player sitting on everyone's Linux desktop. When the favorite free drawing program is written in Haskell. In short, when Haskell has the equivalent of CPAN. CGI has been a web standard for almost 10 years; try matching the real-world, ready-to-go functionality of CGI.pm - that's one CPAN module in the Perl toolbox. Just one. Then you have 4000 more to go.

Haskell is the most beautiful, elegant, and correctly designed programming language on this planet. It is also the most powerful at chopping apart new specs and writing reliable code in short periods of time - how else could it stomp every other language two years in a row in the ICFP programming contest? But the only way to remove an impression is with results: tangibles and deliverables. Unfortunately, the focus on beauty, elegance and correctness seems to get in the way of ever getting anything corporate-ready out the door.

I am grateful to be studying Haskell. It has done wonders for my corporate-level Perl development. The book to read is "The Haskell Road to Logic, Maths, and Programming." This book will force you to think clearly. It does not cover monads, but monads can wait until you can specify and solve problems. But my improved thought processes are one thing; delivering projects on a deadline is another, and Perl continues to deliver through a cooperative community based around CPAN. The Haskell community is based around the reasoning of masterminds doing the intelligent thing. The Perl community is based around the needs of people doing practical things in the best way that they can.

Finally, getting corporate-ready is not something you can plan. It happens. It happens by necessity. It happens when someone has to sink or swim.

Deriving the function to compute sublists of a list

Submitted by metaperl on Wed, 10/26/2005 - 12:42pm.

Earlier I discussed how = in Haskell is equational. Well, it certainly came to the rescue this time. I was trying to understand the given solution in "The Haskell Road to Logic, Maths, and Programming" for the problem of calculating all the sublists of a list.


powerList [] = [ [] ]
powerList (x:xs) = powerList xs ++ map (x:) (powerList xs)

So I looked at this program but had no idea how in the world it was the solution. What I did was work upwards from the most trivial case, like so:

-- The powerlist of a list with no elements is a list consisting of
-- the empty list

powerList [] = [[]]

-- Now, let's work with a list with 1 element:

powerList [a] = [ [a], [] ]

-- Fair enough, now how about a list with 2 elements:

powerList [a,b] = [ [a,b], [b], [a], [] ]

-- Hmm, but let's rewrite that

[ [a,b], [b], [a], [] ] = powerList [a] ++ [ [a, b], [b] ]

-- And rewrite once again

powerList [a] ++ [ [a, b], [b] ] = powerList [a] ++ map (b:) (powerList [a])

-- Now let's try 3 elements just to make sure we have a pattern:

powerList [c,a,b] = [[c,a,b], [c,b], [c,a], [c]] ++ powerList [a,b]

-- but with order independence we have:

powerList [c,a,b] = powerList [a,b] ++ [[c,a,b], [c,b], [c,a], [c]]

-- but [[c,a,b], [c,b], [c,a], [c]] = map (c:) (powerList [a,b]) so:

powerList [c,a,b] = powerList [a,b] ++ map (c:) (powerList [a,b])

-- generalizing we have:

powerList (x:xs) = powerList xs ++ map (x:) (powerList xs)
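
As a quick sanity check (my own GHCi session, not from the book), the corrected definition behaves as expected on a small list:

*Main> powerList [1,2]
[[],[2],[1],[1,2]]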

Computer Assisted Programming

Submitted by Greg Buchholz on Thu, 10/20/2005 - 2:46pm.

Thought I'd post a link to one of my static typing rants, since there will probably be more "true believers" on this site.

dy/dx

Submitted by Greg Buchholz on Thu, 10/20/2005 - 2:23pm.

Over on comp.lang.lisp we have someone trying to use Haskell for a little symbolic differentiation problem. Since it doesn't even compile, I thought I'd throw my hat in the ring and clean it up a little (Is there a better way to display snippets on this site? I had to use <pre> tags instead of <code> tags)...

infixl 5 :+
infixl 6 :*
data Exp = Num Integer
         | Var Sym
         | Exp :+ Exp
         | Exp :* Exp deriving (Eq,Show)

data Sym = X | Y deriving (Eq,Show)

main = do let exp = (Num 4 :* Var X :* Var X)
          let deriv = d exp X
          putStrLn $ "Original expression : " ++ (show exp)
          putStrLn $ "Derivative : " ++ (show $ simplify deriv)
          putStrLn $ "Derivative evaluated at X=10 : "
                     ++ (show $ eval (d exp X) [(X,10)])

--take the derivative...
d (Num n)  x = Num 0
d (Var y)  x | x==y      = Num 1
             | otherwise = Num 0 
d (f :+ g) x = (d f x) :+ (d g x)
d (f :* g) x = (d f x) :* g :+ f :* (d g x)

--evaluate an Exp...
eval (Num x) env = x
eval (Var x) env = case (lookup x env) of 
                        (Just n)  -> n
                        (Nothing) -> error $ "no variable "++(show x)++" in env"
eval (x :+ y) env = eval x env + eval y env
eval (x :* y) env = eval x env * eval y env

--a few algebraic simplification rules
simp (x :+ y) | x == y = simp (Num 2) :* x
simp ((Num 0) :+ x) = simp x
simp (x :+ (Num 0)) = simp x
simp ((Num x) :+ (Num y)) = Num (x+y)
simp (x :+ y) = simp x :+ simp y
simp ((Num 0) :* x) = Num 0
simp (x :* (Num 0)) = Num 0
simp ((Num 1) :* x) = simp x
simp (x :* (Num 1)) = simp x
simp ((Num x) :* (Num y)) = Num (x*y)
simp (x :* y) = simp x :* simp y
simp x = x

--apply simplification rules until the expression doesn't change anymore
simplify x = let a = iterate simp x
                 fix = dropWhile (\(c,d)->c/=d) $ zip a (tail a)
             in (fst.head) fix
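
For the record, running main prints something like the following (the exact parenthesization depends on how the derived Show instance renders the fixities, but the value of the derivative of 4x^2 at x=10 really is 80):

Original expression : (Num 4 :* Var X) :* Var X
Derivative : Num 2 :* (Num 4 :* Var X)
Derivative evaluated at X=10 : 80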

Were there any bitter battles in the design of Haskell?

Submitted by metaperl on Sun, 10/16/2005 - 8:02pm.

I find it hard to believe that a language designed by committee did not have bitter, furious fights over some issues in the language. Then again, I suppose the design goals of Haskell were fairly clear. Purely functional, types because they are necessary, laziness because it fits with pure functions. End of discussion!

By the way, I have to comment that the Haskell community is so much friendlier than the Perl community.

The most important thing you can know about Haskell and Functional Programming

Submitted by metaperl on Thu, 10/06/2005 - 6:16pm.
I bought "The Haskell Road to Logic, Maths, and Programming" but never looked at it until recently. Even though I had gone through 16 chapters of Simon Thompson's book, I had failed to grasp just what Haskell was about but I knew there was something that I was missing. And then I saw it in Section 1.9 of "The Haskell Road..." entitled "Haskell Equations and Equational Reasoning"

The Haskell equations f x y = ... used in the definition of a function f are genuine mathematical equations. They state that the left hand side and the right hand side of the equation have the same value. This is very different from the use of = in imperative languages like C or Java. In a C or Java program the statement x = x*y does not mean that x and x*y have the same value, but rather it is a command to throw away the old value of x and put the value of x*y in its place. It is a so-called destructive assignment statement: the old value of a variable is destroyed and replaced by a new one.

Reasoning about Haskell definitions is a lot easier than reasoning about programs that use destructive assignment. In Haskell, standard reasoning about mathematical equations applies. E.g. after the Haskell declarations x = 1 and y = 2, the Haskell declaration x = x + y will raise an error "x" multiply defined. ... = in Haskell has the meaning "is by definition equal to"...
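
To see the book's point for yourself, a throwaway module like this (my own toy example, not from the book) is rejected by the compiler with a "multiply defined" / "multiple declarations" style error, precisely because = introduces a definition rather than an assignment:

x = 1
y = 2
x = x + y   -- rejected: 'x' is defined twice in the same scope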

This was a huge landslide victory for me, because I quit trying to write programs that get data here, put data there, values here, values there. Instead, I simply began to rewrite the original function as a new definition.

I became so confident that I was able to write a program to return all the leaves of a tree, and here it is:

data Tree a = Empty | Node a (Tree a) (Tree a)

-- leaves takes a tree and an empty list and returns a list of leaves
-- of the tree
leaves :: Tree a -> [a] -> [a]
leaves tree lis
    | empty tree = lis   -- an empty tree is just the leaves so far
    -- add on current node if it is terminal.. NO! scratch that! no add
    -- on! That is an action. We are simply rewriting leaves tree lis
    -- as something else based on what we found out about leaves tree lis
    | terminal tree = currentNode tree : lis
    | empty (rightBranch tree) = leaves (leftBranch tree) lis
    | empty (leftBranch tree)  = leaves (rightBranch tree) lis
    | otherwise = leaves (leftBranch tree) lis ++ leaves (rightBranch tree) lis
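
I haven't shown the helper functions (empty, terminal, currentNode, leftBranch, rightBranch); here is one plausible set of definitions for this Tree type, just so the snippet can be loaded as-is:

empty :: Tree a -> Bool
empty Empty = True
empty _     = False

terminal :: Tree a -> Bool
terminal (Node _ Empty Empty) = True
terminal _                    = False

currentNode :: Tree a -> a
currentNode (Node v _ _) = v
currentNode Empty        = error "currentNode: empty tree"

leftBranch, rightBranch :: Tree a -> Tree a
leftBranch  (Node _ l _) = l
leftBranch  Empty        = Empty
rightBranch (Node _ _ r) = r
rightBranch Empty        = Empty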

Looking back at "Algorithms in Haskell" by Rabhi and "Craft of FP" by Simon Thompson, they do both make this same statement, but somehow it never really hit me right.

I am confused on the implementation of (\\)

Submitted by metaperl on Wed, 10/05/2005 - 5:05pm.

From the Haskell Online Report we have:

(\\) :: Eq a => [a] -> [a] -> [a]
(\\) = foldl (flip delete)

Flipping delete changes the type signature to the following:


*Main> :t (flip delete)
forall a. (Eq a) => [a] -> a -> [a]

and foldl is usually obvious: it folds the list from the left, or the beginning. Meaning, it takes the seed element and the first element of the list and produces a result. This result becomes the new seed to be used with the next element of the list, and so on until the list is exhausted.

But what is foldl doing with this flipped delete in this case? It receives an entire list in the position where it normally just receives a seed element.
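
Expanding a small case by hand (my own worked example, not from the report) may make the mechanics clearer: the first list plays the role of the seed, and each element of the second list gets deleted from it in turn:

  [1,2,3,2] \\ [2,3]
= foldl (flip delete) [1,2,3,2] [2,3]
= foldl (flip delete) (delete 2 [1,2,3,2]) [3]
= foldl (flip delete) [1,3,2] [3]
= delete 3 [1,3,2]
= [1,2]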

Recursion and strong static typing can help you build functions!

Submitted by metaperl on Tue, 10/04/2005 - 5:57pm.

There is a synergy between recursion and strong static typing that I never
realized and here it is:

If you express a problem's solution recursively, every choice made on every
branch of the recursion, at any point in the problem space, has to match
the type signature of the promised outcome.

Take this function to delete a node from a tree for example:


delete :: Ord a => a -> Tree a -> Tree a

So one thing we know for dead certain is that delete will be returning a
Tree. So, if we code delete with a number of cases for different scenarios,
each of those cases must return a Tree.

Given this, one way to find the solution is to take a look at
all the possible trees you know about and ask when they would apply as
a solution. It kind of gets you going toward thinking about the mechanics
of the problem. First, let's rewrite the type signature as the function head:


delete val (Node v t1 t2)

and now let's ask the pivotal question once again: "what sorts of trees can
I think of?" Well, we are looking at two of them in the second argument
to delete() - t1 and t2. When is t1 the solution to this problem? Well,
that's the same as asking when we would return only the left subtree after
hitting a node. And when would we do that? Well, the answer to that is:
"if val == v and there is no right subtree, then we simply need to return
t1." Nice! Codewise, it looks like this:

(val == v) && (isNil t2) = t1

Now of course we have the "mirror" situation:
when do we return only t2? And the answer is very similar to the other one:

(val == v) && (isNil t1) = t2

OK, so we've nailed down two cases just by taking a look at the head
of the rule. Let's see: both of those cases had to do with being at a
node where val == v. The only difference was whether the left or the right
subtree was missing.

So obviously, we want to start thinking about when val == v and both
subtrees are there. Well in that case, we need to remove the root node and
join the 2 subtrees, maintaining the required binary tree ordering. So
that clause looks like this, assuming we use the fallback properties of
guards:

otherwise = join t1 t2

So we have thoroughly cased out all of the scenarios where we have reached
the node of interest. What happens if the current node has a value greater
than the one we want to delete? Well, the node we are looking for must
lie down the left subtree:

val < v = Node v (delete val t1) t2

and conversely, if the current node's value is less than the one we are
looking for then we need to try to delete down the right subtree:

val > v = Node v t1 (delete val t2)

So now that we have cased all deletion scenarios out, we have our
final delete function:

delete val (Node v t1 t2)
    | val < v   = Node v (delete val t1) t2
    | val > v   = Node v t1 (delete val t2)
    | isNil t2  = t1
    | isNil t1  = t2
    | otherwise = join t1 t2
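
For reference, here is a self-contained version that actually loads. The Tree type, isNil and join are my own fill-ins (SJT's book has its own definitions), with join taking the smallest element of the right subtree as the new root; I also added a clause for deleting from an empty tree:

data Tree a = Nil | Node a (Tree a) (Tree a) deriving Show

isNil :: Tree a -> Bool
isNil Nil = True
isNil _   = False

-- join t1 t2 assumes everything in t1 is smaller than everything in t2,
-- so the leftmost (smallest) element of t2 can serve as the new root
join :: Ord a => Tree a -> Tree a -> Tree a
join t1 Nil = t1
join t1 t2  = Node m t1 (delete m t2)
  where m = leftmost t2
        leftmost (Node v Nil _) = v
        leftmost (Node _ l   _) = leftmost l
        leftmost Nil            = error "join: unreachable for non-empty t2"

delete :: Ord a => a -> Tree a -> Tree a
delete _   Nil = Nil
delete val (Node v t1 t2)
    | val < v   = Node v (delete val t1) t2
    | val > v   = Node v t1 (delete val t2)
    | isNil t2  = t1
    | isNil t1  = t2
    | otherwise = join t1 t2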

conclusion

SJT says that one should start with an informal description of a problem, then come up with your types. I add to that: once you have your types, create type signatures for your functions; once you have your type signatures, build the head of each function with pattern matching; then use the required types to express the various cases.

SJT's book has a limited definition of relation... why?

Submitted by metaperl on Tue, 10/04/2005 - 1:58pm.

In 16.9 of SJT's book "Craft of Functional Programming" he says "A binary relation relates together certain elements of a set"

My problem with this is that the elements do not have to come from the same set. A relation is a subset of the cross product of sets A and B, where A and B need not be the same set, or even of the same type.

Next he gives a type definition

type Relation a = Set (a,a)

And I think it should be:

type Relation a b = Set (a,b)
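
A small sketch of what I mean, with a stand-in Set type (the book's Set module would do just as well):

newtype Set a = Set [a] deriving Show

type Relation a b = Set (a, b)

-- a heterogeneous relation between names and ages
ages :: Relation String Int
ages = Set [("alice", 33), ("bob", 41)]

-- SJT's homogeneous version is then just the special case where both sides agree:
type HomRelation a = Relation a a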