It would be very much like Opera... when I use Opera, I feel like I am using something orthogonal to the internet, as opposed to being right down in it.
Opera goes to great pains to be elegant and correct and will not let its hair down in the name of practicality or speed. It is very beautiful, and it would be nice if the internet raised its standards to Opera's. But that isn't going to happen.
Life is unfortunately more of a slugfest than a ballet. More play-it-by-ear than force-a-conclusion-by-mathematical-induction.
Ok, first, LD(n) is the least divisor of n. It is a number p such
that p * a = n with a >= p and p > 1 — that is, the smallest divisor of n greater than 1.
So now to express LD(n) as a Haskell function:
ld n = ldf 2 n
Where ldf 2 n means the least divisor of n that is >= 2.
Hopefully it is understood that the Haskell expression ld n is
equivalent to our initial mathematical definition.
Now, something else that is asserted and then proved:
n > 1 and n not prime IMPLIES (LD(n))^2 <= n
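A quick empirical check of this claim, using a hypothetical brute-force least divisor (the names here are my own, not the book's):

```haskell
-- Naive least divisor: the smallest d > 1 that divides n.
ld :: Integer -> Integer
ld n = head [d | d <- [2..n], n `mod` d == 0]

isPrime :: Integer -> Bool
isPrime n = n > 1 && ld n == n

-- Check (LD(n))^2 <= n for every composite n up to 1000.
main :: IO ()
main = print (and [ld n ^ 2 <= n | n <- [2..1000], not (isPrime n)])
```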
So then the following definition of ldf k n is offered:

ld n = ldf 2 n
ldf k n | divides n k = k
        | k^2 > n     = n
        | otherwise   = ldf (k+1) n
But I do not understand why the second guard is correct. Clearly it has
to do with "n > 1 and n not prime IMPLIES (LD(n))^2 <= n", but I do not
see how they relate.
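One way to see it is by the contrapositive of the theorem: by the time ldf reaches k, no number between 2 and k-1 divides n, so LD(n) >= k. If also k^2 > n, then (LD(n))^2 > n, and by the theorem n cannot be composite; so n is prime and LD(n) = n, which is exactly what the second guard returns. Here is a runnable sketch of the definitions above (I am assuming divides n k tests whether k divides n, as the guard suggests):

```haskell
-- Sketch of the book's definitions; divides n k tests whether k divides n.
divides :: Integer -> Integer -> Bool
divides n k = n `mod` k == 0

ldf :: Integer -> Integer -> Integer
ldf k n | divides n k = k           -- k is the least divisor of n that is >= k
        | k^2 > n     = n           -- no divisor up to sqrt n, so n is prime
        | otherwise   = ldf (k+1) n

ld :: Integer -> Integer
ld n = ldf 2 n

main :: IO ()
main = print (map ld [2,3,4,9,91,97])  -- [2,3,2,3,7,97]
```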
Stating things tractably:
Let's forget about my problem for a second and look at how to state what it means for n to be prime.
It may be true that if n is prime then n is only
divisible by 1 and itself. But is that the most algebraic way of
stating it? Can you go from that statement you just made to anything
useful? Maybe, but let's take a look at this way of stating it:
n prime means that 1 * n = n and nothing else: no product of two factors greater than 1 equals n
One other thing. Let's try the same technique on what it means for a to be divisible by b. It means that b * x = a, where x is integral.
Now what have we bought ourselves? Instead of primeness and divisibility being two concepts with no apparent relation, we now see both in terms of products. Both concepts are expressed as

E * F

where E and F vary based on which concept we are talking about.
Why is this useful? Because at one point in following the proof, I hit this statement: assume c = LD(n) is not prime. In other words, "Assume that the least divisor of n is c and that c is not prime." Now we reach the second technique:
State all implications:

So let's take our statement and explore each isolated fact. First, c is the least divisor of n tells us all of this:

- c * q = n for some natural number q
- c > 1
- no number d with 1 < d < c divides n

And c not prime tells us all of this:

- c = a * b for some natural numbers a and b
- 1 < a and a < c
Now what threw me is when the author of the text said: "Then there are natural numbers a and b with c = a * b, and also 1 < a and a < c. But then a divides n..." and that is what confused me. He did not directly show how it was true that a divides n. But if you look above, we have

c * q = n

but also

a * b = c

By substituting a * b for c, we have

a * b * q = n

which means that a divides n.
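The substitution step can be sanity-checked with made-up numbers (a, b, and q below are arbitrary values of mine, not anything from the proof):

```haskell
main :: IO ()
main = do
  let (a, b, q) = (3, 5, 4) -- arbitrary example values
      c = a * b             -- c = 15, so a * b = c
      n = c * q             -- n = 60, so c * q = n
  -- then a * (b * q) = n, i.e. a divides n:
  print (n `mod` a == 0)    -- True
```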
In the same article Joel Spolsky seems to imply that it is impossible to do recursion in Java.
I find that hard to believe...
In The Perils of Java Schools, Joel Spolsky states:
Without understanding functional programming, you can't invent MapReduce, the algorithm that makes Google so massively scalable. The terms Map and Reduce come from Lisp and functional programming. MapReduce is, in retrospect, obvious to anyone who remembers from their 6.001-equivalent programming class that purely functional programs have no side effects and are thus trivially parallelizable. The very fact that Google invented MapReduce, and Microsoft didn't, says something about why Microsoft is still playing catch up trying to get basic search features to work, while Google has moved on to the next problem: building Skynet^H^H^H^H^H^H the world's largest massively parallel supercomputer. I don't think Microsoft completely understands just how far behind they are on that wave.
[17:00:07] > let fibs = 1:1:(zipWith (+) fibs (tail fibs)) in take 10 $ fibs
[17:00:14] <_metaperl> the free haskell books are not very good
[17:00:31] they're good enough for me, for now
[17:00:49] <_metaperl> jethr0: he did something like [ | x <- [1, 1] , y <- [2, ] ]
[17:00:52] <_metaperl> I cant remember it
[17:01:07] let fibs = [1,1] ++ [x + y | x <- fibs, y <- tail fibs] in take 10 $ fibs
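For what it's worth, the zipWith definition quoted above really does yield the Fibonacci numbers; the list-comprehension version from the log does not, because x never advances past the first element of the infinite list fibs:

```haskell
-- The zipWith-based Fibonacci stream from the chat log.
fibs :: [Integer]
fibs = 1 : 1 : zipWith (+) fibs (tail fibs)

main :: IO ()
main = print (take 10 fibs)  -- [1,1,2,3,5,8,13,21,34,55]
```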
On #haskell, Sylvan said that Haskell had a reputation as an academic toy and that books should start off with monadic I/O and other "practical" concerns to eliminate that stereotype.
Cale responded by saying that GHCi's REPL provides adequate I/O facilities.
I responded by saying that the most practical thing you can do when learning Haskell is to gain the ability to decompose a problem into a set of functions. Everything else about Haskell follows from pure lazy functions, and until you are comfortable and conversant in this paradigm, you really can't develop any new and useful code. You will simply be an ignorant little monkey, aping things you saw in a tutorial, with a crippled ability to analyze and synthesize strongly-typed lazy functions.
I did not say this then, but I also have this response. The impression of Haskell will not change with another book of practical examples. It will only change when Haskell has huge corporate-ready libraries for every huge corporate issue: when Haskell has a CPAN locked and loaded and ready to go. When Haskell has something as widely used as Mailman. When every website you see is done with Haskell instead of PHP. When you can deploy CORBA apps with Haskell. When you can cache and pre-compute queries for a SQL database in Haskell. When you have the speed and flexibility and eager persistent data loading of mod_perl for Haskell. When a high-quality, fully functional, efficient streaming media player written in Haskell sits on everyone's Linux desktop. When the favorite free drawing program is written in Haskell. In short, when Haskell has the equivalent of CPAN. CGI has been a web standard for almost 10 years; try matching the real-world, ready-to-go functionality of CGI.pm. That's one CPAN module in the Perl toolbox. Just one. Then you have 4000 more to go.
Haskell is the most beautiful, elegant, and correctly designed programming language on this planet. It is also the most powerful at chopping apart new specs and writing reliable code in short periods of time; how else could it stomp every other language two years in a row in the ICFP contest? But the only way to remove an impression is with results: tangibles and deliverables. Unfortunately, the focus on beauty, elegance, and correctness seems to get in the way of ever getting anything corporate-ready out the door.
I am grateful to be studying Haskell. It has done wonders for my corporate-level Perl development. The book to read is "The Haskell Road to Logic, Maths and Programming." This book will force you to think clearly. It does not cover monads, but monads can wait until you can specify and solve problems. Still, my improved thought processes are one thing; delivering projects on a deadline is another, and Perl continues to deliver through a cooperative community based around CPAN. The Haskell community is based around the reasoning of masterminds doing intelligent things. The Perl community is based around the needs of people doing practical things in the best way that they can.
Finally, getting corporate ready is not something you can plan. It happens. It happens by necessity. It happens when someone has to swim or sink.
Earlier I discussed how = in Haskell is equational. Well, it certainly came to the rescue this time. I was trying to understand the given solution in "The Haskell Road to Logic, Maths and Programming" for the problem of calculating all the sublists of a list:
powerList [] = [ [] ]
powerList (x:xs) = powerList xs ++ (map (x:) (powerList xs))
So I looked at this program, but had no idea how in the world this was the solution. So what I did is I worked upwards from the most trivial case like so:
-- The powerlist of a list with no elements is a list consisting of
-- the empty list
powerList [] = [ [] ]
-- Now, let's work with a list with 1 element:
powerList [a] = [ [a], [] ]
-- Fair enough, now how about a list with 2 elements:
powerList [a,b] = [ [a,b], [b], [a], [] ]
-- Hmm, but let's rewrite that
[ [a,b], [b], [a], [] ] = powerList [a] ++ [ [a,b], [b] ]
-- And rewrite once again
powerList [a] ++ [ [a,b], [b] ] = powerList [a] ++ (map (b:) (powerList [a]))
-- Now let's try 3 elements just to make sure we have a pattern:
powerList [c,a,b] = [[c,a,b], [c,b], [c,a], [c]] ++ powerList [a,b]
-- but with order independence we have:
powerList [c,a,b] = powerList [a,b] ++ [[c,a,b], [c,b], [c,a], [c]]
-- but [[c,a,b], [c,b], [c,a], [c]] = map (c:) (powerList [a,b]) so:
powerList [c,a,b] = powerList [a,b] ++ map (c:) (powerList [a,b])
-- generalizing we have:
powerList (x:xs) = powerList xs ++ (map (x:) (powerList xs))
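Putting the generalization together with the base case gives the full definition, which runs as-is:

```haskell
-- All sublists (the power set) of a list.
powerList :: [a] -> [[a]]
powerList []     = [ [] ]
powerList (x:xs) = powerList xs ++ map (x:) (powerList xs)

main :: IO ()
main = print (powerList "ab")  -- ["","b","a","ab"]
```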
I find it hard to believe that a language designed by committee did not have bitter, furious fights over some issues in the language. Then again, I suppose the design goals with Haskell were fairly clear: purely functional; types, because they are necessary; laziness, because it fits with pure functions. End of discussion!
By the way, I have to comment that the Haskell community is so much friendlier than the Perl one.