# Applications

## analyzing the stringification of a tree data structure

In section 8.3 of A Gentle Introduction to Haskell, discussing the stringification of tree structures, Hudak says the following about this code:

```haskell
showTree (Leaf x)     = show x
showTree (Branch l r) = "<" ++ showTree l ++ "|" ++ showTree r ++ ">"
```

> Because (++) has time complexity linear in the length of its left argument, showTree is potentially quadratic in the size of the tree.

So this brings up two questions:

- Why does `(++)` have time complexity linear in the length of its left argument?
- Why is showTree potentially quadratic in the size of the tree?
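One way to see the first point is from the standard definition of `(++)`: it must rebuild the entire left list, cons by cons, before the right list is reached. A minimal sketch, with the operator renamed `(+++)` to avoid clashing with the Prelude:

```haskell
-- A reimplementation of the Prelude's (++), renamed (+++).
-- It recurses down the whole left argument, so its cost is
-- proportional to the length of the left list.
(+++) :: [a] -> [a] -> [a]
[]     +++ ys = ys
(x:xs) +++ ys = x : (xs +++ ys)

-- In a left-nested chain like ((a ++ b) ++ c) ++ d -- the shape
-- showTree builds for a left-leaning tree -- the front elements are
-- re-traversed at every level, giving quadratic total work.
main :: IO ()
main = print ([1,2,3] +++ [4,5] :: [Int])  -- prints [1,2,3,4,5]
```

This also hints at the second answer: each `Branch` re-copies the already-stringified left subtree, so for a degenerate (list-like) tree the copying cost sums to O(n^2). The Prelude's `shows`/`ShowS` style avoids this by passing a tail instead of concatenating.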

## Novice Questions

- For a function defined as `fn (x:xs) = ...`, what happens if it is called like this: `fn []`?
  - An error (a run-time pattern-match failure)
- What is a convenient way to alias a type?
  - `type String = [Char]`
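The first answer can be seen directly in GHC: a function with only a `(x:xs)` equation is non-exhaustive, and applying it to `[]` raises a run-time exception. A small sketch (`fn` here is a made-up example):

```haskell
import Control.Exception (SomeException, evaluate, try)

-- fn only has an equation for non-empty lists, so the pattern
-- is non-exhaustive: fn [] throws a PatternMatchFail at run time.
fn :: [Int] -> Int
fn (x:_) = x

main :: IO ()
main = do
  print (fn [1,2,3])  -- fine: the (x:xs) pattern matches
  r <- try (evaluate (fn [])) :: IO (Either SomeException Int)
  case r of
    Left _  -> putStrLn "pattern-match failure on []"
    Right v -> print v
```

Compiling with `-Wincomplete-patterns` warns about such definitions before they blow up at run time.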

- metaperl's blog

## Todo List

- A Haskell paste page which syntax-highlights Haskell code. Using paste.lisp.org is not a good idea because people don't respond to paste questions listed there.


## First Post: Cellular Automata Simulator

Alright, I'm new to Haskell, and I just wrote a cellular automata simulator that a few people have expressed interest in seeing, so here it is:

```haskell
module Main where

import Data.Bits
import Text.Printf (printf)

newtype State = State ([Bool], Int)

stateLineToString :: [Bool] -> String
stateLineToString (x:xs)
  | x         = 'X' : stateLineToString xs
  | otherwise = ' ' : stateLineToString xs
stateLineToString [] = "|"

instance Show State where
  show (State (x, y)) = printf "%04d:%s|" y (stateLineToString x)

{- hood = 3-cell neighborhood; this determines which bit of the
   ruleNumber we look at for the result; its three least significant
   bits determine which neighborhood it is -}
applyRule :: Int -> (Bool, Bool, Bool) -> Bool
applyRule ruleNum hood = testBit ruleNum (tripleToInt hood)

tripleToInt :: (Bool, Bool, Bool) -> Int
tripleToInt (x, y, z) = trueNum x 4 + trueNum y 2 + trueNum z 1
  where trueNum b n | b         = n
                    | otherwise = 0

applyRuleToState :: ((Bool, Bool, Bool) -> Bool) -> State -> State
applyRuleToState f (State (x, y)) = State (False : applyRuleToList f x, y + 1)

applyRuleToList :: ((Bool, Bool, Bool) -> Bool) -> [Bool] -> [Bool]
applyRuleToList rule (a:b:c:rest) = rule (a, b, c) : applyRuleToList rule (b:c:rest)
applyRuleToList _ [x, _] = [x]
applyRuleToList _ _      = []

testState :: State
testState = State (replicate 100 False ++ [True] ++ replicate 100 False, 0)

rule30, rule90 :: (Bool, Bool, Bool) -> Bool
rule30 = applyRule 30
rule90 = applyRule 90

rule30All, rule90All :: [State]
rule30All = iterate (applyRuleToState rule30) testState
rule90All = iterate (applyRuleToState rule90) testState

test :: State
test = applyRuleToState rule30 testState

rulesToString :: [State] -> String
rulesToString (x:xs) = show x ++ "\n" ++ rulesToString xs
rulesToString []     = "\n"   -- fixed: the original misspelled this equation as rulesToSTring

main :: IO ()
main = putStrLn (rulesToString (take 100 rule90All))
```

As I'm still new there are some sloppy things with it, but it'll output a rule 90 cellular automata. With slight modification, it'll output any that use 3 cell neighborhoods.

- Revision17's blog

## Haskell Links


## If haskellers built a web browser...

It would be very much like Opera. When I use Opera, I feel like I am using something orthogonal to the internet, as opposed to being right down in it.

Opera goes to great pains to be elegant and correct and will not let down its hair in the name of practicality or speed. It is very beautiful, and it would be nice if the internet raised its standards to Opera's. But that isn't going to happen.

Life is unfortunately more of a slugfest than a ballet dance. More play-it-by-ear than force-a-conclusion-by-mathematical-induction.

## understanding the Haskell implementation of the least divisor function

OK, first: LD(n) is the least divisor of n. It is a number p such that p * a = n, with a >= p and p > 1.

So now to express LD(n) as a Haskell function:

```haskell
ld n = ldf 2 n
```

where `ldf k n` means the least divisor of n that is >= k, so `ldf 2 n` is the least divisor of n that is >= 2. Hopefully it is understood that the Haskell expression of ld(n) is equivalent to our initial mathematical definition.

Now, something else that is asserted and then proved:

> n > 1 and n not prime implies (LD(n))^2 <= n

So then the following definition of `ldf k n` is offered:

```haskell
ld n = ldf 2 n

ldf k n | divides n k = k
        | k^2 > n     = n
        | otherwise   = ldf (k+1) n
```

But I do not understand why the second guard is true. Clearly it has to do with "n > 1 and n not prime implies (LD(n))^2 <= n", but I do not see how they relate.
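For concreteness, here is the definition above made runnable. The `divides` helper is filled in as my own guess, with the argument order matching the call `divides n k` (true when k divides n); the book's actual helper may be spelled differently:

```haskell
-- divides n k: True when k divides n (argument order matches the call above).
divides :: Integer -> Integer -> Bool
divides n k = rem n k == 0

-- ldf k n: least divisor of n that is >= k.
ldf :: Integer -> Integer -> Integer
ldf k n | divides n k = k          -- k divides n, and no smaller candidate did
        | k^2 > n     = n          -- no divisor found with square <= n; by the
                                   -- contrapositive of the quoted fact, n must
                                   -- be prime, so its least divisor is n itself
        | otherwise   = ldf (k+1) n

ld :: Integer -> Integer
ld n = ldf 2 n

main :: IO ()
main = print (map ld [2..12])  -- prints [2,3,2,5,2,7,2,3,2,11,2]
```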

## HR - proof - state things ['tractably', 'comprehensively']

Submitted by metaperl on Sat, 04/01/2006 - 11:33am.
I am reading p. 4 of The Haskell Road and for a long time struggled with one part of a proof on that page. It was his comment about c, a divisor of n:

> ... there are natural numbers a and b with c = a * b, and also 1 < a and a < c. *But then a divides n* ...

and that is what confused me. He did not directly show how it was true that a divides n just because c was the least divisor of n.

### Stating things tractably

Let's forget about my problem for a second and look at how to state things tractably.

It may be true that if n is prime then n is only divisible by 1 and itself. But is that the most algebraic way of stating it? Can you go from that statement to anything useful? Maybe, but let's take a look at this way of stating it: n prime means that only 1 * n = n and nothing else.

One other thing. Let's try this same technique for what it means for a to be divisible by b: it means that b * x = a, where x is integral.

Now what have we bought ourselves? Instead of *primeness* and *divisibility* being two concepts with no apparent relation, we see them both in terms of what it means for products involving them. Both concepts are now expressed as

E * F

where E and F vary based on which concept we are talking about.
Why is this useful? Because at one point in following the proof, I hit this statement: assume c = LD(n) is not prime. In other words, "Assume that the least divisor of n is c and it is not prime." Now we reach the second thing:

### State all implications

So let's take our statement and explore each isolated fact. First, "c is the least divisor of n" tells us all of this:

- c * q = n
- 1 * n = n
- c > 1

And "c is not prime" tells us all of this:

- 1 * c = c
- a * b = c, where a != 1
Now what threw me is when the author of the text said: "Then there are natural numbers a and b with c = a * b, and also 1 < a and a < c. *But then a divides n* ..." and that is what confused me. He did not directly show how it was true that a divides n. But if you look above, we have

c * q = n

but also

a * b = c

By substituting a * b for c, we have

a * b * q = n

which means that a divides n.
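The conclusion the proof is driving at, that the least divisor of any n > 1 is itself prime, is easy to spot-check in Haskell. A quick sketch (the naive `ld` and `isPrime` below are my own illustrations, not the book's definitions):

```haskell
-- Naive least divisor: the first d >= 2 that divides n.
ld :: Integer -> Integer
ld n = head [d | d <- [2..n], n `rem` d == 0]

-- A number is prime exactly when its least divisor is itself.
isPrime :: Integer -> Bool
isPrime n = n > 1 && ld n == n

-- Check the claim "LD(n) is always prime" for n in [2..1000].
main :: IO ()
main = print (all (isPrime . ld) [2..1000])  -- prints True
```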

## Does Java lack recursion?

Submitted by metaperl on Mon, 01/23/2006 - 8:34pm.
In the same article, Joel Spolsky seems to imply that it is impossible to do recursion in Java.

I find that hard to believe...

## Google outstrips the competition via Functional Programming

Submitted by metaperl on Mon, 01/23/2006 - 8:32pm.
In The Perils of Java Schools, Joel Spolsky states:

> Without understanding functional programming, you can't invent MapReduce, the algorithm that makes Google so massively scalable. The terms Map and Reduce come from Lisp and functional programming. MapReduce is, in retrospect, obvious to anyone who remembers from their 6.001-equivalent programming class that purely functional programs have no side effects and are thus trivially parallelizable. The very fact that Google invented MapReduce, and Microsoft didn't, says something about why Microsoft is still playing catch-up trying to get basic search features to work, while Google has moved on to the next problem: building Skynet^H^H^H^H^H^H the world's largest massively parallel supercomputer. I don't think Microsoft completely understands just how far behind they are on that wave.
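The map/reduce idea Spolsky alludes to shows up in miniature in any functional language: a side-effect-free map phase followed by a reduce-by-key phase. A toy Haskell sketch (the word-count example is my own, not from the article):

```haskell
import Data.Char (toLower)
import qualified Data.Map as M

-- Toy "MapReduce": map each word to a (key, 1) pair, then reduce by
-- key with (+). Both phases are pure, which is what makes the real
-- thing straightforward to distribute across machines.
wordCount :: String -> M.Map String Int
wordCount = M.fromListWith (+) . map (\w -> (map toLower w, 1)) . words

main :: IO ()
main = print (M.toList (wordCount "the quick the lazy the end"))
```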
