News aggregator

Ken T Takusagawa: [ojorjgmk] 1D Lego fractal

Planet Haskell - Fri, 12/19/2014 - 11:26am

A one-dimensional cellular automaton generated using the following state transition rules. Note that each cell in the next line (generation) depends on four parents, so it is best drawn on a hexagonal grid rather than the square grid on which 1D cellular automata are usually drawn (three parents). Vaguely reminiscent of the Sierpinski gasket, and even more so of Rule 30.

* represents either 0 or 1 ("don't care"), so 16 rules total, as one would expect with 4 parents.

*00* -> 0
*11* -> 0
101* -> 0
*101 -> 0
001* -> 1
*100 -> 1

The general idea is that cells grow to the left and right, but try to avoid crowding. The structure is a binary tree: branches never re-meet.
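For concreteness, here is a minimal Haskell sketch (mine, not the linked source code) of one generation of this automaton. The function name, the list representation of a row, and the zero padding at both ends are assumptions made for illustration.

import Data.List (zip4)

-- One generation of the four-parent automaton, using the rules above.
-- Each new cell is computed from four consecutive parents; padding the
-- row with zeros is an assumed boundary condition.
step :: [Int] -> [Int]
step row =
  [ rule a b c d
  | (a, b, c, d) <- zip4 padded (drop 1 padded) (drop 2 padded) (drop 3 padded) ]
  where
    padded = [0, 0] ++ row ++ [0, 0]
    rule _ 0 0 _ = 0   -- *00* -> 0
    rule _ 1 1 _ = 0   -- *11* -> 0
    rule 1 0 1 _ = 0   -- 101* -> 0
    rule _ 1 0 1 = 0   -- *101 -> 0
    rule 0 0 1 _ = 1   -- 001* -> 1
    rule _ 1 0 0 = 1   -- *100 -> 1
    rule _ _ _ _ = 0   -- unreachable for 0/1 cells

Iterating from a single live cell, e.g. iterate step [1], then yields successive generations, one row per element.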

Bigger picture (click to enlarge):

What is the asymptotic density?

Inspired by an attempt to build a dense Lego pyramid that could still be easily taken apart: the pyramid is built as concentric squares around the vertical axis, and this fractal is its vertical cross section. (The inner structures turned out to be too fragile.)

Haskell source code to generate the images.

Categories: Offsite Blogs

Lennart Augustsson: A commentary on 24 days of GHC extensions, part 2

Planet Haskell - Fri, 12/19/2014 - 10:45am
Day 7: Type Operators

I understand the usefulness of the type operator GHC extension, but I don't like it. For value identifiers any operator not starting with a ':' is treated the same way as identifiers starting with a lower case letter. This is how it used to be on the type level too. But with the latest version of the TypeOperators extension these operators now behave like upper case identifiers and can be used to name types. An ugly wart, IMO.
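For readers who have not seen the extension, here is a small illustrative example (not from the post) of a symbolic operator naming a type:

{-# LANGUAGE TypeOperators #-}

-- With TypeOperators, (+) below names a type constructor: the usual
-- anonymous sum type.
data a + b = Inl a | Inr b

swap :: a + b -> b + a
swap (Inl x) = Inr x
swap (Inr y) = Inl y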

Day 8: Recursive do

This extension stems from Levent Erkök's PhD thesis Value Recursion in Monadic Computations. The motivation was to be able to describe circuits with feedback using monadic notation.

There are actually two ways to get recursive do-notation. You can either use the rec keyword inside the do, or you can use the keyword mdo instead of do. Why mdo, you ask? It should really be μdo, where μ is the recursion operator. But people at GHC HQ are scared of Unicode, so instead we got mdo.
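A toy sketch of the two notations, using IO's MonadFix instance; the bindings and names here are illustrative only:

{-# LANGUAGE RecursiveDo #-}

-- The circular binding is resolved by MonadFix (fixIO for IO).
ones1 :: IO [Int]
ones1 = mdo
  xs <- return (1 : xs)      -- xs refers to itself
  return (take 3 xs)

ones2 :: IO [Int]
ones2 = do
  rec xs <- return (1 : xs)  -- the same thing with the rec keyword
  return (take 3 xs)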

Day 9: Nullary Type Classes

Once you have accepted multi-parameter type classes then nullary type classes are a natural consequence; zero is also a number, after all.

In 1994 I worked on the pH language when I visited MIT. The acronym pH stands for "parallel Haskell", which is what it is. It had some imperative features that were not in a monad; the ST monad was just being invented at the time, so we were trying out other things. To keep track of what parts of the program used imperative features I used a nullary type class Imperative that tainted any code that used assignment et al. (As far as I know, this is the earliest use of a nullary type class.) When mixing imperative code and pure code it is important not to be too polymorphic. ML-like languages use the value restriction to accomplish this. Since Haskell's monomorphism restriction is essentially the same as the value restriction, having the Imperative type class neatly accomplished the value restriction for pH.
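A hedged reconstruction of that idea in today's GHC (not the original pH code); the names are made up for illustration:

{-# LANGUAGE MultiParamTypeClasses, FlexibleContexts #-}
-- (GHC 7.8 called this NullaryTypeClasses; it was later folded into
-- MultiParamTypeClasses.)

import Data.IORef (IORef, writeIORef)

-- A class with no parameters: any code that uses assignment picks up the
-- Imperative constraint and is thereby "tainted".
class Imperative

assign :: Imperative => IORef Int -> Int -> IO ()
assign = writeIORef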

Day 10: Implicit Parameters

The implicit parameter started as an attempt to simplify Haskell overloading. Haskell overloading, when implemented by dictionary passing, really has two parts: first there is a way to implicitly pass dictionaries around, and second, dictionaries are constructed automatically by using the instance declarations as a logical inference system.

Implicit parameters do the automagic passing of parameters, and are thus half of what is needed for overloading. But they were never used for overloading; instead it was discovered that they can be made to behave like dynamic binding, but with static types. This is quite cool, even if one doesn't like dynamic binding (I don't).

The idea is presented in Erik Meijer et al.'s paper Implicit Parameters: Dynamic Scoping with Static Types. It's worth reading.
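A small illustration (mine, not the paper's) of dynamic binding with static types:

{-# LANGUAGE ImplicitParams #-}

-- ?width is an implicit parameter: it is passed automatically and picked
-- up from whatever binding is in scope at the call site.
pad :: (?width :: Int) => String -> String
pad s = take ?width (s ++ repeat ' ')

main :: IO ()
main = do
  let ?width = 8 :: Int
  putStrLn (pad "hi")          -- pads "hi" to 8 characters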

Day 11: Type Families

Type families grew out of the need to have type classes with associated types. The latter is not strictly necessary since it can be emulated with multi-parameter type classes, but it gives a much nicer notation in many cases. The same is true for type families; they can also be emulated by multi-parameter type classes. But MPTC gives a very logic-programming style of doing type computation, whereas type families (which are just type functions that can pattern match on their arguments) are like functional programming.

Using closed type families adds some extra strength that cannot be achieved by type classes. To get the same power from type classes we would need to add closed type classes, which would be quite useful; this is what instance chains give you.
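A hedged sketch of a closed type family, with made-up names: the equations are tried top to bottom, so the overlap below is resolved by order, which open type class instances cannot express directly.

{-# LANGUAGE TypeFamilies, DataKinds #-}

-- A closed type family: the second equation applies only when the first
-- does not match.
type family IsList t :: Bool where
  IsList [a] = 'True
  IsList t   = 'False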

Day 12: Multi-Parameter Type Classes

Even the original paper on Haskell type classes mentions multiple parameters, so this idea has been with us from the start. But it's not present in Stefan Kaes's original work on type classes.

Unfortunately, multiple type class parameters very quickly lead to type ambiguities in the code, and we are forced to add type signatures. So MPTC was not really useful until Mark Jones introduced functional dependencies. Mark got the idea from database theory where the same idea is used to limit the relations in the database.
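The classic example, reconstructed here for illustration only: a collection class where the functional dependency c -> e tells the type checker that the element type is determined by the collection type, removing the ambiguity described above.

{-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies, FlexibleInstances #-}

class Collection c e | c -> e where
  empty  :: c
  insert :: e -> c -> c

instance Collection [a] a where
  empty  = []
  insert = (:)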

Categories: Offsite Blogs

www.well-typed.com

del.icio.us/haskell - Fri, 12/19/2014 - 6:32am
Categories: Offsite Blogs

Danny Gratzer: Treating Programs like Vending Machines

Planet Haskell - Thu, 12/18/2014 - 6:00pm
Posted on December 19, 2014 Tags: haskell, types

Proving things about programs is quite hard. In order to make it simpler, we often lie a bit. We do this quite a lot in Haskell when we say things like “assuming everything terminates” or “for all sane values”. Most of the time, this is alright. We sometimes need to leave the universe of terminating things, though, whenever we want to prove things about streams or other infinite structures.

In fact, once we step into the wondrous world of the infinite we can’t rely on structural induction anymore. Remember that induction relies on the “well-foundedness” of the thing we’re inducting upon, meaning that there are no infinite descending chains below any object. But an infinite structure has exactly such an infinite descending chain of things smaller than it! For some intuition here, something like foldr (which behaves just like structural induction) may not terminate on an infinite list.

This is quite a serious issue since induction was one of our few solid tools for proof. We can replace it though with a nifty trick called coinduction which gives rise to a useful notion of equality with bisimulation.

Vending Machines

Before we get to proving programs correct, let’s start with proving something simpler: the equivalence of two simple machines. These machines (A and B) have 3 buttons. Each time we push a button the machine reconfigures itself. A nice real world example of such machines would be vending machines. We push a button for coke, out pops a (very dusty) can of coke, and the machine is now slightly different.

Intuitively, we might say that two vending machines are equivalent if and only if our interactions with them can’t distinguish one from the other. That is to say, pushing the same buttons on one gives the same output and leaves both machines in equivalent states.

To formalize this, we first need to formalize our notion of a vending machine. A vending machine is comprised of a set of states. These states are connected by labeled arrows called transitions. We’ll refer to the start of a transition as its domain and its target as its codomain. This collection of states and labeled transitions is properly called a labeled transition system (LTS).

To recap how this all relates back to vending machines

  1. A state is a particular vending machine at a moment in time
  2. A transition between A and B would mean we could push a button on A and wind up with B
  3. The label of such a transition is the delicious sugary drink produced by pushing the button

Notice that this view pays no attention to all the mechanics going on behind the scenes of pushing a button, only the end result of the button push. We refer to the irrelevant stuff as the “internal state” of the vending machine.

Let’s consider a relation R with A R B if and only if

  1. There exists a function f from transitions from A to transitions from B so that x and f(x) have the same label.
  2. Further, if A R B and A has a transition x, then the codomain of x is related to the codomain of f(x).
  3. There is a g satisfying 1. and 2., but from transitions from B to transitions from A.

This definition sets out to capture the notion that two states are related if we can’t distinguish between them. The fancy term for such a relation is a bisimulation. Our notion of equivalence is then called bisimilarity and denoted ~; it is the union of all bisimulations.

Now how could we prove that A ~ B? Since ~ is the union of all bisimulations, all we need to do is construct a bisimulation R so that A R B and hey presto, they’re bisimilar.

To circle back to vending machine terms, if for every button on machine A there’s a button on B that produces the same drink and leaves us with related machines then A and B are the same.
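As a rough Haskell sketch (mine, not the post's), here is a finitely branching LTS and a depth-bounded approximation of this button-matching game; true bisimilarity is the limit as the depth grows.

-- States of a finitely branching LTS: each state lists its outgoing
-- transitions as (label, successor) pairs.
newtype LTS l = LTS { transitions :: [(l, LTS l)] }

-- Depth-bounded check: every move of one machine is matched by an equally
-- labeled move of the other, and the successor states are again related.
bisimTo :: Eq l => Int -> LTS l -> LTS l -> Bool
bisimTo 0 _ _ = True
bisimTo n a b = matches a b && matches b a
  where
    matches x y =
      all (\(l, x') ->
             any (\(l', y') -> l == l' && bisimTo (n - 1) x' y')
                 (transitions y))
          (transitions x)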

From Vending Machines to Programs

It’s all very well and good that we can talk about the equality of labeled transition systems, but we really want to talk about programs and pieces of data. How can we map our ideas about LTSs into programs?

Let’s start with everyone’s favorite example, finite and infinite lists. We define our domain of states to be

L(A) = {nil} ∪ {cons(n, xs) | n ∈ ℕ ∧ xs ∈ A}

We have to define this as a function of A, where A stands for the type of the tails of the list, which means this definition isn’t recursive! It’s equivalent to

data ListF a = Cons Int a | Nil

What we want here is a fixed point of L, an element X so that L(X) = X. This is important because it means

cons :: ℕ → X → L(X)
cons :: ℕ → L(X) → L(X)

Which is just the type we’d expect cons to have. There’s still a snag here: what fixed point do we want? How do we know one even exists? I’d prefer not to delve into the math behind this (see TAPL’s chapter on infinite types), but the gist of it is that, for any function F, if

  1. F is monotone so that x ⊆ y ⇒ F(x) ⊆ F(y)
  2. F is cocontinuous so that ∩ₓF(x) = F(∩ₓ x)

then there exists an X = F(X) which is greater than or equal to all other fixed points. The proof of this isn’t too hard; I encourage the curious reader to go and have a look. Furthermore, poking around why we need cocontinuity is enlightening: it captures the notion of “nice” lazy functions. If you’ve looked at any domain theory, it’s similar to why we need continuity for least fixed point (inductive) functions.

This greatest fixed point is what we get with Haskell’s recursive types, and that’s what we want to model. What’s particularly interesting is that the greatest fixed point includes infinite data, which is very different from the least fixed point, which is what we usually prefer to think about when dealing with things like F-algebras and proofs by induction.

Now anyways, to show L has a fixed point we have to show it’s monotone. If X ⊆ Y then L(X) ⊆ L(Y) because x ∈ L(X) means x = nil ∈ L(Y) or x = cons(h, t), but since t ∈ X ⊆ Y then cons(h, t) ∈ L(Y). Cocontinuity is left as an exercise to the reader.

So L has a greatest fixed point: X. Let’s define an LTS with states being L(X) and with the transitions cons(a, x) → x labeled by a. What does bisimilarity mean in this context? Well nil ~ nil since neither have any transitions. cons(h, t) ~ cons(h', t') if and only if h = h' and t ~ t'. That sounds a lot like how equality works!

To demonstrate this, let’s define two lists

foo = cons(1, foo)
bar = cons(1, cons(1, bar))

Let’s prove that foo ~ bar. Start by defining a relation R with foo R bar. Now we must show that each transition from foo can be matched with one from bar, since there’s only one from each this is easy. There’s a transition from foo → foo labeled by 1 and a transition from bar → cons(1, bar) also labeled by one. Here lies some trouble though, since we don’t know that foo R cons(1, bar), only that foo R bar. We can easily extend R with foo R cons(1, bar) though and now things are smooth sailing. The mapping of transitions for this new pair is identical to what we had before and since we know that foo R bar, our proof is finished.

Here is the portion of the LTS our proof was about:

foo            bar
 | 1            | 1
foo            cons(1, bar)
 | 1            | 1
foo            bar

and our bisimulation R is just given by {(foo, bar), (foo, cons(1, bar))}.
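In Haskell terms (an illustration of mine, not from the post), foo and bar are the streams below; the bisimulation above is what licenses calling them equal, even though evaluating (==) on them would never terminate.

foo, bar :: [Int]
foo = 1 : foo
bar = 1 : 1 : bar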

Now that we’ve seen that we can map our programs into LTSs and apply our usual tricks there, let’s formalize this a bit.

A More Precise Formulation of Coinduction

First, what exactly is [co]induction? Coinduction is a proof principle for proving something about elements of the greatest fixed point of a function, F. We can prove that the greatest fixed point, X, is the union of all the sets Y so that Y ⊆ F(Y).

If we can prove that there exists a Y ⊆ F(Y) that captures our desired proposition, then we know that Y ⊆ gfp(F). That is the principle of coinduction. Unlike the principle of induction, we don’t get proofs about all members of a set; rather we get proofs that there exist members which satisfy the property. It also should look very similar to how we proved things about ~.

So now that we’ve defined coinduction across a function, what functions do we actually want to plop into this? We already know what we want for lists,

List(A) = {nil} ∪ {cons(h, t) | h ∈ H ∧ t ∈ A}

But what about everything else? Well, we do know that each value is introduced by a rule. These rules are always of the form

Some Premises Here
——————————————————
conclusion here

So for example, for lists we have

——————————————
nil ∈ List(A)

h ∈ H    t ∈ A
————————————————————
cons(h, t) ∈ List(A)

Now our rules can be converted into a function with a form like

F(A) = ∪ᵣ {conclusion | premises}

So for lists this gives

F(A) = {nil} ∪ {cons(h, t) | h ∈ H ∧ t ∈ A}

as expected. We can imagine generalizing this to other things, like trees for example

——————————————
leaf ∈ Tree(A)

x ∈ H    l ∈ A    r ∈ A
————————————————————————
node(x, l, r) ∈ Tree(A)

Tree(A) = {leaf} ∪ {node(x, l, r) | x ∈ H ∧ l ∈ A ∧ r ∈ A}

Now the most common thing we want to prove is some notion of equality. This is harder than it seems because the usual notions of equality don’t work.

Instead we can apply bisimulation. Our approach is the same: we define a criterion for what it means to be a bisimulation across a certain type and define ~ as the union of all bisimulations. On lists we wanted the heads to be equal and the tails to be bisimilar, but what about on trees? We can take the same systematic approach we did before by considering what an LTS on trees would look like. leaf has no information contained in it and therefore no transitions. node(a, l, r) should have two transitions, left and right. Both of these give you a subtree contained by this node. What should they be labeled with? We can follow our intuitions from lists and label them both with a.

This leaves us with the following definition, R is a bisimulation on trees if and only if

  • leaf R leaf
  • node(a, l, r) R node(a', l', r') if and only if a = a' ∧ l R l' and r R r'

So to prove an equality between trees, all we must do is provide such an R and then we know R ⊆ ~!
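For instance (my example, not the post's), the tree analogue of foo and bar would be related by the relation {(t1, t2), (t1, node(1, t2, t2))}:

data Tree a = Leaf | Node a (Tree a) (Tree a)

-- Two infinite trees of 1s that the tree bisimulation above relates.
t1, t2 :: Tree Int
t1 = Node 1 t1 t1
t2 = Node 1 t2 (Node 1 t2 t2)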

This describes how we deal with coinductive things in general, really. We define what it means for a relation to partially capture what we’re talking about (like a bisimulation), and then the full thing is the union of all of these different views! Sortedness, for example, could be expressed as a unary relation S where

  • nil ∈ S
  • cons(a, xs) ∈ S → xs ∈ S ∧ (xs = nil ∨ a ≤ head xs)

the sorted predicate is the union of all such relations!
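As a finite approximation of that predicate (a sketch under my own naming), one can observe sortedness up to a depth n, which is all a terminating program can ever check of an infinite list:

-- Checks the sortedness condition on the first n steps of a possibly
-- infinite list; full membership in S is the coinductive limit.
sortedTo :: Ord a => Int -> [a] -> Bool
sortedTo 0 _            = True
sortedTo _ []           = True
sortedTo _ [_]          = True
sortedTo n (x : y : ys) = x <= y && sortedTo (n - 1) (y : ys)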

Dithering about Duality

I’d like to reel off some pithy dualities between induction and coinduction

  • Coinduction is about observation, induction is about construction
  • Coinduction proves existentials, induction proves universals
  • Coinduction is about proving your property is a subset of the (coinductively defined) set, induction is about proving the (inductively defined) set is a subset of your property

Wrap Up

So that about wraps up this post.

We’ve seen how infinite structures demand a fundamentally different approach to proofs than finite ones. It’s not all puppies and rainbows though: considering that we managed to spend nearly 300 lines talking about it, coinduction is a lot less intuitive. It is, however, our only choice if we want to have “real proofs” in Haskell (terms and conditions may apply).

Categories: Offsite Blogs

Run Haskell Online

Haskell on Reddit - Thu, 12/18/2014 - 3:38pm
Categories: Incoming News