# News aggregator

### Punctuated equilibrium in the large scale evolution of programming languages

*Sergi Valverde and Ricard Solé, "Punctuated equilibrium in the large scale evolution of programming languages", SFI working paper 2014-09-030*

Here we study the large scale historical development of programming languages, which have deeply marked social and technological advances in the last half century. We analyse their historical connections using network theory and reconstructed phylogenetic networks. Using both data analysis and network modelling, it is shown that their evolution is highly uneven, marked by innovation events where new languages are created out of improved combinations of different structural components belonging to previous languages. These radiation events occur in a bursty pattern and are tied to novel technological and social niches. The method can be extrapolated to other systems and consistently captures the major classes of languages and the widespread horizontal design exchanges, revealing a punctuated evolutionary path.

The results developed here are perhaps not that surprising to people familiar with the history of programming languages. But it's interesting to see it all formalized and analyzed.

### Help constructing a monadic hylomorphism from a recursive definition

I have an ugly, complicated monadic recursive function that I'm trying to refactor into an elegant, simple recursion scheme. I've run into several problems along the way, and I'm sure to run into more, but the one that concerns me right now is: **"How do you meaningfully use Reader in a monadic hylomorphism?"** I'm aware that this could be an instance of the XY problem, so I'll try to give a representative simplification.

I have a directed, possibly cyclic graph, and I would like to do a depth-first traversal through it, performing side-effects (for example, IO) at each vertex and keeping a stack for cycle detection in a Reader.

```haskell
dfs graph start = asks (elem start) >>= \b ->
  if b
    then echo "Cycle detected involving: " start
    else do
      echo "Visiting: " start
      local (start:) $ do
        mapM_ (dfs graph) $ graph start
        echo "Done with: " start

main = runReaderT (dfs g 0) []
```

(A more complete Haskell example; a Python version)

Now, I feel like I ought to be able to express dfs as a (monadic) hylomorphism, with graph :: Int -> [Int] being the "ana" part, more or less; unfortunately, the use of local becomes a problem.
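For reference, a monadic hylomorphism over a Traversable base functor can be sketched as below; the name `hyloM` and the toy graph are my own, not from the post:

```haskell
import Control.Monad ((>=>))

-- A sketch of a monadic hylomorphism: unfold with a monadic coalgebra,
-- then fold back with a monadic algebra.
hyloM :: (Monad m, Traversable t)
      => (t b -> m b)    -- algebra ("cata" part)
      -> (a -> m (t a))  -- coalgebra ("ana" part)
      -> a -> m b
hyloM alg coalg = go
  where go = coalg >=> traverse go >=> alg

-- Example: count visited vertices of an acyclic graph. On a cyclic graph
-- this diverges, which is exactly why the poster wants the Reader stack.
main :: IO ()
main = do
  let g :: Int -> [Int]
      g 0 = [1, 2]
      g 1 = [2]
      g _ = []
  n <- hyloM (pure . (1 +) . sum) (pure . g) 0
  print (n :: Int)
```

The poster's difficulty is visible here: `local` would need to wrap the recursive `traverse go` call, which this factorization hides.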

Does anyone have any suggestions? Any references on practical programming with recursion schemes? Thanks all.

submitted by rpglover64 [link] [4 comments]

### AVoCS 2015: Joint Call for Research Idea Papers & Participation

### pusher-haskell: A client for Pusher.com

Hi all,

I've been (slowly) working on a server-side client for Pusher.com's API for a while now, and I just pushed an initial version to Hackage. I'm submitting here hoping for criticism and contributions. It's relatively simple, so I think beginning Haskellers can contribute meaningfully.

submitted by sidraval [link] [comment]

### Cakes, Custard, and Category Theory

Eugenia Cheng's new popular science book is out, in the U.K. under the title Cakes, Custard and Category Theory: Easy recipes for understanding complex maths, and in the U.S. under the title How to Bake Pi: An Edible Exploration of the Mathematics of Mathematics:

Most people imagine maths is something like a slow cooker: very useful, but pretty limited in what it can do. Maths, though, isn't just a tool for solving a specific problem - and it's definitely not something to be afraid of. Whether you're a maths glutton or have forgotten how long division works (or never really knew in the first place), the chances are you've missed what really makes maths exciting. Calling on a baker's dozen of entertaining, puzzling examples and mathematically illuminating culinary analogies - including chocolate brownies, iterated Battenberg cakes, sandwich sandwiches, Yorkshire puddings and Möbius bagels - brilliant young academic and mathematical crusader Eugenia Cheng is here to tell us why we should all love maths.

From simple numeracy to category theory ('the mathematics of mathematics'), Cheng takes us through the joys of the mathematical world. Packed with recipes, puzzles to surprise and delight even the innumerate, Cakes, Custard & Category Theory will whet the appetite of maths whizzes and arithmophobes alike. (Not to mention aspiring cooks: did you know you can use that slow cooker to make clotted cream?) This is maths at its absolute tastiest.

Cheng, one of the Catsters, gives a guided tour of mathematical thinking and research activities, and of the core philosophy underlying category theory. This is the kind of book you can give to your grandma and grandpa so they can boast to their friends about what their grandchildren are doing (and bake you a nice dessert when you come and visit :) ). A pleasant weekend read.

### StateT for reasoning about operations on Storable data?

Is this a fruitful abstraction for reasoning about e.g. destructive (in-place) operation performed by some FFI functions?

submitted by ocramz [link] [1 comment]

### Is anyone working on a Haskell implementation of QUIC?

It's an encrypted, authenticated UDP-based protocol which, unlike TLS, has a fast path of zero handshaking (https://en.wikipedia.org/wiki/QUIC), so there's no need to waste ~100 milliseconds on every connection. Just curious if anyone here is working on either a pure Haskell implementation or bindings to the Google code. (I would much prefer a pure Haskell implementation.)

submitted by pchiusano [link] [10 comments]

### IAG is hiring Haskell programmers

### very general directed hypergraphs over text

### Call for Contributions: Haskell Implementors' Workshop 2015

### wiki username request: Eric

### Danny Gratzer: Coinduction in JonPRL for Low Low Prices

So as a follow-up to my prior tutorial on JonPRL, I wanted to demonstrate a nice example of JonPRL being used to prove something that is

- Interesting
- Unreasonably difficult in Agda or the like

*I think I’m asking to be shown up when I say stuff like this…*

I would like to implement the conatural numbers in JonPRL but without a notion of general coinductive *or* even inductive types. Just the natural numbers. The fun bit is that we’re basically going to lift the definition of a coinductively defined set straight out of set theory into JonPRL!

First, let’s go through some math. How can we formalize the notion of an coinductively defined type as we’re used to in programming languages? Recall that something is coinductively if it contains all terms so that we can eliminate the term according to the elimination form for our type. For example, Martin-Lof has proposed we view functions (Π-types) as coinductively defined. That is,

```
x : A ⊢ f(x) : B(x)
———————————————————
 f : Π x : A. B(x)
```

In particular, there's no assertion that f needs to be a lambda, just that f(x) is defined and belongs to the right type. This view of "if we can use it, it's in the type" applies to more than just functions. Let's suppose we have a type with the following elimination form

```
L : List   M : A   x : Nat, y : List ⊢ N : A
————————————————————————————————————————————
          case(L; M; x.y.N) : A
```

This is more familiar to Haskellers as

```
case L of
  []     -> M
  x :: y -> N
```

Now if we look at the coinductively defined type built from this elimination rule we have not finite lists, but streams! There's nothing in this elimination rule that specifies that the list be finite in length for elimination to terminate. All we need to be able to do is evaluate the term to either a :: of a Nat and a List, or nil. This means that

```
fix x. cons(0; x) : List
```

Let's now try to formalize this by describing what it means to build a coinductively defined type as a set of terms. In particular, the types we're interested in here are algebraic ones, constructed from sums and products.
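As an aside, lazy Haskell data types already behave coinductively in this sense; the infinite list below is a perfectly good inhabitant of `[Int]`, mirroring `fix x. cons(0; x)`:

```haskell
-- An infinite list: the Haskell analogue of fix x. cons(0; x).
zeros :: [Int]
zeros = 0 : zeros

-- We can only ever make finitely many observations of it:
main :: IO ()
main = print (take 3 zeros)
```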

Now unfortunately I’m going to be a little handwavy. I’m going to act is if we’ve worked out a careful set theoretic semantics for this programming language (like the one that exists for MLTT). This means that All the equations you see here are across sets and that these sets contain programs so that ⊢ e : τ means that e ∈ τ where τ on the right is a set.

Well we start with some equation of the form

```
Φ = 1 + Φ
```

This particular equation is actually how we would go about defining the natural numbers. If I write it in a more Haskellish notation, we'd have

```
data Φ = Zero | Succ Φ
```

Next, we transform this into a function. This step is a deliberate move so we can start applying the myriad tools we know of for handling this equation.

```
Φ(X) = 1 + X
```

We now want to find some X so that Φ(X) = X. If we can do this, then I claim that X is a solution to the equation given above, since

```
X = Φ(X)
X = 1 + X
```

precisely mirrors the equation we had above. Such an X is called a "fixed point" of the function Φ. However, there's a catch: there may well be more than one fixed point of a function! Which one do we choose? The key is that we want the coinductively defined version. Coinduction means that we should always be able to examine a term in our type, and its outermost form should be 1 + ???. Okay, let's optimistically start by saying that X is ⊤ (the collection of all terms).

Ah okay, this isn’t right. This works only so long as we don’t make any observations about a term we claim is in this type. The minute we pattern match, we might have found we claimed a function was in our type! I have not yet managed to pay my rent by saying “OK, here’s the check… but don’t try to *use* it and it’s rent”. So perhaps we should try something else. Okay, so let’s not say ⊤, let’s say

Now, if t ∈ X, we know that t ∈ 1 + ???. This means that if we run e ∈ X, we’ll get the correct outermost form. However, this code is still potentially broken:

```
case e of
  Inl _ -> ...
  Inr x ->
    case x of
      Inl _ -> ...
      Inr _ -> ...
```

This starts off as being well typed, but as we evaluate, it may actually become ill typed. If we claimed that this was a fixed point in our language, our language would be type-unsafe. This is an unappealing quality in a type theory.

Okay, so that didn’t work. What if we fixed this code by doing

```
X = ⊤ ⋂ Φ(⊤) ⋂ Φ(Φ(⊤))
```

Now this fixes the above code, but can you imagine a snippet of code where this still gets stuck? Each time we intersect X with Φ(X) we get a new type which behaves like the real fixed point for n + 1 observations where X behaved like the fixed point for n observations. Well, we can only make finitely many observations, so let's just iterate such an intersection:

```
X = ⋂ₙ Φⁿ(⊤)
```

So if e ∈ X, then no matter how many times we pattern match and examine the recursive component of e, we know that it's still in ⋂ₙ Φⁿ(⊤) and therefore still in X! In fact, it's easy to prove that this is the case with two lemmas:

- If X ⊆ Y then Φ(X) ⊆ Φ(Y)
- If I have a collection S of sets, then ⋂ Φ(S) = Φ(⋂ S) where we define Φ on a collection of sets by applying Φ to each component.

These two properties state the monotonicity and cocontinuity of Φ. In fact, cocontinuity should imply monotonicity (can you see how?). We then may show that

```
Φ(⋂ₙ Φⁿ(⊤)) = ⋂ₙ Φ(Φⁿ(⊤))
            = ⊤ ⋂ (⋂ₙ Φ(Φⁿ(⊤)))
            = ⋂ₙ Φⁿ(⊤)
```

as desired.

**The Code**

Now that we have some idea of how to formalize coinduction, can we port this to JonPRL? Well, we have natural numbers and we can take the intersection of types… seems like a start. Looking at that example, we first need to figure out what ⊤ corresponds to. It should include all programs, which sounds like the type base in JonPRL. However, it also should be the case that x = y ∈ ⊤ for all x and y. For that we need an interesting trick:

```
Operator top : ().
[top] =def= [isect(void; _.void)].
```

In prettier notation,

```
top ≙ ⋂ x : void. void
```

Now x ∈ top if x ∈ void for all _ ∈ void. Hey, wait a minute… no such _ exists, so the condition is always satisfied vacuously. Okay, that's good. Now x = y ∈ top if for all _ ∈ void, x = y ∈ void. Since no such _ exists, again all things are in fact equal in top. We can even prove this within JonPRL:

```
Theorem top-is-top : [isect(base; x. isect(base; y. =(x; y; top)))] {
  unfold <top>; auto
}.
```

This proof is really just:

- Unfold all the definitions.
- Hey! There’s a x : void in my context! Tell me more about that.

Now the fact that x ∈ top is a trivial corollary since our theorem tells us that x = x ∈ top and the former is just sugar for the latter. With this defined, we can now write down a general operator for coinduction!

```
Operator corec : (1).
[corec(F)] =def= [isect(nat; n. natrec(n; top; _.x. so_apply(F;x)))].
```

To unpack this, corec takes one argument which binds one variable. We then intersect the type natrec(n; top; _.x.so_apply(F;x)) for all n ∈ nat. That natrec construct is really saying Fⁿ(⊤); it's just a little obscured, especially since we have to use so_apply, a sort of "meta-application" which lets us apply a term binding a variable to another term. This should look familiar: it's just how we defined the fixed point of a Φ!
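The claim that natrec(n; top; _.x.so_apply(F;x)) computes Fⁿ(⊤) can be sanity-checked with an ordinary iterator; a sketch in Haskell, with my own names:

```haskell
-- A Haskell rendering of natrec: iterate a step function n times from
-- a base, so natrec n z s = sⁿ(z), i.e. Fⁿ(⊤) in the post's notation.
natrec :: Int -> a -> (a -> a) -> a
natrec 0 z _ = z
natrec n z s = s (natrec (n - 1) z s)

main :: IO ()
main = print (natrec 3 (1 :: Int) (* 2))  -- (*2) applied 3 times to 1
```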

For a fun demo, let's define an F so that corec(F) will give us the conatural numbers. I know that the natural numbers come from the least fixed point of X ↦ 1 + X (because I said so above, so it must be so) so let's define that.

```
Operator conatF : (0).
[conatF(X)] =def= [+(unit; X)].
```

This is just that X ↦ 1 + X I wrote above, in JonPRL land instead of math notation. Next we need to actually define the conatural numbers using corec.

```
Operator conat : ().
[conat] =def= [corec(R. conatF(R))].
```

Now I've defined this, but that's no fun unless we can actually build some terms so that member(X; conat). Specifically, I want to prove two things to start:

- member(czero; conat)
- fun(member(M; conat); _.member(csucc(M); conat))

This states that conat is closed under some zero and successor operations. Now what should those operations be? Remember what I said before, that we had this correspondence?

```
X ↦ 1   +   X        (Nat)
    |       |
   Zero   Suc X
```

Now remember that conat is isect(nat; n. ...), and when constructing a member of an isect we're not allowed to mention that n in it (as opposed to fun, where we do exactly that). So that means czero has to be a member of top and of +(unit; ...). The top bit is easy: everything is in top! The diagram above suggests inl of *something* in unit, so we take czero ≙ inl(<>).

So now we want to prove that this is in fact in conat.

```
Theorem zero-wf : [member(czero; conat)] {
}.
```

Okay, loading this into JonPRL gives

```
⊢ czero ∈ conat
```

From there we start by unfolding all the definitions:

```
{ unfold <czero conat conatF corec top> }
```

This gives us back the desugared goal:

```
⊢ inl(<>) ∈ ⋂n ∈ nat. natrec(n; top; _.x.+(unit; x))
```

Next, let's apply all the obvious introductions so that we're in a position to try to prove things:

```
unfold <czero conat conatF corec top>; auto
```

This gives us back:

```
1. [n] : nat
⊢ inl(<>) = inl(<>) ∈ natrec(n; top; _.x.+(unit; x))
```

Now we're stuck. We want to show inl is in something, but we're never going to be able to do that until we can reduce that natrec(n; top; _.x.+(unit; x)) to a canonical form. Since it's stuck on n, let's induct on n. After that, let's immediately reduce:

```
{ unfold <czero conat conatF corec top>; auto; elim #1; reduce }
```

Now we have two cases, the base case and the inductive case:

```
1. [n] : nat
⊢ inl(<>) = inl(<>) ∈ top

1. [n] : nat
2. n' : nat
3. ih : inl(<>) = inl(<>) ∈ natrec(n'; top; _.x.+(unit; x))
⊢ inl(<>) = inl(<>) ∈ +(unit; natrec(n'; top; _.x.+(unit; x)))
```

Now that we have canonical terms on the right of the ∈, let's let auto handle the rest.

```
Theorem zero-wf : [member(czero; conat)] {
  unfold <czero conat conatF corec top>; auto; elim #1; reduce; auto
}.
```

So now we have proven that czero is in the correct type. What about csucc? Going by our noses, inl corresponded to czero, and our diagram says that inr should correspond to csucc. This gives us

```
Operator csucc : (0).
[csucc(M)] =def= [inr(M)].
```

Now let's try to prove the corresponding theorem for csucc:

```
Theorem succ-wf : [isect(conat; x. member(csucc(x); conat))] {
}.
```

We're going to start off this proof like we did with our last one: unfold everything, apply the introduction rules, and induct on n.

```
{ unfold <csucc conat conatF corec top>; auto; elim #2; reduce }
```

Like before, we now have two subgoals:

```
1. [x] : ⋂n ∈ nat. natrec(n; ⋂_ ∈ void. void; _.x.+(unit; x))
2. [n] : nat
⊢ inr(x) = inr(x) ∈ ⋂_ ∈ void. void

1. [x] : ⋂n ∈ nat. natrec(n; ⋂_ ∈ void. void; _.x.+(unit; x))
2. [n] : nat
3. n' : nat
4. ih : inr(x) = inr(x) ∈ natrec(n'; ⋂_ ∈ void. void; _.x.+(unit; x))
⊢ inr(x) = inr(x) ∈ +(unit; natrec(n'; ⋂_ ∈ void. void; _.x.+(unit; x)))
```

The first one looks pretty easy; that's just foo ∈ top, and I think auto should handle it.

```
{ unfold <csucc conat conatF corec top>; auto; elim #2; reduce; auto }
```

This just leaves one goal to prove:

```
1. [x] : ⋂n ∈ nat. natrec(n; ⋂_ ∈ void. void; _.x.+(unit; x))
2. [n] : nat
3. n' : nat
4. ih : inr(x) = inr(x) ∈ natrec(n'; ⋂_ ∈ void. void; _.x.+(unit; x))
⊢ x = x ∈ natrec(n'; ⋂_ ∈ void. void; _.x.+(unit; x))
```

Now, as it turns out, this is nice and easy: look at what our first assumption says! Since x ∈ isect(nat; n.Foo) and our goal is to show that x ∈ Foo(n'), this should be as easy as another call to elim.

```
{ unfold <csucc conat conatF corec top>; auto; elim #2; reduce; auto;
  elim #1 [n']; auto }
```

Note that the [n'] bit there lets us supply the term we wish to substitute for n while eliminating. This leaves us here:

```
1. [x] : ⋂n ∈ nat. natrec(n; ⋂_ ∈ void. void; _.x.+(unit; x))
2. [n] : nat
3. n' : nat
4. ih : inr(x) = inr(x) ∈ natrec(n'; ⋂_ ∈ void. void; _.x.+(unit; x))
5. y : natrec(n'; ⋂_ ∈ void. void; _.x.+(unit; x))
6. z : y = x ∈ natrec(n'; ⋂_ ∈ void. void; _.x.+(unit; x))
⊢ x = x ∈ natrec(n'; ⋂_ ∈ void. void; _.x.+(unit; x))
```

Now, a small hiccup: we know that y = x in the right type, so x = x in the right type. But how do we prove this? The answer is to substitute all occurrences of x for y. This is written:

```
{ unfold <csucc conat conatF corec top>; auto; elim #2; reduce; auto;
  elim #1 [n']; auto;
  hyp-subst ← #6 [h.=(h; h; natrec(n'; isect(void; _.void); _.x.+(unit;x)))] }
```

There are three arguments here: a direction to substitute, an index telling us which hypothesis to use as the equality to substitute with, and finally a term [h. ...]. Each occurrence of h in that term marks a spot where we want to substitute; in our case we used h in both places where x occurs. The ← direction says to replace the right-hand side of the equality with the left-hand side.

Actually running this gives

```
1. [x] : ⋂n ∈ nat. natrec(n; ⋂_ ∈ void. void; _.x.+(unit; x))
2. [n] : nat
3. n' : nat
4. ih : inr(x) = inr(x) ∈ natrec(n'; ⋂_ ∈ void. void; _.x.+(unit; x))
5. y : natrec(n'; ⋂_ ∈ void. void; _.x.+(unit; x))
6. z : y = x ∈ natrec(n'; ⋂_ ∈ void. void; _.x.+(unit; x))
⊢ y = y ∈ natrec(n'; ⋂_ ∈ void. void; _.x.+(unit; x))

1. [x] : ⋂n ∈ nat. natrec(n; ⋂_ ∈ void. void; _.x.+(unit; x))
2. [n] : nat
3. n' : nat
4. ih : inr(x) = inr(x) ∈ natrec(n'; ⋂_ ∈ void. void; _.x.+(unit; x))
5. y : natrec(n'; ⋂_ ∈ void. void; _.x.+(unit; x))
6. z : y = x ∈ natrec(n'; ⋂_ ∈ void. void; _.x.+(unit; x))
7. h : natrec(n'; ⋂_ ∈ void. void; _.x.+(unit; x))
⊢ h = h ∈ natrec(n'; ⋂_ ∈ void. void; _.x.+(unit; x)) ∈ U{i}
```

The first goal is the result of our substitution, and it's trivial; auto will handle it now. The second goal is a little strange. It basically says that the result of our substitution is still a well-formed type. JonPRL's thought process is something like this:

You said you were substituting for things of this type here. However, just because x : A doesn't mean we're using it in all those spots as if it has type A. What if you substituted things equal in top (always equal) in spots where they're being used as functions? This would let us prove that zero ∈ Π(...) or something silly. Convince me that when we fill in those holes with *something* of the type you mentioned, the goal is still a type (in a universe).

However, these well-formedness goals usually go away with auto. This completes our theorem in fact.

```
Theorem succ-wf : [isect(conat; x. member(csucc(x); conat))] {
  unfold <csucc conat conatF corec top>; auto; elim #2; reduce; auto;
  elim #1 [n']; auto;
  hyp-subst ← #6 [h.=(h; h; natrec(n'; isect(void; _.void); _.x.+(unit;x)))];
  auto
}.
```

Okay, so we now have something kind of number-ish, with zero and successor. But in order to demonstrate that this is the conatural numbers there's one big piece missing.

**The Clincher**

The thing that distinguishes the *co*natural numbers from the inductive variety is the fact that we include infinite terms. In particular, I want to show that Ω (infinitely many csuccs) belongs in our type.

In order to say Ω in JonPRL we need recursion. Specifically, we want to write

```
[omega] =def= [csucc(omega)].
```

But this doesn't work! Instead, we'll define the Y combinator and say

```
Operator omega : ().
[omega] =def= [Y(x.csucc(x))].
```

So what should this Y be? Well, the standard definition of Y is

```
Y(F) = (λ x. F (x x)) (λ x. F (x x))
```

Excitingly, we can just say that in JonPRL; remember that we have a full untyped computation system after all!
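(As an aside, Haskell's type checker rejects the self-application x x directly, but a newtype recovers the same definition; a sketch with my own names:)

```haskell
-- Hide self-application behind a newtype so that "x x" typechecks.
newtype Mu a = Mu (Mu a -> a)

y :: (a -> a) -> a
y f = (\x -> f (unroll x x)) (Mu (\x -> f (unroll x x)))
  where unroll (Mu g) = g

-- Laziness lets us take fixed points at function types:
main :: IO ()
main = print (y fact 5)
  where fact rec n = if n == 0 then 1 else n * rec (n - 1)
```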

```
Operator Y : (1).
[Y(f)] =def= [ap(lam(x.so_apply(f;ap(x;x))); lam(x.so_apply(f;ap(x;x))))].
```

This is more or less a direct translation; we occasionally use so_apply for the reasons I explained above. As a fun exercise, try to prove that this is a fixed point, i.e. that

```
isect(U{i}; A. isect(fun(A; _.A); f. ceq(Y(f); ap(f; Y(f)))))
```

The complete proof is in the JonPRL repo under example/computational-equality.jonprl. Anyways, we now want to prove

```
Theorem omega-wf : [member(omega; conat)] {
}.
```

Let's start with the same prelude:

```
{ *{unfold <csucc conat conatF corec top omega Y>}; auto; elim #1; }
```

Two goals, just like before:

```
1. [n] : nat
⊢ (λx. inr(x[x]))[λx. inr(x[x])] = (λx. inr(x[x]))[λx. inr(x[x])]
  ∈ natrec(zero; ⋂_ ∈ void. void; _.x.+(unit; x))

1. [n] : nat
2. n' : nat
3. ih : (λx. inr(x[x]))[λx. inr(x[x])] = (λx. inr(x[x]))[λx. inr(x[x])]
        ∈ natrec(n'; ⋂_ ∈ void. void; _.x.+(unit; x))
⊢ (λx. inr(x[x]))[λx. inr(x[x])] = (λx. inr(x[x]))[λx. inr(x[x])]
  ∈ natrec(succ(n'); ⋂_ ∈ void. void; _.x.+(unit; x))
```

The goals start to get fun now. I've also carefully avoided using reduce like we did before. The reason is simple: if we reduce in the second goal, our ih will reduce as well, and we'll end up completely stuck in a few steps (try it and see). So instead we're going to finesse it a bit.

First let’s take care of that first goal. We can tell JonPRL to apply some tactics to just the first goal with the focus tactic

```
{ *{unfold <csucc conat conatF corec top omega Y>}; auto; elim #1;
  focus 0 #{reduce 1; auto}; }
```

Here reduce 1 says "reduce by only one step", since omega will diverge if we just let it run. This takes care of the first goal, leaving just

```
1. [n] : nat
2. n' : nat
3. ih : (λx. inr(x[x]))[λx. inr(x[x])] = (λx. inr(x[x]))[λx. inr(x[x])]
        ∈ natrec(n'; ⋂_ ∈ void. void; _.x.+(unit; x))
⊢ (λx. inr(x[x]))[λx. inr(x[x])] = (λx. inr(x[x]))[λx. inr(x[x])]
  ∈ natrec(succ(n'); ⋂_ ∈ void. void; _.x.+(unit; x))
```

Here's the proof sketch for what's left:

- Reduce the goal by one step, but *carefully avoid touching the ih*
- Step everything by one
- The result follows by intro and assumption

You can stop here, or you can see how we actually do this. It's somewhat tricky. The basic complication is that there's no built-in tactic for step 1. Instead we use a new type called ceq, which is "computational equality". It relates two terms; no types are involved here. It's designed to work thusly: ceq(a; b) holds when either

- a and b run to weak-head normal form (canonical verifications) with the same outermost form, and all the inner operands are ceq
- a and b both diverge

So if ceq(a; b), then a and b "run the same". A really cool upshot of this is that if ceq(a; b), and a = a ∈ A and b = b ∈ A, then a = b ∈ A! ceq is the strictest equality in our system, and we can rewrite with it absolutely everywhere, without regard to types. Proving this requires showing that the above definition forms a congruence (two things are related if their subcomponents are related).

This was a *big deal*, because until Doug Howe came up with this proof NuPRL/CTT was awash with rules trying to specify this idea chunk by chunk, and showing those rules were valid wasn't trivial. Actually, you should read that paper: it's 6 pages, and the proof concept comes up a lot.

So, in order to do step 1, we're going to say "the goal, and the goal if we step it twice, are computationally equal" and then use this fact to substitute for the stepped version. The tactic to use here is called csubst. It takes two arguments:

- The ceq we’re asserting
- Another h. term to guide the rewrite

This leaves us with two goals

```
1. [n] : nat
2. n' : nat
3. ih : (λx. inr(x[x]))[λx. inr(x[x])] = (λx. inr(x[x]))[λx. inr(x[x])]
        ∈ natrec(n'; ⋂_ ∈ void. void; _.x.+(unit; x))
⊢ ceq((λx. inr(x[x]))[λx. inr(x[x])]; inr((λx. inr(x[x]))[λx. inr(x[x])]))

1. [n] : nat
2. n' : nat
3. ih : (λx. inr(x[x]))[λx. inr(x[x])] = (λx. inr(x[x]))[λx. inr(x[x])]
        ∈ natrec(n'; ⋂_ ∈ void. void; _.x.+(unit; x))
⊢ inr((λx. inr(x[x]))[λx. inr(x[x])]) = inr((λx. inr(x[x]))[λx. inr(x[x])])
  ∈ natrec(succ(n'); ⋂_ ∈ void. void; _.x.+(unit; x))
```

The first is the ceq proof obligation; the second is our goal post-substitution. The first can easily be dispatched by step. step lets us prove ceq by saying:

- ceq(a; b) holds if
  - a steps to a' in one step, and
  - ceq(a'; b)

This will leave us with ceq(X; X), which auto can handle. The second goal is… massive, but also simple. We just need to step it once, and we suddenly have inr(X) = inr(X) ∈ sum(_; A), where X = X ∈ A is our assumption! So that can be handled by auto as well. That means we need to run step on the first goal, reduce 1 on the second, and auto on both.

```
Theorem omega-wf : [member(omega; conat)] {
  unfolds; unfold <omega Y>; auto; elim #1;
  focus 0 #{reduce 1; auto};
  csubst [ceq(ap(lam(x.inr(ap(x;x))); lam(x.inr(ap(x;x))));
              inr(ap(lam(x.inr(ap(x;x))); lam(x.inr(ap(x;x))))))]
         [h.=(h;h; natrec(succ(n'); isect(void; _. void); _.x.+(unit; x)))];
  [step, reduce 1]; auto
}.
```

And we've just proved that omega ∈ conat, a term that is certainly the canonical (heh) example of coinduction in my mind.

**Wrap Up**

Whew, I actually meant for this to be a short blog post, but that didn't work out so well. Hopefully this illustrated a cool trick in computer science (intersect your way to coinduction) and in JonPRL.

Funnily enough before this was written no one had actually realized you could do coinduction in JonPRL. I’m still somewhat taken with the fact that a *very* minimal proof assistant like JonPRL is powerful enough to let you do this by giving you such general purpose tools as family intersection and a full computation system to work with. Okay that’s enough marketing from me.

Cheers.

*Huge thanks to Jon Sterling for the idea on how to write this code and then touching up the results*

### ANN: repa-linear-algebra = HMatrix + REPA

### Is there a book that's considered the "Haskell bible" like "effective c++" is? Reviews on Amazon are not convincing for any book

I'm determined to get Haskell under my belt at this point. But I'd really love a good book that's written for beginners, spares no detail, and is readable. Amazon reviews make me feel like I need to pick two out of three.

I'm using web "tutorials" for now, but it's just not the same as buckling down with a serious book

The sidebar has some books, but see above.

There's some Amazon books that come up when you search for "Haskell" about functional programming (instead of specifically Haskell). Am I better served there?

submitted by rorriMnmaD [link] [53 comments]

### Dimitri Sabadie: Don’t use Default

In **Haskell**, we have a lot of typeclasses. Those are very handy and – in general – come with laws. Laws are very important and give hints on how we are supposed to use a *typeclass*.

For instance, the Semigroup *typeclass* exposes an operator ((<>)) and has an **associativity** law. If a, b and c have the same type T, and we know that T is a Semigroup, we have the following law: a <> (b <> c) = (a <> b) <> c.

If T is a Monoid, which is a Semigroup with an identity (called mempty), we have an additional law for *monoids*: mempty <> a = a = a <> mempty.

Those laws can – and *have* to – be used in our code base to take advantage of the structures, optimize, or avoid boilerplate.

In some situations, we want a way to express default values. That's especially true in OO languages, like in the following **C++** function signature, where the argument can be omitted at the call site: void foo(int i = 1);

In **Haskell**, we cannot do that, because we have to pass all arguments to a function. The following doesn’t exist:

foo (i = 1) = -- […]

And there’s more. Even in C++, how do you handle the case when you have several arguments and want only the first one to be defaulted? You **cannot**.

So, so… Some **Haskellers** decided to solve that problem with a *typeclass*. After all, we can define the following typeclass:

class Default a where

def :: a

We can then implement Default and have a *default* value for a given type.

instance Default Radians where

def = Radians $ 2*pi

instance Default Fullscreen where

def = Fullscreen False

However, there is an issue with that. You cannot use Default without creating newtypes to overload them. Why? Well, consider the following Default instance:

instance Default Float where

def = -- what should we put here?

Remember that, in **Haskell**, an instance is defined only once and is automatically imported when you import the module holding it. That means you cannot have the following:

-- in a module A

instance Default Float where

def = 0

-- in a module B

instance Default Float where

def = 1

-- in a module C

import A

import B

-- Which instance should we use?

*Hey, that’s easy. We just have to keep the modules apart, and import the one we want to use!*

Yeah, well. No. No.

Orphan instances are **wrong**. You should read this for further explanations.

That’s why we have to use newtypes everywhere. And that’s boring. Writing code should always have a goal. When we write as much boilerplate code as real code, we can start thinking there’s something wrong. Worse, if we have more boilerplate than real code, well, something is terribly wrong. In our case, we’re introducing a lot of newtypes for only being able to use def at a few spots. Is that even worth it? Of course not.

**Default is evil**

The Default typeclass is evil. It's shipped with default instances, like one for [a], which defaults to the empty list – []. It might be clear to you, but why would I want to default to the empty list? Why not to [0]? Or a more complex list? I really doubt anyone ever uses def :: [a].

Another reason why Default is wrong? There’s absolutely **no law**. You just have a default value, and that’s all.

Can you say what the default is for? Of course you cannot. Because there’s no law. The instance has no real meaning. A default value makes sense only for the computation using it. For instance, the empty list makes sense if we glue it to the *list concatenation*.

In base, there’re already several ways to express *defaulted values*.

The Monoid’s mempty is a way to express a default value regarding its binary operation ((<>)).

The Alternative’s empty provides a similar default, but for first-class types.

The MonadZero’s mzero provides a different default, used to *absorb* everything. That’s a law of MonadZero:

a >> mzero = mzero

Those defaults are interesting because we can reason about them. If I see mzero in *monadic code*, I know that whatever is following will be discarded.
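For instance, in the Maybe instance (where mzero is Nothing, via MonadPlus in base) the absorption law is easy to watch in action:

```haskell
import Control.Monad (mzero)

-- a >> mzero = mzero: whatever was computed before mzero is discarded,
-- and mzero >>= f never runs f.
main :: IO ()
main = do
  print (Just 3 >> mzero :: Maybe Int)
  print ((mzero >>= \x -> Just (x + 1)) :: Maybe Int)
```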

So please, stop using Default. It doesn't bring any sense to your codebase; it actually removes some! If you want to provide functions with defaulted arguments, consider partially applied functions.
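A minimal sketch of that suggestion (the names here are mine, not from the post):

```haskell
-- Bake the "default" argument in with partial application instead of
-- reaching for a lawless Default instance.
greet :: String -> String -> String
greet greeting name = greeting ++ ", " ++ name

-- The "defaulted" variant is just greet partially applied:
hello :: String -> String
hello = greet "Hello"

main :: IO ()
main = putStrLn (hello "world")
```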

data-default is a very famous package – look at the downloads! But now you can think twice about it before downloading it and ruining your design. ;)

Happy hacking. :)

### In example code, I often see the pattern `x:xs` or something like that. What does the 's' stand for?

I understand how this pattern match works. I'm curious about the convention: the 's' must stand for something obvious, because it seems that everyone uses it.
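(For what it's worth, the usual reading is that xs is the plural of x, pronounced "exes": one element x and the remaining elements xs. A typical use:)

```haskell
-- x is one element; xs ("exes", the plural of x) is the rest of the list.
len :: [a] -> Int
len []     = 0
len (x:xs) = 1 + len xs

main :: IO ()
main = print (len "abc")
```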

submitted by rorriMnmaD [link] [63 comments]

### Code review request: AFRP breakout-like game

I've implemented a breakout-like game using *netwire* and *gloss*. I'd be really grateful if you could take a look at the code and help with some suggestions. I'm particularly interested if

- the gameplay (Gameplay.hs) could be expressed more elegantly, with other (A)FRP constructs;
- the switching combinator I defined has any design flaws – the libraries
*netwire*and*auto*provide different switching combinators, but this seems more intuitive to me; - there are any other unidiomatic parts.

Thanks!

submitted by vtan_ [link] [7 comments]