News aggregator
Proposal: Add rtsIsThreaded to base
[ANN] Erlang Workshop 2016 - CFP
Magic classes for Overloaded Record Fields, overlaps, FunDeps
When using Cereal, what is the right way to write `get` for a multi-constructor type?
Cabal extensions tracking
Chris Smith: CodeWorld & Summer of Haskell 2016
As most Haskell community members know, Haskell was turned down by Google Summer of Code this year, and has instead been seeking to continue the tradition with Summer of Haskell, funded by smaller donations. I’m happy to announce that CodeWorld will be part of Summer of Haskell! I’ve donated to support one student working specifically on CodeWorld.
Are you a student, and interested in helping to build a platform for education in expressive mathematics and computer science? Want to work on a project with immediate impact teaching Haskell in multiple schools? Please propose a project at https://summer.haskell.org/ between now and May 6th.
Project Ideas

A great source of CodeWorld project ideas is the bug tracker. Less well-defined projects are tagged as proposals, while more defined features are tagged as enhancements. A few big ones to think about are:
 Export of CodeWorld projects as mobile applications
 Better and more language-aware editor support for Haskell in CodeMirror.
 Implementing constructive geometry
 Building social, gallery, and/or showcase features to help student work be more visible.
 Building a purely functional block-based programming environment.
 Implementing visual tools to help students understand substitution, list comprehensions, and more.
I look forward to working with someone this summer building something cool!
By the way, HUGE thanks to Edward Kmett and other Haskell.org committee members for making this happen this year!
Postdoc ad: Domain-specific languages
Interactive ASCII graphics
Munich Haskell Meeting, 2016-04-28 @ 19:30
Ken T Takusagawa: [vcppbmdn] Rubik's cube combinations
We implement in Haskell the formulae at http://www.speedcubing.com/chris/cubecombos.html to compute the number of combinations of an NxN Rubik's cube, as well as the supercube (for which center facet orientation matters) and super-supercube (also called a "real" cube, for which inner cubie orientations also matter) variants.
import Math.Combinatorics.Exact.Factorial(factorial);

div_exact :: Integer -> Integer -> Integer;
div_exact x y = case divMod x y of {
(q,0) -> q;
_ -> error "division had a remainder";
};
common :: Integer -> Integer -> Integer -> Integer;
common i den_base n = let {
e1 :: Integer;
e1 = div (n*n - 2*n) 4;
e2 :: Integer;
e2 = div ((n-2)^2) 4;
numerator :: Integer;
numerator = (24 * 2^i * factorial 12)^(mod n 2) * factorial 7 * 3^6 * (factorial 24)^e1;
denominator :: Integer;
denominator = den_base ^ e2;
} in div_exact numerator denominator;
-- https://oeis.org/A075152
cube :: Integer -> Integer;
cube = common 10 ((factorial 4)^6);

-- https://oeis.org/A080660
supercube :: Integer -> Integer;
supercube = common 21 2;

-- https://oeis.org/A080659
super_supercube :: Integer -> Integer;
super_supercube n = let {
e1 :: Integer;
e1 = div_exact (n^3 - 4*n + (mod n 2)*(12 - 9*n)) 24;
e2 :: Integer;
e2 = div (n-2) 2 + mod n 2;
e4 :: Integer;
e4 = mod n 2 * div n 2;
} in (div_exact (factorial 24) 2)^e1 * 24^e2 * (factorial 7 * 3^6)^(div n 2) * (factorial 12 * 2^21)^e4;
Incidentally, the output numbers factor easily. The largest prime factor in them is 23. One could do the calculations over a factor base of primes from 2 to 23 to get the output directly in compact factored form.
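As a rough sketch of that idea (my code, not part of the original post): represent each number by its exponents over the factor base, and use Legendre's formula for the exponent of a prime p in n!.

import qualified Data.Map as Map

-- a number in factored form: prime -> exponent (assumes the factor base 2..23)
type Factored = Map.Map Integer Integer

-- exponent of prime p in n!, by Legendre's formula: sum of floor(n / p^i)
legendre :: Integer -> Integer -> Integer
legendre n p = sum (takeWhile (> 0) [n `div` (p ^ i) | i <- [(1 :: Integer) ..]])

-- n! in factored form; only valid for n < 29, where no prime above 23 divides n!
factorialF :: Integer -> Factored
factorialF n = Map.fromList [(p, legendre n p) | p <- [2, 3, 5, 7, 11, 13, 17, 19, 23]]

-- multiplication, division, and powers become arithmetic on the exponents
mulF, divF :: Factored -> Factored -> Factored
mulF = Map.unionWith (+)
divF x y = mulF x (Map.map negate y)

powF :: Factored -> Integer -> Factored
powF x k = Map.map (* k) x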
Next, we consider the Megaminx, and higher odd-order variants (so not Flowerminx / Kilominx):
-- http://michaelgottlieb.blogspot.com/2008/05/number-of-positions-of-generalized.html
-- n=1 megaminx; n=2 gigaminx; etc.
megaminx :: Integer -> Integer;
megaminx n = div_exact (factorial 30 * factorial 20 * (factorial 60)^(n^2-1) * (factorial 5)^(12*n) * 2^28 * 3^19) ((factorial 5)^(12*n^2) * 2^n);
Dropping support for NHC98 and old GHC from containers
(no subject)
wren gayle romano: Quantifiers in type theory
All this stuff is "well known", but I want to put it out there for folks who may not have encountered it, or not encountered it all together in one picture.
The Damas–Hindley–Milner type system (i.e., the type system that Algorithm W is inferring types for) is propositional logic extended with rank-1 second-order universal quantifiers. It is interesting because it is so particularly stable with respect to inference, decidability, etc. That is, we can come up with many other algorithms besides Algorithm W and they enjoy nice properties like the fact that adding type signatures won't cause inference to fail. (It's worth noting that Algorithm W is DEXPTIME-complete; so while in practice it's often linear time, for pathological inputs it can take exponentially long. However, if we put a constant bound on the depth of nested let-bindings, then the upper bound becomes polynomial.)
The extension of DHM with rank-1 second-order existential quantifiers is strictly more powerful. It is interesting because it allows unrestricted use of both of the quantifiers in prenex position; thus, it is the limit/top of the alternating quantifier hierarchy (à la the arithmetical hierarchy) that starts with DHM. Surely there are other interesting properties here, but this system is understudied relative to the ones above and below. Edit: Although GHC gets by with encoding existentials away, it's worth noting that MLF allows existentials where the unpacking is implicit rather than requiring an "unseal" or case eliminator (Leijen 2006); and also that UHC does in fact offer first-class existentials (Dijkstra 2005).
The extension with rank-2 second-order universals (i.e., where the universal quantifier can appear to the left of one function arrow) is strictly more powerful still. Here we can encode rank-1 existentials, but my point in this whole post is to point out that rank-1 existentials themselves are strictly weaker than the rank-2 universals it takes to encode them! Also, one little-known fact: this type system is interesting because it is the last one in this progression where type inference is decidable. The decidability of rank-2 universal quantification is part of the reason why GHC distinguishes between -XRank2Types vs -XRankNTypes. Alas, although inference is decidable —and thus of mathematical interest— it is not decidable in the same robust way that DHM is. That is, if we care about human factors like good error messages or not breaking when the user adds type signatures, then we don't get those properties here. Still, the fact that this system is at the cusp of decidable inference is important to know. Edit: Also of interest, this system has the same typeable terms as simply-typed λ-calculus with rank-2 intersection types, and the type inference problem here is fundamentally DEXPTIME-complete (Jim 1995).
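To see concretely what "encoding rank-1 existentials with rank-2 universals" means, here is the standard Haskell rendering (a generic sketch, not code from the post): ∃a. f a is represented as ∀r. (∀a. f a → r) → r.

{-# LANGUAGE RankNTypes #-}

-- the existential ∃a. f a, encoded via a rank-2 universal
type Ex f = forall r. (forall a. f a -> r) -> r

pack :: f a -> Ex f
pack x = \k -> k x

unpack :: Ex f -> (forall a. f a -> r) -> r
unpack m k = m k

Note that Ex f itself puts a quantifier to the left of an arrow; that is exactly the rank-2 universal the existential costs you.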
Things keep alternating back and forth between existentials and universals of each rank; so far as I'm aware, none of these systems are of any particular interest until we hit the limit: rank-ω (aka: rank-N) second-order quantification. This type system is often called "System F", but that's a misnomer. It is important to differentiate between the syntactic system (i.e., actual System F) we're inferring types for, vs the type system (aka: propositional logic with second-order quantifiers) in which the inferred types live. That is, we can perfectly well have a syntactic system which doesn't have explicit type abstractions/applications but for which we still ascribe rank-ω types. It so happens that the type inference problem is undecidable for that syntactic system, but it was already undecidable way back at rank-3 so the undecidability isn't particularly novel.
Gabriel Gonzalez: Data is Code
The title of this post is a play on the Lisp aphorism: "Code is Data". In the Lisp world everything is data; code is just another data structure that you can manipulate and transform.
However, you can also go to the exact opposite extreme: "Data is Code"! You can make everything into code and implement data structures in terms of code.
You might wonder what that even means: how can you write any code if you don't have any primitive data structures to operate on? Fascinatingly, Alonzo Church discovered a long time ago that if you have the ability to define functions you have a complete programming language. "Church encoding" is the technique named after his insight that you could transform data structures into functions.
This post is partly a Church encoding tutorial and partly an announcement for my newly released annah compiler which implements the Church encoding of data types. Many of the examples in this post are valid annah code that you can play with. Also, to be totally pedantic annah implements Boehm-Berarducci encoding, which you can think of as the typed version of Church encoding.
This post assumes that you have basic familiarity with lambda expressions. If you do not, you can read the first chapter (freely available) of Haskell Programming from First Principles, which does an excellent job of teaching lambda calculus.
If you would like to follow along with these examples, you can download and install annah by following these steps:
- Install the stack tool
- Create the following stack.yaml file:

$ cat > stack.yaml
resolver: lts-5.13
packages: []
extra-deps:
- annah-1.0.0
- morte-1.6.0
<Ctrl-D>

- Run stack setup
- Run stack install annah
- Add the installed executable to your $PATH
In the untyped lambda calculus, you only have lambda expressions at your disposal and nothing else. For example, here is how you encode the identity function:
λx → x

That's a function that takes one argument and returns the same argument as its result.
We call this "abstraction" when we introduce a variable using the Greek lambda symbol and we call the variable that we introduce a "bound variable". We can then use that "bound variable" anywhere within the "body" of the lambda expression.
+-- Abstraction
|
|+-- Bound variable
||
vv
λx → x
     ^
     |
     +-- Body of lambda expression
Any expression that begins with a lambda is an anonymous function which we can apply to another expression. For example, we can apply the identity function to itself like this:
(λx → x) (λy → y)

-- β-reduction
= λy → y
We call this "application" when we supply an argument to an anonymous function.
We can define a function of multiple arguments by nested "abstractions":
λx → λy → x

The above code is an anonymous function that returns an anonymous function. For example, if you apply the outermost anonymous function to a value, you get a new function:
(λx → λy → x) 1

-- β-reduce
λy → 1
... and if you apply the lambda expression to two values, you return the first value:
(λx → λy → x) 1 2

-- β-reduce
(λy → 1) 2

-- β-reduce
1
So our lambda expression behaves like a function of two arguments, even though it's really a function of one argument that returns a new function of one argument. We call this "currying" when we simulate functions of multiple arguments using functions of one argument. We will use this trick because we will be programming in a lambda calculus that only supports functions of one argument.
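A quick Haskell rendering of the same trick (my example, not from the post): the "two-argument" function below is really a one-argument function that returns another one-argument function.

-- `k` is a hypothetical name for the curried constant function
k :: Integer -> Integer -> Integer
k = \x -> \y -> x

main :: IO ()
main = print (k 1 2)  -- prints 1, just like the untyped reduction above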
Typed lambda calculus

In the typed lambda calculus you have to specify the types of all function arguments, so you have to write something like this:
λ(x : a) → x

... where a is the type of the bound variable named x.
However, the above function is still not valid because we haven't specified what the type a is. In theory, we could specify a type like Int:
λ(x : Int) → x

... but the premise of this post was that we could program without relying on any built-in data types so Int is out of the question for this experiment.
Fortunately, some typed variations of lambda calculus (most notably: "System F") let you introduce the type named a as yet another function argument:
λ(a : *) → λ(x : a) → x

This is called "type abstraction". Here the * is the "type of types" and is a universal constant that is always in scope, so we can always introduce new types as function arguments this way.
The above function is the "polymorphic identity function", meaning that this is the typed version of the identity function that still preserves the ability to operate on any type.
If we had built-in types like Int we could apply our polymorphic function to the type just like any other argument, giving back an identity function for a specific type:

(λ(a : *) → λ(x : a) → x) Int

-- β-reduction
λ(x : Int) → x
This is called "type application" or (more commonly) "specialization". A "polymorphic" function is a function that takes a type as a function argument and we "specialize" a polymorphic function by applying the function to a specific type argument.
However, we are forgoing built-in types like Int, so what other types do we have at our disposal?
Well, every lambda expression has a corresponding type. For example, the type of our polymorphic identity function is:
∀(a : *) → ∀(x : a) → a

You can read the type as saying:
 this is a function of two arguments, one argument per "forall" (∀) symbol
 the first argument is named a and a is a type
 the second argument is named x and the type of x is a
 the result of the function must be a value of type a
This type uniquely determines the function's implementation. To be totally pedantic, there is exactly one implementation up to extensional equality of functions. Since this function has to work for any possible type a there is only one way to implement the function. We must return x as the result, since x is the only value available of type a.
Passing around types as values and function arguments might seem a bit strange to most programmers since most languages either:

- do not use types at all

Example: Javascript

// The polymorphic identity function in Javascript
function id(x) {
    return x
}

// Example use of the function
id(true)

- do use types, but they hide type abstraction and type application from the programmer through the use of "type inference"

Example: Haskell

-- The polymorphic identity function in Haskell
id x = x

-- Example use of the function
id True

- or they use a different syntax for type abstraction/application versus ordinary abstraction and application

Example: Scala

// The polymorphic identity function in Scala
def id[A](x : A) : A = x

// Example use of the function
// Note: Scala lets you omit the `[Boolean]` here thanks
// to type inference but I'm making the type
// application explicit just to illustrate that
// the syntax is different from normal function
// application
id[Boolean](true)
For the purpose of this post we will program with explicit type abstraction and type application so that there is no magic or hidden machinery.
So, for example, suppose that we wanted to apply the typed, polymorphic identity function to itself. The untyped version was this:
(λx → x) (λy → y)

... and the typed version is this:
(λ(a : *) → λ(x : a) → x)
(∀(b : *) → ∀(y : b) → b)
(λ(b : *) → λ(y : b) → y)

-- β-reduction
= (λ(x : ∀(b : *) → ∀(y : b) → b) → x)
(λ(b : *) → λ(y : b) → y)

-- β-reduction
= (λ(b : *) → λ(y : b) → y)
So we can still apply the identity function to itself, but it's much more verbose. Languages with type inference automate this sort of tedious work for you while still giving you the safety guarantees of types. For example, in Haskell you would just write:
(\x -> x) (\y -> y)

... and the compiler would figure out all the type abstractions and type applications for you.
Exercise: Haskell provides a const function defined like this:
const :: a -> b -> a
const x y = x

Translate the const function to a typed and polymorphic lambda expression in System F (i.e. using explicit type abstractions)
Boolean values

Lambda expressions are the "code", so now we need to create "data" from "code".
One of the simplest pieces of data is a boolean value, which we can encode using typed lambda expressions. For example, here is how you implement the value True:
λ(Bool : *) → λ(True : Bool) → λ(False : Bool) → True

Note that the names have no significance at all. I could have equally well written the expression as:
λ(a : *) → λ(x : a) → λ(y : a) → x

... which is "α-equivalent" to the previous version (i.e. equivalent up to renaming of variables).
We will save the above expression to a file named ./True in our current directory. We'll see why shortly.
We can either save the expression using Unicode characters:
$ cat > ./True
λ(Bool : *) → λ(True : Bool) → λ(False : Bool) → True
<Ctrl-D>
... or using ASCII, replacing each lambda (i.e. λ) with a backslash (i.e. \) and replacing each arrow (i.e. →) with an ASCII arrow (i.e. ->)
$ cat > ./True
\(Bool : *) -> \(True : Bool) -> \(False : Bool) -> True
<Ctrl-D>
... whichever you prefer. For the rest of this tutorial I will use Unicode since it's easier to read.
Similarly, we can encode False by just changing our lambda expression to return the third argument named False instead of the second argument named True. We'll name this file ./False:
$ cat > ./False
λ(Bool : *) → λ(True : Bool) → λ(False : Bool) → False
<Ctrl-D>
What's the type of a boolean value? Well, both the ./True and ./False files have the same type, which we shall call ./Bool:
$ cat > ./Bool
∀(Bool : *) → ∀(True : Bool) → ∀(False : Bool) → Bool
... and if you are following along with ASCII you can replace each forall symbol (i.e. ∀) with the word forall:
$ cat > ./Bool
forall (Bool : *) -> forall (True : Bool) -> forall (False : Bool) -> Bool
We are saving these terms and types to files because we can use the annah compiler to work with any lambda expression or type saved as a file. For example, I can use the annah compiler to verify that the file ./True has type ./Bool:
$ annah
-- Read this as: "./True has type ./Bool"
./True : ./Bool
<Ctrl-D>
./True
$ echo $?
0
If the expression typechecks then annah will just compile the expression to lambda calculus (by removing the unnecessary type annotation in this case) and return a zero exit code. However, if the expression does not typecheck:
$ annah
./True : ./True
annah:
Expression: ∀(x : λ(Bool : *) → λ(True : Bool) → λ(False : Bool)
→ True) → λ(Bool : *) → λ(True : Bool) → λ(False : Bool) → True
Error: Invalid input type
Type: λ(Bool : *) → λ(True : Bool) → λ(False : Bool) → True
$ echo $?
1
... then annah will throw an exception and return a non-zero exit code. In this case annah complains that the ./True on the right-hand side of the type annotation is not a valid type.
The last thing we need is a function that can consume values of type ./Bool, like an ./if function:
$ cat > ./if
λ(x : ./Bool ) → x
--          ^
--          |
--          +-- Note the space. Filenames must end with a space
The definition of ./if is blindingly simple: ./if is just the identity function on ./Bools!
To see why this works, let's see what the type of ./if is. We can ask for the type of any expression by feeding the expression to the morte compiler via standard input:
$ morte < ./if
∀(x : ∀(Bool : *) → ∀(True : Bool) → ∀(False : Bool) → Bool) →
∀(Bool : *) → ∀(True : Bool) → ∀(False : Bool) → Bool
λ(x : ∀(Bool : *) → ∀(True : Bool) → ∀(False : Bool) → Bool) → x
morte is a lambda calculus compiler installed alongside annah, and annah is a higher-level interface to the morte language. By default, the morte compiler will:
 resolve all file references (transitively, if necessary)
 typecheck the expression
 optimize the expression
 write the expression's type to standard error as the first line of output
 write the optimized expression to standard output as the last line of output
In this case we only cared about the type, so we could have equally well just asked the morte compiler to resolve and infer the type of the expression:
$ morte resolve < ./if | morte typecheck
∀(x : ∀(Bool : *) → ∀(True : Bool) → ∀(False : Bool) → Bool) →
∀(Bool : *) → ∀(True : Bool) → ∀(False : Bool) → Bool
The above type is the same thing as:
∀(x : ./Bool ) → ./Bool

If you don't believe me you can prove this to yourself by asking morte to resolve the type:
$ echo "∀(x : ./Bool ) → ./Bool"  morte resolve∀(x : ∀(Bool : *) → ∀(True : Bool) → ∀(False : Bool) → Bool) →
∀(Bool : *) → ∀(True : Bool) → ∀(False : Bool) → Bool
However, the type will make the most sense if you only expand out the second ./Bool in the type but leave the first ./Bool alone:
./Bool → ∀(Bool : *) → ∀(True : Bool) → ∀(False : Bool) → Bool

You can read this type as saying that the ./if function takes four arguments:
 the first argument is the ./Bool that we want to branch on (i.e. ./True or ./False)
 the second argument is the result type of our ./if expression
 the third argument is the result we return if the ./Bool evaluates to ./True (i.e. the "then" branch)
 the fourth argument is the result we return if the ./Bool evaluates to ./False (i.e. the "else" branch)
For example, this Haskell code:
if True
then False
else True
... would translate to this Annah code:
$ annah
./if ./True
./Bool   -- The type of the result
./False  -- The `then` branch
./True   -- The `else` branch
<Ctrl-D>
./if ./True ./Bool ./False ./True
annah does not evaluate the expression. annah only translates the expression into Morte code (and the expression is already valid Morte code) and typechecks the expression. If you want to evaluate the expression you need to run the expression through the morte compiler, too:
$ morte
./if ./True
./Bool   -- The type of the result
./False  -- The `then` branch
./True   -- The `else` branch
<Ctrl-D>
∀(Bool : *) → ∀(True : Bool) → ∀(False : Bool) → Bool
λ(Bool : *) → λ(True : Bool) → λ(False : Bool) → False
morte deduces that the expression has type ./Bool and the expression evaluates to ./False.
morte evaluates the expression by resolving all references and repeatedly applying β-reduction. This is what happens under the hood:
./if
./True
./Bool
./False
./True

-- Resolve the `./if` reference
= (λ(x : ./Bool ) → x)
./True
./Bool
./False
./True

-- β-reduce
= ./True
./Bool
./False
./True

-- Resolve the `./True` reference
= (λ(Bool : *) → λ(True : Bool) → λ(False : Bool) → True)
./Bool
./False
./True

-- β-reduce
= (λ(True : ./Bool ) → λ(False : ./Bool ) → True)
./False
./True

-- β-reduce
= (λ(False : ./Bool ) → ./False )
./True

-- β-reduce
= ./False

-- Resolve the `./False` reference
= λ(Bool : *) → λ(True : Bool) → λ(False : Bool) → False
The above sequence of steps is a white lie: the true order of steps is actually different, but equivalent.
The ./if function was not even necessary because every value of type ./Bool is already a "preformed if expression". That's why ./if is just the identity function on ./Bools. You can delete the ./if from the above example and the code will still work.
Now let's define the not function and save the function to a file:
$ annah > ./not
λ(b : ./Bool ) →
./if b
./Bool
./False  -- If `b` is `./True` then return `./False`
./True   -- If `b` is `./False` then return `./True`
<Ctrl-D>
We can now use this file like an ordinary function:
$ morte
./not ./False
<Ctrl-D>
∀(Bool : *) → ∀(True : Bool) → ∀(False : Bool) → Bool
λ(Bool : *) → λ(True : Bool) → λ(False : Bool) → True
$ morte
./not ./True
<Ctrl-D>
∀(Bool : *) → ∀(True : Bool) → ∀(False : Bool) → Bool
λ(Bool : *) → λ(True : Bool) → λ(False : Bool) → False
Notice how ./not ./False returns ./True and ./not ./True returns ./False.
Similarly, we can define an and function and an or function:
$ annah > and
λ(x : ./Bool ) → λ(y : ./Bool ) →
./if x
./Bool
y        -- If `x` is `./True` then return `y`
./False  -- If `x` is `./False` then return `./False`
<Ctrl-D>

$ annah > or
λ(x : ./Bool ) → λ(y : ./Bool ) →
./if x
./Bool
./True   -- If `x` is `./True` then return `./True`
y        -- If `x` is `./False` then return `y`
<Ctrl-D>
... and use them:
$ morte
./and ./True ./False
<Ctrl-D>
∀(Bool : *) → ∀(True : Bool) → ∀(False : Bool) → Bool
λ(Bool : *) → λ(True : Bool) → λ(False : Bool) → False

$ morte
./or ./True ./False
<Ctrl-D>
∀(Bool : *) → ∀(True : Bool) → ∀(False : Bool) → Bool
λ(Bool : *) → λ(True : Bool) → λ(False : Bool) → True
We started with nothing but lambda expressions, but still managed to implement:
 a ./Bool type
 a ./True value of type ./Bool
 a ./False value of type ./Bool
 ./if, ./not, ./and, and ./or functions
... and we can do real computation with them! In other words, we've modeled boolean data types entirely as code.
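For comparison, here is a rough Haskell rendering of the same Boehm-Berarducci booleans (a sketch of mine; in Haskell the type abstraction is implicit, and all the names are my assumptions rather than anything annah-specific):

{-# LANGUAGE RankNTypes #-}

-- a Church boolean picks one of two alternatives
type BoolC = forall a. a -> a -> a

true, false :: BoolC
true  t _ = t
false _ f = f

-- `if` is just the identity, exactly like ./if above
ifC :: BoolC -> a -> a -> a
ifC b = b

-- `not` swaps the "then" and "else" branches
notC :: BoolC -> BoolC
notC b = \t f -> b f t

-- ifC (notC true) "then" "else"  evaluates to  "else"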
Exercise: Implement an xor function
Natural numbers

You might wonder what other data types you can implement in terms of lambda calculus. Fortunately, you don't have to wonder because the annah compiler will actually compile data type definitions to lambda expressions for you.
For example, suppose we want to define a natural number type encoded using Peano numerals. We can write:
$ annah types
type Nat
data Succ (pred : Nat)
data Zero
fold foldNat
<Ctrl-D>
You can read the above datatype specification as saying:
 Define a type named Nat ...
 ... with a constructor named Succ with one field named pred of type Nat ...
 ... with another constructor named Zero with no fields
 ... and a fold named foldNat
annah then translates the datatype specification into the following files and directories:
+-- ./Nat.annah        -- `annah` implementation of `Nat`
`-- ./Nat
    |
    +-- @              -- `morte` implementation of `Nat`
    |                  -- (if you import the `./Nat` directory,
    |                  --  this file is imported instead)
    |
    +-- Zero.annah     -- `annah` implementation of `Zero`
    |
    +-- Zero           -- `morte` implementation of `Zero`
    |
    +-- Succ.annah     -- `annah` implementation of `Succ`
    |
    +-- Succ           -- `morte` implementation of `Succ`
    |
    +-- foldNat.annah  -- `annah` implementation of `foldNat`
    |
    `-- foldNat        -- `morte` implementation of `foldNat`
Let's see how the Nat type is implemented:
∀(Nat : *) → ∀(Succ : ∀(pred : Nat) → Nat) → ∀(Zero : Nat) → Nat

All Boehm-Berarducci-encoded datatypes are encoded as substitution functions, including ./Nat. Any value of ./Nat is a function that takes three arguments that we will substitute into our natural number expression:
 The first argument replaces every occurrence of the Nat type
 The second argument replaces every occurrence of the Succ constructor
 The third argument replaces every occurrence of the Zero constructor
This will make more sense if we walk through a specific example. First, we will build the number 3 using the ./Nat/Succ and ./Nat/Zero constructors:
$ morte
./Nat/Succ (./Nat/Succ (./Nat/Succ ./Nat/Zero ))

∀(Nat : *) → ∀(Succ : ∀(pred : Nat) → Nat) → ∀(Zero : Nat) → Nat
λ(Nat : *) → λ(Succ : ∀(pred : Nat) → Nat) → λ(Zero : Nat) →
Succ (Succ (Succ Zero))
Now suppose that we want to compute whether or not our natural number is even. The only catch is that we must limit ourselves to substitution when computing even. We have to figure out something that we can substitute in place of the Succ constructors and something that we can substitute in place of the Zero constructors that will then evaluate to ./True if the natural number is even and ./False otherwise.
One substitution that works is the following:
 Replace every Zero with ./True (because Zero is even)
 Replace every Succ with ./not (because Succ alternates between even and odd)
So in other words, if we began with this:
./Nat/Succ (./Nat/Succ (./Nat/Succ ./Nat/Zero ))

... and we substitute ./Nat/Succ with ./not and substitute ./Nat/Zero with ./True:
./not (./not (./not ./True ))

... then the expression will reduce to ./False.
Let's prove this by saving the above number to a file named ./three:
$ morte > ./three
./Nat/Succ (./Nat/Succ (./Nat/Succ ./Nat/Zero ))
$ cat three
λ(Nat : *) → λ(Succ : ∀(pred : Nat) → Nat) → λ(Zero : Nat) →
Succ (Succ (Succ Zero))
The first thing we need to do is to replace the Nat with ./Bool:
./three ./Bool

-- Resolve `./three`
= (λ(Nat : *) → λ(Succ : ∀(pred : Nat) → Nat) → λ(Zero : Nat) →
Succ (Succ (Succ Zero))
) ./Bool

-- β-reduce
= λ(Succ : ∀(pred : ./Bool ) → ./Bool ) → λ(Zero : ./Bool ) →
Succ (Succ (Succ Zero))
Now the next two arguments have exactly the right type for us to substitute in ./not and ./True. The argument named Succ is now a function of type ∀(pred : ./Bool ) → ./Bool, which is the same type as ./not. The argument named Zero is now a value of type ./Bool, which is the same type as ./True. This means that we can proceed with the next two arguments:
./three ./Bool ./not ./True

-- Resolve `./three`
= (λ(Nat : *) → λ(Succ : ∀(pred : Nat) → Nat) → λ(Zero : Nat) →
Succ (Succ (Succ Zero))
) ./Bool ./not ./True

-- β-reduce
= (λ(Succ : ∀(pred : ./Bool ) → ./Bool ) → λ(Zero : ./Bool ) →
Succ (Succ (Succ Zero))
) ./not ./True

-- β-reduce
= (λ(Zero : ./Bool ) → ./not (./not (./not Zero))) ./True

-- β-reduce
= ./not (./not (./not ./True ))
The result is exactly what we would have gotten if we took our original expression:
./Nat/Succ (./Nat/Succ (./Nat/Succ ./Nat/Zero ))

... and replaced ./Nat/Succ with ./not and replaced ./Nat/Zero with ./True.
Let's verify that this works by running the code through the morte compiler:
$ morte
./three ./Bool ./not ./True
<Ctrl-D>
∀(Bool : *) → ∀(True : Bool) → ∀(False : Bool) → Bool
λ(Bool : *) → λ(True : Bool) → λ(False : Bool) → False
morte computes that the number ./three is not even, returning ./False.
We can even go a step further and save an ./even function to a file:
$ annah > even
\(n : ./Nat ) →
n ./Bool
./not   -- Replace each `./Nat/Succ` with `./not`
./True  -- Replace each `./Nat/Zero` with `./True`
... and use our newly-formed ./even function:
$ morte
./even ./three
<Ctrl-D>
∀(Bool : *) → ∀(True : Bool) → ∀(False : Bool) → Bool
λ(Bool : *) → λ(True : Bool) → λ(False : Bool) → False

$ morte
./even ./Nat/Zero
<Ctrl-D>
∀(Bool : *) → ∀(True : Bool) → ∀(False : Bool) → Bool
λ(Bool : *) → λ(True : Bool) → λ(False : Bool) → True
The annah compiler actually provides direct support for natural number literals, so you can also just write:
$ annah | morte
./even 100
∀(Bool : *) → ∀(True : Bool) → ∀(False : Bool) → Bool
λ(Bool : *) → λ(True : Bool) → λ(False : Bool) → True
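As an aside, the same substitution trick reads naturally in Haskell (a sketch of mine; it uses Haskell's built-in Bool and not for the two substitutions, to keep the types simple):

{-# LANGUAGE RankNTypes #-}

-- a Church natural says how many times to apply a function
type NatC = forall n. (n -> n) -> n -> n

three :: NatC
three s z = s (s (s z))

-- evenness by substitution: each Succ becomes `not`, Zero becomes True
evenC :: NatC -> Bool
evenC n = n not True

-- evenC three == False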
What about addition? How do we add two numbers using only substitution?
Well, one way we can add two numbers, m and n, is that we substitute each ./Nat/Succ in m with ./Nat/Succ (i.e. keep them the same) and substitute the Zero with n. In other words:
$ annah > plus
λ(m : ./Nat ) → λ(n : ./Nat ) →
m ./Nat      -- The result will still be a `./Nat`
./Nat/Succ   -- Replace each `./Nat/Succ` with `./Nat/Succ`
n            -- Replace each `./Nat/Zero` with `n`
Let's verify that this works:
$ annah | morte
./plus 2 2
∀(Nat : *) → ∀(Succ : ∀(pred : Nat) → Nat) → ∀(Zero : Nat) → Nat
λ(Nat : *) → λ(Succ : ∀(pred : Nat) → Nat) → λ(Zero : Nat) →
Succ (Succ (Succ (Succ Zero)))
We get back a Church-encoded 4!
What happened under the hood was the following substitutions:
./plus 2 2

-- Resolve `./plus`
= (λ(m : ./Nat ) → λ(n : ./Nat ) → m ./Nat ./Nat/Succ n) 2 2

-- β-reduce
= (λ(n : ./Nat ) → 2 ./Nat ./Nat/Succ n) 2

-- β-reduce
= 2 ./Nat ./Nat/Succ 2

-- Definition of 2
= (./Nat/Succ (./Nat/Succ ./Nat/Zero )) ./Nat ./Nat/Succ 2

-- Resolve and β-reduce the definition of 2 (multiple steps)
= (λ(Nat : *) → λ(Succ : ∀(pred : Nat) → Nat) → λ(Zero : Nat) →
Succ (Succ Zero)
) ./Nat ./Nat/Succ 2

-- β-reduce
= (λ(Succ : ∀(pred : ./Nat ) → ./Nat ) → λ(Zero : ./Nat ) →
Succ (Succ Zero)
) ./Nat/Succ 2

-- β-reduce
= (λ(Zero : ./Nat ) → ./Nat/Succ (./Nat/Succ Zero)) 2

-- β-reduce
= ./Nat/Succ (./Nat/Succ 2)

-- Definition of 2
= ./Nat/Succ (./Nat/Succ (./Nat/Succ (./Nat/Succ ./Nat/Zero )))

-- Resolve and β-reduce (multiple steps)
= λ(Nat : *) → λ(Succ : ∀(pred : Nat) → Nat) → λ(Zero : Nat) →
Succ (Succ (Succ (Succ Zero)))
So we can encode natural numbers in lambda calculus, albeit very inefficiently! There are some tricks that we can use to greatly speed up both the time complexity and constant factors, but it will never be competitive with machine arithmetic. This is more of a proof of concept that you can model arithmetic purely in code.
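For reference, the same substitution-based addition in a Haskell sketch (my names again; NatC is the Church-natural type from the earlier aside):

{-# LANGUAGE RankNTypes #-}

type NatC = forall n. (n -> n) -> n -> n

-- keep every Succ of `m` (reapply `s`), and replace `m`'s Zero with `n`
plusC :: NatC -> NatC -> NatC
plusC m n = \s z -> m s (n s z)

-- recover an ordinary Integer by substituting (+ 1) for Succ and 0 for Zero
toInt :: NatC -> Integer
toInt n = n (+ 1) 0

-- with two s z = s (s z), toInt (plusC two two) == 4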
Exercise: Implement a function which multiplies two natural numbers
Data types

annah also lets you define "temporary" data types that scope over a given expression. In fact, that's how Nat was implemented. You can look at the corresponding *.annah files to see how each type and term is defined in annah before conversion to morte code.
For example, here is how the Nat type is defined in annah:
$ cat Nat.annah
type Nat
data Succ (pred : Nat)
data Zero
fold foldNat
in Nat
The first four lines are identical to what we wrote when we invoked the annah types command from the command line. We can use the exact same data type specification to create a scoped expression that can reference the type and data constructors we specified.
When we run this expression through annah we get back the Nat type:
$ annah < Nat.annah
∀(Nat : *) → ∀(Succ : ∀(pred : Nat) → Nat) → ∀(Zero : Nat) → Nat
You can use these scoped datatype declarations to quickly check how various datatypes are encoded without polluting your current working directory. For example, I can ask annah how the type Maybe is encoded in lambda calculus:
$ annah
λ(a : *) →
type Maybe
data Just (x : a)
data Nothing
-- You can also leave out this `fold` if you don't use it
fold foldMaybe
in Maybe
<Ctrl-D>
λ(a : *) → ∀(Maybe : *) → ∀(Just : ∀(x : a) → Maybe) →
∀(Nothing : Maybe) → Maybe
A Maybe value is just another substitution function. You provide one branch that you substitute for Just and another branch that you substitute for Nothing. For example, the Just constructor always substitutes in the first branch and ignores the Nothing branch that you supply:
$ annah
λ(a : *) →
type Maybe
data Just (x : a)
data Nothing
in Just
<Ctrl-D>
λ(a : *) → λ(x : a) → λ(Maybe : *) → λ(Just : ∀(x : a) → Maybe)
→ λ(Nothing : Maybe) → Just x
Vice versa, the Nothing constructor substitutes in the Nothing branch that you supply and ignores the Just branch:
$ annah
λ(a : *) →
type Maybe
data Just (x : a)
data Nothing
in Nothing
<Ctrl-D>
λ(a : *) → λ(Maybe : *) → λ(Just : ∀(x : a) → Maybe) → λ(Nothing : Maybe) → Nothing
Notice how we've implemented Maybe and Just purely using functions. This implies that any language with functions can encode Maybe, like Python!
Let's translate the above definition of Just and Nothing to the equivalent Python code. The only difference is that we delete the type abstractions because they are not necessary in Python:
def just(x):
    def f(just, nothing):
        return just(x)
    return f
def nothing():
    def f(just, nothing):
        return nothing
    return f
We can similarly translate Haskellstyle pattern matching like this:
example :: Maybe Int -> IO ()
example m = case m of
    Just n  -> print n
    Nothing -> return ()
... into this Python code:
def example(m):
    def just(n): # This is what we substitute in place of `Just`
        print(n)
    def nothing(): # This is what we substitute in place of `Nothing`
        return
    m(just, nothing)
... and verify that our pattern matching function works:
>>> example(nothing())
>>> example(just(1))
1
Neat! This means that any algebraic data type can be embedded into any language with functions, which is basically every language!
Warning: your colleagues may get angry at you if you do this! Consider using a language with built-in support for algebraic data types instead of trying to twist your language into something it's not.
Let expressions

You can translate let expressions to lambda calculus, too. For example, instead of saving something to a file we can just use a let expression to temporarily define something within a program.
For example, we could write:
$ annah | morte
let x : ./Nat = ./plus 1 2
in ./plus x x
∀(Nat : *) → ∀(Succ : ∀(pred : Nat) → Nat) → ∀(Zero : Nat) → Nat
λ(Nat : *) → λ(Succ : ∀(pred : Nat) → Nat) → λ(Zero : Nat) →
Succ (Succ (Succ (Succ (Succ (Succ Zero)))))
... but that doesn't really tell us anything about how annah desugars let because we only see the final evaluated result. We can ask annah to desugar without performing any other transformations using the annah desugar command:
$ annah desugar
let x : ./Nat = ./plus 1 2
in ./plus x x
<Ctrl-D>
(λ(x : ./Nat ) → ./plus x x) (./plus (λ(Nat : *) → λ(Succ :
∀(pred : Nat) → Nat) → λ(Zero : Nat) → Succ Zero) (λ(Nat : *) →
λ(Succ : ∀(pred : Nat) → Nat) → λ(Zero : Nat) → Succ (Succ
Zero)))
... which makes more sense if we clean up the result through the use of numeric literals:
(λ(x : ./Nat ) → ./plus x x) (./plus 1 2)

Every time we write an expression of the form:
let x : t = y
in  e
... we decode that to lambda calculus as:
(λ(x : t) → e) y

We can also decode function definitions, too. For example, you can write:
$ annah | morte
let increment (x : ./Nat ) : ./Nat = ./plus x 1
in increment 3
<Ctrl-D>
∀(Nat : *) → ∀(Succ : ∀(pred : Nat) → Nat) → ∀(Zero : Nat) → Nat
λ(Nat : *) → λ(Succ : ∀(pred : Nat) → Nat) → λ(Zero : Nat) →
Succ (Succ (Succ (Succ Zero)))
... and the intermediate desugared form also encodes the function definition as a lambda expression:
$ annah desugar
let increment (x : ./Nat ) : ./Nat = ./plus x 1
in increment 3
<Ctrl-D>
(λ(increment : ∀(x : ./Nat ) → ./Nat ) → increment (λ(Nat : *)
→ λ(Succ : ∀(pred : Nat) → Nat) → λ(Zero : Nat) → Succ (Succ
(Succ Zero)))) (λ(x : ./Nat ) → ./plus x (λ(Nat : *) → λ(Succ
: ∀(pred : Nat) → Nat) → λ(Zero : Nat) → Succ Zero))
... which you can clean up as this expression:
(λ(increment : ∀(x : ./Nat ) → ./Nat ) → increment 3)
(λ(x : ./Nat ) → ./plus x 1)
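The same desugaring is easy to check in Haskell (a toy example of mine; note that Haskell's let also adds recursion and let-polymorphism, which a plain application does not give you):

-- both expressions evaluate to 6
example1, example2 :: Integer
example1 = let x = 1 + 2 in x + x
example2 = (\x -> x + x) (1 + 2)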
We can combine let expressions with data type expressions, too. For example, here's our original not program, except without saving anything to files:
$ annah
type Bool
data True
data False
fold if
in
let not (b : Bool) : Bool = if b Bool False True
in
not False
<Ctrl-D>
λ(Bool : *) → λ(True : Bool) → λ(False : Bool) → True

Lists

annah also provides syntactic support for lists. For example:
$ annah
[nil ./Bool , ./True , ./False , ./True ]
<Ctrl-D>
λ(List : *) → λ(Cons : ∀(head : ./Bool ) → ∀(tail : List) →
List) → λ(Nil : List) → Cons ./True (Cons ./False (Cons
./True Nil))
Just like all the other data types, a list is defined in terms of what you use to substitute each Cons and Nil constructor. I can replace each Cons with ./and and the Nil with ./True like this:
$ annah | morte
[nil ./Bool , ./True , ./False , ./True ] ./Bool ./and ./True
<Ctrl-D>
∀(Bool : *) → ∀(True : Bool) → ∀(False : Bool) → Bool
λ(Bool : *) → λ(True : Bool) → λ(False : Bool) → False
Conceptually, this follows the reduction sequence below:
( λ(List : *)
→ λ(Cons : ∀(head : ./Bool ) → ∀(tail : List) → List)
→ λ(Nil : List)
→ Cons ./True (Cons ./False (Cons ./True Nil))
) ./Bool
./and
./True

-- β-reduction
= ( λ(Cons : ∀(head : ./Bool ) → ∀(tail : ./Bool ) → ./Bool )
→ λ(Nil : ./Bool )
→ Cons ./True (Cons ./False (Cons ./True Nil))
) ./and
./True

-- β-reduction
= ( λ(Nil : ./Bool )
→ ./and ./True (./and ./False (./and ./True Nil))
) ./True

-- β-reduction
= ./and ./True (./and ./False (./and ./True ./True))
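In Haskell terms, a Boehm-Berarducci list is its own fold. Here is a sketch of mine mirroring the example above:

{-# LANGUAGE RankNTypes #-}

-- a Church list takes a substitute for Cons and a substitute for Nil
type ListC a = forall r. (a -> r -> r) -> r -> r

exampleList :: ListC Bool   -- [True, False, True]
exampleList cons nil = cons True (cons False (cons True nil))

-- replace Cons with (&&) and Nil with True, just like ./and above
andList :: ListC Bool -> Bool
andList xs = xs (&&) True

-- andList exampleList == False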
Similarly, we can sum a list by replacing each Cons with ./plus and replacing each Nil with 0:
$ annah | morte
[nil ./Nat , 1, 2, 3, 4] ./Nat ./plus 0
∀(Nat : *) → ∀(Succ : ∀(pred : Nat) → Nat) → ∀(Zero : Nat) → Nat
λ(Nat : *) → λ(Succ : ∀(pred : Nat) → Nat) → λ(Zero : Nat) →
Succ (Succ (Succ (Succ (Succ (Succ (Succ (Succ (Succ (Succ
Zero)))))))))
This behaves as if we had written:
./plus 1 (./plus 2 (./plus 3 (./plus 4 0)))

Prelude

annah also comes with a Prelude to show some more sophisticated examples of what you can encode in pure lambda calculus. You can find version 1.0 of the Prelude here:
http://sigil.place/prelude/annah/1.0/
You can use these expressions directly within your code just by referencing their URL. For example, the remote Bool expression is located here:
http://sigil.place/prelude/annah/1.0/Bool/@
... and the remote True expression is located here:
http://sigil.place/prelude/annah/1.0/Bool/True
... so we can check if the remote True's type matches the remote Bool by writing this:
$ annah
http://sigil.place/prelude/annah/1.0/Bool/True : http://sigil.place/prelude/annah/1.0/Bool
<Ctrl-D>
http://sigil.place/prelude/annah/1.0/Bool/True
$ echo $?
0
Similarly, we can build a natural number (very verbosely) using remote Succ and Zero:
$ annah | morte
http://sigil.place/prelude/annah/1.0/Nat/Succ
( http://sigil.place/prelude/annah/1.0/Nat/Succ
( http://sigil.place/prelude/annah/1.0/Nat/Succ
http://sigil.place/prelude/annah/1.0/Nat/Zero
)
)
∀(Nat : *) → ∀(Succ : ∀(pred : Nat) → Nat) → ∀(Zero : Nat) → Nat
λ(Nat : *) → λ(Succ : ∀(pred : Nat) → Nat) → λ(Zero : Nat) →
Succ (Succ (Succ Zero))
However, we can also locally clone the Prelude using wget if we wish to refer to local file paths instead of remote paths:
$ wget -np -r --cut-dirs=3 http://sigil.place/prelude/annah/1.0/
$ cd sigil.place
$ ls
(->) Defer.annah List.annah Path Sum0.annah
(->).annah Eq Maybe Path.annah Sum1
Bool Eq.annah Maybe.annah Prod0 Sum1.annah
Bool.annah Functor Monad Prod0.annah Sum2
Category Functor.annah Monad.annah Prod1 Sum2.annah
Category.annah index.html Monoid Prod1.annah
Cmd IO Monoid.annah Prod2
Cmd.annah IO.annah Nat Prod2.annah
Defer List Nat.annah Sum0
Now we can use these expressions using their more succinct local paths:
$ annah | morte
./Nat/sum (./List/(++) ./Nat [nil ./Nat , 1, 2] [nil ./Nat , 3, 4])
<Ctrl-D>
∀(Nat : *) → ∀(Succ : ∀(pred : Nat) → Nat) → ∀(Zero : Nat) → Nat
λ(Nat : *) → λ(Succ : ∀(pred : Nat) → Nat) → λ(Zero : Nat) →
Succ (Succ (Succ (Succ (Succ (Succ (Succ (Succ (Succ (Succ
Zero)))))))))
Also, every expression has a corresponding *.annah file that documents the expression's type using a let expression. For example, we can see the type of the ./List/(++) function by studying the ./List/(++).annah file:
$ cat './List/(++).annah'
let (++) (a : *) (as1 : ../List a) (as2 : ../List a) : ../List a =
    \(List : *)
    -> \(Cons : a -> List -> List)
    -> \(Nil : List)
    -> as1 List Cons (as2 List Cons Nil)
in (++)
The top line tells us that (++) is a function that takes three arguments:
 An argument named a for the type of list elements you want to combine
 An argument named as1 for the left list you want to combine
 An argument named as2 for the right list you want to combine
... and the function returns a list of the same type as the two input lists.
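That is the Church-list version of append. In a Haskell sketch (my names; ListC as in the earlier aside), the same definition reads:

{-# LANGUAGE RankNTypes #-}

type ListC a = forall r. (a -> r -> r) -> r -> r

-- fold the first list, using the (already folded) second list as the Nil
appendC :: ListC a -> ListC a -> ListC a
appendC as1 as2 = \cons nil -> as1 cons (as2 cons nil)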
Beyond

Exactly how far can you take lambda calculus? Well, here's a program written in annah that reads two natural numbers, adds them, and writes them out:
$ annah | morte
./IO/Monad ./Prod0 (do ./IO {
x : ./Nat <- ./IO/get ;
y : ./Nat <- ./IO/get ;
_ : ./Prod0 <- ./IO/put (./Nat/(+) x y);
})
∀(IO : *) → ∀(Get_ : ((∀(Nat : *) → ∀(Succ : ∀(pred : Nat) →
Nat) → ∀(Zero : Nat) → Nat) → IO) → IO) → ∀(Put_ : (∀(Nat : *)
→ ∀(Succ : ∀(pred : Nat) → Nat) → ∀(Zero : Nat) → Nat) → IO →
IO) → ∀(Pure_ : (∀(Prod0 : *) → ∀(Make : Prod0) → Prod0) → IO)
→ IO
λ(IO : *) → λ(Get_ : ((∀(Nat : *) → ∀(Succ : ∀(pred : Nat) →
Nat) → ∀(Zero : Nat) → Nat) → IO) → IO) → λ(Put_ : (∀(Nat : *)
→ ∀(Succ : ∀(pred : Nat) → Nat) → ∀(Zero : Nat) → Nat) → IO →
IO) → λ(Pure_ : (∀(Prod0 : *) → ∀(Make : Prod0) → Prod0) → IO)
→ Get_ (λ(r : ∀(Nat : *) → ∀(Succ : ∀(pred : Nat) → Nat) →
∀(Zero : Nat) → Nat) → Get_ (λ(r : ∀(Nat : *) → ∀(Succ :
∀(pred : Nat) → Nat) → ∀(Zero : Nat) → Nat) → Put_ (λ(Nat : *)
→ λ(Succ : ∀(pred : Nat) → Nat) → λ(Zero : Nat) → r@1 Nat Succ
(r Nat Succ Zero)) (Pure_ (λ(Prod0 : *) → λ(Make : Prod0) →
Make))))
This does not run the program, but it creates a syntax tree representing all program instructions and the flow of information.
annah supports do notation so you can do things like write list comprehensions in annah:
$ annah | morte
./List/sum (./List/Monad ./Nat (do ./List {
x : ./Nat <- [nil ./Nat , 1, 2, 3];
y : ./Nat <- [nil ./Nat , 4, 5, 6];
_ : ./Nat <- ./List/pure ./Nat (./Nat/(+) x y);
}))
<Ctrl-D>
∀(Nat : *) → ∀(Succ : ∀(pred : Nat) → Nat) → ∀(Zero : Nat) → Nat
λ(Nat : *) → λ(Succ : ∀(pred : Nat) → Nat) → λ(Zero : Nat) →
Succ (Succ (Succ (Succ (Succ (Succ (Succ (Succ (Succ (Succ
(Succ (Succ (Succ (Succ (Succ (Succ (Succ (Succ (Succ (Succ
(Succ (Succ (Succ (Succ (Succ (Succ (Succ (Succ (Succ (Succ
(Succ (Succ (Succ (Succ (Succ (Succ (Succ (Succ (Succ (Succ
(Succ (Succ (Succ (Succ (Succ (Succ (Succ (Succ (Succ (Succ
(Succ (Succ (Succ (Succ (Succ (Succ (Succ (Succ (Succ (Succ
(Succ (Succ (Succ Zero)))))))))))))))))))))))))))))))))))))))))
)))))))))))))))))))))
The above Annah program is equivalent to the following Haskell program:
sum (do
    x <- [1, 2, 3]
    y <- [4, 5, 6]
    return (x + y) )
... yet it is implemented 100% in a minimal and total lambda calculus without any built-in support for data types.
This tutorial doesn't cover how do notation works, but you can learn this and more by reading the Annah tutorial which is bundled with the Hackage package:
Conclusions

A lot of people underestimate how much you can do in a total lambda calculus. I don't recommend pure lambda calculus as a general programming language, but I could see a lambda calculus enriched with high-efficiency primitives serving as a realistic starting point for simple functional languages that are easy to port and distribute.
One of the projects I'm working towards in the long run is a "JSON for code" and annah is one step along the way towards that goal. annah will likely not be that language, but I still factored out annah as a separate and reusable project along the way so that others could fork and experiment with annah when experimenting with their own language design.
Also, as far as I can tell annah is the only project in the wild that actually implements the Boehm-Berarducci encoding outlined in this paper:
... so if you prefer to learn the encoding algorithm by studying actual code you can use the annah source code as a reference realworld implementation.
You can find the Annah project on Hackage or Github:
... and you can find the Annah prelude hosted online:
The Annah tutorial goes into the Annah language and compiler in much more detail than this tutorial, so if you would like to learn more I highly recommend reading the tutorial which walks through the compiler, desugaring, and the Prelude in much more detail:
Also, for those who are curious, both the annah and morte compilers are named after characters from the game Planescape: Torment.
Philip Wadler: John McCarthy presents Recursive Functions of Symbolic Expressions
Edward Z. Yang: Hindley-Milner with top-level existentials
Content advisory: This is a halfbaked research post.
Abstract. Top-level unpacking of existentials is easy to integrate into Hindley-Milner type inference. Haskell should support it. It's possible this idea can work for internal bindings of existentials as well (ala F-ing modules) but I have not worked out how to do it.
Update. And UHC did it first!
Update 2. And rank-2 type inference is decidable (and rank-1 existentials are an even weaker system), although the algorithm for rank-2 inference requires semi-unification.
Background

The difference between Hindley-Milner and System F. Although in informal discussion, Hindley-Milner is commonly described as a “type inference algorithm”, it should properly be described as a type system which is more restrictive than System F. Both type systems allow polymorphism via universal quantification of variables, but in System F this polymorphism is explicit and can occur anywhere, whereas in Hindley-Milner the polymorphism is implicit, and can only occur at the “top level” (in a so-called “polytype” or “type scheme.”) This restriction of polymorphism is the key which makes inference (via Algorithm W) for Hindley-Milner decidable (and practical), whereas inference for System F is undecidable.
-- Hindley-Milner
id :: a -> a
id = λx. x

-- System F
id :: ∀a. a -> a
id = Λa. λ(x : a). x

Existential types in System F. A common generalization of System F is to equip it with existential types:
Types τ ::= ... | ∃a. τ
Terms e ::= ... | pack <τ, e>_τ | unpack <a, x> = e in e

In System F, it is technically not necessary to add existentials as a primitive concept, as they can be encoded using universal quantifiers by saying ∃a. τ = ∀r. (∀a. τ → r) → r.
Existential types in Hindley-Milner? This strategy will not work for Hindley-Milner: the encoding requires a higher-rank type, which is precisely what Hindley-Milner rules out for the sake of inference.
In any case, it is a fool's game to try to infer existential types: there's no best type! HM always infers the most general type for an expression: e.g., we will infer f :: a -> a for the function f = \x -> x, and not Int -> Int. But the whole point of data abstraction is to pick a more abstract type, which is not going to be the most general type and, consequently, is not going to be unique. What should be abstract, what should be concrete? Only the user knows.
Existential types in Haskell. Suppose that we are willing to write down explicit types when existentials are packed: can Hindley-Milner do the rest of the work? That is to say, do we have complete and decidable inference for the rest of the types in our program?
Haskell is an existence (cough cough) proof that this can be made to work. In fact, there are two ways to go about doing it. The first is what you will see if you Google for “Haskell existential type”:
{-# LANGUAGE ExistentialQuantification #-}
{-# LANGUAGE RankNTypes #-}

data Ex f = forall a. Ex (f a)

pack :: f a -> Ex f
pack = Ex

unpack :: Ex f -> (forall a. f a -> r) -> r
unpack m k = case m of Ex x -> k x

Ex f is isomorphic to ∃a. f a, and similar to the System F syntax, they can be packed with the Ex constructor and unpacked by pattern-matching on them.
The second way is to directly use the System F encoding using Haskell's support for rank-n types:
{-# LANGUAGE RankNTypes #-}

type Ex f = forall r. (forall a. f a -> r) -> r

pack :: f a -> Ex f
pack x = \k -> k x

unpack :: Ex f -> (forall a. f a -> r) -> r
unpack m k = m k

The boxy types paper demonstrated that you can do inference, so long as all of your higher-rank types are annotated. Although, perhaps it was not as simple as hoped, since impredicative types are a source of constant bugs in GHC's type checker.
The problem

Explicit unpacks suck. As anyone who has tried programming with existentials in Haskell can attest, the use of existentials can still be quite clumsy due to the necessity of unpacking an existential (casing on it) before it can be used. That is to say, the syntax let Ex x = ... in ... is not allowed, and it is an easy way to get GHC to tell you its brain exploded.
Leijen investigated the problem of handling existentials without explicit unpacks.
Loss of principal types without explicit unpacks, and Leijen's solution. Unfortunately, the naive type system does not have principal types. Leijen gives an example where there is no principal type:
wrap :: forall a. a -> [a]
key :: exists b. Key b

-- What is the type of 'wrap key'?
-- [exists b. Key b]?
-- exists b. [Key b]?

Neither type is a subtype of the other. In his paper, Leijen suggests that the existential should be unwrapped as late as possible (since you can go from the first type to the second, but not vice versa), and thus, the first type should be preferred.
The solution

A different approach. What if we always lift the existential to the top level? This is really easy to do if you limit unpacks to the top level of a program, and it turns out this works really well. (The downside is that dynamic use of existentials is not supported.)
There's an existential in every top-level Haskell algebraic data type. First, I want to convince you that this is not all that strange of an idea. To do this, we look at Haskell's support for algebraic data types. Algebraic data types in Haskell are generative: each data type must be given a top-level declaration and is considered a distinct type from any other data type. Indeed, Haskell users use this generativity in conjunction with the ability to hide constructors to achieve data abstraction in Haskell. Although there is not actually an existential lurking about—generativity is not data abstraction—generativity is an essential part of data abstraction, and HM has no problem with this.
Top-level generativity corresponds to existentials that are unpacked at the top level of a program (ala F-ing modules). We don't need existentials embedded inside our Haskell expressions to support the generativity of algebraic data types: all we need is the ability to pack an existential type at the top level, and then immediately unpack it into the top-level context. In fact, F-ing modules goes even further: existentials can always be lifted until they reach the top level of the program. Modular programming with applicative functors (the ML kind) can be encoded using top-level existentials which are immediately unpacked as they are defined.
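For intuition, the closest thing ordinary Haskell offers today is an existential packed into a top-level value, with the representation hidden from all clients (a sketch of mine, not the seal form proposed below):

{-# LANGUAGE ExistentialQuantification #-}

-- a tiny "module": a zero, an increment, and an observer, with the
-- representation type s sealed away from clients
data Counter = forall s. Counter s (s -> s) (s -> Int)

counter :: Counter
counter = Counter 0 (+ 1) id   -- representation is Int, but clients can't tell

twice :: Counter -> Int
twice (Counter z inc out) = out (inc (inc z))   -- == 2, for any representation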
The proposal. So let us suggest the following type system, Hindley-Milner with top-level existentials (where a* denotes zero to many type variables):
Term variables ∈ f, x, y, z
Type variables ∈ a, b, c

Programs                 prog ::= let f = e in prog
                                | seal (b*, f :: σ) = (τ*, e) in prog
                                | {}
Type schemes (polytypes) σ ::= ∀a*. τ
Expressions              e ::= x | \x -> e | e e
Monotypes                τ ::= a | τ -> τ

There is one new top-level binding form, seal. We can give it the following typing rule:
Γ ⊢ e :: τ₀[b* → τ*]
a* = freevars(τ₀[b* → τ*])
Γ, b*, (f :: ∀a*. τ₀) ⊢ prog
---------------------------------------------
Γ ⊢ seal (b*, f :: ∀a*. τ₀) = (τ*, e) in prog

It also elaborates directly to System F with existentials:

seal (b*, f :: σ) = (τ*, e) in prog
  ===>  unpack <b*, f> = pack <τ*, e>_{∃b*. σ} in prog

A few observations:
 In conventional presentations of HM, let-bindings are allowed to be nested inside expressions (and are generalized to polytypes before being added to the context). Can we do something similar with seal? This should be possible, but the bound existential type variables must be propagated up.
 This leads to a second problem: naively, the order of quantifiers must be ∃b. ∀a. τ and not ∀a. ∃b. τ, because otherwise we cannot add the existential to the top-level context. However, there is a "skolemization" trick (c.f. Shao and F-ing modules) by which you can just make b a higher-kinded type variable which takes a as an argument, e.g., ∀a. ∃b. b is equivalent to ∃b'. ∀a. b' a. This trick could serve as the way to support inner seal bindings, but the encoding tends to be quite involved (as you must close over the entire environment.)
 This rule is not very useful for directly modeling ML modules, as a “module” is usually thought of as a record of polymorphic functions. Maybe you could generalize this rule to bind multiple polymorphic functions?
Conclusion. And that's as far as I've worked it out. I am hoping someone can tell me (1) who came up with this idea already, and (2) why it doesn't work.
Mark Jason Dominus: Steph Curry: fluke or breakthrough?
[ Disclaimer: I know very little about basketball. I think there's a good chance this article contains at least one basketball-related howler, but I'm too ignorant to know where it is. ]
Randy Olson recently tweeted a link to a New York Times article about Steph Curry's new 3-point record. Here is Olson’s snapshot of a portion of the Times’ clever and attractive interactive chart:
(Skip this paragraph if you know anything about basketball. The object of the sport is to throw a ball through a “basket” suspended ten feet (3 meters) above the court. Normally a player's team is awarded two points for doing this. But if the player is sufficiently far from the basket—the distance varies but is around 23 feet (7 meters)—three points are awarded instead. Carry on!)
The chart demonstrates that Curry this year has shattered the single-season record for three-point field goals. The previous record, set last year, is 286, also by Curry; the new record is 402. A comment by the authors of the chart says
The record is an outlier that defies most comparisons, but here is one: It is the equivalent of hitting 103 home runs in a Major League Baseball season.
(The current single-season home run record is 73.)
I found this remark striking, because I don't think the record is an outlier that defies most comparisons. In fact, it doesn't even defy the comparison they make, to the baseball singleseason home run record.
In 1919, the record for home runs in a single season was 29, hit by Babe Ruth. The 1920 record, also by Ruth, was 54. To make the same comparison as the authors of the Times article, that is the equivalent of hitting 136 home runs in a Major League Baseball season (54⁄29 × 73 ≈ 136).
No, far from being an outlier that defies most comparisons, I think what we're seeing here is something that has happened over and over in sport, a fundamental shift in the way the game is played; in short, a breakthrough. In baseball, Ruth's 1920 season was the end of what is now known as the dead-ball era. The end of the dead-ball era was caused by the confluence of several trends (shrinking ballparks), rule changes (the spitball), and one-off events (Ray Chapman, the Black Sox). But an important cause was simply that Ruth realized that he could play the game in a better way by hitting a crapload of home runs.
The new record was the end of a sudden and sharp upward trend. Prior to Ruth's 29 home runs in 1919, the record had been 27, a weird fluke set way back in 1887 when the rules were drastically different. Typical single-season home run records in the intervening years were in the 11 to 16 range; the record exceeded 20 in only four of the intervening 25 years.
Ruth's innovation was promptly imitated. In 1920, the #2 hitter hit 19 home runs and the #10 hitter hit 11, typical numbers for the nineteen-teens. By 1929, the #10 hitter hit 31 home runs, which would have been record-setting in 1919. It was a different game.
For another example of a breakthrough, let's consider competitive hot dog eating. Between 1980 and 1990, champion hot dog eaters consumed between 9 and 16 hot dogs in 10 minutes. In 1991 the time was extended to 12 minutes and Frank Dellarosa set a new record, 21½ hot dogs, which was not too far out of line with previous records, and which was repeatedly approached in the following decade: through 1999 five different champions ate between 19 and 24½ hot dogs in 12 minutes, in every year except 1993.
But in 2000 Takeru Kobayashi (小林 尊) changed the sport forever, eating an unbelievably disgusting 50 hot dogs in 12 minutes. (50. Not a misprint. Fifty. Roman numeral Ⅼ.) To make the Times’ comparison again, that is the equivalent of hitting 149 home runs in a Major League Baseball season (50⁄24½ × 73 ≈ 149).
At that point it was a different game. Did the record represent a fundamental shift in hot dog gobbling technique? Yes. Kobayashi won all five of the next five contests, eating between 44½ and 53¾ each time. By 2005 the second and thirdplace finishers were eating 35 or more hot dogs each; had they done this in 1995 they would have demolished the old records. A new generation of champions emerged, following Kobayashi's lead. The current record is 69 hot dogs in 10 minutes. The recordsetters of the 1990s would not even be in contention in a modern hot dog eating contest.
It is instructive to compare these breakthroughs with a different sort of astonishing sports record, the bizarre fluke. In 1967, the world record distance for the long jump was 8.35 meters. In 1968, Bob Beamon shattered this record, jumping 8.90 meters. To put this in perspective, consider that in one jump, Beamon advanced the record by 55 cm, the same amount that it had advanced (in 13 stages) between 1925 and 1967.
[Chart: Progression of the world long jump record. The cliff at 1968 is Bob Beamon.]
Did Beamon's new record represent a fundamental shift in long jump technique? No: Beamon never again jumped more than 8.22m. Did other jumpers promptly imitate it? No, Beamon's record was approached only a few times in the following quartercentury, and surpassed only once. Beamon had the benefit of high altitude, a tail wind, and fabulous luck.
Another bizarre fluke is Joe DiMaggio's hitting streak: in the 1941 baseball season, DiMaggio achieved hits in 56 consecutive games. For extensive discussion of just how bizarre this is, see The Streak of Streaks by Stephen Jay Gould. (“DiMaggio’s streak is the most extraordinary thing that ever happened in American sports.”) Did DiMaggio’s hitting streak represent a fundamental shift in the way the game of baseball was played, toward high-average hitting? Did other players promptly imitate it? No. DiMaggio's streak has never been seriously challenged, and has been approached only a few times. (The modern runner-up is Pete Rose, who hit in 44 consecutive games in 1978.) DiMaggio also had the benefit of fabulous luck.
Is Curry’s new record a fluke or a breakthrough?
I think what we're seeing in basketball is a breakthrough, a shift in the way the game is played analogous to the arrival of baseball’s home run era in the 1920s. Unless the league tinkers with the rules to prevent it, we might expect the next generation of players to regularly lead the league with 300 or 400 three-point shots in a season. Here's why I think so.
Curry's record wasn't unprecedented. He's been setting three-point records for years. (Compare Ruth’s 1920 home run record, foreshadowed in 1919.) He's continuing a trend that he began years ago.
Curry’s record, unlike DiMaggio’s streak, does not appear to depend on fabulous luck. His 402 field goals this year are on 886 attempts, a 45.4% success rate. This is in line with his success rate every year since 2009; last year he had a 44.3% success rate. Curry didn't get lucky this year; he had 40% more field goals because he made almost 40% more attempts. There seems to be no reason to think he couldn't make the same number of attempts next year with equal success, if he wants to.
Does he want to? Probably. Curry’s new three-point strategy seems to be extremely effective. In his previous three seasons he scored 1786, 1873, and 1900 points; this season, he scored 2375, an increase of 475, three-quarters of which is due to his three-point field goals. So we can suppose that he will continue to attempt a large number of three-point shots.
Is this something unique to Curry or is it something that other players might learn to emulate? Curry’s three-point field goal rate is high, but not exceptionally so. He's not the most accurate of all three-point shooters; he holds the 62nd–64th-highest season percentages for three-point success rate. There are at least a few other players in the league who must have seen what Curry did and thought “I could do that”. (Kyle Korver maybe? I'm on very shaky ground; I don't even know how old he is.) Some of those players are going to give it a try, as are some we haven’t seen yet, and there seems to be no reason why some shouldn't succeed.
A number of things could sabotage this analysis. For example, the league might take steps to reduce the number of three-point field goals, specifically in response to Curry’s new record, say by moving the three-point line farther from the basket. But if nothing like that happens, I think it's likely that we'll see basketball enter a new era of higher offense with more three-point shots, and that future sport historians will look back on this season as a watershed.
[ Addendum 2016-04-25: As I feared, my Korver suggestion was ridiculous. Thanks to the folks who explained why. Reason #1: He is 35 years old. ]