News aggregator

Gabriel Gonzalez: Data is Code

Planet Haskell - Sun, 04/24/2016 - 4:09pm

The title of this post is a play on the Lisp aphorism: "Code is Data". In the Lisp world everything is data; code is just another data structure that you can manipulate and transform.

However, you can also go to the exact opposite extreme: "Data is Code"! You can make everything into code and implement data structures in terms of code.

You might wonder what that even means: how can you write any code if you don't have any primitive data structures to operate on? Fascinatingly, Alonzo Church discovered a long time ago that if you have the ability to define functions you have a complete programming language. "Church encoding" is the technique named after his insight that you could transform data structures into functions.

This post is partly a Church encoding tutorial and partly an announcement for my newly released annah compiler, which implements the Church encoding of data types. Many of the examples in this post are valid annah code that you can play with. Also, to be totally pedantic, annah implements Boehm-Berarducci encoding, which you can think of as the typed version of Church encoding.

This post assumes that you have basic familiarity with lambda expressions. If you do not, you can read the first chapter (freely available) of Haskell Programming from First Principles, which does an excellent job of teaching lambda calculus.

If you would like to follow along with these examples, you can download and install annah by following these steps:

  • Install the stack tool
  • Create the following stack.yaml file

    $ cat > stack.yaml
    resolver: lts-5.13
    packages: []
    extra-deps:
    - annah-1.0.0
    - morte-1.6.0
    <Ctrl-D>
  • Run stack setup
  • Run stack install annah
  • Add the installed executable to your $PATH

Lambda calculus

In the untyped lambda calculus, you only have lambda expressions at your disposal and nothing else. For example, here is how you encode the identity function:

λx → x

That's a function that takes one argument and returns the same argument as its result.

We call this "abstraction" when we introduce a variable using the Greek lambda symbol and we call the variable that we introduce a "bound variable". We can then use that "bound variable" anywhere within the "body" of the lambda expression.

+-- Abstraction
|
|+-- Bound variable
||
vv
λx → x
^
|
+-- Body of lambda expression

Any expression that begins with a lambda is an anonymous function which we can apply to another expression. For example, we can apply the identity function to itself like this:

(λx → x) (λy → y)

-- β-reduction
= λy → y

We call this "application" when we supply an argument to an anonymous function.

We can define a function of multiple arguments by nested "abstractions":

λx → λy → x

The above code is an anonymous function that returns an anonymous function. For example, if you apply the outermost anonymous function to a value, you get a new function:

(λx → λy → x) 1

-- β-reduce
λy → 1

... and if you apply the lambda expression to two values, you return the first value:

(λx → λy → x) 1 2

-- β-reduce
(λy → 1) 2

-- β-reduce
1

So our lambda expression behaves like a function of two arguments, even though it's really a function of one argument that returns a new function of one argument. We call this "currying" when we simulate functions of multiple arguments using functions of one argument. We will use this trick because we will be programming in a lambda calculus that only supports functions of one argument.
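Here is a sketch of the same currying trick in Haskell, for readers who want something to play with (the names konst, onlyOne, and result are illustrative, not from this post):

-- A function of one argument that returns another function of one argument:
konst :: a -> b -> a
konst = \x -> \y -> x

-- Supplying one argument yields a new function ...
onlyOne :: b -> Int
onlyOne = konst 1

-- ... and supplying both arguments returns the first value:
result :: Int
result = konst 1 2   -- evaluates to 1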

Typed lambda calculus

In the typed lambda calculus you have to specify the types of all function arguments, so you have to write something like this:

λ(x : a) → x

... where a is the type of the bound variable named x.

However, the above function is still not valid because we haven't specified what the type a is. In theory, we could specify a type like Int:

λ(x : Int) → x

... but the premise of this post was that we could program without relying on any built-in data types so Int is out of the question for this experiment.

Fortunately, some typed variations of lambda calculus (most notably: "System F") let you introduce the type named a as yet another function argument:

λ(a : *) → λ(x : a) → x

This is called "type abstraction". Here the * is the "type of types" and is a universal constant that is always in scope, so we can always introduce new types as function arguments this way.

The above function is the "polymorphic identity function", meaning that this is the typed version of the identity function that still preserves the ability to operate on any type.

If we had built-in types like Int we could apply our polymorphic function to the type just like any other argument, giving back an identity function for a specific type:

(λ(a : *) → λ(x : a) → x) Int

-- β-reduction
λ(x : Int) → x

This is called "type application" or (more commonly) "specialization". A "polymorphic" function is a function that takes a type as a function argument and we "specialize" a polymorphic function by applying the function to a specific type argument.
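As an aside, you can make specialization explicit in Haskell, too, assuming a GHC new enough to support the TypeApplications extension (the names here are illustrative):

{-# LANGUAGE ScopedTypeVariables #-}
{-# LANGUAGE TypeApplications #-}

-- The polymorphic identity function; the `forall` plays the role of
-- the type abstraction λ(a : *) → ...
identity :: forall a. a -> a
identity x = x

-- Explicit type application ("specialization"), analogous to applying
-- the polymorphic identity function to `Int` above:
identityInt :: Int -> Int
identityInt = identity @Int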

However, we are forgoing built-in types like Int, so what other types do we have at our disposal?

Well, every lambda expression has a corresponding type. For example, the type of our polymorphic identity function is:

∀(a : *) → ∀(x : a) → a

You can read the type as saying:

  • this is a function of two arguments, one argument per "forall" (∀) symbol
  • the first argument is named a and a is a type
  • the second argument is named x and the type of x is a
  • the result of the function must be a value of type a

This type uniquely determines the function's implementation. To be totally pedantic, there is exactly one implementation up to extensional equality of functions. Since this function has to work for any possible type a there is only one way to implement the function. We must return x as the result, since x is the only value available of type a.

Passing around types as values and function arguments might seem a bit strange to most programmers since most languages fall into one of the following categories:

  • do not use types at all

    Example: Javascript

    // The polymorphic identity function in Javascript
    function id(x) {
        return x
    }

    // Example use of the function
    id(true)
  • do use types, but they hide type abstraction and type application from the programmer through the use of "type inference"

    Example: Haskell

    -- The polymorphic identity function in Haskell
    id x = x

    -- Example use of the function
    id True
  • use a different syntax for type abstraction/application than for ordinary abstraction and application

    Example: Scala

    // The polymorphic identity function in Scala
    def id[A](x: A): A = x

    // Example use of the function
    // Note: Scala lets you omit the `[Boolean]` here thanks
    // to type inference but I'm making the type
    // application explicit just to illustrate that
    // the syntax is different from normal function
    // application
    id[Boolean](true)

For the purpose of this post we will program with explicit type abstraction and type application so that there is no magic or hidden machinery.

So, for example, suppose that we wanted to apply the typed, polymorphic identity function to itself. The untyped version was this:

(λx → x) (λy → y)

... and the typed version is this:

(λ(a : *) → λ(x : a) → x)
(∀(b : *) → ∀(y : b) → b)
(λ(b : *) → λ(y : b) → y)

-- β-reduction
= (λ(x : ∀(b : *) → ∀(y : b) → b) → x)
(λ(b : *) → λ(y : b) → y)

-- β-reduction
= (λ(b : *) → λ(y : b) → y)

So we can still apply the identity function to itself, but it's much more verbose. Languages with type inference automate this sort of tedious work for you while still giving you the safety guarantees of types. For example, in Haskell you would just write:

(\x -> x) (\y -> y)

... and the compiler would figure out all the type abstractions and type applications for you.

Exercise: Haskell provides a const function defined like this:

const :: a -> b -> a
const x y = x

Translate the const function into a typed, polymorphic lambda expression in System F (i.e. using explicit type abstractions)

Boolean values

Lambda expressions are the "code", so now we need to create "data" from "code".

One of the simplest pieces of data is a boolean value, which we can encode using typed lambda expressions. For example, here is how you implement the value True:

λ(Bool : *) → λ(True : Bool) → λ(False : Bool) → True

Note that the names have no significance at all. I could have equally well written the expression as:

λ(a : *) → λ(x : a) → λ(y : a) → x

... which is "α-equivalent" to the previous version (i.e. equivalent up to renaming of variables).

We will save the above expression to a file named ./True in our current directory. We'll see why shortly.

We can either save the expression using Unicode characters:

$ cat > ./True
λ(Bool : *) → λ(True : Bool) → λ(False : Bool) → True
<Ctrl-D>

... or using ASCII, replacing each lambda (i.e. λ) with a backslash (i.e. \) and replacing each arrow (i.e. →) with an ASCII arrow (i.e. ->)

$ cat > ./True
\(Bool : *) -> \(True : Bool) -> \(False : Bool) -> True
<Ctrl-D>

... whichever you prefer. For the rest of this tutorial I will use Unicode since it's easier to read.

Similarly, we can encode False by just changing our lambda expression to return the third argument named False instead of the second argument named True. We'll name this file ./False:

$ cat > ./False
λ(Bool : *) → λ(True : Bool) → λ(False : Bool) → False
<Ctrl-D>

What's the type of a boolean value? Well, both the ./True and ./False files have the same type, which we shall call ./Bool:

$ cat > ./Bool
∀(Bool : *) → ∀(True : Bool) → ∀(False : Bool) → Bool
<Ctrl-D>

... and if you are following along with ASCII you can replace each forall symbol (i.e. ∀) with the word forall:

$ cat > ./Bool
forall (Bool : *) -> forall (True : Bool) -> forall (False : Bool) -> Bool
<Ctrl-D>
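For readers following along in Haskell, here is a sketch of the same encoding using the RankNTypes extension (ChurchBool, true, and false are illustrative names, not part of annah):

{-# LANGUAGE RankNTypes #-}

-- Mirrors: ∀(Bool : *) → ∀(True : Bool) → ∀(False : Bool) → Bool
type ChurchBool = forall bool. bool -> bool -> bool

true :: ChurchBool
true = \t f -> t

false :: ChurchBool
false = \t f -> f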

We are saving these terms and types to files because we can use the annah compiler to work with any lambda expression or type saved as a file. For example, I can use the annah compiler to verify that the file ./True has type ./Bool:

$ annah
-- Read this as: "./True has type ./Bool"
./True : ./Bool
<Ctrl-D>
./True
$ echo $?
0

If the expression type-checks then annah will just compile the expression to lambda calculus (by removing the unnecessary type annotation in this case) and return a zero exit code. However, if the expression does not type-check:

$ annah
./True : ./True
annah:
Expression: ∀(x : λ(Bool : *) → λ(True : Bool) → λ(False : Bool)
→ True) → λ(Bool : *) → λ(True : Bool) → λ(False : Bool) → True

Error: Invalid input type

Type: λ(Bool : *) → λ(True : Bool) → λ(False : Bool) → True

$ echo $?
1

... then annah will throw an exception and return a non-zero exit code. In this case annah complains that the ./True on the right-hand side of the type annotation is not a valid type.

The last thing we need is a function that can consume values of type ./Bool, like an ./if function:

$ cat > ./if
λ(x : ./Bool ) → x
-- ^
-- |
-- +-- Note the space. Filenames must end with a space

The definition of ./if is blindingly simple: ./if is just the identity function on ./Bools!

To see why this works, let's see what the type of ./if is. We can ask for the type of any expression by feeding the expression to the morte compiler via standard input:

$ morte < ./if
∀(x : ∀(Bool : *) → ∀(True : Bool) → ∀(False : Bool) → Bool) →
∀(Bool : *) → ∀(True : Bool) → ∀(False : Bool) → Bool

λ(x : ∀(Bool : *) → ∀(True : Bool) → ∀(False : Bool) → Bool) → x

morte is a lambda calculus compiler installed alongside annah and annah is a higher-level interface to the morte language. By default, the morte compiler will:

  • resolve all file references (transitively, if necessary)
  • type-check the expression
  • optimize the expression
  • write the expression's type to standard error as the first line of output
  • write the optimized expression to standard output as the last line of output

In this case we only cared about the type, so we could have equally well just asked the morte compiler to resolve and infer the type of the expression:

$ morte resolve < ./if | morte typecheck
∀(x : ∀(Bool : *) → ∀(True : Bool) → ∀(False : Bool) → Bool) →
∀(Bool : *) → ∀(True : Bool) → ∀(False : Bool) → Bool

The above type is the same thing as:

∀(x : ./Bool ) → ./Bool

If you don't believe me you can prove this to yourself by asking morte to resolve the type:

$ echo "∀(x : ./Bool ) → ./Bool" | morte resolve
∀(x : ∀(Bool : *) → ∀(True : Bool) → ∀(False : Bool) → Bool) →
∀(Bool : *) → ∀(True : Bool) → ∀(False : Bool) → Bool

However, the type will make the most sense if you only expand out the second ./Bool in the type but leave the first ./Bool alone:

./Bool → ∀(Bool : *) → ∀(True : Bool) → ∀(False : Bool) → Bool

You can read this type as saying that the ./if function takes four arguments:

  • the first argument is the ./Bool that we want to branch on (i.e. ./True or ./False)
  • the second argument is the result type of our ./if expression
  • the third argument is the result we return if the ./Bool evaluates to ./True (i.e. the "then" branch)
  • the fourth argument is the result we return if the ./Bool evaluates to ./False (i.e. the "else" branch)

For example, this Haskell code:

if True
then False
else True

... would translate to this Annah code:

$ annah
./if ./True
./Bool -- The type of the result
./False -- The `then` branch
./True -- The `else` branch
<Ctrl-D>
./if ./True ./Bool ./False ./True

annah does not evaluate the expression. annah only translates the expression into Morte code (and the expression is already valid Morte code) and type-checks the expression. If you want to evaluate the expression you need to run the expression through the morte compiler, too:

$ morte
./if ./True
./Bool -- The type of the result
./False -- The `then` branch
./True -- The `else` branch
<Ctrl-D>
∀(Bool : *) → ∀(True : Bool) → ∀(False : Bool) → Bool

λ(Bool : *) → λ(True : Bool) → λ(False : Bool) → False

morte deduces that the expression has type ./Bool and the expression evaluates to ./False.

morte evaluates the expression by resolving all references and repeatedly applying β-reduction. This is what happens under the hood:

./if
./True
./Bool
./False
./True

-- Resolve the `./if` reference
= (λ(x : ./Bool ) → x)
./True
./Bool
./False
./True

-- β-reduce
= ./True
./Bool
./False
./True

-- Resolve the `./True` reference
= (λ(Bool : *) → λ(True : Bool) → λ(False : Bool) → True)
./Bool
./False
./True

-- β-reduce
= (λ(True : ./Bool ) → λ(False : ./Bool ) → True)
./False
./True

-- β-reduce
= (λ(False : ./Bool ) → ./False )
./True

-- β-reduce
= ./False

-- Resolve the `./False` reference
= λ(Bool : *) → λ(True : Bool) → λ(False : Bool) → False

The above sequence of steps is a white lie: the true order of steps is actually different, but equivalent.

The ./if function was not even necessary because every value of type ./Bool is already a "pre-formed if expression". That's why ./if is just the identity function on ./Bools. You can delete the ./if from the above example and the code will still work.

Now let's define the not function and save the function to a file:

$ annah > ./not
λ(b : ./Bool ) →
./if b
./Bool
./False -- If `b` is `./True` then return `./False`
./True -- If `b` is `./False` then return `./True`
<Ctrl-D>

We can now use this file like an ordinary function:

$ morte
./not ./False
<Ctrl-D>
∀(Bool : *) → ∀(True : Bool) → ∀(False : Bool) → Bool

λ(Bool : *) → λ(True : Bool) → λ(False : Bool) → True
$ morte
./not ./True
<Ctrl-D>
∀(Bool : *) → ∀(True : Bool) → ∀(False : Bool) → Bool

λ(Bool : *) → λ(True : Bool) → λ(False : Bool) → False

Notice how ./not ./False returns ./True and ./not ./True returns ./False.

Similarly, we can define an and function and an or function:

$ annah > ./and
λ(x : ./Bool ) → λ(y : ./Bool ) →
./if x
./Bool
y -- If `x` is `./True` then return `y`
./False -- If `x` is `./False` then return `./False`
<Ctrl-D>

$ annah > ./or
λ(x : ./Bool ) → λ(y : ./Bool ) →
./if x
./Bool
./True -- If `x` is `./True` then return `./True`
y -- If `x` is `./False` then return `y`
<Ctrl-D>

... and use them:

$ morte
./and ./True ./False
<Ctrl-D>
∀(Bool : *) → ∀(True : Bool) → ∀(False : Bool) → Bool

λ(Bool : *) → λ(True : Bool) → λ(False : Bool) → False

$ morte
./or ./True ./False
<Ctrl-D>
∀(Bool : *) → ∀(True : Bool) → ∀(False : Bool) → Bool

λ(Bool : *) → λ(True : Bool) → λ(False : Bool) → True

We started with nothing but lambda expressions, but still managed to implement:

  • a ./Bool type
  • a ./True value of type ./Bool
  • a ./False value of type ./Bool
  • ./if, ./not, ./and, and ./or functions

... and we can do real computation with them! In other words, we've modeled boolean data types entirely as code.
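For comparison, here is a Haskell sketch of the same functions, reusing the ChurchBool encoding from the earlier aside (again, all names are illustrative):

{-# LANGUAGE RankNTypes #-}

type ChurchBool = forall bool. bool -> bool -> bool

-- `if` is just application, so each function simply substitutes branches:
not' :: ChurchBool -> ChurchBool
not' b = \t f -> b f t

and' :: ChurchBool -> ChurchBool -> ChurchBool
and' x y = \t f -> x (y t f) f

or' :: ChurchBool -> ChurchBool -> ChurchBool
or' x y = \t f -> x t (y t f)

-- Convert to a built-in Bool only to observe results:
toBool :: ChurchBool -> Bool
toBool b = b True False

For example, toBool (and' true false) evaluates to False and toBool (or' true false) evaluates to True, matching the morte sessions above.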

Exercise: Implement an xor function

Natural numbers

You might wonder what other data types you can implement in terms of lambda calculus. Fortunately, you don't have to wonder because the annah compiler will actually compile data type definitions to lambda expressions for you.

For example, suppose we want to define a natural number type encoded using Peano numerals. We can write:

$ annah types
type Nat
data Succ (pred : Nat)
data Zero
fold foldNat
<Ctrl-D>

You can read the above datatype specification as saying:

  • Define a type named Nat ...
  • ... with a constructor named Succ with one field named pred of type Nat ...
  • ... with another constructor named Zero with no fields
  • ... and a fold named foldNat

annah then translates the datatype specification into the following files and directories:

+-- ./Nat.annah -- `annah` implementation of `Nat`
|
`-- ./Nat
|
+-- @ -- `morte` implementation of `Nat`
| --
| -- If you import the `./Nat` directory this file is
| -- imported instead
|
+-- Zero.annah -- `annah` implementation of `Zero`
|
+-- Zero -- `morte` implementation of `Zero`
|
+-- Succ.annah -- `annah` implementation of `Succ`
|
+-- Succ -- `morte` implementation of `Succ`
|
+-- foldNat.annah -- `annah` implementation of `foldNat`
|
`-- foldNat -- `morte` implementation of `foldNat`

Let's see how the Nat type is implemented:

∀(Nat : *) → ∀(Succ : ∀(pred : Nat) → Nat) → ∀(Zero : Nat) → Nat

All Boehm-Berarducci-encoded datatypes are encoded as substitution functions, including ./Nat. Any value of ./Nat is a function that takes three arguments that we will substitute into our natural number expression:

  • The first argument replaces every occurrence of the Nat type
  • The second argument replaces every occurrence of the Succ constructor
  • The third argument replaces every occurrence of the Zero constructor

This will make more sense if we walk through a specific example. First, we will build the number 3 using the ./Nat/Succ and ./Nat/Zero constructors:

$ morte
./Nat/Succ (./Nat/Succ (./Nat/Succ ./Nat/Zero ))
<Ctrl-D>
∀(Nat : *) → ∀(Succ : ∀(pred : Nat) → Nat) → ∀(Zero : Nat) → Nat

λ(Nat : *) → λ(Succ : ∀(pred : Nat) → Nat) → λ(Zero : Nat) →
Succ (Succ (Succ Zero))

Now suppose that we want to compute whether or not our natural number is even. The only catch is that we must limit ourselves to substitution when computing even. We have to figure out something that we can substitute in place of the Succ constructors and something that we can substitute in place of the Zero constructors that will then evaluate to ./True if the natural number is even and ./False otherwise.

One substitution that works is the following:

  • Replace every Zero with ./True (because Zero is even)
  • Replace every Succ with ./not (because Succ alternates between even and odd)

So in other words, if we began with this:

./Nat/Succ (./Nat/Succ (./Nat/Succ ./Nat/Zero ))

... and we substitute each ./Nat/Succ with ./not and substitute ./Nat/Zero with ./True:

./not (./not (./not ./True ))

... then the expression will reduce to ./False.

Let's prove this by saving the above number to a file named ./three:

$ morte > ./three
./Nat/Succ (./Nat/Succ (./Nat/Succ ./Nat/Zero ))
<Ctrl-D>
$ cat three
λ(Nat : *) → λ(Succ : ∀(pred : Nat) → Nat) → λ(Zero : Nat) →
Succ (Succ (Succ Zero))

The first thing we need to do is to replace the Nat type variable with ./Bool:

./three ./Bool


-- Resolve `./three`
= (λ(Nat : *) → λ(Succ : ∀(pred : Nat) → Nat) → λ(Zero : Nat) →
Succ (Succ (Succ Zero))
) ./Bool

-- β-reduce
= λ(Succ : ∀(pred : ./Bool ) → ./Bool ) → λ(Zero : ./Bool ) →
Succ (Succ (Succ Zero))

Now the next two arguments have exactly the right type for us to substitute in ./not and ./True. The argument named Succ is now a function of type ∀(pred : ./Bool ) → ./Bool, which is the same type as ./not. The argument named Zero is now a value of type ./Bool, which is the same type as ./True. This means that we can proceed with the next two arguments:

./three ./Bool ./not ./True

-- Resolve `./three`
= (λ(Nat : *) → λ(Succ : ∀(pred : Nat) → Nat) → λ(Zero : Nat) →
Succ (Succ (Succ Zero))
) ./Bool ./not ./True

-- β-reduce
= (λ(Succ : ∀(pred : ./Bool ) → ./Bool ) → λ(Zero : ./Bool ) →
Succ (Succ (Succ Zero))
) ./not ./True

-- β-reduce
= (λ(Zero : ./Bool ) → ./not (./not (./not Zero))) ./True

-- β-reduce
= ./not (./not (./not ./True ))

The result is exactly what we would have gotten if we took our original expression:

./Nat/Succ (./Nat/Succ (./Nat/Succ ./Nat/Zero ))

... and replaced ./Nat/Succ with ./not and replaced ./Nat/Zero with ./True.

Let's verify that this works by running the code through the morte compiler:

$ morte
./three ./Bool ./not ./True
<Ctrl-D>
∀(Bool : *) → ∀(True : Bool) → ∀(False : Bool) → Bool

λ(Bool : *) → λ(True : Bool) → λ(False : Bool) → False

morte computes that the number ./three is not even, returning ./False.
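The same computation can be sketched in Haskell (ChurchNat, three, and isEven are illustrative names; Haskell's built-in Bool and not stand in for ./Bool and ./not):

{-# LANGUAGE RankNTypes #-}

-- Mirrors: ∀(Nat : *) → ∀(Succ : ∀(pred : Nat) → Nat) → ∀(Zero : Nat) → Nat
type ChurchNat = forall nat. (nat -> nat) -> nat -> nat

three :: ChurchNat
three succ zero = succ (succ (succ zero))

-- `even` by substitution: replace each Succ with `not` and Zero with True
isEven :: ChurchNat -> Bool
isEven n = n not True   -- isEven three == False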

We can even go a step further and save an ./even function to a file:

$ annah > ./even
\(n : ./Nat ) →
n ./Bool
./not -- Replace each `./Nat/Succ` with `./not`
./True -- Replace each `./Nat/Zero` with `./True`
<Ctrl-D>

... and use our newly-formed ./even function:

$ morte
./even ./three
<Ctrl-D>
∀(Bool : *) → ∀(True : Bool) → ∀(False : Bool) → Bool

λ(Bool : *) → λ(True : Bool) → λ(False : Bool) → False

$ morte
./even ./Nat/Zero
<Ctrl-D>
∀(Bool : *) → ∀(True : Bool) → ∀(False : Bool) → Bool

λ(Bool : *) → λ(True : Bool) → λ(False : Bool) → True

The annah compiler actually provides direct support for natural number literals, so you can also just write:

$ annah | morte
./even 100
<Ctrl-D>
∀(Bool : *) → ∀(True : Bool) → ∀(False : Bool) → Bool

λ(Bool : *) → λ(True : Bool) → λ(False : Bool) → True

What about addition? How do we add two numbers using only substitution?

Well, one way to add two numbers, m and n, is to substitute each ./Nat/Succ in m with ./Nat/Succ (i.e. keep them the same) and substitute m's ./Nat/Zero with n. In other words:

$ annah > ./plus
λ(m : ./Nat ) → λ(n : ./Nat ) →
m ./Nat -- The result will still be a `./Nat`
./Nat/Succ -- Replace each `./Nat/Succ` with `./Nat/Succ`
n -- Replace each `./Nat/Zero` with `n`
<Ctrl-D>

Let's verify that this works:

$ annah | morte
./plus 2 2
<Ctrl-D>
∀(Nat : *) → ∀(Succ : ∀(pred : Nat) → Nat) → ∀(Zero : Nat) → Nat

λ(Nat : *) → λ(Succ : ∀(pred : Nat) → Nat) → λ(Zero : Nat) →
Succ (Succ (Succ (Succ Zero)))

We get back a Church-encoded 4!

What happened under the hood was the following substitutions:

./plus 2 2

-- Resolve `./plus`
= (λ(m : ./Nat ) → λ(n : ./Nat ) → m ./Nat ./Nat/Succ n) 2 2

-- β-reduce
= (λ(n : ./Nat ) → 2 ./Nat ./Nat/Succ n) 2

-- β-reduce
= 2 ./Nat ./Nat/Succ 2

-- Definition of 2
= (./Nat/Succ (./Nat/Succ ./Nat/Zero )) ./Nat ./Nat/Succ 2

-- Resolve and β-reduce the definition of 2 (multiple steps)
= (λ(Nat : *) → λ(Succ : ∀(pred : Nat) → Nat) → λ(Zero : Nat) →
Succ (Succ Zero)
) ./Nat ./Nat/Succ 2

-- β-reduce
= (λ(Succ : ∀(pred : ./Nat ) → ./Nat ) → λ(Zero : ./Nat ) →
Succ (Succ Zero)
) ./Nat/Succ 2

-- β-reduce
= (λ(Zero : ./Nat ) → ./Nat/Succ (./Nat/Succ Zero)) 2

-- β-reduce
= ./Nat/Succ (./Nat/Succ 2)

-- Definition of 2
= ./Nat/Succ (./Nat/Succ (./Nat/Succ (./Nat/Succ ./Nat/Zero )))

-- Resolve and β-reduce (multiple steps)
= λ(Nat : *) → λ(Succ : ∀(pred : Nat) → Nat) → λ(Zero : Nat) →
Succ (Succ (Succ (Succ Zero)))

So we can encode natural numbers in lambda calculus, albeit very inefficiently! There are some tricks that we can use to greatly speed up both the time complexity and constant factors, but it will never be competitive with machine arithmetic. This is more of a proof of concept that you can model arithmetic purely in code.
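Here is the same addition-by-substitution sketched in Haskell, reusing the ChurchNat type from the earlier sketch (plus and toInt are illustrative names):

{-# LANGUAGE RankNTypes #-}

type ChurchNat = forall nat. (nat -> nat) -> nat -> nat

-- Keep `m`'s successors and replace `m`'s zero with `n`:
plus :: ChurchNat -> ChurchNat -> ChurchNat
plus m n = \succ zero -> m succ (n succ zero)

-- Convert to an Int only to observe the result:
toInt :: ChurchNat -> Int
toInt n = n (+ 1) 0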

Exercise: Implement a function which multiplies two natural numbers

Data types

annah also lets you define "temporary" data types that scope over a given expression. In fact, that's how Nat was implemented. You can look at the corresponding *.annah files to see how each type and term is defined in annah before conversion to morte code.

For example, here is how the Nat type is defined in annah:

$ cat Nat.annah
type Nat
data Succ (pred : Nat)
data Zero
fold foldNat
in Nat

The first four lines are identical to what we wrote when we invoked the annah types command from the command line. We can use the exact same data type specification to create a scoped expression that can reference the type and data constructors we specified.

When we run this expression through annah we get back the Nat type:

$ annah < Nat.annah
∀(Nat : *) → ∀(Succ : ∀(pred : Nat) → Nat) → ∀(Zero : Nat) → Nat

You can use these scoped datatype declarations to quickly check how various datatypes are encoded without polluting your current working directory. For example, I can ask annah how the type Maybe is encoded in lambda calculus:

$ annah
λ(a : *) →
type Maybe
data Just (x : a)
data Nothing
-- You can also leave out this `fold` if you don't use it
fold foldMaybe
in Maybe
<Ctrl-D>
λ(a : *) → ∀(Maybe : *) → ∀(Just : ∀(x : a) → Maybe) →
∀(Nothing : Maybe) → Maybe

A Maybe value is just another substitution function. You provide one branch that you substitute for Just and another branch that you substitute for Nothing. For example, the Just constructor always substitutes in the first branch and ignores the Nothing branch that you supply:

$ annah
λ(a : *) →
type Maybe
data Just (x : a)
data Nothing
in Just
<Ctrl-D>
λ(a : *) → λ(x : a) → λ(Maybe : *) → λ(Just : ∀(x : a) → Maybe)
→ λ(Nothing : Maybe) → Just x

Vice versa, the Nothing constructor substitutes in the Nothing branch that you supply and ignores the Just branch:

$ annah
λ(a : *) →
type Maybe
data Just (x : a)
data Nothing
in Nothing
<Ctrl-D>
λ(a : *) → λ(Maybe : *) → λ(Just : ∀(x : a) → Maybe) → λ(Nothing : Maybe) → Nothing

Notice how we've implemented Maybe, Just, and Nothing purely using functions. This implies that any language with functions can encode Maybe, like Python!

Let's translate the above definition of Just and Nothing to the equivalent Python code. The only difference is that we delete the type abstractions because they are not necessary in Python:

def just(x):
    def f(just, nothing):
        return just(x)
    return f

def nothing():
    def f(just, nothing):
        return nothing
    return f

We can similarly translate Haskell-style pattern matching like this:

example :: Maybe Int -> IO ()
example m = case m of
    Just n  -> print n
    Nothing -> return ()

... into this Python code:

def example(m):
    def just(n): # This is what we substitute in place of `Just`
        print(n)
    def nothing(): # This is what we substitute in place of `Nothing`
        return
    m(just, nothing)

... and verify that our pattern matching function works:

>>> example(nothing())
>>> example(just(1))
1

Neat! This means that any algebraic data type can be embedded into any language with functions, which is basically every language!

Warning: your colleagues may get angry at you if you do this! Consider using a language with built-in support for algebraic data types instead of trying to twist your language into something it's not.
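For comparison, here is a sketch of the same Boehm-Berarducci encoding written directly in Haskell with the RankNTypes extension (ChurchMaybe and the other names are illustrative):

{-# LANGUAGE RankNTypes #-}

-- Mirrors: λ(a : *) → ∀(Maybe : *) → ∀(Just : ∀(x : a) → Maybe) →
--          ∀(Nothing : Maybe) → Maybe
type ChurchMaybe a = forall m. (a -> m) -> m -> m

just :: a -> ChurchMaybe a
just x = \j n -> j x

nothing :: ChurchMaybe a
nothing = \j n -> n

-- Pattern matching is just function application:
example :: ChurchMaybe Int -> IO ()
example m = m print (return ())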

Let expressions

You can translate let expressions to lambda calculus, too. For example, instead of saving something to a file we can just use a let expression to temporarily define something within a program.

For example, we could write:

$ annah | morte
let x : ./Nat = ./plus 1 2
in ./plus x x
<Ctrl-D>
∀(Nat : *) → ∀(Succ : ∀(pred : Nat) → Nat) → ∀(Zero : Nat) → Nat

λ(Nat : *) → λ(Succ : ∀(pred : Nat) → Nat) → λ(Zero : Nat) →
Succ (Succ (Succ (Succ (Succ (Succ Zero)))))

... but that doesn't really tell us anything about how annah desugars let because we only see the final evaluated result. We can ask annah to desugar without performing any other transformations using the annah desugar command:

$ annah desugar
let x : ./Nat = ./plus 1 2
in ./plus x x
<Ctrl-D>
(λ(x : ./Nat ) → ./plus x x) (./plus (λ(Nat : *) → λ(Succ :
∀(pred : Nat) → Nat) → λ(Zero : Nat) → Succ Zero) (λ(Nat : *) →
λ(Succ : ∀(pred : Nat) → Nat) → λ(Zero : Nat) → Succ (Succ
Zero)))

... which makes more sense if we clean up the result through the use of numeric literals:

(λ(x : ./Nat ) → ./plus x x) (./plus 1 2)

Every time we write an expression of the form:

let x : t = y
in e

... we decode that to lambda calculus as:

(λ(x : t) → e) y
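Ignoring Haskell's recursive lets and let-generalization, the same equivalence can be observed directly in Haskell (withLet and desugared are illustrative names):

withLet :: Int
withLet = let x = 1 + 2 in x + x

desugared :: Int
desugared = (\x -> x + x) (1 + 2)   -- both evaluate to 6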

We can also decode function definitions, too. For example, you can write:

$ annah | morte
let increment (x : ./Nat ) : ./Nat = ./plus x 1
in increment 3
<Ctrl-D>
∀(Nat : *) → ∀(Succ : ∀(pred : Nat) → Nat) → ∀(Zero : Nat) → Nat

λ(Nat : *) → λ(Succ : ∀(pred : Nat) → Nat) → λ(Zero : Nat) →
Succ (Succ (Succ (Succ Zero)))

... and the intermediate desugared form also encodes the function definition as a lambda expression:

$ annah desugar
let increment (x : ./Nat ) : ./Nat = ./plus x 1
in increment 3
<Ctrl-D>
(λ(increment : ∀(x : ./Nat ) → ./Nat ) → increment (λ(Nat : *)
→ λ(Succ : ∀(pred : Nat) → Nat) → λ(Zero : Nat) → Succ (Succ
(Succ Zero)))) (λ(x : ./Nat ) → ./plus x (λ(Nat : *) → λ(Succ
: ∀(pred : Nat) → Nat) → λ(Zero : Nat) → Succ Zero))

... which you can clean up as this expression:

(λ(increment : ∀(x : ./Nat ) → ./Nat ) → increment 3)
(λ(x : ./Nat ) → ./plus x 1)

We can combine let expressions with data type expressions, too. For example, here's our original not program, except without saving anything to files:

$ annah
type Bool
data True
data False
fold if
in

let not (b : Bool) : Bool = if b Bool False True
in

not False
<Ctrl-D>
λ(Bool : *) → λ(True : Bool) → λ(False : Bool) → True

Lists

annah also provides syntactic support for lists. For example:

$ annah
[nil ./Bool , ./True , ./False , ./True ]
<Ctrl-D>
λ(List : *) → λ(Cons : ∀(head : ./Bool ) → ∀(tail : List) →
List) → λ(Nil : List) → Cons ./True (Cons ./False (Cons
./True Nil))

Just like all the other data types, a list is defined in terms of what you substitute for each Cons and Nil constructor. I can replace each Cons with ./and and the Nil with ./True like this:

$ annah | morte
[nil ./Bool , ./True , ./False , ./True ] ./Bool ./and ./True
<Ctrl-D>

∀(Bool : *) → ∀(True : Bool) → ∀(False : Bool) → Bool

λ(Bool : *) → λ(True : Bool) → λ(False : Bool) → False

Conceptually, this follows the reduction sequence below:

( λ(List : *)
→ λ(Cons : ∀(head : ./Bool ) → ∀(tail : List) → List)
→ λ(Nil : List)
→ Cons ./True (Cons ./False (Cons ./True Nil))
) ./Bool
./and
./True

-- β-reduction
= ( λ(Cons : ∀(head : ./Bool ) → ∀(tail : ./Bool ) → ./Bool )
→ λ(Nil : ./Bool )
→ Cons ./True (Cons ./False (Cons ./True Nil))
) ./and
./True

-- β-reduction
= ( λ(Nil : ./Bool )
→ ./and ./True (./and ./False (./and ./True Nil))
) ./True

-- β-reduction
= ./and ./True (./and ./False (./and ./True ./True))
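Here is a Haskell sketch of the same idea (ChurchList and the other names are illustrative; a Boehm-Berarducci list is essentially its own right fold):

{-# LANGUAGE RankNTypes #-}

-- Mirrors: ∀(List : *) → ∀(Cons : a → List → List) → ∀(Nil : List) → List
type ChurchList a = forall list. (a -> list -> list) -> list -> list

exampleList :: ChurchList Bool
exampleList cons nil = cons True (cons False (cons True nil))

-- Replacing Cons with (&&) and Nil with True computes the conjunction:
conjunction :: ChurchList Bool -> Bool
conjunction xs = xs (&&) True   -- conjunction exampleList == False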

Similarly, we can sum a list by replacing each Cons with ./plus and replacing each Nil with 0:

$ annah | morte
[nil ./Nat , 1, 2, 3, 4] ./Nat ./plus 0
<Ctrl-D>
∀(Nat : *) → ∀(Succ : ∀(pred : Nat) → Nat) → ∀(Zero : Nat) → Nat

λ(Nat : *) → λ(Succ : ∀(pred : Nat) → Nat) → λ(Zero : Nat) →
Succ (Succ (Succ (Succ (Succ (Succ (Succ (Succ (Succ (Succ
Zero)))))))))

This behaves as if we had written:

./plus 1 (./plus 2 (./plus 3 (./plus 4 0)))

Prelude

annah also comes with a Prelude to show some more sophisticated examples of what you can encode in pure lambda calculus. You can find version 1.0 of the Prelude here:

http://sigil.place/prelude/annah/1.0/

You can use these expressions directly within your code just by referencing their URL. For example, the remote Bool expression is located here:

http://sigil.place/prelude/annah/1.0/Bool/@

... and the remote True expression is located here:

http://sigil.place/prelude/annah/1.0/Bool/True

... so we can check if the remote True's type matches the remote Bool by writing this:

$ annah
http://sigil.place/prelude/annah/1.0/Bool/True : http://sigil.place/prelude/annah/1.0/Bool
<Ctrl-D>
http://sigil.place/prelude/annah/1.0/Bool/True
$ echo $?
0

Similarly, we can build a natural number (very verbosely) using remote Succ and Zero:

$ annah | morte
http://sigil.place/prelude/annah/1.0/Nat/Succ
( http://sigil.place/prelude/annah/1.0/Nat/Succ
( http://sigil.place/prelude/annah/1.0/Nat/Succ
http://sigil.place/prelude/annah/1.0/Nat/Zero
)
)
<Ctrl-D>
∀(Nat : *) → ∀(Succ : ∀(pred : Nat) → Nat) → ∀(Zero : Nat) → Nat

λ(Nat : *) → λ(Succ : ∀(pred : Nat) → Nat) → λ(Zero : Nat) →
Succ (Succ (Succ Zero))

However, we can also locally clone the Prelude using wget if we wish to refer to local file paths instead of remote paths:

$ wget -np -r --cut-dirs=3 http://sigil.place/prelude/annah/1.0/
$ cd sigil.place
$ ls
(->) Defer.annah List.annah Path Sum0.annah
(->).annah Eq Maybe Path.annah Sum1
Bool Eq.annah Maybe.annah Prod0 Sum1.annah
Bool.annah Functor Monad Prod0.annah Sum2
Category Functor.annah Monad.annah Prod1 Sum2.annah
Category.annah index.html Monoid Prod1.annah
Cmd IO Monoid.annah Prod2
Cmd.annah IO.annah Nat Prod2.annah
Defer List Nat.annah Sum0

Now we can use these expressions using their more succinct local paths:

$ annah | morte
./Nat/sum (./List/(++) ./Nat [nil ./Nat , 1, 2] [nil ./Nat , 3, 4])
<Ctrl-D>
∀(Nat : *) → ∀(Succ : ∀(pred : Nat) → Nat) → ∀(Zero : Nat) → Nat

λ(Nat : *) → λ(Succ : ∀(pred : Nat) → Nat) → λ(Zero : Nat) →
Succ (Succ (Succ (Succ (Succ (Succ (Succ (Succ (Succ (Succ
Zero)))))))))

Also, every expression has a corresponding *.annah file that documents the expression's type using a let expression. For example, we can see the type of the ./List/(++) function by studying the ./List/(++).annah file:

$ cat './List/(++).annah'
let (++) (a : *) (as1 : ../List a) (as2 : ../List a) : ../List a =
\(List : *)
-> \(Cons : a -> List -> List)
-> \(Nil : List)
-> as1 List Cons (as2 List Cons Nil)
in (++)

The top line tells us that (++) is a function that takes three arguments:

  • An argument named a for the type of list elements you want to combine
  • An argument named as1 for the left list you want to combine
  • An argument named as2 for the right list you want to combine

... and the function returns a list of the same type as the two input lists.
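A direct Haskell transcription of this definition, reusing the ChurchList sketch from earlier (append is an illustrative name):

{-# LANGUAGE RankNTypes #-}

type ChurchList a = forall list. (a -> list -> list) -> list -> list

-- Fold as1 with Cons, using as2 (already folded) as the new Nil:
append :: ChurchList a -> ChurchList a -> ChurchList a
append as1 as2 = \cons nil -> as1 cons (as2 cons nil)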

Beyond

Exactly how far can you take lambda calculus? Well, here's a program written in annah that reads two natural numbers, adds them, and writes them out:

$ annah | morte
./IO/Monad ./Prod0 (do ./IO {
x : ./Nat <- ./IO/get ;
y : ./Nat <- ./IO/get ;
_ : ./Prod0 <- ./IO/put (./Nat/(+) x y);
})
∀(IO : *) → ∀(Get_ : ((∀(Nat : *) → ∀(Succ : ∀(pred : Nat) →
Nat) → ∀(Zero : Nat) → Nat) → IO) → IO) → ∀(Put_ : (∀(Nat : *)
→ ∀(Succ : ∀(pred : Nat) → Nat) → ∀(Zero : Nat) → Nat) → IO →
IO) → ∀(Pure_ : (∀(Prod0 : *) → ∀(Make : Prod0) → Prod0) → IO)
→ IO

λ(IO : *) → λ(Get_ : ((∀(Nat : *) → ∀(Succ : ∀(pred : Nat) →
Nat) → ∀(Zero : Nat) → Nat) → IO) → IO) → λ(Put_ : (∀(Nat : *)
→ ∀(Succ : ∀(pred : Nat) → Nat) → ∀(Zero : Nat) → Nat) → IO →
IO) → λ(Pure_ : (∀(Prod0 : *) → ∀(Make : Prod0) → Prod0) → IO)
→ Get_ (λ(r : ∀(Nat : *) → ∀(Succ : ∀(pred : Nat) → Nat) →
∀(Zero : Nat) → Nat) → Get_ (λ(r : ∀(Nat : *) → ∀(Succ :
∀(pred : Nat) → Nat) → ∀(Zero : Nat) → Nat) → Put_ (λ(Nat : *)
→ λ(Succ : ∀(pred : Nat) → Nat) → λ(Zero : Nat) → r@1 Nat Succ
(r Nat Succ Zero)) (Pure_ (λ(Prod0 : *) → λ(Make : Prod0) →
Make))))

This does not run the program, but it creates a syntax tree representing all program instructions and the flow of information.

annah supports do notation so you can do things like write list comprehensions in annah:

$ annah | morte
./List/sum (./List/Monad ./Nat (do ./List {
x : ./Nat <- [nil ./Nat , 1, 2, 3];
y : ./Nat <- [nil ./Nat , 4, 5, 6];
_ : ./Nat <- ./List/pure ./Nat (./Nat/(+) x y);
}))
<Ctrl-D>
∀(Nat : *) → ∀(Succ : ∀(pred : Nat) → Nat) → ∀(Zero : Nat) → Nat

λ(Nat : *) → λ(Succ : ∀(pred : Nat) → Nat) → λ(Zero : Nat) →
Succ (Succ (Succ (Succ (Succ (Succ (Succ (Succ (Succ (Succ
(Succ (Succ (Succ (Succ (Succ (Succ (Succ (Succ (Succ (Succ
(Succ (Succ (Succ (Succ (Succ (Succ (Succ (Succ (Succ (Succ
(Succ (Succ (Succ (Succ (Succ (Succ (Succ (Succ (Succ (Succ
(Succ (Succ (Succ (Succ (Succ (Succ (Succ (Succ (Succ (Succ
(Succ (Succ (Succ (Succ (Succ (Succ (Succ (Succ (Succ (Succ
(Succ (Succ (Succ Zero)))))))))))))))))))))))))))))))))))))))))
)))))))))))))))))))))

The above Annah program is equivalent to the following Haskell program:

sum (do
    x <- [1, 2, 3]
    y <- [4, 5, 6]
    return (x + y) )

... yet it is implemented 100% in a minimal and total lambda calculus without any built-in support for data types.

This tutorial doesn't cover how do notation works, but you can learn this and more by reading the Annah tutorial which is bundled with the Hackage package:

Conclusions

A lot of people underestimate how much you can do in a total lambda calculus. I don't recommend pure lambda calculus as a general programming language, but I could see a lambda calculus enriched with high-efficiency primitives being a realistic starting point for simple functional languages that are easy to port and distribute.

One of the projects I'm working towards in the long run is a "JSON for code" and annah is one step along the way towards that goal. annah will likely not be that language, but I still factored out annah as a separate and reusable project along the way so that others could fork and experiment with annah when experimenting with their own language design.

Also, as far as I can tell annah is the only project in the wild that actually implements the Boehm-Berarducci encoding outlined in this paper:

... so if you prefer to learn the encoding algorithm by studying actual code you can use the annah source code as a real-world reference implementation.

You can find the Annah project on Hackage or GitHub:

... and you can find the Annah prelude hosted online:

The Annah tutorial goes into the Annah language and compiler in much more detail than this post, so if you would like to learn more I highly recommend reading it; the tutorial walks through the compiler, desugaring, and the Prelude:

Also, for those who are curious, both the annah and morte compilers are named after characters from the game Planescape: Torment.

Categories: Offsite Blogs

Philip Wadler: John McCarthy presents Recursive Functions of Symbolic Expressions

Planet Haskell - Sun, 04/24/2016 - 9:58am
“John McCarthy presents Recursive Functions of Symbolic Expressions and Their Computation by Machine, Part I”, Painting, 1662, Ferdinand Bol. From Classic Programmer Paintings.
Categories: Offsite Blogs

Edward Z. Yang: Hindley-Milner with top-level existentials

Planet Haskell - Sun, 04/24/2016 - 2:05am

Content advisory: This is a half-baked research post.

Abstract. Top-level unpacking of existentials is easy to integrate into Hindley-Milner type inference. Haskell should support it. It's possible this idea can work for internal bindings of existentials as well (ala F-ing modules) but I have not worked out how to do it.

Update. And UHC did it first!

Update 2. And rank-2 type inference is decidable (and rank-1 existentials are an even weaker system), although the algorithm for rank-2 inference requires semiunification.

Background

The difference between Hindley-Milner and System F. Although in informal discussion, Hindley-Milner is commonly described as a “type inference algorithm”, it should properly be described as a type system which is more restrictive than System F. Both type systems allow polymorphism via universal quantification of variables, but in System F this polymorphism is explicit and can occur anywhere, whereas in Hindley-Milner the polymorphism is implicit, and can only occur at the “top level” (in a so-called “polytype” or “type scheme”). This restriction of polymorphism is the key which makes inference (via Algorithm W) for Hindley-Milner decidable (and practical), whereas inference for System F is undecidable.

-- Hindley-Milner
id :: a -> a
id = λx. x

-- System F
id :: ∀a. a -> a
id = Λa. λ(x : a). x

Existential types in System F. A common generalization of System F is to equip it with existential types:

Types τ ::= ... | ∃a. τ
Terms e ::= ... | pack <τ, e>_τ | unpack <a, x> = e in e

In System F, it is technically not necessary to add existentials as a primitive concept, as they can be encoded using universal quantifiers by saying ∃a. τ = ∀r. (∀a. τ → r) → r.

Existential types in Hindley-Milner? This strategy will not work for Hindley-Milner: the encoding requires a higher-rank type, which is precisely what Hindley-Milner rules out for the sake of inference.

In any case, it is a fool's game to try to infer existential types: there's no best type! HM always infers the most general type for an expression: e.g., we will infer f :: a -> a for the function f = \x -> x, and not Int -> Int. But the whole point of data abstraction is to pick a more abstract type, which is not going to be the most general type and, consequently, is not going to be unique. What should be abstract, what should be concrete? Only the user knows.

Existential types in Haskell. Suppose that we are willing to write down explicit types when existentials are packed, can Hindley-Milner do the rest of the work: that is to say, do we have complete and decidable inference for the rest of the types in our program?

Haskell is an existence (cough cough) proof that this can be made to work. In fact, there are two ways to go about doing it. The first is what you will see if you Google for “Haskell existential type”:

{-# LANGUAGE ExistentialQuantification #-}
{-# LANGUAGE RankNTypes #-}

data Ex f = forall a. Ex (f a)

pack :: f a -> Ex f
pack = Ex

unpack :: Ex f -> (forall a. f a -> r) -> r
unpack m k = case m of Ex x -> k x

Ex f is isomorphic to ∃a. f a, and similar to the System F syntax, they can be packed with the Ex constructor and unpacked by pattern-matching on them.

The second way is to directly use the System F encoding using Haskell's support for rank-n types:

{-# LANGUAGE RankNTypes #-}

type Ex f = forall r. (forall a. f a -> r) -> r

pack :: f a -> Ex f
pack x = \k -> k x

unpack :: Ex f -> (forall a. f a -> r) -> r
unpack m k = m k

The boxy types paper demonstrated that you can do inference, so long as all of your higher rank types are annotated. Although, perhaps it was not as simple as hoped, since impredicative types are a source of constant bugs in GHC's type checker.

The problem

Explicit unpacks suck. As anyone who has tried programming with existentials in Haskell can attest, the use of existentials can still be quite clumsy due to the necessity of unpacking an existential (casing on it) before it can be used. That is to say, the syntax let Ex x = ... in ... is not allowed, and it is an easy way to get GHC to tell you its brain exploded.

Leijen investigated the problem of handling existentials without explicit unpacks.

Loss of principal types without explicit unpacks, and Leijen's solution. Unfortunately, the naive type system does not have principal types. Leijen gives an example where there is no principal type:

wrap :: forall a. a -> [a]
key :: exists b. Key b

-- What is the type of 'wrap key'?
-- [exists b. Key b]?
-- exists b. [Key b]?

Neither type is a subtype of the other. In his paper, Leijen suggests that the existential should be unwrapped as late as possible (since you can go from the first type to the second, but not vice versa), and thus, the first type should be preferred.

The solution

A different approach. What if we always lift the existential to the top level? This is really easy to do if you limit unpacks to the top-level of a program, and it turns out this works really well. (The downside is that dynamic use of existentials is not supported.)

There's an existential in every top-level Haskell algebraic data type. First, I want to convince you that this is not all that strange of an idea. To do this, we look at Haskell's support for algebraic data types. Algebraic data types in Haskell are generative: each data type must be given a top-level declaration and is considered a distinct type from any other data type. Indeed, Haskell users use this generativity in conjunction with the ability to hide constructors to achieve data abstraction in Haskell. Although there is not actually an existential lurking about—generativity is not data abstraction—generativity is an essential part of data abstraction, and HM has no problem with this.

Top-level generativity corresponds to existentials that are unpacked at the top-level of a program (ala F-ing modules). We don't need existentials embedded inside our Haskell expressions to support the generativity of algebraic data types: all we need is the ability to pack an existential type at the top level, and then immediately unpack it into the top-level context. In fact, F-ing modules goes even further: existentials can always be lifted until they reach the top level of the program. Modular programming with applicative functors (the ML kind) can be encoded using top-level existentials which are immediately unpacked as they are defined.

The proposal. So let us suggest the following type system, Hindley-Milner with top-level existentials (where a* denotes zero to many type variables):

Term variables ∈ f, x, y, z
Type variables ∈ a, b, c

Programs                  prog ::= let f = e in prog
                                 | seal (b*, f :: σ) = (τ*, e) in prog
                                 | {- -}
Type schemes (polytypes)  σ    ::= ∀a*. τ
Expressions               e    ::= x | \x -> e | e e
Monotypes                 τ    ::= a | τ -> τ

There is one new top-level binding form, seal. We can give it the following typing rule:

Γ ⊢ e :: τ₀[b* → τ*]
a* = free-vars(τ₀[b* → τ*])
Γ, b*, (f :: ∀a*. τ₀) ⊢ prog
---------------------------------------------
Γ ⊢ seal (b*, f :: ∀a*. τ₀) = (τ*, e) in prog

It also elaborates directly to System F with existentials:

seal (b*, f :: σ) = (τ*, e) in prog
  ===>
unpack <b*, f> = pack <τ*, e>_{∃b*. σ} in prog

A few observations:

  1. In conventional presentations of HM, let-bindings are allowed to be nested inside expressions (and are generalized to polytypes before being added to the context). Can we do something similar with seal? This should be possible, but the bound existential type variables must be propagated up.
  2. This leads to a second problem: naively, the order of quantifiers must be ∃b. ∀a. τ and not ∀a. ∃b. τ, because otherwise we cannot add the existential to the top-level context. However, there is a "skolemization" trick (c.f. Shao and F-ing modules) by which you can just make b a higher-kinded type variable which takes a as an argument, e.g., ∀a. ∃b. b is equivalent to ∃b'. ∀a. b' a. This trick could serve as the way to support inner seal bindings, but the encoding tends to be quite involved (as you must close over the entire environment.)
  3. This rule is not very useful for directly modeling ML modules, as a “module” is usually thought of as a record of polymorphic functions. Maybe you could generalize this rule to bind multiple polymorphic functions?

Conclusion. And that's as far as I've worked it out. I am hoping someone can tell me (1) who came up with this idea already, and (2) why it doesn't work.

Categories: Offsite Blogs

Mark Jason Dominus: Steph Curry: fluke or breakthrough?

Planet Haskell - Sat, 04/23/2016 - 6:00pm

[ Disclaimer: I know very little about basketball. I think there's a good chance this article contains at least one basketball-related howler, but I'm too ignorant to know where it is. ]

Randy Olson recently tweeted a link to a New York Times article about Steph Curry's new 3-point record. Here is Olson’s snapshot of a portion of the Times’ clever and attractive interactive chart:

(Skip this paragraph if you know anything about basketball. The object of the sport is to throw a ball through a “basket” suspended ten feet (3 meters) above the court. Normally a player's team is awarded two points for doing this. But if the player is sufficiently far from the basket—the distance varies but is around 23 feet (7 meters)—three points are awarded instead. Carry on!)


Stephen Curry

The chart demonstrates that Curry this year has shattered the single-season record for three-point field goals. The previous record, set last year, is 286, also by Curry; the new record is 402. A comment by the authors of the chart says

The record is an outlier that defies most comparisons, but here is one: It is the equivalent of hitting 103 home runs in a Major League Baseball season.

(The current single-season home run record is 73, and 73 × 402⁄286 ≈ 103.)

I found this remark striking, because I don't think the record is an outlier that defies most comparisons. In fact, it doesn't even defy the comparison they make, to the baseball single-season home run record.



Babe Ruth

In 1919, the record for home runs in a single season was 29, hit by Babe Ruth. The 1920 record, also by Ruth, was 54. To make the same comparison as the authors of the Times article, that is the equivalent of hitting 136 home runs in a Major League Baseball season.

No, far from being an outlier that defies most comparisons, I think what we're seeing here is something that has happened over and over in sport, a fundamental shift in the way the game is played; in short, a breakthrough. In baseball, Ruth's 1920 season was the end of what is now known as the dead-ball era. The end of the dead-ball era was caused by the confluence of several trends (shrinking ballparks), rule changes (the spitball), and one-off events (Ray Chapman, the Black Sox). But an important cause was simply that Ruth realized that he could play the game in a better way by hitting a crapload of home runs.

The new record was the end of a sudden and sharp upward trend. Prior to Ruth's 29 home runs in 1919, the record had been 27, a weird fluke set way back in 1887 when the rules were drastically different. Typical single-season home run records in the intervening years were in the 11 to 16 range; the record exceeded 20 in only four of the intervening 25 years.

Ruth's innovation was promptly imitated. In 1920, the #2 hitter hit 19 home runs and the #10 hitter hit 11, typical numbers for the nineteen-teens. By 1929, the #10 hitter hit 31 home runs, which would have been record-setting in 1919. It was a different game.



Takeru Kobayashi

For another example of a breakthrough, let's consider competitive hot dog eating. Between 1980 and 1990, champion hot-dog eaters consumed between 9 and 16 hot dogs in 10 minutes. In 1991 the time was extended to 12 minutes and Frank Dellarosa set a new record, 21½ hot dogs, which was not too far out of line with previous records, and which was repeatedly approached in the following decade: through 1999 five different champions ate between 19 and 24½ hot dogs in 12 minutes, in every year except 1993.

But in 2000 Takeru Kobayashi (小林 尊) changed the sport forever, eating an unbelievably disgusting 50 hot dogs in 12 minutes. (50. Not a misprint. Fifty. Roman numeral Ⅼ.) To make the Times’ comparison again, that is the equivalent of hitting 149 home runs in a Major League Baseball season.

At that point it was a different game. Did the record represent a fundamental shift in hot dog gobbling technique? Yes. Kobayashi won all five of the next five contests, eating between 44½ and 53¾ each time. By 2005 the second- and third-place finishers were eating 35 or more hot dogs each; had they done this in 1995 they would have demolished the old records. A new generation of champions emerged, following Kobayashi's lead. The current record is 69 hot dogs in 10 minutes. The record-setters of the 1990s would not even be in contention in a modern hot dog eating contest.



Bob Beamon

It is instructive to compare these breakthroughs with a different sort of astonishing sports record, the bizarre fluke. In 1967, the world record distance for the long jump was 8.35 meters. In 1968, Bob Beamon shattered this record, jumping 8.90 meters. To put this in perspective, consider that in one jump, Beamon advanced the record by 55 cm, the same amount that it had advanced (in 13 stages) between 1925 and 1967.


Progression of the world long jump record
The cliff at 1968 is Bob Beamon

Did Beamon's new record represent a fundamental shift in long jump technique? No: Beamon never again jumped more than 8.22m. Did other jumpers promptly imitate it? No, Beamon's record was approached only a few times in the following quarter-century, and surpassed only once. Beamon had the benefit of high altitude, a tail wind, and fabulous luck.



Joe DiMaggio

Another bizarre fluke is Joe DiMaggio's hitting streak: in the 1941 baseball season, DiMaggio achieved hits in 56 consecutive games. For extensive discussion of just how bizarre this is, see The Streak of Streaks by Stephen J. Gould. (“DiMaggio’s streak is the most extraordinary thing that ever happened in American sports.”) Did DiMaggio’s hitting streak represent a fundamental shift in the way the game of baseball was played, toward high-average hitting? Did other players promptly imitate it? No. DiMaggio's streak has never been seriously challenged, and has been approached only a few times. (The modern runner-up is Pete Rose, who hit in 44 consecutive games in 1978.) DiMaggio also had the benefit of fabulous luck.


Is Curry’s new record a fluke or a breakthrough?

I think what we're seeing in basketball is a breakthrough, a shift in the way the game is played analogous to the arrival of baseball’s home run era in the 1920s. Unless the league tinkers with the rules to prevent it, we might expect the next generation of players to regularly lead the league with 300 or 400 three-point shots in a season. Here's why I think so.

  1. Curry's record wasn't unprecedented. He's been setting three-point records for years. (Compare Ruth’s 1920 home run record, foreshadowed in 1919.) He's continuing a trend that he began years ago.

  2. Curry’s record, unlike DiMaggio’s streak, does not appear to depend on fabulous luck. His 402 field goals this year are on 886 attempts, a 45.4% success rate. This is in line with his success rate every year since 2009; last year he had a 44.3% success rate. Curry didn't get lucky this year; he had 40% more field goals because he made almost 40% more attempts. There seems to be no reason to think he couldn't make the same number of attempts next year with equal success, if he wants to.

  3. Does he want to? Probably. Curry’s new three-point strategy seems to be extremely effective. In his previous three seasons he scored 1786, 1873, and 1900 points; this season, he scored 2375, an increase of 475, three-quarters of which is due to his three-point field goals. So we can suppose that he will continue to attempt a large number of three-point shots.

  4. Is this something unique to Curry or is it something that other players might learn to emulate? Curry’s three-point field goal rate is high, but not exceptionally so. He's not the most accurate of all three-point shooters; he holds the 62nd–64th-highest season percentages for three-point success rate. There are at least a few other players in the league who must have seen what Curry did and thought “I could do that”. (Kyle Korver maybe? I'm on very shaky ground; I don't even know how old he is.) Some of those players are going to give it a try, as are some we haven’t seen yet, and there seems to be no reason why some shouldn't succeed.

A number of things could sabotage this analysis. For example, the league might take steps to reduce the number of three-point field goals, specifically in response to Curry’s new record, say by moving the three-point line farther from the basket. But if nothing like that happens, I think it's likely that we'll see basketball enter a new era of higher offense with more three-point shots, and that future sport historians will look back on this season as a watershed.

[ Addendum 20160425: As I feared, my Korver suggestion was ridiculous. Thanks to the folks who explained why. Reason #1: He is 35 years old. ]

Categories: Offsite Blogs

wxhaskell: Play sound not working on OSX?

haskell-cafe - Sat, 04/23/2016 - 1:08pm
Is it possible that play . sound might not work on OSX or am I doing something wrong? This advice from around 2006 is all I could find: "Did you build wxWidgets with configuring --enable-sound build option?" Is this still valid? If so, how do I do that? Tilmann

import Graphics.UI.WX

main :: IO ()
main = start $ do
    f <- frame []
    p <- panel f []
    set f [layout := minsize (Size 100 100) $ widget p]
    play (sound "alarm.wav")

Using OSX 10.9.5, GHC 7.10.2, WX 0.92.2.0
http://stackoverflow.com/questions/36809989/play-sound-with-wxhaskell-not-working-on-osx
Categories: Offsite Discussion

[ANN] Aivika: A parallel distributed discrete eventsimulation library

haskell-cafe - Sat, 04/23/2016 - 10:43am
Hi Cafe! I’m glad to announce the first add-on module [3] of the Aivika simulation library that allows creating and running parallel distributed discrete event simulations. It is destined for clusters and supercomputers, although it works on a stand-alone notebook too. My module implements an optimistic strategy that implies transparent rollbacks. It is mainly inspired by the ideas of the Time Warp algorithm, whose first publication dates to the early 1980s. Please note that this is the first version. It is mainly experimental, but it already shows good results. For example, when using the reproducible random number generators on different nodes, I always get the same simulation results. Furthermore, for testing purposes, I rewrote one of my examples with discrete processes and a shared resource so that it uses far too many messages, just to see how it works. The add-on module allows doing everything that the generalized version [2] of Aivika can do in the part related to discrete event simulation. Moreover, the pa
Categories: Offsite Discussion

Generators? Iterators?

haskell-cafe - Sat, 04/23/2016 - 6:28am
Hi. I'm more in the shallow end of the Haskell pool right now, so forgive me if this is really basic... In Haskell, what is the class for things that can provide an infinite sequence? I mean, things that produce a value as well as a function for producing the next value? Clearly RandomGen does this, but what is the next level of abstraction above RandomGen (i.e., RandomGen minus the randomness aspect)?
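One minimal sketch of the "value plus a function for producing the next value" pattern being asked about, assuming nothing beyond Data.List.unfoldr from base (the names here are illustrative, not an answer from the thread):

import Data.List (unfoldr)

-- An infinite sequence as a seed plus a step function: unfoldr repeatedly
-- applies the step, emitting one value and the next seed each time.
naturals :: [Integer]
naturals = unfoldr (\n -> Just (n, n + 1)) 0

main :: IO ()
main = print (take 10 naturals) -- [0,1,2,3,4,5,6,7,8,9]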
Categories: Offsite Discussion

Should webassembly be a target for GHC?

haskell-cafe - Sat, 04/23/2016 - 5:28am
I'm interested in webassembly's recent momentum https://hacks.mozilla.org/2016/03/a-webassembly-milestone/ Is this a way we could get a foot in the web? I'm curious, just putting the thought out there. Cheers, Dimitri
Categories: Offsite Discussion

Oliver Charles: Announcing transformers-eff

Planet Haskell - Fri, 04/22/2016 - 6:00pm

In my last post, I spent some time discussing a few different approaches to dealing with computational effects in Haskell - namely monad transformers, free monads, and the monad transformer library. I presented an approach to systematically building mtl-like type classes based on the idea of lifting languages for a given effect into larger monad transformer stacks. This approach felt so mechanical to me I set about exploring a way to formalise it, and am happy to announce a new experimental library – transformers-eff.

transformers-eff takes inspiration from the work on algebraic effects and handlers, and splits each effect into composable programs for introducing effects, and handlers that eliminate those effects. As the name indicates, this work is also closely related to monad transformer stacks, as they provide the implementation of the specific effects. I believe the novelty in my approach is that we can do this entirely within the system of monad transformers, and this observation makes it very convenient to create re-usable effects.

Core API

Before looking at an example, I want to start by presenting the core API. First, we have the Eff monad transformer:

data Eff (f :: * -> *) (m :: * -> *) (a :: *)

If you squint, you’ll see that Eff has the familiar shape of a monad transformer - it transforms a given monad m, providing it access to effects described by f. As Eff f m is itself a monad, it’s possible to stack Effs together. The type parameter f is used to indicate which effects this Eff transformer talks about.

Next, the library provides a way to eliminate Eff by translating it into a concrete monad transformer:

translate :: (Monad m, Monad (t m), MonadTrans t)
          => (forall x r. f x -> ContT r (t m) x)
          -> Eff f m a
          -> t m a

Translations are defined by a single function that is very similar to the type of “lifts” we saw in my previous blog post. The difference here is that the homomorphism maps into ContT, which allows the translation to adjust control flow. For many effects it will be enough to simply lift directly into this, but it can be useful to inspect the continuation, for example to build non-deterministic computations.

Finally, we have one type class method:

interpret :: (Monad m) => f a -> m a

However, this type class is fairly constrained in its instances, so you should read m as actually being some sort of monad transformer stack containing Eff f.

Examples

Let’s dive in and look at some examples.

Reader effects

Last post we spent a lot of time looking at various representations of the reader monad, so let’s see how this looks under transformers-eff.

We already have a definition for our language, r -> a, as we saw last week. While we could work directly with this, we’ll be interpreting into ReaderT, so I’ll use the Reader newtype for a little extra readability. Given this language, we just need to write a translation into a concrete monad transformer, which will be ReaderT:

effToReaderT :: Monad m => Eff (Reader e) m a -> ReaderT e m a
effToReaderT = translate (\r -> lift (hoist generalize r))

This is a little dense, so let’s break it down. When we call translate, we have to provide a function with the type:

forall a m. Reader r a -> ContT _ (ReaderT r m) a

The ReaderT r m part is coming from the type we gave in the call to translate, that is – the type of effToReaderT. We don’t really need to concern ourselves with continuations for this effect, as reading from a fixed environment does not change the flow of control - so we’ll begin with lift. We now have to produce a ReaderT r m a from a Reader r a. If we notice that Reader r a = ReaderT r Identity a, we can make use of the tools in the mmorph library, which lets us map that Identity to any m via hoist generalize.
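To see the mmorph trick in isolation, here is a minimal, self-contained sketch assuming only the transformers and mmorph packages (it uses transformers’ Reader synonym rather than the newtype above; widen is an illustrative name):

import Control.Monad.Morph (generalize, hoist)
import Control.Monad.Trans.Reader (Reader, ReaderT, reader, runReaderT)

-- Reader r is ReaderT r Identity, so hoist generalize widens the Identity
-- base monad into any monad m.
widen :: Monad m => Reader r a -> ReaderT r m a
widen = hoist generalize

main :: IO ()
main = runReaderT (widen (reader (+ 1))) 41 >>= print -- prints 42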

We still need a way to easily introduce these effects into our programs, and that means writing an mtl-style type class. However, the instances require almost no work on our behalf and we only have to provide two, making this a very quick process:

class (Monad m) => EffReader env m | m -> env where
  liftReader :: Reader env a -> m a

instance Monad m => EffReader env (Eff (Reader env) m) where
  liftReader = interpret

instance {-# OVERLAPPABLE #-} EffReader env m => EffReader env (Eff effects m) where
  liftReader = lift . liftReader

I then provide a user-friendly API built on this lift operation:

ask :: EffReader e m => m e
ask = liftReader (Reader id)

Finally, most users are probably more interested in running the effect rather than just translating it to ReaderT, so let’s provide a convenience function to translate and run all in one go:

runReader :: Eff (Reader r) m a -> r -> m a
runReader eff r = runReaderT (effToReaderT eff) r
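As a quick, hypothetical usage sketch built only from the definitions above (none of this is from the library’s documentation):

-- A program written against the EffReader interface:
greet :: EffReader String m => m String
greet = do
  name <- ask
  return ("Hello, " ++ name)

-- Instantiating m to Eff (Reader String) IO and running it:
-- runReader greet "world" :: IO String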

In total, the reader effect is described as:

class (Monad m) => EffReader env m | m -> env where
  liftReader :: Reader env a -> m a

instance Monad m => EffReader env (Eff (Reader env) m) where
  liftReader = interpret

instance {-# OVERLAPPABLE #-} EffReader env m => EffReader env (Eff effects m) where
  liftReader = lift . liftReader

ask :: EffReader e m => m e
ask = liftReader (Reader id)

effToReaderT :: Monad m => Eff (Reader e) m a -> ReaderT e m a
effToReaderT = translate (\r -> lift (hoist generalize r))

A logging effect

We also looked at a logging effect last week, and this can also be built using transformers-eff:

data LoggingF message a = Log message deriving (Functor)

class (Monad m) => EffLog message m | m -> message where
  liftLog :: Free (LoggingF message) a -> m a

instance Monad m => EffLog message (Eff (Free (LoggingF message)) m) where
  liftLog = interpret

instance {-# OVERLAPPABLE #-} EffLog message m => EffLog message (Eff effects m) where
  liftLog = lift . liftLog

log :: EffLog message m => message -> m ()
log = liftLog . liftF . Log

runLog :: (MonadIO m) => Eff (Free (LoggingF message)) m a -> (message -> IO ()) -> m a
runLog eff io = runIdentityT (translate (iterM (\(Log msg) -> liftIO (io msg))) eff)

The interpretation here is given an IO action to perform whenever a message is logged. I could have implemented this in a few ways - perhaps lifting the whole computation into ReaderT (message -> IO ()), but instead I have just used IdentityT as the target monad transformer, and added a MonadIO constraint onto m. Whenever a message is logged, we’ll directly call the given IO action. As you can also see, I’ve used a free monad as the source language for the effect. This example demonstrates that we are free to mix a variety of tools (here free monads, MonadIO and the identity transformer) in order to get the job done.
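Again, a hypothetical usage sketch built only from the definitions above:

-- Log two messages; runLog interprets each one with the supplied IO action.
program :: EffLog String m => m ()
program = do
  log "starting" -- the log defined above, shadowing Prelude.log
  log "done"

-- runLog program putStrLn :: IO ()
-- prints "starting" and then "done"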

What does this approach bring?

Fewer type class instances

We saw above that when we introduced our EffLog type class, it was immediately available for use alongside EffReader effects - and we didn’t have to do anything extra! To me, this is a huge win - I frequently find myself frustrated with the amount of work required when composing many different projects together with mtl, and this is not just a theoretical frustration. To provide just one example from today, I wanted to use ListT with some Yesod code that required MonadLogger. There is obviously no MonadLogger instance for ListT, and it’s almost impossible to provide such an instance without using orphan instances - neither one of those libraries should need to depend on the other, so we’re stuck! If you stay within Eff, this problem doesn’t occur.

Many will be quick to point out that in mtl it doesn’t necessarily make sense to have all transformers compose, due to laws (despite the lack of any laws actually being stated…), and I’m curious whether this is true here. In this library, due to the limitation of having to write your effectful programs against an underlying algebra, I’m not sure it’s possible to introduce the problematic type class methods like local and catch.

One effect at a time

In the mtl approach a single monad transformer stack might be able to deal with a whole selection of effects in one go. However, I’ve found that this can actually make it quite difficult to reason about the flow of code. To provide an example, let’s consider this small API:

findOllie :: (MonadDb m, MonadPlus m) => m Person
findOllie = do
  x <- dbLookup (PersonId 42)
  guard (personName x == "Ollie")
  return x

type QueryError = String

dbLookup :: (MonadDb m, MonadError QueryError m) => PersonId -> m Person

data DbT m a
instance Monad m => Monad (DbT m)
instance Monad m => MonadDb (DbT m)

runDb :: (MonadIO m) => DbT m a -> m a

If we just try and apply runDb to findOllie, we’ll get

runDb findOllie :: (MonadError QueryError m, MonadIO m, MonadPlus m) => m Person

We still need to take care of MonadError and MonadPlus. For MonadError I’ll use ExceptT, and for MonadPlus I’ll use MaybeT:

runMaybeT (runExceptT (runDb findOllie)) :: IO (Maybe (Either QueryError Person))

Next, let’s consider a few scenarios. Firstly, the case where everything succeeds -

> runMaybeT (runExceptT (runDb findOllie))
Just (Right Person ...)

However, that query could fail, which would cause an error

> runMaybeT (runExceptT (runDb findOllie))
Just (Left "Table `person` not found")

Still as expected. Finally, person 42 might not actually be me, in which case we get

> runMaybeT (runExceptT (runDb findOllie))
Just (Left "")

Huh? What’s happened here is that we’ve hit the MonadPlus instance for ExceptT, and because our QueryError is a String we have a Monoid instance, so we were given an “empty” error. This is not at all what we were expecting!
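This surprise is easy to reproduce in isolation. Here is a minimal sketch assuming only the transformers package, whose MonadPlus instance for ExceptT requires a Monoid error type:

import Control.Monad (mzero)
import Control.Monad.Trans.Except (ExceptT, runExceptT)

-- mzero for ExceptT is built from mempty, so with a String error we get
-- Left "" rather than a Nothing-style failure.
main :: IO ()
main = runExceptT (mzero :: ExceptT String IO ()) >>= print -- Left ""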

While this example is a contrived one, I am very nervous that this accidental choice of instances could happen deep within another section of code, for example where I expect to do some local error handling and accidentally eliminate a chance of failure that I was expecting to deal with elsewhere.

In transformers-eff this is not possible, as each Eff deals with one and only one effect at a time. This could be done with mtl by introducing a separate type class for failure and only adding an instance for MaybeT, but then we are only working around the problem by convention, and I would much rather bake that into the types.

Fast code

The underlying implementation of Eff is built on top of continuations, and due to aggressive inlining, GHC is able to work some serious magic. In fact, in all the benchmarks I’ve produced so far, Eff is as fast as transformers, and even comes out slightly faster in one benchmark (though within the same order of magnitude).

Compatible with the rest of Hackage

As Eff is just another monad transformer, you can stack in other monad transformers. Note that by doing this you may lack the type class instances you need, so explicit lifting might be necessary. I mainly expect this to be useful by putting Eff “on the top” - for example, I can use Eff locally within a Snap monad computation, provided I eventually run back down to just Snap. This is the same pattern as locally using transformers.

Categories: Offsite Blogs

JLAMP special issue for PLACES

General haskell list - Fri, 04/22/2016 - 5:51pm
Call for papers: Special Issue of JLAMP for PLACES (Programming Language Approaches to Concurrency and Communication-cEntric Software)

Submission deadline: July 29th 2016

http://www.journals.elsevier.com/journal-of-logical-and-algebraic-methods-in-programming/call-for-papers/special-issue-on-programming-language-approaches/

This special issue of the Journal of Logical and Algebraic Methods in Programming (JLAMP) is devoted to the topics of the 9th International Workshop on Programming Language Approaches to Concurrency and Communication-cEntric Software (PLACES 2016), which took place in April 2016 in Eindhoven as part of ETAPS. This is however an *open call* for papers, therefore both participants of the workshop and other authors are encouraged to submit their contributions. Themes: Modern hardware platforms, from the very small to the very large, inc
Categories: Incoming News

ANN: shine and shine-varying: Lightweight declarative 2D graphics à la gloss using GHCJS (and a FRP interface)

General haskell list - Fri, 04/22/2016 - 4:08pm
Shine wraps JavaScript’s drawing functions in a declarative API, hiding the boilerplate canvas code. The API mimics the one provided by gloss (Picture, animate, play…), but some types are different to better adapt to js code (e.g. images, fonts, inputs). Example:

main = runWebGUI $ \ webView -> do
  ctx <- fixedSizeCanvas webView 800 600
  let concentricCircles = foldMap Circle [1,10..100]
  draw ctx concentricCircles -- one-shot drawing

The only direct dependency is ghcjs-dom, so the resulting jsexe should be relatively lightweight. I also wrote shine-varying, a FRP interface to shine in terms of Vars plus some utility Vars. Example (translation of the resize-yogi Elm example):

resizeImage img (x',y') =
  Translate (x/2) (y/2) -- Pictures are centered on (0,0), so we need to move it
    $ Image (Stretched x y) img -- Scale the picture to the given position
  where
    x = fromIntegral x' -- mousePosition is Integral
    y = fromIntegral y'

main = runWebGUI $ \ webView -> d
Categories: Incoming News

Automatically Deriving Numeric Type Class Instances

haskell-cafe - Fri, 04/22/2016 - 3:23pm
Is it possible to automatically derive instances of numeric type classes like Num, Fractional, Real, Floating, etc? I currently have two datatypes, Pair and Triple, that are defined like this:

data Pair a = Pair a a
data Triple a = Triple a a a

I wrote these pretty trivial instances for Num and Fractional:

instance Num a => Num (Pair a) where
  (+) = liftA2 (+)
  (*) = liftA2 (*)
  abs = liftA abs
  negate = liftA negate
  signum = liftA signum
  fromInteger = pure . fromInteger

instance Fractional a => Fractional (Pair a) where
  (/) = liftA2 (/)
  recip = liftA recip
  fromRational = pure . fromRational

and practically identical instances for Triple as well. Is there any way to have GHC derive these instances and the other numeric type classes? Thanks, Jake
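One possible approach (a sketch, not from the thread: it assumes a newer GHC with the DerivingVia extension, and the ApNum helper name is made up for illustration) is to write the Applicative-lifting instances once on a helper newtype and then derive through it:

{-# LANGUAGE DerivingVia, StandaloneDeriving, DeriveFunctor #-}

import Control.Applicative (liftA2)

-- Applicative lifting for Num, written once on a helper newtype.
newtype ApNum f a = ApNum (f a)

instance (Applicative f, Num a) => Num (ApNum f a) where
  ApNum x + ApNum y = ApNum (liftA2 (+) x y)
  ApNum x * ApNum y = ApNum (liftA2 (*) x y)
  ApNum x - ApNum y = ApNum (liftA2 (-) x y)
  abs (ApNum x)     = ApNum (fmap abs x)
  signum (ApNum x)  = ApNum (fmap signum x)
  negate (ApNum x)  = ApNum (fmap negate x)
  fromInteger       = ApNum . pure . fromInteger

data Pair a = Pair a a deriving (Show, Functor)

instance Applicative Pair where
  pure x = Pair x x
  Pair f g <*> Pair x y = Pair (f x) (g y)

-- Derive Num (Pair a) by reusing the helper's instance:
deriving via (ApNum Pair a) instance Num a => Num (Pair a)

main :: IO ()
main = print (Pair 1 2 + Pair 10 20) -- Pair 11 22

A Fractional instance for the helper can be written the same way, and newer versions of base also ship a similar wrapper, Ap, in Data.Monoid.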
Categories: Offsite Discussion

Haskell in Leipzig 2016: Call for Papers

General haskell list - Fri, 04/22/2016 - 12:21pm
Haskell in Leipzig
September 14-15, 2016
HTWK Leipzig, Germany
http://hal2016.haskell.org/

== About HaL ==

The workshop series “Haskell in Leipzig”, now in its 11th year, brings together Haskell developers, Haskell researchers, Haskell enthusiasts and Haskell beginners to listen to talks, take part in tutorials, and join in interesting conversations. Everything related to Haskell is on topic, whether it is about current research, practical applications, interesting ideas off the beaten track, education, or art, and topics may extend to functional programming in general and its connections to other programming paradigms as well. This year, HaL is colocated with two related conferences:
 * the Workshop on Functional and (Constraint) Logic Programming (WFLP) and
 * the Workshop on (Co
Categories: Incoming News

I have a question about Haskell

haskell-cafe - Fri, 04/22/2016 - 4:13am
Hi, I have a problem in my code! Here is my code: here is the error message: in the very bottom function (polyEvaluate), why is “result <- map (\(a,b) -> a+b) powers” not working??? At the Prelude prompt, it works. Thanks!
Categories: Offsite Discussion

ANN: New Haskell.org committee members

General haskell list - Thu, 04/21/2016 - 8:43pm
The Haskell.org committee has selected its new member following the March self-nomination period. The new (returning) member is: * Gershom Bazerman Thank you to everyone who submitted a self-nomination. Note that if you have self-nominated in the past, but not been picked, please self-nominate again in the future.
Categories: Incoming News

[RV 2016] RV 2016, Sept 23-30 2016, Madrid,Spain - 3rd CFP

General haskell list - Thu, 04/21/2016 - 1:26pm
RV 2016 16th International Conference on Runtime Verification September 23-30, Madrid, Spain http://rv2016.imag.fr Scope Runtime verification is concerned with monitoring and analysis of software and hardware system executions. Runtime verification techniques are crucial for system correctness, reliability, and robustness; they are significantly more powerful and versatile than conventional testing, and more practical than exhaustive formal verification. Runtime verification can be used prior to deployment, for testing, verification, and debugging purposes, and after deployment for ensuring reliability, safety, and security and for providing fault containment and recovery as well as online system repair. Topics of interest to the conference include: - specification l
Categories: Incoming News

Philip Wadler: Pedal on Parliament

Planet Haskell - Thu, 04/21/2016 - 3:25am

Come join Pedal on Parliament! Gather in the Meadows from 11am on Saturday 23 April; the procession sets off at noon.

A few years ago, I took my son with me to ICFP in Copenhagen. We had a blast cycling around the city, and marvelled that there were bike paths everywhere. When I lived in Morningside, my cycle to work was along quiet roads, but even so it felt safer when I arrived on the bike path through the Meadows. Now that I live near the Cameo, I'm even happier to get off the busy road and onto a path. And I look forward to the future, because Edinburgh is a city that invests in cycling and has a plan on the table that includes a cycle path from the Meadows to the Canal, which will run past my flat.

Getting more people cycling will cut pollution, benefit health, and increase quality of life. Studies show that people don't cycle because they feel sharing the road with cars is unsafe, so investment in cycle paths can make a huge difference. If people in the UK cycled and walked as much as people do in Copenhagen, the NHS would save around £17 billion within twenty years. The video below makes the case brilliantly.

Scotland has set a goal that 10% of all travel should be by cycle or foot (the buzzword is active travel), but only spends about 2% of its budget on active travel. The City of Edinburgh has pledged to up its active travel budget by 1% a year until it reaches 10%. Pedal on Parliament is our chance to support the positive steps in Edinburgh, and encourage the rest of the country to take action.

[Embedded video: https://www.youtube.com/watch?v=eLp4tUtdBWo]
Categories: Offsite Blogs

I've wanted a Haskell shirt for awhile

haskell-cafe - Thu, 04/21/2016 - 1:27am
Does anyone know where I can get one? Failing that, I made one to celebrate today. https://teespring.com/haskell-stay-lifted Disclaimer: I don't intend to be spammy, but I totally understand and respect if some people don't like this. I was just happy and wanted to share. Please don't hate.
Categories: Offsite Discussion

Bryn Keller: Mac OS X C++ Development

Planet Haskell - Wed, 04/20/2016 - 6:00pm

I recently switched to a MacBook Pro. I have customers that use Linux and Mac, and I wanted to work in a similar environment. Also recently (a few months before the MacBook) I started working with C++ again after a long hiatus.

I had thought that the Mac, being a Unix, would be relatively close to Linux, and that things I was building for Linux would be much more likely to work there than on Windows. That might still be true, but it turns out that there are several things on Mac that are not obvious, and seriously complicate native code development compared with Linux. These are my notes on those differences and how to deal with them. Hopefully, it may be useful for other migrants to Mac as well.

Xcode

Apple includes something called Xcode. This is apparently a combination of a platform SDK, and an IDE with a user interface similar to iTunes. You have to have it, but you don’t have to use the IDE part. It must be installed from the App Store. Don’t fight it, just install it and move on.

Command line tools

You definitely want the Xcode command line tools. Run:

xcode-select --install

to install them. This will give you git as well.

Brew

There are actually two package managers for Mac OS X, MacPorts and Homebrew, and as a developer you’ll definitely need one of them. I use brew, because other people I know recommended it, and it’s been nice so far. You need it to install libraries and tools that don’t come with the Mac. Most notably gcc, cmake, and so on.

Clang and gcc

Apple ships the clang compiler with Mac OS X, so it is considered the standard compiler for the platform. This means that some libraries (notably Qt) only support building with clang.

Some C/C++ projects assume (incorrectly) that everybody builds with gcc. For this reason (I guess), Apple did a really odd thing: they ship a gcc executable, which is actually clang in disguise:

$ gcc
clang: error: no input files

This (I guess) works sometimes, since many flags work the same in both compilers. However, it is deeply confusing and causes problems as well. For example, gcc supports OpenMP, a powerful parallel computing tool, and crucial for the work I’m doing. Recent versions of clang support it as well, but Apple’s fork of clang that ships with Macs does not. So to use OpenMP, I have to have the real gcc. This will cause other problems down the road, we’ll get to them in a bit.

You’ll want to install gcc with brew:

brew install gcc gdb

Since clang is already masquerading as gcc, the Homebrew folks came up with a workaround - the gcc package installs executables called gcc-5 and g++-5 instead of gcc and g++. I added the following to my profile to encourage build systems to use these compilers instead of clang.

export HOMEBREW_CC=gcc-5
export HOMEBREW_CXX=g++-5
export CC=gcc-5
export CXX=g++-5

Note the Homebrew-specific ones. Homebrew generally installs binary, precompiled packages rather than compiling on your machine, but you can pass --build-from-source to it to make it recompile. If you do that, it will honor the HOMEBREW_CC and HOMEBREW_CXX environment variables and use those to do the build.

I also aliased cmake to ensure that cmake uses gcc-5 and g++-5 by default as well:

alias cmake='/usr/local/bin/cmake -DCMAKE_C_COMPILER=$CC -DCMAKE_CXX_COMPILER=$CXX'

Compatibility

C++, unlike C, doesn’t specify a binary interface standard. This means that libraries that are compiled with different C++ compilers can have problems interoperating. So there’s that to consider when you use things that were compiled with clang (such as anything you download using brew without recompiling) together with things you’ve been building with g++-5.

The most pressing example of this is related to the C++ standard library. There are, on Mac (and elsewhere too I suppose), at least two implementations: libstdc++, and libc++. By default, clang uses libc++ and gcc-5 uses libstdc++. In practice, this means that if you install a C++ based library with brew, you will be able to compile against it with g++-5, but when you get to the link stage, it will fail with lots of missing symbols. If this happens,

brew reinstall --build-from-source <package>

can often fix the problem. However, there are brew packages (e.g. pkg-config) that will fail to compile under g++-5, so there can be cases where this doesn’t work. One example: I was trying to build mxnet directly using the brew package for OpenCV support, and it failed with the aforementioned link errors. I tried to reinstall opencv with --build-from-source with brew, and it started recompiling all of opencv’s (many) dependencies, including pkg-config, which for some reason fails to compile. So in the end I had to pull opencv as well and build it manually, after which mxnet built fine too.
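When it's unclear which side of the libc++/libstdc++ divide a given library sits on, otool can tell you. A quick diagnostic sketch (the library path here is just an example):

otool -L /usr/local/lib/libfoo.dylib
# lists the dylibs the library links against:
#   /usr/lib/libc++.1.dylib    -> built against clang's libc++
#   /usr/lib/libstdc++.6.dylib -> built against GNU libstdc++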

Next time

These were some important things to be aware of when starting to develop in C++ on Macs. In the next installment, we’ll talk about dynamic libraries, install names, and other such loveliness.

Categories: Offsite Blogs