News aggregator

Dimitri Sabadie: al 0.1 released!

Planet Haskell - Sat, 02/28/2015 - 9:58am
What is OpenAL?

OpenAL is an open-source and free audio library developed by Creative Technology. The first version, however, was made by Loki Software. The API targets spatialized audio and sound management, so that it’s simple to simulate sound attenuation over distance and the Doppler effect. OpenAL is supported on a wide variety of platforms, from PCs (Windows, Linux, FreeBSD, Mac OS X and so on) to consoles (PS3, Xbox…).

The latest version, OpenAL 1.1, was released in 2005 (almost ten years ago!). It’s then pretty logical that we should have OpenAL 1.1 in Haskell.

OpenAL and Haskell

“Wait… we already have OpenAL in Haskell!”

That’s true. However, the OpenAL Haskell package has some serious issues.

The first one is that the package is not a direct binding to OpenAL. It’s a high-level wrapper that wraps stuff in obscure types, like StateVar or ObjectName. Even though such abstractions are welcome, we should have a raw binding so that people who don’t want those abstractions can still use OpenAL. Moreover, such high-level libraries tend to lift resource allocation / deallocation into the GC1. That is a pretty bad idea, because we sometimes want to control how long objects live.

The second one, which is just as bad, is the fact that those abstractions aren’t defined by the Haskell OpenAL package, but by another one, which is… OpenGL. The latter is not a raw binding to OpenGL – for that, have a look at OpenGLRaw or gl – but the same kind of high-level oriented package. A lot of us have asked the authors of both packages to agree on sharing those abstractions between the two libraries; they never seemed to agree.

And in any case, having a dependency between OpenAL and OpenGL is insane!

Sooooo… since Edward Kmett had the idea of gl to fix similar issues, I thought I should have the idea of al to go along with gl. The idea is that if you want to use one, you can use the other without any conflict.

al 0.1

Yesterday night, around 20:00, I decided to make it. It took me several hours to write the raw binding, but I finally made it and released al 0.1 the very next day – uh, today!

Because OpenAL is not shipped with your OS2 nor anything else special, and thus because you have to explicitly install it, installing al requires a bit more than just typing cabal install al.

Installing al

cabal update

First things first:

cabal update

You should have al available if you type cabal info al afterwards.

c2hs

al requires you to have c2hs – version 0.23.* – installed. If not:

cabal install c2hs

Be sure it’s correctly installed:

c2hs --version
C->Haskell Compiler, version 0.23.1 Snowbounder, 31 Oct 2014
build platform is "i386-mingw32" <1, True, True, 1>

OpenAL SDK

Of course, you should have the OpenAL SDK installed as well. If not, install it. It should be in your package manager if you’re on Linux / FreeBSD, and you can find it here for Windows systems.

Important: write down the installation path, because we’ll need it!

Installing al

Here we are. In order to install, al needs the paths to your OpenAL SDK. I’m not sure whether it could vary a lot, though. The default Windows paths are:

  • C:\Program Files (x86)\OpenAL 1.1 SDK\libs for the libraries;
  • C:\Program Files (x86)\OpenAL 1.1 SDK\include for the header files.

In future releases, I’ll default to those so that you don’t have to explicitly pass the paths if you have a default installation. But for the moment, you have to pass those paths when installing:

cabal install al --extra-lib-dirs="C:\Program Files (x86)\OpenAL 1.1 SDK\libs"
--extra-include-dirs="C:\Program Files (x86)\OpenAL 1.1 SDK\include"

Of course, you may adapt the paths to your situation. You may even try without them on UNIX systems – the SDK might already be in your PATH – or use slashes for MinGW.

Using al

al is laid out in two module trees:

  • Sound.AL
  • Sound.ALC

The former exports everything you might need to use the OpenAL API, while the latter exports symbols for the OpenAL Context API. Please feel free to dig into each tree to import specific types or functions.

OpenAL documentation

al doesn’t have any documentation for a very simple reason: since it’s a raw binding to the C OpenAL API, you should read the C documentation. Translating to Haskell is straightforward.

The OpenAL 1.1 Specifications

What’s next?

Up to now, al doesn’t cover extensions. It’s not shipped with ALUT either as I want to keep the package low-level and raw – and don’t even try to make me change my mind about that ;) .

If you have any issues, please let me know on the issue tracker. I’m responsive and I’ll happily try to help you :) .

Once again, have fun, don’t eat too much chocolate nor sweets, and happy Easter Day!

  1. Garbage Collector

  2. Operating System

Categories: Offsite Blogs

Edwin Brady: Practical Erasure in Dependently Typed Languages

Planet Haskell - Sat, 02/28/2015 - 9:13am

Matus Tejiscak and I have produced a new draft paper titled Practical Erasure in Dependently Typed Languages, in which we explain how Idris erases computationally irrelevant parts of programs. The abstract is:

Full-spectrum dependently typed languages and tools, such as Idris and Agda, have recently been gaining interest due to the expressive power of their type systems, in particular their ability to describe precise properties of programs which can be verified by type checking.

With full-spectrum dependent types, we can treat types as first-class language constructs: types can be parameterised on values, and types can be computed like any other value. However, this power brings new challenges when compiling to executable code. Without special treatment, values which exist only for compile-time checking may leak into compiled code, even in relatively simple cases. Previous attempts to tackle the problem are unsatisfying in that they either fail to erase all irrelevant information, require user annotation or in some other way restrict the expressive power of the language.

In this paper, we present a new erasure mechanism based on whole-program analysis, currently implemented in the Idris programming language. We give some simple examples of dependently typed functional programs with compile-time guarantees of their properties, but for which existing erasure techniques fall short. We then describe our new analysis method and show that with it, erasure can lead to asymptotically faster code thanks to the ability to erase not only proofs but also indices.

Comments, feedback, questions, etc, all welcome!


Categories: Offsite Blogs

Is there any good library to handle command-line arguments?

Haskell on Reddit - Sat, 02/28/2015 - 4:09am

I am looking for something like Apache Commons CLI for Java, where I can easily define the flags and arguments I expect.

Thanks

submitted by OneU
Categories: Incoming News

Full page Haskell 2010 report request

haskell-cafe - Fri, 02/27/2015 - 9:32pm
Hi, has anyone managed to produce a single-page HTML export of the Haskell 2010 report? Here's one I made by munging together the multi-page one: https://www.haskell.org/report/mono/2010#x8-440003.12 (this link is currently unpublished on the home page). But it's incomplete due to link references and things. I'd like a proper output but in the interest of my mental health prefer not to touch LaTeX except by proxy through other poor souls. Anyone managed to do this? A full, linkable page (don't worry about the section/para links, I added them with a few lines of JS, I can do it again) is really conducive to discussion of Haskell's finer points. Ciao!
Categories: Offsite Discussion

PhD position in dependent types/functional programming at Chalmers

General haskell list - Fri, 02/27/2015 - 8:57pm
We have an opening for a PhD student in dependent type theory and functional programming at Chalmers. Here is an excerpt from the ad: "The PhD student will join the Programming Logic group and contribute to its research on dependent type theory and functional programming. Topics of interest include the following directions of work:
- Design of dependently typed functional programming languages.
- Theory and implementation of type checkers, compilers etc. for dependently typed functional programming languages.
- Investigations into the use of dependently typed functional programming languages, both as programming languages and as logical systems.
- Models and applications of (homotopy) type theory."
Note that work on and in Agda matches several of the topics above. Full text of the advertisement: http://www.chalmers.se/en/about-chalmers/vacancies/?rmpage=job&rmjob=2902 Application deadline: March 31, 2015
Categories: Incoming News

TYPES 2015 final call for contributions

General haskell list - Fri, 02/27/2015 - 7:33pm
[Selection of talks based on abstracts (2 pp easychair.cls) due 13 March 2015! A post-proceedings volume in LIPIcs, with an open call.] CALL FOR CONTRIBUTIONS 21st International Conference on Types for Proofs and Programs, TYPES 2015 Tallinn, Estonia, 18-21 May 2015 http://cs.ioc.ee/types15/ Background The TYPES meetings are a forum to present new and on-going work in all aspects of type theory and its applications, especially in formalized and computer assisted reasoning and computer programming. The meetings from 1990 to 2008 were annual workshops of a sequence of five EU funded networking projects. Since 2009, TYPES has been run as an independent conference series. Previous TYPES meetings were held in Antibes (1990), Edinburgh (1991), Båstad (1992), Nijmegen (1993), Båstad (1994), Torino (1995), Aussois (1996), Kloster Irsee (1998), Lökeberg (1999), Durham (2000), Berg en Dal near Nijmegen (2002), Torin
Categories: Incoming News

Free monoids in Haskell

haskell-cafe - Fri, 02/27/2015 - 6:43pm
Dear Dan, you posted an entry on the Comonad.Reader blog about free monoids in Haskell. I commented about FM being a monad, and meanwhile verified that indeed every monoid is an FM-algebra. The proofs are analogous to proving that the continuation (_ -> r) -> r for fixed r is a monad and that r is an algebra for it, respectively. Moreover, the FM-algebra stucture is a monoid homomorphism w.r.t. the monoid instance you gave for (FM a). I'd like to ask what happens when one omits the Monoid constraint. That is, what are the elements of newtype Y a = Y { runY :: forall b. (a -> b) -> b } (Y a) is like (Control.ForFree.Yoneda Identity a), that is, elements of (Y a) should be natural transformations from the reader functor ((->) a) to the identity functor. If that is true, then the Yoneda lemma tells us that (Y a) is isomorphic to (Identity a), but at least operationally that seems not to be the case: u = runY (return undefined) has different semantics than u' = runY (Y undefined) as can be se
Categories: Offsite Discussion

Danny Gratzer: An Explanation of Type Inference for ML/Haskell

Planet Haskell - Fri, 02/27/2015 - 6:00pm
Posted on February 28, 2015 Tags: sml, haskell, types

A couple of days ago I wrote a small implementation of a type inferencer for a mini ML language. It turns out there are very few explanations of how to do this properly, and the ones that do exist tend to present the really naive, super-exponential algorithm. I wrote the algorithm in SML, but nothing should be unfamiliar to the average Haskeller.

Type inference breaks down into essentially two components:

  1. Constraint Generation
  2. Unification

We inspect the program we’re trying to infer a type for and generate a bunch of statements (constraints) which are of the form

This type is equal to this type

These types have “unification variables” in them. These aren’t normal ML/Haskell type variables. They’re generated by the compiler, for the compiler, and will eventually be filled in with either

  1. A rigid polymorphic variable
  2. A normal concrete type

They should be thought of as holes in an otherwise normal type. For example, if we’re looking at the expression

f a

We first just say that f : 'f where 'f is one of those unification variables I mentioned. Next we say that a : 'a. Since we’re applying f to a, we can generate the constraints

'f ~ 'x -> 'y
'a ~ 'x

since we can only apply things of the form _ -> _. We then unify these constraints to produce f : 'a -> 'x and a : 'a. We’d then use the surrounding constraints to produce more information about what exactly 'a and 'x might be. If these were all the constraints we had, we’d then “generalize” 'a and 'x to normal type variables, making our expression have the type x where f : a -> x and a : a.
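The post writes its inferencer in SML; as a quick Haskell sketch (the constructor names here are mine, not the post’s), the constraints generated for f a can be written down directly:

```haskell
-- A minimal, illustrative encoding of types and constraints.
data Type = TVar String    -- a unification variable like 'f
          | TArr Type Type -- a function type _ -> _
          deriving (Eq, Show)

-- A constraint is a pair of types that must be made equal.
type Constraint = (Type, Type)

-- For the application `f a`, with f : 'f and a : 'a, we generate:
appConstraints :: [Constraint]
appConstraints =
  [ (TVar "f", TArr (TVar "x") (TVar "y")) -- 'f ~ 'x -> 'y
  , (TVar "a", TVar "x")                   -- 'a ~ 'x
  ]
```

Solving these by hand gives exactly the story above: 'f must be an arrow whose domain is whatever 'a turns out to be.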

Now onto some specifics

Set Up

In order to actually talk about type inference we first have to define our language. We have the abstract syntax tree:

type tvar = int

local
  val freshSource = ref 0
in
  fun fresh () : tvar =
    !freshSource before freshSource := !freshSource + 1
end

datatype monotype = TBool
                  | TArr of monotype * monotype
                  | TVar of tvar

datatype polytype = PolyType of int list * monotype

datatype exp = True
             | False
             | Var of int
             | App of exp * exp
             | Let of exp * exp
             | Fn of exp
             | If of exp * exp * exp

First we have type variables which are globally unique integers. To give us a method for actually producing them we have fresh which uses a ref-cell to never return the same result twice. This is probably surprising to Haskellers: SML isn’t purely functional and frankly this is less noisy than using something like monad-gen.

From there we have mono-types. These are normal ML types without any polymorphism. There are type/unification variables, booleans, and functions. Polytypes are just monotypes with an extra forall at the front. This is where we get polymorphism from. A polytype binds a number of type variables, stored in this representation as an int list. There is one ambiguity here: when looking at a variable it’s not clear whether it’s supposed to be a type variable (bound in a forall) or a unification variable. The idea is that we never ever inspect a type bound under a forall except when we’re converting it to a monotype with fresh unification variables in place of all of the bound variables. Thus, when inferring a type, every variable we come across is a unification variable.

Finally, we have expressions. Aside from the normal constants, we have variables, lambdas, applications, and if. The way we represent variables here is with De Bruijn indices. A variable is a number that tells you how many binders are between it and where it was bound. For example, const would be written Fn (Fn (Var 1)) in this representation.
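The same De Bruijn representation can be sketched in Haskell (ETrue/EFalse stand in for the post’s True/False, which clash with Prelude names):

```haskell
-- The expression language with De Bruijn variables.
data Exp = ETrue
         | EFalse
         | Var Int       -- how many binders sit between use and binding site
         | App Exp Exp
         | Let Exp Exp
         | Fn Exp
         | If Exp Exp Exp
         deriving (Eq, Show)

-- const = \x. \y. x : the use of x is separated from its binder by
-- one intervening Fn, so it is written Var 1.
constExp :: Exp
constExp = Fn (Fn (Var 1))
```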

With this in mind we define some helpful utility functions. When type checking, we have a context full of information. The two facts we know are

datatype info = PolyTypeVar of polytype
              | MonoTypeVar of monotype

type context = info list

Where the ith element of a context indicates the piece of information we know about the ith De Bruijn variable. We’ll also need to substitute a type for a type variable, and we want to be able to find all the free variables in a type.

fun subst ty' var ty =
  case ty of
      TVar var' => if var = var' then ty' else TVar var'
    | TArr (l, r) => TArr (subst ty' var l, subst ty' var r)
    | TBool => TBool

fun freeVars t =
  case t of
      TVar v => [v]
    | TArr (l, r) => freeVars l @ freeVars r
    | TBool => []

Both of these functions just recurse over types and do some work at the variable case. Note that freeVars can contain duplicates; this turns out not to matter in all cases except one: generalizeMonoType. The basic idea is that, given a monotype with a bunch of unification variables and a surrounding context, we figure out which variables can be bound up in a polymorphic type. If they don’t appear in the surrounding context, we generalize them by binding them in a new polytype’s forall spot.

fun dedup [] = []
  | dedup (x :: xs) =
    if List.exists (fn y => x = y) xs
    then dedup xs
    else x :: dedup xs

fun generalizeMonoType ctx ty =
  let fun notMem xs x = List.all (fn y => x <> y) xs
      fun free (MonoTypeVar m) = freeVars m
        | free (PolyTypeVar (PolyType (bs, m))) =
          List.filter (notMem bs) (freeVars m)

      val ctxVars = List.concat (List.map free ctx)
      val polyVars = List.filter (notMem ctxVars) (freeVars ty)
  in PolyType (dedup polyVars, ty) end

Here the bulk of the code is deciding whether or not a variable is free in the surrounding context, using free, which looks at a piece of info to determine what variables occur in it. We then accumulate all of these variables into ctxVars and use this list to decide what to generalize.
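The core of that generalization step fits in a few lines of Haskell (a sketch with my own names, taking the context’s free variables as a precomputed list rather than walking a context of info values):

```haskell
import Data.List (nub, (\\))

data Monotype = TBool | TArr Monotype Monotype | TVar Int
  deriving (Eq, Show)

data Polytype = PolyType [Int] Monotype
  deriving (Eq, Show)

freeVars :: Monotype -> [Int]
freeVars TBool      = []
freeVars (TArr l r) = freeVars l ++ freeVars r
freeVars (TVar v)   = [v]

-- Bind every variable free in ty but not free in the context.
-- nub plays the role of the post's dedup.
generalize :: [Int] -> Monotype -> Polytype
generalize ctxVars ty =
  PolyType (nub (freeVars ty) \\ ctxVars) ty
```

So a variable mentioned by the context stays a unification variable, while the rest get swept into the forall.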

Next we need to take a polytype to a monotype. This is the specialization of a polymorphic type that we know and love when we use map on a function from int -> double. It works by taking each bound variable and replacing it with a fresh unification variable. This is nicely handled by folds!

fun mintNewMonoType (PolyType (ls, ty)) =
  foldl (fn (v, t) => subst (TVar (fresh ())) v t) ty ls

Last but not least, we have a function to take a context and a variable and give us a monotype which corresponds to it. This may produce a new monotype if we think the variable has a polytype.

exception UnboundVar of int

fun lookupVar var ctx =
  case List.nth (ctx, var) handle Subscript => raise UnboundVar var of
      PolyTypeVar pty => mintNewMonoType pty
    | MonoTypeVar mty => mty

For the sake of nice error messages, we also throw UnboundVar instead of just Subscript in the error case. Now that we’ve gone through all of the utility functions, on to unification!

Unification

A large part of this program is basically “I’ll give you a list of constraints and you give me the solution”. The program to solve these proceeds by pattern matching on the constraints.

In the empty case, we have no constraints so we give back the empty solution.

fun unify [] = []

In the next case we actually have to look at what constraint we’re trying to solve.

| unify (c :: constrs) = case c of

If we’re lucky, we’re just trying to unify TBool with TBool, this does nothing since these types have no variables and are equal. In this case we just recurse.

(TBool, TBool) => unify constrs

If we’ve got two function types, we just constrain their domains and ranges to be the same and continue on unifying things.

| (TArr (l, r), TArr (l', r')) => unify ((l, l') :: (r, r') :: constrs)

Now we have to deal with finding a variable. We definitely want to avoid adding (TVar v, TVar v) to our solution, so we’ll have a special case for trying to unify two variables.

| (TVar i, TVar j) => if i = j then unify constrs else addSol i (TVar j) (unify (substConstrs (TVar j) i constrs))

This is our first time actually adding something to our solution, so there are several new elements here. The first is this function addSol. It’s defined as

fun addSol v ty sol = (v, applySol sol ty) :: sol

So in order to make sure our solution is internally consistent, whenever we add a type to our solution we first apply the solution to it. This ensures that we can substitute a variable in our solution for its corresponding type and not worry about whether we need to do something further. Additionally, whenever we add a new binding we substitute for it in the constraints we have left, ensuring the solution never becomes inconsistent. This prevents us from unifying v ~ TBool and v ~ TArr(TBool, TBool) in the same solution! The actual code for doing this is that substConstrs (TVar j) i constrs bit.

The next case is the general case for unifying a variable with some type. It looks very similar to the previous one.

| ((TVar i, ty) | (ty, TVar i)) => if occursIn i ty then raise UnificationError c else addSol i ty (unify (substConstrs ty i constrs))

Here we have the critical occursIn check. This checks to see if a variable appears in a type and prevents us from making erroneous unifications like TVar a ~ TArr (TVar a, TVar a). This occurs check is actually very easy to implement

fun occursIn v ty = List.exists (fn v' => v = v') (freeVars ty)

Finally we have one last case: the failure case. This is the catch-all case for if we try to unify two things that are obviously incompatible.

| _ => raise UnificationError c
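Before the full SML listing, here is the same unifier transcribed to Haskell as a sketch (Maybe replaces the UnificationError exception; the names are mine):

```haskell
data Monotype = TBool | TArr Monotype Monotype | TVar Int
  deriving (Eq, Show)

type Constr = (Monotype, Monotype)
type Sol    = [(Int, Monotype)]

-- Replace a unification variable with a type inside a type.
subst :: Monotype -> Int -> Monotype -> Monotype
subst ty' var ty = case ty of
  TVar v | v == var  -> ty'
         | otherwise -> TVar v
  TArr l r -> TArr (subst ty' var l) (subst ty' var r)
  TBool    -> TBool

freeVars :: Monotype -> [Int]
freeVars (TVar v)   = [v]
freeVars (TArr l r) = freeVars l ++ freeVars r
freeVars TBool      = []

applySol :: Sol -> Monotype -> Monotype
applySol sol ty = foldl (\t (v, vty) -> subst vty v t) ty sol

-- Apply the solution-so-far before recording a binding, keeping
-- the solution internally consistent.
addSol :: Int -> Monotype -> Sol -> Sol
addSol v ty sol = (v, applySol sol ty) : sol

occursIn :: Int -> Monotype -> Bool
occursIn v ty = v `elem` freeVars ty

substConstrs :: Monotype -> Int -> [Constr] -> [Constr]
substConstrs ty v = map (\(l, r) -> (subst ty v l, subst ty v r))

unify :: [Constr] -> Maybe Sol
unify [] = Just []
unify (c : cs) = case c of
  (TBool, TBool)            -> unify cs
  (TArr l r, TArr l' r')    -> unify ((l, l') : (r, r') : cs)
  (TVar i, TVar j) | i == j -> unify cs
  (TVar i, ty)              -> bind i ty
  (ty, TVar i)              -> bind i ty
  _                         -> Nothing -- incompatible, e.g. TBool ~ TArr _ _
  where
    bind i ty
      | occursIn i ty = Nothing        -- the occurs check
      | otherwise     = addSol i ty <$> unify (substConstrs ty i cs)
```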

All together, that code was

fun applySol sol ty = foldl (fn ((v, ty), ty') => subst ty v ty') ty sol

fun applySolCxt sol cxt =
  let fun applyInfo i =
        case i of
            PolyTypeVar (PolyType (bs, m)) =>
            PolyTypeVar (PolyType (bs, (applySol sol m)))
          | MonoTypeVar m => MonoTypeVar (applySol sol m)
  in map applyInfo cxt end

fun addSol v ty sol = (v, applySol sol ty) :: sol

fun occursIn v ty = List.exists (fn v' => v = v') (freeVars ty)

fun unify ([] : constr list) : sol = []
  | unify (c :: constrs) =
    case c of
        (TBool, TBool) => unify constrs
      | (TVar i, TVar j) =>
        if i = j
        then unify constrs
        else addSol i (TVar j) (unify (substConstrs (TVar j) i constrs))
      | ((TVar i, ty) | (ty, TVar i)) =>
        if occursIn i ty
        then raise UnificationError c
        else addSol i ty (unify (substConstrs ty i constrs))
      | (TArr (l, r), TArr (l', r')) =>
        unify ((l, l') :: (r, r') :: constrs)
      | _ => raise UnificationError c

Constraint Generation

The other half of this algorithm is the constraint generation part. We generate constraints and use unify to turn them into solutions. This boils down to two functions. The first glues together solutions.

fun <+> (sol1, sol2) =
  let fun notInSol2 v = List.all (fn (v', _) => v <> v') sol2
      val sol1' = List.filter (fn (v, _) => notInSol2 v) sol1
  in
    map (fn (v, ty) => (v, applySol sol1 ty)) sol2 @ sol1'
  end
infixr 3 <+>

Given two solutions, we figure out which bindings don’t occur in the second solution. Next, we apply solution 1 everywhere in the second solution, giving a consistent solution which contains everything in sol2; finally, we add in all the stuff in sol1 but not in sol2. This doesn’t check that the solutions are actually consistent; that is done elsewhere.
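That merge is easy to sketch in Haskell too (again with my own names; under the same hypothetical Monotype encoding as the earlier sketches):

```haskell
data Monotype = TBool | TArr Monotype Monotype | TVar Int
  deriving (Eq, Show)

type Sol = [(Int, Monotype)]

subst :: Monotype -> Int -> Monotype -> Monotype
subst ty' var ty = case ty of
  TVar v | v == var  -> ty'
         | otherwise -> TVar v
  TArr l r -> TArr (subst ty' var l) (subst ty' var r)
  TBool    -> TBool

applySol :: Sol -> Monotype -> Monotype
applySol sol ty = foldl (\t (v, vty) -> subst vty v t) ty sol

-- The post's <+>: push sol1 through sol2's bindings, then append
-- whatever sol1 binds that sol2 doesn't mention.
composeSol :: Sol -> Sol -> Sol
composeSol sol1 sol2 =
  [ (v, applySol sol1 ty) | (v, ty) <- sol2 ]
  ++ [ b | b@(v, _) <- sol1, v `notElem` map fst sol2 ]
```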

Next is the main function here, constrain. It actually generates a solution and a type, given a context and an expression. The first few cases are nice and simple.

fun constrain ctx True = (TBool, [])
  | constrain ctx False = (TBool, [])
  | constrain ctx (Var i) = (lookupVar i ctx, [])

In these cases we don’t infer any constraints; we just figure out types based on information we know previously. Next, for Fn, we generate a fresh variable to represent the argument’s type and just constrain the body.

  | constrain ctx (Fn body) =
    let val argTy = TVar (fresh ())
        val (rTy, sol) = constrain (MonoTypeVar argTy :: ctx) body
    in (TArr (applySol sol argTy, rTy), sol) end

Once we have the solution for the body, we apply it to the argument type which might replace it with a concrete type if the constraints we inferred for the body demand it. For If we do something similar except we add a few constraints of our own to solve.

  | constrain ctx (If (i, t, e)) =
    let val (iTy, sol1) = constrain ctx i
        val (tTy, sol2) = constrain (applySolCxt sol1 ctx) t
        val (eTy, sol3) = constrain (applySolCxt (sol1 <+> sol2) ctx) e
        val sol = sol1 <+> sol2 <+> sol3
        val sol = sol <+> unify [ (applySol sol iTy, TBool)
                                , (applySol sol tTy, applySol sol eTy)]
    in (tTy, sol) end

Notice how we apply each solution to the context for the next thing we’re constraining. This is how we ensure that each solution will be consistent. Once we’ve generated solutions to the constraints in each of the subterms, we smash them together to produce the first solution. Next, we ensure that the subcomponents have the right type by generating a few constraints to ensure that iTy is a bool and that tTy and eTy (the types of the branches) are both the same. We have to carefully apply the sol to each of these prior to unifying them to make sure our solution stays consistent.

This is practically the same as the App case:

  | constrain ctx (App (l, r)) =
    let val (domTy, ranTy) = (TVar (fresh ()), TVar (fresh ()))
        val (funTy, sol1) = constrain ctx l
        val (argTy, sol2) = constrain (applySolCxt sol1 ctx) r
        val sol = sol1 <+> sol2
        val sol = sol <+> unify [ (applySol sol funTy, applySol sol (TArr (domTy, ranTy)))
                                , (applySol sol argTy, applySol sol domTy)]
    in (ranTy, sol) end

The only real difference here is that we generate different constraints: we make sure we’re applying a function whose domain is the same as the argument type.

The most interesting case here is Let. This implements let generalization, which is how we actually get polymorphism. After inferring the type of the thing we’re binding, we generalize it, giving us a polytype to use in the body of the let. The key to generalizing it is the generalizeMonoType function we defined before.

  | constrain ctx (Let (e, body)) =
    let val (eTy, sol1) = constrain ctx e
        val ctx' = applySolCxt sol1 ctx
        val eTy' = generalizeMonoType ctx' (applySol sol1 eTy)
        val (rTy, sol2) = constrain (PolyTypeVar eTy' :: ctx') body
    in (rTy, sol1 <+> sol2) end

We do pretty much everything we did before, except now we carefully apply the solution we get for the bound expression to the context and then generalize its type with respect to that new context. This is how we actually get polymorphism: it assigns a proper polymorphic type to the binding.

That wraps up constraint generation. Now all that’s left to see is the overall driver for type inference.

fun infer e =
  let val (ty, sol) = constrain [] e
  in generalizeMonoType [] (applySol sol ty) end

So all we do is infer and generalize a type! And there you have it, that’s how ML and Haskell do type inference.

Wrap Up

Hopefully that clears up a little of the magic of how type inference works. The next challenge is to figure out how to do type inference on a language with patterns and ADTs! This is actually quite fun: pattern checking involves synthesizing a type from a pattern, which needs something like linear logic to handle pattern variables correctly.

With this we’re actually a solid 70% of the way to building a type checker to SML. Until I have more free time though, I leave this as an exercise to the curious reader.

Cheers,

Categories: Offsite Blogs

Danny Gratzer: A Twelf Introduction

Planet Haskell - Fri, 02/27/2015 - 6:00pm
Posted on February 28, 2015 Tags: twelf, types

For the last 3 or so weeks I’ve been writing a bunch of Twelf code for my research (hence my flat-lined github punch card). Since it’s actually a lot of fun I thought I’d share a bit about Twelf.

What Is Twelf

Since Twelf isn’t a terribly well known language it’s worth stating what exactly it is we’re talking about. Twelf is a proof assistant. It’s based on a logic called LF (similarly to how Coq is based on CiC).

Twelf is less powerful than some other proof assistants, but by limiting some of its power it’s wonderfully suited to proving certain types of theorems. In particular, Twelf admits true “higher order abstract syntax” (don’t worry if you don’t know what this means), which makes it great for formalizing programming languages with variable binding.

In short, Twelf is a proof assistant which is very well suited for defining and proving things about programming languages.

Getting Twelf

It’s much more fun to follow along with a tutorial if you actually have a Twelf installation to try out the code. You can download and compile the sources to Twelf with SML/NJ or MLton. You could also use smackage to get the compiler.

Once you’ve compiled the thing you should be left with a binary twelf-server. This is your primary way of interacting with the Twelf system. There’s quite a slick Emacs interface to smooth over this process. If you’ve installed twelf into a directory ~/twelf/ all you need is the incantation

(setq twelf-root "~/twelf/")
(load (concat twelf-root "emacs/twelf-init.el"))

Without further ado, let’s look at some Twelf code.

Some Code

When writing Twelf code we encode the thing that we’re studying, the object language, as a bunch of type families and constructors in Twelf. This means that when we edit a Twelf file we’re just writing signatures.

For example, if we want to encode natural numbers we’d write something like

nat : type.
z : nat.
s : nat -> nat.

This is an LF signature; we declare a series of constants with NAME : TYPE. Note the period at the end of each declaration. First we start by declaring a type for natural numbers called nat with nat : type. Here type is the base kind of all types in Twelf. Next we go on to declare what the values of type nat are.

In this case there are two constructors for nat. We either have zero, z, or the successor of another value of type nat, s. This gives us a canonical forms lemma for natural numbers: All values of type nat are either

  • z
  • s N for some value N : nat

Later on, we’ll justify the proofs we write with this lemma.
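For comparison, the same inductive structure is familiar from an ordinary Haskell datatype, where the canonical forms are exactly Z and S n (though in Haskell addition is a function, while Twelf will define it as a relation):

```haskell
-- Natural numbers with the same canonical forms as the Twelf nat.
data Nat = Z | S Nat
  deriving (Eq, Show)

-- Structural recursion on the first argument, mirroring the z and s
-- cases of the addition judgment discussed in the post.
plus :: Nat -> Nat -> Nat
plus Z     n = n
plus (S m) n = S (plus m n)
```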

Anyways, now that we’ve encoded the natural numbers, I wanted to point out a common point of confusion about Twelf. We’re not writing programs to be run. We’re writing programs exclusively for the purpose of typechecking. Heck, we’re not even writing programs at the term level! We’re just writing a bunch of constants out with their types! More than this, Twelf is defined so that you can only write canonical forms. This means that if you write something in your program, it has to be in normal form, fully applied! In PL speak it has to be β-normal and η-long. This precludes actually writing programs for the sake of reducing them. You’re never going to write a web server in Twelf; you won’t even be writing “Hello World”. You might use it to verify the language you’re writing them in, though.

Now that we’ve gotten the awkward bit out of the way, let’s define a Twelf encoding of a judgment. We want to encode the addition judgment, which is given by the following rules

—————————
z + n = n

   m + n = p
———————————————
s(m) + n = s(p)

In the rest of the world we have this idea that propositions are types. In Twelf, we’re worried about defining logics and systems, so we have the metatheoretic equivalent: judgments are types.

So we define a type family plus.

plus : nat -> nat -> nat -> type.

So plus is a type indexed over 3 natural numbers. This is our first example of dependent types: plus is a type which depends on 3 terms. Now we can list out how to construct a derivation of plus. This means that inference rules in a metatheory correspond to constants in Twelf as well.

plus/z : {n : nat} plus z n n.

This is some new syntax: in Twelf, {NAME : TYPE} TYPE is a dependent function type, a pi type. This notation is awfully similar to Agda’s and Idris’s, if you’re familiar with them. It means that this constructor takes a natural number n and returns a derivation of plus z n n. The fact that the return type depends on which nat we supply is why this needs a dependent type.

In fact, this is such a common pattern that Twelf has sugar for it. If we write an unbound capital variable name Twelf will automagically introduce a binder {N : ...} at the front of our type. We can thus write our inference rules as

plus/z : plus z N N.
plus/s : plus N M P -> plus (s N) M (s P).

These rules go together with our declaration of plus to describe addition. In fact, there’s something kind of special about these two rules: we know that for any term n : nat in canonical form, there is an applicable rule. In Twelf speak, we say that this type family is total.

We can ask Twelf to check this fact for us by saying

plus : nat -> nat -> nat -> type.
%mode plus +N +M -P.

plus/z : plus z N N.
plus/s : plus N M P -> plus (s N) M (s P).

%worlds () (plus _ _ _).
%total (N) (plus N _ _).

We want to show that for all terms n, m : nat in canonical form, there is a term p in canonical form so that plus n m p. This sort of theorem is what we’d call a ∀∃-theorem. This is literally because it’s a theorem of the form “∀ something. ∃ something. so that something”. These are the sort of thing that Twelf can help us prove.

Here’s the workflow for writing one of these proofs in Twelf:

  1. Write out the type family
  2. Write out a %mode specification to say what is bound in the ∀ and what is bound in the ∃.
  3. Write out the different constants in our type family
  4. Specify the context to check our proof in with %worlds, usually we want to say the empty context, ()
  5. Ask Twelf to check that we’ve created a proof according to the mode with %total where the N specifies what to induct on.

In our case we have a case for each canonical form of nat so our type family is total. This means that our theorem passes. Hurray!
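Because a type family like plus is also a logic program, Twelf can run it. As a quick sanity check (assuming nat is built from z and s as before), a %query directive asks Twelf to search for derivations; here we ask for exactly one solution of 1 + 1, with P as the output:

```twelf
%query 1 * plus (s z) (s z) P.
```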

Believe it or not this is what life is like in Twelf land. All the code I’ve written these last couple of weeks is literally type signatures and 5 occurrences of %total. What’s kind of fun is how unreasonably effective a system this is for proving things.

Let’s wrap things up by proving one last theorem: if plus A B N and plus A B M both have derivations, then we should be able to show that N and M are the same. Let’s start by defining what it means for two natural numbers to be the same.

    nat-eq : nat -> nat -> type.
    nat-eq/r : nat-eq N N.
    nat-eq/s : nat-eq N M -> nat-eq (s N) (s M).

I’ve purposefully defined this so it’s amenable to our proof, but it’s still a believable formulation of equality. It’s reflexive and if N is equal to M, then s N is equal to s M. Now we can actually state our proof.
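As a small aside, the same ∀∃ recipe works for lemmas about nat-eq itself. Here is a sketch, with constant names of my own choosing (they're not from the post), of a proof that nat-eq is symmetric:

```twelf
nat-eq-sym : nat-eq N M -> nat-eq M N -> type.
%mode nat-eq-sym +E -E'.

nat-eq-sym/r : nat-eq-sym nat-eq/r nat-eq/r.
nat-eq-sym/s : nat-eq-sym (nat-eq/s E) (nat-eq/s E')
                <- nat-eq-sym E E'.

%worlds () (nat-eq-sym _ _).
%total (E) (nat-eq-sym E _).
```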

    plus-fun : plus N M P -> plus N M P' -> nat-eq P P' -> type.
    %mode plus-fun +A +B -C.

Our theorem says that if you give us two derivations of plus with the same arguments, we can prove that the outputs are equal. There are two cases we have to cover in our induction, so there are two constructors for this type family.

    plus-fun/z : plus-fun plus/z plus/z nat-eq/r.
    plus-fun/s : plus-fun (plus/s L) (plus/s R) (nat-eq/s E)
                  <- plus-fun L R E.

A bit of syntactic sugar here: I used the backwards arrow <-, which is identical to the normal -> except that its arguments are flipped. Finally, we ask Twelf to check that we’ve actually proven something here.
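For comparison, the successor case written with the ordinary forward arrow would read:

```twelf
plus-fun/s : plus-fun L R E -> plus-fun (plus/s L) (plus/s R) (nat-eq/s E).
```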

    %worlds () (plus-fun _ _ _).
    %total (P) (plus-fun P _ _).

And there you have it: an actual theorem we’ve mechanically checked using Twelf.

Wrap Up

I wanted to keep this short, so now that we’ve covered Twelf basics I’ll just refer you to one of the more extensive tutorials. You may be interested in

If you’re interested in learning a bit more about the nice mathematical foundations for LF you should check out “The LF Paper”.

Categories: Offsite Blogs

Kind synonyms

haskell-cafe - Fri, 02/27/2015 - 12:45pm
Is there an extension that will allow me to do

    type T = (*, *)

so I can replace

    type family F (k :: (*, *))

with

    type family F (k :: T)

? Tom
Categories: Offsite Discussion

Confusing behaviour with EnumFromThenTo contrasting Integer and Float?

Haskell on Reddit - Fri, 02/27/2015 - 7:19am

In GHCi:

    hask > [1, 3..6] :: [Integer]
    [1,3,5]
    hask > [1, 3..6] :: [Float]
    [1.0,3.0,5.0,7.0]

The endpoint is 5 :: Integer vs 7 :: Float, surely float inaccuracy cannot explain this one? What am I missing?

submitted by kutvolbraaksel
Categories: Incoming News

Idiom Brackets for GHC (first full proposal)

haskell-cafe - Thu, 02/26/2015 - 11:04pm
Hi all, A few days ago I made a post here to gauge interest in adding idiom brackets to GHC. Response was a bit more mixed than I was hoping, but no one seemed to be drastically against the idea, so I've moved forward with a more detailed proposal. You can find the full proposal here: https://ocharles.org.uk/IdiomBrackets.html A particular difference in my proposal from existing solutions comes from my desire to lift almost *all* expressions - with the original syntax - into idiom brackets. This means normal function application and tuples, but also case expressions, let bindings, record construction, record update, infix notation, and so on. At first I was skeptical about this, but I am finding uses for this more and more. I really like how it lets me use the interesting data (that is, whatever is "under" the applicative functor) where it's most relevant - rather than having to build a function and thread that value back through. Examples of this can be seen in my proposal. To prove its use, I've bee
Categories: Offsite Discussion

RankNTypes question

haskell-cafe - Thu, 02/26/2015 - 8:19pm
Hi Cafe, I can't find out how I can do this:

    {-# LANGUAGE RankNTypes #-}

    example :: forall a. (forall b. b) -> a
    example = undefined

The first `forall` is for cosmetic reasons. If I understand correctly this is a safe and legit function (please correct me if I'm wrong!). I thought that I could just define `example = id` and GHC would do the rest, but alas, that didn't happen. Thanks!
Categories: Offsite Discussion

safe ways how to get head/last of Seq (or Foldablein general)

haskell-cafe - Thu, 02/26/2015 - 4:18pm
Hi, today I was a bit surprised that apparently there is no easy way to safely get the head element of `Seq` in a point-free way. Of course there is `viewl`, but it seems the data type has no folding function (something like `foldViewL :: b -> (a -> Seq a -> b) -> b`). Is there any existing function like `Seq a -> Maybe a` to safely retrieve the head (or last) element? If not, I'd suggest adding

    headMaybe :: (Foldable t) => t a -> Maybe a
    headMaybe = getFirst . foldMap (First . Just)

and similarly lastMaybe to Data.Foldable. Thanks, Petr
Categories: Offsite Discussion

[ANN] hoodle 0.4

haskell-cafe - Thu, 02/26/2015 - 4:04pm
Hi, Haskellers! I am happy to announce a new version of hoodle that I uploaded to hackage right now. The newest version is 0.4. Hoodle is a pen notetaking program developed in haskell using gtk2hs. You can find relevant information here: http://ianwookim.org/hoodle If all the dependencies are correctly installed (hoodle has rather big dependencies though), the installation of hoodle should be easily doable by cabal install hoodle. I am managing hoodle in nix package repository, so nix users can also easily install hoodle by nix command once after nix expression in nixpkgs is updated. The major change in this update is *tabbed* interface! Like modern web browsers, you can open multiple hoodle files in one hoodle window and navigate them using tabs. Internally, I improved its rendering engine quite much. Now it has a fully asynchronous renderer, which enables a faster change of zoom ratio using cached image and then update rendering later more precisely. This improves display performance from user's point
Categories: Offsite Discussion

Would you use frozen-base

haskell-cafe - Thu, 02/26/2015 - 3:36pm
Hi, = Introduction (can be skipped) = Recently, I find myself often worried about stability of our ecosystem – may it be the FTP discussion, or the proposed¹ removal of functions you shouldn’t use, like fromJust. While all that makes our language and base libraries nicer, it also means that my chances that code from 3 or 4 years ago will compile without adjustment on newer compilers are quite low. Then I remember that a perl based webpage that I wrote 12 years ago just keeps working, even as I upgrade perl with every Debian stable release, and I’m happy about that. It makes me wonder if I am ever going to regret using Haskell because there, I’d have to decide whether I want to invest time to upgrade my code or simply stop using it. I definitely do not want to stop Haskell from evolving. Maybe we can make a more stable Haskell experience opt-in? This is one step in that direction, solving (as always) only parts of the problem. = The problem = One problem is that our central library "base" is bo
Categories: Offsite Discussion