News aggregator

Very large data structures in the heap

haskell-cafe - Tue, 02/17/2015 - 12:05pm
I have a function that builds a (cyclic) data structure for searching text. The resulting object can be serialized and re-built. Real-world problems would, however, result in a tree with about 10^9 nodes. I take it that standard data type declarations are not very memory-efficient, so my (fully evaluated) search tree would not fit into memory as a heap object. Its serialised form should be of tractable size, e.g. by storing an array of labels and the tree structure as a bit array of parentheses. I'd like to separate serialization (IO) from traversal of the structure (pure), but how can one avoid the entire heap object from being built? Can laziness be exploited? Olaf
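The "array of labels plus bit array of parentheses" encoding the poster describes can be sketched for a simple rose tree. This is a hypothetical illustration of the idea, not the poster's actual code; the shape is one bit per paren (True = open, False = close) and the labels are stored separately in preorder:

```haskell
-- A sketch of the balanced-parentheses serialisation: the tree's
-- shape becomes a bit list and its labels a preorder list.
data Tree a = Node a [Tree a]

shape :: Tree a -> [Bool]
shape (Node _ children) = True : concatMap shape children ++ [False]

labelsOf :: Tree a -> [a]
labelsOf (Node x children) = x : concatMap labelsOf children
```

A tree with n nodes needs 2n bits of shape plus the label array, which is far smaller than a heap of boxed nodes.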
Categories: Offsite Discussion

Overloadable list type notation

libraries list - Tue, 02/17/2015 - 6:30am
I had an idea whose goal is to ease people into Foldable, Traversable, etc. There could be a notation that shows the generalization that is occurring. map :: Functor [_] => (a -> b) -> [a] -> [b] This means that the syntax for the list type is now syntax for the variable created by the Functor constraint. Adding such a thing to the language is probably not a good idea, but someone might possibly like such a notation for pedagogy. Of course, it will be pointed out that a Functor is not just a container. But Foldable is sometimes expressed as "anything that toList can be called on", so it should make sense there. Greg Weber
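For comparison, the generalization the proposed notation would display already exists at the type level today; a sketch of what `Functor [_]` would desugar to (the name `mapF` is made up for illustration):

```haskell
-- The list syntax in the signature becomes an ordinary type
-- variable constrained by Functor.
mapF :: Functor f => (a -> b) -> f a -> f b
mapF = fmap
```

So `mapF (+1) [1,2,3]` and `mapF (+1) (Just 1)` both typecheck; the proposal is only about how to write that signature for learners.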
Categories: Offsite Discussion

Type-based lift and the burden of type signatures

Haskell on Reddit - Tue, 02/17/2015 - 5:57am

So I was thinking about using /u/roche's Type-based lift for a monad transformer I'm writing, but I have a qualm. Or possibly a quibble.

With the traditional MonadReader typeclass, type inference seems to work pretty well, allowing me to run simple Readers without type annotation:

λ runReader (Control.Monad.Reader.ask) 0
0

However, the same can't be said of the version from Type-based lift:

λ runReader (TypeBasedLift.Reader.ask) 0

<interactive>:237:1:
    Could not deduce (MonadReaderN
                        (Find (ReaderT a) (ReaderT r0 Identity)) a (ReaderT r0 Identity))
      arising from the ambiguity check for ‘it’
    from the context (MonadReaderN
                        (Find (ReaderT a) (ReaderT r Identity)) a (ReaderT r Identity),
                      Num r)
      bound by the inferred type for ‘it’:
                 (MonadReaderN
                    (Find (ReaderT a) (ReaderT r Identity)) a (ReaderT r Identity),
                  Num r) =>
                 a
      at <interactive>:237:1-38
    The type variable ‘r0’ is ambiguous
    When checking that ‘it’ has the inferred type
      ‘forall r a.
       (MonadReaderN
          (Find (ReaderT a) (ReaderT r Identity)) a (ReaderT r Identity),
        Num r) =>
       a’
    Probable cause: the inferred type is ambiguous

λ runReader (TypeBasedLift.Reader.ask :: Reader Int Int) 0
0

Here's the version of the code I'm working from:

{-# LANGUAGE TypeFamilies #-}
{-# LANGUAGE ScopedTypeVariables #-}
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE RankNTypes #-}
{-# LANGUAGE ConstraintKinds #-}
{-# LANGUAGE FlexibleInstances #-}
{-# LANGUAGE FlexibleContexts #-}
{-# LANGUAGE MultiParamTypeClasses #-}
module TypeBasedLift.Reader where

import Control.Monad.Reader as Trans hiding (MonadReader)
import Data.Proxy

-- Peano naturals, promoted to types by DataKinds
data Nat = Zero | Suc Nat

type family Find (t :: (* -> *) -> (* -> *)) (m :: * -> *) :: Nat where
    Find t (t m) = Zero
    Find t (p m) = Suc (Find t m)

class Monad m => MonadReaderN (n :: Nat) r m where
    askN :: Proxy n -> m r

instance Monad m => MonadReaderN Zero r (ReaderT r m) where
    askN _ = Trans.ask

instance (MonadTrans t, Monad (t m), MonadReaderN n r m, Monad m)
      => MonadReaderN (Suc n) r (t m) where
    askN _ = lift $ askN (Proxy :: Proxy n)

-- Nice constraint alias
type MonadReader r m = MonadReaderN (Find (ReaderT r) m) r m

ask :: forall m r . MonadReader r m => m r
ask = askN (Proxy :: Proxy (Find (ReaderT r) m))

As quibbles or qualms go, this isn't a huge one. Perhaps it's more of a query. Is the onus of requiring explicit type signatures for monadic values a reason to avoid the technique from Type-based lift? I feel like I usually write signatures anyway, so why not?

However, it feels like I'm shifting work from the library writer (writing typeclass instances) to the library user (writing type signatures), which makes me suspicious.

submitted by rampion
[link] [7 comments]
Categories: Incoming News

Primitive Haskell

Haskell on Reddit - Tue, 02/17/2015 - 5:08am
Categories: Incoming News

The Elm Architecture

Haskell on Reddit - Tue, 02/17/2015 - 4:06am
Categories: Incoming News

FP Complete: Primitive Haskell

Planet Haskell - Tue, 02/17/2015 - 4:00am

I originally wrote this content as a chapter of Mezzo Haskell. I'm going to be starting up a similar effort to Mezzo Haskell in the next few days, and I wanted to get a little more attention on this content to get feedback on style and teaching approach. I'll be discussing that new initiative on the Commercial Haskell mailing list.

The point of this chapter is to help you peel back some of the layers of abstraction in Haskell coding, with the goal of understanding things like primitive operations, evaluation order, and mutation. Some concepts covered here are generally "common knowledge" in the community, while others are less well understood. The goal is to cover the entire topic in a cohesive manner. If a specific section seems like it's not covering anything you don't already know, skim through it and move on to the next one.

While this chapter is called "Primitive Haskell," the topics are very much GHC-specific. I avoided calling it "Primitive GHC" for fear of people assuming it was about the internals of GHC itself. To be clear: these topics apply to anyone compiling their Haskell code using the GHC compiler.

Note that we will not be fully covering all topics here. There is a "further reading" section at the end of this chapter with links for more details.

Let's do addition

Let's start with a really simple question: tell me how GHC deals with the expression 1 + 2. What actually happens inside GHC? Well, that's a bit of a trick question, since the expression is polymorphic. Let's instead use the more concrete expression 1 + 2 :: Int.

The + operator is actually a method of the Num type class, so we need to look at the Num Int instance:

instance Num Int where
    I# x + I# y = I# (x +# y)

Huh... well that looks somewhat magical. Now we need to understand both the I# constructor and the +# operator (and what's with the hashes all of a sudden?). If we do a Hoogle search, we can easily find the relevant docs, which leads us to the following definition:

data Int = I# Int#

So our first lesson: the Int data type you've been using since you first started with Haskell isn't magical at all, it's defined just like any other algebraic data type... except for those hashes. We can also search for +#, and end up at some documentation giving the type:

+# :: Int# -> Int# -> Int#

Now that we know all the types involved, go back and look at the Num instance I quoted above, and make sure you feel comfortable that all the types add up (no pun intended). Hurrah, we now understand exactly how addition of Ints works. Right?

Well, not so fast. The Haddocks for +# have a very convenient source link... which (apparently due to a Haddock bug) doesn't actually work. However, it's easy enough to find the correct hyperlinked source. And now we see the implementation of +#, which is:

infixl 6 +#

(+#) :: Int# -> Int# -> Int#
(+#) = let x = x in x

That doesn't look like addition, does it? In fact, let x = x in x is another way of saying bottom, or undefined, or infinite loop. We have now officially entered the world of primops.


primops, short for primitive operations, are core pieces of functionality provided by GHC itself. They are the magical boundary between "things we do in Haskell itself" and "things which our implementation provides." This division is actually quite elegant; as we already explored, the standard + operator and Int data type you're used to are actually themselves defined in normal Haskell code, which provides many benefits: you get standard type class support, laziness, etc. We'll explore some of that in more detail later.

Look at the implementation of other functions in GHC.Prim; they're all defined as let x = x in x. When GHC reaches a call to one of these primops, it automatically replaces it with the real implementation for you, which will be some assembly code, LLVM code, or something similar.

Why do all of these functions end in a #? That's called the magic hash (enabled by the MagicHash language extension), and it is a convention to distinguish boxed and unboxed types and operations. Which, of course, brings us to our next topic.

Unboxed types

The I# constructor is actually just a normal data constructor in Haskell, which happens to end with a magic hash. However, Int# is not a normal Haskell data type. In GHC.Prim, we can see that its implementation is:

data Int#

Which, like everything else in GHC.Prim, is really a lie. In fact, it's provided by the implementation, and is in fact a normal long int from C (32-bit or 64-bit, depending on architecture). We can see something even funnier about it in GHCi:

> :k Int
Int :: *
> :k Int#
Int# :: #

That's right, Int# has a different kind than normal Haskell datatypes: #. To quote the GHC docs:

Most types in GHC are boxed, which means that values of that type are represented by a pointer to a heap object. The representation of a Haskell Int, for example, is a two-word heap object. An unboxed type, however, is represented by the value itself, no pointers or heap allocation are involved.

See those docs for more information on distinctions between boxed and unboxed types. It is vital to understand those differences when working with unboxed values. However, we're not going to go into those details now. Instead, let's sum up what we've learnt so far:

  • Int addition is just normal Haskell code in a typeclass
  • Int itself is a normal Haskell datatype
  • GHC provides Int# and +# as an unboxed long int and addition on that type, respectively. This is exported by GHC.Prim, but the real implementation is "inside" GHC.
  • An Int contains an Int#, which is an unboxed type.
  • Addition of Ints takes advantage of the +# primop.
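The pieces in the summary can be put together in a small, self-contained program (a sketch; the same definition is what the Num Int instance does under the hood):

```haskell
{-# LANGUAGE MagicHash #-}
import GHC.Exts (Int (I#), (+#))

-- Addition written the same way as the Num Int instance:
-- unwrap the boxed Ints, add the unboxed Int#s with the primop,
-- and box the result back up with I#.
addInt :: Int -> Int -> Int
addInt (I# x) (I# y) = I# (x +# y)

main :: IO ()
main = print (addInt 1 2)
```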
More addition

Alright, we understand basic addition! Let's make things a bit more complicated. Consider the program:

main = do
    let x = 1 + 2
        y = 3 + 4
    print x
    print y

We know for certain that the program will first print 3, and then print 7. But let me ask you a different question. Which operation will GHC perform first: 1 + 2 or 3 + 4? If you guessed 1 + 2, you're probably right, but not necessarily! Thanks to referential transparency, GHC is fully within its rights to rearrange evaluation of those expressions and add 3 + 4 before 1 + 2. Since neither expression depends on the result of the other, we know that it is irrelevant which evaluation occurs first.

Note: This is covered in much more detail on the GHC wiki's evaluation order and state tokens page.

That raises the question: if GHC is free to rearrange evaluation like that, how could I say in the previous paragraph that the program will always print 3 before printing 7? After all, it doesn't appear that print y uses the result of print x at all, so why can we not rearrange the calls? To answer that, we again need to unwrap some layers of abstraction. First, let's evaluate and inline x and y and get rid of the do-notation sugar. We end up with the program:

main = print 3 >> print 7

We know that print 3 and print 7 each have type IO (), so the >> operator being used comes from the Monad IO instance. Before we can understand that, though, we need to look at the definition of IO itself:

newtype IO a = IO (State# RealWorld -> (# State# RealWorld, a #))

We have a few things to understand about this line. Firstly, State# and RealWorld. For now, just pretend like they are a single type; we'll see when we get to ST why State# has a type parameter.

The other thing to understand is that (# ... #) syntax. That's an unboxed tuple, and it's a way of returning multiple values from a function. Unlike a normal, boxed tuple, unboxed tuples involve no extra allocation and create no thunks.
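One primop that makes the benefit concrete is quotRemInt#, which hands back quotient and remainder at once in an unboxed tuple. A sketch of wrapping it (the name quotRem' is made up; the primop itself is real):

```haskell
{-# LANGUAGE MagicHash #-}
{-# LANGUAGE UnboxedTuples #-}
import GHC.Exts (Int (I#), quotRemInt#)

-- Both results come back in one unboxed tuple, with no
-- intermediate allocation and no thunks.
quotRem' :: Int -> Int -> (Int, Int)
quotRem' (I# x) (I# y) =
    case quotRemInt# x y of
        (# q, r #) -> (I# q, I# r)
```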

So IO takes a real world state, and gives you back a real world state and some value. And that right there is how we model side effects and mutation in a referentially transparent language. You may have heard the description of IO as "taking the world and giving you a new one back." What we're doing here is threading a specific state token through a series of function calls. By creating a dependency on the result of a previous function, we are able to ensure evaluation order, yet still remain purely functional.

Let's see this in action, by coming back to our example from above. We're now ready to look at the Monad IO instance:

instance Monad IO where
    (>>) = thenIO

thenIO :: IO a -> IO b -> IO b
thenIO (IO m) k = IO $ \ s -> case m s of (# new_s, _ #) -> unIO k new_s

unIO :: IO a -> (State# RealWorld -> (# State# RealWorld, a #))
unIO (IO a) = a

(Yes, I changed things a bit to make them easier to understand. As an exercise, check that this version is in fact equivalent to what is actually defined in GHC.Base.)

Let's inline these definitions into print 3 >> print 7:

main = IO $ \s0 ->
    case unIO (print 3) s0 of
        (# s1, res1 #) -> unIO (print 7) s1

Notice how, even though we ignore the result of print 3 (the res1 value), we still depend on the new state token s1 when we evaluate print 7, which forces the order of evaluation to first evaluate print 3 and then evaluate print 7.

If you look through GHC.Prim, you'll see that a number of primitive operations are defined in terms of State# RealWorld or State# s, which allows us to force evaluation order.

Exercise: implement a function getMaskingState :: IO Int using the getMaskingState# primop and the IO data constructor.
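If you want to check your answer, here is one possible solution, following the same unwrap-and-box pattern as addition (a sketch, not necessarily the only way to write it):

```haskell
{-# LANGUAGE MagicHash #-}
{-# LANGUAGE UnboxedTuples #-}
import GHC.Base (IO (..))
import GHC.Exts (Int (I#))
import GHC.Prim (getMaskingState#)

-- Run the primop on the state token we're handed, then box
-- the unboxed Int# result into an Int.
getMaskingState :: IO Int
getMaskingState = IO $ \s0 ->
    case getMaskingState# s0 of
        (# s1, i #) -> (# s1, I# i #)
```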

The ST monad

Let's compare the definitions of the IO and ST types:

newtype IO a = IO (State# RealWorld -> (# State# RealWorld, a #))
newtype ST s a = ST (State# s -> (# State# s, a #))

Well that looks oddly similar. Said more precisely, IO is isomorphic to ST RealWorld. ST works under the exact same principles as IO for threading state through, which is why we're able to have things like mutable references in the ST monad.

By using an uninstantiated s value, we can ensure that we aren't "cheating" and running arbitrary IO actions inside an ST action. Instead, we just have "local state" modifications, for some definition of local state. The details of using ST correctly and the Rank2Types approach to runST are interesting, but beyond the scope of this chapter, so we'll stop discussing them here.

Since ST RealWorld is isomorphic to IO, we should be able to convert between the two of them. base does in fact provide the stToIO function.

Exercise: write a pair of functions to convert between IO a and ST RealWorld a.
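Since the two newtypes wrap functions of identical shape, the conversions are just re-wrapping. One possible solution, if you want to compare (primed names to avoid clashing with base's own stToIO):

```haskell
import Control.Monad.ST (RealWorld)
import GHC.IO (IO (..))
import GHC.ST (ST (..))

-- Unwrap one newtype and wrap the other; the underlying
-- State# RealWorld function is unchanged.
stToIO' :: ST RealWorld a -> IO a
stToIO' (ST f) = IO f

ioToST' :: IO a -> ST RealWorld a
ioToST' (IO f) = ST f
```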

Exercise: GHC.Prim has a section on mutable variables, which forms the basis of IORef and STRef. Provide a new implementation of STRef, including newSTRef, readSTRef, and writeSTRef.
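One possible solution for comparison, built on the newMutVar#, readMutVar#, and writeMutVar# primops (a sketch mirroring the shape of the real GHC.STRef):

```haskell
{-# LANGUAGE MagicHash #-}
{-# LANGUAGE UnboxedTuples #-}
import GHC.Prim (MutVar#, newMutVar#, readMutVar#, writeMutVar#)
import GHC.ST (ST (..))

-- An STRef wraps the primitive mutable variable.
data STRef s a = STRef (MutVar# s a)

newSTRef :: a -> ST s (STRef s a)
newSTRef x = ST $ \s0 ->
    case newMutVar# x s0 of
        (# s1, var #) -> (# s1, STRef var #)

readSTRef :: STRef s a -> ST s a
readSTRef (STRef var) = ST $ \s -> readMutVar# var s

-- writeMutVar# returns only a new state token, so we pair it
-- with () ourselves.
writeSTRef :: STRef s a -> a -> ST s ()
writeSTRef (STRef var) x = ST $ \s0 ->
    case writeMutVar# var x s0 of
        s1 -> (# s1, () #)
```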


It's a bit unfortunate that we have to have two completely separate sets of APIs: one for IO and another for ST. One common example of this is IORef and STRef, but, as we'll see at the end of this section, there are plenty of operations that we'd like to be able to generalize.

This is where PrimMonad, from the primitive package, comes into play. Let's look at its definition:

-- | Class of primitive state-transformer monads
class Monad m => PrimMonad m where
    -- | State token type
    type PrimState m
    -- | Execute a primitive operation
    primitive :: (State# (PrimState m) -> (# State# (PrimState m), a #)) -> m a

Note: I have not included the internal method, since it will likely be removed. In fact, at the time you're reading this, it may already be gone!

PrimState is an associated type giving the type of the state token. For IO, that's RealWorld, and for ST s, it's s. primitive gives a way to lift the internal implementation of both IO and ST to the monad under question.

Exercise: Write implementations of the PrimMonad IO and PrimMonad (ST s) instances, and compare against the real ones.

The primitive package provides a number of wrappers around types and functions from GHC.Prim and generalizes them to both IO and ST via the PrimMonad type class.
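For instance, Data.Primitive.MutVar from primitive gives a mutable reference that runs unchanged in any PrimMonad. A small sketch (the function name bumpTwice is made up for illustration):

```haskell
import Control.Monad.Primitive (PrimMonad)
import Control.Monad.ST (runST)
import Data.Primitive.MutVar (modifyMutVar, newMutVar, readMutVar)

-- The same code works in both IO and ST s, because both
-- have PrimMonad instances.
bumpTwice :: PrimMonad m => Int -> m Int
bumpTwice n = do
    ref <- newMutVar n
    modifyMutVar ref (+ 1)
    modifyMutVar ref (+ 1)
    readMutVar ref

main :: IO ()
main = do
    print =<< bumpTwice 0        -- run in IO
    print (runST (bumpTwice 0))  -- run in ST
```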

Exercise: Extend your previous STRef implementation to work in any PrimMonad. After you're done, you may want to have a look at Data.Primitive.MutVar.

The vector package builds on top of the primitive package to provide mutable vectors that can be used from both IO and ST. This chapter is not a tutorial on the vector package, so we won't go into any more details now. However, if you're curious, please look through the Data.Vector.Generic.Mutable docs.

ReaderIO monad

To tie this off, we're going to implement a ReaderIO type. This will flatten together the implementations of ReaderT and IO. Generally speaking, there's no advantage to doing this: GHC should always be smart enough to generate the same code for this and for ReaderT r IO (and in my benchmarks, they perform identically). But it's a good way to test that you understand the details here.

You may want to try implementing this yourself before looking at the implementation below.

{-# LANGUAGE FlexibleInstances #-}
{-# LANGUAGE MagicHash #-}
{-# LANGUAGE MultiParamTypeClasses #-}
{-# LANGUAGE TypeFamilies #-}
{-# LANGUAGE UnboxedTuples #-}
import Control.Applicative (Applicative (..))
import Control.Monad (ap, liftM)
import Control.Monad.IO.Class (MonadIO (..))
import Control.Monad.Primitive (PrimMonad (..))
import Control.Monad.Reader.Class (MonadReader (..))
import GHC.Base (IO (..))
import GHC.Prim (RealWorld, State#)

-- | Behaves like a @ReaderT r IO a@.
newtype ReaderIO r a = ReaderIO
    (r -> State# RealWorld -> (# State# RealWorld, a #))

-- standard implementations...
instance Functor (ReaderIO r) where
    fmap = liftM

instance Applicative (ReaderIO r) where
    pure = return
    (<*>) = ap

instance Monad (ReaderIO r) where
    return x = ReaderIO $ \_ s -> (# s, x #)
    ReaderIO f >>= g = ReaderIO $ \r s0 ->
        case f r s0 of
            (# s1, x #) ->
                let ReaderIO g' = g x
                 in g' r s1

instance MonadReader r (ReaderIO r) where
    ask = ReaderIO $ \r s -> (# s, r #)
    local f (ReaderIO m) = ReaderIO $ \r s -> m (f r) s

instance MonadIO (ReaderIO r) where
    liftIO (IO f) = ReaderIO $ \_ s -> f s

instance PrimMonad (ReaderIO r) where
    type PrimState (ReaderIO r) = RealWorld
    primitive f = ReaderIO $ \_ s -> f s

-- Cannot properly define internal, since there's no way to express a
-- computation that requires an @r@ input value as one that doesn't. This
-- limitation of @PrimMonad@ is being addressed:
--
-- internal (ReaderIO f) = f (error "PrimMonad.internal: environment evaluated")

Exercise: Modify the ReaderIO monad to instead be a ReaderST monad, and take an s parameter for the specific state token.

Further reading
Categories: Offsite Blogs

Haskell 2010 Language Report

Haskell on Reddit - Mon, 02/16/2015 - 11:04pm
Categories: Incoming News

New Functional Programming Job Opportunities

haskell-cafe - Mon, 02/16/2015 - 7:00pm
Here are some functional programming job opportunities that were posted recently: Senior Developer (Big Data Tech Stack) at Recruit IT Group Ltd Senior Development Engineer at Lookingglass Cyber Solutions Cheers, Sean Murphy
Categories: Offsite Discussion

quickcheck for compiler testing

haskell-cafe - Mon, 02/16/2015 - 6:53pm
I'm starting to work on my first real haskell program (I've only RWH exercises under my belt) and wondering whether people use quickcheck at all for compiler testing. I've seen uses of quickcheck for testing parsers, but I'm interested in generating well-formed programs (e.g. random programs with all declarations in reasonable random places). This could be used to test passes other than parsing (or even parsing, for languages that needs to distinguish identifiers, like the 'typedef' problem in C/C++). The only thing I can think of, is to use quickcheck for randomly generating statements, go over them and figure out free variables (incl. functions) and generate declarations in random places for them. But I'm not sure how this would play with test size reduction and doesn't look like a nice solution anyhow. Any idea or pointers to examples? or should I give up on quickcheck for this and just do direct testing? Thanks, Maurizio
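One common alternative to the generate-then-patch-up approach described in the message is to make terms well-formed by construction, threading the set of in-scope names through the generator. A sketch for a toy expression language (all names here are made up for illustration; shrinking such scoped terms remains the hard part the poster mentions):

```haskell
import Test.QuickCheck

-- A toy language: literals, variables, addition, and let-bindings.
data Expr
    = Lit Int
    | Var String
    | Add Expr Expr
    | Let String Expr Expr
    deriving Show

-- Only generate variables that are actually in scope, so every
-- generated program is well-formed by construction.
genExpr :: [String] -> Int -> Gen Expr
genExpr env 0 = oneof (leaves env)
genExpr env n =
    oneof $ leaves env ++
        [ Add <$> sub <*> sub
        , do name <- elements ["x", "y", "z"]
             Let name <$> sub <*> genExpr (name : env) (n `div` 2)
        ]
  where
    sub = genExpr env (n `div` 2)

-- In an empty environment the only leaf is a literal; otherwise a
-- leaf may also be a reference to an in-scope variable.
leaves :: [String] -> [Gen Expr]
leaves []  = [Lit <$> arbitrary]
leaves env = [Lit <$> arbitrary, Var <$> elements env]
```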
Categories: Offsite Discussion

Danny Gratzer: A Twelf Introduction

Planet Haskell - Mon, 02/16/2015 - 6:00pm
Posted on February 17, 2015 Tags: twelf, types

For the last 3 or so weeks I’ve been writing a bunch of Twelf code for my research (hence my flat-lined github punch card). Since it’s actually a lot of fun I thought I’d share a bit about Twelf.

What Is Twelf

Since Twelf isn’t a terribly well known language it’s worth stating what exactly it is we’re talking about. Twelf is a proof assistant. It’s based on a logic called LF (similarly to how Coq is based on CiC).

Twelf is less powerful than some other proof assistants but by limiting some of its power it’s wonderfully suited to proving certain types of theorems. In particular, Twelf admits true “higher order abstract syntax” (don’t worry if you don’t know what this means) this makes it great for formalizing programming languages with variable bindings.

In short, Twelf is a proof assistant which is very well suited for defining and proving things about programming languages.

Getting Twelf

It’s much more fun to follow along a tutorial if you actually have a Twelf installation to try out the code. You can download and compile the sources to Twelf with SML/NJ or Mlton. You could also use smackage to get the compiler.

Once you’ve compiled the thing you should be left with a binary twelf-server. This is your primary way of interacting with the Twelf system. There’s quite a slick Emacs interface to smooth over this process. If you’ve installed twelf into a directory ~/twelf/ all you need is the incantation

(setq twelf-root "~/twelf/")
(load (concat twelf-root "emacs/twelf-init.el"))

Without further ado, let’s look at some Twelf code.

Some Code

When writing Twelf code we encode the thing that we’re studying, the object language, as a bunch of type families and constructors in Twelf. This means that when we edit a Twelf file we’re just writing signatures.

For example, if we want to encode natural numbers we’d write something like

nat : type.
z : nat.
s : nat -> nat.

This is an LF signature, we declare a series of constants with NAME : TYPE.. Note the period at the end of each declaration. First we start by declaring a type for natural numbers called nat with nat : type. Here type is the base kind of all types in Twelf. Next we go on to declare what the values of type nat are.

In this case there are two constructors for nat. We either have zero, z, or the successor of another value of type nat, s. This gives us a canonical forms lemma for natural numbers: All values of type nat are either

  • z
  • s N for some value N : nat

Later on, we’ll justify the proofs we write with this lemma.

Anyways, now that we’ve encoded the natural numbers I wanted to point out a common point of confusion about Twelf. We’re not writing programs to be run. We’re writing programs exclusively for the purpose of typechecking. Heck, we’re not even writing programs at the term level! We’re just writing a bunch of constants out with their types! More than this even, Twelf is defined so that you can only write canonical forms. This means that if you write something in your program, it has to be in normal form, fully applied! In PL speak it has to be β-normal and η-long. This precludes actually writing programs for the sake of reducing them. You’re never going to write a web server in Twelf; you won’t even be writing “Hello World”. You might use it to verify the language you’re writing them in, though.

Now that we’ve gotten the awkward bit out the way, let’s now define a Twelf encoding of a judgment. We want to encode the judgment + which is given by the following rules

—————————
z + n = n

   m + n = p
———————————————
s(m) + n = s(p)

In the rest of the world we have this idea that propositions are types. In twelf, we’re worried about defining logics and systems, so we have the metatheoretic equivalent: judgments are types.

So we define a type family plus.

plus : nat -> nat -> nat -> type.

So plus is a type indexed over 3 natural numbers. This is our first example of dependent types: plus is a type which depends on 3 terms. Now we can list out how to construct a derivation of plus. This means that inference rules in a meta theory corresponds to constants in Twelf as well.

plus/z : {n : nat} plus z n n.

This is some new syntax, in Twelf {NAME : TYPE} TYPE is a dependent function type, a pi type. This notation is awfully similar to Agda and Idris if you’re familiar with them. This means that this constructor takes a natural number, n and returns a derivation that plus z n n. The fact that the return type depends on what nat we supply is why this needs a dependent type.

In fact, this is such a common pattern that Twelf has sugar for it. If we write an unbound capital variable name Twelf will automagically introduce a binder {N : ...} at the front of our type. We can thus write our inference rules as

plus/z : plus z N N plus/s : plus N M P -> plus (s N) M (s P)

These rules, together with our declaration of plus, completely define addition. In fact, there’s something kinda special about these two rules. We know that for any term n : nat which is in canonical form, there should be an applicable rule. In Twelf speak, we say that this type family is total.

We can ask Twelf to check this fact for us by saying

plus : nat -> nat -> nat -> type.
%mode plus +N +M -P.

plus/z : plus z N N.
plus/s : plus N M P -> plus (s N) M (s P).

%worlds () (plus _ _ _).
%total (N) (plus N _ _).

We want to show that for all terms n, m : nat in canonical form, there is a term p in canonical form so that plus n m p. This sort of theorem is what we’d call a ∀∃-theorem. This is literally because it’s a theorem of the form “∀ something. ∃ something. so that something”. These are the sort of thing that Twelf can help us prove.

Here’s the workflow for writing one of these proofs in Twelf

  1. Write out the type family
  2. Write out a %mode specification to say what is bound in the ∀ and what is bound in the ∃.
  3. Write out the different constants in our type family
  4. Specify the context to check our proof in with %worlds, usually we want to say the empty context, ()
  5. Ask Twelf to check that we’ve created a proof according to the mode with %total where the N specifies what to induct on.

In our case we have a case for each canonical form of nat so our type family is total. This means that our theorem passes. Hurray!

Believe it or not this is what life is like in Twelf land. All the code I’ve written these last couple of weeks is literally type signatures and 5 occurrences of %total. What’s kind of fun is how unreasonably effective a system this is for proving things.

Let’s wrap things up by proving one last theorem, if plus A B N and plus A B M both have derivations, then we should be able to show that M and N are the same. Let’s start by defining what it means for two natural numbers to be the same.

nat-eq : nat -> nat -> type.
nat-eq/r : nat-eq N N.
nat-eq/s : nat-eq N M -> nat-eq (s N) (s M).

I’ve purposefully defined this so it’s amenable to our proof, but it’s still a believable formulation of equality. It’s reflexive and if N is equal to M, then s N is equal to s M. Now we can actually state our proof.

plus-fun : plus N M P -> plus N M P' -> nat-eq P P' -> type.
%mode plus-fun +A +B -C.

Our theorem says if you give us two derivations of plus with the same arguments, we can prove that the outputs are equal. There are two cases we have to cover for our induction so there are two constructors for this type family.

plus-fun/z : plus-fun plus/z plus/z nat-eq/r.
plus-fun/s : plus-fun (plus/s L) (plus/s R) (nat-eq/s E)
              <- plus-fun L R E.

A bit of syntactic sugar here, I used the backwards arrow which is identical to the normal -> except its arguments are flipped. Finally, we ask Twelf to check that we’ve actually proven something here.

%worlds () (plus-fun _ _ _).
%total (P) (plus-fun P _ _).

And there you have it, some actual theorem we’ve mechanically checked using Twelf.

Wrap Up

I wanted to keep this short, so now that we’ve covered Twelf basics I’ll just refer you to one of the more extensive tutorials.

If you’re interested in learning a bit more about the nice mathematical foundations for LF you should check out “The LF Paper”.

Categories: Offsite Blogs


Release of fgl for GHC 7.10?
libraries list - Mon, 02/16/2015 - 3:22pm
Hi Ivan, could you make a new release of fgl that works with ghc-7.10? I could compile it locally after the following changes:

Data/Graph/Inductive/Query/Monad.hs

+import Control.Applicative
+import Control.Monad (ap, liftM)

+instance Monad m => Functor (GT m g) where
+  fmap = liftM
+
+instance Monad m => Applicative (GT m g) where
+  pure = return
+  (<*>) = ap

Data/Graph/Inductive/Monad/IOArray.hs

-{-# LANGUAGE MultiParamTypeClasses, FlexibleInstances #-}
+{-# LANGUAGE MultiParamTypeClasses, FlexibleInstances, FlexibleContexts #-}

I'm not sure why FlexibleContexts is now needed and I'm not sure how the package version number should change. Cheers Christian
Categories: Offsite Discussion

Reminder: The GHC 7.10 Foldable/Traversable Survey ends on the 21st February!

Haskell on Reddit - Mon, 02/16/2015 - 2:04pm

Previously, on /r/haskell: GHC 7.10 Prelude: we need your opinion.

Since the previous thread is already on page ~3, I thought a reminder might be reasonable.

Links submitted by lserit7fc56v9o24
[link] [comment]
Categories: Incoming News

Venerable Haskell Platform, necessity or product of a bygone era?

Haskell on Reddit - Mon, 02/16/2015 - 12:51pm

Given the present state of the Haskell build arena (stackage, LTS, improved cabal, sandbox, etc.) I have some questions for the community.

1) Is Haskell Platform a necessity anymore?

2) Who is best served by Haskell Platform?

3) What is Haskell Platform's future?

submitted by noZone
[link] [57 comments]
Categories: Incoming News

Functional Jobs: Engineering Manager (OCaml or Haskell under Linux) at FireEye (Full-time)

Planet Haskell - Mon, 02/16/2015 - 11:29am

Position Title: Engineering Manager

Location: Dresden, Germany

The Company

FireEye has invented a purpose-built, virtual machine-based security platform that provides real-time threat protection to enterprises and governments worldwide against the next generation of cyber attacks. These highly sophisticated cyber attacks easily circumvent traditional signature-based defenses, such as next-generation firewalls, IPS, anti-virus, and gateways. The FireEye Threat Prevention Platform provides real-time, dynamic threat protection without the use of signatures to protect an organization across the primary threat vectors and across the different stages of an attack life cycle. The core of the FireEye platform is a virtual execution engine, complemented by dynamic threat intelligence, to identify and block cyber attacks in real time. FireEye has over 3,100 customers across 67 countries, including over 200 of the Fortune 500.

Job Description:

In Dresden, Germany, an outstanding team of formal methods engineers uses formal methods tools, such as proof assistants, to develop correctness proofs for FireEye's leading edge products. In real world applications of formal methods tools, automation is often not sufficient for the specific problems at hand. Therefore, we are seeking outstanding software developers with a passion for implementing both well-designed as well as ad-hoc formal methods software tools for proof refactoring, proof search, systematic testing and other areas.


  • Design and development of software tools, scripts and automated processes for various tasks, supporting formal method tools
  • Refactoring, redesigning and rewriting earlier ad-hoc solutions in a systematic way
  • Maintaining existing tools and scripts
  • Continuous focus and contribution in the areas of performance, reliability, scalability, and maintainability of the product
  • Active participation in our ongoing process enhancements in software development and verification practices

Desired Skills & Experience

  • BS or MS in Computer Science or equivalent experience
  • Strong programming skills in OCaml or Haskell under Linux
  • Good skills in scripting languages, such as bash, perl or python
  • Good knowledge of logic and formal logical reasoning
  • Experience with parallel programs running on multiple machines and cores
  • Interest in acquiring basic knowledge about proof assistants for higher-order logic
  • Excellent interpersonal and teamwork skills
  • Strong problem solving, troubleshooting and analysis skills
  • Excellent oral and written communication skills

Get information on how to apply for this position.

Categories: Offsite Blogs