News aggregator

Proposal: Add functions to get consecutive elements to Data.List

libraries list - Tue, 04/12/2016 - 9:55pm
I propose adding two new functions to Data.List:

zipConsecutives :: [a] -> [(a,a)]
zipConsecutives xs = zip xs (tail xs)

zipConsecutivesWith :: (a -> a -> b) -> [a] -> [b]
zipConsecutivesWith f xs = zipWith f xs (tail xs)

(with possibly more efficient implementations) The Trac feature request is at .

Why

These functions are useful when one wants to process consecutive pairs of elements in a sequence, which is not uncommon. The ticket lists some examples, reiterated here:

-- diff consecutive elements:
diffs = zipConsecutivesWith (flip (-))

-- determine if list is ascending (similar for descending and strict):
isAscending = and . zipConsecutivesWith (<=)

-- an old friend of ours:
fibs = 1 : 1 : zipConsecutivesWith (+) fibs

-- get the edges of a closed path defined by points (ps):
edges ps = zipConsecutivesWith makeEdge (ps ++ take 1 ps)

Why add to Data.List (and base)

The definition must either use an ex
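The proposed functions are easy to try out. Here is a self-contained sketch of the definitions together with the ticket's examples (the edges example is omitted since makeEdge is not defined in the proposal); note that laziness makes these total on the empty list, because zip matches its first argument before forcing the second:

```haskell
-- Sketch of the proposed functions, exactly as defined in the proposal,
-- plus type signatures for the ticket's examples.
zipConsecutives :: [a] -> [(a, a)]
zipConsecutives xs = zip xs (tail xs)

zipConsecutivesWith :: (a -> a -> b) -> [a] -> [b]
zipConsecutivesWith f xs = zipWith f xs (tail xs)

-- diff consecutive elements:
diffs :: Num a => [a] -> [a]
diffs = zipConsecutivesWith (flip (-))

-- determine if a list is ascending:
isAscending :: Ord a => [a] -> Bool
isAscending = and . zipConsecutivesWith (<=)

-- an old friend of ours:
fibs :: [Integer]
fibs = 1 : 1 : zipConsecutivesWith (+) fibs
```

For example, diffs [1,3,6,10] yields [2,3,4], and the fibs definition ties the recursive knot just like the familiar zipWith version.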
Categories: Offsite Discussion

Derek Elkins: Understanding typing judgments

Planet Haskell - Tue, 04/12/2016 - 8:41pm

For many people interested in type systems and type theory, their first encounter with the literature presents them with this:

#frac(Gamma,x:tau_1 |--_Sigma e : tau_2)(Gamma |--_Sigma (lambda x:tau_1.e) : tau_1 -> tau_2) ->I#

#frac(Gamma |--_Sigma f : tau_1 -> tau_2 \qquad Gamma |--_Sigma x : tau_1)(Gamma |--_Sigma f x : tau_2) ->E#

Since this notation is ubiquitous, authors (reasonably) expect readers to already be familiar with it and thus provide no explanation. Because the notation is ubiquitous, the beginner looking for alternate resources will not escape it. All they will find is that the notation is everywhere but exists in myriad minor variations which may or may not indicate significant differences. At this point the options are: 1) muddle on and hope understanding the notation isn’t too important, 2) look for introductory resources, which typically take the form of $50+, 500+ page textbooks, or 3) give up.

The goal of this article is to explain the notation part-by-part in common realizations, and to cover the main idea behind the notation, which is the idea of an inductively defined relation. To eliminate ambiguity and make hand-waving impossible, I’ll ground the explanations in code, in particular, in Agda. That means for each example of the informal notation, I’ll show how it would be realized in Agda.1 It will become clear that I am not (just) using Agda as a formal notation to talk about these concepts, but that Agda’s2 data type mechanism directly captures them3. The significance of this is that programmers are already familiar with many of the ideas behind the informal notation, and the notation is just obscuring this familiarity. Admittedly, Agda is itself pretty intimidating. I hope most of this article is accessible to those familiar with algebraic data types as they appear in Haskell, ML, Rust, or Swift, with little to no need to look up details about Agda. Nevertheless, Agda at least has the benefit, when compared to the informal notation, of having a clear place to go to learn more, an unambiguous meaning, and tools that allow playing around with the ideas.

Parsing and reading

To start, if you are not already familiar with it, get familiar with the Greek alphabet. It will be far easier to (mentally) read mathematical notation of any kind if you can say “Gamma x” rather than “right angle thingy x” or “upside-down L x”.

Using the example from the introduction, the whole thing is a rule. The “|->I|” part is just the name of the rule (in this case being short for “|->| Introduction”). This rule is only part of the definition of the judgment of the form:

#Gamma |--_Sigma e : tau#

The judgment can be viewed as a proposition and the rule is an “if-then” statement read from top to bottom. So the “|->I|” rule says, “if #Gamma, x : tau_1 |--_Sigma e : tau_2# then #Gamma |--_Sigma lambda x : tau_1.e : tau_1 -> tau_2#”. It is often profitable to read it bottom-up as “To prove #Gamma |--_Sigma lambda x : tau_1.e : tau_1 -> tau_2# you need to show #Gamma, x : tau_1 |--_Sigma e : tau_2#”.

So what is the judgment saying? First, the judgment is, in this case, a four-argument relation. The arguments of this relation are #Gamma#, #Sigma#, #e#, and #tau#. We could say the name of this relation is the perspicuous #(_)|--_((_)) (_) : (_)#. Note that it does not make sense to ask what “⊢” means or what “:” means any more than it makes sense to ask what “->” means in Haskell’s \ x -> e.4

In the context of type systems, #Gamma# is called the context, #Sigma# is called the signature, #e# is the term or expression, and #tau# is the type. Given this, I would read #Gamma |--_Sigma e : tau# as “the expression e has type tau in context gamma given signature sigma.” For the “#->E#” rule we have, additionally, multiple judgements above the line. These are joined together by conjunction; that is, we’d read “#->E#” as “if #Gamma |--_Sigma f : tau_1 -> tau_2# and #Gamma |--_Sigma x : tau_1# then #Gamma |--_Sigma f x : tau_2#”.

In most recent type system research multiple judgments are necessary to describe the type system, and so you may see things like #Gamma |-- e > tau# or #Gamma |-- e_1 "~" e_2#. The key thing to remember is that these are completely distinct relations that will have their own definitions (although the collection of them will often be mutually recursively defined).

Inductively Defined Relations

Relations in set theory are boolean valued functions. Being programmers, and thus constructivists, we want evidence, so a relation |R : A xx B -> bb2| becomes a type constructor R : (A , B) -> Set. |R(a,b)| holds if we have a value (proof/witness) w : R a b. An inductively defined relation or judgment is then just a type constructor for an (inductive) data type. That means, if R is an inductively defined relation, then its definition is data R : A -> B -> Set where .... A rule is a constructor of this data type. A derivation is a value of this data type, and will usually be a tree-like structure. There is a bit of ambiguity in the terminology (arguably arising from a common ambiguity in mathematical notation): it’s a bit more natural to use the term “judgment” to refer to something that can be (at the meta level) true or false. For example, we’d say |R(a,b)| is a judgment. Nevertheless, when we say something like “the typing judgment” it’s clear that we’re referring to the whole relation, i.e. |R|.

Parameters of the judgments

Since a judgment is a relation, we need to describe what the arguments to the relation look like. Typically something like BNF is used. The BNF definitions provide the types used as parameters to the judgments. It is common to use a Fortran-esque style where a naming convention is used to avoid the need to explicitly declare the types of meta-variables. For example, the following says meta-variables like #n#, #m#, and #n_1# are all natural numbers.

n, m ::= Z | S n

BNF definitions translate readily to algebraic data types.

data Nat : Set where
  Z : Nat
  S : Nat -> Nat

Agda note: Set is what is called * in Haskell. “Type” would be a better name. Also, these sidebars will cover details about Agda with the aim that readers unfamiliar with Agda don’t get tripped up by tangential details.

Sometimes it’s not possible to fully capture the constraints on well-formed syntax with BNF. In other words, only a subset of syntactically valid terms are well-formed. For example, Nat Nat is syntactically valid but is not well-formed. We can pick out that subset with a predicate, i.e. a unary relation. This is, of course, nothing but another judgment. As an example, if we wired the Maybe type into our type system, we’d likely have a judgment that looks like #tau\ tt"type"# which would include the following rule:

#frac(tau\ tt"type")(("Maybe"\ tau)\ tt"type")#

In a scenario like this, we’d also have to make sure the rules of our typing judgment also required the types involved to be well-formed. Modifying the example from the introduction, we’d get:

#frac(Gamma,x:tau_1 |--_Sigma e : tau_2 \qquad tau_1\ tt"type" \qquad tau_2\ tt"type")(Gamma |--_Sigma (lambda x:tau_1.e) : tau_1 -> tau_2) ->I#

A simple inductively defined relation in Agda

As a very simple example, let’s say we wanted to provide explicit evidence that one natural number was less than or equal to another in Agda. Scenarios like this are common in dependently typed programming, and so we’ll start with the Agda this time and then “informalize” it.

data _isLessThanOrEqualTo_ : Nat -> Nat -> Set where
  Z<=n : {n : Nat} -> Z isLessThanOrEqualTo n
  Sm<=Sn : {m : Nat} -> {n : Nat} -> m isLessThanOrEqualTo n -> (S m) isLessThanOrEqualTo (S n)

Agda notes: In Agda identifiers can contain almost any character so Z<=n is just an identifier. Agda allows any identifier to be used infix (or more generally mixfix). The underscores mark where the arguments go. So _isLessThanOrEqualTo_ is a binary infix operator. Finally, curly brackets indicate implicit arguments which can be omitted and Agda will “guess” their values. Usually, they’ll be obvious to Agda by unification.
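Since the article aims to be accessible to readers who know algebraic data types from Haskell, it may help to see this same relation rendered in Haskell using GADTs and promoted data types. This is my own illustrative translation, not code from the article; the constructor names are hypothetical:

```haskell
{-# LANGUAGE DataKinds, GADTs #-}
import Data.Kind (Type)

data Nat = Z | S Nat

-- The relation as a type constructor; the two rules are the two constructors.
data LE :: Nat -> Nat -> Type where
  ZLEn   :: LE 'Z n                     -- the Z<=n rule: zero is below everything
  SmLESn :: LE m n -> LE ('S m) ('S n)  -- the Sm<=Sn rule: bump both sides

-- A derivation is a value: evidence that 2 <= 3.
twoLEThree :: LE ('S ('S 'Z)) ('S ('S ('S 'Z)))
twoLEThree = SmLESn (SmLESn ZLEn)

-- The number of rule applications in a derivation.
depth :: LE m n -> Int
depth ZLEn       = 0
depth (SmLESn d) = 1 + depth d
```

As in the Agda version, an ill-formed derivation such as SmLESn ZLEn at type LE 'Z n is simply a type error: the rules can only be assembled into valid derivations.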

In the informal notation, the types of the arguments are implied by the naming. n is a natural number because it was used as the metavariable (non-terminal) in the BNF for naturals. We also implicitly quantify over all free variables. In the Agda code, this quantification was explicit.

#frac()(Z <= n) tt"Z<=n"#

#frac(m <= n)(S m <= S n) tt"Sm<=Sn"#

Again, I want to emphasize that these are defining isLessThanOrEqualTo and |<=|. They can’t be wrong. They can only fail to coincide with our intuitions or with an alternate definition. A derivation of |2 <= 3| looks like:

In Agda:

twoIsLessThanThree : (S (S Z)) isLessThanOrEqualTo (S (S (S Z))) twoIsLessThanThree = Sm<=Sn (Sm<=Sn Z<=n)

In the informal notation:

#frac(frac()(Z <= S Z))(frac(S Z <= S (S Z))(S (S Z) <= S (S (S Z))))#

Big-step operational semantics

Here’s a larger example that also illustrates that these judgments do not need to be typing judgments. Here we’re defining a big-step operational semantics for the untyped lambda calculus.

x variable
v ::= λx.e
e ::= v | e e | x

In informal presentations, binders like #lambda# are handled in a fairly relaxed manner. While the details of handling binders are tricky and error-prone, they are usually standard and so authors assume readers can fill in those details and are aware of the concerns (e.g. variable capture). In Agda, of course, we’ll need to spell out the details. There are many approaches for dealing with binders with different trade-offs. One of the newer and more convenient approaches is parametric higher-order abstract syntax (PHOAS). Higher-order abstract syntax (HOAS) approaches allow us to reuse the binding structure of the host language and thus eliminate much of the work. Below, this is realized by the Lambda constructor taking a function as its argument. In a later section, I’ll use a different approach using deBruijn indices.

-- PHOAS approach to binding
mutual
  data Expr (A : Set) : Set where
    Val : Value A -> Expr A
    App : Expr A -> Expr A -> Expr A
    Var : A -> Expr A

  data Value (A : Set) : Set where
    Lambda : (A -> Expr A) -> Value A

-- A closed expression
CExpr : Set1
CExpr = {A : Set} -> Expr A

-- A closed expression that is a value
CValue : Set1
CValue = {A : Set} -> Value A

Agda note: Set1 is needed for technical reasons that are unimportant. You can just pretend it says Set instead. More important is that the definitions of Expr and Value are a bit different than the definition for _isLessThanOrEqualTo_. In particular, the argument (A : Set) occurs to the left of the colon. When an argument occurs to the left of the colon we say it parameterizes the data declaration and that it is a parameter. When it occurs to the right of the colon we say it indexes the data declaration and that it is an index. The difference is that parameters must occur uniformly in the return type of the data constructors while indexes can be different in each data constructor. The arguments of an inductively defined relation like _isLessThanOrEqualTo_ will always be indexes (though there could be additional parameters.)

#frac(e_1 darr lambda x.e \qquad e_2 darr v_2 \qquad e[x|->v_2] darr v)(e_1 e_2 darr v) tt"App"#

#frac()(v darr v) tt"Trivial"#

The #e darr v# judgment (read as “the expression #e# evaluates to the value #v#”) defines a call-by-value evaluation relation. #e[x|->v]# means “substitute #v# for #x# in the expression #e#”. This notation is not standardized; there are many variants. In more rigorous presentations this operation will be formally defined, but usually the authors assume you are familiar with it. In the #tt"Trivial"# rule, the inclusion of values into expressions is implicitly used. Note that the rule is restricted to values only.

The #tt"App"# rule specifies call-by-value because the #e_2# expression is evaluated and then the resulting value is substituted into #e#. For call-by-name, we’d omit the evaluation of #e_2# and directly substitute #e_2# for #x# in #e#. Whether #e_1# or #e_2# is evaluated first (or in parallel) is not specified in this example.

subst : {A : Set} -> Expr (Expr A) -> Expr A
subst (Var e) = e
subst (Val (Lambda b)) = Val (Lambda (λ a -> subst (b (Var a))))
subst (App e1 e2) = App (subst e1) (subst e2)

data _EvaluatesTo_ : CExpr -> CValue -> Set1 where
  EvaluateTrivial : {v : CValue} -> (Val v) EvaluatesTo v
  EvaluateApp : {e1 : CExpr} -> {e2 : CExpr} -> {e : {A : Set} -> A -> Expr A}
             -> {v2 : CValue} -> {v : CValue}
             -> e1 EvaluatesTo (Lambda e)
             -> e2 EvaluatesTo v2
             -> (subst (e (Val v2))) EvaluatesTo v
             -> (App e1 e2) EvaluatesTo v

The EvaluateTrivial constructor explicitly uses the Val injection of values into expressions. The EvaluateApp constructor starts off with a series of implicit arguments that introduce and quantify over the variables used in the rule. After those, each judgement above the line in the #tt"App"# rule becomes an argument to the EvaluateApp constructor.

In this case ↓ is defining a functional relation, meaning for every expression there’s at most one value that the expression evaluates to. So another natural way to interpret ↓ is as a definition, in logic programming style, of a (partial) recursive function. In other words we can use the concept of mode from logic programming and instead of treating the arguments to ↓ as inputs, we can treat the first as an input and the second as an output.

↓ gives rise to a partial function because not every expression has a normal form. For _EvaluatesTo_ this is realized by the fact that we simply won’t be able to construct a term of type e EvaluatesTo v for any v if e doesn’t have a normal form. In fact, we can use the inductive structure of the relation to help prove that statement. (Unfortunately, Agda doesn’t present a very good experience for data types indexed by functions, so the proof is not nearly as smooth as one would like.)
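The mode reading of ↓ — first argument in, second argument out — can be sketched as a Haskell evaluator over a PHOAS syntax mirroring the article's Expr/Value. This is my own sketch, not the article's code; Maybe models only getting stuck (which cannot happen for closed terms here), while an expression with no normal form makes eval itself diverge:

```haskell
-- Call-by-value evaluation in logic-programming "mode" style:
-- expression in, value out.
data Expr a = Val (Value a) | App (Expr a) (Expr a) | Var a
newtype Value a = Lambda (a -> Expr a)

-- Tie the knot: a variable stands directly for the value bound to it.
newtype Val' = Roll (Value Val')

eval :: Expr Val' -> Maybe Val'
eval (Var v)     = Just v          -- variables are bound to values already
eval (Val v)     = Just (Roll v)   -- the Trivial rule
eval (App e1 e2) = do              -- the App rule, call-by-value
  Roll (Lambda b) <- eval e1       -- e1 ↓ λx.e
  v2 <- eval e2                    -- e2 ↓ v2
  eval (b v2)                      -- e[x ↦ v2] ↓ v
```

For call-by-name one would instead pass e2 unevaluated into the body, which in this representation would require variables to stand for expressions rather than values.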

Type systems

Next we’ll turn to type systems which will present an even larger example, and will introduce some concepts that are specific to type systems (though, of course, they overlap greatly with concepts in logic due to the Curry-Howard correspondence.)

Terms and types

Below is an informal presentation of the polymorphic lambda calculus with explicit type abstraction and type application. An interesting fact about the polymorphic lambda calculus is that we don’t need any base types. Via Church-encoding, we can define types like natural numbers and lists.

α type variable
τ ::= τ → τ | ∀α. τ | α
x variable
c constant
v ::= λx:τ.e | Λτ.e | c
e ::= v | e e | e[τ] | x

In this case I’ll be using deBruijn indices to handle the binding structure of the terms and types. This means instead of writing |lambda x.lambda y. x|, you would write |lambda lambda 1| where the |1| counts how many binders (lambdas) you need to traverse to reach the binding site.

data TType : Set where
  TTVar : Nat -> TType            -- α
  _=>_ : TType -> TType -> TType  -- τ → τ
  Forall : TType -> TType         -- ∀α. τ

mutual
  data TExpr : Set where
    TVal : TValue -> TExpr            -- v
    TApp : TExpr -> TExpr -> TExpr    -- f x
    TTyApp : TExpr -> TType -> TExpr  -- e[τ]
    TVar : Nat -> TExpr               -- x

  data TValue : Set where
    TLambda : TType -> TExpr -> TValue  -- λx:τ.e
    TTyLambda : TExpr -> TValue         -- Λτ.e
    TConst : Nat -> TValue              -- c

The Context

In formulating the typing rules we need to deal with open terms, that is, terms that refer to variables they don’t bind. This should only happen if some enclosing terms did bind those variables, so we need to keep track of the variables that have been bound by enclosing terms. For example, when type checking |lambda x:tau.x|, we’ll need to type check the subterm |x|, which does not contain enough information in itself for us to know what the type should be. So, we keep track of what variables have been bound (and to what type) in a context, and then we can just look up the expected type. When authors bother formally spelling out the context, it will look something like the following:

Γ ::= . | Γ, x:τ
Δ ::= . | Δ, α

We see that this is just a (snoc) list. In the first case, |Gamma|, it is a list of pairs of variables and types, i.e. an association list mapping variables to types. Often it will be treated as a finite mapping. In the second case, |Delta|, it is a list of type variables. Since I’m using deBruijn notation, there are no variables, so we end up with a list of types in the first case. In the second case, we would end up with a list of nothing in particular, i.e. a list of units, but that is isomorphic to a natural number. In other words, the only purpose of the type context, |Delta|, is to make sure we don’t use unbound variables, which in deBruijn notation just means we don’t have deBruijn indexes that try to traverse more lambdas than enclose them. The Agda code for the above is completely straightforward.
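The two context representations can be sketched in Haskell as well (illustrative names of my own, not from the article):

```haskell
-- Named contexts vs. deBruijn contexts, as described above.
type NamedCtx ty = [(String, ty)]  -- Γ with explicit variable names
type DBCtx ty    = [ty]            -- Γ under deBruijn indices: just a list of types
type TyCtx       = Int             -- Δ under deBruijn: a count of bound type variables

-- Looking up a deBruijn index is list indexing;
-- failure means the index tries to escape its enclosing binders.
lookupDB :: DBCtx ty -> Int -> Maybe ty
lookupDB (t : _)  0 = Just t
lookupDB (_ : ts) n = lookupDB ts (n - 1)
lookupDB []       _ = Nothing
```

The Nothing case is exactly the "unbound variable" situation the |Delta| context exists to rule out.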

data List (A : Set) : Set where
  Nil : List A
  _,_ : List A -> A -> List A

Context : Set
Context = List TType

TypeContext : Set
TypeContext = Nat

The Signature

Signatures keep track of what primitive, “user-defined” constants might exist. Often the signature is omitted since nothing particularly interesting happens with it. Indeed, that will be the case for us. Nevertheless, we see that the signature is just another association list mapping constants to types.

Σ ::= . | Σ, c:τ

Signature : Set
Signature = List TType

The main reason I included the signature, beyond just covering it for the cases when it is included, is that sometimes certain rules can be better understood as manipulations of the signature. For example, in logic, universal quantification is often described by a rule like:

#frac(Gamma |-- P[x|->c] \qquad c\ "fresh")(Gamma |-- forall x.P)#

What’s happening, and what “freshness” means, is made a bit clearer by employing a signature (which for logic is usually just a list of constants, similar to our TypeContext):

#frac(Gamma |--_(Sigma, c) P[x|->c] \qquad c notin Sigma)(Gamma |--_Sigma forall x.P)#


To define the typing rules we need two judgements. The first, #Delta |-- tau#, will be a simple judgement that says |tau| is a well-formed type in |Delta|. This basically just requires that all variables are bound.

#frac(alpha in Delta)(Delta |-- alpha)#

#frac(Delta, alpha |-- tau)(Delta |-- forall alpha. tau)#

#frac(Delta |-- tau_1 \qquad Delta |-- tau_2)(Delta |-- tau_1 -> tau_2)#

The Agda is

data _<_ : Nat -> Nat -> Set where
  Z<Sn : {n : Nat} -> Z < S n
  Sn<SSm : {n m : Nat} -> n < S m -> S n < S (S m)

data _isValidIn_ : TType -> TypeContext -> Set where
  TyVarJ : {n : Nat} -> {ctx : TypeContext} -> n < ctx -> (TTVar n) isValidIn ctx
  TyArrJ : {t1 t2 : TType} -> {ctx : TypeContext} -> t1 isValidIn ctx -> t2 isValidIn ctx -> (t1 => t2) isValidIn ctx
  TyForallJ : {t : TType} -> {ctx : TypeContext} -> t isValidIn (S ctx) -> (Forall t) isValidIn ctx

The meat is the following typing judgement, depending on the judgement defining well-formed types. I’m not really going to explain these rules because, in some sense, there is nothing to explain. Beyond explaining the notation itself, which was the point of the article, the below is “self-explanatory” in the sense that it is a definition, and whether it is a good definition or “meaningful” depends on whether we can prove the theorems we want about it.

#frac(c:tau in Sigma \qquad Delta |-- tau)(Delta;Gamma |--_Sigma c : tau) tt"Const"#

#frac(x:tau in Gamma \qquad Delta |-- tau)(Delta;Gamma |--_Sigma x : tau) tt"Var"#

#frac(Delta;Gamma |--_Sigma e_1 : tau_1 -> tau_2 \qquad Delta;Gamma |--_Sigma e_2 : tau_1)(Delta;Gamma |--_Sigma e_1 e_2 : tau_2) tt"App"#

#frac(Delta;Gamma |--_Sigma e : forall alpha. tau_1 \qquad Delta |-- tau_2)(Delta;Gamma |--_Sigma e[tau_2] : tau_1[alpha|->tau_2]) tt"TyApp"#

#frac(Delta;Gamma, x:tau_1 |--_Sigma e : tau_2 \qquad Delta |-- tau_1)(Delta;Gamma |--_Sigma (lambda x:tau_1.e) : tau_1 -> tau_2) tt"Abs"#

#frac(Delta, alpha;Gamma |--_Sigma e : tau)(Delta;Gamma |--_Sigma (Lambda alpha.e) : forall alpha. tau) tt"TyAbs"#

Here’s the corresponding Agda code. Note, all Agda is doing for us here is making sure we haven’t written self-contradictory nonsense. In no way is Agda ensuring that this is the “right” definition. For example, it could be the case (but isn’t) that there are no values of this type. Agda would be perfectly content to let us define a type that had no values.

tySubst : TType -> TType -> TType
tySubst t1 t2 = tySubst' t1 t2 Z
  where
    tySubst' : TType -> TType -> Nat -> TType
    tySubst' (TTVar Z) t2 Z = t2
    tySubst' (TTVar Z) t2 (S _) = TTVar Z
    tySubst' (TTVar (S n)) t2 Z = TTVar (S n)
    tySubst' (TTVar (S n)) t2 (S d) = tySubst' (TTVar n) t2 d
    tySubst' (t1 => t2) t3 d = tySubst' t1 t3 d => tySubst' t2 t3 d
    tySubst' (Forall t1) t2 d = tySubst' t1 t2 (S d)

data _isIn_at_ {A : Set} : A -> List A -> Nat -> Set where
  Found : {a : A} -> {l : List A} -> a isIn (l , a) at Z
  Next : {a : A} -> {b : A} -> {l : List A} -> {n : Nat} -> a isIn l at n -> a isIn (l , b) at (S n)

data _hasType_inContext_and_given_ : TExpr -> TType -> Context -> TypeContext -> Signature -> Set where
  ConstJ : {t : TType} -> {c : Nat} -> {Sigma : Signature} -> {Gamma : Context} -> {Delta : TypeContext}
        -> t isIn Sigma at c
        -> t isValidIn Delta
        -> (TVal (TConst c)) hasType t inContext Gamma and Delta given Sigma
  VarJ : {t : TType} -> {x : Nat} -> {Sigma : Signature} -> {Gamma : Context} -> {Delta : TypeContext}
      -> t isIn Gamma at x
      -> t isValidIn Delta
      -> (TVar x) hasType t inContext Gamma and Delta given Sigma
  AppJ : {t1 : TType} -> {t2 : TType} -> {e1 : TExpr} -> {e2 : TExpr} -> {Sigma : Signature} -> {Gamma : Context} -> {Delta : TypeContext}
      -> e1 hasType (t1 => t2) inContext Gamma and Delta given Sigma
      -> e2 hasType t1 inContext Gamma and Delta given Sigma
      -> (TApp e1 e2) hasType t2 inContext Gamma and Delta given Sigma
  TyAppJ : {t1 : TType} -> {t2 : TType} -> {e : TExpr} -> {Sigma : Signature} -> {Gamma : Context} -> {Delta : TypeContext}
        -> e hasType (Forall t1) inContext Gamma and Delta given Sigma
        -> t2 isValidIn Delta
        -> (TTyApp e t2) hasType (tySubst t1 t2) inContext Gamma and Delta given Sigma
  AbsJ : {t1 : TType} -> {t2 : TType} -> {e : TExpr} -> {Sigma : Signature} -> {Gamma : Context} -> {Delta : TypeContext}
      -> e hasType t2 inContext (Gamma , t1) and Delta given Sigma
      -> (TVal (TLambda t1 e)) hasType (t1 => t2) inContext Gamma and Delta given Sigma
  TyAbsJ : {t : TType} -> {e : TExpr} -> {Sigma : Signature} -> {Gamma : Context} -> {Delta : TypeContext}
        -> e hasType t inContext Gamma and (S Delta) given Sigma
        -> (TVal (TTyLambda e)) hasType (Forall t) inContext Gamma and Delta given Sigma

Here’s a typing derivation for the polymorphic constant function:

tyLam : TExpr -> TExpr
tyLam e = TVal (TTyLambda e)

lam : TType -> TExpr -> TExpr
lam t e = TVal (TLambda t e)

polyConst : tyLam (tyLam (lam (TTVar Z) (lam (TTVar (S Z)) (TVar (S Z)))))   -- Λs.Λt.λx:t.λy:s.x
            hasType (Forall (Forall (TTVar Z => (TTVar (S Z) => TTVar Z))))  -- ∀s.∀t.t→s→t
            inContext Nil and Z given Nil
polyConst = TyAbsJ (TyAbsJ (AbsJ (AbsJ (VarJ (Next Found) (TyVarJ Z<Sn)))))  -- written by Agda

data False : Set where

Not : Set -> Set
Not A = A -> False

wrongType : Not (tyLam (lam (TTVar Z) (TVar Z))  -- Λt.λx:t.x
                 hasType (Forall (TTVar Z))      -- ∀t.t
                 inContext Nil and Z given Nil)
wrongType (TyAbsJ ())

Having written all this, we have not defined a type checking algorithm (though Agda’s auto tactic does a pretty good job); we’ve merely specified what evidence that a program is well-typed is. Explicitly, a type checking algorithm would be a function with the following type:

data Maybe (A : Set) : Set where
  Nothing : Maybe A
  Just : A -> Maybe A

typeCheck : (e : TExpr) -> (t : TType) -> (sig : Signature) -> Maybe (e hasType t inContext Nil and Z given sig)
typeCheck = ?

In fact, we’d want to additionally prove that this function never returns Nothing if there does exist a typing derivation that would give e the type t in signature sig. We could formalize this in Agda by instead giving typeCheck the following type:

data Decidable (A : Set) : Set where
  IsTrue : A -> Decidable A
  IsFalse : Not A -> Decidable A

typeCheckDec : (e : TExpr) -> (t : TType) -> (sig : Signature) -> Decidable (e hasType t inContext Nil and Z given sig)
typeCheckDec = ?

This type says that either typeCheckDec will return a typing derivation, or it will return a proof that there is no typing derivation. As the name Decidable suggests, this may not always be possible. Which is to say, type checking may not always be decidable. Note, we can always check that a typing derivation is valid — we just need to verify that we applied the rules correctly — what we can’t necessarily do is find such a derivation given only the expression and the type or prove that no such derivation exists. Similar concerns apply to type inference which could have one of the following types:

record Σ (T : Set) (F : T -> Set) : Set where
  field
    fst : T
    snd : F fst

inferType : (e : TExpr) -> (sig : Signature) -> Maybe (Σ TType (λ t → e hasType t inContext Nil and Z given sig))
inferType = ?

inferTypeDec : (e : TExpr) -> (sig : Signature) -> Decidable (Σ TType (λ t → e hasType t inContext Nil and Z given sig))
inferTypeDec = ?

where Σ indicates a dependent sum, i.e. a pair where the second component (here of type e hasType t inContext Nil and Z given sig) depends on the first component (here of type TType). With type inference we have the additional concern that there may be multiple possible types an expression could have, and we may want to ensure it returns the “most general” type in some sense. There may not always be a good sense of “most general” type, in which case user input is required to pick out of the possible types.

Sometimes the rules themselves can be viewed as the defining rules of a logic program and thus directly provide an algorithm. For example, if we eliminate the rules, types, and terms related to polymorphism, we’d get the simply typed lambda calculus. A Prolog program to do type checking can be written in a few lines with a one-to-one correspondence to the type checking rules (and, for simplicity, also omitting the signature):

lookup(z, [T|_], T).
lookup(s(N), [_|Ctx], T) :- lookup(N, Ctx, T).

typeCheck(var(N), Ctx, T) :- lookup(N, Ctx, T).
typeCheck(app(F,X), Ctx, T2) :- typeCheck(F, Ctx, tarr(T1, T2)), typeCheck(X, Ctx, T1).
typeCheck(lam(B), Ctx, tarr(T1, T2)) :- typeCheck(B, [T1|Ctx], T2).
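The same logic program can be transcribed into Haskell. Since Haskell has no built-in unification, the sketch below (my own; all names are hypothetical) makes inference syntax-directed by annotating lambdas with their argument type, a small deviation from the Prolog version:

```haskell
-- Simply typed lambda calculus type inference, mirroring the Prolog clauses.
-- Lambdas carry their argument type so no unification is needed.
data Ty = TBase | TArr Ty Ty deriving (Eq, Show)
data Tm = TmVar Int | TmApp Tm Tm | TmLam Ty Tm deriving Show

-- lookup/3: deBruijn index lookup in the context
lookupCtx :: [Ty] -> Int -> Maybe Ty
lookupCtx (t : _)  0 = Just t
lookupCtx (_ : ts) n = lookupCtx ts (n - 1)
lookupCtx []       _ = Nothing

-- typeCheck/3: one Haskell case per Prolog clause
infer :: [Ty] -> Tm -> Maybe Ty
infer ctx (TmVar n)   = lookupCtx ctx n
infer ctx (TmApp f x) = case infer ctx f of
  Just (TArr t1 t2) | infer ctx x == Just t1 -> Just t2
  _                                          -> Nothing
infer ctx (TmLam t b) = TArr t <$> infer (t : ctx) b
```

For example, infer [] (TmLam TBase (TmVar 0)) yields Just (TArr TBase TBase), corresponding to the Prolog query typeCheck(lam(var(z)), [], T).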
  1. This Agda file contains all the code from this article.

  2. Most dependently typed languages, such as Coq or Epigram would also be adequate.

  3. Epigram is most notable by actually using this 2D syntax.

  4. See this StackExchange answer for more discussion of this.

Categories: Offsite Blogs

Mark Jason Dominus: Neckbeards and other notes on “The Magnificent Ambersons”

Planet Haskell - Tue, 04/12/2016 - 6:48pm

Last week I read Booth Tarkington’s novel The Magnificent Ambersons, which won the 1919 Pulitzer Prize but today is chiefly remembered for Orson Welles’ 1942 film adaptation.

(It was sitting on the giveaway shelf in the coffee shop, so I grabbed it. It is a 1925 printing, discarded from the Bess Tilson Sprinkle library in Weaverville, North Carolina. The last due date stamped in the back is May 12, 1957.)

The Ambersons are the richest and most important family in an unnamed Midwestern town in 1880. The only grandchild, George, is completely spoiled and grows up to ruin the lives of everyone connected with him with his monstrous selfishness. Meanwhile, as the automobile is invented and the town evolves into a city the Amberson fortune is lost and the family dispersed and forgotten. George is destroyed so thoroughly that I could not even take any pleasure in it.

I made a few marginal notes as I read.


It was a hairier day than this. Beards were to the wearer’s fancy … and it was possible for a Senator of the United States to wear a mist of white whisker upon his throat only, not a newspaper in the land finding the ornament distinguished enough to warrant a lampoon.

I wondered who Tarkington had in mind. My first thought was Horace Greeley:

His neckbeard fits the description, but, although he served as an unelected congressman and ran unsuccessfully for President, he was never a Senator.

Then I thought of Hannibal Hamlin, who was a Senator:

But his neckbeard, although horrifying, doesn't match the description.

Gentle Readers, can you help me? Who did Tarkington have in mind? Or, if we can't figure that out, perhaps we could assemble a list of the Ten Worst Neckbeards of 19th Century Politics.

Other notes

I was startled on Page 288 by a mention of “purple haze”, but a Google Books search reveals that the phrase is not that uncommon. Jimi Hendrix owns it now, but in 1919 it was just purple haze.

George’s Aunt Fanny writes him a letter about his girlfriend Lucy:

Mr. Morgan took your mother and me to see Modjeska in “Twelfth Night” yesterday evening, and Lucy said she thought the Duke looked rather like you, only much more democratic in his manner.

Lucy, as you see, is not entirely sure that she likes George. George, who is not very intelligent, is not aware that Lucy is poking fun at him.

A little later we see George’s letter to Lucy. Here is an excerpt I found striking:

[Yours] is the only girl’s photograph I ever took the trouble to have framed, though as I told you frankly, I have had any number of other girls’ photographs, yet all were passing fancies, and oftentimes I have questioned in years past if I was capable of much friendship toward the feminine sex, which I usually found shallow until our own friendship began. When I look at your photograph, I say to myself “At last, at last here is one that will not prove shallow.”

The arrogance, the rambling, the indecisiveness of tone, and the vacillation reminded me of the speeches of Donald Trump, whom George resembles in several ways. George has an excuse not available to Trump; he is only twenty.

Addendum 20160413: John C. Calhoun seems like a strong possibility:

Categories: Offsite Blogs

Robert Harper: It Is What It Is (And Nothing Else)

Planet Haskell - Tue, 04/12/2016 - 10:37am

A recent discussion of introductory computer science education led to the topic of teaching recursion.  I was surprised to learn that students are being taught that recursion requires understanding something called a “stack” that is nowhere in evidence in their code.  Few, if any, students master the concept, which is usually “covered” only briefly.  Worst, they are encouraged to believe that recursion is a mysterious bit of esoterica that is best ignored.

And thus is lost one of the most important and beautiful concepts in computing.

The discussion then moved on to the implementation of recursion in certain inexplicably popular languages for teaching programming.  As it turns out, the compilers mis-implement recursion, causing unwarranted space usage in common cases.  Recursion is dismissed as problematic and unimportant, and the compiler error is elevated to a “design principle” — to be snake-like is to do it wrong.

And thus is lost one of the most important and beautiful concepts in computing.

And yet, for all the stack-based resistance to the concept, recursion has nothing to do with a stack.  Teaching recursion does not need any mumbo-jumbo about “stacks”.  Implementing recursion does not require a “stack”.  The idea that the two concepts are related is simply mistaken.

What, then, is recursion?  It is nothing more than self-reference, the ability to name a computation for use within the computation itself.  Recursion is what it is, and nothing more.  No stacks, no tail calls, no proper or improper forms, no optimizations, just self-reference pure and simple.  Recursion is not tied to “procedures” or “functions” or “methods”; one can have self-referential values of all types.

Somehow these very simple facts, which date back to the early 1930’s, have been replaced by damaging myths that impede teaching and using recursion in programs.  It is both a conceptual and a practical loss.  For example, the most effective methods for expressing parallelism in programs rely heavily on recursive self-reference; much would be lost without it.  And the allegation that “real programmers don’t use recursion” is beyond absurd: the very concept of a digital computer is grounded in recursive self-reference (the cross-connection of gates to form a latch).  (Which, needless to say, does not involve a stack.)  Not only do real programmers use recursion, there could not even be programmers were it not for recursion.

I have no explanation for why this terrible misconception persists.  But I do know that when it comes to programming languages, attitude trumps reality every time.  Facts?  We don’t need no stinking facts around here, amigo.  You must be some kind of mathematician.

If all the textbooks are wrong, what is right?  How should one explain recursion?  It’s simple.  If you want to refer to yourself, you need to give yourself a name.  “I” will do, but so will any other name, by the miracle of α-conversion.  A computation is given a name using a fixed point (not fixpoint, dammit) operator:  fix x is e stands for the expression e named x for use within e.  Using it, the textbook example of the factorial function is written thus:

fix f is fun n : nat in case n {zero => 1 | succ(n') => n * f n'}.

Let us call this whole expression fact, for convenience.  If we wish to evaluate it, perhaps because we wish to apply it to an argument, its value is

fun n : nat in case n {zero => 1 | succ(n') => n * fact n'}.

The recursion has been unrolled one step ahead of execution.  If we reach fact again, as we will for a positive argument,  fact is evaluated again, in the same way, and the computation continues.  There are no stacks involved in this explanation.

Nor is there a stack involved in the implementation of fixed points.  It is only necessary to make sure that the named computation does indeed name itself.  This can be achieved by a number of means, including circular data structures (non-well-founded abstract syntax), but the most elegant method is by self-application.  Simply arrange that a self-referential computation has an implicit argument with which it refers to itself.  Any use of the computation unrolls the self-reference, ensuring that the invariant is maintained.  No storage allocation is required.

Consequently, a self-referential function such as

fix f is fun (n : nat, m:nat) in case n {zero => m | succ(n') => f (n',n*m)}

executes without needing any asymptotically significant space.  It is quite literally a loop, and no special arrangement is required to make sure that this is the case.  All that is required is to implement recursion properly (as self-reference), and you’re done.  There is no such thing as tail-call optimization.  It’s not a matter of optimization, but of proper implementation.  Calling it an optimization suggests it is optional, or unnecessary, or provided only as a favor, when it is more accurately described as a matter of getting it right.
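Since this is a Haskell aggregator, Harper's fix x is e construct can be rendered directly in Haskell as a sketch (fix below is the standard lazy fixed point, equivalent to Data.Function.fix; fact and factIter are illustrative names, not from the post's language):

```haskell
-- Recursion as pure self-reference: the definition names itself; no stack machinery.
fix :: (a -> a) -> a
fix f = let x = f x in x

-- The textbook factorial: a pending multiplication follows each self-call.
fact :: Integer -> Integer
fact = fix (\f n -> if n == 0 then 1 else n * f (n - 1))

-- The accumulator variant: every self-call is the last thing done,
-- so it runs as a loop in constant space.
factIter :: (Integer, Integer) -> Integer
factIter = fix (\f (n, m) -> if n == 0 then m else f (n - 1, n * m))
```

The contrast between fact and factIter illustrates the post's point: it is the pending multiplication, not the self-reference, that needs to be remembered.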

So what, then, is the source of the confusion?  The problem seems to be a too-close association between compound expressions and recursive functions or procedures.  Consider the classic definition of factorial given earlier.  The body of the definition involves the expression

n * fact n'

where there is a pending multiplication to be accounted for.  Once the recursive call (to itself) completes, the multiplication can be carried out, and it is necessary to keep track of this pending obligation.  But this phenomenon has nothing whatsoever to do with recursion.  If you write

n * square n'

then it is equally necessary to record where the external call is to return its value.  In typical accounts of recursion, the two issues get confused, a regrettable tragedy of error.

Really, the need for a stack arises the moment one introduces compound expressions.  This can be explained in several ways, none of which need pictures or diagrams or any discussion about frames or pointers or any extra-linguistic concepts whatsoever.  The best way, in my opinion, is to use Plotkin’s structural operational semantics, as described in my Practical Foundations for Programming Languages (Second Edition) on Cambridge University Press.

There is no reason, nor any possibility, to avoid recursion in programming.  But folk wisdom would have it otherwise.  That’s just the trouble with folk wisdom, everyone knows it’s true, even when it’s not.

Update: Dan Piponi and Andreas Rossberg called attention to a pertinent point regarding stacks and recursion.  The conventional notion of a run-time stack records two distinct things, the control state of the program (such as subroutine return addresses, or, more abstractly, pending computations, or continuations), and the data state of the program (a term I just made up because I don’t know a better one, for managing multiple simultaneous activations of a given procedure or function).  Fortran (back in the day) didn’t permit multiple activations, meaning that at most one instance of a procedure can be in play at a given time.  One consequence is that α-equivalence can be neglected: the arguments of a procedure can be placed in a statically determined spot for the call.  As a member of the Algol-60 design committee Dijkstra argued, successfully, for admitting multiple procedure activations (and hence, with a little extra arrangement, recursive/self-referential procedures).  Doing so requires that α-equivalence be implemented properly; two activations of the same procedure cannot share the same argument locations.  The data stack implements α-equivalence using de Bruijn indices (stack slots); arguments are passed on the data stack using activation records in the now-classic manner invented by Dijkstra for the purpose.  It is not self-reference that gives rise to the need for a stack, but rather re-entrancy of procedures, which can arise in several ways, not just recursion.  Moreover, recursion does not always require re-entrancy—the so-called tail call optimization is just the observation that certain recursive procedures are not, in fact, re-entrant.  (Every looping construct illustrates this principle, albeit on an ad hoc basis, rather than as a general principle.)

Filed under: Programming, Teaching
Categories: Offsite Blogs

Are libraries expected to enforce safe usage of thread-unsafe functions?

haskell-cafe - Tue, 04/12/2016 - 8:57am
Hi friends, I am writing a library that has Handle-like values for resources that are not thread-safe (foreign state from a non-thread-safe C library). In other words, a resource can only be "used" by one Haskell thread at a time. Unless I take steps, if one shares the resource between concurrent Haskell threads, bad things will happen at runtime (even with a single-threaded runtime). Naturally, I want consumers to use my library safely, so I am considering making it safe for all by pessimistically enforcing safe usage with MVars. Is it common/expected that Haskell libraries enforce safe access like this? MVars are not free from cost, but are they cheap (i.e. almost as fast as no forced synchronization) as long as no blocking/rescheduling occurs? Thanks, -- Thomas Koster _______________________________________________ Haskell-Cafe mailing list Haskell-Cafe< at >
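A minimal sketch of the MVar approach being asked about (SafeHandle, newSafeHandle, and withResource are hypothetical names, and a plain Int stands in for the non-thread-safe foreign state):

```haskell
import Control.Concurrent.MVar

-- The MVar guarantees that at most one Haskell thread uses the raw
-- resource at a time; here an Int stands in for the C library state.
newtype SafeHandle = SafeHandle (MVar Int)

newSafeHandle :: Int -> IO SafeHandle
newSafeHandle n = SafeHandle <$> newMVar n

-- Concurrent callers block until the resource is free again; the
-- action returns the updated state together with a result.
withResource :: SafeHandle -> (Int -> IO (Int, a)) -> IO a
withResource (SafeHandle m) = modifyMVar m
```

On the cost question: in my understanding an uncontended MVar take/put is cheap (comparable to an atomic operation plus a few writes), though not free; the real cost appears under contention, when threads must block and reschedule.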
Categories: Offsite Discussion

[TFP 2016] extended deadline, april 25 2016, final call for papers

haskell-cafe - Tue, 04/12/2016 - 8:35am
TFP 2016 has extended its deadline for draft papers by two weeks (now April 25). Although all draft papers accepted to TFP 2016 will be invited to submit to the post-symposium formal proceedings, authors are reminded that they are not obligated to do so; we welcome works in progress that may not be destined for the TFP proceedings. Thanks, David Van Horn ----------------------------- C A L L F O R P A P E R S ----------------------------- ======== TFP 2016 =========== 17th Symposium on Trends in Functional Programming June 8-10, 2016 University of Maryland, College Park Near Washington, DC The symposium on Trends in Functional Programming (TFP) is an international forum for researchers with interests in all aspects of functional programming, taking a broad view of current and future trend
Categories: Offsite Discussion

Haskell DockerHub Official Repository: Needs Maintainer TLC

haskell-cafe - Tue, 04/12/2016 - 3:53am
Greetings, I have a confession to make: I've been a neglectful maintainer of the "haskell" Official Repository on DockerHub. I'm hoping to find a proper maintainer who is both a Docker nerd and a Haskell enthusiast to ensure that it is kept up to date and improved over time. Bonus points for someone involved in GHC development, or at an organization that is heavily involved in the Haskell community. I'm happy to help you get up to speed with where the image is at today and how to effectively submit future iterations after you've had a chance to review the DockerHub Official Repositories documentation at The current Dockerfiles are located at and there are two other maintainers who also contribute occasionally. Thanks, _______________________________________________ Haskell-Cafe mailing list Haskell-Cafe< at >
Categories: Offsite Discussion

Gabriel Gonzalez: Worst practices should be hard

Planet Haskell - Mon, 04/11/2016 - 5:26pm

In this post I hope to persuade you that Haskell is well-adapted to software engineering in the large. To motivate this post, I would like to begin with a simple thought experiment.

Suppose that we could measure short-term programmer productivity for two hypothetical programming languages named "X" and "Y". In practice, we cannot easily compare productivity between two programming languages, but just pretend that we can for the purpose of this example.

First, we will measure the short-term productivity of languages "X" and "Y" when the programmer carefully follows all best practices. These are made up numbers:

"Best Practices" (whatever that means):

              |     5
Arbitrary     |  +-----+     4
Productivity  |  |     |  +-----+
Units         |  |     |  |     |
              |  |     |  |     |
              +--+-----+--+-----+--
                   "X"      "Y"

Higher productivity is better, so from the above chart alone we might conclude that "X" is a better language than "Y", all other things equal.

However, all other things might not be equal and there might be other redeeming features of language "Y". Perhaps language "Y" excels when the programmer takes short cuts and throws best practices to the wind. Let's call this "worst practices" and measure developer productivity when cutting corners:

"Worst Practices" (whatever that means):

              |     7
              |  +-----+
Arbitrary     |  |     |
Productivity  |  |     |
Units         |  |     |     3
              |  |     |  +-----+
              |  |     |  |     |
              +--+-----+--+-----+--
                   "X"      "Y"

Weird! Language "X" still performs better! In fact, language "Y" did such a poor job that developer productivity decreased when they followed worst practices.

Therefore, the choice is clear: language "X" is strictly a better language, right?

Not necessarily!

Long-term productivity

I will actually argue that language "Y" is the superior language for large and long-lived projects.

All of our charts so far only measured short-term productivity. However, long-term productivity crucially depends on how much developers follow best practices. That's what makes them "best practices".

Perhaps we don't need to perform additional experiments to predict which language will perform better long-term. To see why, let's transpose our original two charts. This time we will plot short-term productivity for language "X" when following either best practices (i.e. "BP") or worst practices (i.e. "WP"):

Language "X":

              |              7
              |           +-----+
              |     5     |     |
              |  +-----+  |     |
Arbitrary     |  |     |  |     |
Productivity  |  |     |  |     |
Units         |  |     |  |     |
              +--+-----+--+-----+--
                  "BP"      "WP"

This chart highlights one major flaw in the design of language X: the language rewards worst practices! Therefore, developers who use language "X" will always be incentivized to do the wrong thing, especially when they are under deadline pressure.

In contrast, language "Y" rewards best practices!

Language "Y":

              |     4
Arbitrary     |  +-----+     3
Productivity  |  |     |  +-----+
Units         |  |     |  |     |
              |  |     |  |     |
              +--+-----+--+-----+--
                  "BP"      "WP"

A developer using language "Y" can't help but follow best practices. The more deadline pressure there is, the greater the incentive to do the right thing.

Developers using language "X" might compensate by creating a culture that shames worst practices and glorifies best practices in order to realign incentives, but this does not scale well to large software projects and organizations. You can certainly discipline yourself. You can even police your immediate team to ensure that they program responsibly. But what about other teams within your company? What about third-party libraries that you depend on? You can't police an entire language ecosystem to do the right thing.


I believe that Haskell is the closest modern approximation to the ideal of language "Y". In other words, Haskell aligns short-term incentives with long-term incentives.

Best practices are obviously not a binary "yes/no" proposition or even a linear scale. There are multiple dimensions to best practices and some languages fare better on some dimensions and worse on others. Some dimensions where Haskell performs well are:

  • Absence of null
  • Limitation of effects
  • Immutability
  • Generic types
  • Strongly typed records

For each of the above points I will explain how Haskell makes it easier to do the right thing than the wrong thing.

Haskell isn't perfect, though, so in the interests of balance I will also devote a section to the following areas where Haskell actively rewards worst practices and punishes best practices!

  • Poor performance
  • Excessive abstraction
  • Unchecked exceptions
Absence of null

Haskell has no null/None/nil value built into the language but Haskell does have a Maybe type somewhat analogous to Optional in Java. Maybe is defined within the language as an ordinary data type like this:

data Maybe a = Just a | Nothing

Maybe differs from null-like solutions in other languages because Maybe shows up in the type. That means that if I don't see Maybe in the type then I know for sure that the value cannot possibly be empty. For example, I can tell from the following type that z can never be None or null:

z :: String

This differs from many other languages where all values could potentially be null because there is no way to opt out of nullability.

The second thing that differentiates Maybe from other solutions is that you can't forget to handle the empty case. If you have a function that expects a String argument:

exclaim :: String -> String
exclaim str = str ++ "!"

... you can't accidentally apply that function to a value of type Maybe String or you will get a type error:

>>> x = Just "Hello" :: Maybe String
>>> exclaim x -- TYPE ERROR!

Instead, you have to explicitly handle the empty case, somehow. For example, one approach is to use pattern matching and provide a default value when the argument is empty:

case x of
Just str -> exclaim str
Nothing -> "Goodbye!"

Haskell rewards best practices with respect to nullable values by making it easier to program without them:

  • everything is non-null by default
  • nullable types are longer because they must explicitly declare nullability
  • nullable code is longer because it must explicitly handle nullability

... therefore Haskell programmers tend to use nullable values sparingly because it's the path of least resistance!

Limitation of input/output effects

Haskell treats input/output effects (IO for short) the same way that Haskell treats nullable values:

  • Everything is IO-free by default
  • IO types are longer because they must explicitly declare IO effects
  • IO code is longer because it must explicitly propagate effects

Therefore Haskell programmers tend to err on the side of using effects sparingly because it's the path of least resistance!


Immutability

Haskell treats immutability the same way that Haskell treats nullable values and IO:

  • Everything is immutable by default
  • Mutable types are longer because they must explicitly declare state/references
  • Mutable code is longer because it must thread state/references

Therefore Haskell programmers tend to err on the side of using state and mutable references sparingly because it's the path of least resistance!
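As a small illustrative sketch of that contrast (the names bumpPure and bumpRef are made up; IORef is the mutable reference type from base):

```haskell
import Data.IORef

-- Immutable by default: "updating" a value just produces a new one.
bumpPure :: Int -> Int
bumpPure n = n + 1

-- Mutation must be declared in the type (IORef, IO) and threaded explicitly.
bumpRef :: IORef Int -> IO ()
bumpRef ref = modifyIORef' ref (+ 1)
```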

Generic types

By default, the Haskell compiler infers the most general type for your code. For example, suppose that I define the following top-level function:

addAndShow x y = show (x + y)

I can omit the type signature for that function because the compiler will already infer the following most general type for addAndShow:

>>> :type addAndShow
addAndShow :: (Num a, Show a) => a -> a -> String

I actually have to go out of my way to provide a more specific type for that function. The only way I can get the compiler to infer a less general type is to provide an explicit type signature for addAndShow:

addAndShow :: Int -> Int -> String
addAndShow x y = show (x + y)

In other words, I have to actively go out of my way to make functions less general. The path of least resistance is to just let the compiler infer the most general type possible!

Strongly typed records

If you come to Haskell from a Python or Clojure background you will definitely notice how painful it is to work with dictionaries/maps in Haskell. Haskell has no syntactic support for map literals; the best you can do is something like this:

myMap =
[ ("foo", "Hello!")
, ("bar", "1" )
, ("baz", "2.0" )
]

You also can't store different types of values for each key; all values in a map must be the same type.

In contrast, defining new records in Haskell is extremely cheap compared to other languages and each field can be a distinct type:

data MyRecord = MyRecord
{ foo :: String
, bar :: Int
, baz :: Double
}

myRecord = MyRecord
{ foo = "Hello"
, bar = 1
, baz = 2.0
}

Therefore, Haskell programmers will very predictably reach for records instead of maps/dictionaries because records are infinitely more pleasant to use.

Poor performance

Haskell doesn't always reward best practices, though. Haskell is one of the fastest functional programming languages in experienced hands, but the language very much rewards poor performance in beginner hands.

For example, Haskell's syntax and Prelude highly rewards linked lists over vectors/arrays. Even worse, the default String type is a linked list of characters, which is terrible for performance.

There are more efficient solutions to this problem, such as:

  • Using the text library for high-performance text
  • Using the vector library for high-performance arrays
  • Using the OverloadedStrings and OverloadedLists extensions to overload list literal syntax and string literal syntax to support more efficient types

However, the path of least resistance is still to use linked lists and strings. This problem is so pervasive that I see even respected and experienced library authors who care a lot about performance succumb to the convenience of these less efficient types every once in a while.

Excessive abstraction

As you learn the language you begin to realize that Haskell makes some types of advanced abstraction very syntactically cheap and some people get carried away. I know that I'm guilty of this.

From an outside perspective, the typical progression of a Haskell programmer might look something like this:

  • Week 1: "How r monad formed?"
  • Week 2: "What is the difference between Applicative+Category and Arrow?"
  • Week 3: "I'm beginning a PhD in polymorphic type family recursion constraints"
  • Week 4: "I created my own language because Haskell's not powerful enough"

(Edit: I do not mean to downplay anybody's difficulty learning Haskell; this is just to illustrate that the language fosters and encourages "architecture astronauts")

The temptation to play with all of Haskell's abstract facilities is so strong for some (like me) because for those people abstraction is too fun!

Unchecked exceptions

Fun fact: did you know that the conventional way to throw checked exceptions in Haskell is to use monad transformers?

It's far easier to just use throwIO to lump all exceptions under the IO type than to declare in the types all the possible exceptions that you might throw. That's at least partially checked in the sense that an IO in the type signature warns you about the possibility of exceptions even if it won't tell you which specific exceptions might be thrown.

However, throwIO is surprisingly not in the Prelude by default, but error is!

error is the worst way to throw a Haskell exception because it is:

  • marked pure, so it can strike anywhere
  • asynchronous, so it can strike at any time
  • completely invisible at the type level
  • triggered by accidental evaluation

A lot of people mistakenly use error instead of throwIO without realizing the consequences of doing so. I've seen otherwise very smart and talented Haskell programmers get this wrong.

Following exception best practices requires discipline and effort, which means that under pressure people will generally not take the time to propagate and catalog exceptions correctly.
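For comparison, here is a minimal checked-exception sketch using Except from the transformers package (ParseError and parseLength are hypothetical names, not from any real library); the possible failure is visible in the type, unlike with error:

```haskell
import Control.Monad.Trans.Except (Except, runExcept, throwE)

-- The type advertises exactly which exception can be thrown.
data ParseError = EmptyInput deriving (Eq, Show)

parseLength :: String -> Except ParseError Int
parseLength "" = throwE EmptyInput
parseLength s  = return (length s)
```

Running the computation with runExcept then forces the caller to confront the Left case.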


Conclusion

Despite those flaws, I still believe that Haskell incentivizes the right behavior where it counts:

  • minimizing null
  • minimizing state
  • minimizing effects
  • minimizing weakly-typed maps/dictionaries

Those are the qualities that I see large projects fall down on time after time due to inexperience, developer churn, and time constraints.

More generally, programming languages should be designed with attention to incentives. Don't just focus on absolute productivity, but also pay attention to the relative productivity of best practices and worst practices. Worst practices should be hard!

Categories: Offsite Blogs

Differences between QuasiQuotes and TemplateHaskell

haskell-cafe - Mon, 04/11/2016 - 2:59pm
Hi, Originally, I thought that QuasiQuotes build on top of TemplateHaskell, since they just provide a way of parsing a string into a Template Haskell AST. That is, a quasi-quoter is defined by four parsers, of types String -> Q Exp, String -> Q Pat, String -> Q Type, and String -> Q Dec, that parse EDSLs into corresponding Haskell (abstract) syntax. However, while playing around with the Shakespeare package's quasiquoter hamlet, I noticed that (e.g.) a snippet html = [hamlet| <html> <body> <h1>Hello World |] () has type Html and not (as I would expect) Q Exp. Hence, it seems as if quasiquotes are spliced in implicitly? This confuses me since in Template Haskell in general a quote (e.g.,) [e| \x -> x |] has type Q Exp, which then needs to be explicitly spliced in order to produce the id function \x -> x. So why do quasiquotes not have types Q Exp, but rather concrete types? If anyone could clarify my confusion, this would be great! Dominik.
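For what it's worth, a quasi-quoter is just a record of those four parsers, and GHC splices the resulting Q Exp implicitly at each use site — which is why [hamlet|...|] has the spliced type rather than Q Exp. A minimal sketch (the str quoter here is hypothetical):

```haskell
import Language.Haskell.TH
import Language.Haskell.TH.Quote (QuasiQuoter (..))

-- A quoter whose quoteExp turns the quoted text into a string literal.
-- At a use site, [str|hello|] runs quoteExp and splices the Q Exp in
-- automatically, so the quote itself has type String, not Q Exp.
str :: QuasiQuoter
str = QuasiQuoter
  { quoteExp  = litE . stringL
  , quotePat  = error "str: not usable in pattern context"
  , quoteType = error "str: not usable in type context"
  , quoteDec  = error "str: not usable in declaration context"
  }
```

(As with all Template Haskell, the quoter must be defined in a different module from the one that uses it, due to the stage restriction.)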
Categories: Offsite Discussion

Neil Mitchell: GHCid 0.6 Released

Planet Haskell - Mon, 04/11/2016 - 2:21pm

Summary: I've released a new version of GHCid, which can interrupt running tests.

I've just released version 0.6.1 of GHCid. To a first approximation, ghcid opens ghci and runs :reload whenever your source code changes, formatting the output to fit a fixed height console. Unlike other Haskell development tools, ghcid is intended to be incredibly simple - it works when nothing else does. This new version features:

Much faster: Since version 0.5 GHCid passes -fno-code to ghci when it makes sense, which is about twice as fast.

Interruptible test commands: Since version 0.4 ghcid has supported a --test flag to pass a test command (e.g. :main) which is run whenever the code is warning free. As of version 0.6 that command will be interrupted if it needs to :reload, allowing long running tests and persistent "tests" - e.g. spawning web servers or GUIs. Thanks to Reid Draper for showing it was possible as part of his ordeal project and Luigy Leon for merging that with GHCid.

Stack integration: If you have a stack.yaml file and a .stack-work directory it will use stack commands to run your project. Thanks to the Stack Team, in particular Michael Sloan, for helping get through all the hoops and providing the necessary functionality in Stack.

More restart/reload flags: It's been possible for a while to pass --restart to restart ghci if certain files change (e.g. the .cabal file). Now there is a separate --reload flag to cause :reload instead of a full restart, and both flags can take directories instead of individual files.

Major relayering: For 0.6 I significantly refactored much of the GHCid code. There has always been an underlying Language.Haskell.Ghcid API, and GHCid was built on top. With the new version the underlying library has been given a significant refactoring, mostly so that interruptions are handled properly without race conditions and with a sane multithreading story. On top of that is a new Session layer, which provides a session abstraction - a ghci instance which tracks more state (e.g. which warnings have been emitted for already loaded files). Then the Ghcid module builds on top, with much less state management. By simplifying and clarifying the responsibility of each layer certain issues such as leaking old ghci processes and obscure race conditions disappeared.

I've been making use of many of these features in the Shake website generator, which I invoke with:

ghcid --test=":main debug" --reload=parts --reload=../docs

This project uses Stack, so relies on the new stack integration. It runs :main debug as the test suite, which generates the website whenever the code reloads. Furthermore, if any of the parts (template files) or docs (Markdown pages) change the website regenerates. I can now edit the website, and saving it automatically regenerates the web pages within a second.

Categories: Offsite Blogs

Mark Jason Dominus: “…and she hated every stitch”

Planet Haskell - Mon, 04/11/2016 - 1:04pm

Recently the following amusing item was going around on Twitter:

I have some bad news and some good news. First the good news: there is an Edith-Anne. Her name is actually Patty Polk, and she lived in Maryland around 1800.

Now the bad news: the image above is almost certainly fake. It may be a purely digital fabrication (from whole cloth, ha ha), or more likely, I think, it is a real physical object, but of recent manufacture.

I wouldn't waste blog space just to crap on this harmless bit of fake history. I want to give credit where it is due, to Patty Polk who really did do this, probably with much greater proficiency.

Why I think it's fake

I have not looked into this closely, because I don't think the question merits a lot of effort. But I have two reasons for thinking so.

The main one is that the complaint “Edith-Anne … hated every Stitch” would have taken at least as much time and effort as the rest of the sampler, probably more. I find it unlikely that Edith-Anne would have put so much work—so many more hated stitches—into her rejection.

Also, the work is implausibly poor. These samplers were stitched by girls typically between the ages of 7 and 14, and their artisanship was much, much better than either section of this example. Here is a sampler made by Lydia Stocker in 1798 at the age of 12:

Here's one by Louisa Gauffreau, age 8:

Compare these with Edith-Anne's purported cross-stitching. One tries to imagine how old she is, but there seems to be no good answer. The crooked stitching is the work of a very young girl, perhaps five or six. But the determination behind the sentiment, and the perseverance that would have been needed to see it through, belong to a much older girl.

Of course one wouldn't expect Edith-Anne to do good work on her hated sampler. But look at the sampler at right, wrought by a young Emily Dickinson, who is believed to have disliked the work and to have intentionally done it poorly. Even compared with this, Edith-Anne's claimed sampler doesn't look like a real sampler.

Patty Polk

Web search for “hated every stitch” turns up several other versions of Edith-Anne, often named Polly Cook [1] or Mary Pitt [2] (“This was done by Mary Pitt / Who hated every stitch of it”), but without any reliable source.

However, Patty Polk is reliably sourced. Bolton and Coe's American Samplers [3] describes Miss Polk's sampler:

POLK, PATTY. [Cir. 1800. Kent County, Md.] 10 yrs. 16"×16". Stem-stitch. Large garland of pinks, roses, passion flowers, nasturtiums, and green leaves; in center, a white tomb with “G W” on it, surrounded by forget-me-nots. “Patty Polk did this and she hated every stitch she did in it. She loves to read much more.”

The description was provided by Mrs. Frederic Tyson, who presumably owned or had at least seen the sampler. Unfortunately, there is no picture. The “G W” is believed to refer to George Washington, who died in 1799.

There is a lively market in designs for pseudo-vintage samplers that you can embroider yourself and “age”. One that features Patty Polk is produced by Falling Star Primitives:

Thanks to Lee Morrison of Falling Star Primitives for permission to use her “Patty Polk” design.


1. Parker, Rozsika. The Subversive Stitch: Embroidery and the Making of the Feminine. Routledge, 1989. p. 132.

2. Wilson, Erica. Needleplay. Scribner, 1975. p. 67.

3. Bolton, Ethel Stanwood and Eva Johnston Coe. American Samplers. Massachusetts Society of the Colonial Dames of America, 1921. p. 210.

[ Thanks to several Twitter users for suggesting gender-neutral vocabulary. ]

[ Addendum: Twitter user Kathryn Allen observes that Edith-Anne hated cross-stitch so much that she made another sampler to sell on eBay. Case closed. ]

[ Addendum: Ms. Allen further points out that the report by Mrs. Tyson in American Samplers may not be reliable, and directs me to the discussion by J.L. Bell, Clues to a Lost Sampler. ]

Categories: Offsite Blogs

JTRES 2016 Call for Papers

General haskell list - Mon, 04/11/2016 - 9:20am
====================================================================== CALL FOR PAPERS The 14th Workshop on Java Technologies for Real-Time and Embedded Systems JTRES 2016 Part of the Managed Languages & Runtimes Week 2016 29 August - 2 September 2016 Lugano, Switzerland ====================================================================== Submission deadline: 12 June, 2016 Submission site: ====================================================================== Over 90% of all microprocessors are now used for real-time and embedded applications. Embedded devices are deployed on a broad diversity of distinct processor architectures and operating systems. The application software for many embedded devices is custom tailored if
Categories: Incoming News

MPTC and quantification in GADTs vs. regular ADTs

haskell-cafe - Mon, 04/11/2016 - 8:57am
Good day folks! What is the explanation for the difference in GHC reaction towards the two variants of the following piece of code: 1 data T row col where 1 T ∷ ∀ row col. MPTC row col ⇒ 1 { col ∷ col 1 , row ∷ row } → T row col 2 data T row col 2 = ∀ row col. MPTC row col ⇒ 2 T { col ∷ col 2 , row ∷ row } Note the missing Nevertheless, the former typechecks, while the latter is rejected with: I'm sure this must have something to do with GADTs carrying type equality constraints, but still I don't quite understand the difference in interpretation of the forall quantification. To my understanding, a value bound by the pattern 'w< at >T{..}' ought to be treated as sufficient evidence for 'MPTC row col', no matter whether it is a GADT or not. ..but it isn't.
Categories: Offsite Discussion

Jasper Van der Jeugt: Haskell: mistakes I made

Planet Haskell - Sun, 04/10/2016 - 6:00pm

A week or two ago, I gave a talk at the Zurich Haskell Meetup about Haskell mistakes I have made in the past, and how you can fix them. The slides can be found here, and a recording of the talk is available online, thanks to Simon Meier’s prowess as a cameraman. I’m just glad I managed to strike a great pose in the video thumbnail.
Categories: Offsite Blogs

C2HS - marshalling arrays

haskell-cafe - Sun, 04/10/2016 - 1:09pm
Hi, I am using c2hs and I want to create bindings for a C function with the following signature: my_function(my_struct_t*, int* values, size_t num_values); Ideally I'd like to have a function that accepts [CInt], not a pair (pointer, size). What is the easiest way to do it? Can c2hs help me with this or I need to do it manually? Regards, Alexey. _______________________________________________ Haskell-Cafe mailing list Haskell-Cafe< at >
Categories: Offsite Discussion

Yesod Web Framework: Supporting separate read and write databases in persistent

Planet Haskell - Sun, 04/10/2016 - 9:00am

persistent has just released a new major version. With this release, queries which only read data are distinguished, at the type level, from queries which write data. This makes using a read replica database safer and generally makes our types more informative. Breakage should be minimal for end users.

Scaling a database

Many applications eventually reach the limits of a single database machine and must spread work across multiple machines. Distributing state across multiple machines can introduce a great deal of complexity. Consequently, there are many approaches to dealing with multiple databases. The CAP theorem ensures that none of them is perfect (Is there ever a good impossibility theorem?). Front Row Education chose to go with streaming, native replication from a PostgreSQL write database to a PostgreSQL read database.

Possible solutions

What's the best way to program against a write database with a read replica? Here are a few approaches:

1. Run opaque queries against a chosen backend.

In this approach, we don't change any of the persistent machinery. selectList, insert and runSqlPool stay the same:

runSqlPool :: (MonadBaseControl IO m) => ReaderT SqlBackend m a -> Pool SqlBackend -> m a

writeBackend :: SqlBackend
readBackend :: SqlBackend

insertStudent :: ReaderT SqlBackend IO StudentId
insertStudent = insert student

selectStudents :: ReaderT SqlBackend IO [Entity Student]
selectStudents = selectList ([] :: [Filter Student]) []

insertAndSelect :: IO [Entity Student]
insertAndSelect = do
  _ <- runSqlPool (insertStudent >> insertStudent) writeBackend
  runSqlPool selectStudents readBackend

We choose which backend to run our queries against at execution time. That is, we pass one backend to runSqlPool if we want to execute the query against the read database and a different backend if we want to execute our query against the write database.

This approach does work in the most basic sense of the word. But it's manual and error-prone. Nothing stops us from accidentally running a write query against a read database and getting an error at runtime. That's not the Haskell way! We'd be much better off encoding this read and write information in the query's type.

2. update, delete and insert write. select reads.

In this approach we create wrappers around SqlBackend called SqlReadBackend and SqlWriteBackend. Then, we specify that all selects (reads) will operate against the read database and all inserts, updates, and deletes (writes) will operate against the write database. We can intermix queries of different types with multiple (now type-safe) calls to runSqlPool:

runSqlPool :: (MonadBaseControl IO m, IsSqlBackend backend) => ReaderT backend m a -> Pool backend -> m a

writeBackend :: SqlWriteBackend
readBackend :: SqlReadBackend

insertStudent :: ReaderT SqlWriteBackend IO StudentId

selectStudents :: ReaderT SqlReadBackend IO [Entity Student]

insertAndSelect :: IO [Entity Student]
insertAndSelect = do
  _ <- runSqlPool (insertStudent >> insertStudent) writeBackend
  runSqlPool selectStudents readBackend

Attempting to run insertStudent against the readBackend will result in a type error. Nice!

Unfortunately, it will also result in a type error when attempting to run selectStudents against the writeBackend, which is why we used two calls to runSqlPool in the example above. This inability to mix reads and writes in a single transaction is rather restrictive.

This approach also ignores problems of eventual consistency. Even under streaming replication, there is some lag (hopefully only a few milliseconds or less) between the write database and the read database. If we can't run reads in the same transaction, on the same DB, as the writes, we have a serious problem. In the above example, we have no guarantee that our student insertions will have propagated to the read DB in time for the select that immediately follows the insert.

3. update, delete and insert write. select can be used in a read or write context.

We must generalize our read operations so that we can still run them against the write database when we need to.

runSqlPool :: (MonadBaseControl IO m, IsSqlBackend backend) => ReaderT backend m a -> Pool backend -> m a

writeBackend :: SqlWriteBackend
readBackend :: SqlReadBackend

instance SqlBackendCanRead SqlWriteBackend
instance SqlBackendCanRead SqlReadBackend

insertStudent :: ReaderT SqlWriteBackend IO StudentId

selectStudents :: (SqlBackendCanRead backend) => ReaderT backend IO [Entity Student]

insertAndSelect :: IO [Entity Student]
insertAndSelect = runSqlPool (insertStudent >> insertStudent >> selectStudents) writeBackend

We now use type classes to say that write queries can only run against the write database, while read queries can run against either type of database. We can defer the decision of where to run a read query until the use site, but in a type-safe way.
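The mechanics of this type-class trick can be sketched without persistent at all. The following is a minimal, self-contained model (illustrative only: SqlBackendCanRead here is a stand-in for persistent's class, and backendName is invented for the demo); the point is that write queries are pinned to SqlWriteBackend, while read queries are polymorphic over any backend with a read instance:

```haskell
import Control.Monad.Trans.Reader (ReaderT, ask, runReaderT)

-- Toy stand-ins for persistent's backend types.
newtype SqlReadBackend = SqlReadBackend String
newtype SqlWriteBackend = SqlWriteBackend String

-- Any backend we are allowed to read from.
class SqlBackendCanRead backend where
  backendName :: backend -> String

instance SqlBackendCanRead SqlReadBackend where
  backendName (SqlReadBackend n) = n

instance SqlBackendCanRead SqlWriteBackend where
  backendName (SqlWriteBackend n) = n

-- A write query: its type pins it to the write backend.
insertStudent :: ReaderT SqlWriteBackend IO String
insertStudent = do
  SqlWriteBackend n <- ask
  pure ("insert on " ++ n)

-- A read query: runs against either backend.
selectStudents :: SqlBackendCanRead backend => ReaderT backend IO String
selectStudents = do
  b <- ask
  pure ("select on " ++ backendName b)
```

With these definitions, `runReaderT (insertStudent >> selectStudents) (SqlWriteBackend "write-db")` type-checks, while `runReaderT insertStudent (SqlReadBackend "read-db")` is rejected at compile time.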


The new version of persistent follows the third approach.

Types as documentation

IO is sometimes referred to as Haskell's "sin bin". That is, a great number of effects end up marked as IO. Consequently, when you see IO in a type signature, it's hard to determine which effects that function uses. Does the function write to disk or get the current time? A more fine-grained type would make our types more informative.

Along similar lines, splitting the monolithic SqlPersistT into SqlReadT and SqlWriteT allows us to more clearly signal the capabilities leveraged inside a given function. When we see SqlReadT, we can be confident that the underlying database state hasn't changed.

Breaking changes

This version of persistent shouldn't break things for application authors and end users of persistent. You can continue to use SqlBackend, which is now an instance of SqlBackendCanRead and SqlBackendCanWrite.

Library authors may need to modify some type signatures to work with the new machinery.

For example,

get404 :: (MonadIO m, PersistStore backend, backend ~ PersistEntityBackend val, PersistEntity val)
       => Key val -> ReaderT backend m val

becomes

get404 :: (MonadIO m, PersistStore backend, BaseBackend backend ~ PersistEntityBackend val, PersistEntity val)
       => Key val -> ReaderT backend m val

which leverages

instance HasPersistBackend SqlBackend where
  type BaseBackend SqlBackend = SqlBackend

instance HasPersistBackend SqlReadBackend where
  type BaseBackend SqlReadBackend = SqlBackend

instance HasPersistBackend SqlWriteBackend where
  type BaseBackend SqlWriteBackend = SqlBackend

from persistent.

This new concept of BaseBackend allows us to still assign a unique type of backend to each PersistEntity despite the availability of SqlReadBackend, SqlWriteBackend and SqlBackend.
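To see how the associated type family achieves this, here is a self-contained sketch (simplified; these are not persistent's actual definitions, and the wrapper representations are invented for the demo). Every wrapper backend reports SqlBackend as its base, so a constraint like BaseBackend backend ~ PersistEntityBackend val is satisfied by all three:

```haskell
{-# LANGUAGE TypeFamilies #-}

-- Toy model: the read/write wrappers each carry an underlying SqlBackend.
data SqlBackend = SqlBackend deriving (Eq, Show)
newtype SqlReadBackend = SqlReadBackend SqlBackend
newtype SqlWriteBackend = SqlWriteBackend SqlBackend

class HasPersistBackend backend where
  type BaseBackend backend
  persistBackend :: backend -> BaseBackend backend

instance HasPersistBackend SqlBackend where
  type BaseBackend SqlBackend = SqlBackend
  persistBackend = id

instance HasPersistBackend SqlReadBackend where
  type BaseBackend SqlReadBackend = SqlBackend
  persistBackend (SqlReadBackend b) = b

instance HasPersistBackend SqlWriteBackend where
  type BaseBackend SqlWriteBackend = SqlBackend
  persistBackend (SqlWriteBackend b) = b
```

All three instances collapse to the same base type, which is how entity definitions can keep a single backend type while queries pick a read or write wrapper.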


If you have a read replica, you can now access it more safely. Even if you don't, your types are now a little more informative. And you get this for almost free!

Categories: Offsite Blogs

Emacs (was stack and the Atom editor)

haskell-cafe - Sun, 04/10/2016 - 8:36am
[Note the changed subject line :-) ] I've been having a great deal of confusion/trouble with emacs haskell-mode of late. No, this is not a gripe about haskell-mode. Just that I find myself confused over 1. The old comint interface and the new λ one -- when does which kick in 2. Basic setups -- is it called haskell-mode/haskell-emacs or what? 3. I don't quite know when things stopped working/being comfortable/familiar but I remember it as having nothing to do with haskell -- updating org mode or some such. Just that I am supposed to be (at least out here!!) an emacs and haskell expert, and while showing others how to get started I am often finding my own setups embarrassingly broken. Any tips for an old emacs-geezer? _______________________________________________ Haskell-Cafe mailing list Haskell-Cafe< at >
Categories: Offsite Discussion

Haskell on OpenWRT

haskell-cafe - Sat, 04/09/2016 - 8:54am
Has anyone had any success running Haskell programs on OpenWRT? Specifically, I'm running OpenWRT on x86_64, so processor architecture shouldn't be an issue. However, by default OpenWRT uses musl as its C library, so binaries from a "normal" Linux system wouldn't be compatible with OpenWRT. I attempted to get around this problem by building an OpenWRT image with glibc as the C library. In theory, that ought to solve the problem. In practice, my program (a simple hello-world type program, which runs fine on Ubuntu) hung, printing nothing, using nearly all of one core, and was unkillable by any signal other than SIGKILL. If left alone to run, it eventually used all available memory and then died. I took a look at ldd, to see if there were any clues there. On my Ubuntu 12.04 machine, where I compiled the program (using ghc 7.4.1), I get: ppelletier< at >patrick64:~/programming/haskell$ ldd contact => (0x00007fff36f50000) => /usr/lib/x86_64-linux-gnu/
Categories: Offsite Discussion

Haskell, stack and the Atom editor

haskell-cafe - Sat, 04/09/2016 - 8:03am
Hi friends, I want to switch to the Atom editor for my Haskell development. I have tried several times over the last few years and given up each time because hardly anything works. I can't stand Emacs so don't bother suggesting it. I am currently using Geany, which has served me well, mostly because it is dead simple so nothing can go wrong. In addition to the usual universal text editor functions, all it does is highlight syntax (albeit with an outdated grammar), runs custom commands to build, displays clickable errors in a separate pane and underlines them in the editor pane. But for larger projects, I want more (better autocompletion, better integration with the build system, better VCS support). But back to Atom, I am having extreme difficulty getting even the basics working. I am using GHC 7.10.2, Cabal, Stack 1.0.4. I installed Atom 1.6.1 from their deb package. I installed the "language-haskell" package. This works very well, thankfully, but is the *only* thing that I can get working.
Categories: Offsite Discussion