News aggregator

Mark Jason Dominus: Neckbeards and other notes on “The Magnificent Ambersons”

Planet Haskell - Tue, 04/12/2016 - 6:48pm

Last week I read Booth Tarkington’s novel The Magnificent Ambersons, which won the 1919 Pulitzer Prize but today is chiefly remembered for Orson Welles’ 1942 film adaptation.

(It was sitting on the giveaway shelf in the coffee shop, so I grabbed it. It is a 1925 printing, discarded from the Bess Tilson Sprinkle library in Weaverville, North Carolina. The last due date stamped in the back is May 12, 1957.)

The Ambersons are the richest and most important family in an unnamed Midwestern town in 1880. The only grandchild, George, is completely spoiled and grows up to ruin the lives of everyone connected with him with his monstrous selfishness. Meanwhile, as the automobile is invented and the town evolves into a city the Amberson fortune is lost and the family dispersed and forgotten. George is destroyed so thoroughly that I could not even take any pleasure in it.

I made a few marginal notes as I read.


It was a hairier day than this. Beards were to the wearer’s fancy … and it was possible for a Senator of the United States to wear a mist of white whisker upon his throat only, not a newspaper in the land finding the ornament distinguished enough to warrant a lampoon.

I wondered who Tarkington had in mind. My first thought was Horace Greeley:

His neckbeard fits the description, but, although he served as an unelected congressman and ran unsuccessfully for President, he was never a Senator.

Then I thought of Hannibal Hamlin, who was a Senator:

But his neckbeard, although horrifying, doesn't match the description.

Gentle Readers, can you help me? Who did Tarkington have in mind? Or, if we can't figure that out, perhaps we could assemble a list of the Ten Worst Neckbeards of 19th Century Politics.

Other notes

I was startled on Page 288 by a mention of “purple haze”, but a Google Books search reveals that the phrase is not that uncommon. Jimi Hendrix owns it now, but in 1919 it was just purple haze.

George’s Aunt Fanny writes him a letter about his girlfriend Lucy:

Mr. Morgan took your mother and me to see Modjeska in “Twelfth Night” yesterday evening, and Lucy said she thought the Duke looked rather like you, only much more democratic in his manner.

Lucy, as you see, is not entirely sure that she likes George. George, who is not very intelligent, is not aware that Lucy is poking fun at him.

A little later we see George’s letter to Lucy. Here is an excerpt I found striking:

[Yours] is the only girl’s photograph I ever took the trouble to have framed, though as I told you frankly, I have had any number of other girls’ photographs, yet all were passing fancies, and oftentimes I have questioned in years past if I was capable of much friendship toward the feminine sex, which I usually found shallow until our own friendship began. When I look at your photograph, I say to myself “At last, at last here is one that will not prove shallow.”

The arrogance, the rambling, the indecisiveness of tone, and the vacillation reminded me of the speeches of Donald Trump, whom George resembles in several ways. George has an excuse not available to Trump; he is only twenty.

Addendum 20160413: John C. Calhoun seems like a strong possibility:

Categories: Offsite Blogs

Robert Harper: It Is What It Is (And Nothing Else)

Planet Haskell - Tue, 04/12/2016 - 10:37am

A recent discussion of introductory computer science education led to the topic of teaching recursion.  I was surprised to learn that students are being taught that recursion requires understanding something called a “stack” that is nowhere in evidence in their code.  Few, if any, students master the concept, which is usually “covered” only briefly.  Worse, they are encouraged to believe that recursion is a mysterious bit of esoterica that is best ignored.

And thus is lost one of the most important and beautiful concepts in computing.

The discussion then moved on to the implementation of recursion in certain inexplicably popular languages for teaching programming.  As it turns out, the compilers mis-implement recursion, causing unwarranted space usage in common cases.  Recursion is dismissed as problematic and unimportant, and the compiler error is elevated to a “design principle” — to be snake-like is to do it wrong.

And thus is lost one of the most important and beautiful concepts in computing.

And yet, for all the stack-based resistance to the concept, recursion has nothing to do with a stack.  Teaching recursion does not need any mumbo-jumbo about “stacks”.  Implementing recursion does not require a “stack”.  The idea that the two concepts are related is simply mistaken.

What, then, is recursion?  It is nothing more than self-reference, the ability to name a computation for use within the computation itself.  Recursion is what it is, and nothing more.  No stacks, no tail calls, no proper or improper forms, no optimizations, just self-reference pure and simple.  Recursion is not tied to “procedures” or “functions” or “methods”; one can have self-referential values of all types.

Somehow these very simple facts, which date back to the early 1930’s, have been replaced by damaging myths that impede teaching and using recursion in programs.  It is both a conceptual and a practical loss.  For example, the most effective methods for expressing parallelism in programs rely heavily on recursive self-reference; much would be lost without it.  And the allegation that “real programmers don’t use recursion” is beyond absurd: the very concept of a digital computer is grounded in recursive self-reference (the cross-connection of gates to form a latch).  (Which, needless to say, does not involve a stack.)  Not only do real programmers use recursion, there could not even be programmers were it not for recursion.

I have no explanation for why this terrible misconception persists.  But I do know that when it comes to programming languages, attitude trumps reality every time.  Facts?  We don’t need no stinking facts around here, amigo.  You must be some kind of mathematician.

If all the textbooks are wrong, what is right?  How should one explain recursion?  It’s simple.  If you want to refer to yourself, you need to give yourself a name.  “I” will do, but so will any other name, by the miracle of α-conversion.  A computation is given a name using a fixed point (not fixpoint, dammit) operator:  fix x is e stands for the expression e named x for use within e.  Using it, the textbook example of the factorial function is written thus:

fix f is fun n : nat in case n {zero => 1 | succ(n') => n * f n'}.

Let us call this whole expression fact, for convenience.  If we wish to evaluate it, perhaps because we wish to apply it to an argument, its value is

fun n : nat in case n {zero => 1 | succ(n') => n * fact n'}.

The recursion has been unrolled one step ahead of execution.  If we reach fact again, as we will for a positive argument,  fact is evaluated again, in the same way, and the computation continues.  There are no stacks involved in this explanation.
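The fix notation above transcribes almost directly into Haskell, where a fixed point operator is an ordinary, user-definable function (Data.Function.fix in base is defined exactly this way); a sketch:

```haskell
-- fix names a computation for use within itself; evaluating fix f
-- unrolls the self-reference one step: fix f = f (fix f).
fix :: (a -> a) -> a
fix f = f (fix f)

-- The textbook factorial, written by naming the computation f within itself.
fact :: Integer -> Integer
fact = fix (\f n -> case n of
    0 -> 1
    _ -> n * f (n - 1))
```

Nothing in this definition mentions a stack; it is the unrolling described above, made executable.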

Nor is there a stack involved in the implementation of fixed points.  It is only necessary to make sure that the named computation does indeed name itself.  This can be achieved by a number of means, including circular data structures (non-well-founded abstract syntax), but the most elegant method is by self-application.  Simply arrange that a self-referential computation has an implicit argument with which it refers to itself.  Any use of the computation unrolls the self-reference, ensuring that the invariant is maintained.  No storage allocation is required.
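The self-application method can be sketched in Haskell as well (the Mu wrapper and the names are mine, not Harper's): the only self-reference lives in the data type, and fixSelf itself is not recursive.

```haskell
-- A value of type Mu a is a computation that receives itself as an argument.
newtype Mu a = Mu (Mu a -> a)

unMu :: Mu a -> (Mu a -> a)
unMu (Mu f) = f

-- Recursion by self-application: fixSelf never refers to itself;
-- the wrapped computation is simply applied to its own wrapper.
fixSelf :: (a -> a) -> a
fixSelf f = g (Mu g)
  where
    g x = f (unMu x x)
```

Unrolling by hand: fixSelf f = g (Mu g) = f (unMu (Mu g) (Mu g)) = f (g (Mu g)) = f (fixSelf f), maintaining the invariant with no storage allocation.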

Consequently, a self-referential function such as

fix f is fun (n : nat, m:nat) in case n {zero => m | succ(n') => f (n',n*m)}

executes without needing any asymptotically significant space.  It is quite literally a loop, and no special arrangement is required to make sure that this is the case.  All that is required is to implement recursion properly (as self-reference), and you’re done.  There is no such thing as tail-call optimization.  It’s not a matter of optimization, but of proper implementation.  Calling it an optimization suggests it is optional, or unnecessary, or provided only as a favor, when it is more accurately described as a matter of getting it right.
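The accumulating function above can be written in Haskell as follows (a sketch; the one wrinkle is GHC's laziness, which means the accumulator should be forced so that thunks do not pile up in place of stack frames):

```haskell
factIter :: Integer -> Integer
factIter n0 = go n0 1
  where
    -- The recursive call is in tail position: nothing is pending
    -- after it, so this runs as a loop in constant stack space.
    go 0 m = m
    go n m = go (n - 1) $! n * m  -- $! forces the accumulator eagerly
```

This is the "loop" of the text: self-reference with no pending computation, hence no re-entrancy and no stack growth.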

So what, then, is the source of the confusion?  The problem seems to be a too-close association between compound expressions and recursive functions or procedures.  Consider the classic definition of factorial given earlier.  The body of the definition involves the expression

n * fact n'

where there is a pending multiplication to be accounted for.  Once the recursive call (to itself) completes, the multiplication can be carried out, and it is necessary to keep track of this pending obligation.  But this phenomenon has nothing whatsoever to do with recursion.  If you write

n * square n'

then it is equally necessary to record where the external call is to return its value.  In typical accounts of recursion, the two issues get confused, a regrettable tragedy of error.

Really, the need for a stack arises the moment one introduces compound expressions.  This can be explained in several ways, none of which need pictures or diagrams or any discussion about frames or pointers or any extra-linguistic concepts whatsoever.  The best way, in my opinion, is to use Plotkin’s structural operational semantics, as described in my Practical Foundations for Programming Languages (Second Edition), published by Cambridge University Press.

There is no reason, nor any possibility, to avoid recursion in programming.  But folk wisdom would have it otherwise.  That’s just the trouble with folk wisdom: everyone knows it’s true, even when it’s not.

Update: Dan Piponi and Andreas Rossberg called attention to a pertinent point regarding stacks and recursion.  The conventional notion of a run-time stack records two distinct things, the control state of the program (such as subroutine return addresses, or, more abstractly, pending computations, or continuations), and the data state of the program (a term I just made up because I don’t know a better one, for managing multiple simultaneous activations of a given procedure or function).  Fortran (back in the day) didn’t permit multiple activations, meaning that at most one instance of a procedure can be in play at a given time.  One consequence is that α-equivalence can be neglected: the arguments of a procedure can be placed in a statically determined spot for the call.  As a member of the Algol-60 design committee Dijkstra argued, successfully, for admitting multiple procedure activations (and hence, with a little extra arrangement, recursive/self-referential procedures).  Doing so requires that α-equivalence be implemented properly; two activations of the same procedure cannot share the same argument locations.  The data stack implements α-equivalence using de Bruijn indices (stack slots); arguments are passed on the data stack using activation records in the now-classic manner invented by Dijkstra for the purpose.  It is not self-reference that gives rise to the need for a stack, but rather re-entrancy of procedures, which can arise in several ways, not just recursion.  Moreover, recursion does not always require re-entrancy—the so-called tail call optimization is just the observation that certain recursive procedures are not, in fact, re-entrant.  (Every looping construct illustrates this principle, albeit on an ad hoc basis, rather than as a general principle.)

Filed under: Programming, Teaching
Categories: Offsite Blogs

Are libraries expected to enforce safe usage of thread-unsafe functions?

haskell-cafe - Tue, 04/12/2016 - 8:57am
Hi friends, I am writing a library that has Handle-like values for resources that are not thread-safe (foreign state from a non-thread-safe C library). In other words, a resource can only be "used" by one Haskell thread at a time. Unless I take steps, if one shares the resource between concurrent Haskell threads, bad things will happen at runtime (even with a single-threaded runtime). Naturally, I want consumers to use my library safely, so I am considering making it safe for all by pessimistically enforcing safe usage with MVars. Is it common/expected that Haskell libraries enforce safe access like this? MVars are not free from cost, but are they cheap (i.e. almost as fast as no forced synchronization) as long as no blocking/rescheduling occurs? Thanks, -- Thomas Koster
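A minimal sketch of the MVar-wrapping approach described in the post (all names here are hypothetical, standing in for the library's real foreign handle):

```haskell
import Control.Concurrent.MVar

-- Stand-in for the foreign, non-thread-safe state.
data RawHandle = RawHandle

-- Expose only an MVar-wrapped handle, so at most one Haskell thread
-- can touch the raw resource at a time.
newtype SafeHandle = SafeHandle (MVar RawHandle)

newSafeHandle :: RawHandle -> IO SafeHandle
newSafeHandle raw = SafeHandle <$> newMVar raw

-- withMVar takes the lock, runs the action, and releases the lock
-- even if the action throws an exception.
withHandle :: SafeHandle -> (RawHandle -> IO a) -> IO a
withHandle (SafeHandle m) = withMVar m
```

In the uncontended case an MVar take/put is cheap (no rescheduling occurs), which is the cost question the poster is asking about.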
Categories: Offsite Discussion

[TFP 2016] extended deadline, april 25 2016, final call for papers

haskell-cafe - Tue, 04/12/2016 - 8:35am
TFP 2016 has extended its deadline for draft papers by two weeks (now April 25). Although all draft papers accepted to TFP 2016 will be invited to submit to the post-symposium formal proceedings, authors are reminded that they are not obligated to do so; we welcome works in progress that may not be destined for the TFP proceedings. Thanks, David Van Horn ----------------------------- C A L L F O R P A P E R S ----------------------------- ======== TFP 2016 =========== 17th Symposium on Trends in Functional Programming June 8-10, 2016 University of Maryland, College Park Near Washington, DC The symposium on Trends in Functional Programming (TFP) is an international forum for researchers with interests in all aspects of functional programming, taking a broad view of current and future trend
Categories: Offsite Discussion

[TFP 2016] extended deadline, april 25 2016, final call for papers

General haskell list - Tue, 04/12/2016 - 8:35am
TFP 2016 has extended its deadline for draft papers by two weeks (now April 25). Although all draft papers accepted to TFP 2016 will be invited to submit to the post-symposium formal proceedings, authors are reminded that they are not obligated to do so; we welcome works in progress that may not be destined for the TFP proceedings. Thanks, David Van Horn ----------------------------- C A L L F O R P A P E R S ----------------------------- ======== TFP 2016 =========== 17th Symposium on Trends in Functional Programming June 8-10, 2016 University of Maryland, College Park Near Washington, DC The symposium on Trends in Functional Programming (TFP) is an international forum for researchers with interests in all aspects of functional programming, taking a broad view of current and future trend
Categories: Incoming News

Haskell DockerHub Official Repository: Needs Maintainer TLC

haskell-cafe - Tue, 04/12/2016 - 3:53am
Greetings, I have a confession to make: I've been a neglectful maintainer of the "haskell" Official Repository on DockerHub. I'm hoping to find a proper maintainer who is both a Docker nerd and a Haskell enthusiast to ensure that it is kept up to date and improved over time. Bonus points for someone involved in GHC development, or at an organization that is heavily involved in the Haskell community. I'm happy to help you get up to speed with where the image is at today and how to effectively submit future iterations after you've had a chance to review the DockerHub Official Repositories documentation at The current Dockerfiles are located at and there are two other maintainers who also contribute occasionally. Thanks,
Categories: Offsite Discussion

Gabriel Gonzalez: Worst practices should be hard

Planet Haskell - Mon, 04/11/2016 - 5:26pm

In this post I hope to persuade you that Haskell is well-adapted to software engineering in the large. To motivate this post, I would like to begin with a simple thought experiment.

Suppose that we could measure short-term programmer productivity for two hypothetical programming languages named "X" and "Y". In practice, we cannot easily compare productivity between two programming languages, but just pretend that we can for the purpose of this example.

First, we will measure the short-term productivity of languages "X" and "Y" when the programmer carefully follows all best practices. These are made up numbers:

"Best Practices" (whatever that means):

               |     7
               |   +-----+
               |   |     |     5
Arbitrary      |   |     |   +-----+
Productivity   |   |     |   |     |
Units          |   |     |   |     |
               |   |     |   |     |
               |   |     |   |     |
               +---+-----+---+-----+---
                      X         Y

Higher productivity is better, so from the above chart alone we might conclude that "X" is a better language than "Y", all other things equal.

However, all other things might not be equal and there might be other redeeming features of language "Y". Perhaps language "Y" excels when the programmer takes short cuts and throws best practices to the wind. Let's call this "worst practices" and measure developer productivity when cutting corners:

"Worst Practices" (whatever that means):

               |     9
               |   +-----+
               |   |     |
               |   |     |
Arbitrary      |   |     |
Productivity   |   |     |
Units          |   |     |     3
               |   |     |   +-----+
               |   |     |   |     |
               |   |     |   |     |
               +---+-----+---+-----+---
                      X         Y

Weird! Language "X" still performs better! In fact, language "Y" did such a poor job that developer productivity decreased when they followed worst practices.

Therefore, the choice is clear: language "X" is strictly a better language, right?

Not necessarily!

Long-term productivity

I will actually argue that language "Y" is the superior language for large and long-lived projects.

All of our charts so far only measured short-term productivity. However, long-term productivity crucially depends on how much developers follow best practices. That's what makes them "best practices".

Perhaps we don't need to perform additional experiments to predict which language will perform better long-term. To see why, let's transpose our original two charts. This time we will plot short-term productivity for language "X" when following either best practices (i.e. "BP") or worst practices (i.e. "WP"):

Language "X":

               |     9
               |   +-----+
               |   |     |     7
               |   |     |   +-----+
Arbitrary      |   |     |   |     |
Productivity   |   |     |   |     |
Units          |   |     |   |     |
               |   |     |   |     |
               |   |     |   |     |
               |   |     |   |     |
               +---+-----+---+-----+---
                     WP        BP

This chart highlights one major flaw in the design of language X: the language rewards worst practices! Therefore, developers who use language "X" will always be incentivized to do the wrong thing, especially when they are under deadline pressure.

In contrast, language "Y" rewards best practices!

Language "Y":

               |               5
               |             +-----+
               |     3       |     |
Arbitrary      |   +-----+   |     |
Productivity   |   |     |   |     |
Units          |   |     |   |     |
               +---+-----+---+-----+---
                     WP        BP

A developer using language "Y" can't help but follow best practices. The more deadline pressure there is, the greater the incentive to do the right thing.

Developers using language "X" might compensate by creating a culture that shames worst practices and glorifies best practices in order to realign incentives, but this does not scale well to large software projects and organizations. You can certainly discipline yourself. You can even police your immediate team to ensure that they program responsibly. But what about other teams within your company? What about third-party libraries that you depend on? You can't police an entire language ecosystem to do the right thing.


I believe that Haskell is the closest modern approximation to the ideal of language "Y". In other words, Haskell aligns short-term incentives with long-term incentives.

Best practices are obviously not a binary "yes/no" proposition or even a linear scale. There are multiple dimensions to best practices and some languages fare better on some dimensions and worse on others. Some dimensions where Haskell performs well are:

  • Absence of null
  • Limitation of effects
  • Immutability
  • Generic types
  • Strongly typed records

For each of the above points I will explain how Haskell makes it easier to do the right thing than the wrong thing.

Haskell isn't perfect, though, so in the interests of balance I will also devote a section to the following areas where Haskell actively rewards worst practices and punishes best practices!

  • Poor performance
  • Excessive abstraction
  • Unchecked exceptions

Absence of null

Haskell has no null/None/nil value built into the language but Haskell does have a Maybe type somewhat analogous to Option in Java. Maybe is defined within the language as an ordinary data type like this:

data Maybe a = Just a | Nothing

Maybe differs from null-like solutions in other languages because Maybe shows up in the type. That means that if I don't see Maybe in the type then I know for sure that the value cannot possibly be empty. For example, I can tell from the following type that z can never be None or null:

z :: String

This differs from many other languages where all values could potentially be null because there is no way to opt out of nullability.

The second thing that differentiates Maybe from other solutions is that you can't forget to handle the empty case. If you have a function that expects a String argument:

exclaim :: String -> String
exclaim str = str ++ "!"

... you can't accidentally apply that function to a value of type Maybe String or you will get a type error:

>>> x = Just "Hello" :: Maybe String
>>> exclaim x -- TYPE ERROR!

Instead, you have to explicitly handle the empty case, somehow. For example, one approach is to use pattern matching and provide a default value when the argument is empty:

case x of
    Just str -> exclaim str
    Nothing  -> "Goodbye!"

Haskell rewards best practices with respect to nullable values by making it easier to program without them:

  • everything is non-null by default
  • nullable types are longer because they must explicitly declare nullability
  • nullable code is longer because it must explicitly handle nullability

... therefore Haskell programmers tend to use nullable values sparingly because it's the path of least resistance!
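For instance, base's maybe and fromMaybe combinators keep the explicit handling short (exclaim is repeated here so the snippet stands alone):

```haskell
import Data.Maybe (fromMaybe)

exclaim :: String -> String
exclaim str = str ++ "!"

-- Eliminate the Maybe with a handler for each case...
greet :: Maybe String -> String
greet = maybe "Goodbye!" exclaim

-- ...or supply a default value for the empty case.
greetDefault :: Maybe String -> String
greetDefault m = exclaim (fromMaybe "Hello" m)
```

Either way, the type checker guarantees the Nothing case was considered.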

Limitation of input/output effects

Haskell treats input/output effects (IO for short) the same way that Haskell treats nullable values:

  • Everything is IO-free by default
  • IO types are longer because they must explicitly declare IO effects
  • IO code is longer because it must explicitly propagate effects

Therefore Haskell programmers tend to err on the side of using effects sparingly because it's the path of least resistance!
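A minimal illustration of the divide:

```haskell
import Data.Char (toUpper)

-- Pure: the type guarantees no input/output can occur here.
shout :: String -> String
shout = map toUpper

-- Effectful: IO appears in the type, and every caller of shoutLine
-- must itself be in IO, so effects propagate visibly.
shoutLine :: IO ()
shoutLine = do
    line <- getLine
    putStrLn (shout line)
```

The type system will not let shout secretly call getLine; effectful code must wear IO in its type.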


Immutability

Haskell treats immutability the same way that Haskell treats nullable values and IO:

  • Everything is immutable by default
  • Mutable types are longer because they must explicitly declare state/references
  • Mutable code is longer because it must thread state/references

Therefore Haskell programmers tend to err on the side of using state and mutable references sparingly because it's the path of least resistance!
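A minimal illustration: the mutable version must import Data.IORef, live in IO, and thread the reference explicitly, while the pure version just computes a new value.

```haskell
import Data.IORef

-- Immutable by default: "updating" a value produces a new value.
bump :: Int -> Int
bump n = n + 1

-- Mutation must be requested explicitly and is confined to IO.
bumpRef :: IO Int
bumpRef = do
    ref <- newIORef (0 :: Int)
    modifyIORef' ref (+ 1)
    readIORef ref
```

The extra ceremony is exactly the friction that nudges programmers toward the immutable version.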

Generic types

By default, the Haskell compiler infers the most general type for your code. For example, suppose that I define the following top-level function:

addAndShow x y = show (x + y)

I can omit the type signature for that function because the compiler will already infer the following most general type for addAndShow:

>>> :type addAndShow
addAndShow :: (Num a, Show a) => a -> a -> String

I actually have to go out of my way to provide a more specific type for that function. The only way I can get the compiler to infer a less general type is to provide an explicit type signature for addAndShow:

addAndShow :: Int -> Int -> String
addAndShow x y = show (x + y)

In other words, I have to actively go out of my way to make functions less general. The path of least resistance is to just let the compiler infer the most general type possible!

Strongly typed records

If you come to Haskell from a Python or Clojure background you will definitely notice how painful it is to work with dictionaries/maps in Haskell. Haskell has no syntactic support for map literals; the best you can do is something like this:

myMap =
    [ ("foo", "Hello!")
    , ("bar", "1"     )
    , ("baz", "2.0"   )
    ]
You also can't store different types of values for each key; all values in a map must be the same type.

In contrast, defining new records in Haskell is extremely cheap compared to other languages and each field can be a distinct type:

data MyRecord = MyRecord
    { foo :: String
    , bar :: Int
    , baz :: Double
    }

myRecord = MyRecord
    { foo = "Hello"
    , bar = 1
    , baz = 2.0
    }

Therefore, Haskell programmers will very predictably reach for records instead of maps/dictionaries because records are infinitely more pleasant to use.
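For example (repeating the record type so this snippet stands alone), field accessors are ordinary functions and record update is non-destructive:

```haskell
data MyRecord = MyRecord
    { foo :: String
    , bar :: Int
    , baz :: Double
    } deriving Show

-- r { bar = ... } builds a new record sharing the unchanged fields.
bumpBar :: MyRecord -> MyRecord
bumpBar r = r { bar = bar r + 1 }
```

Each field keeps its own static type, so mixing up bar and baz is a compile-time error rather than a runtime surprise.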

Poor performance

Haskell doesn't always reward best practices, though. Haskell is one of the fastest functional programming languages in experienced hands, but the language very much rewards poor performance in beginner hands.

For example, Haskell's syntax and Prelude highly reward linked lists over vectors/arrays. Even worse, the default String type is a linked list of characters, which is terrible for performance.

There are more efficient solutions to this problem, such as:

  • Using the text library for high-performance text
  • Using the vector library for high-performance arrays
  • Using the OverloadedStrings and OverloadedLists extensions to overload list literal syntax and string literal syntax to support more efficient types

However, the path of least resistance is still to use linked lists and strings. This problem is so pervasive that I see even respected and experienced library authors who care a lot about performance succumb to the convenience of these less efficient types every once in a while.
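For instance, with the text library and OverloadedStrings the efficient type costs only one pragma and an import (a sketch):

```haskell
{-# LANGUAGE OverloadedStrings #-}

import qualified Data.Text as T

-- With OverloadedStrings, a string literal can denote a Text value:
-- a packed representation rather than a linked list of Chars.
greeting :: T.Text
greeting = T.toUpper "hello, world"
```

Even so, as the post notes, plain String remains the default that everything in the Prelude hands you.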

Excessive abstraction

As you learn the language you begin to realize that Haskell makes some types of advanced abstraction very syntactically cheap and some people get carried away. I know that I'm guilty of this.

From an outside perspective, the typical progression of a Haskell programmer might look something like this:

  • Week 1: "How r monad formed?"
  • Week 2: "What is the difference between Applicative+Category and Arrow?"
  • Week 3: "I'm beginning a PhD in polymorphic type family recursion constraints"
  • Week 4: "I created my own language because Haskell's not powerful enough"

(Edit: I do not mean to downplay anybody's difficulty learning Haskell; this is just to illustrate that the language fosters and encourages "architecture astronauts")

The temptation to play with all of Haskell's abstract facilities is so strong for some (like me) because for those people abstraction is too fun!

Unchecked exceptions

Fun fact: did you know that the conventional way to throw checked exceptions in Haskell is to use monad transformers?

It's far easier to just use throwIO to lump all exceptions under the IO type than to declare in the types all the possible exceptions that you might throw. That's at least partially checked in the sense that an IO in the type signature warns you about the possibility of exceptions even if it won't tell you which specific exceptions might be thrown.

However, throwIO is surprisingly not in the Prelude by default, but error is!

error is the worst way to throw a Haskell exception because it is:

  • marked pure, so it can strike anywhere
  • asynchronous, so it can strike at any time
  • completely invisible at the type level
  • triggered by accidental evaluation

A lot of people mistakenly use error instead of throwIO without realizing the consequences of doing so. I've seen otherwise very smart and talented Haskell programmers get this wrong.

Following exception best practices requires discipline and effort, which means that under pressure people will generally not take the time to propagate and catalog exceptions correctly.
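A sketch of the throwIO approach with a custom exception type (the names here are illustrative):

```haskell
import Control.Exception

-- A distinct, catchable exception type instead of a bare 'error' string.
newtype ParseError = ParseError String
    deriving Show

instance Exception ParseError

-- throwIO confines the exception to IO: the type at least warns
-- callers that something may go wrong, unlike 'error'.
parseDigit :: Char -> IO Int
parseDigit c
    | c >= '0' && c <= '9' = return (fromEnum c - fromEnum '0')
    | otherwise            = throwIO (ParseError ("not a digit: " ++ [c]))
```

A caller can then recover it explicitly, e.g. try (parseDigit 'x') :: IO (Either ParseError Int).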


Despite those flaws, I still believe that Haskell incentivizes the right behavior where it counts:

  • minimizing null
  • minimizing state
  • minimizing effects
  • minimizing weakly-typed maps/dictionaries

Those are the qualities that I see large projects fall down on time after time due to inexperience, developer churn, and time constraints.

More generally, programming languages should be designed with attention to incentives. Don't just focus on absolute productivity, but also pay attention to the relative productivity of best practices and worst practices. Worst practices should be hard!

Categories: Offsite Blogs

Differences between QuasiQuotes and TemplateHaskell

haskell-cafe - Mon, 04/11/2016 - 2:59pm
Hi, Originally, I thought that QuasiQuotes build on top of TemplateHaskell, since they just provide a way of parsing a string into a Template Haskell AST. That is, a quasi-quoter is defined by four parsers of types String -> Q Exp String -> Q Pat String -> Q Type String -> Q Dec that parse EDSLs into corresponding Haskell (abstract) syntax. However, while playing around with the Shakespeare package's quasiquoter hamlet, I noticed that (e.g.) a snippet html = [hamlet| <html> <body> <h1>Hello World |] () has type Html and not (as I would expect) Q Exp. Hence, it seems as if quasiquotes are spliced in implicitly? This confuses me since in Template Haskell in general a quote (e.g.,) [e| \x -> x |] has type Q Exp, which then needs to be explicitly spliced in order to produce the id function \x -> x. So why do quasiquotes not have types Q Exp, but rather concrete types? If anyone could clarify my confusion, this would be great! Dominik.
Categories: Offsite Discussion

Neil Mitchell: GHCid 0.6 Released

Planet Haskell - Mon, 04/11/2016 - 2:21pm

Summary: I've released a new version of GHCid, which can interrupt running tests.

I've just released version 0.6.1 of GHCid. To a first approximation, ghcid opens ghci and runs :reload whenever your source code changes, formatting the output to fit a fixed height console. Unlike other Haskell development tools, ghcid is intended to be incredibly simple - it works when nothing else does. This new version features:

Much faster: Since version 0.5 GHCid passes -fno-code to ghci when it makes sense, which is about twice as fast.

Interruptible test commands: Since version 0.4 ghcid has supported a --test flag to pass a test command (e.g. :main) which is run whenever the code is warning free. As of version 0.6 that command will be interrupted if it needs to :reload, allowing long running tests and persistent "tests" - e.g. spawning web servers or GUIs. Thanks to Reid Draper for showing it was possible as part of his ordeal project and Luigy Leon for merging that with GHCid.

Stack integration: If you have a stack.yaml file and a .stack-work directory it will use stack commands to run your project. Thanks to the Stack Team, in particular Michael Sloan, for helping get through all the hoops and providing the necessary functionality in Stack.

More restart/reload flags: It's been possible for a while to pass --restart to restart ghci if certain files change (e.g. the .cabal file). Now there is a separate --reload flag to cause :reload instead of a full restart, and both flags can take directories instead of individual files.

Major relayering: For 0.6 I significantly refactored much of the GHCid code. There has always been an underlying Language.Haskell.Ghcid API, and GHCid was built on top. With the new version the underlying library has been given a significant refactoring, mostly so that interruptions are handled properly without race conditions and with a sane multithreading story. On top of that is a new Session layer, which provides a session abstraction - a ghci instance which tracks more state (e.g. which warnings have been emitted for already loaded files). Then the Ghcid module builds on top, with much less state management. By simplifying and clarifying the responsibility of each layer certain issues such as leaking old ghci processes and obscure race conditions disappeared.

I've been making use of many of these features in the Shake website generator, which I invoke with:

ghcid --test=":main debug" --reload=parts --reload=../docs

This project uses Stack, so relies on the new stack integration. It runs :main debug as the test suite, which generates the website whenever the code reloads. Furthermore, if any of the parts (template files) or docs (Markdown pages) change the website regenerates. I can now edit the website, and saving it automatically regenerates the web pages within a second.

Categories: Offsite Blogs

Mark Jason Dominus: “…and she hated every stitch”

Planet Haskell - Mon, 04/11/2016 - 1:04pm

Recently the following amusing item was going around on Twitter:

I have some bad news and some good news. First the good news: there is an Edith-Anne. Her name is actually Patty Polk, and she lived in Maryland around 1800.

Now the bad news: the image above is almost certainly fake. It may be a purely digital fabrication (from whole cloth, ha ha), or more likely, I think, it is a real physical object, but of recent manufacture.

I wouldn't waste blog space just to crap on this harmless bit of fake history. I want to give credit where it is due, to Patty Polk who really did do this, probably with much greater proficiency.

Why I think it's fake

I have not looked into this closely, because I don't think the question merits a lot of effort. But I have two reasons for thinking so.

The main one is that the complaint “Edith-Anne … hated every Stitch” would have taken at least as much time and effort as the rest of the sampler, probably more. I find it unlikely that Edith-Anne would have put so much work—so many more hated stitches—into her rejection.

Also, the work is implausibly poor. These samplers were stitched by girls typically between the ages of 7 and 14, and their artisanship was much, much better than either section of this example. Here is a sampler made by Lydia Stocker in 1798 at the age of 12:

Here's one by Louisa Gauffreau, age 8:

Compare these with Edith-Anne's purported cross-stitching. One tries to imagine how old she is, but there seems to be no good answer. The crooked stitching is the work of a very young girl, perhaps five or six. But the determination behind the sentiment, and the perseverance that would have been needed to see it through, belong to a much older girl.

Of course one wouldn't expect Edith-Anne to do good work on her hated sampler. But look at the sampler at right, wrought by a young Emily Dickinson, who is believed to have disliked the work and to have intentionally done it poorly. Even compared with this, Edith-Anne's claimed sampler doesn't look like a real sampler.

Patty Polk

Web search for “hated every stitch” turns up several other versions of Edith-Anne, often named Polly Cook[1] or Mary Pitt[2] (“This was done by Mary Pitt / Who hated every stitch of it”) but without any reliable source.

However, Patty Polk is reliably sourced. Bolton and Coe's American Samplers[3] describes Miss Polk's sampler:

POLK, PATTY. [Cir. 1800. Kent County, Md.] 10 yrs. 16"×16". Stem-stitch. Large garland of pinks, roses, passion flowers, nasturtiums, and green leaves; in center, a white tomb with “G W” on it, surrounded by forget-me-nots. “Patty Polk did this and she hated every stitch she did in it. She loves to read much more.”

The description was provided by Mrs. Frederic Tyson, who presumably owned or had at least seen the sampler. Unfortunately, there is no picture. The “G W” is believed to refer to George Washington, who died in 1799.

There is a lively market in designs for pseudo-vintage samplers that you can embroider yourself and “age”. One that features Patty Polk is produced by Falling Star Primitives:

Thanks to Lee Morrison of Falling Star Primitives for permission to use her “Patty Polk” design.


[1] Parker, Rozsika. The Subversive Stitch: Embroidery and the Making of the Feminine. Routledge, 1989. p. 132.

[2] Wilson, Erica. Needleplay. Scribner, 1975. p. 67.

[3] Bolton, Ethel Stanwood and Eva Johnston Coe. American Samplers. Massachusetts Society of the Colonial Dames of America, 1921. p. 210.

[ Thanks to several Twitter users for suggesting gender-neutral vocabulary. ]

[ Addendum: Twitter user Kathryn Allen observes that Edith-Anne hated cross-stitch so much that she made another sampler to sell on eBay. Case closed. ]

[ Addendum: Ms. Allen further points out that the report by Mrs. Tyson in American Samplers may not be reliable, and directs me to the discussion by J.L. Bell, Clues to a Lost Sampler. ]

Categories: Offsite Blogs

JTRES 2016 Call for Papers

General haskell list - Mon, 04/11/2016 - 9:20am
======================================================================
                        CALL FOR PAPERS

          The 14th Workshop on Java Technologies for
               Real-Time and Embedded Systems

                          JTRES 2016

     Part of the Managed Languages & Runtimes Week 2016
                29 August - 2 September 2016
                     Lugano, Switzerland
======================================================================
Submission deadline: 12 June, 2016
Submission site:
======================================================================
Over 90% of all microprocessors are now used for real-time and embedded applications. Embedded devices are deployed on a broad diversity of distinct processor architectures and operating systems. The application software for many embedded devices is custom tailored if
Categories: Incoming News

MPTC and quantification in GADTs vs. regular ADTs

haskell-cafe - Mon, 04/11/2016 - 8:57am
Good day folks! What is the explanation for the difference in GHC reaction towards the two variants of the following piece of code:

  -- variant 1
  data T row col where
    T ∷ ∀ row col. MPTC row col ⇒
        { col ∷ col
        , row ∷ row } → T row col

  -- variant 2
  data T row col
    = ∀ row col. MPTC row col ⇒
      T { col ∷ col
        , row ∷ row }

Note the missing Nevertheless, the former typechecks, while the latter is rejected with: I'm sure this must have something to do with GADTs carrying type equality constraints, but still I don't quite understand the difference in interpretation of the forall quantification. To my understanding, a value bound by the pattern 'w< at >T{..}' ought to be treated as sufficient evidence for 'MPTC row col', no matter whether it is a GADT or not. ..but it isn't.
Categories: Offsite Discussion

Jasper Van der Jeugt: Haskell: mistakes I made

Planet Haskell - Sun, 04/10/2016 - 6:00pm

A week or two ago, I gave a talk at the Zurich Haskell Meetup about Haskell mistakes I have made in the past, and how you can fix them. The slides can be found here, and you can watch the talk below, thanks to Simon Meier’s prowess as a cameraman. I’m just glad I managed to strike a great pose in the video thumbnail.

<iframe allowfullscreen="allowfullscreen" frameborder="0" height="315" src="" width="560"> </iframe>
Categories: Offsite Blogs

C2HS - marshalling arrays

haskell-cafe - Sun, 04/10/2016 - 1:09pm
Hi, I am using c2hs and I want to create bindings for a C function with the following signature: my_function(my_struct_t*, int* values, size_t num_values); Ideally I'd like to have a function that accepts [CInt], not a pair (pointer, size). What is the easiest way to do it? Can c2hs help me with this or I need to do it manually? Regards, Alexey. _______________________________________________ Haskell-Cafe mailing list Haskell-Cafe< at >
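For reference, the manual marshalling the poster wants to avoid is only a few lines with Foreign.Marshal.Array. Below is a hedged sketch: my_function and my_struct_t come from the question, while withCIntList and the stand-in action are hypothetical names invented for illustration.

```haskell
import Foreign.C.Types (CInt, CSize)
import Foreign.Marshal.Array (peekArray, withArrayLen)
import Foreign.Ptr (Ptr)

-- Hypothetical wrapper: marshal a Haskell list into a temporary C array
-- and hand the (pointer, length) pair to an action. In real code the
-- action would be the foreign import for my_function, e.g.:
--
--   foreign import ccall "my_function"
--     c_myFunction :: Ptr MyStruct -> Ptr CInt -> CSize -> IO ()
--
withCIntList :: [CInt] -> (Ptr CInt -> CSize -> IO a) -> IO a
withCIntList xs act = withArrayLen xs (\len ptr -> act ptr (fromIntegral len))

main :: IO ()
main = do
  -- Stand-in action: read the array straight back, to show the
  -- round trip works without needing the real C library.
  ys <- withCIntList [1, 2, 3] (\ptr n -> peekArray (fromIntegral n) ptr)
  print ys  -- prints [1,2,3]
```

c2hs `fun` hooks can also wrap this kind of signature, but for a single function plain Foreign.Marshal code like the above is often the easiest route.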
Categories: Offsite Discussion

Yesod Web Framework: Supporting separate read and write databases in persistent

Planet Haskell - Sun, 04/10/2016 - 9:00am

persistent has just released a new major version. With this release, queries which only read data are distinguished, at the type level, from queries which write data. This makes using a read replica database safer and generally makes our types more informative. Breakage should be minimal for end users.

Scaling a database

Many applications eventually reach the limits of a single database machine and work across multiple machines. Distributing state across multiple machines can introduce a great deal of complexity. Consequently, there are many approaches to dealing with multiple databases. The CAP theorem ensures that none of them is perfect (Is there ever a good impossibility theorem?). Front Row Education chose to go with streaming, native replication from a PostgreSQL write database to a PostgreSQL read database.

Possible solutions

What's the best way to program against a write database with a read replica? Here are a few approaches:

1. Run opaque queries against a chosen backend.

In this approach, we don't change any of the persistent machinery. selectList, insert and runSqlPool stay the same:

runSqlPool :: (MonadBaseControl IO m) => ReaderT SqlBackend m a -> Pool SqlBackend -> m a

writeBackend :: SqlBackend
readBackend :: SqlBackend

insertStudent :: ReaderT SqlBackend IO StudentId
insertStudent = insert student

selectStudents :: ReaderT SqlBackend IO [Entity Student]
selectStudents = selectList ([] :: [Filter Student]) []

insertAndSelect :: IO [Entity Student]
insertAndSelect = do
  _ <- runSqlPool (insertStudent >> insertStudent) writeBackend
  runSqlPool selectStudents readBackend

We choose which backend to run our queries against at execution time. That is, we pass one backend to runSqlPool if we want to execute the query against the read database and a different backend if we want to execute our query against the write database.

This approach does work in the most basic sense of the word. But it's manual and error-prone. Nothing stops us from accidentally running a write query against a read database and getting an error at runtime. That's not the Haskell way! We'd be much better off encoding this read and write information in the query's type.

2. update, delete and insert write. select reads.

In this approach we create wrappers around SqlBackend called SqlReadBackend and SqlWriteBackend. Then we specify that all selects (reads) operate against the read database and all inserts, updates, and deletes (writes) operate against the write database. We can intermix queries of different types with multiple (now type-safe) calls to runSqlPool:

runSqlPool :: (MonadBaseControl IO m, IsSqlBackend backend) => ReaderT backend m a -> Pool backend -> m a

writeBackend :: SqlWriteBackend
readBackend :: SqlReadBackend

insertStudent :: ReaderT SqlWriteBackend IO StudentId

selectStudents :: ReaderT SqlReadBackend IO [Entity Student]

insertAndSelect :: IO [Entity Student]
insertAndSelect = do
  _ <- runSqlPool (insertStudent >> insertStudent) writeBackend
  runSqlPool selectStudents readBackend

Attempting to run insertStudent against the readBackend will result in a type error. Nice!

Unfortunately, it will also result in a type error when attempting to run selectStudents against the writeBackend. Which is why we used two calls to runSqlPool in the above example. This inability to mix reads and writes in a single transaction is rather restrictive.

This approach also ignores problems of eventual consistency. Even under streaming replication, there is some lag (hopefully only a few milliseconds or less) between the write database and the read database. If we can't run reads in the same transaction, on the same DB, as writes, we have a serious problem. In the above example, we have no guarantee that our student insertions will have propagated to the read DB in time for the select that immediately follows the insert.

3. update, delete and insert write. select can be used in a read or write context.

We must generalize our read operations so that we can still run them against the write database when we need to.

runSqlPool :: (MonadBaseControl IO m, IsSqlBackend backend) => ReaderT backend m a -> Pool backend -> m a

writeBackend :: SqlWriteBackend
readBackend :: SqlReadBackend

instance SqlBackendCanRead SqlWriteBackend
instance SqlBackendCanRead SqlReadBackend

insertStudent :: ReaderT SqlWriteBackend IO StudentId

selectStudents :: (SqlBackendCanRead backend) => ReaderT backend IO [Entity Student]

insertAndSelect :: IO [Entity Student]
insertAndSelect = runSqlPool (insertStudent >> insertStudent >> selectStudents) writeBackend

We now use type classes to say that write queries can only run against the write database, while read queries can run against either type of database, and we can safely defer the decision of where to run a read query until use.
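As a self-contained illustration of this typeclass trick, here is a toy model that compiles with only base and transformers. The backend types, queries, and return values are stand-ins invented for the sketch, not persistent's real API:

```haskell
import Control.Monad.Trans.Reader (ReaderT, runReaderT)

-- Toy stand-ins for the two backend wrappers.
data SqlReadBackend  = SqlReadBackend
data SqlWriteBackend = SqlWriteBackend

-- An empty marker class for backends that can service reads.
class SqlBackendCanRead backend
instance SqlBackendCanRead SqlReadBackend
instance SqlBackendCanRead SqlWriteBackend  -- the write DB can read too

-- Writes are pinned to the write backend...
insertStudent :: ReaderT SqlWriteBackend IO String
insertStudent = return "inserted"

-- ...while reads are polymorphic over any readable backend.
selectStudents :: SqlBackendCanRead backend => ReaderT backend IO [String]
selectStudents = return ["alice", "bob"]

main :: IO ()
main = do
  -- Reads and writes mix freely against the write backend,
  -- as in the third approach above.
  r  <- runReaderT (insertStudent >> selectStudents) SqlWriteBackend
  -- The same read query also runs against the read replica.
  r' <- runReaderT selectStudents SqlReadBackend
  print (r, r')  -- prints (["alice","bob"],["alice","bob"])
```

Swapping SqlWriteBackend for SqlReadBackend in the first runReaderT is a compile-time error, which is exactly the safety property the post is after.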


The new version of persistent follows the third approach.

Types as documentation

IO is sometimes referred to as Haskell's "sin bin". That is, a great number of effects end up marked as IO. Consequently, when you see IO in a type signature, it's hard to determine which effects that function uses. Does the function write to disk or get the current time? A more fine-grained type would make our types more informative.

Along similar lines, splitting the monolithic SqlPersistT into SqlReadT and SqlWriteT allows us to more clearly signal the capabilities leveraged inside a given function. When we see SqlReadT, we can be confident that the underlying database state hasn't changed.

Breaking changes

This version of persistent shouldn't break application authors and end users of persistent. You can continue to use SqlBackend which is now an instance of SqlBackendCanRead and SqlBackendCanWrite.

Library authors may need to modify some type signatures to work with the new machinery.

For example,

get404 :: (MonadIO m, PersistStore backend, backend ~ PersistEntityBackend val, PersistEntity val)
       => Key val -> ReaderT backend m val

becomes

get404 :: (MonadIO m, PersistStore backend, BaseBackend backend ~ PersistEntityBackend val, PersistEntity val)
       => Key val -> ReaderT backend m val

which leverages

instance HasPersistBackend SqlBackend where
  type BaseBackend SqlBackend = SqlBackend

instance HasPersistBackend SqlReadBackend where
  type BaseBackend SqlReadBackend = SqlBackend

instance HasPersistBackend SqlWriteBackend where
  type BaseBackend SqlWriteBackend = SqlBackend

from persistent.

This new concept of BaseBackend allows us to still assign a unique type of backend to each PersistEntity despite the availability of SqlReadBackend, SqlWriteBackend and SqlBackend.
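A minimal sketch of how an associated type family lets several wrapper types share one base backend. Again, this is a toy model with simplified stand-ins for persistent's types; the `describe` function is invented for illustration:

```haskell
{-# LANGUAGE TypeFamilies #-}

-- Stand-in backend types.
data SqlBackend      = SqlBackend
data SqlReadBackend  = SqlReadBackend
data SqlWriteBackend = SqlWriteBackend

class HasPersistBackend backend where
  type BaseBackend backend

instance HasPersistBackend SqlBackend where
  type BaseBackend SqlBackend = SqlBackend
instance HasPersistBackend SqlReadBackend where
  type BaseBackend SqlReadBackend = SqlBackend
instance HasPersistBackend SqlWriteBackend where
  type BaseBackend SqlWriteBackend = SqlBackend

-- An operation tied to entities stored in SqlBackend still accepts
-- all three wrappers, via the BaseBackend equality constraint.
describe :: (HasPersistBackend backend, BaseBackend backend ~ SqlBackend)
         => backend -> String
describe _ = "backed by SqlBackend"

main :: IO ()
main = mapM_ putStrLn
  [ describe SqlBackend
  , describe SqlReadBackend
  , describe SqlWriteBackend
  ]
```

Each wrapper keeps its own distinct type for read/write safety, yet all resolve to the same BaseBackend, so entities declared against SqlBackend remain usable everywhere.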


If you have a read replica, you can now access it more safely. Even if you don't, your types are now a little more informative. And you get this for almost free!

Categories: Offsite Blogs

Emacs (was stack and the Atom editor)

haskell-cafe - Sun, 04/10/2016 - 8:36am
[Note the changed subject line :-) ] I've been having a great deal of confusion/trouble with emacs haskell-mode of late. No, this is not a gripe about haskell-mode. Just that I find myself confused over 1. The old comint interface and the new λ one -- when does which kick in? 2. Basic setups -- is it called haskell-mode/haskell-emacs or what? 3. I don't quite know when things stopped working/being comfortable/familiar, but I remember it as having nothing to do with haskell -- updating org mode or some such. Just that I am supposed to be (at least out here!!) an emacs and haskell expert, and while showing others how to get started I am often finding my own setups embarrassingly broken. Any tips for an old emacs-geezer?
Categories: Offsite Discussion

Haskell on OpenWRT

haskell-cafe - Sat, 04/09/2016 - 8:54am
Has anyone had any success running Haskell programs on OpenWRT? Specifically, I'm running OpenWRT on x86_64, so processor architecture shouldn't be an issue. However, by default OpenWRT uses musl as its C library, so binaries from a "normal" Linux system wouldn't be compatible with OpenWRT. I attempted to get around this problem by building an OpenWRT image with glibc as the C library. In theory, that ought to solve the problem. In practice, my program (a simple hello-world type program, which runs fine on Ubuntu) hung, printing nothing, using nearly all of one core, and was unkillable by any signal other than SIGKILL. If left alone to run, it eventually used all available memory and then died. I took a look at ldd, to see if there were any clues there. On my Ubuntu 12.04 machine, where I compiled the program (using ghc 7.4.1), I get: ppelletier< at >patrick64:~/programming/haskell$ ldd contact => (0x00007fff36f50000) => /usr/lib/x86_64-linux-gnu/
Categories: Offsite Discussion

Haskell, stack and the Atom editor

haskell-cafe - Sat, 04/09/2016 - 8:03am
Hi friends, I want to switch to the Atom editor for my Haskell development. I have tried several times over the last few years and given up each time because hardly anything works. I can't stand Emacs so don't bother suggesting it. I am currently using Geany, which has served me well, mostly because it is dead simple so nothing can go wrong. In addition to the usual universal text editor functions, all it does is highlight syntax (albeit with an outdated grammar), runs custom commands to build, displays clickable errors in a separate pane and underlines them in the editor pane. But for larger projects, I want more (better autocompletion, better integration with the build system, better VCS support). But back to Atom, I am having extreme difficulty getting even the basics working. I am using GHC 7.10.2, Cabal, Stack 1.0.4. I installed Atom 1.6.1 from their deb package. I installed the "language-haskell" package. This works very well, thankfully, but is the *only* thing that I can get working.
Categories: Offsite Discussion

Mark Jason Dominus: Two things about git

Planet Haskell - Fri, 04/08/2016 - 12:25pm

I'm becoming one of the people at my company that people come to when they want help with git, so I've been thinking a lot about what to tell people about it. It's always tempting to dive into the technical details, but I think the first and most important things to explain about it are:

  1. Git has a very simple and powerful underlying model. Atop this model is piled an immense trashheap of confusing, overlapping, inconsistent commands. If you try to just learn what commands to run in what order, your life will be miserable, because none of the commands make sense. Learning the underlying model has a much better payoff because it is much easier to understand what is really going on underneath than to try to infer it, Sherlock-Holmes style, from the top.

  2. One of Git's principal design criteria is that it should be very difficult to lose work. Everything is kept, even if it can sometimes be hard to find. If you lose something, don't panic. There's a good chance that you can find someone who will be able to hunt it down again. And if you make a mistake, it is almost always possible to put things back exactly the way they were, and you can find someone who can show you how to do it.

    One exception is changes that haven't been committed. These are not yet under Git's control, so it can't help you with them. Commit early and often.

[ Addendum 20160415: I wrote a detailed account of a time I recovered lost files. ]

Categories: Offsite Blogs

Remora: An Array-Oriented Language with Static Rank Polymorphism

Lambda the Ultimate - Fri, 04/08/2016 - 11:56am
The paper: An Array-Oriented Language with Static Rank Polymorphism, Justin Slepak, Olin Shivers, and Panagiotis Manolios, Northeastern University, 2014. Abstract. The array-computational model pioneered by Iverson’s languages APL and J offers a simple and expressive solution to the “von Neumann bottleneck.” It includes a form of rank, or dimensional, polymorphism, which renders much of a program’s control structure implicit by lifting base operators to higher-dimensional array structures. We present the first formal semantics for this model, along with the first static type system that captures the full power of the core language.

The formal dynamic semantics of our core language, Remora, illuminates several of the murkier corners of the model. This allows us to resolve some of the model’s ad hoc elements in more general, regular ways. Among these, we can generalise the model from SIMD to MIMD computations, by extending the semantics to permit functions to be lifted to higher-dimensional arrays in the same way as their arguments.

Our static semantics, a dependent type system of carefully restricted power, is capable of describing array computations whose dimensions cannot be determined statically. The type-checking problem is decidable and the type system is accompanied by the usual soundness theorems. Our type system’s principal contribution is that it serves to extract the implicit control structure that provides so much of the language’s expressive power, making this structure explicitly apparent at compile time.
Categories: Offsite Discussion