News aggregator

Haskell Platform 2014.2.0.0 is Released!

haskell-cafe - Sat, 08/09/2014 - 11:28pm
On behalf of the Haskell Platform team, I'm happy to announce the release of *Haskell Platform 2014.2.0.0*, featuring:

- GHC 7.8.3
- 53 packages
- 860+ public modules
- 4 tools

This release features a major upgrade to OpenGL and GLUT. In addition, "behind the scenes", this release of the platform is produced with a new build system, and brings many improvements to the installations on all three OS families. Get it now: Download Haskell Platform for Windows, for Mac OS X, or for Linux (and similar systems). N.B.: You may need to explicitly refresh those links when visiting in your browser; those pages tend to get cached for very long periods of time. — Mark "platform wrangler" Lentczner P.S.: I realize this one was a long time a'comin'. I take the responsibility for the decisions that led up to being this late, including deciding it was
Categories: Offsite Discussion

How feasible would it be to create a library for GLSL shader combinators?

Haskell on Reddit - Sat, 08/09/2014 - 8:09pm

I recently got into learning to use OpenGL in Haskell. The OpenGL bindings are fairly infamous for being very stateful and imperative, and unsurprisingly not very pleasant for a functional programmer. One area where this could be sorted out is shaders. As yet, as far as I am aware, GLSL shaders cannot even be written in Haskell.

Bear with me, as some of this may seem a little ill-formed -- I literally only thought of it an hour or so ago in the bath. For the purposes of argument, define a Shader type:

data Shader u i o = Shader { runShader :: ... , ... }

u is the type of the uniform values, i the inputs, and o the outputs. Typically we might have something like u = Either Array Int, etc.

Shader u is an instance of Arrow. The identity arrow would be that which mirrors inputs to outputs; arrow composition and first work in the obvious ways.

GLSL being a C-like language, we can see that there are probably instances for Shader u of ArrowChoice and ArrowLoop.
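To make this concrete, here is a minimal sketch of such a Shader arrow. It models a shader purely as a function from the uniform environment and its inputs to its outputs (a real library would pair this with GLSL code generation); all names here are illustrative:

```haskell
import Prelude hiding (id, (.))
import Control.Category (Category (..))
import Control.Arrow

-- A shader as a pure function of uniforms and inputs (illustrative only).
newtype Shader u i o = Shader { runShader :: u -> i -> o }

instance Category (Shader u) where
  id = Shader (\_ i -> i)                        -- mirror inputs to outputs
  Shader g . Shader f = Shader (\u -> g u . f u) -- run f, then g

instance Arrow (Shader u) where
  arr f = Shader (\_ -> f)
  first (Shader f) = Shader (\u (i, x) -> (f u i, x))

instance ArrowChoice (Shader u) where
  left (Shader f) = Shader step
    where
      step u (Left i)  = Left (f u i)
      step _ (Right d) = Right d

instance ArrowLoop (Shader u) where
  loop (Shader f) = Shader (\u i -> let (o, s) = f u (i, s) in o)

-- A toy "shader" scaling its input by a uniform factor.
scale :: Shader Float Float Float
scale = Shader (*)
```

Note that the ArrowLoop instance relies on laziness to tie the knot, which is exactly the kind of feature that would be hard to compile to GLSL; this is one reason such an instance is only "probably" available.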

This is just a vague idea, but it seems to suggest that a shader combinator library might be possible. There already exists a library for an internal representation of the GLSL format: language-glsl (although it currently only works with GLSL version 1.50).

If I had the necessary knowledge/patience, I would simply implement this myself, and attach a witty name to it, like Glacial or Colossal (ok, not that witty). Unfortunately, I have neither. Would some more experienced Haskellers care to comment on the feasibility of the above?

submitted by tekn04
[link] [11 comments]
Categories: Incoming News

Mike Izbicki: Scribal traditions of "ancient" Hebrew scrolls

Planet Haskell - Sat, 08/09/2014 - 6:00pm
Scribal traditions of "ancient" Hebrew scrolls posted on 2014-08-10

In 2006, I saw the dead sea scrolls in San Diego. The experience changed my life. I realized I knew nothing about ancient Judea, and decided to immerse myself in it. I studied biblical Hebrew and began a collection of Hebrew scrolls.

a pile of Torah scrolls

Each scroll is between 100 and 600 years old, and is a fragment of the Torah. These scrolls were used by synagogues throughout Southern Europe, Africa, and the Middle East. As we’ll see in a bit, each region has subtly different scribal traditions. But they all take their Torah very seriously.

The first thing that strikes me about a scroll is its color. Scrolls are made from animal skin, and the color is determined by the type of animal and method of curing the skin. The methods and animals used depend on the local resources, so color gives us a clue about where the scroll originally came from. For example, scrolls with a deep red color usually come from North Africa. As the scroll ages, the color may either fade or deepen slightly, but remains largely the same. The final parchment is called either gevil or klaf depending on the quality and preparation method.

The four scrolls below show the range of colors scrolls come in:

4 Torah scrolls side by side with different ages

My largest scroll is about 60 feet long. Here I have it partially stretched out on the couch in my living room:

Torah scroll stretched out on couch

The scroll is about 300 years old, and contains most of Exodus, Leviticus, and Numbers. A complete Torah scroll would also have Genesis and Deuteronomy and be around 150 feet long. Sadly, this scroll has been damaged throughout its long life, and the damaged sections were removed.

As you can imagine, many hides were needed to make these large scrolls. These hides get sewn together to form the full scroll. You can easily see the stitching on the back of the scroll:

back of a Torah scroll

Also notice how rough that skin is! The scribes (for obvious reasons) chose to use the nice side of the skin to write on.

Here is a rotated, front-side view of the same seam above. Some columns of text are separated at these seams, but some columns are not.

front of Torah seam

Animal hides come in many sizes. The hide in this image is pretty large and holds five columns of text:

5 panels of parchment

But this hide is smaller and holds only three columns:

3 panels of parchment

The coolest part of these scrolls is their calligraphy. Here’s a zoomed in look on one of the columns of text above:

zoomed in Hebrew Torah scroll

There’s a lot to notice in this picture:

  1. The detail is amazing. Many characters have small strokes decorating them. These strokes are called tagin (or crowns in English). A bit farther down the page we’ll see different ways other scribal traditions decorate these characters. Because of this detail in every letter, a scribe (or sopher) might spend the whole day writing without finishing a single piece of parchment. The average sopher takes between nine months and a year to complete a Torah scroll.

  2. There are faint indentations in the parchment that the sopher used to ensure he was writing straight. We learned to write straight in grade school by writing our letters on top of lines on paper. But in biblical Hebrew, the sopher writes their letters below the line!

  3. Hebrew is read and written right to left (backwards from English). To keep the left margin crisp, the letters on the left can be stretched to fill space. This effect is used in different amounts throughout the text. The stretching is more noticeable in this next section:

Hebrew stretched letters in Torah scroll

And sometimes the sopher goes crazy and stretches all the things:

scribe stretched all the letters on this line of a Hebrew Torah

If you look at the pictures above carefully, you can see that only certain letters get stretched: ת ד ח ה ר ל. These letters look nice when stretched because they have a single horizontal stroke.

The next picture shows a fairly rare example of stretching the letter ש. It looks much less elegant than the other stretched letters:

stretching the shem letter in a Hebrew Torah scroll

Usually these stretched characters are considered mistakes. An experienced sopher evenly spaces the letters to fill the line exactly. But a novice sopher can’t predict their space usage as well. When they hit the end of the line and realize they can’t fit another full word, they’ll add one of these stretched characters to fill the space.

In certain sections, however, stretched lettering is expected. It is one of the signs of poetic text in the Torah. For example, in the following picture, the sopher intentionally stretched each line, even when they didn’t have to:

closeup of Torah scroll with cool calligraphy

Keeping the left margin justified isn’t just about looks. The Torah is divided into thematic sections called parashot. There are two types of breaks separating parashot. The petuha (open) is a jagged edge, much like we end paragraphs in English. The setumah (closed) break is a long space in the middle of the line. The picture below shows both types of breaks:

Torah scroll containing a petuha and setumah parashah break

A sopher takes these parashot divisions very seriously. If the sopher accidentally adds or removes parashot from the text, the entire scroll becomes non-kosher and cannot be used. A mistake like this would typically be fixed by removing the offending piece of parchment from the scroll, rewriting it, and adding the corrected version back in. (We’ll see some pictures of less serious mistakes at the bottom of this page.)

The vast majority of the Torah is formatted as simple blocks of text. But certain sections must be formatted in special ways. This is a visual cue that the text is more poetic.

The passage below is of Numbers 10:35-36. Here we see an example of the inverted nun character being used to highlight some text. This is the only section of the Torah where this formatting appears (although it also appears seven times in the book of Psalms). The inverted nun characters are set all by themselves, and surround a command about the Ark of the Covenant:

Moses gives a command about the ark of the covenant in fancy Hebrew script; inverted nun character

It’s really cool when two different scrolls have sections that overlap. We can compare them side-by-side to watch the development of different scribal traditions. The image below shows two versions of Numbers 6:22-27.

The lord bless you and keep you rendered in a Hebrew Torah in fancy Hebrew script

The writing is almost identical in both versions, with one exception. On the first line with special formatting, the left scroll has two words in the right column: אמור להם, but the right scroll only has the word אמור (להם is the last word on the previous line). When the sopher is copying a scroll, he does his best to preserve the formatting in these special sections. But due to the vast Jewish diaspora, small mistakes like this get made and propagate. Eventually they form entirely new scribal traditions. (Note that if a single letter is missing from a Torah, then the Torah is not kosher and is considered unfit for use. These scribal differences are purely stylistic.)

Many individual characters and words also receive special formatting throughout the text. Both images below come from the same piece of parchment (in Genesis 23) and were created by the same sopher. The image on the left shows the letter פ in its standard form, and the image on the right shows it in a modified form.

a whirled pe in the Hebrew Torah side by side with a normal pe

The meaning of these special characters is not fully known, and every scribal tradition exhibits some variation in what letters get these extra decorations. In the scroll above, the whirled פ appears only once. But some scrolls exhibit the special character dozens of times. Here is another example where you can see a whirled פ a few letters to the right of its normal version:

a whirled pe and normal pe in the Hebrew Torah in the same sentence

Another special marker is when dots are placed over the Hebrew letters. The picture below comes from the story when Esau is reconciling with Jacob in Genesis 33. Normally, the dotted word would mean that Esau kissed Jacob in reconciliation; but tradition states that these dots indicate that Esau was being insincere. Some rabbis say that this word, when dotted, could be more accurately translated as Esau “bit” Jacob.

dots above words on the Hebrew Torah

Next, let’s take a look at God’s name written in many different styles. In Hebrew, God’s name is written יהוה. Christians often pronounce God’s name as Yahweh or Jehovah. Jews, however, never say God’s name. Instead, they say the word adonai, which means “lord.” In English old testaments, anytime you see the word Lord rendered in small caps, the Hebrew is actually God’s name. When writing in English, Jews will usually write God’s name as YHWH. Removing the vowels is a reminder to not say the name out loud.

Below are nine selected images of YHWH. Each comes from a different scroll and illustrates the decorations added by a different scribal tradition. A few are starting to fade from age, but they were the best examples I could find in the same style. The simplest letters are in the top left, and the most intricate in the bottom right. In the same scroll, YHWH is always written in the same style.

yahweh, jehova, the name of god, in many different Hebrew scripts

The next image shows the word YHWH at the end of the line. The ה letters get stretched just like in any other word. When I first saw this I was surprised a sopher would stretch the name of God like this—the name of God is taken very seriously and must be handled according to special rules. I can just imagine rabbis 300 years ago getting into heated debates about whether or not this was kosher!

stretched yahweh in Hebrew Torah scroll

There is another oddity in the image above. The letter yod (the small, apostrophe looking letter at the beginning of YHWH) appears in each line. But it is written differently in the last line. Here, it is given two tagin, but everywhere else it only has one. Usually, the sopher consistently applies the same styling throughout the scroll. Changes like this typically indicate the sopher is trying to emphasize some aspect of the text. Exactly what the changes mean, however, would depend on the specific scribal tradition.

The more general word for god in Hebrew is אלוהים, pronounced elohim. This word can refer to either YHWH or a non-Jewish god. Here it is below in two separate scrolls:

elohim, god, in Hebrew

Many Christians, when they first learn Hebrew, get confused by the word elohim. The ending im on Hebrew words is used to make a word plural, much like the ending s in English. (For example, the plural of sopher is sophrim.) Christians sometimes claim that because the Hebrew word for god looks plural, ancient Jews must have believed in the Christian doctrine of the trinity. But this is very wrong, and rather offensive to Jews.

Tradition holds that Moses is the sole author of the Torah, and that Jewish sophrim have given us perfect copies of Moses’ original manuscripts. Most modern scholars, however, believe in the documentary hypothesis, which challenges this tradition. The hypothesis claims that two different writers wrote the Torah. One writer always referenced God as YHWH, whereas the other always referenced God as elohim. The main evidence for the documentary hypothesis is that some stories in the Torah are repeated twice with slightly different details; in one version God is always called YHWH, whereas in the other God is always called elohim. The documentary hypothesis suggests that some later editor merged the two sources together, but didn’t feel comfortable editing out the discrepancies, so left them exactly as they were. Orthodox Jews reject the documentary hypothesis, but some strains of Judaism and most Christian denominations are willing to consider that the hypothesis might be true. This controversy is a very important distinction between different Jewish sects, but most Christians aren’t even aware of the controversy in their holy book.

The next two pictures show common grammatical modifications of the words YHWH and elohim: they have letters attached to them in the front. The word YHWH below has a ל in front. This signifies that something is being done to YHWH or for YHWH. The word elohim has a ה in front. This signifies that we’re talking about the God, not just a god. In Hebrew, prepositions like “to,” “for,” and “the” are not separate words. They’re just letters that get attached to the words they modify.

lamed on yhwh adonai plus a he on elohium

Names are very important in Hebrew. Most names are actually phrases. The name Jacob, for example, means “heel puller.” Jacob earned his name because he was pulling the heel of his twin brother Esau when they were born in Genesis 25:26. Below are two different versions of the word Jacob:

the name jacob written in fancy Hebrew script; genesis 25:26

But names often change in the book of Genesis. In fact, Jacob’s name is changed to Israel in two separate locations: first in Genesis 32 after Jacob wrestles with “a man”; then again in Genesis 35 after Jacob builds an altar to elohim. (This is one of the stories cited as evidence for the documentary hypothesis.) The name Israel is appropriate because it literally means “persevered with God.” The el at the end of Israel is a shortened form of elohim and is another Hebrew word for god.

Here is the name Israel in two different scripts:

Israel in Torah script Hebrew

Another important Hebrew name is ישוע. In Hebrew, this name is pronounced yeshua, but Christians commonly pronounce it Jesus! The name literally translates as “salvation.” That’s why the angel in Matthew 1:21 and Luke 1:31 gives Jesus this name. My scrolls are only of the old testament, so I don’t have any examples to show of Jesus’ name!

To wrap up the discussion of scribal writing styles, let’s take a look at the most common phrase in the Torah: ודבר יהוה אל משה. This translates to “and the Lord said to Moses.” Here it is rendered in three different styles:

vaydaber adonai lmosheh

vaydaber adonai lmosheh

vaydaber adonai lmosheh

Now let’s move on to what happens when the sophrim make mistakes.

Copying all of these intricate characters was exhausting work! And hard! So mistakes are bound to happen. But if even a single letter is wrong anywhere in the scroll, the entire scroll is considered unusable. The rules are incredibly strict, and this is why Orthodox Jews reject the documentary hypothesis. To them, it is simply inconceivable to use a version of the Torah that was combined from multiple sources.

The most common way to correct a mistake is to scratch off the outer layer of the parchment, removing the ink. In the picture below, the sopher has written the name Aaron (אהרן) over the scratched off parchment:

scribe mistake in Hebrew Torah scroll

The next picture shows the end of a line. Because of the mistake, however, the sopher must write several characters in the margin of the text, ruining the nice sharp edge they created with the stretched characters. Writing that enters the margins like this is not kosher.

scribe mistake in Hebrew Torah scroll

Sometimes a sopher doesn’t realize they’ve made a mistake until several lines later. In the picture below, the sopher has had to scratch off and replace three and a half lines:

scribe makes a big mistake and scratches off several lines in a Torah scroll

Scratching the parchment makes it thinner and weaker. Occasionally the parchment is already very thin, and scratching would tear through to the other side. In this case, the sopher can take a thin piece of blank parchment and attach it to the surface. In the following picture, you can see that the attached parchment has a different color and texture.

parchment mistake added ontop Torah scroll

The next picture shows a rather curious instance of this technique. The new parchment is placed so as to cover only parts of words on multiple lines. I can’t imagine how a sopher would make a mistake that would best be fixed in this manner. So my guess is that this patch was applied some time later, by a different sopher to repair some damage that had occurred to the scroll while it was in use.

parchment of repair added to the top of a Torah scroll

Our last example of correcting mistakes is the rarest. Below, the sopher completely forgot a word when copying the scroll, and added it in superscript above the standard text:

superscript mistake fixing in Torah scroll

If we zoom in, you can see that the superscript word is slightly more faded than the surrounding text. This might be because the word was discovered to be missing a long time (days or weeks) after the original text was written, so a different batch of ink was used to write the word.

superscript mistake fixing in Torah scroll

Since these scrolls are several hundred years old, they’ve had plenty of time to accumulate damage. When stored improperly, the parchment can tear in some places and bunch up in others:

parchment Torah scroll damage

One of the worst things that can happen to a scroll is water. It damages the parchment and makes the ink run. If this happens, the scroll is ruined permanently.

water damage on a torah scroll

you should learn Hebrew!

If you’ve read this far and enjoyed it, then you should learn biblical Hebrew. It’s a lot of fun! You can start right now at any of these great sites:

When you’re ready to get serious, you’ll need to get some books. The books that helped me the most were:

These books all have lots of exercises and make self study pretty simple. The Biblical Hebrew Workbook is for absolute beginners. Within the first few sessions you’re translating actual bible verses and learning the nuances that get lost in the process. I spent two days a week with this book, two hours at each session. It took about four months to finish.

The other two books start right where the workbook stops. They walk you through many important passages and even entire books of the old testament. After finishing these books, I felt comfortable enough to start reading the old testament by myself. Of course I was still very slow and was constantly looking things up in the dictionary!

For me, learning the vocabulary was the hardest part. I used a great free piece of software called FoundationStone to help. The program remembers which words you struggle with and quizzes you on them more frequently.

Finally, let’s end with my favorite picture of them all. Here we’re looking down through a rolled up Torah scroll at one of my sandals.

torah sandals james bond

Categories: Offsite Blogs

Edward Z. Yang: What’s a module system good for anyway?

Planet Haskell - Sat, 08/09/2014 - 5:21pm

This summer, I've been working at Microsoft Research implementing Backpack, a module system for Haskell. Interestingly, Backpack is not really a single monolithic feature, but, rather, an agglomeration of small, infrastructural changes which combine together in an interesting way. In this series of blog posts, I want to talk about what these individual features are, as well as how the whole is greater than the sum of the parts.

But first, there's an important question that I need to answer: What's a module system good for anyway? Why should you, an average Haskell programmer, care about such nebulous things as module systems and modularity? At the end of the day, you want your tools to solve specific problems you have, and it is sometimes difficult to understand what problem a module system like Backpack solves. As tomejaguar puts it: "Can someone explain clearly the precise problem that Backpack addresses? I've read the paper and I know the problem is 'modularity' but I fear I am lacking the imagination to really grasp what the issue is."

Look no further. In this blog post, I want to talk concretely about problems Haskellers have today, explain what the underlying causes of these problems are, and say why a module system could help you out.

The String, Text, ByteString problem

As experienced Haskellers are well aware, there is a multitude of string types in Haskell: String, ByteString (both lazy and strict), Text (also both lazy and strict). To make matters worse, there is no one "correct" choice of a string type: different types are appropriate in different cases. String is convenient and native to Haskell'98, but very slow; ByteString is fast but is simply an array of bytes; Text is slower but Unicode-aware.

In an ideal world, a programmer might choose the string representation most appropriate for their application, and write all their code accordingly. However, this is little solace for library writers, who don't know what string type their users are using! What's a library writer to do? There are only a few choices:

  1. They "commit" to one particular string representation, leaving users to manually convert from one representation to another when there is a mismatch. Or, more likely, the library writer used the default because it was easy. Examples: base (uses Strings because it completely predates the other representations), diagrams (uses Strings because it doesn't really do heavy string manipulation).
  2. They can provide separate functions for each variant, perhaps identically named but placed in separate modules. This pattern is frequently employed to support both the strict and lazy variants of Text and ByteString. Examples: aeson (providing decode/decodeStrict for lazy/strict ByteString), attoparsec (providing Data.Attoparsec.ByteString/Data.Attoparsec.ByteString.Lazy), lens (providing Data.ByteString.Lazy.Lens/Data.ByteString.Strict.Lens).
  3. They can use type-classes to overload functions to work with multiple representations. The particular type class used hugely varies: there is ListLike, which is used by a handful of packages, but a large portion of packages simply roll their own. Examples: SqlValue in HDBC, an internal StringLike in tagsoup, and yet another internal StringLike in web-encodings.

The last two methods have different trade-offs. Defining separate functions as in (2) is a straightforward and easy-to-understand approach, but you are still saying no to modularity: the ability to support multiple string representations. Despite providing implementations for each representation, the user still has to commit to a particular representation when they do an import. If they want to change their string representation, they have to go through all of their modules and rename their imports; and if they want to support multiple representations, they'll still have to write separate modules for each of them.

Using type classes (3) to regain modularity may seem like an attractive approach. But this approach has both practical and theoretical problems. First and foremost, how do you choose which methods go into the type class? Ideally, you'd pick a minimal set, from which all other operations could be derived. However, many operations are most efficient when directly implemented, which leads to a bloated type class, and a rough time for other people who have their own string types and need to write their own instances. Second, type classes make your type signatures uglier (String -> String becomes StringLike s => s -> s) and can make type inference more difficult (for example, by introducing ambiguity). Finally, the type class StringLike has a very different character from the type class Monad, which has a minimal set of operations and laws governing their operation. It is difficult (or impossible) to characterize what the laws of an interface like this should be. All in all, it's much less pleasant to program against type classes than concrete implementations.
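For illustration, here is a deliberately minimal version of such a class (the names are made up, not taken from any of the packages above). Even with three methods the efficiency pressure is visible: a fast Text or ByteString instance would want many operations implemented directly rather than via toString/fromString, which is exactly how these classes bloat:

```haskell
{-# LANGUAGE FlexibleInstances #-}
import qualified Data.ByteString.Char8 as B

-- A hypothetical minimal string class.
class StringLike s where
  toString   :: s -> String
  fromString :: String -> s
  slAppend   :: s -> s -> s

instance StringLike [Char] where
  toString   = id
  fromString = id
  slAppend   = (++)

instance StringLike B.ByteString where
  toString   = B.unpack
  fromString = B.pack
  slAppend   = B.append

-- One definition now serves every representation, at the cost of a
-- class constraint in the signature.
shout :: StringLike s => s -> s
shout s = s `slAppend` fromString "!"
```

Note how round-tripping through String in a default method would be correct but slow, so realistic instances demand a larger class.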

Wouldn't it be nice if I could import String, giving me the type String and operations on it, but then later decide which concrete implementation I want to instantiate it with? This is something a module system can do for you! This Reddit thread describes a number of other situations where an ML-style module would come in handy.

(PS: Why can't you just write a pile of preprocessor macros to swap in the implementation you want? The answer is, "Yes, you can; but how are you going to type check the thing, without trying it against every single implementation?")
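As a sketch of what depending on an interface could look like, here is a hypothetical module signature in the style Backpack proposes (illustrative names, not the paper's exact notation). A library typechecks once against the signature, and a client later instantiates Str with String, Text, or ByteString:

```
-- Str.hsig: a signature, not an implementation.
signature Str where
  data Str
  empty  :: Str
  append :: Str -> Str -> Str
  toList :: Str -> [Char]
```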

Destructive package reinstalls

Have you ever gotten this error message when attempting to install a new package?

$ cabal install hakyll
cabal: The following packages are likely to be broken by the reinstalls:
pandoc-
Graphalyze-
Use --force-reinstalls if you want to install anyway.

Somehow, Cabal has concluded that the only way to install hakyll is to reinstall some dependency. Here's one way a situation like this could come about:

  1. pandoc and Graphalyze are compiled against the latest unordered-containers-, which itself was compiled against the latest hashable-
  2. hakyll also has a dependency on unordered-containers and hashable, but it has an upper bound restriction on hashable which excludes the latest hashable version. Cabal decides we need to install an old version of hashable, say hashable-
  3. If hashable- is installed, we also need to build unordered-containers against this older version for Hakyll to see consistent types. However, the resulting version is the same as the preexisting version: thus, reinstall!

The root cause of this error is an invariant Cabal currently enforces on a package database: there can only be one instance of a package for any given package name and version. In particular, this means that it is not possible to install a package multiple times, compiled against different dependencies. This is a bit troublesome, because sometimes you really do want the same package installed multiple times with different dependencies: as seen above, it may be the only way to fulfill the version bounds of all packages involved. Currently, the only way to work around this problem is to use a Cabal sandbox (or blow away your package database and reinstall everything, which is basically the same thing).

You might be wondering, however, how could a module system possibly help with this? It doesn't... at least, not directly. Rather, nondestructive reinstalls of a package are a critical feature for implementing a module system like Backpack (a package may be installed multiple times with different concrete implementations of modules). Implementing Backpack necessitates fixing this problem, moving Haskell's package management a lot closer to that of Nix or NPM.

Version bounds and the neglected PVP

While we're on the subject of cabal-install giving errors, have you ever gotten this error attempting to install a new package?

$ cabal install hledger-0.18
Resolving dependencies...
cabal: Could not resolve dependencies:
# pile of output

There are a number of possible reasons why this could occur, but usually it's because some of the packages involved have over-constrained version bounds (especially upper bounds), resulting in an unsatisfiable set of constraints. To add insult to injury, often these bounds have no grounding in reality (the package author simply guessed the range) and removing them would result in a working compilation. This situation is so common that Cabal has a flag --allow-newer which lets you override the upper bounds of packages. The annoyance of managing bounds has led to the development of tools like cabal-bounds, which try to make it less tedious to keep upper bounds up-to-date.

But as much as we like to rag on them, version bounds have a very important function: they prevent you from attempting to compile packages against dependencies which don't work at all! An under-constrained set of version bounds can easily lead to compiling against a version of a dependency which doesn't even type check.
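Concretely, these bounds live in a package's .cabal file; a hypothetical build-depends stanza showing both failure modes:

```
library
  build-depends:
      base     >=4.6 && <4.8,
      hashable >=1.2 && <1.2.2,  -- guessed upper bound: may reject versions that would work
      text     >=0.11            -- no upper bound: a future major release may not type check
```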

How can a module system help? At the end of the day, version numbers are trying to capture something about the API exported by a package, described by the package versioning policy. But the current state-of-the-art requires a user to manually translate changes to the API into version numbers: an error prone process, even when assisted by various tools. A module system, on the other hand, turns the API into a first-class entity understood by the compiler itself: a module signature. Wouldn't it be great if packages depended upon signatures rather than versions: then you would never have to worry about version numbers being inaccurate with respect to type checking. (Of course, versions would still be useful for recording changes to semantics not seen in the types, but their role here would be secondary in importance.) Some full disclosure is warranted here: I am not going to have this implemented by the end of my internship, but I'm hoping to make some good infrastructural contributions toward it.


If you skimmed the introduction to the Backpack paper, you might have come away with the impression that Backpack is something about random number generators, recursive linking and applicative semantics. While these are all true "facts" about Backpack, they understate the impact a good module system can have on the day-to-day problems of a working programmer. In this post, I hope I've elucidated some of these problems, even if I haven't convinced you that a module system like Backpack actually goes about solving these problems: that's for the next series of posts. Stay tuned!

Categories: Offsite Blogs

Literate Futoshiki solver

Haskell on Reddit - Sat, 08/09/2014 - 2:10pm
Categories: Incoming News

ANN: immortal-0.1

haskell-cafe - Sat, 08/09/2014 - 1:57pm
I am pleased to announce the first release of the "immortal" package. Immortal allows you to create threads that never die. If the action executed by such a thread finishes or is killed by an exception, it is automatically restarted. Such a behavior is often desired when writing servers, so I decided to release a package that does exactly that. Roman _______________________________________________ Haskell-Cafe mailing list Haskell-Cafe< at >
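The core idea can be sketched in a few lines. This is an illustration of the restart-on-exit behaviour only, not the actual immortal API; superviseN and its run bound are made up so the demo terminates, whereas the real package loops forever in a forked thread:

```haskell
import Control.Exception (SomeException, try)

-- Run an action repeatedly, restarting it whether it returns normally or
-- dies with an exception; bounded at n runs so this demonstration ends.
superviseN :: Int -> IO () -> IO Int
superviseN n act = go 0
  where
    go k
      | k >= n    = pure k
      | otherwise = do
          _ <- try act :: IO (Either SomeException ())
          go (k + 1)
```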
Categories: Offsite Discussion

Proposal: Export Data.Word.Word from Prelude

libraries list - Sat, 08/09/2014 - 12:38pm
Hello *, Proposal -------- I hereby propose to export Haskell2010's Data.Word.Word type from the Prelude Motivation ---------- Starting with Haskell2010, "Data.Word" exporting the type 'Word', "an unsigned integral type, with the same size as 'Int'", became part of the Haskell Report. 'Word' is usually a better choice than 'Int' when non-negative quantities (such as list lengths, bit or vector indices, or the number of items in a container) need to be represented. Currently, however, 'Word' is at a disadvantage (with respect to 'Int') in terms of public exposure by being accessible only after an "import Data.Word". Moreover, since 'Word' is now part of the Haskell Report, libraries should attempt to avoid name-clashing with 'Word' (and if they do, it ought to be a conscious decision, declared by requiring an "import Prelude hiding (Word)"). While one might think 'Word' would be a popular type-name to use in Haskell code, the current level of name collision is still rather low (as is show
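As a small illustration of the proposal's point (this snippet is mine, not from the proposal): a length can never be negative, so Word is arguably the more honest type, yet today it costs an extra import.

```haskell
import Data.Word (Word)
import Data.List (genericLength)

-- A count of items is inherently non-negative, so Word fits better than
-- Int; the import of Data.Word is exactly what the proposal would remove.
itemCount :: [a] -> Word
itemCount = genericLength
```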
Categories: Offsite Discussion

Lambdaheads - Vienna Functional Programming

Haskell on Reddit - Sat, 08/09/2014 - 3:16am

Friends of Functional Programming!

I'd like to announce the monthly meeting of the Lambdaheads, a functional programming group based in Vienna. We will meet at the Metalab library in Rathausstraße 6, 1010 Vienna at 19:30.

I will prepare a talk on Record Syntax - Lenses and Prisms, and I hope we will have another talk about "STM with Finalizers". I hope my talk will be beginner-friendly, whereas I guess the STM talk will be a bit more advanced.

If you are interested in functional programming, be it Haskell or a different language, come and join us for a nice evening.

submitted by epsilonhalbe
[link] [5 comments]
Categories: Incoming News

language-puppet: 7 Startups - part 5 - the XMPP backend

Planet Haskell - Sat, 08/09/2014 - 2:48am

Note: I ran out of time weeks ago. I could never finish this series as I envisioned, and I don't see much free time on the horizon. Instead of letting this linger forever, here is a truncated conclusion. The previous episodes were:

  • Part 1 : probably the best episode, about the basic game types.
  • Part 2 : definition of the game rules in an unspecified monad.
  • Part 3 : writing an interpreter for the rules.
  • Part 4 : stumbling and failure in writing a clean backend system.

In the previous episode I added a ton of STM code and helper functions in several 15-minute sessions. The result was not pretty, and left me dissatisfied.

For this episode, I decided to relax my constraints. For now, I am only going to support the following:

  • The backend list will not be dynamic: a bunch of backends are going to be registered once, and it will not be possible to remove an existing backend or add a new one once this is done.
  • The backends will be text-line based (XMPP and IRC are good protocols for this). This will unfortunately make it harder to write a nice web interface for the game, but given how much time I can devote to this side project, that doesn't matter much …
The MVC paradigm

A great man once said that "if you have category theory, everything looks like a pipe. Or a monad. Or a traversal. Or perhaps it's a cosomething". With the previously mentioned restrictions, I was able to shoehorn my problem into the shape of the mvc package, which I had wanted to try for a while. It might be a bit different from what people usually expect when talking about the model - view - controller pattern, and is basically:

  • Some kind of pollable input (the controllers),
  • a pure stream based computation (the model), sporting an internal state and transforming the data coming from the inputs into something that is passed to …
  • … IO functions that run the actual effects (the views).

Each of these components can be reasoned about separately, and combined together in various ways.

There is however one obvious problem with this pattern, due to the way the game is modeled. Currently, the game is supposed to be able to receive data from the players, and to send data to them. It would need to live entirely in the model for this to work as expected, but the way it is currently written doesn’t make it obvious.

It might be possible to make the game explicitly CPS-style, so that the pure part would run the game until communication with the players is required, which would translate nicely into an output that could be consumed by a view.
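A minimal version of that idea (the names here are mine, not from the 7 Startups codebase): the game becomes a value that either finishes, or yields an output together with a continuation awaiting player input.

```haskell
-- A game step either finishes with a result, or emits an output for a
-- player and suspends until an input arrives.
data Step o i r
  = Done r
  | NeedInput o (i -> Step o i r)

-- Example game: ask two questions, sum the answers.
askTwice :: Step String Int Int
askTwice =
  NeedInput "first?" $ \x ->
    NeedInput "second?" $ \y ->
      Done (x + y)

-- Drive a Step with a canned list of inputs (a stand-in for a real view).
runWith :: [i] -> Step o i r -> ([o], Maybe r)
runWith _ (Done r)               = ([], Just r)
runWith [] (NeedInput o _)       = ([o], Nothing)
runWith (i : is) (NeedInput o k) =
  let (os, r) = runWith is (k i) in (o : os, r)
```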

This would however require some refactoring and a lot of thinking, which I currently don't have time for, so here is instead how the information flows:

Here PInput and GInput are the types of the inputs (from players and games, respectively). The blue boxes are two models that will be combined together. The pink ones are the types of the outputs emitted by the models. The backends serve as drivers for player communication. The games run in their respective threads, and the game manager spawns and manages the game threads.

Comparison with the “bunch of STM functions” model

I originally started with a global TVar containing the state information of each player (for example, whether they are part of a game, still joining, expected to answer a game query, etc.). There were a bunch of "helper functions" that would manipulate the global state in a way that would ensure its consistency. The catch is that the backends were responsible for calling these helper functions at appropriate times and for not messing with the global state.

The MVC pattern forces the structure of your program. In my particular case, it means a trick is necessary to integrate it with the current game logic (which will be explained later). The "boSf" (bunch of STM functions) pattern is more flexible, but carries a higher cognitive cost.

With the "boSf" pattern, responses to player inputs could be:

  • Messages to players, which fit well with the model, as they happened over STM channels, so the whole processing / state manipulation / player output step could be of type Input -> STM ().
  • Spawning a game. This time we need forkIO and state manipulation. This means a type like c :: Input -> STM (IO ()), with a call like join (atomically (c input)).

Now there are helper functions that return an IO action, and some that don't. Whenever some functionality is added, more functions need to start returning IO actions. This is ugly and makes the system harder to extend.
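A condensed sketch of that shape (the types and names here are illustrative, not the actual 7 Startups code): each helper updates the shared state transactionally and returns the IO action, possibly a no-op, to run once the transaction commits.

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.STM
import Control.Monad (void)

data Input = JoinGame | StartGame

-- The decision about what IO to run is made inside the STM transaction,
-- and the caller executes it afterwards.
handle :: TVar Int -> Input -> STM (IO ())
handle pending JoinGame = do
  modifyTVar' pending (+ 1)
  pure (pure ())                    -- pure case: nothing to run in IO
handle pending StartGame = do
  n <- readTVar pending
  writeTVar pending 0
  pure (void (forkIO (runGame n)))  -- impure case: spawn a game thread
  where
    runGame _ = pure ()             -- placeholder for the real game loop

-- call site: join (atomically (handle pending input))
```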

Conclusion of the series

Unfortunately I ran out of time for working on this series a few weeks ago. The code is out, the game works and it's fun. My original motivation for writing these posts was as an exposition of basic type-directed design for my non-Haskeller friends, but I don't think it's approachable for non-Haskellers, so I never showed it to them.

The main takeaways are :

Game rules

The game rules were first written against an unspecified monad that exposed several functions required for user interaction. That's why I started by defining a typeclass: that way I wouldn't have to worry about implementing the "hard" part and could concentrate on writing the rules instead. For me, this was the fun part, and it was also the quickest.

As for the implementation of the aforementioned functions, I used the operational package, which let me write interpreters for my game rules. One of them is pure, and used in tests. There are two other interpreters: one for the console version of the game, the other for the multi-backend system.
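The shape of that design can be sketched with a typeclass and a pure interpreter (this illustrates the approach only — the names are made up, and the operational package achieves the same separation with less boilerplate):

```haskell
-- Rules are written once, against an abstract interface.
class Monad m => GameMonad m where
  tellPlayer :: String -> String -> m ()  -- player id, message
  askPlayer  :: String -> m Int           -- ask a player for a choice

pickCard :: GameMonad m => String -> m Int
pickCard pid = do
  tellPlayer pid "pick a card"
  askPlayer pid

-- Pure interpreter for tests: canned answers in, collected messages out.
newtype Pure a = Pure { runPure :: [Int] -> (a, [String], [Int]) }

instance Functor Pure where
  fmap f (Pure g) = Pure $ \ans -> let (a, w, ans') = g ans in (f a, w, ans')
instance Applicative Pure where
  pure a = Pure $ \ans -> (a, [], ans)
  Pure f <*> Pure g = Pure $ \ans ->
    let (h, w1, ans1) = f ans
        (a, w2, ans2) = g ans1
    in (h a, w1 ++ w2, ans2)
instance Monad Pure where
  Pure g >>= k = Pure $ \ans ->
    let (a, w1, ans1) = g ans
        (b, w2, ans2) = runPure (k a) ans1
    in (b, w1 ++ w2, ans2)

instance GameMonad Pure where
  tellPlayer p msg = Pure $ \ans -> ((), [p ++ ": " ++ msg], ans)
  askPlayer _ = Pure $ \(a : rest) -> (a, [], rest)  -- partial: demo only
```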

Backend system

The backends are, I think, easy to expand. Building the core of the multi-game logic with the mvc package was very straightforward. It would be easy to add an IRC backend alongside the XMPP one, if there weren't so many IRC packages to choose from on Hackage …

A web backend doesn't seem terribly complicated to write, until you want to take into account some common web application constraints, such as having several redundant servers. In order to do so, the game interpreter would have to be turned into an explicit continuation-like system (with the twist that it only returns on blocking calls) and the game state serialized in a shared storage system.


My main motivation was to show that it was possible to eliminate tons of bug classes by encoding the invariants in the type system. I would say this was a success.

The area where I expected a ton of problems was the card list. Entering it is a tedious manual process, but some tests weeded out most of the errors (it helps that there are some properties that can be verified on the deck). The other problem area was the XMPP message processing, in all its XML horror. It looks terrible.

The area where I wanted this process to shine was a success: I wrote the game rules in one go, without any feedback. Once they were completed, I wrote the backends and tested the game. It turned out there were very few bugs, especially considering that the game is a moderately complicated board game:

  • One of the special capabilities was replaced with another, and handled at the wrong moment in the game. This was quickly debugged.
  • I used traverse instead of both for tuples. I expected them to have the same result, and it "typechecked" because my tuple was of type (a,a), but traverse for a pair only visits the second component, so this wasn't the case. That took a bit longer to find out, as it impacted half of the military victory points, which are distributed only three times per game.
  • I didn't listen to my own advice, and didn't take the time to properly encode that some functions only work with non-empty lists as arguments. This was also quickly found out, using QuickCheck.
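The pitfall in the second bullet is easy to reproduce; bothTuple here is a lens-free stand-in for Control.Lens.both:

```haskell
import Data.Functor.Identity (Identity (..))

-- A pair is a Functor/Traversable in its *second* slot only, so traverse
-- silently skips the first component:
overTraverse :: (a -> a) -> (b, a) -> (b, a)
overTraverse f = runIdentity . traverse (Identity . f)

-- What `both` from lens actually does on a homogeneous pair:
bothTuple :: (a -> a) -> (a, a) -> (a, a)
bothTuple f (x, y) = (f x, f y)
```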

The game seems to run fine now. There is a minor rule bug identified (the interaction between card-recycling abilities and the last turn, for example), but I don't have time to fix it.

There might be some interest in the types of the Hub, as they also encode a lot of invariants.

Also off-topic, but I really like using the lens vocabulary to encode the relationship between types these days. A trivial example can be found here.

The game

That might be the most important part. I played a score of games, and it was a lot of fun. The game is playable, and just requires a valid account on an XMPP server. Have fun!

Categories: Offsite Blogs

Need help understanding lazysplines

Haskell on Reddit - Fri, 08/08/2014 - 11:28pm

The documentation is rather sparse, but from reading the source and trying to understand the examples, it seems that duckDeathAtAge defines a piecewise function where duckDeathAtAge at x gives the probability that the duck will die at age x. Then, survival at x gives the probability that the duck will live until age x. I'm not fully understanding how the recursion works here, and the other examples seem almost impenetrable.

Some googling led me to the announcement of a talk by the author, but I couldn't find slides/video on the websites linked at the end of the paper.

Can anyone with more experience with the library point me to additional resources to learn about the topic?

submitted by precalc
[link] [3 comments]
Categories: Incoming News

Yesod Web Framework: Deprecating yesod-platform

Planet Haskell - Fri, 08/08/2014 - 11:10pm

I want to deprecate the yesod-platform, and instead switch to Stackage server as the recommended installation method for Yesod for end users. To explain why, let me explain the purpose of yesod-platform, the problems I've encountered maintaining it, and how Stackage Server can fit in. I'll also explain some unfortunate complications with Stackage Server.

Why yesod-platform exists

Imagine a simpler Yesod installation path:

  1. cabal install yesod-bin, which provides the yesod executable.
  2. yesod init to create a scaffolding.
  3. cabal install inside that directory, which downloads and installs all of the necessary dependencies.

This in fact used to be the installation procedure, more or less. However, this led to a number of user problems:

  • Back in the earlier days of cabal-install, it was difficult for the dependency solver to find a build plan in this situation. Fortunately, cabal-install has improved drastically since then.
    • This does still happen occasionally, especially with packages with restrictive upper bounds. Using --max-backjumps=-1 usually fixes that.
  • It sometimes happens that an upstream package from Yesod breaks Yesod, either by changing an API accidentally, or by introducing a runtime bug.

This is where yesod-platform comes into play. Instead of leaving it up to cabal-install to track down a consistent build plan, it specifies exact versions of all dependencies to ensure a consistent build plan.

Conflicts with GHC deps/Haskell Platform

Yesod depends on aeson. So logically, yesod-platform should have a strict dependency on aeson. We try to always use the newest versions of dependencies, so today, that would be aeson == In turn, this demands text >= However, if you look at the Haskell Platform changelog, there's no version of the platform that provides a new enough version of text to support that constraint.

yesod-platform could instead specify an older version of aeson, but that would unnecessarily constrain users who aren't sticking to the Haskell Platform versions (which, in my experience, is the majority of users). This would also cause more dependency headaches down the road, as you'd now also need to force older versions of packages like criterion.

To avoid this conflict, yesod-platform has taken the approach of simply omitting constraints on any packages in the platform, as well as any packages with strict bounds on those packages. And if you look at yesod-platform today, you'll see that there is no mention of aeson or text.

A similar issue pops up for packages that are a dependency of the GHC package (a.k.a., GHC-the-library). The primary problem there is the binary package. In this case, the allowed version of the package depends on which version of GHC is being used, not the presence or absence of the Haskell Platform.

This results in two problems:

  • It's very difficult to maintain this list of excluded packages correctly. I get a large number of bug reports about these kinds of build plan problems.

  • We're giving up quite a bit of the guaranteed buildability that yesod-platform was supposed to provide. If aeson (as an example) doesn't work with yesod-form, yesod-platform won't be able to prevent such a build plan from happening.

There's also an issue with the inability to specify dependencies on executable-only packages, like alex, happy, and yesod-bin.

Stackage Server

Stackage Server solves exactly the same problem. It provides a consistent set of packages that can be installed together. Unlike yesod-platform, its snapshots can be distinguished by GHC version. And it's far simpler to maintain: firstly, I'm already maintaining Stackage Server full time, and secondly, all of the testing work is handled by a very automated process.

So here's what I'm proposing: I'll deprecate the yesod-platform package, and change the Yesod quickstart guide to have the following instructions:

  • Choose an appropriate Stackage snapshot from
  • Modify your cabal config file appropriately
  • cabal install yesod-bin alex happy
  • Use yesod init to set up a scaffolding
  • cabal install --enable-tests in the new directory

For users wishing to live on more of a bleeding edge, the option is always available to simply not use Stackage. Such a usage will give more control over package versions, but will also lack some stability.

The problems

There are a few issues that need to be ironed out.

  • cabal sandbox does not allow changing the remote-repo. Fortunately, Luite seems to have this solved, so hopefully this won't be a problem for long. Until then, you can either use a single Stackage snapshot for all your development, or use a separate sandboxing technology like hsenv.

  • Haskell Platform conflicts still exist. The problem I mentioned above with aeson and text is a real problem. The theoretically correct solution is to create a Stackage snapshot for GHC 7.8 + Haskell Platform. And if there's demand for that, I'll bite the bullet and do it, but it's not an easy bullet to bite. But frankly, I'm not hearing a lot of users saying that they want to peg Haskell Platform versions specifically.

    In fact, the only users who really seem to want to stick to Haskell Platform versions are Windows users, and the main reason for this is the complexity in installing the network package on Windows. I think there are three possible solutions to this issue, without forcing Windows users onto old versions of packages:

    1. Modify the network package to be easier to install on Windows. I really hope this has some progress. If this is too unstable to be included in the official Hackage release, we could instead have an experimental Stackage snapshot for Windows with that modification applied.
    2. Tell Windows users to simply bypass Stackage and yesod-platform, with the possibility of more build problems on that platform.
      • We could similarly recommend Windows users develop in a Linux virtual machine/Docker image.
    3. Provide a Windows distribution of GHC + cabal-install + network. With the newly split network/network-uri, this is a serious possibility.

Despite these issues, I think Stackage Server is a definite improvement on yesod-platform on Linux and Mac, and will likely still improve the situation on Windows, once we figure out the Haskell Platform problems.

I'm not making any immediate changes. I'd very much like to hear people using Yesod on various operating systems to see how these changes will affect them.

Categories: Offsite Blogs

Get out structured data from a C/C++ library inHaskell

haskell-cafe - Fri, 08/08/2014 - 3:34pm
Hello everybody. I'm new to the list so I'd like to say hello to you. I'm a student of computer science and early practitioner of Haskell. I've decided to implement the next project in Haskell but I need to interface a C++ library. I've read all the wikis material about the matter and I have an understanding of how FFI to C works in general. The library in question is a SAT solver for LTL formulas, and all I need to do is to be able to create the AST of the formula (Haskell will do the parsing) and pass it to the library, and then get back the reply. From the C++ point of view, the AST of the formulas consists simply of objects linked together with raw pointers. Nodes are of the same type, with an internal enum that specifies the type of node, so it's not an inheritance hierarchy or fancy things... What I would like to do is to be able to declare an algebraic data type that represents the AST and somehow mangle it to a form that can be passed to the C++ function of the library (that I can wrap into an ex
Categories: Offsite Discussion