News aggregator

New in-depth guide to stack

Haskell on Reddit - Tue, 09/01/2015 - 12:20am
Categories: Incoming News

FP Complete: stack: more binary package sharing

Planet Haskell - Tue, 09/01/2015 - 12:00am

This blog post describes a new feature in stack. Until now, multiple projects using the same snapshot could share the binary builds of packages. However, two separate snapshots could not share the binary builds of their packages, even if they were substantially identical. That's now changing.

tl;dr: stack will now be able to install new snapshots much more quickly, with less disk space usage, than previously.

This has been a known shortcoming since stack was first released. It's not coincidental that this support is being added not long after a similar project was completed for Cabal. Ryan Trinkle, Vishal's mentor on the project, described the work to me a few months back, and I decided to wait to see the outcome of the project before working on the feature in stack.

The improvements to Cabal here are superb, and I'm thrilled to see them happening. However, after reviewing and discussing with a few stack developers and users, I decided to implement a different approach that doesn't take advantage of the new Cabal changes. The reasons are:

  • As Herbert very aptly pointed out on Reddit:

    Since Stack sandboxes everything maximum sharing between LTS versions can easily be implemented going back to GHC 7.0 without this new multi-instance support.

    This multi-instance support is needed if you want to accomplish the same thing without isolated sandboxes in a single package db.

  • There are some usability concerns around a single massive database with all packages in it. Specifically, there are potential problems around getting GHC to choose a coherent set of packages when using something like ghci or runghc. Hopefully some concept of views will be added (as Duncan described in the original proposal), but the implications still need to be worked out.

  • stack users are impatient (and I mean that in the best way possible). Why wait for a feature when we could have it now? While the Cabal Google Summer of Code project is complete, the changes are not yet merged to master, much less released. stack would need to wait until those changes are readily available to end users before relying on them.

stack's implementation

I came up with some complicated approaches to the problem, but ultimately a comment from Aaron Wolf rang true:

check the version differences and just copy compiled binaries from previous LTS for unchanged items

It turns out that this is really easy. The implementation ends up having two components:

  1. Whenever a snapshot package is built, write a precompiled cache file containing the filepaths of the library's .conf file (from inside the package database) and all of the executables installed.
  2. Before building a snapshot package, check for a precompiled cache file. If the file exists, copy over the executables and register the .conf file into the new snapshots database.

That precompiled cache file's path looks something like this:


This encodes the GHC version, Cabal version, package name, and package version. The last component is a hash of all of the configuration information, including flags, GHC options, and dependencies. Putting that hash in the filepath ensures that when we look up a precompiled package, we're getting something that matches what we'd be building ourselves now.
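As a rough sketch of the idea (the function, layout, and hash choice here are illustrative only, not stack's internal code):

import Data.Hashable (hash)   -- from the hashable package; stack's real hash differs
import System.FilePath ((</>))

-- Hypothetical helper: key the cache path on everything that affects the
-- build, so a lookup only ever matches a binary we could have built now.
precompiledCachePath :: String -> String -> String -> [String] -> FilePath
precompiledCachePath ghcVer cabalVer packageId buildConfig =
    "precompiled" </> ghcVer </> cabalVer </> packageId </> configHash
  where
    -- buildConfig stands for the flags, GHC options, and dependencies
    configHash = show (hash (unwords buildConfig))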

The reason we can get away with this approach in stack is because of the invariants of a snapshot, namely: each snapshot has precisely one version of a package available, and therefore we have no need to deal with the new multi-instance installations GHC 7.10 supports. This also means no concern around views: a snapshot database is by its very nature a view.


The benefits of this change:

  • Decreased compile times
  • Decreased disk space usage


The one downside:

  • You can't reliably delete a single snapshot, as there can be files shared between different snapshots. Deleting a single snapshot was never an officially supported feature previously, but if you knew what you were doing, you could do it safely.

After discussing with others, this trade-off seems acceptable: the overall decrease in disk space usage means that the desire to delete a single snapshot will be reduced. When real disk space reclaiming needs to happen, the recommended approach will be to wipe all snapshots and start over, which (1) will be an infrequent occurrence, and (2) due to the faster compile times, will be less burdensome.

Categories: Offsite Blogs

Backing up ghc version, just for one sandbox?

haskell-cafe - Mon, 08/31/2015 - 9:38pm
Hi all, Is it possible to back up the ghc version, just for a single sandbox? Thanks, -db
Categories: Offsite Discussion

Gabriel Gonzalez: State of the Haskell ecosystem - August 2015

Planet Haskell - Mon, 08/31/2015 - 9:07pm

Note: This went out as an RFC draft a few weeks ago, which is now a live wiki. See the Conclusions section at the end for more details.

In this post I will describe the current state of the Haskell ecosystem to the best of my knowledge and its suitability for various programming domains and tasks. The purpose of this post is to discuss both the good and the bad by advertising where Haskell shines while highlighting where I believe there is room for improvement.

This post is grouped into two sections: the first section covers Haskell's suitability for particular programming application domains (i.e. servers, games, or data science) and the second section covers Haskell's suitability for common general-purpose programming needs (such as testing, IDEs, or concurrency).

The topics are roughly sorted from greatest strengths to greatest weaknesses. Each programming area will also be summarized by a single rating of either:

  • Best in class: the best experience in any language
  • Mature: suitable for most programmers
  • Immature: only acceptable for early-adopters
  • Bad: pretty unusable

The more positive the rating, the more I will support it with success stories in the wild. The more negative the rating, the more I will offer constructive advice for how to improve things.

Disclaimer #1: I obviously don't know everything about the Haskell ecosystem, so whenever I am unsure I will make a ballpark guess and clearly state my uncertainty in order to solicit opinions from others who have more experience. I keep tabs on the Haskell ecosystem pretty well, but even this post is stretching my knowledge. If you believe any of my ratings are incorrect, I am more than happy to accept corrections (both upwards and downwards).

Disclaimer #2: There are some "Educational resource" sections below which are remarkably devoid of books, since I am not as familiar with textbook-related resources. If you have suggestions for textbooks to add, please let me know.

Disclaimer #3: I am very obviously a Haskell fanboy if you haven't guessed from the name of my blog and I am also an author of several libraries mentioned below, so I'm highly biased. I've made a sincere effort to honestly appraise the language, but please challenge my ratings if you believe that my bias is blinding me! I've also clearly marked Haskell sales pitches as "Propaganda" in my external link sections. :)

Table of Contents

Application Domains

Compilers

Rating: Best in class

Haskell is an amazing language for writing your own compiler. If you are writing a compiler in another language you should genuinely consider switching.

Haskell originated in academia, and most languages of academic origin (such as the ML family of languages) excel at compiler-related tasks for obvious reasons. As a result the language has a rich ecosystem of libraries dedicated to compiler-related tasks, such as parsing, pretty-printing, unification, bound variables, syntax tree manipulations, and optimization.

Anybody who has ever written a compiler knows how difficult they are to implement because by necessity they manipulate very weakly typed data structures (trees and maps of strings and integers). Consequently, there is a huge margin for error in everything a compiler does, from type-checking to optimization to code generation. Haskell knocks this out of the park, though, with a really powerful type system with many extensions that can eliminate large classes of errors at compile time.

I also believe that there are many excellent educational resources for compiler writers, both papers and books. I'm not the best person to summarize all the educational resources available, but the ones that I have read have been very high quality.

Finally, there are a large number of parsers and pretty-printers for other languages which you can use to write compilers to or from these languages.

Notable libraries:

Some compilers written in Haskell:

Educational resources:

Server-side programming

Rating: Mature

Haskell's second biggest strength is the back-end, both for web applications and services. The main features that the language brings to the table are:

  • Server stability
  • Performance
  • Ease of concurrent programming
  • Excellent support for web standards

The strong type system and polished runtime greatly improve server stability and simplify maintenance. This is the greatest differentiator of Haskell from other backend languages, because it significantly reduces the total-cost-of-ownership. You should expect that you can maintain Haskell-based services with significantly fewer programmers than other languages, even when compared to other statically typed languages.

However, the greatest weakness of server stability is space leaks. The most common solution that I know of is to use ekg (a process monitor) to examine a server's memory stability before deploying to production. The second most common solution is to learn to detect and prevent space leaks with experience, which is not as hard as people think.

Haskell's performance is excellent and currently comparable to Java. Both languages give roughly the same performance in beginner or expert hands, although for different reasons.

Where Haskell shines in usability is the runtime support for the following three features:

  • lightweight threads (which differentiate Haskell from the JVM)
  • software transactional memory (which differentiates Haskell from Go)
  • garbage collection (which differentiates Haskell from Rust)

Many languages support two of the above three features, but Haskell is the only one that I know of that supports all three.

If you have never tried out Haskell's software transactional memory you should really, really, really give it a try, since it eliminates a large number of concurrency logic bugs. STM is far and away the most underestimated feature of the Haskell runtime.

Notable libraries:

  • warp / wai - the low-level server and API that all server libraries share, with the exception of snap
  • scotty - A beginner-friendly server framework analogous to Ruby's Sinatra
  • spock - Lighter than the "enterprise" frameworks, but more featureful than scotty (type-safe routing, sessions, conn pooling, csrf protection, authentication, etc)
  • yesod / yesod-* / snap / snap-* / happstack-server / happstack-* - "Enterprise" server frameworks with all the bells and whistles
  • servant / servant-* - This server framework might blow your mind
  • authenticate / authenticate-* - Shared authentication libraries
  • ekg / ekg-* - Haskell service monitoring
  • stm - Software-transactional memory

Some web sites and services powered by Haskell:


Educational resources:

Scripting / Command-line applications

Rating: Mature

Haskell's biggest advantage as a scripting language is that Haskell is the most widely adopted language that supports global type inference. Many languages support local type inference (such as Rust, Go, Java, C#), which means that function argument types and interfaces must be declared but everything else can be inferred. In Haskell, you can omit everything: all types and interfaces are completely inferred by the compiler (with some caveats, but they are minor).

Global type inference gives Haskell the feel of a scripting language while still providing static assurances of safety. Script type safety matters in particular for enterprise environments where glue scripts running with elevated privileges are one of the weakest points in these software architectures.

The second benefit of Haskell's type safety is ease of script maintenance. Many scripts grow out of control as they accrete arcane requirements and once they begin to exceed 1000 LOC they become difficult to maintain in a dynamically typed language. People rarely budget sufficient time to create a sufficiently extensive test suite that exercises every code path for each and every one of their scripts. Having a strong type system is like getting a large number of auto-generated tests for free that exercise all script code paths. Moreover, the type system is more resilient to refactoring than a test suite.

However, the main reason I mark Haskell as mature is that the language is also usable even for simple one-off disposable scripts. These Haskell scripts are comparable in size and simplicity to their equivalent Bash or Python scripts. This lets you easily start small and finish big.
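For a concrete (if contrived) example of such a disposable script, here is a complete program that lists the visible entries of the current directory; note that no type signatures are needed, since everything is inferred:

import System.Directory (getDirectoryContents)
import Data.List (sort)

-- No type signatures anywhere: the compiler infers all of them.
main = do
    entries <- getDirectoryContents "."
    let visible = filter (\name -> take 1 name /= ".") entries
    mapM_ putStrLn (sort visible)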

Haskell has one advantage over many dynamic scripting languages, which is that Haskell can be compiled into a native and statically linked binary for distribution to others.

Haskell's scripting libraries are feature complete and provide all the niceties that you would expect from scripting in Python or Ruby, including features such as:

  • rich suite of Unix-like utilities
  • advanced sub-process management
  • POSIX support
  • light-weight idioms for exception safety and automatic resource disposal

Notable libraries:

Some command-line tools written in Haskell:

Educational resources:

Numerical programming

Rating: Immature? (Uncertain)

Haskell's numerical programming story is not ready, but steadily improving.

My main experience in this area was from a few years ago doing numerical programming for bioinformatics that involved a lot of vector and matrix manipulation and my rating is largely colored by that experience.

The biggest issues that the ecosystem faces are:

  • Really clunky matrix library APIs
  • Fickle rewrite-rule-based optimizations

When the optimizations work they are amazing and produce code competitive with C. However, small changes to your code can cause the optimizations to suddenly not trigger and then performance drops off a cliff.

There is one Haskell library that avoids this problem entirely which I believe holds a lot of promise: accelerate generates LLVM and CUDA code at runtime and does not rely on Haskell's optimizer for code generation, which side-steps the problem. accelerate has a large set of supported algorithms that you can find by just checking the library's reverse dependencies:

However, I don't have enough experience with accelerate or enough familiarity with numerical programming success stories in Haskell to vouch for this just yet. If somebody has more experience than I do in this regard and can provide evidence that the ecosystem is mature then I might consider revising my rating upward.

Notable libraries:


Educational Resources:

Front-end web programming

Rating: Immature

This boils down to Haskell's ability to compile to Javascript. ghcjs is the front-runner, but for a while setting up ghcjs was non-trivial. However, ghcjs appears to be very close to having a polished setup story now that ghc-7.10.2 is out (Source).

One of the distinctive features of ghcjs compared to other competing Haskell-to-Javascript compilers is that a huge number of Haskell libraries work out of the box with ghcjs because it supports most Haskell primitive operations.

I would also like to mention that there are two Haskell-like languages that you should also try out for front-end programming: elm and purescript. These are both used in production today and have equally active maintainers and communities of their own.

Areas for improvement:

  • There needs to be a clear story for smooth integration with existing Javascript projects
  • There need to be many more educational resources targeted at non-experts explaining how to translate existing front-end programming idioms to Haskell
  • There need to be several well-maintained and polished Haskell libraries for front-end programming

Notable Haskell-to-Javascript compilers:

Notable libraries:

  • reflex-dom - Functional reactive programming library for DOM manipulation

Distributed programming

Rating: Immature

This is sort of a broad area since I'm using this topic to refer to both distributed computation (for analytics) and distributed service architectures. However, in both regards Haskell is lagging behind its peers.

The JVM, Go, and Erlang have much better support for this sort of thing, particularly in terms of libraries.

There has been a lot of work in replicating Erlang-like functionality in Haskell through the Cloud Haskell project, not just in creating the low-level primitives for code distribution / networking / transport, but also in assembling a Haskell analog of Erlang's OTP. I'm not that familiar with how far progress is in this area, but people who love Erlang should check out Cloud Haskell.

Areas for improvement:

  • We need more analytics libraries. Haskell has no analog of scalding or spark. The most we have is just a Haskell wrapper around hadoop.
  • We need a polished consensus library (i.e. a high quality Raft implementation in Haskell)

Notable libraries:

Standalone GUI applications

Rating: Immature

Haskell really lags behind the C# and F# ecosystem in this area.

My experience on this is based on several private GUI projects I wrote several years back. Things may have improved since then so if you think my assessment is too negative just let me know.

All Haskell GUI libraries are wrappers around toolkits written in other languages (such as GTK+ or Qt). The last time I checked the gtk bindings were the most comprehensive, best maintained, and had the best documentation.

However, the Haskell bindings to GTK+ have a strongly imperative feel to them: everything is done by communicating between callbacks via mutable IORefs. Also, you can't take extensive advantage of Haskell's awesome threading features because the GTK+ runtime is picky about what needs to happen on certain threads. I haven't really seen a Haskell library that takes this imperative GTK+ interface and wraps it in a more idiomatic Haskell API.

My impression is that most Haskell programmers interested in applications programming have collectively decided to concentrate their efforts on improving Haskell web applications instead of standalone GUI applications. Honestly, that's probably the right decision in the long run.

Another post that goes into more detail about this topic is this post written by Keera Studios:

Areas for improvement:

  • A GUI toolkit binding that is maintained, comprehensive, and easy to use
  • Polished GUI interface builders

Notable libraries:

  • gtk / glib / cairo / pango - The GTK+ suite of libraries
  • wx - wxWidgets bindings
  • X11 - X11 bindings
  • threepenny-gui - Framework for local apps that use the web browser as the interface
  • hsqml - A Haskell binding for Qt Quick, a cross-platform framework for creating graphical user interfaces.
  • fltkhs - A Haskell binding to FLTK. Easy install/use, cross-platform, self-contained executables.

Some example applications:

Educational resources:

Machine learning

Rating: Immature? (Uncertain)

This area has been pioneered almost single-handedly by one person: Mike Izbicki. He maintains the HLearn suite of libraries for machine learning in Haskell.

I have essentially no experience in this area, so I can't really rate it that well. However, I'm pretty certain that I would not rate it mature because I'm not aware of any company successfully using machine learning in Haskell.

For the same reason, I can't really offer constructive advice for areas for improvement.

If you would like to learn more about this area the best place to begin is the Github page for the HLearn project:

Notable libraries: * HLearn-*

Data science

Rating: Immature

Haskell really lags behind Python and R in this area. Haskell is somewhat usable for data science, but probably not ready for expert use under deadline pressure.

I'll primarily compare Haskell to Python since that's the data science ecosystem that I'm more familiar with. Specifically, I'll compare to the scipy suite of libraries:

The Haskell analog of NumPy is the hmatrix library, which provides Haskell bindings to BLAS and LAPACK. hmatrix's main limitation is that the API is a bit clunky, but all the tools are there.

Haskell's charting story is okay. My main criticism of most charting libraries is that their APIs tend to be large, the types are a bit complex, and they have a very large number of dependencies.

Fortunately, Haskell does integrate into IPython so you can use Haskell within an IPython shell or an online notebook. For example, there is an online "IHaskell" notebook that you can use right now located here:

If you want to learn more about how to setup your own IHaskell notebook, visit this project:

The closest thing to Python's pandas is the frames library. I haven't used it that much personally so I won't comment on it much other than to link to some tutorials in the Educational Resources section.

I'm not aware of a Haskell analog to SciPy (the library) or sympy. If you know of an equivalent Haskell library then let me know.

One Haskell library that deserves honorable mention here is the diagrams library which lets you produce complex data visualizations very easily if you want something a little bit fancier than a chart. Check out the diagrams project if you have time:

Areas for improvement:

  • Smooth user experience and integration across all of these libraries
  • Simple types and APIs. The data science programmers I know dislike overly complex or verbose APIs
  • Beautiful data visualizations with very little investment

Notable libraries:

Game programming

Rating: Immature? / Bad?

Haskell has SDL and OpenGL bindings, which are actually quite good, but that's about it. You're on your own from that point onward. There is not a rich ecosystem of higher-level libraries built on top of those bindings. There is some work in this area, but I'm not aware of anything production quality.

There is also one really fundamental issue with the language: garbage collection, which runs the risk of introducing perceptible pauses in gameplay if your heap grows too large.

For this reason I don't see Haskell ever being used for AAA game programming. I suppose you could use Haskell for simpler games that don't require keeping a lot of resources in memory.

Haskell could maybe be used for the scripting layer of a game or to power the backend for an online game, but for rendering or updating an extremely large graph of objects you should probably stick to another language.

The company that has been doing the most to push the envelope for game programming in Haskell is Keera Studios, so if this is an area that interests you then you should follow their blog:

Areas for improvement:

  • Improve the garbage collector and benchmark performance with large heap sizes
  • Provide higher-level game engines
  • Improve distribution of Haskell games on proprietary game platforms

Notable libraries:

Systems / embedded programming

Rating: Bad / Immature (?) (See description)

Since systems programming is an abused word, I will clarify that I mean programs where speed, memory layout, and latency really matter.

Haskell fares really poorly in this area because:

  • The language is garbage collected, so there are no latency guarantees
  • Executable sizes are large
  • Memory usage is difficult to constrain (thanks to space leaks)
  • Haskell has a large and unavoidable runtime, which means you cannot easily embed Haskell within larger programs
  • You can't easily predict what machine code your Haskell code will compile to

Typically people approach this problem from the opposite direction: they write the low-level parts in C or Rust and then write Haskell bindings to the low-level code.

It's worth noting that there is an alternative approach: strongly typed Haskell DSLs that generate low-level code at runtime. This is the approach championed by the company Galois.

Notable libraries:

  • atom / ivory - DSL for generating embedded programs
  • copilot - Stream DSL that generates C code
  • improve - High-assurance DSL for embedded code that generates C and Ada

Educational resources:

Mobile apps

Rating: Immature? / Bad? (Uncertain)

This greatly lags behind using the language that is natively supported by the mobile platform (i.e. Java for Android or Objective-C / Swift for iOS).

I don't know a whole lot about this area, but I'm definitely sure it is far from mature. All I can do is link to the resources I know of for Android and iPhone development using Haskell.

I also can't really suggest improvements because I'm pretty out of touch with this branch of the Haskell ecosystem.

Educational resources:

ARM processor support

Rating: Immature / Early adopter

On hobbyist boards like the Raspberry Pi it's possible to compile Haskell code with GHC. But some libraries have problems on the ARM platform, ghci only works on newer compilers, and the newer compilers are flaky.

If Haskell code builds, it runs with respectable performance on these machines.

Raspbian (Raspberry Pi, Pi 2, others)
  • current version: ghc 7.4, cabal-install 1.14
  • ghci doesn't work

Debian Jessie (Raspberry Pi 2)
  • current version: ghc 7.6
  • can install the current ghc 7.10.2 binary and ghci starts; however, building cabal fails with 'illegal instruction'

Arch (Raspberry Pi 2)
  • current version 7.8.2, but llvm is 3.6, which is too new
  • downgrade packages for llvm are not officially available
  • with llvm downgraded to 3.4, ghc and ghci work, but there are problems compiling yesod and scotty: compiler crashes, segfaults, etc.

Arch (Banana Pi)
  • similar to Raspberry Pi 2: ghc is 7.8.2, works with the llvm downgrade
  • have had success compiling a yesod project on this platform

Common Programming Needs

Maintenance

Rating: Best in class

Haskell is unbelievably awesome for maintaining large projects. There's nothing that I can say that will fully convey how nice it is to modify existing Haskell code. You can only appreciate this through experience.

When I say that Haskell is easy to maintain, I mean that you can easily approach a large Haskell code base written by somebody else and make sweeping architectural changes to the project without breaking the code.

You'll often hear people say: "if it compiles, it works". I think that is a bit of an exaggeration, but a more accurate statement is: "if you refactor and it compiles, it works". This lets you move fast without breaking things.

Most statically typed languages are easy to maintain, but Haskell is on its own level for the following reasons:

  • Strong types
  • Global type inference
  • Type classes
  • Laziness

The latter two features are what differentiate Haskell from other statically typed languages.

If you've ever maintained code in other languages you know that usually your test suite breaks the moment you make large changes to your code base and you have to spend a significant amount of effort keeping your test suite up to date with your changes. However, Haskell has a very powerful type system that lets you transform tests into invariants that are enforced by the types so that you can statically eliminate entire classes of errors at compile time. These types are much more flexible than tests when modifying code and types require much less upkeep as you make large changes.

The Haskell community and ecosystem use the type system heavily to "test" their applications, more so than other programming language communities. That's not to say that Haskell programmers don't write tests (they do), but rather they prefer types over tests when they have the option.

Global type inference means that you don't have to update types and interfaces as you change the code. Whenever I do a large refactor the first thing I do is delete all type signatures and let the compiler infer the types and interfaces for me as I go. When I'm done refactoring I just insert back the type signatures that the compiler infers as machine-checked documentation.

Type classes also assist refactoring because the compiler automatically infers type class constraints (analogous to interfaces in other languages) so that you don't need to explicitly annotate interfaces. This is a huge time saver.

Laziness deserves special mention because many outsiders do not appreciate how laziness simplifies maintenance. Many languages require tight coupling between producers and consumers of data structures in order to avoid wasteful evaluation, but laziness avoids this problem by only evaluating data structures on demand. This means that if your refactoring process changes the order in which data structures are consumed or even stops referencing them altogether you don't need to reorder or delete those data structures. They will just sit around patiently waiting until they are actually needed, if ever, before they are evaluated.

Single-machine Concurrency

Rating: Best in class

I give Haskell a "Best in class" rating because Haskell's concurrency runtime performs as well or better than mainstream languages and is significantly easier to use due to the runtime support for software-transactional memory.

The best explanation of Haskell's threading module is the documentation in Control.Concurrent:

Concurrency is "lightweight", which means that both thread creation and context switching overheads are extremely low. Scheduling of Haskell threads is done internally in the Haskell runtime system, and doesn't make use of any operating system-supplied thread packages.

The best way to explain the performance of Haskell's threaded runtime is to give hard numbers (a small example follows the list):

  • The Haskell thread scheduler can easily handle millions of threads
  • Each thread requires 1 kb of memory, so the hard limitation to thread count is memory (1 GB per million threads).
  • Haskell channel overhead for the standard library (using TQueue) is on the order of one microsecond per message and degrades linearly with increasing contention
  • Haskell channel overhead using the unagi-chan library is on the order of 100 nanoseconds (even under contention)
  • Haskell's MVar (a low-level concurrency communication primitive) requires 10-20 ns to add or remove values (roughly on par with acquiring or releasing a lock in other languages)
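To make those numbers concrete, here is a minimal sketch (the thread count is arbitrary) that forks 100,000 lightweight threads, each reporting back over an MVar:

import Control.Concurrent (forkIO)
import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)
import Control.Monad (forM_, replicateM_)

main :: IO ()
main = do
    done <- newEmptyMVar
    let n = 100000 :: Int
    forM_ [1 .. n] (\i -> forkIO (putMVar done i))  -- fork n green threads
    replicateM_ n (takeMVar done)                   -- wait for all of them
    putStrLn "all threads finished"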

Haskell also provides software-transactional memory, which allows programmers to build composable and atomic memory transactions. You can compose transactions together in multiple ways to build larger transactions, as sketched after the list below:

  • You can sequence two transactions to build a larger atomic transaction
  • You can combine two transactions using alternation, falling back on the second transaction if the first one fails
  • Transactions can retry, rolling back their state and sleeping until one of their dependencies changes in order to avoid wasteful polling
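Here is a minimal sketch of those three composition forms using the stm package (the account scenario and amounts are made up):

import Control.Concurrent.STM

-- Sequencing: the reads and writes below commit as one atomic transaction.
-- Retrying: 'check' retries the transaction until the balance suffices.
transfer :: TVar Int -> TVar Int -> Int -> STM ()
transfer from to amount = do
    balance <- readTVar from
    check (balance >= amount)
    writeTVar from (balance - amount)
    modifyTVar' to (+ amount)

-- Alternation: try the first account, fall back to the second.
transferEither :: TVar Int -> TVar Int -> TVar Int -> Int -> STM ()
transferEither a b to amount =
    transfer a to amount `orElse` transfer b to amount

main :: IO ()
main = do
    a  <- newTVarIO 50
    b  <- newTVarIO 100
    to <- newTVarIO 0
    atomically (transferEither a b to 80)  -- drawn from b, since a is too small
    atomically (readTVar to) >>= print     -- prints 80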

A few other languages provide software-transactional memory, but Haskell's implementation has two main advantages over other implementations:

  • The type system enforces that transactions only permit reversible memory modifications. This guarantees at compile time that all transactions can be safely rolled back.
  • Haskell's STM runtime takes advantage of enforced purity to improve the efficiency of transactions, retries, and alternation.

Notable libraries:

  • stm - Software transactional memory
  • unagi-chan - High performance channels
  • async - Futures library

Educational resources:

Types / Type-driven development

Rating: Best in class

Haskell definitely does not have the most advanced type system (not even close if you count research languages) but out of all languages that are actually used in production Haskell is probably at the top. Idris is probably the closest thing to a type system more powerful than Haskell that has a realistic chance of use in production in the foreseeable future.

The killer features of Haskell's type system are:

  • Type classes
  • Global type and type class inference
  • Light-weight type syntax

Haskell's type system really does not get in your way at all. You (almost) never need to annotate the type of anything. As a result, the language feels light-weight to use like a dynamic language, but you get all the assurances of a static language.

Many people are familiar with languages that support "local" type inference (like Rust, Java, C#), where you have to explicitly type function arguments but then the compiler can infer the types of local variables. Haskell, on the other hand, provides "global" type inference, meaning that the types and interfaces of all function arguments are inferred, too. Type signatures are optional (with some minor caveats) and are primarily for the benefit of the programmer.

This really benefits projects where you need to prototype quickly but refactor painlessly when you realize you are on the wrong track. You can leave out all type signatures while prototyping but the types are still there even if you don't see them. Then when you dramatically change course those strong and silent types step in and keep large refactors painless.

Some Haskell programmers use a "type-driven development" programming style, analogous to "test-driven development":

  • they specify desired behavior as a type signature which initially fails to type-check (analogous to adding a test which starts out "red")
  • they create a quick and dirty solution that satisfies the type-checker (analogous to turning the test "green")
  • they improve on their initial solution while still satisfying the type-checker (analogous to a "red/green refactor")

"Type-driven development" supplements "test-driven development" and has different tradeoffs:

  • The biggest disadvantage of types is that they cannot test as many things as full-blown tests, especially because Haskell is not dependently typed
  • The biggest advantage of types is that they can prove the complete absence of programming errors for all possible cases, whereas tests cannot examine every possibility
  • Type-checking is much faster than running tests
  • Type error messages are informative: they explain what went wrong and never get stale
  • Type-checking never hangs and never gives flaky results

Haskell also provides the "Typed Holes" extension, which lets you add an underscore (i.e. "_") anywhere in the code whenever you don't know what expression belongs there. The compiler will then tell you the expected type of the hole and suggest terms in scope with related types that you can use to fill the hole.
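A tiny illustration (the function is hypothetical): compiling the following makes GHC report the type it expects at the underscore, and with -fdefer-typed-holes the error becomes a warning so the rest of the module still compiles.

-- GHC reports the hole's expected type, here [Int] -> Int, along with
-- relevant bindings in scope.
sumOfSquares :: [Int] -> Int
sumOfSquares xs = _ (map (^ 2) xs)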

Educational resources:


Domain-specific languages (DSLs)

Rating: Mature

Haskell rocks at DSL-building. While not as flexible as a Lisp language, I would venture that Haskell is the most flexible of the non-Lisp languages. You can overload a large amount of built-in syntax for your custom DSL.

The most popular example of overloaded syntax is do notation, which you can overload to work with any type that implements the Monad interface. This syntactic sugar for Monads in turn led to a huge overabundance of Monad tutorials.

However, there are lesser known but equally important things that you can overload, such as the following (a small sketch follows the list):

  • numeric and string literals
  • if/then/else expressions
  • list comprehensions
  • numeric operators
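For example, here is a small sketch of overloading numeric literals and operators so that ordinary arithmetic syntax builds a syntax tree for a made-up expression DSL:

data Expr = Lit Integer | Add Expr Expr | Mul Expr Expr | Neg Expr
    deriving Show

-- Integer literals in source code now build Expr values via fromInteger,
-- and (+) / (*) build syntax trees instead of doing arithmetic.
instance Num Expr where
    fromInteger = Lit
    (+)         = Add
    (*)         = Mul
    negate      = Neg
    abs         = error "abs: not needed for this sketch"
    signum      = error "signum: not needed for this sketch"

-- 1 + 2 * 3 :: Expr   ==>   Add (Lit 1) (Mul (Lit 2) (Lit 3))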

Educational resources:


Testing

Rating: Mature

There are a few places where Haskell is the clear leader among all languages:

  • property-based testing
  • mocking / dependency injection

Haskell's QuickCheck is the gold standard which all other property-based testing libraries are measured against. The reason QuickCheck works so smoothly in Haskell is due to Haskell's type class system and purity. The type class system simplifies automatic generation of random data from the input type of the property test. Purity means that any failing test result can be automatically minimized by rerunning the check on smaller and smaller inputs until QuickCheck identifies the corner case that triggers the failure.
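A minimal example of what a property looks like (the property itself is just an illustration):

import Test.QuickCheck

-- Reversing a list twice gives back the original list.
prop_reverseTwice :: [Int] -> Bool
prop_reverseTwice xs = reverse (reverse xs) == xs

-- quickCheck generates random inputs and shrinks any counterexample.
main :: IO ()
main = quickCheck prop_reverseTwice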

Mocking is another area where Haskell shines because you can overload almost all built-in syntax, including:

  • do notation
  • if statements
  • numeric literals
  • string literals

Haskell programmers overload this syntax (particularly do notation) to write code that looks like it is doing real work:

example = do
    str <- readLine
    putLine str

... and the code will actually evaluate to a pure syntax tree that you can use to mock in external inputs and outputs:

example = ReadLine (\str -> PutStrLn str (Pure ()))
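One way to realize this with the free package is sketched below; the functor and interpreter names are assumptions, and the tree shown above elides the Free constructors that the library adds:

{-# LANGUAGE DeriveFunctor #-}
import Control.Monad.Free (Free (..), liftF)

data ConsoleF next = ReadLine (String -> next) | PutStrLn String next
    deriving Functor

type Console = Free ConsoleF

readLine :: Console String
readLine = liftF (ReadLine id)

putLine :: String -> Console ()
putLine str = liftF (PutStrLn str ())

-- The production interpreter performs real IO...
runIO :: Console a -> IO a
runIO (Pure a)              = return a
runIO (Free (ReadLine k))   = getLine >>= runIO . k
runIO (Free (PutStrLn s k)) = putStrLn s >> runIO k

-- ...while the test interpreter mocks input and collects output.
runMock :: [String] -> Console a -> (a, [String])
runMock = go []
  where
    go out _      (Pure a)              = (a, reverse out)
    go out (i:is) (Free (ReadLine k))   = go out is (k i)
    go out []     (Free (ReadLine k))   = go out [] (k "")
    go out is     (Free (PutStrLn s k)) = go (s : out) is k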

Haskell also supports most testing functionality that you expect from other languages, including:

  • standard package interfaces for testing
  • unit testing libraries
  • test result summaries and visualization

Notable libraries:

  • QuickCheck - property-based testing
  • doctest - tests embedded directly within documentation
  • free - Haskell's abstract version of "dependency injection"
  • hspec - Testing library analogous to Ruby's RSpec
  • HUnit - Testing library analogous to Java's JUnit
  • tasty - Combination unit / regression / property testing library

Educational resources:

Data structures and algorithms

Rating: Mature

Haskell primarily uses persistent data structures, meaning that when you "update" a persistent data structure you just create a new data structure and you can keep the old one around (thus the name: persistent). Haskell data structures are immutable, so you don't actually create a deep copy of the data structure when updating; any new structure will reuse as much of the original data structure as possible.
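A quick illustration using Data.Map from the containers package:

import qualified Data.Map as Map

-- "Updating" yields a new map; the old map is untouched, and the two
-- share most of their structure internally rather than being copies.
m0 = Map.fromList [(1, "one"), (2, "two")]
m1 = Map.insert 3 "three" m0

-- Map.lookup 3 m0 == Nothing
-- Map.lookup 3 m1 == Just "three"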

The Notable libraries section contains links to Haskell collections libraries that are heavily tuned. You should realistically expect these libraries to compete with tuned Java code. However, you should not expect Haskell to match expertly tuned C++ code.

The selection of algorithms is not as broad as in Java or C++ but it is still pretty good and diverse enough to cover the majority of use cases.

Notable libraries:


Benchmarking

Rating: Mature

This boils down exclusively to the criterion library, which was done so well that nobody bothered to write a competing library. Notable criterion features include:

  • Detailed statistical analysis of timing data
  • Beautiful graph output: (Example)
  • High-resolution analysis (accurate down to nanoseconds)
  • Customizable HTML/CSV/JSON output
  • Garbage collection insensitivity

Notable libraries:

Educational resources:


Unicode support

Rating: Mature

Haskell's Unicode support is excellent. Just use the text and text-icu libraries, which provide a high-performance, space-efficient, and easy-to-use API for Unicode-aware text operations.

Note that there is one big catch: the default String type in Haskell is inefficient. You should always use Text whenever possible.
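A minimal example using text (the input string is arbitrary):

{-# LANGUAGE OverloadedStrings #-}
import qualified Data.Text as T
import qualified Data.Text.IO as TIO

main :: IO ()
main = do
    let greeting = T.toUpper "héllo, wörld"  -- Unicode-aware case mapping
    TIO.putStrLn greeting
    print (T.length greeting)                -- counts code points, not bytes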

Notable libraries:

Parsing / Pretty-printing

Rating: Mature

Haskell is amazing at parsing. Recursive descent parser combinators are far-and-away the most popular parsing paradigm within the Haskell ecosystem, so much so that people use them even in place of regular expressions. I strongly recommend reading the "Monadic Parsing in Haskell" functional pearl linked below if you want to get a feel for why parser combinators are so dominant in the Haskell landscape.

If you're not sure what library to pick, I generally recommend the parsec library as a default well-rounded choice because it strikes a decent balance between ease-of-use, performance, good error messages, and small dependencies (since it ships with GHC).
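As a small taste of parsec, here is a sketch of a parser for a made-up format, a comma-separated list of unsigned integers:

import Text.Parsec
import Text.Parsec.String (Parser)

-- Parse one unsigned integer, e.g. "42".
integer :: Parser Int
integer = fmap read (many1 digit)

-- Parse a comma-separated list of integers, e.g. "1,2,3".
integers :: Parser [Int]
integers = integer `sepBy` char ','

-- parse integers "" "1,2,3"  ==  Right [1,2,3]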

attoparsec deserves special mention as an extremely fast backtracking parsing library. The speed and simplicity of this library will blow you away. The main deficiency of attoparsec is the poor error messages.

The pretty-printing front is also excellent. Academic researchers just really love writing pretty-printing libraries in Haskell for some reason.

Notable libraries:

  • parsec - best overall "value"
  • attoparsec - Extremely fast backtracking parser
  • trifecta - Best error messages (clang-style)
  • alex / happy - Like lex / yacc but with Haskell integration
  • Earley - Earley parsing embedded within the Haskell language
  • ansi-wl-pprint - Pretty-printing library
  • text-format - High-performance string formatting

Educational resources:


Stream programming

Rating: Mature

Haskell's streaming ecosystem is mature. Probably the biggest issue is that there are too many good choices (and a lot of ecosystem fragmentation as a result), but each of the streaming libraries listed below has a sufficiently rich ecosystem including common streaming tasks like:

  • Network transmissions
  • Compression
  • External process pipes
  • High-performance streaming aggregation
  • Concurrent streams
  • Incremental parsing

Notable libraries:

  • conduit / io-streams / pipes - Stream programming libraries (Full disclosure: I authored pipes and wrote the official io-streams tutorial)
  • machines - Networked stream transducers library

Educational resources:

Serialization / Deserialization

Rating: Mature

Haskell's serialization libraries are reasonably efficient and very easy to use. You can easily automatically derive serializers/deserializers for user-defined data types and it's very easy to encode/decode values.

Haskell's serialization does not suffer from any of the gotchas that object-oriented languages deal with (particularly Java/Scala). Haskell data types don't have associated methods or state to deal with so serialization/deserialization is straightforward and obvious. That's also why you can automatically derive correct serializers/deserializers.

Serialization performance is pretty good. You should expect to serialize data at a rate between 100 Mb/s and 1 Gb/s with careful tuning. Serialization performance still has about 3x-5x room for improvement by multiple independent estimates. See the "Faster binary serialization" link below for details of the ongoing work to improve the serialization speed of existing libraries.
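As an illustration of how little code derived serialization takes, here is a sketch using the binary package as one common choice (the Point type is made up):

{-# LANGUAGE DeriveGeneric #-}
import Data.Binary (Binary, decode, encode)
import GHC.Generics (Generic)

data Point = Point { x :: Double, y :: Double }
    deriving (Show, Generic)

-- The Binary instance is derived generically; no hand-written
-- serialization code is needed.
instance Binary Point

roundTrip :: Point -> Point
roundTrip = decode . encode   -- encode to a lazy ByteString and back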

Notable libraries:

Educational resources:

Support for file formats

Rating: Mature

Haskell supports all the common domain-independent serialization formats (i.e. XML/JSON/YAML/CSV). For more exotic formats Haskell won't be as good as, say, Python (which is notorious for supporting a huge number of file formats) but it's so easy to write your own quick and dirty parser in Haskell that this is not much of an issue.
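For example, a sketch of JSON support via aeson (listed below) with generically derived instances; the Person type is made up:

{-# LANGUAGE DeriveGeneric #-}
import Data.Aeson (FromJSON, ToJSON, decode, encode)
import GHC.Generics (Generic)

data Person = Person { name :: String, age :: Int }
    deriving (Show, Generic)

instance ToJSON Person
instance FromJSON Person

-- encode (Person "Alice" 30) produces JSON text like {"name":"Alice","age":30},
-- and decode turns that text back into Just (Person "Alice" 30).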

Notable libraries:

  • aeson - JSON encoding/decoding
  • cassava - CSV encoding/decoding
  • yaml - YAML encoding/decoding
  • xml - XML encoding/decoding

Package management

Rating: Mature

If you had asked me a few months back I would have rated Haskell immature in this area. This rating is based entirely on the recent release of the stack package tool by FPComplete which greatly simplifies package installation and dependency management. This tool was created in response to a broad survey of existing Haskell users and potential users where cabal-install was identified as the single greatest issue for professional Haskell development.

The stack tool is not just good by Haskell standards but excellent even compared to other language package managers. Key features include:

  • Excellent project isolation (including compiler isolation)
  • Global caching of shared dependencies to avoid wasteful rebuilds
  • Easily add local repositories or remote Github repositories as dependencies

stack is also powered by Stackage, which is a very large Hackage mono-build that ensures that a large subset of Hackage builds correctly against each other and automatically notifies package authors to fix or update libraries when they break the mono-build. Periodically this package set is frozen as a Stackage LTS release which you can supply to the stack tool in order to select dependencies that are guaranteed to build correctly with each other. Also, if all your projects use the same or similar LTS releases they will benefit heavily from the shared global cache.

Educational resources:



Logging

Rating: Mature

Haskell has decent logging support. That's pretty much all there is to say.

Notable libraries:
  • fast-logger - High-performance multicore logging system
  • hslogger - Logging library analogous to Python's logging library
  • monad-logger - add logging with line numbers to your monad stack. Uses fast-logger under the hood.
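A minimal sketch of the monad-logger / fast-logger combination listed above (the messages are made up):

{-# LANGUAGE OverloadedStrings #-}
import Control.Monad.Logger (logInfoN, logWarnN, runStdoutLoggingT)

-- monad-logger in action; fast-logger does the writing underneath.
main :: IO ()
main = runStdoutLoggingT $ do
    logInfoN "starting up"
    logWarnN "nothing to do; shutting down"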


Education

Rating: Immature

The primary reason for the "Immature" rating is two big deficiencies in Haskell learning materials:

  • Intermediate-level books
  • Beginner-level material targeted at people with no previous programming experience

Other than that the remaining learning resources are okay. If the above holes were filled then I would give a "Mature" rating.

The most important advice I can give to Haskell beginners is to learn by doing. I observe that many Haskell beginners dwell too long trying to learn by reading instead of trying to build something useful to hone their understanding.

Educational resources:


Debugging

Rating: Immature

The main Haskell debugging features are:

  • Memory and performance profiling
  • Stack traces
  • Source-located errors, using the assert function
  • Breakpoints, single-stepping, and tracing within the GHCi REPL
  • Informal printf-style tracing using Debug.Trace
  • ThreadScope

The two reasons I still mark debugging "Immature" are:

  • GHC's stack traces require profiling to be enabled
  • There is only one IDE that I know of (leksah) that integrates support for breakpoints and single-stepping, and leksah still needs more polish

ghc-7.10 also added preliminary support for DWARF symbols which allow support for gdb-based debugging and perf-based profiling, but there is still more work that needs to be done. See the following page for more details:

Educational resources:

Cross-platform support

Rating: Immature

I give Haskell an "Immature" rating primarily due to poor user experience on Windows:

  • Most Haskell tutorials assume a Unix-like system
  • Several Windows-specific GHC bugs
  • Poor IDE support (Most Windows programmers don't use a command-line editor)

This is partly a chicken-and-egg problem. Haskell has many Windows-specific issues because it has such a small pool of Windows developers to contribute fixes. Most Haskell developers are advised to use another operating system or a virtual machine to avoid these pain points, which exacerbates the problem.

The situation is not horrible, though. I know because I do half of my Haskell programming on Windows in order to familiarize myself with the pain points of the Windows ecosystem, and most of the issues affect beginners and can be worked around by more experienced developers. I wouldn't say any individual issue is an outright dealbreaker; it's more like a thousand papercuts which turn people off of the language.

If you're a Haskell developer using Windows, I highly recommend the following installs to get started quickly and with as few issues as possible:

  • Git for Windows - A Unix-like command-line environment bundled with git that you can use to follow along with tutorials
  • MinGHC - Use this for project-independent Haskell experimentation
  • Stack - Use this for project development

Additionally, learn to use the command line a little bit until Haskell IDE support improves. Plus, it's a useful skill in general as you become a more experienced programmer.

For Mac, the recommended installation is:

  • Haskell for Mac OS X - A self-contained relocatable GHC build for project-independent Haskell experimentation
  • Stack - Use this for project development

For other operating systems, use your package manager of choice to install ghc and stack.

Educational resources:

Databases and data stores

Rating: Immature

This is not one of my areas of expertise, but what I do know is that Haskell has bindings to most of the open source databases and datastores such as MySQL, Postgres, SQLite, Cassandra, Redis, DynamoDB and MongoDB. However, I haven't really evaluated the quality of these bindings other than the postgresql-simple library, which is the only one I've personally used and was decent as far as I could tell.

The "Immature" ranking is based on the recommendation of Stephen Diehl who notes:

Raw bindings are mature, but the higher level ORM tooling is a lot less mature than its Java, Scala, Python counterparts (Source)

However, Haskell appears to be deficient in bindings to commercial databases like Microsoft SQL server and Oracle. So whether or not Haskell is right for you probably depends heavily on whether there are bindings to the specific data store you use.

Notable libraries:

Hot code loading

Rating: Immature

Haskell does provide support for hot code loading, although nothing in the same ballpark as in languages like Clojure.

There are two main approaches to hot code loading:

  • Compiling and linking object code at runtime (i.e. the plugins or hint libraries)
  • Recompiling the entire program and then reinitializing the program with the program's saved state (i.e. the dyre or halive libraries)

You might wonder how Cloud Haskell sends code over the wire and my understanding is that it doesn't. Any function you wish to send over the wire is instead compiled ahead of time on both sides and stored in a shared symbol table which each side references when encoding or decoding the function.

Haskell does not let you edit a live program like Clojure does so Haskell will probably never be "Best in class" short of somebody releasing a completely new Haskell compiler built from the ground up to support this feature. The existing Haskell tools for hot code swapping seem as good as they are reasonably going to get, but I'm waiting for commercial success stories of their use before rating this "Mature".

The halive library has the best hot code swapping demo by far:

Notable libraries:

  • plugins / hint - Runtime compilation and linking
  • dyre / halive - Program reinitialization with saved state

IDE support

Rating: Immature

I am not the best person to review this area since I do not use an IDE myself. I'm basing this "Immature" rating purely on what I have heard from others.

The impression I get is that the biggest pain point is that Haskell IDEs, IDE plugins, and low-level IDE tools keep breaking with every new GHC release.

Most of the Haskell early adopters have been vi/vim or emacs users so those editors have gotten the most love. Support for more traditional IDEs has improved recently with Haskell plugins for IntelliJ and Eclipse and also the Haskell-native leksah IDE.

FPComplete has also released a web IDE for Haskell programming that is worth checking out; it is reasonably polished but cannot be used offline.

Notable tools:

  • hoogle - Type-based function search
  • hlint - Code linter
  • ghc-mod - editor agnostic tool that powers many IDE-like features
  • ghcid - lightweight background type-checker that triggers on code changes
  • haskell-mode - Umbrella project for Haskell emacs support
  • structured-haskell-mode - structural editing based on Haskell syntax for emacs
  • codex - Tags file generator for cabal project dependencies.
  • hdevtools - Persistent GHC-powered background server for development tools
  • ghc-imported-from - editor agnostic tool that finds Haddock documentation page for a symbol

IDE plugins:

  • IntelliJ (the official plugin or Haskforce)
  • Eclipse (the EclipseFP plugin)
  • Atom (the IDE-Haskell plugin)


Educational resources:


Conclusions

I originally hosted this post as a draft on Github in order to solicit review from people more knowledgeable than myself. In the process it turned into a collaboratively edited wiki which you can find here:

I will continue to accept pull requests and issues to make sure that it stays up to date and once or twice a year I will post announcements if there have been any major changes or improvements in the Haskell ecosystem.

The main changes since the draft initially went out were:

  • The "Type system" section was upgraded to "Best in class" (originally ranked "Mature")
  • The "Concurrency" section was renamed to "Single-machine concurrency" and upgraded to "Best in class" (originally ranked "Mature")
  • The "Database" section was downgraded to "Immature" (originally ranked "Mature")
  • New sections were added for "Debugging", "Education", and "Hot code loading"

I would like to thank the following people for their corrections and contributions:
  • Aaron Levin
  • Alois Cochard
  • Ben Kovach
  • Benno Fünfstück
  • Carlo Hamalainen
  • Chris Allen
  • Curtis Gagliardi
  • Deech
  • David Howlett
  • David Johnson
  • Edward Cho
  • Greg Weber
  • Gregor Uhlenheuer
  • Juan Pedro Villa Isaza
  • Kazu Yamamoto
  • Kirill Zaborsky
  • Liam O'Connor-Davis
  • Luke Randall
  • Marcio Klepacz
  • Mitchell Rosen
  • Nicolas Kaiser
  • Oliver Charles
  • Pierre Radermecker
  • Rodrigo B. de Oliveira
  • Stephen Diehl
  • Tim Docker
  • Tran Ma
  • Yuriy Syrovetskiy
  • @bburdette
  • @co-dan
  • @ExternalReality
  • @GetContented
  • @psibi
Categories: Offsite Blogs

Proposal: generalise Monoid's mconcat

libraries list - Mon, 08/31/2015 - 6:54pm
We could generalise:

mconcat :: [a] -> a
mconcat = foldr mappend mempty

to:

mconcat :: Foldable t => t a -> a
mconcat = foldr mappend mempty
Categories: Offsite Discussion

Open Postdoc Position in formal methods applied to timedsystems

General haskell list - Mon, 08/31/2015 - 6:41pm
The Institute of Computer Engineering at Vienna University of Technology is seeking a candidate for a postdoctoral research position (one year, with the possibility to renew for up to two more years), starting as soon as possible. The successful applicant will carry out his/her postdoc in the research area of formal methods applied to the verification and synthesis of timed systems with faults and delays, including distributed systems. This task is part of the recently granted Austrian FWF National Research Network “RiSE” (2nd funding period), to be led by Ass.-Prof. Ezio Bartocci in collaboration with Prof. Ulrich Schmid and Prof. Radu Grosu and with the other PIs of RiSE: Task Description (Task leader Ezio Bartocci): Modeling and Analysis of Parametric, Probabilistic and Parameterized Timed Systems (Applications). To master the overwhelming complexity of manual correctne
Categories: Incoming News

How to combine simulations

haskell-cafe - Mon, 08/31/2015 - 5:48pm
Hello all, I've been trying hard to come up with an idea of how to build a DES from smaller parts. So far, I came to the conclusion that somewhere there must be an operation which takes an Event and maybe emits an Event (and appends to a log and updates some state). Those Events would come from and go to the "environment" the simulation runs in. My mental model is two billiard tables, which are connected through a hole in the cushion and which each have a player. When I look at one such table, it would have to respond to Events from its player and from the other table and it would send events to its player ("all balls at rest") and to the other table. If I add the other table and the two players then the combined simulation would not emit any events at all and it would not respond to any events except maybe a START event. It would only depend on its initial state. But if I add only the player, but not the other table, it would still send events to the other table and respond to events from that other tab
Categories: Offsite Discussion

Now that writing great tutorials is a thing... could the next one be on how to not be rekt by Cabal anymore? :)

Haskell on Reddit - Mon, 08/31/2015 - 5:33pm

I know I've asked this a few times, yet it always changes... could someone write a definitive, flawless, time-resistant tutorial on how to never have problems with builds again? I don't care if it takes installing another OS - I just want to get rid of that problem for life.

submitted by SrPeixinho
[link] [27 comments]
Categories: Incoming News


General haskell list - Mon, 08/31/2015 - 5:09pm
CALL FOR WORKSHOP AND TUTORIAL PROPOSALS Cyber-Physical Systems Week (CPS Week) April 11-14, 2016, Vienna, Austria CPS Week is the premier event on Cyber-Physical Systems. It brings together four top conferences, HSCC, ICCPS, IPSN, and RTAS, 10-15 workshops, a localization competition, tutorials and various exhibitions from both industry and academia. Altogether the CPS Week program covers a multitude of complementary aspects of CPS, and reunites the leading researchers in this dynamic field. CPS Week 2016 in Vienna, Austria will host 10-15 workshops (subject to room availability) and 2-3 tutorials on Monday April 11 and is soliciting proposals for new and recurring workshops as well as for tutorials. CPS Week workshops are excellent opportunities to bring together researchers and practitioners from different communities to share t
Categories: Incoming News

New Functional Programming Job Opportunities

haskell-cafe - Mon, 08/31/2015 - 5:00pm
Here are some functional programming job opportunities that were posted recently:

  • Full Stack Haskell Software Engineer at Linkqlo Inc

Cheers,
Sean Murphy
Categories: Offsite Discussion

Compiling non-registered packages into a sandbox?

haskell-cafe - Mon, 08/31/2015 - 4:20pm
Hi all,

How do I compile a non-Hackage registered package into a local sandbox? Let’s say I have:

/proj_dir/
 - .cabal-sandbox/
 - cabal.sandbox.config
 - dep_pkg/
 - dep_pkg.cabal
 - Setup.lhs
 - …

I’d like to compile the dep_pkg package into my proj_dir sandbox; can I do this, if dep_pkg is NOT registered with Hackage? If so, how?

Thanks,
-db
Categories: Offsite Discussion

Yakov Zaytsev: How to get minor GHC version from custom Setup.hs

Planet Haskell - Mon, 08/31/2015 - 3:45pm

Haskell Cabal is an advanced build system which can produce a self-contained shared library with a few lines.
It’s necessary to list GHC’s runtime system to be able to dlopen the library:

library
  ...
  ghc-options:     -threaded
  extra-libraries: HSrts-ghc7.10.2

-threaded is there so that the FFI can be used in a multi-threaded setting, which is usually the case.
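As a concrete illustration of what such a self-contained library might export, here is a minimal sketch (my addition, not from the post; the module and function names are made up):

{-# LANGUAGE ForeignFunctionInterface #-}
module Plugin where

import Foreign.C.Types (CInt)

-- One exported symbol that a host program could resolve after dlopen,
-- provided it also initialises the GHC runtime (hs_init).
triple :: CInt -> CInt
triple x = 3 * x

foreign export ccall triple :: CInt -> CInt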

The name of the library produced by GHC changes from build to build (due to a “hash” being attached?).

Shared libraries are commonly used as plugins. A different naming convention can be implemented in a postBuild hook in a custom Setup.hs, e.g. by renaming the library.

The tricky part comes when we need to figure out that “hash”.

We could use mkSharedLibName from Distribution.Simple.BuildPaths, but that uses System.Info.compilerVersion internally, which does not report the minor version of GHC (that’s by design).

To address that, this quick workaround will suffice for me for now. Get the GHC version from Cabal:

...
Just loc <- findProgramOnSearchPath normal defaultProgramSearchPath
              (programName ghcProgram)
Just (Version { versionBranch = [g,h,c] }) <-
  programFindVersion ghcProgram normal loc
...

And rewrite mkSharedLibName using g, h, c instead of CompilerId, which is left as an exercise.
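A hedged sketch of that exercise (my addition, not the author's code): it assumes the usual lib<name>-ghc<x.y.z>.so pattern on Linux and treats the hashed base name produced by Cabal as an opaque string.

import Data.List (intercalate)

-- Rebuild the shared-library file name from the full GHC version found
-- above (e.g. [7,10,2]) and the base library name Cabal produced
-- (e.g. "HSmypkg-0.1-<hash>").  The "lib" prefix and ".so" extension are
-- Linux-specific assumptions.
mkSharedLibName' :: [Int] -> String -> FilePath
mkSharedLibName' ghcVersion baseName =
  "lib" ++ baseName ++ "-ghc" ++ intercalate "." (map show ghcVersion) ++ ".so"

-- e.g. mkSharedLibName' [7,10,2] "HSmypkg-0.1-abc123"
--        == "libHSmypkg-0.1-abc123-ghc7.10.2.so"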

I wonder what that “hash” is all about…

Categories: Offsite Blogs

Why isn't there a function for counting occurrences built into Prelude, or Data.List?

Haskell on Reddit - Mon, 08/31/2015 - 12:27pm

Something like

count :: Eq a => a -> [a] -> Int
count needle haystack = length (filter (== needle) haystack)

submitted by sullyj3
[link] [18 comments]
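A common alternative that usually comes up in these discussions (added here as a hedged aside, not part of the submission or the thread) is to count all elements at once with Data.Map and then look up individual keys:

import qualified Data.Map.Strict as Map

-- Build a frequency map in one pass; fromListWith (+) sums the 1s
-- contributed by equal keys.
counts :: Ord a => [a] -> Map.Map a Int
counts xs = Map.fromListWith (+) [ (x, 1) | x <- xs ]

-- e.g. Map.lookup 'l' (counts "hello")  ==  Just 2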
Categories: Incoming News

Alessandro Vermeulen: FinagleCon

Planet Haskell - Mon, 08/31/2015 - 11:57am

FinagleCon was held at TwitterHQ in San Francisco. It is refreshing to see a nice working atmosphere with free food and drinks. Now for the contents.

Twitter’s RPC framework, Finagle, has been in production since August 2010 and has over 140 contributors. In addition to Twitter, it has been adopted by many large companies such as SoundCloud. Initially written in Java with FP constructs (monads, maps, etc.) all over, it was soon after rewritten in Scala.

Finagle is based on three core concepts: Simplicity, Composability, and Separation of Concerns. These concepts are shown through three primitive building blocks: Future, Service, and Filter.

  • Futures provide an easy interface to create asynchronous computation and to model sequential or asynchronous data-flows.
  • Services are functions that return futures, used to abstract away, possibly remote, service calls.
  • Filters are essentially decorators and are meant to contain modular blocks of re-usable, non-business logic. Example usages are LoggingFilter and RetryingFilter.

The use of Futures makes it easy to test asynchronous computations. Services and filters can each be created separately, each containing its own specialized logic. This modularity makes it easy to test and reason about them in isolation. Services and filters compose easily, just as functions do, which makes it convenient to test chains. Services and filters are meant to separate behaviour from domain logic.

As amazing as Finagle is, there are some things one should be aware of. To create a really resilient application with Finagle, one has to be an expert in its internals. Many configuration parameters influence each other, e.g. queue size and time-outs. With a properly tuned setup, Finagle is both fast and resilient (the defaults are good as well, mind you). As most data centres are heterogeneous in their setup, faster machines get added to the pool, and other conditions change, one has to pay continuous attention to tuning in order to maintain optimal performance.

Some general advice: watch out for traffic amplification due to retries, and keep your timeouts low enough that retries are useful, but not so low that you introduce spurious timeouts.

For extra points, keep hammering your application until it breaks, find out why it breaks, fix it, and repeat.

The future

In addition to this heads-up, we were also given a nice insight into what is coming up for Finagle.

In order to make more informed decisions, we will get a new Failure type which contains more information than ‘just’ a Throwable. In this new Failure, an added field indicates whether it is safe to retry.

There are several issues with the current way of fine-tuning Finagle: as mentioned, you need to be an expert to use all the configuration parameters properly. In addition, the configuration is static and doesn’t take into account changing environments and the behaviour of downstream services. Because the tuning of the parameters is tightly coupled with the implementation of Finagle, it is also hard to change the implementation significantly without significant re-tuning.

To address the last two points, Finagle will introduce Service Level Objectives (SLOs). An SLO is a higher-level goal that Finagle should strive to reach, instead of a set of low-level hardcoded parameters. What these SLOs will be exactly is not yet known.

The community

The Finagle team will synchronize the internal Finagle repository with the GitHub repository every Monday. They will strive to publish a snapshot version of the changes as well.

For someone looking to write their own protocol to connect to their service, finagle-serial is a nice project to start with. It is small enough to grasp within a day, but big enough to be non-trivial.

It was found that the ParGCCardsPerStrideChunk garbage-collection option, available from JDK 7u40, can halve GC times on large heaps. It is recommended to try this parameter. GC tuning seems to be hard to do and is generally done by copying a ‘known good set’ of parameters.

Scrooge is a good utility to use for Thrift and Scala as it is aware of Scala features such as Traits and Objects and can generate relevant transformations for them.

When you want to connect to multiple data centres from a single data centre, you can use LatencyCompensation to account for the latency involved.

Categories: Offsite Blogs

Merging bytestring and Vector

Haskell on Reddit - Mon, 08/31/2015 - 11:38am

Does anyone know if there's any ongoing work in merging the 'bytestring' and 'vector' libraries? I'm aware of the old 'vector-bytestring' library that reimplemented 'bytestring' in terms of Storable Vectors of Word8, but that hasn't been updated in a while.

submitted by dnaq
[link] [17 comments]
Categories: Incoming News

Wolfgang Jeltsch: Hyperreal numbers on Estonian TV

Planet Haskell - Mon, 08/31/2015 - 11:29am

On 13 February, I talked about hyperreal numbers in the Theory Lunch. I have not yet managed to write a blog article about this, but my notes on the whiteboard have already been featured on Estonian TV.

The background is that the head of the Software Department of the Institute of Cybernetics, Ahto Kalja, recently received the Order of the White Star, 4th class from the President of Estonia. On this account, Estonian TV conducted an interview with him, during which they also recorded parts of my notes that were still present on the whiteboard in our coffee room.

You can watch the video online. The relevant part, which is about e-government, is from 18:14 to 21:18. I very much enjoyed hearing Ahto Kalja’s colleague Arvo Ott talk about electronic tax returns and then seeing some formula about limits immediately afterwards. :-) At 20:38, there is also some Haskell-like pseudocode.

Tagged: Ahto Kalja, Arvo Ott, e-government, Eesti Televisioon, Haskell, hyperreal number, Institute of Cybernetics, Order of the White Star, talk, Theory Lunch
Categories: Offsite Blogs

Wolfgang Jeltsch: A taste of Curry

Planet Haskell - Mon, 08/31/2015 - 11:28am

Curry is a programming language that integrates functional and logic programming. Last week, Denis Firsov and I had a look at Curry, and on Thursday I gave an introductory talk about Curry in the Theory Lunch. This blog post is mostly a write-up of my talk.

Like Haskell, Curry has support for literate programming. So I wrote this blog post as a literate Curry file, which is available for download. If you want to try out the code, you have to install the Curry system KiCS2. The code uses the functional patterns language extension, which is only supported by KiCS2, as far as I know.

Functional programming

The functional fragment of Curry is very similar to Haskell. The only fundamental difference is that Curry does not support type classes.

Let us do some functional programming in Curry. First, we define a type whose values denote me and some of my relatives.

data Person = Paul | Joachim | Rita | Wolfgang | Veronika | Johanna | Jonathan | Jaromir

Now we define a function that yields the father of a given person if this father is covered by the Person type.

father :: Person -> Person
father Joachim  = Paul
father Rita     = Joachim
father Wolfgang = Joachim
father Veronika = Joachim
father Johanna  = Wolfgang
father Jonathan = Wolfgang
father Jaromir  = Wolfgang

Based on father, we define a function for computing grandfathers. To keep things simple, we only consider fathers of fathers to be grandfathers, not fathers of mothers.

grandfather :: Person -> Person
grandfather = father . father

Combining functional and logic programming

Logic programming languages like Prolog are able to search for variable assignments that make a given proposition true. Curry, on the other hand, can search for variable assignments that make a certain expression defined.

For example, we can search for all persons that have a grandfather according to the above data. We just enter

grandfather person where person free

at the KiCS2 prompt. KiCS2 then outputs all assignments to the person variable for which grandfather person is defined. For each of these assignments, it additionally prints the result of the expression grandfather person.


Functions in Curry can actually be non-deterministic, that is, they can return multiple results. For example, we can define a function element that returns any element of a given list. To achieve this, we use overlapping patterns in our function definition. If several equations of a function definition match a particular function application, Curry takes all of them, not only the first one, as Haskell does.

element :: [el] -> el
element (el : _)  = el
element (_ : els) = element els
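An aside added here, not part of the post: read as Haskell, the very same equations are deterministic, because Haskell always commits to the first matching equation, so the definition would simply return the head of a non-empty list.

-- Haskell reading of the definition above (sketch): the first equation
-- matches every non-empty list, so the second one is never used.
elementHs :: [el] -> el
elementHs (el : _)  = el
elementHs (_ : els) = elementHs els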

Now we can enter

element "Hello!"

at the KiCS2 prompt, and the system outputs six different results.

Logic programming

We have already seen how to combine functional and logic programming with Curry. Now we want to do pure logic programming. This means that we only want to search for variable assignments, but are not interested in expression results. If you are not interested in results, you typically use a result type with only a single value. Curry provides the type Success with the single value success for doing logic programming.

Let us write some example code about routes between countries. We first introduce a type of some European and American countries.

data Country = Canada | Estonia | Germany | Latvia | Lithuania | Mexico | Poland | Russia | USA

Now we want to define a relation called borders that tells us which country borders which other country. We implement this relation as a function of type

Country -> Country -> Success

that has the trivial result success if the first country borders the second one, and has no result otherwise.

Note that this approach of implementing a relation is different from what we do in functional programming. In functional programming, we use Bool as the result type and signal falsity by the result False. In Curry, however, we signal falsity by the absence of a result.
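For comparison, the Bool-based functional encoding mentioned above would look roughly like this (a sketch added here, not part of the post; only a couple of the pairs are written out):

-- Haskell-style encoding: falsity is an explicit False result rather
-- than the absence of a result.
bordersBool :: Country -> Country -> Bool
bordersBool Canada  USA    = True
bordersBool Estonia Latvia = True
-- ... remaining neighbouring pairs ...
bordersBool _       _      = False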

Our borders relation only relates countries with those neighbouring countries whose names come later in alphabetical order. We will soon compute the symmetric closure of borders to also get the opposite relationships.

borders :: Country -> Country -> Success
Canada    `borders` USA       = success
Estonia   `borders` Latvia    = success
Estonia   `borders` Russia    = success
Germany   `borders` Poland    = success
Latvia    `borders` Lithuania = success
Latvia    `borders` Russia    = success
Lithuania `borders` Poland    = success
Mexico    `borders` USA       = success

Now we want to define a relation isConnected that tells whether two countries can be reached from each other via a land route. Clearly, isConnected is the equivalence relation that is generated by borders. In Prolog, we would write clauses that directly express this relationship between borders and isConnected. In Curry, on the other hand, we can write a function that generates an equivalence relation from any given relation and therefore does not only work with borders.

We first define a type alias Relation for the sake of convenience.

type Relation val = val -> val -> Success

Now we define what reflexive, symmetric, and transitive closures are.

reflClosure :: Relation val -> Relation val
reflClosure rel val1 val2 = rel val1 val2
reflClosure rel val  val  = success

symClosure :: Relation val -> Relation val
symClosure rel val1 val2 = rel val1 val2
symClosure rel val2 val1 = rel val1 val2

transClosure :: Relation val -> Relation val
transClosure rel val1 val2 = rel val1 val2
transClosure rel val1 val3 = rel val1 val2 & transClosure rel val2 val3
  where val2 free

The operator & used in the definition of transClosure has type

Success -> Success -> Success

and denotes conjunction.

We define the function for generating equivalence relations as a composition of the above closure operators. Note that it is crucial that the transitive closure operator is applied after the symmetric closure operator, since the symmetric closure of a transitive relation is not necessarily transitive.

equivalence :: Relation val -> Relation val
equivalence = reflClosure . transClosure . symClosure

The implementation of isConnected is now trivial.

isConnected :: Country -> Country -> Success
isConnected = equivalence borders

Now we let KiCS2 compute which countries I can reach from Estonia without a ship or plane. We do so by entering

Estonia `isConnected` country where country free

at the prompt.

We can also implement a nondeterministic function that turns a country into the countries connected to it. For this, we use a guard that is of type Success. Such a guard succeeds if it has a result at all, which can only be success, of course.

connected :: Country -> Country
connected country1 | country1 `isConnected` country2 = country2
  where country2 free

Equational constraints

Curry has a predefined operator

=:= :: val -> val -> Success

that stands for equality.

We can use this operator, for example, to define a nondeterministic function that yields the grandchildren of a given person. Again, we keep things simple by only considering relationships that solely go via fathers.

grandchild :: Person -> Person
grandchild person | grandfather grandkid =:= person = grandkid
  where grandkid free

Note that grandchild is the inverse of grandfather.

Functional patterns

Functional patterns are a language extension that allows us to use ordinary functions in patterns, not just data constructors. Functional patterns are implemented by KiCS2.

Let us look at an example again. We want to define a function split that nondeterministically splits a list into two parts.[1] Without functional patterns, we can implement splitting as follows.

split' :: [el] -> ([el],[el])
split' list | front ++ rear =:= list = (front,rear)
  where front, rear free

With functional patterns, we can implement splitting in a much simpler way.

split :: [el] -> ([el],[el])
split (front ++ rear) = (front,rear)

As a second example, let us define a function sublist that yields the sublists of a given list.

sublist :: [el] -> [el]
sublist (_ ++ sub ++ _) = sub

Inverting functions

In the grandchild example, we showed how we can define the inverse of a particular function. We can go further and implement a generic function inversion operator.

inverse :: (val -> val') -> (val' -> val)
inverse fun val' | fun val =:= val' = val
  where val free

With this operator, we could also implement grandchild as inverse grandfather.
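Spelled out as code, this one-liner is implied by the sentence above rather than written out in the post:

grandchild' :: Person -> Person
grandchild' = inverse grandfather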

Inverting functions can make our lives a lot easier. Consider the example of parsing. A parser takes a string and returns a syntax tree. Writing a parser directly is a non-trivial task. However, generating a string from a syntax tree is just a simple functional programming exercise. So we can implement a parser in a simple way by writing a converter from syntax trees to strings and inverting it.

We show this for the language of all arithmetic expressions that can be built from addition, multiplication, and integer constants. We first define types for representing abstract syntax trees. These types resemble a grammar that takes precedence into account.

type Expr = Sum

data Sum     = Sum Product [Product]
data Product = Product Atom [Atom]
data Atom    = Num Int | Para Sum

Now we implement the conversion from abstract syntax trees to strings.

toString :: Expr -> String
toString = sumToString

sumToString :: Sum -> String
sumToString (Sum product products) =
    productToString product ++
    concatMap ((" + " ++) . productToString) products

productToString :: Product -> String
productToString (Product atom atoms) =
    atomToString atom ++
    concatMap ((" * " ++) . atomToString) atoms

atomToString :: Atom -> String
atomToString (Num num)  = show num
atomToString (Para sum) = "(" ++ sumToString sum ++ ")"

Implementing the parser is now extremely simple.

parse :: String -> Expr
parse = inverse toString

KiCS2 uses a depth-first search strategy by default. However, our parser implementation does not work with depth-first search. So we switch to breadth-first search by entering

:set bfs

at the KiCS2 prompt. Now we can try out the parser by entering

parse "2 * (3 + 4)" .

  1. Note that our split function is not the same as the split function in Curry’s List module.

Tagged: breadth-first search, Curry, Denis Firsov, depth-first search, functional logic programming, functional pattern, functional programming, Institute of Cybernetics, KiCS2, literate programming, logic programming, parsing, Prolog, talk, Theory Lunch, type class
Categories: Offsite Blogs