Today, I'd like to talk about some of the core design principles behind type classes, a wildly successful feature in Haskell. The discussion here is closely motivated by the work we are doing at MSRC to support type classes in Backpack. While I was doing background reading, I was flummoxed to discover widespread misuse of the terms "confluence" and "coherence" with respect to type classes. So in this blog post, I want to settle the distinction and propose a new term, "global uniqueness of instances", for the property that people have colloquially been referring to as confluence and coherence.
Let's start with the definitions of the two terms. Confluence is a property that comes from term-rewriting: a set of instances is confluent if, no matter what order constraint solving is performed, GHC will terminate with a canonical set of constraints that must be satisfied for any given use of a type class. In other words, confluence says that we won't conclude that a program doesn't type check just because we swapped in a different constraint solving algorithm.
Confluence's closely related twin is coherence (defined in the paper "Type classes: exploring the design space"). This property states that every different valid typing derivation of a program leads to a resulting program that has the same dynamic semantics. Why could differing typing derivations result in different dynamic semantics? The answer is that context reduction, which picks out type class instances, elaborates into concrete choices of dictionaries in the generated code. Confluence is a prerequisite for coherence, since one can hardly talk about the dynamic semantics of a program that doesn't type check.
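To see what "concrete choices of dictionaries" means, here is a simplified sketch of the elaboration (my own illustration, not GHC's actual generated Core): a class becomes a record of its methods, and a constraint becomes an extra argument selecting which record gets passed.

    -- The class Eq elaborates to a dictionary record...
    data EqDict a = EqDict { eq :: a -> a -> Bool }

    -- ...and `member :: Eq a => a -> [a] -> Bool` elaborates to a
    -- function that receives the dictionary explicitly:
    member :: EqDict a -> a -> [a] -> Bool
    member _ _ []     = False
    member d x (y:ys) = eq d x y || member d x ys

Coherence then says that no matter which typing derivation the compiler picks, the dictionaries threaded through the elaborated program give it the same behaviour.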
So, what is it that people often refer to when they compare Scala type classes to Haskell type classes? I am going to refer to this as global uniqueness of instances, defined as follows: in a fully compiled program, for any type, there is at most one instance resolution for a given type class. Languages with local type class instances, such as Scala, generally do not have this property, and this assumption is a very convenient one when building abstractions like sets.
So, what properties does GHC enforce, in practice? In the absence of any type system extensions, GHC employs a set of rules to ensure that type class resolution is confluent and coherent. Intuitively, it achieves this by having a very simple constraint solving algorithm (generate wanted constraints and solve wanted constraints) and then requiring the set of instances to be nonoverlapping, ensuring there is only ever one way to solve a wanted constraint. Non-overlap is a more stringent restriction than either confluence or coherence demands, and via the OverlappingInstances and IncoherentInstances extensions, GHC allows a user to relax this restriction "if they know what they're doing."
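For instance, here is a pair of instances (a hypothetical class of my own, not from the post) whose overlap GHC reports as soon as a wanted constraint like Describe Int must be solved:

    {-# LANGUAGE FlexibleInstances #-}
    class Describe a where
      describe :: a -> String

    instance Describe a where          -- matches every type...
      describe _ = "some value"

    instance Describe Int where        -- ...and overlaps with this one
      describe n = "the Int " ++ show n

With OverlappingInstances, GHC will pick the most specific matching instance (Describe Int here); IncoherentInstances goes further and lets GHC commit even when no single matching instance is most specific.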
Surprisingly, however, GHC does not enforce global uniqueness of instances. Imported instances are not checked for overlap until we attempt to use them for instance resolution. Consider the following program:

    -- T.hs
    data T = T
    -- A.hs
    import T
    instance Eq T where
    -- B.hs
    import T
    instance Eq T where
    -- C.hs
    import A
    import B
When compiled with one-shot compilation, C will not report overlapping instances unless we actually attempt to use the Eq instance in C. This is by design: checking eagerly that there are no overlapping instances would require reading all the interface files a module may depend on.
We might summarize these three properties in the following manner. Culturally, the Haskell community expects global uniqueness of instances to hold: the implicit global database of instances should be confluent and coherent. GHC, however, does not enforce uniqueness of instances: instead, it merely guarantees that the subset of the instance database it uses when it compiles any given module is confluent and coherent. GHC does do some tests when an instance is declared to see if it would result in overlap with visible instances, but the check is by no means perfect; truly, type-class constraint resolution has the final word. One mitigating factor is that in the absence of orphan instances, GHC is guaranteed to eagerly notice when the instance database has overlap (assuming that the instance declaration checks actually worked...)
The fact that GHC's lazy behavior surprises most Haskellers suggests that the lazy check is good enough in practice: a user is likely to discover overlapping instances one way or another. However, it is relatively simple to construct example programs which violate global uniqueness of instances in an observable way:

    -- A.hs
    module A where
    data U = X | Y deriving (Eq, Show)

    -- B.hs
    module B where
    import Data.Set
    import A

    instance Ord U where
      compare X X = EQ
      compare X Y = LT
      compare Y X = GT
      compare Y Y = EQ

    ins :: U -> Set U -> Set U
    ins = insert

    -- C.hs
    module C where
    import Data.Set
    import A

    instance Ord U where
      compare X X = EQ
      compare X Y = GT
      compare Y X = LT
      compare Y Y = EQ

    ins' :: U -> Set U -> Set U
    ins' = insert

    -- D.hs
    module Main where
    import Data.Set
    import A
    import B
    import C

    test :: Set U
    test = ins' X $ ins X $ ins Y $ empty

    main :: IO ()
    main = print test

    -- OUTPUT
    $ ghc -Wall -XSafe -fforce-recomp --make D.hs
    [1 of 4] Compiling A ( A.hs, A.o )
    [2 of 4] Compiling B ( B.hs, B.o )
    B.hs:5:10: Warning: Orphan instance: instance [safe] Ord U
    [3 of 4] Compiling C ( C.hs, C.o )
    C.hs:5:10: Warning: Orphan instance: instance [safe] Ord U
    [4 of 4] Compiling Main ( D.hs, D.o )
    Linking D ...
    $ ./D
    fromList [X,Y,X]
Locally, all type class resolution was coherent: in the subset of instances each module had visible, type class resolution could be done unambiguously. Furthermore, the types of ins and ins' discharge type class resolution, so that in D, where the instance database is now overlapping, no resolution occurs and the error is never found.
It is easy to dismiss this example as an implementation wart in GHC, and continue pretending that global uniqueness of instances holds. However, the problem with global uniqueness of instances is that it is inherently nonmodular: you might find yourself unable to compose two components because they accidentally defined the same type class instance, even though those instances are plumbed deep in the implementation details of the components. This is a big problem for Backpack, or really any module system, whose mantra of separate modular development seeks to guarantee that linking will succeed if the library writer and the application writer develop to a common signature.
I had a great time at ICFP 2013 this year, where I presented my paper “Automatic SIMD Vectorization for Haskell”, which was joint work with Leaf Petersen and Neal Glew of Intel Labs. The full paper and slides are available online. Our paper details the vectorization process in the Intel Labs Haskell Research Compiler (HRC), which gets decent speedups on numerical code (between 2x and 7x on 256-bit vector registers). It was nice to be able to talk about HRC and share the results. Paul (Hai) Liu also gave a talk at the Haskell Symposium which has more details about HRC than the vectorization paper (see the paper here with Neal Glew, Leaf Petersen, and Todd Anderson). Hopefully there will be a public release of HRC in the future.
Still more to do
It’s been exciting to see the performance gains in compiled functional code over the last few years, and it’s encouraging to see that there is still much more we can do and explore. HRC outperforms GHC on roughly 50% of the benchmarks, showing some interesting trade-offs between the two compilers. HRC is particularly good at compiling high-throughput numerical code, thanks to various strictness/unboxing optimisations (and the vectorizer), but there is still more to be done.
Don’t throw away information about your programs
One thing I emphasized in my talk was the importance of keeping, not throwing away, the information encoded in our programs as we progress through the compiler stack. In the HRC vectorizer project, Haskell’s Data.Vector library was modified to distinguish between mutable array operations and “initializing writes”, a property which then gets encoded directly in HRC’s intermediate representation. This makes vectorization discovery much easier. We aim to preserve as much effect information from the original Haskell source as possible in the IR.
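As a rough illustration (my own example, not code from the paper or the modified library), this is the shape of code the distinction cares about: each index below is written exactly once, and never read, before the vector is frozen, so every write is an initializing write and the block as a whole is observably pure and much easier for a vectorizer to reason about.

    import Control.Monad.ST (runST)
    import qualified Data.Vector as V
    import qualified Data.Vector.Mutable as M

    -- Each slot is written once before freezing: all "initializing
    -- writes", rather than general mutation of previously-read state.
    squares :: Int -> V.Vector Int
    squares n = runST $ do
      v <- M.new n
      mapM_ (\i -> M.write v i (i * i)) [0 .. n - 1]
      V.freeze v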
This point connected nicely with something Ben Lippmeier emphasised in his Haskell Symposium paper this year (“Data Flow Fusion with Series Expressions in Haskell”, joint with Manuel Chakravarty, Gabriele Keller and Amos Robinson). They provide a combinator library for first-order non-recursive dataflow computations which is guaranteed to be optimised using flow fusion (outperforming current stream fusion techniques). The important point Ben made is that, if your program fits the pattern, this optimisation is guaranteed. As well as being good for the compiler, this provides an obvious cost model for the user (no more games trying to coax the compiler into optimising in a particular way).
This is something that I have explored in the Ypnos array language, where the syntax is restricted to give (fairly strong) language invariants that guarantee parallelism and various optimisations, without undecidable analyses. The idea is to make static as much effect and coeffect (context dependence) information as possible. In Ypnos, this was so successful that I was able to encode the Ypnos language invariant of no out-of-bounds array access directly in Haskell’s type system (shown in the DSL’11 paper; this concept was also discussed briefly in my short language design essay).
This is a big selling point for DSLs in general: restrict a language such that various program properties are statically decidable, facilitating verification and optimisation.
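To give a flavour of the kind of encoding involved (a standard length-indexed vector sketch, not the actual Ypnos implementation), an index type can be made constructible only for in-range positions, turning out-of-bounds access into a type error rather than a runtime check:

    {-# LANGUAGE DataKinds, GADTs, KindSignatures #-}
    data Nat = Z | S Nat

    -- A vector whose length is tracked in its type.
    data Vec (n :: Nat) a where
      Nil  :: Vec 'Z a
      Cons :: a -> Vec n a -> Vec ('S n) a

    -- Fin n: the indices strictly less than n. There is no Fin 'Z,
    -- so an empty vector cannot even be asked for an element.
    data Fin (n :: Nat) where
      FZ :: Fin ('S n)
      FS :: Fin n -> Fin ('S n)

    -- Total: the types have already ruled out-of-bounds access out.
    index :: Vec n a -> Fin n -> a
    index (Cons x _)  FZ     = x
    index (Cons _ xs) (FS i) = index xs i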
Ypnos has actually had some more development in the past year, so if things progress further, there may be some new results to report on. I hope to be posting again soon about more research, including the ongoing work with Tomas Petricek on coeffect systems, and various other things I have been playing with. – D
Disclaimer: I think I've been doing functional programming for just over half a month.
I've been learning Haskell through "Learn You a Haskell" and messing around on some different webpages, getting a closer look at some packages (the numeric probability package from Hackage, for example) while reading about monads a bit. I want to continue my learning but in a more focused manner (and I'd like to take a break from LYAH; I feel that it has been better serving as a reference than something to "read" at this point).
I think one of the best ways to do this (and please tell me if you disagree) is to get a good outline of topics to learn and learn them in order. I have heard Real World Haskell is outdated in some ways but still a useful tool. Would you recommend this or any other tools? If I choose to read Real World Haskell, which sections should I skip and read somewhere else (or at least read with caution)?
Background: I am a mathematics undergrad student in my 4th year. My interests are in machine learning, statistics, and information theory. I have a background in basic measure theory (from a probabilistic perspective) and mathematical statistics, but not category theory. I can program in Python and Java. Haskell interests me for a number of reasons, including the fact that it is so mathematically oriented, it seems to have great potential uses in data science/machine learning/AI (this is a big one, and if anyone has any links to things related to this, I'd love to see them), its use is becoming more widespread (and thus potentially viable in a number of new industry settings?), and honestly, it just feels nice to use.

submitted by Knux-
I've heard that Haskell is very good at parsing and there are some good libraries for this (parsec, attoparsec, uu-parsinglib and so on).
Anyway, in the process "input -> lexing -> parsing", I haven't reached the parsing stage yet. I'm stuck on designing the Token type and the AST type. Any tips, rules of thumb, mental patterns, small concrete examples, or tutorials for building this kind of intuition?
My goal is to implement basic parsers and pretty-printers for ActionScript, C++ and Java, to be able to translate code between them. I cannot use existing parsing libraries, because they are just too broad. I need only subsets of these languages to be able to intertranslate. I assume that a correctly designed AST is the larger part of the parsing problem, so I ask here.

submitted by danbst
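For what it's worth, here is a minimal sketch of one common starting shape for such types (illustrative names of mine, not from the post), aimed at a small C-like subset: a flat token type, and an AST that mirrors the grammar with one constructor per production. Language-specific constructs can then be added per target language, or desugared into a common core for translation.

    -- A flat token type: the lexer's output.
    data Token
      = TIdent   String
      | TIntLit  Integer
      | TKeyword String     -- "if", "while", "class", ...
      | TSymbol  String     -- "{", "(", "+", ...
      deriving (Eq, Show)

    -- The AST mirrors the grammar: one constructor per production.
    data Expr
      = Var    String
      | IntLit Integer
      | BinOp  String Expr Expr     -- e.g. BinOp "+" (Var "x") (IntLit 1)
      | Call   Expr [Expr]
      deriving (Eq, Show)

    data Stmt
      = ExprStmt Expr
      | If     Expr [Stmt] [Stmt]
      | While  Expr [Stmt]
      | Return (Maybe Expr)
      deriving (Eq, Show)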
Learning Haskell, I wrote this simple function that's supposed to return a list of prime numbers up to n. It works for low enough n, but above 5000 or so it becomes noticeably slow, which is far earlier than a prime sieve has any right to slow down. What am I missing?

    import Data.List

    sieve :: (Integral a) => a -> [a]
    sieve n
      | n < 2     = []
      | n == 2    = [2]
      | otherwise = comb [2..n]
      where comb xxs@(x:xs)
              | x^2 > last xs = xxs
              | otherwise     = x : comb (xs \\ [2*x, 3*x .. n])
            comb [] = []

submitted by ZankerH
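(One plausible culprit, my own analysis rather than anything from the thread: (\\) rescans the whole remaining list for every multiple it removes, and `last xs` is O(n) at every step, so the combing is quadratic overall. A mutable-array formulation keeps each crossing-off O(1); a sketch:)

    import Control.Monad (forM_, when)
    import Data.Array.ST (newArray, readArray, writeArray, runSTUArray)
    import Data.Array.Unboxed (assocs)

    primes :: Int -> [Int]
    primes n
      | n < 2     = []
      | otherwise = [i | (i, True) <- assocs marked]
      where
        -- marked!i is True iff i is still considered prime.
        marked = runSTUArray $ do
          arr <- newArray (2, n) True
          forM_ (takeWhile (\i -> i * i <= n) [2 ..]) $ \i -> do
            isPrime <- readArray arr i
            when isPrime $
              forM_ [i * i, i * i + i .. n] $ \j -> writeArray arr j False
          return arr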
I started working on an emulator for the old CHIP-8 CPU as a project to learn haskell. Although it probably wasn't the best choice of project due to how basic it is, I've definitely gotten hooked on haskell.
I've done a few clojure projects in the past, so I'm familiar with the functional programming basics, but I haven't experimented much with haskell's abstractions (apart from basic monads).
If anyone can offer any improvements or tips, it'd be greatly appreciated. I'm planning on doing plenty more Haskell projects, so I'd like to nip any bad practices in the bud.
Thanks in advance!

submitted by tominated
Hey everyone. I've just started out with Haskell, and I just can't seem to understand how the Reader and Writer monads work. Could someone point me to some place where they're explained well? Thanks!

submitted by Bollu
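(As a quick taste, my own minimal example rather than anything from the thread: Reader threads a read-only environment to everything in the computation, and Writer accumulates a log alongside the result.)

    import Control.Monad.Reader (Reader, asks, runReader)
    import Control.Monad.Writer (Writer, tell, runWriter)

    data Config = Config { verbose :: Bool, userName :: String }

    -- Reader: every action can consult the shared Config.
    greeting :: Reader Config String
    greeting = do
      n <- asks userName
      v <- asks verbose
      return (if v then "Hello, dear " ++ n ++ "!" else "Hi " ++ n)

    -- Writer: each `tell` appends to an output log.
    countdown :: Int -> Writer [String] ()
    countdown 0 = tell ["liftoff"]
    countdown k = tell [show k] >> countdown (k - 1)

    -- runReader greeting (Config True "Ada")  ==  "Hello, dear Ada!"
    -- runWriter (countdown 3)  ==  ((), ["3","2","1","liftoff"])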
[ANN](and feedback request) unagi-chan: Fast and scalable concurrent queues for x86, with a Chan-like API
Here in sunny Austin, Texas, we've spun up a Haskell meetup group, and if you're in the area, you should join! Specifically, we've set up a small meetup tomorrow as a training class to help people solidify and expand their Haskell knowledge.
We're currently having monthly meetups here with classical talks, and we're also planning to set up monthly or biweekly study groups like the one tomorrow, for people to talk and learn in a different setting. So the one tomorrow will hopefully be the first of many more to come.
Tomorrow the study group will be downtown at MakerSquare - and if you're in the area, you should come by! Brand new to Functional Programming? Don't have GHC installed? Don't even know what Haskell is? No matter - things will happen, people will learn things, and you should come by and learn how awesome it is and how awesome all of us are, and we'll help you as much as we can.
Next week we'll be having a meeting at Rackspace in North Austin, where we'll be talking about error handling in Haskell, and I'll probably talk about something too, though I have no idea what that will be yet (it'll be a secret).
We normally have a turnout of about 20 people for the meetings. But that's just because you haven't come and hung out yet. The meetings are typically open forum and rather lax. So you're encouraged to raise your hand or just speak up.
We also tend to go drink beers and talk into the night afterwards - with many riveting conversations, including subjects like (roughly in order of priority):
- Tacos. Beer. Serious business.
- Why is software so terrible?
- How to achieve world domination in a purely-functional way?
- Education and how to do that properly.
- War stories (tears of joy/pain are encouraged)
- Cloud computing (the half-dozen Rackspace employees love this one!)
- Ask Austin (me, not the town) how some random GHC thing works.
- Non-taco-related foods.
Also, a bajillion points go to Rackspace for giving us a great working space for our meetups so far and providing us with great pizza for the past several meetings.

submitted by aseipp
A new draft paper, Resource-dependent Algebraic Effects, is available. Abstract:
There has been significant interest in recent months in finding new ways to implement composable and modular effectful programs using handlers of algebraic effects. In my own previous work, I have shown how an algebraic effect system (called “effects”) can be embedded directly in a dependently typed host language. Using dependent types ought to allow precise reasoning about programs; however, the reasoning capabilities of effects have been limited to simple state transitions which are known at compile-time. In this paper, I show how effects can be extended to support reasoning in the presence of run-time state transitions, where the result may depend on run-time information about resource usage (e.g. whether opening a file succeeded). I show how this can be used to build expressive APIs, and to specify and verify the behaviour of interactive, stateful programs. I illustrate the technique using a file handling API, and an interactive game.
I’ve just submitted this, although constructive comments and suggestions are still of course very welcome!
I've been wondering about this for a little bit:
Q: When is it more convenient to use a Reader Monad as opposed to just threading an argument through your program?
I.e. what is the advantage of:

    f :: String -> Reader Config SomeType
    -- vs
    g :: String -> Config -> SomeType
One advantage I've seen is that if you don't annotate your functions with type signatures and use the Reader Monad with type inference, you can change the type of the shared environment without having to update much code (except code that pulls out of the shared environment, but that can be factored out and shared as well). Is this the only advantage? The disadvantage to this approach, imo, is you lose the nice readability you get from explicit type signatures.
On the other hand, if you need to introduce a dependency, you can just thread it through your function via plain-old arguments. What do you lose with this? Is it that you must update all functions that depend on modified methods, and programming in a monadic style may avoid some of these explicit refactorings?
Perhaps using the Reader Monad becomes more handy in large code-bases?
PS - my apologies if I'm not articulating this well.
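(To make the plumbing difference concrete, here is a small sketch of my own, with invented names, not from the thread: with explicit arguments, middle layers must mention Config just to pass it along; with Reader, only the leaves that actually consult the environment mention it.)

    import Control.Monad.Reader (Reader, asks)

    data Config = Config { depth :: Int }

    -- Explicit threading: middleE takes Config only to hand it onward.
    leafE :: Config -> Int -> Int
    leafE c x = x + depth c

    middleE :: Config -> Int -> Int
    middleE c x = leafE c (x * 2)

    -- Reader: middleR never mentions the environment at all.
    leafR :: Int -> Reader Config Int
    leafR x = do
      d <- asks depth
      return (x + d)

    middleR :: Int -> Reader Config Int
    middleR x = leafR (x * 2)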
Edit: thank you, everyone. The consensus as I understand it seems to be (upstream/downstream refer to a method's call graph):
- The Reader monad frees environment-independent methods from having to manage the environment for downstream dependencies.
- The Reader monad is safer (you cannot manipulate or change the environment in an upstream method).
- The MonadReader class makes working with the Reader monad nice.