News aggregator

José Pedro Magalhães: POPL 2015 is in India, and I think that is a bad idea

Planet Haskell - Thu, 01/30/2014 - 1:42am
POPL, one of the most important scientific conferences in the field of programming languages, will be held in Mumbai, India, in 2015. I find this rather unfortunate. As a SIGPLAN event, POPL ought to follow the SIGPLAN Conference Anti-Harassment Policy. Quoting from the policy: "Harassment in any form, including but not limited to harassment based on (...) sexual orientation (...) will not be tolerated". Unfortunately, India recently reinstated a ban on gay sex, and rejected a petition to reconsider its decision. The law dates from the period of British rule of India, and is not unlike the British law that led to the death of Alan Turing. This is a worrying development, and sends a clear message of intolerance and harassment to local and visiting homosexuals.
Personally, I do not feel welcome or even safe in India, and consequently will not attend POPL 2015. I find it regrettable that the POPL Steering Committee failed to keep to its own anti-harassment policy when choosing a venue for POPL 2015.
Categories: Offsite Blogs

Edward Z. Yang: Equality, roughly speaking

Planet Haskell - Wed, 01/29/2014 - 11:05pm

In Software Foundations, equality is defined in this way:

Even Coq's equality relation is not built in. It has (roughly) the following inductive definition.

Inductive eq0 {X:Type} : X -> X -> Prop := refl_equal0 : forall x, eq0 x x.

Why the roughly? Well, as it turns out, Coq defines equality a little differently (reformatted to match the Software Foundations presentation):

Inductive eq1 {X:Type} (x:X) : X -> Prop := refl_equal1 : eq1 x x.

What’s the difference? The trick is to look at the induction principles that Coq generates for each of these:

eq0_ind : forall (X : Type) (P : X -> X -> Prop),
  (forall x : X, P x x) -> forall y y0 : X, eq0 y y0 -> P y y0

eq1_ind : forall (X : Type) (x : X) (P : X -> Prop),
  P x -> forall y : X, eq1 x y -> P y

During our Homotopy Type Theory reading group, Jeremy pointed out that the difference between these two principles is exactly the difference between path induction (eq0) and based path induction (eq1). (This is covered in section 1.12 of the Homotopy Type Theory book.) So, Coq uses the slightly weirder definition because it happens to be a bit more convenient. (I’m sure this is folklore, but I sure didn’t notice this until now! For more reading, check out this excellent blog post by Dan Licata.)
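For Haskell readers, a rough analogue of these eliminators can be sketched with a GADT (a sketch of mine, not from the post; all names are hypothetical). Pattern matching on the reflexivity constructor is exactly based path induction, and symmetry and transitivity fall out of it:

```haskell
{-# LANGUAGE GADTs #-}

-- A Haskell sketch of Coq's eq as a GADT
data Equal a b where
  Refl :: Equal a a

-- The shape of eq1_ind (based path induction): from a proof of P x
-- and a proof that x equals y, conclude P y.
basedJ :: p x -> Equal x y -> p y
basedJ px Refl = px

-- Symmetry and transitivity derived by matching on Refl
sym :: Equal a b -> Equal b a
sym Refl = Refl

trans :: Equal a b -> Equal b c -> Equal a c
trans Refl Refl = Refl
```

Matching on `Refl` refines the two type indices to be equal, which is what lets `basedJ` return `px` at the new type.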

Categories: Offsite Blogs

incorrect MonadPlus law "v >> mzero = mzero"?

libraries list - Wed, 01/29/2014 - 3:57pm
Hi,

this law apparently fails for a MonadPlus instance that has more than one possible failure value. Consider:

runIdentity . runErrorT $
  ((ErrorT . Identity $ Left "failure") >> mzero :: ErrorT String Identity ())

This evaluates to `Left "failure"`, which isn't equal to ErrorT's mzero, `Left ""`. This isn't just the case of ErrorT; it fails for any MonadPlus with multiple failure values. For example,

lift (tell "foo") >> mzero :: MaybeT (Writer String) ()

is again distinct from mzero. Actually, no monad transformer with a MonadPlus instance can satisfy the law, because the part in front of `>> mzero` can introduce side effects in the underlying monad.

I'm not sure what the proper solution should be. Perhaps to change the laws to:

return x >> mzero = mzero
(v >> mzero) >>= f = v >> mzero

That is, if an expression ends with `mzero`, it behaves like `mzero`.

Petr
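The second counterexample can be checked in a self-contained program (my sketch, assuming the transformers-style modules; not part of the original mail):

```haskell
import Control.Monad (mzero)
import Control.Monad.Trans.Class (lift)
import Control.Monad.Trans.Maybe (MaybeT, runMaybeT)
import Control.Monad.Trans.Writer (Writer, runWriter, tell)

lhs, rhs :: MaybeT (Writer String) ()
lhs = lift (tell "foo") >> mzero  -- the "v >> mzero" side
rhs = mzero                       -- plain mzero

main :: IO ()
main = do
  print (runWriter (runMaybeT lhs))  -- (Nothing,"foo"): the tell survives
  print (runWriter (runMaybeT rhs))  -- (Nothing,""):    so the law fails
```

The `tell` side effect persists in the underlying Writer monad even though the MaybeT layer fails, which is exactly the point of the post.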
Categories: Offsite Discussion

Request for review: numbers

Haskell on Reddit - Wed, 01/29/2014 - 2:53pm

Hey Haskellers, acting maintainer of numbers here.

I got a pull request that I'm a bit hesitant to accept for a few reasons. But I don't actually use the library, so I'd appreciate some additional opinions on a few questions that have arisen:

  • Should the fields of Interval be strict?
  • Should there be a Bounded instance for Bounded a => Interval a?
  • Is the current Ord instance useful, sensible, and acceptable?

Additionally, it would be nice if you could glance at the other changes in the PR and make sure they look good.

https://github.com/DanBurton/numbers/pull/10/files

Do you use the numbers library? Wish it had some new feature? Want to write a test suite or benchmark? I'd love to hear from you. Do you know of a class or tutorial that teaches via the numbers library? Let me know! I feel a bit disconnected from this library's history, and I'd like to be a little more plugged in.

submitted by drb226
Categories: Incoming News

Lens 4.0 is out

Haskell on Reddit - Wed, 01/29/2014 - 1:36pm
Categories: Incoming News

GHC 7.8 branch is created

Haskell on Reddit - Wed, 01/29/2014 - 11:32am
Categories: Incoming News

Is it safe to create global variables using unsafePerformIO?

haskell-cafe - Wed, 01/29/2014 - 9:32am
Hello! Let's consider the following code:

import Control.Concurrent
import Control.Concurrent.STM
import System.IO.Unsafe (unsafePerformIO)

{-# NOINLINE counter #-}
counter :: TVar Int
counter = unsafePerformIO $ newTVarIO 0

incCounter :: IO Int
incCounter = do
  r <- atomically $ do
    t <- readTVar counter
    let t' = t + 1
    writeTVar counter t'
    return t'
  return r

main :: IO ()
main = do
  n1 <- incCounter
  print n1
  n2 <- incCounter
  print n2
  n3 <- incCounter
  print n3

This program prints:

1
2
3

So we have a "global variable". Do I understand correctly that newTVarIO creates a TVar and the RTS memoizes it, since the 'counter' function is pure? If that's true, could it happen that under some circumstances the memoized value will be deleted from memory? Or does Haskell keep all memoized values forever? Another issue which I'm afraid of: would the given code be safe in a multithreaded application? For example, is it possible to encounter a race condition if two threads will try to create a new counter in the
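On the multithreading worry: the increments themselves go through `atomically`, so they cannot race with each other. A small stress test of the same NOINLINE-global pattern (my own sketch, assuming GHC with the threaded runtime; the constants are arbitrary):

```haskell
import Control.Concurrent (forkIO, newEmptyMVar, putMVar, takeMVar)
import Control.Concurrent.STM
import Control.Monad (replicateM_)
import System.IO.Unsafe (unsafePerformIO)

{-# NOINLINE counter #-}
counter :: TVar Int
counter = unsafePerformIO (newTVarIO 0)

main :: IO ()
main = do
  done <- newEmptyMVar
  -- four threads, each performing 1000 atomic increments
  replicateM_ 4 . forkIO $ do
    replicateM_ 1000 (atomically (modifyTVar' counter (+ 1)))
    putMVar done ()
  replicateM_ 4 (takeMVar done)
  readTVarIO counter >>= print  -- 4000: no increments are lost
```

The remaining hazard is the one the NOINLINE pragma guards against: without it (and without care about CSE), the `counter` thunk could be duplicated, yielding more than one TVar.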
Categories: Offsite Discussion

Theory Lunch (Institute of Cybernetics, Tallinn): Transgressing the limits

Planet Haskell - Wed, 01/29/2014 - 8:48am

Today, the 14th of January 2014, we had a special session of our Theory Lunch. I spoke about ultrafilters and how they allow one to generalize the notion of limit.

Consider the space ℓ∞ of bounded sequences x = (x_n) of real numbers, together with the supremum norm. We would like to define a notion of limit Lim x_n which is defined for every x ∈ ℓ∞ and satisfies the well-known properties of the standard limit:

  1. Linearity: Lim (x_n + y_n) = Lim x_n + Lim y_n.
  2. Homogeneity: Lim (λ·x_n) = λ·Lim x_n.
  3. Monotonicity: if x_n ≤ y_n for every n, then Lim x_n ≤ Lim y_n.
  4. Nontriviality: if x_n = 1 for every n, then Lim x_n = 1.
  5. Consistency: if the limit exists in the classical sense, then the two notions coincide.

The consistency condition is reasonable also because it avoids trivial cases: if we fix k ∈ ℕ and define the limit of the sequence as the value x_k, then the first four properties are satisfied.

Let us recall the classical definition of limit: we say that (x_n) converges to L if and only if, for every ε > 0, the set of values n such that |x_n − L| < ε is cofinite, i.e., has a finite complement. The family of cofinite subsets of ℕ (in fact, of any set X) has the following properties:

  1. Upper closure: if A ∈ F and A ⊆ B ⊆ X, then B ∈ F.
  2. Meet stability: if A, B ∈ F, then A ∩ B ∈ F.

A family F of subsets of X with the two properties above is called a filter on X. An immediate example is the trivial filter {X}; another example is the improper filter P(X), the whole power set of X. The family of cofinite subsets of X is called the Fréchet filter on X. The Fréchet filter is not the improper one if and only if X is infinite.

An ultrafilter on X is a filter U on X satisfying the following additional conditions:

  1. Properness: ∅ ∉ U.
  2. Maximality: for every A ⊆ X, either A ∈ U or X∖A ∈ U.

For example, if x ∈ X, then U_x = { A ⊆ X : x ∈ A } is an ultrafilter on X, called the principal ultrafilter generated by x. Observe that ⋂_{A ∈ U_x} A = {x}: if instead ⋂_{A ∈ U} A = ∅, we say that U is free. These are, in fact, the only two options.

Lemma 1. For a proper filter U to be an ultrafilter, it is necessary and sufficient that it satisfies the following condition: for every n ≥ 1 and nonempty A_1, …, A_n ⊆ X, if A_1 ∪ … ∪ A_n ∈ U, then A_i ∈ U for at least one i.

Proof: It is sufficient to prove the thesis with n = 2. For necessity: if A ∪ B ∈ U with A, B ∉ U, then { C ⊆ X : B ∪ C ∈ U } is a proper filter that properly contains U, which is impossible: since A belongs to it but not to U, maximality would give X∖A ∈ U, hence ∅ = A ∩ (X∖A) would belong to it as well. For sufficiency: if the condition is satisfied, then for every A which is neither ∅ nor X we have A ∪ (X∖A) = X ∈ U, thus either A ∈ U or X∖A ∈ U.

Theorem 1. Every nonprincipal ultrafilter is free. In addition, an ultrafilter is free if and only if it extends the Fréchet filter. In particular, every ultrafilter over a finite set is principal.

Proof: Let U be a nonprincipal ultrafilter on X and let x ∈ X: then U ≠ U_x, so either there exists A such that A ∈ U and x ∉ A, or there exists A such that x ∈ A and A ∉ U. In the first case, x ∉ ⋂_{B ∈ U} B; in the second case, we consider X∖A ∈ U, which does not contain x, and reduce to the first case. As x is arbitrary, ⋂_{B ∈ U} B = ∅, that is, U is free.

Now, for every x ∈ X the set X∖{x} belongs to the Fréchet filter but not to U_x: therefore, no principal ultrafilter extends the Fréchet filter. On the other hand, if U is an ultrafilter and some cofinite set C does not belong to U, then A = X∖C ∈ U by maximality; A is finite, hence {x} ∈ U for some x ∈ A because of Lemma 1, thus U is principal and cannot be a free ultrafilter.

So it seems that free ultrafilters are the right thing to consider when trying to expand the concept of limit. There is an issue, though: we have not seen a single example of a free ultrafilter; in fact, we do not even know (yet) whether free ultrafilters exist at all! The answer to this problem comes, in a shamelessly nonconstructive way, from the following

Ultrafilter lemma. Every proper filter can be extended to an ultrafilter.

The ultrafilter lemma, together with Theorem 1, implies the existence of free ultrafilters on every infinite set, and in particular on ℕ. On the other hand, to prove the ultrafilter lemma the Axiom of Choice is required, in the form of Zorn’s lemma. Before giving such a proof, we recall that a family of sets has the finite intersection property if every finite subfamily has a nonempty intersection: every proper filter has the finite intersection property.

Proof of the ultrafilter lemma. Let F be a proper filter on X and let P be the family of the collections of subsets of X that extend F and have the finite intersection property, ordered by inclusion. Let C be a totally ordered subfamily of P: then G = ⋃_{H ∈ C} H extends F and has the finite intersection property, because for every finitely many A_1, …, A_n ∈ G there exists by construction H ∈ C such that A_1, …, A_n ∈ H.

By Zorn’s lemma, P has a maximal element U, which surely satisfies F ⊆ U and ∅ ∉ U. If A ∈ U and A ⊆ B ⊆ X, then U ∪ {B} still has the finite intersection property, therefore B ∈ U by maximality. If A, B ∈ U, then U ∪ {A ∩ B} still has the finite intersection property, therefore A ∩ B ∈ U, again by maximality.

Suppose, for the sake of contradiction, that there exists A ⊆ X such that A ∉ U and X∖A ∉ U: then neither U ∪ {A} nor U ∪ {X∖A} has the finite intersection property, hence there exist B, C ∈ U such that A ∩ B = ∅ and (X∖A) ∩ C = ∅. But A ∩ B = ∅ means B ⊆ X∖A, and (X∖A) ∩ C = ∅ means C ⊆ A: therefore

B ∩ C ⊆ (X∖A) ∩ A = ∅,

against U having the finite intersection property.

We are now ready to expand the idea of limit. Let (M, d) be a metric space and let U be an ultrafilter on ℕ: we say that L ∈ M is the ultralimit of the sequence (x_n) along U if for every ε > 0 the set

{ n ∈ ℕ : d(x_n, L) < ε }

belongs to U. (Observe how, in the standard definition of limit, the above set is required to belong to the Fréchet filter.) If this is the case, we write

L = U-lim_n x_n.

Ultralimits, if they exist, are unique and satisfy our first four conditions. Moreover, the choice of the principal ultrafilter U_k corresponds to the trivial definition Lim x_n = x_k. So, what about free ultrafilters?

Theorem 2. Every bounded sequence of real numbers has an ultralimit along every free ultrafilter on ℕ.

Proof: It is not restrictive to suppose 0 ≤ x_n ≤ 1 for every n. Let U be an arbitrary, but fixed, free ultrafilter on ℕ. We will construct a sequence of closed intervals I_k, k ≥ 0, each of length 2^(−k), such that I_{k+1} ⊆ I_k and { n ∈ ℕ : x_n ∈ I_k } ∈ U for every k. By the Cantor intersection theorem it will be ⋂_k I_k = {L} for some L: we will then show that L = U-lim_n x_n.

Let I_0 = [0, 1]. Let I_1 be either [0, 1/2] or [1/2, 1], chosen according to the following criterion: { n ∈ ℕ : x_n ∈ I_1 } ∈ U. (At least one of the two halves satisfies it, by Lemma 1.) If both halves satisfy the criterion, we just choose one once and for all. We iterate the procedure by always choosing as I_{k+1} one of the two halves of I_k such that { n ∈ ℕ : x_n ∈ I_{k+1} } ∈ U.

Let ε > 0. Let {L} = ⋂_k I_k, and let k be so large that 2^(−k) < ε: then I_k ⊆ (L − ε, L + ε), thus { n ∈ ℕ : x_n ∈ I_k } ⊆ { n ∈ ℕ : |x_n − L| < ε }. As the smaller set belongs to U, so does the larger one.
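A concrete illustration (my example, not from the talk): the bounded sequence x_n = (−1)^n has no classical limit, yet by Theorem 2 it has an ultralimit along any free ultrafilter U, and the value depends on U. Writing E for the set of even indices, maximality forces either E ∈ U or ℕ∖E ∈ U, and correspondingly:

```latex
% E = \{ n \in \mathbb{N} : n \text{ even} \}; exactly one of E, \mathbb{N}\setminus E is in U
\operatorname{U\text{-}lim}_n \, (-1)^n =
\begin{cases}
  1,  & E \in \mathcal{U}, \\
  -1, & \mathbb{N}\setminus E \in \mathcal{U}.
\end{cases}
```

This already shows why the forthcoming consistency theorem must quantify over all free ultrafilters: different choices of U can produce different limits for a non-convergent sequence.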

We have thus almost achieved our original target: a notion of limit which applies to every bounded sequence of real numbers. Such notion will depend on the specific free ultrafilter we choose: but it is already very reassuring that such a notion exists at all! To complete our job we need one more check: we have to be sure that the definition is consistent with the classical one. And this is indeed what happens!

Theorem 3. Let (x_n) be a sequence of real numbers and let L ∈ ℝ. Then lim_{n→∞} x_n = L in the classical sense if and only if L = U-lim_n x_n for every free ultrafilter U on ℕ.

To prove Theorem 3 we make use of an auxiliary result, which is of interest by itself.

Lemma 2. Let P be the family of collections of subsets of X that have the finite intersection property. The maximal elements of P are precisely the ultrafilters on X.

Proof: Every ultrafilter is clearly maximal in P. If U is maximal in P, then it is clearly proper, and we can reason as in the proof of the ultrafilter lemma to show that it is upper closed, meet stable, and maximal, i.e., that it is actually an ultrafilter.

Proof of Theorem 3: Suppose (x_n) does not converge to L in the classical sense. Fix ε > 0 such that the set A = { n ∈ ℕ : |x_n − L| ≥ ε } is infinite. Then the family consisting of A together with the cofinite subsets of ℕ has the finite intersection property: let U be a free ultrafilter that extends it. Then { n ∈ ℕ : |x_n − L| < ε } ∉ U, and L is not the ultralimit of (x_n) along U.

The converse implication follows from the classical definition of limit, together with the very notion of free ultrafilter.

Theorem 3 does hold for sequences of real numbers, but does not extend to arbitrary metric spaces. In fact, the following holds, which we state without proof.

Theorem 4. Let (M, d) be a metric space. The following are equivalent.

  1. For some free ultrafilter U on ℕ, every sequence in M has an ultralimit along U.
  2. For every free ultrafilter U on ℕ, every sequence in M has an ultralimit along U.
  3. M is compact.

Ultrafilters are useful in many other contexts. For instance, they are used to construct hyperreal numbers, which in turn allow a rigorous definition of infinitesimals and the foundation of calculus over those. But this might be the topic for another Theory Lunch talk.


Categories: Offsite Blogs

The marriage of bisimulations and Kripke logical relations

Lambda the Ultimate - Wed, 01/29/2014 - 7:18am
CK Hur, D Dreyer, G Neis, V Vafeiadis (POPL 2012). The marriage of bisimulations and Kripke logical relations.

There has been great progress in recent years on developing effective techniques for reasoning about program equivalence in ML-like languages---that is, languages that combine features like higher-order functions, recursive types, abstract types, and general mutable references. Two of the most prominent types of techniques to have emerged are bisimulations and Kripke logical relations (KLRs). While both approaches are powerful, their complementary advantages have led us and other researchers to wonder whether there is an essential tradeoff between them. Furthermore, both approaches seem to suffer from fundamental limitations if one is interested in scaling them to inter-language reasoning.

In this paper, we propose relation transition systems (RTSs), which marry together some of the most appealing aspects of KLRs and bisimulations. In particular, RTSs show how bisimulations' support for reasoning about recursive features via coinduction can be synthesized with KLRs' support for reasoning about local state via state transition systems. Moreover, we have designed RTSs to avoid the limitations of KLRs and bisimulations that preclude their generalization to inter-language reasoning. Notably, unlike KLRs, RTSs are transitively composable.

I understand the paper as offering an extension to bisimulation that handles the notion of hidden transitions properly and so allows a generalisation of KLRs to any systems that can be treated using bisimulations. Applications to verified compilation are mentioned, and everything has been validated in Coq.

Categories: Offsite Discussion

apfelmus: FRP - Release of reactive-banana version 0.8

Planet Haskell - Wed, 01/29/2014 - 4:33am

I am very pleased to announce the release of version 0.8.0.0 of my reactive-banana library on hackage. The major additions are a push-driven implementation that actually performs like one and a new module Reactive.Banana.Prim which can be used to implement your own, custom FRP library.

After a long wait, I have finally found the time to implement a push-driven algorithm that actually deserves that name – the previous implementations had taken a couple of shortcuts that rendered the performance closer to that of a pull-driven implementation. There was also an unexpected space leak, which I have fixed using a reasoning principle I’d like to call space invariants. Note that this release doesn’t include garbage collection for dynamically switched events just yet. But having rewritten the core algorithm over and over again for several years now, I finally understand its structure so well that garbage collection is easy to add – in fact, I have already implemented it in the development branch for the future 0.9 release.

Starting with this release, the development of reactive-banana will focus on performance – the banana is ready to pull out the guns and participate in the benchmarking game. (You see, the logo is no idle threat!) In fact, John Lato has put together a set of benchmarks for different FRP libraries. Unfortunately, reactive-banana took a severe beating there, coming out as the slowest contender. Oops. The main problem is that the library uses a stack of monad transformers in an inner loop – bad idea.

Now, optimizing monad transformers seems to be an issue of general interest, but the only public information I could find was a moderately useful wiki page. If you have any tips or benchmarks for optimizing monad transformer stacks, please let me and the community know!

The other major addition to the reactive-banana library is a module Reactive.Banana.Prim which presents the core algorithm in such a way that you can use it to implement your very own FRP API. Essentially, it implements a basic FRP system on top of which you can implement richer semantics – like observable sharing, recursion, simultaneous events, change notification, and so on. Of course, few people will ever want to do that, also given that reactive-banana is currently not the fastest fruit in the basket.

But this new module is my main motivation for releasing version 0.8. It contains the lessons that I’ve learned from implementing yet another toy FRP system for my threepenny-gui project, and it’s time to put the latter on a solid footing. In particular, it appears that widget abstractions greatly benefit from dynamic event switching, which means that future versions of Threepenny cannot do without a solid FRP subsystem.

Categories: Offsite Blogs

Generalized null / zero

haskell-cafe - Wed, 01/29/2014 - 4:25am
1. Is there a more general version of `null`? (E.g. for a Monad, Functor, Applicative, Traversable or the like.) The closest I can come up with, in decreasing clunkiness:

zero :: (MonadPlus m, Eq (m a)) => m a -> Bool
zero m = m == mzero

zero :: (Alternative f, Eq (f a)) => f a -> Bool
zero m = m == empty

zero :: (Monoid m, Eq m) => m -> Bool
zero m = m == mempty

Though requiring Eq seems ugly and unnecessary, in theory.

2. In that vein, is there an existing function for "a value or a default if it's zero"? E.g.:

orElse :: (Monoid m) => m -> m -> m
a `orElse` b = if zero a then b else a

Thank you,
Alvaro
http://alva.ro
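One Eq-free answer to the first question (a sketch of mine, not from the thread; `isEmpty` is a hypothetical name) is to fold over any Foldable container; and for Alternative, "a value or a default" is just (<|>):

```haskell
import Control.Applicative (Alternative, (<|>))
import qualified Data.Foldable as F

-- Eq-free emptiness test: stop at the first element, if any
isEmpty :: F.Foldable t => t a -> Bool
isEmpty = F.foldr (\_ _ -> False) True

-- "A value or a default": for Alternative this is exactly (<|>)
orElse :: Alternative f => f a -> f a -> f a
orElse = (<|>)
```

For example, `isEmpty []` is True, `isEmpty (Just 3)` is False, and ``Nothing `orElse` Just 1`` is `Just 1`. This dodges the Eq constraint entirely, at the cost of requiring Foldable rather than an arbitrary Monoid.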
Categories: Offsite Discussion

[ANN] yocto-0.1.2

General haskell list - Tue, 01/28/2014 - 9:57pm
Howdy,

I'd like to present a minimal JSON encoding/decoding library:

http://hackage.haskell.org/package/yocto-0.1.2

From the README: Yocto is exceedingly simple: it only exports one type, Value (which can represent any JSON-encoded data), in addition to Read and Show instances for it, which, respectively, take care of decoding and encoding values automatically.

It's worth mentioning that Yocto handles numbers as Rationals rather than Doubles, which makes it faithful to the JSON standard and lets it handle rational numbers of arbitrary magnitude and precision.

The name is a play on metric unit prefixes: AttoJson is a tiny JSON library, and Yocto is even smaller. (The entire implementation fits in fewer than 80 lines x 80 columns.) [It is meant primarily for interactive use and one-offs, which is how I'm handwaving hijacking Read and Show.]

Anyway, I hope you find it useful; the code lives here: https://github.com/ajg/yocto

Alvaro
http://alva.ro
Categories: Incoming News

Lazy MWC Random?

haskell-cafe - Tue, 01/28/2014 - 8:08pm
Dear Haskellers,

I'm in a situation where I'd like to generate an infinite list of random elements (basically, I'm simulating stochastic systems). I feel like MWC Random is the fastest RNG available, but when I try to pull the infinite list out of RandST, it obviously doesn't halt, because ST is strict. Someone posted a way around this in this Stack Overflow thread:

https://stackoverflow.com/questions/16248600/parallel-computations-with-fast-randomness-and-purity

which would fix my problem. My question is, though: why isn't ST.Lazy included as a PrimMonad instance anyway? The best answer I can come up with is that, since evaluating the generator is time dependent, it's best to make it strict to make sure that one's program isn't tapping into /dev/random at arbitrary times. In this way the best Stack Overflow solution is quite good. It requires one to strictly generate a Seed (since that's the only way to do it), but then converts the ST monad to the lazy version to lazify everything else. Howeve
Categories: Offsite Discussion

Alejandro Cabrera: 10 Ways to Incorporate Haskell into a Modern, Functional, CS Curriculum

Planet Haskell - Tue, 01/28/2014 - 7:25pm
I had some time to spare while I was waiting for dinner to be ready. Dinner was being prepared by the local China Gate, and it would probably take on the order of 10 to 15 minutes. So I sat down in the lobby with a notepad in hand.

The topic on my thoughts: how could Haskell be incorporated into a modern CS curriculum? I decided to run as radically as I could with this, writing a rough draft of what such a curriculum might look like.

I won't make any arguments for or against functional programming here. I refer readers instead to papers like this one and this one, talks like this one or this one, and books like this one. This is an exercise in ideation.

Without further ado, let's begin:

0. Haskell for Introductory Computer Science

Imagine this is the first time you're learning programming. You've never been exposed to mutation, to functions, to compiling, algorithms, or any of the details of architecture. Where do we start?

Four weeks of Haskell. Enough to get through the first few chapters of LYAH and be able to start thinking about recursion. End each week with a simple exercise, for example, upper-casing a list of strings, upper-casing only the first letter of every string that is longer than 2 characters - little things to build confidence. The Udacity Introduction to Computer Science course has many appropriate ideas for beginning level exercises.
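Those two warm-up exercises might look like this (one possible solution; the function names are mine):

```haskell
import Data.Char (toUpper)

-- Upper-case every string in a list
shoutAll :: [String] -> [String]
shoutAll = map (map toUpper)

-- Upper-case only the first letter of each string longer than 2 characters
capLong :: [String] -> [String]
capLong = map cap
  where
    cap s@(c : cs) | length s > 2 = toUpper c : cs
    cap s                         = s
```

For example, `shoutAll ["ab", "cde"]` yields `["AB", "CDE"]`, while `capLong ["ab", "cde"]` yields `["ab", "Cde"]` — small enough to finish in a lab session, yet each one exercises recursion, pattern matching, and higher-order functions.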

With that introductory period out of the way, now's the time to show why computer science is relevant! Take the time to showcase the areas: operating systems, networking, algorithms, programming languages, cryptography, architecture, hardware, and more. Make it relevant:

* Operating systems: Why are there so many? What does it do? How does my application (email, browser, Steam) run from beginning to end?
* Algorithms: How do you figure out if two characters have collided in a video game? How do you sort a list of address contacts alphabetically by last name?
* Networking: How do you send an instant message or an email to a friend on the other side of the world?
* Programming Languages: As with operating systems.

There are many applicable introductory exercises here that can set the pace for future courses.

1. Haskell for Basic Algorithms

This one, and the later advanced algorithms course on this "Top 10 List", deserve special attention.

Algorithms are fundamentally pure constructs. You give them a well-defined input, and receive a well-defined output.

Take plenty of time to provide weekly exercises. Teaching sorting algorithms, trees, string matching algorithms, and more will be a delight here, I predict.

It's also a good time to introduce basic run-time analysis, e.g., Big O notation.

This is also a beautiful time to introduce QuickCheck in conjunction with invariants.
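A minimal taste of what that pairing could look like (my sketch, assuming the QuickCheck package): state the invariants of sorting as properties and let QuickCheck hunt for counterexamples.

```haskell
import Data.List (sort)
import Test.QuickCheck

-- Invariant 1: sorting produces an ordered list
prop_ordered :: [Int] -> Bool
prop_ordered xs = isOrdered (sort xs)
  where isOrdered ys = and (zipWith (<=) ys (drop 1 ys))

-- Invariant 2: sorting is idempotent and preserves length
prop_idempotent :: [Int] -> Bool
prop_idempotent xs = sort (sort xs) == sort xs && length (sort xs) == length xs

main :: IO ()
main = do
  quickCheck prop_ordered
  quickCheck prop_idempotent
```

Students then swap `Data.List.sort` for their own implementation and see instantly whether the invariants still hold.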

2. Haskell for Data Structures

Very similar to the basic algorithms course, except now we teach students about some basic ways to organize data. Lists, vectors, trees, hash maps, and graphs - these should be enough to keep most students (and practitioners) well-equipped for years!

QuickCheck and frequent programming exercises will do well here.

If an advanced version of this course is desired, I highly recommend starting from here to brainstorm a variant: Purely Functional Data Structures

3. Haskell for Networking

This can be very low-level (OSI network stack, TCP window buffering, etc.), it can be very high-level (HTTP, distributed systems), or some mix of the two.

I think the most important concepts students can come out of this course with would be:

* Validating data at the boundaries and the dangers/usefulness of IO
* How to communicate with other systems
* How to write their own protocols, and why in general, they shouldn't reinvent the wheel

4. Haskell for Operating Systems

Teach them to ask - what is an operating system? How do I manage my resources?

It's worth surveying the concepts of: memory management, file systems, data persistence, concurrency, parallelism, process management, task scheduling, and possibly a bit more.

Great projects in this course include: 1) write your own shell, 2) write a simple, local task manager.

5. Haskell for Comparative Programming Languages

Let's talk types, functions, composition, and problem solving using different approaches. Ideally, such a course would come after learning how to design solutions to mid-sized programming challenges.

After that, have students write an interpreter for a Lisp.
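The first iteration of such an interpreter can be tiny. A sketch (mine, not from the post; parsing omitted, and the data type and names are hypothetical):

```haskell
-- A minimal evaluator for a Lisp-like arithmetic language (no parser yet)
data SExpr
  = Num Integer
  | Sym String
  | List [SExpr]

eval :: SExpr -> Maybe Integer
eval (Num n)                 = Just n
eval (List (Sym "+" : args)) = fmap sum     (mapM eval args)
eval (List (Sym "*" : args)) = fmap product (mapM eval args)
eval _                       = Nothing
```

Here `eval (List [Sym "+", Num 1, Num 2, List [Sym "*", Num 3, Num 4]])` gives `Just 15`. Students grow this core week by week: a parser, variables and an environment, `lambda`, and eventually proper error reporting via Either.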

6. Haskell for Compiler Construction

More on: write your own language. This course should cover parsing, lexical analysis, type analysis, and the conversion from source to assembly.

7. Haskell for Advanced Algorithms

This one's going to be fun. Unleash the power of equational reasoning to put together a course that runs through: graph theory, network flow analysis, greedy algorithms, memoization, and more.

This would also be a great time to discuss the price one pays for purity in terms of asymptotic performance, and how to overcome it, if necessary.

Also, an extended treatment of algorithmic analysis in the presence of laziness would be valuable here.

8. Haskell for Introductory Design of Programs

Really, this should be higher up in this list, and a very early course.

The goal of this course is to come out of it knowing how to:

* Get a big picture of what they're trying to build
* Break it down into the smaller pieces
* Iterate 'til they get the big picture running

It's a great time to teach some basic ideas for testing, how to experiment with the REPL, and how to take advantage of the type system for simple things.

On a more social level, it's a wonderful time to also guide students towards collaborative design, e.g., how to work together and make it fun and efficient.

9. Haskell for High-Performance Computing

This could be a very fun course. It affords the opportunity to allow students to run wild with simulations and experiments of their choosing, while learning about what it means to do high-performance computing in a functional language.

Given that, it should teach some basic tools that will be applicable to most or all projects. How does one benchmark well? What are the evils of optimization? What is over-optimization? When is optimization needed? What tools exist right now to harness parallelism in Haskell (Repa, Accelerate, etc.)? When is data distribution needed? Why is parallelism important? How is parallelism different than concurrency? How can the type system be wielded to help keep units (km/s, etc.) consistent across calculations?
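To make the parallelism questions concrete, an early in-class example could use evaluation strategies (my sketch, assuming the parallel package; the chunk size is arbitrary):

```haskell
import Control.Parallel.Strategies (parListChunk, rdeepseq, using)

-- Sum of squares, with the map evaluated in parallel chunks of 1000 elements
sumSquares :: [Double] -> Double
sumSquares xs = sum (map (^ 2) xs `using` parListChunk 1000 rdeepseq)

main :: IO ()
main = print (sumSquares [1 .. 10000])
```

The same function compiled with and without `-threaded -rtsopts` (and run with `+RTS -N`) lets students measure the speedup themselves, and the purity of `sumSquares` is exactly what makes the strategy safe to bolt on.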

I'd advocate for letting students choose their own projects built in parallel with the course taking place. A simple default is to optimize the much-lauded matrix multiply to larger and larger scales (distributed even, if they want to go so far!). Writing a collision detection engine for a simple game would be pretty interesting, as well.

Notably (in my opinion) absent topics:

* Hardware: CPUs, cache hierarchies, main memory, storage, interconnects
* Advanced data structures
* Cryptography
* Web development
* Type systems
* Cross-disciplinary development, e.g., Haskell for CS + [Physics, Chemistry, Biology, etc.]

These topics are absent from my list for no reason other than I didn't think of them 'til the list was done and articulated. There's so much one can learn and apply at the level of abstraction that computer science (and mathematics) affords that we could specialize further and further. For my own sake, I'm setting a limit. :)

Final Notes

I've just brain-stormed a curriculum in Haskell. There's a lot of details missing, but it's a working draft.

There are also other things to consider beyond the technical aspects. Consider the social aspects. How do we teach students to work together? How do we keep learning engaging and fun? How do we help students connect to the greater community of developers that exists outside of academia? How do we keep the lessons relevant to the lives that they lead? How do we encourage them to pursue their passions?
Categories: Offsite Blogs

www.huffingtonpost.com

del.icio.us/haskell - Tue, 01/28/2014 - 6:35pm
Categories: Offsite Blogs

www.cs.uu.nl

del.icio.us/haskell - Tue, 01/28/2014 - 6:35pm
Categories: Offsite Blogs
