News aggregator

Taking a number to a power and taking its mod

haskell-cafe - Sun, 06/15/2014 - 4:10pm
Hello,

I've been trying to do a problem from SPOJ which requires me to find prime numbers in a range. I've been battling with several possibilities; so far I have had no luck. I remember primality tests from college, so I looked up the one I knew: Fermat's little theorem. It requires me to do this check:

a^(p-1) `mod` p == 1

However, taking a number to a large power is very slow, so it was even worse than the solution I was already trying. I'm not here for you guys to give me the solution, but I'm here to understand how I can do that operation quickly. This wiki entry implements powMod: http://www.haskell.org/haskellwiki/Testing_primality

Calling that function with powMod 999999527 2 (999999527-1) is very fast. What's going on? Can someone shed some light?

[]'s Rafael
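
What makes powMod fast is that it reduces modulo p after every multiplication and squares its way through the exponent, so it only needs about log2 e multiplications of numbers smaller than p. A minimal sketch of the idea (not the wiki's exact code; powMod' is just a name for this illustration):

powMod' :: Integer -> Integer -> Integer -> Integer
powMod' m = go
  where
    -- go b e computes b^e `mod` m by exponentiation by squaring,
    -- reducing mod m at every step so intermediate numbers stay below m^2.
    go _ 0 = 1
    go b e
      | even e    = go (b * b `mod` m) (e `div` 2)
      | otherwise = b * go (b * b `mod` m) (e `div` 2) `mod` m

For powMod' 999999527 2 (999999527 - 1) that is roughly 30 squarings of numbers below 999999527, instead of first building a number with hundreds of millions of digits, which is why the naive a^(p-1) `mod` p is hopeless while powMod returns instantly.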
Categories: Offsite Discussion

Noob here: How can I define a list length function non-recursively?

Haskell on Reddit - Sun, 06/15/2014 - 12:47pm

It's a thought-exercise from 'the Haskell School of Music.' I was staring at it for a long time to no avail. I realize the answer's probably obvious.
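
For what it's worth, the usual family of answers (which may or may not be the one the book intends) hands the recursion off to a fold or to map, e.g.:

-- Each element contributes 1; foldr (or sum . map) supplies the recursion.
myLength :: [a] -> Int
myLength = foldr (\_ n -> 1 + n) 0

myLength' :: [a] -> Int
myLength' = sum . map (const 1)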

submitted by hoomei
[link] [19 comments]
Categories: Incoming News

Support of multi-constructor types by UNPACK pragma

haskell-cafe - Sun, 06/15/2014 - 12:43pm
Currently the pragma only supports single-constructor types, so in the following example it will simply be ignored:

data A = A1 Char | A2 {-# UNPACK #-} !B
data B = B1 Int | B2 Bool

However it seems to be easily solvable by changing the type A to something like the following during unpacking:

data A = A1 Char
       | A2_1 Int  -- from B1
       | A2_2 Bool -- from B2

Am I missing something? Why is it not implemented?
Categories: Offsite Discussion

Yesod Web Framework: Three evil ways to avoid conditional compilation

Planet Haskell - Sun, 06/15/2014 - 8:15am

Let's suppose that you're using a library at version 1. It exposes a function:

someFunc :: Int -> String
someFunc i = 'x' : show i

In version 2 of the library, someone realizes that this library isn't general-purpose enough: why is the 'x' character hardcoded? So version 2 exposes a more powerful version of the function:

someFunc :: Int -> Char -> String
someFunc i c = c : show i

In your current codebase, you have:

main = putStrLn $ someFunc 5

Changing that code to work with version 2 is trivial:

main = putStrLn $ someFunc 5 'x'

But what if you want to make your code work with both versions? The real, proper answer, that I hope everyone actually uses, is to use Cabal CPP macros:

#if MIN_VERSION_somelib(2, 0, 0)
main = putStrLn $ someFunc 5 'x'
#else
main = putStrLn $ someFunc 5
#endif

And sure, you should do that... but let's have some fun. I'm going to present three evil techniques to accomplish the same conditional compilation result, and try to point out their relative merits. I encourage others to come up with other ridiculous ideas of their own.

If anyone's curious where I came up with the idea to do this, it was thinking about prerelease GHC patch level releases that break backwards compatibility. And if you want to play around with the code, either open it in FP Haskell Center or clone the Github repo. In all the examples, simply switch whether V1 or V2 is imported to simulate upgrading/downgrading dependencies.

Typeclasses

A well known truth in Haskell is, "for every problem, there's an abuse of typeclasses waiting to be discovered." As far as typeclass abuses go, this one is pretty benign.

{-# LANGUAGE FlexibleInstances #-}
{-# LANGUAGE TypeSynonymInstances #-}
module Typeclass where

import V1
-- import V2

class Evil a where
    evil :: a -> String

instance Evil String where
    evil = id

instance Evil a => Evil (Char -> a) where
    evil f = evil (f 'x')

main :: IO ()
main = putStrLn $ evil $ someFunc 5

What's "nice" about this approach (if anything here is nice) is that everything is compile-time checked, and the code is actually pretty readable. However, as the different cases you want to support get more complicated, you'll need to add in ever harrier language extensions.

Typeable

This approach is for the Pythonista in you. Next time someone proudly states that Haskell is a statically typed language, just pull this one out:

module Typeable where

import Data.Typeable
-- import V1
import V2

evil :: Typeable a => a -> String
evil a
    | Just s <- cast a = s
    | Just f <- cast a = f 'x'
    | otherwise = error "Yay, runtime type errors!"

main :: IO ()
main = putStrLn $ evil $ someFunc 5

Advantage: it's so incredibly trivial to add more cases. Downsides: it's runtime type checking, and the dispatch is performed at runtime, not compile time.

Template Haskell

Of course, no blog post about Haskell abuse would be complete without some usage of Template Haskell. Due to the stage restriction, we have to split this into two files. First, THHelper:

{-# LANGUAGE TemplateHaskell #-}
module THHelper where

import Language.Haskell.TH

evil :: Name -> Q Exp
evil name = do
    VarI _ typ _ _ <- reify name
    case typ of
        AppT (AppT ArrowT (ConT _)) (ConT _) ->
            return $ VarE name
        AppT (AppT ArrowT (ConT _)) (AppT (AppT ArrowT (ConT _)) (ConT _)) ->
            [|flip $(return $ VarE name) 'x'|]
        _ -> error $ "Unknown type: " ++ show typ

Notice how beautiful our pattern matching is. This combines the best (worst) of both worlds from above: we get full compile time checking, and can easily (hah!) pattern match on all possible signatures for the function at hand.

And calling this beast is equally elegant:

{-# LANGUAGE TemplateHaskell #-}
module TH where

import THHelper
import V1
-- import V2

main :: IO ()
main = putStrLn $ $(evil 'someFunc) 5
Categories: Offsite Blogs

Open source projects using postgresql-simple?

Haskell on Reddit - Sun, 06/15/2014 - 7:54am

A while ago, I made a post asking for advice on how to use databases in Haskell. I've played around a bit, and decided that postgresql-simple might be the best way to go. However, I was wondering if there were any examples floating around of projects that have used it in production? Googling didn't turn anything up.

Writing the interface between Haskell and database types seems like it is the sort of thing that you would have to be quite disciplined in doing. If anyone could point me at some code that shows examples of doing so, I would be very grateful!
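
For what it's worth, the interface usually boils down to a record plus FromRow/ToRow instances kept next to the schema they mirror. A hedged sketch against postgresql-simple's documented API (the users table and field names are made up):

{-# LANGUAGE OverloadedStrings #-}

import Control.Applicative ((<$>), (<*>))
import Database.PostgreSQL.Simple
import Database.PostgreSQL.Simple.FromRow

-- Hypothetical table: users (id integer, name text)
data User = User { userId :: Int, userName :: String }
    deriving Show

instance FromRow User where
    fromRow = User <$> field <*> field

fetchUsers :: Connection -> IO [User]
fetchUsers conn = query_ conn "SELECT id, name FROM users"

The discipline is then mostly about keeping such instances in one place so the Haskell types and the table definitions cannot drift apart silently.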

submitted by parrotchute
[link] [9 comments]
Categories: Incoming News

Dominic Steinitz: Gibbs Sampling in Haskell

Planet Haskell - Sun, 06/15/2014 - 6:13am

This is really intended as a draft chapter for our book. Given the diverse natures of the intended audiences, it is probably a bit light on explanation of the Haskell (use of monad transformers) for those with a background in numerical methods. It is hoped that the explanation of the mathematics is adequate for those with a background in Haskell but not necessarily in numerical methods. As always, any feedback is gratefully accepted.

Introduction

Imagine an insect, a grasshopper, trapped on the face of a clock. It wants to visit each hour an equal number of times. However, there is a snag: it can only see the value of the hour it is on and the values of the hours immediately anti-clockwise and immediately clockwise. For example, if it is standing on 5 then it can see the 5, the 4, and the 6 but no others.

It can adopt the following strategy: toss a fair coin and move anti-clockwise for a head and move clockwise for a tail. Intuition tells us that over a large set of moves the grasshopper will visit each hour (approximately) the same number of times.

Can we confirm our intuition somehow? Suppose that the strategy has worked and the grasshopper is now to be found with equal probability on any hour. Then at the last jump, the grasshopper must either have been at the hour before the one it is now on or it must have been at the hour after the one it is now on. Let us denote the probability that the grasshopper is on hour $i$ by $\pi(i)$ and the (conditional) probability that the grasshopper jumps to state $i$ given it was in state $j$ by $p(i \mid j)$. Then we have

$$\pi(i) = p(i \mid i-1)\,\pi(i-1) + p(i \mid i+1)\,\pi(i+1) = \tfrac{1}{2}\pi(i-1) + \tfrac{1}{2}\pi(i+1)$$

Substituting in $\pi(i) = 1/N$, where $N$ is a normalising constant (12 in this case), we obtain

$$\frac{1}{N} = \frac{1}{2}\cdot\frac{1}{N} + \frac{1}{2}\cdot\frac{1}{N}$$

This tells us that the required distribution is a fixed point of the grasshopper’s strategy. But does the strategy actually converge to the fixed point? Let us perform an experiment.

First we import some modules from hmatrix.

> {-# LANGUAGE FlexibleContexts #-}
>
> module Chapter1 where
>
> import Data.Packed.Matrix
> import Numeric.LinearAlgebra.Algorithms
> import Numeric.Container
>
> import Data.Random
> import Control.Monad.State
> import qualified Control.Monad.Writer as W
> import qualified Control.Monad.Loops as ML
> import Data.Random.Source.PureMT

Let us use a clock with 5 hours to make the matrices sufficiently small to fit on one page.

Here is the strategy encoded as a matrix. For example the first row says jump to position 2 with probability 0.5 or jump to position 5 with probability 0.5.

> eqProbsMat :: Matrix Double
> eqProbsMat = (5 >< 5)
>   [ 0.0, 0.5, 0.0, 0.0, 0.5
>   , 0.5, 0.0, 0.5, 0.0, 0.0
>   , 0.0, 0.5, 0.0, 0.5, 0.0
>   , 0.0, 0.0, 0.5, 0.0, 0.5
>   , 0.5, 0.0, 0.0, 0.5, 0.0
>   ]

We suppose the grasshopper starts at 1 o’clock.

> startOnOne :: Matrix Double
> startOnOne = ((1 >< 5) [1.0, 0.0, 0.0, 0.0, 0.0])

If we allow the grasshopper to hop 1000 times then we see that it is equally likely to be found on any hour hand with a 20% probability.

ghci> eqProbsMat
(5><5)
 [ 0.0, 0.5, 0.0, 0.0, 0.5
 , 0.5, 0.0, 0.5, 0.0, 0.0
 , 0.0, 0.5, 0.0, 0.5, 0.0
 , 0.0, 0.0, 0.5, 0.0, 0.5
 , 0.5, 0.0, 0.0, 0.5, 0.0 ]

ghci> take 1 $ drop 1000 $ iterate (<> eqProbsMat) startOnOne
[(1><5)
 [ 0.20000000000000007, 0.2, 0.20000000000000004, 0.20000000000000004, 0.2 ]]

In this particular case, the strategy does indeed converge.

Now suppose the grasshopper wants to visit each hour in proportion to the value of the number on the hour. Lacking pen and paper (and indeed opposable thumbs), it decides to adopt the following strategy: toss a fair coin as in the previous strategy but only move if the proposed number is larger than the one it is standing on; if, on the other hand, the proposed number is smaller, then choose a number at random from between 0 and 1 and move if this value is smaller than the ratio of the proposed hour to the hour on which it is standing, otherwise stay put. For example, if the grasshopper is standing on 5 and gets a tail then it will move to 6, but if it gets a head then four fifths of the time it will move to 4 and one fifth of the time it will stay where it is.

Suppose that the strategy has worked (it is not clear that it has) and the grasshopper is now to be found at 12 o'clock 12 times as often as at 1 o'clock, at 11 o'clock 11 times as often as at 1 o'clock, etc. Then at the last jump, the grasshopper must either have been at the hour before the one it is now on, the hour after the one it is now on, or the same hour it is now on. Let us denote the probability that the grasshopper is on hour $i$ by $\pi(i)$.

Substituting in $\pi(i) = i/N$ at 4, say, we get

$$\frac{4}{N} = \frac{1}{2}\cdot\frac{3}{N} + \frac{1}{2}\cdot\frac{4}{5}\cdot\frac{5}{N} + \frac{1}{2}\cdot\frac{1}{4}\cdot\frac{4}{N}$$

The reader can check that this relationship holds for all other hours. This tells us that the required distribution is a fixed point of the grasshopper’s strategy. But does this strategy actually converge to the fixed point?

Again, let us use a clock with 5 hours to make the matrices sufficiently small to fit on one page.

Here is the strategy encoded as a matrix. For example the first row says jump to position 2 with probability 0.5 or jump to position 5 with probability 0.5.

> incProbsMat :: Matrix Double
> incProbsMat = scale 0.5 $
>   (5 >< 5)
>   [ 0.0,     1.0,     0.0,     0.0,     1.0
>   , 1.0/2.0, 1.0/2.0, 1.0,     0.0,     0.0
>   , 0.0,     2.0/3.0, 1.0/3.0, 1.0,     0.0
>   , 0.0,     0.0,     3.0/4.0, 1.0/4.0, 1.0
>   , 1.0/5.0, 0.0,     0.0,     4.0/5.0, 1.0/5.0 + 4.0/5.0
>   ]

We suppose the grasshopper starts at 1 o’clock.

If we allow the grasshopper to hop 1000 times then we see that it is to be found on hour $i$ with a probability of $i$ times the probability of being found on hour 1.

ghci> incProbsMat
(5><5)
 [ 0.0, 0.5, 0.0, 0.0, 0.5
 , 0.25, 0.25, 0.5, 0.0, 0.0
 , 0.0, 0.3333333333333333, 0.16666666666666666, 0.5, 0.0
 , 0.0, 0.0, 0.375, 0.125, 0.5
 , 0.1, 0.0, 0.0, 0.4, 0.5 ]

ghci> take 1 $ drop 1000 $ iterate (<> incProbsMat) startOnOne
[(1><5)
 [ 6.666666666666665e-2, 0.1333333333333333, 0.19999999999999996, 0.2666666666666666, 0.33333333333333326 ]]

In this particular case, the strategy does indeed converge.

Surprisingly, this strategy produces the desired result and is known as the Metropolis Algorithm. What the grasshopper has done is to construct a (discrete) Markov Process which has a limiting distribution (the stationary distribution) with the desired feature: sampling from this process will result in each hour being sampled in proportion to its value.
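
To make the recipe concrete, here is a minimal sketch of a single step of the walk (not taken from the chapter; metropolisStep and weight are invented names for this illustration):

-- One Metropolis step on an n-hour clock: propose a neighbouring hour
-- (clockwise for a tail, anti-clockwise for a head), then accept the move
-- with probability min 1 (weight proposed / weight current), given a
-- uniform draw u from [0, 1).
metropolisStep :: Int -> (Int -> Double) -> Int -> (Bool, Double) -> Int
metropolisStep n weight current (tails, u) =
    if u < min 1 (weight proposed / weight current) then proposed else current
  where
    proposed | tails     = current `mod` n + 1
             | otherwise = (current - 2) `mod` n + 1

With weight = const 1 this is the first strategy (every proposal is accepted); with weight = fromIntegral it is the second, accepting a move down from 5 to 4 only four fifths of the time.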

Markov Chain Theory

Let us examine what is happening in a bit more detail.

The grasshopper has started with a very simple Markov Chain: one which jumps clockwise or anti-clockwise with equal probability and then modified it. But what is a Markov Chain?

A time homogeneous Markov chain is a countable sequence of random variables $X_0, X_1, \ldots$ such that

$$\mathbb{P}(X_{n+1} = j \mid X_0 = i_0, X_1 = i_1, \ldots, X_n = i) = \mathbb{P}(X_{n+1} = j \mid X_n = i)$$

We sometimes say that a Markov Chain is a discrete time stochastic process with the above property.

So the very simple Markov Chain can be described by

$$q(i \mid j) = \begin{cases} \tfrac{1}{2} & \text{if } i = j + 1 \bmod N \text{ or } i = j - 1 \bmod N \\ 0 & \text{otherwise} \end{cases}$$

The grasshopper knows that $\pi(i)/\pi(j) = i/j$ so it can calculate this ratio without knowing the normalising constant $N$. This is important because now, without knowing $N$, the grasshopper can evaluate

$$p(i \mid j) = q(i \mid j)\left(\frac{\pi(i)}{\pi(j)} \wedge 1\right)$$

where $\wedge$ takes the minimum of its arguments. Simplifying the above by substituting in the grasshopper's probabilities and noting that $q(i \mid j)$ is a somewhat obscure way of saying jump clockwise or anti-clockwise we obtain

$$p(i \mid j) = \frac{1}{2}\left(\frac{i}{j} \wedge 1\right) \quad \text{for } i = j \pm 1 \bmod N$$

The Ergodic Theorem

In most studies of Markov chains, one is interested in whether a chain has a stationary distribution. What we wish to do is take a distribution and create a chain with this distribution as its stationary distribution. We will still need to show that our chain does indeed have the correct stationary distribution and we state the relevant theorem somewhat informally and with no proof.

Theorem

An irreducible, aperiodic and positive recurrent Markov chain has a unique stationary distribution.

Roughly speaking

  • Irreducible means it is possible to get from any state to any other state.

  • Aperiodic means that returning to a state having started at that state occurs at irregular times.

  • Positive recurrent means that the expected time of first returning to a state is finite (for every state and, more pedantically, except on sets of null measure).

Note that the last condition is required when the state space is infinite – see Skrikant‘s lecture notes for an example and also for a more formal definition of the theorem and its proof.

Algorithm

Let $\pi$ be a probability distribution on the state space $\Omega$ with $\pi(i) > 0$ for all $i \in \Omega$ and let $Q$ be an ergodic Markov chain on $\Omega$ with transition probabilities $q(j \mid i) > 0$ (the latter condition is slightly stronger than it need be but we will not need fully general conditions).

Create a new (ergodic) Markov chain $P$ with transition probabilities

$$p(j \mid i) = \begin{cases} q(j \mid i)\left(1 \wedge \dfrac{\pi(j)\, q(i \mid j)}{\pi(i)\, q(j \mid i)}\right) & \text{if } j \neq i \\[1ex] 1 - \sum_{k \neq i} p(k \mid i) & \text{if } j = i \end{cases}$$

where $\wedge$ takes the minimum of its arguments.

Calculate the value of interest on the state space, e.g. the total magnetization, for each step produced by this new chain.

Repeat a sufficiently large number of times and take the average. This gives the estimate of the value of interest.

Convergence

Let us first note that the Markov chain produced by this algorithm almost trivially satisfies the detailed balance condition, for example,

$$\pi(i)\, q(j \mid i)\left(1 \wedge \frac{\pi(j)\, q(i \mid j)}{\pi(i)\, q(j \mid i)}\right) = \pi(i)\, q(j \mid i) \wedge \pi(j)\, q(i \mid j) = \pi(j)\, q(i \mid j)\left(1 \wedge \frac{\pi(i)\, q(j \mid i)}{\pi(j)\, q(i \mid j)}\right)$$

Secondly, since we have specified that $Q$ is ergodic, then clearly $P$ is also ergodic (all the transition probabilities are $> 0$).

So we know the algorithm will converge to the unique distribution we specified to provide estimates of values of interest.

Gibbs Sampling

Random Scan

For simplicity let us consider a model with two parameters $\theta_1$ and $\theta_2$, and suppose that we sample either parameter with equal probability. In this sampler, we update the parameters in a single step.

The transition density kernel is then given by

$$q\big((\theta_1, \theta_2), (\theta_1', \theta_2')\big) = \tfrac{1}{2}\,\pi(\theta_1' \mid \theta_2)\,\delta(\theta_2 - \theta_2') + \tfrac{1}{2}\,\pi(\theta_2' \mid \theta_1)\,\delta(\theta_1 - \theta_1')$$

where $\delta$ is the Dirac delta function.

Detailed balance

This sampling scheme satisfies the detailed balance condition. We have

$$\pi(\theta_1, \theta_2)\,\tfrac{1}{2}\,\pi(\theta_1' \mid \theta_2) = \tfrac{1}{2}\,\pi(\theta_2)\,\pi(\theta_1 \mid \theta_2)\,\pi(\theta_1' \mid \theta_2) = \pi(\theta_1', \theta_2)\,\tfrac{1}{2}\,\pi(\theta_1 \mid \theta_2)$$

In other words

$$\pi(\theta_1, \theta_2)\, q\big((\theta_1, \theta_2), (\theta_1', \theta_2')\big) = \pi(\theta_1', \theta_2')\, q\big((\theta_1', \theta_2'), (\theta_1, \theta_2)\big)$$

Hand waving slightly, we can see that this scheme satisfies the premises of the ergodic theorem and so we can conclude that there is a unique stationary distribution and $\pi$ must be that distribution.

Systematic Scan

Most references on Gibbs sampling do not describe the random scan but instead something called a systematic scan.

Again for simplicity let us consider a model with two parameters. In this sampler, we update the parameters in two steps.

We observe that this is not time-homogeneous; at each step the transition matrix flips between the two transition matrices given by the individual steps. Thus although, as we show below, each individual transition satisfies the detailed balance condition, we cannot apply the ergodic theorem as it only applies to time-homogeneous processes.

The transition density kernel is then given by

$$q\big((\theta_1, \theta_2), (\theta_1', \theta_2')\big) = q_1\big((\theta_1, \theta_2), (\theta_1', \theta_2)\big)\, q_2\big((\theta_1', \theta_2), (\theta_1', \theta_2')\big)$$

where $q_1\big((\theta_1, \theta_2), (\theta_1', \theta_2')\big) = \pi(\theta_1' \mid \theta_2)\,\delta(\theta_2 - \theta_2')$ and $q_2\big((\theta_1, \theta_2), (\theta_1', \theta_2')\big) = \pi(\theta_2' \mid \theta_1)\,\delta(\theta_1 - \theta_1')$.

Thus

$$q\big((\theta_1, \theta_2), (\theta_1', \theta_2')\big) = \pi(\theta_1' \mid \theta_2)\,\pi(\theta_2' \mid \theta_1')$$

Detailed balance

Suppose that we have two states $(\theta_1, \theta_2)$ and $(\theta_1', \theta_2')$ and that $\theta_2 \neq \theta_2'$. Then $q_1\big((\theta_1, \theta_2), (\theta_1', \theta_2')\big) = 0$. Trivially we have

$$\pi(\theta_1, \theta_2)\, q_1\big((\theta_1, \theta_2), (\theta_1', \theta_2')\big) = 0 = \pi(\theta_1', \theta_2')\, q_1\big((\theta_1', \theta_2'), (\theta_1, \theta_2)\big)$$

Now suppose that $\theta_2 = \theta_2'$

$$\pi(\theta_1, \theta_2)\, q_1\big((\theta_1, \theta_2), (\theta_1', \theta_2)\big) = \pi(\theta_1 \mid \theta_2)\,\pi(\theta_2)\,\pi(\theta_1' \mid \theta_2) = \pi(\theta_1', \theta_2)\,\pi(\theta_1 \mid \theta_2)$$

So again we have

$$\pi(\theta_1, \theta_2)\, q_1\big((\theta_1, \theta_2), (\theta_1', \theta_2)\big) = \pi(\theta_1', \theta_2)\, q_1\big((\theta_1', \theta_2), (\theta_1, \theta_2)\big)$$

Similarly we can show

$$\pi(\theta_1, \theta_2)\, q_2\big((\theta_1, \theta_2), (\theta_1, \theta_2')\big) = \pi(\theta_1, \theta_2')\, q_2\big((\theta_1, \theta_2'), (\theta_1, \theta_2)\big)$$

But note that

$$\pi(\theta_1, \theta_2)\, q\big((\theta_1, \theta_2), (\theta_1', \theta_2')\big) = \pi(\theta_1, \theta_2)\,\pi(\theta_1' \mid \theta_2)\,\pi(\theta_2' \mid \theta_1')$$

whereas

$$\pi(\theta_1', \theta_2')\, q\big((\theta_1', \theta_2'), (\theta_1, \theta_2)\big) = \pi(\theta_1', \theta_2')\,\pi(\theta_1 \mid \theta_2')\,\pi(\theta_2 \mid \theta_1)$$

and these are not necessarily equal.

So the detailed balance equation is not satisfied, another sign that we cannot appeal to the ergodic theorem.

An Example: The Bivariate Normal

Let us demonstrate the Gibbs sampler with a distribution which we actually know: the bivariate normal.

The conditional distributions are easily calculated to be

$$\theta_1 \mid \theta_2 \sim \mathcal{N}\big(y_1 + \rho(\theta_2 - y_2),\, 1 - \rho^2\big) \qquad \theta_2 \mid \theta_1 \sim \mathcal{N}\big(y_2 + \rho(\theta_1 - y_1),\, 1 - \rho^2\big)$$

Let’s take a correlation of 0.8, a data point of (0.0, 0.0) and start the chain at (2.5, 2.5).

> rho :: Double
> rho = 0.8
>
> y :: (Double, Double)
> y = (0.0, 0.0)
>
> y1, y2 :: Double
> y1 = fst y
> y2 = snd y
>
> initTheta :: (Double, Double)
> initTheta = (2.5, 2.5)

We pre-calculate the variance needed for the sampler.

> var :: Double
> var = 1.0 - rho^2

In Haskell and in the random-fu package, sampling from probability distributions is implemented as a monad. We sample from the relevant normal distributions and keep the trajectory using a writer monad.

> gibbsSampler :: Double -> RVarT (W.Writer [(Double,Double)]) Double
> gibbsSampler oldTheta2 = do
>   newTheta1 <- rvarT (Normal (y1 + rho * (oldTheta2 - y2)) var)
>   lift $ W.tell [(newTheta1, oldTheta2)]
>   newTheta2 <- rvarT (Normal (y2 + rho * (newTheta1 - y1)) var)
>   lift $ W.tell [(newTheta1, newTheta2)]
>   return $ newTheta2

It is common to allow the chain to “burn in” so as to “forget” its starting position. We arbitrarily burn in for 10,000 steps.

> burnIn :: Int
> burnIn = 10000

We sample repeatedly from the sampler using the monadic form of iterate. Running the monadic stack is slightly noisy but nonetheless straightforward. We use mersenne-random-pure64 (albeit indirectly via random-source) as our source of entropy.

> runMCMC :: Int -> [(Double, Double)]
> runMCMC n =
>   take n $
>   drop burnIn $
>   snd $
>   W.runWriter (evalStateT (sample (ML.iterateM_ gibbsSampler (snd initTheta))) (pureMT 2))

We can look at the trajectory of our sampler for various run lengths.

For bigger sample sizes, plotting the distribution sampled re-assures us that we are indeed sampling from a bivariate normal distribution as the theory predicted.

Applications to Bayesian Statistics

Some of what is here and here excluding JAGS and STAN (after all this is a book about Haskell).

Applications to Physics

Most of what is here.


Categories: Offsite Blogs

A failed attempt at using an indexed monad for one-time-use values

Haskell on Reddit - Sat, 06/14/2014 - 10:13pm

Suppose for a given type T you had

  • an action m T to produce values of type T
  • a function T -> m () to consume values of type T

And you wanted to ensure that every produced value was consumed once and only once.

This isn't possible with a standard monad, but I think it should be possible with an indexed monad.
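
For reference, the shape of an indexed (parameterised) monad the post relies on looks roughly like this; the class below is a generic sketch, not the poster's actual Unique type:

-- A computation of type m i j a starts with type-level index i and ends
-- with index j, so the indices can record resources that have been
-- produced but not yet consumed.
class IxMonad m where
    ireturn :: a -> m i i a
    ibind   :: m i j a -> (a -> m j k b) -> m i k b

produce and consume then pick their indices so that a matching pair cancels out when chained with ibind; the difficulty described below is getting that cancellation (the ++ and \\ type families in the error) to reduce when produces and consumes interleave.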

Here's a gist of my current attempt. It doesn't quite work.

Inspired by ST, I tried to tag every produced value with a unique type:

data Value uid a

produce :: a -> Unique '[] '[uid] (Value uid a)
consume :: Value uid a -> Unique '[uid] '[] a

And to use two type-level lists to track

  • the uids of Values that an action consumes, but doesn't produce
  • the uids of Values that an action produces, but doesn't consume

I can get it to compile some actions:

pcPC = do
    x <- produce "hello"
    consume x
    y <- produce (3 :: Int)
    consume y
    return ()

pPCc = do
    x <- produce "hello"
    y <- produce (3 :: Int)
    consume y
    consume x
    return ()

But it's unable to handle full interleaving, as actions like the following don't compile:

pPcC = do
    x <- produce "hello"
    y <- produce (3 :: Int)
    consume x
    consume y
    return ()

That's got a huge mess of a compiler error, which is much more manageable if I skip the initial produce:

interleavedConsume :: Value uid a -> Unique '[uid] '[] a
interleavedConsume x = do
    y <- produce ()
    a <- consume x
    consume y
    return a

which gives the error (using ghc-7.8.2):

Couldn't match type ‘Delete uid0 '[uid, uid0]’ with ‘'[uid]’
The type variable ‘uid0’ is ambiguous
Expected type: Unique '[] '[uid0] (Value uid0 ())
               -> (Value uid0 () -> Unique '[uid, uid0] '[] a)
               -> Unique '[uid] '[] a
  Actual type: Unique '[] '[uid0] (Value uid0 ())
               -> (Value uid0 () -> Unique '[uid, uid0] '[] a)
               -> Unique ('[] ++ ('[uid, uid0] \\ '[uid0])) (('[uid0] \\ '[uid, uid0]) ++ '[]) a

Thinking about it, I realize that this is because the compiler doesn't know it can't unify uid0 with uid. I can use type equality for these anonymous types, but I can't rely on inequality.

I'm thinking I may try replacing the anonymous types with type-level Nats, but before I pound my head against the wall too long, I thought I might check in here to see if anyone had any bright ideas.

submitted by rampion
[link] [7 comments]
Categories: Incoming News

Proposal: Use (.) and id from Control.Category in Prelude

libraries list - Sat, 06/14/2014 - 9:15pm
(.) and id are generalized in the Category class in Control.Category. In the past, many libraries have instructed users to use (.) and id from Control.Category for composition of their types. Importing these types into Prelude shouldn't break existing code, and makes libraries that use Category easier to use. Discussion period: 4 weeks
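
For reference, the class being proposed for re-export looks roughly like this (a sketch rather than a verbatim copy of base), with plain functions as the instance everyone already uses:

{-# LANGUAGE NoImplicitPrelude #-}

import qualified Prelude

class Category cat where
    id  :: cat a a
    (.) :: cat b c -> cat a b -> cat a c

-- Ordinary functions: the Prelude's current (.) and id are this instance.
instance Category (->) where
    id  = Prelude.id
    (.) = (Prelude..)

Kleisli arrows and many arrow-like types in other libraries are further instances, which is why those libraries point users at Control.Category today.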
Categories: Offsite Discussion

setlocale maintainership

libraries list - Sat, 06/14/2014 - 8:30pm
Hello list.

http://hackage.haskell.org/package/setlocale does not compile with ghc-7.8 because it has a manually written Typeable instance. I emailed the author about this one month ago (May 15), but I have received no reply. Let us wait until September 1 2014, and then could I be allowed to upload a corrected version?

Regards, Adam
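
For context, GHC 7.8 rejects hand-written Typeable instances outright, so the fix is usually a one-liner: delete the manual instance and derive it. A sketch with a made-up type, not setlocale's actual code:

{-# LANGUAGE DeriveDataTypeable #-}

import Data.Typeable (Typeable)

-- Hypothetical stand-in for the offending type: instead of writing
-- "instance Typeable ..." by hand, let GHC derive it.
data LocaleCategory = LCAll | LCCollate | LCCType
    deriving (Show, Typeable)
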
Categories: Offsite Discussion

RFC: Unicode primes and super/subscript characters in GHC

glasgow-user - Sat, 06/14/2014 - 3:48pm
Hello lists,

As some of you may know, GHC's support for Unicode characters in lexemes is rather crude and hence prone to inconsistencies in their handling versus the ASCII counterparts. For example, APOSTROPHE is treated differently from PRIME:

λ> data a +' b = Plus a b

<interactive>:3:9:
    Unexpected type ‘b’
    In the data declaration for ‘+’
    A data declaration should have form
      data + a b c = ...

λ> data a +′ b = Plus a b
λ> let a' = 1
λ> let a′ = 1

<interactive>:10:8: parse error on input ‘=’

Also some rather bizarre looking things are accepted:

λ> let ᵤxᵤy = 1

In the spirit of improving things little by little I would like to propose:

1. Handle single/double/triple/quadruple Unicode PRIMEs the same way as APOSTROPHE, meaning the following alterations to the lexer:

primes -> U+2032 | U+2033 | U+2034 | U+2057
symbol -> ascSymbol | uniSymbol (EXCEPT special | _ | " | ' | primes)
graphic -> small | large | symbol | digit | special | " | ' | primes
varid -
Categories: Offsite Discussion

How do you learn to write systems in Haskell? (and a story)

Haskell on Reddit - Sat, 06/14/2014 - 11:46am

Last night I decided to start porting a daemon I'd written in Go to Haskell. The first step is splitting a streaming HTTP response into lines, so I decided to do that and print them out individually. I threw all the parts onto the floor and started tinkering.

I found a snippet for getting an HTTP stream:

req <- parseUrl "http://www.example.com/"
withManager tlsManagerSettings $ \m ->
    withHTTP req m $ \resp -> do
        runEffect $ responseBody resp >-> PB.stdout

I found a function for turning a binary stream into text:

Pipes.Text.Encoding.decodeUtf8

I found a function for splitting text into lines:

Pipes.Text.lines

... And then I spent several hours trying to fit those three things together. And I never got it! How is that possible?

  • I read the documentation. Pipes claims to have second-to-none documentation, though Pipes.Text doesn't. :P
  • I asked questions on IRC. It took some discussion but someone (Cale?) figured out how to connect the first two things together with "fmap (const ())". They didn't know the API, but they guessed that that was what I needed from the error message. I'm not sure how yet, but I was assured it was idiomatic. :) I got the sense from the documentation that the fix was throwing out some of the error handling, but I am not sure.
  • I asked my IRL Haskell friend for help. The compiler error messages puzzled both of us. His major contribution was recommending I use Pipes instead of Conduit, but it seemed just as opaque to me, and it didn't seem to help us debug.

But after all of this, what I'm puzzled by is how the gulf between what I know and what I need to know could be so vast. I was trying to connect three things using a library whose purpose is connecting things. I've even written tools to help me fit things together. I had all the pieces, and I'd read the documentation for every library I was using. I spent most of my time reading documentation.

I don't understand how any programming task this simple can be this difficult for me. I've been trying to learn Haskell for so long. Periodically I get frustrated and quit, and when I come back, there's some big new thing that I need to wrap my mind around to use any of the modern libraries (monad transformers, Applicative, lenses, etc). That's great! I feel like there's a sense in which the language is getting more usable all the time, but... I have yet to benefit.

So, friends, how the hell do you learn to write systems in Haskell? I really want to know the answer. Diving into interesting problems, following the types, and phoning a friend just burns me out. I need your meta advice!

submitted by AllTom
[link] [56 comments]
Categories: Incoming News

Thunks.

Haskell on Reddit - Sat, 06/14/2014 - 4:40am
Categories: Incoming News

Typical arguments against Haskell

Haskell on Reddit - Sat, 06/14/2014 - 4:17am

A list of the most typical arguments against Haskell in roughly chronological order (as I perceive them):

  • 1. FP doesn't work

FP can't be used to solve real problems!

  • 2. No libraries

Ok, so FP may work -- but there aren't enough libraries!

  • 3. No IDE support

Ok, so there are libraries for most things -- but there's no IDE support!

  • 4. No programmers

Ok, there is some IDE support -- but there are no programmers!

  • 5. ??

Ok, there are programmers -- but ...

submitted by togrof
[link] [14 comments]
Categories: Incoming News