# News aggregator

### multithreading - How difficult is Haskell multi-threading? - Stack Overflow

### Benchmarking with Criterion, 'scanl1 (+)' appears 7 times faster than 'sum' ?

Hi,

I am trying out Criterion to make some quick measurements, here I compare scanl1 (+) rand with sum rand for a random list, and the former takes much less time than the latter, even though it should be more work.

If I try last . scanl1 (+) to obtain a value equal to sum rand, I get a similar time.

One point which may be unclear to me is the description of nf in Criterion.Main:

Apply an argument to a function, and evaluate the result to head normal form (NF).

I'm not sure what "head normal form" means in the context of Haskell, particularly when there are constructors, but the comments at the top of the page seem to indicate that nf would fully evaluate the list.

What am I missing?
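As a quick sanity check of the identity the comparison relies on (the helper name `runningSums` and the small sample list below are made up for illustration; the original benchmarks a random list), the last element of the running sums produced by `scanl1 (+)` equals `sum`:

```haskell
-- The last element of the running sums equals the total sum.
runningSums :: [Double] -> [Double]
runningSums = scanl1 (+)

main :: IO ()
main = do
  print (runningSums [1.0, 2.5, 3.5])                               -- [1.0,3.5,7.0]
  print (last (runningSums [1.0, 2.5, 3.5]) == sum [1.0, 2.5, 3.5]) -- True
```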

submitted by Syrak

### Dominic Steinitz: Student’s T and Space Leaks

The other speaker at the Machine Learning Meetup at which I gave my talk on automatic differentiation gave a very interesting talk on A/B testing. Apparently this is big business these days, as attested by the fact that I got 3 ads above the Wikipedia entry when I googled for it.

It seems that people tend to test with small sample sizes and to do so very often, resulting in spurious results. Of course readers of XKCD will be well aware of some of the pitfalls.

I thought a Bayesian approach might circumvent some of the problems and set out to write a blog article, only to discover that there was no Haskell library for sampling from Student’s t. Actually there was one, but it is currently an unreleased part of random-fu. So I set about fixing this shortfall.

I thought I had better run a few tests so I calculated the sampled mean, variance, skewness and kurtosis.

I wasn’t really giving this my full attention and as a result ran into a few problems with space. I thought these were worth sharing and that is what this blog post is about. Hopefully, I will have time soon to actually blog about the Bayesian equivalent of A/B testing.

**Preamble**

```haskell
{-# OPTIONS_GHC -Wall #-}
{-# OPTIONS_GHC -fno-warn-name-shadowing #-}
{-# OPTIONS_GHC -fno-warn-type-defaults #-}
{-# OPTIONS_GHC -fno-warn-unused-do-bind #-}
{-# OPTIONS_GHC -fno-warn-missing-methods #-}
{-# OPTIONS_GHC -fno-warn-orphans #-}

{-# LANGUAGE NoMonomorphismRestriction #-}

module StudentTest (
    main
  ) where

import qualified Data.Vector.Unboxed as V
import Data.Random.Source.PureMT
import Data.Random
import Data.Random.Distribution.T
import Control.Monad.State
import Data.Histogram.Fill
import Data.Histogram.Generic ( Histogram )
import Data.List
```

**Space Analysis**

Let’s create a reasonable number of samples as the higher moments converge quite slowly.

```haskell
nSamples :: Int
nSamples = 1000000
```

An arbitrary seed for creating the samples.

```haskell
arbSeed :: Int
arbSeed = 8
```

Student’s t only has one parameter, the number of degrees of freedom.

```haskell
nu :: Integer
nu = 6
```

Now we can do our tests by calculating the sampled values.

```haskell
ts :: [Double]
ts = evalState (replicateM nSamples (sample (T nu)))
               (pureMT $ fromIntegral arbSeed)

mean, variance, skewness, kurtosis :: Double
mean     = (sum ts) / fromIntegral nSamples
variance = (sum (map (**2) ts)) / fromIntegral nSamples
skewness = (sum (map (**3) ts) / fromIntegral nSamples) / variance**1.5
kurtosis = (sum (map (**4) ts) / fromIntegral nSamples) / variance**2
```

This works fine for small sample sizes but not for the number we have chosen.

```
./StudentTest +RTS -hc
Stack space overflow: current size 8388608 bytes.
Use `+RTS -Ksize -RTS' to increase it.
```

It seems a shame that the function in the Prelude has this behaviour, but never mind; let us ensure that we consume values strictly (they are being produced lazily).

```haskell
mean'     = (foldl' (+) 0 ts) / fromIntegral nSamples
variance' = (foldl' (+) 0 (map (**2) ts)) / fromIntegral nSamples
skewness' = (foldl' (+) 0 (map (**3) ts) / fromIntegral nSamples) / variance'**1.5
kurtosis' = (foldl' (+) 0 (map (**4) ts) / fromIntegral nSamples) / variance'**2
```

We now have a space leak on the heap, as the GHC heap profiler shows. What went wrong?

If we only calculate the mean using foldl' then all is well. Instead of 35M we only use 45K.

Well, that gives us a clue. The garbage collector cannot reclaim the samples as they are needed for the other calculations. What we need to do is calculate all the moments together, strictly, in a single pass.

Let’s create a strict record to do this.

```haskell
data Moments = Moments { m1 :: !Double
                       , m2 :: !Double
                       , m3 :: !Double
                       , m4 :: !Double
                       }
  deriving Show
```

And calculate the results strictly.

```haskell
m = foldl' (\m x -> Moments { m1 = m1 m + x
                            , m2 = m2 m + x**2
                            , m3 = m3 m + x**3
                            , m4 = m4 m + x**4
                            }) (Moments 0.0 0.0 0.0 0.0) ts

mean''     = m1 m / fromIntegral nSamples
variance'' = m2 m / fromIntegral nSamples
skewness'' = (m3 m / fromIntegral nSamples) / variance''**1.5
kurtosis'' = (m4 m / fromIntegral nSamples) / variance''**2
```

Now we have what we want; the program runs in small constant space.

```haskell
main :: IO ()
main = do
  putStrLn $ show mean''
  putStrLn $ show variance''
  putStrLn $ show skewness''
  putStrLn $ show kurtosis''
```

Oh, and the moments give the expected answers.

```
ghci> mean''
3.9298418844289093e-4
ghci> variance''
1.4962681916693004
ghci> skewness''
1.0113188204317015e-2
ghci> kurtosis''
5.661776268997382
```

**Running the Code**

To run this you will need my version of random-fu. The code for this article is here. You will need to compile everything with profiling, something like

```
ghc -O2 -main-is StudentTest StudentTest.lhs -prof -package-db=.cabal-sandbox/x86_64-osx-ghc-7.6.2-packages.conf.d
```

Since you need all the packages to be built with profiling, you will probably want to build using a sandbox as above. The only slightly tricky aspect is building random-fu so that it is in your sandbox.

```
runghc Setup.lhs configure --enable-library-profiling --package-db=/HasBayes/.cabal-sandbox/x86_64-osx-ghc-7.6.2-packages.conf.d --libdir=/HasBayes/.cabal-sandbox/lib
```

### Douglas M. Auclair (geophf): 'B' is for Bayesian Classifiers

Bayes was this little guy, way back when who happened to notice something that most people just naturally assume to be true. It's not, but people assume this.

And this observation that Bayes noticed was this.

"Hey, look, guys, something seems to happen because of everything else that's happening!"

Stunning revelation, isn't it?

But then he, unlike everybody else at that time (prior and even unto now), codified that belief.

('Belief.' Heh. The Bayesian network is also called the belief network).

He said, the likelihood of something is dependent on the likelihood of all the other factors you've observed.

So, for example, the probability of you picking the bracket perfectly is a (more than a few) billion (plus) to your one guess. Double your odds? Guess twice.

Yeah, right.

But I digress.

As usual.

So, formulized, Bayes said

p(x | X) = p(a | x) * p(b | x) * p(c | x) ... * p(z | x)

Or, that is:

The probability of x in the set of X's happening (like the probability of the Red Sox winning this game), is equivalent to the set of probabilities of everything else happening, given that the Red Sox won before.

So, was it raining when the Red Sox won?

Was it Tuesday when the Red Sox won?

Did you wear your lucky shirt when the Red Sox won?

You take all that historical data, compute the probabilities of each, multiply them all together, and you come up with some probability.

Then, you do the same for all cases (Red Sox winning is one case, Red Sox losing is what we don't even want to consider, but it is a case, too, so just deal with that ... for eighty-six years), and the case with the highest probable outcome is ... well, the one most likely to occur.
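The procedure described above (multiply each case's feature probabilities together, pick the case with the highest product) fits in a few lines of Haskell. The cases and probabilities below are made up purely for illustration:

```haskell
import Data.List (maximumBy)
import Data.Ord (comparing)

-- Each case carries made-up conditional probabilities for the
-- observed features: was it raining, was it Tuesday, lucky shirt.
cases :: [(String, [Double])]
cases =
  [ ("win",  [0.3, 0.2, 0.6])
  , ("lose", [0.5, 0.1, 0.2])
  ]

-- Multiply each case's feature probabilities and pick the case
-- with the highest product.
classify :: [(String, [Double])] -> String
classify = fst . maximumBy (comparing (product . snd))

main :: IO ()
main = print (classify cases)   -- "win", since 0.036 beats 0.01
```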

... according to Bayes.

How does this work in practice?

Well, pretty darn good, but that's dependent upon what other systems you're comparing it against and what you're using it for.

For example (or 'fer realz'), a Bayesian classifier we built against a transactional system that ran one hundred thousand transactions per day was able to sniff out more than ninety-nine percent of the transactions that were ... 'fishy': erroneous or malicious. Compared to what other system? A rule-based system made up of rules composed by subject-matter experts and analysts familiar with the transactions.

Another for example, a little Bayesian classifier I built to pick winning football teams worked ... pretty well. We had ten or so members of that pool, and I walked away a winner three or four times. Not stellar, but not bad; not bad at all.

The beauty of a Bayesian classifier is that it's simplicity itself to implement. The probabilities run away from you on a computer, because multiplying small probabilities will get you into the IEEE math error zone so quickly your head will spin, and your system gets all weirded out. But instead of multiplying probabilities, just use a little bit of seventh grade math (in my day) and simply sum their logarithms:

p(x | X) = p(a | x) * p(b | x) * p(c | x) ...

can be re-represented as:

log p(x | X) = log p(a | x) + log p(b | x) + log p(c | x) ...

And this way we (greatly) reduce going off into la-la land for your ALU.
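To see the trick in numbers, here is a small made-up example: a thousand features each with likelihood 1e-4 underflow a Double to zero when multiplied directly, while the sum of logarithms stays comfortably representable:

```haskell
-- Hypothetical per-feature likelihoods, all small.
probs :: [Double]
probs = replicate 1000 1.0e-4

-- Multiplying directly underflows Double to 0.0 ...
naive :: Double
naive = product probs

-- ... but summing logarithms keeps the score in range.
logScore :: Double
logScore = sum (map log probs)

main :: IO ()
main = do
  print naive      -- 0.0
  print logScore   -- roughly -9210.34
```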

### A cool paper I found on "Functional Programming and 3D Games". It discusses everything from collision detection to physics and texturing. [PDF]

### Adventures in Hoogling

### Lee Pike: Embedded Security in the Mainstream

Science Friday had an interesting podcast with Bruce Schneier worth a listen. It discussed security in embedded systems (or as is hip to say now, *the Internet of Things*). The idea of security in embedded systems seemed pretty obscure just a few years ago, so it’s nice to see it being featured on a general-audience radio show.

So who’s going to listen to it from their phone, connected to their car’s audio over Bluetooth, on the way to the store to buy a Nest smoke alarm?

### valderman/haste-compiler

### Neil Mitchell: Exceptional Testing

*Summary: Failing properties should throw exceptions, not return False.*

When testing properties in Haskell with QuickCheck we usually write a predicate that takes some arguments, and returns a boolean. For example:

```haskell
import Test.QuickCheck

main = quickCheck $ \x -> exp (log (abs x)) == abs (x :: Double)
```

Here we are checking that for all positive Double values, applying log then exp is the identity. This statement is incorrect for Double due to floating point errors. Running main we get:

```
*** Failed! Falsifiable (after 6 tests and 2 shrinks):
3.0
```

QuickCheck is an incredibly powerful tool - it first found a failing example, and then automatically simplified it. However, it's left out some important information - what is the value of exp (log 3)? For my tests I usually define === and use it instead of ==:

```haskell
a === b | a == b    = True
        | otherwise = error $ show a ++ " /= " ++ show b
```

Now when running main we get:

```
*** Failed! Exception: '3.0000000000000004 /= 3.0' (after 2 tests and 2 shrinks):
3.0
```

We can immediately see the magnitude of the error introduced, giving a kick-start to debugging.

**Do we still need Bool?**

When writing functions returning Bool there are three interesting cases:

- The function returns True, the test passes, continue on.
- The function returns False, the test fails, with no information beyond the input arguments.
- The function throws an exception, the test fails, but with a human readable error message about the point of failure.

Of these, returning False is the least useful, and entirely subsumed by exceptions. It is likely there was additional information available before reducing to False, which has now been lost, and must first be recovered before debugging can start.

Given that the only interesting values are True and exception, we can switch to using the () type, where passing tests return () and failing tests throw an exception. However, working with exceptions in pure code is a bit ugly, so I typically define:

```haskell
import Control.Monad

(===) :: (Show a, Eq a) => a -> a -> IO ()
a === b = when (a /= b) $ error $ show a ++ " /= " ++ show b
```

Now === is an action, and passing tests do nothing, while failing tests raise an error. This definition forces all tests to end up in IO, which is terrible for "real" Haskell code, where pure and IO computations should be segregated. However, for tests, as long as the test is repeatable (doesn't store some temporary state) then I don't worry.

To test the IO () returning property with QuickCheck we can define:

```haskell
instance Testable () where
    property () = property True

instance Testable a => Testable (IO a) where
    property = property . unsafePerformIO
```

Now QuickCheck can work with this === definition, but also any IO property. I have found that testing IO properties with QuickCheck is very valuable, even without using ===.

**Non-QuickCheck tests**

Whilst I have argued for exceptions in the context of QuickCheck tests, I use the same exception pattern for non-parameterised/non-QuickCheck assertions. As an example, taken from Shake:

```haskell
test = do
    dropDirectory1 "aaa/bbb" === "bbb"
    dropDirectory1 "aaa/" === ""
    dropDirectory1 "aaa" === ""
    dropDirectory1 "" === ""
```

Using IO () properties allows for trivial sequencing. As a result, the tests are concise, and the effort to add additional tests is low.

**(===) in QuickCheck**

In QuickCheck-2.7 the === operator was introduced (which caused name clashes in both Shake and Hoogle - new versions of both have been released). The === operator uses the Property data type in QuickCheck to pass both the Bool value and additional information together. I prefer my definition of === because it's simpler, doesn't rely on QuickCheck and makes it clear how to pass additional information beyond just the arguments to ===. As an example, we could also return the log value:

```haskell
\x -> when (exp (log (abs x)) /= abs x) $
    error $ show (x, log $ abs x, exp $ log $ abs x)
```

However, the QuickCheck === is a great improvement over ==, and should be a very popular addition.