News aggregator

High(low) performance recursion

Haskell on Reddit - Mon, 07/28/2014 - 1:41am

I have this recursive formula (Panjer recursion) that I need to implement efficiently in order to numerically approximate the probability distribution of a sum of iid random variables (I will use LaTeX notation):

s_{r} = (1-p)\cdot\sum_{i=1}^{r} g_{i}\cdot s_{r-i}

I have implemented it in Haskell and Matlab, but in both cases the code is too slow for even moderately large 'r' due to the recursive nature of the problem.

I know that expressing the recursive calls in terms of map and fold would speed up the computation and make it even faster than a loop, but I am stuck and cannot do better than this:

prob :: Int -> Float -> [Float] -> Float

prob r p gs = (1-p) * ( foldr (+) 0 [ g * ( prob (r-b) p gs) | (g,b) <- zip gs [1..r] ] )

Would you know how to improve it?
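A standard way to speed this up is to memoize: build the table of s_r values bottom-up so each one is computed exactly once, turning the exponential recursion into a quadratic loop. A sketch (using Double rather than Float, and assuming s_0 = 1 as the base case, which the post does not state):

```haskell
-- Bottom-up Panjer recursion: ss !! r is s_r, and each value is
-- computed once by reusing the earlier entries of ss.
probs :: Int -> Double -> [Double] -> [Double]
probs rMax p gs = take (rMax + 1) ss
  where
    -- s_0 = 1 is an assumed base case;
    -- s_r = (1-p) * sum of g_i * s_{r-i} for i = 1..r
    ss = 1 : [ (1 - p) * sum (zipWith (*) gs (reverse (take r ss)))
             | r <- [1 .. rMax] ]
```

For example, `probs 1000 p gs` computes all of s_0 .. s_1000 in a single pass, where the original `prob` recomputes shared subproblems exponentially many times.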

submitted by ceikit
[link] [26 comments]
Categories: Incoming News

PPDP 2014: Program and 2nd Call for Participation

General haskell list - Sun, 07/27/2014 - 1:59pm
======================================================================
CALL FOR PARTICIPATION: PPDP 2014
16th International Symposium on Principles and Practice of Declarative Programming
Canterbury, Kent, September 8-10, 2014
http://users-cs.au.dk/danvy/ppdp14/

co-located with

LOPSTR 2014
24th International Symposium on Logic-Based Program Synthesis and Transformation
Canterbury, Kent, September 9-11, 2014
http://www.iasi.cnr.it/events/lopstr14/
======================================================================

Two weeks left for early registration (until August 8):
http://www.cs.kent.ac.uk/events/2014/ppdp-lopstr-14/

A significant discount is available when registering for both events, especially as a student (until August 8).

PPDP 2014 features
* an invited talk by Roberto Giacobazzi, shared with LOPSTR: "Obscuring Code -- Unveiling and Veiling Information in Programs"
* no fewer than 4 distilled tuto
Categories: Incoming News

Exception in simple Pipes example

Haskell on Reddit - Sun, 07/27/2014 - 12:03pm

So I took the simple example of using Pipes to read from reddit (see here) and tried to turn it into a stand alone Producer.

It works but instead of just stopping when there's no more data coming in, it throws an exception. What am I missing?

{-# LANGUAGE OverloadedStrings #-}
module Main where

import Pipes as P
import Pipes.HTTP
import Pipes.ByteString as PB

downloadRedditExample :: IO (Producer ByteString IO ())
downloadRedditExample = do
    req <- parseUrl "http://www.reddit.com/r/haskell.json"
    withManager defaultManagerSettings $ \m ->
        withHTTP req m (return . responseBody)

main :: IO ()
main = do
    redditPipe <- downloadRedditExample
    runEffect $ redditPipe >-> PB.stdout

Also on lpaste at http://lpaste.net/108221
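The usual cause of this exception is that `withHTTP` closes the connection as soon as its callback returns, so the returned producer is consumed after the response is already gone. A sketch of the common fix (untested here; it assumes the same pipes-http API as the post): keep the whole streaming pipeline inside the callback.

```haskell
{-# LANGUAGE OverloadedStrings #-}
module Main where

import Pipes
import Pipes.HTTP
import qualified Pipes.ByteString as PB

main :: IO ()
main = do
    req <- parseUrl "http://www.reddit.com/r/haskell.json"
    withManager defaultManagerSettings $ \m ->
        withHTTP req m $ \resp ->
            -- consume the body while the connection is still open
            runEffect $ responseBody resp >-> PB.stdout
```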

submitted by hiptobecubic
[link] [8 comments]
Categories: Incoming News

Intuition to understand poor man's concurrency

haskell-cafe - Sun, 07/27/2014 - 11:44am
Hello all, I am trying to understand the ideas of Koen Claessen, published in Functional Pearls: "A Poor Man's Concurrency Monad" (1999). The code in the paper doesn't compile, e.g. it uses "lambda dot" instead of "lambda arrow", i.e. the way the lambda calculus people write things. Was that ever legal Haskell, or is this the result of some lhs2TeX pretty printer? Anyway, I believe I was able to convert it into modern Haskell syntax - at least it compiles. But I have trouble understanding the Monad instance presented there. Could anybody walk me through the bind function? But even more important: how do you develop an intuition about what the bind operator does in a specific monad? I saw a Google tech talk where Douglas Crockford explains monads in terms of objects in JavaScript (https://www.youtube.com/watch?v=b0EF0VTs9Dc), which was certainly enlightening, but I couldn't apply it to this particular monad. I also tried using something like Data Flow Diagrams. This kinda works for simple monads such as t
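The bind being asked about follows the standard continuation-passing shape: a computation is a function that is handed "the rest of the program" as a callback. A minimal standalone sketch of that shape (my own names, not the paper's; the paper layers an `Action` type on top of this to represent suspended threads):

```haskell
-- A computation producing an `a` is a function that receives the
-- continuation (a -> r) and decides how and when to call it.
newtype Cont r a = Cont { runCont :: (a -> r) -> r }

instance Functor (Cont r) where
    fmap f (Cont m) = Cont (\k -> m (k . f))

instance Applicative (Cont r) where
    pure x = Cont (\k -> k x)
    Cont mf <*> Cont mx = Cont (\k -> mf (\f -> mx (k . f)))

instance Monad (Cont r) where
    -- Run m; when it produces `a`, run (f a) with the original
    -- continuation k. Bind is just composition of callbacks.
    Cont m >>= f = Cont (\k -> m (\a -> runCont (f a) k))
```

The concurrency trick in the paper is that a computation may stash its continuation inside an `Action` instead of calling it immediately, which is what lets a scheduler interleave threads.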
Categories: Offsite Discussion

MACD Forex Trading Project in Haskell - Need Guidance

Haskell on Reddit - Sun, 07/27/2014 - 11:39am

Hey guys,

I've started a self-initiated project with Haskell in an effort to build an automated forex trading system based on the MACD indicator.

The Moving Average Convergence/Divergence (MACD) indicator is calculated as the difference between the 24-period and 52-period Exponential Moving Average (EMA) of a given currency pair, stock, etc. It's a useful tool for choosing entry/exit opportunities in the market. It analyzes the normal relationship between price and momentum, and identifies when that relationship has been temporarily broken to determine periods of bullish/bearish convergence/divergence. When a price/rate is lower than momentum suggests it should be, MACD issues a buy signal, as it is expected that the relationship between price/momentum will return to normal, and the quote will rise.

So here's my flowchart of how I imagine this system working:

http://imgur.com/84UtiZ9

I want to pull live quotes from an online database using a web query, run calculations on those quotes using Haskell, output those to Excel to construct conditionally-formatted tables, graphs, etc, and then have some sort of user interface that identifies buy/sell opportunities by displaying the currency pair and various statistics.

I don't even know if this is possible. I've only taken an intro to Comp Sci class and I'm clueless regarding the compatibility of Haskell with Excel. Yes, I could skip Haskell and do this entire thing in Excel, but that would defeat the purpose of the project, which is to get better at programming.

Phase 1: Haskell Functions. Here is what I have so far:

closing_prices = **HELP** --Can I import a list of live currency rates
                          --using a web query? I need a list to run the
                          --following functions on.

--Simple Moving Average (SMA)
calcSMA :: Int -> a
calcSMA n = sum (take n closing_prices) / n
--takes a number of periods (n), and a list of closing prices
--for the n most recent periods, and returns a simple moving average

--Exponential Moving Average (EMA)
calcEMA :: Int -> a
calcEMA n = (p * a) + (pEMA * (1 - a))
  where
    p    = head closing_prices
    a    = 2 / (1 + n)
    pEMA = **HELP**
--takes number of periods (n), current closing price (p), smoothing
--factor (a), and the previous EMA (pEMA), and returns the current EMA

ema24 = calcEMA 24 --24-period EMA
ema52 = calcEMA 52 --52-period EMA

calcMACD = ema24 - ema52 --note: subtract ema24 ema52 would compute ema52 - ema24

The functions are worthless without a continuously-updated list of exchange rates for a specified pair, say EUR/USD for example.

Also, since I need the EMA of the pair at the previous closing price to calculate the current EMA, I don't think a recursive function would work, and I don't really know where to go with this function next.
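Actually, the previous-EMA dependency is a natural fit for a left scan: each output is computed from the previous output and the current price. A sketch (assuming prices are ordered oldest-first and seeding the EMA with the first price, which is one common convention; all names are mine):

```haskell
-- Running EMA over a price series: scanl threads the previous EMA
-- through, so no explicit recursion or mutable state is needed.
ema :: Int -> [Double] -> [Double]
ema _ []        = []
ema n (p0:rest) = scanl step p0 rest
  where
    alpha = 2 / (1 + fromIntegral n)          -- smoothing factor
    step prev p = p * alpha + prev * (1 - alpha)

-- MACD as described in the post: 24-period EMA minus 52-period EMA
macd :: [Double] -> [Double]
macd ps = zipWith (-) (ema 24 ps) (ema 52 ps)
```

Feeding `macd` a fresh price list each time a new quote arrives avoids the need for a self-updating `closing_prices` binding.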

Any help would be greatly appreciated.

submitted by Mister_Balloon_Hands
[link] [6 comments]
Categories: Incoming News

Seeking feedback on a web asset pipeline library

Haskell on Reddit - Sun, 07/27/2014 - 10:25am

I've been writing a library to help with compiling and serving web assets, which is more or less ready to be used: https://github.com/hdgarrood/herringbone

If you are doing web stuff in Haskell, I'd like feedback. Is this the kind of thing you can see yourself using? If not, why not? Would you have designed it differently? Does the README tell you what you want to know? and so on.

submitted by hdgarrood
[link] [4 comments]
Categories: Incoming News

Yesod Web Framework: Reading files from the proc filesystem

Planet Haskell - Sun, 07/27/2014 - 4:50am

I was stumped by this one myself for a bit today, so I thought writing it up in a blog post would be a good way to make sure (1) I don't forget this little fact, and (2) hopefully the next person doesn't need to puzzle over this as long as I did. Let's say you want to read the contents of a file in the proc filesystem, such as /proc/uptime. There are many ways to do that in Haskell. Let's ignore any streaming data framework for the moment, and instead focus on just the "string-like" types: String and strict/lazy ByteString/Text. Here's a little program that tries all of them out:

import qualified Data.ByteString as S
import qualified Data.ByteString.Lazy as L
import qualified Data.Text.IO as T
import qualified Data.Text.Lazy.IO as TL

test :: Show a => String -> (FilePath -> IO a) -> IO ()
test name reader = do
    contents <- reader "/proc/uptime"
    putStrLn $ name ++ ": " ++ show contents

main :: IO ()
main = do
    test "String           " readFile
    test "strict ByteString" S.readFile
    test "lazy ByteString  " L.readFile
    test "strict Text      " T.readFile
    test "lazy Text        " TL.readFile

Given that the uptime file is just simple ASCII data, you'd probably assume (like I did) that all of these will produce the same result. In fact, that's not the case. On my system, the results are:

String           : "60740.70 136144.86\n"
strict ByteString: ""
lazy ByteString  : "60740.70 136144.86\n"
strict Text      : "60740.70 136144.86\n"
lazy Text        : "60740.70 136144.86\n"

Strict ByteString reading is returning an empty value! Why is this happening? It's actually quite easy to see once you throw in two new pieces of information. First, let's look at the implementation of Data.ByteString.readFile:

readFile :: FilePath -> IO ByteString
readFile f = bracket (openBinaryFile f ReadMode) hClose
    (\h -> hFileSize h >>= hGet h . fromIntegral)

Notice how we allocate a buffer of exactly the right size to read in the entire contents of the file. We don't do this with any of the other file reading functions. For the lazy variants, we don't want to read the entire file into memory at once. And for strict Text, knowing the size of the file doesn't tell us the size of the buffer we need to allocate, due to variable-length encoding. So this nifty optimization only applies to strict ByteStrings.

Now piece of data number two:

$ ls -l /proc/uptime
-r--r--r-- 1 root root 0 Jul 27 13:56 /proc/uptime

Huh, the file is empty! As is well documented, virtually every file in the proc filesystem is listed as empty, and the contents are generated on demand by the kernel.

So how do you read the file contents into a strict ByteString? There are actually plenty of approaches that work. In my case, I ended up just writing a helper function using conduit:

localReadFile fp = IO.withBinaryFile fp IO.ReadMode $ \h -> sourceHandle h $$ foldC

But probably the simplest thing to do is to just convert a lazy ByteString into a strict ByteString, e.g. fmap L.toStrict . L.readFile.
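As a self-contained sketch of that last suggestion:

```haskell
import qualified Data.ByteString as S
import qualified Data.ByteString.Lazy as L

-- Read via the lazy API (which doesn't pre-size its buffer from the
-- reported file size) and then flatten to a strict ByteString.
readFileStrict :: FilePath -> IO S.ByteString
readFileStrict = fmap L.toStrict . L.readFile
```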

Categories: Offsite Blogs

Converting Make to Shake

Haskell on Reddit - Sun, 07/27/2014 - 3:45am
Categories: Incoming News

New to Nix; problem using cabal2nix

Haskell on Reddit - Sun, 07/27/2014 - 12:53am

I've finally got round to trying out Nix.

I'm trying to set up a project sandbox, as described by /u/ocharles in How I Develop with Nix.

I'm under the impression that I should be able to generate (most of?) the default.nix file from my existing *.cabal file using cabal2nix.

However, cabal2nix isn't working for me.

$ cabal2nix ./maison.cabal
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
curl: (22) The requested URL returned error: 404 Not Found
/home/dave/.nix-profile/bin/nix-prefetch-url: download of ‘http://hackage.haskell.org/packages/archive/maison/0.1.0.0/maison-0.1.0.0.tar.gz’ failed
cabal2nix: Error: Cannot compute hash. (Not a hackage project?) Specify hash explicitly via --sha256 and add appropriate "src" attribute to resulting nix expression.

cabal2nix correctly concludes that my package is not on Hackage (why it had to try downloading it instead of noting the cabal file is local I don't know).

cabal2nix wants me to give it a hash. What does it want a hash of?

Have I misunderstood? Is cabal2nix only for packages released on Hackage, and not for packages under development?

submitted by dave4420
[link] [11 comments]
Categories: Incoming News

[] \\ [1..] diverges - intended?

haskell-cafe - Sat, 07/26/2014 - 8:39pm
Hi, I just noticed that

import Data.List
[] \\ [1..]

diverges, although technically it doesn't have to (the docs suggest to me that it could just be [], and a non-diverging implementation is possible). Same for [1,2] \\ [1..] of course. Was this intended?
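For reference, a non-diverging variant is easy to sketch: fold `delete` over the second list, but stop as soon as the first list is exhausted. (It still diverges when some element of the left list never occurs in the infinite right list, which is unavoidable.)

```haskell
import Data.List (delete)

-- Like (\\), but short-circuits once there is nothing left to delete,
-- so both [] `diff` [1..] and [1,2] `diff` [1..] terminate.
diff :: Eq a => [a] -> [a] -> [a]
diff xs0 ys0 = go xs0 ys0
  where
    go [] _      = []                  -- nothing left to delete from
    go xs []     = xs
    go xs (y:ys) = go (delete y xs) ys
```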
Categories: Offsite Discussion

Am I missing something about unfoldr?

libraries list - Sat, 07/26/2014 - 8:34pm
The current definition:

unfoldr :: (b -> Maybe (a, b)) -> b -> [a]
unfoldr f b = case f b of
    Just (a, new_b) -> a : unfoldr f new_b
    Nothing         -> []

I haven't yet looked over the Core to see if GHC is able to do a worker/wrapper transformation to inline it, but we definitely want to inline it (in some appropriate phase), because doing so has the potential to erase the Maybes. It also isn't a good producer for foldr/build. However, if we (re)write it like this:

unfoldrB f b c n = go b
  where
    go b = case f b of
        Just (a, new_b) -> a `c` go new_b
        Nothing         -> n

unfoldr f b = build (unfoldrB f b)

then we get some really nice fusion:

foldr c n (unfoldr f b)
  = foldr c n (build (unfoldrB f b))
  = unfoldrB f b c n

The list is gone, and there are no new closures or data structures in its place. As a side benefit, we could write groupBy as an unfoldr, and make it a somewhat better producer, especially when the groups are small. I don't *think* there's any way to make groupBy a good co
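A quick way to sanity-check the proposed rewrite is to run the worker with (:) and [] as the fold arguments, which is exactly what `build` would plug in (names below are mine):

```haskell
-- The current recursive definition
unfoldrRec :: (b -> Maybe (a, b)) -> b -> [a]
unfoldrRec f b = case f b of
    Just (a, b') -> a : unfoldrRec f b'
    Nothing      -> []

-- The proposed fold-style worker: cons and nil are passed in,
-- so foldr/build fusion can replace them directly.
unfoldrB :: (b -> Maybe (a, b)) -> b -> (a -> r -> r) -> r -> r
unfoldrB f b0 c n = go b0
  where
    go b = case f b of
        Just (a, b') -> a `c` go b'
        Nothing      -> n

-- A tiny seed function for testing
countdown :: Int -> Maybe (Int, Int)
countdown 0 = Nothing
countdown k = Just (k, k - 1)
```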
Categories: Offsite Discussion

Moving from understanding monad transformers to knowing when to use monad transformers?

Haskell on Reddit - Sat, 07/26/2014 - 8:06pm

As the title says, I've read the papers on monad transformers and I get them at least on a basic level, but when I'm coding I never think "this is a problem I should use a monad transformer stack for".

submitted by Hrothen
[link] [9 comments]
Categories: Incoming News

Cartesian Closed Comic: Safety

Planet Haskell - Sat, 07/26/2014 - 5:00pm

Categories: Offsite Blogs

Kototama: Playing with Haskell

Planet Haskell - Sat, 07/26/2014 - 4:00pm

After reading a few books on Haskell I had to face the reality: learning by doing a project was necessary!

I chose a project which was easy enough to be finished in a few weeks or months (but still slightly challenging) and had a practical utility.

My project is a JSON command-line utility loosely inspired by underscore-cli and Hawk. Arbitrary Lens expressions can be used to filter or transform the input.

If you don’t know what Lenses are, think of them as getter/setter/filter/function combinators, similar to jQuery or CSS selectors but for any type of data structure. I’m still a beginner regarding Lenses.

The challenge for me was to learn how to dynamically evaluate Haskell expressions. This is uncommon, since Haskell is statically typed. The library I used for this is naturally limited in its functionality compared to a Lisp, but the final result is anything but disappointing.

For the purpose of my program I implemented a pretty printer for JSON similar to aeson-pretty but with colors. Maybe I should package it for Hackage?

Once I had hdevtools set up for cabal sandboxes, programming in Haskell was enjoyable. Refactoring is easy thanks to the strong type system. I was stuck once or twice with the type system, but the people on the #haskell channel were helpful. The code has a certain aesthetic, even if I feel more knowledge would allow me to write it more cleanly. For example, I wonder if it is possible to avoid pattern matching on Left and Right for multiple calls that return something like IO (Either x y), since both IO and Either are monads.
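On that closing question: this is what the ExceptT monad transformer (Control.Monad.Trans.Except, from the transformers package) is for; it stacks Either's short-circuiting on top of IO so the Left/Right pattern matches disappear. A minimal sketch with made-up actions:

```haskell
import Control.Monad.Trans.Except (ExceptT (..), runExceptT)

-- Hypothetical stand-ins for real IO (Either e a) calls
stepA :: IO (Either String Int)
stepA = return (Right 1)

stepB :: Int -> IO (Either String Int)
stepB n = return (Right (n + 1))

-- ExceptT wraps each call; the do-block aborts at the first Left
pipeline :: IO (Either String Int)
pipeline = runExceptT $ do
    a <- ExceptT stepA
    ExceptT (stepB a)
```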

You can have a look at the project here:

https://github.com/kototama/jp

Categories: Offsite Blogs

Neil Mitchell: Converting Make to Shake

Planet Haskell - Sat, 07/26/2014 - 2:49pm

Summary: I have converted over 10,000 lines from Make to Shake. Here are some tips I learnt along the way.

Make is the de facto build system for large projects - if no one made an active choice, your project is probably using Make. The Shake build system can be a better alternative, but how should you convert? The following tips are based on my experience converting a 10,000 line Make system to Shake.

Shake can do whatever Make can

Shake is more powerful than Make, so if Make could do something, Shake probably can too. As a first approximation, the Make snippet:

output: input1 input2
shell command to run

Becomes:

"output" *> \out -> do
need ["input1","input2"]
cmd Shell "shell command to run"

In addition:

  • Variables in Make usually map to normal Haskell variables.
  • Definitions of rules and dependencies use the functions from Development.Shake. For example, .PHONY maps to the phony function.
  • Filepath manipulation uses the functions from Development.Shake.FilePath.
  • Dynamically generated include files can be handled with needMakefileDependencies from Development.Shake.Util.

Preserve the file/directory structure

The existing Make system will generate object files with particular names in particular places. Often these locations aren't what you would pick if you wrote the build system afresh. However, resist the temptation to "clean up" these pieces during the conversion. Treat the file locations as a specification, which lets you focus on the conversion to Shake without simultaneously redesigning a large and complex build system.

Treat the Makefile as a black box

Often the existing Makefile will be hard to understand, and sometimes won't be worth reading at all. The most important information in the Makefile is what commands it runs, which can be determined by make clean && make -j1 > log.txt, which captures a complete list of the commands run. From the commands it is usually relatively easy to determine the inputs and outputs, from which you can write the Shake rules. However, the Makefile can be useful to figure out which commands to group into a single rule, and how to generalise rules to cover multiple files.

Split the metadata from the logic

Often the Makefiles combine metadata (these object files go into this executable) with logic (use gcc -O2 to build all executables). Shake is great for writing build logic, but metadata is often better placed in separate files (the Haskell syntax can be a little heavy). You can use the full power of Haskell to store whatever metadata you require, and addOracle from Shake can introduce granular dependencies on the information. The module Development.Shake.Config provides some helper functions that might serve as a suitable base.

To bootstrap the Shake system, often the metadata can be extracted from the existing Makefiles. You can write a temporary script to parse the Makefile and extract whatever you consider the metadata, clean it up, and write it to new configuration files. Initially the config files are generated, but once you delete the Make original, they become source files.
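As a sketch of what the metadata split can look like (untested; it assumes Shake's Development.Shake.Config API and a hypothetical build.cfg with keys like EXE_OBJS):

```haskell
import Development.Shake
import Development.Shake.Config

-- build.cfg holds the metadata, e.g.
--   EXE_OBJS = main.o util.o
-- while the rules below hold the logic.
main :: IO ()
main = shakeArgs shakeOptions $ do
    usingConfigFile "build.cfg"   -- also adds a dependency on the config
    "myexe" *> \out -> do
        objs <- fmap (maybe [] words) (getConfig "EXE_OBJS")
        need objs
        cmd "gcc -o" [out] objs
```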

Focus on a single platform/configuration

Often a build system will be cross-platform (Linux/Mac/Windows), build multiple targets (binaries/distribution package/documentation) and build multiple configurations (release/debug/profile). To start the conversion, focus only on the most heavily developed platform/configuration - if the migration is successful, abstracting over the differences is far easier in Shake than Make. You may wish to start with a simple target to try out Shake (e.g. documentation), but after that work on the target developers use every day, so that the developers can make use of the improvements sooner, motivating the migration.

Convert bottom up

Shake demands that it has built all the dependencies (it checks the modification time is equal to what it remembered); in contrast, Make only requires that targets are newer than their dependencies. As a result, you should start converting the leaves of the build system to Shake, and work upwards. Provided you use the same file/directory structure, you can then build what you have defined with Shake, finish the build with Make, and check the result still works as expected.

Run Make and Shake in parallel

Once you have migrated enough of the build system to be useful (the usual targets in the most common configuration), you should encourage some developers to try Shake instead of Make. These developers will find things that don't work properly, hidden features in the Make system that no one knew about, etc. Expect to fix problems and iterate several times.

Hopefully the Shake system will be faster and more robust. Once these advantages have encouraged all the main developers to make the switch, you should delete/disable the Make system and expect it to bitrot quickly.

Refactor individual rules

As you are converting rules from Make to Shake you can translate them directly and refactor later, or convert straight into more idiomatic Shake. As an example, you might start with:

cmd Shell "ls >" out

The argument Shell tells Shake to use the system shell, meaning that > redirect works. Later on you may wish to switch to:

Stdout result <- cmd "ls"
writeFile' out result

Now you are invoking the ls command directly, capturing the output using Shake. Sometime later you may switch to:

getDirectoryFiles "." ["*"]

Which is the Shake tracked way of getting a list of files. Similarly, calling sed or for through Shell should probably be gradually converted to Shake/Haskell operations.

Refactor the whole

Once you have converted the whole build system, and disabled the original Make system, you may wish to refactor the build system - putting files in more appropriate places, rethinking file dependencies etc. In truth, I've never got round to this step, and I would be surprised if many people did. However, as the build system grows, hopefully the new bits with sensible decisions will gradually begin to outnumber the old bits with questionable design.

Ask if you get stuck

Build systems (even in Shake) are complex entities, with intricate coordination between files, which mostly run untyped external commands with many platform/version differences. As a result, build systems are often complex to write.

If you have a problem using Shake, just ask. If you can boil down the problem to something fairly standalone, ask on StackOverflow with the tag shake-build-system. If you are looking for more general advice, ask on the mailing list. If you succeed, write a blog post and tweet me.

Categories: Offsite Blogs