News aggregator

Mathematical functions with multiple arguments

haskell-cafe - Wed, 03/11/2015 - 10:45pm
Hi everybody, I have a function of type

    plot :: ([Double] -> Double)    -- A function to plot
         -> [(Double, Double)]      -- Range for all arguments
         -> IO ()

I want to enforce the fact that ranges for all arguments should be provided. Is there a way to make the type system enforce it?
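One possible direction (an editor's sketch, not from the original message; Vec, plot and f below are illustrative names): index both the function's arguments and the ranges by the same type-level length, so supplying the wrong number of ranges is a type error.

    {-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

    data Nat = Z | S Nat

    -- A length-indexed vector: the length is part of the type.
    data Vec (n :: Nat) a where
      Nil  :: Vec 'Z a
      Cons :: a -> Vec n a -> Vec ('S n) a

    -- The function and its ranges share the length index n, so the
    -- compiler rejects calls with a missing or extra range.
    plot :: (Vec n Double -> Double)  -- function to plot
         -> Vec n (Double, Double)    -- exactly one range per argument
         -> IO ()
    plot _ _ = return ()              -- rendering omitted in this sketch

    f :: Vec ('S ('S 'Z)) Double -> Double
    f (Cons x (Cons y Nil)) = x * y

    example :: IO ()
    example = plot f (Cons (0, 1) (Cons (0, 2) Nil))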
Categories: Offsite Discussion

My first big Haskell project

Haskell on Reddit - Wed, 03/11/2015 - 8:54pm

Hey guys,

I was just looking back at my first big Haskell project, which you can find here: https://github.com/Julek/klyslee

It's a small "A.I." which creates music through a genetic algorithms. I put a couple of the tunes it made at various points online at (https://soundcloud.com/klyslee/tracks plus one now as a "restart", be careful of your years on the earliest ones!).

Looking back on this code, it's a bit embarrassing now, so I was wondering, what have you guys learnt looking back at your first big projects?

submitted by julek1024
[link] [17 comments]
Categories: Incoming News

What imperative languages do Haskellers like?

Haskell on Reddit - Wed, 03/11/2015 - 6:24pm

Now that I've been using Haskell for almost a year I find going back to imperative languages incredibly frustrating. I went and brushed up on my Python and the biggest problem I had with it was how verbose it is (which frustrated me more than the lack of compile time optimisation or type safety...).

For various reasons I'm finding JavaScript to be quite nice. The syntax is reasonably lean, there are higher order functions and I can take lessons from Haskell about mutable state. Duck Typing is annoying, as is the lack of compile time assurance and the largely poor performance. I certainly wouldn't do anything more than client side functionality for web.

I realise that what I'm really liking about Haskell more than anything is how terse it is. Once you understand the syntax it's not just quick to write but it's easy to understand.

I've seen a bit of J around and it looks like it might push that a bit far. Anyone have any experience with it? Once you understand the syntax is it possible to decipher what someone else's code means or is it a "write only language"?

What imperative languages do Haskellers out there like?

submitted by TheCriticalSkeptic
[link] [84 comments]
Categories: Incoming News

References on haskellwiki

haskell-cafe - Wed, 03/11/2015 - 5:21pm
Hi, I've begun to do some formatting on old Monad Reader editions on the Haskell wiki. Unfortunately, there is no support for proper references. Can the admin add the "Cite" extension? It would help a lot. Thanks
Categories: Offsite Discussion

Arrows, Profunctors, DeepArrow, circat, GArrows, Lambda-CCC and more!

Haskell on Reddit - Wed, 03/11/2015 - 4:30pm

Fellow archers, (haha),

I've been working with Arrows and their various formulations recently. In particular, I've created a quasiquoter ArrowInit for proc-do notation that allows me to implement the CCA package without a pre-processor, by de-sugaring the proc/do and then lifting functions appropriately. I am interested in a more generic endeavor: using a similar quasiquoter to take arbitrary notation and optimize/normalize it while degrading gracefully. The more restrictive DeepArrow/circat/GArrow/Profunctor formulations allow for optimizations. My idea is for the quasiquoter to detect which variant is possible for a particular expression, and then to instantiate for that restricted version.

There seems to be a constant low-level history of this sort of effort regarding Arrows, and I think the key is to expose the nice features of proc-do notation into these efforts. If this works, someone can draw out something like:

    runItA = runConcurrently [arrow|
      proc (a,b) -> do
        y <- getURL -< a
        z <- getURL -< b
        return (y,z) |]

and obtain the equivalent of using (<*> or ***) where

    runItV = [arrow|
      proc n -> do
        a <- A -< n
        y <- B -< a
        z <- C -< a
        return (y,z) |]

would be something like A >>> (B &&& C), but in a restricted arrow (note there is no use of arr, and all right-hand-side expressions are unchanged variables, so no arbitrary functions are needed). This allows users to define optimized versions of <*>, ***, &&& and so on, while still writing in a comfortable proc notation across a range of abstractions.

Which of the restricted Arrow approaches would be ideal, and what sort of direction should an effort like this go in? Some similar approaches seem to have died out and I'd like to avoid that fate. Is there a particular effort I should join efforts with? Any direction/advice would be appreciated.

In a sense, this is similar to the ApplicativeDo proposal, but a step in between Applicative and Monad. ApplicativeDo notices when an expression can become an applicative, "ArrowDo" may notice when an expression can become an arrow-like expression.

    -- (syntax is a bit off, but the idea should be clear)
    runItC n = do
      a <- A -< n
      b <- B -< a
      C -< (a,b)

This has no Applicative expression due to the reuse of a, but it can be an Arrow A >>> (returnA &&& B) >>> C instead of a monad. It can also be a 'restricted Arrow' due to a lack of arbitrary expressions, only tupling rearrangement.
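To check the plumbing, here is a minimal sketch (an editor's illustration, with placeholder arrows standing in for A, B and C) showing that this desugaring needs only stock Control.Arrow combinators, with no arr over arbitrary functions:

    import Control.Arrow

    -- a, b, c play the roles of A, B, C above, for any Arrow k
    runItC' :: Arrow k => k n a -> k a b -> k (a, b) c -> k n c
    runItC' a b c = a >>> (returnA &&& b) >>> c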

    runItD n = do
      Just a <- A -< n+1
      b <- B -< a+2
      d <- D -< a+3
      C -< (a,b+d)

This variant should look something like:

    arr (\n -> n+1) >>> A >>>
      arr (\(Just a) -> (a, (a+2, a+3))) >>>
      second (B *** D >>> arr (\(b,d) -> b+d)) >>> C

I think all this should be possible. Worthwhile? Needed? I'm not sure, but I'm willing to give it a shot. It also connects various abstractions cleanly into a single framework. So far I've been able to implement the ArrowInit variant of this scheme.

submitted by tomberek
[link] [12 comments]
Categories: Incoming News

Where is Haskell Weekly News syndicated?

haskell-cafe - Wed, 03/11/2015 - 4:25pm
Haskell Weekly News used to be syndicated on http://contemplatecode.blogspot.com/ and http://sequence.complete.org/hwn. sequence only has HWN up to March 2010, while contemplatecode goes up to the beginning of February. Is there any way that I can subscribe to HWN (preferably as RSS/Atom), and not other stuff that doesn't interest me?
Categories: Offsite Discussion

Proposal: Export cycleN from Data.Sequence

libraries list - Wed, 03/11/2015 - 4:14pm
Yesterday I rewrote `*>` for Data.Sequence (again), using an internal function

    cycleN :: Int -> Seq a -> Seq a

The name of the function is based on that of Data.Sequence.iterateN. cycleN takes a sequence and cycles it as many times as requested:

    cycleN 0 $ fromList [1,2] = []
    cycleN 5 $ fromList [1,2] = [1,2,1,2,1,2,1,2,1,2]

The function is written to maximize sharing in the result sequence and to minimize construction time. Specifically, cycleN n xs should take something like O(|xs| + log n) space (of which all but O(log |xs| + log n) is shared with the structure of xs) and O(log |xs| + log n) time. With current (on GitHub) Data.Sequence exports, the only way to get this functionality with these time and space bounds is to combine replicate with `*>`:

    cycleN n xs = replicate n () *> xs

This strikes me as a bit unpleasant.

David
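For comparison, a naive version (an editor's sketch, not part of the proposal) has the right semantics but none of the sharing, costing O(n * |xs|) time and space:

    import Data.Sequence (Seq, empty, (><))

    -- Concatenate n independent copies of xs; the proposed cycleN
    -- produces the same result but shares structure across copies.
    cycleNNaive :: Int -> Seq a -> Seq a
    cycleNNaive n xs = foldr (><) empty (replicate n xs)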
Categories: Offsite Discussion

"let" inside "do" & scope

Haskell on Reddit - Wed, 03/11/2015 - 3:44pm

I tried this in GHC:

    main = do
      let a = 2
      print a
      let a = a + 1
      print a

It prints "2" and hangs. But "let" is supposed to create a new scope, right? The "a" on the left-hand side of "a = a+1" and the "a" on the right-hand side are different, so why the (apparent) infinite recursion?

I'm told that the above is just a sugared version of the following:

    main = (\a -> (print a >> ((\a -> print a) (a+1)))) 2

And this works fine. It prints "2", then "3", and then quits.

So why doesn't the first one do the same thing?
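The answer is that the lambda version is not the actual desugaring: per the Haskell Report, a let statement in do becomes a let ... in expression, and let bindings are recursive, so the second a is defined in terms of itself:

    main :: IO ()
    main =
      let a = 2 in
        print a >>
          (let a = a + 1 in   -- this 'a' refers to itself, not the outer one
             print a)         -- forcing it loops forever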

submitted by ggchappell
[link] [19 comments]
Categories: Incoming News

www.reddit.com

del.icio.us/haskell - Wed, 03/11/2015 - 1:49pm
Categories: Offsite Blogs

Multiple compiles with cabal

haskell-cafe - Wed, 03/11/2015 - 10:39am
Dear Cafe, I have a program [1], half of which is meant to be compiled with GHCJS and the other half with GHC (or any other Haskell compiler). Currently I do the compilation separately and simply include the compiled JS as a static resource for the regular project. Has anyone run into a similar scenario? Can Cabal handle this? Is there some way of having multiple compiles without requiring a shell script to initialize them? Thank you & Cheers N. [1] https://github.com/netogallo/Cryptographer (annoying bc Github now believes my project is like 99% JS since the GHCJS runtime is bundled as a static resource)
Categories: Offsite Discussion

Frege Goodness

del.icio.us/haskell - Wed, 03/11/2015 - 9:39am
Categories: Offsite Blogs

tpolecat

del.icio.us/haskell - Wed, 03/11/2015 - 9:36am
Categories: Offsite Blogs

missing rseq?

haskell-cafe - Wed, 03/11/2015 - 8:26am
In the book Parallel and Concurrent Programming in Haskell http://chimera.labs.oreilly.com/books/1230000000929/ch02.html#sec_par-eval-sudoku2 a list of sudokus is solved in parallel. In version 2 (sudoku2.hs) the program splits the list of sudokus into 2 separate lists and solves these lists in parallel. In version 3 (sudoku3.hs) parMap is used. What I don't understand is why in sudoku2 the program has to wait until the parallel computations are finished with rseqs, while in sudoku3.hs there is no rseq (neither in the main program nor in parMap)? Why can't program sudoku3.hs terminate before all parallel calculations are finished, as in Example 2-1 (rpar/rpar)? Kees
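For reference, the book's parMap (quoted from memory, so treat it as a sketch) sparks each element with rpar and returns the list inside Eval without forcing the results:

    import Control.Parallel.Strategies (Eval, rpar)

    -- spark f a in parallel for every element of the list
    parMap :: (a -> b) -> [a] -> Eval [b]
    parMap f []     = return []
    parMap f (a:as) = do
      b  <- rpar (f a)
      bs <- parMap f as
      return (b:bs)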
Categories: Offsite Discussion

Haskell Help

Haskell on Reddit - Wed, 03/11/2015 - 7:58am

Hello, Just wanted to know where the issues in each of these code snippets are coming from.

Couldn't match expected type `Integer' with actual type `m0 Integer'. Just trying to return an altered result. What would the declaration be if I wanted to return possible doubles as well?

    f :: Integer -> Integer
    f x = do return ((x*x*x*x*x) - 10*(x*x*x) + 30*(x))
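The do return here is what produces the m0 Integer: it wraps the result in an unconstrained monad. A plausible fix (a sketch, since the intent is guessed from the post) drops the do return and generalises the signature so the same polynomial also works at Double:

    -- works for Integer, Double, or any other Num instance
    f :: Num a => a -> a
    f x = (x*x*x*x*x) - 10*(x*x*x) + 30*x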

Couldn't match type `Int' with `Double'. Basically want to return fractions, passing in numerous parameters as well.

    ternarySearch :: IO () -> Int -> Int -> Int -> Int -> Double
    ternarySearch f a b tau = do
      if (abs (b - a) < tau)
        then do return (a + b) / 2
        else do return 5

submitted by DESU-troyer
[link] [4 comments]
Categories: Incoming News

Deriving Show for non-regular data types

haskell-cafe - Wed, 03/11/2015 - 6:45am
Hi, As part of studying Okasaki's PFDS book, I wanted to add Show support for each of the data structures, and some have proven to be challenging, as some of the types are non-regular and automatic derivation of Show doesn't work. I've been able to add some code that introduces a supplementary type class that serves as a way to pass "proof" that the wrapping data type supports traversal for Show-ability, but the solution seems unsatisfactory. I would greatly appreciate any suggestions for improvements. I've attached the code; the relevant bits are in BankersDeque.hs, Example.hs, NestedShowable.hs, and SimpleCatenableDeque.hs. The same code is available here: https://gist.github.com/drvink/30fb2a2b257fc99af281 Thanks, Mark Laws
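For the classic nested-type example, GHC's StandaloneDeriving is often enough, because instance resolution can recurse where the context inference of a deriving clause cannot; a small sketch (an editor's illustration, not the attached code):

    {-# LANGUAGE StandaloneDeriving #-}

    -- A non-regular (nested) type: the recursive occurrence is at (a, a).
    data Nest a = Nil | Cons a (Nest (a, a))

    -- A plain `deriving Show` clause fails, since the inferred context
    -- would have to mention Show (Nest (a, a)); a standalone instance
    -- with an explicit Show a context resolves recursively.
    deriving instance Show a => Show (Nest a)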
Categories: Offsite Discussion

Copying Cabal sandboxes

Haskell on Reddit - Wed, 03/11/2015 - 4:57am

Sandboxes are great, but having to build them from scratch is time (and disk space) consuming, which is annoying. Wouldn't it be great if we could copy an existing one when starting a new one? Sadly, this is known not to work, but what doesn't seem to be so widely known is that we can copy package DBs, so that we can have packages registered in one sandbox that live in another. This can be a huge time saver.

Details in the "Copying Sandboxes" section of Comprehensive Haskell Sandboxes, Revisited.

submitted by edsko
[link] [13 comments]
Categories: Incoming News

Dominic Steinitz: Stochastic Volatility

Planet Haskell - Wed, 03/11/2015 - 4:06am
Introduction

Simple models for e.g. financial option pricing assume that the volatility of an index or a stock is constant, see here for example. However, simple observation of time series shows that this is not the case; if it were then the log returns would be white noise.

One approach which addresses this, GARCH (Generalised AutoRegressive Conditional Heteroskedasticity), models the evolution of volatility deterministically.
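For concreteness, the GARCH(1,1) recursion (standard notation, added here for illustration rather than taken from the original post) is

\[
\sigma_t^2 = \omega + \alpha \epsilon_{t-1}^2 + \beta \sigma_{t-1}^2
\]

so the conditional variance at time \(t\) is a deterministic function of past observations, in contrast to the stochastic volatility model below, where the log-variance carries its own noise term.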

Stochastic volatility models treat the volatility of a return on an asset, such as an option to buy a security, as a Hidden Markov Model (HMM). Typically, the observable data consist of noisy mean-corrected returns on an underlying asset at equally spaced time points.

There is evidence that Stochastic Volatility models (Kim, Shephard, and Chib (1998)) offer increased flexibility over the GARCH family, e.g. see Geweke (1994), Fridman and Harris (1998) and Jacquier, Polson, and Rossi (1994). Despite this, and judging by the number of questions on the R Special Interest Group on Finance mailing list, the use of GARCH in practice far outweighs that of Stochastic Volatility. Reasons cited are the multiplicity of estimation methods for the latter and the lack of packages (but see here for a recent improvement to the paucity of packages).

In their tutorial on particle filtering, Doucet and Johansen (2011) give an example of stochastic volatility. We save this approach for future blog posts and follow Lopes and Polson and the excellent lecture notes by Hedibert Lopes.

Here’s the model.

We wish to estimate and . To do this via a Gibbs sampler we need to sample from

Haskell Preamble

> {-# OPTIONS_GHC -Wall #-}
> {-# OPTIONS_GHC -fno-warn-name-shadowing #-}
> {-# OPTIONS_GHC -fno-warn-type-defaults #-}
> {-# OPTIONS_GHC -fno-warn-unused-do-bind #-}
> {-# OPTIONS_GHC -fno-warn-missing-methods #-}
> {-# OPTIONS_GHC -fno-warn-orphans #-}

> {-# LANGUAGE RecursiveDo #-}
> {-# LANGUAGE ExplicitForAll #-}
> {-# LANGUAGE TypeOperators #-}
> {-# LANGUAGE TypeFamilies #-}
> {-# LANGUAGE ScopedTypeVariables #-}
> {-# LANGUAGE DataKinds #-}
> {-# LANGUAGE FlexibleContexts #-}

> module StochVol (
>     bigM
>   , bigM0
>   , runMC
>   , ys
>   , vols
>   , expectationTau2
>   , varianceTau2
>   ) where

> import Numeric.LinearAlgebra.HMatrix hiding ( (===), (|||), Element,
>                                               (<>), (#>), inv )
> import qualified Numeric.LinearAlgebra.Static as S
> import Numeric.LinearAlgebra.Static ( (<>) )
> import GHC.TypeLits
> import Data.Proxy
> import Data.Maybe ( fromJust )
> import Data.Random
> import Data.Random.Source.PureMT
> import Control.Monad.Fix
> import Control.Monad.State.Lazy
> import Control.Monad.Writer hiding ( (<>) )
> import Control.Monad.Loops
> import Control.Applicative
> import qualified Data.Vector as V

> inv :: (KnownNat n, (1 <=? n) ~ 'True) => S.Sq n -> S.Sq n
> inv m = fromJust $ S.linSolve m S.eye

> infixr 8 #>
> (#>) :: (KnownNat m, KnownNat n) => S.L m n -> S.R n -> S.R m
> (#>) = (S.#>)

> type StatsM a = RVarT (Writer [((Double, Double), Double)]) a

> (|||) :: (KnownNat ((+) r1 r2), KnownNat r2, KnownNat c, KnownNat r1) =>
>          S.L c r1 -> S.L c r2 -> S.L c ((+) r1 r2)
> (|||) = (S.¦)

Marginal Distribution for Parameters

Let us take a prior that is standard for linear regression

\[
(\boldsymbol{\theta}, \sigma^2) \sim \mathcal{NIG}(\boldsymbol{\theta}_0, \Lambda_0, a_0, b_0)
\]

where \(\boldsymbol{\theta} = (\mu, \phi)^\top\) and use standard results for linear regression to obtain the required marginal distribution.

That the prior is Normal Inverse Gamma (\(\mathcal{NIG}\)) means

\[
\boldsymbol{\theta} \mid \sigma^2 \sim \mathcal{N}(\boldsymbol{\theta}_0, \sigma^2 \Lambda_0^{-1}), \qquad
\sigma^2 \sim \mathcal{IG}(a_0 / 2, b_0 / 2)
\]

Standard Bayesian analysis for regression tells us that the (conditional) posterior distribution for

\[
y_i = \boldsymbol{\theta}^\top \boldsymbol{x}_i + \epsilon_i
\]

where the \(\epsilon_i\) are IID normal with variance \(\sigma^2\) is given by

\[
p(\boldsymbol{\theta}, \sigma^2 \mid \boldsymbol{y}) = \mathcal{NIG}(\boldsymbol{\theta}_n, \Lambda_n, a_n, b_n)
\]

with

\[
\begin{aligned}
\Lambda_n &= \Lambda_0 + X^\top X \\
\boldsymbol{\theta}_n &= \Lambda_n^{-1}(X^\top \boldsymbol{y} + \Lambda_0 \boldsymbol{\theta}_0) \\
a_n &= a_0 + n \\
b_n &= b_0 + \tfrac{1}{2}\left(\boldsymbol{y}^\top \boldsymbol{y} + \boldsymbol{\theta}_0^\top \Lambda_0 \boldsymbol{\theta}_0 - \boldsymbol{\theta}_n^\top \Lambda_n \boldsymbol{\theta}_n\right)
\end{aligned}
\]

Recursive Form

We can re-write the above recursively. We do not need to for this blog article but it will be required in any future blog article which uses Sequential Monte Carlo techniques.

\[
\Lambda_n = \boldsymbol{x}_n \boldsymbol{x}_n^\top + \Lambda_{n-1}
\]

Furthermore

\[
\Lambda_n \boldsymbol{\theta}_n = X^\top \boldsymbol{y} + \Lambda_0 \boldsymbol{\theta}_0 = \boldsymbol{x}_n y_n + \Lambda_{n-1} \boldsymbol{\theta}_{n-1}
\]

so we can write

\[
\boldsymbol{\theta}_n = \Lambda_n^{-1}\left(\boldsymbol{x}_n y_n + \Lambda_{n-1} \boldsymbol{\theta}_{n-1}\right)
\]

and

\[
\begin{aligned}
a_n &= a_{n-1} + 1 \\
b_n &= b_{n-1} + \tfrac{1}{2}\left(y_n^2 + \boldsymbol{\theta}_{n-1}^\top \Lambda_{n-1} \boldsymbol{\theta}_{n-1} - \boldsymbol{\theta}_n^\top \Lambda_n \boldsymbol{\theta}_n\right)
\end{aligned}
\]

Specialising

In the case of our model we can specialise the non-recursive equations as

\[
\begin{aligned}
\Lambda_n &= \Lambda_0 + X^\top X \\
\boldsymbol{\theta}_n &= \Lambda_n^{-1}(X^\top \boldsymbol{h} + \Lambda_0 \boldsymbol{\theta}_0) \\
a_n &= a_0 + n \\
b_n &= b_0 + \tfrac{1}{2}\left(\boldsymbol{h}^\top \boldsymbol{h} + \boldsymbol{\theta}_0^\top \Lambda_0 \boldsymbol{\theta}_0 - \boldsymbol{\theta}_n^\top \Lambda_n \boldsymbol{\theta}_n\right)
\end{aligned}
\]

where the \(i\)-th row of \(X\) is \((1, h_{i-1})\) and \(\boldsymbol{h} = (h_1, \ldots, h_n)^\top\).

Let's re-write the notation to fit our model: \(\boldsymbol{\theta} = (\mu, \phi)^\top\) and \(\sigma^2 = \tau^2\).

Sample from \(p(\boldsymbol{\theta}, \tau^2 \mid h_{0:n})\)

We can implement this in Haskell as

> sampleParms ::
>   forall n m .
>   (KnownNat n, (1 <=? n) ~ 'True) =>
>   S.R n -> S.L n 2 -> S.R 2 -> S.Sq 2 -> Double -> Double ->
>   RVarT m (S.R 2, Double)
> sampleParms y bigX theta_0 bigLambda_0 a_0 s_02 = do
>   let n = natVal (Proxy :: Proxy n)
>       a_n = 0.5 * (a_0 + fromIntegral n)
>       bigLambda_n = bigLambda_0 + (tr bigX) <> bigX
>       invBigLambda_n = inv bigLambda_n
>       theta_n = invBigLambda_n #> ((tr bigX) #> y + (tr bigLambda_0) #> theta_0)
>       b_0 = 0.5 * a_0 * s_02
>       b_n = b_0 +
>             0.5 * (S.extract (S.row y <> S.col y)!0!0) +
>             0.5 * (S.extract (S.row theta_0 <> bigLambda_0 <> S.col theta_0)!0!0) -
>             0.5 * (S.extract (S.row theta_n <> bigLambda_n <> S.col theta_n)!0!0)
>   g <- rvarT (Gamma a_n (recip b_n))
>   let s2 = recip g
>       invBigLambda_n' = m <> invBigLambda_n
>         where
>           m = S.diag $ S.vector (replicate 2 s2)
>   m1 <- rvarT StdNormal
>   m2 <- rvarT StdNormal
>   let theta_n' :: S.R 2
>       theta_n' = theta_n + S.chol (S.sym invBigLambda_n') #> (S.vector [m1, m2])
>   return (theta_n', s2)

Marginal Distribution for State

Marginal for \(h_0\)

Using a standard result about conjugate priors and since we have

\[
h_0 \sim \mathcal{N}(m_0, C_0), \qquad h_1 = \mu + \phi h_0 + \tau \eta_1
\]

we can deduce

\[
h_0 \mid h_1, \mu, \phi, \tau^2 \sim \mathcal{N}(m_1, C_1)
\]

where

\[
C_1 = \left(\frac{1}{C_0} + \frac{\phi^2}{\tau^2}\right)^{-1}, \qquad
m_1 = C_1 \left(\frac{m_0}{C_0} + \frac{\phi (h_1 - \mu)}{\tau^2}\right)
\]

> sampleH0 :: Double ->
>             Double ->
>             V.Vector Double ->
>             Double ->
>             Double ->
>             Double ->
>             RVarT m Double
> sampleH0 iC0 iC0m0 hs mu phi tau2 = do
>   let var = recip $ (iC0 + phi^2 / tau2)
>       mean = var * (iC0m0 + phi * ((hs V.! 0) - mu) / tau2)
>   rvarT (Normal mean (sqrt var))

Marginal for \(h_t\)

From the state equation, we have

\[
h_{t+1} = \mu + \phi h_t + \tau \eta_{t+1}
\]

We also have

\[
h_t = \mu + \phi h_{t-1} + \tau \eta_t
\]

Multiplying the first expression by \(\phi\) and adding the two expressions together gives

\[
(1 + \phi^2) h_t = (1 - \phi)\mu + \phi (h_{t-1} + h_{t+1}) + \tau (\eta_t - \phi \eta_{t+1})
\]

Since the \(\eta_t\) are standard normal, \(h_t\) conditional on \(h_{t-1}\) and \(h_{t+1}\) is normally distributed, and

\[
h_t \mid h_{t-1}, h_{t+1} \sim \mathcal{N}(\mu_t, \eta_t^2)
\quad\text{with}\quad
\mu_t = \frac{(1-\phi)\mu + \phi(h_{t-1} + h_{t+1})}{1+\phi^2},
\quad
\eta_t^2 = \frac{\tau^2}{1+\phi^2}
\]

(at the boundary \(t = n\), where there is no \(h_{n+1}\), the conditional is simply normal with variance \(\tau^2\), as in the code below).

We also have

\[
y_t = e^{h_t/2} \epsilon_t
\]

so that \(y_t \mid h_t \sim \mathcal{N}(0, e^{h_t})\).

Writing

\[
\pi(h_t) \triangleq p(h_t \mid h_{t-1}, h_{t+1}, y_t)
\]

by Bayes' Theorem we have

\[
\pi(h_t) \propto f_{\mathcal{N}}(h_t;\, \mu_t, \eta_t^2)\, f_{\mathcal{N}}(y_t;\, 0, e^{h_t})
\]

where \(f_{\mathcal{N}}\) is the probability density function of a normal distribution.

We can sample from this using Metropolis

  1. For each \(t\), sample a proposal \(h_t^\prime \sim \mathcal{N}(h_t, v_h)\), where \(v_h\) is the tuning variance.

  2. For each \(t\), compute the acceptance probability

\[
\alpha_t = \min\left(1,\;
\frac{f_{\mathcal{N}}(h_t^\prime;\, \mu_t, \eta_t^2)\, f_{\mathcal{N}}(y_t;\, 0, e^{h_t^\prime})}
     {f_{\mathcal{N}}(h_t;\, \mu_t, \eta_t^2)\, f_{\mathcal{N}}(y_t;\, 0, e^{h_t})}
\right)
\]

  3. For each \(t\), compute a new value of \(h_t\): accept \(h_t^\prime\) with probability \(\alpha_t\), otherwise keep \(h_t\).

> metropolis :: V.Vector Double ->
>               Double ->
>               Double ->
>               Double ->
>               Double ->
>               V.Vector Double ->
>               Double ->
>               RVarT m (V.Vector Double)
> metropolis ys mu phi tau2 h0 hs vh = do
>   let eta2s = V.replicate (n-1) (tau2 / (1 + phi^2)) `V.snoc` tau2
>       etas = V.map sqrt eta2s
>       coef1 = (1 - phi) / (1 + phi^2) * mu
>       coef2 = phi / (1 + phi^2)
>       mu_n = mu + phi * (hs V.! (n-1))
>       mu_1 = coef1 + coef2 * ((hs V.! 1) + h0)
>       innerMus = V.zipWith (\hp1 hm1 -> coef1 + coef2 * (hp1 + hm1)) (V.tail (V.tail hs)) hs
>       mus = mu_1 `V.cons` innerMus `V.snoc` mu_n
>   hs' <- V.mapM (\mu -> rvarT (Normal mu vh)) hs
>   let num1s = V.zipWith3 (\mu eta h -> logPdf (Normal mu eta) h) mus etas hs'
>       num2s = V.zipWith (\y h -> logPdf (Normal 0.0 (exp (0.5 * h))) y) ys hs'
>       nums = V.zipWith (+) num1s num2s
>       den1s = V.zipWith3 (\mu eta h -> logPdf (Normal mu eta) h) mus etas hs
>       den2s = V.zipWith (\y h -> logPdf (Normal 0.0 (exp (0.5 * h))) y) ys hs
>       dens = V.zipWith (+) den1s den2s
>   us <- V.replicate n <$> rvarT StdUniform
>   let ls = V.zipWith (\n d -> min 0.0 (n - d)) nums dens
>   return $ V.zipWith4 (\u l h h' -> if log u < l then h' else h) us ls hs hs'

Markov Chain Monte Carlo

Now we can write down a single step for our Gibbs sampler, sampling from each marginal in turn.

> singleStep :: Double -> V.Vector Double ->
>               (Double, Double, Double, Double, V.Vector Double) ->
>               StatsM (Double, Double, Double, Double, V.Vector Double)
> singleStep vh y (mu, phi, tau2, h0, h) = do
>   lift $ tell [((mu, phi), tau2)]
>   hNew <- metropolis y mu phi tau2 h0 h vh
>   h0New <- sampleH0 iC0 iC0m0 hNew mu phi tau2
>   let bigX' = (S.col $ S.vector $ replicate n 1.0)
>               |||
>               (S.col $ S.vector $ V.toList $ h0New `V.cons` V.init hNew)
>       bigX = bigX' `asTypeOf` (snd $ valAndType nT)
>   newParms <- sampleParms (S.vector $ V.toList h) bigX (S.vector [mu0, phi0]) invBigV0 nu0 s02
>   return ( (S.extract (fst newParms))!0
>          , (S.extract (fst newParms))!1
>          , snd newParms
>          , h0New
>          , hNew
>          )

Testing

Let’s create some test data.

> mu', phi', tau2', tau' :: Double
> mu' = -0.00645
> phi' = 0.99
> tau2' = 0.15^2
> tau' = sqrt tau2'

We need to create a statically typed matrix with one dimension the same size as the data so we tie the data size value to the required type.

> nT :: Proxy 500
> nT = Proxy

> valAndType :: KnownNat n => Proxy n -> (Int, S.L n 2)
> valAndType x = (fromIntegral $ natVal x, undefined)

> n :: Int
> n = fst $ valAndType nT

Arbitrarily let us start the process at \(h_0 = 0\).

> h0 :: Double
> h0 = 0.0

We define the process as a stream (aka co-recursively) using the Haskell recursive do construct. It is not necessary to do this but streams are a natural way to think of stochastic processes.

> hs, vols, sds, ys :: V.Vector Double

> hs = V.fromList $ take n $ fst $ runState hsAux (pureMT 1)
>   where
>     hsAux :: (MonadFix m, MonadRandom m) => m [Double]
>     hsAux = mdo { x0 <- sample (Normal (mu' + phi' * h0) tau')
>                 ; xs <- mapM (\x -> sample (Normal (mu' + phi' * x) tau')) (x0:xs)
>                 ; return xs
>                 }

> vols = V.map exp hs

We can plot the volatility (which we cannot observe directly).

And we can plot the log returns.

> sds = V.map sqrt vols

> ys = fst $ runState ysAux (pureMT 2)
>   where
>     ysAux = V.mapM (\sd -> sample (Normal 0.0 sd)) sds

We start with a vague prior for \(h_0\):

> m0, c0 :: Double
> m0 = 0.0
> c0 = 100.0

For convenience

> iC0, iC0m0 :: Double
> iC0 = recip c0
> iC0m0 = iC0 * m0

Rather than really sample from priors for \(\mu\), \(\phi\) and \(\tau^2\), let us cheat and assume we sampled the simulated values!

> mu0, phi0, tau20 :: Double
> mu0 = -0.00645
> phi0 = 0.99
> tau20 = 0.15^2

But assume also that we are still very uncertain about them:

> bigV0, invBigV0 :: S.Sq 2
> bigV0 = S.diag $ S.fromList [100.0, 100.0]
> invBigV0 = inv bigV0

> nu0, s02 :: Double
> nu0 = 10.0
> s02 = (nu0 - 2) / nu0 * tau20

Note that for the inverse gamma \(\mathcal{IG}(\nu_0/2,\, \nu_0 s_0^2/2)\) this gives \(\mathbb{E}[\tau^2] = \frac{\nu_0 s_0^2 / 2}{\nu_0/2 - 1}\) and \(\mathrm{Var}[\tau^2] = \frac{(\nu_0 s_0^2 / 2)^2}{(\nu_0/2 - 1)^2 (\nu_0/2 - 2)}\):

> expectationTau2, varianceTau2 :: Double
> expectationTau2 = (nu0 * s02 / 2) / ((nu0 / 2) - 1)
> varianceTau2 = (nu0 * s02 / 2)^2 / (((nu0 / 2) - 1)^2 * ((nu0 / 2) - 2))

ghci> expectationTau2
2.25e-2

ghci> varianceTau2
1.6874999999999998e-4

Running the Markov Chain

Tuning parameter

> vh :: Double
> vh = 0.1

The burn-in and sample sizes may be too low for actual estimation but will suffice for a demonstration.

> bigM, bigM0 :: Int
> bigM0 = 2000
> bigM = 2000

> multiStep :: StatsM (Double, Double, Double, Double, V.Vector Double)
> multiStep = iterateM_ (singleStep vh ys) (mu0, phi0, tau20, h0, vols)

> runMC :: [((Double, Double), Double)]
> runMC = take bigM $ drop bigM0 $
>         execWriter (evalStateT (sample multiStep) (pureMT 42))
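A short driver (an editor's sketch, not part of the original post) that imports the module above and prints, for example, the posterior mean of \(\mu\) from the collected draws:

    import StochVol (runMC)

    main :: IO ()
    main = do
      -- each draw is ((mu, phi), tau2); take the mu component
      let mus = map (fst . fst) runMC
      print (sum mus / fromIntegral (length mus))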

And now we can look at the distributions of our estimates.

Bibliography

Doucet, Arnaud, and Adam M Johansen. 2011. “A Tutorial on Particle Filtering and Smoothing: Fifteen Years Later.” In Handbook of Nonlinear Filtering. Oxford, UK: Oxford University Press.

Fridman, Moshe, and Lawrence Harris. 1998. “A Maximum Likelihood Approach for Non-Gaussian Stochastic Volatility Models.” Journal of Business & Economic Statistics 16 (3): 284–91.

Geweke, John. 1994. “Bayesian Comparison of Econometric Models.”

Jacquier, Eric, Nicholas G. Polson, and Peter E. Rossi. 1994. “Bayesian Analysis of Stochastic Volatility Models.”

Kim, Sangjoon, Neil Shephard, and Siddhartha Chib. 1998. “Stochastic Volatility: Likelihood Inference and Comparison with ARCH Models.” Review of Economic Studies 65 (3): 361–93. http://ideas.repec.org/a/bla/restud/v65y1998i3p361-93.html.


Categories: Offsite Blogs