# News aggregator

### Haskell usage in the real world...

Hey guys, I've just discovered Haskell, and was wondering if anybody knew of any instances of Haskell being used in the real world? I've read that it's used widely in high frequency trading, but haven't heard much about anything else. Does anyone have any insight into this? Thank you.

submitted by fata5ian [link] [43 comments]

### Douglas M. Auclair (geophf): 'E' is for ... well, ... e

The number e is this neat little number that shows us that mathematics isn't for just us pencil-necked geeks with our pocket protectors.

Okay. Seriously. Did they have to go there in WarGames with the whole "Mr. Potato-Head! Mr. Potato-head!" Seriously? We, the geeks of the world, are not all like that.

Mostly.

Sometimes.

Anyway, I digress.

(Do you think I love digressing? No, not at all, it's just who I am and what I do. That's how I'm rollin' wid it, baybee!)

Okay, so, fer realz, yo: the number e. You can't point to it on a number line ...

Unless you go:

```
|------|------|------|
0      1      2   e  3
```

And there's your e ... kinda, sorta, I just pointed to it.

But 'e' is eminently useful, as it shows up in calculus, particularly for the 'natural' logarithm.

But why? Why do we want to scale along the number e?

Money.

Did I get your attention?

Bernoulli came along one day when someone asked him, 'Hey, I know I get money back when I put money into a bank that pays interest annually, but how much, if any, more will I get back if they compound more often?'

Money compounded continuously grows along the exponential curve of the number e, so, to compute your mortgage or the interest on your savings, you use the 'Pert' formula.

And, yes, you look *lovely!* in that dress, you Pert thing!

'Pert' is this:

P = principal (what you put in)

e = e (duh!)

rt is 'rate times time (in years)'

So, really it's:

Pe^rt

Or, written on your calculator or on your computer in a spreadsheet:

P * exp(r * t)

So, if you put in $1,000 for a year in a bank paying 5.25% interest

(Don't laugh, that's what I did when I was a wee one, wet behind the ears. Remember when savings accounts used to pay out at that rate? No, you don't, because you weren't born then.)

(But I digress)

Then you simply use your Pert formula and come to find that you have:

$1053.90

If the bank had paid interest only once per year, you would've only had

$1052.50 (of course).

A buck-fifty. Big whoop.

The big whoop is this. Do that for 20 years, 30 years, 40 years ...

You'll start to see a huge difference between compounding simply and compounding continuously.
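In Haskell terms (my sketch, not the post's), the two compounding schemes look like this:

```haskell
-- Continuous compounding: the 'Pert' formula, P * e^(r*t)
continuous :: Double -> Double -> Double -> Double
continuous p r t = p * exp (r * t)

-- Simple annual compounding, for comparison: P * (1 + r)^t
annual :: Double -> Double -> Int -> Double
annual p r t = p * (1 + r) ^ t

-- continuous 1000 0.0525 1  ≈ 1053.90
-- annual     1000 0.0525 1  = 1052.50
-- Run both with t = 30 to see the gap widen over a mortgage-length term.
```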

That's your savings. The banks are doing the same thing with you.

You have credit cards? You have a mortgage?

A general I knew used to say, nearly every day: "30-year mortgages are killing our nation's finances!"

Pay off your debts. The longer you take to pay, the more that little number e is going to crush you under its accumulated weight.

And the reverse is true: the bit extra you pay now against the principal you owe translates into a factor of five or ten or twenty-fold, depending on the interest rate and the term of the loan.

Knowing a little bit of math can actually help you decide things more prudently for the long term.

### Why does map even exist?

I was wondering: considering that map is just a "specialization" of fmap with a list as the Functor, why does it even exist?
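For reference, the specialization the question refers to — fmap at the list type behaves exactly like map:

```haskell
-- fmap on the [] Functor and plain map agree on any list:
viaFmap :: [Int]
viaFmap = fmap (* 2) [1, 2, 3]  -- [2,4,6]

viaMap :: [Int]
viaMap = map (* 2) [1, 2, 3]    -- [2,4,6]
```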

submitted by notjefff [link] [26 comments]

### Lennart Augustsson: Haskell error reporting with locations, update

First, the `__LOCATION__` variable gets an abstract type. So

```haskell
extractLocation :: Location -> IO String
```

The error function needs a new exception to throw:

```haskell
data ErrorCallLoc = ErrorCallLoc Location String

{-# LOCATIONTRANSPARENT error #-}
error :: String -> a
error s = throw (ErrorCallLoc __LOCATION__ s)
```

This means that the location string cannot be used when we throw the error. But it can be used where the error is caught, since this can only be done in the IO monad.

Under the hood everything is just as before: Location is just a string. It just can't be manipulated except in the IO monad, so we can pretend it's pure.

### How can I set up wai-handler-devel with Scotty to have automatic code reloading?

### Category Theory by Tom LaGatta

### Haskell in the Newsroom

### How does automatic code reloading in Haskell web frameworks work?

I'm coming from Ruby on Rails, so pardon my ignorance here.

I've been playing with the web frameworks of Haskell, specifically Snap, Happstack and Simple (now wanting to try Scotty). And I've noticed that Simple, for example, mentions that if you install wai-handler-devel, it will automatically reload the code on each change.

But how is this possible given that Haskell has to be compiled? The package description says it embeds GHC inside the handler, but I'm not sure if I understand that correctly.

Does this mean that the WAI middleware that sits in the chain before the application actually loads all of the code in some temporary instance of GHC and interprets it on the fly? How does this work?

The part that I especially don't understand here is that given everything is statically compiled, how can something like reloadBeforeRunning derp actually recompile the derp function on the fly (if that's what's happening). Does this require some Haskell extension to make it work?

submitted by progfu [link] [16 comments]

### Douglas M. Auclair (geophf): 'D' is for Darn it, I have to write a blog entry!

So, eek! Fifteen minutes left to today, and I have yet to complete my entry on 'D' is for dependent types.

*sigh*

SO, 'D' is now for 'Darn it, I have to write a blog entry!'

AND, since this is a math blog, I darn well tootin' better write some math in it.

Well:

1 + 1 = 2.

And, okay, here's a taste of dependent types.

Types tell us what kind of objects are in that set of like-things.

The type, or class, of chickens in the coop lay eggs.

But *dependent* types go one further: the *type* depends on the thing it describes.

So the dependent type of even-numbered things are the sets of things that only have even numbers of things in them. How do we know the type? The type depends on the things. How do we know the things? The things are described by the type.

What do dependent types give us, beside circularity?

Clarity.

If I know I have an even number of things I'm working with, I don't need to ask is_even. Or if I do, I know my answer, from the type itself.

For the type of even-numbered things, is_even always is true and is_odd is always false.

So, one thing I don't need to do is to check what I'm working with, or how many I have, odd or even, because I already know from the type itself.
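Haskell doesn't have full dependent types, but the even-numbered-things idea can be approximated with GADTs. A hypothetical sketch (my names, not from the post): the type itself tracks the parity of the list's length, so a function over even-length lists never needs a runtime check.

```haskell
{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

data Parity = Even | Odd

-- A list whose type records whether its length is even or odd.
data PList (p :: Parity) a where
  PNil   :: PList 'Even a
  PConsE :: a -> PList 'Odd a  -> PList 'Even a
  PConsO :: a -> PList 'Even a -> PList 'Odd a

-- Accepts only even-length lists; no is_even check is needed,
-- because the type already guarantees the answer.
pairUp :: PList 'Even a -> [(a, a)]
pairUp PNil                       = []
pairUp (PConsE x (PConsO y rest)) = (x, y) : pairUp rest
```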

Dependent types come from intuitionistic type theory, developed by Per Martin-Löf. The math for it is rather heady, and using dependent types in programming is 'harder' at the get-go than using ordinary types. But, being very restrictive, they also give much stronger assurance that your program is correct: that it's doing what it's supposed to be doing, or what you want it to be doing. And, with dependent types, I find I am able to express things, correctly and safely, that I am not able to express (correctly and safely) using ordinary types.

...

My *working* draft for this blog entry *had* started out as follows:

'D' is for Dependent Type

We're all familiar with types, right? They're classes, or classifiers, telling us *what* something is (into which class it belongs). One example: this weekend I'm going to be running a 5k with my family, which is a very different thing than running a 5-miler or running just 5 feet. Kilometers, miles, feet: these are units of measure, and they *type* or *class*ify what we're talking about. 'Units of measure,' also, in fact, is a type, so the types 'kilometers,' 'miles' and 'feet' all fall into the type 'units of measure.' Just like our pets are a type of animal: cats. 'Cats' is a type, and itself is a type of something else: 'animal,' or 'pet,' ... depending.

Programming with types is an on-again off-again kind of thing, some programming languages require all things/objects to be typed, some languages require none to be typed and some fall in the middle somewhere between those extremes.

And, as you may guess, different programming languages have different kinds of types or different ways of typing things or different ways of creating types.

What do types give you?

Some say they give you a pain in the ... neck, and nothing more; others say that, by putting things into categories, they provide a distinction of objects operated on, so you know what you can do, and can't do, with the object at hand, simplifying the programming task.

Me, I'm a typeful person, if you hadn't surmised this from the title of the blog which contains this entry. Types are wonderful! They provide a way for me to declare what things are, and, from simply knowing the types of the data I'm working with, I now know a lot about the system and what it does and what its usefulness is.

### Gabriel Gonzalez: Scalable program architectures

Haskell design patterns differ from mainstream design patterns in one important way:

**Conventional architecture:** Combine several components of type A together to generate a "network" or "topology" of type B.

**Haskell architecture:** Combine several components of type A together to generate a new component of the same type A, indistinguishable in character from its substituent parts.

This distinction affects how the two architectural styles evolve as code bases grow.

The conventional architecture requires layering on abstraction on top of abstraction:

Oh no, these Bs are not connectable, so let's make a network of Bs and call that a C.

Well, I want to assemble several Cs, so let's make a network of Cs and call that a D.

...

Wash, rinse, and repeat until you have an unmanageable tower of abstractions.

With a Haskell-style architecture, you don't need to keep layering on abstractions to preserve combinability. When you combine things together the result is still itself combinable. You don't distinguish between components and networks of components.

In fact, this principle should be familiar to anybody who knows basic arithmetic. When you combine a bunch of numbers together you get back a number:

3 + 4 + 9 = 16

Zero or more numbers go in and exactly one number comes out. The resulting number is itself combinable. You don't have to learn about "web"s of numbers or "web"s of "web"s of numbers.

If elementary school children can master this principle, then perhaps we can, too. How can we make programming more like addition?

Well, addition is simple because we have (+) and 0. (+) ensures that we can always convert **more than** one number into exactly one number:

```haskell
(+) :: Int -> Int -> Int
```

... and 0 ensures that we can always convert **less than** one number into exactly one number by providing a suitable default:

```haskell
0 :: Int
```

This will look familiar to Haskell programmers: these type signatures resemble the methods of the Monoid type class:

```haskell
class Monoid m where
    -- `mappend` is analogous to `(+)`
    mappend :: m -> m -> m

    -- `mempty` is analogous to `0`
    mempty :: m
```

In other words, the Monoid type class is the canonical example of this Haskell architectural style. We use mappend and mempty to combine 0 or more ms into exactly 1 m. The resulting m is still combinable.
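For instance (my example, using the standard list Monoid), mconcat folds any number of ms — including zero — into exactly one:

```haskell
-- mconcat = foldr mappend mempty; for lists, mappend is (++) and mempty is []
combined :: [Int]
combined = mconcat [[1, 2], [3], [4, 5]]  -- [1,2,3,4,5]

-- Zero inputs: mempty supplies the default.
nothingIn :: [Int]
nothingIn = mconcat []  -- []
```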

Not every Haskell abstraction implements Monoid, nor do they have to because category theory takes this basic Monoid idea and generalizes it to more powerful domains. Each generalization retains the same basic principle of preserving combinability.

For example, a Category is just a typed monoid, where not all combinations type-check:

```haskell
class Category cat where
    -- `(.)` is analogous to `(+)`
    (.) :: cat b c -> cat a b -> cat a c

    -- `id` is analogous to `0`
    id :: cat a a
```
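In the function instance (my example, using the Prelude's (.) and id), composition really does behave like this typed addition: combining components yields another component of the same shape, and id drops out like a zero.

```haskell
step1 :: Int -> Int
step1 = (* 2)

step2 :: Int -> Int
step2 = (+ 1)

-- (.) combines two components into one of the same type;
-- id is the do-nothing unit and vanishes from the pipeline.
pipeline :: Int -> Int
pipeline = step2 . step1 . id
```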

... a Monad is like a monoid where we combine functors "vertically":

```haskell
-- Slightly modified from the original type class
class Functor m => Monad m where
    -- `join` is analogous to `(+)`
    join :: m (m a) -> m a

    -- `return` is analogous to `0`
    return :: a -> m a
```
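A quick illustration (my example) of join collapsing two list layers into one:

```haskell
import Control.Monad (join)

-- Two nested monadic layers in, exactly one layer out.
flattened :: [Int]
flattened = join [[1, 2], [3]]  -- [1,2,3]
```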

... and an Applicative is like a monoid where we combine functors "horizontally":

```haskell
-- Greatly modified, but equivalent to, the original type class
class Functor f => Applicative f where
    -- `mult` is analogous to `(+)`
    mult :: f a -> f b -> f (a, b)

    -- `unit` is analogous to `0`
    unit :: f ()
```
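These modified methods are recoverable from the standard Applicative class; a short sketch:

```haskell
import Control.Applicative (liftA2)

-- "Horizontal" combination of two functor layers into one.
mult :: Applicative f => f a -> f b -> f (a, b)
mult = liftA2 (,)

-- The unit for that combination.
unit :: Applicative f => f ()
unit = pure ()
```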

Category theory is full of generalized patterns like these, all of which try to preserve that basic intuition we had for addition. We convert **more than** one *thing* into exactly one *thing* using something that resembles addition and we convert **less than** one *thing* into exactly one *thing* using something that resembles zero. Once you learn to think in terms of these patterns, programming becomes as simple as basic arithmetic: combinable components go in and exactly one combinable component comes out.

These abstractions scale limitlessly because they always preserve combinability; therefore we never need to layer further abstractions on top. This is one reason why you should learn Haskell: you learn how to build flat architectures.

### www.fpcomplete.com

### Dan Piponi (sigfpe): This is a test

**The dual of an optimization problem**

Consider the optimization problem

$$ \begin{aligned} \mbox{min} & f(x) \\ \mbox{subject to} & x\le a\\ \end{aligned} $$

illustrated below and where $f$ is convex:

The essential features are there in this problem despite it being the most basic problem with an inequality.

Now consider the optimum given as a function of $a$. So $f_+(a)=\inf_{x\le a}f(x)$. You can think of $f_+$ as representing the "best value of $f$ so far". Its graph looks like this:

It's basically like the original $f$ except that we only have the descending parts and none of the ascending parts.

Now we're going to construct $f_+$ by a slightly roundabout way, not by considering the function itself, but its *epigraph*, the set of points above the graph. Here's the epigraph for the original $f$:

Notice I've drawn a bunch of lines just touching the epigraph, i.e. its subtangents. In general, any convex shape can be expressed as the intersection of half-planes. Those subtangents indicate a handful of the boundaries of those half-planes. We can construct those subtangents by considering the lines $y=\lambda x$ and then pushing those lines up or down so each is as high as possible without crossing $f$. We can find out how much pushing we need by solving the optimisation:

$$ \begin{aligned} \mbox{maximise} & \lambda x-f(x) \end{aligned} $$

For any $\lambda$ we call the optimal value $f^\ast(\lambda)$. $f^\ast$ is also known as the Legendre transform or Fenchel dual. It's basically telling us which half-planes define our epigraph.

Interestingly, reconstructing the function $f$ from $f^\ast$ requires exactly the same expression with $f^\ast$ substituted for $f$. So the Fenchel dual also takes a collection of half-planes and gives you back the original function.
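As a concrete check (my example, not from the post): take $f(x)=x^2$. Then

$$f^\ast(\lambda)=\sup_x\,\big(\lambda x - x^2\big)=\frac{\lambda^2}{4},\qquad\text{attained at }x=\tfrac{\lambda}{2},$$

and applying the transform again recovers the original function:

$$\sup_\lambda\,\Big(\lambda x-\frac{\lambda^2}{4}\Big)=x^2,\qquad\text{attained at }\lambda=2x.$$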

Now we construct the epigraph of $f_+$. We just want the descending parts of $f$. It's easy to construct the epigraph, we take the intersection of just those half-planes whose boundaries are decreasing. Here's a picture:

So here's a process for constructing $f_+$: we take the Fenchel dual of $f$, we throw away the "half" where $\lambda\le0$, and now take the inverse Fenchel dual (which is in fact just the Fenchel dual).

Now let's go back to the original problem and construct its dual as described in numerous textbooks. We form the Lagrangian

$$L(a,x,\lambda)=f(x)+\lambda (x-a)$$

We then form the function

$$g(a,\lambda)=\min_x L(a,x,\lambda)$$

This is basically (modulo a sign) the Fenchel dual of $f$ with an extra $-\lambda a$ term. The dual problem, as in the textbooks, is

$$ \begin{aligned} \mbox{maximise} & g(a,\lambda)\\ \mbox{such that} & \lambda\ge0\\ \end{aligned} $$

Remembering that $-\lambda a$ term, that's almost the inverse Fenchel dual, except we've maximised over just "half" of the $\lambda$s, those for which $\lambda\ge0$. When we compute the optimum of the dual problem we get $h(a)=\max_{\lambda\ge0} g(a,\lambda)$. If the problem is convex, we've just computed $f_+$ by another path. So this gives what I think is a clear picture of what the dual problem is. Forming the dual problem is (modulo a vertical shift) converting the original problem into finding the subtangents. Solving the dual problem is converting that back to a function again, but using only the downward-sloping lines. So this illustrates the primal-dual theorem: we get the same result in both the primal and dual problems, because the dual problem is simply solving our original problem via epigraphs.

**The Riemann-Hilbert problem and the Hilbert transform**

Here's my latest toy: a software-defined radio I bought for $13 on Amazon. Here's what it does.

You have an electromagnetic field in the vicinity of the antenna. Let's pretend such fields, $\phi$, are scalars. Just about any kind of radio uses a carrier modulated in some way. For example, AM radio encodes the audio (represented as a function of time, $A$) as something like

$$\phi=A(t)\cos(\omega t)$$

### Idris 0.9.12 released with major laziness, codata, and documentation updates plus metavars in types

### FP Complete: Calculating the Minimum Variance Portfolio in R, Pandas and IAP

As part of producing a demo for FP Complete's new IAP product, I wound up implementing the Minimum Variance Portfolio calculation for a stock portfolio in R, then in Haskell for the IAP, and finally in Python using the NumPy and SciPy extension libraries. I want to look at the process of writing each of these, and the resulting code in this article. I am not an expert in these things, so if you know a better way to do them, please let me know. In particular, the R example was borrowed from someone else.

**The function**

First, we find a version of the formula for MVP that can be converted into those systems. I like the one used by Yue Kuen KWOK:

ω = Ω⁻¹**1** / (**1** ⋅ Ω⁻¹**1**)

where Ω is the covariance matrix of the returns for the stocks in question.

**R version**

The R version is fairly straightforward - we just need to put the pieces together:

```r
minvar <- function(px) {
  Rt <- px[-1, ] / px[-nrow(px), ] - 1
  cov.Rt <- cov(Rt)
  one.vec <- rep(1, nrow(cov.Rt))
  num <- solve(cov.Rt) %*% one.vec
  den <- as.numeric(t(one.vec) %*% num)
  return(num / den)
}
```

Calculating returns can be done with standard matrix operations and slicing. The covariance function is built in, as is matrix inversion (solve). Since the numerator Ω⁻¹**1** appears in the denominator, I reuse its value there.

All the array operations were documented in the same place. That I only needed one unit vector was a bit of a surprise, but R sized it dynamically to work. That I had to transpose the unit vector and use the matrix multiplication operator (%*%) to get a dot product was a less pleasant surprise, but is apparently a standard R idiom.

**Python version**

The Python version is almost a direct copy of the R version.

```python
def minvar(prices):
    cov = np.cov((prices[1:] / prices[:-1] - 1).transpose())
    vu = np.array(cov.shape[1] * [1], float)
    num = np.dot(np.linalg.inv(cov), vu)
    den = np.dot(vu, num)
    return num / den
```

In this case, I passed the returns matrix to the covariance function directly. The NumPy dot function performs both matrix and dot products, and again the unit vector adopts its length dynamically.

Documentation was a bit more scattered. Being a mix of traditional imperative and object-oriented code, some functions are functions in the module, and others are object methods. The biggest surprise was that the returns matrix needed to be transposed before the covariance was calculated.

**Haskell version**

Haskell is not quite as obvious a translation of the R and Python versions, but it is a more straightforward translation of the original formula - once you notice that Ω⁻¹**1** has been factored into tv. It reads from top to bottom like the original, with the main formula at the top and the various terms defined below.

Returns again use the standard Haskell idiom for slicing the array. This is a bit more verbose than either R or Python, as they are functions rather than special syntax.

```haskell
minVariance prices = tv / scalar (tvu <.> tv)
  where rets = dropRows 1 prices / takeRows (rows prices - 1) prices - 1
        (_, cov) = meanCov $ rets
        vu  = constant 1.0 (cols cov)
        tvu = constant 1.0 (rows cov)
        tv  = inv cov <> vu
```

The documentation was again straightforward, with everything being a function in the hmatrix package. In this case, both unit vectors were needed, as Haskell does not scale them automatically. It was the least surprising in at least one way: it used a distinct dot product operator for the two vectors rather than transposing the unit vector - whether implicitly, like Python, or explicitly, like R - in a matrix product.

**Performance**

While performance comparisons with IAP aren't very useful, as it runs in the cloud so doing comparisons on identical systems may be impossible, Haskell does have one interesting advantage.

All three of these systems have the same basic architecture - a high-level language running in an interactive environment, with bindings to low-level, fast implementations of the matrix manipulations. Haskell adds the ability to compile your program into native x86_64 code. Doing so reduced the wall clock time of this short demo by roughly two orders of magnitude.

**Summary**

I found the IAP version a little easier to deal with. Having custom operators and functions for everything - and this will only get better with time - along with being able to use the mathematical layout of the *where* statement just made things a little easier to deal with. While not having unit vectors that automatically size themselves - or transpose into matrices - is a little inconvenient, this also exposed a problem in the original R version, in that the unit vector's length was initially set wrong. I'm not sure that will make any real difference, but the thought that the language can catch such errors for me is comforting.

### Nested nested lists [noob question]

I'm just starting to learn haskell. I'm a complete noob. I wanted to construct a function that would take something and wrap it in a list.

```haskell
matryoshka :: a -> [a]
matryoshka x = [x]
```

```
> matryoshka 3
[3]
> matryoshka [3]
[[3]]
> (matryoshka . matryoshka) 3
[[3]]
```

All good. Now, I want a function that does this to arbitrary depth.

```haskell
doll n x = iterate matryoshka n
```

This does not work, because matryoshka is of type (a -> [a]), and iterate wants (a -> a).

My question is, is there a way to write this sort of function with the standard Haskell list? I know that it should be possible to make my own type and do it that way, but it would be neater to do it using the built-in types.
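The poster's hunch is right: with plain lists, each wrap changes the type, so the depth must be known at compile time. A minimal sketch of the custom-type route the poster mentions (hypothetical names, not an official answer):

```haskell
-- A type that hides the nesting depth behind one constructor pair.
data Nested a = Elem a | List [Nested a]
  deriving (Show, Eq)

wrap :: Nested a -> Nested a
wrap x = List [x]

-- Wrap a value n times; the depth is now an ordinary runtime Int.
doll :: Int -> a -> Nested a
doll n x = iterate wrap (Elem x) !! n
```

For example, doll 2 3 yields List [List [Elem 3]].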

submitted by the_noodle [link] [12 comments]

### Philip Wadler: FAQ: The snake fight portion of your thesis defense

Q: Do I have to kill the snake?

A: University guidelines state that you have to “defeat” the snake. There are many ways to accomplish this. Lots of students choose to wrestle the snake. Some construct decoys and elaborate traps to confuse and then ensnare the snake. One student brought a flute and played a song to lull the snake to sleep. Then he threw the snake out a window.

Q: Does everyone fight the same snake?

A: No. You will fight one of the many snakes that are kept on campus by the facilities department.

Spotted by Garrett Morris.