News aggregator

Lennart Augustsson: Haskell error reporting with locations, update

Planet Haskell - Sat, 04/05/2014 - 1:53pm
Since some people (I'm among them) dislike impure features in Haskell, I thought I'd present a slight variation on the error location feature that is "pure".

First, the __LOCATION__ variable gets an abstract type. So

data Location

__LOCATION__ :: Location

It's defined in the Prelude and always in scope. The type cannot be compared, shown, or anything. There's just one thing that can be done, namely:
extractLocation :: Location -> IO String

The error function needs a new exception to throw:
data ErrorCallLoc = ErrorCallLoc Location String

{-# LOCATIONTRANSPARENT error #-}
error :: String -> a
error s = throw (ErrorCallLoc __LOCATION__ s)

This means that the location string cannot be used when we throw the error. But it can be used where the error is caught, since this can only be done in the IO monad.

Under the hood everything is just as before: Location is just a string. It just can't be manipulated except in the IO monad, so we can pretend it's pure.

newtype Location = Location String

extractLocation (Location s) = return s

It now looks a lot like Michael Snoyman's proposal.
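For concreteness, here's a sketch of how a caller might recover the location when catching the error. It assumes the declarations above are in scope and that ErrorCallLoc has been given Show and Exception instances (the helper name is mine, not part of the proposal):

import Control.Exception

-- run an action and report where any `error` call inside it originated
reportError :: IO a -> IO a
reportError act = act `catch` \(ErrorCallLoc loc msg) -> do
    locStr <- extractLocation loc
    putStrLn ("error called at " ++ locStr ++ ": " ++ msg)
    throwIO (ErrorCall msg)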
Categories: Offsite Blogs

Philip Wadler: A new SQL injection attack?

Planet Haskell - Sat, 04/05/2014 - 6:47am
Against speed cameras?
Categories: Offsite Blogs

Haskell in the Newsroom

del.icio.us/haskell - Sat, 04/05/2014 - 6:28am
Categories: Offsite Blogs

How does automatic code reloading in Haskell web frameworks work?

Haskell on Reddit - Sat, 04/05/2014 - 3:06am

I'm coming from Ruby on Rails, so pardon my ignorance here.

I've been playing with the web frameworks of Haskell, specifically Snap, Happstack and Simple (now wanting to try Scotty). And I've noticed that Simple, for example, mentions that if you install wai-handler-devel, it will automatically reload the code on each change.

But how is this possible given that Haskell has to be compiled? The package description says it embeds GHC inside the handler, but I'm not sure if I understand that correctly.

Does this mean that the WAI middleware that sits in the chain before the application actually loads all of the code in some temporary instance of GHC and interprets it on the fly? How does this work?

The part that I especially don't understand here is this: given that everything is statically compiled, how can something like reloadBeforeRunning derp actually recompile the derp function on the fly (if that's what's happening)? Does this require some Haskell extension to make it work?
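To make my mental model concrete, this is roughly the kind of thing I imagine is going on - a sketch using the hint package, which may or may not be what wai-handler-devel actually does (the module and value names are made up):

import Language.Haskell.Interpreter

-- load MyApp.hs at runtime and pull a value out of it with an embedded interpreter
loadGreeting :: IO (Either InterpreterError String)
loadGreeting = runInterpreter $ do
    loadModules ["MyApp.hs"]             -- hypothetical source file
    setTopLevelModules ["MyApp"]
    interpret "greeting" (as :: String)  -- hypothetical top-level value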

submitted by progfu
[link] [16 comments]
Categories: Incoming News

Douglas M. Auclair (geophf): 'D' is for Darn it, I have to write a blog entry!

Planet Haskell - Fri, 04/04/2014 - 10:03pm
eheh, eheh, eheh!

So, eek! Fifteen minutes left to today, and I have yet to complete my entry on 'D' is for dependent types.

*sigh*

SO, 'D' is now for 'Darn it, I have to write a blog entry!'

AND, since this is a math blog, I darn well tootin' better write some math in it.

Well:

1 + 1 = 2.

And, okay, here's a taste of dependent types.

Types tell us what kind of objects are in that set of like-things.

The type, or class, of chickens in the coop lay eggs.

But dependent types go one further: the type depends on the thing it describes.

So the dependent type of even-numbered things is the set of things that only have even numbers of things in them. How do we know the type? The type depends on the things. How do we know the things? The things are described by the type.

What do dependent types give us, beside circularity?

Clarity.

If I know I have an even number of things I'm working with, I don't need to ask is_even. Or if I do, I know my answer, from the type itself.

For the type of even-numbered things, is_even always is true and is_odd is always false.

So, one thing I don't need to do is to check what I'm working with, or how many I have, odd or even, because I already know from the type itself.

Dependent types come from intuitionistic type theory, developed by Per Martin-Löf. The math for it is rather heady, and using dependent types in programming is 'harder' at the get-go than using ordinary types, but, being very restrictive, they also give much stronger assurance that your program is correct: that it's doing what it's supposed to be doing, or what you want it to be doing. And, with dependent types, I find I am able to express things, correctly and safely, that I am not able to do (correctly and safely) using ordinary types.
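Haskell doesn't have full dependent types (languages like Agda and Idris do), but just as a taste, GHC's DataKinds and GADTs extensions let you approximate the flavor. Here's a little sketch of mine: a list whose length is part of its type, so asking for the head of an empty list simply can't be written.

{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

-- type-level natural numbers
data Nat = Z | S Nat

-- a list whose length is carried in its type
data Vec (n :: Nat) a where
    Nil  :: Vec 'Z a
    Cons :: a -> Vec n a -> Vec ('S n) a

-- the type says the vector is non-empty, so no runtime check is needed
safeHead :: Vec ('S n) a -> a
safeHead (Cons x _) = x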

...

My working draft for this blog entry had started out as follows:


'D' is for Dependent Type
We're all familiar with types, right? They're classes, or classifiers, telling us what something is (into which class it belongs). One example is this weekend I'm going to be running a 5k with my family, which is a very different thing than running a 5 miler or running just 5 feet. Kilometers, miles, feet, these are units of measure, and they type or classify what we're talking about. 'Units of measure,' also, in fact, is a type, so the types 'kilometers,' 'miles' and 'feet' all fall into the type 'units of measure.' Just like our pets are a type of animal: cats. 'Cats' is a type, and itself is a type of something else: 'animal,' or 'pet,' ... depending.
Programming with types is an on-again, off-again kind of thing: some programming languages require all things/objects to be typed, some languages require none to be typed, and some fall in the middle somewhere between those extremes.
And, as you may guess, different programming languages have different kinds of types or different ways of typing things or different ways of creating types.
What do types give you?
Some say they give you a pain in the ... neck, and nothing more; others say that, by putting things into categories, they provide a distinction of objects operated on, so you know what you can do, and can't do, with the object at hand, simplifying the programming task.
Me, I'm a typeful person, if you hadn't surmised this from the title of the blog which contains this entry. Types are wonderful! They provide a way for me to declare what things are, and, from simply knowing the types of the data I'm working with, I now know a lot about the system and what it does and what its usefulness is.
Categories: Offsite Blogs

Gabriel Gonzalez: Scalable program architectures

Planet Haskell - Fri, 04/04/2014 - 9:33pm

Haskell design patterns differ from mainstream design patterns in one important way:

  • Conventional architecture: Combine several components of type A together to generate a "network" or "topology" of type B

  • Haskell architecture: Combine several components together of type A to generate a new component of the same type A, indistinguishable in character from its substituent parts

This distinction affects how the two architectural styles evolve as code bases grow.

The conventional architecture requires layering abstraction on top of abstraction:

Oh no, these Bs are not connectable, so let's make a network of Bs and call that a C.

Well, I want to assemble several Cs, so let's make a network of Cs and call that a D

...

Wash, rinse, and repeat until you have an unmanageable tower of abstractions.

With a Haskell-style architecture, you don't need to keep layering on abstractions to preserve combinability. When you combine things together the result is still itself combinable. You don't distinguish between components and networks of components.

In fact, this principle should be familiar to anybody who knows basic arithmetic. When you combine a bunch of numbers together you get back a number:

3 + 4 + 9 = 16

Zero or more numbers go in and exactly one number comes out. The resulting number is itself combinable. You don't have to learn about "web"s of numbers or "web"s of "web"s of numbers.

If elementary school children can master this principle, then perhaps we can, too. How can we make programming more like addition?

Well, addition is simple because we have (+) and 0. (+) ensures that we can always convert more than one number into exactly one number:

(+) :: Int -> Int -> Int

... and 0 ensures that we can always convert less than one number into exactly one number by providing a suitable default:

0 :: Int

This will look familiar to Haskell programmers: these type signatures resemble the methods of the Monoid type class:

class Monoid m where
    -- `mappend` is analogous to `(+)`
    mappend :: m -> m -> m

    -- `mempty` is analogous to `0`
    mempty :: m

In other words, the Monoid type class is the canonical example of this Haskell architectural style. We use mappend and mempty to combine 0 or more ms into exactly 1 m. The resulting m is still combinable.
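For instance, lists form a Monoid under (++) and [], and combining any number of them still gives back one list that you can keep combining - a minimal sketch:

import Data.Monoid

combined :: [Int]
combined = mconcat [[1, 2], [3], [], [4, 5]]   -- [1,2,3,4,5], still just a list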

Not every Haskell abstraction implements Monoid, nor does it have to, because category theory takes this basic Monoid idea and generalizes it to more powerful domains. Each generalization retains the same basic principle of preserving combinability.

For example, a Category is just a typed monoid, where not all combinations type-check:

class Category cat where
    -- `(.)` is analogous to `(+)`
    (.) :: cat b c -> cat a b -> cat a c

    -- `id` is analogous to `0`
    id :: cat a a

... a Monad is like a monoid where we combine functors "vertically":

-- Slightly modified from the original type class
class Functor m => Monad m where
    -- `join` is analogous to `(+)`
    join :: m (m a) -> m a

    -- `return` is analogous to `0`
    return :: a -> m a

... and an Applicative is like a monoid where we combine functors "horizontally":

-- Greatly modified, but equivalent to, the original type class
class Functor f => Applicative f where
    -- `mult` is analogous to `(+)`
    mult :: f a -> f b -> f (a, b)

    -- `unit` is analogous to `0`
    unit :: f ()
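To see why this presentation is equivalent to the Applicative class that actually ships in base, note that the two sets of operations are interdefinable - a quick sketch (the helper names are mine):

import Control.Applicative

mult :: Applicative f => f a -> f b -> f (a, b)
mult = liftA2 (,)

unit :: Applicative f => f ()
unit = pure ()

-- and (<*>) can be recovered from mult plus fmap
apply :: Applicative f => f (a -> b) -> f a -> f b
apply fs xs = fmap (\(f, x) -> f x) (mult fs xs)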

Category theory is full of generalized patterns like these, all of which try to preserve that basic intuition we had for addition. We convert more than one thing into exactly one thing using something that resembles addition and we convert less than one thing into exactly one thing using something that resembles zero. Once you learn to think in terms of these patterns, programming becomes as simple as basic arithmetic: combinable components go in and exactly one combinable component comes out.

These abstractions scale limitlessly because they always preserve combinability, so we never need to layer further abstractions on top. This is one reason why you should learn Haskell: you learn how to build flat architectures.

Categories: Offsite Blogs

www.fpcomplete.com

del.icio.us/haskell - Fri, 04/04/2014 - 8:25pm
Categories: Offsite Blogs

Dan Piponi (sigfpe): This is a test

Planet Haskell - Fri, 04/04/2014 - 7:05pm
Introduction

I like to see connections between different branches of mathematics. Here's an application of the same technique in two completely different areas that I haven't seen in the literature. It might take a bit of work to convince you that they are in fact the same thing, so here goes.

The dual of an optimization problem

Consider the optimization problem
$$ \begin{aligned} \mbox{min} & f(x) \\ \mbox{subject to} & x\le a\\ \end{aligned} $$
illustrated below and where $f$ is convex:

The essential features are there in this problem despite it being the most basic problem with an inequality.

Now consider the optimum given as a function of $a$. So $f_+(a)=\inf_{x\le a}f(x)$. You can think of $f_+$ as representing the "best value of $f$ so far". Its graph looks like this:

It's basically like the original $f$ except that we only have the descending parts and none of the ascending parts.

Now we're going to construct $f_+$ by a slightly roundabout way, not by considering the function itself, but its epigraph, the set of points above the graph. Here's the epigraph for the original $f$:

Notice I've drawn a bunch of lines just touching the epigraph, ie. its subtangents. In general, any convex shape can be expressed as the intersection of half-planes. Those subtangents indicate a handful of the boundaries of those half-planes. We can construct those subtangents by considering the lines $y=\lambda x$ and then pushing those lines up or down so each is as high as possible without crossing $f$. We can find out how much pushing we need by solving the optimisation:
$$ \begin{aligned} \mbox{maximise} & \lambda x-f(x) \end{aligned} $$
For any $\lambda$ we call the optimal value $f^\ast(\lambda)$. $f^\ast$ is also known as the Legendre transform or Fenchel dual. It's basically telling us which half-planes define our epigraph.

Interestingly, reconstructing the function $f$ from $f^\ast$ requires exactly the same expression with $f^\ast$ substituted for $f$. So the Fenchel dual also takes a collection of half-planes and gives you back the original function.

Now we construct the epigraph of $f_+$. We just want the descending parts of $f$. It's easy to construct the epigraph, we take the intersection of just those half-planes whose boundaries are decreasing. Here's a picture:

So here's a process for constructing $f_+$: we take the Fenchel dual of $f$, we throw away the "half" where $\lambda\le0$, and now take the inverse Fenchel dual (which is in fact just the Fenchel dual).

Now let's go back to the original problem and construct its dual as described in numerous textbooks. We form the Lagrangian
$$L(a,x,\lambda)=f(x)+\lambda (x-a)$$
We then form the function
$$g(a,\lambda)=\min_x L(a,x,\lambda)$$
This is basically (modulo a sign) the Fenchel dual of $f$ with an extra $-\lambda a$ term. The dual problem, as in the textbooks, is
$$ \begin{aligned} \mbox{maximise} & g(a,\lambda)\\ \mbox{such that} & \lambda\ge0\\ \end{aligned} $$
Remembering that $-\lambda a$ term, that's almost the inverse Fenchel dual except we've maximised over just "half" of the $\lambda$s, those for which $\lambda\ge0$. When we compute the optimum of the dual problem we get $h(a)=\max_{\lambda\ge0} g(a,\lambda)$. If the problem is convex, we've just computed $f_+$ by another path. So this gives what I think is a clear picture of what the dual problem is. Forming the dual problem is (modulo a vertical shift) converting the original problem into finding the subtangents. Solving the dual problem is converting that back to a function again, but using only the downward sloping lines. So this illustrates the primal-dual theorem. We get the same result in both the primal and dual problems as the dual problem is simply solving our original problem via epigraphs.
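As a concrete check, take $f(x)=\tfrac{1}{2}x^2$ (this worked example is mine, not part of the original argument):
$$ \begin{aligned} g(a,\lambda) &= \min_x \left(\tfrac{1}{2}x^2+\lambda(x-a)\right) = -\tfrac{1}{2}\lambda^2-\lambda a \\ \max_{\lambda\ge0} g(a,\lambda) &= \begin{cases}\tfrac{1}{2}a^2 & \mbox{if } a\le0\\ 0 & \mbox{if } a>0\end{cases} = \inf_{x\le a}f(x) = f_+(a) \end{aligned} $$
so the dual problem does indeed recover the "best value so far" function $f_+$.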

The Riemann-Hilbert problem and the Hilbert Transform

Here's my latest toy: a software defined radio I bought for $13 on Amazon. Here's what it does.
You have an electromagnetic field in the vicinity of the antenna. Let's pretend such fields, $\phi$, are scalars. Just about any kind of radio uses a carrier modulated in some way. For example, AM radio encodes the audio (represented as a function of time, $A$) as something like
$$\phi=A(t)\cos(\omega t)$$
Categories: Offsite Blogs

FP Complete: Calculating the Minimum Variance Portfolio in R, Pandas and IAP

Planet Haskell - Fri, 04/04/2014 - 2:07pm
Introduction

As part of producing a demo for FP Complete's new IAP product, I wound up implementing the Minimum Variance Portfolio calculation for a stock portfolio in R, then in Haskell for the IAP, and finally in Python using the NumPy and SciPy extension libraries. In this article, I want to look at the process of writing each of these and at the resulting code. I am not an expert in these things, so if you know a better way to do them, please let me know. In particular, the R example was borrowed from someone else.

The function

First, we find a version of the formula for MVP that can be converted into those systems. I like the one used by Yue Kuen KWOK:

ω = Ω⁻¹1 / (1 ⋅ Ω⁻¹1)

where Ω is the covariance matrix of the returns for the stocks in question and 1 is a vector of ones.

R version

The R version is fairly straightforward - we just need to put the pieces together:

minvar <- function (px) {
    Rt <- px[-1,]/px[-nrow(px),]-1
    cov.Rt <- cov(Rt)
    one.vec <- rep(1,nrow(cov.Rt))
    num <- solve(cov.Rt) %*% one.vec
    den <- as.numeric(t(one.vec) %*% num)
    return(num/den)
}

Calculating returns can be done with standard matrix operations and slicing. The covariance function is built in, as is matrix inversion (solve). Since the numerator Ω⁻¹1 appears in the denominator, I reuse its value there.

All the array operations were documented in the same place. That I only needed one unit vector was a bit of a surprise, but R sized it dynamically to work. That I had to transpose the unit vector and use the matrix multiplication operator (%*%) to get a dot product was a less pleasant surprise, but is apparently a standard R idiom.

Python version

The Python version is almost a direct copy of the R version.

import numpy as np

def minvar(prices):
    cov = np.cov((prices[1:] / prices[:-1] - 1).transpose())
    vu = np.array(cov.shape[1] * [1], float)
    num = np.dot(np.linalg.inv(cov), vu)
    den = np.dot(vu, num)
    return num / den

In this case, I passed the returns matrix to the covariance function directly. The NumPy dot function performs both matrix products and dot products, and again the unit vector adopts its length dynamically.

Documentation was a bit more scattered. Because NumPy mixes traditional imperative and object-oriented styles, some functions are functions in the module, and others are object methods. The biggest surprise was that the returns matrix needed to be transposed before the covariance was calculated.

Haskell version

The Haskell version is not quite as obvious a translation of the R and Python versions, but it is a more straightforward translation of the original formula - once you notice that Ω⁻¹1 has been factored into tv. It reads from top to bottom like the original, with the main formula at the top and the various terms defined below.

Returns again use the standard Haskell idiom for slicing the array. This is a bit more verbose than either R or Python, as the slicing operations are functions rather than special syntax.

minVariance prices = tv / scalar (tvu <.> tv)
  where rets = dropRows 1 prices / takeRows (rows prices - 1) prices - 1
        (_, cov) = meanCov $ rets
        vu = constant 1.0 (cols cov)
        tvu = constant 1.0 (rows cov)
        tv = inv cov <> vu

The documentation was again straightforward, with everything being a function in the hmatrix package. In this case, both unit vectors were needed, as Haskell does not size them automatically. It was the least surprising in at least one way: it uses a distinct dot product operator for the two vectors rather than transposing the unit vector - implicitly like Python or explicitly like R - and using matrix multiplication.
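For what it's worth, here's a hypothetical invocation of the function above (the price matrix is made up - one row per day, one column per stock - and it assumes the minVariance definition and hmatrix's Numeric.LinearAlgebra are in scope):

import Numeric.LinearAlgebra

samplePrices :: Matrix Double
samplePrices = fromLists [ [100, 50]
                         , [101, 49]
                         , [103, 50]
                         , [102, 52]
                         , [104, 51] ]

main :: IO ()
main = print (minVariance samplePrices)   -- prints the two portfolio weights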

Performance

While performance comparisons with IAP aren't very useful (it runs in the cloud, so doing comparisons on identical systems may be impossible), Haskell does have one interesting advantage.

All three of these systems have the same basic architecture - a high-level language running in an interactive environment, with bindings to low-level, fast implementations of the matrix manipulations. Haskell adds the ability to compile your program into native x86_64 code. Doing so reduced the wall clock time of this short demo by roughly two orders of magnitude.

Summary

I found the IAP version a little easier to work with. Having custom operators and functions for everything - and this will only get better with time - along with being able to use the mathematical layout of the where clause, just made things a little simpler. While not having unit vectors that automatically size themselves - or transpose into matrices - is a little inconvenient, this also exposed a problem in the original R version, in that the unit vector's length was initially set wrong. I'm not sure that will make any real difference, but the thought that the language can catch such errors for me is comforting.

Categories: Offsite Blogs

Nested nested lists [noob question]

Haskell on Reddit - Fri, 04/04/2014 - 11:36am

I'm just starting to learn haskell. I'm a complete noob. I wanted to construct a function that would take something and wrap it in a list.

matryoshka :: a -> [a]
matryoshka x = [x]

> matryoshka 3
[3]
> matryoshka [3]
[[3]]
> (matryoshka . matryoshka) 3
[[3]]

All good. Now, I want a function that does this to arbitrary depth.

doll n x = iterate matryoshka n

This does not work, because matryoshka is of type (a -> [a]), and iterate wants (a -> a).

My question is, is there a way to write this sort of function with the standard Haskell list? I know that it should be possible to make my own type and do it that way, but it would be neater to do it using the built-in types.
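(For reference, this is roughly what I mean by making my own type - a sketch where each Layer adds one more list wrapper; it needs polymorphic recursion, hence the signature:)

data Doll a = Core a | Layer (Doll [a])

doll :: Int -> a -> Doll a
doll 0 x = Core x
doll n x = Layer (doll (n - 1) [x])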

submitted by the_noodle
[link] [12 comments]
Categories: Incoming News

Philip Wadler: FAQ: The snake fight portion of your thesis defense

Planet Haskell - Fri, 04/04/2014 - 11:14am
FAQ: The snake fight portion of your thesis defense.

Q: Do I have to kill the snake?
A: University guidelines state that you have to “defeat” the snake. There are many ways to accomplish this. Lots of students choose to wrestle the snake. Some construct decoys and elaborate traps to confuse and then ensnare the snake. One student brought a flute and played a song to lull the snake to sleep. Then he threw the snake out a window.
Q: Does everyone fight the same snake?
A: No. You will fight one of the many snakes that are kept on campus by the facilities department.
Spotted by Garrett Morris.
Categories: Offsite Blogs

Snap for Beginners

del.icio.us/haskell - Fri, 04/04/2014 - 10:00am
Categories: Offsite Blogs

Is there a function that converts Bool to Maybe?

Haskell on Reddit - Fri, 04/04/2014 - 6:37am

Hi there

Recently I had a Bool that told me whether to construct a value. That makes my value optional, so a Maybe. After doing this a few times, I extracted the following function out of it. I was surprised I couldn't find something similar on Hoogle. Any suggestions?

whenMaybe :: (Applicative m) => m a -> Bool -> m (Maybe a)
whenMaybe _ False = pure Nothing
whenMaybe ma True = Just <$> ma

(My specific application is parsing MQTT messages. A flag earlier in the message controls whether to expect another value later in the message. So my m is Data.Binary.Get. But as the type signature implies, surely the situation is more general than that.)
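For instance, here's a sketch of the kind of parser I have in mind (the message layout and field names are made up, and it assumes the whenMaybe above is in scope):

import Control.Applicative ((<$>))
import Data.Binary.Get
import Data.Word

-- hypothetical message: a flag byte, then an optional 16-bit field
data Msg = Msg { hasExtra :: Bool, extra :: Maybe Word16 }

getMsg :: Get Msg
getMsg = do
    flag <- (/= 0) <$> getWord8
    ext  <- whenMaybe getWord16be flag
    return (Msg flag ext)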

Edit: to clarify, I only want to carry out my effect when the Bool is True. In my case, the effect is reading from the network and the Bool tells whether there is data or not.

Thanks --Gareth

submitted by garethrowlands
[link] [20 comments]
Categories: Incoming News