# News aggregator

### Haskell web site is really, *really* slow.

Seriously guys, it's embarrassing. Can someone replace the site with a static one until the backend is sorted out?

submitted by pja

### Dominic Steinitz: A Type Safe Reverse or Some Hasochism

Conor McBride was not joking when he and his co-author entitled their paper about dependent typing in Haskell “Hasochism”: Lindley and McBride (2013).

In trying to resurrect the Haskell package yarr, it seemed that a dependently typed reverse function needed to be written. Writing such a function turns out to be far from straightforward. How GHC determines that a proof (program) discharges a proposition (type signature) is rather opaque and perhaps not surprisingly the error messages one gets if the proof is incorrect are far from easy to interpret.

I’d like to thank all the folk on StackOverflow whose answers and comments I have used freely below. Needless to say, any errors are entirely mine.

Here are two implementations, each starting from different axioms (NB: I have never seen type families referred to as axioms but it seems helpful to think about them in this way).

```haskell
{-# OPTIONS_GHC -Wall #-}
{-# OPTIONS_GHC -fno-warn-name-shadowing #-}
{-# OPTIONS_GHC -fno-warn-type-defaults #-}
{-# OPTIONS_GHC -fno-warn-unused-do-bind #-}
{-# OPTIONS_GHC -fno-warn-missing-methods #-}
{-# OPTIONS_GHC -fno-warn-orphans #-}

{-# LANGUAGE GADTs #-}
{-# LANGUAGE KindSignatures #-}
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE TypeFamilies #-}
{-# LANGUAGE UndecidableInstances #-}
{-# LANGUAGE ExplicitForAll #-}
{-# LANGUAGE TypeOperators #-}
{-# LANGUAGE ScopedTypeVariables #-}
```

For both implementations, we need propositional equality: if `a :~: b` is inhabited by some terminating value, then the type `a` is the same as the type `b`. Further, we need the normal form of an equality proof, `Refl :: a :~: a`, and a function, `gcastWith`, which allows us to use internal equality (`:~:`) to discharge a required proof of external equality (`~`). Readers familiar with topos theory, for example see Lambek and Scott (1988), will note that the notation is reversed.

```haskell
import Data.Type.Equality ( (:~:) (Refl), gcastWith )
```

For the second of the two approaches adumbrated, we will need:

```haskell
import Data.Proxy
```

The usual natural numbers:

```haskell
data Nat = Z | S Nat
```

We need some axioms:

- A left unit: `Z :+ n = n`, and
- Restricted commutativity: `(S n) :+ m = n :+ (S m)`.
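Spelled out, the two axioms amount to a closed type family for `:+` (a sketch consistent with the proof steps below; `Nat` is repeated so the fragment stands alone):

```haskell
{-# LANGUAGE DataKinds, TypeFamilies, TypeOperators, KindSignatures #-}
{-# LANGUAGE UndecidableInstances #-}

import Data.Proxy

data Nat = Z | S Nat  -- as defined above

-- equation 1 is the left unit; equation 2 is restricted commutativity
type family (n1 :: Nat) :+ (n2 :: Nat) :: Nat where
  Z      :+ n2 = n2
  (S n1) :+ n2 = n1 :+ (S n2)

-- sanity check: 1 :+ 1 reduces to 2 at the type level,
-- so this binding only typechecks if the family behaves as claimed
two :: Proxy ('S 'Z :+ 'S 'Z)
two = Proxy :: Proxy ('S ('S 'Z))
```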

We need the usual singleton for Nat to tie types and terms together.

```haskell
data SNat :: Nat -> * where
  SZero :: SNat Z
  SSucc :: SNat n -> SNat (S n)
```

Now we can prove some lemmas.

First a lemma showing we can push :+ inside a successor, S.

```haskell
succ_plus_id :: SNat n1 -> SNat n2 -> (((S n1) :+ n2) :~: (S (n1 :+ n2)))
succ_plus_id SZero _ = Refl
succ_plus_id (SSucc n) m = gcastWith (succ_plus_id n (SSucc m)) Refl
```

This looks nothing like a standard mathematical proof, and it's hard to know what GHC is doing under the covers, but presumably something like this:

- For `SZero`:
  - `S Z :+ n2 = Z :+ S n2` (by axiom 2) `= S n2` (by axiom 1), and
  - `S (Z :+ n2) = S n2` (by axiom 1)
  - So `S Z :+ n2 = S (Z :+ n2)`
- For `SSucc`:
  - `SSucc n :: SNat (S k)`, so `n :: SNat k`; and `m :: SNat i`, so `SSucc m :: SNat (S i)`
  - `succ_plus_id n (SSucc m) :: k ~ S p => S p :+ S i :~: S (p :+ S i)` (by hypothesis)
  - `k ~ S p => S p :+ S i :~: S (S p :+ i)` (by axiom 2)
  - `k :+ S i :~: S (k :+ i)` (by substitution)
  - `S k :+ i :~: S (k :+ i)` (by axiom 2)

Second a lemma showing that Z is also the right unit.

```haskell
plus_id_r :: SNat n -> ((n :+ Z) :~: n)
plus_id_r SZero     = Refl
plus_id_r (SSucc n) = gcastWith (plus_id_r n) (succ_plus_id n SZero)
```

- For `SZero`:
  - `Z :+ Z = Z` (by axiom 1)
- For `SSucc`:
  - `SSucc n :: SNat (S k)`, so `n :: SNat k`
  - `plus_id_r n :: k :+ Z :~: k` (by hypothesis)
  - `succ_plus_id n SZero :: S k :+ Z :~: S (k :+ Z)` (by the `succ_plus_id` lemma)
  - `succ_plus_id n SZero :: k :+ Z ~ k => S k :+ Z :~: S k` (by substitution)
  - `plus_id_r n :: k :+ Z :~: k` (by equation 2)

Now we can define vectors which have their lengths encoded in their types.

```haskell
infixr 4 :::

data Vec a n where
  Nil   :: Vec a Z
  (:::) :: a -> Vec a n -> Vec a (S n)
```

We can prove a simple result using the lemma that `Z` is a right unit.

```haskell
elim0 :: SNat n -> (Vec a (n :+ Z) -> Vec a n)
elim0 n x = gcastWith (plus_id_r n) x
```

Armed with this, we can write a reverse function in which the length of the result is guaranteed to be the same as the length of the argument.

```haskell
size :: Vec a n -> SNat n
size Nil        = SZero
size (_ ::: xs) = SSucc $ size xs

accrev :: Vec a n -> Vec a n
accrev x = elim0 (size x) $ go Nil x where
  go :: Vec a m -> Vec a n -> Vec a (n :+ m)
  go acc Nil        = acc
  go acc (x ::: xs) = go (x ::: acc) xs

toList :: Vec a n -> [a]
toList Nil        = []
toList (x ::: xs) = x : toList xs

test0 :: [String]
test0 = toList $ accrev $ "a" ::: "b" ::: "c" ::: Nil
```

```
ghci> test0
["c","b","a"]
```

For an alternative approach, let us change the axioms slightly.

```haskell
type family (n1 :: Nat) + (n2 :: Nat) :: Nat where
  Z      + n2 = n2
  (S n1) + n2 = S (n1 + n2)
```

Now the proof that `Z` is a right unit is more straightforward.

```haskell
plus_id_r1 :: SNat n -> ((n + Z) :~: n)
plus_id_r1 SZero     = Refl
plus_id_r1 (SSucc n) = gcastWith (plus_id_r1 n) Refl
```

For the lemma showing we can push `+` inside a successor, `S`, we can use a `Proxy`.

```haskell
plus_succ_r1 :: SNat n1 -> Proxy n2 -> ((n1 + (S n2)) :~: (S (n1 + n2)))
plus_succ_r1 SZero _ = Refl
plus_succ_r1 (SSucc n1) proxy_n2 = gcastWith (plus_succ_r1 n1 proxy_n2) Refl
```

Now we can write our reverse function without having to calculate sizes.

```haskell
accrev1 :: Vec a n -> Vec a n
accrev1 Nil  = Nil
accrev1 list = go SZero Nil list
  where
    go :: SNat n1 -> Vec a n1 -> Vec a n2 -> Vec a (n1 + n2)
    go snat acc Nil = gcastWith (plus_id_r1 snat) acc
    go snat acc (h ::: (t :: Vec a n3)) =
      gcastWith (plus_succ_r1 snat (Proxy :: Proxy n3))
                (go (SSucc snat) (h ::: acc) t)

test1 :: [String]
test1 = toList $ accrev1 $ "a" ::: "b" ::: "c" ::: Nil
```

```
ghci> test1
["c","b","a"]
```

#### Bibliography

Lambek, J., and P.J. Scott. 1988. *Introduction to Higher-Order Categorical Logic*. Cambridge Studies in Advanced Mathematics. Cambridge University Press. http://books.google.co.uk/books?id=6PY_emBeGjUC.

Lindley, Sam, and Conor McBride. 2013. “Hasochism: The Pleasure and Pain of Dependently Typed Haskell Programming.” In *Proceedings of the 2013 ACM SIGPLAN Symposium on Haskell*, 81–92. Haskell ’13. New York, NY, USA: ACM. doi:10.1145/2503778.2503786.

### Noam Lewis: Ok, Internet, I heard you. I need a new name for the JS type inferer/checker.

I knew a new name was needed for this JS type inferer/checker. Many people have pointed out that sweet.js uses the same short name (sjs). I could just stick with "Safe JS" but not make it short. Or I could pick a new name, to make it less boring.

Help me help you!

**Please take this quick poll.**

Thanks!

### Monad layering and DSL

### Uniform access to data in Haskell (exploring the "design space")

I apologize in advance if this is not well thought out. I'm posting here for guidance and conversation.

This is not a proposal for Haskell, just some fun ideas.

Smarter people than I have already had these ideas but I don't know the terminology (row types, extensible records, type level literals and dependent types?) so please link me to papers if you know where this has been covered. This paper by Daan Leijen was one I read.

Our motivation is uniform access to data (perhaps full CRUD). I'm very much a "completionist" so this made me sketch out how several of Haskell's "collections" are related. I came up with a "design space" (for lack of a better word) with 3 axes: RowTyped (yes|no), KeyType (Int|Symbol), Extensible (yes|no) = 2 * 2 * 2 = 8 and here is a terrible diagram of what I mean.

For read-only access we start with `class ReadMap k v where readData :: k -> Maybe v`. `Map` already supports this, and `List` can be made to support it when `k = Int`. Adding `deriving ReadMap` would add this to data constructors and records comprised of only one type, for types with only one constructor. Meaning: `data T = T Int Int deriving ReadMap` and `data T = T {x = Int, y = Int} deriving ReadMap` make sense, but obviously not `data T = T Int String` or `data T = A Int | B String`. For update access we add `updateData :: d -> k -> KeyNotFound | Updated d`.
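A sketch of what the hypothetical `ReadMap` class could look like for `Data.Map` and lists (the class and method names are the ones proposed above; the instances are illustrative, not an existing library):

```haskell
{-# LANGUAGE MultiParamTypeClasses, FlexibleInstances #-}

import qualified Data.Map as M
import Data.Maybe (listToMaybe)

-- hypothetical class from the text, written as a multi-parameter
-- class over the container type c as well as key k and value v
class ReadMap k v c where
  readData :: c -> k -> Maybe v

-- Map supports this directly
instance Ord k => ReadMap k v (M.Map k v) where
  readData m k = M.lookup k m

-- lists support it when the key is Int (safe positional lookup)
instance ReadMap Int a [a] where
  readData xs i = listToMaybe (drop i xs)
```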

But how do we add this to "row typed" things (where I just mean things with a type like Tuple)? What if we could use literal values in types (without going into full Dependent Types)? Let's create a magic type signature: `getData :: (x = Literal k) -> (r = RowType k) -> (r @ x)`, where the type parameter for `Literal` will be either `Nat` or `Symbol`, because we already have these two in our existing type-level literals!

So we need to add four things to the type system:

- a `Literal` type that hoists a literal up to the existing `Nat` and `Symbol` types,
- `=` notation for referencing a type level map or a value of a `Literal`,
- a `RowType key` type that is a type level map from `Nat` or `Symbol` keys to a type value, and
- `@` notation for getting from the type level map.
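GHC's existing type-level `Symbol` literals already allow an approximation of the `RowType`/`@` idea; here is a sketch (the names `Lookup` and `Person` are invented for illustration):

```haskell
{-# LANGUAGE DataKinds, TypeFamilies, TypeOperators, KindSignatures #-}

import GHC.TypeLits (Symbol)
import Data.Kind (Type)

-- a type-level association list from Symbol keys to types,
-- playing the role of the proposed "RowType"
type family Lookup (k :: Symbol) (xs :: [(Symbol, Type)]) :: Type where
  Lookup k ('(k, v) ': xs) = v
  Lookup k ('(j, v) ': xs) = Lookup k xs

-- a "row" mapping field names to field types
type Person = '[ '("name", String), '("age", Int) ]

-- Lookup plays the role of the proposed "@" operator
aName :: Lookup "name" Person
aName = "Alice"

anAge :: Lookup "age" Person
anAge = 42
```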

Elm has extensible records. It's interesting to note its author's fear that they will be abused to create Objects. The goal here isn't to make everything a map like in Lua and JS, just to get access to tantalizingly close info that we already know at compile time but don't have the notation to express (indexable tuples). I purposely omitted insert and delete to keep this short.

**TLDR:** If we can hoist integer and string literals to the existing type-level literals, abstract the existing Tuple/Record types as a "type level map" from a `Nat` or `Symbol` key to a type, and add notation for extracting values (types) from these type maps, then we have uniform read and update access to Lists, Maps, Tuples and Data/Records, as well as making these things indexable while keeping compile-time type safety.


### thoughtbot/til

### Transducers: Pipes vs Machines

### up-to-date and complete version of http://packdeps.haskellers.com/reverse ?

### How I learned to understand Monads

Since learning about Monads, I've also learned about Traversable, Foldable, Lenses, Conduits/Pipes and a myriad of other concepts that are super useful. But the language embeds so many useful structures in Monads, I just don't think a new person can get very far without learning them.

I want to talk a bit about my journey to understanding Monads to discuss what I think is missing a little from all the talk, discussion, wiki articles, tutorials and blog posts.

When I first had something "stuck" in a Monad and wanted to get it out I gave up. In the end I decided to thread a state around manually. Then it happened a second time and a third time. Eventually I learned I could chain things together using >>= to do the same thing. I learned I could do this:

```haskell
someFunction :: MyData -> a -> (a, MyData)
-- transform to this, and bind together a chain of them
someFunction' :: a -> MyType a
-- and eventually, instead of writing my own, I could use State
someFunction'' :: a -> State MyType a
```

I finally thought I got it. Then I had a `[MyData a]` and needed to get `MyData [a]` and learned about `sequence`. And then I thought I understood Monads.

Then I had something like this: foo a >>= bar >>= foobar >>= etc... and learned that do notation wasn't just for IO. And then I thought I understood Monads.

Then I realised the difference between a type constructor and a data constructor. I learned I could `map Just xs`, using the `Just` constructor like I could any other function. Then it hit me that "everything is a function", including Monads.

And then I wrote my own join function (I even called it `join`) before discovering I'd seen it before. Then I realised bind was `join . fmap`. It started to occur to me that I never really understood Monads when I first thought I did. But when I eventually got to `mapM`, `mapM_`, `foldrM`, etc. I was really starting to get the hang of it.
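That `bind ≡ join . fmap` observation can be checked concretely; here is a small sketch with `Maybe` (`halve` is a made-up function to chain):

```haskell
import Control.Monad (join)

-- bind expressed via join and fmap, as described above
bindViaJoin :: Monad m => (a -> m b) -> m a -> m b
bindViaJoin f = join . fmap f

-- a toy partial computation to chain
halve :: Int -> Maybe Int
halve n = if even n then Just (n `div` 2) else Nothing
```

`bindViaJoin halve (Just 8)` and `Just 8 >>= halve` both give `Just 4`.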

And then I learned how ST s threads an existential forall s around to prevent a mutable state from leaking. And boy did that turn my understanding upside down. I started to learn why unsafePerformIO and other unsafe functions were named as such.

Most recently I learned how to use Transformers so that I could "lift" functions from all the Monads in my stack to a single type.

**So what?**

Where am I going with this? What I realised I needed was a volume of exposure.

Before I get to that, I'd like to say that I could have used a bit of a Rosetta Stone. I actually don't know if this would help many other people, but eventually I realised that if Monads could work in OO, they might look like this:

```java
interface Monad<A> {
    public static final Monad<A> return(A a);
    public static final Monad<A> join(Monad<Monad<A>> mma);
    public static final <B> Monad<B> bind(Function<A, Monad<B>> f, Monad<A> ma);
}
```

I mean, it's not like there aren't OO languages with type signatures. Granted, such an interface isn't possible in any OO language I know (especially `return`). But it is possible to implement on a per-Monad basis. And I actually did a few tests with Java, writing my own `Maybe` (side note: wow, that was hard... `Either` would have been way easier than `Maybe`. How did I ever work without multiple data constructors?).

So that was one problem: the idea that the concept couldn't be explained in terms I already knew was pretty prevalent. I don't think we necessarily need to tell imperative people that everything they've learned so far is wrong.

The other problem was a lack of "complete" examples. I consider Monad (like any type class) to be something almost, but not entirely, unlike an OO interface. Like any interface it has a lot of really useful functions. I found most of them by getting stuck and typing stuff like `m (m a) -> m a` into Hoogle and hoping it existed.

One could quite easily explain how all of the key functions work using just `Maybe` and `Data.Map`. Everything from `fmap` to `sequence` could be covered by example because of the `lookup` function, e.g. `M.lookup a mb >>= \b -> M.lookup b mc`. It could also be a good opportunity to learn about some of the `Maybe`-specific functions (e.g. the difference between `catMaybes` and `sequence`).
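Made concrete, the chained lookup above runs with a couple of toy maps (the map and function names here are invented):

```haskell
import qualified Data.Map as M

-- two lookups chained with (>>=): if either key is missing,
-- the whole computation short-circuits to Nothing
userToCode :: M.Map String Int
userToCode = M.fromList [("alice", 1)]

codeToDept :: M.Map Int String
codeToDept = M.fromList [(1, "engineering")]

deptOf :: String -> Maybe String
deptOf name = M.lookup name userToCode >>= \code -> M.lookup code codeToDept
```

`deptOf "alice"` gives `Just "engineering"`, while `deptOf "bob"` gives `Nothing` without any explicit missing-key handling.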

Then expand on examples for Either, State, Reader, Writer, etc. etc. I admit these wouldn't come as easily, but the idea would be to teach people "*a* Monad" rather than "Monads".
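An `Either` step in that progression might look like this (a sketch; `safeDiv` is a made-up function):

```haskell
-- the same chaining pattern as with Maybe, but failures carry a message
safeDiv :: Int -> Int -> Either String Int
safeDiv _ 0 = Left "divide by zero"
safeDiv a b = Right (a `div` b)

-- 10 / 2 = 5, then 100 / 5 = 20
chained :: Either String Int
chained = safeDiv 10 2 >>= safeDiv 100
```

The syntax is identical to the `Maybe` version; only the failure value changes, which is exactly the "reusable interface" point.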

Basically, instead of trying to figure out "What is a Monad?", I could really have used a whole bunch of examples of different ways to use the Monadic "interface" to solve a problem. A potential downside to this is that you risk oversimplifying the usefulness of having a Monad. On the other hand, learning to use a particular Monad in full, one at a time, could also be helpful.

I certainly don't think it should be the only way to teach Monads. But volume of exposure was a key part of my understanding. And there were a lot of resources explaining the theory and the "big picture" (the "depth") rather than examples of use (the "breadth").

**LYAH**

I thought LYAH was great and it sort of does what I described above. But it falls over in a few places:

- It spends heaps of time on the theory and talking about "Monads" as this esoteric thing
- It delves into MonadPlus as part of the first lesson on Monads... I'm not really sure why guard gets introduced so early, when join happens in the next chapter and sequence is omitted entirely. It does at least spend a decent time on mapM_, foldM, etc.
- There are 2 "projects" I guess you could call them, but no exercises (I'm aware this is a general criticism people have)
- It doesn't cover the same ground for each Monad, so it's hard to really get a sense that it's reusable syntax.

I remember getting up to that section and actually wondering whether or not I made a mistake learning Haskell. It all just seemed too hard. I persevered and went and read the million Monad tutorials people had written and eventually learned enough to be able to have this criticism of it. Don't get me wrong, LYAH is excellent, but I'm sure I'm not the only one that got lost at that point.

About 6 months into Haskell and I thought I'd share my thoughts. Hopefully I didn't ramble too much. I feel like I've not just learned a new programming language but a new way of thinking. The digital world would be a better place if more things were written in Haskell. I hope we can do more to encourage newbies.

submitted by TheCriticalSkeptic

### Happstack: serveDirectory vs serveFile

### Christopher Done: My Haskell tooling wishlist

I spend a lot of my time on Haskell tooling, both for my hobbies and my job. Almost every project I work on sparks a desire for another piece of tooling. Much of the time, I’ll follow that wish and take a detour to implement that thing (Fay, structured-haskell-mode, hindent, are some Haskell-specific examples). But in the end it means less time working on the actual domain problem I’m interested in, so a while ago I intentionally placed a quota on the amount of time I can spend on this.

So this page will contain a list of things I’d work on if I had infinite spare time, and that I wish someone else would make. I’ll update it from time to time as ideas come to the fore.

These projects are non-trivial but are doable by one person who has enough free time and motivation. There is a common theme among the projects listed, which is that they are things that Haskell, more than most other well-known languages, is particularly well suited for, and yet we don't have them as standard tools in the Haskell tool box. They should be!

#### An equational reasoning assistant

Equational reasoning lets you prove properties about your functions by following a simple substitution model to state that one term is equal to another. The approach I typically take is to expand and reduce until both sides of the equation are the same.

Here is an example. I have a data type, Consumer. Here is an instance of Functor:

```haskell
instance Functor (Consumer s d) where
  fmap f (Consumer d p) =
    Consumer d
             (\s ->
                case p s of
                  (Failed e,s')    -> (Failed e,s')
                  (Continued e,s') -> (Continued e,s')
                  (Succeeded a,s') -> (Succeeded (f a),s'))
```

I want to prove that it is a law-abiding instance of Functor, which means proving that `fmap id ≡ id`. You don't need to know anything about the `Consumer` type itself, just this implementation. Here are some very mechanical steps one can take to prove this:

```haskell
id ≡ fmap id
   ≡ \(Consumer d p) ->
       Consumer d
                (\s ->
                   case p s of
                     (Failed e,s')    -> (Failed e,s')
                     (Continued e,s') -> (Continued e,s')
                     (Succeeded a,s') -> (Succeeded (id a),s'))
   ≡ \(Consumer d p) ->
       Consumer d
                (\s ->
                   case p s of
                     (Failed e,s')    -> (Failed e,s')
                     (Continued e,s') -> (Continued e,s')
                     (Succeeded a,s') -> (Succeeded a,s'))
   ≡ \(Consumer d p) -> Consumer d (\s -> p s)
   ≡ \(Consumer d p) -> Consumer d p
   ≡ id
```

So that's:

- Expand the fmap id into the instance’s implementation.
- Reduce by applying the property that id x ≡ x.
- Reason that if every branch of a case returns the original value of the case, then that whole case is an identity and can be dropped.
- Eta-reduce.
- Again, pattern-matching lambdas are just syntactic sugar for cases, so by the same rule this can be considered identity.
- End up with what we wanted to prove: fmap id ≡ id

These are pretty mechanical steps. They’re also pretty laborious and error-prone. Of course, if you look at the first step, it’s pretty obvious the whole thing is an identity, but writing the steps out provides transformations that can be statically checked by a program. So it’s a good example, because it’s easily understandable and you can imagine proving something more complex would require a lot more steps and a lot more substitutions. Proof of identity for Applicative has substantially more steps, but is equally mechanical.
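The law can at least be spot-checked mechanically. Here is a sketch using a simplified, single-failure-mode variant of the `Consumer` type (illustrative only; the type in the post has more cases):

```haskell
-- a cut-down Consumer: a stateful step that may or may not produce a value
newtype Consumer s a = Consumer (s -> (Maybe a, s))

instance Functor (Consumer s) where
  fmap f (Consumer p) = Consumer (\s -> case p s of
    (Nothing, s') -> (Nothing, s')
    (Just a,  s') -> (Just (f a), s'))

runConsumer :: Consumer s a -> s -> (Maybe a, s)
runConsumer (Consumer p) = p

-- compare fmap id c against id c on every supplied input state;
-- a property test, not a proof, but it catches broken instances
checkIdLaw :: (Eq a, Eq s) => Consumer s a -> [s] -> Bool
checkIdLaw c = all (\s -> runConsumer (fmap id c) s == runConsumer (id c) s)
```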

Wouldn’t it be nice if there was a tool which given some expression would do the following?

- Suggest a list of in-place expansions.
- Suggest a list of reductions based on a set of pre-defined rules (or axioms).

Then I could easily provide an interactive interface for this from Emacs.

In order to do expansion, you need the original source of the function name you want to expand. So in the case of id, that’s why I suggested stating an axiom (id a ≡ a) for this. Similarly, I could state the identity law for Monoids by saying mappend mempty a ≡ a, mappend a mempty ≡ a. I don’t necessarily need to expand the source of all functions. Usually just the ones I’m interested in.

Given such a system, for my example above, the program could actually perform all those steps automatically and spit out the steps so that I can read them if I choose, or otherwise accept that the proof was derived sensibly.

In fact, suppose I have my implementation again, and I state what must be satisfied by the equational process (and, perhaps, some axioms that might be helpful for doing it, but in this case our axioms are pretty standard), I might write it like this:

```haskell
instance Functor (Consumer s d) where
  fmap f (Consumer d p) = ...

proof [|fmap id ≡ id :: Consumer s d a|]
```

This Template Haskell macro `proof` would run the steps above and, if the equivalence is satisfied, the program compiles. If not, it generates a compile error, showing the steps it performed and where it got stuck. TH has limitations, so it might require writing it another way.

Such a helpful tool would also encourage people (even newbies) to do more equational reasoning, which Haskell is often claimed to be good at, but which you don't often see in evidence in codebases. In practice it isn't a standard thing.

Promising work in this area:

- Introducing the Haskell Equational Reasoning Assistant – works pretty much how I described above. I don’t know where the source is, I’ve emailed the author about it. Will update with any results.

**Update 2014-01-25**: Andrew Gill got back to me that HERMIT is the continuation of HERA. It seems that you can get inlining, general reduction and a whole bunch of case rewrites from this project. Check the KURE paper for the DSL used to do these rewrites; it looks pretty awesome. So if anyone's thinking of working on this, I'd probably start with reading HERMIT.Shell or HERMIT.Plugin and see how to get it up and running. It's a pity it has to work on Core, that's a little sad, but as far as trade-offs go it's not too bad. Doing proofs on things more complicated than Core might be hard anyway. It does mean you'll probably want to make a rewrite that does a global variable replacement: `x` and `y` are a little easier to read than `x0_6` and the like that you get in Core.

#### Catch for GHC

Ideally, we would never have inexhaustive patterns in Haskell. But a combination of an insufficient type system and people's insistence on using partial functions leads to a library ecosystem full of potential landmines. Catch is a project by Neil Mitchell which considers how a function is called when determining whether its patterns are exhaustive or not. This lets us use things like `head` and actually have a formal proof that our use is correct, or a formal proof that our use, or someone else's use, will possibly crash.

```haskell
map head . group
```

This is an example which is always correct, because `group` returns a list of non-empty lists.
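The claim is easy to check against the standard `Data.List.group`:

```haskell
import Data.List (group)

-- group only ever produces non-empty runs, so head is total here
runHeads :: Eq a => [a] -> [a]
runHeads = map head . group
```

For example, `runHeads "aabbbca"` gives `"abca"`, one element per run of equal characters.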

Unfortunately, it currently works for a defunct Haskell compiler, but apparently it can be ported to GHC Core with some work. I would very much like for someone to do that. This is yet another project which is the kind of thing people claim is possible thanks to Haskell’s unique properties, but in practice it isn’t a standard thing, in the way that QuickCheck is.

#### A substitution stepper

This is semi-related to, but different from, the proof assistant. I would like a program which can accept a Haskell module of source code and an expression to evaluate in the context of that module, and output the same expression, as valid source code, with a single evaluation step performed. This would be fantastic for writing new algorithms, for understanding existing functions and algorithms, writing proofs, and learning Haskell. There was something like this demonstrated in Inventing on Principle. The opportunities for education and general development practice are worth such a project.

Note: A debugger stepper is not the same thing.

Example:

```haskell
foldr (+) 0 [1, 2, 3, 4]
foldr (+) 0 (1 : [2, 3, 4])
1 + foldr (+) 0 [2, 3, 4]
1 + foldr (+) 0 (2 : [3, 4])
1 + (2 + foldr (+) 0 [3, 4])
1 + (2 + foldr (+) 0 (3 : [4]))
1 + (2 + (3 + foldr (+) 0 [4]))
1 + (2 + (3 + foldr (+) 0 (4 : [])))
1 + (2 + (3 + (4 + foldr (+) 0 [])))
1 + (2 + (3 + (4 + 0)))
1 + (2 + (3 + 4))
1 + (2 + 7)
1 + 9
10
```

Comparing this with `foldl` immediately shows the viewer how they differ in structure:

```haskell
foldl (+) 0 [1, 2, 3, 4]
foldl (+) 0 (1 : [2, 3, 4])
foldl (+) ((+) 0 1) [2, 3, 4]
foldl (+) ((+) 0 1) (2 : [3, 4])
foldl (+) ((+) ((+) 0 1) 2) [3, 4]
foldl (+) ((+) ((+) 0 1) 2) (3 : [4])
foldl (+) ((+) ((+) ((+) 0 1) 2) 3) [4]
foldl (+) ((+) ((+) ((+) 0 1) 2) 3) (4 : [])
foldl (+) ((+) ((+) ((+) ((+) 0 1) 2) 3) 4) []
(+) ((+) ((+) ((+) 0 1) 2) 3) 4
1 + 2 + 3 + 4
3 + 3 + 4
6 + 4
10
```

Each step in this is a valid Haskell program, and it's just simple substitution.

If the source for a function isn’t available, there are a couple options for what to do:

- Have special-cases for things like (+), as above.
- Just perform no substitution for that function, it will still be a legitimate program.

It’s another project I could easily provide see-as-you-type support for in Emacs, given an engine to query.

Again, this is just one more project which should just be a standard thing Haskell can do. It’s a pure language. It’s used to teach equational reasoning and following a simple lambda calculus substitution model. But there is no such tool. Haskell is practically waving in our faces with this opportunity.

Existing work in this area:

- stepeval - a prototype which nicely demonstrates the idea. It’s based on HSE and only supports a tiny subset. There aren’t any plans to move this forward at the moment. I’ll update the page if this changes.

### Haskell wiki

### Munich Haskell Meeting, 2015-01-26 @ 19:30

### IO Monad Considered Harmful · in Code

### putStrLn (show x) -vs- (putStrLn . show) x

I can define two seemingly-equivalent functions:

```haskell
f x = putStrLn (show x)
g x = (putStrLn . show) x
```

...with the same type signature and results. Under what circumstance would I want to use the longer and less explicit (to me) dot notation?
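For context, the composed form earns its keep once the argument is dropped entirely (a sketch; `h` and `shownDoubles` are made-up names):

```haskell
-- eta-reduce g and the pipeline becomes a first-class value
h :: Show a => a -> IO ()
h = putStrLn . show

-- compositions also chain more naturally than nested parentheses
shownDoubles :: [String]
shownDoubles = map (show . (* 2)) [1, 2, 3 :: Int]
```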

submitted by chrox