# News aggregator

### Performance of StateT and best practices for debugging

### Pinhole: a falling ball demo (written using Haskell + Gloss, with explanation)

### parsec: problem combining lookAhead with many1 (bug?)

### Free monad based EDSL for writing LLVM programs.

### ANN: fastbayes, Bayesian modeling algorithms accelerated for particular model structures

### Going from intermediate to advanced Haskell programming?

Hello /r/haskell,

I've been learning Haskell for about 3 months now. I just realized I've written about 11k lines of code so far. I've been reading LYAH and RWH, and I consult the Haskell Wiki from time to time and Hoogle very often. I'm fairly comfortable with concepts like monads, function composition, type classes, eta reduction, and Haskell syntax in general.

The problem is that most of the code I've written so far consists of reproductions of projects I've found online, e.g. an implementation of the Raft protocol in Haskell, a ray tracer in Haskell, a Lisp interpreter in Haskell, and numerous other small projects. What I usually do is read through a textual description of the project's source if one is available (e.g. RWH) and then try to reproduce the code on my own. What I mainly try to understand from these projects is not the exact code that was used but the design thinking that was followed (e.g. which monads to use, or what kind of function abstraction helps, such as where I can return functions for greater effect). I only look at the actual code as a fallback if I encounter something I don't know how to do in Haskell. So this has been my learning technique so far.

I'm having a hard time figuring out how to progress to the next level of Haskell fluency. I feel like I haven't yet imbibed the Haskell nature of things (the point where it becomes completely natural to program in a given language). In addition, I've chanced upon numerous discussions here and on other sites/mailing lists (and some code) where I feel completely lost. For instance, just off the top of my head, there are:

- programming-related things I have *no* idea about: many pragmas (INLINE, SPECIALIZE), extensions like RankNTypes, GHC optimization tricks, profiling and performance tuning, etc.
- programming-related things I have *some* idea about: overlapping and incoherent instances, continuations, monomorphism, etc.
- theoretical things I have no idea about: category theory, Kleisli arrows, comonads, etc.

Now some might say you don't need to know *every* feature of a language. No one does. This is true. However, I don't want to miss out on features of the language that can actually make me more productive. For instance, the first time I came across arrows was when hlint suggested using arrows for a certain piece of code. They were actually useful (as opposed to gimmicks learnt only to show off your skillz).

Others might say: if you know what you don't know, just go learn it. This is true, and I'm already reading up on all of the things I mentioned above.

My question is **what is the most effective way to ramp up your Haskell skills to the next level? How did you do it?** With a language like Java/C++, there are resources like Effective Java/C++ or tons of advanced programming tutorials online. What is the Haskell equivalent?

**TL;DR**: title of this post

**EDIT**: thank you all for your replies and links. I'll keep checking back until this thread dies down.


### Haskell Weekly News: Issue 301

### wren gayle romano: On being the "same" or "different": Introduction to Apartness

Meanwhile, back in math land... A couple-few months ago I was doing some work on apartness relations. In particular, I was looking into foundational issues, and into what an apartness-based (rather than equality-based) dependently-typed programming language would look like. Unfortunately, too many folks think "constructive mathematics" only means BHK-style intuitionistic logic— whereas constructive mathematics includes all sorts of other concepts, and they really should be better known!

So I started writing a preamble post, introducing the basic definitions and ideas behind apartnesses, and... well, I kinda got carried away. Instead of a blog post I kinda ended up with a short chapter. And then, well, panic struck. In the interests of Publish Ever, Publish Often, I thought I might as well share it: a brief introduction to apartness relations. As with my blog posts, I'm releasing it under Creative Commons Attribution-NonCommercial-NoDerivs 4.0; so feel free to share it and use it for classes. But, unlike the other columbicubiculomania files, it is not ShareAlike— since I may actually turn it into a published chapter someday. So do respect that. And if you have a book that needs some chapters on apartness relations, get in touch!

The intro goes a little something like this:

We often talk about values being "the same as" or "different from" one another. But how can we formalize these notions? In particular, how should we do so in a constructive setting?

Constructively, we lack a general axiom for double-negation elimination; therefore, every primitive notion gives rise to both strong (strictly positive) and weak (doubly-negated) propositions. Thus, from the denial of (weak) difference we can only conclude weak sameness. Consequently, in the constructive setting it is often desirable to take difference to be a primitive— so that, from the denial of strong difference we can in fact conclude strong sameness.

This ability to "un-negate" sameness is the principal reason for taking difference to be one of our primitive notions. While nice in and of itself, it also causes the strong and weak notions of sameness to become logically equivalent (thm 1.4), enabling us to drop the qualifiers when discussing sameness.

But if not being different is enough to be considered the same, then do we still need sameness to be primitive? To simplify our reasoning, we may wish to have sameness be *defined* as the lack of difference. However, this is not without complications. Sameness has been considered a natural primitive for so long that it has accrued many additional non-propositional properties (e.g., the substitution principle). So, if we eliminate the propositional notion of primitive equality, we will need somewhere else to hang those coats.
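To make the discussion concrete, here is a hypothetical Haskell rendering (my own sketch, not from the chapter) of a decidable apartness relation, with sameness defined as the lack of difference:

```haskell
-- Hypothetical sketch: an apartness ("difference") relation as a primitive,
-- restricted to types where it is decidable.
class Apart a where
  apart :: a -> a -> Bool
  -- Expected laws (not enforced by the compiler):
  --   irreflexivity:  not (apart x x)
  --   symmetry:       apart x y == apart y x
  --   cotransitivity: if apart x y, then apart x z || apart z y for any z

-- Sameness defined as the lack of difference:
same :: Apart a => a -> a -> Bool
same x y = not (apart x y)

-- For discrete types like Integer, apartness is just decidable inequality.
instance Apart Integer where
  apart = (/=)
```

For types like the constructive reals, apart would not simply be (/=); apartness is precisely the notion that remains well-behaved in such settings.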

The rest of the paper fleshes out these various ideas.



### Clojure's Transducers are Perverse Lenses

/u/tel was playing around with a translation of Clojure's transducers to Haskell. He introduced a type:

```haskell
type Red r a = (r -> a -> r, r)
```

which reminded me of non-van Laarhoven lenses:

```haskell
type OldLens a b = (a -> b -> a, a -> b)
```

We can change tel's Red slightly:

```haskell
type Red r a = (r -> a -> r, () -> r)
```

From this point of view, Red is a perverse form of lens, because the "getter" always returns the same value, which is the value a normal lens would extract a value from! I think the modified "van Laarhoven form" of Red reads:

```haskell
type PerverseLens r a = forall f. Functor f => (() -> f a) -> a -> f r
```

but I'm not sure. I suspect that you'll be able to use normal function composition with this encoding somehow, and that it will compose "backwards" like lenses do. After about 15 minutes I haven't gotten anywhere, but I'm a Haskell noob, so I'm curious whether someone more experienced can make this work.
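To ground the discussion, here is a small sketch (names mine, assuming tel's original pair encoding) showing that a Red is exactly the data of a left fold, and that reducer transformers then compose as plain functions:

```haskell
-- tel's reducer in pair form: a step function plus an initial accumulator.
type Red r a = (r -> a -> r, r)

-- Running a reducer over a list is exactly foldl.
runRed :: Red r a -> [a] -> r
runRed (step, z) = foldl step z

-- A reducer transformer in this encoding: wrap the step function.
filtering :: (a -> Bool) -> Red r a -> Red r a
filtering p (step, z) = (\r a -> if p a then step r a else r, z)

sumRed :: Red Int Int
sumRed = ((+), 0)

-- runRed (filtering even sumRed) [1..10] == 30
```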

/u/tel also defined reducer transformers

```haskell
type RT r a b = PerverseLens r a -> PerverseLens r b
```

From the "perverse lens" point of view, I believe an RT would be equivalent to:

```haskell
(. perverseGetter)
```

where a PerverseGetter is a PerverseLens specialized to Const, in the same way Getter is Lens specialized to Const.

I'm not sure how helpful or useful any of this is, but it is interesting.

EDIT: Perhaps

```haskell
type Red r a = (r -> a -> r, (forall x. x -> r))

type PerverseLens r a = forall f. Functor f => (forall x. x -> f a) -> a -> f r
```

would be better types for perverse lenses?

submitted by kidnapster

### I Did A Haskell: fizzbuzz

I started the Learn You A Haskell tutorial, and as soon as I hit the boomBangs example I was like "ooh! ooh!" and had to do fizzbuzz. I have no idea if this is anywhere close to idiomatic (or whether there's any benefit to declaring the null case explicitly, here). First working version had fz and bz as functions in a where clause because I couldn't figure out how to inline them, which amuses me in retrospect because, ha ha, in lines -- anyway, I suspect that my original version is more proper, but dammit, I'm proud of myself. :D
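For anyone following along with LYAH, a typical guards-based version (not necessarily the poster's code) looks something like this:

```haskell
-- One common fizzbuzz shape in Haskell: a pure function plus mapM_ to print.
fizzbuzz :: Int -> String
fizzbuzz n
  | n `mod` 15 == 0 = "FizzBuzz"
  | n `mod` 3  == 0 = "Fizz"
  | n `mod` 5  == 0 = "Buzz"
  | otherwise       = show n

main :: IO ()
main = mapM_ (putStrLn . fizzbuzz) [1 .. 100]
```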

Okay, done gushing at the internet for now, but this here thing is *neat.*


### Oliver Charles: Working with postgresql-simple with generics-sop

The least interesting part of my job as a programmer is the act of pressing keys on a keyboard, and thus I actively seek ways to reduce typing. As programmers, we aim for reuse in our programs - abstracting commonality into reusable functions so that our programs get more concise. Functional programmers are aware of the benefits of higher-order functions as one form of generic programming, but another powerful technique is that of data type generic programming.

This variant of generic programming allows one to build programs that work over arbitrary data types, providing they have some sort of known “shape”. We describe the shape of data types by representing them via a code - often we can describe a data type as a sum of products. By sum, we are talking about the choice of a constructor in a data type (such as choosing between Left and Right to construct Either values), and by product we mean the individual fields in a constructor (such as the individual fields in a record).
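As a tiny illustration of the sum-of-products view (my own example, not from the post):

```haskell
-- A sum of two constructors (the "choice"), each a product of fields.
data Shape
  = Circle Double        -- one field
  | Rect Double Double   -- two fields
  deriving Show

-- Schematically, its generics-sop code would be:
--   Code Shape = '[ '[Double], '[Double, Double] ]

-- Pattern matching selects the summand; the bound variables are the product.
describe :: Shape -> String
describe (Circle r) = "circle, 1 field: " ++ show r
describe (Rect w h) = "rect, 2 fields: " ++ show (w, h)
```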

Last month, Edsko and Löh announced a new library for generic programming: generics-sop. I’ve been playing with this library in the last couple of days, and I absolutely love the approach. In today’s short post, I want to demonstrate how easy it is to use this library. I don’t plan to go into a lot of detail, but I encourage interested readers to check out the associated paper - True Sums of Products - a paper with a lovely balance of theory and a plethora of examples.

#### postgresql-simple

When working with postgresql-simple, one often defines records and corresponding FromRow and ToRow instances. Let's assume we're modelling a library. No library is complete without books, so we might begin with a record such as:

```haskell
data Book = Book
  { bookTitle       :: Text
  , bookAuthor      :: Text
  , bookISBN        :: ISBN
  , bookPublishYear :: Int
  }
```

In order to store and retrieve these in our database, we need to write the following instances:

```haskell
instance FromRow Book where
  fromRow = Book <$> field <*> field <*> field <*> field

instance ToRow Book where
  toRow Book{..} =
    [ toField bookTitle
    , toField bookAuthor
    , toField bookISBN
    , toField bookPublishYear
    ]
```

As you can see - that's a lot of boilerplate. In fact, it's nearly twice as much code as the data type itself! The definitions of these instances are trivial, so it's frustrating to have to type the implementation bodies by hand. It's here that we turn to generics-sop.

First, we’re going to need a bit of boiler-plate in order to manipulate Books generically:

```haskell
data Book = ...
  deriving (GHC.Generics.Generic)

instance Generics.SOP.Generic Book
```

We derive a generic representation of our Book using GHC.Generics, and in turn use this generic representation to derive the Generics.SOP.Generic instance. With this out of the way, we're ready to work with Books in a generic manner.

#### generics-sop

The generics-sop library works by manipulating heterogeneous lists of data. If we look at our Book data type, we can see that the following two are morally describing the same data:

```haskell
book = Book "Conceptual Mathematics" "Lawvere, Schanuel" "978-0-521-71916-2" 2009

book = [ "Conceptual Mathematics", "Lawvere, Schanuel", "978-0-521-71916-2", 2009 ]
```

Of course, we can't actually write such a thing in Haskell - lists are required to have all their elements of the same type. However, using modern GHC extensions, we can get very close to modelling this:

```haskell
data HList :: [*] -> * where
  Nil  :: HList '[]
  (:*) :: x -> HList xs -> HList (x ': xs)

book :: HList '[Text, Text, ISBN, Int]
book = "Conceptual Mathematics" :* "Lawvere, Schanuel" :* "978-0-521-71916-2" :* 2009 :* Nil
```

Once we begin working in this domain, a lot of the techniques we're already familiar with continue fairly naturally. We can map over these lists, exploit their applicative functor-like structure, fold them, and so on.
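As a small value-level demo (mine, reusing the post's HList but with String fields so it stands alone), even without the generics-sop combinators we can already fold an HList, e.g. to count its fields:

```haskell
{-# LANGUAGE DataKinds, GADTs, KindSignatures, TypeOperators #-}

-- The post's heterogeneous list, plus a fixity so (:*) chains to the right.
data HList :: [*] -> * where
  Nil  :: HList '[]
  (:*) :: x -> HList xs -> HList (x ': xs)
infixr 5 :*

-- A fold over the spine: the element types vary, the count does not.
hlength :: HList xs -> Int
hlength Nil       = 0
hlength (_ :* xs) = 1 + hlength xs

book :: HList '[String, String, String, Int]
book = "Conceptual Mathematics" :* "Lawvere, Schanuel" :* "978-0-521-71916-2" :* 2009 :* Nil
```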

generics-sop continues in this trend, using kind polymorphism and a few other techniques to maximise generality. We can see exactly what is going on if we ask GHCi for the :kind! of Book's generic Code:

```haskell
> :kind! Code Book
Code Book = SOP I '[ '[ Text, Text, ISBN, Int ] ]
```

The list of fields is contained within another list of all possible constructors - as Book only has one constructor, there is only one element in the outer list.

#### FromRow, Generically

How does this help us solve the problem of our FromRow and ToRow instances? First, let's think about what's happening when we write instances of FromRow. Our Book data type has four fields, so we need to use field four times. field has side effects in the RowParser functor, so we sequence all of these calls using applicative syntax, finally applying the results to the Book constructor.

Now that we’ve broken the problem down, we’ll start by solving our first problem - calling field the correct number of times. Calling field means we need to have an instance of FromField for each field in a constructor, so to enforce this, we can use All to require all fields have an instance of a type class. We also use a little trick with Proxy to specify which type class we need to use. We combine all of this with hcpure, which is a variant of pure that can be used to build a product:

```haskell
fields :: (All FromField xs, SingI xs) => NP RowParser xs
fields = hcpure fromField field
  where fromField = Proxy :: Proxy FromField
```

So far, we've built a product of field calls, which you can think of as a list of RowParsers - something akin to [RowParser ..]. However, we need a single row parser returning multiple values, which is more like RowParser [..]. In the Prelude we have a function to sequence a list of monadic actions:

```haskell
sequence :: Monad m => [m a] -> m [a]
```

There is an equivalent in generics-sop for working with heterogeneous lists - hsequence. Thus if we hsequence our fields, we build a single RowParser that returns a product of values:

```haskell
fields :: (All FromField xs, SingI xs) => RowParser (NP I xs)
fields = hsequence (hcpure fromField field)
  where fromField = Proxy :: Proxy FromField
```

(I is the "do nothing" identity functor.)

Remarkably, these few lines of code are enough to construct data types. All we need to do is embed this product in a constructor of a sum, and then switch from the generic representation to a concrete data type. We’ll restrict ourselves to data types that have only one constructor, and this constraint is mentioned in the type below (Code a ~ '[ xs ] forces a to have only one constructor):

```haskell
gfromRow :: (All FromField xs, Code a ~ '[xs], SingI xs, Generic a) => RowParser a
gfromRow = to . SOP . Z <$> hsequence (hcpure fromField field)
  where fromField = Proxy :: Proxy FromField
```

That's all there is to it! No type class instances, no skipping over metadata - we just build a list of field calls, sequence them, and turn the result into our data type.

ToRow, GenericallyIt’s not hard to apply the same ideas for ToRow. Recall the definition of ToRow:

```haskell
class ToRow a where
  toRow :: a -> [Action]
```

toRow takes a value of type a and turns it into a list of actions. Usually, we have one action for each field - we just call toField on each field in the record.

To work with data generically, we first need to move from the original data type to its generic representation, which we can do with from and a little bit of pattern matching:

```haskell
gtoRow :: (Generic a, Code a ~ '[xs]) => a -> [Action]
gtoRow a = case from a of SOP (Z xs) -> _
```

Here we pattern match into the fields of the first constructor of the data type. xs is now a product of all fields, and we can begin turning them into Actions. The most natural way to do this is simply to map toField over each field, collecting the resulting Actions into a list. That is, we'd like to do:

```haskell
map toField xs
```

That's not quite possible in generics-sop, but we can get very close. Using hcliftA, we can lift a method of a type class over a heterogeneous list:

```haskell
gtoRow :: (Generic a, Code a ~ '[xs], All ToField xs, SingI xs) => a -> [Action]
gtoRow a = case from a of
    SOP (Z xs) -> _ (hcliftA toFieldP (K . toField . unI) xs)
  where toFieldP = Proxy :: Proxy ToField
```

We unwrap from the identity functor I, call toField on the value, and then pack this back up using the constant functor K. The details here are a little subtle, but essentially this moves us from a heterogeneous list to a homogeneous list, where each element is an Action. Now that we have a homogeneous list, we can switch back to a more basic representation by collapsing the structure with hcollapse:

```haskell
gtoRow :: (Generic a, Code a ~ '[xs], All ToField xs, SingI xs) => a -> [Action]
gtoRow a = case from a of
    SOP (Z xs) -> hcollapse (hcliftA toFieldP (K . toField . unI) xs)
  where toFieldP = Proxy :: Proxy ToField
```

Admittedly this definition is a little more complicated than one might hope, but it's still extremely concise and declarative - there's only a little bit of noise added. Again, though, there was no need to write type class instances, perform explicit recursion or deal with metadata - generics-sop stayed out of the way and gave us just what we needed.

#### Conclusion

Now that we have gfromRow and gtoRow it's easy to extend our application. Perhaps we now want to extend our database with Author objects. We're free to do so, with minimal boilerplate:

```haskell
data Book = Book
  { bookId          :: Int
  , bookTitle       :: Text
  , bookAuthorId    :: Int
  , bookISBN        :: ISBN
  , bookPublishYear :: Int
  } deriving (GHC.Generics.Generic)

instance Generics.SOP.Generic Book
instance FromRow Book where fromRow = gfromRow
instance ToRow Book where toRow = gtoRow

data Author = Author
  { authorId      :: Int
  , authorName    :: Text
  , authorCountry :: Country
  } deriving (GHC.Generics.Generic)

instance Generics.SOP.Generic Author
instance FromRow Author where fromRow = gfromRow
instance ToRow Author where toRow = gtoRow
```

generics-sop is a powerful library for dealing with data generically. By using heterogeneous lists, the techniques we've learnt at the value level extend naturally, and we can begin to work with generic data in a declarative manner. For me, this appeal to familiar techniques makes it easy to dive straight into writing generic functions - I've already spent time learning to think in maps and folds, and it's nice to see those ideas transfer to yet another problem domain.

generics-sop goes a lot further than we’ve seen in this post. For more real-world examples, see the links at the top of the generics-sop Hackage page.

### cabal repl failing silently on missing exposed-modules

### How would you make a falling sand game in Haskell?

Background: in a falling sand game, various types of particles have very naive physics.

So basically, the usual/imperative approach would be to have a list of particles and update them one at a time. For speed, there would be a 2D array to quickly look up whether there is a particle at a given position.

It seems like there should be a more idiomatic approach in Haskell. I've thought about it, but even something as simple as abstracting the looping construct seems tricky.
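One idiomatic starting point - a sketch of my own, not a full answer - is to keep the grid in a persistent Map and make each tick a pure Grid -> Grid function, so the update loop is just iterate:

```haskell
import qualified Data.Map.Strict as M

-- Minimal sketch (names mine): the grid as a map from positions to particles.
data Particle = Sand | Wall deriving (Eq, Show)
type Pos  = (Int, Int)
type Grid = M.Map Pos Particle

-- One tick: each sand grain falls one cell if the cell below is free.
-- Naive: it reads only the old grid, so two grains could land on one cell;
-- a real game would need to resolve such collisions.
step :: Int -> Grid -> Grid
step maxY g = M.fromList [ (move p q, q) | (p, q) <- M.toList g ]
  where
    move (x, y) Sand
      | y < maxY && M.notMember (x, y + 1) g = (x, y + 1)
    move p _ = p

-- Run n ticks purely:  iterate (step maxY) grid !! n
```

The pleasant part is that step is an ordinary pure function, so it is trivially testable and the "loop" never appears explicitly.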

submitted by tailcalled