# News aggregator

### Storing data in Haskell

### What Does fmap Preserve? (a gentle motivation for parametricity and naturality)

### The fast-math package implements a number of rewrite rules that might speed up your numeric computations

### Daniel Mlot (duplode): What Does fmap Preserve?

A common way of introducing fmap is saying that it only changes the values in a container, and not its structure. Leaving behind the functors-as-containers metaphor, we can convey the same idea by saying that fmap leaves the context of the values in a Functor unchanged. But what, exactly, is the “context” or “structure” being preserved? “It depends on the functor”, though correct, is not an entirely satisfactory answer. The functor laws, after all, are highly abstract, and make no mention of anything a programmer would be inclined to call “structure” (say, the skeleton of a list); and yet the preservation we alluded to follows from them. After struggling a bit with this question, I realised that the incompatibility is only apparent. This post shows how the tension can be resolved through the mediation of *parametricity* and *naturality*, two concepts from different domains that are intertwined in Haskell.

A correct, if rather cruel, answer to “Why does fmap preserve structure?” would be “By definition, you silly!” To see what would be meant by that, let’s have a look at the functor laws.

```haskell
fmap id      = id              -- 1st functor law
fmap (g . f) = fmap g . fmap f -- 2nd functor law
```

fmap is a mapping of functions that takes identity to identity, and composed functions to the corresponding composed functions. Identity and composition make up the structure, in the mathematical sense, of a category. In category theory, a functor is a mapping between categories that preserves category structure. Therefore, the functor laws ensure that Haskell Functors are indeed functors; more precisely, functors from **Hask** to **Hask**, **Hask** being the category with Haskell types as objects and Haskell functions as arrows.1
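The laws can be spot-checked on concrete values (a small sketch, not a proof; the sample list and functions are made up for illustration):

```haskell
-- Spot-checking the functor laws on a sample list.
main :: IO ()
main = do
  let xs = [1, 2, 3] :: [Int]
  print (fmap id xs == id xs)                                   -- 1st law
  print (fmap (show . (+ 1)) xs == (fmap show . fmap (+ 1)) xs) -- 2nd law
```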

That functors preserve category structure is evident. However, our question is not directly about “structure” in the mathematical sense, but about the looser meaning it has in programmer parlance. In what follows, our goal will be clarifying this casual meaning.

#### What Can You Do With a Function?

As an initial, fuzzy characterisation, we can say that, given a functorial value, the Functor context is everything in it other than the wrapped values. Starting from that, a straightforward way of showing why fmap preserves context involves *parametric polymorphism*; more specifically, the preservation is ensured by the wild generality of the types in the signature of fmap.

We will look at fmap as a function of one argument which converts a plain a -> b function into a function which operates on functorial values. The key fact is that there is very little we can do with the a -> b function when defining fmap. Composition is not an option, as choosing a function other than id to compose it with would require knowledge about the a and b types. The only thing that can be done is applying the function to any a values we can retrieve from the t a functorial value. Since the context of a t a value, whatever it is, does not include the a values, it follows that changes to the context cannot depend on the a -> b function. Given that fmap takes no other arguments, any changes to the context would have to happen uniformly for every a -> b argument. The first functor law, however, says that fmap id = id, and so there is one argument, id, which leads to no changes in the context. Therefore, fmap never changes the context.
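To make the argument concrete, consider a hypothetical annotated-value type (the Tagged name here is made up for this sketch): the only lawful way to write fmap is to apply f to the wrapped value and pass the annotation through untouched, because nothing else type-checks for every a and b.

```haskell
-- A value with some context (the tag) attached.
data Tagged a = Tagged String a deriving (Show, Eq)

instance Functor Tagged where
  -- The tag is the context; f can only touch the wrapped value.
  fmap f (Tagged tag x) = Tagged tag (f x)

main :: IO ()
main = print (fmap (* 2) (Tagged "answer" (21 :: Int)))
```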

The informal argument above can be made precise through a proper type theory treatment of parametricity. Philip Wadler’s *Theorems for free!* is a well-known example of such work. However, a type theory approach, while entirely appropriate, would have us taking concrete Haskell types for granted and only incidentally concluding they are functors; in contrast, our problem begins with functors. For that reason, we will follow a different path and look at the issue from a primarily categorical point of view.

In the spirit of category theory, we will now focus not on the types but on the functions between them. After all, given functional purity any interesting properties of a Haskell value can be verified with suitable functions. Let’s start with a few concrete examples of how the context of a Functor can be probed with functions.

```haskell
length :: [a] -> Int
```

The length of a list is perhaps the most obvious example of a structural property. It depends only on the list skeleton, and not at all on the values in it. The type of length, with a fully polymorphic element type which is not mentioned by the result type, reflects such an independence. An obvious consequence is that fmap, which only affects the list elements, cannot change the length of a list. We can state this fact like this:

```haskell
length xs = length (fmap f xs)
```

Or, in a more categorical fashion:

```haskell
length = length . fmap f
```

Our second example of a structure-probing function will be reverse:

```haskell
reverse :: [a] -> [a]
```

While the result of reverse obviously depends on the list elements, reverse cannot actually modify the elements, given that the function is fully polymorphic on the element type. fmap applied to a list after reversing it will thus affect the same element values that were there before the reversal; they will only have been rearranged. In other words, fmap *commutes* with reverse:
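The commutation, fmap f . reverse = reverse . fmap f, can be spot-checked on arbitrary sample values (made up for this sketch):

```haskell
-- fmap commutes with reverse: both pipelines give the same result.
main :: IO ()
main = do
  let xs = [1, 2, 3] :: [Int]
  print (fmap (* 10) (reverse xs))  -- [30,20,10]
  print (reverse (fmap (* 10) xs))  -- [30,20,10]
```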

Our final example will be listToMaybe from Data.Maybe:

```haskell
listToMaybe :: [a] -> Maybe a
```

Operationally, listToMaybe is a safe version of head, which returns Nothing when given an empty list. Again, the function is fully polymorphic in the element type, and so the value of the first element cannot be affected by it. The scenario is very similar to what we have seen for reverse, and an analogous property holds, with the only difference being that fmap is instantiated at a different Functor on each side of the equation:

```haskell
-- Maybe-fmap on the left, []-fmap on the right.
fmap f . listToMaybe = listToMaybe . fmap f
```

Earlier we said that the Functor context consists of everything but the wrapped values. Our examples illustrate how parametric polymorphism makes it possible to keep that general idea while putting functions, rather than values, under the spotlight. The context is all that can be probed with functions fully polymorphic on the type parameter of the Functor; or, taking the abstraction further, the context *is* the collection of functions fully polymorphic on the type parameter of the Functor. We have now done away with the fuzziness of our preliminary, value-centric definition. The next step is clarifying how that definition relates to fmap.
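The same kind of spot check works here, with fmap instantiated at Maybe on one side and at [] on the other (the sample list is made up):

```haskell
import Data.Maybe (listToMaybe)

-- fmap commutes with listToMaybe, across two different Functors.
main :: IO ()
main = do
  let xs = ["parametricity", "naturality"]
  print (fmap length (listToMaybe xs))  -- Just 13
  print (listToMaybe (fmap length xs))  -- Just 13
```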

By identifying the Functor context with polymorphic functions, we can also state the context-preserving trait of fmap through commutativity equations like those shown in the above examples. For an arbitrary context-probing function r, the equation is:

```haskell
-- f is arbitrary, and so are the involved functors.
fmap f . r = r . fmap f
```

The equations for reverse and listToMaybe clearly have that shape. length does not seem to fit at first sight, but that can be easily solved by lifting it to a constant functor such as the one provided by Control.Applicative.

```haskell
lengthC :: [a] -> Const Int a
lengthC = Const . length
-- length = getConst . lengthC

-- For constant functors, fmap f = id regardless of f.
fmap f . lengthC = lengthC . fmap f
```

A similar trick can be done with the Identity functor to make functions in which the type parameter of the Functor appears bare, such as Just :: a -> Maybe a, fit our scheme.
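A sketch of the Identity counterpart of that trick (the justI name is made up; Identity is assumed to come from Data.Functor.Identity): composing with runIdentity turns Just into a function between functorial values, so the commutativity equation applies to it as well.

```haskell
import Data.Functor.Identity (Identity (..))

justI :: Identity a -> Maybe a
justI = Just . runIdentity

-- Naturality: fmap f . justI = justI . fmap f
main :: IO ()
main = do
  print (fmap (+ 1) (justI (Identity (41 :: Int))))  -- Just 42
  print (justI (fmap (+ 1) (Identity (41 :: Int))))  -- Just 42
```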

It turns out that there is a category theory concept that captures the commutativity property we are interested in. A *natural transformation* is a translation between functors which preserves arrows being mapped through them. For Haskell Functors, that amounts to preserving functions being mapped via fmap. We can display the relation through a diagram:

*Naturality for Haskell Functors. Example instantiation: T = []; U = Maybe; r = listToMaybe.*

The naturality condition matches our commutativity property. Indeed, *polymorphic functions are natural transformations between Haskell Functors*. The proof of this appealing result is not trivial, and requires some theoretical work, just like in the case of the closely related results about parametricity we alluded to earlier. In any case, all it takes to go from “natural transformations preserve fmap” to “fmap preserves natural transformations” is tilting our heads while looking at the diagram above!

Given how we identified Functor contexts, polymorphic functions and natural transformations, we can finally give a precise answer to our question. The context consists of natural transformations between functors, and therefore fmap preserves it.

#### Structures and Structures

Earlier on, we said that we would not be directly concerned with structure in the sense mathematicians use the word, but only with the fuzzy Haskell concept that sometimes goes by the same name. To wrap things up, we will now illustrate the fact that the two senses are not worlds apart. Let’s have another look at the second functor law, which states that fmap preserves composition:

```haskell
fmap (g . f) = fmap g . fmap f
```

Structure, in the mathematical sense, refers to some collection of interesting operations and distinguished elements. In this example, the relevant operation is function composition, which is part of the structure of the **Hask** category. Besides that, however, we are now able to note the uncanny resemblance between the shape of the law, which says that it does not matter whether we compose f and g before applying fmap, and that of the commutativity properties we used to characterise functorial contexts. The upshot is that by identifying the context and structure of a Functor with polymorphic functions, we retain much of the spirit of the mathematical usage of “structure”. The interesting operations, in our case, are the polymorphic functions with which the context is probed. Perhaps it even makes sense to keep talking of the structure of a Functor even after dropping the container metaphor.

Speaking of the second law, we will, just for kicks, use it to show how to turn things around and look at fmap as a natural transformation between Functors. In order to do so, we have to recall that (.) is fmap for the function functor:

```haskell
fmap (g . f)     = fmap g . fmap f
fmap (((.) g) f) = (.) (fmap g) (fmap f)
fmap . (.) g     = ((.) . fmap) g . fmap

-- A rephrasing to point out the involved functors:
-- By fixing t and a, we get Functors parametrised over b on both sides
-- of the function arrow below.
fmap_t :: (Functor t) => (->) a b -> (->) (t a) (t b)
fmap_t = fmap

fmap_fun :: (b -> c) -> ((->) a b -> (->) a c)
fmap_fun = (.)

fmap_fun_t :: (Functor t)
           => (b -> c) -> ((->) (t a) (t b) -> (->) (t a) (t c))
fmap_fun_t = fmap_fun . fmap_t

fmap_t . fmap_fun g = fmap_fun_t g . fmap_t
-- Or simply:
fmap . fmap g = fmap g . fmap
```

#### Further Reading

- In *The Holy Trinity*, Robert Harper comments on the deep connection between logic, type theory and category theory that allows us to shift seamlessly between the categorical and the type theoretical perspectives, as we have done here.
- *You Could Have Defined Natural Transformations* by Dan Piponi is a very clear introduction to natural transformations in a Haskell context.
- We have already mentioned Philip Wadler’s *Theorems for free!*, which is a reasonably accessible introduction to *free theorems*: results about functions that, thanks to parametric polymorphism, can be deduced from the type of the function alone. Given suitable generalisations, free theorems and naturality conditions provide two parallel ways of reaching the same results about Haskell functions.
- *Free Theorems Involving Type Constructor Classes*, a functional pearl by Janis Voigtländer, illustrates how free theorem generation can be generalised to types parametric on type constructors and type classes.
- For an explicitly categorical perspective on parametricity, a good place to start if you are willing to dig into theory is the section on parametricity in *Some Aspects of Categories in Computer Science* by Philip J. Scott.

A category theory primer would be too big a detour for this post. If the category theory concepts I just mentioned are new to you, I suggest the following gentle introductions for Haskellers, which take very different approaches: the Haskell Wikibook chapter on category theory, and Gabriel Gonzalez’s posts *The category design pattern* and *The functor design pattern*.↩

Post licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

### Recommendation for Learning Haskell. Pick 3 from this list and/or other lists. And if you want to know which one is the best, answer: the 3rd one you read.

### A major upgrade to attoparsec: more speed, more power | teideal glic deisbhéalach

### My Experiences at BayHac 2014

### Haskell Programming Language

### I made an animated ASCII-art music visualization at a hackathon today in Haskell

### ghc-rts.pdf

### IntelliJ plugin?

It looks like someone finally either forked the old plugin or decided to start a new one (link from the IntelliJ site). Has anyone used this yet?

submitted by spitfiredd

### Can't Scheme be considered a statically typed language?

I was thinking and I noticed one could create a language identical to Scheme with a static typing system and type inference. So why can't we say Scheme itself is a statically typed language?

```scheme
5 :: Number
(cons 1 nil) :: List Number
(lambda (a) (+ a a)) :: Number -> Number
(lambda (a) (cons a nil)) :: a -> List a
```

The only trouble I see would be something like below:

```scheme
(lambda (a) (if a 5 (cons 1 nil)))
```

But I guess one could just go ahead and say it is just a sum type:

```scheme
data NewType = Number | List
(lambda (a) (if a 5 (cons 1 nil))) :: NewType?
```
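In Haskell terms, the sum type the poster gestures at would look something like this (the NumberOrList and pick names are made up for illustration; note that, unlike the NewType sketch above, the constructors must carry their payloads):

```haskell
-- A sum type covering both possible result shapes.
data NumberOrList = N Integer | L [Integer] deriving Show

pick :: Bool -> NumberOrList
pick a = if a then N 5 else L [1]

main :: IO ()
main = do
  print (pick True)   -- N 5
  print (pick False)  -- L [1]
```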

submitted by SrPeixinho

### Why is the monoid operation mappend and not mop?

I realize it's a pretty trivial thing, but it's been bugging me for a while. I searched a bit but couldn't find any historical reason for the list-centric naming of monoid functions.

submitted by Hrothen

### Yesod won't install

I tried installing Yesod, following the instructions. Unfortunately it failed with an error message:

```
cabal: Error: some packages failed to install:
hjsmin-0.1.4.6 depends on language-javascript-0.5.13 which failed to install.
language-javascript-0.5.13 failed during the configure step.
The exception was: ExitFailure 1
yesod-platform-1.2.11 depends on language-javascript-0.5.13 which failed to install.
yesod-static-1.2.3 depends on language-javascript-0.5.13 which failed to install.
```

FWIW I'm using Ubuntu 12.04 and installed Haskell with:

```
$ sudo apt-get install haskell-platform
```

Is this what is known as "dependency hell"? Would I be better off not using the apt-get package?

Also, AIUI cabal is Haskell's equivalent of Python's pip; is there a Haskell equivalent of Python's virtualenv (which lets me run multiple virtual environments, each with different packages installed)?

submitted by cabalamat

### Writing functions without explicit application

I'm trying to write a function that converts a list to a palindromic list in the following way:

```
> palindrate [1,2,3]
[1,2,3,3,2,1]
> palindrate "r/haskell"
"r/haskelllleksah/r"
```

It is quite simple to write this in the following way:

```haskell
palindrate :: [a] -> [a]
palindrate l = l ++ reverse l
```

However, recently I've become rather fond of defining functions without explicitly referring to the value they are applied to. In most situations, like the one above, it takes a few moments of thought to figure out how to do this. I have come up with the following:

```haskell
palindrate :: [a] -> [a]
palindrate = id &&& reverse >>> uncurry (++)
```

Is there a better way of doing this? And, other than the benefits derived from the programmer feeling good, is there anything to be gained from this approach?
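For comparison, one shorter point-free spelling uses the Applicative instance for functions, where f <*> g = \x -> f x (g x); whether it reads better than the &&&/>>> version is a matter of taste:

```haskell
palindrate :: [a] -> [a]
palindrate = (++) <*> reverse  -- i.e. \l -> l ++ reverse l

main :: IO ()
main = do
  print (palindrate [1, 2, 3 :: Int])  -- [1,2,3,3,2,1]
  putStrLn (palindrate "r/haskell")    -- r/haskelllleksah/r
```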

submitted by tekn04