News aggregator

Lenses You Can Make at Home

Haskell on Reddit - Sat, 04/26/2014 - 9:37am
Categories: Incoming News

IdiomaticLens.hs

Haskell on Reddit - Sat, 04/26/2014 - 7:56am

Given the popularity of the "Lens is unidiomatic Haskell" article, I thought it might be worthwhile to investigate what an idiomatic implementation of (a small subset of) lens might look like. Here is my implementation.

Notes:

  • By "idiomatic Haskell", I mean that instead of cleverly using type synonyms to induce a subtyping relation between the different kinds of optics, I use separate datatype definitions and explicit conversion functions. Whether that is or should be called "idiomatic" is beside the point.
  • While my optics compose in the same order as accessors instead of the OO order used by lens, achieving this was not a design goal. It's a simple consequence of the order in which the type arguments are given.
  • lens-family and lens-family-core also use the type synonyms trick, and since data-lens only supports the Lens optic, the question of explicit conversion between optics doesn't arise. My toy implementation is far less serious than any of those libraries.
  • I have probably followed glguy's convenient summary of the different optics (reproduced below) much too closely; apologies if my optics aren't as general as the originals.
Getter'    s a -- Gets one a from an s
Setter'    s a -- Sets any number of a in s
Fold'      s a -- Gets any number of a in s
Traversal' s a -- Gets or Sets any number of a in s
Lens'      s a -- Gets or Sets one a in s
Review'    s a -- Can construct s from a
Prism'     s a -- Gets zero or one a in s / Can construct s from a

submitted by gelisam
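The first note above describes replacing the type-synonym subtyping trick with separate datatypes and explicit conversions. As a rough, hedged sketch of what that looks like (illustrative names, not gelisam's actual code):

data Getter s a = Getter { runGetter :: s -> a }

data Setter s a = Setter { runSetter :: (a -> a) -> s -> s }

data Lens s a = Lens { lensGet    :: s -> a
                     , lensModify :: (a -> a) -> s -> s }

-- Conversions between optics are ordinary, explicit functions,
-- instead of implicit subtyping induced by type synonyms.
lensToGetter :: Lens s a -> Getter s a
lensToGetter l = Getter (lensGet l)

lensToSetter :: Lens s a -> Setter s a
lensToSetter l = Setter (lensModify l)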
Categories: Incoming News

Daniel Mlot (duplode): Lenses You Can Make at Home

Planet Haskell - Sat, 04/26/2014 - 6:00am

The most striking traits of the lens library are its astonishing breadth and generality. And yet, the whole edifice is built around van Laarhoven lenses, which are a simple and elegant concept. In this hands-on exposition, I will show how the Lens type can be understood without prerequisites other than a passing acquaintance with Haskell functors. Encouraging sound intuition in an accessible manner can go a long way towards making lens and lenses less intimidating.

Humble Beginnings

Dramatis personæ:

> import Data.Functor.Identity (Identity(..))
> import Control.Applicative (Const(..))

I will define a toy data type so that we have something concrete to play with, as well as a starting point for working out generalisations.

> data Foo = Foo { bar :: Int } deriving (Show)

The record definition gets us a function for accessing the bar field.

GHCi> :t bar
bar :: Foo -> Int

As for the setter, we have to define it ourselves, unless we feel like mucking around with record update syntax.

> setBar :: Foo -> Int -> Foo
> setBar x y = x { bar = y }

Armed with a proper getter and setter pair, we can easily flip the sign of the bar inside a Foo.

GHCi> let x = Foo 3
GHCi> setBar x (negate $ bar x)
Foo {bar = -3}

We can make it even easier by defining a modifier function for bar.

> modifyBar :: (Int -> Int) -> Foo -> Foo
> modifyBar k x = setBar x . k . bar $ x

GHCi> modifyBar negate x
Foo {bar = -3}

setBar can be recovered from modifyBar by using const to discard the original value and put the new one in its place.

const y = \_ -> y

> setBar' :: Foo -> Int -> Foo
> setBar' x y = modifyBar (const y) x

If our data type had several fields, defining a modifier for each of them would amount to quite a lot of boilerplate. We could minimise it by taking our modifyBar definition and abstracting over the specific getter and setter for bar. Here, things begin to pick up steam. I will define a general modify function, which, given an appropriate getter-setter pair, can deal with any field of any data type.

> modify :: (s -> a) -> (s -> a -> s) -> (a -> a) -> s -> s
> modify getter setter k x = setter x . k . getter $ x

It is trivial to recover modifyBar; when we do so, s becomes Foo and a becomes Int.

> modifyBar' :: (Int -> Int) -> Foo -> Foo
> modifyBar' = modify bar setBar

Functors Galore

The next step of generalisation is the one leap of faith I will ask of you in the way towards lenses. I will introduce a variant of modify in which the modifying function, rather than being a plain a -> a function, returns a functorial value. Defining it only takes an extra fmap.

> modifyF :: Functor f => (s -> a) -> (s -> a -> s)
>         -> (a -> f a) -> s -> f s
> modifyF getter setter k x = fmap (setter x) . k . getter $ x

And here is its specialisation for bar.

> modifyBarF :: Functor f => (Int -> f Int) -> Foo -> f Foo
> modifyBarF = modifyF bar setBar

Why on Earth would we want to do that? For one, it allows for some nifty tricks depending on the functor we choose. Let’s try it with lists. Specialising the modifyF type would give:

modifyL :: (s -> a) -> (s -> a -> s) -> (a -> [a]) -> s -> [s]

Providing the getter and the setter would result in an (a -> [a]) -> s -> [s] function. Can you guess what it would do?

GHCi> modifyBarF (\y -> [0..y]) x
[Foo {bar = 0},Foo {bar = 1},Foo {bar = 2},Foo {bar = 3}]

As the types suggest, we get a function which modifies the field in multiple ways and collects the results.

I claimed that moving from modify to modifyF was a generalisation. Indeed, we can recover modify by bringing Identity, the dummy functor, into play.

newtype Identity a = Identity { runIdentity :: a }

instance Functor Identity where
    fmap f (Identity x) = Identity (f x)

modifyI :: (s -> a) -> (s -> a -> s) -> (a -> Identity a) -> s -> Identity s

> modify' :: (s -> a) -> (s -> a -> s) -> (a -> a) -> s -> s
> modify' getter setter k =
>     runIdentity . modifyF getter setter (Identity . k)

We wrap the field value with Identity after applying k and unwrap the final result after applying the setter. Since Identity does nothing interesting to the wrapped values, the overall result boils down to our original modify. If you have found this definition confusing, I suggest that you, as an exercise, rewrite it in pointful style and substitute the definition of modifyF.

We managed to get modify back with little trouble, which is rather interesting. However, what is truly surprising is that we can reconstruct not only the modifier but also the getter! To pull that off, we will use Const, which is a very quaint functor.

newtype Const a b = Const { getConst :: a }

instance Functor (Const a) where
    fmap _ (Const y) = Const y

modifyC :: (s -> a) -> (s -> a -> s) -> (a -> Const r a) -> s -> Const r s

If functors were really containers, Const would be an Acme product. A Const a b value does not contain anything of type b; what it does contain is an a value that we cannot even modify, given that fmap f is id regardless of what f is. As a consequence, if we pick Const a a as the functor to use with modifyF for a field of type a, and our chosen modifying function wraps the field value with Const, then the value will not be affected by the setter and we will be able to recover it later. That suffices for recovering the getter.

> get :: (s -> a) -> (s -> a -> s) -> s -> a
> get getter setter = getConst . modifyF getter setter Const
>
> getBar :: Foo -> Int
> getBar = get bar setBar

The Grand Unification

Given a getter and a setter, modifyF gets us a corresponding functorial modifier. From it, by choosing the appropriate functors, we can recover the getter and a plain modifier; the latter, in turn, allows us to recover the setter. We can highlight the correspondence by redefining once more the recovered getters and modifiers, this time in terms of the functorial modifier.

modifyF :: Functor f => (s -> a) -> (s -> a -> s) -> ((a -> f a) -> s -> f s)

> modify'' :: ((a -> Identity a) -> s -> Identity s) -> (a -> a) -> s -> s
> modify'' modifier k = runIdentity . modifier (Identity . k)
>
> modifyBar'' :: (Int -> Int) -> Foo -> Foo
> modifyBar'' = modify'' modifyBarF
>
> set :: ((a -> Identity a) -> s -> Identity s) -> s -> a -> s
> set modifier x y = modify'' modifier (const y) x
>
> setBar'' :: Foo -> Int -> Foo
> setBar'' = set modifyBarF
>
> get' :: ((a -> Const a a) -> s -> Const a s) -> (s -> a)
> get' modifier = getConst . modifier Const
>
> getBar' :: Foo -> Int
> getBar' = get' modifyBarF

The bottom line is that given modifyBarF we can get by without modifyBar, setBar and bar, as modify'', set and get' allow us to reconstruct them whenever necessary. While our first version of get was, in effect, just a specialised const with a wacky implementation, get' is genuinely useful because it cuts the number of separate field manipulation functions we have to deal with by a third.

Expanding Horizons

Even after all of the work so far we can still generalise further! Let’s have a second look at modifyF.

modifyF :: Functor f => (s -> a) -> (s -> a -> s) -> (a -> f a) -> s -> f s
modifyF getter setter k x = fmap (setter x) . k . getter $ x

The type of setter is (s -> a -> s); however, nothing in the implementation forces the first argument and the result to have the same type. Furthermore, with a different signature k could have a more general type, (a -> f b), as long as the type of setter was adjusted accordingly. We can thus give modifyF a more general type.

> modifyGenF :: Functor f => (s -> a) -> (s -> b -> t)
>            -> (a -> f b) -> s -> f t
> modifyGenF getter setter k x = fmap (setter x) . k . getter $ x

For the sake of completeness, here are the generalised recovery functions. get is not included because the generalisation does not affect it.

> modifyGen :: ((a -> Identity b) -> s -> Identity t) -> (a -> b) -> s -> t
> modifyGen modifier k = runIdentity . modifier (Identity . k)
>
> setGen :: ((a -> Identity b) -> s -> Identity t) -> s -> b -> t
> setGen modifier x y = modifyGen modifier (const y) x

By now, it is clear that our getters and setters need not be ways to manipulate fields in a record. In a broader sense, a getter is anything that produces a value from another; in other words, any function can be a getter. By the same token, any binary function can be a setter, as all that is required is that it combines one value with another producing a third; the initial and final values do not even need to have the same type.[1] That is a long way from the toy data type we started with!
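For instance, here is a hedged sketch of that generality (pairFst is an illustrative name, not part of the original exposition): the ordinary function fst serves as the getter, and a binary function that rebuilds the pair serves as a type-changing setter.

> -- fst extracts the target; the lambda installs a new first
> -- component, possibly of a different type.
> pairFst :: Functor f => (a -> f b) -> (a, c) -> f (b, c)
> pairFst = modifyGenF fst (\(_, y) b -> (b, y))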

The Reveal

If we look at modifyGenF as a function of two arguments, its result type becomes:

Functor f => (a -> f b) -> s -> f t

Now, let’s take a peek at Control.Lens.Lens:

type Lens s t a b = forall f. Functor f => (a -> f b) -> s -> f t

It is the same type! We have reached our destination.[2] A lens is what we might have called a generalised functorial modifier; furthermore, sans implementation details we have that:

  • The lens function is modifyGenF;
  • modifyF is lens specialised to produce simple lenses;[3]
  • modifyBarF is a lens with type Lens Foo Foo Int Int;
  • (^.) is flipped get';
  • set is setGen;
  • over is modifyGen further generalised.[4]
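To see the correspondence in action, here is a hedged GHCi sketch; it assumes the lens package is installed, and writes Control.Lens.set qualified so it does not clash with the set we defined above:

GHCi> :m +Control.Lens
GHCi> Foo 3 ^. modifyBarF
3
GHCi> Control.Lens.set modifyBarF 0 (Foo 3)
Foo {bar = 0}

Our hand-rolled modifyBarF works directly with the lens combinators, with no conversion required.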

lens uses type synonyms liberally, so those correspondences are not immediately obvious from the signatures in the documentation. Digging a little deeper, however, shows that in

set :: ASetter s t a b -> b -> s -> t

ASetter is merely

type ASetter s t a b = (a -> Identity b) -> s -> Identity t

Analogously, we have

(^.) :: s -> Getting a s a -> a

type Getting r s a = (a -> Const r a) -> s -> Const r s

Behind the plethora of type synonyms - ASetter, Getting, Fold, Traversal, Prism, Iso and so forth - there are different choices of functors,[5] which make it possible to capture many different concepts as variations on lenses. The variations may be more general or less general than lenses; occasionally they are neither, as the overlap is just partial. The fact that we can express so much through parametrization of functors is key to the extraordinary breadth of lens.

Going Forward

This exposition is primarily concerned with building lenses, and so very little was said about how to use them. In any case, we have seen enough to understand why lenses are also known as functional references. By unifying getters and setters, lenses provide a completely general vocabulary to point at parts of a whole.

Finally, a few words about composition of lenses are unavoidable. One of the great things about lenses is that they are just functions; even better, they are functions with signatures tidy enough for them to compose cleanly with (.). That makes it possible to compose lenses independently of whether you intend to get, set or modify their targets. Here is a quick demonstration using the tuple lenses from lens.

GHCi> :m +Control.Lens
GHCi> ((1,2),(3,4)) ^. _1 . _2
2
GHCi> set (_1 . _2) 0 ((1,2),(3,4))
((1,0),(3,4))

A perennial topic in discussions about lens is the order of composition of lenses. They are often said to compose backwards; that is, backwards with respect to composition of record accessors and similar getters. For instance, the getter corresponding to the _1 . _2 lens is snd . fst. The claim that lenses compose backwards, or in the “wrong order”, however, is only defensible when talking about style, and not about semantics. That becomes clear after placing the signatures of a getter and its corresponding lens side by side.

GHCi> :t fst
fst :: (a, b) -> a
GHCi> :t _1
_1 :: Lens' (a, b) a
-- that is, Functor f => (a -> f a) -> (a, b) -> f (a, b)

The getter takes a value of the source type and produces a value of the target type. The lens, however, takes a function from the target type and produces a function from the source type. Therefore, it is no surprise that the order of composition differs, and the order for lenses is entirely natural. That ties in closely to what we have seen while implementing lenses. While we can squeeze lenses until they give back getters, it is much easier to think of them as generalised modifiers.

Footnotes
  1. We are not quite as free when it comes to pairing getters and setters. Beyond the obvious need for getter and setter to start from values of the same type, they should behave sanely when composed. In particular, the following should hold:

     get' modifier (setGen modifier y x) ≡ y
     setGen modifier (get' modifier x) x ≡ x
     setGen modifier z (setGen modifier y x) ≡ setGen modifier z x
  2. “What about the forall?” you might ask. Are we cheating? Not quite. The forall is there to control how f is specialised when lens combinators are used. The underlying issue does not affect our reasoning here. If you are into type system subtleties, there were a few interesting comments about it in the reddit thread for this post.

  3. Lens' s a or Lens s s a a, as opposed to Lens s t a b.

  4. Yes, even further; from taking modifying functions to taking modifying profunctors. The difference need not worry us now.

  5. And in some cases of profunctors to replace the function type constructor.


Post licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

Categories: Offsite Blogs

Douglas M. Auclair (geophf): 'V' is for Variable

Planet Haskell - Sat, 04/26/2014 - 3:32am

'V' is for ... I don't know what, because 'V' is for 'Variable'
HA! Saved by the variable.
I cannot believe I didn't know, at first, what 'v' was for, but there it was, all along, so much present I didn't even see it: the variable.
That happens to you? Something is so in front of you all the time, you don't even notice it until it's gone? It happened to Emma, Darwin's wife (yes, the Darwin), she had a 10-year-old boy whom she loved, and then, one day he was gone. Alive and sweet and playful one day, and then in a casket the next, never to breathe or to eat or to sing or to cry again, just to be dead then buried six feet under the ground, and then forgotten, just like Emma is now, just like Charles the Darwin is now not. But he will be. Given enough time and enough distance and everything shrinks to insignificance, and then to invisibility, just like the 'next' number after π.
Okay, side note, this font? Verdana? I love it. I freekin love it, except for one small nagging detail. Why in the world does it make the Greek letter pi, π, look like a frikken 'n,' for goodness sake?
Okay, switching fonts now.
Let's try Geneva. π. Better, much better representation of π. Okay, moving on with this new font.
Okay, so anyway, there, always there, but then gone and forgotten.
Except in Eternity. That's why we have it. In eternity, there always is, and always will be. There is being and that's all there is in Eternity.
Math is eternal. A statement of truth in mathematics is true before (we just didn't know it yet), it's true now, and it will be true for ever (in that particular domain). In mathematics you'll always have the Fermat's thingie, first known as a conjecture, then as his Last Theorem, and now known as 'solved.'
Heh.
In mathematics, you're granted eternity. Euclid, anyone? He's been going strong these last two-thousand, six-hundred years. Eratosthenes? He was just some dude at the Library of Alexandria, but he noticed something about how to find prime numbers, and his technique is by no means the best or most efficient now, but which one is the go-to algorithm for primes at school or nearly anywhere else? And who knows his name and the association? Today?
Who will know your name and what you did, twenty-six hundred years from now?
Okay, wait. Geneva font doesn't have italics?
This is getting ridiculous. JUST GIVE ME A FONT THAT DOES EVERYTHING I WANT, PLEASE! THAT'S ALL I'M ASKING!
JEEZ!
Okay, trying Helvetica now (π). Okay, this is working so far ... (Oh, and don't you dare say, 'oh, geophf, why not just use Times New Roman like everybody else does?' Don't you dare say that! Do I look like everybody else? No? Then don't ask the question why don't I do X like everybody else does! News flash: I'm not like everybody else! THANK you!)
(But I digress.)
(As always.)
So you have this thing, called a variable, that prevalent in mathematics and in computer programming languages. So prevalent, in fact, that it's just taken for granted and not even seen any more, just used, and abused, and that's it.
But, what, ladies and gentlemen, is a variable?
We can revert to the epigram: "Constants aren't; Variables don't" from antiquity.
Or we can say: "Variables? Just say no."
I'm being fatuous (as always), but, as always, there's a kernel of truth I'm trying to convey here.
You all know the 'a variable is a cup' analogy that teachers teach you in lesson 2 of the high school class that they unknowingly mockingly name 'Computer Science I.'
Computer 'science'? Really? Where is science being done in computer science classes these days in high schools? If there were experimentation and discovery going on, then, yes, they could rightly name these classes and these, okay, really? 'Labs'? computer science. But if it's all 'Okay, class, turn to page 204 of your ... and do the exercises there' rote drilling, then it's not computer science, it's more like, 'big brother grinding out mindless automatons so you can work for the Man until you die I.'
I mean, come on, let's be honest, please.
Anyway. But I digress.
As always.
What I'm (trying to) say(ing) here is that there are different representations of what variables can be, including the 'pure' approach and the 'logical' approach, and the approach that you probably learned in your comp.sci class that told you you can assign your variable, change its value and do all sorts of weird and wonderful thing you do to variables that have no correspondence to actual mathematics.
For example
int x = 5;
x = x + 1;
What is x now?
'Well, 6, geophf, obviously,' you say.
Oh, really? Let's look at this again, mathematically:
x = 5
Okay, fine, x is 5. No problems with that.
Then this:
x = x + 1
Okay, what in the world is that? For:
 x =  x + 1    (theorem)
-x = -x        (reflexive)
 0 =  0 + 1    (addition)
 0 =  1        (addition)
And now you're left with a statement that 0 = 1.
So either that is true, meaning that x = x + 1 is true, or that is false, meaning that x = x + 1 is senseless, or absurd.
Take your pick: either zero equals one, or you can't, logically, 'increment' a number to 'become' another number.
So why do you think you can get away with that with variables?
Because in your comp.sci. I class you were told a variable is a 'cup' and you can increment it, and you told that stupid girl in the back of the class to shut the hell up after the third time she said, all confused, 'but, I don't get it.'
That actually happened in an intro to comp.sci class.
That 'stupid' girl in the back actually isn't all that stupid. She didn't 'get it' because there was nothing, logical, to get. It's an absurdity taught in comp.sci. classes (that aren't comp.sci. at all, I'll reiterate) that you all just buy because you were told this, like everything else you were told, and you're gonna get that 'A' from teach for the class, arencha?
But that's how most programming language model variables: something that can change, extralogically, too, at that, so why am I making a big stink about this, and about other extralogical features, such as (shudder) assert?
Because when you introduce things outside your logical domain, then you have a hole in your proof, and a hole, no matter how small, means that you can no longer say anything with certainty about the correctness of your system.
So, how should variables behave then, if they were logical?
'Constants aren't [that's your problem, bud]; variables don't.'
Variables in computer programs shouldn't vary. In the context of their computation, once bound, they should remain bound to the binding value while in that computational context.
(Please, let's not talk about lexical scoping versus dynamic scoping of variables. That conversation died with the invention of Scheme.)
So,
x = 5
But what if you need that variable to change? You ask.
Why? I ask back. Let's think this through. Why would you need to have something change during the course of a computation? Because one value didn't work and you want to try another value?
That's easily solved, simply restart the computation in that case, then you rebind the variable with the new trial case.
For example,
give_me_a_number_whose_factorial_is_greater_than_100(x)
Is x 1? Obviously not, so just keep calling that function with a new x until you get your greater-than-100-factorial value from your trial number. But x, inside the computation, is that value, and not some other value.
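A hedged Haskell sketch of that restart-and-rebind idea (the names are illustrative):

-- No binding is ever mutated; each retry is a fresh call in which
-- x is rebound to the next trial value.
factorialGreaterThan :: Integer -> Integer -> Integer
factorialGreaterThan limit x
  | product [1..x] > limit = x
  | otherwise              = factorialGreaterThan limit (x + 1)

Here factorialGreaterThan 100 1 evaluates to 5 (5! = 120 is the first factorial past 100), and at no point does any binding change.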
By the way, I had this as an interview question: write a function that returns a factorial of a number greater than X.
... as a fault-tolerant web-service.
The questions of 'Okay, why?' and 'Who would want this, anyway?' were not part of the conversation.
This is all very comonadic ('W' is for comonad ... fer realz, yo! That'll be my next post) because here you control the value in the computational context as well as the computational context, itself.
Variables are comonadic as the title of one of my '... are comonadic' articles? Why not?
The same thing, in a very different way, happens in Prolog. Once a logical variable is bound (that is, once it's no longer 'free') it remains bound in the logical context, and you either complete the proof, exiting the context, or you fail back up through the proof and in so doing restart somewhere upstream from the variable binding, which allows you to bind a new value to the logic variable.
A logic variable is variable insofar that it is either in a bound state (has a value) or in a free state (and is waiting to be bound to a value), and this happens through some kind of unification ('U' is for Unification).
Variable-binding in prolog is purely logical.
'Variables that don't change, geophf? What use are they? That totally doesn't model the real world!'
Uh, it does model the real world, and much better than your imperative-stateful variable-state changes. I mean, once a thing is a thing, it remains that thing forever more. You ever try to 'change the state' of an electron to a proton? You ever change a hydrogen atom into helium?
Actually a 19-year-old kid built a fusion reactor in the toolshed in his back yard. People kept telling him that he 'couldn't' do that. He loved that when he was told that over and over again.
And then submitted his working fusion reactor to the google science fair. 
Do people get attention for listening to nay-sayers? No, they get beaten down to become a nay-sayer.
Did this young laddie get attention at the google fair?
... and beyond?
Yes. Oh, yes, he did.
Where were we? Oh, yes: variables. How did we get here? Oh, yes: changing something to be something else.
You 'can't' do that. So don't bother, sayeth the nay-sayer.
And, if you absolutely, positively must change a variable to become something else, well, I got something for ya! It's called a State variable.
newtype State s a = State { runState :: s -> (a, s) }
What's going on here?
State isn't a (mutable) variable, actually. It's a function.
Functions rock, by the way.
This function takes a state-of-the-world, whatever you deem that to be, and returns a value from that state as well as the new state-of-the-world that it sees.
So?
So, what you can do is string this state-'variable' from one state to the next and get yourself an approximation of a mutable variable, just like the ones you hold so dearly in your 'ordinary' programming languages, so your x = 5 example now works:
do put 5
   x <- get
   return $ x + 1

Where 'put' is your assignment and 'get' retrieves the value of x.
And those two statements are strung together, computationally, even though not explicitly so in the above example.
Here, I'll make it explicit:
put(5) >>= get >>= \x -> return $ x + 1
The monadic-bind operation (>>=) strings the computation together, giving a logical trace of the computational state-domain from its initial state (put(5)) to its retrieval (get) to its updated state (return $ x + 1). There is not one place where a variable actually changes its value, and the entire computation is logically sound from beginning to end.
The above is a statement of truth and is provable and fully verifiable.
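For completeness, here is a hedged, runnable rendering of that computation, using Control.Monad.State from the mtl package instead of the bare sketch above:

import Control.Monad.State (State, get, put, evalState)

-- put stores the new state; get retrieves it; nothing is mutated.
example :: State Int Int
example = do
  put 5
  x <- get
  return (x + 1)

-- evalState example 0 == 6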
The below statements, however:
x = 5
x = x + 1
Are, indeed, statements, but I've shown one of them is false, so the whole system crumbles into absurdity from that one falsehood.
And people wonder why computer programs are so hard to write and to get right.
It's only because of something so simple, and so pervasive, as mutable variables.
See you tomorrow.
Categories: Offsite Blogs

www.haskellforall.com

del.icio.us/haskell - Sat, 04/26/2014 - 3:15am
Categories: Offsite Blogs

ro-che.info

del.icio.us/haskell - Sat, 04/26/2014 - 3:15am
Categories: Offsite Blogs

Mark Jason Dominus: My brush with Oulipo

Planet Haskell - Fri, 04/25/2014 - 11:12pm

Last night I gave a talk for the New York Perl Mongers, and got to see a number of people that I like but don't often see. Among these was Michael Fischer, who told me of a story about myself that I had completely forgotten, but I think will be of general interest.


The front end of the story is this: Michael first met me at some conference, shortly after the publication of Higher-Order Perl, and people were coming up to me and presenting me with copies of the book to sign. In many cases these were people who had helped me edit the book, or who had reported printing errors; for some of those people I would find the error in the text that they had reported, circle it, and write a thank-you note on the same page. Michael did not have a copy of my book, but for some reason he had with him a copy of Oulipo Compendium, and he presented this to me to sign instead.

Oulipo is a society of writers, founded in 1960, who pursue “constrained writing”. Perhaps the best-known example is the lipogrammatic novel La Disparition, written in 1969 by Oulipo member Georges Perec, entirely without the use of the letter e. Another possibly well-known example is the Exercises in Style of Raymond Queneau, which retells the same vapid anecdote in 99 different styles. The book that Michael put in front of me to sign is a compendium of anecdotes, examples of Oulipan work, and other Oulipalia.

What Michael did not realize, however, was that the gods of fate were handing me an opportunity. He says that I glared at him for a moment, then flipped through the pages, found the place in the book where I was mentioned, circled it, and signed that.

The other half of that story is how I happened to be mentioned in Oulipo Compendium.

Back in the early 1990s I did a few text processing projects which would be trivial now, but which were unusual at the time, in a small way. For example, I constructed a concordance of the King James Bible, listing, for each word, the number of every verse in which it appeared. This was a significant effort at the time; the Bible was sufficiently large (around five megabytes) that I normally kept the files compressed to save space. This project was surprisingly popular, and I received frequent email from strangers asking for copies of the concordance.

Another project, less popular but still interesting, was an anagram dictionary. The word list from Webster's Second International dictionary was available, and it was an easy matter to locate all the anagrams in it, and compile a file. Unlike the Bible concordance, which I considered inferior to simply running grep, I still have the anagram dictionary. It begins:

aal ala
aam ama
Aarhus (See `arusha')
Aaronic (See `Nicarao')
Aaronite aeration
Aaru aura

And ends:

zoosporic sporozoic
zootype ozotype
zyga gazy
zygal glazy

The cross-references are to save space. When two words are anagrams of one another, both are listed in both places. But when three or more words are anagrams, the words are listed in one place, with cross-references in the other places, so for example:

Ateles teasel stelae saltee sealet
saltee (See `Ateles')
sealet (See `Ateles')
stelae (See `Ateles')
teasel (See `Ateles')

saves 52 characters over the unabbreviated version. Even with this optimization, the complete anagram dictionary was around 750 kilobytes, a significant amount of space in 1991. A few years later I generated an improved version, which dispensed with the abbreviation, by that time unnecessary, and which attempted, successfully I thought, to score the anagrams according to interestingness. But I digress.
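For the curious, here is a hedged Haskell sketch of the classic grouping trick alluded to above (my reconstruction; the actual code was presumably Perl): two words are anagrams exactly when their sorted letters coincide, so the sorted string serves as a grouping key.

import Data.List (sort)
import qualified Data.Map.Strict as Map

-- Group a word list by sorted letters, keeping only genuine
-- anagram classes of two or more words.
anagramClasses :: [String] -> [[String]]
anagramClasses ws =
    filter ((> 1) . length) $
    Map.elems $
    Map.fromListWith (++) [ (sort w, [w]) | w <- ws ]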

One day in August of 1994, I received a query about the anagram dictionary, including a question about whether it could be used in a certain way. I replied in detail, explaining what I had done, how it could be used, and what could be done instead, and the result was a reply from Harry Mathews, another well-known member of the Oulipo, of which I had not heard before. Mr. Mathews, correctly recognizing that I would be interested, explained what he was really after:

A poetic procedure created by the late Georges Perec falls into the latter category. According to this procedure, only the 11 commonest letters in the language can be used, and all have to be used before any of them can be used again. A poem therefore consists of a series of 11 multi-word anagrams of, in French, the letters e s a r t i n u l o c (a c e i l n o r s t). Perec discovered only one one-word anagram for the letter-group, "ulcerations", which was adopted as a generic name for the procedure.

Mathews wanted, not exactly an anagram dictionary, but a list of words acceptable for the English version of "ulcerations". They should contain only the letters a d e h i l n o r s t, at most once each. In particular, he wanted a word containing precisely these eleven letters, to use as the translation of "ulcerations".
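A hedged sketch of the filter involved (again a reconstruction, not the actual code): a word qualifies if its letters can all be drawn from "adehilnorst" with no letter repeated, and a perfect translation of "ulcerations" would use all eleven.

import Data.List (sort, (\\))

-- (\\) removes one occurrence per matched letter, so a null
-- difference means every letter of w was matched at most once.
qualifies :: String -> Bool
qualifies w = null (w \\ "adehilnorst")

usesAllEleven :: String -> Bool
usesAllEleven w = sort w == "adehilnorst"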

Producing the requisite list was much easier than producing the anagram dictionary itself, so I quickly did it and sent it back; it looked like this:

a           A a
d           D d
e           E e
h           H h
i           I i
l           L l
n           N n
o           O o
r           R r
s           S s
t           T t
ad          ad da
ae          ae ea
ah          Ah ah ha
...
lost        lost lots slot
nors        sorn
nort        torn tron
nost        snot
orst        sort
adehl       heald
adehn       henad
adehr       derah
adehs       Hades deash sadhe shade
...
deilnorst   nostriled
ehilnorst   nosethirl
adehilnort  threnodial
adehilnrst  disenthral
aehilnorst  hortensial

The leftmost column is the alphabetical list of letters. This is so that if you find yourself needing to use the letters 'a d e h s' at some point in your poem, you can jump to that part of the list and immediately locate the words containing exactly those letters. (It provides somewhat less help for discovering the shorter words that contain only some of those letters, but there is a limit to how much can be done with static files.)

As can be seen at the end of the list, there were three words that each used ten of the eleven required letters: “hortensial”, “threnodial”, “disenthral”, but none with all eleven. However, Mathews replied:

You have found the solution to my immediate problem: "threnodial" may only have 10 letters, but the 11th letter is "s". So, as an adjectival noun, "threnodials" becomes the one and only generic name for English "Ulcerations". It is not only less harsh a word than the French one but a sorrowfully appropriate one, since the form is naturally associated with Georges Perec, who died 12 years ago at 46 to the lasting consternation of us all.

(A threnody is a hymn of mourning.)

A few years later, the Oulipo Compendium appeared, edited by Mathews, and the article on Threnodials mentions my assistance. And so it was that when Michael Fischer handed me a copy, I was able to open it up to the place where I was mentioned.

[ Addendum 20140428: Thanks to Philippe Bruhat for some corrections: neither Perec nor Mathews was a founding member of Oulipo. ]

Categories: Offsite Blogs

Dan Piponi (sigfpe): The Monad called Free

Planet Haskell - Fri, 04/25/2014 - 10:42pm
Introduction

As Dan Doel points out here, the gadget Free that turns a functor into a monad is itself a kind of monad, though not the usual kind of monad we find in Haskell. I'll call it a higher order monad and you can find a type class corresponding to this in various places including an old version of Ed Kmett's category-extras. I'll borrow some code from there. I hunted around and couldn't find an implementation of Free as an instance of this class so I thought I'd plug the gap.



> {-# LANGUAGE RankNTypes, FlexibleContexts, InstanceSigs, ScopedTypeVariables #-}



> import Control.Monad
> import Data.Monoid



To make things unambiguous I'll implement free monads in the usual way here:



> data Free f a = Pure a | Free (f (Free f a))



> instance Functor f => Functor (Free f) where
>     fmap f (Pure a) = Pure (f a)
>     fmap f (Free a) = Free (fmap (fmap f) a)



> instance Functor f => Monad (Free f) where
>     return = Pure
>     Pure a >>= f = f a
>     Free a >>= f = Free (fmap (>>= f) a)



The usual Haskell typeclass Monad corresponds to monads in the category of types and functions, Hask. We're going to want monads in the category of endomorphisms of Hask which I'll call Endo.


The objects in Endo correspond to Haskell's Functor. The arrows in Endo are the natural transformations between these functors:



> type Natural f g = (Functor f, Functor g) => forall a. f a -> g a



So now we are led to consider functors in Endo.



> class HFunctor f where



A functor in Endo must map functors in Hask to functors in Hask. So if f is a functor in Endo and g is a functor in Hask, then f g must be another functor in Hask. So there must be an fmap associated with this new functor. There's an associated fmap for every g and we collect them all into one big happy natural family:



>     ffmap :: Functor g => (a -> b) -> f g a -> f g b



But note also that by virtue of being a functor itself, f must have its own fmap type function associated with it. The arrows in Endo are natural transformations in Hask so the fmap for HFunctor must take arrows in Endo to arrows in Endo like so:



>     hfmap :: (Functor g, Functor h) => Natural g h -> Natural (f g) (f h)



Many constructions in the category Hask carry over to Endo. In Hask we can form a product of two types a and b as (a, b). In Endo we form the product of two functors f and g as



> data Product f g a = Product (f (g a))



Note that this product isn't commutative. We don't necessarily have an isomorphism from Product f g to Product g f. (This breaks many attempts to transfer constructions from Hask to Endo.) We also won't explicitly use Product because we can simply use the usual Haskell composition of functors inline.
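To make the non-commutativity concrete, here is a pair of illustrative values (not in the original post); the two composites have genuinely different shapes:

> exampleMJ :: Product Maybe [] Int
> exampleMJ = Product (Just [1, 2])
>
> exampleJM :: Product [] Maybe Int
> exampleJM = Product [Just 1, Just 2]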


We can implement some functions that act on product types in both senses of the word "product":



> left :: (a -> c) -> (a, b) -> (c, b)
> left f (a, b) = (f a, b)



> right :: (b -> c) -> (a, b) -> (a, c)
> right f (a, b) = (a, f b)



> hleft :: (Functor a, Functor b, Functor c) => Natural a c -> a (b x) -> c (b x)
> hleft f = f



> hright :: (Functor a, Functor b, Functor c) => Natural b c -> a (b x) -> a (c x)
> hright f = fmap f



(Compare with what I wrote here.)


We have something in Endo a bit like the type with one element in Hask, namely the identity functor. The product of a type a with the one element type in Hask gives you something isomorphic to a. In Endo the product is composition for which the identity functor is the identity. (Two different meanings of the word "identity" there.)


We also have sums. For example, if we define a functor like so



> data F a = A a | B a a



we can think of F as a sum of two functors: one with a single constructor A and another with constructor B.


We can now think about reproducing an Endo flavoured version of lists. The usual definition is isomorphic to:



> data List a = Nil | Cons a (List a)



And it has a Monoid instance:



> instance Monoid (List a) where
>     mempty = Nil
>     mappend Nil as = as
>     mappend (Cons a as) bs = Cons a (mappend as bs)



We can try to translate that into Endo. The Nil part can be thought of as being an element of a type with one element so it should become the identity functor. The Cons a (List a) part is a product of a and List a so that should get replaced by a composition. So we expect to see something vaguely like:



List' a = Nil' | Cons' (a (List' a))



That's not quite right because List' a is a functor, not a type, and so acts on types. So a better definition would be:



List' a b = Nil' b | Cons' (a (List' a b))



That's just the definition of Free. So free monads are lists in Endo. As everyone knows :-) monads are just monoids in the category of endofunctors. Free monads are also just free monoids in the category of endofunctors.


So now we can expect many constructions associated with monoids and lists to carry over to monads and free monads.


An obvious one is the generalization of the singleton map a -> List a:



> singleton :: a -> List a
> singleton a = Cons a Nil



> hsingleton :: Natural f (Free f)
> hsingleton f = Free (fmap Pure f)



Another is the generalization of foldMap. This can be found under a variety of names in the various free monad libraries out there but this implementation is designed to highlight the similarity between monoids and monads:



> foldMap :: Monoid m => (a -> m) -> List a -> m
> foldMap _ Nil = mempty
> foldMap f (Cons a as) = uncurry mappend $ left f $ right (foldMap f) (a, as)



> fold :: Monoid m => List m -> m
> fold = foldMap id



> hFoldMap :: (Functor f, Functor m, Monad m) => Natural f m -> Natural (Free f) m
> hFoldMap _ (Pure x) = return x
> hFoldMap f (Free x) = join $ hleft f $ hright (hFoldMap f) x



> hFold :: Monad f => Natural (Free f) f
> hFold = hFoldMap id



The similarity here isn't simply formal. If you think of a list as a sequence of instructions then foldMap interprets the sequence of instructions like a computer program. Similarly hFoldMap can be used to interpret programs for which the free monad provides an abstract syntax tree.
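As a hedged illustration of that last point (the functor and the names are mine, not from the original), here is a tiny instruction functor interpreted into IO with hFoldMap:

> data Say k = Say String k
>
> instance Functor Say where
>     fmap f (Say s k) = Say s (f k)
>
> -- Interpret the syntax tree: each Say node becomes a putStrLn.
> runSay :: Free Say a -> IO a
> runSay = hFoldMap (\(Say s k) -> putStrLn s >> return k)
>
> helloWorld :: Free Say ()
> helloWorld = Free (Say "hello" (Free (Say "world" (Pure ()))))

Running runSay helloWorld prints the two strings in order.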


You'll find some of these functions here by different names.


Now we can consider Free. It's easy to show this is a HFunctor by copying a suitable definition for List:



> instance Functor List where
>     fmap f = foldMap (singleton . f)



> instance HFunctor Free where
>     ffmap = fmap
>     hfmap f = hFoldMap (hsingleton . f)



We can define HMonad as follows:



> class HMonad m where
>     hreturn :: Functor f => f a -> m f a
>     hbind :: (Functor f, Functor g) => m f a -> Natural f (m g) -> m g a



Before making Free an instance, let's look at how we'd make List an instance of Monad



> instance Monad List where
>     return = singleton
>     m >>= f = fold (fmap f m)



And now the instance I promised at the beginning.



> instance HMonad Free where
>     hreturn = hsingleton
>     hbind m f = hFold (hfmap f m)



I've skipped the proofs that the monad laws hold and that hreturn and hbind are actually natural transformations in Endo. Maybe I'll leave those as exercises for the reader.


Update

After writing this I tried googling for "instance HMonad Free" and I found this by haasn. There's some other good stuff in there too.

Categories: Offsite Blogs

Gabriel Gonzalez: Model-view-controller, Haskell-style

Planet Haskell - Fri, 04/25/2014 - 7:59pm

I'm releasing the mvc library for model-view-controller (MVC) programming in Haskell. I initially designed this library with games and user interfaces in mind, but the larger goal of this library is to provide a mathematically inspired framework for general-purpose component sharing in Haskell.

This library differs in a significant way from other MVC libraries: this API statically enforces in the types that the Model is pure, but with very little loss in power. Using mvc you can refactor many types of concurrent applications into a substantial and pure Model that interfaces with carefully constrained Views and Controllers.

When you purify your Model this way, you can:

  • record and replay bugs reproducibly,

  • do property-based testing (ala QuickCheck) to uncover corner cases, and:

  • prove formal properties about your Model using equational reasoning.

The first half of this post walks through two examples reproduced from the mvc documentation and the second half of this post describes the architecture that enables you to embed large and non-trivial business logic in an entirely pure Model. This post will use a side-by-side mix of theoretical terminology alongside plain English. Even if you don't understand the theoretical half of this post you should still derive benefit from the non-technical half and use that as a bridge towards understanding the underlying theory.

Examples

The mvc library uses four types to represent everything:

  • The Model is a pure streaming transformation from inputs to outputs

  • The View handles all outputs from the Model

  • The Controller supplies all inputs to the Model

  • The Managed type extends other types with logic for acquiring or releasing resources

There are no other concepts you have to learn. The API is extraordinarily small (4 types, and 8 primitive functions associated with those types).

However, as we extend our application the types will never grow more complex. In fact, the mvc library statically forbids you from increasing the complexity of your types, because the library only provides a single run function of the following type:

runMVC
    :: s                               -- Initial state
    -> Model s a b                     -- Program logic
    -> Managed (View b, Controller a)  -- Impure output and input
    -> IO s                            -- Returns final state

There is no other way to consume Models, Views, and Controllers, so runMVC forces you to consolidate all your Views into a single View and consolidate all your Controllers into a single Controller. This creates a single entry point and a single exit point for your Model. Equally important, you cannot mix your Model logic with your View or Controller logic. The mvc library enforces MVC best practices using the type system.

This first minimal example illustrates these basic concepts:

import MVC
import qualified MVC.Prelude as MVC
import qualified Pipes.Prelude as Pipes

external :: Managed (View String, Controller String)
external = do
    c1 <- MVC.stdinLines
    c2 <- MVC.tick 1
    return (MVC.stdoutLines, c1 <> fmap show c2)

model :: Model () String String
model = asPipe (Pipes.takeWhile (/= "quit"))

main :: IO ()
main = runMVC () model external

The key components are:

  • A Controller that interleaves lines from standard input with periodic ticks

  • A View that writes lines to standard output

  • A pure Model, which forwards lines until it detects the string "quit"

  • A Managed type, which abstracts over resource acquisition and release

The Model only has a single streaming entry point (the Controller) and a single streaming exit point (the View).

However, this interface is deceptively simple and can model very complex logic. For example, here's a more elaborate example using the sdl library that displays a white rectangle between every two mouse clicks:

import Control.Monad (join)
import Graphics.UI.SDL as SDL
import Lens.Family.Stock (_Left, _Right) -- `lens-family-core`
import MVC
import MVC.Prelude
import qualified Pipes.Prelude as Pipes

data Done = Done deriving (Eq, Show)

sdl :: Managed (View (Either Rect Done), Controller Event)
sdl = join $ managed $ \k ->
    withInit [InitVideo, InitEventthread] $ do
        surface <- setVideoMode 640 480 32 [SWSurface]
        white   <- mapRGB (surfaceGetPixelFormat surface) 255 255 255

        let done :: View Done
            done = asSink (\Done -> SDL.quit)

            drawRect :: View Rect
            drawRect = asSink $ \rect -> do
                _ <- fillRect surface (Just rect) white
                SDL.flip surface

            totalOut :: View (Either Rect Done)
            totalOut = handles _Left drawRect <> handles _Right done

        k $ do
            totalIn <- producer Single (lift waitEvent >~ cat)
            return (totalOut, totalIn)

pipe :: Monad m => Pipe Event (Either Rect Done) m ()
pipe = do
    Pipes.takeWhile (/= Quit)
        >-> (click >~ rectangle >~ Pipes.map Left)
    yield (Right Done)

rectangle :: Monad m => Consumer' (Int, Int) m Rect
rectangle = do
    (x1, y1) <- await
    (x2, y2) <- await
    let x = min x1 x2
        y = min y1 y2
        w = abs (x1 - x2)
        h = abs (y1 - y2)
    return (Rect x y w h)

click :: Monad m => Consumer' Event m (Int, Int)
click = do
    e <- await
    case e of
        MouseButtonDown x y ButtonLeft ->
            return (fromIntegral x, fromIntegral y)
        _ -> click

main :: IO ()
main = runMVC () (asPipe pipe) sdl

Compile and run this program, which will open up a window, and begin clicking to paint white rectangles to the screen:

Here we bundle the effectful and concurrent components that we need from the sdl library into a self-contained package containing a single View and Controller. Our pure logic is contained entirely within a pure Pipe, meaning that we can feed synthetic input to our program:

>>> let leftClick (x, y) = MouseButtonDown x y ButtonLeft
>>> Pipes.toList $
...     each [leftClick (10, 10), leftClick (15, 16), Quit]
...     >-> pipe
[Left (Rect {rectX = 10, rectY = 10, rectW = 5, rectH = 6}),Right Done]

... or even QuickCheck our program logic! We can verify that our program generates exactly one rectangle for every two clicks:

>>> import Test.QuickCheck
>>> quickCheck $ \xs ->
... length (Pipes.toList (each (map leftClick xs) >-> pipe))
... == length xs `div` 2
+++ OK, passed 100 tests.

These kinds of tests would be impossible to run if we settled for anything less than complete separation of impurity and concurrency from our program's logic.

However, this total separation might seem unrealistic. What happens if we don't have exactly one View or exactly one Controller?

Monoids - Part 1

Views and Controllers are Monoids, meaning that we can combine any number of Views into a single View, and likewise combine any number of Controllers into a single Controller, by using mconcat (short for "Monoid concatenation") from Data.Monoid:

-- Combine a list of `Monoid`s into a single `Monoid`
mconcat :: Monoid m => [m] -> m

When we specialize the type of mconcat to View or Controller we get the following two specialized functions:

-- Combining `View`s sequences their effects
mconcat :: [View a] -> View a

-- Combining `Controller`s interleaves their events
mconcat :: [Controller a] -> Controller a
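For example, a minimal hedged sketch (stdoutLines comes from MVC.Prelude; logLines stands for any other View String you might have):

-- Every String sent to allOutputs goes both to standard output
-- and to the log.
allOutputs :: View String
allOutputs = mconcat [MVC.stdoutLines, logLines]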

In other words, we can combine a list of any number of Views into a single View and combine a list of any number of Controllers into a single Controller. We get several benefits for free as a result of this design.

First, combinability centralizes our View logic and Controller logic into a single expression that we feed to runMVC. We can therefore identify all inputs and outputs to our system simply by tracing all sub-expressions that feed into this larger expression. Contrast this with a typical mature code base where locating all relevant inputs and outputs for the system is non-trivial because they are typically not packaged into a single self-contained term.

Second, combinability promotes reuse. If we find ourselves repeatedly using the same set of inputs or outputs we can bundle them into a new derived component that we can share with others.

Third, combinable inputs and outputs are the reason our Model can afford to have a single entry point and a single exit point. This beats having to write callback spaghetti code where we cannot easily reason about our application's control flow.

This is an example of a scalable architecture. The Monoid type class lets us indefinitely grow our inputs and outputs without ever increasing the number of concepts, abstractions or types.

To be more specific, this scalable architecture is a special case of the category design pattern. When combinable components are morphisms in a category, we can connect as many components as we please yet still end up back where we started. In this case the operative category is a monoid, where Views or Controllers are morphisms, (<>) is the composition operator and mempty is the identity morphism.

Functors - Part 1

However, the Monoid type class only lets us combine Views and Controllers that have the same type. For example, suppose we have a Controller for key presses, and a separate Controller for mouse events:

keys :: Controller KeyEvent

clicks :: Controller MouseEvent

If we try to combine these using (<>) (an infix operator for mappend), we will get a type error because their types do not match:

keys <> clicks -- TYPE ERROR!

keys and clicks don't stream the same event type, so how do we reconcile their different types? We use functors!

fmap Left keys
    :: Controller (Either KeyEvent MouseEvent)

fmap Right clicks
    :: Controller (Either KeyEvent MouseEvent)

fmap Left keys <> fmap Right clicks
    :: Controller (Either KeyEvent MouseEvent)

The functor design pattern specifies that when we have an impedance mismatch between components, we unify them to agree on a common component framework. Here, we unify both of our Controller output types using Either.

Using theoretical terminology, when we have morphisms in diverse categories, we use functors to transform these morphisms to agree on a common category. In this case keys is a morphism in the Controller KeyEvent monoid and clicks is a morphism in the Controller MouseEvent monoid. We use fmap to transform both monoids to agree on the Controller (Either KeyEvent MouseEvent) monoid.

However, in this case fmap is behaving as a functor in a different sense than we are normally accustomed to. We're already familiar with the following functor laws for fmap:

fmap (f . g) = fmap f . fmap g

fmap id = id

However, right now we're not interested in transformations from functions to functions. Instead, we're interested in transformations from monoids to monoids, so we're going to invoke a different set of functor laws for our Controllers:

fmap f (c1 <> c2) = fmap f c1 <> fmap f c2

fmap f mempty = mempty

In other words, fmap f correctly translates monoid operations from one type of Controller to another. This functor between monoids is the operative functor when we transform Controllers to agree on a common type.

Functors - Part 2

We can use the same functor design pattern to unify different types of Views as well. For example, let's assume that we have two separate Views, one that logs Strings to a file, and another that displays video Frames to a screen:

logLines :: View String

frames :: View Frame

Again, we cannot naively combine these using mappend/(<>) because the types don't match:

logLines <> frames -- TYPE ERROR!

However, View does not implement Functor, so how do we unify the types this time?

We still use functors! However, this time we will be using the handles function from mvc, which has the following type:

handles :: Traversal' a b -> View b -> View a

This lets us use Traversals to specify which outgoing values each View should handle:

import Lens.Family.Stock (_Left, _Right)

-- _Left :: Traversal' (Either a b) a
-- _Right :: Traversal' (Either a b) b

handles _Left logLines
    :: View (Either String Frame)

handles _Right frames
    :: View (Either String Frame)

handles _Left logLines <> handles _Right frames
    :: View (Either String Frame)

This reads a little like English: logLines handles _Left values, and frames handles _Right values.

Like the previous example, handles is a functor in two ways. The first functor maps traversal composition to function composition:

handles (t1 . t2) = handles t1 . handles t2

handles id = id

The second functor maps monoid operations from one View to another:

handles t (v1 <> v2) = handles t v1 <> handles t v2

handles t mempty = mempty

This latter functor between View monoids is the operative functor when we are unifying Views to agree on a common type.

Applicatives

Alright, but we don't typically work with unadorned Views or Controllers. If you look at the utility section of mvc you will see that most Views or Controllers are Managed, meaning that they must acquire or release some sort of resource before they can begin streaming values in or out. For example, any View or Controller that interacts with a file must initially acquire the file and release the file when done:

fromFile :: FilePath -> Managed (Controller String)

toFile :: FilePath -> Managed (View String)

Uh oh... We have a problem! runMVC doesn't accept separate Managed Views and Managed Controllers. runMVC only accepts a combined View and Controller that share the same Managed logic:

runMVC
    :: s
    -> Model s a b
    -> Managed (View b, Controller a) -- What do we do?
    -> IO s

runMVC is written this way because some Views and Controllers must acquire and release the same resource. Does this mean that I need to provide a new run function that accepts separate Managed Views and Managed Controllers?

No! Fortunately, Managed implements the Applicative type class and we can use Applicatives to combine two Managed resources into a single Managed resource:

import Control.Applicative (liftA2)

liftA2 (,)
    :: Applicative f => f a -> f b -> f (a, b)

-- Specialize `liftA2` to `Managed`
liftA2 (,)
    :: Managed a -> Managed b -> Managed (a, b)

toFile "output.txt"
:: Managed (View String)

fromFile "input.txt"
:: Managed (Controller String)

liftA2 (,) (toFile "output.txt") (fromFile "input.txt")
:: Managed (View String, Controller String)

I can fuse my two Managed resources into a single Managed resource! This is another example of scalable design. We don't complicate our run function by adding special cases for every permutation of Managed Views and Controllers. Instead, we make Managed layers laterally combinable, which prevents proliferation of functions, types, and concepts.

Monoids - Part 2

Managed implements the Monoid type class, too! Specifically, we can wrap any type that implements Monoid with Managed and we will get back a new derived Monoid:

instance Monoid r => Monoid (Managed r) where
    mempty  = pure mempty
    mappend = liftA2 mappend

This means that if I have two Managed Views, I can combine them into a single Managed View using the same Monoid operations as before:

view1 :: Managed (View A)
view2 :: Managed (View A)

viewBoth :: Managed (View A)
viewBoth = view1 <> view2

The same is true for Controllers:

controller1 :: Managed (Controller A)
controller2 :: Managed (Controller A)

controllerBoth :: Managed (Controller A)
controllerBoth = controller1 <> controller2

In fact, this trick works for any Applicative, not just Managed. Applicatives let you extend arbitrary Monoids with new features while still preserving their Monoid interface. There is no limit to how many Applicative extensions you can layer this way.
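A hedged sketch of that general trick (the Ap wrapper is illustrative, not part of the mvc API):

newtype Ap f a = Ap { getAp :: f a }

-- Any Applicative wrapped around a Monoid is again a Monoid,
-- exactly as in the Managed instance above.
instance (Applicative f, Monoid a) => Monoid (Ap f a) where
    mempty = Ap (pure mempty)
    mappend (Ap x) (Ap y) = Ap (liftA2 mappend x y)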

Conclusion

The documentation for the mvc library is full of theoretical examples like these showing how to architect applications using scalable abstractions inspired by category theory.

The mvc library has certain limitations. Specifically, I did not design the library to handle changing numbers of inputs and outputs over time. This is not because of a deficiency in category theory. Rather, I wanted to introduce this simpler API as a stepping stone towards understanding more general abstractions later on that I will release as separate libraries.

The other reason I'm releasing the mvc library is to test the waters for an upcoming book I will write about architecting programs using category theory. I plan to write one section of the book around an application structured using this mvc style.

Links:

  • Hackage - the API and documentation (including tutorials)

  • Github - For development and issues

Categories: Offsite Blogs

Douglas M. Auclair (geophf): 'U' is for Universe

Planet Haskell - Fri, 04/25/2014 - 7:14pm

'U' is for Universe
Yesterday we proved that pf B -> pf (A imp B), that is to say if you have the proof of B then (it was proved) you also have the proof that A implies B.
In other words, if you have B, then A implies B no matter what. If A is false, you can say anything you like about B, including that it's true, so the implication holds; and if A is true, then A implies B requires B to be true, but we already know that B is true, so we're fine there, too.
Groovy.
The thing I glossed over in the last post is that A can be anything, and B can be anything that is proved. We call these variables universals, as they range over anything, or we say that they are universally quantified, again, for the same reason.
A and B are free-range variables, kind of like chickens (not eggs), but not, because they're variables, so they're more general than chickens, which are just lieutenants.
groan. Military humor!
So these are universally quantified variables, as opposed to existentially quantified variables ('E' is for Existential), where there's some, or a particular, value that satisfies the relation. For example, in f(x) = x + 3 there's a universe of possible values for x that satisfy that relation, some examples are: x = 0, 1, 2, ... or x = 0, -1, -2, -3, ... or x = 1, 1/2, 1/4, 1/8, ... The possibilities are endless, unlimited, but if you were to take that same relation and say, x + 3 = 4, then there's just some x's that satisfy that relation. Off the top of my head, x = 1 looks like a good candidate.
So there're existentially quantified variables, too, but that's not for this post, and we just covered universal quantification, but even that is not for this post.
Let's talk about something bigger. The universe.
Now, I'm not talking about the Universe that contains you, me, and Uranus (the planet) and Pluto (not a planet, but it cudda beena condenda!)
Pluto's not a planet! Take that, Church, Frege, and Wikipedia!
I'm not even talking about that Universe. After all, that's rather uninteresting, all you do is inhabit it and interact with it. (Inhabitant, n.: that which eats and excretes. Like fire.)
Let's talk about something bigger: a universe.
Go to prolog, or twelf, or any logic framework language, and create a universe.
Huh? Wut? Wuttaratawginbout, geophf?
Here, let me demonstrate:
geophf.
I just created a universe and populated it with one inhabitant.
When I say, I am the center of the Universe, I actually mean it.
Okay, I load that universe into prolog:
?- [geophf].
Ok.
Now I query it. Am I in there?
?- geophf.
Yes.
Yes, you betcha I am.
How about Gauss?
?- gauss.
xxx error
Nope.
Susie Derkins, then?
?- suze.
xxx error
Nope.
Kinda lonely in here, all by myself, nothing else here.
But the thing is. I created it. I spoke it into existence, and it was, and I populated it. Yes, with just one inhabitant, and I happen to be in that universe and above it directing everything that goes on in there ...
... and people don't get theology. I'm like: why? Theology is the simplest thing in the world! God is in the Universe, that which He created, and, at the same time, above it, directing everything that goes on in there.
"Oh, Mathematicians are atheists."
Uh, no.
"Oh, Mathematicians have big egos and always think they're right about everything."
Uh. No comment.
But the sweetest, shiest, quietest, most-filled-with-wonder people I know are mathematicians.
Present company excluded, of course.
Okay, back on topic. Boring universe, but I created it, AND I can start to have fun with this stuff.
inhabitant(Privileged).
A new ...
No, wait.
What is a universe?
Well, what is the Universe in which you live? Isn't it, simply, a set of rules you've (tacitly) agreed to live your life by? I mean, the very concepts of 'you' and 'your life' are actually very modern inventions, and only a very short time ago there were no such concepts. Again, before that time, it was all very boring, but it was a Universe with a set of (tacit) rules that everybody played by to play in that Universe. The Universe of the Vikings was a very different one than the Universe of the Wessexes, so the Norse could just walk into a church (go vikinging), kill everybody in it, man, woman and child, priest or no, take all the loot off the altar and exclaim: 'That was so much easier than going east to the Rus-land! Let's do this again!' And then they'd hang a survivor from an oak tree as sacrificial thanks to Odinn.
Easy! Pagan-times! Good times!
('Oh, I wish I lived back then!' Yeah, so you could get raped and murdered on a raid, or die of exposure or disease over the winter if you were lucky enough to be brought back as a slave. Oh, and put down your phone, and don't look at it ever again. 'Oh, I wish I lived back then!' Yeah. Right.)
Everything in this Universe that you see or feel or touch or anything comes to you as a set of rules that you've tacitly or explicitly agreed to. Oh, what's this thing? It's paper. It has green writing on it. You can trade it for bread (that you eat) or smart phones (that you don't eat).
Really? Why would anybody be so stupid as to give me bread that I eat or this heavy thing that I don't for a cut of a piece of paper?
Are you starting to get it?
Not even close. You'll never, ever come even close to understanding how much the universe you live in is a set of rules you just made up or agreed to (because somebody else made them up, and you went along with them unquestioningly), until you start iconoclasting them.
What is 'air' even? Did you know that 'air' didn't exist until recently? Or money, or light, or gravity, or God, or anything, until we framed them in our little box we call language, thought, whatever, and gave them definition.
Before you disagree, read the autobiography of Helen Keller. Read that point where she got it, that her nurse was signing 'water,' and she got it. When she got that, she got everything: life, love, laughter, time, space, God, you-as-you, and distinct from me-what-I-am.
She had no concept of self, of existence, of being until it was framed. Up until then it was everything-nothing.
Guess what the Universe is? What it really is? It's everything-nothing. Then you come into the world, not when you're born, nor before you're born, nor after you're born, but when you-as-you come into the world, and when that happens, you look at the world, everything-nothing, and you frame it.
BOOM!
You just created the Universe.
And ever since then you have been living in a world that you've created. This isn't the Matrix, this is much, much bigger than that, and it isn't a conspiracy of robots or AI, it's a conspiracy of you-as-you to hold onto you-as-you and keep everything else at bay, at a safe, well-defined distance from you-as-you.
So you keep defining you-as-you ('I'm stupid.' 'I'm so fat!' 'I am the smartest girl in my class.' 'I don't have a thing to wear!' 'I can't do that.' 'I could never do that.' 'God, I need a drink!') and you keep keeping 'everything' 'else' at a safe distance ('You can't do that.' 'That's illegal.' 'That's no fair!' 'God, I hate that Susie Derkins; who does she think she is?' 'I'm so proud of you.' 'Stove. Hot.' 'That hurts.')
You're living in a superstitious world, moreso than Monty Python who would burn a girl for being a witch because she wore red on Sunday and had a little freckle on her shoulder, moreso than that.
You're living the superstition of 'is.' That's the way things are, and you-as-you keep talking to yourself all day, every day, telling you what is and what isn't, telling you what you can do and what you can't do, and, as you magnify you, by insignifying others, what they can do, and what they can't do.
It's all maya.
You are responsible for the world you're living in: you created it.
That's bad news for a lot of people, because now they have to own up to their unhappiness.
The good news is this: so. 
Yeah, really: that. 'So.'
You have been living in the world you created and you will be living in that world for the rest of your life. You can't not live in a world. That's how we define 'life' these days. So you can transcend life, like Buddha, or whomever, or you can live your life in the world you created.
And if your life sucks ... and then you die, ...
And that's your fate.
Then, okay, live that life. You are, anyway, so keep doing what you're doing.
Or you must be-h brahve to change yer feht ... a-cheese.
You're living in the world you created. You have been since you became 'you.'
So, create an entirely new world. Just like I did.
Start from nothing. Put yourself in there first, then build everything else from there.
Here's the thing.
You can only create things smaller than you. Just ask God. God is bigger than the Universe; God is bigger than the Boogieman, God is bigger than everything.
In your world, the world you are the center of, you are bigger than everything around you. Even the big, scary things, even as you make yourself smaller than those big, bad things, so they won't pick on you or notice you, you're still bigger than them, in your smallness. They, being bigger, more clearly define you-as-you-not-them, even in your smallness.
This is all an aside, or is it?
You create your universe, and so you either know you intend things to be as they are, ... or you don't know that you intend them thus. But they still are the way they are, because you intend them to be so.
The same thing in mathematics. You intend something to be, you declare it so, and it is.
So you can even, as in this world, make things big and ugly and mean and just wrong.
If you intend to have holes in your system, then you can have them:
hole: {C} pf C.
And there you are, anything you declare as a hole is a hole, and it's there in your system, a hole or backdoor or paradox, so you could say
hole (1 == 2).
and behold, from now on in your universe 1 and 2 are equivalent. There you go.
So, before this long aside, I declared:
inhabitant(Privileged).
So, whereas before there was just me, just geophf in my universe, now I can use this declaration to populate my universe with a multitude:
?- inhabitant(gauss).
Yes

?- inhabitant(suze).
Yes
And look at that, I now have two new friends in my universe, just like that, just because I made a declaration of fact and then declared gauss and suze to be inhabitants.
I could even make this permanent (drastically), by asserting these as facts:
inhabitant(Privileged) :- assert(Privileged).
But, ... bleh. I've never used assert, not interested in starting now. I went into purely functional programming to get rid of extralogical constructs, like variable (re)assignment in imperative programming, so I'm not interested in using extralogical features, particularly when I'm programming in a logical framework.
So, anyway, 'U' is for universe. In a logical programming framework it's as easy as π to create your own, populated with exactly what you want in there.
Same as for in the real world for you.
Postlude
So, there's another universal, and that is the U-combinator, or the universal-combinator, and it's also called the Turing-combinator after the dude, Alan Turing, who discovered and studied it.
The U-combinator is defined thus:
U = λxy -> y(xxy)
The interesting property of the universal combinator is that it gives insight into the combinator that it uses; that is, with the Turing combinator you get into the guts of the combinator you're using and see what it's made of. For example, the I-combinator simply gives you your argument back:
I = λx -> x
So, the I paired with the U combinator gives you the M combinator:
UIx -> x(IIx) -> xx
Mx -> xx
The K-combinator is constifying:
K = λxy -> x
So the UK combinator, besides being very proper and British, is not only constifying, but absorbing:
UKx -> x(KKx) -> xK
... which is the start of the sole-basis combinator, or, put another way:
N = λx -> xKSK, so that Nx = UKxSK
Interesting!
The U-combinator has many interesting properties, giving many interesting results: it ... universally ... tells you what the 'machinery' of the combinators you are working with is.
I wonder what the JI-basis representation of the U-combinator is.
Homework: Given that
J = λxyzw -> xy(xwz); and,
I = λx -> x (as before)
arrange as many J's and I's as needed to get you the U-combinator.
For example, the T-combinator is
T = λxy -> yx
So arranging J's and I's thus:
JII
gets you T.
T was easy. U, which is:
U = λxy -> y(xxy)
"might" be a "bit" harder (see some examples of representing the B, C, R, and M combinators in the JI-basis here). Give it a go! The first one that gets me the correct answer (no: 2000 * (2000 + 1) / 2 - (1000 * (1000 + 1) / 2) errors, please, ahem!) will have their name listed on the ⊥-trophies and beside the JI-basis-representation of the U-combinator in my spreadsheet. What could be better than that? That, and a latte, has got to be the highlight of anybody's day. 
But you have to get your own latte.
Just so we're clear on that.
Categories: Offsite Blogs

Oliver Charles: Building data constructors with GHC Generics

Planet Haskell - Fri, 04/25/2014 - 6:00pm

Let’s imagine for a moment that we are in the presence of the following data type:

data Coffee = MkCoffee
  { coffeeBeans         :: String
  , coffeeOriginCountry :: Country
  , coffeeBrewMethod    :: BrewMethod
  } deriving (Generic)
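As an aside, the post doesn't define Country and BrewMethod (and the GHCi output below also assumes a Show instance for Coffee). Hypothetical minimal stand-ins might look like this:

-- Hypothetical stand-ins; only Country "Rwanda" and V60 appear in the post.
data Country    = Country String deriving (Show)
data BrewMethod = V60 | Chemex   deriving (Show)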

However, we have only been given the type itself - and not a constructor for this value. That is, we don’t have access to this function:

MkCoffee :: String -> Country -> BrewMethod -> Coffee

Is it still possible to make Coffee? As we'll see, by making use of the Generic type class it is indeed possible to make Coffee, and with a little bit of functor trickery, we can derive an implementation of the MkCoffee data value constructor ourselves.

GHC Generics

Before we get going, it's worth discussing briefly what is meant by the idea of a generic representation, specifically GHC generics. The term “generic programming” is fairly overloaded, but for this post we're interested in the idea of shape generics. This (and the other types of generic programming) is discussed in José Pedro Magalhães' thesis “Less is More”, but loosely speaking we're interested in abstracting over the “shape” of data types. This means we find another type that is isomorphic to ours, but is made out of a smaller (and importantly, closed) set of primitives.

GHC Generics is one such approach to shape genericity - the Rep type is the type of the isomorphic data, and we have to and from to move between both sides of the isomorphism. Let's see what this all means for Coffee:

> :kind! (Rep Coffee)
Rep Coffee :: * -> *
= M1 D Main.D1Coffee
    (M1 C Main.C1_0Coffee
      (M1 S Main.S1_0_0Coffee (K1 R [Char])
       :*: (M1 S Main.S1_0_1Coffee (K1 R Country)
            :*: M1 S Main.S1_0_2Coffee (K1 R BrewMethod))))

This is neither pretty nor succinct, but thankfully a lot of this is noise (for our purposes). Here’s another look at just the essential structure of Rep Coffee:

K1 R [Char] :*: (K1 R Country :*: K1 R BrewMethod)

Now we can see that Coffee is isomorphic to the product of a String, a Country and a BrewMethod. These correspond to the three fields in the above Coffee data type.

Using these generic representations, we can construct new Coffees:

> to (M1 (M1 (M1 (K1 "Single Origin")
              :*: (M1 (K1 (Country "Rwanda")) :*: M1 (K1 V60))))) :: Coffee
MkCoffee { coffeeBeans = "Single Origin"
         , coffeeOriginCountry = Country "Rwanda"
         , coffeeBrewMethod = V60 }

That’s pretty cool, no? It’s this idea that is at the heart of generic programming. We could imagine doing a similar thing by building these values by reading a binary serialisation from a file, or walking a JSON AST.
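As a quick sanity check (mine, not from the post), from goes the other way, and the round trip witnesses the isomorphism:

-- to . from is the identity on Coffee: one half of the isomorphism.
roundTrip :: Coffee -> Coffee
roundTrip = to . from

-- roundTrip (MkCoffee "Single Origin" (Country "Rwanda") V60)
-- gives back exactly the same value.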

However, for our purposes we’ve not yet reached our goal. To recap, we really want to be able to build a function like MkCoffee, keeping the generic representation behind the scenes. To do this, we’ll need to interpret the generic representation into a function.

Interpreting Generic Representations

The standard way to work with a generic representation is to walk the tree using instances of a type class. We'll do the same, and walk the generic representation to reach a functor that will contain Rep Coffee. Later, we'll be able to fmap to over it, turning Rep Coffee into real Coffee.

Our workhorse is the following type class.

class Functor f => Mk rep f | rep -> f where
  mk :: f (rep a)

As you can see, it’s a multi-parameter type class, taking the generic Rep type, and also indicating which functor can “build” this Rep. Each Rep uniquely determines the constructing functor, which we indicate with a functional dependency. This is essential for having sane type inference.

Starting “at the bottom”, we can begin by constructing a single field. In our Coffee example, we need to construct Strings, Countrys, and BrewMethods. In GHC Generics, each of these is represented with K1. To actually construct K1 we need a value, so our constructing functor will be a one-argument function:

-- Remember that ((->) c) means (c ->)
instance Mk (K1 i c) ((->) c) where
  mk = \x -> K1 x
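As a quick check of this instance (mine, not from the post), once the representation type is pinned down, mk really is just a function waiting for the field's value:

> unK1 (mk "Rwanda" :: K1 R String ())
"Rwanda"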

A Coffee is more than just a single field though, and we need a way to combine individual fields together. This is done by the use of the :*: constructor, which we can think of as having “fields on the left” and “fields on the right”. To construct :*: we need to compose a builder on the left with the builder on the right, so we use Compose to join the left and right functors into one. The definition of mk itself is a little cumbersome, but does the job:

instance (Mk l fl, Mk r fr) => Mk (l :*: r) (Compose fl fr) where
  mk = Compose (fmap (\l -> fmap (\r -> l :*: r) mk) mk)

Finally, we just need to construct the M1 layers, which just hold meta-data. These don't have any effect on what we're trying to do, so their type class instance simply proxies through to the other instances:

instance (Mk f f') => Mk (M1 i c f) f' where
  mk = M1 <$> mk
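As an aside, here is a hedged sketch of the pragmas and imports these definitions plausibly need to compile (the exact set may vary by GHC version; Compose lives in transformers):

{-# LANGUAGE MultiParamTypeClasses  #-}
{-# LANGUAGE FunctionalDependencies #-}
{-# LANGUAGE FlexibleInstances      #-}
{-# LANGUAGE UndecidableInstances   #-}
{-# LANGUAGE TypeOperators          #-}
{-# LANGUAGE DeriveGeneric          #-}
{-# LANGUAGE TypeFamilies           #-}  -- for Returns, below
{-# LANGUAGE ScopedTypeVariables    #-}  -- for make, below

import Control.Applicative ((<$>))
import Data.Functor.Compose (Compose (..))
import GHC.Generics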

Believe it or not, we're now a good step closer to getting what we want. Let's have a look and see what we get if we try to mk some Coffee:

fmap (to :: Rep Coffee a -> Coffee) mk
  :: Compose ((->) String)
             (Compose ((->) Country) ((->) BrewMethod))
             Coffee

Hey, that’s actually pretty close! If we squint, this is some sort of function that takes a String, a Country and a BrewMethod and yields some Coffee. All that we have to do is somehow get rid of all the Compose noise.

Unwrapping Compose

At this point, I was originally stuck, but then I remembered smart people have already come before me to work on very similar problems. Specifically, Ralf Hinze published a Functional Pearl a while ago called “Formatting: A Class Act”, which uses an identical construction (which I can honestly say was a happy accident!). Hinze then goes a step further with this magic little type class:

class Functor f => Apply f a b | f a -> b where
  apply :: f a -> b

instance Apply ((->) a) b (a -> b) where
  apply = id

instance (Apply g a b, Apply f b c) => Apply (Compose f g) a c where
  apply (Compose x) = apply (fmap apply x)

This time we have a type class of three types (!), where the first two types determine the third. However, it's not so bad - f is the functor we need to expand, a is the type of data under the functor and b is the final type after expansion. If you look at the specific instances, it should be clear how this all plays out.
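As a toy check (mine, not from the pearl), apply flattens a Compose of functions back into a curried function:

example :: Compose ((->) Int) ((->) Int) Int
example = Compose (\x y -> x + y)

-- apply example :: Int -> Int -> Int
-- apply example 1 2 == 3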

Now we can use apply with mk to generate a function!

> apply (fmap to mk) :: String -> Country -> BrewMethod -> Coffee

<interactive>:1:2:
    No instance for (Apply f0 a0 (String -> Country -> BrewMethod -> Coffee))
      arising from a use of ‘apply’
    The type variables ‘f0’, ‘a0’ are ambiguous
    Note: there are several potential instances:
      instance (Apply g a b, Apply f b c) => Apply (Compose f g) a c
        -- Defined at 2014-04-26-coffee.hs:39:10
      instance Apply ((->) a) b (a -> b)
        -- Defined at 2014-04-26-coffee.hs:36:10

Damn! What went wrong?

Unfortunately, if we use apply with mk we lose parametricity on f itself. There could be many different ways to reach the type we desire (especially as type classes are open), and it turns out that the crucial ingredient is having rep available.

However, from our perspective it should be possible to infer all of this from the return type of the function we are building. Of course, GHC can’t guess, so we will need to encode this information somehow. With GHC 7.8 we can easily express this with a closed type family:

type family Returns (f :: *) :: * where
  Returns (a -> b) = Returns b
  Returns r = r
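For example (reductions mine), Returns peels off argument types one at a time until only the result is left:

-- Returns (String -> Country -> BrewMethod -> Coffee)
--   ~ Returns (Country -> BrewMethod -> Coffee)
--   ~ Returns (BrewMethod -> Coffee)
--   ~ Returns Coffee
--   ~ Coffee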

Returns lets us figure out what the final type of a function is, and now we can complete the loop and build our final make function:

make :: forall b f z. (Generic (Returns b), Apply f (Returns b) b,
                       Mk (Rep (Returns b)) f) => b
make = apply (fmap (to :: Rep (Returns b) z -> Returns b)
                   (mk :: f (Rep (Returns b) z)))

We need to use ScopedTypeVariables to carry a bit of extra type information around, but luckily this is all behind the scenes.

Finally, we can now write mkCoffee:

mkCoffee :: String -> Country -> BrewMethod -> Coffee
mkCoffee = make

Does it work?

> mkCoffee "Single Origin" (Country "Rwanda") V60 Coffee { coffeeBeans = "Single Origin" , coffeeOriginCountry = Country "Rwanda" , coffeeBrewMethod = V60 }

Tada!
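Nothing above is Coffee-specific, so the same make should work for any product type with a Generic instance. A hypothetical example (the Cup type is mine):

data Cup = MkCup { cupSize :: Int, cupMaterial :: String }
  deriving (Generic, Show)

mkCup :: Int -> String -> Cup
mkCup = make

-- > mkCup 350 "ceramic"
-- MkCup {cupSize = 350, cupMaterial = "ceramic"}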

Categories: Offsite Blogs

Christopher Done: The printer Haskell deserves

Planet Haskell - Fri, 04/25/2014 - 6:00pm

Friday night project ahoy!

Problem

I was working with haskell-names the other day. Its data types are nice enough, but are rather unwieldy to read in the REPL when debugging and inspecting. This got me thinking about inspection and printers for Haskell data structures again.

I’ve made several attempts at this for haskell-mode in the past.

  • One which requires parsing the output of Show with Haskell and then printing that to s-expressions for Emacs to consume. This is generally unreliable and hacky.
  • Then I settled with making the REPL just syntax highlight the output. That generally works flawlessly and is an okay solution.
  • Then I really wanted collapsing support again, so I implemented one based on Emacs’s awareness of expression boundaries (of ( ) and { } and " " etc.). Simple. Kind of reliable.

Today I implement yet another one, but this one I like best. I’ve always wanted to have a Haskell printer that can evaluate on demand, piece-wise, taking care not to evaluate the whole structure too eagerly. I should be able to type [1..] into my REPL and not be spammed by numbers, but rather to expand it at my leisure.

Implementation

My plan was to use the Data.Data API to traverse data structures breadth-first, display to the user something like Just … and then allow the user to continue evaluating on request by clicking the … slot.
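To give a flavour of the idea, here is my own toy sketch (not the actual present implementation): Data.Data can show a value's outermost constructor while leaving every field as an unevaluated … slot:

import Data.Data (Data, gmapQ, showConstr, toConstr)

-- Force only the outermost constructor; render each field as a slot.
shallow :: Data a => a -> String
shallow x = showConstr (toConstr x)
         ++ concatMap (const " …") (gmapQ (const ()) x)

-- shallow (Just (error "boom")) == "Just …"
-- shallow ([1 ..] :: [Int])     == "(:) … …"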

I chatted with Michael Sloan about it and we came up with a simple experimental design and thought it would be a nice idea. We hypothesized a nice class-based way to provide custom presenters for your types, so that e.g. a Diagram could be rendered as a bitmap inline with the rest of the data structure, but that needs more thinking about.

I’ve implemented a basic version of it in the present package (a la “presentations” in CLIM) and implemented a usable front-end for it in Emacs. There’s some information about the implementation in the README which you can read on Github.

Result

Yes! It works. Here is a demonstration video. Concept proven. This is definitely my favourite way so far. I will probably write a simple algorithm in Emacs to format things on separate lines, which would make it much easier to read, and I want to make strings expand to fill the screen width, but no further. But this is already an improvement.

I’ll trial it for a while, if I end up using it more often than not, I’ll make the option to make :present implicit for all REPL evaluations.

Example

For kicks, here’s the output for

loeb (map (\i l -> Node i (map (fmap (+1)) l)) [1..3])

Normally you would get:

[Node {rootLabel = 1, subForest = [Node {rootLabel = 2, subForest = [Node {rootLabel = 3, subForest = [Node {rootLabel = 4, subForest = [Node {rootLabel = 5, subForest = [Node {rootLabel = 6, subForest = [Node {rootLabel = 7, subForest = [Node {rootLabel = 8, subForest =

Ad infinitum! With presentation, you get:

λ> :present loeb (map (\i l -> Node i (map (fmap (+1)) l)) [1..3])
Tree Integer:[Tree Integer]

If you click Tree Integer on the left, you get:

(Node 1 [Tree Integer]):[Tree Integer]

Click the new one on the left:

(Node 1 (Tree Integer:[Tree Integer])):[Tree Integer]

Et cetera:

(Node 1 ((Node 2 [Tree Integer]):[Tree Integer])): ((Node 2 [Tree Integer]):[Tree Integer])

In other words, every [Tree Integer] is a placeholder that you can click to get more output.

Categories: Offsite Blogs

Singapore Haskell job

Haskell on Reddit - Fri, 04/25/2014 - 4:39pm
Categories: Incoming News

Baltimore Haskell Users Group?

Haskell on Reddit - Fri, 04/25/2014 - 3:14pm

So I think Haskell is rad. But it's also a bit confusing. I'd like to get better. I lazily Googled and couldn't find any mention of a Baltimore Area users group. To the best of anyone's knowledge, does one exist? Sorry if this is the wrong sub for this.

submitted by sdroadie
[link] [20 comments]
Categories: Incoming News

mth

del.icio.us/haskell - Fri, 04/25/2014 - 2:09pm
Categories: Offsite Blogs

Which language to use for working through Okasaki?

Haskell on Reddit - Fri, 04/25/2014 - 1:46pm

I'm starting to work through Okasaki's 'Purely functional data structures'. The main text uses Standard ML, which is strict (except for the lazy keyword, which is used only occasionally). The appendix contains translations of the programs into Haskell, which is lazy (seq and friends are not used). Does that mean they are not equivalent? I would imagine that the question of (non-)laziness is crucial for analyzing running time of algorithms.

Also, if I choose Standard ML, which implementation should I use? Looks like the book uses a made-up dialect with a 'lazy' keyword and $ operator.

submitted by MintyGrindy
[link] [13 comments]
Categories: Incoming News