News aggregator

WJWH/HPi

del.icio.us/haskell - Tue, 04/22/2014 - 1:33pm
Categories: Offsite Blogs

Hayoo Alfred 2 workflow for functions and packages search

Haskell on Reddit - Tue, 04/22/2014 - 1:24pm

I don't really like hoogle because sometimes it does not find functions from some packages, like hxt. That's why I created the Alfred Hayoo workflow. Big thanks to @carlosgaldino's alfred-hoogle-workflow for teaching me how to do that. Get it!

submitted by prettynatty
[link] [4 comments]
Categories: Incoming News

Douglas M. Auclair (geophf): 'S' is for Simply Summing

Planet Haskell - Tue, 04/22/2014 - 1:06pm

'S' is for ... simply summing.
Okay, this post was going to be about symmetries and super-symmetries, then I was going to transition to sieves, like the Sieve of Eratosthenes, and how that relates to fizz-buzz, ...
But then I thought: 'eh.'
Yeah, that's what I thought: 'eh.'
I'm a moody brute, filled with deep, indiscernible thoughts, 'cause that's how I roll.
So, yeah: eh. This one will be a simple post today.
Sums.
Sum the number 1.
Yeah, I just said that; humor me.
Okay, the sum of 1 is 1. Toughie.
Next, sum 1 + 2.
3, right? Still with me?
Okay, 1 + 2 + 3.
Got it? 6. No probs.
Now: sum 1 to 10. No laptop help, just sum it yourself.
A little more fun, right? So, that's 55.
Okay, no computers: sum 1 to 100.
Hm, hm, hm. Done yet?
La-di-dah. You're not even trying, are you? You're just reading this post.
Okay, the sum from 1 to 100, inclusive, no computers is ... 5,050.
1 to 1000: 500,500.
Pattern? You bet.
1 to ... oh, I don't know: 123:
That's a mite harder, but, no computers:
6,200 + 1,240 + 186 = 7,626.
Okay, let's verify, by asking my Haskell interpreter: 
Prelude> sum [1 .. 123]
7626
Yeppers, I was right. I was also as fast as a person typing a program into their computer to solve it, if they were using a functional programming language, and much, much faster than a person using a computer if they weren't and called up Excel, for example (loading ... loading ... loading ... blue screen of death) (You really need to virus-clean your computer, because I see you don't have a Mac, do you, tsk, tsk!) or if they pulled out their HP calculator from their high school days because they are that old, boys and girls!
You see this thing in my hand, children? This is not a smart phone, it's a calculator.
Children: Ooh! Papa, can I borrow it for a sec ...
Children, borrowing it, the puzzling over it: Um, Dad, ... where's the Plants vs. Zombies 2 app?
Yeah, a calculator, it calculates. And that's it. No google, no nothing, just numbers and numeric operations.
Children, looking uncomprehendingly at the thing in their hands, then tossing the cootied thing from their hands and running to their rooms, screaming, remembering with horror the time Dear Old Dad got out an LP and told them that music came out of it, but ... it didn't have playlists!
Yeah. Calculator. But would your calculator get you that sum that fast? I mean, after you go to the CVS to buy a new battery. Those things are long-lasting, but thirty years later? And you expect your calculator to still just work?
Pfft! Heh.
Pick a number (natural number), any number, I'll sum it for you, from the origin.
Sum 1 to 31415.
Okay
31415 * 15708 = 314,150,000 + 157,075,000 + 21,990,500 + 0 + 251,320 = 493,466,820
Okay, that took a bit longer, let's see what Haskell says:
Prelude> sum [1..31415]
493466820
Okay. Damn! ... Sometimes I impress even myself.
How do I do this? How do I add all the numbers from 1 to 31,415 and in under a minute? And perfectly, perfectly, correctly?
Well, I'll tell you.
But first, I'll tell you a story.
Once upon a time, there was a little boy named Gauss, and he was a naughty, little, precocious lad, always getting into trouble for dunking Susie Derkins' pigtails in the inkwell at his desk, and that would be fine if she had jet black hair, but you know the Viking types, right? All blond-haired, blue-eyed things, and getting your honey-blond long-flowing locks dunked into ink was not on the agenda for poor, little Susie, who cried, and had to be sent home to console herself with an appointment at the beautician's who thought the Goth-look actually fit Susie well, so she came back the next day as Suze-in-black-leather-and-studs which was disruptive for the class in other ways, so Gauss found himself at detention again.
But this time the math teacher had had enough of little Gauss' pronouncements and recitations of π to sixty-seven places, and decided to teach the little scamp but good.
Not that I'm channeling, or anything. It's not 'Mary Sue'-writing; it's 'walking in your character's moccasins a mile' ... even though Gauss probably didn't wear moccasins all that often, anyway.
Still with me?
So, mean-old Herr Schopenhauer told little Gauss: "Okay, twerp, this is gonna be a light one on you. You can go home after you sum the numbers from one to one hundred."
Huh? One to one hundred? But that will take all night!
"That's right," said mean old Herr Bergermeister Meisterburger, "so you'd better get crackin'!" And he cackled evilly with Schadenfreude.
The Head-Master had several names, don't you know, ... and a predilection for Schadenfreude with his afternoon tea and muffin.
So, what could little Gauss do? He got cracking, the poor lamb.
1 = 1
1 + 2 = 3
1 + 2 + 3 = 6
1 + 2 + 3 + 4 = 10 ... obviously, you just add the last number to the previous sum
1 + 2 + 3 + 4 + 5 = 15 ... now, wait a minute ...
1 + 2 + 3 + 4 + 5 + 6 = 21 ... there seems to be another pattern here ...
1 + 2 + 3 + 4 + 5 + 6 + 7 = 28 ... Gauss looked at the chalkboard, then ...
Got it! Little Gauss thought.
Then he wrote:
1 + 2 + 3 + ... + 98 + 99 + 100 = 5,050.
"G'nite, Teach!" Gauss crowed, and off he skipped home with Suze, where they shared a chocolate milk and a strawberry poptart ... before all that sugar glaze had been added to the crust (shudder).
And Herr Herren HerringSchönheit was left stuttering a "Vat? Vat? Git back here, you rapscallion!"
Okay, what's a rap-scallion? Is it a thug onion with 'tude, a glock, and a triple-platinum pedigree?
But all Herr whatz-his-name could do was sputter, because little Gauss got it.
The sum from one (zero, actually) to any number reduces to a simple formula:
sum [1..n] = n * (n + 1) / 2
Either n or n + 1 is even, so one of them is divisible by two, so it works.
Try it out:
1 = 1 * (1 + 1) / 2 = 1 * 2 / 2 = 1
1 + 2 = 2 * (2 + 1) / 2 = 2 * 3 / 2 = 3
1 + 2 + 3 = 3 * (3 + 1) / 2 = 3 * 4 / 2 = 3 * 2 = 6
... so obviously:
1 + 2 + 3 + ... + 31,415 = 31,415 * (31,415 + 1) / 2 = 31,415 * 15,708 = that big number I computed earlier. Yeah, almost five-hundred million!
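Want your Haskell interpreter to double-check the pattern? A minimal sketch (gaussSum is just my own name for the closed form, nothing standard):

gaussSum :: Integer -> Integer
gaussSum n = n * (n + 1) `div` 2    -- one of n, n + 1 is even, so div is exact

Prelude> all (\n -> gaussSum n == sum [1 .. n]) [0 .. 1000]
True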
Put that into your "ceci-n'est-pas-une-pipe" and smoke it. I'd buy that for a dollar.
Now, there's something you can impress your friends at Happy Hour with.
Epilogue
Okay, so you can sum 1 to ... whatevs, but what if a friend asks you, very kindly, and very politely, 'Okay, Ms. Smartypants, how about summing 1,000 to 2,000? What's that, huh? Betcha can't do that! Huh? Amirite? Amirite? Lookacha sweating bullets now, arencha? So, well, what is it? Huh?"
Yeah, really politely. Like that.
Now you could say, "But-but-but, Gauss' summer[-function, not -time] doesn't work like that."
But then you'd have feet of clay and egg on your face.
And with this hot summer coming, you don't want egg on your face. Trust me on that one.
So. What to do?
Think about it, that's what.
You simply make your 1,000 your zero, by scaling each number back by 1,000, then get your sum, then scale each number forward again by 1,000. For each time you applied the scale, you apply the scale to the result.
So, the sum of 1,000 to 2,000 inclusive is:
1,000 + 1,001 + 1,002 + ... 1,998 + 1,999 + 2,000 =
scaled back is
0 + 1 + 2 + ... + 998 + 999 + 1000 = 1000 * (1000 + 1) / 2 = 500,500
Now scale each number forward again, by adding 1,000 back to each number. You did that 1,001 times, right? (Don't forget to count the zero.) That's a rescalation (that's a word now) of 1,001,000. Add that to your sum to get 1,501,500.
There's your answer.
Let's check:
Prelude> sum [1000..2000]
1501500
Bingo! Tickets! This must be the Front Row!
And you, very politely, just told your friend where they could put their superior 'tude, too.
Win-win! Good times; good times!
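And if your friend hands you an arbitrary range, the same shift generalizes; a quick Haskell sketch (rangeSum is my own name for it, and it assumes a <= b):

rangeSum :: Integer -> Integer -> Integer
rangeSum a b = n * (n + 1) `div` 2 + a * (n + 1)
  where n = b - a    -- shift down by a, sum 0 .. n, then add the shift back (n + 1) times

Prelude> rangeSum 1000 2000
1501500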
See y'all tomorrow. By the way, what's the number of moves needed to solve the Towers of Hanoi? Not that 'Towers of Hanoi' is going to be my topic for 'T.'
Not at all.
Categories: Offsite Blogs

Russell O'Connor: How to Fake Dynamic Binding in Nix

Planet Haskell - Tue, 04/22/2014 - 8:29am

The Nix language is a purely functional, lazy, statically scoped configuration language that is commonly used to specify software package configurations, OS system configurations, distributed system configurations, and cloud system configurations.

Static scoping means that variables are statically bound; all variable references are resolved based on their scope at declaration time. For example, if we declare a set with recursive bindings,

> let a = rec { x = "abc"; x2 = x + "123"; }; in a
{ x = "abc"; x2 = "abc123"; }

then the use of x in the definition of x2 is resolved to "abc". Even if we later update the definition of x, the definition of x2 will not change.

> let a = rec { x = "abc"; x2 = x + "123"; }; in a // { x = "def"; }
{ x = "def"; x2 = "abc123"; }

Generally speaking, static binding is a good thing. I find that languages with static scoping are easy to understand because variable references can be followed in the source code. Static scoping lets Nix be referentially transparent, so, modulo α-renaming, you can always substitute expressions by their (partially) normalized values. In the previous example, we can replace the expression rec { x = "abc"; x2 = x + "123"; } with { x = "abc"; x2 = "abc123"; } and the result of the program does not change.

> let a = { x = "abc"; x2 = "abc123"; }; in a // { x = "def"; }
{ x = "def"; x2 = "abc123"; }

That said, on some occasions it would be nice to have some dynamic binding. Dynamic binding is used in Nixpkgs to allow users to override libraries with alternate versions in such a way that all other software packages that depend on it pick up the replacement version. For example, we might have the following in our nixpkgs declaration

boost149 = callPackage ../development/libraries/boost/1.49.nix { };
boost155 = callPackage ../development/libraries/boost/1.55.nix { };
boost = boost155;

and perhaps we want to make boost149 the default boost version to work around some regression. If we write nixpkgs // { boost = boost149; } then we only update the boost field of the nix package collection, and none of the packages depending on boost will change. Instead we need to use config.packageOverrides to change boost in such a way that all expressions depending on boost are also updated. Our goal is to understand the technique that packageOverrides and other similar overrides employ to achieve this sort of dynamic binding in a statically scoped language such as Nix. This same technique is also used to give semantics to object-oriented languages.

First we need to review normal recursive bindings. The rec operator can almost be defined as a function in Nix itself by taking a fixed point. Recall that in Nix lambda expressions are written as x: expr.

> let fix = f: let fixpoint = f fixpoint; in fixpoint;
  in let a = fix (self: { x = "abc"; x2 = self.x + "123"; });
  in a
{ x = "abc"; x2 = "abc123"; }

The function self: { x = "abc"; x2 = self.x + "123"; } is an object written in the open recursive style. By taking the fixpoint of this function, the recursion is closed yielding the desired value. Written this way, we had to prefix the recursive references to x with self.. However using Nix’s with operator, we can bring the values of self into scope allowing us to drop the prefix.

> let fix = f: let fixpoint = f fixpoint; in fixpoint;
  in let a = fix (self: with self; { x = "abc"; x2 = x + "123"; });
  in a
{ x = "abc"; x2 = "abc123"; }

This is pretty close to a definition of rec. We can almost think of rec { bindings } as syntactic sugar for fix (self: with self; { bindings }).

In order to override the definition of x instead of updating it, we need to intercept the definition of x before the open recursion is closed. To this end, we write a fixWithOverride function that takes a set of overriding bindings and an open recursive object, and applies the override bindings before closing the recursion.

> let fix = f: let fixpoint = f fixpoint; in fixpoint;
      withOverride = overrides: f: self: f self // overrides;
      fixWithOverride = overrides: f: fix (withOverride overrides f);
  in let open_a = self: with self; { x = "abc"; x2 = x + "123"; };
  in fixWithOverride { x = "def"; } open_a
{ x = "def"; x2 = "def123"; }

Success! We have managed to override the definition of x and get the definition of x2 updated automatically to reflect the new value of x. Let us step through this code to see how it works. First we defined open_a to be the same as our previous definition of a, but written as an open recursive object. The expression fixWithOverride { x = "def"; } open_a reduces to fix (withOverride { x = "def"; } open_a). What the withOverride function does is take an open recursive object and return a new open recursive object with updated bindings. In particular, withOverride { x = "def"; } open_a reduces to

self: (with self; { x = "abc"; x2 = x + "123"; }) // { x = "def"; }

which in turn reduces to self: { x = "def"; x2 = self.x + "123"; }. Finally, fix takes the fixpoint of this updated open recursive object to get the closed value { x = "def"; x2 = "def123"; }.
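For comparison, here is a rough Haskell analogue of the same technique (a sketch of my own, not code from Nixpkgs, with a record standing in for a Nix set; Haskell's laziness closes the recursion):

import Data.Function (fix)

data Obj = Obj { x :: String, x2 :: String } deriving Show

-- An object written in open recursive style.
openA :: Obj -> Obj
openA self = Obj { x = "abc", x2 = x self ++ "123" }

-- Apply the overriding bindings before closing the recursion.
withOverride :: (Obj -> Obj) -> (Obj -> Obj) -> (Obj -> Obj)
withOverride overrides f self = overrides (f self)

-- fix openA                                        yields Obj {x = "abc", x2 = "abc123"}
-- fix (withOverride (\o -> o { x = "def" }) openA) yields Obj {x = "def", x2 = "def123"}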

This is great; however, we do not want to have to work with open recursive objects everywhere. Instead, what we can do is build a closed recursive value, but tack on an extra field named _override that provides access to withOverride applied to the open recursive object. This will allow us to perform both updates and overrides at our discretion.

> let fix = f: let fixpoint = f fixpoint; in fixpoint;
      withOverride = overrides: f: self: f self // overrides;
      virtual = f: fix f // { _override = overrides: virtual (withOverride overrides f); };
  in let a = virtual (self: with self; { x = "abc"; x2 = x + "123"; });
  in rec { example1 = a;
           example2 = a // { x = "def"; };
           example3 = a._override { x = "def"; };
           example4 = example3._override { y = true; };
           example5 = example4._override { x = "ghi"; }; }
{ example1 = { _override = <LAMBDA>; x = "abc"; x2 = "abc123"; };
  example2 = { _override = <LAMBDA>; x = "def"; x2 = "abc123"; };
  example3 = { _override = <LAMBDA>; x = "def"; x2 = "def123"; };
  example4 = { _override = <LAMBDA>; x = "def"; x2 = "def123"; y = true; };
  example5 = { _override = <LAMBDA>; x = "ghi"; x2 = "ghi123"; y = true; }; }

One remaining problem with the above definition of virtual is that we cannot override the method x2 and still get access to x.

> let fix = f: let fixpoint = f fixpoint; in fixpoint;
      withOverride = overrides: f: self: f self // overrides;
      virtual = f: fix f // { _override = overrides: virtual (withOverride overrides f); };
  in let a = virtual (self: with self; { x = "abc"; x2 = x + "123"; });
  in a._override { x2 = x + "456"; }
error: undefined variable `x' at `(string):5:23'

Remembering that Nix is statically scoped, we see that the variable x in a._override { x2 = x + "456"; } is a dangling reference and does not refer to anything in lexical scope. To remedy this, we allow the overrides parameter of withOverride to optionally take an open recursive object rather than necessarily a set of bindings.

> let fix = f: let fixpoint = f fixpoint; in fixpoint;
      withOverride = overrides: f: self: f self // (if builtins.isFunction overrides then overrides self else overrides);
      virtual = f: fix f // { _override = overrides: virtual (withOverride overrides f); };
  in let a = virtual (self: with self; { x = "abc"; x2 = x + "123"; });
  in rec { example6 = a._override (self: with self; { x2 = x + "456"; });
           example7 = example6._override { x = "def"; }; }
{ example6 = { _override = <LAMBDA>; x = "abc"; x2 = "abc456"; };
  example7 = { _override = <LAMBDA>; x = "def"; x2 = "def456"; }; }

This illustrates the basic technique that packageOverrides and other similar overrides use. The packageOverrides code is a bit more complex because there are multiple points of entry into the package override system, but the above is the essential idea behind it. The makeOverridable function from customisation.nix creates an override field similar to my _override field above, but it overrides function arguments rather than overriding recursive bindings.

The syntax virtual (self: with self; { bindings }) is a little heavy. One could imagine adding a virtual keyword to Nix, similar to rec, so that virtual { bindings } would denote this expression.

After writing all this I am not certain my title is correct. I called this faking dynamic binding, but I think one could argue that this is actually real dynamic binding.

Categories: Offsite Blogs

Douglas M. Auclair (geophf): 'R' is fer Realz, yo!

Planet Haskell - Tue, 04/22/2014 - 8:16am

'R' is for Let's Get Real (the Real Number line)
... because this reality you're living in?
It isn't. It's entirely manufactured for you and by you, and you have no clue that you're living in this unreality that you think isn't.
It's as simple, and as pervasive, as this:
The 'Real Numbers'?
They aren't.
Let's take a step back.
First, as you know, there was counting, and that started, naturally, from the number one.
But even that statement is so obviously flawed and ridiculous on the face of it to a modern-day mathematician. Why are you starting with the number one? Why aren't you starting with the number zero?
This isn't an argument over semantics, by the way, this has real (heh!), fundamental impact, for the number one, in counting, is not the identity. You cannot add one to a number and get that number back, and if you can't do that, you don't have a category (a 'Ring'), and if you don't have that, you have nothing, because your number system has no basis, no foundation, and anything goes because nothing is sure.
But getting to the number zero, admitting that it exists, even though it represents the zero quantity, so technically, it refers to things that don't exist (and therefore a fundamental problem with the number zero in ancient societies) ... well, there was a philosopher who posited that the number zero existed.
He was summarily executed by Plato and his 'platonic' buddies because he had spouted heresy.
If number zero exists, then you had to be able to divide by it, and when you did, you got infinity. And, obviously, infinity cannot be allowed to exist, so no number zero for you.
We went on for a long, long time without the number zero. Even unto today. You study the German rule, then you learn your multiplication tables starting from which number? Not zero: one.
"One times one is one. One times two is two. One times three is three."
Where is the recitation for the "Zero times ..."?
And I mean 'we' as Western society, as shaped by Western philosophy. The Easterners, led by the Indians, had no problem admitting the number zero, they even had a symbol for it, 0, and then playing in the infinite playground it opened up, and therefore Eastern philosophy thrived, flourished, while Western philosophy, and society, stagnated, ...
... for one thousand years, ...
Just because ... now get this, ... just because people refused to open their eyes to a new way of seeing the world, ...
... through the number zero.
BOOM!
That's what happened when finally West met East, through exchange of ideas through trade (spices) and the Crusades (coffee served with croissants), and philosophers started talking, and the number zero was raised as a possibility.
BOOM!
Mathematics, mathematical ideas, and ideas, themselves, exploded onto the world and into thought. Now that there was zero, there was infinity, now that there was infinity, and it was tenable, people now had the freedom to explore spaces that didn't exist before. People could go to the New World now, both figuratively and literally.
For growth in Mathematics comes from opening up your mind the possibilities you wouldn't ('couldn't') consider before, and growth in Mathematics leads to opening the mind further.
Take, for example, the expansion of the counting numbers, from not admitting zero to, now, admitting it, yes, but then the fractional numbers. You could count fractionally now.
From zero to infinity there were an infinite number of numbers, but, with fractions, we now found that from zero to one, there were an equal number of infinite number of fractions.
The really neat discovery was that if you put all the fractions in one set, and you put all the counting numbers into another, there was a one-to-one correspondence between the two.
An infinity of counting numbers was the same size as an infinity of counting-by-fraction numbers.
Wow.
So, infinity was the biggest number, fer realz, then, eh?
No, not really.
Because then came what we call the 'Real Numbers' (which aren't, not by a long shot), and then we found an infinite number of numbers between one-half and one-third.
But the thing with these numbers?
There were rationals (fractionals) in there, to be sure, but there were also irrationals.
There were numbers like π and τ and e and other weird and wonderful numbers, and the problem with these numbers was that there was no correspondence between them and the rational numbers. There was no way you could combine rational numbers in any form and point directly to τ, for example. These numbers were transcendental.
What's more: they were more. There were infinitely more transcendental numbers, irrational numbers, than there were rationals.
And not even countably infinite more, they were uncountably more infinite.
There was an infinity that was bigger than infinity, and this we call the Continuum.
Why? Because between zero and one and then between one and two there's this measured, discrete, gap, and this we use for counting. There's a measured, even, step between the counting numbers, and even between the fractional numbers: you can count by them, because between them there is this discreteness.
Between the Reals there's no measurable gap. You can't count by them, and you can't add 'just this much' (every time) to go from τ to π ...
(Heh, actually, you can: π = τ / 2, but then what? What's the 'next' irrational number? There's no such thing as the 'next' irrational number, because no matter what 'next' number you choose, there will always be an uncountably infinite number of numbers between that 'next' number and the number you started from, so your 'next' number isn't the next number at all, and never will be.)
So, wow, the Reals. Lots of them. They cover everything, then, right?
Not even close.
There are numbers that are not numbers.
For example, what is the number of all the functions that yield the number zero?
There are, in fact, an infinite number of those.
How about all the functions that give you ('eventually') π?
... Ooh! There are several different ones to find π, aren't there?
Yes. Yes, there are. In fact, there are an uncountably infinite number of functions that compute π.
Now, wait. You're saying, geophf, that there are uncountably infinite number of functions to find each and every Real number and that the Real numbers are uncountable as well, so that means...
Yeah, the Continuum reigned supreme for just a while as the biggest-big number, but it was soon toppled by this Powerset infinity (my term for it, it's actually called something else).
Now, I don't know the relation between the functions that yield numbers, and the functions that construct functions that do that.
But do you see where we're going with this?
As big as you can stretch yourself, there's new vistas to see in mathematics (and meta-mathematics, let's not neglect that, now, shall we?).
But we still haven't scratched the surface.
Is the world quantized, like the rational numbers? Or is it a continuum like the Reals?
Or is something else, even something more?
Electricity.
Direct current comes to you in a straight, steady line.
The thing about DC ('direct current')? It sucks.
(Just ask Marvel.)
If you want clean, pure, ... powerful, ... well: power, over any kind of distance, you have to ask ... not our friend Tommy (Edison), but our wild-child Tesla.
He proposed to Edison that we should use AC ('alternating current') to provide electricity, and Edison threw him out of his lab, that idiot, telling him never to show his face there again.
Guess how your electricity is delivered to your home today?
The thing about alternating current? It's a wave-form, and not only that, it's a triple wave-form. How do real numbers model electricity? Well, with DC, you've got one number: "That there is 5 volts or 5 amps or 1.21 gigawatts."
Boy, that last one came out of the blue: like a bolt of lightning!
Heh.
But if it's alternating current, then you need the sine and cosine functions to describe your power. Functions? Wouldn't it be nice if it were just a number?
Yes, it would be nice, and there is a number to describe wave-form functions, like your alternating current.
'Imaginary' numbers.
They are called 'imaginary numbers,' because, if you look hard enough on the number line, with good enough eyes, eventually you'll see the number π or τ or e or 1, or 2, or 3, or even zero.
But no matter how hard you strain your eyes, you will never see a number with an imaginary component, because why? Because most imaginary numbers, being on the curve of the wave-form, are either above or below the number line. They 'aren't' numbers, then, because they're not on the number line.
They're imaginary.
I mean, come on! The square-root of negative one? Why would anybody do this? Unless they were daft or a bit batty.
The thing is, without imaginary numbers, we wouldn't have the forms to get our heads around alternating current.
Most of the world, except those very lucky few who lived within a mile or two of a power plant, would be in darkness.
And the computer? Pfft! Don't get me started. Hie thee to the nunnery, because we are now back in the Dark Ages.
Or at most in the Age of 'Enlightenment' where you had to run for cover when the landlady bellowed "'ware slops!" ... unless you wanted raw sewage mixed with rotted fruit on your head.
But now, here we are, because we have both Real and imaginary numbers, together giving us the complex number set (which, it turns out, is not bigger than the Reals, as there is a one-to-one correspondence between each real number and each complex number. Fancy that! An infinity 'more' of complex numbers above and below the Real number line gives the same number of complex numbers as Reals).
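Here's the 'one number for a whole wave-form' idea as a tiny Haskell sketch (my own illustration; phasor and voltageAt are made-up names, not anybody's API):

import Data.Complex

-- One complex number stands in for the whole sine/cosine wave-form:
-- its magnitude is the amplitude, its argument is the phase.
phasor :: Double -> Double -> Complex Double
phasor amplitude phase = mkPolar amplitude phase

-- The instantaneous voltage is the real part of the rotating phasor.
voltageAt :: Complex Double -> Double -> Double -> Double
voltageAt v omega t = realPart (v * cis (omega * t))

Prelude Data.Complex> voltageAt (phasor 5 0) (2 * pi * 60) 0
5.0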
We're good?
Not even close.
Last I checked, I don't live in Flatland, and, last I checked, nor do you.
Complex numbers go 'above' and 'below' the Real number line, but ... what about the third dimension? Are there numbers to model us in three dimensions?
What would such numbers look like?
And here's a stunner. If I were on Mars, or the Moon, and you were here, reading this blog post, how would I know where to look to see you?
The Moon, and Mars, too, have their own three-dimensional frames of reference, and the Earth has its own, too (it's called geocentric). So, to draw a line from a satellite (such as the Moon, known as 'Earth's satellite') so that it can look down at a spot on the Earth, you actually have to use a four-dimensional number to connect the two three-dimensional frames of reference so that one can look at the other. This four-dimensional number is called the Quaternion.
It's simple, really, it's just rocket science.
And it's really ... mind-bending, at first, wrapping your head around the math and drawing pictures, or using both your hands, three fingers out indicating both sets of axes, and you use your nose to draw the line connecting the two, and then you scream, 'but how do I measure the angles?'
Not that I've worked on satellite projects or anything. cough-EarthWatch-cough.
But nowadays, you can't get into making a realistic game without having quaternions under your belt. Monster sees you, monster charges you, monster bonks you on the head. Game over, thank you for playing, please insert $1.50 to die ... that is to say, to try again.
The thing is: how does the monster 'see' you? The monster has its own frame of reference, just as you do. The monster exists in its own three-dimensional coordinate system, just as you do. If you were standing on a little hillock, would you expect the monster not to see you because you're slightly elevated?
Of course not! The monster sees you, the monster bonks you. All of this happens through transformation of disparate coordinate systems via quaternions.
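If you want to see the machinery, here's a bare-bones Haskell sketch (my own names and framing; it's just the textbook Hamilton product, not code from any satellite or game):

data Quat = Quat Double Double Double Double    -- w, x, y, z

-- The Hamilton product of two quaternions.
qmul :: Quat -> Quat -> Quat
qmul (Quat w1 x1 y1 z1) (Quat w2 x2 y2 z2) =
  Quat (w1*w2 - x1*x2 - y1*y2 - z1*z2)
       (w1*x2 + x1*w2 + y1*z2 - z1*y2)
       (w1*y2 - x1*z2 + y1*w2 + z1*x2)
       (w1*z2 + x1*y2 - y1*x2 + z1*w2)

conj :: Quat -> Quat
conj (Quat w xx yy zz) = Quat w (-xx) (-yy) (-zz)

-- Rotate a 3D point into another frame with a unit quaternion: q * v * conj q.
rotate :: Quat -> (Double, Double, Double) -> (Double, Double, Double)
rotate q (px, py, pz) =
  let Quat _ rx ry rz = q `qmul` Quat 0 px py pz `qmul` conj q
  in (rx, ry, rz)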
Now that's something to impress people with at cocktail parties.
'Yeah, I spent all day translating coordinate systems using quaternions. Busy day, busy day."
Just don't say that to a mathematician, because he (in general, 'he') will pause, scratch his head, then ask: "So you were checking out babes checking you out?"
Then you'll have to admit that, no, not that, you were instead avoiding your boss trying to catch your eye so he could hand you a whole stack of TPS reports to work on over the weekend.
Like I ... didn't. Ooh. Ouch! Guess who was working through Easter?
Fun, fun!
Okay, though. Four dimensions. We've got it all, now, right?
Not if you're a bee.
Okay, where did that come from?
Bees see the world differently from you and me.
Please reflect on the syntax of that sentence, writers.
Bees see the world differently (not: 'different') from you and me (not: 'from you and I').
Gosh! Where is the (American) English language going?
(But I digress.)
(As always.)
If you look at how they communicate through their dance, you see an orientation, but you also see some other activity: they 'waggle' (so it's called the 'waggle-dance'), and the vigor with which they do their waggle communicates a more interesting cache of nectar. There are other factors, too.
The fact of the matter: three dimensions are not enough for the bee's dance to communicate what it needs to say to the other drones about the location, distance, and quantity of nectar to be found.
So, it has its waggle-dance to communicate this information. Everybody knows this.
Until, one little girl, working on her Ph.D. in mathematics, stopped by her daddy's apiary, and, at his invitation, watched what he was doing.
Eureka.
"Daddy," she said, "those bees are dancing in six dimensions."
Guess who changed the topic of her Ph.D., right then and there?
Combining distance, angle from the sun, quantity of interest, ... all the factors, the bees came up with a dance.
They had only three dimensions to communicate six things.
The thing is, nobody told the bees they had only three dimensions to work with. So they do their dance in six dimensions.
If you map what they are doing up to the sixth dimension, it gives a simple (six-dimensional) vector, which conveys all the information in one number.
Bees live in six dimensions, and they live pretty darn well in it, too.
Or, put this way: 80% of the world's food supply would disappear if bees didn't do what they did.
You are living in six dimensions, or, more correctly, you are alive now, thanks to a little six-dimensional dance.
Six dimensions.
Okay, but what if you're Buckaroo Banzai?
Pfft! Eight dimensions? Light weight.
In String Theory, we need at least ten dimensions for super strings, and twenty-six dimensions for some types of strings.
So, 'R' is for real numbers.
The neat thing about numbers, is ... they can get as big as you can think them.
And they're good for a Ph.D. thesis ... or two.
http://logicaltypes.blogspot.com/2014/04/f-is-for-function.html
Categories: Offsite Blogs

Douglas M. Auclair (geophf): 'F' is for function

Planet Haskell - Tue, 04/22/2014 - 4:43am

'F' is for function.
Okay, you just knew we were going there today, didn't you?
Today, and ... well, every day in the last 2,600 years (I'm not joking) has had this dynamic tension between objects and functions. Which is greater?
Read this tell-all post to find out!
Sorry, I channeled Carl Sagan in Cosmos for a second. 
Not really.
But I digress.
WAY back when, back in the olden times, there were just things, and then when you had a full belly and your one thing, your neighbor, who happened to be named 'Jones,' had two things.
And then keeping up with the Jones' came to be a real problem, because you had one thing, and they had two. So you got one more.
And counting was invented.
For things.
In fact, in some cultures, counting doesn't exist as an abstract thing. You don't count, in general, you have different counting systems for different things. Counting, for example, pens, is a very different thing than counting eggs.
Ha-ha, so quaint! Who would ever do that?
Well, you do. What time is it? You have a different way for counting time than you do for counting distance or for counting groceries that you need to buy, don't you?
But, at base, they are all just things: time, distance, and gallons of milk.
Sorry: litres of milk. How very British Imperial of me!
Question: why are Americans just about the only ones using the British Imperial system any more? That is, the real question is: why are the British not using the British Imperial system? They onto something we're not?
Anybody want a satellite coded wrongly because some engineers intermingled British Imperial units with metrics?
Okay, so, units, things, metrics, whatevs.
And counting.
Counting is doing something to something. Whereas, at first, it was important to have that chicken in your stomach (cooked is nice, but optional), suddenly it's become important that you have not just the one chicken, a hen; an occasional rooster (very rarely) with all your hens would be nice, too.
(Roosters are big jerks, by the way.)
Counting was all we had, after the chickens and their yummy eggs, that is, and that for a long while.
Question: which came first, the chicken or the egg?
Answer: that's obvious, the egg.
Think about it. You have eggs for breakfast, then you have a chicken-salad sandwich.
Like I said: obvious! First the egg, then the chicken.
Unless you're one of those weirdos that eat chicken sausage with your breakfast, then it's a toss-up, but in that case I have no help for you.
Okay, chickens, eggs (egg came first) (You read that first, right here on this blog) (and you thought there was no point to math!), and then counting, for a long, long time.
Then, ... counting became too much, because you ran out of fingers, then toes, to count your mutton on. It happens. That's a good thing, because then people started to have to think (that's a good thing, too), and they thought, 'How do I count all my mutton when I run out of fingers and toes? I need to know that I have more sheep than the Jones', and I've paired each of my sheep to his, and I have more than twenty-two more than he does. How many more do I have?'
Then people started getting smart. They did more than one-to-one, or pairwise, groupings, they started grouping numbers, themselves.
All this is very nice background, geophf, but to what end?
First counting, then grouping, those are operations, and on or using numbers.
Then addition and subtraction, which are abstractions, or generalizations on counting, then multiplication, and let's not even go into division.
But.
Counting, adding, multiplying. These things worked for your chickens (and eggs) (first), but also on your mutton, and, when you settled down, it worked on how many feet, yards, hectares of land you had, too. The same principles applied: numbers worked on things, no matter what the things were, and the operations on numbers worked on ... well, no matter what the numbers counted.
Now. Today.
Or, sometime back in the 40s anyway, way back before the Birkenstock-wearers were mellowing out with lattes, anyway. A couple of dudes (Samuel Eilenberg and Saunders Mac Lane, specifically) said, 'Hey, wait!' they exclaimed, 'it doesn't matter what the thing is, at all!' they said, 'as long as the things are all the same thing, we don't care at all about the thing itself at all!'
And they invented a little something that eventually became known as Category Theory, which is the basis of ... well, a lot of things now: quantum thermodynamics, for one, and computer science, for another.
And they nailed it. They created a one-to-one correspondence between the things in categories and the things in Set theory.
Why is that important? Set theory is this:
{}
Got me?
A set is a thing that contains a thing. The fundamental set is the set that contains nothing (the 'empty set'), which is this:
{}
And other sets are sets that contain sets:
{{}, {{}, {{}, {}}}}
With sets you can model numbers if you'd like:
0: {}
1: {{}} (the set that contains '0')
2: {{}, {{}}} (the set that contains '0' and '1')
3: {{}, {{}}, {{}, {{}}}} (the set that contains '0', '1', and '2')
etc...
And as Gödel (eventually) showed, once you model numbers, you can model anything.
(We'll get to that ... for 'G' is for Gödel-numbering, I'm thinking)
How? Well, once you have numbers, you can count with them (as just demonstrated), you can add one to them (the 'successor' function), you can take away one from them (the 'predecessor' function), you can add two of them together (successively add one to the first number, taking one away from the second, until the second number is zero. Try it!), subtract, by doing the opposite (or by just removing duplicates from both until one of them is gone ... 'pairwise-subtraction'!), multiply two together (successive addition), divide numbers ... (um, yeah) ... and, well, do anything with them.
Once you have Number, you have counting, and from that, you have everything else.
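Here's that idea as a quick Haskell sketch (my own framing of the folklore Peano encoding, nothing more):

data Nat = Zero | Succ Nat deriving Show

-- Addition as described above: move ones from the second number
-- to the first until the second number is zero.
add :: Nat -> Nat -> Nat
add n Zero     = n
add n (Succ m) = add (Succ n) m

-- add (Succ Zero) (Succ (Succ Zero)) gives Succ (Succ (Succ Zero)), i.e., 1 + 2 = 3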
So, our dudes mapped category theory things (morphisms) down to Set mapping functions, and they had it.
Because why?
Well, it's rather cumbersome to carry a whole bunch of sets for your Pert formulation, especially when you want to see the money you (don't) save by paying off your mortgage's principal early.
Banker: oh, geophf, pay off your mortgage principal early, you'll save a ton in the long run.
me: uh, that was true when mortgage interest was 10-15-21% ...
(you remember those Jimmy Carter days?)
me, continuing: ... but now that my interest rate is 2.5% I save zip after 30 years for early payment.
Try it. Run the numbers. Bankers haven't. They're just repeating what was good advice, ... twenty-five years ago.
But when you have objects, whose representations do not matter, be they 0, 1, a googol, or {{}, {{}}, {{}, {{}}}}, then you can concentrate on what's really important.
The functions.
Whoopsie! I slipped and gave the answer, didn't I?
Me, and Alan Kay, both.
Alan Kay, inventor of Smalltalk, and inspirer of the movie Tron: The objects aren't important, it's the messages that go between them that is.
So there.
Well, okay, functions win. It used to be the chicken (or the egg), but now we've ... grown a bit, worrying less about what fills our stomach, and where we're going to get our next meal (and not be the next meal), you know: survival.
We've transcended our stomachs.
Maybe.
Well, okay, functions rule, so back off.
The dudes didn't stop there, because there's been some really neat things going on with functions, since discovering functions are things in and of themselves, because if functions are things, in and of themselves, then, well, ...
Well, you can number them (tomorrow) (promise), you can count them, you can add them, you can multiply them, you can exponate them, you can ...
You can, in fact, operate on them, applying functions to functions.
So, our dudes did just that. They said their functions, their morphisms have two rules to be a function.
You can ask what it is, it's identity:
id f = f
And you can string them together, composing them:
g . f = ... well, g . f
So, if you have 
> f x = succ x
and 
> g x = x * 2
then (g . f) gives you some function, we can call it, h, here if you'd like, and h is the composition of applying f to your number first, then g, or
> h x = 2 * (x + 1)
Try it!
Pull up your functional programming language listener (I use Haskell for now) (until I find or invent something better)
and enter the above.
Now, it's an interesting proposition to show that
(g . f) == h
But, for now, in Haskell, we cannot do that.
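We can, however, spot-check the two extensionally, point by point (a throwaway sketch at the prompt; QuickCheck would do this more honestly):

Prelude> let f x = succ x
Prelude> let g x = x * 2
Prelude> let h x = 2 * (x + 1)
Prelude> all (\x -> (g . f) x == h x) [-100 .. 100]
True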
Now, in some other programming languages (I'm thinking of my experience with Twelf), you can, but first you have to provide numbers, on you own, like: 0, 1, 2, ...
... and prove they are what they are, you know: they follow in order, and stuff like that ...
Then you provide the functions for identity and composition, and then you can prove the above statement, because now you have theorems and a theorem prover.
Yay!
No, ... no 'yay'!
I mean, 'yay,' if that's your thing, proving theorems (I happen to have fun doing that at times), but 'ick,' otherwise, because it's a lot of work to get to things like just plain old addition.
See, theorem-prover-folk are these hard-core fighters, right on the edge of discovering new mathematics and science. Not plain-old-little me, I just use the stuff (to build new theories from preexisting, proved, ones), so I'm a lover of the stuff.
To quote an eminent mathematical philosopher: I'm a lover, not a fighter.
And, no, Billie-Jean is not my lover. She's just a girl that claims that I am the one.
Anyway!
So, but anyway, using the above two functions, identity and composition, you're now able to reason about functions generally, making new functions by stringing together (composing) other, preexisting functions.
This is so important that Eilenberg and Mac Lane gave this process its own name: 'natural transformation.'
But okay, Categories have units and you don't particularly care what those units are; we saw one set of units, numbers, themselves, as an example,
But units can be anything.
Even functions, so you have a category of functions (several categories, in fact, as there are different kinds of functions, just so you know) ... You can even have categories of categories of things (things being some arbitrary unitary types, like numbers, or functions, or categories, again ... it can get as wild and woolly as you want it to get).
And types. Categories can be used to model types, so you can have a category that represents the natural numbers ... as types:
Nat : Category
zero : 1 -> Nat
succ : Nat -> Nat
so zero = zero ()
one = succ (zero ())
two = succ one
three = succ two
etc ...
And those formulas, those functions, work regardless of the underlying type of things like 'number' or even 'function.'
Category theory was originally called 'Abstract nonsense,' by the way, because Eilenberg and Mac Lane saw it as so abstract that they were actually talking about nothing.
Sort of like how Set Theory does, starting from nothing, the empty set, and building everything from that object, but even more abstractly than that, because category theory doesn't even have the 'nothing'-object to start from, it just starts identifying and composing functions against objects about which it doesn't care what they are.
Now, there was, at one point, a working definition of abstraction, and that was: the more useful a theorem is, the less abstract it is.
The contrapositive implied: the more abstract a formula is, the less useful.
But I find I play in these domains very happily, and in playing, I'm able to invent stuff, useful stuff that's able to do things, in software, that other software engineers struggle with and even give up, stating: "This can't be done."
I love it when I'm told I "can't" do something.
You know what I hear when I hear the word "can't"?
I hear opportunity. 
Everyone else may have given up on this particular task, because it 'can't' be done. But me? I go right in with my little abstract nonsense, reinventing the language that was unable to frame the problem before with a language that can, and I find, hey, I not only can do it, but I just did.
Language frames our thoughts, telling us what we can and cannot do. English is a very complex language, taking in a lot of the world and describing it just so.
The language of mathematics is very, very simple, and describes a very tiny world, a world that is so tiny, in fact, it doesn't exist in reality. It's a quantum-sized world, mathematics, and it has the possibility to be even smaller than that.
But so what? I get to invent a world that doesn't exist, each time I apply a theorem from mathematics that others are unfamiliar with.
And when I do so, I get to create a world where the impossible is possible.
It's very liberating, being able to do things others 'can't.' And when I pull it off, not only do I win, but my customers win, and maybe even those developers who 'can't' win, too, if they're willing to open their eyes to see possibilities they didn't see before.
'F' is for functions, a totally different way of seeing the world, not as a world of objects, but as a world of possibilities to explore.
Categories: Offsite Blogs

cabal - the simple guide

del.icio.us/haskell - Tue, 04/22/2014 - 2:40am
Categories: Offsite Blogs

Christopher Done: Typeable and Data in Haskell

Planet Haskell - Mon, 04/21/2014 - 6:00pm

Data.Typeable and Data.Data are rather mysterious. Starting out as a Haskell newbie you see them once in a while and wonder what use they are. Their Haddock pages are pretty opaque and scary in places. Here’s a quick rundown I thought I’d write to get people up to speed nice and quick so that they can start using it.[1]

It’s really rather beautiful as a way to do generic programming in Haskell. The general approach is that you don’t know what data types are being given to you, but you want to work upon them almost as if you did. The technique is simple when broken down.

Requirements

First, there is a class exported by each module. The class Typeable and the class Data. Your data types have to be instances of these if you want to use the generic programming methods on them.

Happily, we don’t have to write these instances ourselves (and in GHC 7.8 it is actually not possible to do so): GHC provides the extension DeriveDataTypeable, which you can enable by adding {-# LANGUAGE DeriveDataTypeable #-} to the top of your file, or providing -XDeriveDataTypeable to ghc.

Now you can derive instances of both:

data X = X deriving (Data,Typeable)

Now we can start doing generic operations upon X.

The Typeable class

As a simple starter, we can trivially print the type of any instance of Typeable. What are some existing instances of Typeable? Let’s ask GHCi:

λ> :i Typeable
class Typeable a where
  typeOf :: a -> TypeRep
instance [overlap ok] (Typeable1 s, Typeable a) => Typeable (s a)
instance [overlap ok] Typeable TypeRep
instance [overlap ok] Typeable TyCon
instance [overlap ok] Typeable Ordering
instance [overlap ok] Typeable Integer
instance [overlap ok] Typeable Int
instance [overlap ok] Typeable Float
instance [overlap ok] Typeable Double
instance [overlap ok] Typeable Char
instance [overlap ok] Typeable Bool
instance [overlap ok] Typeable ()

That’s the basic Prelude types and the Typeable library’s own types.

There’s only one method in the Typeable class:

typeOf :: a -> TypeRep

The TypeRep value has some useful normal instances:

λ> :i TypeRep
instance [overlap ok] Eq TypeRep
instance [overlap ok] Ord TypeRep
instance [overlap ok] Show TypeRep
instance [overlap ok] Typeable TypeRep

Use-case 1: Print the type of something

So we can use this function on a Char value, for example, and GHCi can print it:

λ> :t typeOf 'a'
typeOf 'a' :: TypeRep
λ> typeOf 'a'
Char

This is mostly useful for debugging, but can also be useful when writing generic encoders or any tool which needs an identifier to be associated with some generic value.
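For example, one might tag a generically-shown value with its type's name (a sketch of my own, not an API from the library; tagged is a made-up name):

import Data.Typeable

tagged :: (Typeable a, Show a) => a -> String
tagged x = show (typeOf x) ++ ": " ++ show x

λ> tagged 'a'
"Char: 'a'"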

Use-case 2: Compare the types of two things

We can also compare two type representations:

λ> typeOf 'a' == typeOf 'b'
True
λ> typeOf 'a' == typeOf ()
False

Any code which needs to allow any old type to be passed into it, but which has some interest in sometimes enforcing or triggering on a specific type can use this to compare them.

Use-case 3: Reifying from generic to concrete

A common thing to need to do, when given a generic value, is to sometimes, if the type is right, actually work with the value as the concrete type, not a polymorphic type. For example, a printing function:

char :: Typeable a => a -> String

The specification for this function is: if given a Char, return its string representation; otherwise, return "unknown". To do this, we need a function that will convert from a polymorphic value to a concrete one:

cast :: (Typeable a, Typeable b) => a -> Maybe b

This function from Data.Typeable will do just that. Now we can implement char:

λ> let char x = case cast x of
                  Just (x :: Char) -> show x
                  Nothing -> "unknown"
λ> char 'a'
"'a'"
λ> char 5
"unknown"
λ> char ()
"unknown"

The Data class

That’s more or less where the interesting practical applications of the Typeable class ends. But it becomes more interesting once you have that, the Data class can take advantage of it. The Data class is much more interesting. The point is to be able to look into a data type’s constructors, its fields and traverse across or fold over them. Let’s take a look at the class.

Again, there are some basic instances provided:

instance Data a => Data [a]
instance Data Ordering
instance Data a => Data (Maybe a)
instance Data Integer
instance Data Int
instance Data Float
instance (Data a, Data b) => Data (Either a b)
instance Data Double
instance Data Char
instance Data Bool

It’s a rather big class, so I’ll just cover some methods that demonstrate the key use-cases.

Use-case 1: Get the data type

Similar to the TypeRep, you can use dataTypeOf to get a unique representation of a data type:

dataTypeOf :: Data a => a -> DataType

For example:

λ> dataTypeOf (Just 'a')
DataType {tycon = "Prelude.Maybe", datarep = AlgRep [Nothing,Just]}

There aren’t any other interesting instances for this type, but we’ll look at uses for this type later. Representations (so-called FooRep) tend to be references from which you can reify into more concrete values.

Use-case 2: Inspecting a data type

The most common thing to want to do is to get a list of constructors that a type contains. So, the Maybe type contains two.

λ> :t dataTypeConstrs
dataTypeConstrs :: DataType -> [Constr]
λ> dataTypeConstrs (dataTypeOf (Nothing :: Maybe ()))
[Nothing,Just]

We’ll look at what we can do with constructors later.

It’s also surprisingly common to want to see what the constructor is at a particular index. We could write this function ourself, but there’s already one provided:

λ> indexConstr (dataTypeOf (Nothing :: Maybe ())) 2
Just

Sometimes you want to know whether a data type is algebraic (in other words, does it have constructors and is it not one of the built-in types like Int/Float/etc)?

λ> isAlgType (dataTypeOf (Just 'a'))
True
λ> isAlgType (dataTypeOf 'a')
False

Use-case 3: Get the constructor of a value

We have the method

toConstr :: a -> Constr

which, given any instance of Data, will yield a constructor.

λ> :i Constr
data Constr
instance Eq Constr
instance Show Constr

You can’t do much with a constructor as-is, but compare and print it:

λ> toConstr (Just 'a')
Just
λ> toConstr (Just 'a') == toConstr (Nothing :: Maybe Char)
False

However, those operations by themselves can be useful.

By the way, we can also get back the DataType of a constructor:

λ> constrType (toConstr (Just 'a'))
DataType {tycon = "Prelude.Maybe", datarep = AlgRep [Nothing,Just]}

Use-case 4: Get fields of a constructor

Another typical thing to want to do is to use the field names of a constructor. So for example:

λ> data X = X { foo :: Int, bar :: Char } deriving (Typeable,Data)
λ> toConstr (X 0 'a')
X
λ> constrFields (toConstr (X 0 'a'))
["foo","bar"]

It’s a good use-case for serializing and debugging.

Use-case 5: Make a real value from its constructor

It’s actually possible to produce a value from its constructor. We have this function

fromConstr :: Data a => Constr -> a

Example:

λ> fromConstr (toConstr (Nothing :: Maybe ())) :: Maybe ()
Nothing

But what do you do when the constructor has fields? No sweat. We have this function:

fromConstrB :: forall a. Data a => (forall d. Data d => d) -> Constr -> a

Haskell beginners: Don’t fear the rank-N type. What it’s saying is merely that the fromConstrB function determines what the type of d will be by itself, by looking at Constr. It’s not provided externally by the caller, as it would be if the forall d. were at the same level as the a. Think of it like scope. let a = d in let d = … doesn’t make sense: the d is in a lower scope. That means we can’t just write:

fromConstrB (5 :: Int) (toConstr (Just 1 :: Maybe Int)) :: Maybe Int

The Int cannot unify with the d because the quantification is one level lower. It basically doesn’t exist outside of the (forall d. Data d => d) (nor can it escape). That’s okay, though. There is a type-class constraint which lets us be generic. We already have a function producing a value of that type:

λ> :t fromConstr (toConstr (1 :: Int))
fromConstr (toConstr (1 :: Int)) :: Data a => a

So we can just use that:

λ> fromConstrB (fromConstr (toConstr (1 :: Int))) (toConstr (Just 1 :: Maybe Int)) :: Maybe Int
Just 1

Tada! But wait… What if there’re more fields? How do we provide more than one, and of different types?

Enter fromConstrM:

fromConstrM :: forall m a. (Monad m, Data a) => (forall d. Data d => m d) -> Constr -> m a

Because it’s monadic we can use a state monad to keep an index! Observe:

λ> :t execState
execState :: State s a -> s -> s
λ> :t execState (modify (+1))
execState (modify (+1)) :: Num s => s -> s
λ> :t execState (forM_ [1..5] (const (modify (+1))))
execState (forM_ [1..5] (const (modify (+1)))) :: Num s => s -> s
λ> execState (forM_ [1..5] (const (modify (+1)))) 5
10

Let’s put this to use with fromConstrM:

λ> evalState
     (fromConstrM
       (do i <- get
           modify (+1)
           return (case i of
                     0 -> fromConstr (toConstr (5 :: Int))
                     1 -> fromConstr (toConstr 'b')))
       (toConstr (Foo 4 'a')))
     0 :: Foo
Foo 5 'b'

In other words, keep an index starting at 0. Increase it each iteration that fromConstrM does. When we’re at index 0, return an Int, when we’re at index 1, return a Char. Easy! Right?

Use-case 6: mapping over data structures generically

A common thing to want is to map over a value in a structure-preserving way, but changing its values. For that we have gmapT:

gmapT :: forall a. Data a => (forall b. Data b => b -> b) -> a -> a

Similar to fromConstr*, there is a rank-n type b that refers to each type in the constructor of type a. It’s easy enough to use:

λ> gmapT (\d -> case cast d of
                  Nothing -> d
                  Just x -> fromJust (cast (if isUpper x then '!' else x)))
         (Foo 4 'a')
Foo 4 'a'
λ> gmapT (\d -> case cast d of
                  Nothing -> d
                  Just x -> fromJust (cast (if isUpper x then '!' else x)))
         (Foo 4 'A')
Foo 4 '!'

Here I’m doing a little check on any field in the constructor of type Char and if it’s upper case, replacing it with !, otherwise leaving it as-is. The first trick is to use the cast function we used earlier to reify the generic d into something real (Char). The second trick is to cast our concrete Char back into a generic d type.

Just like fromConstrM earlier, if you want to operate on exact indices of the constructor rather than going by type, you can use gmapM and use a state monad to do the same thing as we did before.
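That might look something like this sketch (mine, not the library’s; it assumes the post’s Foo with an Int and a Char field, and bumps only the field at index 0 when it happens to be an Int):

import Control.Monad.State
import Data.Data
import Data.Maybe (fromJust)

step :: Data d => d -> State Int d
step d = do
  i <- get
  modify (+ 1)
  return $ case (i, cast d) of
    (0, Just n) -> fromJust (cast (n + 1 :: Int))  -- safe: here d really is an Int
    _           -> d

bumpField0 :: Data a => a -> a
bumpField0 x = evalState (gmapM step x) 0

λ> bumpField0 (Foo 4 'a')
Foo 5 'a'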

Use-case 7: generating from data structures generically

Another slightly different use-case is to walk over the values of a data structure, collecting the result. You can do this with gmapM and a state monad or a writer, but there’s a handy function already to do this:

gmapQ :: forall a. Data a => (forall d. Data d => d -> u) -> a -> [u]

Trivial example:

λ> gmapQ (\d -> toConstr d) (Foo 5 'a')
[5,'a']

A more useful example can be found in structured-haskell-mode which walks over the Haskell syntax tree and collects source spans into a flat list.

Printer example

Here’s a trivial (not very good, but something I wrote once) generic printer:

gshows :: Data a => a -> ShowS
gshows = render `extQ` (shows :: String -> ShowS) where
  render t
    | isTuple = showChar '(' . drop 1 . commaSlots . showChar ')'
    | isNull = showString "[]"
    | isList = showChar '[' . drop 1 . listSlots . showChar ']'
    | otherwise = showChar '(' . constructor . slots . showChar ')'
    where
      constructor = showString . showConstr . toConstr $ t
      slots = foldr (.) id . gmapQ ((showChar ' ' .) . gshows) $ t
      commaSlots = foldr (.) id . gmapQ ((showChar ',' .) . gshows) $ t
      listSlots = foldr (.) id . init . gmapQ ((showChar ',' .) . gshows) $ t
      isTuple = all (==',') (filter (not . flip elem "()") (constructor ""))
      isNull = null (filter (not . flip elem "[]") (constructor ""))
      isList = constructor "" == "(:)"

I wrote it because the GHC API doesn’t have Show instances for most of its data types, so it’s rather hard to actually inspect any data types that you’re working with in the REPL. It has instances for pretty printing, but pretty printing confuses presentation with data.

Example:

λ> data Foo = Foo Char Int deriving (Data,Typeable)
λ> gshow ([Just 2],'c',Foo 'a' 5)
"([(Just (2))],('c'),(Foo ('a') (5)))"

Note: no Show instance for Foo.

Summary

We’ve briefly covered how to query types, how to cast them, how to walk over them or generate from them. There’re other things one can do, but those are the main things. The real trick is understanding how to make the types work and that comes with a bit of experience. Fiddle around with the concepts above and you should gain an intution for what is possible with this library. See also: Data.Generics.Aliases.

Hope it helps!

  1. I’ll migrate this to the HaskellWiki when it doesn’t look so, uh, shall we say, unattractive.

Categories: Offsite Blogs