# News aggregator

### Douglas M. Auclair (geophf): 'L' is for the Lambda-calculus

So, nearly halfway through the alphabet. How are we all doing, are we going to be able to go from 'A' to 'Z' in one month?

*Tune in tomorrow to see the next compelling and exciting installment of the #atozchallenge!*

So, this is the big one: a way of looking at mathematics that has been seething under the surface, and that, with the λ-calculus (pronounced *'lambda calculus'*), has now bubbled up to the surface.

So, we've seen that the formulae, or the functions, are the things that concern mathematics, and not so much the objects operated on, and that turned world-thought on its head. Well, the λ-calculus came to be to address a very particular concern that mathematicians worry about (mathematicians are such worriers. Except me, I'm carefree as a hummingbird, flitting among rose bushes and lemon trees, gathering nectar to keep my heart rate at three-thousand beats per minute.) (no, ... wait.), and that concern is this: in the function

f x y z = x + 2y - 3z

How do we *know* x is x, y is y, and z is z?

That is, what if I juxtapose y and z, mixing the two up, as it were? Is there something, anything, that enforces the ordering of the arguments to a function?

In set theory, you see, the set {1, 2, 3} is equivalent to the set {3, 1, 2}, or any permutation: two sets are the same if they have the same members, so there's no ordering imposed.
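This order-independence is easy to see in Haskell's `Data.Set` (a small illustration of the point, not part of the original post):

```haskell
import qualified Data.Set as Set

-- Two sets built from differently ordered lists are equal:
-- membership, not order, is what defines a set.
main :: IO ()
main = print (Set.fromList [1, 2, 3 :: Int] == Set.fromList [3, 1, 2])
```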

Now, you could enforce an ordering, by adding another layer of sets:

{{1, 1}, {2, 2}, {3, 3}}

And this ... *'means'* (I say 'means' quite loosely, you see) that "the *first* number is one, the *second* number is two, and the *third* number is three."

But it doesn't at all, because, as sets are order-independent, how can we guarantee the ordering argument is the first one in the set? We can't. So we add another layering of sets:

{{{1}, 27}, {42, {3}}, {{2}, 21}}

Which says: "The first argument is 27, the second argument is 21, and the third argument is 42."

You see what I did there? I mixed up the ordering of the sets and even their ordering arguments, but since the order is a subset of the subset, you're still able to get that information to order the sets as you read through them.

This gave mathematicians headaches, because mathematicians are wont to get headaches about things, because the above set is *not* the set {27, 21, 42} (or any of its permutations), but something quite different, and the function

f x y z

is *not* the function

f {{z, {3}}, {{1}, x}, {y, {2}}}

(or any of its permutations), because the first one, f x y z, has three arguments, but f {...whatever...} has one argument, which is a set of ordered arguments.

Ugh. Try working with that!

Some people have enough problems working with f x y z, already!

So, how to solve this?

Two of the most popular ways that mathematicians solved this problem are:

1. get rid of variables altogether. And numbers, too. And all symbols, except some very primitive functions which we'll predefine for you, so instead of the above function

f x y z = x + 2y - 3z

we have something looking like this:

BBBVKI xxxx

Okay, you can reason about that; it comes with its own Gödel number, so you can have fun with it that way, but ...

But that's hard for a lot of people to swallow or to know what you're talking about, or to have a conversation about the veracity of the statement you've constructed.

The other approach is this:

*partial functions.*

The function f x y z can actually be represented as three functions: some function f that takes a variable x, given the result of the partial functions of y and z, or, in 'math words':

f x = something + x

where 'something' is

g y = 2y + something else

and 'something else' is

h z = - 3 z.

See?

So when we string these functions together to get our total answer (instead of three partial answers), we have

f x . g y . h z = x + 2y - 3z

Which gets us our original equation back, *and* guarantees that our f x is getting x, because it's *not* getting anything else; our g y and h z are getting their y and z, respectively, too, for the same reason. Once you reduce a function down to one of only one argument, then you know the argument you're getting is the right one for that function.
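In Haskell this chain of one-argument functions is exactly currying (a sketch of the idea, with names of my own choosing, not the author's code): f x y z is really f applied to x, yielding a function awaiting y, yielding a function awaiting z.

```haskell
-- The three-argument function from the post ...
f :: Int -> Int -> Int -> Int
f x y z = x + 2 * y - 3 * z

-- ... is really a chain of one-argument functions, so partially
-- applying it gives back a smaller function at each step.
g :: Int -> Int -> Int   -- g is f with x fixed at 1
g = f 1

h :: Int -> Int          -- h is g with y fixed at 2
h = g 2

main :: IO ()
main = print (f 1 2 3, h 3)   -- both are 1 + 4 - 9 = -4
```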

Problem solved.

Class...

is...

dismissed!

(And all the students stand up and applaud at the end of the lecture.)

That solves that problem, neatly, too.

But then it magically solved a bunch of other problems, too.

Because now that you're going through your math-space in your spaceship of imagination, just like Carl Sagan through his *Cosmos*, it becomes totally natural to start composing these very simple functions together to get more complex ones, and you build what you're really after from these very simple building blocks.

And it was Church, the inventor of the lambda calculus, or maybe his English buddy, Turing, who invented the U-combinator, a.k.a. the 'universal combinator' (also called the Turing-combinator in his honor), who showed that *all* you needed was the lambda (λ), the variables, and 'application' (the gluing of them together), and you got out of that ... everything.

And by everything, I mean, everything: numbers, counting, addition, and all math operations after that, and then, from number (again), knowledge-representation.

λ z -> z (zero)

λ succ . λ z -> succ z (one) (and counting, too)

You just named the things, like 'z' and 'succ,' but you could name them anything, and they would ... *'mean'* the same thing.

This naming to make numbers came to earn its *own* name: it was called 'Church numerals,' after its inventor.

And the Church numerals *actually* mean the numbers they name, because

λ succ . λ z -> succ succ succ succ z

is the Church numeral 'four,' but if you put in a chicken for 'succ' you have:

λ succ[succ/chicken] . λ z -> succ succ succ succ z

or

chicken chicken chicken chicken z

And there you go: your four chickens. Throw those puppies on the grill and you've got yourself a nice tailgate party for you and your friends to go with 'z,' the keg.

(What else would 'z' stand for, I ask you.)

(And the keg would *definitely* be 'zeroed' out, I tell you what.)
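The chickens-and-keg reading can be run directly. Here is a Haskell sketch of Church numerals (the names `four`, `toInt`, and `chickens` are mine, purely for illustration):

```haskell
-- A Church numeral applies a 'successor' s to a 'zero' z some number of times.
type Church a = (a -> a) -> a -> a

zero, one, four :: Church a
zero = \_ z -> z
one  = \s z -> s z
four = \s z -> s (s (s (s z)))

-- Instantiating s as (+ 1) and z as 0 recovers the ordinary number ...
toInt :: Church Int -> Int
toInt n = n (+ 1) 0

-- ... while putting in a chicken for succ and the keg for z
-- gives the post's four chickens.
chickens :: String
chickens = four ("chicken " ++) "keg"

main :: IO ()
main = do
  print (toInt four)   -- 4
  putStrLn chickens    -- chicken chicken chicken chicken keg
```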

The *really* neat thing?

There's a one-to-one correspondence between combinatory logic and the λ-calculus.

For if you have the λ-term K such that

λ K . λ x . λ y -> x (dropping the 'y' term)

And the λ-term S such that

λ S . λ x . λ y . λ z -> xz(yz) (distributing application over the forms)

Then combining the λ-terms K and S in the following way:

SKK

gets you λ x -> x

Which is the identity function, or the I-combinator, just like combining the combinators S and K in the exact same way gets you the same result.
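Because Haskell terms are λ-terms, K and S can be written directly, and the type-checker happily confirms that SKK behaves as the identity (a sketch using the standard definitions K = λx.λy.x and S = λx.λy.λz. x z (y z); lowercase names since these aren't the post's notation):

```haskell
-- K drops its second argument.
k :: a -> b -> a
k x _ = x

-- S distributes an argument over an application.
s :: (a -> b -> c) -> (a -> b) -> a -> c
s x y z = x z (y z)

-- s k k x reduces to k x (k x), which reduces to x: the identity.
i :: a -> a
i = s k k

main :: IO ()
main = print (i 42, i "id")
```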

Surprising. Shocking.

*Really neat!*

These two things, the λ-calculus and combinatory logic, were developed around the same time: the lambda calculus to provide proper ordering of variables, and then to demonstrate that *all* you needed were the variables, and nothing else, not numbers, nor addition, nor anything else; just λ-abstraction and function application got you to numbers and everything else.

Well, combinatory logic argued that all you needed were the *functions.* It had no variables whatsoever.

Two opposite mathematical languages, but they ended up meaning the exact same thing!

*Fancy that!*

So, if you proved one thing in one of the languages, because it was easier to express the forms in that way, then the exact same proof held in the corresponding (and opposite) mathematical language.

The other really neat thing is that once you've captured formulae in these building blocks, you can even express logic in these formulae, so not only do you have:

λ x . λ y . λ z -> x + 2y - 3z

in the λ-calculus, but you also begin to build logic statements of truth, like:

For every x in the above formula, x is a number.

It was later found out that these statements of truth are simple propositions that *type* the λ-terms.

Ooh!

*Typed* λ-calculus, and on top of that, using the λ-calculus, itself, to type ... well, *itself!*

*AND THEN!*

And then came this dude, Per Martin-Löf (Swedish professors call each other dude, but collegially. That's a well-known fact.), and he said, 'Yo! Homies!' and then lifted the type system to predicate calculus, where a type *depended on* (or was *predicated on*) other types or the instances of the type itself.

So, he would write about the chickens (that came after the eggs, as you recall), that:

For every succ in the below formula, *given that* every succ is a chicken, then for every z below, the z *must be* a keg.

λ succ . λ z -> succ succ succ succ z

The very *type* of z *depends upon* the instances ('inhabitants') of the type of succ; knowing one gives you the other, *dependent* on the foreknowledge.

*Cool!*

So 'cool,' in fact, that you could actually program with *just* the types and not have any instances or spelled-out formulae at all, and, guess what? People have actually done that: created programs that only had type descriptors, and the program somehow magically figured out what it was supposed to do from those type descriptions.

How did it figure this out? The types themselves *only* allowed one way for the program to flow from its start to the end.

Magic! A program with no code actually doing something!

(Cue spooky

*Twilight Zone*thematic music.)

And that brings us right back to 'meaning' ... (and my quotes around 'meaning').

(Question: when you're indicating that you're quoting a word, such as the word 'meaning' ... is that quoted word quoted (again) because you're quoting that you're quoting that word?)

(Just wondering.)

What do 'succ' and 'z' and 'x' and 'y' and the *other* 'z' and ... everything else ... mean?

I could save that for my entry 'M' is for Meaning, but I'm not going to write that entry, because 'M,' of course, is for 'Burrito' (a.k.a. 'Monad,' but we'll get to that later).

So I'll tell you now:

Nothing.

None of those symbols mean anything at all in mathematics. Succ is an abbreviation for 'successor function,' which means 'add one to a number,' but, for example:

λ succ . λ z -> succ succ succ succ z

could mean, just as well, 'apply the function named "succ" cumulatively to some function named "z"'

Because that's what is actually happening there, in the lambda calculus. The lambda calculus doesn't 'care' what 'succ' or 'z' 'mean.' They are just functional variables to be applied to each other; the above function could just as easily have been written with the symbols 'x' and 'y':

λ x . λ y -> x x x x y

Same functional application, same effect. No meaning.

Just like 'succ' doesn't 'mean' successor function, nor does it mean 'chicken.'

In mathematics, it is vitally important that the symbols mean nothing, and by nothing, I mean absolutely nothing. All that matters is the mechanics: there are symbols and how you move them around in your system, and that's it.

Why is that so vitally important?

Because if the symbols mean nothing, then they can represent anything, in which case math truly becomes the universal language: you encode your knowledge base as symbols ('O' is for ontology), you manipulate those symbols following the rules of manipulation you established *a priori,* and you come up with a result. Many times, it's a surprising result, so surprising that you get people telling you, to your face, 'that can't be done.'

Then you translate those symbols back to what they were used to represent, or you apply your knowledge to those symbols *a posteriori* to the symbolic manipulations, and you do those things that you were told are impossible for you, or anyone else, to do.

But you just did it, despite the fact that 'it can't be done.'

The non-mathematician then has the opportunity to do something they've never done in their lives, not at home, not at school, not in college, and especially not in their jobs:

They have the opportunity to learn something.

About the world, yes, but the world is as how they see it, so they have the opportunity to do something even better. They have the opportunity to learn something about themselves.

Math is hard.

Why?

Because, to 'get' math, you have to 'get' yourself, or, as is most often the case: you have to 'get' over yourself and your preconceived notions of what the world is and how the world works. Math will fail you as hard as you're failing, and, when you transcend yourself, it will succeed in places you never could before and let you see things you've never seen, ... if you are brave enough to look.

The λ-calculus, by the way, is the basis for every electronic device that exists today, and for every programming language used today. A programming language is judged 'Turing-complete,' and it must be that to solve anything more than trivial problems (though, unfortunately, most programming tasks are not even trivial anymore: they're just simply data storage and retrieval and *'ooh! look at the pretty colors on this web-page!' 'Can I get it in corn-flower blue?' 'Why, yes! Yes, you can!' 'Good, good!'*). Turing based his 'machine,' which was a conceptual model of a universal computational machine, on an equivalent to Church's λ-calculus (Turing and Church were two dudes who were buds).

Church developed this very simple calculus to address a very small concern. It's simple math.

And it changed the world into the one we now live in.

'L' is for this world we now live in. Thanks to a little bit of mathematics.

Oh, and p.s. ... HA! I finally wrote a complete post. Yay!

### Haskell meetup Apr. 17 in Washington DC area: Applications and Applicatives

### I'm teaching a (student-run) course on Haskell next semester. Please critique my syllabus!

I'm a student at UC Berkeley, and I'm planning on teaching a Haskell course next year through the DeCal program for student-led classes.

The goal of the course is to get students interested in and proficient with Haskell. It would be mostly practical, but hint at the more theoretical side of things for interested students.

My syllabus so far is here (PDF). Any feedback would be appreciated. Thanks for your help!

EDIT:

Thanks for the feedback, guys. It seems that I'm trying to go way too fast.

submitted by knrafto

### Any way to find the list of files installed with `sudo cabal install xmonad --global`? (Ubuntu)

I accidentally installed xmonad globally and want to remove it, but I know there's no uninstall command, so I have to do it manually. Is there a command that can list the files it installed?

Update: I figured it out by rebuilding XMonad to /opt/haskell/xmonad:

- `ghc-pkg unregister xmonad`
- `sudo cabal install xmonad --prefix=/opt/haskell/xmonad/0.11 --reinstall`
- Look at the file structure in /opt/haskell/xmonad/0.11, then manually rm the corresponding files and folders in /usr/local
- If you're on Debian or a derivative, use the Debian Alternatives tool to add your /opt/haskell/xmonad as a system installation.


### Haskell on language-learning platforms

I've been looking for something like Duolingo for programming. Codecademy seems to be a similar approach, but I don't see a way for volunteers to contribute courses.

It seems like Memrise does this, but there are no courses yet for Haskell. Would anyone want to start working on one?

P.S. Shoutout to http://exercism.io/

submitted by kxra

### A question on numeric type literals/dependent types

I'm trying to create a tree that can be stored in a database and lazily retrieved, node by node. What I want to do is use Haskell's type system to ensure that the tree is always balanced, so a sketch of what I'm trying to do would be this:

data Tree depth a = Branch (Tree (depth-1) a) (Tree (depth-1) a) | Leaf 0 a

So, that's obviously not legal syntax, and I haven't been able to figure out if there is some legal syntax for what I'm trying to do. Some other functions would be append:

append :: Tree depth a -> Tree depth a -> Tree (depth+1) a

which does the obvious, and then the really nasty one (that I think Haskell just can't express) is to load a node from the database (I made the "a" concrete here because I really don't care about that):

load :: DBConn -> SomeID -> IO (Tree depth Int)

I think that it's possible to define a data structure that does what my broken example would do (I've read through the tutorials on type-level Peano numbers, and I've tried to understand the new TypeLits stuff but failed), and I would think that the pure function would also be possible to write. The load function might require dependent types, which Haskell doesn't have, right?

What I'm really hoping here is that I'm wrong about the nature of that load function, and that some clever person can show me some trick that Haskell has to look at external data and create some sort of quasi-dynamic type at runtime, and then to somehow make use of that.
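For what it's worth, here is one possible sketch using type-level Peano numbers with DataKinds and GADTs; the names (`Nat`, `Tree`, `SomeTree`) are illustrative, not from any library, and this is an outline rather than a full answer. The existential wrapper at the end is the usual trick for a depth that is only discovered at runtime, as in the load function:

```haskell
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE GADTs #-}
{-# LANGUAGE KindSignatures #-}

-- Type-level Peano naturals to track depth.
data Nat = Z | S Nat

-- A tree indexed by its depth: both children of a Branch must have
-- the same depth index, so every tree is balanced by construction.
data Tree (d :: Nat) a where
  Leaf   :: a -> Tree 'Z a
  Branch :: Tree d a -> Tree d a -> Tree ('S d) a

-- The append from the question: only same-depth trees may be joined.
append :: Tree d a -> Tree d a -> Tree ('S d) a
append = Branch

-- For loading from a database, the depth isn't known statically, so it
-- must be hidden behind an existential and recovered by pattern matching.
data SomeTree a where
  SomeTree :: Tree d a -> SomeTree a

size :: Tree d a -> Int
size (Leaf _)     = 1
size (Branch l r) = size l + size r

main :: IO ()
main = print (size (append (Leaf 'x') (Leaf 'y')))
```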

submitted by tsuraan

### 24 Days of Hackage 2012

### Learning Haskell & Cryptography

Hi Everyone,

I'm learning Haskell at the moment but I'm tired of just learning syntax and algorithms in isolation. I need a project if I'm to really absorb the language. I'm also trying to learn cryptography.

Due to work and life I don't have that much free time, so I thought I might try to combine both and do something with cryptography in Haskell. I've been doing the usual Google searches for interesting APIs in Haskell, etc., but I was wondering if anyone could point me towards cryptography tutorials, books, articles, etc. that use Haskell.

Also, if what I'm trying to do, i.e. jamming both side projects into one due to time constraints, is a bad idea, please let me know, as I don't really have the frame of reference to know otherwise.

submitted by bantersnatch

### Rascal

### ansi-terminal

### FP Complete: The heartbleed bug and FP Haskell Center

If you haven't heard about the heartbleed bug and related security issues, this article provides a good explanation.

Applications developed in FPHC and deployed using the FP Application servers are **not** vulnerable to this bug.

Services we use from Amazon and GitHub were affected. SSL connections to our servers go through Amazon software, and we use SSL to connect to GitHub repositories. Both Amazon and GitHub have already updated their services to remove the vulnerability. FP Complete has followed GitHub's suggestions by changing our passwords and OAuth tokens. You can read those guidelines at github.

While we have no evidence that any sensitive information was leaked from FPHC, we recommend changing your password for FPHC as soon as possible, just in case.

Other measures to increase security never hurt. Things like using two-factor authentication on sites for which it is available, and using password locker software that will generate strong passwords unique to each site, will help prevent people breaking into your accounts. This event provides a good time to consider adding these extra security measures if you aren't already using them.

### Google Code Jam 2014

Since I rarely find the enthusiasm to practice coding, I'm using this year's Code Jam as an excuse.

I'd love it if I could get some feedback on my code (GitHub), e.g. whether I'm following a good train of thought.

Is anyone else participating?

submitted by MdxBhmt

### Danny Gratzer: Continuations and Exceptions

Continuations are useful things. They provide a nice way to manually handle control flow. This fact makes them very useful for compilers, an often used alternative to SSA is an intermediate language in which every function call is a tail-call and every expression has been converted to continuation passing style.

Often, however, this isn't enough. In a language with exceptions, we don't just have a single continuation, since every expression can do one of two things:

- Continue the rest of the program normally
- Throw an exception and run an alternative program, the exception handler

To represent this, we can imagine having two continuations. Instead of

newtype Cont r a = Cont {runCont :: (a -> r) -> r}

we have

{-# LANGUAGE DeriveFunctor #-}

import Control.Monad

newtype Throws r e a = Throws {runThrows :: (e -> r) -> (a -> r) -> r}
  deriving (Functor)

Now we have two continuations, where e -> r represents the composition of exception handlers.

We can write a trivial Monad instance similar to Cont:

instance Monad (Throws r e) where
  return a = Throws $ \ex cont -> cont a
  (Throws c) >>= f = Throws $ \ex cont ->
    c ex $ \a -> runThrows (f a) ex cont

So >>= maintains the exception handler between computations and otherwise acts exactly like Cont.

To actually take advantage of our exception handlers, we need two things: a throw- and catch-like pair of functions. Let's start with throw, since it's easiest.

throw :: e -> Throws r e a
throw e = Throws $ \ex cont -> ex e

This is pretty straightforward: when we're given an exception e to throw, we simply feed it to our exception-handler continuation. Since we don't care what value cont needs, we can universally quantify over a.

Next up is handle. We'll represent an exception handler as a function e -> Maybe a. If we return Nothing, we can't handle the exception at this level, and we'll just pass it to the existing exception handler.

So our handle is

handle :: Throws r e a -> (e -> Maybe a) -> Throws r e a
handle (Throws rest) handler = Throws $ \ex cont ->
  rest (\e -> maybe (ex e) cont (handler e)) cont

Notice the clever bit here: each handler actually contains both the success and failure continuations! If we can't handle the exception, we fail; otherwise we can resume exactly where we were before.

No post would be complete without a demonstration!

data Ex = Boom | Kaboom | Splat String deriving Show

throwEx1 = throw Boom
throwEx2 = throw Kaboom
throwEx3 = throw (Splat "splat")

test = do
  result <- handle throwEx1 $ \e -> case e of
    Boom -> Just "foo"
    _    -> Nothing
  result2 <- handle throwEx2 $ \e -> case e of
    Boom   -> Just "what"
    Kaboom -> Just "huh?"
    _      -> Nothing
  result3 <- handle throwEx3 $ \e -> case e of
    Splat s -> Just s
    _       -> Nothing
  return (unwords [result, result2, result3])

We can run this with

runThrows test (error . ("Toplevel fail " ++)) id

which returns

"foo huh? splat"So our exceptions do in fact, work :) ## A Note on Either Now we already have a perfectly good system of monadic exception like thing in the form of Either.

It might be interesting to note that what we've written is in fact isomorphic to Either: (e -> r) -> (a -> r) -> r is just the Church representation of Either e a.

We can even go all the way and change Throws to

newtype Throws e a = Throws {runThrows :: forall r. (e -> r) -> (a -> r) -> r}

So there you are: an interesting realization that one of the classic representations of exceptions in a language like SML is in fact a really complicated version of Either :)
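To make the isomorphism concrete, here is a sketch of the two directions (the function names `toEither` and `fromEither` are mine, not the post's): Either's eliminator has exactly the shape of runThrows, and vice versa.

```haskell
{-# LANGUAGE RankNTypes #-}

-- The Church-encoded form, with r universally quantified.
newtype Throws e a =
  Throws { runThrows :: forall r. (e -> r) -> (a -> r) -> r }

-- One direction: hand the two continuations Left and Right.
toEither :: Throws e a -> Either e a
toEither (Throws f) = f Left Right

-- The other direction: 'either' is exactly the eliminator we need.
fromEither :: Either e a -> Throws e a
fromEither e = Throws $ \ex cont -> either ex cont e

main :: IO ()
main = do
  print (toEither (fromEither (Right 42 :: Either String Int)))
  print (toEither (fromEither (Left "boom" :: Either String Int)))
```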
