# News aggregator

### Silk Engineering Blog: Writing admin interfaces with Fay using fay-builder

### ANN: CLaSH - From Haskell to Hardware

### Silk: Writing admin interfaces with Fay using fay-builder

We are in the process of open sourcing a lot of the general purpose libraries we’ve built at Silk under the BSD3 license. We’re planning to write a series of blog posts introducing them so please stay tuned!

First off is one of the smaller packages, fay-builder (github).

Fay is both a proper subset of Haskell and a compiler that compiles it to JavaScript. I’ve been helping out with the project since its first release in July 2012.

We have several Haskell backend services with admin interfaces written in HTML and JavaScript that communicate with the running server over REST APIs. It seemed like a good idea to write these admin interfaces in Haskell too, and we have been experimenting with using Fay for this. Since Fay supports Cabal packages we can use our usual tools and upload these packages to the public Hackage or our private instance.

fay-builder lets you store the options needed to compile Fay code alongside an executable. This is done with custom Cabal fields that can be read either by Setup.hs (but see the caveats below) or from within the application at startup, on request, or at some other point.

If you want to try it out you can install it by running `cabal install fay-builder`.

This post demonstrates how we build Haskell code for our admin interfaces. Join our team if you enjoy this stuff too, we’re hiring!

#### The Cabal file

Here’s how you can define a Cabal file that uses fay-builder for a project:

```
name:    my-program
version: 0.1
```

Use a custom build type:

```
build-type: Custom
```

extra-source-files specifies files that should be included with a source distribution. The Fay files are added here so they don’t get lost when running cabal sdist.

```
extra-source-files: fay/*.hs
```

data-files are also included, with the addition that Cabal will generate a paths module that you can use to fetch the resources at runtime. We probably want to precompile the Fay files so the program can just serve them.

```
data-files: my-program.cabal, js/*.js
```

x-foo fields specify custom options that Cabal itself ignores, but that we can use for extra configuration.

```
x-fay-packages:      fay-jquery, fay-text
x-fay-root-modules:  Index, Misc
x-fay-include-paths: src
x-fay-output-dir:    data/js
x-fay-source-dir:    fay

executable my-program
  main-is:          Main.hs
  hs-source-dirs:   src
  default-language: Haskell2010
  ghc-options:      -Wall
```

This is the Haskell module Cabal generates to let us access data-files:

```
  other-modules: Paths_my_program
```

To make sure we can build the Fay code we add all of its dependencies here, along with the GHC library dependencies.

One thing to note is that we can’t depend on base and fay-base at the same time since they have clashing module names. But if we depend on some other Fay package we get a transitive dependency on fay-base, so we know it will be available.

```
  build-depends: base, Cabal, fay, fay-builder, fay-jquery, fay-text
```

We have a few options for when to build the Fay code.

#### Building as a Cabal hook

With a custom Setup.hs we can add a postBuildHook that compiles the Fay code after the executable has been compiled, but before an sdist is created. This is nice because it generates the needed .js files before the tarball is created. The disadvantage is that Setup.hs cannot have explicit dependencies, so you won’t be able to sdist your program unless you already have all dependencies installed.

It is possible to do this, and fay-builder includes a defaultFayHook that you can use in Setup.hs like this:

```haskell
import Fay.Builder (defaultFayHook)

main :: IO ()
main = defaultFayHook
```

It’s pretty cool, but maybe not very useful because of the dependency issue. Just make sure to have fay-builder installed before running cabal configure.

#### Building inside the program

For one application I started out with the above Setup.hs approach, but soon realized it would not work out, since running sdist wasn’t possible in a clean package DB. I instead chose to add a --compile-fay flag that compiles all Fay sources when the process starts during development. The live setup doesn’t do this and instead reads the data-files.

The trick here is that we added the .cabal file as a data-file to let us access it from other modules.

```haskell
import qualified Paths_my_program as Paths
import qualified Fay.Builder      as Builder

compileFay :: IO ()
compileFay = do
  pkgDb       <- getPackageDb
  packageDesc <- Paths.getDataFileName "my-program.cabal"
                   >>= Builder.readPackageDescription
  Builder.build packageDesc pkgDb
  where
    -- Some clever way to fetch the current package DB
    -- if you are using a sandbox.
    getPackageDb :: IO (Maybe FilePath)
```

If we had added the settings elsewhere, such as in a custom configurator file like snaplet-fay does, we wouldn’t be able to read them from within Setup.hs, so the cabal file might be the only place to put this configuration if we want both features at once.
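The post doesn’t show the startup wiring for the --compile-fay flag, but a minimal sketch of the idea might look like this (the helper name `shouldCompileFay` and the exact messages are my own illustration, not Silk’s code):

```haskell
import System.Environment (getArgs)

-- Hypothetical wiring for the --compile-fay flag described above:
-- compile Fay sources at startup during development, otherwise
-- just serve the precompiled data-files.
shouldCompileFay :: [String] -> Bool
shouldCompileFay = elem "--compile-fay"

main :: IO ()
main = do
  args <- getArgs
  if shouldCompileFay args
    then putStrLn "development mode: compiling Fay sources"   -- would call compileFay here
    else putStrLn "production mode: serving precompiled data-files"
```

The real application would call the compileFay action above in the development branch.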

One of the biggest advantages in using fay-builder is that all dependencies can be listed in one cabal file instead of having to split the client and server code into multiple packages (which would turn into three different packages if there’s shared code).

### Deadline approaching: ICFP workshop on Functional Art, Music, Modelling and Design

### Douglas M. Auclair (geophf): 'L' is for the Lambda-calculus

So, nearly halfway through the alphabet. How are we all doing, are we going to be able to go from 'A' to 'Z' in one month?

*Tune in tomorrow to see the next compelling and exciting installment of the #atozchallenge!*

So, this is the big one: a way of looking at mathematics that has been seething under the surface, and that, with the λ-calculus (pronounced *'lambda calculus'*), has now bubbled up to the surface.

So, we've seen that the formulae, or the functions, are the things that concern mathematics, and not so much the objects operated on, and that turned world-thought on its head. Well, the λ-calculus came to be to address a very particular concern that mathematicians worry about (mathematicians are such worriers. Except me, I'm carefree as a hummingbird, flitting among rose bushes and lemon trees, gathering nectar to keep my heart rate at three-thousand beats per minute.) (no, ... wait.), and that concern is this: in the function

f x y z = x + 2y - 3z

How do we *know* x is x, y is y, and z is z?

That is, what if I juxtapose y and z, mixing the two up, as it were? Is there something, anything, that enforces the ordering of the arguments to a function?

In set theory, you see, the set {1, 2, 3} is equivalent to the set {3, 1, 2}, or any permutation: two sets are the same if they have the same members, so there's no ordering imposed.

Now, you could enforce an ordering, by adding another layer of sets:

{{1, 1}, {2, 2}, {3, 3}}

And this ... *'means'* (I say 'means' quite loosely, you see) that "the *first* number is one, the *second* number is two, and the *third* number is three."

But it doesn't at all, because, as sets are order-independent, how can we guarantee the ordering argument is the first one in the set? We can't. So we add another layering of sets:

{{{1}, 27}, {42, {3}}, {{2}, 21}}

Which says: "The first argument is 27, the second argument is 21, and the third argument is 42."

You see what I did there? I mixed up the ordering of the sets and even their ordering arguments, but since the order is a subset of the subset, you're still able to get that information to order the sets as you read through them.
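For the record, the standard set-theoretic repair for exactly this problem (not the author's layered sets above, but the same subset trick) is the Kuratowski ordered pair:

```latex
(a, b) := \{\{a\},\ \{a, b\}\}
\qquad\text{so that}\qquad
(a, b) = (c, d) \iff a = c \text{ and } b = d
```

A triple is then just a nested pair, $(x, y, z) := (x, (y, z))$, which is also how the λ-calculus below recovers many-argument functions from one-argument ones.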

This gave mathematicians headaches, because mathematicians are wont to get headaches about things, because the above set is *not* the set {27, 21, 42} (or any of its permutations), but something quite different, and the function

f x y z

is *not* the function

f {{z, {3}}, {{1}, x}, {y, {2}}}

(or any of its permutations), because the first one, f x y z, has three arguments, but f {...whatever...} has one argument, which is a set of ordered arguments.

Ugh. Try working with that!

Some people have enough problems working with f x y z, already!

So, how to solve this?

Two of the most popular ways that mathematicians solved this problem are:

1. get rid of variables altogether. And numbers, too. And all symbols, except some very primitive functions which we'll predefine for you, so instead of the above function

f x y z = x + 2y - 3z

we have something looking like this:

BBBVKI xxxx

Okay, you can reason about that, it comes as its own Gödel numeral, so you can have fun with it that way, but ...

But that's hard for a lot of people to swallow or to know what you're talking about, or to have a conversation about the veracity of the statement you've constructed.

The other approach is this: *partial functions.*

The function f x y z can actually be represented as three functions

Some function f that takes a variable x given the result of the partial functions of y and z, or, in 'math words':

f x = something + x

where 'something' is

g y = 2y + something else

and 'something else' is

h z = - 3 z.

See?

So when we string these functions together to get our total answer (instead of three partial answers), we have

f x . g y . h z = x + 2y - 3z

Which gets us our original equation back, *and* guarantees that our f x is getting x, because it's *not* getting anything else; our g y and h z are getting their y and z, respectively, too, for the same reason. Once you reduce a function down to one of only one argument, then you know the argument you're getting is the right one for that function.
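In Haskell this idea is built in as currying: a "three-argument" function is really a chain of one-argument functions, so each function only ever receives the one argument meant for it. A quick sketch of the running example (with the implicit multiplications made explicit):

```haskell
-- The running example f x y z = x + 2y - 3z, as a curried function:
-- its type is really Int -> (Int -> (Int -> Int)).
f :: Int -> Int -> Int -> Int
f x y z = x + 2 * y - 3 * z

-- Supplying arguments one at a time: each partial application
-- is itself an ordinary one-argument function.
partiallyApplied :: Int -> Int
partiallyApplied = f 1 2   -- x = 1 and y = 2 are fixed; z is still awaited

main :: IO ()
main = print (partiallyApplied 3)   -- 1 + 2*2 - 3*3 = -4
```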

Problem solved.

Class...

is...

dismissed!

(And all the students stand up an applaud at the end of the lecture.)

That solves that problem, neatly, too.

But then it magically solved a bunch of other problems, too.

Because now that you're going through your math-space in your spaceship of imagination, just like Carl Sagan through his *Cosmos,* it becomes totally natural to start composing these very simple functions together to get more complex ones, and you build what you're really after from these very simple building blocks.

And it was Church, the inventor of the lambda calculus, or maybe his English buddy, Turing, who invented the U-combinator, aka the 'universal combinator,' also called the Turing-combinator in his honor, who showed that *all* you needed was the lambda (λ), the variables, and 'application' (the gluing of them together), and you got out of that ... everything.

And by everything, I mean, everything: numbers, counting, addition, and all math operations after that, and then, from number (again), knowledge-representation.

λ z -> z (zero)

λ succ . λ z -> succ z (one) (and counting, too)

You just named the things, like 'z' and 'succ,' but you could name them anything, and they would ... *'mean'* the same thing.

This naming to make numbers came to earn its *own* name: it was called 'Church numerals,' after its inventor.

And the Church numerals *actually* mean the numbers they name, because

λ succ . λ z -> succ succ succ succ z

Is the Church numeral 'four,' but if you put in a chicken for 'succ' you have:

λ succ[succ/chicken] . λ z -> succ succ succ succ z

or

chicken chicken chicken chicken z

And there you go: your four chickens. Throw those puppies on the grille and you've got yourself a nice tailgate party for you and your friends to go with 'z,' the keg.

(What else would 'z' stand for, I ask you.)

(And the keg would *definitely* be 'zeroed' out, I tell you what.)

The *really* neat thing?

There's a one-to-one correspondence between combinatory logic and the λ-calculus.

For if you have the λ-term K such that

λ K . λ x . λ y -> x (dropping the 'y' term)

And the λ-term S such that

λ S . λ x . λ y . λ z -> xz(yz) (distributing application over the forms)

Then combining the λ-terms K and S in the following way:

SKK

gets you λ x -> x

Which is the identity function, or the I-combinator, just like combining the combinators S and K in the exact same way gets you the same result.
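The K and S combinators, and the fact that S K K behaves as the identity, can be checked directly in Haskell (a sketch using the standard definitions; the λ-notation above is the author's informal spelling of the same thing):

```haskell
-- K keeps its first argument and drops the second.
k :: a -> b -> a
k x _ = x

-- S distributes its third argument over the first two:
-- s f g x = f x (g x).
s :: (a -> b -> c) -> (a -> b) -> a -> c
s f g x = f x (g x)

-- S K K reduces to the identity: s k k x = k x (k x) = x.
i :: a -> a
i = s k k

main :: IO ()
main = print (i 42)   -- 42
```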

Surprising. Shocking.

*Really neat!*

These two things, the λ-calculus and combinatory logic, were developed around the same time: the lambda calculus to provide proper ordering of variables, and then to demonstrate that *all* you needed were the variables and nothing else, not numbers, nor addition, nor anything else; just λ-abstraction and function application got you to numbers and everything else.

Well, combinatory logic argued that all you needed were the *functions.* It had no variables whatsoever.

Two opposite mathematical languages, but they ended up meaning the exact same thing!

*Fancy that!*

So, if you proved one thing in one of the languages, because it was easier to express the forms in that way, then the exact same proof held in the corresponding (and opposite) mathematical language.

The other really neat thing is that once you've captured formulae in these building blocks, you can even express logic in those formulae, so not only do you have:

λ x . λ y . λ z -> x + 2y - 3z

in the λ-calculus, but you also begin to build logic statements of truth, like:

For every x in the above formula, x is a number.

It was later found out that these statements of truth are simple propositions that *type* the λ-terms.

Ooh!

*Typed* λ-calculus, and on top of that, using the λ-calculus itself to type ... well, *itself!*

*AND THEN!*

And then came this dude, Per Martin-Löf (Swedish professors call each other dude, but collegially. That's a well-known fact.), and he said, 'Yo! Homies!' and then lifted the type-system to predicate calculus, where a type *depended* on (or was *predicated* on) other types or the instances of the type itself.

So, he would write about the chickens (that came after the eggs, as you recall), that:

For every succ in the below formula, *given that* every succ is a chicken, then for every z below, the z *must be* a keg.

λ succ . λ z -> succ succ succ succ z

The very *type* of z *depends upon* the instances ('inhabitants') of the type of succ; knowing one gives you the other, *dependent* on the foreknowledge.

*Cool!*

So 'cool,' in fact, that you could actually program with *just* the types and not have any instances or spelled-out formulae at all, and, guess what? People have actually done that: created programs that only had type descriptors, and the program somehow magically figured out what it was supposed to do from those type descriptions.

How did it figure this out? The types themselves *only* allowed one way for the program to flow from its start to the end.

Magic! A program with no code actually doing something!

(Cue spooky *Twilight Zone* thematic music.)

And that brings us right back to 'meaning' ... (and my quotes around 'meaning').

(Question: when you're indicating that you're quoting a word, such as the word 'meaning' ... is that quoted word quoted (again) because you're quoting that you're quoting that word?)

(Just wondering.)

What does 'succ' and 'z' and 'x' and 'y' and the *other* 'z' and ... everything else ... mean?

I could save that for my entry 'M' is for Meaning, but I'm not going to write that entry, because 'M,' of course, is for 'Burrito' (a.k.a. 'Monad,' but we'll get to that later).

So I'll tell you now:

Nothing.

None of those symbols mean anything at all in mathematics. Succ is an abbreviation for 'successor function,' which means 'add one to a number,' but, for example:

λ succ . λ z -> succ succ succ succ z

could mean, just as well, 'apply the function named "succ" cumulatively to some function named "z"'

Because that's what is actually happening there, in the lambda calculus. The lambda calculus doesn't 'care' what 'succ' or 'z' 'mean.' They are just functional variables to be applied to each other; the above function could just as easily have been written with the symbols 'x' and 'y':

λ x . λ y -> x x x x y

Same functional application, same effect. No meaning.

Just like 'succ' doesn't 'mean' successor function, nor does it mean 'chicken.'

In mathematics, it is vitally important that the symbols mean nothing, and by nothing, I mean absolutely nothing. All that matters is the mechanics: there are symbols and how you move them around in your system, and that's it.

Why is that so vitally important?

Because if the symbols mean nothing, then they can represent anything, in which case math truly becomes the universal language: you encode your knowledge base as symbols ('O' is for ontology), you manipulate those symbols following the rules of manipulation you established *a priori,* and you come up with a result. Many times, it's a surprising result, so surprising that you get people telling you, to your face, 'that can't be done.'

Then you translate those symbols back to what they were used to represent, or you apply your knowledge to those symbols *a posteriori* to the symbolic manipulations, and you do those things that you were told are impossible for you, or anyone else, to do.

But you just did it, despite the fact that 'it can't be done.'

The non-mathematician then has the opportunity to do something they've never done in their lives, not at home, not at school, not in college, and especially not in their jobs:

They have the opportunity to learn something.

About the world, yes, but the world is as how they see it, so they have the opportunity to do something even better. They have the opportunity to learn something about themselves.

Math is hard.

Why?

Because, to 'get' math, you have to 'get' yourself, or, as is most often the case: you have to 'get' over yourself and your preconceived notions of what the world is and how the world works. Math will fail you as hard as you're failing, and, when you transcend yourself, it will succeed in places you never could before and let you see things you've never seen, ... if you are brave enough to look.

The λ-calculus, by the way, is the basis for every electronic device that exists today, and for every programming language used today. A programming language is judged as 'Turing-complete,' and it must be that to solve anything more than trivial problems (which, unfortunately, most programming tasks are not even, anymore: they're just simply data storage and retrieval and *'ooh! look at the pretty colors on this web-page!' 'Can I get it in corn-flower blue?' 'Why, yes! Yes, you can!' 'Good, good!'*). Turing based his 'machine,' which was a conceptual model of a universal computational machine, on an equivalent to Church's λ-calculus. (Turing and Church were two dudes who were buds.)

Church developed this very simple calculus to address a very small concern. It's simple math.

And it changed the world into the one we now live in.

'L' is for this world we now live in. Thanks to a little bit of mathematics.

Oh, and p.s. ... HA! I finally wrote a complete post. Yay!

### Haskell meetup Apr. 17 in Washington DC area: Applications and Applicatives

### I'm teaching a (student-run) course on Haskell next semester. Please critique my syllabus!

I'm a student at UC Berkeley, and I'm planning on teaching a Haskell course next year through the DeCal program for student-led classes.

The goal of the course is to get students interested in and proficient with Haskell. It would be mostly practical, but hint at the more theoretical side of things for interested students.

My syllabus so far is here (PDF). Any feedback would be appreciated. Thanks for your help!

EDIT:

Thanks for the feedback, guys. It seems that I'm trying to go way too fast.

submitted by knrafto

### Any way to find list of files installed with `sudo cabal install xmonad --global`? (ubuntu)

I accidentally installed XMonad globally and want to remove it, but I know there's no uninstall command, so I have to do it manually. Is there a command that can list the files it installed?

Update: I figured it out by rebuilding XMonad to /opt/haskell/xmonad:

- ghc-pkg unregister xmonad
- sudo cabal install xmonad --prefix=/opt/haskell/xmonad/0.11 --reinstall
- Look at the file structure in /opt/haskell/xmonad/0.11, then manually rm the corresponding files and folders in /usr/local
- If you're on Debian or a derivative, use the Debian Alternatives tool to add your /opt/haskell/xmonad as a system installation.


### Haskell on language-learning platforms

I've been looking for something like Duolingo for programming; Codecademy seems to be a similar approach, but I don't see a way for volunteers to contribute courses.

It seems like Memrise does this, but there are no courses yet for Haskell. Would anyone want to start working on one?

P.S. Shoutout to http://exercism.io/

submitted by kxra

### A question on numeric type literals/dependent types

I'm trying to create a tree that can be stored in a database and lazily retrieved, node by node. What I want to do is to use Haskell's type system to ensure that the tree is always balanced, so a sketch of what I'm trying to do would be this:

```haskell
data Tree depth a = Branch (Tree (depth-1) a) (Tree (depth-1) a)
                  | Leaf 0 a
```

So, that's obviously not legal syntax, and I haven't been able to figure out if there is some legal syntax for what I'm trying to do. Some other functions would be append:

```haskell
append :: Tree depth a -> Tree depth a -> Tree (depth+1) a
```

which does the obvious, and then the really nasty one (that I think Haskell just can't express) is to load a node from the database (I made the "a" concrete here because I really don't care about that):

```haskell
load :: DBConn -> SomeID -> IO (Tree depth Int)
```

I think that it's possible to define a data structure that does what my broken example would do (I've read through the tutorials on type-level Peano numbers, and I've tried to understand the new TypeLits stuff but failed), and I would think that the pure function would also be possible to write. The load function might require dependent types, which Haskell doesn't have, right?

What I'm really hoping here is that I'm wrong about the nature of that load function, and that some clever person can show me some trick that Haskell has to look at external data and create some sort of quasi-dynamic type at runtime, and then to somehow make use of that.
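For what it's worth, the pure part of the sketch can be written today with GADTs and DataKinds: type-level Peano naturals index the tree by depth, so only balanced trees typecheck. This is my own illustration of the technique, not code from the thread:

```haskell
{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

-- Type-level Peano naturals to index a tree by its depth.
data Nat = Z | S Nat

data Tree (d :: Nat) a where
  Leaf   :: a -> Tree 'Z a
  Branch :: Tree d a -> Tree d a -> Tree ('S d) a

-- Both subtrees must share a depth, so every Tree is balanced by
-- construction; joining two equal-depth trees adds one level.
append :: Tree d a -> Tree d a -> Tree ('S d) a
append = Branch

toList :: Tree d a -> [a]
toList (Leaf x)     = [x]
toList (Branch l r) = toList l ++ toList r

example :: Tree ('S ('S 'Z)) Int
example = append (append (Leaf 1) (Leaf 2))
                 (append (Leaf 3) (Leaf 4))

main :: IO ()
main = print (toList example)
```

The IO load function is the hard part, since the depth is only known at runtime; the usual trick is to return an existentially wrapped tree (e.g. `data SomeTree a = forall d. SomeTree (Tree d a)`) and then pattern-match to recover what you can.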

submitted by tsuraan

### 24 Days of Hackage 2012

### Learning Haskell & Cryptography

Hi Everyone,

I'm learning Haskell at the moment but I'm tired of just learning syntax and algorithms in isolation. I need a project if I'm to really absorb the language. I'm also trying to learn cryptography.

Due to work and life I don't have that much free time, so I thought I might try to combine both and do something with cryptography in Haskell. I've been doing the usual Google searches for interesting APIs in Haskell etc., but I was wondering if anyone could point me towards cryptography tutorials, books, articles, etc. that use Haskell.

Also, if what I'm trying to do, i.e. jamming both side projects into one due to time constraints, is a bad idea, please let me know, as I don't really have the frame of reference to know otherwise.

submitted by bantersnatch