News aggregator

foobarcode.me

del.icio.us/haskell - Fri, 01/09/2015 - 5:40pm
Categories: Offsite Blogs

Is there a collection of exercises similar to 1HAD?

Haskell on Reddit - Fri, 01/09/2015 - 11:31am

I like the format of https://github.com/1HaskellADay/1HAD

You need to write one simple function and you can view a solution. Is there something similar?

submitted by dotneter
[link] [7 comments]
Categories: Incoming News

Question about "Type Theory & Functional Programming"

Haskell on Reddit - Thu, 01/08/2015 - 9:20pm

Following an endorsement of type theory in a recent comment thread, I decided to actually learn what it was, and started to read Simon Thompson's Type Theory & Functional Programming (and errata), which is just great. Having never taken any class that used the notation, this is the first time I've really had someone explain the rules of propositional logic, and I think I'm finally starting to wrap my head around the concept of a type as a proposition and an object as a proof. I'm finding it to be a great introduction to the subject.

If this is all old hat to you, then maybe you can help me with a question I've got.

In Chapter 4 on page 76, Thompson says:

Formation Rule for ⊥

    -------------------- (⊥F)
       ⊥ is a formula

and we have no introduction rule associated with ⊥, as we know of no way of forming proofs of the absurd proposition.

And then on page 77 says:

Rule of Assumption

      A is a formula
    ------------------ (AS)
         x : A

But couldn't those be combined to give an introduction rule for ⊥?

    ---------------- (⊥F)
     ⊥ is a formula
    ---------------- (AS)
        x : ⊥

Making ⊥ inhabited?

I'm only four chapters in, so this may be covered later, but it's a bit of a stumbling block for me.

submitted by rampion
[link] [17 comments]
Categories: Incoming News

Haskell / functional programming research in Spanish

Haskell on Reddit - Thu, 01/08/2015 - 7:06pm

I'm trying to practice my Spanish, and reading things related to Haskell seemed like a great way to do it. AFAIK, the vast majority of functional programming research is done in English or weird Eastern European languages :)

Is there an active Spanish-language functional programming community?

Any links to good papers / books / blog posts?

What Spanish universities / professors are working on functional programming?

submitted by PokerPirate
[link] [11 comments]
Categories: Incoming News

Where do all the Hackage package downloads come from?

Haskell on Reddit - Thu, 01/08/2015 - 6:50pm

I have this small, insignificant package on Hackage that was released not even a month ago, which no one knows about, and it has ~77 downloads.

Are there some robots automatically downloading packages?

submitted by Faleidel
[link] [12 comments]
Categories: Incoming News

Is Haskell a good programming language for statistics/econometrics (or general scientific computing stuff)?

Haskell on Reddit - Thu, 01/08/2015 - 6:31pm

Hi,

For my statistics/econometrics needs I use a lot of R and Python/Cython/SciPy/NumPy. But I'm fairly interested in Haskell (I've been using xmonad, a tiling window manager for X, for years now), and it seems to me that Haskell being a purely functional language might actually help with faster development times, since I deal primarily with functions (likelihood functions, mostly).

Do some of you use Haskell for that kind of stuff? Is it possible to call R from Haskell (or vice versa)? I suppose that Haskell is quite fast, being a compiled language, but how's performance with ghci?

submitted by gumbel_distro
[link] [41 comments]
Categories: Incoming News

Danny Gratzer: Why Constructive Logic

Planet Haskell - Thu, 01/08/2015 - 6:00pm
Posted on January 9, 2015 Tags: types

Continuing on my quest of writing about my poorly thought out comments, let’s talk about constructive logic. A lot of people in and around the Haskell/FP community will make statements like

The Curry-Howard isomorphism means that you’re proving things in constructive logic.

Usually absent from these remarks is a nice explanation of why constructive logic matches up with the programming we know and love.

In this post I’d like to highlight what constructive logic is intended to capture and why this corresponds so nicely with programming.

A Bit of History

First things first, let's discuss the actual origin of constructive logic. It starts with a mathematician and philosopher named Brouwer. He was concerned with trying to give an answer to the question "What does it mean to know something to be true?", where that something is a mathematical proposition.

He settled on the idea of proof being a sort of subjective and personal thing. I know something is true if and only if I can formulate some intuitive proof of it. When viewed this way, the proof I scribble down on paper doesn’t actually validate something’s truthfulness. It’s merely a serialization of my thought process for validating its truthfulness.

Notice that this line of reasoning doesn't actually specify a precise definition of what verifying something intuitively means. I interpret this idea as something slightly more meta than any single formal system. Rather, when looking at a formal system, you ought to verify that its axioms are admissible by your own intuition, and then you may go on to accept proofs built off of these axioms.

Now, after Brouwer started talking about these ideas, Arend Heyting decided to try to write down a logic that captured this notion of "proof is intuition". The result was this thing called intuitionistic logic. This logic is part of a broader family of logics called "constructive logics".

Constructive Logic

The core idea of constructive logic is replacing the notion of truth found in classical logic with an intuitionist version. In a classical logic each proposition is either true or false, regardless of what we know about it.

In our new constructive system, a formula cannot be assigned either truth value until we have direct evidence of it. It's not that there's a magical new boolean value, {true, false, i-don't-know}; it's just not a meaningful question to ask. It doesn't make sense in these logics to say "A is true" without having a proof of A. There isn't necessarily this Platonic notion of truthfulness, just things we as logicians can prove. This is why constructive logic is sometimes called "logic for humans".

The consequences of dealing with things in this way boil down to a few things. For example, we now know that

  1. If ∃x. A(x) can be proven, then there is some term t, which we can readily produce, such that A(t) is provable
  2. If A ∨ B can be proven then either A or B is provable and we know which. (note that ∨ is the symbol for OR)

These make sense when you realize that ∃x. A(x) can only be proven if we have a direct example of it. We can't indirectly reason that it really ought to exist or merely claim that it must be true in one of a set of cases. We actually need to introduce it by proving an example of it. When our logic enforces this, of course we can produce that example!

The same goes for A ∨ B: in our logic the only way to prove A ∨ B is to either provide a proof of A or provide a proof of B. If this is the only way to build a ∨, we can always just point to how it was introduced!

If we extend this to and (∧): the only way to prove A ∧ B is to prove both A and B. If this is the only way to get a proof of A ∧ B, then of course we can get a proof of A from A ∧ B. ∧ is just behaving like a pair of proofs.
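To see this in Haskell terms, here's a tiny sketch (whichOne and projA are made-up names for illustration, not from any library):

whichOne :: Either a b -> String
whichOne (Left _)  = "the proof went through A"
whichOne (Right _) = "the proof went through B"

-- A proof of A ∧ B is literally a pair of proofs, so projection is trivial.
projA :: (a, b) -> a
projA = fst

Pattern matching on the Either constructor is exactly the "and we know which" property from the list above.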

All of this points at one thing: our logic is structured so that we can only prove something when we directly prove it. That's the spirit of Brouwer's intuitionism that we're trying to capture.

There are a lot of different incarnations of constructive logic; in fact, pretty much every logic has a constructive cousin. They all share this notion of "we need a direct proof to be true", however. One thing to note is that some constructive logics conflict a bit with intuitionism. While intuitionism might have provided some of the basis for constructive logics, people have gradually poked and pushed the boundaries away from just Brouwer's intuitionism. For example, both Markov's principle and Church's thesis state something about all computable functions. While they may be reasonable statements, we can't give a satisfactory proof for them. This is a little confusing, I know, so I'm only going to talk about constructive logics that Brouwer would approve of.

I encourage the curious reader to poke further at this, it’s rather cool math.

Who on Earth Cares?

Now while constructive logic probably sounds reasonable, if weird, it doesn't immediately strike me as particularly useful! Indeed, the main reason computer science cares about constructivism is that we all use it already.

To better understand this, let's talk about the Curry-Howard isomorphism. It's that thing that wasn't really invented by either Curry or Howard, and that some claim isn't best seen as an isomorphism; naming is hard. The Curry-Howard isomorphism states that there's a mapping from a type to a logical proposition and from a program to a proof.

To show some of the mappings for types

CH(Either a b) = CH(a) ∨ CH(b)
CH((a, b))     = CH(a) ∧ CH(b)
CH( () )       = ⊤ -- True
CH(Void)       = ⊥ -- False
CH(a -> b)     = CH(a) → CH(b)

So a program with the type (a, b) is really a proof that a ∧ b is true. Here the truthfulness of a proposition really means that the corresponding type can be occupied by a program.
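As a tiny worked example (my own illustration): writing a total program of the following type is a proof that A ∧ B implies B ∧ A.

andComm :: (a, b) -> (b, a)
andComm (a, b) = (b, a)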

Now, onto why this logic we get is constructive. Recall our two conditions for a logic being constructive: the first is that if ∃x. A(x) is provable then there's a specific t where A(t) is provable.

Under the Curry-Howard isomorphism, ∃ is mapped to existential types (I wonder how that got its name :). That means that a proof of ∃x. A(x) is something like

-- Haskell existential syntax is a bit gnarly :/
data Exists f = forall x. Exists (f x)

ourProof :: Exists F
ourProof = ...

Now we know the only way to construct an Exists F is to use the constructor Exists. This constructor means that there is at least one specific type for which we could prove f x. We can easily produce this term as well:

isProof :: Exists f -> (forall x. f x -> c) -> c
isProof (Exists x) cont = cont x

We can always access the specific “witness” we used to construct this Exists type with pattern matching.
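As a quick usage sketch (the list type and the Int witness are hypothetical choices, just to have something concrete; this needs the ExistentialQuantification and RankNTypes extensions):

someProof :: Exists []
someProof = Exists [1 :: Int, 2, 3]

-- The consumer never learns that the witness was Int, only that some x exists,
-- so it must handle the contents polymorphically (length works at any x).
itsLength :: Int
itsLength = isProof someProof length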

The next law is similar. If we have a proof of a ∨ b we’re supposed to immediately be able to produce a proof of a or a proof of b.

In programming terms, if we have a program of type Either a b we're supposed to be able to immediately tell whether it produces a Right or a Left! We can't merely argue that one of the two must be possible to construct without knowing which, since we have to be able to actually run this program! If we evaluate a program with the type Either a b we're guaranteed to get either a Left a or a Right b.

The Self-Sacrificing Definition of Constructive Logic

There are a few explanations of constructive logic that basically describe it as “Classical logic - the law of excluded middle”. More verbosely, a constructive logic is just one that forbids

  1. ∀ A. A ∨ ¬ A being provable (the law of excluded middle, LEM)
  2. ∀ A. ¬ (¬ A) → A being provable (the law of double negation)

I carefully chose the words "being provable" because we can easily introduce these as hypotheses to a proof and still have a sound system. Indeed, this is not uncommon when working in Coq or Agda. They're just not readily available tools. Looking at them, it should be apparent why: they both let us prove something without directly proving it.
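To make "introduce these as hypotheses" concrete in Haskell terms, here's a sketch (doubleNegFromLEM is my own name for it): if we take LEM for a as an explicit argument rather than claiming it outright, double negation elimination becomes derivable, and the system stays sound.

import Data.Void (Void, absurd)

-- Given a hypothetical proof of A ∨ ¬A, double negation elimination follows.
doubleNegFromLEM :: Either a (a -> Void) -> ((a -> Void) -> Void) -> a
doubleNegFromLEM (Left a)     _  = a
doubleNegFromLEM (Right notA) nn = absurd (nn notA)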

This isn’t really a defining aspect of constructivism, just a natural consequence. If we need a proof of A to show A to be true if we admit A ∨ ¬ A by default it defeats the point. We can introduce A merely by showing ¬ (¬ A) which isn’t a proof of A! Just a proof that it really ought to be true.

In programming terms this is saying we can’t write these two functions.

data Void

doubleNeg :: ((a -> Void) -> Void) -> a
doubleNeg = ...

lem :: Either a (a -> Void)
lem = ...

For the first one we have two choices: either we use this (a -> Void) -> Void term we're given, or we construct an a without it. Constructing an arbitrary a without the function is just equivalent to forall a. a, which we know to be unoccupied. That means we have to use (a -> Void) -> Void, which means we have to build an a -> Void. We have no way of doing anything interesting with that supplied a, however, so we're completely stuck! The story is similar with lem.
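Note that the reverse direction is perfectly constructive; double negation introduction is just function application (a one-line sketch using the Void from above):

-- From a proof of A we can refute any refutation of A.
dni :: a -> ((a -> Void) -> Void)
dni a notA = notA a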

In a lot of ways this definition strikes me in the same way that describing functional programming as

Oh it’s just programming where you don’t have variables or objects.

Or static typing as

It’s just dynamic typed programming where you can’t write certain correct programs

I have a strong urge to say “Well.. yes but no!”.

Wrap Up

Hopefully this helps clarify what exactly people mean when they say Haskell corresponds to a constructive logic or programs are proofs. Indeed this constructivism gives rise to a really cool thing called “proof relevant mathematics”. This is mathematics done purely with constructive proofs. One of the latest ideas to trickle from mathematics to computers is homotopy type theory where we take a proof relevant look at identity types.

Before I wrap up I wanted to share one funny little thought I heard. Constructive mathematics has found a home in automated proof systems. Imagine Brouwer’s horror at hearing we do “intuitionist” proofs that no one will ever look at or try to understand beyond some random mechanical proof assistant!

Thanks to Jon Sterling and Darryl McAdams for the advice and insight

Categories: Offsite Blogs

Hoogle easter egg

Haskell on Reddit - Thu, 01/08/2015 - 5:35pm

Hoogle for "hoogle"

(it says "Can't think of anything more interesting to search for?")

Funny, /u/ndmitchell :) And thanks for a great tool!

submitted by ignorantone
[link] [8 comments]
Categories: Incoming News

Is there any Haskell BVH parser?

Haskell on Reddit - Thu, 01/08/2015 - 4:37pm

For Biovision files.

submitted by SrPeixinho
[link] [comment]
Categories: Incoming News

nh2/haskell-ordnub · GitHub

del.icio.us/haskell - Thu, 01/08/2015 - 7:52am
Categories: Offsite Blogs

GHC Weekly News - 2015/01/07

Haskell on Reddit - Thu, 01/08/2015 - 2:49am
Categories: Incoming News

Migration/7.10 – GHC

del.icio.us/haskell - Wed, 01/07/2015 - 9:14pm
Categories: Offsite Blogs

Danny Gratzer: A Crash Course on ML Modules

Planet Haskell - Wed, 01/07/2015 - 6:00pm
Posted on January 8, 2015 Tags: sml, haskell

I was having lunch with a couple of Haskell programmers the other day and the subject of the ML family came up. I've been writing a lot of ML lately and mentioned that I thought *ML was well worth learning for the average Haskeller. When pressed why, the best answer I could come up with was "Well.. clean language. Oh! And an awesome module system", which wasn't exactly my most compelling response.

I’d like to outline a bit of SML module system here to help substantiate why looking at an ML is A Good Thing. All the code here should be translatable to OCaml if that’s more your taste.

Concepts

In ML languages modules are a well thought out portion of the language. They aren’t just “Oh we need to separate these names… modules should work”. Like any good language they have methods for abstraction and composition. Additionally, like any good part of an ML language, modules have an expressive type language for mediating how composition and abstraction works.

So to explain how this module system functions as a whole, we’ll cover 3 subjects

  1. Structures
  2. Signatures
  3. Functors

Giving a cursory overview of what each thing is and how it might be used.

Structures

Structures are the values in the module language. They are how we actually create a module. The syntax for them is

struct
  fun flip f x y = f y x

  datatype 'a list = Cons of ('a * 'a list)
                   | Nil
  ...
end

A quick note to Haskellers: in ML, types are lower case and type variables are written with a leading '. Type constructors are applied "backwards", so List a is 'a list.

So they’re just a bunch of a declarations stuffed in between a struct and end. This is a bit useless if we can’t bind it to a name. For that there’s

structure M = struct val x = 1 end

And now we have a new module M with a single member, x : int. This is just like binding a variable in the term language except a “level up” if you like. We can use this just like you would use modules in any other language.

val x' = M.x + 1

Since struct ... end can contain any list of declarations we can nest module bindings.

structure M' = struct structure NestedM = M end

And access this using ..

val sum = M'.NestedM.x + M.x

As you can imagine, it would get a bit tedious if we needed to . our way to every single module access. For that we have open which just dumps a module’s exposed contents into our namespace. What’s particularly interesting about open is that it is a “normal” declaration and can be nested with let.

fun f y = let open M in x + y end

OCaml has gone a step further and added special syntax for small opens. The “local opens” would turn our code into

let f y = M.(x + y)

This already gives us a lot more power than your average module system. Structures basically encapsulate what we’d expect in a module system, but

  1. Structures =/= files
  2. Structures can be bound to names
  3. Structures can be nested

Up next is a look at what sort of type system we can impose on our language of structures.

Signatures

Now for the same reason we love types in the term language (safety, readability, insert-semireligious-rant) we’d like them in the module language. Happily ML comes equipped with a feature called signatures. Signature values look a lot like structures

sig
  val x : int

  datatype 'a list = Cons of ('a * 'a list)
                   | Nil
end

So a signature is a list of declarations without any implementations. We can list algebraic data types, other modules, and even functions and values but we won’t provide any actual code to run them. I like to think of signatures as what most documentation rendering tools show for a module.

As we had with structures, signatures can be given names.

signature MSIG = sig val x : int end

On their own, signatures are quite useless; the whole point is that we can apply them to modules, after all! To do this we use :, just like in the term language.

structure M : MSIG = struct val x = 1 end

When compiled, this will check that M has at least the field x : int inside its structure. We can apply signatures retroactively to both module variables and structure values themselves.

structure M = struct val x = 1 end : MSIG

One interesting feature of signatures is the ability to leave certain types abstract. For example, when implementing a map the actual implementation of the core data type doesn’t belong in the signature.

signature MAP =
sig
  type key
  type 'a table

  val empty  : 'a table
  val insert : key -> 'a -> 'a table -> 'a table
  val lookup : key -> 'a table -> 'a option
end

Notice that the types of keys and tables are left abstract. When someone applies a signature they can do so in two ways: weak or strong ascription. Weak ascription (:) means that the constructors of abstract types are still accessible, but the signature does hide all unrelated declarations in the module. Strong ascription (:>) makes the abstract types actually abstract.
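For Haskellers, a rough analogue of strong ascription (my comparison, not from the original post) is a module whose export list names a type but hides its constructors, making the type abstract to importers:

module Table (Table, empty, insert) where  -- MkTable is deliberately not exported

data Table a = MkTable [a]

empty :: Table a
empty = MkTable []

insert :: a -> Table a -> Table a
insert x (MkTable xs) = MkTable (x : xs)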

Every once in a while we need to modify a signature. We can do this with the keywords where type. For example, we might implement a specialization of MAP for integer keys and want our signature to express this

structure IntMap :> MAP where type key = int =
struct
  ...
end

This incantation leaves the type of the table abstract but specializes the keys to an int.

Last but not least, let’s talk about abstraction in module land.

Functors

Last but not least let’s talk about the “killer feature” of ML module systems: functors. Functors are the “lifting” of functions into the module language. A functor is a function that maps modules with a certain signature to functions of a different signature.

Jumping back to our earlier example of maps, the equivalent in Haskell land is Data.Map. The big difference is that Haskell gives us maps for all keys that implement Ord. Our signature doesn’t give us a clear way to associate all these different modules, one for each Orderable key, that are really the same thing. We can represent this relationship in SML with

signature ORD =
sig
  type t
  val compare : t * t -> order
end

functor RBTree (O : ORD) : MAP where type key = O.t =
struct
  open O
  ...
end

This reads as "For any module implementing ORD, I can give you a module implementing MAP with keys of type O.t". We can then instantiate it:

structure IntOrd =
struct
  type t = int
  val compare = Int.compare
end

structure IntMap = RBTree(IntOrd)
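For contrast, the Haskell side the post mentions picks the ordering implicitly through the Ord instance rather than by explicit functor application; a minimal sketch with Data.Map:

import qualified Data.Map as Map

-- No functor application in sight: the Ord Int instance is found automatically.
intMap :: Map.Map Int String
intMap = Map.insert 1 "one" Map.empty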

Sadly SML’s module language isn’t higher order. This means we can’t assign functors a type (there isn’t an equivalent of ->) and we can’t pass functors to functors. Even with this restriction functors are tremendously useful.

One interesting difference between SML and OCaml is how functors handle abstract types. Specifically, is it the case that

F(M).t = F(M).t

In SML the answer is (surprisingly) no! Applying a functor generates brand new abstract types. This is actually beneficial when you remember SML and OCaml aren't pure. For example, you might write a functor for handling symbol tables and internally use a mutable symbol table. One nifty trick would be to keep the type of symbols abstract. If you only give back a symbol upon registering something in the table, this would mean that all symbols a user can supply are guaranteed to correspond to some entry.

This falls apart, however, if functors are extensional. Consider the following REPL session:

> structure S1 = SymbolTable(WhateverParameters)
> structure S2 = SymbolTable(WhateverParameters)
> val key = S1.register "I'm an entry"
> S2.lookup key
Error: no such key!

This will not work if S1 and S2 have separate key types.

To my knowledge, the general conclusion is that generative functors (à la SML) are good for impure code, but applicative functors (à la OCaml and Backpack) really shine with pure code.

Wrap Up

We’ve covered a lot of ground in this post. This wasn’t an exhaustive tour of every feature of ML module systems, but hopefully I got the jist across.

If there’s one point to take home: In a lot of languages modules are clearly a bolted on construction. They’re something added on later to fix “that library problem” and generally consist of the same “module <-> file” and “A module imports others to bring them into scope”. In ML that’s simply not the case. The module language is a rich, well thought out thing with it’s own methods of abstraction, composition, and even a notion of types!

I wholeheartedly recommend messing around a bit with OCaml or SML to see how having these things impacts your thought process. I think you’ll be pleasantly surprised.

Categories: Offsite Blogs