News aggregator

The Cabal of Cabal

del.icio.us/haskell - Wed, 03/05/2014 - 9:01am
Categories: Offsite Blogs


Philip Wadler: Blame, coercions, and threesomes, precisely

Planet Haskell - Wed, 03/05/2014 - 8:02am
Blame, coercions, and threesomes, precisely
Jeremy Siek, Peter Thiemann, and Philip Wadler
Draft, March 2014
We systematically present four calculi for gradual typing: the blame calculus of Wadler and Findler (2009); a novel calculus that pinpoints blame precisely; the coercion calculus of Henglein (1994); and the threesome calculus of Siek and Wadler (2010). Threesomes are given a syntax that directly exposes their origin as coercions in normal form, a more transparent presentation than that found in Siek and Wadler (2010) or Garcia (2013). Comments welcome!
Categories: Offsite Blogs


cabal sandbox tips

Haskell on Reddit - Wed, 03/05/2014 - 7:02am
Categories: Incoming News

Graphics.Rasterific

del.icio.us/haskell - Wed, 03/05/2014 - 2:14am
Categories: Offsite Blogs

Mark Hibberd & Tony Morris: Argonaut – pure functional JSON | Functional Talks

del.icio.us/haskell - Wed, 03/05/2014 - 1:13am
Categories: Offsite Blogs

www.wumpus-cave.net

del.icio.us/haskell - Wed, 03/05/2014 - 12:46am
Categories: Offsite Blogs

web.engr.oregonstate.edu

del.icio.us/haskell - Wed, 03/05/2014 - 12:46am
Categories: Offsite Blogs

www.fpcomplete.com

del.icio.us/haskell - Wed, 03/05/2014 - 12:45am
Categories: Offsite Blogs


haskell-cafe - Tue, 03/04/2014 - 11:02pm
The haskell.org committee is trying to figure out how to use some of its newfound power (the Power of Collecting Money) to best benefit the open-source Haskell community. You can help us by filling out a very short survey (it should only take you about 5 minutes): https://docs.google.com/forms/d/1rEobhHwFpjzPnra9L1TmrozWNFFyAVNPmdUMCcT--3Q/viewform Please do fill it out, especially if you have opinions about what parts of the Haskell open-source world need more work, and could benefit by having some people paid to work on them. Thanks! -Brent, for the haskell.org committee
Categories: Offsite Discussion

2013.2 on Win8.1 64 bit

haskell-cafe - Tue, 03/04/2014 - 10:22pm
Hi, Does the current latest Haskell Platform install on Win8.1? I don't seem to have any luck. I have it installed on my portable, which runs Win7. On my desktop I have 8.1, and the installer asks me whether I want to stop and restart with elevated permissions. It does this however I start it, including from a command prompt running as administrator, or from the right-click "Run as administrator". Continuing then fails to write to Program Files (x86). Any ideas? James
Categories: Offsite Discussion

Free Applicative Functors (Paolo Capriotti, Ambrus Kaposi; MSFP 2014)

Haskell on Reddit - Tue, 03/04/2014 - 8:55pm
Categories: Incoming News

Luke Palmer: Algebraic and Analytic Programming

Planet Haskell - Tue, 03/04/2014 - 6:40pm

The professor began my undergrad number theory class by drawing a distinction between algebra and analysis, two major themes in mathematics. This distinction has been discussed elsewhere, and seems to be rather slippery (to mathematicians at least, because it evades precise definition).  My professor seemed to approach it from a synesthetic perspective — it’s about the feel of it.  Algebra is rigid, geometric (think polyhedra), perfect.  The results are beautiful, compact, and eternal.  By contrast, analysis is messy and malleable.  Theorems have lots of assumptions which aren’t always satisfied, but analysts use them anyway and hope (and check later) that the assumptions really do hold up.  Perelman’s famous proof of the Poincaré conjecture, as I understand, is essentially an example of going back and checking analytic assumptions.  Analysis often makes precise and works with the notion of “good enough” — two things don’t have to be equal, they only need to converge toward each other with a sufficiently small error term.

I have been thinking about this distinction in the realm of programming.  As a Haskell programmer, most of my focus is on algebraic-feeling programming.  I like to perfect my modules, making them beautiful and eternal, built up from definitions that are compact and each obviously correct.  I take care with my modules when I first write them, and then rarely touch them again (except to update them with dependency patches that the community helpfully provides).  This is in harmony with the current practice of denotative programming, which strives to give mathematical meaning to programs and thus make them easy to reason about. This meaning has, so far, always been of an algebraic nature.

What a jolt I felt when I began work at Google.  The programming that happens here feels quite different — much more like the analytic feeling (I presume — I mostly studied algebraic areas of math in school, so I have less experience).  Here the codebase and its dependencies are constantly in motion, gaining requirements, changing direction.  “Good enough” is good enough; we don’t need beautiful, eternal results.  It’s messy, it’s malleable. We use automated tests to keep things within appropriate error bounds — proofs and obviously-correct code would be intractable.  We don’t need perfect abstraction boundaries — we can go dig into a dependency and change its assumptions to fit our needs.

Much of the ideological disagreement within the Haskell community and between nearby communities happens across this line.  Unit tests are not good enough for algebraists; proofs are crazy to an analyst.  QuickCheck strikes a nice balance; it’s fuzzy unit tests for the algebraist.  It gives compact, simple, meaningful specifications for the fuzzy business of testing.  I wonder, can we find a dual middle-ground?  I have never seen an analytic proof of software correctness.  Can we say with mathematical certainty that our software is good enough, and what would such a proof look like?
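The "fuzzy unit tests for the algebraist" idea can be made concrete: a property is a compact, meaningful specification, and testing it on many generated inputs is the fuzzy part. Here is a self-contained sketch in the spirit of QuickCheck, without depending on it; the `lcg` generator and `testCases` are hypothetical stand-ins for QuickCheck's real random-generation machinery, not its actual API.

```haskell
import Data.List (sort)

-- A compact, meaningful specification: sorting is idempotent.
prop_sortIdempotent :: [Int] -> Bool
prop_sortIdempotent xs = sort (sort xs) == sort xs

-- A crude stand-in for QuickCheck's generators: pseudo-random ints
-- from a linear congruential generator (hypothetical helper).
lcg :: Int -> [Int]
lcg seed = tail (iterate step seed)
  where step x = (1103515245 * x + 12345) `mod` 2147483648

-- 100 pseudo-random test inputs of increasing length, including [].
testCases :: [[Int]]
testCases = [ take n (lcg (n + 7)) | n <- [0 .. 99] ]

main :: IO ()
main = print (all prop_sortIdempotent testCases)
```

The property itself is algebraic (an equation); checking it on sampled inputs rather than proving it is the analytic concession.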

Categories: Offsite Blogs

Mark Jason Dominus: Proof by contradiction

Planet Haskell - Tue, 03/04/2014 - 6:00pm

Intuitionistic logic is deeply misunderstood by people who have not studied it closely; such people often seem to think that the intuitionists were just a bunch of lunatics who rejected the law of the excluded middle for no reason. One often hears that intuitionistic logic rejects proof by contradiction. This is only half true. It arises from a typically classical misunderstanding of intuitionistic logic.

Intuitionists are perfectly happy to accept a reductio ad absurdum proof of the following form:

$$(P\to\bot)\to \lnot P$$

Here $\bot$ means an absurdity or a contradiction; $P\to\bot$ means that assuming $P$ leads to absurdity, and the formula says that if assuming $P$ leads to absurdity, then you can conclude that $P$ is false. This is a classic proof by contradiction, and it is intuitionistically valid. In fact, in many formulations of intuitionistic logic, $\lnot P$ is defined to mean $P\to\bot$.

What is rejected by intuitionistic logic is the similar-seeming claim that:

$$(\lnot P\to\bot)\to P$$

This says that if assuming $\lnot P$ leads to absurdity, you can conclude that $P$ is true. This is not intuitionistically valid.

This is where people become puzzled if they only know classical logic. “But those are the same thing!” they cry. “You just have to replace $P$ with $\lnot P$ in the first one, and you get the second.”

Not quite. If you replace $P$ with $\lnot P$ in the first one, you do not get the second one; you get:

$$(\lnot P\to\bot)\to \lnot\lnot P$$

People familiar with classical logic are so used to shuffling the signs around and treating $\lnot\lnot P$ the same as $P$ that they often don't notice when they are doing it. But in intuitionistic logic, $\lnot\lnot P$ and $P$ are not the same. $\lnot\lnot P$ is weaker than $P$, in the sense that from $P$ one can always conclude $\lnot\lnot P$, but not always vice versa. Intuitionistic logic is happy to agree that if $\lnot P$ leads to absurdity, then $\lnot\lnot P$. But it does not agree that this is sufficient to conclude $P$.

As is often the case, it may be helpful to try to understand intuitionistic logic as talking about provability instead of truth. In classical logic, $P$ means that $P$ is true and $\lnot P$ means that $P$ is false. If $P$ is not false it is true, so $\lnot\lnot P$ and $P$ mean the same thing. But in intuitionistic logic $P$ means that $P$ is provable, and $\lnot P$ means that $P$ is not provable. $\lnot\lnot P$ means that it is impossible to prove that $P$ is not provable.

If $P$ is provable, it is certainly impossible to prove that $P$ is not provable. So $P$ implies $\lnot\lnot P$. But just because it is impossible to prove that there is no proof of $P$ does not mean that $P$ itself is provable, so $\lnot\lnot P$ does not imply $P$.

Similarly,

$$(P\to\bot)\to \lnot P$$

means that if a proof of $P$ would lead to absurdity, then we may conclude that there cannot be a proof of $P$. This is quite valid. But

$$(\lnot P\to\bot)\to P$$

means that if assuming that a proof of $P$ is impossible leads to absurdity, there must be a proof of $P$. But this itself isn't a proof of $P$, nor is it enough to prove $P$; it only shows that there is no proof that proofs of $P$ are impossible.
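The asymmetry shows up directly under the propositions-as-types reading, which is natural in Haskell: a proof of $\lnot P$ is a function from (hypothetical) proofs of $P$ to absurdity. The sketch below assumes only `Data.Void` from base; the names `doubleNegIntro` and `tripleNegElim` are my own.

```haskell
import Data.Void (Void)

-- A refutation of p: a function taking any proof of p to absurdity.
-- This is the intuitionistic definition of negation, (P -> ⊥).
type Not p = p -> Void

-- P -> ¬¬P is provable: given a proof p, any refutation of p
-- can be fed p to produce the absurdity it promised.
doubleNegIntro :: p -> Not (Not p)
doubleNegIntro p notP = notP p

-- Triple negations collapse even intuitionistically: ¬¬¬P -> ¬P.
tripleNegElim :: Not (Not (Not p)) -> Not p
tripleNegElim nnnp p = nnnp (doubleNegIntro p)

-- ¬¬P -> P has no total definition: all we hold is a consumer of
-- refutations, and there is no way to conjure a value of type p.
-- doubleNegElim :: Not (Not p) -> p
-- doubleNegElim k = ...   -- stuck

main :: IO ()
main = putStrLn "doubleNegIntro and tripleNegElim typecheck"
```

That `doubleNegIntro` is a one-line program while `doubleNegElim` cannot be written at all is exactly the post's point: $P$ yields $\lnot\lnot P$, but not vice versa.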

Categories: Offsite Blogs

Haxe 3.1 is here

Lambda the Ultimate - Tue, 03/04/2014 - 4:41pm

Haxe 3.1 is here. It is a language that is sorta rooted in the bog-standard mainstream (it came out of Action/Ecma scripts), but has gradually (especially in the move from 2.0 to 3.0+) been adding some of its own ideas. I've used 2.x for making cross-platform games. I sorta love/hate it, but I'd certainly be a lot more sad if it stopped existing, having known it (not in the biblical sense or anything). There are probably too many random things to go into any detail here, but I'll try to summarize it as: a cross-platform language and libraries (it compiles down to other languages), statically and dynamically typed (including structural typing), with some nifty typing ideas/constructs and syntax of its own. Oh, and: macros. (But it has seemingly weird lapses; I doubt it will ever really support the Curiously Recurring Template Pattern, which I personally find sad.)

Categories: Offsite Discussion

Roman Cheplyaka: cabal sandbox tips

Planet Haskell - Tue, 03/04/2014 - 4:00pm

In case you missed it, starting from version 1.18 cabal-install has awesome sandboxing capabilities. Here I share a couple of tricks that will make your sandboxing experience even better.

Location-independent sandboxes

By default, cabal uses a sandbox only in the directory where cabal.sandbox.config is present. This is inconvenient when sharing a sandbox among multiple projects, and in general makes the setup somewhat fragile.

With cabal 1.19 (i.e. cabal HEAD as of now) you can set the CABAL_SANDBOX_CONFIG environment variable to the path to your cabal.sandbox.config, and the corresponding sandbox will be used regardless of your current directory.

I’ve defined convenience functions for myself such as

tasty() {
  export CABAL_SANDBOX_CONFIG=$HOME/prog/tasty/sandbox/cabal.sandbox.config
  sandbox_name=tasty
}

for every sandbox I commonly use. Notice how I also set the sandbox_name variable to the human-readable name of the sandbox. It can be displayed in the zsh prompt as follows:

setopt prompt_subst # force prompt re-evaluation
PROMPT='${sandbox_name+[sandbox: $sandbox_name] }%~ %% '

(The idea of showing the sandbox name in the prompt is due to /u/cameleon.)

Sandbox-aware ghc

Sandboxes affect only cabal, not ghc or ghci when those are invoked directly. At some point in the future we’ll be able to write

% cabal exec ghc ...

For now I’ve defined sandbox-aware wrappers for ghc and ghci, published as a gist.

Clone the repo somewhere

% git clone https://gist.github.com/9365969.git ghc_sandbox

and include in your .bashrc or .zshrc

. ~/path/to/ghc_sandbox/ghc_sandbox.sh

(Why am I wrapping ghci instead of using cabal repl? cabal repl has some side-effects, like re-installing packages, that are not always desirable. And ghci is much faster to start, too.)

Categories: Offsite Blogs

Haskell on Reddit - Tue, 03/04/2014 - 3:09pm

The haskell.org committee is trying to figure out how to use some of its newfound power (the Power of Collecting Money) to best benefit the open-source Haskell community. You can help us by filling out this very short survey (it should only take you about 5 minutes). Please do fill it out, especially if you have opinions about what parts of the Haskell open-source world need more work, and could benefit by having some people paid to work on them.

Thanks!