News aggregator

Ben Moseley: FTP dangers

Planet Haskell - Wed, 02/11/2015 - 5:55am
I am concerned about the Foldable Traversable Proposal (FTP):

My first and biggest concern is simply that it's harder to read code which uses highly polymorphic functions unnecessarily.

You can see that even by considering plain old fmap vs map: it's harder to read "fmap f . fmap g" than "map f . map g" because with the former you have to manually search for more information about which Functor is being used in each case.

My second concern is that the more you overload, the more you risk having something unexpected happen:

Say we have two variables:
*Borders.Base.Utils> let a' = ["alice", "bob"]
*Borders.Base.Utils> let a  = (True, a')
we want to count the characters so we type:
*Borders.Base.Utils> length $ concat a
..but we've accidentally forgotten the prime character...

... right now we get:
   Couldn't match expected type ‘[[a0]]’
               with actual type ‘(Bool, [[Char]])’
   In the first argument of ‘concat’, namely ‘a’
   In the second argument of ‘($)’, namely ‘concat a’
so we fix it and get our desired result:

*Borders.Base.Utils> length $ concat a'
8
...but under the FTP proposals (where concat would become Data.Foldable.concat) we get:

*Borders.Base.Utils> length $ Data.Foldable.concat a
2
(because pairs are Foldable in their second argument, we silently get 2 instead of the expected 8).

This cannot be a good thing.
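A standalone version of the trap (my illustration, not from Ben's post), runnable with an explicit import of Data.Foldable:

```haskell
import qualified Data.Foldable as F

main :: IO ()
main = do
  let a' = ["alice", "bob"]
      a  = (True, a')
  -- intended: total number of characters across the strings
  print (length (concat a'))
  -- the Foldable instance for pairs folds only over the second component,
  -- so F.concat a yields a' itself and we count strings, not characters
  print (length (F.concat a))
```

The second print typechecks and runs without complaint, which is precisely the silent-wrong-answer scenario described above.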

I believe that the more generalised functions of FTP should be opt-in (i.e. you should, as at present, need to import them explicitly).

Categories: Offsite Blogs

Having problems with my indentation and alignment in Notepad++: after working on a .lhs script for a while, when I reopened it today I found everything to be unaligned. Please help!

Haskell on Reddit - Wed, 02/11/2015 - 4:38am

I'm working on a Haskell program which is currently over 500 lines. I'm very careful to make sure all my functions have neat and identical alignment, but I opened Notepad++ today to discover that my functions are no longer aligned neatly and the whole thing looks a mess. As far as I'm aware I've not done anything, and it seems to be the case with all my scripts; I wondered if anyone had any idea what is going on?

Here's an example of what I mean

fun :: Int -> Int
fun a = a

Has become

fun :: Int -> Int fun a = a

I know it's not super important but it's very frustrating and wondered if anyone knew a fix?

EDIT: I'm an idiot, thanks for all your help, it's sorted now!

submitted by LJackso
[link] [15 comments]
Categories: Incoming News

Rank-N types with (.) composition

General haskell list - Tue, 02/10/2015 - 10:38pm
I came across something that seems a bit strange to me. Here is a simplified version (the original was trying to move from a lens ReifiedFold to a lens-action ReifiedMonadicFold):

    {-# LANGUAGE RankNTypes #-}
    import Control.Applicative

    newtype Wrap = Wrap { extract :: forall f. Functor f => f Int }

    trip :: Wrap -> Wrap
    trip a = Wrap (extract a)

The compiler is okay with this. It chokes on this alternative though:

    trip :: Wrap -> Wrap
    trip = Wrap . extract

giving (GHC 7.8.2):

    Couldn't match type ‘f0 Int’
               with ‘forall (f :: * -> *). Functor f => f Int’
    Expected type: f0 Int -> Wrap
      Actual type: (forall (f :: * -> *). Functor f => f Int) -> Wrap
    In the first argument of ‘(.)’, namely ‘Wrap’
    In the expression: Wrap . extract

I'm guessing this is because of the compiler's fancy footwork to handle the implicit parameters. Something like

    trip a = Wrap (\f fDict -> extract a f fDict)

where f is the Functor type and fDict is the associated dictionary, isn't compatible
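An editorial note, not part of the original mail: composing with (.) would require instantiating its middle type variable at the polytype forall f. Functor f => f Int, which is impredicative and rejected by GHC; eta-expansion sidesteps this. A minimal file showing the accepted form:

```haskell
{-# LANGUAGE RankNTypes #-}

newtype Wrap = Wrap { extract :: forall f. Functor f => f Int }

-- Accepted: eta-expanded, so the polymorphic field is passed directly
-- to Wrap without going through (.)'s monomorphic type variables.
trip :: Wrap -> Wrap
trip a = Wrap (extract a)

-- Rejected if uncommented: (.) :: (b -> c) -> (a -> b) -> a -> c would
-- need b ~ (forall f. Functor f => f Int), i.e. impredicative instantiation.
-- trip' :: Wrap -> Wrap
-- trip' = Wrap . extract

main :: IO ()
main = putStrLn "trip compiles"
```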
Categories: Incoming News

Functional Jobs: Senior Development Engineer at Lookingglass Cyber Solutions (Full-time)

Planet Haskell - Tue, 02/10/2015 - 8:21pm

Lookingglass is seeking a qualified Senior Development Engineer to join our team!

Are you an experienced senior software engineer in security, networking, cloud and big data? Are you interested in cyber security or improving the security of the Internet? Do you push yourself to be creative and innovative and expect the same of others?

At Lookingglass, we are driven and passionate about what we do. We believe that teams, not individuals, deliver great products. We inspire each other and our customers every day with technology that improves the security of the Internet and of our customers’. Behind our success is a team that thrives on collaboration and creativity, delivering meaningful impact to our customers.

Skills & Requirements Required:
  • US Citizen or Green Card Holder
  • Location: MD/VA or CA based
  • Bachelor’s or Masters degree in: computer science, engineering, information systems or mathematics
  • 5-8 yrs experience with the full development life-cycle, with shipped products
  • 4-8 yrs experience with functional (particularly Clojure) and OO languages – have worked in functional paradigms with immutable data models
  • 5+ yrs building distributed system products including messaging, NoSQL, RPC / RMI mechanisms – including building Service Oriented Architectures
  • Proficiency with data structure and algorithm analysis
  • Experience working in a TDD environment
  • Comfortable with aggressive refactoring
  • Architectural and design skills to map a solution across hardware, software, and business layers in a distributed architecture
Nice to Have:
  • Product development experience in network security, content security or cybersecurity intelligence
  • Experience with CSP concurrency models
  • Experience with key-value distributed databases
  • Experience deploying production software in Haskell, OCaml, or Lisp
  • Comfortable writing one or more of the following: JavaScript, GoLang, Ruby

Get information on how to apply for this position.

Categories: Offsite Blogs

Why is the function for inserting a value into a monadic context called "return"?

Haskell on Reddit - Tue, 02/10/2015 - 6:42pm

So I've been using Haskell for a while now, and I was just reading through Wadler's "The essence of functional programming" and I saw that in that paper he refers to what we now call "return" as "unitM" (that is, the unit operation of monads). How did the standard term become "return" and was there concern about naming confusion with "return" in imperative languages?
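(An aside, not from the thread: Wadler's unitM is literally today's return, and renaming it makes the monad identity laws read as unit laws. An illustrative file, using the modern Prelude:)

```haskell
-- Wadler-style names, defined in terms of today's Prelude (illustration only).
unitM :: Monad m => a -> m a
unitM = return

bindM :: Monad m => m a -> (a -> m b) -> m b
bindM = (>>=)

main :: IO ()
main = do
  -- left identity:  unitM a `bindM` f  ==  f a
  print (unitM (3 :: Int) `bindM` (\x -> Just (x + 1)) == Just 4)
  -- right identity: m `bindM` unitM  ==  m
  print ((Just 3 :: Maybe Int) `bindM` unitM == Just 3)
```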

submitted by hemamorphy
[link] [33 comments]
Categories: Incoming News

Danny Gratzer: Notes on Proof Theory: Part 1

Planet Haskell - Tue, 02/10/2015 - 6:00pm
Posted on February 11, 2015 Tags: types

I write a lot about types. Up until now however, I’ve only made passing references to the thing I’ve actually been studying in most of my free time lately: proof theory. Now I have a good reason for this: the proof theory I’m interested in is undeniably intertwined with type theory and computer science as a whole. In fact, you occasionally see someone draw the triangle

             Type Theory
               /     \
              /       \
    Proof Theory ---- Category Theory

Which nicely summarizes the lay of the land in the world I’m interested in. People will often pick up something well understood in one corner of the triangle and drag it off to another, producing a flurry of new ideas and papers. It’s all very exciting and leads to really cool stuff. I think the most talked about example lately is homotopy type theory, which takes a mathematical structure (weak infinite groupoids) and hoists it off to type theory!

If you read the [unprofessional, mostly incorrect, and entirely more fun to read] blog posts on these subjects you’ll find most of the lip service is paid to category theory and type theory with poor proof theory shunted off to the side.

In this post, I’d like to jot down my notes on Frank Pfenning’s introduction to proof theory materials to change that in some small way.

What is Proof Theory

The obvious question is just “What is proof theory?”. The answer is that proof theory is the study of proofs. In this world we study proofs as first class mathematical objects which we prove interesting things about. This is the branch of math that formalizes our handwavy notion of a proof into a precise object governed by rules.

We can then prove things like “Given a derivation of Γ ⊢ A and another derivation of Γ, A ⊢ B, we can produce a derivation of Γ ⊢ B”. Such a theorem is utterly crazy unless we can formalize what it means to derive something.

From this we grow beautiful little sets of rules and construct derivations with them. Later, we can drag these derivations off to type theory and use them to model all sorts of wonderful phenomena. My most recent personal example was when folks noticed that the rules for modal logic perfectly capture what the semantics of static pointers ought to be.

So in short, proof theory is devoted to answering that question that every single one of your math classes dodged

Professor, what exactly is a proof?

Basic Building Blocks

In every logic that we’ll study we’ll keep circling back to two core objects: judgments and propositions. The best explanation of judgments I’ve read comes from Frank Pfenning

A judgment is something we may know, that is, an object of knowledge. A judgment is evident if we in fact know it.

So judgments are the things we’ll structure our logic around. You’ve definitely heard of one judgment: A true. This judgment signifies whether or not some proposition A is true. Judgments can be much fancier though: we might have a whole bunch of judgments like n even, A possible or A resource.

These judgments act across various syntactic objects. In particular, from our point of view we’ll understand the meaning of a proposition by the ways we can prove it, that is the proofs that A true is evident.

We prove a judgment J through inference rules. An inference rule takes the form

    J₁  J₂  ...  Jₓ
    ———————————————
           J

Which should be read as “When J₁, J₂ … and Jₓ hold, then so does J”. Here the things above the line are premises and the ones below are conclusions. What we’ll do is define a bunch of these inference rules and use them to construct proofs of judgments. For example, we might have the inference rules

                      n even
    ——————      ————————————
    0 even      S(S(n)) even

for the judgment n even. We can then form proofs to show that n even holds for some particular n.

            ——————
            0 even
        ————————————
        S(S(0)) even
    ——————————————————
    S(S(S(S(0)))) even

This tree, for example, is evidence that 4 even holds. We apply the second inference rule to S(S(S(S(0)))) first. This leaves us with one premise to show, S(S(0)) even. For this we repeat the process and end up with the new premise 0 even, which we can discharge with the first inference rule, which has no premises, completing our proof.
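One way to make this concrete (my illustration, not from the post): encode the judgment as a GADT whose constructors are exactly the two inference rules, so a well-typed value is a derivation tree.

```haskell
{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}
import Data.Kind (Type)

data Nat = Z | S Nat

-- Each constructor is an inference rule for the judgment `n even`.
data Even :: Nat -> Type where
  ZeroEven :: Even 'Z                      -- the axiom: 0 even
  StepEven :: Even n -> Even ('S ('S n))   -- from n even, conclude S(S(n)) even

-- The derivation tree for `4 even` from the post:
fourEven :: Even ('S ('S ('S ('S 'Z))))
fourEven = StepEven (StepEven ZeroEven)

main :: IO ()
main = putStrLn "fourEven typechecks, so S(S(S(S(0)))) even holds"
```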

One judgment we’ll often see is A prop. It simply says that A is a well-formed proposition: not necessarily true, but syntactically well formed. This judgment is defined inductively over the structure of A. One of the rules defining it would be

    A prop   B prop
    ———————————————
       A ∧ B prop

Which says that A ∧ B (A and B) is a well formed proposition if and only if A and B are! We can imagine a whole bunch of these rules

                          A prop   B prop
    ——————    ——————    ————————————————
    ⊤ prop    ⊥ prop       A ∨ B prop       ...

that lay out the propositions of our logic. This doesn’t yet tell us how to prove any of these propositions to be true, but it’s a start. After we formally specify what sentences are propositions in our logic, we need to discuss how to prove that one is true. We do this with a different judgment, A true, which is once again defined inductively.

For example, we might want to give meaning to the proposition A ∧ B. To do this we define its meaning through the inference rules for proving that A ∧ B true. In this case, we have the rule

    A true   B true
    ——————————————— (∧I)
       A ∧ B true

I claim that this defines the meaning of ∧: to prove a conjunction to be true we must prove its left and right halves. The rather proof-theoretic thing we’ve done here is say that the meaning of something is given by what we use to prove it. This is sometimes called the “verificationist perspective”. Finally, note that I annotated this rule with the name ∧I simply for convenience when referring to it.

Now that we know what A ∧ B means, what does having a proof of it imply? Well, we should be able to “get out what we put in”, which would mean we’d have two inference rules

    A ∧ B true     A ∧ B true
    ——————————     ——————————
      A true         B true

We’ll refer to these rules as ∧ E1 and ∧ E2 respectively.

Now for a bit of terminology, rules that let us “introduce” new proofs of propositions are introduction rules. Once we have a proof, we can use it to construct other proofs. The rules for how we do that are called elimination rules. That’s why I’ve been adding I’s and E’s to the ends of our rule names.

How do we convince ourselves that these rules are correct with respect to our understanding of ∧? This question leads us to our first sort of proofs-about-proofs we’ll make.

Local Soundness and Completeness

What we want to say is that the introduction and elimination rules match up. This should mean that any time we prove something by an introduction rule followed by an elimination rule, we should be able to rewrite the proof to avoid this detour. This also shows that the rules aren’t too powerful: we can’t prove anything with the elimination rules that we didn’t already have a proof of at some point.

For ∧ this proof looks like this

    D   E
    –   –
    A   B
    —————— ∧I        D
    A ∧ B       ⇒    –
    —————— ∧E1       A
      A

So whenever we introduce a ∧ and then eliminate it with ∧ E1 we can always rewrite our proof to not use the elimination rules. Here notice that D and E range over derivations in this proof. They represent a chain of rule applications that let us produce an A or B in the end. Note I got a bit lazy and started omitting the true judgments, this is something I’ll do a lot since it’s mostly unambiguous.

The proof for ∧E2 is similar.

    D   E
    –   –
    A   B
    —————— ∧I        E
    A ∧ B       ⇒    –
    —————— ∧E2       B
      B

Given this we say that the elimination rules for ∧ are “locally sound”. That is, when used immediately after an introduction rule they don’t let us produce anything truly new.

Next we want to show that if we have a proof of A ∧ B, the elimination rules give us enough information that we can pick the proof apart and produce a reassembled A ∧ B.

                      D            D
                    ——————       ——————
      D             A ∧ B        A ∧ B
    ——————    ⇒    —————— ∧E1   —————— ∧E2
    A ∧ B             A            B
                    ————————————————— ∧I
                          A ∧ B

This somewhat confusing derivation takes our original proof of A ∧ B, pulls it apart into proofs of A and B, and uses these to assemble a new proof of A ∧ B. This means that our elimination rules give us back all the information we put in, so we say they’re locally complete.

These two properties combined, local soundness and completeness, are how we show that an elimination rule is balanced with its introduction rule.

If you’re more comfortable with programming languages (I am) our local soundness property is equivalent to stating that

    fst (a, b) ≡ a
    snd (a, b) ≡ b

And local completeness is that

a ≡ (fst a, snd a)

The first equations are reductions and the second is an expansion. These correspond exactly to the beta and eta rules we expect a programming language to have! This is a nice little example of why proof theory is useful: it gives a systematic way to define some parts of the behavior of a program. Given the logic a programming language gives rise to, we can double-check that all the rules are locally sound and complete, which gives us confidence that our language isn’t horribly broken.
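The pair laws above can be checked directly in Haskell (my illustration; the laws hold definitionally, so this is just a sanity check on concrete values):

```haskell
main :: IO ()
main = do
  let p = (1 :: Int, 2 :: Int)
  print (fst p == 1)            -- beta for fst: fst (a, b) ≡ a
  print (snd p == 2)            -- beta for snd: snd (a, b) ≡ b
  print (p == (fst p, snd p))   -- eta for pairs: a ≡ (fst a, snd a)
```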

Hypothetical Judgments

Before I wrap up this post I wanted to talk about one last important concept in proof theory: judgments with hypotheses. This is best illustrated by trying to write the introduction and elimination rules for “implies” or “entailment”, written A ⊃ B.

Clearly A ⊃ B is supposed to mean that we can prove B true assuming A true to be provable. In other words, we can construct a derivation of the form

    A true
    ——————
       .
       .
       .
    ——————
    B true

We can notate our rules then as

    —————— u
    A true
    ——————
       .
       .
       .
    ——————
    B true                  A ⊃ B    A
    —————————— u            ——————————
    A ⊃ B true                B true

This notation is a bit clunky, so we’ll opt for a new one: Γ ⊢ J. In this notation Γ is some list of judgments we assume to hold and J is the thing we want to show holds. Generally we’ll end up with the rule

    J ∈ Γ
    —————
    Γ ⊢ J

Which captures the fact that Γ contains assumptions we may (or may not) use to prove our goal. This specific rule may vary depending on how we want assumptions to work in our logic (substructural logics spring to mind here). For our purposes, this is the most straightforward characterization of how this ought to work.

Our hypothetical judgments come with a few rules which we call “structural rules”. They modify the structure of the judgment, rather than any particular proposition we’re trying to prove.

Weakening

      Γ ⊢ J
    —————————
    Γ, Γ' ⊢ J

Contraction

    Γ, A, A, Γ' ⊢ J
    ———————————————
     Γ, A, Γ' ⊢ J

Exchange

    Γ' = permute(Γ)    Γ' ⊢ A
    —————————————————————————
             Γ ⊢ A

Finally, we get a substitution principle. This allows us to eliminate some of the assumptions we made to prove a theorem.

    Γ ⊢ A    Γ, A ⊢ B
    —————————————————
          Γ ⊢ B

These 5 rules give meaning to our hypothetical judgments. We can then restate our formulation of entailment in this less clunky notation:

    A prop   B prop
    ———————————————
       A ⊃ B prop

     Γ, A ⊢ B         Γ ⊢ A ⊃ B    Γ ⊢ A
    —————————         ———————————————————
    Γ ⊢ A ⊃ B                Γ ⊢ B

One thing in particular to note here is that entailment actually internalizes the notion of hypothetical judgments into our logic. This is the aspect of it that makes it behave so differently than the other connectives we looked at.

As an exercise to the reader: prove the local soundness and completeness of these rules.
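(A hint for the exercise, in programming-language terms as above, and again my note rather than the post's: local soundness for ⊃ is beta reduction and local completeness is eta expansion.)

```haskell
main :: IO ()
main = do
  -- beta (local soundness for ⊃): (\x -> e) a reduces to e[a/x]
  print ((\x -> x + 1) 4 == (5 :: Int))
  -- eta (local completeness for ⊃): f behaves like \x -> f x
  -- (checked pointwise on a few inputs, since function equality isn't decidable)
  let f = (* 2) :: Int -> Int
  print (all (\x -> f x == (\y -> f y) x) [1 .. 10])
```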


In this post we’ve laid out a bunch of rules and I’ve hinted that a bunch more are possible. Put together, these rules define a logic using “natural deduction”, a particular way of specifying proofs that uses inference rules rather than axioms or something entirely different.

Hopefully I’ve inspired you to poke a bit further into proof theory; in that case I heartily recommend Frank Pfenning’s lectures at the Oregon Summer School for Programming Languages.


Categories: Offsite Blogs

The GHC Team: GHC Weekly News - 2015/02/10

Planet Haskell - Tue, 02/10/2015 - 5:14pm

Hi *,

Welcome! This is the first GHC Weekly news of February 2015. You might be wondering what happened to the last one. Well, your editor was just in New York for the past week attending Compose Conference, making friends and talking a lot about Haskell (luckily we missed a snow storm that may have messed it up quite badly!)

The conference was great. I got to have some interesting discussions about GHC and Haskell with many friendly faces from all around at an incredibly well run conference with a stellar set of people. Many thanks to NY Haskell (organizing), Spotify (hosting space), and to all the speakers for a wonderful time. (And of course, your editor would like to thank his employer Well-Typed for sending him!)

But now, since your author has returned, GHC HQ met back up this week for some discussion, with some regularly scheduled topics. For the most part it was a short meeting this week - our goals are pretty well identified:

  • It's likely GHC HQ will do a third 7.10.1 Release Candidate at the very end of February after the votes are included. We missed some patches in RC2 (such as Phab:D347) and incorporated even more bugfixes, so this is worth a test drive by users.
  • For the most part, things for 7.10 have been going very smoothly other than the debates and a few bugs trickling in - there has not been much ticket activity the past two weeks, so things feel pretty good right now. Austin will mostly be focused on shipping 7.10 and keeping the normal review/patch/triaging cycles going until it's done. We're on track to fix all the major bugs we've assigned (see milestone:7.10.1).

Since my last post, we've also had other random assorted chatter on the mailing lists by the dev team:

  • In light of a recent large bug in GHC which can be used to derive unsafeCoerce, GHC HQ has decided to push back the 7.10 release a bit longer, to about March, in order to fix this bug and ferret out any fallout afterwards. It turns out this isn't a simple bug to fix, but luckily a fix is already being worked on.
  • David Feuer has a question: why is undefined so special? In particular, it seems as if undefined can be specially used as a value with a type of kind # as well as *. It turns out GHC has a special notion of subkinding, and undefined has a type more special than meets the eye which allows this, as Adam Gundry replied.
  • Greg Weber opened up a discussion about 'Restricted Template Haskell', which would hopefully make it easier for users to see what a TH computation is actually doing. It turns out - as noted by Simon - that Typed Template Haskell is perhaps closer to what Greg wants. The proposal and discussion then resulted in us realising that the typed TH documentation is rather poor! Hopefully Greg or someone can swing in to improve things.

Closed tickets the past two weeks include: #10028, #10040, #10031, #9935, #9928, #2615, #10048, #10057, #10054, #10060, #10017, #10038, #9937, #8796, #10030, #9988, #10066, #7425, #7424, #7434, #10041, #2917, #4834, #10004, #10050, #10020, #10036, #9213, and #10047.

Categories: Offsite Blogs

Using Haskell types in a database

haskell-cafe - Tue, 02/10/2015 - 4:29pm
Suppose I need to represent a user in a database. I could make a table like this one (using Yesod's config/models syntax):

    User
      name Text
      email Text Maybe
      is_verified Bool default=false
      last_login UTCTime default=now()

Instead, it would be better to do the following, so I couldn't swap name and email by accident or feed a wrong value to is_verified and last_login:

    User
      name UserName
      email UserEmail v Maybe
      last_login UserLastLogin

where

    newtype UserName = UserName Text deriving (Show, Read)
    derivePersistField "UserName"

    newtype UserLastLogin = UserLastLogin UTCTime deriving (Show, Read)
    derivePersistField "UserLastLogin"

    data Verified
    data Unverified

    newtype UserEmail v = UserEmail Text deriving (Show, Read)
    derivePersistField "UserEmail"

(So I could define functions like

    sendAnnouncement :: UserEmail Verified -> IO ()
    sendVerification :: UserEmail Unverified -> IO ())

The Haskell part is not new and has been successfully used in the la
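A dependency-free sketch of the phantom-type part (the type names follow the post; verify, its trivial body, and the example address are my inventions for illustration, without persistent/Yesod):

```haskell
{-# LANGUAGE EmptyDataDecls #-}

-- Phantom tags: never inhabited, only used at the type level.
data Verified
data Unverified

newtype UserEmail v = UserEmail String

sendAnnouncement :: UserEmail Verified -> IO ()
sendAnnouncement (UserEmail e) = putStrLn ("announcing to " ++ e)

-- The real verification check is elided; this only flips the phantom tag.
verify :: UserEmail Unverified -> Maybe (UserEmail Verified)
verify (UserEmail e) = Just (UserEmail e)

main :: IO ()
main = case verify (UserEmail "alice@example.com") of
  Just ok -> sendAnnouncement ok
  Nothing -> putStrLn "still unverified"
```

Passing an unverified email to sendAnnouncement is now a type error, which is exactly the guarantee the post is after.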
Categories: Offsite Discussion

Neil Mitchell: Why is the Hoogle index so out of date?

Planet Haskell - Tue, 02/10/2015 - 12:22pm

Summary: Hoogle 4 is out of date. The alpha version Hoogle 5 has fresh code and data every day (and isn't yet ready).

Someone recently asked why Hoogle's index is so out of date. Making the index both more current (updated daily) and larger (indexing all of Stackage) is one of the goals behind my Hoogle 5 rewrite (which still isn't finished). Let's compare the different update processes:

Hoogle 4 updates took about two hours to complete, if they went well, and often had to be aborted. I first compiled the Hoogle binary on the machines, which often failed, as the installed version of GHC was typically very old. Once I'd got a compiled binary, I needed to generate the database, which took about 2 hours and occasionally failed halfway through. Once I had the new binary and databases, I moved everything to the correct place for Apache, accepting a small window of downtime during the move. Assuming that worked, I did a few test searches and smiled. Often the new Hoogle binary failed to start (usually failure to find some files, sometimes permissions) and I had to switch back to the old copy. Fixing up such issues took up to an hour. I had a mix of Windows .bat and Linux .sh scripts to automate some of the steps, but they weren't very robust and required babysitting.

Hoogle 5 updates happen automatically at 8pm every night, take 4 minutes, and have yet to fail. I have a cron script that checks out the latest code and runs an update script. That script clones a fresh repo, compiles Hoogle, builds the databases, runs the test suite, kills the old version and launches the new version. The Hoogle code is all tested on Travis, so I don't expect that to fail very often. The upgrade script is hard to test, but the two failure modes are upgrading to a broken version, or not upgrading. The upgrade script runs checks and fails if anything doesn't work as expected, so it errs on the side of not upgrading. I use Uptime Robot to run searches and check the server is working, along with a canary page which raises an error if no upgrade happens for two days.

Clearly, the Hoogle 5 version update story is better. But why didn't I do it that way with Hoogle 4? The answer is that Hoogle 4 came out over six years ago, and a lot has changed since then:

  • Hoogle 4 is a CGI binary, served through Apache, while Hoogle 5 is a Haskell Warp server. By moving the logic into Haskell, it's far easier for me to configure and manage. Warp was only released on Hackage in 2011.
  • Hoogle 4 runs on the main server, where my mistakes can easily take out the home page (as a result, the home page once said "moo" for 10 minutes). Hoogle 5 runs on a dedicated VM where I have root and no one else runs anything, so I can experiment with settings for swap files, IP tables and cron jobs.
  • My job has provided a lot of practice doing drive-by sysadmining over the last 6 years. I've also had a lot of practice doing critical releases on a nightly basis. In comparison, Hoogle is pretty simple.
  • The revised/rethought approach to Hoogle databases is a lot faster and uses a lot less memory, so it takes under a minute to generate databases, instead of over an hour. That time difference makes it much easier to experiment with different approaches.

When will Hoogle 5 be ready? It doesn't yet do type search, there is no offline version and no API. There are probably lots of other little pieces missing. If you want, feel free to use it now. You can still use Hoogle 4, or the more up-to-date FP Complete hosted Hoogle 4.

Categories: Offsite Blogs

First Call for Papers, PxTP 2015

General haskell list - Tue, 02/10/2015 - 11:46am
The Fourth International Workshop on Proof eXchange for Theorem Proving (PxTP)
August 2-3, 2015, Berlin, Germany
Associated with CADE 2015

Important dates
  • Abstract submission: Thu, May 7, 2015
  • Paper submission: Thu, May 14, 2015
  • Notification: Tue, June 16, 2015
  • Camera ready versions due: Thu, June 25, 2015
  • Workshop: August 2-3, 2015

Background

The PxTP workshop brings together researchers working on various aspects of communication, integration, and cooperation between reasoning systems and formalisms. The progress in computer-aided reasoning, both automated and interactive, during the past decades, made it possible to build deduction tools that are increasingly more applicable to a wider range of problems and are able to tackle larger problems progressively faster. In recent years, cooperation of such tools in larger verification environments has demon
Categories: Incoming News

Proposal: NF newtype

libraries list - Tue, 02/10/2015 - 10:46am
I propose the following (abstract) data type, functions, and instance be added to deepseq (naming amenable to bikeshedding):

    newtype NF a = NF a -- abstract

    makeNF :: NFData a => a -> NF a
    getNF  :: NF a -> a

    instance NFData (NF a) where
        rnf x = x `seq` ()

NF is an abstract data type representing data which has been evaluated to normal form; the guarantee specifically is: if NF is in whnf, then it is in nf. Crucially, when we have 'NF a', we ONLY need to seq it in order to assure that it is fully evaluated. This guarantee is sufficient for a variety of cases where normal data is necessary, e.g. when transmitting data over Channels. For example, from the monad-par library 'put_' could be used in place of 'put' with this type signature:

    put_ :: IVar (NF a) -> (NF a) -> Par ()

Cheers, Edward
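A plausible fleshing-out of the proposal (my sketch; the mail leaves makeNF's body unspecified, but forcing with deepseq before wrapping is the natural reading):

```haskell
import Control.DeepSeq (NFData (..), deepseq)

newtype NF a = NF a  -- constructor would be kept abstract in the real module

-- Force to normal form once, up front; afterwards WHNF of the wrapper
-- implies NF of the contents.
makeNF :: NFData a => a -> NF a
makeNF x = x `deepseq` NF x

getNF :: NF a -> a
getNF (NF x) = x

-- rnf only needs seq, which is the whole point of the proposal.
instance NFData (NF a) where
  rnf x = x `seq` ()

main :: IO ()
main = print (getNF (makeNF [1, 2, 3 :: Int]))
```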
Categories: Offsite Discussion

The GHC Team: GHC 7.10 Prelude: we need your opinion

Planet Haskell - Tue, 02/10/2015 - 9:52am

This post asks for your help in deciding how to proceed with some Prelude changes in GHC 7.10. Please read on, but all the info is also at the survey link. Deadline is 21 Feb 2015.

The Core Libraries Committee (CLC) is responsible for developing the core libraries that ship with GHC. This is an important but painstaking task, and we owe the CLC a big vote of thanks for taking it on.

For over a year the CLC has been working on integrating the Foldable and Traversable classes (shipped in base in GHC 7.8) into the core libraries, and into the Prelude in particular. Detailed planning for GHC 7.10 started in the autumn of 2014, and the CLC went ahead with this integration.

Then we had a failure of communication. As these changes affect the Prelude, which is in scope for all users of Haskell, these changes should be held to a higher bar than the regular libraries@ review process. However, the Foldable/Traversable changes were not particularly well signposted. Many people have only recently woken up to them, and some have objected (both in principle and detail).

This is an extremely unfortunate situation. On the one hand we are at RC2 for GHC 7.10, so library authors have invested effort in updating their libraries to the new Prelude. On the other, altering the Prelude is in effect altering the language, something we take pretty seriously. We should have had this debate back in 2014, but here we are, and it is unproductive to argue about whose fault it is. We all share responsibility. We need to decide what to do now. A small group of us met by Skype and we've decided to do this:

  • Push back GHC 7.10's release by at least a month, to late March. This delay also gives us breathing space to address an unrelated show-stopping bug, Trac #9858.
  • Invite input from the Haskell community on which of two approaches to adopt (this survey). The main questions revolve around impact on the Haskell ecosystem (commercial applications, teaching, libraries, etc etc), so we want to ask your opinion rather than guess it.
  • Ask Simon Marlow and Simon Peyton Jones to decide which approach to follow for GHC 7.10.

Wiki pages have been created summarizing these two primary alternatives, including many more points and counter-points and technical details.

This survey invites your input on which plan we should follow. Would you please

  • Read the details of the alternative plans on the three wiki pages above
  • Add your response to ​the survey

Please do read the background. Well-informed responses will help. Thank you!

DEADLINE: 21 February 2015

Simon PJ

Categories: Offsite Blogs

Why all the tooling woes?

Haskell on Reddit - Tue, 02/10/2015 - 9:05am

I have seen a lot of discussion (and complaints) about the Haskell toolchain on this forum.

One popular area of discussion has to do with resolving dependencies, dealing with breaking changes, and deciding who should maintain backwards compatibility (and how).

This is sort of an umbrella issue for topics like "what's up with Cabal", or, "too much CPP is happening!"

I'm wondering why we seem to discuss these issues so much more within the Haskell community than within, say, the Ruby or Clojure communities.

The too-convenient explanation is to say, "in the Ruby and Clojure universes, it doesn't matter if there are subtle errors in the way dependencies interact, since people can release their code anyway; with Haskell, we reject invalid programs, so this class of programs can't be released"

I imagine that there is some truth to the above explanation; what are the other root causes that force us to debate the aforementioned dependency-resolution-umbrella-issue more than other language communities?

submitted by skatenerd
[link] [7 comments]
Categories: Incoming News

Cross-Compiling GUI Libraries from Linux to Windows

Haskell on Reddit - Tue, 02/10/2015 - 3:23am

I'm not sure this is the best venue for this, but I wanted to gather all of this in one place and I don't have a good place to put it.

I'm not too experienced with Haskell, but I've been wanting to use it for a while. However, most of the projects I want to do are small cross-platform GUI utilities to replace Clojure+Swing, fixing: how bad Swing looks, the size of a full Clojure JAR, Clojure's start times, and the need for the JVM. So, I tried to get Gtk2Hs, wxHaskell, and HsQML to work.

I'm currently using Fedora 20, so I first tried the MinGW compiler packaged with that. I didn't get too far with that. I turned to the next best thing: Wine (by the time I was done with this, I could have used some of the more drinkable kind). I started by installing the 32-bit Haskell Platform 2014.2.0.0 in Wine and adding that to the path. That worked just fine: I could run wine cabal and wine ghc.

To start, I tried installing wxHaskell. I started by trying to compile wxWidgets, but their directions recommended using MSYS2, which I found doesn't work with Wine. I eventually downloaded wxPack, which is about a gigabyte. While following the wiki page on installing wxHaskell in Windows, I also had to remove the "48" from "lib/gcc48_lib" and "lib/gcc48_dll" in wxPack to make the paths match. Installing wxHaskell from Hackage failed with:

src\cpp\glcanvas.cpp:43:60: error: 'wxGLContext' has not been declared
src\cpp\glcanvas.cpp:102:1: error: 'wxGLContext' does not name a type
src\cpp\glcanvas.cpp:109:1: error: 'wxGLContext' does not name a type
src\cpp\glcanvas.cpp:116:1: warning: '__cdecl__' attribute only applies to function types
src\cpp\glcanvas.cpp:116:1: error: 'wxGLContext' was not declared in this scope
src\cpp\glcanvas.cpp:116:1: error: 'self' was not declared in this scope
src\cpp\glcanvas.cpp:116:1: error: expected primary-expression before 'void'
src\cpp\glcanvas.cpp:116:1: error: initializer expression list treated as compound expression
src\cpp\glcanvas.cpp:117:1: error: expected ',' or ';' before '{' token

So, I installed from the wxHaskell Git repository by putting "wine" in front of every mention of "cabal" in the bin/mk-cabal script and running it. Then I compiled a small Hello World program. I copied the appropriate DLLs next to that program and tried running it. That failed with:

err:module:attach_process_dlls "wxc.dll" failed to initialize, aborting

Looking at more verbose debug messages pointed toward an exception being raised while wxc.dll was loading. I really wasn't sure where to go from there, so I gave up on wxHaskell.

Next, I tried HsQML. Fortunately, the directions on that site worked more or less perfectly. I compiled the hsqml-demo-samples to make sure they worked. My first problem was how the directories were arranged by Cabal; since I wanted it to work as a portable application, the executable should be at the root. Running cabal configure --bindir=$prefix --datadir=$prefix/data before building fixed that. Then, I needed to gather the dependencies into the application directory. In the case of the demos, those are:

  • <Qt directory>/qml/QtQuick/ -> ./QtQuick/
  • <Qt>/qml/QtQuick.2/ -> ./QtQuick.2/
  • <Qt>/plugins/platforms/qwindows.dll -> ./platforms/qwindows.dll
  • icudt53.dll (this and the remaining DLLs are copied from <Qt>/bin into the executable directory)
  • icuin53.dll
  • icuuc53.dll
  • libgcc_s_dw2-1.dll
  • libstdc++-6.dll
  • Qt5Core.dll
  • Qt5Gui.dll
  • Qt5Network.dll
  • Qt5Qml.dll
  • Qt5Quick.dll
  • Qt5Widgets.dll

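Collecting the files above by hand is tedious and easy to get wrong. As a rough sketch, a small Haskell script could gather them; the file names come from the list above, but the function and variable names are mine, and the QtQuick/ and QtQuick.2/ QML directories are left out because they need a recursive copy:

```haskell
import Control.Monad (forM_)
import System.Directory (copyFile, createDirectoryIfMissing)
import System.FilePath ((</>), takeDirectory)

-- DLLs copied from <Qt>/bin into the application root.
qtBinDlls :: [FilePath]
qtBinDlls =
  [ "icudt53.dll", "icuin53.dll", "icuuc53.dll"
  , "libgcc_s_dw2-1.dll", "libstdc++-6.dll"
  , "Qt5Core.dll", "Qt5Gui.dll", "Qt5Network.dll"
  , "Qt5Qml.dll", "Qt5Quick.dll", "Qt5Widgets.dll"
  ]

-- (source relative to the Qt directory, destination relative to the app root)
deps :: [(FilePath, FilePath)]
deps = ("plugins" </> "platforms" </> "qwindows.dll", "platforms" </> "qwindows.dll")
     : [ ("bin" </> d, d) | d <- qtBinDlls ]

-- Copy every listed dependency from the Qt tree into the portable app directory.
deploy :: FilePath -> FilePath -> IO ()
deploy qtDir appDir = forM_ deps $ \(src, dst) -> do
  createDirectoryIfMissing True (takeDirectory (appDir </> dst))
  copyFile (qtDir </> src) (appDir </> dst)
```

Running deploy "/path/to/Qt" "dist" would then leave the dist directory ready to be stripped, UPX'd, and zipped as described below.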
My next biggest concern was the size of all of these dependencies, which came out to about 50 MB. First, I stripped everything, which helped a little. Next, I tried UPX, which cut it almost in half. Finally, I found that icudt53.dll, which started out at 22 MB and compressed to about 8 MB, could shrink further by customizing the included ICU data, as described in this forum post, which has a reduced DLL. That pushed all the dependencies down to 16 MB. With the first OpenGL demo as an example app (820 KB compressed), it was possible to put everything needed into a ZIP file of 11 MB while uncompressing to 17 MB. One last note on Qt: while testing in a Windows VM, I found that Qt would not work (failing with errors like unable to resolve `glBindBuffer`) because I needed to upgrade VirtualBox Guest Additions and enable 3D acceleration.

The last one I tried was Gtk2Hs. The directions only talk about GTK+ 2. Following them works fine, but GTK+ 2 didn't look very native. However, following those directions while substituting the GTK+ 3 bundle for the GTK+ 2 bundle and installing the gtk3 package worked fine. I was able to compile a simple Hello World. For deployment, I copied all of the DLLs from the bin directory of the GTK bundle. I also noticed that the font didn't look native either, which was fixed by adding an etc/gtk-3.0/settings.ini file alongside the executable containing:

[Settings]
gtk-font-name = Tahoma 8

All those DLLs took up 23 MB. Running them through UPX compressed them down to 9 MB. Together with the application (4 MB compressed), a complete distributable ZIP took only 7 MB.

So, after all that work, as well as many failures along the way, I was able to compile Windows executables in Linux using only free software, while avoiding recompiling any of the large GUI toolkits. I was also able to fulfill all my goals. Gtk2Hs was the most painless and the smallest. On the other hand, I've always disliked how GTK looks. HsQML, while immature, gives me Qt, which I think looks better than GTK, while not being much larger, once you use the minimal ICU DLL. More objectively, I think data binding with QML is really convenient and useful, as it enables better separation of concerns.

In summary, I couldn't get wxHaskell to work, Gtk2Hs and HsQML worked, and I think Qt is pretty and cool.

submitted by TJSomething
[link] [5 comments]
Categories: Incoming News

Noam Lewis: Inference of ‘new’ statements, and ambiguous types

Planet Haskell - Tue, 02/10/2015 - 2:16am

To prevent possible name clashes, my type inference engine for JavaScript is now named Infernu (previously Inferny). I hope I don’t have to change the name again!

In other news, here is some recent progress:

‘new’ now requires ‘this’ to be a row type

I changed inference of ‘new’ calls, so that the constructed function must have a row type as its implicit “this” parameter (nothing else makes sense).

The change to “new” typing made it possible to define the built in String, Number, Boolean type coercion functions in a way that disallows constructing them (e.g. “new String(3)”) which is considered bad practice in JavaScript. The reason is that the constructed values are “boxed” (kind of) so that they don’t equate by reference as normal strings, booleans and numbers do. For example, new String(3) == '3' but at the same time, new String(3) !== '3'.

Ambiguous Types

I added an initial implementation of what I call ambiguous types. These are simple type constraints that limit a type variable to a set of specific types.

The motivation for type constraints is that some JavaScript operators make sense for certain types, but not all types. So having a fully polymorphic type variable would be too weak. For example, the + operator has weird outputs when using all sorts of different input types (NaNNaNNaNNaNNaNNaN….batman!). I would like to constrain + to work only between strings or between numbers.

With the new type constraints, + has the following type:

a = (TNumber | TString) => ((a, a) -> a)

The syntax is reminiscent of Haskell’s type classes, and means: given a type variable “a” that is either a TNumber or a TString, the type of + is: (a, a) -> a
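The resemblance can be made concrete in Haskell itself. As a rough sketch (the class name Plus and its instances are mine, not part of Infernu or of any standard library), the constraint behaves like a type class with a fixed, closed set of instances:

```haskell
{-# LANGUAGE FlexibleInstances #-}

-- A closed set of admissible types, standing in for (TNumber | TString).
class Plus a where
  plus :: a -> a -> a   -- analogue of JavaScript's (+) at type (a, a) -> a

instance Plus Double where
  plus = (+)            -- numeric addition

instance Plus [Char] where
  plus = (++)           -- string concatenation
```

The key difference is that Infernu's constraint is a fixed set of types, whereas a real Haskell class is open: anyone can add new instances.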

I’m thinking about implementing full-blown type classes, or alternatively, a more powerful form of ambiguous types, to deal with some other more complicated examples.

Tagged: Haskell, Javascript
Categories: Offsite Blogs