News aggregator

Stack transitive dependencies

haskell-cafe - Wed, 08/26/2015 - 4:09pm
Hello, I have a package depending on another. Foo/stack.yaml:

    flags: {}
    packages:
    - '.'
    - location: ../Bar
      extra-dep: true
    extra-deps:
    - Bar-0.7.6
    resolver: lts-2.19

When I compile (stack build), I obtain:
Categories: Offsite Discussion

"Assignment" in do blocks?

haskell-cafe - Wed, 08/26/2015 - 1:25pm
Hi all, I'd like to propose an inbuilt "assignment" operator for use within do-blocks. Here's a common pattern:

    do foo <- someaction
       -- do something with foo
       -- Here we "update" the value of foo
       let foo' = bar foo
       -- The code from this point onwards shouldn't use foo
       -- but there is no way to enforce that

What I'd like to do instead is something like:

    let foo = bar foo

But to Haskell this is a recursive definition, not an update to the previous value. I can do this instead:

    foo <- return $ bar foo

which works fine, but (I assume) GHC currently can't optimise it for arbitrary monads. Also it won't necessarily work if the monad laws don't hold. It would be nice to have an "assignment arrow", say (<-=), which lets GHC optimise this:

    foo <-= bar foo

...desugaring to something like:

    foo' (bar foo) where foo' foo = ...
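For concreteness, the pattern under discussion can be sketched as a small runnable program; `someAction` and `bar` here are hypothetical stand-ins for whatever the real code does:

```haskell
-- A minimal sketch of the pattern; someAction and bar are
-- hypothetical stand-ins for the poster's real code.
someAction :: IO Int
someAction = return 41

bar :: Int -> Int
bar = (+ 1)

main :: IO ()
main = do
  foo <- someAction
  -- the primed-name idiom: the stale foo remains in scope afterwards
  let foo' = bar foo
  -- the workaround from the post: rebind foo via return, shadowing it
  foo <- return (bar foo)
  print (foo', foo)
```

Both bindings end up holding the same value; the complaint is that nothing prevents later code from accidentally using the stale name.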
Categories: Offsite Discussion

New Functional Programming User Group in Graz, Austria

Haskell on Reddit - Wed, 08/26/2015 - 12:46pm

Hi all,

This group will be for discussing and learning about functional programming. It's called Scala User Group Graz, but we will also be open to other FP languages like Haskell.

The first meeting will be on Tuesday 1st of September 19:00 at the Realraum (realraum.at) Brockmanngasse 15.

Greetings,

Benedikt

PS: I also posted this announcement at the scala subreddit https://www.reddit.com/r/scala/comments/3ihwug/new_scala_user_group_in_graz_austria/

submitted by bmaderbacher
Categories: Incoming News

Frege faster than Haskell (GHC)

Haskell on Reddit - Wed, 08/26/2015 - 6:21am

Hello there,

while doing some micro-benchmarking, an interesting data point appeared: Frege beats Haskell (GHC) with 6625 ms to 8008 ms, which is roughly 20% faster.

The Mandelbrot benchmark calculates 60000 points in the Mandelbrot set in the complex plane, between -2 and 1 on the x-axis and -1 and 1 on the y-axis. Each point is iterated at most 30000 times.
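The benchmark's inner loop is the standard escape-time iteration; a minimal Haskell sketch of it (grid traversal and output format omitted; the names are mine, not the benchmark's) looks like this:

```haskell
import Data.Complex

-- Count iterations of z -> z*z + c before |z| exceeds 2, capped at maxIter.
-- Points that never escape (members of the set) use the full iteration budget.
escapeSteps :: Int -> Complex Double -> Int
escapeSteps maxIter c = go 0 (0 :+ 0)
  where
    go n z
      | n >= maxIter    = n
      | magnitude z > 2 = n
      | otherwise       = go (n + 1) (z * z + c)

main :: IO ()
main = do
  print (escapeSteps 100 (0 :+ 0))  -- the origin is in the set
  print (escapeSteps 100 (1 :+ 0))  -- escapes after a few steps
```

Performance comparisons of this loop hinge on how each compiler unboxes the complex arithmetic, which is likely where the GHC/JVM gap comes from.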

I am surprised the JVM is faster than compiled Haskell. Maybe you have some hints for me?

submitted by vsts15
Categories: Incoming News

'Scientific' loses significant digits

haskell-cafe - Tue, 08/25/2015 - 11:34pm
Hi: Does anyone know why this is true?

    ((fromJust . readMaybe) "1.00000000" :: Scientific) == ((fromJust . readMaybe) "1" :: Scientific)
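The usual explanation is that Scientific stores a coefficient and a base-10 exponent and normalizes trailing zeros out of the coefficient, so "1.00000000" (coefficient 100000000, exponent -8) and "1" (coefficient 1, exponent 0) denote the same value. Below is a toy sketch of such a normalizing representation, not the actual internals of the scientific package:

```haskell
-- Toy coefficient * 10^exponent representation that normalizes trailing
-- zeros out of the coefficient, as Data.Scientific also does.
data Sci = Sci Integer Int  -- coefficient, base-10 exponent
  deriving (Eq, Show)

normalize :: Integer -> Int -> Sci
normalize 0 _ = Sci 0 0
normalize c e
  | r == 0    = normalize q (e + 1)  -- strip one trailing zero
  | otherwise = Sci c e
  where (q, r) = c `quotRem` 10

main :: IO ()
main =
  -- "1.00000000" parses as 100000000 * 10^-8; "1" as 1 * 10^0
  print (normalize 100000000 (-8) == normalize 1 0)
```

Under this reading no precision is "lost": the value is the same number; only the notion of significant digits, which Scientific does not track, disappears.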
Categories: Offsite Discussion

Electronic Medical Records (EMR) software development with pure functional programming (FP)

Haskell on Reddit - Tue, 08/25/2015 - 7:31pm

I'm looking for feedback from the FP community (specific language agnostic). My professional background is in medicine with personal interest in programming, and lately functional programming. The commercial EMR/EHR market is awash in choices, from large hospital system-wide offerings like EPIC to small limited-release packages tailored for private medical offices. I don't know exactly which language(s) most of these products are written in (maybe some of you might know?) but I would venture a guess that they are probably written in "standard" object-oriented enterprise languages like C++, C#, Java, etc. There are also freemium (ad-based) web browser platforms like Amazing Charts that are probably developed with standard web-centric technologies (PHP, Ruby/Python frameworks, perhaps a complete Javascript solution - esp lately).

My first question: has any attempt been made to create a commercial-quality EMR system (even a small one) with "pure" functional programming in a functional language? My second question: are there any potential benefits (besides interest and coolness points) in developing an EMR package with functional programming from the ground up? Let's leave the issue of static vs dynamic typing alone, as I suspect no packages exist in any functional language. If this kind of project is "interesting" and potentially useful, then I'm thinking the languages to consider would be Haskell (obviously), F#, Scala, Clojure, Golang, Erlang, or Elixir. Erlang/Elixir might offer distinct advantages for large-scale distribution across a large system like, say, Kaiser or the VA.

Thoughts?

submitted by RaymondWies
Categories: Incoming News

Danny Gratzer: Type is not in Type

Planet Haskell - Tue, 08/25/2015 - 6:00pm
Posted on August 26, 2015 Tags: jonprl, types, haskell

I was reading a recent proposal to merge types and kinds in Haskell to start the transition to dependently typed Haskell. One thing that caught my eye as I was reading it was that this proposal adds * :: * to the type system. This is of some significance because it means that once this is fully realized Haskell will be inconsistent (as a logic) in a new way! Of course, this isn’t a huge deal since Haskell is already woefully inconsistent with

  • unsafePerformIO
  • Recursive bindings
  • Recursive types
  • Exceptions

So it’s not like we’ll be entering new territory here. All that it means is that there’s a new way to inhabit every type in Haskell. If you were using Haskell as a proof assistant you were already in for a rude awakening I’m afraid :)

This is an issue of significance though for languages like Idris or Agda, where such a thing would actually render proofs useless. Famously, Martin-Löf's original type theory did have Type : Type (or * :: * in Haskell spelling) and Girard managed to derive a contradiction (Girard's paradox). I've always been told that the particulars of this construction are a little bit complicated, but to remember that Type : Type is bad.

In this post I’d like to prove that Type : Type is a contradiction in JonPRL. This is a little interesting because in most proof assistants this would work in two steps

  1. Hack the compiler to add the rule Type : Type
  2. Construct a contradiction and check it with the modified compiler

OK, to be fair, in something like Agda you could use the compiler hacking they've already done and just say {-# OPTIONS --set-in-set #-} or whatever the flag is. The spirit of the development is the same though.

In JonPRL I’m just going to prove this as a regular implication. We have a proposition which internalizes membership and I’ll demonstrate not(member(U{i}; U{i})) is provable (U{i} is how we say Type in JonPRL). It’s the same logic as we had before.

Background on JonPRL

Before we can really get to the proof we want to talk about we should go through some of the more advanced features of JonPRL we need to use.

JonPRL is a little different from most proof assistants. For example, we can define a type of all closed terms in our language, whose equality is purely computational. This type is base. To prove that =(a; b; base) holds you have to prove ceq(a; b), the finest-grained equality in JonPRL. Two terms are ceq if they

  1. Both diverge
  2. Run to the same outermost form and have ceq components

What’s particularly exciting is that you can substitute any term for any other term ceq to it, no matter at what type it’s being used. In fact, the reduce tactic (which beta-reduces terms) can conceptually be thought of as substituting a bunch of terms for their weak-head-normal forms, which are ceq to the original terms. The relevant literature behind this is Doug Howe’s “Equality in a Lazy Computation System”. There’s more in JonPRL in this regard; we also have the asymmetric version of ceq (called approx), but we won’t need it today.

Next, let’s talk about the image type. This is a type constructor with the following formation rule

    H ⊢ A : U{i}    H ⊢ f : base
    —————————————————————————————
       H ⊢ image(A; f) : U{i}

So here A is a type and f is anything. Things are going to be equal in image(A; f) if we can prove that they’re of the form f w and f w' where w = w' ∈ A. So image gives us the codomain (range) of a function. What’s pretty crazy about this is that it’s not just the range of some function A → B; we don’t really need a whole new type for that. It’s the range of literally any closed term we can apply. We can take the range of the Y combinator over pi types. We can take the range of lam(x. ⊥) over unit, anything we want!

This construct lets us define some really incredible things as a user of JonPRL. For example, the “squash” of a type is supposed to be a type which is occupied by <> (and only <>) if and only if there is an occupant of the original type. You can define these in HoTT with higher inductive types. Or, you can define them in this type theory as

    Operator squash : (0).
    [squash(A)] =def= [image(A; lam(x. <>))]

x ∈ squash(A) if and only if we can construct an a so that a ∈ A and lam(x. <>) a ~ x. Clearly x must be <> and we can construct such an a if and only if A is nonempty.

We can also define the set-union of two types. Something is supposed to be in the set union if and only if it’s in one or the other. To define such a thing with an image type we have

    Operator union : (0).
    [union(A; B)] =def= [image((x : unit + unit) * decide(x; _.A; _.B); lam(x.snd(x)))]

This one is a bit more complicated. The domain of things we’re applying our function to this time is

(x : unit + unit) * decide(x; _.A; _.B)

This is a dependent pair, sometimes called a Σ type. The first component is a boolean, if the first component is true the second component is of type A, otherwise it’s of type B. So for every term of type A or B, there’s a term of this Σ type. In fact, we can recover that original term of type A or B by just grabbing the second component of the term! We don’t have to worry about the type of such an operation because we’re not creating something with a function type, just something in base.

Unions let us define an absolutely critical admissible rule in our system. JonPRL has this propositional reflection of the equality judgment and membership, but membership is non-negatable. By this I mean that if we have some a so that a = a ∈ A doesn’t hold, we won’t be able to prove =(a; a; A) -> void. In order to prove such a thing we first have to prove that =(a; a; A) -> void is a type, which means proving that =(a; a; A) is a type.

In order to prove that =(a; b; A) is a proposition we have to prove =(a; a; A), =(b; b; A), and =(A; A; U{i}). The process of proving these will actually also show that the corresponding judgments, a ∈ A, b ∈ A, and A ∈ U{i} hold.

However, in the case that a and b are the same term this is just the same as proving =(a; b; A)! So =(a; a; A) is a proposition only if it’s true. However, we can add a rule that says =(a; b; A) is a proposition if a = a ∈ (A ∪ base), and similarly for b! This fixes our negatability issue because we can just prove =(a; a; base), something that may be true even if a is not equal to itself in A. Before this rule, having a function take a member(...) was useless: member(a; A) is just thin sugar for =(a; a; A), and member(a; A) is a proposition if and only if a = a ∈ A holds; in other words, it’s a proposition if and only if it’s true! With the new rule, we can prove member(a; A) is a proposition if A ∈ U{i} and a ∈ base, a much weaker set of conditions that almost always holds. We can apply this special rule in JonPRL with eq-eq-base instead of just eq-cd like the rest of our equality rules.

The Main Result

Now let’s actually begin proving Russell’s paradox. To start with, some notation.

    Infix 20 "∈" := member.
    Infix 40 "~" := ceq.
    Infix 60 "∪" := bunion.
    Prefix 40 "¬" := not.

This lets us say a ∈ b instead of member(a; b). JonPRL recently grew the ability to add transparent notation to terms; it makes our theorems a lot prettier.

Next we define the central term to our proof

    Operator Russell : ().
    [Russell] =def= [{x : U{i} | ¬ (x ∈ x)}]

Here we’ve defined Russell as shorthand for a subset type, in particular it’s a subset of U{i} (the universe of types). x ∈ Russell if x ∈ U{i} and ¬ (x ∈ x). Now normally we won’t be able to prove that this is a type (specifically x ∈ x is going to be a problem), but in our case we’ll have some help from an assumption that U{i} ∈ U{i}.
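This is exactly the shape of Russell’s classical argument. In standard set-theoretic notation the construction the proof mechanizes runs:

```latex
R = \{\, x \in U \mid x \notin x \,\}
\qquad\text{so, for any } y \in U:\quad y \in R \;\Leftrightarrow\; y \notin y.
```

Taking y = R (which is only legitimate once R ∈ U, and that is precisely what the hypothesis U ∈ U buys us) yields R ∈ R ⇔ R ∉ R, a contradiction. The rest of the post is this one-line argument, carried out with the well-formedness bookkeeping JonPRL demands.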

Now we begin to define a small set of tactics that we’ll want. These tactics are really where the fiddly bits of using JonPRL’s tactic system come into play. If you’re just reading this for the intuition as to why Type ∈ Type is bad, skip this part. You’ll still understand the construction even if you don’t understand these bits of the proof.

First we have a tactic which finds an occurrence of H : A + B in the context and eliminates it. This gives us two goals, one with an A and one with a B. To do this we use match, which gives us something like match goal with in Coq.

    Tactic break-plus {
      @{ [H : _ + _ |- _] => elim <H>; thin <H> }
    }.

Note the syntax [H : ... |- ...] to match on a sequent; in particular, here we just match _ + _ and _. Next we have a tactic bunion-eq-right to help us work with bunions (unions). Basically it turns =(M; N; bunion(A; B)) into

    =(lam(x.snd(x)) <inr(<>), M>; lam(x.snd(x)) <inr(<>), N>; bunion(A; B))

This is actually helpful because it turns out that once we unfold bunion we have to prove that M and N are in an image type; remember that bunion is just a thin layer of sugar on top of image types. In order to prove something is in an image type it needs to be of the form f a, where f in our case is lam(x. snd(x)).

This is done with

    Tactic bunion-eq-right {
      @{ [|- =(M; N; L ∪ R)] =>
           csubst [M ~ lam(x. snd(x)) <inr(<>), M>] [h.=(h;_;_)];
           aux { unfold <snd>; reduce; auto };
           csubst [N ~ lam(x. snd(x)) <inr(<>), N>] [h.=(_;h;_)];
           aux { unfold <snd>; reduce; auto };
       }
    }.

The key here is csubst. It takes a ceq as its first argument and a “targeting”. It then tries to replace each occurrence of the left side of the equality with the right. To find each occurrence, the targeting maps a variable to each occurrence. We’re allowed to use wildcards in the targeting as well. It also relegates actually proving the equality to a new subgoal. It’s easy enough to prove, so we discharge it with aux { unfold <snd>; reduce; auto }.

We only need to apply this tactic after eq-eq-base, which applies the rule I mentioned earlier for proving equalities well-formed under much more liberal conditions. Therefore we wrap these two tactics into one more convenient package.

    Tactic eq-base-tac {
      @{ [|- =(=(M; N; A); =(M'; N'; A'); _)] =>
           eq-eq-base; auto;
           bunion-eq-right; unfold <bunion>
       }
    }.

There is one last tactic in this series, this one to prove that member(X; X) ∈ U{i'} is well formed (a type). It starts by unfolding member into =(=(X; X; X); =(X; X; X); U{i}) and then applying the new tactic. Then we do other things. These things aren’t pretty. I suggest we just ignore them.

    Tactic impredicativity-wf-tac {
      unfold <member>; eq-base-tac;
      eq-cd; ?{@{[|- =(_; _; base)] => auto}};
      eq-cd @i'; ?{break-plus}; reduce; auto
    }.

Finally we have a tactic to prove void when not(P) and P both appear in the context. This is another nice application of match.

    Tactic contradiction {
      unfold <not implies>;
      @{ [H : P -> void, H' : P |- void] =>
           elim <H> [H']; unfold <member>; auto
       }
    }.

We start by unfolding not and implies. This gives us P -> void and P. From there, we just apply one to the other giving us a void as we wanted.

We’re now ready to prove our theorem. We start with

Theorem type-not-in-type : [¬ (U{i} ∈ U{i})] { }.

We now have the main subgoal

Remaining subgoals: [main] ⊢ not(member(U{i}; U{i}))

We can start by unfolding not and implies. Remember that not isn’t a built-in thing, it’s just sugar. By unfolding it we get the more primitive form, something we can actually apply the intro tactic to.

{ unfold <not implies>; intro }

Once unfolded, we’d get a goal along the lines of member(U{i}; U{i}) -> void. We immediately apply intro to this though. Now we have two subgoals, one is the result of applying intro, namely we have a hypothesis x : member(U{i}; U{i}) and a goal void. The second subgoal is the “well-formedness” obligation.

We have to prove that member(U{i}; U{i}) is a type in order to apply the intro tactic. This is a crucial difference between Coq-like systems and these proof-refinement logics. The process of demonstrating that what you’re proving is a proposition is intermingled with actually constructing the proof. It means you get to apply all the normal mathematical tools you have for proving things true to prove that they’re types. This gives us a lot of flexibility, but at the cost of sometimes-annoying subgoals. They’re annotated with [aux] (as opposed to [main]). This means we can target them all at once using the aux tactic.

To summarize that whole paragraph as JonPRL would say it, our proof state is

    [main]
    1. x : member(U{i}; U{i})
    ⊢ void

    [aux]
    ⊢ member(member(U{i}; U{i}); U{i'})

Let’s get rid of that auxiliary subgoal using impredicativity-wf-tac; this subgoal is in fact exactly what it was made for.

    {
      unfold <not implies>; intro;
      aux { impredicativity-wf-tac };
    }

This picks off that [aux] goal leaving us with just

    [main]
    1. x : member(U{i}; U{i})
    ⊢ void

Now we need to prove some lemmas. They state that Russell is actually a type. This is possible to do here and only here because we’ll need to actually use x in the process of proving it. It’s a very nice example of what explicitly proving well-formedness can give you! After all, the process of demonstrating that Russell is a type is nontrivial and only true in this hypothetical context; rather than just hoping that JonPRL is clever enough to figure that out for itself, we get to demonstrate it locally.

We’re going to use the assert tactic to get these lemmas. This lets us state a term, prove it as a subgoal and use it as a hypothesis in the main goal. If you’re logically minded, it’s cut.

    {
      unfold <not implies>; intro;
      aux { impredicativity-wf-tac };
      assert [Russell ∈ U{i}] <russell-wf>;
    }

The thing in <>s is the name it will get in our hypothetical context for the main goal. This leaves us with two subgoals. The aux one being the assertion and the main one being allowed to assume it.

    [aux]
    1. x : member(U{i}; U{i})
    ⊢ member(Russell; U{i})

    [main]
    1. x : member(U{i}; U{i})
    2. russell-wf : member(Russell; U{i})
    ⊢ void

We can prove this by basically working our way towards using impredicativity-wf-tac. We’ll use aux again to target the aux subgoal. We’ll start by unfolding everything and applying eq-cd.

    {
      unfold <not implies>; intro;
      aux { impredicativity-wf-tac };
      assert [Russell ∈ U{i}] <russell-wf>;
      aux {
        unfold <member Russell>; eq-cd; auto;
      };
    }

Remember that Russell is {x : U{i} | ¬ (x ∈ x)}

We just applied eq-cd to a subset type (Russell), so we get two subgoals: one says that U{i} is a type, the other says that if x ∈ U{i} then ¬ (x ∈ x) is also a type. In essence this just says that a subset type is a type if both components are types. The former goal is quite straightforward, so we apply auto to take care of it. Now we have one new subgoal to handle

    [main]
    1. x : =(U{i}; U{i}; U{i})
    2. x' : U{i}
    ⊢ =(not(member(x'; x')); not(member(x'; x')); U{i})

    [main]
    1. x : member(U{i}; U{i})
    2. russell-wf : member(Russell; U{i})
    ⊢ void

The second subgoal is just the rest of the proof; the first subgoal is what we want to handle. It says that if we have a type x then not(member(x; x)) is a type, albeit in ugly notation. To prove this we have to unfold not, so we’ll do that and apply eq-cd again.

    {
      unfold <not implies>; intro;
      aux { impredicativity-wf-tac };
      assert [Russell ∈ U{i}] <russell-wf>;
      aux {
        unfold <member Russell>; eq-cd; auto;
        unfold <not implies>; eq-cd; auto;
      };
    }

Remember that not(P) desugars to P -> void. Applying eq-cd is going to give us two subgoals, P is a type and void is a type. However, member(void; U{i}) is pretty easy to prove, so we apply auto again which takes care of one of our two new goals. Now we just have

    [main]
    1. x : =(U{i}; U{i}; U{i})
    2. x' : U{i}
    ⊢ =(member(x'; x'); member(x'; x'); U{i})

    [main]
    1. x : member(U{i}; U{i})
    2. russell-wf : member(Russell; U{i})
    ⊢ void

Now we’re getting to the root of the issue. We’re trying to prove that member(x'; x') is a type. This is happily handled by impredicativity-wf-tac which will use our assumption that U{i} ∈ U{i} because it’s smart like that.

    {
      unfold <not implies>; intro;
      aux { impredicativity-wf-tac };
      assert [Russell ∈ U{i}] <russell-wf>;
      aux {
        unfold <member Russell>; eq-cd; auto;
        unfold <not implies>; eq-cd; auto;
        impredicativity-wf-tac
      };
    }

Now we just have that main goal with the assumption russell-wf added.

    [main]
    1. x : member(U{i}; U{i})
    2. russell-wf : member(Russell; U{i})
    ⊢ void

Now we have a similar well-formedness goal to assert and prove. We want to prove that (Russell ∈ Russell) is a type. This is easier though; we can prove it directly using impredicativity-wf-tac.

    {
      unfold <not implies>; intro;
      aux { impredicativity-wf-tac };
      assert [Russell ∈ U{i}] <russell-wf>;
      aux {
        unfold <member Russell>; eq-cd; auto;
        unfold <not implies>; eq-cd; auto;
        impredicativity-wf-tac
      };
      assert [(Russell ∈ Russell) ∈ U{i}] <russell-in-russell-wf>;
      aux { impredicativity-wf-tac; cum @i; auto };
    }

That cum @i is a quirk of impredicativity-wf-tac. It basically means that instead of proving =(...; ...; U{i'}) we can prove =(...; ...; U{i}) since U{i} is a universe below U{i'} and all universes are cumulative.

Our goal is now

    [main]
    1. x : member(U{i}; U{i})
    2. russell-wf : member(Russell; U{i})
    3. russell-in-russell-wf : member(member(Russell; Russell); U{i})
    ⊢ void

OK, now that we have all these well-formedness lemmas the real reasoning can start. Our proof sketch is basically as follows

  1. Prove that Russell ∈ Russell is false. This is because if Russell were in Russell then, by definition of Russell, it wouldn’t be in Russell.
  2. Since not(Russell ∈ Russell) holds, Russell satisfies its own defining condition, so Russell ∈ Russell holds.
  3. Hilarity ensues.

Here’s the first assertion

    {
      unfold <not implies>; intro;
      aux { impredicativity-wf-tac };
      assert [Russell ∈ U{i}] <russell-wf>;
      aux {
        unfold <member Russell>; eq-cd; auto;
        unfold <not implies>; eq-cd; auto;
        impredicativity-wf-tac
      };
      assert [(Russell ∈ Russell) ∈ U{i}] <russell-in-russell-wf>;
      aux { impredicativity-wf-tac; cum @i; auto };
      assert [¬ (Russell ∈ Russell)] <not-russell-in-russell>;
    }

Here are our subgoals

    [aux]
    1. x : member(U{i}; U{i})
    2. russell-wf : member(Russell; U{i})
    3. russell-in-russell-wf : member(member(Russell; Russell); U{i})
    ⊢ not(member(Russell; Russell))

    [main]
    1. x : member(U{i}; U{i})
    2. russell-wf : member(Russell; U{i})
    3. russell-in-russell-wf : member(member(Russell; Russell); U{i})
    4. russell-not-in-russell : not(member(Russell; Russell))
    ⊢ void

We want to prove that first one. To start, let’s unfold that not and move member(Russell; Russell) to the hypothesis and use it to prove void. We do this with intro.

    {
      unfold <not implies>; intro;
      aux { ... };
      assert [Russell ∈ U{i}] <russell-wf>;
      aux { ... };
      assert [(Russell ∈ Russell) ∈ U{i}] <russell-in-russell-wf>;
      aux { ... };
      assert [¬ (Russell ∈ Russell)] <not-russell-in-russell>;
      aux {
        unfold <not implies>; intro @i;
        aux { assumption };
      }
    }

Notice that the well-formedness goal that intro generates is handled by our assumption! After all, it’s just member(Russell; Russell) ∈ U{i}, which we already proved. Now our subgoals look like this

    [main]
    1. x : member(U{i}; U{i})
    2. russell-wf : member(Russell; U{i})
    3. russell-in-russell-wf : member(member(Russell; Russell); U{i})
    4. x' : member(Russell; Russell)
    ⊢ void

    [main]
    1. x : member(U{i}; U{i})
    2. russell-wf : member(Russell; U{i})
    3. russell-in-russell-wf : member(member(Russell; Russell); U{i})
    4. russell-not-in-russell : not(member(Russell; Russell))
    ⊢ void

Now, since we’ve assumed that Russell ∈ Russell, we can say that Russell ~ X for some X where ¬ (X ∈ X) holds. But wait, that means ¬ (Russell ∈ Russell) holds; we’ve also assumed that Russell ∈ Russell holds, so we have a contradiction.

Let’s start by introducing that X (here called R). We’ll assert an R : Russell such that R ~ Russell. We do this using dependent pairs (here written (x : A) * B(x)).

    {
      unfold <not implies>; intro;
      aux { ... };
      assert [Russell ∈ U{i}] <russell-wf>;
      aux { ... };
      assert [(Russell ∈ Russell) ∈ U{i}] <russell-in-russell-wf>;
      aux { ... };
      assert [¬ (Russell ∈ Russell)] <not-russell-in-russell>;
      aux {
        unfold <not implies>; intro @i;
        aux { assumption };
        assert [(R : Russell) * R ~ Russell] <R-with-prop>;
        aux { intro [Russell] @i; auto };
      }
    }

We’ve proven this by intro. For proving dependent pairs we provide an explicit witness for the first component: to prove (x : A) * B(x) we say intro [Foo]. We then have the goals Foo ∈ A and B(Foo). Since subgoals are fully independent of each other, we have to give the witness for the first component upfront. It’s a little awkward; Jon’s working on it :)

In this case we use intro [Russell]. After this we have to prove that this witness has type Russell and then prove the second component holds. Happily, auto takes care of both of these obligations so intro [Russell] @i; auto handles it all.

Now we promptly eliminate this pair. It gives us two new facts, that R : Russell and R ~ Russell hold.

    {
      unfold <not implies>; intro;
      aux { ... };
      assert [Russell ∈ U{i}] <russell-wf>;
      aux { ... };
      assert [(Russell ∈ Russell) ∈ U{i}] <russell-in-russell-wf>;
      aux { ... };
      assert [¬ (Russell ∈ Russell)] <not-russell-in-russell>;
      aux {
        unfold <not implies>; intro @i;
        aux { assumption };
        assert [(R : Russell) * R ~ Russell] <R-with-prop>;
        aux { intro [Russell] @i; auto };
        elim <R-with-prop>; thin <R-with-prop>
      }
    }

This leaves our goal as

    [main]
    1. x : member(U{i}; U{i})
    2. russell-wf : member(Russell; U{i})
    3. russell-in-russell-wf : member(member(Russell; Russell); U{i})
    4. x' : member(Russell; Russell)
    5. s : Russell
    6. t : ceq(s; Russell)
    ⊢ void

    [main]
    1. x : member(U{i}; U{i})
    2. russell-wf : member(Russell; U{i})
    3. russell-in-russell-wf : member(member(Russell; Russell); U{i})
    4. russell-not-in-russell : not(member(Russell; Russell))
    ⊢ void

Now let’s invert on that s : Russell; we want to use it to conclude that ¬ (s ∈ s) holds, since that will give us ¬ (R ∈ R).

    {
      unfold <not implies>; intro;
      aux { ... };
      assert [Russell ∈ U{i}] <russell-wf>;
      aux { ... };
      assert [(Russell ∈ Russell) ∈ U{i}] <russell-in-russell-wf>;
      aux { ... };
      assert [¬ (Russell ∈ Russell)] <not-russell-in-russell>;
      aux {
        unfold <not implies>; intro @i;
        aux { assumption };
        assert [(R : Russell) * R ~ Russell] <R-with-prop>;
        aux { intro [Russell] @i; auto };
        elim <R-with-prop>; thin <R-with-prop>;
        unfold <Russell>; elim #5;
      }
    }

Now that we’ve unfolded all of those Russells our goal is a little bit harder to read; remember to mentally read {x : U{i} | not(member(x; x))} as Russell.

    [main]
    1. x : member(U{i}; U{i})
    2. russell-wf : member({x:U{i} | not(member(x; x))}; U{i})
    3. russell-in-russell-wf : member(member({x:U{i} | not(member(x; x))}; {x:U{i} | not(member(x; x))}); U{i})
    4. x' : member({x:U{i} | not(member(x; x))}; {x:U{i} | not(member(x; x))})
    5. s : {x:U{i} | not(member(x; x))}
    6. x'' : U{i}
    7. [t'] : not(member(x''; x''))
    8. t : ceq(x''; {x:U{i} | not(member(x; x))})
    ⊢ void

    [main]
    1. x : member(U{i}; U{i})
    2. russell-wf : member(Russell; U{i})
    3. russell-in-russell-wf : member(member(Russell; Russell); U{i})
    4. russell-not-in-russell : not(member(Russell; Russell))
    ⊢ void

Now we use #7 to derive that not(member(Russell; Russell)) holds.

    {
      unfold <not implies>; intro;
      aux { ... };
      assert [Russell ∈ U{i}] <russell-wf>;
      aux { ... };
      assert [(Russell ∈ Russell) ∈ U{i}] <russell-in-russell-wf>;
      aux { ... };
      assert [¬ (Russell ∈ Russell)] <not-russell-in-russell>;
      aux {
        unfold <not implies>; intro @i;
        aux { assumption };
        assert [(R : Russell) * R ~ Russell] <R-with-prop>;
        aux { intro [Russell] @i; auto };
        elim <R-with-prop>; thin <R-with-prop>;
        unfold <Russell>; elim #5;
        assert [¬ (Russell ∈ Russell)];
        aux { unfold <Russell>; };
      }
    }

This leaves us with 3 subgoals. The first one being the assertion.

    [aux]
    1. x : member(U{i}; U{i})
    2. russell-wf : member({x:U{i} | not(member(x; x))}; U{i})
    3. russell-in-russell-wf : member(member({x:U{i} | not(member(x; x))}; {x:U{i} | not(member(x; x))}); U{i})
    4. x' : member({x:U{i} | not(member(x; x))}; {x:U{i} | not(member(x; x))})
    5. s : {x:U{i} | not(member(x; x))}
    6. x'' : U{i}
    7. [t'] : not(member(x''; x''))
    8. t : ceq(x''; {x:U{i} | not(member(x; x))})
    ⊢ not(member({x:U{i} | not(member(x; x))}; {x:U{i} | not(member(x; x))}))

    [main]
    1. x : member(U{i}; U{i})
    2. russell-wf : member({x:U{i} | not(member(x; x))}; U{i})
    3. russell-in-russell-wf : member(member({x:U{i} | not(member(x; x))}; {x:U{i} | not(member(x; x))}); U{i})
    4. x' : member({x:U{i} | not(member(x; x))}; {x:U{i} | not(member(x; x))})
    5. s : {x:U{i} | not(member(x; x))}
    6. x'' : U{i}
    7. [t'] : not(member(x''; x''))
    8. t : ceq(x''; {x:U{i} | not(member(x; x))})
    9. H : not(member(Russell; Russell))
    ⊢ void

    [main]
    1. x : member(U{i}; U{i})
    2. russell-wf : member(Russell; U{i})
    3. russell-in-russell-wf : member(member(Russell; Russell); U{i})
    4. russell-not-in-russell : not(member(Russell; Russell))
    ⊢ void

To prove this we need to substitute the unfolded Russell for x''; from there it’s immediate by assumption. We perform the substitution with chyp-subst. This takes a direction to substitute, the hypothesis to use, and another targeting telling us where to apply the substitution.

    {
      unfold <not implies>; intro;
      aux { ... };
      assert [Russell ∈ U{i}] <russell-wf>;
      aux { ... };
      assert [(Russell ∈ Russell) ∈ U{i}] <russell-in-russell-wf>;
      aux { ... };
      assert [¬ (Russell ∈ Russell)] <not-russell-in-russell>;
      aux {
        unfold <not implies>; intro @i;
        aux { assumption };
        assert [(R : Russell) * R ~ Russell] <R-with-prop>;
        aux { intro [Russell] @i; auto };
        elim <R-with-prop>; thin <R-with-prop>;
        unfold <Russell>; elim #5;
        assert [¬ (Russell ∈ Russell)];
        aux {
          unfold <Russell>;
          chyp-subst ← #8 [h. ¬ (h ∈ h)];
        };
      }
    }

This leaves us with a much more tractable goal.

    [main]
    1. x : member(U{i}; U{i})
    2. russell-wf : member({x:U{i} | not(member(x; x))}; U{i})
    3. russell-in-russell-wf : member(member({x:U{i} | not(member(x; x))}; {x:U{i} | not(member(x; x))}); U{i})
    4. x' : member({x:U{i} | not(member(x; x))}; {x:U{i} | not(member(x; x))})
    5. s : {x:U{i} | not(member(x; x))}
    6. x'' : U{i}
    7. [t'] : not(member(x''; x''))
    8. t : ceq(x''; {x:U{i} | not(member(x; x))})
    ⊢ not(member(x''; x''))

    [main]
    1. x : member(U{i}; U{i})
    2. russell-wf : member({x:U{i} | not(member(x; x))}; U{i})
    3. russell-in-russell-wf : member(member({x:U{i} | not(member(x; x))}; {x:U{i} | not(member(x; x))}); U{i})
    4. x' : member({x:U{i} | not(member(x; x))}; {x:U{i} | not(member(x; x))})
    5. s : {x:U{i} | not(member(x; x))}
    6. x'' : U{i}
    7. [t'] : not(member(x''; x''))
    8. t : ceq(x''; {x:U{i} | not(member(x; x))})
    9. H : not(member(Russell; Russell))
    ⊢ void

    [main]
    1. x : member(U{i}; U{i})
    2. russell-wf : member(Russell; U{i})
    3. russell-in-russell-wf : member(member(Russell; Russell); U{i})
    4. russell-not-in-russell : not(member(Russell; Russell))
    ⊢ void

We’d like to simply apply assumption, but it’s not immediately applicable due to some technical details (basically we can only apply an assumption in a proof-irrelevant context, and we have to unfold Russell and introduce it to demonstrate that it’s irrelevant). So just read what’s left as a (very) convoluted assumption.

    {
      unfold <not implies>; intro;
      aux { ... };
      assert [Russell ∈ U{i}] <russell-wf>;
      aux { ... };
      assert [(Russell ∈ Russell) ∈ U{i}] <russell-in-russell-wf>;
      aux { ... };
      assert [¬ (Russell ∈ Russell)] <not-russell-in-russell>;
      aux {
        unfold <not implies>; intro @i;
        aux { assumption };
        assert [(R : Russell) * R ~ Russell] <R-with-prop>;
        aux { intro [Russell] @i; auto };
        elim <R-with-prop>; thin <R-with-prop>;
        unfold <Russell>; elim #5;
        assert [¬ (Russell ∈ Russell)];
        aux {
          unfold <Russell>;
          chyp-subst ← #8 [h. ¬ (h ∈ h)];
          unfold <not implies>; intro;
          aux { impredicativity-wf-tac };
          contradiction
        };
      }
    }

Now we’re almost through this assertion, our subgoals look like this (pay attention to 9 and 4)

[main]
1. x : member(U{i}; U{i})
2. russell-wf : member({x:U{i} | not(member(x; x))}; U{i})
3. russell-in-russell-wf : member(member({x:U{i} | not(member(x; x))}; {x:U{i} | not(member(x; x))}); U{i})
4. x' : member({x:U{i} | not(member(x; x))}; {x:U{i} | not(member(x; x))})
5. s : {x:U{i} | not(member(x; x))}
6. x'' : U{i}
7. [t'] : not(member(x''; x''))
8. t : ceq(x''; {x:U{i} | not(member(x; x))})
9. H : not(member(Russell; Russell))
⊢ void

[main]
1. x : member(U{i}; U{i})
2. russell-wf : member(Russell; U{i})
3. russell-in-russell-wf : member(member(Russell; Russell); U{i})
4. russell-not-in-russell : not(member(Russell; Russell))
⊢ void

Once we unfold that Russell we have an immediate contradiction so unfold <Russell>; contradiction solves it.

{ unfold <not implies>; intro;
  aux { ... };
  assert [Russell ∈ U{i}] <russell-wf>;
  aux { ... };
  assert [(Russell ∈ Russell) ∈ U{i}] <russell-in-russell-wf>;
  aux { ... };
  assert [¬ (Russell ∈ Russell)] <not-russell-in-russell>;
  aux {
    unfold <not implies>; intro @i;
    aux { assumption };
    assert [(R : Russell) * R ~ Russell] <R-with-prop>;
    aux { intro [Russell] @i; auto };
    elim <R-with-prop>; thin <R-with-prop>;
    unfold <Russell>; elim #5;
    assert [¬ (Russell ∈ Russell)];
    aux {
      unfold <Russell>;
      chyp-subst ← #8 [h. ¬ (h ∈ h)];
      unfold <not implies>; intro;
      aux { impredicativity-wf-tac };
      contradiction
    };
    unfold <Russell>; contradiction
  }
}

This takes care of this subgoal so now we’re back on the main goal. This time though we have an extra hypothesis which will provide the leverage we need to prove our next assertion.

[main]
1. x : member(U{i}; U{i})
2. russell-wf : member(Russell; U{i})
3. russell-in-russell-wf : member(member(Russell; Russell); U{i})
4. russell-not-in-russell : not(member(Russell; Russell))
⊢ void

Now we’re going to claim that Russell is in fact in Russell. This will follow from the fact that we’ve proved already that Russell isn’t in Russell (yeah, it seems pretty paradoxical already).

{ unfold <not implies>; intro;
  aux { ... };
  assert [Russell ∈ U{i}] <russell-wf>;
  aux { ... };
  assert [(Russell ∈ Russell) ∈ U{i}] <russell-in-russell-wf>;
  aux { ... };
  assert [¬ (Russell ∈ Russell)] <not-russell-in-russell>;
  aux { ... };
  assert [Russell ∈ Russell];
}

Giving us

[aux]
1. x : member(U{i}; U{i})
2. russell-wf : member(Russell; U{i})
3. russell-in-russell-wf : member(member(Russell; Russell); U{i})
4. russell-not-in-russell : not(member(Russell; Russell))
⊢ member(Russell; Russell)

[main]
1. x : member(U{i}; U{i})
2. russell-wf : member(Russell; U{i})
3. russell-in-russell-wf : member(member(Russell; Russell); U{i})
4. russell-not-in-russell : not(member(Russell; Russell))
5. H : member(Russell; Russell)
⊢ void

Proving this is pretty straightforward, we only have to demonstrate that not(Russell ∈ Russell) and Russell ∈ U{i}, both of which we have as assumptions. The rest of the proof is just more well-formedness goals.

First we unfold everything and apply eq-cd. This gives us 3 subgoals, the first two are Russell ∈ U{i} and ¬(Russell ∈ Russell). Since we have these as assumptions we’ll use main {assumption}. That will target both these goals and prove them immediately. Here by using main we avoid applying this to the well-formedness goal, which in this case actually isn’t the assumption.

{ unfold <not implies>; intro;
  aux { ... };
  assert [Russell ∈ U{i}] <russell-wf>;
  aux { ... };
  assert [(Russell ∈ Russell) ∈ U{i}] <russell-in-russell-wf>;
  aux { ... };
  assert [¬ (Russell ∈ Russell)] <not-russell-in-russell>;
  aux { ... };
  assert [Russell ∈ Russell];
  aux {
    unfold <member Russell>; eq-cd; unfold <member>;
    main { assumption };
  };
}

This just leaves us with one awful well-formedness goal requiring us to prove that not(=(x; x; x)) is a type if x is a type. We actually proved something similar back when we proved that Russell was well formed. The proof is the same as then: just unfold, eq-cd and impredicativity-wf-tac. We use ?{!{auto}} to only apply auto in a subgoal where it immediately proves it. Here ?{} says “run this or do nothing” and !{} says “run this; if it succeeds, stop; if it does anything else, fail”. This is not an interesting portion of the proof, don’t burn too many cycles trying to figure this out.

{ unfold <not implies>; intro;
  aux { ... };
  assert [Russell ∈ U{i}] <russell-wf>;
  aux { ... };
  assert [(Russell ∈ Russell) ∈ U{i}] <russell-in-russell-wf>;
  aux { ... };
  assert [¬ (Russell ∈ Russell)] <not-russell-in-russell>;
  aux { ... };
  assert [Russell ∈ Russell] <russell-in-russell>;
  aux {
    unfold <member Russell>; eq-cd; unfold <member>;
    main { assumption };
    unfold <not implies>; eq-cd; ?{!{auto}};
    impredicativity-wf-tac;
  };
}

Now we just have the final subgoal to prove. We’re actually in a position to do so now.

[main]
1. x : member(U{i}; U{i})
2. russell-wf : member(Russell; U{i})
3. russell-in-russell-wf : member(member(Russell; Russell); U{i})
4. russell-not-in-russell : not(member(Russell; Russell))
5. russell-in-russell : member(Russell; Russell)
⊢ void

Now that we’ve shown P and not(P) hold at the same time all we need to do is apply contradiction and we’re done.

Theorem type-not-in-type [¬ (U{i} ∈ U{i})] {
  unfold <not implies>; intro;
  aux { ... };
  assert [Russell ∈ U{i}] <russell-wf>;
  aux { ... };
  assert [(Russell ∈ Russell) ∈ U{i}] <russell-in-russell-wf>;
  aux { ... };
  assert [¬ (Russell ∈ Russell)] <not-russell-in-russell>;
  aux { ... };
  assert [Russell ∈ Russell] <russell-in-russell>;
  aux { ... };
  contradiction
}.

And there you have it, a complete proof of Russell’s paradox fully formalized in JonPRL! We actually proved a slightly stronger result than just that the type of types cannot be in itself, we proved that at any point in the hierarchy of universes (the first of which is Type/*/whatever) if you tie it off, you’ll get a contradiction.

Wrap Up

I hope you found this proof interesting. Even if you’re not at all interested in JonPRL, it’s nice to see that allowing one to have U{i} : U{i} or * :: * gives you the ability to have a type like Russell and with it, void. I also find it especially pleasing that we can prove something like this in JonPRL, it’s growing up so fast.

Thanks to Jon for greatly improving the original proof we had.

Categories: Offsite Blogs

Haskell for Mac

Lambda the Ultimate - Tue, 08/25/2015 - 5:13pm

Available here with hackernews and reddit discussions ongoing.

Even though I'm not a big fan of Haskell, I'm pretty excited about this. It represents a trend where PL is finally taking holistic programmer experiences seriously, and a move toward interactivity in program development that takes advantage of our (a) rich type systems, and (b) increasing budget of computer cycles. Even the fact that they are trying to sell this is good: if people can get used to paying for tooling, that will encourage even more tooling via a healthy market feedback loop. The only drawback is the MAS sandbox; app stores need to learn how to accept developer tools without crippling them.

Categories: Offsite Discussion

The GHC Team: GHC Weekly News - 2015/08/06

Planet Haskell - Tue, 08/25/2015 - 3:14pm

GHC Weekly News - 6 Aug 2015

Hello *,

Here is a rather belated Weekly News which I found sitting nearly done on my work-queue. I hope this will make for a good read despite its age. The next edition of the Weekly News will be posted soon.

Warnings for missed specialization opportunities

Simon Peyton Jones recently introduced (commit a4261549) a warning in master to alert users when the compiler was unable to specialize an imported binding despite it being marked as INLINABLE. This change was motivated by #10720, where the reporter observed poor runtime performance despite taking care to ensure his binding could be inlined. Up until now, ensuring that the compiler's optimizations meet the user's expectations would require a careful look at the produced Core. With this change the user is notified of exactly where the compiler had to stop specializing, along with a helpful hint on where to add an INLINABLE pragma.
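As a hypothetical illustration of the pragma in question (module and names made up here, not from the ticket), marking a polymorphic binding INLINABLE keeps its unfolding in the interface file so that call sites in other modules can be specialised to concrete types instead of going through a dictionary:

```haskell
-- Hypothetical example: INLINABLE exposes the unfolding of a polymorphic
-- function so importing modules can specialise it (e.g. to Int or Double).
{-# INLINABLE norm #-}
norm :: Num a => [a] -> a
norm = sum . map (\x -> x * x)

main :: IO ()
main = print (norm [3, 4 :: Int])  -- prints 25
```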

Ticky-Ticky profiling

Recently I have been looking into breathing life back into GHC's ticky-ticky profiling mechanism. When enabled, ticky-ticky maintains low-level counters of various runtime-system events. These include closure entries, updates, and allocations. While ticky doesn't provide nearly the detail that the cost-center profiler allows, it is invisible to the Core-to-Core optimization passes and has minimal runtime overhead (manifested as a bit more memory traffic due to counter updates). For this reason, the ticky-ticky profiler can be a useful tool for those working on the Core simplifier.

Sadly, ticky-ticky has fallen into quite a state of disrepair in recent years as the runtime system and native code generator have evolved. As the beginning of an effort to resuscitate the ticky-ticky profiler I've started putting together a ​list of the counters currently implemented and whether they can be expected to do something useful. Evaluating the functionality of these counters is non-trivial, however, so this will be an on-going effort.

One of our goals is to eventually do a systematic comparison of the heap allocation numbers produced by the ticky-ticky profiler and the cost-center profiler. While this will help validate some of the more coarse-grained counters exposed by ticky, most of them will need a more thorough read-through of the runtime system to verify.

integer-gmp Performance

Since the 7.10.2 release much of my effort has been devoted to characterizing the performance of various benchmarks over various GHC versions. This is part of an effort to find places where we have regressed in the past few versions. One product of this effort is a complete comparison of ​results from our nofib benchmark suite ranging from 7.4.2 to 7.10.1.

The good news is there are essentially no disastrous regressions. Moreover, on the mean, runtimes are over 10% faster than they were in 7.4.2. There are, however, a few cases which have regressed. The runtime of the integer test, for instance, has increased by 7%. Looking at the trend across versions, it becomes apparent that the regression began with 7.10.1.

One of the improvements that was introduced with 7.10 was a rewrite of the integer-gmp library, which this benchmark tests heavily. To isolate this potential cause, I recompiled GHC 7.10.1 with the old integer-gmp-0.5. Comparing 7.10.1 with the two integer-gmp versions reveals a 4% increase in allocations.

While we can't necessarily attribute all of the runtime increase to these allocations, they are something that should be addressed if possible. Herbert Valerio Riedel, the author of the integer-gmp rewrite, believes that the cause may be due to the tendency for the rewrite to initially allocate a conservatively-sized backing ByteArray# for results. This leads to increased allocations due to the reallocations that are later required to accommodate larger-than-expected results.

While being more liberal in the initial allocation sizes would solve the reallocation issue, this approach may substantially increase working-set sizes and heap fragmentation for integer-heavy workloads. For this reason, Herbert will be looking into exploiting a feature of our heap allocator. Heap allocations in GHC occur by bumping a pointer into an allocation block. Not only is this a very efficient means of allocating, it potentially allows one to efficiently grow an existing allocation. In this case, if we allocate a buffer and soon after realize that our request was too small we can simply bump the heap pointer by the size deficit, so long as no other allocations have occurred since our initial allocation. We can do this since we know that the memory after the heap pointer is available; we merely need to ensure that the current block we are allocating into is large enough.

Simon Marlow and Herbert will be investigating this possibility in the coming weeks.

D924: mapM_ and traverse_

As discussed in the ​most recent Weekly News, one issue on our plate at the moment is ​Phab:D924, which attempted to patch up two remaining facets of the Applicative-Monad Proposal,

  1. Remove the override of mapM for the [] Traversal instance
  2. Rewrite mapM_ in terms of traverse_

While (1) seems like an obvious cleanup, (2) is a bit tricky. As noted last time, traverse_ appears to give rise to non-linear behavior in this context.
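For reference, the two formulations under discussion look roughly like this (sketched with local names; the real definitions live in Data.Foldable). For a monad like IO both folds run in linear time, but for certain Applicative instances the (*>)-based version can build up structure non-linearly:

```haskell
-- Sketch of the two formulations; local names, not the real Data.Foldable code.
mapMList_ :: Monad m => (a -> m b) -> [a] -> m ()
mapMList_ f = foldr (\x k -> f x >> k) (return ())

traverseList_ :: Applicative f => (a -> f b) -> [a] -> f ()
traverseList_ f = foldr (\x k -> f x *> k) (pure ())

main :: IO ()
main = do
  mapMList_ print [1, 2 :: Int]
  traverseList_ print [3, 4 :: Int]
```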

akio has contributed an insightful analysis (https://ghc.haskell.org/trac/ghc/timeline?from=2015-08-01T10%3A00%3A33Z&precision=second) shedding light on the cause of this behavior. Given that the quadratic behavior is intrinsic to the Applicative formulation, we'll be sending this matter back to the Core Libraries Committee to inform their future design decisions.

That is all for this week!

Cheers,

~ Ben

Categories: Offsite Blogs

Munich Haskell Meeting,2015-08-27 < at > 19:30 Augustiner-Keller Biergarten

haskell-cafe - Tue, 08/25/2015 - 1:16pm
Dear all, This week, our monthly Munich Haskell Meeting will take place again on Thursday, August 27 at **Augustiner-Keller Biergarten** (Arnulfstr. 52) in the self-service area (roughly here: https://goo.gl/maps/QmnMc) at 19h30. For details see: http://www.haskell-munich.de/dates Please note the changed location this time! The weather will be nice. If you plan to join, please add yourself to this dudle. It is OK to add yourself to the dudle anonymously or pseudonymously. https://dudle.inf.tu-dresden.de/haskell-munich-aug-2015/ Everybody is welcome! cu,
Categories: Offsite Discussion

What's the status of WinGHCi?

Haskell on Reddit - Tue, 08/25/2015 - 11:26am

Having seen Haskell for Mac announcement today, I've suddenly remembered that such thing as WinGHCi (which is included in Haskell Platform for Windows) exists. After a quick web search, I've found out that it's hosted on Google Code and it seems like no one exported it to Github / anywhere else. So, I'm wondering if anyone is maintaining it now?

submitted by dauntingdrone
[link] [3 comments]
Categories: Incoming News

ANNOUNCE: Haskell for Mac, 1.0

General haskell list - Tue, 08/25/2015 - 10:22am
It is my great pleasure to announce Haskell for Mac: http://haskellformac.com

Haskell for Mac is an interactive Haskell development environment for OS X Yosemite that is trivial to install, easy to use, and that features Haskell playgrounds. Haskell playgrounds are very much like a persistent, constantly re-evaluated GHCi session. All Haskell commands are evaluated as you edit them, constantly displaying up-to-date results and type information. Moreover, Haskell playgrounds support rich interactive graphics and animations (based on the Rasterific, Diagram, and Chart packages as well as Apple’s 2D animation and games framework SpriteKit). Haskell for Mac is an advanced programming environment for an advanced language.

Enjoy! Manuel

PS: The focus of the first release is on learning and prototyping with minimal project management support (essentially equivalent to Cabal files with one executable section).
Categories: Incoming News

Dimitri Sabadie: Contravariance and luminance to add safety to uniforms

Planet Haskell - Tue, 08/25/2015 - 9:02am

It’s been a few days since I last posted about luminance. I’m on holidays, thus I can’t be as involved in the development of the graphics framework as I’m used to on a daily basis. Although I’ve been producing less in the past few days, I’ve been actively thinking about something very important: uniforms.

What people usually do

Uniforms are a way to pass data to shaders. I won’t talk about uniform blocks nor uniform buffers – I’ll make a dedicated post for that purpose. The common OpenGL uniform flow is the following:

  1. you ask OpenGL to retrieve the location of a GLSL uniform through the function glGetUniformLocation, or you can use an explicit location if you want to handle the semantics on your own ;
  2. you use that location, the identifier of your shader program and send the actual values with the proper glProgramUniform.

You typically don’t retrieve the location each time you need to send values to the GPU – you only retrieve them once, while initializing.

The first thing to make uniforms more elegant and safer is to provide a typeclass to provide a shared interface. Instead of using several functions for each type of uniform – glProgramUniform1i for Int32, glProgramUniform1f for Float and so on – we can just provide a function that will call the right OpenGL function for the type:

class Uniform a where
  sendUniform :: GLuint -> GLint -> a -> IO ()

instance Uniform Int32 where
  sendUniform = glProgramUniform1i

instance Uniform Float where
  sendUniform = glProgramUniform1f

-- and so on…

That’s the first step, and I think everyone should do that. However, this approach has several drawbacks:

  • it still relies on side-effects; that is, we can call sendUniform pretty much everywhere ;
  • imagine we have a shader program that requires several uniforms to be passed each time we draw something; what happens if we forget to call a sendUniform? If we haven’t sent the uniform yet, we might have an undefined behavior. If we already have, we will override all future draws with that value, which is very wrong… ;
  • with that way of representing uniforms, we have a very imperative interface; we can have a more composable and pure approach than that, hence enabling us to gain in power and flexibility.
What luminance used to do

In my luminance package, I used to represent uniforms as values.

newtype U a = U { runU :: a -> IO () }

We can then alter the Uniform typeclass to make it simpler:

class Uniform a where
  toU :: GLuint -> GLint -> U a

instance Uniform Int32 where
  toU prog l = U $ glProgramUniform1i prog l

instance Uniform Float where
  toU prog l = U $ glProgramUniform1f prog l

We also have a pure interface now. I used to provide another type, Uniformed, to be able to send uniforms without exposing IO, and an operator to accumulate uniform settings, (@=):

newtype Uniformed a = Uniformed { runUniformed :: IO a } deriving (Applicative,Functor,Monad)

(@=) :: U a -> a -> Uniformed ()
U f @= a = Uniformed $ f a

Pretty simple.

The new uniform interface

The problem with that is that we still have the completion problem and the side-effects, because we just wrap them without adding anything special – Uniformed is isomorphic to IO. We have no way to create a type and ensure that all uniforms have been sent down to the GPU…

Contravariance to save us!

If you’re an advanced Haskell programmer, you might have noticed something very interesting about our U type. It’s contravariant in its argument. What’s cool about that is that we can create new uniform types – new Us – by contramapping over those types! That means we can enrich the scope of the hardcoded Uniform instances, because the only way to get a U is to use Uniform.toU. With contravariance, we can – in theory – extend those types to all types.

Sounds handy, eh? First things first: the contravariant functor. A contravariant functor is a functor that flips the direction of the morphism:

class Contravariant f where
  contramap :: (a -> b) -> f b -> f a
  (>$) :: b -> f b -> f a

contramap is the contravariant version of fmap and (>$) is the contravariant version of (<$). If you’re not used to contravariance or if it’s the first time you see such a type signature, it might seem confusing or even magic. Well, that’s mathematical magic at play! But you’ll see just below that there’s no magic, no trick in the implementation.

Because U is contravariant in its argument, we can define a Contravariant instance:

instance Contravariant U where
  contramap f u = U $ runU u . f

As you can see, nothing tricky here. We just apply the (a -> b) function on the input of the resulting U a so that we can pass it to u, and we just runU the whole thing.

A few friends of mine – not Haskellers though – told me things like “That’s just theory bullshit, no one needs to know what a contravariant thingy stuff is!”. Well, here’s an example:

data Color = Color {
    colorName :: String
  , colorValue :: (Float,Float,Float,Float)
  }

Even though we have an instance of Uniform for (Float,Float,Float,Float), there will never be an instance of Uniform for Color, so we can’t have a U Color… Or can we?

uColor = contramap colorValue float4U

The type of uColor is… U Color! That works because contravariance enabled us to adapt the Color structure so that we end up on (Float,Float,Float,Float). The contravariance property is then a very great ally in such situations!
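Here is a self-contained sketch of that adaptation (the U type is reproduced locally, and float4U is a stand-in that merely records the values it receives instead of talking to OpenGL):

```haskell
import Data.IORef

-- Local copy of luminance's U: a consumer of values of type a.
newtype U a = U { runU :: a -> IO () }

-- contramap, specialised to U: adapt a U b into a U a via a projection.
contramapU :: (a -> b) -> U b -> U a
contramapU f u = U (runU u . f)

data Color = Color
  { colorName  :: String
  , colorValue :: (Float, Float, Float, Float)
  }

main :: IO ()
main = do
  sink <- newIORef []
  -- stand-in for the hardcoded 4-float uniform setter
  let float4U = U (\v -> modifyIORef sink (v :))
      uColor  = contramapU colorValue float4U
  runU uColor (Color "red" (1, 0, 0, 1))
  readIORef sink >>= print  -- prints [(1.0,0.0,0.0,1.0)]
```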

More contravariance

We can even dig in deeper! Something cool would be to do the same thing, but for several fields. Imagine a mouse:

data Mouse = Mouse {
    mouseX :: Float
  , mouseY :: Float
  }

We’d like to find a cool way to have U Mouse, so that we can send the mouse cursor to shaders. We’d like to contramap over mouseX and mouseY. A bit like with Functor + Applicative:

getMouseX :: IO Float
getMouseY :: IO Float

getMouse :: IO Mouse
getMouse = Mouse <$> getMouseX <*> getMouseY

We could have the same thing for contravariance… And guess what. That exists, and that’s called divisible contravariant functors! A Divisible contravariant functor is the exact contravariant version of Applicative!

class (Contravariant f) => Divisible f where
  divide :: (a -> (b,c)) -> f b -> f c -> f a
  conquer :: f a

divide is the contravariant version of (<*>) and conquer is the contravariant version of pure. You know that pure’s type is a -> f a, which is isomorphic to (() -> a) -> f a. Take the contravariant version of (() -> a) -> f a, you end up with (a -> ()) -> f a. (a -> ()) is isomorphic to (), so we can simplify the whole thing to f a. Here you have conquer. Thank you to Edward Kmett for helping me understand that!

Let’s see how we can implement Divisible for U!

instance Divisible U where
  divide f p q = U $ \a -> do
    let (b,c) = f a
    runU p b
    runU q c
  conquer = U . const $ pure ()

And now let’s use it to get a U Mouse!

let uMouse = divide (\(Mouse mx my) -> (mx,my)) mouseXU mouseYU

And here we have uMouse :: U Mouse! As you can see, if you have several uniforms – for each fields of the type, you can divide your type and map all fields to the uniforms by applying several times divide.
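Putting the whole thing together as a self-contained sketch (the classes are reproduced locally since Divisible lives in the contravariant package, and the two field uniforms are stand-ins that record what they receive):

```haskell
import Data.IORef

newtype U a = U { runU :: a -> IO () }

-- Local copies of the classes (Divisible is in the contravariant package).
class Contravariant f where
  contramap :: (a -> b) -> f b -> f a

class Contravariant f => Divisible f where
  divide  :: (a -> (b, c)) -> f b -> f c -> f a
  conquer :: f a

instance Contravariant U where
  contramap f u = U (runU u . f)

instance Divisible U where
  divide f p q = U $ \a -> let (b, c) = f a in runU p b >> runU q c
  conquer = U (const (pure ()))

data Mouse = Mouse { mouseX :: Float, mouseY :: Float }

main :: IO ()
main = do
  sink <- newIORef []
  -- stand-ins for the per-field uniform setters
  let mouseXU = U (\x -> modifyIORef sink (("x", x) :))
      mouseYU = U (\y -> modifyIORef sink (("y", y) :))
      uMouse  = divide (\(Mouse mx my) -> (mx, my)) mouseXU mouseYU
  runU uMouse (Mouse 3 4)
  readIORef sink >>= print . reverse  -- prints [("x",3.0),("y",4.0)]
```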

The current implementation is almost the one shown here. There’s also a Decidable instance, but I won’t talk about that for now.

The cool thing about that is that I can lose the Uniformed monadic type and rely only on U. Thanks to the Divisible typeclass, we have completion, and we can’t override future uniforms then!

I hope you’ve learnt something cool and useful through this. Keep in mind that category abstractions are powerful and are useful in some contexts.

Keep hacking around, keep being curious. A Haskeller never stops learning! And that’s what’s so cool about Haskell! Keep the vibe, and see you in another luminance post soon!

Categories: Offsite Blogs

Hackage Security Beta Release

Haskell on Reddit - Tue, 08/25/2015 - 8:50am
Categories: Incoming News

Well-Typed.Com: Hackage Security Beta Release

Planet Haskell - Tue, 08/25/2015 - 8:49am

Well-Typed and the Industrial Haskell Group are very happy to announce the beta release of the hackage-security library, along with its integration in hackage-server and cabal-install. The new security features of hackage are now deployed on the central server hackage.haskell.org and there is a beta release of cabal available. You can install it through

cabal install \
  http://www.well-typed.com/hackage-security/Cabal-1.23.0.0.tar.gz \
  http://www.well-typed.com/hackage-security/cabal-secure-beta.tar.gz

This will install a cabal-secure-beta binary which you can use alongside your normal installation of cabal.

For a more detailed discussion of the rationale behind this project, see the announcement of the alpha release or the initial announcement of this project. We will also be giving a talk about the details at the upcoming Haskell Implementors Workshop. In the remainder of this blog post we will describe what’s available, right now.

What’s in it for you? Increased security

The Hackage server now does index signing. This means that if an attacker sits between you and Hackage and tries to feed you different packages than you think you are installing, cabal will notice this and throw a security exception. Index signing provides no (or very limited) security against compromise of the central server itself, but allows clients to verify that what they are getting is indeed what is on the central server.

 (Untrusted) mirrors

A very important corollary of the previous point is that we can now have untrusted mirrors. Anyone can offer to mirror hackage and we can gratefully accept these offers without having to trust those mirror operators. Whether we are downloading from the mirror or from the primary server, the new security features make it possible to verify that what we are downloading is what is on the primary server.

In practice this mean we can have mirrors at all, and we can use them fully automatically with no client side configuration required. This should give a huge boost to the reliability of Hackage; even AWS goes down from time to time but properly decentralised mirrors should mean there’s always a recent snapshot available.

On the client-side, the very first time cabal updates from the primary server it also finds out what mirrors are available. On subsequent updates it will automatically make use of any of those mirrors: if it encounters a problem with one it will try another. Updates to the list of mirrors are also fully automatic.

For operating a mirror, we have extended the hackage-mirror client (currently bundled in the hackage-server package) so that it can be used to mirror a Hackage repository to a simple set of local files which can then be served by an ordinary HTTP server.

We already have one mirror available in time for the beta. The OSU Open Source Lab have very kindly agreed to host a Hackage mirror for the benefit of the Haskell community. This mirror is now live at http://ftp.osuosl.org/pub/hackage/, but we didn’t need to tell you that: (the beta release of) cabal will notice this automatically without any configuration on the part of the user thanks to hackage.haskell.org/mirrors.json.

Getting a mirror up and running is very easy, so if you would like to host a public Hackage mirror, then please do get in touch; during the beta period get in touch with us, or later on get in touch with the Hackage admins.

Incremental updates

Hackage provides a 00-index.tar.gz resource which is a tarball containing the .cabal files for all packages available on Hackage. It is this file that cabal downloads when you call cabal update, and that it uses during dependency resolution.

However, this file is quite large, which is why cabal update can take a few seconds to complete. In fact at nearly 10Mb the index is now considerably larger than almost all package source tarballs.

As part of the security work we have had to extend this index with extra security metadata, making the file even larger. So we have also taken the opportunity to dramatically reduce download sizes by allowing clients to update this file incrementally. The index tarball is now extended in an append-only way. This means that once cabal has downloaded the tarball once, on subsequent updates it can just download the little bit it doesn’t yet have. To avoid making existing clients download the new larger index file each time, the 00-index.tar.gz is kept as it always was and repositories supporting the new features additionally provide a 01-index.tar.gz. In future we could additionally provide a .tar.xz variant and thereby keep the first-time update size to a minimum.
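The append-only scheme can be sketched with plain files (cabal does the equivalent over HTTP using range requests; the helper name here is made up for illustration): remember the size you have already seen and fetch only the suffix beyond it.

```haskell
import Control.Monad (replicateM)
import System.IO

-- Hypothetical helper: read only the bytes past a remembered offset of an
-- append-only file, returning the new data and the new offset.
fetchNew :: FilePath -> Integer -> IO (String, Integer)
fetchNew path offset = do
  h <- openFile path ReadMode
  size <- hFileSize h
  hSeek h AbsoluteSeek offset
  new <- replicateM (fromIntegral (size - offset)) (hGetChar h)
  hClose h
  return (new, size)

main :: IO ()
main = do
  writeFile "01-index.log" "pkg-1.0\n"
  (first, off) <- fetchNew "01-index.log" 0
  appendFile "01-index.log" "pkg-1.1\n"   -- the index only ever grows
  (delta, _) <- fetchNew "01-index.log" off
  putStr (first ++ "--\n" ++ delta)
```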

The append-only nature of the index has additional benefits; in effect, the index becomes a log of Hackage’s history. This log can be used for various purposes; for example, we can track how install plans for packages change over time. As another example, Herbert Valerio Riedel has been working on a “package-index wayback” feature for Cabal. This uses the index to recreate a past view of the package index for recovering now bit-rotted install-plans that were known to work in the past.

There are currently a few known issues that make cabal update slower than it needs to be, even though it’s doing an incremental update. This will be addressed before the official release.

Host your own private repository

It has always been possible to host your own Hackage repository, either for private packages or as a mirror of the public collection, but it has not always been convenient.

There is the “smart” server in the form of the hackage-server, which while relatively easy to build and run, isn’t as simple as just a bunch of files. There has also always been the option of a “dumb” server, in the form of a bunch of files in the right format hosted by an ordinary HTTP server. While the format is quite simple (reusing the standard tar format), there have not been convenient tools for creating or managing these file based repositories.

As part of the security work we have made a simple command line tool to create and manage file based Hackage repositories, including all the necessary security metadata. This tool has been released as hackage-repo-tool on Hackage.

So whether you want a private mirror of the public packages, or a repository for your own private packages, or both, we hope these new tools will make that much more convenient. Currently documentation on how to use these tools is still somewhat lacking; this is something we will address after this beta release. Getting started is not difficult; there are some brief instructions in the reddit discussion, and feel free to talk to us on #hackage on IRC or contact us directly at info@well-typed.com if you need help.

What’s next?

As mentioned, we would like to invite you to install cabal-secure-beta and start testing it; just use it as you would cabal right now, and report any problems you may find on the hackage-security issue tracker. Additionally, if you would like to host a public mirror for Hackage, please contact us.

This release is primarily intended as an in-the-wild test of the infrastructure; there are still several details to be dealt with before we call this an official release.

The most important of these is proper key management. Much like, say, HTTPS, the chain of trust starts at a set of root keys. We have asked the Haskell.org committee to act as the root of trust and the committee has agreed in principle. The committee members will hold a number of the root keys themselves and the committee may also invite other organisations and individuals within the community to hold root keys. There are some policy details that remain to be reviewed and agreed. For example, we need to decide how many root keys to issue, what threshold number of keys is required to re-sign the root info, and agree policies for storing the root keys to keep them safe (for instance, mandate an air gap where the root key is never on a machine that is connected to the Internet). We will use the opportunity of ICFP (and the HIW talk) in a couple weeks time to present more details and get feedback.

If you would like to help with development, please take a look at the issue list and get in touch!

Categories: Offsite Blogs