# News aggregator

### Russell O'Connor: Clearing Up “Clearing Up Mysteries”

I am a big fan of E. T. Jaynes. His book Probability Theory: The Logic of Science is the only book on statistics that I ever felt I could understand. Therefore, when he appears to rail against the conclusions of Bell’s theorem in his paper “Clearing up Mysteries—The Original Goal”, I take him seriously. He suggests that perhaps there could be a time-dependent hidden variable theory that could yield the outcomes that quantum mechanics predicts.

However, after reading Richard D. Gill’s paper, “Time, Finite Statistics, and Bell’s Fifth Position” it is very clear that there can be nothing like a classical explanation that yields quantum predictions, time-dependent or otherwise. In this paper Gill reintroduces Steve Gull’s computer network, where a pair of classical computers is tasked to recreate probabilities predicted in a Bell-CHSH delayed choice experiment. The catch is that the challenger gets to choose the stream of bits sent to each of the two spatially separated computers in the network. These bits represent the free choice an experimenter running a Bell-CHSH experiment has to choose which polarization measurements to make. No matter what the classical computer does, no matter how much time-dependent fiddling you want to do, it can never produce correlations that will violate the Bell-CHSH inequality in the long run. This is Gull’s “You can’t program two independently running computers to emulate the EPR experiment” theorem.

Gill presents a nice analogy with playing roulette in a casino. Because of the rules of roulette, no computer algorithm can implement a strategy that beats the house at roulette in the long run. Gill goes on to quantify exactly how long the long run is, in order to place a wager against people who claim they can recreate the probabilities predicted by quantum mechanics using a classical local hidden variable theory. Using the theory of supermartingales, one can bound the likelihood of seeing the Bell-CHSH inequality violated by chance by any classical algorithm, in the same way that one can bound the likelihood of long winning streaks in roulette.
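To get a feel for how fast such a bound shrinks, here is a small Haskell sketch. It uses a textbook Hoeffding-style tail bound rather than Gill's actual supermartingale machinery, and the win rates 0.75 and 0.85 are the standard CHSH game values, not figures from the paper, so treat it purely as an illustration of the shape of the argument:

```haskell
-- Hoeffding-style tail bound: if each of n independent rounds is won with
-- probability at most p, the chance of an empirical win rate of at least
-- q > p is bounded by exp(-2 * n * (q - p)^2).
tailBound :: Int -> Double -> Double -> Double
tailBound n p q = exp (-2 * fromIntegral n * (q - p) ^ (2 :: Int))

main :: IO ()
main = print (tailBound 10000 0.75 0.85)
-- a classical strategy matching the quantum win rate over ten thousand
-- rounds by luck alone is astronomically unlikely
```

The bound decays exponentially in the number of rounds, which is why a casino (or a Bell test) only needs a modest number of trials before a "lucky" classical cheater becomes implausible.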

I liked the casino analogy so much that I decided to rephrase Gull’s computer network as a coin guessing casino game I call Bell’s Casino. We can prove that any classical strategy, time-dependent or otherwise, simply cannot beat the house at that particular game in the long run. Yet, there is a strategy where the players employ entangled qubits and beat the house on average. This implies there cannot be any classical phenomena that yields quantum outcomes. Even if one proposes some classical oscillating (time-dependent) hidden variable vibrating at such a high rate that we could never practically measure it, this theory still could not yield quantum probabilities, because such a theory implies we could simulate it with Gull’s computer network. Even if our computer simulation was impractically slow, we could still, in principle, deploy it against Bell’s Casino to beat their coin game. But no such computer algorithm exists, in exactly the same way that there is no computer algorithm that will beat a casino at a fair game of roulette. The fact that we can beat the casino by using qubits clearly proves that qubits and quantum physics are something truly different.
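To make the "no classical strategy wins" claim concrete, here is a small Haskell check. This is my own illustration of the standard CHSH game rather than the exact coin game from the Bell's Casino post: players receive bits x and y and win when the XOR of their answers equals x AND y. Exhausting every deterministic strategy shows nothing beats 3/4, while entangled qubits achieve cos²(π/8) ≈ 0.85.

```haskell
import Data.Bits (xor)

-- Win probability of a deterministic CHSH strategy: each player maps their
-- input bit to an answer bit, and they win when a(x) XOR b(y) == x AND y.
winProb :: (Bool -> Bool) -> (Bool -> Bool) -> Double
winProb a b =
  let rounds = [ (x, y) | x <- [False, True], y <- [False, True] ]
      wins   = [ () | (x, y) <- rounds, (a x `xor` b y) == (x && y) ]
  in  fromIntegral (length wins) / fromIntegral (length rounds)

-- Every deterministic single-player strategy on one input bit.
strategies :: [Bool -> Bool]
strategies = [const False, const True, id, not]

main :: IO ()
main = print (maximum [ winProb a b | a <- strategies, b <- strategies ])
-- prints 0.75: no classical strategy does better, and a randomized
-- strategy can only average the win rates of deterministic ones
```

Since shared randomness just mixes deterministic strategies, 3/4 really is the classical ceiling, which is the Bell-CHSH bound in game form.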

You may have heard the saying that “correlation does not imply causation”. The idea is that if outcomes A and B are correlated, then either A causes B, or B causes A, or there is some other C that causes both A and B. However, in quantum physics there is a fourth possibility: we can have correlation without causation.

In light of Gull and Gill’s iron clad argument, I went back to reread Jaynes’s “Clearing up Mysteries”. I wanted to understand how Jaynes could have been so mistaken. After rereading it I realized that I had misunderstood what he was trying to say about Bell’s theorem. Jaynes just wanted to say two things.

Firstly, Jaynes wanted to say that Bell’s theorem does not necessarily imply action at a distance. This is not actually a controversial statement. The many-worlds interpretation is a local, non-realist (in the sense that experiments do not have unique definite outcomes) interpretation of quantum mechanics. This interpretation does not invoke any action at a distance and is perfectly compatible with Bell’s theorem. Jaynes spends some time noting that correlation does not imply causation in an attempt to clarify this point, although he never talks about the many-worlds interpretation.

Secondly, Jaynes wanted to say that Bell’s theorem does not imply that quantum mechanics is the best possible physical theory that explains quantum outcomes. Here his argument is half-right and half-wrong. He spends some time suggesting that maybe there is a time-dependent hidden variable theory that could give more refined predictions than quantum theory does. However, the suggestion that any classical theory, time-dependent or otherwise, could underlie quantum mechanics is refuted by Bell’s theorem, and this is clearly illustrated by Gull’s computer network and by Bell’s Casino. Jaynes learned about Gull’s computer network argument at the same conference where he presented “Clearing Up Mysteries”. His writing suggests that he was surprised by the argument, but he did not want to rush to draw any conclusions from it without time to get a deeper understanding of it. Nevertheless, Jaynes’s larger point was still correct: Bell’s theorem does not rule out some non-classical refinement of quantum mechanics that might yield more informative predictions than quantum mechanics does, and Jaynes was worried that people would not look for such a refinement.

Jaynes spent a lot of effort trying to separate epistemology, where probability theory rules how we reason in the face of imperfect knowledge, from ontology, which describes what happens in reality if we had perfect information. Jaynes thought that quantum mechanics was mixing these two branches together into one theory and worried that if people were mistaking quantum mechanics for an ontological theory then they would never seek a more refined theory.

While Bell’s theorem does not rule out that there may be a non-classical hidden variable theory, Colbeck and Renner’s paper “No extension of quantum theory can have improved predictive power” all but eliminates that possibility by proving that there is no quantum hidden variable theory. This can be seen as a strengthening of Bell’s theorem, and they even address some of the same concerns that Jaynes had about Bell’s theorem.

> To quote Bell [2], locality is the requirement that “…the result of a measurement on one system [is] unaffected by operations on a distant system with which it has interacted in the past…” Indeed, our non-signalling conditions reflect this requirement and, in our language, the statement that P_{XYZ|ABC} is non-signalling is equivalent to a statement that the model is local (see also the discussion in [28]). (We remind the reader that we do not assume the non-signalling conditions, but instead derive them from the free choice assumption.) In spite of the above quote, Bell’s formal definition of locality is slightly more restrictive than these non-signalling conditions. Bell considers extending the theory using hidden variables, here denoted by the variable Z. He requires P_{XY|ABZ} = P_{X|AZ} × P_{Y|BZ} (see e.g. [13]), which corresponds to assuming not only P_{X|ABZ} = P_{X|AZ} and P_{Y|ABZ} = P_{Y|BZ} (the non-signalling constraints, also called parameter-independence in this context), but also P_{X|ABYZ} = P_{X|ABZ} and P_{Y|ABXZ} = P_{Y|ABZ} (also called outcome-independence). These additional constraints do not follow from our assumptions and are not used in this work.
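Written out as display equations, the locality conditions described in that passage decompose like this:

```latex
% Bell's locality (factorizability) over a hidden variable Z:
P_{XY|ABZ} \;=\; P_{X|AZ}\, P_{Y|BZ}
% ...which is equivalent to parameter independence
% (the non-signalling constraints):
P_{X|ABZ} = P_{X|AZ}, \qquad P_{Y|ABZ} = P_{Y|BZ}
% together with outcome independence:
P_{X|ABYZ} = P_{X|ABZ}, \qquad P_{Y|ABXZ} = P_{Y|ABZ}
```

Colbeck and Renner keep only the first pair of conditions (and derive even those from free choice), which is what makes their result strictly stronger than assuming Bell’s full factorization.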

The probabilistic assumptions are weaker in Colbeck and Renner’s work than in Bell’s theorem, because they want to exclude quantum hidden variable theories in addition to classical hidden variable theories. Today, if one wants to advance a local hidden variable theory, it would have to be a theory that is even weirder than quantum mechanics, if such a thing is even logically possible. It seems that the quantum mechanical wave function is an ontological description after all.

I wonder what Jaynes would have thought about this result. I suspect he would still be looking for an exotic hidden variable theory. He seemed so convinced that probability theory was solely in the realm of epistemology and not ontology that he would not accept any probabilistic ontology at all.

I think Jaynes was wrong when he suggested that quantum mechanics was necessarily mixing up epistemology and ontology.
I believe the many-worlds interpretation is trying to make that distinction clear.
In this interpretation the wave-function and Schrödinger’s equation are ontology, but the Born rule that relates the norm-squared amplitude to probability ought to be epistemological.
However, there does remain an important mystery here:
Why do the observers within the many-worlds observe quantum probabilities that satisfy the Born rule?
I like to imagine Jaynes could solve this problem if he were still around.
I imagine he would say something like, “Due to phase invariance of the wave-function … *something something* … transformation group … *something something* … thus the distribution must be in accordance with the Born rule.”
After all, Jaynes did manage to use transformation groups to solve the Bertrand paradox, a problem widely regarded as being unsolvable due to being underspecified.

### When is the Haddock on Hackage getting "hyperlinked source"?

In case you didn't know, Haddock on GitHub has the ability to prettify source code so that variables are linked to their definitions. For example, here is a copy of the Prelude with this done. My question is: when will the Haddock on Hackage get this, so we don't have to get it from GitHub?

submitted by TheKing01

### Who works at Google and how is it?

### Functional Jobs: Full Stack Haskell Software Engineer at Linkqlo Inc (Full-time)

Linkqlo is a Palo Alto-based technology startup that is building a pioneering mobile community to connect people with better fitting clothes. We’re solving an industry-wide pain point for both consumers and fashion brands in retail shopping, sizing and fitting, just like Paypal took on the online payment challenge in 1999. Our app is available for download now in App Store. The next few months will see us focus on building up our iOS app with more features and functionalities to accelerate user acquisition. You’ll be an early member of an exciting pre-Series A startup in a fast-changing interesting space - fashion/apparel/retail, that is on a mission to redefine the way people discover and shop for clothes. We’re looking for talented, self-motivated people who are also passionate about our mission and excited about the challenges ahead.

We are looking for a committed full-time **Full-Stack Haskell Software Engineer** to integrate our iOS front-end client with the server, develop new product features, build and maintain a robust and highly available database, design and develop an HTML5 web site, and ensure high performance and responsiveness of the entire system.

- **Single-digit** employee badge number
- **Downtown Palo Alto** location one block away from Caltrain Station
- Dynamic and fun-loving startup culture
- Attractive package in salary, stocks and benefits

- You **love** Haskell and want to build everything with Haskell
- You have **incredible** coding skills and can turn ideas into extremely fast and reliable code
- You are comfortable working in an **early-stage startup** environment to quickly expand your skill set with increasingly substantial **responsibilities**
- You buy into our **vision** and are interested in using the product for your own benefit
- You are capable of designing and coding the whole web project **yourself**, or of supervising others to perform the tasks by overseeing the whole process from scratch to finish
- You aspire to steer web projects in the right direction utilizing the **best** practices and **latest** advancements in the technology
- You have **excellent** written and verbal English
- You have **legal** status to work in the U.S.

- Development of server side logic to integrate user-facing elements developed by front-end developers
- Management of hosting environments (**AWS** EC2/S3 and **Docker**), including database administration and scaling an application to support load changes
- Building **reusable** code and libraries for future use
- Creating database **schemas** that represent and support business processes
- Optimization of the application for maximum **speed** and **scalability**
- Implementation of **security** and data protection
- Designing and implementing data **storage** solutions
- Implementing automated **testing** platforms and unit tests
- Integration of multiple data sources and databases into one system

- Expert in Haskell and experienced in one or more other web programming languages among Python/PHP/Ruby/Node.js/Java
- Proficiency in data migration, transformation, backup and **scripting**
- Fluency in popular web application frameworks (back-end is a must, front-end is a plus)
- **CS** bachelor’s degree or an equivalent background in software engineering
- Minimum **three years** of industrial experience in consumer web technology companies
- Good understanding of front-end technologies and platforms, such as JavaScript, HTML5, and CSS3
- Understanding of accessibility and security compliance
- Knowledge of user authentication and authorization between multiple systems, servers, and environments
- Understanding of differences between multiple delivery platforms such as mobile vs desktop, and optimizing output to match the specific platform
- Mastery of code versioning tools including **Git** (GitLab/GitHub)
- Understanding of “session management” in a distributed server environment

Get information on how to apply for this position.

### Stack transitive dependencies

### "Assignment" in do blocks?

### New Functional Programming User Group in Graz, Austria

Hi all,

This group will be for discussing and learning about functional programming. It's called Scala User Group Graz, but we will also be open to other FP languages like Haskell.

The first meeting will be on Tuesday, the 1st of September, at 19:00 at the Realraum (realraum.at), Brockmanngasse 15.

Greetings,

Benedikt

PS: I also posted this announcement at the scala subreddit https://www.reddit.com/r/scala/comments/3ihwug/new_scala_user_group_in_graz_austria/

submitted by bmaderbacher

### [Proposal] `returnWhen` for MonadPlus

### Frege faster than Haskell (GHC)

Hello there,

while doing some micro-benchmarking, an interesting data point appeared: Frege beats Haskell (GHC) with 6625 ms to 8008 ms, which is roughly 20% faster.

The Mandelbrot benchmark calculates 60000 points of the Mandelbrot set in the complex plane, between -2 and 1 on the x-axis and -1 and 1 on the y-axis. Each point is iterated at most 30000 times.
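For reference, the escape-time inner loop such a benchmark measures looks roughly like this in Haskell (a sketch based on the description above; the actual benchmarked Frege/GHC code is not shown in the post):

```haskell
-- Count iterations of z := z^2 + c until |z| > 2 (i.e. |z|^2 > 4),
-- giving up after maxIter steps; points that never escape are in the set.
mandel :: Int -> Double -> Double -> Int
mandel maxIter cr ci = go 0 0 0
  where
    go i zr zi
      | i >= maxIter        = i
      | zr*zr + zi*zi > 4.0 = i
      | otherwise           = go (i + 1) (zr*zr - zi*zi + cr) (2*zr*zi + ci)

main :: IO ()
main = print (mandel 30000 0.0 0.0)
-- prints 30000: the origin never escapes, so it hits the iteration cap
```

A loop like this is dominated by how well the compiler unboxes the `Double` loop state, which is one plausible place a JVM backend and GHC could end up differing.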

I am surprised the JVM is faster than compiled Haskell. Maybe you have some hints for me?

submitted by vsts15

### FP Complete - stack and GHC on Windows

### 'Scientific' loses significant digits

### Electronic Medical Records (EMR) software development with pure functional programming (FP)

I'm looking for feedback from the FP community (specific language agnostic). My professional background is in medicine with personal interest in programming, and lately functional programming. The commercial EMR/EHR market is awash in choices, from large hospital system-wide offerings like EPIC to small limited-release packages tailored for private medical offices. I don't know exactly which language(s) most of these products are written in (maybe some of you might know?) but I would venture a guess that they are probably written in "standard" object-oriented enterprise languages like C++, C#, Java, etc. There are also freemium (ad-based) web browser platforms like Amazing Charts that are probably developed with standard web-centric technologies (PHP, Ruby/Python frameworks, perhaps a complete Javascript solution - esp lately).

My question is: has any attempt been made to create a commercial-quality EMR system (even a small one) with "pure" functional programming in a functional language? My second question: are there any potential benefits (besides interest and coolness points) in developing an EMR package with functional programming from the ground up? Let's leave the issue of static vs dynamic typing alone, as I suspect no packages exist in any functional language. If this kind of project is "interesting" and potentially useful, then I'm thinking the languages to consider would be Haskell (obviously), F#, Scala, Clojure, Golang, Erlang, or Elixir. Erlang/Elixir might offer distinct advantages for a large-scale distribution across a large system like, say, Kaiser or the VA.

Thoughts?

submitted by RaymondWies

### Danny Gratzer: Type is not in Type

I was reading a recent proposal to merge types and kinds in Haskell to start the transition to dependently typed Haskell. One thing that caught my eye as I was reading it was that this proposal adds * :: * to the type system. This is of some significance because it means that once this is fully realized Haskell will be inconsistent (as a logic) in a new way! Of course, this isn’t a huge deal since Haskell is already woefully inconsistent with

- unsafePerformIO
- Recursive bindings
- Recursive types
- Exceptions
- …

So it’s not like we’ll be entering new territory here. All that it means is that there’s a new way to inhabit every type in Haskell. If you were using Haskell as a proof assistant you were already in for a rude awakening I’m afraid :)
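As a concrete illustration of one item on that list: recursive types alone are enough to inhabit every type, via a standard construction (this example is mine, not from the post) that recovers a general fixed-point operator without any value-level recursion:

```haskell
-- A negative recursive type: a wrapped function that consumes its own type.
newtype Curry a = Curry { apply :: Curry a -> a }

-- A fixed-point combinator built only from the recursive type; no value
-- here is defined in terms of itself.
fix' :: (a -> a) -> a
fix' f = w (Curry w)
  where
    w c = f (apply c c)

-- `fix' id` now typechecks at *every* type (it just diverges when forced),
-- so viewed as a logic the system proves anything.
factorial :: Integer -> Integer
factorial = fix' (\rec n -> if n <= 0 then 1 else n * rec (n - 1))

main :: IO ()
main = print (factorial 5)
-- prints 120
```

Laziness is what lets `fix'` terminate on productive uses like `factorial` while `fix' id` simply loops, which is exactly the "inhabitant of every type" the post is talking about.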

This is an issue of significance though for languages like Idris or Agda, where such a thing would actually render proofs useless. Famously, Martin-Löf’s original type theory did have Type : Type (or * :: * in Haskell spelling) and Girard managed to derive a contradiction (Girard’s paradox). I’ve always been told that the particulars of this construction are a little bit complicated, but to remember that Type : Type is bad.

In this post I’d like to prove that Type : Type is a contradiction in JonPRL. This is a little interesting because in most proof assistants this would work in two steps

- Hack the compiler to add the rule Type : Type
- Construct a contradiction and check it with the modified compiler

*OK to be fair, in something like Agda you could use the compiler hacking they’ve already done and just say {-# OPTIONS --set-in-set #-} or whatever the flag is. The spirit of the development is the same though*

In JonPRL I’m just going to prove this as a regular implication. We have a proposition which internalizes membership and I’ll demonstrate not(member(U{i}; U{i})) is provable (U{i} is how we say Type in JonPRL). It’s the same logic as we had before.

#### Background on JonPRL

Before we can really get to the proof, we should go through some of the more advanced features of JonPRL that we need to use.

JonPRL is a little different from most proof assistants. For example, we can define a type of all closed terms in our language whose equality is purely computational. This type is base. To prove that =(a; b; base) holds you have to prove ceq(a; b), the finest-grained equality in JonPRL. Two terms are ceq if they

- Both diverge
- Run to the same outermost form and have ceq components

What’s particularly exciting is that you can substitute any term for any other term ceq to it, no matter at what type it’s being used. In fact, the reduce tactic (which beta-reduces terms) can conceptually be thought of as substituting a bunch of terms for their weak-head-normal forms, which are ceq to the original terms. The relevant literature behind this is found in Doug Howe’s “Equality in a Lazy Computation System”. There’s more in JonPRL in this regard; we also have the asymmetric version of ceq (called approx), but we won’t need it today.

Next, let’s talk about the image type. This is a type constructor with the following formation rule

```
H ⊢ A : U{i}    H ⊢ f : base
—————————————————————————————
H ⊢ image(A; f) : U{i}
```

So here A is a type and f is anything. Things are going to be equal in image(A; f) if we can prove that they’re of the form f w and f w' where w = w' ∈ A. So image gives us the codomain (range) of a function. What’s pretty crazy about this is that it’s not just the range of some function A → B; we don’t really need a whole new type for that. It’s the range of *literally any closed term we can apply*. We can take the range of the Y combinator over pi types. We can take the range of lam(x. ⊥) over unit, anything we want!

This construct lets us define some really incredible things as a user of JonPRL. For example, the “squash” of a type is supposed to be a type which is occupied by <> (and only <>) if and only if there was an occupant of the original type. You can define these in HoTT with higher inductive types. Or, you can define these in this type theory as

```
Operator squash : (0).
[squash(A)] =def= [image(A; lam(x. <>))]
```

x ∈ squash(A) if and only if we can construct an a so that a ∈ A and lam(x. <>) a ~ x. Clearly x must be <> and we can construct such an a if and only if A is nonempty.

We can also define the set-union of two types. Something is supposed to be in the set union if and only if it’s in one or the other. To define such a thing with an image type we have

```
Operator union : (0).
[union(A; B)] =def= [image((x : unit + unit) * decide(x; _.A; _.B); lam(x.snd(x)))]
```

This one is a bit more complicated. The domain of things we’re applying our function to this time is

```
(x : unit + unit) * decide(x; _.A; _.B)
```

This is a dependent pair, sometimes called a Σ type. The first component is a boolean; if the first component is true the second component is of type A, otherwise it’s of type B. So for every term of type A or B, there’s a term of this Σ type. In fact, we can recover that original term of type A or B by just grabbing the second component of the term! We don’t have to worry about the type of such an operation because we’re not creating something with a function type, just something in base.

Unions let us define an absolutely critical admissible rule in our system. JonPRL has a propositional reflection of the equality judgment and of membership, but membership is non-negatable. By this I mean that if we have some a so that a = a ∈ A doesn’t hold, we won’t be able to prove =(a; a; A) -> void. In order to prove such a thing we first have to prove that =(a; a; A) -> void is a type, which means proving that =(a; a; A) is a type.

In order to prove that =(a; b; A) is a proposition we have to prove =(a; a; A), =(b; b; A), and =(A; A; U{i}). The process of proving these will actually also show that the corresponding judgments, a ∈ A, b ∈ A, and A ∈ U{i} hold.

However, in the case that a and b are the same term this is just the same as proving =(a; a; A)! So =(a; a; A) is a proposition only if it’s true. However, we can add a rule that says that =(a; b; A) is a proposition if a = a ∈ (A ∪ base), and similarly for b! This fixes our negatability issue because we can just prove =(a; a; base), something that may be true even if a is not equal to itself in A. Before, having a function take a member(...) was useless: member(a; A) is just thin sugar for =(a; a; A), so member(a; A) is a proposition if and only if a = a ∈ A holds; in other words, it’s a proposition if and only if it’s true! With this new rule, we can prove member(a; A) is a proposition if A ∈ U{i} and a ∈ base, a much weaker set of conditions that is almost always satisfied. We can apply this special rule in JonPRL with eq-eq-base instead of just eq-cd like the rest of our equality rules.

#### The Main Result

Now let’s actually begin proving Russell’s paradox. To start, some notation.

```
Infix 20 "∈" := member.
Infix 40 "~" := ceq.
Infix 60 "∪" := bunion.
Prefix 40 "¬" := not.
```

This lets us say a ∈ b instead of member(a; b). JonPRL recently grew this ability to add transparent notation to terms; it makes our theorems a *lot* prettier.

Next we define the central term to our proof

```
Operator Russell : ().
[Russell] =def= [{x : U{i} | ¬ (x ∈ x)}]
```

Here we’ve defined Russell as shorthand for a subset type; in particular it’s a subset of U{i} (the universe of types). x ∈ Russell if x ∈ U{i} and ¬ (x ∈ x). Now normally we won’t be able to prove that this is a type (specifically x ∈ x is going to be a problem), but in our case we’ll have some help from an assumption that U{i} ∈ U{i}.

Now we begin to define a small set of tactics that we’ll want. These tactics are really where the fiddly bits of using JonPRL’s tactic system come into play. If you’re just reading this for the intuition as to why Type ∈ Type is bad, just skip this. You’ll still understand the construction even if you don’t understand these bits of the proof.

First we have a tactic which finds an occurrence of H : A + B in the context and eliminates it. This gives us two goals, one with an A and one with a B. To do this we use match, which gives us something like match goal with in Coq.

```
Tactic break-plus {
  @{ [H : _ + _ |- _] => elim <H>; thin <H> }
}.
```

Note the syntax [H : ... |- ...] to match on a sequent. In particular here we just have _ + _ and _. Next we have a tactic bunion-eq-right. It’s to help us work with bunions (unions). Basically it turns =(M; N; bunion(A; B)) into

```
=(lam(x.snd(x)) <<>, M>; lam(x.snd(x)) <<>, N>; bunion(A; B))
```

This is actually helpful because it turns out that once we unfold bunion we have to prove that M and N are in an image type; remember that bunion is just a thin layer of sugar on top of image types. In order to prove something is in the image type it needs to be of the form f a, where f in our case is lam(x. snd(x)).

This is done with

```
Tactic bunion-eq-right {
  @{ [|- =(M; N; L ∪ R)] =>
       csubst [M ~ lam(x. snd(x)) <inr(<>), M>] [h.=(h;_;_)];
       aux { unfold <snd>; reduce; auto };
       csubst [N ~ lam(x. snd(x)) <inr(<>), N>] [h.=(_;h;_)];
       aux { unfold <snd>; reduce; auto };
  }
}.
```

The key here is csubst. It takes a ceq as its first argument and a “targeting”. It then tries to replace each occurrence of the left side of the equality with the right. To find each occurrence the targeting maps a variable to each occurrence. We’re allowed to use wildcards in the targeting as well. It also relegates actually proving the equality into a new subgoal. It’s easy enough to prove, so we demonstrate it with aux { unfold <snd>; reduce; auto }.

We only need to apply this tactic after eq-eq-base, this applies that rule I mentioned earlier about proving equalities well formed in a much more liberal environment. Therefore we wrap those two tactics into one more convenient package.

```
Tactic eq-base-tac {
  @{ [|- =(=(M; N; A); =(M'; N'; A'); _)] =>
       eq-eq-base; auto;
       bunion-eq-right;
       unfold <bunion>
  }
}.
```

There is one last tactic in this series, this one to prove that member(X; X) ∈ U{i'} is well formed (a type). It starts by unfolding member into =(=(X; X; X); =(X; X; X); U{i}) and then applying the new tactic. Then we do other things. These things aren’t pretty. I suggest we just ignore them.

```
Tactic impredicativity-wf-tac {
  unfold <member>; eq-base-tac;
  eq-cd; ?{@{[|- =(_; _; base)] => auto}};
  eq-cd @i'; ?{break-plus}; reduce; auto
}.
```

Finally we have a tactic to prove that having not(P) and P in the context proves void. This is another nice application of match.

```
Tactic contradiction {
  unfold <not implies>;
  @{ [H : P -> void, H' : P |- void] =>
       elim <H> [H'];
       unfold <member>; auto
  }
}.
```

We start by unfolding not and implies. This gives us P -> void and P. From there, we just apply one to the other, giving us a void as we wanted.

We’re now ready to prove our theorem. We start with

```
Theorem type-not-in-type : [¬ (U{i} ∈ U{i})] {
}.
```

We now have the main subgoal

```
Remaining subgoals:

[main] ⊢ not(member(U{i}; U{i}))
```

We can start by unfolding not and implies. Remember that not isn’t a built-in thing, it’s just sugar. By unfolding it we get the more primitive form, something we can actually apply the intro tactic to.

```
{ unfold <not implies>; intro }
```

Once unfolded, we’d get a goal along the lines of member(U{i}; U{i}) -> void. We immediately apply intro to this though. Now we have two subgoals: one is the result of applying intro, namely we have a hypothesis x : member(U{i}; U{i}) and a goal void. The second subgoal is the “well-formedness” obligation.

We have to prove that member(U{i}; U{i}) is a type in order to apply the intro tactic. This is a crucial difference between Coq-like systems and these proof-refinement logics. The process of demonstrating that what you’re proving is a proposition is intermingled with actually constructing the proof. It means you get to apply all the normal mathematical tools you have for proving things true to prove that they’re types. This gives us a lot of flexibility, but at the cost of sometimes annoying subgoals. They’re annotated with [aux] (as opposed to [main]), which means we can target them all at once using the aux tactic.

To summarize that whole paragraph as JonPRL would say it, our proof state is

```
[main]
1. x : member(U{i}; U{i})
⊢ void

[aux]
⊢ member(member(U{i}; U{i}); U{i'})
```

Let’s get rid of that auxiliary subgoal using impredicativity-wf-tac; this subgoal is in fact exactly what it was made for.

```
{ unfold <not implies>; intro;
  aux { impredicativity-wf-tac };
}
```

This picks off that [aux] goal, leaving us with just

```
[main]
1. x : member(U{i}; U{i})
⊢ void
```

Now we need to prove some lemmas. They state that Russell is actually a type. This is possible to do here and only here because we’ll need to actually use x in the process of proving this. It’s a very nice example of what explicitly proving well-formedness can give you! After all, the process of demonstrating Russell is a type is nontrivial and only true in this hypothetical context; rather than just hoping that JonPRL is clever enough to figure that out for itself, we get to demonstrate it locally.

We’re going to use the assert tactic to get these lemmas. This lets us state a term, prove it as a subgoal and use it as a hypothesis in the main goal. If you’re logically minded, it’s cut.

```
{ unfold <not implies>; intro;
  aux { impredicativity-wf-tac };
  assert [Russell ∈ U{i}] <russell-wf>;
}
```

The thing in <>s is the name it will get in our hypothetical context for the main goal. This leaves us with two subgoals: the aux one being the assertion, and the main one being allowed to assume it.

```
[aux]
1. x : member(U{i}; U{i})
⊢ member(Russell; U{i})

[main]
1. x : member(U{i}; U{i})
2. russell-wf : member(Russell; U{i})
⊢ void
```

We can prove this by basically working our way towards using impredicativity-wf-tac. We’ll use aux again to target the aux subgoal. We’ll start by unfolding everything and applying eq-cd.

```
{ unfold <not implies>; intro;
  aux { impredicativity-wf-tac };
  assert [Russell ∈ U{i}] <russell-wf>;
  aux { unfold <member Russell>; eq-cd; auto; };
}
```

*Remember that Russell is {x : U{i} | ¬ (x ∈ x)}*

So we just applied eq-cd to a subset type (Russell), so we get two subgoals. One says that U{i} is a type; one says that if x ∈ U{i} then ¬ (x ∈ x) is also a type. In essence this just says that a subset type is a type if both components are types. The former goal is quite straightforward, so we apply auto to take care of it. Now we have one new subgoal to handle.

```
[main]
1. x : =(U{i}; U{i}; U{i})
2. x' : U{i}
⊢ =(not(member(x'; x')); not(member(x'; x')); U{i})

[main]
1. x : member(U{i}; U{i})
2. russell-wf : member(Russell; U{i})
⊢ void
```

The second subgoal is just the rest of the proof; the first subgoal is what we want to handle. It says that if we have a type x' then not(member(x'; x')) is a type, albeit in ugly notation. To prove this we have to unfold not. So we’ll do this and apply eq-cd again.

```
{ unfold <not implies>; intro; aux { impredicativity-wf-tac };
  assert [Russell ∈ U{i}] <russell-wf>;
  aux { unfold <member Russell>; eq-cd; auto;
        unfold <not implies>; eq-cd; auto; }; }
```

Remember that not(P) desugars to P -> void. Applying eq-cd is going to give us two subgoals: P is a type and void is a type. However, member(void; U{i}) is pretty easy to prove, so we apply auto again, which takes care of one of our two new goals. Now we just have

```
[main]
1. x : =(U{i}; U{i}; U{i})
2. x' : U{i}
⊢ =(member(x'; x'); member(x'; x'); U{i})

[main]
1. x : member(U{i}; U{i})
2. russell-wf : member(Russell; U{i})
⊢ void
```

Now we're getting to the root of the issue: we're trying to prove that member(x'; x') is a type. This is happily handled by impredicativity-wf-tac, which will use our assumption that U{i} ∈ U{i} because it's smart like that.

```
{ unfold <not implies>; intro; aux { impredicativity-wf-tac };
  assert [Russell ∈ U{i}] <russell-wf>;
  aux { unfold <member Russell>; eq-cd; auto;
        unfold <not implies>; eq-cd; auto;
        impredicativity-wf-tac }; }
```

Now we just have the main goal with the assumption russell-wf added.

```
[main]
1. x : member(U{i}; U{i})
2. russell-wf : member(Russell; U{i})
⊢ void
```

Now we have a similar well-formedness goal to assert and prove: we want to show that (Russell ∈ Russell) is a type. This one is easier; we can prove it with impredicativity-wf-tac.

```
{ unfold <not implies>; intro; aux { impredicativity-wf-tac };
  assert [Russell ∈ U{i}] <russell-wf>;
  aux { unfold <member Russell>; eq-cd; auto;
        unfold <not implies>; eq-cd; auto;
        impredicativity-wf-tac };
  assert [(Russell ∈ Russell) ∈ U{i}] <russell-in-russell-wf>;
  aux { impredicativity-wf-tac; cum @i; auto }; }
```

That cum @i is a quirk of impredicativity-wf-tac. It basically means that instead of proving =(...; ...; U{i'}) we can prove =(...; ...; U{i}), since U{i} is a universe below U{i'} and all universes are cumulative.

Our goal is now

```
[main]
1. x : member(U{i}; U{i})
2. russell-wf : member(Russell; U{i})
3. russell-in-russell-wf : member(member(Russell; Russell); U{i})
⊢ void
```

Ok, now that we have all these well-formedness lemmas the real reasoning can start. Our proof sketch is basically as follows:

- Prove that Russell ∈ Russell is false. This is because if Russell *was* in Russell then, by the definition of Russell, it isn't in Russell.
- Since not(Russell ∈ Russell) holds, Russell ∈ Russell holds.
- Hilarity ensues.
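The logical core of this sketch is tiny: any proposition equivalent to its own negation yields a contradiction. Stated in Lean notation (a hypothetical restatement of the skeleton, not JonPRL):

```lean
-- If P ↔ ¬P then False: the engine behind Russell's paradox.
-- Here P stands for "Russell ∈ Russell".
theorem russell_core (P : Prop) (h : P ↔ ¬P) : False :=
  have hnp : ¬P := fun hp => h.mp hp hp  -- first bullet: P is false
  hnp (h.mpr hnp)                        -- second bullet: yet P holds
```

The JonPRL proof below is this two-step argument, surrounded by the well-formedness bookkeeping we just set up.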

Here’s the first assertion

```
{ unfold <not implies>; intro; aux { impredicativity-wf-tac };
  assert [Russell ∈ U{i}] <russell-wf>;
  aux { unfold <member Russell>; eq-cd; auto;
        unfold <not implies>; eq-cd; auto;
        impredicativity-wf-tac };
  assert [(Russell ∈ Russell) ∈ U{i}] <russell-in-russell-wf>;
  aux { impredicativity-wf-tac; cum @i; auto };
  assert [¬ (Russell ∈ Russell)] <not-russell-in-russell>; }
```

Here are our subgoals

```
[aux]
1. x : member(U{i}; U{i})
2. russell-wf : member(Russell; U{i})
3. russell-in-russell-wf : member(member(Russell; Russell); U{i})
⊢ not(member(Russell; Russell))

[main]
1. x : member(U{i}; U{i})
2. russell-wf : member(Russell; U{i})
3. russell-in-russell-wf : member(member(Russell; Russell); U{i})
4. russell-not-in-russell : not(member(Russell; Russell))
⊢ void
```

We want to prove the first one. To start, let's unfold that not, move member(Russell; Russell) into the hypotheses, and use it to prove void. We do this with intro.

```
{ unfold <not implies>; intro; aux { ... };
  assert [Russell ∈ U{i}] <russell-wf>; aux { ... };
  assert [(Russell ∈ Russell) ∈ U{i}] <russell-in-russell-wf>; aux { ... };
  assert [¬ (Russell ∈ Russell)] <not-russell-in-russell>;
  aux { unfold <not implies>; intro @i; aux {assumption}; } }
```

Notice that the well-formedness goal that intro generates is handled by our assumption! After all, it's just member(Russell; Russell) ∈ U{i}, which we already proved. Now our subgoals look like this

```
[main]
1. x : member(U{i}; U{i})
2. russell-wf : member(Russell; U{i})
3. russell-in-russell-wf : member(member(Russell; Russell); U{i})
4. x' : member(Russell; Russell)
⊢ void

[main]
1. x : member(U{i}; U{i})
2. russell-wf : member(Russell; U{i})
3. russell-in-russell-wf : member(member(Russell; Russell); U{i})
4. russell-not-in-russell : not(member(Russell; Russell))
⊢ void
```

So what we want to do is note that since we've assumed that Russell ∈ Russell, we can say that Russell ~ X for some X where ¬ (X ∈ X) holds. But wait, that means that ¬ (Russell ∈ Russell) holds, yet we've assumed that Russell ∈ Russell also holds, so we have a contradiction.

Let’s start by introducing that X (here called R). We’ll assert an R : Russell such that R ~ Russell. We do this using dependent pairs (here written (x : A) * B(x)).

```
{ unfold <not implies>; intro; aux { ... };
  assert [Russell ∈ U{i}] <russell-wf>; aux { ... };
  assert [(Russell ∈ Russell) ∈ U{i}] <russell-in-russell-wf>; aux { ... };
  assert [¬ (Russell ∈ Russell)] <not-russell-in-russell>;
  aux { unfold <not implies>; intro @i; aux {assumption};
        assert [(R : Russell) * R ~ Russell] <R-with-prop>;
        aux { intro [Russell] @i; auto }; } }
```

We've proven this by intro. For proving dependent pairs we provide an explicit witness for the first component: to prove (x : A) * B(x) we say intro [Foo]. We then have the goals Foo ∈ A and B(Foo). Since subgoals are fully independent of each other, we have to give the witness for the first component upfront. It's a little awkward; Jon's working on it :)

In this case we use intro [Russell]. After this we have to prove that the witness has type Russell and then that the second component holds. Happily, auto takes care of both of these obligations, so intro [Russell] @i; auto handles it all.
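This witness-first style is the same move one makes for existentials in other proof assistants; a hypothetical Lean analogue of intro [Foo]:

```lean
-- Provide the first component of the pair explicitly (the witness 3),
-- then discharge the remaining goal about that witness.
example : ∃ n : Nat, n * n = 9 := ⟨3, rfl⟩
```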

Now we promptly eliminate this pair. It gives us two new facts: that R : Russell and R ~ Russell hold.

```
{ unfold <not implies>; intro; aux { ... };
  assert [Russell ∈ U{i}] <russell-wf>; aux { ... };
  assert [(Russell ∈ Russell) ∈ U{i}] <russell-in-russell-wf>; aux { ... };
  assert [¬ (Russell ∈ Russell)] <not-russell-in-russell>;
  aux { unfold <not implies>; intro @i; aux {assumption};
        assert [(R : Russell) * R ~ Russell] <R-with-prop>;
        aux { intro [Russell] @i; auto };
        elim <R-with-prop>; thin <R-with-prop> } }
```

This leaves our goal as

```
[main]
1. x : member(U{i}; U{i})
2. russell-wf : member(Russell; U{i})
3. russell-in-russell-wf : member(member(Russell; Russell); U{i})
4. x' : member(Russell; Russell)
5. s : Russell
6. t : ceq(s; Russell)
⊢ void

[main]
1. x : member(U{i}; U{i})
2. russell-wf : member(Russell; U{i})
3. russell-in-russell-wf : member(member(Russell; Russell); U{i})
4. russell-not-in-russell : not(member(Russell; Russell))
⊢ void
```

Now let's invert on that s : Russell; we want to use it to conclude that ¬ (s ∈ s) holds, since that will give us ¬ (R ∈ R).

```
{ unfold <not implies>; intro; aux { ... };
  assert [Russell ∈ U{i}] <russell-wf>; aux { ... };
  assert [(Russell ∈ Russell) ∈ U{i}] <russell-in-russell-wf>; aux { ... };
  assert [¬ (Russell ∈ Russell)] <not-russell-in-russell>;
  aux { unfold <not implies>; intro @i; aux {assumption};
        assert [(R : Russell) * R ~ Russell] <R-with-prop>;
        aux { intro [Russell] @i; auto };
        elim <R-with-prop>; thin <R-with-prop>;
        unfold <Russell>; elim #5; } }
```

Now that we've unfolded all of those Russells our goal is a little harder to read; remember to mentally read {x : U{i} | not(member(x; x))} as Russell.

```
[main]
1. x : member(U{i}; U{i})
2. russell-wf : member({x:U{i} | not(member(x; x))}; U{i})
3. russell-in-russell-wf : member(member({x:U{i} | not(member(x; x))}; {x:U{i} | not(member(x; x))}); U{i})
4. x' : member({x:U{i} | not(member(x; x))}; {x:U{i} | not(member(x; x))})
5. s : {x:U{i} | not(member(x; x))}
6. x'' : U{i}
7. [t'] : not(member(x''; x''))
8. t : ceq(x''; {x:U{i} | not(member(x; x))})
⊢ void

[main]
1. x : member(U{i}; U{i})
2. russell-wf : member(Russell; U{i})
3. russell-in-russell-wf : member(member(Russell; Russell); U{i})
4. russell-not-in-russell : not(member(Russell; Russell))
⊢ void
```

Now we use #7 to derive that not(member(Russell; Russell)) holds.

```
{ unfold <not implies>; intro; aux { ... };
  assert [Russell ∈ U{i}] <russell-wf>; aux { ... };
  assert [(Russell ∈ Russell) ∈ U{i}] <russell-in-russell-wf>; aux { ... };
  assert [¬ (Russell ∈ Russell)] <not-russell-in-russell>;
  aux { unfold <not implies>; intro @i; aux {assumption};
        assert [(R : Russell) * R ~ Russell] <R-with-prop>;
        aux { intro [Russell] @i; auto };
        elim <R-with-prop>; thin <R-with-prop>;
        unfold <Russell>; elim #5;
        assert [¬ member(Russell; Russell)];
        aux { unfold <Russell>; }; } }
```

This leaves us with 3 subgoals, the first one being the assertion.

```
[aux]
1. x : member(U{i}; U{i})
2. russell-wf : member({x:U{i} | not(member(x; x))}; U{i})
3. russell-in-russell-wf : member(member({x:U{i} | not(member(x; x))}; {x:U{i} | not(member(x; x))}); U{i})
4. x' : member({x:U{i} | not(member(x; x))}; {x:U{i} | not(member(x; x))})
5. s : {x:U{i} | not(member(x; x))}
6. x'' : U{i}
7. [t'] : not(member(x''; x''))
8. t : ceq(x''; {x:U{i} | not(member(x; x))})
⊢ not(member({x:U{i} | not(member(x; x))}; {x:U{i} | not(member(x; x))}))

[main]
1. x : member(U{i}; U{i})
2. russell-wf : member({x:U{i} | not(member(x; x))}; U{i})
3. russell-in-russell-wf : member(member({x:U{i} | not(member(x; x))}; {x:U{i} | not(member(x; x))}); U{i})
4. x' : member({x:U{i} | not(member(x; x))}; {x:U{i} | not(member(x; x))})
5. s : {x:U{i} | not(member(x; x))}
6. x'' : U{i}
7. [t'] : not(member(x''; x''))
8. t : ceq(x''; {x:U{i} | not(member(x; x))})
9. H : not(member(Russell; Russell))
⊢ void

[main]
1. x : member(U{i}; U{i})
2. russell-wf : member(Russell; U{i})
3. russell-in-russell-wf : member(member(Russell; Russell); U{i})
4. russell-not-in-russell : not(member(Russell; Russell))
⊢ void
```

Now to prove this we need to substitute the unfolded Russell for x''; from there it's immediate by assumption. We perform the substitution with chyp-subst. This takes a direction to substitute, the hypothesis to use, and a target telling us where to apply the substitution.

```
{ unfold <not implies>; intro; aux { ... };
  assert [Russell ∈ U{i}] <russell-wf>; aux { ... };
  assert [(Russell ∈ Russell) ∈ U{i}] <russell-in-russell-wf>; aux { ... };
  assert [¬ (Russell ∈ Russell)] <not-russell-in-russell>;
  aux { unfold <not implies>; intro @i; aux {assumption};
        assert [(R : Russell) * R ~ Russell] <R-with-prop>;
        aux { intro [Russell] @i; auto };
        elim <R-with-prop>; thin <R-with-prop>;
        unfold <Russell>; elim #5;
        assert [¬ member(Russell; Russell)];
        aux { unfold <Russell>;
              chyp-subst ← #8 [h. ¬ (h ∈ h)]; }; } }
```

This leaves us with a much more tractable goal.

```
[main]
1. x : member(U{i}; U{i})
2. russell-wf : member({x:U{i} | not(member(x; x))}; U{i})
3. russell-in-russell-wf : member(member({x:U{i} | not(member(x; x))}; {x:U{i} | not(member(x; x))}); U{i})
4. x' : member({x:U{i} | not(member(x; x))}; {x:U{i} | not(member(x; x))})
5. s : {x:U{i} | not(member(x; x))}
6. x'' : U{i}
7. [t'] : not(member(x''; x''))
8. t : ceq(x''; {x:U{i} | not(member(x; x))})
⊢ not(member(x''; x''))

[main]
1. x : member(U{i}; U{i})
2. russell-wf : member({x:U{i} | not(member(x; x))}; U{i})
3. russell-in-russell-wf : member(member({x:U{i} | not(member(x; x))}; {x:U{i} | not(member(x; x))}); U{i})
4. x' : member({x:U{i} | not(member(x; x))}; {x:U{i} | not(member(x; x))})
5. s : {x:U{i} | not(member(x; x))}
6. x'' : U{i}
7. [t'] : not(member(x''; x''))
8. t : ceq(x''; {x:U{i} | not(member(x; x))})
9. H : not(member(Russell; Russell))
⊢ void

[main]
1. x : member(U{i}; U{i})
2. russell-wf : member(Russell; U{i})
3. russell-in-russell-wf : member(member(Russell; Russell); U{i})
4. russell-not-in-russell : not(member(Russell; Russell))
⊢ void
```

We'd like to just apply assumption, but it's not immediately applicable due to some technical details (basically we can only apply an assumption in a proof-irrelevant context, but we have to unfold Russell and introduce it to demonstrate that it's irrelevant). So just read what's left as a (very) convoluted assumption.

```
{ unfold <not implies>; intro; aux { ... };
  assert [Russell ∈ U{i}] <russell-wf>; aux { ... };
  assert [(Russell ∈ Russell) ∈ U{i}] <russell-in-russell-wf>; aux { ... };
  assert [¬ (Russell ∈ Russell)] <not-russell-in-russell>;
  aux { unfold <not implies>; intro @i; aux {assumption};
        assert [(R : Russell) * R ~ Russell] <R-with-prop>;
        aux { intro [Russell] @i; auto };
        elim <R-with-prop>; thin <R-with-prop>;
        unfold <Russell>; elim #5;
        assert [¬ member(Russell; Russell)];
        aux { unfold <Russell>;
              chyp-subst ← #8 [h. ¬ (h ∈ h)];
              unfold <not implies>; intro;
              aux { impredicativity-wf-tac };
              contradiction }; } }
```

Now we're almost through this assertion; our subgoals look like this (pay attention to hypotheses 9 and 4)

```
[main]
1. x : member(U{i}; U{i})
2. russell-wf : member({x:U{i} | not(member(x; x))}; U{i})
3. russell-in-russell-wf : member(member({x:U{i} | not(member(x; x))}; {x:U{i} | not(member(x; x))}); U{i})
4. x' : member({x:U{i} | not(member(x; x))}; {x:U{i} | not(member(x; x))})
5. s : {x:U{i} | not(member(x; x))}
6. x'' : U{i}
7. [t'] : not(member(x''; x''))
8. t : ceq(x''; {x:U{i} | not(member(x; x))})
9. H : not(member(Russell; Russell))
⊢ void

[main]
1. x : member(U{i}; U{i})
2. russell-wf : member(Russell; U{i})
3. russell-in-russell-wf : member(member(Russell; Russell); U{i})
4. russell-not-in-russell : not(member(Russell; Russell))
⊢ void
```

Once we unfold that Russell we have an immediate contradiction, so unfold <Russell>; contradiction solves it.

```
{ unfold <not implies>; intro; aux { ... };
  assert [Russell ∈ U{i}] <russell-wf>; aux { ... };
  assert [(Russell ∈ Russell) ∈ U{i}] <russell-in-russell-wf>; aux { ... };
  assert [¬ (Russell ∈ Russell)] <not-russell-in-russell>;
  aux { unfold <not implies>; intro @i; aux {assumption};
        assert [(R : Russell) * R ~ Russell] <R-with-prop>;
        aux { intro [Russell] @i; auto };
        elim <R-with-prop>; thin <R-with-prop>;
        unfold <Russell>; elim #5;
        assert [¬ member(Russell; Russell)];
        aux { unfold <Russell>;
              chyp-subst ← #8 [h. ¬ (h ∈ h)];
              unfold <not implies>; intro;
              aux { impredicativity-wf-tac };
              contradiction };
        unfold <Russell>; contradiction } }
```

This takes care of this subgoal, so now we're back to the main goal. This time, though, we have an extra hypothesis which provides the leverage we need to prove our next assertion.

```
[main]
1. x : member(U{i}; U{i})
2. russell-wf : member(Russell; U{i})
3. russell-in-russell-wf : member(member(Russell; Russell); U{i})
4. russell-not-in-russell : not(member(Russell; Russell))
⊢ void
```

Now we're going to claim that Russell is in fact in Russell. This follows from the fact that we've already proved that Russell isn't in Russell (yes, it seems pretty paradoxical already).

```
{ unfold <not implies>; intro; aux { ... };
  assert [Russell ∈ U{i}] <russell-wf>; aux { ... };
  assert [(Russell ∈ Russell) ∈ U{i}] <russell-in-russell-wf>; aux { ... };
  assert [¬ (Russell ∈ Russell)] <not-russell-in-russell>; aux { ... };
  assert [Russell ∈ Russell]; }
```

Giving us

```
[aux]
1. x : member(U{i}; U{i})
2. russell-wf : member(Russell; U{i})
3. russell-in-russell-wf : member(member(Russell; Russell); U{i})
4. russell-not-in-russell : not(member(Russell; Russell))
⊢ member(Russell; Russell)

[main]
1. x : member(U{i}; U{i})
2. russell-wf : member(Russell; U{i})
3. russell-in-russell-wf : member(member(Russell; Russell); U{i})
4. russell-not-in-russell : not(member(Russell; Russell))
5. H : member(Russell; Russell)
⊢ void
```

Proving this is pretty straightforward: we only have to demonstrate that not(Russell ∈ Russell) and Russell ∈ U{i} hold, both of which we have as assumptions. The rest of the proof is just more well-formedness goals.

First we unfold everything and apply eq-cd. This gives us 3 subgoals; the first two are Russell ∈ U{i} and ¬ (Russell ∈ Russell). Since we have these as assumptions we'll use main {assumption}, which targets both of these goals and proves them immediately. By using main we avoid applying this to the well-formedness goal, which in this case actually *isn't* an assumption.

This just leaves us with one awful well-formedness goal requiring us to prove that not(=(x; x; x)) is a type if x is a type. We actually proved something similar back when we proved that Russell was well formed. The proof is the same as then: just unfold, eq-cd and impredicativity-wf-tac. We use ?{!{auto}} to apply auto only in a subgoal where it immediately proves it. Here ?{} says "run this or do nothing" and !{} says "run this; if it succeeds, stop; if it does anything else, fail". This is not an interesting portion of the proof, so don't burn too many cycles trying to figure it out.

```
{ unfold <not implies>; intro; aux { ... };
  assert [Russell ∈ U{i}] <russell-wf>; aux { ... };
  assert [(Russell ∈ Russell) ∈ U{i}] <russell-in-russell-wf>; aux { ... };
  assert [¬ (Russell ∈ Russell)] <not-russell-in-russell>; aux { ... };
  assert [Russell ∈ Russell] <russell-in-russell>;
  aux { unfold <member Russell>; eq-cd; unfold <member>;
        main { assumption };
        unfold <not implies>; eq-cd; ?{!{auto}};
        impredicativity-wf-tac; }; }
```

Now we just have the final subgoal to prove, and we're actually in a position to do so.

```
[main]
1. x : member(U{i}; U{i})
2. russell-wf : member(Russell; U{i})
3. russell-in-russell-wf : member(member(Russell; Russell); U{i})
4. russell-not-in-russell : not(member(Russell; Russell))
5. russell-in-russell : member(Russell; Russell)
⊢ void
```

Now that we've shown that P and not(P) hold at the same time, all we need to do is apply contradiction and we're done.

```
Theorem type-not-in-type [¬ (U{i} ∈ U{i})] {
  unfold <not implies>; intro; aux { ... };
  assert [Russell ∈ U{i}] <russell-wf>; aux { ... };
  assert [(Russell ∈ Russell) ∈ U{i}] <russell-in-russell-wf>; aux { ... };
  assert [¬ (Russell ∈ Russell)] <not-russell-in-russell>; aux { ... };
  assert [Russell ∈ Russell] <russell-in-russell>; aux { ... };
  contradiction
}.
```

And there you have it: a complete proof of Russell's paradox fully formalized in JonPRL! We actually proved a slightly stronger result than that the type of types cannot be in itself: at any point in the hierarchy of universes (the first of which is Type/*/whatever), if you tie the hierarchy off, you get a contradiction.

**Wrap Up**

I hope you found this proof interesting. Even if you're not at all interested in JonPRL, it's nice to see that allowing U{i} : U{i} or * :: * gives you the ability to construct a type like Russell and, with it, void. I also find it especially pleasing that we can prove something like this in JonPRL; it's growing up so fast.

*Thanks to Jon for greatly improving the original proof we had*

### Haskell for Mac

Available here, with Hacker News and Reddit discussions ongoing.

Even though I'm not a big fan of Haskell, I'm pretty excited about this. It represents a trend where PL is finally taking holistic programmer experiences seriously, and a move toward interactivity in program development that takes advantage of (a) our rich type systems and (b) our increasing budget of computer cycles. Even the fact that they are trying to sell this is good: if people can get used to paying for tooling, that will encourage even more tooling via a healthy market feedback loop. The only drawback is the MAS sandbox; app stores need to learn how to accept developer tools without crippling them.

### The GHC Team: GHC Weekly News - 2015/08/06

GHC Weekly News - 6 Aug 2015

Hello *,

Here is a rather belated Weekly News which I found sitting nearly done on my work-queue. I hope this will make for a good read despite its age. The next edition of the Weekly News will be posted soon.

**Warnings for missed specialization opportunities**

Simon Peyton Jones recently introduced (commit a4261549afaee56b00fbea1b4bc1a07c95e60929) a warning in master to alert users when the compiler was unable to specialize an imported binding despite it being marked INLINABLE. This change was motivated by #10720, where the reporter observed poor runtime performance despite taking care to ensure his binding could be inlined. Up until now, ensuring that the compiler's optimizations met the user's expectations required a careful look at the produced Core. With this change the user is notified of exactly where the compiler had to stop specializing, along with a helpful hint on where to add an INLINABLE pragma.
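As a minimal illustration (a hypothetical module, not code from the GHC patch) of the kind of binding this warning concerns:

```haskell
module Main where

-- INLINABLE keeps this binding's unfolding in the interface file, so an
-- importing module can have it specialised to a concrete type (say, Int)
-- instead of calling through the Num dictionary on every use. Without
-- the pragma, GHC may lack the unfolding at the use site -- the situation
-- the new warning reports.
normSquared :: Num a => [a] -> a
normSquared = sum . map (\x -> x * x)
{-# INLINABLE normSquared #-}

main :: IO ()
main = print (normSquared [3, 4 :: Int])  -- prints 25
```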

**Ticky-ticky profiling**

Recently I have been looking into breathing life back into GHC's ticky-ticky profiling mechanism. When enabled, ticky-ticky maintains low-level counters of various runtime-system events. These include closure entries, updates, and allocations. While ticky doesn't provide nearly the detail that the cost-center profiler allows, it is invisible to the Core-to-Core optimization passes and has minimal runtime overhead (manifested as a bit more memory traffic due to counter updates). For this reason, the ticky-ticky profiler can be a useful tool for those working on the Core simplifier.

Sadly, ticky-ticky has fallen into quite a state of disrepair in recent years as the runtime system and native code generator have evolved. As the start of an effort to resuscitate it, I've begun putting together a list of the counters currently implemented and whether they can be expected to do something useful. Evaluating the functionality of these counters is non-trivial, however, so this will be an ongoing effort.

One of our goals is to eventually do a systematic comparison of the heap allocation numbers produced by the ticky-ticky profiler and the cost-center profiler. While this will help validate some of the more coarse-grained counters exposed by ticky, most of them will need a more thorough read-through of the runtime system to verify.

**integer-gmp performance**

Since the 7.10.2 release much of my effort has been devoted to characterizing the performance of various benchmarks over various GHC versions. This is part of an effort to find places where we have regressed in the past few versions. One product of this effort is a complete comparison of results from our nofib benchmark suite ranging from 7.4.2 to 7.10.1.

The good news is there are essentially no disastrous regressions. Moreover, on the mean, runtimes are over 10% faster than they were in 7.4.2. There are, however, a few cases which have regressed. The runtime of the integer test, for instance, has increased by 7%. Looking at the trend across versions, it becomes apparent that the regression began with 7.10.1.

One of the changes introduced with 7.10 was a rewrite of the integer-gmp library, which this benchmark tests heavily. To isolate this potential cause, I recompiled GHC 7.10.1 with the old integer-gmp-0.5. Comparing the two integer-gmp versions under 7.10.1 reveals a 4% increase in allocations.

While we can't necessarily attribute all of the runtime increase to these allocations, they are something that should be addressed if possible. Herbert Valerio Riedel, the author of the integer-gmp rewrite, believes that the cause may be due to the tendency for the rewrite to initially allocate a conservatively-sized backing ByteArray# for results. This leads to increased allocations due to the reallocations that are later required to accommodate larger-than-expected results.

While being more liberal in the initial allocation sizes would solve the reallocation issue, this approach may substantially increase working-set sizes and heap fragmentation for integer-heavy workloads. For this reason, Herbert will be looking into exploiting a feature of our heap allocator. Heap allocations in GHC occur by bumping a pointer into an allocation block. Not only is this a very efficient means of allocating, it potentially allows one to efficiently grow an existing allocation. In this case, if we allocate a buffer and soon after realize that our request was too small we can simply bump the heap pointer by the size deficit, so long as no other allocations have occurred since our initial allocation. We can do this since we know that the memory after the heap pointer is available; we merely need to ensure that the current block we are allocating into is large enough.
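As a toy model of the bump-pointer trick described above (assumed names and a simplified single-block heap, not GHC's actual RTS allocator):

```haskell
import Data.IORef

-- A single allocation block of `blockSize` cells with a bump pointer.
data Bump = Bump { heapPtr :: IORef Int, blockSize :: Int }

newBump :: Int -> IO Bump
newBump n = do
  r <- newIORef 0
  pure (Bump r n)

-- Allocate n cells by bumping the pointer; returns the start offset,
-- or Nothing when the block is exhausted.
alloc :: Bump -> Int -> IO (Maybe Int)
alloc b n = do
  p <- readIORef (heapPtr b)
  if p + n <= blockSize b
    then do writeIORef (heapPtr b) (p + n); pure (Just p)
    else pure Nothing

-- Grow an earlier allocation of `old` cells at `start` by `extra` cells,
-- in place. This is legal only when no allocation has happened since:
-- the heap pointer must still sit exactly at start + old.
growInPlace :: Bump -> Int -> Int -> Int -> IO Bool
growInPlace b start old extra = do
  p <- readIORef (heapPtr b)
  if p == start + old && p + extra <= blockSize b
    then do writeIORef (heapPtr b) (p + extra); pure True
    else pure False
```

For example, allocating 4 cells and immediately growing them by 2 succeeds, while attempting the same grow after a second allocation fails, since the memory past the original allocation is no longer free.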

Simon Marlow and Herbert will be investigating this possibility in the coming weeks.

**D924: mapM_ and traverse_**

As discussed in the most recent Weekly News, one issue on our plate at the moment is Phab:D924, which attempts to patch up two remaining facets of the Applicative-Monad Proposal:

1. Remove the override of mapM for the [] Traversable instance
2. Rewrite mapM_ in terms of traverse_

While (1) seems like an obvious cleanup, (2) is a bit tricky. As noted last time, traverse_ appears to give rise to non-linear behavior in this context.

akio has contributed an insightful [analysis](https://ghc.haskell.org/trac/ghc/timeline?from=2015-08-01T10%3A00%3A33Z&precision=second) shedding light on the cause of this behavior. Given that the quadratic behavior is intrinsic to the Applicative formulation, we'll be sending this matter back to the Core Libraries Committee to inform their future design decisions.
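For reference, the shapes of the two definitions under discussion look roughly like this (paraphrased, not the verbatim base code); item (2) would collapse the monadic version into the applicative one, so any performance gap between (*>) and (>>) chains for a given Applicative becomes observable through mapM_:

```haskell
import Prelude hiding (mapM_)

-- Applicative version: chains effects with (*>).
traverse_' :: (Applicative f, Foldable t) => (a -> f b) -> t a -> f ()
traverse_' f = foldr (\x k -> f x *> k) (pure ())

-- Monadic version: chains effects with (>>).
mapM_' :: (Monad m, Foldable t) => (a -> m b) -> t a -> m ()
mapM_' f = foldr (\x k -> f x >> k) (return ())

-- Item (2) above would make mapM_ a synonym for the applicative version:
mapM_ :: (Monad m, Foldable t) => (a -> m b) -> t a -> m ()
mapM_ = traverse_'
```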

That is all for this week!

Cheers,

~ Ben

### Munich Haskell Meeting, 2015-08-27 @ 19:30, Augustiner-Keller Biergarten

### What's the status of WinGHCi?

Having seen the Haskell for Mac announcement today, I suddenly remembered that such a thing as WinGHCi (which is included in the Haskell Platform for Windows) exists. After a quick web search, I found out that it's hosted on Google Code and it seems no one has exported it to GitHub or anywhere else. So I'm wondering: is anyone maintaining it now?

submitted by dauntingdrone