# News aggregator

### Danny Gratzer: Overview of A Scheme Compiler

For the last few months I’ve been spending a fair amount of time on a fun little Scheme to C compiler, c_of_scheme.

In this post I’ll give a high-level overview of c_of_scheme; future posts will detail the specifics of each component.

**Modules**

c_of_scheme is divided into 11 modules: 2 utility modules, 6 modules which each handle one step of compilation, a module with definitions of ASTs, a driver, and of course Main.

First, let’s discuss the utility modules, Utils.Gen and Utils.Error. Gen defines a monad, Gen. This is used to generate unique integers to be used as identifiers. For example:

```haskell
data Var = Name String | Gen Integer

genVar :: Gen Var
genVar = Gen <$> gen
```

Other stages of the compiler (continuation passing style, closure conversion, and lambda lifting) need lots of temporaries, so this is used throughout the compiler.

Gen also comes with a monad transformer that implements a handful of useful MTL type classes. Overall, nothing too stunning.
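The post doesn’t pin down Gen’s implementation, but a minimal sketch of the idea (the names `gen`/`runGen` and the `State`-based representation are my guesses, not the package’s actual code) might look like:

```haskell
import Control.Monad.State

-- A minimal unique-supply monad: thread a counter and hand out
-- fresh integers. The real Utils.Gen is a newtype with its own
-- transformer, but the idea is the same.
type Gen = State Integer

-- Produce the next unique integer.
gen :: Gen Integer
gen = do
  i <- get
  put (i + 1)
  return i

runGen :: Gen a -> a
runGen m = evalState m 0

data Var = Name String | Gen Integer deriving (Show, Eq)

genVar :: Gen Var
genVar = Gen <$> gen
```

Running `runGen (sequence [genVar, genVar])` then yields two distinct generated variables, `Gen 0` and `Gen 1`.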

The other uninteresting utility module is Error. It is just a wrapper around Either with a few functions for throwing errors and good pretty printing of errors. It is used to signal major internal errors.

The precise interface is given by a set of functions failRW, failCPS, failClos, etc., which correspond to each stage of compilation. These generate lovely pretty printed error messages for each stage. This will become clearer as we go over each phase individually and see what needs to signal failure.
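A sketch of the shape such a module might have (everything beyond the failRW/failCPS/failClos names, including the message format, is an assumption):

```haskell
-- A thin wrapper around Either for signalling internal compiler
-- errors, with one failure helper per compilation stage. Sketch only;
-- the real Utils.Error's pretty printing is richer.
data CompErr = CompErr { errStage :: String, errMessage :: String }

instance Show CompErr where
  show (CompErr s m) = "[" ++ s ++ "] " ++ m

type Failable = Either CompErr

failStage :: String -> String -> Failable a
failStage s m = Left (CompErr s m)

failRW, failCPS, failClos :: String -> Failable a
failRW   = failStage "rewrite-top-level"
failCPS  = failStage "cps-conversion"
failClos = failStage "closure-conversion"
```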

A module that’s worth mentioning that’s not a compilation stage but not quite a utility module is AST. This defines the various abstract syntax trees and primops for our representation of Scheme. This also defines the compiler monad, which combines our error monad with Gen and some other bits and bobs useful for our compiler. More on AST in future posts.

**Stages of Compilation**

Now let’s actually go over the individual phases of compilation.

**Parsing (Parser.hs)**

This is the least interesting phase of compilation. I personally just dislike parsing, so I don’t have much to say about it.

A legal Scheme program is a list of definitions; we don’t currently allow top level expressions. We also don’t currently support the usual define sugar for functions.

The parser uses Parsec because I just happen to know the Parsec API, ironically because of this. If anyone cares enough to write a proper lexer and/or parser or something, I’m more than happy to help!

**Rewrite Top Levels (RewriteTopLevel.hs)**

This phase is a little peculiar. It exists because we’re targeting C, and C has a fairly annoying restriction on what it allows top levels to be initialized to.

In C, we can’t write something like

```c
int c = 1 + 1 + 1;
```

but in our dialect of Scheme, this is the only way to write interesting computations! This phase of compilation rewrites top levels (Shocking!) to match the C definition of top levels.

This is done by changing each definition to an Init, which will later turn into a C declaration without initialization. Next we create a new function, our main function, which is a series of assignments pairing each top level definition with its initializer.

For example,

```scheme
(define foo 1)
(define bar 2)
(define quux (+ foo bar))
(define _ (display quux))
```

will become

```scheme
(init foo)
(init bar)
(init quux)
(init _)
(define magical-main
  (lambda ()
    (set! foo 1)
    (set! bar 2)
    (set! quux (+ foo bar))
    (set! _ (display quux))))
```

where magical-main will be the first thing called in the generated code.

A caveat: we turn (define foo (lambda (..) ...)) into something different, since it’s more efficient to directly convert these to functions.

**Continuation Passing Style Conversion (CPS.hs)**

This is the first interesting bit of compilation. CPS is a style in which every function call is a tail call. Here’s an example of non-CPS code converted to CPS.

```scheme
(define foo (lambda (y) (+ 1 y)))

(define foo-cps
  (lambda (cont x y)
    ((lambda (+')
       ((lambda (one)
          ((lambda (x')
             ((lambda (result) (cont result))
              (+' one x')))
           x))
        1))
     +)))
```

Notice how with the CPS’ed version we’ve actually made evaluation order explicit and have removed non-primitive expressions.
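The shape of such a transformation can be sketched over a toy lambda calculus. This is a generic Fischer-style CPS transform with a hand-rolled fresh-name supply, not c_of_scheme’s actual CPS.hs:

```haskell
import Control.Monad.State

-- A tiny lambda calculus to CPS-convert.
data Expr = Var String
          | Lam String Expr
          | App Expr Expr
          deriving (Show, Eq)

-- Fresh variable names, counted off as v0, v1, ...
fresh :: State Int String
fresh = do { n <- get; put (n + 1); return ("v" ++ show n) }

-- cps e k: convert e, delivering its value to the continuation k.
cps :: Expr -> Expr -> State Int Expr
cps (Var x)   k = return (App k (Var x))
cps (Lam x b) k = do
  c  <- fresh
  b' <- cps b (Var c)
  -- every function gains an extra continuation parameter
  return (App k (Lam x (Lam c b')))
cps (App f a) k = do
  fv <- fresh
  av <- fresh
  -- evaluate f, then a, then call f with a and the continuation:
  -- every call ends up in tail position
  inner <- cps a (Lam av (App (App (Var fv) (Var av)) k))
  cps f (Lam fv inner)

runCps :: Expr -> Expr
runCps e = evalState (cps e (Var "halt")) 0
```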

CPS.hs converts the AST to use CPS. We’ll detail this process later but for now I’ll mention one more interesting tidbit.

CPS.hs is also where we implement call/cc! In fact it’s trivial to do. All we do is add the declaration for

```scheme
(define call/cc
  (lambda (c f)
    (f c (lambda (ignored x) (c x)))))
```

**Optimizations (OptimizeCPS.hs)**

This module implements the simple optimizations we perform. For now this is limited to simple inlining and constant folding, but this should improve in the future.

These optimizations are implemented quite pleasantly with recursion schemes.
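For a flavour of the constant-folding half, here is a sketch over a made-up expression type, written with plain recursion rather than recursion schemes (the package’s real AST and optimizer differ):

```haskell
-- Fold constant arithmetic at compile time: simplify the children
-- first, then collapse operations whose operands are both literals.
data SExp = NumLit Int
          | SVar String
          | Add SExp SExp
          | Mul SExp SExp
          deriving (Show, Eq)

constFold :: SExp -> SExp
constFold (Add l r) = case (constFold l, constFold r) of
  (NumLit a, NumLit b) -> NumLit (a + b)
  (l', r')             -> Add l' r'
constFold (Mul l r) = case (constFold l, constFold r) of
  (NumLit a, NumLit b) -> NumLit (a * b)
  (l', r')             -> Mul l' r'
constFold e = e
```

So `constFold (Add (NumLit 1) (Mul (NumLit 2) (NumLit 3)))` collapses to `NumLit 7`, while anything involving an `SVar` is left alone.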

**Closure Conversion + Lambda Lifting (ClosureConvert.hs)**

This is the most difficult phase of compilation, for me anyways. In concept it’s quite simple though.

The idea is that we take the implicit closure “argument” that all Scheme procedures take and make it explicit. To this end we add three new primops, NewClos, ReadClos, and WriteClos. These do much what you would expect and let us treat closures opaquely as first class values.

Next we change each procedure to take an extra argument, its closure, and change closed over variables to be selected from this closure. Finally we change each lambda to be paired with its closure when constructed.
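Deciding what each closure must hold is just a free-variable computation. Here is a sketch over a toy AST (not the compiler’s real types): a lambda’s closure must capture exactly the variables used in its body minus the ones it binds.

```haskell
import Data.List (nub, (\\))

-- A tiny AST for illustration.
data E = V String          -- variable reference
       | L [String] E      -- lambda with an argument list
       | A E [E]           -- application
       deriving Show

-- Free variables: what a lambda's closure record must capture.
freeVars :: E -> [String]
freeVars (V x)      = [x]
freeVars (L args b) = nub (freeVars b) \\ args
freeVars (A f xs)   = nub (freeVars f ++ concatMap freeVars xs)
```

For example, in `(lambda (x) (f x y))` the closure must capture `f` and `y` but not the bound `x`.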

This sounded pretty feasible to me on paper, but in practice it seems to be the greatest source of bugs in c_of_scheme. It finally seems to work nicely now so I’ll be sure to blog about it soon.

**Code Generation (CodeGen.hs)**

This is the final stop in our compilation pipeline: we generate C code.

To do this we use one of my libraries. This is actually quite a simple step in the compiler since closure-converted, CPS-ed code is quite close to C.

Some of the details that code generation handles:

- Interfacing to the runtime system
- Generating the main method
- Generating declarations for all the variables used in our intermediate language
- Mapping the Scheme variables to appropriate C names

While this might sound daunting, this isn’t actually so bad.

**Driver (Driver.hs)**

While I might not write a post on it, Driver is my personal favorite module. It glues together all of the previous compilation phases and provides a bunch of nice high level functions like compileScheme.

The reason I like it so much is that all the code in it is a very nice, clean example of composing components as good old functions.

If you’re looking to understand c_of_scheme’s particular implementation, I’d urge you to start with Driver. It’ll provide a bit of intuition for what goes where.

**The Runtime System**

Currently c_of_scheme has an incredibly naive runtime system, mostly because it’s being written by an incredibly naive C programmer (hi!).

I already wrote about the most interesting bit of the RTS: tail calls.

I plan on talking a bit about the RTS in the context of code generation (since it’d be impossible not to), and perhaps a post on c_of_scheme’s simple little mark and sweep GC.

**Wrap Up**

So that’s the high level overview of c_of_scheme. I think the compiler is best exemplified by one particular function in Driver.hs:

```haskell
compileScheme :: [SDec UserPrim] -> Compiler [CExtDecl]
compileScheme = addPrimops
            >=> makeMain
            >=> cpsify
            >=> optimizeCPS
            >=> closConvert
            >=> codegen
  where addPrimops = return . (++ prims)
```

This chains together all the phases of compilation into one big old function from the Scheme AST to the C one.

Now, if you’re really interested in c_of_scheme, go ahead and grab the source with

```shell
hg clone ssh://hg@bitbucket.org/jozefg/c_of_scheme
```

I do use mercurial, so you can also grab a zip from bitbucket if you’re unwilling to use mercurial for one command :)

I should have posts about each specific phase of compilation up Real Soon Now. I’ll edit with a list of links to posts below as they are written.

*Thanks to @tylerholien for proofreading*

### How do you choose the fixity of operators when designing DSLs?

Since designing DSLs in Haskell is so common, I'm just wondering if there are some tricks out there to keep the fixity declarations the best they can be, or if it's just a matter of trial and error?
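For example, the only heuristic I know is to copy the fixity of the closest standard operator, so for a made-up arithmetic/boolean DSL I’d end up with something like:

```haskell
-- Mirror the Prelude: (.+.) like (+) at infixl 6, (.*.) like (*) at
-- infixl 7, (.&&.) like (&&) at infixr 3. All operators here are
-- made up for illustration.
infixl 6 .+.
infixl 7 .*.
infixr 3 .&&.

data NExp = NLit Int | NAdd NExp NExp | NMul NExp NExp
  deriving (Show, Eq)

(.+.), (.*.) :: NExp -> NExp -> NExp
(.+.) = NAdd
(.*.) = NMul

data BExp = BLit Bool | BAnd BExp BExp
  deriving (Show, Eq)

(.&&.) :: BExp -> BExp -> BExp
(.&&.) = BAnd
```

With those fixities, `NLit 1 .+. NLit 2 .*. NLit 3` parses as `NAdd (NLit 1) (NMul (NLit 2) (NLit 3))`, matching arithmetic intuition. Is there anything more principled than this?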

Thanks!

submitted by 5outh

### Why are non-parallel programs run with parallel flags slower?

I compile my non-parallel program with GHC 7.8.2 with parallel flags. When I run the program with parallel RTS options (+RTS -N -RTS), the performance is significantly worse than running without any flags. Why is this?

submitted by fiendfan1

### Final call for talk proposals: HOPE'14 (Workshop on Higher-Order Programming with Effects, affiliated with ICFP'14)

### Ken T Takusagawa: [qplzrxby] Letter and polygram frequencies

**Part 1**

In order of decreasing frequency, here are the common English polygrams (single letters, bigrams, trigrams, etc., also called n-grams) gathered from a private corpus. We arbitrarily stop at the least frequent single letter, namely "z". There are 326 polygrams including the 26 single letters.

e t a o i n s r h l d u c th m g y p he w f in b the re er an on at ha v ou it en ng or k to es st ing te ar is ti al nd se nt ed as ve le me ll ea hi ne co hat of li de be tha no ri ca ic ot ro ly and ho ut om us that so io ra el et ma wh ce pe ta wa ch la fo ion ur si ent ec di il do ge ee pr her thi yo ac ns ow for ul ke all un ere we ss id ct em rs lo tio wi tion you rt ay ad mo ld po ai tr bu su mi wo hin oo pa pl ts ter gh av sh os nc ol ig ther ey ab am op ir na ni im ie if ate j thin not sa x ver fi bl are go bo one ev rea out ave tu ati oul was ry uld ould hey ome vi ba hav they here iv res ci tt ith ck ag con fe but ers have ia lly eve ap rd wit fr ally da pro sta mp his tin with ill ny ex hou ess ty cl com ht nk up sp ye ei can ear ght int est nce ki ff ep ted use ug ons od igh men ov ting som some pp ive atio our ore ation hing eo ect ak uc sti cou ple any ue ide ust ju ik pre gr cr der ef gi tho rr um ink ga ls there ike wha fa act han sc what ant thing ment q ell ua cu ight art ew bi whe oi oun ugh this nte per ica ble ist wou lik au like ru ove ob woul would rn z

**Part 2**

To avoid the redundancy between polygrams like "woul" and "would" in Part 1, we choose a subset of them by iteratively finding the length of the longest polygrams more frequent than the original frequency of "z", then greedily selecting the most frequent polygram among those with that length. We cut out that polygram from the corpus, then reanalyze and repeat. (This was a long computation.) The polygrams got eliminated in the following order. This list of 126 polygrams has far less redundancy.

5 letters: ation there thing would

4 letters: that they have tion ally with ting some ther what this ight like thin

3 letters: the ing and ent for you ter ver one are not ate out was con all but pro res ere ill com rea ear use can cou any nce ple ust ive hou ers ell ess ant ble

2 letters: in to it re is on ed er es of en ar or as al an ic be st se ly ch ow at id wh le ac ay me un ad ro su ge if th mo go pe so lo do de us po ab he ir im am ld tr ur ne ap gh il we ve ts ke no ma ex ul ta te co pl

The remaining single letters in decreasing order of frequency were

s t i a m e d o p c h l u f y r b n w g k v j q z

The letter "x" dropped below the "z" frequency threshold after elimination of the "ex" bigram.

**Part 3.1**

Part 2 cut occurrences of a polygram out of a word, so, for example, the word "others" became two words "o" and "s" in the corpus after elimination of "ther". In this part, we instead replace a common polygram with a single composite character, so the 5-letter word "other" becomes the 3-letter word "o(the)r", where (the) counts as only one letter. We choose composite characters greedily, maximizing at each step the scoring function N*(L-1), where N is the number of occurrences and L is the length of the polygram. This scoring function is equivalent to the total number of keystrokes that would be saved to type the entire corpus if the polygram could be typed with one keystroke. After substituting the highest-scoring composite polygram throughout the corpus, we reanalyze and repeat. (This was a long computation.) We arbitrarily stop when the best score drops below the frequency of "z", finding 126 polygrams:

the in re an that on er ou it st en (in)g or al at to is ar le as ed th es of ic (an)d ly have om be se wh no (ou)ld i(on) but ac ight peop(le) (en)t id ll w(it)h like f(or) ch y(ou) ay ad pro me ab(ou)t we su s(om)e ge he lo ve if a(re) do v(er) un (th)(in)k go (ou)gh de (no)t po w(as) (on)e mo (the)y ju(st) fr(om) am so ir us im te ne c(on) ce il tr t(er) ap how (al)(ly) (be)cau(se) ct c(om) ab (at)e (al)l i(ll) o(the)r ur (at)(i(on)) ak pe ig (an)y ts p(re) ol sh (the)(re) k(no)w la (th)((in)g) d(if)fe(re)n ex (ou)nd (th)(is) (wh)(at) (wh)(ic)h c(an) (ac)t li ri ro w((ou)ld) m((en)t)
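One greedy step of the procedure described above can be sketched as follows. This is a simplification, not the post’s actual source: the real computation also substitutes the winning polygram as a composite character and re-counts before the next step.

```haskell
import Data.List (maximumBy)
import Data.Ord (comparing)
import qualified Data.Map as M

-- All substrings of length >= 2 of a word.
polygrams :: String -> [String]
polygrams w = [ take l (drop i w)
              | i <- [0 .. length w - 1]
              , l <- [2 .. length w - i] ]

-- Given (word, count) pairs (e.g. from `sort | uniq -c`), score every
-- polygram as N*(L-1) -- the keystrokes saved if it could be typed
-- with a single keystroke -- and pick the best.
bestPolygram :: [(String, Int)] -> (String, Int)
bestPolygram wordCounts = maximumBy (comparing snd) scored
  where
    counts = M.fromListWith (+)
               [ (g, c) | (w, c) <- wordCounts, g <- polygrams w ]
    scored = [ (g, n * (length g - 1)) | (g, n) <- M.toList counts ]
```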

**Part 3.2**

Here are the 218 composite polygrams selected in greedy order when using the scoring function N*L instead of N*(L-1). We now need to explicitly forbid creating useless one-letter composite characters.

th in (th)e re an on at er ou st en (in)g it or al to is ar le as ed es of ic (an)d ly om (th)(at) be se wh no ve i(on) ac (en)t ha id ll f(or) y(ou) ay ch (ou)ld me ut ro li we su gh ad ge lo he ab wi ke pe do a(re) v(er) if un go de (no)t ri s(om)e po (ou)t w(as) (on)e mo ((th)e)y so am ir us ne c(on) b(ut) ce ct (pe)op(le) (ha)(ve) te (al)(ly) t(er) (th)(in)k il (wi)(th) ho c(om) im p(ro) ju(st) (gh)t fr(om) (at)e (al)l (at)(i(on)) (an)y ((th)e)r ur fe p(re) la ma tr ((th)e)(re) (th)((in)g) ap ex ts co (th)(is) (wh)(at) c(an) (ac)t (er)s (ou)n w((ou)ld) m((en)t) ss ta (li)(ke) (no)w ye qu (be)cau(se) (ar)t (an)t up sh d(on) (ab)((ou)t) ul ig ag bo my (en)d (ge)t (ar)d pl sa i(st) (ou)r by (wh)(ic)h w(or) em (ho)w (wh)o (mo)(re) ol k((in)g) ti h(as) (ou)(gh) i(ll) um d(er) i(ve) p(er) (es)s w(ay) (in)d od (it)y (re)d v(en) ok (th)(an) gu (in)e ra (re)n o(((th)e)r) (do)(es) (ac)k sp ((th)e)m publ(ic) (or)t (id)e fo (wh)e(re) (ic)e d(id) ty s(ay) mu(ch) (wi)(ll) (we)(ll) (as)s k((no)w) pr t((in)g) op ud d(ay) cl (wh)(en) (ou)s (an)s ck a(st) ((ou)n)d b(le) ci gr wn (ab)(le) (v(er))y (of)f (in)t

**Part 3.3**

Here are the 62 composite polygrams selected in greedy order when using the scoring function N*(L-2).

the that ing tion and ould ent have for all people think with you about some ter though what ver ight this like not one are ate because was just from con but which pro res more ere ill com rea ear actu(all)y can than any does nce time ive well know (the)re ers when ess ound ant able th(ing) out part

**Source code**

Here is the Haskell source code. The main optimization was to preprocess the corpus into a list of words with word counts, using `sort | uniq -c`.

The Google Books n-grams corpus is another choice, but I don't like how it includes, as common words, words like "iii", which are probably an artifact of OCR scanning Roman numeral page numbers.

Previous vaguely similar attempt, in Perl. Previous thoughts invoking Braille contractions. I have heard anecdotes that Braille contractions were derived from frequencies in the Bible.

### Mike Izbicki: I got lenses in my Functors

The typeparams package provides type lenses. Let’s combine them with Functors. Because why not?! You’ll need to have at least skimmed the linked README to understand what’s going on here.

First, enable some GHC magic:

> {-# LANGUAGE TemplateHaskell #-}
> {-# LANGUAGE ScopedTypeVariables #-}
> {-# LANGUAGE KindSignatures #-}
> {-# LANGUAGE TypeFamilies #-}
> {-# LANGUAGE MultiParamTypeClasses #-}
> {-# LANGUAGE UndecidableInstances #-}
> {-# LANGUAGE FlexibleInstances #-}
> {-# LANGUAGE RankNTypes #-}

And import our libraries:

> import Control.Category
> import Prelude hiding ( (.), id, Functor(..) )
> import Data.Params

We’ll use the Either type as our main example. It’s defined as:

```haskell
data Either a b = Left a | Right b
```

The Functor instance is pretty straightforward:

```haskell
class Functor f where
  fmap :: (a -> b) -> f a -> f b

instance Functor (Either a) where
  fmap f (Left a)  = Left a
  fmap f (Right b) = Right $ f b
```

But this instance has a key limitation: we can map a function only over the last type parameter.

Bifunctors are the current solution to this problem. A recent, popular proposal suggested adding them to base. But this is an ad hoc solution whose application does not extend far beyond the Either type.

Type lenses will (kinda sort of) provide a cleaner solution. That is, they fix the problem about as well as regular old lenses fix the problems of record selectors. As a bonus, we’ll get a convenient mechanism for mapping over nested Functors.

**Defining the Functor**

Here is the alternative definition of the Functor class using type lenses:

> class Functor lens t where
>   fmap' :: a ~ GetParam lens t
>         => TypeLens p lens
>         -> (a -> b)
>         -> t
>         -> SetParam lens b t

It’s okay if you don’t understand the type signature at first glance. (That’s how you know you’re using lenses, after all!) Let’s step through it using the Either example.

The first argument is the type lens. This indicates which parameter of the type t we will be mapping over. In the Either data type, we could use the variable _a to map over the Left component or _b to map over the Right.

Next, we encounter two type families, GetParam and SetParam. These act as getters and setters at the type level. In the above example, GetParam is used to extract arbitrary type params from a type. It is defined as:

```haskell
type family GetParam (p::k1) (t:: *) :: k3
type instance GetParam Param_a (Either a b) = a
type instance GetParam Param_b (Either a b) = b
```

The SetParam type family similarly sets the type of arbitrary params in a type. It is defined as:

```haskell
type family SetParam (p::k1) (a::k2) (t:: *) :: *
type instance SetParam Param_a a' (Either a b) = Either a' b
type instance SetParam Param_b b' (Either a b) = Either a b'
```

These instances can be automatically provided for any type by calling the mkParams template haskell function like so:

> mkParams ''Either

Quick aside: With injective type families and a little sugar, we could make this definition of Functor a tad cleaner.

**Instances**

We can replicate the traditional Functor instance with the code:

```haskell
instance Functor (Param_b Base) (Either a b) where
  fmap' lens f (Left a)  = Left a
  fmap' lens f (Right b) = Right $ f b
```

and create a “Left” Functor instance as:

```haskell
instance Functor (Param_a Base) (Either a b) where
  fmap' lens f (Left a)  = Left $ f a
  fmap' lens f (Right b) = Right b
```

Together, these instances let us run the commands:

```
ghci> fmap _b length $ Left "Roses are red,"
Left "Roses are red,"
ghci> fmap _b length $ Right "Violets are blue,"
Right 17
ghci> fmap _a length $ Left "Haskell is fun,"
Left 15
ghci> fmap _a length $ Right "Type lenses are cool."
Right "Type lenses are cool."
```

**But wait! There’s more!**

With the above definitions, we can’t combine our type lenses at all. Enter the funnily named and awkwardly typed zoom combinator:

```haskell
zoom :: TypeLens a p -> TypeLens a (Zoom p)
```

This combinator lets us zoom into a composed type lens, removing the outermost layer. For example, given the composed type lens:

```
ghci> :t _a._b._a._b
_a._b._a._b :: TypeLens a (Param_a (Param_b (Param_a (Param_b a))))
```

Then zooming in removes the first _a:

```
ghci> :t zoom (_a._b._a._b)
zoom (_a._b._a._b) :: TypeLens a (Param_b (Param_a (Param_b a)))
```

We will use this combinator to redefine our Functor instances. The new instances will recursively map over every Functor in our input lens:

> instance Functor p b => Functor (Param_b p) (Either a b) where
>   fmap' lens f (Left a)  = Left a
>   fmap' lens f (Right b) = Right $ fmap' (zoom lens) f b
>
> instance Functor p a => Functor (Param_a p) (Either a b) where
>   fmap' lens f (Left a)  = Left $ fmap' (zoom lens) f a
>   fmap' lens f (Right b) = Right b

The type Base provides the base case of the recursion:

> instance Functor Base t where
>   fmap' _ f a = f a

Now, in order to call fmap', we must compose our lens with the type lens:

```haskell
_base :: TypeLens Base Base
```

For example:

```
ghci> :t _a._b._a._b._base
_a._b._a._b._base :: TypeLens Base (Param_a (Param_b (Param_a (Param_b Base))))
```

And we call fmap' like:

```
ghci> fmap' (_a._b._a._b._base) length $ Left $ Right $ Left $ Right "still simpler than the lens package "
Left (Right (Left (Right 42)))
ghci> fmap' (_a._b._a._b._base) length $ Left $ Right $ Left $ Left "... for now ..."
Left (Right (Left (Left "... for now ...")))
```

Composing all of our lenses with _base is tedious. So let’s write a function that automates that task:

> fmap ::
>   ( Functor lens t
>   ) => TypeLens Base lens
>     -> (GetParam lens t -> c)
>     -> t
>     -> SetParam lens c t
> fmap lens = fmap' (lens._base)

And we call fmap as:

```
ghci> fmap (_a._b._a._b) length $ Left $ Right $ Left $ Left "mwahhahahaha"
Left (Right (Left (Left "mwahhahahaha")))
```

**More Functors!**

We can easily define more of these new Functor instances. In fact, the procedure is exactly as mechanical for type lens based Functors as it is for the traditional Functors. All you have to do is replace every function application with a recursive Functor call:

```
f x   -->   fmap' (zoom lens) f x
```

Here are some examples using the list and Maybe functors:

> mkParams ''[]
> instance Functor p a => Functor (Param_a p) [a] where
>   fmap' lens f []     = []
>   fmap' lens f (a:as) = fmap' (zoom lens) f a : fmap' lens f as
>
> mkParams ''Maybe
> instance Functor p a => Functor (Param_a p) (Maybe a) where
>   fmap' lens f Nothing  = Nothing
>   fmap' lens f (Just a) = Just $ fmap' (zoom lens) f a

Let’s create a variable that uses all of our functors:

> monster =
>   [ Nothing
>   , Just (Left "Hello!")
>   , Just (Right 42)
>   , Just (Left "World!")
>   ]

And go to town:

```
ghci> fmap (_a._a._a._a) succ monster
[Nothing,Just (Left "Ifmmp\""),Just (Right 42),Just (Left "Xpsme\"")]
ghci> fmap (_a._a._a) length monster
[Nothing,Just (Left 6),Just (Right 42),Just (Left 6)]
ghci> fmap (_a._a) (const 3.4) monster
[Nothing,Just 3.4,Just 3.4,Just 3.4]
ghci> fmap _a show monster
["Nothing","Just (Left \"Hello!\")","Just (Right 42)","Just (Left \"World!\")"]
```

**Tune in next time…**

In our next installment, we’ll tackle Applicative parsing with type lenses. Thought the lens package had too many operators??? You ain’t seen nothin’ yet.

### [CfP] HOPE 2014 :: The 3rd ACM SIGPLAN Workshop on Higher-Order Programming with Effects

### What kind of things are easy in Haskell and hard in Scala, and vice-versa?

### Are Haskell salaries competitive?

I've been wondering if salaries for positions writing Haskell are competitive with salaries for positions writing more mainstream programming languages. Haskellers say that they're more productive when writing Haskell, and I would hope that a productive engineer is rewarded accordingly. However, there's also a sentiment that "I'd be willing to take a pay cut, if I could write Haskell full time." In addition, the market for Haskell programmers has been described as a "buyer's market", and I would expect this to drive down the salary of a Haskell developer. Does anybody care to share their story being employed as a Haskell developer or employing Haskell developers?

submitted by cheecheeo

### What's the state of oauth2 with Haskell?

I have been looking at all of my usual resources (hackage, mostly) trying to figure out the easiest way to implement an OAuth2 authentication system with a Yesod app. I just want to make API calls to an OAuth2 provider. It seems that every library I look at is either broken or incomplete or both.

How are you guys handling this?

submitted by 2piix

### Reichert Brothers - HsQML StocQt - Stock Market Visualization in Haskell and Qt Quick

### Why does ghci contain mingw?

I just realized (by looking through my environment variables), that ghci came with its own installation of mingw and added the bin directory to PATH. I removed it from PATH, and ghci seems to run just fine, so what is it for?

submitted by ym_twosixonetwo

### Theory Lunch (Institute of Cybernetics, Tallinn): Proving the beauty of a mind

In the previous Theory Lunch talk we introduced the notion of Nash equilibrium for games in normal form. Today, we went through the proof of Nash’s theorem of existence of mixed strategy Nash equilibria for finite games in normal form.

Let us recall the basic notions. In a *game in normal form* we have:

- A set of *players* $N = \{1, \ldots, n\}$.
- A set of *strategies* $S_i$ for each player $i$.
- A collection of *utility functions* $u_i$ which associate to each *strategic profile* $s = (s_1, \ldots, s_n)$ a real number, such that $u_i(s)$ is the *utility* player $i$ gets from the strategic profile $s$.

A *Nash equilibrium* for a game in normal form is a strategic profile $s$ such that, for every player $i$ and every strategy $t_i$ feasible for player $i$, it is the case that $u_i(s) \geq u_i(t_i, s_{-i})$. We had seen that not every finite game in normal form admits a pure strategy Nash equilibrium: so, we introduced randomization.

A *mixed strategy* for player $i$ is a probability distribution $\mu_i$ over $S_i$. If $S_i$ is finite, this is the same as assigning values $\mu_i(s_{i,j}) \geq 0$ with $\sum_j \mu_i(s_{i,j}) = 1$. A *mixed strategy profile* is a collection of mixed strategies, one for each player. A *mixed strategy Nash equilibrium* is a mixed strategy profile $\mu$ such that, for every player $i$ and every mixed strategy $\nu_i$ feasible for player $i$,

$u_i(\mu) \geq u_i(\nu_i, \mu_{-i})$.

The idea behind Nash’s proof goes as follows. If the game is finite, then a mixed strategy for player $i$ is identified with a point of the simplex

$\Delta_i = \{ x \in \mathbb{R}^{m_i} : x_j \geq 0 \text{ for all } j, \ \sum_{j=1}^{m_i} x_j = 1 \};$

therefore, mixed strategy profiles can be identified with points of

$\Delta = \Delta_1 \times \cdots \times \Delta_n,$

which is compact and convex as all of its components are. Mixed strategy Nash equilibria are those points of $\Delta$ where each pure strategy $s_{i,j}$, $1 \leq i \leq n$, $1 \leq j \leq m_i$, is used in the most efficient way: by relaxing the condition and allowing a small “slack” with respect to such most efficient way, it is possible to define a continuous transformation of mixed strategy profiles into mixed strategy profiles, which will have a fixed point because of the Brouwer fixed-point theorem. By gradually reducing the slack, a mixed strategy Nash equilibrium is found as a limit point of such approximations.

Suppose player $i$ has available the pure strategies $s_{i,1}, \ldots, s_{i,m_i}$, for $1 \leq i \leq n$. Let $\sigma$ be an arbitrary mixed strategy profile and $M$ be an arbitrary integer. Consider the following quantities:

- .
- .
- .
- .

Given , the sum is bounded from below by , hence the functions

are continuous and nonnegative and satisfy whatever and are. As a consequence, the functions

that is,

are continuous transformations of into itself. Let be a fixed point of , whose existence is ensured by the Brouwer fixed-point theorem: as is compact, the sequence has a limit point .

Suppose, for the sake of contradiction, that is not a mixed strategy Nash equilibrium. Then there must be a player and a mixed strategy such that . The only way this may happen is that some *pure* strategy is used *suboptimally* by , that is,

Choose and so that:

- belongs to a subsequence converging to .
- .
- .
- .

Points 2 and 3 tell us that is strictly smaller than : this, together with point 4, yields , thus . But is a fixed point for , so

:

and as may be taken arbitrarily large and be made arbitrarily small, we must conclude that too. This is a contradiction.

### Philip Wadler: Reset the Net

"One year ago, we learned that the internet is under surveillance, and our activities are being monitored to create permanent records of our private lives — no matter how innocent or ordinary those lives might be.

Today, we can begin the work of effectively shutting down the collection of our online communications, even if the US Congress fails to do the same. That’s why I’m asking you to join me on June 5th for Reset the Net, when people and companies all over the world will come together to implement the technological solutions that can put an end to the mass surveillance programs of any government. This is the beginning of a moment where we the people begin to protect our universal human rights with the laws of nature rather than the laws of nations.

We have the technology, and adopting encryption is the first effective step that everyone can take to end mass surveillance. That’s why I am excited for Reset the Net — it will mark the moment when we turn political expression into practical action, and protect ourselves on a large scale.

Join us on June 5th, and don’t ask for your privacy. Take it back."