# News aggregator

### José Pedro Magalhães: POPL 2015 is in India, and I think that is a bad idea

Personally, I do not feel welcome or even safe in India, and consequently will not attend POPL 2015. I find it regrettable that the POPL Steering Committee failed to keep to its own anti-harassment policy when choosing a venue for POPL 2015.

### Edward Z. Yang: Equality, roughly speaking

In Software Foundations, equality is defined in this way:

Even Coq's equality relation is not built in. It has (roughly) the following inductive definition.

```coq
Inductive eq0 {X:Type} : X -> X -> Prop :=
  refl_equal0 : forall x, eq0 x x.
```

*Why the "roughly"?* Well, as it turns out, Coq defines equality a little differently (reformatted to match the Software Foundations presentation):

```coq
Inductive eq1 {X:Type} (x : X) : X -> Prop :=
  refl_equal1 : eq1 x x.
```

What’s the difference? The trick is to look at the induction principles that Coq generates for each of these:

```coq
eq0_ind : forall (X : Type) (P : X -> X -> Prop),
  (forall x : X, P x x) -> forall y y0 : X, eq0 y y0 -> P y y0

eq1_ind : forall (X : Type) (x : X) (P : X -> Prop),
  P x -> forall y : X, eq1 x y -> P y
```

During our Homotopy Type Theory reading group, Jeremy pointed out that the difference between these two principles is exactly the difference between path induction (eq0) and based path induction (eq1). (This is covered in the Homotopy Type Theory book in section 1.12.) So, Coq uses the slightly weirder definition because it happens to be a bit more convenient. (I’m sure this is folklore, but I sure didn’t notice this until now! For more reading, check out this excellent blog post by Dan Licata.)
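The eq0-style definition is also how GHC's `Data.Type.Equality` defines propositional equality in Haskell: a single constructor whose type forces both indices to coincide. A self-contained sketch (mirroring the library's `(:~:)` type rather than importing it):

```haskell
{-# LANGUAGE GADTs, TypeOperators #-}

-- The eq0-style definition: one constructor, both indices forced equal
-- (this mirrors (:~:) from Data.Type.Equality).
data a :~: b where
  Refl :: a :~: a

-- Using an equality proof: matching on Refl rewrites b to a in the body,
-- much like applying the induction principle.
castWith :: a :~: b -> a -> b
castWith Refl x = x

sym :: a :~: b -> b :~: a
sym Refl = Refl

main :: IO ()
main = print (castWith (sym Refl) (42 :: Int))  -- prints 42
```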

### incorrect MonadPlus law "v >> mzero = mzero"?

### Request for review: numbers

Hey Haskellers, acting maintainer of numbers here.

I got a pull request that I'm a bit hesitant to accept for a few reasons. But I don't actually use the library, so I'd appreciate some additional opinions on a few questions that have arisen:

- Should the fields of Interval be strict?
- Should there be a Bounded instance for Bounded a => Interval a?
- Is the current Ord instance useful, sensible, and acceptable?

Additionally, it would be nice if you could glance at the other changes in the PR and make sure they look good.

https://github.com/DanBurton/numbers/pull/10/files
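For concreteness, here is roughly what the first two questions amount to. This is an illustrative stand-in with hypothetical constructor and field layout, not the actual `numbers` code:

```haskell
-- Hypothetical stand-in for the library's Interval, for discussion only.
-- Strict fields (the first question): the bounds are forced on construction.
data Interval a = Interval !a !a
  deriving (Eq, Show)

-- The proposed instance (the second question): Bounded lifted pointwise.
instance Bounded a => Bounded (Interval a) where
  minBound = Interval minBound minBound
  maxBound = Interval maxBound maxBound

main :: IO ()
main = print (maxBound :: Interval Bool)  -- Interval True True
```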

Do you use the numbers library? Wish it had some new feature? Want to write a test suite or benchmark? I'd love to hear from you. Do you know of a class or tutorial that teaches via the numbers library? Let me know! I feel a bit disconnected from this library's history, and I'd like to be a little more plugged in.

submitted by drb226

### Is it safe to create global variables using unsafePerformIO?

### Theory Lunch (Institute of Cybernetics, Tallinn): Transgressing the limits

Today, the 14th of January 2014, we had a special session of our Theory Lunch. I spoke about ultrafilters and how they allow us to generalize the notion of limit.

Consider the space $\ell^\infty$ of bounded sequences of real numbers, together with the supremum norm. We would like to define a notion of limit which holds for *every* $\{x_n\} \in \ell^\infty$ and satisfies the well known properties of the standard limit:

- *Linearity:* $\lim_n (x_n + y_n) = \lim_n x_n + \lim_n y_n$.
- *Homogeneity:* $\lim_n (\lambda x_n) = \lambda \lim_n x_n$.
- *Monotonicity:* if $x_n \leq y_n$ for every $n$ then $\lim_n x_n \leq \lim_n y_n$.
- *Nontriviality:* if $x_n = 1$ for every $n$ then $\lim_n x_n = 1$.
- *Consistency:* if the limit exists in the classical sense, then the two notions coincide.

The consistency condition is reasonable also because it avoids trivial cases: if we fix $k \in \mathbb{N}$ and we define the limit of the sequence $\{x_n\}$ as the value $x_k$, then the first four properties are satisfied.

Let us recall the classical definition of limit: we say that $\{x_n\}$ converges to $L$ if and only if, for every $\varepsilon > 0$, the set of values $n$ such that $|x_n - L| < \varepsilon$ is *cofinite*, *i.e.*, has a finite complement. The family of cofinite subsets of $\mathbb{N}$ (in fact, of any set $X$) has the following properties:

- *Upper closure:* if $A \in \mathcal{F}$ and $A \subseteq B \subseteq X$ then $B \in \mathcal{F}$.
- *Meet stability:* if $A, B \in \mathcal{F}$ then $A \cap B \in \mathcal{F}$.

A family $\mathcal{F}$ of subsets of $X$ with the two properties above is called a *filter* on $X$. An immediate example is the *trivial filter* $\{X\}$; another example is the *improper filter* $\mathcal{P}(X)$, the whole power set. The family of cofinite subsets of $X$ is called the *Fréchet filter* on $X$. The Fréchet filter is not the improper one if and only if $X$ is infinite.

An *ultrafilter* on $X$ is a filter $\mathcal{U}$ on $X$ satisfying the following additional conditions:

- *Properness:* $\emptyset \notin \mathcal{U}$.
- *Maximality:* for every $A \subseteq X$, either $A \in \mathcal{U}$ or $X \setminus A \in \mathcal{U}$.

For example, if $x \in X$, then $\mathcal{U}_x = \{ A \subseteq X \mid x \in A \}$ is an ultrafilter on $X$, called the *principal ultrafilter* generated by $x$. Observe that $\bigcap_{A \in \mathcal{U}_x} A = \{x\}$: if instead $\bigcap_{A \in \mathcal{U}} A = \emptyset$ we say that $\mathcal{U}$ is *free*. These are, in fact, the only two options.

**Lemma 1.** For a proper filter $\mathcal{U}$ to be an ultrafilter, it is necessary and sufficient that it satisfies the following condition: for every $n \geq 1$ and nonempty $A_1, \ldots, A_n \subseteq X$, if $A_1 \cup \ldots \cup A_n \in \mathcal{U}$ then $A_i \in \mathcal{U}$ for at least one $i$.

*Proof:* It is sufficient to prove the thesis with $n = 2$. If $A \cup B \in \mathcal{U}$ with $A, B \notin \mathcal{U}$, then $\{ C \subseteq X \mid A \cup C \in \mathcal{U} \}$ is a proper filter that properly contains $\mathcal{U}$, which is impossible. If the condition is satisfied, for every $A \subseteq X$ which is neither $\emptyset$ nor $X$ we have $X = A \cup (X \setminus A) \in \mathcal{U}$, thus either $A \in \mathcal{U}$ or $X \setminus A \in \mathcal{U}$. $\square$

**Theorem 1.** Every nonprincipal ultrafilter is free. In addition, an ultrafilter is free if and only if it extends the Fréchet filter. In particular, every ultrafilter over a finite set is principal.

*Proof:* Let $\mathcal{U}$ be a nonprincipal ultrafilter and let $x \in X$: then $\mathcal{U} \neq \mathcal{U}_x$, so either there exists $A \in \mathcal{U}$ such that $x \notin A$, or there exists $A$ such that $x \in A$ and $A \notin \mathcal{U}$. In the first case, $x \notin \bigcap_{B \in \mathcal{U}} B$; in the second case, we consider $X \setminus A \in \mathcal{U}$ and reduce to the first case. As $x$ is arbitrary, $\mathcal{U}$ is free.

Now, for every $x \in X$ the cofinite set $X \setminus \{x\}$ belongs to the Fréchet filter but not to $\mathcal{U}_x$: therefore, no principal ultrafilter extends the Fréchet filter. On the other hand, if $\mathcal{U}$ is an ultrafilter, $A \subseteq X$ is finite, and $X \setminus A \notin \mathcal{U}$, then $A \in \mathcal{U}$ by maximality, hence $\{x\} \in \mathcal{U}$ for some $x \in A$ because of Lemma 1, thus $\mathcal{U} = \mathcal{U}_x$ cannot be a free ultrafilter. $\square$
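The last claim of Theorem 1 can even be checked by brute force. The following Haskell sketch (illustrative, not part of the talk) enumerates all $2^8$ families of subsets of a three-element set and confirms that its ultrafilters are exactly the three principal ones:

```haskell
import Data.List (subsequences, (\\))

type Set = [Int]        -- subsets of xs, kept in increasing order

xs :: Set
xs = [0, 1, 2]

powerset :: [Set]
powerset = subsequences xs   -- order-preserving, so every Set is canonical

isSubsetOf :: Set -> Set -> Bool
a `isSubsetOf` b = all (`elem` b) a

-- Filter axioms plus properness and maximality, checked exhaustively.
isUltrafilter :: [Set] -> Bool
isUltrafilter u =
     [] `notElem` u                                                -- properness
  && and [ b `elem` u | a <- u, b <- powerset, a `isSubsetOf` b ]  -- upper closure
  && and [ filter (`elem` b) a `elem` u | a <- u, b <- u ]         -- meet stability
  && and [ (a `elem` u) /= ((xs \\ a) `elem` u) | a <- powerset ]  -- maximality

-- The principal ultrafilter generated by p: all subsets containing p.
principalAt :: Int -> [Set]
principalAt p = [ a | a <- powerset, p `elem` a ]

same :: [Set] -> [Set] -> Bool
same a b = all (`elem` b) a && all (`elem` a) b

main :: IO ()
main = do
  let ultras = [ u | u <- subsequences powerset, isUltrafilter u ]
  print (length ultras)                                    -- 3
  print (all (\u -> any (same u . principalAt) xs) ultras) -- True
```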

So it seems that free ultrafilters are the right thing to consider when trying to expand the concept of limit. There is an issue, though: we have not seen a single example of a free ultrafilter; in fact, we do not even (yet) know whether free ultrafilters exist at all! The answer to this problem comes, in a shamelessly nonconstructive way, from the following

**Ultrafilter lemma.** Every proper filter can be extended to an ultrafilter.

The ultrafilter lemma, together with Theorem 1, implies the existence of free ultrafilters on every infinite set, and in particular on $\mathbb{N}$. On the other hand, to prove the ultrafilter lemma the Axiom of Choice is required, in the form of Zorn’s lemma. Before giving such a proof, we recall that a family of sets has the *finite intersection property* if every finite subfamily has a nonempty intersection: every proper filter has the finite intersection property.

*Proof of the ultrafilter lemma.* Let $\mathcal{F}$ be a proper filter on $X$ and let $\mathbf{P}$ be the family of the collections of subsets of $X$ that extend $\mathcal{F}$ and have the finite intersection property, ordered by inclusion. Let $\mathbf{C}$ be a totally ordered subfamily of $\mathbf{P}$: then $\mathcal{G} = \bigcup_{\mathcal{H} \in \mathbf{C}} \mathcal{H}$ extends $\mathcal{F}$ and has the finite intersection property, because for every finitely many $A_1, \ldots, A_n \in \mathcal{G}$ there exists by construction $\mathcal{H} \in \mathbf{C}$ such that $A_1, \ldots, A_n \in \mathcal{H}$.

By Zorn’s lemma, $\mathbf{P}$ has a maximal element $\mathcal{U}$, which surely satisfies $\mathcal{F} \subseteq \mathcal{U}$ and $\emptyset \notin \mathcal{U}$. If $A \in \mathcal{U}$ and $A \subseteq B \subseteq X$, then $\mathcal{U} \cup \{B\}$ still has the finite intersection property, therefore $B \in \mathcal{U}$ by maximality. If $A, B \in \mathcal{U}$ then $\mathcal{U} \cup \{A \cap B\}$ still has the finite intersection property, therefore $A \cap B \in \mathcal{U}$, again by maximality.

Suppose, for the sake of contradiction, that there exists $A \subseteq X$ such that $A \notin \mathcal{U}$ and $X \setminus A \notin \mathcal{U}$: then neither $\mathcal{U} \cup \{A\}$ nor $\mathcal{U} \cup \{X \setminus A\}$ has the finite intersection property, hence there exist $B_1, \ldots, B_m, C_1, \ldots, C_n \in \mathcal{U}$ such that $A \cap B_1 \cap \ldots \cap B_m = \emptyset$ and $(X \setminus A) \cap C_1 \cap \ldots \cap C_n = \emptyset$. But the former means $B_1 \cap \ldots \cap B_m \subseteq X \setminus A$, and the latter means $C_1 \cap \ldots \cap C_n \subseteq A$: therefore,

$$B_1 \cap \ldots \cap B_m \cap C_1 \cap \ldots \cap C_n \subseteq (X \setminus A) \cap A = \emptyset,$$

against $\mathcal{U}$ having the finite intersection property. $\square$

We are now ready to expand the idea of limit. Let $(M, d)$ be a metric space and let $\mathcal{U}$ be an ultrafilter on $\mathbb{N}$: we say that $L \in M$ is the *ultralimit* of the sequence $\{x_n\}$ along $\mathcal{U}$ if for every $\varepsilon > 0$ the set

$$\{ n \in \mathbb{N} \mid d(x_n, L) < \varepsilon \}$$

belongs to $\mathcal{U}$. (Observe how, in the standard definition of limit, the above set is required to belong to the Fréchet filter.) If this is the case, we write

$$L = \lim_{n \to \mathcal{U}} x_n \,.$$

Ultralimits, if they exist, are unique and satisfy our first four conditions. Moreover, the choice of the principal ultrafilter $\mathcal{U}_k$ corresponds to the trivial definition $\lim_{n \to \mathcal{U}_k} x_n = x_k$. So, what about free ultrafilters?
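A concrete example (my own, not from the talk): the oscillating sequence $x_n = (-1)^n$ has no classical limit, but along any free ultrafilter maximality decides the outcome.

```latex
% E = set of even indices. By maximality, either E \in \mathcal{U}
% or \mathbb{N} \setminus E \in \mathcal{U}, and accordingly:
\lim_{n \to \mathcal{U}} (-1)^n =
  \begin{cases}
    \phantom{-}1 & \text{if } E \in \mathcal{U}, \\
    -1           & \text{if } \mathbb{N} \setminus E \in \mathcal{U}.
  \end{cases}
```

This also shows that the ultralimit genuinely depends on the chosen free ultrafilter.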

**Theorem 2.** Every bounded sequence of real numbers has an ultralimit along every free ultrafilter on $\mathbb{N}$.

*Proof:* It is not restrictive to suppose $0 \leq x_n \leq 1$ for every $n$. Let $\mathcal{U}$ be an arbitrary, but fixed, free ultrafilter on $\mathbb{N}$. We will construct a sequence of closed intervals $I_k$, $k \geq 0$, of length $2^{-k}$, such that $I_{k+1} \subseteq I_k$ and $\{ n \mid x_n \in I_k \} \in \mathcal{U}$ for every $k$. By the Cantor intersection theorem it will be $\bigcap_{k \geq 0} I_k = \{L\}$: we will then show that $L = \lim_{n \to \mathcal{U}} x_n$.

Let $I_0 = [0, 1]$. Let $I_1$ be either $[0, 1/2]$ or $[1/2, 1]$, chosen according to the following criterion: $\{ n \mid x_n \in I_1 \} \in \mathcal{U}$. (By Lemma 1, at least one of the two halves satisfies it.) If both halves satisfy the criterion we just choose one once and for all. We iterate the procedure by always choosing as $I_{k+1}$ one of the two halves of $I_k$ such that $\{ n \mid x_n \in I_{k+1} \} \in \mathcal{U}$.

Let $\{L\} = \bigcap_{k \geq 0} I_k$. Let $\varepsilon > 0$, and let $k$ be so large that $2^{-k} < \varepsilon$: then $I_k \subseteq (L - \varepsilon, L + \varepsilon)$, thus $\{ n \mid x_n \in I_k \} \subseteq \{ n \mid |x_n - L| < \varepsilon \}$. As the smaller set belongs to $\mathcal{U}$, so does the larger one. $\square$

We have thus almost achieved our original target: a notion of limit which applies to every bounded sequence of real numbers. Such a notion will depend on the specific free ultrafilter we choose: but it is already very reassuring that such a notion exists at all! To complete our job we need one more check: we have to be sure that the definition is consistent with the classical one. And this is indeed what happens!

**Theorem 3.** Let $\{x_n\}$ be a sequence of real numbers and let $L \in \mathbb{R}$. Then $\lim_{n \to \infty} x_n = L$ in the classical sense if and only if $\lim_{n \to \mathcal{U}} x_n = L$ for every free ultrafilter $\mathcal{U}$ on $\mathbb{N}$.

To prove Theorem 3 we make use of an auxiliary result, which is of independent interest.

**Lemma 2.** Let $\mathbf{P}$ be the family of collections of subsets of $X$ that have the finite intersection property. The maximal elements of $\mathbf{P}$ are precisely the ultrafilters on $X$.

*Proof:* Every ultrafilter is clearly maximal in $\mathbf{P}$. If $\mathcal{U}$ is maximal in $\mathbf{P}$, then it is clearly proper and upper closed, and we can reason as in the proof of the ultrafilter lemma to show that it is actually an ultrafilter. $\square$

*Proof of Theorem 3:* Suppose $\{x_n\}$ does not converge to $L$ in the classical sense. Fix $\varepsilon > 0$ such that the set $A = \{ n \mid |x_n - L| \geq \varepsilon \}$ is infinite. Then the family consisting of $A$ together with the cofinite subsets of $\mathbb{N}$ has the finite intersection property: let $\mathcal{U}$ be a free ultrafilter that extends it. Then $A \in \mathcal{U}$, hence $\{ n \mid |x_n - L| < \varepsilon \} \notin \mathcal{U}$, and $\{x_n\}$ does not have $L$ as an ultralimit along $\mathcal{U}$.

The converse implication follows from the classical definition of limit, together with the very notion of free ultrafilter.

Theorem 3 does hold for sequences of real numbers, but does not extend to arbitrary metric spaces. In fact, the following holds, which we state without proof.

**Theorem 4.** Let $(M, d)$ be a metric space. The following are equivalent.

- For some free ultrafilter $\mathcal{U}$ on $\mathbb{N}$, every sequence in $M$ has an ultralimit along $\mathcal{U}$.
- For every free ultrafilter $\mathcal{U}$ on $\mathbb{N}$, every sequence in $M$ has an ultralimit along $\mathcal{U}$.
- $M$ is compact.

Ultrafilters are useful in many other contexts. For instance, they are used to construct *hyperreal numbers*, which in turn allow a rigorous definition of infinitesimals and the foundation of calculus over those. But this might be the topic for another Theory Lunch talk.

### The marriage of bisimulations and Kripke logical relations

Two of the most prominent techniques for reasoning about program equivalence in higher-order languages are *bisimulations* and *Kripke logical relations (KLRs)*. While both approaches are powerful, their complementary advantages have led us and other researchers to wonder whether there is an essential tradeoff between them. Furthermore, both approaches seem to suffer from fundamental limitations if one is interested in scaling them to inter-language reasoning.

In this paper, we propose *relation transition systems (RTSs)*, which
marry together some of the most appealing aspects of KLRs and
bisimulations. In particular, RTSs show how bisimulations' support
for reasoning about recursive features via *coinduction* can be
synthesized with KLRs' support for reasoning about local state via
*state transition systems*. Moreover, we have designed RTSs to avoid
the limitations of KLRs and bisimulations that preclude their
generalization to inter-language reasoning. Notably, unlike KLRs,
RTSs are transitively composable.
I understand the paper as offering an extension to bisimulation that handles the notion of hidden transitions properly and so allows a generalisation of KLRs to any systems that can be treated using bisimulations. Applications to verified compilation are mentioned, and everything has been validated in Coq.

### apfelmus: FRP - Release of reactive-banana version 0.8

I am very pleased to announce the release of version 0.8.0.0 of my reactive-banana library on hackage. The major additions are a push-driven implementation that actually performs like one and a new module Reactive.Banana.Prim which can be used to implement your own, custom FRP library.

After a long wait, I have finally found the time to implement a push-driven algorithm that actually deserves that name – the previous implementations had taken a couple of shortcuts that rendered the performance closer to that of a pull-driven implementation. There was also an unexpected space leak, which I have fixed using a reasoning principle I’d like to call space invariants. Note that this release doesn’t include garbage collection for dynamically switched events just yet. But having rewritten the core algorithm over and over again for several years now, I finally understand its structure so well that garbage collection is easy to add – in fact, I have already implemented it in the development branch for the future 0.9 release.

Starting with this release, the development of reactive-banana will focus on performance – the banana is ready to pull out the guns and participate in the benchmarking game. (You see, the logo is no idle threat!) In fact, John Lato has put together a set of benchmarks for different FRP libraries. Unfortunately, reactive-banana took a severe beating there, coming out as the slowest contender. Oops. The main problem is that the library uses a stack of monad transformers in an inner loop – bad idea.

Now, optimizing monad transformers seems to be an issue of general interest, but the only public information I could find was a moderately useful wiki page. If you have any tips or benchmarks for optimizing monad transformer stacks, please let me and the community know!
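For readers wondering what "a stack of monad transformers in an inner loop" looks like, here is a toy sketch (my own, not reactive-banana code): every iteration pays for `StateT`'s bind on top of `IO`'s, unless GHC manages to specialize and inline the whole loop.

```haskell
import Control.Monad.Trans.State.Strict (StateT, execStateT, get, put)

-- A tight loop written against a transformer stack: each step threads
-- the state through StateT's bind, layered over IO's bind.
count :: Int -> StateT Int IO ()
count 0 = return ()
count n = do
  s <- get
  put $! s + 1
  count (n - 1)

main :: IO ()
main = execStateT (count 100000) 0 >>= print  -- prints 100000
```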

The other major addition to the reactive-banana library is a module Reactive.Banana.Prim which presents the core algorithm in such a way that you can use it to implement your very own FRP API. Essentially, it implements a basic FRP system on top of which you can implement richer semantics – like observable sharing, recursion, simultaneous events, change notification, and so on. Of course, few people will ever want to do that, also given that reactive-banana is currently not the fastest fruit in the basket.

But this new module is my main motivation for releasing version 0.8. It contains the lessons that I’ve learned from implementing yet another toy FRP system for my threepenny-gui project, and it’s time to put the latter on a solid footing. In particular, it appears that widget abstractions greatly benefit from dynamic event switching, which means that future versions of Threepenny cannot do without a solid FRP subsystem.

### Generalized null / zero

### [ANN] yocto-0.1.2

### Lazy MWC Random?

### Alejandro Cabrera: 10 Ways to Incorporate Haskell into a Modern, Functional, CS Curriculum

The topic on my mind: how could Haskell be incorporated into a modern CS curriculum? I decided to run as radically as I could with this, writing a rough draft of what such a curriculum might look like.

I won't make any arguments for or against functional programming here. I refer readers instead to papers like this one and this one, talks like this one or this one, and books like this one. This is an exercise in ideation.

Without further ado, let's begin:

**0. Haskell for Introductory Computer Science**

Imagine this is the first time you're learning programming. You've never been exposed to mutation, to functions, to compiling, algorithms, or any of the details of architecture. Where do we start?

Four weeks of Haskell. Enough to get through the first few chapters of LYAH and be able to start thinking about recursion. End each week with a simple exercise, for example, upper-casing a list of strings, upper-casing only the first letter of every string that is longer than 2 characters - little things to build confidence. The Udacity Introduction to Computer Science course has many appropriate ideas for beginning level exercises.
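For instance, the two end-of-week exercises mentioned above might look like this:

```haskell
import Data.Char (toUpper)

-- Week one: upper-case a list of strings.
shoutAll :: [String] -> [String]
shoutAll = map (map toUpper)

-- Week two: upper-case only the first letter of every string
-- longer than 2 characters.
capitalizeLong :: [String] -> [String]
capitalizeLong = map go
  where
    go s@(c:cs) | length s > 2 = toUpper c : cs
    go s                       = s

main :: IO ()
main = do
  print (shoutAll ["abc", "de"])        -- ["ABC","DE"]
  print (capitalizeLong ["abc", "de"])  -- ["Abc","de"]
```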

With that introductory period out of the way, now's the time to show why computer science is relevant! Take the time to showcase the areas: operating systems, networking, algorithms, programming languages, cryptography, architecture, hardware, and more. Make it relevant:

* Operating systems: Why are there so many? What does it do? How does my application (email, browser, Steam) run from beginning to end?

* Algorithms: How do you figure out if two characters have collided in a video game? How do you sort a list of address contacts alphabetically by last name?

* Networking: How do you send an instant message or an email to a friend on the other side of the world?

* Programming Languages: As with operating systems.

There are many applicable introductory exercises here that can set the pace for future courses.

**1. Haskell for Basic Algorithms**

This one, and the later algorithms course on this "Top 10 List", deserve special attention.

Algorithms are fundamentally **pure** constructs. You give them a well-defined input, and receive a well-defined output.

Take plenty of time to provide weekly exercises. Teaching sorting algorithms, trees, string matching algorithms, and more will be a delight here, I predict.

It's also a good time to introduce basic run-time analysis, e.g., Big O notation.

This is also a beautiful time to introduce **QuickCheck** in conjunction with invariants.
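For example, insertion sort comes with natural invariants. The properties below are phrased exactly as one would hand them to QuickCheck; to stay self-contained, this sketch checks them by brute force over all short lists rather than random ones:

```haskell
-- Insertion sort: a typical subject for a first property-based test.
isort :: Ord a => [a] -> [a]
isort = foldr ins []
  where
    ins y []                 = [y]
    ins y (z:zs) | y <= z    = y : z : zs
                 | otherwise = z : ins y zs

-- Invariants, written QuickCheck-style.
prop_ordered :: [Int] -> Bool
prop_ordered xs = and (zipWith (<=) ys (drop 1 ys))
  where ys = isort xs

prop_sameLength :: [Int] -> Bool
prop_sameLength xs = length (isort xs) == length xs

-- All lists of length <= 3 over a small alphabet: 40 test cases.
smallLists :: [[Int]]
smallLists = concatMap listsOf [0 .. 3]
  where
    listsOf 0 = [[]]
    listsOf n = [ v : vs | v <- [0 .. 2], vs <- listsOf (n - 1) ]

main :: IO ()
main = print (all prop_ordered smallLists && all prop_sameLength smallLists)
-- prints True
```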

**2. Haskell for Data Structures**

Very similar to the basic algorithms course, except now we teach students about some basic ways to organize data. Lists, vectors, trees, hash maps, and graphs - these should be enough to keep most students (and practitioners) well-equipped for years!

**QuickCheck** and frequent programming exercises will do well here.

If an advanced version of this course is desired, I highly recommend starting from here to brainstorm a variant: Purely Functional Data Structures

**3. Haskell for Networking**

This can be very low-level (OSI network stack, TCP window buffering, etc.), very high-level (HTTP, distributed systems), or some mix of the two.

I think the most important concepts students can come out of this course with would be:

* Validating data at the boundaries and the dangers/usefulness of IO

* How to communicate with other systems

* How to write their own protocols, and why, in general, they shouldn't reinvent the wheel

**4. Haskell for Operating Systems**

Teach them to ask - what is an operating system? How do I manage my resources?

It's worth surveying the concepts of: memory management, file systems, data persistence, concurrency, parallelism, process management, task scheduling, and possibly a bit more.

Great projects in this course include: 1) write your own shell, 2) write a simple, local task manager.

**5. Haskell for Comparative Programming Languages**

Let's talk types, functions, composition, and problem solving using different approaches. Ideally, such a course would come after learning how to design solutions to mid-sized programming challenges.

After that, have students write an interpreter for a Lisp.
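The interpreter project scales down nicely; a minimal seed (arithmetic and conditionals only, parser omitted) might be:

```haskell
-- A tiny expression language: the seed of the "interpreter for a Lisp"
-- project. A real version would add a parser, symbols, and environments.
data Expr
  = Num Double
  | Add Expr Expr
  | Mul Expr Expr
  | If  Expr Expr Expr   -- Lisp-style: any non-zero value counts as true
  deriving Show

eval :: Expr -> Double
eval (Num n)    = n
eval (Add a b)  = eval a + eval b
eval (Mul a b)  = eval a * eval b
eval (If c t e) = if eval c /= 0 then eval t else eval e

-- (if 1 (+ 2 (* 3 4)) 0)  ==>  14.0
main :: IO ()
main = print (eval (If (Num 1) (Add (Num 2) (Mul (Num 3) (Num 4))) (Num 0)))
```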

**6. Haskell for Compiler Construction**

More on: write your own language. This course should cover parsing, lexical analysis, type analysis, and the conversion from source to assembly.

**7. Haskell for Advanced Algorithms**

This one's going to be fun. Unleash the power of equational reasoning to put together a course that runs through: graph theory, network flow analysis, greedy algorithms, memoization, and more.

This would also be a great time to discuss the price one pays for purity in terms of asymptotic performance, and how to overcome it, if necessary.

Also, an extended treatment of algorithmic analysis in the presence of laziness would be valuable here.

**8. Haskell for Introductory Design of Programs**

Really, this should be higher up in this list, and a very early course.

The goal of this course is for students to come out of it knowing how to:

* Get a big picture of what they're trying to build

* Break it down into smaller pieces

* Iterate 'til they get the big picture running

It's a great time to teach some basic ideas for testing, how to experiment with the REPL, and how to take advantage of the type system for simple things.

On a more social level, it's a wonderful time to also guide students towards collaborative design, e.g., how to work together and make it fun and efficient.

**9. Haskell for High-Performance Computing**

This could be a very fun course. It affords the opportunity to allow students to run wild with simulations and experiments of their choosing, while learning about what it means to do high-performance computing in a functional language.

Given that, it should teach some basic tools that will be applicable to most or all projects. How does one benchmark well? What are the evils of optimization? What is over-optimization? When is optimization needed? What tools exist right now to harness parallelism in Haskell (Repa, Accelerate, etc.)? When is data distribution needed? Why is parallelism important? How is parallelism different than concurrency? How can the type system be wielded to help keep units (km/s, etc.) consistent across calculations?
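As a taste of the last question, phantom type parameters keep units straight at zero runtime cost (the unit names here are just illustrative):

```haskell
{-# LANGUAGE DataKinds, KindSignatures #-}

-- The unit lives only in the type: adding kilometres to seconds
-- simply does not type-check.
data Unit = Km | S | KmPerS

newtype Q (u :: Unit) = Q Double
  deriving Show

distance :: Q 'Km
distance = Q 120

time :: Q 'S
time = Q 60

speed :: Q 'Km -> Q 'S -> Q 'KmPerS
speed (Q d) (Q t) = Q (d / t)

main :: IO ()
main = print (speed distance time)   -- Q 2.0
```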

I'd advocate for letting students choose their own projects, built in parallel with the course taking place.

A simple default is to optimize the much-lauded matrix multiply to larger and larger scales (distributed even, if they want to go so far!). Writing a collision detection engine for a simple game would be pretty interesting, as well.

**Notably (in my opinion) absent topics**:

* Hardware: CPUs, cache hierarchies, main memory, storage, interconnects

* Advanced data structures

* Cryptography

* Web development

* Type systems

* Cross-disciplinary development, e.g., Haskell for CS + [Physics, Chemistry, Biology, etc.]

These topics are absent from my list for no reason other than I didn't think of them 'til the list was done and articulated. There's so much one can learn and apply at the level of abstraction that computer science (and mathematics) affords that we could specialize further and further. For my own sake, I'm setting a limit. :)

**Final Notes**

I've just brainstormed a curriculum in Haskell. There are a lot of details missing, but it's a working draft.

There are also other things to consider beyond the technical aspects. Consider the social ones. How do we teach students to work together? How do we keep learning engaging and fun? How do we help students connect to the greater community of developers that exists outside of academia? How do we keep the lessons relevant to the lives that they lead? How do we encourage them to pursue their passions?