# News aggregator

### 8 Funded PhD Positions at St Andrews

### Qualification regions

Do you think something like this would be useful/good? If an identifier `baz` is ambiguous in an `assume foo.bar` block, and one of the possibilities is `foo.bar.baz`, assume it's that one.

Example:

```
module MyNum where

import Data.Vector
import Data.ByteString

assume Data.Vector where
  doStuffWithVectors = map (+2)
  -- map could be from Prelude, Vector, or ByteString; assume it's the Vector one

assume Data.ByteString where
  doStuffWithByteStrings = foldr (+) 0
  -- assume it's ByteString's foldr

(+) :: (ExpLike e, ExpLike e') => e t -> e' t -> Exp t
(-) :: (ExpLike e, ExpLike e') => e t -> e' t -> Exp t

assume MyNum where
  baz = exp1 + exp3
```

submitted by vahokif [link] [11 comments]

### language-puppet: Version 0.11.0 - now with a full serving of unsafeCoerce

A new version is out, with an eye on the bleeding edge! The focus of this release was to update important dependencies, so we now have:

- lens 4
- attoparsec 0.11
- aeson 0.7
- text up to 1.1
- parsers 0.10
- and new versions of filecache and hruby

The goal here is to reap the benefits of the speed improvements in attoparsec, text and aeson, and be ready for the GHC 7.8 release.

There were a few hurdles to get there. The move from Data.Attoparsec.Number to Data.Scientific for number representation was not a big problem (even though code in hruby could be polished), but the lens and parsers upgrade proved more problematic.

#### Lens-related breakage

The main problem with lens 4 was that it broke strict-base-types. I don’t think this will last long, but here is a temporary workaround for the impatient. Other than that, several instances of `x ^. contains y` were replaced by `x & has (ix y)` for the map-like types. This was a painless upgrade.
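The replacement idiom works on any container with an `Ixed` instance; a minimal sketch of the membership test (the names below are illustrative, not taken from language-puppet):

```haskell
import Control.Lens (has, ix, (&))
import qualified Data.Map.Strict as M

sample :: M.Map String Int
sample = M.fromList [("a", 1), ("b", 2)]

-- Membership test on a map-like container with the lens-4 idiom:
-- has (ix k) is True exactly when the key is present.
hasA, hasZ :: Bool
hasA = sample & has (ix "a")  -- key "a" is present
hasZ = sample & has (ix "z")  -- key "z" is absent
```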

#### The trouble with parsers

I like this package a lot, because it exposes a nice API and supports several underlying parsers. But it comes with a couple of problems.

The first one is related to the TokenParsing typeclass. It lets you define someSpace, the function that skips spaces. Unfortunately, the parsers library comes with an instance for parsec that skips the characters satisfying isSpace. While this certainly is a sane choice, it is a problem for people who would like to use parsec as the underlying parser, but with a different implementation of someSpace. In my case, I also wanted to skip single-line (starting with #) and multi-line (/* these */) comments. A solution is to create a newtype, and redefine all instances. For those wondering how this is done, here is the relevant part. Please let me know if there is a way to do this that does not require so much boilerplate.
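The shape of that newtype trick is roughly the following (a hedged sketch, not the actual language-puppet code; the real version derives many more instances and handles more comment forms):

```haskell
{-# LANGUAGE GeneralizedNewtypeDeriving #-}
import Control.Applicative (Alternative, (<|>))
import Data.Char (isSpace)
import Text.Parsec (parse)
import Text.Parsec.String (Parser)
import Text.Parser.Char (CharParsing, anyChar, char, newline, satisfy, string)
import Text.Parser.Combinators (Parsing (..), manyTill, skipSome)
import Text.Parser.Token (TokenParsing (..))

-- Wrap the underlying parsec parser so we can give it our own
-- TokenParsing instance instead of the isSpace-based one.
newtype MyParser a = MyParser { runMyParser :: Parser a }
  deriving (Functor, Applicative, Alternative, Monad, Parsing, CharParsing)

-- someSpace now also skips # line comments and /* ... */ block comments.
instance TokenParsing MyParser where
  someSpace = skipSome (whitespace <|> lineComment <|> blockComment)
    where
      whitespace   = () <$ satisfy isSpace
      lineComment  = () <$ (char '#' *> manyTill anyChar newline)
      blockComment = () <$ (try (string "/*") *> manyTill anyChar (try (string "*/")))
```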

The second problem is that the expression parser builder (in Text.Parser.Expression) is much slower than the one in parsec. Switching to it from Text.Parsec.Expr resulted in a 25% slowdown, so I switched back to parsec. Unfortunately, I didn’t immediately realize this was the culprit, and instead believed it to be a case of newtypes lowering performance. My code is now littered with unsafeCoerces, which I will remove in the next version (provided this does not result in a performance hit).

#### New features of previous versions I did not blog about

- Several bugs were solved thanks to Pi3r.
- Several new facts were added.
- A parsing bug was fixed (it was also fixed in the official documentation).
- An ‘all nodes’ testing mode was added for puppetresources. It can be used like this:

This should provide a nice overview of the current state of your manifests. And it tested those nodes about 50 times faster than Puppet can compile the corresponding catalogs.

#### Behind the scenes changes

The Puppet.Daemon machinery has been simplified a lot. It previously worked by spawning a pre-determined number of threads specialized for parsing or compiling catalogs. The threads communicated with each other using shared Chans. The reason was that I wanted to control memory usage, and didn’t want too many concurrent threads at the same time. It turns out that most memory is used by the parsed AST, which is shared using the [filecache](http://hackage.haskell.org/package/filecache) module, so this is not a real concern.

I ripped all that out, and now the only threads spawned are an OS thread for the embedded Ruby interpreter and an IO thread for the filecache. The user of the library can then spawn as many parallel threads as they want. As a result, concurrency is a bit better, even though there are still contention points:

- The parsed file cache is held by the filecache thread, which communicates over a Chan. I will eventually replace this with an MVar, or some other primitive that doesn’t require a dedicated thread.
- The Lua interpreter requires a LuaState that should not be used by several threads at once. It is stored in a shared MVar.
- The Ruby interpreter is the main performance bottleneck. It is single-threaded, and very slow. The only way to speed it up would be to parse more of the Ruby language (a parser for common ERB patterns is included!), or to switch to another interpreter that supports multithreading. Both are major endeavors.
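The planned Chan-to-MVar replacement for the cache might look roughly like this (an illustrative sketch with made-up names, not the filecache API): the cache becomes a Map protected by an MVar, queried and updated in the calling thread instead of via a round-trip to a dedicated thread.

```haskell
import Control.Concurrent.MVar (MVar, modifyMVar, newMVar)
import qualified Data.Map.Strict as M

-- A map protected by an MVar; modifyMVar provides mutual exclusion,
-- so no dedicated server thread is needed.
type Cache k v = MVar (M.Map k v)

newCache :: IO (Cache k v)
newCache = newMVar M.empty

-- Look up a key; on a miss, run the computation and store the result.
cached :: Ord k => Cache k v -> k -> IO v -> IO v
cached cache key compute = modifyMVar cache $ \m ->
  case M.lookup key m of
    Just v  -> pure (m, v)                  -- hit: reuse the stored value
    Nothing -> do
      v <- compute                          -- miss: compute once and store
      pure (M.insert key v m, v)
```

One trade-off of this design: the computation runs while the MVar is held, so a slow `compute` blocks other cache users; a production version would likely release the lock during the computation.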

The next releases will probably be Ruby 2.1 support for hruby, and some performance work on filecache.

### The Ur Programming Language Family

### xmonad/osxmonad

### Alejandro Cabrera: My OSCON 2014 Proposal: The Case for Haskell

- Is about something I'm passionate about
- Is a talk I would give even if I wasn't accepted
- Requires me to learn even more thoroughly what I'm proposing to speak on

For those of you who would like to submit a proposal in the future, I have two thoughts to share with you.

First -

**DO IT!** If you have something you'd love to share, get it out there. You are Allowed to Apply.

Secondly, if you need a simple video recording solution (I was on Linux, I used my laptop web cam, and I lacked a proper camera), I recommend the YouTube Recording interface. Given the capabilities of HTML5, I wasn't entirely surprised that such a thing existed. However, I was pleased with the outcome. It certainly worked for me, and worked better than either Cheese or Guvcview.

More on the process:

I must've re-re-recorded myself at least 10 times in trying to express my abstract. It took practice. It was **scary** the first time around. I stumbled on words. I ran out of breath. I forgot where I was. It got easier as I got into the cycle of editing my script, tweaking, optimizing, and simplifying. It was very similar to developing, testing, and refactoring. There is **Zen** in all of this.

So that was it. Lots and lots of practice. Now, the long wait.

Oh, and by the way - sharing is caring: here's the link to watch my video proposal: The Case for Haskell

Let me know what you think. As suggested by (2) above, I intend to give this talk whether or not I'm accepted to speak at OSCON, so let me know if there's something that you'd love to hear about!

### How many of you use Xmonad? How many do that because it's written in Haskell?

Most people I know that use Xmonad don't know Haskell. They use Xmonad because they like the concept, but configuring it feels like a burden to them due to the fact that they don't really know Haskell. This leads me to wonder about the opposite question: do any of you Haskellers prefer Xmonad due to its Haskell configuration? If so, do you think that your knowledge of Haskell gives you a major advantage in customizing Xmonad to your needs?
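For readers who haven't seen one, an xmonad configuration is itself a small Haskell program; a typical minimal config (field values here are arbitrary examples) looks like:

```haskell
import XMonad

-- Override a few fields of the default configuration record.
main :: IO ()
main = xmonad def
  { terminal    = "urxvt"     -- terminal emulator to launch
  , modMask     = mod4Mask    -- use the Super key instead of Alt
  , borderWidth = 2           -- window border, in pixels
  }
```

Record update syntax on `def` is the whole configuration mechanism, which is why knowing even a little Haskell makes customization much less mysterious.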

submitted by Reish[link] [79 comments]

### With AMD making server ARM SoC's, where's Haskell?

AMD announced that they are making the A1100 processor, with 8 cores at ~2 GHz and access to tons of (registered) memory.

But where's Haskell?

I have a raspberry pi, so I looked into it, and the wiki said that there wasn't much progress, especially in the interactive mode. (which is problematic for compiling TH, or so I read.)

Though on the wiki it now says

> It will possibly be available in GHC 7.8.

AnandTech says that OEMs don't expect to have anything out for people to use until possibly Q4 this year. I hope to get one :)

submitted by levischuck[link] [17 comments]

### José Pedro Magalhães: POPL 2015 is in India, and I think that is a bad idea

Personally, I do not feel welcome or even safe in India, and consequently will not attend POPL 2015. I find it regrettable that the POPL Steering Committee failed to keep to its own anti-harassment policy when choosing a venue for POPL 2015.

### Edward Z. Yang: Equality, roughly speaking

In Software Foundations, equality is defined in this way:

Even Coq's equality relation is not built in. It has (roughly) the following inductive definition.

```coq
Inductive eq0 {X:Type} : X -> X -> Prop :=
  refl_equal0 : forall x, eq0 x x.
```

*Why the roughly?* Well, as it turns out, Coq defines equality a little differently (reformatted to match the Software Foundations presentation):

```coq
Inductive eq1 {X:Type} (x:X) : X -> Prop :=
  refl_equal1 : eq1 x x.
```

What’s the difference? The trick is to look at the induction principles that Coq generates for each of these:

```coq
eq0_ind : forall (X : Type) (P : X -> X -> Prop),
  (forall x : X, P x x) ->
  forall y y0 : X, eq0 y y0 -> P y y0

eq1_ind : forall (X : Type) (x : X) (P : X -> Prop),
  P x -> forall y : X, eq1 x y -> P y
```

During our Homotopy Type Theory reading group, Jeremy pointed out that the difference between these two principles is exactly the difference between path induction (eq0) and based path induction (eq1). (This is covered in the Homotopy Type Theory book in section 1.12.) So, Coq uses the slightly weirder definition because it happens to be a bit more convenient. (I’m sure this is folklore, but I sure didn’t notice this until now! For more reading, check out this excellent blog post by Dan Licata.)

### Incorrect MonadPlus law `v >> mzero = mzero`?

### Request for review: numbers

Hey Haskellers, acting maintainer of numbers here.

I got a pull request that I'm a bit hesitant to accept for a few reasons. But I don't actually use the library, so I'd appreciate some additional opinions on a few questions that have arisen:

- Should the fields of `Interval` be strict?
- Should there be a `Bounded` instance for `Bounded a => Interval a`?
- Is the current `Ord` instance useful, sensible, and acceptable?
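For concreteness, the questions above are about something like the following (a sketch with a hypothetical field layout; the actual `Interval` in the numbers package may differ):

```haskell
-- Hypothetical sketch of the type under discussion:
-- strict endpoint fields, as the PR proposes.
data Interval a = Interval !a !a
  deriving (Eq, Show)

-- A possible Bounded instance: the endpoints of the
-- narrowest and widest representable intervals.
instance Bounded a => Bounded (Interval a) where
  minBound = Interval minBound minBound
  maxBound = Interval maxBound maxBound
```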

Additionally, it would be nice if you could glance at the other changes in the PR and make sure they look good.

https://github.com/DanBurton/numbers/pull/10/files

Do you use the numbers library? Wish it had some new feature? Want to write a test suite or benchmark? I'd love to hear from you. Do you know of a class or tutorial that teaches via the numbers library? Let me know! I feel a bit disconnected from this library's history, and I'd like to be a little more plugged in.

submitted by drb226[link] [comment]

### Is it safe to create global variables using unsafePerformIO?

### Theory Lunch (Institute of Cybernetics, Tallinn): Transgressing the limits

Today, the 14th of January 2014, we had a special session of our Theory Lunch. I spoke about ultrafilters and how they allow us to generalize the notion of limit.

Consider the space $\ell^\infty$ of bounded sequences $x = \{x_n\}_{n \in \mathbb{N}}$ of real numbers, together with the supremum norm. We would like to define a notion of limit $L(x)$ which is defined for *every* $x \in \ell^\infty$ and satisfies the well known properties of the standard limit:

- *Linearity:* $L(x + y) = L(x) + L(y)$.
- *Homogeneity:* $L(\lambda x) = \lambda L(x)$.
- *Monotonicity:* if $x_n \leq y_n$ for every $n$ then $L(x) \leq L(y)$.
- *Nontriviality:* if $x_n = 1$ for every $n$ then $L(x) = 1$.
- *Consistency:* if the limit exists in the classical sense, then the two notions coincide.

The consistency condition is reasonable also because it avoids trivial cases: if we fix $k \in \mathbb{N}$ and we define the limit of the sequence $x$ as the value $L(x) = x_k$, then the first four properties are satisfied.

Let us recall the classical definition of limit: we say that $\{x_n\}$ converges to $x$ if and only if, for every $\varepsilon > 0$, the set of values $n$ such that $|x_n - x| < \varepsilon$ is *cofinite*, *i.e.*, has a finite complement. The family of cofinite subsets of $\mathbb{N}$ (in fact, of any set $X$) has the following properties:

- *Upper closure:* if $A \in \mathcal{F}$ and $A \subseteq B \subseteq X$ then $B \in \mathcal{F}$.
- *Meet stability:* if $A, B \in \mathcal{F}$ then $A \cap B \in \mathcal{F}$.

A family $\mathcal{F}$ of subsets of $X$ with the two properties above is called a *filter* on $X$. An immediate example is the *trivial filter* $\{X\}$; another example is the *improper filter* $\mathcal{P}(X)$, the family of all subsets of $X$. The family of cofinite subsets of $X$ is called the *Fréchet filter* on $X$. The Fréchet filter is not the improper one if and only if $X$ is infinite.

An *ultrafilter* on $X$ is a filter $\mathcal{U}$ on $X$ satisfying the following additional conditions:

- *Properness:* $\emptyset \notin \mathcal{U}$.
- *Maximality:* for every $A \subseteq X$, either $A \in \mathcal{U}$ or $X \setminus A \in \mathcal{U}$.

For example, if $x \in X$, then $\mathcal{U}_x = \{A \subseteq X \mid x \in A\}$ is an ultrafilter on $X$, called the *principal ultrafilter* generated by $x$. Observe that $\bigcap_{A \in \mathcal{U}_x} A = \{x\} \neq \emptyset$: if instead $\bigcap_{A \in \mathcal{U}} A = \emptyset$, we say that $\mathcal{U}$ is *free*. These are, in fact, the only two options.

**Lemma 1.** For a proper filter $\mathcal{U}$ to be an ultrafilter, it is necessary and sufficient that it satisfies the following condition: for every $n \geq 1$ and nonempty $A_1, \ldots, A_n \subseteq X$, if $A_1 \cup \cdots \cup A_n \in \mathcal{U}$ then $A_i \in \mathcal{U}$ for at least one $i$.

*Proof:* It is sufficient to prove the thesis with $n = 2$. If $A \cup B \in \mathcal{U}$ with $A \notin \mathcal{U}$ and $B \notin \mathcal{U}$, then $\{C \subseteq X \mid A \cup C \in \mathcal{U}\}$ is a proper filter that properly contains $\mathcal{U}$, which is impossible for an ultrafilter. If the condition is satisfied, then for every $A$ which is neither $\emptyset$ nor $X$ we have $A \cup (X \setminus A) = X \in \mathcal{U}$, thus either $A \in \mathcal{U}$ or $X \setminus A \in \mathcal{U}$. $\Box$

**Theorem 1.** Every nonprincipal ultrafilter is free. In addition, an ultrafilter is free if and only if it extends the Fréchet filter. In particular, every ultrafilter over a finite set is principal.

*Proof:* Let $\mathcal{U}$ be a nonprincipal ultrafilter and let $x \in X$: then $\mathcal{U} \neq \mathcal{U}_x$, so either there exists $A$ such that $A \in \mathcal{U}$ and $x \notin A$, or there exists $A$ such that $x \in A$ and $A \notin \mathcal{U}$. In the first case, $x \notin \bigcap_{B \in \mathcal{U}} B$; in the second case, we consider $X \setminus A \in \mathcal{U}$ and reduce to the first case. As $x$ is arbitrary, $\mathcal{U}$ is free.

Now, for every $x \in X$ the set $X \setminus \{x\}$ belongs to the Fréchet filter but not to $\mathcal{U}_x$: therefore, no principal ultrafilter extends the Fréchet filter. On the other hand, if $\mathcal{U}$ is an ultrafilter, $A \subseteq X$ is finite, and $X \setminus A \notin \mathcal{U}$, then $A \in \mathcal{U}$ by maximality, hence $\{x\} \in \mathcal{U}$ for some $x \in A$ because of Lemma 1, thus $\mathcal{U} = \mathcal{U}_x$ cannot be a free ultrafilter. $\Box$

So it seems that free ultrafilters are the right thing to consider when trying to expand the concept of limit. There is an issue, though: we have not seen a single example of a free ultrafilter; in fact, we do not even (yet) know whether free ultrafilters exist at all! The answer to this problem comes, in a shamelessly nonconstructive way, from the following

**Ultrafilter lemma.** Every proper filter can be extended to an ultrafilter.

The ultrafilter lemma, together with Theorem 1, implies the existence of free ultrafilters on every infinite set, and in particular on $\mathbb{N}$. On the other hand, to prove the ultrafilter lemma the Axiom of Choice is required, in the form of Zorn’s lemma. Before giving such a proof, we recall that a family of sets has the *finite intersection property* if every finite subfamily has a nonempty intersection: every proper filter has the finite intersection property.

*Proof of the ultrafilter lemma.* Let $\mathcal{F}$ be a proper filter on $X$ and let $\mathbb{F}$ be the family of the collections of subsets of $X$ that extend $\mathcal{F}$ and have the finite intersection property, ordered by inclusion. Let $\mathcal{C}$ be a totally ordered subfamily of $\mathbb{F}$: then $\bigcup \mathcal{C}$ extends $\mathcal{F}$ and has the finite intersection property, because for every finitely many $A_1, \ldots, A_n \in \bigcup \mathcal{C}$ there exists by construction a single $\mathcal{G} \in \mathcal{C}$ such that $A_1, \ldots, A_n \in \mathcal{G}$.

By Zorn’s lemma, $\mathbb{F}$ has a maximal element $\mathcal{U}$, which surely satisfies $\mathcal{F} \subseteq \mathcal{U}$ and $\emptyset \notin \mathcal{U}$. If $A \in \mathcal{U}$ and $A \subseteq B \subseteq X$, then $\mathcal{U} \cup \{B\}$ still has the finite intersection property, therefore $B \in \mathcal{U}$ by maximality. If $A, B \in \mathcal{U}$ then $\mathcal{U} \cup \{A \cap B\}$ still has the finite intersection property, therefore $A \cap B \in \mathcal{U}$, again by maximality.

Suppose, for the sake of contradiction, that there exists $A \subseteq X$ such that $A \notin \mathcal{U}$ and $X \setminus A \notin \mathcal{U}$: then neither $\mathcal{U} \cup \{A\}$ nor $\mathcal{U} \cup \{X \setminus A\}$ has the finite intersection property, hence there exist $B_1, \ldots, B_m, C_1, \ldots, C_n \in \mathcal{U}$ such that $A \cap B_1 \cap \cdots \cap B_m = \emptyset$ and $(X \setminus A) \cap C_1 \cap \cdots \cap C_n = \emptyset$. But the former means $B_1 \cap \cdots \cap B_m \subseteq X \setminus A$, and the latter means $C_1 \cap \cdots \cap C_n \subseteq A$: therefore,

$$B_1 \cap \cdots \cap B_m \cap C_1 \cap \cdots \cap C_n \subseteq (X \setminus A) \cap A = \emptyset,$$

against $\mathcal{U}$ having the finite intersection property. $\Box$

We are now ready to expand the idea of limit. Let $(X, d)$ be a metric space and let $\mathcal{U}$ be an ultrafilter on $\mathbb{N}$: we say that $x \in X$ is the *ultralimit* of the sequence $\{x_n\}$ along $\mathcal{U}$ if for every $\varepsilon > 0$ the set

$$\{n \in \mathbb{N} \mid d(x_n, x) < \varepsilon\}$$

belongs to $\mathcal{U}$. (Observe how, in the standard definition of limit, the above set is required to belong to the Fréchet filter.) If this is the case, we write

$$x = \lim_{\mathcal{U}} x_n.$$

Ultralimits, if they exist, are unique and satisfy our first four conditions. Moreover, the choice of a principal ultrafilter $\mathcal{U}_k$ corresponds to the trivial definition $\lim_{\mathcal{U}_k} x_n = x_k$. So, what about free ultrafilters?
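A concrete example (not from the original talk) may help: the bounded sequence $x_n = (-1)^n$ has no classical limit, but along any ultrafilter $\mathcal{U}$ its ultralimit exists and is determined by whether the set of even numbers belongs to $\mathcal{U}$:

```latex
x_n = (-1)^n, \qquad
\lim_{\mathcal{U}} x_n =
\begin{cases}
\;\;\,1 & \text{if } E = \{n \in \mathbb{N} : n \text{ even}\} \in \mathcal{U}, \\
-1 & \text{if } \mathbb{N} \setminus E \in \mathcal{U}
     \text{ (by maximality, one of the two always holds)}.
\end{cases}
```

Different free ultrafilters can therefore disagree on the value, which is why the construction fixes a notion of limit only after one specific free ultrafilter has been chosen.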

**Theorem 2.** Every bounded sequence of real numbers has an ultralimit along every free ultrafilter on $\mathbb{N}$.

*Proof:* It is not restrictive to suppose $x_n \in [0, 1]$ for every $n$. Let $\mathcal{U}$ be an arbitrary, but fixed, free ultrafilter on $\mathbb{N}$. We will construct a sequence of closed intervals $I_k$, $k \geq 0$, of length $2^{-k}$, such that $I_{k+1} \subseteq I_k$ and $\{n \in \mathbb{N} \mid x_n \in I_k\} \in \mathcal{U}$ for every $k$. By the Cantor intersection theorem it will be $\bigcap_{k \geq 0} I_k = \{x\}$ for some $x \in [0, 1]$: we will then show that $x = \lim_{\mathcal{U}} x_n$.

Let $I_0 = [0, 1]$. Let $I_1$ be either $[0, 1/2]$ or $[1/2, 1]$, chosen according to the following criterion: $\{n \in \mathbb{N} \mid x_n \in I_1\} \in \mathcal{U}$. At least one of the two halves satisfies the criterion because of Lemma 1; if both do, we just choose one once and for all. We iterate the procedure by always choosing as $I_{k+1}$ one of the two halves of $I_k$ such that $\{n \in \mathbb{N} \mid x_n \in I_{k+1}\} \in \mathcal{U}$.

Let $\varepsilon > 0$. Let $\{x\} = \bigcap_{k \geq 0} I_k$, and let $k$ be so large that $2^{-k} < \varepsilon$: then $I_k \subseteq (x - \varepsilon, x + \varepsilon)$, thus $\{n \in \mathbb{N} \mid x_n \in I_k\} \subseteq \{n \in \mathbb{N} \mid |x_n - x| < \varepsilon\}$. As the smaller set belongs to $\mathcal{U}$, so does the larger one. $\Box$

We have thus almost achieved our original target: a notion of limit which applies to every bounded sequence of real numbers. Such a notion will depend on the specific free ultrafilter we choose: but it is already very reassuring that such a notion exists at all! To complete our job we need one more check: we have to be sure that the definition is consistent with the classical one. And this is indeed what happens!

**Theorem 3.** Let $\{x_n\}$ be a sequence of real numbers and let $x \in \mathbb{R}$. Then $x_n \to x$ in the classical sense if and only if $x = \lim_{\mathcal{U}} x_n$ for every free ultrafilter $\mathcal{U}$ on $\mathbb{N}$.

To prove Theorem 3 we make use of an auxiliary result, which is of interest by itself.

**Lemma 2.** Let $\mathbb{F}$ be the family of collections of subsets of $X$ that have the finite intersection property. The maximal elements of $\mathbb{F}$ are precisely the ultrafilters on $X$.

*Proof:* Every ultrafilter is clearly maximal in $\mathbb{F}$. If $\mathcal{U}$ is maximal in $\mathbb{F}$, then it is clearly proper and upper closed, and we can reason as in the proof of the ultrafilter lemma to show that it is actually an ultrafilter. $\Box$

*Proof of Theorem 3:* Suppose $\{x_n\}$ does not converge to $x$ in the classical sense. Fix $\varepsilon > 0$ such that the set $A = \{n \in \mathbb{N} \mid |x_n - x| \geq \varepsilon\}$ is infinite. Then the family consisting of $A$ together with the Fréchet filter has the finite intersection property: let $\mathcal{U}$ be a free ultrafilter that extends it. Then $A \in \mathcal{U}$, and $x$ is not the ultralimit of $\{x_n\}$ along $\mathcal{U}$.

The converse implication follows from the classical definition of limit, together with the very notion of free ultrafilter.

Theorem 3 does hold for sequences of real numbers, but does not extend to arbitrary metric spaces. In fact, the following holds, which we state without proof.

**Theorem 4.** Let $(X, d)$ be a metric space. The following are equivalent.

- For some free ultrafilter $\mathcal{U}$ on $\mathbb{N}$, every sequence in $X$ has an ultralimit along $\mathcal{U}$.
- For every free ultrafilter $\mathcal{U}$ on $\mathbb{N}$, every sequence in $X$ has an ultralimit along $\mathcal{U}$.
- $X$ is compact.

Ultrafilters are useful in many other contexts. For instance, they are used to construct *hyperreal numbers*, which in turn allow a rigorous definition of infinitesimals and the foundation of calculus over those. But this might be the topic for another Theory Lunch talk.