# News aggregator

### Suggestion: Use (.) for function application

Since we're talking about how to make our Haskell code more readable, I thought I'd propose a ridiculous suggestion: Let's use (.) for (backwards) function application!

```haskell
import Prelude hiding ((.))

(.) :: a -> (a -> b) -> b
x . f = f x

infixl 0 .

main :: IO ()
main = do
  print("bananas".reverse)                               -- "sananab"
  print(4.negate.recip)                                  -- -0.25
  print([1..5].map (\ x -> x * 2).filter (\ x -> x > 5)) -- [6,8,10]
```

This immediately makes Haskell much more familiar to all kinds of programmers.

(Note: I'm not serious.)

submitted by taylorfausak

### Xavier Leroy will receive the Royal Society's 2016 Milner Award

The Royal Society will award Xavier Leroy the Milner Award 2016

... in recognition of his research on the OCaml functional programming language and on the formal verification of compilers.

Xavier replied:

It is very moving to see how far we have come, from Milner's great ideas of the 1970s to tools as powerful and as widely used as OCaml and Coq.

### Mark Jason Dominus: A message to the aliens, part 11/23 (solar system)

Earlier articles: Introduction, Common features, Page 1 (numerals), Page 2 (arithmetic), Page 3 (exponents), Page 4 (algebra), Page 5 (geometry), Page 6 (chemistry), Page 7 (mass), Page 8 (time and space), Page 9 (physical units), Page 10 (temperature).

This is page 11 of the *Cosmic Call*
message. An explanation follows.

The 10 digits are 0 through 9; the original article shows the message's glyph for each digit.

Page 11, headed with the glyph for “physics”, is evidently a chart of the solar system, with the Sun at left. The Earth is also labeled, as is Jupiter, the planet most likely to be visible to the recipients. The “Jupiter” glyph does not appear again. Pluto is included, as it was still considered a planet in 1999. (Pluto's status as only one of many similar trans-Neptunian objects was not well appreciated in 1999 when the message was composed, the second TNO having only been discovered in 1992.) To the extent permitted by the low resolution of the image, the diameters of the planets and the Sun are to scale, but not their relative positions; the page is much too small for that. Saturn's rings are not shown, as they are in the Pioneer plaque; by this time it was appreciated that ring systems may be common around large planets.

The masses and radii of Jupiter and the Sun are given, Jupiter above the illustration and the Sun below. The (external) temperature of the Sun is also given, 5763 kelvins. This should be visible to the aliens because the Sun is a blackbody emitter and the spectrum of blackbody radiation is a clear indicator of its temperature. This data should allow the aliens to locate us, should they be so inclined: they know which way the message came from, and can look for a star with the right size and temperature in that direction. When they get closer, Jupiter and the sizes of the planets will provide a confirmation that they are in the right place. Later pages explain that we live on the Earth, so the aliens will know where to point their fusion cannon in order to obliterate our planet.

The next article will discuss page 12, shown at right. Try to figure it out before then.

### Simple web framework that supports user registration/authentication/sessions

Hi, I'm new here on reddit, and to Haskell as well. I have a decent math background and a basic understanding of Haskell.

That being said, I can't decide what to use for my web app prototype.

I've tried yesod, but it's so bloated and messy and it doesn't feel like Haskell at all.

I saw wheb and scotty. Scotty looked best to me, but I saw on GitHub that the last commit was ~2 years ago, and there is no clear way to do authentication.

Is there some tutorial on how to add authentication for standard email/password and integrate it with Google/FB login? I would hate to do the whole auth process (confirmation mail, password retrieval, etc.) myself.

submitted by reygoch

### LVars and expressions

An LVar is a variable which can only be updated so that the sequence of values it took and will take (its history) is (weakly) increasing (w.r.t. a user-defined order).

LVars can be read by specifying thresholds. For instance,

`get var {thr1, thr2}` blocks until var is either >= thr1 or >= thr2 and, once unblocked, returns the threshold reached.

There are some conditions that must hold, but they're not needed to understand what follows.

**Shortcircuiting**

Think about boolean expressions:

```haskell
True && True && False && False && False && True && False && ...
```

The partial evaluations of this expression form a monotonic sequence:

```
True, True, False, False, False, False, False, ...
```

where the order is True < False. Once we reach False, there's no turning back.
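This monotone view of short-circuiting can be checked directly; here is a tiny sketch (the function name is mine, not from the post) that computes the running partial results of a (&&)-chain:

```haskell
-- Running partial results of a (&&)-chain. Under the post's order
-- (True < False) this sequence is weakly increasing: once a False
-- appears, every later partial result is also False.
partials :: [Bool] -> [Bool]
partials = scanl1 (&&)

-- partials [True, True, False, True] == [True, True, False, False]
```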

**Generalizing shortcircuiting**

So, what about an expression that is only partially evaluated as far as needed?

Consider this example:

```haskell
let votes = vote1 + vote2 + vote3 + vote4 + ...
in  if votes > 50 then "Yahoo!" else "Sob :("
```

where each vote is a non-negative integer.

In general, there's no need to evaluate votes completely to evaluate the expression above. If we try to evaluate show votes however, votes must be completely evaluated.
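To make "evaluated as far as needed" concrete, here is a minimal sketch (the names and structure are mine, not from the post): because every vote is non-negative, the running sum only grows, so we can stop as soon as it crosses the threshold.

```haskell
-- Decide "votes > threshold" while consuming only as many votes as needed.
-- Monotonicity (votes are non-negative) guarantees an early True is final.
overThreshold :: Int -> [Int] -> Bool
overThreshold threshold = go 0
  where
    go acc _ | acc > threshold = True   -- short-circuit: later votes can't undo this
    go _   []                  = False
    go acc (v : vs)            = go (acc + v) vs
```

It even works on an infinite supply of votes: `overThreshold 50 ([20, 20, 20] ++ repeat 1)` returns True after inspecting only the first three votes.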

Note that referential transparency is never violated!

**Other considerations**

We could define increasing operators:

```haskell
infixl inc (+>)
x +> y = sup x (x+y)
```

This way, the compiler knows that

```haskell
let votes = vote1 +> vote2 +> vote3 +> ...
```

is "increasing".

**Parallelism**

LVars are used for parallelism. Can we do the same with increasing expressions? The main advantage is that we wouldn't rely on mutation.

The expression above can be computed in parallel because +> is associative and commutative.

Note that with LVars we implicitly use the operator sup which computes the least upper bound:

```haskell
let x = val1 `sup` val2 `sup` val3 `sup` val4 `sup` ...
```

When val1, val2, ... are simple numbers, sup is equivalent to max.
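As a small sketch of that last point (the definitions are mine): writing down sup for two familiar orders also shows how the earlier boolean example fits the same framework, since under the order True < False the least upper bound is exactly (&&).

```haskell
-- Least upper bound for numbers under the usual <= order:
supInt :: Int -> Int -> Int
supInt = max

-- Least upper bound for Bool under the post's order True < False:
supBool :: Bool -> Bool -> Bool
supBool = (&&)

-- Folding with sup computes the lub of a whole (finite) list:
-- foldr1 supInt [3, 1, 4, 1, 5] == 5
```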

submitted by Kiuhnm

### Gabriel Gonzalez: How to make your Haskell code more readable to non-Haskell programmers

This post summarizes a few tips that increase the readability of Haskell code in my anecdotal experience. Each guideline will have a corresponding rationale.

Do not take this post to mean that all Haskell code should be written this way. These are guidelines for code that you wish to use as expository examples to introduce people to the language without scaring them with unfamiliar syntax.

**Rule #1: Don't use ($)**

This is probably the most controversial guideline, but I believe this is the recommendation which has the highest impact on readability.

A typical example of this issue is something like the following code:

```haskell
print $ even $ 4 * 2
```

... which is equivalent to this code:

```haskell
print (even (4 * 2))
```

The biggest issue with the dollar sign is that most people will not recognize it as an operator! There is no precedent for using the dollar sign as an operator in any other language. Indeed, the vast majority of developers program in languages that do not support adding new operators at all, such as JavaScript, Java, C++, or Python, so you cannot reasonably expect them to immediately recognize that the dollar symbol is an operator.

This then leads people to believe that the dollar sign is some sort of built-in language syntax, which in turn convinces them that Haskell's syntax is needlessly complex and optimized for being terse over readable. This perception is compounded by the fact that the most significant use of the dollar symbol outside of Haskell is in Perl (a language notorious for being write-only).

Suppose that they do recognize that the symbol represents an operator. They still cannot guess at what the operator means. There is no obvious mental connection between a symbol used for currency and function application. There is also no prior art for this connection outside of the Haskell language.

Even if a newcomer is lucky enough to guess that the symbol represents function application, it's still ambiguous because they cannot tell if the symbol is left- or right-associative. Even people who do actually take the time to learn Haskell in more depth have difficulty understanding how ($) behaves and frequently confuse it with the composition operator, (.). If people earnestly learning the language have difficulty understanding ($), what chance do skeptics have?

By this point you've already lost many people who might have been potentially interested in the language, and for what? The dollar sign does not even shorten the expression.

**Rule #2: Use operators sparingly**

Rule #1 is a special case of Rule #2.

My rough guideline for which operators to use is that associative operators are okay, and all other operators are not okay.

Okay:

- (.)
- (+) / (*)
- (&&) / (||)
- (++)

Not okay:

- (<$>) / (<*>) - Use liftA{n} or ApplicativeDo in the future
- (^.) / (^..) / %~ / .~ - Use view / toListOf / over / set instead

You don't have to agree with me on the specific operators to keep or reject. The important part is just using them more sparingly when teaching Haskell.

The issues with operators are very similar in principle to the issue with the dollar sign:

- They are not recognizable as operators to some people, especially if they have no equivalent in other languages
- Their meaning is not immediately obvious
- Their precedence and fixity are not obvious, particularly for Haskell-specific operators

The main reason I slightly prefer associative operators is that their fixity does not matter and they usually have prior art outside the language as commonly used mathematical operators.

**Rule #3: Use do notation generously**

Prefer do notation over (>>=) or fmap when available, even if it makes your code a few lines longer. People won't reject a language on the basis of verbosity (Java and Go are living proof of that), but they will reject languages on the basis of unfamiliar operators or functions.

This means that instead of writing this:

```haskell
example = getLine >>= putStrLn . (++ "!")
```

You instead write something like this:

```haskell
example = do
    str <- getLine
    putStrLn (str ++ "!")
```

If you really want a one-liner you can still use do notation, just by adding a semicolon:

```haskell
example = do str <- getLine; putStrLn (str ++ "!")
```

do notation and semicolons are immediately recognizable to outsiders because they resemble subroutine syntax and in the most common case (IO) it is in fact subroutine syntax.

A corollary of this is to use the newly added ApplicativeDo extension, which was recently merged into the GHC mainline and will be available in the next GHC release. I believe that ApplicativeDo will be more readable to outsiders than the (<$>) and (<*>) operators.
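For comparison, here is a minimal sketch of the liftA{n} spelling next to the operator spelling (the example itself is mine, not from the post):

```haskell
import Control.Applicative (liftA2)

-- The operator spelling that Rule #2 suggests avoiding in teaching code:
opStyle :: Maybe Int
opStyle = (+) <$> Just 1 <*> Just 2

-- The liftA2 spelling suggested instead (liftA3, etc. for more arguments):
liftStyle :: Maybe Int
liftStyle = liftA2 (+) (Just 1) (Just 2)
```

Both produce Just 3; the second avoids any Haskell-specific operators.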

**Rule #4: Don't use lenses**

Don't get me wrong: I'm one of the biggest advocates for lenses and I think they firmly belong as a mainstream Haskell idiom. However, I don't feel they are appropriate for beginners.

The biggest issues are that:

- It's difficult to explain to beginners how lenses work
- They require Template Haskell or boilerplate lens definitions
- They require separate names for function accessors and lenses, and one or the other is bound to look ugly as a result
- They lead to poor inferred types and error messages, even when using the more monomorphic versions in lens-family-core

Lenses are wonderful, but there's no hurry to teach them. There are already plenty of uniquely amazing things about the Haskell language worth learning before even mentioning lenses.
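For beginner-facing code, plain records already cover the common cases; a minimal sketch (the Point type and function names are mine):

```haskell
-- Ordinary record accessors and record-update syntax instead of lenses:
-- read with the accessor (px), write with record-update braces.
data Point = Point { px :: Double, py :: Double } deriving (Eq, Show)

moveRight :: Double -> Point -> Point
moveRight dx p = p { px = px p + dx }
```

No Template Haskell, no extra names, and the inferred types are exactly what a newcomer would expect.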

**Rule #5: Use where and let generously**

Resist the temptation to write one giant expression spanning multiple lines. Break it up into smaller sub-expressions each defined on their own line using either where or let.

This rule exists primarily to ease imperative programmers into functional programming. These programmers are accustomed to frequent visual "punctuation" in the form of statement boundaries when reading code. let and where visually simulate decomposing a larger program into smaller "statements" even if they are really sub-expressions and not statements.
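A small illustration of the rule (the example is my own): each sub-expression gets a name and its own line in a where block, reading much like a sequence of statements.

```haskell
-- One computation broken into named "statements" with where:
priceWithTax :: Double -> Double
priceWithTax base = subtotal + tax
  where
    subtotal = base * quantity
    quantity = 3
    tax      = subtotal * 0.25
```

`priceWithTax 10` is 37.5: each line introduces one named intermediate result, which imperative programmers can read top to bottom.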

**Rule #6: Use point-free style sparingly**

Every Haskell programmer goes through a phase where they try to see if they can eliminate all variable names. Spoiler alert: you always can, but this just makes the code terse and unreadable.

For example, I'll be damned if I know what this means without some careful thought and some pen and paper:

```haskell
((==) <*>)
```

... but I can tell at a glance what this equivalent expression does:

```haskell
\f x -> x == f x
```

This is a real example, by the way.
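The equivalence is easy to confirm on sample inputs (the names pointFree and pointful are mine):

```haskell
-- The cryptic point-free form, using the function Applicative
-- ((f <*> g) x = f x (g x)), so ((==) <*> f) x = x == f x:
pointFree :: (Int -> Int) -> Int -> Bool
pointFree = ((==) <*>)

-- The explicit form from the post:
pointful :: (Int -> Int) -> Int -> Bool
pointful f x = x == f x
```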

There's no hard and fast rule for where to draw the line, but when in doubt err on the side of being less point-free.

**Conclusion**

That's it! Those six simple rules go a very long way towards improving the readability of Haskell code to outsiders.

Haskell is actually a supremely readable language once you familiarize yourself with the prevailing idioms, thanks to:

- purity
- the opinionated functional paradigm
- minimization of needless side effects and state

However, we should make an extra effort to make our code readable even to complete outsiders with absolutely no familiarity or experience with the language. The entry barrier is one of the most widely cited criticisms of the language and I believe that a simple and clean coding style can lower that barrier.

### Yesod Web Framework: The new runtime Hamlet API

The Hamlet HTML templating system is, by default, parsed at compile time, allowing it to do quite a bit of static checking. But it can also be useful to parse these templates at runtime, such as if you wanted to allow dynamic content to be submitted by users.

A few weeks ago, a discussion on Reddit pointed out that the API for runtime Hamlet in shakespeare was pretty bad. Just a week later, I came up with a use case for runtime Hamlet myself, and decided to put together a new, more user friendly API.

This new API is provided in Text.Hamlet.Runtime. Hopefully it's much easier to follow what's going on here. And this time, there are even comments on the functions! I'm including the example from the module below to give a better feel for how this works:

```haskell
{-# LANGUAGE OverloadedStrings #-}
import Text.Hamlet.Runtime
import qualified Data.Map as Map
import Text.Blaze.Html.Renderer.String (renderHtml)

main :: IO ()
main = do
    template <- parseHamletTemplate defaultHamletSettings $ unlines
        [ "<p>Hello, #{name}"
        , "$if hungry"
        , "  <p>Available food:"
        , "  <ul>"
        , "    $forall food <- foods"
        , "      <li>#{food}"
        ]
    let hamletDataMap = Map.fromList
            [ ("name", "Michael")
            , ("hungry", toHamletData True) -- always True
            , ("foods", toHamletData
                [ "Apples"
                , "Bananas"
                , "Carrots"
                ])
            ]
    html <- renderHamletTemplate template hamletDataMap
    putStrLn $ renderHtml html
```

### Anchors in Haddock

### Linking to anchors in Haddock

### Edward Kmett: Some Rough Notes on Univalent Foundations and B-Systems, Part I

I recently attended RDP in Warsaw, where there was quite a bit of work on Homotopy Type Theory, including a special workshop organized to present recent and ongoing work. The organizers of all the events did a fantastic job and there was a great deal of exciting work. I should add that I will not be able to go to RDP next year, as the two constituent central conferences (RTA — Rewriting Techniques and Applications and TLCA — Typed Lambda Calculus and Applications) have merged and changed names. Next year it will now be called FSCD — Formal Structures for Computation and Deduction. So I very much look forward to attending FSCD instead.

In any case, one of the invited speakers was Vladimir Voevodsky, who gave an invited talk on his recent work relating to univalent foundations titled “From Syntax to Semantics of Dependent Type Theories — Formalized”. This was a very clear talk that helped me understand his current research direction and the motivations for it. I also had the benefit of some very useful conversations with others involved in collaboration with some of this work, who patiently answered my questions. The notes below are complementary to the slides from his talk.

I had sort of understood what the motivation for studying “C-Systems” was, but I had not taken it on myself to look at Voevodsky’s “B-Systems” before, nor had I grasped how his research programme fit together. Since I found this experience enlightening, I figured I might as well write up what I think I understand, with all the usual caveats. Also note, in all the below, by “type theory” I invariably mean the intensional sort. So all the following is in reference to the B-systems paper that Voevodsky has posted on arXiv (arXiv:1410.5389).

That said, if anything I describe here strikes you as funny, it is more likely that I am not describing things right than that the source material is troublesome — i.e. take this with a grain of salt. And bear in mind that I am not attempting to directly paraphrase Voevodsky himself or others I spoke to, but rather I am giving an account of where what they described resonated with me, and filtered through my own examples, etc. Also, if all of the “why and wherefore” is already familiar to you, feel free to skip directly to the “B-Systems” section where I will just discuss Voevodsky’s paper on this topic, and my attempts to understand portions of it. And if you already understand B-Systems, please do reply and explain all the things I’m sure I’m missing!

**Some Review**

We have a model of type theory in simplicial sets that validates the univalence axiom (and now a few other models that validate this axiom as well). This is to say, it is a model with not only higher dimensional structure, but higher structure of a very “coherent” sort. The heart of this relates to our construction of a “universe”. In our categorical model, all our types translate into objects of various sorts. The “universe,” aka the type-of-types, translates into a very special object, one which “indexes” all other objects. A more categorical way of saying this is that all other types are “fibered over” the universe — i.e. that from every other type there is a map back to a specific point within the universe. The univalence axiom can be read as saying that all equivalent types are fibered over points in the universe that are connected (i.e. there is a path between those points).

Even in a relatively simple dependent type theory, equivalence of types quickly becomes undecidable in general, as it is a superset of the problem of deciding type inhabitation, which in turn corresponds to the decidability of propositions in the logic corresponding to a type theory, and then by any number of well-known results cannot be resolved in general for most interesting theories. This in turn means that the structure of a univalent universe is “describable” but it is not fully enumerable, and is very complex.

We also have a line of work dating back to before the introduction of univalence, which investigated the higher groupoid structure (or, if you prefer, higher topological structure or quillen model structure) induced by identity types. But without either univalence or higher-inductive types, this higher groupoid structure is unobservable internally. This is to say, models were possible that would send types to things with higher structure, but no particular use would be made of this higher structure. So, such models could potentially be used to demonstrate that certain new axioms were not conservative over the existing theory, but on their own they did not provide ideas about how to extend the theory.

How to relate this higher groupoid structure to universes? Well, in a universe, one has paths. Without univalence, these are just identity paths. But regardless, we now get a funny “completion” as our identity paths must *themselves* be objects in our universe, and so too the paths between them, etc. In models without higher structure, we might say “there is only one path from each object to itself” and then we need not worry too much about this potential explosion of paths at each level. But by enforcing the higher groupoid structure, this means that our universe now blossoms with all the potentially distinct paths at each level. However, since reflexivity is the only way in our syntax to create such “extra paths”, any such path structure in our model remains “latent”, and can be added or removed without any effect.

The univalence axiom relies on these higher groupoid structures, but it cannot be reduced to them. Rather, in the model, we must have a fibration over the universe with identity lifting along this fibration to reach the next step — to then modify the universe by forcing paths other than identity paths — those between equivalent types. This is in a sense a further “higher completion” of our universe, adding in first all the possible paths between types, but then the paths between those paths, and so on up. Because, by univalence, we *can* state such paths, in our model we *must* include all of them.

**The Problem**

All along I have been saying “models of type theory”. And it is true enough. We do know how to model type theories of various sorts categorically (i.e. representing the translation from their syntax into their semantics as functorial). But we do not have full models of "fully-featured" type theories; i.e. if we view type theories as pizzas we have models of cheese slices, and perhaps slices with olives and slices with pepperoni, etc. But we do not have models of pizzas with "all the toppings". Here, by "toppings" I mean things such as the addition of "all inductive types," "some coinductive types," "certain higher-inductive types," "pattern matching," "induction-induction," "induction-recursion," "excluded middle as an axiom," "choice as an axiom," "propositional resizing as an axiom," etc.

Rather, we have a grab bag of tricks, as well as a few slightly different approaches — Categories with Attributes, Categories with Families, and so forth. One peculiar feature of these sorts of models, as opposed to the models of extensional type theory, is that these models are not indexed by types, but by “lists of types” that directly correspond to the contexts in which we make typing judgments in intensional theories.

In any case, these models are usually used in an ad-hoc fashion. If you want to examine a particular feature of a language, you first pick from one of these different but related sorts of models. Then you go on to build a version with the minimal set of what you need — so maybe identity types, maybe sigma types, maybe natural numbers, and then you introduce your new construction or generate your result or the like.

So people may say “we know how to work with these things, and we know the tricks, so given a theory, we can throw together the facts about it pretty quickly.” Now of course there are maybe only a hundred people on the planet (myself not among them) who can *really* just throw together a categorical model of some one or another dependent type theory at the drop of a hat.

But there’s a broader problem. How can we speak about the mutual compatibility of different extensions and features if each one is validated independently in a different way? This is a problem very familiar to us in the field of programming languages — you have a lot of “improvements” to your language, all of a similar form. But then you put such “improvements” together and now something goes wrong. In fact, the famous “newtype deriving bug” in GHC some years back, which opened a big hole in the type system, was of exactly that form — two extensions (in that case, newtype deriving and type families) that are on their own safe and useful, together have an unexpected bad effect. It is also possible to imagine bad interactions occurring only when three extensions exist together, and so forth. So as the number of extensions increases, the number of interactions to check spirals upwards in a very difficult fashion.

So the correct way to have confidence in the coexistence of these various extensions is to have a general model that contains the sort of theory we actually want to work in, rather than these toy theories that let us look at portions in isolation. And this certainly involves having a theory that lets us validate all inductive types at once in the model, rather than extending it over and over for each new type we add. Additionally, people tend to model things with at most one universe. And when we are not looking at universes, it is often omitted altogether, or done “incorrectly” as an inhabitant of itself, purely for the sake of convenience.

So now, if I tell someone with mathematical experience what my theory “means” and they say “is this actually proven” I’m in the embarrassing position of saying “no, it is not. but the important bits all are and we know how to put them together.” So here I am, trying to advocate the idea of fully formal verification, but without a fully top-to-bottom formally verified system myself — not even codewise, but in even the basic mathematical sense.

Univalence makes this problem more urgent. Without univalence, we can often get away with more hand-wavy arguments, because things are “obvious”. Furthermore, they relate to the way things are done elsewhere. So logic can be believed by analogy to how people usually think about logic, numbers by analogy to the Peano system, which people already “believe in,” and so forth. Furthermore, without univalence, most operations are “directly constructive” in the sense that you can pick your favorite “obvious” and non-categorical model, and they will tend to hold in that as well — so you can think of your types as sets, and terms as elements of sets. Or you can think of your types as classifying computer programs and your terms as runnable code, etc. In each case, the behavior leads to basically what you would expect.

But in none of these “obvious” models does univalence hold. And furthermore, it is “obviously” wrong in them.

And that is just on the “propaganda” side as people say. For the same reasons, univalence tends to be incompatible with many “obvious” extensions — for example, not only “uniqueness of identity proofs” has to go, but pattern matching had to be rethought so as not to imply it, and furthermore it is not known if it is sound in concert with many other extensions such as general coinductive types, etc. (In fact, the newtype deriving bug itself can be seen as a "very special case" of the incompatibility of univalence with Uniqueness of Identity Proofs, as I have been discussing with people informally for quite some time).

Hence, because univalence interacts with so many other extensions, it feels even more urgent to have a full account. Unlike prior research, which really focused on developing and understanding type systems, this is more of an engineering problem, although a proof-engineering problem to be sure.

**The Approach**

Rather than just giving a full account of “one important type system,” Voevodsky seems to be aiming for a generally smooth way to develop such full accounts even as type systems change. So he is interested in reusable technology, so to speak. One analogy may be that he is interested in building the categorical semantics version of a logical framework. His tool for doing this is what he calls a “C-system”, which is a slight variant of Cartmell’s Categories with Attributes mentioned above. One important aspect of C-systems seems to be that they stratify types and terms in some fashion, and that you can see them as generated by some “data” about a ground set of types, terms, and relations. To be honest, I haven’t looked at them more closely than that, since I saw at least some of the “point” of them and know that to really understand the details I'll have to study categorical semantics more generally, which is ongoing.

But the plan isn’t just to have a suitable categorical model of type theories. Rather it is to give a description of how one goes from the “raw terms” as syntax trees all the way through to how typing judgments are passed on them and then to their full elaborations in contexts and finally to their eventual “meaning” as categorically presented.

Of course, most of these elements are well studied already, as are their interactions. But they do not live in a particularly compatible formulation with categorical semantics. This then makes it difficult to prove that “all the pieces line up” and in particular, a pain to prove that a given categorical semantics for a given syntax is the “initial” one — i.e. that if there is any other semantics for that syntax, it can be arrived at by first “factoring through” the morphism from syntax to the initial semantics. Such proofs can be executed, but again it would be good to have “reusable technology” to carry them out in general.

**pre-B-Systems**

Now we move into the proper notes on Voevodsky's "B-Systems" paper.

If C-systems are at the end-point of the conveyor belt, we need the pieces in the middle. And that is what a B-system is. Continuing the analogy with the conveyor belt, what we get out at the end is a “finished piece” — so an object in a C-system is a categorified version of a “type in-context” capturing the “indexing” or “fibration” of that type over its “base space” of types it may depend on, and also capturing the families of terms that may be formed in various different contexts of other terms with other types.

B-systems, which are a more “syntactic” presentation, have a very direct notion of “dynamics” built in — they describe how objects and contexts may be combined and put together, and directly give, by their laws, which slots fit into which tabs, etc. Furthermore, B-systems are to be built by equipping simpler systems with successively more structure. This gives us a certain sort of notion of how to talk about the distinction between things closer to "raw syntax" (not imbued with any particular meaning) and that subset of raw syntactic structures which have certain specified actions.

So enough prelude. What precisely is a B-system? We start with a pre-B-system, as described below (corresponding to Definition 2.1 in the paper).

First there is a family of sets, indexed by the natural numbers. We call it B_n. B_0 is to be thought of as the empty context. B_1 as the set of typing contexts with one element, B_2 as the set with two elements, where the second may be indexed over the first, etc. Elements of B_3 thus can be thought of as looking like "x_1 : T_1, x_2 : T_2(x_1), x_3 : T_3(x_1,x_2)" where T_2 is a type family over one type, T_3 a type family over two types, etc.

For all typing contexts of at least one element, we can also interpret them as simply the _type_ of their last element, but as indexed by the types of all their prior elements. Conceptually, B_n is the set of "types in context, with no more than n-1 dependencies".

Now, we introduce another family of sets, indexed by the natural numbers starting at 1. We call this set ˜B_n. ˜B_1 is to be thought of as the set of all values that may be drawn from any type in the set B_1, and so forth. Thus, each set ˜B_n is to be thought of as fibered over B_n. We think of this as "terms in context, whose types have no more than n-1 dependencies". Elements of ˜B_3 can be thought of as looking like "x_1 : T_1, x_2 : T_2(x_1), x_3 : T_3(x_1,x_2) ⊢ y : x". That is to say, elements of B_n for some n look like "everything to the left of the turnstile" and elements of ˜B_n for some n look like "the left and right hand sides of the turnstile together."

We now, for each n, give a map:

∂ : ˜B_n+1 -> B_n+1.

This map is the witness to this fibration. Conceptually, it says "give me an element of some type of dependency level n, and I will pick out which type this is an element of". We can call ∂ the "type of" operator.

We add a second basic map:

ft : B_n+1 -> B_n

This is a witness to the fact that all our higher B_n are built as extensions of smaller ones. It says "Give me a context, and I will give you the smaller context that has every element except the final one". Alternately, it reads "Give me a type indexed over a context, and I will throw away the type and give back just the context." Or, "Give me a type that may depend on n+1 things, and I will give the type it depends on that may only depend on n things." We can call ft the "context of" operator.

Finally, we add a number of maps to correspond to weakening and substitution -- four in all. In each case, we take m >= n, and we denote the i-fold application of ft by ft_i.

1) T (type weakening).

T : (Y : B_n+1) -> (X : B_m+1) -> ft(Y) = ft_(m+1-n)(X) -> B_m+2

This reads: Give me two types-in-context, X and Y. Now, if the context for Y agrees with the context for X in the initial segment (i.e. discarding the elements of the context of X which are "longer" than the context for Y), then I can give you back X again, but now in an extended context that includes Y as well.

2) ˜T (term weakening).

˜T : (Y : B_n+1) -> (r : ˜B_m+1) -> ft(Y)=ft_(m+1-n)(∂(r)) -> ˜B_m+2

This reads: Give me a type-in-context Y, and a term-in-context r. Now, if the context of Y agrees with the context for the type of r as above, then I can give you back r again, but now as a term-in-context whose type has an extended context that includes Y as well.

3) S (type substitution).

S : (s : ˜B_n+1) -> (X : B_m+2) -> ∂(s) = ft_(m+1-n)(X) -> B_m+1

This reads: give me a term-in-context s, and a type-in-context X. Now, if the context of the type of s agrees with the context of X in the initial segment, we may produce a new type, which is X with one less element in its context (because we have substituted the explicit term s where the prior dependency data was recorded).

4) ˜S (term substitution).

˜S : (s : ˜B_n+1) -> (r : ˜B_m+2) -> ∂(s) = ft_(m+1-n)(∂(r)) -> ˜B_m+1

This reads: give me two terms-in-context, r and s. Now, given the usual compatibility condition on contexts, we can produce a new term, which is like r, but whose context has one less dependency (because we have substituted the explicit term s everywhere the prior dependency data appeared).
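Here is how T and S look in a toy named-variable Haskell sketch of my own (assuming all binder names are distinct, so no index shifting is needed; ˜T and ˜S act the same way on the context part, additionally carrying the term along):

```haskell
import Data.List (isPrefixOf)

-- Same toy encoding: contexts are lists of (name, type) bindings.
data Expr = V String | C String | A Expr Expr deriving (Eq, Show)
type Ctx  = [(String, Expr)]     -- B_n
data Term = Term Ctx Expr Expr   -- ˜B_(n+1): context ⊢ term : type

-- Capture-naive substitution of s for the name x inside an expression.
subst :: String -> Expr -> Expr -> Expr
subst x s e = case e of
  V y | y == x    -> s
      | otherwise -> V y
  C c             -> C c
  A f a           -> A (subst x s f) (subst x s a)

-- T (type weakening): if Y's context is an initial segment of X's context,
-- insert Y's final entry into X's context right after that segment.
weakenT :: Ctx -> Ctx -> Maybe Ctx
weakenT y x
  | init y `isPrefixOf` x =
      let (pre, post) = splitAt (length y - 1) x
      in Just (pre ++ [last y] ++ post)
  | otherwise = Nothing

-- S (type substitution): if ∂(s) matches an initial segment of X, drop the
-- final entry of that segment and substitute s's term for its name in the
-- remaining types, shortening the context by one.
substS :: Term -> Ctx -> Maybe Ctx
substS (Term g tm ty) x =
  case splitAt (length g) x of
    (pre, (v, vty) : rest)
      | pre == g && vty == ty ->
          Just (g ++ [ (y, subst v tm t) | (y, t) <- rest ])
    _ -> Nothing
```

For instance, substituting a term a : T_2(x_1) into the context "x_1 : T_1, x_2 : T_2(x_1), x_3 : T_3(x_2)" yields "x_1 : T_1, x_3 : T_3(a)", with the x_2 dependency discharged.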

Let us now review what we have: We have dependent terms and types, related by explicit maps between them. For every term we have its type, and for every type we have its context. Furthermore, we have weakening by types of types and terms -- so we record where "extra types" may be introduced into contexts without harm. We also have substitution of terms into type and terms -- so we record where reductions may take place, and the resulting effect on the dependency structure.

**Unital pre-B-systems**

We now introduce a further piece of data, which renders a pre-B-system a _unital_ pre-B-system, corresponding to Definition 2.2 in the paper. For each n we add an operation:

δ : B_n+1 -> ˜B_n+2

This map "turns a context into a term". Conceptually it is the step that equips a pre-B-system with a universe, as it is what allows types to transform into terms. I find the general definition a bit confusing, but I believe that for B_2 it can be rendered syntactically as something like the following: "x_1 : T_1, x_2 : T_2(x_1) -> x_1 : T_1, x_2 : T_2(x_1), x_3 : U ⊢ x_2 : x_3". That is to say, given any context, we now have a universe U that gives the type of "universes of types", and we say that the type itself is a term that is an element of a universe. Informally, one can think of δ as the "term of" operator.

But none of this specific structure is yet enforced by any laws on δ. Indeed, the next thing we do is introduce a "B0-system", which adds some further coherence conditions to restrict this generality.

**B0-systems**

The following are my attempt to "verbalize" the B0 system conditions (as restrictions on non-unital pre-B-systems) as covered in definition 2.5. I do not reproduce here the actual formal statements of these conditions, for which one should refer to the paper and just reason through very carefully.

1. The context of a weakening of a type is the same as the weakening of the context of a type.

2. The type of the weakening of a term is the same as the weakening of the type of a term.

3. The context of a substitution into a type is the same as the substitution into a context of a type

4. The type of a substitution into a term is the same as a substitution into the type of a term.

Finally, we "upgrade" a non-unital B0-system to a unital B0-system with one further condition:

5. ∂(δ(X)) = T(X,X).

I read this to say that, "for any type-in-context X, the type of the term of X is the same as the weakening of X by the assumption of X itself." This is to say, if I create a term for some type X, and then discard that term, this is the same as extending the context of X by X again.
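In my toy named-variable encoding (again, my own sketch, not the paper's), condition 5 can be checked directly -- though only up to binder names, since a syntactic ∂ has to invent a fresh one:

```haskell
-- Same toy encoding as my earlier sketches.
data Expr = V String | C String | A Expr Expr deriving (Eq, Show)
type Ctx  = [(String, Expr)]
data Term = Term Ctx Expr Expr   -- context ⊢ term : type

-- δ, the "term of" operator: B_(n+1) -> ˜B_(n+2).
-- The last variable of X becomes a term of the last type of X.
delta :: Ctx -> Term
delta x = let (v, ty) = last x in Term x (V v) ty

-- ∂ as before: forget the term, record its type with a fresh binder name.
partialOp :: Term -> Ctx
partialOp (Term ctx _ ty) = ctx ++ [("x" ++ show (length ctx + 1), ty)]

-- T specialized to Y = X, where the compatibility condition always holds:
-- extend X by its own final entry.
weakenT :: Ctx -> Ctx -> Ctx
weakenT y x = let (pre, post) = splitAt (length y - 1) x
              in pre ++ [last y] ++ post

-- Compare contexts up to binder names, keeping only the types.
types :: Ctx -> [Expr]
types = map snd
```

With this, `types (partialOp (delta x))` and `types (weakenT x x)` agree for any non-empty context x, which is exactly the unital law read through this encoding.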

Here, I have not even gotten to "full" B-systems yet, and am only on page 5 of a seventeen-page paper. But I have been poking at these notes for long enough without posting them, so I'll leave off for now and, hopefully, when time permits, return to at least the second half of section 2.

### How would I split a conduit input through multiple conduits, combining their output?

I want to take X inputs from a source, feed each one through a different conduit, then combine the outputs in the same order into a single output conduit. What would be the best (best meaning most efficient, in this case) way to do that? To better explain the question, see this image.
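The routing idea can be sketched without the conduit library at all, by modeling a stream processor as a plain function on lists (an illustration only -- names here are my own; with real conduit one could hand-roll the same loop with `await` and `yield`, consuming one element per inner conduit):

```haskell
-- A toy stand-in for a conduit: a stream processor is just [i] -> [o].
type Pipe i o = [i] -> [o]

-- Route the k-th input through the k-th processor (each processor sees a
-- single element here), concatenating outputs in the original order.
-- Extra inputs or extra processors are simply dropped, as zipWith does.
splitThrough :: [Pipe i o] -> Pipe i o
splitThrough ps xs = concat (zipWith (\p x -> p [x]) ps xs)
```

For example, `splitThrough [map (+1), map (*2), map negate] [10, 20, 30]` applies a different processor to each input while preserving order.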

submitted by Unknownloner[link] [2 comments]

### The GHC Team: GHC Weekly News - 2015/09/17

Hi *,

Welcome to the latest entry in the GHC Weekly News. It's been a little while, but here we are!

And your author has finally returned from his 8-week sabbatical, too! So without any further ado, let's get going...

**8.0.1 release roadmap**

So HEAD has been steaming along pretty smoothly for the past few months now. After talking with Simon last week, we decided that the best course of action would be to release 8.0.1 (a super-major release) sometime around late February, which was the plan for 7.10 (modulo a few weeks due to the FTP debates). The current schedule is roughly:

- November: Fork the new ghc-8.0 STABLE branch
- At this point, master development will probably slow as we fix bugs.
- This gives us 2 months or so until the branch, from today.
- This is nice as the branch is close to the first RC.

- December: First release candidate
- Mid/late February: Final release.

"**Why call it 8.0.1?**", you ask? Because we have a lot of excellent features planned! I'm particularly partial to Richard's work for merging types and kinds (Phab:D808). But there's a lot of other stuff.

For all the nitty-gritty details, be sure to check the 8.0.1 status page to keep track of everything - it will be our prime source of information and coordination. And be sure to read my email to `ghc-devs` for more info.

**... and a 7.10.3 release, perhaps?**

On top of this, we've been wondering whether another release in the 7.10 branch should be done. Ben did the release shortly after I left, and for the most part it looked pretty great. But there have been some snags, as usual.

So we're asking: who needs GHC 7.10.3? We'd really like to know of any major showstoppers you've found with 7.10 that are preventing you from using it. Especially if you're stuck or there's no clear workaround.

Currently, we're still not 100% committed to this course of action (since the release will take time away from other things). However, we'll keep the polls open for a while - so *please* get in touch with us if you need it! (Be sure to read my email for specific information.)

**List chatter** (over the past two weeks)

- Bartosz Nitka writes to ghc-devs about the ongoing work to try and fix deterministic compilation in GHC (the dreaded ticket #4012). There's a very detailed breakdown of the current problems and issues in play, with responses from others - https://mail.haskell.org/pipermail/ghc-devs/2015-September/009964.html

- Richard Eisenberg wants to know - how can I download all of Hackage to play with it? GHC developers are surely interested in this, so they can find regressions quickly - https://mail.haskell.org/pipermail/ghc-devs/2015-September/009956.html

- I wrote to the list about the upcoming tentative 7.10.3 plans, as I mentioned above. https://mail.haskell.org/pipermail/ghc-devs/2015-September/009953.html

- I *also* wrote to the list about the tentative 8.0.1 plans, too. https://mail.haskell.org/pipermail/ghc-devs/2015-September/009952.html

- Johan Tibell asks about his ongoing work for implementing unboxed sum types - in particular, converting unboxed sum types in StgCmm. https://mail.haskell.org/pipermail/ghc-devs/2015-September/009926.html

- Ryan Scott wrote a proposal for the automatic derivation of Lift through GHC's deriving mechanism, specifically for template-haskell users. The response was positive and the code is going through review now (Phab:D1168). https://mail.haskell.org/pipermail/ghc-devs/2015-September/009838.html

- Andrew Gibiansky writes in with his own proposal for a new "Argument Do" syntax - a change which would allow `do` to appear in positions without ($) or parentheses, essentially changing the parser to insert parens as needed. The code is up at Phabricator for brave souls (Phab:D1219). https://mail.haskell.org/pipermail/ghc-devs/2015-September/009821.html

- Edward Yang started a monstrous thread after some discussions at ICFP about a future for *unlifted* data types in GHC. These currently exist as special magic, but the proposals included would allow users to declare their own types as unlifted, and make unlifted values more flexible (allowing newtype, for example). See wiki:UnliftedDataTypes and Edward's thread for more. https://mail.haskell.org/pipermail/ghc-devs/2015-September/009799.html

**Noteworthy commits** (over the past several weeks)

- Commit 374457809de343f409fbeea0a885877947a133a2 - **Injective Type Families**

- Commit 8ecf6d8f7dfee9e5b1844cd196f83f00f3b6b879 - **Applicative Do notation**

- Commit 6740d70d95cb81cea3859ff847afc61ec439db4f - Use IP-based CallStack in error and undefined

- Commit 43eb1dc52a4d3cbba9617f5a26177b8251d84b6a - Show MINIMAL complete definition in GHCi's :info

- Commit 296bc70b5ff6c853f2782e9ec5aa47a52110345e - Use a response file for linker command line arguments

- Commit 4356dacb4a2ae29dfbd7126b25b72d89bb9db1b0 - Forbid annotations when Safe Haskell is enabled

- Commit 7b211b4e5a38efca437d76ea442495370da7cc9a - Upgrade GCC/binutils to 5.2.0 release for Windows (i386/amd64)

**Closed tickets** (over the past two weeks)

#10834, #10830, #10047, #9943, #1851, #1477, #8229, #8926, #8614, #10777, #8596, #10788, #9500, #9087, #10157, #10866, #10806, #10836, #10849, #10869, #10682, #10863, #10880, #10883, #10787, #8552, #10884, #7305, #5757, #9389, #8689, #10105, #8168, #9925, #10305, #4438, #9710, #10889, #10885, #10825, #10821, #10790, #10781, #9855, #9912, #10033, #9782, #10035, #9976, #10847, and #10865.

### good tutorials on haskell package for gnuplot?

Dear someone, I am trying to use the Graphics.Gnuplot.Simple package for Haskell (I could gladly use the Advanced one too), but I only find some examples using plotList (on Hackage), and some comments on forums. For instance, I don't know how to save the plot to a file. Can someone recommend some good tutorials (either on YouTube or as PDF)? That would be very appreciated.
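For the file-saving part specifically: Graphics.Gnuplot.Simple's plotting functions take a list of Attribute values, and (if I'm reading the Hackage docs correctly) Attribute includes PNG and EPS constructors taking a FilePath, which redirect gnuplot's output to a file instead of a window. A minimal sketch, assuming the gnuplot package is installed:

```haskell
import Graphics.Gnuplot.Simple (plotList, Attribute (PNG, Title))

main :: IO ()
main =
  -- PNG "sine.png" sends the plot to a file rather than opening a window.
  plotList [PNG "sine.png", Title "sin x"]
           [ (x, sin x) | x <- [0, 0.1 .. 6 :: Double] ]
```

The same Attribute list works with the plotting functions in Graphics.Gnuplot.Advanced, as far as I can tell.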

submitted by stianhotboi[link] [1 comment]

### I had to write in Java today. It was awful.

I think Haskell has spoiled me. Now whenever I'm writing code in other languages, I'm constantly thinking how much shorter and more elegant it would be in Haskell. Anyone else feel an intense loathing for other programming languages after learning Haskell?

submitted by elliottneversleeps[link] [41 comments]