News aggregator

SPJ's Venn diagram on type-correctness

haskell-cafe - 0 sec ago
Hi all, Sometime in the past, I came across a presentation from SPJ in which he showed a Venn diagram showing "programs that compile" and "programs that are correct". Unfortunately, I cannot remember the exact wording and I'm unable to find the slides/talk on google/youtube. Does anyone remember the exact details of the diagram? Or the title of the talk? Is there still a link to it? Thanks, Mathijs
Categories: Offsite Discussion

Question: Finding the source of incomplete record construction

Haskell on Reddit - Mon, 07/06/2015 - 12:26am

Disclaimer: crossposted on StackOverflow

I'm trying to debug a large, complicated program in Haskell, which I didn't entirely write myself.

I'm trying to print my data structures to diagnose a bug, but when I do so, I get the following error: error: Prelude.undefined. As you can see, this error is extremely non-informative.

I'm reasonably sure that this is coming from a record that I've "partially" initialized, where I'm trying to access a field whose value has not been set.
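For illustration, here is a minimal program (my own reconstruction, not the actual project code) showing both flavors of the failure. GHC's -fwarn-missing-fields flag warns at partial construction sites, and a profiling build run with +RTS -xc will print a cost-centre stack when the exception is thrown:

{-# OPTIONS_GHC -fwarn-missing-fields #-}
module Main where

data Config = Config
  { name    :: String
  , verbose :: Bool
  } deriving Show

-- Partially initialized: GHC warns here, and evaluating 'verbose'
-- later throws "Missing field in record construction verbose".
partial :: Config
partial = Config { name = "demo" }

-- Explicit placeholder: evaluating 'verbose' here gives exactly the
-- uninformative "Prelude.undefined" described above.
placeholder :: Config
placeholder = Config { name = "demo", verbose = undefined }

main :: IO ()
main = print placeholder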

The program (a compiler) is spread over two cabal projects, a library, and an executable which uses that library. This makes debugging using GHCi/cabal repl hard: I can't use GHCi on the executable, because it's not the source of the error, but recreating the input the executable gives to the library is too complicated to do by hand.

I'm wondering: what can I do to get more information about where the incorrect record is being created, what field is the source of the error, etc. Is there an RTS option or something I can use to give more information for error output?

submitted by jmite
[link] [12 comments]
Categories: Incoming News

ANN: zoom-refs - zoom and pairing for mutable references

haskell-cafe - Sun, 07/05/2015 - 10:49pm
Hello,

I've just uploaded my package zoom-refs: http://hackage.haskell.org/package/zoom-refs-0.0.0.0

It's a "port" of State monad zoom (from lens) to mutable references:

zoomTVar :: Lens' a b -> TVar a -> TVar b

These TVars aren't actually the raw TVars from STM, but wrappers that provide the same functionality. Similar functions are provided for STRefs and IORefs. Additionally, TVars and STRefs can be paired to create composite references:

pairTVars :: TVar a -> TVar b -> TVar (a, b)

No such functionality is provided for IORefs, as there would be no way to guarantee atomicity of operations on the underlying references. Together, mutable references can be used sort of like Functors and Applicatives, though one needs to use lenses rather than plain functions to map over them. Finally, there are multi-references that use traversals instead of lenses to zoom:

readMultiTVar :: Monoid a => MultiTVar a -> STM a
readMultiTVarList :: MultiTVar a -> STM [a]
readMultiTVarHead :: MultiTVar a
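To see why a lens is exactly what's needed here, a minimal self-contained version of the idea (my own sketch, independent of the package and its actual API) can wrap a real STM TVar with a getter/setter pair standing in for Lens' a b:

{-# LANGUAGE ExistentialQuantification #-}
module Main where

import Control.Concurrent.STM

-- A wrapped TVar: a real TVar of some hidden type 'a', plus a getter
-- and setter focusing a component 'b'.
data ZTVar b = forall a. ZTVar (TVar a) (a -> b) (b -> a -> a)

readZTVar :: ZTVar b -> STM b
readZTVar (ZTVar v get _) = fmap get (readTVar v)

writeZTVar :: ZTVar b -> b -> STM ()
writeZTVar (ZTVar v _ set) b = modifyTVar' v (set b)

-- Zooming composes a new focus with the existing one.
zoomZTVar :: (b -> c) -> (c -> b -> b) -> ZTVar b -> ZTVar c
zoomZTVar get' set' (ZTVar v get set) =
  ZTVar v (get' . get) (\c a -> set (set' c (get a)) a)

main :: IO ()
main = do
  raw <- newTVarIO (1 :: Int, "hello")
  let whole = ZTVar raw id const
      first = zoomZTVar fst (\x (_, y) -> (x, y)) whole
  atomically (writeZTVar first 42)
  print =<< readTVarIO raw -- prints (42,"hello")

Two zoomed views of the same underlying TVar still share it, so STM transactions touching either view conflict correctly; this is presumably why the package wraps TVars rather than allocating new ones.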
Categories: Offsite Discussion

Ken T Takusagawa: [dcqhszdf] ChaCha cipher example

Planet Haskell - Sun, 07/05/2015 - 10:15pm

The ChaCha cipher seems not to get as much love as Salsa20. Here is a step-by-step example of the ChaCha round function operating on a matrix. The format of the example is loosely based on the analogous example in section 4.1 of this Salsa20 paper: D. J. Bernstein. The Salsa20 family of stream ciphers. Document ID: 31364286077dcdff8e4509f9ff3139ad. URL: http://cr.yp.to/papers.html#salsafamily. Date: 2007.12.25.
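For reference, the quarter round being traced below is the standard ChaCha one, with rotation distances 16, 12, 8 and 7. A direct Haskell transcription over Word32 (a minimal version for following the example, not the post's polymorphic implementation):

import Data.Bits (rotateL, xor)
import Data.Word (Word32)

-- One ChaCha quarter round on a column [a; b; c; d]. Each line in the
-- where clause is one "line of round function" step in the trace.
quarterRound :: (Word32, Word32, Word32, Word32)
             -> (Word32, Word32, Word32, Word32)
quarterRound (a0, b0, c0, d0) = (a2, b2, c2, d2)
  where
    a1 = a0 + b0; d1 = (d0 `xor` a1) `rotateL` 16
    c1 = c0 + d1; b1 = (b0 `xor` c1) `rotateL` 12
    a2 = a1 + b1; d2 = (d1 `xor` a2) `rotateL` 8
    c2 = c1 + d2; b2 = (b1 `xor` c2) `rotateL` 7

Evaluating quarterRound (0x61707865, 0x04030201, 0x14131211, 0x00000007) yields (0xdccbd30d, 0x395746a7, 0x392af62a, 0xaab67ea6), matching the first trace below.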

original column [a;b;c;d]
61707865 04030201 14131211 00000007

after first line of round function
65737a66 04030201 14131211 7a616573

after second line of round function
65737a66 775858a7 8e747784 7a616573

after third line of round function
dccbd30d 775858a7 8e747784 aab67ea6

after all 4 lines of round function, i.e., quarter round
dccbd30d 395746a7 392af62a aab67ea6

original matrix, with the same original column above
61707865 3320646e 79622d32 6b206574
04030201 08070605 0c0b0a09 100f0e0d
14131211 18171615 1c1b1a19 201f1e1d
00000007 00000000 01040103 06020905

one round (4 quarter rounds on columns)
dccbd30d 109b031b 0eb5ed20 4483ec2b
395746a7 d88a8f5f 7a292fab b06c9135
392af62a 6ac28db6 dfbce7ba a234a188
aab67ea6 e8383c7a 8d694938 0791063e

after shift rows
dccbd30d 109b031b 0eb5ed20 4483ec2b
d88a8f5f 7a292fab b06c9135 395746a7
dfbce7ba a234a188 392af62a 6ac28db6
0791063e aab67ea6 e8383c7a 8d694938

after another 4 quarter rounds on columns
06b44c34 69a94c11 2ce99b08 216830d1
29b215bd 721e2a33 f0a18097 708e1ee5
2b0e8de3 b801251f 42265fb2 696de1c2
e6fef362 c96c6325 c6cc126e 82c0635a

unshifting rows (concludes 1 double round)
06b44c34 69a94c11 2ce99b08 216830d1
708e1ee5 29b215bd 721e2a33 f0a18097
42265fb2 696de1c2 2b0e8de3 b801251f
c96c6325 c6cc126e 82c0635a e6fef362

after 8 rounds (4 double rounds)
f6093fbb efaf11c6 8bd2c9a4 bf1ff3da
bf543ce8 c46c6b5e c717fe59 863195b1
2775d1a0 babe2495 1b5c653e df7dc23c
5f3e08d7 041df75f f6e58623 abc0ab7e

Adding the original input to the output of 8 rounds
5779b820 22cf7634 0534f6d6 2a40594e
c3573ee9 cc737163 d3230862 9640a3be
3b88e3b1 d2d53aaa 37777f57 ff9ce059
5f3e08de 041df75f f7e98726 b1c2b483

reading the above as bytes, little endian
20 b8 79 57 34 76 cf 22 d6 f6 34 05 4e 59 40 2a
e9 3e 57 c3 63 71 73 cc 62 08 23 d3 be a3 40 96
b1 e3 88 3b aa 3a d5 d2 57 7f 77 37 59 e0 9c ff
de 08 3e 5f 5f f7 1d 04 26 87 e9 f7 83 b4 c2 b1

same as above but with 20000 rounds (10000 double rounds)
11 a3 0a d7 30 d2 a3 dc d8 ad c8 d4 b6 e6 63 32
72 c0 44 51 e2 4c ed 68 9d 8d ff 27 99 93 70 d4
30 2e 83 09 d8 41 70 49 2c 32 fd d9 38 cc c9 ae
27 97 53 88 ec 09 65 e4 88 ff 66 7e be 7e 5d 65

The example was calculated using an implementation of ChaCha in Haskell, whose end results agree with Bernstein's C reference implementation. The Haskell implementation is polymorphic, allowing as matrix elements any data type (of any word width) implementing Bits, and parametrizable to matrices of any size 4xN. (Security is probably bad for N not equal to 4. For word width different from 32, you probably want different rotation amounts.) The flexibility comes at a cost: the implementation is 3000 times slower than Bernstein's reference C implementation (which in turn is slower than SIMD optimized assembly-language implementations).

Also included in the same project is a similar Haskell implementation of Salsa20, parametrized to matrices of any size MxN because of the more regular structure of the Salsa20 quarter round function compared to ChaCha. We demonstrate taking advantage of polymorphism to use the same code both to evaluate Salsa20 on Word32 and to generate C code for the round function.

Categories: Offsite Blogs

stm onCommit

haskell-cafe - Sun, 07/05/2015 - 9:54pm
I need an onCommit functionality for stm. I know there is the stm-io-hooks [1] package, but it seems so big for such a small thing, and then I have to translate everything to their monad. lift lift lift ... So I thought of a little hack to solve the issue and I'm wondering if this is safe to use. My understanding of stm internals is very limited. It's basically just a TChan with IO actions in it, and there is another thread waiting to execute anything inserted into it.

import Control.Concurrent (forkIO)
import Control.Concurrent.STM
import Control.Concurrent.STM.TChan
import Control.Monad (forever, join)
import System.IO.Unsafe (unsafePerformIO)

onCommitChan :: TChan (IO ())
{-# NOINLINE onCommitChan #-}
onCommitChan = unsafePerformIO $ do
  chan <- newTChanIO
  -- 'join' runs each dequeued action; without it the actions would
  -- be read from the channel and silently discarded.
  _ <- forkIO $ forever $ join $ atomically $ readTChan chan
  return chan

onCommit :: IO () -> STM ()
onCommit = writeTChan onCommitChan

It would be cool if an onCommit hook could be added directly to the stm package.

Silvio

[1] https://hackage.haskell.org/package/stm-io-hooks-1.0.1
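A minimal usage sketch (assuming the hack above, with Control.Concurrent.STM in scope): because the writeTChan only takes effect when the transaction commits, the hook fires once per committed transaction and never for an aborted or retried one:

bump :: TVar Int -> IO ()
bump counter = atomically $ do
  modifyTVar' counter (+ 1)
  onCommit (putStrLn "transaction committed")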
Categories: Offsite Discussion

Danny Gratzer: A Basic Tutorial on JonPRL

Planet Haskell - Sun, 07/05/2015 - 6:00pm
Posted on July 6, 2015 Tags: jonprl, types

I was just over at OPLSS for the last two weeks. While there I finally met Jon Sterling in person. What was particularly fun is that for the last few months he's been creating a proof assistant called JonPRL in the spirit of Nuprl. As it turns out, it's quite a fun project to work on so I've implemented a few features in it over the last couple of days and learned more or less how it works.

Since there’s basically no documentation on it besides the readme and of course the compiler so I thought I’d write down some of the stuff I’ve learned. There’s also a completely separate post on the underlying type theory for Nuprl and JonPRL that’s very interesting in its own right but this isn’t it. Hopefully I’ll get around to scribbling something about that because it’s really quite clever.

Here’s the layout of this tutorial

  • First we start with a whirlwind tutorial. I’ll introduce the basic syntax and we’ll go through some simple proofs together
  • I’ll then dive into some of the rational behind JonPRL’s theory. This should help you understand why some things work how they do
  • I’ll show off a few of JonPRL’s more unique features and (hopefully) interest you enough to start fiddling on your own
Getting JonPRL

JonPRL is pretty easy to build and install and having it will make this post more enjoyable. You’ll need smlnj since JonPRL is currently written in SML. This is available in most package managers (including homebrew) otherwise just grab the binary from the website. After this the following commands should get you a working executable

  • git clone ssh://git@github.com/jonsterling/jonprl
  • cd jonprl
  • git submodule init
  • git submodule update
  • make (This is excitingly fast to run)
  • make test (If you’re doubtful)

You should now have an executable called jonprl in the bin folder. There’s no prelude for jonprl so that’s it. You can now just feed it files like any reasonable compiler and watch it spew (currently difficult-to-decipher) output at you.

If you’re interested in actually writing JonPRL code, you should probably install David Christiansen’s Emacs mode. Now that we’re up and running, let’s actually figure out how the language works

The Different Languages in JonPRL

JonPRL is really composed of 3 different mini-languages

  • The term language
  • The tactic language
  • The language of commands to the proof assistant

In Coq, these roughly correspond to Gallina, Ltac, and Vernacular respectively.

The Term Language

The term language is an untyped language that contains a number of constructs that should be familiar to people who have been exposed to dependent types before. The actual concrete syntax is composed of 3 basic forms:

  • We can apply an “operator” (I’ll clarify this in a moment) with op(arg1; arg2; arg3).
  • We have variables with x.
  • And we have abstraction with x.e. JonPRL has one construct for binding, x.e, built into its syntax, which things like λ or Π are built on top of.

An operator in this context is really anything you can imagine having a node in an AST for a language. So something like λ is an operator, as is if or pair (corresponding to (,) in Haskell). Each operator has a piece of information associated with it, called its arity. This arity tells you how many arguments an operator takes and how many variables x.y.z. ... each is allowed to bind. For example, the arity of λ is written (1) since it takes 1 argument which binds 1 variable. Application (ap) has the arity (0; 0). It takes 2 arguments, neither of which binds a variable.

So as mentioned we have functions and application. This means we could write (λx.x) y in JonPRL as ap(λ(x.x); y). The type of functions is written with Π. Remember that JonPRL’s language has a notion of dependence so the arity is (0; 1). The construct Π(A; x.B) corresponds to (x : A) → B in Agda or forall (x : A), B in Coq.

We also have dependent sums as well (Σs). In Agda you would write (M , N) to introduce a pair and Σ A λ x → B to type it. In JonPRL you have pair(M; N) and Σ(A; x.B). To inspect a Σ we have spread, which lets us eliminate a pair; it has arity (0; 2), so you give it a Σ in the first spot and x.y.e in the second. It'll then replace x with the first component and y with the second. Can you think of how to write fst and snd with this?
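(For intuition, a Haskell analogue of spread, with the pair in the first spot and the two-variable binder in the second; I'll leave fst and snd as the puzzle above:)

-- Eliminate a pair by binding both of its components in a continuation.
spread :: (a, b) -> (a -> b -> c) -> c
spread (a, b) k = k a b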

There’s sums, so inl(M), inr(N) and +(A; B) corresponds to Left, Right, and Either in Haskell. For case analysis there’s decide which has the arity (0; 1; 1). You should read decide(M; x.N; y.P) as something like

case M of
  Left x  -> N
  Right y -> P

In addition we have unit and <> (pronounced axe for axiom usually). Neither of these takes any arguments so we write them just as I have above. They correspond to Haskell’s type- and value-level () respectively. Finally there’s void which is sometimes called false or empty in theorem prover land.

You’ll notice that I presented a bunch of types as if they were normal terms in this section. That’s because in this untyped computation system, types are literally just terms. There’s no typing relation to distinguish them yet so they just float around exactly as if they were λ or something! I call them types because I’m thinking of later when we have a typing relation built on top of this system but for now there are really just terms. It was still a little confusing for me to see Π(unit; _.unit) in a language without types, so I wanted to make this explicit.

Now we can introduce some more exotic terms. Later, we're going to construct some rules around them that are going to make them behave the way we might expect, but for now they are just suggestively named constants.

  • U{i}, the ith level universe used to classify all types that can be built using types other than U{i} or higher. It’s closed under terms like Π and it contains all the types of smaller universes
  • =(0; 0; 0) this is equality between two terms at a type. It’s a proposition that’s going to precisely mirror what’s going on later in the type theory with the equality judgment
  • ∈(0; 0) this is just like = but internalizes membership in a type into the system. Remember that normally “This has that type” is a judgment but with this term we’re going to have a propositional counterpart to use in theorems.

In particular it’s important to distinguish the difference between ∈ the judgment and ∈ the term. There’s nothing inherent in ∈ above that makes it behave like a typing relation as you might expect. It’s on equal footing with flibbertyjibberty(0; 0; 0).

This term language contains the full untyped lambda calculus so we can write all sorts of fun programs like

λ(f.ap(λ(x.ap(f;(ap(x;x)))); λ(x.ap(f;(ap(x;x))))))

which is just the Y combinator. In particular this means that there’s no reason that every term in this language should normalize to a value. There are plenty of terms in here that diverge and in principle, there’s nothing that rules out them doing even stranger things than that. We really only depend on them being deterministic, that e ⇒ v and e ⇒ v' implies that v = v'.
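(Untypeability is really the point here. To even write this term in Haskell you have to smuggle the self-application past the type checker with a recursive newtype; this is a standard trick, shown only to contrast with JonPRL's term language, where no such ceremony is needed:)

newtype Rec a = Roll { unroll :: Rec a -> a }

-- The Y combinator: y f computes a fixed point of f.
y :: (a -> a) -> a
y f = (\x -> f (unroll x x)) (Roll (\x -> f (unroll x x)))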

Tactics

The other big language in JonPRL is the language of tactics. Luckily, this is very familiar territory if you're a Coq user. Unluckily, if you've never heard of Coq's tactic mechanism this will seem completely alien. As a quick high level idea for what tactics are:

When we’re proving something in a proof assistant we have to deal with a lot of boring mechanical details. For example, when proving A → B → A I have to describe that I want to introduce the A and the B into my context, then I have to suggest using that A the context as a solution to the goal. Bleh. All of that is pretty obvious so let’s just get the computer to do it! In fact, we can build up a DSL of composable “proof procedures” or /tactics/ to modify a particular goal we’re trying to prove so that we don’t have to think so much about the low level details of the proof being generated. In the end this DSL will generate a proof term (or derivation in JonPRL) and we’ll check that so we never have to trust the actual tactics to be sound.

In Coq this is used to great effect. In particular see Adam Chlipala’s book to see incredibly complex theorems with one-line proofs thanks to tactics.

In JonPRL the tactic system works by modifying a sequent of the form H ⊢ A (a goal). Each time we run a tactic we get back a list of new goals to prove until eventually we get to trivial goals which produce no new subgoals. This means that when trying to prove a theorem in the tactic language we never actually see the resulting evidence generated by our proof. We just see this list of H ⊢ As to prove and we do so with tactics.

The tactic system is quite simple. To start, we have a number of basic tactics which are useful no matter what goal you're attempting to prove (a toy Haskell model of these combinators follows the list).

  • id a tactic which does nothing
  • t1; t2 this runs the t1 tactic and runs t2 on any resulting subgoals
  • *{t} this runs t as long as t does something to the goal. If t ever fails for whatever reason it merely stops running, it doesn’t fail itself
  • ?{t} tries to run t once. If t fails nothing happens
  • !{t} runs t and if t does anything besides complete the proof it fails. This means that !{id} for example will always fail.
  • t1 | t2 runs t1 and if it fails it runs t2. Only one of the effects for t1 and t2 will be shown.
  • t; [t1, ..., tn] first runs t and then runs tactic ti on the ith subgoal generated by t
  • trace "some words" will print some words to standard out. This is useful when trying to figure out why things haven’t gone your way.
  • fail is the opposite of id, it just fails. This is actually quite useful for forcing backtracking and one could probably implement a makeshift !{} as t; fail.
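Here is that toy model in Haskell: a goal is opaque, and a tactic maps a goal to Nothing (failure) or a list of subgoals. This is only a sketch of the semantics described above, not JonPRL's implementation:

type Tactic g = g -> Maybe [g] -- Nothing = failure

idT, failT :: Tactic g
idT g   = Just [g] -- id: one subgoal, unchanged
failT _ = Nothing  -- fail

-- t1; t2 : run t1, then t2 on every resulting subgoal
thenT :: Tactic g -> Tactic g -> Tactic g
thenT t1 t2 g = t1 g >>= fmap concat . mapM t2

-- t1 | t2 : try t1, fall back to t2 on failure
orT :: Tactic g -> Tactic g -> Tactic g
orT t1 t2 g = maybe (t2 g) Just (t1 g)

-- ?{t} : try t once; if it fails, leave the goal alone
tryT :: Tactic g -> Tactic g
tryT t = t `orT` idT

-- *{t} : repeat t for as long as it does something to the goal
repeatT :: Eq g => Tactic g -> Tactic g
repeatT t g = case t g of
  Just gs | gs /= [g] -> fmap concat (mapM (repeatT t) gs)
  _                   -> Just [g]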

It’s helpful to see this as a sort of tree, a tactic takes one goal to a list of a subgoals to prove so we can imagine t as this part of a tree

       H
——————————————— (t)
H'   H''   H'''

If we have some tactic t2 then t; t2 will run t and then run t2 on H', H'', and H'''. Instead we could have t; [t1, t2, t3]; then we'll run t and (assuming it succeeds) we'll run t1 on H', t2 on H'', and t3 on H'''. This is actually how things work under the hood, composable fragments of trees :)

Now those give us a sort of bedrock for building up scripts of tactics. We also have a bunch of tactics that actually let us manipulate things we’re trying to prove. The 4 big ones to be aware of are

  • intro
  • elim #NUM
  • eq-cd
  • mem-cd

The basic idea is that intro modifies the A part of the goal. If we’re looking at a function, so something like H ⊢ Π(A; x.B), this will move that A into the context, leaving us with H, x : A ⊢ B.

If you’re familiar with sequent calculus intro runs the appropriate right rule for the goal. If you’re not familiar with sequent calculus intro looks at the outermost operator of the A and runs a rule that applies when that operator is to the right of a the ⊢.

Now one tricky case is what should intro do if you're looking at a Σ? Well now things get a bit dicey. We might expect to get two subgoals if we run intro on H ⊢ Σ(A; x.B), one which proves H ⊢ A and one which proves H ⊢ B or something, but what about the fact that x.B depends on the underlying realizer (that's the program extracted from the proof) of H ⊢ A! Further, Nuprl and JonPRL are based around extract-style proof systems. This means that a goal shouldn't depend on the particular piece of evidence proving another goal. So instead we have to tell intro up front what we want the evidence for H ⊢ A to be, so that the H ⊢ B section may use it.

To do this we just give intro an argument. For example say we're proving that · ⊢ Σ(unit; x.unit), we run intro [<>] which gives us two subgoals · ⊢ ∈(<>; unit) and · ⊢ unit. Here the []s let us denote the realizer we're passing to intro. In general any term arguments to a tactic will be wrapped in []s. So the first goal says "OK, you said that this was your realizer for unit, but is it actually a realizer for unit?" and the second goal substitutes the given realizer into the second argument of Σ, x.unit, and asks us to prove that. Notice how here we have to prove ∈(<>; unit)? This is where that weird ∈ type comes in handy. It lets us sort of play type checker and guide JonPRL through the process of type checking. This is actually very crucial since type checking in Nuprl and JonPRL is undecidable.

Now how do we actually go about proving ∈(<>; unit)? Well here mem-cd has got our back. This tactic transforms ∈(A; B) into the equivalent form =(A; A; B). In JonPRL and Nuprl, types are given meaning by how we interpret the equality of their members. In other words, if you give me a type you have to say

  1. What canonical terms are in that type
  2. What it means for two canonical members to be equal

Long ago, Stuart Allen realized we could combine the two by specifying a partial equivalence relation for a type. In this case rather than having a separate notion of membership we check to see if something is equal to itself under the PER, because when it is, that PER behaves like a normal equivalence relation! So in JonPRL ∈ is actually just a very thin layer of sugar around = which is really the core defining notion of typehood. To handle = we have eq-cd which does clever things to handle most of the obvious cases of equality.
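(A toy Haskell rendering of that idea, just to fix intuitions; this is nothing like JonPRL's actual implementation:)

-- A type, semantically, is a partial equivalence relation on untyped
-- terms: symmetric and transitive, but not necessarily reflexive at
-- every term.
type PER term = term -> term -> Bool

-- Membership is derived from equality rather than primitive:
-- t is a member of the type exactly when t = t under its PER.
member :: PER term -> term -> Bool
member eq t = eq t t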

Finally, we have elim. Just like intro lets us simplify things on the right of the ⊢, elim lets us eliminate something on the left. So we tell elim to "eliminate" the nth item in the context (they're numbered when JonPRL prints them) with elim #n.

Just like with anything, it’s hard to learn all the tactics without experimenting (though a complete list can be found with jonprl --list-tactics). Let’s go look at the command language so we can actually prove some theorems.

Commands

So in JonPRL there are only 4 commands you can write at the top level

  • Operator
  • [oper] =def= [term] (A definition)
  • Tactic
  • Theorem

The first three of these let us customize and extend the basic suite of operators and tactics JonPRL comes with. The last actually lets us state and prove theorems.

The best way to see these things is by example so we're going to build up a small development in JonPRL. We're going to show that products form a monoid with unit, up to some logical equivalence. There are a lot of proofs involved here

  1. prod(unit; A) entails A
  2. prod(A; unit) entails A
  3. A entails prod(unit; A)
  4. A entails prod(A; unit)
  5. prod(A; prod(B; C)) entails prod(prod(A; B); C)
  6. prod(prod(A; B); C) entails prod(A; prod(B; C))

I intend to prove 1, 2, and 5. The remaining proofs are either very similar or fun puzzles to work on. We could also prove that all the appropriate entailments are inverses and then we could say that everything is up to isomorphism.

First we want a new snazzy operator to signify nondependent products since writing Σ(A; x.B) is kind of annoying. We do this using Operator

Operator prod : (0; 0).

This line declares prod as a new operator which takes two arguments binding zero variables each. Now we really want JonPRL to know that prod is sugar for Σ. To do this we use =def= which gives us a way to desugar a new operator into a mess of existing ones.

[prod(A; B)] =def= [Σ(A; _.B)].

Now we can exchange any occurrence of prod(A; B) for Σ(A; _.B) as we'd like. Okay, so we want to prove that we have a monoid here. What's the first step? Let's verify that unit is a left identity for prod. This entails proving that for all types A, prod(unit; A) ⊃ A and A ⊃ prod(unit; A). Let's prove these as separate theorems. Translating our first statement into JonPRL we want to prove

Π(U{i}; A. Π(prod(unit; A); _. A))

In Agda notation this would be written

(A : Set) → (_ : prod(unit; A)) → A

Let’s prove our first theorem, we start by writing

Theorem left-id1 : [Π(U{i}; A. Π(prod(unit; A); _. A))] { id }.

This is the basic form of a theorem in JonPRL, a name, a term to prove, and a tactic script. Here we have id as a tactic script, which clearly doesn’t prove our goal. When we run JonPRL on this file (C-c C-l if you’re in Emacs) you get back

[XXX.jonprl:8.3-9.1]: tactic 'COMPLETE' failed with goal:
⊢ ΠA ∈ U{i}. (prod(unit; A)) => A

Remaining subgoals:
⊢ ΠA ∈ U{i}. (prod(unit; A)) => A

So focus on that Remaining subgoals bit, that's what we have left to prove, it's our current goal. Now you may notice that this outputted goal is a lot prettier than our syntax! That's because currently in JonPRL the input and outputted terms may not match, the latter is subject to pretty printing. In general this is great because you can read your remaining goals, but it does mean copying and pasting is a bother. There's nothing to the left of that ⊢ yet so let's run the only applicable tactic we know. Delete that id and replace it with

{ intro }.

The goal now becomes

Remaining subgoals:

1. A : U{i}
⊢ (prod(unit; A)) => A

⊢ U{i} ∈ U{i'}

Two ⊢s means two subgoals now. One looks pretty obvious, U{i'} is just the universe above U{i} (so that’s like Set₁ in Agda) so it should be the case that U{i} ∈ U{i'} by definition! So the next tactic should be something like [???, mem-cd; eq-cd]. Now what should that ??? be? Well we can’t use elim because there’s one thing in the context now (A : U{i}), but it doesn’t help us really. Instead let’s run unfold <prod>. This is a new tactic that’s going to replace that prod with the definition that we wrote earlier.

{ intro; [unfold <prod>, mem-cd; eq-cd] }

Notice here that , binds less tightly than ; which is useful for saying stuff like this. This gives us

Remaining subgoals:

1. A : U{i}
⊢ (unit × A) => A

We run intro again

{ intro; [unfold <prod>, mem-cd; eq-cd]; intro }

Now we are in a similar position to before with two subgoals.

Remaining subgoals:

1. A : U{i}
2. _ : unit × A
⊢ A

1. A : U{i}
⊢ unit × A ∈ U{i}

The first subgoal is really what we want to be proving so let’s put a pin in that momentarily. Let’s get rid of that second subgoal with a new helpful tactic called auto. It runs eq-cd, mem-cd and intro repeatedly and is built to take care of boring goals just like this!

{ intro; [unfold <prod>, mem-cd; eq-cd]; intro; [id, auto] }

Notice that we used what is a pretty common pattern in JonPRL, to work on one subgoal at a time we use []’s and ids everywhere except where we want to do work, in this case the second subgoal.

Now we have

Remaining subgoals:

1. A : U{i}
2. _ : unit × A
⊢ A

Cool! Having a pair of unit × A really ought to mean that we have an A so we can use elim to get access to it.

{ intro; [unfold <prod>, mem-cd; eq-cd]; intro; [id, auto]; elim #2 }

This gives us

Remaining subgoals:

1. A : U{i}
2. _ : unit × A
3. s : unit
4. t : A
⊢ A

We’ve really got the answer now, #4 is precisely our goal. For this situations there’s assumption which is just a tactic which succeeds if what we’re trying to prove is in our context already. This will complete our proof

Theorem left-id1 : [Π(U{i}; A. Π(prod(unit; A); _. A))] {
  intro; [unfold <prod>, mem-cd; eq-cd]; intro; [id, auto];
  elim #2; assumption
}.

Now we know that auto will run all of the tactics on the first line except unfold <prod>, so what if we just unfold <prod> first and run auto? It ought to do all the same stuff. Indeed we can shorten our whole proof to unfold <prod>; auto; elim #2; assumption. With this more heavily automated proof, proving our next theorem becomes easy.

Theorem right-id1 : [Π(U{i}; A. Π(prod(A; unit); _. A))] { unfold <prod>; auto; elim #2; assumption }.

Next, we have to prove associativity to complete the development that prod is a monoid. The statement here is a bit more complex.

Theorem assoc : [Π(U{i}; A. Π(U{i}; B. Π(U{i}; C. Π(prod(A; prod(B;C)); _. prod(prod(A;B); C)))))] { id }.

In Agda notation what I’ve written above is

assoc : (A B C : Set) → A × (B × C) → (A × B) × C
assoc = ?

Let’s kick things off with unfold <prod>; auto to deal with all the boring stuff we had last time. In fact, since x appears in several nested places we’d have to run unfold quite a few times. Let’s just shorten all of those invocations into *{unfold <prod>}

{ *{unfold <prod>}; auto }

This leaves us with the state

Remaining subgoals:

1. A : U{i}
2. B : U{i}
3. C : U{i}
4. _ : A × B × C
⊢ A

1. A : U{i}
2. B : U{i}
3. C : U{i}
4. _ : A × B × C
⊢ B

1. A : U{i}
2. B : U{i}
3. C : U{i}
4. _ : A × B × C
⊢ C

In each of those goals we need to take apart the 4th hypothesis so let’s do that

{ *{unfold <prod>}; auto; elim #4 }

This leaves us with 3 subgoals still

1. A : U{i}
2. B : U{i}
3. C : U{i}
4. _ : A × B × C
5. s : A
6. t : B × C
⊢ A

1. A : U{i}
2. B : U{i}
3. C : U{i}
4. _ : A × B × C
5. s : A
6. t : B × C
⊢ B

1. A : U{i}
2. B : U{i}
3. C : U{i}
4. _ : A × B × C
5. s : A
6. t : B × C
⊢ C

The first subgoal is pretty easy, assumption should handle that. In the other two we want to eliminate 6 and then we should be able to apply assumption. In order to deal with this we use | to encode that disjunction. In particular we want to run assumption OR elim #6; assumption leaving us with

{ *{unfold <prod>}; auto; elim #4; (assumption | elim #6; assumption) }

This completes the proof!

Theorem assoc : [Π(U{i}; A. Π(U{i}; B. Π(U{i}; C. Π(prod(A; prod(B;C)); _. prod(prod(A;B); C)))))] {
  *{unfold <prod>}; auto; elim #4; (assumption | elim #6; assumption)
}.

As a fun puzzle, what needs to change in this proof to prove we can associate the other way?

What on earth did we just do!?

So we just proved a theorem... but what really just happened? I mean how did we go from "Here we have an untyped computation system with types just behaving as normal terms" to "Now apply auto and we're done!"? In this section I'd like to briefly sketch the path from untyped computation to theorems.

The path looks like this

  • We start with our untyped language and its notion of computation

    We already discussed this in great depth before.

  • We define a judgment a = b ∈ A.

    This is a judgment, not a term in that language. It exists in whatever metalanguage we’re using. This judgment is defined across 3 terms in our untyped language (I’m only capitalizing A out of convention). This is supposed to represent that a and b are equal elements of type A. This also gives meaning to typehood: something is a type in CTT precisely when we know what the partial equivalence relation defined by - = - ∈ A on canonical values is.

    Notice here that I said partial. It isn’t the case that a = b ∈ A presupposes that we know that a : A and b : A because we don’t have a notion of : yet!

In some sense this is where we depart from a type theory like Coq's or Agda's. We have programs already, and on top of them we define this 3 part judgment which interacts with computation in a few ways I'm not specifying. In Coq, we would specify one notion of equality, generic over all types, and separately specify a typing relation.

  • From here we can define the normal judgments of Martin-Löf's type theory. For example, a : A is a = a ∈ A. We recover the judgment A type with A = A ∈ U (where U here is a universe).

This means that inhabiting a universe A = A ∈ U, isn’t necessarily inductively defined but rather negatively generated. We specify some condition a term must satisfy to occupy a universe.

Hypothetical judgments are introduced in the same way they would be in Martin-Löf's presentations of type theory. The idea being that H ⊢ J if J is evident under the assumption that each term in H has the appropriate type and furthermore that J is functional (respects equality) with respect to what H contains. This isn't really a higher order judgment, but it will be defined in terms of a higher order hypothetical judgment in the metatheory.

With this we have something that walks and quacks like normal type theory. Using the normal tools of our metatheory we can formulate proofs of a : A and do normal type theory things. This whole development is building up what is called "Computational Type Theory". The way this diverges from Martin-Löf's extensional type theory is subtle, but it does directly descend from Martin-Löf's famous 1979 paper "Constructive Mathematics and Computer Programming" (which you should read instead of my blog post).

Now there’s one final layer we have to consider, the PRL bit of JonPRL. We define a new judgment, H ⊢ A [ext a]. This is judgment is cleverly set up so two properties hold

  • H ⊢ A [ext a] should entail that H ⊢ a : A or H ⊢ a = a ∈ A
  • In H ⊢ A [ext a], a is an output and H and A are inputs. In particular, this implies that in any inference for this judgment, the subgoals may not use a in their H and A.

This means that a is completely determined by H and A which justifies my use of the term output. I mean this in the sense of Twelf and logic programming if that’s a more familiar phrasing. It’s this judgment that we see in JonPRL! Since that a is output we simply hide it, leaving us with H ⊢ A as we saw before. When we prove something with tactics in JonPRL we’re generating a derivation, a tree of inference rules which make H ⊢ A evident for our particular H and A! These rules aren’t really programs though, they don’t correspond one to one with proof terms we may run like they would in Coq. The computational interpretation of our program is bundled up in that a.

To see what I mean here we need a little bit more machinery. Specifically, let’s look at the rules for the equality around the proposition =(a; b; A). Remember that we have a term <> lying around,

      a = b ∈ A
————————————————————
<> = <> ∈ =(a; b; A)

So the only member of =(a; b; A) is <> if a = b ∈ A actually holds. First off, notice that <> : A and <> : B doesn’t imply that A = B! In another example, λ(x. x) ∈ Π(A; _.A) for all A! This is a natural consequence of separating our typing judgment from our programming language. Secondly, there’s not really any computation in the e of H ⊢ =(a; b; A) (e). After all, in the end the only thing e could be so that e : =(a; b; A) is <>! However, there is potentially quite a large derivation involved in showing =(a; b; A)! For example, we might have something like this

x : =(A; B; U{i}); y : =(b; a; A) ⊢ =(a; b; B)
———————————————————————————————————————————————— Substitution
x : =(A; B; U{i}); y : =(b; a; A) ⊢ =(a; b; A)
———————————————————————————————————————————————— Symmetry
x : =(A; B; U{i}); y : =(b; a; A) ⊢ =(b; a; A)
———————————————————————————————————————————————— Assumption

Now we write derivations of this sequent upside down, so the thing we want to show starts on top and we write each rule application and subgoal below it (AI people apparently like this?). Now this was quite a derivation, but if we fill in the missing [ext e] for this derivation from the bottom up we get this

x : =(A; B; U{i}); y : =(b; a; A) ⊢ =(a; b; B)
———————————————————————————————————————————————— Substitution [ext <>]
x : =(A; B; U{i}); y : =(b; a; A) ⊢ =(a; b; A)
———————————————————————————————————————————————— Symmetry [ext <>]
x : =(A; B; U{i}); y : =(b; a; A) ⊢ =(b; a; A)
———————————————————————————————————————————————— Assumption [ext x]

Notice how at the bottom there was some computational content (that x signifies that we're accessing a variable in our context) but then we throw it away right on the next line! That's because we find that no matter what the extract was that lets us derive =(b; a; A), the only realizer it could possibly generate is <>. Remember our conditions: if we can make evident the fact that b = a ∈ A then <> ∈ =(b; a; A). Because we somehow managed to prove that b = a ∈ A holds, we're entitled to just use <> to realize our proof. This means that despite our somewhat tedious derivation and the bookkeeping that we had to do to generate that program, that program reflects none of it.

This is why type checking in JonPRL is woefully undecidable: in part, the realizers that we want to type check contain none of the helpful hints that proof terms in Coq would. This also means that extraction from JonPRL proofs is built right into the system and we can actually generate cool and useful things! In Nuprl-land, folks at Cornell actually write proofs and use these realizers to run real software. From what Bob Constable said at OPLSS they can actually get these programs to run fast (within 5x of naive C code).

So to recap, in JonPRL we

  • See H ⊢ A
  • Use tactics to generate a derivation of this judgment
  • Once this derivation is generated, we can extract the computational content as a program in our untyped system

In fact, we can see all of this happen if you call JonPRL from the command line or hit C-c C-c in emacs! On our earlier proof we see

Operator prod : (0; 0).
⸤prod(A; B)⸥ ≝ ⸤A × B⸥.

Theorem left-id1 : ⸤⊢ ΠA ∈ U{i}. (prod(unit; A)) => A⸥ {
  fun-intro(A.fun-intro(_.prod-elim(_; _.t.t); prod⁼(unit⁼; _.hyp⁼(A))); U⁼{i})
} ext {
  λ_. λ_. spread(_; _.t.t)
}.

Theorem right-id1 : ⸤⊢ ΠA ∈ U{i}. (prod(A; unit)) => A⸥ {
  fun-intro(A.fun-intro(_.prod-elim(_; s._.s); prod⁼(hyp⁼(A); _.unit⁼)); U⁼{i})
} ext {
  λ_. λ_. spread(_; s._.s)
}.

Theorem assoc : ⸤⊢ ΠA ∈ U{i}. ΠB ∈ U{i}. ΠC ∈ U{i}. (prod(A; prod(B; C))) => prod(prod(A; B); C)⸥ {
  fun-intro(A.fun-intro(B.fun-intro(C.fun-intro(_.independent-prod-intro(independent-prod-intro(prod-elim(_; s.t.prod-elim(t; _._.s)); prod-elim(_; _.t.prod-elim(t; s'._.s'))); prod-elim(_; _.t.prod-elim(t; _.t'.t'))); prod⁼(hyp⁼(A); _.prod⁼(hyp⁼(B); _.hyp⁼(C)))); U⁼{i}); U⁼{i}); U⁼{i})
} ext {
  λ_. λ_. λ_. λ_. ⟨⟨spread(_; s.t.spread(t; _._.s)), spread(_; _.t.spread(t; s'._.s'))⟩, spread(_; _.t.spread(t; _.t'.t'))⟩
}.

Now we can see that those Operator and ≝ bits are really what we typed with Operator and =def= in JonPRL; what's interesting here are the theorems. There are two bits, the derivation and the extract or realizer.

{ derivation of the sequent · ⊢ A }
ext
{ the program in the untyped system extracted from our derivation }

We can move that derivation into a different proof assistant and check it. This gives us all the information we need to check JonPRL's reasoning, and it means we don't have to trust all of JonPRL (I wrote some of it so I'd be a little scared to trust it :). We can also see the computational bit of our proof in the extract. For example, the computation involved in taking A × unit → A is just λ_. λ_. spread(_; s._.s)! This is probably closer to what you've seen in Coq or Idris, even though I'd say the derivation is probably more similar in spirit (just ugly and beta normal). That's because the extract need not have any notion of typing or proof, it's just the computation needed to produce a witness of the appropriate type. This means for a really tricky proof of equality, your extract might just be <>! Your derivation however will always exactly reflect the complexity of your proof.

Killer features

OK, so I’ve just dumped about 50 years worth of hard research in type theory into your lap which is best left to ruminate for a bit. However, before I finish up this post I wanted to do a little bit of marketing so that you can see why one might be interested in JonPRL (or Nuprl). Since we’ve embraced this idea of programs first and types as PERs, we can define some really strange types completely seamlessly. For example, in JonPRL there’s a type ⋂(A; x.B), it behaves a lot like Π but with one big difference, the definition of - = - ∈ ⋂(A; x.B) looks like this

a : A ⊢ e = e' ∈ [a/x]B
————————————————————————
  e = e' ∈ ⋂(A; x.B)

Notice here that e and e' may not use a anywhere in their bodies. That is, they have to be in [a/x]B without knowing anything about a and without even having access to it.

This is a pretty alien concept that turned out to be new in logic as well (it's called "uniform quantification" I believe). It turns out to be very useful in PRLs because it lets us declare things in our theorems without having them propagate into our witness. For example, we could have said

Theorem right-id1 : [⋂(U{i}; A. Π(prod(A; unit); _. A))] { unfold <prod>; auto; elim #2; assumption }.

With the observation that our realizer doesn’t need to depend on A at all (remember, no types!). Then the extract of this theorem is

λx. spread(x; s._.s)

There’s no spurious λ _. ... at the beginning! Even more wackily, we can define subsets of an existing type since realizers need not have unique types

e = e' ∈ A    [e/x]P    [e'/x]P
————————————————————————————————
    e = e' ∈ subset(A; x.P)

And in JonPRL we can now say things like “all odd numbers” by just saying subset(nat; n. ap(odd; n)). In intensional type theories, these types are hard to deal with and still the subject of open research. In CTT they just kinda fall out because of how we thought about types in the first place. Quotients are a similarly natural conception (just define a new type with a stricter PER) but JonPRL currently lacks them (though they shouldn’t be hard to add..).
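(Continuing the toy PER rendering from earlier, the subset former above just intersects the type's PER with the predicate:)

subsetPER :: PER term -> (term -> Bool) -> PER term
subsetPER eq p e e' = eq e e' && p e && p e'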

Finally, if you’re looking for one last reason to dig into **PRL, the fact that we’ve defined all our equalities extensionally means that several very useful facts just fall right out of our theory

Theorem fun-ext : [⋂(U{i}; A. ⋂(Π(A; _.U{i}); B. ⋂(Π(A; a.ap(B;a)); f. ⋂(Π(A; a.ap(B;a)); g. ⋂(Π(A; a.=(ap(f; a); ap(g; a); ap(B; a))); _. =(f; g; Π(A; a.ap(B;a))))))))] {
  auto; ext; ?{elim #5 [a]}; auto
}.

This means that two functions are equal in JonPRL if and only if they map equal arguments to equal output. This is quite pleasant for formalizing mathematics for example.

Wrap Up

Whew, we went through a lot! I didn’t intend for this to be a full tour of JonPRL, just a taste of how things sort of hang together and maybe enough to get you looking through the examples. Speaking of which, JonPRL comes with quite a few examples which are going to make a lot more sense now.

Additionally, you may be interested in the documentation in the README which covers most of the primitive operators in JonPRL. As for an exhaustive list of tactics, well….

Hopefully I’ll be writing about JonPRL again soon. Until then, I hope you’ve learned something cool :)

A huge thanks to David Christiansen and Jon Sterling for tons of helpful feedback on this

Categories: Offsite Blogs

ANN: stack-0.1.2.0

Haskell on Reddit - Sun, 07/05/2015 - 5:31pm

New release of stack, a build tool. (Cross-post from the official mailing list)

Changes: https://github.com/commercialhaskell/stack/releases/tag/v0.1.2.0

General information and download links (now with package repositories for CentOS, Fedora, and Debian): https://github.com/commercialhaskell/stack#readme

If you already have stack-0.1.1.0, you can upgrade using: stack upgrade --git && stack upgrade

[normally just 'stack upgrade' will suffice, but due to my flub in the last release we ended up with a stack-9.9.9 on hackage which can't be removed (see https://github.com/haskell/hackage-server/issues/382), and you have to get the stack version that knows to ignore it first]

submitted by akurilin
[link] [4 comments]
Categories: Incoming News

Posted code snippet for Ex. 12 of Ch. 5 of "Fun w/Phantom Types".

haskell-cafe - Sun, 07/05/2015 - 4:52pm
Hi all, I’ve spent the last few days working on Exercise 12, at the end of Chapter 5 of Ralf Hinze’s paper, “Fun with Phantom Types”. (Thanks, Conal, for all the help!) I thought I’d share my code, in case anyone else happens to be in the same Haskell space-time as me, right now. :) https://github.com/capn-freako/Haskell_Misc/blob/master/norm_by_eval.hs I hope everyone is enjoying their weekend. Cheers, -db
Categories: Offsite Discussion

Method for randomly selecting an item from a list

Haskell on Reddit - Sun, 07/05/2015 - 2:45pm

Hello /r/haskell,

I'm using a Haskell approach for random selection of terms from a sum type. I very blatantly copied from fpcomplete's randomization tutorial.

Here it is:

import System.Random
import qualified Data.Text as Text
import qualified Data.Text.IO as Text

data Victim = Bob | Joe | Henry | Deepak | Carlo
  deriving (Show, Enum, Bounded)

instance Random Victim where
  randomR (a, b) g = case randomR (fromEnum a, fromEnum b) g of
    (x, g') -> (toEnum x, g')
  random g = randomR (minBound, maxBound) g

main = do
  g <- newStdGen
  print $ (randoms g :: [Victim]) !! 0

After some time of using this to "choose a random victim" I "noticed" it leaning toward choosing Henry more often than the others. To get a better sample set (law of large numbers and all) I ran the program in a bash loop.

$ while (true); do runhaskell choose victim.hs >> log; done

I let this cook for a while until I got a total of 9399 entries.

And here's the 'histogram':

$ cat log | sort | uniq -c
1914 Henry
1868 Deepak
1901 Carlo
1826 Joe
1890 Bob

The whole way along, Henry was at the top. Is Henry "randomly" at the top of the list, or is there something fundamentally wrong with this approach, causing the third term of the sum type to come to the top?
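(An aside for context, not from the thread: under a fair 1-in-5 pick, each count is binomially distributed, and Henry's lead is well within expected noise. A quick check:)

-- Counts under a fair 1-in-5 choice are Binomial(9399, 1/5).
n, p :: Double
n = 9399
p = 1 / 5

mean, sd :: Double
mean = n * p                  -- ≈ 1879.8
sd   = sqrt (n * p * (1 - p)) -- ≈ 38.8

-- Henry's z-score: (1914 - 1879.8) / 38.8 ≈ 0.88 standard
-- deviations above the mean, i.e. unremarkable.
zHenry :: Double
zHenry = (1914 - mean) / sd

main :: IO ()
main = print zHenry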

submitted by nabokovian
[link] [4 comments]
Categories: Incoming News

Working with states in Haskell - I'm having decidophobia.

Haskell on Reddit - Sun, 07/05/2015 - 2:10pm

Have people already worked out some good patterns to follow (so that I won't have to reinvent the theory), or is there still a long way to go? Otherwise is it just like writing poems, where different people are supposed to have different aesthetics?

Let's say C++ is a knife that is so sharp it can cut everything easily, including your fingers. Then Haskell is a legendary sword: it won't hurt your fingers, but you feel guilty because you might not be good enough to be the hero.

In Haskell, you can make libraries of APIs, algorithms, design patterns, and anything.

But when I tried to write some bigger project in Haskell, I had difficulties making decisions on architecture. For realtime programs, no matter how you use functional programming, there are always states you cannot escape from. And when states are complex enough, encapsulation is needed to break them down into smaller parts.

For example:

When I use something that works like OOP, some people say: welcome to the new world, and this is not the way we do that. But no one actually tells me what the new way to do it is.

I also tried to use Arrows, and I feel scared. Among the code I read, there are quite few people using them. (It seems arrow-do is still a GHC extension.)

Some people say that FP should minimize the depth of the hierarchy of state encapsulation. But there are data structures that cannot be implemented in stateless ways. And replacing a binary-tree map with a hash map could break everything.

I could have avoided hard-coding with rigid types like IORef, STRef, or MVar by using monad transformers. But that doesn't always make things better understood.

In imperative languages, you just use "variables" and do not need to choose from different Monads or monad transformers.
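(To make the "too many choices" point concrete, here is the same counter bump written three ways with ordinary library functions; each is idiomatic in some context, and that is exactly the decision burden:)

import Control.Concurrent.STM (atomically, modifyTVar', newTVarIO, readTVarIO)
import Control.Monad.State (State, execState, modify)
import Data.IORef (modifyIORef', newIORef, readIORef)

bumpState :: State Int ()
bumpState = modify (+ 1) -- pure State: the "variable" is threaded invisibly

main :: IO ()
main = do
  print (execState bumpState 0)
  r <- newIORef (0 :: Int)  -- IORef: a plain mutable cell in IO
  modifyIORef' r (+ 1)
  print =<< readIORef r
  t <- newTVarIO (0 :: Int) -- TVar: a transactional cell
  atomically (modifyTVar' t (+ 1))
  print =<< readTVarIO t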

In Java, you can just use Java-styled OOP and lengthy syntax without thinking, because it's not your fault. (Sorry... I know Java has its practical uses, but I really think Java code is ugly.) In C++, there are tons of ways to write bad code, and few ways to write good ones. But you can tell them apart.

However when it comes to Haskell, there are so many things to decide. Whatever idiom I follow, whichever way I choose, there is always possibly a better way to do it, showing that I could have worked harder.

submitted by RnMss
[link] [11 comments]
Categories: Incoming News

Spinal Transplants

Haskell on Reddit - Sun, 07/05/2015 - 11:38am

Context is for the weak, let's jump right in!

λ let as = [1..]
λ null $ as
False
λ null $ reverse as
^CInterrupted.
λ let reverse' xs = assertSpine (reverse xs) xs
λ reverse' [1..10]
[10,9,8,7,6,5,4,3,2,1]
λ null $ reverse' as
False

Now, maybe you've never needed to check whether the reversal of an infinite list was empty, and that's ok.

But I found it interesting that for operations like reverse or sort, the structure of the output was identical to the structure of the input - the order of the elements changes, but the spine is the same shape. But the implementation of each of these doesn't take advantage of this - null (sort as) is O(n), not O(1).

So I came up with assertSpine, to transplant the spine of the input to the output using irrefutable patterns:

assertSpine :: [a] -> [b] -> [a]
assertSpine ~[]     []       = []
assertSpine ~(a:as) (_ : bs) = a : assertSpine as bs

(I chose the argument order so I could just use the Reader monad to do reverse' = assertSpine =<< reverse, sort' = assertSpine =<< sort, and id' = assertSpine =<< id, feel free to bikeshed).

Next I cranked out some tests with Criterion, using null and length to check the runtime for spine-strict characteristics and head and sum to check the runtime of fully strict characteristics:

import Control.Monad (replicateM)
import Criterion.Main (bench, defaultMain, whnf)
import Data.List (sort)
import System.Random (randomIO)

main = do
  as <- (seq =<< sum) <$> replicateM 10000 randomIO :: IO [Int]
  defaultMain $ do
    (gname, f, f') <-
      [ ("id", id, id')
      , ("reverse", reverse, reverse')
      , ("sort", sort, sort')
      ]
    (tname, t) <-
      [ ("null", whnf . (null.))
      , ("length", whnf . (length.))
      , ("head", whnf . (head.))
      , ("sum", whnf . (sum.))
      ]
    let name = tname ++ "." ++ gname
    [ bench name (t f as), bench (name ++ "'") $ t f' as ]

With 10K element lists (a number chosen by an extremely scientific method), I found:

  • id beat id' on all four measures (not surprising)

            | id                  | id'
    --------+---------------------+--------------------
    null    | 34.33 ns (2.065 ns) | 46.60 ns (2.282 ns)
    length  | 26.86 μs (2.488 μs) | 604.7 μs (34.27 μs)
    head    | 35.00 ns (2.029 ns) | 63.85 ns (3.665 ns)
    sum     | 403.4 μs (14.27 μs) | 2.535 ms (146.0 μs)
  • reverse beat reverse' on length and sum, but was indistinguishable on head, and lost on null

            | reverse             | reverse'
    --------+---------------------+--------------------
    null    | 81.77 μs (6.644 μs) | 46.43 ns (2.355 ns)
    length  | 100.0 μs (5.651 μs) | 607.1 μs (44.65 μs)
    head    | 81.04 μs (7.247 μs) | 79.33 μs (3.160 μs)
    sum     | 563.2 μs (33.22 μs) | 3.068 ms (173.3 μs)
  • sort lost to sort' on null and length, but was indistinguishable on head, and won on sum

            | sort                | sort'
    --------+---------------------+--------------------
    null    | 1.523 ms (83.32 μs) | 50.52 ns (2.642 ns)
    length  | 8.748 ms (555.0 μs) | 619.9 μs (46.58 μs)
    head    | 1.585 ms (254.3 μs) | 1.533 ms (107.6 μs)
    sum     | 10.46 ms (586.6 μs) | 14.97 ms (985.2 μs)

If I get time today, I might try to crank out the memory overhead, or take a look at the Core.

It's certainly not "free", but there are definitely times when a spinal transplant can be advantageous. It all depends on what you're going to do to the data.

EDIT: Another discussion of this technique can be found here, with /u/apfelmus's withShape.

submitted by rampion
[link] [18 comments]
Categories: Incoming News

[ANN] sql-fragment : Type safe SQL query combinator

Haskell on Reddit - Sun, 07/05/2015 - 10:36am

I'm finally releasing sql-fragment and its companion sql-fragment-mysql-simple.

This is my first published package so all comments (about code, design, etc.) are welcome. I haven't pushed it to hackage yet because I'm waiting for some feedback before doing so.

Overview

SQLFragment is a type safe SQL combinator based on the ideas that:

  • a SQL query is a monoid
  • joins can be deduced automatically from a join graph

SQLFragment's main intent is to make it easy to build complex queries by reusing and combining pre-made fragments (which can be typed or typeless). This is especially useful when building reporting tools, where a lot of queries are similar and the results are either tables or charts. In that case, query output can be used "raw" (i.e. a list of tuples or equivalent) and doesn't need to be mapped to any complex data type. Unlike many other SQL packages, which make it hard to combine combinators and raw strings, SQLFragment makes it easy to write raw SQL if needed. Its purpose is to help write queries quickly, not to make the developer's life hard. We trust the developer not to use "unsafe" strings.

SQLFragment also provides support for dimensional units, HList records and automatic fragment generation from a database. The fragment generation uses a space separated values file which can be generated from the database (see the corresponding backend).

For more details look at the Database.SQLFragment.SQLFragment and Database.SQLFragment.Operators modules.

Synopsis Example

Let's say we have a table of customers, products, and orders, joining a customer to n products. I want to display in table the list of the customer which ordered the product 'blue T-shirt'.

With SQLFragment, supposing I have defined email and blue fragments so that

>>> toSelectQuery email
"SELECT email FROM customers"

>>> toSelectQuery blue
"FROM product WHERE description = 'blue T-shirt'"

and the join graph has been properly set up in joins.

I can simply combine those two fragments using <>.

>>> toSelectQuery $ email <> blue !@! joins
"SELECT email FROM customers JOIN orders ON (customers.id = customer_id) JOIN products ON (products.id = product_id) WHERE products.description = 'blue T-shirt'"

submitted by maxigit
[link] [7 comments]
Categories: Incoming News

Development tools survey results

Haskell on Reddit - Sat, 07/04/2015 - 3:36pm

I made a summary of the survey that recently surfaced on this subreddit. In the process I cleaned up the entries a bit; I hope I did not mistakenly change anything in that process.

/u/acow emacs haskell-mode ghc-mod company-ghc hlint

/u/zorasterisk emacs haskell-mode

/u/I4dcQsEpLzTHvD1qhlDE vi

/u/ephrion vim ghc-mod syntastic hlint few other indentation/highlighting plugins sometimes ghcid running in a tmux

/u/zcleghern EclipseFP

/u/ranjitjhala atom linter-hdevtools hover-tooltips-hdevtools hasktags

/u/bheklilr Sublime Text 3 + SublimeHaskell ghc-mod hoogle stylish-haskell hlint misc others

/u/pycube emacs haskell-mode haskell-flycheck plain company-mode

/u/ekilek22 ghcid arion

/u/_skp Atom (with some plugins, but not ide-haskell) ghc-mod hasktags hoogle

/u/fractalsea IntelliJ + Haskforce ghc-mod hlint

/u/implicit_cast Atom ide-haskell

/u/alt_account10 vim hasktags ghcid hlint hoogle + hoogle-index

/u/lally emacs haskell-mode

/u/Peaker emacs haskell-mode ghci-ng

/u/cretan_bull emacs ghc-mod haskell-mode haskell-indentation-mode company-mode

/u/Vektorweg Geany IDE

/u/mallai SublimeText + SublimeHaskell ghc-mod hasktags hoogle

/u/andrewthad vim tmux

/u/drwebb emacs haskell-mode structured-haskell-mode haskell-flycheck hindent haskell-dash hlint tmux

/u/WarDaft Notepad++ ghcid

/u/tejon SublimeText 3 SublimeHaskell (hsdev branch) SublimeRepl hsdev hlint stylish-haskell

/u/Mob_Of_One emacs haskell-mode ghci

/u/maxigit vim tmux ghcid

/u/shishkabeb emacs ghc-mod hlint standard autocomplete

/u/tikhonjelvis emacs haskell-mode

/u/hvr_ emacs haskell-mode

/u/ndmitchell SublimeText ghcid

/u/cgibbard vim

/u/ch0wn vim hlint ghc-mod hsimport

/u/clrnd vim ghci

/u/Crandom Atom ghci

/u/semanticistZombie vim haskell-vim stylish-haskell hlint

/u/gelisam vim hoogle ghci

/u/bryangarza emacs haskell-mode

/u/get-your-shinebox vim haskell-vim-now ghcid

/u/edwardkmett vim hlint

/u/AndrasKovacs emacs haskell-mode

/u/kfound vim hdevtools hlint syntastic

submitted by cies010
[link] [18 comments]
Categories: Incoming News

Do we have a Monadic abstraction over the Alternative interface?

Haskell on Reddit - Sat, 07/04/2015 - 6:01am

Does there exist a monad transformer which uses the Monad interface to abstract over the MonadPlus/Alternative functionality? I'm asking because I've just implemented the thing and am wondering whether I should publish it.

What does it do?

It lets you abstract over the following expression:

clause1 <|> clause2 <|> clause3

with a monadic interface:

runAlt $ do
  lift $ clause1
  lift $ clause2
  lift $ clause3

What's the purpose?

Convenient construction of action-trees. The usefulness becomes evident when the clauses themselves use the "do" syntax. E.g.,

runAlt $ do
  lift $ do
    action1
    action2
  lift $ do
    action3
    action4
  lift $ do
    action5
    action6

which otherwise would be the following uneditable thing:

(do action1
    action2) <|>
  (do action3
      action4) <|>
  (do action5
      action6)
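For reference, here is one possible reconstruction of such a transformer (my own sketch matching the interface above; the poster's actual implementation may differ). It's essentially a writer monad that records clauses and folds them together with <|>:

import Control.Applicative (Alternative, empty, (<|>))

-- Each 'lift' records one clause; 'runAlt' folds the recorded clauses
-- together with <|>. Note that the result of a lifted clause is not
-- available to later statements.
newtype AltM f a b = AltM ([f a], b)

instance Functor (AltM f a) where
  fmap g (AltM (cs, b)) = AltM (cs, g b)

instance Applicative (AltM f a) where
  pure b = AltM ([], b)
  AltM (cs, g) <*> AltM (cs', b) = AltM (cs ++ cs', g b)

instance Monad (AltM f a) where
  return = pure
  AltM (cs, b) >>= k = let AltM (cs', c) = k b in AltM (cs ++ cs', c)

lift :: f a -> AltM f a ()
lift c = AltM ([c], ())

runAlt :: Alternative f => AltM f a b -> f a
runAlt (AltM (cs, _)) = foldr (<|>) empty cs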

In case the solution is not published yet, I'm all ears for suggestions for the name of the thing.

submitted by nikita-volkov
[link] [comment]
Categories: Incoming News

Edwin Brady: Cross-platform Compilers for Functional Languages

Planet Haskell - Sat, 07/04/2015 - 5:06am

I’ve just submitted a new draft, Cross-platform Compilers for Functional Languages. Abstract:

Modern software is often designed to run on a virtual machine, such as the JVM or .NET's CLR. Increasingly, even the web browser is considered a target platform with the Javascript engine as its virtual machine. The choice of programming language for a project is therefore restricted to the languages which target the desired platform. As a result, an important consideration for a programming language designer is the platform to be targeted. For a language to be truly cross-platform, it must not only support different operating systems (e.g. Windows, OSX and Linux) but it must also target different virtual machine environments such as JVM, .NET, Javascript and others. In this paper, I describe how this problem is addressed in the Idris programming language. The overall compilation process involves a number of intermediate representations and Idris exposes an interface to each of these representations, allowing back ends for different target platforms to decide which is most appropriate. I show how to use these representations to retarget Idris for multiple platforms, and further show how to build a generic foreign function interface supporting multiple platforms.

Constructive comments and suggestions are welcome, particularly if you’ve tried implementing a code generator for Idris.


Categories: Offsite Blogs