News aggregator

Distributed and persistent events bus in Haskell

haskell-cafe - Tue, 04/07/2015 - 3:50pm
Hello, I am implementing an application that uses event sourcing as the primary storage for data, which implies I need a way to durably and reliably store streams of events on stable storage. I also need an event distribution system on top of that persistent storage, so that components can subscribe to stored events. So far I have implemented a simple store, a flat file which reuses the format of Apache Kafka (just in case...). It is not very robust or sophisticated, but it can work for moderate loads. Now I am looking at the event distribution part, in the hope of being able to reuse some existing distributed event bus system rather than rolling my own. A couple of months ago I had a look at Vaultaire, Marquise, and friends, but I am not sure they are really suited to my use case: they seem to be geared toward very high workloads and throughput, like log analysis or huge data-stream analysis. Thanks for any pointers you might share.
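
For concreteness, here is a minimal sketch of the append path of such a flat-file event store (the names are hypothetical and the framing is simplified; a real store would need Kafka-compatible framing and an OS-level sync, which hFlush alone does not provide):

import qualified Data.ByteString.Char8 as BS
import System.IO (IOMode (AppendMode), hFlush, withFile)

-- Append one serialized event to an append-only log, flushing
-- Haskell-side buffers before acknowledging the write.
appendEvent :: FilePath -> BS.ByteString -> IO ()
appendEvent path event =
  withFile path AppendMode $ \h -> do
    BS.hPutStrLn h event
    hFlush h
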
Categories: Offsite Discussion

[EPIT2015] Call for participation to a Spring School about the Coq proof assistant

General haskell list - Tue, 04/07/2015 - 2:56pm
*** Call for participation, please distribute. *** EPIT'2015 (http://www.epit2015.website) Spring School in Theoretical Computer Science Mechanizing Proofs of Programs in Coq May 25 to May 29, 2015, Frejus, France * Presentation The French spring school in theoretical computer science (EPIT) is a recurring school created 40 years ago by Maurice Nivat. This year, the school is about the mechanization of proofs of programs using the proof assistant Coq. As no prerequisites are needed, the school targets any computer scientist who is curious about what a proof assistant is and how it can be integrated into their daily research work. * Program The school will take place between May 24 and May 29 and will be divided into eight sessions. A session will consist of a (rather short) lecture (given in English) followed by practical exercises on computer. The first five sessions will be dedicated to a presentation of the main concepts and techniques
Categories: Incoming News

ANN: new release of OrgMode Parse - 0.1.0

Haskell on Reddit - Tue, 04/07/2015 - 9:03am

Merging in Greg Hale's pull request has fleshed out this package. Thanks Greg.

https://hackage.haskell.org/package/orgmode-parse-0.1.0

submitted by pspringmeyer
Categories: Incoming News

Are there any OpenGL Shader GLSL eDSL libraries?

Haskell on Reddit - Tue, 04/07/2015 - 8:08am

I've been looking at https://github.com/fiendfan1/Haskell-GLSL-eDSL and have been having trouble getting it to work. I got it to install, but when trying to write shaders with it, it fumbles. I tried just doing the example from the README, but it gave me an "Expecting one more argument to `ShaderM VertexShader Object'" error. I tried looking at the source to find the declaration of ShaderM, but couldn't. I saw that it takes two arguments, and it seems VertexShader takes two arguments as well, but when looking at the source for that, I can't find where it takes them or what they are.

Does anyone know of other GLSL eDSL libraries I can use instead? Or perhaps someone who has used this one can help me out? I tried emailing the author a while ago, but they haven't gotten back to me.

Thanks!

submitted by Navajubble
Categories: Incoming News

Everything old is new again: Quoted Domain Specific Languages

Lambda the Ultimate - Tue, 04/07/2015 - 7:59am

Everything old is new again: Quoted Domain Specific Languages, by Shayan Najd, Sam Lindley, Josef Svenningsson, Philip Wadler:

We describe a new approach to domain specific languages (DSLs), called Quoted DSLs (QDSLs), that resurrects two old ideas: quotation,
from McCarthy’s Lisp of 1960, and the subformula property, from Gentzen’s natural deduction of 1935. Quoted terms allow the DSL to share the syntax and type system of the host language. Normalising quoted terms ensures the subformula property, which guarantees that one can use higher-order types in the source while guaranteeing first-order types in the target, and enables using types to guide fusion. We test our ideas by re-implementing Feldspar, which was originally implemented as an Embedded DSL (EDSL), as a QDSL; and we compare the QDSL and EDSL variants.

Neat paper first posted here by Blaisorblade that formalizes the properties needed for using quotation to implement DSLs. Most embedded DSLs use custom operators and lifted conditionals to make embedded programs seem more natural, but quoting enables the use of native operators and conditionals, and might lead to more natural expressions of lightweight (possibly heterogeneous) staged computations.
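
For a taste of the core ingredient, here is a small example using GHC's typed Template Haskell quotations (an assumed minimal illustration of typed quotation, not the paper's Feldspar code):

{-# LANGUAGE TemplateHaskell #-}
import Language.Haskell.TH.Syntax (Q, TExp)

-- A typed quotation: the quoted term reuses the host language's
-- native syntax, operators, and type system, which is exactly the
-- sharing that QDSLs exploit.
double :: Q (TExp (Int -> Int))
double = [|| \x -> x + x ||]

-- In another module, splicing it back in recovers ordinary code:
--   fortyTwo = $$(double) 21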

Categories: Offsite Discussion

[haskell.org Google Summer of Code] Call for Mentors

General haskell list - Tue, 04/07/2015 - 1:58am
We have had a rather large pool of potential students apply for this year's Google Summer of Code, but, ultimately, Google won't let us ask for a slot unless we have a potential mentor assigned in advance. On top of that, one thing we try to do with each project is, wherever possible, assign both a primary and a backup mentor, so the available mentoring pool is drawn a little thin. Many hands make for light work, though: if you've mentored or thought about mentoring in years past, I'd encourage you to sign up on google-melange for the Google Summer of Code at: https://www.google-melange.com/gsoc/homepage/google/gsoc2015 and request a connection to haskell.org as a Mentor. Once you've done this you can help us vote on proposals, and should something seem appropriate to you, you can flag yourself as available as a potential mentor or backup mentor for one (or more) of the projects. We have a couple of weeks left to rate proposals and request slots, but it'd be good to make as much progress as we can this week.
Categories: Incoming News

Mathematical APIs

Haskell on Reddit - Mon, 04/06/2015 - 11:26pm
Categories: Incoming News

Call for Haskell.org committee self-nominations

haskell-cafe - Mon, 04/06/2015 - 9:20pm
Dear Haskellers, We have been overdue for some time in calling for a new round of nominations to the Haskell.org Committee. We have three members due for retirement -- Jason Dagit, Edward Kmett, and Brent Yorgey. The committee would like to thank them for their excellent service. To nominate yourself, please send an email to committee at haskell.org by 21 April 2015. The retiring members are eligible to re-nominate themselves. Please feel free to include any information about yourself that you think will help us to make a decision. Being a member of the committee does not necessarily require a significant amount of time, but committee members should aim to be responsive during discussions when the committee is called upon to make a decision. Strong leadership, communication, and judgement are very important characteristics for committee members. The role is about setting policy, providing direction/guidance for Haskell.org infrastructure, planning for the long term, and being fiscally responsible with th
Categories: Offsite Discussion

Gabriel Gonzalez: Mathematical APIs

Planet Haskell - Mon, 04/06/2015 - 9:17pm

You can use mathematical APIs to smooth the onboarding process for new users of your library by reusing their existing intuition for algebraic operations.

Let's use an example from my turtle library for shell scripting, which uses the Shell Text type to represent a stream of lines. You can forward such a stream to the console using stdout:

stdout :: Shell Text -> IO ()

Thanks to Haskell's OverloadedStrings extension, we can use a string literal to encode a 1-element stream that emits just that one string literal:

>>> :set -XOverloadedStrings
>>> import Turtle
>>> stdout "Hello, world!"
Hello, world!
>>>

Now let's drop a thin mathematical API on top of these streams so that we can treat streams like ordinary numbers. These examples will only work with turtle-1.1.0 and later.

If we add streams, we concatenate their elements in order. Here we concatenate three 1-element streams to generate a combined 3-element stream:

>>> stdout ("Line 1" + "Line 2" + "Line 3")
Line 1
Line 2
Line 3
>>>

0 will be the empty stream:

>>> stdout 0
>>> -- This prints nothing

1 is a stream that emits a single empty string:

>>> stdout 1

>>>

So what is 2? Well, it stands to reason that 2 must be 1 + 1:

>>> stdout 2


>>> stdout (1 + 1) -- Same thing


>>>

Similarly, 3 must be 1 + 1 + 1:

>>> stdout 3



>>>

... and so forth.

If we multiply two 1-element streams, we generate a new 1-element stream that concatenates their strings:

>>> stdout ("123" * "456")
123456
>>>

Now what do you think will happen if I do this?

>>> let x = "Line 1" + "Line 2" -- A stream with two elements
>>> let y = "Line A" + "Line B" -- A stream with two elements
>>> stdout (x * ", " * y) -- What will this print?

Well, we can reason about this using the same rules of addition and multiplication we learned in school.

First we substitute x and y with their definitions:

x * ", " * y

= ("Line 1" + "Line 2") * ", " * ("Line A" + "Line B")

Then we expand out the multiplication to get four terms:

= "Line 1" * ", " * "Line A"
+ "Line 1" * ", " * "Line B"
+ "Line 2" * ", " * "Line A"
+ "Line 2" * ", " * "Line B"

Then we use the rule that multiplying 1-element streams just concatenates their strings:

= "Line 1, Line A"
+ "Line 1, Line B"
+ "Line 2, Line A"
+ "Line 2, Line B"

... and there's our final result! We expect our stream to have four elements, one for each pairing of an element from the left stream with an element from the right stream.

Let's verify that:

>>> stdout (x * ", " * y)
Line 1, Line A
Line 1, Line B
Line 2, Line A
Line 2, Line B
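
For intuition, here is a toy sketch of the kind of Num instance that makes all of this work, using a plain list of lines in place of turtle's effectful Shell Text (the Stream type and instance body are illustrative assumptions, not turtle's actual code):

{-# LANGUAGE OverloadedStrings #-}
import Data.String (IsString (..))

-- A toy, list-based analogue of Shell Text.
newtype Stream = Stream [String]

instance IsString Stream where
  fromString s = Stream [s]  -- a string literal is a 1-element stream

instance Num Stream where
  Stream xs + Stream ys = Stream (xs ++ ys)                   -- concatenate streams
  Stream xs * Stream ys = Stream [x ++ y | x <- xs, y <- ys]  -- pair up elements
  fromInteger n = Stream (replicate (fromInteger n) "")       -- n = 1 + 1 + ... + 1
  -- abs, signum, and negate are not meaningful for streams; they are
  -- defined as identities only to complete the Num class for this sketch.
  abs    = id
  signum = id
  negate = id

main :: IO ()
main = do
  let Stream xs = ("Line 1" + "Line 2") * ", " * ("Line A" + "Line B")
  mapM_ putStrLn xs  -- prints the four combined lines shown above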

We can even leverage our algebraic intuition when working with impure streams:

>>> stdout (stdin * ("!" + "?"))
Test<Enter>
Test!
Test?
ABC<Enter>
ABC!
ABC?
...

This is why I love mathematical APIs: they are concise, general, and expressive.

People judge programming languages using mindshare, but the mindshare of mathematics dwarfs all programming languages combined. By specifying programs mathematically we can unlock a large and latent pool of programming talent.

Categories: Offsite Blogs

Robert Harper: Old Neglected Theorems Are Still Theorems

Planet Haskell - Mon, 04/06/2015 - 5:11pm

I have very recently been thinking about the question of partiality vs totality in programming languages, a perennial topic in PLs that every generation thinks it discovers for itself.  And this got me to remembering an old theorem that, it seems, hardly anyone knows ever existed in the first place.  What I like about the theorem is that it says something specific and technically accurate about the sizes of programs in total languages compared to those in partial languages.  The theorem provides some context for discussion that does not just amount to opinion or attitude (and attitude always seems to abound when this topic arises).

The advantage of a total programming language such as Goedel’s T is that it ensures, by type checking, that every program terminates, and that every function is total. There is simply no way to have a well-typed program that goes into an infinite loop. This may seem appealing, until one considers that the upper bound on the time to termination can be quite large, so large that some terminating programs might just as well diverge as far as we humans are concerned. But never mind that, let us grant that it is a virtue of  T that it precludes divergence.

Why, then, bother with a language such as PCF that does not rule out divergence? After all, infinite loops are invariably bugs, so why not rule them out by type checking? (Don’t be fooled by glib arguments about useful programs, such as operating systems, that “run forever”. After all, infinite streams are programmable in the language M of inductive and coinductive types in which all functions terminate. Computing infinitely does not mean running forever, it just means “for as long as one wishes, without bound.”)  The notion does seem appealing until one actually tries to write a program in a language such as T.

Consider computing the greatest common divisor (GCD) of two natural numbers. This can be easily programmed in PCF by solving the following equations using general recursion:

gcd(m, 0) = m
gcd(0, n) = n
gcd(m, n) = gcd(m - n, n)   if m > n
gcd(m, n) = gcd(m, n - m)   if m <= n

The type of gcd defined in this manner has the partial function type nat × nat ⇀ nat, which suggests that it may not terminate for some inputs. But we may prove, by induction on the sum of the pair of arguments, that it is, in fact, a total function.
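
Rendered in Haskell, itself a partial language in the same sense as PCF, the definition is a direct transcription (a sketch using Natural from Numeric.Natural; the prime avoids clashing with the Prelude's gcd):

import Numeric.Natural

-- General recursion: nothing in the structure of the recursion
-- certifies termination; the proof (induction on m + n) is external.
gcd' :: Natural -> Natural -> Natural
gcd' m 0 = m
gcd' 0 n = n
gcd' m n
  | m > n     = gcd' (m - n) n
  | otherwise = gcd' m (n - m)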

Now consider programming this function in T. It is, in fact, programmable using only primitive recursion, but the code to do it is rather painful (try it!). One way to see the problem is that in T the only form of looping is one that reduces a natural number by one on each recursive call; it is not (directly) possible to make a recursive call on a smaller number other than the immediate predecessor. In fact one may code up more general patterns of terminating recursion using only primitive recursion, but if you examine the details, you will see that doing so comes at a significant price in performance and program complexity. Program complexity can be mitigated by building libraries that codify standard patterns of reasoning whose cost of development should be amortized over all programs, not just one in particular. But there is still the problem of performance. Indeed, the encoding of more general forms of recursion into primitive recursion means that, deep within the encoding, there must be a "timer" that "goes down by ones" to ensure that the program terminates. The result will be that programs written with such libraries will not be nearly as fast as they ought to be.  (It is actually quite fun to derive "course of values" recursion from primitive recursion, and then to observe with horror what is actually going on, computationally, when using this derived notion.)
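
A sketch of that encoding in Haskell makes the timer visible: the recursion becomes structural in an explicit fuel argument, and a termination bound (here m + n + 1, mirroring the induction on the sum) must be threaded through the code. This continues the gcd' sketch above, with Natural from Numeric.Natural:

-- Primitive-recursion-style gcd: structural recursion on the fuel.
gcdFuel :: Natural -> Natural -> Natural -> Maybe Natural
gcdFuel 0    _ _ = Nothing  -- timer expired
gcdFuel _    m 0 = Just m
gcdFuel _    0 n = Just n
gcdFuel fuel m n
  | m > n     = gcdFuel (fuel - 1) (m - n) n
  | otherwise = gcdFuel (fuel - 1) m (n - m)

-- m + n + 1 steps always suffice, so the Nothing case is unreachable,
-- yet every call still pays for decrementing the timer.
gcdTotal :: Natural -> Natural -> Natural
gcdTotal m n = case gcdFuel (m + n + 1) m n of
  Just d  -> d
  Nothing -> error "unreachable: m + n + 1 steps suffice"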

But, one may argue, T is simply not a serious language. A more serious total programming language would admit sophisticated patterns of control without performance penalty. Indeed, one could easily envision representing the natural numbers in binary, rather than unary, and allowing recursive calls to be made by halving to achieve logarithmic complexity. This is surely possible, as are numerous other such techniques. Could we not then have a practical language that rules out divergence?

We can, but at a cost.  One limitation of total programming languages is that they are not universal: you cannot write an interpreter for T within T (see Chapter 9 of PFPL for a proof).  More importantly, this limitation extends to any total language whatever.  If this limitation does not seem important, then consider the Blum Size Theorem (BST) (from 1967), which places a very different limitation on total languages.  Fix any total language, L, that permits writing functions on the natural numbers. Pick any blowup factor, however expansive you wish to be.  The BST states that there is a total function on the natural numbers that is programmable in L, but whose shortest program in L is larger by the given blowup factor than its shortest program in PCF!

The underlying idea of the proof is that in a total language the proof of termination of a program must be baked into the code itself, whereas in a partial language the termination proof is an external verification condition left to the programmer. Roughly speaking, there are, and always will be, programs whose termination proof is rather complicated to express, if you fix in advance the means by which it may be proved total. (In T it was primitive recursion, but one can be more ambitious, yet still get caught by the BST.)  But if you leave room for ingenuity, then programs can be short, precisely because they do not have to embed the proof of their termination in their own running code.

There are ways around the BST, of course, and I am not saying otherwise.  For example, the BST merely guarantees the existence of a bad case, so one can always argue that such a case will never arise in practice.  Could be, but I did mention the GCD-in-T problem for a reason: there are natural problems that are difficult to express in a language such as T.  By fixing the possible termination arguments in advance, one is tempting fate, for there are many problems, such as the Collatz Conjecture, for which the termination proof of a very simple piece of code has been an open problem for decades, and has resisted at least some serious attempts on it.  One could argue that such a function is of no practical use.  I agree, but I point out the example not to say that it is useful, but to say that it is likely that its eventual termination proof will be quite nasty, and that this will have to be reflected in the program itself if you are limited to a T-like language (rendering it, once again, useless).  For another example, there is no inherent reason why termination need be assured by means similar to that used in T.  We got around this issue in NuPRL by separating the code from the proof, using a type theory based on a partial programming language, not a total one.  The proof of termination is still required for typing in the core theory (but not in the theory with "bar types" for embracing partiality).  But it is not baked into the code itself, affecting its run-time; it is "off to the side", large though it may be.
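
The Collatz function is worth seeing concretely: it is a few lines of general recursion whose totality no termination discipline fixed in advance, T-style, is known to certify:

-- Whether this returns 1 for every positive n is the Collatz
-- Conjecture, open for decades despite the simplicity of the code.
collatz :: Integer -> Integer
collatz 1 = 1
collatz n
  | even n    = collatz (n `div` 2)
  | otherwise = collatz (3 * n + 1)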

Updates: word smithing, fixed bad link, corrected gcd, removed erroneous parenthetical reference to Coq, fixed LaTeX problems.


Filed under: Programming, Research, Teaching Tagged: functional programming, primitive recursion, programming languages, type theory, verification
Categories: Offsite Blogs

Robert Harper: The Power of Negative Thinking

Planet Haskell - Mon, 04/06/2015 - 5:01pm

Exception tracking is a well-known tar baby of type system design.  After all, if expressions can have two sorts of result, why shouldn't the type say something about them both?  Languages such as CLU, FX, and Java, to name three, provide "throws" or "raises" clauses in the types of procedures that specify an upper bound on the exceptions that can occur when they are called.  It all seems natural and easy, but somehow the idea never really works very well.  One culprit is any form of higher-order programming, which is inherent in object-oriented and functional languages alike.  Handling the indirection requires more complex concepts, such as effect polymorphism, to make things work reasonably well.  Or untracked exceptions are used to avoid the need to track them.  Somehow such an appealing idea turns out to be stickier to realize than one might expect.  But why?

A piece of the puzzle was put into place by Xavier Leroy and François Pessaux in their paper on tracking uncaught exceptions. Their idea was to use type-based methods to track uncaught exceptions, but to move the clever typing techniques required out of the programming language itself and into a separate analysis tool.  They make effective use of the powerful concept of row polymorphism introduced by Didier Rémy for typing records and variants in various dialects of Caml.  Moving exception tracking out of the language and into a verification tool is the decisive move, because it liberates the analyzer from any constraints that may be essential at the language level.

But why track uncaught exceptions rather than caught exceptions?  From a purely methodological viewpoint it seems more important to know that a certain code fragment cannot raise certain exceptions (such as the Match exception in ML, which arises when a value matches no pattern in a case analysis).  In a closed world in which all of the possible exceptions are known, tracking positive information about which exceptions might be raised amounts to the same as tracking which exceptions cannot be raised, by simply subtracting the raised set from the entire set.  As long as the raised set is an upper bound on the exceptions that might be raised, the difference is a lower bound on the set of exceptions that cannot be raised.  Such conservative approximations are necessary because a non-trivial behavioral property of a program is always undecidable, and hence requires proof.  In practice this means that stronger invariants must be maintained than just the exception information, so that one may prove, for example, that the values passed to a pattern match are limited to those that actually do satisfy some clause of an inexhaustive match.

How realistic is the closed world assumption?  For it to hold seems to require a whole-program analysis, and is therefore non-modular, a risky premise in today’s world.  Even on a whole-program basis exceptions must be static in the sense that, even if they are scoped, they may in principle be declared globally, after suitable renaming to avoid collisions.  The global declarations collectively determine the whole “world” from which positive exception tracking information may be subtracted to obtain negative exception information.  But in languages that admit multiple instantiations of modules, such as ML functors, static exceptions are not sufficient (each instance should introduce a distinct exception).  Instead, static exceptions must be replaced by dynamic exceptions that are allocated at initialization time, or even run-time, to ensure that no collisions can occur among the instances.  At that point we have an open world of exceptions, one in which there are exceptions that may be raised, but which cannot be named in any form of type that seeks to provide an upper bound on the possible uncaught exceptions that may arise.

For example consider the ML expression

let exception X in raise X end

If one were to use positive exception tracking, what would one say about the expression as a whole?  It can, and in fact does, raise the exception X, yet this fact is unspeakable outside of the scope of the declaration.   If a tracker does not account for this fact, it is unsound in the sense that the uncaught exceptions no longer provide an upper bound on what may be raised.  One maneuver, used in Java, for example, is to admit a class of untracked exceptions about which no static information is maintained.  This is useful, because it allows one to track those exceptions that can be tracked (by the Java type system) and to not track those that cannot.

In an open world (which includes Java, because exceptions are a form of object) positive exception tracking becomes infeasible because there is no way to name the exceptions that might be tracked.  In the above example the exception X is actually a bound variable bound to a reference to an exception constructor.  The name of the bound variable ought not matter, so it is not even clear what the exception raised should be called.  (It is amusing to see the messages generated by various ML compilers when reporting uncaught exceptions.  The information they provide is helpful, certainly, but is usually, strictly speaking, meaningless, involving identifiers that are not in scope.)

The methodological considerations mentioned earlier suggest a way around this difficulty.  Rather than attempt to track those exceptions that might be raised, instead track the exceptions that cannot be raised.  In the above example there is nothing to say about X not being raised, because it is being raised, so we're off the hook there.  The "dual" example

let exception X in 2+2 end

illustrates the power of negative thinking.  The body of the let does not raise the exception bound to X, and this may be recorded in a type that makes sense within the scope of X.  The crucial point is that when exiting its scope it is sound to drop mention of this information in a type for the entire expression.  Information is lost, but the analysis is sound.  In contrast there is no way to drop positive information without losing soundness, as the first example shows.

One way to think about the situation is in terms of type refinements, which express properties of the behavior of expressions of a type.  To see this most clearly it is useful to separate the exception mechanism into two parts, the control part and the data part.  The control aspect is essentially just a formulation of error-passing style, in which every expression has either a normal return of a specified type, or an exceptional return of the type associated to all exceptions.  (Nick Benton and Andrew Kennedy nicely formulated this view of exceptions as an extension of the concept of a monad.)
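
As a concrete rendering of the control aspect, here is a minimal error-passing monad in Haskell, built on nothing but Either (a sketch of the monadic view, not Benton and Kennedy's actual formulation):

-- Every computation returns either normally (Right) or
-- exceptionally (Left); bind propagates the exceptional return.
newtype Exc e a = Exc { runExc :: Either e a }

instance Functor (Exc e) where
  fmap f (Exc x) = Exc (fmap f x)

instance Applicative (Exc e) where
  pure = Exc . Right
  Exc (Left e)  <*> _ = Exc (Left e)
  Exc (Right f) <*> x = fmap f x

instance Monad (Exc e) where
  Exc (Left e)  >>= _ = Exc (Left e)  -- exceptional return propagates
  Exc (Right a) >>= k = k a           -- normal return continues

raise :: e -> Exc e a
raise = Exc . Left

handle :: Exc e a -> (e -> Exc e a) -> Exc e a
handle (Exc (Left e)) h = h e
handle m              _ = m

-- runExc (safeDiv 6 0 `handle` \_ -> pure 0) == Right 0
safeDiv :: Int -> Int -> Exc String Int
safeDiv _ 0 = raise "division by zero"
safeDiv x y = pure (x `div` y)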

The data aspect is, for dynamic exceptions, the type of dynamically classified values, which is written clsfd in PFPL.  Think of it as an open-ended sum in which one can dynamically generate new classifiers (aka summands, injections, constructors, exceptions, channels, ...) that carry a value of a specified type.  According to this view the exception X is bound to a dynamically-generated classifier carrying a value of unit type.  (Classifier allocation is a storage effect, so the data aspect necessarily involves effects, whereas the control aspect may be, and, for reasons of parallelism, ought to be, taken as pure.)  Exception constructors are used to make values of type clsfd, which are passed to handlers that can deconstruct those values by pattern matching.

Type refinements come into play as a means of tracking the class of a classified value.  For the purposes of exception tracking, the crucial refinements of the type clsfd are the positive refinement, which specifies that a classified value is of class a, and the negative refinement, which specifies that it is not of class a.  Positive exception tracking reduces to maintaining invariants expressed by a disjunction of positive refinements; negative exception tracking reduces to maintaining invariants expressed by a conjunction of negative refinements.  Revisiting the logic of exception tracking, the key is that the entailment

¬a ∧ ¬b ⊨ ¬a

is valid, whereas the "entailment"

a ∨ b ⊨ a

is not.  Thus, in the negative setting we may get ourselves out of the scope of an exception by weakening the refinement, an illustration of the power of negative thinking.


Filed under: Programming, Research Tagged: dynamic classification, exceptions, open vs closed world, type refinements
Categories: Offsite Blogs

Robert Harper: Web site sml-family.org Up and Running!

Planet Haskell - Mon, 04/06/2015 - 4:50pm

After far too long, and far too many obstacles to be overcome, Dave MacQueen, Lars Bergstrom, and I have finally prepared an open-source site for the entire family of languages derived from Standard ML.  The defining characteristic of Standard ML has always been that it has a rigorous definition, so that it is always clear what is a valid program and how it should behave.  And indeed we have seven different working compilers, all of which are compatible with each other, with the exception of some corner cases arising from known mistakes in the definition.  Moreover, there are several active projects developing new variations on the language, and it would be good to maintain the principle that such extensions be precisely defined.

To this end the sources of the 1990 and 1997 versions of the definition are on the web site, with the permission of MIT Press, as is the type-theoretic definition formulated by Stone and H., which was subsequently used as the basis for a complete machine-checked proof of type safety for the entire language done by Crary, Lee, and H.  It is to be hoped that the errors in the definition (many are known; we provide links to the extensive lists compiled by Kahrs and Rossberg in separate investigations) may now be corrected.  Anyone is free to propose an alteration to be merged into the main branch, which is called "SML, The Living Language" and also known as "Successor ML".  One may think of this as a kind of "third edition" of the definition, but one that is in continual revision by the community.  Computer languages, like natural languages, belong to us all collectively, and we all contribute to their evolution.

Everyone is encouraged to create forks for experimental designs or new languages that enrich, extend, or significantly alter the semantics of the language.  The main branch will be for generally accepted corrections, modifications, and extensions, but it is to be expected that completely separate lines of development will also emerge.

The web site, sml-family.org, is up and running, and will be announced in various likely places very soon.

Update: We have heard that some people get a “parked page” error from GoDaddy when accessing sml-family.org.  It appears to be a DNS propagation problem.

Update: The DNS problems have been resolved, and I believe that the web site is stably available now as linked above.

Update: Word smithing for clarity.


Filed under: Research Tagged: functional programming, sml
Categories: Offsite Blogs

Robert Harper: Bellman on “Dynamic Programming”

Planet Haskell - Mon, 04/06/2015 - 4:41pm

Everyone who has studied algorithms has wondered “why the hell is Bellman’s memorization technique called dynamic programming?”.  I recently learned the answer from my colleague, Guy Blelloch, who dug up the explanation from Richard Bellman himself:

“I spent the Fall quarter (of 1950) at RAND. My first task was to find a name for multistage decision processes.

“An interesting question is, ‘Where did the name, dynamic programming, come from?’ The 1950s were not good years for mathematical research. We had a very interesting gentleman in Washington named Wilson. He was Secretary of Defense, and he actually had a pathological fear and hatred of the word, research. I’m not using the term lightly; I’m using it precisely. His face would suffuse, he would turn red, and he would get violent if people used the term, research, in his presence. You can imagine how he felt, then, about the term, mathematical. The RAND Corporation was employed by the Air Force, and the Air Force had Wilson as its boss, essentially. Hence, I felt I had to do something to shield Wilson and the Air Force from the fact that I was really doing mathematics inside the RAND Corporation. What title, what name, could I choose? In the first place I was interested in planning, in decision making, in thinking. But planning, is not a good word for various reasons. I decided therefore to use the word, ‘programming.’ I wanted to get across the idea that this was dynamic, this was multistage, this was time-varying—I thought, let’s kill two birds with one stone. Let’s take a word that has an absolutely precise meaning, namely dynamic, in the classical physical sense. It also has a very interesting property as an adjective, and that is it’s impossible to use the word, dynamic, in a pejorative sense. Try thinking of some combination that will possibly give it a pejorative meaning. It’s impossible. Thus, I thought dynamic programming was a good name. It was something not even a Congressman could object to. So I used it as an umbrella for my activities” (p. 159).

As with algorithms, so too with dynamic languages?

Update: why is it called “memoization” and not “memorization”?

Update: rewrite of the commentary.


Filed under: Research, Teaching Tagged: dynamic and static typing
Categories: Offsite Blogs

skip hpc during build

glasgow-user - Mon, 04/06/2015 - 3:35pm
I'm deleting hpc after building ghc for a vm to save space. Is there an easy way to skip building it in the first place?
Categories: Offsite Discussion