News aggregator

Announcing BufferBuilder - Encode JSON 2-5x faster than Aeson

Haskell on Reddit - Mon, 02/23/2015 - 12:52pm

Hi all,

/u/implicit_cast and I wrote a new library for fast encoding of JSON (and UTF-8, and buffers of bytes in general). Our benchmarks show that it is about five times faster than Aeson when encoding records straight to JSON, and about twice as fast when using it to encode Aeson Values directly.

Here is a blog post describing the rationale and backstory: http://chadaustin.me/2015/02/buffer-builder/

And here are the packages on Hackage:

https://hackage.haskell.org/package/buffer-builder-0.2.0.3

https://hackage.haskell.org/package/buffer-builder-aeson-0.2.0.3

submitted by chadaustin
[link] [23 comments]
Categories: Incoming News

hpc - ignore some expressions

haskell-cafe - Mon, 02/23/2015 - 12:36pm
Hi,

I am generating code coverage reports using hpc, and it works great. The reports would be much more helpful if I could somehow tell hpc to treat certain expressions as "covered", though. As an example, let's think of "impossible" cases of a function:

    f x | check x   = result x
        | otherwise = error "impossible"

I know there are techniques to get rid of (or minimise the number of) the impossible cases. I also can see how this information can be relevant, i.e. seeing whether `check` always holds or not in the report. However, at least for one version of the report, I want an error call like the one above to be treated as covered. Or maybe a different colour could be used to annotate "ignored"?

Did anyone need/want anything like this? Is it possible? My guess is that this is somewhat possible using manually generated .tix files and merging them with the .tix file that is generated in the usual way. However this feels too ad hoc and fragile, and seems to be too much effort to reach a simple goal.

Best, O
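(As a rough illustration of the .tix idea, here is a sketch using the Trace.Hpc.Tix module from the hpc library that ships with GHC. The file names are made up, and bumping every unevaluated tick count to 1 crudely marks everything as covered; restricting this to just the error branches would mean mapping tick indices back to source spans via the corresponding .mix files.)

    import Trace.Hpc.Tix (Tix (..), TixModule (..), readTix, writeTix)

    main :: IO ()
    main = do
        mtix <- readTix "real-run.tix"   -- hypothetical: the .tix from a normal test run
        case mtix of
          Nothing         -> putStrLn "could not read real-run.tix"
          Just (Tix mods) -> writeTix "forced-covered.tix" (Tix (map force mods))
      where
        -- Bump every zero tick count to 1; genuine counts are left alone.
        force (TixModule name hash len ticks) =
          TixModule name hash len (map (max 1) ticks)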
Categories: Offsite Discussion

Trying to create a good monad homework assignment for undergrads

Haskell on Reddit - Mon, 02/23/2015 - 9:59am

Does anyone have any suggestions? I'm gonna assume they'll have a basic understanding.
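(One possible shape for such an assignment, sketched below as my own suggestion rather than anything from the thread: a tiny expression evaluator where division by zero is handled in the Maybe monad, with a follow-up part that swaps in Either for error messages.)

    data Expr
      = Lit Int
      | Add Expr Expr
      | Div Expr Expr
      deriving Show

    -- Part 1: students fill in the Maybe-monad evaluator.
    evalM :: Expr -> Maybe Int
    evalM (Lit n)   = return n
    evalM (Add a b) = do
      x <- evalM a
      y <- evalM b
      return (x + y)
    evalM (Div a b) = do
      x <- evalM a
      y <- evalM b
      if y == 0 then Nothing else return (x `div` y)

    -- evalM (Div (Lit 6) (Add (Lit 1) (Lit 2)))  == Just 2
    -- evalM (Div (Lit 6) (Lit 0))                == Nothing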

submitted by gin-writ
[link] [10 comments]
Categories: Incoming News

Throwing an Exception Back with Continuations?

Haskell on Reddit - Mon, 02/23/2015 - 2:09am

I'm playing around with the pipes library. I'd like to have a producer which runs a certain interaction between an inner producer and an inner consumer (except that the producer and consumer really both send messages back and forth, so those aren't great names for them). The outer producer intercepts some (but not all) of the messages being sent to the inner consumer, sends them down the outer pipeline to the outer consumer, and then sends the outer consumer's response back as the simulated inner response. It sounds like a sort of nested call/cc operation, but I'm not sure whether it makes sense or whether it's possible to get it working without excessive use of the continuation monad, and even then I'm not completely sure it will work.

inner: innerProducer --> interceptor --> innerConsumer
outer: outerProducer --> outerConsumer

With continuations, I think I want the interceptor to sometimes throw an "exception" with its current continuation and the message to be sent. The outer producer catches this exception, sends the message, awaits a response from downstream, then sends the response back through the continuation. I'm pretty sure this can't fit exclusively in the ExceptionT monad but I could be wrong.

I haven't worked with continuations before so I have no idea if this works or not. Does this make any sense?

edit: Monad morphisms might help too but I'm not sure what the limits of those are.
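(Not pipes, but a minimal sketch of the "throw a resumable exception" shape using the plain continuation monad; all the names here are mine and the pipes wiring is left out. The inner side suspends with a message plus the continuation needed to resume it, and the outer side answers and resumes.)

    import Control.Monad.Trans.Cont (Cont, cont, runCont)

    -- Either a final result, or an escaped message together with a
    -- continuation expecting the outer consumer's reply.
    data Step r = Done r | Ask String (String -> Step r)

    -- "Throw" a message outward; evaluation resumes with the reply.
    ask :: String -> Cont (Step r) String
    ask msg = cont (Ask msg)

    runInner :: Cont (Step r) r -> Step r
    runInner m = runCont m Done

    -- The inner interaction: every ask escapes to the outer consumer.
    inner :: Cont (Step String) String
    inner = do
      a <- ask "first intercepted message"
      b <- ask ("second message; the reply to the first was " ++ a)
      return (a ++ " / " ++ b)

    -- The outer consumer: answer each escaped message, then resume.
    outer :: Step String -> IO String
    outer (Done r)    = return r
    outer (Ask msg k) = do
      putStrLn ("outer consumer saw: " ++ msg)
      outer (k "ok")

    main :: IO ()
    main = outer (runInner inner) >>= putStrLn

Whether this shape nests nicely inside two pipes pipelines is exactly the open question in the post.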

submitted by expwnent
[link] [5 comments]
Categories: Incoming News

Reference card - HaskellWiki

del.icio.us/haskell - Mon, 02/23/2015 - 1:18am
Categories: Offsite Blogs

ANN: Lambdabot 5.0

haskell-cafe - Sun, 02/22/2015 - 9:36pm
Dear Haskell users,

after a long hiatus, I'm pleased to announce that we have just released Lambdabot 5.0 on hackage. Lambdabot is an IRC bot that can also be used offline; it provides tools for querying information about and even for producing Haskell code.

To install it, use the following command:

    cabal install lambdabot-haskell-plugins lambdabot

(cabal install lambdabot *should* work, but instead seems to send cabal's dependency solver onto a wild goose chase from which it fails to return.)

* What's new

Lambdabot is now maintained by a team, currently consisting of James Cook and me. The repository and bugtracker have moved to https://github.com/lambdabot/lambdabot

The most significant change compared to lambdabot 4.3 is that lambdabot has been refactored and split into multiple packages. As a result, lambdabot has become easier to customize: simply edit the source files (Main.hs and Modules.hs) in the lambdabot package, and optionally add your own plugins.

Other notable changes include - The
Categories: Offsite Discussion

Russell O'Connor: Parallel Evaluation of the Typed Lambda Calculus

Planet Haskell - Sun, 02/22/2015 - 5:31pm

There is much written about the duality between strict-order (call-by-value) evaluation for the lambda calculus and normal-order (call-by-need) evaluation (or, semantically equivalently, lazy evaluation). In the simply typed lambda calculus, all evaluation eventually terminates, so both evaluation strategies result in the same values. However, when general recursion is added to the simply typed lambda calculus (via a fixpoint operator, for example), then evaluation of some expressions does not terminate. More expressions terminate with normal-order evaluation than with strict-order evaluation. In fact, if evaluation terminates in any order, then it terminates with normal-order evaluation.

I would like to discuss the possibility of a third, even laxer evaluation strategy for the typed lambda calculus that allows even more expressions to terminate. I did just say that normal-order evaluation is, in some sense, a best possible evaluation order, so, in order to beat it, we will have to add more redexes: the commuting conversions.

The typed lambda calculus enjoys certain commuting conversions for case expressions that allow every elimination term to pass through the case expression. For example, the commuting conversion for the π₁ elimination term and the case expression says that

π₁(case e₀ of σ₁ x. e₁ | σ₂ y. e₂)

converts to

case e₀ of σ₁ x. π₁(e₁) | σ₂ y. π₁(e₂)

These commuting conversions are required so that the subformula property holds.

My understanding is that a corollary of this says that

f(case e₀ of σ₁ x. e₁ | σ₂ y. e₂)

and

case e₀ of σ₁ x. f(e₁) | σ₂ y. f(e₂)

are denotationally equivalent whenever f is a strict function.

I would like to develop a version of the lambda calculus that allows these two expressions to denote the same value for any f. Call this the unrestricted commuting conversion property. A lambda calculus with this property would necessarily be parallel and thus will require a parallel evaluation strategy. For example, the natural definition of or becomes the parallel-or operation.

or x y := if x then True else y

This definition has the usual short-circuit property that or True ⊥ is True where ⊥ is defined by

⊥ := fix id

If we use the unrestricted commuting conversion property then we also have that or ⊥ True is True:

or ⊥ True
    {definition of or}
  = if ⊥ then True else True
    {β-expansion}
  = if ⊥ then const True ⟨⟩ else const True ⟨⟩
    {commuting}
  = const True (if ⊥ then ⟨⟩ else ⟨⟩)
    {β-reduction}
  = True

Hence or is parallel-or.

Other parallel functions, such as the majority function, also follow from their natural definitions.

maj x y z := if x then (or y z) else (and y z)

In this case maj ⊥ True True is True.

maj ⊥ True True
    {definition of maj}
  = if ⊥ then (or True True) else (and True True)
    {evaluation of (or True True) and (and True True)}
  = if ⊥ then True else True
    {commuting}
  = True

It is easy to verify that maj True ⊥ True and maj True True ⊥ are also both True.
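(For what it's worth, this parallel behaviour of or and maj can be approximated in Haskell today with Conal Elliott's unamb package; the sketch below is mine, not the post's. Note that unamb requires its two arguments to agree whenever both terminate, which these definitions satisfy.)

    import Data.Function (fix)
    import Data.Unamb (unamb)

    bottom :: a
    bottom = fix id          -- the post's ⊥

    por :: Bool -> Bool -> Bool
    por x y = unamb (if x then True else y) (if y then True else x)

    pand :: Bool -> Bool -> Bool
    pand x y = unamb (if x then y else False) (if y then x else False)

    maj :: Bool -> Bool -> Bool -> Bool
    maj x y z = por (pand x y) (por (pand y z) (pand x z))

    -- por bottom True       evaluates to True
    -- maj bottom True True  evaluates to True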

My big question is whether we can devise some nice operational semantics for the lambda calculus that will have the unrestricted commuting conversions property that I desire. Below I document my first attempt at such operational semantics, but, spoiler alert, it does not work.

The usual rule for computing weak head normal form for the case expression case e₀ of σ₁ x. e₁ | σ₂ y. e₂ says to first convert e₀ to weak head normal form. If it is σ₁ e₀′ then return the weak head normal form of e₁[x ↦ e₀′]. If it is σ₂ e₀′ then return the weak head normal form of e₂[y ↦ e₀′]. In addition to this rule, I want to add another rule for computing the weak head normal form for the case expression. This alternative rule says that we compute the weak head normal forms of e₁ and e₂. If we get C₁ e₁′ and C₂ e₂′ respectively for introduction terms (a.k.a. constructors) C₁ and C₂, and if additionally C₁ = C₂, then return the following as a weak head normal form: C₁ (case e₀ of σ₁ x. e₁′ | σ₂ y. e₂′). If C₁ ≠ C₂ or if we get stuck on a neutral term (e.g. a variable), then this rule does not apply.

This new rule is in addition to the usual rule for case. Any implementation must run these two rules in parallel because it is possible that either rule (or both) can result in non-termination when recursively computing weak head normal forms of sub-terms. I suppose that, in case one has unlifted products, when computing the weak head normal form of a case expression having a product type or function type, one can immediately return

⟨case e₀ of σ₁ x. π₁ e₁ | σ₂ y. π₁ e₂, case e₀ of σ₁ x. π₂ e₁ | σ₂ y. π₂ e₂⟩

or

λz. case e₀ of σ₁ x. e₁ z | σ₂ y. e₂ z

This amended computation of weak head normal form seems to work for computing the or and maj functions above so that they are non-strict in every argument, but there is another example where even this method of computing weak head normal form is not sufficient. Consider the functions that implement associativity for the sum type.

assocL : A + (B + C) -> (A + B) + C
assocL z := case z of σ₁ a. σ₁ (σ₁ a)
                    | σ₂ bc. (case bc of σ₁ b. σ₁ (σ₂ b) | σ₂ c. σ₂ c)

assocR : (A + B) + C -> A + (B + C)
assocR z := case z of σ₁ ab. (case ab of σ₁ a. σ₁ a | σ₂ b. σ₂ (σ₁ b))
                    | σ₂ c. σ₂ (σ₂ c)
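(For reference, the same pair of functions in Haskell notation, with the sum type A + B rendered as Either A B; this rendering is mine, not the post's.)

    assocL :: Either a (Either b c) -> Either (Either a b) c
    assocL (Left a)          = Left (Left a)
    assocL (Right (Left b))  = Left (Right b)
    assocL (Right (Right c)) = Right c

    assocR :: Either (Either a b) c -> Either a (Either b c)
    assocR (Left (Left a))  = Left a
    assocR (Left (Right b)) = Right (Left b)
    assocR (Right c)        = Right (Right c)

Under Haskell's ordinary, non-parallel semantics, assocR (assocL (Right undefined)) is simply undefined, which is exactly the behaviour the post is trying to go beyond.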

Now let us use unrestricted commuting conversions to evaluate assocR (assocL (σ₂ ⊥)).

assocR (assocL (σ₂ ⊥))
    { definition of assocL and case evaluation }
  = assocR (case ⊥ of σ₁ b. σ₁ (σ₂ b) | σ₂ c. σ₂ c)
    { commuting conversion }
  = case ⊥ of σ₁ b. assocR (σ₁ (σ₂ b)) | σ₂ c. assocR (σ₂ c)
    { definition of assocR and case evaluations }
  = case ⊥ of σ₁ b. σ₂ (σ₁ b) | σ₂ c. σ₂ (σ₂ c)
    { commuting conversion }
  = σ₂ (case ⊥ of σ₁ b. σ₁ b | σ₂ c. σ₂ c)
    { η-contraction for case }
  = σ₂ ⊥

Even if η-contraction is not a reduction rule used for computation, it is still the case that t and case t of σ₁ b. σ₁ b | σ₂ c. σ₂ c should always be denotationally equivalent. Anyhow, we see that by using commuting conversions a weak head normal form of assocR (assocL (σ₂ ⊥)) should expose the σ₂ constructor. However, if you apply even my amended computation of weak head normal form, it will not produce any constructor.

What I find particularly surprising is the domain semantics of assocL and assocR. assocL seems to map σ₂ ⊥ to ⊥ because no constructor can be exposed. assocR maps ⊥ to ⊥. Therefore, according to the denotational semantics, the composition should map σ₂ ⊥ to ⊥, but as we saw, under parallel evaluation it does not. It would seem that the naive denotational semantics does not capture the semantics of parallel evaluation. The term case ⊥ of σ₁ b. σ₁ (σ₂ b) | σ₂ c. σ₂ c seems to be more defined than ⊥, even though no constructor is available in the head position.

Although my attempt at nice operational semantics failed, I am still confident some nice computation method exists. At the very least, I believe a rewriting system will work which has all the usual rewriting rules plus a few extra new redexes: one that says an elimination term applied to a case expression commutes the elimination term into all of the branches, another that says that when all branches of a case expression contain the same introduction term, that introduction term is commuted to the outside of the case expression, and maybe also the rules I listed above for unlifted products. I conjecture that this rewriting system is confluent and that unrestricted commuting conversions are convertible (probably requiring η-conversions as well).

Without proofs of my conjectures I am a little concerned that this all does not actually work out. There may be some bad interaction with fixpoints that I am not seeing. If this does all work out then shouldn’t I have heard about it by now?

Categories: Offsite Blogs

Instance of typeclass as return value

Haskell on Reddit - Sun, 02/22/2015 - 4:06pm

Suppose I create a typeclass with some instances.

    class Value a
    instance Value Int
    instance Value Bool

Now I want to return a Value from a function.

    foo :: (Value v) => v
    foo = True

But I get the error message:

Could not deduce (v ~ Bool)
from the context (Value v)
  bound by the type signature for foo :: Value v => v
  at /var/folders/mw/sjqkv7xs5rv6dy83b_g17z5c0000gn/T/flycheck2929287s/Value.hs:19:8-21
  ‘v’ is a rigid type variable bound by
      the type signature for foo :: Value v => v
      at /var/folders/mw/sjqkv7xs5rv6dy83b_g17z5c0000gn/T/flycheck2929287s/Value.hs:19:8
Relevant bindings include
  foo :: v
    (bound at /var/folders/mw/sjqkv7xs5rv6dy83b_g17z5c0000gn/T/flycheck2929287s/Value.hs:20:1)
In the expression: True
In an equation for ‘foo’: foo = True

However, this does work for a typeclass such as Num:

    foo :: (Num a) => a
    foo = 10 -- No error

Why does my typeclass behave differently than a 'builtin' typeclass? How can I fix it?
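(For context, and not from the thread: a signature like foo :: Value v => v promises that the caller may choose any instance of Value, so the body must work for all of them; foo = True only works when v is Bool. The Num example compiles only because the literal 10 is sugar for fromInteger 10, which itself has type Num a => a. Two common fixes are sketched below, using a hypothetical describe method for illustration.)

    {-# LANGUAGE ExistentialQuantification #-}

    class Value a where
      describe :: a -> String          -- hypothetical method, for illustration

    instance Value Int  where describe n = "Int: "  ++ show n
    instance Value Bool where describe b = "Bool: " ++ show b

    -- Fix 1: if the function decides the type, say so in the signature.
    foo1 :: Bool
    foo1 = True

    -- Fix 2: "some Value, the caller doesn't get to choose" is an existential;
    -- the caller can only use the class's methods on the wrapped value.
    data SomeValue = forall a. Value a => SomeValue a

    foo2 :: SomeValue
    foo2 = SomeValue True

    main :: IO ()
    main = case foo2 of SomeValue v -> putStrLn (describe v)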

submitted by joseph
[link] [21 comments]
Categories: Incoming News

Looking for a new release manager for cabal

haskell-cafe - Sun, 02/22/2015 - 3:01pm
(bcc: haskell-cafe)

Hi,

After about 3 years of cabal releases I'm looking for someone else to take over the responsibility as cabal release manager. As a release manager I try to keep on top of pull requests, make releases, and make sure bugs get triaged and bugfixes get released.

Anyone interested?
Categories: Offsite Discussion

PPDP 2015: 2nd call for papers

General haskell list - Sun, 02/22/2015 - 2:16pm
======================================================================
                           Call for papers
               17th International Symposium on
        Principles and Practice of Declarative Programming
                             PPDP 2015
      Special Issue of Science of Computer Programming (SCP)
                 Siena, Italy, July 14-16, 2015
                  (co-located with LOPSTR 2015)
                 http://costa.ls.fi.upm.es/ppdp15
======================================================================

SUBMISSION DEADLINE: 20 MARCH, 2015

PPDP 2015 is a forum that brings together researchers from the declarative programming communities, including those working in the logic, constraint and functional programming paradigms, but also embracing languages, database languages, and knowledge representation languages. The goal is to stimulate research in the use of logical formalisms and methods for specifying, performing, and analyzing computations, including mechanisms for mobility, modularity, concurrency, object-orientation, security,
Categories: Incoming News

LOPSTR 2015: 2nd Call for Papers

General haskell list - Sun, 02/22/2015 - 2:13pm
============================================================
               25th International Symposium on
      Logic-Based Program Synthesis and Transformation
                          LOPSTR 2015
         Special Issue of Formal Aspects of Computing
              http://alpha.diism.unisi.it/lopstr15/
      University of Siena, Siena, IT, July 13-15, 2015
                 (co-located with PPDP 2015)

DEADLINES
  Abstract submission: April 6, 2015
  Paper/Extended abstract submission: April 13, 2015
============================================================

The aim of the LOPSTR series is to stimulate and promote international research and collaboration on logic-based program development. LOPSTR is open to contributions in logic-based program development in any language paradigm. LOPSTR has a reputation for being a lively, friendly forum for presenting and discussing work in progress. Formal proceedings are produced only after the symposium so that authors can incorporate this feed
Categories: Incoming News

Roman Cheplyaka: Examples of monads in a dynamic language

Planet Haskell - Sun, 02/22/2015 - 2:00pm

In Monads in dynamic languages, I explained what the definition of a monad in a dynamic language should be and concluded that there’s nothing precluding them from existing. But I didn’t give an example either.

So, in case you are still wondering whether non-trivial monads are possible in a dynamic language, here you go. I’ll implement a couple of simple monads — Reader and Maybe — with proofs.

And all that will take place in the ultimate dynamic language — the (extensional) untyped lambda calculus.

The definitions of the Reader and Maybe monads are not anything special; they are the same definitions as you would write, say, in Haskell, except Maybe is Church-encoded.

What I find fascinating about this is that despite the untyped language, which allows more things to go wrong than a typed one, the monad laws still hold. You can still write monadic code and reason about it in the untyped lambda calculus in the same way as you would do in a typed language.

Reader

return x = λr.x
a >>= k  = λr.k(ar)r

Left identity

return x >>= k
    { inline return }
  = λr.x >>= k
    { inline >>= }
  = λr.k((λr.x)r)r
    { β-reduce }
  = λr.kxr
    { η-reduce }
  = kx

Right identity

a >>= return
    { inline return }
  = a >>= λx.λr.x
    { inline >>= }
  = λr.(λx.λr.x)(ar)r
    { β-reduce }
  = λr.ar
    { η-reduce }
  = a

Associativity

a >>= f >>= g
    { inline 1st >>= }
  = λr.f(ar)r >>= g
    { inline 2nd >>= }
  = λr.g((λr.f(ar)r)r)r
    { β-reduce }
  = λr.g(f(ar)r)r

a >>= (λx. f x >>= g)
    { inline 2nd >>= }
  = a >>= λx.λr.g((fx)r)r
    { inline 1st >>= }
  = λr.(λx.λr.g(fxr)r)(ar)r
    { β-reduce }
  = λr.g(f(ar)r)r

Maybe

return x = λj.λn.jx
a >>= k  = λj.λn.a(λx.kxjn)n

Left identity

return x >>= k
    { inline return }
  = λj.λn.jx >>= k
    { inline >>= }
  = λj.λn.(λj.λn.jx)(λx.kxjn)n
    { β-reduce }
  = λj.λn.kxjn
    { η-reduce }
  = kx

Right identity

a >>= return
    { inline return }
  = a >>= λx.λj.λn.jx
    { inline >>= }
  = λj.λn.a(λx.(λx.λj.λn.jx)xjn)n
    { β-reduce }
  = λj.λn.a(λx.jx)n
    { η-reduce }
  = λj.λn.ajn
    { η-reduce }
  = a

Associativity

a >>= f >>= g
    { inline 1st >>= }
  = (λj.λn.a(λx.fxjn)n) >>= g
    { inline 2nd >>= }
  = (λj.λn.(λj.λn.a(λx.fxjn)n)(λx.gxjn)n)
    { β-reduce }
  = λj.λn.a(λx.fx(λx.gxjn)n)n

a >>= (λx. f x >>= g)
    { inline 2nd >>= }
  = a >>= (λx.λj.λn.fx(λx.gxjn)n)
    { inline 1st >>= }
  = λj.λn.a(λx.(λx.λj.λn.fx(λx.gxjn)n)xjn)n
    { β-reduce }
  = λj.λn.a(λx.fx(λx.gxjn)n)n
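(A direct Haskell transcription of these definitions; this sketch is mine rather than the post's, and the Church encoding's answer type is exposed as an extra parameter here to keep the types rank-1.)

    -- Reader
    type Reader r a = r -> a

    returnR :: a -> Reader r a
    returnR x = \_ -> x

    bindR :: Reader r a -> (a -> Reader r b) -> Reader r b
    bindR a k = \r -> k (a r) r

    -- Church-encoded Maybe: a "Just" continuation and a "Nothing" default.
    type MaybeC b a = (a -> b) -> b -> b

    returnM :: a -> MaybeC b a
    returnM x = \j _ -> j x

    bindM :: MaybeC b a -> (a -> MaybeC b c) -> MaybeC b c
    bindM a k = \j n -> a (\x -> k x j n) n

    -- Example: division by zero short-circuits; read the result back
    -- with id as the "Just" continuation and 0 as the default.
    safeDiv :: Int -> Int -> MaybeC b Int
    safeDiv _ 0 = \_ n -> n
    safeDiv x y = \j _ -> j (x `div` y)

    example :: Int
    example = bindM (safeDiv 6 3) (\x -> returnM (x + 1)) id 0   -- 3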
Categories: Offsite Blogs

Robert Harper: OPLSS 2015 Announcement

Planet Haskell - Sun, 02/22/2015 - 1:40pm
We are pleased to announce the preliminary program for the 14th Annual Oregon Programming Languages Summer School (OPLSS) to be held June 15th to 27th, 2015 at the University of Oregon in Eugene.

This year’s program is titled Types, Logic, Semantics, and Verification and features the following speakers:

  • Amal Ahmed, Northeastern University
  • Nick Benton, Microsoft Cambridge Research Lab
  • Adam Chlipala, Massachusetts Institute of Technology
  • Robert Constable, Cornell University
  • Peter Dybjer, Chalmers University of Technology
  • Robert Harper, Carnegie Mellon University
  • Ed Morehouse, Carnegie Mellon University
  • Greg Morrisett, Harvard University
  • Frank Pfenning, Carnegie Mellon University

The registration deadline is March 16, 2015.

Full information on registration and scholarships is available at https://www.cs.uoregon.edu/research/summerschool/.

Please address all inquiries to summerschool@cs.uoregon.edu.

Best regards from the OPLSS 2015 organizers!

Robert Harper

Greg Morrisett

Zena Ariola

 
Categories: Offsite Blogs

Is working with numeric formulas on Haskell a pain for anyone else?

Haskell on Reddit - Sun, 02/22/2015 - 12:47pm

Since I started using Haskell, I can't ever implement any kind of mathematical formula without having a bad time. For example:

    bin :: (Num a) => a -> [a]
    bin 0 = []
    bin 1 = []
    bin x = b : bin (x - l / (2 - b))
      where
        l = 2 ^ floor (log x / log 2)
        b = floor (x * 2 / (l * 3))

This is an enumeration of the set of binary strings (0, 1, 00, 01, 10, 11, 100...). Plug this definition into GHC and it will tell you some things you didn't know about your mother:

binstr.hs:2:5:
    Could not deduce (Eq a) arising from the literal ‘0’
    from the context (Num a)
      bound by the type signature for bin :: Num a => a -> [a]
      at binstr.hs:1:8-26
    Possible fix:
      add (Eq a) to the context of
        the type signature for bin :: Num a => a -> [a]
    In the pattern: 0
    In an equation for ‘bin’: bin 0 = []

binstr.hs:5:13:
    Could not deduce (RealFrac a) arising from a use of ‘floor’
    from the context (Num a)
      bound by the type signature for bin :: Num a => a -> [a]
      at binstr.hs:1:8-26
    Possible fix:
      add (RealFrac a) to the context of
        the type signature for bin :: Num a => a -> [a]
    In the second argument of ‘(^)’, namely ‘floor (log x / log 2)’
    In the expression: 2 ^ floor (log x / log 2)
    In an equation for ‘l’: l = 2 ^ floor (log x / log 2)

binstr.hs:5:20:
    Could not deduce (Floating a) arising from a use of ‘log’
    from the context (Num a)
      bound by the type signature for bin :: Num a => a -> [a]
      at binstr.hs:1:8-26
    Possible fix:
      add (Floating a) to the context of
        the type signature for bin :: Num a => a -> [a]
    In the first argument of ‘(/)’, namely ‘log x’
    In the first argument of ‘floor’, namely ‘(log x / log 2)’
    In the second argument of ‘(^)’, namely ‘floor (log x / log 2)’

binstr.hs:5:26:
    Could not deduce (Fractional a) arising from a use of ‘/’
    from the context (Num a)
      bound by the type signature for bin :: Num a => a -> [a]
      at binstr.hs:1:8-26
    Possible fix:
      add (Fractional a) to the context of
        the type signature for bin :: Num a => a -> [a]
    In the first argument of ‘floor’, namely ‘(log x / log 2)’
    In the second argument of ‘(^)’, namely ‘floor (log x / log 2)’
    In the expression: 2 ^ floor (log x / log 2)

binstr.hs:6:9:
    Could not deduce (Integral a) arising from a use of ‘floor’
    from the context (Num a)
      bound by the type signature for bin :: Num a => a -> [a]
      at binstr.hs:1:8-26
    Possible fix:
      add (Integral a) to the context of
        the type signature for bin :: Num a => a -> [a]
    In the expression: floor (x * 2 / (l * 3))
    In an equation for ‘b’: b = floor (x * 2 / (l * 3))
    In an equation for ‘bin’:
        bin x = b : bin (x - l / (2 - b))
          where
            l = 2 ^ floor (log x / log 2)
            b = floor (x * 2 / (l * 3))

I'm experienced enough to know what it means before even reading it: I probably got some "floating/integral" usage wrong somewhere in the pipeline. So what do I do now? I read the type of every single numeric function I used in order to fix the plumbing. After at least an hour of playing around, I get this:

    bin :: (Integral a) => a -> [a]
    bin 0 = []
    bin 1 = []
    bin x = b : bin (x - (div l (2 - b)))
      where
        l = 2 ^ floor (logBase 2 (fromIntegral x))
        b = div (x * 2) (l * 3)

    fbin :: (RealFrac a, Floating a) => a -> [a]
    fbin 0 = []
    fbin 1 = []
    fbin x = b : fbin (x - l / (2 - b))
      where
        l   = 2 ^ floor (log x / log 2)
        b   = flr (x * 2 / (l * 3))
        flr = fromIntegral . floor

Which is not only uglier, but I also needed 2 versions. Even in C++ I can write a single version that works in a minute. I honestly can't think of a single programming language where writing such a function would be any harder. I have a feeling there is something really, really wrong with Haskell's numeric typeclasses. For example: Num, Integral, Integer, RealFrac, Fractional, Enum - at this point, it seems like we are just listing arbitrary properties in a way that is not just confusing, but absolutely meaningless from a mathematical point of view, ugly in a software engineering sense, and sometimes plainly wrong. For example, why is "Float" not an instance of "Integral", when, in reality, Floats can absolutely always be used where Integers can?

That is not how numbers work. I want to be able to write my mathematical formulas without having to plumb a lot of conversions all around. I don't want to have a giant list of type constraints. I don't want to think about how Haskell's authors made sense of numeric types. I just want a "Real" type that I can use. The compiler should be able to do the rest. Who designed that schema? What is the justification behind it?
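(For comparison, one common way out: pick one concrete type up front and convert only where an integer is genuinely meant. The sketch below is my monomorphic rendering of the fbin version above, assuming Double input; it is not anything the poster wrote.)

    bin :: Double -> [Int]
    bin 0 = []
    bin 1 = []
    bin x = b : bin (x - l / (2 - fromIntegral b))
      where
        l = 2 ^^ (floor (logBase 2 x) :: Int)   -- intended as the largest power of 2 <= x, as in the original
        b = floor (x * 2 / (l * 3)) :: Int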

submitted by SrPeixinho
[link] [47 comments]
Categories: Incoming News