News aggregator

How do I sort items in a wxhaskell listCtrl?

haskell-cafe - Wed, 03/02/2016 - 2:59pm
Hi list, I couldn't find any clues googling and turn to you in the hope that someone might point me in the right direction: I found the function |listCtrlSortItems2| with type |ListCtrl a -> Closure b -> IO Bool| and |closureCreate| (whose type is a bit involved), but I have no idea how to use them. In the end, I would like to have a listCtrl where clicking on column headers sorts the items, ideally indicating the sort order with a little arrow. Any help is really appreciated, Tilmann
Categories: Offsite Discussion

Haskell 2016: Call for Papers

haskell-cafe - Wed, 03/02/2016 - 12:01am
========================================================================
ACM SIGPLAN CALL FOR SUBMISSIONS
Haskell Symposium 2016
Nara, Japan, 22-23 September 2016, directly after ICFP
http://www.haskell.org/haskell-symposium/2016
========================================================================

** The Haskell Symposium has an early track this year **
** See the Submission Timetable for details. **

The ACM SIGPLAN Haskell Symposium 2016 will be co-located with the International Conference on Functional Programming (ICFP 2016) in Nara, Japan. The Haskell Symposium aims to present original research on Haskell, discuss practical experience and future development of the language, and to promote other forms of denotative programming. Topics of interest include: * Language Design, with a focus on possible extensions and modifications of Haskell as well as critical discussions of the
Categories: Offsite Discussion

[TFP 2016] 2nd call for papers

haskell-cafe - Tue, 03/01/2016 - 10:00am
----------------------------- C A L L F O R P A P E R S -----------------------------

======== TFP 2016 ===========
17th Symposium on Trends in Functional Programming
June 8-10, 2016
University of Maryland, College Park
Near Washington, DC
http://tfp2016.org/

The symposium on Trends in Functional Programming (TFP) is an international forum for researchers with interests in all aspects of functional programming, taking a broad view of current and future trends in the area. It aspires to be a lively environment for presenting the latest research results, and other contributions (see below). Authors of draft papers will be invited to submit revised papers based on the feedback received at the symposium. A post-symposium refereeing process will then select a subset of these articles for formal publication. TFP 201
Categories: Offsite Discussion

Douglas M. Auclair (geophf): February 2016 1HaskellADay Problems and Solutions

Planet Haskell - Tue, 03/01/2016 - 12:32am
February 2016
  • February 29th, 2016: For today's #LeapDay #haskell problem, @geophf asks for your (π) deets! courtesy of @OnThisDayinMath http://lpaste.net/8336236749340540928 Today's #haskell solution *Main> last10idx (take 1000000 π) ~> 999699 http://lpaste.net/6200187768866340864 @OnThisDayinMath 
  • February 26th, 2016: Doing some geo-plotting with #Wikidata for today's (coming up) #haskell problem http://lpaste.net/7869134501572509696 ... and we extracted State capitol lat/longs from #wikidata with this #haskell solution http://lpaste.net/2256342570529456128 
  • February 25th, 2016: A little bit of parsing wikidata from a US State/Capitol SPARQL query for today's #haskell problem http://lpaste.net/7506043738804715520 Got SPARQL-y, and today's #haskell solution gets you State capitol data ... in #haskell! (Did I mention #haskell?) http://lpaste.net/1147233748136230912
  • February 24th, 2016: Yesterday we sliced, sliced, baby! For today's #haskell problem, we get all dicy wid it! http://lpaste.net/4683779741730209792 converting triples to acids You get a protein, and YOU get a protein, and EVERYBODY GETS A PROTEIN in today's #haskell solution http://lpaste.net/4088917949371383808
  • February 23rd, 2016: Today's #Haskell problem has us singing "Splice, Splice, baby!" http://lpaste.net/4813324790125297664 And the #haskell solution uses Comonads to chunk the input gene sequence to nucleotide triples http://lpaste.net/306850901920841728
  • February 22nd, 2016: This week we'll look at gene sequencing. Today's #haskell problem we'll create a bi-directionally mapped CODON table http://lpaste.net/5133590580013563904 And the #haskell solution tabled that CODONs! http://lpaste.net/6015179222207692800 improving the original along the way.
  • February 19th, 2016: For today's #haskell problem, we deliver our customer the product: codes and, well: codes for their reports. YAY! http://lpaste.net/6281519086354563072 Annnnnnnnnddddd the file of TWENTY-ONE THOUSAND NINE HUNDRED CODES! http://lpaste.net/8746184321112473600 
  • February 17th, 2016: Today's #haskell problem gives ONE MILLION codes to the reports-god ... *ahem* I MEANT reports-GENERATOR! http://lpaste.net/4707338728270462976 ... And the solution:
    Dr. Evil: ONE MILLION CODES!
    Mr. NumberTwo: Sorry, sir, it's only 21900 codes.
    http://lpaste.net/5798834052991549440
    Dr. Evil: ... 
  • February 16th, 2016: Today's #haskell problem has us parsing MORE! This time regional offices and business units http://lpaste.net/7840815037705879552 And we have our Regional offices and business units http://lpaste.net/6948226477460553728 as a #graph and as a #haskell module 
  • February 15th, 2016: This week we'll be looking at accounting and generating reports! Because YAY! Today's #haskell problem: parsing food http://lpaste.net/5434234717320773632 Foods: Parsed (but hopefully not eaten. YUCK!), Snark: captured, Chart: pie-d ... HUH?!? http://lpaste.net/2151340485682135040 
  • February 12th, 2016: Today's #Haskell problem generalizes random strings to sequences of enumerated values http://lpaste.net/2086566589242540032, specifically Gene sequences. Today's #haskell solution surely gives us a lot of nucleotides to consider! http://lpaste.net/2201413576650915840
  • February 11th, 2016: Writing random strings for today's #haskell problem http://lpaste.net/3636386820536664064 because that's every coder's dream-project. Today's #haskell solution does some stats on the generated random strings http://lpaste.net/652044497311498240
  • February 10th, 2016: You generate a set of 'random' numbers. The next set is very similar ... let's fix that for today's #haskell problem http://lpaste.net/4512472853710372864 Today's #haskell solution made rnd more rnd, because reasons http://lpaste.net/6036010844385443840
  • February 9th, 2016: We learn the past tense of the verb 'to see' is 'See(d),' 'saw(ed),' or 'sowed' http://lpaste.net/4776920810533158912 and we generate some random numbers. ICYMI that was the announcement for today's #haskell problem: it's a Big Generator. Yes. It is. (*groan Okay, I'll stop) (NEVER!)
    The #haskell solution has us
    shiftR to the Right ...
    movin' to the Left ...
    we are the voices of the
    Big Generator! http://lpaste.net/879893392832593920
  • February 8th, 2016: Creating a random seed from POSIX time for today's #haskell problem http://lpaste.net/2781359140864262144 En-split-ified POSIX time http://lpaste.net/8844075898622705664
  • February 5th, 2016: We tackle an Amino Acid CODON table for today's #haskell problem http://lpaste.net/6606149825735950336 suggested by a GATTACA-tweet by @randal_olson The Amino Acid table as a #graph #haskell-solution http://lpaste.net/9122099753146908672
  • February 4th, 2016: Today's #haskell problem has us create the cal (not Ripken) app http://lpaste.net/846188289084882944 Today's #haskell solution has us Ripkenin' dat Cal! http://lpaste.net/8386114973348134912
  • February 3rd, 2016: The Days of Our Lives (or at least of the year) for today's #haskell problem http://lpaste.net/6997792822417948672 These were the best days of our lives! / Back in the Summer of '69! http://lpaste.net/6823202028173393920
  • February 2nd, 2016: Dates and Days for today's #haskell problem http://lpaste.net/1501657254015795200 Date nuts and Grape nuts! ... no ... wait. http://lpaste.net/3546538887843151872
  • February 1st, 2016: Happy February, everyone! Today's #haskell problem: arrows, monads, comonads! http://lpaste.net/3387588436050313216
Categories: Offsite Blogs

LambdaCube: Version 0.5 released

Planet Haskell - Mon, 02/29/2016 - 3:49pm

The time has come to release a new version of LambdaCube 3D on Hackage, which brings lots of improvements. Also, the previous release is available on Stackage since the 2016-02-19 nightly. Just as last time, this post will only scratch the surface and give a high-level overview of what the past weeks brought.

The most visible change in the language is in the handling of tuples. Instead of defining them as distinct product types, the underlying machinery was changed to heterogeneous lists. As a consequence, the language is more complete, as we don’t have to explicitly define tuples of different arities in the compiler any more, and this also allowed us to simplify the codebase quite a bit. There is one gotcha though: unary tuples are a thing now, and they must be explicitly marked where they can occur (e.g. across shader boundaries). With the current syntax, a unary tuple is formed by surrounding any expression with double parentheses. You can read more about it in the language specification.
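
A minimal sketch of that syntax (illustrative names only, not code from the LambdaCube repository):

-- (v) is plain parenthesization; ((v)) marks v as a unary tuple,
-- needed e.g. where a single value crosses a shader boundary
ordinary = (v)
unary    = ((v))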

Another important change is that functions in the source program appear as functions in the generated code, i.e. they aren't automatically inlined. Since modern GPU drivers often perform common subexpression elimination (CSE) during shader compilation, aggressively inlining everything puts unnecessary burden on them, so it's probably for the better not to do it in most cases. This change also makes it much easier to read the generated code.

As for the internals of the compiler, many things were changed and improved since the last release. We’d like to highlight just one of these developments: we switched from parsec to megaparsec, which brought some performance improvements.

The online editor has a new time control feature: you can both pause the time and set it with a slider. We removed most of the custom uniforms, and now every example calculates everything using only the time as the input. As an added bonus, the LambdaCube 3D logo texture can be used in the editor as showcased by the Texturing example.

On the community side, the most important new thing is the lambdacube3d-discuss mailing list. We also added a new community page to the website so it’s easier to find all the places for LambdaCube related discussions. As for the website, both the API docs and the language specs pages received some love, plus we added a package overview page to dispel some confusion. Finally, this being the second release of the new system, we’re also introducing a changelog for the compiler.

Our short term plan is to take a little break from the compiler itself and improve the Quake 3 example. It serves both as a benchmark/testbed and as a reality check that shows us how it feels to develop against our API. On a bit longer term, we intend to separate the compiler frontend as a self-contained component that could be used for making Haskell-like languages independently of the LambdaCube 3D project.

Categories: Offsite Blogs

Question: Do block precedence

haskell-cafe - Mon, 02/29/2016 - 1:20am
Dear Haskell-Cafe mailing list people (?) I've been writing parentheses around do blocks since forever now, but I don't get why they are necessary. I can't seem to come up with a program where they are necessary. Am I missing something, or are parentheses around do blocks necessary for no reason? Since parsing 'do' blocks as if they have parentheses around them doesn't seem to break any code, why not do so?

when (doBlocksNeedParenthesis) do putStrLn "This code is invalid."
when (doBlocksNeedParenthesis) $ do putStrLn "This code is valid."
when (doBlocksHaveInvisibleParenthesis) do putStrLn "These are equal v"
when (doBlocksHaveInvisibleParenthesis) (do putStrLn "These are equal ^")
Categories: Offsite Discussion

Infer Nat type from Integer argument

haskell-cafe - Sun, 02/28/2016 - 11:40pm
{-# LANGUAGE DataKinds, KindSignatures, ScopedTypeVariables, GADTs, AllowAmbiguousTypes #-}

import GHC.TypeLits
import Data.Proxy
import Data.Type.Equality

data NatString (n :: Nat) = NatString String deriving Show

showNS :: KnownNat n => NatString n -> (String, Integer)
showNS b@(NatString s) = (s, natVal b)

In this example, we use NatString like this:
Categories: Offsite Discussion

Functional dependencies and overloading of operation

haskell-cafe - Sun, 02/28/2016 - 11:00pm
I am trying to use functional dependencies to overload an operation on a vector space and its basis (using the vector space module Math.Algebras.VectorSpace <https://hackage.haskell.org/package/HaskellForMaths-0.4.8/docs/Math-Algebras-VectorSpace.html>). I have tried to mimic the example for matrices and vectors from https://wiki.haskell.org/Functional_dependencies <https://wiki.haskell.org/Functional_dependencies>. I have tried different ways of defining classes and instances, but I cannot get it to work. What I want is to have the «same» function for these cases:

operation :: a -> a -> Vect k a
operation :: a -> Vect k a -> Vect k a
operation :: Vect k a -> a -> Vect k a
operation :: Vect k a -> Vect k a -> Vect k a

Here is some sample code to illustrate what I want. Does anybody have an idea of how to solve it?

{-# LANGUAGE MultiParamTypeClasses #-}
{-# LANGUAGE FlexibleInstances #-}
{-# LANGUAGE FunctionalDependencies #-}

import Math.Algebras.VectorSpace

linearExtension :: (Eq k, Num k, Ord a)
Categories: Offsite Discussion

FLTKHS - GHCi help

haskell-cafe - Sun, 02/28/2016 - 9:19pm
Hi all, I am the author of FLTKHS (http://github.com/deech/fltkhs), which aims to make it easy to install, write and deploy a native GUI application in pure Haskell. It is already able to build static executables on Linux, *BSD and OSX (Yosemite & El Capitan). However, a smooth GHCi experience across platforms is still lacking due to some outstanding issues that I have been unable to figure out. I have written up the issues in the documentation [1]. I could use some help specifically with running a REPL with a C++ shared library in GHC 7.10.3. It works fine in 7.8.4. This is documented in the section titled "GHCi (Linux, *BSD & OSX Yosemite)" in the link below. There is also a mysterious GHCi error on Windows that I can't seem to figure out. It is documented in the "GHCi (Windows only)" section. Any help or pointers are appreciated. Thanks! -deech [1] http://hackage.haskell.org/package/fltkhs/docs/Graphics-UI-FLTK-LowLevel-FLTKHS.html#g:3
Categories: Offsite Discussion

Gabriel Gonzalez: State of the Haskell Ecosystem - February 2016 Edition

Planet Haskell - Sun, 02/28/2016 - 4:02pm

Six months ago I released the first "State of the Haskell Ecosystem", a collaborative wiki documenting the maturity of the Haskell language for various application domains:

The primary goals of this wiki are to:

  • Advertise what areas the Haskell language and ecosystem excel at
  • Warn newcomers about common pitfalls so they avoid unpleasant surprises
  • Give new contributors ideas for where they can improve things

Every six months I plan to post about what changed since the last update in order to highlight any major changes or trends.

Education

The biggest improvement in the Haskell ecosystem was the Early Access release of the Haskell Programming from first principles book. The book is not yet complete, but I consider it the best resource for people new to the language. The material is very beginner-friendly and written for somebody without any functional programming experience whatsoever.

The book is not free, but if you're really serious about learning Haskell the price is well worth it and this book will save you a lot of headaches.

The rating of the "Educational" section still remains "Immature" until this book is out of Early Access and finally released, but once the book is out I will finally mark Haskell as having "Mature" educational resources.

IDE support

For a long time vim and emacs were the Haskell editors of choice. Now more traditional IDEs like Atom and IntelliJ are starting to get Haskell support, but their respective Haskell plugins still need a bit more polish.

Also, Haskell for Mac is supposed to work really well for learning the language if you have an OS X development environment.

However, my rating hasn't changed for IDE support, and I believe this is still the biggest gap in the Haskell ecosystem so I want to draw attention to this area for people interested in contributing to Haskell. Improving IDE support is the single easiest way to lower the entry barrier to newcomers.

If you're not sure what editor to contribute to I recommend the ide-haskell plugin for Atom. This editor and plugin are freely available and cross-platform and many users have reported an excellent experience with this plugin, although some setup issues still remain.

Another important area where newcomers can contribute is the Leksah IDE, a true integrated development environment for Haskell that is itself written in Haskell.

Front-end web programming

stack recently added support for ghcjs, meaning that it's now very easy to start a new ghcjs project. Previously, setting up ghcjs correctly was very difficult, but those days are over now.

The ghcjs ecosystem still has a long way to go before I would rate it as "Mature", but stack support is a big and necessary step in that direction.

Standalone GUI applications

Right now I'm looking for just one really polished widget toolkit before I rate this area of the Haskell ecosystem "Mature".

Deech has made great strides in improving the ease of setup and use for the FLTK Haskell bindings and integration with visual interface builders. The setup process still needs a bit more polish but I think his work probably holds the most promise for a mature widget toolkit binding.

Types

The Liquid Haskell extension has made some great strides in adding refinement types to the language. This is not yet an official language extension, but you can still use it today and it works really well. You can learn more about refinement types by reading the awesome Liquid Haskell tutorial:

I already gave Haskell a "Best in class" rating for the type system, but advances like Liquid Haskell just further cement its lead.

Parsing

I upgraded the parsing rating from "Mature" to "Best in class". Haskell has always been a leader among languages when it comes to parsing, but I held off on a "Best in class" rating for a while because all the Haskell parsing libraries required you to sacrifice one of the following features:

  • Good error messages
  • Full backtracking (i.e. no need to left-factor the grammar)
  • First-class parsers (i.e. not a parser generator like happy)

The Earley library changed that and provides a well-rounded choice. That doesn't mean that I recommend Earley for all parsing use cases, though, and there are still great reasons to use other parsing libraries:

  • attoparsec remains the king of speed, generating parsers competitive in speed with C (see the sketch below)
  • trifecta remains the king of error messages, generating gorgeous clang-style errors

However, if Earley gets a little more polish then I'd probably switch to Earley as my default parsing library recommendation for new users.
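
To make the attoparsec point concrete, here is a minimal sketch (the example is mine, not from the wiki; parseOnly, decimal, sepBy, and char are standard attoparsec combinators). It is fast, but its error messages are terse compared to trifecta's:

import Data.Attoparsec.ByteString.Char8 (char, decimal, parseOnly, sepBy)
import Data.ByteString (ByteString)

-- Parse a comma-separated list of integers, e.g. "1,2,3".
parseInts :: ByteString -> Either String [Int]
parseInts = parseOnly (decimal `sepBy` char ',')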

Distributed systems

The newly added glue-core / glue-* libraries give Haskell a useful new service toolkit. Haskell still gets an "Immature" rating in this area until I see people consolidate on a common stack for service-oriented architectures and report success stories in industry.

New sections

Two generous contributors added two new sections to the wiki which I would like to highlight:

I would like to thank both of them for their contributions!

Conclusions

As always, visit the Github collaborative wiki for the most up-to-date information since this post will eventually go stale. Pull requests are always welcome, both for corrections and new additions.

Categories: Offsite Blogs

Yesod Web Framework: First class stream fusion

Planet Haskell - Sun, 02/28/2016 - 7:00am

This is a blog post about a little thought experiment I've been playing with recently. The idea has been bouncing around in my head for a few years now, but some recent discussions with coworkers at FP Complete (particularly Chris Done and Francesco Mazzoli) made me spend a few hours on this, and I thought it would be worth sharing for some feedback.

Premise

The basic premise is this: we typically follow a philosophy in many common libraries that there's the nice abstraction layer that we want to present to users, and then the low-level approach under the surface that the user should never know about but that makes everything fast. This generally takes the form of fusion via rewrite rules, and appears in things like build/foldr fusion in base, and stream fusion in vector (and more recently, in conduit).

Here are the ideas fueling this thought experiment:

  • Making our code only fast when GHC rewrite rules fire correctly leads to unreliable speedups. (Check the benchmarks on the example repo, which show conduit slowing down despite its implementation of stream fusion.) This is a very difficult situation to solve as a library user.

  • By hiding the real implementation away under a nice abstraction, library users do not necessarily have any understanding of what kinds of code will be fast and what will be slow. This is not quite as frustrating as the previous point, but still quite surprising.

  • On the flip side, the high level abstractions generally allow for more flexible code to be written than the lower level approach may allow.

  • Is there a way to make the low-level, fast approach the primary interface that the user sees, lose a minimal amount of functionality, and perhaps regain that functionality by making the more featureful abstraction available via explicit opt-in?

  • Perhaps we can get better category laws out of a different formulation of a streaming library (like pipes has), but still hold onto extra functionality (like conduit has).

If that was too abstract, don't worry about it. Keep reading, and you'll see where these ideas led me.

Standard stream fusion

Duncan Coutts, Roman Leshchinskiy and Don Stewart introduced a concept called stream fusion, which powers the awesome speed and minimal memory usage of the vector package for many common cases. The idea is:

  • We have a stream abstraction which can be aggressively optimized by GHC (details unimportant for understanding this post)

  • Represent vector operations as stream operations, wrapped by functions that convert to and from vectors. For example:

    mapVector f = streamToVector . mapStream f . vectorToStream
  • Use GHC rewrite rules to remove conversions back and forth between vectors and streams, e.g.:

    mapVector f . mapVector g
      = streamToVector . mapStream f . vectorToStream . streamToVector . mapStream g . vectorToStream
      -- Apply rule: vectorToStream . streamToVector = id
      = streamToVector . mapStream f . id . mapStream g . vectorToStream
      = streamToVector . mapStream f . mapStream g . vectorToStream

In practice, this can allow long chains of vector operation applications to ultimately rewrite away any trace of the vector, run in constant space, and get compiled down to a tight inner loop to boot. Yay!
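
For reference, the rule used in the derivation above would be written like this in GHC (a sketch: streamToVector and vectorToStream are the pseudocode names from this post, not vector's actual internals):

{-# RULES
"vectorToStream/streamToVector" forall s.
    vectorToStream (streamToVector s) = s
  #-}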

User facing stream fusion

However, there's an underlying, unstated assumption that goes along with all of this: users would rather look at vector functions instead of stream functions, and therefore we should rely on GHC rewrite rules to hide the "complicated" stream stuff. (Note: I'm simplifying a lot here, there are other reasons to like having a Vector-oriented interface for users. We'll touch on that later.)

But let's look at this concretely with some type signatures. First, our main Stream datatype:

data Stream o m r

This type produces a stream of o values, runs in the m monad, and ultimately ends with a value of r. The r type parameter is in practice most useful so that we can get Functor/Applicative/Monad instances for our type, but for our purposes today we can assume it will always be (). And m allows us more flexibility for optimizing things like mapM, but if you treat it as Identity we have no effects going on. Said another way: Stream o Identity () is more or less identical to [o] or Vector o.
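
To make this concrete, here is one plausible shape for such a type, in the style of vector's stream fusion (my own sketch; the actual repository may define it differently):

{-# LANGUAGE ExistentialQuantification #-}

-- One step of the stream: emit a value, skip, or finish with a result.
data Step s o r
    = Yield o s
    | Skip s
    | Done r

-- A stream is a step function over some hidden state type s,
-- running effects in m, producing o values and a final result r.
data Stream o m r = forall s. Stream (s -> m (Step s o r)) s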

How about common functions? Well, since this is just a thought experiment, I only implemented a few. Consider:

enumFromToS :: (Ord o, Monad m, Num o) => o -> o -> Stream o m ()

mapS :: Functor m => (i -> o) -> Stream i m r -> Stream o m r

foldlS :: Monad m => (r -> i -> r) -> r -> Stream i m () -> m r

-- Yes, we can build up more specific functions
sumS :: (Num i, Monad m) => Stream i m () -> m i
sumS = foldlS (+) 0
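
Hedged implementations of these against the Step/Stream sketch above might look as follows (again my own sketch, not the repository's code):

{-# LANGUAGE BangPatterns #-}

enumFromToS :: (Ord o, Monad m, Num o) => o -> o -> Stream o m ()
enumFromToS lo hi = Stream step lo
  where
    step i
        | i > hi    = return (Done ())
        | otherwise = return (Yield i (i + 1))

mapS :: Functor m => (i -> o) -> Stream i m r -> Stream o m r
mapS f (Stream step s0) = Stream (fmap go . step) s0
  where
    go (Yield i s) = Yield (f i) s
    go (Skip s)    = Skip s
    go (Done r)    = Done r

foldlS :: Monad m => (r -> i -> r) -> r -> Stream i m () -> m r
foldlS f r0 (Stream step s0) = loop r0 s0
  where
    loop !r s = do
        next <- step s
        case next of
            Yield i s' -> loop (f r i) s'
            Skip s'    -> loop r s'
            Done ()    -> return r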

If you ignore the m and r type parameters, these functions look identical to their list and vector counterparts. As opposed to lists and vectors, though, we know for a fact that these functions will never end up creating a list of values in memory, since no such capability exists for a Stream. Take, for example, the typical bad implementation of average for lists:

average :: [Double] -> Double
average list = sum list / fromIntegral (length list)

This is problematic, since it traverses the entire list twice: that is CPU-inefficient and can force a large list to remain resident in memory. This mistake cannot be made naively with the stream implementation. Instead, you're forced to write it the efficient way, avoiding confusion down the road:

averageS :: (Fractional i, Monad m) => Stream i m () -> m i
averageS = fmap (\(total, count) -> total / count) . foldlS go (0, 0)
  where
    go (!total, !count) i = (total + i, count + 1)

Of course, this is also a downside: when you're trying to do something simple without worrying about efficiency, being forced to deal with the lower-level abstraction can be an annoyance. That's one major question of this thought experiment: which world is the better one to live in?

Capturing complex patterns

Coroutine-based streaming libraries like conduit and pipes provide the ability to express some really complex flows of control without breaking composability. For example, in conduit, you can use ZipSink to feed two consumers of data in parallel and then use standard Applicative notation to combine the result values. You can also monadically compose multiple transformers of a data stream together and pass unconsumed data from one to the other (leftovers). Without some significant additions to our stream layer (which would likely harm performance), we can't do any of that.
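
As a reminder of what that looks like, here is the ZipSink pattern sketched with conduit (ZipSink, getZipSink and Data.Conduit.List.fold are real conduit API; the example itself is mine):

import Data.Conduit
import qualified Data.Conduit.List as CL

-- Feed one stream of Ints to two sinks at once and combine their
-- results applicatively: total and count in a single pass.
sumAndCount :: Monad m => Sink Int m (Int, Int)
sumAndCount = getZipSink $ (,)
    <$> ZipSink (CL.fold (+) 0)
    <*> ZipSink (CL.fold (\n _ -> n + 1) 0)

-- e.g. print =<< (CL.sourceList [1..10] $$ sumAndCount)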

Interestingly, all of the "cool" stuff you want to do in conduit happens before you connect a component to its upstream or downstream neighbors. For example, let's say I have two functions for parsing different parts of a data file:

parseHeader :: Monad m => Sink ByteString m Header
parseBody :: Monad m => Sink ByteString m Body

I can compose these together monadically (or applicatively in this case) like so:

parseHeaderAndBody :: Monad m => Sink ByteString m (Header, Body)
parseHeaderAndBody = (,) <$> parseHeader <*> parseBody

So what if we had a conversion function that takes a coroutine-based abstraction and converted it into our streaming abstraction? We don't expect to have the same level of performance as a hand-written streaming abstraction, but can we at least get composability? Thankfully, the answer is yes. The Gotenks module implements a conduit-like library*. This library follows all of the common patterns: await, yield, and leftover functions, monadic composition, and could be extended with other conduit features like ZipSink.

* Unlike conduit, Gotenks does not provide finalizers. They complicate things for a small example like this, and after a lot of thought over the years, I think it's the one extra feature in conduit vs pipes that we could most do without.

One thing notably missing, though, is any kind of operator like =$=, $$, or (from pipes) >-> or <-<, which allows us to connect an upstream and downstream component together. The reason is that, instead, we have three functions to convert to our streaming abstraction:

toSource :: Applicative m => Gotenks () o m r -> Stream o m r
toTransform :: Applicative m => Gotenks i o m r -> Stream i m () -> Stream o m r
toSink :: Monad m => Gotenks i Void m r -> Stream i m () -> m r

And then, we're able to use standard function applications - just like in the streaming layer - to stick our components together. For example, take this snippet from the benchmark:

[ bench' "vegito" $ \x -> runIdentity $ sumS $ mapS (+ 1) $ mapS (* 2) $ enumFromToS 1 x , bench' "gotenks" $ \x -> runIdentity $ toSink sumG $ toTransform (mapG (+ 1)) $ toTransform (mapG (* 2)) $ toSource (enumFromToG 1 x)

The obvious benefit here is that our coroutine-based layer is fully compatible with our stream-based layer, making for easy interop/composition. But in addition:

  • We now get to trivially prove the category laws, since we're just using function composition! This is more important than it may at first seem. To my knowledge, this is the first time we've ever gotten a streaming implementation that has baked-in leftover support and full category laws, including left identity. The reason this works is because we now have an explicit conversion step where we "throw away" leftovers, which doesn't exist in conduit.
  • In case you were worried: the Gotenks layer is implemented as a functor combined with the codensity transform, guaranteeing trivially that we're also obeying the monad laws. So without breaking a sweat, we've now got a great law-abiding system. (Also, we get efficient right-association of monadic bind.)
  • While the coroutine-based code will by nature be slower, the rest of our pipeline can remain fast by sticking to streams.

What's next?

Honestly, I have no idea what's next. I wanted to see if I could write a streaming implementation that was guaranteed fast, provided interop with conduit-style workflows, and would be relatively easy to teach. With the exception of the two extra type parameters possibly causing confusion, I think everything else is true. As far as where this goes next, I'm very much open to feedback.

UPDATE: Benchmark results

Don Stewart asked me on Twitter to share the benchmark results for this repo. They're not particularly enlightening, which is why I didn't include them initially. Nonetheless, putting them here makes it clear what I'm getting at: vegito, vector, and conduit (when stream fusion kicks in) are all the same speed. In fact, the more interesting thing is to look at their compiled core, which is identical. The annoyance is that, while Data.Conduit.List and Data.Conduit.Combinators both fire their rewrite rules, the combinators provided by the Conduit module do not fire, leading to a significant (read: 200-fold) slowdown. This slowdown is exacerbated by the choice of benchmark, which is intended to demonstrate the specific power of the stream fusion optimizations.

Categories: Offsite Blogs

Gabriel Gonzalez: Auto-generate a command line interface from a data type

Planet Haskell - Sun, 02/28/2016 - 2:10am

I'm releasing the optparse-generic library which uses Haskell's support for generic programming to auto-generate command-line interfaces for a wide variety of types.

For example, suppose that you define a record with two fields:

data Example = Example { foo :: Int, bar :: Double }

You can auto-generate a command-line interface tailored to that record like this:

{-# LANGUAGE DeriveGeneric #-}
{-# LANGUAGE OverloadedStrings #-}

import Options.Generic

data Example = Example { foo :: Int, bar :: Double }
    deriving (Generic, Show)

instance ParseRecord Example

main = do
    x <- getRecord "Test program"
    print (x :: Example)

This generates the following command-line interface:

$ stack runghc Example.hs -- --help
Test program

Usage: Example.hs --foo INT --bar DOUBLE

Available options:
-h,--help Show this help text

... and we can verify that the interface works by supplying the appropriate arguments:

$ stack runghc Example.hs -- --foo 1 --bar 2.5
Example {foo = 1, bar = 2.5}

You can also compile the program into a native executable binary:

$ stack ghc Example.hs
[1 of 1] Compiling Main ( Example.hs, Example.o )
Linking Example ...
$ ./Example --foo 1 --bar 2.5
Example {foo = 1, bar = 2.5}

Features

The auto-generated interface tries to be as intelligent as possible. For example, if you omit the record labels:

data Example = Example Int Double

... then the fields will become positional arguments:

$ ./Example --help
Test program

Usage: Example INT DOUBLE

Available options:
-h,--help Show this help text

$ ./Example 1 2.5
Example 1 2.5

If you wrap a field in Maybe:

data Example = Example { foo :: Maybe Int }

... then the corresponding command-line flag/argument becomes optional:

$ ./Example --help
Test program

Usage: Example [--foo INT]

Available options:
-h,--help Show this help text

$ ./Example
Example {foo = Nothing}

$ ./Example --foo 2
Example {foo = Just 2}

If a field is a list of values:

data Example = Example { foo :: [Int] }

... then the corresponding command-line flag/argument can be repeated:

$ ./Example --foo 1 --foo 2
Example {foo = [1,2]}

$ ./Example
Example {foo = []}

If you wrap a value in First or Last:

data Example = Example { foo :: First Int, bar :: Last Int }

... then you will get the first or last match, respectively:

$ ./Example --foo 1 --foo 2 --bar 1 --bar 2
Example {foo = First {getFirst = Just 1}, bar = Last {getLast = Just 2}}

$ ./Example
Example {foo = First {getFirst = Nothing}, bar = Last {getLast = Nothing}}

You can even do fancier things like ask for the Sum or Product of all matching fields:

data Example = Example { foo :: Sum Int, bar :: Product Int }

... and it will do the "right thing":

$ ./Example --foo 1 --foo 2 --bar 1 --bar 2
Example {foo = Sum {getSum = 3}, bar = Product {getProduct = 2}}

$ ./Example
Example {foo = Sum {getSum = 0}, bar = Product {getProduct = 1}}

If a data type has multiple constructors:

data Example
    = Create { name :: Text, duration :: Maybe Int }
    | Kill { name :: Text }

... then that translates to subcommands named after each constructor:

$ ./Example --help
Test program

Usage: Example (create | kill)

Available options:
-h,--help Show this help text

Available commands:
create
kill

$ ./Example create --help
Usage: Example create --name TEXT [--duration INT]

Available options:
-h,--help Show this help text

$ ./Example kill --help
Usage: Example kill --name TEXT

Available options:

-h,--help Show this help text

$ ./Example create --name foo --duration 60
Create {name = "foo", duration = Just 60}

$ ./Example kill --name foo
Kill {name = "foo"}

This library also supports many existing Haskell data types out of the box. For example, if you just need to get a Double and Int from the command line you could just write:

{-# LANGUAGE DeriveGeneric #-}
{-# LANGUAGE OverloadedStrings #-}

import Options.Generic

main = do
    x <- getRecord "Test program"
    print (x :: (Double, Int))

... and that will parse two positional arguments:

$ ./Example --help
Test program

Usage: Example DOUBLE INT

Available options:
-h,--help Show this help text

$ ./Example 1.1 2
(1.1,2)

Compile-time safety

Haskell's support for generic programming is done completely at compile time. This means that if you ask for something that cannot be sensibly converted into a command-line interface your program will fail to compile.

For example, if you ask for a list of lists:

data Example = Example { foo :: [[Int]] }

... then the compiler will fail with the following error message since you can't (idiomatically) model "repeated (repeated Ints)" on the command line:

No instance for (ParseField [Int])
  arising from a use of ‘Options.Generic.$gdmparseRecord’
In the expression: Options.Generic.$gdmparseRecord
In an equation for ‘parseRecord’:
    parseRecord = Options.Generic.$gdmparseRecord
In the instance declaration for ‘ParseRecord Example’

Conclusion

If you would like to use this package or learn more you can find this package:

I also plan to re-export this package's functionality from turtle to further simplify command-line programming.

Categories: Offsite Blogs

The 13-line example in Text.Megaparsec.Expr

haskell-cafe - Sat, 02/27/2016 - 6:08am
At the bottom of the Hackage documentation for Text.Megaparsec.Expr [1] is a 13-line demonstration program. It includes no import statements. I added the ones I could deduce, which produced this:

import Text.Megaparsec
import Text.Megaparsec.Expr
import Text.Megaparsec.Lexer (symbol,integer)

parens = between (symbol "(") (symbol ")")

expr = makeExprParser term table <?> "expression"

term = parens expr <|> integer <?> "term"

table = [ [ prefix "-" negate
          , prefix "+" id ]
        , [ postfix "++" (+1) ]
        , [ binary "*" (*)
          , binary "/" div ]
        , [ binary "+" (+)
          , binary "-" (-) ] ]

binary name f = InfixL (reservedOp name >> return f)
prefix name f = Prefix (reservedOp name >> return f)
postfix name f = Postfix (reservedOp name >> return f)

That still won't compile, because GHC does not know what reservedOp means. Does reservedOp refer to something that no longer exists, or have
Categories: Offsite Discussion

GHC documentation of modules exposed by the base package

libraries list - Fri, 02/26/2016 - 4:01pm
Dear all, by running in GHC 7.10.3

$ ghc-pkg field base-4.8.2.0 exposed-modules

I can see various modules exposed by base-4.8.2.0 which are not documented (e.g. GHC.Num) in http://downloads.haskell.org/~ghc/7.10.3/docs/html/libraries/index.html What is the criterion for adding GHC documentation to the modules exposed by base? All the best,
Categories: Offsite Discussion

ANNOUNCE: Applied Functional Programming (AFP)Summerschool 4-15 July 2016, Utrecht, Netherlands

haskell-cafe - Fri, 02/26/2016 - 4:00pm
=========== AFP Summerschool 2016 ===========

Applied Functional Programming (AFP) Summerschool
July 4-15, 2016
Utrecht University, Department of Information and Computing Sciences
Utrecht, The Netherlands

Summerschool & registration website: http://www.utrechtsummerschool.nl/courses/science/applied-functional-programming-in-haskell
AFP website: http://www.cs.uu.nl/wiki/USCS
contact: Uscs-afp@lists.science.uu.nl

*** The 2016 edition of the Applied Functional Programming (AFP) Summerschool in Utrecht, Netherlands will be held from 4-15 July 2016. The summerschool teaches Haskell at both beginner and advanced levels via lectures and lab exercises. More info can be found via the references above; included here is a summary from the summerschool info: ``Typed functional programming languages allow for the development of robust, concise programs in a short amount of time. The key advantages are higher-order functions as an abstraction mechanism, and an advanced type system for safety and re
Categories: Offsite Discussion

Call for Participation: MSFP 2016

General haskell list - Fri, 02/26/2016 - 11:44am
Sixth Workshop on MATHEMATICALLY STRUCTURED FUNCTIONAL PROGRAMMING
8 April 2016, in Eindhoven, The Netherlands
A satellite workshop of ETAPS 2016

CALL FOR PARTICIPATION
http://msfp2016.bentnib.org/

**The early registration deadline for ETAPS is 1st March**

The sixth workshop on Mathematically Structured Functional Programming is devoted to the derivation of functionality from structure. It is a celebration of the direct impact of Theoretical Computer Science on programs as we write them today. This year's MSFP will be held on Friday 8th April 2016, co-located with ETAPS 2016 in Eindhoven, The Netherlands. The programme will contain the following accepted papers:

- Maciej Piróg. Eilenberg-Moore Monoids and Backtracking Monad Transformers.
- Bartek Klin and Michał Szynwelski. SMT solving for functional programming over infinite structures.
- Niccolò Veltri, Tarmo Uustalu and Denis Firsov. Variations on Noetherianness.
- Danel Ahman and Tarmo Uustalu. Directed containers as categories.
-
Categories: Incoming News

Haskell Symposium 2016 CFP?

haskell-cafe - Fri, 02/26/2016 - 9:24am
Hi Café, Is there going to be a Haskell Symposium this year? I haven't been able to find the call for papers. Best regards, Ivan
Categories: Offsite Discussion

Robin KAY: HsQML 0.3.4.0 released: Belated Automatic List Models

Planet Haskell - Fri, 02/26/2016 - 2:46am
A couple of days ago I released HsQML 0.3.4.0, the long-awaited and much overdue next release of my GUI binding to the Qt Quick framework. You can download the latest release from Hackage as usual.

The major new addition in this release is the AutoListModel component, which you can import in your QML scripts from the HsQML.Model 1.0 namespace. This provides a way of creating QAbstractItemModels based on lists of items. This feature is some way off full support for creating custom models, but it is still a significant improvement over binding marshalled arrays to QML views directly. Namely, when updating a list via the AutoListModel, it can generate item add, remove, and change events based on the differences between the old and new arrays. A direct array binding, on the other hand, would cause the entire model to be reset, so that views lose their state, cannot animate changes, etc.

This is demonstrated by the hsqml-model1 sample program included in the latest release of the hsqml-demo-samples package and pictured below. Try running it and see how the view of blue squares at bottom animates when you change the model.

[Screenshot: the hsqml-model1 sample program, with the animated view of blue squares at the bottom]
Basic reference documentation for the AutoListModel is included in the Hackage documentation. The subject demands some more exposition, and will be the topic of further blog posts, with some more exciting sample programs in the works.

release-0.3.4.0 - 2016.02.24

  * Added AutoListModel component.
  * Added functions for joining and killing engines.
  * Added functions to manipulate Qt's command-line arguments.
  * Added exception handler to callbacks.
  * Relaxed Cabal dependency constraint on 'filepath', 'tagged', 'transformers', and 'QuickCheck'.
  * Changed runEngineLoop to pass through command line arguments by default.
  * Fixed class at same address as deleted class causing inaccessible objects.
  * Fixed memory corruption bug prior to Qt 5.2 with workaround.
  * Fixed building with Fedora-style moc executable names (non-qtselect).
  * Fixed building GHCi objects with GHC 7.10.
  * Fixed missing strong reference on engine context objects.
  * Fixed missing include breaking compilation with Qt 5.0.
  * Fixed switch compiler warnings.
  * Fixed imports to support older GHCs.
Categories: Offsite Blogs

[ANN] sparkle: native Apache Spark applications inHaskell

haskell-cafe - Thu, 02/25/2016 - 7:50pm
Hello -cafe! Recently at Tweag I/O we've been working on sparkle, a library for writing (distributed) Apache Spark applications directly in Haskell! We have published a blog post introducing the project (and some of its challenges) here: http://www.tweag.io/blog/haskell-meets-large-scale-distributed-analytics The corresponding repository lives at https://github.com/tweag/sparkle While this is still early stage work, we can already write non-trivial Spark applications in Haskell and have them run across an entire cluster. We obviously do not cover the whole Spark API yet (very, very far from that) but would be glad to already get some feedback. Cheers
Categories: Offsite Discussion