News aggregator

Haskell Weekly News

General haskell list - Fri, 05/01/2015 - 8:16am
*Top Picks:*

  • Haskell wins a bumper crop of 18 accepted Google Summer of Code proposals <http://www.reddit.com/r/haskell/comments/342rvp/google_summer_of_code_18_projects_accepted/>. Mentor offers total 44, another thumpin' sign of life. Kudos to Edward Kmett, Johan Tibell, Shachaf Ben-Kiki, and Gershom Bazerman. Gloria Haskella sine labore nulla.
  • Reddit Haskell celebrates <http://www.reddit.com/r/haskell/comments/33wnf1/20k_rhaskell_subscribers_nice/> 20k subscribers. It's the 7th largest PL community on reddit <https://www.reddit.com/r/programming/wiki/faq#wiki_what_language_reddits_are_there.3F> after python, javascript, java, php, ruby, c++, in that order.
  • Matthew Griffith <http://mechanical-elephant.com/thoughts/2015-04-20-becoming-productive-in-haskell/> migrates away from Python: "I do most of my web prototyping in Haskell now." A journey vividly logged and much loved on Hacker News <https://news.ycombinator.com/item?id=9431944>. And also on Prog
Categories: Incoming News

Hypothetical lens xml parsing that documents errors?

Haskell on Reddit - Fri, 05/01/2015 - 6:58am

I've been using the hexpat-lens library to parse XML, and it's really wonderful. I can write functions like this:

readPlugins :: ByteString -> [Plugin]
readPlugins xml =
  map fromJust $ -- Revisit this decision
    foreach (xml ^.. xmlText ./ named "Report" ./ named "ReportHost" ./ named "ReportItem") $ \node ->
      Plugin <$> node ^? ix "pluginID" . textIntPrism
             <*> node ^? ix "severity" . textIntPrism . intSeverityPrism
             <*> node ^? ix "pluginFamily"
             <*> node ^? id ./ named "fname" ./ text
             <*> node ^? id ./ named "description" ./ text

This gets all the ReportItem nodes (well, the ones at the right place in the tree) and extracts relevant information from child nodes and attributes. This is a cool and useful way to query xml. Additionally, it is easy to write. Now, what it doesn't do well is error handling. You might say "duh, you're using fromJust", but even if you switched out map fromJust for sequence (and made the function's return type Maybe [Plugin]), you'd still get absolutely no information about why a document fails to parse.

So, there are sort of two questions I'm getting at, the first being: is there a way to use the Prism optic to capture failure information? If I compose a bunch of Prisms and then run the resulting Prism, is there a way to get an indication of which step failed?

The second seems like a more difficult, but possibly orthogonal, question. Is there a way to track state inside of any of the optics? I ask because the kind of error information that I'm really looking for would be more like this:

Report[0]:ReportHost[0]:ReportItem[54]:fname not found

And so the optics would have to be passing around information about the context. Any thoughts would be appreciated.
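(An editorial sketch, not an answer from the thread: short of building stateful optics, one lightweight workaround is to tag each Maybe-returning lookup with a description of the field it was after, so an Either carries the first failure's reason. The Plugin shape and field names here are simplified stand-ins for the ones above.)

```haskell
import Text.Read (readMaybe)

-- Attach a description to a failed lookup
note :: e -> Maybe a -> Either e a
note e = maybe (Left e) Right

data Plugin = Plugin String Int deriving (Eq, Show)

-- Each field lookup names itself, so a failure says which step broke
fromFields :: [(String, String)] -> Either String Plugin
fromFields kv =
  Plugin
    <$> note "pluginID not found" (lookup "pluginID" kv)
    <*> (note "severity not found" (lookup "severity" kv)
           >>= note "severity is not an Int" . readMaybe)

main :: IO ()
main = do
  print (fromFields [("pluginID", "19506"), ("severity", "3")])
  print (fromFields [("pluginID", "19506")])
```

With Either's Applicative the first missing field wins, which gives at least a field-level answer to "why did this document fail?" without changing the optics themselves.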

submitted by andrewthad
[link] [11 comments]
Categories: Incoming News

Why is Functor used on Lens?

Haskell on Reddit - Fri, 05/01/2015 - 6:26am

This is how you define a simple _1 Lens and use it:

data Const a b = Const { getConst :: a } deriving Show

instance Functor (Const a) where
  fmap f (Const a) = Const a

_1 f (l, r) = fmap (\x -> (x, r)) (f l)

view lens thing = getConst $ lens Const thing

main = do
  print $ view (_1 . _1) ((1, 2), 2)

But Functor is used just as a trick to define a way to combine the setter and the getter. Isn't that overkill / much less readable than just passing comb as a parameter?

_1 f (left, right) comb = comb (f left comb) (\x -> (x, right))

view lens thing = lens const thing const

main = do
  print $ view (_1 . _1) ((1, 2), 2)

submitted by SrPeixinho
[link] [4 comments]
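(Editorial aside, not from the thread: the usual answer is that the Functor parameter is exactly what lets a single definition of _1 serve as both getter and setter; instantiate f to Const for view and to Identity for over. A minimal self-contained sketch:)

```haskell
{-# LANGUAGE RankNTypes #-}

newtype Const a b = Const { getConst :: a }
instance Functor (Const a) where
  fmap _ (Const a) = Const a

newtype Identity a = Identity { runIdentity :: a }
instance Functor Identity where
  fmap f (Identity a) = Identity (f a)

-- One lens type, many interpretations, picked by the choice of f
type Lens s t a b = forall f. Functor f => (a -> f b) -> s -> f t

_1 :: Lens (a, c) (b, c) a b
_1 f (l, r) = fmap (\x -> (x, r)) (f l)

view :: Lens s t a b -> s -> a
view l = getConst . l Const

over :: Lens s t a b -> (a -> b) -> s -> t
over l g = runIdentity . l (Identity . g)

main :: IO ()
main = do
  print (view (_1 . _1) ((1 :: Int, 2 :: Int), 3 :: Int))
  print (over _1 (+ 10) (1 :: Int, 2 :: Int))
```

The payoff of the Functor abstraction over an explicit comb parameter is that plain function composition (.) composes lenses, and new interpretations (folds, traversals) come from new functors rather than new combinators.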
Categories: Incoming News

Looking for Haskell developer / data structures researcher (part-time, remote)

Haskell on Reddit - Fri, 05/01/2015 - 6:03am

I'm searching for an associate for researching data structures and developing a code generation system in Haskell.

Occupancy: ~ 10 hrs / week. Rate: 100-150 USD / week (subject to discussion).

Required:

  • being very interested in the project. I think that is possible, because I used to develop this project as a hobby. You might (and should) make this project the subject of your next course project / thesis at your school.

Great if you know:

  • Haskell well (monad transformers, GADTs, type families)
  • Java, at least a little
  • how a CPU works (pipelining, branch prediction, caches)

I guess this job is primarily interesting for students in their 1st-3rd year.

Reach me at: leventov.ru@gmail.com (Google hangout chat, preferred) leventov@ya.ru (e-mail)

submitted by leventov
[link] [22 comments]
Categories: Incoming News

Roles, GND, Data.Coerce

haskell-cafe - Thu, 04/30/2015 - 11:58pm
All, An issue that came up with GHC 7.8 was the design of Roles and Data.Coerce, with its ability to break module abstractions by default, requiring programmers to adopt explicit role annotations to enforce some types of invariants for abstract data types. Because of this issue, as of now, GND & Data.Coerce are still disabled in Safe Haskell. We'd like to change this but need feedback from everyone. I've written up a wiki page with lots of information on the issue: https://ghc.haskell.org/trac/ghc/wiki/SafeRoles Please read it; it has information on how Roles work, the problem they raise and subtleties in the current system. Possible paths forward are also there, but I'll include them below. Please be aware that this discussion is about Roles in general, not specifically how they behave when using -XSafe. Ideally we'd come up with a solution that works regardless of whether Safe Haskell is in use. I personally like option (3), although it isn't clear to me how hairy the implementation would be. == Possible Paths Forward for
Categories: Offsite Discussion

Danny Gratzer: Bracket Abstraction: The Smallest PL You've Ever Seen

Planet Haskell - Thu, 04/30/2015 - 6:00pm
Posted on May 1, 2015 Tags: types, haskell

It’s well known that lambda calculus is an extremely small, Turing Complete language. In fact, most programming languages over the last 5 years have grown some (typed and/or broken) embedding of lambda calculus with aptly named lambdas.

This is wonderful and everything, but lambda calculus is actually a little complicated. It’s centred around binding and substituting for variables; while this is elegant, it’s a little difficult to formalize mathematically. It’s natural to wonder whether we can avoid dealing with variables by building up all our lambda terms from a special privileged few.

These systems (sometimes called combinator calculi) are quite pleasant to model formally, but how do we know that our system is complete? In this post I’d like to go over translating any lambda calculus program into a particular combinator calculus, SK calculus.

What is SK Combinator Calculus?

SK combinator calculus is a language with exactly 3 types of expressions.

  1. We can apply one term to another, e e,
  2. We have one term, s,
  3. We have another term, k.

Besides the obvious ones, there are two main rules for this system:

  1. s a b c = (a c) (b c)
  2. k a b = a

And that’s it. What makes SK calculus so remarkable is how minimal it is. We now show that it’s Turing complete by translating lambda calculus into it.
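(An editorial sanity check of the two rules above: a tiny evaluator that reduces the head of a term until neither rule applies. The names here are my own, separate from the datatypes the post defines below.)

```haskell
data Term = S | K | App Term Term deriving (Eq, Show)

-- One reduction step at the head, following rules (1) and (2)
step :: Term -> Maybe Term
step (App (App K a) _)         = Just a
step (App (App (App S a) b) c) = Just (App (App a c) (App b c))
step (App f x)                 = fmap (\f' -> App f' x) (step f)
step _                         = Nothing

-- Reduce until no rule applies at the head
eval :: Term -> Term
eval t = maybe t eval (step t)

main :: IO ()
main = do
  print (eval (App (App K S) K))          -- k s k = s
  print (eval (App (App (App S K) K) S))  -- (s k k) s = s, so s k k acts as identity
```

The second example is the i = s k k identity mentioned later: s k k a reduces to (k a) (k a) and then to a.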

Bracket Abstraction

First things first, let’s just define how to represent both SK calculus and lambda calculus in our Haskell program.

data Lam = Var Int | Ap Lam Lam | Lam Lam

data SK = S | K | SKAp SK SK

Now we begin by defining a translation from a simplified lambda calculus to SK calculus. This simplified calculus is just SK supplemented with variables. By defining this step, the actual transformation becomes remarkably crisp.

data SKH = Var' Int | S' | K' | SKAp' SKH SKH

Note that SKH has variables but no way to bind them. In order to remove a variable, we have bracket. bracket has the property that replacing Var' 0 in a term, e, with a term, e', is the same as SKAp' (bracket e) e'.

-- Remove one variable
bracket :: SKH -> SKH
bracket (Var' 0)    = SKAp' (SKAp' S' K') K'
bracket (Var' i)    = Var' (i - 1)
bracket (SKAp' l r) = SKAp' (SKAp' S' (bracket l)) (bracket r)
bracket x           = x

If we’re at Var 0 we replace the variable with the term s k k. This has the property that (s k k) A = A. It’s traditional to abbreviate s k k as i (leading to the name SKI calculus) but i is strictly unnecessary as we can see.

If we’re at an application, we do something really clever. We have two terms which both have a free variable, so we bracket them and use S to supply the free variable to both of them! Remember that

s (bracket A) (bracket B) C = ((bracket A) C) ((bracket B) C)

which is exactly what we require by the specification of bracket.

Now that we have a way to remove free variables from an SKH term, we can close off a term with no free variables to give back a normal SK term.

close :: SKH -> SK
close (Var' _)    = error "Not closed"
close S'          = S
close K'          = K
close (SKAp' l r) = SKAp (close l) (close r)

Now our translator can be written nicely.

l2h :: Lam -> SKH
l2h (Var i)  = Var' i
l2h (Ap l r) = SKAp' (l2h l) (l2h r)
l2h (Lam h)  = bracket (l2h h)

translate :: Lam -> SK
translate = close . l2h

l2h is the main worker here. It works across SKH’s because it needs to deal with open terms during the translation. Every time we go under a binder we call bracket afterwards, removing the free variable we just introduced.

This means that if we call l2h on a closed lambda term we get back a closed SKH term. This justifies using close after the toplevel call to l2h in translate which wraps up our conversion.
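(To see the whole pipeline run, here is the post's code assembled into one module, with Eq and Show derived so results can be compared and printed; the test in main is my addition: translating the identity function λx. x should yield s k k, since bracket maps Var' 0 straight to it.)

```haskell
data Lam = Var Int | Ap Lam Lam | Lam Lam
data SK  = S | K | SKAp SK SK deriving (Eq, Show)
data SKH = Var' Int | S' | K' | SKAp' SKH SKH

bracket :: SKH -> SKH
bracket (Var' 0)    = SKAp' (SKAp' S' K') K'
bracket (Var' i)    = Var' (i - 1)
bracket (SKAp' l r) = SKAp' (SKAp' S' (bracket l)) (bracket r)
bracket x           = x

close :: SKH -> SK
close (Var' _)    = error "Not closed"
close S'          = S
close K'          = K
close (SKAp' l r) = SKAp (close l) (close r)

l2h :: Lam -> SKH
l2h (Var i)  = Var' i
l2h (Ap l r) = SKAp' (l2h l) (l2h r)
l2h (Lam h)  = bracket (l2h h)

translate :: Lam -> SK
translate = close . l2h

main :: IO ()
main = print (translate (Lam (Var 0)))  -- SKAp (SKAp S K) K, i.e. s k k
```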

For funsies I decided to translate the Y combinator and got back this mess

(s ((s ((s s) ((s k) k))) ((s ((s s) ((s ((s s) k)) k))) ((s ((s s) k)) k)))) ((s ((s s) ((s k) k))) ((s ((s s) ((s ((s s) k)) k))) ((s ((s s) k)) k)))

Completely useless, but kinda fun to look at. More interestingly, the canonical nonterminating lambda term is λx. x x which gives back s i i, much more readable.

Wrap Up

Now that we’ve performed this translation we have a very nice proof of the Turing completeness of SK calculus. This has some nice upshots: folks who study things like realizability models of constructive logics use Partial Combinatory Algebras as a model of computation. This is essentially an algebraic model of SK calculus.

If nothing else, it’s really quite crazy that such a small language is capable of simulating any computable function across numbers.

Categories: Offsite Blogs

Is it acceptable if Applicative behaves not like a Monad

haskell-cafe - Thu, 04/30/2015 - 4:45pm
Hello, I have such a question: assume you have some type `T` which has Applicative and Monad instances. Is it ok if code like this:

foo :: Int -> T String
bar :: Int -> T Int

(,) <$> foo 10 <*> bar 20

behaves not like this code:

foobar = do
  x <- foo 10
  y <- bar 20
  return (x, y)

By "behaves" I mean not just the returned value but also the effect performed.
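(Editorial note, not from the thread: the Applicative documentation in base requires that whenever a Monad instance exists, `(<*>)` agrees with `ap`, so the two pipelines must perform the same effects. With Maybe, for instance, they coincide; a sketch with made-up foo/bar:)

```haskell
foo :: Int -> Maybe String
foo n = Just (show n)

bar :: Int -> Maybe Int
bar n = if n >= 0 then Just n else Nothing

-- Applicative pipeline
pairA :: Maybe (String, Int)
pairA = (,) <$> foo 10 <*> bar 20

-- Monadic pipeline
pairM :: Maybe (String, Int)
pairM = do
  x <- foo 10
  y <- bar 20
  return (x, y)

main :: IO ()
main = print (pairA == pairM)  -- True, as the law requires
```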
Categories: Offsite Discussion

FP Complete: Update on GHC 7.10 in Stackage

Planet Haskell - Thu, 04/30/2015 - 12:38pm

It seems that every time a blog post about Stackage comes out, someone comments on Reddit how excited they are for Stackage to be ready for GHC 7.10. Unfortunately, there's still an open issue about packages that have incorrect bounds or won't compile.

Well, as about 30 different package authors probably figured out today, I decided to take a crack at compiling Stackage with 7.10. This involved changing around some bounds, sending some pull requests, setting some expected test failures, and unfortunately temporarily removing a few packages from the build. But the result is well worth it: we now have a working build of Stackage with GHC 7.10!

To give some more information: this snapshot has 1028 packages, compared to 1106 for GHC 7.8. When updating the build-constraints.yaml file, I added the phrase "GHC 7.10" next to every modification I made. I encourage people to take a look at the file and see if there are any projects you'd like to send a pull request to and add GHC 7.10 support. If you do so, please ping me once the change is on Hackage so I can add it back to Stackage.

The question is: what do we do now? I'm interested in other opinions, but my recommendation is:

  • Early next week, I switch over the official nightly builds to start using GHC 7.10. LTS 2 will continue running, and will use GHC 7.8 as it does already. (LTS releases never change GHC major version.)
  • We work on improving the package and test coverage for GHC 7.10.
  • No earlier than July 1 do we release LTS 3, which will support GHC 7.10. If there is concern that the release isn't ready yet, we can hold off an extra month (though I doubt that will be necessary).

To ease the burden on package maintainers, LTS support cycles do not overlap. LTS 2 will be supported for a minimum of 3 months from its initial release (April 1, 2015), which is why LTS 3 will be released no sooner than July 1, 2015.

I'm quite surprised and excited that Stackage was able to move to GHC 7.10 so quickly. Thank you to package authors for updating your code so quickly!

Categories: Offsite Blogs

Douglas M. Auclair (geophf): April 2015 1HaskellADay Problems and Solutions

Planet Haskell - Thu, 04/30/2015 - 11:21am
April 2015
  • April 30th, 2015: "SHOW ME DA MONAY!" http://lpaste.net/3352992723589136384 for today's #haskell problem Simple? Sure! Solution? Yes. http://lpaste.net/7331259237240143872
  • April 29th, 2015: We take stock of the Stochastic Oscillator http://lpaste.net/8447434917217828864 for today's #haskell problem #trading We are so partially stoched for a partial solution for the Stochastic Oscillator http://lpaste.net/4307607333212520448 
  • April 28th, 2015: Today's #haskell puzzle as a ken-ken solver http://lpaste.net/6211501623257071616 a solution (beyond my ... ken) is defined at http://lpaste.net/929006498481176576
  • April 27th, 2015: Rainy days and Mondays do not stop the mail, nor today's #haskell problem! http://lpaste.net/6468251516921708544 The solution posted at http://lpaste.net/6973841984536444928 … shows us view-patterns and how to spell the word 'intercalate'.
  • April 24th, 2015: Bidirectionally (map) yours! for today's #haskell problem http://lpaste.net/1645129197724631040 A solution to this problem is posted at http://lpaste.net/540860373977268224 
  • April 23rd, 2015: Today's #haskell problem looks impossible! http://lpaste.net/6861042906254278656 So this looks like this is a job for ... KIM POSSIBLE! YAY! @sheshanaag offers a solution at http://lpaste.net/131309 .
  • April 22nd, 2015: "I need tea." #BritishProblems "I need clean data" #EveryonesPipeDream "Deletia" today's #haskell problem http://lpaste.net/2343021306984792064 Deletia solution? Solution deleted? Here ya go! http://lpaste.net/5973874852434542592
  • April 21st, 2015: In which we learn about Tag-categories, and then Levenshtein distances between them http://lpaste.net/2118427670256549888 for today's #haskell problem Okay, wait: is it a group of categories or a category of groups? me confused! A solution to today's #haskell at http://lpaste.net/8855539857825464320
  • April 20th, 2015: Today we can't see the forest for the trees, so let's change that http://lpaste.net/3949027037724803072 A solution to our first day in the tag-forest http://lpaste.net/4634897048192155648 ... make sure you're marking your trail with breadcrumbs!
  • April 17th, 2015: No. Wait. You wanted line breaks with that, too? Well, why didn't you say so in the first place? http://lpaste.net/8638783844922687488 Have some curry with a line-breaky solution at http://lpaste.net/8752969226978852864
  • April 16th, 2015: "more then." #okaythen Sry, not sry, but here's today's #haskell problem: http://lpaste.net/6680706931826360320 I can't even. lolz. rofl. lmao. whatevs. And a big-ole-blob-o-words is given as the solution http://lpaste.net/2810223588836114432 for today's #haskell problem. It ain't pretty, but... there it is
  • April 15th, 2015: Poseidon's trident or Andrew's Pitchfork analysis, if you prefer, for today's #haskell problem http://lpaste.net/5072355173985157120
  • April 14th, 2015: Refining the SMA-trend-ride http://lpaste.net/3856617311658049536 for today's #haskell problem. Trending and throttling doesn't ... quite get us there, but ... solution: http://lpaste.net/9223292936442085376
  • April 13th, 2015: In today's #haskell problem we learn zombies are comonadic, and like eating SMA-brains. http://lpaste.net/8924989388807471104 Yeah. That. Hold the zombies, please! (Or: when $40k net profit is not enough by half!) http://lpaste.net/955577567060951040
  • April 10th, 2015: Today's #haskell problem delivered with much GRAVITAS, boils down to: don't be a dumb@$$ when investing http://lpaste.net/5255378926062010368 #KeepinItReal The SMA-advisor is REALLY chatty, but how good is it? TBD, but here's a very simple advisor: http://lpaste.net/109712 Backtesting for this strategy is posted at http://lpaste.net/109687 (or: how a not-so-good buy/sell strategy gives you not-so-good results!)
  • April 9th, 2015: A bit of analysis of historical stock data http://lpaste.net/6960188425236381696 for today's #haskell problem A solution to the SMA-analyses part is posted at http://lpaste.net/3427480809555099648 
  • April 8th, 2015: MOAR! MOAR! You clamor for MOAR real-world #haskell problems, and how can I say no? http://lpaste.net/5198207211930648576 Downloading stock screens Hint: get the screens from a web service; look at, e.g.: https://code.google.com/p/yahoo-finance-managed/wiki/YahooFinanceAPIs A 'foldrM'-solution to this problem is posted at http://lpaste.net/2729747257602605056
  • April 7th, 2015: Looking at a bit of real-world #haskell for today's stock (kinda-)screen-scraping problem at http://lpaste.net/5737110678548774912 Hint: perhaps you'd like to solve this problem using tagsoup? https://hackage.haskell.org/package/tagsoup *GASP* You mean ... it actually ... works? http://lpaste.net/1209131365107236864 A MonadWriter-y tagsoup-y Monoidial-MultiMap-y solution
  • April 6th, 2015: What do three men teaching all of high school make, beside today's #haskell problem? http://lpaste.net/667230964799242240 Tired men, of course! Thanks, George Boole! Three Men and a High School, SOLVED! http://lpaste.net/7942804585247145984
  • April 3rd, 2015: reverseR that list like a Viking! Rrrrr! for today's problem http://lpaste.net/8513906085948555264 … #haskell Totes cheated to get you the solution http://lpaste.net/1880031563417124864 used a library that I wrote, so, like, yeah, totes cheated! ;)
  • April 2nd, 2015: We're breaking new ground for today's #haskell problem: let's reverse lists... relationally. And tag-type some values http://lpaste.net/389291192849793024 After several fits and starts @geophf learns how to reverse a list... relationally http://lpaste.net/7875722904095162368 and can count to the nr 5, as well
  • April 1st, 2015: Take a drink of today's #haskell problem: love potion nr9 http://lpaste.net/435384893539614720 because, after all: all we need is love, la-di-dah-di-da! A solution can be found au shaque d'amour posted at http://lpaste.net/6859866252718899200
Categories: Offsite Blogs

Prof. Hudak passed away last night

Haskell on Reddit - Thu, 04/30/2015 - 10:14am

This really saddens me. I don't have much to say but that he is a great man and he has inspired me a lot. Let's hold him and his family in light.

submitted by enzozhc
[link] [17 comments]
Categories: Incoming News

Brandon Simmons: Announcing hashabler: like hashable only more so

Planet Haskell - Thu, 04/30/2015 - 9:03am

I’ve just released the first version of a haskell library for principled, cross-platform & extensible hashing of types, which includes an implementation of the FNV-1a algorithm. It is available on hackage, and can be installed with:

cabal install hashabler

hashabler is a rewrite of the hashable library by Milan Straka and Johan Tibell, having the following goals:

  • Extensibility; it should be easy to implement a new hashing algorithm on any Hashable type, for instance if one needed more hash bits

  • Honest hashing of values, and principled hashing of algebraic data types (see e.g. #30)

  • Cross-platform consistent hash values, with a versioning guarantee. Where possible we ensure morally identical data hashes to identical values regardless of processor word size and endianness.

  • Make implementing identical hash routines in other languages as painless as possible. We provide an implementation of a simple hashing algorithm (FNV-1a) and make an effort to define Hashable instances in a way that is well-documented and sensible, so that e.g. one can (hopefully) easily implement a string hashing routine in JavaScript that will match the way we hash strings here.
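(For reference, an editorial sketch of the published 32-bit FNV-1a algorithm over bytes; this is not hashabler's actual code, and the hashString helper here is a simplification that treats the input as ASCII byte values.)

```haskell
import Data.Bits (xor)
import Data.Char (ord)
import Data.Word (Word32)

-- 32-bit FNV-1a: start from the offset basis, then for each octet
-- xor it in and multiply by the FNV prime (Word32 wraps modulo 2^32)
fnv1a32 :: [Word32] -> Word32
fnv1a32 = foldl (\h b -> (h `xor` b) * 16777619) 2166136261

-- Hash a String by its character codes (assumes ASCII input)
hashString :: String -> Word32
hashString = fnv1a32 . map (fromIntegral . ord)

main :: IO ()
main = do
  print (hashString "")   -- 2166136261, the FNV offset basis
  print (hashString "a")  -- 0xe40c292c, a published FNV-1a test vector
```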

Motivation

I started writing a fast concurrent bloom filter variant, but found none of the existing libraries fit my needs. In particular hashable was deficient in a number of ways:

  • The number of hash bits my data structure requires can vary based on user parameters, and possibly be more than the 64 bits supported by hashable

  • Users might like to serialize their bloom filter and store it, pass it to other machines, or work with it in a different language, so we need

    • hash values that are consistent across platforms
    • some guarantee of consistency across library versions

I was also very concerned about the general approach taken for algebraic types, which results in collisions, the use of “hashing” numeric values to themselves, dubious combining functions, etc. It wasn’t at all clear to me how to ensure my data structure wouldn’t be broken if I used hashable. See below for a very brief investigation into hash goodness of the two libraries.

There isn’t interest in supporting my use case or addressing these issues in hashable (see e.g. #73, #30, and #74) and apparently hashable is working in practice for people, but maybe this new package will be useful for some other folks.

Hash goodness of hashable and hashabler, briefly

Hashing-based data structures assume some “goodness” of the underlying hash function, and may depend on the goodness of the hash function in ways that aren’t always clear or well-understood. “Goodness” also seems to be somewhat subjective, but can be expressed statistically in terms of bit-independence tests, avalanche properties, etc.; various things that e.g. smhasher looks at.

I thought for fun I’d visualize some distributions, as that’s easier for my puny brain to understand than statistics. We visualize 32-bit hashes by quantizing by 64x64 and mapping that to a pixel following a hilbert curve to maintain locality of hash values. Then when multiple hash values fall within the same 64x64 pixel, we darken the pixel, and finally mark it red if we can’t go any further to indicate clipping.

It’s easy to cherry-pick inputs that will result in some bad behavior by hashable, but below I’ve tried to show some fairly realistic examples of strange or less-good distributions in hashable. I haven’t analysed these at all. Images are cropped ¼ size, but are representative of the whole 32-bit range.

First, here’s a hash of all [Ordering] of size 10 (~59K distinct values):

Hashabler:

Hashable:

Next, here’s the hash of one million (Word8,Word8,Word8) (having a domain ~ 16 mil):

Hashabler:

Hashable:

I saw no difference when hashing English words, which is good news as that’s probably a very common use-case.

Please help

If you could test the library on a big endian machine and let me know how it goes, that would be great. See here.

You can also check out the TODOs scattered throughout the code and send pull requests. I may not be able to get to them until June, but will be very grateful!

P.S. hire me

I’m always open to interesting work or just hearing about how companies are using haskell. Feel free to send me an email at brandon.m.simmons@gmail.com

Categories: Offsite Blogs

Jan Stolarek: Smarter conditionals with dependent types: a quick case study

Planet Haskell - Thu, 04/30/2015 - 8:42am

Find the type error in the following Haskell expression:

if null xs then tail xs else xs

You can’t, of course: this program is obviously nonsense unless you’re a typechecker. The trouble is that only certain computations make sense if the null xs test is True, whilst others make sense if it is False. However, as far as the type system is concerned, the type of the then branch is the type of the else branch is the type of the entire conditional. Statically, the test is irrelevant. Which is odd, because if the test really were irrelevant, we wouldn’t do it. Of course, tail [] doesn’t go wrong – well-typed programs don’t go wrong – so we’d better pick a different word for the way they do go.

The above quote is the opening paragraph of Conor McBride’s “Epigram: Practical Programming with Dependent Types” paper. As always, Conor makes a good point – this test is completely irrelevant for the typechecker although it is very relevant at run time. Clearly the type system fails to accurately approximate the runtime behaviour of our program. In this short post I will show how to fix this in Haskell using dependent types.

The problem is that the types used in this short program carry no information about the manipulated data. This is true both for Bool returned by null xs, which contains no evidence of the result, and for lists, which store no information about their length. As some of you probably realize, the latter is easily fixed by using vectors, ie. length-indexed lists:

data N = Z | S N  -- natural numbers

data Vec a (n :: N) where
  Nil  :: Vec a Z
  Cons :: a -> Vec a n -> Vec a (S n)

The type of a vector encodes its length, which means that the type checker can now be aware of whether it is dealing with an empty vector. Now let’s write null and tail functions that work on vectors:

vecNull :: Vec a n -> Bool
vecNull Nil        = True
vecNull (Cons _ _) = False

vecTail :: Vec a (S n) -> Vec a n
vecTail (Cons _ tl) = tl

vecNull is nothing surprising – it returns True for an empty vector and False for a non-empty one. But the tail function for vectors differs from its implementation for lists. tail from Haskell’s standard prelude is not defined for an empty list, so calling tail [] results in an exception (that would be the case in Conor’s example). But the type signature of vecTail requires that the input vector is non-empty. As a result we can rule out the Nil case. That also means that Conor’s example will no longer typecheck1. But how can we write a correct version of this example, one that removes the first element of a vector only when it is non-empty? Here’s an attempt:

shorten :: Vec a n -> Vec a m
shorten xs = case vecNull xs of
  True  -> xs
  False -> vecTail xs

That however won’t compile: now that we’ve written a type-safe tail function, the typechecker requires a proof that the vector passed to it as an argument is non-empty. The weak link in this code is the vecNull function. It tests whether a vector is empty but delivers no type-level proof of the result. In other words we need:

vecNull' :: Vec a n -> IsNull n

ie. a function with a result type carrying information about the length of the list. This data type will have a runtime representation isomorphic to Bool, ie. it will be an enumeration with two constructors, and the type index will correspond to the length of a vector:

data IsNull (n :: N) where
  Null    :: IsNull Z
  NotNull :: IsNull (S n)

Null represents empty vectors, NotNull represents non-empty ones. We can now implement a version of vecNull that carries proof of the result at the type level:

vecNull' :: Vec a n -> IsNull n
vecNull' Nil        = Null
vecNull' (Cons _ _) = NotNull

The type signature of vecNull' says that the return type must have the same index as the input vector. Pattern matching on the Nil case provides the type checker with the information that the n index of Vec is Z. This means that the return value in this case must be Null – the NotNull constructor is indexed with S and that obviously does not match Z. Similarly in the Cons case the return value must be NotNull. However, replacing vecNull in the definition of shorten with our new vecNull' will again result in a type error. The problem comes from the type signature of shorten:

shorten :: Vec a n -> Vec a m

By indexing input and output vectors with different length indices – n and m – we tell the typechecker that these are completely unrelated. But that is not true! Knowing the input length n we know exactly what the result should be: if the input vector is empty the result vector is also empty; if the input vector is not empty it should be shortened by one. Since we need to express this at the type level we will use a type family:

type family Pred (n :: N) :: N where
  Pred Z     = Z
  Pred (S n) = n

(In a fully-fledged dependently-typed language we would write a normal function and then apply it at the type level.) Now we can finally write:

shorten :: Vec a n -> Vec a (Pred n)
shorten xs = case vecNull' xs of
  Null    -> xs
  NotNull -> vecTail xs

This definition should not go wrong. Trying to swap the expressions in the branches will result in a type error.
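(Putting all the pieces together: a sketch of the post's code assembled into one compiling module; the extension pragmas, the toList helper, and main are my additions for demonstration.)

```haskell
{-# LANGUAGE DataKinds, GADTs, KindSignatures, TypeFamilies #-}

data N = Z | S N

data Vec a (n :: N) where
  Nil  :: Vec a Z
  Cons :: a -> Vec a n -> Vec a (S n)

vecTail :: Vec a (S n) -> Vec a n
vecTail (Cons _ tl) = tl

data IsNull (n :: N) where
  Null    :: IsNull Z
  NotNull :: IsNull (S n)

-- An emptiness test that carries its proof at the type level
vecNull' :: Vec a n -> IsNull n
vecNull' Nil        = Null
vecNull' (Cons _ _) = NotNull

type family Pred (n :: N) :: N where
  Pred Z     = Z
  Pred (S n) = n

shorten :: Vec a n -> Vec a (Pred n)
shorten xs = case vecNull' xs of
  Null    -> xs
  NotNull -> vecTail xs

-- Forget the length index so results can be printed
toList :: Vec a n -> [a]
toList Nil         = []
toList (Cons x xs) = x : toList xs

main :: IO ()
main = do
  print (toList (shorten (Cons 'a' (Cons 'b' Nil))))  -- "b"
  print (toList (shorten (Nil :: Vec Char Z)))        -- ""
```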

1. Assuming we don’t abuse Haskell’s unsoundness as a logic, eg. by using undefined.
Categories: Offsite Blogs

What are some good examples of Haskell services/daemons?

Haskell on Reddit - Thu, 04/30/2015 - 7:37am

Hi.

What are some good examples of open-source Haskell services/daemons that have been set up to work with modern init systems like systemd/upstart? I am hoping to find a good role model for the service I am making. Particularly, I am interested in how it handles configuration files, reloading configuration files when getting a HUP signal, and error logging.
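(Editorial sketch, not from the thread: for the init-system side of the question, a minimal systemd unit for a hypothetical Haskell daemon usually looks like the following; the service name and paths are invented, and ExecReload is the conventional way to wire `systemctl reload` to a SIGHUP-driven config reload.)

```
[Unit]
Description=Example Haskell daemon (hypothetical)
After=network.target

[Service]
Type=simple
ExecStart=/usr/local/bin/mydaemon --config /etc/mydaemon/config.yaml
# Ask the daemon to reload its configuration on `systemctl reload`
ExecReload=/bin/kill -HUP $MAINPID
Restart=on-failure
User=mydaemon

[Install]
WantedBy=multi-user.target
```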

Ryan

submitted by ryantm
[link] [9 comments]
Categories: Incoming News

          Current status of Dependent Haskell

          Haskell on Reddit - Thu, 04/30/2015 - 5:45am

          I don't want to bother the authors of DependentHaskell and that's why I'm asking here, hoping someone knows something.

I have much hope of seeing this project become part of GHC (many thanks to the developers working on it), so I'd like to know:

How is the work going?

          Is it still in an early phase or is there already substantial work being done?

          How much time before a possible release? (Just want to know if it is feasible in months or years or more)

          submitted by game_coder
          [link] [16 comments]
          Categories: Incoming News

          CFP: IEEE International Conference on Cloud and Autonomic Computing (CAC 2015)

          haskell-cafe - Wed, 04/29/2015 - 10:15pm
[ Less than two weeks to deadline. Apologies if you receive multiple copies of this email.] IEEE International Conference on Cloud and Autonomic Computing (CAC 2015) (pending IEEE support) autonomic-conference.org Cambridge, MA, USA September 21-25, 2015 Co-located with the Ninth IEEE International Conference on Self-Adaptive and Self-Organizing System (SASO 2015) and with the 15th IEEE Peer-to-Peer Computing Conference Call for Papers Overview Enterprise-scale cloud platforms and service systems present common and cross-cutting challenges in maximizing power efficiency and performance while maintaining predictable and reliable behavior, and at the same time responding appropriately to environmental and system changes such as hardware failures and varying workloads. Autonomic computing systems address the challenges in managing these environments by integrating monitoring, decision-processing and actuation capabilities to autonomously manage resources and applications based on high-level policies.
          Categories: Offsite Discussion

          Danny Gratzer: Compiling With CPS

          Planet Haskell - Wed, 04/29/2015 - 6:00pm
          Posted on April 30, 2015 Tags: compilers, haskell

Hello folks. It’s been a busy month, so I haven’t had much of a chance to write, but I think now’s a good time to talk about another compiler-related subject: continuation passing style conversion.

When you’re compiling a functional language (in a sane way) your compiler mostly consists of phases which run over the AST and simplify it. For example, in a language with pattern matching, it’s almost certainly the case that we can write something like

case x of
  (1, 2) -> True
  (_, _) -> False

          Wonderfully concise code. However, it’s hard to compile nested patterns like that. In the compiler, we might simplify this to

case x of
  (a, b) ->
    case a of
      1 -> case b of
             2 -> True
             _ -> False
      _ -> False

          note to future me, write a pattern matching compiler

We’ve transformed our large nested pattern into a series of simpler, unnested patterns. The benefit is that this maps much more directly onto a series of conditionals (or jumps).

          Now one of the biggest decisions in any compiler is what to do with expressions. We want to get rid of complicated nested expressions because chances are our compilation target doesn’t support them. In my second to last post we transformed a functional language into something like SSA. In this post, we’re going to walk through a different intermediate representation: CPS.

          What is CPS

CPS is a restriction of how a functional language works. In CPS we don’t have nested expressions anymore. We instead have a series of lets which telescope out, each binding a “flat” expression. This is the process of “removing expressions” from our language. A compiler is probably targeting something with a much weaker notion of expressions (like assembly), and so we change our tree-like structure into something more linear.

Additionally, no functions return. Instead, they take a continuation, and when they are about to return they pass their value to it instead. This means that conceptually, all functions are transformed from a -> b to (a, b -> void) -> void. Logically, this is actually a reasonable thing to do: it corresponds to mapping a proposition b to ¬ ¬ b. What’s cool here is that since each function call passes its result to a continuation instead of returning it, we can imagine each function simply transferring control to some other part of the program instead of returning. This leads to a very slick and efficient way of implementing CPSed function calls, as we’ll see.
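In Haskell types, the transformation described above can be written down directly (a hypothetical illustration using Data.Void, not part of the post's code):

import Data.Void (Void)

-- ¬ b is represented as b -> Void.
type Not b = b -> Void

-- A direct-style function a -> b becomes one that never returns:
type CPSFun a b = (a, Not b) -> Void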

          This means we’d change something like

fact n = if n == 0
         then 1
         else n * fact (n - 1)

          into

fact n k = if n == 0
           then k 1
           else let n' = n - 1
                in fact n' (\r -> let r' = n * r in k r')

          To see what’s going on here we

          1. Added an extra argument to fact, its return continuation
          2. In the first branch, we pass the result to the continuation instead of returning it
          3. In the next branch, we lift the nested expression n - 1 into a flat let binding
          4. We add an extra argument to the recursive call, the continuation
5. In this continuation, we multiply the result of the recursive call by n (note that we do close over n here; this lambda is a real closure)
          6. Finally, we pass the final result to the original continuation k.

The only tree-style nesting here comes from the top if expression; everything else is completely linear.
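As a sanity check, the CPSed factorial runs unchanged in Haskell; passing id as the initial continuation recovers the direct-style result (a sketch, since the post's snippets are informal):

factCPS :: Int -> (Int -> r) -> r
factCPS n k =
  if n == 0
    then k 1
    else let n' = n - 1
         in factCPS n' (\r -> let r' = n * r in k r')

-- factCPS 5 id evaluates to 120, the same as the direct version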

          Let’s formalize this process by converting Simply Typed Lambda Calculus (STLC) to CPS form.

          STLC to CPS

          First things first, we specify an AST for normal STLC.

data Tp = Arr Tp Tp | Int deriving Show

data Op = Plus | Minus | Times | Divide

-- The Tp in App refers to the return type, it's helpful later
data Exp a = App (Exp a) (Exp a) Tp
           | Lam Tp (Scope () Exp a)
           | Num Int -- No need for binding here since we have Minus
           | IfZ (Exp a) (Exp a) (Exp a)
           | Binop Op (Exp a) (Exp a)
           | Var a

          We’ve supplemented our lambda calculus with natural numbers and some binary operations because it makes things a bit more fun. Additionally, we’re using bound to deal with bindings for lambdas. This means there’s a terribly boring monad instance lying around that I won’t bore you with.

To convert to CPS, we first need to figure out how to convert our types. Since CPS functions never return, we want them to go to Void, the uninhabited type. However, since our language doesn’t allow Void outside of continuations, and doesn’t allow functions that don’t go to Void, let’s bundle them up into one new type Cont a, which is just a function from a -> Void. This presents us with a problem: how do we turn an Arr a b into this style of function? It seems like our function should take two arguments, a and b -> Void, so that it can produce a Void of its own. However, this requires products, since currying isn’t possible under the restriction that all functions return Void! Therefore, we supplement our CPS language with pairs and projections for them.

          Now we can write the AST for CPS types and a conversion between Tp and it.

data CTp = Cont CTp | CInt | CPair CTp CTp

cpsTp :: Tp -> CTp
cpsTp (Arr l r) = Cont $ CPair (cpsTp l) (Cont (cpsTp r))
cpsTp Int       = CInt

          The only interesting thing here is how we translate function types, but we talked about that above. Now for expressions.

We want to define a new data type that encapsulates the restrictions of CPS. In order to do this we factor our data types into “flat expressions” and “CPS expressions”. Flat expressions are things like values and variables, while CPS expressions contain things like “jump to this continuation” or “branch on this flat expression”. Finally, there are let expressions to perform various operations on expressions.

data LetBinder a = OpBind Op (FlatExp a) (FlatExp a)
                 | ProjL a
                 | ProjR a
                 | Pair (FlatExp a) (FlatExp a)

data FlatExp a = CNum Int | CVar a | CLam CTp a (CExp a)

data CExp a = Let a (LetBinder a) (CExp a)
            | CIf (FlatExp a) (CExp a) (CExp a)
            | Jump (FlatExp a) (FlatExp a)
            | Halt (FlatExp a)

Lets let us bind the results of a few “primitive operations” over values and variables to a fresh variable. This is where things like “incrementing a number” happen. Additionally, in order to create a pair or access its elements we need to use a Let.

Notice that application is spelled Jump here, hinting that it really is just a jmp and doesn’t deal with the stack in any way. Since all calls are jumps, we cannot overflow the stack the way we could with a normal calling convention. To seal off the chain of function calls we have Halt; it takes a FlatExp and returns it as the result of the program.
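For example, here is a CPS program written by hand with these constructors: it applies a function bound to the (free, hypothetical) variable 0 to the argument 1, with a halting continuation. Int is used for variable names:

-- let k = λr. halt r; let p = (1, k); jump 0 p
example :: CExp Int
example =
  Let 2 (Pair (CNum 1) (CLam CInt 1 (Halt (CVar 1))))
        (Jump (CVar 0) (CVar 2))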

          Expressions here are also parameterized over variables but we can’t use bound with them (for reasons that deserve a blogpost-y rant :). Because of this we settle for just ensuring that each a is globally unique.

          So now instead of having a bunch of nested Exps, we have flat expressions which compute exactly one thing and linearize the tree of expressions into a series of flat ones with let binders. It’s still not quite “linear” since both lambdas and if branches let us have something tree-like.

          We can now define conversion to CPS with one major helper function

cps :: (Eq a, Enum a)
    => Exp a
    -> (FlatExp a -> Gen a (CExp a))
    -> Gen a (CExp a)

          This takes an expression, a “continuation” and produces a CExp. We have some monad-gen stuff going on here because we need unique variables. The “continuation” is an actual Haskell function. So our function breaks an expression down to a FlatExp and then feeds it to the continuation.

cps (Var a) c = c (CVar a)
cps (Num i) c = c (CNum i)

The first two cases are easy: since variables and numbers are already flat expressions, they go straight into the continuation.

cps (IfZ i t e) c = cps i $ \ic ->
  CIf ic <$> cps t c <*> cps e c

For IfZ we first recurse on i. Then, once we have a flattened computation representing i, we use CIf and recurse into the branches.

cps (Binop op l r) c =
  cps l $ \fl ->
  cps r $ \fr ->
  gen >>= \out ->
  Let out (OpBind op fl fr) <$> c (CVar out)

          Like before, we use cps to recurse on the left and right sides of the expression. This gives us two flat expressions which we use with OpBind to compute the result and bind it to out. Now that we have a variable for the result we just toss it to the continuation.

cps (Lam tp body) c = do
  [pairArg, newCont, newArg] <- replicateM 3 gen
  let body' = instantiate1 (Var newArg) body
  cbody <- cps body' (return . Jump (CVar newCont))
  c (CLam (cpsTp tp) pairArg $
      Let newArg (ProjL pairArg) $
      Let newCont (ProjR pairArg) $
      cbody)

Converting a lambda is a little more work. It needs to take a pair, so a lot of the work is projecting out the left component (the argument) and the right component (the continuation). With those two things in hand, we recurse into the body using the continuation supplied as an argument. The actual code makes this process look a little out of order. Remember that we only use cbody once we’ve bound the projections to newArg and newCont respectively.

cps (App l r tp) c = do
  arg  <- gen
  cont <- CLam (cpsTp tp) arg <$> c (CVar arg)
  cps l $ \fl ->
    cps r $ \fr ->
      gen >>= \pair ->
        return $ Let pair (Pair fr cont) (Jump fl (CVar pair))

For application we just create a lambda for the current continuation. We then evaluate the left and right sides of the application using recursive calls. Now that we have a function to jump to, we create a pair of the argument and the continuation and bind it to a name. From there, we just jump to fl, the function. Turning the continuation into a lambda is a little strange; it’s also why we needed the type annotation on App. The lambda uses the return type of the application and constructs a continuation that maps a to c a. Note that c a is a Haskell expression with the type CExp a.

convert :: Exp Int -> CExp Int
convert = runGen . flip cps (return . Halt)

          With this, we’ve written a nice little compiler pass to convert expressions into their CPS forms. By doing this we’ve “eliminated expressions”. Everything is now flat and evaluation basically proceeds by evaluating one small computation and using the result to compute another and another.
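Tracing the cases above on a tiny input shows the flattening at work. Assuming gen hands out 0 as the first fresh name, we get:

-- convert (Binop Plus (Num 1) (Num 2))
--   ==> Let 0 (OpBind Plus (CNum 1) (CNum 2)) (Halt (CVar 0))
--
-- i.e. "let x0 = 1 + 2 in halt x0": one flat operation, then Halt.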

There are still some things left to compile out before this is machine code though:

          • Closures - these can be converted to explicitly pass records with closure conversion
          • Hoist lambdas out of nested scope - this gets rid of anonymous functions, something we don’t have in C or assembly
          • Make allocation explicit - Allocate a block of memory for a group of let statements and have them explicitly move the results of their computations to it
          • Register allocation - Cleverly choose whether to store some particular variable in a register or load it in as needed.

Once we’ve done these steps we’ve basically written a compiler. However, they’re all made easier by the fact that we’ve already compiled out expressions and (really) function calls with our conversion to CPS; it makes the process much, much simpler.

          Wrap Up

CPS conversion is a nice alternative to something like STG machines for lazy languages or SSA for imperative ones. As far as I’m aware, the main SML compiler (SML/NJ) compiles code in this way. As does Ur/Web, if I’m not crazy. Additionally, the course entitled “Higher Order, Typed Compilation”, which is taught here at CMU, uses CPS conversion to make compiling SML really quite pleasant.

In fact, someone (Andrew Appel?) once wrote a paper noting that SSA and CPS are actually the same. The key difference is that in SSA we merge multiple blocks together using the phi function, while in CPS we just let multiple source blocks jump to the same destination block (continuation). You can see this in our conversion of IfZ to CPS: instead of using phi to merge in the two branches, they both just use the continuation to jump to the remainder of the program. It makes things a little simpler because no special merging machinery like phi is needed.
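The correspondence is easiest to see on a conditional. In SSA a join block merges the branches with phi; in CPS the join is an explicit continuation that both branches jump to, and phi becomes the continuation's parameter. A small Haskell sketch (hypothetical names, not from the post):

-- Direct style: (if c then 1 else 2) + 10
branch :: Bool -> (Int -> r) -> r
branch c k =
  let join x = k (x + 10)   -- x plays the role of phi(x1, x2)
  in if c then join 1 else join 2

-- branch True id evaluates to 11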

          Finally, if you’re compiling a language like Scheme with call/cc, using CPS conversion makes the whole thing completely trivial. All you do is define call/cc at the CPS level as

          call/cc (f, c) = f ((λ (x, c') → c x), c)

So instead of using the continuation supplied to us in the expression we give to f, we use the one for the whole call/cc invocation! This causes us not to return into the body of f, but instead to carry on the rest of the program as if f had returned whatever value x is. This is how my old Scheme compiler did things; I put off figuring out how to implement call/cc for a week before I realized it was a 10 minute job!
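A curried Haskell rendering of the same definition makes the behavior easy to test (hypothetical names; the post's version is uncurried because CPS functions take pairs):

callCC :: ((a -> (b -> r) -> r) -> (a -> r) -> r) -> (a -> r) -> r
callCC f c = f (\x _ -> c x) c

-- Early exit: jumping to k abandons the rest of f's body.
-- callCC (\k c -> k 42 (\n -> c (n + 1))) id evaluates to 42; the (+ 1) never runs.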

          Hope this was helpful!

          Categories: Offsite Blogs

          Diagrams: Diagrams + Cairo + Gtk + Mouse picking, Reloaded

          Planet Haskell - Wed, 04/29/2015 - 6:00pm

          by Brent Yorgey on April 30, 2015

          Tagged as: cairo, GTK, mouse, coordinates, transformation, features, 1.3.

          Diagrams + Cairo + Gtk + Mouse picking, reloaded

          A little over a year ago, Christopher Mears wrote a nice article on how to match up mouse clicks in a GTK window with parts of a diagram. The only downside was that to make it work, you had to explicitly construct the diagram in such a way that its coordinate system precisely matched the coordinates of the window you wanted to use, so that there was essentially no "translation" to do. This was unfortunate, since constructing a diagram in a particular global coordinate system is not a very "diagrams-y" sort of thing to do. However, the 1.3 release of diagrams includes a new feature that makes matching up mouse clicks and diagrams much easier and more idiomatic, and I thought it would be worth updating Chris's original example to work more idiomatically in diagrams 1.3. The complete code is listed at the end.

          First, here's how we construct the house. This is quite different from the way Chris did it; I have tried to make it more idiomatic by focusing on local relationships of constituent pieces, rather than putting everything at absolute global coordinates. We first create all the constituent pieces:

> -- The diagram to be drawn, with features tagged by strings.
> prettyHouse :: QDiagram Cairo V2 Double [String]
> prettyHouse = house
>   where
>     roof    = triangle 1 # scaleToY 0.75 # centerY # fc blue
>     door    = rect 0.2 0.4 # fc red
>     handle  = circle 0.02 # fc black
>     wall    = square 1 # fc yellow
>     chimney = fromOffsets [0 ^& 0.25, 0.1 ^& 0, 0 ^& (-0.4)]
>             # closeTrail # strokeT # fc green
>             # centerX
>             # named "chimney"
>     smoke = mconcat
>       [ circle 0.05 # translate v
>       | v <- [ zero, 0.05 ^& 0.15 ]
>       ]
>       # fc grey

          We then put the pieces together, labelling each by its name with the value function. Diagrams can be valuated by any monoid; when two diagrams are combined, the value at each point will be the mappend of the values of the two component diagrams. In this case, each point in the final diagram will accumulate a list of Strings corresponding to the pieces of the house which are under that point. Note how we make use of combinators like vcat and mconcat, alignments like alignB, snugL and snugR, and the use of a named subdiagram (the chimney) to position the components relative to each other. (You can click on any of the above function names to go to their documentation!)

> house = vcat
>   [ mconcat
>     [ roof    # snugR # value ["roof"]
>     , chimney # snugL # value ["chimney"]
>     ]
>     # centerX
>   , mconcat
>     [ handle # translate (0.05 ^& 0.2) # value ["handle"]
>     , door   # alignB # value ["door"]
>     , wall   # alignB # value ["wall"]
>     ]
>   ]
>   # withName "chimney" (\chim ->
>       atop (smoke # moveTo (location chim) # translateY 0.4
>                   # value ["smoke"]
>            )
>     )

          Now, when we render the diagram to a GTK window, we can get diagrams to give us an affine transformation that mediates between the diagram's local coordinates and the GTK window's coordinates. I'll just highlight a few pieces of the code; the complete listing can be found at the end of the post. We first create an IORef to hold the transformation:

          > gtk2DiaRef <- (newIORef mempty :: IO (IORef (T2 Double)))

          We initialize it with the identity transformation. We use the renderDiaT function to get not only a rendering action but also the transformation from diagram to GTK coordinates; we save the inverse of the transformation in the IORef (since we will want to convert from GTK to diagram coordinates):

> let (dia2gtk, (_,r)) = renderDiaT Cairo
>       (CairoOptions "" (mkWidth 250) PNG False)
>       prettyHouse
>
> -- store the inverse of the diagram -> window coordinate transformation
> -- for later use in interpreting mouse clicks
> writeIORef gtk2DiaRef (inv dia2gtk)

          (Note that if it is possible for the first motion notify event to happen before the expose event, then such mouse motions will be computed to correspond to the wrong part of the diagram, but who cares.) Now, when we receive a mouse click, we apply the stored transformation to convert to a point in diagram coordinates, and pass it to the sample function to extract a list of house components at that location.

> (x,y) <- eventCoordinates
>
> -- transform the mouse click back into diagram coordinates.
> gtk2Dia <- liftIO $ readIORef gtk2DiaRef
> let pt' = transform gtk2Dia (p2 (x,y))
>
> liftIO $ do
>   putStrLn $ show (x,y) ++ ": "
>           ++ intercalate " " (sample prettyHouse pt')

          The final product ends up looking and behaving identically to the video that Chris made.

          Finally, here's the complete code. A lot of it is just boring standard GTK setup.

> import Control.Monad (void)
> import Control.Monad.IO.Class (liftIO)
> import Data.IORef
> import Data.List (intercalate)
> import Diagrams.Backend.Cairo
> import Diagrams.Backend.Cairo.Internal
> import Diagrams.Prelude
> import Graphics.UI.Gtk
>
> main :: IO ()
> main = do
>   -- Ordinary Gtk setup.
>   void initGUI
>   w  <- windowNew
>   da <- drawingAreaNew
>   w `containerAdd` da
>   void $ w `on` deleteEvent $ liftIO mainQuit >> return True
>
>   -- Make an IORef to hold the transformation from window to diagram
>   -- coordinates.
>   gtk2DiaRef <- (newIORef mempty :: IO (IORef (T2 Double)))
>
>   -- Render the diagram on the drawing area and save the transformation.
>   void $ da `on` exposeEvent $ liftIO $ do
>     dw <- widgetGetDrawWindow da
>
>     -- renderDiaT returns both a rendering result as well as the
>     -- transformation from diagram to output coordinates.
>     let (dia2gtk, (_,r)) = renderDiaT Cairo
>           (CairoOptions "" (mkWidth 250) PNG False)
>           prettyHouse
>
>     -- store the inverse of the diagram -> window coordinate transformation
>     -- for later use in interpreting mouse clicks
>     writeIORef gtk2DiaRef (inv dia2gtk)
>
>     renderWithDrawable dw r
>     return True
>
>   -- When the mouse moves, show the coordinates and the objects under
>   -- the pointer.
>   void $ da `on` motionNotifyEvent $ do
>     (x,y) <- eventCoordinates
>
>     -- transform the mouse click back into diagram coordinates.
>     gtk2Dia <- liftIO $ readIORef gtk2DiaRef
>     let pt' = transform gtk2Dia (p2 (x,y))
>
>     liftIO $ do
>       putStrLn $ show (x,y) ++ ": "
>               ++ intercalate " " (sample prettyHouse pt')
>     return True
>
>   -- Run the Gtk main loop.
>   da `widgetAddEvents` [PointerMotionMask]
>   widgetShowAll w
>   mainGUI
>
> -- The diagram to be drawn, with features tagged by strings.
> prettyHouse :: QDiagram Cairo V2 Double [String]
> prettyHouse = house
>   where
>     roof    = triangle 1 # scaleToY 0.75 # centerY # fc blue
>     door    = rect 0.2 0.4 # fc red
>     handle  = circle 0.02 # fc black
>     wall    = square 1 # fc yellow
>     chimney = fromOffsets [0 ^& 0.25, 0.1 ^& 0, 0 ^& (-0.4)]
>             # closeTrail # strokeT # fc green
>             # centerX
>             # named "chimney"
>     smoke = mconcat
>       [ circle 0.05 # translate v
>       | v <- [ zero, 0.05 ^& 0.15 ]
>       ]
>       # fc grey
>     house = vcat
>       [ mconcat
>         [ roof    # snugR # value ["roof"]
>         , chimney # snugL # value ["chimney"]
>         ]
>         # centerX
>       , mconcat
>         [ handle # translate (0.05 ^& 0.2) # value ["handle"]
>         , door   # alignB # value ["door"]
>         , wall   # alignB # value ["wall"]
>         ]
>       ]
>       # withName "chimney" (\chim ->
>           atop (smoke # moveTo (location chim) # translateY 0.4
>                       # value ["smoke"]
>                )
>         )
          Categories: Offsite Blogs