News aggregator

protocol-buffers vs. aeson

Haskell on Reddit - Sat, 06/07/2014 - 11:44am

Hi, has anyone ever done a benchmark of Google's Protocol Buffers with the protocol-buffers package versus JSON with aeson? I know that aeson is quite (if not really) fast, but I haven't found any performance measurements for protocol-buffers (though it is commonly said that Protocol Buffers are "efficient" and "fast").

If no such benchmark exists, is there significant interest in having one?

submitted by dermesser
[link] [17 comments]
Categories: Incoming News

Danny Gratzer: Overview of A Scheme Compiler

Planet Haskell - Fri, 06/06/2014 - 6:00pm
Posted on June 7, 2014

For the last few months I’ve been spending a fair amount of time on a fun little Scheme to C compiler, c_of_scheme.

In this post I’ll outline the high level overview of c_of_scheme and in future posts detail the specifics of each component.

Modules

c_of_scheme is divided into 11 modules: 2 utility modules, 6 modules which each handle one step of compilation, a module with definitions of ASTs, a driver, and of course Main.

First, let’s discuss the utility modules, Utils.Gen and Utils.Error. Gen defines a monad, Gen. This is used to generate unique integers to be used as identifiers. For example:

data Var = Name String | Gen Integer

genVar :: Gen Var
genVar = Gen <$> gen

Other stages of the compiler (continuation passing style, closure conversion, and lambda lifting) need lots of temporaries so this is used throughout the compiler.

Gen also comes with a monad transformer that implements a handful of useful MTL type classes. Overall, nothing too stunning.
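For a rough idea of what Gen amounts to, here’s a sketch assuming it’s essentially a fresh-integer supply built on State (the names and details below are guesses, not the actual Utils.Gen):

{-# LANGUAGE GeneralizedNewtypeDeriving #-}

import Control.Monad.State

-- Sketch: a supply of fresh integers modelled as a State counter.
-- (MkGen is a made-up constructor name so it doesn't clash with the
-- Gen constructor of Var above.)
newtype Gen a = MkGen (State Integer a)
  deriving (Functor, Applicative, Monad)

-- Hand out the current counter value and bump it.
gen :: Gen Integer
gen = MkGen (state (\i -> (i, i + 1)))

runGen :: Gen a -> a
runGen (MkGen m) = evalState m 0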

The other uninteresting utility module is Error; it’s just a wrapper around Either with a few functions for throwing errors and for pretty printing them nicely. It’s used to signal major internal errors.

The precise interface is given by a set of functions failRW, failCPS, failClos, etc., which correspond to the stages of compilation and generate lovely pretty printed error messages for each one. This will make more sense as we go over each phase individually and see what needs to signal failure.
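For flavour, here is a minimal sketch of what such a wrapper could look like (failRW and friends are the names from the post; the exact types here are guesses):

-- Sketch only; the real Utils.Error does nicer pretty printing.
type CompM = Either String

failStage :: String -> String -> CompM a
failStage stage msg = Left ("c_of_scheme panic during " ++ stage ++ ": " ++ msg)

failRW, failCPS, failClos :: String -> CompM a
failRW   = failStage "top level rewriting"
failCPS  = failStage "CPS conversion"
failClos = failStage "closure conversion"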

A module that’s worth mentioning, though it’s neither a compilation stage nor quite a utility module, is AST. This defines the various abstract syntax trees and primops for our representation of Scheme. It also defines the compiler monad, which combines our error monad with Gen and some other bits and bobs useful for our compiler. More on AST in future posts.

Stages of Compilation

Now let’s actually go over the individual phases of compilation.

Parsing (Parser.hs)

This is the least interesting phase of compilation. I personally just dislike parsing, so I don’t have much to say about it.

A legal Scheme program is a list of definitions; we don’t currently allow top level expressions. We also don’t currently support the usual define sugar for functions.

The parser uses Parsec because I just happen to know the Parsec API, ironically because of this. If anyone cares enough to write a proper lexer and/or parser or something, I’m more than happy to help!

Rewrite Top Levels (RewriteTopLevel.hs)

This phase is a little peculiar. It exists because we’re targeting C, and C has a fairly annoying restriction on what it allows top levels to be initialized to.

In C, a top-level initializer must be a constant expression, so we can’t write something like

int quux = foo + bar;

but in our dialect of Scheme, this is the only way to write interesting computations! This phase of compilation rewrites top levels (Shocking!) to match the C definition of top levels.

This is done by changing each definition into an Init, which will later turn into a C declaration without an initializer. Next we create a new function, our main function, which is a series of assignments pairing each top level definition with its initializer.

For example

(define foo 1)
(define bar 2)
(define quux (+ foo bar))
(define _ (display quux))

will become

(init foo)
(init bar)
(init quux)
(init _)
(define magical-main
  (lambda ()
    (set! foo 1)
    (set! bar 2)
    (set! quux (+ foo bar))
    (set! _ (display quux))))

where magical-main will be the first thing called in the generated code.

One caveat: we turn (define foo (lambda (..) ...)) into something different, since it’s more efficient to convert these directly into functions.
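In Haskell terms, the heart of the pass looks something like the sketch below, written against a made-up AST (the real SDec type and the special lambda handling live in AST.hs and RewriteTopLevel.hs):

-- Made-up, heavily simplified AST purely for illustration.
type Var = String
data Exp = EVar Var | ELit Int | ESet Var Exp | ELam [Var] [Exp] | EApp Exp [Exp]
data Dec = Define Var Exp | Init Var

-- Each definition becomes an Init (a bare C declaration later on) and one new
-- main function performs the initializations in order.
-- (The real pass treats (define f (lambda ...)) specially, as noted above.)
rewriteTopLevel :: [Dec] -> [Dec]
rewriteTopLevel decs =
  [ Init v | Define v _ <- decs ] ++ [ Define "magical-main" mainLam ]
  where
    mainLam = ELam [] [ ESet v e | Define v e <- decs ]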

Continuations Passing Style Conversion (CPS.hs)

This is the first interesting bit of compilation. CPS is a style in which every function call is a tail call. Here’s an example of non-CPS code converted to CPS.

(define foo
  (lambda (x) (+ 1 x)))

(define foo-cps
  (lambda (cont x)
    ((lambda (+')
       ((lambda (one)
          ((lambda (x')
             ((lambda (result) (cont result))
              (+' one x')))
           x))
        1))
     +)))

Notice how in the CPS’ed version we’ve made evaluation order explicit and flattened away nested, non-primitive expressions: every intermediate result gets bound to a name before being passed along.
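The same shift is easy to see in Haskell terms; a toy illustration (not code from CPS.hs):

-- Direct style: the result is simply returned.
add1 :: Int -> Int
add1 y = 1 + y

-- CPS: the caller supplies a continuation that receives the result,
-- and every call is a tail call.
add1CPS :: (Int -> r) -> Int -> r
add1CPS cont y = cont (1 + y)

-- Chaining CPS'd computations makes the evaluation order explicit.
add2CPS :: (Int -> r) -> Int -> r
add2CPS cont y = add1CPS (\a -> add1CPS (\b -> cont b) a) y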

CPS.hs converts the AST to use CPS. We’ll detail this process later but for now I’ll mention one more interesting tidbit.

CPS.hs is also where we implement call/cc! In fact it’s trivial to do. All we do is add the declaration for

(define call/cc
  (lambda (c f)
    (f c (lambda (ignored x) (c x)))))

Optimizations (OptimizeCPS.hs)

This module implements the simple optimizations we perform. For now this is limited to simple inlining and constant folding, but this should improve in the future.

These optimizations are implemented quite pleasantly with recursion schemes.
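For a taste of the style, here’s a standalone sketch of a constant folder with a hand-rolled catamorphism over a toy expression functor (illustrative only; OptimizeCPS.hs works over the compiler’s real AST):

{-# LANGUAGE DeriveFunctor #-}

-- Toy expression language given as a base functor.
data ExpF r = LitF Int | AddF r r | VarF String
  deriving Functor

newtype Fix f = Fix (f (Fix f))

-- The classic catamorphism: fold the children first, then apply the algebra.
cata :: Functor f => (f a -> a) -> Fix f -> a
cata alg (Fix e) = alg (fmap (cata alg) e)

-- Constant folding is then a one-case algebra plus a catch-all.
constFold :: Fix ExpF -> Fix ExpF
constFold = cata alg
  where
    alg (AddF (Fix (LitF a)) (Fix (LitF b))) = Fix (LitF (a + b))
    alg e                                    = Fix e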

Closure Conversion + Lambda Lifting (ClosureConvert.hs)

This is the most difficult phase of compilation, for me anyway. Conceptually, though, it’s quite simple.

The idea is that we take the implicit closure “argument” that all Scheme procedures take and make it explicit. To this end we add three new primops, NewClos, ReadClos, and WriteClos. These do much what you would expect and let us treat closures opaquely as first-class values.

Next we change each procedure to take an extra argument, its closure, and change closed over variables to be selected from this closure. Finally we change each lambda to be paired with its closure when constructed.
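To make that concrete, here is a very rough before/after picture on a made-up mini-AST (names and primop arguments are illustrative; the real ClosureConvert.hs is richer and also does the lambda lifting):

-- Made-up mini-AST, purely for illustration.
data Exp
  = EVar String
  | ELam [String] Exp            -- source-level lambda
  | ELamC String [String] Exp    -- converted lambda: explicit closure parameter
  | EApp Exp [Exp]
  | EPrim Prim [Exp]             -- the new primops appear after conversion
data Prim = NewClos | ReadClos | WriteClos

-- Before conversion: (lambda (x) (+ y x)) closes over y implicitly.
before :: Exp
before = ELam ["x"] (EApp (EVar "+") [EVar "y", EVar "x"])

-- After conversion (schematically): the lambda takes an explicit closure "env"
-- and reads y out of it (slot index omitted), and the closure itself is built
-- with NewClos, storing the captured y, right where the lambda used to be.
after :: Exp
after = EPrim NewClos
          [ ELamC "env" ["x"]
              (EApp (EVar "+") [EPrim ReadClos [EVar "env"], EVar "x"])
          , EVar "y"
          ]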

This sounded pretty feasible to me on paper, but in practice it seems to be the greatest source of bugs in c_of_scheme. It finally seems to work nicely now so I’ll be sure to blog about it soon.

Code Generation (CodeGen.hs)

This is the final stop in our compilation pipeline - we generate C code.

To do this we use one of my libraries. This is actually a fairly simple step in the compiler, since closure-converted, CPS-ed code is already quite close to C.

Some of the details that code generation handles:

  • Interfacing to the runtime system
  • Generating the main method
  • Generating declarations for all the variables used in our intermediate language
  • Mapping the Scheme variables to appropriate C names

While this might sound daunting, it isn’t actually so bad.
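As one tiny example of the last two points, mapping Scheme identifiers (which allow characters like -, ? and /) onto legal C names can be a simple mangling pass; a hypothetical sketch (the real CodeGen.hs may do this differently):

import Data.Char (isAlphaNum)

-- Hypothetical name mangler: keep alphanumerics, escape everything else,
-- and prefix a tag so generated names can't collide with C keywords or
-- runtime symbols. E.g. mangle "call/cc" == "scm_call_47cc".
mangle :: String -> String
mangle = ("scm_" ++) . concatMap escape
  where
    escape c
      | isAlphaNum c || c == '_' = [c]
      | otherwise                = '_' : show (fromEnum c)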

Driver (Driver.hs)

While I might not write a post on it, Driver is my personal favorite module. It glues together all of the previous compilation phases and provides a bunch of nice high level functions like compileScheme.

The reason I like it so much is that all the code in it is a very nice, clean example of composing components as good old functions.

If you’re looking to understand c_of_scheme’s particular implementation, I’d urge you to start with Driver. It’ll give you a bit of intuition for what goes where.

The Runtime System

Currently c_of_scheme has an incredibly naive runtime system. Mostly because it’s being written by an incredibly naive C programmer (hi!).

I already wrote about the most interesting bit of the RTS: tail calls.

I plan on talking a bit about the RTS in the context of code generation (since it’d be impossible not to), and perhaps a post on c_of_scheme’s simple little mark and sweep GC.

Wrap Up

So that’s the high level overview of c_of_scheme. I think the compiler is best summed up by one particular function in Driver.hs:

compileScheme :: [SDec UserPrim] -> Compiler [CExtDecl]
compileScheme =
  addPrimops >=> makeMain >=> cpsify >=> optimizeCPS >=> closConvert >=> codegen
  where addPrimops = return . (++prims)

This chains together all the phases of compilation into one big old function from the Scheme AST to the C one.
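If you’re rusty on >=>: it’s Kleisli composition from Control.Monad, so f >=> g runs f and feeds its result to g, all inside the monad. A toy example with Maybe:

import Control.Monad ((>=>))

-- (f >=> g) x  behaves like  f x >>= g
halveThenPred :: Int -> Maybe Int
halveThenPred = halve >=> positivePred
  where
    halve n        = if even n then Just (n `div` 2) else Nothing
    positivePred n = if n > 0  then Just (n - 1)     else Nothing

-- halveThenPred 10 == Just 4
-- halveThenPred 7  == Nothing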

Now, if you’re really interested in c_of_scheme, go ahead and grab the source with

hg clone ssh://hg@bitbucket.org/jozefg/c_of_scheme

I do use mercurial so you can also grab a zip from bitbucket if you’re unwilling to use mercurial for one command :)

I should have posts about each specific phase of compilation up Real Soon Now. I’ll edit this post with a list of links as they’re written.

Thanks to @tylerholien for proofreading

Categories: Offsite Blogs

How do you choose the fixity of operators when designing DSLs?

Haskell on Reddit - Fri, 06/06/2014 - 5:38pm

Since designing DSLs in Haskell is so common, I'm just wondering if there are some tricks out there to keep the fixity declarations the best they can be, or if it's just a matter of trial and error?

Thanks!

submitted by 5outh
[link] [28 comments]
Categories: Incoming News

Why are non-parallel programs run with parallel flags slower?

Haskell on Reddit - Fri, 06/06/2014 - 3:33pm

I compile my non-parallel program with GHC 7.8.2 with parallel flags. When I run the program with parallel RTS options (+RTS -N -RTS), the performance is significantly worse than running without any flags. Why is this?

submitted by fiendfan1
[link] [6 comments]
Categories: Incoming News

Final call for talk proposals: HOPE'14 (Workshop on Higher-Order Programming with Effects, affiliated with ICFP'14)

General haskell list - Fri, 06/06/2014 - 2:26pm
----------------------------------------------------------------------
CALL FOR TALK PROPOSALS

HOPE 2014
The 3rd ACM SIGPLAN Workshop on Higher-Order Programming with Effects

August 31, 2014
Gothenburg, Sweden (the day before ICFP 2014)

http://hope2014.mpi-sws.org
----------------------------------------------------------------------

HOPE 2014 aims at bringing together researchers interested in the design, semantics, implementation, and verification of higher-order effectful programs. It will be *informal*, consisting of invited talks, contributed talks on work in progress, and open-ended discussion sessions.

---------------------
Goals of the Workshop
---------------------

A recurring theme in many papers at ICFP, and in the research of many ICFP attendees, is the interaction of higher-order programming with various kinds of effects
Categories: Incoming News

Ken T Takusagawa: [qplzrxby] Letter and polygram frequencies

Planet Haskell - Fri, 06/06/2014 - 1:32pm

Part 1

In order of decreasing frequency, here are the common English polygrams (single letters, bigrams, trigrams, etc., also called n-grams) gathered from a private corpus.  We arbitrarily stop at the least frequent single letter, namely "z". There are 326 polygrams including the 26 single letters.

e t a o i n s r h l d u c th m g y p he w f in b the re er an on at ha v ou it en ng or k to es st ing te ar is ti al nd se nt ed as ve le me ll ea hi ne co hat of li de be tha no ri ca ic ot ro ly and ho ut om us that so io ra el et ma wh ce pe ta wa ch la fo ion ur si ent ec di il do ge ee pr her thi yo ac ns ow for ul ke all un ere we ss id ct em rs lo tio wi tion you rt ay ad mo ld po ai tr bu su mi wo hin oo pa pl ts ter gh av sh os nc ol ig ther ey ab am op ir na ni im ie if ate j thin not sa x ver fi bl are go bo one ev rea out ave tu ati oul was ry uld ould hey ome vi ba hav they here iv res ci tt ith ck ag con fe but ers have ia lly eve ap rd wit fr ally da pro sta mp his tin with ill ny ex hou ess ty cl com ht nk up sp ye ei can ear ght int est nce ki ff ep ted use ug ons od igh men ov ting som some pp ive atio our ore ation hing eo ect ak uc sti cou ple any ue ide ust ju ik pre gr cr der ef gi tho rr um ink ga ls there ike wha fa act han sc what ant thing ment q ell ua cu ight art ew bi whe oi oun ugh this nte per ica ble ist wou lik au like ru ove ob woul would rn z

Part 2

To avoid the redundancy between polygrams like "woul" and "would" in Part 1, we choose a subset of them by iteratively finding the length of the longest polygrams more frequent than the original frequency of "z", then greedily selecting the most frequent polygram among those with that length.  We cut out that polygram from the corpus, then reanalyze and repeat.  (This was a long computation.)  The polygrams got eliminated in the following order.  This list of 126 polygrams has far less redundancy.

5 letters: ation there thing would
4 letters: that they have tion ally with ting some ther what this ight like thin
3 letters: the ing and ent for you ter ver one are not ate out was con all but pro res ere ill com rea ear use can cou any nce ple ust ive hou ers ell ess ant ble
2 letters: in to it re is on ed er es of en ar or as al an ic be st se ly ch ow at id wh le ac ay me un ad ro su ge if th mo go pe so lo do de us po ab he ir im am ld tr ur ne ap gh il we ve ts ke no ma ex ul ta te co pl

The remaining single letters in decreasing order of frequency were
s t i a m e d o p c h l u f y r b n w g k v j q z

The letter "x" dropped below the "z" frequency threshold after elimination of the "ex" bigram.
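Before moving on to Part 3, here is a simplified sketch of one round of the Part 2 procedure described above (not the actual source: real cuts split a word in two, and the real computation needs to be far more careful about performance):

import Data.List (inits, isPrefixOf, maximumBy, tails)
import Data.Ord (comparing)
import qualified Data.Map.Strict as M

-- All polygrams (contiguous substrings) of a word, up to length 5.
polygrams :: String -> [String]
polygrams w = [ g | t <- tails w, g <- take 5 (drop 1 (inits t)) ]

-- Total count of every polygram over a (word, count) corpus.
countGrams :: [(String, Int)] -> M.Map String Int
countGrams corpus =
  M.fromListWith (+) [ (g, n) | (w, n) <- corpus, g <- polygrams w ]

-- One greedy round: among polygrams more frequent than the "z" threshold,
-- prefer the longest, then the most frequent, and cut it from the corpus.
-- (Simplification: here we merely delete the first occurrence.)
step :: Int -> [(String, Int)] -> Maybe (String, [(String, Int)])
step zFreq corpus
  | null candidates = Nothing
  | otherwise       = Just (best, [ (cut best w, n) | (w, n) <- corpus ])
  where
    candidates = [ gn | gn <- M.toList (countGrams corpus), snd gn > zFreq ]
    maxLen     = maximum (map (length . fst) candidates)
    best       = fst (maximumBy (comparing snd)
                        [ gn | gn <- candidates, length (fst gn) == maxLen ])

cut :: String -> String -> String
cut g w
  | g `isPrefixOf` w = drop (length g) w
  | otherwise        = case w of
      []       -> []
      (c : cs) -> c : cut g cs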

Part 3.1

Part 2 cut occurrences of a polygram out of a word, so, for example, the word "others" became two words "o" and "s" in the corpus after elimination of "ther". In this part, we instead replace a common polygram with a single composite character, so the 5-letter word "other" becomes the 3-letter word "o(the)r", where (the) counts as only one letter. We choose composite characters greedily, maximizing at each step the scoring function N*(L-1), where N is the number of occurrences and L is the length of the polygram. This scoring function is equivalent to the total number of keystrokes that would be saved in typing the entire corpus if the polygram could be typed with one keystroke. After substituting the highest-scoring composite polygram throughout the corpus, we reanalyze and repeat. (This was a long computation.) We arbitrarily stop when the best score drops below the frequency of "z", finding 126 polygrams:

the in re an that on er ou it st en (in)g or al at to is ar le as ed th es of ic (an)d ly have om be se wh no (ou)ld i(on) but ac ight peop(le) (en)t id ll w(it)h like f(or) ch y(ou) ay ad pro me ab(ou)t we su s(om)e ge he lo ve if a(re) do v(er) un (th)(in)k go (ou)gh de (no)t po w(as) (on)e mo (the)y ju(st) fr(om) am so ir us im te ne c(on) ce il tr t(er) ap how (al)(ly) (be)cau(se) ct c(om) ab (at)e (al)l i(ll) o(the)r ur (at)(i(on)) ak pe ig (an)y ts p(re) ol sh (the)(re) k(no)w la (th)((in)g) d(if)fe(re)n ex (ou)nd (th)(is) (wh)(at) (wh)(ic)h c(an) (ac)t li ri ro w((ou)ld) m((en)t)

Part 3.2

Here are the 218 composite polygrams selected in greedy order when using the scoring function N*L instead of N*(L-1). We now need to explicitly forbid creating useless one-letter composite characters.

th in (th)e re an on at er ou st en (in)g it or al to is ar le as ed es of ic (an)d ly om (th)(at) be se wh no ve i(on) ac (en)t ha id ll f(or) y(ou) ay ch (ou)ld me ut ro li we su gh ad ge lo he ab wi ke pe do a(re) v(er) if un go de (no)t ri s(om)e po (ou)t w(as) (on)e mo ((th)e)y so am ir us ne c(on) b(ut) ce ct (pe)op(le) (ha)(ve) te (al)(ly) t(er) (th)(in)k il (wi)(th) ho c(om) im p(ro) ju(st) (gh)t fr(om) (at)e (al)l (at)(i(on)) (an)y ((th)e)r ur fe p(re) la ma tr ((th)e)(re) (th)((in)g) ap ex ts co (th)(is) (wh)(at) c(an) (ac)t (er)s (ou)n w((ou)ld) m((en)t) ss ta (li)(ke) (no)w ye qu (be)cau(se) (ar)t (an)t up sh d(on) (ab)((ou)t) ul ig ag bo my (en)d (ge)t (ar)d pl sa i(st) (ou)r by (wh)(ic)h w(or) em (ho)w (wh)o (mo)(re) ol k((in)g) ti h(as) (ou)(gh) i(ll) um d(er) i(ve) p(er) (es)s w(ay) (in)d od (it)y (re)d v(en) ok (th)(an) gu (in)e ra (re)n o(((th)e)r) (do)(es) (ac)k sp ((th)e)m publ(ic) (or)t (id)e fo (wh)e(re) (ic)e d(id) ty s(ay) mu(ch) (wi)(ll) (we)(ll) (as)s k((no)w) pr t((in)g) op ud d(ay) cl (wh)(en) (ou)s (an)s ck a(st) ((ou)n)d b(le) ci gr wn (ab)(le) (v(er))y (of)f (in)t

Part 3.3

Here are the 62 composite polygrams selected in greedy order when using the scoring function N*(L-2).

the that ing tion and ould ent have for all people think with you about some ter though what ver ight this like not one are ate because was just from con but which pro res more ere ill com rea ear actu(all)y can than any does nce time ive well know (the)re ers when ess ound ant able th(ing) out part

Source code

Here is the Haskell source code.  The main optimization was to preprocess the corpus into a list of words with word counts, using sort | uniq -c.
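Reading that sort | uniq -c output back into (word, count) pairs is a one-liner; a small sketch (assuming the usual two-column count-then-word format):

-- Parse lines of the form "  42 word" as produced by `sort | uniq -c`.
-- e.g. parseCounts "  3 the\n  1 zebra\n" == [("the",3),("zebra",1)]
parseCounts :: String -> [(String, Int)]
parseCounts input =
  [ (w, read n) | line <- lines input, [n, w] <- [words line] ]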

The Google Books n-grams corpus is another choice, but I don't like how it includes words like "iii" among the common words, which is probably an artifact of OCR scanning Roman numeral page numbers.

Previous vaguely similar attempt, in Perl. Previous thoughts invoking Braille contractions. I have heard anecdotes that Braille contractions were derived from frequencies in the Bible.

Categories: Offsite Blogs

Mike Izbicki: I got lenses in my Functors

Planet Haskell - Fri, 06/06/2014 - 8:37am

The typeparams package provides type lenses. Let’s combine them with Functors. Because why not?! You’ll need to have at least skimmed the linked README to understand what’s going on here.

First, enable some GHC magic:

> {-# LANGUAGE TemplateHaskell #-}
> {-# LANGUAGE ScopedTypeVariables #-}
> {-# LANGUAGE KindSignatures #-}
> {-# LANGUAGE TypeFamilies #-}
> {-# LANGUAGE MultiParamTypeClasses #-}
> {-# LANGUAGE UndecidableInstances #-}
> {-# LANGUAGE FlexibleInstances #-}
> {-# LANGUAGE RankNTypes #-}

And import our libraries:

> import Control.Category
> import Prelude hiding ( (.), id, Functor(..) )
> import Data.Params

We’ll use the Either type as our main example. It’s defined as:

data Either a b = Left a | Right b

The Functor instance is pretty straightforward:

class Functor f where
  fmap :: (a -> b) -> f a -> f b

instance Functor (Either a) where
  fmap f (Left a)  = Left a
  fmap f (Right b) = Right $ f b

But this instance has a key limitation: We can map a function only over the last type.

Bifunctors are the current solution to this problem. A recent, popular proposal suggested adding them to base. But this is an ad hoc solution whose application does not extend far beyond the Either type.
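For reference, that class looks roughly like this (sketched from the bifunctors package; bimap maps the two sides independently):

class Bifunctor p where
  bimap :: (a -> b) -> (c -> d) -> p a c -> p b d

instance Bifunctor Either where
  bimap f _ (Left a)  = Left (f a)
  bimap _ g (Right b) = Right (g b)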

Type lenses will (kinda sort of) provide a cleaner solution. That is, they fix the problem about as well as regular old lenses fix the problems of record selectors. As a bonus, we’ll get a convenient mechanism for mapping over nested Functors.

Defining the Functor

Here is the alternative definition of the Functor class using type lenses:

> class Functor lens t where
>   fmap' :: a ~ GetParam lens t
>         => TypeLens p lens
>         -> (a -> b)
>         -> t
>         -> SetParam lens b t

It’s okay if you don’t understand the type signature at first glance. (That’s how you know you’re using lenses, after all!) Let’s step through it using the Either example.

The first argument is the type lens. It indicates which parameter of the type t we will be mapping over. In the Either data type, we could use the lens _a to map over the Left component or _b to map over the Right.

Next, we encounter two type families, GetParam and SetParam. These act as getters and setters at the type level. In the above example, GetParam is used to extract arbitrary type params from a type. It is defined as:

type family GetParam (p::k1) (t:: *) :: k3
type instance GetParam Param_a (Either a b) = a
type instance GetParam Param_b (Either a b) = b

The SetParam type similarly sets the type of arbitrary params in a type. It is defined as:

type family SetParam (p::k1) (a::k2) (t:: *) :: *
type instance SetParam Param_a a' (Either a b) = Either a' b
type instance SetParam Param_b b' (Either a b) = Either a b'

These instances can be automatically provided for any type by calling the mkParams Template Haskell function like so:

> mkParams ''Either

Quick aside: With injective type families and a little sugar, we could make this definition of Functor a tad cleaner.

Instances

We can replicate the traditional Functor instance with the code:

instance Functor (Param_b Base) (Either a b) where
  fmap' lens f (Left a)  = Left a
  fmap' lens f (Right b) = Right $ f b

and create a “Left” Functor instance as:

instance Functor (Param_a Base) (Either a b) where
  fmap' lens f (Left a)  = Left $ f a
  fmap' lens f (Right b) = Right b

Together, these instances let us run the commands:

ghci> fmap _b length $ Left "Roses are red,"
Left "Roses are red,"
ghci> fmap _b length $ Right "Violets are blue,"
Right 17
ghci> fmap _a length $ Left "Haskell is fun,"
Left 15
ghci> fmap _a length $ Right "Type lenses are cool."
Right "Type lenses are cool."

But wait! There’s more!

With the above definitions, we can’t combine our type lenses at all. Enter the funnily named and awkwardly typed zoom combinator:

zoom :: TypeLens a p -> TypeLens a (Zoom p)

This combinator lets us zoom into a composed type lens, removing the outermost layer. For example, given the composed type lens:

ghci> :t _a._b._a._b
_a._b._a._b :: TypeLens a (Param_a (Param_b (Param_a (Param_b a))))

Then zooming in removes the first _a:

ghci> :t zoom (_a._b._a._b)
zoom (_a._b._a._b) :: TypeLens a (Param_b (Param_a (Param_b a)))

We will use this combinator to redefine our Functor instances. The new instances will recursively map over every Functor in our input lens:

> instance Functor p b => Functor (Param_b p) (Either a b) where
>   fmap' lens f (Left a)  = Left a
>   fmap' lens f (Right b) = Right $ fmap' (zoom lens) f b
>
> instance Functor p a => Functor (Param_a p) (Either a b) where
>   fmap' lens f (Left a)  = Left $ fmap' (zoom lens) f a
>   fmap' lens f (Right b) = Right b

The type Base provides the base case of the recursion:

> instance Functor Base t where
>   fmap' _ f a = f a

Now, in order to call fmap’, we must compose our lens with the type lens:

_base :: TypeLens Base Base

For example:

ghci> :t _a._b._a._b._base
_a._b._a._b._base :: TypeLens Base (Param_a (Param_b (Param_a (Param_b Base))))

And we call fmap’ like:

ghci> fmap' (_a._b._a._b._base) length $ Left $ Right $ Left $ Right "still simpler than the lens package "
Left (Right (Left (Right 42)))
ghci> fmap' (_a._b._a._b._base) length $ Left $ Right $ Left $ Left "... for now ..."
Left (Right (Left (Left "... for now ...")))

Composing all of our lenses with _base is tedious. So let’s write a function that automates that task:

> fmap ::
>   ( Functor lens t
>   ) => TypeLens Base lens
>     -> (GetParam lens t -> c)
>     -> t
>     -> SetParam lens c t
> fmap lens = fmap' (lens._base)

And we call fmap as:

ghci> fmap (_a._b._a._b) length $ Left $ Right $ Left $ Left "mwahhahahaha"
Left (Right (Left (Left "mwahhahahaha")))

More Functors!

We can easily define more of these new Functor instances. In fact, the procedure is exactly as mechanical for type lens based Functors as it is for the traditional Functors. All you have to do is replace every function application with a recursive Functor call:

f x --> fmap' (zoom lens) f x

Here are some examples using the list and Maybe functors:

> mkParams ''[]
> instance Functor p a => Functor (Param_a p) [a] where
>   fmap' lens f []     = []
>   fmap' lens f (a:as) = fmap' (zoom lens) f a : fmap' lens f as
>
> mkParams ''Maybe
> instance Functor p a => Functor (Param_a p) (Maybe a) where
>   fmap' lens f Nothing  = Nothing
>   fmap' lens f (Just a) = Just $ fmap' (zoom lens) f a

Let’s create a variable that uses all of our functors:

> monster =
>   [ Nothing
>   , Just (Left "Hello!")
>   , Just (Right 42)
>   , Just (Left "World!")
>   ]

And go to town:

ghci> fmap (_a._a._a._a) succ monster
[Nothing,Just (Left "Ifmmp\""),Just (Right 42),Just (Left "Xpsme\"")]
ghci> fmap (_a._a._a) length monster
[Nothing,Just (Left 6),Just (Right 42),Just (Left 6)]
ghci> fmap (_a._a) (const 3.4) monster
[Nothing,Just 3.4,Just 3.4,Just 3.4]
ghci> fmap _a show monster
["Nothing","Just (Left \"Hello!\")","Just (Right 42)","Just (Left \"World!\")"]

Tune in next time…

In our next installment, we’ll tackle Applicative parsing with type lenses. Thought the lens package had too many operators??? You ‘aint seen ‘nothin yet.

Categories: Offsite Blogs

Are Haskell salaries competitive?

Haskell on Reddit - Thu, 06/05/2014 - 10:16pm

I've been wondering if salaries for positions writing Haskell are competitive with salaries for positions writing more mainstream programming languages. Haskellers say that they're more productive when writing Haskell, and I would hope that a productive engineer is rewarded accordingly. However, there's also a sentiment that "I'd be willing to take a pay cut, if I could write Haskell full time." In addition, the market for Haskell programmers has been described as a "buyer's market", and I would expect this to drive down the salary of a Haskell developer. Does anybody care to share their story of being employed as a Haskell developer or of employing Haskell developers?

submitted by cheecheeo
[link] [41 comments]
Categories: Incoming News

What's the state of oauth2 with Haskell?

Haskell on Reddit - Thu, 06/05/2014 - 9:03pm

I have been looking at all of my usual resources (hackage, mostly) trying to figure out the easiest way to implement an OAuth2 authentication system with a Yesod app. I just want to make API calls to an OAuth2 provider. It seems that every library I look at is either broken or incomplete or both.

How are you guys handling this?

submitted by 2piix
[link] [5 comments]
Categories: Incoming News

Why does ghci contain mingw?

Haskell on Reddit - Thu, 06/05/2014 - 10:37am

I just realized (by looking through my environment variables) that ghci came with its own installation of mingw and added the bin directory to PATH. I removed it from PATH, and ghci seems to run just fine, so what is it for?

submitted by ym_twosixonetwo
[link] [9 comments]
Categories: Incoming News

Music-Suite

Haskell on Reddit - Thu, 06/05/2014 - 6:16am
Categories: Incoming News