News aggregator

Are there standard functions `left :: Either a b -> Maybe a` and `right :: Either a b -> Maybe b` somewhere?

Haskell on Reddit - Tue, 05/05/2015 - 1:07pm

The implementation is easy:

left = either Just (const Nothing)
right = either (const Nothing) Just

I was expecting to find this in a standard library, but didn't turn anything up in Hoogle.
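For reference, the two helpers from the question written out as a complete module (these names are the asker's proposal, not part of base):

```haskell
-- The asker's proposed helpers; `either` from the Prelude does all the work.
left :: Either a b -> Maybe a
left = either Just (const Nothing)

right :: Either a b -> Maybe b
right = either (const Nothing) Just

main :: IO ()
main = do
  print (left  (Left 'x' :: Either Char Int))  -- Just 'x'
  print (right (Right 3  :: Either Char Int))  -- Just 3
  print (right (Left 'x' :: Either Char Int))  -- Nothing
```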

submitted by peterjoel
[link] [19 comments]
Categories: Incoming News

Extensible states

haskell-cafe - Tue, 05/05/2015 - 10:40am
Hi,

Anyone used some of the extensible record packages to create a kind of extensible state monad? I mean something that, besides having "get", "gets" and "put", would have some kind of "add" and "gets":

add :: a -> State ()
gets :: State (Maybe a)

or

add :: LabelName -> a -> State ()
gets :: LabelName -> State (Maybe a)

So that I can extend the state without using additional monad transformers. Monad transformers are very hard for beginners and scramble error messages. I did the first option for the MFlow, hplayground and Transient packages (setSData and getSData). But my solution uses a map indexed by type, and this requires a lookup for each access. I would like to know if there is an alternative with no lookups. I'm not obsessed with speed, but in some applications the speed may be important. Anyone?
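For concreteness, here is a minimal sketch of the map-indexed-by-type approach the poster describes (my own illustration, not the MFlow/Transient code; `addS`/`getS` are invented names, and it uses mtl's `State`):

```haskell
{-# LANGUAGE ScopedTypeVariables #-}
-- A state keyed by the TypeRep of the stored value, in the spirit of
-- setSData/getSData: one Map lookup per access, no extra transformers.
import Control.Monad.State (State, evalState, gets, modify)
import Data.Dynamic (Dynamic, fromDynamic, toDyn)
import qualified Data.Map as M
import Data.Typeable (TypeRep, Typeable, typeOf)

type DynState = M.Map TypeRep Dynamic

-- Store a value, replacing any previous value of the same type.
addS :: Typeable a => a -> State DynState ()
addS x = modify (M.insert (typeOf x) (toDyn x))

-- Retrieve the value of the requested type, if one was stored.
getS :: forall a. Typeable a => State DynState (Maybe a)
getS = gets (\m -> M.lookup (typeOf (undefined :: a)) m >>= fromDynamic)

demo :: (Maybe Int, Maybe String)
demo = flip evalState M.empty $ do
  addS (42 :: Int)
  addS "hello"
  i <- getS
  s <- getS
  pure (i, s)

main :: IO ()
main = print demo
```

The `TypeRep` lookup is exactly the per-access cost the poster wants to avoid; avoiding it seems to require fixing the set of fields statically (e.g. with an extensible-record type), at which point the "add at any time" flexibility is lost.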
Categories: Offsite Discussion

Three Complementary Approaches to Bidirectional Programming

Haskell on Reddit - Tue, 05/05/2015 - 10:27am

Just stumbled across a cool paper titled Three Complementary Approaches to Bidirectional Programming

The idea is to define higher-order functions that take a forward transformation function and return the corresponding inverse function (i.e. the backward transformation).

I think you will enjoy it!

submitted by r0naa
[link] [comment]
Categories: Incoming News

Theory Lunch (Institute of Cybernetics, Tallinn): A concrete piece of evidence for incompleteness

Planet Haskell - Tue, 05/05/2015 - 10:23am

On Thursday, the 25th of March 2015, Venanzio Capretta gave a Theory Lunch talk about Goodstein’s theorem. Later, on the 9th of March, Wolfgang Jeltsch talked about ordinal numbers, which are at the base of Goodstein’s proof. Here, I am writing down a small recollection of their arguments.

Given a base $b \geq 2$, consider the base-$b$ writing of the nonnegative integer $n$,

$n = c_k b^k + c_{k-1} b^{k-1} + \cdots + c_1 b + c_0 ,$

where each $c_i$ is an integer between $0$ and $b - 1$. The Cantor base-$b$ writing of $n$ is obtained by iteratively applying the base-$b$ writing to the exponents as well, until the only values appearing are integers between $0$ and $b$. For example, for $b = 2$ and $n = 19$, we have

$19 = 2^4 + 2^1 + 2^0$

and also

$19 = 2^{2^2} + 2 + 1 .$

Given a nonnegative integer $n$, consider the Goodstein sequence $(a_m)_{m \geq 0}$ defined by putting $a_0 = n$, and by constructing $a_{m+1}$ from $a_m$ as follows:

  1. Take the Cantor base-$(m+2)$ representation of $a_m$.
  2. Convert each $m + 2$ into $m + 3$, getting a new number.
  3. If the value obtained at the previous point is positive, then subtract $1$ from it.
    (This is called the woodworm’s trick.)

Goodstein’s theorem. Whatever the initial value $n$, the Goodstein sequence ultimately reaches the value $0$ in finitely many steps.
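The three steps above can be sketched in Haskell (my own sketch, not from the talks; the hereditary representation type and function names are invented):

```haskell
-- One Goodstein step: write n in hereditary (Cantor) base b, reinterpret
-- every b as b+1, then subtract 1 if the result is positive.
-- A hereditary representation is a list of (coefficient, exponent) pairs,
-- with each exponent itself in the same form.
data Hered = Hered [(Integer, Hered)] deriving Show

toHered :: Integer -> Integer -> Hered
toHered b n = Hered [ (c, toHered b e) | (c, e) <- digits 0 n, c /= 0 ]
  where
    digits _ 0 = []
    digits e m = (m `mod` b, e) : digits (e + 1) (m `div` b)

-- Evaluate a hereditary form at a (possibly different) base.
eval :: Integer -> Hered -> Integer
eval b (Hered ts) = sum [ c * b ^ eval b e | (c, e) <- ts ]

goodsteinStep :: Integer -> Integer -> Integer
goodsteinStep b n
  | n == 0    = 0
  | otherwise = eval (b + 1) (toHered b n) - 1

-- The sequence itself: a_0 = n, the base starts at 2 and grows each step.
goodstein :: Integer -> [Integer]
goodstein n = go 2 n where go b a = a : go (b + 1) (goodsteinStep b a)

main :: IO ()
main = print (take 6 (goodstein 3))  -- [3,3,3,2,1,0]
```

Starting from 3 the sequence dies quickly; starting from 4 it already takes about $3 \cdot 2^{402653210}$ steps to reach zero, which is why the theorem is so striking.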

Goodstein’s proof relies on the use of ordinal arithmetic. Recall the definition: an ordinal number is an equivalence class of well-ordered sets modulo order isomorphisms, i.e., order-preserving bijections. Observe that such an order isomorphism between well-ordered sets, if it exists, is unique: if $f$ and $g$ are two distinct order isomorphisms between the well-ordered sets $X$ and $Y$, then the set of points where they disagree has a minimum $x_0$, and the smaller of $f(x_0)$ and $g(x_0)$ cannot correspond to any element of $X$.

An initial interval in a well-ordered set $X$ is a subset of the form $\{ x \in X \mid x < z \}$ for some $z \in X$.

Fact 1. Given any two well-ordered sets, either they are order-isomorphic, or one of them is order-isomorphic to an initial interval of the other.

In particular, every ordinal $\alpha$ is order-isomorphic to the interval of all the ordinals smaller than $\alpha$.

All ordinal numbers can be obtained via von Neumann’s classification:

  • The zero ordinal is $0 = \emptyset$, which is trivially well-ordered as it has no nonempty subsets.
  • A successor ordinal is an ordinal of the form $\alpha + 1 = \alpha \cup \{\alpha\}$, with every object in $\alpha$ being smaller than $\alpha$ in $\alpha \cup \{\alpha\}$.
    For instance, $3$ can be seen as $\{0, 1, 2\}$.
  • A limit ordinal is a nonzero ordinal which is not a successor. Such an ordinal must be the least upper bound of the collection of all the ordinals below it.
    For instance, the smallest transfinite ordinal $\omega$ is the limit of the collection of the finite ordinals.

Observe that, with this convention, each ordinal is an element of every ordinal strictly greater than itself.

Fact 2. Every set of ordinal numbers is well-ordered with respect to the relation: $\alpha < \beta$ if and only if $\alpha \in \beta$.

Operations between ordinal numbers are defined as follows (up to order isomorphisms):

  • $\alpha + \beta$ is a copy of $\alpha$ followed by a copy of $\beta$, with every object in the copy of $\alpha$ being strictly smaller than any object in the copy of $\beta$.
    If $\alpha$ and $\beta$ are finite ordinals, then $\alpha + \beta$ has the intuitive meaning. On the other hand, $1 + \omega = \omega$, as a copy of $1$ followed by a copy of $\omega$ is order-isomorphic to $\omega$: but $\omega + 1$ is strictly larger than $\omega$, as the latter is an initial interval of the former.
  • $\alpha \cdot \beta$ is a stack of $\beta$ copies of $\alpha$, with each object in each layer being strictly smaller than any object of any layer above.
    If $\alpha$ and $\beta$ are finite ordinals, then $\alpha \cdot \beta$ has the intuitive meaning. On the other hand, $2 \cdot \omega$ is a stack of $\omega$ copies of $2$, which is order-isomorphic to $\omega$: but $\omega \cdot 2$ is a stack of $2$ copies of $\omega$, which is order-isomorphic to $\omega + \omega$.
  • $\alpha^\beta$ is $1$ if $\beta = 0$, is $\alpha^\gamma \cdot \alpha$ if $\beta$ is the successor of $\gamma$, and is the least upper bound of the ordinals of the form $\alpha^\gamma$ with $\gamma < \beta$ if $\beta$ is a limit ordinal.
    If $\alpha$ and $\beta$ are finite ordinals, then $\alpha^\beta$ has the intuitive meaning. On the other hand, $2^\omega$ is the least upper bound of all the ordinals of the form $2^n$ where $n$ is a finite ordinal, which is precisely $\omega$: but $\omega^2 = \omega \cdot \omega > \omega$.
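These operations can be made concrete with a small Haskell sketch of ordinals below $\varepsilon_0$ in Cantor normal form (my own illustration; `Ord0`, `finO`, `omegaO`, `addO` are invented names):

```haskell
-- An ordinal below epsilon_0 as a sum of terms omega^e * c with strictly
-- decreasing exponents e and positive coefficients c. For such normal
-- forms, the derived lexicographic comparison of term lists is exactly
-- the ordinal order.
newtype Ord0 = Ord0 [(Ord0, Integer)] deriving (Eq, Ord, Show)

zeroO :: Ord0
zeroO = Ord0 []

finO :: Integer -> Ord0  -- embed a finite ordinal
finO 0 = zeroO
finO n = Ord0 [(zeroO, n)]

omegaO :: Ord0
omegaO = Ord0 [(finO 1, 1)]

-- Addition: terms of the left summand whose exponent is below the leading
-- exponent of the right summand are absorbed. This is why 1 + omega = omega
-- while omega + 1 > omega.
addO :: Ord0 -> Ord0 -> Ord0
addO x (Ord0 []) = x
addO (Ord0 xs) (Ord0 ((e, c) : ys)) =
  case span (\(e', _) -> e' > e) xs of
    (hi, (e', c') : _) | e' == e -> Ord0 (hi ++ (e, c' + c) : ys)
    (hi, _)                      -> Ord0 (hi ++ (e, c) : ys)

main :: IO ()
main = do
  print (addO (finO 1) omegaO == omegaO)  -- True: 1 + omega = omega
  print (addO omegaO (finO 1) > omegaO)   -- True: omega + 1 > omega
```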

Proof of Goodstein’s theorem: To each integer value $a_m$ we associate an ordinal number $\alpha_m$ by replacing each occurrence of $m + 2$ (which, let’s not forget, is the base $a_m$ is written in) with $\omega$. For example, if $a_0 = 19 = 2^{2^2} + 2 + 1$, then

$\alpha_0 = \omega^{\omega^\omega} + \omega + 1$

and $a_1 = 3^{3^3} + 3 + 1 - 1 = 3^{27} + 3$ (which, incidentally, equals $7625597484990$) so that

$\alpha_1 = \omega^{\omega^\omega} + \omega .$

We notice that, in our example, $a_1 > a_0$, but $\alpha_1 < \alpha_0$: why is it so? And is it a coincidence, or is there a rule behind this?

At each step $m$ where $a_m > 0$, write $b = m + 2$ and consider the writing $a_m = x + c \cdot b^e$, where $c \cdot b^e$ is the least term of the Cantor base-$b$ representation of $a_m$ and $1 \leq c \leq b - 1$. Three cases are possible:

  1. $e = 0$.
    Then $a_{m+1}$ ends with the digit $c - 1$, as the change of base does not affect the final term, and $\alpha_{m+1} = \xi + (c - 1) < \xi + c = \alpha_m$.
  2. $e > 0$ and $c > 1$.
    Then $\alpha_m = \xi + \omega^\eta \cdot c$ for a transfinite ordinal $\omega^\eta$, and $\alpha_{m+1} = \xi + \omega^\eta \cdot (c - 1) + \rho$ with $\rho < \omega^\eta$, so again $\alpha_{m+1} < \alpha_m$.
  3. $e > 0$ and $c = 1$.
    Then $a_m = x + b^e$ for some $e > 0$, and $a_{m+1}$ is a number whose $e$th digit in base $b + 1$ is zero: correspondingly, the rightmost term $\omega^\eta$ in $\alpha_m$ will be replaced by a smaller ordinal in $\alpha_{m+1}$.

It is then clear that the sequence $(\alpha_m)$ is strictly decreasing as long as $a_m > 0$. But the collection of all ordinals not larger than $\alpha_0$ is a well-ordered set, and every nonincreasing sequence in a well-ordered set is ultimately constant: hence, there must be a value $M$ such that $\alpha_m = \alpha_M$ for every $m \geq M$. But the only way it can be so is when $\alpha_M = 0$: in turn, the only option for $\alpha_M$ to be zero is that $a_M$ is zero as well. This proves the theorem.

So why is it that Goodstein’s theorem is not provable in first-order Peano arithmetic? The intuitive reason is that the exponentiations can be arbitrarily many, which requires having available all the ordinals up to

$\varepsilon_0 = \sup \{ \omega,\; \omega^\omega,\; \omega^{\omega^\omega},\; \ldots \} ,$

the least upper bound of the towers of $n$ $\omega$'s for every finite $n$: this, however, is impossible if induction only allows finitely many steps, as is the case for first-order Peano arithmetic. A full discussion of a counterexample, however, would greatly exceed the scope of this post.

Categories: Offsite Blogs

Functional Jobs: Sr. Software Engineer at McGraw-Hill Education (Full-time)

Planet Haskell - Tue, 05/05/2015 - 9:42am

McGraw-Hill Education is a digital learning company that draws on its more than 100 years of educational expertise to offer solutions which improve learning outcomes around the world. McGraw-Hill Education is the adaptive education technology leader with the vision for creating a highly personalized learning experience that prepares students of all ages for the world that awaits. The Company has offices across North America, India, China, Europe, the Middle East and South America, and makes its learning solutions available in more than 65 languages. For additional information, visit

This Senior Software Engineer position will be joining the new adaptive learning engineering team at McGraw-Hill Education's new and growing Research & Development center in Boston's Innovation District.

We make software that helps college students study smarter, earn better grades, and retain more knowledge.

The LearnSmart adaptive engine powers the products in our LearnSmart Advantage suite — LearnSmart, SmartBook, LearnSmart Achieve, LearnSmart Prep, and LearnSmart Labs. These products provide a personalized learning path that continuously adapts course content based on a student’s current knowledge and confidence level.

On our team, you'll get to:

  • Move textbooks and learning into the digital era
  • Create software used by millions of students
  • Advance the state of the art in adaptive learning technology
  • Make a real difference in education

Our team's products are built with Flow, a functional language in the ML family. Flow lets us write code once and deliver it to students on multiple platforms and device types. Other languages in our development ecosystem notably include JavaScript, but also C++, SWF (Flash), and Haxe.

If you're interested in functional languages like Scala, Swift, Erlang, Clojure, F#, Lisp, Haskell, and OCaml, then you'll enjoy learning Flow. We don't require that you have previous experience with functional programming, only enthusiasm for learning it. But if you do have some experience with functional languages, so much the better! (On-the-job experience is best, but coursework, personal projects, and open-source contributions count too.)

Required Experience:

  • Have a solid grasp of CS fundamentals (languages, algorithms, and data structures)
  • Be comfortable moving between multiple programming languages
  • Be comfortable with modern software practices: version control (Git), test-driven development, continuous integration, and Agile methodologies

Get information on how to apply for this position.

Categories: Offsite Blogs

FFI nested structs - malloc - free

haskell-cafe - Tue, 05/05/2015 - 4:42am
FFI Gurus,

I created a c2hs FFI of a nested C structure, where struct A has a pointer to a struct B. To do so, I used a malloc, but I am unsure if the memory will be freed when the resulting Ptr is freed. The example at this link uses the same technique, so it will serve as an example. Lines 48 and 51 do the malloc and assign the pointer in the struct, from inside a Storable poke implementation. But there is no explicit free, nor a finalizer. Will the memory be freed when the Ptr of the Storable is freed? If it is, it implies that some magic keeps track of mallocs inside a poke and creates finalizers. Or, this example leaks. If it leaks, how do I create a finalizer from inside a poke implementation? Mike
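For what it's worth, here is a minimal sketch of the pattern in question (invented struct names, not the gist from the link). There is no magic tracking of mallocs inside poke: the inner pointer must be freed explicitly, or wrapped in a ForeignPtr with a finalizer such as finalizerFree.

```haskell
import Foreign
import Foreign.C.Types

newtype Inner = Inner CInt     -- struct B
data Outer = Outer Inner       -- struct A, holding a pointer to a struct B

instance Storable Inner where
  sizeOf _ = sizeOf (undefined :: CInt)
  alignment _ = alignment (undefined :: CInt)
  peek p = Inner <$> peekByteOff p 0
  poke p (Inner x) = pokeByteOff p 0 x

-- On the C side, struct A stores a pointer to struct B, so poke must
-- malloc the inner struct -- and nothing ever frees it automatically.
instance Storable Outer where
  sizeOf _ = sizeOf (undefined :: Ptr Inner)
  alignment _ = alignment (undefined :: Ptr Inner)
  peek p = do
    q <- peekByteOff p 0
    Outer <$> peek (q :: Ptr Inner)
  poke p (Outer i) = do
    q <- malloc        -- the malloc from the question: no implicit finalizer
    poke q i
    pokeByteOff p 0 q

-- Explicit cleanup: free the inner pointer before freeing the outer struct.
freeOuter :: Ptr Outer -> IO ()
freeOuter p = do
  q <- peekByteOff p 0
  free (q :: Ptr Inner)
  free p

main :: IO ()
main = do
  p <- malloc
  poke p (Outer (Inner 42))
  Outer (Inner v) <- peek p
  print v          -- 42
  freeOuter p      -- without this, the inner malloc would leak
```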
Categories: Offsite Discussion

notepad++ && haskell -> is there a template?

Haskell on Reddit - Tue, 05/05/2015 - 2:03am

Hi, I'm searching for a template plugin or something like that which structures my haskell code in npp, so that all my '=' are on the same level, like this:

func :: a -> a
func a | a == True  = 1
       | a == False = 2

also it would be nice to have a possibility to fold functions/modules etc.

Is there a way to do all this in notepad++?

Bonus question: is it possible to get some syntax-style/look like you can get with emacs (-> replaced by real arrows, etc.)?

Thx for your help

submitted by baxbear
[link] [2 comments]
Categories: Incoming News

Philip Wadler: Bright Club: Computability

Planet Haskell - Mon, 05/04/2015 - 11:19pm
A brief introduction to the hilarious subject of computability theory. Performed as part of Bright Club at The Stand in Edinburgh, on Tuesday 28 April 2015.
Categories: Offsite Blogs

[Jobs] Galois is hiring

haskell-cafe - Mon, 05/04/2015 - 10:16pm
Hello, Galois is hiring! We're looking for researchers, principal investigators, software engineers, and project leads with expertise in functional programming, formal methods, computer security, control systems, or networking. Positions are available at both our Portland, OR and Arlington, VA offices. For more information on the application process and the available positions, please visit our web site. -Iavor
Categories: Offsite Discussion

What was your next step after you gave up and washed your hands of any hope of ever binding that massive C++ library you have to use for work to Haskell?

Haskell on Reddit - Mon, 05/04/2015 - 9:59pm

I suspect you moved on to F# or Lua. Maybe you just embraced the new improved C++14. I'd like to hear from you so I know what my options are.

submitted by rdfox
[link] [39 comments]
Categories: Incoming News

Danny Gratzer: A Proof of Church Rosser in Twelf

Planet Haskell - Mon, 05/04/2015 - 6:00pm
Posted on May 5, 2015 Tags: twelf, types

An important property in any term rewriting system, a system of rules for saying one term can be rewritten into another, is called confluence. In a term rewriting system, more than one rule may apply at a time; confluence states that it doesn’t matter in what order we apply these rules. In other words, there’s some sort of diamond property in our system

          Starting Term
             /     \
    Rule 1  /       \  Rule 2
           /         \
          B           C
           \         /
            \       /
     A bunch of rules later
             \     /
              \   /
          Same end point

In words (and not a crappy ascii picture)

  1. Suppose we have some term A
  2. The system lets us rewrite A to B
  3. The system lets us rewrite A to C

Then two things hold

  1. The system lets us rewrite B to D in some number of rewrites
  2. The system lets us rewrite C to D with a different series of rewrites

In the specific case of lambda calculus, confluence is referred to as the “Church-Rosser Theorem”. This theorem has several important corollaries, including that the normal form of any lambda term is unique. To see this, remember that a normal form is always “at the bottom” of diamonds like the one we drew above. This means that if some term had multiple steps to take, they must all converge before one of them reaches a normal form. If any of them did hit a normal form first, they couldn’t complete the diamond.
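To make the rewriting system concrete before diving into Twelf, here is a sketch of the same step relation in Haskell (my own, using de Bruijn indices rather than the HOAS encoding used below), enumerating every one-step reduct so the diamond can be checked on examples:

```haskell
-- Untyped lambda terms with de Bruijn indices.
data Term = V Int | Ap Term Term | Lam Term deriving (Eq, Show)

-- Shift free indices >= c by d.
shift :: Int -> Int -> Term -> Term
shift d c (V k)    = V (if k >= c then k + d else k)
shift d c (Ap a b) = Ap (shift d c a) (shift d c b)
shift d c (Lam t)  = Lam (shift d (c + 1) t)

-- Substitute s for index j, decrementing the indices above j.
subst :: Int -> Term -> Term -> Term
subst j s (V k)
  | k == j    = s
  | k > j     = V (k - 1)
  | otherwise = V k
subst j s (Ap a b) = Ap (subst j s a) (subst j s b)
subst j s (Lam t)  = Lam (subst (j + 1) (shift 1 0 s) t)

-- All one-step reducts: beta plus the three compatibility rules.
step1 :: Term -> [Term]
step1 (Ap f a) =
  [ subst 0 a b | Lam b <- [f] ]     -- step/b
    ++ [ Ap f' a | f' <- step1 f ]   -- step/ap1
    ++ [ Ap f a' | a' <- step1 a ]   -- step/ap2
step1 (Lam t) = [ Lam t' | t' <- step1 t ]  -- step/lam
step1 (V _)   = []

normalize :: Term -> Term
normalize t = case step1 t of [] -> t; (u : _) -> normalize u

main :: IO ()
main = do
  let idT = Lam (V 0)
      t = Ap (Lam (Ap (V 0) (V 0))) (Ap idT idT)  -- (\x. x x) ((\y. y) (\y. y))
  mapM_ print (step1 t)            -- two distinct one-step reducts
  print (map normalize (step1 t))  -- both normalize to \x. x
```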

Proving Church-Rosser

In this post I’d like to go over a proof of the Church Rosser theorem in Twelf, everyone’s favorite mechanized metalogic. To follow along if you don’t know Twelf, perhaps some shameless self linking will help.

We need to start by actually defining lambda calculus. In keeping with Twelf style, we laugh at those restricted by the bounds of inductive types and use higher-order abstract syntax to get binding for free.

term : type.
ap  : term -> term -> term.
lam : (term -> term) -> term.

We have two constructors: ap, which applies one term to another, and the more interesting lam, which embeds the LF function space term -> term into term. This actually makes sense because term isn’t an inductive type, just a type family with a few members. There’s no underlying induction principle with which we can derive contradictions. To be perfectly honest, I’m not sure how the proof of soundness of something like Twelf’s %total mechanism proceeds. If a reader is feeling curious, I believe this is the appropriate paper to read.

With this, we can write something like λx. x x as lam [x] ap x x.

Now on to evaluation. We want to talk about things as a term rewriting system, so we opt for a small step evaluation approach.

step : term -> term -> type.
step/b   : step (ap (lam F) A) (F A).
step/ap1 : step (ap F A) (ap F' A)
            <- step F F'.
step/ap2 : step (ap F A) (ap F A')
            <- step A A'.
step/lam : step (lam [x] M x) (lam [x] M' x)
            <- ({x} step (M x) (M' x)).

step* : term -> term -> type.
step*/z : step* A A.
step*/s : step* A C
           <- step A B
           <- step* B C.

We start with the 4 sorts of steps you can make in this system. 3 of them are merely “if you can step somewhere else, you can pull the rewrite out”; I’ve heard these referred to as compatibility rules. This is what ap1, ap2 and lam do, lam being the most interesting since it deals with going under a binder. Finally, the main rule is step/b, which defines beta reduction. Note that HOAS gives us substitution for free as LF application.

Finally, step* is for a series of steps. We either have no steps, or a step followed by another series of steps. Now we want to prove a couple theorems about our system. These are mostly the lifting of the “compatibility rules” up to working on step*s. The first is the lifting of ap1.

step*/left : step* F F' -> step* (ap F A) (ap F' A) -> type.
%mode +{F : term} +{F' : term} +{A : term}
      +{In : step* F F'} -{Out : step* (ap F A) (ap F' A)}
      (step*/left In Out).

- : step*/left step*/z step*/z.
- : step*/left (step*/s S* S) (step*/s S'* (step/ap1 S))
     <- step*/left S* S'*.

%worlds (lam-block) (step*/left _ _).
%total (T) (step*/left T _).

Note that the mode specification I’m using is a little peculiar. It needs to be this verbose because otherwise A mode-errors; type inference is peculiar.

The theorem says that if F steps to F' in several steps, then for all A, ap F A steps to ap F' A in many steps. The actual proof is quite boring: we just recurse and apply step/ap1 until everything type checks.

Note that the world specification for step*/left is a little strange. We use the block lam-block because one of our later theorems needs it. The block is just

%block lam-block : block {x : term}.

We need to annotate this on all our theorems because Twelf’s world subsumption checker isn’t convinced that lam-block can subsume the empty worlds we check some of our theorems in. Ah well.

Similarly to step*/left there is step*/right. The proof is 1 character off so I won’t duplicate it.

step*/right : step* A A' -> step* (ap F A) (ap F A') -> type.

Finally, we have step/lam, the lifting of the compatibility rule for lambdas. This one is a little more fun since it actually works by pattern matching on functions.

step*/lam : ({x} step* (F x) (F' x)) -> step* (lam F) (lam F') -> type.
%mode step*/lam +A -B.

- : step*/lam ([x] step*/z) step*/z.
- : step*/lam ([x] step*/s (S* x) (S x)) (step*/s S'* (step/lam S))
     <- step*/lam S* S'*.

%worlds (lam-block) (step*/lam _ _).
%total (T) (step*/lam T _).

What’s fun here is that we’re inducting on a dependent function. So the first case matches [x] step*/z and the second [x] step*/s (S* x) (S x). Other than that we just use step/lam to lift up S and recurse to lift up S* in the second case.

We need one final (more complicated) lemma about substitution. It states that if A steps to A', then F A steps to F A' in many steps, for all F. This proceeds by induction on the structure of F. First off, here’s the formal statement in Twelf.

This is the lemma that actually needs the world with lam-blocks

subst : {F} step A A' -> step* (F A) (F A') -> type.
%mode subst +A +B -C.

Now the actual proof. The first two cases are for constant functions and the identity function

- : subst ([x] A) S step*/z.
- : subst ([x] x) S (step*/s step*/z S).

In the case of the constant functions the results of F A and F A' are the same so we don’t need to step at all. In the case of the identity function we just step with the step from A to A'.

In the next case, we deal with nested lambdas.

- : subst ([x] lam ([y] F y x)) S S'*
     <- ({y} subst (F y) S (S* y))
     <- step*/lam S* S'*.

Here we recurse, but we carefully do so under a pi type. The reason is that we’re recursing on the open body of the inner lambda: it has a free variable, and we need a pi type in order to actually apply F to something to get at the body. Otherwise this just uses step*/lam to lift the step across the body to a step across lambdas.

Finally, application.

- : subst ([x] ap (F x) (A x)) S S*
     <- subst F S F*
     <- subst A S A*
     <- step*/left F* S1*
     <- step*/right A* S2*
     <- join S1* S2* S*.

This looks complicated, but isn’t so bad. We first recurse, and then use the various compatibility lemmas to plumb the results of the recursive calls to the right parts of the final term. Since there are two individual pieces of stepping, one for the argument and one for the function, we use join (concatenation of two step* sequences) to slap them together.

With this, we’ve got all our lemmas

%worlds (lam-block) (subst _ _ _).
%total (T) (subst T _ _).

The Main Theorem

Now that we have all the pieces in place, we’re ready to state and prove confluence. Here’s our statement in Twelf

confluent : step A B -> step A C -> step* B D -> step* C D -> type.
%mode confluent +A +B -C -D.

Unfortunately, there’s a bit of a combinatorial explosion with this: there are approximately 3 * 3 + 1 = 10 cases for this theorem. And thanks to the lemmas we’ve proven, they’re all boring.

First we have the cases where step A B is a step/ap1.

- : confluent (step/ap1 S1) (step/ap1 S2) S1'* S2'*
     <- confluent S1 S2 S1* S2*
     <- step*/left S1* S1'*
     <- step*/left S2* S2'*.
- : confluent (step/ap1 S1) (step/ap2 S2)
     (step*/s step*/z (step/ap2 S2))
     (step*/s step*/z (step/ap1 S1)).
- : confluent (step/ap1 (step/lam F) : step (ap _ A) _) step/b
     (step*/s step*/z step/b)
     (step*/s step*/z (F A)).

In the first case, we have two ap1s. We recurse on the smaller S1 and S2 and then immediately use one of our lemmas to lift the results of the recursive call, which step the function part of the ap we’re looking at, to work across the whole ap term. In the second case, we’re stepping the function in one and the argument in the other. In order to bring these to a common term we just apply the first step to the resulting term of the second step and vice versa. This means that we’re doing something like this

         F A
        /   \
    S1 /     \ S2
      /       \
   F' A       F A'
      \       /
    S2 \     / S1
        \   /
        F' A'

This clearly commutes, so this case goes through. For the final case, we’re applying a lambda to some term so we can beta reduce. On one side we step the body of the lambda somehow, and on the other we immediately substitute. Now we do something clever. What is a proof that lam G steps to lam G'? It’s a proof that for any x, G x steps to G' x; in fact, it’s just a function from x to such a step. We have exactly that lying around in F. So to step from the beta-reduced term G A to G' A, all we do is apply F to A! The other direction is just beta-reducing ap (lam G') A to the desired G' A.

In the next set of cases we deal with ap2!

- : confluent (step/ap2 S1) (step/ap2 S2) S1'* S2'*
     <- confluent S1 S2 S1* S2*
     <- step*/right S1* S1'*
     <- step*/right S2* S2'*.
- : confluent (step/ap2 S1) (step/ap1 S2)
     (step*/s step*/z (step/ap1 S2))
     (step*/s step*/z (step/ap2 S1)).
- : confluent (step/ap2 S) (step/b : step (ap (lam F) _) _)
     (step*/s step*/z step/b)
     S1*
     <- subst F S S1*.

The first two cases are almost identical to what we’ve seen before. The key difference is in the third case. This is again where we’re stepping something on one side and beta-reducing on the other. We can’t use the nice free stepping provided by F here, since we’re stepping the argument, not the function. For this we appeal to subst, which lets us step F A to F A' using S1*, exactly as required. The other direction is trivial just like it was in the ap1 case: we just have to step ap (lam F) A' to F A', which is done with beta reduction.

I’m not going to detail the cases to do with step/b as the first argument because they’re just mirrors of the cases we’ve looked at before. That only leaves us with one more case, the case for step/lam.

- : confluent (step/lam F1) (step/lam F2) F1'* F2'*
     <- ({x} confluent (F1 x) (F2 x) (F1* x) (F2* x))
     <- step*/lam F1* F1'*
     <- step*/lam F2* F2'*.

This is just like all the other “diagonal” cases, like confluent (ap1 S1) (ap1 S2) .... We first recurse (this time using a pi to unbind the body of the lambda) and then use compatibility rules in order to get something we can give back from confluent. And with this, we can actually prove that lambda calculus is confluent.

%worlds (lam-block) (confluent _ _ _ _).
%total (T) (confluent T _ _ _).

Wrap Up

We went through a fairly significant proof here, but the end results were interesting at least. One nice thing this proof illustrates is how well HOAS lets us encode these proofs. It’s a very Twelf-y approach to use lambdas to represent bindings. All in all, it’s a fun proof.

Categories: Offsite Blogs

A Poor Man's Type Level Debugger.

Haskell on Reddit - Mon, 05/04/2015 - 3:58pm

I stumbled upon a trick to step through the typechecker. Basically run the program repeatedly while incrementing the context stack and see where it stops.

To demonstrate here's a gist that has:

  • a type level list type AToF = A (B (C (D (E (F ())))))
  • a type level function Contains that checks if its argument is in that list.
  • a value level function run that drives Contains.

Running the file with the command line:

for i in {0..21}; do `ghc -fcontext-stack=${i} -i TypeLevelDebugger.hs -e "run (undefined :: AToF) (undefined :: F ())"`; done


<interactive>:1:1:
    Context reduction stack overflow; size = 1
    Use -fcontext-stack=N to increase stack size to N
      TypeEqual (A ()) (F ()) match
    In the expression: run (undefined :: AToF) (undefined :: F ())
    In an equation for ‘it’:
        it = run (undefined :: AToF) (undefined :: F ())

<interactive>:1:1: ... No ~ match ...
<interactive>:1:1: ... TypeEqual (B ()) (F ()) match ...
<interactive>:1:1: ... No ~ match ...
<interactive>:1:1: ... TypeEqual (C ()) (F ()) match ...
<interactive>:1:1: ... No ~ match ...
<interactive>:1:1: ... TypeEqual (D ()) (F ()) match ...
<interactive>:1:1: ... No ~ match ...
<interactive>:1:1: ... TypeEqual (E ()) (F ()) match ...
<interactive>:1:1: ... No ~ match ...
<interactive>:1:1: ... TypeEqual (F ()) (F ()) match ...
<interactive>:1:1: ... yesOrNo0 ~ Yes ...

<interactive>:1:1:
    No instance for (Show Yes) arising from a use of ‘print’
    In a stmt of an interactive GHCi command: print it
    ...

The third line of each error contains the type level trace. On all but the first iteration the noise has been deleted.

Now you can actually step through lens at the type level!


submitted by deech
[link] [2 comments]
Categories: Incoming News

Call for participation (FCRC+LCTES)

General haskell list - Mon, 05/04/2015 - 3:56pm
FCRC’15: Early bird registration is now open for FCRC 2015, the event that assembles a spectrum of affiliated research conferences and workshops into a week-long coordinated meeting. FCRC 2015 will be held at the Oregon Convention Center, in Portland, Oregon, from June 13 – 19, 2015. Early registration ends May 18th.

This year’s plenary speakers include: Michael Stonebraker, MIT CSAIL, 2014 ACM Turing Award Winner, who will deliver the 2014 ACM Turing Lecture; Andrew Yao, Tsinghua University; Olivier Temam, Google; Don Syme, Microsoft Research; Kathy Yelick, University of California at Berkeley; and Balaji Prabhakar, Stanford University.

Computational Complexity Conference
CRA-W 2015 Career Mentoring Workshops
EC 2015 [The 16th ACM Conference on Economics and Computation]
HPDC 2015 [The 24th International Symposium on High-Performance Parallel and Distributed Computing]
ISCA 2015 [The 42nd International Symposium on Computer Architecture]
ISMM 2015 [ACM SIGPLAN International
Categories: Incoming News

Any clever idea for a quick way to browse through Hackage?

Haskell on Reddit - Mon, 05/04/2015 - 1:23pm

Every time I need to use a function from any library on Hackage, the usual process is:

  1. Open the browser.

  2. Google something like "hackage gloss".

  3. Find the Gloss on Hackage link.

  4. Click on the module I want - Graphics.Gloss.Data.Picture.

  5. Finally read the function's description...

Since my internet is terrible, this process is currently a bottleneck of my Haskell programming and is affecting my workflow. Is there any way to improve looking for documentation? Some command line tool would be great. (If you somehow use Hoogle for that purpose, please explain how - I couldn't figure out how to make it useful on the command line.)

submitted by SrPeixinho
[link] [18 comments]
Categories: Incoming News