Laziness, tail recursion, performance, and memory use

Submitted by jgoerzen on Wed, 04/06/2005 - 8:55am.


<CosmicRay> I've also never quite understood the exact mechanics of memory utilization and performance of recursive functions in a lazy language like haskell
<CosmicRay> it seems quite different than in most other fp languages
<shapr> xerox: That would be fun to write.
<CosmicRay> and it seems like some of the rules of thumb ("tail recursion is good", for instance) don't apply
<xerox> shapr, go for it! (as you say ;))
<dustin`> I will write something nifty once I've finished "An Introduction to Higher-Order Categorical Logic". There seems to be a lot of "viewing concepts from category theory through Haskell", but not much "viewing Haskell through category theory"
<JaffaCake> CosmicRay: tail recursion is still good, but you have to watch out for laziness too
<shapr> xerox: I think I'd call that article "Quiver of Artemis" or something.
* xerox takes the vocabulary
<CosmicRay> yeah, that's the bit I don't quite get yet. If I am generating a list and wish to consume it lazily, do I write my function differently? that sort of thing. also I haven't yet quite understood the whole deal with leaving unevaluated values on the heap (there's a word for that, I forget...)
<nlv11757_> closures?
<jlouis> CosmicRay: thunks?
<CosmicRay> that sounds right
<CosmicRay> (thunks)
<CosmicRay> unless haskell is using "closures" to refer to something new, I don't think that's what I mean
<nlv11757_> closures == thunks
<CosmicRay> oh.
<JaffaCake> you probably understand "closure" to mean "function closure"
<CosmicRay> yes
<JaffaCake> in Haskell a closure doesn't necessarily have to be a function
<JaffaCake> it can be any expression
<Darius> CosmicRay: I wrote the StackOverflow wiki page, which indirectly and partially addresses that issue using the folds as a representative example.
<nlv11757_> lots of terms for unevaluated things
<CosmicRay> interesting, so I take it this expanded closure is required for laziness to work?
<Itkovian> he! my powerbook has been shipped!
<Itkovian> whooiee!
<Itkovian> rejoice
<Darius> Though in general, hand-evaluating some iterations of a loop should give some ideas about the behavior of the function.
<nlv11757_> you dont evaluate it, just allocate a thunk and evaluate it if needed
<CosmicRay> so let's say I'm writing a recursive function like this:
<nlv11757_> you refer to the memory location of the thunk, so when it is evaluated once, every pointer to that location will suddenly not point to the thunk anymore but the evaluated value
<CosmicRay> myFoo [] = []
<nlv11757_> so no re-evaluation is needed
<CosmicRay> myFoo (x:xs) = (x + 5) : myFoo xs
<JaffaCake> i.e. map (+5)
<CosmicRay> right, but I don't know how map works internally, so I wrote it out
<JaffaCake> it works just like that ;)
<CosmicRay> so when I call this function, and consume, say, 2 elements, it only evaluates the first 2 elements, right? (basic laziness)
<nlv11757_> @plugs definition map
-lambdabot/#haskell- Variable not in scope: `definition'
<JaffaCake> yes
<CosmicRay> and those 2 elements disappear from memory entirely after I'm done with them, I assume
<Darius> CosmicRay: If they are unreferenced, yes.
<JaffaCake> they'll be garbage collected, as long as nothing else references them
<CosmicRay> right.
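
[A minimal runnable sketch of the version discussed above; the Int element type and the use of take are just for illustration.]

    -- Direct, map-like version: each cons cell is produced on demand,
    -- so consuming two elements only ever evaluates two elements.
    myFoo :: [Int] -> [Int]
    myFoo []     = []
    myFoo (x:xs) = (x + 5) : myFoo xs

    main :: IO ()
    main = print (take 2 (myFoo [1..]))   -- works even on an infinite list
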
<CosmicRay> so now, what if I write it this way:
<CosmicRay> myFoo accum [] = accum
<CosmicRay> myFoo accum (x:xs) = myFoo ((x + 5) : accum) xs
<CosmicRay> will this force the entire list to be generated before any output is given?
<CosmicRay> (I do realize this presents the output in a different order)
<JaffaCake> well, the entire input is consumed before the result is available
<JaffaCake> and the output will be a list of unevaluated closures
<CosmicRay> which will probably make my program not very memory-friendly, right?
<CosmicRay> or at least less memory-friendly than the first version
<CosmicRay> probably slower too?
<JaffaCake> depends on the demand
<JaffaCake> it might be faster, if you definitely want the entire list
<CosmicRay> hmm, interesting, how could it be faster?
<JaffaCake> because the tails aren't thunks
<JaffaCake> with map, each tail is a thunk, which adds a little overhead
<JaffaCake> time it and see!
<CosmicRay> it's less overhead to have a list of thunks than to have to evaluate a thunk to get the next element in a list?
<CosmicRay> I believe you, I'm just trying to understand why
<JaffaCake> in both cases, the elements will be thunks
<Darius> CosmicRay: map will produce a list of thunks as well as thunks for tails.
<JaffaCake> in the map case only, each tail is a thunk
<CosmicRay> Darius: ahh.
<CosmicRay> Darius: twice the thunkage then, eh?
<CosmicRay> so you have a thunk that returns another thunk as the next element of the list?
<JaffaCake> on the other hand, map deforests nicely....
<jlouis> But map (+5) will return quite a lot faster than myFoo, right?
<CosmicRay> jlouis: than the second version (the one with the accumulator), I'd assume
<jlouis> CosmicRay: yes
<Igloo> It'll return the /first/ element faster
<CosmicRay> this is very interesting.
<jlouis> Igloo: yes
<CosmicRay> the version with the accumulator would usually be the preferred version for other FP languages, esp. if they can do tail recursion optimization (here I'm thinking of ocaml, for instance)
<CosmicRay> since the version written more like the haskell map could consume vast amounts of stack
<Darius> CosmicRay: That's because without laziness, the "map" version is not tail-recursive.
<CosmicRay> right
<JaffaCake> yes - but in Haskell you might prefer the map version for two reasons
<Darius> CosmicRay: Laziness though causes the function to "return" before handling the next element.
<CosmicRay> so it seems that in haskell, tail recursion is only desirable in some narrow cases
<CosmicRay> Darius: right
<JaffaCake> it works with infinite lists
<TFK> The Monad Reader, eh?
<jlouis> I fail to see why myFoo is better than map (+5) if we want the entire list. The overhead in map with forcing the tail is also done in myFoo when we build the list of thunks
<jlouis> what am I missing?
<shapr> TFK: yeah, want to write a TMR article?
<JaffaCake> it consumes constant memory
<JaffaCake> and it deforests nicely
<TFK> I'll get censored :-/
<JaffaCake> *3* reasons :)
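
[For comparison, a sketch of the accumulator version from above; the name myFooAcc is just illustrative. It is tail-recursive and its cons cells are built directly rather than as thunked tails, but nothing is available until the whole input has been walked, and the output comes back reversed.]

    -- Accumulator version: tail-recursive, no thunked tails, but the
    -- entire input is consumed before any of the result is available.
    -- The (x + 5) elements are still thunks until they are demanded.
    myFooAcc :: [Int] -> [Int] -> [Int]
    myFooAcc accum []     = accum
    myFooAcc accum (x:xs) = myFooAcc ((x + 5) : accum) xs

    main :: IO ()
    main = print (take 2 (myFooAcc [] [1..10000]))  -- would diverge on [1..]
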
<jlouis> JaffaCake: map?
* TFK reads issue #1
<Darius> CosmicRay: Tail-recursion is always desirable, it's just that different functions are tail-recursive or not.
<CosmicRay> what does "deforest" mean?
<CosmicRay> Darius: well it seems here that there are some good reasons to not be tail recursive in haskell
<nlv11757_> folding structures into values
<Darius> jlouis: Most foldr based things
<JaffaCake> deforest == eliminate intermediate structures in a composition
<Darius> CosmicRay: No. The "map" version -is- tail-recursive.
* CosmicRay blinks
<Darius> CosmicRay: It just doesn't look like it.
<JaffaCake> Darius: I don't agree
<CosmicRay> Darius: are you saying that the lazy nature of haskell makes it tail recursive automatically?
<nlv11757_> deforestation prevents a complete structure from first being built, only to be broken down again.
<Lunar^> JaffaCake: btw, I have somehow narrowed down a concurrency problem with FFI, but I'm still unhappy with these tests, as they don't always work
<monochrom> Deforest means in fold.unfold the intermediate list is optimized away.
<Darius> CosmicRay: In this case, yes, in general no and it can make "tail-recursive" things non-tail-recursive.
<JaffaCake> Lunar^: nice going
<Darius> JaffaCake: If we wrote map in a strict language with explicit thunks using lambda, one would consider it tail-recursive (trivially), no?
<CosmicRay> monochrom: ah, so sort of like evaluating an equation by substituting values for the unknowns?
<JaffaCake> Darius: no, map isn't tail recursive in any language
<Lunar^> JaffaCake: where should I post them, anyway ?
<TheHunter> jlouis, oh i forgot, i already fixed SeenModule today.
<monochrom> I don't see that analogy. I think it is like fusing two loops.
<JaffaCake> Lunar^: on the bugs list, if it's not too big
<Igloo> CosmicRay: deforestation means that in map (+1) . map (+2) you never make a list whose values are 2 more than the input list
<Lunar^> 118 total
<jlouis> TheHunter: Do you have a repository online? I could take a look at it
<Lunar^> JaffaCake: I'll do that then
<Igloo> CosmicRay: You take each value and apply (+2) then (+1) and then return that as a list cell
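
[A sketch of what deforestation buys in the example above. Whether GHC performs this fusion automatically depends on the optimisation settings, but the two pipelines are semantically equivalent.]

    -- Conceptually builds an intermediate list whose values are 2 more
    -- than the input, then walks it again to add 1.
    twoPass :: [Int] -> [Int]
    twoPass = map (+1) . map (+2)

    -- Deforested: both increments are applied to each element in a
    -- single pass, with no intermediate list.
    onePass :: [Int] -> [Int]
    onePass = map ((+1) . (+2))

    -- Both give the same result, e.g. on [1,2,3] they yield [4,5,6].
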
<TheHunter> jlouis, the 6.2 insertWith must be changed to "insertWith f k e m = FM.addToFM_C (flip f) m k e"
<CosmicRay> ok. I think I understand this bit. thanks everyone. now next question :-) How might seq or strict record fields improve performance?
<TheHunter> jlouis, and the order of the arguments in SeenModule must be reversed.
<CosmicRay> Igloo: gotcha, thanks
<jlouis> TheHunter: thats it?
<jlouis> Because then I am going to do that ;)
<skew> CosmicRay: I think the big thing those do is give the strictness analyzer information
<monochrom> In foldl (+) 0 [1..10000], if you use seq somewhere, you will save a lot of stack space.
<TheHunter> jlouis, yeah, that's it.
<jlouis> TheHunter: okie
<CosmicRay> monochrom: what about foldr?
<monochrom> foldr (+) 0 [1..10000] doesn't really benefit from seq.
<monochrom> You use seq when you know the eager strategy beats the lazy strategy.
<skew> monochrom: (foldr (+) 0 [1..10000] :: Int) shouldn't benefit, should it?
<skew> monochrom: foldl, I mean
<JaffaCake> if you use seq in foldl, it becomes a tail-recursive accumulator
<JaffaCake> foldr isn't tail-recursive, so doesn't benefit in the same way
<monochrom> which comes to what skew says. you use seq to help the strictness analyzer to be more aggressive. this eliminates a lot of spurious thunking.
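
[A sketch of the seq trick being described, essentially the library's foldl'; the name foldlStrict is used only to avoid clashing with the Prelude.]

    -- Force the accumulator before recursing, so no chain of (+) thunks
    -- builds up the way it does with the naive lazy foldl.
    foldlStrict :: (a -> b -> a) -> a -> [b] -> a
    foldlStrict f z []     = z
    foldlStrict f z (x:xs) = let z' = f z x
                             in z' `seq` foldlStrict f z' xs

    -- Rough reduction sketch (not literal GHC output):
    --   foldl       (+) 0 [1,2,3]  builds ((0+1)+2)+3 lazily, then forces it
    --   foldlStrict (+) 0 [1,2,3]  reduces to 1, 3, 6 as it goes: constant space
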
<TheHunter> jlouis, shall send you an email, so that dons doesn't have to deal with conflicting changes?
<jlouis> TheHunter: jlouis@mongers.org, please do
<CosmicRay> hmm, so why is there this difference between foldl and foldr in this instance?
<CosmicRay> I thought I could work that out but couldn't quite :-)
<skew> CosmicRay: foldr needs stack space
<monochrom> foldr (+) 0 [1..n] uses Theta(n) space, eager or lazy.
<CosmicRay> since it actually has to begin processing at the right side of the list?
<monochrom> foldl (+) 0 [1..n] uses O(1) space eager, Theta(n) space naive lazy.
<skew> foldr (+) 0 [1,2,3] turns into (1 + (2 + (3 + 0))), which takes linear stack space recursing down the list
<monochrom> no theoretic difference in time complexity.
<thou> is anybody running a recent ghc from CVS? I just updated and built it (trying to get GLUT to work), and it seems OK except ghci can't parse anything (I get ghc-6.5: panic! (the `impossible' happened, GHC version 6.5): tcSyntaxOp "noSyntaxExpr"); anyone seen this before?
<monochrom> (in practice, wasting space is a good way to waste time too)
<JaffaCake> thou: think that was introduced yesterday
<thou> i figured something like that, just wanted to check. thanks, JaffaCake
<JaffaCake> it smells like Simon PJ's fault :)
<JaffaCake> and he's not about today
<skew> lazy foldl (+) 0 [1,2,3] accumulates like foldl (+) (0+1) [2,3] => foldl (+) ((0+1)+2) [3] => foldl (+) (((0+1)+2)+3) [] => ((0+1)+2)+3, then uses space forcing that expression
<CosmicRay> hmm. from my memory, in ocaml, foldr is also more efficient, but for a different reason
<CosmicRay> err, no maybe I'm mixed up.
<CosmicRay> hmm.
<thou> JaffaCake: thanks
<tromp> foldr stacks many function applications
<tromp> which cannot be made strict
<CosmicRay> hmm, in ocaml, foldl is more efficient because it is tail-recursive
<tromp> foldr leaves applications in the accumulator which can be made strict
<tromp> change last foldr to foldl:)
<CosmicRay> so, if I don't force the strictness in haskell, foldr is the more efficient?
<CosmicRay> wouldn't it be possible for the optimizer to see what's going on with foldl and evaluate that expression that's stacking up immediately?
<tromp> about the same memory-wise, but foldl is tail recursive
<monochrom> There was a subtlety in the statements I made. On space complexity of foldl (+) 0 [1..n], my statement is about space of "foldl (+) 0", not of the part [1..n].
<tromp> that's what strictness analysis might achieve, CosmicRay
<Darius> CosmicRay: Yes, but if the argument passed to foldl isn't strict then treating it as such can change the behavior of the code.
<CosmicRay> tromp: ah.
<tromp> not sure exactly what cases ghc will catch
<CosmicRay> so I'm looking at the implementation of foldl and foldr in hugs for clarity. to me it looks like one would end up with closures all over the place with either one
<CosmicRay> foldl f z [] = z
<CosmicRay> foldl f z (x:xs) = foldl f (f z x) xs
<CosmicRay> foldr f z [] = z
<CosmicRay> foldr f z (x:xs) = f x (foldr f z xs)
<Heffalump> foldl' exists
<CosmicRay> yes, I know
<CosmicRay> I want to understand these first :-)
<jlouis> TheHunter: applied, thanks
<JaffaCake> foldl always creates a chain of new thunks the same length as the input list
<CosmicRay> do those thunks represent (f z x) or the recursive call to foldl itself (or both?)
<monochrom> f z x
<JaffaCake> the (f z x) call is the thunk; the recursive call to foldl is tail recursive
<CosmicRay> so when my program demands the result from foldl be computed, it first runs through the entire list via the tail recursion, generating a bunch of thunks, then it evaluates all the thunks to produce the final result?
<nlv11757_> isnt it the case that only one thunk is created initially for the top-level call....and only when for example a bit of result is needed, this thunk is evaluated a bit creating a new thunk representing the tail call......
<monochrom> if naive, yes
<nlv11757_> if that is enough for the result that was needed of course, otherwise more is evaluated...
<nlv11757_> thats the idea right?
<CosmicRay> well with foldl and foldr, there is no "bit" of the result, there is all of the result or none of it
<CosmicRay> since they aren't building lists
<tromp> no, it has a lazy (thunked) representation of f (... (f (f z x0) x1) ...) xn
<monochrom> yes nlv, but what is tail-recursive in eager becomes monolithic in lazy: either not computed at all or pursued to its final conclusion
<nlv11757_> ow of course in the fold case there is no bit of result, but i meant it in general for recursive functions....
<monochrom> Yes.
<CosmicRay> monochrom: in that case, why would we get a whole chain of thunks out there with foldl?
<monochrom> For example "take 10 (map f [1..])", with lazy you'll just generate 10 items of the list.
<monochrom> The thunk looks like (((a+b)+c)+d)+e...
<monochrom> I mean the thunk looks like (((a+b)+c)+d)+e... if naively lazy.
<monochrom> There is no urgency to recall 0+1 = 1 until you've finished foldl-ing
<CosmicRay> ok, I think I grok that.
<CosmicRay> so with foldr, why don't we wind up in the same situation?
<monochrom> we do.
<nlv11757_> wouldnt the first thunk be 'map f [1..]'
<CosmicRay> oh.
<CosmicRay> hm.
<monochrom> but no one whines about foldr :)
<CosmicRay> oh right, you said they both use Theta(n) space lazy.
<CosmicRay> why do we even have a foldl given foldl'?
<nlv11757_> btw there can be a bit of result when dealing with a fold
<nlv11757_> it depends on the operator
<monochrom> I can't think of an application of the lazy foldl for the moment.
<xerox> :\
<Darius> CosmicRay: There is a semantic difference between foldl and foldl'
<Darius> But foldl is mostly useless.
<nlv11757_> see cosmicray, if there never was a bit of result to a fold......take 10 (foldr (:) [] ([1 .. ])) wouldnt work
<CosmicRay> ah ok, so foldl is useful when there is in fact a bit of the result, due to laziness?
<nlv11757_> foldl is useful when not used in a lazy sense
<nlv11757_> i think
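
[The take 10 (foldr (:) [] [1..]) point as a runnable sketch: when the combining function is lazy in its second argument, foldr hands back part of the result before the rest of the input is looked at.]

    -- foldr with (:) rebuilds the list lazily, so only as much of the
    -- (infinite) input as take demands is ever evaluated.
    lazyCopy :: [a] -> [a]
    lazyCopy = foldr (:) []

    main :: IO ()
    main = print (take 10 (lazyCopy [1..]))   -- [1,2,3,4,5,6,7,8,9,10]
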
<Darius> CosmicRay: A possible scenario when you might want to use foldl v. foldl' is described on http://www.haskell.org/hawiki/StackOverflow.
<musasabi> Created a DiceModule for lambdabot...
<monochrom> I can think up a pathological example.
<Darius> CosmicRay: Nevertheless, it's not very compelling and in almost all cases you want foldl' (unless you want foldr).
<wilx> @type foldl'
-lambdabot/#haskell- bzzt
<nlv11757_> foldl1 ?
<wilx> @type foldl1
-lambdabot/#haskell- foldl1 :: forall a. (a -> a -> a) -> [a] -> a
<wilx> @type foldl
-lambdabot/#haskell- foldl :: forall a b. (a -> b -> a) -> a -> [b] -> a
<vegai> question for you ... oldtimers
<monochrom> good explanation on that wiki
<vegai> when you see the type of the complexity of e.g. foldl, do you instantly understand it?
<Lunar^> JaffaCake: Do "dynamic" wrapped functions always get their own thread?
<JaffaCake> Haskell thread? yes
<Darius> @type Data.List.foldl'
<monochrom> Almost instantly.
-lambdabot/#haskell- Data.List.foldl' :: forall a b. (a -> b -> a) -> a -> [b] -> a
<CosmicRay> thanks for all the explanations

Submitted by gour on Wed, 04/06/2005 - 10:49am.

Great stuff put in my Haskell archive, i.e. I'm postponing digestion :-)

It would be great if some of the Haskell gurus were willing to expand their gang of disciples (I consider that expanding the number of Haskell programmers benefits the Haskell community in general) by making such material more easily approachable by:

  • breaking the whole thing into a few parts (articles), each dealing with one aspect, e.g.:
    • laziness
    • tail recursion
    • memory usage, etc.
  • a systematic approach, i.e. showing the path from 'light' to 'hardcore' optimisation
  • some analysis of the tools, e.g. the GHC compiler, in this regard

Haskell's 'unpredictable' nature, i.e. the inability to predict (well) how a program will behave in terms of performance, space leaks, heap usage, etc., is perhaps still the reason why some recommend OCaml over Haskell as a general programming language.

otoh, the beauty of Haskell syntax is enchanting.

Is it possible (and how) to bridge the gap?

Sincerely,
Gour

p.s. btw, what is with the 2nd issue of MR?

Submitted by jgoerzen on Wed, 04/06/2005 - 12:08pm.

I think that to me, Haskell is no less predictable than any other language. I just don't know how to predict it yet.

foldl on OCaml, for instance, can lead to using huge amounts of memory if used on large lists.

C string-processing functions break on strings that contain \0.

Each language has its little things to be aware of.

Submitted by gour on Thu, 04/07/2005 - 2:58am.

I think that to me, Haskell is no less predictable than any other language. I just don't know how to predict it yet.

:-)

In my case, I cannot speak about OCaml, and in Haskell I'm still at the "It works!" excitement level, but I've found many posts about the subject.

It could be that:

  • you're getting very close to Haskell-guru level (considering you wrote the MissingH lib and are hacking at lower levels) and so do not experience the problem, or
  • the problem does not actually exist and is just a 'bad image' of Haskell based on language-shootout results

foldl on OCaml, for instance, can lead to using huge amounts of memory if used on large lists.

What is your general experience in comparing OCaml & Haskell?

I'm thinking about the average-Joe programmer (there was a post recently on the Haskell list about this) and the performance he gets with Haskell out of the box, 'cause the 'wc-rewriting thread' showed that to get decent performance one has to trade away beautiful FP expressions, and then one can question the FP route altogether?

Each language has its little things to be aware of.

Sure it does. The question remains what can be done so that one can get better performance in Haskell without the need to 'hold a PhD in CS' :-)

So, is it a question of the language itself, or simply about getting knowledge how to use it?

If it's the latter, that's why I proposed that some Haskell guru prepare some paper(s).

I see that even accomplished Haskell hackers have problems solving profiled bottlenecks, e.g. on the gtk2hs project there is a problem where the c2hs tool uses such an enormous quantity of heap that the gtk2hs bindings are not compilable with under 1GB of RAM, and A. Simon, commenting on the problem, nicely wrote:

So it should be fixable, isn't it?
Probably. But just where do you put these two seq and ~ annotations?

Sincerely,
Gour

Submitted by jgoerzen on Thu, 04/07/2005 - 7:50am.

I appreciate the compliment, but I'm nowhere near a Haskell guru.

I think that the problem is one of experience. Many of us have years of experience with strict, imperative languages. We all know about things that are bad: trying to read an entire file into memory at once, infinite lists, infinite loops, not checking error codes, malloc without free, etc. Memory leaks are a common problem for C programmers, for instance, and can be exceptionally difficult to track down or even detect. (I'd say far worse than various Haskell memory issues).

The interesting part about Haskell is that, in general, it has none of the above problems. hGetContents might look like it's reading a file into memory at once, for instance, and it gives you the same convenience of doing so, but it's not.
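
A minimal sketch of that point (the file name here is just an example): hGetContents returns the contents as a lazy String, so only as much of the file as the program actually demands gets read.

    import System.IO

    main :: IO ()
    main = do
      h <- openFile "some-big-file.txt" ReadMode   -- example file name
      contents <- hGetContents h                   -- a lazy String
      putStrLn (head (lines contents))             -- reads only the first line
      hClose h
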

Haskell has other problems, sure. I've had memory issues trying to work with my GZip code in MissingH. Fixing it took some doing, but less doing than a memory leak in C. There are also the issues of closures building up as this discussion illustrated. And as you can see from that discussion, I didn't really have a firm grasp on what those issues are.

I don't think Haskell is any worse off. I just think that almost all of us individually are less experienced working with these things than we are working with the problems in strict, imperative languages. I suspect that the Haskell community collectively is also less experienced.

In my own case, I've been using Haskell about 6 months and imperative languages 15 or so years. Haskell really is that different.

Regarding OCaml, I'd say that OCaml is much closer to a more traditional imperative language than it is to Haskell. OCaml is strict rather than lazy, so it shares more problems/limitations with imperative languages than it does with Haskell. OCaml does have a nice functional side to it, and its typing system is quite similar to Haskell's, both of which make it an interesting language. Its syntax feels archaic, though, and its I/O system has serious shortcomings, all of which left me searching for something better -- which I found in Haskell.

OCaml could be "Haskell Lite".

-- John

Submitted by gour on Thu, 04/07/2005 - 9:46am.

I appreciate the compliment, but I'm nowhere near a Haskell guru.

Well, this is symptom of a real guru - to be humble :-)

I think that the problem is one of experience. Many of us have years of experience with strict, imperative languages. We all know about things that are bad: trying to read an entire file into memory at once, infinite lists, infinite loops, not checking error codes, malloc without free, etc. Memory leaks are a common problem for C programmers, for instance, and can be exceptionally difficult to track down or even detect. (I'd say far worse than various Haskell memory issues).

Good point.

However, there is plenty of literature warning newbies about such things, so it would be nice to have some more info about Haskell's caveats.

Haskell has other problems, sure. I've had memory issues trying to work with my GZip code in MissingH. Fixing it took some doing, but less doing than a memory leak in C. There are also the issues of closures building up as this discussion illustrated. And as you can see from that discussion, I didn't really have a firm grasp on what those issues are.

For the major part of the discussion I did not bother to get a firm grasp either - leaving it for a more 'mature' time in my Haskell evolution :-)

I don't think Haskell is any worse off. I just think that almost all of us individually are less experienced working with these things than we are working with the problems in strict, imperative languages. I suspect that the Haskell community collectively is also less experienced.

I do not want to be pushy, but if the experience the Haskell community has gathered so far is not exchanged enough, and there are no more papers (dealing with the subject at different levels) suited for the wider programming audience, it could happen that these things remain a kind of 'black magic'; many therefore won't tackle them at all and may (possibly) abandon Haskell as a general-purpose programming tool.

otoh, I believe that wider acceptance of Haskell could only benefit and not harm the Haskell community, couldn't it?

Regarding OCaml, I'd say that OCaml is much closer to a more traditional imperative language than it is to Haskell. OCaml is strict rather than lazy, so it shares more problems/limitations with imperative languages than it does with Haskell. OCaml does have a nice functional side to it, and its typing system is quite similar to Haskell's, both of which make it an interesting language. Its syntax feels archaic, though, and its I/O system has serious shortcomings, all of which left me searching for something better -- which I found in Haskell.

After being exposed to imperative languages and being out of programming for quite a few years, I wanted to try something different and found that, at the moment, FP in general and Haskell syntax in particular suit my mind much more than any other combination (I tried for some time with Ruby too).

Therefore I'm happy to see someone coming the OCaml --> Haskell way ;)

Although I am not a user of your library (yet), I hope I will be able to take advantage of it in future project(s).

OCaml could be "Haskell Lite".

Huh...this sounds quite powerful ;)

Sincerely,
Gour

Submitted by jgoerzen on Thu, 04/07/2005 - 12:58pm.

Your point about documentation (papers, etc.) is quite correct. I think that Haskell's two largest weaknesses right now are: 1) breadth of library modules available, and 2) suitable practical documentation. It is a sore point, and one that does need fixing.

Haskell can be a great language for the non-academic types, but from looking at the text available online, it sometimes doesn't look that way. Having these things remain "black magic" is the worst possible result, because that means Haskell is inaccessible to newcomers (and even some veterans.)

This site and the wiki are sort of informal attempts at sharing knowledge with the community. Shae's The Monad Reader is a more formal attempt. These are good things to have, but we really need a more organized system at some point. I know it's being slowly worked on -- maybe this is something you'd be interested in helping with?

As for OCaml, it will not be anything new or exciting to you now that you've tried Haskell. Aside from performance, I can't really think of any nifty features it has that Haskell or traditional imperative languages lack.

Submitted by gour on Thu, 04/07/2005 - 11:52pm.

Your point about documentation (papers, etc.) is quite correct. I think that Haskell's two largest weaknesses right now are: 1) breadth of library modules available, and 2) suitable practical documentation. It is a sore point, and one that does need fixing.

I'm glad we agree on that. Your libs help to improve the situation, and it's nice that GUI libs are also arriving...darcs is also great propaganda.

Haskell can be a great language for the non-academic types, but from looking at the text available online, it sometimes doesn't look that way. Having these things remain "black magic" is the worst possible result, because that means Haskell is inaccessible to newcomers (and even some veterans.)

True.

These are good things to have, but we really need a more organized system at some point.

It would be nice to have something like that.

I know it's being slowly worked on -- maybe this is something you'd be interested in helping with?

I'd definitely be interested in helping as much as I can, considering my (present) Haskell skills and the fact that I help on the gtk2hs project - having decent GUI bindings falls in the domain of 'missing libs'.

So, I'll be around and see how (much) I can help.

As for OCaml, it will not be anything new or exciting to you now that you've tried Haskell. Aside from performance, I can't really think of any nifty features it has that Haskell or traditional imperative languages lack.

I also believe Haskell is 'enough' for me, and I still have a long way to go to finish Craft of FP (I'm on the 7th chapter, with the exercises :-)

Sincerely,
Gour

Submitted by jgoerzen on Fri, 04/08/2005 - 7:05am.

Quick docs link:

darcs get http://darcs.complete.org/haskell-v8

Feel free to contribute and let me know where to pull your patches from :-)

Submitted by gour on Fri, 04/08/2005 - 9:05am.

Hi!

Quick docs link:
darcs get http://darcs.complete.org/haskell-v8
Feel free to contribute and let me know where to pull your patches from :-)

Hey, this looks nice :-)

What do you use for authoring (if not Emacs), i.e. how does one contribute?

Sincerely,
Gour

Submitted by jgoerzen on Fri, 04/08/2005 - 10:24am.

I use Emacs, it's probably the easiest to use. The text is written using Docbook SGML. Take a look at debianInstall.sh -- it will install what you need if running Debian, and hopefully point you in the right direction if not.

Submitted by gour on Fri, 04/08/2005 - 12:14pm.

I use Emacs, it's probably the easiest to use.

Yeah...I thought so, but I use Vim :-)

The text is written using Docbook SGML.

After several attempts I gave up on XML & DocBook and went back to LaTeX.

However, for tutorials and similar things, I've found out about txt2tags 'cause it is simple and has several back-ends -- SGML (although linuxdoc) is one of them.

Take a look at debianInstall.sh -- it will install what you need if running Debian, and hopefully point you in the right direction if not.

I'm on Gentoo and for sure there is good Emacs support, but I'd prefer to stay with Vim, (hopefully) being able to contribute in some other way, i.e. format?

Sincerely,
Gour

Submitted by jgoerzen on Fri, 04/08/2005 - 1:26pm.

Vim will be fine. It also has SGML mode. What I meant with the reference to debianInstall.sh is installing the SGML toolset. These are the tools that convert .sgml files into .pdf, .html, .whatever files. Also there is scons, which is the make replacement used. I suspect Gentoo has ebuilds for all of these.
