News aggregator

Help with Attoparsec for skipping unwanted lines. Criticism wanted.

Haskell on Reddit - Sun, 05/04/2014 - 6:21pm

Hi! I've been trying my hand at parsing files in Haskell, and I'm having a hard time skipping over unneeded content.

As a concrete example, here's a gist of code for parsing the output of the nm tool. You can clone the code and run it very easily if required.

In particular, I'm having trouble expressing how to remove the lines between the header and the table.

  • Is there a skipMany alternative (no pun intended) that will stop skipping as soon as another parser starts working? Something like:

    skipManyAndSwitchOver :: (Alternative f) => f a -> f b -> f [b]

  • Any other critique on my implementation so far?

Thank you for your help!

EDIT: I think I got it! I've defined a new combinator skipTill:

skipTill :: (Alternative f) => f a -> f b -> f b
skipTill skippedAction nextAction =
    nextAction <|> skippedAction *> skipTill skippedAction nextAction

Which can be used like so:

anyLine :: Parser Text
anyLine = takeTill isEndOfLine <* endOfLine

example = anyLine `skipTill` objectFileHeader <* endOfLine

submitted by khold_stare
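For anyone wanting to try the combinator without cloning the gist, here is a self-contained sketch using ReadP from base instead of Attoparsec. The `header` parser is a stand-in invented for this example (the gist's objectFileHeader is different); only `skipTill` itself is taken from the post:

```haskell
import Control.Applicative (Alternative ((<|>)))
import Text.ParserCombinators.ReadP

-- The combinator from the post, unchanged.
skipTill :: (Alternative f) => f a -> f b -> f b
skipTill skippedAction nextAction =
    nextAction <|> skippedAction *> skipTill skippedAction nextAction

-- A line is anything up to a newline.
anyLine :: ReadP String
anyLine = munch (/= '\n') <* char '\n'

-- Stand-in for the gist's header parser: a line ending in ':'.
header :: ReadP String
header = do
    l <- munch1 (/= '\n')
    if last l == ':' then char '\n' *> pure l else pfail

main :: IO ()
main = print (fst (last (readP_to_S (anyLine `skipTill` header) input)))
  where input = "junk\nmore junk\nfoo.o:\n"
```

Running it skips the two junk lines and returns the header line, which is exactly the "skip until another parser starts working" behaviour the question asks for.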
[link] [2 comments]
Categories: Incoming News

Danny Gratzer: Getting Proper Tail Calls Out of C

Planet Haskell - Sun, 05/04/2014 - 6:00pm
Posted on May 5, 2014

While I don’t exactly love writing C, it has a lot to offer as a compilation target. It’s got lots of smart compilers that can target just about every platform I’ve ever heard of and tons of others, it’s got the ability to mess with low-level aspects of itself, and C’s got some nice high-level abstractions like functions.

One big issue I have with it as a target is that its function calls suck. I’m usually compiling a functional language where tail call optimization is imperative (heh) and C makes this a lot harder than it should.

This post illustrates how I currently beat C into actually generating proper tail calls. Most of the code from this post is straight from c_of_scheme. If you’re having trouble understanding some function, then there may actually be documentation for it in the source :)

The first step involves something called continuation passing style. The idea here is that we reify the implicit “continuation” for each expression into an explicit function.

So we’ll turn something like

(+ 1 (* 2 2))

Into something like

(lambda (k)
  ((lambda (mult-result)
     (k (+ 1 mult-result)))
   (* 2 2)))

Notice how the order of evaluation is now completely determined by how we pass things around? We pass each result along the chain of continuations, and every non-primitive function becomes a tail call.

This has one more very important effect: none of these function calls will return. We’re going to pass control off to each continuation, and the very last function will exit the program. This means that as soon as we call a continuation, we can nuke the stack, and every function call has become identical to calling a continuation.
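The same transform can be sketched in Haskell, continuation-passing the arithmetic from above. The names here are invented for illustration, not taken from c_of_scheme:

```haskell
-- Direct style: 1 + (2 * 2)
direct :: Int
direct = 1 + (2 * 2)

-- CPS: every intermediate result is handed to an explicit continuation,
-- so every non-primitive call ends up in tail position.
cps :: (Int -> r) -> r
cps k = multK 2 2 (\multResult -> addK 1 multResult k)
  where
    -- "Primitive" operations receive their continuation explicitly.
    multK a b k' = k' (a * b)
    addK  a b k' = k' (a + b)

main :: IO ()
main = print (cps id)  -- the final continuation just returns the result
```

Evaluating `cps id` threads 4 through `addK` and hands 5 to `id`, matching the direct-style result while never needing any call to return.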

C actually has a similar notion to this, and when we run this code through closure conversion and lambda lifting (a subject that’s worthy of its own rant) we’ll end up with functions that look something like

void _gen1(scm_t arg, scm_t cont, ...){
    scm_apply(1, cont, scm_add(arg, 1));
}

It’s worth mentioning that scm_apply will unwrap the continuation and actually apply it, since it’s just a normal function. We know that the call to scm_apply will never return. We can tell C this with __attribute__((noreturn)). Theoretically this also enables something much like TCO: once the last function is called, we can reuse the stack frame of _gen1, and if the function actually returns despite our promises, simply segfault (hooray for C).

Unfortunately, GCC doesn’t seem to do this on its own in my case. So I cried for a little bit and offered it many flags in the hopes that it would be merciful and just do it for me but it didn’t. And now I can actually illustrate how I did this manually.

It turns out this is possible to do with only a tiny impact on the generated code from the compiler and a bit of monkeying with scm_apply. First, I’ll explain how scm_apply looks normally.

void scm_apply(int i, scm_t f, ...) {
    int x;
    va_list va;
    scm_t *arg_list = malloc(sizeof(scm_t) * (i + 1));

    va_start(va, f);
    for(x = 1; x < i + 1; ++x){
        arg_list[x] = va_arg(va, scm_t);
    }

    if(f->state != 4){
        printf("Attempted to apply nonfunction\n");
        exit(1);
    } else {
        arg_list[0] = f->val.scm_lam.clos;
        /* the original snippet was truncated here; this calls the
           stored function pointer on the argument array */
        f->val.scm_lam.fun(arg_list);
    }
}

Note that scm_t is a pointer to a discriminated union in C to fake the dynamic types found in Scheme.

So the first bit is just the varargs goo to extract the arguments given to scm_apply. Once we have all of those in an array, we look at the state field of f, our function. If it’s not 4, then we don’t really have a function so we complain loudly and exit. Otherwise we just get the actual function pointer out of f and call it.

This is a little tricky to read if you’re not familiar with the DU’s in C, but there’s nothing exactly earth shattering in there.

Now, since every function call is going through scm_apply, we add a global ticker there to count how many function calls have passed through:

static int stack_frames;

...

void scm_apply(int i, scm_t f, ...) {
    ...
    else {
        ++stack_frames;
        ...
    }
}

Now we know just how quickly we’re burning through the available stack space.

Next we need to add a special case of scm_apply which we’ll call scm_init. It looks like this

void scm_init(lam_t f){
    stack_frames = 0;
    scm_apply(0, mkLam(scm_top_clos, f)); // Call main
}

All this does is initialize stack_frames and call scm_apply. We can modify the codegen so that the main function is passed to scm_init. We know that this main function will take no arguments in c_of_scheme for reasons that aren’t entirely relevant to this post.

OK, so now comes the magic, and like all good C magic, it starts by including setjmp.

#include <setjmp.h>

Now we add 3 more global variables (please don’t hate me)

static scm_t current_fun;
static scm_t *current_args;
static jmp_buf env;

Now we modify scm_apply so that if we’re at a depth of 100 function calls or more we stick the current function and arguments into these global variables and longjmp with env!

Now we need a good place to longjmp to, the place env points at. This is what scm_init is for: we know that it’s called almost immediately, so it’s relatively “low” on the stack. scm_init now becomes

void scm_init(lam_t f){
    stack_frames = 0;
    if(setjmp(env)){
        stack_frames = 0;
        /* the original snippet was truncated here; resume the call
           saved by scm_apply before it longjmp'd */
        current_fun->val.scm_lam.fun(current_args);
    }
    scm_apply(0, mkLam(scm_top_clos, f)); // Call main
}

Notice that we do no error checking and just go straight into calling the next function after a longjmp. In order to set up current_fun and current_args correctly, scm_apply must be modified:

void scm_apply(int i, scm_t f, ...) {
    int x;
    va_list va;
    scm_t *arg_list = malloc(sizeof(scm_t) * (i + 1));

    va_start(va, f);
    for(x = 1; x < i + 1; ++x){
        arg_list[x] = va_arg(va, scm_t);
    }

    if(f->state != 4){
        printf("Attempted to apply nonfunction\n");
        exit(1);
    } else {
        arg_list[0] = f->val.scm_lam.clos;
        if(stack_frames >= 100){
            // Transfer continuation up
            current_fun  = f;
            current_args = arg_list;
            longjmp(env, 1);
        }
        ++stack_frames;
        /* call through the stored function pointer, as before */
        f->val.scm_lam.fun(arg_list);
    }
}

This means that once we’ve applied 100 functions, we jump back to scm_init, demolishing all those unused stack frames, and keep going.

There it is, that’s my minimally invasive technique for tail calls in C. From what I’ve heard this is also used by Chicken Scheme.

Categories: Offsite Blogs

Vincent Hanquez: Simple time with Hourglass

Planet Haskell - Sun, 05/04/2014 - 6:00pm

Every time I’ve used the time API in Haskell, I’ve been left with the distinct feeling that the API is not what I want it to be. After one time too many searching the API to do some basic thing, I’ve decided to look at the design space and just try implementing what I want to use.

Before going into this re-design, this is my list of issues with the current API:

  • UTCTime is represented as a number of days since a date (sometime in the 19th century), plus a time difference in seconds from the beginning of the day. This is probably the worst representation to settle on as the main type, as it is neither a good computer representation nor a good human representation.
  • Every time I need to use the time API, I need to look at the documentation. Given the number of times I have used the time API, I feel like I shouldn’t need to anymore. Sure, it got easier, but it’s not as trivial as I want it to be. The number of functions and the number of types make it difficult. YMMV.
  • Too many calendar modules. I just want the standard western calendar module. It’s called the Gregorian calendar, and time makes sure you remember that, as it’s part of many of the function names needed to do things.
  • C time format strings for parsing and printing. Each time I need to format time, I pretty much need to consult the documentation (again), as there are almost 50 different formatters, each represented by a single letter (some of which have no obvious link to what they represent).
  • The need to add the old-locale package when doing formatting. Why is this “old” if it’s still in use and doesn’t have a replacement?
  • A local time API that gets in the way, with different types than global time: TimeOfDay, ZonedTime, LocalTime. YMMV.

Ironically, old-time seems much closer to what I have in mind, in some parts, than the time API. The name seems to imply that this was the time API before it got changed into what is currently available.


So I’ve got 4 items on this design list:

  1. Some better types
  2. Use the system API to go faster
  3. Unified and open system
  4. Better capability for printing and parsing
Better types

I wanted the main time type to be computer friendly, and linked to how existing APIs return the time:

  • On Windows systems, it’s the number of 100-nanosecond intervals (1 tick = 100ns) since 1 January 1601.
  • On Unix systems, it’s simply the number of seconds since 1 January 1970.

It’s probably fair to expect other systems to have similar accounting methods, and anyway just those two flavors cover probably 99% of usage. I originally planned to keep the system’s reference point in the type, but instead it’s simpler to choose one.

Inventing a new one would be fairly pointless, as it would force both systems to do conversions. Converting between the Windows and Unix epochs is really simple and very cheap (one int64 addition, one int64 multiplication), so the Unix epoch has been chosen.
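As a sanity check on that claim, the conversion really is one addition and one multiplication. A sketch with made-up function names (hourglass does this internally); the constant is the 11644473600-second offset between 1601-01-01 and 1970-01-01, expressed in 100ns ticks:

```haskell
import Data.Int (Int64)

-- 100ns ticks between the Windows epoch (1601-01-01)
-- and the Unix epoch (1970-01-01): 11644473600 seconds.
epochOffsetTicks :: Int64
epochOffsetTicks = 11644473600 * 10000000

-- Windows ticks (100ns since 1601) to Unix seconds.
windowsToUnix :: Int64 -> Int64
windowsToUnix ticks = (ticks - epochOffsetTicks) `div` 10000000

-- Unix seconds to Windows ticks.
unixToWindows :: Int64 -> Int64
unixToWindows secs = secs * 10000000 + epochOffsetTicks

main :: IO ()
main = print (windowsToUnix (unixToWindows 1399183466))
```

Both directions fit comfortably in Int64, which is why standardizing on one epoch costs essentially nothing.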

Along with the computer types, proper human types are useful for interacting with users. This means a Date type, a TimeOfDay, and a combined DateTime, described in pseudo-Haskell as:

data Date = Date Year Month Day
data TimeOfDay = TimeOfDay Hour Minute Seconds
data DateTime = DateTime Date TimeOfDay

Use the System, Luke!

Heavy conversion between seconds and dates is done by the system; most systems have a very efficient way to do that.

One side effect is that we have the same working code as the system. There’s much less need to worry about exactness or bugs in this critical piece.

For future-proofing, a Haskell implementation could be used as a fallback for other systems or different compiler targets (e.g. Haste), if anyone is interested.

Unified API

I don’t want to have to remember many different functions to interact with many types. Also, all time representations should be equivalent as to which time value they represent, so that it’s easy to convert between them with a unified system.

So two typeclasses have been devised:

  • one Timeable typeclass for types that can be converted to a time value.

  • one Time typeclass for time types that can be created from a time value.

With this, hourglass supports conversion between time types:

> timeConvert (Elapsed 0) :: Date
Date { dateYear = 1970, dateMonth = January, dateDay = 1 }
> timeConvert (Date 1970 January 1) :: Elapsed
Elapsed 0
> timeConvert (DateTime (Date 1970 January 1) (TimeOfDay 0 0 0 0)) :: Date
Date { dateYear = 1970, dateMonth = January, dateDay = 1 }

Anyone can add new calendar types or other low-level types and still interact with them through the built-in functions, provided they implement conversion to and from Elapsed. This allows anyone to define a new calendar, for example, without complicating anything.
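A sketch of what that pair of classes could look like. This is a simplification: the real hourglass classes have different method names and a richer Elapsed, so treat everything below as illustrative:

```haskell
-- Seconds since the Unix epoch, as in the post.
newtype Elapsed = Elapsed Int deriving (Eq, Show)

-- Anything that can be read as a point in time...
class Timeable t where
    timeGetElapsed :: t -> Elapsed

-- ...and anything that can be built from one.
class Time t where
    timeFromElapsed :: Elapsed -> t

-- Conversion between any two time types goes through Elapsed.
timeConvert :: (Timeable a, Time b) => a -> b
timeConvert = timeFromElapsed . timeGetElapsed

-- A toy user-defined "calendar": days since the epoch (no leap seconds).
newtype Days = Days Int deriving (Eq, Show)

instance Timeable Days where timeGetElapsed (Days d) = Elapsed (d * 86400)
instance Time     Days where timeFromElapsed (Elapsed s) = Days (s `div` 86400)

instance Timeable Elapsed where timeGetElapsed  = id
instance Time     Elapsed where timeFromElapsed = id

main :: IO ()
main = print (timeConvert (Days 3) :: Elapsed)
```

The point of the design is visible here: adding the Days calendar required no change to timeConvert, only the two instances.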

Better formatting API

Formatters have a known enumeration type:

> timePrint [Format_Day,Format_Text '-',Format_Month2] (Date 2011 January 12)
"12-01"

But it can be overloaded either by a string or by some known formats:

> timePrint "DD-MM-YYYY" (Date 2011 January 12)
"12-01-2011"
> timePrint ISO8601_Date (Date 2011 January 12)
"2011-01-12"

Someone could also re-add C time format strings with this design, without changing the API.


The API and the values returned have been tested on 32- and 64-bit Linux, FreeBSD, and Windows 7. It has the same limitations that the system has:

  • 32-bit Linux or BSD: between year 1902 and 2038. This doesn’t apply to the x32 flavor of Linux, or to the latest OpenBSD 5.5.
  • 64-bit Linux or BSD: between year 1 (as BC dates bring all sorts of random problems) and a few billion years. This ought to be enough for everyone :-)
  • Windows is limited to dates between 1601 and 9999.

I find the tradeoff acceptable, considering that in return we get decent performance and, all in all, a working range that is enough.

For a look at performance, as measured by criterion:

The library is small too:

  • time (haskell=1434 (94.5%), C=84 (5.5%))
  • hourglass (haskell=884 (98%), C=19 (2%))

And its documentation is available on hackage, and the code on github.

Example of use

> t <- timeCurrent
> timeGetDate t
Date {dateYear = 2014, dateMonth = May, dateDay = 4}
> t
1399183466s
> timeGetElapsed t
1399183466s
> timeGetDateTimeOfDay t
DateTime { dtDate = Date {dateYear = 2014, dateMonth = May, dateDay = 4}
         , dtTime = TimeOfDay {todHour = 6, todMin = 4, todSec = 26, todNSec = 0ns}}
> timePrint "YYYY-MM-DD" t
"2014-05-04"
> timePrint "DD Mon YYYY EPOCH TZHM" t
"04 May 2014 1399183466 +0000"

Q&A
  • Q: Report issue, wishlist ..
  • A: issue-tracker

  • Q: Do I have to use this ?
  • A: No, you can still use time if you prefer.

Categories: Offsite Blogs

wanted: Function to break circular data dependencies

haskell-cafe - Sun, 05/04/2014 - 5:00pm
Is a function like the following possible?

avoidCircularDataDependency :: a -> a -> a
avoidCircularDataDependency a b = ?

I want avoidCircularDataDependency to evaluate 'a', but if in the process of evaluating 'a' its own result is demanded (which normally would result in an infinite loop) it returns 'b'; otherwise it returns 'a'. I've often found myself wanting a function like this. It would make certain kinds of knot-tying/cycle detection _much_ easier. Is there any reason why this function can't/shouldn't exist?

- Job

_______________________________________________ Haskell-Cafe mailing list Haskell-Cafe< at >
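As a pure function this can't exist (its result would depend on evaluation order, breaking referential transparency), but GHC's runtime offers a partial approximation: a thread that demands a thunk it is itself evaluating can be killed with a NonTermination exception (the familiar <<loop>>), which is catchable in IO. A hedged sketch of that idea, not a reliable primitive, since the RTS is not guaranteed to detect the cycle (and often won't under optimization):

```haskell
import Control.Exception (NonTermination (..), catch, evaluate)

-- Try to evaluate 'a' to WHNF; if the RTS happens to detect that the
-- thunk depends on itself (<<loop>>), fall back to 'b'. Best effort
-- only: plain divergence without a self-referential thunk still hangs.
avoidCircular :: a -> a -> IO a
avoidCircular a b = evaluate a `catch` \NonTermination -> return b

main :: IO ()
main = avoidCircular (21 * 2 :: Int) 0 >>= print
```

Note the IO result type: that is the honest signature, and it hints at why the pure `a -> a -> a` version asked for in the post is impossible.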
Categories: Offsite Discussion

wren gayle romano: Women in FP

Planet Haskell - Sun, 05/04/2014 - 4:56pm

Katie Miller is giving a talk about FP outreach and diversity at next month's Erlang User Conference. She sent a questionnaire to the Lambda Ladies mailing list about our experiences, and I thought I'd share my responses here.

What led you to pursue functional programming?

Curiosity. I was in grad school, working towards a masters in computer science. And I was particularly interested in programming languages, though I was only familiar with imperative and OO languages at that point. I saw a course on functional programming, so I signed up. Best CS decision I ever made.

What sort of perception did you have of functional programming before you learnt more about it? Was this a barrier? If so, how did you overcome this?

All I knew at the time was that it was some sort of paradigm completely different from imperative and OO. That's it really. This was before FP started becoming popular; so, long before Clojure or Scala were invented, and long before C++ considered adding lambdas/closures to the language. Even within the FP community, Haskell was still considered the new kid on the block (despite having been around for quite some time).

What were the challenges for becoming part of the FP community?

The main challenge was just in figuring out where the community was and how to take part. As I said, this was long before FP became popular. My first FP language was Haskell, but I'd learned it in that course on functional programming so I didn't really know what the community was like. It was a year or two after taking the class that I decided to start really using Haskell for projects. At the time I was taking part in the Perl community, so I thought I'd go searching for some Haskell mailing lists to join. That's when I found the firehose that is Haskell Cafe.

Why do you think women are underrepresented in FP, more so than in programming generally?

I think there are a number of reasons. One of the big ones is how academic the community is. I don't mean that in the way people usually do. I'm an academic, and I love it here! No, the problem is that this creates a huge selection bias. I only really found FP by stumbling into it, and I only stumbled into it because I had a number of supportive advisors who helped foster my interest in programming languages. By the point I found FP, many women would have already been filtered out. Just getting into and affording college is a huge thing, especially for women of color. Let alone making it through undergrad and then getting into a masters program in CS without a bachelor's in CS. Let alone ending up at a school that can offer good FP classes, and finding those supportive advisors to help you along and guide you in the right direction.

If my story is anything to go by, it takes a lot of privilege (and luck) just to get to the point where you discover FP. After that, then you add on all the issues about maintaining community involvement. Because the community is so academic, this heightens issues of impostor syndrome. (Even men are driven out of FP due to impostor syndrome!) And since FP tends to be sold in a hyper-intellectualized manner, this evokes the "math is hard" brand of anti-intellectualism. While this drives a lot of people away, I think it has a differentially powerful impact on women due to the way we gender the sciences. That is, FP propaganda has a habit of taking the things which cause women to be underrepresented in STEM generally, and then cranking them up to eleven.

Another issue, and one I haven't seen discussed very often, is the fact that many FP communities are also FOSS communities. Women are more underrepresented in FOSS than in other CS communities, so the fact that FP tends to be FOSS means that women will tend to be more underrepresented in FP than other CS communities.

What strategies do you think the community could employ to address this problem and improve the (gender, and other types of) diversity in FP?

Setting up communities which aren't so hyper-intellectualized is a big step. Getting rid of all that propaganda and just treating FP like any other paradigm will do a lot to mitigate the impact of impostor syndrome and "math is hard" anti-intellectualism. It's no panacea, but it's probably the easiest thing we can tackle. Addressing the systemic issues is a lot harder.

Do you think it helps to have a women's group like Lambda Ladies? How has it been helpful for you?

I do think it helps. Over the years I've seen a lot of women come and go (mostly go) on Haskell Cafe. Overall I feel like the Cafe is one of the safer and more welcoming communities, but we've still had our misogynistic flareups. And after each one, I've watched the subsequent evacuation as women no longer feel quite so safe or welcome. By offering a safe space, women's groups are an important form of resistance against this sort of problem. It's a space where you don't always have to be on your guard against harassment. It's a space where you don't have to worry about how you present yourself, don't have to worry that femininity will undermine your credibility, don't have to worry about how asking "stupid" questions will affect the way people think of women as a whole.

Also —we don't really do this on LL, but— women's groups can provide a safe environment for venting about the sorts of problems we encounter online, in the work force, etc. Venting is always a tricky topic, but I think the importance of venting is grossly underrated. Whatever community you're a part of, bad things are going to come up sooner or later. When that happens, having a side community where you can let off steam or discuss why the particular thing is problematic is an important way to deal with the emotional fallout of these bad things. Once you've dealt with it, you can return to the main community; but if you have nowhere to deal with it, then things build up and up until you're just done and you quit the community.

In addition to providing a safe space, women's groups also serve an important role regarding announcements for jobs, conferences, etc. The announcements we get are tailored for women and so include important details like how welcoming they are of women, whether they can offer travel expenses, whether they offer child care, and so on.

For me, LL has been helpful mainly as a place to witness women in FP. Just seeing other women is energizing, and keeps me interested in being out there as part of the general FP community. The bit about announcements has also been helpful.

Categories: Offsite Blogs

Idiomatic way to compose "sum"-like functions and apply over a functor

Haskell on Reddit - Sun, 05/04/2014 - 2:46pm

Imagine you have a function "a -> a -> a" similar to (+) or (++). Let's call it 'sum'. If you have two arguments "x :: a" and "y :: a" you can apply it like this

result = sum x y

or


result = x `sum` y

Imagine you need to apply it over arguments in a functor (sorry if it's a wrong way to put it), "xf :: f a" and "yf :: f a". You can use the notation from Applicative:

result = sum <$> xf <*> yf

So far so good. Now imagine you need to "sum" three arguments. As long as they are "plain", you can simply write:

result = x `sum` y `sum` z

and it looks clean and nice. But what would be a "nice" notation for that if the arguments are in a functor? I came up with this:

result = ((.).(.)) sum sum <$> xf <*> yf <*> zf

but the owlish face at the beginning is somewhat scary (i.e. not intuitively obvious). Using some sort of foldM seems an overkill when the number of arguments is small.

Any advice? Thanks in advance!
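One commonly suggested alternative (a sketch, not taken from the thread): instead of building the owl, lift the binary operation once with liftA2 and fold it over the wrapped arguments, which scales to any number of them:

```haskell
import Control.Applicative (liftA2)

-- A binary "sum"-like operation, as in the post.
sum2 :: Int -> Int -> Int
sum2 = (+)

-- Instead of  ((.).(.)) sum2 sum2 <$> xf <*> yf <*> zf,
-- fold the lifted operation over any number of wrapped arguments.
sumF :: Applicative f => [f Int] -> f Int
sumF = foldr1 (liftA2 sum2)

main :: IO ()
main = print (sumF [Just 1, Just 2, Just 3])
```

With the Maybe functor this yields Just 6, and a single Nothing anywhere in the list makes the whole result Nothing, just as the Applicative notation would.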

submitted by lukewarm
[link] [6 comments]
Categories: Incoming News

What happened to #haskell?

Haskell on Reddit - Sun, 05/04/2014 - 11:47am

I briefly played around with haskell back in 2007 or so, and back then #haskell seemed to have a lot of helpful and patient people willing to answer even dumb noob questions. Since I've been actually using haskell for real the past year or so, I've been hanging around the irc channel and it seems like that is no longer the case. Lots of questions either get ignored totally or are given one-line non-answers now. Is it just my imagination or do you basically have to use stackoverflow to get help now? Is stackoverflow the reason #haskell isn't helpful anymore, or is it just a coincidence?

submitted by haskellnoob
[link] [57 comments]
Categories: Incoming News

Deadline extension: The 20th IEEE Pacific Rim International Symposium on Dependable Computing (PRDC 2014)

General haskell list - Sun, 05/04/2014 - 11:22am
*** Apologies for cross-posting ***

*Due to numerous requests, the deadline of PRDC 2014 has been extended to May 16th 2014.*

*Call for Papers: The 20th IEEE Pacific Rim International Symposium on Dependable Computing (PRDC 2014)*
*Singapore, Nov 18-21, 2014*

PRDC 2014 ( is the twentieth in this series of symposia, started in 1989, that are devoted to dependable and fault-tolerant computing. PRDC is recognized as the main event in the Pacific area that covers the many dimensions of dependability and fault tolerance, encompassing fundamental theoretical approaches, practical experimental projects, and commercial components and systems. As applications of computing systems have permeated into all aspects of daily life, the dependability of computing systems has become increasingly critical. This symposium provides a forum for countries around the Pacific Rim and other areas of the world to exchange ideas for improving the dependability of computing systems.
Categories: Incoming News

What's the most idiomatic way to partially apply a function to an arbitrary subset of its arguments?

Haskell on Reddit - Sun, 05/04/2014 - 8:57am

It often happens in Haskell that there is a function with lots of parameters

a -> b -> c -> d -> e -> f

and it would be convenient to partially apply it not to "a" and "b" but, say, to "b" and "e".

For a 2-parameter function, flip works fine, but writing a bunch of n-ary flips feels crufty.

I'm also aware of passing a record as an argument which allows the simulation of keyword args, but many (most?) library functions aren't written that way.

Is there another way, besides generating the closure definitions through Template Haskell?
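The answer most Haskellers give (a sketch, not quoted from the thread): skip combinators entirely and write a lambda, which names exactly the slots being filled. The 5-argument function below is hypothetical, standing in for the post's "a -> b -> c -> d -> e -> f":

```haskell
-- A hypothetical 5-argument function for illustration.
f :: Int -> String -> Char -> Bool -> Double -> String
f a b c d e = concat [show a, b, [c], show d, show e]

-- Partially apply f to its 2nd and 5th arguments with a lambda:
-- no flips needed, and the remaining parameters stay visible.
g :: Int -> Char -> Bool -> String
g = \a c d -> f a "fixed-b" c d 2.5

main :: IO ()
main = putStrLn (g 1 'x' True)
```

The lambda is arguably more readable than any flip pipeline precisely because it shows which argument positions are bound and which are left open.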

submitted by DreamyDreugh
[link] [18 comments]
Categories: Incoming News

State of secure coding in Haskell?

Haskell on Reddit - Sun, 05/04/2014 - 5:05am

Recently, I've been forced to study a bit of software security. Everyone who has ever done so probably knows how frustrating that can be - pretty much every tool for building programs seems either broken or beyond repair.

Static checking and fuzzing seem to be the state of the art on the C and Java side of the fence. What is the status of such tools in Haskell? Are there taint-tracking tools or other static analysers beyond the type system? Or any recommended libraries specially aimed at secure programming? (I'd guess that quickcheck covers pretty much everything you'd use fuzzing for. And without the nasty difficulty of devising oracles.)

(As an example why I feel such tools are necessary, both the snap tutorial and the example in scotty docs present code with an obvious xss vulnerability without any warning sticker attached.)

submitted by aleator
[link] [4 comments]
Categories: Incoming News

What's the best way to get involved with open source Haskell projects?

Haskell on Reddit - Sun, 05/04/2014 - 4:44am

Hi. I've been learning Haskell for a while now and I feel like a good way to practice would be to get involved with some open source projects. I was wondering if anyone knew the best way to start getting involved. Are there any projects which are welcoming to people who may not be the best Haskell programmers or is there a particular project you know that could do with a hand?

Thanks and sorry if this is in the wrong place. I originally posted this to /r/haskellquestions but there were no replies. Thanks

submitted by hailmattyhall
[link] [5 comments]
Categories: Incoming News

Edward Z. Yang: The cost of weak pointers and finalizers in GHC

Planet Haskell - Sun, 05/04/2014 - 2:55am

Weak pointers and finalizers are a very convenient feature for many types of programs. Weak pointers are useful for implementing memotables and solving certain classes of memory leaks, while finalizers are useful for fitting "allocate/deallocate" memory models into a garbage-collected language. Of course, these features don’t come for free, and so one might wonder what the cost of utilizing these two (closely related) features is in GHC. In this blog post, I want to explain how weak pointers and finalizers are implemented in the GHC runtime system and characterize what extra overheads you incur by using them. This post assumes some basic knowledge about how the runtime system and copying garbage collection work.

The userland API

The API for weak pointers is in System.Mem.Weak; in its full generality, a weak pointer consists of a key and a value, with the property that if the key is alive, then the value is considered alive. (A "simple" weak reference is simply one where the key and value are the same.) A weak pointer can also optionally be associated with a finalizer, which is run when the object is garbage collected. Haskell finalizers are not guaranteed to run.

Foreign pointers in Foreign.ForeignPtr also have the capability to attach a C finalizer; i.e. a function pointer that might get run during garbage collection. As it turns out, these finalizers are also implemented using weak pointers, but C finalizers are treated differently from Haskell finalizers.
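To make the userland API concrete, here is a small sketch (not from the post) using the weak-pointer machinery from base. The key is an IORef, since heap-allocated mutable cells are the best-behaved keys for weak references:

```haskell
import Data.IORef (newIORef, readIORef, mkWeakIORef)
import System.Mem.Weak (deRefWeak)

main :: IO ()
main = do
    ref <- newIORef (42 :: Int)
    -- Weak reference to ref, with a Haskell finalizer attached.
    -- As the post notes, the finalizer is not guaranteed to run.
    wk <- mkWeakIORef ref (putStrLn "finalizer ran")
    -- While the key is still reachable, deRefWeak returns Just it.
    m <- deRefWeak wk
    case m of
        Just r  -> readIORef r >>= print
        Nothing -> putStrLn "key was already collected"
```

Here mkWeakIORef is the "simple" case where key and value coincide; the more general mkWeak from System.Mem.Weak lets them differ, which is the key/value structure described above.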

Representation of weak pointers

A weak pointer is a special type of object with the following layout:

typedef struct _StgWeak {   /* Weak v */
  StgHeader header;
  StgClosure *cfinalizers;
  StgClosure *key;
  StgClosure *value;        /* v */
  StgClosure *finalizer;
  struct _StgWeak *link;
} StgWeak;

As we can see, we have pointers to the key and value, as well as separate pointers for a single Haskell finalizer (just a normal closure) and C finalizers (which have the type StgCFinalizerList). There is also a link field for linking weak pointers together. In fact, when the weak pointer is created, it is added to the nursery's list of weak pointers (aptly named weak_ptr_list). This list is global, so we do have to take out a global lock when a new weak pointer is allocated.

Garbage collecting weak pointers

Pop quiz! When we do a (minor) garbage collection on weak pointers, which of the fields in StgWeak are considered pointers, and which fields are considered non-pointers? The correct answer is: only the first field is considered a “pointer”; the rest are treated as non-pointers by normal GC. This is actually what you would expect: if we handled the key and value fields as normal pointer fields during GC, then they wouldn’t be weak at all.

Once garbage collection has been completed (modulo all of the weak references), we then go through the weak pointer list and check if the keys are alive. If they are, then the values and finalizers should be considered alive, so we mark them as live, and head back and do more garbage collection. This process will continue as long as we keep discovering new weak pointers to process; however, this will only occur when the key and the value are different (if they are the same, then the key must have already been processed by the GC). Live weak pointers are removed from the "old" list and placed into the new list of live weak pointers, for the next time.

Once there are no more newly discovered live pointers, the list of dead pointers is collected together, and the finalizers are scheduled (scheduleFinalizers). C finalizers are run on the spot during GC, while Haskell finalizers are batched together into a list and then shunted off to a freshly created thread to be run.

That's it! There are some details for how to handle liveness of finalizers (which are heap objects too, so even if an object is dead we have to keep the finalizer alive for one more GC) and threads (a finalizer for a weak pointer can keep a thread alive).

Tallying up the costs

To summarize, here are the extra costs of a weak pointer:

  1. Allocating a weak pointer requires taking a global lock (this could be fixed) and costs six words (fairly hefty as far as Haskell heap objects tend to go.)
  2. During each minor GC, processing weak pointers takes time linear to the size of the weak pointer lists for all of the generations being collected. Furthermore, this process involves traversing a linked list, so data locality will not be very good. This process may happen more than once, although once it is determined that a weak pointer is live, it is not processed again. The cost of redoing GC when a weak pointer is found to be live is simply the cost of synchronizing all parallel GC threads together.
  3. The number of times you have to switch between GC'ing and processing weak pointers depends on the structure of the heap. Take a heap and add a special "weak link" from a key to its dependent weak value. Then we can classify objects by the minimum number of weak links we must traverse from a root to reach the object: call this the "weak distance". Supposing that a given weak pointer's weak distance is n, then we spend O(n) time processing that weak pointer during minor GC. The maximum weak distance constitutes how many times we need to redo the GC.

In short, weak pointers are reasonably cheap when they are not deeply nested: you only pay the cost of traversing a linked list of all of the pointers you have allocated once per garbage collection. In the pessimal case (a chain of weak links, where the value of each weak pointer was not considered reachable until we discovered its key is live in the previous iteration), we can spend quadratic time processing weak pointers.

Categories: Offsite Blogs