News aggregator

Where am I going wrong with my bounding box function?

haskell-cafe - Mon, 06/20/2016 - 10:52pm
I am writing a bounding box module for the octree library. You can find my branch here. In the function below, I try to make explicit the implicit bounding boxes of an Octree. The problem is, not all bounding boxes are valid. This is the function in question, followed by a way to replicate the problem in ghci.

    explicateMBB :: (BBox3, Octree a) -> [BBox3]
    explicateMBB (mbb, (Leaf _)) = [mbb]
    explicateMBB (mbb, (Node { split = split'
                             , nwu = nwu', nwd = nwd'
                             , neu = neu', ned = ned'
                             , swu = swu', swd = swd'
                             , seu = seu', sed = sed' })) =
        mbb : concatMap explicateMBB octList
      where
        octList = zip boxList children
        boxList = [swdBox, sedBox, nwdBox, nedBox, swuBox, seuB
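A standalone sketch of the octant derivation the function needs (the types here are simplified stand-ins for the octree package's Vector3 and BBox3, so this is illustrative rather than a drop-in fix). Each child box keeps one corner of the parent box and takes the split point as its opposite corner; a swapped coordinate at this step is the classic way to end up with invalid (min > max) boxes:

```haskell
-- Hypothetical simplified types; the real octree package has its own.
data V3 = V3 Double Double Double deriving (Eq, Show)
data BBox3 = BBox3 { minPoint :: V3, maxPoint :: V3 } deriving (Eq, Show)

-- Derive the eight child boxes of a parent box around a split point.
-- Generator order (z outermost, x innermost) yields the same order as
-- the post's boxList: swd, sed, nwd, ned, swu, seu, nwu, neu.
octants :: BBox3 -> V3 -> [BBox3]
octants (BBox3 (V3 x0 y0 z0) (V3 x1 y1 z1)) (V3 sx sy sz) =
  [ BBox3 (V3 xlo ylo zlo) (V3 xhi yhi zhi)
  | (zlo, zhi) <- [(z0, sz), (sz, z1)] -- down, then up
  , (ylo, yhi) <- [(y0, sy), (sy, y1)] -- south, then north
  , (xlo, xhi) <- [(x0, sx), (sx, x1)] -- west, then east
  ]
```

Each octant is valid only if the split point actually lies inside the parent box, which is worth asserting when hunting for invalid boxes.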
Categories: Offsite Discussion

AFL + Haskell

haskell-cafe - Mon, 06/20/2016 - 7:20pm
Hello, There was an interesting post today [1] on Hacker News about fuzzing Rust with american-fuzzy-lop (AFL). Interestingly, the AFL instrumentation gets applied as an LLVM pass. I started to wonder if it would be possible to do the same with Haskell binaries? I know there are plenty of tools designed specifically for Haskell, but still it might be interesting to see how AFL performs in this setting. Cheers, Krzysztof Skrzętnicki [1] _______________________________________________ Haskell-Cafe mailing list To (un)subscribe, modify options or view archives go to: Only members subscribed via the mailman list are allowed to post.
Categories: Offsite Discussion

Source code location in IO?

haskell-cafe - Mon, 06/20/2016 - 4:03pm
Is it compatible with the semantics of Haskell to have a function

    sourceLocation :: IO String

which, when run, returns the source file location at which it is used? For example, suppose Main.hs is

    module Main where

    main = do
      putStrLn =<< sourceLocation
      putStrLn "Hello"
      putStrLn =<< sourceLocation

It would print the following when run

    Main.hs:4
    Hello
    Main.hs:6

and

    module Main where

    main = do
      let s = sourceLocation
      putStrLn =<< s
      putStrLn "Hello"
      putStrLn =<< s

It would print the following when run

    Main.hs:4
    Hello
    Main.hs:4

If this is not compatible with the semantics of Haskell, why not? I agree that the two programs must have the same denotation, but is there anything that requires them to have the same output when run? Tom
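GHC has since grown machinery that behaves very much like this: implicit call stacks (CallStack in GHC 7.10, HasCallStack from GHC 8.0). The constraint is solved at the use site of the name, which is why the two programs above can legitimately differ: in the second, the recorded site is the let binding. A sketch (the function name matches the question; the implementation is mine):

```haskell
import GHC.Stack (HasCallStack, SrcLoc(..), callStack, getCallStack)

-- Returns "File.hs:line" for the place where sourceLocation is used.
-- The HasCallStack constraint makes GHC pass the use site implicitly.
sourceLocation :: HasCallStack => IO String
sourceLocation =
  case getCallStack callStack of
    (_name, loc) : _ -> return (srcLocFile loc ++ ":" ++ show (srcLocStartLine loc))
    []               -> return "<unknown location>"
```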
Categories: Offsite Discussion

Resumable applicative parsing from tail end?

haskell-cafe - Mon, 06/20/2016 - 11:42am
Hello all, in my attempt to write a simulation program, I am facing the following problem: as the simulation progresses, I collect "primary" log entries, each one associated with a timestamp. These entries carry the information I am interested in (and not so much the final state). Currently I am using a simple list to collect log entries. Since (:) is so much easier than (++) I prepend new entries to the head of the list. So my log runs backwards in time with the oldest entries at the tail end. Because the log-entries are typically too fine-grained to be useful, I want to aggregate several of them into a single log entry in a secondary log. I want to do this "as I go" and discard primary log entries which are already aggregated. Thus I can keep the primary log reasonably small. I thought that aggregation is just a special case of parsing. And indeed, I could write an applicative parser PrimaryLog -> (PrimaryLog, SecondaryLog) which does the trick, except I had to reverse the ordering of the primary lo
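Since the primary log is newest-first, "parsing from the tail end" can also be viewed as splitting the list at a cutoff: everything older than the cutoff is complete and can be folded into one secondary entry, while everything newer stays as the primary log. A minimal sketch with made-up entry types (Primary, Secondary, and the timestamp representation are assumptions, not the poster's actual types):

```haskell
type Time = Int

data Primary = Primary { pTime :: Time, pVal :: Double } deriving (Eq, Show)

-- A secondary entry summarising a time range of primary entries.
data Secondary = Secondary { sFrom, sTo :: Time, sSum :: Double } deriving (Eq, Show)

-- The primary log is stored newest-first, so the oldest entries form
-- a suffix. Aggregate everything strictly older than the cutoff into
-- one secondary entry and keep the rest as the new primary log.
aggregate :: Time -> [Primary] -> ([Primary], Maybe Secondary)
aggregate cutoff plog =
  case break ((< cutoff) . pTime) plog of
    (keep, [])  -> (keep, Nothing)
    (keep, old) -> ( keep
                   , Just (Secondary { sFrom = pTime (last old)
                                     , sTo   = pTime (head old)
                                     , sSum  = sum (map pVal old) }))
```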
Categories: Offsite Discussion

Philip Wadler: “Dishonesty on an industrial scale”: EU law expert analyses referendum debate

Planet Haskell - Mon, 06/20/2016 - 8:46am

Authoritative sources in the EU debate are thin on the ground, so I was pleased when a colleague pointed me to a video by University of Liverpool Law School's Professor Michael Dougan, a leading expert in EU law. It runs 25 minutes and is well worth the time. I've transcribed a couple of segments below.

I've just watched with increasing dismay as this referendum debate has unfolded, and though I have to say the Remain side has not covered themselves in glory at points with their use of dodgy statistics, I think the Leave campaign has degenerated into dishonesty, really, on an industrial scale --- there's really no other way to put it ... For someone who works in the field like I do, it's probably the equivalent of an evolutionary biologist listening to a bunch of creationists tell the public that creation theory is right and evolution is completely wrong. It really is that bad ... and yet it's working.

The second idea I think which has become pervasive is that somehow this is a debate about us and them. The EU is somebody else, and we are somehow the pathetic victims of Brussels as if this country was incapable of looking after itself. To an EU lawyer, indeed to anyone who works in the field, this is just absolutely bizarre. ... In the field we refer to The Big Three, the UK, France, and Germany, because between them the UK, France, and Germany provide the EU with its political, its economic, its diplomatic leadership. And indeed, virtually nothing happens in the EU without the big three being in control of it. The UK, to put it simply, has enormous influence within the EU. It sets agendas, it negotiates alliances, it builds and brokers compromises. ... Remember, the EU is not run by the unelected eurocrats of the commission as we hear all the time, it's actually run by the 28 governments of the council working together with the European Parliament. Despite the fact that majority voting is the normal rule within the council, in practice about 90% of EU decisions are still taken by consensus. In other words, the member states negotiate until everyone feels basically happy with the decision. And it couldn't be otherwise. The EU is the creation of its member states, it has to serve their basic national interests, and it does so through a process of compromise and negotiation. So the EU isn't someone else, it's not something that happens to us, we are major players, leading players, within the European Union.

Here's to an informed debate!
Categories: Offsite Blogs

The GHC Team: Contributing to GHC

Planet Haskell - Mon, 06/20/2016 - 6:35am

This post is a response to Anthony's blog post about contributing to GHC, and the subsequent Reddit discussion. You'll find it easier to follow my comments below if you read Anthony's post first. Short summary: many of Anthony's criticisms are accurate, but they are not easy to address, especially in a volunteer-only project. However we will do something about the feature-request process.

Barrier to entry

I am ashamed that GHC seems difficult to contribute to. The details don't matter; the fact that it made you feel that way is by definition bad. I'm really sorry. We should find a way to do better.

An underlying issue is one of effort budget. GHC has essentially no full-timers, unlike many open-source projects. In particular, Simon Marlow and I are volunteers like everyone else, and like everyone else we have a day job. Microsoft Research generously pays Well Typed for front-line support, manifested in the heroic form of Ben and Austin, totalling around one FTE. But their effort is fully absorbed by managing releases, reviewing patches, maintaining the infrastructure, and so on.

It's a real challenge to maintain a low barrier to entry for a large complex project, whose motive force is primarily volunteers. It means that any initiative will only fly if someone steps up to drive it.

Technical documentation

The questions of scattered and inconsistent documentation are difficult to address. GHC is twenty-five years old; it has hundreds of authors and documentation that was once accurate falls out of date. I would love it to have consistent, well-explained documentation. But I don't know how to achieve it.

GHC's technical documentation is either in the code itself, or on GHC's wiki. An advantage of a wiki is that anyone can edit it. Yes, instructions and technical descriptions go out of date. Who will fix them? You, gentle reader! There is no one else.

But in practice few do. Perhaps that's because such work is invisible, so no one even knows you've done it? What would make people feel "I'd really like to improve those instructions or that documentation?"

I will argue for two things. First, I find Notes incredibly helpful. They live with the code, so they are less likely to go out of date. They don't mess up the flow of the code. They can be referred to from multiple places. They are the single most successful code documentation mechanism we have. (To be fair to Anthony, I don't think he was complaining about Notes, just expressing surprise.)

Second, I really do think that each new proposed feature needs a description, somewhere, of what it is, why it's a good thing, and as precisely as possible what its specification is. Plus perhaps some implementation notes. It's very important when reading an implementation to know what it is that the code is seeking to implement. So yes, I do frequently ask for a specification.

Arc, GitHub, Phabricator, etc.

Personally I carry no torch for arc. I do whatever I'm told, workflow-wise. If GitHub has got a lot better since we took the Arc decision, and there's a consensus that we should shift, I'm all for it provided someone explains to me what my new workflows should be. I am absolutely in favour of reducing barriers to contribution. (Other members of the GHC team have much stronger and better informed views than me, though.)

But there are costs to moving, especially if the move implies friction in the future. In particular, those that worry me most are those surrounding issue tracking. GitHub's issue tracker simply doesn't seem sufficient for a project as large and multi-faceted as GHC; in particular, having tags as the only form of metadata is extremely limiting.

One alternative would be to use Github and Trac side-by-side, but then we face the issue of conflicting ticket number spaces as both systems insist on ticket numbers of the form #NNNN.

Coding style

Coding style is rather a personal thing, and we have been reluctant to enforce a single uniform style. (Quite apart from the task of reformatting 150k lines of Haskell.) Personally I don't find it an obstacle to reading other people's code.

Feature requests

That leaves the most substantial issue that Anthony poses: the process of making a contribution.

For fixing a bug, I think that (aside from the arc/Github debate) things are not too bad. On the arc/Github question, Austin has been working with the Phabricator maintainers to try to have them introduce a workflow which might be more familiar and more convenient for Github experts.

But for offering a new feature, Anthony argues that the process is unacceptably arduous. There are lots of things to think about

  • GHC has tons of features implemented by talented and motivated folk... who have since moved on. So when a new feature is proposed, my baseline guess is that I will personally be responsible for maintaining it in five years' time. So I want to understand what the feature is. I want to understand how the implementation works. I want to be reasonably sure that it doesn't add a bunch of complexity to an already very complicated code base. And since any new feature adds some complexity, I want to have some confidence that the feature commands broad support -- even when it's behind a language extension flag.

So actually I think it's reasonable that the process should be somewhat arduous. A new feature imposes costs on every single person who works on that code in the future. We don't really make this point explicitly, but the contributor guidelines for Phabricator do. It might be helpful if we articulated similar guidelines.

  • "Any proposal needs a lightweight way to gauge broad support, then a period of constructive refinement". Indeed! What would be such a lightweight way? The trouble is that there is an enormous silent majority, and discussions are typically driven by a handful of articulate and well-informed contributors. All credit to them, but they may very well be an unrepresentative sample. I simply don't know how to gauge true broad support.
  • There is a problem with my own personal lack of bandwidth. I am one of the main gatekeepers for some big chunks of GHC, the renamer, typechecker and optimisation passes. That is good in a way, because if GHC lacked gatekeepers, it would soon lose conceptual integrity and become a huge ball of mud. But it is bad in other ways. I review a lot of code; but not fast enough! In prioritising I am guided by my (doubtless flawed) perceptions of things that lots of people are eagerly awaiting. The same thing goes for Simon Marlow, especially in the runtime system. We both have other day jobs. Even writing this post means that I'm not reviewing someone's code.

But I am acutely aware that "Simon and Simon are busy" is pretty cold comfort to someone awaiting a review. Maybe there should be a bunch of other people playing this role. That would be great. For example, Richard Eisenberg has taken responsibility for Template Haskell, which is totally fantastic. I would love to hear from people who are willing to take overall responsibility for a piece of GHC.

None of this is an argument for the status quo. Simon, Ben, Austin, Herbert, and I have been talking about a rather more systematic process for new features. We'd like to learn from experience elsewhere, such as the Rust process, rather than reinvent the wheel. Please suggest processes that you have seen working well elsewhere.

Categories: Offsite Blogs

How to avoid expensive and unnecessary typeconversions?

haskell-cafe - Mon, 06/20/2016 - 12:01am
Suppose you have a class ListLike (which you do have, actually) and it has methods for operations on types that are like lists, and in particular it has a method fromListLike which converts any ListLike value to any other:

    fromListLike :: ListLike full' item => full' -> full

The default implementation of this is simply

    fromListLike = fromList . toList

but this means that if you happen to apply fromListLike to a value which is already the desired type, two unnecessary and possibly expensive conversions take place. My question is, how can one write code that implements fromListLike in such a way that when the two type parameters are the same type it just uses

    fromListLike = id

I've thought about using a Convert a b class and writing an instance for every pair of ListLike instances. I haven't convinced myself this would work, and if it does it means that adding a new instance involves writing a bunch of Convert instances. Maybe one could somehow have a default implementation (fromList
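One trick that comes up for this kind of problem, sketched here standalone rather than against the real ListLike class, is a Typeable-based runtime check: if cast succeeds, the source and target types coincide and the value is returned untouched; otherwise we pay for the round trip. The hidden assumption is that same-type conversion really is the identity:

```haskell
import Data.Maybe (fromMaybe)
import Data.Typeable (Typeable, cast)

-- Sketch: short-circuit a list-mediated conversion when the source
-- and target types are the same. toL and fromL stand in for
-- ListLike's toList and fromList. Only sound if fromL . toL is the
-- identity at the same type.
convertFast :: (Typeable a, Typeable b) => (a -> [x]) -> ([x] -> b) -> a -> b
convertFast toL fromL x = fromMaybe (fromL (toL x)) (cast x)
```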
Categories: Offsite Discussion

Proposal: add ReadPrec-based functions to Read1/Read2

libraries list - Sun, 06/19/2016 - 9:40pm
GHC 8.0 added the Data.Functor.Classes module [1] to base, and with it the Read1 and Read2 typeclasses. The current definition of Read1 is this:

    class Read1 f where
      liftReadsPrec :: (Int -> ReadS a) -> ReadS [a] -> Int -> ReadS (f a)
      liftReadList :: (Int -> ReadS a) -> ReadS [a] -> ReadS [f a]

There's a pretty big problem with this definition: it uses ReadS (a synonym for String -> [(a, String)]). This sort of parser is very slow (the docs even admit as such [2]), and moreover, the actual Read typeclass on which Read1 is based tries to avoid using it whenever possible. The Read typeclass has this definition currently:

    class Read a where
      readsPrec :: Int -> ReadS a
      readList :: ReadS [a]
      readPrec :: ReadPrec a
      readListPrec :: ReadPrec [a]

Where ReadPrec is a much more efficient parser datatype. When deriving Read instances, GHC defines them in terms of readPrec, and gives the other methods default definitions that leverage readPrec. For t
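What the proposal amounts to can be sketched with a hypothetical class (the primes mark the names as illustrative, not the real proposal's; the instance mirrors what GHC derives for Read on Maybe):

```haskell
import Text.Read
import Text.ParserCombinators.ReadPrec

-- Hypothetical ReadPrec-based analogue of Read1.
class Read1' f where
  liftReadPrec' :: ReadPrec a -> ReadPrec [a] -> ReadPrec (f a)

instance Read1' Maybe where
  liftReadPrec' rp _ = parens $
        (do Ident "Nothing" <- lexP
            return Nothing)
    +++ prec 10 (do Ident "Just" <- lexP
                    a <- step rp
                    return (Just a))

-- Run the efficient parser through the old ReadS interface.
readMaybe1 :: Read a => String -> [(Maybe a, String)]
readMaybe1 = readPrec_to_S (liftReadPrec' readPrec readListPrec) 0
```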
Categories: Offsite Discussion

Simon's email classified as spam

glasgow-user - Sun, 06/19/2016 - 8:08pm
Dear GHC devs/users This is another test to see if email from me, relayed via, ends up in your spam folder. Gershom thinks he’s fixed it (below). Can I trespass on your patience once more? Just let me know if this email ends up in your inbox or spam. Can you cc John and Gershom (but perhaps not everyone else)? Thanks Simon | From: Gershom B [mailto:gershomb< at >] | Sent: 18 June 2016 18:53 | To: Simon Peyton Jones <simonpj< at >>; John Wiegley | <johnw< at >> | Cc: Michael Burge <michaelburge< at >> | Subject: Re: FW: CMM-to-SAM: Register allocation weirdness | | Simon — I just found two possible sources of the problem (first: the top | level config didn’t take hold due to other errors when updating — fixed that, | and second, it might be possible the top level config isn’t retroactively | applied to all lists — so i added the config to the relevant lists directly). | | I think if you try one more time it might work (fingers crossed). ____
Categories: Offsite Discussion

FP Complete: async exceptions, STM, and deadlocks

Planet Haskell - Sun, 06/19/2016 - 6:00pm

For a change of pace, I wanted to cover a simple topic: asynchronous exceptions, and how they interact with Software Transactional Memory and MVars, as well as the GHC runtime system's deadlock detection. As fate would have it, the topics in question occurred twice in the past month, once in a bug report against the yaml package, and once in a major refactoring of FP Complete's distributed computation platform. I'll focus on the first since it's an easier case to explain, but I'll try to explain the second (and juicier) one too.

To get this started off nicely, I'll share an interesting piece of code and its output. If you're playing along at home, try guessing what the output of this program will be, and if your guess is different from the actual output, try to predict why. Hopefully, by the end of this post, it will be clear (although still perhaps surprising).

    #!/usr/bin/env stack
    -- stack --resolver ghc-7.10.3 runghc
    import Control.Concurrent
    import Control.Exception

    mayThrow1, mayThrow2, mayThrow3 :: IO Int
    mayThrow1 = return 5 -- nope, don't throw
    mayThrow2 = error "throw a normal exception"
    mayThrow3 = do
        var <- newEmptyMVar
        takeMVar var -- BlockedIndefinitelyOnMVar, always!

    -- implements the idea from:
    --
    -- Don't use the async package, which already works around the bug
    -- I'm demonstrating!
    tryAny :: IO a -> IO (Either SomeException a)
    tryAny action = do
        var <- newEmptyMVar
        uninterruptibleMask_ $ forkIOWithUnmask $ \restore -> do
            res <- try $ restore action
            putMVar var res
        takeMVar var

    main :: IO ()
    main = do
        tryAny mayThrow1 >>= print
        tryAny mayThrow2 >>= print
        tryAny mayThrow3 >>= print
        putStrLn "done!"

And actual output:

    Right 5
    Left throw a normal exception
    example1.hs: thread blocked indefinitely in an MVar operation

To clarify, that last line of output means that the program itself exited because a BlockedIndefinitelyOnMVar exception was thrown to the main thread and was not caught. Also, this is possibly the first time I've written an interesting code example for a blog post that uses only the base package :)

Synchronous vs asynchronous exceptions

There are two* ways an exception can be thrown in Haskell:

  • Synchronous exceptions are generated by the current thread. What's important about these is that we generally want to be able to recover from them. For example, if you try to read from a file, and the file doesn't exist, you may wish to use some default value instead of having your program exit, or perhaps prompt the user for a different file location.

  • Asynchronous exceptions are thrown by either a different user thread, or by the runtime system itself. For example, in the async package, race will kill the longer-running thread with an asynchronous exception. Similarly, the timeout function will kill an action which has run for too long. And, epic foreshadowment, the runtime system will kill threads which appear to be deadlocked on MVars or STM actions.

    In contrast to synchronous exceptions, we almost never want to recover from asynchronous exceptions. In fact, this is a common mistake in Haskell code, and from what I've seen has been the largest source of confusion and concern amongst users when it comes to Haskell's runtime exception system.

Alright, got that? Catch the synchronous ones and recover, never recover from asynchronous exceptions**.

* You could argue that creating an exception in a pure value, via functions like error and undefined, is a third category. We're going to ignore that for now.

** You may be wondering "why make it possible to catch async exceptions if we can't recover from them?" The answer is that we still want to be able to perform cleanup actions, like closing file handles. This is the topic of the aforementioned upcoming blog post.
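A concrete taste of the asynchronous case, using only base: System.Timeout.timeout runs an action with a time budget and kills it with an (internally thrown) async exception when the budget runs out:

```haskell
import Control.Concurrent (threadDelay)
import System.Timeout (timeout)

-- timeout kills the slow action via an asynchronous exception and
-- reports Nothing; the fast action finishes and we get Just its result.
demo :: IO (Maybe (), Maybe ())
demo = do
  slow <- timeout 100000 (threadDelay 1000000) -- 0.1 s budget, 1 s of work
  fast <- timeout 100000 (return ())
  return (slow, fast)
```

Running demo yields (Nothing, Just ()).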

Catching all exceptions

Quite a few years ago, I wrote a School of Haskell article on how to catch all exceptions. I'm not going to rehash that article here, since (1) the content is still current, and (2) I'll likely be sharing a new blog post in a few days with a newer approach. But the gist of it is that if we want to catch all synchronous exceptions in an action without catching asynchronous ones, we run that action in a separate thread, and then grab the result from that thread using a mutable variable. The SoH article uses the async package, which hides away some of the details. However, the core idea is easy enough to express as I did above:

    tryAny :: IO a -> IO (Either SomeException a)
    tryAny action = do
        var <- newEmptyMVar
        uninterruptibleMask_ $ forkIOWithUnmask $ \restore -> do
            res <- try $ restore action
            putMVar var res
        takeMVar var

Let's address some important points in the design of this function, since it has to step quite carefully around asynchronous exceptions:

  • We use forkIOWithUnmask to ensure that the newly created thread has asynchronous exceptions masked. This means that we will receive no async exceptions in the new thread, until we wrap an action with the restore function.
  • We restore the user's action, meaning async exceptions can be received there. But we wrap that restore action with a try, catching all synchronous and asynchronous exceptions.
  • Since outside of the try we still have all async exceptions masked, there's nothing preventing the putMVar call in the child thread from running. This means that, in all cases of the action terminating, the putMVar will ensure that the var is filled up.
  • The takeMVar in the parent thread will block until the child thread completes. And since we're guaranteed that the var will always be filled once the action terminates, we seem (once again, epic foreshadowment) to be guaranteed that the takeMVar call will always wait for action to somehow terminate, and once it's terminated will return either the exception it died with or the success result.
Deadlock protection

Alright, I've foreshadowed enough already, I'm guessing everyone's figured out where the problem is going to come from. Sure enough, if we look at the program I started this post with, we see that tryAny mayThrow3 >>= print does not catch the exception generated by mayThrow3, but instead itself dies from GHC's deadlock protection (BlockedIndefinitelyOnMVar). Simon Marlow explained the reason for this to me twice, once in his (really really good) book, and once when I raised issue #14 on async because I hadn't been smart enough to connect the dots from the book to my actual issue. As long as I'm linking so much to Github, you may also like seeing my pull request to work around issue #14.

Don't worry, I'm not going to make you read all of that. The summary is pretty simple: GHC's runtime system has very straightforward deadlock detection. When it finds a group of threads that are all blocked on blocking mutable variables (MVars or STM variables), and finds that no other threads have references to the variables in question, it determines that all of the threads are deadlocked, and sends all of them asynchronous exceptions (either BlockedIndefinitelyOnMVar or BlockedIndefinitelyOnSTM).
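The detector is easy to trigger deliberately. This tiny example (mine, not from the post) blocks on an MVar that no other thread can ever fill; the RTS notices and delivers BlockedIndefinitelyOnMVar, which we catch just to show it arrived:

```haskell
import Control.Concurrent (newEmptyMVar, takeMVar)
import Control.Exception (BlockedIndefinitelyOnMVar (..), try)

-- Nobody else holds a reference to var, so the takeMVar can never
-- succeed; the RTS throws BlockedIndefinitelyOnMVar at this thread.
blockForever :: IO (Either BlockedIndefinitelyOnMVar ())
blockForever = do
  var <- newEmptyMVar
  try (takeMVar var)
```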

So let's analyze tryAny mayThrow3 in excruciating detail. The series of events here is:

  1. Create a new empty MVar to allow the child thread to pass back the result of mayThrow3 to the parent thread.
  2. Fork a new child thread (yes, exceptions are masked, but that's not terribly important for us).
  3. Back in the parent thread: block waiting for that empty MVar to get filled.
  4. In the child thread, set up try to catch all exceptions, and then run the user's action (with async exceptions unmasked).
  5. As it happens, I carefully crafted mayThrow3 to deadlock on an MVar. So now the child thread can't make progress, and is fully deadlocked.

OK, the stage is set: the main thread is blocked on an MVar for the result. As I proved above, we know for a fact that this MVar will ultimately be filled with the result of the user action, once the user action completes or dies with an exception. However, GHC doesn't know this, it just sees the main thread blocked on an MVar action, and that only the main thread and child thread have references to that variable.

In the child thread, we've created a new MVar deadlock. This is a true and legitimate deadlock (if you don't believe me, go back and look at the implementation of mayThrow3). So GHC decides that both the main thread and the child thread are currently blocked on MVar actions, and there is no other thread that can unblock either of them. And GHC is correct. However, the next part is the unfortunate part:

GHC kills both threads with asynchronous exceptions

Why is this unfortunate? Because we know that if the child thread receives the async exception, it will exit, the result MVar will get filled, and the main thread can complete successfully. That's exactly the behavior we want, but not the behavior GHC is able to deliver to us.

The workaround

The workaround I'm about to demonstrate is not elegant, and it leaves a bad taste in my mouth. If anyone has a better idea, let me know. The idea is this: we know that there's no legitimate reason for the takeMVar inside tryAny to be killed with a BlockedIndefinitelyOnMVar, even if GHC doesn't know that. So we outsmart GHC by catching and ignoring that specific exception. And just in case our logic is wrong and there's a flawed implementation here, we only catch-and-ignore a fixed number of times (arbitrary choice: 10) to avoid creating a real deadlock.

I've added a commit for enclosed-exceptions that implements this behavior, but for the lazy, here's a simplified version of it:

    tryAny :: IO a -> IO (Either SomeException a)
    tryAny action = do
        var <- newEmptyMVar
        uninterruptibleMask_ $ forkIOWithUnmask $ \restore -> do
            res <- try $ restore action
            putMVar var res
        retryCount 10 (takeMVar var)
      where
        retryCount cnt0 takeAction = loop cnt0
          where
            loop 0 = takeAction
            loop cnt = takeAction `catch` \BlockedIndefinitelyOnMVar -> loop (cnt - 1)

Swallow the bit of vomit rising in the back of your throat, try this out in the original program, and see if it solves the issue. You should see the following output:

    Right 5
    Left throw a normal exception
    Left thread blocked indefinitely in an MVar operation
    done!

Note that the "thread blocked" has actually been caught, and the final done! line is actually printed.

Problem with STM-based tryAny

tryAny was already implemented safely in terms of the async package, which uses STM in place of MVars as we did here. Mitchell Rosen came up with a really convincing demonstration for why we shouldn't use STM for this purpose. To quote, with the program:

    #!/usr/bin/env stack
    -- stack --resolver lts-6.3 runghc --package yaml
    {-# LANGUAGE OverloadedStrings #-}
    import Control.Concurrent.STM
    import Data.Yaml

    main = do
        mval <- atomically (return $! decode "")
        print (mval :: Maybe Value)

We get the rather surprising output:

Control.Concurrent.STM.atomically was nested

The reason is that decode is calling a bunch of IO functions as FFI calls to the libyaml library, and is wrapped in unsafePerformIO to make the morally-pure YAML-decoding action not live in IO. Under the surface, it uses tryAny to avoid unexpected exceptions from breaking out of the unsafePerformIO dungeon it lives in. That's all well and good, but now if you use decode inside an atomically block, you have two layers of STM occurring, which is strictly forbidden.

Extra credit: Figure out why I needed to use $! to make this fail.

Our distributed computing problem

This problem arose about two months ago, and is far more complicated than the examples I've discussed until now. In fact, I've so far been unsuccessful at reproducing this with a minimal test case, which is definitely annoying. Nonetheless, I can explain what basically happened:

  1. Have a queue of work items to be performed, represented by a TChan
  2. Have one thread (filler) grabbing work items from somewhere (like a socket) and putting them on the TChan
  3. Have another thread (worker) grabbing work items and performing them
  4. The filler has logic to detect when no more items are incoming, and exits
  5. Run these two threads together using the race function, which guarantees that once the filler exits, the worker will be killed too

Unfortunately, this runs into the same problem I described above, at least in some circumstances. The thread calling race is blocked on an MVar under the surface, which filler and worker both have access to. filler completes, puts something in the MVar, and then disappears, losing its reference to the MVar and the TChan. Meanwhile, worker is still blocked waiting on the TChan, to which only it has access now, resulting in a deadlock condition.

In our application, we ended up with a situation where the worker thread was deadlocked on the TChan, and the thread calling race was blocked on the MVar. It's a slightly more complicated case than this though, since in the simple race case, the thread calling race doesn't actually get blocked on the MVar. Nonetheless, it's another interesting case where the deadlock prevention kills too many threads.

Fortunately for our application, there's a very simple - and more robust - solution, which I recommend to anyone doing this kind of concurrent programming. The problem was that our worker thread had no logic to terminate itself, and instead just waited to be killed by race. Instead, we switched over to a closeable TChan implementation, where the filler thread could explicitly close out the TChan, and then the worker would kill itself instead of waiting around for an async exception.
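A closeable channel along those lines can be sketched in a few lines of STM (this is my own minimal version, not the implementation we actually used):

```haskell
import Control.Concurrent.STM

-- A TChan paired with a closed flag. Readers get Nothing once the
-- channel is both closed and drained, instead of blocking forever.
data CChan a = CChan (TChan a) (TVar Bool)

newCChan :: STM (CChan a)
newCChan = CChan <$> newTChan <*> newTVar False

writeCChan :: CChan a -> a -> STM ()
writeCChan (CChan ch _) = writeTChan ch

closeCChan :: CChan a -> STM ()
closeCChan (CChan _ closed) = writeTVar closed True

-- Just an item while any remain; Nothing once closed and empty;
-- retries (blocks) while the channel is open but temporarily empty.
readCChan :: CChan a -> STM (Maybe a)
readCChan (CChan ch closed) = do
  mx <- tryReadTChan ch
  case mx of
    Just _  -> return mx
    Nothing -> do
      c <- readTVar closed
      if c then return Nothing else retry
```

With this, the worker loops on readCChan and exits cleanly on Nothing; no async exception is needed to shut it down.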

So to leave this blog post with some recommendations, and a small amount of extra foreshadowment (fourth time getting to use that word in one blog post!):

  • Explicit exit conditions are better than implicit ones
  • Avoid relying on async exceptions for control flow if you can
  • Relying on the deadlock detection to clean up your messes is a bad idea
  • And even if you don't want the deadlock detection to clean up your messes, it may try to do so for you badly anyway
Categories: Offsite Blogs

Problem binding GEGL library

haskell-cafe - Sun, 06/19/2016 - 2:48pm
Hello fellow haskellers, I am currently working on Haskell bindings to the GEGL [0] library. Since it's my first time working with the FFI and the whole idea of binding a library into Haskell, I have now run into a problem I can't resolve myself. The GEGL headers expose a function "gegl_buffer_iterator_new" which returns an iterator over a selected area in a buffer. This iterator contains a float pointer with pixel data. My problem is to make the iterator and the data it contains accessible to Haskell and to iterate over it. Can someone of you please help me with this problem? One of my initial thoughts was to wrap it into a monad, but I am totally inexperienced with that. My project can be found on my Github profile [1]. Many thanks in advance, nek0 links: [0]: [1]:
Categories: Offsite Discussion

Philip Wadler: CAPS: Cycling Action Plan for Scotland

Planet Haskell - Sun, 06/19/2016 - 11:12am

In 2010, the Scottish Government set out a 'vision' that 10% of all trips should be made by cycle and foot by 2020. In the recent Scottish parliamentary elections, the SNP maintained it is 'determined' to achieve this vision, but funding remains low, at 2% of the transport budget. But the government established CAPS, the Cycling Action Plan for Scotland, and its most recent report includes a heartening call to action.
CAPS 2016 Pre-requisites for Success: A successful modal shift to cycling requires that the following six pre-requisites be met:
  • A shared national vision for a 10% modal share of everyday journeys should remain, with a related clear aspiration for reduction in car use, especially for short journeys, by both national and local government.
  • A long term increase in sustained funding is required, with year-on-year increases over time towards a 10% allocation of national and council transport budgets as Edinburgh is achieving. The long term commitment to 2030 to dual carriageways between seven Scottish cities should be matched by an equally long term commitment to cycling if modal shift ambitions are to be met and sustained.
  • The national 10% modal share vision should be supported by local cycling strategies and delivery plans at council and regional levels. Local modal share objectives should be coordinated with the national vision to create a feasible route to 10%.
  • Cities will be the driver of significant modal shift and the national vision should be directly coordinated with a specific focus on reaching at least 10% modal share in the cities and the largest urban areas, implementing best practice.
  • The primary investment focus should be on enabling cycling through changing the physical environment for short journeys to enable anyone to cycle.
  • Government at all levels needs to build and maintain staff capacity to manage cycle infrastructure and the local road network in the present financial climate. 

  • Build and maintain dedicated cycling infrastructure, enabling people aged 8-80 to cycle on coherent cycle networks in cities and towns. This entails cohesive, comprehensive and seamless networks of on-road segregated paths in cities and, where appropriate, alongside trunk roads and busier local roads in rural areas. In the urban setting such networks will link into and incorporate existing off-road networks where they deliver direct and high quality routes. ‘Success’ should not only be measured in terms of additional kilometres of network but have a qualitative aspect, including following good practice design standards, numbers of segregated cycle lanes, and integration with public transport. Perceptions of safety and protection of non-motorised users - both of which must be tackled - will be enhanced by the introduction of measures such as 20 mph speed limits in urban settings.
  • Develop a long term communication plan that represents cycling as something that anyone can do, not just a minority, and as a transport mode that brings many benefits to Scotland: a healthier, less polluting nation, enjoying better public space, improved air quality and less congested streets. A continuous and consistent campaign would aim to win ‘the hearts and minds’ of the public about the benefits of cycling. Rather than about behaviour, or indeed about the environment, the main message for this campaign should be about improved quality of life for people.
  • Cycling for leisure, recreation and sport can be an essential gateway activity to enable more people to try and enjoy everyday utility cycling. The economic benefits of leisure and tourism-related cycling, especially in rural, suburban and remote areas, should not be overlooked. Programmes should reinforce different forms of cycling to maximise inclusiveness while never losing sight of the over-riding need for modal shift.
Categories: Offsite Blogs

Philip Wadler: Roseburn to Leith Walk

Planet Haskell - Sun, 06/19/2016 - 10:39am

I want the UK to stay in the EU, and I want Scotland to be an independent country. But it is hard for one person to have an impact on such major issues. I've decided on a more modest focus: improving cycling in Edinburgh and Scotland, and specifically building more segregated cycle paths.

Thanks to the work of many, including Spokes and Walk Cycle Vote, there is already progress. In 2012 Edinburgh City Council agreed to commit 5% of its transport budget to cycling, increasing by 1% a year until it reaches 10% next year. One of the first major outcomes is the proposed Roseburn to Leith Walk cycle route.

The proposal includes a detailed economic case, which is worth a look for anyone interested in the promotion of segregated cycling. It predicts that the estimated £6.3M cost will lead to an 88% increase in folk cycling in the affected regions, leading to £13.1M of savings from improvements to health, a £5.3M improvement to the GCP (Gross Cycling Product, e.g., increases in tourism and in sales of cycles), and £1.0M of benefits from reduced car usage (such as fewer accidents).
Benefits: Gross Cycling Product. Research suggests that cycling benefits the local economy, and a national study carried out by the London School of Economics (LSE) in 2010 concluded that each cyclist contributes a Gross Cycling Product (GCP) of £230 per year to the economy. This research was supported by a European wide study which found that cycling delivers wider economic benefits in terms of supporting jobs and driving tourism – with cycling having a greater employment intensity than any other transport sub-sector. Applying the findings of the LSE study to the forecast increase in cycling, the scheme will generate a GCP benefit of £5,753,218 over the 10 year scheme life.
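As a sanity check on the arithmetic: the £230-per-cyclist-per-year figure and the 10-year scheme life are from the proposal; the implied count of additional cyclists is derived here, not quoted from it.

```haskell
-- Back-of-envelope check of the proposal's GCP figure.
gcpBenefit :: Double -> Double -> Double
gcpBenefit cyclists years = 230 * cyclists * years

-- £5,753,218 over 10 years at £230/cyclist/year implies roughly
-- 2,500 additional everyday cyclists.
impliedCyclists :: Double
impliedCyclists = 5753218 / (230 * 10)

main :: IO ()
main = do
  print impliedCyclists
  print (gcpBenefit impliedCyclists 10)  -- recovers roughly 5753218
```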
Categories: Offsite Blogs

Compose :: Conference, Melbourne, 2016, Call for Presentations!

haskell-cafe - Sun, 06/19/2016 - 3:10am
(Web version: )

# Call for Presentations

Compose Melbourne - - is pleased to announce its call for presentations. Compose Melbourne is a new functional programming conference focused on developing the community and bringing typed functional programming to a wider audience. It is a 2-day event being held in Melbourne, Australia on the 29th and 30th of August 2016. The first day features a single track of presentations followed by a second day of workshops and an unconference. It is the new sister-conference of the NY based Compose Conference. Submit your presentation proposal here:

# Important Dates

- CFP Launch Party - 19th May
- CFP Opens - 19th May
- CFP Closes - June 30th
- Notify Presenters - July 15th
- Conference Day 1: Presentations - Monday, 29-Aug-2016
- Conference Day 2: Unconference - Tuesday, 30-Aug-2016

# Talk and Workshop Submission

## Day 1 - Presentat
Categories: Offsite Discussion

My responses to challenge problems in Bartosz Milewski's "Category Theory for Programmers", thru Ch. 10.

haskell-cafe - Sun, 06/19/2016 - 12:03am
Hi all, I just finished chapter 10 in Bartosz Milewski’s “Category Theory for Programmers”, and thought I’d share my responses to his challenge problems: I’d love to hear anyone’s feedback. Cheers, -db _______________________________________________ Haskell-Cafe mailing list Haskell-Cafe< at >
Categories: Offsite Discussion

ST Vector / STRef -- Performances and allocation

haskell-cafe - Sat, 06/18/2016 - 9:37pm
Hello. I read a reddit post [0] which ends in a debate about Haskell performance. Someone gives a C implementation which runs in 15ms on some dataset. I wanted to learn about mutable vectors and ST and got something in Haskell which runs in 50ms. Code is here [1]. [0] [1] When I'm profiling the code, I'm surprised to see more than 50% of the %alloc in functions which, from my point of view, must not allocate. For example, this function : inc :: forall s. STRef s Int -> ST s () inc v = modifySTRef' v (+1) is responsible for 10% of the allocation in this program. I was hoping that the "unboxing magic" of GHC might have replaced my Int somewhere by an unboxed one, but I guess the allocation means that the STRef is somehow pointing to a heap-allocated Int. Do you know a way to remove these allocations? As a second question, if you see any other way to improve thi
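The allocation the poster sees is the boxed Int behind the STRef: every modifySTRef' builds a fresh Int heap object. One standard fix for a hot counter is to drop the STRef and thread the count as a strict loop accumulator, which GHC can keep in a register; an unboxed mutable vector (e.g. Data.Vector.Unboxed.Mutable) is the usual alternative when real mutation is needed. A minimal sketch of the accumulator version:

```haskell
{-# LANGUAGE BangPatterns #-}

-- Counting with a strict accumulator instead of STRef s Int: the bang
-- pattern keeps acc evaluated, so GHC's worker/wrapper pass can unbox it
-- and the loop runs without per-step heap allocation.
countUp :: Int -> Int
countUp n = go 0 0
  where
    go !acc i
      | i >= n    = acc
      | otherwise = go (acc + 1) (i + 1)

main :: IO ()
main = print (countUp 1000000)  -- 1000000
```

This is a sketch of the general technique, not a drop-in rewrite of the linked program, which mutates a vector as well as the counter.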
Categories: Offsite Discussion

bit sets in IntSet vs. Integer

libraries list - Sat, 06/18/2016 - 2:59pm
In my set-cover package I make extensive use of bit vectors. I tested both IntSet and Integer for this purpose. In GHC-7.4.2 Integer was visibly faster than IntSet, but in GHC-7.8.4 and later, IntSet clearly wins over Integer. I guess this can be attributed to IntSet using BitMaps at the leaves since containers-0.5. However, on a 64-bit machine a BitMap is a Word64. I wondered whether we could get further speedup by using a vector type. E.g. AVX processors could perform bit manipulations on 256 bit words. Do the bit operations on Integer make use of vector instructions? _______________________________________________ Libraries mailing list Libraries< at >
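For reference, the kind of bit-vector usage being compared looks roughly like this with IntSet; the covers shape below is an assumption about what a set-cover check needs, not code taken from the set-cover package:

```haskell
import qualified Data.IntSet as IS

-- Sets of element indices; with containers >= 0.5 these are stored as
-- 64-bit BitMaps at the leaves, so union is word-parallel.
covers :: [IS.IntSet] -> IS.IntSet -> Bool
covers chosen universe = IS.unions chosen == universe

main :: IO ()
main = print (covers [IS.fromList [0,1], IS.fromList [2,3]]
                     (IS.fromList [0 .. 3]))  -- True
```

The equivalent Integer encoding would use setBit/testBit and (.|.); as far as I know, neither integer-gmp's limb operations nor IntSet's Word-level operations are auto-vectorised to AVX widths by GHC's native backend, so any answer here likely hinges on the LLVM backend or hand-written primitives.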
Categories: Offsite Discussion

Dynamic Witnesses for Static Type Errors (or, ill-typed programs usually go wrong)

Lambda the Ultimate - Sat, 06/18/2016 - 6:51am

Dynamic Witnesses for Static Type Errors (or, ill-typed programs usually go wrong) by Eric L. Seidel, Ranjit Jhala, Westley Weimer:

Static type errors are a common stumbling block for newcomers to typed functional languages. We present a dynamic approach to explaining type errors by generating counterexample witness inputs that illustrate how an ill-typed program goes wrong. First, given an ill-typed function, we symbolically execute the body to dynamically synthesize witness values that can make the program go wrong. We prove that our procedure synthesizes general witnesses in that if a witness is found, then for all inhabited input types, there exist values that can make the function go wrong. Second, we show how to extend the above procedure to produce a reduction graph that can be used to interactively visualize and debug witness executions. Third, we evaluate our approach on two data sets comprising over 4,500 ill-typed student programs. Our technique is able to generate witnesses for 88% of the programs, and our reduction graph yields small counterexamples for 81% of the witnesses.

Sounds like a great idea to make type systems more accessible, particularly for beginners. The current limitations are described in the discussion, section 54:

There are, of course, drawbacks to our approach. Four that stand out are: (1) coverage limits due to random generation, (2) the inability to handle certain instances of infinite types, (3) dealing with an explosion in the size of generated traces, and (4) handling ad-hoc polymorphism.

It's also not clear whether this would produce proper witnesses for violations of higher-kinded types or other more sophisticated uses of type systems. There are plenty of examples where invariants are encoded in types, e.g. lightweight static capabilities.

Categories: Offsite Discussion

Monad Stack - State + Rand?

haskell-cafe - Sat, 06/18/2016 - 5:22am
Hi. I'm working through "Haskell Design Patterns" and got inspired to try to create my first monad stack. What I really wanted though (not shown in the book) was to combine State and Rand. I daresay I got something to compile: walk :: RandomGen g => StateT (Float, Float) (Rand g) (Float, Float) walk = do (x, y) <- get put (x + 1, y + 1) get >>= return However, the moment I try to insert a getRandomR or something in it, I get an error Could not deduce (MonadRandom (StateT (Float, Float) (Rand g))) arising from a use of `getRandomR' <...snip...> add an instance declaration for (MonadRandom (StateT (Float, Float) (Rand g))) I see there are instances MonadRandom m => MonadRandom (StateT s m) RandomGen g => MonadRandom (Rand g) in Control.Monad.Random.Class, so I am not quite sure what is expected of me.
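One likely culprit is the RandomGen g constraint not propagating, or mismatched versions of the MonadRandom package. Rather than guess at that setup, here is a self-contained version of the same stack using only mtl, with a toy seed-threading inner State standing in for Rand (the LCG constants and the nextRand name are illustrative, not random-quality and not MonadRandom API):

```haskell
import Control.Monad.State

-- Walker position on top, PRNG seed underneath.
type Walk = StateT (Float, Float) (State Int)

-- Stand-in for getRandomR (0, 1): step a linear-congruential seed in the
-- inner State and scale it into [0, 1).
nextRand :: Walk Float
nextRand = lift $ do
  s <- get
  let s' = (1103515245 * s + 12345) `mod` 2147483648
  put s'
  return (fromIntegral s' / 2147483648)

walk :: Walk (Float, Float)
walk = do
  (x, y) <- get
  dx <- nextRand
  put (x + dx, y + 1)
  get

main :: IO ()
main = print (evalState (evalStateT walk (0, 0)) 42)
```

The structure is the same as StateT (Float, Float) (Rand g): the random effect lives one layer down, and lift (or the MonadRandom instance for StateT, once it resolves) carries it up to the walk code.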
Categories: Offsite Discussion

Macros For Cabal Version

haskell-cafe - Sat, 06/18/2016 - 12:14am
Hi all, I have a non-trivial build script that I'm trying to make compatible with Cabal 1.22 and Cabal 1.24. So far the only macro I can find that helps me determine which version I'm using is __GLASGOW_HASKELL__. Is there a better one? Thanks! -deech _______________________________________________ Haskell-Cafe mailing list Haskell-Cafe< at >
Categories: Offsite Discussion