News aggregator

[ANN] HacBerlin - Haskell Hackathon in Berlin, 26-28 Sep 2014

General haskell list - Fri, 08/29/2014 - 8:33am
Hi everyone, this is just a quick reminder: The Haskell Hackathon in Berlin is coming soon and there are still some places left. Please register now: http://goo.gl/aLfnWu

The first keynote is also fixed; it will be given by Andres Löh (http://www.andres-loeh.de/). Thanks, Andres!

Where: Berlin, Germany
When: Fri 26 - Sun 28 September 2014

Meet in Berlin, discuss, hack together and improve the Haskell infrastructure. We welcome all programmers interested in Haskell, beginners and experts! For all details, visit our wiki page (http://www.haskell.org/haskellwiki/HacBerlin2014) and make sure to register now!

Cheers, Stefan
Categories: Incoming News

Python's with statements in Haskell

Haskell on Reddit - Fri, 08/29/2014 - 6:13am

In Python there are context managers that define an __enter__ and an __exit__ method. An example would be:

    with open('someFile.file') as f:
        do_whatever_with_file(f)

This automatically opens the file, and closes it on leaving the block or when an error happens. These context managers can easily be implemented for anything that requires a similar mechanism.

Now I wanted to have this same abstraction in Haskell, but I couldn't think of a way to do it. I know there is bracket, but with bracket we have to explicitly specify what happens.

In Haskell we can use for this particular example withFile, but what I'm looking for is something more general, like this:
    with (openFile "foo.bar" ReadMode) (\file -> do whatever)

But because openFile "foo.bar" ReadMode has type IO Handle it is not distinguishable from say connectTo :: HostName -> PortID -> IO Handle which would need to be handled differently.
Does anyone have an idea how one could implement such a with statement in Haskell, or is it not possible?
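One way to sketch a generic with is to let the resource type, rather than the acquiring action, decide how cleanup happens. The Managed class below is hypothetical (not from any library); under the hood it is just bracket, so the release action runs even when the body throws:

```haskell
import Control.Exception (bracket)
import System.IO (Handle, IOMode (ReadMode), hClose, hGetLine, openFile)

-- Hypothetical class: each resource type knows how to release itself.
class Managed r where
  release :: r -> IO ()

instance Managed Handle where
  release = hClose

-- Python-style 'with': acquire, run the body, release on exit or exception.
with :: Managed r => IO r -> (r -> IO a) -> IO a
with acquire = bracket acquire release

main :: IO ()
main = do
  writeFile "with-test.tmp" "hello\n"
  -- Usage, mirroring the Python example:
  l <- with (openFile "with-test.tmp" ReadMode) hGetLine
  putStrLn l
```

Note that this dispatches on the resource type, so it cannot tell apart two actions that both yield a Handle but need different cleanup, which is exactly the ambiguity the post raises for openFile versus connectTo.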

submitted by Zinggi57
[link] [29 comments]
Categories: Incoming News

Antti-Juhani Kaijanaho (ibid): Licentiate Thesis is now publicly available

Planet Haskell - Fri, 08/29/2014 - 2:45am

My recently accepted Licentiate Thesis, which I posted about a couple of days ago, is now available in JyX.

Here is the abstract again for reference:

Kaijanaho, Antti-Juhani
The extent of empirical evidence that could inform evidence-based design of programming languages. A systematic mapping study.
Jyväskylä: University of Jyväskylä, 2014, 243 p.
(Jyväskylä Licentiate Theses in Computing,
ISSN 1795-9713; 18)
ISBN 978-951-39-5790-2 (nid.)
ISBN 978-951-39-5791-9 (PDF)
Finnish summary

Background: Programming language design is not usually informed by empirical studies. In other fields similar problems have inspired an evidence-based paradigm of practice. Central to it are secondary studies summarizing and consolidating the research literature.

Aims: This systematic mapping study looks for empirical research that could inform evidence-based design of programming languages.

Method: Manual and keyword-based searches were performed, as was a single round of snowballing. There were 2056 potentially relevant publications, of which 180 were selected for inclusion, because they reported empirical evidence on the efficacy of potential design decisions and were published on or before 2012. A thematic synthesis was created.

Results: Included studies span four decades, but activity has been sparse until the last five years or so. The form of conditional statements and loops, as well as the choice between static and dynamic typing have all been studied empirically for efficacy in at least five studies each. Error proneness, programming comprehension, and human effort are the most common forms of efficacy studied. Experimenting with programmer participants is the most popular method.

Conclusions: There clearly are language design decisions for which empirical evidence regarding efficacy exists; they may be of some use to language designers, and several of them may be ripe for systematic reviewing. There is concern that the lack of interest generated by studies in this topic area until the recent surge of activity may indicate serious issues in their research approach.

Keywords: programming languages, programming language design, evidence-based paradigm, efficacy, research methods, systematic mapping study, thematic synthesis

Categories: Offsite Blogs

Problem finding rewrite rules

haskell-cafe - Thu, 08/28/2014 - 9:40pm
Dear Cafe,

I'm currently looking at the optimization GHC is doing and I cannot find the rewrite rules it fires. When I run my test code with

    ghc -O2 -ddump-simpl-stats -ddump-rule-firings Main.hs

GHC shows the rules which are fired:

    ... Rule fired: Class op +
    ... Rule fired: +##

and so on. Nothing new, nothing special. However, where do I find the definitions of these rules? I grepped[1] the GHC code base and found nothing so far. I didn't find any documentation on it either. Can anyone point me to some place where I can find further information?

Thank you folks and have a nice day, Dominik

PS: Since I'm working on numerically stable code with directed rounding I'm only interested in these two particular rules. I suspect them to break parts of my code.

[1] http://jamie-wong.com/2013/07/12/grep-test
Categories: Offsite Discussion

How do neural nets in Haskell work?

Haskell on Reddit - Thu, 08/28/2014 - 7:59pm

So I'm really new to Haskell and functional programming in general, and I don't yet understand how Haskell libraries for neural networks can be efficient.

Since nothing can be changed, wouldn't you have to create an entirely new copy of a neural network every time you update some weights/threshold value for a single neuron? Wouldn't that be a lot slower than just overwriting the previous values?
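The usual answer to the copying worry is persistence: updating one entry in a tree-shaped structure copies only a logarithmic path and shares everything else with the old version. A minimal sketch with Data.Map from containers; the (Int, Int) edge encoding of weights is made up for illustration, and real libraries often use (mutable or unboxed) arrays instead:

```haskell
import qualified Data.Map.Strict as M

-- Hypothetical encoding: one weight per (from, to) edge of the network.
type Weights = M.Map (Int, Int) Double

-- "Copying" the network here is an O(log n) path copy, not a full copy:
-- the new map shares every untouched branch with the old one.
updateWeight :: (Int, Int) -> Double -> Weights -> Weights
updateWeight edge delta = M.adjust (+ delta) edge

main :: IO ()
main = do
  let ws  = M.fromList [((0, 1), 1.0), ((1, 2), 2.0)]
      ws' = updateWeight (0, 1) 0.5 ws
  print (M.toList ws')
```

For bulk updates (a whole training step), libraries typically thaw an immutable array to a mutable one, write in place, and freeze again, so immutability at the API level need not cost anything per weight.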

EDIT: Thanks everyone for all the resources!

submitted by ninklo
[link] [15 comments]
Categories: Incoming News

Discussion: Why is inet_addr in IO?

libraries list - Thu, 08/28/2014 - 3:41pm
inet_addr doesn't do any lookups; it is basically a glorified parsing problem, converting from a string to a host address.

http://hackage.haskell.org/package/network-2.6.0.1/docs/src/Network-Socket.html#inet_addr

Why, beyond the obvious implementation details, does it live in IO? Given that the current version lives in IO, should there be a version that is obviously pure, so that folks who want to use it directly can do so without having to bury it behind an unsafePerformIO? This isn't a concrete proposal at this time, but more an attempt to sound out whether such a proposal should be formed.

-Edward
Categories: Offsite Discussion

DepTrack, a library to express dependencies between objects.

Haskell on Reddit - Thu, 08/28/2014 - 3:33pm

https://github.com/lucasdicioccio/deptrack

Once, I had the question "how does the system restart if the power goes off?" One needs an exhaustive picture of all components of a system to answer such questions. Some people may fit everything in their brain; I prefer written documentation. Dare I say, I even prefer "executable" documentation.

I wrote the DepTrack library to easily model and graph dependencies between systems. DepTrack addresses two main requirements:

a) library users should attach the meaning they want to dependencies. For instance, if you want a picture, just use a string and some metadata for the label/shape of the nodes; if you want to troubleshoot the system, then add some "check-if-it-works" action of type IO Bool.

b) users should iterate using short edit-and-refine work cycles. This requirement implies that dependencies should compose well, even for heterogeneous objects.

Algebraic datatypes address Requirement a). Meanwhile, cheap data notations and the functional-programming idioms address Requirement b).

At this point, DepTrack exposes an Applicative interface to "wrap" data constructors around dependency annotations. Then, when evaluating a computation, DepTrack builds a tree by "nesting" dependencies along annotations. I use an early-and-unpolished monadic interface with the do-notation to show the PoC to my colleagues. I'll add a polished monadic interface soon. The boilerplate is light enough: notably, some non-haskeller colleagues found it "highly readable" and could review my modelizations. Another colleague even patched a modelization where I forgot a piece of a system (remember: they saw the do-notation).

DepTrack has some limitations. For instance, DepTrack cannot "evaluate" cycles. However, I have already been able to generate some quite involved models (thousands of nodes, with a mix of hardware/software) and some artistic pictures with Gephi.

I hope DepTrack will let you build great value (for you and your businesses). For instance, I found that expressing an application's dependencies without loops is enlightening about its "boot process", while picture outputs help to keep in mind what may go wrong.

submitted by lucasdicioccio
[link] [2 comments]
Categories: Incoming News

Functional Jobs: Senior Software Engineer (Functional) at McGraw-Hill Education (Full-time)

Planet Haskell - Thu, 08/28/2014 - 3:18pm

This Senior Software Engineer position is with the new LearnSmart team at McGraw-Hill Education's new and growing Research & Development center in Boston's Innovation District. We make software that helps college students study smarter, earn better grades, and retain more knowledge.

The LearnSmart adaptive engine powers the products in our LearnSmart Advantage suite — LearnSmart, SmartBook, LearnSmart Achieve, LearnSmart Prep, and LearnSmart Labs. These products provide a personalized learning path that continuously adapts course content based on a student’s current knowledge and confidence level.

On our team, you'll get to:

  • Move textbooks and learning into the digital era
  • Create software used by millions of students
  • Advance the state of the art in adaptive learning technology
  • Make a real difference in education

Our team's products are built with Flow, a functional language in the ML family. Flow lets us write code once and deliver it to students on multiple platforms and device types. Other languages in our development ecosystem include especially JavaScript, but also C++, SWF (Flash), and Haxe.

If you're interested in functional languages like Scala, Swift, Erlang, Clojure, F#, Lisp, Haskell, and OCaml, then you'll enjoy learning Flow. We don't require that you have previous experience with functional programming, only enthusiasm for learning it. But if you do have some experience with functional languages, so much the better! (On-the-job experience is best, but coursework, personal projects, and open-source contributions count too.)

We require only that you:

  • Have a solid grasp of CS fundamentals (languages, algorithms, and data structures)
  • Be comfortable moving between multiple programming languages
  • Be comfortable with modern software practices: version control (Git), test-driven development, continuous integration, Agile

Get information on how to apply for this position.

Categories: Offsite Blogs

"Hackathon"

haskell-cafe - Thu, 08/28/2014 - 3:06pm
Hi Cafe,

In my experience, "hackathon" can refer to two very different sorts of events: hacking marathons (such as jacobsHack), where participants tend to work overnight to accomplish something amazing in a limited time; and hacker weekends (such as Hac Phi), where participants work on projects, socialize, and then (presumably) rest at night. Both of these sorts of events have their place in the world, and I'm in no way suggesting one is "better" than the other. But I do think it would be good for all of us to name them differently, so folks know what they are signing up for. In particular, I'm worried that calling hacker weekends "hackathons" may discourage those of us with outside, inflexible commitments (e.g. kids; the need for 8 hours of sleep) from attending. Conversely, folks looking for the higher-energy environment of an all-night marathon might be disappointed to show up at a hacker weekend. What do you think? Is this distinction pointless? Would being consistent about this difference help? Here are
Categories: Offsite Discussion

What is the proper way to communicate with a web client via TCP using Haskell?

Haskell on Reddit - Thu, 08/28/2014 - 1:03pm

I've noticed the biggest socket.io binding on Github has only 14 stars and the example code doesn't compile. The websockets library is more popular, but it is more complicated to use (I have no idea, to be honest - a minimalistic server/client example like socket.io's would help a lot) and there is also the problem that many clients don't support websockets (socket.io can fallback to HTTP).

So, what is the right way to deal with it?

submitted by SrPeixinho
[link] [6 comments]
Categories: Incoming News

Call to arms for Haskell students

haskell-cafe - Thu, 08/28/2014 - 11:51am
Dear Haskellers,

My (previous) university is organizing one of the first university hackathons in Europe. I am participating and would like to have a Haskell team for the hackathon. However, I don't know any Haskellers around, or any who can travel to Bremen. Let me know if anyone is interested in participating and teaming up with me. If it helps, there are quite inexpensive flights to Bremen. The hackathon is only for students, though. Quoted is the formal invite.

Best regards, Ernesto

It is about time to bring hackathons to European universities. A few
Categories: Offsite Discussion

Douglas M. Auclair (geophf): Dylan: the harsh realities of the market

Planet Haskell - Thu, 08/28/2014 - 10:06am
So, this is a little case study.

I did everything for Dylan. And when I say everything, I mean everything. Here's my résumé:


  • I got excited about Dylan as a user, and I used it. I bought an old Mac that I don't ever remember the designation for, it's so '90's old, and got the floppies for the Dylan IDE from Apple research.
I'm not joking.
  • I integrated Dylan into my work at work, building an XML parser then open-sourcing it to the community under the (then) non-restrictive license. I think mine was the only XML parser that was industrial-strength for Dylan. Can't claim originality: I ported over the Common-LISP one, but it was a lot of (fun) work.
  • I made improvements to the gwydion-dylan compiler, including some library documentation (you can see my name right there, right in the compiler code), including some library functionality, did I work on the compiler itself? The Dylan syntax extensions or type system? I don't recall; if not in those places, I know I've looked at those guts: I had my fingers all over parts of the compiler.
I was in the Dylan compiler code. For you ll-types ('little language') that's no big deal.
But ask a software developer in industry if they've ever been in their compiler code. I have, too: I've found bugs in Java Sun-compiler that I fixed locally and reported up the chain.
  • I taught a course at our community college on Dylan. I had five students from our company that made satellite mission software.
  • I effing had commercial licenses bought when the boss asked me: what do we have to do to get this (my system) done/integrated into the build. I put my job on the line, for Dylan. ... The boss bought the licenses: he'd rather spend the $x than spending six weeks to back-port down to Java or C++.
  • I built a rule-based man-power scheduling system that had previously taken three administrative assistants three days each quarter to generate. My system did it, and printed out a PDF, in less than one second. I sold it, so that means I started a commercial company and sold my software.
I sold commercial Dylan software. That I wrote. Myself. And sold. Because people bought it. Because it was that good.
Hells yeah.
Question: what more could I have done?
I kept Dylan alive for awhile. In industry. For real.
So why is Dylan dead?
That's not the question.
Or, that question is answered over and over and over again.
Good languages, beautiful languages, right-thing languages languish and die all the time.
Dylan was the right-thing, and they (Apple) killed it in the lab, and for a reason.
Who is Dylan for?
That's not the question either. Because you get vague, general, useless answers.
The question is to ask it like Paul Graham answered it for LISP.
Lisp is a pointless, useless, weird language that nobody uses.
But Paul and his partner didn't care. They didn't give a ...
Something.
... what anybody else thought. They knew that this language, the language they loved, was built and designed and made for them. Just them and only them, because the only other people who were using it were college kids on comp.lang.lisp asking for the answers for problem-set 3 on last night's homework.
That's what Lisp was good for: nothing.
That's who Lisp was good for: nobody.
Same exact scenario for Erlang. Exactly the same. Erlang was only good for Joe Armstrong and a couple of buddies/weirdos like him, you know: kooks, who believed that Erlang was the right-thing for what they were doing, because they were on a mission, see, and nothing nobody could say could stop them nor stand against them, and all who would rise up against them would fall.
All.
What made Lisp and Haskell and Erlang and Scala and Prolog (yes, Prolog, although you'll never hear that success story publicly, but $26M and three lives saved? Because of a Prolog system I wrote? And that's just one day in one month's report for data? I call that a success) work when nobody sane would say that these things would work?
Well, it took a few crazy ones to say, no, not: 'say' that it would work, but would make it work with their beloved programming language come hell or high water or, worse: indifferent silence, or ridicule, or pity from the rest of the world.
That is the lesson of perl and python and all these other languages. They're not good for anything. They suck. And they suck in libraries and syntax and semantics and weirdness-factor and everything.
But two, not one, but at least two people loved that language enough to risk everything, and ...
They lost.
Wait. What?
Did you think I was going to paint the rosy picture and lie to you and say 'they won'?
Because they didn't.
Who uses Lisp commercially? Or Haskell, except some fringers, or Scala or Clojure or Erlang or Smalltalk or Prolog
... or Dylan.
These languages are defined, right there in the dictionary.
Erlang: see 'career wrecker.'
Nobody uses those languages nor admits to even touching them with a 10-foot (3-meter) pole. I had an intern from college. 'Yeah, we studied this weird language called ML in Comp.sci. Nobody uses it.'
She was blown away when I started singing ML's praises and what it can do.
A meta-language, and she called it useless? Seriously?
Because that's what the mainstream sees.
Newsflash. I'm sorry. Dylan, Haskell, Idris: these aren't main-stream, and they never will be.
Algebraic types? Dependent types? You'll never see them. They're too ... research-y. They stink of academe, which is: they stink of uselessness-to-industry. You'll be dead and buried to see them in this form, even after they discover the eternity elixir. Sorry.
Or you'll see them in Visual Basic as a new Type-class form that only a few Microserfs use because they happened to have written those extensions. Everybody else?
Nah.
Here's how Dylan will succeed, right now.
Bruce and I will put our heads together, start a company, and we'll code something. Not for anybody else to use and to love and to cherish, just for us, only for us, and it will blow out the effing doors, and we'll be bought out for $40M because our real worth is $127M.
And the first thing that Apple will do, after they bought us, is to show us the door, then convert the code into Java. Or Swift. Or Objective-C, or whatever.
And that's how we'll win.
Not the $40M. Not the lecture series on 'How to Make Functional Programming Work in Industry for Real' afterwards at FLoC and ICFP conferences with fan-bois and -girls wanting to talk to us afterwards and ask us how they can get a job doing functional programming.
Not that.
We'll win because we made something in Dylan, and it was real, and it worked, and it actually did something for enough people that we can now go to our graves knowing that we did something once with our lives (and we can do it again and again, too: there's no upper limit on the successes you're allowed to have, people) that meant something to some bodies. And we did that. With Dylan.
Nyaah!
I've done that several times already, by my counting: the Prolog project, the Dylan project, the Mercury project, and my writing.
I'm ready to do that, again.
Because, actually, fundamentally, doing something in this world and for it ... there's nothing like it.
You write that research paper, and I come up to you, waving it in your face, demanding you implement your research because I need it to do my job in Industry?
I've done that to three professors so far. Effing changed their world-view in that moment. "What?" they said, to a person, "somebody actually wants to use this?" The look of bemused surprise on their faces?

It was sad, actually, because they did write something that somebody out there (moiself) needed, but they never knew that what they were doing meant something.

And it did.
Effing change your world-view. Your job? Your research? Your programming language?
That's status quo, and that's good and necessary and dulce and de leche (or decorum, I forget which).
But get up out of the level you're at, and do something with it so that that other person, slouched in their chair, sits up and takes notice, and a light comes over their face and they say, 'Ooh! That does that? Wow!' and watch their world change, because of you and what you've done.
Dylan is for nothing and for nobody.
So is everything under the Sun, my friend.
Put your hand to the plow, and with the sweat of your brow, make it yours for this specific thing.
Regardless of the long hours, long months of unrewarded work, and regardless of the hecklers, naysayers, and concerned friends and parents, and regardless of the mountain of unpaid bills.
You make it work, and you don't stop until it does.
That's how I've won.
Every time.
Categories: Offsite Blogs

Well-Typed.Com: Dealing with Asynchronous Exceptions during Resource Acquisition

Planet Haskell - Thu, 08/28/2014 - 9:48am
Introduction

Consider the following code: we open a socket, compute with it, and finally close the socket again. The computation happens inside an exception handler (try), so even when an exception happens we still close the socket:

example1 :: (Socket -> IO a) -> IO a
example1 compute = do -- WRONG
    s <- openSocket
    r <- try $ compute s
    closeSocket s
    case r of
      Left ex -> throwIO (ex :: SomeException)
      Right a -> return a

Although this code correctly deals with synchronous exceptions–exceptions that are the direct result of the execution of the program–it does not deal correctly with asynchronous exceptions–exceptions that are raised as the result of an external event, such as a signal from another thread. For example, in

example2 :: (Socket -> IO a) -> IO (Maybe a)
example2 compute = timeout someTimeout $ example1 compute

it is possible that the timeout signal arrives after we have opened the socket but before we have installed the exception handler (or indeed, after we leave the scope of the exception handler but before we close the socket). In order to address this we have to control precisely where asynchronous exceptions can and cannot be delivered:

example3 :: (Socket -> IO a) -> IO a
example3 compute = mask $ \restore -> do
    s <- openSocket
    r <- try $ restore $ compute s
    closeSocket s
    case r of
      Left ex -> throwIO (ex :: SomeException)
      Right a -> return a

We mask asynchronous exceptions, and then restore them only inside the scope of the exception handler. This very common pattern is captured by the higher level combinator bracket, and we might rewrite the example as

example4 :: (Socket -> IO a) -> IO a
example4 = bracket openSocket closeSocket

Allowing asynchronous exceptions during resource acquisition

Suppose that we wanted to define a derived operation that opens a socket and performs some kind of handshake with the server on the other end:

openHandshake :: IO Socket
openHandshake = do
    mask $ \restore -> do
      s <- openSocket
      r <- try $ restore $ handshake s
      case r of
        Left ex  -> closeSocket s >> throwIO (ex :: SomeException)
        Right () -> return s

(These and the other examples can be defined in terms of bracket and similar, but we use mask directly so that it’s easier to see what is happening.) We might use openHandshake as follows:

example5 :: (Socket -> IO a) -> IO a
example5 compute = do
    mask $ \restore -> do
      s <- openHandshake
      r <- try $ restore $ compute s
      closeSocket s
      case r of
        Left ex -> throwIO (ex :: SomeException)
        Right a -> return a

There are no resource leaks in this code, but there is a different problem: we call openHandshake with asynchronous exceptions masked. Although openHandshake calls restore before doing the handshake, restore restores the masking state to that of the enclosing context. Hence the handshake with the server cannot be timed out. This may not be what we want–we may want to be able to interrupt example5 with a timeout either during the handshake or during the argument computation.

Note that this is not a solution:

example6 :: (Socket -> IO a) -> IO a
example6 compute = do
    mask $ \restore -> do
      s <- restore openHandshake -- WRONG
      r <- try $ restore $ compute s
      closeSocket s
      case r of
        Left ex -> throwIO (ex :: SomeException)
        Right a -> return a

Consider what might happen: if an asynchronous exception is raised after openHandshake returns the socket, but before we leave the scope of restore, the asynchronous exception will be raised and the socket will be leaked. Installing an exception handler does not help: since we don’t have a handle on the socket, we cannot release it.

Interruptible operations

Consider this definition from the standard libraries:

withMVar :: MVar a -> (a -> IO b) -> IO b
withMVar m io =
    mask $ \restore -> do
      a <- takeMVar m
      b <- restore (io a) `onException` putMVar m a
      putMVar m a
      return b

This follows almost exactly the same pattern as the examples we have seen so far; we mask asynchronous exceptions, take the contents of the MVar, and then execute some operation io with the contents of the MVar, finally putting the contents of the MVar back when the computation completes or when an exception is raised.

An MVar acts as a lock, with takeMVar taking the role of acquiring the lock. This may, of course, take a long time if the lock is currently held by another thread. But we call takeMVar with asynchronous exceptions masked. Does this mean that the takeMVar cannot be timed out? No, it does not: takeMVar is a so-called interruptible operation. From the Asynchronous Exceptions in Haskell paper:

Any operation which may need to wait indefinitely for a resource (e.g., takeMVar) may receive asynchronous exceptions even within an enclosing block, but only while the resource is unavailable. Such operations are termed interruptible operations. (..) takeMVar behaves atomically when enclosed in block. The takeMVar may receive asynchronous exceptions right up until the point when it acquires the MVar, but not after.

(block has been replaced by mask since the publication of the paper, but the principle is the same.) Although the existence of interruptible operations makes understanding the semantics of mask harder, they are necessary: like in the previous section, wrapping takeMVar in restore is not safe. If we really want to mask asynchronous exceptions, even across interruptible operations, Control.Exception offers uninterruptibleMask.
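This behaviour is easy to observe with timeout: even under mask_, a takeMVar that blocks on an empty MVar can still be interrupted. (The name demo is ours, not from the article.)

```haskell
import Control.Concurrent.MVar (MVar, newEmptyMVar, takeMVar)
import Control.Exception (mask_)
import System.Timeout (timeout)

-- Even under mask_, takeMVar on an empty MVar is interruptible,
-- so the surrounding timeout can still fire while we block.
demo :: IO Bool
demo = do
  m <- newEmptyMVar :: IO (MVar ())
  r <- timeout 100000 $ mask_ $ takeMVar m  -- wait at most 0.1s
  return (r == Nothing)                     -- True: the takeMVar was interrupted

main :: IO ()
main = demo >>= print
```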

Custom interruptible operations

So an interruptible operation is one that can be interrupted by an asynchronous exception even when asynchronous exceptions are masked. Can we define our own interruptible operations? Yes, we can:

-- | Open a socket and perform handshake with the server
--
-- Note: this is an interruptible operation.
openHandshake' :: IO Socket
openHandshake' = mask_ $ do
    s <- openSocket
    r <- try $ unsafeUnmask $ handshake s
    case r of
      Left ex  -> closeSocket s >> throwIO (ex :: SomeException)
      Right () -> return s

unsafeUnmask is defined in GHC.IO, and unmasks asynchronous exceptions, no matter what the enclosing context is. This is of course somewhat dangerous, because now calling openHandshake' inside a mask suddenly opens up the possibility of an asynchronous exception being raised; and the only way to know is to look at the implementation of openHandshake', or its Haddock documentation. This is somewhat unsatisfactory, but exactly the same goes for takeMVar and any other interruptible operation, or any combinator that uses an interruptible operation under the hood. A sad state of affairs, perhaps, but one that we don’t currently have a better solution for.

Actually, using unsafeUnmask is a bit too crude. Control.Exception does not export it, but does export

allowInterrupt :: IO ()
allowInterrupt = unsafeUnmask $ return ()

with documentation

When invoked inside mask, this function allows a blocked asynchronous exception to be raised, if one exists. It is equivalent to performing an interruptible operation, but does not involve any actual blocking.

When called outside mask, or inside uninterruptibleMask, this function has no effect.

(emphasis mine.) Sadly, this documentation does not reflect the actual semantics: unsafeUnmask, and as a consequence allowInterrupt, unmasks asynchronous exceptions no matter what the enclosing context is: even inside uninterruptibleMask. We can however define our own operator to do this:

interruptible :: IO a -> IO a
interruptible act = do
    st <- getMaskingState
    case st of
      Unmasked              -> act
      MaskedInterruptible   -> unsafeUnmask act
      MaskedUninterruptible -> act

where we call unsafeUnmask only if the enclosing context is mask, but not if it is uninterruptibleMask (TODO: What is the semantics when we nest these two?). We can use it as follows to define a better version of openHandshake:

-- | Open a socket and perform handshake with the server
--
-- Note: this is an interruptible operation.
openHandshake' :: IO Socket
openHandshake' = mask_ $ do
    s <- openSocket
    r <- try $ interruptible $ handshake s
    case r of
      Left ex  -> closeSocket s >> throwIO (ex :: SomeException)
      Right () -> return s

Resource allocation timeout

If we wanted to timeout the allocation of the resource only, we might do

example7 :: (Socket -> IO a) -> IO a
example7 compute = do
    mask $ \restore -> do
      ms <- timeout someTimeout $ openHandshake'
      case ms of
        Nothing -> throwIO (userError "Server busy")
        Just s  -> do
          r <- try $ restore $ compute s
          closeSocket s
          case r of
            Left ex -> throwIO (ex :: SomeException)
            Right a -> return a

Exceptions are masked when we enter the scope of the timeout, and are unmasked only once we are inside the exception handler in openHandshake'–in other words, if a timeout happens, we are guaranteed to clean up the socket. The surrounding mask is however necessary. For example, suppose we are writing some unit tests and we are testing openHandshake'. This is wrong:

example8 :: IO ()
example8 = do
    ms <- timeout someTimeout $ openHandshake'
    case ms of
      Just s  -> closeSocket s
      Nothing -> return ()

Even if we are sure that example8 will not be interrupted by asynchronous exceptions, there is still a potential resource leak here: the timeout exception might be raised just after we leave the mask_ scope from openHandshake' but just before we leave the timeout scope. If we are sure we don’t need to worry about other asynchronous exceptions we can write

example8 :: IO ()
example8 = do
    ms <- mask_ $ timeout someTimeout $ openHandshake'
    case ms of
      Just s  -> closeSocket s
      Nothing -> return ()

although of course it might be better to simply write

example8 :: IO ()
example8 =
    bracket (timeout someTimeout $ openHandshake')
            (\ms -> case ms of
                      Just s  -> closeSocket s
                      Nothing -> return ())
            (\_ -> return ())

Conclusions

Making sure that resources are properly deallocated in the presence of asynchronous exceptions is difficult. It is very important to make sure that asynchronous exceptions are masked at crucial points; unmasking them at the point of calling a resource allocation function is not safe. If you nevertheless want to be able to timeout resource allocation, you need to make your resource allocation function interruptible.

For completeness’ sake, there are some other solutions that avoid the use of unsafeUnmask. One option is to thread the restore argument through (and compose multiple restore arguments if there are multiple nested calls to mask). This requires resource allocations to have a different signature, however, and it is very error prone: a single mask somewhere along the call chain where we forget to thread through the restore argument will mean the code is no longer interruptible. The other option is to run the code that we want to be interruptible in a separate thread, and wait for the thread to finish with, for example, a takeMVar. Getting this right is however no easy task, and it doesn’t change anything fundamentally anyway: rather than using unsafeUnmask we are now using a primitive interruptible operation; either way we introduce the possibility of exceptions even in the scope of mask_.
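The first alternative, threading the restore argument through, can be sketched as follows. The socket API here consists of dummy stand-ins (the article never shows these implementations), and the names openHandshakeR and example9 are ours:

```haskell
{-# LANGUAGE RankNTypes #-}

import Control.Exception (SomeException, mask, throwIO, try)

-- Dummy stand-ins for the article's socket API:
data Socket = Socket

openSocket :: IO Socket
openSocket = return Socket

closeSocket :: Socket -> IO ()
closeSocket _ = return ()

handshake :: Socket -> IO ()
handshake _ = return ()

-- The allocation function takes the caller's 'restore' explicitly,
-- so the handshake is interruptible in the caller's masking context:
openHandshakeR :: (forall x. IO x -> IO x) -> IO Socket
openHandshakeR restore = do
    s <- openSocket
    r <- try $ restore $ handshake s
    case r of
      Left ex  -> closeSocket s >> throwIO (ex :: SomeException)
      Right () -> return s

example9 :: (Socket -> IO a) -> IO a
example9 compute = mask $ \restore -> do
    s <- openHandshakeR restore
    r <- try $ restore $ compute s
    closeSocket s
    case r of
      Left ex -> throwIO (ex :: SomeException)
      Right a -> return a

main :: IO ()
main = example9 (\_ -> return (42 :: Int)) >>= print
```

As the text warns, this is error prone: one mask anywhere in the call chain that forgets to pass its restore along silently makes the allocation uninterruptible again.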

Finally, when your application does not fit the bracket pattern we have been using (implicitly or explicitly), you may want to have a look at resourcet and pipes or conduit, or my talk Lazy I/O and Alternatives in Haskell.

Categories: Offsite Blogs

Haskell Weekly News: Issue 303

General haskell list - Thu, 08/28/2014 - 6:40am
Welcome to issue 303 of the HWN, an issue covering crowd-sourced bits of information about Haskell from around the web. This issue covers from August 17 to 23, 2014.

Quotes of the Week

  * monochrom: "point free" can be decomposed to: "point" refers to ".", "free" refers to using no "$". :)

Top Reddit Stories

  * λ Bubble Pop!
    Domain: chrisuehlinger.com, Score: 97, Comments: 41
    Original: [1] http://goo.gl/hVQq2F
    On Reddit: [2] http://goo.gl/OQWXK2
  * The fundamental problem of programming language package management
    Domain: blog.ezyang.com, Score: 82, Comments: 54
    Original: [3] http://goo.gl/fWmA0P
    On Reddit: [4] http://goo.gl/PfJbY0
  * How Programming language subreddits talk (including Haskell)
    Domain: github.com, Score: 72, Comments: 31
    Original: [5] http://goo.gl/2Ef0tB
    On Reddit: [6] http://goo.gl/KpTH74
  * A fast, generic and type-safe image processing library written in Haskell
    Domain: hackage.haskell.org, Score: 59, Comm
Categories: Incoming News

haskell on hadoop

Haskell on Reddit - Thu, 08/28/2014 - 5:48am

has anyone used haskell on hadoop? What were your experiences? I found hadron which looks pretty nice. (Here's a nice talk on hadron that was posted here a while back.)

Edit: I also need hbase bindings, but it looks like hbase-haskell hasn't been touched for a while

submitted by ludflu
[link] [9 comments]
Categories: Incoming News

Would it be plausible to script games in Haskell on top of a C/C++ engine?

Haskell on Reddit - Thu, 08/28/2014 - 2:04am

Just a random thought of mine I wanted to get an opinion on. I currently use C and Python for video game programming, but my favorite language is Haskell. Would something like this work well?

submitted by ProbablyALinuxUser
[link] [35 comments]
Categories: Incoming News