In Python there are context managers, which define an __enter__ and an __exit__ method. An example would be:

    with open('someFile.file') as f:
        do_whatever_with_file(f)
This automatically opens the file and closes it again on leaving the block or when an error happens. Context managers can easily be implemented for anything that requires a similar mechanism.
Now I wanted to have this same abstraction in Haskell, but I couldn't think of a way to do it. I know there is bracket, but with bracket we have to specify the cleanup action explicitly.
In Haskell we can use withFile for this particular example, but what I'm looking for is something more general, like this:

    with (openFile "foo.bar" ReadMode) (\file -> do whatever)
But because openFile "foo.bar" ReadMode has type IO Handle, it is not distinguishable from, say, connectTo :: HostName -> PortID -> IO Handle, which would need to be cleaned up differently.
Does anyone have an idea how one could implement such a with statement in Haskell, or is it not possible?
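For what it's worth, one way to sketch this (the names Managed, with, and managedFile below are made up for the sketch, not a standard library API) is to pair each acquisition action with its cleanup action, so the cleanup travels with the acquire rather than being inferred from the result type:

```haskell
import Control.Exception (bracket)
import System.IO (Handle, IOMode (ReadMode), hClose, hGetLine, openFile)

-- A resource is an acquire action bundled with its matching cleanup.
data Managed a = Managed { acquire :: IO a, cleanup :: a -> IO () }

-- 'with' is just bracket, but the caller no longer picks the cleanup:
-- it was fixed at the point where the Managed value was constructed.
with :: Managed a -> (a -> IO b) -> IO b
with (Managed acq rel) = bracket acq rel

-- Two different ways to obtain a Handle can now carry different cleanups,
-- even though both acquisitions have type IO Handle.
managedFile :: FilePath -> IOMode -> Managed Handle
managedFile path mode = Managed (openFile path mode) hClose
```

A connectTo-based Managed Handle would be built the same way, with whatever shutdown logic it needs; the ambiguity the question points out disappears because the cleanup is chosen when the Managed value is built, not recovered from the IO Handle type.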
[link] [29 comments]
Here is the abstract again for reference:
The extent of empirical evidence that could inform evidence-based design of programming languages. A systematic mapping study.
Jyväskylä: University of Jyväskylä, 2014, 243 p.
(Jyväskylä Licentiate Theses in Computing,
ISSN 1795-9713; 18)
ISBN 978-951-39-5790-2 (nid.)
ISBN 978-951-39-5791-9 (PDF)
Background: Programming language design is not usually informed by empirical studies. In other fields similar problems have inspired an evidence-based paradigm of practice. Central to it are secondary studies summarizing and consolidating the research literature. Aims: This systematic mapping study looks for empirical research that could inform evidence-based design of programming languages. Method: Manual and keyword-based searches were performed, as was a single round of snowballing. There were 2056 potentially relevant publications, of which 180 were selected for inclusion, because they reported empirical evidence on the efficacy of potential design decisions and were published on or before 2012. A thematic synthesis was created. Results: Included studies span four decades, but activity has been sparse until the last five years or so. The form of conditional statements and loops, as well as the choice between static and dynamic typing have all been studied empirically for efficacy in at least five studies each. Error proneness, programming comprehension, and human effort are the most common forms of efficacy studied. Experimenting with programmer participants is the most popular method. Conclusions: There clearly are language design decisions for which empirical evidence regarding efficacy exists; they may be of some use to language designers, and several of them may be ripe for systematic reviewing. There is concern that the lack of interest generated by studies in this topic area until the recent surge of activity may indicate serious issues in their research approach.
Keywords: programming languages, programming language design, evidence-based paradigm, efficacy, research methods, systematic mapping study, thematic synthesis
So I'm really new to Haskell and functional programming in general, and I don't yet understand how Haskell libraries for neural networks can be efficient.
Since nothing can be changed, wouldn't you have to create an entirely new copy of a neural network every time you update some weights/threshold value for a single neuron? Wouldn't that be a lot slower than just overwriting the previous values?
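As a rough illustration (a toy representation, not any particular library's API): purely functional updates don't copy the whole network, only the parts along the path to the change; everything else is shared between the old and new versions. In practice, numeric libraries additionally store weights in unboxed arrays and update a whole layer at a time, or mutate in place inside ST/IO.

```haskell
-- Toy network: a list of layers, each layer a list of weights.
-- (Real libraries use unboxed vectors/matrices; lists keep the sketch simple.)
type Layer   = [Double]
type Network = [Layer]

-- One gradient step on a single layer: builds one new layer.
sgdLayer :: Double -> Layer -> Layer -> Layer
sgdLayer lr ws gs = zipWith (\w g -> w - lr * g) ws gs

-- Updating layer i rebuilds only the list spine up to i;
-- every other layer is shared with the old network, not copied.
updateLayer :: Int -> (Layer -> Layer) -> Network -> Network
updateLayer _ _ []       = []
updateLayer 0 f (l : ls) = f l : ls
updateLayer i f (l : ls) = l : updateLayer (i - 1) f ls
```

So "an entirely new copy" is never made: the new network shares all untouched layers with the old one. When even that is too slow, libraries drop down to mutable unboxed arrays (e.g. Data.Vector.Unboxed.Mutable, hmatrix) inside ST or IO.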
EDIT: Thanks everyone for all the resources!

submitted by ninklo
[link] [15 comments]
Once, I had the question: "how does the system restart if the power goes off?" One needs an exhaustive picture of all components of a system to answer such questions. Some people may fit everything in their brain; I prefer written documentation. Dare I say, I even prefer "executable" documentation.
I wrote the DepTrack library to easily model and graph dependencies between systems. DepTrack addresses two main requirements:
a) library users should be able to attach whatever meaning they want to dependencies. For instance, if you want a picture, just use a string and some metadata for the label/shape of the nodes; if you want to troubleshoot the system, then add some "check-if-it-works" IO Bool.
b) users should iterate using short edit-and-refine work cycles. This requirement implies that dependencies should compose well, even for heterogeneous objects.
Algebraic datatypes address requirement a). Meanwhile, cheap data notations and functional-programming idioms address requirement b).
At this point, DepTrack exposes an Applicative interface to "wrap" data constructors around dependency annotations. Then, when evaluating a computation, DepTrack builds a tree by "nesting" dependencies along annotations. I use an early-and-unpolished monadic interface with the do-notation to show the PoC to my colleagues. I'll add a polished monadic interface soon. The boilerplate is light enough: notably, some non-haskeller colleagues found it "highly readable" and could review my modelizations. Another colleague even patched a modelization where I forgot a piece of a system (remember: they saw the do-notation).
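To make the idea concrete, here is a toy sketch of such an Applicative interface (this is not DepTrack's actual API, just an illustration of the shape): a computation that builds a value while recording dependency annotations on the side.

```haskell
-- Toy sketch, not DepTrack's real API: a value together with the list of
-- dependency annotations collected while constructing it.
newtype Dep a = Dep ([String], a)

runDep :: Dep a -> ([String], a)
runDep (Dep x) = x

instance Functor Dep where
  fmap f (Dep (ds, a)) = Dep (ds, f a)

instance Applicative Dep where
  pure a = Dep ([], a)
  Dep (ds, f) <*> Dep (es, a) = Dep (ds ++ es, f a)

-- Wrap a plain value in a dependency annotation.
declare :: String -> a -> Dep a
declare name a = Dep ([name], a)

-- Heterogeneous pieces compose with ordinary Applicative combinators.
data Service = Service { host :: String, port :: Int } deriving Show

service :: Dep Service
service = Service <$> declare "dns:example.org" "example.org"
                  <*> declare "firewall:443" 443
```

runDep service yields both the constructed Service and the flat list of annotations, which is the raw material for building a dependency graph; the nesting the post describes would need a richer annotation type than a plain list.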
DepTrack has some limitations. For instance, DepTrack cannot "evaluate" cycles. However, I have already been able to generate some quite involved models (thousands of nodes, with a mix of hardware and software) and some artistic pictures with Gephi.
I hope DepTrack will let you build great value (for you and your businesses). For instance, I found that expressing an application's dependencies without loops is enlightening about its "boot process", while the picture outputs help to keep in mind what may go wrong.

submitted by lucasdicioccio
[link] [2 comments]
This Senior Software Engineer position is with the new LearnSmart team at McGraw-Hill Education's new and growing Research & Development center in Boston's Innovation District. We make software that helps college students study smarter, earn better grades, and retain more knowledge.
The LearnSmart adaptive engine powers the products in our LearnSmart Advantage suite — LearnSmart, SmartBook, LearnSmart Achieve, LearnSmart Prep, and LearnSmart Labs. These products provide a personalized learning path that continuously adapts course content based on a student’s current knowledge and confidence level.
On our team, you'll get to:
- Move textbooks and learning into the digital era
- Create software used by millions of students
- Advance the state of the art in adaptive learning technology
- Make a real difference in education
If you're interested in functional languages like Scala, Swift, Erlang, Clojure, F#, Lisp, Haskell, and OCaml, then you'll enjoy learning Flow. We don't require that you have previous experience with functional programming, only enthusiasm for learning it. But if you do have some experience with functional languages, so much the better! (On-the-job experience is best, but coursework, personal projects, and open-source contributions count too.)
We require only that you:
- Have a solid grasp of CS fundamentals (languages, algorithms, and data structures)
- Be comfortable moving between multiple programming languages
- Be comfortable with modern software practices: version control (Git), test-driven development, continuous integration, Agile
Get information on how to apply for this position.
I've noticed the biggest socket.io binding on GitHub has only 14 stars and the example code doesn't compile. The websockets library is more popular, but it is more complicated to use (I have no idea, to be honest; a minimalistic server/client example like socket.io's would help a lot), and there is also the problem that many clients don't support WebSockets (socket.io can fall back to HTTP).
So, what is the right way to deal with it?

submitted by SrPeixinho
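For reference, a minimal echo server with the websockets package looks roughly like this (a sketch from memory of its API; runEcho and reply are my own names, and the port is arbitrary):

```haskell
{-# LANGUAGE OverloadedStrings #-}
import qualified Data.Text as T
import qualified Network.WebSockets as WS

-- Pure reply logic, kept separate so it can be tested without a network.
reply :: T.Text -> T.Text
reply msg = T.append "echo: " msg

-- Accept each pending connection and echo every text message back.
app :: WS.ServerApp
app pending = do
  conn <- WS.acceptRequest pending
  let loop = do
        msg <- WS.receiveData conn
        WS.sendTextData conn (reply msg)
        loop
  loop

runEcho :: IO ()
runEcho = WS.runServer "127.0.0.1" 9160 app
```

A browser client can then connect with plain new WebSocket("ws://127.0.0.1:9160"). The fallback to HTTP polling (socket.io's big selling point) is a separate problem; the Haskell side would have to speak socket.io's own protocol for that.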
[link] [6 comments]
I did everything for Dylan. And when I say everything, I mean everything. Here's my resumé:
- I got excited about Dylan as a user, and I used it. I bought an old Mac that I can't even remember the designation for, it's so '90s-old, and got the floppies for the Dylan IDE from Apple research.
- I integrated Dylan into my work at work, building an XML parser then open-sourcing it to the community under the (then) non-restrictive license. I think mine was the only XML parser that was industrial-strength for Dylan. Can't claim originality: I ported over the Common-LISP one, but it was a lot of (fun) work.
- I made improvements to the gwydion-dylan compiler, including some library documentation (you can see my name right there, right in the compiler code), including some library functionality, did I work on the compiler itself? The Dylan syntax extensions or type system? I don't recall; if not in those places, I know I've looked at those guts: I had my fingers all over parts of the compiler.
But ask a software developer in industry if they've ever been in their compiler's code. I have, too: I've found bugs in Sun's Java compiler that I fixed locally and reported up the chain.
- I taught a course at our community college on Dylan. I had five students from our company that made satellite mission software.
- I effing had commercial licenses bought when the boss asked me: what do we have to do to get this (my system) done/integrated into the build. I put my job on the line, for Dylan. ... The boss bought the licenses: he'd rather spend the $x than spending six weeks to back-port down to Java or C++.
- I built a rule-based man-power scheduling system for a task that had previously taken three administrative assistants three days each quarter. My system did it, and printed out a PDF, in less than one second. I sold it, so that means I started a commercial company and sold my software.
Question: what more could I have done?
I kept Dylan alive for a while. In industry. For real.
So why is Dylan dead?
That's not the question.
Or, that question is answered over and over and over again.
Good languages, beautiful languages, right-thing languages languish and die all the time.
Dylan was the right-thing, and they (Apple) killed it in the lab, and for a reason.
Who is Dylan for?
That's not the question either. Because you get vague, general, useless answers.
The question is to ask it like Paul Graham answered it for LISP.
Lisp is a pointless, useless, weird language that nobody uses.
But Paul and his partner didn't care. They didn't give a ...
... what anybody else thought. They knew that this language, the language they loved, was built and designed and made for them. Just them and only them, because the only other people who were using it were college kids on comp.lang.lisp asking for the answers for problem-set 3 on last night's homework.
That's what Lisp was good for: nothing. That's who Lisp was good for: nobody.
Same exact scenario for Erlang. Exactly the same. Erlang was only good for Joe Armstrong and a couple of buddies/weirdos like him, you know: kooks, who believed that Erlang was the right-thing for what they were doing, because they were on a mission, see, and nothing nobody could say could stop them nor stand against them, and all who would rise up against them would fall.
What made Lisp and Haskell and Erlang and Scala and Prolog (yes, Prolog, although you'll never hear that success story publicly, but $26M and three lives saved? Because of a Prolog system I wrote? And that's just one day in one month's report for data? I call that a success) work when nobody sane would say that these things would work?
Well, it took a few crazy ones to say, no, not: 'say' that it would work, but would make it work with their beloved programming language come hell or high water or, worse: indifferent silence, or ridicule, or pity from the rest of the world.
That is the lesson of perl and python and all these other languages. They're not good for anything. They suck. And they suck in libraries and syntax and semantics and weirdness-factor and everything.
But two, not one, but at least two people loved that language enough to risk everything, and ...
Did you think I was going to paint the rosy picture and lie to you and say 'they won'?
Because they didn't.
Who uses Lisp commercially? Or Haskell, except some fringers, or Scala or Clojure or Erlang or Smalltalk or Prolog
... or Dylan.
These languages are defined, right there in the dictionary.
Erlang: see 'career wrecker.'
Nobody uses those languages nor admits to even touching them with a 10-foot (3-meter) pole. I had an intern from college. 'Yeah, we studied this weird language called ML in Comp.sci. Nobody uses it.'
She was blown away when I started singing ML's praises and what it can do.
A meta-language, and she called it useless? Seriously?
Because that's what the mainstream sees.
Newsflash. I'm sorry. Dylan, Haskell, Idris: these aren't main-stream, and they never will be.
Algebraic types? Dependent types? You'll never see them. They're too ... research-y. They stink of academe, which is: they stink of uselessness-to-industry. You'll be dead and buried to see them in this form, even after they discover the eternity elixir. Sorry.
Or you'll see them in Visual Basic as a new Type-class form that only a few Microserfs use because they happened to have written those extensions. Everybody else?
Here's how Dylan will succeed, right now.
Bruce and I will put our heads together, start a company, and we'll code something. Not for anybody else to use and to love and to cherish, just for us, only for us, and it will blow out the effing doors, and we'll be bought out for $40M because our real worth is $127M.
And the first thing that Apple will do, after they bought us, is to show us the door, then convert the code into Java. Or Swift. Or Objective-C, or whatever.
And that's how we'll win.
Not the $40M. Not the lecture series on 'How to Make Functional Programming Work in Industry for Real' afterwards at FLoC and ICFP conferences with fan-bois and -girls wanting to talk to us afterwards and ask us how they can get a job doing functional programming.
We'll win because we made something in Dylan, and it was real, and it worked, and it actually did something for enough people that we can now go to our graves knowing that we did something once with our lives (and we can do it again and again, too: there's no upper limit on the successes you're allowed to have, people) that meant something to some bodies. And we did that. With Dylan.
I've done that several times already, by my counting: the Prolog project, the Dylan project, the Mercury project, and my writing.
I'm ready to do that, again.
Because, actually, fundamentally, doing something in this world and for it ... there's nothing like it.
You write that research paper, and I come up to you, waving it in your face, demanding you implement your research because I need it to do my job in Industry?
I've done that to three professors so far. Effing changed their world-view in that moment. "What?" they said, to a person, "somebody actually wants to use this?" The look of bemused surprise on their faces?
It was sad, actually, because they did write something that somebody out there (moiself) needed, but they never knew that what they were doing meant something.
And it did.
Effing change your world-view. Your job? Your research? Your programming language?
That's status quo, and that's good and necessary and dulce and de leche (or decorum, I forget which).
But get up out of the level you're at, and do something with it so that that other person, slouched in their chair, sits up and takes notice, and a light comes over their face and they say, 'Ooh! That does that? Wow!' and watch their world change, because of you and what you've done.
Dylan is for nothing and for nobody.
So is everything under the Sun, my friend.
Put your hand to the plow, and with the sweat of your brow, make it yours for this specific thing.
Regardless of the long hours, long months of unrewarded work, and regardless of the hecklers, naysayers, and concerned friends and parents, and regardless of the mountain of unpaid bills.
You make it work, and you don't stop until it does.
That's how I've won.
Consider the following code: we open a socket, compute with it, and finally close the socket again. The computation happens inside an exception handler (try), so even when an exception happens we still close the socket:

    example1 :: (Socket -> IO a) -> IO a
    example1 compute = do -- WRONG
        s <- openSocket
        r <- try $ compute s
        closeSocket s
        case r of
          Left ex -> throwIO (ex :: SomeException)
          Right a -> return a
Although this code correctly deals with synchronous exceptions (exceptions that are the direct result of the execution of the program), it does not deal correctly with asynchronous exceptions (exceptions that are raised as the result of an external event, such as a signal from another thread). For example, in

    example2 :: (Socket -> IO a) -> IO (Maybe a)
    example2 compute = timeout someTimeout $ example1 compute
it is possible that the timeout signal arrives after we have opened the socket but before we have installed the exception handler (or indeed, after we leave the scope of the exception handler but before we close the socket). In order to address this we have to control precisely where asynchronous exceptions can and cannot be delivered:

    example3 :: (Socket -> IO a) -> IO a
    example3 compute =
        mask $ \restore -> do
          s <- openSocket
          r <- try $ restore $ compute s
          closeSocket s
          case r of
            Left ex -> throwIO (ex :: SomeException)
            Right a -> return a
We mask asynchronous exceptions, and then restore them only inside the scope of the exception handler. This very common pattern is captured by the higher-level combinator bracket, and we might rewrite the example as

    example4 :: (Socket -> IO a) -> IO a
    example4 = bracket openSocket closeSocket

Allowing asynchronous exceptions during resource acquisition
Suppose that we wanted to define a derived operation that opens a socket and performs some kind of handshake with the server on the other end:

    openHandshake :: IO Socket
    openHandshake =
        mask $ \restore -> do
          s <- openSocket
          r <- try $ restore $ handshake s
          case r of
            Left ex -> closeSocket s >> throwIO (ex :: SomeException)
            Right () -> return s
(These and the other examples can be defined in terms of bracket and similar, but we use mask directly so that it's easier to see what is happening.) We might use openHandshake as follows:

    example5 :: (Socket -> IO a) -> IO a
    example5 compute =
        mask $ \restore -> do
          s <- openHandshake
          r <- try $ restore $ compute s
          closeSocket s
          case r of
            Left ex -> throwIO (ex :: SomeException)
            Right a -> return a
There are no resource leaks in this code, but there is a different problem: we call openHandshake with asynchronous exceptions masked. Although openHandshake calls restore before doing the handshake, restore restores the masking state to that of the enclosing context. Hence the handshake with the server cannot be timed out. This may not be what we want: we may want to be able to interrupt example5 with a timeout either during the handshake or during the argument computation.
Note that this is not a solution:

    example6 :: (Socket -> IO a) -> IO a
    example6 compute =
        mask $ \restore -> do
          s <- restore openHandshake -- WRONG
          r <- try $ restore $ compute s
          closeSocket s
          case r of
            Left ex -> throwIO (ex :: SomeException)
            Right a -> return a
Consider what might happen: if an asynchronous exception is raised after openHandshake returns the socket, but before we leave the scope of restore, the asynchronous exception will be raised and the socket will be leaked. Installing an exception handler does not help: since we don't have a handle on the socket, we cannot release it.

Interruptible operations
Consider this definition from the standard libraries:

    withMVar :: MVar a -> (a -> IO b) -> IO b
    withMVar m io =
        mask $ \restore -> do
          a <- takeMVar m
          b <- restore (io a) `onException` putMVar m a
          putMVar m a
          return b
This follows almost exactly the same pattern as the examples we have seen so far; we mask asynchronous exceptions, take the contents of the MVar, and then execute some operation io with the contents of the MVar, finally putting the contents of the MVar back when the computation completes or when an exception is raised.
An MVar acts as a lock, with takeMVar taking the role of acquiring the lock. This may, of course, take a long time if the lock is currently held by another thread. But we call takeMVar with asynchronous exceptions masked. Does this mean that the takeMVar cannot be timed out? No, it does not: takeMVar is a so-called interruptible operation. From the Asynchronous Exceptions in Haskell paper:

    Any operation which may need to wait indefinitely for a resource
    (e.g., takeMVar) may receive asynchronous exceptions even within an
    enclosing block, but only while the resource is unavailable. Such
    operations are termed interruptible operations. (..) takeMVar behaves
    atomically when enclosed in block. The takeMVar may receive
    asynchronous exceptions right up until the point when it acquires the
    MVar, but not after.
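The quoted behaviour is easy to observe. In the sketch below (assuming only base; interruptedUnderMask is my own name), a thread blocks in takeMVar under mask_; killThread still interrupts it, precisely because a blocked takeMVar is interruptible:

```haskell
import Control.Concurrent
  (MVar, forkIO, killThread, newEmptyMVar, putMVar, takeMVar, threadDelay)
import Control.Exception (SomeException, catch, mask_)

-- Returns the message the forked thread finished (or died) with.
interruptedUnderMask :: IO String
interruptedUnderMask = do
  never <- newEmptyMVar :: IO (MVar ())  -- never filled: takeMVar blocks forever
  done  <- newEmptyMVar
  tid   <- forkIO $
    mask_ (takeMVar never >> putMVar done "completed")
      `catch` \e -> putMVar done (show (e :: SomeException))
  threadDelay 100000  -- give the thread time to block in takeMVar
  killThread tid      -- async ThreadKilled: delivered despite mask_
  takeMVar done
```

Running interruptedUnderMask yields "thread killed": the asynchronous ThreadKilled exception was delivered while takeMVar was blocked, even inside mask_. Had the thread been doing ordinary non-blocking work under mask_, killThread would have had to wait until the mask was left.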
(block has been replaced by mask since the publication of the paper, but the principle is the same.) Although the existence of interruptible operations makes understanding the semantics of mask harder, they are necessary: as in the previous section, wrapping takeMVar in restore is not safe. If we really want to mask asynchronous exceptions, even across interruptible operations, Control.Exception offers uninterruptibleMask.

Custom interruptible operations
So an interruptible operation is one that can be interrupted by an asynchronous exception even when asynchronous exceptions are masked. Can we define our own interruptible operations? Yes, we can:

    -- | Open a socket and perform handshake with the server
    --
    -- Note: this is an interruptible operation.
    openHandshake' :: IO Socket
    openHandshake' =
        mask_ $ do
          s <- openSocket
          r <- try $ unsafeUnmask $ handshake s
          case r of
            Left ex -> closeSocket s >> throwIO (ex :: SomeException)
            Right () -> return s
unsafeUnmask is defined in GHC.IO, and unmasks asynchronous exceptions, no matter what the enclosing context is. This is of course somewhat dangerous, because now calling openHandshake' inside a mask suddenly opens up the possibility of an asynchronous exception being raised; and the only way to know is to look at the implementation of openHandshake', or its Haddock documentation. This is somewhat unsatisfactory, but exactly the same goes for takeMVar and any other interruptible operation, or any combinator that uses an interruptible operation under the hood. A sad state of affairs, perhaps, but one that we don’t currently have a better solution for.
Actually, using unsafeUnmask is a bit too crude. Control.Exception does not export it, but does export

    allowInterrupt :: IO ()
    allowInterrupt = unsafeUnmask $ return ()
    When invoked inside mask, this function allows a blocked asynchronous
    exception to be raised, if one exists. It is equivalent to performing
    an interruptible operation, but does not involve any actual blocking.
    When called outside mask, or inside uninterruptibleMask, this function
    has no effect.
(emphasis mine.) Sadly, this documentation does not reflect the actual semantics: unsafeUnmask, and as a consequence allowInterrupt, unmasks asynchronous exceptions no matter what the enclosing context is, even inside uninterruptibleMask. We can however define our own operator to do this:

    interruptible :: IO a -> IO a
    interruptible act = do
      st <- getMaskingState
      case st of
        Unmasked              -> act
        MaskedInterruptible   -> unsafeUnmask act
        MaskedUninterruptible -> act
where we call unsafeUnmask only if the enclosing context is mask, but not if it is uninterruptibleMask (TODO: What is the semantics when we nest these two?). We can use it as follows to define a better version of openHandshake:

    -- | Open a socket and perform handshake with the server
    --
    -- Note: this is an interruptible operation.
    openHandshake' :: IO Socket
    openHandshake' =
        mask_ $ do
          s <- openSocket
          r <- try $ interruptible $ handshake s
          case r of
            Left ex -> closeSocket s >> throwIO (ex :: SomeException)
            Right () -> return s

Resource allocation timeout
If we wanted to timeout the allocation of the resource only, we might do

    example7 :: (Socket -> IO a) -> IO a
    example7 compute =
        mask $ \restore -> do
          ms <- timeout someTimeout $ openHandshake'
          case ms of
            Nothing -> throwIO (userError "Server busy")
            Just s  -> do
              r <- try $ restore $ compute s
              closeSocket s
              case r of
                Left ex -> throwIO (ex :: SomeException)
                Right a -> return a
Exceptions are masked when we enter the scope of the timeout, and are unmasked only once we are inside the exception handler in openHandshake'; in other words, if a timeout happens, we are guaranteed to clean up the socket. The surrounding mask is however necessary. For example, suppose we are writing some unit tests and we are testing openHandshake'. This is wrong:

    example8 :: IO ()
    example8 = do
      ms <- timeout someTimeout $ openHandshake'
      case ms of
        Just s  -> closeSocket s
        Nothing -> return ()
Even if we are sure that example8 will not be interrupted by asynchronous exceptions, there is still a potential resource leak here: the timeout exception might be raised just after we leave the mask_ scope from openHandshake' but just before we leave the timeout scope. If we are sure we don't need to worry about other asynchronous exceptions we can write

    example8 :: IO ()
    example8 = do
      ms <- mask_ $ timeout someTimeout $ openHandshake'
      case ms of
        Just s  -> closeSocket s
        Nothing -> return ()
although of course it might be better to simply write

    example8 :: IO ()
    example8 =
        bracket (timeout someTimeout $ openHandshake')
                (\ms -> case ms of Just s  -> closeSocket s
                                   Nothing -> return ())
                (\_ -> return ())

Conclusions
Making sure that resources are properly deallocated in the presence of asynchronous exceptions is difficult. It is very important to make sure that asynchronous exceptions are masked at crucial points; unmasking them at the point of calling a resource allocation function is not safe. If you nevertheless want to be able to timeout resource allocation, you need to make your resource allocation function interruptible.
For completeness’ sake, there are some other solutions that avoid the use of unsafeUnmask. One option is to thread the restore argument through (and compose multiple restore arguments if there are multiple nested calls to mask). This requires resource allocations to have a different signature, however, and it is very error prone: a single mask somewhere along the call chain where we forget to thread through the restore argument will mean the code is no longer interruptible. The other option is to run the code that we want to be interruptible in a separate thread, and wait for the thread to finish with, for example, a takeMVar. Getting this right is however no easy task, and it doesn’t change anything fundamentally anyway: rather than using unsafeUnmask we are now using a primitive interruptible operation; either way we introduce the possibility of exceptions even in the scope of mask_.
Finally, when your application does not fit the bracket pattern we have been using (implicitly or explicitly), you may want to have a look at resourcet and pipes or conduit, or my talk Lazy I/O and Alternatives in Haskell.
Edit: I also need HBase bindings, but it looks like hbase-haskell hasn't been touched for a while.

submitted by ludflu
[link] [9 comments]
Just a random thought of mine I wanted to get an opinion on. I currently use C and Python for video game programming, but my favorite language is Haskell. Would something like this work well?

submitted by ProbablyALinuxUser
[link] [35 comments]