When I was at ICFP last week, it became clear that I had made a huge mistake in the past three years. A few of us were talking, including Erik de Castro Lopo, and when I mentioned that he was the original inspiration for creating the conduit package, everyone else was surprised. So firstly: Erik, I apologize for not making it clear that you initially kicked off development by finding some fun corner cases in enumerator that were difficult to debug.
So to rectify that, I think it's only fair that I write the following:
- conduit is entirely Erik's fault.
- If you love conduit, write Erik a thank you email.
- More importantly, if you hate conduit, there's no need to complain to me anymore. Erik presumably will be quite happy to receive all such further communications.
- In other words, it's not my company, I just work here.
Thanks Erik :)
UPDATE: Please also read my follow-up blog post clarifying this one, just in case you're confused.
If you're interested in helping someone become a better Haskell programmer, I would appreciate a code review along with feedback & criticism. Thanks in advance!
Compute the most depended on packages in Hackage by requesting a list of all packages along with their cabal files. For each cabal file, get a list of unique dependencies. Count these up to determine how many packages depend on a particular package. Sort.
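The counting-and-sorting step described above might look something like the following minimal sketch. It assumes the cabal files have already been fetched and reduced to one list of dependency names per package; the function name countDeps and the sample input are my own, not from the actual program.

```haskell
import Data.List (nub, sortBy)
import Data.Ord (Down (..), comparing)
import qualified Data.Map.Strict as Map

-- Given each package's list of dependencies, count how many packages
-- depend on each one, most-depended-on first.
countDeps :: [[String]] -> [(String, Int)]
countDeps =
      sortBy (comparing (Down . snd))
    . Map.toList
    . Map.fromListWith (+)
    . map (\dep -> (dep, 1))
    . concatMap nub          -- nub guards against duplicate entries in one file

main :: IO ()
main = print $ countDeps
    [ ["base", "containers"]
    , ["base", "text"]
    , ["base"]
    ]   -- prints [("base",3),("containers",1),("text",1)]
```
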
I am comfortable with Haskell, and really enjoy it, but to be shamefully honest it took me 4 hours to do this. I think it is because I am not familiar with the libraries -- a large portion of the time was spent reading documentation and library code & tests (to see how to use the library!). Any advice on how to be more efficient? Two things I have started doing are trying to use command-line hoogle and haskell-mode more often, and growing a "cheat sheet", which is just a super dense collection of functions and their types. I write this out by hand.
Another thing I am wary of is that Haskell is beautiful, but when I try to write something 'real' or 'productive' in it, I generally hack my way through and end up with... not-so-beautiful code. Any advice on how to grow or build great Haskell code from the beginning?
I have the Control.Concurrent.Async code commented out because, when I have ulimit -n set high enough, I eventually hit this problem: file descriptor 1024 out of range for select (0--1024). Recompile with -threaded to work around this.

But when I compile with -threaded, the program quickly hits this notorious issue: getAddrInfo: does not exist (Name or service not known)

Google and friends will tell you that you need 'withSocketsDo' on Windows. But I am on a Linux machine, and I cannot figure out how else to debug this. I have seen this error for a bad / ill-formed URL, but I don't think that is the case here, because the serial map runs fine.
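One common cause of both failure modes is simply having too many connections and DNS lookups in flight at once. A bounded worker pool keeps the open-descriptor count under control; here is a hedged sketch using only base (QSem plus MVars -- the name mapPooled is mine, and async's mapConcurrently could play a similar role with an external limit):

```haskell
import Control.Concurrent (forkIO, newEmptyMVar, putMVar, takeMVar)
import Control.Concurrent.QSem (newQSem, signalQSem, waitQSem)
import Control.Exception (bracket_)

-- Run the actions with at most n in flight at a time, returning
-- results in input order.  (Real code should also catch and
-- rethrow exceptions from the workers.)
mapPooled :: Int -> (a -> IO b) -> [a] -> IO [b]
mapPooled n f xs = do
    sem  <- newQSem n
    vars <- mapM (spawn sem) xs
    mapM takeMVar vars
  where
    spawn sem x = do
        v <- newEmptyMVar
        _ <- forkIO $ bracket_ (waitQSem sem) (signalQSem sem)
                               (f x >>= putMVar v)
        return v

main :: IO ()
main = mapPooled 4 (return . (* 2)) [1 .. 10 :: Int] >>= print
```

With a pool size of a few hundred or less, neither the select limit nor the resolver should be overwhelmed.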
Is there any advantage to using a streaming library (pipes or conduit) to construct this code?
As far as performance goes, it appears to be okay (besides the parallelism trouble described above):

time cabal run +RTS -s
Preprocessing executable 'hackage-mining' for hackage-mining-0.1.0.0...
[("base",6533),("bytestring",2249),("containers",2223),("mtl",1817),("text",1270),("transformers",1155),("directory",1032),("filepath",969),("time",818),("array",687)]

   1,941,217,280 bytes allocated in the heap
     157,663,360 bytes copied during GC
       9,957,384 bytes maximum residency (13 sample(s))
         182,664 bytes maximum slop
              27 MB total memory in use (0 MB lost due to fragmentation)

                                    Tot time (elapsed)  Avg pause  Max pause
  Gen  0      3728 colls,     0 par    0.18s    0.18s     0.0000s    0.0008s
  Gen  1        13 colls,     0 par    0.10s    0.10s     0.0079s    0.0127s

  TASKS: 4 (1 bound, 3 peak workers (3 total), using -N1)
  SPARKS: 0 (0 converted, 0 overflowed, 0 dud, 0 GC'd, 0 fizzled)

  INIT    time    0.00s  (   0.00s elapsed)
  MUT     time    0.81s  (2205.36s elapsed)
  GC      time    0.28s  (   0.28s elapsed)
  EXIT    time    0.00s  (   0.00s elapsed)
  Total   time    1.09s  (2205.64s elapsed)

  Alloc rate    2,401,538,990 bytes per MUT second
  Productivity  74.2% of total user, 0.0% of total elapsed

gc_alloc_block_sync: 0
whitehole_spin: 0
gen.sync: 0
gen.sync: 0

real	36m45.647s
user	0m47.507s
sys	0m10.817s

submitted by brooksbp
I need to do two tasks, and I'm searching for Haskell libraries to:
Convert HTML documents in the wild to a DOM structure using the HTML5 parsing algorithm. This rules out xmlhtml, but apparently -- going by a 2010 discussion -- TagSoup qualifies as an HTML5 parser. Is this accurate?
Run CSS selectors on the resulting DOM. The selectors package from Hackage seems to qualify, but it appears to be based on xml-conduit; I'm not sure how that compares to TagSoup (or whether they can be used together), and it seems to be about XML parsing, not HTML5. There's also dom-selector, and, what seems more promising, HXT combined with HandsomeSoup.
TagSoup can be interfaced with HXT using hxt-tagsoup. Is this the way to go? Should I use some other combination of libraries?
The code at the HandsomeSoup Github looks good and does what I want:

import Text.XML.HXT.Core
import Text.HandsomeSoup

main = do
    let doc = fromUrl "http://www.google.com/search?q=egon+schiele"
    links <- runX $ doc >>> css "h3.r a" ! "href"
    mapM_ putStrLn links
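If I understand hxt-tagsoup correctly, the switch mostly amounts to parsing the document with the TagSoup backend instead of HXT's built-in parser. An untested sketch of the same HandsomeSoup example (this assumes withTagSoup from Text.XML.HXT.TagSoup can be passed to readDocument in place of HandsomeSoup's fromUrl; fetching an http URL additionally needs an HTTP backend such as hxt-curl):

```haskell
import Text.XML.HXT.Core
import Text.XML.HXT.TagSoup (withTagSoup)   -- from the hxt-tagsoup package
import Text.HandsomeSoup

main :: IO ()
main = do
    -- withTagSoup swaps in the lazy TagSoup parser for this document
    let doc = readDocument [withTagSoup]
                           "http://www.google.com/search?q=egon+schiele"
    links <- runX $ doc >>> css "h3.r a" ! "href"
    mapM_ putStrLn links
```
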
Is it easy to switch it to use hxt-tagsoup? (Or rather, would it make sense?)

submitted by protestor
So I'm a beginner with Haskell. I'm currently working on a school assignment in Haskell and have gotten stuck on number 5:

I haven't found anything on how to do it, so I would love any help I can get.

I don't want you to do it for me; I just want some help so I can solve it on my own.

And something on assignment 5.1 would be great too.

*edit: Cleaned up a bit and added a link to the assignment.

submitted by Fassticman
Without further ado, here's the HIW 2014 YouTube playlist (kindly provided by Malcolm Wallace).
Can someone explain to me why it is impossible to introspect into a function at runtime in Haskell?

submitted by felipeZ
It’s been very quiet on the blog these past few months, not because I’m spending less time on functional programming but precisely for the opposite reason. Since January I’ve been working together with Richard Eisenberg to extend his singletons library. This work was finished in June, and last Friday I gave a talk about our research at the Haskell Symposium 2014. This was the first time I had been to ICFP and the Haskell Symposium. It was pretty cool to finally meet all these people I know only from IRC. I also admit that the atmosphere of the conference quite surprised me, as it often felt like some sort of fan convention rather than the biggest event in the field of functional programming.
The paper Richard and I published is titled “Promoting Functions to Type Families in Haskell”. This work is based on Richard’s earlier paper “Dependently typed programming with singletons”, presented two years ago at the Haskell Symposium. Back then Richard presented the singletons library, which uses Template Haskell to generate singleton types and functions that operate on them. Singleton types are types that have only one value (aside from bottom), which makes it possible to reason about runtime values during compilation (an introduction to singletons can be found in this post on Richard’s blog). This smart encoding allows one to simulate some of the features of dependent types in Haskell. In our current work we extended the promotion capabilities of the library. Promotion is only concerned with generating type-level definitions from term-level ones. The type-level language in GHC has become quite expressive during the last couple of years, but it is still missing many features available in the term-level language. Richard and I have found ways to encode almost all of these missing features using the already existing type-level language features. What this means is that you can write a normal term-level definition and our library will automatically generate an equivalent type family. You’re only forbidden from using infinite terms, the do-notation, and decomposing String literals into Chars. Numeric literals are also very problematic, and support for them is very limited, but some of the issues can be worked around. What is really cool is that our library allows partial application at the type level, which GHC normally prohibits.
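To make promotion concrete, here is a small hand-written picture of what the library does for you (the names Nat, Plus and toInt are mine, and the singletons library's actual generated output differs in detail):

```haskell
{-# LANGUAGE DataKinds, TypeFamilies #-}

import Data.Proxy (Proxy)

data Nat = Z | S Nat

-- An ordinary term-level function ...
plus :: Nat -> Nat -> Nat
plus Z     n = n
plus (S m) n = S (plus m n)

-- ... and its promotion: an equivalent type family, the kind of
-- definition singletons generates automatically from the above.
type family Plus (m :: Nat) (n :: Nat) :: Nat where
    Plus 'Z     n = n
    Plus ('S m) n = 'S (Plus m n)

-- Compile-time evidence that Plus computes: 1 + 1 reduces to 2,
-- so this 'id' type-checks.
check :: Proxy (Plus ('S 'Z) ('S 'Z)) -> Proxy ('S ('S 'Z))
check = id

toInt :: Nat -> Int
toInt Z     = 0
toInt (S n) = 1 + toInt n

main :: IO ()
main = print (toInt (plus (S Z) (S (S Z))))   -- the term level agrees: prints 3
```
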
You can learn more by watching my talk on YouTube, reading the paper, or reading the singletons documentation. Here I’d like to add some information that is not present in the paper. First of all, the paper is concerned only with promotion and doesn’t say anything about singletonization. But as we enabled more and more language constructs to be promoted, we also made them singletonizable, so almost everything that can be promoted can also be singletonized. The most notable exception to this rule is type classes, which are not implemented at the moment.
An interesting issue was raised by Adam Gundry in a question after the talk: what about the difference between lazy term-level semantics and strict type-level semantics? You can listen to my answer in the video, but I’ll elaborate some more on this here. At one point during our work we were wondering about this issue and decided to find an example of an algorithm that crucially relies on laziness to work, i.e., fails to work under strict semantics. I think it’s not straightforward to come up with such an algorithm, but luckily I recalled the backwards state monad from Philip Wadler’s paper “The essence of functional programming”1. The bind operator of that monad looks like this (definition copied from the paper):

m `bindS` k = \s2 -> let (a,s0) = m s1
                         (b,s1) = k a s2
                     in (b,s0)
The tricky part here is that the output of the call to m becomes the input to the call to k, while the output of the call to k becomes the input of m. Implementing this in a strict language does not look at all straightforward. So I promoted that definition expecting it to fail spectacularly, but to my surprise it worked perfectly fine. After some investigation I understood what was going on. Type-level computations performed by GHC are about constraint solving. It turns out that GHC is able to figure out in which order to solve these constraints and get the result. It’s exactly analogous to what happens with the term-level version at runtime: there is an order of dependencies between the closures, and there is a way to run these closures to get the final result.
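For readers who want to see the laziness at work at the term level, here is a minimal, self-contained backwards state monad along the lines of Wadler's definition (the helper names returnS, getS, putS and demo are my own). Note that getS observes a state that is only put "later" in the program text:

```haskell
newtype BackState s a = BackState { runBackState :: s -> (a, s) }

-- State threads right-to-left: the state consumed by m is produced
-- by k, and vice versa.  Laziness makes the mutually recursive
-- let-bindings well-defined.
bindS :: BackState s a -> (a -> BackState s b) -> BackState s b
m `bindS` k = BackState $ \s2 ->
    let (a, s0) = runBackState m s1
        (b, s1) = runBackState (k a) s2
    in  (b, s0)

returnS :: a -> BackState s a
returnS a = BackState $ \s -> (a, s)

getS :: BackState s s
getS = BackState $ \s -> (s, s)

putS :: s -> BackState s ()
putS s = BackState $ \_ -> ((), s)

-- 'getS' appears before 'putS 10', yet sees the 10,
-- because state flows backwards.
demo :: BackState Int Int
demo = getS    `bindS` \x ->
       putS 10 `bindS` \_ ->
       returnS x

main :: IO ()
main = print (runBackState demo 0)   -- prints (10,10)
```
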
All of this work is a small part of a larger endeavour to push Haskell’s type system towards dependent types. With singletons you can write type-level functions easily by writing their definitions using the term-level language and then promoting these definitions. And then you can singletonize your functions to work on singleton types. There were two other talks about dependent types during the conference: Stephanie Weirich’s “Depending on Types” keynote lecture during ICFP and Richard’s “Dependent Haskell” talk during the Haskell Implementors Workshop. I encourage everyone interested in Haskell’s type system to watch both of these talks.
- The awful truth is that this monad does not really work with the released version of singletons; I only realized that while writing this post. See issue #94 on the singletons bug tracker.
I tried my hand at writing a tutorial for working with Netwire 5, GLFW, and OpenGL (primarily geared at the Netwire side). Some links:
Any sort of feedback would be greatly appreciated -- grammatical mistakes, lack of clarity, me being just plain wrong about a piece of my logic or code, anything. I want to get better at this kind of thing so I can give back to the community. Thanks for your time.

submitted by crockeo