# News aggregator

### Sky Blue Trades | The Teeny-Tiny Happy Hakyll TikZ Pixie

del.icio.us/haskell - Mon, 11/25/2013 - 1:54pm
Categories: Offsite Blogs

### Including TikZ images in a Hakyll blog

Haskell on Reddit - Mon, 11/25/2013 - 1:36pm
Categories: Incoming News

### YBlog - Category Theory Presentation

del.icio.us/haskell - Mon, 11/25/2013 - 12:55pm
Categories: Offsite Blogs

### Master thesis project - Haskell STM

haskell-cafe - Mon, 11/25/2013 - 12:40pm
Dear Haskellers, I found Haskell and became enlightened for the first time in many years. I chose to continue on this path, and chose Haskell for my master thesis. I got some nice ideas from people on the #haskell channel. Altogether, I have elaborated a topic: "Evaluating Intel TSX in an MVCC-based STM implementation for Haskell". The idea is to investigate whether the "new" Intel TSX can bring any value in terms of performance and safety when implemented in an MVCC-based STM to be used in the Haskell runtime system (GHC). The MVCC-based STM using TSX will be evaluated and compared to the existing implementation, compiled with the two settings STM_CG_LOCK and STM_FG_LOCKS respectively. The biggest challenge, still, is to convince Lund University, LTH, to accept my own master thesis project. I have to present a detailed synopsis about the project, for them to choose an examiner and supervisor. As they put it, the examiner "must have time", and it's up to that person to decide whether or not this projec
Categories: Offsite Discussion

### www21.in.tum.de

del.icio.us/haskell - Mon, 11/25/2013 - 12:06pm
Categories: Offsite Blogs

### Ian Ross: The Teeny-Tiny Happy Hakyll TikZ Pixie

Planet Haskell - Mon, 11/25/2013 - 10:56am
The Teeny-Tiny Happy Hakyll TikZ Pixie November 25, 2013

For my blog, I use Jasper Van der Jeugt’s Hakyll system. When I was first looking for a blogging platform, I rejected all the usual choices (Wordpress, etc.), mostly because I’m a borderline obsessive control freak and they didn’t give enough configurability. Plus, they weren’t Haskell. I started writing my own blogging system, but then I found Hakyll. “Great,” I thought, “that’s perfect, except I want something almost completely different!”

Fortunately, Hakyll is written as a library that allows you to do more or less anything you like with it. This is common with Haskell tools: XMonad is another “do what you like” example.

Anyway, I set my blog up in a way completely different from the “normal” Hakyll blog, but still using all the handy tools that Jasper provided. It was pretty easy to do (and it’s even easier now, since Jasper redid the central page processing abstraction to use monads instead of arrows).

Carried away with my success, I added an extra feature that I thought would be cool. I’ve since changed my mind three or four times about whether it really was a good idea, but I’ve now come to the opinion that it’s kind of sweet.

Here’s how it works. There’s a very powerful LaTeX drawing package called TikZ¹ that allows you to say things like this:

```
\begin{scope}[very thick]
  \draw (0.5,0.5) +(-.25,-.25) rectangle ++(.25,.25);
\end{scope}
```

to draw a little square (TikZ can do a little bit more than this, obviously!). The thing I did was to allow you to embed this within the Markdown for blog posts, between rows of @ signs, just like you put code samples between rows of ~ characters. (There’s a special syntax to say “don’t wrap my TikZ code” so that you can add extra libraries and arbitrary LaTeX setup code as well.) These bits of TikZ code are pulled out during processing of the blog articles and rendered to SVG files using htlatex, and these SVGs are then embedded in the resulting HTML pages, like this:

<object data="http://www.skybluetrades.net/blog/tikzs/fb64dd78438b9e870da1dcd35f984c2f.svg" height="20" style="display: block; margin-left: auto; margin-right: auto;" type="image/svg+xml" width="20"></object>
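The extraction step is straightforward to sketch. This is not the blog’s actual code, just a minimal Haskell illustration of pulling out blocks delimited by rows of @ signs; the fence marker and function name are assumptions:

```haskell
import Data.List (isPrefixOf)

-- Collect the bodies of blocks fenced by rows of '@' signs, in order.
-- Each body would then be wrapped in a LaTeX/TikZ preamble, run through
-- htlatex, and replaced in the page by an <object> pointing at the SVG.
extractTikzBlocks :: String -> [String]
extractTikzBlocks = go . lines
  where
    isFence l = "@@@" `isPrefixOf` l
    go ls = case break isFence ls of
      (_, [])       -> []                        -- no more fences
      (_, _ : rest) -> case break isFence rest of
        (block, [])        -> [unlines block]    -- unterminated block
        (block, _ : rest') -> unlines block : go rest'
```

For example, applied to a document containing one @-fenced block holding `\draw (0,0) -- (1,1);`, it returns just that block's body.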

Why do this? Well, often when you’re writing things with some mathematical content, you want to embed little diagrams along the way to explain things. You can switch out of Emacs, start up Inkscape or something and do some drawing. The problem with that is that it’s really easy to get distracted by the details of the prettiness of your drawing, it’s hard to maintain consistent styling throughout a series of pictures, and it’s hard (or annoying) to get things lined up right in precise diagrams. Or you can quickly write some TikZ right in the document where you are. Once you’ve learnt enough TikZ to be useful, it’s very quick to knock out an accurate sketch of what you want (since you’re talking directly in terms of coordinates, rather than shifting things around by hand), and it’s easy to come back and apply consistent styling when you’re done.

Here are some examples taken straight from the TikZ manual:

<object data="http://www.skybluetrades.net/blog/tikzs/10c7193d66090db050d126f5d6ffcfba.svg" height="285" style="width: 24%;" type="image/svg+xml" width="287"></object> <object data="http://www.skybluetrades.net/blog/tikzs/3e0cdc97b502a14a3e9e38be75485bba.svg" height="125" style="width: 24%;" type="image/svg+xml" width="77"></object> <object data="http://www.skybluetrades.net/blog/tikzs/01272fa54c0354099c2ef86ee8cf0d78.svg" height="223" style="width: 24%;" type="image/svg+xml" width="116"></object> <object data="http://www.skybluetrades.net/blog/tikzs/7b4319680256ffe8671e221904c431a1.svg" height="142" style="width: 24%;" type="image/svg+xml" width="151"></object>

Personally, I think that’s kind of fun, although I agree that it won’t be to everyone’s taste. If you want to play with it, you can find the code in my blog repository. You can see the raw Markdown code for this post here.

1. TikZ ist kein Zeichenprogramm – “TikZ is not a drawing program”. It’s based on a lower-level package called PGF.

Categories: Offsite Blogs

Haskell on Reddit - Mon, 11/25/2013 - 10:31am
Categories: Incoming News

### Suggestions for custom language implementation AND analysis with "plugins"?

Haskell on Reddit - Mon, 11/25/2013 - 9:56am

I want to implement a custom language for real-time applications like vehicle control and navigation, which is my background. I figure I'll use LLVM as the backend. However, I don't just want to implement the compiler; I also want to make tools that perform different kinds of analyses on the code. For example, the tools might interface with an external simulation library and automatically tune certain parameters to get good predicted performance. Another possibility is visualizing the response of functions to particular signals of interest. Yet another is running the code through a suite of Monte Carlo (e.g., random atmospheric) tests and compiling the resulting statistics.

I want to be able to run these tools on both large functions as well as small ones (i.e. larger and smaller sections of the AST). I'd like for each of these bits of performance analysis functionality to be mostly modular, and ultimately integrated with an IDE in kind of a plugin sense so that different users could pick and choose what analysis functionality is useful to them. In my experience, getting this kind of feedback about the code is a time-intensive process that is spread across multiple different tools and provides very limited ability to inspect both small sections of code as well as the system as a whole. I'm hoping an integrated tool will provide tighter, faster feedback and thus lead to better designs.

I am appealing to the community for guidance about an architectural approach for tackling this problem. What recommendations come to mind, in terms of both design and library selection? I have heard few and mixed reviews about Haskell plugins and their usefulness, so more opinions would be useful. Given all the progress on the web side lately, would it make sense to implement each plugin as a snaplet and run an IDE in-browser (or something similar)? Haskell Center has shown this is a possibility. Perhaps integration with Yi is an alternative?

I realize it's a big, open question for a big problem with lots of risk of false starts and dead ends. I'm trying to get more experienced viewpoints to hopefully reduce that risk. Thanks much.

submitted by fluffynukeit
Categories: Incoming News

### New Functional Programming Job Opportunities

haskell-cafe - Mon, 11/25/2013 - 9:00am
Here are some functional programming job opportunities that were posted recently: Software Developer - Functional Programming at Genetec http://functionaljobs.com/jobs/8660-software-developer-functional-programming-at-genetec Cheers, Sean Murphy FunctionalJobs.com
Categories: Offsite Discussion

### FP Complete: Call For Entries

Planet Haskell - Mon, 11/25/2013 - 7:12am
FP Haskell Competition Call for Entries

We just launched our new Free Community edition of FP Haskell Center™, making it easier than ever to participate in our FP Haskell Competition. Each month we are giving away up to $2,500 in cash for Haskell projects. This is an excellent opportunity for Haskell developers to make some serious money and get some recognition for their work. Why submit? What's in it for you? Besides the cash ($1,000 for first prize and $500 for each of up to three runners-up), this is a perfect platform on which to publish and get feedback on your work. Why not share your knowledge and get paid for the work you're already doing? We want to give you money. Submit your personal projects, or work with teams to develop other projects; there is no limit to how many submissions you can turn in. Take a look at our recent winners and see how your projects compare. Visit our competition overview page to learn more.
Categories: Offsite Blogs

### The Haskell Cast #4 - Simon Marlow on Parallelism and Concurrency

Haskell on Reddit - Mon, 11/25/2013 - 7:00am
Categories: Incoming News

### How to process infinite sequence with Repa or Accelerate?

Haskell on Reddit - Mon, 11/25/2013 - 4:36am
Categories: Incoming News

### Using indexed free monads to QuickCheck JSON

Haskell on Reddit - Mon, 11/25/2013 - 4:27am
Categories: Incoming News

### www.glc.us.es

del.icio.us/haskell - Mon, 11/25/2013 - 4:25am
Categories: Offsite Blogs

### [ANNOUNCE] hPDB - Is it the fastest parallel PDB parser? Part of structural bioinformatics library collection...

haskell-cafe - Mon, 11/25/2013 - 1:18am
Dear Haskellers, I would like to present a benchmark of Protein Data Bank parsers which indicates that the one written in Haskell seems to outpace all others when using 4 or more parallel cores: hPDB - Haskell library for processing atomic biomolecular structures in Protein Data Bank format. Michal Jan Gajda, BMC Research Notes 2013, 6:483.
DOI: 10.1186/1756-0500-6-483 URL: http://www.biomedcentral.com/1756-0500/6/483 Please let me know if you know of any other parsers that could be added to this benchmark. Along with the hTalos and parseSTAR parser libraries for nuclear magnetic resonance data, it adds to a growing collection of bioinformatics libraries written in Haskell. Together with Cloud Haskell and modern 48-core machines, they make it possible to process multigigabyte bioinformatics databases in a matter of minutes (slightly over 8 minutes in the case of over 10 GB of PDB data). If interested, please see: http://biohaskell.org/.
Categories: Offsite Discussion

### Mark Jason Dominus: The shittiest project I ever worked on

Planet Haskell - Sun, 11/24/2013 - 8:18pm
Sometimes in job interviews I've been asked to describe a project I worked on that failed. This is the one I always think of first.

In 1995 I quit my regular job as senior web engineer for Time-Warner and became a consultant developing interactive content for the World-Wide Web, which was still a pretty new thing at the time. Time-Warner taught me many things. One was that many large companies are not single organizations; they are much more like a bunch of related medium-sized companies that all share a building and a steam plant. (Another was that I didn't like being part of a large company.)

One of my early clients was Prudential, which is a large life insurance, real estate, and financial services conglomerate based in Newark, New Jersey—another fine example of a large company that actually turned out to be a bunch of medium-sized companies sharing a single building. I did a number of projects for them, one of which was to produce an online directory of Prudential-affiliated real estate brokers. I'm sure everyone is familiar with this sort of thing by now.
The idea was that you would visit a form on their web site, put in your zip code or town name, and it would extract the nearby brokers from a database and present them to you on a web page, ordered by distance.

The project really sucked, partly because Prudential was disorganized and bureaucratic, and partly because I didn't know what I was doing. I quoted a flat fee for the job, assuming that it would be straightforward and that I had a good idea of what was required. But I hadn't counted on bureaucratic pettifoggery and the need for every layer of the management hierarchy to stir the soup a little. They tweaked and re-tweaked every little thing.

The data set they delivered was very dirty, much of it garbled or incomplete, and they kept having to fix their exporting process, which they did incompletely, several times. They also changed their minds at least once about which affiliated real estate agencies should be in the results, and had to re-send a new data set with the new correct subset of affiliates, and then the new data would be garbled or incomplete. So I received replacement data six or seven times. This would not have been a problem, except that each time they presented me with a file in a somewhat different format, probably exported from some loser's constantly-evolving Excel spreadsheet. So I had to write seven or eight different versions of the program that validated and loaded the data.

These days I would handle this easily; after the first or second iteration I would explain the situation: I had based my estimate on certain expectations of how much work would be required; I had not expected to clean up dirty data in eight different formats; they had the choice of delivering clean data in the same format as before, renegotiating the fee, or finding someone else to do the project. But in 1995 I was too green to do this, and I did the extra work for free.
Similarly, they tweaked the output format of the program repeatedly over weeks: first the affiliates should be listed in distance order, but no, they should be listed alphabetically if they are in the same town and then after that the ones from other towns, grouped by town; no, the Prudential Preferred affiliates must be listed first regardless of distance, which necessitated a redelivery of the data, which up until then hadn't distinguished between ordinary and Preferred affiliates; no wait, that doesn't make sense, it puts a far-off Preferred affiliate ahead of a nearby regular affiliate... Again, this is something that many clients do, but I wasn't expecting it and it took a lot of time I hadn't budgeted for. Also, these people had, I now know, an unusually bad case of it.

Anyway, we finally got it just right, and it had been approved by multiple layers of management and given a gold star by the Compliance Department, and my clients took it to the Prudential Real Estate people for a demonstration.

You may recall that Prudential is actually a bunch of medium-sized companies that share a building in Newark. The people I was working with were part of one of these medium-sized companies. The real estate business people were in a different company.

The report I got about the demo was that the real estate people loved it, it was just what they wanted. “But,” they said, “how do we collect the referral fees?”

Prudential Real Estate is a franchise operation. Prudential does not actually broker any real estate. Instead, a local franchisee pays a fee for the use of the name and logo and other services. One of the other services is that Prudential runs a national toll-free number; you can call this up and they will refer you to a nearby affiliate who will help you buy or sell real estate. And for each such referral, the affiliate pays Prudential a referral fee.
We had put together a real estate affiliate locator application which let you locate a nearby Prudential-affiliated franchisee and contact them directly, bypassing the referral and eliminating Prudential's opportunity to collect a referral fee.

So I was told to make one final change to the affiliate locator. It now worked like this: the user would enter their town or zip code; the application would consult the database and find the contact information for the nearby affiliates; it would order them in the special order dictated by the Compliance Department; and then it would display a web page with the addresses and phone numbers of the affiliates carefully suppressed. Instead, the name of each affiliate would be followed by the Prudential national toll-free number AND NOTHING ELSE.

Even the names were suspect. For a while Prudential considered replacing each affiliate's name with a canned string, something like "Prudential Real Estate Affiliate", because what if the web user decided to look up the affiliate in the Yellow Pages and call them directly? It was eventually decided that the presence of the toll-free number directly underneath rendered this risk acceptably small, so the names stayed. But everything else was gone.

Prudential didn't need an affiliate locator application. They needed a static HTML page that told people to call the number. All the work I had put into importing the data, into formatting the output, into displaying the realtors in precisely the right order, had been a complete waste of time.

[ Addendum 20131018: This article is available in Chinese. ]
Categories: Offsite Blogs

### Mark Jason Dominus: Cobblestones

Planet Haskell - Sun, 11/24/2013 - 8:16pm
This is a public service announcement. This is not a picture of a cobbled street:

Rather, these stones are "Belgian block", also called setts. Cobblestones look like this:

I took these pictures in front of the library of the American Philosophical Society on South 5th Street in Philadelphia.
South 5th Street is paved with Belgian block, and the lane beside the APS is cobbled. You can just barely distinguish them in this satellite photograph.
Categories: Offsite Blogs

### Mark Jason Dominus: In which I revisit the pastimes of my misspent youth

Planet Haskell - Sun, 11/24/2013 - 8:16pm
Last weekend I was at a flea market and saw an HP-15C calculator for $10. The HP-15C was the last pocket calculator I owned, some time before pocket calculators became ridiculous. It was a really nice calculator when I got it in 1986, one of my most prized possessions.

I lost my original one somewhere along the way, and also the spare I had bought from a friend against the day when I lost the original, and I was glad to get another one, even though I didn't have any idea what I was going to do with it. My phone has a perfectly serviceable scientific calculator in it, a very HP-ish one called RealCalc. (It's nice, you should check it out.) The 15C was sufficiently popular that someone actually brought it back a couple of years ago, in a new and improved version, with the same interface but 21st-century technology, and I thought hard about getting one, but decided I couldn't justify spending that much money on something so useless, even if it was charming. Finding a cheap replacement was a delightful surprise.

Then on Friday night I was sitting around thinking about which numbers n are such that 10n² + 9 is a perfect square, and I couldn't think of any examples except for 0, 2, and 4. Normally I would just run and ask the computer, which would take about two minutes to write the program and one second to run it. But I was out in the courtyard, it was a really nice evening, my favorite time of the year, the fading light was beautiful, and I wasn't going to squander it by going inside to brute-force some number problem.

But I did have the HP-15C in my pocket, and the HP-15C is programmable, by mid-1980s programmable-calculator standards. That is to say, it is just barely programmable, but just barely is all you need to implement a linear search for solutions of 10n² + 9 = m². So I wrote the program and discovered, to my surprise, that I still remember many of the fussy details of how to program an HP-15C. For example, the SST button single-steps through the listing in program mode, but single-steps the execution in run mode. And instead of using the special test 5 to see if the x and y registers are equal, you might as well subtract them and use the x=0 test; it uses the same amount of program memory and you won't have to flip the calculator over to remember what test 5 is. And the x² and INT() operations are on the blue shift key.

Here's the program:

```
001 - 42,21,11
002 - 43 11
003 - 1
004 - 0
005 - 20
006 - 9
007 - 40
008 - 36
009 - 11
010 - 36
011 - 43 44
012 - 30
013 - 43 20
014 - 31
015 - 43 32
016 - 42,21,12
017 - 40
018 - 45 0
019 - 32 11
020 - 2
021 - 44,40, 0
022 - 22 12
```

I see now that when I tested for integrality, I did it the wrong way. My method used four steps:

```
010 - 36     -- push stack
011 - 43 44  -- x ← INT(x)
012 - 30     -- subtract
013 - 43 20  -- test x=0 ?
```

but it would have been better to just test the fractional part of the value for zeroness:

```
42 44  -- x ← FRAC(x)
43 20  -- test x=0 ?
```

Saving two instructions might not seem like a big deal, but it takes the calculator a significant amount of time to execute two instructions. The original program takes 55.2 seconds to find n=80; with the shorter code, it takes only 49.2 seconds, a 10% improvement. And when your debugging tool can only display a single line of numeric operation codes, you really want to keep the program as simple as you can.

Besides, stuff should be done right. That's why it's called "right".

But I kind of wish I had that part of my brain back. Who knows what useful thing I would be able to remember if I wasn't wasting my precious few brain cells remembering that the back-step key ("BST") is on the blue shift, and that "42,21,12" is the code for "subroutine B starts here".

Anyway, the program worked, once I had debugged it, and in short order (by 1986 standards) produced the solutions n=18, 80, 154, which was enough to get my phone to search the OEIS and find the rest of the sequence. The OEIS entry mentioned that the solutions have the generating function

and when I saw that in the denominator, I laughed, really loudly. My new neighbor was in her back yard, which adjoins the courtyard, and heard me, and said that if I was going to laugh like that I had to explain what was so funny. I said “Do you really want to know?” and she said yes, but I think she was mistaken.
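For comparison, the same linear search is a one-liner on a modern machine. A Haskell sketch, assuming (consistently with the solutions 0, 2, 4, 18, 80, and 154 mentioned above) that the predicate being tested is "10n² + 9 is a perfect square":

```haskell
-- Brute-force search, the same thing the HP-15C program does.
-- The predicate is inferred from the solutions quoted in the post;
-- all of 0, 2, 4, 18, 80, 154 satisfy it.
isSquare :: Integer -> Bool
isSquare k = r * r == k
  where r = floor (sqrt (fromIntegral k) :: Double)

solutions :: [Integer]
solutions = [ n | n <- [0 ..], isSquare (10 * n * n + 9) ]
```

Taking the first six elements of `solutions` reproduces 0, 2, 4, 18, 80, 154, in well under a second rather than 55.2.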

Categories: Offsite Blogs

### Mark Jason Dominus: Overlapping intervals

Planet Haskell - Sun, 11/24/2013 - 8:16pm
Our database stores, among other things, "budgets", which have a lifetime with a start and end time. A business rule is that no two budgets may be in force at the same time. I wanted to build a method which, given a proposed start and end time for a new budget, decided whether there was already a budget in force during any part of the proposed period.

The method signature is:

```perl
sub find_overlapping_budgets {
  my ($self, $start, $end) = @_;
  ...
}
```

and I want to search the contents of $self->budgets for any budgets that overlap the time interval from $start to $end. Budgets have a start_date and an end_date property.

My first thought was that for each existing budget, it's enough to check to see if its start_date or its end_date lies in the interval of interest, so I wrote it like this:

```perl
sub find_overlapping_budgets {
  my ($self, $start, $end) = @_;
  return $self->budgets->search({
    [ { start_date => { ">=", $start },
        start_date => { "<=", $end },
      },
      { end_date => { ">=", $start },
        end_date => { "<=", $end },
      },
    ]
  });
}
```

People ridicule Lisp for having too many parentheses, and code like this, a two-line function which ends with },},]});}, should demonstrate that that is nothing but xenophobia. I'm not gonna explain the ridiculous proliferation of braces and brackets here, except to say that this is expressing the following condition:

( $start ≤ start_date ≤ $end ) ∨ ( $start ≤ end_date ≤ $end )

which, writing [s, e] for the proposed interval and [S, E] for an existing budget's interval, we can abbreviate as:

( s ≤ S ≤ e ) ∨ ( s ≤ E ≤ e )

And if this condition holds, then the intervals overlap. Anyway, this seemed reasonable at the time, but is totally wrong, and happily, the automated tests I wrote for the method caught the error. Say there is a budget that already exists, running from June 1 to June 10, and we ask whether we can create a budget that runs from June 6 to June 7. Then the query asks:

( June 6 ≤ June 1 ≤ June 7 ) ∨ ( June 6 ≤ June 10 ≤ June 7 )

Both of the disjuncts are false, so the method reports that there is no overlap. My implementation was just completely wrong. It's not enough to check to see if either endpoint of the proposed interval lies within an existing interval; you also have to check to see if any of the endpoints of the existing intervals lie within the proposed interval. (Alert readers will have noticed that although the condition “Intervals A and B overlap” is symmetric in A and B, the condition as I wrote it is not symmetric, and this should raise your suspicions.)
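The failure mode is easy to reproduce outside the database. Here is a Haskell sketch (an illustration, not the original Perl) of the one-sided check, which misses the case where one interval contains the other:

```haskell
-- Buggy one-sided check: does either endpoint of interval B fall
-- inside interval A?  It misses the case where B strictly contains A.
badOverlaps :: Ord a => (a, a) -> (a, a) -> Bool
badOverlaps (s, e) (s', e') =
  (s <= s' && s' <= e) || (s <= e' && e' <= e)
```

Here `badOverlaps (6, 7) (1, 10)` is `False` even though the intervals plainly overlap; that containment case is exactly what the automated tests caught.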

This was yet another time when I felt slightly foolish as I wrote the automated tests, assuming that the time and effort I spent on testing this trivial function would be time and effort thrown away on nothing—and then they detected a real fault. Someday perhaps I'll stop feeling foolish writing tests for functions like this one; until then, many cases just like this one will help me remember that I must write the tests even though I feel foolish doing it.

Okay, how to get this right? I tried a bunch of things, mostly involving writing out a conjunction of every required condition and then using boolean algebra to simplify the resulting expression:

This didn't work well, partly because I was doing it at two in the morning, partly because there are many conditions, all very similar, and I kept getting them mixed up, and partly because, for implementation reasons, the final expression must be a query on interval A, even though it is most naturally expressed symmetrically between the two intervals.

But then I had a happy idea: for some reason it seemed much simpler to express the opposite condition, that the two intervals do not conflict. If they don't conflict, then interval A must be entirely to the left of interval B, so that A ends before B starts, or vice versa, so that B ends before A starts. Then the intervals do not overlap if either of these is true:

( endA < startB ) ∨ ( endB < startA )

and the condition that we want, that the two intervals do overlap, is simply its negation: A ends on or after B starts, and B ends on or after A starts.

This is correct, or at least all the tests now pass, and it is even simpler than the incorrect condition I wrote in the first place. The code looks like this:

```perl
sub find_overlapping_budgets {
  my ($self, $start, $end) = @_;
  return $self->budgets->search({
    end_date   => { '>=', $start },
    start_date => { '<=', $end },
  });
}
```

Usually I like to draw some larger lesson from this sort of thing. What comes to mind now (other than “Just write the tests, fool!”) is this: the end result is quite clever. Often I see the final version of the code and say “Oh, I wonder why I didn't see that right off?” Not this time. I want to say I couldn't have found it by myself, except that I did find it by myself, not by just pulling it magically out of my head, but by applying technique.

Instead of "not by magically pulling it out of my head" I was about to write "not by just thinking", but that is not quite right. I did solve it by "just thinking", but it was a different sort of thinking. Sometimes I consider a problem, and a solution leaps to mind, as it did in this case, except that it was wrong. That is what I call "just thinking". But applying carefully-learned and practiced technique is also thinking.

The techniques I applied in this problem included: noticing and analyzing symmetries of the original problem, and application of laws of boolean algebra, both in the unsuccessful and the successful attempt. Higher-level strategies included trying more than one approach, and working backwards. Learning and correctly applying technique made me effectively a better thinker, not just in general, but in this particular case.
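The corrected condition also translates directly. A Haskell sketch of the symmetric overlap test for closed intervals (again an illustration, not the production code):

```haskell
-- Two closed intervals overlap iff each one ends no earlier than the
-- other begins: the negation of "A is entirely left of B, or B is
-- entirely left of A".
overlaps :: Ord a => (a, a) -> (a, a) -> Bool
overlaps (s1, e1) (s2, e2) = e1 >= s2 && e2 >= s1
```

Unlike the one-endpoint check, this is symmetric in its two arguments, and it correctly reports that (6, 7) and (1, 10) overlap.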

[ Addendum 20130917: Dfan Schmidt remarks: "I'm astonished you didn't know the interval-overlap trick already." I was a little surprised, also, when I tried to pull the answer out of my head and didn't find one there already, either from having read it somewhere before, or from having solved the problem before. ]

Categories: Offsite Blogs