News aggregator

bitemyapp/learnhaskell

del.icio.us/haskell - Fri, 05/23/2014 - 3:27pm
Categories: Offsite Blogs

Stackage Server | FP Complete

del.icio.us/haskell - Fri, 05/23/2014 - 2:45pm
Categories: Offsite Blogs

Philip Wadler: Will an independent Scotland support science? Just look at my office

Planet Haskell - Fri, 05/23/2014 - 9:40am
In the debate over Scottish Independence, one topic of particular interest to me and my colleagues is how funding for science and research will fare (see my previous post). It was in the news again today, with some academics voicing "grave concerns that the country does not sleepwalk into a situation that jeopardises its present success in the highly-competitive arena of biomedical research". Not that the current situation is rosy. Other academics in the same article observe: "The Campaign for Science and Engineering (CaSE) has noted 'the cumulative erosion' of the science budget of 'over £1.1 billion' and CaSE director, Dr Sarah Main, has commented that 'the last four years of a flat cash science budget is biting scientists and engineers and squeezing universities'."

One question one might ask is which government shows stronger appreciation of the value of science? The coalition planned to slash science funding as part of its austerity programme, with a reprieve at the last moment leading to only a mild cut. The UK as a whole tends to elect governments that cut education and maintain science funding only when pressed.

In contrast, time and again, the Scottish people elect governments that understand the value of education and science. Why else is Scotland home to more top universities per head than anywhere else in the world?

As one concrete example, consider my office. The award-winning Informatics Forum (pictured above) would not exist without direct support from the Scottish Government. Read this press release from 2005:
Scottish Enterprise Edinburgh and Lothian has secured an additional £14 million from the Scottish Executive towards the £42 million construction costs of the University of Edinburgh's Informatics Forum. ...

A further £5 million has been awarded by Scottish Enterprise Edinburgh and Lothian towards a strategy which will maximise engagement with local and international industry, ensuring Scotland reaps the economic benefits the Forum will generate. ...

Tim O’Shea, Principal of the University of Edinburgh, says: ‘Scotland is already a world-leader in a number of areas of Informatics and with the vision and support of the Scottish Executive and Scottish Enterprise Edinburgh and Lothian it will become even stronger.’
Categories: Offsite Blogs

Philip Wadler: The Funding Gap

Planet Haskell - Fri, 05/23/2014 - 9:37am
One ongoing debate concerns the 'funding gap' that might be faced by Scottish science in the event of independence. I've been trying to track down the numbers. Not surprisingly, it depends on what assumptions you make.

The Royal Society of Edinburgh sponsored a series of discussions, now available in print and online, Enlightening the Constitutional Debate.  The following appears on page 182:
To maintain the international quality of our research base, Professor Paterson added, we must maintain our access to international funding and maintain our international standards. To do so, it has been calculated that an independent Scotland would need to find an extra £300 million in funds per annum – double the amount currently distributed by the Scottish Funding Council.

Lindsay Paterson is my colleague at the University of Edinburgh, so I wrote to him asking the source of his figures. He referred me to his detailed notes, where he explains (footnote 35) that
Public expenditure on research in Scotland is about 0.95% of GDP, whereas the average in the comparison developed countries noted in that footnote is 0.7%. The difference, 0.25%, is £325 million in a GDP of £130 billion. So the RSE's summary above is inaccurate: £300 million is not the difference between what Scotland spends now and what it would need to spend to fund science at the same level as currently; it is the difference between what Scotland spends now and what it would spend if it spent the same amount as a typical developed country.

So what is the actual 'funding gap'? Michael Danson at Heriot-Watt University has written a note that explains the numbers. Scotland wins 10-11% of the funding from the UK Research Councils, but pays only around 9% of taxes. In addition, there is funding from the remainder of the UK government and from UK charities. Adding it all up, he puts the shortfall at between £97 and £143 million, where the latter figure assumes that no UK charity will contribute a penny to Scotland. On more reasonable assumptions, a figure of around £100 million seems more likely. As he notes, that's less than the rise in science funding the Scottish Government has already approved over the last decade.

That's two estimates. What figures have you seen for the funding gap?
Categories: Offsite Blogs

Philip Wadler: Enemy of the People

Planet Haskell - Fri, 05/23/2014 - 5:30am
George Monbiot's column captures my feelings exactly. I wasn't previously familiar with Ibsen's play An Enemy of the People, which Monbiot summarises and relates to attitudes about climate change.
Thomas Stockmann is a doctor in a small Norwegian town, and medical officer at the public baths whose construction has been overseen by his brother, the mayor. The baths, the mayor boasts, "will become the focus of our municipal life! … Houses and landed property are rising in value every day."
But Stockmann discovers that the pipes have been built in the wrong place, and the water feeding the baths is contaminated. "The source is poisoned … We are making our living by retailing filth and corruption! The whole of our flourishing municipal life derives its sustenance from a lie!" People bathing in the water to improve their health are instead falling ill.
Stockmann expects to be treated as a hero for exposing this deadly threat. After the mayor discovers that re-laying the pipes would cost a fortune and probably sink the whole project, he decides that his brother's report "has not convinced me that the condition of the water at the baths is as bad as you represent it to be".
The mayor proposes to ignore the problem, make some cosmetic adjustments and carry on as before. After all, "the matter in hand is not simply a scientific one. It is a complicated matter, and has its economic as well as its technical side." The local paper, the baths committee and the business people side with the mayor against the doctor's "unreliable and exaggerated accounts".
Astonished and enraged, Stockmann lashes out madly at everyone. He attacks the town as a nest of imbeciles, and finds himself, in turn, denounced as an enemy of the people. His windows are broken, his clothes are torn, he's evicted and ruined.
Today's editorial in the Daily Telegraph, which was by no means the worst of the recent commentary on this issue, follows the first three acts of the play. Marking the new assessment by the Intergovernmental Panel on Climate Change, the Telegraph sides with the mayor. First it suggests that the panel cannot be trusted, partly because its accounts are unreliable and exaggerated and partly because it uses "model-driven assumptions" to forecast future trends. (What would the Telegraph prefer? Tea leaves? Entrails?) Then it suggests that trying to stop manmade climate change would be too expensive. Then it proposes making some cosmetic adjustments and carrying on as before. ("Perhaps instead of continued doom-mongering, however, greater thought needs to be given to how mankind might adapt to the climatic realities.")

(The image above shows Marilyn Monroe reading her husband Arthur Miller's translation of Ibsen's play.)
Categories: Offsite Blogs

partitionM

Haskell on Reddit - Fri, 05/23/2014 - 4:37am

Here’s a little partitionM I wrote. I found it handy, so I’m sharing it here.

http://lpaste.net/104509
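
For readers who can't reach the paste: a monadic partition typically has the shape sketched below. This is a guess at the general form, not necessarily the posted version.

-- Partition a list according to a monadic predicate, preserving order
-- and running the predicate's effects left to right.
partitionM :: Monad m => (a -> m Bool) -> [a] -> m ([a], [a])
partitionM p = foldr step (return ([], []))
  where
    step x rest = do
      keep      <- p x
      (yes, no) <- rest
      return (if keep then (x : yes, no) else (yes, x : no))

For example, partitionM (\n -> return (even n)) [1..5] yields ([2,4],[1,3,5]) in any monad.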

What do you think?

submitted by _skp
[link] [17 comments]
Categories: Incoming News

Jan Stolarek: Parallel Haskell challenge (also, how to make your research project fail)

Planet Haskell - Fri, 05/23/2014 - 3:28am

In September 2012, after playing with Haskell for a couple of months, I decided to get serious about functional programming as a research topic. By that time I had come across many papers and presentations about parallel Haskell, all of them saying how great Haskell is for doing parallel and concurrent computations. This looked very interesting, especially since in my PhD thesis I had used an algorithm that seemed to be embarrassingly parallel in nature. I wanted to start my research by applying Haskell to something I was already familiar with, so I decided to write an efficient, parallel Haskell implementation of the algorithm I used in my PhD. This attempt was supposed to be a motivation to learn various approaches to parallel programming in Haskell: Repa, DPH, Accelerate and possibly some others. The task seemed simple and I estimated it should take me about 5 months.

I was wrong and I failed. After about 6 months I abandoned the project. Despite my initial optimism, upon closer examination the algorithm turned out not to be embarrassingly parallel. I could not find a good way of parallelizing it, and doing things in a functional setting made it even more difficult. I don’t think I will ever get back to this project so I’m putting the code on GitHub. In this post I will give a brief overview of the algorithm, discuss the parallelization strategies I came up with and the state of the implementation. I hope that someone will pick it up and solve the problems I was unable to solve. Consider this a challenge problem in parallel programming in Haskell. I think that if a solution is found it might be worth a paper (unless it is something obvious that escaped me). In any case, please let me know if you’re interested in continuing my work.

Lattice structure

The algorithm I wanted to parallelize is called the “lattice structure”. It is used to compute a Discrete Wavelet Transform (DWT) of a signal [1]. I will describe how it works but will not go into the details of why it works the way it does (if you’re interested in the gory details take a look at this paper).

Let’s begin by defining a two-point base operation. It takes two floating-point values x and y as input and returns two new values x’ and y’ obtained by a simple matrix multiplication: the pair (x, y) is multiplied by a fixed 2×2 matrix whose entries are determined by a single real parameter α. The base operation is usually visualised as a butterfly-style diagram; the idea is almost identical to the butterfly diagram used in Fast Fourier Transforms.
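
For concreteness, here is a minimal Haskell sketch of such a base operation. The particular cos/sin coefficients below are an illustrative assumption; the exact matrix is defined in the paper and repository mentioned above.

-- One two-point base operation, parameterised by the angle a.
-- The choice of coefficients here is illustrative only.
baseOp :: Double -> (Double, Double) -> (Double, Double)
baseOp a (x, y) = (x * cos a + y * sin a, x * sin a - y * cos a)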

The lattice structure accepts an input of even length, sends it through a series of layers, and outputs a transformed signal of the same length as the input. The structure is organised into layers of base operations connected like this:

[Figure: layers of base operations forming the lattice structure, with cyclic wrap-around at the signal ends]

The number of layers may be arbitrary; the number of base operations depends on the length of the input signal. Within each layer all base operations are identical, i.e. they share the same value of α. Each layer is shifted by one relative to its preceding layer. At the ends of the signal there is a cyclic wrap-around, as denoted by the arrows in the figure. This has to do with edge effects. By edge effects I mean the question of what to do at the ends of a signal, where we might have fewer samples than required to actually perform our signal transformation (because the signal ends and the samples are missing). There are various approaches to this problem. The cyclic wrap-around performed by this structure means that a finite-length signal is in fact treated as if it were an infinite, cyclic signal. This approach does not give the best results, but it is very easy to implement. I decided to use it and focus on more important issues.

Note that if we don’t need to keep the original signal, the lattice structure can operate in place. This allows for a memory-efficient implementation in languages that have destructive updates. If we do want to keep the original signal, it is enough for the first layer to copy the data from the old array to a new one; all subsequent layers can operate in place on the new array.
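
As a reference point, a straightforward sequential, list-based version of the whole structure might look roughly like this (reusing baseOp from the sketch above; the cyclic shift between layers is discussed in more detail later, and the exact placement of the shift is a simplification):

-- One layer: pair up consecutive samples and apply the base operation to each pair.
latticeLayer :: Double -> [Double] -> [Double]
latticeLayer a (x:y:rest) = let (x', y') = baseOp a (x, y)
                            in x' : y' : latticeLayer a rest
latticeLayer _ xs         = xs            -- empty input (or a stray odd sample)

-- Cyclic shift by one: a simple way to model the wrap-around between layers.
shiftLeft :: [a] -> [a]
shiftLeft []     = []
shiftLeft (x:xs) = xs ++ [x]

-- The full structure: one angle per layer, shifting the signal between layers.
lattice :: [Double] -> [Double] -> [Double]
lattice angles signal = foldl (\sig a -> shiftLeft (latticeLayer a sig)) signal angles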

Parallelization opportunities

One look at the lattice structure and you see that it is parallel – base operations within a single layer are independent of each other and can easily be processed in parallel. This approach seems very well suited to the CUDA architecture, but since I am not familiar with GPU programming I decided to begin by exploring parallelism opportunities on a standard CPU.

For CPU computations you can divide the input signal into chunks containing many base operations and distribute these chunks to threads running on different cores. The Repa library uses this parallelization strategy under the hood. The major problem here is that after each layer has been computed we need to synchronize the threads to assemble the result. The question is whether the gains from parallelism outweigh this cost.
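
As a plain illustration of this chunk-per-thread idea (this is not the repository's Repa code), the same layer could be evaluated with Control.Parallel.Strategies, reusing baseOp from the earlier sketch:

import Control.Parallel.Strategies (parListChunk, rdeepseq, using)

-- Evaluate the base operations of one layer in parallel, in chunks of 1024 pairs.
parLayer :: Double -> [Double] -> [Double]
parLayer a signal = concat (map apply (pairs signal) `using` parListChunk 1024 rdeepseq)
  where
    pairs (x:y:rest) = (x, y) : pairs rest
    pairs _          = []
    apply (x, y)     = let (x', y') = baseOp a (x, y) in [x', y']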

After some thought I came up with another parallelization strategy. Instead of synchronizing after each layer I would give each thread its own chunk of the signal to propagate through all the layers and then merge the results at the end. This approach requires that each thread is given an input chunk that is slightly larger than the expected output, because here we will not perform a cyclic wrap-around but will instead narrow down the signal. The idea is shown below:

[Figure: the signal split between two threads, with overlapping input chunks propagated through the layers]

This example assumes dividing the signal between two threads. Each thread receives an input signal of length 8 and produces an output of length 4. A couple of issues arise with this approach. As you can see there is some overlap of computations between neighbouring threads, which means we will compute some base operations twice. I derived a formula to estimate the amount of duplicated computation and concluded that in practice this issue can be completely neglected. Another issue is that the original signal has to be enlarged, because we don’t perform a wrap-around but instead expect the wrapped signal components to be part of the input (these extra operations are marked in grey in the figure). This means that we need to create an input vector that is longer than the original one and fill it with appropriate data. We then need to slice that input into chunks, pass each chunk to a separate thread, and once all threads are finished assemble the result. Chunking the input signal and assembling the results at the end are extra costs, but they allow us to avoid synchronizing threads between layers. Again, this approach might be implemented with Repa.

A third approach I came up with was a form of nested parallelism: distribute overlapping chunks to separate threads and have each thread compute base operations in parallel, e.g. by using SIMD instructions.

Methodology

My plan was to implement various versions of the above parallelization strategies and compare their performance. When I worked in Matlab I used its profiling capabilities to get precise execution times for my code, so one of the first questions I had to answer was “how do I measure the performance of my code in Haskell?” After some googling I quickly came across the criterion benchmarking library. Criterion is really convenient to use because it automatically runs the benchmarked function multiple times and performs statistical analysis of the results. It also plots the results in a very accessible form.
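
A minimal criterion benchmark for this kind of code looks roughly like the sketch below. Here dwt is just a placeholder standing in for the real transform, and env is used so that the input is forced outside of the timed region:

import Criterion.Main (bench, bgroup, defaultMain, env, nf)
import qualified Data.Vector.Unboxed as V

-- Placeholder transform; the real dwt would take the layer angles and a signal.
dwt :: V.Vector Double -> V.Vector Double -> V.Vector Double
dwt _angles signal = signal

main :: IO ()
main = defaultMain
  [ env (return testSignal) $ \sig ->        -- force the input before timing starts
      bgroup "DWT"
        [ bench "Seq" $ nf (dwt angles) sig ]
  ]
  where
    angles     = V.fromList [0.1, 0.2, 0.3, 0.4, 0.5, 0.6] :: V.Vector Double
    testSignal = V.generate 16384 fromIntegral             :: V.Vector Double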

While criterion offered a lot of the features I needed, it also raised many questions and issues. One question was whether forcing the lazily generated benchmark input data distorts the benchmark results; it took me several days to come up with experiments that answered it. Another issue was the reliability of the results. For example, I observed that results can differ significantly across runs. This is of course to be expected in a multi-tasking environment. I tried to eliminate the problem by switching my Linux machine to single-user mode, where I could disable all background services. Still, it happened that some results differed significantly across multiple runs, which suggests that running benchmarks is not a good way to precisely answer the question “which of the implementations is the most efficient?”. Another observation I made about criterion was that the results of benchmarking functions that use the FFI depend on their placement in the benchmark suite. I was not able to solve that problem and it undermined my trust in the criterion results. Later during my work I decided to benchmark not only the functions performing the Discrete Wavelet Transform but also all the smaller components that comprise them. Some of those results were impossible for me to interpret in a meaningful way. I ended up not really trusting the results from criterion.

Another tool I used for measuring parallel performance was ThreadScope. This nifty program visualizes CPU load during program execution, as well as garbage collection and other events such as threads being activated or put to sleep. ThreadScope gave me some insight into what is going on when I run my program. The information from it was very valuable, although I couldn’t use it to get the most important piece of information I needed for multi-threaded code: “how much time does the OS need to start multiple threads and synchronize them later?”

Implementation

As already mentioned, one of my goals for this project was to learn various parallelization techniques and libraries. This resulted in implementing the algorithms described above in a couple of ways. First of all, I used three different approaches to handle the cyclic wrap-around of the signal between the layers:

  • cyclic shift – after computing one layer, perform a cyclic shift of the intermediate transformed signal: the first element of the signal becomes the last, and all other elements are shifted by one towards the front. This is rather inefficient, especially for lists.
  • signal extension – instead of doing cyclic shift extend the initial signal and then shorten it after each layer (this approach is required for the second parallelization strategy but it can be used in the first one as well). Constructing the extended signal is time consuming but once lattice structure computations are started the transition between layers becomes much faster for lists. For other data structures, like vectors, it is time consuming because my implementation creates a new, shorter signal and copies data from existing vector to a new one. Since vectors provide constant-time indexing it would be possible to avoid copying by using smarter indexing. I don’t remember why I didn’t implement that.
  • smart indexing – the most efficient way of implementing cyclic wrap-around is to use indexing that shifts the base operations by one on the odd layers (see the sketch after this list). Obviously, to be efficient it requires a data structure that provides constant-time indexing. It requires no copying or any other modification of a layer’s output data, and thus carries no memory or execution overhead.
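
A rough illustration of the smart-indexing idea on unboxed vectors (the exact index arithmetic in the repository may differ; this only shows the shape of it):

import qualified Data.Vector.Unboxed as V

-- Read sample i of a layer's input: on odd layers the access pattern is shifted
-- by one and wraps around the end of the vector, so no data is ever moved.
readWrapped :: Int -> V.Vector Double -> Int -> Double
readWrapped layer v i
  | even layer = v V.! i
  | otherwise  = v V.! ((i + 1) `mod` V.length v)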

Now that we know how to implement cyclic wrap-around, let’s focus on the actual implementations of the lattice structure. I only implemented the first parallelization strategy, i.e. the one that requires thread synchronization after each layer. I admit I don’t remember the exact reasons why I didn’t implement the signal-chunking strategy. I think I did some preliminary measurements and concluded that the overhead of chunking the signal was way too big. Obviously, the strategy that was supposed to use nested parallelism was also not implemented, because it relied on the chunking strategy. So all of the code uses parallelism within a single layer and synchronizes threads after each layer.

Below is an alphabetic list of what you will find in my source code in the Signal.Wavelet.* modules:

  • Signal.Wavelet.C1 – I wanted to at least match the performance of C, so I made a sequential implementation in C (see cbits/dwt.c) and linked it into Haskell using FFI bindings. I worried that the overhead of calling C via the FFI might distort the results, but luckily it turned out that it does not – see this post. This implementation uses smart indexing to perform the cyclic wrap-around. It also operates in place (except for the first layer, as described earlier).
  • Signal.Wavelet.Eval1 – this implementation uses lists and the Eval monad, with a cyclic shift of the input signal between layers. It was not actually a serious effort – I don’t expect anything that operates on lazy lists to have decent performance in numerical computations. Surprisingly though, adding Eval turned out to be a performance killer compared to the sequential implementation on lists. I never investigated why this happens.
  • Signal.Wavelet.Eval2 – same as Eval1, but uses signal extension instead of cyclic shift. Performance is also very poor.
  • Signal.Wavelet.List1 – sequential implementation on lazy lists with cyclic shift of the signal between the layers. Written as a reference implementation to test other implementations with QuickCheck.
  • Signal.Wavelet.List2 – same as previous, but uses signal extension. I wrote it because it was only about 10 lines of code.
  • Signal.Wavelet.Repa1 – parallel and sequential implementation using Repa with cyclic shift between layers. Uses unsafe Repa operations (unsafe = no bounds checking when indexing), forces each layer after it is computed and is as strict as possible.
  • Signal.Wavelet.Repa2 – same as previous, but uses signal extension.
  • Signal.Wavelet.Repa3 – this implementation uses the internals of the Repa library. To make it run you need to install a modified version of Repa that exposes its internal modules. In this implementation I created a new type of Repa array that represents a lattice structure. The idea was to see whether I could get better performance from Repa by placing the lattice computations inside the array representation. This implementation uses smart indexing.
  • Signal.Wavelet.Vector1 - this implementation is a Haskell rewrite of the C algorithm that was supposed to be my baseline. It uses mutable vectors and lots of unsafe operations. The code is ugly – it is in fact an imperative algorithm written in a functional language.

In most of the above implementations I tried to write my code in a way that is idiomatic for functional languages. After all, this is what the Haskell propaganda advertised – parallelism (almost) for free! The exceptions are the Repa3 and Vector1 implementations.
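
To give a sense of what the within-layer parallelism in the Repa variants looks like, here is a rough, hypothetical single-layer sketch built around computeP. It is not the repository code: the layer shift and wrap-around are omitted, and the cos/sin coefficients are the same illustrative assumption as before.

import Data.Array.Repa as R

-- One lattice layer evaluated in parallel by Repa's computeP.  Even positions
-- play the "x" role of a pair, odd positions the "y" role.
latticeLayerP :: Monad m => Double -> Array U DIM1 Double -> m (Array U DIM1 Double)
latticeLayerP a sig = computeP $ R.traverse sig id step
  where
    c = cos a
    s = sin a
    step get (Z :. i)
      | even i    = c * get (Z :. i)     + s * get (Z :. i + 1)
      | otherwise = s * get (Z :. i - 1) - c * get (Z :. i)

Run it in any monad, compile with -threaded, and pass +RTS -N at run time so that computeP can spread the work over the available cores.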

Results

Criterion tests each of the above implementations by feeding it a vector containing 16384 elements and then performing a 6-layer transformation. Each implementation is benchmarked 100 times. Based on these 100 runs criterion computes the average runtime, the standard deviation, the influence of outlying results on the average, and a few more things, and plots the results. Below are the benchmarking results on an Intel i7 M620 CPU using two cores:

[Figure: bar chart of criterion benchmark results for the DWT implementations]

The “DWT” prefix on all the benchmarks denotes the forward DWT. There is also the IDWT (inverse DWT), but the results are similar so I have omitted them. The “Seq” suffix denotes a sequential implementation, the “Par” suffix a parallel one. As you can see there are no results for the Eval* implementations: they are so slow that the differences between the other implementations would become invisible on the bar chart.

The results are interesting. First of all, the C implementation is really fast. The only Haskell implementation that comes close to it is Vector1. Too bad the code of Vector1 relies on tons of unsafe operations and isn’t written in a functional style at all. All Repa implementations are noticeably slower. The interesting part is that for Repa1 and Repa2 using parallelism slows down execution by a factor of 2. For some reason this is not the case for Repa3, where parallelism improves performance. Sadly, Repa3 is as slow as the implementations that use lazy lists.

The detailed results, which I’m not presenting here because there are a lot of them, raise more questions. For example, in one of the benchmarks run on a slower machine most of the running times for the Repa1 implementation were around 3.25ms, but there was one run that took only around 1ms. What to make of such a result? Were all the runs except that one slowed down by some background process? Is it some mysterious caching effect? Or is it just a criterion glitch? There were many such questions where I wasn’t able to figure out the answer by looking at the criterion results.

There are more benchmarks in the sources – see the benchmark suite file.

Mistakes and other issues

Looking back, I can identify several mistakes that eventually led to the failure of this project. Firstly, I think that focusing on CPU implementations instead of GPU was wrong. My plan was to deal quickly with the CPU implementations, which I thought I knew how to do, and then figure out how to implement these algorithms on a GPU. However, the CPU implementations turned out to be much slower than I expected and I spent a lot of time trying to make my CPU code faster. In the end I never even attempted a GPU implementation.

An important theoretical issue that I should have addressed early in the project is how big an input signal needs to be to benefit from parallelism. Parallelism based on multiple threads comes with the cost of launching and synchronizing threads, and given that the Repa implementations pay that cost for each layer, it adds up to a lot of overhead. As you’ve seen, my benchmarks use vectors with 16K elements. The problem is that this seems to be not enough to benefit from parallelism, and at the same time it is already much more than is encountered in typical real-world applications of the DWT. So perhaps there is no point in parallelizing the lattice structure at all, other than by using SIMD instructions?

I think the main reason this project failed is that I did not have sufficient knowledge of parallelism. I had read several papers on Repa and DPH and thought that I knew enough to implement a parallel version of an algorithm I was familiar with. I struggled to understand the benchmark results I got from criterion, but in hindsight I think that was not a good approach – the right thing to do would have been to look at the generated assembly, something I did not know how to do at the time. I should also have had a deeper understanding of the hardware and of how the operating system handles threads. As a side note, I think this shows that parallelism is not really for free and still requires some arcane knowledge from the programmer. I guess there is a lot left to do in research on parallelism in Haskell.

Summary

I undertook a project that seemed like a relatively simple task, but it ended up as a failure. This was not the first and probably not the last time in my career – it’s just the way science is. I think the major factor that contributed to the failure was not realizing that I had insufficient knowledge. But I don’t consider my work on this a wasted effort: I learned how to use the FFI and how to benchmark and test my code, which in turn led to many posts on this blog.

What remains is an unanswered question: how does one implement an efficient, parallel lattice structure in Haskell? I hope that, thanks to this post and to putting my code on GitHub, someone will answer it.

Acknowledgements

During my work on this project I contacted Ben Lippmeier, the author of the Repa library. Ben helped me realize some things that I had missed in my work, which sped up my decision to abandon the project, and I thank him for that.

UPDATE (28/05/2014)

One of the comments below suggests it would be interesting to see the performance of a parallel implementation in C++ or Rust. In fact I have attempted a parallel implementation in C using SSE3 SIMD instructions. I undertook that effort a few months after giving up on the project, with the sole purpose of seeing whether the C implementation could be made faster. I hadn’t finished that attempt, so I did not describe it in the original post, but since the subject was raised I’ll briefly describe what I have accomplished. The idea was to modify the C1 implementation and rewrite the computations using Intel intrinsics. That turned out to be quite simple, although at one point I ran into some unexpected segmentation faults that I was unable to debug. Since this was taking more time than I was planning to dedicate to the experiment, I gave up. I tested the implementation just now and, surprisingly, there are no segfaults. Still, the code is incomplete – signal wrap-around in the odd layers is not implemented, and from eyeballing the results I suspect there may be some other bugs. I’ve run the benchmarks and the results show that using SSE3 speeds up the C implementation by about 25%-30%, which is quite a lot. Implementing the signal wrap-around will certainly slow it down, but I still think the performance gain will remain significant. I pushed my work to the sse3 branch. Feel free to finish the implementation.

  1. Orthogonal transform, to be more precise. It is possible to construct lattice structures for biorthogonal wavelets, but that is well beyond the scope of this post.
Categories: Offsite Blogs

Ken T Takusagawa: [xjrnkknh] RandT ST

Planet Haskell - Thu, 05/22/2014 - 7:07pm

Here is a brief example of combining the RandT monad transformer with the ST monad. We write a random value into an STRef, then read it. The magic function is lift :: (MonadTrans t, Monad m) => m a -> t m a .

{-# LANGUAGE ScopedTypeVariables #-}
module Main where {
import Control.Monad.Random(RandT, getRandomR, evalRandT);
import Control.Monad.ST.Lazy(ST, runST);
import System.Random(RandomGen, StdGen, mkStdGen);
import Control.Monad.Trans(lift);
import Data.STRef.Lazy(STRef, writeSTRef, readSTRef, newSTRef);

-- We could use a shortcut like this, but will not for pedagogical purposes.
type RS s a = RandT StdGen (ST s) a;

doWrite :: (RandomGen g) => STRef s Int -> RandT g (ST s) ();
doWrite v = do {
  r :: Int <- getRandomR (1, 6);
  lift $ writeSTRef v r;
};

foo :: (RandomGen g) => RandT g (ST s) Int;
foo = do {
  v :: STRef s Int <- lift $ newSTRef 0;
  doWrite v;
  out :: Int <- lift $ readSTRef v;
  return out;
};

runAll :: Int;
runAll = runST $ evalRandT foo $ mkStdGen 12345;

main :: IO ();
main = print runAll;
}

Here is the output, typical of the known flaw in the first sample of the random number generator.
6

Previously, an example of ErrorT and ST.

Categories: Offsite Blogs

Eric Kidd: Learning Middle Egyptian with Anki, slowly

Planet Haskell - Thu, 05/22/2014 - 6:30pm

Although I don't usually mention it here, one of my hobbies is learning languages. French is my strongest by far, but I've been experimenting with seeing just how slowly I can learn Middle Egyptian. Normally, I need to reach a certain minimum degree of obsession to actually make progress, but it turns out that software can help a bit, as I explain in this post on the Beeminder blog.

But when I decided to learn Egyptian, I was faced with a dilemma: I couldn't justify spending more than an hour per week on it. Hieroglyphs are cool, but come on—it's a dead language. Unfortunately, it's hard to learn a language in slow motion, because two things always go wrong:

  1. I get distracted, and I never actually put in that hour per week…
  2. I forget everything I learn between lessons…

Of course, one key tool here is Anki, which cleverly exploits the spacing effect of human memory. To oversimplify, if I'm forced to recall something shortly before I would have otherwise forgotten it, I'll remember it at least twice as long the next time. This allows remembering things for O(2^N) time for N effort, which is a nice trick.

Hierogloss

On a related note, I have a new toy up on GitHub: hierogloss, which extends Markdown with support for interlinear glosses rendered using JSesh:

H: z:A1*Z1 |
Categories: Offsite Blogs