I hate it when maintainers become unreachable. At the same time, I’m not immune to that myself (if nothing else, I could be hit by a bus tomorrow).
So I contacted a few people with a request to become backup maintainers (BM) for some of my more popular Haskell packages, and they kindly agreed. Specifically:
- Oliver Charles is now BM for all my testing-related packages: tasty and its add-ons, smallcheck, obsolete test-framework add-ons, and ansi-terminal (a dependency of tasty)
- Adam Bergmark is now BM for the haskell-suite projects: haskell-names, haskell-packages, hse-cpp, and traverse-with-class (a dependency of haskell-names)
- Sjoerd Visscher is co-BM for traverse-with-class
- Oleksandr Manzyuk is now BM for ariadne and bert (a dependency of ariadne)
Being a backup maintainer comes with very light responsibilities:
- should I become unreachable (temporarily or permanently), and a change has to be made to a package to keep it usable, the BM is supposed to review, apply, and release that change.
- if I am unreachable for a long time or permanently, and there’s a person/people who want to take over maintenance/development of all or some of the packages, and the BM has no objections to them doing so, the BM is supposed to give them the necessary privileges. (Of course, that person may be the BM him/herself!)
The BM for a package is added as a maintainer of that package on Hackage and as a collaborator on the package’s GitHub repository.
To be clear, there’s no obligation for the BM to fix bugs or continue development after I disappear. It would be unreasonable to ask a person to commit to such a responsibility at an indefinite point in the future.
I assume that if a project is important, there will be people willing to take care of it; and if it isn’t, then it doesn’t matter anyway. The main responsibility of the BM is thus to make it easy for such a hypothetical person to take over.
As to what it means to be «unreachable», I completely trust my BMs’ judgement here. I don’t want them to follow any bureaucratic procedures. The risk of something going wrong is very low and easily outweighed by the benefits of a timely response to problems.
One package that doesn’t have a BM yet is standalone-haddock. If you use it and would like to become a BM, please get in touch.
I also encourage other package maintainers to follow this example and appoint BMs for their popular packages.
For quite a while I’ve been using a small Vim plugin that lets me write configuration specific to a system: it loaded a config file based on the system’s host name. Unfortunately I can’t seem to find that plugin anywhere now, so I’ve put it in a snippet. It allowed me to keep a single clean Vim configuration and check it into version control, while still allowing for settings that are unique to a system.
Lately I’ve found it slightly limited, though: I wanted other things besides the host name to be able to trigger the loading of some piece of configuration. So I wrote my first ever Vim plugin: localcfg.
Hopefully someone will find it useful.
I was rather frustrated with the use of Template Haskell as the main entry point for the big framework projects (Yesod, Snap, etc.). While these frameworks offer template-free options, those feel like second-class citizens. So I started work on Wheb with the goal that, out of the box, I could start a project quickly, without Template Haskell and without the dreaded long list of language pragmas at the top. I was inspired by the simplicity of the Scotty project, which showed how easy a Haskell server could be to write.
I have included a couple of plugin examples for handling Auth and Sessions. Wheb is database and template agnostic, but I plan to write some plugins soon to make it easier to use the major libraries with Wheb.
I just started work on it last weekend but wanted to share my progress. Take a look if it interests you!
The majority of coding I do is in Python and C, so I decided to build a Forth interpreter in Haskell to get back into it. I feel like my Haskell code is improving (especially since I learned about lambdabot's @pl function (just kidding)), but if any more advanced Haskellers have any ideas for me, I'd be glad to hear them.
the interpreter is here: https://github.com/0x65/hforth
It just features the basics, like allowing you to declare words, conditionals, and a couple of primitives. I plan on adding loops soon; I just haven't read through that chapter of "Starting Forth" yet :)
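For anyone curious what the core of such an interpreter looks like, here is a minimal sketch of a stack-based evaluator in the same spirit. This is illustrative code only, not taken from the hforth repository; the word set and error handling are my own choices:

```haskell
import Control.Monad (foldM)
import Data.Char (isDigit)

data Token = Num Int | Word String

-- Forth is whitespace-separated, so tokenizing is just `words`.
tokenize :: String -> [Token]
tokenize = map toTok . words
  where
    toTok w
      | all isDigit w = Num (read w)
      | otherwise     = Word w

-- The data stack is a list of Ints; each token transforms it.
step :: [Int] -> Token -> Either String [Int]
step st       (Num n)       = Right (n : st)
step (a:b:st) (Word "+")    = Right (b + a : st)
step (a:b:st) (Word "-")    = Right (b - a : st)
step (a:b:st) (Word "*")    = Right (b * a : st)
step (a:st)   (Word "dup")  = Right (a : a : st)
step (a:b:st) (Word "swap") = Right (b : a : st)
step _        (Word w)      = Left ("stack underflow or unknown word: " ++ w)

-- Fold the token stream over an empty stack, stopping at the first error.
run :: String -> Either String [Int]
run = foldM step [] . tokenize

main :: IO ()
main = print (run "1 2 + 4 *")  -- Right [12]
```

Threading the stack through `foldM` in the `Either` monad gives error propagation for free; a real interpreter would add a dictionary for user-defined words on top of this.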
There is a post on Hacker News that says SPJ is to be knighted. Can anyone verify? If true, congratulations to Simon.
I just released version 0.12.0, which should compile on GHC 7.8 once the following packages are fixed:
- regex-pcre-builtin (Edit 2014/02/09: now updated)
- strict-base-types (Edit 2014/02/09: now updated)
- charset (There is a pull request under way)
It seems to compile fine against GHC 7.6.3, even though I couldn’t really test the resulting executable (I gave Nix a shot, but hruby is somewhat broken as a result).
This release doesn’t bring much to the table apart from hypothetical 7.8 compatibility.
I made several claims of performance increases previously, so here are the results (no 49-node measurements were taken for 0.10.5):

|              | 0.10.5 | 0.10.6 | 0.11.1 | 0.12.0 |
|--------------|--------|--------|--------|--------|
| 49 nodes, N1 |        | 10.74s | 9.76s  | 9.03s  |
| 49 nodes, N2 |        | 10.48s | 7.66s  | 7.01s  |
| 49 nodes, N4 |        | 9.7s   | 6.89s  | 6.37s  |
| 49 nodes, N8 |        | 12.46s | 13.4s  | 11.77s |
| Single node  | 2.4s   | 2.24s  | 2.02s  | 1.88s  |
The measurements were done on my workstation, sporting a 4-core processor with hyper-threading (8 logical cores).
The performance improvements can be explained in the following way:
- Between 0.10.5 and 0.10.6, the Ruby interpreter’s mode of execution was changed from a Channel-based system to an MVar-based one.
- Between 0.10.6 and 0.11.1, all systems that would run on their own thread were modified to use the calling thread instead, reducing synchronization overhead (except for the Ruby thread). This gave a 9% performance boost for single threaded work, and a 29% performance boost when using four cores. The 8-cores performance worsened, because of the wasted work of the parser (This is explained in the previous post).
- Between 0.11.1 and 0.12.0, I moved from GHC 7.6.3 to GHC 7.8-rc1, and bumped the version of many dependencies (including text and aeson, both having received a speed boost recently). This resulted in a “free” 7% speed boost.
As shown here, the initial parsing is extremely costly: computing the catalogs for 49 nodes takes about 5 times as long as computing one for a single node. As the parsed files get cached, catalog computation becomes more and more effective (about 50 times faster than Puppet). I don’t think the current parser can be sped up significantly without sacrificing its readability, so this is about as fast as it will get.
The next goals are a huge simplification of the testing system, and perhaps an external DSL. There are compiled binaries and Ubuntu packages at the usual place.
So there was a discussion recently on the libraries mailing list about how to deal with MonadPlus. In particular, the following purported law fails all over the place: x >> mzero = mzero. The reason it fails is that it essentially assumes that any "effects" x has can be undone once we realize the whole computation is supposed to "fail". Indeed, this rule is too strong to make sense for our general notion that MonadPlus provides a notion of choice or addition. I propose that the correct notion MonadPlus should capture is that of a right-seminearring. (The name right-nearsemiring is also used in the literature.) Below I explain what the heck a (right-)seminearring is.

Monoids
First, I will assume you know what a monoid is. In particular, it's any associative binary operation with a distinguished element which serves as both left- and right-identity for the binary operation. These are ubiquitous and have become fairly well-known in the Haskell community of late. A prime example is (+,0), that is, addition together with the zero element, for just about any notion of "numbers". Another prime example is (*,1), multiplication together with unit, again for just about any notion of "numbers".
An important caveat regarding intuitions: both "addition" and "multiplication" of our usual notions of numbers turn out to be commutative monoids. For the non-commutative case, let's turn to regular expressions (regexes). First we have the notion of regex catenation, which captures the notion of sequencing: first we match one regex and then another; let's write this as (*,1), where we take 1 to mean the regex which matches only the empty string. This catenation is very different from multiplication of numbers because we can't swap things around. The regex a*b will first match a and then match b; whereas the regex b*a will match b first. Nevertheless, catenation (of strings/sequences/regexes/graphs/...) together with the empty element still forms a monoid, because catenation is associative and catenating the empty element does nothing, no matter which side you catenate on.
Importantly, the non-deterministic choice for regexes also forms a monoid: (+,0) where we take 0 to be the absurd element. Notably, the empty element (e.g., the singleton set of strings, containing only the empty string) is distinct from the absurd element (e.g., the empty set of strings). We often spell 1 as ε and spell 0 as ∅; but I'm going to stick with the arithmetical notation of 1 and 0.

Seminearrings
Okay, so what the heck is a right-seminearring? First, we assume some ambient set of elements. They could be "numbers" or "strings" or "graphs" or whatever; but we'll just call them elements. Second, we assume we have a semigroup (*)— that is, our * operator is associative, and that's it. Semigroups are just monoids without the identity element. In our particular case, we're going to assume that * is non-commutative. Thus, it's going to work like catenation— except we don't necessarily have an empty element to work with. Third, we assume we have some monoid (+,0). Our + operator is going to act like non-deterministic choice in regexes— but, we're not going to assume that it's commutative! That is, while it represents "choice", it's some sort of biased choice. Maybe we always try the left option first; or maybe we always try the right option first; or maybe we flip a biased coin and try the left option first only 80% of the time; whatever, the point is it's not entirely non-deterministic, so we can't simply flip our additions around. Finally, we require that our (*) semigroup distributes from the right over our (+,0) monoid (or conversely, that we can factor the monoid out from under the semigroup, again only factoring out parts that are on the right). That is, symbolically, we require the following two laws to hold:
0*x = 0
(x+y)*z = (x*z)+(y*z)
So, what have we done here? Well, we have these two interlocking operations where "catenation" distributes over "choice". The first law means: (1) if we first do something absurd or impossible and then do x, well, that's impossible; we'll never get around to doing x, so we might as well just drop that part. The second law means: (2) if we first have a choice between x and y and then catenate whichever one with z, this is the same as saying our choice is really between doing x followed by z vs doing y followed by z.

MonadPlus
Okay, so what does any of this have to do with MonadPlus? Intuitively, our * operator is performing catenation or sequencing of things. Monads are all about sequencing. So how about we use the monad operator (>>) as our "multiplication"! This does what we need it to since (>>) is associative, by the monad laws. In order to turn a monad into a MonadPlus we must define mplus (aka the + operator) and we must define a mzero (aka the 0 element). And the laws our MonadPlus instance must uphold are just the two laws about distributing/factoring on the right. In restating them below, I'm going to generalize the laws to use (>>=) in lieu of (>>):
mzero >>= f = mzero
(x `mplus` y) >>= f = (x >>= f) `mplus` (y >>= f)
And the reason why these laws make sense are just as described before. If we're going to "fail" or do something absurd followed by doing something else, well we'll never get around to that something else because we've already "failed". And if we first make a choice and then end up doing the same thing regardless of the choice we made, well we can just push that continuation down underneath the choice.
Both of these laws make intuitive sense for what we want out of MonadPlus. And given that seminearrings are something which have shown up often enough to be named, it seems reasonable to assume that's the actual pattern we're trying to capture. The one sticking point I could see is my generalization to using (>>=). In the second law, we allow f to be a function which "looks inside" the monad, rather than simply being some fixed monadic value z. There's a chance that some current MonadPlus implementations will break this law because of that insight. If so, then we can still back off to the weaker claim that MonadPlus should implement a right-seminearring exactly, i.e., with the (>>) operator as our notion of multiplication/catenation. This I leave as an exercise for the reader. This is discussed further in the addendum below.
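Both laws are easy to check concretely for the list monad, where mzero = [] and mplus = (++); the test values below are arbitrary choices of mine:

```haskell
import Control.Monad (mplus, mzero)

-- Law 1: mzero >>= f = mzero, for the list MonadPlus.
leftZero :: Bool
leftZero = (mzero >>= f) == (mzero :: [Int])
  where
    f :: Int -> [Int]
    f x = [x, x + 1]

-- Law 2: (x `mplus` y) >>= f = (x >>= f) `mplus` (y >>= f).
rightDistrib :: Bool
rightDistrib = ((xs `mplus` ys) >>= f) == ((xs >>= f) `mplus` (ys >>= f))
  where
    xs, ys :: [Int]
    xs = [1, 2]
    ys = [3]
    f x = [x, x * 10]

main :: IO ()
main = print (leftZero && rightDistrib)  -- True
```

Lists satisfy both laws even in the generalized (>>=) form, which is part of why they are the textbook model of non-deterministic choice.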
Notably, from these laws it is impossible to derive x*0 = 0, aka x >> mzero = mzero. And indeed that is a stringent requirement to have, since it means we must be able to undo the "effects" of x, or else avoid doing those "effects" in the first place by looking into the future to know that we will eventually "fail". If we could look into the future to know we will fail, then we could implement backtracking search for logic programming in such a way that we always pick the right answer. Not just return results consistent with always choosing the right answer, which backtracking allows us to do; but rather, to always know the right answer beforehand and so never need to backtrack! If we satisfy the x*0 = 0 law, then we could perform all the "search" during compile time when we're applying the rewrite rule associated with this law.

Addendum
There's a long history of debate between proponents of the generalized distribution law I presented above, vs the so-called "catch" law. In particular, Maybe, IO, and STM obey the catch law but do not obey the generalized distribution law. To give an example, consider the following function:
f a' = if a == a' then mzero else return a'
Which is used in the following code and evaluation trace for the Maybe monad:
mplus (return a) b >>= f
⟶ Just a >>= f
⟶ f a
⟶ if a == a then mzero else return a
⟶ mzero
As opposed to the following code and evaluation trace:
mplus (return a >>= f) (b >>= f)
⟶ mplus (f a) (b >>= f)
⟶ mplus mzero (b >>= f)
⟶ b >>= f
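To make the failure concrete, here is the same counterexample as runnable code, instantiating a = 1 and b = Just 2 (values chosen only for illustration):

```haskell
import Control.Monad (mplus, mzero)

-- The continuation "looks inside": it fails exactly when it sees a = 1.
f :: Int -> Maybe Int
f x = if x == 1 then mzero else return x

-- Left- and right-hand sides of the generalized distribution law.
lhs, rhs :: Maybe Int
lhs = mplus (return 1) (Just 2) >>= f        -- = Just 1 >>= f = Nothing
rhs = mplus (return 1 >>= f) (Just 2 >>= f)  -- = mplus Nothing (Just 2) = Just 2

main :: IO ()
main = print (lhs, rhs)  -- (Nothing,Just 2): the two sides disagree
```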
But b >>= f is not guaranteed to be identical to mzero. The problem here is, as I suspected, that the generalized distribution law allows the continuation to "look inside". If we revert to the non-generalized distribution law which uses (>>), then this problem goes away, at least for the Maybe monad.

Second Addendum (2014.02.06)
Even though Maybe satisfies the non-generalized distributivity laws, it's notable that other problematic MonadPlus instances like IO fail even there! For example,
First consider mplus a b >> (c >> mzero). Whenever a succeeds, we get that this is the same as a >> c >> mzero; and if a fails, then this is the same as a' >> b >> c >> mzero where a' is the prefix of a up until failure occurs.
Now instead consider mplus (a >> c >> mzero) (b >> c >> mzero). Here, if a succeeds, then this is the same as a >> c >> b >> c >> mzero; and if a fails, then it's the same as a' >> b >> c >> mzero. So the problem is, depending on whether we distribute or not, the effects of c will occur once or twice.
Notably, the problem we're running into here is exactly the same one we started out with, the failure of x >> mzero = mzero. Were this law to hold for IO (etc) then we wouldn't run into the problem of running c once or twice depending on distributivity.
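This behaviour can be observed directly with base's MonadPlus IO instance (where mzero throws an IOError and mplus recovers from it), by counting how many times the effect c actually runs. The withCounter helper below is my own scaffolding, not part of any library:

```haskell
import Control.Exception (IOException, catch)
import Control.Monad (mplus, mzero)
import Data.IORef (modifyIORef, newIORef, readIORef)

-- Runs a program that is handed the effect c (a counter increment),
-- swallows the final mzero failure, and reports how often c ran.
withCounter :: (IO () -> IO ()) -> IO Int
withCounter k = do
  ref <- newIORef 0
  k (modifyIORef ref (+ 1)) `catch` handler
  readIORef ref
  where
    handler :: IOException -> IO ()
    handler _ = return ()

main :: IO ()
main = do
  let a = return () :: IO ()  -- a succeeds, so mplus never tries b here
      b = return ()
  n1 <- withCounter (\c -> mplus a b >> (c >> mzero))
  n2 <- withCounter (\c -> mplus (a >> c >> mzero) (b >> c >> mzero))
  print (n1, n2)  -- (1,2): c runs once undistributed, twice distributed
```

In the distributed version, the mzero at the end of the first branch is caught by mplus, which then runs the second branch, so c's effect is duplicated, exactly the discrepancy described above.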
It indexes the "big" sites that talk about Haskell. This is a test/beta version. If it is a good idea I can continue.