# News aggregator

### Having trouble with Data.Vector.Binary

Haskell on Reddit - Mon, 02/02/2015 - 6:56pm

Hi everyone,

I'm having some issues with populating a Vector Int16 from binary data using the vector-binary-instances package. My binary file is only about 47 meg on the disk, but every time I run the decode function I get the exception

```
Exception: ./Data/Vector/Generic/Mutable.hs:494 (new): negative length -5767399219783011081
```

It's baffling me because the new function takes an Int as an argument, and the number of Int16s in my file easily fits into an Int without overflow.

I can construct the vectors piece-by-piece using the Get monad, but I'm looking for something that will run a bit faster, as using a function like

```haskell
getN n = V.replicateM n (fmap fromIntegral getWord16be)
```

is running too slowly.
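The exception is consistent with decode misreading raw sample data as a length prefix: the Binary encoding of a vector begins with its length (an Int, serialised as eight big-endian bytes), so if the bytes at the seek offset are raw Int16 samples rather than the output of encode, the first eight bytes get interpreted as the length. A small sketch of the misread (`misreadLength` is my name, not part of any library; the prefix layout is my assumption based on the error):

```haskell
import Data.Binary.Get (runGet, getWord64be)
import qualified Data.ByteString.Lazy as BL
import Data.Int (Int64)

-- Interpret the first eight bytes the way decode interprets a length
-- prefix. On a stream of raw Int16 samples whose first byte is >= 0x80,
-- the result is a huge negative number, as in the error message above.
misreadLength :: BL.ByteString -> Int64
misreadLength = runGet (fmap fromIntegral getWord64be)
```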

Here's my code, in case it helps:

```haskell
module Main where

import System.IO
import Data.Int
import Data.Binary
import Data.Binary.Get
import qualified Data.ByteString.Lazy as B
import qualified Data.Vector as V
import Data.Vector.Binary

myFile = "some-binary-file"
mySeek = 2304

main :: IO ()
main = do
  handle <- openFile myFile ReadMode
  hSeek handle AbsoluteSeek mySeek
  bs <- B.hGetContents handle
  print $ B.length bs
  let vec = decode bs :: V.Vector Int16
  print $ V.length vec
```

submitted by bstamour
Categories: Incoming News

### Call for Submissions: Haskell Symposium 2015

General haskell list - Mon, 02/02/2015 - 1:42pm
```
=====================================================================
              ACM SIGPLAN CALL FOR SUBMISSIONS
                   Haskell Symposium 2015
   Vancouver, Canada, 3-4 September 2015, directly after ICFP
        http://www.haskell.org/haskell-symposium/2015
=====================================================================
```

** The Haskell Symposium has an early track this year **
** See the Submission Timetable for details. **

The ACM SIGPLAN Haskell Symposium 2015 will be co-located with the International Conference on Functional Programming (ICFP 2015) in Vancouver, Canada. The Haskell Symposium aims to present original research on Haskell, discuss practical experience and future development of the language, and to promote other forms of denotative programming. Topics of interest include:

* Language Design, with a focus on possible extensions and modifications of Haskell as well as critical discussions of the status quo;
* Theory, s
Categories: Incoming News

### GHC API - list all exports of a module

haskell-cafe - Mon, 02/02/2015 - 1:21pm
Dear Cafe,

Currently I'm working on a small program for which I need a list of all *exported* functions and data types. In order to achieve this I've been toying around with the GHC API. My observations so far:

* Using the pure parser function yields a HsModule object, which gives me a list of all exports and all declarations (internal & exported). I cannot or don't know how to use the list of exports (just RdrNames) to filter the declarations.
* Using parseModule (which is essentially the same as the parser function) and typecheckModule on a ModuleGraph yields a TypecheckedModule object which contains a list of exported Names, which in turn can be used to get additional information about them.

So, the second approach seems more promising, but I'm not too happy with it. The reason is that calling typecheckModule results in a compilation of the module and all of its dependencies, which I think is not necessary for what I want to achieve. This is especially cumbersome for lar
Categories: Offsite Discussion

### Monadic yak shaving | «Insert Name Here»

del.icio.us/haskell - Mon, 02/02/2015 - 1:19pm
Categories: Offsite Blogs

### Toccata

del.icio.us/haskell - Mon, 02/02/2015 - 1:13pm
Categories: Offsite Blogs

### What could be the possible reason for linking errors on "_info" and "_closure"?

haskell-cafe - Mon, 02/02/2015 - 11:27am
Hi, I am making a cabal project including a library and an executable using the library. Building the library is fine. But when linking src/main, I get "undefined reference to someFunction1_info" and "someFunction1_closure".
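Undefined references to `_info`/`_closure` symbols at link time usually mean the compiled library objects aren't reaching the final link: typically the defining module is missing from the library's module list, or the executable doesn't actually depend on the package's own library. A hedged sketch of the usual `.cabal` shape (package and module names hypothetical):

```
library
  -- every module defining used symbols must be listed here
  exposed-modules:  My.Library.Module
  build-depends:    base

executable main
  main-is:          src/main.hs
  -- the executable must depend on the package's own library by name
  build-depends:    base, my-package
```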
Categories: Offsite Discussion

### Difficulties with cyclic data structures

Haskell on Reddit - Mon, 02/02/2015 - 10:59am

I've recently come up with a problem that's been much harder than I would have expected, and I was hoping I could tap the bright minds around here to point me in the right direction.

I'm trying to make a text adventure game in the vein of Zork. The player can walk between rooms, pick up items, there are some locked doors and rooms that require a torch – you get the idea. No matter how hard I try, I can't come up with a solution that allows efficient "updates" to the game world, while maintaining type safety and not introducing unpalatably large amounts of boilerplate.

Consider the following game world:

```
                 +------+
                 |      |
         +-------+ Yard +--------+
         |       |      |        |
         |       +------+        |
  Locked |                       | Locked
  +------+-----+         +-------+----+
  |            |  Door   |            |
  | Kitchen    +---------+ Livingroom |
  |            |         |            |
  | Contains:  |         +-------+----+
  | Flashlight |                 |
  +------------+         +-------+----+
                         | Basement   |
                         |            |
                         | Dark       |
                         |            |
                         | Contains:  |
                         | Key        |
                         +------------+
```

The obvious (and obviously wrong) solution would be to create some record definitions like this:

```haskell
data Room = Room
  { roomName  :: String
  , roomItems :: [Item]
  , roomDark  :: Bool
  , roomDoors :: [Door]
  }

data Item = Item
  { itemName   :: String
  , makesLight :: Bool
  }

data Door = Door
  { doorFrom   :: Room
  , doorTo     :: Room
  , doorLocked :: Bool
  }
```

The problem is that our records are going to form a directed cyclic graph. Tying the knot allows us to construct cyclic data structures, but we can't "modify" them. The player is going to be walking around and picking up items and interacting with things, so that won't do.
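To see the limitation concretely, here is a tiny knot-tying sketch (my own, with hypothetical names): building the cycle is easy, but a record update only produces a new node, while every old reference keeps pointing at the original.

```haskell
data KRoom = KRoom { kName :: String, kDoors :: [KRoom] }

-- Tying the knot: two mutually referencing rooms.
kitchen, livingroom :: KRoom
kitchen    = KRoom "Kitchen"    [livingroom]
livingroom = KRoom "Livingroom" [kitchen]

-- "Renaming" the kitchen yields a new node, but livingroom's door
-- still leads to the old "Kitchen": the cycle has silently split.
renamed :: KRoom
renamed = kitchen { kName = "Scullery" }
```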

I've come up with 3 workarounds, but all of them are undesirable in some way or another.

First, I can stick everything into a proper graph data structure with a library like FGL. This solves the update problem, and gives me some other nice properties. The problem stems from the fact that anything that needs to reference anything else must be in the graph. That means rooms, items, containers, non-player characters, and most other things will need to be nodes in the graph. Now I need to create an algebraic data type like this:

```haskell
data Node = NodeItem Item
          | NodeRoom Room
          | NodeCharacter Character
          | NodeContainer Container
```

An example of the graph might look like this:

```
yard :: RoomNode
  --LockedDoor--  kitchen
  --LockedDoor--  livingroom
kitchen :: RoomNode
  --Door--        livingroom
  --Contains--    key :: ItemNode
livingroom :: RoomNode
  --Door--        basement
basement :: RoomNode
  --Contains--    flashlight :: ItemNode
```

This certainly works, but now the type checker isn't able to help me as much as it would have before. I would have gotten a compile-time error before if I tried to add an item to the list of connected rooms, for example. Now I have this single type which encompasses most of the types in my game. In order to keep my functions total, I'm going to have to do a lot of things like this:

```haskell
itemRoom :: Node -> Room
itemRoom (NodeItem x) = getItemRoom x
itemRoom _            = error "This shouldn't happen."
```

The function is now total, but I won't get a compile error if someone calls itemRoom with, say, a NodeRoom (which is a nonsensical thing to do). It also strikes me as a code smell if 90% of my functions are guarding against weird cases like this. I think it indicates that the type system is being abused, as you should have a pretty good idea about the incoming types most of the time.
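If the set of payloads is closed, one alternative (my sketch, not from the post) is to index Node by what it stores using GADTs and DataKinds, so a function can demand the right kind of node at compile time:

```haskell
{-# LANGUAGE GADTs, DataKinds, KindSignatures #-}

data NodeKind = ItemK | RoomK

-- Each constructor records, in the type, what the node holds.
data Node (k :: NodeKind) where
  NodeItem :: Item -> Node 'ItemK
  NodeRoom :: Room -> Node 'RoomK

data Item = Item { itemName :: String }
data Room = Room { roomName :: String }

-- This function can only ever be handed an item node; passing a
-- Node 'RoomK is a type error rather than a runtime crash.
itemName' :: Node 'ItemK -> String
itemName' (NodeItem i) = itemName i
```

The catch, which the FGL framing would still hit, is that storing differently indexed nodes in one graph then requires an existential wrapper, which gives up some of this checking again.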

I could use Maybe for everything:

```haskell
itemRoom :: Node -> Maybe Room
itemRoom (NodeItem x) = Just $ getItemRoom x
itemRoom _            = Nothing
```

But this doesn't seem much better. We've eliminated a bunch of potential runtime errors, which is great, but the compiler is just ignoring a huge variety of potential nonsense code. itemRoom should only ever be called on an item, and I'd like to get a compile error if I should call it on something else. Plus, I think wrapping 90% of your functions in the same monad is probably a code smell as well.

A variant of this idea is storing everything in a Map. It changes the underlying data structure, but the same problems still exist. In addition to that, now all of my rooms, items, containers, and characters share a single namespace. I consider that to be even worse for this use case.

Option 2 is to set up a series of simply typed lists and treat my items as references into those lists. That might look something like this:

```haskell
data GameWorld = GameWorld
  { items      :: [Item]
  , rooms      :: [Room]
  , containers :: [Container]
  , characters :: [Character]
  }

type ItemRef      = Int
type RoomRef      = Int
type ContainerRef = Int
type CharacterRef = Int
```

Now updating a Room (for example) means updating the rooms list on the GameWorld at a given index. That works nicely, and I get compile time errors, as my lists have nice, simple types. There's still a problem though, which is that this introduces the possibility that a connected thing won't be in the list. For instance, if I construct a new Room that has a connection to RoomRef 7, it's possible that the rooms list won't have an entry at that index. I would have gotten a compile time error before, but now I won't. I can compensate for that by wrapping virtually everything with Maybe, but as I mentioned before, I feel like that counts as working around the type system rather than with it. Aside from all of that, this additional layer of indirection adds a lot of boilerplate.
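One way to tame Option 2's shared-namespace problem (a sketch of my own, not from the post; all names hypothetical) is a phantom-typed reference over IntMap-backed storage: each Ref carries the type it points at, so passing a Ref Room where a Ref Item is expected becomes a compile-time error, while updates stay cheap map operations.

```haskell
import qualified Data.IntMap.Strict as IM

-- The phantom parameter 'a' records what the reference points at.
newtype Ref a = Ref Int deriving (Eq, Show)

data Room = Room { roomName :: String, roomItems :: [Ref Item] }
data Item = Item { itemName :: String }

data GameWorld = GameWorld
  { rooms :: IM.IntMap Room
  , items :: IM.IntMap Item
  }

-- One short instance per stored type replaces the per-type boilerplate.
class Has a where
  deref  :: GameWorld -> Ref a -> Maybe a
  adjust :: (a -> a) -> Ref a -> GameWorld -> GameWorld

instance Has Room where
  deref w (Ref k)    = IM.lookup k (rooms w)
  adjust f (Ref k) w = w { rooms = IM.adjust f k (rooms w) }

instance Has Item where
  deref w (Ref k)    = IM.lookup k (items w)
  adjust f (Ref k) w = w { items = IM.adjust f k (items w) }
```

deref still returns Maybe, so the dangling-reference problem remains, but mixing up reference types is now caught by the compiler.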
It's not the end of the world, but now I need functions for resolving a RoomRef to a Room, updating a Room at a given RoomRef, removing a Room at a given RoomRef, and all sorts of other things. This explodes into the Cartesian product of my types and the generic functions I'd want for working with them. There are probably some nice type-level programming tricks I could use to eliminate the boilerplate, but I admit that I haven't spent more than a few hours learning about the topic.

Option 3 is unfortunate, as it will probably work, but is completely evil. I could use STRefs (or IORefs). References are type checked and must point at a thing, so that covers compile time errors for incorrect type usage and missing values. The problem is obvious though; now I'm infecting almost everything with a stateful monad, which takes me out of the beautiful world of Haskell and right back to having mutable state everywhere.

So I turn to you. Do you have any thoughts about how I could manage this? An ideal solution would allow me to:

• connect a series of dissimilar things in a type safe way
• be able to "update" the data structure coherently
• get compile time errors when a Room is passed where an Item was required (etc.)
• get compile time errors when a required value isn't present
• not involve huge amounts of boilerplate (tricks to eliminate the boilerplate are welcome)
• not involve wrapping every function in the IO monad (or something similar)

submitted by ask_me_about_cats [link] [37 comments]
Categories: Incoming News

### PFQ 4.1 is out!
Haskell on Reddit - Mon, 02/02/2015 - 10:34am
Categories: Incoming News

### Robert Harper: Summer of Programming Languages

Planet Haskell - Mon, 02/02/2015 - 9:51am

Having just returned from the annual Oregon Programming Languages Summer School, at which I teach every year, I am once again very impressed with the growth in the technical sophistication of the field and with its ability to attract brilliant young students whose enthusiasm and idealism are inspiring. Eugene was, as ever, an ideal setting for the summer school, providing a gorgeous backdrop for work and relaxation. I was particularly glad for the numerous chances to talk with students outside of the classroom, usually over beer, and I enjoyed, as usual, the superb cycling conditions in Eugene and the surrounding countryside.

Many students commented to me that the atmosphere at the summer school is wonderful, filled with people who are passionate about programming languages research, and suffused with a spirit of cooperation and sharing of ideas. Started by Zena Ariola a dozen years ago, this year's instance was organized by Greg Morrisett and Amal Ahmed in consultation with Zena. As usual, the success of the school depended critically on the dedication of Jim Allen, who has been the de facto chief operating officer since its inception. Without Jim, OPLSS could not exist. His attention to detail and his engagement with the students are legendary.

Support from the National Science Foundation CISE Division, ACM SIGPLAN, Microsoft Research, Jane Street Capital, and BAE Systems was essential for providing an excellent venue, for supporting a roster of first-rate lecturers, and for supporting the participation of students who might otherwise not have been able to attend. And, of course, an outstanding roster of lecturers donated their time to come to Eugene for a week to share their ideas with the students and their fellow lecturers.
The schedule of lectures is posted on the web site; all of the lectures were taped, and the videos are available on the web. In addition many speakers provided course notes, software, and other backing materials that are also available online. So even if you were not able to attend, you can still benefit from the summer school, and perhaps feel more motivated to come next summer. Greg and I will be organizing next year's school, in consultation with Zena. Applying the principle "don't fix what isn't broken", we do not anticipate major changes, but there is always room for improvement and the need to freshen up the content every year.

For me the central idea of the summer school is the applicability of deep theory to everyday practice. Long a dream held by researchers such as me, these connections become more "real" every year as the theoretical abstractions of yesterday become the concrete practices of today. It's breathtaking to see how far we've come from the days when I was a student just beginning to grasp the opportunities afforded by ideas from proof theory, type theory, and category theory (the Holy Trinity) to building beautiful software systems. No longer the abstruse fantasies of mad (computer) scientists, these ideas are the very air we breathe in PL research. Gone are the days of ad hoc language designs done in innocence of the foundations on which they rest. Nowadays serious industrial-strength languages are emerging that are grounded in theory and informed by practice.

Two examples have arisen just this summer, Rust (from Mozilla) and Swift (from Apple), that exemplify the trend. Although I have not had time to study them carefully, much less write serious code using them, it is evident from even a brief review of their web sites that these are serious languages that take account of the academic developments of the last couple of decades in formulating new language designs to address new classes of problems that have arisen in programming practice.
These languages are type safe, a basic criterion of sensibility, and feature sophisticated type systems that include ideas such as sum types, which have long been missing from commercial languages, or provided only in comically obtuse ways (such as objects). The infamous null pointer mistakes have been eradicated, and the importance of pattern matching (in the sense of the ML family of languages) is finally being appreciated as the cure for Boolean blindness. For once I can look at new industrial languages without an overwhelming sense of disappointment, but instead with optimism and enthusiasm that important ideas are finally, at long last, being recognized and adopted.

As has often been observed, it takes 25 years for an academic language idea to make it into industrial practice. With Java it was simply the 1970's idea of automatic storage management; with languages such as Rust and Swift we are seeing ideas from the 80's and 90's make their way into industrial practice. It's cause for celebration, and encouragement for those entering the field: the right ideas do win out in the end, one just has to have the courage to be irrelevant.

I hope to find the time to comment more meaningfully on the recent developments in practical programming languages, including Rust and Swift, but also languages such as Go and OCaml that are also making inroads into programming practice. (The overwhelming success and future dominance of Haskell is self-evident. Kudos!) But for now, let me say that the golden age of programming language research is here and now, and promises to continue indefinitely.

Update: word smithing.
Filed under: Programming, Research, Teaching Tagged: OPLSS14, programming languages
Categories: Offsite Blogs

### ANNOUNCE: monad-levels

General haskell list - Mon, 02/02/2015 - 7:22am

I'm pleased to announce the first release of my new monad-levels library (aka Yet Another Monad Transformer Library ;-): http://hackage.haskell.org/package/monad-levels

I've written more about the motivations of the library here: https://ivanmiljenovic.wordpress.com/2015/02/02/monadic-yak-shaving/

Categories: Incoming News

### parallelism and purity

haskell-cafe - Mon, 02/02/2015 - 3:38am

Hi cafe,

I received questions about parallelism in Haskell. I'm familiar with concurrency but not with parallelism. So, let me ask:

Q1) Are there any *applications* using Haskell parallelism?
Q2) What kind of parallelism is used in the application?
  - Task parallel
  - Data parallel
  - Data-flow parallel
  - etc.
Q3) Does Haskell purity contribute to the application from the parallelism point of view?

I'm one of the translators of "Parallel and Concurrent Programming in Haskell" into Japanese. So, please don't say read the book. :-)

--Kazu

Categories: Offsite Discussion

### Ian Ross: Non-diffusive atmospheric flow #10: significance of flow patterns

Planet Haskell - Mon, 02/02/2015 - 1:35am

Non-diffusive atmospheric flow #10: significance of flow patterns
February 2, 2015

The spherical PDF we constructed by kernel density estimation in the article before last appeared to have "bumps", i.e. it's not uniform in θ and φ. We'd like to interpret these bumps as preferred regimes of atmospheric flow, but before we do that, we need to decide whether these bumps are significant. There is a huge amount of confusion that surrounds this idea of significance, mostly caused by blind use of "standard recipes" in common data analysis cases.
Here, we have some data analysis that's anything but standard, and that will rather paradoxically make it much easier to understand what we really mean by significance.

So what do we mean by "significance"? A phenomenon is significant if it is unlikely to have occurred by chance. Right away, this definition raises two questions. First, chance implies some sort of probability, a continuous quantity, which leads to the idea of different levels of significance. Second, if we are going to be thinking about probabilities, we are going to need to talk about a distribution for those probabilities. The basic idea is thus to compare our data to a distribution that we explicitly decide based on a null hypothesis. A bump in our PDF will be called significant if it is unlikely to have occurred in data generated under the assumptions in our null hypothesis.

So, what's a good null hypothesis in this case? We're trying to determine whether these bumps are a significant deviation from "nothing happening". In this case, "nothing happening" would mean that the data points we use to generate the PDF are distributed uniformly over the unit sphere parametrised by θ and φ, i.e. that no point in (θ, φ) space is any more or less likely to occur than any other. We'll talk more about how we make use of this idea (and how we sample from our "uniform on the sphere" null hypothesis distribution) below. We thus want some way of comparing the PDF generated by doing KDE on our data points to PDFs generated by doing KDE on "fake" data points sampled from our null hypothesis distribution.
We’re going to follow a sampling-based approach: we will generate “fake” data sets, do exactly the same analysis we did on our real data points to produce “fake” PDFs, then compare these “fake” PDFs to our real one (in a way that will be demonstrated below). There are a couple of important things to note here, which I usually think of under the heading of “minimisation of cleverness”. First, we do exactly the same analysis on our “fake” data as we do on our “real” data. That means that we can treat arbitrarily complex chains of data analysis without drama: here, we’re doing KDE, which is quite complicated from a statistical perspective, but we don’t really need to think about that, because the fact that we treat the fake and real data sets identically means that we’re comparing like with like and the complexity just “divides out”. Second, because we’re simulating, in the sense that we generate fake data based on a null hypothesis on the data and run it through whatever data analysis steps we’re doing, we make any assumptions that go into our null hypothesis perfectly explicit. If we assume Gaussian data, we can see that, because we’ll be sampling from a Gaussian to generate our fake data. If, as in this case, our null hypothesis distribution is something else, we’ll see that perfectly clearly because we sample directly from that distribution to generate our fake data. This is a huge contrast to the usual case for “normal” statistics, where one chooses some standard test (<semantics>t<annotation encoding="application/x-tex">t</annotation></semantics>-test, Mann-Whitney test, Kolmogorov-Smirnov test, and so on) and turns a crank to produce a test statistic. 
In this case, the assumptions inherent in the form of the test are hidden – a good statistician will know what those assumptions are and will understand the consequences of them, but a bad statistician (I am a bad statistician) won't and will almost certainly end up applying tests outside of the regime where they are appropriate. You see this all the time in published literature: people use tests that are based on the assumption of Gaussianity on data that clearly isn't Gaussian, people use tests that assume particular variance structures that clearly aren't correct, and so on.

Of course, there's a very good reason for this. The kind of sampling-based strategy we're going to use here needs a lot of computing power. Before compute power was cheap, standard tests were all that you could do. Old habits die hard, and it's also easier to teach a small set of standard tests than to educate students in how to design their own sampling-based tests. But we have oodles of compute power, we have a very non-standard situation, and so a sampling-based approach allows us to sidestep all the hard thinking we would have to do to be good statisticians in this sense, which is what I meant by "minimisation of cleverness".

So, we're going to do sampling-based significance testing here. It is shockingly easy to do and, if you've been exposed to the confusion of "traditional" significance testing, shockingly easy to understand.

Simulating from the null hypothesis

In order to test the significance of the bumps we see in our spherical PDF, we need some way of sampling points from our null hypothesis distribution, i.e. from a probability distribution that is uniform on the unit sphere. The simplest way to do this is to sample points from a spherically symmetric three-dimensional probability distribution then project those points onto the unit sphere.
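That projection step can be sketched directly (my code, not the article's; the tiny LCG is a hypothetical stand-in for a real RNG so the example stays dependency-free): draw triples of standard normals via Box-Muller and normalise each triple onto the unit sphere, which is uniform by the spherical symmetry of the 3-D Gaussian.

```haskell
import Data.Bits (shiftR)
import Data.Word (Word64)

-- Toy 64-bit linear congruential generator (illustration only).
next :: Word64 -> Word64
next s = 6364136223846793005 * s + 1442695040888963407

-- Uniform doubles in (0, 1], from the top 53 bits of the state.
uniforms :: Word64 -> [Double]
uniforms seed = map toU (tail (iterate next seed))
  where toU w = (fromIntegral (w `shiftR` 11) + 1) / 9007199254740992

-- Box-Muller: pairs of uniforms become standard normal deviates.
normals :: Word64 -> [Double]
normals = go . uniforms
  where
    go (u1:u2:rest) =
      let r = sqrt (-2 * log u1)
          t = 2 * pi * u2
      in r * cos t : r * sin t : go rest
    go _ = []

-- Project consecutive triples of normals onto the unit sphere.
unitSpherePoints :: Word64 -> [(Double, Double, Double)]
unitSpherePoints = go . normals
  where
    go (x:y:z:rest) =
      let n = sqrt (x*x + y*y + z*z)
      in (x/n, y/n, z/n) : go rest
    go _ = []
```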
The most suitable three-dimensional distribution, at least from the point of view of convenience, is a three-dimensional Gaussian distribution with zero mean and unit covariance matrix. This is particularly convenient because if we sample points

Categories: Offsite Blogs

### monad-levels : a new library for dealing with monad stacks

Haskell on Reddit - Mon, 02/02/2015 - 12:57am
Categories: Incoming News

### Douglas M. Auclair (geophf): January 2015 1HaskellADay Problems and Solutions

Planet Haskell - Mon, 02/02/2015 - 12:03am

January 2015

• January 30th, 2015: Catted Prime Factors http://lpaste.net/119587 Today's answer shows us composed primes. http://lpaste.net/119639 No. Wait. Primes aren't composed, by definition, right? Well, anyway...
• January 29th, 2015: For today's haskell problem we peace-out by making triangles, http://lpaste.net/119524 not war. And our solution: http://lpaste.net/119562 make my day, punk! ... with triangles (extra-spicy, please)! Bleh, our non-partial solution: no fun, no triangles! http://lpaste.net/119562 Can you fix it?
• January 28th, 2015: Criss-cross; http://lpaste.net/119353 APPLESAUCE! ... but not Christopher Cross for today's Haskell problem. Anndddd, safety first, or don't cross that yellow line! http://lpaste.net/119434 for today's solution.
• January 27th, 2015: What's 'All in the Family'? http://lpaste.net/119319 Elegance is. Today's Haskell problem is about a little bit more elegance. And lines. And triangles. Our solution http://lpaste.net/119326 involves slopes and stuff ... from Elementary School. Remember?
• January 26th, 2015: Just a simple counting-triangles problem using Data.Graph for today's #haskell problem http://lpaste.net/119228 In our solution we put the triangles into a Graph, counting crows http://lpaste.net/119256 ... I meant: 'triangles.'
• January 23rd, 2015: A simple request on this fine Friday: solve the 'aug'mented matrix for today's #haskell problem http://lpaste.net/119049 Row-echelon solver SOLVED! (WOOT!s included for free!) lpaste.net/119102
• January 22nd, 2015: Row-echelon form; not solved yet, http://lpaste.net/118964 but PDC! ('pretty durn closer!') Aug! Our matrix has been row-echeloned! http://lpaste.net/119045 A solution for today's Haskell problem.
• January 21st, 2015: Vectors are Num(bers), too ... adding two (same-length) vectors for today's Haskell problem http://lpaste.net/118906 ... and rollin' the vector-addition solution is posted at http://lpaste.net/118907
• January 20th, 2015: May I introduce Miss Abby Normal http://lpaste.net/118862 for today's Haskell problem? Vector normalization for reduced row-echelon form. NORM! A solution to today's Haskell problem is CHEERSfully posted at http://lpaste.net/118898
• January 19th, 2015: Oh, something new for today's #haskell problem? Well, then: how about a ... MATH PROBLEM! ... 'sorta' Eheh. http://lpaste.net/118714 We have a guest who decided to solve the problem: Þor (Thor). http://lpaste.net/118729 THOR SMASH! No. Wait. That's the Hulk. Never mind.
• January 16th, 2015: Let it snow, let it snow, let it matrix-solver! for today's #haskell problem with (Ooh! Ooh!) pretty matrix pictures! http://lpaste.net/118571 Picture. Terminal. Snowflake-matrix. BlinchtenLightzen.
• January 15th, 2015: n, k, and b are all you need (oh, and the answers) for today's Haskell problem http://lpaste.net/118488
• January 14th, 2015: More fun with fractals for today's #haskell problem: plotting the Harriss Spiral http://lpaste.net/118414
• January 13th, 2015: Yesterday we did not learn about the (constructive?) truth of 2+2 = 4. Sorry.
Today we try to equate two circled rows http://lpaste.net/118367
• January 9th, 2015: In which it is shown that snarkiness is unavoidable when @geophf presents a Mensa-puzzle for today's #haskell problem http://lpaste.net/118121
• January 8th, 2015: Cressida, a smart little girl (in fact: a Mensan), would like you to determine her age for today's #haskell problem http://lpaste.net/118059
• January 7th, 2015: Fibbing down-low on the fibs! for today's haskell problem lpaste.net/118013
• January 5th, 2015: I'm looking over a 4-leaf tuple/That I overlooked before! #haskell problem today: over definition for Lens-on-tuples http://lpaste.net/117906 A solution is 'overdefined' by @bretthall at http://lpaste.net/117934
• January 4th, 2015: It's a new year, and I've just woken up and smelled the Lens-coffee! Bob Marley welcomes you to the New Year with a #haskell problem about #lenses http://lpaste.net/117815 Sing with me! set _2 "love" (1, 2) A solution is offered by @bretthall at http://lpaste.net/117933

Categories: Offsite Blogs

### Robert Harper: Parallelism and Concurrency, Revisited

Planet Haskell - Sun, 02/01/2015 - 11:45pm

I still get compliments on and criticisms of my post from three years ago (can it possibly be that long?) on parallelism and concurrency. In that post I offered a "top down" argument to the effect that these are different abstractions with different goals: parallelism is about exploiting computational resources to maximize efficiency, concurrency is about non-deterministic composition of components in a system. Parallelism never introduces bugs (the semantics is identical to the sequential execution), but concurrency could be said to be the mother lode of all bugs (the semantics of a component changes drastically, without careful provision, when composed concurrently with other components).
From this point of view the two concepts aren't comparable, yet relatively few people seem to accept the distinction, or, even if they do, do not accept the terminology. Here I'm going to try a possible explanation of why the two concepts, which seem separable to me, may seem inseparable to others. I think that it is to do with scheduling.

One view of parallelism is that it's just talk for concurrency, because all you do when you're programming in parallel is fork off some threads, and then do something with their results when they're done. I've previously argued that parallelism is about cost, but let's leave that aside. It's unarguable that a parallel computation does consist of a bunch of, well, parallel computations, and so it is about concurrency. I've previously argued that that's not a good way to think about concurrency either, but let's leave that aside as well. So, the story goes, concurrency and parallelism are synonymous, and people like me are just creating confusion.

Perhaps that is true, but here's why it may not be a good idea to think of parallelism this way. Scheduling as you learned about it in OS class (for example) is altogether different than scheduling for parallelism. There are two aspects of OS-like scheduling that I think are relevant here. First, it is non-deterministic, and second, it is competitive. Non-deterministic, because you have little or no control over what runs when or for how long. A beast like the Linux scheduler is controlled by a zillion "voodoo parameters" (a turn of phrase borrowed from my queueing theory colleague, Mor Harchol-Balter), and who the hell knows what is going to happen to your poor threads once they're in its clutches. Second, and more importantly, an OS-like scheduler is allocating resources competitively. You've got your threads, I've got my threads, and we both want ours to get run as soon as possible. We'll even pay for the privilege (priorities) if necessary.
The scheduler, and the queueing theory behind it, is designed to optimize resource usage on a competitive basis, taking account of quality of service guarantees purchased by the participants. It does not matter whether there is one processor or one thousand processors, the schedule is unpredictable. That's what makes concurrent programming hard: you have to program against all possible schedules. And that's why it's hard to prove much about the time or space complexity of your program when it's implemented concurrently.

Parallel scheduling is a whole 'nother ball of wax. It is (usually, but not necessarily) deterministic, so that you can prove bounds on its efficiency (Brent-type theorems, as discussed in a previous post and in PFPL). And, more importantly, it is cooperative in the sense that all threads are working together for the same computation towards the same ends. The threads are scheduled so as to get the job (there's only one) done as quickly and as efficiently as possible. Deterministic schedulers for parallelism are the most common, because they are the easiest to analyze with respect to their time and space bounds. Greedy schedulers, which guarantee to maximize use of available processors, never leaving any idle when there is work to be done, form an important class for which the simple form of Brent's Theorem is obvious.

Many deterministic greedy scheduling algorithms are known, of which I will mention p-DFS and p-BFS, which do p-at-a-time depth- and breadth-first search of the dependency graph, and various forms of work-stealing schedulers, pioneered by Charles Leiserson at MIT. (Incidentally, if you don't already know what p-DFS or p-BFS are, I'll warn you that they are a little trickier than they sound. In particular p-DFS uses a data structure that is sort of like a stack but is not a stack.)
These differ significantly in their time bounds (for example, work stealing usually involves expectation over a random variable, whereas the depth- and breadth-first traversals do not), and differ dramatically in their space complexity. For example, p-BFS is absolutely dreadful in its space complexity. (For a full discussion of these issues in parallel scheduling, I recommend Dan Spoonhower’s PhD Dissertation. His semantic profiling diagrams are amazingly beautiful and informative!)

So here’s the thing: when you’re programming in parallel, you don’t just throw some threads at some non-deterministic competitive scheduler. Rather, you generate an implicit dependency graph that a cooperative scheduler uses to maximize efficiency, end-to-end. At the high level you do an asymptotic cost analysis without considering platform parameters such as the number of processors or the nature of the interconnect. At the low level the implementation has to validate that cost analysis by using clever techniques to ensure that, once the platform parameters are known, maximum use is made of the computational resources to get your job done for you as fast as possible. Not only are there no bugs introduced by the mere fact of being scheduled in parallel, but even better, you can prove a theorem that tells you how fast your program is going to run on a real platform. Now how cool is that?

[Update: word-smithing.]

[Update: more word-smithing for clarity and concision.]

Filed under: Programming, Research Tagged: concurrency, parallelism
Categories: Offsite Blogs

### cpp question

haskell-cafe - Sun, 02/01/2015 - 11:42pm

With the new version of GHC (7.8.3) I am running into problems with cpp. The header of one of the files of my uu-parsinglib library reads as follows:

```haskell
{-# LANGUAGE NoMonomorphismRestriction, RankNTypes, FlexibleContexts, CPP #-}
#define DEMO(p,i) demo "p" i p
#define DEMOG(p,i) demo "p" i (mkP (p))
module Text.ParserCombinators.UU.Demo.MergeAndPermute where
...
```
However when I try to compile this file I get error messages like:

```
Text/ParserCombinators/UU/Demo/Demo.hs:88:17:
    Not in scope: data constructor ‘DEMOG’
```

So it seems that the cpp is no longer called at all. Any hints? I am running:

```
MacBook-Doaitse:src doaitse$ ghc --version
The Glorious Glasgow Haskell Compilation System, version 7.8.3
MacBook-Doaitse:src doaitse$ gcc --version
Configured with: --prefix=/Applications/Xcode.app/Contents/Developer/usr --with-gxx-include-dir=/usr/include/c++/4.2.1
Apple LLVM version 6.0 (clang-600.0.56) (based on LLVM 3.5svn)
Target: x86_64-apple-darwin14.1.0
Thread
```
Categories: Offsite Discussion

### Ivan Lazar Miljenovic: Monadic yak shaving

Planet Haskell - Sun, 02/01/2015 - 11:12pm

Similarly to how Bryan O’Sullivan got side-tracked over five years ago, I recently found myself wishing a library existed to more easily deal with monad transformers.

There are quite a few libraries that try and provide more convenient ways of dealing with monad transformers (typically using those defined in transformers so as to avoid re-defining them all the time and to provide inter-compatibility): the old standard of mtl, the type-family variant found in monads-tf, the more ambitious layers package and Roman Cheplyaka’s monad-classes work.

However, I found that none of the libraries I could find really satisfied me. Even layers and monad-classes – which aim to simplify or remove the quadratic instance problem – still require a “catch-all” default instance for all other monad transformers. Ideally for me, if I want to define a new transformer class, then I should only need to define instances for transformers that directly implement its functionality.

As such, I’m pleased to announce the first (alpha-level) release of my new library: monad-levels.

Why I wrote this library

Originally, all I wanted was to be able to lift operations in a base monad up through any transformers I might stack on top of it.

Except that I didn’t want to just lift up a single monad up through the stack: I wanted to be able to convert a function on my base monad up to whatever set of transformers I had stacked up on top of it. So I resigned myself to writing out instances for every existing transformer in the transformers library.

As I started doing so though, I noticed a common pattern: for each method in the instance, I would be using a combination of the following operations (using StateT as an example):

• wrap: apply the monad transformer (e.g. m (a,s) → StateT s m a)
• unwrap: remove the monad transformer (e.g. StateT s m a → m (a,s))
• addInternal: adds the internal per-transformer specific state (e.g. m a → m (a,s))

In particular, wrap is used everywhere, unwrap is used when lowering existing monads down so that they can (eventually) be used in the base monad and addInternal is used when lifting monadic values.
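Written out concretely for StateT — with function names of my own choosing for illustration; the library itself bundles these into a single continuation-style method — the three operations look something like this:

```haskell
-- A hedged sketch of the three operations for StateT; these names are
-- illustrative, not the monad-levels API.
import Control.Monad.Trans.State (StateT (..), runStateT)

-- wrap: apply the monad transformer (m (a, s) -> StateT s m a)
wrapState :: m (a, s) -> StateT s m a
wrapState m = StateT (const m)

-- unwrap: remove the monad transformer; note that it needs the current
-- state s, which is only available inside the transformer's scope
unwrapState :: s -> StateT s m a -> m (a, s)
unwrapState s st = runStateT st s

-- addInternal: attach the per-transformer internal state to a plain value
addInternalState :: Monad m => s -> m a -> m (a, s)
addInternalState s = fmap (\a -> (a, s))

-- Lifting a base-monad value is then wrap composed with addInternal:
liftState :: Monad m => m a -> StateT s m a
liftState m = StateT (\s -> addInternalState s m)

main :: IO ()
main = print (runStateT (liftState (Just 'x')) (0 :: Int))  -- Just ('x',0)
```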

Thus, if I define this as a type class for monad transformers, then I could use the wonderful DefaultSignatures extension to simplify defining all the instances, and what’s more such a class would be re-usable.

Generally, the definitions of unwrap and addInternal require information from within the scope of the transformer (e.g. the s parameter within StateT); as such, wrap ends up being a continuation function. I thus came up with the following class:

```haskell
class (Monad m) => MonadLevel m where
  type LowerMonad m :: * -> *
  type InnerValue m a :: *

  -- A continuation-based approach for how to lift/lower a monadic value.
  wrap :: (    (m a -> LowerMonad m (InnerValue m a))              -- unwrap
            -> (LowerMonad m a -> LowerMonad m (InnerValue m a))   -- addInternal
            -> LowerMonad m (InnerValue m a))
       -> m a
```

(Note that I’m not using MonadTrans for this as I also wanted to be able to use this with newtype wrappers.)

So I define this class, use DefaultSignatures and my instances – whilst still needing to be explicitly defined – become much simpler (and in many/most cases empty)!

Becoming more ambitious

Whilst I was looking to see if any existing libraries had something similar (layers came the closest, but it uses multiple classes and requires being able to specify function inverses when using it), I came across Roman Cheplyaka’s blog post on how monad-control uses closed type families to automatically recursively lift monads down to a monad that satisfies the required constraint. I became intrigued with this, and wondered if it would be possible to achieve this for any constraint (more specifically something of kind (* → *) → Constraint) rather than using something that was almost identical for every possible monad transformer class.

So I wrote a prototype that made it seem as if this would indeed work (note that I used the term “lower” rather than “lift”, as I saw it as lowering operations on the overall monadic stack down to where the constraint would be satisfied):

```haskell
data Nat = Zero | Suc Nat

class SatisfyConstraint (n :: Nat) (m :: * -> *) (c :: (* -> *) -> Constraint) where
  _lower :: Proxy c -> Proxy n -> (forall m'. (c m') => m' a) -> m a

instance (ConstraintSatisfied c m ~ True, c m) => SatisfyConstraint Zero m c where
  _lower _ _ m = m

instance (MonadLevel m, SatisfyConstraint n (LowerMonad m) c)
         => SatisfyConstraint (Suc n) m c where
  _lower _ _ m = wrap (\ _unwrap addI ->
                          addI (_lower (Proxy :: Proxy c) (Proxy :: Proxy n) m))
```

(This is a simplified snippet: for more information – including where the ConstraintSatisfied definition comes from – see here.)
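The recursion pattern can be seen in miniature in the following toy class — entirely my own simplification, working at the value level rather than the type level. It counts how many layers must be peeled off a stack to reach IO, with one base-case instance plus one instance per transformer deferring to the monad underneath, just as SatisfyConstraint peels off one MonadLevel layer per Suc:

```haskell
{-# LANGUAGE KindSignatures, ScopedTypeVariables #-}
-- Toy analogue of the layer-by-layer recursion: a base case where the
-- "constraint" is satisfied, and a recursive case per transformer.
import Control.Monad.Trans.Reader (ReaderT)
import Control.Monad.Trans.State (StateT)
import Data.Proxy (Proxy (..))

class Depth (m :: * -> *) where
  depth :: Proxy m -> Int

instance Depth IO where
  depth _ = 0                              -- base case: stop here

instance Depth m => Depth (StateT s m) where
  depth _ = 1 + depth (Proxy :: Proxy m)   -- peel one layer and recurse

instance Depth m => Depth (ReaderT r m) where
  depth _ = 1 + depth (Proxy :: Proxy m)

main :: IO ()
main = print (depth (Proxy :: Proxy (StateT Int (ReaderT Bool IO))))  -- 2
```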

With this, you also get liftBase for free! However, if all I wanted was a function just to lift a value in the base monad up the stack, then I could have used a much simpler definition. For this to actually be useful, I have to be able to write (semi-)arbitrary functions and lift/lower them as well.

I could just go back to my original plan and use MonadLevel combined with DefaultSignatures and not bother with this automatic lifting/lowering business… but I’ve already started, and in for a penny in for a pound. So full steam ahead!

It took a while to sort out how it would work (dealing with State and Reader was easy; having to extend how this worked for Cont took quite a while, and then even more for Writer), but monad-levels is now able to deal with arbitrary monadic functions.

Well… I say arbitrary…

To be able to deal with functions, you first need to use the provided sub-language to be able to specify the type of the function. For example, a basic function of type m a → m a is specified as Func MonadicValue (MkVarFnFrom MonadicValue) (or more simply as just MkVarFn MonadicValue, using the inbuilt simplification that most such functions will return a value of type m a); something more complicated like CallCC becomes MkVarFn (Func (Func ValueOnly (MonadicOther b)) MonadicValue).

This language of lower-able functions is used to be able to know how to convert arguments and results up and down the monadic stack.

The end result

I’m finally releasing this library after being able to successfully replicate all the existing monad transformer classes in mtl (with the exception of the deprecated MonadError class). As an example, here is the equivalent to MonadCont:
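In rough outline — this is a sketch of mine rather than the library’s exact definition, which carries further constraints — the class has this shape:

```haskell
{-# LANGUAGE FlexibleInstances, MultiParamTypeClasses #-}
-- Sketch only: the real monad-levels HasCont differs in its constraints.
-- Note the extra a and b parameters, which mtl's MonadCont does not need.
import Control.Monad.Trans.Cont (ContT, runCont)
import qualified Control.Monad.Trans.Cont as Cont

class Monad m => HasCont m a b where
  callCC :: ((a -> m b) -> m a) -> m a

-- ContT implements the functionality directly:
instance HasCont (ContT r m) a b where
  callCC = Cont.callCC

main :: IO ()
main = print (runCont (callCC (\exit -> exit (42 :: Int))) id)  -- 42
```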

One thing that should be obvious is that the constraint is a tad more complicated than that required for MonadCont. Specifically, it requires the a and b parameters as well; this is because not all instances of MonadLevel allow dealing with arbitrary other monadic values (that is, we’re dealing with m a overall, but we also need to consider m b in this case). In practice, however, the only existing monad transformer with this constraint is ContT itself, and you can’t pass through a call to callCC from one ContT transformer to another (as there’s no way to distinguish between the two).

(Something that might not be obvious is that the interaction between StateT – both lazy and strict – and how I’ve defined callCC differs from how it’s defined in mtl. Hence why I started this thread on Haskell Cafe.)

But, any monad transformer in the transformers library that is an instance of MonadCont also satisfies the requirements for the HasCont constraint, and furthermore just by making it an instance of MonadLevel any new transformer (including a newtype wrapper over a monadic stack) will also automatically satisfy the constraint!

Drawbacks

There are two main sources of problems currently with monad-levels.

Alpha-state
• Whilst the types line up and playing with the various classes in ghci seems to work, there is no comprehensive test-suite as yet to verify that it is indeed sound.
• I have no idea how it compares speed- and memory-wise to mtl; as it uses a lot of type families, explicit dictionary passing, etc., I expect it to be slower, but I haven’t compared it or investigated whether there’s anywhere I can improve it.

• I’m not sure of all the names (e.g. MkVarFn and MkVarFnFrom for dealing with variadic functions probably could be improved); not to mention that there’s probably also room for improvement in terms of what is exported (e.g. should the actual type classes for dealing with variadic arguments be fully exported in case people think of more possible argument types?).

• It could also do with a lot more documentation.

These, however, are for the most part just a matter of time (though it might be that the performance one should actually belong to the next category).

Implications of approach/implementation

The biggest noticeable problem is one of discovery: if you look at mtl, it’s easy to tell when a transformer is an instance of a particular class; in contrast, with monad-levels there’s no obvious way of looking at the Haddock documentation to tell whether or not this is the case. The best you can do for a specific constraint c and monad m (without trying it in ghci) is this: if its MonadLevel_ definition has AllowOtherValues m ~ True and DefaultAllowConstraints m ~ True (both of which are the defaults), and the latter hasn’t been overridden with an instance of ConstraintPassThrough c m ~ False, then it is allowed. (Assuming that the constraint and its functions are sound, and something screwy hasn’t been done like having the monad actually being a loop.)

Something that might also be a problem for some is the complexity: lots of language extensions are used, not to mention using a lot of things like Proxy and explicit dictionary passing.

As part of this, this means things like type errors sometimes being difficult to resolve due to the large usage of associated types and constraint kinds. Furthermore, as you probably saw in the HasCont definition shown above, you typically need to use ScopedTypeVariables with proxies.

Go forth and use it!

Whilst it most definitely isn’t perfect, I think monad-levels is now at a usable state. As such, I’d appreciate any attempts people make at using it and giving me any feedback you might have.

This is also the first time I’ve used git and Github for my own project. I missed the simplicity and discoverability of darcs, but magit for Emacs makes using it a bit easier, and in-place branches and re-basing turned out to be quite nice.