News aggregator

Ken T Takusagawa: [fltalwhq] integerLog2 and an introduction to unboxed types

Planet Haskell - Mon, 09/05/2016 - 1:18am

Some notes on using unboxed types in Haskell, and in particular, notes on creating a boxed wrapper for the integer-gmp library function integerLog2# :: Integer -> Int#, which returns an unboxed type.

{-# LANGUAGE MagicHash #-}
import GHC.Exts(Int(I#));
import GHC.Integer.Logarithms(integerLog2#);

integerLog2 :: Integer -> Int;
integerLog2 i = if i < 1
then error "must be positive"
-- because integerLog2# does no bounds checking
else I# (integerLog2# i);
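
For example, in GHCi (a quick sanity check, assuming the module above is loaded; integerLog2 returns the floor of the base-2 logarithm):

ghci> integerLog2 1024
10
ghci> integerLog2 1023
9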

The pragma MagicHash prevents GHC from interpreting the hash symbol as an operator.  Without it, one gets error messages like this:

parse error on input `#'
not in scope: `#'

It would be nice if GHC emitted a suggestion of MagicHash on errors like this.

The constructor I# is findable using Hoogle, searching for Int#->Int.

One must use parentheses around the argument to I#.  The standard trick of removing parentheses with the dollar sign results in an error:

bad1 i = I# $ integerLog2# i;

Couldn't match kind `*' with `#'
When matching types
r0 :: *
GHC.Prim.Int# :: #

Using the composition operator in point-free style fails similarly:

bad2 = I# . integerLog2#;

Couldn't match kind `*' with `#'
When matching types
b0 :: *
GHC.Prim.Int# :: #
Expected type: b0 -> Int
Actual type: GHC.Prim.Int# -> Int
In the first argument of `(.)', namely `I#'

Unboxed types have a different "kind", the # kind, than boxed types, which have kind *.
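
The kinds can be inspected in GHCi (a sketch; the exact rendering of unboxed kinds varies between GHC versions):

ghci> :set -XMagicHash
ghci> import GHC.Exts
ghci> :kind Int
Int :: *
ghci> :kind Int#
Int# :: #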

Categories: Offsite Blogs

Yesod Web Framework: Better CI for the Yesod scaffoldings

Planet Haskell - Mon, 09/05/2016 - 1:00am

After having completely forgotten to do this for a long time, I finally set aside some time last night to fix up the Travis CI for the Yesod scaffoldings. We have a number of different flavors of scaffoldings (e.g., PostgreSQL, MySQL, simple, and minimal), and keep all of the different flavors as branches on a single repo so that improvements to the base scaffolding can easily be merged into all of the others. The goals of my changes were to:

  • Have Travis check against the different snapshots likely to be selected by the stack new command
  • Automate testing against live databases, so that I can be lazy and not set up those databases for local testing

Overall, this went pretty smoothly, and also serves as a nice example of a short Stack-based Travis configuration. You can see the latest PostgreSQL Travis configuration.
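
For reference, the core of such a Stack-based Travis configuration looks roughly like this (a minimal sketch modeled on the Stack documentation's recommended file, not the scaffolding's exact configuration):

sudo: false
language: generic
cache:
  directories:
  - $HOME/.stack
addons:
  apt:
    packages:
    - libgmp-dev
before_install:
- mkdir -p ~/.local/bin
- export PATH=$HOME/.local/bin:$PATH
- travis_retry curl -L https://www.stackage.org/stack/linux-x86_64 | tar xz --wildcards --strip-components=1 -C ~/.local/bin '*/stack'
script:
- stack --no-terminal --install-ghc test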

Beyond my desire to be lazy in the scaffolding release process, the other obvious benefit is much more confidence, when reviewing PRs against the scaffolding, that things will actually work.

Interesting discoveries

I discovered two things in this work that I hadn't realized previously:

  • Due to a dependency on a relatively recent yesod-auth version, only LTS Haskell 6 and up and Stackage Nightly are supported by the scaffolding. Therefore, for the build matrix, I haven't included LTS 5 and lower.
  • I was unaware of a bug in GHC 8.0.1 which prevents the scaffolding from compiling. This bug has already been resolved upstream and the fix will be included in GHC 8.0.2. In the meanwhile, I've blocked GHC 8.0.1 from usage in the scaffolding.

Upon reflection, this is probably a good thing: we are all but guaranteed that a new user will start off with LTS 6, which is a well tested set of packages and a very stable GHC version, leading to hopefully little user friction when getting started.

A user could in theory do something like stack new foo yesod-simple --resolver lts-5 --solver, but I think it's fair to assume that someone passing in such specific flags knows what he/she is doing and we should just stay out of his/her way.


Now that CI is in better shape, this is a good time to remind everyone of how to contribute to the Yesod scaffoldings. The PostgreSQL scaffolding is considered the base, with the other flavors merging in changes from there. If you want to make a change, please submit a PR to the postgres branch of the yesod-scaffold repo. If you have a patch which is specific to one of the scaffoldings instead (like changing MySQL config settings), please submit it to that branch.

Note that it is not supported to send pull requests against the .hsfiles files in the stack-templates repo, as such changes can't be properly tested, and will be overwritten the next time a change from the yesod-scaffold repo is merged in.

Categories: Offsite Blogs

Edward Z. Yang: The Edit-Recompile Manager

Planet Haskell - Fri, 09/02/2016 - 6:40pm

A common claim I keep seeing repeated is that there are too many language-specific package managers, and that we should use a distribution's package manager instead. As an example, I opened the most recent HN discussion related to package managers, and sure enough the third comment was on this (very) dead horse. (But wait! There's more.) But it rarely feels like there is any forward progress on these threads. Why?

Here is my hypothesis: these two camps of people are talking past each other, because the term "package manager" has been overloaded to mean two things:

  1. For end-users, it denotes an install manager, primarily responsible for installing some useful software so that they can use it. Software here usually gets installed once, and then used for a long time.
  2. For developers, it denotes an edit-recompile manager: a piece of software for letting you take a software project under development and (re)build it, as quickly as possible. The installation of packages is a means, but it is not the end.

It should be clear that while these two use-cases have some shared mechanism, the priorities are overwhelmingly different:

  • End-users don't care about how a package is built, just that the things they want to install have been built. For developers, speed on rebuild is an overriding concern. To achieve this performance, a deep understanding of the structure of the programming language is needed.
  • End-users usually just want one version of any piece of software. Developers use multiple versions, because that is the cost of doing business with a diverse, rapidly updated, decentralized package ecosystem.
  • End-users care about it "just working": thus, a distribution package manager emphasizes control over the full stack (usually requiring root.) Developers care about flexibility for the software they are rebuilding and don't mind if a little setup is needed.

So the next time someone says that there are too many language-specific package managers, mentally replace "package manager" with "edit-recompile manager". Does the complaint still make sense? Maybe it does, but not in the usual sense: what they may actually be advocating for is an interface between these two worlds. And that seems like a project that is both tractable and worth doing.

Categories: Offsite Blogs

Christopher Allen: The Hashrocket websocket shootout in Haskell

Planet Haskell - Fri, 09/02/2016 - 6:00pm

I recently PR’d a Haskell entry to Hashrocket’s websocket shootout. Haskell seemed to do a lot better than C++, Rust, Golang, Elixir, Erlang, NodeJS, Ruby MRI, and JRuby. However, the Haskell version has since been fixed, so I can no longer run the benchmark reliably on my machine; any final results will have to come from Hashrocket running the unagi-chan variant.

How the benchmark works

The idea is to test how many concurrent clients a single websocket server (process?) can serve and how efficiently it can broadcast messages to all the clients.

The main constraint of the benchmark is that your 95th percentile round-trip time cannot exceed 250ms. This is a better measurement/restriction for concurrency benchmarks than the usual “how many can it handle before it crashes” or throughput metrics, so props to Hashrocket on that point.

The client as-designed will increase the number of clients connected in the step-size specified and send test events at each step. If the 95th percentile round trip time exceeds 250ms, the benchmark client disconnects all client connections and halts. So, the last “line” of output you see from the client is essentially where you peaked before failing the SLA constraint.

What follows is the flawed Broadcast implementation I wrote that drops messages, so caveat lector.

Everything below is retracted for now as Broadcast was dropping messages, which wasn’t explicitly permitted in the benchmark. I’m currently kicking around an unagi-chan based variant PR’d by Sebastian Graf, but I don’t think unagi-chan was designed for broadcasting across many thousands of channels.

For some context, this benchmark using Tsung is roughly what I expected in terms of results modulo hardware differences, which is why I wasn’t that surprised when I saw the initial results. Currently the Go websocket client seems to behave very differently from Tsung’s, so I don’t have a reproduction of what Tom Hunger’s benchmark did.

Before I retracted the results, I was peaking at 45,000 concurrent clients and a very low/flat latency with this version that uses Broadcast. However, Broadcast was dropping messages, so it’s not a valid comparison when the other benchmark servers weren’t dropping any messages. Incidentally, load-shedding is a great strategy for consistent server performance when it’s permissible ;)

Here’s the source to the Haskell version at time of writing:

{-# LANGUAGE DeriveGeneric #-}
{-# LANGUAGE OverloadedStrings #-}
{-# LANGUAGE QuasiQuotes #-}

module Main where

import qualified Control.Concurrent as C
import qualified Control.Concurrent.Broadcast as BC
import Control.Lens hiding ((.=))
import Control.Monad (forever)
import Data.Aeson
import Data.Aeson.Lens
import Data.Aeson.Types
import Data.ByteString.Lazy (ByteString, toStrict)
import qualified Data.Char as DC
import Data.Functor (void)
import Data.Text (Text)
import Data.Text.Encoding (decodeUtf8)
import GHC.Generics
import Network.HTTP.Types (status400)
import Network.Wai
import Network.Wai.Handler.Warp
import Network.Wai.Handler.WebSockets
import Network.WebSockets
import Text.RawString.QQ

The above is just the usual preamble/noise. I had quasiquotes for a test/example I didn’t use in the actual server.

type Broadcaster = BC.Broadcast ByteString

Hedging my bets in case I switched again after changing the broadcast type from Text to a lazy ByteString.

amendTest :: Maybe Value
amendTest = decode $ [r|
{"type":"broadcast","payload":{"foo": "bar"}}
|]

amendBroadcast :: Value -> Value
amendBroadcast v =
  v & key "type" . _String .~ "broadcastResult"

Above was just test code.

broadcastThread :: Broadcaster -> Connection -> IO ()
broadcastThread bc conn = forever $ do
  t <- BC.listen bc
  sendTextData conn t

That’s all I do to relay broadcasted data to the listeners. Under the hood, Broadcast is:

MVar (Either [MVar a] a)

I used broadcast from concurrent-extra because I knew I wanted the propagation/thread wake to happen via the MVar machinery in the GHC runtime system.
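
Conceptually, listen and signal over that representation work something like the following (a simplified sketch for intuition, not concurrent-extra's actual code, which also handles exceptions and the remaining operations):

import Control.Concurrent.MVar

newtype Broadcast a = Broadcast (MVar (Either [MVar a] a))

-- Block until a value is broadcast, or return it at once if one is already set.
listen :: Broadcast a -> IO a
listen (Broadcast state) = do
  box <- newEmptyMVar
  modifyMVar_ state $ \s -> case s of
    Left waiters -> return (Left (box : waiters)) -- enqueue ourselves
    Right x      -> putMVar box x >> return s     -- value already available
  takeMVar box

-- Wake every waiter with the value, then become silent again.
signal :: Broadcast a -> a -> IO ()
signal (Broadcast state) x =
  modifyMVar_ state $ \s -> case s of
    Left waiters -> mapM_ (`putMVar` x) waiters >> return (Left [])
    Right _      -> return (Left [])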

wtf conn = sendTextData conn ("<img src=\"\" />" :: Text)

Error return method borrowed from ocharles.

mkPayload :: Text -> Value -> ByteString
mkPayload type_ payload = encode $
  object [ "type" .= String type_
         , "payload" .= payload
         ]

Constructing a JSON value fitting the format expected by the test client and then encode-ing it into a ByteString.
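
For example, in GHCi with OverloadedStrings enabled (a quick check; aeson makes no guarantee about key order in the encoded output):

ghci> mkPayload "echo" (object ["foo" .= String "bar"])
"{\"type\":\"echo\",\"payload\":{\"foo\":\"bar\"}}"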

bidiHandler :: Broadcaster -> Connection -> IO ()
bidiHandler bc conn = do
  _ <- C.forkIO (broadcastThread bc conn)                             -- [1]
  forever $ do                                                        -- [2]
    msg <- receiveDataMessage conn                                    -- [3]
    case msg of
      Text t -> do
        let Just payload = t ^? key "payload"                         -- [4]
        case t ^? key "type" . _String of                             -- [5]
          Just "echo" -> sendTextData conn (mkPayload "echo" payload) -- [6]
          Just "broadcast" ->
            BC.signal bc (mkPayload "broadcastResult" payload)        -- [7]
          _ -> wtf conn
      _ -> do
        wtf conn

I hate reading overly chopped-up code, so I annotated this one in the mode of the haskell book.

  1. We run the broadcast listener that relays data to the websocket client in a separate thread

  2. Running the client listener that (potentially) broadcasts data or just echoes back to the client in a Control.Monad.forever block.

  3. Block on receiving a data message (sum type, Text or Bytes)

  4. Pluck the payload value out of the JSON body because I’m too lazy to make a datatype for this.

  5. Get the event type out of the JSON body to dispatch on. We’re going to either echo or broadcast.

  6. If the event type was echo, kick the JSON data back to the client, but with the event type amended to echo.

  7. If the event type was broadcast, signal the broadcast handle to propagate the new JSON body with the payload and a broadcastResult event type.

wsApp :: Broadcaster -> ServerApp
wsApp bc pending = do
  conn <- acceptRequest pending
  bidiHandler bc conn

Passing on the Broadcast handle and Connection to the handler.

main :: IO ()
main = do
  bc <- BC.new
  runServer "127.0.0.1" 3000 (wsApp bc) -- "127.0.0.1" is an assumption; the host string was elided here

Spawn a Broadcast, pass the handle on to wsApp, run it with the provided server from the wai-websockets library. That’s it.

Some thoughts

Erlang is the only runtime competitive on per-thread (process in their lingo) overhead, but they bite the dust on message send. MVar take/put pairing is ~25-40ns, you’re eating at least 1,000 ns in Erlang. It’s possible a custom Erlang implementation (Cowboy?) could do a better job here, but I’m not sure how to do broadcast especially efficiently in Erlang.
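
The MVar figure is easy to check with criterion (a rough sketch; absolute numbers depend on hardware and GHC version):

import Control.Concurrent.MVar
import Criterion.Main

main :: IO ()
main = do
  mv <- newMVar ()
  defaultMain
    [ bench "takeMVar/putMVar pair" $
        whnfIO (takeMVar mv >> putMVar mv ()) ]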

Asking how to efficiently broadcast to many Erlang processes on the mailing list gets you smarmy answers.

I was initially disappointed I didn’t get an excuse to optimize any of the Haskell code. It was limited only by the number of TCP connections I could bind; I still had two-thirds of my 95th percentile RTT budget to burn. I messed with ulimit and the like a bit, but to really uncap it I’d need to change the client to connect to multiple IP addresses so I can use more TCP connections. Now I know it was because Broadcast was dropping messages and not tickling the slow parts as much as an implementation that forces broadcasts to all clients.

I know this site is a bit of a disaster zone, but if you like my writing or think you could learn something useful from me, please take a look at the book I've been writing with my coauthor Julie. There's a free sample available too!

Posted on September 3, 2016

Categories: Offsite Blogs

Brent Yorgey: Deep work and email habits

Planet Haskell - Fri, 09/02/2016 - 1:54pm

Lately I have been enjoying Cal Newport’s writing on work, and particularly his new book Deep Work which I am in the middle of reading (definitely recommended). His basic thesis is about the power of sustained, focused, distraction-free work on cognitively demanding tasks—what he calls deep work. It takes intentional effort to make the time and space for this kind of work, but Newport argues cogently that doing so can have enormous benefits.

Newport’s ideas have really resonated with me—I think I was already converging (albeit slowly, with little clarity) on similar ideas and practices over the last few years—and I’ve begun trying to put some of them more deliberately into practice. First, I have scheduled two large (4 hour) blocks of time for deep work each week. These blocks are sacrosanct: I won’t meet with students, schedule committee meetings, or do anything else during those times. I physically go somewhere other than my office—usually the library, occasionally my favorite coffee shop, somewhere relatively quiet and interruption-free where students and colleagues won’t find me. I first do as much as possible without turning on my laptop: course planning, reading, brainstorming, a lot of longhand writing (blog posts, papers, grant proposals, whatever—for example, I wrote this blog post itself longhand during my deep work session this morning). Sometimes if I need to write a longer, thoughtful email response, I will even print out the message beforehand and write my response longhand. Only towards the end of the session will I pull out my laptop, if I have specific projects to work on deeply that require a computer, like some sort of coding project.

Anecdotally at least, so far this feels incredibly successful—I get a lot done during these deep work sessions and always come away feeling accomplished and energized. The thing that feels especially good is that I’m not just getting a large amount of stuff done, but I’m getting important, difficult stuff done.

Another related practice I have recently adopted is that I do not read or write any email before 4pm. I have literally blocked myself from accessing email on my computers and phone between midnight and 4pm. Perhaps this sounds heretical, but that’s just the point—“because doing otherwise would be heresy” is a terrible reason for doing anything, and the fact is that not many of us really stop to consider and consciously choose the way we make use of technologies like email and social media. It’s taken some getting used to, but by now I don’t think I am ever going back. At 4pm I triage my inbox—respond to things that need a quick response, archive or delete stuff I don’t care about, and forward other things to my personal bug tracker for dealing with later. I am typically able to totally clear out my inbox before going home for the day. Over the course of the day I keep a list of emails I want to write later, and I write those at the same time that I triage my inbox, or sometimes later in the evening before going to bed. It feels way more efficient to batch most of my email processing into a focused session like this, and freeing to not be distracted by it the rest of the day. But do I ever miss it? Yes, all the time—and that’s exactly the point! Left to my natural tendencies I distract myself silly checking my email constantly.

Time will tell how much of this sticks and how my approach might change over time—I’ve scheduled a reminder for myself to write a followup post six months from now. As always, I’m happy to hear and respond to thoughts, reactions, questions, etc. in the comments.

Categories: Offsite Blogs

Dan Burton: GPG signing for github & mac

Planet Haskell - Fri, 09/02/2016 - 1:18pm
I just went through a few steps to get gpg signing to work on my mac and show up on github. I wanted to quickly document the process since the instructions are a little bit scattered. All of it basically came … Continue reading →
Categories: Offsite Blogs

Well-Typed.Com: Haskell eXchange, Hackathon, and Courses

Planet Haskell - Thu, 09/01/2016 - 7:04am

In October 2016, we are co-organizing various events in London. Now is the time to register for:

  • the Haskell eXchange, a two-day three-track conference with a large number of Haskell-related talks and workshops on a wide variety of topics, including keynotes by Simon Peyton Jones, Don Stewart, Conor McBride and Graham Hutton;

  • the Haskell eXchange Hackathon, a two-day event for everyone who wants to get involved coding on projects related to the Haskell infrastructure, such as Hackage and Cabal;

  • our Haskell courses, including a two-day introductory course, a one-day course on type-level programming in GHC, and a two-day course on lazy evaluation and performance.

Haskell eXchange

Thursday, October 6 – Friday, October 7

The Haskell eXchange is a general Haskell conference aimed at Haskell enthusiasts of all skill levels. The Haskell eXchange is organized annually, and 2016 is its fifth year. For the second year in a row, the venue will be Skills Matter’s CodeNode, where we have space for three parallel tracks. New this year: a large number of beginner-focused talks. At all times, at least one track will be available with a talk aimed at (relative) newcomers to Haskell. Of course, there are also plenty of talks on more advanced topics. The four keynote speakers are Simon Peyton Jones, Don Stewart, Conor McBride and Graham Hutton.

Registration is open; you can buy tickets via Skills Matter.

Haskell eXchange Hackathon

Saturday, October 8 – Sunday, October 9

We are going to repeat the successful Haskell Infrastructure Hackathon that we organized last year directly after the Haskell eXchange. Once again, everyone who is already contributing to Haskell projects related to the Haskell infrastructure as well as everyone who wants to get involved and talk to active contributors is invited to spend two days hacking on various projects, such as Hackage and Cabal.

Registration is open. This event is free to attend (and you can attend independently of the Haskell eXchange), but there is limited space, so you have to register.

Haskell courses Fast Track to Haskell

Monday, October 3 – Tuesday, October 4

This is a two-day general introduction to Haskell, aimed at developers who have experience with other (usually non-functional) programming languages, and want to learn about Haskell. Topics include defining basic datatypes and functions, the importance of type-driven design, abstraction via higher-order functions, handling effects (such as input/output) explicitly, and general programming patterns such as applicative functors and monads. This hands-on course includes several small exercises and programming assignments that allow participants to practice and to get feedback from the instructor during the course.

Registration is open; you can buy tickets via Skills Matter.

Guide to the Haskell Type System

Wednesday, October 5

This one-day course focuses on several of the type-system-oriented language extensions that GHC offers and shows how to put them to good use. Topics include the kind system and promoting datatypes, GADTs, type families, and moving even more towards dependent types via the new TypeInType. The extensions will be explained and illustrated with examples, and we provide advice on how and when to best use them.

Registration is open; you can buy tickets via Skills Matter.

Guide to Haskell Performance

Monday, October 10 – Tuesday, October 11

In this two-day course, we focus on how to write performant Haskell code that scales. We systematically explain how lazy evaluation works, and how one can reason about the time and space performance of code that is evaluated lazily. We look at various common pitfalls and explain them. We look at data structures and their performance characteristics and discuss their suitability for various tasks. We also discuss how one can best debug the performance of Haskell code, and look at existing high-performance Haskell libraries and their implementation to learn general techniques that can be reused.

Registration is open; you can buy tickets via Skills Matter.

Other courses and events

Well-Typed also offers on-demand on-site training and consulting. Please contact us if you are interested in consulting, or in events or courses that are not listed here.

We also have a low-volume mailing list where we occasionally announce events that we organize or participate in (subscribe here).

Categories: Offsite Blogs

Douglas M. Auclair (geophf): August 2016 1HaskellADay Problems and Solutions

Planet Haskell - Thu, 09/01/2016 - 6:57am
August 2016

  • August 25th, 2016: Today's #haskell exercise looks at historical prices of #bitcoin. Today's #haskell solution is worth $180k ... five years ago. I wonder what it will be worth 5 years hence?
  • August 23rd, 2016: Enough diving into the node's data, let's look at the structure of the related nodes for today's #haskell problem. The structure of tweets and related data for today's #haskell solution 
  • August 22nd, 2016: Today's #haskell problem is parsing twitter hashtags and a bit of data fingerprinting/exploration of same. BOOM! Today's #haskell solution analyzes hashtags twitter-users ('tweeps') use
  • August 19th, 2016: For today's #haskell exercise we look at unique users in a set of twitter graph-JSON. Today's #haskell solution gives us a list of users, then their tweets, from twitter graph-JSON data 
  • August 18th, 2016: For today's #haskell problem we extract and reify URLs from twitter graph-JSON. Today's #haskell solution extract URLs from twitter data as easily as looking up the URLs in a JSON map.
  • August 17th, 2016: For today's #haskell problem we explore the relationships from and to tweets and their related data. Today's #haskell solution relates data to tweets extracted from graph-JSON 
  • August 16th, 2016: For today's #haskell exercise we begin converting nodes in a graph to more specific types (Tweets are up first). We create some JSON Value-extractors and with those find the tweets in graph JSON in today's #Haskell solution 
  • August 15th, 2016: Today's #haskell exercise looks at twitter data as labeled/typed nodes and relations in JSON  
    Okay! For today's #haskell solution we discover our node and relation types in twitter data-as-graphs JSON! 
  • August 10th, 2016: Today's #Haskell problem we look at the big data-problem: getting a grasp of large indices of tweets in graph JSON. Today's #Haskell solution time-stamps and gives 'small-data' indices to tweets from graph JSON 
  • August 9th, 2016: For today's #haskell problem we extract the tweets from rows of graph data encoded in JSON. Today's #Haskell solution extracts the tweets from graph JSON and does some simple queries
  • August 8th, 2016: For today's #haskell problem we look at reading in the graph of a twitter-feed as JSON and just a bit of parsing. We leverage the Cypher library for today's #haskell solution to look at 100 rows of tweets encoded as JSON 
  • August 5th, 2016: Today's #Haskell problem we go for the Big Kahuna: solving a Kakuro puzzle. Okay, we have a #Haskell solution ... finally ... maybe. The solver took too long, so I solved it myself faster :/
  • August 4th, 2016: Today's #Haskell exercise looks at (simple) constraints of unknown values for a sum-solver. Today's #Haskell solution also uses QBits to solve constrained unknowns 
  • August 3rd, 2016: Today's #haskell problem provides the cheatsheet: "What are the unique 4-number sums to 27?" We round-trip the Set category for today's #haskell solution
  • August 2nd, 2016: Today's #haskell exercise looks at solving our sums when we know some of the numbers already. QBits actually work nicely for today's #Haskell solution
  • August 1st, 2016: For today's #Haskell exercise we play the 'Numbers Game.' The #haskell solution is a guarded combine >>= permute in the [Int]-domain. I like the Kleisli category; ICYMI.
Categories: Offsite Blogs

    Edward Z. Yang: Backpack and separate compilation

    Planet Haskell - Thu, 09/01/2016 - 12:26am

    When building a module system which supports parametrizing code over multiple implementations (i.e., functors), you run into an important implementation question: how do you compile said parametric code? In existing language implementations, there are three major schools of thought:

    1. The separate compilation school says that you should compile your functors independently of their implementations. This school values compilation time over performance: once a functor is built, you can freely swap out implementations of its parameters without needing to rebuild the functor, leading to fast compile times. Pre-Flambda OCaml works this way. The downside is that it's not possible to optimize the functor body based on implementation knowledge (unless, perhaps, you have a just-in-time compiler handy).
    2. The specialize at use school says, well, you can get performance by inlining functors at their use-sites, where the implementations are known. If the functor body is not too large, you can transparently get good performance benefits without needing to change the architecture of your compiler in major ways. Post-Flambda OCaml and C++ templates in the Borland model both work this way. The downside is that the code must be re-optimized at each use site, and there may end up being substantial duplication of code (this can be reduced at link time).
    3. The repository of specializations school says that it's dumb to keep recompiling the instantiations: instead, the compiled code for each instantiation should be cached somewhere globally; the next time the same instance is needed, it should be reused. C++ templates in the Cfront model and Backpack work this way.

    The repository perspective sounds nice, until you realize that it requires major architectural changes to the way your compiler works: most compilers don't try to write intermediate results into some shared cache, and adding support for this can be quite complex and error-prone.

    Backpack sidesteps the issue by offloading the work of caching instantiations to the package manager, which does know how to cache intermediate products. The trade off is that Backpack is not as integrated into Haskell itself as some might like (it's extremely not first-class.)
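
    To make "parametric code" concrete: a Backpack package can be compiled against a signature rather than a concrete module, and instantiated later. A minimal sketch of such a signature (syntax per the GHC 8.2-era prototype; the names are invented for illustration):

    -- Str.hsig, inside the indefinite package
    signature Str where

    data Str
    empty :: Str
    append :: Str -> Str -> Str

    Modules of the package import Str as usual; the package manager then caches one compiled instantiation per implementation that gets plugged in.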

    Categories: Offsite Blogs

    Michael Snoyman: Using AppVeyor for Haskell+Windows CI

    Planet Haskell - Tue, 08/30/2016 - 6:00pm

    I don't think I ever documented this before, so just a quick post to get this out there. Many of us working on open source Haskell libraries already use Travis CI for doing continuous integration builds of our software. Some time ago they added support for OS X, making it possible to cover Linux and OS X with multiple configurations on their systems. For any project with a stack.yaml file, this can be easily achieved using the Stack recommended Travis configuration.

    This leaves Windows testing out, which is unfortunate, because Windows is likely to be the most common build to fail. Fortunately, AppVeyor provides a similar experience to Travis, but for Windows. In order to get set up, just:

    1. Sign in with their web interface and add your Github repo
    2. Add an appveyor.yml file to your project

    Here's a simple file I've used on a few projects with success:

    build: off

    before_test:
    - curl -sS -ostack.zip -L --insecure http://www.stackage.org/stack/windows-i386
    - 7z x stack.zip stack.exe

    clone_folder: "c:\\stack"
    environment:
      global:
        STACK_ROOT: "c:\\sr"

    test_script:
    - stack setup > nul
    # The ugly echo "" hack is to avoid complaints about 0 being an invalid file
    # descriptor
    - echo "" | stack --no-terminal test

    All this does is:

    • Downloads the Stack zip file
    • Unpacks the stack.exe executable
    • Changes the STACK_ROOT to deal with Windows long path issues
    • Run stack setup to get a toolchain
    • Run stack --no-terminal test to build your package and run the test suites

    You're free to modify this in any way you want, e.g., add in --bench to build benchmarks, add --pedantic to fail on warnings, etc. If you have more system library dependencies, you'll need to consult the AppVeyor docs to see how to install them. And in our use cases for Stack, we found that using the AppVeyor caching functionality made builds unreliable (due to the large size of the cache). You may want to experiment with turning it back on, since this setup is slow (it downloads and installs a full GHC toolchain and builds all library dependencies each time).
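
    For example, to also build benchmarks and fail on warnings, the test line could become (a sketch; both are standard Stack flags):

    - echo "" | stack --no-terminal test --bench --pedantic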

    Categories: Offsite Blogs

    Joachim Breitner: Explicit vertical alignment in Haskell

    Planet Haskell - Tue, 08/30/2016 - 7:35am

    Chris Done’s automatic Haskell formatter hindent has been released in a new version, and is getting quite a bit of deserved attention. He is polling Haskell programmers on whether two or four spaces are the right indentation. But that is just cosmetics…

    I am in principle very much in favor of automatic formatting, and I hope that a tool like hindent will eventually be better at formatting code than a human.

    But it currently is not there yet. Code is literature meant to be read, and good code goes to great lengths to be easily readable, and formatting can carry semantic information.

    The Haskell syntax was (at least I get that impression) designed to allow authors to write nice-looking, easy-to-understand code. One important tool here is vertical alignment of corresponding concepts on different lines. Compare

    maze :: Integer -> Integer -> Integer
    maze x y
      | abs x > 4  || abs y > 4  = 0
      | abs x == 4 || abs y == 4 = 1
      | x ==  2    && y <= 0     = 1
      | x ==  3    && y <= 0     = 3
      | x >= -2    && y == 0     = 4
      | otherwise                = 2


    maze :: Integer -> Integer -> Integer
    maze x y
      | abs x > 4 || abs y > 4 = 0
      | abs x == 4 || abs y == 4 = 1
      | x == 2 && y <= 0 = 1
      | x == 3 && y <= 0 = 3
      | x >= -2 && y == 0 = 4
      | otherwise = 2

    The former is a quick to grasp specification, the latter (the output of hindent at the moment) is a desert of numbers and operators.

    I see two ways forward:

    • Tools like hindent get improved to the point that they are able to detect such patterns, and indent it properly (which would be great, but very tricky, and probably never complete) or
    • We give the user a way to indicate intentional alignment in a non-obtrusive way that gets detected and preserved by the tool.

    What could such ways be?

    • For guards, it could simply detect that within one function definition, there are multiple | on the same column, and keep them aligned.
    • More generally, one could take the approach of lhs2TeX (which, IMHO, with careful input, a proportional font and the great polytable LaTeX backend, produces the most pleasing code listings). There, two spaces or more indicate an alignment point, and if two such alignment points are in the same column, their alignment is preserved – even if there are lines in between!

      With the latter approach, the code up there would be written

      maze :: Integer -> Integer -> Integer
      maze x y
        | abs x > 4  || abs y > 4  = 0
        | abs x == 4 || abs y == 4 = 1
        | x ==  2    && y <= 0     = 1
        | x ==  3    && y <= 0     = 3
        | x >= -2    && y == 0     = 4
        | otherwise                = 2

      And now the intended alignment is explicit.

    (This post is cross-posted on reddit.)

    Update (2016-09-05) Shortly after this post, the Haskell formatter brittany was released, which supports vertical alignment. Yay!

    Categories: Offsite Blogs

    Edward Z. Yang: cabal new-build is a package manager

    Planet Haskell - Mon, 08/29/2016 - 3:32pm

    An old article I occasionally see cited today is Repeat after me: "Cabal is not a Package Manager". Many of the complaints don't apply to cabal-install 1.24's new Nix-style local builds. Let's set the record straight.

    Fact: cabal new-build doesn't handle non-Haskell dependencies

    OK, so this is one thing that hasn't changed since Ivan's article. Unlike Stack, cabal new-build will not handle downloading and installing GHC for you, and like Stack, it won't download and install system libraries or compiler toolchains: you have to do that yourself. This is definitely a case where you should lean on your system package manager to bootstrap a working installation of Cabal and GHC.

    Fact: The Cabal file format can record non-Haskell pkg-config dependencies

    Since 2007, the Cabal file format has a pkgconfig-depends field which can be used to specify dependencies on libraries understood by the pkg-config tool. It won't install the non-Haskell dependency for you, but it can let you know early on if a library is not available.

    In fact, cabal-install's dependency solver knows about the pkgconfig-depends field, and will pick versions and set flags so that we don't end up with a package with an unsatisfiable pkg-config dependency.
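
    In a .cabal file this looks as follows (a minimal sketch; the module name and version bounds are invented):

    library
      exposed-modules:   Graphics.Example
      build-depends:     base >=4.8 && <5
      pkgconfig-depends: cairo >= 1.10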

    Fact: cabal new-build 2.0 handles build-tools dependencies

    As of writing, this feature is unreleased (if you are impatient, get a copy of HEAD from the GitHub repository or install cabal-install-head from hvr's PPA). However, in cabal-install 2.0, build-tools dependencies will be transparently built and added to your PATH. Thus, if you want to install a package which has build-tools: happy, cabal new-build will automatically install happy and add it to the PATH when building this package. These executables are tracked by new-build and we will avoid rebuilding the executable if it is already present.

    Since build-tools identify executable names, not packages, there is a set of hardcoded build-tools which are treated in this way, coinciding with the set of build-tools that simple Setup scripts know how to use natively. They are hscolour, haddock, happy, alex, hsc2hs, c2hs, cpphs and greencard.
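
    Declaring such a dependency is a single field (a sketch; happy and alex stand in for any of the tools above):

    library
      build-depends: base
      build-tools:   happy, alex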

    Fact: cabal new-build can upgrade packages without breaking your database

    Suppose you are working on some project which depends on a few dependencies. You decide to upgrade one of your dependencies by relaxing a version constraint in your project configuration. After making this change, all it takes is a cabal new-build to rebuild the relevant dependency and start using it. That's it! Even better, if you had an old project using the old dependency, well, it still is working, just as you would hope.

    What is actually going on is that cabal new-build doesn't do anything like a traditional upgrade. Packages installed to cabal new-build's global store are uniquely identified by a Nix-style identifier which captures all of the information that may have affected the build, including the specific versions that were built against. Thus, a package "upgrade" actually is just the installation of a package under a different unique identifier which can coexist with the old one. You will never end up with a broken package database because you typed new-build.

    There is not presently a mechanism for removing packages besides deleting your store (.cabal/store), but it is worth noting that deleting your store is a completely safe operation: cabal new-build won't decide that it wants to build your package differently if the store doesn't exist; the store is purely a cache and does not influence the dependency solving process.
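
    In practice (a sketch; the store path is the default mentioned above):

    $ cabal new-build               # populates ~/.cabal/store
    $ rm -rf ~/.cabal/store         # safe: the store is only a cache
    $ cabal new-build               # same plan, rebuilt from scratch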

    Fact: Hackage trustees, in addition to package authors, can edit Cabal files for published packages to fix bugs

    If a package is uploaded with bad version bounds and a subsequent new release breaks them, a Hackage Trustee can intervene, making a modification to the Cabal file to update the version bounds in light of the new information. This is a more limited form of intervention than the patches of Linux distributions, but it is similar in nature.

    Fact: If you can, use your system package manager

    cabal new-build is great, but it's not for everyone. If you just need a working pandoc binary on your system and you don't care about having the latest and greatest, you should download and install it via your operating system's package manager. Distro packages are great for binaries; they're less good for libraries, which are often too old for developers (though it is often the easiest way to get a working install of OpenGL). cabal new-build is oriented at developers of Haskell packages, who need to build and depend on packages which are not distributed by the operating system.

    I hope this post clears up some misconceptions!

    Categories: Offsite Blogs

    Functional Jobs: Senior Software Engineer (Haskell) at Front Row Education (Full-time)

    Planet Haskell - Mon, 08/29/2016 - 11:59am

    Senior Software Engineer to join fast-growing education startup transforming the way 3+ million K-12 students learn Math and English.

    What you tell your friends you do

    “You know how teachers in public schools are always overworked and overstressed with 30 kids per classroom and never ending state tests? I make their lives possible and help their students make it pretty far in life”

    What you really will be doing

    Architect, design and develop new web applications, tools and distributed systems for the Front Row ecosystem in Haskell, Flow, PostgreSQL, Ansible and many others. You will get to work on your deliverable end-to-end, from the UX to the deployment logic

    Mentor and support more junior developers in the organization

    Create, improve and refine workflows and processes for delivering quality software on time and without incurring debt

    Work closely with Front Row educators, product managers, customer support representatives and account executives to help the business move fast and efficiently through relentless automation.

    How you will do this

    You’re part of an agile, multidisciplinary team. You bring your own unique skill set to the table and collaborate with others to accomplish your team’s goals.

    You prioritize your work with the team and its product owner, weighing both the business and technical value of each task.

    You experiment, test, try, fail and learn all the time

    You don’t do things just because they were always done that way, you bring your experience and expertise with you and help the team make the best decisions

    What have we worked on in the last quarter

    We have rewritten our business logic to be decoupled from the Common Core math standards, supporting US state-specific standards and international math systems

    Prototyped and tested a High School Math MVP product in classrooms

    Changed assigning Math and English to a work queue metaphor across all products for conceptual product simplicity and consistency

    Implemented a Selenium QA test suite 100% in Haskell

    Released multiple open source libraries for generating automated unit test fixtures, integrating with AWS, parsing and visualizing Postgres logs and much more

    Made numerous performance optimization passes on the system for supporting classrooms with weak Internet bandwidth


    We’re an agile and lean small team of engineers, teachers and product people working on solving important problems in education. We hyper-focus on speed, communication and prioritizing what matters to our millions of users.

    • You’re smart and can find a way to show us.
    • A track record of 5+ years of working in, or leading, teams that rapidly ship high quality web-based software that provides great value to users. Having done this at a startup is a plus.
    • Awesome at a Functional Programming language: Haskell / Scala / Clojure / Erlang etc
    • Exceptional emotional intelligence and people skills
    • Organized and meticulous, but still able to focus on the big picture of the product
    • A ton of startup hustle: we're a fast-growing, VC-backed, Silicon Valley tech company that works hard to achieve the greatest impact we can.
    • Money, sweet
    • Medical, dental, vision
    • Incredible opportunity to grow, learn and build lifetime bonds with other passionate people who share your values
    • Food, catered lunch & dinner 4 days a week + snacks on snacks
    • Room for you to do things your way at our downtown San Francisco location right by the Powell Station BART, or you can work remotely from anywhere in the US, if that’s how you roll
    • Awesome monthly team events + smaller get-togethers (board game nights, trivia, etc)

    Get information on how to apply for this position.

    Categories: Offsite Blogs

    Philip Wadler: Option A: Think about the children

    Planet Haskell - Mon, 08/29/2016 - 10:47am
    Fellow tweeter @DarlingSteveEDI captured my image (above) as we gathered for Ride the Route this morning, in support of Option A for Edinburgh's proposed West-East Cycle Route (the route formerly known as Roseburn to Leith Walk). My own snap of the gathering is below.

    Fellow blogger Eilidh Troup considers another aspect of the route, safety for schoolchildren. Option A is far safer than Option B for young children cycling to school: the only road crossing in Option A is guarded by a lollipop lady, while children taking Option B must cross *three* busy intersections unaided.

    It's down to the wire: members of the Transport and Environment Committee vote tomorrow. The final decision may be closely balanced, so even sending your councillor (and councillors on the committee) a line or two can have a huge impact. If you haven't written, write now, right now!

      Roseburn to Leith Walk A vs B: time to act!
      Ride the Route in support of Option A

    Late breaking addendum:
      Sustrans supports Option A: It’s time for some big decisions…

    Categories: Offsite Blogs

    Philip Wadler: Roseburn to Leith Walk A vs B: time to act!

    Planet Haskell - Mon, 08/29/2016 - 10:36am
    On 2 August, I attended a meeting in Roseburn organised by those opposed to the new cycleway planned by the city. Local shopkeepers fear they will see a reduction in business, unaware this is a common cycling fallacy: study after study has shown that adding cycleways increases business, not the reverse, because pedestrians and cyclists find the area more attractive.

    Feelings in Roseburn run strong. The locals don't trust the council: who can blame them after the fiasco over trams? But the leaders of the campaign are adept at cherry picking statistics, and, sadly, neither side was listening to the other.

    On 30 August, the Edinburgh Council Transport and Environment Committee will decide between two options for the cycle route, A and B. Route A is direct. Route B goes round the houses, adding substantial time and rendering the whole route less attractive. If B is built, the opportunity to shift the area away from cars, to make it a more pleasant place to be and draw more business from those travelling by foot, bus, and cycle, goes out the window.

    Locals like neither A nor B, but in a spirit of compromise the Transport and Environment Committee may opt for B. This will be a disaster, as route B will be far less likely to draw people out of their cars and onto their cycles, undermining Edinburgh's ambitious programme to attract more people to cycling before it even gets off the ground.

    Investing in cycling infrastructure can make an enormous difference. Scotland suffers 2000 deaths per year due to pollution, and 2500 deaths per year due to inactivity. The original proposal for the cycleway estimates benefits of £14.5M over ten years (largely from improved health of those attracted to cycling) vs a cost of £5.7M, a staggering 3.3x return on investment. Katie Cycles to School is a brilliant video from Pedal on Parliament that drives home how investment in cycling will improve lives for cyclists and non-cyclists alike.

    Want more detail? Much has been written on the issues.
      Roseburn Cycle Route: Evidence-based local community support.
      Conviction Needed.

    The Transport Committee will need determination to carry the plan through to a successful conclusion. This is make or break: will Edinburgh be a city for cars or a city for people? Please write to your councillors and the transport and environment committee to let them know your views.

    Roseburn to Leith Walk Cycleway: A vs B
    Roseburn to Leith Walk Cycleway: the website
    Roseburn to Leith Walk

    Ride the Route in support of Option A
    Option A: Think about the children
    Categories: Offsite Blogs

    Christopher Done: hindent 5: One style to rule them all

    Planet Haskell - Sun, 08/28/2016 - 6:00pm
    Reminder of the past

    In 2014, in my last post about hindent, I wrote these points:

    1. Automatic formatting is important:
       1. Other people also care about this
       2. The Haskell community is not immune to code formatting debates

    I proposed my hindent tool, which:

    1. Would format your code.
    2. Supported multiple styles.
    3. Supported further extension/addition of more styles trivially.

    Things learned

    I made some statements in that post that I’m going to re-evaluate in this post:

    1. Let’s have a code style discussion. I propose to solve it with tooling.
    2. It’s not practical to force everyone into one single style.

    Code formatting is solved with tooling

    I’ve used hindent for two years; it solves the problem. There are a couple exceptions1. On the whole, though, it’s a completely different working experience:

    • Code always looks the same.
    • I don’t make any style decisions. I just think about the tree I need for my program.
    • I don’t do any manual line-breaking.
    • I’ve come to exploit it by writing lazy code like do x<-getLine;when(x>5)(print 5) and then hitting a keybinding to reformat it (see the sketch below).
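
    For instance, hindent turns that one-liner into something like the following (a sketch of the output; exact layout depends on the hindent version):

    do x <- getLine
       when (x > 5) (print 5)
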
    Switching style is realistic

    I’ve been writing Haskell in my own style for years. For me, my style is better for structured editing, more consistent, and visually easier to read, than most code I’ve seen. It’s like Lisp. Using hindent, with my ChrisDone style, I had it automatically formatted for me. I used 2-space indents.

    The most popular style in the community2 is JohanTibell: The alignment, line-breaking, and spacing (4 spaces instead of 2) differs significantly to my own style.

    At FP Complete I’ve done a lot of projects, private FP Complete projects, client projects, and public FP Complete projects (like Stack). For the first year or so I generally stuck to my guns when working on code only I was going to touch and used my superior style.

    But once the JohanTibell style in hindent was quite stable, I found that I didn’t mind using it while collaborating with people who prefer that style. The tooling made it so automatic, that I didn’t have to understand the style or make any style decisions, I just wrote code and got on with it. It doesn’t work great with structured-haskell-mode, but that’s ok. Eventually I got used to it, and eventually switched to using it for my own personal projects.

    I completely did a U-turn. So I’m hoping that much of the community can do so too and put aside their stylistic preferences and embrace a standard.

    Going forward

    hindent-5.* now supports one style, based on the Johan Tibell style guide. My own style guide is now deprecated in favor of that. The style flag --style foo is now silently ignored.

    There is a demonstration web site in which you can try examples, and also get a link for the example to show other people the output (for debugging).

    HIndent now has a “literate” test suite here: You can read through it as a document, a bit like Johan’s style guide. But running the test suite parses this file and checks that each code fence is printed as written.

    There’s also a big performance improvement: since I rewrote comment handling, switched to a bytestring-builder, and improved the quadratic line-breaking algorithm to short-circuit, among other improvements, hindent now formats things in 1.5ms instead of 1s.

    For those who still want to stick with their old hindent, Andrew Gibiansky is keeping a fork of hindent 4 for his personal use, and has said he’ll accept PR’s for that.

    HIndent is not perfect; there’s always room for improvement (the issue tracker welcomes issues), but over time that problem space gets smaller and smaller. There is support for Emacs, Vim and Atom. I would appreciate support for SublimeText too.

    Give it a try!

    1. Such as CPP #if directives–they are tricky to handle. Comments are also tricky, but I’ve re-implemented comment handling from scratch and it works pretty well now. See the pretty extensive tests.

    2. From a survey of the top downloaded 1000 packages on Hackage, 660 are 4-spaced and 343 are 2-spaced. All else being equal, 4 spaces wins.

    Categories: Offsite Blogs

    Michael Snoyman: Follow up: haskell.org and the Evil Cabal

    Planet Haskell - Sun, 08/28/2016 - 6:00pm

    Yesterday I put out a blog post describing a very problematic situation with the haskell.org committee. As often happens with this kind of thing, a very lively discussion occurred on Reddit. There are many repeating themes over there, so instead of trying to address the points in that discussion, I'm going to give some responses in this post.

    • Firstly: thank you to those of you who subscribed to the haskell-community list and made your voices heard. That was the best response to the blog post I could have hoped for, and it happened. At this point, the Twitter poll and mailing list discussion both point to a desire to have Stack as the primary option on the downloads page (the latter is a tied vote of 6 to 6, indicating the change proposed should not happen). As far as I'm concerned, the committee has two options:

      • Listen to the voices of the community and make Stack the primary option on the downloads page

      • Ignore the community voices and put the Haskell Platform at the top of the page, thus confirming my claims of an oligarchy.

    • Clarification: I do not believe anyone involved in this is an evil person. I thought my wording was unambiguous, but apparently not. The collusion among the projects is what gets the term "Evil Cabal." That said, I do believe that there were bad actions taken by individuals involved, and I've called some of those out. There's a much longer backstory here of the nepotism I refer to, starting at least at ICFP 2014 and GPS Haskell, but that's a story I'm not getting into right now.

    • A few people who should know better claimed that there's no reason for my complaint given that the Haskell Platform now ships with Stack. This is incorrect for multiple reasons. Firstly, one of my complaints in the blog post is that we've never discussed technical merits, so such a claim should be seen as absurd immediately. There's a great Reddit comment explaining that this inclusion is just misdirection. In any event, here are just 140 characters worth of reasons the Haskell Platform is inferior to Stack for a new user:

      • There is no clear "getting started" guide for new users. Giving someone a download is only half the battle. If they don't know where to go next, the download is useless. (Compare with haskell-lang's getting started.)

      • Choice confusion: saying "HP vs Stack" is actually misleading. The real question is "HP+cabal-install vs HP+Stack vs Stack". A new user is not in a strong enough position to make this decision.

      • Stack will select the appropriate version of GHC to be used based on the project the user is working on. Bundling GHC with Stack insists on a specific GHC version. (I'm not arguing that there's no benefit to including GHC in the installer, but there are definitely downsides too.)

      • The HP release process has historically been very slow, whereas the Stack release process is a well oiled machine. I have major concerns about users being stuck with out-of-date Stack executables by using the HP and running into already fixed bugs. This isn't hypothetical: GHC for Mac OS X shipped an old Stack version for a while resulting in many bug reports. (This is an example of download page decisions causing extra work for the Stack team.)

      • Bonus point (not on Twitter): Stack on its own is very well tested. We have little experience in the wild of HP+Stack. Just assuming it will work is scary, and goes against the history of buggy Haskell Platform releases.

    • A lot of the discussion seemed to assume I was saying to get rid of cabal-install entirely. In fact, my blog post said the exact opposite: let it continue if people want to work on it. I'm talking exclusively about the story we tell to new users. Again, technical discussions should have occurred long ago about what's the best course of action. I'm claiming that Stack is by far the best option for the vast majority of new users. The committee has never to my knowledge argued publicly against that.

    • There was a lot of "tone policing," saying things like I need to have more patience, work with not against the committee, follow the principle of charity, etc. If this is the first time I raised these issues, you'd be right. Unfortunately, there is a long history here of many years of wasted time and effort. The reason I always link back to pull request #130 is because it represents the tipping point from "work with the committee without making a fuss" to "I need to make all of these decisions as public as possible so bad decisions don't slip in."

      Let me ask you all: if I had just responded to the mailing list thread asking for a different course of action to be taken, would most of you know that this drama was happening? This needed to be public, so that no more massive changes could slip under everyone's radar.

      Also: it's ironic to see people accusing me of violating the principle of charity by reading my words in the most negative way they possibly can. That's true irony, not just misrepresenting someone's position.

    • For a long time, people have attacked FP Complete every chance they could, presumably because attacking a company is easier than attacking an individual. There is no "FP Complete" conspiracy going on here. I decided to write this blog post on my own, not part of any FP Complete strategy. I discussed it with others, most of whom do not work for FP Complete. In fact, most of the discussion happened publicly, on Twitter, for you all to see.

      If you want to attack someone, attack me. Be intellectually honest. And while you're at it: try to actually attack the arguments made instead of resorting to silly ad hominems about power grabs. Such tin-foil hattery is unbecoming.

    • There's a legitimate discussion about how we get feedback from multiple forms of communication (mailing lists, Twitter, Reddit). While that's a great question to ask and a conversation to have, it really misses the point here completely: we're looking for a very simple vote on three options. We can trivially put up a Google Form or similar and link to it from all media. We did this just fine with the FTP debate. It feels almost disingenuous to claim that we don't know how to deal with this problem when we've already dealt with it in the past.

    Categories: Offsite Blogs

    Dimitri Sabadie: luminance designs

    Planet Haskell - Sun, 08/28/2016 - 5:46pm

    luminance-0.7.0 was released a few days ago and I decided it was time to explain exactly what luminance is and the design choices I made. After a very interesting talk with nical about other Rust graphics frameworks (e.g. gfx, glium, vulkano, etc.), I thought it was time to give people some more information about luminance and how it compares to other frameworks.


    luminance started as a Haskell package, extracted from a “3D engine” I had been working on for a while called quaazar. I came to the realization that I wasn’t using the Haskell garbage collector at all and that I could benefit from using a language without GC. Rust is a very famous language and well appreciated in the Haskell community, so I decided to jump in and learn Rust. I migrated luminance in a month or two. The mapping is described in this blog entry.

    What is luminance for?

    I’ve been writing 3D applications for a while and I have always been frustrated by how badly OpenGL is designed. Let’s sum up OpenGL’s design problems (a small sketch after the list illustrates the first two points):

    • weakly typed: OpenGL has types, but… it actually does not. GLint, GLuint or GLbitfield are all defined as aliases of primitive types (i.e. something like typedef float GLfloat). Try it with grep -Rn "typedef [a-zA-Z]* GLfloat" /usr/include/GL. This leads to the fact that framebuffers, textures, shader stages, shader programs, uniforms, etc. all have the same type (GLuint, i.e. unsigned int). Thus, a function like glCompileShader expects a GLuint as argument, yet you can pass it a framebuffer, because that’s also represented as a GLuint – very bad for us. It’s better to consider those as just untyped – :( – handles.
    • runtime overhead: Because of the point above, functions cannot assume you’re passing a value of the expected type – e.g. the example just above with glCompileShader and a framebuffer. That means OpenGL implementations have to check all the values you pass as arguments to be sure they match the expected type. That’s basically several tests for each call of an OpenGL function. If the type doesn’t match, you’re screwed – see the next point.
    • error handling: This is catastrophic. Because of the runtime overhead, almost any function might set the error flag. You have to check the error flag with the glGetError function, which adds a side effect, prevents parallelism, etc.
    • global state: OpenGL works on the concept of global mutation. You have a state, wrapped in a context, and each time you want to do something with the GPU, you have to change something in the context. Such a context is important; however, some mutations shouldn’t be required. For instance, when you want to change the value of an object or use a texture, OpenGL requires you to bind the object. If you forget to bind the next object, the mutation will occur on the first one. Side effects, side effects…
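
    To make the first two points concrete, here is a minimal sketch using the raw Rust bindings from the gl crate (the C API behaves the same way). Handing a framebuffer to a shader function type-checks just fine and only blows up at runtime:

    extern crate gl;

    use gl::types::GLuint;

    fn main() {
        // illustrative only: assumes an OpenGL context is current and the
        // function pointers have been loaded (e.g. with gl::load_with)
        unsafe {
            // both "objects" are plain GLuint (u32) handles
            let mut fb: GLuint = 0;
            gl::GenFramebuffers(1, &mut fb);

            // nonsense, yet the compiler cannot object: a framebuffer handle
            // is handed to a shader function
            gl::CompileShader(fb); // the error only surfaces via glGetError
        }
    }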

    The goal of luminance is to fix most of those issues by providing a safe, stateless and elegant graphics framework. It should be as low-level as possible, but shouldn’t sacrifice runtime performance – CPU load as well as memory bandwidth. That is why, if you know how to program with OpenGL, you won’t feel lost when getting your feet wet with luminance.

    Because of the many OpenGL versions and other technologies (among them, Vulkan), luminance has an extra aim: abstracting over the trending graphics APIs.

    Types in luminance

    In luminance, all graphics resources – and even concepts – have their own respective types. For instance, instead of GLuint for both shader programs and textures, luminance has Program and Texture. That ensures you don’t pass values of the wrong type.

    Because of the static guarantees provided at compile time by such strong typing, the runtime shouldn’t have to check for type safety. Unfortunately, because luminance wraps OpenGL in the luminance-gl backend, we can only add static guarantees; we cannot remove OpenGL’s own runtime overhead.
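
    To illustrate – with simplified stand-ins I made up, not luminance’s actual definitions – each resource wraps its raw handle in a distinct type, so mixing them up becomes a compile-time error instead of a runtime one:

    type GLuint = u32;

    pub struct Program(GLuint); // a shader program handle
    pub struct Texture(GLuint); // a texture handle

    fn use_program(_program: &Program) {
        // only a Program can ever get here
    }

    fn main() {
        let tex = Texture(0);
        // use_program(&tex); // error[E0308]: mismatched types – caught at compile time

        let prog = Program(0);
        use_program(&prog); // fine
    }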

    Error handling

    luminance follows the Rust conventions and uses the famous Option and Result types to report errors. You will never have to check against a global error flag, because that approach is just all wrong. Keep in mind that you have the try! macro in your Rust prelude; use it as often as possible!

    Even though Rust provides a panic mechanism – i.e. panics – there’s no such thing as exceptions in Rust. The try! macro is just syntactic sugar for:

    match result {
        Ok(x) => x,
        Err(e) => return Err(From::from(e)), // convert and propagate the error to the caller
    }
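
    As a usage sketch – with made-up types, since this isn’t the real luminance API – error propagation then looks like this:

    struct ShaderError;
    struct Program;

    fn compile(_src: &str) -> Result<Program, ShaderError> {
        Ok(Program) // pretend compilation succeeded
    }

    fn load_program(vs: &str, fs: &str) -> Result<(Program, Program), ShaderError> {
        let vertex = try!(compile(vs));   // early-returns the Err on failure
        let fragment = try!(compile(fs));
        Ok((vertex, fragment))
    }

    fn main() {
        let _ = load_program("vertex source", "fragment source");
    }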

    Stateless design

    luminance is stateless. That means you don’t have to bind an object to be able to use it. luminance takes care of that for you in a very simple way. To achieve this and keep performance up, it’s required to add a bit of high-level structure to the OpenGL API by constraining how binds happen.

    Whatever task you’re trying to accomplish, whatever the computation or problem, it’s always better to gather / group the work into batches. A good example of that is how magnetic hard disk drives work, or your RAM. If you spread your data across the disk (fragmented data) or across several non-contiguous addresses in your RAM, you will end up with unnecessary moves. The hard drive’s head will have to travel all over the disk to gather the information, and it’s very likely you’ll destroy the RAM’s performance (and your CPU caches) if you don’t put the data in a contiguous area.

    If you don’t group your OpenGL resources – for instance, you render 400 objects with shader A, 10 objects with shader B, then 20 objects with shader A, 32 objects with shader C, 348 objects with shader A and finally 439 objects with shader B – you’ll add more OpenGL calls to the equation, hence more global state mutations, and those are costly.

    Instead of this:

    1. 400 objects with shader A
    2. 10 objects with shader B
    3. 20 objects with shader A
    4. 32 objects with shader C
    5. 348 objects with shader A
    6. 439 objects with shader B

    luminance forces you to group your resources like this:

    1. 400 + 20 + 348 objects with shader A
    2. 10 + 439 objects with shader B
    3. 32 objects with shader C

    This is done via types called Pipeline, ShadingCommand and RenderCommand; a sketch of how they nest follows their descriptions below.


    A Pipeline gathers shading commands under a Framebuffer. That means that all the ShadingCommands embedded in the Pipeline will output to the embedded Framebuffer. Simple, yet powerful, because we can bind the framebuffer when executing the pipeline and not worry about it until the next execution of another Pipeline.


    A ShadingCommand gathers render commands under a shader Program, along with an update function. The update function is used to customize the Program by providing uniforms – i.e. Uniform. If you want to change a Program’s Uniform once a frame – and only if the Program is used only once in the frame – this is the right place to do it.

    All the RenderCommands embedded in the ShadingCommand will be rendered using the embedded shader Program. As with the Pipeline, we don’t have to worry about binding: we just use the embedded shader program when executing the ShadingCommand, and we’ll bind another program the next time a ShadingCommand is run!


    A RenderCommand gathers all the information required to render a Tessellation, that is:

    • the blending equation, source and destination blending factors
    • whether the depth test should be performed
    • an update function to update the Program being in use – so that each object can have different properties used in the shader program
    • a reference to the Tessellation to render
    • the number of instances of the Tessellation to render
    • the size of the rasterized points (if the Tessellation contains any)
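
    To make the grouping concrete, here is a heavily simplified sketch of how those three types nest. The type and field names are stand-ins I made up for illustration – the real luminance types carry many more parameters – but the bind discipline is the point: one framebuffer bind per Pipeline, one program bind per ShadingCommand.

    struct Framebuffer;
    struct Program;
    struct Tess;

    struct RenderCommand<'a> { tess: &'a Tess, instances: u32 }
    struct ShadingCommand<'a> { program: &'a Program, render: Vec<RenderCommand<'a>> }
    struct Pipeline<'a> { framebuffer: &'a Framebuffer, shading: Vec<ShadingCommand<'a>> }

    impl<'a> Pipeline<'a> {
        fn run(&self) {
            let _ = self.framebuffer; // bind the framebuffer once, here
            for shading in &self.shading {
                let _ = shading.program; // bind the shader program once, here
                for cmd in &shading.render {
                    let _ = (cmd.tess, cmd.instances); // issue one (instanced) draw call
                }
            }
        }
    }

    fn main() {
        let (fb, shader_a, shader_b) = (Framebuffer, Program, Program);
        let (t0, t1, t2) = (Tess, Tess, Tess);

        let pipeline = Pipeline {
            framebuffer: &fb,
            shading: vec![
                // all of shader A's objects go in one group…
                ShadingCommand { program: &shader_a, render: vec![
                    RenderCommand { tess: &t0, instances: 1 },
                    RenderCommand { tess: &t1, instances: 1 },
                ] },
                // …and shader B's in another
                ShadingCommand { program: &shader_b, render: vec![
                    RenderCommand { tess: &t2, instances: 1 },
                ] },
            ],
        };

        pipeline.run(); // one framebuffer bind; one program bind per shading command
    }
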
    What about shaders?

    Shaders are written in… the backend’s expected format. For OpenGL, you’ll have to write GLSL. The backend automatically inserts the version pragma (#version 330 core for OpenGL 3.3, for instance). Originally, I wanted to port cheddar, my Haskell shader EDSL. But… the sad part of the story is that Rust is – as yet – unable to handle that kind of thing correctly. I started to implement an EDSL for luminance with macros. Even though it was usable, the error handling was seriously terrible – macros shouldn’t be used for such an important purpose. Then some rustaceans pointed out that I could implement a (rustc) compiler plugin, which enables new constructs directly in Rust by extending its syntax. This is great.

    However, with hindsight, I will not do that, for a very simple reason: luminance is, currently, simple, stateless and – most of all – it works! I released a PC demo in Köln, Germany using luminance and a demoscene graphics framework I’m working on:

    pouë link

    youtube capture

    ion demoscene framework

    While developing Céleri Rémoulade, I decided to bake the shaders directly into the Rust source – to get used to what I had wanted to build, i.e. a shader EDSL. So there are a bunch of constant &'static str values everywhere. Each time I wanted to make a fix to a shader, I had to leave the application, make the change, recompile and rerun… I’m not sure it’s a good thing. Interactive programming is a very good thing we can enjoy – yes, even in strongly typed languages ;).
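
    For the curious, "baking in" means nothing fancier than this kind of thing – a made-up minimal example, not an actual Céleri Rémoulade shader:

    // a hypothetical shader baked into the binary; fixing a typo in it
    // means recompiling and rerunning the whole application
    const VERTEX_SHADER: &'static str = "
    layout (location = 0) in vec3 position;

    void main() {
      gl_Position = vec4(position, 1.);
    }
    ";

    fn main() {
        println!("{}", VERTEX_SHADER); // the real program hands this to the backend
    }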

    I saw that gfx doesn’t have its own shader EDSL either and requires you to provide several shader implementations (one per backend). I don’t know; I think it’s not that bad if you only target a single backend (i.e. OpenGL 3.3 or Vulkan). Transpiling shaders is a thing, I’ve been told…

    sneaking out…

    Feel free to dig into the code of Céleri Rémoulade here. It’s demoscene code, so it was rushed before the release – read: it’s not as clean as I wanted it to be.

    I’ll provide you with more information in the next weeks, but I prefer spending my spare time writing code than explaining what I’m gonna do – and missing the time to actually do it. ;)

    Keep the vibe!

    Categories: Offsite Blogs

    Philip Wadler: Ride the Route in support of Option A

    Planet Haskell - Sun, 08/28/2016 - 8:17am

    I've written before about the Edinburgh West-East Cycle Route (previously called Roseburn to Leith Walk), and the importance of choosing Option A over Option B.

    It's fantastic that Edinburgh has decided to invest 10% of its transport budget into active travel. If we invest regularly and wisely in cycling infrastructure, within twenty years Edinburgh could be a much more pleasant place to live and work, on a par with Copenhagen or Rotterdam. But that requires investing effectively. The choice of Option A vs B is a crucial step along the way. Option B offers a far less direct route and will do far less to attract new people to cycling, undermining the investment and making it harder to attract additional funding from Sustrans. Unless we start well, it will be harder to continue well.

    SNP councillors are putting it about that, since Sustrans awarded its competition to Glasgow rather than Edinburgh, the route cannot be funded. But that is nonsense. Edinburgh can build the route on its own; it would just take longer. And in any event, year-on-year funding from Sustrans is still available. But funding is only likely to be awarded for an ambitious project that will attract more folk to cycling, and that means Option A.

    (Imagine if auto routes were awarded by competition. You can have the M80 to Glasgow or the M90 to Edinburgh, but not both ... Sort of like the idea of holding a bake sale to fund a war ...)

    Supporters have organised a Ride the Route event at 8am on Monday 29 August, leaving from Charlotte Square, which will take councillors and press along the route to promote Option A. (And here's a second announcement from Pedal on Parliament.) I hope to see you there!
    Categories: Offsite Blogs

    Michael Snoyman: and the Evil Cabal

    Planet Haskell - Sat, 08/27/2016 - 6:00pm

    There's no point being coy, saying anything but what I actually believe, or saying it any way but bluntly. So here it is:

    The committee has consistently engaged in tactics which silence the voices of all non-members, and stacks the committee to prevent dissenting opinions from joining.

    I've said various parts of this previously. You may have heard me say things like the oligarchy, refer to the "evil cabal of Haskell" (referring to the nepotism which exists amongst Hackage, cabal-install,, and the Haskell Platform), or engage in lengthy debates with committee members about their actions.

    This is a pretty long post; if you want to see my request, please jump to the end.

    The backstory

    To summarize a quick backstory: many of us in the community have been dissatisfied with the four members of the "evil cabal" for years, and have made efforts to improve them, only to be met with opposition. One by one, some of us have been replacing these components with alternatives. Hackage's downtime led to an FP Complete mirror and more reliable doc hosting on cabal-install's weaknesses led to the creation of the Stack build tool. Haskell Platform's poor curation process and broken installer led to Stackage Nightly and LTS Haskell, as well as some of the Stack feature set. And most recently, the committee's poor decisions (as I'll demonstrate shortly) for website content led to resurrecting, a website devoted to actually making Haskell a more approachable language.

    As you can see, at this point all four members of the evil cabal have been replaced with better options, and community discussions and user statistics indicate that most users are switching over. (For an example of statistics, have a look at the package download count on Hackage, indicating that the vast majority of users are no longer downloading packages via cabal-install+Hackage.) I frankly have no problem at all with the continued existence and usage of these four projects; if people want to spend their time on them and use what I consider to be inferior tools, let them. The only remaining pain point is that new, unsuspecting users will arrive at the download page instead of the much more intuitive get started page.

    EDIT Ignore that bit about the download statistics, it's apparently due to the CDN usage on Hackage. Instead, one need only look at how often a question about Haskell Platform is answered with "don't do that, use Stack instead." For a great example, see the discussion of the Rust Platform.

    The newest attempt

    Alright, with that out of the way, why am I writing this blog post now? It's due to this post on the haskell-community mailing list, proposing promoting the Haskell Platform above all other options (yet again). Never heard of that mailing list? That's not particularly surprising. That mailing list was created in response to a series of complaints by me, claiming that the committee acted in a secretive way and ignored all community input. The response to this was, instead of listening to the many community discussions already occurring on Twitter and Reddit, to create a brand new mailing list, have an echo chamber of people sympathetic to Evil Cabal thought, and insist that "real" community discussions go on there.

    We're seeing this process work exactly as the committee wants. Let me demonstrate clearly how. At the time of writing this blog post, three people have voted in favor of promoting the HP on haskell-community, including two committee members (Adam Foltzer and John Wiegley) and the person who originally proposed it, Jason Dagit. There were two objections: Chris Allen and myself. So with a sample size of 5, we see that 60% of the community wants the HP.

    The lie

    A few hours after this mailing list post, I put out a poll on Twitter. At the time of writing (4 hours or so into the poll), we have 122 votes, with 85% in favor of Stack, and 15% in favor of some flavor of the Haskell Platform (or, as we'll now be calling it, the Perfect Haskell Platform). Before anyone gets too excited: yes, a poll of my Twitter followers is obviously a biased sample, but no more biased than the haskell-community list. My real point is this:

    The committee is posing questions of significant importance in echo chambers where they'll get the response they want from a small group of people, instead of engaging the community correctly on platforms that make participation easy.

    This isn't the first time this has happened. When we last discussed the download page content, a similar phenomenon occurred. Magically, the haskell-community discussion had a bias in favor of the Haskell Platform. In response, I created a Google Form, and Stack was the clear victor:

    Yet despite this clear feedback, the committee went ahead with putting minimal installers at the top, not Stack (they weren't quite brazen enough to put the Perfect Haskell Platform at the top or even above Stack, for which I am grateful).

    Proper behavior

    As I see it, the committee has two correct options to move forward with making the download page decision:

    • Accept the votes from my Twitter poll in addition to the haskell-community votes
    • Decide that my poll is invalid for some reason, and do a proper poll of the community, with proper advertisement on Reddit, Twitter, the more popular mailing lists, etc

    If past behavior is any indication though, I predict a third outcome: stating that the only valid form of feedback is on the haskell-community mailing list, ignoring the clear community groundswell against their decisions, and continuing to make unilateral, oligarchic decisions. Namely: promote the Haskell Platform, thereby misleading all the unfortunate new Haskellers who end up at instead of at the much better

    Further evidence

    Everyone's always asking me for more of the details on what's gone on here, especially given how some people vilify my actions. I've never felt comfortable putting that kind of content on blogs shared with other authors when some of those others don't want me to call out the negative actions. However, thankfully I now have my own blog to state this from. This won't include every punch thrown in this long and sordid saga, but hopefully will give a much better idea of what's going on here.

    • Not only are conversations held in private by the committee, but:

      • Their private nature is used to shut down commentary on committee actions
      • There is open deception about what was actually discussed in private

      Evidence: see this troubling Reddit thread. I made the (very true) claim that Gershom made a unilateral decision about the downloads page. You can see the evidence of this where he made that decision. Adam Foltzer tried to call my claim false, and ultimately Gershom himself confirmed I was correct. Adam then claimed offense at this whole discussion and backed out.

    • When I proposed making Stack the preferred download option (at a time when Stack did not appear at all on, Gershom summarily closed the pull request. I have referenced this pull request many times. I don't believe any well-intentioned person can read that long discussion and believe that the committee has a healthy process for maintaining a community website.

    • At no point in any of these discussions has the committee opened up discussion to either the technical advantages of the HP vs Stack, or the relative popularity. Instead, we get discussions of committee process, internal votes, an inability to make changes at certain periods of time based on previously made and undocumented decisions.

    • We often hear statements from committee members about the strong support for their actions, or lack of controversy on an issue. These claims are many times patently false to any objective third party. For example, Gershom claimed that the pull request #122 that he unilaterally decided to merge was "thought to be entirely mundane and uncontroversial." Everyone is welcome to read the Reddit discussion and decide if Gershom is giving a fair summary or not.

    • Chris Done - a coworker of mine - spent his own time creating the first, due to his unhappiness with the homepage at that time. His new site was met with much enthusiasm, and he was pressured by many to get it onto itself. What ensued was almost a year of pain working out the details, having content changed to match the evil cabal narrative, and eventually a rollout. At the end of this, Chris was - without any reason given - not admitted to the committee, denying him the chance to share an opinion on what should be on the site he designed and created.

    My request

    Thank you for either getting through all of that, or skipping to this final section. Here's my request: so many people have told me that they feel disenfranchised by these false-flag "community" processes, and just give up on speaking up. This allows the negative behavior we've seen dominate the evil cabal in Haskell for so long. If you've already moved on to Stack and Stackage yourself, you're mostly free of this cabal. I'm asking you to think of the next generation of Haskell users, and speak up.

    Most powerful course of action: subscribe to the haskell-community mailing list and speak out about how the committee has handled the downloads page. Don't just echo my message here: say what you believe. If you think they've done a good job, then say so. If you think (like I do) that they've done a bad job, and are misleading users with their decisions, say that.

    Next best: comment about this on Reddit or Twitter. Get your voice out there and be heard, even if it isn't in the committee echo chamber.

    In addition to that: expect me to put out more polls on Twitter and possibly elsewhere. Please vote! We've let a select few make damaging decisions for too long, make your voice heard. I'm confident that we will have a more user-friendly Haskell experience if we actually start listening to users.

    And finally: as long as it is being mismanaged, steer people away from This is why we created Link to it, tell your friends about it, warn people away from, and maybe even help improve its content.

    Archive link of the Reddit and Github threads quoted above:

    Categories: Offsite Blogs