# News aggregator

### Yesod Web Framework: Announcing auto-update

Kazu and I are happy to announce the first release of auto-update, a library to run update actions on a given schedule. To make it more concrete, let's start with a motivating example.

Suppose you're writing a web service which will return the current time. This is simple enough with WAI and Warp, e.g.:

```haskell
{-# LANGUAGE OverloadedStrings #-}

import Data.ByteString.Lazy.Char8 (pack)
import Data.Time (formatTime, getCurrentTime)
import Network.HTTP.Types (status200)
import Network.Wai (responseLBS)
import Network.Wai.Handler.Warp (run)
import System.Locale (defaultTimeLocale)

main :: IO ()
main = run 3000 app
  where
    app _ respond = do
        now <- getCurrentTime
        respond $ responseLBS status200 [("Content-Type", "text/plain")]
                $ pack $ formatTime defaultTimeLocale "%c" now
```

This is all well and good, but it's a bit inefficient. Imagine you have a thousand requests per second (some people *really* like to know what time it is). We will end up recalculating the string representation of the time 999 more times than necessary! To work around this, we have a simple solution: spawn a worker thread to calculate the time once per second. (Note: it will actually calculate it slightly less than once per second due to the way threadDelay works; we're assuming we have a little bit of latitude in returning a value that's a few milliseconds off.)
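For concreteness, here is a minimal sketch of that worker-thread approach (my illustration, not code from the announcement; the IORef-based caching scheme is an assumption):

```haskell
{-# LANGUAGE OverloadedStrings #-}
import Control.Concurrent (forkIO, threadDelay)
import Control.Monad (forever)
import Data.ByteString.Lazy.Char8 (ByteString, pack)
import Data.IORef (newIORef, readIORef, writeIORef)
import Data.Time (formatTime, getCurrentTime)
import Network.HTTP.Types (status200)
import Network.Wai (responseLBS)
import Network.Wai.Handler.Warp (run)
import System.Locale (defaultTimeLocale)

getTimeString :: IO ByteString
getTimeString = fmap (pack . formatTime defaultTimeLocale "%c") getCurrentTime

main :: IO ()
main = do
    ref <- newIORef =<< getTimeString
    -- Worker thread: refresh the cached rendering roughly once per second.
    _ <- forkIO $ forever $ do
        threadDelay 1000000
        getTimeString >>= writeIORef ref
    run 3000 $ \_ respond -> do
        time <- readIORef ref
        respond $ responseLBS status200 [("Content-Type", "text/plain")] time
```

Note that this thread keeps spinning even when no one is asking for the time, which is exactly the problem described next.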

Now we will calculate the current time once per second, which is far more efficient... right? Well, it depends on server load. Previously, we talked about a server getting a thousand requests per second. Let's instead reverse it: a server that gets one request every thousand seconds. In that case, our optimization turns into a pessimization.

This problem doesn't just affect getting the current time. Another example is flushing logs. A hot web server could be crippled by flushing logs to disk on every request, whereas flushing once a second on a less popular server simply keeps the process running for no reason. One option is to put the power in the hands of the library's users to decide how often to flush. But oftentimes we won't know until runtime how frequently a service will be requested. Or, even more complicated: traffic will come in spikes, with both busy and idle times.

(Note that I've only given examples of running web servers, though I'm certain there are plenty of other examples out there to draw from.)

This is the problem that auto-update comes to solve. With auto-update, you declare an update function, a frequency with which it should run, and a threshold at which it should "daemonize". The first few times you request a value, it's calculated in the main thread. Once you cross the daemonize threshold, a dedicated worker thread is spawned to recalculate the value. If the value is not requested during an update period, the worker thread is shut down, and we go back to the beginning.

Let's see how our running example works out with this:

```haskell
{-# LANGUAGE OverloadedStrings #-}

import Control.AutoUpdate (defaultUpdateSettings, mkAutoUpdate, updateAction)
import Data.ByteString.Lazy.Char8 (ByteString, pack)
import Data.Time (formatTime, getCurrentTime)
import Network.HTTP.Types (status200)
import Network.Wai (responseLBS)
import Network.Wai.Handler.Warp (run)
import System.Locale (defaultTimeLocale)

getCurrentTimeString :: IO ByteString
getCurrentTimeString = do
    now <- getCurrentTime
    return $ pack $ formatTime defaultTimeLocale "%c" now

main :: IO ()
main = do
    getTime <- mkAutoUpdate defaultUpdateSettings
        { updateAction = getCurrentTimeString
        }
    run 3000 (app getTime)
  where
    app getTime _ respond = do
        time <- getTime
        respond $ responseLBS status200 [("Content-Type", "text/plain")] time
```

If you want to see the impact of this change, add a putStrLn call to getCurrentTimeString and make a bunch of requests to the service. You should see just one request per second, once you get past that initial threshold period (default of 3).

Kazu and I have started using this library in a few places:

- fast-logger no longer requires explicit flushing; it's handled for you automatically.
- wai-logger and wai-extra's request logger, by extension, inherit this functionality.
- Warp no longer has a dedicated thread for getting the current time.
- The Yesod scaffolding was able to get rid of an annoying bit of commentary.

Hopefully others will enjoy and use this library as well.

#### Control.Reaper

The second module in auto-update is Control.Reaper. This provides something similar to, but slightly different from, Control.AutoUpdate. The goal is to spawn reaper/cleanup threads on demand. These threads can handle such things as:

- Recycling resources in a resource pool.
- Closing out unused connections in a connection pool.
- Terminating threads that have overstayed a timeout.

This module is currently being used in Warp for slowloris timeouts and file descriptor cache management, though I will likely use it in http-client in the near future as well for its connection manager management.
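To give a feel for the API, here is a rough sketch of using Control.Reaper to expire items after a timeout (my own illustration based on the module's settings record; treat the details as approximate):

```haskell
import Control.Reaper
import Data.Time (UTCTime, addUTCTime, getCurrentTime)

main :: IO ()
main = do
    -- The workload is a list of (expiry, label) pairs; the reaper thread
    -- wakes up once per second and drops entries that have expired.
    reaper <- mkReaper defaultReaperSettings
        { reaperDelay = 1000000  -- microseconds between reaper runs
        , reaperAction = \items -> do
            now <- getCurrentTime
            let survivors = filter (\(expiry, _) -> expiry > now) items
            -- The action returns a function merging the survivors back
            -- into whatever was added while the action was running.
            return (survivors ++)
        }
    now <- getCurrentTime
    reaperAdd reaper (addUTCTime 5 now, "expires in five seconds")
```

When the workload becomes empty, the reaper thread shuts itself down until the next reaperAdd, mirroring the on-demand behavior of Control.AutoUpdate.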

### Dominic Steinitz: Fun with (Kalman) Filters Part II

Suppose we have a particle moving at constant velocity in one dimension, where the velocity is sampled from a distribution. We can observe the position of the particle at fixed intervals and we wish to estimate its initial velocity. For generality, let us assume that the positions and the velocities can be perturbed at each interval and that our measurements are noisy.

A point of Haskell interest: using type level literals caught a bug in the mathematical description (one of the dimensions of a matrix was incorrect). Of course, this would have become apparent at run-time but proof checking of this nature is surely the future for mathematicians. One could conceive of writing an implementation of an algorithm or proof, compiling it but never actually running it purely to check that some aspects of the algorithm or proof are correct.

#### The Mathematical Model

We take the position as $x_i$ and the velocity as $v_i$:

$$
\begin{aligned}
x_i &= x_{i-1} + \Delta T v_{i-1} + \psi^x_i \\
v_i &= v_{i-1} + \psi^v_i \\
y_i &= a x_i + \upsilon_i
\end{aligned}
$$

where $\psi^x_i$, $\psi^v_i$ and $\upsilon_i$ are all IID normal with means of 0 and variances of $\sigma^2_x$ and $\sigma^2_y$.

We can re-write this as

$$
\begin{aligned}
X_i &= A X_{i-1} + \Psi_i \\
Y_i &= H X_i + \Upsilon_i
\end{aligned}
$$

where

$$
X_i = \begin{pmatrix} x_i \\ v_i \end{pmatrix},\quad
A = \begin{pmatrix} 1 & \Delta T \\ 0 & 1 \end{pmatrix},\quad
H = \begin{pmatrix} a & 0 \end{pmatrix},\quad
\Psi_i \sim \mathcal{N}(0, \Sigma_X),\quad
\Upsilon_i \sim \mathcal{N}(0, \Sigma_Y)
$$

Let us denote the mean and variance of $X_i$ given the observations $y_1, \ldots, y_{i-1}$ as $\hat{X}^\flat_i$ and $\hat{\Sigma}^\flat_i$ respectively and note that

$$
X_i \mid y_1, \ldots, y_{i-1} \sim \mathcal{N}(\hat{X}^\flat_i, \hat{\Sigma}^\flat_i)
$$

Since $X_i$ and $Y_i$ are jointly Gaussian and recalling that $(\hat{\Sigma}^\flat_i)^\top = \hat{\Sigma}^\flat_i$ as covariance matrices are symmetric, we can calculate their mean and covariance matrix as

$$
\begin{pmatrix} X_i \\ Y_i \end{pmatrix} \sim
\mathcal{N}\left(
\begin{pmatrix} \hat{X}^\flat_i \\ H\hat{X}^\flat_i \end{pmatrix},
\begin{pmatrix}
\hat{\Sigma}^\flat_i & \hat{\Sigma}^\flat_i H^\top \\
H\hat{\Sigma}^\flat_i & H\hat{\Sigma}^\flat_i H^\top + \Sigma_Y
\end{pmatrix}
\right)
$$

We can now use standard formulæ which say if

$$
\begin{pmatrix} z_1 \\ z_2 \end{pmatrix} \sim
\mathcal{N}\left(
\begin{pmatrix} \mu_1 \\ \mu_2 \end{pmatrix},
\begin{pmatrix} \Sigma_{11} & \Sigma_{12} \\ \Sigma_{21} & \Sigma_{22} \end{pmatrix}
\right)
$$

then

$$
z_1 \mid z_2 = a \sim
\mathcal{N}\left(\mu_1 + \Sigma_{12}\Sigma_{22}^{-1}(a - \mu_2),\;
\Sigma_{11} - \Sigma_{12}\Sigma_{22}^{-1}\Sigma_{21}\right)
$$

and apply this to $X_i \mid Y_i = y_i$ to give

$$
X_i \mid y_1, \ldots, y_i \sim
\mathcal{N}\left(
\hat{X}^\flat_i + \hat{\Sigma}^\flat_i H^\top (H\hat{\Sigma}^\flat_i H^\top + \Sigma_Y)^{-1}(y_i - H\hat{X}^\flat_i),\;
\hat{\Sigma}^\flat_i - \hat{\Sigma}^\flat_i H^\top (H\hat{\Sigma}^\flat_i H^\top + \Sigma_Y)^{-1} H \hat{\Sigma}^\flat_i
\right)
$$

This is called the measurement update; more explicitly

$$
\begin{aligned}
\hat{X}_i &= \hat{X}^\flat_i + \hat{\Sigma}^\flat_i H^\top (H\hat{\Sigma}^\flat_i H^\top + \Sigma_Y)^{-1}(y_i - H\hat{X}^\flat_i) \\
\hat{\Sigma}_i &= \hat{\Sigma}^\flat_i - \hat{\Sigma}^\flat_i H^\top (H\hat{\Sigma}^\flat_i H^\top + \Sigma_Y)^{-1} H \hat{\Sigma}^\flat_i
\end{aligned}
$$

Sometimes the measurement residual $v_i = y_i - H\hat{X}^\flat_i$, the measurement prediction covariance $S_i = H\hat{\Sigma}^\flat_i H^\top + \Sigma_Y$ and the filter gain $K_i = \hat{\Sigma}^\flat_i H^\top S_i^{-1}$ are defined and the measurement update is written as

$$
\begin{aligned}
\hat{X}_i &= \hat{X}^\flat_i + K_i v_i \\
\hat{\Sigma}_i &= \hat{\Sigma}^\flat_i - K_i S_i K_i^\top
\end{aligned}
$$

We further have that

$$
X_{i+1} = A X_i + \Psi_{i+1}
$$

We thus obtain the Kalman filter prediction step:

$$
\begin{aligned}
\hat{X}^\flat_{i+1} &= A \hat{X}_i \\
\hat{\Sigma}^\flat_{i+1} &= A \hat{\Sigma}_i A^\top + \Sigma_X
\end{aligned}
$$

Further information can be found in (Boyd 2008), (Kleeman 1996) and (Särkkä 2013).

#### A Haskell Implementation

The hmatrix package now uses type level literals via the DataKinds extension in GHC to enforce compatibility of matrix and vector operations at the type level. See here for more details. Sadly a bug in the hmatrix implementation means we can’t currently use this excellent feature, and we content ourselves with comments describing what the types would be were it possible to use it.

> {-# OPTIONS_GHC -Wall                     #-}
> {-# OPTIONS_GHC -fno-warn-name-shadowing  #-}
> {-# OPTIONS_GHC -fno-warn-type-defaults   #-}
> {-# OPTIONS_GHC -fno-warn-unused-do-bind  #-}
> {-# OPTIONS_GHC -fno-warn-missing-methods #-}
> {-# OPTIONS_GHC -fno-warn-orphans         #-}
>
> {-# LANGUAGE DataKinds           #-}
> {-# LANGUAGE ScopedTypeVariables #-}
> {-# LANGUAGE RankNTypes          #-}
>
> module FunWithKalmanPart1a where
>
> import Numeric.LinearAlgebra.HMatrix hiding ( outer )
> import Data.Random.Source.PureMT
> import Data.Random hiding ( gamma )
> import Control.Monad.State
> import qualified Control.Monad.Writer as W
> import Control.Monad.Loops

Let us make our model almost deterministic but with noisy observations.

> stateVariance :: Double
> stateVariance = 1e-6
>
> obsVariance :: Double
> obsVariance = 1.0

And let us start with a prior normal distribution with a mean position and velocity of 0, with moderate variances and no correlation.

> -- muPrior :: R 2
> muPrior :: Vector Double
> muPrior = vector [0.0, 0.0]
>
> -- sigmaPrior :: Sq 2
> sigmaPrior :: Matrix Double
> sigmaPrior = (2 >< 2) [ 1e1, 0.0
>                       , 0.0, 1e1
>                       ]

We now set up the parameters for our model as outlined in the preceding section.

> deltaT :: Double
> deltaT = 0.001
>
> -- bigA :: Sq 2
> bigA :: Matrix Double
> bigA = (2 >< 2) [ 1, deltaT
>                 , 0, 1
>                 ]
>
> a :: Double
> a = 1.0
>
> -- bigH :: L 1 2
> bigH :: Matrix Double
> bigH = (1 >< 2) [ a, 0 ]
>
> -- bigSigmaY :: Sq 1
> bigSigmaY :: Matrix Double
> bigSigmaY = (1 >< 1) [ obsVariance ]
>
> -- bigSigmaX :: Sq 2
> bigSigmaX :: Matrix Double
> bigSigmaX = (2 >< 2) [ stateVariance, 0.0
>                      , 0.0, stateVariance
>                      ]

The implementation of the Kalman filter using the hmatrix package is straightforward.

> -- outer :: forall m n . (KnownNat m, KnownNat n) =>
> --          R n -> Sq n -> L m n -> Sq m -> Sq n -> Sq n -> [R m] -> [(R n, Sq n)]
> outer :: Vector Double
>       -> Matrix Double
>       -> Matrix Double
>       -> Matrix Double
>       -> Matrix Double
>       -> Matrix Double
>       -> [Vector Double]
>       -> [(Vector Double, Matrix Double)]
> outer muPrior sigmaPrior bigH bigSigmaY bigA bigSigmaX ys = result
>   where
>     result = scanl update (muPrior, sigmaPrior) ys
>
>     -- update :: (R n, Sq n) -> R m -> (R n, Sq n)
>     update (xHatFlat, bigSigmaHatFlat) y =
>         (xHatFlatNew, bigSigmaHatFlatNew)
>       where
>         -- v :: R m
>         v = y - bigH #> xHatFlat
>         -- bigS :: Sq m
>         bigS = bigH <> bigSigmaHatFlat <> (tr bigH) + bigSigmaY
>         -- bigK :: L n m
>         bigK = bigSigmaHatFlat <> (tr bigH) <> (inv bigS)
>         -- xHat :: R n
>         xHat = xHatFlat + bigK #> v
>         -- bigSigmaHat :: Sq n
>         bigSigmaHat = bigSigmaHatFlat - bigK <> bigS <> (tr bigK)
>         -- xHatFlatNew :: R n
>         xHatFlatNew = bigA #> xHat
>         -- bigSigmaHatFlatNew :: Sq n
>         bigSigmaHatFlatNew = bigA <> bigSigmaHat <> (tr bigA) + bigSigmaX

We create some random data using our model parameters.

> singleSample :: (Double, Double) ->
>                 RVarT (W.Writer [(Double, (Double, Double))]) (Double, Double)
> singleSample (xPrev, vPrev) = do
>   psiX <- rvarT (Normal 0.0 stateVariance)
>   let xNew = xPrev + deltaT * vPrev + psiX
>   psiV <- rvarT (Normal 0.0 stateVariance)
>   let vNew = vPrev + psiV
>   upsilon <- rvarT (Normal 0.0 obsVariance)
>   let y = a * xNew + upsilon
>   lift $ W.tell [(y, (xNew, vNew))]
>   return (xNew, vNew)
>
> streamSample :: RVarT (W.Writer [(Double, (Double, Double))]) (Double, Double)
> streamSample = iterateM_ singleSample (1.0, 1.0)
>
> samples :: ((Double, Double), [(Double, (Double, Double))])
> samples = W.runWriter (evalStateT (sample streamSample) (pureMT 2))

Here are the actual values of the randomly generated positions.

> actualXs :: [Double]
> actualXs = map (fst . snd) $ take nObs $ snd samples
>
> test :: [(Vector Double, Matrix Double)]
> test = outer muPrior sigmaPrior bigH bigSigmaY bigA bigSigmaX
>              (map (\x -> vector [x]) $ map fst $ snd samples)

And using the Kalman filter we can estimate the positions.

> estXs :: [Double]
> estXs = map (!!0) $ map toList $ map fst $ take nObs test
>
> nObs :: Int
> nObs = 1000

And we can see that the estimates track the actual positions quite nicely.

Of course we really wanted to estimate the velocity.

> actualVs :: [Double]
> actualVs = map (snd . snd) $ take nObs $ snd samples
>
> estVs :: [Double]
> estVs = map (!!1) $ map toList $ map fst $ take nObs test

#### Bibliography

Boyd, Stephen. 2008. “EE363 Linear Dynamical Systems.” http://stanford.edu/class/ee363.

Kleeman, Lindsay. 1996. “Understanding and Applying Kalman Filtering.” In *Proceedings of the Second Workshop on Perceptive Systems, Curtin University of Technology, Perth Western Australia (25-26 January 1996)*.

Särkkä, Simo. 2013. *Bayesian Filtering and Smoothing*. Vol. 3. Cambridge University Press.

### Side-by-side pretty printing

### Danny Gratzer: Equality is Hard

Equality seems like one of the simplest things to talk about in a theorem prover. After all, the notion of equality is something any small child can intuitively grasp. The sad bit is, while it’s quite easy to hand-wave about, how equality is formalized seems to be a rather complex topic.

In this post I’m going to attempt to cover a few of the main different means of “equality proofs” or identity types and the surrounding concepts. I’m opting for a slightly more informal approach in the hopes of covering more ground.

#### Definitional Equality

This is not really an equality type per se, but it’s worth stating explicitly what definitional equality is, since I must refer to it several times throughout this post.

Definitional equality of two terms A and B is a judgment, notated

```
Γ ⊢ A ≡ B
```

This is *not* a user level proof but rather a primitive, untyped judgment in the meta-theory of the language itself. The typing rules of the language will likely include a conversion rule along the lines of

```
Γ ⊢ a : A    Γ ⊢ A ≡ B
———————————————————————
Γ ⊢ a : B
```

So this isn’t an identity type you would prove something with, but a much more magical notion that two things are completely the same to the typechecker.

Now in most type theories we have a slightly more powerful notion of definitional equality, where x ≡ y holds not only when x is y by definition, but also when they are equal by computation.

So in Coq for example

```
(2 + 2) ≡ 4
```

even though “definitionally” these are entirely separate entities. In most theories, definitionally equal means “equal after inlining all definitions and normalizing”, but not all.

In type theories that distinguish between the two, the judgment that when normalized x is y is called judgmental equality. I won’t distinguish between the two further because most don’t, but it’s worth noting that they can be seen as separate concepts.

#### Propositional Equality

This is the sort of equality that we’ll spend the rest of our time discussing. Propositional equality is a particular type constructor with the type/kind

```
Id : (A : Set) → A → A → Type
```

We should be able to prove a number of definitions like

```
reflexivity  : (A : Set)(x : A) → Id x x
symmetry     : (A : Set)(x y : A) → Id x y → Id y x
transitivity : (A : Set)(x y z : A) → Id x y → Id y z → Id x z
```

This is an entirely separate issue from definitional equality, since propositional equality is a concept that users can hypothesize about.

One very important difference is that we can make proofs like

```
sanity : Id 1 2 → ⊥
```

Since the identity proposition is a type family which can be used just like any other proposition. This is in stark contrast to definitional equality, which a user can’t even normally utter!

#### Intensional

This is arguably the simplest form of equality. Identity types are just normal inductive types with normal induction principles. The most common is the equality given by Martin-Löf:

```
data Id (A : Set) : A → A → Type where
  Refl : (x : A) → Id x x
```

This yields a simple induction principle:

```
id-ind : (P : (x y : A) → Id x y → Type)
       → ((x : A) → P x x (Refl x))
       → (x y : A)(p : Id x y) → P x y p
```

In other words, if we can prove that P holds for the reflexivity case, then P holds for any x and y where Id x y.
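As a small illustration (my own sketch in Agda-style notation, not from the original post), symmetry and transitivity both fall out of this induction principle, or equivalently out of pattern matching on Refl:

```
sym : (A : Set)(x y : A) → Id x y → Id y x
sym A .x .x (Refl x) = Refl x

trans : (A : Set)(x y z : A) → Id x y → Id y z → Id x z
trans A .x .x z (Refl x) q = q
```

Matching on Refl unifies the two indices, which is what lets the right-hand sides typecheck.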

We can actually phrase Id in a number of ways, including

```
data Id (A : Set)(x : A) : A → Set where
  Refl : Id x x
```

This really makes a difference in the resulting induction principle:

```
j : (A : Set)(x : A)(P : (y : A) → Id x y → Set)
  → P x Refl → (y : A)(p : Id x y) → P y p
```

This clearly turned out a bit differently! In particular, now P is only parametrized over *one* value of A, namely y. This particular elimination is traditionally named j.

These alternative phrasings can have serious impacts on proofs that use them. It also has even more subtle effects on things like heterogeneous equality which we’ll discuss later.

The fact that this only relies on simple inductive principles is also a win for typechecking. Equality/substitution fall straight out of how normal inductive types are handled! This also means that we can keep decidability within reason.

The price we pay of course is that this is much more painful to work with. An intensional identity type means the burden of constructing our equality proofs falls on users. Furthermore, we lose the ability to talk about observational equality.

Observational equality is the idea that two “thingies” are indistinguishable by any test.

It’s clear that we can prove that if Id x y, then f x = f y, but it’s less clear how to go the other way and prove something like

```
fun_ext : (A B : Set)(f g : A → B)
        → ((x : A) → Id (f x) (g x)) → Id f g
fun_ext f g p = ??
```

Even though this is clearly desirable: if we know that f and g behave exactly the same way, we’d like our equality to be able to state that. However, we don’t know that f and g are *constructed* the same way, making this impossible to prove.

This can be introduced as an axiom, but to maintain our inductively defined equality type we have to sacrifice one of the following:

- Coherence
- Inductive types
- Extensionality
- Decidability

Some of this has been avoided by regarding equality as an induction over the *class* of types, as in Martin-Löf’s intuitionistic type theory.

In the type theory that we’ve outlined, this isn’t expressible sadly.

#### Definitional + Extensional

Some type theories go a different route to equality, giving us back extensionality in the process. One of those type theories is extensional type theory.

In the simplest formulation, we have intensional type theory with a new rule, reflection

```
Γ ⊢ p : Id x y
——————————————
Γ ⊢ x ≡ y
```

This means that our normal propositional equality can be shoved *back* into the more magical definitional equality. This gives us a lot more power: all the typechecker’s magic and support for definitional equality can be used with our equality types!

It isn’t all puppies an kittens though, arbitrary reflection can also make things undecidable in general. For example Martin Lof’s system is undecidable in with extensional equality.

It’s worth noting that no extensional type theory is implemented this way. Instead they’ve taken a different approach to defining types themselves!

In this model of ETT types are regarded as a partial equivalence relation (PER) over unityped (untyped if you want to get in a flamewar) lambda calculus terms.

These PERs precisely reflect the extensional equality at that “type” and we then check membership by reflexivity. So a : T is synonymous with (a, a) ∈ T. Notice that since we are dealing with a PER, we know that ∀ a. (a, a) ∈ T need not hold. This is reassuring, otherwise we’d be able to prove that every type was inhabited by every term!

The actual NuPRL-and-friends theory is a little more complicated than that. It’s not entirely dependent on PERs and allows a few different ways of introducing types, but I find that PERs are a helpful idea.

#### Propositional Extensionality

This is another flavor of extensional type theory which is really just intensional type theory plus some axioms.

We can arrive at this type theory in a number of ways, the simplest is to add axiom K

```
k : (A : Set)(x : A)(P : (x : A) → Id x x → Type)
  → P x (Refl x) → (p : Id x x) → P x p
```

This says that if we can prove that P x (Refl x) holds for any property P, then it holds for any proof that Id x x. This is subtly different from straightforward induction on Id because here we’re not proving a property parameterized over two different values of A, but only one.

This is horribly inconsistent in something like homotopy type theory but lends a bit of convenience to theories where we don’t give Id as much meaning.

Using k we can prove that for any p q : Id x y, we have Id p q. In Agda notation

```
prop : (A : Set)(x y : A)(p q : x ≡ y) → p ≡ q
prop A x .x refl q = k A P (λ _ → refl) x q
  where
    P : (x : A) → x ≡ x → Set
    P _ p = refl ≡ p
```

This can be further refined to show that we can eliminate all proofs that Id x x as Refl x:

```
rec : (A : Set)(P : A → Set)(x y : A)(p : P x) → x ≡ y → P y
rec A P x .x p refl = p

rec-refl-is-useless : (A : Set)(P : A → Set)(x : A)
                    → (p : P x)(eq : x ≡ x) → p ≡ rec A P x x p eq
rec-refl-is-useless A P x p eq with prop A x x eq refl
rec-refl-is-useless A P x p .refl | refl = refl
```

This form of extensional type theory still leaves a clear distinction between propositional equality and definitional equality by avoiding a reflection rule. However, with rec-refl-is-useless we can do much of the same things: whenever we have something that matches on an equality proof, we can just remove it.

We essentially have normal propositional equality, but with the knowledge that things can only be equal in 1 way, up to propositional equality!

#### Heterogeneous Equality

The next form of equality we’ll talk about is slightly different from the previous ones. Heterogeneous equality is designed to co-exist within some other type theory and supplement the existing form of equality.

Heterogeneous equality is most commonly defined with John Major equality

```
data JMeq : (A B : Set) → A → B → Set where
  JMrefl : (A : Set)(x : A) → JMeq A A x x
```

This is named after a British politician since, while it promises that any two terms can be equal regardless of their class (type), only two things from the same class can ever be equal.

Now remember how earlier I’d mentioned that how we phrase these inductive equality types can have a huge impact? Well, here we can see why, because the above definition doesn’t typecheck in Agda!

That’s because Agda is predicative, meaning that a type constructor can’t quantify over the same universe it occupies. We can however, cleverly phrase JMeq so to avoid this

```
data JMeq (A : Set) : (B : Set) → A → B → Set where
  JMrefl : (a : A) → JMeq A A a a
```

Now the constructor avoids quantifying over Set and therefore fits inside the same universe as A and B.

JMeq is usually paired with an axiom to reflect heterogeneous equality back into our normal equality proof.

```
reflect : (A : Set)(x y : A) → JMeq x y → Id x y
```

This reflection doesn’t look necessary, but it arises for reasons similar to those that dictate that k is unprovable.

It looks like this heterogeneous equality is a lot more trouble than it’s worth at first. It really shines when we’re working with terms that we *know* must be the same, but require pattern matching or other jiggering to prove.

If you’re looking for a concrete example, look no further than Observational Equality Now!. This paper gives allows observational equality to be jammed into a principally intensional system!

#### Wrap Up

So this has been a whirlwind tour through a lot of different type theories. I partially wrote this to gather some of this information in one (free) place. If there’s something missing here that you’d like to see added, feel free to comment or email me.

*Thanks to Jon Sterling for proof reading and many subtle corrections :)*

### Old code broken by new Typeable class

### Creating a "Group" Type Class

I'm new-ish to Haskell, a mathematician by training, and going through LYAH.

When I got to the section on type classes, my first though was to make a "group" type class that implements all the standard group axioms. First, I started with a monoid:

```haskell
class (Eq a) => Mon a where
  unit :: a
  mult :: a -> a -> a
  mult x unit = x
  mult unit x = x
```

This works fine. I can create instances of Mon a where a is a finite set and define group operations and everything works. The trouble is when I try to add the inverse operation. I *want* to do something like

but this gives a parse error on the last lines. As far as I can tell, GHC wants an explicit definition of inv, but inv is defined implicitly for groups. Sure, I could implement inv for every instance of Grp and hope it satisfies the axioms, but I'd much rather force inv to have properties in the class declaration so my Grp instances are *provably* groups.

How can I do this?

EDIT: Thanks to the commenters for pointing out my mistake (which, admittedly, I should have seen first thing) in not recognizing that the scope of variables in pattern matching is local.
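For reference, the usual resolution (my sketch, not from the thread) is to declare inv as a plain method and state the laws as comments, since Haskell class declarations can only supply default *definitions*, not equational axioms; in a line like `mult x unit = x`, `unit` is just a fresh pattern variable, not the class constant:

```haskell
class Eq a => Mon a where
  unit :: a
  mult :: a -> a -> a

-- The group laws cannot be enforced by the compiler; they are stated
-- as documentation and must be verified for each instance:
--   mult x (inv x) == unit
--   mult (inv x) x == unit
class Mon a => Grp a where
  inv :: a -> a

-- Example instance: the integers under addition.
instance Mon Integer where
  unit = 0
  mult = (+)

instance Grp Integer where
  inv = negate
```

Property-testing libraries such as QuickCheck are the conventional way to check that an instance actually satisfies the stated laws.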

submitted by generating_loop

### Dealing with encodings

### Ask Haskellers: How to Think Functionally?

Hello, everybody.

I've been programming for two decades, and although I've been using functional programming, it has been as a support for imperative code. I still don't know how to design a full functional solution from zero. So I was thinking that you could explain your mental process.

Take for example this problem:

We want to pump water out of a mine sump. We have two water level sensors (D, E). When D goes on, we pump out water until E goes off (this is to realize a form of hysteresis and avoid “bouncing” around a given level).

There are also a few gas sensors for carbon monoxide, methane and airflow levels (A,B,C). If any of those becomes critical, an alarm must be raised.

Finally, to prevent explosions, the pump must not be operated when methane is above a certain level.

Carlo Pescio solves it using OOP with a design adaptable to changes (replace the two boolean water sensors for an integer/real level sensor, using two or more pumps alternatively, considering the possibility of one of these pumps could fail).

This is my mental process for solving it in Haskell: sensors have type *IO Bool*, the pump *Bool -> IO ()*, and I need a loop (sorry, an infinite recursive function) inside main that checks the sensors and then calculates and sets the state of the pump and alarm. Oh, but the hysteresis implies a certain form of state!

Also, since we're thinking about an adaptable solution, it would be nice to make it easy to change. My only mental tool in Haskell is to try to express the problem in the form of an "algebra", something I can combine. But how?

When you see a problem like this, what do you do to make its functional nature clearer?
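For what it's worth, one way to factor it (a sketch of my own, with invented names) is to keep the decision logic, hysteresis state included, as a pure function, and confine IO to a thin driver loop:

```haskell
-- Pure model of the controller: all decision logic lives here,
-- so it can be tested without any IO.
data Sensors = Sensors
  { highWater :: Bool  -- D: water has reached the high mark
  , lowWater  :: Bool  -- E: water is still above the low mark
  , methane   :: Bool  -- methane is at a critical level
  , otherGas  :: Bool  -- any other gas/airflow reading is critical
  }

data Outputs = Outputs { pumpOn :: Bool, alarm :: Bool }
  deriving (Eq, Show)

-- The hysteresis is a single Bool of state: was the pump running?
step :: Bool -> Sensors -> (Bool, Outputs)
step wasPumping s = (pumping, Outputs pumping raise)
  where
    -- Start pumping at D, keep going until E goes off;
    -- never operate the pump when methane is critical.
    wants   = if wasPumping then lowWater s else highWater s
    pumping = wants && not (methane s)
    raise   = methane s || otherGas s

-- The impure driver is a thin loop around the pure step function;
-- readSensors and setOutputs stand in for the hardware interface.
controller :: IO Sensors -> (Outputs -> IO ()) -> IO ()
controller readSensors setOutputs = go False
  where
    go st = do
      s <- readSensors
      let (st', out) = step st s
      setOutputs out
      go st'
```

Replacing the boolean water sensors with a numeric level, or adding a second pump, then only changes the Sensors type and the pure step function, which is where the "algebra" lives.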

EDIT: Thanks to you all for your great responses!

submitted by mvaliente2001

### wren gayle romano: Imagine that this is not an academic debate

A followup to my previous comment [reddit version]:

The examples are of limited utility. The problem is not a few bad apples or a few bad words; were that the case it would be easier to address. The problem is a subtle one: it's in the tone and tenor of conversation, it's in the things *not* talked about, in the implicitization of assumptions, and in a decentering of the sorts of communities of engagement that Haskell was founded on.

Back in 2003 and 2005, communities like Haskell Cafe were communities of praxis. That is, we gathered because we do Haskell, and our gathering was a way to meet others who do Haskell. Our discussions were centered on this praxis and on how we could improve our own doing of Haskell. Naturally, as a place of learning it was also a place of teaching— but teaching was never the goal, teaching was a necessary means to the end of improving our own understandings of being lazy with class. The assumptions implicit in the community at the time were that Haskell was a path to explore, and an obscure one at that. It is not The Way™ by any stretch of the imagination. And being a small community it was easy to know every person in it, to converse as you would with a friend not as you would online.

Over time the tone and nature of the Cafe changed considerably. It's hard to explain the shift without overly praising the way things were before or overly condemning the shift. Whereas the Cafe used to be a place for people to encounter one another on their solitary journeys, in time it became less of a resting stop (or dare I say: cafe) and more of a meeting hall. No longer a place to meet those who do Haskell, but rather a place for a certain communal doing of Haskell. I single the Cafe out only because I have the longest history with that community, but the same overall shift has occurred everywhere I've seen. Whereas previously it was a community of praxis, now it is more a community of educationalism. In the public spaces there is more teaching of Haskell than doing of it. There's nothing wrong with teaching, but when teaching becomes the thing-being-done rather than a means to an end, it twists the message. It's no longer people asking for help and receiving personal guidance, it's offering up half-baked monad tutorials to the faceless masses. And from tutorialization it's a very short path to proselytizing and evangelizing. And this weaponization of knowledge always serves to marginalize and exclude very specific voices from the community.

One class of voices being excluded is women. To see an example of this, consider the response to Doaitse Swierstra's comment at the 2012 Haskell Symposium. Stop thinking about the comment. The comment is not the point. The point is, once the problematic nature of the comment was raised, how did the community respond? If you want a specific example, this is it. The example is not in what Swierstra said, the example is in how the Haskell community responded to being called out. If you don't recall how this went down, here's the reddit version; though it's worth pointing out that there were many other conversations outside of reddit. A *very* small number of people acquitted themselves well. A handful of people knew how to speak the party line but flubbed it by mansplaining, engaging in flamewars, or allowing the conversation to be derailed. And a great many people were showing their asses all over the place. Now I want you to go through and read every single comment there, including the ones below threshold. I want you to read those comments and imagine that this is not an academic debate. Imagine that this is *your* life. Imagine that *you* are the unnamed party under discussion. That *your* feelings are the ones everyone thinks they know so much about. That you personally are the one each commenter is accusing of overreacting. Imagine that you are a woman, that you are walking down the street in the middle of the night in an unfamiliar town after a long day of talks. It was raining earlier so the streets are wet. You're probably wearing flats, but your feet still hurt. You're tired. Perhaps you had a drink over dinner with other conference-goers, or perhaps not. Reading each comment, before going on to the next one, stop and ask yourself: would *you* feel safe if this commenter decided to follow you home on that darkened street? Do you feel like this person can comprehend that you are a human being on that wet street? 
Do you trust this person's intentions in being around you late at night? And ask yourself, when some other commenter on that thread follows you home at night and rapes you in the hotel, do you feel safe going to the comment's author to tell them what happened? Because none of this is academic. As a woman you go to conferences and this is how you are treated. And the metric of whether you can be around someone is not whether they seem interesting or smart or anything else, the metric is: do you feel safe? If you can understand anything about what this is like, then reading that thread will make you extremely uncomfortable. The problem is not that some person makes a comment. The problem is that masculinized communities are not safe for women. The problem is that certain modes of interaction are actively hostile to certain participants. The problem is finding yourself in an uncomfortable situation and knowing that noone has your back. Knowing that anyone who agrees with you will remain silent because they do not think you are worth the time and energy to bother supporting. Because that's what silence says. Silence says you are not worth it. Silence says you are not one of us. Silence says I do not think you are entirely human. And for all the upvotes and all the conversation my previous comment has sparked on twitter, irc, and elsewhere, I sure don't hear anyone *here* speaking up to say they got my back.

This is not a problem about women in Haskell. Women are just the go-to example, the example cis het middle-class educated able white men are used to engaging. Countless voices are excluded by the current atmosphere in Haskell communities. I know they are excluded because I personally watched them walk out the door after incidents like the one above, and I've been watching them leave for a decade. I'm in various communities for queer programmers, and many of the folks there use Haskell but none of them will come within ten feet of "official" Haskell communities. That aversion is even stronger in the transgender/genderqueer community. I personally know at least a dozen trans Haskellers, but I'm the only one who participates in the "official" Haskell community. Last fall I got hatemail from Haskellers for bringing up the violence against trans women of color on my blog, since that blog is syndicated to Planet Haskell. Again, when I brought this up, people would express their dismay in private conversations, but noone would say a damn thing in public nor even acknowledge that I had spoken. Ours has never been a great community for people of color, and when I talk to POC about Haskell I do not even consider directing them to the "official" channels. When Ken Shan gave the program chair report at the Haskell symposium last year, there was a similarly unwholesome response as with Swierstra's comment the year before. A number of people have shared their experiences in response to Ken's call, but overwhelmingly people feel like their stories of being marginalized and excluded "don't count" or "aren't enough to mention". Stop. Think about that. A lot of people are coming forward to talk about how they've been made to feel uncomfortable, and *while telling those stories* they feel the need to qualify. 
While actively explaining their own experiences of racism, sexism, heterosexism, cissexism, ablism, sanism, etc, they feel the simultaneous need to point out that these experiences are not out of the ordinary. Experiencing bigotry is so within the ordinary that people feel like they're being a bother to even mention it. This is what I'm talking about. This is what I mean when I say that there is a growing miasma in our community. This is how racism and sexism and ablism work. It's not smacking someone on the ass or using the N-word. It's a pervasive and insidious tone in the conversation, a thousand and one not-so-subtle clues about who gets to be included and who doesn't. And yes the sexual assaults and slurs and all that factor in, but that's the marzipan on top of the cake. The cake is made out of assuming someone who dresses "like a rapper" can't be a hacker. The cake is made out of assuming that "mother" and "professional" are exclusive categories. The cake is made out of well-actuallys and feigned surprise. And it works this way because this is how it avoids being called into question. So when you ask for specific examples you're missing the point. I can give examples, but doing so only contributes to the errant belief that bigotry happens in moments. Bigotry is not a moment. Bigotry is a sustained state of being that permeates one's actions and how one forms and engages with community. So knowing about that hatemail, or knowing about when I had to call someone out for sharing titty pictures on Haskell Cafe, or knowing about the formation of #nothaskell, or knowing about how tepid the response to Tim's article or Ken's report were, knowing about none of these specifics helps to engage with the actual problem.

comments