News aggregator

FP Complete: Stackage Badges

Planet Haskell - Sun, 10/18/2015 - 11:00pm

This is a guest blog from Konstantin Zudov, who has been making a number of wonderful enhancements to the Stackage Server website.

Snapshot badges for packages on Stackage

Stackage Server just got a new feature: snapshot badges. Take a look:

  • stack/lts-2:
  • stack/lts-3:
  • stack/lts (the latest):
  • stack/nightly:

Package authors can add the badges to their README files to tell users in which snapshots the package is present and provide a link to the package page.

Here is an example of how that can be done:

# PackageName

[![packagename on Stackage LTS 2](](
[![packagename on Stackage LTS 3](](
[![packagename on Stackage Nightly](](

In the case of stack it would look like:


Categories: Offsite Blogs

Christopher Allen: Either and (,) in Haskell are not arbitrary

Planet Haskell - Sun, 10/18/2015 - 6:00pm

Alternate title: Unnecessary particularity considered harmful

Since I’d rather explain this in O(1) rather than O(twitter) time, this is a brief rundown of why the way type constructors and constructor classes work in Haskell is not arbitrary. The post is not a tutorial on higher-kinded types, constructor classes, or functor. Don’t know these things? I write stuff so you can learn ’em.

First, the data types we’re dealing with:

data Either a b = Left a | Right b

-- sorta fake
data (,) a b = (a, b)

We’ll use Functor to make the point, and Functor looks like this:

class Functor f where
  fmap :: (a -> b) -> f a -> f b

Some of the post-FTP drama has included people asserting that the way the Foldable instances for Either and (,) work is arbitrary. Not so. They work on the same principle as the Functor instances:

Prelude> fmap (+1) (Right 1)
Right 2
Prelude> fmap (+1) (Left "blah")
Left "blah"
Prelude> fmap (+1) (0, 0)
(0,1)

The first thing to recognize is that Left and Right in Either mean nothing to your program and similarly the first and second positions in (,) mean nothing in and of themselves. Because type constructors in Haskell work the same way as data constructors and functions in general do, the way their instances work is the only way they could work. Either and (,) will always have one type argument that gets mapped and one that does not. It doesn’t really matter which data constructor that is; we can only benefit by letting the consistent semantics of Haskell pick the type argument for us.

The only useful purpose a Functor for Either can ever have is to have one type which is transformed by the lifted functions and one which is not. If you want to be able to pick arbitrary targets, then you want lenses rather than a typeclass. If you want to be able to transform both, then you want Bifunctor.

Note that with bimap in the Bifunctor class you have to provide two functions you’re mapping rather than one because the types a and b could vary and be different types. Even if they are the same type, you can’t write the Functor instance as if they were because the Either and (,) are defined with two distinct type arguments.
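To make the two-function requirement concrete, here is bimap from Data.Bifunctor (in base since GHC 7.10) applied to both types; the example values are mine:

```haskell
import Data.Bifunctor (bimap)

main :: IO ()
main = do
  -- bimap takes two functions because the two type arguments can differ
  print (bimap length (+ 1) (Left "blah" :: Either String Int)) -- Left 4
  print (bimap length (+ 1) (Right 1 :: Either String Int))     -- Right 2
  -- (,) is the same story: each field gets its own function
  print (bimap (+ 1) show ((1, 2) :: (Int, Int)))               -- (2,"2")
```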

If you want a “tuple” of values that were all of the same type…well, go ahead. You can write it yourself:

-- We use this in the book to demonstrate how
-- type constructors and constructor classes
-- work in Haskell, as it happens.
data Pair a = Pair a a deriving (Eq, Show)

instance Functor Pair where
  fmap f (Pair a a') = Pair (f a) (f a')

Then to see how the Functor for this behaves:

Prelude> fmap (+1) (Pair 1 1)
Pair 2 2
Prelude> fmap show (Pair 1 1)
Pair "1" "1"
Prelude> fmap show (Pair 1 9001)
Pair "1" "9001"

A Functor for (,) can only ever map over one of the fields of the type. It might as well be the one that occurs naturally from the order of arguments to the type constructor (read: functions). length written in terms of Foldable has nothing to do with the contents of the Foldable structure; it has to do with the structure itself. You wouldn’t expect length of a list of lists to measure the length of one or more of the sublists:

Prelude> length [[], []]
2

You would expect it to measure how many cons cells were in the outermost list. Unless you lifted it. If you lifted it, then you could get the measure of all the list values contained within! (this is why we have fmap)

Prelude> fmap length [[], ["lol"]]
[0,1]
Prelude> (fmap . fmap) length [[], ["lol"]]
[[],[3]]
-- Doesn't change for Maybe
Prelude> length (Just "blah")
1
Prelude> fmap length (Just "blah")
Just 4
Prelude> fmap length (Nothing :: Maybe String)
Nothing

Similarly, no matter what food you move around on your plate, length for (,) is never going to do anything but return 1 because there’s always one value of the type you’re folding over with (,). Even if you add type lambdas.

Prelude> length ("blah", "")
1
-- unless we lift it over the tuple structure.
Prelude> fmap length ("blah", "")
("blah",0)
Prelude> fmap length ("blah", "Papuchon")
("blah",8)

Want to map over the left-hand side? Use Bifunctor:

Prelude> import Data.Bifunctor
Prelude> :t first
first :: Bifunctor p => (a -> b) -> p a c -> p b c
Prelude> :t second
second :: Bifunctor p => (b -> c) -> p a b -> p a c
Prelude> first length ("blah", "Papuchon")
(4,"Papuchon")
Prelude> second length ("blah", "Papuchon")
("blah",8)

Or lenses! Whatever you like!

The Functor and Foldable for Either and (,) can only ever do one useful thing. We may as well make it so we know exactly which type is being mapped over by looking at the type. What Functor and Foldable do, how they work, is essentially what the combination of higher kinded types and typeclasses into constructor classes is for. This is their purpose for existing. If you want to address more structure than what Functor/Foldable let you talk about, then use Bifunctor or Bifoldable. If you want to choose arbitrary targets, then use lenses and prisms. There’s no reason to break the consistent and predictable semantics of the language because the (necessary by construction!) Functor instance for Either or (,) appears arbitrary to you. In fact, they’re the complete opposite of arbitrary or contingent because their instances follow directly from how the datatypes are defined. This uniqueness and necessity is why we can have the DeriveFunctor and DeriveFoldable extensions which will generate Functor and Foldable instances knowing only the definition of a datatype.


It doesn’t matter if the definition of Either was:

data Either a b = Left b | Right a

It matters that a default exists and is chosen for the Functor because that’s the only reason to make something Left or Right. Contrary to developer intuitions, Right doesn’t mean “success”. The data constructors of Either are defined by what the Functor/Applicative/etc. instances do.

I’ve used Left to indicate “success” in situations where I want to stop fmap’ing a computation that might fail. It is the picking-of-a-winner that Haskell’s semantics induce that is valuable and not arbitrary. What is arbitrary is what we call left and right and the syntactic position of their type arguments in the type constructor. There’s much less utility in an Either that doesn’t have a Functor with a default target.

Further, they aren’t arbitrary. Following from the definition of arbitrary that Google provided:

based on random choice or personal whim, rather than any reason or system.

We can break it down as follows:

  1. Is there a reason the Either Functor works the way it does? Yes, it makes the datatype more useful in that it gives us a biased-choice Functor which is frequently useful regardless of whether the biased-target represents success or not. The way Functor behaves is useful insofar as its only reason for existing is to pick one of the two exclusive choices. There is no reason for programmers to favor the target being Left or Right. Those words mean nothing and word/name-fetishism kills software reuse and modularity.

  2. Is there a systematic cause for why the Either Functor works the way it does? Yes, cf. Jones’ work on Gofer dating to 1993/1994. The way the Functor behaves is necessary and follows from how the language works in a natural way. You can make a learner predict what the Either Functor does if you teach them how HKTs and constructor classes work. I’ve done this with learners before. This isn’t surprising if you know Haskell.


It does not matter whether one of your types is going to be in the Left or the Right data constructor; all that matters is what you want your Functor-target to be. Not having a universal winner for Left or Right being the Functor target would be bizarre and counter-productive. You cannot and will not ever have a single Functor that lets you pick either/or of Left or Right because a != b.

If you want to map over your “error” value, I have news for you! Right just became your error value. The names Left and Right mean nothing. The code is what it does. If you want to be able to arbitrarily pick Left, Right or both as a target, what you want is Bifunctor or a prism. It is madness to give programmers an avenue to introduce useless arbitrariness to their code. Preventing the proliferation of meaningless difference is an excellent way for people doing PL to improve a language.

We’ve covered both ways in which the Functor instance is not arbitrary, due to being both necessary and useful. We can also see that the way the Either Functor works is neither random nor based on whim.

I know this site is a bit of a disaster zone, but if you like my writing or think you could learn something useful from me, please take a look at the book I've been writing with my coauthor Julie. There's a free sample available too!

Posted on October 19, 2015

Categories: Offsite Blogs

MonadFix wiki confusion

haskell-cafe - Sun, 10/18/2015 - 3:40pm
Hello, the MonadFix wiki at has a statement that I feel is a bit misleading. In section "2.2 Lazy algorithm interleaved with effects", it claims that making the BTree data structure strict doesn't cause endless recursion. Well, that's true, but that's just because rep_x_sum returns a tuple containing the BTree and the summed values of the current subtree, and the tuple is lazily constructed - postponing the construction of the tree value. So highlighting the fact that the function still works when the BTree structure is made strict is kind of a red herring. Maybe the confusion could be avoided by removing the part about making BTree strict, or adding a note about the tuple still ensuring lazy construction? -- Samuel
Categories: Offsite Discussion

Gabriel Gonzalez: Explicit is better than implicit

Planet Haskell - Sun, 10/18/2015 - 1:57pm

Many of the limitations associated with Haskell type classes can be solved very cleanly with lenses. This lens-driven programming is more explicit but significantly more general and (in my opinion) easier to use.

All of these examples will work with any lens-like library, but I will begin with the lens-simple library to provide simpler types with better type inference and better type errors and then later transition to the lens library which has a larger set of utilities.

Case study #1 - fmap bias

Let's begin with a simple example - the Functor instance for Either:

fmap (+ 1) (Right 2) = Right 3

fmap (+ 1) (Left "Foo") = Left "Foo"

Some people object to this instance because it's biased to Right values. The only way we can use fmap to transform Left values is to wrap Either in a newtype.
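That newtype takes only a few lines to write; FlipEither below is a name invented for this sketch, not a library type:

```haskell
-- FlipEither is a made-up name for this sketch: it flips which type
-- argument comes last, so its Functor targets the Left value instead.
newtype FlipEither b a = FlipEither (Either a b)
  deriving (Eq, Show)

instance Functor (FlipEither b) where
  fmap f (FlipEither (Left a))  = FlipEither (Left (f a))
  fmap _ (FlipEither (Right b)) = FlipEither (Right b)

main :: IO ()
main = do
  print (fmap (+ 1) (FlipEither (Left 2)      :: FlipEither String Int)) -- Left is mapped now
  print (fmap (+ 1) (FlipEither (Right "Foo") :: FlipEither String Int)) -- Right is untouched
```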

These same people would probably like the lens-simple library which provides an over function that generalizes fmap. Instead of using the type to infer what to transform we can explicitly specify what we wish to transform by supplying _Left or _Right:

$ stack install lens-simple --resolver=lts-3.9
$ stack ghci --resolver=lts-3.9
>>> import Lens.Simple
>>> over _Right (+ 1) (Right 2)
Right 3
>>> over _Right (+ 1) (Left "Foo")
Left "Foo"
>>> over _Left (++ "!") (Right 2)
Right 2
>>> over _Left (++ "!") (Left "Foo")
Left "Foo!"

The inferred types are exactly what we would expect:

>>> :type over _Right
over _Right :: (b -> b') -> Either a b -> Either a b'
>>> :type over _Left
over _Left :: (b -> b') -> Either b b1 -> Either b' b1

Same thing for tuples. fmap only lets us transform the second value of a tuple, but over lets us specify which one we want to transform:

>>> over _1 (+ 1) (2, "Foo")
(3,"Foo")
>>> over _2 (++ "!") (2, "Foo")
(2,"Foo!")

We can even transform both of the values in the tuple if they share the same type:

>>> over both (+ 1) (3, 4)
(4,5)

Again, the inferred types are exactly what we expect:

>>> :type over _2
over _2 :: (b -> b') -> (a, b) -> (a, b')
>>> :type over _1
over _1 :: (b -> b') -> (b, b1) -> (b', b1)
>>> :type over both
over both :: (b -> b') -> (b, b) -> (b', b')

Case study #2 - length confusion

Many people have complained about the tuple instance for Foldable, which gives weird behavior like this in ghc-7.10 or later:

>>> length (3, 4)
1

We could eliminate all confusion by specifying what we intend to count at the term level instead of the type level:

>>> lengthOf _2 (3, 4)
1
>>> lengthOf both (3, 4)
2

This works for Either, too:

>>> lengthOf _Right (Right 1)
1
>>> lengthOf _Right (Left "Foo")
0
>>> lengthOf _Left (Right 1)
0
>>> lengthOf _Left (Left "Foo")
1

... and this trick is not limited to length. We can improve any Foldable function by taking a lens instead of a type class constraint:

>>> sumOf both (3, 4)
7
>>> mapMOf_ both print (3, 4)
3
4

Case study #3 - Monomorphic containers

fmap doesn't work on ByteString because ByteString is not a type constructor and has no type parameter that we can map over. Some people use the mono-foldable or mono-traversable packages to solve this problem, but I prefer to use lenses. These examples will require the lens library which has more batteries included.

For example, if I want to transform each character of a Text value I can use the text optic:

$ stack install lens --resolver=lts-3.9 # For `text` optics
$ stack ghci --resolver=lts-3.9
>>> import Control.Lens
>>> import Data.Text.Lens
>>> import qualified Data.Text as Text
>>> let example = Text.pack "Hello, world!"
>>> over text succ example
"Ifmmp-!xpsme\""

I can use the same optic to loop over each character:

>>> mapMOf_ text print example
'H'
'e'
'l'
'l'
'o'
','
' '
'w'
'o'
'r'
'l'
'd'
'!'

There are optics for ByteStrings, too:

>>> import Data.ByteString.Lens
>>> import qualified Data.ByteString as ByteString
>>> let example2 = ByteString.pack [0, 1, 2]
>>> mapMOf_ bytes print example2
0
1
2

The lens approach has one killer feature over mono-foldable and mono-traversable: you can be explicit about what exactly you want to map over. For example, suppose that I want to loop over the bits of a ByteString instead of the bytes. Then I can just provide an optic that points to the bits and everything "just works":

>>> import Data.Bits.Lens
>>> mapMOf_ (bytes . bits) print example2

The mono-traversable or mono-foldable packages do not let you specify what you want to loop over. Instead, the MonoFoldable and MonoTraversable type classes guess what you want the elements to be, and if they guess wrong then you are out of luck.


Here are some more examples to illustrate how powerful and general the lens approach is over the type class approach.

>>> lengthOf (bytes . bits) example2
24
>>> sumOf (both . _1) ((2, 3), (4, 5))
6
>>> mapMOf_ (_Just . _Left) print (Just (Left 4))
4
>>> over (traverse . _Right) (+ 1) [Left "Foo", Right 4, Right 5]
[Left "Foo",Right 5,Right 6]

Once you get used to this style of programming you begin to prefer specifying things at the term level instead of relying on type inference or wrangling with newtypes.
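If over still feels magical, it can be rebuilt from scratch in a few lines of base-only Haskell; the primed names below are mine, not exports of lens or lens-simple:

```haskell
{-# LANGUAGE RankNTypes #-}

import Data.Functor.Identity (Identity (..))

-- A from-scratch sketch of the van Laarhoven lenses that power over.
type Lens s t a b = forall f. Functor f => (a -> f b) -> s -> f t

-- Focus the first field of a pair.
_1' :: Lens (a, c) (b, c) a b
_1' f (a, c) = fmap (\b -> (b, c)) (f a)

-- Focus the second field of a pair.
_2' :: Lens (c, a) (c, b) a b
_2' f (c, a) = fmap (\b -> (c, b)) (f a)

-- over: run the lens with Identity, so "mapping" is all that happens.
over' :: Lens s t a b -> (a -> b) -> s -> t
over' l f = runIdentity . l (Identity . f)

main :: IO ()
main = do
  print (over' _1' (+ 1) (2 :: Int, "Foo"))    -- (3,"Foo")
  print (over' _2' (++ "!") (2 :: Int, "Foo")) -- (2,"Foo!")
```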

Categories: Offsite Blogs

Dimitri Sabadie: luminance-0.5.1 and wavefront-

Planet Haskell - Sun, 10/18/2015 - 9:50am

It’s been a few days since I last talked about luminance. I’ve been working on it a lot lately, along with wavefront. To keep you up to date, I’ll describe the changes I made in those packages and then talk about the future directions of both.

I’ll also give a snippet you can use to load geometries with wavefront and adapt them for luminance so that you can actually render them! A package might come out of that kind of snippet – luminance-wavefront? We’ll see!


wavefront

This package has received several changes, among them two major increments and several fixes. In the first place, I removed some code from the interface that was useless and used only for test purposes. I removed the Ctxt object – it’s a type used by the internal lexer anyway, so you don’t have to know about it – and exposed a type called WavefrontOBJ. That type represents the parsed Wavefront data and is the main type used in the library’s interface.

Then, I also removed most of the modules, because they’re re-exported by the main module – Codec.Wavefront. I think the documentation is pretty straightforward, but if you think something is missing, please shoot me a PM or an email! ;)

On the bugs level, I fixed a few things. Among them, there was a nasty bug in the implementation of an internal recursive parser that caused the last wavefront statement to be silently ignored.

I’d also like to point out that I performed some benchmarks – I will provide the data later on with a heap profile and graphs – and I’m pretty astonished with the results! The parser/lexer is insanely fast! It only takes a few milliseconds (between 7ms and 8ms) to load 50k faces (a 2MB .obj file). The code is not yet optimized, so I guess the package could go even faster!

You can find the changelog here.


luminance

I’ve done a lot of work on luminance lately. First, the V type – used to represent vertex components – is no longer defined by luminance but by linear. You can find the type here. You’ll need the DataKinds extension to write types like V 3 Float.

That change is due to the fact that linear is a mature library with a lot of interesting functions and types everyone might use when doing graphics. Its V type has several interesting instances – Storable, Ord, etc. – that are required in luminance. Because it’s not simple to build such a V, luminance provides you with three functions to build the 2D, 3D and 4D versions – vec2, vec3 and vec4. Currently, that type is the only one you can use to build vertex components. I might add V2, V3 and V4 as well later.

An interesting change: the Uniform typeclass has a lot of new instances! Basically, all vector types from linear, their array version and the 4x4 floating matrix – M44 Float. You can find the list of all instances here.

A new function was added to the Graphics.Luminance.Geometry module, called nubDirect. That function runs in linearithmic time (O(n log n)) and is used to turn a direct representation of vertices into the pair of data used to represent indexed vertices: the new list of vertices stores only unique vertices, and the list of integral values stores the indices. You can then use both pieces of information to build indexed geometries – see createGeometry for further details.
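The general idea behind such a function can be sketched with Data.Map from containers; indexize below is my own hypothetical re-implementation of the concept, not luminance’s actual code:

```haskell
import qualified Data.Map.Strict as Map

-- Deduplicate a list of vertices and produce indices into the
-- deduplicated list, in O(n log n) via a Map lookup per element.
indexize :: Ord v => [v] -> ([v], [Int])
indexize = go Map.empty 0 [] []
  where
    go _ _ verts ixs [] = (reverse verts, reverse ixs)
    go seen n verts ixs (v:vs) =
      case Map.lookup v seen of
        -- already seen: reuse its index
        Just i  -> go seen n verts (i : ixs) vs
        -- new vertex: record it under the next fresh index
        Nothing -> go (Map.insert v n seen) (n + 1) (v : verts) (n : ixs) vs

main :: IO ()
main = print (indexize "abcab") -- ("abc",[0,1,2,0,1])
```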

The interface to transfer texels to textures has changed. It doesn’t depend on Foldable anymore but on Data.Vector.Storable.Vector. That change is due to the fact that the Foldable solution uses toList under the hood, which causes bad performance for the simple reason that we send the list to the GPU through the FFI. It’s more efficient to use a Storable version. Furthermore, the best-known package for texture loading – JuicyPixels – already uses that type of Vector. So you just have to enjoy the new performance boost! ;)

About bugs… I fixed a few. First, the implementation of the Storable instance for (:.) had an error in sizeOf. The implementation must be lazy in its argument, and the old one was not, causing crashes when that argument was undefined. The strictness was removed and now everything works just fine!

Two bugs that were also fixed: the indexed render and the render of geometries with several vertex components. Those bugs were easy to fix and now you won’t experience those issues anymore.

Interfacing luminance with wavefront to render geometries from artists!

I thought it would be a hard task but I’m pretty proud of how easy it was to interface both the packages! The idea was to provide a function that would turn a WavefrontOBJ into a direct representation of luminance vertices. Here’s the function that implements such a conversion:

type Vtx = V 3 Float :. V 3 Float -- location :. normal

objToDirect :: WavefrontOBJ -> Maybe [Vtx]
objToDirect obj = traverse faceToVtx (toList faces)
  where
    locations = objLocations obj
    normals = objNormals obj
    faces = objFaces obj
    faceToVtx face = do
      let face' = elValue face
      vni <- faceNorIndex face'
      v <- locations !? (faceLocIndex face' - 1)
      vn <- normals !? (vni - 1)
      let loc = vec3 (locX v) (locY v) (locZ v)
          nor = vec3 (norX vn) (norY vn) (norZ vn)
      pure (loc :. nor)

As you can see, that function is pure and will eventually turn a WavefrontOBJ into a list of Vtx. Vtx is our own vertex type, encoding the location and the normal of the vertex. You can add texture coordinates if you want to. The function fails if a face’s index has no normal associated with it or if an index is out of bounds.

And… and that’s all! You can already have your Geometry with that – direct one:

x <- fmap (fmap objToDirect) (fromFile "./ubercool-mesh.obj")
case x of
  Right (Just vertices) -> createGeometry vertices Nothing Triangle
  _ -> throwError {- whatever you need as error there -}

You want an indexed version? Well, you already have everything to do that:

x <- fmap (fmap (fmap nubDirect . objToDirect)) (fromFile "./ubercool-mesh.obj")
case x of
  Right (Just (vertices, indices)) -> createGeometry vertices (Just indices) Triangle
  _ -> throwError {- whatever you need as error there -}

Even though nubDirect runs with a pretty good complexity, it still takes time. Don’t be surprised if the “loading” time is longer, then.

I might package those snippets and helpers around them into a luminance-wavefront package, but that’s not trivial as the vertex format should be free.

Future directions and thank you

I received a lot of warm feedback from people about what I do in the Haskell community, and I’m just amazed. I’d like to thank each and everyone of you for your support – I even got support from non-Haskellers!

What’s next, then… Well, I need to add a few more texture types to luminance – texture arrays are not supported yet, and the framebuffers have to be altered to support all kinds of textures. I will also try to write a cheddar interpreter directly into luminance, to dump the String type for shader stages and replace it with whatever cheddar ends up being. For the long term, I’ll add UBO and SSBO to luminance, and… compatibility with older OpenGL versions.

Once again, thank you, and keep the vibe!

Categories: Offsite Blogs

Gabriel Gonzalez: Polymorphism for dummies

Planet Haskell - Sun, 10/18/2015 - 9:31am

This tutorial explains how polymorphism is implemented under the hood in Haskell using the least technical terms possible.

The simplest example of a polymorphic function is the identity function:

id :: a -> a
id x = x

The identity function works on any type of value. For example, I can apply id to an Int or to a String:

$ ghci
Prelude> id 4
4
Prelude> id "Test"
"Test"

Under the hood, the id function actually takes two arguments, not one.

-- Under the hood:

id :: forall a . a -> a
id @a x = x

The first argument of id (the @a) is the same as the a in the type signature of id. The type of the second argument (x) can refer to the value of the first argument (the @a).

If you don't believe me, you can prove this yourself by just taking the following module:

module Id where

import Prelude hiding (id)

id :: a -> a
id x = x

... and ask ghc to output the low-level "core" representation of the above id function:

$ ghc -ddump-simpl id.hs
[1 of 1] Compiling Id ( id.hs, id.o )

==================== Tidy Core ====================
Result size of Tidy Core = {terms: 4, types: 5, coercions: 0}

Id.id :: forall a_apw. a_apw -> a_apw
[GblId, Arity=1, Caf=NoCafRefs, Str=DmdType]
Id.id = \ (@ a_aHC) (x_apx :: a_aHC) -> x_apx

The key part is the last line, which if you clean up looks like this:

id = \(@a) (x :: a) -> x

ghc prefixes types with @ when using them as function arguments.

In other words, every time we "generalize" a function (i.e. make it more polymorphic), we add a new hidden argument to that function corresponding to the polymorphic type.


We can "specialize" id to a narrower type that is less polymorphic (a.k.a. "monomorphic"):

idString :: String -> String
idString = id

Under the hood, what actually happened was that we applied the id function to the String type, like this:

-- Under the hood:

idString :: String -> String
idString = id @String

We can prove this ourselves by taking this module:

module Id where

import Prelude hiding (id)

id :: a -> a
id x = x

idString :: String -> String
idString = id

... and studying the core that this module generates:

$ ghc -ddump-simpl id.hs
[1 of 1] Compiling Id ( id.hs, id.o )

==================== Tidy Core ====================
Result size of Tidy Core = {terms: 6, types: 8, coercions: 0}

Id.id :: forall a_apx. a_apx -> a_apx
[GblId, Arity=1, Caf=NoCafRefs, Str=DmdType]
Id.id = \ (@ a_aHL) (x_apy :: a_aHL) -> x_apy

Id.idString :: GHC.Base.String -> GHC.Base.String
[GblId, Arity=1, Caf=NoCafRefs, Str=DmdType]
Id.idString = Id.id @ GHC.Base.String

If we clean up the last line, we get:

idString = id @String

In other words, every time we "specialize" a function (i.e. make it less polymorphic), we apply it to a hidden type argument. Again, ghc prefixes types with @ when using them as values.

So back in the REPL, when we ran code like this:

>>> id 4
4
>>> id "Test"
"Test"

... ghc was implicitly inserting hidden type arguments for us, like this:

>>> id @Int 4
>>> id @String "Test"

Conclusion

That's it! There's really nothing more to it than that.

The general trick for passing around type parameters as ordinary function arguments was first devised as part of System F.

This is the same way that many other languages encode polymorphism (like ML, Idris, Agda, Coq) except that some of them use a more general mechanism. However, the basic principles are the same:

  • When you make something more polymorphic you are adding additional type arguments to your function
  • When you make something less polymorphic you apply your function to type values

In ghc-8.0, you will be allowed to explicitly provide type arguments yourself if you prefer. For example, consider the read function:

read :: Read a => String -> a

If you wanted to specify what type to read without a type annotation, you could provide the type you desired as an argument to read:

read @Int :: String -> Int

Here are some other examples:

[] @Int :: [Int]

show @Int :: Int -> String

Previously, the only way to pass a type as a value was to use the following Proxy type:

data Proxy a = Proxy

... which reified a type as a value. Now you will be able to specialize functions by providing the type argument directly instead of adding a type annotation.
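For example, here is the Proxy idiom in action; readVia is a made-up helper name for this sketch, not a standard function:

```haskell
import Data.Proxy (Proxy (..))

-- Pre-TypeApplications idiom: pick the result type with a Proxy
-- argument instead of a type annotation on the call site.
readVia :: Read a => Proxy a -> String -> a
readVia _ = read

main :: IO ()
main = do
  print (readVia (Proxy :: Proxy Int) "42")      -- 42
  print (readVia (Proxy :: Proxy [Int]) "[1,2]") -- [1,2]
```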

Categories: Offsite Blogs

STM and garbage collector

haskell-cafe - Sat, 10/17/2015 - 5:11pm
If a thread is blocking indefinitely in an STM transaction (reading from a queue to which nobody else has a reference and thus can not write into), is the runtime smart enough to GC the thread? Or do I have to kill the thread manually? -- Haskell-Cafe
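A hedged sketch of what the runtime does in this situation (GHC delivers a BlockedIndefinitelyOnSTM exception to a thread blocked on a TVar that no other thread can reach, after which the thread can be collected; the stm boot package is assumed):

```haskell
import Control.Concurrent
import Control.Concurrent.STM
import Control.Exception

main :: IO ()
main = do
  done <- newEmptyMVar
  _ <- forkIO $ do
    -- This TVar is reachable only from the current thread, so the
    -- transaction below can never be unblocked; at the next GC the RTS
    -- throws BlockedIndefinitelyOnSTM rather than leaking the thread.
    tv <- newTVarIO (0 :: Int)
    r <- try (atomically (readTVar tv >>= check . (> 0)))
    putMVar done (r :: Either BlockedIndefinitelyOnSTM ())
  -- Expect a Left with "thread blocked indefinitely in an STM transaction".
  takeMVar done >>= print
```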
Categories: Offsite Discussion

Core libraries

glasgow-user - Fri, 10/16/2015 - 9:21am
Hello GHC users, For various reasons i'm trying to package something which is compatible with/equivalent to the Haskell Platform. I've been looking at the list posted at [0], and it turns out that GHC 7.10.2 doesn't bundle all the packages that are promised. (Or it's a case of PEBKAC and i apologise!) When i do `ghc-pkg list`, the output under ../package.conf.d provided by GHC notably has the following differences with the website list: $ diff website-list <(ghc-pkg list | ..prettify..) 2a3,4 10c12,14 < haddock --- 12,13c16 < old-locale < old-time --- 15a19 16a21 What surprises me is the lack of haddock, old-time and old-locale. Is this intentional, or a glitch on the website? I notice that these "missing" libraries are repeated in the `Additional Platform Libraries' and `Programs and Tools' sections. I guess this means that as a GHC packager, i should add these items separately? I guess what i'm saying is that i believe there is a bug in that list [0], but i'm not sure enough that i understand the
Categories: Offsite Discussion

Default definition for fromRational

libraries list - Thu, 10/15/2015 - 11:51pm
A suitable default definition for fromRational could be the following:

fromRational n = fromInteger (numerator n) / fromInteger (denominator n)

Changing the MINIMAL pragma to just {-# MINIMAL recip | (/) #-}

- Joe
Categories: Offsite Discussion

Handling multiple fds with GHC

glasgow-user - Wed, 10/07/2015 - 8:49am
Hi, the last few days, I tried to get an IO-Event system running with GHC, i.e. trigger an IO action when there is data to read from a fd. I looked at a few different implementations, but all of them have some downside.

* using select package
  - This uses the select syscall. select is rather limited (fd cannot be
* using GHC.Event
  - GHC.Event is broken in 7.10.1 (unless unsafeCoerce and a hacky trick are used)
  - GHC.Event is GHC internal according to hackage
  - Both Network libraries I looked at (networking (Network.Socket) and socket (System.Socket)) crash the application with GHC.Event
  - with 7.8+ I didn't see a way to create your own EventManager, so it only works with -threaded
* using forkIO and threadWaitRead for each fd in a loop
  - needs some kind of custom control structure around it
  - uses a separate thread for each fd
  - might become pretty awkward to handle multiple events
* using poll package
  - blocks in a safe foreign call
  - needs some kind of wrapper

Fro
Categories: Offsite Discussion

using nmitchell's space leak detection technique

glasgow-user - Fri, 10/02/2015 - 3:02am
Neil Mitchell wrote an article about finding space leaks by limiting the stack size: I'm giving it a try, but the results don't make any sense to me. My understanding is that the too-deep stack indicates that someone created too many thunks, so when they get evaluated you have to descend too many times. And if the stack trace seems to be a reasonable size (say 100 stack frames somehow consuming 1mb of stack) then it means there is a recursive call in there somewhere that ghc is helpfully eliding. And lastly, that the thing about "Stack space overflow: current size 33568 bytes." always showing 33568 bytes is due to a ghc bug and it should actually be whatever limit I gave it. Is all this accurate? The stack trace jumps around a lot, to places which are not lexically present in the caller. I assume this is just lazy evaluation, so e.g. maybe 'f' doesn't call 'g', but if 'f' forces a value returned from 'g' which has not yet been forced,
Categories: Offsite Discussion

New gtk2hs 0.12.4 release

gtk2hs - Wed, 11/21/2012 - 12:56pm

Thanks to John Lato and Duncan Coutts for the latest bugfix release! The latest packages should be buildable on GHC 7.6, and the cairo package should behave a bit nicer in ghci on Windows. Thanks to all!


Categories: Incoming News