News aggregator

kdt - fast and flexible k-d tree library for Haskell

Haskell on Reddit - Tue, 11/11/2014 - 5:02pm


There are a few k-d tree libraries already on Hackage, but they:

  • do not allow users to associate data with each point in the tree
  • are slower than a naive linear scan

This k-d tree library allows for association of data with each point in the tree (like a "K-d Map"), and has gone through many iterations of benchmarking and optimization. Check out the benchmarks. The library also supports a "dynamic" variant of k-d trees; i.e., users can freely interleave point insertion and queries while maintaining a balanced tree structure.

All that said, this is my first contribution to Hackage, so any and all feedback is highly appreciated.
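For reference, the "K-d Map" association idea reduces, in the naive linear-scan baseline the library is benchmarked against, to something like the following base-only sketch (`nearestLinear` and the list-of-pairs representation are illustrative names, not the library's actual API):

```haskell
import Data.List (minimumBy)
import Data.Ord (comparing)

-- A point paired with user data: the "K-d Map" association.
type Point = [Double]

-- Squared Euclidean distance between two points.
sqDist :: Point -> Point -> Double
sqDist a b = sum [(x - y) ^ (2 :: Int) | (x, y) <- zip a b]

-- The naive linear scan: examine every point, keep the closest.
nearestLinear :: [(Point, v)] -> Point -> (Point, v)
nearestLinear pts q = minimumBy (comparing (sqDist q . fst)) pts
```

A balanced k-d tree answers the same nearest-neighbour query in roughly logarithmic time instead of scanning every point, and the "dynamic" variant keeps the tree balanced while inserts and queries are interleaved.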

submitted by giogadi
[link] [11 comments]
Categories: Incoming News

Monads, video!

Haskell on Reddit - Tue, 11/11/2014 - 3:25pm
Categories: Incoming News

Neil Mitchell: Upper bounds or not?

Planet Haskell - Tue, 11/11/2014 - 3:11pm

Summary: I thought through the issue of upper bounds on Haskell package dependencies, and it turns out I don't agree with anyone :-)

There is currently a debate about whether Haskell packages should have upper bounds for their dependencies or not. Concretely, given mypackage and dependency-1.0.2, should I write dependency >= 1 (no upper bounds) or dependency >= 1 && < 1.1 (PVP/Package versioning policy upper bounds). I came to the conclusion that the bounds should be dependency >= 1, but that Hackage should automatically add an upper bound of dependency <= 1.0.2.
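As concrete .cabal fragments (with `dependency` as the post's placeholder package name), the three styles under discussion look like:

```cabal
library
  build-depends:
      dependency >= 1                 -- no upper bound
  -- PVP style:
  --  dependency >= 1 && < 1.1
  -- the post's proposal, maintained automatically by Hackage:
  --  dependency >= 1 && <= 1.0.2
```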

Rock vs Hard Place

The reason the debate has continued so long is because both choices are unpleasant:

  • Don't add upper bounds, and have packages break for your users because they are no longer compatible.
  • Add PVP upper bounds, and have reasonable install plans rejected and users needlessly downgraded to old versions of packages. If one package requires a minimum version of above n, and another requires a maximum below n, they can't be combined. The PVP allows adding new functions, so even if all your dependencies follow the PVP, the code might still fail to compile.

I believe there are two relevant factors in choosing which scheme to follow.

Factor 1: How long will it take to update the .cabal file

Let us assume that the .cabal file can be updated in minutes. If there are excessively restrictive bounds for a few minutes it doesn't matter - the code will be out of date, but only by a few minutes, and other packages requiring the latest version are unlikely.

As the .cabal file takes longer to update, the problems with restrictive bounds become worse. For abandoned projects, the restrictive upper bounds make them unusable. For actively maintained projects with many dependencies, bounds bumps can be required weekly, and a two week vacation can break actively maintained code.

Factor 2: How likely is the dependency upgrade to break

If upgrading a dependency breaks the package, then upper bounds are a good idea. In general it is impossible to predict whether a dependency upgrade will break a package or not, but my experience is that most packages usually work fine. For some projects, there are stated compatibility ranges, e.g. Snap declares that any API will be supported for two 0.1 releases. For other projects, some dependencies are so tightly-coupled that every 0.1 increment will almost certainly fail to compile, e.g. the HLint dependency on haskell-src-exts.

The fact that these two variable factors are used to arrive at a binary decision is likely the reason the Haskell community has yet to reach a conclusion.

My Answer

My current preference is to normally omit upper bounds. I do that because:

  • For projects I use heavily, e.g. haskell-src-exts, I have fairly regular communication with the maintainers, so am not surprised by releases.
  • For most projects I depend on only a fraction of the API, e.g. wai, and most changes are irrelevant to me.
  • Michael Snoyman and the excellent Stackage alert me to broken upgrades quickly, so I can respond when things go wrong.
  • I maintain quite a few projects, and the administrative overhead of uploading new versions, testing, waiting for continuous-integration results etc would cut down on real coding time. (While the Hackage facility to edit the metadata would be quicker, I think that tweaking fundamentals of my package, but skipping the revision control and continuous integration, seems misguided.)
  • The PVP is a heuristic, but usually the upper bound is too tight, and occasionally the upper bound is too loose. Relying on the PVP to provide bounds is no silver bullet.

On the negative side, occasionally my packages no longer compile for my users (very rarely, and for short periods of time, but it has happened). Of course, I don't like that at all, so do include upper bounds for things like haskell-src-exts.

The Right Answer

I want my packages to use versions of dependencies such that:

  • All the features I require are present.
  • There are no future changes that stop my code from compiling or passing its test suite.

I can achieve the first objective by specifying a lower bound, which I do. There is no way to predict the future, so no way I can restrict the upper bound perfectly in advance. The right answer must involve:

  • On every dependency upgrade, Hackage (or some agent of Hackage) must try to compile and test my package. Many Haskell packages are already tested using Travis CI, so reusing those tests seems a good way to gauge success.
  • If the compile and tests pass, then the bounds can be increased to the version just tested.
  • If the compile or tests fail, then the bounds must be tightened to exclude the new version, and the author needs to be informed.

With this infrastructure, the time a dependency is too tight is small, and the chance of breakage is unknown, meaning that Hackage packages should have exact upper bounds - much tighter than PVP upper bounds.

Caveats: I am unsure whether such regularly changing metadata should be incorporated into the .cabal file or not. I realise the above setup requires quite a lot of Hackage infrastructure, but will buy anyone who sorts it out some beer.

Categories: Offsite Blogs

How can I contribute in a useful way?

Haskell on Reddit - Tue, 11/11/2014 - 12:58pm

I'm not a math major, so I fear that contributions to language design are out of my scope... what can I do then?

submitted by fruitbooploops
[link] [21 comments]
Categories: Incoming News

PROPOSAL: Add 'Natural' type to base:Data.Word

libraries list - Tue, 11/11/2014 - 11:35am
Hello CLC et al.,

I hereby suggest to add a type for encoding term-level naturals

    data Natural = <opaque/hidden> deriving (...the usual standard classes...)

to the `base:Data.Word` module.

Motivation
==========

  • GHC 7.10 is planned to ship with integer-gmp2 [2] as its default `Integer` lib, whose low-level primitives are based on *unsigned* BigNums. A 'Natural' type for integer-gmp2 can be implemented directly w/o the overhead of wrapping an `Integer`, via

        data Natural = NatS# Word# | NatJ# !PrimBigNat#

    as well as having a twice as large domain handled via the small-word constructor, thus avoiding FFI calls into GMP.
  • GHC/`base` already provides type-level naturals, but no term-level naturals.
  • Remove the asymmetry of having an unbounded signed `Integer` but no unbounded /unsigned/ integral type. Also, `Data.Word` has been carrying the following note [1] for some time now:

    > It would be very natural to add a type Natural providing an unbounded
Categories: Offsite Discussion

GHC 7.8.3 thread hang

glasgow-user - Tue, 11/11/2014 - 9:11am
I am trying to debug a lockup problem (and need help with debugging technique), where hang means a thread stops at a known place during evaluation, and other threads continue. The code near the problem is like:

    ec <- return $ encode command
    l <- return $ BSL.length ec
    ss <- return $ BSC.unpack ec

It does not matter if I use let or return, or if the length is taken after unpack. I used return so I could use this code for tracing, with strictness to try to find the exact statement that is the problem:

    traceEventIO "sendCommand"
    ec <- return $ encode command
    traceEventIO $ "sendCommand: encoded"
    l <- ec `seq` return $ BSL.length ec
    traceEventIO $ "sendCommand: size " ++ (show l)
    ss <- ec `seq` return $ BSC.unpack ec

When this runs, the program executes this many times, but always hangs under a certain condition. For good evaluations:

    7f04173ff700: cap 0: sendCommand
    7f04173ff700: cap 0: sendCommand: encoded
    7f04173ff700: cap 0: sendCommand: s
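One detail worth noting in the traced version: seq forces only weak head normal form, so `ec `seq` ...` evaluates the outermost constructor of the lazy ByteString, not its full contents. A base-only sketch of that difference, using a list with an error thunk buried inside it:

```haskell
import Control.Exception (ErrorCall, evaluate, try)

main :: IO ()
main = do
  let xs = 1 : error "boom" : ([] :: [Int])
  -- seq forces only the outermost constructor (WHNF), so this succeeds:
  xs `seq` putStrLn "WHNF ok"
  -- forcing the full structure reaches the embedded error thunk:
  r <- try (evaluate (sum xs)) :: IO (Either ErrorCall Int)
  putStrLn (either (const "deep force hit the error thunk") show r)
```

So a seq that appears to succeed does not rule out an unevaluated thunk deeper in the structure being the thing that hangs.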
Categories: Offsite Discussion

2048 - A Simple Haskell Game Server Implementation

Haskell on Reddit - Tue, 11/11/2014 - 9:01am

I built a 2048 game server in Haskell and wrote a blog post about it. Hope you like it!

submitted by MarkMc2412
[link] [10 comments]
Categories: Incoming News

IO monads, stream handles, and type inference

haskell-cafe - Tue, 11/11/2014 - 5:09am
In the thread "Precise timing", in response to something ugly I was doing, Rohan Drape provided the following code:

    import Control.Concurrent
    import Control.Monad
    import System.IO
    import Sound.OSC

    main = withMax $ mapM_ note (cycle [1,1,2])
    withMax = withTransport (openUDP "" 9000)
    sin0 param val = sendMessage (Message "sin0" [string param, float val])
    pause = liftIO . pauseThread . (* 0.1)
    note n = do sin0 "frq" 300
                sin0 "amp" 1
                pause n
                sin0 "amp" 0
                pause n

For days I have monkeyed with it, and studied the libraries it imports, and I remain sorely confused. *How can the "a" in "IO a" be a handle?* Here are two type signatures:

    openUDP :: String -> Int -> IO UDP
    withTransport :: Transport t => IO t -> Connection t a -> IO a

Rohan's code makes clear that openUDP creates a handle representing the UDP connection. openUDP's type signature indicates that its output is an "IO UDP". How can I reconcile those two facts? When I read about
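The reconciliation the poster is after: `IO UDP` is not a handle, it is an action that yields a UDP handle when executed, and `<-` binds that yielded value. The same pattern appears with openFile from base (a minimal sketch; the file path is arbitrary):

```haskell
import System.IO

-- openFile :: FilePath -> IOMode -> IO Handle
-- The "a" in "IO a" is whatever the action produces when run; here, a Handle.
main :: IO ()
main = do
  h <- openFile "/tmp/io-a-demo.txt" WriteMode  -- h :: Handle, not IO Handle
  hPutStrLn h "the a in IO a can be a handle"
  hClose h
  s <- readFile "/tmp/io-a-demo.txt"            -- s :: String, bound from IO String
  putStr s
```

openUDP works the same way: it describes the act of opening the connection, and running that action (as withTransport does) is what produces the UDP value.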
Categories: Offsite Discussion

Advice request: Universities/Companies in Canada doing Haskell work/research?

Haskell on Reddit - Tue, 11/11/2014 - 4:43am

I'm wondering: can anyone recommend Canadian universities that have professors or research groups doing research with Haskell, FP, dependent types, etc.?

Alternately, does anyone know if there are companies with locations in Canada that do development or research in Haskell, language development, etc.? (Google and Microsoft have Canadian locations, but from what I found they didn't have much in the way of languages research happening at their Canadian offices.)

My plan was to Google the names of authors accepted at ICFP and POPL, but that will likely take a while, so I thought I'd ask before I did that.

My situation:

I'm a Canadian, but I've just started a 2-year Masters program at the University of Utrecht. I'd like to continue on to a PhD when I'm done, hopefully incorporating languages research, FP, etc.

The Canadian funding deadlines are in autumn, which means I'll need to have a couple of schools picked out a year before I would start my PhD.

I'm also very interested in getting industry experience with FP, possibly before or concurrently with a PhD.


submitted by jmite
[link] [10 comments]
Categories: Incoming News

Luke Plant: You can’t compare language features, only languages

Planet Haskell - Tue, 11/11/2014 - 3:51am

A lot of programming language debate is of the form “feature X is really good, every language needs it”, or “feature X is much better than its opposite feature Y”. The classic example is static vs dynamic typing, but there are many others, such as different types of meta-programming etc.

I often find myself pulled in both directions by these debates, as I’m rather partial to both Haskell and Python. But I’d like to suggest that doing this kind of comparison in the abstract, without talking about specific languages, is misguided, for the following reasons:

Language features can take extremely different forms in different languages

In my experience, static typing in Haskell is almost entirely unlike static typing in C, and different again from C# 1.0, and, from what I can tell, very different from static typing in C# 5.0. Does it really make sense to lump all these together?

Similarly, dynamic typing in shell script, PHP, Python and Lisp takes forms that are perhaps more different than they are alike. You can’t even put them on a spectrum — for example, Python is not simply a ‘tighter’ type system than PHP (in not treating strings as numbers etc.), because it also has features that allow far greater flexibility and power (such as dynamic subclassing due to first-class classes).

Combination of features is what matters

One of my favourite features of Python, for example, is keyword arguments. They often increase the clarity of calling code, and give functions the ability to grow new features in a backwards-compatible way. However, this feature only makes sense in combination with other features. Without the **kwargs syntax for passing and receiving an unknown set of keyword arguments, for example, decorators would be extremely difficult to write.

If you are thinking of how great Python is, I don’t think it helps to talk about keyword arguments in general as a killer feature. It is keyword arguments in Python that work particularly well.

Comparing language features opens up lots of opportunities for bad arguments

For example:

Attacking the worst implementation

So, a dynamic typing advocate might say that static typing means lots of repetitive and verbose boilerplate to indicate types. That criticism might apply to Java, but it doesn’t apply to Haskell and many other modern languages, where type inference handles 95% of the cases where you might otherwise need to specify types.

Defending the best implementation

The corollary to the above fallacy is that if you are only debating language features in the abstract, you can pick whichever implementation you want in order to refute a claim. Someone claims that dynamic typing makes IDE support for refactoring very difficult, and a dynamic typing advocate retorts that this isn’t the case with Smalltalk — ignoring the fact that they don’t use Smalltalk, they have never used Smalltalk, and their dynamically-typed language of choice does indeed present much greater or even insurmountable problems to automated refactoring.

Defending a hypothetical implementation

Defending the best implementation goes further when you actually defend one that doesn’t exist yet.

The mythical “smart enough compiler” is an example of this; another is when dynamic typing advocates talk about “improving” dynamic analysis.

Hypothetical implementations are always great for winning arguments, especially as they can combine all the best features of all the languages, without worrying about whether those features will actually fit together, and produce something that people would actually want to use. Sometimes a hybrid turns out like Hercules, and sometimes like the Africanized bee.

Ignoring everything else

In choosing a programming language, it’s not only the features of the language that you have to consider — there is a long list of other factors, such as the maturity of the language, the community, the libraries, the documentation, the tooling, the availability (and quality) of programmers etc.

Sometimes the quality of these things is dominated by accidents of history (which language became popular and when), and sometimes it can be traced back to features of the language design.

Many language-war debates ignore all these things. But it’s even easier if you are not actually comparing real languages — just language features, abstracted from everything else.

I understand that comparing everything at once is difficult, and we will always attempt to break things down into smaller pieces for analysis. But I doubt that this goes very far with programming languages, because of the way the different features interact with each other, and also exert huge influence on the way that everything else develops e.g. libraries.


Language features exist within the context of a language and everything surrounding that language. It seems to me that attempts to analyse them outside that context simply lead to false generalisations.

Of course, being really concrete and talking about specific languages often ends up even more personal, which has its own pitfalls! Is there a good way forward?

Categories: Offsite Blogs

Representing record subtypes, sort of.

haskell-cafe - Tue, 11/11/2014 - 3:44am
I have an API that, in a language with subtyping, would look like:

    class FsEntry
        id: String
    class FsFile extends FsEntry
        modified: Date
        size: Int
    class FsFolder extends FsEntry
        owner: String

    listFolder :: Path -> [FsEntry]
    createFile :: Path -> FsFile
    createFolder :: Path -> FsFolder

(I'm assuming some way of specifying that FsEntry will only ever have those two subtypes.) How would you represent this in Haskell? My first thought was:

    newtype FsEntry = FsEntry FsCommon FsExtra
    data FsCommon = FsCommon { id :: String }
    data FsExtra = FsFile { modified :: Date, size :: Int }
                 | FsFolder { owner :: String }

But then I couldn't have precise return types for `createFile` and `createFolder`. My next attempt was to use a type-parameterized top-level class:

    data FsCommon = FsCommon { id :: String }
    data FsFileExtra = FsFileExtra { modified :: Date, size :: Int }
    data FsFolderExtra = FsFolderExtra { owner :: String }
    dat
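One common answer to this kind of question (a hedged sketch, not the thread's conclusion): index the entry type by a promoted kind, so the create functions get precise return types while listFolder returns an existential wrapper. The names and field types here are placeholders (String standing in for dates, etc.):

```haskell
{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

data Kind = FileK | FolderK

-- The GADT enumerates exactly the two "subtypes", each with a precise index.
data FsEntry (k :: Kind) where
  FsFile   :: String -> String -> Int -> FsEntry 'FileK   -- id, modified, size
  FsFolder :: String -> String        -> FsEntry 'FolderK -- id, owner

-- Existential wrapper for heterogeneous directory listings.
data SomeEntry where
  SomeEntry :: FsEntry k -> SomeEntry

-- Common fields are total over both constructors.
entryId :: FsEntry k -> String
entryId (FsFile  i _ _) = i
entryId (FsFolder i _)  = i

createFile :: FilePath -> FsEntry 'FileK      -- precise return type
createFile p = FsFile p "1970-01-01" 0

listFolder :: FilePath -> [SomeEntry]         -- mixed files and folders
listFolder p = [SomeEntry (createFile (p ++ "/a")), SomeEntry (FsFolder p "root")]
```

Pattern matching on the GADT constructors refines the index, so file-only fields like size can be accessed only where the type proves the entry is a file.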
Categories: Offsite Discussion

RDP 2015 Last Call for Workshops

General haskell list - Mon, 11/10/2014 - 11:53pm
[apologies for cross posting]

RDP 2015 Last Call for Workshops
(Rewriting, Deduction, and Programming, June-July 2015, Warsaw, Poland)

RDP 2015 is the eighth edition of the International Conference on Rewriting, Deduction, and Programming, consisting of two main conferences:

  • RTA (Rewriting Techniques and Applications)
  • TLCA (Typed Lambda Calculi and Applications)

Previous RDPs were held in 2003 in Valencia (Spain), 2004 in Aachen (Germany), 2005 in Nara (Japan), 2007 in Paris (France), 2009 in Brasilia (Brazil), 2011 in Novi Sad (Serbia), and 2013 in Eindhoven (The Netherlands).

We solicit proposals for satellite workshops of RDP 2015 that are related in topic to one or both of the RDP conferences. We plan the workshops to proceed for up to 2 days (possibilities of longer workshops should be discussed with the organisers). It is tradition at RDP that attendance
Categories: Incoming News

Data Oriented Programming for RTS game

Haskell on Reddit - Mon, 11/10/2014 - 7:17pm

I've been using a simple state monad transformer stack to implement an RTS game and it's been a lot of fun to write. However, each unit has 60+ fields, so making lots of minor changes to units quickly becomes very expensive. I tried breaking the unit fields up into a tree, but that quickly made things very ugly and I'm still getting horrible cache utilization.

So I've been experimenting with the idea of data oriented programming in Haskell using the ST monad. It's not very pretty, but I'd like to know what you lovely people think. Is this insane? Is this a complete waste of time? How interested would you be in the results?

Code Snippet

  • Update mutable world
  • Freeze mutable world
  • Incorporate player commands
  • Send game state to players
  • Thaw immutable world
  • Repeat steps

The only overhead compared to the C equivalent would be copying the vectors and light GC (AFAIK).
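The thaw/update/freeze cycle listed above can be sketched with STUArray from the array package that ships with GHC (the post uses mutable vectors, but the loop shape is the same; `step` is a hypothetical per-tick update standing in for the real unit logic):

```haskell
import Control.Monad (forM_)
import Control.Monad.ST (ST, runST)
import Data.Array.ST (STUArray, readArray, writeArray, thaw, freeze)
import Data.Array.Unboxed (UArray, bounds, elems, listArray, range)

-- One tick: thaw the immutable world, mutate it in place, freeze a snapshot.
step :: UArray Int Int -> UArray Int Int
step world = runST $ do
  mw <- thaw world :: ST s (STUArray s Int Int)
  forM_ (range (bounds world)) $ \i -> do
    v <- readArray mw i
    writeArray mw i (v + 1)          -- stand-in for the real unit update
  freeze mw                          -- immutable snapshot to send to players
```

The frozen snapshot is the point where player commands can be incorporated and game state broadcast, exactly as in the bullet list above; thaw/freeze are the copies mentioned as the only overhead versus the C equivalent.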

submitted by Agitates
[link] [12 comments]
Categories: Incoming News

New Functional Programming Job Opportunities

haskell-cafe - Mon, 11/10/2014 - 7:00pm
Here are some functional programming job opportunities that were posted recently:

  • Senior Software Engineer at McGraw-Hill Education

Cheers,
Sean Murphy
Categories: Offsite Discussion