News aggregator

Using multiple versions of the Haskell Platform on Windows

haskell-cafe - Wed, 11/12/2014 - 4:43am
The win-hp-path project provides the use-hp command, which makes it easy to switch between different versions of Haskell Platform on Windows. https://github.com/ygale/win-hp-path We are using it for running cabal and GHC in command prompt windows. In particular, we can also use it in build scripts used by the build management team who are not Haskell programmers. Please let me know if you can use this in more complex dev environments, or if you have suggestions about how it could be enhanced to do that. Pull requests are welcome. Thanks, Yitz
Categories: Offsite Discussion

Final bikeshedding call: Fixing Control.Exception.bracket

libraries list - Tue, 11/11/2014 - 8:09pm
Ola! In September Eyal Lotem raised the issue of bracket's cleanup handler not being uninterruptible [1]. This is a final bikeshedding email before I submit a patch.

The problem, summarised: Blocking cleanup actions can be interrupted, causing cleanup not to happen and potentially leaking resources.

Main objection to making the cleanup handler uninterruptible: Could cause deadlock if the code relies on async exceptions to interrupt a blocked thread.

I count only two objections in the previous thread, 1 on the grounds that "deadlocks are NOT unlikely" and 1 that is conditioned on "I don't believe this is a problem". The rest seems either +1, or at least agrees that the status quo is *worse* than the proposed solution. My counter to these objections is:

1) No one has yet shown me any code that relies on the cleanup handler being interruptible.
2) There are plenty of examples of current code being broken, for example every single 'bracket' using file handles is broken due to handle operations using a pote
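A minimal sketch of the proposed direction, assuming the fix amounts to running the release action under uninterruptibleMask_ (an illustration only, not the submitted patch):

    import Control.Exception (mask, onException, uninterruptibleMask_)

    -- A bracket-like combinator whose cleanup cannot be cut short by an
    -- asynchronous exception: the release action runs uninterruptibly.
    bracketUninterruptible :: IO a -> (a -> IO b) -> (a -> IO c) -> IO c
    bracketUninterruptible acquire release use =
      mask $ \restore -> do
        resource <- acquire
        result <- restore (use resource)
                    `onException` uninterruptibleMask_ (release resource)
        _ <- uninterruptibleMask_ (release resource)
        return result

The objection in the thread is exactly about this shape: if release blocks forever, no async exception can break it out.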
Categories: Offsite Discussion

Monad m => m (Maybe a) -> m (Maybe a) -> m (Maybe a)

haskell-cafe - Tue, 11/11/2014 - 6:57pm
I've been using these functions lately:

    try :: Monad m => m (Maybe a) -> m (Maybe a) -> m (Maybe a)
    try action alternative = maybe alternative (return . Just) =<< action

    tries :: Monad m => [m (Maybe a)] -> m (Maybe a)
    tries = foldr try (return Nothing)

It's sort of like (<|>) on Maybe, or MonadPlus, but within a monad. It seems like the sort of thing that should be already available, but hoogle shows nothing. I think 'm' has to be a monad, and I can't figure out how to generalize the Maybe to MonadPlus or Alternative. It's sort of a mirror image to another function I use a lot:

    justm :: Monad m => m (Maybe a) -> (a -> m (Maybe b)) -> m (Maybe b)
    justm op1 op2 = maybe (return Nothing) op2 =<< op1

... which is just MaybeT for when I can't be bothered to put runMaybeT and lifts and hoists on everything. So you could say 'try' is like MaybeT with the exceptional case reversed. Is 'try' just the instantiation of some standard typeclass, or is it its own thing?
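For what it's worth, 'try' appears to coincide with (<|>) at MaybeT and 'tries' with asum; a sketch, assuming the transformers package's Control.Monad.Trans.Maybe (primed names only to avoid clashing with the definitions above):

    import Control.Applicative ((<|>))
    import Control.Monad.Trans.Maybe (MaybeT (..))
    import Data.Foldable (asum)

    -- Same behaviour as 'try', via MaybeT's Alternative instance.
    -- (The Functor constraint is redundant with newer GHCs.)
    try' :: (Functor m, Monad m) => m (Maybe a) -> m (Maybe a) -> m (Maybe a)
    try' action alternative = runMaybeT (MaybeT action <|> MaybeT alternative)

    -- Same behaviour as 'tries': fold the alternatives with asum.
    tries' :: (Functor m, Monad m) => [m (Maybe a)] -> m (Maybe a)
    tries' = runMaybeT . asum . map MaybeT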
Categories: Offsite Discussion

Force Criterion to benchmark sequentially for benchmarking parallel code

Haskell on Reddit - Tue, 11/11/2014 - 6:00pm

I believe that, if allowed, criterion will benchmark in parallel. This is bad for benchmarking parallel code. Is there any way to force criterion to benchmark sequentially, allowing parallel code to use all (4 of my) cores optimally?

Even when benchmarking sequential code, running criterion with +RTS -N4 results in a slower reported mean than running it without the RTS option. This stumps me. Am I doing something wrong?
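As far as I understand, criterion measures the benchmarks in a group one after another rather than concurrently; what +RTS -N changes is how many capabilities the RTS (and its parallel garbage collector) gets, which is a common reason a purely sequential benchmark reports a slower mean under -N4. A sketch of pinning the capability count from inside the benchmark program itself (parWork is a made-up placeholder for the real parallel code; compile with -threaded):

    import Control.Concurrent (setNumCapabilities)
    import Criterion.Main (bench, defaultMain, nf)

    -- Hypothetical stand-in for the parallel function under test.
    parWork :: Int -> Int
    parWork n = sum [1 .. n]

    main :: IO ()
    main = do
      -- Fix the capability count explicitly, instead of relying on
      -- whatever +RTS -N was passed on the command line.
      setNumCapabilities 4
      defaultMain
        [ bench "parWork" (nf parWork 1000000) ]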

submitted by pdexter
Categories: Incoming News

kdt - fast and flexible k-d tree library for Haskell

Haskell on Reddit - Tue, 11/11/2014 - 5:02pm

Hackage

There are a few k-d tree libraries already on Hackage, but they:

  • do not allow users to associate data to each point in the tree
  • are slower than a naive linear scan

This k-d tree library allows for association of data with each point in the tree (like a "K-d Map"), and has gone through many iterations of benchmarking and optimization. Check out the benchmarks. The library also supports a "dynamic" variant of k-d trees; i.e., users can freely interleave point insertion and queries while maintaining a balanced tree structure.
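To make the comparison concrete, the naive linear-scan baseline mentioned above looks roughly like this (Point, dist2 and nearestLinear are illustrative names, not part of the kdt API); the library's claim is that its nearest-neighbour queries beat this while still letting each point carry an associated value:

    import Data.List (minimumBy)
    import Data.Ord (comparing)

    -- Illustrative types/names only; not the kdt package's API.
    type Point = (Double, Double)

    dist2 :: Point -> Point -> Double
    dist2 (x1, y1) (x2, y2) = (x1 - x2) ^ (2 :: Int) + (y1 - y2) ^ (2 :: Int)

    -- Naive nearest neighbour over points paired with associated data.
    -- Assumes a non-empty list of points.
    nearestLinear :: Point -> [(Point, v)] -> (Point, v)
    nearestLinear q = minimumBy (comparing (dist2 q . fst))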

All that said, this is my first contribution to Hackage, so any and all feedback is highly appreciated.

submitted by giogadi
Categories: Incoming News

Monads, video!

Haskell on Reddit - Tue, 11/11/2014 - 3:25pm
Categories: Incoming News

Neil Mitchell: Upper bounds or not?

Planet Haskell - Tue, 11/11/2014 - 3:11pm

Summary: I thought through the issue of upper bounds on Haskell package dependencies, and it turns out I don't agree with anyone :-)

There is currently a debate about whether Haskell packages should have upper bounds for their dependencies or not. Concretely, given mypackage and dependency-1.0.2, should I write dependency >= 1 (no upper bounds) or dependency >= 1 && < 1.1 (PVP/Package versioning policy upper bounds)? I came to the conclusion that the bounds should be dependency >= 1, but that Hackage should automatically add an upper bound of dependency <= 1.0.2.
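Written out as build-depends fields (illustrative .cabal syntax, not taken from any particular package), the three constraint styles in this paragraph are:

    -- No upper bound:
    build-depends: dependency >= 1

    -- PVP upper bound:
    build-depends: dependency >= 1 && < 1.1

    -- Exact upper bound, as proposed above:
    build-depends: dependency >= 1 && <= 1.0.2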

Rock vs Hard Place

The reason the debate has continued so long is because both choices are unpleasant:

  • Don't add upper bounds, and have packages break for your users because they are no longer compatible.
  • Add PVP upper bounds, and have reasonable install plans rejected and users needlessly downgraded to old versions of packages. If one package requires a minimum version of above n, and another requires a maximum below n, they can't be combined. The PVP allows adding new functions, so even if all your dependencies follow the PVP, the code might still fail to compile.

I believe there are two relevant factors in choosing which scheme to follow.

Factor 1: How long will it take to update the .cabal file

Let us assume that the .cabal file can be updated in minutes. If there are excessively restrictive bounds for a few minutes it doesn't matter - the code will be out of date, but only by a few minutes, and other packages are unlikely to require the latest version that quickly.

As the .cabal file takes longer to update, the problems with restrictive bounds become worse. For abandoned projects, the restrictive upper bounds make them unusable. For actively maintained projects with many dependencies, bounds bumps can be required weekly, and a two week vacation can break actively maintained code.

Factor 2: How likely is the dependency upgrade to break

If upgrading a dependency breaks the package, then upper bounds are a good idea. In general it is impossible to predict whether a dependency upgrade will break a package or not, but my experience is that most packages usually work fine. For some projects, there are stated compatibility ranges, e.g. Snap declares that any API will be supported for two 0.1 releases. For other projects, some dependencies are so tightly-coupled that every 0.1 increment will almost certainly fail to compile, e.g. the HLint dependency on haskell-src-exts.

The fact that these two variable factors are used to arrive at a binary decision is likely the reason the Haskell community has yet to reach a conclusion.

My Answer

My current preference is to normally omit upper bounds. I do that because:

  • For projects I use heavily, e.g. haskell-src-exts, I have fairly regular communication with the maintainers, so am not surprised by releases.
  • For most projects I depend on only a fraction of the API, e.g. wai, and most changes are irrelevant to me.
  • Michael Snoyman and the excellent Stackage alert me to broken upgrades quickly, so I can respond when things go wrong.
  • I maintain quite a few projects, and the administrative overhead of uploading new versions, testing, waiting for continuous-integration results etc would cut down on real coding time. (While the Hackage facility to edit the metadata would be quicker, I think that tweaking fundamentals of my package, but skipping the revision control and continuous integration, seems misguided.)
  • The PVP is a heuristic, but usually the upper bound is too tight, and occasionally the upper bound is too loose. Relying on the PVP to provide bounds is no silver bullet.

On the negative side, occasionally my packages no longer compile for my users (very rarely, and for short periods of time, but it has happened). Of course, I don't like that at all, so do include upper bounds for things like haskell-src-exts.

The Right Answer

I want my packages to use versions of dependencies such that:

  • All the features I require are present.
  • There are no future changes that stop my code from compiling or passing its test suite.

I can achieve the first objective by specifying a lower bound, which I do. There is no way to predict the future, so no way I can restrict the upper bound perfectly in advance. The right answer must involve:

  • On every dependency upgrade, Hackage (or some agent of Hackage) must try to compile and test my package. Many Haskell packages are already tested using Travis CI, so reusing those tests seems a good way to gauge success.
  • If the compile and tests pass, then the bounds can be increased to the version just tested.
  • If the compile or tests fail, then the bounds must be tightened to exclude the new version, and the author needs to be informed.

With this infrastructure, the time a dependency is too tight is small, and the chance of breakage is unknown, meaning that Hackage packages should have exact upper bounds - much tighter than PVP upper bounds.

Caveats: I am unsure whether such regularly changing metadata should be incorporated into the .cabal file or not. I realise the above setup requires quite a lot of Hackage infrastructure, but will buy anyone who sorts it out some beer.

Categories: Offsite Blogs

How can I contribute in a useful way?

Haskell on Reddit - Tue, 11/11/2014 - 12:58pm

I'm not a math major, so I fear that contributions to language design are out of my scope... what can I do then?

submitted by fruitbooploops
Categories: Incoming News

PROPOSAL: Add 'Natural' type to base:Data.Word

libraries list - Tue, 11/11/2014 - 11:35am
Hello CLC et al., I hereby suggest to add a type for encoding term-level naturals

    data Natural = <opaque/hidden>
      deriving (...the usual standard classes...)

to the `base:Data.Word` module.

Motivation
==========

- GHC 7.10 is planned to ship with integer-gmp2[2] as its default `Integer` lib, whose low-level primitives are based on *unsigned* BigNums. And a 'Natural' type for integer-gmp2 can be implemented directly w/o the overhead of wrapping an `Integer` via

      data Natural = NatS# Word#
                   | NatJ# !PrimBigNat#

  as well as having a twice as large domain handled via the small-word constructor and thus avoiding FFI calls into GMP.
- GHC/`base` already provides type-level naturals, but no term-level naturals.
- Remove the asymmetry of having an unbounded signed `Integer` but no unbounded /unsigned/ integral type.

Also, `Data.Word` has been carrying the following note[1] for some time now:

> It would be very natural to add a type Natural providing an
> unbounded
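As a rough, boxed illustration of what a term-level Natural means here (the proposal's actual representation is the unboxed NatS#/NatJ# one quoted above, not this wrapper):

    -- Illustration only: an unbounded, non-negative integral value
    -- carried at the term level, unlike GHC's type-level Nat.
    newtype Natural = Natural Integer
      deriving (Eq, Ord, Show)

    -- Smart constructor rejecting negative inputs.
    mkNatural :: Integer -> Maybe Natural
    mkNatural n
      | n >= 0    = Just (Natural n)
      | otherwise = Nothing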
Categories: Offsite Discussion

GHC 7.8.3 thread hang

glasgow-user - Tue, 11/11/2014 - 9:11am
I am trying to debug a lockup problem (and need help with debugging technique), where hang means a thread stops at a known place during evaluation, and other threads continue. The code near the problem is like:

    ec <- return $ encode command
    l <- return $ BSL.length ec
    ss <- return $ BSC.unpack ec

It does not matter if I use let or return, or if the length is taken after unpack. I used return so I could use this code for tracing, with strictness to try to find the exact statement that is the problem:

    traceEventIO "sendCommand"
    ec <- return $ encode command
    traceEventIO $ "sendCommand: encoded"
    l <- ec `seq` return $ BSL.length ec
    traceEventIO $ "sendCommand: size " ++ (show l)
    ss <- ec `seq` return $ BSC.unpack ec

When this runs, the program executes this many times, but always hangs under a certain condition. For good evaluations:

    7f04173ff700: cap 0: sendCommand
    7f04173ff700: cap 0: sendCommand: encoded
    7f04173ff700: cap 0: sendCommand: s
Categories: Offsite Discussion

2048 - A Simple Haskell Game Server Implementation

Haskell on Reddit - Tue, 11/11/2014 - 9:01am

I built a 2048 game server in Haskell and wrote a blog post about it. I hope you like it!

http://jumboeats.wordpress.com/2014/11/10/haskell2048/

submitted by MarkMc2412
Categories: Incoming News

IO monads, stream handles, and type inference

haskell-cafe - Tue, 11/11/2014 - 5:09am
In the thread "Precise timing <https://groups.google.com/forum/#!topic/haskell-cafe/sf8W4KBTYqQ>", in response to something ugly I was doing, Rohan Drape provided the following code:

    import Control.Concurrent
    import Control.Monad
    import System.IO
    import Sound.OSC

    main = withMax $ mapM_ note (cycle [1,1,2])
    withMax = withTransport (openUDP "127.0.0.1" 9000)
    sin0 param val = sendMessage (Message "sin0" [string param,float val])
    pause = liftIO . pauseThread . (* 0.1)
    note n = do
      sin0 "frq" 300
      sin0 "amp" 1
      pause n
      sin0 "amp" 0
      pause n

For days I have monkeyed with it, and studied the libraries it imports, and I remain sorely confused. *How can the "a" in "IO a" be a handle?* Here are two type signatures:

    openUDP :: String -> Int -> IO UDP
    withTransport :: Transport t => IO t -> Connection t a -> IO a

Rohan's code makes clear that openUDP creates a handle representing the UDP connection. openUDP's type signature indicates that its output is an "IO UDP". How can I reconcile those two facts? When I read about
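A base-only sketch of the reconciliation (no Sound.OSC involved): a value of type IO UDP is not itself a handle, it is an action whose result is a handle. Running the action with '<-' in a do block binds that handle, exactly as openFile's IO Handle does here:

    import System.IO (Handle, IOMode (WriteMode), hClose, hPutStrLn, openFile)

    -- openFile :: FilePath -> IOMode -> IO Handle
    -- The action describes how to obtain a Handle; '<-' runs it and binds
    -- the Handle it produces, which is then an ordinary value to pass around.
    main :: IO ()
    main = do
      h <- openFile "out.txt" WriteMode   -- h :: Handle
      hPutStrLn h "hello"
      hClose h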
Categories: Offsite Discussion