News aggregator

Avoiding BlockedIndefinitelyOnSTM exceptions

glasgow-user - Mon, 07/14/2014 - 3:30am
I have what may sound like an unusual request: I would like to automatically avoid `BlockedIndefinitelyOnSTM` exceptions with a primitive that looks something like this: `safe :: STM a -> STM (Maybe a)`. This hypothetical `safe` primitive would attempt a transaction, and if GHC detects that the transaction would fail with a `BlockedIndefinitelyOnSTM` exception, it would return `Nothing` instead of throwing an uncatchable exception. I originally simulated a limited form of this behavior using `pipes-concurrency`: I instrumented the garbage collector (using weak references) to detect when an STM variable was garbage collected, and to safely cancel any transactions that depended on those variables. You can see the implementation here: The original purpose behind this was to easily read and write to a channel without having to count references to the
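While no such `safe` primitive exists inside STM, the exception can in fact be caught in IO around `atomically`, which gives a limited approximation. A minimal sketch (the name `safeAtomically` is hypothetical; unlike the proposed primitive, it guards the whole transaction from outside rather than composing within STM):

```haskell
import Control.Concurrent.STM
import Control.Exception

-- Hypothetical helper: run a transaction, turning the runtime's
-- BlockedIndefinitelyOnSTM exception into Nothing. Note this wraps the
-- whole transaction in IO; it is not the composable STM primitive asked for.
safeAtomically :: STM a -> IO (Maybe a)
safeAtomically tx =
  (Just <$> atomically tx)
    `catch` \BlockedIndefinitelyOnSTM -> return Nothing
```

For example, `safeAtomically retry` returns `Nothing` once the runtime's deadlock detector notices that no other thread can wake the transaction.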
Categories: Offsite Discussion

Diony Rosa - 7/14/2014 2:14:04 AM

haskell-cafe - Mon, 07/14/2014 - 3:14am
Categories: Offsite Discussion

Bit shifting limitations

libraries list - Sun, 07/13/2014 - 7:36pm
The current state of affairs is a bit unsatisfactory.

1. The biggest problem, as I see it, is that while we have shiftR and shiftL, which are documented as giving 0 or -1 when shifting too far, and we have unsafeShiftR and unsafeShiftL, which are likely to do whatever the CPU happens to do, we don't have anything guaranteed to shift using just the first five or six bits of the shift count, which is what Java specifies and what unsafeShiftR and unsafeShiftL *actually* do (at least on x86_64). I propose that we add these masked shifts to Data.Bits. The default implementations can look something like:

    shiftRMasked x count
      | popCount (finiteBitSize x) /= 1 = error "Masked shift only makes sense if the size is a power of 2."
      | otherwise = x `unsafeShiftR` (count .&. (finiteBitSize x - 1))

2. It would be nice to specify what shiftR and shiftL are supposed to do when given negative shift counts. Is there a practical reason not to specify that?

3. I would like to add explicit arithmetic and logical shifts to
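The proposed default can be tried today as ordinary user code (a sketch of the proposal, not an existing Data.Bits export):

```haskell
import Data.Bits
import Data.Word

-- Sketch of the proposed masked shift: only the low bits of the shift
-- count are used, matching Java semantics and x86_64 hardware behaviour.
shiftRMasked :: FiniteBits a => a -> Int -> a
shiftRMasked x count
  | popCount (finiteBitSize x) /= 1 =
      error "Masked shift only makes sense if the size is a power of 2."
  | otherwise = x `unsafeShiftR` (count .&. (finiteBitSize x - 1))
```

The difference only shows at out-of-range counts: `shiftR (0x80 :: Word8) 8` gives 0, while the masked version shifts by `8 .&. 7 = 0` and leaves the value unchanged.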
Categories: Offsite Discussion

[ANN] cabal-bounds 0.7: update bounds by haskellplatform release

haskell-cafe - Sun, 07/13/2014 - 1:42pm
Hi cafe, cabal-bounds[1] is a command line program for managing the bounds/versions of the dependencies in a cabal file. cabal-bounds 0.7 adds the feature to set the bounds of dependencies to the library versions used by a haskell platform[2] release. For further details please consult the README[3]. Greetings, Daniel [1] [2] [3]
Categories: Offsite Discussion

How would you create a Haskell program that stores mutable Haskell expressions? (Kinda complicated explanation.)

Haskell on Reddit - Sun, 07/13/2014 - 1:26pm

I am trying to create a Haskell program that stores Haskell expressions and their reductions. I must admit that after reading a lot about Haskell, monads, and thinking a lot about the problem, I still have no idea where even to start this. So, let me show an example of what I want:

    a <- 1
    b <- 2
    c <- 3
    d <- 4
    e <- [a,b,c,d]
    f <- (\l -> case l of [] -> 0; (x:xs) -> x + s xs)
    g <- f e
    print e -- output: [1,2,3,4]
    print g -- output: 10
    a <- 11
    print e -- output: [11,2,3,4]
    print g -- output: 20
    h <- 5
    e <- [a,b,c,d,h]
    print e -- output: [11,2,3,4,5]
    print g -- output: 25
    f <- (\l -> case l of [] -> 0; (x:xs) -> x * 2 + s xs)
    print e -- output: [11,2,3,4,5]
    print g -- output: 50

This is not in any particular language, but do you see what I'm doing? To put it in words, I want a way to store a database of Haskell terms and their reductions. There are two problems, though:

  1. You can modify a term and, when you do, all terms that use this term will get updated.

  2. Terms are cons-hashed and reductions are memoized by term. So, a <- [1,2] and b <- [1,2], a and b are equal by reference and the reduction is cached.

Is the problem as complicated as I am starting to believe it is? I need some help with the overall design of this. How would you guys do it? What libraries could help?
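Point 2 (cons-hashing) on its own is straightforward to sketch: intern each term in a map so that structurally equal terms share one identity, which memoized reductions can then key on. A toy sketch with a made-up `Term` type:

```haskell
import Data.IORef
import qualified Data.Map as Map

-- A made-up, minimal term language for illustration.
data Term = Lit Int | ListOf [Int]
  deriving (Eq, Ord, Show)

-- Interning table: maps each distinct term to a stable id.
type Intern = IORef (Map.Map Term Int, Int)

-- Return the existing id for a structurally equal term, or mint a new one.
intern :: Intern -> Term -> IO Int
intern ref t = do
  (table, next) <- readIORef ref
  case Map.lookup t table of
    Just i  -> return i
    Nothing -> do
      writeIORef ref (Map.insert t next table, next + 1)
      return next
```

Point 1 (edits propagating to all dependent terms) is essentially incremental computation, so libraries in the adaptive/reactive programming tradition are worth a look for the overall design.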

submitted by Padavo
[link] [15 comments]
Categories: Incoming News

Complete roadmap from total novice to Haskell mastery?

Haskell on Reddit - Sun, 07/13/2014 - 11:35am

I am having no trouble at all getting the Haskell basics: functions, currying, purity, laziness, monads somewhat. All have a lot of great resource options. But having just finished the LYAH book, I have no idea where to go now. I see you guys talking about advanced things all the time - GADTs, lenses, Repa, monad transformers. I see a lot of cool libraries with no explanations or tutorials. There don't seem to be many Haskell books. It is becoming harder and harder to get further down the rabbit hole.

So, considering that lack of resources and courses for advanced stuff, it would be really great if you compiled a list of the available Haskell resources: books, tutorials, blog posts. One complete enough to contain everything someone should read to go from complete novice to mastery on their own.

submitted by Padavo
[link] [72 comments]
Categories: Incoming News

How do you avoid the Cabal Hell™?

Haskell on Reddit - Sun, 07/13/2014 - 8:43am

I've been using Haskell quite heavily in the past few months, and I just keep experiencing cabal hell over and over again. Here is basically my list of questions. Most recently, when I tried to install darcs, I wasn't even able to build it in a sandbox. I always thought that `cabal unpack darcs; cd darcs; cabal sandbox init; cabal install` should always pass, but it doesn't, so I guess I must be doing something wrong?

This is probably my biggest question: how can I compile something that fails to install its dependencies even when using a sandbox? Here are a few more questions:

  • How should I install binaries like yesod-bin, darcs, ghc-mod, hlint, etc., where I'd like to have them available globally? (Should I just cabal unpack, build in a sandbox and copy the binary somewhere safe?)
  • How should I install packages which I do want globally, such as lens? The reason for this is that when playing around with things I don't want to keep reinstalling sandboxes over and over again, what's the best practice here? Should I install all of the things I use in one big cabal install?
  • When and for what should I be using Stackage? Is it better to just wipe everything from ~/.cabal and ~/.ghc, add stackage and start installing things from scratch? How much does this help when using the inclusive build compared to regular hackage?
  • What should I do when I stumble upon a package which I need to build, but it results in dependency issues like this. Is there a way to fix that, other than ghc-pkg unregistering all the packages it conflicts with?
  • If I use the pre-built binaries for ghc and install everything myself, is that safer than using haskell-platform? I've found that when using the haskell-platform I have to ghc-pkg unregister quite a lot of things to get some things compiled.

If you guys have any other tips for avoiding or figuring out the cabal hell, or techniques you use to manage dependencies, or just anything related to working with cabal properly, please do post them in the comments.

The only way I've been fixing this stuff is brute-force deleting packages or completely reinstalling everything, which doesn't seem right.

submitted by progfu
[link] [30 comments]
Categories: Incoming News

'Set' as a 'Map' to '()'

Haskell on Reddit - Sun, 07/13/2014 - 7:20am

    import Prelude hiding (filter)
    import qualified Data.Map as Map
    import Data.Map (Map)

    -- | A 'Set' as a 'Map' to '()'.
    newtype Set a = Set (Map a ()) deriving (Eq,Ord,Show)

    -- | An empty 'Set'.
    empty :: Set a
    empty = Set ()

    -- | Insert an element into a 'Set'.
    insert :: (Ord a) => a -> Set a -> Set a

    -- | Test if an element is in a 'Set'.
    member :: (Ord a) => a -> Set a -> Bool
    member _ empty = False

    -- | Filter all members that satisfy a predicate.
    filter :: (a -> Bool) -> Set a -> Set a
    filter _ empty = empty

Can anyone help me with the rest of these questions? I didn't really know how to understand this. Thanks.
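For reference, here is one way the remaining definitions might look (a sketch, not the only answer). Two things in the posted code are bugs worth noting: `empty = Set ()` should wrap an empty `Map`, and `empty` used in a pattern like `member _ empty` is just a fresh variable that matches anything, not the `empty` value defined above:

```haskell
import Prelude hiding (filter)
import qualified Data.Map as Map
import Data.Map (Map)

-- | A 'Set' as a 'Map' to '()'.
newtype Set a = Set (Map a ()) deriving (Eq, Ord, Show)

-- | An empty 'Set': wraps an empty 'Map', not the unit value.
empty :: Set a
empty = Set Map.empty

-- | Insert an element into a 'Set' by mapping it to '()'.
insert :: Ord a => a -> Set a -> Set a
insert x (Set m) = Set (Map.insert x () m)

-- | Test if an element is in a 'Set' via 'Map.member'.
member :: Ord a => a -> Set a -> Bool
member x (Set m) = Map.member x m

-- | Filter all members that satisfy a predicate; keys carry the elements.
filter :: (a -> Bool) -> Set a -> Set a
filter p (Set m) = Set (Map.filterWithKey (\k _ -> p k) m)
```

Each operation pattern-matches on the `Set` newtype to reach the underlying `Map` and delegates to the corresponding `Data.Map` function.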

submitted by fuuman1
[link] [9 comments]
Categories: Incoming News

Trying to learn monads - does this make sense?

Haskell on Reddit - Sun, 07/13/2014 - 7:15am
    result = do
      x <- NewVar      # creates a new var with value=0, and returns its id
      y <- NewVar
      Increase x 7     # increases by 7 the value of the "x" variable
      Increase y 3
      z <- Add x y     # creates a new var with value=10 and returns its id
      Increase x z     # increases by 10 the value of the "x" variable
      x

Result: 17

Does this make sense? If I coded the right monad, could this be valid Haskell code? What would that monad look like?

Someone pointed me to the State monad, but I still don't get it.
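One way to make the snippet concrete is exactly a state monad, threading a map from variable ids to values. A sketch using `Control.Monad.Trans.State` from transformers (all names illustrative, lower-cased to be valid Haskell):

```haskell
import qualified Data.Map as Map
import Control.Monad.Trans.State

type VarId = Int
type Env   = Map.Map VarId Int

-- Allocate a fresh variable initialised to 0 and return its id.
newVar :: State (Env, VarId) VarId
newVar = do
  (env, next) <- get
  put (Map.insert next 0 env, next + 1)
  return next

-- Increase a variable's value in place.
increase :: VarId -> Int -> State (Env, VarId) ()
increase v n = modify (\(env, nx) -> (Map.adjust (+ n) v env, nx))

-- Read a variable's current value.
readVar :: VarId -> State (Env, VarId) Int
readVar v = gets (\(env, _) -> env Map.! v)

-- Allocate a new variable holding the sum of two others.
add :: VarId -> VarId -> State (Env, VarId) VarId
add x y = do
  s <- (+) <$> readVar x <*> readVar y
  v <- newVar
  increase v s
  return v

example :: Int
example = flip evalState (Map.empty, 0) $ do
  x <- newVar
  y <- newVar
  increase x 7
  increase y 3
  z <- add x y
  zv <- readVar z
  increase x zv
  readVar x
```

`example` evaluates to 17, matching the result in the post: x goes 0, 7, then 7 + 10.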

submitted by Padavo
[link] [11 comments]
Categories: Incoming News

[ANN] cabal-bounds 0.7: update bounds by haskell platform release

Haskell on Reddit - Sun, 07/13/2014 - 6:44am

cabal-bounds[1] is a command line program for managing the bounds/versions of the dependencies in a cabal file.

cabal-bounds 0.7 adds the feature to set the bounds of dependencies to the library versions used by a haskell platform[2] release.

For further details please consult the README[3].

[1] [2] [3]

submitted by dan00
[link] [17 comments]
Categories: Incoming News

book "Haskell Data Analysis Cookbook" by NishantShukla

haskell-cafe - Sun, 07/13/2014 - 12:54am
I believe there is a non-thread-safe code fragment in Chapter 1 (page 15): import System.Directory (doesFileExist) ..... exist <- doesFileExist filename
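The likely issue is the check-then-act race: the file can disappear (or appear) between `doesFileExist` and the subsequent access. The usual fix is to attempt the operation and handle failure; a sketch (`safeRead` is a hypothetical helper, not from the book):

```haskell
import Control.Exception (IOException, try)

-- Attempt the read directly instead of checking existence first;
-- any I/O failure (e.g. a missing file) becomes Nothing.
safeRead :: FilePath -> IO (Maybe String)
safeRead path = do
  r <- try (readFile path) :: IO (Either IOException String)
  return (either (const Nothing) Just r)
```

Since `readFile` is lazy, errors can still surface mid-read; a strict read (e.g. `Data.ByteString.readFile`) avoids that wrinkle.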
Categories: Offsite Discussion

GHC backend for MIPS

Haskell on Reddit - Sat, 07/12/2014 - 11:08pm

Has anybody tried to use a ghc cross-compiler for MIPS? The information on looks outdated and hopeless.

But I've found some mentions that someone uses ghc on MIPS boards.

My naive attempts to use the llvm backend failed.

So, are there any prospects of building a ghc cross-compiler for MIPS, or is it just a waste of time?

submitted by voidlizard
[link] [9 comments]
Categories: Incoming News

[ANNOUNCE] New release of SBV (v3.1)

General haskell list - Sat, 07/12/2014 - 8:49pm
I'm pleased to announce the v3.1 release of SBV, a library for integrating SMT solvers into Haskell. This release coincides with GHC 7.8.3: a prior bug in the 7.8 series caused SBV to crash under heavy load. GHC 7.8.3 fixes this bug, so if you're an SBV user, please upgrade to GHC 7.8.3 along with your version of SBV. Also new in this release are two oft-requested features:

- Parallel solving capabilities: using multiple SMT solvers at the same time to get the fastest result (speed), or all results (to make sure they all behave the same way, safety).
- A variant of symbolic if-then-else (called sBranch) that can call the external solver during simulation before it symbolically simulates the "then" and "else" branches. This is useful for programming with recursive functions where termination depends on symbolic values.

Full release notes: SBV web page: As usual, bug reports and feedback are most welcome! -Levent.
Categories: Incoming News

Oliver Charles: Announcing engine-io and socket-io for Haskell

Planet Haskell - Sat, 07/12/2014 - 6:00pm

I’ve just released three new libraries to Hackage:

  1. engine-io
  2. engine-io-snap
  3. socket-io

Engine.IO is a new framework from Automattic, which provides an abstraction for real-time client/server communication over the web. You can establish communication channels with clients over XHR long-polling, which works even through proxies and aggressive traffic rewriting, and connections are upgraded to use HTML 5 web sockets if available to reduce latency. Engine.IO also allows the transmission of binary data without overhead, while also gracefully falling back to using base 64 encoding if the client doesn’t support raw binary packets.

This is all very desirable stuff, but you’re going to have a hard time convincing me that I should switch to Node.js! I’m happy to announce that we now have a Haskell implementation for Engine.IO servers, which can be successfully used with the Engine.IO JavaScript client. A simple application may look like the following:

    {-# LANGUAGE FlexibleContexts #-}
    {-# LANGUAGE OverloadedStrings #-}
    module Main where

    import Control.Monad (forever)
    import qualified Control.Concurrent.STM as STM
    import qualified Network.EngineIO as EIO
    import qualified Network.EngineIO.Snap as EIOSnap
    import qualified Snap.CORS as CORS
    import qualified Snap.Http.Server as Snap

    handler :: EIO.Socket -> IO ()
    handler s = forever $
      STM.atomically $ EIO.receive s >>= EIO.send s

    main :: IO ()
    main = do
      eio <- EIO.initialize
      Snap.quickHttpServe $
        CORS.applyCORS CORS.defaultOptions $
          EIO.handler eio (pure handler) EIOSnap.snapAPI

This example uses engine-io-snap to run an Engine.IO application using Snap’s server, which allows me to concentrate on the important stuff. The body of the application is the handler, which is called every time a socket connects. In this case, we have a basic echo server, which constantly reads (blocking) from the client, and echos the message straight back.

As mentioned, you can also do binary transmission - the following handler transmits the lovable doge.png to clients:

    handler s = do
      bytes <- BS.readFile "doge.png"
      STM.atomically $ EIO.send s (EIO.BinaryPacket bytes)

On the client side, this can be displayed as an image by using data URIs, or manipulated using the HTML 5 File API.


Socket.IO builds on top of Engine.IO to provide an abstraction to build applications in terms of events. In Socket.IO, clients connect to a server, and then receive and emit events, which can often provide a simpler architecture for web applications.

My Socket.IO implementation in Haskell also strives for simplicity: by taking advantage of the aeson library, a lot of the encoding and decoding of packets is hidden, allowing you to focus on your application logic. I've implemented the example chat application, originally written in Node.js, using my Haskell server:

    data AddUser = AddUser Text.Text

    instance Aeson.FromJSON AddUser where
      parseJSON = Aeson.withText "AddUser" $ pure . AddUser

    data NumConnected = NumConnected !Int

    instance Aeson.ToJSON NumConnected where
      toJSON (NumConnected n) = Aeson.object [ "numUsers" .= n ]

    data NewMessage = NewMessage Text.Text

    instance Aeson.FromJSON NewMessage where
      parseJSON = Aeson.withText "NewMessage" $ pure . NewMessage

    data Said = Said Text.Text Text.Text

    instance Aeson.ToJSON Said where
      toJSON (Said username message) = Aeson.object
        [ "username" .= username
        , "message"  .= message
        ]

    data UserName = UserName Text.Text

    instance Aeson.ToJSON UserName where
      toJSON (UserName un) = Aeson.object [ "username" .= un ]

    data UserJoined = UserJoined Text.Text Int

    instance Aeson.ToJSON UserJoined where
      toJSON (UserJoined un n) = Aeson.object
        [ "username" .= un
        , "numUsers" .= n
        ]

    --------------------------------------------------------------------------------

    data ServerState = ServerState { ssNConnected :: STM.TVar Int }

    server :: ServerState -> SocketIO.Router ()
    server state = do
      userNameMVar <- liftIO STM.newEmptyTMVarIO

      let forUserName m =
            liftIO (STM.atomically (STM.tryReadTMVar userNameMVar)) >>= mapM_ m

      SocketIO.on "new message" $ \(NewMessage message) ->
        forUserName $ \userName ->
          SocketIO.broadcast "new message" (Said userName message)

      SocketIO.on "add user" $ \(AddUser userName) -> do
        n <- liftIO $ STM.atomically $ do
          n <- (+ 1) <$> STM.readTVar (ssNConnected state)
          STM.putTMVar userNameMVar userName
          STM.writeTVar (ssNConnected state) n
          return n
        SocketIO.emit "login" (NumConnected n)
        SocketIO.broadcast "user joined" (UserJoined userName n)

      SocketIO.on_ "typing" $
        forUserName $ \userName ->
          SocketIO.broadcast "typing" (UserName userName)

      SocketIO.on_ "stop typing" $
        forUserName $ \userName ->
          SocketIO.broadcast "stop typing" (UserName userName)

We define a few data types and their JSON representations, and then define our server application below. Users of the library don’t have to worry about parsing and validating data for routing, as this is handled transparently by defining event handlers. In the above example, we listen for the add user event, and expect it to have a JSON payload that can be decoded to the AddUser data type. This follows the best-practice of pushing validation to the boundaries of your application, so you can spend more time working with stronger types.

By stronger types, I really do mean stronger types - at Fynder we’re using this very library with the singletons library in order to provide strongly typed publish/subscribe channels. If you’re interested in this, be sure to come along to the Haskell eXchange, where I’ll be talking about exactly that!

Categories: Offsite Blogs

New release of SBV (v3.1)

Haskell on Reddit - Sat, 07/12/2014 - 1:51pm
Categories: Incoming News

Interactive scientific computing; of pythonic parts and goldilocks languages

Lambda the Ultimate - Sat, 07/12/2014 - 12:25pm

Graydon Hoare has an excellent series of (two) blog posts about programming languages for interactive scientific computing.
technicalities: interactive scientific computing #1 of 2, pythonic parts
technicalities: interactive scientific computing #2 of 2, goldilocks languages

The scenario of these posts is to explain and contrast two scientific computing languages, Python ("SciPy/SymPy/NumPy, IPython, and Sage") on one side and Julia on the other, as the result of two different design traditions: one (Python) following Ousterhout's Dichotomy of having a convenient scripting language on top of a fast systems language, and the other rejecting it (in the tradition of Lisp/Dylan and ML), promoting a single general-purpose language.

I don't necessarily buy the whole argument, but the posts are a good read, and have some rather insightful comments about programming language use and design.

Quotes from the first post:

There is a further split in scientific computing worth noting, though I won't delve too deep into it here; I'll return to it in the second post on Julia. There is a division between "numerical" and "symbolic" scientific systems. The difference has to do with whether the tool is specialized to working with definite (numerical) data, or indefinite (symbolic) expressions, and it turns out that this split has given rise to quite radically different programming languages at the interaction layer of such tools, over the course of computing history. The symbolic systems typically (though not always) have much better-engineered languages. For reasons we'll get to in the next post.


I think these systems are a big deal because, at least in the category of tools that accept Ousterhout's Dichotomy, they seem to be about as good a set of hybrid systems as we've managed to get so far. The Python language is very human-friendly, the systems-level languages and libraries that it binds to are well enough supported to provide adequate speed for many tasks, the environments seem as rich as any interactive scientific computing systems to date, and (crucially) they're free, open source, universally available, easily shared and publication-friendly. So I'm enjoying them, and somewhat hopeful that they take over this space.

Quotes from the second:

the interesting history here is that in the process of implementing formal reasoning tools that manipulate symbolic expressions, researchers working on logical frameworks -- i.e. with background in mathematical logic -- have had a tendency to produce implementation languages along the way that are very ... "tidy". Tidy in a way that befits a mathematical logician: orthogonal, minimal, utterly clear and unambiguous, defined more in terms of mathematical logic than machine concepts. Much clearer than other languages at the time, and much more amenable to reasoning about. The original manual for the Logic Theory Machine and IPL (1956) makes it clear that the authors were deeply concerned that nothing sneak into their implementation language that was some silly artifact of a machine; they wanted a language that they could hand-simulate the reasoning steps of, that they could be certain of the meaning of their programs. They were, after all, translating Russell and Whitehead into mechanical form!


In fact, the first couple generations of "web languages" were abysmally inefficient. Indirect-threaded bytecode interpreters were the fast case: many were just AST-walking interpreters. PHP initially implemented its loops by fseek() in the source code. It's a testament to the amount of effort that had to go into building the other parts of the web -- the protocols, security, naming, linking and information-organization aspects -- that the programming languages underlying it all could be pretty much anything, technology-wise, so long as they were sufficiently web-friendly.

Of course, performance always eventually returns to consideration -- computers are about speed, fundamentally -- and the more-dynamic nature of many of the web languages eventually meant (re)deployment of many of the old performance-enhancing tricks of the Lisp and Smalltalk family, in order to speed up later generations of the web languages: generational GC, JITs, runtime type analysis and specialization, and polymorphic inline caching in particular. None of these were "new" techniques; but it was new for industry to be willing to rely completely on languages that benefit, or even require, such techniques in the first place.


Julia, like Dylan and Lisp before it, is a Goldilocks language. Done by a bunch of Lisp hackers who seriously know what they're doing.

It is trying to span the entire spectrum of its target users' needs, from numerical inner loops to glue-language scripting to dynamic code generation and reflection. And it's doing a very credible job at it. Its designers have produced a language that seems to be a strict improvement on Dylan, which was itself excellent. Julia's multimethods are type-parametric. It ships with really good multi-language FFIs, green coroutines and integrated package management. Its codegen is LLVM-MCJIT, which is as good as it gets these days.

Categories: Offsite Discussion