News aggregator

cabal sandbox: Can't load .so/.DLL for...

haskell-cafe - Wed, 10/08/2014 - 8:18am
Hi all, I have some trouble with cabal sandboxes. I used `cabal sandbox add-source` to add a library. When I add the library as a dependency of my project and try to build, I always get:

<command line>: can't load .so/.DLL for: /Users/joel/workspace/haskell/bookingsystem/.cabal-sandbox/lib/x86_64-osx-ghc-7.8.3/google-maps-0.1.0.0/libHSgoogle-maps-0.1.0.0-ghc7.8.3.dylib
(dlopen(/Users/joel/workspace/haskell/bookingsystem/.cabal-sandbox/lib/x86_64-osx-ghc-7.8.3/google-maps-0.1.0.0/libHSgoogle-maps-0.1.0.0-ghc7.8.3.dylib, 9): Symbol not found: _googlezmmapszm0zi1zi0zi0_WebziGoogleziMapsziInternal_queryAPI_closure
Referenced from: /Users/joel/workspace/haskell/bookingsystem/.cabal-sandbox/lib/x86_64-osx-ghc-7.8.3/google-maps-0.1.0.0/libHSgoogle-maps-0.1.0.0-ghc7.8.3.dylib
Expected in: flat namespace
in /Users/joel/workspace/haskell/bookingsystem/.cabal-sandbox/lib/x86_64-osx-ghc-7.8.3/google-maps-0.1.0.0/libHSgoogle-maps-0.1.0.0-ghc7.8.3.dylib)

I couldn't find a solution so far. Can anyone help?
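For what it's worth, errors like this often mean the sandbox still holds a stale build of the add-source library after its exports changed. A hedged sketch of the usual recovery steps (the `../google-maps` path is a guess at the layout; substitute your own):

```shell
# Run from the project root; ../google-maps is a hypothetical path to
# the add-source library's source tree.
cd ~/workspace/haskell/bookingsystem
cabal sandbox add-source ../google-maps

# Force the sandbox to rebuild the dependency against the current source
cabal install --reinstall google-maps

# If the symbol error persists, rebuild the sandbox from scratch:
# cabal sandbox delete
# cabal sandbox init
# cabal install --only-dependencies
```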
Categories: Offsite Discussion

Functional Jobs: Software Engineer / Developer at Clutch Analytics/ Windhaven Insurance (Full-time)

Planet Haskell - Wed, 10/08/2014 - 7:26am

Position Description:

Windhaven Insurance is seeking an experienced Software Engineer/ Developer to join a small elite team who are disrupting the auto insurance industry with innovative technology. You will work in a startup atmosphere as part of a subsidiary of a rapidly growing larger company. This means that along with programming, you’ll have a hand in the larger issues, such as product architecture and direction.

Required:

  • Someone who knows at least one functional language, such as Elixir, Erlang, Lisp, Scheme, Haskell, ML, Clojure, Racket, OCaml or F#
  • Someone who ENGAGES a.k.a “gives a damn” about what we do and how technology can help make us more competitive in the marketplace.
  • Someone who COLLABORATES. We have the flattest organization in the industry designed with one main goal – the TEAM. Are you hungry to make a significant impact in the tech world?
  • Someone who RESPECTS Teammates, customers and the community.

Special Requirements:

You need to have made an achievement, in any field, of significance worth talking about, one that required grit, determination and skill. This can be a personal achievement that few know about, or it could have received media coverage. Either way, what matters is some demonstration of grit, determination and skill. If it can be described succinctly, please describe the achievement in your cover letter; if not, be prepared to tell us all about it during the interview.

Professional & Technical Qualifications:

  • Experience with languages such as Elixir or Erlang
  • Experience with Ember.js (or Angular.js)
  • Experience with NoSQL data stores, such as Couchbase, Riak, etc.
  • DevOps experience a plus
  • Ability to explain technical issues and recommend solutions
  • Strong team player with a high degree of flexibility
  • Excellent verbal and written communication skills

Compensation:

Competitive salary based on experience. Benefits package includes: medical, dental, vision insurance, life insurance, short term and long term disability insurance, 401K, paid time off. EOE.

Get information on how to apply for this position.

Categories: Offsite Blogs

IO Exceptions through monad transformers stack

haskell-cafe - Tue, 10/07/2014 - 10:25pm
Hi, I’m not sure if this is the right list for “base” questions like this; if there is a more specific list, please let me know. I’m writing a parser and it works in a stack of monad transformers defined in this way: type Parser = StateT ParserState (ExceptT ParserError IO) The base monad is IO because the parser may need to load other files included by the file that’s being parsed. The ExceptT transformer makes it easy for the lexer and the parser to signal errors. At the end I have a runParser function that composes runExceptT and evalStateT and provides me with a nice Either that contains either the parsed result or a value of type ParserError. The ParserError data type is the following: data ParserError = LexingError <fields> | ParsingError <fields> Currently, whenever the parser encounters an error I simply do throwError $ ParsingError something…, for example. What I would like to do is to report the possible IO errors that can occur in the same way, by adding an IOError constructor
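One common way to do this, sketched here under the poster's proposed IOError constructor (with ParserState collapsed to () for brevity), is to catch IOExceptions with `try` and rethrow them through the ExceptT layer:

```haskell
{-# LANGUAGE ScopedTypeVariables #-}
import Control.Exception (IOException, try)
import Control.Monad.IO.Class (liftIO)
import Control.Monad.Trans.Class (lift)
import Control.Monad.Trans.Except (ExceptT, runExceptT, throwE)
import Control.Monad.Trans.State (StateT, evalStateT)

data ParserError
  = LexingError String
  | ParsingError String
  | IOError IOException      -- the proposed constructor for wrapped IO failures
  deriving Show

type ParserState = ()        -- placeholder state for this sketch
type Parser = StateT ParserState (ExceptT ParserError IO)

-- Run an IO action, converting any IOException into a ParserError.
safeIO :: IO a -> Parser a
safeIO act = do
  r <- liftIO (try act)
  case r of
    Left (e :: IOException) -> lift (throwE (IOError e))
    Right a                 -> return a

runParser :: Parser a -> IO (Either ParserError a)
runParser p = runExceptT (evalStateT p ())

main :: IO ()
main = do
  res <- runParser (safeIO (readFile "/no/such/file"))
  case res of
    Left (IOError _) -> putStrLn "caught IOError"
    _                -> putStrLn "unexpected"
```

With this, included files can be loaded via `safeIO (readFile path)` and any failure surfaces in the same Either as lexing and parsing errors.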
Categories: Offsite Discussion

Fingerprinting Haskell Objects

haskell-cafe - Tue, 10/07/2014 - 9:30pm
Hello everybody, I have a little question I wanted to run by the folks here. I've run into it several times over the past few years and would love to lock down a good answer. What's the best way to "fingerprint" a Haskell object into, say, a ByteString, so that this fingerprint can be used as the "lookup key" in a database (for example) and be trusted to remain constant over time even as the underlying libraries evolve? Here's a simple example:

  • Say I'm building a manual index on top of a key-value store (redis, dynamodb, etc.)
  • I want my keys to be arbitrary tuples (or similar records) that may contain various fields in them
  • I would like to avoid ad-hoc, hand-written MyTuple -> ByteString and ByteString -> MyTuple conversions. However, Generic derivations, template-haskell, etc. are acceptable
  • Notice how your fingerprint, which is used as a lookup key in the database, has to remain stationary. If it changes even by a single bit over time for the same MyTupl
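One hedged sketch of an answer is to sidestep generic serializers, whose byte layout may drift between library versions, and pin an explicit, version-tagged encoding (MyTuple and its fields are made up here):

```haskell
-- A minimal "hand-versioned" fingerprint: the key format is spelled out
-- explicitly and tagged with a version, so it can only change deliberately.
import qualified Data.ByteString.Char8 as B

data MyTuple = MyTuple { userId :: Int, region :: String }

fingerprint :: MyTuple -> B.ByteString
fingerprint (MyTuple u r) =
  B.intercalate (B.pack ":") [B.pack "v1", B.pack (show u), B.pack r]
```

Generic or TH machinery can still generate such encoders, as the poster allows, provided the derived byte layout is frozen behind a version tag and covered by round-trip tests.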
Categories: Offsite Discussion

[help] Setting up Haskell on a Mac?

Haskell on Reddit - Tue, 10/07/2014 - 8:36pm

I've got GHCi running fine, but I'm having a hard time setting up my editor. Does anyone know how? I'm super new to OS X, which doesn't help.

It was easy on Windows using :set editor <path>

But that isn't the case here it seems.
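For what it's worth, GHCi supports the same :set editor command on OS X; one sketch (assuming vim at /usr/bin/vim, adjust to taste) is to put it in the ~/.ghci file so it persists across sessions:

```
-- ~/.ghci
:set editor /usr/bin/vim
```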

Also, anything else I'll need to get up and running on OSX would be awesome.

Thanks in advance.

submitted by qspec02
Categories: Incoming News

ANN: islink 0.1.0.0: check if an HTML element is a link (useful for web scraping)

haskell-cafe - Tue, 10/07/2014 - 7:59pm
Hello everybody, I'd like to announce the first public release of islink. It's a library that provides a list of combinations of HTML tag names and attributes that correspond to links to external resources. This includes things like ("a", "href"), ("img", "src"), ("script", "src"), etc. It also comes with a convenience function to check whether a particular (tag, attribute) pair corresponds to a link. This can be useful for web scraping. Here's an example of how to use it to extract all (external) links from an HTML document (with the help of hxt): {-# LANGUAGE Arrows #-} import Text.Html.IsLink import Text.XML.HXT.Core
Categories: Offsite Discussion

Why haskellers tend to use data-structures directly, instead of type classes?

Haskell on Reddit - Tue, 10/07/2014 - 5:18pm

When reading Haskell code, I notice almost always Haskellers tend to use data-structures directly, as opposed to just using type classes. For example, when implementing a graphing library, one would implement a function such as:

fillRect :: Color → Bounds → Image → Image

As opposed to:

fillRect :: Image a => Color → Bounds → a → a

I see this is also recommended, if not enforced, by the Prelude, considering that is how it does things as well.

My problem with such an approach is that, after your code is finished, if you decide to change the structure used you will have to refactor every single piece of code using it. That is completely unacceptable in 2014, even more so considering that even in old languages such as C++ the whole refactoring would be a matter of changing a single typedef line.
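For concreteness, the class-based alternative the poster has in mind might look like the following sketch (Color, Bounds, and PixelGrid are hypothetical placeholders, not any real library's API):

```haskell
-- Hypothetical placeholder types for the example.
type Color = Int
type Bounds = ((Int, Int), (Int, Int))

-- Programming against the class means swapping representations later
-- only requires writing a new instance, not refactoring call sites.
class Image a where
  fillRect :: Color -> Bounds -> a -> a

-- One possible concrete representation.
newtype PixelGrid = PixelGrid [[Color]]
  deriving (Eq, Show)

instance Image PixelGrid where
  fillRect c ((x0, y0), (x1, y1)) (PixelGrid rows) =
    PixelGrid
      [ [ if x0 <= x && x <= x1 && y0 <= y && y <= y1 then c else p
        | (x, p) <- zip [0 ..] row ]
      | (y, row) <- zip [0 ..] rows ]
```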

Just, why???

submitted by SrPeixinho
Categories: Incoming News

Neil Mitchell: Bake - Continuous Integration System

Planet Haskell - Tue, 10/07/2014 - 3:24pm

Summary: I've written a continuous integration system, in Haskell, designed for large projects. It works, but probably won't scale yet.

I've just released bake, a continuous integration system - an alternative to Jenkins, Travis, Buildbot etc. Bake eliminates the problem of "broken builds": a patch is never merged into the repo before it has passed all the tests.

Bake is designed for large, productive, semi-trusted teams:

  • Large teams where there are at least several contributors working full-time on a single code base.
  • Productive teams which are regularly pushing code, many times a day.
  • Semi-trusted teams where code does not go through manual code review, but code does need to pass a test suite and perhaps some static analysis. People are assumed to be fallible.

Current state: At the moment I have a rudimentary test suite, and it seems to mostly work, but Bake has never been deployed for real. Some optional functionality doesn't work, some of the web UI is a bit crude, the algorithms probably don't scale and all console output from all tests is kept in memory forever. I consider the design and API to be complete, and the scaling issues to be easily fixable - but it's easier to fix after it becomes clear where the bottleneck is. If you are interested, take a look, and then email me.

To give a flavour, here is what the web GUI of a running Bake system looks like (screenshot in the original post):

The Design

Bake is a Haskell library that can be used to put together a continuous integration server. To run Bake you start a single server for your project, which coordinates tasks, provides an HTTP API for submitting new patches, and a web-based GUI for viewing the progress of your patches. You also run some Bake clients which run the tests on behalf of the server. While Bake is written in Haskell, most of the tests are expected to just call some system command.

There are a few aspects that make Bake unique:

  • Patches are submitted to Bake, but are not applied to the main repo until they have passed all their tests. There is no way for someone to "break the build" - at all points the repo will build on all platforms and all tests will pass.
  • Bake scales up so that even if you have 5 hours of testing and 50 commits a day it will not require 250 hours of computation per day. In order for Bake to prove that a set of patches pass a test, it does not have to test each patch individually.
  • Bake allows multiple clients to run tests, even if some tests are only able to be run on some clients, allowing both parallelisation and specialisation (testing both Windows and Linux, for example).
  • Bake can detect that tests are no longer valid, for example because they access a server that is no longer running, and report the issue without blaming the submitted patches.
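The claim about not testing each patch individually can be made concrete with a batch-and-bisect sketch. This is an illustration of the general idea only, not Bake's actual algorithm:

```haskell
-- Test a whole batch of patches at once; only when the combined run
-- fails do we split and recurse, so a day of mostly-good patches needs
-- far fewer test runs than one run per patch.
blame :: ([p] -> IO Bool) -> [p] -> IO [p]  -- returns the failing patches
blame test ps = do
  ok <- test ps
  if ok
    then return []
    else if length ps <= 1
      then return ps
      else do
        let (a, b) = splitAt (length ps `div` 2) ps
        (++) <$> blame test a <*> blame test b
```

With 50 patches and a single bad one, this needs on the order of log2(50) extra runs rather than 50.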
An Example

The test suite provides both an example configuration and commands to drive it. Here we annotate a slightly simplified version of the example; for the lists of imports, see the original code.

First we define an enumeration for where we want tests to run. Our server is going to require tests on both Windows and Linux before a patch is accepted.

data Platform = Linux | Windows deriving (Show,Read)
platforms = [Linux,Windows]

Next we define the test type. A test is something that must pass before a patch is accepted.

data Action = Compile | Run Int deriving (Show,Read)

Our type is named Action. We have two distinct types of tests, compiling the code, and running the result with a particular argument. Now we need to supply some information about the tests:

allTests = [(p,t) | p <- platforms, t <- Compile : map Run [1,10,0]]

execute :: (Platform,Action) -> TestInfo (Platform,Action)
execute (p,Compile) = matchOS p $ run $ do
    cmd "ghc --make Main.hs"
execute (p,Run i) = require [(p,Compile)] $ matchOS p $ run $ do
    cmd ("." </> "Main") (show i)

We have to declare allTests, the list of all tests that must pass, and execute, which gives information about a test. Note that the test type is (Platform,Action): a test is a Platform (where to run the test) and an Action (what to run). The run function gives an IO action to run, and require specifies dependencies. We use an auxiliary matchOS to detect whether a test is running on the right platform:

#if WINDOWS
myPlatform = Windows
#else
myPlatform = Linux
#endif

matchOS :: Platform -> TestInfo t -> TestInfo t
matchOS p = suitable (return . (==) myPlatform)

We use the suitable function to declare whether a test can run on a particular client. Finally, we define the main function:

main :: IO ()
main = bake $
    ovenGit "http://example.com/myrepo.git" "master" $
    ovenTest readShowStringy (return allTests) execute
    defaultOven{ovenServer=("127.0.0.1",5000)}

We define main = bake, then fill in some configuration. We first declare we are working with Git, and give a repo name and branch name. Next we declare what the tests are, passing the information about the tests. Finally we give a host/port for the server, which we can visit in a web browser or access via the HTTP API.

Using the Example

Now we have defined the example, we need to start up some servers and clients using the command line for our tool. Assuming we compiled as bake, we can write bake server and bake client (we'll need to launch at least one client per OS). We can view the state by visiting http://127.0.0.1:5000 in a web browser.

To add a patch we can run bake addpatch --name=cb3c2a71, using the SHA1 of the commit, which will try and integrate that patch into the master branch, after all the tests have passed.

Categories: Offsite Blogs

How do I install QuickCheck?

Haskell on Reddit - Tue, 10/07/2014 - 2:21pm

I've tried using "cabal install QuickCheck" but I get this message.

Is there another way I can install QuickCheck or something I'm not doing right on the cabal install?

submitted by willrobertshaw
Categories: Incoming News

formally expressing type class "laws"

Haskell on Reddit - Tue, 10/07/2014 - 2:07pm

As a newcomer to Haskell (not even that, only a learner), I have been quite surprised by how relaxed Haskellers are about stating type class laws, given how forthright they are about the absolute rightness of a strong type system.

When Haskellers talk about how good it is that they can reason about their code because of the laws, it seems to me like a JavaScript developer saying "it's wonderful, I don't need to check that I have been passed a valid URL, because it's in the API specification in the README".

I might even risk contributing to a meme by saying "every sufficiently complicated Haskell library has an incomplete and informally-constructed Agda program in the blog posts expounding on it".

I am not here to push an inevitable march towards dependent typing. I would just like to be able to have a way of formally documenting these laws. Of course once the properties are there I'm sure there are many things that could be done:

  • plugging in an external theorem prover
  • generating quick-check tests
  • allowing the compiler to rely on them for optimisation - for instance, if the compiler knows that the function applied in a fold is associative, then the operation can be parallelised (I can't imagine this could be done in any other way than recognising particular special cases, but it could still be profitable even then)
  • other things

I understand that achieving this is probably more complicated than I could possibly imagine. What surprises me is that I don't even hear anyone worrying about it.

submitted by maninalift
Categories: Incoming News

PLT Redex: The Summer School, Call for Participation

General haskell list - Tue, 10/07/2014 - 2:04pm
PLT REDEX: THE SUMMER SCHOOL
CALL for PARTICIPATION
Matthias Felleisen, Robert Bruce Findler, Matthew Flatt
LOCATION: University of Utah, Salt Lake City
DATES: July 27 - July 31, 2015
http://www.cs.utah.edu/~mflatt/plt-redex/

PLT Redex is a lightweight, embedded DSL for modeling programming languages, their reduction semantics, and their type systems. It comes with an IDE and a toolbox for exploring, testing, debugging, and type-setting language models. The PLT research group has successfully used Redex to model and analyze a wide spectrum of published models.

The summer school will introduce students to the underlying theory of reduction semantics, programming in the Redex language, and using its tool suite effectively. The course is intended for PhD students and researchers in programming languages. Enrollment is limited to 25 attendees.

While the workshop itself is free, attendees must pay for travel, room, and board. We expect room and board to be around $500, assuming an arrival in the evening of
Categories: Incoming News

Brandon Simmons: Announcing unagi-chan

Planet Haskell - Tue, 10/07/2014 - 11:41am

Today I released version 0.2 of unagi-chan, a haskell library implementing fast and scalable FIFO queues with a nice and familiar API. It is available on hackage and you can install it with:

$ cabal install unagi-chan

This version provides a bounded queue variant (and closes issue #1!) that has performance on par with the other variants in the library. This is something I'm somewhat proud of, considering that the standard TBQueue is not only significantly slower than e.g. TQueue, but was also seen to livelock at a fairly low level of concurrency (and so is not included in the benchmark suite).

Here are some example benchmarks. Please do try the new bounded version and see how it works for you.

What follows are a few random thoughts more or less generally-applicable to the design of bounded FIFO queues, especially in a high-level garbage-collected language. These might be obvious, uninteresting, or unintelligible.

What is Bounding For?

I hadn’t really thought much about this before: a bounded queue limits memory consumption because the queue is restricted from growing beyond some size.

But this isn’t quite right. If, for instance, we implement a bounded queue by pre-allocating an array of size bounds, then a write operation need not consume any additional memory; indeed, the value to be written has already been allocated on the heap before the write even begins, and will persist whether the write blocks or returns immediately.

Instead, constraining memory usage is a knock-on effect of what we really care about: backpressure. When the ratio of “producers” to their writes is high (the usual scenario), blocking a write may limit memory usage by delaying heap allocations associated with elements for future writes.

So bounded queues with blocking writes let us:

  • when threads are “oversubscribed”, transparently indicate to the runtime which work has priority
  • limit future resource usage (CPU time and memory) by producer threads

We might also like our bounded queue to support a non-blocking write which returns immediately with success or failure. This might be thought of (depending on the capabilities of your language’s runtime) as more general than a blocking write, but it also supports a distinctly different notion of bounding, that is bounding message latency: a producer may choose to drop messages when a consumer falls behind, in exchange for lower latency for future writes.
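As a toy illustration of write-side backpressure (this is not how Unagi.Bounded is implemented), a plain Chan can be guarded by a quantity semaphore so that writes block once n unread elements are in flight:

```haskell
import Control.Concurrent.Chan
import Control.Concurrent.QSem

data BoundedChan a = BoundedChan QSem (Chan a)

newBoundedChan :: Int -> IO (BoundedChan a)
newBoundedChan n = BoundedChan <$> newQSem n <*> newChan

-- Blocking write: waits for capacity before enqueueing.
writeBounded :: BoundedChan a -> a -> IO ()
writeBounded (BoundedChan sem ch) x = waitQSem sem >> writeChan ch x

-- A read frees one unit of capacity, unblocking a waiting writer.
readBounded :: BoundedChan a -> IO a
readBounded (BoundedChan sem ch) = do
  x <- readChan ch
  signalQSem sem
  return x
```

Note that this sketch is not async-exception-safe (a killed writer leaks a capacity unit), which is exactly the sort of difficulty the post goes on to discuss.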

Unagi.Bounded Implementation Ideas

Trying to unpack the ideas above helped in a few ways when designing Unagi.Bounded. Here are a few observations I made.

We need not block before “writing”

When implementing blocking writes, my intuition was to (when the queue is “full”) have writers block before “making the message available” (whatever that means for your implementation). For Unagi that means blocking on an MVar, and then writing a message to an assigned array index.

But this ordering presents a couple of problems: first, we need to be able to handle async exceptions raised during writer blocking; if its message isn’t yet “in place” then we need to somehow coordinate with the reader that would have received this message, telling it to retry.

By unpacking the purpose of bounding it became clear that we’re free to block at any point during the write (because the write per se does not have the memory-usage implications we originally naively assumed it had), so in Unagi.Bounded writes proceed exactly like in our other variants, until the end of the writeChan, at which point we decide when to block.

This is certainly also better for performance: if a wave of readers comes along, they need not wait (themselves blocking) for previously blocked writers to make their messages available.

One hairy detail of this approach: an async exception raised in a blocked writer does not cause that write to be aborted; i.e. once entered, writeChan always succeeds. Reasoning in terms of linearizability, this only affects situations in which a writer thread is known to be blocked and we would like to abort that write.

Fine-grained writer unblocking is probably unnecessary and harmful

In Unagi.Bounded I relax the bounds constraint to “somewhere between bounds and bounds*2”. This allows me to eliminate a lot of coordination between readers and writers by using a single reader to unblock up to bounds number of writers. This constraint (along with the constraint that bounds be a power of two, for fast modulo) seemed like something everyone could live with.
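The power-of-two requirement buys a cheap index wrap: for a power-of-two n, taking a modulus reduces to a bitmask. A minimal illustration:

```haskell
import Data.Bits ((.&.))

-- Equivalent to i `mod` n for non-negative i, but without division;
-- valid only when n is a power of two.
fastMod :: Int -> Int -> Int
fastMod i n = i .&. (n - 1)
```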

I also guess that this “cohort unblocking” behavior could result in some nicer stride behavior, with more consecutive non-blocking reads and writes, rather than having a situation where the queue is almost always either completely full or empty.

One-shot MVars and Semaphores

This has nothing to do with queues, but this is a good place for the observation: garbage-collected languages permit some interesting non-traditional concurrency patterns. For instance, I use MVars and IORefs that only ever go from empty to full, or follow a single linear progression of three or four states in their lifetime. Often it's easier to design algorithms this way, rather than with long-lived mutable variables (for instance, I struggled to come up with a blocking bounded queue design that used a circular buffer and could be made async-exception-safe).
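A minimal example of the one-shot pattern described above: an MVar created empty, filled exactly once, and only ever read (never taken), so every blocked reader observes the same value:

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.MVar

main :: IO ()
main = do
  done <- newEmptyMVar
  _ <- forkIO (putMVar done (42 :: Int))  -- the single empty-to-full transition
  x <- readMVar done                      -- blocks until filled; never empties it
  print x
```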

Similarly the CAS operation (which I get exported from atomic-primops) turns out to be surprisingly versatile far beyond the traditional read/CAS/retry loop, and to have very useful semantics when used on short-lived variables. For instance throughout unagi-chan I do both of the following:

  • CAS without inspecting the return value, content that we or any other competing thread succeeded.

  • CAS using a known initial state, avoiding an initial read

Categories: Offsite Blogs

Define laws for read and show?

Haskell on Reddit - Tue, 10/07/2014 - 10:32am

The documentation for Text.Show states that this holds for derived instances:

The result of show is a syntactically correct Haskell expression [...]

This fact can be used to pass states of certain computations around as text files or similar objects, because show converts the data to a string representation, while show's complement read enables continuing the earlier computations.

Of course the above is only possible if show and read are either derived automatically or implemented appropriately.

Now my question is: should the statement from the beginning hold for any show function? Or, more generally: should a law or laws regarding the behaviour of read and show be prescribed, similar to Monad and Monoid?

A trivial law would be:

read . show == id

This would have to hold for any x whose type is an instance of both Read and Show.
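The proposed law can at least be written down as an executable property today. A QuickCheck harness could generate the inputs; here it is just a plain function spot-checked on a few monomorphic values:

```haskell
-- The round-trip law as a polymorphic property.
prop_readShow :: (Eq a, Read a, Show a) => a -> Bool
prop_readShow x = read (show x) == x

main :: IO ()
main = print ( prop_readShow (42 :: Int)
            && prop_readShow "hello"
            && prop_readShow (Just [1.5 :: Double]) )
```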

This question originated from StackOverflow.

submitted by ThreeFx
Categories: Incoming News