Windhaven Insurance is seeking an experienced Software Engineer/Developer to join a small, elite team that is disrupting the auto insurance industry with innovative technology. You will work in a startup atmosphere as part of a subsidiary of a rapidly growing larger company. This means that along with programming, you'll have a hand in the larger issues, such as product architecture and direction.
- Someone who knows at least one functional language, such as Elixir, Erlang, Lisp, Scheme, Haskell, ML, Clojure, Racket, OCaml or F#
- Someone who ENGAGES a.k.a “gives a damn” about what we do and how technology can help make us more competitive in the marketplace.
- Someone who COLLABORATES. We have the flattest organization in the industry designed with one main goal – the TEAM. Are you hungry to make a significant impact in the tech world?
- Someone who RESPECTS Teammates, customers and the community.
You need to have made an achievement, in any field, significant enough to be worth talking about, that required grit, determination and skill. This can be a personal achievement that few know about, or one that received media coverage; it doesn't matter. What does matter is some demonstration of grit, determination and skill. If it can be described succinctly, please describe the achievement in your cover letter; if not, be prepared to tell us all about it during the interview.
Professional & Technical Qualifications:
- Experience with languages such as Elixir or Erlang
- Experience with Ember.js (or Angular.js)
- Experience with NoSQL data stores, such as Couchbase, Riak, etc.
- DevOps experience a plus
- Ability to explain technical issues and recommend solutions
- Strong team player with a high degree of flexibility
- Excellent verbal and written communication skills
Competitive salary based on experience. Benefits package includes: medical, dental, vision insurance, life insurance, short term and long term disability insurance, 401K, paid time off. EOE.
Get information on how to apply for this position.
I've got GHCi running fine, but I'm having a hard time setting up my editor. Does anyone know how? I'm super new to OSX, which doesn't help.
It was easy on Windows using :set editor <path>
But that isn't the case here it seems.
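For what it's worth, :set editor works the same way in GHCi on OSX; one minimal sketch (assuming vim lives at /usr/bin/vim — substitute the path to your own editor) is to put the setting in a ~/.ghci file, which GHCi loads automatically at startup:

```
-- ~/.ghci (loaded automatically when GHCi starts)
:set editor /usr/bin/vim
```

Alternatively, when no editor is set, GHCi's :edit command falls back to the EDITOR environment variable, so exporting EDITOR in your shell profile achieves the same thing.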
Also, anything else I'll need to get up and running on OSX would be awesome.
Thanks in advance.
submitted by qspec02
[link] [5 comments]
When reading Haskell code, I notice Haskellers almost always tend to use data structures directly, as opposed to just using type classes. For example, when implementing a graphing library, one would implement a function such as:

fillRect :: Color → Bounds → Image → Image

As opposed to:

fillRect :: Image a => Color → Bounds → a → a
I see this is also recommended, if not enforced, by the Prelude, considering that is how it does things as well.
My problem with such an approach is that, after your code is finished, if you decide to change the underlying structure you will have to refactor every single piece of code using it. That is completely unacceptable in 2014, even more so considering that even in old languages such as C++, the whole refactoring would be a matter of changing a single typedef line.
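For concreteness, here is a toy sketch of the class-based alternative the question has in mind (ListImage, setPixel and the signatures are all illustrative names, not from any real library): a fillRect written once against the class works for any representation, so swapping structures means writing one new instance rather than refactoring every caller.

```haskell
-- Hypothetical class abstracting over the image representation.
class Image a where
  setPixel :: (Int, Int) -> Int -> a -> a   -- illustrative primitive operation

-- One concrete representation: pixels stored as an association list (toy).
newtype ListImage = ListImage [((Int, Int), Int)] deriving Show

instance Image ListImage where
  setPixel pos c (ListImage ps) = ListImage ((pos, c) : ps)

-- Written once against the class; works for any future instance unchanged.
fillRect :: Image a => Int -> ((Int, Int), (Int, Int)) -> a -> a
fillRect color ((x0, y0), (x1, y1)) img =
  foldr (\pos -> setPixel pos color) img
        [ (x, y) | x <- [x0 .. x1], y <- [y0 .. y1] ]

main :: IO ()
main = print (fillRect 255 ((0, 0), (1, 1)) (ListImage []))
```

Switching to, say, an array-backed image would then only require a new instance, which is exactly the refactoring-cost argument the question is making.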
Just, why???
submitted by SrPeixinho
[link] [26 comments]
Summary: I've written a continuous integration system, in Haskell, designed for large projects. It works, but probably won't scale yet.
I've just released bake, a continuous integration system - an alternative to Jenkins, Travis, Buildbot etc. Bake eliminates the problem of "broken builds": a patch is never merged into the repo before it has passed all the tests.
Bake is designed for large, productive, semi-trusted teams:
- Large teams where there are at least several contributors working full-time on a single code base.
- Productive teams which are regularly pushing code, many times a day.
- Semi-trusted teams where code does not go through manual code review, but code does need to pass a test suite and perhaps some static analysis. People are assumed to be fallible.
Current state: At the moment I have a rudimentary test suite, and it seems to mostly work, but Bake has never been deployed for real. Some optional functionality doesn't work, some of the web UI is a bit crude, the algorithms probably don't scale and all console output from all tests is kept in memory forever. I consider the design and API to be complete, and the scaling issues to be easily fixable - but it's easier to fix after it becomes clear where the bottleneck is. If you are interested, take a look, and then email me.
To give a flavour, here is what the web GUI of a running Bake system looks like: [screenshot]

The Design
Bake is a Haskell library that can be used to put together a continuous integration server. To run Bake you start a single server for your project, which coordinates tasks, provides an HTTP API for submitting new patches, and a web-based GUI for viewing the progress of your patches. You also run some Bake clients which run the tests on behalf of the server. While Bake is written in Haskell, most of the tests are expected to just call some system command.
There are a few aspects that make Bake unique:
- Patches are submitted to Bake, but are not applied to the main repo until they have passed all their tests. There is no way for someone to "break the build" - at all points the repo will build on all platforms and all tests will pass.
- Bake scales up so that even if you have 5 hours of testing and 50 commits a day it will not require 250 hours of computation per day. In order for Bake to prove that a set of patches pass a test, it does not have to test each patch individually.
- Bake allows multiple clients to run tests, even if some tests are only able to be run on some clients, allowing both parallelisation and specialisation (testing both Windows and Linux, for example).
- Bake can detect that tests are no longer valid, for example because they access a server that is no longer running, and report the issue without blaming the submitted patches.
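The scaling claim in the second point above can be illustrated with a toy bisection scheme (just the general idea, not Bake's actual algorithm; 'ok' stands in for running the full test suite on a candidate set of patches): test a batch of patches together, and only split the batch when it fails, so a fully passing day costs one run instead of one run per patch.

```haskell
-- Toy sketch: find failing patches by batch testing with bisection.
-- 'ok' is a stand-in for running the tests against a set of patches.
culprits :: ([Int] -> Bool) -> [Int] -> [Int]
culprits ok patches
  | ok patches     = []        -- whole batch passes: no further work
  | [p] <- patches = [p]       -- a single failing patch: found the culprit
  | otherwise      = culprits ok xs ++ culprits ok ys
  where (xs, ys) = splitAt (length patches `div` 2) patches

main :: IO ()
main = print (culprits (notElem 7) [1 .. 10])  -- patch 7 breaks the build
```

With 50 patches and one bad one, this tests O(log n) batches rather than all 50 individually, which is the flavour of savings the point above describes.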
First we define an enumeration for where we want tests to run. Our server is going to require tests on both Windows and Linux before a patch is accepted.

data Platform = Linux | Windows deriving (Show,Read)
platforms = [Linux,Windows]
Next we define the test type. A test is something that must pass before a patch is accepted.

data Action = Compile | Run Int deriving (Show,Read)
Our type is named Action. We have two distinct types of tests: compiling the code, and running the result with a particular argument. Now we need to supply some information about the tests:

allTests = [(p,t) | p <- platforms, t <- Compile : map Run [1,10,0]]
execute :: (Platform,Action) -> TestInfo (Platform,Action)
execute (p,Compile) = matchOS p $ run $ do
cmd "ghc --make Main.hs"
execute (p,Run i) = require [(p,Compile)] $ matchOS p $ run $ do
cmd ("." </> "Main") (show i)
We have to declare allTests, the list of all tests that must pass, and execute, which gives information about a test. Note that the test type is (Platform,Action), so a test is a Platform (where to run the test) and an Action (what to run). The run function gives an IO action to run, and require specifies dependencies. We use an auxiliary matchOS to detect whether a test is running on the right platform:

#if WINDOWS
myPlatform = Windows
#else
myPlatform = Linux
#endif
matchOS :: Platform -> TestInfo t -> TestInfo t
matchOS p = suitable (return . (==) myPlatform)
We use the suitable function to declare whether a test can run on a particular client. Finally, we define the main function:

main :: IO ()
main = bake $
ovenGit "http://example.com/myrepo.git" "master" $
ovenTest readShowStringy (return allTests) execute
We define main = bake, then fill in some configuration. We first declare we are working with Git, and give a repo name and branch name. Next we declare what the tests are, passing the information about the tests. Finally we give a host/port for the server, which we can visit in a web browser or access via the HTTP API.

Using the Example
Now that we have defined the example, we need to start up some servers and clients using the command line for our tool. Assuming we compiled as bake, we can write bake server and bake client (we'll need to launch at least one client per OS). We can view the state by visiting http://127.0.0.1:5000 in a web browser.
To add a patch we can run bake addpatch --name=cb3c2a71, using the SHA1 of the commit, which will try to integrate that patch into the master branch once all the tests have passed.
I've tried using "cabal install QuickCheck" but I get this message.
Is there another way I can install QuickCheck, or is there something I'm not doing right with the cabal install?
submitted by willrobertshaw
[link] [6 comments]
As a newcomer to Haskell - not even that, only a learner - I have been quite surprised by how relaxed Haskellers are about stating type class laws, given how forthright they are about the absolute rightness of a strong type system.
I might even risk contributing to a meme by saying "every sufficiently complicated Haskell library has an incomplete and informally-constructed Agda program in the blog posts expounding on it".
I am not here to push an inevitable march towards dependent typing. I would just like to have a way of formally documenting these laws. Of course, once the properties are there, I'm sure there are many things that could be done:
- plugging in an external theorem prover
- generating quick-check tests
- allowing the compiler to rely on them for optimisation - for instance, if the compiler knows that the function applied in a fold is associative then the operation can be parallelised (I can't imagine this could be done in any other way than recognising particular special cases, but it could still be profitable even then).
- other things
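As a small illustration of the quick-check direction, a type class law can already be written down as an executable predicate today (monoidAssoc is just an illustrative helper name, and a real setup would use QuickCheck's random generation rather than a fixed list of inputs):

```haskell
-- The Monoid associativity law expressed as a Bool-valued property.
monoidAssoc :: (Eq a, Monoid a) => a -> a -> a -> Bool
monoidAssoc x y z =
  (x `mappend` y) `mappend` z == x `mappend` (y `mappend` z)

main :: IO ()
main = print (and [ monoidAssoc x y z | x <- xs, y <- xs, z <- xs ])
  where xs = [[1, 2], [3], []] :: [[Int]]  -- sample inputs for lists
```

The question is precisely about where such properties should live: today they sit in documentation and blog posts rather than in the class declaration itself.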
I understand that achieving this is probably more complicated than I could possibly imagine. What surprises me is that I don't even hear anyone worrying about it.
submitted by maninalift
[link] [21 comments]
Today I released version 0.2 of unagi-chan, a Haskell library implementing fast and scalable FIFO queues with a nice and familiar API. It is available on Hackage and you can install it with:

$ cabal install unagi-chan
This version provides a bounded queue variant (and closes issue #1!) that has performance on par with the other variants in the library. This is something I’m somewhat proud of, considering that the standard TBQueue is not only significantly slower than e.g. TQueue, but also was seen to livelock at a fairly low level of concurrency (and so is not included in the benchmark suite).
Here are some example benchmarks. Please do try the new bounded version and see how it works for you.
What follows are a few random thoughts more or less generally applicable to the design of bounded FIFO queues, especially in a high-level garbage-collected language. These might be obvious, uninteresting, or unintelligible.

What is Bounding For?
I hadn’t really thought much about this before: a bounded queue limits memory consumption because the queue is restricted from growing beyond some size.
But this isn’t quite right. If for instance we implement a bounded queue by pre-allocating an array of size bounds then a write operation need not consume any additional memory; indeed the value to be written has already been allocated on the heap before the write even begins, and will persist whether the write blocks or returns immediately.
Instead, constraining memory usage is a knock-on effect of what we really care about: backpressure. When the ratio of "producers" to their writes is high (the usual scenario), blocking a write may limit memory usage by delaying heap allocations associated with elements for future writes.
So bounded queues with blocking writes let us:
- when threads are “oversubscribed”, transparently indicate to the runtime which work has priority
- limit future resource usage (CPU time and memory) by producer threads
We might also like our bounded queue to support a non-blocking write which returns immediately with success or failure. This might be thought of (depending on the capabilities of your language's runtime) as more general than a blocking write, but it also supports a distinctly different notion of bounding, that is bounding message latency: a producer may choose to drop messages when a consumer falls behind, in exchange for lower latency for future writes.

Unagi.Bounded Implementation Ideas
Trying to unpack the ideas above helped in a few ways when designing Unagi.Bounded. Here are a few observations I made.

We need not block before “writing”
When implementing blocking writes, my intuition was to (when the queue is “full”) have writers block before “making the message available” (whatever that means for your implementation). For Unagi that means blocking on an MVar, and then writing a message to an assigned array index.
But this ordering presents a couple of problems: first, we need to be able to handle async exceptions raised while the writer is blocked; if its message isn't yet "in place" then we need to somehow coordinate with the reader that would have received that message, telling it to retry.
By unpacking the purpose of bounding it became clear that we’re free to block at any point during the write (because the write per se does not have the memory-usage implications we originally naively assumed it had), so in Unagi.Bounded writes proceed exactly like in our other variants, until the end of the writeChan, at which point we decide when to block.
This is certainly also better for performance: if a wave of readers comes along, they need not wait (themselves blocking) for previously blocked writers to make their messages available.
One hairy detail from this approach: an async exception raised in a blocked writer does not cause that write to be aborted; i.e. once entered, writeChan always succeeds. Reasoning in terms of linearizability, this only affects situations in which a writer thread is known to be blocked and we would like to abort that write.

Fine-grained writer unblocking is probably unnecessary and harmful
In Unagi.Bounded I relax the bounds constraint to “somewhere between bounds and bounds*2”. This allows me to eliminate a lot of coordination between readers and writers by using a single reader to unblock up to bounds number of writers. This constraint (along with the constraint that bounds be a power of two, for fast modulo) seemed like something everyone could live with.
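The power-of-two constraint pays for itself because modulo by a power of two reduces to a bit mask; a quick sketch of the trick (fastMod is an illustrative name, not unagi-chan's internal function):

```haskell
import Data.Bits ((.&.))

-- When bounds == 2^k, i `mod` bounds equals masking off the low k bits,
-- avoiding a hardware division. Only valid for power-of-two bounds.
fastMod :: Int -> Int -> Int
fastMod i bounds = i .&. (bounds - 1)

main :: IO ()
main = print (map (`fastMod` 8) [0, 5, 8, 13])
```

This is the standard reason ring-buffer-style queues round their capacity up to a power of two.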
I also guess that this "cohort unblocking" behavior could result in some nicer stride behavior, with more consecutive non-blocking reads and writes, rather than having a situation where the queue is almost always either completely full or empty.

One-shot MVars and Semaphores
This has nothing to do with queues; it's just a place to put this observation: garbage-collected languages permit some interesting non-traditional concurrency patterns. For instance I use MVars and IORefs that only ever go from empty to full, or follow a single linear progression of three or four states in their lifetime. Often it's easier to design algorithms this way, rather than by using long-lived mutable variables (for instance I struggled to come up with a blocking bounded queue design that used a circular buffer which could be made async-exception-safe).
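A minimal sketch of the one-shot pattern (illustrative, not code from unagi-chan): an MVar that starts empty and is filled exactly once, acting as a single-use completion signal rather than a long-lived mutable variable.

```haskell
import Control.Concurrent

main :: IO ()
main = do
  done <- newEmptyMVar              -- state 1: empty
  _ <- forkIO $ do
    -- ... worker does its job here ...
    putMVar done ()                 -- state 2: full, and it stays full
  takeMVar done                     -- blocks until the one-shot signal fires
  putStrLn "worker finished"
```

Because the variable has only one transition in its lifetime, there is no retry loop or state to restore after an async exception, which is exactly the simplification described above.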
Similarly the CAS operation (which I get exported from atomic-primops) turns out to be surprisingly versatile far beyond the traditional read/CAS/retry loop, and to have very useful semantics when used on short-lived variables. For instance throughout unagi-chan I do both of the following:
- CAS without inspecting the return value, content that we or any other competing thread succeeded.
- CAS using a known initial state, avoiding an initial read.
The documentation for Text.Show states that this holds for derived instances:
The result of show is a syntactically correct Haskell expression [...]
This fact can be used to pass states of certain computations around as text files or similar objects, because show converts the data to a string representation, while show's complement read enables continuing the earlier computations.
Of course the above is only possible if show and read are either derived automatically or implemented appropriately.
Now my question is: should the statement from the beginning hold for any show function? Or, more generally: should a law or laws regarding the behaviour of read and show be prescribed, similar to Monad and Monoid?
A trivial law would be:

read . show == id
This would have to hold for any x whose type is an instance of both Read and Show.
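For derived instances the trivial law is easy to check concretely (Point is just an illustrative type):

```haskell
-- Show and Read derived together round-trip through a String.
data Point = Point Int Int deriving (Show, Read, Eq)

main :: IO ()
main = do
  let p = Point 3 4
  print (show p)                -- the syntactically valid expression
  print (read (show p) == p)   -- read . show behaves as id here
```

A hand-written Show instance (say, pretty-printing with no valid syntax) would silently break this, which is why the question asks whether the law should be stated as a class requirement rather than left to documentation.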
This question originated from StackOverflow.
submitted by ThreeFx
[link] [11 comments]