News aggregator

Why does the inferred type of this expression change when I bind it to a varid?

Haskell on Reddit - Thu, 05/14/2015 - 4:03pm

I'd like a function that takes an Integer and gives back the number of digits in it. Here's the output from my ghci session:

GHCi, version 7.6.3: http://www.haskell.org/ghc/  :? for help
Loading package ghc-prim ... linking ... done.
Loading package integer-gmp ... linking ... done.
Loading package base ... linking ... done.
Prelude> :t (length . show)
(length . show) :: Show a => a -> Int
Prelude> let digits = length . show
Prelude> :t digits
digits :: () -> Int
Prelude>

So, length . show has the right type on its own, but when I bind it to the name digits, it suddenly gets a much more restrictive type. What causes this, and how can I avoid it?

Sure, it works if I just specify the type I want, but I'm surprised that the type inference gets it right some of the time and wrong other times.
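For context: this is the monomorphism restriction at work. A binding like digits = length . show takes no arguments and has no signature, so it is forced to a monomorphic type, and GHCi's extended defaulting then resolves the Show constraint to (). A minimal sketch of the two usual fixes:

-- Fix 1: give the binding an explicit polymorphic signature.
digits :: Show a => a -> Int
digits = length . show

-- Fix 2: disable the restriction, either with
--   {-# LANGUAGE NoMonomorphismRestriction #-}
-- in a source file, or interactively in GHCi with
--   :set -XNoMonomorphismRestriction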

submitted by penguinland
Categories: Incoming News

Question: What's the most Haskell way to implement this? (Card game)

Haskell on Reddit - Thu, 05/14/2015 - 3:14pm

So I basically have two implementations in mind and I'm not sure which of them is the proper Haskell implementation. I have different cards, entities of those cards, players who can own those card entities, and locations on a board where the card entities can be.

My first idea was:

data Player = Player {name :: String, ..}
data Card = Card {name :: String, desc :: String, uri :: String}
data CardEntity = CardEntity {card :: Card, rotation :: Direction, orientation :: Orientation, ..}
data Location = Location {id :: Int}

But I wasn't sure where to put the information about where a card entity is and who owns it. Should an entity have attributes for its owner and its location, or should a Location and a Player each have a list of the card entities they hold/own?

Then I came up with another idea. Maybe I could just give an ID to all players, entities and cards and make a function for each of their attributes:

data Player = Player {id :: Int}          -- (or just type Player = Int)
data CardEntity = CardEntity {id :: Int}

...

cardEntityRotation :: CardEntity -> Direction
cardEntityOrientation :: CardEntity -> Orientation
playerName :: Player -> String

...

That way I could easily give card entities an owner attribute and players a cards attribute:

playerCardEntities :: Player -> [CardEntity]
cardEntityOwner :: CardEntity -> Player
cardName :: Card -> String

But cards get loaded from disk at the beginning, so I'd probably have to make a hashmap for every single attribute.
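One possible middle ground, sketched here with made-up names rather than taken from either idea verbatim: keep the record types, but store entities in a single map keyed by ID inside one game-state value, so ownership and location live on the entity and per-player lists are derived on demand:

import qualified Data.Map.Strict as Map

newtype PlayerId     = PlayerId Int     deriving (Eq, Ord, Show)
newtype CardEntityId = CardEntityId Int deriving (Eq, Ord, Show)
newtype LocationId   = LocationId Int   deriving (Eq, Ord, Show)

data Card   = Card   { cardName :: String, cardDesc :: String } deriving Show
data Player = Player { playerName :: String }                   deriving Show

data CardEntity = CardEntity
  { ceCard     :: Card        -- shared card data loaded from disk
  , ceOwner    :: PlayerId    -- ownership stored on the entity itself
  , ceLocation :: LocationId  -- likewise for its place on the board
  } deriving Show

data GameState = GameState
  { gsPlayers  :: Map.Map PlayerId Player
  , gsEntities :: Map.Map CardEntityId CardEntity
  } deriving Show

-- A player's cards are computed from the entity map, not stored twice.
entitiesOwnedBy :: PlayerId -> GameState -> [CardEntity]
entitiesOwnedBy pid = filter ((== pid) . ceOwner) . Map.elems . gsEntities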

Which one is better, or is there a better implementation than those two?

Thanks in advance.

submitted by Ecyoph
Categories: Incoming News

Which free monad?

Haskell on Reddit - Thu, 05/14/2015 - 2:57pm

It seems that there are two free-like monads.

data Free f r = Pure r | Free (f (Free f r))
data Free' f r = Done r | forall s. s :>>= (s -> f (Free' f r))

Since s is existentially quantified, it can't be observed, so they should act the same. Is there ever a reason to use the second version instead of the (more popular) first? I'm not sure how Haskell handles implicit state in closures, but I imagine it might be more efficient to reify the state in some cases.
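To make the "they should act the same" intuition concrete, here is a rough sketch (assuming ExistentialQuantification and a Functor constraint; not from the original post) of conversions in both directions:

{-# LANGUAGE ExistentialQuantification #-}

data Free f r  = Pure r | Free (f (Free f r))
data Free' f r = Done r | forall s. s :>>= (s -> f (Free' f r))

-- Collapse the existential by applying the stored continuation.
toFree :: Functor f => Free' f r -> Free f r
toFree (Done r)   = Pure r
toFree (s :>>= k) = Free (fmap toFree (k s))

-- Going back, the wrapped layer itself plays the role of the hidden s.
fromFree :: Functor f => Free f r -> Free' f r
fromFree (Pure r)  = Done r
fromFree (Free fa) = fa :>>= fmap fromFree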

submitted by dogodel
Categories: Incoming News

mightybyte: LTMT Part 3: The Monad Cookbook

Planet Haskell - Thu, 05/14/2015 - 11:52am
Introduction

The previous two posts in my Less Traveled Monad Tutorial series have not had much in the way of directly practical content. In other words, if you only read those posts and nothing else about monads, you probably wouldn't be able to use monads in real code. This was intentional because I felt that the practical stuff (like do notation) had adequate treatment in other resources. In this post I'm still not going to talk about the details of do notation--you should definitely read about that elsewhere--but I am going to talk about some of the most common things I have seen beginners struggle with and give you cookbook-style patterns that you can use to solve these issues.

Problem: Getting at the pure value inside the monad

This is perhaps the most common problem for Haskell newcomers. It usually manifests itself as something like this:

main = do
    lineList <- lines $ readFile "myfile.txt"
    -- ... do something with lineList here

That code generates the following error from GHC:

Couldn't match type `IO String' with `[Char]'
Expected type: String
  Actual type: IO String
In the return type of a call of `readFile'

Many newcomers seem puzzled by this error message, but it tells you EXACTLY what the problem is. The return type of readFile has type IO String, but the thing that is expected in that spot is a String. (Note: String is a synonym for [Char].) The problem is, this isn't very helpful. You could understand that error completely and still not know how to solve the problem. First, let's look at the types involved.

readFile :: FilePath -> IO String
lines :: String -> [String]

Both of these functions are defined in Prelude. These two type signatures show the problem very clearly. readFile returns an IO String, but the lines function is expecting a String as its first argument. IO String != String. Somehow we need to extract the String out of the IO in order to pass it to the lines function. This is exactly what do notation was designed to help you with.

Solution #1

main :: IO ()
main = do
    contents <- readFile "myfile.txt"
    let lineList = lines contents
    -- ... do something with lineList here

This solution demonstrates two things about do notation. First, the left arrow lets you pull things out of the monad. Second, if you're not pulling something out of a monad, use "let foo =". One metaphor that might help you remember this is to think of "IO String" as a computation in the IO monad that returns a String. A do block lets you run these computations and assign names to the resulting pure values.

Solution #2

We could also attack the problem a different way. Instead of pulling the result of readFile out of the monad, we can lift the lines function into the monad. The function we use to do that is called liftM.

liftM :: Monad m => (a -> b) -> m a -> m b
liftM :: Monad m => (a -> b) -> (m a -> m b)

The associativity of the -> operator is such that these two type signatures are equivalent. If you've ever heard Haskell people saying that all functions are single argument functions, this is what they are talking about. You can think of liftM as a function that takes one argument, a function (a -> b), and returns another function, a function (m a -> m b). When you think about it this way, you see that the liftM function converts a function of pure values into a function of monadic values. This is exactly what we were looking for.

main :: IO ()
main = do
    lineList <- liftM lines (readFile "myfile.txt")
    -- ... do something with lineList here

This is more concise than our previous solution, so in this simple example it is probably what we would use. But if we needed to use contents in more than one place, then the first solution would be better.

Problem: Making pure values monadic

Consider the following program:

import Control.Monad
import System.Environment

main :: IO ()
main = do
    args <- getArgs
    output <- case args of
      [] -> "cat: must specify some files"
      fs -> liftM concat (mapM readFile fs)
    putStrLn output

This program also has an error. GHC actually gives you three errors here because there's no way for it to know exactly what you meant. But the first error is the one we're interested in.

Couldn't match type `[]' with `IO'
Expected type: IO Char
  Actual type: [Char]
In the expression: "cat: must specify some files"

Just like before, this error tells us exactly what's wrong. We're supposed to have an IO something, but we only have a String (remember, String is the same as [Char]). It's not convenient for us to get the pure result out of the readFile functions like we did before because of the structure of what we're trying to do. The two patterns in the case statement must have the same type, so that means that we need to somehow convert our String into an IO String. This is exactly what the return function is for.

Solution: return

return :: a -> m a

This type signature tells us that return takes any type a as input and returns "m a". So all we have to do is use the return function.

import Control.Monad
import System.Environment

main :: IO ()
main = do
    args <- getArgs
    output <- case args of
      [] -> return "cat: must specify some files"
      fs -> liftM concat (mapM readFile fs)
    putStrLn output

The 'm' that the return function wraps its argument in is determined by the context. In this case, main is in the IO monad, so that's what return uses.
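As a quick illustration (my own, not from the post), the same return call produces different wrappers depending on what the context demands:

ioGreeting :: IO String
ioGreeting = return "hello"      -- builds an IO action yielding "hello"

maybeGreeting :: Maybe String
maybeGreeting = return "hello"   -- the same call here means Just "hello"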

Problem: Chaining multiple monadic operations

import System.Environment

main :: IO ()
main = do
    [from,to] <- getArgs
    writeFile to $ readFile from

As you probably guessed, this function also has an error. Hopefully you have an idea of what it might be. It's the same problem of needing a pure value when we actually have a monadic one. You could solve it like we did in solution #1 on the first problem (you might want to go ahead and give that a try before reading further). But this particular case has a pattern that makes a different solution work nicely. Unlike the first problem, you can't use liftM here.

Solution: bind

When we used liftM, we had a pure function lines :: String -> [String]. But here we have writeFile :: FilePath -> String -> IO (). We've already supplied the first argument, so what we actually have is writeFile to :: String -> IO (). And again, readFile returns IO String instead of the pure String that we need. To solve this we can use another function that you've probably heard about when people talk about monads...the bind function.

(=<<) :: Monad m => (a -> m b) -> m a -> m b
(=<<) :: Monad m => (a -> m b) -> (m a -> m b)

Notice how the pattern here is different from the first example. In that example we had (a -> b) and we needed to convert it to (m a -> m b). Here we have (a -> m b) and we need to convert it to (m a -> m b). In other words, we're only adding an 'm' onto the 'a', which is exactly the pattern we need here. Here are the two patterns next to each other to show the correspondence.

writeFile to :: String -> IO ()
                   a   -> m  b

From this we see that "writeFile to" is the first argument to the =<< function. readFile from :: IO String fits perfectly as the second argument to =<<, and then the return value is the result of the writeFile. It all fits together like this:

import System.Environment

main :: IO ()
main = do
    [from,to] <- getArgs
    writeFile to =<< readFile from

Some might point out that this third problem is really the same as the first problem. That is true, but I think it's useful to see the varying patterns laid out in this cookbook style so you can figure out what you need to use when you encounter these patterns as you're writing code. Everything I've said here can be discovered by carefully studying the Control.Monad module. There are lots of other convenience functions there that make working with monads easier. In fact, I already used one of them: mapM.
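For reference, the list-specialised type of mapM, which is all the example above relies on:

mapM :: Monad m => (a -> m b) -> [a] -> m [b]

-- So in the cat example above:
--   mapM readFile fs :: IO [String]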

When you're first learning Haskell, I would recommend that you keep the documentation for Control.Monad close by at all times. Whenever you need to do something new involving monadic values, odds are good that there's a function in there to help you. I would not recommend spending 10 hours studying Control.Monad all at once. You'll probably be better off writing lots of code and referring to it whenever you think there should be an easier way to do what you want to do. Over time the patterns will sink in as you form new connections between different concepts in your brain.

It takes effort. Some people do pick these things up more quickly than others, but I don't know anyone who just read through Control.Monad and then immediately had a working knowledge of everything in there. The patterns you're grappling with here will almost definitely be foreign to you because no other mainstream language enforces this distinction between pure values and side effecting values. But I think the payoff of being able to separate pure and impure code is well worth the effort.

Categories: Offsite Blogs

CFP: Extended deadline: Functional Art, Music, Modelling and Design (FARM 2015)

General haskell list - Thu, 05/14/2015 - 11:12am
************************************************************
          Call for Papers and Demos: FARM 2015
  The 3rd ACM SIGPLAN International Workshop on
  Functional Art, Music, Modelling and Design
  Vancouver, Canada, 5 September, 2015
  affiliated with ICFP 2015
  http://functional-art.org
  EXTENDED Submission Deadline: 27 May, 2015
  (optional abstract submission: 17 May, 2015)
************************************************************
The ACM SIGPLAN International Workshop on Functional Art, Music, Modelling and Design (FARM) gathers together people who are harnessing functional techniques in the pursuit of creativity and expression. Functional Programming has emerged as a mainstream software development paradigm, and its artistic and creative use is booming. A growing number of software toolkits, frameworks and environments for art, music and design now employ functional programming languages and techniques. FARM is a forum for expl
Categories: Incoming News

Suggestion: "Sizable" super class for Storable

haskell-cafe - Thu, 05/14/2015 - 9:50am
Storable instances have a size, given by sizeOf. In many cases, we're not interested in peeking/poking data but only passing it opaquely via the FFI. A common use case is when the C API offers an "init" function such as:

void mycontext_init(mycontext *context);

For these cases it would be useful to know the size of "mycontext", so we could malloc it and pass a pointer to mycontext_init. Also, it allows Haskell-side code to decide how it wants to allocate the data, perhaps using some other (external) mechanism not related to the specific API that the FFI bindings are wrapping.

c2hs would benefit by allowing users to use the '+' notation in function parameters (which generate malloc-and-pass style code), without having to guess the size of the structure. Instead, it could simply use the Sizable (TM) instance to get the size, and the user will define Sizable in any way they want (for example, using the {#sizeof#} macro, which is somewhat unreliable, or by hard-coding or manually entering the size or b
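A rough sketch of how such a class might be used for the opaque-init pattern (the class name comes from the suggestion; the method and helper below are hypothetical illustrations, not an agreed design):

import Foreign.Marshal.Alloc (mallocBytes)
import Foreign.Ptr (Ptr)

-- Hypothetical superclass carrying only a size, with no peek/poke.
class Sizable a where
  sizeOfProxy :: proxy a -> Int

-- Allocate an uninitialised buffer for an opaque C struct and let a
-- C-side init function (e.g. mycontext_init, imported via the FFI) fill it.
allocAndInit :: Sizable a => proxy a -> (Ptr a -> IO ()) -> IO (Ptr a)
allocAndInit p initFn = do
  ptr <- mallocBytes (sizeOfProxy p)
  initFn ptr
  return ptr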
Categories: Offsite Discussion

JP Moresmau: EclipseFP end of life (from me at least)

Planet Haskell - Thu, 05/14/2015 - 7:09am
Hello, after a few years and several releases, I am now stopping the maintenance of EclipseFP and its companion Haskell packages (BuildWrapper, ghc-pkg-lib and scion-browser). If anybody wants to take over, I'll gladly give them everything required to get started. Feel free to fork and continue!

Why am I stopping? Not for a very specific reason, though seeing that I had to adapt BuildWrapper to GHC 7.10 didn't exactly fill me with joy, but more generally I got tired of being the single maintainer for this project. I got a few pull requests over the years and some people have at some stage participated (thanks to you, you know who you are!), but not enough, and the biggest part of the work has always been on my shoulders. Let's say I got tired of getting an endless stream of issue reports and enhancement requests with nobody stepping up to actually address them.

Also, I don't think on the Haskell side it makes sense for me to keep on working on a GHC API wrapper like BuildWrapper. There are other alternatives, and with the release of ide-backend, backed up by FPComplete, a real company staffed by competent people who seem to have more than 24 hours per day to hack on Haskell tools, it makes more sense to have consolidation there.

The goal of EclipseFP was to make it easy for Java developers or other Eclipse users to move to Haskell, and I think this has been a failure, mainly due to the inherent complexity of the setup (the Haskell stack and the Java stack) and the technical challenges of integrating GHC and Cabal in a complex IDE like Eclipse. Of course we could have done better within the constraints we were operating under, but if more eyes had looked at the code and more hands had been on deck we could have succeeded.

Personally I would now be interested in maybe getting the Atom editor to use ide-backend-client, or maybe work on a web based (but local) Haskell IDE. Some of my dabblings can be found at https://github.com/JPMoresmau/dbIDE. But I would much prefer to not work on my own, so if you have an open source project you think could do with my help, I'll be happy to hear about it!

I still think Haskell is a great language that would deserve a top-notch IDE, for newbies and experts alike, and I hope one day we'll get there.

For you EclipseFP users, you can of course keep using it as long as it works, but if no other maintainers step up, down the line you'll have to look for other options, as compatibility with the Haskell ecosystem will not be assured. Good luck!

Happy Haskell Hacking!

Categories: Offsite Blogs

Bizarre ld error when trying to compile my program

Haskell on Reddit - Thu, 05/14/2015 - 6:34am

Someone solved it, both in the comments and on the IRC channel. Thanks for the help, everyone!

Hey, everyone!

I am working on this project here: https://github.com/pharpend/louse. When I try to compile the program with cabal install, I get this bizarre error with ld

Resolving dependencies...
In order, the following will be installed:
louse-0.1.0.0 +dev (reinstall)
Warning: Note that reinstalls are always dangerous. Continuing anyway...
Configuring louse-0.1.0.0...
Building louse-0.1.0.0...
Failed to install louse-0.1.0.0
Build log ( /home/pete/.cabal/logs/louse-0.1.0.0.log ):
Configuring louse-0.1.0.0...
Building louse-0.1.0.0...
Preprocessing library louse-0.1.0.0...
In-place registering louse-0.1.0.0...
Preprocessing executable 'louse' for louse-0.1.0.0...
[2 of 2] Compiling Main ( bin/louse.hs, dist/build/louse/louse-tmp/Main.o ) [Data.Louse.Bugs changed]
Linking dist/build/louse/louse ...
/home/pete/src/louse/dist/build/libHSlouse-0.1.0.0-0flN1wT55XYAdEZ4du8MhO.a(Bugs.o):(.text+0x536e): undefined reference to `lousezu0flN1wT55XYAdEZZ4du8MhO_DataziLouseziConfig_readLouseConfig1_info'
/home/pete/src/louse/dist/build/libHSlouse-0.1.0.0-0flN1wT55XYAdEZ4du8MhO.a(Bugs.o):(.text+0x65d2): undefined reference to `lousezu0flN1wT55XYAdEZZ4du8MhO_DataziLouseziConfig_readLouseConfig1_info'
/home/pete/src/louse/dist/build/libHSlouse-0.1.0.0-0flN1wT55XYAdEZ4du8MhO.a(Bugs.o):(.data+0x8a0): undefined reference to `lousezu0flN1wT55XYAdEZZ4du8MhO_DataziLouseziConfig_readLouseConfig1_closure'
collect2: error: ld returned 1 exit status
cabal: Error: some packages failed to install:
louse-0.1.0.0 failed during the building phase. The exception was:
ExitFailure 1

If I compile only the library, and try to compile the executable with plain ghc, the same thing happens:

Linking louse ...
/home/pete/.cabal/lib/x86_64-linux-ghc-7.10.1/louse_0flN1wT55XYAdEZ4du8MhO/libHSlouse-0.1.0.0-0flN1wT55XYAdEZ4du8MhO.a(Bugs.o):(.text+0x536e): undefined reference to `lousezu0flN1wT55XYAdEZZ4du8MhO_DataziLouseziConfig_readLouseConfig1_info'
/home/pete/.cabal/lib/x86_64-linux-ghc-7.10.1/louse_0flN1wT55XYAdEZ4du8MhO/libHSlouse-0.1.0.0-0flN1wT55XYAdEZ4du8MhO.a(Bugs.o):(.text+0x65d2): undefined reference to `lousezu0flN1wT55XYAdEZZ4du8MhO_DataziLouseziConfig_readLouseConfig1_info'
/home/pete/.cabal/lib/x86_64-linux-ghc-7.10.1/louse_0flN1wT55XYAdEZ4du8MhO/libHSlouse-0.1.0.0-0flN1wT55XYAdEZ4du8MhO.a(Bugs.o):(.data+0x8a0): undefined reference to `lousezu0flN1wT55XYAdEZZ4du8MhO_DataziLouseziConfig_readLouseConfig1_closure'
collect2: error: ld returned 1 exit status

The function that ld is complaining about: https://github.com/pharpend/louse/blob/master/Data/Louse/Config.hs#L38

readLouseConfig :: IO (Maybe LouseConfig)
readLouseConfig = do
  configPath <- _config_path
  configPathExists <- doesFileExist configPath
  if configPathExists
    then do
      configBytes <- B.readFile configPath
      pure (decodeStrict configBytes)
    else pure Nothing

If I comment out the do-block, and replace the definition with pure Nothing, then everything compiles just fine-and-dandy. It's weird because the library compiles just fine, so it's not a type error. There's some weird undefined symbol in the generated code.

submitted by pharpend
Categories: Incoming News

Eve: the development diary of a programming environment aimed at non-programmers

Lambda the Ultimate - Thu, 05/14/2015 - 6:27am

In spring 2012 Chris Granger successfully completed a Kickstarter fundraising campaign and got $300K (instead of the requested $200K) to work on a live-feedback IDE inspired by Bret Victor's "Inventing on Principle" talk. The IDE project was called Light Table. It initially supported Clojure (the team's favourite language) only, but eventually added support for Javascript and Python. In January 2014, Light Table was open sourced, and in October 2014 the Light Table development team announced that they decided to create a new language, Eve, that would be a better fit for their vision of the programming experience.

There is little public information about Eve so far, no precise design documents, but the development team has a public monthly Development Diary that I found fairly interesting. It displays an interesting form of research culture, with, in particular, recurrent references to academic work coming from outside the programming-language-research community: database queries, Datalog evaluation, distributed systems, version-control systems. This diary might be a good opportunity to have a look at the internals of a language design process (or really a programming environment design process) that is neither academic nor really industrial in nature. It sounds more representative (I hope!) of the well-educated parts of startup culture.

Eve is a functional-relational language. Every input to an Eve program is stored in one of a few insert-only tables. The program itself consists of a series of views written in a relational query language. Some of these views represent internal state. Others represent IO that needs to be performed. Either way there is no hidden or forgotten state - the contents of these views can always be calculated from the input tables.

Eve is designed for live programming. As the user makes changes, the compiler is constantly re-compiling code and incrementally updating the views. The compiler is designed to be resilient and will compile and run as much of the code as possible in the face of errors. The structural editor restricts partially edited code to small sections, rather than rendering entire files unparseable. The pointer-free relational data model and the timeless views make it feasible to incrementally compute the state of the program, rather than starting from scratch on each edit.

The target audience for the language is described as "non-programmers", but in fact it looks like their control group had some previous experience with Excel. (I would guess that experimenting with children with no experience of programming at all, including no Excel work, could have resulted in very different results.)

Posts so far, by Jamie Brandon:

Some random quotes.

Retrospective:

Excited, we presented our prototype to a small number of non-programmers and sat back to watch the magic. To our horror, not a single one of them could figure out what the simple example program did or how it worked, nor could they produce any useful programs themselves. The sticking points were lexical scope and data structures. Every single person we talked to just wanted to put data in an Excel-like grid and drag direct references. Abstraction via symbol binding was not an intuitive or well-liked idea.

[...]

Our main data-structure was now a tree of tables. Rather than one big top-level function, we switched to a pipeline of functions. Each function pulled data out of the global store using a datalog query, ran some computation and wrote data back. Having less nesting reduced the impact of lexical scope and cursor passing. Using datalog allowed normalising the data store, avoiding all the issues that came from hierarchical models.

At this point we realised we weren't building a functional language anymore. Most of the programs were just datalog queries on normalised tables with a little scalar computation in the middle. We were familiar with Bloom and realised that it fit our needs much better than the functional pidgin we had built so far - no lexical scoping, no data-structures, no explicit ordering. In late March we began work on a Bloom interpreter.

October:

Where most languages express state as a series of changes ('when I click this button add 1 to the counter'), Eve is built around views over input logs ('the value of the counter is the number of button clicks in the log'). Thinking in terms of views makes the current language simple and powerful. It removes the need for explicit control flow, since views can be calculated in any order that is consistent with the dependency graph, and allows arbitrary composition of data without requiring the cooperation of the component that owns that data.

Whenever we have tried to introduce explicit change we immediately run into problems with ordering and composing those changes and we lose the ability to directly explain the state of the program without reference to data that no longer exists.

[...]

In a traditional imperative language, [context] is provided by access to dynamic scoping (or global variables - the poor man's dynamic scope) or by function parameters. In purely functional languages it can only be provided by function parameters, which is a problem when a deeply buried function wants to access some high-up data and it has to be manually threaded through the entire callstack.

December:

Eve processes can now spawn subprocesses and inject code into them. Together with the new communication API this allowed much of the IDE architecture to be lifted into Eve. When running in the browser only the UI manager lives on the main thread - the editor, the compiler and the user's program all live in separate web-workers. The editor uses the process API to spawn both the compiler and the user's program and then subscribes to the views it needs for the debugging interface. Both the editor and the user's program send graphics data to the UI manager and receive UI events in return.
Categories: Offsite Discussion

LambdaCms: CMS in Haskell project has open intern positions

Haskell on Reddit - Thu, 05/14/2015 - 4:28am

About a year ago I announced open internship positions at Hoppinger for building a CMS with Haskell. This resulted in the proud announcement of LambdaCms here on Reddit about eight months later. An interesting remark: one of the interns managed to score a 9.5/10 for the project, resulting in a cum laude.

While LambdaCms is usable, incredibly fast (2-10ms responses), and comes with a nice list of features, it is still quite basic in the realm of CMSes.

We have decided to commit to another internship project in order to improve it further. Currently one intern, student at the Rotterdam University of Applied Sciences, has committed to work on the project from September '15 to January '16. By this message we want to announce that, in the second half of 2015, we still have one or two positions open on this project.

submitted by cies010
Categories: Incoming News

Functional Jobs: OCaml server-side developer at Ahrefs Research (Full-time)

Planet Haskell - Thu, 05/14/2015 - 2:07am
Who we are

Ahrefs Research is a San Francisco branch of Ahrefs Pte Ltd (Singapore), which runs an internet-scale bot that crawls the whole Web 24/7, storing huge volumes of information to be indexed and structured in a timely fashion. On top of that, Ahrefs is building analytical services for end-users.

Ahrefs Research develops a custom petabyte-scale distributed storage system to accommodate all that data coming in at high speed, focusing on performance, robustness and ease of use. The performance-critical low-level part is implemented in C++ on top of a distributed filesystem, while all the coordination logic and the communication layer, along with the API library exposed to developers, are in OCaml.

We are a small team and strongly believe in better technology leading to better solutions for real-world problems. We worship functional languages and static typing, extensively employ code generation and meta-programming, value code clarity and predictability, and constantly seek to automate repetitive tasks and eliminate boilerplate, guided by DRY and following KISS. If there is any new technology that will make our lives easier, no doubt we'll give it a try. We rely heavily on open-source code (as the only viable way to build a maintainable system) and contribute back, see e.g. https://github.com/ahrefs . It goes without saying that our team is all passionate and experienced OCaml programmers, ready to lend a hand or explain that intricate ocamlbuild rule.

Our motto is "first do it, then do it right, then do it better".

What we need

Ahrefs Research is looking for a backend developer with a deep understanding of operating systems and networks and a taste for simple and efficient architectural designs. Our backend is implemented mostly in OCaml and some C++; as such, proficiency in OCaml is very much appreciated, otherwise a strong inclination to intensively learn OCaml in a short term will be required. Understanding of functional programming in general and/or experience with other FP languages (F#, Haskell, Scala, Scheme, etc.) will help a lot. Knowledge of C++ is a plus.

The candidate will have to deal with the following technologies on a daily basis:

  • networks & distributed systems
  • 4+ petabyte of live data
  • OCaml
  • C++
  • linux
  • git

The ideal candidate is expected to:

  • Independently deal with and investigate bugs, schedule tasks and dig code
  • Make well-reasoned technical choices and take responsibility for them
  • Understand the whole technology stack at all levels : from network and userspace code to OS internals and hardware
  • Handle the full development cycle of a single component, i.e. formalize the task, write code and tests, set up and support production (devops)
  • Approach problems with practical mindset and suppress perfectionism when time is a priority

These requirements stem naturally from our approach to development, with its fast feedback cycle, highly focused personal areas of responsibility and strong tendency toward vertical component splitting.

What you get

We provide:

  • Competitive salary
  • Modern office in San Francisco SOMA (Embarcadero)
  • Informal and thriving atmosphere
  • First-class workplace equipment (hardware, tools)
  • No dress code

Get information on how to apply for this position.

Categories: Offsite Blogs

CFP: Extended Deadline: Functional High-Performance Computing (held with ICFP)

General haskell list - Thu, 05/14/2015 - 1:35am
======================================================================
                          CALL FOR PAPERS
                             FHPC 2015
  The 4th ACM SIGPLAN Workshop on Functional High-Performance Computing
                 Vancouver, British Columbia, Canada
                        September 3, 2015
           https://sites.google.com/site/fhpcworkshops/
     Co-located with the International Conference on Functional
                     Programming (ICFP 2015)
  EXTENDED Submission Deadline: Friday, 22 May, 2015 (anywhere on earth)
======================================================================
The FHPC workshop aims at bringing together researchers exploring uses of functional (or more generally, declarative or high-level) programming technology in application domains where high performance is essential. The aim of the meeting is to enable sharing of results, experiences, and novel ideas about how high-level, declarative specificat
Categories: Incoming News

Yesod Web Framework: Deprecating system-filepath and system-fileio

Planet Haskell - Wed, 05/13/2015 - 10:00pm

I posted this information on Google+, but it's worth advertising this a bit wider. The tl;dr is: system-filepath and system-fileio are deprecated, please migrate to filepath and directory, respectively.

The backstory here is that system-filepath came into existence at a time when there were bugs in GHC's handling of character encodings in file paths. system-filepath fixed those bugs, and also provided some nice type safety to prevent accidentally treating a path as a String. However, the internal representation needed to make that work was pretty complicated, and resulted in some weird corner case bugs.

Since GHC 7.4 and up, the original character encoding issues have been resolved. That left a few options: continue to maintain system-filepath for additional type safety, or deprecate. John Millikin, the author of the package, decided on the latter back in December. Since we were using it extensively at FP Complete via other libraries, we decided to take over maintenance. However, this week we decided that, in fact, John was right in the first place.

I've already migrated most of my libraries away from system-filepath (though doing so quickly was a mistake, sorry everyone). One nice benefit of all this is there's no longer a need to convert between different FilePath representations all over the place. I still believe overall that type FilePath = String is a mistake and a distinct datatype would be better, but there's much to be said for consistency.
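For illustration only (my sketch, not the API of any existing package), even a pair of newtypes already prevents the most common mix-ups:

-- Hypothetical wrappers; real packages add much more structure.
newtype DirPath  = DirPath  FilePath deriving (Eq, Ord, Show)
newtype FileName = FileName FilePath deriving (Eq, Ord, Show)

-- The types document intent and stop an arbitrary String from being
-- passed where a path is meant.
joinDirFile :: DirPath -> FileName -> FilePath
joinDirFile (DirPath d) (FileName f) = d ++ "/" ++ f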

Some quick pointers for those looking to convert:

  • You can drop basically all usages of encodeString and decodeString
  • If you're using basic-prelude or classy-prelude, you should get some deprecation warnings around functions like fpToString
  • Most functions have a direct translation, e.g. createTree becomes createDirectoryIfMissing True (yes, the system-filepath and system-fileio names often feel nicer...); see the sketch below
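A minimal before/after sketch of that last translation (the path is made up; the old-style calls assume the Filesystem and Filesystem.Path.CurrentOS modules from system-fileio/system-filepath):

-- Before, with system-fileio/system-filepath:
--   import Filesystem (createTree)
--   import Filesystem.Path.CurrentOS (decodeString)
--   main = createTree (decodeString "output/reports")

-- After, with directory:
import System.Directory (createDirectoryIfMissing)

main :: IO ()
main = createDirectoryIfMissing True "output/reports"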

And for those looking for more type safety: all is not lost. Chris Done has been working on a new package which is aimed at providing additional type safety around absolute/relative and files/directories. It's not yet complete, but is already seeing some interesting work and preventing bugs at some projects we've been working on (and which will be announced soon).

Categories: Offsite Blogs