News aggregator

Douglas M. Auclair (geophf): 1Liners for July 2016

Planet Haskell - Sat, 08/20/2016 - 4:12pm

  • July 14th, 2016: So you have x :: [a] in the IO monad, and the function f :: a -> b. What is the expression that gets you IO [b]?
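One possible answer, as a sketch (the helper name liftListFn is my own, invented for illustration; the puzzle only asks for an expression): map f inside the list, and fmap over the IO layer.

```haskell
-- The puzzle: given an action x :: IO [a] and a pure f :: a -> b,
-- produce IO [b].  One answer: fmap (map f) x.
liftListFn :: Functor m => (a -> b) -> m [a] -> m [b]
liftListFn f = fmap (map f)

main :: IO ()
main = do
  let x = pure [1, 2, 3] :: IO [Int]
  ys <- liftListFn (* 2) x
  print ys  -- prints [2,4,6]
```

The same expression works for any Functor, not just IO, which is why the signature above is more general than the puzzle requires.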
Categories: Offsite Blogs

Philip Wadler: Eric Joyce: Why the Brexit vote pushed me to support Scottish independence

Planet Haskell - Fri, 08/19/2016 - 8:09am
Former Labour MP Eric Joyce explains his change of heart.
At the referendum, still an MP, I gave independence very serious thought right up to the close of the vote. I finally came down on the side of No because I thought big EU states with a potential secession issue, like Spain and France, would prevent an independent Scotland joining the EU. This is obviously no longer the case. And I was, like the great majority of the economists and other experts whose opinion I valued, convinced that being outside the EU would be bonkers – it would badly harm our economy and hurt Scots in all sorts of unforeseen ways too.
The Brexit vote reversed that overnight: all of the arguments we in the unionist camp had used were made invalid at worst, questionable at best. This doesn’t mean they were necessarily all wrong. But it does mean that open-minded, rational No voters should at the very least seriously reconsider things in the light of the staggering new context. They should have an open ear to the experts saying that with independence, jobs in Scotland’s financial and legal service sectors will expand as English and international firms look to keep a foothold in the EU. And to the reasonable prospect that an eventual £50+ oil price might open the way to a final, generational upswing in employment, and to security for Scotland’s extractive industries and their supply chain. And to the idea that preserving Scotland’s social democracy in the face of the Little Englander mentality of right-wing English Tories might be worth the fight.
Categories: Offsite Blogs

PowerShell is open sourced and is available on Linux

Lambda the Ultimate - Fri, 08/19/2016 - 3:23am

Long HN thread ensues. Many of the comments discuss the benefits/costs of basing pipes on typed objects rather than text streams. As someone who should be inclined in favor of the typed object approach, I have to say that I think the text-only folks have the upper hand at the moment. The primary reason is that text as a lingua franca between programs ensures interoperability (and insures against future changes to underlying object models) and keeps code self-documenting. Clearly the Achilles' heel is parsing/unparsing.

As happens often, one is reminded of the discussions of DSLs and pipelines in Jon Bentley's Programming Pearls...

Categories: Offsite Discussion

Roman Cheplyaka: Docker configuration on Fedora

Planet Haskell - Thu, 08/18/2016 - 2:00pm

If you need to change the docker daemon options on Fedora, take a look at these files:

# ls /etc/sysconfig/docker*
/etc/sysconfig/docker
/etc/sysconfig/docker-network
/etc/sysconfig/docker-storage
/etc/sysconfig/docker-storage-setup

In my case, I needed to change the container base size, so I put the following in /etc/sysconfig/docker-storage:

DOCKER_STORAGE_OPTIONS="--storage-opt dm.basesize=20G"

These files are then sourced in /etc/systemd/system/multi-user.target.wants/docker.service, and the variables (such as DOCKER_STORAGE_OPTIONS) are passed to the docker daemon.
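The relevant lines of such a unit file look roughly like this (a sketch of the pattern; the exact paths and variable names may differ between Fedora releases):

```ini
[Service]
EnvironmentFile=-/etc/sysconfig/docker
EnvironmentFile=-/etc/sysconfig/docker-storage
EnvironmentFile=-/etc/sysconfig/docker-network
ExecStart=/usr/bin/docker daemon $OPTIONS $DOCKER_STORAGE_OPTIONS $DOCKER_NETWORK_OPTIONS
```

The leading "-" on EnvironmentFile tells systemd not to fail if the file is missing, and the unquoted $VARIABLE references are word-split into separate daemon arguments.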

Categories: Offsite Blogs

Brent Yorgey: Academic integrity and other virtues

Planet Haskell - Thu, 08/18/2016 - 1:41pm

I have been thinking a lot recently about academic integrity. What does it mean? Why do we care—what is it we fundamentally want students to do and to be? And whatever it is, how do we go about helping them become like that?

As a general principle, I think we ought to focus not just on prohibiting certain negative behaviors, but rather on encouraging positive behaviors (which are in a suitable sense “dual” to the negative behaviors we want to prohibit). Mere prohibitions leave a behavioral vacuum—“OK, don’t do this, so what should I do?”—and incentivize looking for loopholes, seeing how close one can toe the line without breaking the letter of the law. On the other hand, a positive principle actively guides behavior, and in actively striving towards the ideal of the positive principle, one (ideally) ends up far away from the prohibited negative behavior.

In the case of academic integrity, then, it is not enough to say “don’t plagiarize”. In fact, if one focuses on the prohibition itself, this is a particularly difficult one to live by, because academic life is not lived in a vacuum: ideas and accomplishments never spring forth ex nihilo, owing nothing to the ideas and accomplishments of others. In reality, one is constantly copying in big and small ways, explicitly and implicitly, consciously and unconsciously. In fact, this is how learning works! We just happen to think that some forms of copying are acceptable and some are not. Now, there are good reasons for distinguishing acceptable and unacceptable copying; the point is that this is often more difficult and ambiguous for students than we care to admit.

So what is the “dual” of plagiarism? What are the positive virtues which we should instill in our students? One can, of course, say “integrity”, but I don’t think this goes far enough: to have integrity is to adhere to a particular set of moral principles, but which ones? Integrity means being truthful, but truthful about what? It seems this is just another way of saying “don’t plagiarize”, i.e. don’t lie about the source of an idea. I have come up with two other virtues, however, which I think really get at the heart of the issue: thankfulness and generosity. (And in the spirit of academic thankfulness, I should say that Vic Norman first got me thinking along these lines with his paper How Will You Practice Virtue Without Skill?: Preparing Students to be Virtuous Computer Programmers, published in the 2014-2015 Journal of the ACMS; I was also influenced by a discussion of Vic’s paper with several others at the ACMS luncheon at SIGCSE 2016.)

Academic thankfulness has to do with recognizing one’s profound debt to the academic context: to all those thinkers and doers who have come before, and to all those who help you along your journey as a learner, whether professors, other students, or random strangers on the Internet. A thankful student is naturally driven to cite anything and everything, to give credit where credit is due, even to give credit where credit is not technically necessary but can serve as a token of thanks. A thankful student recognizes the hard work and unique contributions of others, rather than seeing others as mere means to their own ends. A thankful student never plagiarizes, since taking something from someone else and claiming it for one’s own is the height of ingratitude.

Academic generosity is about freely sharing one’s own ideas, sacrificing one’s time and energy to help others, and allowing others to share in credit and recognition. Being academically generous is harder than being thankful, because it opens you up to the potential ingratitude of others, but in some sense it is the more important of the two virtues: if no one were generous, no one would have anything to be thankful for. A generous student is naturally driven to cite anything and everything, to give credit and recognition to others, whether earned or not. A generous student recognizes others as worthy collaborators rather than as means to an end. A generous student never plagiarizes, since they know how it would feel to have their own generosity taken advantage of.

There’s more to say—about the circumstances that have led me to think about this, and about how one might actually go about instilling these virtues in students, but I think I will leave that for another post.


Categories: Offsite Blogs

Don Stewart (dons): Haskell devops/dev tools role at Standard Chartered (London)

Planet Haskell - Wed, 08/17/2016 - 2:16am

The Modelling Infrastructure (MI) team at Standard Chartered has an open position for a typed functional programming developer, based in London. MI are a devops-like team responsible for the continuous delivery, testing, tooling and general developer efficiency of the Haskell-based analytics package used by the bank. They work closely with other Haskell dev teams in the bank, providing developer tools, testing and automation on top of our git ecosystem.

The role involves improving the ecosystem for developers and further automation of our build, testing and release infrastructure. You will work with devs in London, as part of the global MI team (located in London and Singapore). Development is primarily in Haskell. Knowledge of the Shake build system and Bake continuous integration system would be helpful. Strong git skills would be an advantage. Having a keen eye for analytics, data analysis and data-driven approaches to optimizing tooling and workflows is desirable.

This is a permanent, associate director-equivalent position in London.

Experience writing typed APIs to external systems such as databases, web services, or pub/sub platforms is very desirable. We like working code, so if you have Hackage or github libraries, we definitely want to see them. We also like StackOverflow answers, blog posts, academic papers, or other arenas where you can show broad FP ability. Demonstrated ability to write Haskell-based tooling around git systems would be super useful.

The role requires physical presence in London, either in our Basinghall or Old Street sites. Remote work is not an option. No financial background is required. Contracting-based positions are also possible if desired.

More info about our development process is in the 2012 PADL keynote, and a 2013 HaskellCast interview.

If this sounds exciting to you, please send your resume to me – donald.stewart <at> sc.com


Tagged: jobs
Categories: Offsite Blogs

Philip Wadler: What I learned as a hired consultant to autodidact physicists

Planet Haskell - Mon, 08/15/2016 - 8:09am
Many programming languages, especially domain-specific ones, are designed by amateurs. How do we prevent obvious irregularities and disasters in languages before they become widespread? (Aspects of JavaScript and R come to mind.)

Folk in the human-computer interaction community have a notion of 'expert evaluation'. I wonder if we could develop something similar for programming languages?

Jakub Zalewski passed me the article 'What I learned as a hired consultant to autodidact physicists', which treads related ground, but for physics rather than computing.
Categories: Offsite Blogs

Ken T Takusagawa: [kctmprub] Selecting function arguments by type

Planet Haskell - Sun, 08/14/2016 - 7:33pm

Some programming languages permit a function to refer to the arguments passed to it by number instead of by name, for example, Perl's @_ array.

We propose a similar mechanism of referring to function arguments by type.  This can only be done if there is only one argument of the given type in the list of parameters.  We introduce a special keyword, ARGUMENT_OF_TYPE, which when used with a type yields the desired argument.  Below, we use a syntax inspired by Haskell.

replicate_strings :: Int -> String -> String;
replicate_strings UNNAMED UNNAMED = concat $ intersperse " " $ replicate (ARGUMENT_OF_TYPE::Int) (ARGUMENT_OF_TYPE::String);

Somewhat more ambitious, possibly more confusing, would be to have type inference figure out which argument is needed.

replicate_strings :: Int -> String -> String;
replicate_strings UNNAMED UNNAMED = concat $ intersperse " " $ replicate ARGUMENT_OF_TYPE ARGUMENT_OF_TYPE;

The keyword UNNAMED marks the function arguments subject to this kind of inference.  The markers are awkward, but without them, it may be difficult to handle a function that returns a function, i.e., a higher order function.  Perhaps it has a polymorphic return type which might or might not be a function.

More ambitiously, if the arguments are of distinct types, then there could be some mechanism by which the order of the arguments at the call site does not matter.  There are previous related ideas for multiple-element tuples and pairs.

Another idea: instead of UNNAMED being a keyword, let the parameters be user-chosen identifiers that do not have to be different for each parameter, but if they have the same name, they must be different types.  Add a (possibly optional) OVERLOADED_PARAMETER annotation to make things less confusing to others reading the code:

replicate_strings2 :: Int -> String -> Int -> String -> (String, String);
replicate_strings2 (OVERLOADED_PARAMETER x) (OVERLOADED_PARAMETER x) (OVERLOADED_PARAMETER y) (OVERLOADED_PARAMETER y) = (concat $ intersperse " " $ replicate (x::Int) (x::String), concat $ intersperse " " $ replicate (y::Int) (y::String))
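For comparison, here is the running example written in ordinary, present-day Haskell with conventional named parameters (my rendering, not part of the proposal):

```haskell
import Data.List (intersperse)

-- The same function with conventional named parameters; the proposal's
-- ARGUMENT_OF_TYPE machinery would let us leave n and s unnamed.
replicate_strings :: Int -> String -> String
replicate_strings n s = concat $ intersperse " " $ replicate n s

main :: IO ()
main = putStrLn (replicate_strings 3 "ab")  -- prints "ab ab ab"
```

(Incidentally, concat $ intersperse " " is just unwords.)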

Categories: Offsite Blogs

Neil Mitchell: The Four Flaws of Haskell

Planet Haskell - Sun, 08/14/2016 - 3:02pm

Summary: Last year I made a list of four flaws with Haskell. Most have improved significantly over the last year.

No language/compiler ecosystem is without its flaws. A while ago I made a list of the flaws I thought might harm people using Haskell in an industrial setting. These are not flaws that impact beginners, or flaws that stop people from switching to Haskell, but those that might harm a big project. Naturally, everyone would come up with a different list, but here is mine.

Package Management: Installing a single consistent set of packages used across a large code base used to be difficult. Upgrading packages within that set was even worse. On Windows, anything that required a new network package was likely to end in tears. The MinGHC project solved the network issue. Stackage solved the consistent set of packages, and Stack made it even easier. I now consider Haskell package management a strength for large projects, not a risk.

IDE: The lack of an IDE certainly harms Haskell. There are a number of possibilities, but whenever I've tried them they have come up lacking. The fact that every Haskell programmer has an entrenched editor choice doesn't make it an easier task to solve. Fortunately, with Ghcid there is at least something near the minimum acceptable standard for everyone. At the same time various IDE projects have appeared, notably the Haskell IDE Engine and Intero. With Ghcid the lack of an IDE stops being a risk, and with the progress on other fronts I hope the Haskell IDE story continues to improve.

Space leaks: As Haskell programs get bigger, the chance of hitting a space leak increases, becoming an inevitability. While I am a big fan of laziness, space leaks are the big downside. Realising space leaks were on my flaws list, I started investigating methods for detecting space leaks, coming up with a simple detection method that works well. I've continued applying this method to other libraries and tools. I'll be giving a talk on space leaks at Haskell eXchange 2016. With these techniques space leaks don't go away, but they can be detected with ease and solved relatively simply - no longer a risk to Haskell projects.
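The canonical example of the kind of leak such detection methods catch is the lazy left fold (my illustration, not from the post):

```haskell
import Data.List (foldl')

-- foldl builds a million-thunk chain before anything is forced
-- (O(n) space); foldl' forces the accumulator at each step (O(1) space).
leaky, strict :: Int
leaky  = foldl  (+) 0 [1 .. 1000000]
strict = foldl' (+) 0 [1 .. 1000000]

main :: IO ()
main = print (leaky == strict)  -- prints True; both equal 500000500000
```

Both expressions compute the same value; only their space behavior differs, which is exactly why such leaks are invisible to ordinary testing and need dedicated detection.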

Array/String libraries: When working with strings/arrays, the libraries that tend to get used are vector, bytestring, text and utf8-string. While each is individually a nice project, they don't work seamlessly together. The utf8-string package provides UTF8 semantics for bytestring, which provides pinned byte arrays. The text package provides UTF16 encoded unpinned Char arrays. The vector package provides mutable and immutable vectors which can be either pinned or unpinned. I think the ideal situation would be a type that was either pinned or unpinned based on size, where the string was just a UTF8 encoded array with a newtype wrapping. Fortunately the foundation library provides exactly that. I'm not brave enough to claim a 0.0.1 package released yesterday has derisked Haskell projects, but things are travelling in the right direction.

It has certainly been possible to use Haskell for large projects for some time, but there were some risks. With the improvements over the last year the remaining risks have decreased markedly. In contrast, the risks of using an untyped or impure language remain significant, making Haskell a great choice for new projects.

Categories: Offsite Blogs

Christoph Breitkopf: IntervalMap Retrospective

Planet Haskell - Fri, 08/12/2016 - 1:56pm
IntervalMap, my package for containers with intervals as keys, has gone through several changes since its inception in late 2011. As is often the case, I had a concrete problem to solve and the initial code did that and only that. Going public on Hackage made some changes necessary. Others cropped up later in the desire to get closer to the interface and performance of the other base containers. Some were even requested by users. Here's a rough history:

Version  Date     Changes
0.1      2011-12  Initial version
0.2      2011-12  Name changes; Hackage release
0.3      2012-08  Split into lazy and strict
0.4      2014-08  Interval type class
0.4.1    2015-12  IntervalSet added
0.5      2016-01  Make interval search return submaps/subsets where appropriate

Looking back at this, what astonishes me most is the late addition of sets, which were only added after a user requested it. Why didn't this come up right at the start? “interval-containers” would have been a better and more general package name.

The really big change - supporting user-defined intervals via a type class - was caused by my initial distrust of nonstandard language extensions. The result of this is one more level in the module structure for the now-preferred generic version, e.g.:

import qualified Data.IntervalMap.Generic.Strict as IVM

What it still unclear is where to put the Interval type class. There are many useful interval data structures and using Data.Interval for this particular purpose would not be acceptable. But having it under IntervalMap does not seem right either. One option would be a name change:

Data.IntervalMap
Data.IntervalMap.Lazy
Data.IntervalMap.Strict
Data.IntervalSet
Data.OrderedInterval

Another is to use a general module:

Data.IntervalContainers
Data.IntervalContainers.Interval
Data.IntervalMap
Data.IntervalMap.Lazy
Data.IntervalMap.Strict
Data.IntervalSet

Or even put everything under that module:

Data.IntervalContainers
Data.IntervalContainers.Interval
Data.IntervalContainers.IntervalMap
Data.IntervalContainers.IntervalMap.Lazy
Data.IntervalContainers.IntervalMap.Strict
Data.IntervalContainers.IntervalSet

None of these seems clearly preferable, so the best option is probably to leave the package name and module structure as is.

Data Structure

Preferring simplicity, I based the implementation on Red-Black trees. While this is in general competitive with the size balanced binary trees used by the base containers, there is one caveat: with plain red-black trees, it is not possible to implement a hedge-union algorithm, or even simple optimizations for the very common “union of huge set and tiny set” operation, unless one adds a redundant size field to each node.

This has become somewhat more pressing since version 0.5 where the search operations return subsets/submaps.

Intervals

My initial fixed-type implementation of intervals allowed mixing intervals with different openness, and the type class has inherited this “feature”. I strongly suspect that no one actually uses this - that is, in all instances of Interval, leftClosed and rightClosed ignore their argument and return a constant value.

If I'm wrong, and someone does use this, please let me know!
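To make the openness question concrete, here is a self-contained sketch (a simplified stand-in for the package's class in Data.IntervalMap.Generic.Interval, which has more members) showing the constant-openness pattern I suspect every instance uses:

```haskell
{-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies #-}

-- Simplified stand-in for the package's Interval class.
class Ord e => Interval i e | i -> e where
  lowerBound, upperBound  :: i -> e
  leftClosed, rightClosed :: i -> Bool

-- A typical instance: openness is a constant, ignoring the argument.
data Closed = Closed Int Int

instance Interval Closed Int where
  lowerBound (Closed a _) = a
  upperBound (Closed _ b) = b
  leftClosed  _ = True
  rightClosed _ = True

main :: IO ()
main = print (lowerBound (Closed 1 5), leftClosed (Closed 1 5))
```

If openness were fixed per type rather than queried per value, the endpoint-ordering code described below could be specialized and simplified.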

That design has a negative impact on performance, though. It turns up in the ordering of intervals and their endpoints. For example, a lower bound is actually lower than the identical lower bound of a second interval, if the first interval is closed and the second open.

Unless the compiler does whole-program optimization, it will not be able to eliminate the Interval dictionaries, and can do very little optimization. That's the reason Interval has many members that could also have been defined as module-level functions: above, below, contains, ... These functions are heavily used by the map and set implementations, and it seems that GHC at least is much better at optimizing these functions when they are defined as members (this is just an observation of mine - I don't know in detail why this should be the case). The disadvantage is that it's possible to break a lot of invariants if you actually define these members in an instance. That would not be possible if they were defined as regular functions. At least I'm in good company, because the same applies to Eq, Ord, and Monad to name just a few.

Data.Map tries to work around this by declaring many functions to be inlinable, effectively getting a bit closer to whole-program optimization. This does not seem feasible for IntervalMap, because the functions are used at deeper nesting, and in particular when constructing nodes.

As far as GHC is concerned, good performance is very dependent on inlining, especially when type classes and their dictionaries are involved. Any improvements in this would certainly benefit IntervalMap. One idea: it might be useful to be able to provide strictness information in type classes. In the case of Interval for example, lowerBound and upperBound always evaluate their argument (disregarding really pathological instances), but there is no way to indicate this to the compiler. Only the instance declarations allow inferring strictness information, and that's not available when compiling library code.

Prospects

Whether a big refactoring or rewrite should be done really depends on the users of the package: you. Please let me know about missing features or performance issues, either via mail, a github issue, or a comment here.

Categories: Offsite Blogs

Kevin Scaldeferri: A Rails 2.3 Performance Gotcha

Planet Haskell - Fri, 08/12/2016 - 9:01am

Recently we upgraded to Rails 2.3 at work. Upon doing so, we saw our application take an aggregate performance hit of about 50%. Spending a little quality time with ruby-prof, one thing that stood out as peculiar was an awful lot of time being spent creating and tending to Ruby Thread objects. Tracing up the call stack, these were coming from inside the memcached client.

Buried in the Rails 2.3 release notes is the innocuous looking statement:

The bundled memcached client has been updated to version 1.6.4.99.

What this fails to tell you is that this upgrade added a timeout option to the client which, as the RDoc will happily inform you, “is a major performance penalty in Ruby 1.8”. Moreover, the default value for the timeout is 0.5s, rather than nil.

Our application makes heavy use of caching, so we were getting massacred by this. To add insult to injury, our most heavily optimized pages were hit the hardest, since they used memcached the most. Adding :timeout => nil to our cache layer immediately restored the previous performance.

Lessons from this story: if you’re a Rails developer using memcached, go add that option to your code; if you’re a library author adding an option with significant performance costs, default to the old behavior; if you’re on the Ruby core team, please add an interface to select so that people don’t do silly things like use Threads to implement timeouts on socket reads.


Categories: Offsite Blogs

Kevin Scaldeferri: Changes

Planet Haskell - Fri, 08/12/2016 - 9:01am

After nearly 6 1/2 years, on Monday I gave notice at Yahoo. After next week, I’ll be moving on to new things (to be described in a later post). The short version is that I was not finding happiness or the career opportunities I wanted at Yahoo any more. The long version... well, you’ll have to come buy me a beer or two if you want that.


Categories: Offsite Blogs

Kevin Scaldeferri: More Erlang Beust Challenge Results (incl. HiPE)

Planet Haskell - Fri, 08/12/2016 - 9:01am

I’ve been tinkering a little more with the Beust Challenge, following up on my previous post. There are a couple significant new developments to report.

First, Hynek (Pichi) Vychodil wrote a faster version using a permutation generator.

Second, I also ported the first “CrazyBob” solution using a bit mask to Erlang.

Third, I discovered that my overly literal ports were actually slowing things down. The CrazyBob code uses an unspecified Listener class that receives the numbers in the series, and presumably computes the actual results from there. (Aside, I cannot actually benchmark against the Java code because I haven’t found what this implementation is.) I fudged this in my original ports by simply spawning a process, which then discarded all the messages it received. After I noticed that the code was pegging both of my CPUs, though, I realized that message passing might actually be the bottleneck in my code. Turns out this was the case, and removing the listener process and just computing the results in the main process actually sped things up substantially.

Finally, I got access to a machine with HiPE enabled.

So... here’s the results. First on my MacBook Pro, without HiPE:

log(Max)  Original  bitmask  crazybob  pichi
4         8ms       2ms      3ms       3ms
5         65ms      11ms     13ms      14ms
6         632ms     52ms     69ms      62ms
7         6.7s      253ms    303ms     272ms
8         72s       1.0s     1.0s      945ms
9         18m       4.7s     3.6s      2.8s
10        (3h)      13s      7.8s      5.3s

The bitmask solution starts out the fastest, but loses out in the end to the more clever solutions. pichi edges out the crazybob solution by about a third.

Now on Linux 2.6 with HiPE:

log(Max)  Original  bitmask  crazybob  pichi
4         4ms       <1ms     1ms       2ms
5         50ms      1ms      6ms       7ms
6         608ms     7ms      34ms      37ms
7         6.9s      35ms     160ms     174ms
8         78s       147ms    619ms     563ms
9         (18m)     460ms    1.8s      1.4s
10        (3h)      1.1s     4.2s      2.4s

And our new winner is... bitmask! HiPE barely helps the original brute-force solutions at all, while crazybob and pichi gain about a factor of two. bitmask, on the other hand, picks up an order of magnitude, and is now only a factor of 3 slower than the posted results for the Java crazybob solution (with unknown differences in hardware).

Conclusion: Erlang doesn’t suck!


Categories: Offsite Blogs

Kevin Scaldeferri: Alpha Male Programmers Are Driving Women Out

Planet Haskell - Fri, 08/12/2016 - 9:01am

Yesterday, DHH made an argument that alpha male programmers aren’t keeping women out of the tech field. I’m of the opinion that he’s wrong and that his argument is flawed, and in a moment I’ll explain why, but let me get a few things out of the way so they don’t distract from the rest of my argument.

First, I don’t think that “alpha males” or the “rockstar” mentality are the only causes of under-representation of women in technology. As far as I can tell, the causes are many, varied, generally difficult to deal with, and in one or two cases may even be valid reasons why we might ultimately accept some degree of imbalance in the field. Second, this is not strictly a gender issue. Some men are also driven away by this behavior, although I think women are more likely to be; and also some “alpha males” happen to be female. Third, this is not a personal attack on DHH or any other individual, although some people might read parts of it in that way. But my goal here is that the range of individuals who find themselves uncomfortably reflected in what I say, but don’t simply reject it all out of hand, might view that discomfort as an opportunity for personal growth. Finally, I am certainly not claiming that I am perfect in this regard. I’ve made mistakes in the past; I will make them again. I simply hope that my friends will be kind enough to point out my mistakes to me, so that I can try to do better.

Okay, now that that’s all out of the way...

I first claim that DHH is wrong. My proof is empirical and anecdotal, but fortunately for me, I’m on the side of this debate that gets to use those techniques. I.e., I’m asserting existence of a phenomenon, rather than non-existence. I know numerous women who have been driven away by alpha male behavior. In some cases, they simply moved to a different team, or a different company. In other cases, they switched to non-programmer roles. And some left the industry entirely. I know this because they told me. They described specific individuals and specific behaviors which drove them away.

With some frequency in these debates, male programmers will claim that they don’t know any women who have left the field for this reason (or who have experienced sexism in the field, or who were offended by the example under discussion, or even just the milder claim that DHH makes, that no one has any idea what to do). I can explain this in only one of two ways: either they don’t know any women in the field to begin with or don’t talk with them beyond the minimal professional requirements, or women are not telling them because they are part of the problem. Perhaps it would be more effective if these women directly confronted the people causing the problem, but the fact of the matter is that most people, men and women, dislike conflict. We’re much more comfortable griping to a sympathetic friend than to the cause of our unhappiness. So consider me something akin to an anonymizing proxy. Without revealing names or too many of the specifics, please trust me when I say that almost every woman in the field experiences and complains about this.

Now I also said that DHH’s argument is flawed, and I will spend the rest of this post pointing out the various flaws I see.

DHH claims alpha males cannot be a problem in programming because the average male programmer is “meek, tame, and introverted” compared to other fields. First off, “alpha males” are by definition not average; they are the most dominant individuals of a group. And, it may even be possible that the general meekness or introversion of programmers makes it easier for a small set of individuals to dominate the interaction, rather than reaching a condition of detente between a group of uniformly assertive individuals.

Second, presumably DHH does not interact with these people from other fields in a professional context. A point repeatedly stressed in this recent “pr0n star” controversy is that it’s not an issue of people being anti-sex or anti-porn; it’s about what’s appropriate in the workplace, or in a professional context. Standards for behavior in a social context are different.

Third, he speaks in terms of whether these other men are more or less “R-rated”. This is not the point. Women are just as “R-rated” as men. They curse. They talk about sex (often more explicitly than men). The issue is not about whether we say or do adult things, it’s about whether we respect each other as human beings and whether we understand the societal norms of what is and is not appropriate in particular contexts.

In fact, in this regard, I’ll defend DHH. Saying “fuck” (in the exclamatory, non-sexual usage) in the course of a technical presentation is not problematic in this day and age within the technology community. I think most of us swear freely in the course of struggling with a nasty bug or production problem. This is a normative expression of frustration within our community, and it does not oppress or disrespect other members of the community. (As far as I know. It’s possible that people just aren’t telling me that it upsets them when I curse at my monitor. If that’s the case, I hope someone will tell me.)
Finally, DHH observes that these other fields have a more even mix of men and women. What he misses is that when the distribution is relatively equal it is generally easier and more comfortable for men to be men and women to be women. It is perhaps counterintuitive, but environments which are heavily skewed call for greater sensitivity to gender or other cultural differences simply because it is so easy to unintentionally create an oppressive or exclusionary atmosphere.

In the final paragraphs of his post, DHH suggests that somehow by respecting women we are squashing some other sort of “edge” and diversity in the community. I’m a little puzzled by what he means by this, and I’m sort of afraid that he thinks that being a heterosexual male who likes to look at scantily-clad women (or who openly admits as much) is somehow “edgy”. It’s not. By definition, hetero males like women; and it’s well established that men tend to be visually oriented. Pointing out that you fall in this category does not make you “diverse”, it makes you a completely typical representative of your gender and orientation. No one needs to be reminded of it.

Moreover, it might be true that maximal gains are had by pushing the edges (although I don’t think that one should naively assume that analogy from physics or economics applies to social endeavors), but for this to work there has to be negative feedback when boundaries are crossed. If the edge-walkers want society to accept their behavior, they must be prepared to apologize, to make reparations, and to correct their course when they go over the line. This is the difference between a trend-setter and a sociopath.

There’s quite a bit more that I could say on this issue, but I fear this may be becoming too long already, and I think it’s probably best to focus only on the arguments presented in this particular post at the moment. To summarize things in a couple of sentences, the phenomenon of women being discouraged by alpha male behavior is real. You merely need to talk, and listen, to women in the field to verify this. (But you might have to earn some trust first.) Comparisons with men in other fields in non-professional settings do not have much relevance to the matter at hand. Claims that respecting the feelings and experiences of a minority group is damaging to the community overall are extraordinary and require extraordinary support. Being a thought leader and being offensive are two very different things.

It’s really quite discouraging that so much of this discussion still seems mired in the question of whether a problem even exists, or whether it is desirable and possible to address the problem. This lack of acceptance leads both to the explicit refusals to acknowledge the validity of the complaints of the offended, as well as the phenomena of false apologies and insincere claims that “I would help if only I knew how (and if it doesn’t require any actual change of behavior on my part)”. Male programmers need to pull their heads out of the sand. The evidence, both hard statistical data and anecdotal, is overwhelming. It also is not hard to find advice about what can be done differently. The hard part is moving from a vague desire for diversity and balance to serious, meaningful, sometimes painful self-examination and commitment to change and improvement. It’s not easy to admit flaws in yourself, to acknowledge when you’ve hurt another person, or to make a true apology. Change doesn’t happen overnight or simply because you say that you want it to. It takes work, but it’s an important part of being a human being and being a member of a community.

Comments

Categories: Offsite Blogs

Kevin Scaldeferri: Speeding up a Rails request by 150ms by changing 1 line

Planet Haskell - Fri, 08/12/2016 - 9:01am

We’re pretty obsessed with performance at Gilt Groupe. You can get a taste for what we’re dealing with, and how we’re dealing with it, from our recent presentation at RailsConf.

One of the techniques we’re using is to precompute what certain high-volume pages will look like at a given time in the future, and store the result as static HTML that we serve to the actual users at that time. For ease of initial development, and because there’s still a fair bit of business logic involved in determining which version of a particular page to serve, this was done inside our normal controller actions which look for a static file to serve, before falling back to generating it dynamically.

We’re now running on Rails 2.3 and, of course, Rails Metal is the new hotness in 2.3. I spent the last couple days looking into how much improvement in static file serving we would see by moving it into the Metal layer. Based on most of what I’ve read, I expected we might shave off a couple milliseconds. This expectation turned out to be dramatically wrong.

Metal components operate outside the realm of the usual Rails timing and logging components, so you don’t get any internal measurements of page performance. Instead, I fired up ab to measure the serving times externally. What I found for the page I was benchmarking was that the Metal implementation took about 5ms. The old controller action took 170ms. But, wait... the Rails logs were only reporting 8ms for that action. Something was fishy.

I started inserting timers at various places in the Rails stack, trying to figure out where the other 160ms was going. A little bit was routing logic and other miscellaneous overhead, but even setting a timer around the very entry point into the Rails request-serving path, I was only seeing 15ms being spent. This was getting really puzzling, because at this point, where a Rack response is returned to the web server, I expected things to look identical between Metal and ActionController. However, looking more closely at the response objects I discovered the critical difference. The Metal response body was a plain [String], while the controller returned an ActionController::Response.

I went into the Rails source and found the each method for ActionController::Response. Here it is:

def each(&callback)
  if @body.respond_to?(:call)
    @writer = lambda { |x| callback.call(x) }
    @body.call(self, self)
  elsif @body.is_a?(String)
    @body.each_line(&callback)
  else
    @body.each(&callback)
  end
  @writer = callback
  @block.call(self) if @block
end

The critical line is the case where the body is a String. The code iterates over each line in the response. Each line is written individually to the network socket. In the case of the particular page I was looking at, that was 1300 writes. Ouch.

To confirm this was the problem, I changed that line to

yield @body

With the whole body being sent in a single write, ab reported 15ms per request, right in line with what I measured inside Rails.

1 line changed. 150ms gained. Not too bad.

The performance pessimization we uncovered is particularly insidious because it’s completely invisible to all the usual Rails monitoring tools. It doesn’t show up in your logged response time; you won’t see it in NewRelic or TuneUp. The only way you’re going to find out about it is by running an external benchmarking tool. Of course, this is always a good idea, but it’s easy to forget to do it, because the tools that work inside the Rails ecosystem are so nice. The lesson here is: if you’re working on performance optimizations, make sure to always get a second opinion.

Comments

Categories: Offsite Blogs

Functional Jobs: Software Engineer (Haskell/Clojure) at Capital Match (Full-time)

Planet Haskell - Fri, 08/12/2016 - 5:05am

Overview

Capital Match is a leading marketplace lending and invoice financing platform in Singapore. Our in-house platform, mostly developed in Haskell, has in the last year processed more than USD 10 million in business loans, with strong monthly growth (currently USD 1.5-2.5 million per month). We are also eyeing expansion into new geographies and product categories. Very exciting times!

We have just secured another funding round to build world-class technology as our key business differentiator. The key components include a credit risk engine, seamless banking integration and end-to-end product automation from loan origination to debt collection.

Responsibilities

We are looking to hire a software engineer with a minimum of 2-3 years of coding experience. The current tech team includes a product manager and 3 software engineers. We are also in the process of hiring a CTO.

The candidate should have been involved in the development of multiple web-based products from scratch, and should be interested in all aspects of the creation, growth and operations of a secure web-based platform: front-to-back feature development, distributed deployment and automation in the cloud, build and test automation, etc.

Background in fintech and especially lending / invoice financing space would be a great advantage.

Requirements

Our platform is primarily developed in Haskell with an Om/ClojureScript frontend. We are expecting our candidate to have experience working with a functional programming language e.g. Haskell/Scala/OCaml/F#/Clojure/Lisp/Erlang.

Deployment and production is managed with Docker containers using standard cloud infrastructure so familiarity with Linux systems, command-line environment and cloud-based deployment is mandatory. Minimum exposure to and understanding of XP practices (TDD, CI, Emergent Design, Refactoring, Peer review and programming, Continuous improvement) is expected.

We are looking for candidates who live in, or are willing to relocate to, Singapore.

Offer

We offer a combination of salary and equity depending on experience and skills of the candidate.

Most expats who relocate to Singapore do not have to pay their home country taxes, and the effective local tax rate in Singapore is roughly 5% on the proposed salary range.

Visa sponsorship will be provided.

Singapore is a great place to live, a vibrant city rich with diverse cultures, a very strong financial sector and a central location in Southeast Asia.

Get information on how to apply for this position.

Categories: Offsite Blogs

FP Complete: Bitrot-free Scripts

Planet Haskell - Thu, 08/11/2016 - 8:15am

Sneak peek: Run docker run --rm -p 8080:8080 snoyberg/file-server-demo and open http://localhost:8080.

We've all been there. We need to write some non-trivial piece of functionality, and end up doing it in bash or perl because that's what we have on the server we'll be deploying to. Or because it's the language we can most easily rely on being present at a consistent version on our coworkers' machines. We'd rather use a different language and leverage more advanced, non-standard libraries, but we can't do that reliably.

One option is to create static executables or to ship around Docker images. This is great for many use cases, and we are going to have a follow-up blog post about using Docker and Alpine Linux to make such static executables. But there are at least two downsides to this approach:

  • It's not possible to modify a static executable directly. You need to have access to the source code and the tool chain used to produce it.
  • The executable is tied to a single operating system; good luck getting your Linux executable to run on your OS X machine.

Said another way: there are good reasons why people like to use scripting languages. This blog post is going to demonstrate doing some non-trivial work with Haskell, and do so with a fully reproducible and trivially installed toolchain, supported on multiple operating systems.

Why Haskell?

Haskell is a functional programming language with high performance, great safety features, and a large ecosystem of open source libraries to choose from. Haskell programs are high level enough to be readable and modifiable by non-experts, making it ideal for these kinds of shared scripts. If you're new to Haskell, learn more on haskell-lang.org.

The task

We're going to put together a simple file server with upload capability. We're going to assume a non-hostile environment (like a corporate LAN with no external network access), and therefore not put in security precautions like upload size limits. We're going to use the relatively low-level Web Application Interface instead of a web framework. While it makes the code a bit longer, there's no magic involved. Common frameworks in Haskell include Yesod and Servant. We're going to host this all with the blazingly fast Warp web server.

Get Stack

Stack is a cross-platform program for developing Haskell projects. While it has many features, in our case the most important bit is that it can:

  • Download a complete Haskell toolchain for your OS
  • Install Haskell libraries from a curated package set
  • Run Haskell source files directly as a script (we'll show how below)

Check out the Get Started page on haskell-lang.org to get Stack on your system.

The code

You can see the full source code on Github. Let's step through the important parts here.

Script interpreter

We start off our file with something that is distinctly not Haskell code:

#!/usr/bin/env stack
{- stack --resolver lts-6.11 --install-ghc runghc
    --package shakespeare
    --package wai-app-static
    --package wai-extra
    --package warp
-}

With this header, we've made our file executable from the shell. If you chmod +x the source file, you can run ./FileServer.hs. The first line is a standard shebang. After that, we have a comment that provides Stack with the relevant command line options. These options tell it to:

  • Use the Haskell Long Term Support (LTS) 6.11 package set. From now through the rest of time, you'll be running against the same set of packages, so no worries about your code bitrotting!
  • Install GHC, the Glasgow Haskell Compiler. LTS 6.11 indicates what version of GHC is needed (GHC 7.10.3). Once again: no bitrot concerns!
  • runghc says we'd like to run a script with GHC
  • The rest of the lines specify which Haskell library packages we depend on. You can see a full list of available libraries in LTS 6.11 on the Stackage server

For more information on Stack's script interpreter support, see the Stack user guide.
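To see how little is required, here is a hypothetical minimal script (the file name Hello.hs is an assumption for illustration) using the same interpreter header, depending on nothing beyond the base library:

```haskell
#!/usr/bin/env stack
{- stack --resolver lts-6.11 --install-ghc runghc -}

-- A minimal stack script: no extra --package flags needed,
-- since it only uses the base library.
-- Run it with: chmod +x Hello.hs && ./Hello.hs
main :: IO ()
main = putStrLn "Hello from a bitrot-free script!"
```

The pinned resolver means this toy script, like the file server, will build against the same package set years from now.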

Command line argument parsing

Very often with these kinds of tools, we need to handle command line arguments. Haskell has some great libraries for doing this in an elegant way. For example, see the optparse-applicative library tutorial. However, if you want to go simple, you can also just use the getArgs function to get a list of arguments. We're going to add support for a sanity argument, which will allow us to sanity-check that running our application works:

main :: IO ()
main = do
    args <- getArgs
    case args of
        ["sanity"] -> putStrLn "Sanity check passed, ready to roll!"
        [] -> do
            putStrLn "Launching application"
            -- Run our application (defined below) on port 8080
            run 8080 app
        _ -> error $ "Unknown arguments: " ++ show args

Routing

We're going to support three different routes in our application:

  • The /browse/... tree should allow you to get a directory listing of files in the current directory, and view/download individual files.
  • The /upload page accepts a file upload and writes the uploaded content to the current directory.
  • The homepage (/) should display an HTML page with a link to /browse and provide an HTML upload form targeting /upload.

Thanks to pattern matching in Haskell, getting this to work is very straightforward:

app :: Application
app req send =
    -- Route the request based on the path requested
    case pathInfo req of
        -- "/": send the HTML homepage contents
        [] -> send $ responseBuilder
            status200
            [("Content-Type", "text/html; charset=utf-8")]
            (runIdentity $ execHtmlT homepage)
        -- "/browse/...": use the file server to allow directory
        -- listings and downloading files
        ("browse":rest) ->
            -- We create a modified request that strips off the
            -- "browse" component of the path, so that the file server
            -- does not need to look inside a /browse/ directory
            let req' = req { pathInfo = rest }
             in fileServer req' send
        -- "/upload": handle a file upload
        ["upload"] -> upload req send
        -- anything else: 404
        _ -> send $ responseLBS
            status404
            [("Content-Type", "text/plain; charset=utf-8")]
            "Not found"

The most complicated bit above is the path modification for the /browse tree, which is something a web framework would handle for us automatically. Remember: we're doing this low level to avoid extra concepts, real world code is typically even easier than this!
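To make the list-based pattern matching concrete on its own, here is a small self-contained sketch (the route function and its result strings are hypothetical illustrations, not part of the server) of how matching on path segments dispatches to different handlers:

```haskell
-- Hypothetical standalone sketch of list-based routing, mirroring
-- the pathInfo matching above without any WAI types involved.
route :: [String] -> String
route []              = "homepage"
route ("browse":rest) = "file server for " ++ show rest
route ["upload"]      = "upload handler"
route _               = "404 not found"

main :: IO ()
main = mapM_ (putStrLn . route)
    [ []
    , ["browse", "docs", "readme.txt"]
    , ["upload"]
    , ["secret"]
    ]
```

As in the case expression above, clause order matters: the first pattern that matches wins, so the catch-all 404 clause must come last.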

Homepage content

An area where Haskell really excels is Domain Specific Languages (DSLs). We're going to use Hamlet for HTML templating. There are many other options in the Haskell world favoring other syntaxes, such as the Lucid library (which provides a Haskell-based DSL), plus implementations of language-agnostic templates like mustache.

Here's what our HTML page looks like in Hamlet:

homepage :: Html ()
homepage = [shamlet|
$doctype 5
<html>
    <head>
        <title>File server
    <body>
        <h1>File server
        <p>
            <a href=/browse/>Browse available files
        <form method=POST action=/upload enctype=multipart/form-data>
            <p>Upload a new file
            <input type=file name=file>
            <input type=submit>
|]

Note that Hamlet - like Haskell itself - uses significant whitespace and indentation to denote nesting.

The rest

We're not going to cover the rest of the code in the Haskell file. If you're interested in the details, please read the comments there, and feel free to ask questions about any ambiguous bits (hopefully the inline comments give enough clarity on what's going on).

Running

Download the FileServer.hs file contents (or copy-paste, or clone the repo), make sure the file is executable (chmod +x FileServer.hs), and then run:

$ ./FileServer.hs

If you're on Windows, you can instead run:

> stack FileServer.hs

That's correct: the same source file will work on POSIX systems and Windows as well. The only requirement is Stack and GHC support. Again, to get Stack on your system, please see the Get Started page.

The first time you run this program, it will take a while to complete. This is because Stack will need to download and install GHC and necessary libraries to a user-local directory. Once complete, the results are kept on your system, so subsequent runs will be almost instantaneous.

Once running, you can view the app on localhost:8080.

Dockerizing

Generally, I wouldn't recommend Dockerizing a source file like this; it makes more sense to Dockerize a compiled executable. We'll cover how to do that another time (though sneak preview: Stack has built in support for generating Docker images). For now, let's actually Dockerize the source file itself, complete with Stack and the GHC toolchain.

You can check out the Dockerfile on Github. That file may be slightly different from what I cover here.

FROM ubuntu:16.04 MAINTAINER Michael Snoyman

Nothing too interesting...

ADD https://github.com/Yelp/dumb-init/releases/download/v1.1.3/dumb-init_1.1.3_amd64 /usr/local/bin/dumb-init RUN chmod +x /usr/local/bin/dumb-init

While interesting, this isn't Haskell-specific. We're just using an init process to get proper handling for signals. For more information, see dumb-init's announcement blog post.

ADD https://get.haskellstack.org/get-stack.sh /usr/local/bin/ RUN sh /usr/local/bin/get-stack.sh

Stack has a shell script available to automatically install it on POSIX systems. We just download that script and then run it. This is all it takes to have a Haskell-ready system set up: we're now ready to run script interpreter based files like our FileServer.hs!

COPY FileServer.hs /usr/local/bin/file-server RUN chmod +x /usr/local/bin/file-server

We're copying over the source file we wrote and then ensuring it is executable. Interestingly, we can rename it to not include a .hs file extension. There is plenty of debate in the world around whether scripts should or should not include an extension indicating their source language; Haskell is allowing that debate to perpetuate :).

RUN useradd -m www && mkdir -p /workdir && chown www /workdir USER www

While not strictly necessary, we'd rather not run our executable as the root user, for security purposes. Let's create a new user, create a working directory to store files in, and run all subsequent commands as the new user.

RUN /usr/local/bin/file-server sanity

As I mentioned above, that initial run of the server takes a long time. We'd like to do the heavy lifting of downloading and installing during the Docker image build rather than at runtime. To make this happen, we run our program once with the sanity command line argument, so that it immediately exits after successfully starting up.

CMD ["/usr/local/bin/dumb-init", "/usr/local/bin/file-server"] WORKDIR /workdir EXPOSE 8080

Finally, we use CMD, WORKDIR, and EXPOSE to make it easier to run. This Docker image is available on Docker Hub, so if you'd like to try it out without doing a full build on your local machine:

docker run --rm -p 8080:8080 snoyberg/file-server-demo

You should be able to play with the application on http://localhost:8080.

What's next

As you can see, getting started with Haskell as a scripting language is easy. You may be interested in checking out the turtle library, which is a Shell scripting DSL written in Haskell.

If you're ready to get deeper into Haskell, I'd recommend:

FP Complete both supports the open source Haskell ecosystem and provides commercial support for those seeking it. If you're interested in learning more about how FP Complete can help you and your team be more successful in your development and devops work, you can learn about what services we offer or contact us for a free consultation.

Categories: Offsite Blogs

Functional Jobs: (Senior) Scala Developer at SAP SE (Full-time)

Planet Haskell - Thu, 08/11/2016 - 3:56am

About SAP

SAP is a market leader in enterprise application software, helping companies of all sizes and industries run better. SAP empowers people and organizations to work together more efficiently and use business insight more effectively. SAP applications and services enable our customers to operate profitably, adapt continuously, and grow sustainably.

What you'll do:

You will be a member of the newly formed Scala development experience team. You will support us with the design and development of a Scala and cloud-based business application development and runtime platform. The goal of the platform is to make cloud-based business application development in the context of S/4 HANA as straightforward as possible. The team will be distributed across Berlin, Potsdam, Walldorf, and Bangalore.

Your tasks as a (Senior) Scala Developer will include:

  • Design and development of libraries and tools for business application development
  • Design and development of tools for operating business applications
  • Explore, understand, and implement the most recent technologies
  • Contribute to open source software (in particular within the Scala ecosystem)

Required skills:

  • Master’s degree in computer science, mathematics, or related field
  • Excellent programming skills and a solid foundation in computer science with strong competencies in data structures, algorithms, databases, and software design
  • Solid understanding of object-oriented concepts and a basic understanding of functional programming concepts
  • Good knowledge of Scala, Java, C++, or similar object-oriented programming languages
  • Strong analytical skills
  • Reliable and open-minded with strong teamworking skills, determined to reach a goal on time, as well as the ability to work independently and to prioritize
  • Ability to get quickly up-to-speed in a complex, new environment
  • Proficiency in spoken and written English

Beneficial skills

  • Ph.D. in computer science
  • Solid understanding of functional programming concepts
  • Good knowledge of Scala, OCaml, SML, or Haskell
  • Experience with Scala and Scala.js
  • Experience with meta programming in Scala, e.g., using Scala’s macro system
  • Knowledge of SAP technologies and products
  • Experience with the design of distributed systems, e.g., using Akka

What we offer

  • Modern and innovative office locations
  • Free lunch and free coffee
  • Flexible working hours
  • Training opportunities and conference visits
  • Fitness room with a climbing wall
  • Gaming room with table tennis, foosball tables, and a PlayStation
  • Friendly colleagues and the opportunity to work within a highly diverse team which has expert knowledge in a wide range of technologies

Get information on how to apply for this position.

Categories: Offsite Blogs