News aggregator

diffs on HaskellWiki

haskell-cafe - Tue, 04/19/2016 - 2:18am
I've noticed that when viewing changes on HaskellWiki, the diffs are missing. We had a similar problem on the OpenSSL wiki (which, like HaskellWiki, is based on MediaWiki), and the sysadmin said he fixed the problem by switching from the external diff engine to the internal diff engine. Is there someone who could look into this and possibly make that change to the wiki's configuration? I looked around the HaskellWiki and couldn't find any mention of how to contact a maintainer. Thanks, --Patrick
Categories: Offsite Discussion

Sending email

haskell-cafe - Mon, 04/18/2016 - 10:42pm
Hi everyone, I'm trying to use the Network.Mail.SMTP library to send email:

{-# LANGUAGE OverloadedStrings #-}
module Main where

import Control.Exception
import qualified Data.Text as T
import qualified Data.Text.Lazy as LT
import Network.Mail.SMTP

main :: IO ()
main = do
  sendEmail ("Person sender", "sender< at >somewhere.com")
            [("Person recipient", "recipient< at >somewhere.com")]
            "Test email"
            "Some message goes here."

sendEmail :: (T.Text, T.Text) -> [(T.Text, T.Text)] -> T.Text -> T.Text -> IO ()
sendEmail (fromName, fromEmail) toAddresses subject' body' = do
  let toNameAddrs = map (\(toName, toEmail) -> Address (Just toName) toEmail) toAddresses
      msg = simpleMail (Address (Just fromName) fromEmail)
                       toNameAddrs
                       []
                       []
                       subject'
                       [ plainTextPart $ LT
Categories: Offsite Discussion

Is it possible to make lazy combinators for IO?

haskell-cafe - Mon, 04/18/2016 - 9:18pm
If f :: a -> IO a for some a and I want to use mfix f, then f must not inspect its argument in any way, or the computation will get stuck. In some cases, this seems a bit harsh. For example, mfix (\x -> fmap (3 :) (x `seq` readLn)) looks perfectly reasonable. There is no need to inspect the return [] action to know that the final result of the computation will begin with 3:. Is there a lazy IO mapping function somewhere that can work such magic?
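One way such a "lazy mapping" can be sketched is with unsafeInterleaveIO, which defers the wrapped action until the part of the result that needs it is demanded. The helper name lazyFmap below is made up for illustration, and the usual caveats apply (the deferred effect runs at an unpredictable time, or not at all):

import Control.Monad.Fix (mfix)
import System.IO.Unsafe (unsafeInterleaveIO)

-- Hypothetical helper: like fmap, but the wrapped IO action only runs when
-- the part of the result that depends on it is forced.
lazyFmap :: (a -> b) -> IO a -> IO b
lazyFmap f io = fmap f (unsafeInterleaveIO io)

-- With it, the example from the post no longer gets stuck: the head of the
-- result (3) is available before readLn ever runs.
example :: IO [Int]
example = mfix (\x -> lazyFmap (3 :) (x `seq` readLn))

-- ghci> example   -- type e.g. [1,2,3] when prompted; the result is [3,1,2,3]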
Categories: Offsite Discussion

Pretty-printing haskell source code - could the linebreaking be improved?

haskell-cafe - Mon, 04/18/2016 - 7:56pm
When I pretty print a certain declaration (with default width) I get this: upeekRow _unv (_xconc< at >(ImageSize {})) = Node (upeekCons idPath Nothing) (concatMap (\f -> forestMap (liftPeek f) (map (\x' -> Node (upeekCons idPath (Just (u x' :: Univ)) :: UPeek Univ Dimension) []) (toListOf (toLens (f idPath) . ulens' (Proxy :: Proxy Univ)) _xconc :: [Dimension]))) [UPath_ImageSize_dim] ++ (concatMap (\f -> forestMap (liftPeek f) (map (\x' -> Node (upeekCons idPath (Just (u x' :: Univ)) :: UPeek Univ Double) []) (toListOf (toLens (f idPath) . ulens' (Proxy :: Proxy Univ)) _xconc :: [Double]))) [UPath_ImageSize_size] ++ (concatMap (\f -> forestMap (liftPeek f) (map (\x' -> Node (upeekCons idPath (Just (u x' :: Univ)) :: UPeek Univ Units) []) (toListOf (toLens (f idPath) . ulens' (Proxy :: Proxy Univ)) _xconc :: [Units]))) [UPath_ImageSize_units] ++ []))) However, wh
Categories: Offsite Discussion

Existential quantification of config data types

haskell-cafe - Mon, 04/18/2016 - 7:45pm
Hi, I'm running into a weird problem I've never had before. I want to define a datatype like: data Network = forall a. Layer a => Network [a] for which I have things that implement Layer, like data TypicalLayer = ... instance Layer TypicalLayer where ... with this, I want to be able to load a Network from a config file where Network can consist of different data types all implementing Layer. Is there a nice way to do this without making a data type that combines all of my layer types, something like: data LayerType = TypicalLayerType TypicalLayer | AnotherLayerType AnotherLayer ... deriving (Show,Read) Thanks for the help! Charlie Durham
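One common pattern, for what it's worth, is to put the existential around each element rather than around the whole list, so layers of different concrete types can be mixed. The sketch below is illustrative only: the Layer class, its describe method and the "typical" tag are made up here, and loading from a config file still needs one branch per concrete layer type, since Read cannot be derived through the existential wrapper.

{-# LANGUAGE ExistentialQuantification #-}

import Text.Read (readMaybe)

-- An illustrative Layer class; the real one presumably has more methods.
class Layer a where
  describe :: a -> String

data TypicalLayer = TypicalLayer { weights :: [Double] }
  deriving (Show, Read)

instance Layer TypicalLayer where
  describe l = "typical layer with " ++ show (length (weights l)) ++ " weights"

-- Wrapping each element (rather than the whole list) lets one Network hold
-- layers of different concrete types.
data SomeLayer = forall a. Layer a => SomeLayer a

newtype Network = Network [SomeLayer]

-- Dispatch on a tag from the config file to decide which concrete type to read.
readLayer :: String -> String -> Maybe SomeLayer
readLayer "typical" s = SomeLayer <$> (readMaybe s :: Maybe TypicalLayer)
readLayer _         _ = Nothing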
Categories: Offsite Discussion

JTRES 2016 Call for Papers

General haskell list - Mon, 04/18/2016 - 9:32am
====================================================================== CALL FOR PAPERS The 14th Workshop on Java Technologies for Real-Time and Embedded Systems JTRES 2016 Part of the Managed Languages & Runtimes Week 2016 29 August - 2 September 2016 Lugano, Switzerland http://jtres2016.compute.dtu.dk/ ====================================================================== Submission deadline: 12 June, 2016 Submission site: https://easychair.org/conferences/?conf=jtres2016 ====================================================================== Over 90% of all microprocessors are now used for real-time and embedded applications. Embedded devices are deployed on a broad diversity of distinct processor architectures and operating systems. The application software for many embedded devices is custom tailored if
Categories: Incoming News

Dominic Steinitz: Every Manifold is Paracompact

Planet Haskell - Sun, 04/17/2016 - 2:29am
Introduction

In their paper Betancourt et al. (2014), the authors give a corollary which starts with the phrase “Because the manifold is paracompact”. It wasn’t immediately clear why the manifold was paracompact, or indeed what paracompactness meant, although it was clearly something like compactness, which means that every cover has a finite sub-cover.

It turns out that every manifold is paracompact and that this is intimately related to partitions of unity.

Most of what I have written below is taken from some hand-written anonymous lecture notes I found by chance in the DPMMS library at Cambridge University. To whoever wrote them: thank you very much.

Limbering Up

Let be an open cover of a smooth manifold . A partition of unity on M, subordinate to the cover is a finite collection of smooth functions

where for some such that

and for each there exists such that
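In standard notation (the symbols below are assumed, not necessarily the author's), a partition of unity subordinate to an open cover \(\{U_\alpha\}\) of \(M\) is a finite collection of smooth functions

\[
\varphi_i : M \to [0,1], \quad i = 1, \dots, N, \qquad \sum_{i=1}^{N} \varphi_i(x) = 1 \ \text{for all } x \in M,
\]

such that for each \(i\) there exists an \(\alpha\) with \(\operatorname{supp}\varphi_i \subseteq U_\alpha\).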

We don’t yet know partitions of unity exist.

First define

Techniques of classical analysis easily show that is smooth ( is the only point that might be in doubt and it can be checked from first principles that for all ).

Next define

Finally we can define by . This has the properties

  • if
  • if
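This is the usual three-step bump-function construction; written out explicitly (the particular radii 1 and 2 below are an assumption for concreteness, not necessarily the author's choice):

\[
f(x) = \begin{cases} e^{-1/x} & x > 0 \\ 0 & x \le 0 \end{cases},
\qquad
g(x) = \frac{f(x)}{f(x) + f(1-x)},
\qquad
h(x) = g(2 - |x|),
\]

so that \(g\) is smooth with \(g(x) = 0\) for \(x \le 0\) and \(g(x) = 1\) for \(x \ge 1\), and hence \(h(x) = 1\) for \(|x| \le 1\) while \(h(x) = 0\) for \(|x| \ge 2\).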

Now take a point centred in a chart so that, without loss of generality, (we can always choose so that the open ball and then define another chart with ).

Define the images of the open and closed balls of radius and respectively

and further define bump functions

Then is smooth and its support lies in .

By compactness, the open cover has a finite subcover . Now define

by

Then is smooth, and . Thus is the required partition of unity.
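Concretely, if \(\psi_1, \dots, \psi_N\) denote the bump functions attached to the finite subcover (the names are assumed here), the normalisation being described is

\[
\varphi_i = \frac{\psi_i}{\sum_{j=1}^{N} \psi_j},
\]

which is well defined and smooth because every point lies in a ball on which some \(\psi_j\) is identically 1, so the denominator never vanishes; the \(\varphi_i\) sum to 1 and each is supported in a single chart.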

Paracompactness

Because is a manifold, it has a countable basis and for any point , there must exist with . Choose one of these and call it . This gives a countable cover of by such sets.

Now define

where, since is compact, is a finite subcover.

And further define

where again, since is compact, is a finite subcover.

Now define

Then is compact, is open and . Furthermore, and only intersects with and .

Given any open cover of , each can be covered by a finite number of open sets in contained in some member of . Thus every point in can be covered by at most a finite number of sets from and and which are contained in some member of . This is a locally finite refinement of and which is precisely the definition of paracompactness.

To produce a partition of unity we define bump functions as above on this locally finite cover and note that locally finite implies that is well defined. Again, as above, define

to get the required result.

Bibliography

Betancourt, M. J., Simon Byrne, Samuel Livingstone, and Mark Girolami. 2014. “The Geometric Foundations of Hamiltonian Monte Carlo,” October, 45. http://arxiv.org/abs/1410.5110.


Categories: Offsite Blogs

[ANN] sarsi 0.0.2.0 - A universal quickfix toolkit and its protocol (nvim/vim)

haskell-cafe - Sat, 04/16/2016 - 1:23pm
Hello Café, Here is a new tool which should basically improve the "quick fixing" experience in Haskell with vim/neovim. The aim is not to replace any of the great features provided by integrating the type checker in the editor, but to complement them with a different approach. I explain the motivation here: http://aloiscochard.blogspot.ch/2016/04/quickfix-all-things-with-sarsi.html If you feel like this might be a tool for you, you can jump straight to the README to find the install and usage instructions: https://github.com/aloiscochard/sarsi I did test it extensively with stack; let me know if you have issues using another tool (I only pipe stderr, which is fine for stack, but I suspect this might not be the case for others). Thanks
Categories: Offsite Discussion

is monomorphism restriction necessary?

haskell-cafe - Sat, 04/16/2016 - 12:53pm
Sorry for the question that probably is already answered somewhere, but I do not get it from the basic documentation I have looked at so far. Is the monomorphism restriction really necessary, or is it just a convenience? In the example from A History of Haskell genericLength :: Num a => [b] -> a f xs = (len, len) where len = genericLength xs can't the monomorphism restriction be replaced with just adding the signature f :: Num a => [b] -> (a, a) ? If yes, are there other cases where there is no way to get the desired behavior without enabling the monomorphism restriction? Alexey.
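For what it's worth, the proposed alternative does compile as written; genericLength is the standard function from Data.List, and the explicit signature pins both components of the pair to the same Num type a. Whether len is then computed once or twice is the sharing question the restriction was originally introduced to settle, which is separate from the typing question. A minimal compilable sketch:

import Data.List (genericLength)

-- The post's proposed variant: the signature alone fixes len's type to a,
-- so no monomorphism restriction is needed for this to typecheck.
f :: Num a => [b] -> (a, a)
f xs = (len, len)
  where
    len = genericLength xs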
Categories: Offsite Discussion

Mark Jason Dominus: How to recover lost files added to Git but not committed

Planet Haskell - Fri, 04/15/2016 - 9:31pm

A few days ago, I wrote:

If you lose something [in Git], don't panic. There's a good chance that you can find someone who will be able to hunt it down again.

I was not expecting to have a demonstration ready so soon. But today I finished working on a project, I had all the files staged in the index but not committed, and for some reason I no longer remember I chose that moment to do git reset --hard, which throws away the working tree and the staged files. I may have thought I had committed the changes. I hadn't.

If the files had only been in the working tree, there would have been nothing to do but to start over. Git does not track the working tree. But I had added the files to the index. When a file is added to the Git index, Git stores it in the repository. Later on, when the index is committed, Git creates a commit that refers to the files already stored. If you know how to look, you can find the stored files even before they are part of a commit.

Each file added to the Git index is stored as a “blob object”. Git stores objects in two ways. When it's fetching a lot of objects from a remote repository, it gets a big zip file with an attached table of contents; this is called a pack. Getting objects from a pack can be a pain. Fortunately, not all objects are in packs. When you just use git-add to add a file to the index, git makes a single object, called a “loose” object. The loose object is basically the file contents, gzipped, with a header attached. At some point Git will decide there are too many loose objects and assemble them into a pack.

To make a loose object from a file, the contents of the file are checksummed, and the checksum is used as the name of the object file in the repository and as an identifier for the object, exactly the same as the way git uses the checksum of a commit as the commit's identifier. If the checksum is 0123456789abcdef0123456789abcdef01234567, the object is stored in

.git/objects/01/23456789abcdef0123456789abcdef01234567

The pack files are elsewhere, in .git/objects/pack.

So the first thing I did was to get a list of the loose objects in the repository:

cd .git/objects
find ?? -type f | perl -lpe 's#/##' > /tmp/OBJ

This produces a list of the object IDs of all the loose objects in the repository:

00f1b6cc1dfc1c8872b6d7cd999820d1e922df4a
0093a412d3fe23dd9acb9320156f20195040a063
01f3a6946197d93f8edba2c49d1bb6fc291797b0
…
ffd505d2da2e4aac813122d8e469312fd03a3669
fff732422ed8d82ceff4f406cdc2b12b09d81c2e

There were 500 loose objects in my repository. The goal was to find the eight I wanted.

There are several kinds of objects in a Git repository. In addition to blobs, which represent file contents, there are commit objects, which represent commits, and tree objects, which represent directories. These are usually constructed at the time the commit is done. Since my files hadn't been committed, I knew I wasn't interested in these types of objects. The command git cat-file -t will tell you what type an object is. I made a file that related each object to its type:

for i in $(cat /tmp/OBJ); do echo -n "$i "; git type $i; done > /tmp/OBJTYPE

The git type command is just an alias for git cat-file -t. (Funny thing about that: I created that alias years ago when I first started using Git, thinking it would be useful, but I never used it, and just last week I was wondering why I still bothered to have it around.) The OBJTYPE file output by this loop looks like this:

00f1b6cc1dfc1c8872b6d7cd999820d1e922df4a blob
0093a412d3fe23dd9acb9320156f20195040a063 tree
01f3a6946197d93f8edba2c49d1bb6fc291797b0 commit
…
fed6767ff7fa921601299d9a28545aa69364f87b tree
ffd505d2da2e4aac813122d8e469312fd03a3669 tree
fff732422ed8d82ceff4f406cdc2b12b09d81c2e blob

Then I just grepped out the blob objects:

grep blob /tmp/OBJTYPE | f 1 > /tmp/OBJBLOB

The f 1 command throws away the types and keeps the object IDs. At this point I had filtered the original 500 objects down to just 108 blobs.

Now it was time to grep through the blobs to find the ones I was looking for. Fortunately, I knew that each of my lost files would contain the string org-service-currency, which was my name for the project I was working on. I couldn't grep the object files directly, because they're gzipped, but the command git cat-file disgorges the contents of an object:

for i in $(cat /tmp/OBJBLOB ) ; do git cat-file blob $i | grep -q org-service-curr && echo $i; done > /tmp/MATCHES

The git cat-file blob $i produces the contents of the blob whose ID is in $i. The grep searches the contents for the magic string. Normally grep would print the matching lines, but the -q flag—the q is for “quiet”—disables that and tells grep instead that it is being used only as part of a test: it yields true if it finds the magic string, and false if not. The && is the test; it runs echo $i to print out the object ID $i only if the grep yields true because its input contained the magic string.

So this loop fills the file MATCHES with the list of IDs of the blobs that contain the magic string. This worked, and I found that there were only 18 matching blobs, so I wrote a very similar loop to extract their contents from the repository and save them in a directory:

for i in $(cat /tmp/OBJBLOB ) ; do git cat-file blob $i | grep -q org-service-curr && git cat-file blob $i > /tmp/rescue/$i; done

Instead of printing out the matching blob ID number, this loop passes it to git cat-file again to extract the contents into a file in /tmp/rescue.

The rest was simple. I made 8 subdirectories under /tmp/rescue representing the 8 different files I was expecting to find. I eyeballed each of the 18 blobs, decided what each one was, and sorted them into the 8 subdirectories. Some of the subdirectories had only 1 blob, some had up to 5. I looked at the blobs in each subdirectory to decide in each case which one I wanted to keep, using diff when it wasn't obvious what the differences were between two versions of the same file. When I found one I liked, I copied it back to its correct place in the working tree.

Finally, I went back to the working tree and added and committed the rescued files.

It seemed longer, but it only took about twenty minutes. To recreate the eight files from scratch might have taken about the same amount of time, or maybe longer (although it never takes as long as I think it will), and would have been tedious.

But let's suppose that it had taken much longer, say forty minutes instead of twenty, to rescue the lost blobs from the repository. Would that extra twenty minutes have been time wasted? No! The twenty minutes spent to recreate the files from scratch is a dead loss. But the forty minutes to rescue the blobs is time spent learning something that might be useful in the future. The Git rescue might have cost twenty extra minutes, but if so it was paid back with forty minutes of additional Git expertise, and time spent to gain expertise is well spent! Spending time to gain expertise is how you become an expert!

Git is a core tool, something I use every day. For a long time I have been prepared for the day when I would try to rescue someone's lost blobs, but until now I had never done it. Now, if that day comes, I will be able to say “Oh, it's no problem, I have done this before!”

So if you lose something in Git, don't panic. There's a good chance that you can find someone who will be able to hunt it down again.

Categories: Offsite Blogs

Darcs: Darcs News #113

Planet Haskell - Fri, 04/15/2016 - 9:02am
News and discussions
  1. We will release Darcs 2.12 by the end of this month:
  2. On May 6th-8th in Helsinki, a joint sprint Pijul/Darcs is organized:
Issues resolved (5)
issue1807 Guillaume Hoffmann
issue2258 Guillaume Hoffmann
issue2393 Guillaume Hoffmann
issue2486 Ben Franksen
issue2494 Ben Franksen
Patches applied (96)
2016-04-14 Guillaume Hoffmann
  • move network-related tests to network dir, update command names
  • resolve issue2393: remove whatsnew functionality from annotate
  • add log --machine-readable to see patch dependencies non-interactively
  • help of log
2016-04-01 Ganesh Sittampalam
  • add some doc comments to RepoType
2016-03-29 Guillaume Hoffmann
  • merge Repository.Util into Repository.State
  • use B and BC instead of BS and BSC in Repository.State
  • fix prelude import in Repository.State
  • move maybeApplyToTree to Darcs.Patch.Apply
  • move getRecursiveDarcsRepos to UI.Commands.Optimize
  • move patchSetfMap to Darcs.Patch.Set
  • move functions from Repository.Util to Patch.TokenReplace
  • comment in Repository.Util
  • refactor similar functions in Darcs.Repository.State
  • use readUnrecordedFiltered in getReplaces
  • inline a function in Clone
  • no longer move index to index.old on mingw32 os
  • clarify comments in Darcs.Repository.State
  • hlint Darcs.Repository.State
  • move External module from Repository to Util
  • move Compat and Lock modules from Repository to Util
  • merge Darcs.Repository.Ssh into Darcs.Util.Ssh
  • remove Darcs.Repository.Read by moving readRepo back to Internal
  • add comments and remove checks of optimize commands wrt repo formats
  • make all optimize subcommands require hashed except upgrade
  • move copySources from HashedRepo to Clone
  • move Storage.Hashed modules to Darcs.Util
  • remove unused function from Storage.Hashed.Plain
  • fix compile error in Storage.Hashed.Test
  • remove Storage.Hashed.Utils, move functions to Darcs.Utils.ByteString
  • move index-related functions from Utils to Index
  • removed unused or redundant functions from Storage.Hashed.Utils
  • remove unused functions from Storage.Hashed.Hash
  • hlint Storage.Hashed.Darcs
  • reuse functions from Darcs.Util.Path
  • remove unused Storage.Hashed.Packs
2016-03-09 Ben Franksen
  • revert command: be quiet when requested
  • accept issue2480: display unicode in patch content
  • slightly improved chaotic indentations and import lists
  • refactor: use maybeRestrictSubpaths
  • refactor: use Darcs.Util.English.capitalize
  • replace Darcs.Util.Printer.<> with <> from Data.Monoid; restructured haddocks
  • small code layout fix in whatsnew command
  • fixed Darcs.Util.English.andClauses and orClauses
  • two simple refactorings in the conflict resolution code
  • cleanup in revert command: use debugMessage for debug messages
  • cleanup: break over-long line in D.R.Merge
  • accept issue2494: output of darcs record with file arguments
  • resolve issue2494: output of darcs record with file arguments
  • refactored some, added readUnrecordedFiltered and maybeRestrictSubpaths
  • several fixes and refactorings in fixSubPaths and maybeFixSubPaths
  • add Darcs.Util.Printer.quoted and Darcs.Util.Text.pathlist
  • added missing hsep function to D.Util.Printer
  • added missing Eq and Show instances for ScanKnown
  • added Darcs.Util.Printer.ePutDocLn
  • add new type IncludeBoring for includeBoring option (was Bool)
  • announceFiles only if verbosity /= Quiet
2016-03-05 Guillaume Hoffmann
  • rm hashed-storage changelog
  • put copyright headers in hashed-storage modules
  • add Storage/Hashed dir to checkdeps contrib script
  • merge Storage.Hashed.AnchoredPath into Darcs.Util.Path
  • explicit exports for Storage.Hashed.Utils
  • list and comment exports of Storage.Hashed.Darcs and Plain
  • remove Storage.Hashed
  • resolve issue2258: improve patch index error message with suggestion
  • resolve issue1807: clarify help of PAGER, DARCS_PAGER
  • fix extra-source-file path in darcs.cabal
2016-03-07 Ben Franksen
  • Darcs.UI.Commands.Unrecord: honor quiet option everywhere
  • resolve issue2486: obliterate --not-in-remote -q should be more quiet
2016-02-25 Ganesh Sittampalam
  • print the rebase status even after an error
  • in runJob, pull repojob out to first-level decision
  • refactor displaying suspended status a bit
  • inline repoJobOnRebaseRepo
  • use helper types to elide more cases in runJob
  • elide some common cases in runJob
  • reorder runJob cases by job type
  • flatten runJob case statements
  • add a helper type for flags needed for Rebase
  • lift the runJob debugMessage call outside the case
  • lift 'therepo' outside the runJob case statement
  • express the V1/V2 patch type switch via a GADT too
  • use SRepoType to control the rebase type in runJob
  • remove commented-out cases for old TreeJob
  • drop unnecessary constraints
  • break out a runJob function
  • drop CarrierType - it can't ever be Rebasing p now
  • drop RecontextRebase
  • drop NameHack
  • inline MaybeInternal module into Named.Wrapped
  • make the Rebase import qualified
  • Introduce RebaseP to replace Rebasing type
  • add 'activecontents' to replace 'patchcontents' for use in conflict resolution
  • stop Convert using Wrapped.patchcontents
  • add nullary RepoType
  • flip dependency between Named.Wrapped and Rebase.Container
  • add wrapper type around 'Named'
See darcs wiki entry for details.
Categories: Offsite Blogs

Roman Cheplyaka: Basic HTTP auth with Scotty

Planet Haskell - Thu, 04/14/2016 - 2:00pm

Not so long ago, I needed to write a web app to automate the recording of our Haskell podcast, Bananas and Lenses.

To build it, I chose a lightweight Haskell web framework called Scotty. There is another lightweight Haskell web framework called Spock. Both start with the letter S and are characters from Star Trek, and I have little hope of ever being able to tell which is which by name. I can say though that I enjoyed working with the one I happened to pick.

So, anyway, I needed to ensure that only my co-hosts and I could access the app. In such a simple scenario, basic HTTP auth is enough. I did a quick google search for “scotty basic auth”, but all I found was this gist in which the headers are extracted by hand. Ugh.

Indeed, at the time of writing, Scotty itself does not seem to provide any shortcuts for basic auth. And yet the solution is simple and beautiful; you just need to step back to see it. Scotty is based on WAI, the Haskell web application interface, and doesn’t attempt to hide that fact. On the contrary, it conveniently exposes the function

middleware :: Middleware -> ScottyM ()

which “registers” a WAI wrapper that runs on every request. And sure enough, WAI (wai-extra) provides an HttpAuth module.

To put everything together, here’s a minimal password-protected Scotty application (works with Stackage lts-5.1).

{-# LANGUAGE OverloadedStrings #-}

import Web.Scotty
import Network.Wai.Middleware.HttpAuth
import Data.SecureMem -- for constant-time comparison
import Lucid -- for HTML generation

password :: SecureMem
password = secureMemFromByteString "An7aLasi" -- https://xkcd.com/221/

main :: IO ()
main = scotty 8000 $ do
  middleware $ basicAuth
    (\u p -> return $ u == "user" && secureMemFromByteString p == password)
    "Bananas and lenses recording"

  get "/" . html . renderText $ do
    doctype_
    html_ $ do
      head_ $ do
        title_ "Bananas and lenses recording"
      body_ $ h1_ "Hello world!"

Two security-related points:

  1. Data.SecureMem is used to perform constant-time comparison to avoid a timing attack.
  2. Ideally, the whole thing should be run over https (as the password is submitted in clear), but this is outside of the scope of this article.
Categories: Offsite Blogs

Functional Jobs: OCaml server-side developer at Ahrefs Research (Full-time)

Planet Haskell - Thu, 04/14/2016 - 12:54pm
Who we are

Ahrefs Research is a San Francisco branch of Ahrefs Pte Ltd (Singapore), which runs an internet-scale bot that crawls the whole Web 24/7, storing huge volumes of information to be indexed and structured in a timely fashion. On top of that Ahrefs is building analytical services for end-users.

Ahrefs Research develops a custom petabyte-scale distributed storage to accommodate all that data coming in at high speed, focusing on performance, robustness and ease of use. Performance-critical low-level part is implemented in C++ on top of a distributed filesystem, while all the coordination logic and communication layer, along with API library exposed to the developer is in OCaml.

We are a small team and strongly believe in better technology leading to better solutions for real-world problems. We worship functional languages and static typing, extensively employ code generation and meta-programming, value code clarity and predictability, and constantly seek to automate repetitive tasks and eliminate boilerplate, guided by DRY and following KISS. If there is any new technology that will make our life easier - no doubt, we'll give it a try. We rely heavily on open-source code (as the only viable way to build a maintainable system) and contribute back, see e.g. https://github.com/ahrefs . It goes without saying that our team is all passionate and experienced OCaml programmers, ready to lend a hand or explain that intricate ocamlbuild rule.

Our motto is "first do it, then do it right, then do it better".

What we need

Ahrefs Research is looking for a backend developer with a deep understanding of operating systems and networks and a taste for simple and efficient architectural designs. Our backend is implemented mostly in OCaml and some C++, so proficiency in OCaml is very much appreciated; otherwise, a strong inclination to learn OCaml intensively in a short time will be required. Understanding of functional programming in general and/or experience with other FP languages (F#, Haskell, Scala, Scheme, etc.) will help a lot. Knowledge of C++ and/or Rust is a plus.

The candidate will have to deal with the following technologies on the daily basis:

  • networks & distributed systems
  • 4+ petabyte of live data
  • OCaml
  • linux
  • git

The ideal candidate is expected to:

  • Independently deal with and investigate bugs, schedule tasks and dig code
  • Make well-reasoned technical choices and take responsibility for them
  • Understand the whole technology stack at all levels : from network and userspace code to OS internals and hardware
  • Handle full development cycle of a single component, i.e. formalize task, write code and tests, setup and support production (devops)
  • Approach problems with practical mindset and suppress perfectionism when time is a priority

These requirements stem naturally from our approach to development with fast feedback cycle, highly-focused personal areas of responsibility and strong tendency to vertical component splitting.

What you get

We provide:

  • Competitive salary
  • Modern office in San Francisco SOMA (Embarcadero)
  • Informal and thriving atmosphere
  • First-class workplace equipment (hardware, tools)
  • No dress code

Get information on how to apply for this position.

Categories: Offsite Blogs

Robert Harper: Practical Foundations for Programming Languages, Second Edition

Planet Haskell - Thu, 04/14/2016 - 9:35am

Today I received my copies of Practical Foundations for Programming Languages, Second Edition, from Cambridge University Press.  The new edition represents a substantial revision and expansion of the first edition, including these:

  1. A new chapter on type refinements has been added, complementing previous chapters on dynamic typing and on sub-typing.
  2. Two old chapters were removed (general pattern matching, polarization), and several chapters were very substantially rewritten (higher kinds, inductive and co-inductive types, concurrent and distributed Algol).
  3. The parallel abstract machine was revised to correct an implied extension that would have been impossible to carry out.
  4. Numerous corrections and improvements were made throughout, including memorable and pronounceable names for languages.
  5. Exercises were added to the end of each chapter (but the last).  Solutions are available separately.
  6. The index was revised and expanded, and some conventions systematized.
  7. An inexcusably missing easter egg was inserted.

I am grateful to many people for their careful reading of the text and their suggestions for correction and improvement.

In writing this book I have attempted to organize a large body of material on programming language concepts, all presented in the unifying framework of type systems and structural operational semantics.  My goal is to give precise definitions that provide a clear basis for discussion and a foundation for both analysis and implementation.  The field needs such a foundation, and I hope to have helped provide one.

 


Filed under: Programming, Research, Teaching
Categories: Offsite Blogs

FP Complete: The Stackage data flow

Planet Haskell - Thu, 04/14/2016 - 8:30am

I recently wrote up the Stackage data flow. The primary intent was to assist the rest of the Stackage curation team see how all the pieces fit together. However, it may also be of general interest to the rest of the community. In particular, some of the components used are not widely known and may be beneficial for completely separate projects (such as all-cabal-metadata).

Please check out the above linked copy of the file for the most up-to-date content. For convenience, I'm copying in the current content as of publication time below.

The Stackage project is really built on top of a number of different subcomponents. This page covers how they fit together. The Stackage data flow diagram gives a good bird's-eye view:

Inputs

There are three inputs into the data flow:

  • Hackage is the upstream repository of all available open source Haskell packages that are part of our ecosystem. Hackage provides both cabal file metadata (via the 00-index.tar file) and tarballs of the individual packages.

  • build-constraints.yaml is the primary Stackage input file. This is where package maintainers can add packages to the Stackage package set. This also defines upper bounds, skipped tests, and a few other pieces of metadata.

  • stackage-content is a Github repository containing static file content served from stackage.org

Travis

For various reasons, we leverage Travis CI for running some processes. In particular:

  • all-cabal-files clones all cabal files from Hackage's 00-index.tar file into a Git repository without any modification

  • all-cabal-hashes is mostly the same, but also includes cryptographic hashes of the package tarballs for more secure download (as leveraged by Stack). It is powered by all-cabal-hashes-tool

  • all-cabal-packages uses hackage-mirror to populate the hackage.fpcomplete.com mirror of Hackage, which provides S3-backed high availability hosting of all package tarballs

  • all-cabal-metadata uses all-cabal-metadata-tool to query extra metadata from Hackage about packages and put them into YAML files. As we'll see later, this avoids the need to make a lot of costly calls to Hackage APIs

Travis does not currently provide a means of running jobs on a regular basis. Therefore, we have a simple cron job on the Stackage build server that triggers each of the above builds every 30 minutes.

stackage-curator

The heart of running Stackage builds is the stackage-curator tool. We run this on a daily basis on the Stackage build server for Stackage Nightly, and on a weekly basis for LTS Haskell. The build process is highly automated and leverages Docker quite a bit.

stackage-curator needs to know about the most recent versions of all packages, their tarball contents, and some metadata, all of which it gets from the Travis-generated sources mentioned in the previous section. In addition, it needs to know about build constraints, which can come from one of two places:

  • When doing an LTS Haskell minor version bump (e.g., building lts-5.13), it grabs the previous version (e.g., lts-5.12) and converts the previous package set into constraints. For example, if lts-5.12 contains the package foo-5.6.7, this will be converted into the constraint foo >= 5.6.7 && < 5.7.
  • When doing a Stackage Nightly build or LTS Haskell major version bump (e.g., building lts-6.0), it grabs the latest version of the build-constraints.yaml file.

By combining these constraints with the current package data, stackage-curator can generate a build plan and check it. (As an aside, this build plan generation and checking also occurs every time you make a pull request to the stackage repo.) If there are version bounds problems, one of the Stackage curators will open up a Github issue and will add upper bounds, temporarily block a package, or some other corrective action.
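For illustration, the version-to-constraint conversion described above (turning lts-5.12's foo-5.6.7 into foo >= 5.6.7 && < 5.7) could be sketched as a toy function like this; it is not the actual stackage-curator code:

import Data.List (intercalate)

-- Turn a pinned package version into the constraint used for a minor bump.
minorBumpConstraint :: String -> [Int] -> String
minorBumpConstraint name ver =
  name ++ " >= " ++ showVer ver ++ " && < " ++ showVer upper
  where
    upper = case ver of
      (a : b : _) -> [a, b + 1]      -- keep the first component, bump the second
      _           -> map (+ 1) ver   -- degenerate one-component version
    showVer = intercalate "." . map show

-- ghci> minorBumpConstraint "foo" [5,6,7]
-- "foo >= 5.6.7 && < 5.7"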

Once a valid build plan is found, stackage-curator will build all packages, build docs, and run test suites. Assuming that all succeeds, it generates some artifacts:

  • Uploads the build plan as a YAML file to either stackage-nightly or lts-haskell
  • Uploads the generated Haddock docs and a package index (containing all used .cabal files) to haddock.stackage.org.
stackage-server-cron

On the Stackage build server, we run the stackage-server-cron executable regularly, which generates:

  • A SQLite database containing information on snapshots, the packages they contain, Hackage metadata about packages, and a bit more. This database is uploaded to S3.
  • A Hoogle database for each snapshot, which is also uploaded to S3
stackage-server

The software running stackage.org is a relatively simple Yesod web application. It pulls data from the stackage-content repo, the SQLite database, the Hoogle databases, and the build plans for Stackage Nightly and LTS Haskell. It doesn't generate anything important of its own except for a user interface.

Stack

Stack takes advantage of many of the pieces listed above as well:

  • It by default uses the all-cabal-hashes repo for getting package metadata, and downloads package contents from the hackage.fpcomplete.com mirror (using the hashes in the repo for verification)
  • There are some metadata files in stackage-content which contain information on, for example, where to download GHC tarballs from to make stack setup work
  • Stack downloads the raw build plans for Stackage Nightly and LTS Haskell from the Github repo and uses them when deciding which packages to build for a given stack.yaml file
Categories: Offsite Blogs

[ANN] Vivid 0.2

haskell-cafe - Thu, 04/14/2016 - 1:13am
After about a year of blood, sweat, and tears, the new Vivid is out! This gets us to a pretty polished state and updates will now be much more frequent. What is Vivid? It's a library to create music (and other sound) using the SuperCollider synth engine -- a really powerful audio "rendering engine." Vivid lets you create and alter music in real time, or render audio files faster than real time. It's been used in live performances (not only by me!), and I'm very happy with its sound, its power, and its expressivity. Without further ado: http://hackage.haskell.org/package/vivid Happy hacking! Tom
Categories: Offsite Discussion

Lazy monadic queues

haskell-cafe - Wed, 04/13/2016 - 10:42pm
I've come up with a very simple implementation of lazy monadic queues based loosely on ideas from Leon P. Smith's control-monad-queue package. This implementation ties a similar sort of lazy knot, but avoids the complexity and strictness of continuation-passing style. I'm curious whether something similar is already available on Hackage, and, if not, whether it would be useful enough to package it. The source code can currently be found at https://gist.github.com/treeowl/5c14a43869cf14a823473ec075788a74 David Feuer
Categories: Offsite Discussion

Using existential types from TAPL book in Haskell

haskell-cafe - Wed, 04/13/2016 - 4:54pm
Hi all, I'm currently reading Prof. Pierce's TAPL book. Reading the chapter on existential types, I've been wondering how to use existentials in Haskell. In particular, in the TAPL book there is a section on purely functional objects, which are encoded (e.g.) as follows: counter = {*Nat, { state = 5 , methods = { get = \x:Nat. x , inc = \x:Nat. succ x } } } as Counter where Counter = {∃X, { state : X , methods = { get : X -> Nat , inc : X -> X } } } That is, a Counter consists of some internal state as well as methods `get` to retrieve its state and `inc` to increment it. Internally, a Counter is just implemented with a natural number. However, from the outside, we hide this fact by making its state an existential type X. Thus, we can only modify a Counter instance (e.g., counter) using its exposed methods. For example, let {X, body} = counter in body.methods.get
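One way to transcribe this into Haskell (a sketch, not the only possible encoding, with Int standing in for Nat and the function names made up) is to use ExistentialQuantification and pack the hidden representation type together with the only methods allowed to touch it:

{-# LANGUAGE ExistentialQuantification #-}

-- The existential plays the role of the ∃X: the state's type is hidden, and
-- only the packed methods can operate on it.
data Counter = forall x. Counter
  x          -- state
  (x -> Int) -- get
  (x -> x)   -- inc

counter :: Counter
counter = Counter (5 :: Int) id (+ 1)

getCount :: Counter -> Int
getCount (Counter s get _) = get s

increment :: Counter -> Counter
increment (Counter s get inc) = Counter (inc s) get inc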
Categories: Offsite Discussion

Functional Jobs: Senior Software Engineer (Haskell) at Front Row Education (Full-time)

Planet Haskell - Wed, 04/13/2016 - 2:24pm
Position

Senior Functional Web Engineer to join fast-growing education startup transforming the way 3+ million K-8 students learn Math and English.

What you will be doing

Architect, design and develop new applications, tools and distributed systems for the Front Row ecosystem in Haskell, Flow, PostgreSQL, Ansible and many others. You will get to work on your deliverable end-to-end, from the UX to the deployment logic.

Once you're an integral part of the team you will act as Dev Lead and oversee the success of your team

Mentor and support more junior developers in the organization

Create, improve and refine workflows and processes for delivering quality software on time and without incurring debt

Work at our offices in San Francisco as part of a very small (there's literally half a dozen of us!), world-class team of engineers with a track record of rapidly delivering valuable software to millions of users.

Work closely with Front Row educators, product managers, customer support representatives and account executives to help the business move fast and efficiently through relentless automation.

Why you should join Front Row

Our mission is important to us, and we want it to be important to you as well: millions of students learn math using Front Row every month. Our early results show students improve twice as much while using Front Row as their peers who aren’t using the program.

You’ll be THE first Senior Engineer ever at Front Row, which means you’ll have an immense impact on our company, product, and culture; you’ll have a ton of autonomy and responsibility; you’ll have equity to match the weight of this role. If you're looking for an opportunity to both grow and do meaningful work, surrounded and supported by like-minded professionals, this is THE place for you.

You will be working side by side with many well known world-class personalities in the Haskell and Functional Programming community whose work you've likely used. Front Row is an active participant to the Open Source community and contributor to some of the most popular Haskell libraries.

A lot of flexibility: while we all work towards the same goals, you’ll have a lot of autonomy in what you work on. You can work from home up to one day a week, and we have a very flexible untracked vacation days policy

The company and its revenue are growing at a rocketship pace. Front Row is projected to make a massive impact on the world of education in the next few years. It's a once in a lifetime opportunity to join a small organization with great odds of becoming the Next Big Thing.

Must haves
  • You have experience doing full-stack web development. You understand HTTP, networking, databases and the world of distributed systems.
  • You have functional programming experience.
  • Extreme hustle: you’ll be solving a lot of problems you haven’t faced before without the resources and the support of a giant organization. You must thrive on getting things done, whatever the cost.
  • Soft skills: we want you to move into a leadership position, so you must be an expert communicator
Nice-to-haves
  • You have led a software development team before
  • You have familiarity with a functional stack (Haskell / Clojure / Scala / OCaml etc)
  • You understand and have worked all around the stack before, from infrastructure automation all the way to the frontend
  • You're comfortable with the Behavior-Driven Development style
  • You have worked at a very small startup before: you thrive on having a lot of responsibility and little oversight
  • You have worked in small and effective Agile/XP teams before
  • You have delivered working software to large numbers of users before
Benefits
  • Competitive salary
  • Generous equity option grants
  • Medical, Dental, and Vision
  • Catered lunch and dinner 4 times a week
  • Equipment budget
  • One flexible work day per week
  • Working from downtown SF, very accessible location
  • Professional yet casual work environment

Get information on how to apply for this position.

Categories: Offsite Blogs