News aggregator

Yesod Web Framework: The brittle Haskell toolchain

Planet Haskell - Mon, 02/09/2015 - 10:16am

A few weeks ago I received a bug report against streaming-commons. Since then, the details of what we discovered when discussing this report have been bothering me quite a bit, as they expose a lot of the brittleness of the Haskell toolchain. I'm documenting all of these aspects now to make clear how fragile our tooling is, and thereby explain why I think Stackage is so vital to our community.

In this blog post, I'm going to describe six separate problems I've identified when looking into this issue, and explain how Stackage (or some similar deterministic build system) would have protected users against these problems had it been employed.

The story

streaming-commons is a library that provides helper utilities for a number of different streaming concepts, one of them being a streaming way to convert blaze-builder Builders to filled ByteString buffers. Since blaze-builder was released a few years ago, a new set of modules was added to the bytestring package in version 0.10 known as a "bytestring builder." I asked one of the engineers at FP Complete, Emanuel Borsboom, to start working on a new module for streaming-commons to provide similar functionality for bytestring builder.

And now we run into the first problem with the Haskell toolchain. You would think that we should just add a lower bound of bytestring >= 0.10 in the streaming-commons.cabal file. However, setting restrictive lower bounds on packages that ship with GHC, like bytestring, is a problem: those libraries are tightly coupled to the GHC version, so requiring a newer bytestring effectively means requiring a newer GHC. Fortunately, Leon Smith already solved this problem for us with bytestring-builder, which provides a compatibility layer for older bytestrings (much like Ed's transformers-compat). The idea is that, when compiled against an older version of bytestring, the bytestring-builder package provides the necessary missing modules, and otherwise does nothing.

When Emanuel wrote his changes to streaming-commons, he added a dependency on bytestring-builder. We then proceeded to test this on multiple versions of GHC via Travis CI and Herbert's multi-ghc-travis. Everything compiled and passed tests, so we shipped the updated version.

However, that original bug report I linked to, reported by Ozgun Ataman, told us there was a problem with GHC 7.6. This was pretty surprising, given that we'd tested on GHC 7.6. Fortunately Lane Seppala discovered the culprit: the Cabal library. It turns out that installing a new version of the Cabal library causes the build of streaming-commons to break, whereas our tests had used the default version of Cabal shipped with GHC 7.6. (We'll get back to why that broke things in a bit.)

After some digging, Emanuel discovered the deeper cause of the problem: Bryan O'Sullivan reported an issue a year ago where, when used with a new version of the Cabal library, bytestring-builder does not in fact provide its compatibility modules. This leads us to our second issue: this known bug existed for almost a year without resolution, and since it only occurs in unusual circumstances, it was not detected by any of our automated tooling.

The reason this bug existed, though, is by far the most worrisome thing I saw in this process: the Cabal library silently changed the semantics of one of its fields in the 1.18 (or 1.20? I'm not sure) release. You see, bytestring-builder was detecting which version of bytestring it was compiled against by inspecting the configConstraints field (you can see the code yourself on Hackage). And starting in Cabal 1.19.1 (a development release), that field was no longer being populated. As a result, as soon as that newer Cabal library was installed, the bytestring-builder package became worse than useless.

As an aside, this points to another problematic aspect of our toolchain: there is no way to specify constraints on dependencies used in custom Setup.hs files. That actually causes more difficulty than it may sound like, but I'll skip diving into it for now.

The fix for this was relatively simple: use some flag logic in the cabal file instead of a complicated custom Setup.hs file. (Once this pull request was merged in and released, it did fix the original bug report.) But don't take this as a critique of Leon's choice of a complicated Setup.hs file, because in reality the flag trick, while the "standard" solution to this problem, broke cabal-install's dependency solver for quite a while. To be fair, I'm still not completely convinced that that bug is fixed, but for now it is the lesser of two evils compared to the Cabal library bug.
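For reference, the flag trick in question looks roughly like this in a .cabal file; this is a sketch with hypothetical flag, directory, and module names, not the actual bytestring-builder code:

flag builder-in-bytestring
  description: bytestring >= 0.10 already ships the builder modules
  default: True

library
  if flag(builder-in-bytestring)
    build-depends: bytestring >= 0.10
  else
    build-depends: bytestring < 0.10
    hs-source-dirs: src-compat
    exposed-modules: Data.ByteString.Builder

cabal-install's solver flips the flag to False on its own when the first branch is unsatisfiable, and it is exactly this automatic flag selection that used to confuse it.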

And finally, based on the bug report from Ozgun, it seems that an internal build of theirs failed as a result of all of this. This has been a constant criticism I've made about the way we generally do builds in the Haskell world: rarely is reproducibility a part of the toolchain. To quote Ozgun:

We are in fact quite careful in dependency management with lower-upper bounds on most outside packages, so breakages like this are unexpected.

And many people feel that this is the way things should be. But as this discussion hopefully emphasizes, just playing with lower and upper bounds is not sufficient to avoid build failures in general. In this case, we're looking at a piece of software that was broken by a change in a library that it didn't even depend on, namely Cabal: our tooling introduces an implicit dependency on that library, and we have no way of placing bounds on it.

The case for Stackage

So here are the toolchain problems I've identified above:

  1. Tight coupling between GHC version and some core libraries like bytestring.
  2. A known bug affecting a corner case going unfixed for almost a year, without any indication on the Hackage page that we should be concerned.
  3. The Cabal library silently changed the semantics of a field, causing complete breakage of a package.
  4. cabal-install's solver gets confused by standard flag usage, at least in slightly older versions.
  5. Not all dependencies are actually specified in a cabal file. At the very least, the Cabal library version is unconstrained, as is any other package used by Setup.hs.
  6. The default Haskell toolchain doesn't protect us against these kinds of problems, or give us any concept of reproducibility.

Stackage completely solves (2), (3), (5), and (6) for end users. By specifying all library versions used, and then testing all of those versions together, we avoid many possible corner cases of weird library interactions, and provide a fully reproducible build. (Note that Stackage doesn't solve all such cases: operating system, system libraries, executables, etc. are still unpinned. That's why FP Complete is working on Docker-based tooling.)
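For concreteness, using a Stackage snapshot at the time amounted to dropping a fetched cabal.config into your project, pinning every package version; a minimal sketch (the version numbers here are illustrative, not taken from a real snapshot):

-- cabal.config fetched from a Stackage snapshot: every package is pinned
constraints: base ==4.7.0.2,
             bytestring ==0.10.4.0,
             bytestring-builder ==0.10.4.0.2,
             streaming-commons ==0.1.9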

(1) is highly mitigated by Stackage because, even though the tight coupling still exists, Stackage provides a set of packages that take that coupling into account for you, so you're not stuck trying to put the pieces together yourself.

As for (4)... Stackage helps the situation by making the job of the solver simpler by pinning down version numbers. Unfortunately, there are still potential gotchas when encountering solver bugs. Sometimes we end up needing to implement terribly awkward solutions to work around those bugs.

Categories: Offsite Blogs

Does the product function use O(n) space?

Haskell on Reddit - Mon, 02/09/2015 - 10:02am

Hello, I am trying to learn Haskell and to understand the behavior of Haskell programs. So I ran the following code in ghci:

Prelude> product [1..1000000]

I was expecting this code to use O(1) space, but when I ran it, I saw that GHC was using 250MB of RAM for the calculation, which suggests that space for the whole list is allocated in memory.

I believe this is the definition of product https://github.com/ghc/packages-base/blob/52c0b09036c36f1ed928663abb2f295fd36a88bb/Data/List.hs#L1010-L1021

In the definition, the comment explains that the product function can work on infinite lists. So there should be something triggering full list materialization that I am not aware of.

I was just wondering what the reason for this behavior is, and how I can do this calculation in constant space.
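(A strict left fold is the standard fix here: the linked definition accumulates lazily, and in unoptimised GHCi code nothing forces the accumulator until the very end, so a chain of a million multiplication thunks, which also keeps the list alive, builds up first. A minimal sketch; the `mod` is only there to keep the printed result small, and note the result Integer itself still grows large:)

import Data.List (foldl')

-- foldl' forces the accumulator at every step, so no chain of
-- multiplication thunks builds up and the list can be consumed
-- and garbage-collected as it is generated.
strictProduct :: [Integer] -> Integer
strictProduct = foldl' (*) 1

main :: IO ()
main = print (strictProduct [1 .. 1000000] `mod` 1000003)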

Also, I am using htop to see GHC's memory consumption. Is there a better way of doing this?

Thanks

submitted by yilmazhuseyin
[link] [20 comments]
Categories: Incoming News

Officially supported Android backend, GSoC?

haskell-cafe - Mon, 02/09/2015 - 9:56am
Hi Café For the past few years I have done quite a lot of work in functional languages targeting mobile platforms. I've tried every functional language I could find. GHC targets Android via Neurocyte's (and JoeyH's) backend [1], although, afaik, this is not officially supported. There have been recent discussions on strategies to improve the current state, especially wrt. TH. Nevertheless, the progress is amazing and the backend works very well. I haven't found any major bugs. We've successfully used this at Keera Studios to write multiple games [2,3,4], and I've also written small applications for Google Glass (yes, Haskell works on Glass!). Users are running the games on different Android devices (with different hardware and OS version), and none of them has reported any (ghc-related) bugs so far. Haskell's ability to 'write once, run anywhere' could be a major selling point. Soon Haskell might become one of the very few functional languages that can target any platform, be it web, desktop
Categories: Offsite Discussion

Entirely record based data

Haskell on Reddit - Mon, 02/09/2015 - 7:40am
Idea

I am wondering whether it would be feasible and moreover desirable to have an entirely record based language with row polymorphism. In this language the main mechanism for manipulation of records would be lenses. One can allow pattern matching using a Getter lens and a monoidal Binder type that represents bindings to variables.

With row polymorphism one can infer that (in some imaginary language)

showWithLabel @rec = (rec ^. 'label) <> ": " <> show (rec ^. 'content)

'label is a lens from any record with a label field that focuses on that field

@rec is not an as-pattern, but is actually a lens of the type Getter a (Binder {rec :: a |* rest})

so showWithLabel has the inferred type

showWithLabel :: (Show s) => { label :: String, content :: s |* rest} -> String

where

{ label :: String, content :: s |* rest}

asserts that the argument is a record with a label field holding a String, a content field holding a type s that satisfies the Show s constraint, and maybe some other fields (rest). If rest were omitted, then the record could contain only those two fields. This allows product types with labels to be expressed without prior declaration, and allows values of larger product types with more fields to be passed to the function.
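For comparison, the closest approximation of this row-polymorphic type in present-day Haskell is one typeclass per field; a rough sketch (all class, method, and type names here are invented for illustration):

{-# LANGUAGE MultiParamTypeClasses #-}
{-# LANGUAGE FunctionalDependencies #-}

import Data.Monoid ((<>))

-- One class per field stands in for a row constraint:
-- "any record rec with a label field" becomes HasLabel rec.
class HasLabel rec where
  labelOf :: rec -> String

class HasContent rec s | rec -> s where
  contentOf :: rec -> s

-- The analogue of the inferred type of showWithLabel above:
showWithLabel :: (HasLabel rec, HasContent rec s, Show s) => rec -> String
showWithLabel r = labelOf r <> ": " <> show (contentOf r)

-- Any record with suitable fields can opt in:
data Item = Item { itemLabel :: String, itemContent :: Int }

instance HasLabel Item where labelOf = itemLabel
instance HasContent Item Int where contentOf = itemContent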

Sum types are usually expressed with

data Either a b = Left a | Right b

Then why can't sum types have the same record-like treatment?

newtype Either a b = Either { left :: a, right :: b |+}

There would be no Either constructor, but rather an 'Either Iso

with the + indicating a sum record and * indicating a product record

If we introduce row subtraction, then record prisms can be of the form

'x :: Lens' {x :: a |+ rest} (Either {|+ rest - x} a)
'y :: Lens' {y :: a |+ rest} (Either {|+ rest - y} a)

By allowing lenses with Either types to be used as binders in multi-case pattern matching

-- the type of the first case and the actual type
-- test :: {x :: Int, y :: String |+} -> String
test '@x = "The number is: " <> show x

-- the type of the second case
-- test :: {y :: String |+} -> String
test '@y = "The string is: " <> y

-- Since the type of '@y subsumes Getter a (Either {|+} b),
-- the pattern matching is exhaustive and no error is given

'@identifier is a pun and in this context desugars to 'identifier . to (bimap id (^. '@identifier))

Use cases
  1. I think this would be especially useful for inversion of control. For example, say you have some interpreted language, and you can add hooks to the parser and evaluator for someone else to extend. If that someone attaches a hook to the parser that returns some new construct in the AST, then it will fail to compile until the new construct is also handled in the evaluator. Perhaps an entire language could be built in this manner, with a project structure consisting of a module for each feature rather than for each stage of processing.

  2. Since datatypes can be inferred, why not have the programmer never specify types directly? Rather, keep types in automatically managed header files, with notifications when they change and interactive adjustments. If the old type is not subsumed by the new type, then the programmer can either specify a bridge expressing the old value in terms of the new values, or mark it as a breaking change.

Disclaimers

I am still very much a Haskell noob. I have no experience in language design or type theory and such, so take this with a grain of salt. I am posting this in the /r/haskell subreddit because it has the readers with the best understanding of the set of concepts I use. Apologies if this is either completely trivial or nonsensical.

submitted by reuben364
[link] [8 comments]
Categories: Incoming News

Ian Ross: Non-diffusive atmospheric flow #12: dynamics warm-up

Planet Haskell - Mon, 02/09/2015 - 2:31am
Non-diffusive atmospheric flow #12: dynamics warm-up

February 9, 2015

The analysis of preferred flow regimes in the previous article is all very well, and in its way quite illuminating, but it was an entirely static analysis – we didn’t make any use of the fact that the original $Z_{500}$ data we used was a time series, so we couldn’t gain any information about transitions between different states of atmospheric flow. We’ll attempt to remedy that situation now.

What sort of approach can we use to look at the dynamics of changes in patterns of $Z_{500}$? Our $(\theta, \phi)$ parameterisation of flow patterns seems like a good start, but we need some way to model transitions between different flow states, i.e. between different points on the $(\theta, \phi)$ sphere. Each of our original $Z_{500}$ maps corresponds to a point on this sphere, so we might hope that we can come up with a way of looking at trajectories of points in $(\theta, \phi)$ space that will give us some insight into the dynamics of atmospheric flow.

Since atmospheric flow clearly has some stochastic element to it, a natural approach to take is to try to use some sort of Markov process to model transitions between flow states. Let me give a very quick overview of how we’re going to do this before getting into the details. In brief, we partition our $(\theta, \phi)$ phase space into $P$ components, assign each $Z_{500}$ pattern in our time series to a component of the partition, then count transitions between partition components. In this way, we can construct a matrix $M$ with

$$M_{ij} = \frac{N_{i \to j}}{N_{\mathrm{tot}}}$$

where $N_{i \to j}$ is the number of transitions from partition $i$ to partition $j$ and $N_{\mathrm{tot}}$ is the total number of transitions. We can then use this Markov matrix to answer some questions about the type of dynamics that we have in our data – splitting the Markov matrix into its symmetric and antisymmetric components allows us to respectively look at diffusive (or irreversible) and non-diffusive (or conservative) dynamics.
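Concretely, the splitting referred to here is the standard one:

$$M = M^{\mathrm{sym}} + M^{\mathrm{anti}}, \qquad M^{\mathrm{sym}} = \tfrac{1}{2}\left(M + M^T\right), \qquad M^{\mathrm{anti}} = \tfrac{1}{2}\left(M - M^T\right)$$

The symmetric part captures transitions that balance in both directions (diffusive, irreversible behaviour), while the antisymmetric part captures any systematic circulation around phase space (non-diffusive, conservative behaviour).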
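The transition counting itself is straightforward; here is a minimal sketch (plain lists rather than hmatrix, with flow states assumed already mapped to partition indices, and each winter passed as a separate list so that the jump between winters is never counted as a transition):

import qualified Data.Map.Strict as Map

-- Build the Markov matrix M from per-winter sequences of partition
-- indices in [0 .. p-1]: count transitions between consecutive days
-- and normalise by the total number of transitions.
markovMatrix :: Int -> [[Int]] -> [[Double]]
markovMatrix p winters =
  [ [ fromIntegral (Map.findWithDefault 0 (i, j) counts) / total
    | j <- [0 .. p - 1] ]
  | i <- [0 .. p - 1] ]
  where
    trans = concatMap (\ws -> zip ws (drop 1 ws)) winters
    counts = Map.fromListWith (+) [ (t, 1 :: Int) | t <- trans ]
    total = fromIntegral (length trans)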

Before trying to apply these ideas to our $Z_{500}$ data, we’ll look (in the next article) at a very simple Markov matrix calculation by hand to get some understanding of what these concepts really mean. Before that though, we need to take a look at the temporal structure of the $Z_{500}$ data – in particular, if we’re going to model transitions between flow states by a Markov process, we really want uncorrelated samples from the flow, and our daily $Z_{500}$ data is clearly correlated, so we need to do something about that.

Autocorrelation properties

Let’s look at the autocorrelation properties of the PCA projected component time series from our original <semantics>Z500<annotation encoding="application/x-tex">Z_{500}</annotation></semantics> data. We use the autocorrelation function in the statistics package to calculate and save the autocorrelation for these PCA projected time series. There is one slight wrinkle – because we have multiple winters of data, we want to calculate autocorrelation functions for each winter and average them. We do not want to treat all the data as a single continuous time series, because if we do we’ll be treating the jump from the end of one winter to the beginning of the next as “just another day”, which would be quite wrong. We’ll need to pay attention to this point when we calculate Markov transition matrices too. Here’s the code to calculate the autocorrelation:

npcs, nday, nyear :: Int
npcs = 10
nday = 151
nyear = 66

main :: IO ()
main = do
  -- Open projected points data file for input.
  Right innc <- openFile $ workdir </> "z500-pca.nc"
  let Just ntime = ncDimLength <$> ncDim innc "time"
  let (Just projvar) = ncVar innc "proj"
  Right (HMatrix projsin) <-
    getA innc projvar [0, 0] [ntime, npcs] :: HMatrixRet CDouble

  -- Split projections into one-year segments.
  let projsconv = cmap realToFrac projsin :: Matrix Double
      lens = replicate nyear nday
      projs = map (takesV lens) $ toColumns projsconv

  -- Calculate autocorrelation for one-year segment and average.
  let vsums :: [Vector Double] -> Vector Double
      vsums = foldl1 (SV.zipWith (+))
      fst3 (x, _, _) = x
      doone :: [Vector Double] -> Vector Double
      doone ps = SV.map (/ (fromIntegral nyear)) $
                 vsums $ map (fst3 . autocorrelation) ps
      autocorrs = fromColumns $ map doone projs

  -- Generate output file.
  let outpcdim = NcDim "pc" npcs False
      outpcvar = NcVar "pc" NcInt [outpcdim] M.empty
      outlagdim = NcDim "lag" (nday - 1) False
      outlagvar = NcVar "lag" NcInt [outlagdim] M.empty
      outautovar = NcVar "autocorr" NcDouble [outpcdim, outlagdim] M.empty
      outncinfo = emptyNcInfo (workdir </> "autocorrelation.nc") #
                  addNcDim outpcdim # addNcDim outlagdim #
                  addNcVar outpcvar # addNcVar outlagvar # addNcVar outautovar
  flip (withCreateFile outncinfo) (putStrLn . ("ERROR: " ++) . show) $ \outnc -> do
    -- Write coordinate variable values.
    put outnc outpcvar $ (SV.fromList [0..fromIntegral npcs-1] :: SV.Vector CInt)
    put outnc outlagvar $ (SV.fromList [0..fromIntegral nday-2] :: SV.Vector CInt)
    put outnc outautovar $ HMatrix $ (cmap realToFrac autocorrs :: Matrix CDouble)
    return ()

We read in the component time series as a hmatrix matrix, split the matrix into columns (the individual component time series), then split each time series into year-long segments. Then we use the autocorrelation function on each segment of each time series (dropping the confidence limit values that the autocorrelation function returns, since we’re not so interested in those here) and average across segments of each time series. The result is an autocorrelation function (for lags from zero to 149 days, matching the lag dimension in the code above) for each of the component time series.

Categories: Offsite Blogs

State of Haskell CMS

Haskell on Reddit - Sun, 02/08/2015 - 11:24pm

The recent announcements of LambdaCMS and clckwrks are encouraging. I spent the Sunday checking them out, as well as HsCMS. The only one that worked with minimal installation pains (using NixOS; also tried Stackage LTS) was HsCMS. Yesod itself was problematic to some degree, but not insurmountable. Yesod compilation seemed very slow; if there are any ways to speed this up, please let me know.

I am not impressed by the non-programmer focus of clckwrks or LambdaCMS (post-install). Packaging and simplicity of deployment should allow more people to battle-test these systems.

I've also played around with Hakyll, it seems quickest and simplest if all you need is a static site.

Examples of nearly default setups:

Hakyll

HsCMS

clckwrks: some package management needed

LambdaCMS: functioning, needs more extensions to be usable

submitted by tomberek
[link] [35 comments]
Categories: Incoming News

CF STUDENT POSTERS for Innovations'15 (No registration fees), Dubai, November 01-03, 2015

General haskell list - Sun, 02/08/2015 - 4:09pm
CF STUDENT POSTERS for Innovations'15 (No registration fees), Dubai, November 01-03, 2015 IIT’15: The 11th International Conference on Innovations in Information Technology 2015 URL: http://www.it-innovations.ae/iit2015/posters.html The IIT’15 Student Poster and Demos Committee invites all undergraduate and graduate students to submit an extended (2 pages max.) abstract and to display it as a poster during the IIT’15. The poster topic should fall within the conference’s theme and tracks. SUBMISSION Extended abstracts should be sent to Dr. Nabeel Al-Qirim at nalqirim< at >uaeu.ac.ae. All students are encouraged to review their abstracts with their faculty advisers prior to submission. All accepted abstracts will be published by the IIT’15 proceedings. IMPORTANT DATES -Student Poster (Extended Paper) Submission May 30, 2015 -Notification of Student Poster acceptance July 15, 2015 -Camera ready Extended Paper and Poster material September 01, 2015 -Conference November 01-03, 2015 BEST STUDENT POST
Categories: Incoming News

Hackage dependency monitor

del.icio.us/haskell - Sun, 02/08/2015 - 3:10pm
Categories: Offsite Blogs

[Announce] Lambdaheads - Vienna Functional Programming - 2015-02-11 Wed 19:00

Haskell on Reddit - Sun, 02/08/2015 - 6:26am

Hey fellow friends of the functional!

Sorry for the late announcement: I was not sure I would have the time to prepare the next meeting, but I am happy to announce that the next session of Lambdaheads will take place on time.

The topic this time will be a short recap of our last session, plus a supplement to the first part, where I totally forgot to tell you about "record syntax", one of the features I am still not really decided on whether I like or not. One thing I am sure of: it can lead to some of the more painful experiences you can have with Haskell (that is one reason why lenses were invented).

The main part will be an introduction to the way polymorphism and subclassing work in Haskell. But beware: all your previous experience with class and/or prototype inheritance might not apply; the closest things you might know are mixins (Ruby, Dylan) and interfaces from Java.
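For a taste of why typeclasses feel closer to interfaces or mixins than to inheritance, here is a minimal sketch (the example types are invented for illustration):

-- A typeclass declares an interface; any type can opt in afterwards,
-- with no inheritance hierarchy involved.
class Shape a where
  area :: a -> Double

newtype Circle = Circle Double      -- radius
data Rect = Rect Double Double      -- width, height

instance Shape Circle where
  area (Circle r) = pi * r * r

instance Shape Rect where
  area (Rect w h) = w * h

-- Works for any list of one Shape type, without subclassing:
totalArea :: Shape a => [a] -> Double
totalArea = sum . map area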

Links:
- https://metalab.at/wiki/Lambdaheads
- http://www.meetup.com/Lambdaheads/events/220385939/

Hope you will come and do some Haskell. Yours Martin (epsilonhalbe)

submitted by epsilonhalbe
[link] [5 comments]
Categories: Incoming News

Building with some dependencies pre-installed in a sandbox

haskell-cafe - Sun, 02/08/2015 - 4:33am
Hey guys, sorry for re-posting, but I feel that my original question is largely different from this one. I've managed to get my broken dependency to build (the mysql package), but unfortunately I had to manually change its source code to make this happen, (original post https://www.haskell.org/pipermail/haskell-cafe/2015-February/118064.html), but I do have the package installed locally (or in the sandbox of my app). The problem is that when I try to `cabal install` the whole application, it will re-download the `mysql` package from Hackage and try to build it again, instead of using the version I custom built and installed myself. Is it normal that cabal won't re-use the already existing packages, or did I somehow change it so that it won't recognize it? Also, is it possible to explicitly say where should cabal resolve some of its dependencies? Such as this case when one package is in a local path, and other should be built from hackage. I'm not really sure what the best approach is here. again sorry fo
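(For readers hitting the same problem: the usual mechanism for this is cabal sandbox add-source, which registers a local source directory that the sandbox will prefer over Hackage when resolving that package; a sketch, with the path standing in for wherever the patched mysql checkout lives:)

cabal sandbox init
cabal sandbox add-source ../mysql-patched
cabal install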
Categories: Offsite Discussion

MySQL + Windows + Haskell = ???

haskell-cafe - Sun, 02/08/2015 - 12:28am
Hey guys, I've been trying to get my Yesod app working on Windows 8.1 using the Haskell Platform 2014.2.0.0, and everything seems to work well, except for MySQL. The `mysql` package doesn't build, with the following error: setup.exe: The program mysql_config is required but it could not be found I've found a related GitHub issue (https://github.com/bos/mysql/issues/3), but there doesn't seem to be any solution. Is there anyone who's successfully running Haskell and MySQL on Windows? I've also tried `hdbc-mysql` and `mysql-simple`, but they both depend on the `mysql` package, so that didn't really help. I can't really use a different database, or Linux. Thanks for any tips, Jakub
Categories: Offsite Discussion

Functional Programmers Sought

haskell-cafe - Sat, 02/07/2015 - 8:25pm
I'm looking for experienced functional programmers who are happy to work with Haskell, Scala or Clojure. Please see https://hackerjobs.co.uk/jobs/2015/2/4/guardtime-experienced-functional-software-engineer I'm afraid I can't hire outside the UK or EU areas for this role. Very competitive salaries for the right person, occasional EU/US travel, working from home. Direct hire - I'm not an agency. You'll be working for me. Please ping me on here or privately at the e-mail address on the HackerJobs advert.
Categories: Offsite Discussion

cryptography in haskell

haskell-cafe - Sat, 02/07/2015 - 6:21pm
Hi, I've been wondering about the state of cryptography in Haskell. Not so much in the sense of "what libraries are out there?", but rather about the question of what crypto and IT security people think about ideas like rewriting something like OpenSSL in Haskell. I know it can be technically done, but are there any remarks in this area that are important for practical security? For example, some people think that it can be dangerous to implement something like this in high-level languages (e.g. Java, which was vulnerable to timing attacks). Of course I think Haskell can do a lot for us to make things safer:

* type safety
* referential transparency
* explicit knowledge about side-effects and which kind ...

But that doesn't tell me if it introduces new pitfalls/attack-vectors for practical cryptography implementations. -- Regards, Julian Ospald
Categories: Offsite Discussion