News aggregator

Mikhail Glushenkov: What's new in Cabal 1.24 — Nix-style local builds, setup dependencies, HTTPS and more!

Planet Haskell - Tue, 05/03/2016 - 6:00pm

We’ve just released versions 1.24 of both Cabal and cabal-install. The 1.24 release incorporates more than a thousand commits by 89 different contributors. This post describes what’s new and improved in this version.

User-visible features
  • Nix-style local builds in cabal-install (so far only a technical preview). See this post by Edward Z. Yang for more details.

  • Integration of a new security scheme for Hackage based on The Update Framework. So far this is not enabled by default, pending some changes on the Hackage side. See these three posts by Edsko de Vries and Duncan Coutts for more information.

  • Support for specifying setup script dependencies in .cabal files. Setup scripts are also now built with the cabal_macros.h-style macros for conditional compilation. See this post by Duncan Coutts for more information; a sketch of the new stanza appears after this list.

  • Support for HTTPS downloads in cabal-install. HTTPS is now used by default for downloads from Hackage. This uses either curl or wget or, on Windows, PowerShell, under the hood. Install target URLs can now also use HTTPS, e.g. cabal install https://example.com/foo-1.0.tar.gz.

  • cabal upload learned how to upload documentation to Hackage (cabal upload --doc) (#2890).

  • In related news, cabal haddock can now generate documentation intended for uploading to Hackage (cabal haddock --for-hackage, #2852). cabal upload --doc runs this command automatically if the documentation for the current package hasn’t been generated yet.

  • New cabal-install command: gen-bounds for easy generation of version bounds. See this post by Doug Beardsley for more information.

  • It’s now possible to limit the scope of --allow-newer to single packages in the install plan, both on the command line and in the config file. See here for an example.

  • The --allow-newer option can be now used with ./Setup configure (#3163).

  • New cabal user-config subcommand: init, which creates a default config file in either the default location (~/.cabal/config) or as specified by --config-file (#2553).

  • New config file field extra-framework-dirs for specifying extra locations to find OS X frameworks in (#3158). It can also be specified as an argument for the install and configure commands.

  • cabal-install solver now takes information about extensions and language flavours into account (#2873). The solver is now also aware of pkg-config constraints (#3023).

  • New cabal-install option: --offline, which prevents cabal-install from downloading anything from the Internet.

  • New cabal upload option -P/--password-command for reading Hackage password from arbitrary program output.

  • New --profiling-detail=$level flag with a default for libraries and executables of ‘exported-functions’ and ‘toplevel-functions’ respectively (GHC’s -fprof-auto-{exported,top} flags) (#193).

  • New --show-detail mode: --show-detail=direct; like streaming, but allows the test program to detect that it is connected to a terminal, and works reliably with a non-threaded runtime (#2911; also serves as a work-around for #2398).

  • Macros VERSION_$pkgname and MIN_VERSION_$pkgname are now also generated for the current package (#3235); see the usage sketch after this list.

  • The builddir option can now be specified via the CABAL_BUILDDIR environment variable and in cabal.config (#2484).

  • Added a log file message similar to one printed by make when building in another directory (#2642).
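
As promised above, here is a minimal sketch of a custom-setup stanza declaring setup script dependencies; the package names and version bounds are illustrative only:

    custom-setup
      setup-depends:
        base  >= 4.5 && < 5,
        Cabal >= 1.24,
        directory

With such a stanza, Setup.hs is built against exactly these dependencies rather than whatever happens to be in the global package database.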
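
And here is the promised sketch of the per-package version macros: assuming a package named foo (a hypothetical name), code built as part of foo can now test foo’s own version with CPP:

    {-# LANGUAGE CPP #-}
    module Foo.Internal where

    #if MIN_VERSION_foo(1,2,0)
    -- code that assumes foo itself is at least 1.2.0
    #else
    -- fallback for older versions of foo
    #endif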

Bug fixes and minor improvements
  • Support for GHC 8. NB: pre-1.24 versions of Cabal won’t work with GHC 8.

  • Cabal is now aware of extra C sources generated by preprocessors (e.g. c2hs and hsc2hs) (#2467).

  • Cabal now includes cabal_macros.h when running c2hs (#2600).

  • C sources are now recompiled only when needed (#2601).

  • Support Haddock response files to work around command-line length restrictions on Windows (#2746).

  • Library support for multi-instance package DBs (#2948).

  • Improvements in the ./Setup configure solver (#3082, #3076).

  • If there are multiple remote repos, cabal update now updates them in parallel (#2503).

  • The cabal program itself can now be used as an external setup method. This fixes an issue where a Cabal version mismatch caused unnecessary reconfigures (#2633).

  • Fixed space leaks in cabal update (#2826) and in the solver (#2916, #2914). Improved performance of --reorder-goals (#3208).

  • cabal exec and sandbox hc-pkg now use the configured compiler (#2859).

  • The man page for cabal-install is now automatically generated (#2877).

  • Miscellaneous minor and/or internal bug fixes and improvements.

Acknowledgements

Thanks to everyone who contributed code and bug reports, and to Ryan Thomas for helping with release management. The full list of people who contributed patches to Cabal/cabal-install 1.24 is available here.

Looking forward

We plan to make a new release of Cabal/cabal-install approximately 6 months after 1.24 — that is, in late October or early November 2016. The main features currently targeted at 1.26 are:

  • Further work on nix-style local builds, perhaps making that code path the default.

  • Enabling Hackage Security by default.

  • Native support for foreign libraries: Haskell libraries that are intended to be used by non-Haskell code.

  • New Parsec-based parser for .cabal files.

  • A revamped homepage for Cabal, rewritten user manual, and automated build bots for binary releases.

We would like to encourage people considering contributing to take a look at the bug tracker on GitHub, take part in discussions on tickets and pull requests, or submit their own. The bug tracker is reasonably well maintained and it should be relatively clear to new contributors what is in need of attention and which tasks are considered relatively easy. For more in-depth discussion there is also the cabal-devel mailing list.

Categories: Offsite Blogs

Christopher Done: The five arguments on why people struggle with monads

Planet Haskell - Tue, 05/03/2016 - 6:00pm

People trying to learn Haskell, or about Haskell, often struggle with the concept of monads; specifically, with the type class called Monad that appears throughout Haskell development. People talk about this a lot, probably because pedagogy is interesting, because you need to use the class in Haskell (so comprehension matters for language adoption), and because people like talking about things they understand that other people don’t.

I’m going to summarize the kinds of arguments typically put forth, each of which I consider a contributing factor to why people struggle with monads as applied in Haskell:

  • The alienation argument: The concept of a monad is inherently difficult for puny minds to understand (suddenly programmers aren’t good at abstraction?). No matter the quality of the materials or the context in which they present the topic, it’s just really hard and alien.
  • The bad pedagogy argument: The educational material is terrible. We just haven’t found the holy grail of monad tutorials.
  • The mysticism argument: There are so many tutorials and so much “woo” (and discussions like this very blog post; you are contributing by reading it! Shame on you!) around monads that people are left thinking they still don’t “get” it, as if some kind of zen moment were required.
  • The academic saturation argument: Many enthusiastic Haskellers encourage learning category theory, which leads to confusing the Haskell class Monad with the category-theoretic notion of a monad; the two are distinct.
  • The foreign language argument: Not having enough understanding of Haskell, specifically of Haskell’s type system, puts any concept expressed in that language out of reach (to understand the Monad class in any realistic way, you have to grok type classes and higher-kinded types).

I have my own opinion about which factors are most to blame, which I’ll elaborate on now.

The alienation argument

We’ve already seen monads applied in other languages: LINQ in C#, workflows in F#, promises in JavaScript. They’re not all the same as monads (let’s avoid that pedantry), but they’re not some whole other world, either. They have a notion of creating actions as first-class values, combining them in some way, and then running the composed thing. If programmers have no problem with this kind of thing in other languages, what’s so hard about Haskell’s Monad class?
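
To make that notion concrete, here is a tiny Haskell sketch of actions as first-class values that are combined and only then run:

    -- Actions are ordinary values; we can store them in a list...
    greetings :: [IO ()]
    greetings = [putStrLn "hello", putStrLn "world"]

    -- ...combine them into a single action, and run the result.
    main :: IO ()
    main = sequence_ greetings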

The bad pedagogy argument

Actually, there are a few tutorials that always come up in online discussions as quality monad tutorials, followed by echoes of approval anointing each The Great Monad Tutorial:

Whatever you think about the tutorials, we’re not lacking in quality educational materials. So what’s the problem?

The mysticism argument

This might be the answer to the previous argument. There are indeed too many tutorials. The pattern has been called a fallacy, and some advise never to read monad tutorials at all:

The ever-increasing monad tutorial timeline

The tutorials are not consistent, either; they fall into a few camps:

This is indeed a problem.

The academic saturation argument

Quoting Keith B:

Too many “monad tutorials” are written by either actual mathematicians or frustrated mathematicians manqué, and as a result they focus far too much on trying to transmit the excitement of understanding a cool new concept. Working programmers have little interest in this. They want to gain facility with useful techniques.

Fundamentally there are two audiences, people who want to learn category theory, and people who want to learn Haskell. Unfortunately, sometimes, the latter camp are baffled into believing they ought to be in the former camp.

But ultimately I think people putting the effort in aren’t really misled by this.

The foreign language argument

This is the most compelling reason for me.

If you don’t understand a language, you cannot understand concepts expressed in that language. Try to read about a new, abstract concept on Wikipedia, but switch the language to one you don’t understand well. Good luck. I’ve tried, with much difficulty, to explain to non-native English speakers why Blinkenlights makes me laugh to tears (note: the ‘ATTENTION’ German version is also funny).

There are two problems:

  1. You can’t understand the implications of class Monad m where ... without a solid understanding of (1) type classes and (2) higher-kinded types (see the sketch after this list). The type of return is Monad m => a -> m a. Haskell’s type classes support value polymorphism, a feature few other languages have. The m there has kind m :: * -> *. That’s a higher-kinded type. It’s hard to even explain these two concepts.
  2. Haskell starts out of the box with making use of the Monad class. The first thing you encounter as a newbie is also one of the things you are most ill-equipped to understand.
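
For reference, here is a minimal sketch of the class that item 1 refers to (a sketch of the Prelude class, not code to compile alongside it; the real class has a couple more methods):

    class Monad m where
      return :: a -> m a
      (>>=)  :: m a -> (a -> m b) -> m b

    -- m is applied to type arguments (m a, m b), so its kind must be
    -- * -> *: Monad classifies type constructors, not ordinary types.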

Is it any wonder that this class is such a pain point?

To add insult to injury, said audience aren’t aware that this is their poison. Laurence Gonsalves arrives at an insight without really knowing it:

Another problem that seems to be endemic to explanations of monads is that they’re written in Haskell. I’m not saying Haskell is a bad language – I’m saying it’s a bad language for explaining monads. If I knew Haskell I’d already understand monads, so if you want to explain monads, start by using a language that people who don’t know monads are more likely to understand.

Of course monad tutorials about the Monad class are written in Haskell: the class relies on two language features not present in other popular languages. JavaScript, Ruby, and Python don’t have a static type system. C# and Java have generics, and Common Lisp has generic functions, but none of them has value polymorphism. The whole reason Haskell is used to teach this concept is that it’s only a satisfying class in Haskell.

Let’s detour into natural language: French has no single verb equivalent to “peck”; it has “donner des coups de bec”, literally “attack with the front of the beak”. Italian has no word for “toe”; it has “dito del piede”, “finger of the foot”. English has “-ish” and “-y”, as in “this is greenish” or “it was salady”, whereas Italian has “-one” (large), so “bacio” (kiss) => “bacione” (big kiss), and “-o” and “-a” for masculine and feminine: figlio/figlia => son/daughter.

Programming languages are the same. There are some genuinely new features that other languages aren’t able to reproduce without losing the utility of the thing in the process.

Summarizing

In summarizing I’d personally assign the following ratings to each argument:

  • The alienation argument: not too convincing
  • The bad pedagogy argument: not likely contributory
  • The mysticism argument: very contributory
  • The academic saturation argument: not a big deal
  • The foreign language argument: a big contributor

I think educators can only acknowledge the two problems in the foreign language argument and teach a good understanding of the language before moving on to that class.

Categories: Offsite Blogs

Could Haddock export documentation with type family applications normalised?

haskell-cafe - Tue, 05/03/2016 - 11:20am
Hi all, I'm working on a library that uses quite a lot of type "magic" (as some would call it), but really it's all just implementation details. The type families are necessary for me to write the code, but an outside understanding of these type families shouldn't be necessary to understand the library. Unfortunately, Haddock does not align with that goal. To give you a feeling for things, take the following types:

    data Expr (t :: k)
    data AsHaskell (t :: k)

    data BaseType = DBInt | DBText

    type family Col (f :: k -> *) (a :: k) :: *
    type instance Col Expr      (a :: BaseType) = Expr a
    type instance Col AsHaskell (a :: BaseType) = BaseTypeAsHaskell a

    type family BaseTypeAsHaskell (bt :: BaseType) :: * where
      BaseTypeAsHaskell 'DBInt  = Int
      BaseTypeAsHaskell 'DBText = String

    class Lit (exprType :: k) where
      lit :: Col AsHaskell exprType -> Expr exprType

    instance Lit 'DBInt where
      lit = ...

    instance Lit 'DBText where
      lit = ...

(I am modelling the interaction with remote relational databases, to provide a li
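
To illustrate what normalisation would buy here (a sketch of the intent, not actual Haddock output): for the instance Lit 'DBInt, Haddock currently documents lit with the type family application left unreduced, whereas a reader would be better served by the reduced form.

    -- As documented today:
    --   lit :: Col AsHaskell 'DBInt -> Expr 'DBInt
    -- With type family applications normalised:
    --   lit :: Int -> Expr 'DBInt
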
Categories: Offsite Discussion

Controlling how test data is generated in QuickCheck2

haskell-cafe - Tue, 05/03/2016 - 2:52am
I have a problem similar to this question < http://stackoverflow.com/questions/9977734/controlling-how-test-data-is-generated-in-quickcheck>. Below I will articulate my specifics, the code I am using, and the particular question I have. I have written a fizz-buzz program that uses a Fibonacci sequence as input. I would like to test two things. (1) Does my program emit the correct string given an Int that meets a particular condition? (2) Is my Fibonacci generator generating Fibonacci numbers? The problem I am having is similar to the link above. The range of `Int`s is too large. How do I constrain my tests to, say, the first 1000 Fibonacci numbers? Here is code that I think is both adequate and minimal. Please let me know if I need to elaborate.

    import Data.Numbers.Primes (isPrime)
    import Test.Hspec (Spec,hspec,describe,it,shouldBe)
    import Test.Hspec.QuickCheck (prop)

    qcheck :: Spec
    qcheck = do
      describe "QuickCheck test fiz" $
        prop "QuickCh
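
One common way to constrain the tested range (a sketch against a self-contained fibs list, not the poster's code) is to generate an index into the sequence with choose and drive the property through forAll:

    import Test.QuickCheck

    fibs :: [Integer]
    fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

    -- Constrain tests to the first 1000 Fibonacci numbers by
    -- generating the index, not the (unboundedly large) number.
    prop_fibRecurrence :: Property
    prop_fibRecurrence =
      forAll (choose (2, 999)) $ \i ->
        fibs !! i == fibs !! (i - 1) + fibs !! (i - 2)

Running quickCheck prop_fibRecurrence then samples only indices 2 through 999.
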
Categories: Offsite Discussion

Applications using generalized monads?

haskell-cafe - Mon, 05/02/2016 - 7:40pm
Hello, I am currently looking into applications of generalized monads. I am aware of the following generalized monad concepts (and libraries implementing them):

* Indexed/Parameterized/Hoare Monads [1]
  * indexed [8]
  * simple-sessions [9][4]
  * monad-param [10][3][6]
* Effect Monads [2]
  * effect-monad [11][5]
  * monad-param [12][3]
* Constrained Monads
  * rmonad [13]

Question 1: Do you know of other libraries that implement generalized monad concepts? Most of the libraries I listed above are in working condition, but still outdated.

Question 2: Do you know of (open source) software and applications that use these concepts/libraries to implement something?

This mail is a copy of my post on Reddit [7]. I am sorry for reposting it here, but I did not get a response to that post for almost 2 weeks now.

Best, Jan

[1]: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.104.3020
[2]: http://dl.acm.org/citation.cfm?doid=2578855.2535846
[3]: http://comonad.com/reader/2007/parameterized-monads-i
Categories: Offsite Discussion

Bryn Keller: Mac OS X C++ Development (part 2)

Planet Haskell - Mon, 05/02/2016 - 6:00pm

As I wrote in my previous post, I recently started working on a Mac. This is a collection of my notes on the problems and surprises I ran into. Maybe it will be useful for you, too.

Tools

In addition to many of the command line tools you may be familiar with from C++ development on Linux, Mac OS X has specialized tools for working with binaries.

otool

The otool utility displays information from binary headers. It works with Mach headers (which are the main thing, on Macs) but also works with other “universal” formats as well. There are many options, but the main one that’s been helpful to me so far has been otool -L <lib>, which tells you the dependencies of the archive, which are essential debugging information for dynamic linking problems. Another useful one is otool -D <lib>, which tells you the install names embedded in a given library.

install_name_tool

This tool allows you to change the install names for a binary. Simple, but important.

By now you must be wondering what install names are.

Install Names

Let us begin with man dyld. There is no dyld command. Nevertheless, it is the dynamic linker for Mac OS X. The man page tells us many interesting things, perhaps chief among which is how dynamic libraries are located at run-time.

When you link an application to a dynamic library, the path to that library is encoded in the application binary. Well, actually, when you build, the library is interrogated to find out where it is supposed to be installed, and that path gets encoded in the application binary. These paths where things are supposed to be installed are known as install names on Mac OS X. Every .dylib has (at least) one. You can use otool -D to find the install name of a .dylib file.

In most cases, the install name will have an absolute path, like this:

> $ otool -D /usr/lib/libiconv.dylib
  /usr/lib/libiconv.dylib:
  /usr/lib/libiconv.2.dylib

So if you link with /usr/lib/libiconv.dylib, your application will look for /usr/lib/libiconv.2.dylib when it launches. If the .dylib isn’t where it’s expected, the app crashes.

Relative paths

Sometimes a library will have a relative path as its install name. When this happens, dyld will search for a library with that name on several search paths. It’s similar to LD_LIBRARY_PATH in some ways. There are environment variables you can set to control where it searches; consult man dyld if you want the gory details. One thing that is important to know is that one of these variables, DYLD_FALLBACK_LIBRARY_PATH, has the default value $(HOME)/lib:/usr/local/lib:/lib:/usr/lib. Occasionally you may see advice on the web that recommends symlinking a library into your $(HOME)/lib without any further explanation. DYLD_FALLBACK_LIBRARY_PATH is why.

@rpath

If you want to distribute an application that relies on libraries that are not necessarily in known standard places, it is a good idea to use @rpath as part of your install name. This directs dyld to look in a path that is relative to the application binary (or in the case of a library that is dynamically linked to another library, the path is relative to the calling library), rather than looking in the places it normally would. This would allow you to bundle a set of libraries along with your application, and have them be found regardless of where you install the application. There are several other options, such as @executable_path and @loader_path, but @rpath seems to be the right choice in most cases. Again, man dyld has all the details.

An Example: Dionysus

I wanted to build Dionysus, a library for working with persistent homology. I wanted to build the Python bindings.

Linker Errors

After I managed to get through the compile phase, I ran into linker problems, with unresolved symbol errors similar to the ones I mentioned in part 1. However, in this case, changing the compiler didn’t fix the problem. I was getting linking errors trying to link with Boost, which I had already reinstalled on my system from source, using gcc:

export HOMEBREW_CC=gcc-5
export HOMEBREW_CXX=g++-5
brew reinstall --build-from-source boost
brew reinstall --build-from-source boost --with-python3

That wasn’t enough though. I still got linker errors. Googling suggested maybe the problem was that Boost.Python (which Dionysus uses for its Python binding) was linked to a different version of Python than the one I was targeting. This turned out to be a red herring, however. As far as I can tell, Boost.Python doesn’t actually link to Python at all (though it has to be compiled with support for Python 3 if you want that, so there’s certainly a connection, just not a linking one).

The problem actually was that Dionysus wanted the C++ 11 version of Boost, and the default Homebrew version doesn’t have that support. For this, you need:

brew reinstall --build-from-source boost --with-c++11
brew reinstall --build-from-source boost-python --with-c++11

Build-time linking errors resolved, I thought I was home free.

Dynamic Linking Errors

At last, I had built the library. I fired up python2.7, imported the library. It worked! Then I created an instance of one of the classes, and got this:

TypeError: __init__() should return None, not 'NoneType'

After a lot of googling, it turned out that the most likely cause for this was that the Dionysus binding had wound up linked to the wrong Python library. I carefully checked the CMakeCache.txt file and eliminated any occurrences of the wrong Python library, or the wrong Boost library. Still no luck.

A look at otool -L output for the library showed something funny:

> $ otool -L lib_dionysus.dylib
  lib_dionysus.dylib:
  /Users/me/src/Dionysus/build/bindings/python/lib_dionysus.dylib (compatibility version 0.0.0, current version 0.0.0)
  libpython2.7.dylib (compatibility version 2.7.0, current version 2.7.0)
  /usr/local/opt/boost-python/lib/libboost_python-mt.dylib (compatibility version 0.0.0, current version 0.0.0)
  /usr/local/opt/mpfr/lib/libmpfr.4.dylib (compatibility version 6.0.0, current version 6.3.0)
  /usr/local/opt/gmp/lib/libgmp.10.dylib (compatibility version 14.0.0, current version 14.0.0)

Do you notice that all the entries are absolute paths except for the libpython2.7 one? A quick look at the Anaconda libpython2.7.dylib I linked against shows why:

> $ otool -D /Users/me/anaconda2/lib/libpython2.7.dylib
  /Users/me/anaconda2/lib/libpython2.7.dylib:
  libpython2.7.dylib

So Anaconda’s version of Python instructed the linker to find libpython2.7.dylib by going through the normal dyld process - checking DYLD_LIBRARY_PATH, then DYLD_FALLBACK_LIBRARY_PATH, and so on. Since /usr/lib is on DYLD_FALLBACK_LIBRARY_PATH by default, and the system libpython2.7.dylib is in that directory, Dionysus was getting linked with that one, not with the one that was already in memory in the current process. This led to the strange error I saw.

I was able to use install_name_tool to change the binary to point to the right libpython2.7:

> $ install_name_tool lib_dionysus.dylib -change libpython2.7.dylib /Users/me/anaconda2/lib/libpython2.7.dylib

After this, everything was fine.

Further reading

In addition to man dyld, these are useful:

Categories: Offsite Blogs

Douglas M. Auclair (geophf): April 2016 1HaskellADay Problem and Solutions

Planet Haskell - Mon, 05/02/2016 - 10:51am
April 2016
Categories: Offsite Blogs

Edward Z. Yang: Announcing cabal new-build: Nix-style local builds

Planet Haskell - Mon, 05/02/2016 - 10:45am

cabal new-build, also known as “Nix-style local builds”, is a new command inspired by Nix that comes with cabal-install 1.24. Nix-style local builds combine the best of non-sandboxed and sandboxed Cabal:

  1. Like sandboxed Cabal today, we build sets of independent local packages deterministically and independent of any global state. new-build will never tell you that it can't build your package because it would result in a “dangerous reinstall.” Given a particular state of the Hackage index, your build is completely reproducible. For example, you no longer need to compile packages with profiling ahead of time; just request profiling and new-build will rebuild all its dependencies with profiling automatically.
  2. Like non-sandboxed Cabal today, builds of external packages are cached globally, so that a package can be built once, and then reused anywhere else it is also used. No need to continually rebuild dependencies whenever you make a new sandbox: dependencies which can be shared, are shared.

Nix-style local builds work with all versions of GHC supported by cabal-install 1.24, which currently is GHC 7.0 and later. Additionally, cabal-install is on a different release cycle than GHC, so we plan to be pushing bugfixes and updates on a faster basis than GHC's yearly release cycle.

Although this feature is still only in beta (there are bugs, see “Known Issues”, and the documentation is a bit sparse), I’ve been successfully using Nix-style local builds exclusively to do my Haskell development. It’s hard to overstate my enthusiasm for this new feature: it “just works”, and you don’t need to assume that there is a distribution of blessed, version-pegged packages to build against (e.g., Stackage). Eventually, new-build will simply replace the existing build command.

Quick start

Nix-style local builds “just work”: there is very little configuration that needs to be done to start working with it.

  1. Download and install cabal-install 1.24:

    cabal update
    cabal install cabal-install

    Make sure the newly installed cabal is in your path.

  2. To build a single Cabal package, instead of running cabal configure; cabal build, you can use Nix-style builds by prefixing these commands with new-; e.g., cabal new-configure; cabal new-build. cabal new-repl is also supported. (Unfortunately, other commands are not yet supported, e.g. new-clean (#2957) or new-freeze (#2996).)

  3. To build multiple Cabal packages, you need to first create a cabal.project file in some root directory. For example, in the Cabal repository, there is a root directory with a folder per package, e.g., the folders Cabal and cabal-install. Then in cabal.project, specify each folder:

    packages: Cabal/ cabal-install/

    Then, in the directory for a package, you can say cabal new-build to build all of the components in that package; alternately, you can specify a list of targets to build, e.g., package-tests cabal asks to build the package-tests test suite and the cabal executable. A component can be built from any directory; you don't have to be cd'ed into the directory containing the package you want to build. Additionally, you can qualify targets by the package they came from, e.g., Cabal:package-tests asks specifically for the package-tests component from Cabal. There is no need to manually configure a sandbox: add a cabal.project file, and it just works!

Unlike sandboxes, there is no need to add-source; just add the package directories to your cabal.project. And unlike traditional cabal install, there is no need to explicitly ask for packages to be installed; new-build will automatically fetch and build dependencies.

There is also a convenient script you can use for hooking up new-build to your Travis builds.

How it works

Nix-style local builds are implemented with these two big ideas:

  1. For external packages (from Hackage), prior to compilation, we take all of the inputs which would influence the compilation of a package (flags, dependency selection, etc.) and hash it into an identifier. Just as in Nix, these hashes uniquely identify the result of a build; if we compute this identifier and we find that we already have this ID built, we can just use the already built version. These packages are stored globally in ~/.cabal/store; you can list all of the Nix packages that are globally available using ghc-pkg list --package-db=$HOME/.cabal/store/ghc-VERSION/package.db.
  2. For local packages, we instead assign an inplace identifier, e.g., foo-0.1-inplace, which is local to a given cabal.project. These packages are stored locally in dist-newstyle/build; you can list all of the per-project packages using ghc-pkg list --package-db=dist-newstyle/packagedb. This treatment applies to any remote packages which depend on local packages (e.g., if you vendored some dependency which your other dependencies depend on.)

Furthermore, Nix local builds use a deterministic dependency solving strategy, by doing dependency solving independently of the locally installed packages. Once we've solved for the versions we want to use and have determined all of the flags that will be used during compilation, we generate identifiers and then check if we can improve packages we would have needed to build into ones that are already in the database.

Commands

new-configure FLAGS

Overwrites cabal.project.local based on FLAGS.

new-build [FLAGS] [COMPONENTS]

Builds one or more components, automatically building any local and non-local dependencies (where a local dependency is one where we have an inplace source code directory that we may modify during development). Non-local dependencies which do not have a transitive dependency on a local package are installed to ~/.cabal/store, while all other dependencies are installed to dist-newstyle.

The set of local packages is read from cabal.project; if none is present, it assumes a default project consisting of all the Cabal files in the local directory (i.e., packages: *.cabal), and optional packages in every subdirectory (i.e., optional-packages: */*.cabal).

The configuration of the build of local packages is computed by reading flags from the following sources (with later sources taking priority):

  1. ~/.cabal/config
  2. cabal.project
  3. cabal.project.local (usually generated by new-configure)
  4. FLAGS from the command line

The configuration of non-local packages is only affected by package-specific flags in these sources; global options are not applied to the build. (For example, if you --disable-optimization, this will only apply to your local inplace packages, and not their remote dependencies.)

new-build does not read configuration from cabal.config.

Phrasebook

Here is a handy phrasebook for how to do existing Cabal commands using Nix local build:

old-style                            new-style
cabal configure                      cabal new-configure
cabal build                          cabal new-build
cabal clean                          rm -rf dist-newstyle cabal.project.local
cabal run EXECUTABLE                 cabal new-build; ./dist-newstyle/build/PACKAGE-VERSION/build/EXECUTABLE/EXECUTABLE
cabal repl                           cabal new-repl
cabal test TEST                      cabal new-build; ./dist-newstyle/build/PACKAGE-VERSION/build/TEST/TEST
cabal benchmark BENCH                cabal new-build; ./dist-newstyle/build/PACKAGE-VERSION/build/BENCH/BENCH
cabal haddock                        does not exist yet
cabal freeze                         does not exist yet
cabal install --only-dependencies    unnecessary (handled by new-build)
cabal install                        does not exist yet (for libraries new-build should be sufficient; for executables, they can be found in ~/.cabal/store/ghc-GHCVER/PACKAGE-VERSION-HASH/bin)

cabal.project files

cabal.project files actually support a variety of options beyond packages for configuring the details of your build. Here is a simple example file which displays some of the possibilities:

-- For every subdirectory, build all Cabal files
-- (project files support multiple Cabal files in a directory)
packages: */*.cabal
-- Use this compiler
with-compiler: /opt/ghc/8.0.1/bin/ghc
-- Constrain versions of dependencies in the following way
constraints: cryptohash < 0.11.8
-- Do not build benchmarks for any local packages
benchmarks: False
-- Build with profiling
profiling: true
-- Suppose that you are developing Cabal and cabal-install,
-- and your local copy of Cabal is newer than the
-- distributed hackage-security allows in its bounds: you
-- can selectively relax hackage-security's version bound.
allow-newer: hackage-security:Cabal

-- Settings can be applied per-package
package cryptohash
  -- For the build of cryptohash, instrument all functions
  -- with a cost center (normally, you want this to be
  -- applied on a per-package basis, as otherwise you would
  -- get too much information.)
  profiling-detail: all-functions
  -- Disable optimization for this package
  optimization: False
  -- Pass these flags to GHC when building
  ghc-options: -fno-state-hack

package bytestring
  -- And bytestring will be built with the integer-simple
  -- flag turned off.
  flags: -integer-simple

When you run cabal new-configure, it writes out a cabal.project.local file which saves any extra configuration options from the command line; if you want to know how command-line arguments get translated into a cabal.project file, just run new-configure and inspect the output.

Known issues

As a tech preview, the code is still a little rough around the edges. Here are some more major issues you might run into:

  • Although dependency resolution is deterministic, if you update your Hackage index with cabal update, dependency resolution will change too. There is no cabal new-freeze, so you'll have to manually construct the set of desired constraints.
  • A new feature of new-build is that it avoids rebuilding packages when there have been no changes to them, by tracking the hashes of their contents. However, this dependency tracking is not 100% accurate (specifically, it relies on your Cabal file accurately reporting all file dependencies ala sdist, and it doesn't know about search paths). There's currently no UI for forcing a package to be recompiled; however you can induce a recompilation fairly easily by removing an appropriate cache file: specifically, for the package named p-1.0, delete the file dist-newstyle/build/p-1.0/cache/build.
  • On Mac OS X with the Haskell Platform, you may get the message “Warning: The package list for 'hackage.haskell.org' does not exist. Run 'cabal update' to download it.” That is issue #3392; see the linked ticket for workarounds.

If you encounter other bugs, please let us know on Cabal's issue tracker.

Categories: Offsite Blogs

[TFP'16] call for participation

haskell-cafe - Mon, 05/02/2016 - 8:04am
-----------------------------
C A L L   F O R   P A R T I C I P A T I O N
-----------------------------

======== TFP 2016 ===========
17th Symposium on Trends in Functional Programming
June 8-10, 2016
University of Maryland, College Park
Near Washington, DC
http://tfp2016.org/

The symposium on Trends in Functional Programming (TFP) is an international forum for researchers with interests in all aspects of functional programming, taking a broad view of current and future trends in the area. It aspires to be a lively environment for presenting the latest research results, and other contributions (see below). Authors of draft papers will be invited to submit revised papers based on the feedback received at the symposium. A post-symposium refereeing process will then select a subset of these articles for formal publication.
Categories: Offsite Discussion

Tom Schrijvers: PPDP 2016: Call for Papers

Planet Haskell - Mon, 05/02/2016 - 4:53am
======================================================================

                         Second call for papers
                   18th International Symposium on
          Principles and Practice of Declarative Programming
                              PPDP 2016

         Special Issue of Science of Computer Programming (SCP)

                 Edinburgh, UK, September 5-7, 2016
                  (co-located with LOPSTR and SAS)

                     http://ppdp16.webs.upv.es/

======================================================================

         SUBMISSION DEADLINE: 9 MAY (abstracts) / 16 MAY (papers)

----------------------------------------------------------------------
INVITED SPEAKERS

  Elvira Albert, Complutense University of Madrid, Spain

----------------------------------------------------------------------

PPDP  2016  is a  forum  that  brings  together researchers  from  the
declarative  programming communities, including  those working  in the
logic,  constraint  and  functional  programming paradigms,  but  also
embracing languages, database  languages, and knowledge representation
languages. The  goal is  to stimulate research  in the use  of logical
formalisms  and  methods  for  specifying, performing,  and  analyzing
computations,   including   mechanisms   for   mobility,   modularity,
concurrency,  object-orientation,  security,  verification and  static
analysis. Papers related to the use of declarative paradigms and tools
in industry and education are especially solicited. Topics of interest
include, but are not limited to

* Functional programming
* Logic programming
* Answer-set programming
* Functional-logic programming
* Declarative visual languages
* Constraint Handling Rules
* Parallel implementation and concurrency
* Monads, type classes and dependent type systems
* Declarative domain-specific languages
* Termination, resource analysis and the verification of declarative programs
* Transformation and partial evaluation of declarative languages
* Language extensions for security and tabulation
* Probabilistic modeling in a declarative language and modeling reactivity
* Memory management and the implementation of declarative systems
* Practical experiences and industrial application

This year the conference will be co-located with the  26th Int'l Symp.
on Logic-Based Program Synthesis and Transformation (LOPSTR 2016)  and
the 23rd Static Analysis Symposium (SAS 2016).

The  conference will  be held in Edinburgh, UK. Previous symposia were
held  at  Siena  (Italy),  Canterbury  (UK),  Madrid  (Spain),  Leuven
(Belgium), Odense (Denmark), Hagenberg (Austria),  Coimbra (Portugal),
Valencia (Spain), Wroclaw (Poland), Venice (Italy), Lisboa (Portugal),
Verona (Italy), Uppsala (Sweden), Pittsburgh (USA), Florence  (Italy),
Montreal (Canada),  and  Paris (France).  You might have a look at the
contents of past PPDP symposia, http://sites.google.com/site/ppdpconf/

Papers  must  describe original  work,  be  written  and presented  in
English, and must not substantially overlap with papers that have been
published  or   that  are  simultaneously  submitted   to  a  journal,
conference, or  workshop with refereed proceedings.  Work that already
appeared in  unpublished or informally  published workshop proceedings
may be submitted (please contact the PC chair in case of questions).

After the symposium, a selection of the best papers will be invited to
extend their submissions in the light of the feedback solicited at the
symposium.   The papers  are expected  to include  at least  30% extra
material over and above the PPDP version. Then, after another round of
reviewing, these revised  papers will be published in  a special issue
of SCP with a target publication date by Elsevier of 2017.

Important Dates

  Abstract submission:       9  May, 2016
  Paper submission:         16  May, 2016
  Notification:             20 June, 2016
  Final version of papers:  17 July, 2016

  Symposium:                5-7 September, 2016

Authors  should  submit  an  electronic  copy of  the  full  paper  in
PDF. Papers  should be  submitted to the  submission website  for PPDP
2016. Each submission must include  on its first page the paper title;
authors  and   their  affiliations;   abstract;  and  three   to  four
keywords. The keywords will be used to assist the program committee in
selecting appropriate  reviewers for the paper.  Papers should consist
of   the   equivalent  of   12   pages   under   the  ACM   formatting
guidelines.  These   guidelines  are  available   online,  along  with
formatting templates  or style files. Submitted papers  will be judged
on the basis of significance, relevance, correctness, originality, and
clarity. They should  include a clear identification of  what has been
accomplished and  why it is  significant. Authors who wish  to provide
additional material to  the reviewers beyond the 12-page  limit can do
so in  clearly marked appendices:  reviewers are not required  to read
such appendices.

Program Committee

  Sandra Alves, University of Porto, Portugal
  Zena M. Ariola, University of Oregon, USA
  Kenichi Asai, Ochanomizu University, Japan
  Dariusz Biernacki, University of Wroclaw, Poland
  Rafael Caballero, Complutense University of Madrid, Spain
  Iliano Cervesato, Carnegie Mellon University
  Marina De Vos, University of Bath, UK
  Agostino Dovier, Universita degli Studi di Udine, Italy
  Maribel Fernandez, King's College London, UK
  John Gallagher, Roskilde University, Denmark, and IMDEA Software Institute, Spain
  Michael Hanus, CAU Kiel, Germany
  Martin Hofmann, LMU Munchen, Germany
  Gerda Janssens, KU Leuven, Belgium
  Kazutaka Matsuda, Tohoku University, Japan
  Fred Mesnard, Universite de la Reunion, France
  Emilia Oikarinen, Finnish Institute of Occupational Health, Finland
  Alberto Pettorossi, Universita di Roma Tor Vergata, Italy
  Tom Schrijvers, KU Leuven, Belgium
  Josep Silva, Universitat Politecnica de Valencia, Spain
  Perdita Stevens, University of Edinburgh, UK
  Peter Thiemann, Universitat Freiburg, Germany
  Frank D. Valencia, CNRS-LIX Ecole Polytechnique de Paris, France, and Pontificia Universidad Javeriana de Cali, Colombia
  German Vidal, Universitat Politecnica de Valencia, Spain (Program Chair)
  Stephanie Weirich, University of Pennsylvania, USA

Program Chair

    German Vidal
    Universitat Politecnica de Valencia
    Camino de Vera, S/N
    E-46022 Valencia, Spain
    Email: gvidal@dsic.upv.es

Organizing committee

    James Cheney (University of Edinburgh, Local Organizer)
    Moreno Falaschi (University of Siena, Italy)
----------------------------------------------------------------------
Categories: Offsite Blogs

Chris Smith: CodeWorld/Summer of Haskell Update

Planet Haskell - Sun, 05/01/2016 - 10:45pm

Reminder: The deadline for Summer of Haskell submissions is this Friday, May 6.

One slot in Summer of Haskell this year will specifically be chosen based on CodeWorld.  If you plan to submit a proposal for CodeWorld, please feel free to contact me with any questions, concerns, or for early feedback.  I’ll definitely try my best to help you write the best proposal possible.  So far, I’m aware of three to four CodeWorld proposals in the works.

Q&A

What is Summer of Haskell?

Summer of Haskell is a program by the Haskell.org committee to encourage students to spend the summer contributing to open-source projects that benefit the Haskell community.  That encouragement comes in the form of a stipend of US$5500.  More details are at http://summer.haskell.org.

How is CodeWorld related to Summer of Haskell?

The Haskell.org committee will choose a number of student projects based on their impact to the Haskell community.  As part of this, one project will be chosen specifically relating to CodeWorld, and funded by CodeWorld maintainers.

Should I submit a proposal?

It’s up to you, but I believe you should submit a proposal if:

  • You are eligible (see the bottom of the Summer of Haskell info page).
  • You are willing and available to take on an essentially full-time commitment for the summer.
  • You have a realistic idea you’d like to work on to benefit the Haskell community.
Any advice for writing a proposal?

Yes!  Here are things you should keep in mind:

  1. Propose a project with immediate impact on real people.  “If you build it, they will come” doesn’t work here.  Unless you have an extremely good reason, don’t propose to build something speculative and hope people will just like it so much that they adopt it.  Point to real people who already want this, and who will already be users and will find their lives better if and when it’s completed.
  2. Demonstrate that you understand the task.  Provide enough detail to convince us that the project is feasible.  A reasonable and concrete timeline with at least rough deliverables is a good idea.  Poorly defined projects with a low probability of success are often not good fits for this format.
  3. Show that you are already becoming a part of the community you’ll be working with.  Are you familiar with the project you’re proposing to contribute to?  Do core people in the project and/or the Haskell community know who you are?  Have you discussed your ideas with people already involved in the project?  Do you know someone who would be your mentor?

You can browse successful projects from last year.  There’s also some good advice by Edward Kmett in an old mailing list thread.


Categories: Offsite Blogs

Transforming graphs with loops with Hoopl

haskell-cafe - Sun, 05/01/2016 - 6:59pm
I am trying to implement a program transformation with Hoopl. The transformation replaces a particular statement with variable declarations provided that the variables have not been previously declared. A map keeps track of declared variables, where the map keys store variable names. The transformation works only for programs without loops. In the program below, it should replace line 2

    rec #3 INxt B37H00G

with

    1 Global Field B37H00G
    2 Global Array B37HO3R
    3 rec #3 INxt B37H00G

but it doesn't.

=== Example graph ===
    0 goto L4
    1 L4:
    2 rec #3 INxt B37H00G
    3 while #3 goto L5 goto L2
    4 L5:
    5 Global Field VSTF01a
    6 goto L4
    7 L2:

I think the reason it fails is because there are two paths leading to L4, one from the program start (line 0) and a second when the loop repeats from line 6. In the first case the transformation produces the substitution graph since the two variables are not in the map. In the analysis of the second path, however, the variables are in the name space so the rewrite function returns Nothing.
Categories: Offsite Discussion

Mark Jason Dominus: Typewriters

Planet Haskell - Sun, 05/01/2016 - 6:00pm

It will surprise nobody to learn that when I was a child, computers were almost unknown, but it may be more surprising that typewriters were unusual.

Probably the first typewriter I was familiar with was my grandmother’s IBM “Executive” model C. At first I was not allowed to touch this fascinating device, because it was very fancy and expensive and my grandmother used it for her work as an editor of medical journals.

The “Executive” was very advanced: it had proportional spacing. It had two space bars, for different widths of spaces. Characters varied between two and five ticks wide, and my grandmother had typed up a little chart giving the width of each character in ticks, which she pasted to the top panel of the typewriter. The font was sans-serif, and I remember being a little puzzled when I first noticed that the lowercase j had no hook: it looked just like the lowercase i, except longer.

The little chart was important, I later learned, when I became old enough to use the typewriter and was taught its mysteries. Press only one key at a time, or the type bars will collide. Don't use the (extremely satisfying) auto-repeat feature on the hyphen or underscore, or the platen might be damaged. Don't touch any of the special controls; Grandma has them adjusted the way she wants. (As a concession, I was allowed to use the “expand” switch, which could be easily switched off again.)

The little chart was part of the procedure for correcting errors. You would backspace over the character you wanted to erase—each press of the backspace key would move the carriage back by one tick, and the chart told you how many times to press—and then place a slip of correction paper between the ribbon and the paper, and retype the character you wanted to erase. The dark ribbon impression would go onto the front of the correction slip, which was always covered with a pleasing jumble of random letters, and the correction slip impression, in white, would exactly overprint the letter you wanted to erase. Except sometimes it didn't quite: the ribbon ink would have spread a bit, and the corrected version would be a ghostly white letter with a hair-thin black outline. Or if you were a small child, as I was, you would sometimes put the correction slip in backwards, and the white ink would be transferred uselessly to the back of the ribbon instead of to the paper. Or you would select a partly-used portion of the slip and the missing bit of white ink would leave a fragment of the corrected letter on the page, like the broken-off leg of a dead bug.

Later I was introduced to the use of Liquid Paper (don't brush on a big glob, dot it on a bit at a time with the tip of the brush) and carbon paper, another thing you had to be careful not to put in backward, although if you did you got a wonderful result: the typewriter printed mirror images.

From typing alphabets, random letters, my name, and of course qwertyuiops I soon moved on to little poems, stories, and other miscellanea, and when my family saw that I was using the typewriter for writing, they presented me with one of my own, a Royal manual (model HHE maybe?) with a two-color ribbon, and I was at last free to explore the mysteries of the TAB SET and TAB CLEAR buttons. The front panel had a control for a three-color ribbon, which forever remained an unattainable mystery. Later I graduated to a Smith-Corona electric, on which I wrote my high school term papers. The personal computer arrived while I was in high school, but available printers were either expensive or looked like crap.

When I was in first grade our classroom had acquired a cheap manual typewriter, which as I have said, was an unusual novelty, and I used it whenever I could. I remember my teacher, Ms. Juanita Adams, complaining that I spent too much time on the typewriter. “You should work more on your handwriting, Jason. You might need to write something while you’re out on the street, and you won't just be able to pull a typewriter out of your pocket.”

She was wrong.

Categories: Offsite Blogs

deepseq: instance NFData (a -> b)

libraries list - Sun, 05/01/2016 - 3:38pm
According to Haddock comments, between deepseq-1.2 and deepseq-1.3 an instance for NFData on functions was introduced without previous discussion. I find this instance pretty problematic since it has no superclasses. The correct instance would certainly be something like

    instance (Enumerate a, NFData b) => NFData (a -> b)

where Enumerate would be a new class that allows one to enumerate all values of a type. This would be hardly useful because it is pretty inefficient. I'd prefer that the instance be removed again, or even better, replaced by a non-implementable instance. Alternatively we should replace it by a correct implementation with corresponding superclasses. If we do the second, then we could still omit the Enumerate instance for types where enumeration of all values of the type is too expensive. I assume that the instance was added to simplify automatic derivation of NFData instances. However, I think it would be better if people insert custom deepseq implementation for the expected f
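
A minimal sketch of the proposal above; Enumerate is the hypothetical class from the post, and this instance would of course clash with the one deepseq >= 1.3 already exports:

    import Control.DeepSeq (NFData (..))

    -- Hypothetical class: enumerate all values of a type.
    class Enumerate a where
      enumerate :: [a]

    -- Forcing a function means forcing its result at every possible
    -- argument: correct, but impractical for any large domain.
    instance (Enumerate a, NFData b) => NFData (a -> b) where
      rnf f = rnf (map f enumerate)
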
Categories: Offsite Discussion

Hackage package upload error: Invalid package

haskell-cafe - Sun, 05/01/2016 - 1:31pm
Hi! Wanted to upload the 0.8.1 version of hedis, did what I usually did, but getting an error. Running "stack -v upload ." seems to show some tls-related details. ``` ➜ hedis git:(master) stack -v upload . Version 1.0.4.3 x86_64 2016-05-01 15:29:31.034044: [debug] Checking for project config at: /Users/kb/workspace/hedis/stack.yaml < at >(stack_2rXRdr1j02iFXWAif5re4K:Stack.Config src/Stack/Config.hs:761:9) 2016-05-01 15:29:31.037100: [debug] Loading project config file stack.yaml < at >(stack_2rXRdr1j02iFXWAif5re4K:Stack.Config src/Stack/Config.hs:779:13) 2016-05-01 15:29:31.110561: [debug] Checking for project config at: /Users/kb/workspace/hedis/stack.yaml < at >(stack_2rXRdr1j02iFXWAif5re4K:Stack.Config src/Stack/Config.hs:761:9) 2016-05-01 15:29:31.110690: [debug] Loading project config file stack.yaml < at >(stack_2rXRdr1j02iFXWAif5re4K:Stack.Config src/Stack/Config.hs:779:13) 2016-05-01 15:29:31.111459: [debug] Trying to decode /Users/kb/.stack/build-plan-cache/x86_64-osx/lts-5.3.cache < at >(stack_2rXRdr1j02iFXWAif5re4K:
Categories: Offsite Discussion

Philip Wadler: Paul Graham on Writing, Briefly

Planet Haskell - Sun, 05/01/2016 - 6:46am

Thanks to Arne Ranta for introducing me to Writing, Briefly by Paul Graham:

I think it's far more important to write well than most people realize. Writing doesn't just communicate ideas; it generates them. If you're bad at writing and don't like to do it, you'll miss out on most of the ideas writing would have generated. As for how to write well, here's the short version: Write a bad version 1 as fast as you can; rewrite it over and over; cut out everything unnecessary; write in a conversational tone; develop a nose for bad writing, so you can see and fix it in yours; imitate writers you like; if you can't get started, tell someone what you plan to write about, then write down what you said; expect 80% of the ideas in an essay to happen after you start writing it, and 50% of those you start with to be wrong; be confident enough to cut; have friends you trust read your stuff and tell you which bits are confusing or drag; don't (always) make detailed outlines; mull ideas over for a few days before writing; carry a small notebook or scrap paper with you; start writing when you think of the first sentence; if a deadline forces you to start before that, just say the most important sentence first; write about stuff you like; don't try to sound impressive; don't hesitate to change the topic on the fly; use footnotes to contain digressions; use anaphora to knit sentences together; read your essays out loud to see (a) where you stumble over awkward phrases and (b) which bits are boring (the paragraphs you dread reading); try to tell the reader something new and useful; work in fairly big quanta of time; when you restart, begin by rereading what you have so far; when you finish, leave yourself something easy to start with; accumulate notes for topics you plan to cover at the bottom of the file; don't feel obliged to cover any of them; write for a reader who won't read the essay as carefully as you do, just as pop songs are designed to sound ok on crappy car radios; if you say anything mistaken, fix it immediately; ask friends which sentence you'll regret most; go back and tone down harsh remarks; publish stuff online, because an audience makes you write more, and thus generate more ideas; print out drafts instead of just looking at them on the screen; use simple, germanic words; learn to distinguish surprises from digressions; learn to recognize the approach of an ending, and when one appears, grab it.
Categories: Offsite Blogs

Dominic Steinitz: Fun with LibBi and Influenza

Planet Haskell - Sun, 05/01/2016 - 4:23am
Introduction

This is a bit different from my usual posts (well, apart from my write-up of hacking at Odessa) in that it is a log of how I managed to get LibBi (Library for Bayesian Inference) to run on my MacBook, though not totally satisfactorily (as you will see if you read on).

The intention is to try a few more approaches to the same problem, for example, Stan, monad-bayes and hand-crafted.

Kermack and McKendrick (1927) give a simple model of the spread of an infectious disease. Individuals move from being susceptible (S) to infected (I) to recovered (R).

In 1978, anonymous authors sent a note to the British Medical Journal reporting an influenza outbreak in a boarding school in the north of England (“Influenza in a boarding school” 1978). The chart below shows the solution of the SIR (Susceptible, Infected, Recovered) model with parameters which give roughly the results observed in the school.
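
As a rough stand-alone illustration of these dynamics (and not the LibBi model used below), here is a minimal Haskell sketch that forward-Euler-integrates the equations above. The values for beta and gamma are illustrative guesses, not fitted values; n = 763 is the population size usually quoted for the boarding-school data.

```
-- Minimal sketch: forward-Euler integration of the SIR equations.
-- The parameters below are illustrative guesses, not fitted values.
type State = (Double, Double, Double)  -- (S, I, R)

sirStep :: Double -> Double -> Double -> Double -> State -> State
sirStep beta gamma n h (s, i, r) =
  ( s - h * beta * s * i / n
  , i + h * (beta * s * i / n - gamma * i)
  , r + h * gamma * i )

main :: IO ()
main = do
  let beta  = 2.0   -- assumed infection rate (per day)
      gamma = 0.5   -- assumed recovery rate (per day)
      n     = 763   -- school population, with one initial infective
      h     = 0.1   -- Euler step size (days)
      traj  = iterate (sirStep beta gamma n h) (n - 1, 1, 0)
  -- print (day, (S, I, R)) once a day for two weeks
  mapM_ print [ (day, traj !! (day * 10)) | day <- [0 .. 14 :: Int] ]
```

A crude fixed-step solver like this is only good for eyeballing the epidemic curve; the LibBi runs below do the actual inference.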

LibBi Step 1

```
~/LibBi-stable/SIR-master $ ./init.sh
error: 'ncread' undefined near line 6 column 7
```

The README says this is optional, so we can skip over it. Still, it would be nice to fit the bridge weight function as described in Del Moral and Murray (2015).

The README does say that GPML is required, but since we don’t (yet) need to do this step, let’s move on.

```
~/LibBi-stable/SIR-master $ ./run.sh
Error: ./configure failed with return code 77.
See .SIR/build_openmp_cuda_single/configure.log and .SIR/build_openmp_cuda_single/config.log for details
```

It seems the example is configured to run on CUDA, and it is highly likely that my installation of LibBi was not set up to allow this. We can change config.conf from

```
--disable-assert --enable-single --enable-cuda --nthreads 2
```

to

```
--nthreads 4 --enable-sse --disable-assert
```

On to the next issue.

```
~/LibBi-stable/SIR-master $ ./run.sh
Error: ./configure failed with return code 1.
required QRUpdate library not found.
See .SIR/build_sse/configure.log and .SIR/build_sse/config.log for details
```

But QRUpdate is installed!

```
~/LibBi-stable/SIR-master $ brew info QRUpdate
homebrew/science/qrupdate: stable 1.1.2 (bottled)
http://sourceforge.net/projects/qrupdate/
/usr/local/Cellar/qrupdate/1.1.2 (3 files, 302.6K)
/usr/local/Cellar/qrupdate/1.1.2_2 (6 files, 336.3K)
  Poured from bottle
/usr/local/Cellar/qrupdate/1.1.2_3 (6 files, 337.3K) *
  Poured from bottle
From: https://github.com/Homebrew/homebrew-science/blob/master/qrupdate.rb
==> Dependencies
Required: veclibfort ✔
Optional: openblas ✔
==> Options
--with-openblas
    Build with openblas support
--without-check
    Skip build-time tests (not recommended)
```

Let’s look in the log as advised. It seems that a certain symbol cannot be found.

```
checking for dch1dn_ in -lqrupdate
```

Let’s try ourselves.

```
nm -g /usr/local/Cellar/qrupdate/1.1.2_3/lib/libqrupdate.a | grep dch1dn_
0000000000000000 T _dch1dn_
```

So the symbol is there! What gives? Let’s try setting one of the environment variables.

```
export LDFLAGS='-L/usr/local/lib'
```

Now we get further.

```
./run.sh
Error: ./configure failed with return code 1.
required NetCDF header not found.
See .SIR/build_sse/configure.log and .SIR/build_sse/config.log for details
```

So we just need to set another environment variable.

```
export CPPFLAGS='-I/usr/local/include/'
```

This is more mysterious.

```
./run.sh
Error: ./configure failed with return code 1.
required Boost header not found.
See .SIR/build_sse/configure.log and .SIR/build_sse/config.log for details
```

Let’s see what we have.

```
brew list | grep -i boost
```

Nothing! I recall having some problems with boost when trying to use a completely different package. So let’s install boost.

```
brew install boost
```

Now we get a different error.

```
./run.sh
Error: make failed with return code 2, see .SIR/build_sse/make.log for details
```

Fortunately, at some time in the past, sbfnk took pity on me and advised me here to use boost155, a step that should not be undertaken lightly.

```
/usr/local/Cellar/boost155/1.55.0_1: 10,036 files, 451.6M, built in 15 minutes 9 seconds
```

Even then I had to say

```
brew link --force boost155
```

Finally it runs.

```
./run.sh 2> out.txt
```

And it produces a lot of output.

```
wc -l out.txt
49999 out.txt
ls -ltrh results/posterior.nc
1.7G Apr 30 19:57 results/posterior.nc
```

Rather worryingly, every line of out.txt is of the form

```
1: -51.9191 -23.2045 nan beats -inf -inf -inf accept=0.5
```

nan beating -inf does not sound good.

Now we are in a position to analyse the results.

```
octave --path oct/ --eval "plot_and_print"
error: 'bi_plot_quantiles' undefined near line 23 column 5
```

I previously found an Octave package(?) called OctBi, so let’s create an .octaverc file that adds it to the path. We’ll also need to load the netcdf package that we previously installed.

addpath ("../OctBi-stable/inst") pkg load netcdf ~/LibBi-stable/SIR-master $ octave --path oct/ --eval "plot_and_print" octave --path oct/ --eval "plot_and_print" warning: division by zero warning: called from mean at line 117 column 7 read_hist_simulator at line 47 column 11 bi_read_hist at line 85 column 12 bi_hist at line 63 column 12 plot_and_print at line 56 column 5 warning: division by zero warning: division by zero warning: division by zero warning: division by zero warning: division by zero warning: print.m: fig2dev binary is not available. Some output formats are not available. warning: opengl_renderer: x/y/zdata should have the same dimensions. Not rendering. warning: opengl_renderer: x/y/zdata should have the same dimensions. Not rendering. warning: opengl_renderer: x/y/zdata should have the same dimensions. Not rendering. warning: opengl_renderer: x/y/zdata should have the same dimensions. Not rendering. warning: opengl_renderer: x/y/zdata should have the same dimensions. Not rendering. warning: opengl_renderer: x/y/zdata should have the same dimensions. Not rendering. warning: opengl_renderer: x/y/zdata should have the same dimensions. Not rendering. warning: opengl_renderer: x/y/zdata should have the same dimensions. Not rendering. warning: opengl_renderer: x/y/zdata should have the same dimensions. Not rendering. warning: opengl_renderer: x/y/zdata should have the same dimensions. Not rendering. warning: opengl_renderer: x/y/zdata should have the same dimensions. Not rendering. warning: opengl_renderer: x/y/zdata should have the same dimensions. Not rendering. warning: opengl_renderer: x/y/zdata should have the same dimensions. Not rendering. warning: opengl_renderer: x/y/zdata should have the same dimensions. Not rendering. warning: opengl_renderer: x/y/zdata should have the same dimensions. Not rendering. warning: opengl_renderer: x/y/zdata should have the same dimensions. Not rendering. warning: opengl_renderer: x/y/zdata should have the same dimensions. Not rendering. warning: opengl_renderer: x/y/zdata should have the same dimensions. Not rendering. warning: opengl_renderer: x/y/zdata should have the same dimensions. Not rendering. warning: opengl_renderer: x/y/zdata should have the same dimensions. Not rendering. warning: opengl_renderer: x/y/zdata should have the same dimensions. Not rendering. warning: opengl_renderer: x/y/zdata should have the same dimensions. Not rendering. warning: opengl_renderer: x/y/zdata should have the same dimensions. Not rendering. warning: opengl_renderer: x/y/zdata should have the same dimensions. Not rendering. sh: pdfcrop: command not found

I actually get a chart from this, so some kind of success.

This does not look like the chart in Del Moral and Murray (2015): the fitted number of infected patients looks a lot smoother, and the “rates” parameters also vary in a much smoother manner. For reasons I haven’t yet investigated, it looks like over-fitting. Here are the charts in the paper.

Bibliography

“Influenza in a boarding school.” 1978. British Medical Journal, March, 587.

Kermack, W. O., and A. G. McKendrick. 1927. “A Contribution to the Mathematical Theory of Epidemics.” Proceedings of the Royal Society of London Series A 115 (August): 700–721. doi:10.1098/rspa.1927.0118.

Del Moral, Pierre, and Lawrence M. Murray. 2015. “Sequential Monte Carlo with Highly Informative Observations.”


Categories: Offsite Blogs

Taking over cmdtheline

haskell-cafe - Sat, 04/30/2016 - 7:04pm
The package cmdtheline has not been updated since Apr 30, 2013, and it is broken with the current version of GHC; it needs to have its upper bound on transformers bumped. I tried contacting the author three months ago and three weeks ago to ask him to fix the package, but I received no response. I contacted him one more time one week ago to tell him that I intended to take over the package, and still I received no response. Thus, following the instructions on Hackage for taking over a package, I am officially stating my intent to take over this package so that I can fix it. Cheers, Greg
Categories: Offsite Discussion