News aggregator

Automatic differentiation (AD) with respect to list of matrices in Haskell

haskell-cafe - Sun, 05/08/2016 - 5:04pm
I am trying to understand how I can use Numeric.AD (automatic differentiation) in Haskell. I defined a simple matrix type and a scalar function taking an array and two matrices as arguments. I want to use AD to get the gradient of the scoring function with respect to both matrices, but I'm running into compilation problems. Here is the code:

-------------------------------

{-# LANGUAGE DeriveTraversable, DeriveFunctor, DeriveFoldable #-}
import Numeric.AD.Mode.Reverse as R
import Data.Traversable as T
import Data.Foldable as F

--- Non-linear function on "vectors"
logistic x = 1.0 / (1.0 + exp(-x))
phi v = map logistic v
phi' (x:xs) = x : (phi xs)

--- dot product
dot u v = foldr (+) 0 $ zipWith (*) u v

--- simple matrix type
data Matrix a = M [[a]] deriving (Eq,Show,Functor,F.Foldable,T.Traversable)

--- action of a matrix on a vector
mv _ [] = []
mv (M []) _ = []
mv (M m) v = (dot (head m) v) : (mv (M (tail m)) v)

--- two matrices
mbW1 = M $ [[1,0,0],[-1,5,1],[1,2,-3]]
mbW2 = M $ [[0,0,0],[1,3,-1],[-2,
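For reference, a minimal sketch of the intended usage (my example, not from the post): grad requires the function being differentiated to stay polymorphic in its number type, so the library can instantiate it at its own Reverse type.

import Numeric.AD.Mode.Reverse (grad)

-- a toy scoring function, polymorphic in the number type
score :: Num a => [a] -> a
score xs = sum (map (^ 2) xs)

main :: IO ()
main = print (grad score [1, 2, 3 :: Double])  -- prints [2.0,4.0,6.0]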
Categories: Offsite Discussion

The Gentoo Haskell Team: How to deal with portage blockers

Planet Haskell - Sun, 05/08/2016 - 3:27pm
TL;DR:
  • use --autounmask=n
  • use --backtrack=1000 (or more)
  • package.mask all outdated packages that cause conflicts (usually requires more iterations)
  • run world update
The problem

Occasionally (more frequently on Haskell packages) portage starts taking a long time only to tell you it was not able to figure out the upgrade path.

Some people suggest a "wipe-blockers-and-reinstall" solution. This post will try to explain how to actually upgrade (or find out why it's not possible) without destroying your system.

Real-world example

I'll take a real-world example from Gentoo's bugzilla: bug 579520, where the original upgrade error looked like this:

!!! Multiple package instances within a single package slot have been pulled
!!! into the dependency graph, resulting in a slot conflict:

x11-libs/gtk+:3

  (x11-libs/gtk+-3.18.7:3/3::gentoo, ebuild scheduled for merge) pulled in by
    (no parents that aren't satisfied by other packages in this slot)

  (x11-libs/gtk+-3.20.0:3/3::gnome, installed) pulled in by
    >=x11-libs/gtk+-3.19.12:3[introspection?] required by (gnome-base/nautilus-3.20.0:0/0::gnome, installed)
    >=x11-libs/gtk+-3.20.0:3[cups?] required by (gnome-base/gnome-core-libs-3.20.0:3.0/3.0::gnome, installed)
    >=x11-libs/gtk+-3.19.4:3[introspection?] required by (media-video/totem-3.20.0:0/0::gnome, ebuild scheduled for merge)
    >=x11-libs/gtk+-3.19.0:3[introspection?] required by (app-editors/gedit-3.20.0:0/0::gnome, ebuild scheduled for merge)
    >=x11-libs/gtk+-3.19.5:3 required by (gnome-base/dconf-editor-3.20.0:0/0::gnome, ebuild scheduled for merge)
    >=x11-libs/gtk+-3.19.6:3[introspection?] required by (x11-libs/gtksourceview-3.20.0:3.0/3::gnome, ebuild scheduled for merge)
    >=x11-libs/gtk+-3.19.3:3[introspection,X] required by (media-gfx/eog-3.20.0:1/1::gnome, ebuild scheduled for merge)
    >=x11-libs/gtk+-3.19.8:3[X,introspection?] required by (x11-wm/mutter-3.20.0:0/0::gnome, installed)
    >=x11-libs/gtk+-3.19.12:3[X,wayland?] required by (gnome-base/gnome-control-center-3.20.0:2/2::gnome, installed)
    >=x11-libs/gtk+-3.19.11:3[introspection?] required by (app-text/gspell-1.0.0:0/0::gnome, ebuild scheduled for merge)

x11-base/xorg-server:0

  (x11-base/xorg-server-1.18.3:0/1.18.3::gentoo, installed) pulled in by
    x11-base/xorg-server:0/1.18.3= required by (x11-drivers/xf86-video-nouveau-1.0.12:0/0::gentoo, installed)
    x11-base/xorg-server:0/1.18.3= required by (x11-drivers/xf86-video-fbdev-0.4.4:0/0::gentoo, installed)
    x11-base/xorg-server:0/1.18.3= required by (x11-drivers/xf86-input-evdev-2.10.1:0/0::gentoo, installed)

  (x11-base/xorg-server-1.18.2:0/1.18.2::gentoo, ebuild scheduled for merge) pulled in by
    x11-base/xorg-server:0/1.18.2= required by (x11-drivers/xf86-video-vesa-2.3.4:0/0::gentoo, installed)
    x11-base/xorg-server:0/1.18.2= required by (x11-drivers/xf86-input-synaptics-1.8.2:0/0::gentoo, installed)
    x11-base/xorg-server:0/1.18.2= required by (x11-drivers/xf86-input-mouse-1.9.1:0/0::gentoo, installed)

app-text/poppler:0

  (app-text/poppler-0.32.0:0/51::gentoo, ebuild scheduled for merge) pulled in by
    >=app-text/poppler-0.32:0/51=[cxx,jpeg,lcms,tiff,xpdf-headers(+)] required by (net-print/cups-filters-1.5.0:0/0::gentoo, installed)
    >=app-text/poppler-0.16:0/51=[cxx] required by (app-office/libreoffice-5.0.5.2:0/0::gentoo, installed)
    >=app-text/poppler-0.12.3-r3:0/51= required by (app-text/texlive-core-2014-r4:0/0::gentoo, installed)

  (app-text/poppler-0.42.0:0/59::gentoo, ebuild scheduled for merge) pulled in by
    >=app-text/poppler-0.33[cairo] required by (app-text/evince-3.20.0:0/evd3.4-evv3.3::gnome, ebuild scheduled for merge)

net-fs/samba:0

  (net-fs/samba-4.2.9:0/0::gentoo, installed) pulled in by
    (no parents that aren't satisfied by other packages in this slot)

  (net-fs/samba-3.6.25:0/0::gentoo, ebuild scheduled for merge) pulled in by
    net-fs/samba[smbclient] required by (media-sound/xmms2-0.8-r2:0/0::gentoo, ebuild scheduled for merge)

It may be possible to solve this problem by using package.mask to prevent one of
those packages from being selected. However, it is also possible that conflicting
dependencies exist such that they are impossible to satisfy simultaneously. If such
a conflict exists in the dependencies of two different packages, then those packages
can not be installed simultaneously. For more information, see MASKED PACKAGES
section in the emerge man page or refer to the Gentoo Handbook.

emerge: there are no ebuilds to satisfy ">=dev-libs/boost-1.55:0/1.57.0=".
(dependency required by "app-office/libreoffice-5.0.5.2::gentoo" [installed])
(dependency required by "@selected" [set])
(dependency required by "@world" [argument])

A lot of text! Let's trim it down to the essential detail first (AKA how to actually read it). I've dropped the "cause" of the conflicts from the previous listing and left only the problematic packages:

!!! Multiple package instances within a single package slot have been pulled
!!! into the dependency graph, resulting in a slot conflict:

x11-libs/gtk+:3
  (x11-libs/gtk+-3.18.7:3/3::gentoo, ebuild scheduled for merge) pulled in by
  (x11-libs/gtk+-3.20.0:3/3::gnome, installed) pulled in by

x11-base/xorg-server:0
  (x11-base/xorg-server-1.18.3:0/1.18.3::gentoo, installed) pulled in by
  (x11-base/xorg-server-1.18.2:0/1.18.2::gentoo, ebuild scheduled for merge) pulled in by

app-text/poppler:0
  (app-text/poppler-0.32.0:0/51::gentoo, ebuild scheduled for merge) pulled in by
  (app-text/poppler-0.42.0:0/59::gentoo, ebuild scheduled for merge) pulled in by

net-fs/samba:0
  (net-fs/samba-4.2.9:0/0::gentoo, installed) pulled in by
  (net-fs/samba-3.6.25:0/0::gentoo, ebuild scheduled for merge) pulled in by

emerge: there are no ebuilds to satisfy ">=dev-libs/boost-1.55:0/1.57.0=".

That is more manageable. We have 4 "conflicts" here and one "missing" package.

Note: all the listed requirements under "conflicts" (in the previous listing) suggest these are >= depends. Thus the dependencies themselves can't block the upgrade path, and the error message is misleading.

For us, this means that portage keeps the old versions of gtk+ (and the others) around for some unknown reason.

To get an idea of how to explore this situation, we need to somehow hide the outdated packages from portage and retry the update. The effect is the same as uninstalling and reinstalling a package, without actually doing it.

package.mask does exactly that. To make a package hidden for real we will also need to disable autounmask: --autounmask=n (the default is y).

Let's hide the outdated x11-libs/gtk+-3.18.7 package from portage:

# /etc/portage/package.mask
<x11-libs/gtk+-3.20.0:3

The blocker list became shorter, but still does not make sense:

x11-base/xorg-server:0
  (x11-base/xorg-server-1.18.2:0/1.18.2::gentoo, ebuild scheduled for merge) pulled in by
  (x11-base/xorg-server-1.18.3:0/1.18.3::gentoo, installed) pulled in by

app-text/poppler:0
  (app-text/poppler-0.32.0:0/51::gentoo, ebuild scheduled for merge) pulled in by
  (app-text/poppler-0.42.0:0/59::gentoo, ebuild scheduled for merge) pulled in by

Blocking more old stuff:

# /etc/portage/package.mask
<x11-libs/gtk+-3.20.0:3
<x11-base/xorg-server-1.18.3
<app-text/poppler-0.42.0

The output is:

[blocks B      ] <dev-util/gdbus-codegen-2.48.0 ("<dev-util/gdbus-codegen-2.48.0" is blocking dev-libs/glib-2.48.0)

 * Error: The above package list contains packages which cannot be
 * installed at the same time on the same system.

  (dev-libs/glib-2.48.0:2/2::gentoo, ebuild scheduled for merge) pulled in by
  (dev-util/gdbus-codegen-2.46.2:0/0::gentoo, ebuild scheduled for merge) pulled in by

That's our blocker! The stable <dev-util/gdbus-codegen-2.48.0 is blocking the unstable dev-libs/glib-2.48.0.

The solution is to ~arch keyword dev-util/gdbus-codegen-2.48.0:

# /etc/portage/package.accept_keywords
dev-util/gdbus-codegen

And run world update.

Simple!


Categories: Offsite Blogs

Applying a Constraint to a Proxy'd type

haskell-cafe - Sun, 05/08/2016 - 1:15pm
I'm not sure if it's possible (and I have an alternate method of doing this so it isn't strictly speaking _necessary_, but it would make the code cleaner if I can), but I would like to convert a "value" of kind (k -> Constraint) into one of kind (Proxy k -> Constraint). I can achieve this using a type family (note: I've also made this polymorphic in terms of the Proxy type used, but that isn't necessary), which has the problem that it can't be partially applied (which I also need). This can be wrapped up in a typeclass. The problem with this is: if I try to use it somewhere that - having already a type of kind (c :: * -> Constraint) from the environment - expects a function of type (forall a. (c a) => a -> b) with a fixed "b" value, then trying to use something like this function prevents it from working (for the purposes of argument, assume the old specification of Num that had an Eq superclass; this is the way I can produce the smallest example). The resulting error message is: "Could not de
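The post's code samples were stripped in aggregation; as a rough sketch of the general shape being described (my reconstruction, not the poster's definitions, and the name OnProxy is mine), the type-family version might look like:

{-# LANGUAGE ConstraintKinds, DataKinds, PolyKinds, TypeFamilies #-}

import GHC.Exts (Constraint)
import Data.Proxy (Proxy)

-- Turn a constraint on a type 'a' into a constraint on the promoted
-- proxy ('Proxy :: Proxy a); as the post notes, a bare type family
-- like this cannot be partially applied.
type family OnProxy (c :: k -> Constraint) (p :: Proxy (a :: k)) :: Constraint where
  OnProxy c (p :: Proxy a) = c a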
Categories: Offsite Discussion

Derek Elkins: Constructivist Motto

Planet Haskell - Sat, 05/07/2016 - 12:12pm

I don’t believe classical logic is false; I just believe that it is not true.

Categories: Offsite Blogs

I have a question

haskell-cafe - Sat, 05/07/2016 - 1:29am
Hi, I don't want to receive this mail anymore. What should I do? Thanks
Categories: Offsite Discussion

The maths behind the Fastest Fibb In The West.

haskell-cafe - Sat, 05/07/2016 - 12:46am
I've been working on a project that needs a good Fibonacci generator, and I'm to the point where I can now improve upon this one: https://wiki.haskell.org/The_Fibonacci_sequence#Fastest_Fib_in_the_West thanks to this guy: https://groups.google.com/forum/#!topic/haskell-cafe/HUgbAUCvCp4 He suggested breaking up a guard into two different functions, which I can do, but I don't know what to call them because I don't know why the operations are different. I'm referring to this section:

fib' (f, g) p
  | p         = (f*(f+2*g), f^2 + g^2)
  | otherwise = (f^2+g^2, g*(2*f-g))

I'd like to know the reason why each guard does two entirely different things, so I know what to call the functions when I separate them out.
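For what it's worth, a sketch of an answer (mine, not from the thread): the pair (f, g) holds (F(k+1), F(k)), and the two guards are the two fast-doubling identities F(2k+1) = F(k+1)^2 + F(k)^2 and F(2k) = F(k)(2F(k+1) - F(k)), selected by the next bit of n. Split out, the helpers might look like:

-- consume a 1 bit of n: (F(k+1), F(k)) -> (F(2k+2), F(2k+1))
doubleAndAdvance :: (Integer, Integer) -> (Integer, Integer)
doubleAndAdvance (f, g) = (f * (f + 2 * g), f * f + g * g)

-- consume a 0 bit of n: (F(k+1), F(k)) -> (F(2k+1), F(2k))
double :: (Integer, Integer) -> (Integer, Integer)
double (f, g) = (f * f + g * g, g * (2 * f - g))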
Categories: Offsite Discussion

Storing big datasets

haskell-cafe - Fri, 05/06/2016 - 9:28pm
Hi! I'm using the ACID package as the main database -- it's simple and... ACID (which is cool of course). Now I need to store up to ~30GB of data in a single data structure (!) that will be constantly updated (some kind of a huge tree set). Here's the question -- how do I operate on that big a structure?
  1. It doesn't even fit in RAM.
  2. It should be updated atomically and frequently (every 10 seconds, up to 500 elements out of 10^7).
  3. What structures should I use? I'd like to store up to 10^6~10^7 simple elements there, which will be gigabytes of data, so it seems I can't use Data.Set.
Thanks for any ideas!
Categories: Offsite Discussion

Code critique request

haskell-cafe - Fri, 05/06/2016 - 7:29pm
I've got this fizzbuzz project I am using for a blog series, among other things. In this version, the fizzbuzz function is fed from a Fibonacci generator. I'm particularly concerned with the efficiency of the Fibonacci generator, but all scrutiny is welcomed. I'll include a link to the entire project, but below are the parts I think would be sufficient to spot trouble with how I am generating Fibonacci numbers.
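(The post's own snippets were stripped in aggregation. For reference, my addition, not the poster's code: the canonical lazily memoized generator, which produces each Fibonacci number with a single addition by reusing the two previous elements of the list:)

fibs :: [Integer]
fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

-- take 10 fibs == [0,1,1,2,3,5,8,13,21,34]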
Categories: Offsite Discussion

Chris Smith: Reminder: Summer of Haskell Proposals Due Today

Planet Haskell - Fri, 05/06/2016 - 3:57pm

For anyone planning to participate in Summer of Haskell this summer, remember that proposals are due today.  Good luck to all!


Categories: Offsite Blogs

GHC.Prim: Resizable Multidimensional array

haskell-cafe - Fri, 05/06/2016 - 1:52pm
I'm trying to create a resizable multidimensional array using the primitive array stuff in GHC.Prim. I see that the only resizable array is the ByteArray#, but the only array that supports multidimensionality is the ArrayArray# as far as I can tell. I have groups of 4 Double#s that I want to have a resizable array of, so only really the "first dimension" needs to be resizable. I was thinking if there was a way to get the pointer of a SmallArray# as an Addr# I could use the ability of ByteArray#s to store Addr#s. The other option I was considering was to simply have a mapping similar to the ones provided by Data.Ix that maps pairs of numbers to a single index, and in that way use a single ByteArray# to store everything. However, because this will be a fairly large array it seems like it is a much better idea to avoid allocating and reallocating that humongous chunk of memory when I can keep it spread out in smaller chunks by storing pointers to smaller arrays in the larger array. But maybe that isn't an imp
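A sketch of the Data.Ix-style flattening the post mentions (my names, assuming the post's fixed group size of 4), which lets one flat resizable array stand in for the two-dimensional layout:

-- row-major position of slot 'j' within group 'i'
flatIndex :: Int -> Int -> Int
flatIndex i j = i * 4 + j

-- recover (group, slot) from a flat position
unflatten :: Int -> (Int, Int)
unflatten n = n `divMod` 4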
Categories: Offsite Discussion

Trouble building library with cabal-install 1.24

haskell-cafe - Fri, 05/06/2016 - 10:40am
I'm trying to use cabal-install 1.24. To start fresh, I removed ~/.cabal and ~/.ghc, and then used the old cabal(-install) executable to do cabal update and cabal install cabal-install. Unfortunately, it seems that cabal-install is building everything except my library!

$ cabal --version
cabal-install version 1.24.0.0
compiled using version 1.24.0.0 of the Cabal library
$ ghc --version
The Glorious Glasgow Haskell Compilation System, version 7.10.3
$ cat cabal.project.local
tests: True
program-options
  ghc-options: -Werror
$ cabal new-build --enable-test -v
Project settings changed, reconfiguring...
creating /home/eamybut/néal/creatur/dist-newstyle
creating /home/eamybut/néal/creatur/dist-newstyle/cache
creating /home/eamybut/.cabal/store/ghc-7.10.3
Reading installed packages...
/usr/local/ghc-7.10.3/bin/ghc-pkg dump '--package-db=/home/eamybut/.cabal/store/ghc-7.10.3/package.db' -v0
/usr/local/ghc-7.10.3/bin/ghc --print-libdir
-Werror
In order, the following will be built:
cond-0.4.1.1 (lib:cond)
Categories: Offsite Discussion

RFC: Removing the `-hb` profiling option

glasgow-user - Fri, 05/06/2016 - 10:04am
Hi all, After a bit of rather tedious and frustrating debugging I came to the realisation that the code for the `-hb` profiling option is not thread safe. See https://ghc.haskell.org/trac/ghc/ticket/12019 This gives us an opportunity to simply remove it instead of fixing it. If there is anyone that thinks this feature is really useful (ie more useful than the other profiling modes) then I'm willing to fix it. But if no one would miss it, I'd much rather remove it. Thoughts? Erik
Categories: Offsite Discussion

What happened to -optdep option?

glasgow-user - Thu, 05/05/2016 - 8:19pm
Hello! I'm using GHC 7.10.3 after an upgrade. I have the following in my Makefile:

depend_mod :: lib/abh
	ghc -M $(CFLAGS_GHC_0) -dep-makefile -optdepbuild/depend.tmp -dep-suffix "p_" \
	  $(foreach m, $(MOD_HS) $(PROG_HS), src/$(m).hs) \
	  $(foreach m, $(MOD_CHS) $(PROG_CHS), build/$(m).hs)
	...

This used to work. The previous GHC version was 7.8. Now GHC complains:

-optdepbuild/depend.tmp: openBinaryFile: does not exist (No such file or directory)

This looks like the -optdep argument isn't understood. There is no mention of it in the new/7.10.3 user manual. I don't know any longer what this "-optdep" argument does, and it isn't in the documentation any longer. I googled the GHC web site. There it is mentioned that -optdep was deprecated, but almost all of the hits are very outdated (like ten years old). Can someone tell me how I should change my Makefile..? Greetings, Volker Wysk
Categories: Offsite Discussion

Douglas M. Auclair (geophf): April 2016 1HaskellADay 1Liners

Planet Haskell - Thu, 05/05/2016 - 3:39pm
  • April 15th, 2016:
    (\(intensf, list) -> map (second intensf) list) (insensifierFn, assocList)
    Point-free-itize this lambda
    • obadz @obadzz uncurry $ map . second
  • April 15th, 2016:
    foldr (\card -> let c = color card in mappend c *** ((idx card, c):))
           (mempty, []) cards
    Point-free-itize and de-let lambda
    • Gautier DI FOLCO @gautier_difolco foldr (uncurry (***) . (uncurry fmap . fmap (((,) =<< mappend) . color) . ((,) =<< ((:) .) . (,) . idx))) (mempty, []) cards
    • me: foldr (uncurry (***) . (mappend . snd &&& (:)) . (idx &&& color)) (mempty, [])
  • April 15th, 2016: map (\(idx, color) -> C idx (cell bb idx) color) assoclist
    point-free-itize the lambda in the map function.
    • Eyal Lotem @EyalL uncurry (C <*> cell bb) ?
      Not in front of a type checker, can't check myself :)
  • April 13th, 2016:
    point-free-itize lambda term in:
    uncurry (foldr (\a -> min a *** max a)) . ((head &&& head) &&& id)
    • obadz @obadzz liftA2 (***) min max
  • April 12th, 2016:
    minmax :: Ord a => [a] -> (a, a)
    minmax [1..9] = (1,9)
    in linear time.
    minmax = minimum &&& maximum
fails as it's 2x too slow. (A single-pass version appears after this list.)
  • April 12th, 2016:
    You have (a,b) (c,d)
    You need (a*c, b*d)

    Well, we don't have all day! (Actually, we do) Get to it!
    Pointless; I MEAN POINT-FREE pls
    • lotz@Haskell㌠ @lotz84_ ((uncurry . flip . curry $ id).) . ((uncurry.) . uncurry . (flip.) . flip . flip) ((((flip . (flip.)) $ ((,).) . (*)).) . (*))
    • Olivier Iffrig @oiffrig Point-free but star-full: uncurry (***) . ((*) *** (*))
    • bazzargh @bazzargh curry((uncurry (*)) *** (uncurry (*)))
    • bazzargh @bazzargh knew there had to be a bimap answer, eventually got: curry ((bimap <$> id <*> id) (uncurry (*)))
    • obadz @obadzz join biliftA2 (*)
    • Андреев Кирилл @nonaem00 (((product *** product) . unzip) .) . (flip (:)) . (:[]) ... I clearly need more coffee this morning.
  • April 10th, 2016:

    data BBx = Bx (Int, Int) Pt2D Pt2D

    define:
    mkBB :: [a] -> Pt2D -> BBx
    mkBB xs extents = Bx (0, length xs) (0,0) extents
    points-free
    • Gautier DI FOLCO @gautier_difolco
      -- LANGUAGE TupleSections
      data BBx = Bx (Int, Int) Pt2D Pt2D
      mkBB :: [a] -> Pt2D -> BBx
      mkBB = flip Bx (0,0) . (0, ) . length
    • Thomas D @tthomasdd mkBB = flip Bx (0,0) . (,) 0 . length
  • April 7th, 2016:
    type MList a n = [(a, n)]
    partSz :: Ord n => n -> MList a n -> (MList a n, MList a n)
    partSz µ = partition ((µ >) . snd)
    Define points-free
    • partSz = partition . (. snd) . (>)  -- via @gautier_difolco
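On the April 12th minmax puzzle above, a single-traversal version (my sketch, not one of the submitted answers) that meets the linear-time requirement without walking the list twice:

import Data.List (foldl')

minmax :: Ord a => [a] -> (a, a)
minmax (x:xs) = foldl' step (x, x) xs
  where step (lo, hi) y = (min lo y, max hi y)
minmax []     = error "minmax: empty list"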
Categories: Offsite Blogs

Derek Elkins: The Mistake Everyone Makes with KnockoutJS

Planet Haskell - Thu, 05/05/2016 - 2:38pm
Introduction

Knockout is a nice JavaScript library for making values that automatically update when any of their “dependencies” update. Those dependencies can form an arbitrary directed acyclic graph. Many people seem to think of it as “yet another” templating library, but the core idea which is useful far beyond “templating” is the notion of observable values. One nice aspect is that it is a library and not a framework so you can use it as little or as much as you want and you can integrate it with other libraries and frameworks.

At any rate, this article is more geared toward those who have already decided on using Knockout or a library (in any language) offering similar capabilities. I strongly suspect the issues and solutions I’ll discuss apply to all similar sorts of libraries. While I’ll focus on one particular example, the ideas behind it apply generally. This example, admittedly, is one that almost anyone will implement, and in my experience will do it incorrectly the first time and won’t realize the problem until later.

The Problem

When doing any front-end work, before long there will be a requirement to support “multi-select” of something. Of course, you want the standard select/deselect all functionality and for it to work correctly, and of course you want to do something with the items you’ve selected. Here’s a very simple example:

[Embedded interactive demo: a table of items with per-row checkboxes, a select-all checkbox in the header, a "Number selected" counter, and an Add button.]

Here, the number selected is an overly simple example of using the selected items. More realistically, the selected items will trigger other items to show up and/or trigger AJAX requests to update the data or populate other data. The HTML for this example is completely straightforward.

<div id="#badExample"> Number selected: <span data-bind="text: $data.numberSelected()"></span> <table> <tr><th><input type="checkbox" data-bind="checked: $data.allSelected"/></th><th>Item</th></tr> <!-- ko foreach: { data: $data.items(), as: '$item' } --> <tr><td><input type="checkbox" data-bind="checked: $data.selected"/></td><td data-bind="text: 'Item number: '+$data.body"></td></tr> <!-- /ko --> <tr><td><button data-bind="click: function() { $data.add(); }">Add</button></td></tr> </table> </div>

The way nearly everyone (including me) first thinks to implement this is by adding a selected observable to each item and then having allSelected depend on all of the selected observables. Since we also want to write to allSelected to change the state of the selected observables, we use a writable computed observable. This computed observable will loop through all the items and check whether they are all set to determine its state. When it is updated, it will loop through all the selected observables and set them to the appropriate state. Here's the full code listing.

var badViewModel = {
    counter: 0,
    items: ko.observableArray()
};

badViewModel.allSelected = ko.computed({
    read: function() {
        var items = badViewModel.items();
        var allSelected = true;
        // Need to make sure we depend on each item, so don't break out of loop early
        for(var i = 0; i < items.length; i++) {
            allSelected = allSelected && items[i].selected();
        }
        return allSelected;
    },
    write: function(newValue) {
        var items = badViewModel.items();
        for(var i = 0; i < items.length; i++) {
            items[i].selected(newValue);
        }
    }
});

badViewModel.numberSelected = ko.computed(function() {
    var count = 0;
    var items = badViewModel.items();
    for(var i = 0; i < items.length; i++) {
        if(items[i].selected()) count++;
    }
    return count;
});

badViewModel.add = function() {
    badViewModel.items.push({
        body: badViewModel.counter++,
        selected: ko.observable(false)
    });
};

ko.applyBindings(badViewModel, document.getElementById('#badExample'));

This should be relatively straightforward, and it works, so what’s the problem? The problem can be seen in numberSelected (and it also comes up with allSelected which I’ll get to momentarily). numberSelected depends on each selected observable and so it will be fired each time each one updates. That means if you have 100 items, and you use the select all checkbox, numberSelected will be called 100 times. For this example, that doesn’t really matter. For a more realistic example than numberSelected, this may mean rendering one, then two, then three, … then 100 HTML fragments or making 100 AJAX requests. In fact, this same behavior is present in allSelected. When it is written, as it’s writing to the selected observables, it is also triggering itself.

So the problem is updating allSelected or numberSelected can’t be done all at once, or to use database terminology, it can’t be updated atomically. One possible solution in newer versions of Knockout is to use deferredUpdates or, what I did back in the much earlier versions of Knockout, abuse the rate limiting features. The problem with this solution is that it makes updates asynchronous. If you’ve written your code to not care whether it was called synchronously or asynchronously, then this will work fine. If you haven’t, doing this throws you into a world of shared state concurrency and race conditions. In this case, this solution is far worse than the disease.

The Solution

So, what’s the alternative? We want to update all selected items atomically; we can atomically update a single observable; so we’ll put all selected items into a single observable. Now an item determines if it is selected by checking whether it is in the collection of selected items. More abstractly, we make our observables more coarse-grained, and we have a bunch of small computed observables depend on a large observable instead of a large computed observable depending on a bunch of small observables as we had in the previous code. Here’s an example using the exact same HTML and presenting the same overt behavior.

[Embedded interactive demo: the same UI as above, backed by the coarse-grained view model shown below.]

And here’s the code behind this second example:

var goodViewModel = {
    counter: 0,
    selectedItems: ko.observableArray(),
    items: ko.observableArray()
};

goodViewModel.allSelected = ko.computed({
    read: function() {
        return goodViewModel.items().length === goodViewModel.selectedItems().length;
    },
    write: function(newValue) {
        if(newValue) {
            goodViewModel.selectedItems(goodViewModel.items().slice(0)); // Need a copy!
        } else {
            goodViewModel.selectedItems.removeAll();
        }
    }
});

goodViewModel.numberSelected = ko.computed(function() {
    return goodViewModel.selectedItems().length;
});

goodViewModel.add = function() {
    var item = { body: goodViewModel.counter++ };
    item.selected = ko.computed({
        read: function() {
            return goodViewModel.selectedItems.indexOf(item) > -1;
        },
        write: function(newValue) {
            if(newValue) {
                goodViewModel.selectedItems.push(item);
            } else {
                goodViewModel.selectedItems.remove(item);
            }
        }
    });
    goodViewModel.items.push(item);
};

ko.applyBindings(goodViewModel, document.getElementById('#goodExample'));

One thing to note is that setting allSelected and numberSelected are now both simple operations. A write to an observable only triggers a constant number of writes to other observables. In fact, there are only two (non-computed) observables. On the other hand, reading the selected observable is more expensive. Toggling all items has quadratic complexity. In fact, it had quadratic complexity before due to the feedback. However, unlike the previous code, this also has quadratic complexity when any individual item is toggled. Unlike the previous code, though, this is simply due to a poor choice of data structure. Equipping each item with an "ID" field and using an object as a hash map would reduce the complexity to linear. In practice, for this sort of scenario, it tends not to make a big difference. Also, Knockout won't trigger dependents if the value doesn't change, so there's no risk of the extra work propagating into still more extra work. Nevertheless, while I endorse this solution for this particular problem, in general making finer grained observables can help limit the scope of changes so unnecessary work isn't done.

Still, the real concern and benefit of this latter approach isn’t the asymptotic complexity of the operations, but the atomicity of the operations. In the second solution, every update is atomic. There are no intermediate states on the way to a final state. This means that dependents, represented by numberSelected but which are realistically much more complicated, don’t get triggered excessively and don’t need to “compensate” for unintended intermediate values.

Epilogue

We could take the coarse-graining to its logical conclusion and have the view model for an application be a single observable holding an object representing the entire view model (and containing no observables of its own). Taking this approach actually does have a lot of benefits, albeit there is little reason to use Knockout at that point. Instead this starts to lead to things like Facebook’s Flux pattern and the pattern perhaps most clearly articulated by Cycle JS.

<script lang="text/javascript" src="http://ajax.aspnetcdn.com/ajax/knockout/knockout-3.3.0.js"></script> <script lang="text/javascript"> // Bad View Model var badViewModel = { counter: 0, items: ko.observableArray() }; badViewModel.allSelected = ko.computed({ read: function() { var items = badViewModel.items(); var allSelected = true; for(var i = 0; i < items.length; i++) { // Need to make sure we depend on each item, so don't break out of loop early allSelected = allSelected && items[i].selected(); } return allSelected; }, write: function(newValue) { var items = badViewModel.items(); for(var i = 0; i < items.length; i++) { items[i].selected(newValue); } } }); badViewModel.numberSelected = ko.computed(function() { var count = 0; var items = badViewModel.items(); for(var i = 0; i < items.length; i++) { if(items[i].selected()) count++; } return count; }); badViewModel.add = function() { badViewModel.items.push({ body: badViewModel.counter++, selected: ko.observable(false) }); }; // Good View Model var goodViewModel = { counter: 0, selectedItems: ko.observableArray(), items: ko.observableArray() }; goodViewModel.allSelected = ko.computed({ read: function() { return goodViewModel.items().length === goodViewModel.selectedItems().length; }, write: function(newValue) { if(newValue) { goodViewModel.selectedItems(goodViewModel.items().slice(0)); // Need a copy! } else { goodViewModel.selectedItems.removeAll(); } } }); goodViewModel.numberSelected = ko.computed(function() { return goodViewModel.selectedItems().length; }); goodViewModel.add = function() { var item = { body: goodViewModel.counter++ } item.selected = ko.computed({ read: function() { return goodViewModel.selectedItems.indexOf(item) > -1; }, write: function(newValue) { if(newValue) { goodViewModel.selectedItems.push(item); } else { goodViewModel.selectedItems.remove(item); } } }); goodViewModel.items.push(item); }; ko.applyBindings(badViewModel, document.getElementById('#badExample')); ko.applyBindings(goodViewModel, document.getElementById('#goodExample')); </script>
Categories: Offsite Blogs

NFM 2016 - Call for participation

General haskell list - Thu, 05/05/2016 - 2:19pm
********************************************************************
                      CALL FOR PARTICIPATION

            The 8th NASA Formal Methods Symposium

                    June 7 - June 9, 2016
                  McNamara Alumni Center
                  University of Minnesota
                    Minneapolis, MN
            http://crisys.cs.umn.edu/nfm2016
********************************************************************

REGISTRATION ... is FREE!

All interested individuals, including non-US citizens, are welcome to attend. All participants must register but there is no registration fee. Please register online at http://crisys.cs.umn.edu/nfm2016/REGISTRATION

We strongly encourage participants to register early and reserve accommodations. A block of hotel rooms is reserved at The Commons Hotel until May 7, 2016.

THEME OF THE SYMPOSIUM

The NASA Formal Methods Symposium is a forum to foster collaboration between theoreticians and practitioners from NASA, academia, and the
Categories: Incoming News

Disciple/DDC: DDC 0.4.2 -- the "almost useful" release

Planet Haskell - Thu, 05/05/2016 - 6:22am
DDC 0.4.2 was pushed to Hackage a few days ago. Running "cabal update; cabal install ddc-tools" should build it. You'll need a version of the LLVM tools in your path. LLVM 3.4 - 3.8 (the current release) should work.

Code generation and runtime system support for higher order functions is finished. The type checker also now automatically inserts the run and box casts which were mentioned a few posts ago. Release notes are on the github page.

The programs we can compile are starting to look "almost useful". For example, here is some code from the AlmostPrimes example, which I adapted from Rosetta Code:


-- | A stream of k-almost-prime numbers.
kPrimes (k: Nat#): Stream Nat# Nat#
 = sfilter (isKPrime k) (senumFrom 2)


-- | Check if some number is k-almost-prime.
isKPrime (k n: Nat#): Bool#
 | k == 1
 = isPrime n

 | otherwise
 = sany $ smap (isKPrime (k - 1))
        $ smap fst
        $ sfilter (eq 0 ∘ snd)
        $ smap (divMod n)
        $ stakeWhile (λx: Nat#. x < n)
        $ primes 2


main (_: Unit): S Console Unit
 = do   forS (enumFromTo 1 5)
             (λk: Nat#.
              do write $ "k = " % showNat k % ": "
                 forS (listOfStream $ stake 10 $ kPrimes k)
                      (λx: Nat#. write $ showNat x % " ")
                 write "\n")

The full example code is here

Note that Nat# is the type of a primitive natural number. The # just means "primitive" which is not the same as "unboxed" as per GHC. The hashes will go away once I implement type synonyms and can define a nicer synonym. The example invokes some standard stream combinators, smap, sfilter and so on. The forS function is like mapM, but using the Disciple effect system instead of monads. The type of forS is:
forS : List a -> (a -> S e Unit) -> S e Unit
S e Unit is the type of a suspended computation with effect e which returns a value of type Unit when evaluated. I'm calling this the "almost useful" release because we still need to finish the garbage collector, as well as desugaring for nested patterns, before I could imagine writing anything substantial in it. Late last year Jacob Stanley was working on the transform to insert GC slot handling into core programs, but the runtime support still needs to be implemented. Current work is to finish support for type synonyms, then I'll do either the GC or nested patterns, depending on which looks like the lowest hanging fruit.
Categories: Offsite Blogs

Is Haskell IO inherently slower than C?

haskell-cafe - Thu, 05/05/2016 - 2:05am
I'm trying to write a program in Haskell that writes a large file at the end, and it seems like that output alone is taking way longer than it should. I don't understand why Haskell shouldn't be able to write data as quickly as C, so I wrote two test files:
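The post was truncated before the test files; for context, a sketch of the usual fix (mine, not the poster's code): String output goes a character at a time through a lazy list, whereas Data.ByteString.Builder fills large buffers, which typically brings Haskell output within range of C:

import qualified Data.ByteString.Builder as B
import Data.Monoid ((<>))
import System.IO

main :: IO ()
main = withFile "out.txt" WriteMode $ \h -> do
    hSetBinaryMode h True
    hSetBuffering h (BlockBuffering Nothing)
    -- a million numbered lines, assembled in buffered chunks
    B.hPutBuilder h $ mconcat
        [ B.intDec i <> B.char7 '\n' | i <- [1 .. 1000000 :: Int] ]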
Categories: Offsite Discussion

Cabal confusion. Modules not being exposed still accessible for testing.

haskell-cafe - Thu, 05/05/2016 - 12:18am
I use reactive-banana <https://github.com/HeinrichApfelmus/reactive-banana/blob/master/reactive-banana/reactive-banana.cabal> as an example. My question is: how can Reactive.Banana.Internal.Combinators be testable from Reactive.Banana.Test.Plumbing? My understanding was that only modules named in the exposed-modules stanza could be imported. Could someone clear up my misunderstanding?
Categories: Offsite Discussion

FP Complete: Stack Security GnuPG Keys

Planet Haskell - Wed, 05/04/2016 - 10:50pm
Introduction

We are introducing a method of verifying Haskell packages with OpenPGP signatures. The initial code to sign your packages has been included in Stack as an experimental feature for some time. We are going to be improving it and including verification options soon. However, we need signatures from package authors before verification is useful.

The first step in submitting package signatures is to create a secure OpenPGP key for yourself and publish it. This post will walk you through creating an offline master OpenPGP key-set with GnuPG.

If you've never used GPG before and need to generate a secure set of keys, then continue reading.

If you already have GPG keys that you are happy with, continue to 'Signing Packages with Stack' towards the bottom of this article.

GPG Keys

Choosing a version of GPG

There are 3 main flavors of GnuPG:

  • 1.4 (AKA "gpg") - Old - Super stable (11 years old) & limited features
  • 2.0 (AKA "gpg2") - Stable - Most people use this for personal encryption
  • 2.1 (AKA "gpg2") - Modern - Latest features: ECC support, Tor, TOFU, etc

The GnuPG project recommends using the stable GPG 2 release (2.0). I've included instructions for using GPG 1.4, but I don't recommend using such an old version unless you have no choice. I've also included instructions for GnuPG 2.1 because I use it personally and find the new features useful.

  • Windows - https://www.gpg4win.org/download.html
  • OS X
    • GPG 2.0 is available as part of GPG Suite which is an easy download from https://gpgtools.org (recommended). (Bonus: GPG Suite adds GPG e-mail for the Apple Mail app). (Note: If you install GPG Suite click "Cancel" when prompted to generate a key through their GUI. We'll go through this step below at the command line.)
    • GnuPG 2.0 is available through Homebrew as well if you prefer.
  • Linux/BSD - GnuPG 2.x is available as a package for your favorite Linux flavor. (Bonus: Add Enigmail while you are at it for easy GPG e-mail.)
Offline machine for GPG key management

Ideally your offline keys will never be on a machine connected to the outside world. The offline box will be the place where you keep your master key, manage your keys & sign other keys. This assures that you can always revoke keys and recover if your online machine(s) are somehow compromised.

To have an offline box you'll need two "machines". These can just be the same machine if you utilize a "live" Linux USB/DVD boot drive and a USB stick for the "sneaker net".

We'll follow the same procedure to generate your keys even if you don't want to use an offline machine to generate them. Using an offline master and keeping secure backups is highly recommended, though. It's the least hassle for everyone else in your "web of trust" in the event that keys are compromised.

With an offline master key, if your laptop is compromised, you can just revoke the compromised sub-keys, create new sub-keys and publish them. If you get into the situation where your master key is compromised, you will have to revoke all your keys, approach people about your new keys & go through the process of reestablishing trust with people in your web-of-trust.

Configuration

Setup $GNUPGHOME directory

umask 077
export GNUPGHOME=$HOME/.gnupg ; # default location but env var rules
mkdir -p $GNUPGHOME

Install the secure key-server certificate (Ubuntu/Debian):

mkdir -p /usr/local/share/ca-certificates/
curl -s https://sks-keyservers.net/sks-keyservers.netCA.pem \
  | sudo tee /usr/local/share/ca-certificates/sks-keyservers.netCA.crt
sudo update-ca-certificates

Put some or all of the following in your $GNUPGHOME/gpg.conf file: (taken from the riseup.net best gpg practices page)

no-emit-version
no-comments
keyid-format 0xlong
with-fingerprint
list-options show-uid-validity
verify-options show-uid-validity
use-agent
keyserver hkps://hkps.pool.sks-keyservers.net
# or less secure: keyserver hkp://pool.sks-keyservers.net
keyserver-options no-honor-keyserver-url
keyserver-options include-revoked
personal-cipher-preferences AES256 AES192 AES CAST5
personal-digest-preferences SHA512 SHA384 SHA256 SHA224
cert-digest-algo SHA512
default-preference-list SHA512 SHA384 SHA256 SHA224 AES256 AES192 AES CAST5 ZLIB BZIP2 ZIP Uncompressed

Key Generation

Master Key

Public and Secret

  1. Generate Key

    GPG 1.4

    gpg --expert --gen-key

    GPG 2.0

    gpg2 --expert --gen-key

    GPG 2.1

    gpg2 --expert --full-gen-key
    • Pick the "RSA" and "set your own capabilities" option
    • Turn off everything until it just says "Certify" then continue
    • Select the bit count. I pick 4000 for RSA because it's big but it's also not a power of 2 (like everyone else picks.)
    • Select the expiration period. I picked 1 year.
    • Pick a good pass-phrase that you can remember but would take a computer a long time to guess.
  2. Backup Your Keys

    GPG 1.4

gpg --armor --export > $GNUPGHOME/public.asc
gpg --armor --export-secret-key > $GNUPGHOME/secret.asc

    GPG 2.0 or 2.1

gpg2 --armor --export > $GNUPGHOME/public.asc
gpg2 --armor --export-secret-key > $GNUPGHOME/secret.asc
  3. Try Recovering Your Keys

    Delete your public and secret key and re-import them.

    GPG 1.4

gpg --delete-secret-key <KEYID>
gpg --delete-key <KEYID>
gpg --import $GNUPGHOME/public.asc $GNUPGHOME/secret.asc
gpg --expert --edit-key <KEYID>

    GPG 2.0 & 2.1

gpg2 --delete-secret-key <KEYID>
gpg2 --delete-key <KEYID>
gpg2 --import $GNUPGHOME/public.asc $GNUPGHOME/secret.asc
gpg2 --expert --edit-key <KEYID>
    • Type trust [Return] to give your self "ultimate" trust.
    • Type save [Return] to save your changes to the key.
Sub-keys
  1. Generate Keys

    Start by editing your key again.

    GPG 1.4

    gpg --expert --edit-key <KEYID>

    GPG 2.0 or 2.1

    gpg2 --expert --edit-key <KEYID>

    Create a Signing Sub-key:

    • Type addkey [Return]
    • Pick the "RSA" and "set your own capabilities" option
    • Turn off everything until it just says "Sign" then continue

    Create an Encryption Sub-key:

    • Type addkey [Return]
    • Pick the "RSA" and "set your own capabilities" option
    • Turn off everything until it just says "Encrypt" then continue

    Create an Authentication Sub-key:

    • Type addkey [Return]
    • Pick the "RSA" and "set your own capabilities" option
    • Add the "Authenticate" capability then continue
    • Turn off everything until it just says "Authenticate" then continue
    • Type save [Return] to save and exit

    Now try encrypting and signing stuff with your keys.

    GPG 1.4

echo 'hello' | gpg --armor --clearsign | gpg --verify
echo 'sekrats!' | gpg --armor --encrypt | gpg --decrypt

    GPG 2.0 or 2.1

echo 'hello' | gpg2 --armor --clearsign | gpg2 --verify
echo 'sekrats!' | gpg2 --armor --encrypt | gpg2 --decrypt
  2. Backup Keys

    GPG 1.4

gpg --armor --export > $GNUPGHOME/public.asc
gpg --armor --export-secret-keys > $GNUPGHOME/secret.asc
gpg --armor --export-secret-subkeys > $GNUPGHOME/subkey.asc

    GPG 2.0 or 2.1

gpg2 --armor --export > $GNUPGHOME/public.asc
gpg2 --armor --export-secret-keys > $GNUPGHOME/secret.asc
gpg2 --armor --export-secret-subkeys > $GNUPGHOME/subkey.asc
  3. Try Recovery

    Delete your public and secret key and re-import the offline keys (the full set).

    GPG 1.4

gpg --delete-secret-key <KEYID>
gpg --delete-key <KEYID>
gpg --import $GNUPGHOME/public.asc $GNUPGHOME/secret.asc
gpg --expert --edit-key <KEYID>

    GPG 2.0 or 2.1

gpg2 --delete-secret-key <KEYID>
gpg2 --delete-key <KEYID>
gpg2 --import $GNUPGHOME/public.asc $GNUPGHOME/secret.asc
gpg2 --expert --edit-key <KEYID>
    1. Type trust [Return] to give your self "ultimate" trust.
    2. Type save [Return] to save your changes to the key.

    Now try encrypting and signing stuff with your keys.

    GPG 1.4

echo 'hello' | gpg --armor --clearsign | gpg --verify
echo 'sekrats!' | gpg --armor --encrypt | gpg --decrypt

    GPG 2.0 or 2.1

echo 'hello' | gpg2 --armor --clearsign | gpg2 --verify
echo 'sekrats!' | gpg2 --armor --encrypt | gpg2 --decrypt
Switching Secret Key Sets

Using your offline secret keys

Delete your secret keys and re-import the offline secret keys (the full set).

GPG 1.4

gpg --delete-secret-key <KEYID>
gpg --import $GNUPGHOME/secret.asc

GPG 2.0 or 2.1

gpg2 --delete-secret-key <KEYID>
gpg2 --import $GNUPGHOME/secret.asc

With the full set in use (offline only, of course!) you can import your buddy's keys, sign them and trust them. Use gpg --import and gpg --export to move keys between USB drives and your online machine.

If you list your secret keys you should see a plain "sec" next to your key. This indicates a full secret key is present. You may now manage keys and sign other keys.

GPG 1.4

gpg --list-secret-keys

GPG 2.0 or 2.1

gpg2 --list-secret-keys

Using your online secret keys

Delete your secret keys and re-import the online secret keys (the subset).

GPG 1.4

gpg --delete-secret-key <KEYID>
gpg --import $GNUPGHOME/subkey.asc

GPG 2.0 or 2.1

gpg2 --delete-secret-key <KEYID>
gpg2 --import $GNUPGHOME/subkey.asc

You won't be able to sign other people's keys or create/revoke keys with this key-set. (This is a good thing in case your online machine and its subkeys are compromised later.)

If you list your secret keys you should see a "sec#" next to your key. The '#' indicates that the secret is missing for your master key. If you are following good practices, from now on, it will only be available on your offline computer.

GPG 1.4

gpg --list-secret-keys

GPG 2.0 or 2.1

gpg2 --list-secret-keys

Signing Packages with Stack

Now that we have our online secret keys ready, we can sign our Hackage-bound packages with Stack. There are currently two ways to sign packages in Stack. The first is to sign your package while uploading (the default). Just perform an upload as you might have in the past.

stack upload

This will upload your package to Hackage just like it did before. After that finishes it will GPG detach-sign the package and send that signature to sig-service. This service just collects signatures & periodically commits them to sig-archive. Later we will use these signatures and our web of trust to verify package contents as they download with stack.

If you don't want to sign for some reason, you can skip the signature by adding a flag.

stack upload --no-signature

Another way to sign is to create an sdist file. If you add a flag you can publish a signature at the same time.

stack sdist --sign

For those package authors out there with more than a few packages to sign, we have written a simple tool to bulk sign all your packages published to Hackage. It's called sig-tool.

To bootstrap you need to run the following:

cd sig-tool
stack install
stack exec -- sig-tool setup <myHackageUser>

This will download all of your packages to the current directory & generate a manifest with SHA256 sums for all the releases. You should inspect the files to make sure they are correct at this stage. You can sign all of the releases or you can trim the manifest file if you like.

To begin signing run the following:

stack exec -- sig-tool sign

Come work with us!

FP Complete has multiple openings for software and devops engineers. If you're interested in working on Haskell, container-based deployment, build systems, or just about anything you read about on this blog, send us your CV.

Categories: Offsite Blogs