News aggregator

GHC.Prim: Resizable Multidimensional array

haskell-cafe - 0 sec ago
I'm trying to create a resizable multidimensional array using the primitive array stuff in GHC.Prim. I see that the only resizable array is the ByteArray#, but the only array that supports multidimensionality is the ArrayArray# as far as I can tell. I have groups of 4 Double#s that I want to have a resizable array of, so only really the "first dimension" needs to be resizable. I was thinking if there was a way to get the pointer of a SmallArray# as an Addr# I could use the ability of ByteArray#s to store Addr#s. The other option I was considering was to simply have a mapping similar to the ones provided by Data.Ix that maps pairs of numbers to a single index, and in that way use a single ByteArray# to store everything. However, because this will be a fairly large array it seems like it is a much better idea to avoid allocating and reallocating that humongous chunk of memory when I can keep it spread out in smaller chunks by storing pointers to smaller arrays in the larger array. But maybe that isn't an imp
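The Data.Ix-style mapping the poster mentions can be sketched in plain Haskell. This is an illustrative sketch only; `toIndex` and `fromIndex` are made-up names, not from the thread:

```haskell
-- Row-major flattening of 2-D coordinates onto a single flat index,
-- in the spirit of Data.Ix (illustrative names only).
toIndex :: Int -> (Int, Int) -> Int
toIndex width (row, col) = row * width + col

-- Inverse mapping, recovering (row, col) from a flat index.
fromIndex :: Int -> Int -> (Int, Int)
fromIndex width i = i `divMod` width

main :: IO ()
main = print (map (toIndex 4) [(0, 0), (2, 3)])
```

With groups of 4 Double#s, `width` would be 4, so one "row" of the flat ByteArray# corresponds to one group.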
Categories: Offsite Discussion

Trouble building library with cabal-install 1.24

haskell-cafe - 0 sec ago
I'm trying to use cabal-install 1.24. To start fresh, I removed ~/.cabal and ~/.ghc, and then used the old cabal(-install) executable to do cabal update and cabal install cabal-install. Unfortunately, it seems that cabal-install is building everything except my library! $ cabal --version cabal-install version 1.24.0.0 compiled using version 1.24.0.0 of the Cabal library $ ghc --version The Glorious Glasgow Haskell Compilation System, version 7.10.3 $ cat cabal.project.local tests: True program-options ghc-options: -Werror $ cabal new-build --enable-test -v Project settings changed, reconfiguring... creating /home/eamybut/néal/creatur/dist-newstyle creating /home/eamybut/néal/creatur/dist-newstyle/cache creating /home/eamybut/.cabal/store/ghc-7.10.3 Reading installed packages... /usr/local/ghc-7.10.3/bin/ghc-pkg dump '--package-db=/home/eamybut/.cabal/store/ghc-7.10.3/package.db' -v0 /usr/local/ghc-7.10.3/bin/ghc --print-libdir -Werror In order, the following will be built: cond-0.4.1.1 (lib:cond)
Categories: Offsite Discussion

RFC: Removing the `-hb` profiling option

glasgow-user - 0 sec ago
Hi all, After a bit of rather tedious and frustrating debugging I came to the realisation that the code for the `-hb` profiling option is not thread safe. See https://ghc.haskell.org/trac/ghc/ticket/12019 This gives us an opportunity to simply remove it instead of fixing it. If there is anyone that thinks this feature is really useful (i.e. more useful than the other profiling modes) then I'm willing to fix it. But if no one would miss it, I'd much rather remove it. Thoughts? Erik
Categories: Offsite Discussion

What happened to -optdep option?

glasgow-user - Thu, 05/05/2016 - 8:19pm
Hello! I'm using GHC 7.10.3 after an upgrade. I have the following in my Makefile: depend_mod :: lib/abh ghc -M $(CFLAGS_GHC_0) -dep-makefile -optdepbuild/depend.tmp -dep-suffix "p_" \ $(foreach m, $(MOD_HS) $(PROG_HS), src/$(m).hs) \ $(foreach m, $(MOD_CHS) $(PROG_CHS), build/$(m).hs) ... This used to work. The previous GHC version was 7.8. Now GHC complains: -optdepbuild/depend.tmp: openBinaryFile: does not exist (No such file or directory) This looks like the -optdep argument isn't understood. There is no mention of it in the new/7.10.3 user manual. I don't know any longer what this "-optdep" argument does, and it isn't in the documentation any longer. I googled the GHC web site. There it is mentioned that -optdep was deprecated, but almost all of the hits are very outdated (like ten years old). Can someone tell me, how I should change my Makefile..? Greetings, Volker Wysk
Categories: Offsite Discussion

Douglas M. Auclair (geophf): April 2016 1HaskellADay 1Liners

Planet Haskell - Thu, 05/05/2016 - 3:39pm
  • April 15th, 2016:
    (\(intensf, list) -> map (second intensf) list) (intensifierFn, assocList)
    Point-free-itize this lambda
    • obadz @obadzz uncurry $ map . second
  • April 15th, 2016:
    foldr (\card -> let c = color card in mappend c *** ((idx card, c):))
           (mempty, []) cards
    Point-free-itize and de-let lambda
    • Gautier DI FOLCO @gautier_difolco foldr (uncurry (***) . (uncurry fmap . fmap (((,) =<< mappend) . color) . ((,) =<< ((:) .) . (,) . idx))) (mempty, []) cards
    • me: foldr (uncurry (***) . (mappend . snd &&& (:)) . (idx &&& color)) (mempty, [])
  • April 15th, 2016: map (\(idx, color) -> C idx (cell bb idx) color) assoclist
    point-free-itize the lambda in the map function.
    • Eyal Lotem @EyalL uncurry (C <*> cell bb) ?
      Not in front of a type checker, can't check myself :)
  • April 13th, 2016:
    point-free-itize lambda term in:
    uncurry (foldr (\a -> min a *** max a)) . ((head &&& head) &&& id)
    • obadz @obadzz liftA2 (***) min max
  • April 12th, 2016:
    minmax :: Ord a => [a] -> (a, a)
    minmax [1..9] = (1,9)
    in linear time.
    minmax = minimum &&& maximum
    fails as it's 2x too slow.
  • April 12th, 2016:
    You have (a,b) (c,d)
    You need (a*c, b*d)

    Well, we don't have all day! (Actually, we do) Get to it!
    Pointless; I MEAN POINT-FREE pls
    • lotz@Haskell㌠ @lotz84_ ((uncurry . flip . curry $ id).) . ((uncurry.) . uncurry . (flip.) . flip . flip) ((((flip . (flip.)) $ ((,).) . (*)).) . (*))
    • Olivier Iffrig @oiffrig Point-free but star-full: uncurry (***) . ((*) *** (*))
    • bazzargh @bazzargh curry((uncurry (*)) *** (uncurry (*)))
    • bazzargh @bazzargh knew there had to be a bimap answer, eventually got: curry ((bimap <$> id <*> id) (uncurry (*)))
    • obadz @obadzz join biliftA2 (*)
    • Андреев Кирилл @nonaem00 (((product *** product) . unzip) .) . (flip (:)) . (:[]) ... I clearly need more coffee this morning.
  • April 10th, 2016:

    data BBx = Bx (Int, Int) Pt2D Pt2D

    define:
    mkBB :: [a] -> Pt2D -> BBx
    mkBB xs extents = Bx (0, length xs) (0,0) extents
    points-free
    • Gautier DI FOLCO @gautier_difolco
      -- LANGUAGE TupleSections
      data BBx = Bx (Int, Int) Pt2D Pt2D
      mkBB :: [a] -> Pt2D -> BBx
      mkBB = flip Bx (0,0) . (0, ) . length
    • Thomas D @tthomasdd mkBB = flip Bx (0,0) . (,) 0 . length
  • April 7th, 2016:
    type MList a n = [(a, n)]
    partSz :: Ord n => n -> MList a n -> (MList a n, MList a n)
    partSz µ = partition ((µ >) . snd)
    Define points-free
    • partSz = partition . (. snd) . (>)  -- via @gautier_difolco
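A couple of the puzzles above can be checked with a short Haskell script. Below is a one-pass minmax (a sketch addressing the April 12th complaint that minimum &&& maximum walks the list twice; it is not one of the submitted answers), plus the April 7th partSz next to @gautier_difolco's point-free version:

```haskell
import Data.List (foldl', partition)

-- One-pass minmax: thread the running (min, max) pair through a
-- strict left fold, so the list is traversed once instead of twice.
minmax :: Ord a => [a] -> (a, a)
minmax []       = error "minmax: empty list"
minmax (x : xs) = foldl' (\(lo, hi) y -> (min lo y, max hi y)) (x, x) xs

type MList a n = [(a, n)]

-- The April 7th definition and the point-free answer, side by side.
partSz, partSz' :: Ord n => n -> MList a n -> (MList a n, MList a n)
partSz mu = partition ((mu >) . snd)
partSz'   = partition . (. snd) . (>)

main :: IO ()
main = print (minmax [1 .. 9 :: Int])
```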
Categories: Offsite Blogs

Derek Elkins: The Mistake Everyone Makes with KnockoutJS

Planet Haskell - Thu, 05/05/2016 - 2:38pm
Introduction

Knockout is a nice JavaScript library for making values that automatically update when any of their “dependencies” update. Those dependencies can form an arbitrary directed acyclic graph. Many people seem to think of it as “yet another” templating library, but the core idea which is useful far beyond “templating” is the notion of observable values. One nice aspect is that it is a library and not a framework so you can use it as little or as much as you want and you can integrate it with other libraries and frameworks.

At any rate, this article is more geared toward those who have already decided on using Knockout or a library (in any language) offering similar capabilities. I strongly suspect the issues and solutions I’ll discuss apply to all similar sorts of libraries. While I’ll focus on one particular example, the ideas behind it apply generally. This example, admittedly, is one that almost anyone will implement, and in my experience will do it incorrectly the first time and won’t realize the problem until later.

The Problem

When doing any front-end work, before long there will be a requirement to support “multi-select” of something. Of course, you want the standard select/deselect all functionality and for it to work correctly, and of course you want to do something with the items you’ve selected. Here’s a very simple example:

[Interactive demo on the original page: a "Number selected" count, a table of items with checkboxes, a select-all checkbox, and an Add button.]

Here, the number selected is an overly simple example of using the selected items. More realistically, the selected items will trigger other items to show up and/or trigger AJAX requests to update the data or populate other data. The HTML for this example is completely straightforward.

<div id="#badExample">
  Number selected: <span data-bind="text: $data.numberSelected()"></span>
  <table>
    <tr><th><input type="checkbox" data-bind="checked: $data.allSelected"/></th><th>Item</th></tr>
    <!-- ko foreach: { data: $data.items(), as: '$item' } -->
    <tr><td><input type="checkbox" data-bind="checked: $data.selected"/></td><td data-bind="text: 'Item number: '+$data.body"></td></tr>
    <!-- /ko -->
    <tr><td><button data-bind="click: function() { $data.add(); }">Add</button></td></tr>
  </table>
</div>

The way nearly everyone (including me) first thinks to implement this is by adding a selected observable to each item and then having allSelected depend on all of the selected observables. Since we also want to write to allSelected to change the state of the selected observables, we use a writable computed observable. This computed observable loops through all the items and checks whether they are all set to determine its state. When it is updated, it loops through all the selected observables and sets them to the appropriate state. Here's the full code listing.

var badViewModel = {
    counter: 0,
    items: ko.observableArray()
};

badViewModel.allSelected = ko.computed({
    read: function() {
        var items = badViewModel.items();
        var allSelected = true;
        for(var i = 0; i < items.length; i++) {
            // Need to make sure we depend on each item, so don't break out of loop early
            allSelected = allSelected && items[i].selected();
        }
        return allSelected;
    },
    write: function(newValue) {
        var items = badViewModel.items();
        for(var i = 0; i < items.length; i++) {
            items[i].selected(newValue);
        }
    }
});

badViewModel.numberSelected = ko.computed(function() {
    var count = 0;
    var items = badViewModel.items();
    for(var i = 0; i < items.length; i++) {
        if(items[i].selected()) count++;
    }
    return count;
});

badViewModel.add = function() {
    badViewModel.items.push({
        body: badViewModel.counter++,
        selected: ko.observable(false)
    });
};

ko.applyBindings(badViewModel, document.getElementById('#badExample'));

This should be relatively straightforward, and it works, so what’s the problem? The problem can be seen in numberSelected (and it also comes up with allSelected which I’ll get to momentarily). numberSelected depends on each selected observable and so it will be fired each time each one updates. That means if you have 100 items, and you use the select all checkbox, numberSelected will be called 100 times. For this example, that doesn’t really matter. For a more realistic example than numberSelected, this may mean rendering one, then two, then three, … then 100 HTML fragments or making 100 AJAX requests. In fact, this same behavior is present in allSelected. When it is written, as it’s writing to the selected observables, it is also triggering itself.

So the problem is updating allSelected or numberSelected can’t be done all at once, or to use database terminology, it can’t be updated atomically. One possible solution in newer versions of Knockout is to use deferredUpdates or, what I did back in the much earlier versions of Knockout, abuse the rate limiting features. The problem with this solution is that it makes updates asynchronous. If you’ve written your code to not care whether it was called synchronously or asynchronously, then this will work fine. If you haven’t, doing this throws you into a world of shared state concurrency and race conditions. In this case, this solution is far worse than the disease.

The Solution

So, what’s the alternative? We want to update all selected items atomically; we can atomically update a single observable; so we’ll put all selected items into a single observable. Now an item determines if it is selected by checking whether it is in the collection of selected items. More abstractly, we make our observables more coarse-grained, and we have a bunch of small computed observables depend on a large observable instead of a large computed observable depending on a bunch of small observables as we had in the previous code. Here’s an example using the exact same HTML and presenting the same overt behavior.

[Interactive demo on the original page: the same UI as above, driven by the second view model.]

And here’s the code behind this second example:

var goodViewModel = {
    counter: 0,
    selectedItems: ko.observableArray(),
    items: ko.observableArray()
};

goodViewModel.allSelected = ko.computed({
    read: function() {
        return goodViewModel.items().length === goodViewModel.selectedItems().length;
    },
    write: function(newValue) {
        if(newValue) {
            goodViewModel.selectedItems(goodViewModel.items().slice(0)); // Need a copy!
        } else {
            goodViewModel.selectedItems.removeAll();
        }
    }
});

goodViewModel.numberSelected = ko.computed(function() {
    return goodViewModel.selectedItems().length;
});

goodViewModel.add = function() {
    var item = { body: goodViewModel.counter++ };
    item.selected = ko.computed({
        read: function() {
            return goodViewModel.selectedItems.indexOf(item) > -1;
        },
        write: function(newValue) {
            if(newValue) {
                goodViewModel.selectedItems.push(item);
            } else {
                goodViewModel.selectedItems.remove(item);
            }
        }
    });
    goodViewModel.items.push(item);
};

ko.applyBindings(goodViewModel, document.getElementById('#goodExample'));

One thing to note is that setting allSelected and numberSelected are now both simple operations. A write to an observable only triggers a constant number of writes to other observables. In fact, there are only two (non-computed) observables. On the other hand, reading the selected observable is more expensive. Toggling all items has quadratic complexity. In fact, it had quadratic complexity before due to the feedback. However, unlike the previous code, this also has quadratic complexity when any individual item is toggled. Unlike the previous code, though, this is simply due to a poor choice of data structure. Equipping each item with an "ID" field and using an object as a hash map would reduce the complexity to linear. In practice, for this sort of scenario, it tends not to make a big difference. Also, Knockout won't trigger dependents if the value doesn't change, so there's no risk of the extra work propagating into still more extra work. Nevertheless, while I endorse this solution for this particular problem, in general making finer-grained observables can help limit the scope of changes so unnecessary work isn't done.

Still, the real concern and benefit of this latter approach isn’t the asymptotic complexity of the operations, but the atomicity of the operations. In the second solution, every update is atomic. There are no intermediate states on the way to a final state. This means that dependents, represented by numberSelected but which are realistically much more complicated, don’t get triggered excessively and don’t need to “compensate” for unintended intermediate values.

Epilogue

We could take the coarse-graining to its logical conclusion and have the view model for an application be a single observable holding an object representing the entire view model (and containing no observables of its own). Taking this approach actually does have a lot of benefits, albeit there is little reason to use Knockout at that point. Instead this starts to lead to things like Facebook’s Flux pattern and the pattern perhaps most clearly articulated by Cycle JS.

Categories: Offsite Blogs

NFM 2016 - Call for participation

General haskell list - Thu, 05/05/2016 - 2:19pm
******************************************************************** CALL FOR PARTICIPATION The 8th NASA Formal Methods Symposium June 7 - June 9, 2016 McNamara Alumni Center University of Minnesota Minneapolis, MN http://crisys.cs.umn.edu/nfm2016 ******************************************************************** REGISTRATION ... is FREE! All interested individuals, including non-US citizens, are welcome to attend. All participants must register but there is no registration fee. Please register online at http://crisys.cs.umn.edu/nfm2016/REGISTRATION We strongly encourage participants to register early and reserve accommodations. A block of hotel rooms are reserved at The Commons Hotel until May 7, 2016. THEME OF THE SYMPOSIUM The NASA Formal Methods Symposium is a forum to foster collaboration between theoreticians and practitioners from NASA, academia, and the
Categories: Incoming News

Disciple/DDC: DDC 0.4.2 -- the "almost useful" release

Planet Haskell - Thu, 05/05/2016 - 6:22am
DDC 0.4.2 was pushed to Hackage a few days ago. Running "cabal update; cabal install ddc-tools" should build it. You'll need a version of the LLVM tools in your path. LLVM 3.4 - 3.8 (the current release) should work.

Code generation and runtime system support for higher order functions is finished. The type checker also now automatically inserts the run and box casts which were mentioned a few posts ago. Release notes are on the github page.

The programs we can compile are starting to look "almost useful". For example, here is some code from the AlmostPrimes example, which I adapted from Rosetta Code:


-- | A stream of k-almost-prime numbers.
kPrimes (k: Nat#): Stream Nat# Nat#
 = sfilter (isKPrime k) (senumFrom 2)

-- | Check if some number is k-almost-prime.
isKPrime (k n: Nat#): Bool#
 | k == 1
 = isPrime n

 | otherwise
 = sany $ smap (isKPrime (k - 1))
        $ smap fst
        $ sfilter (eq 0 ∘ snd)
        $ smap (divMod n)
        $ stakeWhile (λx: Nat#. x < n)
        $ primes 2

main (_: Unit): S Console Unit
 = do   forS (enumFromTo 1 5)
        (λk: Nat#.
         do write $ "k = " % showNat k % ": "
            forS (listOfStream $ stake 10 $ kPrimes k)
                 (λx: Nat#. write $ showNat x % " ")
            write "\n")

The full example code is here

Note that Nat# is the type of a primitive natural number. The # just means "primitive", which is not the same as "unboxed" as in GHC. The hashes will go away once I implement type synonyms and can define a nicer synonym. The example invokes some standard stream combinators: smap, sfilter and so on. The forS function is like mapM, but using the Disciple effect system instead of monads. The type of forS is:
forS : List a -> (a -> S e Unit) -> S e Unit
S e Unit is the type of a suspended computation with effect e which returns a value of type Unit when evaluated. I'm calling this the "almost useful" release because we still need to finish the garbage collector, as well as desugaring for nested patterns, before I could imagine writing anything substantial in it. Late last year Jacob Stanley was working on the transform to insert GC slot handling into core programs, but the runtime support still needs to be implemented. Current work is to finish support for type synonyms, then I'll do either the GC or nested patterns, depending on which looks like the lowest hanging fruit.
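For readers more at home in Haskell than Disciple, forS is essentially mapM_ with its arguments flipped. A rough Haskell analogue (a sketch using ordinary monads where Disciple uses its effect system):

```haskell
-- Rough Haskell analogue of DDC's forS: mapM_ with the arguments
-- flipped, so the list comes first and the loop body last.
forS :: Monad m => [a] -> (a -> m ()) -> m ()
forS = flip mapM_

main :: IO ()
main = forS [1 .. 5 :: Int] (\k -> putStrLn ("k = " ++ show k))
```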
Categories: Offsite Blogs

Is Haskell IO inherently slower than C?

haskell-cafe - Thu, 05/05/2016 - 2:05am
I'm trying to write a program in Haskell that writes a large file at the end, and it seems like that output alone is taking way longer than it should. I don't understand why Haskell shouldn't be able to write data as quickly as C, so I wrote two test files:
Categories: Offsite Discussion

Cabal confusion. Modules not being exposed still accessible for testing.

haskell-cafe - Thu, 05/05/2016 - 12:18am
I use reactive-banana <https://github.com/HeinrichApfelmus/reactive-banana/blob/master/reactive-banana/reactive-banana.cabal> as an example. My question is how can Reactive.Banana.Internal.Combinators be testable from Reactive.Banana.Test.Plumbing? My understanding was that only modules named in the exposed-modules stanza could be imported. Could someone clear up my misunderstanding?
Categories: Offsite Discussion

FP Complete: Stack Security GnuPG Keys

Planet Haskell - Wed, 05/04/2016 - 10:50pm
Introduction

We are introducing a method of verifying Haskell packages with OpenPGP signatures. The initial code to sign your packages has been included in Stack as an experimental feature for some time. We will be improving it and adding verification options in the coming weeks. However, we need signatures from package authors before verification is useful.

The first step in submitting package signatures is to create a secure OpenPGP key for yourself and publish it. This post will walk you through creating an offline master OpenPGP key-set with GnuPG.

If you've never used GPG before and need to generate a secure set of keys, then continue reading.

If you already have GPG keys that you are happy with, continue to 'Signing Packages with Stack' towards the bottom of this article.

GPG Keys

Choosing a version of GPG

There are 3 main flavors of GnuPG:

  • 1.4 (AKA "gpg") - Old - Super stable (11 years old) & limited features
  • 2.0 (AKA "gpg2") - Stable - Most people use this for personal encryption
  • 2.1 (AKA "gpg2") - Modern - Latest features: ECC support, Tor, TOFU, etc

The GnuPG project recommends using the stable GPG 2 release (2.0). I've included instructions for using GPG 1.4, but I don't recommend using such an old version unless you have no choice. I've also included instructions for GnuPG 2.1 because I use it personally and find the new features useful.

  • Windows - https://www.gpg4win.org/download.html
  • OS X
    • GPG 2.0 is available as part of GPG Suite which is an easy download from https://gpgtools.org (recommended). (Bonus: GPG Suite adds GPG e-mail for the Apple Mail app). (Note: If you install GPG Suite click "Cancel" when prompted to generate a key through their GUI. We'll go through this step below at the command line.)
    • GnuPG 2.0 is available through Homebrew as well if you prefer.
  • Linux/BSD - GnuPG 2.x is available as a package for your favorite Linux flavor. (Bonus: Add Enigmail while you are at it for easy GPG e-mail.)
Offline machine for GPG key management

Ideally your offline keys will never be on a machine connected to the outside world. The offline box will be the place where you keep your master key, manage your keys & sign other keys. This ensures that you can always revoke keys and recover if your online machine(s) are somehow compromised.

To have an offline box you'll need two "machines". These can just be the same machine if you utilize a "live" Linux USB/DVD boot drive and a USB stick for the "sneaker net".

We'll follow the same procedure to generate your keys, even if you don't want to use an offline machine to generate them. Using an offline master and keeping secure backups is highly recommended, though: it's the least hassle for everyone in your "web of trust" in the event that keys are compromised.

With an offline master key, if your laptop is compromised, you can just revoke the compromised sub-keys, create new sub-keys and publish them. If your master key itself is compromised, you will have to revoke all your keys, approach people about your new keys & go through the process of reestablishing trust with people in your web-of-trust.

Configuration

Setup $GNUPGHOME directory

umask 077
export GNUPGHOME=$HOME/.gnupg  # default location but env var rules
mkdir -p $GNUPGHOME

Install the secure key-server certificate (Ubuntu/Debian):

mkdir -p /usr/local/share/ca-certificates/
curl -s https://sks-keyservers.net/sks-keyservers.netCA.pem \
  | sudo tee /usr/local/share/ca-certificates/sks-keyservers.netCA.crt
sudo update-ca-certificates

Put some or all of the following in your $GNUPGHOME/gpg.conf file: (taken from the riseup.net best gpg practices page)

no-emit-version
no-comments
keyid-format 0xlong
with-fingerprint
list-options show-uid-validity
verify-options show-uid-validity
use-agent
keyserver hkps://hkps.pool.sks-keyservers.net
# or less secure: keyserver hkp://pool.sks-keyservers.net
keyserver-options no-honor-keyserver-url
keyserver-options include-revoked
personal-cipher-preferences AES256 AES192 AES CAST5
personal-digest-preferences SHA512 SHA384 SHA256 SHA224
cert-digest-algo SHA512
default-preference-list SHA512 SHA384 SHA256 SHA224 AES256 AES192 AES CAST5 ZLIB BZIP2 ZIP Uncompressed

Key Generation

Master Key

Public and Secret

  1. Generate Key

    GPG 1.4

    gpg --expert --gen-key

    GPG 2.0

    gpg2 --expert --gen-key

    GPG 2.1

    gpg2 --expert --full-gen-key
    • Pick the "RSA" and "set your own capabilities" option
    • Turn off everything until it just says "Certify" then continue
    • Select the bit count. I pick 4000 for RSA because it's big but it's also not a power of 2 (like everyone else picks.)
    • Select the expiration period. I picked 1 year.
    • Pick a good pass-phrase that you can remember but would take a computer a long time to guess.
  2. Backup Your Keys

    GPG 1.4

    gpg --armor --export > $GNUPGHOME/public.asc
    gpg --armor --export-secret-key > $GNUPGHOME/secret.asc

    GPG 2.0 or 2.1

    gpg2 --armor --export > $GNUPGHOME/public.asc
    gpg2 --armor --export-secret-key > $GNUPGHOME/secret.asc
  3. Try Recovering Your Keys

    Delete your public and secret key and re-import them.

    GPG 1.4

    gpg --delete-secret-key <KEYID>
    gpg --delete-key <KEYID>
    gpg --import $GNUPGHOME/public.asc $GNUPGHOME/secret.asc
    gpg --expert --edit-key <KEYID>

    GPG 2.0 & 2.1

    gpg2 --delete-secret-key <KEYID>
    gpg2 --delete-key <KEYID>
    gpg2 --import $GNUPGHOME/public.asc $GNUPGHOME/secret.asc
    gpg2 --expert --edit-key <KEYID>
    • Type trust [Return] to give yourself "ultimate" trust.
    • Type save [Return] to save your changes to the key.
Sub-keys
  1. Generate Keys

    Start by editing your key again.

    GPG 1.4

    gpg --expert --edit-key <KEYID>

    GPG 2.0 or 2.1

    gpg2 --expert --edit-key <KEYID>

    Create a Signing Sub-key:

    • Type addkey [Return]
    • Pick the "RSA" and "set your own capabilities" option
    • Turn off everything until it just says "Sign" then continue

    Create an Encryption Sub-key:

    • Type addkey [Return]
    • Pick the "RSA" and "set your own capabilities" option
    • Turn off everything until it just says "Encrypt" then continue

    Create an Authentication Sub-key:

    • Type addkey [Return]
    • Pick the "RSA" and "set your own capabilities" option
    • Add the "Authenticate" capability then continue
    • Turn off everything until it just says "Authenticate" then continue
    • Type save [Return] to save and exit

    Now try encrypting and signing stuff with your keys.

    GPG 1.4

    echo 'hello' | gpg --armor --clearsign | gpg --verify
    echo 'sekrats!' | gpg --armor --encrypt | gpg --decrypt

    GPG 2.0 or 2.1

    echo 'hello' | gpg2 --armor --clearsign | gpg2 --verify
    echo 'sekrats!' | gpg2 --armor --encrypt | gpg2 --decrypt
  2. Backup Keys

    GPG 1.4

    gpg --armor --export > $GNUPGHOME/public.asc
    gpg --armor --export-secret-keys > $GNUPGHOME/secret.asc
    gpg --armor --export-secret-subkeys > $GNUPGHOME/subkey.asc

    GPG 2.0 or 2.1

    gpg2 --armor --export > $GNUPGHOME/public.asc
    gpg2 --armor --export-secret-keys > $GNUPGHOME/secret.asc
    gpg2 --armor --export-secret-subkeys > $GNUPGHOME/subkey.asc
  3. Try Recovery

    Delete your public and secret key and re-import the offline keys (the full set).

    GPG 1.4

    gpg --delete-secret-key <KEYID>
    gpg --delete-key <KEYID>
    gpg --import $GNUPGHOME/public.asc $GNUPGHOME/secret.asc
    gpg --expert --edit-key <KEYID>

    GPG 2.0 or 2.1

    gpg2 --delete-secret-key <KEYID>
    gpg2 --delete-key <KEYID>
    gpg2 --import $GNUPGHOME/public.asc $GNUPGHOME/secret.asc
    gpg2 --expert --edit-key <KEYID>
    1. Type trust [Return] to give yourself "ultimate" trust.
    2. Type save [Return] to save your changes to the key.

    Now try encrypting and signing stuff with your keys.

    GPG 1.4

    echo 'hello' | gpg --armor --clearsign | gpg --verify
    echo 'sekrats!' | gpg --armor --encrypt | gpg --decrypt

    GPG 2.0 or 2.1

    echo 'hello' | gpg2 --armor --clearsign | gpg2 --verify
    echo 'sekrats!' | gpg2 --armor --encrypt | gpg2 --decrypt
Switching Secret Key Sets

Using your offline secret keys

Delete your secret keys and re-import the offline secret keys (the full set).

GPG 1.4

gpg --delete-secret-key <KEYID>
gpg --import $GNUPGHOME/secret.asc

GPG 2.0 or 2.1

gpg2 --delete-secret-key <KEYID>
gpg2 --import $GNUPGHOME/secret.asc

With the full set in use (offline only of course!) you can import your buddy's keys, sign them and trust them. Use gpg --import and gpg --export to move keys around to/from USB drives to/from your online machine.

If you list your secret keys you should see a plain "sec" next to your key. This indicates a full secret key is present. You may now manage keys and sign other keys.

GPG 1.4

gpg --list-secret-keys

GPG 2.0 or 2.1

gpg2 --list-secret-keys

Using your online secret keys

Delete your secret keys and re-import the online secret keys (the subset).

GPG 1.4

gpg --delete-secret-key <KEYID>
gpg --import $GNUPGHOME/subkey.asc

GPG 2.0 or 2.1

gpg2 --delete-secret-key <KEYID>
gpg2 --import $GNUPGHOME/subkey.asc

You won't be able to sign other people's keys or create/revoke keys with this key-set. (This is a good thing in case your online machine and its subkeys are compromised later.)

If you list your secret keys you should see a "sec#" next to your key. The '#' indicates that the secret is missing for your master key. If you are following good practices, from now on, it will only be available on your offline computer.

GPG 1.4

gpg --list-secret-keys

GPG 2.0 or 2.1

gpg2 --list-secret-keys

Signing Packages with Stack

Now that we have our online secret keys ready, we can sign our Hackage-bound packages with Stack. There are currently two ways to sign packages with Stack. The first is to sign your package while uploading (the default). Just perform an upload as you might have in the past.

stack upload

Another way to sign is to create an sdist file. If you add a flag you can publish a signature at the same time.

stack sdist --sign

For those package authors out there with more than a few packages to sign, we have written a simple tool to bulk sign all your packages published to Hackage. It's called sig-tool.

To bootstrap you need to run the following:

sig-tool setup <myHackageUser>

This will download all of your packages to the current directory and generate a manifest with SHA256 sums for all the releases. You should inspect the files at this stage to make sure they are correct. You can sign all of the releases, or trim the manifest file if you like.
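sig-tool's exact manifest format isn't shown here, but the checksumming step amounts to hashing each release tarball. As a rough illustration (the file names and manifest layout below are assumptions, not sig-tool's actual code), computing SHA256 sums over a set of releases looks like this in Python:

```python
import hashlib

def sha256_of(path, chunk_size=65536):
    """Return the SHA256 hex digest of a file, reading it in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def write_manifest(tarball_paths, manifest_path):
    """Write one 'digest  filename' line per release tarball."""
    with open(manifest_path, "w") as out:
        for p in tarball_paths:
            out.write("%s  %s\n" % (sha256_of(p), p))
```

Verifying a release then reduces to recomputing its digest and comparing it against the manifest entry.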

To begin signing run the following:

sig-tool sign

Come work with us!

FP Complete has multiple openings for software and devops engineers. If you're interested in working on Haskell, container-based deployment, build systems, or just about anything you read about on this blog, send us your CV.

Categories: Offsite Blogs

Functional Jobs: Software Engineer – Cryptography at LeapYear (Full-time)

Planet Haskell - Wed, 05/04/2016 - 6:07pm
LeapYear is pioneering a new approach to the data security problem.

We’ve built the first platform that enables unrestricted access to information while providing mathematically proven guarantees of security and privacy.

Our engineers solve hard, real-world problems that impact every data-driven organization. We are an Agile team at the frontiers of software development, integrating the latest advances in cryptography and machine learning into critical systems using functional programming (99% of our code is written in Haskell).

If this sounds like your dream job, read on.

You will be one of the first ten employees at a rapidly growing, mission-driven company.

In our first six months, our team architected, developed, and deployed our product on Fortune 50 datasets with billions of records and secured over five and a half million dollars in bookings. We are backed by Bloomberg, private equity investors, and security industry experts with multi-billion dollar exits. LeapYear has collaborated with some of the nation’s largest healthcare organizations, partnered with a coalition working to deploy $250B in social impact capital, and we’re just getting started.

If your day-to-day concerns include information leaks, timing attacks, and how well PKCS#1 v2 stacks up against v1.5, this is the job for you. You will shape our security design alongside the world's leading experts in cryptography and make key strategic decisions about our product. We will look to you to identify and instill the best practices in security engineering throughout the platform and help us meet high performance benchmarks while maintaining cryptographic guarantees of privacy and security.

To learn more about our mission, team, company culture, and benefits, see http://leapyear.io/careers. For details on the specific responsibilities and requirements of the role, please see below.

RESPONSIBILITIES
  • oversee the development of secure and scalable implementations
  • collaborate with developers and researchers to transform theory in cryptography into enterprise applications
  • develop performance-critical code
  • plan, implement, and optimize new features to carry out our product roadmap
REQUIREMENTS
  • expert knowledge of systems architecture and C
  • professional experience in functional programming
  • ability to read and understand academic literature in theoretical computer science
  • track record of delivering high-quality product features on schedule
  • strong foundation in data structures, algorithms, and software design
PREFERRED
  • excellent Haskell skills
  • expert knowledge of practical cryptography and data security
  • industry experience in security and/or cryptography
BONUS
  • publications in major security or cryptography journals
  • PhD in computer science, math, or related fields
  • professional experience in developing secure critical systems
A FEW OF THE PERKS
  • Culture of teaching and learning
  • Competitive compensation package of salary and equity
  • Open vacation policy
  • Build your ideal workstation
  • Catered lunch and dinner every day
  • Company outings
  • Free yoga and meditation classes at a world-class studio downstairs
  • Relocation support

LeapYear is on a mission to resolve the conflict between innovation and data security. If you thrive in an open environment working on deeply technical challenges, we’re looking for you.

Get information on how to apply for this position.

Categories: Offsite Blogs

Functional Jobs: Software Engineer – Machine Learning at LeapYear (Full-time)

Planet Haskell - Wed, 05/04/2016 - 6:02pm
LeapYear is pioneering a new approach to the data security problem.

We’ve built the first platform that enables unrestricted access to information while providing mathematically proven guarantees of security and privacy.

Our engineers solve hard, real-world problems that impact every data-driven organization. We are an Agile team at the frontiers of software development, integrating the latest advances in cryptography and machine learning into critical systems using functional programming (99% of our code is written in Haskell).

If this sounds like your dream job, read on.

You will be one of the first ten employees at a rapidly growing, mission-driven company.

In our first six months, our team architected, developed, and deployed our product on Fortune 50 datasets with billions of records and secured over five and a half million dollars in bookings. We are backed by Bloomberg, private equity investors, and security industry experts with multi-billion dollar exits. LeapYear has collaborated with some of the nation’s largest healthcare organizations, partnered with a coalition working to deploy $250B in social impact capital, and we’re just getting started.

If you work at the intersection of machine learning and functional programming, we're looking for you. You will collaborate with developers and researchers to transform machine learning theory into enterprise applications, and the novel algorithms that you develop will be deployed on massive enterprise datasets.

To learn more about our mission, team, company culture, and benefits, see http://leapyear.io/careers. For details on the specific responsibilities and requirements of the role, please see below.

RESPONSIBILITIES
  • design and implement novel machine learning techniques
  • develop performance-critical code
  • plan, implement, and optimize new features to carry out our product roadmap
REQUIREMENTS
  • professional or research machine learning experience
  • professional experience in functional programming
  • ability to read and understand academic literature in theoretical computer science
  • track record of delivering high-quality product features on schedule
  • strong foundation in data structures, algorithms, and software design
PREFERRED
  • excellent Haskell skills
  • experience in enterprise data science and data engineering
  • background applying advanced machine learning techniques in enterprise data environments
BONUS
  • PhD in machine learning or related field
A FEW OF THE PERKS
  • Culture of teaching and learning
  • Competitive compensation package of salary and equity
  • Open vacation policy
  • Build your ideal workstation
  • Catered lunch and dinner every day
  • Company outings
  • Free yoga and meditation classes at a world-class studio downstairs
  • Relocation support

LeapYear is on a mission to resolve the conflict between innovation and data security. If you thrive in an open environment working on deeply technical challenges, we’re looking for you.

Get information on how to apply for this position.

Categories: Offsite Blogs

PPDP 2016 - Last Call for Papers

General haskell list - Wed, 05/04/2016 - 9:19am
====================================================================== Last call for papers 18th International Symposium on Principles and Practice of Declarative Programming PPDP 2016 Special Issue of Science of Computer Programming (SCP) Edinburgh, UK, September 5-7, 2016 (co-located with LOPSTR and SAS) http://ppdp16.webs.upv.es/ ====================================================================== SUBMISSION DEADLINE: 9 MAY (abstracts) / 16 MAY (papers) ---------------------------------------------------------------------- INVITED SPEAKERS Elvira Albert, Complutense University of Madrid, Spain ---------------------------------------------------------------------- PPDP 2016 is a forum that brings together researchers from the declarative programming communities, including those working in the logic, constraint and functional programming paradigms, but also embracing languages, database languages, and knowledge repres
Categories: Incoming News

Bill Atkins: NSNotificationCenter, Swift and blocks

Planet Haskell - Wed, 05/04/2016 - 8:29am
The conventional way to register observers with NSNotificationCenter is to use the target-action pattern. While this gets the job done, it's inherently not type-safe.

For example, the following Swift snippet will compile perfectly:

    NSNotificationCenter.defaultCenter().addObserver(self, selector: Selector("itemAdded:"),
        name: MyNotificationItemAdded, object: nil)
even though at runtime it will fail unless self has a method named itemAdded that takes exactly one parameter (leaving off that last colon in the selector will turn this line into a no-op). Plus, this method gives you no way to take advantage of Swift's closures, which would allow the observer to access local variables in the method that adds the observer and would eliminate the need to create a dedicated method to handle the event.
A better way to do this is to use blocks. And NSNotificationCenter does include a block-based API:
    NSNotificationCenter.defaultCenter().addObserverForName(MyNotificationItemAdded, object: nil, queue: nil) { note in
        // ...
    }
This is much nicer, especially with Swift's trailing closure syntax. There are no method names to be looked up at runtime, we can refer to local variables in the method that registered the observer and we can perform small bits of logic in reaction to events without having to create and name dedicated methods.
The catch comes in resource management. It's very important that an object remove its event observers when it's deallocated, or else NSNotificationCenter will try to invoke methods on invalid pointers.
The traditional target-action method has the one advantage that we can easily handle this requirement with a single call in deinit:
    deinit {
        NSNotificationCenter.defaultCenter().removeObserver(self)
    }
With the block API, however, since there is no explicit target object, each call to addObserverForName returns "an opaque object to act as observer." So your observer class would need to track all of these objects and then remove them all from the notification center in deinit, which is a pain.
In fact, the hassle of having to do bookkeeping on the observer objects almost cancels out the convenience of using the block API. Frustrated by this situation, I sat down and created a simple helper class, NotificationManager:
class NotificationManager {
    private var observerTokens: [AnyObject] = []

    deinit {
        deregisterAll()
    }

    func deregisterAll() {
        for token in observerTokens {
            NSNotificationCenter.defaultCenter().removeObserver(token)
        }

        observerTokens = []
    }

    func registerObserver(name: String, block: (NSNotification -> Void)) {
        let newToken = NSNotificationCenter.defaultCenter().addObserverForName(name, object: nil, queue: nil, usingBlock: block)

        observerTokens.append(newToken)
    }

    func registerObserver(name: String, forObject object: AnyObject, block: (NSNotification -> Void)) {
        let newToken = NSNotificationCenter.defaultCenter().addObserverForName(name, object: object, queue: nil, usingBlock: block)

        observerTokens.append(newToken)
    }
}
First, this simple class provides a Swift-specialized API around NSNotificationCenter. It provides an additional convenience method without an object parameter (rarely used, in my experience) to make it easier to use trailing-closure syntax. But most importantly, it keeps track of the observer objects generated when observers are registered, and removes them when the object is deinit'd.

A client of this class can simply keep a member variable of type NotificationManager and use it to register its observers. When the parent class is deallocated, the deinit method will automatically be called on its NotificationManager member variable, and its observers will be properly disposed of:

class MyController: UIViewController {
    private let notificationManager = NotificationManager()

    override init() {
        notificationManager.registerObserver(MyNotificationItemAdded) { note in
            println("item added!")
        }

        super.init()
    }

    required init(coder: NSCoder) {
        fatalError("decoding not implemented")
    }
}
When the MyController instance is deallocated, its NotificationManager member variable will be automatically deallocated, triggering the call to deregisterAll that will remove the dead objects from NSNotificationCenter.
In my apps, I add a notificationManager instance to my common UIViewController base class so I don't have to explicitly declare the member variable in all of my controller subclasses.

Another benefit of using my own wrapper around NSNotificationCenter is that I can add useful functionality, like group observers: an observer that's triggered when any one of a group of notifications are posted:

struct NotificationGroup {
    let entries: [String]

    init(_ newEntries: String...) {
        entries = newEntries
    }
}

extension NotificationManager {
    func registerGroupObserver(group: NotificationGroup, block: (NSNotification -> Void)) {
        for name in group.entries {
            registerObserver(name, block: block)
        }
    }
}
This can be a great way to easily set up an event handler to run when, for example, an item is changed in any way at all:
    let MyNotificationItemsChanged = NotificationGroup(
        MyNotificationItemAdded,
        MyNotificationItemDeleted,
        MyNotificationItemMoved,
        MyNotificationItemEdited
    )

    notificationManager.registerGroupObserver(MyNotificationItemsChanged) { note in
        // ...
    }
Categories: Offsite Blogs

ANN: monad-ste-0.1.0.0

haskell-cafe - Wed, 05/04/2016 - 5:09am
hello All, I took the time this evening to clean up some work that came out of a larger project and have made a corresponding release to hackage monad-ste is sibling to ST that provides efficient abortive errors using GHC's exception machinery as well as handy machinery like being a PrimMonad and PrimBase instance! (and take heed, you cannot catch ) hackage: https://hackage.haskell.org/package/monad-ste-0.1.0.0 github: https://github.com/cartazio/monad-ste also thanks Brian Schroeder, Joel Burget and Duncan Coutts for providing invaluable feedback, debugging and the like for this library enjoy! -Carter _______________________________________________ Haskell-Cafe mailing list Haskell-Cafe< at >haskell.org http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe
Categories: Offsite Discussion

Call for Papers: OCL and Textual Modeling Tools and Textual Model Transformations (OCL 2016) - Submit Your Paper Until July 17, 2016

General haskell list - Wed, 05/04/2016 - 3:10am
(Apologies for duplicates) CALL FOR PAPERS 16th International Workshop on OCL and Textual Modeling Co-located with ACM/IEEE 19th International Conference on Model Driven Engineering Languages and Systems (MODELS 2016) October 2-4, 2016, Saint-Malo, France (TBC) http://oclworkshop.github.io Modeling started out with UML and its precursors as a graphical notation. Such visual representations enable direct intuitive capturing of reality, but some of their features are difficult to formalize and lack the level of precision required to create complete and unambiguous specifications. Limitations of the graphical notations encouraged the development of text-based modeling languages that either integrate with or replace graphical notations for modeling. Typical examples of such languages are OCL, textual MOF, Epsilon, and Alloy. Textual modeling languages have their roots in formal language paradigms like logic, programming and databases. The goal of this w
Categories: Incoming News

Call for Papers: OCL and Textual Modeling Tools and Textual Model Transformations (OCL 2016) - Submit Your Paper Until July 17, 2016

haskell-cafe - Wed, 05/04/2016 - 3:10am
(Apologies for duplicates) CALL FOR PAPERS 16th International Workshop on OCL and Textual Modeling Co-located with ACM/IEEE 19th International Conference on Model Driven Engineering Languages and Systems (MODELS 2016) October 2-4, 2016, Saint-Malo, France (TBC) http://oclworkshop.github.io Modeling started out with UML and its precursors as a graphical notation. Such visual representations enable direct intuitive capturing of reality, but some of their features are difficult to formalize and lack the level of precision required to create complete and unambiguous specifications. Limitations of the graphical notations encouraged the development of text-based modeling languages that either integrate with or replace graphical notations for modeling. Typical examples of such languages are OCL, textual MOF, Epsilon, and Alloy. Textual modeling languages have their roots in formal language paradigms like logic, programming and databases. The goal of this w
Categories: Offsite Discussion
