News aggregator

Joachim Breitner: Switching to systemd-networkd

Planet Haskell - Tue, 10/14/2014 - 2:26pm

Ever since I read about systemd-networkd being in the making, I was looking forward to trying it out. I kept watching for the package to appear in Debian, or at least for ITP bugs. A few days ago, by accident, I noticed that I already have systemd-networkd on my machine: it is simply shipped with the systemd package!

My previous setup was a combination of ifplugd, to detect when I plug or unplug the ethernet cable, with a plain DHCP entry in /etc/network/interfaces. A while ago I was using guessnet to do a static setup depending on where I am, but I don’t need this flexibility any more, so the very simple approach with systemd-networkd is just fine with me. So after stopping ifplugd and

$ cat > /etc/systemd/network/ <<__END__
[Match]
Name=eth0

[Network]
DHCP=yes
__END__
$ systemctl enable systemd-networkd
$ systemctl start systemd-networkd

I was ready to go. Indeed, systemd-networkd, probably due to the integrated dhcp client, felt quite a bit faster than the old setup. And what’s more important (and my main motivation for the switch): It did the right thing when I put it to sleep in my office, unplug it there, go home, plug it in and wake it up. ifplugd failed to detect this change and I often had to manually run ifdown eth0 && ifup eth0; this now works.

But then I was bitten by what I guess some people call the viral nature of systemd: systemd-networkd does not update /etc/resolv.conf itself, but rather relies on systemd-resolved. And that requires me to change /etc/resolv.conf to be a symlink to /run/systemd/resolve/resolv.conf. But of course I also use my wireless adapter, which, at that point, was still managed using ifupdown, which would use dhclient, which updates /etc/resolv.conf directly.

So I investigated whether I can use systemd-networkd for my wireless interface as well. I am not using NetworkManager or the like, but rather keep wpa_supplicant running in roaming mode, controlled from ifupdown (not sure how exactly that works and what controls what, but it worked). I found out that this setup works just fine with systemd-networkd: I start wpa_supplicant with this service file (which I found in the wpasupplicant repo, but not yet in the Debian package):

[Unit]
Description=WPA supplicant daemon (interface-specific version)
Requires=sys-subsystem-net-devices-%i.device
After=sys-subsystem-net-devices-%i.device

[Service]
Type=simple
ExecStart=/sbin/wpa_supplicant -c/etc/wpa_supplicant/wpa_supplicant-%I.conf -i%I

[Install]

Then wpa_supplicant will get the interface up and down as it goes, while systemd-networkd, equipped with

[Match]
Name=wlan0

[Network]
DHCP=yes

does the rest.

So suddenly I have a system without /etc/init.d/networking and without ifup. Feels a bit strange, but also makes sense. I still need to migrate how I manage my UMTS modem device to that model.

The only thing that I’m missing so far is a way to trigger actions when the network configuration changes, like I could with /etc/network/if-up.d/ etc. I want to run things like killall -ALRM tincd and exim -qf. If you know how to do that, please tell me, or answer over at Stack Exchange.

Categories: Offsite Blogs

Building cross compiler fails

haskell-cafe - Tue, 10/14/2014 - 1:23pm
I am after a ghc cross compiler. I have been cross compiling stuff for years and also built cross-compiling gcc from time to time, but am having trouble building a cross-compiling ghc. I have read and associated pages. In the ghc-7.8.3 directory I do

./configure --target=arm-linux-gnueabi --with-gcc=arm-linux-gnueabi-gcc

but make eventually fails (in stage 1):

configure: error: in `/home/jon/build/ghc-7.8.3/libraries/terminfo':
configure:3386: arm-linux-gnueabi-gcc -o conftest -fno-stack-protector conftest.c -lncurses >&5
/usr/lib/gcc/arm-linux-gnueabi/4.6/../../../../arm-linux-gnueabi/bin/ld: cannot find -lncurses

Firstly, the requirement for termcap is surely unnecessary; we don't need it, as the target will just be processing data. We do have a libncurses kicking about and I know _exactly_ what -L option to pass to (in this case) arm-linux-gnueabi-gcc to make this work, but where to specify it to ghc's build system escapes me. I have tried settin
Categories: Offsite Discussion

let x = x in x

Haskell on Reddit - Tue, 10/14/2014 - 12:18pm

Can anyone explain when and why the code in the title is useful?

submitted by BanX
[link] [18 comments]
Categories: Incoming News

foreign libraries, dylibs,OS X Mavericks and GHC 7.8.3 woes

haskell-cafe - Tue, 10/14/2014 - 11:41am
Hello Cafe, I will let this gist talk for me: Even though there is a solution, it seems extremely unsatisfying to have to specify such paths manually. I always had very good experiences with FFI, OS X and GHC in the past, so this strikes me as a surprise. I have read a bit here and there about GHC 7.8.3 introducing the -dynamic flag (not sure about the specific problem it aims to solve, though) and changing some internals when it comes to library linking, but the information is scattered and fragmented. I would like to:

1) Shed some light on my specific use case: can I do better here? (aka have GHC figure out all the nitty-gritty details automatically)
2) Get any sort of pointers to documentation, tutorials, papers, anything to "teach me to fish".

Many thanks,
Alfredo
_______________________________________________ Haskell-Cafe mailing list Haskell-Cafe< at >
Categories: Offsite Discussion

“Real life” violations of Traversable laws?

Haskell on Reddit - Tue, 10/14/2014 - 11:35am

Here are the laws of Traversable:

  • traverse Identity ≡ Identity
  • traverse (Compose . fmap g . f) ≡ Compose . fmap (traverse g) . traverse f

One can, of course, write a silly instance of Traversable which doesn't satisfy the laws – for instance, only traverses half of the elements and removes the other half, or swaps left and right branches of a tree when traversing a binary tree, or something of the sort. However, are there “real life” stories where you've been bitten by an illegal Traversable instance which you weren't aware was illegal? Or you wrote an illegal instance for a good reason, and then somebody else who wasn't aware of this made an optimisation which was relying on Traversable laws?
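For illustration, here is a minimal instance of the kind described: a hypothetical two-element container whose Traversable deliberately visits the first element twice and never the second, so it breaks the identity law.

```haskell
import Data.Functor.Identity (Identity (..))

-- A hypothetical container, not from any real library.
data Two a = Two a a deriving (Show, Eq)

instance Functor Two where
    fmap f (Two x y) = Two (f x) (f y)

instance Foldable Two where
    foldr f z (Two x y) = f x (f y z)

-- Illegal: the second element is silently replaced by the first,
-- and its effects are never run.
instance Traversable Two where
    traverse f (Two x _) = Two <$> f x <*> f x

-- The identity law fails:
--   traverse Identity (Two 1 2) = Identity (Two 1 1), not Identity (Two 1 2)
```

Nothing in the types flags this instance as wrong; only the laws rule it out, which is why such bugs can survive unnoticed until someone relies on the laws.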

submitted by peargreen
[link] [9 comments]
Categories: Incoming News

Laws of `some` and `many`

Haskell on Reddit - Tue, 10/14/2014 - 10:58am

The documentation of Alternative says:

If defined, some and many should be the least solutions of the equations:

some v = (:) <$> v <*> many v
many v = some v <|> pure []

Although this is fine in many cases, and I'm ok with those being default definitions, I don't think that these equations should be considered laws.

For example, I would expect that if X is a Kleene algebra, then Const X is an Alternative. But the Kleene star doesn't necessarily satisfy the characterisation above, hence Const X cannot technically be made a legal Alternative, despite satisfying all the other laws, plus many more, including distributivity.
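To make the Const X case concrete, here is a sketch (all class and instance names are hypothetical, not from any library) of a Const-like Alternative over a Kleene algebra, where some and many are given by the Kleene star rather than by the default equations:

```haskell
import Control.Applicative

-- A minimal Kleene algebra: 0, 1, choice, sequencing, and star.
class KleeneAlgebra x where
    kzero, kone :: x
    kplus, ktimes :: x -> x -> x
    kstar :: x -> x

-- Booleans form a (degenerate) Kleene algebra: star of anything is 1.
instance KleeneAlgebra Bool where
    kzero = False
    kone = True
    kplus = (||)
    ktimes = (&&)
    kstar _ = True

-- A Const-like wrapper over a Kleene algebra.
newtype K x a = K x deriving (Show, Eq)

instance Functor (K x) where
    fmap _ (K x) = K x

instance KleeneAlgebra x => Applicative (K x) where
    pure _ = K kone
    K x <*> K y = K (x `ktimes` y)

instance KleeneAlgebra x => Alternative (K x) where
    empty = K kzero
    K x <|> K y = K (x `kplus` y)
    -- Override some/many with the star; the default definitions would
    -- loop forever on a Const-like type.
    some (K x) = K (x `ktimes` kstar x)
    many (K x) = K (kstar x)
```

This instance satisfies the Applicative and monoid laws for <|>, but its some/many need not be "the least solutions of the equations", which is exactly the tension the post describes.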

In optparse-applicative I defined a "free alternative functor with star", which is essentially an Alternative where some is a completely formal operation that satisfies no equations whatsoever.

I can't just use the default definition of some, because that creates an "infinite" structure that cannot be statically analysed.

Now, this type, despite satisfying all the Applicative laws and the associativity of <|>, is not technically an Alternative because of that silly requirement above, but I would really like to avoid having to use a different name for the Kleene star operator.

What do people think is the right approach here? I personally think that paragraph in the documentation of Alternative should be removed, as it's too simplistic and rules out interesting instances, but maybe there's a better solution.

submitted by pcapriotti
[link] [16 comments]
Categories: Incoming News

Status of GHC targetting Android on ARM?

Haskell on Reddit - Tue, 10/14/2014 - 10:52am

There have been several posts here and on mailing lists in the past about GHC targeting Android on ARM. Are there any more recent notes anywhere detailing the build process, any known problems, etc.?

submitted by homeopathetic
[link] [14 comments]
Categories: Incoming News

Wiki account creation

haskell-cafe - Tue, 10/14/2014 - 4:31am
Preferred Username is: Looms (I hope I have done this correctly)
Categories: Offsite Discussion

Shake's Internal State

Haskell on Reddit - Tue, 10/14/2014 - 1:04am
Categories: Incoming News

Haskell Introduction - YouTube

Mon, 10/13/2014 - 10:08pm
Categories: Offsite Blogs

online tutorial

haskell-cafe - Mon, 10/13/2014 - 6:13pm
Hello, I want to learn haskell by using the fpcomplete site. Now I wonder if this ( is a good tutorial for a beginner. Roelof
Categories: Offsite Discussion

New Functional Programming Job Opportunities

haskell-cafe - Mon, 10/13/2014 - 5:00pm
Here are some functional programming job opportunities that were posted recently: Functional Software Engineer at Cake Solutions Ltd Software Engineer / Developer at Clutch Analytics/ Windhaven Insurance Cheers, Sean Murphy
Categories: Offsite Discussion

Proposal: Add isSubsequenceOf to Data.List

libraries list - Mon, 10/13/2014 - 4:34pm
Data.List has `subsequences`, calculating all subsequences of a list, but it doesn't provide a function to check whether a list is a subsequence of another list. `isSubsequenceOf` would go into the "Predicates" section, which already contains:

  • isPrefixOf (dual of inits)
  • isSuffixOf (dual of tails)
  • isInfixOf

With this proposal, we would add:

  • isSubsequenceOf (dual of subsequences)

Suggested implementation:
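The suggested implementation was elided from this excerpt; one natural implementation (a sketch, which may differ from the one actually proposed on the list) walks the larger list, consuming elements of the candidate in order:

```haskell
-- Is the first list a subsequence of the second, i.e. can it be obtained
-- by deleting zero or more elements while keeping the order?
isSubsequenceOf :: Eq a => [a] -> [a] -> Bool
isSubsequenceOf []       _  = True
isSubsequenceOf _        [] = False
isSubsequenceOf a@(x:xs) (y:ys)
    | x == y    = isSubsequenceOf xs ys
    | otherwise = isSubsequenceOf a ys
```

For example, [1,3] is a subsequence of [1,2,3], but [3,1] is not, since order matters.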
Categories: Offsite Discussion

Getting the haddocks back (was: documentation buildfailing in hackage?)

haskell-cafe - Mon, 10/13/2014 - 4:25pm
On Sun, Oct 12, 2014 at 4:50 AM, Mateusz Kowalczyk <fuuzetsu< at >> wrote: I agree! My understanding (which is at /least/ 2nd or 3rd hand) is that the doc builds were turned off intentionally because of a security issue, and that they are unlikely to come back. Now, assuming that is the case, how can we solve the "there is no documentation" issue? Some ideas to kick-start discussion:

  • Provide an option to include haddocks in the sdist bundle, and extract them on Hackage for display.
  • Add a 'cabal uploadHaddock' command.
  • Run all haddock builds in a VM/Docker container etc. that mitigates the security concerns.
  • ???

--Rogan
Categories: Offsite Discussion

Neil Mitchell: Shake's Internal State

Planet Haskell - Mon, 10/13/2014 - 2:53pm

Summary: Shake is not like Make: it has different internal state, which leads to different behaviour. I also store the state in an optimised way.

Update: I'm keeping an up to date version of this post in the Shake repo, which includes a number of questions/answers at the bottom, and is likely to evolve over time to incorporate that information into the main text.

In order to understand the behaviour of Shake, it is useful to have a mental model of Shake's internal state. To be a little more concrete, let's talk about Files which are stored on disk and have ModTime values associated with them, where modtime gives the ModTime of a FilePath (Shake is actually generalised over all of those things). Let's also imagine we have the rule:

file *> \out -> do
    need [dependency]
    run

So file depends on dependency and rebuilds by executing the action run.

The Make Model

In Make there is no additional state, only the file-system. A file is considered dirty if it has a dependency such that:

modtime dependency > modtime file

As a consequence, run must update modtime file, or the file will remain dirty and rebuild in subsequent runs.

The Shake Model

For Shake, the state is:

database :: File -> (ModTime, [(File, ModTime)])

Each File is associated with a pair containing the ModTime of that file, plus a list of each dependency and their modtimes, all from when the rule was last run. As part of executing the rule above, Shake records the association:

file -> (modtime file, [(dependency, modtime dependency)])

The file is considered dirty if any of the information is no longer current. In this example, if either modtime file changes, or modtime dependency changes.

There are a few consequences of the Shake model:

  • There is no requirement for modtime file to change as a result of run. The file is dirty because something changed, after we run the rule and record new information it becomes clean.
  • Since a file is not required to change its modtime, things that depend on file may not require rebuilding even if file rebuilds.
  • If you update an output file, it will rebuild that file, as the ModTime of a result is tracked.
  • Shake only ever performs equality tests on ModTime, never ordering, which means it generalises to other types of value and works even if your file-system sometimes has incorrect times.

These consequences allow two workflows that aren't pleasant in Make:

  • Generated files, where the generator changes often, but the output of the generator for a given input changes rarely. In Shake, you can rerun the generator regularly, and using a function that writes only on change (writeFileChanged in Shake) you don't rebuild further. This technique can reduce some rebuilds from hours to seconds.
  • Configuration file splitting, where you have a configuration file with lots of key/value pairs, and want certain rules to only depend on a subset of the keys. In Shake, you can generate a file for each key/value and depend only on that key. If the configuration file updates, but only a subset of keys change, then only a subset of rules will rebuild. Alternatively, using Development.Shake.Config you can avoid the file for each key, but the dependency model is the same.
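The write-only-on-change idea behind the first workflow can be sketched as a standalone function (a minimal sketch of the idea only, not Shake's actual writeFileChanged):

```haskell
import Control.Monad (unless)
import System.Directory (doesFileExist)

-- Write the file only if its content actually differs. Skipping the
-- write keeps the file's ModTime stable, so rules that depend on it
-- stay clean even though the generator reran.
writeFileChanged :: FilePath -> String -> IO ()
writeFileChanged file new = do
    exists <- doesFileExist file
    old <- if exists
        then do
            c <- readFile file
            length c `seq` pure (Just c)  -- force the read before writing
        else pure Nothing
    unless (old == Just new) $ writeFile file new
```

Under the Shake model this is enough to cut the rebuild short: the dependency's recorded ModTime still matches, so dependents remain valid.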

Optimising the Model

The above model expresses the semantics of Shake, but the implementation uses an optimised model. Note that the original Shake paper gives the optimised model, not the easy to understand model - that's because I only figured out the difference a few days ago (thanks to Simon Marlow, Simon Peyton Jones and Andrey Mokhov). To recap, we started with:

database :: File -> (ModTime, [(File, ModTime)])

We said that a File is dirty if any of the ModTime values change. That's true, but what we are really doing is comparing the first ModTime with the ModTime on disk, and the list of second ModTimes with those in database. Assuming we are passed the current ModTime on disk, then a file is valid if:

valid :: File -> ModTime -> Bool
valid file mNow =
    mNow == mOld &&
    and [fst (database d) == m | (d,m) <- deps]
    where (mOld, deps) = database file

The problem with this model is that we store each File/ModTime pair once for the file itself, plus once for every dependency. That's a fairly large amount of information, and in Shake both File and ModTime can be arbitrarily large for user rules.

Let's introduce two assumptions:

Assumption 1: A File only has at most one ModTime per Shake run, since a file will only rebuild at most once per run. We use Step for the number of times Shake has run on this project.

Consequence 1: The ModTime for a file and the ModTime for its dependencies are all recorded in the same run, so they share the same Step.

Assumption 2: We assume that if the ModTime of a File changes, and then changes back to a previous value, we can still treat that as dirty. In the specific case of ModTime that would require time travel, but even for other values it is very rare.

Consequence 2: We only use historical ModTime values to compare them for equality with current ModTime values. We can instead record the Step at which the ModTime last changed, assuming all older Step values are unequal.

The result is:

database :: File -> (ModTime, Step, Step, [File])

valid :: File -> ModTime -> Bool
valid file mNow =
    mNow == mOld &&
    and [sBuilt >= changed (database d) | d <- deps]
    where (mOld, sBuilt, sChanged, deps) = database file
          changed (_, _, sChanged, _) = sChanged

For each File we store its most recently recorded ModTime, the Step at which it was built, the Step when the ModTime last changed, and the list of dependencies. We now check that the Step at which this file was built is at least the Step at which each dependency last changed. Using the assumptions above, the original formulation is equivalent.
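The optimised check can be exercised with a toy model (hypothetical stand-in types; Shake's real File and ModTime are generalised):

```haskell
import qualified Data.Map as Map

-- Toy stand-ins for Shake's generalised types.
type File = String
type ModTime = Int
type Step = Int

-- file -> (latest ModTime, Step built, Step the value last changed, deps)
type Database = Map.Map File (ModTime, Step, Step, [File])

-- A file is valid if its on-disk ModTime is unchanged and it was built
-- no earlier than the last change of each of its dependencies.
validIn :: Database -> File -> ModTime -> Bool
validIn db file mNow = mNow == mOld && and [sBuilt >= changedOf d | d <- deps]
  where
    (mOld, sBuilt, _, deps) = db Map.! file
    changedOf d = let (_, _, c, _) = db Map.! d in c

-- "out" was built in step 2; "dep" last changed in step 1, so "out" is clean.
example :: Database
example = Map.fromList
    [ ("out", (100, 2, 2, ["dep"]))
    , ("dep", ( 50, 2, 1, []))
    ]
```

Bumping "dep" so that it last changed in step 3 (after "out" was built in step 2) makes "out" invalid again, as does changing the on-disk ModTime of "out" itself.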

Note that instead of storing one ModTime for the file plus one per dependency, we now store exactly one ModTime plus two small Step values.

We still store each file many times, but we reduce that by creating a bijection between File (arbitrarily large) and Id (small index) and only storing Id.
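The File/Id mapping can be sketched as a small interning table (a sketch only; Shake's actual mapping lives elsewhere, and the reverse Id-to-File direction of the bijection is omitted here for brevity):

```haskell
import qualified Data.Map as Map

-- Assign each distinct key a small Int id, so the database can store
-- compact ids instead of repeating arbitrarily large File keys.
data Intern k = Intern
    { internIds  :: Map.Map k Int  -- key -> id
    , internNext :: Int            -- next unused id
    }

emptyIntern :: Intern k
emptyIntern = Intern Map.empty 0

-- Look up the key's id, allocating a fresh one on first sight.
intern :: Ord k => k -> Intern k -> (Int, Intern k)
intern k t = case Map.lookup k (internIds t) of
    Just i  -> (i, t)
    Nothing -> let i = internNext t
               in (i, Intern (Map.insert k i (internIds t)) (i + 1))
```

Interning the same file twice yields the same id, so each File's full representation is stored only once however many dependency lists mention it.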

Implementing the Model

For those who like concrete details, which might change at any point in the future, the relevant definition is in Development.Shake.Database:

data Result = Result
    {result :: Value -- the result when last built
    ,built :: Step -- when it was actually run
    ,changed :: Step -- when the result last changed
    ,depends :: [[Id]] -- dependencies
    ,execution :: Float -- duration of last run
    ,traces :: [Trace] -- a trace of the expensive operations
    } deriving Show

The differences from the model are:

  • ModTime became Value, because Shake deals with lots of types of rules.
  • The dependencies are stored as a list of lists, so we still have access to the parallelism provided by need, and if we start rebuilding some dependencies we can do so in parallel.
  • We store execution and traces so we can produce profiling reports.
  • I haven't shown the File/Id mapping here - that lives elsewhere.
  • I removed all strictness/UNPACK annotations from the definition above, and edited a few comments.

As we can see, the code follows the optimised model quite closely.

Categories: Offsite Blogs