News aggregator

Antti-Juhani Kaijanaho (ibid): A milestone toward a doctorate

Planet Haskell - Sat, 08/23/2014 - 11:44am

Yesterday I received my official diploma for the degree of Licentiate of Philosophy. The degree lies between a Master’s degree and a doctorate, and is not required; it consists of the coursework required for a doctorate, and a Licentiate Thesis, “in which the student demonstrates good conversance with the field of research and the capability of independently and critically applying scientific research methods” (official translation of the Government decree on university degrees 794/2004, Section 23 Paragraph 2).

The title and abstract of my Licentiate Thesis follow:

Kaijanaho, Antti-Juhani
The extent of empirical evidence that could inform evidence-based design of programming languages. A systematic mapping study.
Jyväskylä: University of Jyväskylä, 2014, 243 p.
(Jyväskylä Licentiate Theses in Computing,
ISSN 1795-9713; 18)
ISBN 978-951-39-5790-2 (nid.)
ISBN 978-951-39-5791-9 (PDF)
Finnish summary

Background: Programming language design is not usually informed by empirical studies. In other fields similar problems have inspired an evidence-based paradigm of practice. Central to it are secondary studies summarizing and consolidating the research literature. Aims: This systematic mapping study looks for empirical research that could inform evidence-based design of programming languages. Method: Manual and keyword-based searches were performed, as was a single round of snowballing. There were 2056 potentially relevant publications, of which 180 were selected for inclusion, because they reported empirical evidence on the efficacy of potential design decisions and were published on or before 2012. A thematic synthesis was created. Results: Included studies span four decades, but activity has been sparse until the last five years or so. The form of conditional statements and loops, as well as the choice between static and dynamic typing have all been studied empirically for efficacy in at least five studies each. Error proneness, programming comprehension, and human effort are the most common forms of efficacy studied. Experimenting with programmer participants is the most popular method. Conclusions: There clearly are language design decisions for which empirical evidence regarding efficacy exists; they may be of some use to language designers, and several of them may be ripe for systematic reviewing. There is concern that the lack of interest generated by studies in this topic area until the recent surge of activity may indicate serious issues in their research approach.

Keywords: programming languages, programming language design, evidence-based paradigm, efficacy, research methods, systematic mapping study, thematic synthesis

A Licentiate Thesis is assessed by two examiners, usually drawn from outside of the home university; they write (either jointly or separately) a substantiated statement about the thesis, in which they suggest a grade. The final grade is almost always the one suggested by the examiners. I was very fortunate to have such prominent scientists as Dr. Stefan Hanenberg and Prof. Stein Krogdahl as the examiners of my thesis. They recommended, and I received, the grade “very good” (4 on a scale of 1–5).

The thesis has been published in our faculty’s licentiate thesis series and has appeared in our university’s electronic database (along with a very small number of printed copies). In the meantime, if anyone wants an electronic preprint, send me email at

[Figure 1 of the thesis: an overview of the mapping process]

As you can imagine, the last couple of months in the spring were very stressful for me, as I pressed on to submit this thesis. After submission, it took me nearly two months to recover (which certain people who emailed me on Planet Haskell business during that period certainly noticed). It represents the fruit of almost four years of work (far more than a Licentiate Thesis normally takes, but never mind that), as I designed this study in Fall 2010.

[Figure 8 of the thesis: Core studies per publication year]

Recently, I have been writing in my blog a series of posts in which I have been trying to clear my head about certain foundational issues that irritated me during the writing of the thesis. The thesis contains some of that, but that part of it is not very strong, as my examiners put it, for various reasons. The posts have been a deliberately non-academic attempt to shape the thoughts into words, to see what they look like fixed into a tangible form. (If you go read them, be warned: many of them are deliberately provocative, and many of them are intended as tentative in fact if not in phrasing; the series also is very incomplete at this time.)

I closed my previous post, the latest post in that series, as follows:

In fact, the whole of 20th Century philosophy of science is a big pile of failed attempts to explain science; not one explanation is fully satisfactory. [...] Most scientists enjoy not pondering it, for it’s a bit like being a cartoon character: so long as you don’t look down, you can walk on air.

I wrote my Master’s Thesis (PDF) in 2002. It was about the formal method called “B”; but I took a lot of time and pages to examine the history and content of formal logic. My supervisor was, understandably, exasperated, but I did receive the highest possible grade for it (which I never have fully accepted I deserved). The main reason for that digression: I looked down, and I just had to go poke the bridge I was standing on to make sure I was not, in fact, walking on air. In the many years since, I’ve taken a lot of time to study foundations, first of mathematics, and more recently of science. It is one reason it took me about eight years to come up with a doable doctoral project (and I am still amazed that my department kept employing me; but I suppose they like my teaching, as do I). The other reason was, it took me that long to realize how to study the design of programming languages without going where everyone has gone before.

Debian people, if any are still reading, may find it interesting that I found significant use for the dctrl-tools toolset I have been writing for Debian for about fifteen years: I stored my data collection as a big pile of dctrl-format files. I ended up making some changes to the existing tools (I should upload the new version soon, I suppose), and I wrote another toolset (unfortunately one that is not general purpose, like the dctrl-tools are) in the process.

For the Haskell people, I mainly have an apology for not attending to Planet Haskell duties in the summer; but I am back in business now. I also note, somewhat to my regret, that I found very few studies dealing with Haskell. I just checked; I mention Haskell several times in the background chapter, but it is not mentioned in the results chapter (because there were no studies worthy of special notice).

I am already working on extending this work into a doctoral thesis. I expect, and hope, to complete that one faster.

Categories: Offsite Blogs

Set (Set a) like container - but more efficient?

haskell-cafe - Sat, 08/23/2014 - 11:42am
Does a Set (Set a) like container exist already, but one which is represented as a tree internally, e.g.

A - B - C
      ` - D
  ` - X - Y

which would represent ABC, ABD and AXY? If it does not, what would be a nice name for such a container?

Marc Weber
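What is being described is essentially a set trie: each set is stored as a sorted path from the root, so sets sharing a prefix share structure. A minimal sketch (the names `SetTrie`, `insertSet`, and `memberSet` are made up for illustration, not an existing package):

```haskell
import qualified Data.Map.Strict as Map
import Data.Map.Strict (Map)

-- A minimal "set trie": each root-to-marked-node path is one set, with
-- elements kept in sorted order so shared prefixes share tree structure.
data SetTrie a = SetTrie
  { isEnd    :: Bool               -- True if a stored set ends at this node
  , children :: Map a (SetTrie a)
  } deriving Show

emptyTrie :: SetTrie a
emptyTrie = SetTrie False Map.empty

-- Insert a set, given as a sorted list of distinct elements.
insertSet :: Ord a => [a] -> SetTrie a -> SetTrie a
insertSet []     (SetTrie _ cs) = SetTrie True cs
insertSet (x:xs) (SetTrie e cs) =
  SetTrie e (Map.alter (Just . insertSet xs . maybe emptyTrie id) x cs)

memberSet :: Ord a => [a] -> SetTrie a -> Bool
memberSet []     t = isEnd t
memberSet (x:xs) t = maybe False (memberSet xs) (Map.lookup x (children t))
```

With this, the three sets in the example share the node for A, and ABC/ABD additionally share the node for B.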
Categories: Offsite Discussion

How to add Haddock comments for standalone derived instances?

haskell-cafe - Sat, 08/23/2014 - 11:04am
Hi cafe,

Is there any way to add a documentation comment for instances defined with StandaloneDeriving? I'm currently defining a data type using GADTs, together with its Typeable instance. Normally, this can be done only with the StandaloneDeriving and DeriveDataTypeable extensions. I added a `Typeable` instance for Ordinal recently, so I want to add some comments like "Since". But none of the following works or, even worse, haddock won't compile:

* Just before the `deriving instance` line, using `-- | `
* Just after the `deriving` keyword but before `instance`, using `-- | `
* Just after `Typeable Ordinal`, with no newline in between, using `-- ^ `
* On the line after the `deriving` clause, with `-- ^ `

Is there any way to add documentation for instances with standalone deriving, or is it just not supported yet?
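For reference, this is the shape of code in question (the `Expr` GADT below is a hypothetical stand-in for the poster's Ordinal type); the standalone deriving itself compiles fine, and the open question is only where Haddock will accept a comment on it:

```haskell
{-# LANGUAGE GADTs #-}
{-# LANGUAGE StandaloneDeriving #-}

-- A GADT cannot use an ordinary `deriving` clause for Show,
-- so a standalone `deriving instance` declaration is needed.
data Expr a where
  IntE  :: Int  -> Expr Int
  BoolE :: Bool -> Expr Bool

-- This is the declaration one would like to attach a Haddock comment to.
deriving instance Show (Expr a)
```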
Categories: Offsite Discussion

Joachim Breitner: This blog goes static

Planet Haskell - Sat, 08/23/2014 - 9:54am

After a bit more than 9 years, I am replacing Serendipity, which has been hosting my blog, with a self-made static solution. This means that when you are reading this, my server no longer has to execute a rather large body of untyped code to produce the bytes sent to you. Instead, that happens once in a while on my laptop, and the results are stored as static files on the server.

I hope to get a little performance boost from this, so that my site can more easily hold up to being mentioned on hackernews. I also do not want to worry about security issues in Serendipity – static files are not hacked.

Of course there are downsides to having a static blog. The editing is a bit more annoying: I need to use my laptop (previously I could post from anywhere) and I edit text files instead of using a JavaScript-based WYSIWYG editor (but I was slightly annoyed by that as well). But most importantly, your readers cannot comment on static pages. There are cloud-based solutions that integrate commenting via JavaScript on your static pages, but I decided to go for something even more low-level: you can comment by writing an e-mail to me, and I’ll put your comment on the page. This has the nice benefit of solving the blog comment spam problem.

The actual implementation of the blog is rather masochistic, as my web page runs on one of these weird obfuscated languages (XSLT). Previously, it consisted of XSLT stylesheets producing makefiles calling XSLT stylesheets. Now it is a bit more self-contained, with one XSLT stylesheet writing out all the various HTML and RSS files.

I managed to import all my old posts and comments thanks to this script by Michael Hamann (I had played around with it some months ago and just spent what seemed like an hour finding the script again) and a small Haskell script. Old URLs are rewritten (using mod_rewrite) to the new paths, but feed readers might still be confused by this.

This opens the door to a long due re-design of my webpage. But not today...

Categories: Offsite Blogs

Why are `when` and `unless` not polymorphic?

Haskell on Reddit - Sat, 08/23/2014 - 7:18am

Is there any reason when and unless aren't polymorphic in their first argument? I understand that the return type has to be m (), but why don't they take an m a and void it?
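A sketch of what such variants could look like (the names `when'` and `unless'` are hypothetical, not the base API): accept any `m a` and discard the result with `void`.

```haskell
import Control.Monad (void)

-- Hypothetical polymorphic variants of `when`/`unless`: the action may
-- return any `m a`; its result is discarded, so the whole thing is `m ()`.
when' :: Monad m => Bool -> m a -> m ()
when' True  m = void m
when' False _ = return ()

unless' :: Monad m => Bool -> m a -> m ()
unless' b = when' (not b)
```

With `Maybe` as the monad: `when' False` skips the action entirely, so even `when' False Nothing` succeeds as `Just ()`, while `when' True Nothing` propagates the failure.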

submitted by theonlycosmonaut
[link] [21 comments]
Categories: Incoming News

Dominic Steinitz: Importance Sampling

Planet Haskell - Sat, 08/23/2014 - 2:05am
Importance Sampling

Suppose we have a random variable X with pdf f(x) = (1/2) exp (-|x|) and we wish to find its second moment numerically. However, the random-fu package does not support sampling from such a distribution. We notice that

E[X^2] = ∫ x^2 f(x) dx = ∫ x^2 (f(x) / g(x)) g(x) dx

So we can sample from a distribution g that we can draw from, here the normal distribution N(0, 2^2) with pdf g(x) = exp (-x^2/8) / sqrt (8π), and evaluate x^2 f(x) / g(x)

> {-# OPTIONS_GHC -Wall                     #-}
> {-# OPTIONS_GHC -fno-warn-name-shadowing  #-}
> {-# OPTIONS_GHC -fno-warn-type-defaults   #-}
> {-# OPTIONS_GHC -fno-warn-unused-do-bind  #-}
> {-# OPTIONS_GHC -fno-warn-missing-methods #-}
> {-# OPTIONS_GHC -fno-warn-orphans         #-}

> module Importance where

> import Control.Monad
> import Data.Random.Source.PureMT
> import Data.Random
> import Data.Random.Distribution.Binomial
> import Data.Random.Distribution.Beta
> import Control.Monad.State
> import qualified Control.Monad.Writer as W

> sampleImportance :: RVarT (W.Writer [Double]) ()
> sampleImportance = do
>   x <- rvarT $ Normal 0.0 2.0
>   let x2 = x^2
>       u = x2 * 0.5 * exp (-(abs x))
>       v = (exp ((-x2)/8)) * (recip (sqrt (8*pi)))
>       w = u / v
>   lift $ W.tell [w]
>   return ()

> runImportance :: Int -> [Double]
> runImportance n =
>   snd $
>   W.runWriter $
>   evalStateT (sample (replicateM n sampleImportance))
>              (pureMT 2)

We can run this 10,000 times to get an estimate.

ghci> import Formatting
ghci> format (fixed 2) (sum (runImportance 10000) / 10000)
"2.03"

Since we know that the second moment of the double exponential distribution is 2/λ^2, where λ is the rate (1 in this example), the exact answer is 2, which is not too far from our estimate using importance sampling.

The value of

w(x) = f(x) / g(x)

is called the weight; f is the pdf from which we wish to sample, and g is the pdf of the importance distribution.

Importance Sampling Approximation of the Posterior

Suppose that the posterior distribution of a model in which we are interested has a complicated functional form and that we therefore wish to approximate it in some way. First assume that we wish to calculate the expectation of some arbitrary function f of the parameters θ:

E[f(θ) | x] = ∫ f(θ) p(θ | x) dθ

Using Bayes

p(θ | x) = p(x | θ) p(θ) / Z

where Z is some normalizing constant.

As before we can re-write this using a proposal distribution q(θ)

E[f(θ) | x] = (1/Z) ∫ f(θ) (p(x | θ) p(θ) / q(θ)) q(θ) dθ

We can now sample θ_i ~ q(θ) repeatedly to obtain

E[f(θ) | x] ≈ Σ_i f(θ_i) w_i / Σ_i w_i

where the weights w_i are defined as before by

w_i = p(x | θ_i) p(θ_i) / q(θ_i)

We follow Alex Cook and use the example from (Rerks-Ngarm et al. 2009). We take the prior as Beta(1, 1) and use U(0, 1) as the proposal distribution. In this case the proposal and the prior are identical, just expressed differently, and therefore cancel.

Note that we use the log of the pdf in our calculations otherwise we suffer from (silent) underflow, e.g.,

ghci> pdf (Binomial nv (0.4 :: Double)) xv
0.0

On the other hand if we use the log pdf form

ghci> logPdf (Binomial nv (0.4 :: Double)) xv
-3900.8941170876574

> xv, nv :: Int
> xv = 51
> nv = 8197

> sampleUniform :: RVarT (W.Writer [Double]) ()
> sampleUniform = do
>   x <- rvarT StdUniform
>   lift $ W.tell [x]
>   return ()

> runSampler :: RVarT (W.Writer [Double]) () ->
>               Int -> Int -> [Double]
> runSampler sampler seed n =
>   snd $
>   W.runWriter $
>   evalStateT (sample (replicateM n sampler))
>              (pureMT (fromIntegral seed))

> sampleSize :: Int
> sampleSize = 1000

> pv :: [Double]
> pv = runSampler sampleUniform 2 sampleSize

> logWeightsRaw :: [Double]
> logWeightsRaw = map (\p -> logPdf (Beta 1.0 1.0) p +
>                            logPdf (Binomial nv p) xv -
>                            logPdf StdUniform p) pv

> logWeightsMax :: Double
> logWeightsMax = maximum logWeightsRaw

> weightsRaw :: [Double]
> weightsRaw = map (\w -> exp (w - logWeightsMax)) logWeightsRaw

> weightsSum :: Double
> weightsSum = sum weightsRaw

> weights :: [Double]
> weights = map (/ weightsSum) weightsRaw

> meanPv :: Double
> meanPv = sum $ zipWith (*) pv weights

> meanPv2 :: Double
> meanPv2 = sum $ zipWith (\p w -> p * p * w) pv weights

> varPv :: Double
> varPv = meanPv2 - meanPv * meanPv

We get the answer

ghci> meanPv
6.400869727227364e-3

But if we look at the size of the weights and the effective sample size

ghci> length $ filter (>= 1e-6) weights
9
ghci> (sum weights)^2 / (sum $ map (^2) weights)
4.581078458313967

so we may not be getting a very good estimate. Let’s try

> sampleNormal :: RVarT (W.Writer [Double]) ()
> sampleNormal = do
>   x <- rvarT $ Normal meanPv (sqrt varPv)
>   lift $ W.tell [x]
>   return ()

> pvC :: [Double]
> pvC = runSampler sampleNormal 3 sampleSize

> logWeightsRawC :: [Double]
> logWeightsRawC = map (\p -> logPdf (Beta 1.0 1.0) p +
>                             logPdf (Binomial nv p) xv -
>                             logPdf (Normal meanPv (sqrt varPv)) p) pvC

> logWeightsMaxC :: Double
> logWeightsMaxC = maximum logWeightsRawC

> weightsRawC :: [Double]
> weightsRawC = map (\w -> exp (w - logWeightsMaxC)) logWeightsRawC

> weightsSumC :: Double
> weightsSumC = sum weightsRawC

> weightsC :: [Double]
> weightsC = map (/ weightsSumC) weightsRawC

> meanPvC :: Double
> meanPvC = sum $ zipWith (*) pvC weightsC

> meanPvC2 :: Double
> meanPvC2 = sum $ zipWith (\p w -> p * p * w) pvC weightsC

> varPvC :: Double
> varPvC = meanPvC2 - meanPvC * meanPvC

Now the weights and the effective sample size are more reassuring

ghci> length $ filter (>= 1e-6) weightsC
1000
ghci> (sum weightsC)^2 / (sum $ map (^2) weightsC)
967.113872888872

And we can take more confidence in the estimate

ghci> meanPvC
6.371225269833208e-3

Bibliography

Rerks-Ngarm, Supachai, Punnee Pitisuttithum, Sorachai Nitayaphan, Jaranit Kaewkungwal, Joseph Chiu, Robert Paris, Nakorn Premsri, et al. 2009. “Vaccination with ALVAC and AIDSVAX to Prevent HIV-1 Infection in Thailand.” New England Journal of Medicine 361 (23) (December 3): 2209–2220. doi:10.1056/nejmoa0908492.

Categories: Offsite Blogs

Unable to build a mingw cross compiler.

haskell-cafe - Fri, 08/22/2014 - 11:38pm
Greetings,

I'm trying to build GHC to cross compile to Windows. GCC and friends are in /usr/bin/x86_64-w64-mingw32-*, libraries and headers are in /usr/x86_64-w64-mingw32/{include,lib}. I configured with:

./configure --target=x86_64-w64-mingw32 \
    --with-gcc=/usr/bin/x86_64-w64-mingw32-gcc  # Needed because otherwise /usr/bin/gcc would be used

Building fails with the following error:

utils/hsc2hs/dist/build/Main.o: In function `s2nI_info':
(.text+0x5c5): undefined reference to `GetModuleFileNameW'
collect2: error: ld returned 1 exit status

(The GHC invocation line is included in the attached file.) So it seems that GHC either uses the wrong linker or invokes it with the wrong search path. However, the configure script detects the right linker (/usr/bin/x86_64-w64-mingw32). Is this a bug, or a feature?

System data:
OS: Debian jessie Linux/GNU i386
Installed GHC (for bootstrapping): version 7.6.3
GHC I'm trying to build: version 7.8.3

Regards
Sven

"/usr/bin/ghc" -o utils/hsc2
Categories: Offsite Discussion

Why is my program so slow?

Haskell on Reddit - Fri, 08/22/2014 - 11:11pm

I've started trying out Haskell to solve some Project Euler problems, and my program runs very slowly (~40s with -O) compared with ~0.5s from a similar C program. Is there anything obvious that would make this run slow? Any other comments on my code?

import Data.Maybe (fromJust)

reversedDigits :: Int -> [Int]
reversedDigits x
  | x < 0 = reversedDigits (-x)
  | x < 10 = [x]
  | otherwise = mod x 10 : reversedDigits (div x 10)

digitSquareSum :: Int -> Int
digitSquareSum x = sum [y^2 | y <- reversedDigits x]

chain :: Int -> Int
chain 89 = 89
chain 1 = 1
chain x = chain (digitSquareSum x)

chainLookup :: Int -> Int
chainLookup x
  | x <= 567 = fromJust (lookup x lookuptable)
  | otherwise = fromJust (lookup (digitSquareSum x) lookuptable)
  where lookuptable = [(x, chain x) | x <- [1..567]]

main = print (length [x | x <- [1..9999999], chainLookup x == 89])

Profiling says it's allocating an insane amount of memory:

total time  = 24.90 secs (24897 ticks @ 1000 us, 1 processor)
total alloc = 31,609,551,352 bytes (excludes profiling overheads)
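One likely culprit (a sketch, not a benchmark): `lookuptable` is an association list bound in a `where` clause, so unless GHC happens to float it out, it is rebuilt on calls to `chainLookup`, and `lookup` on a list is O(n) in any case. Hoisting it to a top-level `Data.Array` gives one shared table with O(1) indexing; something like (names changed to avoid clashing with the original):

```haskell
import Data.Array

-- digit-square-sum without building an intermediate list
digitSquareSum' :: Int -> Int
digitSquareSum' = go 0
  where go acc 0 = acc
        go acc x = go (acc + (x `mod` 10) ^ 2) (x `div` 10)

chainEnd :: Int -> Int
chainEnd 1  = 1
chainEnd 89 = 89
chainEnd x  = chainEnd (digitSquareSum' x)

-- top-level CAF: built once, shared by every call, O(1) lookup
endTable :: Array Int Int
endTable = listArray (1, 567) [chainEnd x | x <- [1 .. 567]]

-- any x < 10^7 has digit-square-sum at most 7 * 81 = 567
chainLookup' :: Int -> Int
chainLookup' x
  | x <= 567  = endTable ! x
  | otherwise = endTable ! digitSquareSum' x
```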

submitted by sheepweevil
[link] [16 comments]
Categories: Incoming News

Maintaining test-suite dependencies with cabal

haskell-cafe - Fri, 08/22/2014 - 5:49pm
Dear reader,

I have a single executable package in Cabal [1] and I have added a test for it using an example configuration from the ltc package [2]. With my test on my executable I can't depend on the main package, but I seem to have to repeat the dependencies of my executable, or split off functionality into a library. [3] Can I easily depend on the dependencies of another package, or should I create a library and depend on that in both my test and my executable? All source is at:

Greetings,
Bram

[1] [2] [3]

A part of my cabal file is added below:

executable after
  main-is: Main.hs
  build-depends: base >=4.7 && <4.8,
                 options ==1.*,
                 directory ==1.2.*,
                 parallel-io ==0.3.*,
                 filepath ==1.3.*,
                 unix ==2.7.*
  hs-source-dirs: src
g
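The usual answer is the second option: move the shared code into a library section and have both the executable and the test-suite depend on that library, so the dependency list lives in one place. A sketch (package name `after` and module/file names are hypothetical):

```cabal
library
  hs-source-dirs:  src
  exposed-modules: After
  build-depends:   base >=4.7 && <4.8, options ==1.*, directory ==1.2.*,
                   parallel-io ==0.3.*, filepath ==1.3.*, unix ==2.7.*

executable after
  main-is:        Main.hs
  hs-source-dirs: app
  build-depends:  base, after     -- the library section above

test-suite after-tests
  type:           exitcode-stdio-1.0
  main-is:        Spec.hs
  hs-source-dirs: test
  build-depends:  base, after
```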
Categories: Offsite Discussion

GHC company-mode

Haskell on Reddit - Fri, 08/22/2014 - 3:35pm
Categories: Incoming News

λ Bubble Pop!

Haskell on Reddit - Fri, 08/22/2014 - 11:12am
Categories: Incoming News

CFP - JSS, Elsevier - Special issue on adaptive and reconfigurable software systems and architectures

General haskell list - Fri, 08/22/2014 - 10:58am
Call for papers
===============================
Journal of Systems and Software (JSS, Elsevier)
Impact Factor: 1.245 (5-Year Impact Factor: 1.443)
Special issue on Adaptive and reconfigurable software systems and architectures
===============================

The focal concerns are Service-oriented and component-based software systems, applications and architectures addressing adaptation and reconfiguration issues. Different investigation topics are involved, such as: CBSE, SOA, Functional and Non Functional (NF) requirements (QoS, performance, resilience), monitoring, diagnosis, decision and execution of adaptation and reconfiguration. Different research axes are covered: concepts, methods, techniques, and tools to design, develop, deploy and manage adaptive and reconfigurable software systems. The development of composite services poses very interesting challenges concerning their functional and NF requirements. On the one hand, a composite software system depends on the NF requirements of its constitut
Categories: Incoming News

Temporarily taking over hdevtools on Hackage.

haskell-cafe - Fri, 08/22/2014 - 5:33am
Hi all, it seems that Bit Connor (the author of hdevtools - has been MIA since January. There are a number of pull requests that need to be merged and uploaded to hackage for GHC 7.8. I tried contacting him last week through both emails listed on hackage - one bounced and the other is unanswered. Has anyone seen/heard from the author? I'd like to temporarily hijack hackage responsibilities until we hear from him again in order to merge and upload these changes (
Categories: Offsite Discussion

Philip Wadler: Informatics Independence Referendum Debate

Planet Haskell - Fri, 08/22/2014 - 4:43am
School of Informatics, University of Edinburgh
Independence Referendum Debate
4.00--5.30pm Monday 25 August
Appleton Tower Lecture Room 2

For the NAYs: Prof. Alan Bundy
For the AYEs: Prof. Philip Wadler
Moderator: Prof. Mike Fourman

All members of the School of Informatics
and the wider university community welcome
(This is a debate among colleagues and not a formal University event.
All views expressed are those of the individuals who express them,
and not the University of Edinburgh.)
Categories: Offsite Blogs

Philip Wadler: Research funding in an independent Scotland

Planet Haskell - Fri, 08/22/2014 - 4:32am

A useful summary, written by a colleague.
In Summary:
  • The difference between the Scottish tax contribution and RCUK spending in Scotland is small compared to savings that will be made in other areas such as defence
  • Funding levels per institution are actually similar in Scotland to those in the rest of the UK; it’s just that there are more institutions here
  • Because of its relative importance, any independent Scottish government would prioritise research
  • If rUK rejects a common research area it would lose the benefits of its previous investments, and the Scottish research capacity, which is supported by the Scottish government and the excellence of our universities
  • There are significant disadvantages with a No vote, resulting from UK immigration policy and the possibility of exiting the EU
Categories: Offsite Blogs

Lens 4.4 release notes

Haskell on Reddit - Fri, 08/22/2014 - 12:41am

There's a new release of the lens package tonight, and I'd like to break down some of the changes from the changelog.

Template Haskell changes

The Template Haskell module has been around through many revisions of the lens package and seen a lot of features added to it. There were multiple implementations of the same functionality in the module doing slightly different things and it was becoming a challenge to fix bugs or add functionality. To improve this situation the TH code has been rewritten and unified.

  • The common cases of makeLenses and makePrisms should continue to work as before.

  • There's a new makeClassyPrisms function to complement the makeClassy lens generating code. You can read more about this feature in the haddocks!

  • Lenses generated for a single constructor now use irrefutable patterns. This allows you to initialize an undefined with lenses.

  • The makeFields implementation is now merged into the makeLenses implementation. If you were previously using makeFieldsWith in order to provide a custom rule set, you should migrate to using makeLensesWith.

  • Note: makeIsos was removed in the 4.3 release. Both makeLenses and makePrisms will generate an Iso when possible.
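As a reminder of what the generated lenses amount to: `makeLenses ''Point` produces one van Laarhoven lens per `_`-prefixed field. The sketch below hand-writes the equivalent of one generated lens (the `Point` type and the minimal `view'`/`set'` helpers are illustrative stand-ins written against base only, not the lens API):

```haskell
{-# LANGUAGE RankNTypes #-}
import Data.Functor.Const (Const (..))
import Data.Functor.Identity (Identity (..))

-- the van Laarhoven lens representation used by the lens package
type Lens' s a = forall f. Functor f => (a -> f a) -> s -> f s

data Point = Point { _px :: Double, _py :: Double } deriving (Show, Eq)

-- roughly what `makeLenses ''Point` would generate for the _px field
px :: Lens' Point Double
px f (Point x y) = fmap (\x' -> Point x' y) (f x)

-- minimal stand-ins for lens's view/set
view' :: Lens' s a -> s -> a
view' l = getConst . l Const

set' :: Lens' s a -> a -> s -> s
set' l a = runIdentity . l (const (Identity a))
```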

Data.Aeson.Lens split

The Data.Aeson.Lens module has been migrated back into its own package. This change will enable this module to continue to evolve its API beyond what we usually provide in the lens package and it reduces our direct package dependencies by three (aeson, scientific, attoparsec). This will also help to reduce build times, which can be particularly important to sandbox users.


  • Review is now a proper supertype of Prism
  • makeLenses unifies field types for lenses/traversals spanning different field types.
  • GHC.Generics.Lens.tinplate works correctly on data types with both a single constructor and single field as well as empty data types.

Report your issues

Please report your issues to the GitHub issue tracker. While we'll be reading comments on Reddit, filing an issue will help to ensure it isn't lost.

submitted by glguy
[link] [18 comments]
Categories: Incoming News

What's the best practice for building a DSL in haskell?

Haskell on Reddit - Thu, 08/21/2014 - 6:21pm

I'm working at a Python dev shop where many of our operations revolve around making queries to a Postgres database. I'd like to try writing some Haskell at work, so I need a Haskell-friendly copy of the database schema (currently this information is available to the Python code as a Python class using SQLAlchemy). I can and have just copied over the schema by hand, but it seems that doing the job right would require a single copy of the schema written in a DSL, which could be compiled to both Haskell and Python code as a reference for the respective languages. What's the best practice for actually writing a DSL? Do I just whip out Parsec and start creating my own standard, or do I write a .hs file that gets compiled by GHC into an intermediate form and subsequently perform magic on that? I'd really like to see a tutorial on "How to start creating DSLs for haskell".
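For a schema like this, an embedded DSL is usually simpler than an external language parsed with Parsec: define the schema once as a plain Haskell value, then write one renderer per target language. A minimal sketch (all type and function names hypothetical):

```haskell
-- A tiny deep-embedded schema DSL: the schema is ordinary Haskell data,
-- and each target language gets its own pretty-printer.
data ColType = ColInt | ColText | ColBool deriving Show

data Column = Column { colName :: String, colType :: ColType } deriving Show

data Table = Table { tableName :: String, tableCols :: [Column] } deriving Show

-- render the schema as a (SQLAlchemy-flavoured) Python class definition
toPython :: Table -> String
toPython (Table n cols) =
  unlines (("class " ++ n ++ "(Base):") : map col cols)
  where
    col (Column c t) = "    " ++ c ++ " = Column(" ++ py t ++ ")"
    py ColInt  = "Integer"
    py ColText = "Text"
    py ColBool = "Boolean"

-- example schema value shared by all renderers
userTable :: Table
userTable = Table "User" [Column "id" ColInt, Column "name" ColText]
```

A `toHaskell :: Table -> String` renderer would walk the same `Table` value, so both languages stay in sync by construction.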

submitted by singularai
[link] [22 comments]
Categories: Incoming News

Philip Wadler: Scotland can't save England

Planet Haskell - Thu, 08/21/2014 - 4:05pm
Salmond concluded his debate with Darling by observing that for half his lifetime Scotland had been ruled by governments that Scotland had not elected. Many take this the other way, and fret that if Scotland leaves the UK, then Labour would never win an election. Wings Over Scotland reviews the figures. While Scotland has an effect on the size of the majority, elections would yield the same ruling party with or without Scotland in 65 of the last 67 years. To a first approximation, Scotland's impact over the rest of the UK is nil, while the rest of the UK overwhelms Scotland's choice half the time.

1945 Labour govt (Attlee)

Labour majority: 146
Labour majority without any Scottish MPs in Parliament: 143
1950 Labour govt (Attlee)

Labour majority: 5
Without Scottish MPs: 2
1951 Conservative govt (Churchill/Eden)

Conservative majority: 17
Without Scottish MPs: 16
1955 Conservative govt (Eden/Macmillan)

Conservative majority: 60
Without Scottish MPs: 61
1959 Conservative govt (Macmillan/Douglas-Home)

Conservative majority: 100
Without Scottish MPs: 109
1964 Labour govt (Wilson)

Labour majority: 4
Without Scottish MPs: -11
1966 Labour govt (Wilson)

Labour majority: 98
Without Scottish MPs: 77
1970 Conservative govt (Heath)

Conservative majority: 30
Without Scottish MPs: 55
1974 Minority Labour govt (Wilson)

Labour majority: -33
Without Scottish MPs: -42
1974b Labour govt (Wilson/Callaghan)

Labour majority: 3
Without Scottish MPs: -8
1979 Conservative govt (Thatcher)

Conservative majority: 43
Without Scottish MPs: 70
1983 Conservative govt (Thatcher)

Conservative majority: 144
Without Scottish MPs: 174
1987 Conservative govt (Thatcher/Major)

Conservative majority: 102
Without Scottish MPs: 154
1992 Conservative govt (Major)

Conservative majority: 21
Without Scottish MPs: 71
1997 Labour govt (Blair)

Labour majority: 179
Without Scottish MPs: 139
2001 Labour govt (Blair)

Labour majority: 167
Without Scottish MPs: 129
2005 Labour govt (Blair/Brown)

Labour majority: 66
Without Scottish MPs:  43
2010 Coalition govt (Cameron)

Conservative majority: -38
Without Scottish MPs: 19
Categories: Offsite Blogs

Philip Wadler: How Scotland will be robbed

Planet Haskell - Thu, 08/21/2014 - 3:36pm
Thanks to the Barnett Formula, the UK government provides more funding per head in Scotland than in the rest of the UK. Better Together touts this as an extra £1400 in each person's pocket that will be lost if Scotland votes 'Aye' (famously illustrated with Lego). Put to one side the argument as to whether the extra £1400 is a fair reflection of the extra Scotland contributes to the UK economy, through oil and other means. The Barnett Formula is up for renegotiation. Will it be maintained if Scotland votes 'Nay'?
Wings over Scotland lays out the argument that if Scotland opts to stick with Westminster then Westminster will stick it to Scotland.

The Barnett Formula is the system used to decide the size of the “block grant” sent every year from London to the Scottish Government to run devolved services. ...
Until now, however, it’s been politically impossible to abolish the Formula, as such a manifestly unfair move would lead to an upsurge in support for independence. In the wake of a No vote in the referendum, that obstacle would be removed – Scots will have nothing left with which to threaten Westminster.
It would still be an unwise move for the UK governing party to be seen to simply obviously “punish” Scotland after a No vote. But the pledge of all three Unionist parties to give Holyrood “more powers” provides the smokescreen under which the abolition of Barnett can be executed and the English electorate placated.
The block grant is a distribution of tax revenue. The “increased devolution” plans of the UK parties will instead make the Scottish Government responsible for collecting its own income taxes. The Office of Budget Responsibility has explained in detail how “the block grant from the UK government to Scotland will then be reduced to reflect the fiscal impact of the devolution of these tax-raising powers” (page 4).

But if Holyrood sets Scottish income tax at the same level as the UK, that’ll mean the per-person receipts are also the same, which means that there won’t be the money to pay for the “extra” £1400 of spending currently returned as part-compensation for Scottish oil revenues, because the oil revenues will be staying at Westminster. ...
We’ve explained the political motivations behind the move at length before. The above is simply the mechanical explanation of how it will happen if Scotland votes No. The “if” is not in question – all the UK parties are united behind the plan.
A gigantic act of theft will be disguised as a gift. The victories of devolution will be lost, because there’ll no longer be the money to pay for them. Tuition fees and prescription charges will return. Labour’s “One Nation” will manifest itself, with the ideologically troublesome differences between Scotland and the rest of the UK eliminated.
And what’s more, it’ll all have been done fairly and above-board, because the Unionist parties have all laid out their intentions in black and white. They’ll be able to say, with justification, “Look, you can’t complain, this is exactly what we TOLD you we’d do”.

This analysis looks persuasive to me, and I’ve not seen it put so clearly elsewhere. Please comment below if you know sources for similar arguments.
Categories: Offsite Blogs