CFP - JSS, Elsevier - Special issue on adaptive and reconfigurable software systems and architectures
Independence Referendum Debate, 4.00–5.30pm, Monday 25 August
Appleton Tower Lecture Room 2
For the NAYs: Prof. Alan Bundy
For the AYEs: Prof. Philip Wadler
Moderator: Prof. Mike Fourman
All members of the School of Informatics
and the wider university community welcome
(This is a debate among colleagues and not a formal University event.
All views expressed are those of the individuals who express them,
and not the University of Edinburgh.)
A useful summary, written by a colleague:
- the difference between the Scottish tax contribution and RCUK spending in Scotland is small compared to savings that will be made in other areas such as defence
- Funding levels per institution are actually similar in Scotland to those in the rest of the UK; it’s just that there are more institutions here
- Because of its relative importance, any independent Scottish government would prioritise research
- If rUK rejects a common research area it would lose the benefits of its previous investments, and the Scottish research capacity, which is supported by the Scottish government and the excellence of our universities
- There are significant disadvantages with a No vote, resulting from UK immigration policy and the possibility of exiting the EU
There's a new release of the lens package tonight, and I'd like to break down some of the changes from the changelog.
Template Haskell changes
The Template Haskell module has been around through many revisions of the lens package and seen a lot of features added to it. There were multiple implementations of the same functionality in the module doing slightly different things and it was becoming a challenge to fix bugs or add functionality. To improve this situation the TH code has been rewritten and unified.
The common cases of makeLenses and makePrisms should continue to work as before.
There's a new makeClassyPrisms function to complement the makeClassy lens generating code. You can read more about this feature in the haddocks!
Lenses generated for a type with a single constructor now use irrefutable patterns, so you can use them to initialize a record starting from undefined.
The makeFields implementation is now merged into the makeLenses implementation. If you were previously using makeFieldsWith in order to provide a custom rule set, you should migrate to makeLensesWith.
Note: makeIsos was removed in the 4.3 release. Both makeLenses and makePrisms will generate an Iso when possible.
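For readers who haven't used the TH functions, here is a rough sketch of what `makeLenses` generates for a record. The `Point` type is invented for illustration, and the lenses are written out by hand using only `base` so the snippet is self-contained; with the lens library you would instead write `makeLenses ''Point` under `{-# LANGUAGE TemplateHaskell #-}` and import `Control.Lens`.

```haskell
{-# LANGUAGE RankNTypes #-}
-- Hand-written equivalents of what `makeLenses ''Point` would generate.
-- Only `base` is used here; the lens library provides richer combinators.
import Data.Functor.Const (Const (..))
import Data.Functor.Identity (Identity (..))

-- The van Laarhoven lens type used by the lens library.
type Lens' s a = forall f. Functor f => (a -> f a) -> s -> f s

data Point = Point { _x :: Double, _y :: Double } deriving (Show, Eq)

-- makeLenses strips the underscore from each field to name the lens.
x, y :: Lens' Point Double
x f p = fmap (\v -> p { _x = v }) (f (_x p))
y f p = fmap (\v -> p { _y = v }) (f (_y p))

-- Minimal analogues of the library's view (^.) and set (.~).
view :: Lens' s a -> s -> a
view l = getConst . l Const

set :: Lens' s a -> a -> s -> s
set l v = runIdentity . l (const (Identity v))

main :: IO ()
main = do
  print (view x (Point 1 2))
  print (set y 10 (Point 1 2))
```

The generated lenses really are just functions like these, which is why they compose with `.` and work with the whole combinator zoo.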
The Data.Aeson.Lens module has been migrated back into its own package. This change will enable this module to continue to evolve its API beyond what we usually provide in the lens package and it reduces our direct package dependencies by three (aeson, scientific, attoparsec). This will also help to reduce build times, which can be particularly important to sandbox users.
- Review is now a proper supertype of Prism
- makeLenses unifies field types for lenses/traversals spanning different field types.
- GHC.Generics.Lens.tinplate works correctly on data types with both a single constructor and single field as well as empty data types.
Report your issues
Please report your issues to the GitHub issue tracker. While we'll be reading comments on Reddit, filing an issue there will help ensure it isn't lost.
submitted by glguy
I'm working at a Python dev shop where many of our operations revolve around making queries to a Postgres database. I'd like to try writing some Haskell at work, so I need a Haskell-friendly copy of the database schema (currently this information is available to the Python code as a Python class using SQLAlchemy). I can and have just copied the schema over by hand, but it seems that doing the job right would require a single copy of the schema, written in a DSL, which could be compiled to both Haskell and Python code as a reference for the respective languages. What's the best practice for actually writing a DSL? Do I just whip out Parsec and start creating my own standard, or do I write a .hs file that gets compiled by GHC into an intermediate form and subsequently perform magic on that? I'd really like to see a tutorial on "How to start creating DSLs for Haskell".
submitted by singularai
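One answer worth considering: rather than parsing a custom syntax with Parsec, an embedded DSL is often enough. The sketch below (all type and function names invented for illustration, not a real schema library) describes a schema as plain Haskell values and pretty-prints it to both a Haskell record and a rough SQLAlchemy-style Python class:

```haskell
-- A minimal embedded DSL for describing a relational schema, plus two
-- "backends" that render it for Haskell and Python respectively.
import Data.Char (toUpper)
import Data.List (intercalate)

data ColType = ColInt | ColText | ColBool deriving (Show, Eq)

data Column = Column { colName :: String, colType :: ColType } deriving Show
data Table  = Table  { tblName :: String, tblCols :: [Column] } deriving Show

-- Render a Haskell record declaration.
toHaskell :: Table -> String
toHaskell (Table n cs) =
  "data " ++ cap n ++ " = " ++ cap n ++ " { "
    ++ intercalate ", " [ colName c ++ " :: " ++ hs (colType c) | c <- cs ]
    ++ " }"
  where
    hs ColInt = "Int"; hs ColText = "String"; hs ColBool = "Bool"
    cap []      = []
    cap (h : t) = toUpper h : t

-- Render a (very simplified) SQLAlchemy-style class.
toPython :: Table -> String
toPython (Table n cs) = unlines $
  ("class " ++ n ++ "(Base):") :
  [ "    " ++ colName c ++ " = Column(" ++ py (colType c) ++ ")" | c <- cs ]
  where py ColInt = "Integer"; py ColText = "Text"; py ColBool = "Boolean"

users :: Table
users = Table "users" [Column "userId" ColInt, Column "userEmail" ColText]

main :: IO ()
main = putStrLn (toHaskell users) >> putStrLn (toPython users)
```

The same `Table` values could just as well drive query helpers or migrations; the point is that one Haskell datatype becomes the single source of truth for the schema.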
1945 Labour govt (Attlee)
Labour majority: 146
Labour majority without any Scottish MPs in Parliament: 143
NO CHANGE WITHOUT SCOTTISH MPS
1950 Labour govt (Attlee)
Labour majority: 5
Without Scottish MPs: 2
1951 Conservative govt (Churchill/Eden)
Conservative majority: 17
Without Scottish MPs: 16
1955 Conservative govt (Eden/Macmillan)
Conservative majority: 60
Without Scottish MPs: 61
1959 Conservative govt (Macmillan/Douglas-Home)
Conservative majority: 100
Without Scottish MPs: 109
1964 Labour govt (Wilson)
Labour majority: 4
Without Scottish MPs: -11
CHANGE: LABOUR MAJORITY TO CONSERVATIVE MAJORITY OF 1 (Con 280, Lab 274, Lib 5)
1966 Labour govt (Wilson)
Labour majority: 98
Without Scottish MPs: 77
1970 Conservative govt (Heath)
Conservative majority: 30
Without Scottish MPs: 55
1974 Minority Labour govt (Wilson)
Labour majority: -33
Without Scottish MPs: -42
POSSIBLE CHANGE – LABOUR MINORITY TO CONSERVATIVE MINORITY (Without Scots: Con 276, Lab 261, Lib 11, Others 16)
1974b Labour govt (Wilson/Callaghan)
Labour majority: 3
Without Scottish MPs: -8
CHANGE: LABOUR MAJORITY TO LABOUR MINORITY (Lab 278, Con 261, Lib 10, Others 15)
1979 Conservative govt (Thatcher)
Conservative majority: 43
Without Scottish MPs: 70
1983 Conservative govt (Thatcher)
Conservative majority: 144
Without Scottish MPs: 174
1987 Conservative govt (Thatcher/Major)
Conservative majority: 102
Without Scottish MPs: 154
1992 Conservative govt (Major)
Conservative majority: 21
Without Scottish MPs: 71
1997 Labour govt (Blair)
Labour majority: 179
Without Scottish MPs: 139
2001 Labour govt (Blair)
Labour majority: 167
Without Scottish MPs: 129
2005 Labour govt (Blair/Brown)
Labour majority: 66
Without Scottish MPs: 43
2010 Coalition govt (Cameron)
Conservative majority: -38
Without Scottish MPs: 19
CHANGE: CON-LIB COALITION TO CONSERVATIVE MAJORITY
Wings over Scotland lays out the argument that if Scotland opts to stick with Westminster then Westminster will stick it to Scotland. The Barnett Formula is the system used to decide the size of the “block grant” sent every year from London to the Scottish Government to run devolved services. ...
Until now, however, it’s been politically impossible to abolish the Formula, as such a manifestly unfair move would lead to an upsurge in support for independence. In the wake of a No vote in the referendum, that obstacle would be removed – Scots will have nothing left with which to threaten Westminster.
It would still be an unwise move for the UK governing party to be seen to simply obviously “punish” Scotland after a No vote. But the pledge of all three Unionist parties to give Holyrood “more powers” provides the smokescreen under which the abolition of Barnett can be executed and the English electorate placated.
The block grant is a distribution of tax revenue. The “increased devolution” plans of the UK parties will instead make the Scottish Government responsible for collecting its own income taxes. The Office for Budget Responsibility has explained in detail how “the block grant from the UK government to Scotland will then be reduced to reflect the fiscal impact of the devolution of these tax-raising powers” (page 4). But if Holyrood sets Scottish income tax at the same level as the UK, that’ll mean the per-person receipts are also the same, which means that there won’t be the money to pay for the “extra” £1400 of spending currently returned as part-compensation for Scottish oil revenues, because the oil revenues will be staying at Westminster. ...
We’ve explained the political motivations behind the move at length before. The above is simply the mechanical explanation of how it will happen if Scotland votes No. The “if” is not in question – all the UK parties are united behind the plan.
A gigantic act of theft will be disguised as a gift. The victories of devolution will be lost, because there’ll no longer be the money to pay for them. Tuition fees and prescription charges will return. Labour’s “One Nation” will manifest itself, with the ideologically troublesome differences between Scotland and the rest of the UK eliminated.
And what’s more, it’ll all have been done fairly and above-board, because the Unionist parties have all laid out their intentions in black and white. They’ll be able to say, with justification, “Look, you can’t complain, this is exactly what we TOLD you we’d do”. This analysis looks persuasive to me, and I've not seen it put so clearly elsewhere. Please comment below if you know sources for similar arguments.
Today, the 14th of January 2014, we had a special session of our Theory Lunch. I spoke about ultrafilters and how they allow us to generalize the notion of limit.
Consider the space $\ell^\infty$ of bounded sequences of real numbers, together with the supremum norm. We would like to define a notion of limit which is defined for every $x = \{x_n\}_{n \geq 0} \in \ell^\infty$ and satisfies the well-known properties of the standard limit:
- Linearity: $\lim (x_n + y_n) = \lim x_n + \lim y_n$.
- Homogeneity: $\lim (\lambda x_n) = \lambda \lim x_n$.
- Monotonicity: if $x_n \leq y_n$ for every $n$, then $\lim x_n \leq \lim y_n$.
- Nontriviality: if $x_n = 1$ for every $n$, then $\lim x_n = 1$.
- Consistency: if the limit exists in the classical sense, then the two notions coincide.
The consistency condition is reasonable also because it avoids trivial cases: if we fix $k \in \mathbb{N}$ and we define the limit of the sequence $x$ as the value $x_k$, then the first four properties are satisfied.
Let us recall the classical definition of limit: we say that $x = \{x_n\}$ converges to $L$ if and only if, for every $\varepsilon > 0$, the set of values $n$ such that $|x_n - L| < \varepsilon$ is cofinite, i.e., has a finite complement: the inequality $|x_n - L| \geq \varepsilon$ can be satisfied at most for finitely many values of $n$. The family of cofinite subsets of $\mathbb{N}$ (in fact, of any set $X$) has the following properties:
- Upper closure: if $A \in \mathcal{F}$ and $A \subseteq B$, then $B \in \mathcal{F}$.
- Meet stability: if $A, B \in \mathcal{F}$, then $A \cap B \in \mathcal{F}$.
A family $\mathcal{F}$ of subsets of $X$ with the two properties above is called a filter on $X$. An immediate example is the trivial filter $\{X\}$; another example is the improper filter $\mathcal{P}(X)$, the family of all subsets of $X$. The family of cofinite subsets of $X$ is called the Fréchet filter on $X$. The Fréchet filter is not the improper one if and only if $X$ is infinite.
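As a quick sanity check, writing $\mathcal{F}$ for the family of cofinite subsets of a set $X$, both filter axioms do hold:

```latex
% Upper closure: a superset of a cofinite set is cofinite.
A \in \mathcal{F},\ A \subseteq B
  \;\Longrightarrow\; X \setminus B \subseteq X \setminus A \text{ is finite}
  \;\Longrightarrow\; B \in \mathcal{F}.
% Meet stability: a finite union of finite sets is finite.
A, B \in \mathcal{F}
  \;\Longrightarrow\; X \setminus (A \cap B) = (X \setminus A) \cup (X \setminus B) \text{ is finite}
  \;\Longrightarrow\; A \cap B \in \mathcal{F}.
```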
An ultrafilter on $X$ is a filter $\mathcal{U}$ on $X$ satisfying the following additional conditions:
- Properness: $\emptyset \notin \mathcal{U}$.
- Maximality: for every $A \subseteq X$, either $A \in \mathcal{U}$ or $X \setminus A \in \mathcal{U}$.
For example, if $x \in X$, then $\mathcal{U}_x = \{A \subseteq X \mid x \in A\}$ is an ultrafilter on $X$, called the principal ultrafilter generated by $x$. Observe that $\bigcap_{A \in \mathcal{U}_x} A = \{x\}$: if instead $\bigcap_{A \in \mathcal{U}} A = \emptyset$, we say that $\mathcal{U}$ is free. These are, in fact, the only two options.
Lemma 1. For a proper filter $\mathcal{F}$ to be an ultrafilter, it is necessary and sufficient that it satisfies the following condition: for every $n \geq 1$ and nonempty $A_1, \ldots, A_n \subseteq X$, if $A_1 \cup \cdots \cup A_n \in \mathcal{F}$ then $A_i \in \mathcal{F}$ for at least one $i$.
Proof: It is sufficient to prove the thesis with $n = 2$. If $A \cup B \in \mathcal{F}$ with $A, B \notin \mathcal{F}$, then $\mathcal{F}' = \{C \subseteq X \mid B \cup C \in \mathcal{F}\}$ is a proper filter that properly contains $\mathcal{F}$. If the condition is satisfied, for every $A$ which is neither $\emptyset$ nor $X$ we have $A \cup (X \setminus A) = X \in \mathcal{F}$, thus either $A \in \mathcal{F}$ or $X \setminus A \in \mathcal{F}$.
Theorem 1. Every nonprincipal ultrafilter is free. In addition, an ultrafilter is free if and only if it extends the Fréchet filter. In particular, every ultrafilter over a finite set is principal.
Proof: Let $\mathcal{U}$ be a nonprincipal ultrafilter. Let $x \in X$: then $\mathcal{U} \neq \mathcal{U}_x$, so either there exists $A$ such that $A \in \mathcal{U}$ and $x \notin A$, or there exists $B$ such that $x \in B$ and $B \notin \mathcal{U}$. In the first case, $x \notin \bigcap_{A \in \mathcal{U}} A$; in the second case, we consider $X \setminus B \in \mathcal{U}$ and reduce to the first case. As $x$ is arbitrary, $\mathcal{U}$ is free.
Now, for every $x \in X$ the set $X \setminus \{x\}$ belongs to the Fréchet filter but not to $\mathcal{U}_x$: therefore, no principal ultrafilter extends the Fréchet filter. On the other hand, if $\mathcal{U}$ is an ultrafilter, $A$ is finite, and $X \setminus A \notin \mathcal{U}$, then $A \in \mathcal{U}$ by maximality, hence $\{x\} \in \mathcal{U}$ for some $x \in A$ because of Lemma 1, thus $\mathcal{U}$ cannot be a free ultrafilter.
So it seems that free ultrafilters are the right thing to consider when trying to expand the concept of limit. There is an issue, though: we have not seen a single example of a free ultrafilter; in fact, we do not even know (yet) whether free ultrafilters exist at all! The answer to this problem comes, in a shamelessly nonconstructive way, from the following
Ultrafilter lemma. Every proper filter can be extended to an ultrafilter.
The ultrafilter lemma, together with Theorem 1, implies the existence of free ultrafilters on every infinite set, and in particular on $\mathbb{N}$. On the other hand, to prove the ultrafilter lemma the Axiom of Choice is required, in the form of Zorn’s lemma. Before giving such a proof, we recall that a family of sets has the finite intersection property if every finite subfamily has a nonempty intersection: every proper filter has the finite intersection property.
Proof of the ultrafilter lemma. Let $\mathcal{F}$ be a proper filter on $X$ and let $\mathbf{P}$ be the family of the collections of subsets of $X$ that extend $\mathcal{F}$ and have the finite intersection property, ordered by inclusion. Let $\mathbf{C}$ be a totally ordered subfamily of $\mathbf{P}$: then $\mathcal{G} = \bigcup \mathbf{C}$ extends $\mathcal{F}$ and has the finite intersection property, because for every finitely many $A_1, \ldots, A_n \in \mathcal{G}$ there exists by construction $\mathcal{H} \in \mathbf{C}$ such that $A_1, \ldots, A_n \in \mathcal{H}$.
By Zorn’s lemma, $\mathbf{P}$ has a maximal element $\mathcal{U}$, which surely satisfies $\mathcal{F} \subseteq \mathcal{U}$ and $\emptyset \notin \mathcal{U}$. If $A \in \mathcal{U}$ and $A \subseteq B$, then $\mathcal{U} \cup \{B\}$ still has the finite intersection property, therefore $B \in \mathcal{U}$ by maximality. If $A, B \in \mathcal{U}$, then $\mathcal{U} \cup \{A \cap B\}$ still has the finite intersection property, therefore $A \cap B \in \mathcal{U}$, again by maximality.
Suppose, for the sake of contradiction, that there exists $A \subseteq X$ such that $A \notin \mathcal{U}$ and $X \setminus A \notin \mathcal{U}$: then neither $\mathcal{U} \cup \{A\}$ nor $\mathcal{U} \cup \{X \setminus A\}$ has the finite intersection property, hence there exist $B_1, \ldots, B_m, C_1, \ldots, C_n \in \mathcal{U}$ such that $A \cap B_1 \cap \cdots \cap B_m = \emptyset$ and $(X \setminus A) \cap C_1 \cap \cdots \cap C_n = \emptyset$. But $A \cap B_1 \cap \cdots \cap B_m = \emptyset$ means $B_1 \cap \cdots \cap B_m \subseteq X \setminus A$, and $(X \setminus A) \cap C_1 \cap \cdots \cap C_n = \emptyset$ means $C_1 \cap \cdots \cap C_n \subseteq A$: therefore,
$$B_1 \cap \cdots \cap B_m \cap C_1 \cap \cdots \cap C_n \subseteq (X \setminus A) \cap A = \emptyset,$$
against $\mathcal{U}$ having the finite intersection property.
We are now ready to expand the idea of limit. Let $(X, d)$ be a metric space and let $\mathcal{U}$ be an ultrafilter on $\mathbb{N}$: we say that $L \in X$ is the ultralimit of the sequence $\{x_n\}_{n \geq 0}$ along $\mathcal{U}$ if for every $\varepsilon > 0$ the set
$$\{n \in \mathbb{N} \mid d(x_n, L) < \varepsilon\}$$
belongs to $\mathcal{U}$. (Observe how, in the standard definition of limit, the above set is required to belong to the Fréchet filter.) If this is the case, we write
$$\lim_{n \to \mathcal{U}} x_n = L.$$
Ultralimits, if they exist, are unique and satisfy our first four conditions. Moreover, the choice of a principal ultrafilter $\mathcal{U}_k$ corresponds to the trivial definition $\lim_{n \to \mathcal{U}_k} x_n = x_k$. So, what about free ultrafilters?
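As a concrete illustration of how the choice of free ultrafilter matters, take the sequence $x_n = (-1)^n$, which has no classical limit. Writing $E$ for the set of even indices and $O$ for the odd ones, maximality forces exactly one of $E, O$ into any free ultrafilter $\mathcal{U}$ on $\mathbb{N}$, and then:

```latex
\lim_{n \to \mathcal{U}} (-1)^n =
\begin{cases}
  1  & \text{if } E = \{n : n \text{ even}\} \in \mathcal{U},\\
  -1 & \text{if } O = \{n : n \text{ odd}\}  \in \mathcal{U}.
\end{cases}
```

Indeed, if $E \in \mathcal{U}$, then for every $\varepsilon > 0$ the set $\{n : |x_n - 1| < \varepsilon\}$ contains $E$, and so belongs to $\mathcal{U}$ by upper closure.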
Theorem 2. Every bounded sequence of real numbers has an ultralimit along every free ultrafilter on $\mathbb{N}$.
Proof: It is not restrictive to suppose $0 \leq x_n \leq 1$ for every $n$. Let $\mathcal{U}$ be an arbitrary, but fixed, free ultrafilter on $\mathbb{N}$. We will construct a sequence of closed intervals $I_m$, $m \geq 0$, such that $I_{m+1} \subseteq I_m$, $|I_m| = 2^{-m}$, and $\{n \mid x_n \in I_m\} \in \mathcal{U}$ for every $m$. By the Cantor intersection theorem it will be $\bigcap_{m \geq 0} I_m = \{L\}$: we will then show that $L = \lim_{n \to \mathcal{U}} x_n$.
Let $I_0 = [0, 1]$. Let $I_1$ be either $[0, 1/2]$ or $[1/2, 1]$, chosen according to the following criterion: $\{n \mid x_n \in I_1\} \in \mathcal{U}$. Such a half exists by Lemma 1, because the index sets of the two halves together cover $\mathbb{N} \in \mathcal{U}$. If both halves satisfy the criterion, then we just choose one once and for all. We iterate the procedure by always choosing $I_{m+1}$ as one of the two halves of $I_m$ such that $\{n \mid x_n \in I_{m+1}\} \in \mathcal{U}$.
Let $\{L\} = \bigcap_{m \geq 0} I_m$. Let $\varepsilon > 0$, and let $m$ be so large that $2^{-m} < \varepsilon$: then $I_m \subseteq (L - \varepsilon, L + \varepsilon)$, thus $\{n \mid x_n \in I_m\} \subseteq \{n \mid |x_n - L| < \varepsilon\}$. As the smaller set belongs to $\mathcal{U}$, so does the larger one.
We have thus almost achieved our original target: a notion of limit which applies to every bounded sequence of real numbers. Such a notion will depend on the specific free ultrafilter we choose: but it is already very reassuring that such a notion exists at all! To complete our job we need one more check: we have to be sure that the definition is consistent with the classical one. And this is indeed what happens!
Theorem 3. Let $\{x_n\}_{n \geq 0}$ be a sequence of real numbers and let $L \in \mathbb{R}$. Then $\lim_{n \to \infty} x_n = L$ in the classical sense if and only if $\lim_{n \to \mathcal{U}} x_n = L$ for every free ultrafilter $\mathcal{U}$ on $\mathbb{N}$.
To prove Theorem 3 we make use of an auxiliary result, which is of interest by itself.
Lemma 2. Let $\mathbf{P}$ be the family of the collections of subsets of $X$ that have the finite intersection property. The maximal elements of $\mathbf{P}$ are precisely the ultrafilters on $X$.
Proof: Every ultrafilter is clearly maximal in $\mathbf{P}$. If $\mathcal{U}$ is maximal in $\mathbf{P}$, then it is clearly proper and upper closed, and we can reason as in the proof of the ultrafilter lemma to show that it is actually an ultrafilter.
Proof of Theorem 3: Suppose $\{x_n\}$ does not converge to $L$ in the classical sense. Fix $\varepsilon > 0$ such that the set $A = \{n \mid |x_n - L| \geq \varepsilon\}$ is infinite. Then the family $\mathcal{F}$ made of $A$ together with the cofinite subsets of $\mathbb{N}$ has the finite intersection property: an ultrafilter $\mathcal{U}$ that extends $\mathcal{F}$ must be free, as it extends the Fréchet filter. Then $\{n \mid |x_n - L| < \varepsilon\} \notin \mathcal{U}$, and $L$ is not the ultralimit of $\{x_n\}$ along $\mathcal{U}$.
The converse implication follows from the classical definition of limit, together with the very notion of free ultrafilter.
Theorem 3 does hold for sequences of real numbers, but does not extend to arbitrary metric spaces. In fact, the following holds, which we state without proof.
Theorem 4. Let $(X, d)$ be a metric space. The following are equivalent.
- For some free ultrafilter $\mathcal{U}$ on $\mathbb{N}$, every sequence in $X$ has an ultralimit along $\mathcal{U}$.
- For every free ultrafilter $\mathcal{U}$ on $\mathbb{N}$, every sequence in $X$ has an ultralimit along $\mathcal{U}$.
- $X$ is compact.
Ultrafilters are useful in many other contexts. For instance, they are used to construct the hyperreal numbers, which in turn allow a rigorous definition of infinitesimals and a foundation of calculus built upon them. But this might be the topic for another Theory Lunch talk.
As some of you may know, HP has begun working on a memristor-based computer and OS. The goal of memristors is to replace all forms of storage (SRAM, DRAM, and disk) while being faster than DRAM.
Does the Haskell community have any thoughts on what this could mean for a language like Haskell?
General memristor information: http://en.wikipedia.org/wiki/Memristor
submitted by Artemis311
Why are there so many goddamn package managers? They sprawl across both operating systems (apt, yum, pacman, Homebrew) as well as programming languages (Bundler, Cabal, Composer, CPAN, CRAN, CTAN, EasyInstall, Go Get, Maven, npm, NuGet, OPAM, PEAR, pip, RubyGems, etc etc etc). "It is a truth universally acknowledged that a programming language must be in want of a package manager." What is the fatal attraction of package management that makes programming language after programming language jump off this cliff? Why can't we just, you know, reuse an existing package manager?
You can probably think of a few reasons why trying to use apt to manage your Ruby gems would end in tears. "System and language package managers are completely different! Distributions are vetted, but that's completely unreasonable for most libraries tossed up on GitHub. Distributions move too slowly. Every programming language is different. The different communities don't talk to each other. Distributions install packages globally. I want control over what libraries are used." These reasons are all valid, but they miss the essence of the problem.
The fundamental problem is that programming-language package management is decentralized.
This decentralization starts with the central premise of a package manager: that is, to install software and libraries that would otherwise not be locally available. Even with an idealized, centralized distribution curating the packages, there are still two parties involved: the distribution and the programmer who is building applications locally on top of these libraries. In real life, however, the library ecosystem is further fragmented, composed of packages provided by a huge variety of developers. Sure, the packages may all be uploaded and indexed in one place, but that doesn't mean that any given author knows about any other given package. And then there's what the Perl world calls DarkPAN: the uncountable lines of code which probably exist, but which we have no insight into because they are locked away on proprietary servers and source code repositories. Decentralization can only be avoided when you control absolutely all of the lines of code in your application... but in that case, you hardly need a package manager, do you? (By the way, my industry friends tell me this is basically mandatory for software projects beyond a certain size, like the Windows operating system or the Google Chrome browser.)
Decentralized systems are hard. Really, really hard. Unless you design your package manager accordingly, your developers will fall into dependency hell. Nor is there one "right" way to solve this problem: I can identify at least three distinct approaches to the problem among the emerging generation of package managers, each of which has its benefits and downsides.
Pinned versions. Perhaps the most popular school of thought is that developers should aggressively pin package versions; this approach is advocated by Ruby's Bundler, PHP's Composer, Python's virtualenv and pip, and generally any package manager which describes itself as inspired by the Ruby/node.js communities (e.g. Java's Gradle, Rust's Cargo). Reproducibility of builds is king: these package managers solve the decentralization problem by simply pretending the ecosystem doesn't exist once you have pinned the versions. The primary benefit of this approach is that you are always in control of the code you are running. Of course, the downside of this approach is that you are always in control of the code you are running. An all-too-common occurrence is for dependencies to be pinned, and then forgotten about, even if there are important security updates to the libraries involved. Keeping bundled dependencies up-to-date requires developer cycles--cycles that more often than not are spent on other things (like new features).
A stable distribution. If bundling requires every individual application developer to spend effort keeping dependencies up-to-date and testing if they keep working with their application, we might wonder if there is a way to centralize this effort. This leads to the second school of thought: to centralize the package repository, creating a blessed distribution of packages which are known to play well together, and which will receive bug fixes and security fixes while maintaining backwards compatibility. In programming languages, this is much less common: the two I am aware of are Anaconda for Python and Stackage for Haskell. But if we look closely, this model is exactly the same as the model of most operating system distributions. As a system administrator, I often recommend my users use libraries that are provided by the operating system as much as possible. They won't take backwards incompatible changes until we do a release upgrade, and at the same time you'll still get bugfixes and security updates for your code. (You won't get the new hotness, but that's essentially contradictory with stability!)
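Concretely, Stackage distributes its blessed package sets in a form plain cabal already understands: a `cabal.config` file of exact version constraints placed next to your project's `.cabal` file. The version numbers below are invented for illustration, not a real snapshot:

```
-- cabal.config (fetched from a Stackage snapshot; versions illustrative)
constraints: aeson ==0.7.0.6,
             lens ==4.4.0.1,
             text ==1.1.1.3
```

With this in place, `cabal install` will only select the snapshot's versions, giving you the distribution's stability guarantees without changing your tooling.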
Embracing decentralization. Up until now, both of these approaches have thrown out decentralization, requiring a central authority, either the application developer or the distribution manager, for updates. Is this throwing out the baby with the bathwater? The primary downside of centralization is the huge amount of work it takes to maintain a stable distribution or keep an individual application up-to-date. Furthermore, one might not expect the entirety of the universe to be compatible with one another, but this doesn't stop subsets of packages from being useful together. An ideal decentralized ecosystem distributes the problem of identifying what subsets of packages work across everyone participating in the system. Which brings us to the fundamental, unanswered question of programming-language package management: How can we create a decentralized package ecosystem that works?
Here are a few things that can help:
- Stronger encapsulation for dependencies. One of the reasons why dependency hell is so insidious is that the dependency of a package is often an inextricable part of its outward-facing API: thus, the choice of a dependency is not a local choice, but rather a global choice which affects the entire application. Of course, if a library uses some library internally, but this choice is entirely an implementation detail, this shouldn't result in any sort of global constraint. Node.js's npm takes this choice to its logical extreme: by default, it doesn't deduplicate dependencies at all, giving each library its own copy of each of its dependencies. While I'm a little dubious about duplicating everything (it certainly occurs in the Java/Maven ecosystem), I certainly agree that keeping dependency constraints local improves composability.
- Advancing semantic versioning. In a decentralized system, it's especially important that library writers give accurate information, so that tools and users can make informed decisions. Wishful, invented version ranges and artistic version number bumps simply exacerbate an already hard problem (as I mentioned in my previous post). If you can enforce semantic versioning, or better yet, ditch semantic versions and record the true, type-level dependency on interfaces, our tools can make better choices. The gold standard of information in a decentralized system is, "Is package A compatible with package B", and this information is often difficult (or impossible, for dynamically typed systems) to calculate.
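As a toy illustration of the kind of mechanical decision accurate metadata enables, here is a hedged Haskell sketch (names invented) of checking whether an upgrade is *claimed* to be compatible under semantic versioning; whether it actually is, of course, is exactly the information semver cannot guarantee:

```haskell
-- A semver-style version and the compatibility claim it encodes.
data Version = Version { major :: Int, minor :: Int, patch :: Int }
  deriving (Show, Eq, Ord)  -- derived Ord is lexicographic: major, minor, patch

-- Under semver, an upgrade claims backwards compatibility iff the major
-- version is unchanged; in the unstable 0.x range, even minor bumps may break.
claimedCompatible :: Version -> Version -> Bool
claimedCompatible old new
  | major old == 0 = new >= old && major new == 0 && minor new == minor old
  | otherwise      = new >= old && major new == major old

main :: IO ()
main = do
  print (claimedCompatible (Version 1 2 3) (Version 1 4 0))
  print (claimedCompatible (Version 1 2 3) (Version 2 0 0))
```

A solver can compute this claim for an entire dependency graph; what it cannot compute, without type-level interface information, is whether the claim is true.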
- Centralization as a special-case. The point of a decentralized system is that every participant can make policy choices which are appropriate for them. This includes maintaining their own central authority, or deferring to someone else's central authority: centralization is a special-case. If we suspect users are going to attempt to create their own, operating system style stable distributions, we need to give them the tools to do so... and make them easy to use!
For a long time, the source control management ecosystem was completely focused on centralized systems. Distributed version control systems such as Git fundamentally changed the landscape: although Git may be more difficult to use than Subversion for a non-technical user, the benefits of decentralization are diverse. The Git of package management doesn't exist yet. If someone tells you that package management is solved ("just reimplement Bundler"), I entreat you: think about decentralization as well!