News aggregator

Edward Z. Yang: What’s a module system good for anyway?

Planet Haskell - Sat, 08/09/2014 - 5:21pm

This summer, I've been working at Microsoft Research implementing Backpack, a module system for Haskell. Interestingly, Backpack is not really a single monolithic feature, but, rather, an agglomeration of small, infrastructural changes which combine together in an interesting way. In this series of blog posts, I want to talk about what these individual features are, as well as how the whole is greater than the sum of the parts.

But first, there's an important question that I need to answer: What's a module system good for anyway? Why should you, an average Haskell programmer, care about such nebulous things as module systems and modularity? At the end of the day, you want your tools to solve specific problems you have, and it is sometimes difficult to understand what problem a module system like Backpack solves. As tomejaguar puts it: "Can someone explain clearly the precise problem that Backpack addresses? I've read the paper and I know the problem is 'modularity' but I fear I am lacking the imagination to really grasp what the issue is."

Look no further. In this blog post, I want to talk concretely about problems Haskellers have today, explain what the underlying causes of these problems are, and say why a module system could help you out.

The String, Text, ByteString problem

As experienced Haskellers are well aware, there is a multitude of string types in Haskell: String, ByteString (both lazy and strict), and Text (also both lazy and strict). To make matters worse, there is no one "correct" choice of a string type: different types are appropriate in different cases. String is convenient and native to Haskell'98, but very slow; ByteString is fast but is simply an array of bytes; Text is slower but Unicode aware.

In an ideal world, a programmer might choose the string representation most appropriate for their application, and write all their code accordingly. However, this is little solace for library writers, who don't know what string type their users are using! What's a library writer to do? There are only a few choices:

  1. They "commit" to one particular string representation, leaving users to manually convert from one representation to another when there is a mismatch. Or, more likely, the library writer used the default because it was easy. Examples: base (uses Strings because it completely predates the other representations), diagrams (uses Strings because it doesn't really do heavy string manipulation).
  2. They can provide separate functions for each variant, perhaps identically named but placed in separate modules. This pattern is frequently employed to support both strict and lazy variants of Text and ByteString. Examples: aeson (providing decode/decodeStrict for lazy/strict ByteString), attoparsec (providing Data.Attoparsec.ByteString/Data.Attoparsec.ByteString.Lazy), lens (providing Data.ByteString.Lazy.Lens/Data.ByteString.Strict.Lens).
  3. They can use type-classes to overload functions to work with multiple representations. The particular type class used hugely varies: there is ListLike, which is used by a handful of packages, but a large portion of packages simply roll their own. Examples: SqlValue in HDBC, an internal StringLike in tagsoup, and yet another internal StringLike in web-encodings.

The last two methods have different trade-offs. Defining separate functions as in (2) is a straightforward and easy-to-understand approach, but you are still saying no to modularity: the ability to support multiple string representations. Despite providing implementations for each representation, the user still has to commit to a particular representation when they do an import. If they want to change their string representation, they have to go through all of their modules and rename their imports; and if they want to support multiple representations, they'll still have to write separate modules for each of them.

Using type classes (3) to regain modularity may seem like an attractive approach. But this approach has both practical and theoretical problems. First and foremost, how do you choose which methods go into the type class? Ideally, you'd pick a minimal set, from which all other operations could be derived. However, many operations are most efficient when directly implemented, which leads to a bloated type class, and a rough time for other people who have their own string types and need to write their own instances. Second, type classes make your type signatures uglier, turning String -> String into StringLike s => s -> s, and can make type inference more difficult (for example, by introducing ambiguity). Finally, the type class StringLike has a very different character from the type class Monad, which has a minimal set of operations and laws governing their operation. It is difficult (or impossible) to characterize what the laws of an interface like this should be. All-in-all, it's much less pleasant to program against type classes than concrete implementations.
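To make the trade-off concrete, here is a minimal sketch of such a class, in the style of option (3); the names and method set here are illustrative, not any real package's API:

```haskell
{-# LANGUAGE FlexibleInstances #-}

-- A roll-your-own string class, in the spirit of tagsoup's or
-- web-encodings' internal StringLike. Method choice is illustrative.
class StringLike s where
  fromStr   :: String -> s
  toStr     :: s -> String
  appendStr :: s -> s -> s

instance StringLike String where
  fromStr   = id
  toStr     = id
  appendStr = (++)

-- Library code must now carry the constraint everywhere:
shout :: StringLike s => s -> s
shout s = appendStr s (fromStr "!")
```

Every instance author has to decide which operations to override directly for speed, which is exactly how these classes end up bloated.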

Wouldn't it be nice if I could import String, giving me the type String and operations on it, but then later decide which concrete implementation I want to instantiate it with? This is something a module system can do for you! This Reddit thread describes a number of other situations where an ML-style module would come in handy.

(PS: Why can't you just write a pile of preprocessor macros to swap in the implementation you want? The answer is, "Yes, you can; but how are you going to type check the thing, without trying it against every single implementation?")
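For the record, the preprocessor approach looks something like the sketch below; it does swap implementations, but the compiler only ever checks whichever branch happens to be enabled:

```haskell
{-# LANGUAGE CPP #-}

-- Selecting a string representation with CPP. Each configuration
-- type checks separately; nothing ever checks all of them at once.
#ifdef USE_TEXT
import Data.Text (Text, append)
type Str = Text
#else
type Str = String

append :: Str -> Str -> Str
append = (++)
#endif

greet :: Str -> Str
greet name = append name name
```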

Destructive package reinstalls

Have you ever gotten this error message when attempting to install a new package?

$ cabal install hakyll
cabal: The following packages are likely to be broken by the reinstalls:
pandoc-1.9.4.5
Graphalyze-0.14.0.0
Use --force-reinstalls if you want to install anyway.

Somehow, Cabal has concluded that the only way to install hakyll is to reinstall some dependency. Here's one way a situation like this could come about:

  1. pandoc and Graphalyze are compiled against the latest unordered-containers-0.2.5.0, which itself was compiled against the latest hashable-1.2.2.0.
  2. hakyll also has a dependency on unordered-containers and hashable, but it has an upper bound restriction on hashable which excludes the latest hashable version. Cabal decides we need to install an old version of hashable, say hashable-0.1.4.5.
  3. If hashable-0.1.4.5 is installed, we also need to build unordered-containers against this older version for Hakyll to see consistent types. However, the resulting version is the same as the preexisting version: thus, reinstall!

The root cause of this error is an invariant Cabal currently enforces on a package database: there can only be one instance of a package for any given package name and version. In particular, this means that it is not possible to install a package multiple times, compiled against different dependencies. This is a bit troublesome, because sometimes you really do want the same package installed multiple times with different dependencies: as seen above, it may be the only way to fulfill the version bounds of all packages involved. Currently, the only way to work around this problem is to use a Cabal sandbox (or blow away your package database and reinstall everything, which is basically the same thing).

You might be wondering, however: how could a module system possibly help with this? It doesn't... at least, not directly. Rather, nondestructive reinstalls of a package are a critical feature for implementing a module system like Backpack (a package may be installed multiple times with different concrete implementations of modules). Implementing Backpack necessitates fixing this problem, moving Haskell's package management a lot closer to that of Nix or npm.

Version bounds and the neglected PVP

While we're on the subject of cabal-install giving errors, have you ever gotten this error attempting to install a new package?

$ cabal install hledger-0.18
Resolving dependencies...
cabal: Could not resolve dependencies:
# pile of output

There are a number of possible reasons why this could occur, but usually it's because some of the packages involved have over-constrained version bounds (especially upper bounds), resulting in an unsatisfiable set of constraints. To add insult to injury, often these bounds have no grounding in reality (the package author simply guessed the range) and removing them would result in a working compilation. This situation is so common that Cabal has a flag --allow-newer which lets you override the upper bounds of packages. The annoyance of managing bounds has led to the development of tools like cabal-bounds, which try to make it less tedious to keep upper bounds up-to-date.

But as much as we like to rag on them, version bounds have a very important function: they prevent you from attempting to compile packages against dependencies which don't work at all! An under-constrained set of version bounds can easily result in compiling against a version of a dependency which doesn't type check.

How can a module system help? At the end of the day, version numbers are trying to capture something about the API exported by a package, as described by the package versioning policy. But the current state of the art requires a user to manually translate changes to the API into version numbers: an error-prone process, even when assisted by various tools. A module system, on the other hand, turns the API into a first-class entity understood by the compiler itself: a module signature. Wouldn't it be great if packages depended upon signatures rather than versions? Then you would never have to worry about version numbers being inaccurate with respect to type checking. (Of course, versions would still be useful for recording changes to semantics not seen in the types, but their role here would be secondary in importance.) Some full disclosure is warranted here: I am not going to have this implemented by the end of my internship, but I'm hoping to make some good infrastructural contributions toward it.

Conclusion

If you skimmed the introduction to the Backpack paper, you might have come away with the impression that Backpack is something about random number generators, recursive linking and applicative semantics. While these are all true "facts" about Backpack, they understate the impact a good module system can have on the day-to-day problems of a working programmer. In this post, I hope I've elucidated some of these problems, even if I haven't convinced you that a module system like Backpack actually goes about solving these problems: that's for the next series of posts. Stay tuned!

Categories: Offsite Blogs

Literate Futoshiki solver

Haskell on Reddit - Sat, 08/09/2014 - 2:10pm
Categories: Incoming News

ANN: immortal-0.1

haskell-cafe - Sat, 08/09/2014 - 1:57pm
I am pleased to announce the first release of the "immortal" package. Immortal allows you to create threads that never die. If the action executed by such a thread finishes or is killed by an exception, it is automatically restarted. Such a behavior is often desired when writing servers, so I decided to release a package that does exactly that.

http://hackage.haskell.org/package/immortal
https://github.com/feuerbach/immortal

Roman
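A minimal usage sketch, going by the package description above (the exact type of create in immortal-0.1 may differ from what is assumed here):

```haskell
import qualified Control.Immortal as Immortal
import Control.Concurrent (threadDelay)

main :: IO ()
main = do
  -- The worker below exits after a second, but immortal restarts it.
  thread <- Immortal.create $ \_thread -> do
    putStrLn "serving..."
    threadDelay 1000000
  threadDelay 5000000
  Immortal.stop thread  -- the only way the worker actually dies
```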
Categories: Offsite Discussion

Proposal: Export Data.Word.Word from Prelude

libraries list - Sat, 08/09/2014 - 12:38pm
Hello *,

Proposal
--------
I hereby propose to export Haskell2010's Data.Word.Word type from the Prelude.

Motivation
----------
Starting with Haskell2010, "Data.Word" exporting the type 'Word', "an unsigned integral type, with the same size as 'Int'", became part of the Haskell Report. 'Word' is usually a better choice than 'Int' when non-negative quantities (such as list lengths, bit or vector indices, or number of items in a container) need to be represented. Currently however, 'Word' is at a disadvantage (with respect to 'Int') in terms of public exposure by being accessible only after an "import Data.Word". Moreover, since 'Word' is now part of the Haskell Report, libraries should attempt to avoid name-clashing with 'Word' (and if they do, it ought to be a conscious decision, declared by requiring an "import Prelude hiding (Word)"). While one might think 'Word' would be a popular type-name to use in Haskell code, the current level of name collision is still rather low (as is show
Categories: Offsite Discussion

Lambdaheads - Vienna Functional Programming

Haskell on Reddit - Sat, 08/09/2014 - 3:16am

Friends of Functional Programming!

I'd like to announce the monthly meeting of the Lambdaheads (https://metalab.at/wiki/Lambdaheads), a functional programming group based in Vienna. We will meet at the Metalab library in Rathausstraße 6, 1010 Vienna at 19:30.

I will prepare a talk on Record Syntax - Lenses and Prisms and I hope we will have another talk about "STM with Finalizers". I hope my talk will be beginner-friendly, whereas I guess the STM talk will be a bit more advanced.

If you are interested in functional programming, be it Haskell or a different language, come and join us for a nice evening.

submitted by epsilonhalbe
Categories: Incoming News

language-puppet: 7 Startups - part 5 - the XMPP backend

Planet Haskell - Sat, 08/09/2014 - 2:48am

Note: I ran out of time weeks ago. I could never finish this series as I envisioned, and I don't see much free time on the horizon. Instead of letting this linger forever, here is a truncated conclusion. The previous episodes were:

  • Part 1 : probably the best episode, about the basic game types.
  • Part 2 : definition of the game rules in an unspecified monad.
  • Part 3 : writing an interpreter for the rules.
  • Part 4 : stumbling and failure in writing a clean backend system.

In the previous episode I added a ton of STM code and helper functions in several 15-minute sessions. The result was not pretty, and left me dissatisfied.

For this episode, I decided to loosen my requirements. For now, I am only going to support the following:

  • The backend list will not be dynamic: a bunch of backends are going to be registered once, and it will not be possible to remove an existing backend or add a new one once this is done.
  • The backends will be text-line based (XMPP and IRC are good protocols for this). This will unfortunately make it harder to write a nice web interface for the game, but given how much time I can devote to this side-project this doesn't matter much …

The MVC paradigm

A great man once said that “if you have category theory, everything looks like a pipe. Or a monad. Or a traversal. Or perhaps it’s a cosomething”. With the previously mentioned restrictions, I was able to shoehorn my problem into the shape of the mvc package, which I had wanted to try for a while. It might be a bit different from what people usually expect when talking about the model - view - controller pattern, and is basically:

  • Some kind of pollable input (the controllers),
  • a pure stream based computation (the model), sporting an internal state and transforming the data coming from the inputs into something that is passed to …
  • … IO functions that run the actual effects (the views).

Each of these components can be reasoned about separately, and combined together in various ways.

There is however one obvious problem with this pattern, due to the way the game is modeled. Currently, the game is supposed to be able to receive data from the players, and to send data to them. It would need to live entirely in the model for this to work as expected, but the way it is currently written doesn’t make it obvious.

It might be possible to make the game explicitly CPS, so that the pure part would run the game until communication with the players is required, which would translate nicely into an output that could be consumed by a view.
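Concretely, the game could be given a type along these lines (a sketch with placeholder types, not the actual 7 Startups code):

```haskell
type PlayerId = String  -- placeholder for illustration

-- A game computation that either finishes, or suspends with a
-- message for a player and a continuation awaiting the reply.
data GameStep msg reply a
  = Done a
  | NeedInput PlayerId msg (reply -> GameStep msg reply a)

-- The pure model would run the game until the next NeedInput, emit
-- msg as an output consumed by a view, and resume the stored
-- continuation when the player's reply arrives as a controller event.
```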

This would however require some refactoring and a lot of thinking, which I currently don't have time for, so here is instead how the information flows:

Here PInput and GInput are the types of the inputs (respectively from players and games). The blue boxes are two models that will be combined together. The pink ones are the types of outputs emitted from the models. The backends serve as drivers for player communication. The games run in their respective threads, and the game manager spawns and manages the game threads.

Comparison with the “bunch of STM functions” model

I originally started with a global TVar containing the state information of each player (for example whether they are part of a game, still joining, expected to answer a game query, etc.). There were a bunch of “helper functions” that would manipulate the global state in a way that would ensure its consistency. The catch is that the backends were responsible for calling these helper functions at appropriate times and for not messing with the global state.

The MVC pattern forces the structure of your program. In my particular case, it means a trick is necessary to integrate it with the current game logic (that will be explained later). The “boSf” (bunch of STM functions) pattern is more flexible, but carries a higher cognitive cost.

With the “boSf” pattern, responses to player inputs could be:

  • Messages to players, which fit well with the model, as they happened over STM channels, so the whole processing / state manipulation / player output could be of type Input -> STM ().
  • Spawning a game. This time we need forkIO and state manipulation. This means a type like c :: Input -> STM (IO ()), with a call like join (atomically (c input)).

Now there are helper functions that return an IO action, and some that don't. When some functionality is added, some functions need to start returning IO actions. This is ugly and makes the system harder to extend.
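The two shapes of helper described above look roughly like this (placeholder types, not the actual 7 Startups code):

```haskell
import Control.Concurrent.STM
import Control.Monad (join)

data Input     = Input      -- placeholders for illustration
data GameState = GameState

-- A helper that stays inside STM: state changes and channel writes.
handleMessage :: TVar GameState -> Input -> STM ()
handleMessage st _ = writeTVar st GameState

-- A helper that must also perform IO (e.g. forkIO a game thread)
-- has to hand the IO action back for the caller to run afterwards.
spawnGame :: TVar GameState -> Input -> STM (IO ())
spawnGame st _ = do
  writeTVar st GameState
  return (putStrLn "game spawned")  -- stands in for a forkIO call

run :: TVar GameState -> Input -> IO ()
run st input = join (atomically (spawnGame st input))
```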

Conclusion of the series

Unfortunately I ran out of time for working on this series a few weeks ago. The code is out, the game works and it's fun. My original motivation for writing this series was as an exposition of basic type-directed design for my non-Haskeller friends, but I think it's not approachable to non-Haskellers, so I never showed it to them.

The main takeaways are :

Game rules

The game rules were first written against an unspecified monad that exposed several functions required for user interaction. That's the reason I started by defining a typeclass: that way I wouldn't have to worry about implementing the "hard" part and could concentrate on writing the rules instead. For me, this was the fun part, and it was also the quickest.

As for the implementation of the aforementioned functions, I used the operational package, which let me write interpreters for my game rules. One of them is pure, and used in tests. There are two other interpreters: one for the console version of the game, the other for the multi-backend system.

Backend system

The backends are, I think, easy to extend. Building the core of the multi-game logic with the mvc package was very straightforward. It would be easy to add an IRC backend alongside the XMPP one, if there weren't so many IRC packages to choose from on Hackage …

A web backend doesn't seem terribly complicated to write, until you want to take into account some common web application constraints, such as having several redundant servers. In order to do so, the game interpreter would have to be turned into an explicit continuation-like system (with the twist that it only returns on blocking calls), and the game state serialized in a shared storage system.

Bugs

My main motivation was to show that it was possible to eliminate tons of bug classes by encoding the invariants in the type system. I would say this was a success.

The area where I expected a ton of problems was the card list. It's a tedious manual process, but some tests weeded out most of the errors (it helps that there are some properties that can be verified on the deck). The other was the XMPP message processing, in all its XML horror. It looks terrible.

The area where I wanted this process to work well was a success: I wrote the game rules in one go, without any feedback. Once they were completed, I wrote the backends and tested the game. It turned out there were very few bugs, especially considering that the game is a moderately complicated board game:

  • One of the special capabilities was replaced with another, and handled at the wrong moment in the game. This was quickly debugged.
  • I used traverse instead of both for tuples. I expected them to have the same result, and it “typechecked” because my tuple was of type (a,a), but the Applicative instance for tuples made it obvious this wasn’t the case. That took a bit longer to find out, as it impacted half of the military victory points, which are distributed only three times per game.
  • I didn’t listen to my own advice, and didn’t take the time to properly encode that some functions only worked with nonempty lists as arguments. This was also quickly found out, using quickcheck.
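The traverse/both confusion from the second bullet is easy to reproduce in isolation; with the list applicative, traverse on a pair only visits the second component, while lens's both visits each side:

```haskell
import Control.Lens (both, traverseOf)

main :: IO ()
main = do
  -- The pair Traversable leaves the first component untouched...
  print (traverse (\x -> [x + 1]) (1, 2) :: [(Int, Int)])        -- [(1,3)]
  -- ...whereas both maps over each side of a homogeneous tuple.
  print (traverseOf both (\x -> [x + 1]) (1, 2) :: [(Int, Int)]) -- [(2,3)]
```

Both expressions type check when the tuple is (a, a), which is exactly why the mistake slipped through.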

The game seems to run fine now. There are a few minor rule bugs identified (the interaction between card-recycling abilities and the last turn, for example), but I don't have time to fix them.

There might be some interest in the types of the Hub, as they also encode a lot of invariants.

Also off-topic, but I really like using the lens vocabulary to encode the relationship between types these days. A trivial example can be found here.

The game

That might be the most important part. I played a score of games, and it was a lot of fun. The game is playable, and just requires a valid account on an XMPP server. Have fun!

Categories: Offsite Blogs

Need help understanding lazysplines

Haskell on Reddit - Fri, 08/08/2014 - 11:28pm

The documentation is rather sparse, but from reading the source and trying to understand the examples, it seems that duckDeathAtAge defines a piece-wise function where duckDeathAtAge at x gives the probability that the duck will die at age x. Then, survival at x gives the probability that the duck will live till age x. I'm not fully understanding how the recursion works here and the other examples seem almost impenetrable.

Some googling led me to the announcement for a talk by the author but I couldn't find slides/video in the websites linked to at the end of the paper.

Can anyone with more experience with the library point me to additional resources to learn about the topic?

submitted by precalc
Categories: Incoming News

Yesod Web Framework: Deprecating yesod-platform

Planet Haskell - Fri, 08/08/2014 - 11:10pm

I want to deprecate the yesod-platform, and instead switch to Stackage server as the recommended installation method for Yesod for end users. To explain why, let me explain the purpose of yesod-platform, the problems I've encountered maintaining it, and how Stackage Server can fit in. I'll also explain some unfortunate complications with Stackage Server.

Why yesod-platform exists

Imagine a simpler Yesod installation path:

  1. cabal install yesod-bin, which provides the yesod executable.
  2. yesod init to create a scaffolding.
  3. cabal install inside that directory, which downloads and installs all of the necessary dependencies.

This in fact used to be the installation procedure, more or less. However, this led to a number of user problems:

  • Back in the earlier days of cabal-install, it was difficult for the dependency solver to find a build plan in this situation. Fortunately, cabal-install has improved drastically since then.
    • This does still happen occasionally, especially with packages with restrictive upper bounds. Using --max-backjumps=-1 usually fixes that.
  • It sometimes happens that an upstream package from Yesod breaks Yesod, either by changing an API accidentally, or by introducing a runtime bug.

This is where yesod-platform comes into play. Instead of leaving it up to cabal-install to track down a consistent build plan, it specifies exact versions of all dependencies to ensure a consistent build plan.

Conflicts with GHC deps/Haskell Platform

Yesod depends on aeson. So logically, yesod-platform should have a strict dependency on aeson. We try to always use the newest versions of dependencies, so today, that would be aeson == 0.8.0.0. In turn, this demands text >= 1.1.1.0. However, if you look at the Haskell Platform changelog, there's no version of the platform that provides a new enough version of text to support that constraint.

yesod-platform could instead specify an older version of aeson, but that would unnecessarily constrain users who aren't sticking to the Haskell Platform versions (which, in my experience, is the majority of users). This would also cause more dependency headaches down the road, as you'd now also need to force older versions of packages like criterion.

To avoid this conflict, yesod-platform has taken the approach of simply omitting constraints on any packages in the platform, as well as any packages with strict bounds on those packages. And if you look at yesod-platform today, you'll see that there is no mention of aeson or text.

A similar issue pops up for packages that are a dependency of the GHC package (a.k.a., GHC-the-library). The primary problem there is the binary package. In this case, the allowed version of the package depends on which version of GHC is being used, not the presence or absence of the Haskell Platform.

This results in two problems:

  • It's very difficult to maintain this list of excluded packages correctly. I get a large number of bug reports about these kinds of build plan problems.

  • We're giving up quite a bit of the guaranteed buildability that yesod-platform was supposed to provide. If aeson 0.7.0.4 (as an example) doesn't work with yesod-form, yesod-platform won't be able to prevent such a build plan from happening.

There's also an issue with the inability to specify dependencies on executable-only packages, like alex, happy, and yesod-bin.

Stackage Server

Stackage Server solves exactly the same problem. It provides a consistent set of packages that can be installed together. Unlike yesod-platform, it can be distinguished based on GHC version. And it's far simpler to maintain. Firstly, I'm already maintaining Stackage Server full time. And secondly, all of the testing work is handled by a very automated process.

So here's what I'm proposing: I'll deprecate the yesod-platform package, and change the Yesod quickstart guide to have the following instructions:

  • Choose an appropriate Stackage snapshot from stackage.org
  • Modify your cabal config file appropriately
  • cabal install yesod-bin alex happy
  • Use yesod init to set up a scaffolding
  • cabal install --enable-tests in the new directory
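The cabal config change in the second step amounts to one line; the exact URL (with its snapshot digest) is shown on the snapshot's page at stackage.org, so the digest below is just a placeholder:

```
remote-repo: stackage:http://www.stackage.org/stackage/<snapshot-digest>
```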

For users wishing to live on more of a bleeding edge, the option is always available to simply not use Stackage. Such a usage will give more control over package versions, but will also lack some stability.

The problems

There are a few issues that need to be ironed out.

  • cabal sandbox does not allow changing the remote-repo. Fortunately, Luite seems to have this solved, so hopefully this won't be a problem for long. Until then, you can either use a single Stackage snapshot for all your development, or use a separate sandboxing technology like hsenv.

  • Haskell Platform conflicts still exist. The problem I mentioned above with aeson and text is a real problem. The theoretically correct solution is to create a Stackage snapshot for GHC 7.8 + Haskell Platform. And if there's demand for that, I'll bite the bullet and do it, but it's not an easy bullet to bite. But frankly, I'm not hearing a lot of users saying that they want to peg Haskell Platform versions specifically.

    In fact, the only users who really seem to want to stick to Haskell Platform versions are Windows users, and the main reason for this is the complexity in installing the network package on Windows. I think there are three possible solutions to this issue, without forcing Windows users onto old versions of packages:

    1. Modify the network package to be easier to install on Windows. I really hope this has some progress. If this is too unstable to be included in the official Hackage release, we could instead have an experimental Stackage snapshot for Windows with that modification applied.
    2. Tell Windows users to simply bypass Stackage and yesod-platform, with the possibility of more build problems on that platform.
      • We could similarly recommend Windows users develop in a Linux virtual machine/Docker image.
    3. Provide a Windows distribution of GHC + cabal-install + network. With the newly split network/network-uri, this is a serious possibility.

Despite these issues, I think Stackage Server is a definite improvement on yesod-platform on Linux and Mac, and will likely still improve the situation on Windows, once we figure out the Haskell Platform problems.

I'm not making any immediate changes. I'd very much like to hear people using Yesod on various operating systems to see how these changes will affect them.

Categories: Offsite Blogs

Get out structured data from a C/C++ library inHaskell

haskell-cafe - Fri, 08/08/2014 - 3:34pm
Hello everybody. I'm new to the list so I'd like to say hello to you. I'm a student of computer science and an early practitioner of Haskell. I've decided to implement the next project in Haskell but I need to interface a C++ library. I've read all the wiki material about the matter and I have an understanding of how FFI to C works in general.

The library in question is a SAT solver for LTL formulas, and all I need to do is to be able to create the AST of the formula (Haskell will do the parsing) and pass it to the library, and then get back the reply. From the C++ point of view, the AST of the formulas consists simply of objects linked together with raw pointers. Nodes are of the same type, with an internal enum that specifies the type of node, so it's not an inheritance hierarchy or fancy things...

What I would like to do is to be able to declare an algebraic data type that represents the AST and somehow mangle it to a form that can be passed to the C++ function of the library (that I can wrap into an ex
Categories: Offsite Discussion

Data families and classes to simulate GADT example

Haskell on Reddit - Fri, 08/08/2014 - 2:25pm

This works

data One a where
  A :: One ()
  B :: One Int
  C :: One Int

blah :: One a -> a
blah A = ()
blah B = 10
blah C = 42

but this doesn't:

class One a where
  data Two a

instance One () where
  data Two () = A

instance One Int where
  data Two Int = B | C

blah :: One a => Two a -> a
blah A = undefined
blah _ = undefined

Presumably because it doesn't have a proof that unifies a with (), but is there any way to get it to work?

submitted by haskellthrowaway
Categories: Incoming News

How are polymorphic numeric functions implemented?

Haskell on Reddit - Fri, 08/08/2014 - 2:08pm

Are numbers boxed with a vtable for looking up functions at runtime, or, since Haskell knows the concrete types at compile time (I think?), is it like C++ templates, where a copy of each function is created for each concrete type that is needed? If it's the latter, and you had a gigantic program that right at the beginning "switched" on a value read in from the user into branches for X different number types, would your executable be X times bigger? Like:

switch (userInput) {
  case 1: longCalculation (read n :: Double);
  case 2: longCalculation (read n :: Integer);
  ...
}

This has no relevance to anything, just weird curiosity.
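For what it's worth, the answer is closer to the first option: by default GHC compiles one copy of a polymorphic function and passes a dictionary of the class methods at runtime, specializing only where it chooses to. A sketch of that dictionary translation, with made-up names rather than GHC's actual Core:

```haskell
-- What a Num-polymorphic function roughly desugars to: the class
-- becomes a record of functions passed as an ordinary argument.
data NumDict a = NumDict
  { plusD    :: a -> a -> a
  , fromIntD :: Integer -> a
  }

doubleIt :: NumDict a -> a -> a
doubleIt d x = plusD d x x

intDict :: NumDict Int
intDict = NumDict (+) fromInteger

-- One compiled copy of doubleIt serves every numeric type, so the
-- executable does not grow per type.
```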

submitted by chromaticburst
Categories: Incoming News

Brent Yorgey: Maniac week

Planet Haskell - Fri, 08/08/2014 - 1:43pm

Inspired by Bethany Soule (and indirectly by Nick Winter, and also by the fact that my dissertation defense and the start of the semester are looming), I am planning a “maniac week” while Joyia and Noah will be at the beach with my family (I will join them just for the weekend). The idea is to eliminate as many distractions as possible and to do a ton of focused work. Publicly committing (like this) to a time frame, ground rules, and to putting up a time-lapse video of it afterwards are what actually make it work—if I don’t succeed I’ll have to admit it here on my blog; if I waste time on Facebook the whole internet will see it in the video; etc. (There’s actually no danger of wasting time on Facebook in particular since I have it blocked, but you get the idea.)

Here are the rules:

  • I will start at 6pm (or thereabouts) on Friday, August 8.
  • I will continue until 10pm on Wednesday, August 13, with the exception of the morning of Sunday, August 10 (until 2pm).
  • I will get at least 7.5 hours of sleep each night.
  • I will not eat cereal for any meal other than breakfast.
  • I will reserve 3 hours per day for things like showering, eating, and just plain resting.  Such things will be tracked by the TagTime tag “notwork”.
  • I will spend the remaining 13.5 hours per day working productively. Things that will count as productive work:
    • Working on my dissertation
    • Course prep for CS 354 (lecture and assignment planning, etc.) and CS 134 (reading through the textbook); making anki decks with names and faces for both courses
    • Updating my academic website (finish converting to Hakyll 4; add potential research and independent study topics for undergraduates)
    • Processing FogBugz tickets
    • I may work on other research or coding projects (e.g. diagrams) each day, but only after spending at least 7 hours on my dissertation.
  • I will not go on IRC at all during the week.  I will disable email notifications on my phone (but keep the phone around for TagTime), and close and block gmail in my browser.  I will also disable the program I use to check my UPenn email account.
  • For FogBugz tickets which require responding to emails, I will simply write the email in a text file and send it later.
  • I may read incoming email and write short replies on my phone, but will keep it to a bare minimum.
  • I will not read any RSS feeds during the week.  I will block feedly in my browser.
  • On August 18 I will post a time-lapse video of August 8-13.  I’ll probably also write a post-mortem blog post, if I feel like I have anything interesting to say.
  • I reserve the right to tweak these rules (by editing this post) up until August 8 at 6pm.  After that point it’s shut up and work time, and I cannot change the rules any more.

And no, I’m not crazy. You (yes, you) could do this too.


Categories: Offsite Blogs