News aggregator

Wolfgang Jeltsch: List of publications and talks updated

Planet Haskell - Tue, 09/30/2014 - 10:22am

The list of publications and talks on my website had been pretty much out of date. Now it is updated, for you to enjoy. :-)


Tagged: publication, talk
Categories: Offsite Blogs

Why do I get this error message?

Haskell on Reddit - Tue, 09/30/2014 - 8:57am

I've only just started using Haskell and finally got it working in Emacs, but when I try to use a function I get this error message. Why is this?

submitted by willrobertshaw
[link] [4 comments]
Categories: Incoming News

ANN: Nomyx V0.7, the only game where you can change the rules

Haskell on Reddit - Tue, 09/30/2014 - 8:36am

I released Nomyx V0.7, the only game where you can change the rules!
Here is the website: www.nomyx.net
Here is a video introduction of the game: http://vimeo.com/58265498
I created a tutorial to learn how to play: http://www.corentindupont.info/blog/posts/2014-09-23-first-Nomyx-tutorial.html

Let's start a new game! I propose to call it "The Space Merchants". Dear rulers, this part of the Galaxy is yours...
Please login to the game here: www.nomyx.net:8000/Nomyx.
The corresponding forum thread is here: http://www.nomyx.net/forum/viewtopic.php?f=4&t=1525&p=1739
Please also register on the mailing list to follow the game: nomyx-game@googlegroups.com

Some background: this is an implementation of a Nomic [1] game in Haskell (I believe the first complete implementation of a Nomic game on a computer). In a Nomyx game you can change the rules of the game itself while playing it. The players can submit new rules or modify existing ones, thus completely changing the behaviour of the game over time. The rules are managed and interpreted by the computer. They must be written in the Nomyx language, which is a subset of Haskell.

[1] www.nomic.net

submitted by kaukau
[link] [5 comments]
Categories: Incoming News

Yesod 1.4 released

Haskell on Reddit - Tue, 09/30/2014 - 5:20am
Categories: Incoming News

Yesod Web Framework: Announcing Yesod 1.4

Planet Haskell - Tue, 09/30/2014 - 5:00am

We are happy to announce the release of Yesod 1.4. This includes:

  • Releases of all Yesod packages to support version 1.4.
  • The book content on yesodweb.com is completely updated for Yesod 1.4, with all snippets confirmed to compile and most of the text proofread from scratch for accuracy (the rest will be finished in the next week).
  • A new Stackage snapshot available for GHC 7.8.3.

It's worth mentioning that there have been a ton of improvements to Yesod since version 1.2; they just didn't need any breaking changes.

Thanks to everyone who provided code, feedback, and testing for this release; it should be a very solid one!

Here's a collection of links that provide various other pieces of information about this release:

Changelog

The most exciting thing to report is that this was a very small change to Yesod, and therefore most code should be upgradeable with only minor changes. First, the changelog of breaking changes:

New routing system with more overlap checking control

This requires OverloadedStrings and ViewPatterns. The generated code is faster and much more readable.

Yesod routes are not just type-safe, they are also checked for overlaps that could cause ambiguity. This is a great feature, but sometimes it gets in your way. Overlap checking can be turned off for multipieces, entire routes, and parent routes in a hierarchy. For more information, see the commit comment.

Dropped backwards compatibility with older versions of dependencies

In particular, persistent-1 and wai-2. We will talk more about persistent 2 below. wai-3 uses a CPS style that will require some middleware to take an additional CPS parameter. Looking at the wai-extra source code can help with upgrading, but it should just be a matter of adding an extra parameter.

yesod-auth works with your database and your JSON

There is better support for non-persistent backends in yesod-auth. See pull request 821 for details. Most users can fix this by adding instance YesodAuthPersist App to their Foundation.hs.

yesod-auth had already released a breaking change to be able to accept JSON everywhere. That bumped the version to 1.3. We like to keep the yesod-* packages in sync, so now everything is getting bumped to 1.4 together.

In the 1.4 release, we also fixed requireAuth and requireAuthId to return a 401 response when a JSON response is requested. See pull request 783.

yesod-test sends HTTP/1.1 as the version

This may require updating tests to expect 303 instead of 302 redirects.

Type-based caching with keys

The type-based caching code was moved into a separate module without Yesod dependencies and documented. If there is interest in seeing this as a separate package, let us know, but it is also pretty easy to just copy the module.

To me, TypeCache is a beautiful demonstration of Haskell's advanced type system that shows how you can get the best of both worlds in a strongly typed language.

type TypeMap = HashMap TypeRep Dynamic

Above we have the wonderful juxtaposition of Haskell's strong typing in the key and dynamic typing in the value. This HashMap is used to cache the result of a monadic action.

cached :: (Monad m, Typeable a)
       => TypeMap
       -> m a                        -- ^ cache the result of this action
       -> m (Either (TypeMap, a) a)  -- ^ Left is a cache miss, Right is a hit

Dynamic is used to have a HashMap of arbitrary value types. TypeRep is used to create a unique key for the cache. Yesod uses this to cache the authentication lookup of the database for the duration of the request.
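To see the mechanics of this, here is a self-contained sketch of mine (not the Yesod module itself: it uses Data.Map from containers instead of HashMap so it runs with a stock GHC):

```haskell
{-# LANGUAGE ScopedTypeVariables #-}
import Data.Dynamic (Dynamic, toDyn, fromDynamic)
import Data.Typeable (Typeable, TypeRep, typeRep)
import Data.Proxy (Proxy (..))
import qualified Data.Map as Map

-- Illustrative stand-in for the HashMap used by the real module.
type TypeMap = Map.Map TypeRep Dynamic

-- On a hit the action is never run; on a miss we run it and return
-- the extended map alongside the result.
cached :: forall m a. (Monad m, Typeable a)
       => TypeMap -> m a -> m (Either (TypeMap, a) a)
cached tm action =
    case Map.lookup rep tm >>= fromDynamic of
        Just v  -> return (Right v)
        Nothing -> do
            v <- action
            return (Left (Map.insert rep (toDyn v) tm, v))
  where
    rep = typeRep (Proxy :: Proxy a)

main :: IO ()
main = do
    Left (tm, x) <- cached Map.empty (return (42 :: Int))  -- miss
    Right y      <- cached tm (error "not run")            -- hit
    print (x, y :: Int)                                    -- prints (42,42)
```

On the second call the cached value is returned and the action (here deliberately error) is never executed.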

newtype CachedMaybeAuth val = CachedMaybeAuth { unCachedMaybeAuth :: Maybe val }
    deriving Typeable

cachedAuth = fmap unCachedMaybeAuth . cached . fmap CachedMaybeAuth . getAuthEntity

CachedMaybeAuth is a newtype that isn't exported. TypeRep is specific to a module, so this pattern guarantees that your cache key will not conflict outside of your module.

This functionality was already in yesod-1.2, even though the code was not separated into its own module. The 1.4 release adds the ability to cache multiple values per type.

type KeyedTypeMap = HashMap (TypeRep, ByteString) Dynamic

cachedBy :: (Monad m, Typeable a)
         => KeyedTypeMap
         -> ByteString                      -- ^ a cache key
         -> m a                             -- ^ cache the result of this action
         -> m (Either (KeyedTypeMap, a) a)  -- ^ Left is a cache miss, Right is a hit

This is useful if your monadic action has inputs: if you serialize them to a ByteString you can use them as a key.

Upgrade guide

The most significant set of changes in the Yesod ecosystem actually landed in Persistent 2. However, these were mostly internal changes with new features that maintain backwards compatibility, so many users will be unaffected.

To kick off the upgrade process, you need to update your cabal file to allow yesod version 1.4. If you had constraints on persistent, update them to > 2.1. If you are using cabal freeze to peg your versions in the cabal.config file, cabal will provide you no assistance in making a smooth upgrade. You will probably want to delete a whole lot of entries in cabal.config (or possibly the entire file) and upgrade a lot of dependencies at once. When you are done and things compile again, run cabal freeze once more.

As has become the custom for each major release, the upgrade process is documented by the diff of the Haskellers code base upgrading to Yesod 1.4. For Haskellers it was pretty simple.

In sum:

  • Replace type YesodPersistBackend App = SqlPersist with type YesodPersistBackend App = SqlBackend.
  • Add instance YesodAuthPersist App to Foundation.hs.
  • Add the ViewPatterns language extension.

If you have more complex persistent code you may have more to do; look at the previous post on persistent-2.1.

Categories: Offsite Blogs

Well-Typed.Com: How we might abolish Cabal Hell, part 1

Planet Haskell - Tue, 09/30/2014 - 4:19am

At ICFP a few weeks ago a hot topic in the corridors and in a couple of talks was the set of issues surrounding packaging and “Cabal Hell”.

Fortunately we were not just discussing problems but solutions. Indeed I think we have a pretty good understanding now of where we want to be, and several solutions are in development or have reasonably clear designs in people’s heads.

I want to explain what’s going on for those not already deeply involved in the conversation. So this is the first of a series of blog posts on the problems and solutions around Cabal hell.

There are multiple problems and multiple solutions. The solutions overlap in slightly complicated ways. Since it is a bit complicated, I’m going to start with the big picture of the problems and solutions and how they relate to each other. In subsequent posts I’ll go into more detail on particular problems and solutions.

“Cabal hell”: the problems

So what is “Cabal hell”? Let’s consult the dictionary…

Cabal Hell

The feeling of powerlessness one has when Cabal does not do what one wanted and one does not know how to fix it.

I’m joking obviously, but my point is that Cabal hell is not a precise technical term. There are a few different technical problems (and misunderstandings and UI problems) that can cause Cabal hell.

A useful concept when talking about this topic is that of the packaging “wild wild west”. What we mean is whether we are in a context where we reasonably expect packages to work together (because there has been some deliberate effort to make them work together), or whether we are in the “wild wild west”. In the “wild wild west” we have to do things like deal with packages that were uploaded yesterday by multiple different authors. The point is that nobody has yet had time to try and make things consistent. It is a useful concept because we have developers who need to deal with the “wild wild west” and those who would really rather not, and the solutions tend to look a bit different.

Another term we often use when talking about packages is “consistency”. What we mean is that in a collection of packages there is at most one version of each package. For example when you ask cabal-install to install packages A and B, we say that it will try to find a “consistent” set of dependencies – meaning a set including A, B and their dependencies that has only one version of any package.
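To make the definition concrete, here is a tiny consistency check (a hypothetical helper of my own, not cabal-install code): a set of package choices is consistent when each package name maps to exactly one version.

```haskell
import qualified Data.Map as Map
import qualified Data.Set as Set

type Pkg     = String
type Version = [Int]

-- Group the chosen versions by package name; the plan is consistent
-- when every package ends up with exactly one distinct version.
consistent :: [(Pkg, Version)] -> Bool
consistent deps =
    all ((== 1) . Set.size) . Map.elems $
        Map.fromListWith Set.union [ (p, Set.singleton v) | (p, v) <- deps ]

main :: IO ()
main = do
    print (consistent [("A", [1,0]), ("B", [2,0]), ("text", [1,1])])     -- True
    print (consistent [("A", [1,0]), ("text", [1,1]), ("text", [1,2])])  -- False
```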

“Cabal hell”: the symptoms

So let’s consider a breakdown of the technical problems, starting with the symptoms that a developer in Cabal Hell experiences.


We can first break things down by whether there is a solution or not. That is, whether a perfect dependency resolver could find a plan to install the package(s) and their dependencies consistently. We want such a solution because it’s a prerequisite for installing working packages. (We’re ignoring the possibility that there is a solution but the solver fails to find one. That is possible but it’s a relatively rare problem.)

Given the situation where the solver tells us that there is no solution, there are a few different cases to distinguish:

No solution expected

The failure was actually expected. For example a developer updating their package to work with the latest version of GHC is not going to be surprised if their initial install attempt fails. Then based on what the solver reports they can work out what changes they need to make to get things working.

Solution had been expected

The more common case is that the developer was not expecting to be working in the wild west. The developer had an expectation that the package or packages they were asking for could just be installed. In this case the answer “no that’s impossible” from the solver is very unhelpful, even though it’s perfectly correct.

Unnecessary solver failure

The symptoms here are exactly the same, namely the solver cannot find a solution, but the reason is different. More on reasons in a moment.

Even when there is a solution we can hit a few problems:

Compile error

Compilation can fail because some interface does not match. Typically this will manifest as a naming error or type error.

Breaking re-installations

Cabal’s chosen solution would involve reinstalling an existing version of a package but built with different dependencies. This re-installation would break any packages that depend on the pre-existing instance of the installed package. By default cabal-install will not go ahead with such re-installation, but you can ask it to do so.

Type errors when using packages together

It is possible to install two packages and then load them both in GHCi, only to find that you cannot use them together because you get type errors when composing things defined in the two different packages.

“Cabal hell”: the reasons

So those are the major problems. Let’s look at some reasons for those problems.


Inconsistent versions of dependencies required

There are two sub-cases worth distinguishing here. One is where the developer is asking for two or more packages that could be installed individually, but cannot be installed and used together simultaneously because they have clashing requirements on their common dependencies. The other is that a package straightforwardly has no solution (at least with the given compiler & core library versions), because of conflicting constraints of its dependencies.

Constraints wrong

With under-constrained dependencies we get build failures, and with over-constrained dependencies we get unnecessary solver failures. That is, a build failure is (almost always) due to dependency constraints saying some package version combination should work, when actually it does not. And the dual problem: an unnecessary solver failure is the case where there would have been a solution that would actually compile, if only the constraints had been more relaxed.

Single instance restriction

Existing versions of GHC and Cabal let you install multiple versions of a package, but not multiple instances of the same version of a package. This is the reason why Cabal has to reinstall packages, rather than just add packages.

Inconsistent environment

These errors occur because cabal-install does not enforce consistency in the developer’s environment, just within any set of packages it installs simultaneously.

We’ll go into more detail on all of these issues in subsequent posts, so don’t worry if these things don’t fully make sense yet.

“Cabal hell”: the solutions

There are several problems and there isn’t one solution that covers them all. Rather there are several solutions. Some of those solutions overlap with each other, meaning that for some cases either solution will work. The way the solutions overlap with the problems and each other is unfortunately a bit complicated.

Here’s the overview:


So what does it all mean?

We’ll look at the details of the solutions in subsequent posts. At this stage the thing to understand is which solutions cover which problems, and where those solutions overlap.

We’ll start with the two most important solutions. They’re the most important in the sense that they cover the most cases.

Nix-style persistent store with multiple consistent environments

This solves all the cases of breaking re-installations, and all cases of inconsistent environments. It doesn’t help with wrong constraints.

You’ll note that it covers some cases where there is no solution and you might wonder what this can mean. Some cases where there is no solution are due to two (or more) sets of packages that could be installed independently but cannot be installed together consistently. In a nix-style setting it would be possible to offer developers the option to install the packages into separate environments when the solver determines that this is possible.

Curated consistent package collections

These are things like the Debian Haskell packages or Stackage. This solves some cases of each of the different problems: breaking re-installations, inconsistent environments, wrong constraints and lack of consistent solutions. It solves those cases to the extent that the package collection covers all the packages that the developer is interested in. For many developers this will be enough. Almost by definition however it cannot help with the “wild west” of packages because the curation takes time and effort. Unless used in combination with an isolated environment solution (e.g. nix-style, but also less sophisticated systems like hsevn or cabal sandboxes) it does not allow using multiple versions of the collection (e.g. different projects using different Stackage versions).

It is worth noting that these two solutions should work well together. Neither one subsumes the other. We don’t need to pick between the two. We should pick both. The combination would get us a long way to abolishing Cabal hell.

There are also a number of smaller solutions:

Automatic build reporting

This helps with detecting compile errors arising from constraints that are too lax. It doesn’t help with constraints that are too tight. This solution requires a combination of automation and manual oversight to fix package constraints and to push those fixes upstream.

Upper-bound build bots

This is similar to gathering build reports from users, but instead of looking at cases of compile failure (constraints too lax), it explicitly tries relaxing upper bounds and checks if things still compile and testsuites work. Again, this requires automation to act on the information gleaned to minimise manual effort.

Package interface compatibility tools

This is to help package authors get their dependency constraints right in the first place. It can help them follow a version policy correctly, and tell them what minimum and maximum version bounds of their dependencies to use. It does not completely eliminate the need to test, because type compatibility does not guarantee semantic compatibility. Solutions in this area could eliminate a large number of cases of wrong constraints, both too lax and too tight.

Private dependencies

This allows solutions to exist where they do not currently exist, by relaxing the consistency requirement in a safe way. It means global consistency of dependencies is not always required, which allows many more solutions. This solution would cover a lot of cases in the “wild wild west” of packaging, and generally in the large set of packages that are not so popular or well maintained as to be included in a curated collection.

Next time…

So that’s the big picture of the problems and solutions and how they relate to each other. In subsequent posts we’ll look in more detail at the problems and solutions, particularly the solutions people are thinking about or actively working on.

Categories: Offsite Blogs

ETAPS 2015 final call for papers

General haskell list - Tue, 09/30/2014 - 12:10am
******************************************************************
                   CALL FOR PAPERS: ETAPS 2015
18th European Joint Conferences on Theory And Practice of Software
                London, UK, 11-18 April 2015
                  http://www.etaps.org/2015
******************************************************************
Categories: Incoming News

HTTP Basic Auth with Snap

Haskell on Reddit - Mon, 09/29/2014 - 11:45pm
Categories: Incoming News

Magnus Therning: Adding tags

Planet Haskell - Mon, 09/29/2014 - 6:00pm

Adding tags to a Hakyll site brought some surprises. In retrospect it all makes sense, but it took some thinking on my part to work out the why of it all. The resulting code was heavily influenced by Erik Kronberg’s site.

First I thought I’d just add tags to each rendered post, by building the tags

tags <- buildTags "posts/*" (fromCapture "tags/*.html")

then adding them to the post context

let postCtx = field "previousPostUrl" (previousPostUrl "posts/*")
           <> field "previousPostTitle" (previousPostTitle "posts/*")
           <> field "nextPostUrl" (nextPostUrl "posts/*")
           <> field "nextPostTitle" (nextPostTitle "posts/*")
           <> field "postId" getPostId
           <> tagsField "tags" tags
           <> listFieldFunc "comments" defaultContext (getComments "comments/*")
           <> baseCtx

and last modify the template

<p>Tags: $tags$</p>

Easy! Except it doesn’t work that way. The $tags$ is always empty. To actually get the tagsField to work as intended it’s necessary to build the tag pages, which can be accomplished using tagsRules:

tagsRules tags $ \ tagStr pattern -> do
    route idRoute
    compile $ do
        posts <- loadAll pattern >>= recentFirst
        let tagsCtx = constField "thetag" tagStr
                   <> listField "posts" baseCtx (return posts)
                   <> baseCtx
        makeItem ""
            >>= loadAndApplyTemplate "templates/tag-post-list.html" tagsCtx
            >>= loadAndApplyTemplate "templates/default.html" tagsCtx
            >>= relativizeUrls

The template for the tags pages is very simple at the moment

<h1>Posts tagged $thetag$</h1>
<ul>
$for(posts)$
  <li>
    <a href="$url$">$title$</a> - $date$
  </li>
$endfor$
</ul>

That’s it. With that in place the $tags$ field renders properly in the post pages as well.

Categories: Offsite Blogs


Yesod Web Framework: Persistent 2.1 released

Planet Haskell - Mon, 09/29/2014 - 5:51pm

Persistent 2.1, a stable release of the next generation of persistent, has been released to Hackage.

Persistent is an ORM for Haskell that keeps everything type-safe.

Persistent 2.1 features

  • a flexible, yet more type-safe Key type
  • a simplified monad stack

I already announced persistent 2 and the 2.1 release candidate.

Everyone should set their persistent dependencies to > 2.1 && < 3. 2.0.x was the unstable release and is now deprecated.
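Spelled out in a .cabal file, that advice looks something like this (a sketch; the package list here is illustrative, and >= is used so that 2.1 itself is accepted):

build-depends: base
             , persistent          >= 2.1 && < 3
             , persistent-template >= 2.1 && < 3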

I want to thank all the early persistent 2 adopters for putting up with a fast-moving, buggy code base. This was an experiment in shipping an unstable version; what I learned from it is that it was a great process, but that we need to make sure Travis CI is running properly, which it is now!

Persistent 2.1 library support

The persistent and persistent-template libraries should support any kind of primary key type that you need. The backends are still catching up to the new features:

  • The persistent-sqlite backend has fully implemented these features.
  • persistent-postgres and persistent-mysql don't yet support changing the type of the id field.
  • persistent-mongoDB does not yet support composite primary keys.

All of the above packages except persistent-mysql are being well maintained, just developing new features at their own pace. persistent-mysql is in need of a dedicated maintainer; there are some major defects in the migration code that have gone unresolved for a long time now.

  • persistent-redis is in the process of being upgraded to 2.1.
  • persistent-zookeeper was just released, but it is on persistent 1.3.*.
  • There are other persistent packages out there that I have not had the chance to check on yet, most notably persistent-odbc. Feel free to ask for help with upgrading.

Persistent 2.1 upgrade guide

Simple persistent usage may not need any changes to upgrade.

The fact that the Key type is now flexible means it may need to be constrained. So if you have functions that have Key in the type signature that are not specific to one PersistEntity, you may need to constrain them to the BackendKey type. An easy way to do this is using ConstraintKinds.

type DBEntity record =
    ( PersistEntityBackend record ~ MongoContext
    , PersistEntity record
    , ToBackendKey MongoContext record
    )

A SQL user would use SqlBackend instead of MongoContext. So you can now change the type signature of your functions:

- PersistEntity record => Key record
+ DBEntity record => Key record

Depending on how you set up your monad stacks, you may need some changes. Here is one possible approach to creating small but flexible monad stack type signatures. It requires Rank2Types, and the code shown is specialized to MongoDB.

type ControlIO m = (MonadIO m, MonadBaseControl IO m)

type LogIO m = (MonadLogger m, ControlIO m)

-- these are actually types, not constraints
-- with persistent-2 things work out a lot easier this way
type DB a    = LogIO m => ReaderT MongoContext m a
type DBM m a = LogIO m => ReaderT MongoContext m a

The basic type signature is just DB () (no constraints required). For working with different monad stacks, you can use DBM. If you are using conduits, you will have MonadResource m => DBM m (). Here is another example:

class Monad m => HasApp m where
    getApp :: m App

instance HasApp Handler where
    getApp = getYesod

instance HasApp hasApp => HasApp (ReaderT MongoContext hasApp) where
    getApp = lift $ getApp

instance MonadIO m => HasApp (ReaderT App m) where
    getApp = ask

-- | synonym for DB plus HasApp operations
type DBApp a    = HasApp m => DBM m a
type DBAppM m a = HasApp m => DBM m a

With this pattern our return type signature is always ReaderT MongoContext m, and we are changing m as needed. A different approach is to have a return type signature of m and to place a MonadReader constraint on it.

type Mongo m = (LogIO m, MonadReader MongoContext m)

Right now this approach requires using a call to Database.MongoDB.liftDB around each database call, but I am sure there are approaches to dealing with that. One approach would be to wrap every persistent "primitive" with liftDB.

Categories: Offsite Blogs

Aliasing current module qualifier

glasgow-user - Mon, 09/29/2014 - 9:19am
Hello *,

Here's a situation I've encountered recently, which made me wish to be able to define a local alias (in order to avoid CPP use). Consider the following stupid module:

module AnnoyinglyLongModuleName
    ( AnnoyinglyLongModuleName.length
    , AnnoyinglyLongModuleName.null
    ) where

length :: a -> Int
length _ = 0

null :: a -> Bool
null = (== 0) . AnnoyinglyLongModuleName.length

Now it'd be great if I could do the following instead:

module AnnoyinglyLongModuleName (M.length, M.null) where

import AnnoyinglyLongModuleName as M -- <- does not work

length :: a -> Int
length _ = 0

null :: a -> Bool
null = (== 0) . M.length

However, if I try to compile this, GHC complains about

AnnoyinglyLongModuleName.hs:4:1:
    Bad interface file: AnnoyinglyLongModuleName.hi
        AnnoyinglyLongModuleName.hi: openBinaryFile: does not exist (No such file or directory)

while GHCi tells me: Module imports form a
Categories: Offsite Discussion

Pre-proposal discussion: add a version of dropWhileEnd with different laziness properties to Data.List

libraries list - Sun, 09/28/2014 - 9:39pm
BACKGROUND: A somewhat common idiom I discovered digging around the GHC tree is the use of reverse . dropWhile p . reverse to remove some unwanted things from the end of a list. The lists involved tend to be short (e.g., file names, package names, lines of user input, etc.) and the amount removed from the end of the list tends to be short (a trailing newline, the last part of a filename, extra spaces, etc.). I initially intended to replace all of these with Data.List.dropWhileEnd p. Unfortunately, my benchmarking showed that this had a substantial negative impact on performance. Data.List.dropWhileEnd is defined like this:

dropWhileEnd p = foldr (\x r -> if p x && null r then [] else x:r) []

This is lazy in the *spine* of the list--it can "flush its buffer" and produce list elements any time p x is found to be false. This is the best you can do if you need to process a very large list and don't want to have to load the whole thing into memory, and/or your predicate is *very* cheap. Unfortunately
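For reference, the two implementations being compared can be written out like this (a sketch; the names are mine, not from the proposal):

```haskell
-- Spine-lazy version (as in Data.List.dropWhileEnd): can emit an element
-- as soon as p fails on it, without traversing the rest of the list.
dropWhileEndLazy :: (a -> Bool) -> [a] -> [a]
dropWhileEndLazy p = foldr (\x r -> if p x && null r then [] else x : r) []

-- The double-reverse idiom: strict in the whole list, but often faster
-- for the short lists and short dropped suffixes described above.
dropWhileEndStrict :: (a -> Bool) -> [a] -> [a]
dropWhileEndStrict p = reverse . dropWhile p . reverse

main :: IO ()
main = do
    print (dropWhileEndLazy   (== ' ') "trailing spaces   ")  -- prints "trailing spaces"
    print (dropWhileEndStrict (== ' ') "trailing spaces   ")  -- prints "trailing spaces"
```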
Categories: Offsite Discussion

the complex package is hidden

haskell-cafe - Sun, 09/28/2014 - 8:55pm
Hi,

Surprisingly I can't seem to google an answer to this:

Could not find module `Complex'
It is a member of the hidden package `haskell98-2.0.0.2'.
Use -v to see a list of the files searched for.

It's been quite a while since I've used ghci, and I'm just about positive I've solved this problem before, but can't seem to figure it out. Thanks for any help.
Categories: Offsite Discussion

Magnus Therning: Adding support for comments

Planet Haskell - Sun, 09/28/2014 - 6:00pm

It seems most people using Hakyll or other static site generators rely on services like Disqus, but I really don’t like the idea of putting a bunch of JavaScript on each page and dynamically loading all comments off some cloud storage. It sort of flies in the face of the idea of having a static site to begin with. Searching online resulted in a few posts related to a plugin for static comments in Jekyll.

This post only covers dealing with the comments, and not how the reader actually submits a comment. I’ll save that for the next post.

Code changes

I settled on the following naming scheme for comments. The comments for a post P, which is found at posts/<P>.mkd, will be put into files named comments/<P>-c000.mkd, comments/<P>-c001.mkd, and so on. The crucial bits are that, first, the post’s name is a prefix of all its comments’ names, and second, the identifiers (basically the filenames) of the comments are, just like identifiers for posts, easy to sort in date order.

Adding a rule for the comments is easy:

match "comments/*" $ compile pandocCompiler

Then it got a little more tricky. The comments for each post need to be put into the context used to build the posts. Previously I’ve used field, which takes a function turning an Item String into a String. I’ve also used listField, which is used to tie a key to a list of Item a. What I needed here though doesn’t seem to exist, i.e. a context function that takes an Item a and returns a list of Item a. So after a bit of studying the source of field and listField I came up with listFieldFunc:

listFieldFunc :: Show a => String -> Context a -> (Item a -> Compiler [Item a]) -> Context a
listFieldFunc key ctx func = Context $ \ k i ->
    if k == key
        then value i
        else empty
  where
    value i = do
        is <- func i
        return $ ListField ctx is

The function for extracting a post’s comments can then be written as

getComments :: (Binary a, Typeable a) => Pattern -> Item a -> Compiler [Item a]
getComments pattern item = do
    idents <- getMatches pattern >>= sortChronological
    let iId = itemIdentifier item
        comments = filter (isCommentForPost iId) idents
    mapM load comments

isCommentForPost :: Identifier -> Identifier -> Bool
isCommentForPost post comment =
    let postBase = takeBaseName $ toFilePath post
        cmtBase = takeBaseName $ toFilePath comment
    in isPrefixOf postBase cmtBase

Adding the key to the context used for the posts results in

let postCtx = field "previousPostUrl" (previousPostUrl "posts/*") <> field "previousPostTitle" (previousPostTitle "posts/*") <> field "nextPostUrl" (nextPostUrl "posts/*") <> field "nextPostTitle" (nextPostTitle "posts/*") <> field "postId" getPostId <> listFieldFunc "comments" defaultContext (getComments "comments/*") <> baseCtx Template changes

The template changes are trivial of course

$for(comments)$ <div> <p>$author$</p> $body$ </div> $endfor$
Categories: Offsite Blogs

Magnus Therning: Adding support for comments

Planet Haskell - Sun, 09/28/2014 - 6:00pm

It seems most people using Hakyll or other static site generators rely on services like Disqus, but I really don’t like the idea of putting a bunch of JavaScript on each page and dynamically loading all comments off some cloud storage. It sort of flies in the face of the idea of having a static site to begin with. Searching online resulted in a few posts related to a plugin for static comments in Jekyll.

This post only covers dealing with the comments, and not how the reader actually submits a comment. I’ll save that for the next post.

Code changes

I settled on the following naming scheme for comments. The comments for a post P, which is found at posts/<P>.mkd, will be put into files named comments/<P>-c000.mkd, comments/<P>-c001.mkd, and so on. The crucial bits are that, first, the post’s name is a prefix of all its comments’ names, and second, the identifiers (basically the filenames) of the comments are, just like the identifiers for posts, easy to sort in date order.
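For concreteness, here is a tiny standalone sketch (mine, not from the post; the commentId name is hypothetical) of how identifiers following this scheme could be generated:

```haskell
import Text.Printf (printf)

-- Build the filename of the n-th comment for a post, following the
-- comments/<P>-c000.mkd scheme described above.
commentId :: String -> Int -> FilePath
commentId post n = printf "comments/%s-c%03d.mkd" post n
```

The zero-padded counter is what keeps lexicographic order in sync with creation order.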

Adding a rule for the comments is easy:

match "comments/*" $ compile pandocCompiler

Then it got a little more tricky. The comments for each post need to be put into the context used to build the posts. Previously I’ve used field, which takes a function turning an Item String into a String. I’ve also used listField, which is used to tie a key to a list of Item a. What I needed here though doesn’t seem to exist, i.e. a context function that takes an Item a and returns a list of Item a. So after a bit of studying the source of field and listField I came up with listFieldFunc:

listFieldFunc :: Show a => String -> Context a -> (Item a -> Compiler [Item a]) -> Context a
listFieldFunc key ctx func = Context $ \ k i ->
    if k == key
        then value i
        else empty
  where
    value i = do
        is <- func i
        return $ ListField ctx is

The function for extracting a post’s comments can then be written as

getComments :: (Binary a, Typeable a) => Pattern -> Item a -> Compiler [Item a]
getComments pattern item = do
    idents <- getMatches pattern >>= sortChronological
    let iId = itemIdentifier item
        comments = filter (isCommentForPost iId) idents
    mapM load comments

isCommentForPost :: Identifier -> Identifier -> Bool
isCommentForPost post comment =
    let postBase = takeBaseName $ toFilePath post
        cmtBase = takeBaseName $ toFilePath comment
    in isPrefixOf postBase cmtBase
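As a standalone sanity check (my own, mirroring only the prefix test with Identifiers replaced by plain FilePaths):

```haskell
import Data.List (isPrefixOf)
import System.FilePath (takeBaseName)

-- A comment belongs to a post when the post's base name is a prefix of
-- the comment's base name.
-- Caveat: a post named "hello" would also match comments for a post named
-- "hello-world", so the scheme relies on post names not being prefixes of
-- one another.
isCommentForPost :: FilePath -> FilePath -> Bool
isCommentForPost post comment =
    takeBaseName post `isPrefixOf` takeBaseName comment
```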

Adding the key to the context used for the posts results in

let postCtx = field "previousPostUrl" (previousPostUrl "posts/*") <>
              field "previousPostTitle" (previousPostTitle "posts/*") <>
              field "nextPostUrl" (nextPostUrl "posts/*") <>
              field "nextPostTitle" (nextPostTitle "posts/*") <>
              field "postId" getPostId <>
              listFieldFunc "comments" defaultContext (getComments "comments/*") <>
              baseCtx

Template changes

The template changes are trivial, of course:

$for(comments)$
<div>
  <p>$author$</p>
  $body$
</div>
$endfor$
Categories: Offsite Blogs

Danny Gratzer: Notes on Abstract and Existential Types

Planet Haskell - Sun, 09/28/2014 - 6:00pm
Posted on September 29, 2014 Tags: haskell, types

I’m part of a paper reading club at CMU. Last week we talked about a classic paper, Abstract Types have Existential Type. The concept described in this paper is interesting and straightforward. Sadly some of the notions and comparisons made in the paper are starting to show their age. I thought it might be fun to give a tldr using Haskell.

The basic idea is that when we have a type with an abstract implementation and some functions upon it, it’s really an existential type.

Some Haskell Code

To exemplify this let’s define an abstract type (in Haskell)

module Stack (Stack, empty, push, pop, shift) where

newtype Stack a = Stack [a]

empty :: Stack a
empty = Stack []

push :: a -> Stack a -> Stack a
push a (Stack xs) = Stack (a : xs)

pop :: Stack a -> Maybe a
pop (Stack [])      = Nothing
pop (Stack (x : _)) = Just x

shift :: Stack a -> Maybe (Stack a)
shift (Stack [])       = Nothing
shift (Stack (_ : xs)) = Just (Stack xs)

Now we could import this module and use its operations:

import Stack

main = do
    let s = push 1 . push 2 . push 3 $ empty
    print (pop s)

What we couldn’t do however, is pattern match on stacks to take advantage of its internal structure. We can only build new operations out of combinations of the exposed API. The classy terminology would be to say that Stack is abstract.

This is all well and good, but what does it mean type theoretically? If we want to represent Haskell as a typed calculus it’d be a shame to have to include Haskell’s (underpowered) module system just to talk about abstract types.

After all, we’re not really thinking about modules as so much as hiding some details. That sounds like something our type system should be able to handle without having to rope in modules. By isolating the concept of abstraction in our type system, we might be able to more deeply understand and reason about code that uses abstract types.

This is in fact quite possible, let’s rephrase our definition of Stack

module Stack (Stack, StackOps(..), ops) where

newtype Stack a = Stack [a]

data StackOps a = StackOps
    { empty :: Stack a
    , push  :: a -> Stack a -> Stack a
    , pop   :: Stack a -> Maybe a
    , shift :: Stack a -> Maybe (Stack a)
    }

ops :: StackOps a
ops = ...

Now that we’ve lumped all of our operations into one record, our module really only exports a type name and a record of data. We could take this a step further still,

module Stack (Stack, StackOps(..), ops) where

newtype Stack a = Stack [a]

data StackOps s a = StackOps
    { empty :: s a
    , push  :: a -> s a -> s a
    , pop   :: s a -> Maybe a
    , shift :: s a -> Maybe (s a)
    }

ops :: StackOps Stack a
ops = ...

Now ops is the only thing that needs to know the internals of Stack. It seems like we could really just smush the definition into ops; why should the rest of the file see our private definition?

module Stack (StackOps(..), ops) where

data StackOps s a = StackOps
    { empty :: s a
    , push  :: a -> s a -> s a
    , pop   :: s a -> Maybe a
    , shift :: s a -> Maybe (s a)
    }

ops :: StackOps ???
ops = ...

Now what should we fill in ??? with? It’s some type, but it’s meant to be chosen by the callee, not the caller. Does that sound familiar? Existential types to the rescue!

{-# LANGUAGE PolyKinds, KindSignatures, ExistentialQuantification #-}
module Stack where

data Packed (f :: k -> k' -> *) a = forall s. Pack (f s a)

data StackOps s a = StackOps
    { empty :: s a
    , push  :: a -> s a -> s a
    , pop   :: s a -> Maybe a
    , shift :: s a -> Maybe (s a)
    }

ops :: Packed StackOps a
ops = Pack ...

The key difference here is Packed. It lets us take a type function and instantiate it with some type variable and hide our choice from the user. This means that we can even drop the whole newtype from the implementation of ops

ops :: Packed StackOps a
ops = Pack $ StackOps
    { empty = []
    , push  = (:)
    , pop   = fmap fst . uncons
    , shift = fmap snd . uncons
    }
  where
    uncons []       = Nothing
    uncons (x : xs) = Just (x, xs)

Now that we’ve eliminated the Stack definition from the top level, we can actually just drop the notion that this is in a separate module.

One thing that strikes me as unpleasant is how Packed is defined: we must jump through some hoops to support StackOps being polymorphic in two arguments, not just one.

We could get around this with higher rank polymorphism and making the fields more polymorphic while making the type less so. We could also just wish for type level lambdas or something. Even some of the recent type level lens stuff could be aimed at making a general case definition of Packed.
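One possible reading of that remark, sketched under my own naming (StackOps1 and SomeStack are not from the post): quantify the element type inside each field, so the wrapper needs only one existential parameter and no PolyKinds.

```haskell
{-# LANGUAGE RankNTypes, ExistentialQuantification, RecordWildCards #-}

-- The fields are now polymorphic in the element type a, so the
-- existential only has to hide the representation s.
data StackOps1 s = StackOps1
    { sEmpty :: forall a. s a
    , sPush  :: forall a. a -> s a -> s a
    , sPop   :: forall a. s a -> Maybe a
    }

data SomeStack = forall s. SomeStack (StackOps1 s)

-- A list-backed instantiation, with s ~ [].
listStack :: SomeStack
listStack = SomeStack StackOps1
    { sEmpty = []
    , sPush  = (:)
    , sPop   = \xs -> case xs of { [] -> Nothing; (x : _) -> Just x }
    }

useIt :: SomeStack -> Maybe Int
useIt (SomeStack StackOps1{..}) = sPop (sPush 1 sEmpty)
```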

From the client side this definition isn’t actually so unpleasant to use either.

{-# LANGUAGE RecordWildCards #-}

someAdds :: Packed StackOps Int -> Maybe Int
someAdds (Pack StackOps{..}) = pop (push 1 empty)

With record wild cards, there’s very little boilerplate to introduce our record into scope. Now we might wonder about using a specific instance rather than abstracting over all possible instantiations.

someAdds :: Packed StackOps Int -> Maybe Int
someAdds = let (Pack StackOps{..}) = ops in pop (push 1 empty)

The resulting error message is amusing :) (GHC refuses to open an existential constructor in a lazy pattern binding like this let.)

Now we might wonder if we gain anything concrete from this. Did all those language extensions actually do something useful?

Well one mechanical transformation we can make is that we can change our existential type into a CPS-ed higher rank type.

unpackPacked :: (forall s. f s a -> r) -> Packed f a -> r
unpackPacked cont (Pack f) = cont f

someAdds' :: StackOps s Int -> Maybe Int
someAdds' StackOps{..} = pop (push 1 empty)

someAdds :: Packed StackOps Int -> Maybe Int
someAdds = unpackPacked someAdds'
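Assembling the pieces into one self-contained file (my arrangement, with the shift field dropped for brevity) shows the CPS-ed unpack working end to end:

```haskell
{-# LANGUAGE PolyKinds, ExistentialQuantification, RankNTypes, RecordWildCards #-}

data Packed f a = forall s. Pack (f s a)

data StackOps s a = StackOps
    { empty :: s a
    , push  :: a -> s a -> s a
    , pop   :: s a -> Maybe a
    }

-- The list-backed implementation; the choice of s ~ [] stays hidden.
ops :: Packed StackOps a
ops = Pack StackOps
    { empty = []
    , push  = (:)
    , pop   = \xs -> case xs of { [] -> Nothing; (x : _) -> Just x }
    }

unpackPacked :: (forall s. f s a -> r) -> Packed f a -> r
unpackPacked cont (Pack f) = cont f

-- The continuation is parametric in s, so the result type may not
-- mention the hidden representation.
someAdds :: Packed StackOps Int -> Maybe Int
someAdds = unpackPacked (\StackOps{..} -> pop (push 1 empty))
```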

Now we’ve factored out the unpacking of existentials into a function called unpackPacked. It takes a continuation which is parametric in the existential variable, s.

The body of someAdds becomes someAdds', but notice something very interesting here: now s is a normal universally quantified type variable. This means we can apply some nice properties we already know, e.g. parametricity.

This is a nice effect of translating things to core constructs: all the tools we have already figured out can suddenly be applied.
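To make the parametricity point concrete, here is a small illustration of my own (the consOps and snocOps names are hypothetical): a client parametric in s cannot observe the representation, so two different backing implementations must give the same answer.

```haskell
{-# LANGUAGE PolyKinds, ExistentialQuantification, RankNTypes, RecordWildCards #-}

data Packed f a = forall s. Pack (f s a)

data StackOps s a = StackOps
    { empty :: s a
    , push  :: a -> s a -> s a
    , pop   :: s a -> Maybe a
    }

-- Two representations: cons-lists (top of stack at the front) and
-- snoc-lists (top of stack at the back).
consOps, snocOps :: Packed StackOps a
consOps = Pack StackOps
    { empty = []
    , push  = (:)
    , pop   = \xs -> case xs of { [] -> Nothing; (x : _) -> Just x }
    }
snocOps = Pack StackOps
    { empty = []
    , push  = \x xs -> xs ++ [x]
    , pop   = \xs -> case xs of { [] -> Nothing; _ -> Just (last xs) }
    }

-- Parametric in s: it can only use the supplied operations.
client :: StackOps s Int -> Maybe Int
client StackOps{..} = pop (push 2 (push 1 empty))

run :: Packed StackOps Int -> Maybe Int
run (Pack o) = client o
```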

Wrap Up

Now that we’ve gone through transforming our abstract types into existential ones you can finally appreciate at least one more thing: the subtitle on Bob Harper’s blog. You can’t say you didn’t learn something useful :)

I wanted to keep this post short and sweet. In doing so I’m skipping some of the more interesting questions we could ask. For the curious reader, I leave you with these:

  • How can we use type classes to prettify our examples?
  • What can we do to generalize Packed?
  • How does this pertain to modules? Higher order modules?
  • How would you implement “sharing constraints” in this model?
  • What happens when we translate existentials to dependent products?

Cheers.

Categories: Offsite Blogs