News aggregator

Thiago Negri: Peg Solitaire

Planet Haskell - Mon, 02/23/2015 - 7:13pm
I'm proud to announce my new game: Peg Solitaire.

Peg solitaire is a classic board puzzle game for one player involving movement of pegs on a board with holes.

The objective is to empty the entire board, by making valid moves, until only a solitary peg remains. A valid move is to jump a peg orthogonally over an adjacent peg into a hole two positions away, and then to remove the jumped peg. Thus the valid moves in each of the four orthogonal directions are:
  • Jump to right;
  • Jump to left;
  • Jump up;
  • Jump down.
The first challenge is to solve the puzzle; then you can challenge yourself and try to solve it with the minimum number of movements. Each jump counts as a move, but multiple consecutive jumps with the same peg count as a single move, so you can keep jumping with the same peg and still use up only one movement. The game displays the count of how many movements you have made at the top of the screen.
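As a rough illustration of the jump rule above (the game itself is written in JavaScript; `Pos`, `validJump`, and the occupancy predicate below are invented for this sketch):

```haskell
-- Illustrative sketch only: the actual game is written in JavaScript, and
-- Pos, validJump, and the occupancy predicate are invented for this example.
type Pos = (Int, Int)

-- A jump is valid when the moving peg exists, the landing hole is empty,
-- the hole jumped over holds a peg, and the move is exactly two holes in
-- one orthogonal direction (right, left, up, or down).
validJump :: (Pos -> Bool) -> Pos -> Pos -> Bool
validJump occupied from@(x, y) to@(x', y') =
    occupied from
        && not (occupied to)
        && occupied mid
        && orthogonalByTwo
  where
    mid = ((x + x') `div` 2, (y + y') `div` 2)
    orthogonalByTwo =
        (x == x' && abs (y - y') == 2) || (y == y' && abs (x - x') == 2)
```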

Features:
  • Five board types to select from: English (Standard), European (French), Wiegleb, Asymmetrical (Bell) and Diamond;
  • Display game move count;
  • Quick replay same board;
  • Give up current game;
  • Restart current game;
  • Shows the system status bar, so you can keep track of other things while playing, for example the current time.
The game is entirely free and displays ads; there are no in-app purchase options.

It's made of web technologies (HTML, JavaScript, CSS) and deployed as a native application via Apache Cordova. Translated to six languages. Really exciting!

I tried to get the look and feel of each platform individually, using a flat design for iOS, a Material design for Android and a dark theme for Windows Phone. Take a look, I hope it's pretty obvious which screenshot belongs to which platform:




There are still updates to come: iOS has one waiting in the review queue, and I'm delivering another one for Windows Phone this week.

Take a look at the game, download, play it and leave a rating or feedback:

Hope to see you soon!
Categories: Offsite Blogs

Jasper Van der Jeugt: Writing an LRU Cache in Haskell

Planet Haskell - Mon, 02/23/2015 - 6:00pm
Introduction

In-memory caches form an important optimisation for modern applications. This is one area where people often tend to write their own implementation (though usually based on an existing idea). The reason for this is mostly that having a one-size-fits-all cache is really hard, and people often want to tune it for performance reasons according to their usage pattern, or use a specific interface that works really well for them.

However, this sometimes results in less-than-optimal design choices. I thought I would take some time and explain how an LRU cache can be written in a reasonably straightforward way (the code is fairly short), while still achieving great performance. Hence, it should not be too much trouble to tune this code to your needs.

The data structure usually underpinning an LRU cache is a Priority Search Queue, where the priority of an element is the time at which it was last accessed. A number of Priority Search Queue implementations are provided by the psqueues package, and in this blogpost we will be using its HashPSQ data type.

Disclaimer: I am one of the principal authors of the psqueues package.

This blogpost is written in literate Haskell, so you should be able to plug it into GHCi and play around with it. The raw file can be found here.

A pure implementation

First, we import some things, including the Data.HashPSQ module from psqueues.

> {-# LANGUAGE BangPatterns #-}
> import           Control.Applicative ((<$>))
> import           Data.Hashable       (Hashable, hash)
> import qualified Data.HashPSQ        as HashPSQ
> import           Data.IORef          (IORef, newIORef, atomicModifyIORef')
> import           Data.Int            (Int64)
> import           Data.Maybe          (isNothing)
> import qualified Data.Vector         as V
> import           Prelude             hiding (lookup)

Let’s start with our datatype definition. Our Cache type is parameterized by k and v, which represent the types of our keys and values respectively. The priorities of our elements will be the logical time at which they were last accessed, or the time at which they were inserted (for elements which have never been accessed). We will represent these logical times by values of type Int64.

> type Priority = Int64

The cTick field stores the “next” logical time – that is, the value of cTick should be one greater than the maximum priority in cQueue. At the very least, we need to maintain the invariant that all priorities in cQueue are smaller than cTick. A consequence of this is that cTick should increase monotonically. This is violated in the case of an integer overflow, so we need to take special care of that case.

> data Cache k v = Cache
>     { cCapacity :: !Int       -- ^ The maximum number of elements in the queue
>     , cSize     :: !Int       -- ^ The current number of elements in the queue
>     , cTick     :: !Priority  -- ^ The next logical time
>     , cQueue    :: !(HashPSQ.HashPSQ k Priority v)
>     }

Creating an empty Cache is easy; we just need to know the maximum capacity.

> empty :: Int -> Cache k v
> empty capacity
>     | capacity < 1 = error "Cache.empty: capacity < 1"
>     | otherwise    = Cache
>         { cCapacity = capacity
>         , cSize     = 0
>         , cTick     = 0
>         , cQueue    = HashPSQ.empty
>         }

Next, we will write a utility function to ensure that the invariants of our datatype are met. We can then use that in our lookup and insert functions.

> trim :: (Hashable k, Ord k) => Cache k v -> Cache k v
> trim c

The first thing we want to check is if our logical time reaches the maximum value it can take. If this is the case, we can either reset all the ticks in our queue, or we can clear it. We choose the latter here, since that is simply easier to code, and we are talking about a scenario that should not happen very often.

>     | cTick c == maxBound = empty (cCapacity c)

Then, we just need to check if our size is still within bounds. If it is not, we drop the oldest item – that is the item with the smallest priority. We will only ever need to drop one item at a time, because our cache is number-bounded and we will call trim after every insert.

>     | cSize c > cCapacity c = c
>         { cSize  = cSize c - 1
>         , cQueue = HashPSQ.deleteMin (cQueue c)
>         }
>     | otherwise = c

Insert is pretty straightforward to implement now. We use the insertView function from Data.HashPSQ which tells us whether or not an item was overwritten.

insertView
    :: (Hashable k, Ord p, Ord k)
    => k -> p -> v -> HashPSQ k p v -> (Maybe (p, v), HashPSQ k p v)

This is necessary, since we need to know whether or not we need to update cSize.

> insert :: (Hashable k, Ord k) => k -> v -> Cache k v -> Cache k v
> insert key val c = trim $!
>     let (mbOldVal, queue) = HashPSQ.insertView key (cTick c) val (cQueue c)
>     in c
>         { cSize  = if isNothing mbOldVal then cSize c + 1 else cSize c
>         , cTick  = cTick c + 1
>         , cQueue = queue
>         }

Lookup is not that hard either, but we need to remember that in addition to looking up the item, we also want to bump the priority. We can do this using the alter function from psqueues: that allows us to modify a value (bump its priority) and return something (the value, if found) at the same time.

alter
    :: (Hashable k, Ord k, Ord p)
    => (Maybe (p, v) -> (b, Maybe (p, v)))
    -> k -> HashPSQ.HashPSQ k p v -> (b, HashPSQ.HashPSQ k p v)

The b in the signature above becomes our lookup result.

> lookup :: (Hashable k, Ord k) => k -> Cache k v -> Maybe (v, Cache k v)
> lookup k c = case HashPSQ.alter lookupAndBump k (cQueue c) of
>     (Nothing, _) -> Nothing
>     (Just x, q)  ->
>         let !c' = trim $ c {cTick = cTick c + 1, cQueue = q}
>         in Just (x, c')
>   where
>     lookupAndBump Nothing       = (Nothing, Nothing)
>     lookupAndBump (Just (_, x)) = (Just x, Just (cTick c, x))

That basically gives a clean and simple implementation of a pure LRU Cache. If you are only writing pure code, you should be good to go! However, most applications deal with caches in IO, so we will have to adjust it for that.
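If you want to experiment with the eviction behaviour without installing psqueues, the same discipline can be sketched with only Data.Map from containers. This is purely illustrative (the names `LRU`, `emptyLRU`, etc. are invented here, not the post's API), and the oldest-key scan is O(n) where HashPSQ's deleteMin is logarithmic:

```haskell
import           Data.List       (minimumBy)
import qualified Data.Map.Strict as Map
import           Data.Ord        (comparing)

-- Illustrative stand-in for the Cache above: a map from key to
-- (priority, value), with the same tick-based priorities.
data LRU k v = LRU
    { lruCapacity :: !Int
    , lruTick     :: !Int
    , lruEntries  :: !(Map.Map k (Int, v))
    }

emptyLRU :: Int -> LRU k v
emptyLRU cap = LRU cap 0 Map.empty

-- Insert at the current tick; evict the smallest priority when over capacity.
insertLRU :: Ord k => k -> v -> LRU k v -> LRU k v
insertLRU k v (LRU cap t m)
    | Map.size m' > cap = LRU cap (t + 1) (Map.delete oldest m')
    | otherwise         = LRU cap (t + 1) m'
  where
    m'     = Map.insert k (t, v) m
    -- O(n) scan for the least recently used key (HashPSQ: deleteMin).
    oldest = fst (minimumBy (comparing (fst . snd)) (Map.toList m'))

-- Lookup bumps the key's priority to the current tick, like alter above.
lookupLRU :: Ord k => k -> LRU k v -> Maybe (v, LRU k v)
lookupLRU k (LRU cap t m) = case Map.lookup k m of
    Nothing     -> Nothing
    Just (_, v) -> Just (v, LRU cap (t + 1) (Map.insert k (t, v) m))
```

Filling a capacity-2 cache with "a" and "b", touching "a", and then inserting "c" evicts "b", since "b" is now the least recently used entry.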

A simple IO-based cache

Using an IORef, we can update our Cache to be easily usable in the IO Monad.

> newtype Handle k v = Handle (IORef (Cache k v))

Creating one is easy:

> newHandle :: Int -> IO (Handle k v)
> newHandle capacity = Handle <$> newIORef (empty capacity)

Our simple interface only needs to export one function. cached takes the key of the value we are looking for, and an IO action which produces the value. However, we will only actually execute this IO action if it is not present in the cache.

> cached
>     :: (Hashable k, Ord k)
>     => Handle k v -> k -> IO v -> IO v
> cached (Handle ref) k io = do

First, we check the cache using our lookup function from above. This uses atomicModifyIORef', since our lookup might bump the priority of an item, and in that case we modify the cache.

>     lookupRes <- atomicModifyIORef' ref $ \c -> case lookup k c of
>         Nothing      -> (c,  Nothing)
>         Just (v, c') -> (c', Just v)

If it is found, we can just return it.

>     case lookupRes of
>         Just v  -> return v

Otherwise, we execute the IO action and call atomicModifyIORef' again to insert it into the cache.

>         Nothing -> do
>             v <- io
>             atomicModifyIORef' ref $ \c -> (insert k v c, ())
>             return v

Contention

This scheme already gives us fairly good performance. However, that can degrade a little when lots of threads are calling atomicModifyIORef' on the same IORef.

atomicModifyIORef' is implemented using a compare-and-swap, so conceptually it works a bit like this:

atomicModifyIORef' :: IORef a -> (a -> (a, b)) -> IO b
atomicModifyIORef' ref f = do
    x <- readIORef ref
    let (!x', !y) = f x
    -- Atomically write x' if the value is still x
    swapped <- compareAndSwap ref x x'
    if swapped
        then return y
        else atomicModifyIORef' ref f  -- Retry

We can see that this can lead to contention: if we have a lot of concurrent atomicModifyIORef's, we can get into a retry loop. It cannot cause a deadlock (i.e., it should still eventually finish), but it will still bring our performance to a grinding halt. This is a common problem with IORefs which I have also personally encountered in real-world scenarios.

A striped cache

A good way around this problem, since we already have a Hashable instance for our key anyway, is to stripe the keyspace. We can even reuse our Handle in quite an elegant way. Instead of just using one Handle, we create a Vector of Handles instead:

> newtype StripedHandle k v = StripedHandle (V.Vector (Handle k v))

The user can configure the number of handles that are created:

> newStripedHandle :: Int -> Int -> IO (StripedHandle k v)
> newStripedHandle numStripes capacityPerStripe =
>     StripedHandle <$> V.replicateM numStripes (newHandle capacityPerStripe)

Our hash function then determines which Handle we should use:

> stripedCached
>     :: (Hashable k, Ord k)
>     => StripedHandle k v -> k -> IO v -> IO v
> stripedCached (StripedHandle v) k =
>     cached (v V.! idx) k
>   where
>     idx = hash k `mod` V.length v

Because our access pattern is now distributed among the different Handles, we should be able to avoid the contention problem.
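The striping trick itself is independent of the cache. A minimal, dependency-free sketch (with a toy hash standing in for Data.Hashable's hash, and invented names) shows how a key deterministically picks its stripe, here for a striped counter rather than a cache:

```haskell
import Control.Monad (replicateM)
import Data.Char     (ord)
import Data.IORef

-- Toy stand-in for Data.Hashable's hash; illustrative only.
toyHash :: String -> Int
toyHash = foldl (\h c -> h * 31 + ord c) 5381

-- A striped counter: as with the striped cache, a key always lands on the
-- same stripe, so updates for different keys mostly hit different IORefs
-- and thus rarely contend on the same compare-and-swap.
newStripes :: Int -> IO [IORef Int]
newStripes numStripes = replicateM numStripes (newIORef 0)

bump :: [IORef Int] -> String -> IO ()
bump stripes k = atomicModifyIORef' (stripes !! idx) (\n -> (n + 1, ()))
  where
    idx = toyHash k `mod` length stripes
```

Summing all stripes recovers the global count, just as the union of the striped caches forms the logical cache.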

Conclusion

We have implemented a very useful data structure for many applications, with two variations and decent performance. Thanks to the psqueues package, the implementations are very straightforward, small in code size, and it should be possible to tune the caches to your needs.

Many variations are possible: you can use real timestamps (UTCTime) as priorities in the queue and have items expire after a given amount of time. Or, if modifications of the values v are allowed, you can add a function which writes the updates to the cache as well as to the underlying data source.

For embedding the pure cache into IO, there are many alternatives to using IORefs: for example, we could have used MVars or TVars. There are other strategies for reducing contention other than striping, too.

You could even write a cache which is bounded by its total size on the heap, rather than by the number of elements in the queue. If you want a single bounded cache for use across your entire application, you could allow it to store heterogeneously-typed values, and provide multiple strongly-typed interfaces to the same cache. However, implementing these things is a story for another time.

Thanks to the dashing Alex Sayers for proofreading and suggesting many corrections and improvements.

Categories: Offsite Blogs

GHC Weekly News - 2015/02/23

Haskell on Reddit - Mon, 02/23/2015 - 5:57pm
Categories: Incoming News

The GHC Team: GHC Weekly News - 2015/02/23

Planet Haskell - Mon, 02/23/2015 - 5:52pm

Hi *,

It's once again time for your sometimes-slightly-irregularly-scheduled GHC news! This past Friday marked the end of the FTP vote for GHC 7.10, there's an RC on the way (see below), we've closed out a good set of patches and tickets from users and pushed them into HEAD, and to top it off - it's your editor's birthday today, so that's nice!

Quick note: as said above, GHC HQ is expecting to make a third release candidate for GHC 7.10.1 in early March, since the delay has allowed us to pick up some more changes and bugfixes. We plan on the final release being close to the end of March (previously end of February).

This week, GHC HQ met up again to discuss and write down our current priorities and thoughts:

  • After discussing our current timetable - as we're currently hovering around the ICFP deadline - we're hoping to make our third GHC 7.10.1 release candidate on Friday, March 13th, with the final release on Friday, March 27th. This was the main take away from our meeting today.

We've also had a little more list activity this week than we did before:

Some noteworthy commits that went into ghc.git in the past week include:

Closed tickets the past week include: #9266, #10095, #9959, #10086, #9094, #9606, #9402, #10093, #9054, #10102, #4366, #7604, #9103, #10104, #7765, #7103, #10051, #7056, #9907, #10078, #10096, #10072, #10043, #9926, #10088, #10091, #8309, #9049, #9895, and #8539.

Categories: Offsite Blogs

Austin Seipp: The Future of community.haskell.org

Planet Haskell - Mon, 02/23/2015 - 4:50pm

Community.haskell.org is a server in our ecosystem that comparatively few know about these days. It actually was, to my knowledge, a key part of how the whole haskell.org community infrastructure got set up way back when. The sparse homepage still even says: "This server is run by a mysterious group of Haskell hackers who do not wish to be known as a Cabal, and is funded from money earned by haskell.org mentors in the Google Summer-of-Code programme." At a certain point after this server was created, it ceased to be run by a "mysterious group of Haskell hackers" and instead became managed officially by the haskell.org Committee that we know today. You can see the original announcement email in the archives.

The community server, first set up in 2007, played a key role back before the current set of cloud-based services we know today was around. It provided a shared host which could provide many of the services a software project needs -- VCS hosting, public webspace for documentation, issue trackers, mailing lists, and so forth.

Today, the server is somewhat of a relic of another time. People prefer to host projects in places like github, bitbucket, or darcs hub. Issue trackers likewise tend to be associated with these hosts, and there are other free, hosted issue trackers around as well. When folks want a mailing list, they tend to reach for google groups.

Meanwhile, managing a big box full of shell accounts has become a much more difficult, riskier proposition. Every shell account is a security vulnerability waiting to happen, and there are more adversarial "scriptkiddie" hackers than ever looking to claim new boxes to spam and otherwise operate from.

Managing a mailman installation is likewise more difficult. There are more spammers out there, with better methods, and unmaintained lists quickly can turn into ghost towns filled with pages of linkspam and nothing but. The same sad fate falls on unmaintained tracs.

As a whole, the internet is a more adversarial world for small, self-hosted services, especially those whose domain names have some "google juice". We think it would be good to, to the extent possible, get out of the business of doing this sort of hosting. And indeed, very few individuals tend to request accounts, since there are now so many nicer, better ways of getting the benefits that community.haskell.org once was rare in providing.

So what next? Well, we want to "end of life" most of community.haskell.org, but in as painless a way as possible. This means finding what few tracs, if any, are still active, and helping their owners migrate. Similarly for mailing lists. Of course we will find a way to continue to host their archives for historical purposes.

Similarly, we will attempt to keep source repositories accessible for historical purposes, but would very much like to encourage owners to move to more well supported code hosting. One purpose that, until recently, was hard to serve elsewhere was the hosting of private darcs repositories with shared access -- such as academics might use to collaborate on a work in progress. However, that capability is now also provided on http://hub.darcs.net. At this point, we can't think of anything in this regard that is not better provided elsewhere -- but if you can, please let us know.

On webspace, it may be the case that a little more leniency is in order. For one, it is possible to provide restricted accounts that are able to control web-accessible files but have no other rights. For another, while many open source projects now host documentation through github pages or similar, and there are likewise many services for personal home pages, nonetheless it seems a nice thing to allow projects to host their resources on a system that is not under the control of a for-profit third party that, ultimately, is responsible to its bottom line and not its users.

But all this is open for discussion! Community.haskell.org was put together to serve the open source community of Haskell developers, and its direction needs to be determined based on feedback regarding current needs. What do you think? What would you like to see continued to be provided? What do you feel is less important? Are there other good hosted services that should be mentioned as alternatives?

And, of course, are you interested in rolling up your sleeves to help with any of the changes discussed? This could mean simply helping out with sorting out the mailman and trac situation, inventorying the active elements and collaborating with their owners. Or, it could mean a more sustained technical involvement. Whatever you have to offer, we will likely have a use for it. As always, you can email admin@h.o or hop on the #haskell-infrastructure freenode channel to get involved directly.

Categories: Offsite Blogs

GHC 7.10 will use Plan FTP

Haskell on Reddit - Mon, 02/23/2015 - 4:27pm
Categories: Incoming News

Is there a repository of implementations of program on the lambda calculus?

Haskell on Reddit - Mon, 02/23/2015 - 3:39pm

I've been challenging myself to program everything in the bare-bones lambda calculus these days. There are no primitives; I have to implement everything from scratch, which is hard since resources on the subject are very scarce. It is easy to find basic functions such as add, sub, div, etc., for Church-encoded numbers. Anything more complex, such as sin, cos, fractionals, binary encodings, etc., is nowhere to be found. Why is there no book/repository of lambda calculus implementations? I know there is not really any practical demand for those functions, but, I mean, we have more resources about coconuts. If anyone knows good ones, please share them here!
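For readers unfamiliar with the encodings the post mentions, the basic Church-numeral arithmetic can be sketched directly in Haskell (the names below are my own; RankNTypes is only needed to give the encoding a type):

```haskell
{-# LANGUAGE RankNTypes #-}

-- A Church numeral applies a function n times to an argument.
type Church = forall a. (a -> a) -> a -> a

zero :: Church
zero _ x = x

-- Successor: one more application of f.
suc :: Church -> Church
suc n f x = f (n f x)

-- Addition: apply f m times on top of n applications.
add :: Church -> Church -> Church
add m n f x = m f (n f x)

-- Multiplication: iterate "n applications of f" m times.
mul :: Church -> Church -> Church
mul m n f = m (n f)

-- Convert back to an ordinary Int for inspection.
toInt :: Church -> Int
toInt n = n (+ 1) 0
```

More advanced encodings (pairs, sums, fixpoints) follow the same pattern, though, as the poster notes, worked-out libraries of them are hard to find.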

submitted by SrPeixinho
[link] [8 comments]
Categories: Incoming News

Suggestions for a semestral project in Haskell

Haskell on Reddit - Mon, 02/23/2015 - 2:04pm

I'm taking an undergrad class on logic & functional programming and need to pick a topic for my semestral project, so I'd like to ask here for a topic suggestion.

The project should be mostly about algorithms & data structures, and not really about advanced Haskell itself, meaning no lenses, conduits, or even concurrency.

I've looked through the Purely Functional Data Structures and Pearls of Functional Algorithm Design, but everything seems either too easy or too difficult to pick as a project. I don't really have much experience with the algorithmic nature of Haskell, as I've only programmed web apps in it, and it's hard to judge based on how difficult the problem would be in an imperative language (some things that are hundreds of lines of C are 5 lines of Prolog.)

What are some interesting problems you'd recommend for one semester worth of undergrad work? :)

submitted by progfu
[link] [5 comments]
Categories: Incoming News

Announcing BufferBuilder - Encode JSON 2-5x faster than Aeson

Haskell on Reddit - Mon, 02/23/2015 - 12:52pm

Hi all,

/u/implicit_cast and I wrote a new library for fast encoding of JSON (and UTF-8, and buffers of bytes in general). Our benchmarks show that it is about five times faster than Aeson if encoding from records straight into JSON, and about two times faster if you use it to encode Aeson Values directly.

Here is a blog post describing the rationale and backstory: http://chadaustin.me/2015/02/buffer-builder/

And here are the packages on Hackage:

https://hackage.haskell.org/package/buffer-builder-0.2.0.3

https://hackage.haskell.org/package/buffer-builder-aeson-0.2.0.3

submitted by chadaustin
[link] [23 comments]
Categories: Incoming News

hpc - ignore some expressions

haskell-cafe - Mon, 02/23/2015 - 12:36pm
Hi,

I am generating code coverage reports using hpc, and it works great. The reports would be much more helpful if I could somehow tell hpc to treat certain expressions as "covered" though. As an example, let's think of "impossible" cases of a function:

    f x
      | check x   = result x
      | otherwise = error "impossible"

I know there are techniques to get rid of (or minimise the number of) the impossible cases. I also can see how this information can be relevant, i.e. seeing whether `check` always holds or not in the report. However, at least for one version of the report, I want an error call like the one in the above example to be treated as covered. Or maybe a different colour could be used to annotate "ignored"?

Did anyone need/want anything like this? Is it possible? My guess is that this is somewhat possible using manually generated .tix files and merging them with the .tix file that is generated in the usual way. However this feels too ad hoc and fragile, and seems to be too much effort to reach a simple goal.

Best,
O
Categories: Offsite Discussion

Trying to create a good monad homework assignment for undergrads

Haskell on Reddit - Mon, 02/23/2015 - 9:59am

Does anyone have any suggestions? I'm gonna assume they'll have a basic understanding.

submitted by gin-writ
[link] [10 comments]
Categories: Incoming News

Throwing an Exception Back with Continuations?

Haskell on Reddit - Mon, 02/23/2015 - 2:09am

I'm playing around with the pipes library and I'd like to have a producer which runs a certain interaction between a producer and a consumer, (except the producer and consumer really both send messages back and forth so that's not a good name for them) but the outer producer intercepts some (but not all) of the messages being sent to the inner consumer and sends them down the outer pipeline to the outer consumer, then sends the response of the outer consumer back as the simulated inner response. It sounds like a sort of nested call/cc operation but I'm not sure if it makes sense or if it's possible to get it working without excessive use of the continuation monad and even then I'm not completely sure if it will work or not.

inner: innerProducer --> interceptor --> innerConsumer
outer: outerProducer --> outerConsumer

With continuations, I think I want the interceptor to sometimes throw an "exception" with its current continuation and the message to be sent. The outer producer catches this exception, sends the message, awaits a response from downstream, then sends the response back through the continuation. I'm pretty sure this can't fit exclusively in the ExceptionT monad but I could be wrong.

I haven't worked with continuations before so I have no idea if this works or not. Does this make any sense?

edit: Monad morphisms might help too but I'm not sure what the limits of those are.

submitted by expwnent
[link] [5 comments]
Categories: Incoming News

Reference card - HaskellWiki

del.icio.us/haskell - Mon, 02/23/2015 - 1:18am
Categories: Offsite Blogs

ANN: Lambdabot 5.0

haskell-cafe - Sun, 02/22/2015 - 9:36pm
Dear Haskell users,

after a long hiatus, I'm pleased to announce that we have just released Lambdabot 5.0 on hackage. Lambdabot is an IRC bot that can also be used offline; it provides tools for querying information about and even for producing Haskell code. To install it, use the following command:

    cabal install lambdabot-haskell-plugins lambdabot

(cabal install lambdabot *should* work, but instead seems to send cabal's dependency solver onto a wild goose chase from which it fails to return.)

* What's new

Lambdabot is now maintained by a team, currently consisting of James Cook and me. The repository and bugtracker have moved to https://github.com/lambdabot/lambdabot

The most significant change compared to lambdabot 4.3 is that lambdabot has been refactored and split into multiple packages. As a result, lambdabot has become easier to customize: simply edit the source files (Main.hs and Modules.hs) in the lambdabot package, and optionally add your own plugins. Other notable changes include - The
Categories: Offsite Discussion

Russell O'Connor: Parallel Evaluation of the Typed Lambda Calculus

Planet Haskell - Sun, 02/22/2015 - 5:31pm

There is much written about the duality between strict-order (call-by-value) evaluation for the lambda calculus and normal-order (call-by-need) evaluation (or, semantically equivalently, lazy evaluation). In the simply typed lambda calculus, all evaluation eventually terminates, so both evaluation strategies result in the same values. However, when general recursion is added to the simply typed lambda calculus (via a fixpoint operator, for example), then evaluation of some expressions does not terminate. More expressions terminate with normal-order evaluation than with strict-order evaluation. In fact, if evaluation terminates in any order, then it terminates with normal-order evaluation.

I would like to discuss the possibility of a third, even laxer evaluation strategy for the typed lambda calculus that allows even more expressions to terminate. I did just say that normal-order evaluation is, in some sense, a best possible evaluation order, so, in order to beat it, we will be adding more redexes, namely the commuting conversions.

The typed lambda calculus enjoys certain commuting conversions for case expressions that allow every elimination term to pass through the case expression. For example, the commuting conversion for the π₁ elimination term and the case expression says that

π₁(case e₀ of σ₁ x. e₁ | σ₂ y. e₂)

converts to

case e₀ of σ₁ x. π₁(e₁) | σ₂ y. π₁(e₂)

These commuting conversions are required so that the subformula property holds.

My understanding is that a corollary of this says that

f(case e₀ of σ₁ x. e₁ | σ₂ y. e₂)

and

case e₀ of σ₁ x. f(e₁) | σ₂ y. f(e₂)

are denotationally equivalent whenever f is a strict function.

I would like to develop a version of the lambda calculus that allows these two expressions to denote the same value for any f. Call this, the unrestricted commuting conversion property. A lambda calculus with this property would necessarily be parallel and thus will require a parallel evaluation strategy. For example, the natural definition of or becomes the parallel-or operation.

or x y := if x then True else y

This definition has the usual short-circuit property that or True ⊥ is True where ⊥ is defined by

⊥ := fix id

If we use the unrestricted commuting conversion property then we also have that or ⊥ True is True:

  or ⊥ True
= {definition of or}
  if ⊥ then True else True
= {β-expansion}
  if ⊥ then const True ⟨⟩ else const True ⟨⟩
= {commuting}
  const True (if ⊥ then ⟨⟩ else ⟨⟩)
= {β-reduction}
  True

Hence or is parallel-or.
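For intuition, parallel-or can actually be realised in Haskell by racing two threads (this is the idea behind Conal Elliott's unamb package; the sketch below uses only base, and is merely an illustration of the denotational behaviour, not the operational semantics the post is after):

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)
import Control.Exception (evaluate)

-- Race the evaluation of both arguments. If the first result to arrive is
-- True we can answer True immediately, even if the other argument
-- diverges; if it is False we must wait for the other one. Hence
-- por True ⊥ and por ⊥ True both return True, while por False ⊥ is ⊥.
por :: Bool -> Bool -> IO Bool
por x y = do
    result <- newEmptyMVar
    mapM_ (\b -> forkIO (evaluate b >>= putMVar result)) [x, y]
    first <- takeMVar result
    if first then return True else takeMVar result
```

On terminating arguments por agrees with the ordinary or; the interesting cases are exactly the ones involving ⊥.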

Other parallel functions, such as the majority function, also follow from their natural definitions.

maj x y z := if x then (or y z) else (and y z)

In this case maj ⊥ True True is True.

  maj ⊥ True True
= {definition of maj}
  if ⊥ then (or True True) else (and True True)
= {evaluation of (or True True) and (and True True)}
  if ⊥ then True else True
= {commuting}
  True

It is easy to verify that maj True ⊥ True and maj True True ⊥ are also both True.

My big question is whether we can devise some nice operational semantics for the lambda calculus that will have the unrestricted commuting conversions property that I desire. Below I document my first attempt at such operational semantics, but, spoiler alert, it does not work.

The usual rule for computing the weak head normal form of the case expression case e₀ of σ₁ x. e₁ | σ₂ y. e₂ says to first convert e₀ to weak head normal form. If it is σ₁ e₀′ then return the weak head normal form of e₁[x ↦ e₀′]. If it is σ₂ e₀′ then return the weak head normal form of e₂[y ↦ e₀′]. In addition to this rule, I want to add another rule for computing the weak head normal form of the case expression. This alternative rule says that we compute the weak head normal forms of e₁ and e₂. If we get C₁ e₁′ and C₂ e₂′ respectively for introduction terms (a.k.a. constructors) C₁ and C₂, and if additionally C₁ = C₂, then return the following as a weak head normal form: C₁ (case e₀ of σ₁ x. e₁′ | σ₂ y. e₂′). If C₁ ≠ C₂ or if we get stuck on a neutral term (e.g. a variable), then this rule does not apply.

This new rule is in addition to the usual rule for case. Any implementation must run these two rules in parallel because it is possible that either rule (or both) can result in non-termination when recursively computing weak head normal forms of sub-terms. I suppose that if one has unlifted products, then when computing the weak head normal form of a case expression having a product type or function type, one can immediately return

⟨case e₀ of σ₁ x. π₁ e₁ | σ₂ y. π₁ e₂, case e₀ of σ₁ x. π₂ e₁ | σ₂ y. π₂ e₂⟩

or

λz. case e₀ of σ₁ x. e₁ z | σ₂ y. e₂ z

This amended computation of weak head normal form seems to work for computing the or and maj functions above so that they are non-strict in every argument, but there is another example where even this method of computing weak head normal form is not sufficient. Consider the functions that implement associativity for the sum type.

assocL : A + (B + C) -> (A + B) + C
assocL z := case z of
    σ₁ a.  σ₁ (σ₁ a)
  | σ₂ bc. (case bc of σ₁ b. σ₁ (σ₂ b) | σ₂ c. σ₂ c)

assocR : (A + B) + C -> A + (B + C)
assocR z := case z of
    σ₁ ab. (case ab of σ₁ a. σ₁ a | σ₂ b. σ₂ (σ₁ b))
  | σ₂ c.  σ₂ (σ₂ c)

Now let us use unrestricted commuting conversions to evaluate assocR (assocL (σ₂ ⊥)).

  assocR (assocL (σ₂ ⊥))
= { definition of assocL and case evaluation }
  assocR (case ⊥ of σ₁ b. σ₁ (σ₂ b) | σ₂ c. σ₂ c)
= { commuting conversion }
  case ⊥ of σ₁ b. assocR (σ₁ (σ₂ b)) | σ₂ c. assocR (σ₂ c)
= { definition of assocR and case evaluations }
  case ⊥ of σ₁ b. σ₂ (σ₁ b) | σ₂ c. σ₂ (σ₂ c)
= { commuting conversion }
  σ₂ (case ⊥ of σ₁ b. σ₁ b | σ₂ c. σ₂ c)
= { η-contraction for case }
  σ₂ ⊥

Even if η-contraction is not a reduction rule used for computation, it is still the case that t and case t of σ₁ b. σ₁ b | σ₂ c. σ₂ c should always be denotationally equivalent. Anyhow, we see that by using commuting conversions, a weak head normal form of assocR (assocL (σ₂ ⊥)) should expose the σ₂ constructor. However, if you apply even my amended computation of weak head normal form, it will not produce any constructor.

What I find particularly surprising is the domain semantics of assocL and assocR. assocL seems to map σ₂ ⊥ to ⊥ because no constructor can be exposed. assocR maps ⊥ to ⊥. Therefore, according to the denotational semantics, the composition should map σ₂ ⊥ to ⊥, but as we saw, under parallel evaluation it does not. It would seem that the naive denotational semantics does not capture the semantics of parallel evaluation. The term case ⊥ of σ₁ b. σ₁ (σ₂ b) | σ₂ c. σ₂ c seems to be more defined than ⊥, even though no constructor is available in the head position.

Although my attempt at nice operational semantics failed, I am still confident some nice computation method exists. At the very least, I believe a rewriting system will work which has all the usual rewriting rules plus a few extra new redexes: one that says that an elimination term applied to a case expression commutes the elimination term into all of the branches, another that says that when all branches of a case expression contain the same introduction term, that introduction term is commuted to the outside of the case expression, and maybe also the rules I listed above for unlifted products. I conjecture this rewriting system is confluent and that the unrestricted commuting conversions are convertible in it (probably requiring η-conversions as well).

Without proofs of my conjectures I am a little concerned that this all does not actually work out. There may be some bad interaction with fixpoints that I am not seeing. If this does all work out then shouldn’t I have heard about it by now?

Categories: Offsite Blogs

Instance of typeclass as return value

Haskell on Reddit - Sun, 02/22/2015 - 4:06pm

Suppose I create a typeclass with some instances.

class Value a
instance Value Int
instance Value Bool

Now I want to return a Value from a function.

foo :: (Value v) => v
foo = True

But I get the error message:

Could not deduce (v ~ Bool)
from the context (Value v)
  bound by the type signature for foo :: Value v => v
  at /var/folders/mw/sjqkv7xs5rv6dy83b_g17z5c0000gn/T/flycheck2929287s/Value.hs:19:8-21
  ‘v’ is a rigid type variable bound by
      the type signature for foo :: Value v => v
      at /var/folders/mw/sjqkv7xs5rv6dy83b_g17z5c0000gn/T/flycheck2929287s/Value.hs:19:8
Relevant bindings include
  foo :: v (bound at /var/folders/mw/sjqkv7xs5rv6dy83b_g17z5c0000gn/T/flycheck2929287s/Value.hs:20:1)
In the expression: True
In an equation for ‘foo’: foo = True

However, this does work for a typeclass such as Num:

foo :: (Num a) => a
foo = 10  -- No error

Why does my typeclass behave differently than a 'builtin' typeclass? How can I fix it?

submitted by joseph
[link] [21 comments]
Categories: Incoming News