News aggregator

Edward Kmett: Free Monoids in Haskell

Planet Haskell - Sat, 02/21/2015 - 5:08pm

It is often stated that Foldable is effectively the toList class. However, this turns out to be wrong. The real fundamental member of Foldable is foldMap (which should look suspiciously like traverse, incidentally). To understand exactly why this is, it helps to understand another surprising fact: lists are not free monoids in Haskell.

This latter fact can be seen relatively easily by considering another list-like type:

  data SL a = Empty | SL a :> a

  instance Monoid (SL a) where
    mempty = Empty
    mappend ys Empty     = ys
    mappend ys (xs :> x) = (mappend ys xs) :> x

  single :: a -> SL a
  single x = Empty :> x

So, we have a type SL a of snoc lists, which are a monoid, and a function that embeds a into SL a. If (ordinary) lists were the free monoid, there would be a unique monoid homomorphism from lists to snoc lists. Such a homomorphism (call it h) would have the following properties:

  h []         = Empty
  h (xs <> ys) = h xs <> h ys
  h [x]        = single x

And in fact, this (together with some general facts about Haskell functions) should be enough to define h for our purposes (or any purposes, really). So, let's consider its behavior on two values:

  h [1] = single 1

  h [1,1..] = h ([1] <> [1,1..]) -- [1,1..] is an infinite list of 1s
            = h [1] <> h [1,1..]

This second equation can tell us what the value of h is at this infinite value, since we can consider it the definition of a possibly infinite value:

  x = h [1] <> x
    = fix (single 1 <>)

  h [1,1..] = x

(single 1 <>) is a strict function, so the fixed point theorem tells us that x = ⊥.
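
Spelled out as code, using the SL definitions above (a small sketch; fix comes from Data.Function):

  import Data.Function (fix)

  -- mappend for SL pattern matches on its second argument, so
  -- (single 1 <>) is strict; the least fixed point of a strict
  -- function is ⊥.
  x :: SL Int
  x = fix (single 1 <>)   -- never produces a constructor, so x = ⊥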

This is a problem, though. Considering some additional equations:

  [1,1..] <> [n] = [1,1..] -- true for all n

  h [1,1..] = ⊥

  h ([1,1..] <> [1]) = h [1,1..] <> h [1]
                     = ⊥ <> single 1
                     = ⊥ :> 1
                     ≠ ⊥

So, our requirements for h are contradictory, and no such homomorphism can exist.

The issue is that Haskell types are domains. They contain these extra partially defined values and infinite values. The monoid structure on (cons) lists has infinite lists absorbing all right-hand sides, while the snoc lists are just the opposite.

This also means that finite lists (or any method of implementing finite sequences) are not free monoids in Haskell. They, as domains, still contain the additional bottom element, and it absorbs all other elements, which is incorrect behavior for the free monoid:

  pure x <> ⊥ = ⊥

  h ⊥ = ⊥

  h (pure x <> ⊥) = [x] <> h ⊥
                  = [x] ++ ⊥
                  = x:⊥
                  ≠ ⊥

So, what is the free monoid? In a sense, it can't be written down at all in Haskell, because we cannot enforce value-level equations, and because we don't have quotients. But, if conventions are good enough, there is a way. First, suppose we have a free monoid type FM a. Then for any other monoid m and embedding a -> m, there must be a monoid homomorphism from FM a to m. We can model this as a Haskell type:

  forall a m. Monoid m => (a -> m) -> FM a -> m  

Where we consider the Monoid m constraint to be enforcing that m actually has valid monoid structure. Now, a trick is to recognize that this sort of universal property can be used to define types in Haskell (or, GHC at least), due to polymorphic types being first class; we just rearrange the arguments and quantifiers, and take FM a to be the polymorphic type:

  newtype FM a = FM { unFM :: forall m. Monoid m => (a -> m) -> m }  

Types defined like this are automatically universal in the right sense. [1] The only thing we have to check is that FM a is actually a monoid over a. But that turns out to be easily witnessed:

  embed :: a -> FM a
  embed x = FM $ \k -> k x

  instance Monoid (FM a) where
    mempty = FM $ \_ -> mempty
    mappend (FM e1) (FM e2) = FM $ \k -> e1 k <> e2 k

Demonstrating that the above is a proper monoid delegates to instances of Monoid being proper monoids. So as long as we trust that convention, we have a free monoid.
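
Given that convention, the monoid homomorphism out of FM a induced by an embedding into any particular monoid is just application of the stored eliminator. A small sketch (foldFM is only an illustrative name, not a standard function):

  foldFM :: Monoid m => (a -> m) -> FM a -> m
  foldFM k (FM e) = e k

  -- For example, with the list monoid:
  --   foldFM (\x -> [x]) (embed 1 <> embed 2)  ==  [1,2]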

However, one might wonder what a free monoid would look like as something closer to a traditional data type. To construct that, first ignore the required equations, and consider only the generators; we get:

  data FMG a = None | Single a | FMG a :<> FMG a  

Now, the proper FM a is the quotient of this by the equations:

  None :<> x = x = x :<> None
  x :<> (y :<> z) = (x :<> y) :<> z

One way of mimicking this in Haskell is to hide the implementation in a module, and only allow elimination into Monoids (again, using the convention that Monoid ensures actual monoid structure) using the function:

  unFMG :: forall a m. Monoid m => FMG a -> (a -> m) -> m
  unFMG None       _ = mempty
  unFMG (Single x) k = k x
  unFMG (x :<> y)  k = unFMG x k <> unFMG y k

This is actually how quotients can be thought of in richer languages; the quotient does not eliminate any of the generated structure internally, it just restricts the way in which the values can be consumed. Those richer languages just allow us to prove equations, and enforce properties by proof obligations, rather than conventions and structure hiding. Also, one should note that the above should look pretty similar to our encoding of FM a using universal quantification earlier.
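
For completeness, the structure such a module would hide is just the constructors themselves; a sketch (not spelled out above):

  singleG :: a -> FMG a
  singleG = Single

  instance Monoid (FMG a) where
    mempty  = None
    mappend = (:<>)
    -- the monoid laws only hold up to the quotient equations,
    -- which is why the constructors must stay hidden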

Now, one might look at the above and have some objections. For one, we'd normally think that the quotient of the above type is just [a]. Second, it seems like the type is revealing something about the associativity of the operations, because defining recursive values via left nesting is different from right nesting, and this difference is observable by extracting into different monoids. But aren't monoids supposed to remove associativity as a concern? For instance:

  ones1 = embed 1 <> ones1
  ones2 = ones2 <> embed 1

Shouldn't we be able to prove these are the same, because of an argument like:

  ones1 = embed 1 <> (embed 1 <> ...)
        ... reassociate ...
        = (... <> embed 1) <> embed 1
        = ones2

The answer is that the equation we have only specifies the behavior of associating three values:

  x <> (y <> z) = (x <> y) <> z  

And while this is sufficient to nail down the behavior of finite values, and finitary reassociating, it does not tell us that infinitary reassociating yields the same value back. And the "... reassociate ..." step in the argument above was decidedly infinitary. And while the rules tell us that we can peel any finite number of copies of embed 1 to the front of ones1 or the end of ones2, it does not tell us that ones1 = ones2. And in fact it is vital for FM a to have distinct values for these two things; it is what makes it the free monoid when we're dealing with domains of lazy values.
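
To see that observable difference concretely, consider extracting ones1 and ones2 into the two list monoids from earlier; a sketch (the equational reasoning in the comments is informal):

  -- Extracting into ordinary (cons) lists:
  consOnes1, consOnes2 :: [Int]
  consOnes1 = unFM ones1 (\x -> [x])  -- = [1] ++ consOnes1 = 1:1:1:... (productive)
  consOnes2 = unFM ones2 (\x -> [x])  -- = consOnes2 ++ [1] = ⊥ (a left ⊥ absorbs)

  -- Extracting into the snoc lists SL from the start of the post, the
  -- situation is reversed: ones1 gives ⊥, ones2 gives the left-infinite
  -- snoc list ((... :> 1) :> 1).
  snocOnes1, snocOnes2 :: SL Int
  snocOnes1 = unFM ones1 single
  snocOnes2 = unFM ones2 single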

Finally, we can come back to Foldable. If we look at foldMap:

  foldMap :: (Foldable f, Monoid m) => (a -> m) -> f a -> m  

we can rearrange things a bit, and get the type:

  Foldable f => f a -> (forall m. Monoid m => (a -> m) -> m)  
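
That is exactly an FM a from before, so the rearranged operation can be written directly; a small sketch of the toFreeMonoid mentioned below:

  toFreeMonoid :: Foldable f => f a -> FM a
  toFreeMonoid t = FM $ \k -> foldMap k t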

And thus, the most fundamental operation of Foldable is not toList, but toFreeMonoid, and lists are not free monoids in Haskell.

[1]: What we are doing here is noting that (co)limits are objects that internalize natural transformations, but the natural transformations expressible by quantification in GHC are already automatically internalized using quantifiers. However, one has to be careful that the quantifiers are actually enforcing the relevant naturality conditions. In many simple cases they are.

Categories: Offsite Blogs

Munich Haskell Meeting, 2015-02-24 < at > 19:30

haskell-cafe - Sat, 02/21/2015 - 4:43pm
Dear all, Next week, our monthly Munich Haskell Meeting will take place again on Tuesday, February 24 at Cafe Puck at 19h30. For details see here: http://www.haskell-munich.de/dates If you plan to join, please add yourself to this dudle so we can reserve enough seats! It is OK to add yourself to the dudle anonymously or pseudonymously. https://dudle.inf.tu-dresden.de/haskell-munich-feb-2015/ Everybody is welcome! cu,
Categories: Offsite Discussion

fumieval.github.io

del.icio.us/haskell - Sat, 02/21/2015 - 3:43pm
Categories: Offsite Blogs

What's your best explanation of Haskell's non-strict evaluation?

Haskell on Reddit - Sat, 02/21/2015 - 3:02pm

I've been trying to help people get started with Haskell for some time now via exercism.io, and one of the most common stumbling blocks I've seen people run into is developing good intuition for Haskell's evaluation.

Here are the resources I currently recommend:

Here are some of the early mistakes that I see very often:

-- Should be a tail-recursive implementation with strict accumulator
length [] = 0
length (x:xs) = 1 + length xs

-- foldr instead of foldl'
sum = foldr (+) 0
reverse = foldr (\x acc -> acc ++ [x]) []
count = foldr (flip (M.insertWith (+)) 1) M.empty

The best explanation I've been able to come up with is: if all of the input must be consumed in order to use any of the result, then a tail-recursive implementation with a strict accumulator (such as foldl') should be used.
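
For concreteness, here is one possible set of corrected versions of the examples above (a sketch; the primed names just avoid clashing with the Prelude):

import Data.List (foldl')
import qualified Data.Map.Strict as M

length' :: [a] -> Int
length' = foldl' (\n _ -> n + 1) 0

sum' :: Num a => [a] -> a
sum' = foldl' (+) 0

reverse' :: [a] -> [a]
reverse' = foldl' (flip (:)) []

count' :: Ord a => [a] -> M.Map a Int
count' = foldl' (\m x -> M.insertWith (+) x 1 m) M.empty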

Does anyone have other explanations that might be more correct or easier for people new to Haskell to understand? Or any other useful (publicly available) resources that I should link people to?

submitted by etrepum
[link] [12 comments]
Categories: Incoming News

Console Watch - call clock_nanosleep via FFI

Haskell on Reddit - Sat, 02/21/2015 - 12:17pm

I am learning Haskell by writing simple programs. I have implemented a simple console watch in Haskell that uses the FFI to make the clock_nanosleep system call. It prints the current local time once a second. Use Ctrl-C to stop it.

$ ./watch

Sat Feb 21 12:45:04 2015

Sat Feb 21 12:45:05 2015

^C

This is the text of the Haskell program: Console Watch in Haskell

and this is the text of the C prototype: Console Watch 2 in C

The program is self-contained and should be compilable under plain GHC. Fragments of posix-timer were copied in for the calls to clock_nanosleep and clock_gettime.

I would like to hear any comments on the program.

Is it OK to use a recursive call to emulate "for loops"?
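
For comparison, the loop style I mean looks roughly like this (a sketch using threadDelay from base instead of the FFI call; it is not the posted FFI version):

import Control.Concurrent (threadDelay)
import Data.Time (getZonedTime)

-- print the local time once a second using an explicit recursive loop
watch :: IO ()
watch = do
  getZonedTime >>= print
  threadDelay 1000000  -- one second, in microseconds
  watch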

The strace log for the C version is different from the Haskell version's. Every 10 ms, Haskell generates a SIGVTALRM that interrupts clock_nanosleep. I have tried to disable interrupts during the call to clock_nanosleep, but in that case the program reacts to a press of Ctrl-C with a delay. Can this be mitigated?

EDIT: Put the call to clock_nanosleep in the C version inside a while loop. Previous version: Console Watch in C

submitted by scrambledeggs71
[link] [2 comments]
Categories: Incoming News

Haskell and Cosmology

Haskell on Reddit - Sat, 02/21/2015 - 10:26am
Categories: Incoming News

how can I do this the best

haskell-cafe - Sat, 02/21/2015 - 10:03am
Hello, I'm busy with a program which translates log files to another format. I also have to check whether the lines in the old one have the right format. Should I check the format and also whether another field contains an I, E or W? Or should I do the checking in one clause and make another case for the rest? Roelof
Categories: Offsite Discussion

Can there be sum type polymorphism, like there can be row polymorphism?

Haskell on Reddit - Sat, 02/21/2015 - 9:13am

Hi. Recently I've found that there's a lot of talk about records in Haskell, and how other languages handle records. One of the issues I found is "row polymorphism", where some other languages allow functions to be applied to more than a single type of record, like so:

f :: {a: Int, b: Int | p} -> Int
f x = x.a + x.b

g :: {a: Int, b: Int, c: Int} -> Int
g x = f x

Defining the function "g" would not give you a type error, because "f" is polymorphic over the type of the record (here the polymorphism is specified in the type with "p").

But I haven't seen any talks about sum types, and if you could have something similar with them or not. For instance, could there be a language (or language extension in the case of Haskell) that can allow you to do this?:

f :: Int -> [x: Bool | y: Bool | p]
f 0 = x True
f 1 = x False
f n = y True

g :: Int -> [x: Bool | y: Bool | z: Bool]
g n = f n

If you see records as "labelled product types", and are able to define row polymorphism (and/or extensible records, etc) on them, then shouldn't you also be able to define "labelled sum types" and define polymorphism over them too? Considering they are dual, you should be able to define both at the same time, right?

I can see polymorphic sum types being useful in things like error handling for instance (where you have functions that give you different possible errors as a sum type, but you can extend those errors outside of that function application if you want).
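
In today's Haskell that kind of extension has to be done by hand with an explicit injection; a rough sketch (the error types here are made up):

data ParseError = UnexpectedEnd | BadDigit Char
data LoadError  = FileMissing FilePath | Parse ParseError

parseCount :: String -> Either ParseError Int
parseCount = undefined  -- hypothetical parser

-- widening the error sum requires an explicit injection at each call site;
-- polymorphic sum types would make this automatic
loadCount :: String -> Either LoadError Int
loadCount contents = either (Left . Parse) Right (parseCount contents)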

Note: The syntax above is there to make it easier to show what I mean. I know it's not valid Haskell.

submitted by gonzaw308
[link] [22 comments]
Categories: Incoming News

Looking for: hscolour with go-to-definition clickable code

Haskell on Reddit - Sat, 02/21/2015 - 7:06am

In a discussion on Reddit, probably a year or so ago, somebody linked in a comment a tool they had built that was like hscolour, but where you could click on identifiers to jump to their definitions.

Similar to this (http://goto.ucsd.edu/~rjhala/Annot/hscolour/src/Language-Haskell-HsColour-CSS.html#) but with go-to-definition instead of showing the types on hover.

I've searched for the last two hours, but I cannot find it at all.

Does anybody remember the name of the tool, the link to the demo page that was posted, or anything related?

Pointers would be much appreciated!

submitted by nh2_
[link] [6 comments]
Categories: Incoming News

Is there a better way to think about typeclasses?

Haskell on Reddit - Sat, 02/21/2015 - 6:28am

This is about typeclasses in general but I'll use Functor as an example.

Recently I was reading something about profunctors and there was an expression like this one

f a -> b -> f (a, b)

The question was: can you write that function for any functor? (Or something similar; I may be butchering the concepts here.) I realized I had no intuition about functors in general.

Sure, I can imagine that there's [] or Maybe instead of f. For those I'm sure that I could write the given function, but there's no machinery in my brain that would make me confident I can make a claim about every one of them.

Of course, this specific example is simple and the answer is yes.

g fa b = fmap (\a -> (a, b)) fa

But the one with

f (Either a b) -> Either (f a) b

turned out not to be possible. Now, what I'm asking is: Is there a unified way to think about EVERY functor? And I don't mean just following the definition. I'm talking about intuition. The same goes, and perhaps even more so, for monads. When I see a function that operates on Monad m, I really can't say I know what it does. I can answer questions like "What does it do for lists?" or "What does it do for IO?", but I can't answer "What does it do for m?".

Sometimes I can decompose the function into smaller ones that work on monads. In those cases I can say "traverse maps the monadic function over the functor and then sequences it" but again, the only intuition I get from it is by substituting the functor and the monad for some specific ones.

submitted by Darwin226
[link] [12 comments]
Categories: Incoming News

FLTK GUI Bindings - Alpha Release

haskell-cafe - Sat, 02/21/2015 - 4:32am
I'm pleased to announce the alpha release of Haskell bindings [1] to the FLTK [2] GUI library. On Linux and Mac this package and FLTK 1.3.3 [3] should install pretty smoothly. The library covers about 80+% of the API and has 60+ widgets that can be used transparently from Haskell. You can read more about the motivation behind the library, installation instructions and quick start documentation here [4]. The package also comes with 14 demos to show you how to get started.

There still isn't any support for Windows because I don't have access to a Windows machine, and don't have any experience with the Windows C/C++ build tools. The code itself should be fully portable. I could use some help here.

Please report any issues on the Github page [5]. Hope you enjoy and I'd love any feedback on the install and development process. Thanks!

-deech

[1] http://hackage.haskell.org/package/fltkhs-0.1.0.0
[2] http://fltk.org
[3] http://www.fltk.org/software.php?VERSION=1.3.3&FILE=fltk/1.3.3/fltk-1.3.3-source.tar.gz
[4]
Categories: Offsite Discussion

Where to go after LYAH?

Haskell on Reddit - Sat, 02/21/2015 - 3:26am

I'm almost done with LYAH, and I'm wondering where to go from here? I'd like to start working on some personal project to build on my understanding of functional programming, but I have no idea where to start and/or what to do.

Is there any guide that basically walks you through how to develop a simple project in Haskell? The project should be a utility program or a game, something concrete.

submitted by The_Kilo
[link] [14 comments]
Categories: Incoming News

Trouble with forkProcess

haskell-cafe - Sat, 02/21/2015 - 2:58am
Hi folks, I’d love to get a sense of the prevailing wisdom with respect to forkProcess from System.Posix.Process. I’m building a multithreaded (STM) application with potentially many threads simultaneously reading and writing to pieces of (an otherwise rather large) shared state. Occasionally I would like to record a whole snapshot of that state, and it seems to me a natural way to accomplish this is to fork a process (eliminating all the other threads) and perform a read-only dump of the state. This dump can be time-consuming, which is why I’d rather not have contention with other threads that may be trying to modify the state while I’m dumping. My experience with forkProcess so far is that sometimes it works brilliantly, and other times it just deadlocks. I’m at a loss to understand what the problem is, but the deadlock seems to occur before the new process gets any control -- certainly before it has started to access any of the shared state. I’m aware of the giant warning in the forkProces
Categories: Offsite Discussion