News aggregator

Domain settings

Lambda the Ultimate - Sat, 10/04/2014 - 5:24am

I am about to make some changes to the name server definitions. Since changes take time to propagate, you may have trouble reaching the site for a while. If this happens, try using the .com domain instead of the preferred .org domain.

Categories: Offsite Discussion

apfelmus: GUI - Release of the threepenny-gui library, version 0.5.0.0

Planet Haskell - Sat, 10/04/2014 - 2:59am

I am pleased to announce the release of threepenny-gui version 0.5, a cheap and simple library to satisfy your immediate GUI needs in Haskell.

Want to write a small GUI thing but forgot to sacrifice to the giant rubber duck in the sky before trying to install wxHaskell or Gtk2Hs? Then this library is for you! Threepenny is easy to install because it uses the web browser as a display.

The library also has functional reactive programming (FRP) built in, which makes it a lot easier to write GUI applications without getting caught in spaghetti code. For an introduction to FRP, see for example my slides from a tutorial I gave in 2012 (the API is slightly different in Reactive.Threepenny) and the preliminary widget design guide.

Version 0.5 is essentially a maintenance release, allowing for newer versions of the libraries that it depends on. It also incorporates various contributions by other people, including a small Canvas API by Ken Friis Larsen and Carsten König, and a complete set of SVG elements and attributes by Steve Bigham. Many thanks also to Yuval Langer and JP Moresmau.

However, while it’s great that the library begins to grow in breadth, incorporating larger and larger parts of the DOM API, I also feel that the current backend code is unable to cope with this growth. In the next version, I intend to overhaul the server code and put the JavaScript FFI on a more solid footing.

To see Threepenny in action, have a look at the following applications:

Daniel Austin’s FNIStash
Editor for Torchlight 2 inventories.
Daniel Mlot’s Stunts Cartography Track Viewer
Map viewer for the Stunts racing game.

Get the library here:

Note that the API is still in flux and is likely to change radically in the future. You’ll have to convert frequently or develop against a fixed version.

Categories: Offsite Blogs

Blog | jelv.is

del.icio.us/haskell - Fri, 10/03/2014 - 8:53pm
Categories: Offsite Blogs

www.joachim-breitner.de

del.icio.us/haskell - Fri, 10/03/2014 - 8:00pm
Categories: Offsite Blogs

ANN: hxt-css 0.1.0.0, a CSS selector engine for HXT

haskell-cafe - Fri, 10/03/2014 - 7:37pm
Hello everybody,

I'd like to announce the first public release of hxt-css, a CSS selector engine for the Haskell XML Toolbox (HXT). Its main design goals are:

* Support as many CSS selectors as possible: it currently supports all CSS 3 selectors except the ones that do not make sense outside a web browser (such as :hover). For example, it supports weird things like div > span + p:not(:nth-of-type(3n-1)).
* Try to be 100% correct: in all tests I ran, the output of hxt-css was identical to that of Firefox and Chrome.
* Follow the conventions of other HXT packages: for example, error reporting is done the same way as in hxt-xpath.

Note that there is already a similar package on Hackage called HandsomeSoup.

- Marios
Categories: Offsite Discussion

ANN: New version of graphmod (1.2.4)

haskell-cafe - Fri, 10/03/2014 - 7:37pm
Hello,

I am pleased to announce a new version of `graphmod`---a program that helps you visualize the import dependencies between the modules in your Haskell programs.

The new feature in version 1.2.4 is support for pruning the dependency graph, enabled with the flag -p or --prune-edges. When this option is selected, `graphmod` will ignore imports of modules that are already imported by some of the dependencies of the module. For example, consider the following modules:

module A where { import B; import C }
module B where { import C }
module C where { }

When generated with `--prune-edges`, the resulting graph will be A -> B -> C. Note that there is no edge from `A` to `C`, because `C` is already imported by `B`.

Happy hacking,
-Iavor
Categories: Offsite Discussion

Magnus Therning: Script for migrating from WordPress to Hakyll

Planet Haskell - Fri, 10/03/2014 - 6:00pm

As I wrote in a previous post on converting posts from WordPress to Hakyll, I couldn’t quite find a tool that met my needs out of the box. I had a look at the source of hakyll-convert; it uses a library for RSS parsing, which sounds good. However, the export of posts from WordPress is in an extended RSS format; among other things it contains the comments on posts. Unfortunately it doesn’t look like the RSS library supports the WordPress extensions, so modifying hakyll-convert to also extract comments seems like more work than I’d like to put into it, especially since I had a feeling that hacking up something using tagsoup would be quite a bit faster.

I put the resulting script in a gist on GitHub. I call it bbwp, which is short for Bye, bye, WordPress.

Categories: Offsite Blogs

Jeremy Gibbons: Distributivity in Horner’s Rule

Planet Haskell - Fri, 10/03/2014 - 9:25am

This is a continuation of my previous post on Horner’s Rule, and in particular, of the discussion there about distributivity in the datatype-generic version of the Maximum Segment Sum problem:

the essential property behind Horner’s Rule is one of distributivity. In the datatype-generic case, we will model this as follows. We are given an -algebra [for a binary shape functor ], and a -algebra [for a collection monad ]; you might think of these as “datatype-generic product” and “collection sum”, respectively. Then there are two different methods of computing a result from an structure: we can either distribute the structure over the collection(s) of s, compute the “product” of each structure, and then compute the “sum” of the resulting products; or we can “sum” each collection, then compute the “product” of the resulting structure. Distributivity of “product” over “sum” is the property that these two different methods agree, as illustrated in the following diagram.

For example, with adding all the integers in an -structure, and finding the maximum of a (non-empty) collection, the diagram commutes.
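Concretely, the "product" here is addition and the "sum" is maximum, so the distributivity required is that of addition over (binary) maximum. A quick Haskell check of that law (the function name is my own):

```haskell
-- Addition distributes over maximum:
--   a + (b `max` c) == (a + b) `max` (a + c)
-- This is the instance of "product over sum" used in the
-- datatype-generic Maximum Segment Sum development.
distributes :: Int -> Int -> Int -> Bool
distributes a b c = a + max b c == max (a + b) (a + c)
```

Exhaustively checking small cases is enough to see the pattern; the general proof is by case analysis on which of b and c is larger.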

There’s a bit of hand-waving above to justify the claim that this is really a kind of distributivity. What does it have to do with the common-or-garden equation

a ⊗ (b ⊕ c) = (a ⊗ b) ⊕ (a ⊗ c)

stating distributivity of one binary operator over another? That question is the subject of this post.

Distributing over effects

Recall that distributes the shape functor over the monad in its second argument; this is the form of distribution over effects that crops up in the datatype-generic Maximum Segment Sum problem. More generally, this works for any idiom ; this will be important below.

Generalizing in another direction, one might think of distributing over an idiom in both arguments of the bifunctor, via an operator , which is to say, , natural in the . This is the method of the subclass of that Bruno Oliveira and I used in our Essence of the Iterator Pattern paper; informally, it requires just that has a finite ordered sequence of “element positions”. Given , one can define .

That traversability (or equivalently, distributivity over effects) for a bifunctor is definable for any idiom, not just any monad, means that one can also conveniently define an operator for any traversable unary functor . This is because the constant functor (which takes any to ) is an idiom: the method returns the empty list, and idiomatic application appends two lists. Then one can define

where makes a singleton list. For a traversable bifunctor , we define where is the diagonal functor; that is, , natural in the . (No constant functor is a monad, except in trivial categories, so this convenient definition of contents doesn’t work monadically. Of course, one can use a writer monad, but this isn’t quite so convenient, because an additional step is needed to extract the output.)

One important axiom of that I made recent use of in a paper with Richard Bird on Effective Reasoning about Effectful Traversals is that it should be “natural in the contents”: it should leave shape unchanged, and depend on contents only up to the extent of their ordering. Say that a natural transformation between traversable functors and “preserves contents” if . Then, in the case of unary functors, the formalization of “naturality in the contents” requires to respect content-preserving :

In particular, itself preserves contents, and so we expect

to hold.

Folding a structure

Happily, the same generic operation provides a datatype-generic means to “fold” over the elements of an -structure. Given a binary operator and an initial value , we can define an -algebra —that is, a function —by

(This is a slight specialization of the presentation of the datatype-generic MSS problem from last time; there we had . The specialization arises because we are hoping to define such an given a homogeneous binary operator . On the other hand, the introduction of the initial value is no specialization, as we needed such a value for the “product” of an empty “segment” anyway.)

Incidentally, I believe that this “generic folding” construction is exactly what is intended in Ross Paterson’s Data.Foldable library.

Summing a collection

The other ingredient we need is an -algebra . We already decided last time to

stick to reductions—s of the form for associative binary operator ; then we also have distribution over choice: . Note also that we prohibited empty collections in , so we do not need a unit for .

On account of being an algebra for the collection monad , we also get a singleton rule .

Reduction to distributivity for lists

One of the take-home messages in the Effective Reasoning about Effectful Traversals paper is that it helps to reduce a traversal problem for datatypes in general to a more specific one about lists, exploiting the “naturality in contents” property of traversability. We’ll use that tactic for the distributivity property in the datatype-generic version of Horner’s Rule.

In this diagram, the perimeter is the commuting diagram given at the start of this post—the diagram we have to justify. Face (1) is the definition of in terms of . Faces (2) and (3) are the expansion of as generic folding of an -structure. Face (4) follows from being an -algebra, and hence being a left-inverse of . Face (5) is an instance of the naturality property of . Face (6) is the property that respects the contents-preserving transformation . Therefore, the whole diagram commutes if Face (7) does—so let’s look at Face (7)!

Distributivity for lists

Here’s Face (7) again:

Demonstrating that this diagram commutes is not too difficult, because both sides turn out to be list folds.

Around the left and bottom edges, we have a fold after a map , which automatically fuses to , where is defined by

or, pointlessly,

Around the top and right edges we have the composition . If we can write as an instance of , we can then use the fusion law for

to prove that this composition equals .
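The fold-after-map fusion appealed to on the left and bottom edges is the familiar list law foldr g e . map f = foldr (\x acc -> g (f x) acc) e. A small concrete sketch, with squaring and addition standing in for the post's abstract operators (names are mine):

```haskell
-- Fold after map fuses into a single fold:
--   foldr g e . map f  ==  foldr (\x acc -> g (f x) acc) e
-- Here f squares and g adds, purely as stand-in operators.
fuseLHS, fuseRHS :: [Int] -> Int
fuseLHS = foldr (+) 0 . map (^ 2)
fuseRHS = foldr (\x acc -> x ^ 2 + acc) 0
```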

In fact, there are various equivalent ways of writing as an instance of . The definition given by Conor McBride and Ross Paterson in their original paper on idioms looked like the identity function, but with added idiomness:

In the special case that the idiom is a monad, it can be written in terms of (aka ) and :

But we’ll use a third definition:

where

That is,

Now, for the base case we have

as required. For the inductive step, we have:

which completes the fusion proof, modulo the wish about distributivity for :

Distributivity for cartesian product

As for that wish about distributivity for :

which discharges the proof obligation about distributivity for cartesian product, but again modulo two symmetric wishes about distributivity for collections:

Distributivity for collections

Finally, the proof obligations about distributivity for collections are easily discharged, by induction over the size of the (finite!) collection, provided that the binary operator distributes over in the familiar sense. The base case is for a singleton collection, ie in the image of (because we disallowed empty collections); this case follows from the fact that is an -algebra. The inductive step is for a collection of the form with both strictly smaller than the whole (so, if the monad is idempotent, disjoint, or at least not nested); this requires the distribution of the algebra over choice , together with the familiar distribution of over .

Summary

So, the datatype-generic distributivity for -structures of collections that we used for the Maximum Segment Sum problem reduced to distributivity for lists of collections, which reduced to the cartesian product of collections, which reduced to that for pairs. That’s a much deeper hierarchy than I was expecting; can it be streamlined?

Categories: Offsite Blogs

Jeremy Gibbons: Morality and temptation

Planet Haskell - Fri, 10/03/2014 - 9:20am

Inspired by Bob Harper’s recent postings, I too have a confession to make. I know what is morally right; but sometimes the temptation is too great, and my resolve is weak, and I lapse. Fast and loose reasoning may excuse me, but my conscience would be clearer if I could remain pure in the first place.

Initial algebras, final coalgebras

We know and love initial algebras, because of the ease of reasoning with their universal properties. We can tell a simple story about recursive programs, solely in terms of sets and total functions. As we discussed in the previous post, given a functor F, an F-algebra is a pair (B, g) consisting of an object B and an arrow g : F B → B. A homomorphism between F-algebras (A, f) and (B, g) is an arrow h : A → B such that h . f = g . F h:

The F-algebra (μF, in) is initial iff there is a unique such h for each (B, g); for well-behaved functors F, such as the polynomial functors on Set, an initial algebra always exists. We conventionally write “μF” for the initial algebra, and “fold g” for the unique homomorphism to another F-algebra (B, g). (In Set, initial algebras correspond to datatypes of finite recursive data structures.)

The uniqueness of the solution is captured in the universal property:

h = fold g ⇔ h . in = g . F h

In words, h is the fold iff h satisfies the defining equation for the fold.

The universal property is crucial. For one thing, the homomorphism equation is a very convenient style in which to define a function; it’s the datatype-generic abstraction of the familiar pattern for defining functions on lists:

h []       = e
h (x : xs) = f x (h xs)

These two equations implicitly characterizing h are much more comprehensible and manipulable than a single equation

h xs = if null xs then e else f (head xs) (h (tail xs))

explicitly giving a value for h. But how do we know that this assortment of two facts about h is enough to form a definition? Of course! A system of equations in this form has a unique solution.

Moreover, the very expression of the uniqueness of the solution as an equivalence provides many footholds for reasoning:

  • Read as an implication from left to right, instantiating h to fold g to make the left-hand side trivially true, we get an evaluation rule for folds:

    fold g . in = g . F (fold g)
  • Read as an implication from right to left, we get a proof rule for demonstrating that some complicated expression h is a fold:

    h . in = g . F h ⇒ h = fold g
  • In particular, we can quickly see that the identity function is a fold:

    id . in = in . F id

    so id = fold in. (In fact, this one’s an equivalence.)

  • We get a very simple proof of a fusion rule, for combining a following function with a fold to make another fold:

    h . fold f = fold g ⇐ h . f = g . F h
  • Using this, we can deduce Lambek’s Lemma, that the constructors form an isomorphism. Supposing that there is a right inverse, and it is a fold, what must it look like?

    So if we define , we get . We should also check the left inverse property:

And so on, and so on. Many useful functions can be written as instances of fold, and the universal property gives us a very powerful reasoning tool—the universal property of fold is a marvel to behold.
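Specialized to lists, the fusion rule says that h . foldr f e = foldr f' e' provided h e = e' and h (f x y) = f' x (h y). A concrete instance (chosen by me, purely for illustration) with h doubling a sum:

```haskell
-- Fusion: double . foldr (+) 0 == foldr (\x acc -> 2 * x + acc) 0,
-- since double 0 == 0 and double (x + y) == 2 * x + double y.
double :: Int -> Int
double = (2 *)

sumThenDouble, fusedSum :: [Int] -> Int
sumThenDouble = double . foldr (+) 0
fusedSum      = foldr (\x acc -> 2 * x + acc) 0
```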

And of course, it all dualizes beautifully. An F-coalgebra is a pair (A, f) with f : A → F A. A homomorphism between F-coalgebras (A, f) and (B, g) is a function h : A → B such that g . h = F h . f:

The F-coalgebra (νF, out) is final iff there is a unique homomorphism to it from each (A, f); again, for well-behaved F, final coalgebras always exist. We write “νF” for the final coalgebra, and “unfold f” for the unique homomorphism to it. (In Set, final coalgebras correspond to datatypes of finite-or-infinite recursive data structures.)

Uniqueness is captured by the universal property

h = unfold f ⇔ out . h = F h . f

which has just as many marvellous consequences. Many other useful functions are definable as instances of unfold, and again the universal property gives a very powerful tool for reasoning with them.

Hylomorphisms

There are also many interesting functions that are best described as a combination of a fold and an unfold. The hylomorphism pattern, with an unfold followed by a fold, is the best known: the unfold produces a recursive structure, which the fold consumes.

The factorial function is a simple example. The datatype of lists of natural numbers is determined by the shape functor

F X = 1 + ℕ × X

Then we might hope to write

factorial = fold f . unfold g

where g : ℕ → 1 + ℕ × ℕ analyzes a natural number into nothing (for zero) or the pair of itself and its predecessor (for non-zero inputs), and f : 1 + ℕ × ℕ → ℕ returns 1 (for the unit) or multiplies (for a pair).

More elaborately, we might hope to write quicksort as the composition of unfold partition (to generate a binary search tree) and fold glue (to flatten that tree to a list), where G is the shape functor for internally-labelled binary trees,

G X = 1 + ℤ × X × X

partition partitions a list of integers into the unit or a pivot and two sublists, and glue glues together the unit or a pivot and two sorted lists into one list. In fact, any divide-and-conquer algorithm can be expressed in terms of an unfold computing a tree of subproblems top-down, followed by a fold that solves the subproblems bottom-up.
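The hylomorphism pattern is easy to render directly in Haskell; here is a minimal sketch over the list shape functor, with the factorial example from above (the names ListF, hylo and so on are mine, not fixed by the post):

```haskell
-- Shape functor for lists of naturals: 1 + N * X
data ListF n x = NilF | ConsF n x

-- Its functorial action on the recursive position
mapF :: (x -> y) -> ListF n x -> ListF n y
mapF _ NilF        = NilF
mapF g (ConsF n x) = ConsF n (g x)

-- hylo alg coalg: unfold a virtual structure with coalg, fold it
-- away with alg; the intermediate list is never built as a whole.
hylo :: (ListF n b -> b) -> (a -> ListF n a) -> a -> b
hylo alg coalg = alg . mapF (hylo alg coalg) . coalg

-- factorial: count down from n, then multiply back up
factorial :: Integer -> Integer
factorial = hylo alg coalg
  where
    coalg 0 = NilF                -- zero analyzes to nothing
    coalg n = ConsF n (n - 1)     -- otherwise, itself and predecessor
    alg NilF        = 1           -- empty product
    alg (ConsF n r) = n * r       -- multiply
```

Quicksort arises the same way, with a binary-tree shape functor, a partitioning coalgebra and a gluing algebra.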

But sadly, this doesn’t work in , because the types don’t meet in the middle. The source type of the fold is (the carrier of) an initial algebra, but the target type of the unfold is a final coalgebra, and these are different constructions.

This is entirely reasonable, when you think about it. Our definitions in —the category of sets and total functions—necessarily gave us folds and unfolds as total functions; the composition of two total functions is a total function, and so a fold after an unfold ought to be a total function too. But it is easy to define total instances of that generate infinite data structures (such as a function , which generates an infinite ascending list of naturals), on which a following fold is undefined (such as “the product” of an infinite ascending list of naturals). The composition then should not be a total function.

One might try interposing a conversion function of type νF → μF, coercing the final data structure produced by the unfold into an initial data structure for consumption by the fold. But there is no canonical way of doing this, because final data structures may be “bigger” (perhaps infinitely so) than initial ones. (In contrast, there is a canonical function of type μF → νF. In fact, there are two obvious definitions of it, and they agree—a nice exercise!)

One might try parametrizing that conversion function with a natural number, bounding the depth to which the final data structure is traversed. Then the coercion is nicely structural (in fact, it’s a fold over the depth), and everything works out type-wise. But having to thread such “resource bounds” through the code does terrible violence to the elegant structure; it’s not very satisfactory.

Continuous algebras

The usual solution to this conundrum is to give up on , and to admit that richer domain structures than sets and total functions are required. Specifically, in order to support recursive definitions in general, and the hylomorphism in particular, one should move to the category of continuous functions between complete partial orders (CPOs). Now is not the place to give all the definitions; see any textbook on denotational semantics. The bottom line, so to speak, is that one has to accept a definedness ordering on values—both on “data” and on functions—and allow some values to be less than fully defined.

Actually, in order to give meaning to all recursive definitions, one has to further restrict the setting to pointed CPOs—in which there is a least-defined “bottom” element for each type , which can be given as the “meaning” (solution) of the degenerate recursive definition at type . Then there is no “empty” CPO; the smallest CPO has just a single element, namely . As with colimits in general, this smallest object is used as the start of a chain of approximations to a limiting solution. But in order for really to be an initial object, one also has to constrain the arrows to be strict, that is, to preserve ; only then is there a unique arrow for each . The category of strict continuous functions between pointed CPOs is called .

It so happens that in , initial algebras and final coalgebras coincide: the objects (pointed CPOs) and are identical. This is very convenient, because it means that the hylomorphism pattern works fine: the structure generated by the unfold is exactly what is expected by the fold.

Of course, it can still happen that the composition yields a “partial” (less than fully defined) function; but at least it now type-checks. Categories with this initial algebra/final coalgebra coincidence are called algebraically compact; they were studied by Freyd, but there’s a very good survey by Adámek, Milius and Moss.

However, the story gets murkier than that. For one thing, does not have proper products. (Indeed, an algebraically compact category with products collapses.) But beyond that, —with its restriction to strict arrows—is not a good model of lazy functional programming; , with non-strict arrows too, is better. So one needs a careful balance of the two categories. The consequences for initial algebras and final coalgebras are spelled out in one of my favourite papers, Program Calculation Properties of Continuous Algebras by Fokkinga and Meijer. In a nutshell, one can only say that the defining equation for folds has a unique strict solution in ; without the strictness side-condition, is unconstrained (because for any ). But the situation for coalgebras remains unchanged—the defining equation for unfolds has a unique solution (and moreover, it is strict when is strict).

This works, but it means various strictness side-conditions have to be borne in mind when reasoning about folds. Done rigorously, it’s rather painful.

Recursive coalgebras

So, back to my confession. I want to write divide-and-conquer programs, which produce intermediate data structures and then consume them. Folds and unfolds in do not satisfy me; I want more—hylos. Morally, I realise that I should pay careful attention to those strictness side-conditions. But they’re so fiddly and boring, and my resolve is weak, so I usually just brush them aside. Is there a way that I can satisfy my appetite for divide-and-conquer programs while still remaining in the pure world?

Tarmo Uustalu and colleagues have a suggestion. Final coalgebras and algebraic compactness are sufficient but not necessary for the hylo diagram above to have a unique solution; they propose to focus on recursive coalgebras instead. The -coalgebra is “recursive” iff, for each , there is a unique such that :

This is a generalization of initial algebras: if has an initial algebra , then by Lambek’s Lemma has an inverse , and is a recursive coalgebra. And it is a strict generalization: it also covers patterns such as paramorphisms (primitive recursion)—since is a recursive -coalgebra where is the functor taking to —and the “back one or two steps” pattern used in the Fibonacci function.

Crucially for us, almost by definition it covers all of the “reasonable” hylomorphisms too. For example, is a recursive -coalgebra, where is the shape functor for lists of naturals and the -coalgebra introduced above that analyzes a natural into nothing (for zero) or itself and its predecessor (for non-zero inputs). Which is to say, for each , there is a unique such that ; in particular, for the given above that returns 1 or multiplies, the unique is the factorial function. (In fact, this example is also an instance of a paramorphism.) And is a recursive -coalgebra, where is the partition function of quicksort—for any -algebra , there is a unique such that , and in particular when is the glue function for quicksort, that unique solution is quicksort itself.

This works perfectly nicely in ; there is no need to move to more complicated settings such as or , or to consider partiality, or strictness, or definedness orderings. The only snag is the need to prove that a particular coalgebra of interest is indeed recursive. Capretta et al. study a handful of “basic” recursive coalgebras and of constructions on coalgebras that preserve recursivity.

More conveniently, Taylor and Adámek et al. relate recursivity of coalgebras to the more familiar notion of variant function, ie well-founded ordering on arguments of recursive calls. They restrict attention to finitary shape functors; technically, preserving directed colimits, but informally, I think that’s equivalent to requiring that each element of has a finite number of elements—so polynomial functors are ok, as is the finite powerset functor, but not powerset in general. If I understand those sources right, for a finitary functor and an -coalgebra , the following conditions are equivalent: (i) is recursive; (ii) is well-founded, in the sense that there is a well-founded ordering such that for each “element” of ; (iii) every element of has finite depth; and (iv) there is a coalgebra homomorphism from to .

This means that I can resort to simple and familiar arguments in terms of variant functions to justify hylo-style programs. The factorial function is fine, because ( is a finitary functor, being polynomial, and) the chain of recursive calls to which leads is well-founded; quicksort is fine, because the partitioning step is well-founded; and so on. Which takes a great weight of guilt off my shoulders: I can give in to the temptation to write interesting programs, and still remain morally as pure as the driven snow.

Categories: Offsite Blogs

Jeremy Gibbons: Adjunctions

Planet Haskell - Fri, 10/03/2014 - 9:16am

Universal properties are a generalization of the notion of a Galois connection between two orderings. Or perhaps I should say: universal properties arise from adjunctions, and it is adjunctions that are a generalization of Galois connections. Adjunctions capture in an abstract categorical setting the idea of “optimal solutions to a problem”; and this idea is itself very general, capturing many of the structures underlying common patterns in programming (not to mention the rest of mathematics). Solutions to equations, products, limits of sequences of approximations, and minimality and maximality are just some of the instances of this powerful abstraction that we will make use of. In the preface to Categories for the Working Mathematician, Mac Lane wrote that “adjoint functors arise everywhere”.

Adjoint functors

Two functors L : C → D and R : D → C form an adjunction, written L ⊣ R, if there is an isomorphism between the sets of arrows L A → B in D and A → R B in C. We say that L is the left adjoint and R the right adjoint. The essence of the isomorphism is captured by two natural transformations η : Id → R∘L in C and ε : L∘R → Id in D, called the unit and counit of the adjunction; η is the image in C of id (on L A) in D, and conversely, ε is the image in D of id (on R B) in C. The unit and counit satisfy the laws

ε L . L η = id    R ε . η R = id

From them one can construct the witnesses to the isomorphism for arbitrary arrows: for each arrow f : L A → B in D, there is a unique arrow g : A → R B in C such that f = ε . L g, given by g = R f . η; and conversely, for each arrow g : A → R B in C, there is a unique arrow f : L A → B in D such that g = R f . η, given by f = ε . L g; and moreover, these two constructions are each other’s inverses.

Adjunctions from Galois connections

A preorder (P, ⊑) forms a category: the objects of the category are the elements of the set P, and between any two elements x, y, there is a unique arrow if x ⊑ y, and no arrow otherwise. That adjunctions are a generalization of Galois connections follows straightforwardly from the fact that there is at most one arrow between any two objects in a preorder category. Then monotonic functions f and g between preorders (P, ⊑) and (Q, ≼) form a Galois connection precisely if the sets of arrows f x → y and x → g y are isomorphic—that is, if both f x ≼ y and x ⊑ g y hold, or neither do, or in other words,

f x ≼ y ⇔ x ⊑ g y
Adjoints of the diagonal functor

A very useful example of adjunctions arises in the definition of products—in the category of sets and total functions, for given types , there is an isomorphism between the set of pair-generating functions, of type , and their two projections, pairs of functions of types and . (Indeed, given functions and , one can construct the pair-generating function ; and conversely, given a pair-generating function , one can construct its two projections and ; and moreover, these two constructions are inverses.)

The “isomorphism between sets of arrows” can be elegantly expressed as an adjunction; since it concerns pairs of arrows, one side of the adjunction involves the product category . The right adjoint is the product functor , mapping an object in —that is, a pair of sets—to their cartesian product as an object in , and an arrow in —that is, a parallel pair of functions—to a function in acting pointwise on pairs. In the other direction, the left adjoint is the diagonal functor , mapping an object in to the object in , and a function to the pair of functions as an arrow in . The adjunction amounts to the isomorphism

or equivalently,

The unit and counit of the adjunction are and . In more familiar terms, the unit is a natural transformation in , so a polymorphic function; in fact, it’s the function of type that we might call . However, the counit is a natural transformation in , so not simply a (polymorphic) function; but arrows in are pairs of functions, so we might write this .

Then the “fork” operation is in fact one of the two witnesses to the isomorphism between the sets of arrows: given an arrow in , that is, a pair of functions of types , then is an arrow in , that is, a function of type , given by the construction above:

or, with more points,

The laws that the unit and counit satisfy are

or, in more familiar terms,

The universal property of fork follows from the isomorphism between sets of arrows:

The universal property of fork underlies all the useful laws of that operator.
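In Haskell terms, the fork operator and the unit of the adjunction can be sketched as follows (fork and dup are my names; fork is essentially (&&&) from Control.Arrow):

```haskell
-- fork f g applies both functions to one argument, pairing results.
fork :: (c -> a) -> (c -> b) -> c -> (a, b)
fork f g x = (f x, g x)

-- The unit of the adjunction: duplication.
dup :: a -> (a, a)
dup = fork id id
```

The projection laws fst . fork f g = f and snd . fork f g = g then hold by construction.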

Of course, the situation nicely dualizes too. Coproducts in \(\mathbb{Set}\) arise from the isomorphism between the set of arrows \(\mathbb{Set}(A + B, C)\) and the pairs of arrows in \(\mathbb{Set}(A, C)\) and \(\mathbb{Set}(B, C)\). Again, “pairs of arrows” suggest the product category; but this time, the diagonal functor is the right adjoint, with the coproduct functor \((+)\) (which takes a pair of sets to their disjoint union) as the left adjoint. That is, the adjunction is \((+) \dashv \Delta\), and the isomorphism is

\[ \mathbb{Set}(A + B, C) \cong (\mathbb{Set} \times \mathbb{Set})((A,B), \Delta C) \]

The unit is a natural transformation \(\eta_{(A,B)} : (A,B) \to (A+B, A+B)\) in \(\mathbb{Set} \times \mathbb{Set}\), that is, a pair of functions \(\mathit{inl} : A \to A+B\) and \(\mathit{inr} : B \to A+B\). The counit is a natural transformation \(\epsilon_A : A + A \to A\) in \(\mathbb{Set}\), which we might call \(\mathit{merge}\). The “join” \(f \mathbin{\triangledown} g\) of two functions with a common range is a witness to one half of the isomorphism—given an arrow \((f,g) : (A,B) \to \Delta C\) in \(\mathbb{Set} \times \mathbb{Set}\), then \(f \mathbin{\triangledown} g\) is an arrow \(A + B \to C\) in \(\mathbb{Set}\), defined by

\[ f \mathbin{\triangledown} g = \mathit{merge} \cdot (f + g) \]

The two laws that the unit and counit satisfy are:

\[ \Delta\epsilon \cdot \eta\Delta = \mathrm{id}_{\Delta} \qquad \epsilon(+) \cdot (+)\eta = \mathrm{id}_{(+)} \]

or, perhaps more perspicuously,

\[ \mathit{merge} \cdot \mathit{inl} = \mathit{id} \;\land\; \mathit{merge} \cdot \mathit{inr} = \mathit{id} \qquad \mathit{merge} \cdot (\mathit{inl} + \mathit{inr}) = \mathit{id} \]
Another familiar example from functional programming is the notion of currying, which arises when one can construct the function space \(B \to C\) (the type of functions from \(B\) to \(C\), for each type \(B\) and \(C\)), such that there is an isomorphism between the sets of arrows \(A \times B \to C\) and \(A \to (B \to C)\). Here, the adjunction is \((\times B) \dashv (B \to)\)—in this case, both functors are endofunctors on \(\mathbb{Set}\). The unit and counit are natural transformations \(\eta_A : A \to (B \to A \times B)\) and \(\epsilon_C : (B \to C) \times B \to C\). We might call these \(\mathit{pair}\) and \(\mathit{apply}\), since the first is a curried pair-forming operator, and the second applies a function to an argument:

\[ \mathit{pair}\,a\,b = (a, b) \qquad \mathit{apply}\,(f, b) = f\,b \]

The laws they satisfy are as follows:

\[ \epsilon(\times B) \cdot (\times B)\eta = \mathrm{id}_{(\times B)} \qquad (B \to)\epsilon \cdot \eta(B \to) = \mathrm{id}_{(B \to)} \]

or, in points,

\[ \mathit{apply}\,(\mathit{pair}\,a, b) = (a, b) \qquad \mathit{apply} \cdot \mathit{pair}\,f = f \]

The isomorphism itself is witnessed by the two inverse functions

\[ \mathit{curry}\,f\,a\,b = f\,(a, b) \qquad \mathit{uncurry}\,g\,(a, b) = g\,a\,b \]

where \(\mathit{curry} : (A \times B \to C) \to (A \to (B \to C))\) and \(\mathit{uncurry} : (A \to (B \to C)) \to (A \times B \to C)\).
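These are of course the Prelude's `curry` and `uncurry`; a minimal sketch, hiding the Prelude versions so that all four functions can be written out:

```haskell
import Prelude hiding (curry, uncurry)

-- The unit of the currying adjunction: a curried pair-former.
pair :: a -> b -> (a, b)
pair a b = (a, b)

-- The counit: application of a function to an argument.
apply :: (b -> c, b) -> c
apply (f, b) = f b

-- The two witnesses to the isomorphism (A*B -> C) ~ (A -> (B -> C)).
curry :: ((a, b) -> c) -> (a -> b -> c)
curry f a b = f (a, b)

uncurry :: (a -> b -> c) -> ((a, b) -> c)
uncurry g (a, b) = g a b
```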

Categories: Offsite Blogs

Jeremy Gibbons: Universal properties and Galois connections

Planet Haskell - Fri, 10/03/2014 - 9:13am

One recurring theme throughout this series will be that of a universal property—an identity that captures an indirect means of solving a problem, by transforming that problem into a different (and hopefully simpler) domain, while still preserving all its essential properties. In particular, the original problem has a solution if and only if the transformed problem does, and moreover, the solution to the transformed problem can easily be translated back into a solution to the original problem. One can see universal properties as a generalization of the notion of a Galois connection between two orderings, which is a similarly powerful technique for relating problems in two different settings. (In fact, the proper generalization of Galois connections is to adjunctions, but that’s a story for next time.)

Universal properties

The universal property of the operation \(\mathbin{\vartriangle}\) for products is a representative example. Recall that \(f \mathbin{\vartriangle} g : A \to B \times C\) when \(f : A \to B\) and \(g : A \to C\); and that \(\mathit{fst} : A \times B \to A\) and \(\mathit{snd} : A \times B \to B\). Then \(f \mathbin{\vartriangle} g\) is completely defined by its universal property:

\[ h = f \mathbin{\vartriangle} g \iff \mathit{fst} \cdot h = f \land \mathit{snd} \cdot h = g \]
This identity repays careful study.

  • It translates a problem in the more complex domain of products (namely, the problem of showing how some complicated expression \(h\) can be written in terms of \(\mathbin{\vartriangle}\)) into simpler problems (here, equations about the two projections of \(h\)).
  • It’s an equivalence. So not only do you have an implication from left to right (any \(h\) expressible as a fork \(f \mathbin{\vartriangle} g\) satisfies the two properties on the right), you also have one from right to left (any \(h\) satisfying the two properties on the right can be written as \(f \mathbin{\vartriangle} g\)). In other words, \(h\) is a solution to the equation on the left iff it is a solution on the right; not only does a solution on the right yield a construction on the left, but also the absence of solutions on the right implies the absence on the left. Or again: the equations on the right have a unique solution in \(h\)—since any two solutions must both be equal to the same expression on the left.
  • It has many useful simple consequences. You can make the left-hand side trivially true by letting \(h = f \mathbin{\vartriangle} g\); then the right-hand side must also be true:

    \[ \mathit{fst} \cdot (f \mathbin{\vartriangle} g) = f \qquad \mathit{snd} \cdot (f \mathbin{\vartriangle} g) = g \]

    Symmetrically, you can make the right-hand side trivially true by letting \(f = \mathit{fst} \cdot h\) and \(g = \mathit{snd} \cdot h\); then the left-hand side must also be true:

    \[ h = (\mathit{fst} \cdot h) \mathbin{\vartriangle} (\mathit{snd} \cdot h) \]

    If you further let \(h = \mathit{id}\), you conclude that every pair consists solely of its two projections, nothing more:

    \[ \mathit{id} = \mathit{fst} \mathbin{\vartriangle} \mathit{snd} \]

    In fact, the universal property of \(\mathbin{\vartriangle}\) tells you everything you need to know about \(\mathbin{\vartriangle}\); you might take that as one justification for the term “universal”.

  • It also has many useful less obvious consequences. For example, if you’re searching for an \(h\) that acts independently on the two components of a pair—\(\mathit{fst} \cdot h = f \cdot \mathit{fst}\) and \(\mathit{snd} \cdot h = g \cdot \mathit{snd}\)—just let \(f \cdot \mathit{fst}\) and \(g \cdot \mathit{snd}\) play the roles of the two functions in the universal property, and conclude

    \[ h = (f \cdot \mathit{fst}) \mathbin{\vartriangle} (g \cdot \mathit{snd}) \]

    (which we’ve written “\(f \times g\)” elsewhere). For another example, we can deduce a fusion law for \(\mathbin{\vartriangle}\): for what \(f'\) and \(g'\) does the equation

    \[ (f \mathbin{\vartriangle} g) \cdot k = f' \mathbin{\vartriangle} g' \]

    hold? This matches the left-hand side of the universal property; expanding the right-hand side yields

    \[ f' = \mathit{fst} \cdot (f \mathbin{\vartriangle} g) \cdot k = f \cdot k \qquad g' = \mathit{snd} \cdot (f \mathbin{\vartriangle} g) \cdot k = g \cdot k \]
Such a rich harvest from so small a seed! (In fact, we will see later that an even smaller seed suffices.)
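The cancellation, reflection and fusion consequences above are easy to test concretely in Haskell; a sketch, using a hypothetical `fork` for \(\mathbin{\vartriangle}\) and `cross` for \(\times\), checked at sample points rather than proved:

```haskell
fork :: (a -> b) -> (a -> c) -> a -> (b, c)
fork f g a = (f a, g a)

cross :: (a -> b) -> (c -> d) -> (a, c) -> (b, d)
cross f g (a, c) = (f a, g c)

-- cancellation: fst . fork f g == f,  snd . fork f g == g
-- reflection:   fork fst snd == id
-- fusion:       fork f g . h == fork (f . h) (g . h)
checkLaws :: Bool
checkLaws = and
  [ (fst . fork f g) x == f x
  , (snd . fork f g) x == g x
  , fork fst snd (x, x) == (x, x)
  , (fork f g . h) x == fork (f . h) (g . h) x
  , cross f g (x, x) == fork (f . fst) (g . snd) (x, x)
  ]
  where
    f = (+ 1) :: Int -> Int
    g = (* 2)
    h = subtract 3
    x = 7 :: Int
```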

Galois connections

We can see the same structures that occur in universal properties like that of \(\mathbin{\vartriangle}\) above also in relationships between orderings. As a very simple example, consider the problem of dividing a natural number \(n\) by two, exactly; the universal property of a solution \(m\) to this problem is the equivalence

\[ m = n \div 2 \iff m \times 2 = n \]

That is, \(m\) is a solution to the problem “compute \(n \div 2\)” precisely when \(m \times 2 = n\); both the existence and the identification of a solution to a problem expressed in terms of division have been translated to one in terms of multiplication—which is arguably a simpler setting. Note that the universal property amounts to an equivalence

\[ (\div 2)\,n = m \iff n = (\times 2)\,m \]

involving the two functions \((\div 2)\) and \((\times 2)\), which are in some sense inverses. This pattern will crop up over and over again.

The division example involved an equivalence between the two identities \(m = n \div 2\) and \(m \times 2 = n\). More generally, another relation than “\(=\)” might be involved. Extending the previous example to integer division, rounding down, we have for \(k > 0\):

\[ m \le n \div k \iff m \times k \le n \]

Again, this relates the two (in some sense inverse) functions \((\div k)\) and \((\times k)\); but this time equality is inadequate for stating the problem, and it is perhaps more convincing to claim that a more complicated problem \(m \le n \div k\) has been translated into a simpler one \(m \times k \le n\). What is more, translating the problem via this universal property pays dividends when it comes to reasoning about the problem, because the simpler problem space is much more amenable to calculation. For example, properties of repeated division \((n \div k) \div l\) (for \(k, l > 0\)) do not trip off the tongue; but we can reason straightforwardly as follows:

\[
\begin{array}{cl}
 & m \le (n \div k) \div l \\
\iff & \{ \text{universal property} \} \\
 & m \times l \le n \div k \\
\iff & \{ \text{universal property} \} \\
 & (m \times l) \times k \le n \\
\iff & \{ \text{associativity} \} \\
 & m \times (l \times k) \le n \\
\iff & \{ \text{universal property} \} \\
 & m \le n \div (l \times k)
\end{array}
\]

Thus, \(m \le (n \div k) \div l\) precisely when \(m \le n \div (l \times k)\), or in other words, \((n \div k) \div l = n \div (l \times k)\).
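Haskell's `div` rounds toward negative infinity, so on non-negative operands it implements exactly this floored division; the derived law and the connection itself can be sanity-checked directly (a check at sample points, not a proof):

```haskell
-- (n `div` k) `div` l == n `div` (l * k), for k, l > 0.
repeatedDivLaw :: Integer -> Integer -> Integer -> Bool
repeatedDivLaw n k l = (n `div` k) `div` l == n `div` (l * k)

-- The Galois connection itself: m <= n `div` k  <=>  m * k <= n.
galois :: Integer -> Integer -> Integer -> Bool
galois m n k = (m <= n `div` k) == (m * k <= n)
```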

In this case, the two problem spaces have both involved the same relation on the same domain, namely the usual ordering \(\le\) on the natural numbers; that is not essential. For example, the universal property of the floor function \(\lfloor \cdot \rfloor\) from reals to integers is given by:

\[ \mathit{inj}\,m \le_{\mathbb{R}} x \iff m \le_{\mathbb{Z}} \lfloor x \rfloor \]

where, to be completely explicit, we have written \(\le_{\mathbb{R}}\) for the usual ordering on reals and \(\le_{\mathbb{Z}}\) for the corresponding ordering on integers, and \(\mathit{inj}\) for the injection from the integers into the reals. This time the two problem spaces involve two different orderings on different domains; we say that the pair of functions \(\mathit{inj}\) and \(\lfloor \cdot \rfloor\) form a Galois connection between the orderings \(\le_{\mathbb{R}}\) and \(\le_{\mathbb{Z}}\). (We also see that the relationship between the two functions \(\mathit{inj}\) and \(\lfloor \cdot \rfloor\) is becoming less like a pure inverse relationship, and more of an embedding–projection pair.)
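Haskell's `floor` satisfies precisely this property, with `fromIntegral` playing the role of the injection; a quick check at sample points:

```haskell
-- inj m <=_R x  <=>  m <=_Z floor x
floorGalois :: Integer -> Double -> Bool
floorGalois m x = (fromIntegral m <= x) == (m <= floor x)
```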

As a simple non-arithmetical example of a Galois connection on a single domain, consider some set \(U\) and a fixed subset \(X \subseteq U\); then, for subsets \(A, B \subseteq U\),

\[ A \setminus X \subseteq B \iff A \subseteq B \cup X \]

That is, \((\setminus X)\) and \((\cup X)\) form a Galois connection between \(\subseteq\) and itself.
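This equivalence can be checked in Haskell with lists regarded as finite sets (a sketch assuming duplicate-free lists; `subset` and `galoisSets` are hypothetical names):

```haskell
import Data.List ((\\), union)

-- Subset test on lists regarded as finite sets.
subset :: Eq a => [a] -> [a] -> Bool
subset as bs = all (`elem` bs) as

-- The Galois connection: (\\ x) is left adjoint to (`union` x)
-- in the subset ordering.
galoisSets :: Eq a => [a] -> [a] -> [a] -> Bool
galoisSets x a b = ((a \\ x) `subset` b) == (a `subset` (b `union` x))
```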

A non-arithmetical example between two different domains is afforded by the field of formal concept analysis, which relates “objects” and their “properties”. Given are sets \(O\) of objects and \(P\) of properties, and a relation \({\vdash} \subseteq O \times P\); we write \(o \vdash p\) to denote that object \(o\) has property \(p\). This induces “concept-forming operators” \(\mathit{intent} : 2^O \to 2^P\) and \(\mathit{extent} : 2^P \to 2^O\) defined by:

\[ \mathit{intent}\,E = \{ p \mid \forall o \in E.\; o \vdash p \} \qquad \mathit{extent}\,I = \{ o \mid \forall p \in I.\; o \vdash p \} \]

That is, \(\mathit{intent}\,E\) is the set of properties enjoyed by all objects in \(E\), and \(\mathit{extent}\,I\) is the set of objects enjoying all the properties in \(I\); a concept is a pair \((E, I)\) with \(\mathit{intent}\,E = I\) and \(\mathit{extent}\,I = E\). The concept-forming operators form a Galois connection between \((2^O, \subseteq)\) and \((2^P, \supseteq)\):

\[ E \subseteq \mathit{extent}\,I \iff \mathit{intent}\,E \supseteq I \]

This construction can be used to translate a problem about the extension of a concept (that is, an enumeration of its instances) into one about the intension (that is, the characteristic properties of its instances). It is related to the observation that “syntax and semantics are adjoint”—under the analogy that “objects” are sets of mathematical structures, “properties” are axioms, and the relation is “satisfaction”, the models of an axiomatic theory \(T\) are included in a set of structures \(S\) if and only if the theory \(T\) logically entails the minimal axiomatization of \(S\).
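The concept-forming operators are easy to sketch in Haskell over finite carriers; everything below (the toy relation `has`, the carriers, and the names `intent`, `extent`, `galoisFCA`) is an assumed example, not part of the original post:

```haskell
type Obj  = Int
type Prop = String

-- An assumed toy relation: which small numbers have which properties.
has :: Obj -> Prop -> Bool
has o "even"     = even o
has o "small"    = o < 5
has o "positive" = o > 0
has _ _          = False

objects :: [Obj]
objects = [1 .. 8]

properties :: [Prop]
properties = ["even", "small", "positive"]

-- intent E: properties enjoyed by every object in E.
intent :: [Obj] -> [Prop]
intent es = [ p | p <- properties, all (`has` p) es ]

-- extent I: objects enjoying every property in I.
extent :: [Prop] -> [Obj]
extent is = [ o | o <- objects, all (o `has`) is ]

subset :: Eq a => [a] -> [a] -> Bool
subset as bs = all (`elem` bs) as

-- The Galois connection: E `subset` extent I  <=>  I `subset` intent E.
galoisFCA :: [Obj] -> [Prop] -> Bool
galoisFCA es is = (es `subset` extent is) == (is `subset` intent es)
```

For instance, the extent of {even, small} among 1..8 is {2, 4}, whose intent is all three properties, so ({2, 4}, {even, small, positive}) is a concept of this toy context.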

Categories: Offsite Blogs