News aggregator
Manuel M T Chakravarty: Inline Objective-C in Haskell for GHC 7.8
The latest GHC release (GHC 7.8) includes significant changes to Template Haskell, which make it impossible to get type information from variables in the current declaration group in a Template Haskell function. Version 0.6 of language-c-inline introduces marshalling hints in the form of type annotations to compensate for that lack of type information.
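The code example from the post is not reproduced in this digest; the following is a rough sketch of what the two kinds of hints look like with language-c-inline (the wrapper name nslog, the message text, and the exact splice are reconstructed from memory and should be checked against the package documentation). It requires the Objective-C runtime, so it is shown as a sketch only:

```haskell
{-# LANGUAGE TemplateHaskell, QuasiQuotes #-}

import Language.C.Quote.ObjC
import Language.C.Inline.ObjC

objc_import ["<Foundation/Foundation.h>"]

-- Sketch: 'msg carries a hint to marshal it as a String, and 'void'
-- hints that the inline Objective-C expression has result type IO ().
nslog :: String -> IO ()
nslog msg = $(objc ['msg :> ''String] $
                void [cexp| NSLog(@"A message from Haskell: %@", msg) |])
```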
In the above code example, hints are used in two places: (1) the quoted local variable msg carries an annotation suggesting that it be marshalled as a String, and (2) the result type of the inline Objective-C code is suggested to be IO () by the void annotation. These hints are required because Template Haskell no longer propagates the type information contained in the function signature.
do-blocks and braces?
I've noticed some people write their do-blocks with braces and semicolons, like so:
do { foo ; bar ; qux }

Actually, code like this still gets committed into GHC by people like SPJ in new modules.
Do people find it {looks better, enhances readability}? Should I start doing so?
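For what it's worth, the two styles parse to exactly the same thing: the layout rule just inserts the braces and semicolons from indentation. A minimal sketch (the names here are made up for illustration):

```haskell
-- Layout style: GHC's layout algorithm inserts {, ; and } for you.
layoutStyle :: [Int]
layoutStyle = do
  x <- [1, 2]
  y <- [10, 20]
  return (x + y)

-- Explicit style: the same do-block with braces and semicolons
-- written out by hand.
braceStyle :: [Int]
braceStyle = do { x <- [1, 2]
                ; y <- [10, 20]
                ; return (x + y)
                }

main :: IO ()
main = print (layoutStyle == braceStyle)  -- prints True
```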
submitted by kvanberendonck
Magnus Therning: Visitor and Walkabout (in Python)
A couple of weeks ago I found a link to a stackoverflow question I’d saved away a long time ago. I’d saved it because I had asked myself the exact same question, and I was curious about the papers mentioned when I found that answer. Over the last few weeks I’ve made my way through those papers, and as so often happens I found a couple of references to other papers that sound interesting. One such paper was The Essence of the Visitor Pattern by Jens Palsberg and C. Barry Jay. There was however one little thing that bugged me about the Walkabout pattern described there, and I thought I’d try to work that out for myself. Of course I’m using Python rather than Java.
The full code can be found at https://gist.github.com/magthe/ad6e23fb560a8a494fd2
Visitor

The Visitor pattern separates the traversal from the operation. It does this by using an accept method in the classes making up the data structure; this method takes a Visitor instance which implements the operation to be carried out. This enables adding new operations without modifying the data structure.
First we need a simple structure to play with: a tree where each node can have zero or more subtrees.
class Tree:
    def __init__(self, value, children=[]):
        self.value = value
        self.children = children

The implementation of accept for this type is rather obvious:
    def accept(self, visitor):
        visitor.visit(self)
        for c in self.children:
            c.accept(visitor)

Next we need an implementation of a Visitor that all visitors can derive from. Since Python’s dispatch doesn’t take the types of arguments into account, we’ll have to implement the necessary behaviour ourselves, i.e. inspect the type and then pick the correct method to call.
class Visitor:
    def visit(self, obj):
        func_name = 'visit_' + obj.__class__.__name__
        visit_func = getattr(self, func_name)
        visit_func(obj)

In order to visit a Tree type it also needs the appropriately named method:
    def visit_Tree(self, obj):
        pass

Now it’s easy to create a visitor. Here’s a very simple one:
class TreePrintVisitor(Visitor):
    def visit_Tree(self, obj):
        print('Tree (%s): %i' % (hex(id(obj)), obj.value))

Finally here’s a function that exercises what we’ve just put together:
def test_visitor():
    leaves = [Tree(42), Tree(17)]
    tree = Tree(1, leaves)
    printer = TreePrintVisitor()
    tree.accept(printer)

Looking at this it’s easy to see the objections Palsberg and Jay present in their paper:
- A data type is only ‘visitable’ if it has an accept method, and
- we must know the types of all objects we want to visit, so changes to the class structure require changes to the visitors.
The authors then introduce Walkabout in order to remove these limitations.
Walkabout

To remove these limitations the authors use reflection to find out whether the visitor has a method to carry out the operation on the type of the current object. If no such method exists they use reflection to find all members of the object and then visit them. The Walkabout class and its visit method then look something like this:
class Walkabout:
    def visit(self, obj):
        func_name = 'visit_%s' % obj.__class__.__name__
        if hasattr(self, func_name):
            visit_func = getattr(self, func_name)
            visit_func(obj)
        elif hasattr(obj, '__dict__'):
            for m in obj.__dict__.keys():
                self.visit(getattr(obj, m))

The accept method can be removed from Tree, and the visitor is changed to include code to continue the traversal:
class TreePrintWalkabout(Walkabout):
    def visit_Tree(self, tree):
        print('Tree (%s): %i' % (hex(id(tree)), tree.value))
        for c in tree.children:
            self.visit(c)

The function for exercising this only changes slightly:
def test_walkabout():
    leaves = [Tree(42), Tree(17)]
    tree = Tree(1, leaves)
    printer = TreePrintWalkabout()
    printer.visit(tree)

This is where Palsberg and Jay stop, but I think there is one little detail in Walkabout that’s worth looking a little closer at.
Walkabout Two

I personally find it a little strange that the authors first note that the Visitor pattern suffers from the limitation that one has to know up front the full set of types to visit, but then don’t recognise that their own solution instead requires one to know the shape of (parts of) the structure to operate on. In other words, the classes deriving from Walkabout are in charge of carrying on the traversal (the last two lines of visit_Tree above).
This little detail is of course easy to work around: we just modify visit to always visit all members, irrespective of whether there is a method to handle the current object. There may be cases where we want to stop the walking about for efficiency reasons; we can address that at the same time, and WalkaboutTwo will then look like this:
class WalkaboutTwo:
    def visit(self, obj):
        func_name = 'visit_%s' % obj.__class__.__name__
        descend = True
        if hasattr(self, func_name):
            visit_func = getattr(self, func_name)
            descend = visit_func(obj)
        if descend and hasattr(obj, '__dict__'):
            for m in obj.__dict__.keys():
                self.visit(getattr(obj, m))
        if descend and hasattr(obj, '__iter__'):
            for o in obj:
                self.visit(o)

One little detail above is that if we use only reflection to traverse the Tree we won’t actually find the subtrees as immediate members, since they are contained in a list. We address this by checking whether the object has an iterator as well, and if it does we visit all its items.
2014-06-12: Fixed minor typo, thanks to Joe for pointing it out.
Asking for a code review and a decision
Dear fellow Haskellers,
I've written a small program, both in Haskell and in C++.
Both are functionally equivalent, by which I mean that they do the same thing.
However, I can't make a decision which one I should continue to expand.
Personal preference doesn't work for me, I {,dis}like both languages equally,
I hate and fear their drawbacks and love their upsides.
Continuing the C++ project would mean that it would be easier for me to reuse
some existing code, as well as to interface with C libraries. I know about FFI
and how to use it, but I want to avoid it as much as I can.
The current Haskell implementation features around 1600 loc, whereas my C++
code is at ~900 loc. There's a slight advantage to C++ because I already wrote
a huge XCB wrapper for it. For the Haskell program I'm using XHB.
I have the constant feeling that I'm not doing it right when writing Haskell
code. Somehow similar to the feeling when I try to get my OO design right in
C++. I've known and written Haskell for around four to five years now, C++ a bit
longer. C++ and OO were part of my university curriculum, whereas I've learned Haskell
almost completely on my own. However, I feel much more confident when writing
C++ code, including TMP.
Now for the code itself. The Haskell code is built around Components, where
each runs in its own thread, with its own monad transformer stack. The interface
is defined in Types.hs, an interesting implementation is in Button.hs. This
way, I hope to keep the program as modular as possible. Components can
communicate via Messages, implemented in Message.hs. Other files of interest
are Component.hs and Main.hs.
The C++ implementation is built around the XCB wrapper I mentioned before.
Starting in main.cpp, basically everything is an object which implements the
event handler interface from xpp. An interesting (and readable) implementation
might be mover.hpp.
The code can be found here:
C++ edition
Haskell edition
If you have questions, feel free to ask, I'm glad to explain it some more.
I hope this wasn't too much text, but thanks for reading!
P.S.: I'm not doing this to push any of my personal projects. Actually I'm
kind of unhappy to make that code publicly available, because it's not polished,
not feature complete, and in quite a sad state. However, I'd love to have
an honest opinion on this, because I can waste days and weeks meditating about
this and not come to a conclusion.
How good at programming should you be before you learn Haskell?
For whatever reason, Haskell seems really interesting to me, like it'd be a good fit. But I learned Python because it's supposed to be the ideal beginner's language, and it's good for that, to be sure. By now, I really feel as if I'm getting my programming chops together. Maybe nothing I write at this stage is good, and there's still a bunch of stuff that I'm hoping to learn. I have an intermediate programming book I'm about to work through, and since college is currently out I have extra time.
So, in general, when is a good time to learn Haskell? Should one wait until they're an expert in Python, capable of reciting each native function and attribute in alphabetical order? Should a person master a C language before they even think about Haskell?
submitted by c3534l
Why doesn't Haskell allow type aliases in the monad typeclass?
I want this:
type Id a = a

instance Monad Id where
    return a = id a
    m >>= k = k m

But the compiler leaves me hanging with
    Type synonym `Id' should have 1 argument, but has been given none
    In the instance declaration for `Monad Id'

Is it some technical limitation of the GHC typechecker? It means Haskell can't have a real identity endofunctor, which seems rather strange. It's certainly counterintuitive and goes against category theory.
My motivation was to have a real identity monad in order to let code in monadic form seamlessly integrate with code expecting a pure value. I.e. let's say we have a complex computation that should be pure but not before we debug it:
fa :: Int -> Int -> IO Int
fa x 0 = return x
fa x n = do
    print x
    fa (x*n) (n - 1)

Once we're done debugging, we just change the return type in one place:
fa :: Int -> Int -> Id Int
fa x n = do
    -- print x -- a compile-time error if uncommented
    fa (x*n) (n - 1)

Now, without rewriting anything to non-monadic style (changing functions to their non-monadic equivalents, e.g. mapM to map) we can pass our monadic value to a function that expects a non-monadic value:
5 + (fa 5 1)That way the transition between "pure" (modulo unsafePerformIO) and impure code will not be a painful nonmonadic <> monadic refactoring that it is today ("monad hell"), but simply a transition from one monad to another.
But alas, we can't have that. Why?
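For reference, the standard workaround is a newtype rather than a type synonym: a newtype introduces a real type constructor that instance resolution can match on. This is essentially Data.Functor.Identity from the transformers package, sketched here to be self-contained; the cost is an explicit runIdentity at the boundary, so the integration is not quite seamless:

```haskell
-- A newtype, unlike a type synonym, can head an instance declaration.
newtype Identity a = Identity { runIdentity :: a }

instance Functor Identity where
  fmap f (Identity a) = Identity (f a)

instance Applicative Identity where
  pure = Identity
  Identity f <*> Identity a = Identity (f a)

instance Monad Identity where
  return = pure
  Identity a >>= k = k a

-- The post's example with IO swapped for Identity; a stray `print x`
-- in the body would now be a type error.
fa :: Int -> Int -> Identity Int
fa x 0 = return x
fa x n = fa (x * n) (n - 1)

main :: IO ()
main = print (5 + runIdentity (fa 5 1))  -- prints 10
```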
submitted by ISupportTheRegime
Dominic Steinitz: A Monoidal Category Example
I have never felt entirely comfortable with Haskell’s arrows, and skimming the literature for their categorical basis didn’t reveal anything as straightforward as monads or applicatives. It did however lead me to start thinking about monoidal categories, and since I always want an example, I thought I would write up Hilbert spaces.
Let $H_1$ and $H_2$ be Hilbert spaces; then as vector spaces we can form the tensor product $H_1 \otimes H_2$. The tensor product can be defined as the free vector space on $H_1$ and $H_2$ as sets (that is, purely formal sums of $u \otimes v$) modulo a relation defined by

$(u_1 + u_2) \otimes v = u_1 \otimes v + u_2 \otimes v$, $u \otimes (v_1 + v_2) = u \otimes v_1 + u \otimes v_2$, $(\lambda u) \otimes v = \lambda (u \otimes v) = u \otimes (\lambda v)$.
Slightly overloading notation, we can define an inner product on the tensored space by $\langle u_1 \otimes v_1, u_2 \otimes v_2 \rangle = \langle u_1, u_2 \rangle \langle v_1, v_2 \rangle$, extended by linearity.
Of course the resulting inner product space might not be complete, so we define the tensor product of Hilbert spaces to be the completion of this inner product space.
For Hilbert spaces to form a monoidal category, we take the arrows (in the categorical sense) to be continuous linear maps and the bifunctor to be the tensor product. We also need an identity object, which we take to be $\mathbb{C}$ considered as a Hilbert space. We should check the coherence conditions, but the associativity of the tensor product and the fact that our Hilbert spaces are over $\mathbb{C}$ make this straightforward.
Now for some slightly interesting properties of this category.
- The tensor product is not the product in the categorical sense. If $\{e_i\}$ and $\{f_j\}$ are (orthonormal) bases for $H_1$ and $H_2$, then $\{e_i \otimes f_j\}$ is an (orthonormal) basis for $H_1 \otimes H_2$. Thus a general linear combination of basis vectors in the tensor product cannot be expressed as a tensor $u \otimes v$ of vectors in the component spaces.
- There is no diagonal arrow $\Delta : H \to H \otimes H$ with $\Delta u = u \otimes u$. Suppose there were such a diagonal; then $\Delta(\lambda u) = \lambda u \otimes \lambda u = \lambda^2 (u \otimes u)$, whereas linearity of $\Delta$ requires $\Delta(\lambda u) = \lambda (u \otimes u)$, and these cannot agree for all $\lambda$ unless $u \otimes u = 0$.
Presumably the latter is equivalent to the “no cloning” theorem of quantum mechanics.
Dan Piponi (sigfpe): Cofree meets Free
> {-# LANGUAGE RankNTypes, MultiParamTypeClasses, TypeOperators #-}
After I spoke at BayHac 2014 about free monads I was asked about cofree comonads, so this is intended as a sequel to that talk. Not only am I going to try to explain what cofree comonads are, I'm also going to point out a very close relationship between cofree comonads and free monads.
At the beginning of the talk the Google Hangout software seems to have switched to the laptop camera so you can't see the slides in the video. However the slides are here.
Cothings as machines
I often think of coalgebraic things as machines. They have some internal state and you can press buttons to change that internal state. For example here is a type class for a machine with two buttons that's related to a magma:
> class TwoButton a where
> press :: a -> (a, a)
> pressLeft, pressRight :: TwoButton a => a -> a
> pressLeft = fst . press
> pressRight = snd . press
Comonads
Just as monads are a kind of generalised algebraic structure (for example see my talk), comonads are a generalised kind of machine. The idea is that for any state of the machine there is a bunch of buttons we could press. But we don't have two buttons, or any fixed number of buttons. We instead have a functorful of buttons (if you think of functors by analogy with containers). We also don't get to directly see the internal state of the machine but instead we get to make observations.
Here's the type class:
> class Comonad w where
> extract :: w a -> a
> duplicate :: w a -> w (w a)
For example, various kinds of zipper give rise to comonads. Zippers allow you to "focus" on a part of a data structure. The extract operation allows you to observe the point that currently has focus. There is one button for every position in the structure where the focus could be. Pressing the corresponding button moves the focus to that point. Similarly the Store comonad has one button for each value you can store in the field it represents. Press the button and the value gets stored in the field.
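To make the Store example concrete, here is a minimal standalone sketch (the names are mine; a full version lives in the comonad package). The field value plays the role of the button label: seeking to s' is pressing the button labelled s':

```haskell
-- Store s a: a field currently holding an s, plus a way to observe an a
-- at any field value.
data Store s a = Store (s -> a) s

-- Observe at the current field value.
extractStore :: Store s a -> a
extractStore (Store g s) = g s

-- At every field value, the machine you would get by seeking there.
duplicateStore :: Store s a -> Store s (Store s a)
duplicateStore (Store g s) = Store (Store g) s

-- "Press the button labelled s'": store a new value in the field.
seek :: s -> Store s a -> Store s a
seek s' (Store g _) = Store g s'
```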
Cofreeness as a way to memoise
Cofree coalgebras can be thought of as memoised forms of elements of coalgebras. For example, the TwoButton machine above has a function, press, as part of its definition. Memoising an element of such a thing means tabulating everything that could possibly happen if you pressed the buttons so we no longer need the press function. One approach is to try something like this:
data CofreeTwoButton = Memo CofreeTwoButton CofreeTwoButton
memoiseTwoButton :: TwoButton m => m -> CofreeTwoButton
memoiseTwoButton m = Memo (memoiseTwoButton (pressLeft m)) (memoiseTwoButton (pressRight m))

But this records nothing observable about the states we pass through, so instead we also pack an observation, made by a caller-supplied function, into each node:
> data CofreeTwoButton a = Memo a (CofreeTwoButton a) (CofreeTwoButton a)
> memoiseTwoButton :: TwoButton m => (m -> a) -> m -> CofreeTwoButton a
> memoiseTwoButton f m = Memo (f m) (memoiseTwoButton f (pressLeft m)) (memoiseTwoButton f (pressRight m))
Note how this is like foldMap:
foldMap :: Monoid m => (a -> m) -> t a -> m
With the TwoButton we have a dualised version of the above. Every element of every TwoButton can be packed into the CofreeTwoButton, and the packing preserves the structure, so any equation that holds in some TwoButton has a corresponding solution in CofreeTwoButton. That gives an idea of what a CofreeTwoButton is, by analogy with the free monoid.
Cofree comonads
A cofree comonad is basically a memoised comonad. So the data structure is:
> data Cofree f a = Cofree a (f (Cofree f a))
> instance Functor f => Functor (Cofree f) where
> fmap f (Cofree a fs) = Cofree (f a) (fmap (fmap f) fs)
The duplicate function takes a memoised state and replaces the observable stored at each position with the memoised state that gives rise to the observable.
> instance Functor f => Comonad (Cofree f) where
> extract (Cofree a _) = a
> duplicate c@(Cofree _ fs) = Cofree c (fmap duplicate fs)
> memoiseComonad :: (Comonad w, Functor f) =>
> (forall x. w x -> f x) -> (forall x. w x -> Cofree f x)
> memoiseComonad f w = Cofree (extract w) (fmap (memoiseComonad f) (f (duplicate w)))
Cofree comonad meets free monad
But that's not all. There is a close relationship between cofree comonads and free monads. So to get going, here's a free monad type:
> data Free f a = Id a | Free (f (Free f a))
> join' :: Functor f => Free f (Free f a) -> Free f a
> join' (Id x) = x
> join' (Free fa) = Free (fmap join' fa)
> instance Functor f => Functor (Free f) where
> fmap f (Id x) = Id (f x)
> fmap f (Free fa) = Free (fmap (fmap f) fa)
> instance Functor f => Applicative (Free f) where
>   pure = Id
>   f' <*> x = f' >>= \g -> fmap g x
> instance Functor f => Monad (Free f) where
>   return = pure
>   m >>= f = join' (fmap f m)
> class (Functor f, Functor g) => Pairing f g where
> pair :: (a -> b -> r) -> f a -> g b -> r
> data Identity a = Identity a
> instance Functor Identity where
> fmap f (Identity x) = Identity (f x)
> instance Pairing Identity Identity where
> pair f (Identity a) (Identity b) = f a b
> data (f :+: g) x = LeftF (f x) | RightF (g x)
> instance (Functor f, Functor g) => Functor (f :+: g) where
> fmap f (LeftF x) = LeftF (fmap f x)
> fmap f (RightF x) = RightF (fmap f x)
> data (f :*: g) x = f x :*: g x
> instance (Functor f, Functor g) => Functor (f :*: g) where
> fmap f (x :*: y) = fmap f x :*: fmap f y
> instance (Pairing f f', Pairing g g') => Pairing (f :+: g) (f' :*: g') where
> pair p (LeftF x) (a :*: _) = pair p x a
> pair p (RightF x) (_ :*: b) = pair p x b
> instance (Pairing f f', Pairing g g') => Pairing (f :*: g) (f' :+: g') where
> pair p (a :*: _) (LeftF x) = pair p a x
> pair p (_ :*: b) (RightF x) = pair p b x
> instance Pairing ((->) a) ((,) a) where
> pair p f = uncurry (p . f)
> instance Pairing f g => Pairing (Cofree f) (Free g) where
> pair p (Cofree a _) (Id x) = p a x
> pair p (Cofree _ fs) (Free gs) = pair (pair p) fs gs
Here's an almost trivial example just so you can see everything working together. A reasonable definition of a comagma structure on the type a is a -> UpDown a with UpDown defined as:
> data UpDown a = Up a | Down a
> instance Functor UpDown where
> fmap f (Up a) = Up (f a)
> fmap f (Down a) = Down (f a)
> type CofreeComagma a = Cofree UpDown a
> collatz :: Integer -> UpDown Integer
> collatz n = if even n then Down (n `div` 2) else Up (3*n+1)
> memoisedCollatz :: Integer > CofreeComagma Integer
> memoisedCollatz n = Cofree n (fmap memoisedCollatz (collatz n))
Now let's make the dual functor in readiness for building the dual monad:
> data Two a = Two a a
> instance Functor Two where
> fmap f (Two a b) = Two (f a) (f b)
> instance Pairing UpDown Two where
> pair f (Up a) (Two b _) = f a b
> pair f (Down a) (Two _ c) = f a c
> execute :: Cofree UpDown x -> Free Two (x -> r) -> r
> execute w m = pair (flip ($)) w m
> data Direction = WentUp | WentDown deriving Show
> choose :: Free Two Direction
> choose = Free (Two (return WentUp) (return WentDown))
> ex1 :: Free Two (Integer > String)
> ex1 = do
>   x <- choose
>   y <- choose
>   case (x, y) of
>     (WentDown, WentDown) -> return (\z -> "Decreased twice " ++ show z)
>     _ -> return show
And here's what happens when they meet:
> go1 :: String
> go1 = execute (memoisedCollatz 12) ex1  -- "Decreased twice 3": 12 halves to 6, then to 3
References
On getting monads from comonads more generally see Monads from Comonads. For more on memoising and how it's really all about the Yoneda lemma see Memoizing Polymorphic Functions. I'm waiting for Tom Leinster to publish some related work. The pairing above gives a way for elements of free monads to pick out elements of cofree comonads and is a special case of what I'm talking about here. But I think Tom has some unpublished work that goes further.
If you think of a comonad as a compressed object that is decompressed by a monadic decision tree, then you'd expect some form of information theoretical description to apply. That makes me think of Convex spaces and an operadic approach to entropy.
Implementing Conway's Life with a set
To avoid pathological edge effects, I find it convenient to represent Life as a hash-backed set of coordinate pairs. My cursory searches found little on the internet about implementing Life this way, so I'm curious what Haskellers have to say. In particular, I'm eager to hear suggestions for improving this algorithm:
next cells = toList $ Set.filter live cs `union` births
  where
    live = (`elem` [2, 3]) . neighbors
    born = (3 ==) . neighbors
    neighbors = size . intersection cs . fromList . moore
    births = Set.filter born $ mooreSet `difference` cs
    mooreSet = fromList . concatMap moore $ toList cs
    moore (x, y) = tail $ liftM2 (,) [x, x+1, x-1] [y, y+1, y-1]
    cs = fromList cells

Can anyone find obvious inefficiencies or easy ways to parallelize this? Is there any better data structure for representing this automaton as a set?
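One common alternative worth benchmarking against (a sketch of my own, not the poster's code): instead of recomputing neighbour counts per candidate cell, tally the neighbours of all live cells in one Map pass, then apply the survival/birth rule to the tallies. Each live cell is touched once, and cells with zero live neighbours are never considered:

```haskell
import qualified Data.Map.Strict as Map
import qualified Data.Set as Set

type Cell = (Int, Int)

-- Tally how many live neighbours every cell has, then keep births
-- (exactly 3 neighbours) and survivors (2 neighbours plus already alive,
-- or 3 neighbours).
step :: Set.Set Cell -> Set.Set Cell
step cells = Map.keysSet (Map.filterWithKey rule counts)
  where
    counts = Map.fromListWith (+)
               [ (n, 1 :: Int) | c <- Set.toList cells, n <- moore c ]
    rule c n = n == 3 || (n == 2 && c `Set.member` cells)
    moore (x, y) = [ (x + dx, y + dy)
                   | dx <- [-1, 0, 1], dy <- [-1, 0, 1]
                   , (dx, dy) /= (0, 0) ]
```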
submitted by sacheie