In part 1 of this tutorial we talked about types and kinds. Knowledge of kinds will help you orient yourself in today's discussion of monads.
What is a monad? When you type "monad" into Hayoo the first result takes you to the documentation for the type class Monad. If you don't already have a basic familiarity with type classes, you can think of a type class as roughly equivalent to a Java interface. A type class defines a set of functions involving a certain data type. When a data type defines all the functions required by the type class, we say that it is an instance of that type class. When a type Foo is an instance of the Monad type class, you'll commonly hear people say "Foo is a monad". Here is a version of the Monad type class.

    class Monad m where
        return :: a -> m a
        (=<<)  :: (a -> m b) -> m a -> m b
(Note: If you're the untrusting type and looked up the real definition to verify that mine is accurate, you'll find that my version is slightly different. Don't worry about that right now. I did it intentionally, and there is a method to my madness.)
This basically says that in order for a data type to be an instance of the Monad type class, it has to define the two functions return and (=<<) (pronounced "bind") that have the above type signatures. What do these type signatures tell us? Let's look at return first. We see that it returns a value of type m a. This tells us that m has the kind signature m :: * -> *. So whenever we hear someone say "Foo is a monad" we immediately know that Foo :: * -> *.
In part 1, you probably got tired of me emphasizing that a type is a context. When we look at return and bind, this starts to make more sense. The type m a is just the type a in the context m. The type signature return :: a -> m a tells us that the return function takes a plain value a and puts that value into the context m. So when we say something is a monad, we immediately know that we have a function called return that lets us put arbitrary other values into that context.
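To make this concrete, here is a sketch of how a type becomes an instance of the version of the class given above (using the flipped bind; the real Prelude class differs). Maybe is used as an assumed example context:

```haskell
import Prelude hiding (Monad, return, (=<<))

-- The tutorial's version of the Monad class (bind flipped).
class Monad m where
  return :: a -> m a
  (=<<)  :: (a -> m b) -> m a -> m b

-- Maybe as an example context: return puts a plain value into
-- the context, bind unwraps (when possible) and applies the function.
instance Monad Maybe where
  return        = Just
  _ =<< Nothing = Nothing
  f =<< Just x  = f x
```

Note how `return = Just` is literally "put a plain value into the Maybe context", exactly as the type signature promised.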
Now, what about bind? It looks much more complicated and scary, but it's really pretty simple. To see this, let's get rid of all the m's in the type signature. Here's the before and after.

    before :: (a -> m b) -> m a -> m b
    after  :: (a -> b)   -> a   -> b
The type signature for after might look familiar. It's exactly the same as the type signature for the ($) function! If you're not familiar with it, Haskell's $ function is just syntax sugar for function application. (f $ a) is exactly the same as (f a). It applies the function f to its argument a. It is useful because it has very low precedence and is right associative, so it is a nice syntax sugar that allows us to eliminate parentheses in certain situations. When you realize that (=<<) is roughly analogous to the concept of function application (modulo the addition of a context m), it suddenly makes a lot more sense.
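Here is the analogy in action, using the standard Prelude (=<<) and an assumed example function `half` that produces its result in the Maybe context:

```haskell
-- An example function returning its result in the Maybe context.
half :: Int -> Maybe Int
half n
  | even n    = Just (n `div` 2)
  | otherwise = Nothing

-- Plain application with ($) vs. application in a context with (=<<).
-- Because (=<<) is right associative (like ($)), applications chain:
--   half =<< Just 8            -- succeeds
--   half =<< half =<< Just 8   -- chains two contextual applications
--   half =<< Just 5            -- the computation fails in the context
```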
So now what happens when we look at bind's type signature with the m's back in? (f =<< k) applies the function f to the value k. However, the crucial point is that k is a value wrapped in the context m, while f's parameter is an unwrapped value a. From this we see that the bind function's main purpose is to pull a value out of the context m, apply the function f, which does some computation, and return the result wrapped in the context m again.
The monad type class does not provide any mechanism for unconditionally pulling a value out of the context. The only way to get access to the unwrapped value is with the bind function, but bind does this in a controlled way and requires the function to wrap things up again before the result is returned. This behavior, enabled by Haskell's strong static type system, provides complete control over side effects and mutability.
Some monads do provide a way to get a value out of the context, but the choice of whether to do so is completely up to the author of said monad. It is not something inherent in the concept of a monad.
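Maybe is an example of this: its author provides escape hatches like fromMaybe, but that is a property of Maybe itself, not of monads in general. A small sketch:

```haskell
import Data.Maybe (fromMaybe)

-- An assumed example value in the Maybe context.
cached :: Maybe Int
cached = Just 42

-- fromMaybe pulls the value out, supplying a default for the
-- Nothing case. Nothing in the Monad class requires this to exist.
result :: Int
result = fromMaybe 0 cached
```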
Monads wouldn't be very fun to use if all you had was return, bind, and derived functions. To make them more usable, Haskell has a special syntax called "do notation". The basic idea behind do notation is that there's a bind between every line, and you can do a <- func to unwrap the return value of func and make it available to later lines with the identifier 'a'.
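A small sketch of what this looks like, with Maybe as an assumed example context, and roughly what the do block corresponds to when written with the flipped bind:

```haskell
-- do notation: a bind between every line, with a <- unwrapping
-- the result of the right-hand side for use on later lines.
greet :: Maybe String
greet = do
  a <- Just "Hello"
  b <- Just "World"
  return (a ++ ", " ++ b)

-- Roughly the same computation written with (=<<) directly.
greet' :: Maybe String
greet' =
  (\a ->
    (\b ->
      return (a ++ ", " ++ b)) =<< Just "World") =<< Just "Hello"
```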
In summary, a monad is a certain kind of context that provides two things: a way to put things into the context, and function application within the context. There is no general way to get things out; instead, you use bind to bring your computation into the context. Once you have these two operations, lots of more complicated operations can be built on top of these basic primitives. Much of this is provided in Control.Monad. You probably won't learn all this stuff in a day. Just dive in and use these concepts in real code. Eventually you'll find that the patterns are sinking in and becoming clearer.
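Two examples of such derived operations from Control.Monad (the contexts chosen here are just illustrations):

```haskell
import Control.Monad (join, replicateM)

-- join collapses one layer of context.
nested :: Maybe Int
nested = join (Just (Just 3))

-- replicateM runs a contextual computation n times and collects the
-- results; in the list monad this enumerates all length-n combinations.
pairs :: [[Int]]
pairs = replicateM 2 [0, 1]
```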
According to the German news site “heisse news”, the Debian Technical Committee has made a decision on the question of the default init system in Debian: there will be none!
Instead, according to Ian “Vorlon” Bart, Debian will start distributing a suspend-to-disk image to its users, with all daemons already started, completely eliminating the need for an init system. If you can read German, read all about it at “Debian Technical Committee entscheidet sich gegen ein Init-System”.
I have been reading this paper recently, and I found that the implementation of alpha-beta pruning is quite different in character from the implementation of prune n.
The minimax tree is defined as follows:

    moves :: Position -> [Position]

    reptree f a = Node a (map (reptree f) (f a))
    gametree p = reptree moves p

    maximize (Node n Nil) = n
    maximize (Node n sub) = max (map minimize sub)
    minimize (Node n Nil) = n
    minimize (Node n sub) = min (map maximize sub)

    static :: Position -> Number

    evaluate = maximize . maptree static . gametree
The implementation of prune n (only go n steps down the search tree) is elegant:

    prune 0 (Node a x)     = Node a Nil
    prune (n+1) (Node a x) = Node a (map (prune n) x)

    evaluate5 = maximize . maptree static . prune 5 . gametree
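For reference, here is a sketch transliterating the paper's Miranda-style code into modern Haskell ([] standing in for Nil, maximum/minimum for the paper's list max/min):

```haskell
-- A rose tree, as in the paper's game-tree representation.
data Tree a = Node a [Tree a]

-- Cut the tree off n levels down.
prune :: Int -> Tree a -> Tree a
prune 0 (Node a _)  = Node a []
prune n (Node a ts) = Node a (map (prune (n - 1)) ts)

-- Mutually recursive minimax over the (pruned) tree.
maximize, minimize :: Ord a => Tree a -> a
maximize (Node n []) = n
maximize (Node _ ts) = maximum (map minimize ts)
minimize (Node n []) = n
minimize (Node _ ts) = minimum (map maximize ts)
```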
But the implementation of alpha-beta pruning has to mess with the implementation of maximize (and minimize as well):

    maximize = max . maximize'
    minimize = min . minimize'

    maximize' (Node n Nil) = Cons n Nil
    maximize' (Node n l)   = map (min . minimize') l
                           = map min (map minimize' l)
                           = mapmin (map minimize' l)

    mapmin (Cons nums rest) = Cons (min nums) (omit (min nums) rest)

    omit pot Nil = Nil
    omit pot (Cons nums rest)
      | minleq nums pot = omit pot rest
      | otherwise       = Cons (min nums) (omit (min nums) rest)

    minleq Nil pot = False
    minleq (Cons n rest) pot
      | n <= pot  = True
      | otherwise = minleq rest pot

    evaluateAB = max . maximize' . maptree static . prune 8 . gametree
I think the alpha-beta pruning is not as elegant as prune n. Isn't the idea of functional programming that you shouldn't have to mess with your old functions? Can alpha-beta pruning be defined in a compositional way, just like prune n?
Also, in omit, min nums seems to be calculated inefficiently, because each time it has to go through the whole list. In C or Python it would be easy: just keep a variable holding the minimum of nums and update it while iterating.

submitted by Sherlockhlt
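One answer to the efficiency worry: the minleq test and the min computation can be fused into a single traversal per sublist. Here is a sketch (with Haskell lists instead of the paper's Cons/Nil, and names assumed): scan either bails out early as soon as an element is <= pot, so the sublist is pruned without being fully evaluated (like minleq), or returns the sublist's minimum computed in that same pass.

```haskell
-- Single-pass alpha-beta style mapmin: each kept sublist's minimum
-- becomes the new potential maximum (pot).
mapmin :: Ord a => [[a]] -> [a]
mapmin []            = []
mapmin (nums : rest) = m : go m rest
  where
    m = minimum nums
    go _   []         = []
    go pot (ns : nss) = case scan pot ns of
      Nothing -> go pot nss          -- pruned
      Just m' -> m' : go m' nss      -- kept; its minimum is the new pot
    -- One traversal: prune early if any element <= pot,
    -- otherwise return the minimum.
    scan pot = walk Nothing
      where
        walk acc []  = acc
        walk acc (n : ns)
          | n <= pot  = Nothing      -- stop early, like minleq
          | otherwise = walk (Just (maybe n (min n) acc)) ns
```

The pruned results and the unpruned ones agree on the maximum, which is all the caller looks at.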
I just finished writing the second version of what I guess could be described as a 'bounded LRU map': a map data structure with an upper limit on the number of elements, removing the least recently used one on overflow. The obvious use case is a cache, say mapping file names to file contents or something like that.
I'd like to improve the performance, lookups being the most important operation. Since a lookup involves updating the access time / order, it is a much heavier operation than with a plain map.
My first naive version uses two Data.Maps, one indexed by the key, the other one by the access time stamp (just an increasing Int). Both maps have a value containing the actual value, and the key to the entry in the other map. Doing a lookup with an LRU update uses a total of five O(log n) operations from Data.Map.
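Here is a sketch of that two-map layout (names assumed, not the actual LRUBoundedMap code): one Data.Map indexed by key, one by an increasing Int time stamp, each entry carrying the key into the other map. A lookup has to touch both maps, which is where the handful of O(log n) operations comes from.

```haskell
import qualified Data.Map as M

data LRU k v = LRU
  { byKey  :: M.Map k (Int, v)  -- key -> (time stamp, value)
  , byTime :: M.Map Int k       -- time stamp -> key
  , now    :: Int               -- next time stamp
  , cap    :: Int               -- maximum number of entries
  }

empty :: Int -> LRU k v
empty = LRU M.empty M.empty 0

-- Lookup with LRU update: find the entry, delete its old time stamp,
-- insert the new one, and rewrite the key's entry.
lookupLRU :: Ord k => k -> LRU k v -> (Maybe v, LRU k v)
lookupLRU k lru@(LRU bk bt t c) =
  case M.lookup k bk of
    Nothing       -> (Nothing, lru)
    Just (old, v) ->
      ( Just v
      , LRU (M.insert k (t, v) bk)
            (M.insert t k (M.delete old bt))
            (t + 1) c )

-- Insert, evicting the least recently used entry (the one with the
-- smallest time stamp) on overflow.
insertLRU :: Ord k => k -> v -> LRU k v -> LRU k v
insertLRU k v (LRU bk bt t c) =
  LRU (M.insert k (t, v) bk') (M.insert t k bt') (t + 1) c
  where
    bt1 = case M.lookup k bk of
            Just (old, _) -> M.delete old bt
            Nothing       -> bt
    (bk', bt')
      | M.notMember k bk && M.size bk >= c =
          let (_, oldest) = M.findMin bt1
          in  (M.delete oldest bk, M.deleteMin bt1)
      | otherwise = (bk, bt1)
```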
The second version I wrote is heavily inspired by Data.Cache.LRU / lrucache, which uses a Data.Map containing a doubly linked list. Since there is no ordering requirement for this structure, I build it on top of Data.HashMap.Strict instead in my version. This still does five operations for a lookup, but is ~2x faster in practice thanks to the HashMap.
On a 2.2 GHz Core 2 this ends up giving ~550K lookups/sec.
Here's all the code:
(Modules: LRUBoundedMap, DoubleMap used by it, the second version LRUBoundedHashMap and Main, a simple benchmark I quickly whipped up)
Is there any better data structure / algorithm or underlying container I could use? Right now, I can only think of how I'd implement this in an imperative / mutable language like C/C++: have a hash map implemented with a bucket array plus an array storing the overflow linked lists, and then a pointer-based doubly linked list threaded through the map values. That would do a single O(1) lookup, plus the LRU update by updating five linked-list pointers. I wonder if there's a functional way of doing it this quickly!

submitted by SirRockALot1
What would be the benefits of being able to specify that a type variable couldn't be passed to a function?
Currently we have the ability to restrict what types can be passed to a function using type classes, but what about the opposite? What if there was a sort of "negative type class" that specified what types couldn't be passed in?
Right now I can't think of any (valid) examples of how this could be useful, but I'm also no type theory expert. I have a fairly strong math background, and from that I know that sometimes useful results can come from knowing what an object can't be, rather than what it can.
Maybe if it were possible to specify that a function operated on applicatives but not on ones with a Monad instance, then extra optimization could be performed? I imagine that it might not be very useful for writing code itself, but could be for optimization, parallelism, and other subtle tricks.
Just some of my musings. Any ideas? Is it worth considering, or just silly?

submitted by bheklilr