# News aggregator

### Why is my minimax (negamax, with alpha-beta pruning) algorithm so slow?

I'm trying to implement an AI for the board game Pentago in Haskell. I have previously done this in Python (but lost the code), and if I remember correctly my Python implementation was faster. In Python I was searching at least 4 or 5 plies deep in a reasonable amount of time, but this Haskell implementation takes a very long time to reach 3 plies. Maybe my memory is wrong? Whatever the case, I'm hoping to speed up the following implementation of negamax with alpha-beta pruning:

```haskell
negamaxScore :: Int -> Space -> Board -> Int
negamaxScore depth color = abPrune depth color (-(8^8)) (8^8)
  where
    abPrune depth color a b board
      | depth == 0 =
          let s = scoreBoard board
          in if color == White then s else negate s
      | otherwise =
          (\(a,_,_) -> a) $
            foldl' f (-(8^8), a, b) (nub $ possibleMoves color board)
      where
        f :: (Int, Int, Int) -> Board -> (Int, Int, Int)
        f x@(bestVal, a, b) board =
          if a >= b
            then x
            else let val      = abPrune (depth-1) (otherColor color) (negate b) (negate a) board
                     bestVal' = max bestVal val
                     a'       = max a val
                 in (bestVal', a', b)
```

I would appreciate any suggestions on how I can improve the style or performance of this code.
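For reference, here is a minimal, self-contained sketch of textbook negamax with alpha-beta pruning over an explicit game tree. The `Tree` type is hypothetical (it stands in for `Board`), and leaf scores are assumed to be from the perspective of the player to move. Note that the recursive result is negated before use; the quoted code above does not appear to do this, which is worth double-checking.

```haskell
-- Hypothetical game tree standing in for the real Board type.
data Tree = Leaf Int | Node [Tree]

negamax :: Int -> Int -> Tree -> Int
negamax _ _ (Leaf s)  = s
negamax a b (Node ts) = go a (minBound `div` 2) ts
  where
    go _  best []         = best
    go a' best (t : rest)
      | a'' >= b  = best'             -- beta cutoff: prune the remaining siblings
      | otherwise = go a'' best' rest
      where
        -- Negamax: the child's score is from the opponent's
        -- perspective, so negate it (and flip the window).
        val   = negate (negamax (negate b) (negate a') t)
        best' = max best val
        a''   = max a' best'

main :: IO ()
main = print (negamax (minBound `div` 2) (maxBound `div` 2)
                      (Node [Node [Leaf 3, Leaf 5], Node [Leaf (-2), Leaf 7]]))
```

The short-circuiting `go` loop also avoids the question's pattern of folding over the whole move list and testing `a >= b` inside the folded function, which still forces the spine of the list.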

Relevant Links:

http://en.wikipedia.org/wiki/Negamax#NegaMax_with_Alpha_Beta_Pruning

http://en.wikipedia.org/wiki/Pentago

Full code: https://github.com/DevJac/haskell-pentago/blob/master/src/Main.hs

submitted by Buttons840

### Haskell Books for the Holidays?

What books can you recommend that I ask for for the holidays? (I'm in high school, so I still do that.)

I'm particularly interested in programming language design/paradigms (esp. FRP, dependent typing), logic, category theory, set theory, etc. I know very little about the last two, but what I have read about them in papers, blog posts, and on Wikipedia has fascinated me--especially when they are connected to functional programming.

Any recommendations?

submitted by MuricanWillzyx

### Looking for an abstraction to compose contravariant functors

I have this thing, which takes two values of the same type and produces a floating-point number indicating how similar the values are:

```haskell
newtype Meter a = Meter (a -> a -> Double)
```

I've already figured out that it makes a perfect contravariant functor:

```haskell
instance Contravariant Meter where
  contramap f (Meter m) = Meter $ \a b -> m (f a) (f b)
```

I now have stumbled upon the following:

```haskell
song :: Meter Song
song = Meter $ \a b ->
  runMeter (contramap songTitle text) a b *
  runMeter (contramap songDuration int) a b
```

where text and int are other meters and songTitle and songDuration are field accessors on Song.

Obviously this is filled with boilerplate, and I have a gut feeling that there should be some kind of abstraction which relates to Contravariant Functor the same way Applicative Functor relates to Functor.

I mean, I expect that it should be possible to compose this thing in a style similar to applicative, like the following sketch (which obviously wouldn't work in this case):

```haskell
song :: Meter Song
song = (*) <$> contramap songTitle text <*> contramap songDuration int
```

Is there an abstraction that approaches this problem?
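For what it's worth, the shape being described matches the `Divisible` class from the `contravariant` package, whose method has the signature `divide :: (a -> (b, c)) -> f b -> f c -> f a`. A self-contained sketch follows, with the class re-declared locally under a primed name; combining sub-scores by multiplication is an assumption of this sketch, not part of the class (the real class requires a way to combine results, usually via a monoid-like structure):

```haskell
newtype Meter a = Meter { runMeter :: a -> a -> Double }

-- Local stand-in for Data.Functor.Contravariant.Divisible.Divisible.
class Divisible' f where
  divide' :: (a -> (b, c)) -> f b -> f c -> f a

instance Divisible' Meter where
  divide' split (Meter mb) (Meter mc) = Meter $ \x y ->
    let (bx, cx) = split x
        (by, cy) = split y
    in mb bx by * mc cx cy   -- multiplication chosen arbitrarily here

data Song = Song { songTitle :: String, songDuration :: Int }

text :: Meter String
text = Meter $ \a b -> if a == b then 1 else 0

int :: Meter Int
int = Meter $ \a b -> if a == b then 1 else 0

-- No boilerplate lambdas: just say how to split a Song into parts.
song :: Meter Song
song = divide' (\s -> (songTitle s, songDuration s)) text int

main :: IO ()
main = print (runMeter song (Song "a" 3) (Song "a" 3))
</imports>
```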

submitted by nikita-volkov

### Oliver Charles: 24 Days of GHC Extensions: Functional Dependencies

Over the last few days we’ve seen a few different ways to model the class of mutable variables using type classes. First of all we saw that we could use type families to associate the type of monad with the type of mutable variables, and yesterday we saw that we could almost take the same approach using multi-parameter type classes. Unfortunately, when we moved to multi-parameter type classes, the type inference engine became a little less useful, as there are multiple possible choices of monad for any given mutable variable.

What we really wanted to do with the multiple types was to model a *relation* between the types - knowing the type of the mutable variable should be enough to inform us as to the type of monad. By using the FunctionalDependencies extension, we have the ability to augment a type class with information about *functional dependencies* - a concept you might already be familiar with from relational database theory. Loosely speaking, a functional dependency lets us indicate that a given set of one or more types determine the type of a single other type. The notation for this is to indicate a list of types and then use an arrow (->) to note the dependency.

Revisiting our mutable variables type class, we can now write:

```haskell
class Store store m | store -> m where
  new :: a -> m (store a)
  get :: store a -> m a
  put :: store a -> a -> m ()
```

This is the same type class as yesterday, but we have now indicated that store determines m. We are able to re-use the existing instances unchanged:

```haskell
instance Store IORef IO where
  new = newIORef
  get = readIORef
  put ioref a = modifyIORef ioref (const a)
```

However, *now* when we ask for the type of yesterday's ex function and choose IORef as our store, GHC will be able to infer that the type of m must be IO - as that was determined by the instance above:

GHCi now reports `ex :: [Present] -> IO [Present]`. Perfect!

While this may seem inconsequential, this use of functional dependencies to direct the type inference engine is significant if we want to build practical libraries. While it’s great to be able to do a lot with types, many agree that giving up type inference can be too much of a cost.

That said, the fun doesn’t stop there - as functional dependencies and multi-parameter type classes really do start to capture relations between types we can start using type classes as a form of logic programming. A prime example of this is the paper Fun with Functional Dependencies. Another example is in the work around the HList library - discussed in the paper Strongly Typed Heterogeneous Collections.
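To give a flavour of that logic-programming style, here is a sketch in the spirit of the Fun with Functional Dependencies paper (not code from this post): the dependency `a b -> c` turns the class into a type-level function, here Peano addition, which GHC's instance resolution evaluates Prolog-style.

```haskell
{-# LANGUAGE MultiParamTypeClasses  #-}
{-# LANGUAGE FunctionalDependencies #-}
{-# LANGUAGE FlexibleInstances      #-}
{-# LANGUAGE UndecidableInstances   #-}

data Z   = Z        -- type-level zero
data S n = S n      -- type-level successor

-- 'a b -> c' tells GHC the first two types determine the third.
class Add a b c | a b -> c where
  add :: a -> b -> c

instance Add Z b b where
  add _ b = b

instance Add a b c => Add (S a) b (S c) where
  add (S a) b = S (add a b)

-- A small observer so the result can be checked at the value level.
class ToInt n where
  toInt :: n -> Int

instance ToInt Z where
  toInt _ = 0

instance ToInt n => ToInt (S n) where
  toInt (S n) = 1 + toInt n

main :: IO ()
main = print (toInt (add (S Z) (S (S Z))))  -- 1 + 2
```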

To recap, here is yesterday’s code:

```haskell
type Present = String

storePresents :: (Store store m, Monad m) => [Present] -> m (store [Present])
storePresents xs = do
  store <- new []
  forM_ xs $ \x -> do
    old <- get store
    put store (x : old)
  return store

ex ps = do
  store <- storePresents ps
  get (store :: IORef [Present])
```

*This post is part of 24 Days of GHC Extensions - for more posts like this, check out the calendar*.

### What libraries are there for real-time/periodic computations in Haskell?

Say I have a vector of byte strings, and I want to iterate over it - every 10 (or 25/40/50, depending on user input) milliseconds, I want to write the next byte string to a socket. If an individual write fails to make it in time, it should be skipped and logged. But if the entire thing fails (say, the socket is closed) it should stop and return an error. Are there libraries that do this? I'm not sure how to search for this.
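Not a library recommendation, but a baseline sketch of the behaviour described, using only base: compute absolute deadlines from a monotonic clock (so drift doesn't accumulate), skip and report items that miss their slot, and let any exception from the action (e.g. a closed socket) abort the whole run. `everyPeriod` and its skip policy (an item is skipped once it is a full period late) are inventions for illustration, not an existing API.

```haskell
import Control.Concurrent (threadDelay)
import Control.Exception (SomeException, try)
import Control.Monad (foldM)
import GHC.Clock (getMonotonicTime)  -- monotonic seconds, as a Double

-- Run 'act' on each item at a fixed period (in microseconds).
-- Returns the indices of skipped items for logging, or the
-- exception that stopped the loop.
everyPeriod :: Int -> (a -> IO ()) -> [a] -> IO (Either SomeException [Int])
everyPeriod periodUs act xs = do
    start <- getMonotonicTime
    let period = fromIntegral periodUs / 1e6 :: Double
        step skipped (i, x) = do
            let deadline = start + fromIntegral i * period
            now <- getMonotonicTime
            if now > deadline + period
                then pure (i : skipped)      -- too late: skip this write
                else do
                    threadDelay (max 0 (round ((deadline - now) * 1e6)))
                    act x
                    pure skipped
    fmap (fmap reverse) (try (foldM step [] (zip [0 :: Int ..] xs)))
```

A real implementation would likely run the writes on a dedicated thread and use a channel, but the deadline arithmetic is the essential part.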

submitted by technicolorNoise

### Test.SmallCheck

### Writing Libraries - What instances and functionalities should you provide to users?

Hi. I've been wondering what the "etiquette" is when you create new datatypes and functions in a library you expose to other programmers. What instances should you provide? And, if possible, what other functionality should you provide for developers to potentially use?

If your types conform to the corresponding laws, I know there are some "basic" instances you ought to provide, like Eq, Ord, Show, Functor, Applicative and Monad. But is there a certain list of "standard" ones you should ALWAYS provide, or always check whether your types satisfy their laws? For example, should you investigate whether you can make your type an Alternative? Or what about MonadPlus, MonadZip, MonadFix, Arrow, ArrowZero, ArrowPlus, ArrowChoice, ArrowApply, ArrowLoop, Monoid, Category, Foldable, Traversable, and others? And what if you need a dependency on another package - is it worth it to make your type a certain instance then? Like Semigroup, Comonad, or an instance from one of the "categorical" packages, like contravariant, categories, bifunctors, etc. (or many more).

The issue I see with trying to make instances of everything is that you may need a lot of effort to figure out whether your types/functions conform to every single one of them (verifying the laws, etc.); it may add a lot of cruft to your package, may add unnecessary dependencies on external libraries, and may ultimately be pointless (if that specific instance is never used by anybody). How do you determine which ones to provide and which ones not to?
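One pragmatic way to handle the "verifying the laws" part of that effort is to property-test the laws before committing to an instance. A sketch using QuickCheck, with a made-up `Pair` type standing in for a library's datatype:

```haskell
import Test.QuickCheck

-- A stand-in for some library datatype.
data Pair a = Pair a a deriving (Eq, Show)

instance Functor Pair where
  fmap f (Pair x y) = Pair (f x) (f y)

instance Arbitrary a => Arbitrary (Pair a) where
  arbitrary = Pair <$> arbitrary <*> arbitrary

-- The two functor laws, written as testable properties.
prop_fmapId :: Pair Int -> Bool
prop_fmapId p = fmap id p == p

prop_fmapCompose :: Pair Int -> Bool
prop_fmapCompose p = fmap ((+ 1) . (* 2)) p == (fmap (+ 1) . fmap (* 2)) p

main :: IO ()
main = quickCheck prop_fmapId >> quickCheck prop_fmapCompose
```

This doesn't prove the laws, but it makes "did I check the laws?" cheap enough that the question shifts to the dependency-weight concerns.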

Does this also hold for providing other functionality? For instance, if you create a Monad you expect your users to use, you may also decide to provide a monad transformer for it. But that takes more effort, and adds more dependencies (on transformers/mtl/etc.). Or perhaps you create some datatypes which you believe your users will want lenses for, so you provide those (and need another dependency, on the lens library).

How do you determine where to stop? My main concern is that everybody should strive to avoid orphan instances. You can't expect your own users to provide the above instances or functionality if they need them, and you can't expect the maintainers of the packages you depend on to submit to your will and add an instance for your type to their package.

submitted by gonzaw308

### Oliver Charles: 24 Days of GHC Extensions: Multi-parameter Type Classes

Over the last few days, we’ve looked at a few extensions that extend the notion of type classes in Haskell. First, we saw that nullary type classes remove the requirement that a type class vary over a single type by allowing it to mention no types at all, and yesterday we saw how type families can be used to associate more types with a single type. Today, we’re going to revisit yesterday’s example using the multi-parameter type classes extension.

```haskell
{-# LANGUAGE FlexibleInstances #-}
{-# LANGUAGE MultiParamTypeClasses #-}

import Control.Monad.Trans.Class (lift)
import Control.Monad.Trans.Reader (ReaderT)
import Data.Foldable (forM_)
import Data.IORef
```

The extension does just what it says on the tin - with MultiParamTypeClasses enabled, GHC removes the constraint that a type class can mention only a single type. Now we’re able to have our type class mention multiple types at once. Lifting this constraint has significant consequences: a type class over one type models a set of types, whereas a type class over multiple types models a *relation* between types. The latter is interesting, though beyond the scope of this article. Interested readers are pointed to Oleg Kiselyov’s home page - which is full of mind-bending tricks with type classes!

Yesterday, we looked at a traditional example around type classes - modelling the class of types that represent mutable variables. We used type families to associate the type of monad with each mutable variable, reaching the following API:

```haskell
class Store store where
  type StoreMonad store :: * -> *
  new :: a -> (StoreMonad store) (store a)
  get :: store a -> (StoreMonad store) a
  put :: store a -> a -> (StoreMonad store) ()
```

However, the API that we ended up with is a little obtuse - those types take quite a bit of mental parsing to understand. Conceptually, we can think of mutable variables as having a relationship between types: the type of a mutable variable is related to the type of its monad. Using MultiParamTypeClasses, we can encode just this idea - we simply vary the type class over both the variable type *and* its monad:
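The two-parameter class definition itself appears to have been lost from this copy of the post; based on the functional-dependencies follow-up (which shows the same class plus a fundep), it was presumably the following. The instance and `main` are added here only so the sketch compiles and runs on its own:

```haskell
{-# LANGUAGE MultiParamTypeClasses #-}
{-# LANGUAGE FlexibleInstances #-}

import Data.IORef

-- The store type and its monad are now both class parameters,
-- with no associated type family.
class Store store m where
  new :: a -> m (store a)
  get :: store a -> m a
  put :: store a -> a -> m ()

instance Store IORef IO where
  new = newIORef
  get = readIORef
  put ioref a = modifyIORef ioref (const a)

main :: IO ()
main = do
  r <- (new (41 :: Int) :: IO (IORef Int))  -- annotation picks the store
  put r 42
  v <- get r
  print v
```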

This API is much easier to understand! Furthermore, because the type class itself mentions the type of monad, using this type in our programs is straightforward. We can port over yesterday’s example with only changes to the type:

```haskell
type Present = String

storePresents :: (Store store m, Monad m) => [Present] -> m (store [Present])
storePresents xs = do
  store <- new []
  forM_ xs $ \x -> do
    old <- get store
    put store (x : old)
  return store
```

I’m sure you’ll agree, that’s a much more manageable type. All that is left now is to provide instances for our type class:

```haskell
instance Store IORef IO where
  new = newIORef
  get = readIORef
  put ioref a = modifyIORef ioref (const a)
```

Again, very little has changed from yesterday’s code - we just move the type of monad up to the type class instance declaration, rather than using an associated type.

So far I’ve put the extension in a great light, but there is a caveat: the use of multi-parameter type classes can lead to ambiguity during type checking. This can be a huge problem when writing large applications, as it means we now have to annotate our programs extensively.

To look at this problem in more detail, let’s look at using the storePresents function we wrote earlier. If we build a store out of a list of Presents as an IORef and then query for the contents of the IORef, something peculiar seems to happen:

```haskell
ex ps = do
  store <- storePresents ps
  get (store :: IORef [Present])
```

What would you expect the type of this function to be? We’ve chosen IORef as our store, and IORefs are associated with the IO monad, so we have ex :: [Present] -> IO [Present], right? Let’s see what GHCi makes of it:

```
.> :t ex
ex :: (Store IORef m, Monad m) => [Present] -> m [Present]
```

That’s odd! GHCi clearly knows that the variable type itself is an IORef, but that’s *not* enough information to determine the type of monad. For example, another equally valid definition of Store IORef would be:
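The alternative instance seems to have been dropped from this copy of the post; judging by the `ReaderT` import at the top, it was presumably something along these lines (a reconstruction, not the author's exact code), shown here as a self-contained module:

```haskell
{-# LANGUAGE MultiParamTypeClasses #-}
{-# LANGUAGE FlexibleInstances #-}

import Control.Monad.Trans.Class (lift)
import Control.Monad.Trans.Reader (ReaderT, runReaderT)
import Data.IORef

class Store store m where
  new :: a -> m (store a)
  get :: store a -> m a
  put :: store a -> a -> m ()

-- An equally valid choice of monad for IORef stores: IO extended
-- with an environment. Nothing forces m to be IO.
instance Store IORef (ReaderT env IO) where
  new a       = lift (newIORef a)
  get ioref   = lift (readIORef ioref)
  put ioref a = lift (modifyIORef ioref (const a))

main :: IO ()
main = do
  v <- runReaderT prog ()
  print v
  where
    prog :: ReaderT () IO Int
    prog = do
      r <- (new (41 :: Int) :: ReaderT () IO (IORef Int))
      put r 42
      get r
```

With two instances like this in scope, `Store IORef m` genuinely is ambiguous, which is exactly the inference problem the post goes on to describe.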

The problem we’re encountering is that multi-parameter type classes don’t add any information to the type inference engine - because knowing one type doesn’t let you know anything about the other types. However, we needn’t abandon hope here - this problem can be solved, it just needs another extension (oh, of course!).

*This post is part of 24 Days of GHC Extensions - for more posts like this, check out the calendar*.

### wren gayle romano: Unification-fd tutorial (part 1/n)

A while back I released the *unification-fd* library, which gives a generic implementation of first-order unification of non-cyclic terms. I've given a few talks on how the library is implemented and what optimizations it performs, but that's not the topic for today. Today, I'm going to talk about how to *use* it.

Unification is a widely useful operation and, consequently, comes in many different flavors. The version currently supported by the library is the sort used by logic programming languages like Prolog, Curry, Dyna, and MiniKanren; which is the same sort that's used for unification-based type inference algorithms like Hindley–Damas–Milner. Of these two examples, the logic programming example is the simpler one to discuss— at least for folks who've used a language like Prolog before. So let's start from there.

*Caveat Emptor:* This post is something of a stream of consciousness. I've gotten a few requests for tutorials on how to use the library, but the requests weren't terribly specific about what problems people've had or what's been difficult to figure out. So I'm shooting in the dark as far as what folks need and how much background they have. I'm going to assume you're familiar with Prolog and the basics of what unification is and does.

*Preemptive apology:* I started writing this post months and months (and months) ago, but unintentionally dropped it after running into a certain issue and then getting distracted and moving onto other things. Actually, this happened at least twice. I'm terribly sorry about that. So, apologies for not tackling the disjunction issue in this post. I'll come back to it later, but figured this post really needs to get out the door already.

A **term**, in Prolog, is just a fancy name for a value of some algebraic data type. In most variants of Prolog there's no explicit definition of the ADT, no restriction on what the constructors are, and no type checking to ensure that subterms have a particular shape. That is, Prolog is what's called a single-sorted logic; in other words, Prolog is an untyped/unityped language. With *unification-fd* we can implement multi-sorted (aka typed) logics, but for this tutorial we're going to stick with Prolog's single-sorted approach.

Opening up Control.Unification we'll see a handful of types and type classes, followed by a bunch of operators. The UTerm data type captures the recursive structure of logic terms. (UTerm is the free monad, if you're familiar with that terminology.) That is, given some functor t which describes the constructors of our logic terms, and some type v which describes our logic variables, the type UTerm t v is the type of logic terms: trees with multiple layers of t structure and leaves of type v. For our single-sorted logic, here's an implementation of t:

```haskell
data T a = T String [a]
```

The String gives the name of the term constructor, and the list gives the ordered sequence of subterms. Thus, the Prolog term foo(bar,baz(X)) would be implemented as UTerm$T "foo" [UTerm$T "bar" [], UTerm$T "baz" [UVar x]]. If we're going to be building these terms directly, then we probably want to define some smart constructors to reduce the syntactic noise:

```haskell
foo x y = UTerm$T "foo" [x,y]
bar     = UTerm$T "bar" []
baz x   = UTerm$T "baz" [x]
```

Now, we can implement the Prolog term as foo bar (baz x). If you prefer a more Prolog-like syntax, you can use uncurried definitions for smart constructors that take more than one argument.

#### Unifiable

In order to use our T data type with the rest of the API, we'll need to give a Unifiable instance for it. Before we do that we'll have to give Functor, Foldable, and Traversable instances. These are straightforward and can be automatically derived with the appropriate language pragmas.
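Concretely, the derivations might look like this (a sketch; the pragmas are the standard GHC ones, and Show/Eq are added here only so the snippet can demonstrate itself):

```haskell
{-# LANGUAGE DeriveFunctor     #-}
{-# LANGUAGE DeriveFoldable    #-}
{-# LANGUAGE DeriveTraversable #-}

-- The term-constructor functor from the post, with the three
-- instances Unifiable requires derived automatically.
data T a = T String [a]
  deriving (Show, Eq, Functor, Foldable, Traversable)

main :: IO ()
main = print (fmap length (T "foo" ["bar", "baz"]))
```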

The Unifiable class gives one step of the unification process. Just as we only need to specify one level of the ADT (i.e., T) and then we can use the library's UTerm to generate the recursive ADT, so too we only need to specify one level of the unification (i.e., zipMatch) and then we can use the library's operators to perform the recursive unification, subsumption, etc.

The zipMatch function takes two arguments of type t a. The abstract t will be our concrete T type. The abstract a is polymorphic, which ensures that we can't mess around with more than one level of the term at once. If we were to abandon that guarantee, then you could think of a as being UTerm T v. Thus, t a means T (UTerm T v); and T (UTerm T v) is essentially the type UTerm T v with the added guarantee that the values aren't in fact variables. Thus, the arguments to zipMatch are non-variable terms.

The zipMatch method has the rather complicated return type: Maybe (t (Either a (a,a))). Let's unpack this a bit by thinking about how unification works. When we try to unify two terms, first we look at their head constructors. If the constructors are different, then the terms aren't unifiable, so we return Nothing to indicate that unification has failed. Otherwise, the constructors match, so we have to recursively unify their subterms. Since the T structures of the two terms match, we can return Just t0 where t0 has the same T structure as both input terms. Where we still have to recursively unify subterms, we fill t0 with Right(l,r) values where l is a subterm of the left argument to zipMatch and r is the corresponding subterm of the right argument. Thus, zipMatch is a generalized zipping function for combining the shared structure and pairing up substructures. And now, the implementation:

```haskell
instance Unifiable T where
  zipMatch (T m ls) (T n rs)
    | m /= n    = Nothing
    | otherwise = T n <$> pairWith (\l r -> Right(l,r)) ls rs
```

Here, list-extras:Data.List.Extras.Pair.pairWith is a version of zip which returns Nothing if the lists have different lengths. So, if the names m and n match, and if the two arguments have the same number of subterms, then we pair those subterms off in order; otherwise, either the names or the lengths don't match, so we return Nothing.

#### Feature Structures

For the T example, we don't need to worry about the Left option. The reason it's there is to support feature structures and other sparse representations of terms. That is, consider the following type:

```haskell
newtype FS k a = FS (Map k a)
```

Using this type, our logic terms are sets of key–subterm pairs. When unifying maps like these, what do we do if one argument has a binding for a particular key but the other argument does not? In the T example we assumed that subterms which couldn't be paired together (because the lists were different lengths) meant the unification must fail. But for FS it makes more sense to assume that terms which can't be paired up automatically succeed! That is, we'd like to assume that all the keys which are not explicitly present in the Map k a are implicitly present and each one is bound to a unique logic variable. Since the unique logic variables are implicit, there's no need to actually keep track of them; we'll just implicitly unify them with the subterm that can't be paired off.

This may make more sense if you see the Unifiable instance:

```haskell
instance (Ord k) => Unifiable (FS k) where
  zipMatch (FS ls) (FS rs) =
    Just . FS $
      unionWith (\(Left l) (Left r) -> Right(l,r))
        (fmap Left ls)
        (fmap Left rs)
```

We start off by mapping Left over both the ls and the rs. We then call unionWith to pair things up. For any given key, if both ls and rs specify a subterm, then these subterms will be paired up as Right(l,r). If we have extra subterms from either ls or rs, however, then we keep them around as Left l or Left r. Thus, the Unifiable instance for FS performs a union of the FS structure, whereas the instance for T performs an intersection of T structure.

The Left option can be used in any situation where you can immediately resolve the unification of subterms, whereas the Right option says you still have work to do. [1]

#### Logic Variables

The library ships with two implementations of logic variables. The IntVar implementation uses Int as the names of variables, and uses an IntMap to keep track of the environment. The STVar implementation uses STRefs, so we can use actual mutation for binding logic variables, rather than keeping an explicit environment around. Of course, mutation complicates things, so the two implementations have different pros and cons.

Performing unification has the side effect of binding logic variables to terms. Thus, we'll want to use a monad in order to keep track of these effects. The BindingMonad type class provides the definition of what we need from our ambient monad. In particular, we need to be able to generate fresh logic variables, to bind logic variables, and to lookup what our logic variables are bound to. The library provides the necessary instances for both IntVar and STVar.

You can, of course, provide your own implementations of Variable and BindingMonad. However, doing so is beyond the scope of the current tutorial. For simplicity, we'll use the IntVar implementation below.

#### Example Programs

When embedding Prolog programs into Haskell, the main operators we want to consider are those in the section titled "Operations on two terms". These are structural equality (i.e., equality modulo substitution), structural equivalence (i.e., structural equality modulo alpha-variance), unification, and subsumption.

Consider the following Horn clause in Prolog:

```prolog
example1(X,Y,Z) :- X = Y, Y = Z.
```

To implement this in Haskell we want a function which takes in three arguments, unifies the first two, and then unifies the second two. Thus: [2]

```haskell
example1 x y z = do
  x =:= y
  y =:= z
```

To run this program we'd use one of the functions runIntBindingT, evalIntBindingT, or execIntBindingT, depending on whether we care about the binding state, the resulting logic term, or both. Of course, since the unifications may fail, we also need to run the underlying error monad, using something like runErrorT. [3] [4] And since these are both monad transformers, we'll need to use runIdentity or the like in order to run the base monad. Thus, the functions to execute the entire monad stack will look like:

```haskell
-- Aliases to simplify our type signatures. N.B., the
-- signatures are not actually required to get things
-- to typecheck.
type PrologTerm         = UTerm T IntVar
type PrologFailure      = UnificationFailure T IntVar
type PrologBindingState = IntBindingState T

-- N.B., the @FallibleBindingMonad@ isn't yet a monad
-- for Prolog because it doesn't support backtracking.
type FallibleBindingMonad = ErrorT PrologFailure (IntBindingT T Identity)

-- N.B., this definition runs into certain issues.
type PrologMonad = ErrorT PrologFailure (IntBindingT T Logic)

runFBM :: FallibleBindingMonad a
       -> (Either PrologFailure a, PrologBindingState)
runFBM = runIdentity . runIntBindingT . runErrorT
```

Here are some more examples:

```haskell
-- A helper function to reduce boilerplate. First we get
-- a free variable, then we embed it into @PrologTerm@,
-- and then we embed it into some error monad (for
-- capturing unification failures).
getFreeVar = lift (UVar <$> freeVar)

-- example2(X,Z) :- X = Y, Y = Z.
example2 x z = do
  y <- getFreeVar
  x =:= y
  y =:= z

-- example3(X,Z) :- example1(X,Y,Z).
example3 x z = do
  y <- getFreeVar
  example1 x y z

-- example4(X) :- X = bar; X = backtrack.
example4 x = (x =:= bar) <|> (x =:= atom "backtrack")
```

The complete code for this post can be found here online, or at ./test/tutorial/tutorial1.hs in the Darcs repo. Notably, there are some complications about the semantics of example4; it doesn't mean what you think it should mean. We'll tackle that problem and fix it later on in the tutorial series (in part 4 or thereabouts).

#### Term Factoring and Clause Resolution Automata (CRAs)

Note that for the above examples, the Haskell functions only execute the right-hand side of the Horn clause. In Prolog itself, there's also a process of searching through all the Horn clauses in a program and deciding which one to execute next. A naive way to implement that search process would be to have a list of all the Horn clauses and walk through it, trying to unify the goal with the left-hand side of each clause and executing the right-hand side if it matches. A more efficient way would be to compile all the right-hand sides into a single automaton, allowing us to match the goal against all the right-hand sides at once. (The idea here is similar to compiling a bunch of strings together into a trie or regex.)

Constructing optimal CRAs is NP-complete in general, though it's feasible if we have an arbitrary ordering of clauses (e.g., Prolog's top–down order for trying each clause). The *unification-fd* library does not implement any support for CRAs at present, though it's something I'd like to add in the future. For more information on this topic, see Dawson et al. (1995) *Optimizing Clause Resolution: Beyond Unification Factoring* and Dawson et al. (1996) *Principles and Practice of Unification Factoring*.

In addition to unification itself, it's often helpful to have various other operators on hand.

One such operator is the subsumption operator. Whereas unification looks for a most-general substitution which when applied to both arguments yields terms which are structurally equal (i.e., l =:= r computes the most general s such that s l === s r), subsumption applies the substitution to only one side. That is, l subsumes r just in case r is a substitution instance of l (i.e., there exists a substitution s such that s l === r). The symbolic name (<:=) comes from the fact that when l subsumes r we also say that l is less defined [5] than r. Subsumption shows up in cases where we have to hold r fixed for some reason, such as when implementing polymorphism or subtyping.

Other operators work on just one term, such as determining the free variables of a term, explicitly applying the ambient substitution to obtain a pure term, or cloning a term to make a copy where all the free variables have been renamed to fresh variables. These sorts of operators aren't used very often in logic programming itself, but are crucial for implementing logic programming languages.

#### Conclusion

Hopefully that gives a quick idea of how the library's API is set up. Next time I'll walk through an implementation of Hindley–Damas–Milner type inference, and then higher-ranked polymorphism à la Peyton Jones et al. (2011) *Practical type inference for arbitrary-rank types*. After that, I'll discuss the complications about backtracking choice I noticed when writing this post, and walk through how to fix them. If there's still interest after that, I can get into some of the guts of the library's implementation— like ranked path compression, maximal structure sharing, and so on.

If you have any particular questions you'd like me to address, drop me a line.

[1] Older versions of the library used the type zipMatch :: forall a b. t a -> t b -> Maybe (t (a,b)) in order to ensure that we did in fact properly pair up subterms from the two arguments. Unfortunately I had to relax that guarantee in order to add support for feature structures. ↩

[2] N.B., a more efficient implementation is:

```haskell
example1' x y z = do
  y' <- x =:= y
  y' =:= z
```

The unification operator returns a new term which guarantees maximal structure sharing with both of its arguments. The implementation of unification makes use of observable structure sharing, so by capturing y' and using it in lieu of y, the subsequent unifications can avoid redundant work. ↩

[3] The ErrorT transformer was deprecated by *transformers-0.4.1.0*, though it still works for this tutorial. Unfortunately, the preferred ExceptT does not work since UnificationFailure doesn't have a Monoid instance as of *unification-fd-0.9.0*. The definition of UnificationFailure already contains a hack to get it to work with ErrorT, but future versions of the library will remove that hack and will require users to specify their own monoid for combining errors. The First monoid captures the current behavior, though one may prefer to use other monoids such as a monoid that gives a trace of the full execution path, or witnesses for all the backtracks, etc. ↩

[4] To be honest, I don't entirely recall why I had the error monad explicitly separated out as a monad transformer over the binding monad, rather than allowing these two layers to be combined. Since it's so awkward, I'm sure there was some reason behind it, I just failed to make note of why. If there turns out to be no decent reason for it, future versions of the library may remove this fine-grain distinction. ↩

[5] The symbolic name for subsumption is chosen to reflect the meaning of more/less defined (rather than more/less grounded) so that the subsumption ordering coincides with the domain ordering (think of logic variables as being bottom). This is the standard direction for looking at subsumption; though, of course, we could always consider the dual ordering instead. ↩


### Who wants help documenting their libraries? Are industrial users prepared to pay for better documentation?

One of the problems I had with examples for documentation was that they didn't scale very well. They are usually in markdown or unverified haddocks and don't even get type-checked - let alone executed and validated.

With Bloodhound, I decided to try out doctest à la lens, and I'm pretty happy with the result for the Client module. I will expand this to the Types module in time.
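For readers unfamiliar with the style: doctest runs the examples embedded in Haddock comments and checks their output, so the examples can never silently rot. A minimal sketch (the function is made up, not from Bloodhound):

```haskell
-- | Double a number.
--
-- The example below is executable documentation: doctest extracts
-- the @>>>@ lines, runs them in GHCi, and compares the output
-- against the line that follows.
--
-- >>> double 21
-- 42
double :: Int -> Int
double n = n * 2

main :: IO ()
main = print (double 21)
```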

One of the main recurring objections to raising the issue of library documentation was that library authors' time is precious and we cannot ask them to write documentation.

Okay!

I'd like to test-drive writing it for them. I anticipate it'll be time-consuming for me to understand the libraries well enough to write examples. This makes sense at least from the POV of avoiding other people having to go through the pain of integrating doctests - I've already figured it out.

So, library authors that would like to have better documentation but don't have time - any volunteers? I can't promise this will work well or that the offer will stand in perpetuity. I'm hoping for a patient individual who can answer questions via email periodically.

Next point of order: are people (their companies, really) using Haskell in industry prepared to pay to get better documentation and examples? Seems like something Snowdrift might be useful for, though I'm not sure if the idea is a good match or not.

There's the issue that one-off examples for individual functions and data types don't always demonstrate or encapsulate how to use a library very well. Often something like a cookbook is more appropriate.

submitted by Mob_Of_One