News aggregator

Newbie questions on Applicatives, Types and Typeclasses

Haskell on Reddit - Sat, 12/20/2014 - 11:36am

Hi /r/haskell,

I'd written this as a comment to a great post by /u/tailcalled but I think I was a bit late to the comments. I was wondering if anyone had any feedback on my understanding of the code and some of the questions generated in the process.

Let me see if I understand what I'm seeing.

You create Map data structures by calling the names and handles functions.

The data structures produced by calling names and handles are wrapped into a Table data type via the from function. Within from, a Table is constructed from an anonymous function which takes a key argument and returns a lookup of that key within the map m.

The Table data type specification is written in 'record syntax', with its two type variables (k and v) used in the type of the anonymous function it wraps.

What I don't understand is how the Table type is being constructed with that anonymous function. (The fact that this could be done was new to me! Cool stuff.) Is this function evaluated during construction, or is it lazy (i.e. will that anonymous function remain unexecuted until other code runs lookup)? Perhaps related, it looks like that anonymous function gets partially applied later on, and then composed into something else later.
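
To make the construction/laziness question concrete, here is a minimal sketch of the shapes described above; the field name lookupTable and the exact types are my guesses, not necessarily what tailcalled's post uses:

```haskell
import qualified Data.Map as Map
import Data.Map (Map)

-- Sketch of the Table type: a record whose single field is a function.
newtype Table k v = Table { lookupTable :: k -> Maybe v }

-- 'from' wraps a Map into a Table. Nothing is evaluated at construction
-- time: the lambda just captures m in a closure, and the actual Map.lookup
-- only runs when someone applies lookupTable to a key.
from :: Ord k => Map k v -> Table k v
from m = Table (\k -> Map.lookup k m)
```

So the anonymous function stays unexecuted until a lookup happens; the partial application seen later is just this stored function being applied or composed.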

The other part I don't understand is how join2 works, which is a good segue into the Applicative which I need to become more familiar with. Here is a shot at it:

The join2 function constrains its parameter types (f a and f b) to the typeclass Applicative. In its body, join2 uses the "," type constructor (function?)[1] to construct a tuple. The "," seems to take as its arguments the two applicative parameters of join2 (I'm not certain). Now in this case the two parameters are Tables (which wrap functions), as used in the example:

*Main> let table = join2 names' handles'
*Main> :t table
table :: Table Id ((FirstName, LastName), TwitterHandle)

which I would guess means that liftA2 together with "," could be described as follows: it applies[2] the two functions and tuples up their results. And within names' and handles' themselves, we're seeing the promotion of names and handles from Maps to Tables. Hence the return type f (a, b), which (I am guessing) is coaxed into a Table by the typeclass instance instance Applicative (Table k). I am not certain how the instance and its definition (Table f <*> Table x = Table (\k -> f k <*> x k)) fit together with liftA2[3].

The author states that the original purpose of converting from Map to Table is to be able to use the Applicative instance, or more specifically, to have a working implementation of pure. I am not sure how pure fits in.
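
Here is one hedged reconstruction of how pure, <*> and liftA2 hang together for Table; the Functor instance and the Just in pure are my assumptions, not quoted from the post:

```haskell
import Control.Applicative (liftA2)

newtype Table k v = Table { runTable :: k -> Maybe v }

instance Functor (Table k) where
  fmap g (Table f) = Table (\k -> fmap g (f k))

instance Applicative (Table k) where
  -- pure is the piece a plain Map cannot supply: a table where *every*
  -- key maps to v. A finite Map has no such value; a function does.
  pure v = Table (\_ -> Just v)
  -- The instance quoted in the post: apply both tables to the same key,
  -- then combine the two Maybe results with Maybe's own <*>.
  Table f <*> Table x = Table (\k -> f k <*> x k)

-- liftA2 (,) means: fmap (,) over the first table (partially applying the
-- tuple constructor to each value), then <*> with the second table.
join2 :: Applicative f => f a -> f b -> f (a, b)
join2 = liftA2 (,)
```

On this reading, join2 never "runs" either lookup; it builds a new Table whose lookup function consults both underlying tables for the same key and pairs the results (Nothing if either misses).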

I recognize I am also showing that my understanding of typeclasses is probably weak. I tend to think of typeclass instances as "make a given operation behave this way on this type, and that way on that type."

[1] *Main> :t (,)
    (,) :: a -> b -> (a, b)

[2] Perhaps it is not applying them but sort of freezing them in their state for use later, or taking the two chunks of computation (the two f a functions that are arguments) and bundling them together

[3] I recall seeing this sort of "logic in the type" in this example, hopefully it is roughly analogous:

you can retrieve an element at an 'index' in a sum type if you convert an Int into an Enum ('toEnum') and return the result in a function whose return type is the sum type. I hope this is roughly analogous to what I'm seeing with the `T

newtype ID = ID Int
  deriving (Show, Num, Eq, Ord, Integral, Real, Enum)

data Person = John | Sarah | Susan
  deriving (Show, Eq, Enum, Bounded)

...

personFromId :: ID -> Maybe Person
personFromId (ID i) | i <= 3 && i > 0 = Just (toEnum (i-1))
                    | otherwise = Nothing


submitted by nabokovian
Categories: Incoming News

Yesod Web Framework: Use Stackage for docs

Planet Haskell - Sat, 12/20/2014 - 8:30am

I'm writing this blog post to address a personal annoyance of mine as the maintainer of a large number of Haskell packages. Very often, I get bug reports about lack of documentation on Hackage. This has occurred for years. Most people who file these issues are not aware that the lack-of-documentation error is more often than not a problem with Hackage itself. Some people are aware of this, and are asking me to start running a separate tool every time I upload a package to generate the documentation locally.

I have another annoyance with documentation on Hackage: I'm forced to write my package's description in a very strange Haddock-inside-cabal format in the cabal file itself. I need to write a description in a README for users on Github in any event, so this is purely wasted effort.

To address both of these issues at the same time, I've started modifying the description field in my packages to give a link to their Stackage address. I'm doing this out of laziness on my part: I can now feel confident that documentation will be available at the pages people are pointed to, I will hopefully get fewer needless issues opened about "Hackage documentation is broken," and I don't need to keep two meaningful descriptions of each of my packages written (one in the weird cabal/Haddock format, one in much nicer Markdown).

Others are clearly welcome to do this as well, but my main motivation here is explaining my reasoning for these changes, so I don't get a flood of new inquiries as to "why do you have such a strange description field in all your packages?" For those wishing to emulate this, follow these steps:

  1. Make sure your package has a good README file with a description.
  2. Include the README in the extra-source-files field of your cabal file.
  3. Update your description field with text like: "Hackage documentation generation is not reliable. For up to date documentation, please see the Stackage page for *name*."
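
Put together, the cabal file of a hypothetical package mylib would contain something like the following; the URL follows the standard stackage.org/package/NAME shape, and all field values here are illustrative:

```cabal
name:               mylib
version:            0.1.0.0
synopsis:           A short one-line summary still lives in the cabal file
description:        Hackage documentation generation is not reliable. For up
                    to date documentation, please see:
                    <http://www.stackage.org/package/mylib>
extra-source-files: README.md
```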

UPDATE: See my next blog post for the aftermath of this.

Categories: Offsite Blogs

Lennart Augustsson: A commentary on 24 days of GHC extensions, part 3

Planet Haskell - Sat, 12/20/2014 - 5:52am
It's time for some more Haskell opinions.

Day 13: Functional Dependencies

See day 12.

The addition of functional dependencies to multi-parameter type classes all of a sudden opened up the possibility of doing type level computations that had not been there before. The type level programming industry seems to have started with Thomas Hallgren's 2001 paper Fun with Functional Dependencies (which got some inspiration from the type level numeric computation available in (classic) Bluespec).
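
A small example in the Hallgren style, showing the kind of type-level computation fundeps enabled: Peano numerals as types, with addition performed by instance resolution. The names are mine, not from the paper:

```haskell
{-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies,
             FlexibleInstances, UndecidableInstances #-}

-- Type-level Peano numerals.
data Z   = Z
data S n = S n

-- The fundep "a b -> c" makes Add a type-level function: the result c
-- is computed by the instance resolver.
class Add a b c | a b -> c where
  add :: a -> b -> c

instance Add Z b b where
  add Z b = b

instance Add a b c => Add (S a) b (S c) where
  add (S a) b = S (add a b)

-- Observe the computed result at the value level.
class Nat n where
  toInt :: n -> Int

instance Nat Z where
  toInt _ = 0

instance Nat n => Nat (S n) where
  toInt (S n) = 1 + toInt n
```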

Day 14: Deriving

I don't know the history of deriving Functor et al. I believe Simon PJ added it after enough nagging.

I do have an intense dislike for GeneralizedNewtypeDeriving as implemented in GHC. As illustrated by GHC bug 1496 (found by Stefan O'Rear, who was 14-15 at the time) it leads to an inconsistent type system. Instead of removing the root cause of the problem, GHC HQ has decided to add a rather ingenious, but grotesque, solution where type variables are given roles. It's complex and it's going to get worse. It also brings its own problems (e.g., join can no longer be made a method in the monad class. WAT?!?!).

In the two compilers where I've implemented GND I've chosen a different route. The compiler tries to construct a proper instance declaration for the derived class. If it can be constructed and type checks then all is well. If it doesn't, there's something tricky going on and the instance has to be written by hand. I claim this covers 99% of all GND uses. And it makes me sleep better at night having had the instance declaration approved by the type checker.

I also prefer to make the compiler recognise and remove the identity functions that newtype gives rise to, rather than forcing the user to explicitly use the new Coercible class.
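
For readers who haven't used it, a minimal example of what GND does (the Meters type is mine, just for illustration): Num is lifted through the newtype wrapper, instead of requiring a hand-written instance full of exactly the wrap/unwrap identity functions discussed above:

```haskell
{-# LANGUAGE GeneralizedNewtypeDeriving #-}

-- GND reuses Double's Num instance for Meters; without the extension,
-- only the stock classes (Show, Eq, Ord, ...) could be derived here.
newtype Meters = Meters Double
  deriving (Show, Eq, Ord, Num)

total :: [Meters] -> Meters
total = sum
```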

Day 15: Derive Generic

The generic deriving extension is a clever idea from Clean, where all the meta-data about a data type (constructor names, types, etc) is encoded at the type level, and thus available at compile time.

The details of this encoding are rather gruesome, but I think it could be a lot nicer using the new type level strings, type level numbers, and type families. Time for a small redesign?

Day 16: Overloaded Strings

Overloaded strings were another extension I added to GHC because of my work on the Paradise DSL. Since Paradise is deeply embedded, using DSL-level strings would be ugly unless the string literals were overloaded.

It was quite easy to add to GHC. The least easy part was adapting the defaulting rules to strings as well.

Worth noting is that pattern matching on overloaded strings turns into an equality comparison (just as with numeric literals), and so requires the string type to be an instance of Eq, which is not a superclass of IsString. Oh, and the name IsString for the class with the fromString method was just temporary since I couldn't think of a good name. I guess it stuck.
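
A small sketch of both points: an IsString instance for a custom type, and a pattern match on a string literal, which compiles only because the type is also an Eq instance (the Name type and greet function are illustrative, mine):

```haskell
{-# LANGUAGE OverloadedStrings #-}

import Data.String (IsString (..))

newtype Name = Name String
  deriving (Eq, Show)

instance IsString Name where
  fromString = Name

-- Matching on "world" desugars to an equality test against
-- fromString "world", hence the need for Eq Name.
greet :: Name -> String
greet "world"  = "hello"
greet (Name n) = "hi, " ++ n
```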

Day 17: Rank-N Types

Rank-n types are, of course, a rather old idea, but it took a while before they made it into Haskell. At the Haskell Workshop in La Jolla in 1995, Mark Jones had a very nice paper, From Hindley-Milner Types to First-Class Structures. It shows how to add a form of records to Haskell, where the record components can be polymorphic. This has the same expressivity as current rank-n types, but looks a little different.

For example, the archetypical Haskell rank-2 function has this type now:

runST :: (forall s . ST s a) -> a

With records with polymorphic components we have to introduce a new record type to write this. With modern Haskell it would look like:

data ArgST a = ArgST (forall s . ST s a)
runST :: ArgST a -> a

Or with Mark's suggested anonymous record types:

runST :: { arg :: forall s . ST s a } -> a

The 1996 release of HBC contained this kind of polymorphic components, but it did not have the anonymous records. GHC got rank-n (or was it rank-2?) types in 2000. The GHC rank-n types are somewhat more convenient, but require function type signatures, which the hbc version did not.
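
For completeness, a runnable version of the ArgST encoding (runArgST is my name for the wrapper, chosen to avoid clashing with the real runST):

```haskell
{-# LANGUAGE RankNTypes #-}

import Control.Monad.ST
import Data.STRef

-- The rank-2 forall lives in the record component rather than in the
-- function's type signature.
data ArgST a = ArgST (forall s. ST s a)

runArgST :: ArgST a -> a
runArgST (ArgST st) = runST st

example :: Int
example = runArgST (ArgST (do r <- newSTRef 1
                              modifySTRef r (+ 41)
                              readSTRef r))
```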

At the La Jolla ICFP we had a Haskell committee meeting about the next Haskell release, 1.3. If memory serves me, after a lot of discussions we decided that Haskell was going to get record types. Something along the lines of what Mark's paper suggested, but I think they were going to be named. The records would support polymorphic components (thus general universal quantification) and data types would support existentials. But after we split up, I guess the czar got cold feet and we ended up with the current Haskell records (I didn't really like them when I first saw them, and I still don't). Haskell could have been rather different if we had followed through on the decision (I'm not sure about better, but at least different).

Day 18: Existential Types

The idea of existential types is old, but the first time I saw it in an ML-like programming language was Läufer&Odersky's An Extension of ML with First-Class Abstract Types at the ML workshop in San Francisco in June 1992. I promptly implemented it and it was in version 0.998.1 of HBC in July 1992.

Adding existentials to ML is very easy; it's just a matter of tweaking the type checker to accept some more programs. Adding it to Haskell is a little more involved since the existential variables may appear in a context, and so a value of an existential type has to be a pair of dictionaries and the actual value (ML only needs the actual value). The dictionaries are needed to use overloaded operations on the value.

What about syntax? I did it exactly like Läufer&Odersky, namely free variables in a data declaration are existentially quantified. E.g., in data E = C a (a -> Int), the type variable a does not appear in the head of the data declaration, so it is existentially quantified.

Some people (I remember Mark Jones and Simon PJ) argued fiercely against this, saying that it would be too easy to make mistakes and forget a type variable and get it existentially quantified. So when GHC finally got them they looked like data E = forall a . C a (a -> Int), using the confusing (but entirely logical!) forall to indicate an existential type. But then GADTs came along, and now it's suddenly no longer important to use the (pointless) forall, i.e., now we can write data E where { C :: a -> (a -> Int) -> E }. Oh, well.
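
The forall syntax in action; packing a value together with the only operation that can consume it, so values of different hidden types can share a list (useE and heterogeneous are my names):

```haskell
{-# LANGUAGE ExistentialQuantification #-}

-- The type variable a is hidden: all we know about the packed value is
-- that the packed function accepts it.
data E = forall a. C a (a -> Int)

useE :: E -> Int
useE (C x f) = f x

-- Values of different hidden types in one list.
heterogeneous :: [E]
heterogeneous = [C (41 :: Int) (+ 1), C "abc" length]
```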

Some people say that existential types are an anti-pattern in Haskell. I don't agree. They have a few perfectly legitimate uses; the problem is just that if you come from an OO background you'll probably reach for existentials in all kinds of cases where there are better solutions.

Categories: Offsite Blogs

FARM 2014

Haskell on Reddit - Fri, 12/19/2014 - 3:21pm
Categories: Incoming News

Ken T Takusagawa: [ojorjgmk] 1D Lego fractal

Planet Haskell - Fri, 12/19/2014 - 11:26am

A one dimensional cellular automaton generated using the following state transition rules. Note that each cell in the next line (generation) depends on four parents, so it is best drawn on a hexagonal grid, rather than the square grid on which 1D cellular automata are usually drawn (three parents). Vaguely reminiscent of the Sierpinski gasket, and even more of Rule 30.

* represents either 0 or 1 ("don't care"), so the patterns below expand to 16 rules total, as one would expect with 4 parents.

*00* -> 0
*11* -> 0
101* -> 0
*101 -> 0
001* -> 1
*100 -> 1

The general idea is that cells grow to the left and right, but try to avoid crowding. The structure is a binary tree: branches never re-meet.
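
The rules transcribe directly into Haskell; the four-cell window and the zero padding at the edges are my assumptions about how the hexagonal grid is indexed, not taken from the linked source:

```haskell
import Data.List (zipWith4)

-- The six don't-care patterns above, written as pattern matches.
rule :: Int -> Int -> Int -> Int -> Int
rule _ 0 0 _ = 0
rule _ 1 1 _ = 0
rule 1 0 1 _ = 0
rule _ 1 0 1 = 0
rule 0 0 1 _ = 1
rule _ 1 0 0 = 1
rule _ _ _ _ = 0  -- unreachable: the six patterns cover all 16 cases

-- One generation: each new cell reads a four-parent window.
step :: [Int] -> [Int]
step xs = zipWith4 rule p (drop 1 p) (drop 2 p) (drop 3 p)
  where p = [0, 0] ++ xs ++ [0, 0]
```

Starting from a single 1, this indexing gives generations [1], [1,1], [1,0,1], [1,0,0,1], ... — cells spreading left and right while avoiding crowding.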

Bigger picture (click to enlarge):

What is the asymptotic density?

Inspired by an attempt to build a dense Lego pyramid that could still be easily taken apart. This fractal is the vertical cross section; concentric squares around the vertical axis. (The inner structures turned out to be too fragile.)

Haskell source code to generate the images.

Categories: Offsite Blogs

Lennart Augustsson: A commentary on 24 days of GHC extensions, part 2

Planet Haskell - Fri, 12/19/2014 - 10:45am
Day 7: Type Operators

I understand the usefulness of the type operator GHC extension, but I don't like it. For value identifiers any operator not starting with a ':' is treated the same way as identifiers starting with a lower case letter. This is how it used to be on the type level too. But with the latest version of the TypeOperators extension these operators now behave like upper case identifiers and can be used to name types. An ugly wart, IMO.
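
For reference, what the extension permits (the + type below is my own example):

```haskell
{-# LANGUAGE TypeOperators #-}

-- With TypeOperators, a symbolic name not starting with ':' can name a
-- type constructor -- the upper-case-identifier behaviour objected to above.
data a + b = L a | R b
  deriving Show

mirror :: a + b -> b + a
mirror (L a) = R a
mirror (R b) = L b
```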

Day 8: Recursive do

This extension stems from Levent Erkök's PhD thesis Value Recursion in Monadic Computations. The motivation was to be able to describe circuits with feedback using monadic notation.

There are actually two ways to get recursive do-notation. You can either use the rec keyword inside the do, or you can use the keyword mdo instead of do. Why mdo, you ask? It should really be μdo, where μ is the recursion operator. But people at GHC HQ are scared of Unicode, so instead we got mdo.
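
A tiny example of value recursion with mdo; the binding xs refers to itself, and MonadFix (mfix, or fixIO for IO) ties the knot:

```haskell
{-# LANGUAGE RecursiveDo #-}

-- xs is used in its own definition; laziness makes the infinite list
-- 1 : 1 : 1 : ... well-defined, and take cuts it down.
firstThree :: IO [Int]
firstThree = mdo
  xs <- return (1 : xs)
  return (take 3 xs)
```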

Day 9: Nullary Type Classes

Once you have accepted multi-parameter type classes then nullary type classes are a natural consequence; zero is also a number, after all.

In 1994 I worked on the pH language when I visited MIT. The acronym pH stands for "parallel Haskell", which is what it is. It had some imperative features that were not in a monad; the ST monad was just being invented at the time, so we were trying out other things. To keep track of which parts of the program used imperative features I used a nullary type class Imperative that tainted any code that used assignment et al. (As far as I know, this is the earliest use of a nullary type class.) When mixing imperative code and pure code it is important not to be too polymorphic. ML-like languages use the value restriction to accomplish this. Since Haskell's monomorphism restriction is essentially the same as the value restriction, having the Imperative type class neatly accomplished the value restriction for pH.
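
In today's GHC a nullary class in the spirit of Imperative needs only MultiParamTypeClasses (which subsumes the old NullaryTypeClasses flag); the example below is mine, not pH's actual code:

```haskell
{-# LANGUAGE MultiParamTypeClasses #-}

-- A class with zero parameters: it carries no methods, it just taints
-- every signature that mentions it.
class Imperative

-- Discharging the constraint, i.e. allowing imperative code here.
instance Imperative

bump :: Imperative => Int -> Int
bump x = x + 1
```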

Day 10: Implicit Parameters

The implicit parameter started as an attempt to simplify Haskell overloading. Haskell overloading, when implemented by dictionary passing, really has two parts: first there is a way to implicitly pass dictionaries around, and second, dictionaries are constructed automatically by using the instance declarations as a logical inference system.

The implicit parameters do the automagic passing of parameters, and are thus half of what is needed for overloading. But they were never used for overloading; instead it was discovered that they can be made to behave like dynamic binding, but with static types. This is quite cool! Even if one doesn't like dynamic binding (I don't).

The idea is presented in Erik Meijer et al.'s paper Implicit Parameters: Dynamic Scoping with Static Types. It's worth reading.
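
The dynamic-binding flavour in a few lines (names mine):

```haskell
{-# LANGUAGE ImplicitParams #-}

-- ?prefix is passed implicitly, but it appears in the static type, so
-- forgetting to bind it is a compile-time error rather than a runtime one.
logMsg :: (?prefix :: String) => String -> String
logMsg s = ?prefix ++ s

run :: String
run = let ?prefix = "[info] " in logMsg "hello"
```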

Day 11: Type Families

Type families grew out of the need to have type classes with associated types. The latter is not strictly necessary since it can be emulated with multi-parameter type classes, but it gives a much nicer notation in many cases. The same is true for type families; they can also be emulated by multi-parameter type classes. But MPTC gives a very logic programming style of doing type computation; whereas type families (which are just type functions that can pattern match on their arguments) are like functional programming.

Using closed type families adds some extra strength that cannot be achieved by type classes. To get the same power from type classes we would need to add closed type classes. Which would be quite useful; this is what instance chains gives you.
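
A minimal closed type family, with the top-to-bottom equation order that open families (and plain type classes) cannot express; Swap is my own example:

```haskell
{-# LANGUAGE TypeFamilies #-}

-- Equations are tried in order, like term-level pattern matching.
type family Swap a where
  Swap Int  = Bool
  Swap Bool = Int
  Swap a    = a   -- overlapping catch-all, legal only because the family is closed

-- Swap Bool reduces to Int, so answer is just an Int.
answer :: Swap Bool
answer = 42
```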

Day 12: Multi-Parameter Type Classes

Even the original paper on Haskell type classes mentions multiple parameters, so this idea has been with us from the start. But it's not present in Stefan Kaes's original work on type classes.

Unfortunately, multiple type class parameters very quickly lead to type ambiguities in the code, and we are forced to add type signatures. So MPTC was not really useful until Mark Jones introduced functional dependencies. Mark got the idea from database theory where the same idea is used to limit the relations in the database.
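
The classic illustration of why the fundep helps (my rendition): without c -> e, the element type e in empty's type would be ambiguous and nearly every use would need an annotation:

```haskell
{-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies, FlexibleInstances #-}

-- "c -> e": the collection type determines the element type. Without the
-- fundep, empty's type  Collection c e => c  leaves e ambiguous.
class Collection c e | c -> e where
  empty  :: c
  insert :: e -> c -> c
  toList :: c -> [e]

instance Collection [a] a where
  empty  = []
  insert = (:)
  toList = id
```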

Categories: Offsite Blogs