News aggregator

Haskell software engineering master thesis

haskell-cafe - Fri, 10/17/2014 - 11:07am
Hi, everyone! I'm about to write a software engineering/management master's thesis at the IT University of Gothenburg, and I'm considering doing research into the Haskell world. It should probably be in one of the following five fields: empirical software engineering, requirements engineering, software architecture, model-driven engineering or quality management. The focus should probably be more towards an empirical study, rather than something more computer science-related. It should also add to the general body of knowledge, and not be too tied to, for example, a specific implementation. Does anyone have any suggestions of relevant topics that I could write about, or perhaps some related papers that I could read? Thanks! Warm regards, Jon Kristensen
Categories: Offsite Discussion

ICFEM 2014, Luxembourg, 3-7 November 2014: Last Call for Participation

General haskell list - Fri, 10/17/2014 - 10:45am
16th International Conference on Formal Engineering Methods
ICFEM 2014, Luxembourg, 3-7 November 2014
Last Call for Participation
http://icfem2014.uni.lu
----------------------------------------
The 16th International Conference on Formal Engineering Methods (ICFEM 2014) will be held at the Melia Hotel in Luxembourg, Luxembourg, from 3 to 7 November 2014. Since 1997, ICFEM has been serving as an international forum for researchers and practitioners who have been seriously applying formal methods to practical applications.

Highlights:
----------------------------------------
+ Keynote speakers: Nikolaj Bjorner (Microsoft Research), Lionel Briand (University of Luxembourg) and Vincent Danos (University of Edinburgh)
+ The conference program is now available at http://icfem2014.uni.lu/program.php
+ Panel discussion on November 4th: Are Formal Engineering Methods and Agile Methods Friend or Enemy?
+ Two affiliated workshops: FTSCS 2014 and SOFL+MSVL.

PC Chairs
----------------------------------------
+ Steph
Categories: Incoming News

Neil Mitchell - Downloads

del.icio.us/haskell - Fri, 10/17/2014 - 6:08am
Categories: Offsite Blogs

Yosemite

haskell-cafe - Fri, 10/17/2014 - 3:49am
Hi, Has anybody tried GHC on Yosemite? I cannot afford to do it by myself this time. So, I would like to know. --Kazu
Categories: Offsite Discussion

Best way to Introduce FP into a dev team with many C,C++ and Java developers?

haskell-cafe - Fri, 10/17/2014 - 3:00am
Hi, Recently our development team decided to try using Haskell or Scala as one of our programming languages. (We hoped some of our problems would be addressed by taking advantage of some elements of FP, like immutability and Rx.) Most of our team members have fairly good knowledge of C, C++ and Java, and a few of them, including myself, have been self-studying an FP language like Haskell or Scala since last year. Aside from those early starters, functional programming is quite new to us. When we held a lecture on FP, we found many of our developers were struggling to grasp the ideas of FP. They fumbled with writing code in a functional programming style. We kept on lecturing several times, but still without much success. It seems more difficult than it looks for long-time C, C++ and Java developers to start thinking in a functional programming way. Does anyone have experience with introducing FP to a large dev team without pain? Our development team has around 300+ people. Thanks.
Categories: Offsite Discussion

Hiding import behaviour

glasgow-user - Thu, 10/16/2014 - 11:19pm
With the prelude changes that people have been discussing recently I've been wondering: is there any reason why importing an identifier explicitly and unqualified doesn't automatically hide any implicit imports of the same identifier? Specifically I'm wondering about cases where you've imported an identifier explicitly from only one module, like this:

module Foo (x, ...) where { ... }
module Bar (x, ...) where { ... }

import Bar
import Foo (x)

Even if you needed a pragma to enable it, I can't think of any sensible reason why that shouldn't be equivalent to:

import Bar hiding (x)
import Foo (x)

I don't know much of GHC's internals, but it seems like a pretty minimal change. Typing rules remain the same; explicit imports just shadow implicit ones. So importing multiple identifiers both implicitly or both explicitly would remain ambiguous.
Categories: Offsite Discussion

More about function equivalences?

Haskell on Reddit - Thu, 10/16/2014 - 9:19pm

There are a few notable transforms on functions that can be convenient: "tabulation":

(a -> b) ~> Set (a,b)

"classification":

Set (a,b) ~ ((a,b) -> Bool)

and "inverse image"

(a -> b) ~> (b -> Set a)

As far as I can tell, these don't fall exactly under the Yoneda transform or classifying objects, but they seem suspiciously close. I see people throwing them around a lot, but it's not clear to me under what circumstances (i.e. what kind of category) this is possible. Is there a name for this relationship, or a pointer to more tricks with it?
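For concreteness, here is one way these could be transcribed in Haskell, modelling Set a as its membership predicate a -> Bool (the names are just placeholders for illustration):

-- "tabulation": a function's graph as a predicate on pairs
tabulate :: Eq b => (a -> b) -> ((a, b) -> Bool)
tabulate f (x, y) = f x == y

-- "classification": with sets modelled as predicates, this is definitional
classify :: ((a, b) -> Bool) -> ((a, b) -> Bool)
classify = id

-- "inverse image": everything that f maps to a given y
invImage :: Eq b => (a -> b) -> b -> (a -> Bool)
invImage f y x = f x == y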

submitted by dogirardo
Categories: Incoming News

Proposal: Add foldrWithIndex and foldlWithIndex to Data.List

libraries list - Thu, 10/16/2014 - 6:14pm
These functions can be lifted pretty much straight out of Data.Sequence. In particular, foldrWithIndex makes for a particularly nice expression of a fusing findIndices function, as is present in Data.Sequence.
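As a rough illustration of the intended semantics for lists (not the proposed patch, just a sketch):

foldrWithIndex :: (Int -> a -> b -> b) -> b -> [a] -> b
foldrWithIndex f z xs = foldr (\x go i -> f i x (go (i + 1))) (const z) xs 0

foldlWithIndex :: (b -> Int -> a -> b) -> b -> [a] -> b
foldlWithIndex f z xs = fst (foldl (\(acc, i) x -> (f acc i x, i + 1)) (z, 0) xs)

-- e.g. a findIndices in the style mentioned above:
findIndices' :: (a -> Bool) -> [a] -> [Int]
findIndices' p = foldrWithIndex (\i x is -> if p x then i : is else is) []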
Categories: Offsite Discussion

Danny Gratzer: Notes on Quotients Types

Planet Haskell - Thu, 10/16/2014 - 6:00pm
Posted on October 17, 2014
Tags: types

Lately I’ve been reading a lot of type theory literature. In an effort to help my future self, I’m going to jot down a few thoughts on quotient types, the subject of some recent google-fu.

But Why!

The problem quotient types are aimed at solving is actually a very common one. I’m sure at some point or another you’ve used a piece of data you’ve wanted to compare for equality. Additionally, that data probably needed some work to determine whether it was equal to another piece.

A simple example would be representing rational numbers. A rational number is a fraction of two integers, so let’s just say

type Rational = (Integer, Integer)

Now all is well, we can define a Num instance and what not. But what about equality? Clearly we want equivalent fractions to be equal. That should mean that (2, 4) = (1, 2) since they both represent the same number.

Now our implementation has a sticky point: clearly this isn’t the case on its own! What we really want to say is “(2, 4) = (1, 2) up to trivial rejiggering”.

Haskell’s own Rational type solves this by not exposing a raw tuple. It still exists under the hood, but we only expose smart constructors that will reduce our fractions as far as possible.
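A sketch of what such a smart constructor might look like (this is not the actual Data.Ratio code, just the idea), reusing the Rational synonym from above:

-- Normalize on construction: divide out the gcd and keep the denominator positive.
mkRat :: Integer -> Integer -> Rational
mkRat _ 0 = error "mkRat: zero denominator"
mkRat n d = ((signum d * n) `div` g, abs d `div` g)
  where g = gcd n d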

This is displeasing in a dependently typed setting, however: we want to be able to formally prove the equality of some things. This “equality modulo normalization” leaves us with a choice. On one hand, we could provide a function which is essentially

foo : (a b : Rational) -> Either (reduce a = reduce b) (reduce a /= reduce b)

This doesn’t really help us though; there’s no way to express that a should be observationally equivalent to b. This is a problem seemingly as old as dependent types: how can we have a simple representation of equality that captures all the structure we want and none that we don’t?

Hiding away the representation of rationals certainly buys us something: we can use a smart constructor to ensure things are normalized. From there we could potentially prove a (difficult) theorem which essentially states that

=-with-norm : (a b c d : Integer) -> a * d = b * c -> mkRat a b = mkRat c d

This still leaves us with some woes, however: a lot of computations become difficult to talk about, since we’ve lost the helpful notion that denominator ∘ mkRat a = id and similar. The lack of transparency shifts a lot of the burden of proof onto the code privy to the internal representation of the type, the only place where we know enough to prove such things.

Really what we want to say is “Hey, just forget about a bit of the structure of this type and just consider things to be identical up to R”, where R is some equivalence relation, e.g.

  1. a R a
  2. a R b implies b R a
  3. a R b and b R c implies a R c

If you’re a mathematician, this should sound similar. It’s a lot like how we can take a set and partition it into equivalence classes. This operation is sometimes called “quotienting a set”.

For our example above, we really mean that our rational is a type quotiented by the relation (a, b) R (c, d) iff a * d = b * c.
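In plain Haskell the closest we can get is to bake that relation into an Eq instance for a wrapper type; this illustrates the idea but gives us no propositional equality to reason with (again, only a sketch):

newtype Rat = Rat (Integer, Integer)

instance Eq Rat where
  Rat (a, b) == Rat (c, d) = a * d == b * c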

Some other things that could potentially use quotienting

  • Sets
  • Maps
  • Integers
  • Lots of Abstract Types

Basically anything where we want to hide some of the implementation details that are irrelevant for their behavior.

More than Handwaving

Now that I’ve spent some time essentially waving my hands about quotient types, what are they? Clearly we need a rule that goes something like

Γ ⊢ A type    E is an equivalence relation on A
————————————————————————————————————————————————
Γ ⊢ A // E type

Along with the typing rule

Γ ⊢ a : A
——————————————
Γ ⊢ a : A // E

So all members of the original type belong to the quotiented type, and finally

Γ ⊢ a : A    Γ ⊢ b : A    Γ ⊢ a E b
————————————————————————————————————
Γ ⊢ a ≡ b : A // E

Notice something important here: that ≡ is the fancy shmancy judgmental equality baked right into the language. This calls decidability into question, since it seems that a E b could involve some non-trivial proof terms.

More than that, in a constructive, proof relevant setting things can be a bit trickier than they seem. We can’t just define a quotient to be the same type with a different equivalence relation, since that would imply some icky things.

To illustrate this problem, imagine we have a predicate P on a type A where a E b implies P a ⇔ P b. If we just redefine the equivalence relation on the quotient, P would not be a well-formed predicate on A // E, since a ≡ b : A // E doesn’t mean that P a ≡ P b. This would be unfortunate.

Clearly some subtler treatment of this is needed. To that end, I found this paper discussing some of the handling of NuPRL’s quotients enlightening.

How NuPRL Does It

The paper I linked to is a discussion on how to think about quotients in terms of other type theory constructs. In order to do this we need a few things first.

The first thing to realize is that NuPRL’s type theory is different from what you are probably used to. We don’t have a single magical global equality. Instead, we define equality inductively across each type. This means that our equality judgment doesn’t have to be natural in the type it works across; it can do specific things at each type. Perhaps the most frequent example is functional extensionality:

f = g ⇔ ∀ a. f a = g a

Okay, so now that we’ve tossed aside the notion of a single global equality, what else is new? Well, something new is the lens through which many people look at NuPRL’s type theory: PER semantics. Remember that a PER is a relation satisfying

  1. a R b → b R a
  2. a R b ∧ b R c → a R c

In other words, a PER is an equivalence relation that isn’t necessarily reflexive at all points.

The idea is to view types not as some opaque “thingy” but instead as partial equivalence relations across the set of untyped lambda calculus terms. Inductively defined equality falls right out of this idea since we can just define a ≡ b : A to be equivalent to (a, b) ∈ A.

Now another problem rears its head: what does a : A mean? Well, even though we’re dealing with PERs, it’s quite reasonable to say something is a member of a type if it’s related to itself. That is to say, each relation is a full equivalence relation on the things we call members of that type. We can therefore define a : A to be (a, a) ∈ A.

Another important constraint: in order for a type family to be well formed, it needs to respect the equality of the type it maps across. In other words, for all B : A → Type, we have (a, a') ∈ A ⇒ (B a = B a') ∈ U. This should seem on par with how we defined function equality, and we call this “type functionality”.

Let’s also touch on another concept: squashed types. The idea is to take a type and throw away all information other than whether or not it’s occupied. There are two basic kinds of squashing, extensional and intensional. In the intensional case we consider two squashed things equal if and only if the types they’re squashing are equal:

A = B
—————————
[A] = [B]

Now we can also consider only the behavior of the squashed type, the extensional view. Since the only behavior of a squashed type is simply existing, our extensional squash type has the equivalence

∥A∥ ⇔ ∥B∥
——————————
∥A∥ = ∥B∥

Now aside from this, the introduction of these types is basically the same: if we can prove that a type is occupied, we can grab a squashed type. Similarly, when we eliminate a squashed type all we get is its trivial occupant, called •.

Γ ⊢ A
———————
Γ ⊢ [A]

Γ, x : [A], Δ[•] ⊢ C[•]
————————————————————————
Γ, x : [A], Δ[x] ⊢ C[x]

What’s interesting is that when proving an equality judgment, we can unsquash both of these types. This is only because NuPRL’s equality proofs are computationally trivial.

Now with all of that out of the way, I’d like to present two typing rules. First

Γ ⊢ A ≡ A'    Γ, x : A, y : A ⊢ E[x; y] = E'[x; y]    E and E' are PERs
——————————————————————————————————————————————————————————————————————
Γ ⊢ A // E ≡ A' // E'

In English, two quotients are equal when the types and their quotienting relations are equal.

Γ, u : x ≡ y ∈ (A // E), v : ∥x E y∥, Δ[u] ⊢ C[u]
——————————————————————————————————————————————————
Γ, u : x ≡ y ∈ (A // E), Δ[u] ⊢ C[u]

There are a few new things here. The first is that we have a new Δ[u] thing. This is a result of dependent types: we can have things in our context that depend on u, so to indicate this we “split” the context into Γ, u, Δ and apply the dependent part of the context, Δ, to the variable it depends on, u.

Now the long and short of this is that when we’re trying to use an equivalence between two terms in a quotient, we only get the squashed proof. This does mean that we only need to provide a squashed proof to get the equality in the first place, though:

Γ ⊢ ∥x E y∥    Γ ⊢ x : A    Γ ⊢ y : A
——————————————————————————————————————
Γ ⊢ x ≡ y : A // E

Remember that we can trivially form an ∥A∥ from an A.

Now there’s just one thing left to talk about, using our quotiented types. To do this the paper outlines one primitive elimination rule and defines several others.

Γ, x : A, y : A, e : x E y, a : ND, Δ[ndₐ{x;y}] ⊢ |C[ndₐ{x;y}]|
————————————————————————————————————————————————————————————————
Γ, x : A // E, Δ[x] ⊢ |C[x]|

ND is an admittedly odd type that’s supposed to represent nondeterministic choice. It has two terms, tt and ff, and they’re considered “equal” under ND. However, nd returns its first argument if it’s fed tt and the second if it’s fed ff. Hence, nondeterminism.

Now in our rule we use this to indicate that if we’re eliminating some quotiented type, we can get any value that’s considered equal under E. We can only be assured that whatever we get when we eliminate a quotiented type will be related to x by the equivalence relation. The rule captures this notion by allowing us to randomly choose some y : A so that x E y.

Overall, this rule simply states that if C is occupied for any term related to x, then it is occupied for C[x].

Wrap up

As with my last post, here are some questions for the curious reader to pursue:

  • What elimination rules can we derive from the above?
  • If we’re only proving an equality, can we get more expressive rules?
  • What would an extensional quotient type look like?
  • Why would we want intensional or extensional?
  • How can we express quotient types with higher inductive types from HoTT?

The last one is particularly interesting.

Thanks to Jon Sterling for proofreading.

Categories: Offsite Blogs

Jasper Van der Jeugt: Generalizing function composition

Planet Haskell - Thu, 10/16/2014 - 6:00pm
TL;DR

In this blogpost I present a proof-of-concept operator $.$, which allows you to replace:

foo x0 x1 x2 ... xN = bar $ qux x0 x1 x2 ... xN

by:

foo = bar $.$ qux

Introduction

This is a literate Haskell file, which means you should be able to just drop it into GHCi and play around with it. You can find the raw .lhs file here. Do note that this requires GHC 7.8 (it was tested on GHC 7.8.2).

> {-# LANGUAGE FlexibleContexts #-}
> {-# LANGUAGE FlexibleInstances #-}
> {-# LANGUAGE OverlappingInstances #-}
> {-# LANGUAGE TypeFamilies #-}
> {-# LANGUAGE TypeOperators #-}
> import Data.Char (toLower)

If you have been writing Haskell code for a while, you have undoubtedly used the $ operator to “wrap” some expression with another function, mapping over the result type. For example, we can “wrap” the expression toLower 'A' with print to output the result.

print $ toLower 'A'

It is not unlikely either to have functions that just wrap other functions, e.g.:

> printLower :: Char -> IO ()
> printLower x = print $ toLower x

If the function that is being wrapped (toLower in the previous example) has only one argument, the . operator allows writing a very concise definition of functions which just wrap those single-argument functions.

> printLower' :: Char -> IO ()
> printLower' = print . toLower

However, this gets tedious when the number of arguments increases. Say that we have the following function which takes three arguments (don’t worry about the horrible implementation, but rather focus on the type):

> -- | Formats a double using a simple spec. Doesn't do proper rounding.
> formatDouble
>     :: Bool    -- ^ Drop trailing '0'?
>     -> Int     -- ^ #digits after decimal point
>     -> Double  -- ^ Argument
>     -> String  -- ^ Result
> formatDouble dropTrailingZero numDigits double =
>     let (pre, post) = case break (== '.') (show double) of
>             (x, '.' : y) -> (x, y)
>             (x, y)       -> (x, y)
>         post' = take numDigits (post ++ repeat '0')
>         pre'  = case pre of
>             '0' : x -> if dropTrailingZero then x else pre
>             _       -> pre
>     in pre' ++ "." ++ post'

We can wrap formatDouble using print by successively using the . operator, but the result does not look pretty, nor very readable:

> printDouble :: Bool -> Int -> Double -> IO ()
> printDouble = (.) ((.) ((.) print)) formatDouble

The $.$ operator

This makes one wonder if we can’t define an additional operator $.$ (pronounced holla-holla-get-dolla) which can be used like this:

> printDouble' :: Bool -> Int -> Double -> IO ()
> printDouble' = print $.$ formatDouble

Additionally, it should be generic, as in, it should work for an arbitrary number of arguments, so that we can also have:

> printMax' :: Int -> Int -> IO ()
> printMax' = print $.$ max

> printLower'' :: Char -> IO ()
> printLower'' = print $.$ toLower

From this, it becomes clear that the type of $.$ should be something like:

($.$) :: (a -> b) -> (x0 -> x1 -> ... -> xn -> a) -> (x0 -> x1 -> ... -> xn -> b)

The first question is obviously, can we write such an operator? And if we can, how generic is it?

When my colleague Alex asked me this question, I spent some time figuring it out. I previously thought it was not possible to write such an operator in a reasonably nice way, but after some experiments with the closed type families in GHC 7.8 I managed to get something working. However, the solution is far from trivial (and I now suspect more elegant solutions might exist).

A possible solution

The main ingredient for my solution is type-level composition.

> newtype (f :.: g) a = Comp {unComp :: f (g a)}
> infixl 9 :.:

This way we can represent nested types in an alternative way. If we have, for example:

> ex1 :: Maybe [Int]
> ex1 = Just [3]

We could represent the type of this as the composition of the type constructors Maybe and []:

> ex1Comp :: (Maybe :.: []) Int
> ex1Comp = Comp (Just [3])

We can nest these compositions arbitrarily. If we have three nested type constructors:

> ex2 :: Int -> (Maybe [Int])
> ex2 = \x -> Just [x + 1]

A nested :.: composition is used:

> ex2Comp :: ((->) Int :.: Maybe :.: []) Int
> ex2Comp = Comp (Comp (\x -> Just [x + 1]))

We already gave a hint to the solution here: (->) a (in this case (->) Int) is also a type constructor. We can nest these as well:

> formatDoubleComp
>     :: ((->) Bool :.: (->) Int :.: (->) Double) String
> formatDoubleComp = Comp (Comp formatDouble)

Now, let’s think about what happens to f :.: g if f and g are both a Functor:

> instance (Functor f, Functor g) => Functor (f :.: g) where
>     fmap f (Comp g) = Comp (fmap (fmap f) g)

The result is a Functor which allows us to fmap deep inside the original functor.
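For instance, taking ex1Comp from above and unwrapping with unComp so that GHCi can show the result, a single fmap reaches through both layers:

*Main> unComp (fmap (+ 1) ex1Comp)
Just [4]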

Additionally, we know that (->) a is a Functor widely known as Reader. This shows us that it indeed becomes feasible to apply a function to the final result of a function (in its :.: form), namely just by using fmap:

For example, for the function formatDouble, we get:

> printDoubleComp
>     :: ((->) Bool :.: (->) Int :.: (->) Double) (IO ())
> printDoubleComp = fmap print formatDoubleComp

At this point, it is clear that we could try to implement the $.$ operator by:

  1. converting the regular function to its :.: form (toComp);
  2. applying the transformation using fmap;
  3. converting the :.: form of the updated function back to a regular function (fromComp).

E.g.:

> printDouble''
>     :: Bool -> Int -> Double -> IO ()
> printDouble'' = fromComp (fmap print (toComp formatDouble))

However, implementing (1) and (3) turns out to be reasonably hard. I think it makes more sense for me to just give a high-level overview: a very substantial amount of explanation would be required to explain this to new Haskellers, and more experienced Haskellers would probably have more fun figuring this out themselves anyway.

We’re going to need the simple Id Functor; let’s inline it here.

> newtype Id a = Id {unId :: a}
>
> instance Functor Id where
>     fmap f (Id x) = Id (f x)

Implementing toComp involves implementing a typeclass with no fewer than three auxiliary type families.

> class ToComp t where
>     toComp :: t -> (F t :.: G t) (A t)
>
> type family F t where
>     F (a -> (b -> c)) = (->) a :.: F (b -> c)
>     F (b -> c)        = (->) b
>
> type family G t where
>     G (a -> (b -> c)) = G (b -> c)
>     G (b -> c)        = Id
>
> type family A t where
>     A (a -> (b -> c)) = A (b -> c)
>     A (b -> c)        = c
>
> instance ( F (a -> b) ~ (->) a
>          , G (a -> b) ~ Id
>          , A (a -> b) ~ b
>          ) => ToComp (a -> b) where
>     toComp f = Comp (Id . f)
>
> instance ( ToComp (b -> c)
>          , F (a -> (b -> c)) ~ ((->) a :.: F (b -> c))
>          , G (a -> (b -> c)) ~ G (b -> c)
>          , A (a -> (b -> c)) ~ A (b -> c)
>          ) => ToComp (a -> (b -> c)) where
>     toComp f = Comp $ Comp $ unComp . toComp . f

Implementing fromComp requires another typeclass, which in turn requires one auxiliary closed type family.

> class FromComp t where
>     fromComp :: t -> C t
>
> type family C t where
>     C (Id a)              = a
>     C (b -> Id a)         = b -> a
>     C (((->) b :.: f) a)  = b -> C (f a)
>     C ((f :.: g :.: h) a) = C ((f :.: g) (h a))
>
> instance FromComp (Id a) where
>     fromComp = unId
>
> instance FromComp (b -> Id a) where
>     fromComp f = unId . f
>
> instance ( FromComp (f a)
>          , C (((->) b :.: f) a) ~ (b -> C (f a))
>          ) => FromComp (((->) b :.: f) a) where
>     fromComp (Comp f) = fromComp . f
>
> instance ( FromComp ((f :.: g) (h a))
>          , C ((f :.: g :.: h) a) ~ C ((f :.: g) (h a))
>          ) => FromComp ((f :.: g :.: h) a) where
>     fromComp (Comp (Comp f)) = fromComp (Comp f)

Once we have these, the implementation of $.$ becomes:

> ($.$)
>     :: ( ToComp t
>        , Functor (F t)
>        , Functor (G t)
>        , FromComp ((F t :.: G t) b)
>        )
>     => (A t -> b)
>     -> t
>     -> C ((F t :.: G t) b)
> f $.$ t = fromComp $ fmap f (toComp t)

…where the implementation is arguably much easier to read than the type signature.

With this loaded up, GHCi now even manages to infer nice and readable type signatures:

*Main> :t print $.$ toLower
print $.$ toLower :: Char -> IO ()
*Main> :t print $.$ formatDouble
print $.$ formatDouble :: Bool -> Int -> Double -> IO ()

It doesn’t do well when type polymorphism is involved though; e.g. for max :: Ord a => a -> a -> a, we get:

*Main> :t print $.$ max
print $.$ max
  :: (ToComp (a -> a), FromComp (F (a -> a) (G (a -> a) (IO ()))),
      Show (A (a -> a)), Ord a, Functor (F (a -> a)),
      Functor (G (a -> a))) =>
     a -> C (F (a -> a) (G (a -> a) (IO ())))

Now, two questions remain for the interested reader:

  • Can the implementation be simplified? In particular, do we need such an unholy number of type families?
  • Can this be generalized to all Functors, rather than just the Reader Functor? I was able to do something like this, but it made the inferred types a lot less nice, and didn’t work that well in practice.

Thanks to my colleague Alex for proofreading!

Categories: Offsite Blogs