# News aggregator

### When is it appropriate to use exceptions in Haskell?

Considering the power and flexibility of monadic error handling, I was wondering in what cases it's considered necessary to use exceptions instead. The absence of exceptions in Go and Rust leads me to suspect that they are practically never necessary, especially considering those languages manage to go without exceptions in spite of lacking the type-system functionality (HKTs) that Haskell has for handling errors. Exceptions also seem to go against the Haskell spirit of representing potential failures in the type system so that they can't go unhandled at runtime, and asynchronous exceptions in particular seem like they could make code a lot more difficult to reason about.

I'm aware that Go and Rust do have a form of exceptions in their respective panic mechanisms, but these are coarse-grained and generally not recommended for most use cases. Rust's panic, I believe, doesn't even have a recover mechanism; it just terminates the thread in which it occurred.

To put it another way, is there anything that can be done now but couldn't be done in an exception-less Haskell (or at least one with only a coarse-grained panic mechanism like Rust's)?
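
A concrete way to frame the question: the same failure can be represented as a value (the monadic style) or thrown as a GHC runtime exception and caught with Control.Exception. A minimal sketch of the two styles side by side (safeDiv is a made-up helper, not from any library):

```haskell
import Control.Exception (ArithException, evaluate, try)

-- Failure as a value: the type forces the caller to handle it.
safeDiv :: Int -> Int -> Either String Int
safeDiv _ 0 = Left "divide by zero"
safeDiv x y = Right (x `div` y)

main :: IO ()
main = do
    print (safeDiv 10 0)  -- Left "divide by zero"
    -- Failure as an exception: invisible in the type, caught at runtime.
    r <- try (evaluate (10 `div` (0 :: Int))) :: IO (Either ArithException Int)
    print r
```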

submitted by logicchains[link] [39 comments]

### My first experience using Haskell under OS X Yosemite: Noob cabal Hell. I've wasted over three hours tonight trying to manually chase down packages and guess which is the right one. Still not working.

### ANN: TestExplode 0.1.0.0: testscript generator for all target languages

### Haskell extension for visual studio?

Are there any projects going on to make it possible to write Haskell in Visual Studio? If not, how would I go about making such an extension? I'll try it myself.

submitted by zai_naom[link] [4 comments]

### Is this the right way to understand Haskell's motto "avoid success at all costs" ?

I think it means, in the context of evolving the language: don't make compromises for the sake of convenience when they break the purity or consistency of the language, unless it's absolutely necessary for real-life programming tasks. Does that sound close?

submitted by SteveTheCatholic[link] [27 comments]

### Why is deriving not automatic?

So here's a problem I am currently having with Haskell, and one you guys can hopefully help me resolve.

If I have a function way down a call stack that uses higher-kinded type variables, and I need to do a quick trace to see its outputs, I often run into the issue that I have to add Show constraints to every single function in the entire call stack.

So my question is: why are Show, Eq, Ord, etc. not applied automatically in Haskell, given that they can already be implemented automatically? This problem is especially annoying when using newtype, as you have to copy-paste deriving clauses everywhere.

This is the last issue I have with Haskell, but it comes up a lot in debugging. So if anyone knows how to solve this correctly, I would be very grateful.

Edit: If you are thinking I am approaching the problem incorrectly, please tell me and don't just downvote :(
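
On the newtype pain specifically, GHC's GeneralizedNewtypeDeriving extension lets a newtype reuse the underlying type's instances instead of copy-pasting them. A minimal sketch (Score is a made-up type for illustration):

```haskell
{-# LANGUAGE GeneralizedNewtypeDeriving #-}

-- Show, Eq, and Ord are derived the usual (stock) way; Num is lifted
-- from the underlying Int by GeneralizedNewtypeDeriving, with no
-- hand-written boilerplate.
newtype Score = Score Int
  deriving (Show, Eq, Ord, Num)

main :: IO ()
main = print (Score 1 + Score 2)  -- Score 3
```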

submitted by enzain[link] [14 comments]

### [total noob] performance considerations (coming from Python)

I'm a Python programmer and have decided to dig into Haskell.

I think that because Python is an interpreted language, the programmer benefits from being aware of how what they write is going to be interpreted, and from avoiding unnecessary allocations and look-ups. Taking the time to learn which of several options is the most performant gives me insight into how the language works behind the scenes, and has helped me write better code, IMO.

My guess is that Haskell, as a compiled language, is less sensitive to different ways of expressing a solution.

I'm just on problem 4 of the 99 Haskell problems right now, and after I figured out how to solve it any way I could, I saw that there is a variety of solutions. In a certain way they all seem very similar. I'm wondering if there are any significant or important differences in execution efficiency.

My intuition is that #4 is the most wasteful. Then, between (#1 and #2) and (#3 and #5), I wonder if the fold, sum, and map of #3 and #5 are optimized by the compiler more than the pattern matching of #1 and #2.

Or do Haskell folk not think about this stuff because it is unnecessary?
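
For context, problem 4 asks for the length of a list. The two styles under discussion might look like this (illustrative versions, not the exact numbered solutions the poster refers to):

```haskell
-- Explicit pattern-matching recursion:
lenRec :: [a] -> Int
lenRec []     = 0
lenRec (_:xs) = 1 + lenRec xs

-- Fold-based; in practice GHC optimizes both styles well, so the
-- difference is usually more stylistic than performance-critical.
lenFold :: [a] -> Int
lenFold = foldl (\n _ -> n + 1) 0

main :: IO ()
main = print (lenRec "hello", lenFold "hello")  -- (5,5)
```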

Thanks

submitted by elbiot[link] [26 comments]

### Recommendations on beginner & intermediate Haskell exercises for practicing it?

I'm ready to finally take the plunge and start learning Haskell by writing it. I've read enough books about it and collected enough resources to reference as I need. Now I just need to start writing programs!

Can you recommend good exercises to do for learning Haskell? I'm very comfortable writing in Clojure, if that makes a difference.

submitted by SteveTheCatholic[link] [18 comments]

### [Question] Physical units in type system

I would like to write a library that makes it possible to have types composed out of units. So, for example,

```haskell
f :: PUnit Distance -> PUnit Distance -> PUnit (Square Distance)
f x y = x >*< y

c :: PUnit Distance -> PUnit Distance -> PUnit ()
c x y = x >/< y
```

compiles fine, but

```haskell
g :: PUnit Distance -> PUnit Time -> PUnit Distance
g x y = x >+< y
```

throws an error.

My questions are:

1. Is this possible?
2. How can I do it?
3. How could units cancel out? (This actually is the main question)

I already thought about it a bit:

1. We only need a limited number of base units, so something like

PUnit Time Weight Distance

could work...

2. Base units could be created the following way:

data Distance = Distance {meterCoefficient :: Double}

where

meterCoefficient x * y = y in meters.

Base units do not carry any value. Therefore, PUnit could also look like:

PUnit Value Time Distance Weight

Thanks for any help, I would really appreciate it!
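
This is possible with type-level machinery. A minimal sketch using DataKinds, where a quantity carries its unit exponents as type-level naturals (all names here, such as Q and the operators, are illustrative; full cancellation as in >/< needs type-level *integers*, e.g. from a units library, since Nat exponents can't go negative):

```haskell
{-# LANGUAGE DataKinds, KindSignatures, TypeOperators #-}
import GHC.TypeLits

-- Q m s: a Double tagged with metre exponent m and second exponent s.
newtype Q (m :: Nat) (s :: Nat) = Q Double
  deriving Show

-- Addition only type-checks for identical units.
(>+<) :: Q m s -> Q m s -> Q m s
Q a >+< Q b = Q (a + b)

-- Multiplication adds the exponents, so Distance >*< Distance :: Area.
(>*<) :: Q m1 s1 -> Q m2 s2 -> Q (m1 + m2) (s1 + s2)
Q a >*< Q b = Q (a * b)

type Distance = Q 1 0
type Area     = Q 2 0

area :: Distance -> Distance -> Area
area = (>*<)

main :: IO ()
main = print (area (Q 3) (Q 4))  -- Q 12.0
```

With these definitions, `x >+< y` for a Distance and a Time fails to compile, exactly as the poster wants.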

submitted by sammecs[link] [6 comments]

### To "instance C D", GHCi responds that C "is applied to too many type arguments".

### Is there a better way of expressing consecutive dots?

I want a function to use up two arguments before passing the result as argument to another function. So far the options for achieving this have been lambdas, explicit argument passing and the double dot. But I wonder if there's a better way of doing this, especially when it is about more than just two arguments but perhaps 3 or 4.

**Don't misunderstand me, I am not trying to produce particularly readable code. I am aware that explicit arguments and lambdas may be more readable here. I am trying to learn different ways of expressing things, not which of these is most readable.**

So: how could I get rid of the (.) . thing?
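
For the two-argument case there is a common idiom of naming the "double dot". A small sketch ((.:) is a conventional nickname for this combinator, found e.g. in the composition package, not in the Prelude):

```haskell
-- (.) . (.) pipes a two-argument function's result into a
-- one-argument one: (f .: g) x y == f (g x y).
(.:) :: (c -> d) -> (a -> b -> c) -> a -> b -> d
(.:) = (.) . (.)

-- Usage: add two numbers, then show the result.
addShow :: Int -> Int -> String
addShow = show .: (+)

main :: IO ()
main = putStrLn (addShow 2 3)  -- prints "5"
```

For three or four arguments the same trick extends ((.) . (.) . (.), and so on), though at that point a lambda is usually clearer.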

Exhibit A:

```haskell
allTargeting :: Lesson s -> WeightMap -> Int
allTargeting = (foldl (+) 0 .) . sequenceA . sequenceA
    (map (Map.findWithDefault 0 .)
         [ Slot . timeslot
         , Day . day
         , uncurry Cell . time
         ])
```

submitted by CynicalHarry[link] [10 comments]

### Overlaps when using FlexibleInstances

So I'm wondering how I can get compile-time errors when writing overlapping instances with the FlexibleInstances extension. I can get a compile-time error when using an overlapping case; but what I want is a compile-time error when defining the instances.

I would like this for developing libraries, to make sure that a user can't run into any unexpected cases when using my class...

An Example:

```haskell
{-# LANGUAGE FlexibleInstances #-}
module Main where

d = 0.3 :: Double
lst = [0.3, 0.6] :: [Double]

main = do
    putStrLn "foo"
    -- Uncomment the following line for compile failure
    -- putStrLn $ show $ convert d lst

class Convertable a where
    convert :: Double -> a -> a

instance Convertable Double where
    convert d x = d * x

instance (Functor f, Convertable a) => Convertable (f a) where
    convert d x = convert d <$> x

instance Convertable a => Convertable [a] where
    convert d lst = (convert d) <$> lst
```

submitted by muzzlecar[link] [1 comment]

### Trying to understand sum as coproduct using Haskell

I've spent the last week or so learning category theory. I have this almost funny concept in my head that category theory could be abstracted to the phrase "things that are the same, are the same". It has given me a new perspective when reading type signatures (I can now actually understand the type of Coyoneda and fix).

But having come from groups and rings the one thing I can't seem to get my head around is the product/coproduct duality. I'm hoping that a Haskell explanation of it might help. This is what I have so far:

I get the duality concept in, say, Comonad: extract :: c a -> a makes sense as coreturn, i.e. return :: a -> m a with the arrows reversed.

I get that in Hask the canonical sum type is Either a b = Left a | Right b and the canonical product type is just the tuple (a,b). In that sense I can see the duality: the injection Left :: a -> Either a b is the dual of the projection fst :: (a,b) -> a. So in terms of sum types and product types this also seems to make sense. But I don't understand the link to *sum* and *product*.
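
One thing that may help is comparing the mediating arrows of the two universal properties rather than the types alone. A sketch (fanout and fanin are renamed versions of Control.Arrow's (&&&) and the Prelude's either):

```haskell
-- Product: given arrows from c into both components, there is a
-- unique mediating arrow from c into the product.
fanout :: (c -> a) -> (c -> b) -> c -> (a, b)
fanout f g x = (f x, g x)

-- Coproduct: given arrows out of both components into c, there is a
-- unique mediating arrow out of the coproduct. All arrows reversed.
fanin :: (a -> c) -> (b -> c) -> Either a b -> c
fanin f _ (Left a)  = f a
fanin _ g (Right b) = g b

main :: IO ()
main = do
    print (fanout show (+ 1) (41 :: Int))
    print (fanin length (* 2) (Left "abc" :: Either String Int))
```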

For example I get that OR is the dual of AND and that OR distributes over AND: (A || C) && (B || C) == C || (A && B)

If we uncurry each:

```haskell
(||) :: Eq a => (a,a) -> a
(&&) :: Eq a => (a,a) -> a
```

From their type signatures I can't really see how they represent the dual of each other, or how || is product while && is coproduct.

On sets, A x B is the Cartesian product, while the coproduct is the disjoint union A + B:

```haskell
product :: ([a],[b]) -> [(a,b)]
sum     :: ([a],[b]) -> [Either (Int,a) (Int,b)]
```

Obviously in Set the objects are just sets and the "type" of the contents is not relevant. So these could be better phrased as:

```haskell
product :: ([a],[a]) -> [a]
sum     :: ([a],[a]) -> [a]
```

This looks an awful lot like the AND/OR example and very much unlike the Either/Tuple example.

Finally, with the archetype: in group theory the group G = (Z,+) of integers under addition is defined as G = &lt;1&gt;, the cyclic group generated by repeated addition of 1 and -1. M = (Z,*) is the monoid of integers under multiplication. To form the ring R = (Z,+,0,*,1) we need a definition of multiplication with respect to G. The only definition that works is (End(G), .), the set of endomorphisms on the group, with composition as the operator.

In this case each endomorphism is of the form f_x(y) = x * y for all y in Z, where the image of f_x is &lt;x&gt;. So for example f_7 would take the integers to the isomorphic &lt;7&gt;, i.e. {0,7,-7,14,-14,...}. Each endomorphism is an automorphism except f_0, which takes everything to 0.

These endomorphisms are the only possible maps that preserve the group structure and keep us in the category Grp. This definition is what makes negative * negative = positive, because f_-n, as well as "scaling by n", also rotates the number line by making the generator &lt;-n&gt; instead of &lt;+n&gt;.

Each multiplication and addition could then be seen as these arrows:

```haskell
prodN :: Int -> Int
sumN  :: Int -> Int
```

or in Hask they would be the uncurried functions that take (Int,Int) -> Int.

So in all three examples (truth statements with (OR,AND), sets with (x,+) and integers with (*,+)) the arrows were all of the form (A,A) -> A. If each were expressed as a Monoid the arrows would simply be A -> A.

Now I'm not so sure what these have to do with product and sum types and the projection arrows: fst :: (a,b) -> a and cofst :: a -> (a,b). Also, is there any relation from product/coproduct to the ring-theory definition of product as the set of endomorphisms, with composition as the operator, which preserves addition in the group?

This post was way longer than I thought it would be. I'd appreciate it if you took the time to read through it.

submitted by TheCriticalSkeptic[link] [5 comments]

### Beautiful parallel non-determinism; Using non-determinism for parallel file search

### Rewriting a function in CAF (constant applicative form)

I read that Haskell will only remember the results of evaluating a binding if it is a CAF (as opposed to being a lambda expression with arguments).

I'm trying to rewrite some code in this form so that more of my functions become "memoized".

I was wondering: can all functions be written in CAF?

Let's say I had the function below; how would it look as a CAF?

```haskell
f arg1 arg2 = map (\x -> someFunction x arg1) arg2
```
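
For intuition, the classic example of CAF-based memoization is a Fibonacci list: a top-level binding with no arguments is evaluated at most once and then shared, so indexing into it is effectively memoized across calls. A function with arguments (like f above) is not itself a CAF, but it can be defined in terms of one:

```haskell
-- fibs is a CAF: a top-level value taking no arguments. GHC evaluates
-- it once and shares it, so repeated calls to fib reuse the
-- already-computed prefix of the list.
fibs :: [Integer]
fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

-- fib is not a CAF (it has an argument), but it indexes into one.
fib :: Int -> Integer
fib n = fibs !! n

main :: IO ()
main = print (fib 50)  -- 12586269025, computed quickly thanks to sharing
```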

submitted by asswaxer[link] [8 comments]