News aggregator

[ANNOUNCE] New Core Libraries Committee Members

libraries list - Fri, 07/11/2014 - 11:26pm
I'm pleased to announce that Eric Mertens and Luite Stegeman have joined the core libraries committee.

http://www.haskell.org/haskellwiki/Core_Libraries_Committee

Brent Yorgey and Doug Beardsley stepped down to make room. I'd like to thank them for helping us get the committee started.

In the short term, Eric will probably help out with moving the Applicative Monad Proposal forward and with the knock-on effects of that change, and Luite will join Joachim Breitner in trying to figure out the right path forward for the split base proposal.

-Edward Kmett

_______________________________________________
Libraries mailing list
Libraries< at >haskell.org
http://www.haskell.org/mailman/listinfo/libraries
Categories: Offsite Discussion

PPDP 2014 Call for Participation

General haskell list - Fri, 07/11/2014 - 11:00pm
======================================================================
CALL FOR PARTICIPATION: PPDP 2014

16th International Symposium on
Principles and Practice of Declarative Programming
Canterbury, Kent, September 8-10, 2014
http://users-cs.au.dk/danvy/ppdp14/

co-located with

LOPSTR 2014
24th International Symposium on
Logic-Based Program Synthesis and Transformation
Canterbury, Kent, September 9-11, 2014
http://www.iasi.cnr.it/events/lopstr14/
======================================================================

Registration is now open:
http://www.cs.kent.ac.uk/events/2014/ppdp-lopstr-14/

A significant discount is available when registering for both events, especially as a student (until August 8).

PPDP 2014 features

* an invited talk by Roberto Giacobazzi, shared with LOPSTR: "Obscuring Code -- Unveiling and Veiling Information in Programs"
* no fewer than 4 distilled tutorials by
  - Henrik Nilsson a
Categories: Incoming News

wren gayle romano: #done

Planet Haskell - Fri, 07/11/2014 - 7:54pm

Holy hell, things are bad for everyone.

I've started having PTSD issues again. One of my wife's coworkers got thrown in jail for 24 hours due to a domestic violence accusation (as required by Indiana state law for every accusation with any shred of evidence). Once he got out he filed for divorce because of it, whereupon his wife shot their son and herself and lit the house on fire, timed at 17 minutes before he was scheduled to (and did) arrive to pick up their son. An online friend of mine was dealing with a family crisis, got dumped by her fiancée, and has been on suicide watch. And now another friend is dealing with a suicide close to her.

WTF world? W. T. F?



comments
Categories: Offsite Blogs

What is the state of the art in testing code generation?

haskell-cafe - Fri, 07/11/2014 - 6:58pm
I am implementing an EDSL that compiles to SQL, and I am wondering what the state of the art is in testing code generation. All the Haskell libraries I could find that deal with SQL generation are tested by implementing multiple one-off, ad hoc queries and checking that, when either compiled to SQL or run against a database, they give the expected, prespecified result.

* https://github.com/prowdsponsor/esqueleto/blob/master/test/Test.hs
* https://github.com/m4dc4p/haskelldb/blob/master/test/TestCases.hs
* https://github.com/yesodweb/persistent/blob/master/persistent-test/SumTypeTest.hs

I couldn't find any tests for groundhog.

* https://github.com/lykahb/groundhog

I also had a look at Javascript generators. They take a similar ad hoc, one-off approach.

* https://github.com/valderman/haste-compiler/tree/master/Tests
* https://github.com/faylang/fay/tree/master/tests

Is this the best we can do in Haskell? Certainly it seems hard to use a QuickCheck/SmallCheck approach for this purpose. Is there any wa
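One property-based alternative to one-off queries is to compare the generated code's behavior against a reference semantics written directly in Haskell. The sketch below is not from the thread: Expr, compile, and the toy stack-machine backend are all invented here to stand in for an SQL generator and database, purely to illustrate the shape of such a test.

```haskell
import Test.QuickCheck

-- A toy expression EDSL standing in for an SQL-generating language.
data Expr = Lit Int | Add Expr Expr | Mul Expr Expr
  deriving Show

-- Reference semantics: evaluate directly in Haskell.
eval :: Expr -> Int
eval (Lit n)   = n
eval (Add a b) = eval a + eval b
eval (Mul a b) = eval a * eval b

-- The "code generator": here it targets a tiny stack machine, standing
-- in for the SQL backend.
data Instr = Push Int | IAdd | IMul
  deriving Show

compile :: Expr -> [Instr]
compile (Lit n)   = [Push n]
compile (Add a b) = compile a ++ compile b ++ [IAdd]
compile (Mul a b) = compile a ++ compile b ++ [IMul]

-- The "backend": run the generated code.
exec :: [Instr] -> Int
exec = head . foldl step []
  where
    step st       (Push n) = n : st
    step (y:x:st) IAdd     = (x + y) : st
    step (y:x:st) IMul     = (x * y) : st
    step st       _        = st  -- unreachable for well-formed code

-- Random programs, size-bounded so they stay small.
instance Arbitrary Expr where
  arbitrary = sized gen
    where
      gen 0 = Lit <$> arbitrary
      gen n = oneof
        [ Lit <$> arbitrary
        , Add <$> gen (n `div` 2) <*> gen (n `div` 2)
        , Mul <$> gen (n `div` 2) <*> gen (n `div` 2)
        ]

-- The key property: compiled code agrees with the reference semantics
-- on every generated program.
prop_codegen :: Expr -> Bool
prop_codegen e = exec (compile e) == eval e

main :: IO ()
main = quickCheck prop_codegen
```

For a real SQL generator the backend would be an actual database connection, which makes the property test slower but tests the thing that actually matters: agreement between in-Haskell and in-database semantics.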
Categories: Offsite Discussion

Why is "sum" lazy?

Haskell on Reddit - Fri, 07/11/2014 - 5:28pm

I understand why one might want to use foldl in a lazy manner, but if the function passed to foldl is (+), why would anyone object to using foldl'? I work in big data, and I always forget that sum is lazy and I end up getting stack size overflows, tracing all the way back to sum. If it's a teaching function (like some argue foldl is), then can we seriously consider an official, built-in prelude which is more sane? I see that idea mentioned here a lot, but can we make that dream a reality?
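For readers hitting the same stack overflows, the usual workaround is Data.List.foldl'. A minimal sketch of the difference (lazySum below mirrors how the lazy sum behaves; it is not necessarily the Prelude's exact definition):

```haskell
import Data.List (foldl')

-- A lazy left fold builds a chain of (+) thunks a million deep before
-- forcing any of them, which is what overflows the stack:
lazySum :: [Int] -> Int
lazySum = foldl (+) 0

-- The strict variant forces the accumulator at every step, so it runs
-- in constant stack space:
strictSum :: [Int] -> Int
strictSum = foldl' (+) 0

main :: IO ()
main = print (strictSum [1 .. 1000000])  -- prints 500000500000
```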

submitted by Pugolicious2244
[link] [19 comments]
Categories: Incoming News

Is there any library that deals with the creation of trees with shared branches?

Haskell on Reddit - Fri, 07/11/2014 - 4:07pm

Code explains better than words, so, take a look:

type TreeId = Int

leaf      :: Int -> TreeId
isLeaf    :: TreeId -> Bool
leafValue :: TreeId -> Int

node      :: TreeId -> TreeId -> TreeId
isNode    :: TreeId -> Bool
nodeLeft  :: TreeId -> TreeId
nodeRight :: TreeId -> TreeId

toTree :: TreeId -> Tree

a = node (leaf 1) (leaf 2)
b = node (leaf 0) (node (leaf 1) (leaf 2))

main = do
  print (a == nodeLeft b)
  print (a == b)

Output:

True
False

What is going on is the following: node and leaf are constructors for a Tree type. There is a detail, though: before creating the tree, those functions should check, in constant time, whether an identical tree already exists. If it doesn't, they create the Tree, tag it with a unique id, and return that id. If it does, instead of actually allocating more memory, they just return the id of the existing tree. Notice that a and (nodeLeft b) are the same value, because they are the same tree. This guarantees that the heap stays compact, with not a single duplicated branch ever stored.

My questions are:

  1. Is there a name for this?

  2. Are there good-performing libraries available?
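On question 1, the technique described is usually called hash-consing (the resulting structure is a DAG with maximal sharing). A minimal sketch of the idea using an intern table keyed on node shape; all names here are illustrative, and a Data.Map gives O(log n) rather than the strictly constant-time lookup asked for (a real implementation would use a hash table):

```haskell
import qualified Data.Map as M
import Control.Monad.State (State, evalState, get, put)

type TreeId = Int

-- The shape of a node, with children referred to by their ids.
data Shape = LeafS Int | NodeS TreeId TreeId
  deriving (Eq, Ord, Show)

-- Intern table mapping shapes to ids, plus the next fresh id.
type Pool = (M.Map Shape TreeId, TreeId)

-- Intern a shape: if this exact shape was built before, return the
-- existing id instead of allocating a new node.
intern :: Shape -> State Pool TreeId
intern s = do
  (tbl, next) <- get
  case M.lookup s tbl of
    Just i  -> return i
    Nothing -> do
      put (M.insert s next tbl, next + 1)
      return next

leaf :: Int -> State Pool TreeId
leaf n = intern (LeafS n)

node :: TreeId -> TreeId -> State Pool TreeId
node l r = intern (NodeS l r)

prog :: State Pool (Bool, Bool)
prog = do
  l1  <- leaf 1
  l2  <- leaf 2
  a   <- node l1 l2     -- a = node (leaf 1) (leaf 2)
  l0  <- leaf 0
  sub <- node l1 l2     -- same shape as a, so the same id comes back
  b   <- node l0 sub
  return (a == sub, a == b)

main :: IO ()
main = print (evalState prog (M.empty, 0))  -- prints (True,False)
```

The construction has to be monadic (or otherwise effectful) because building a node consults and updates the shared intern table, which is the main ergonomic difference from the pure API in the post.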

submitted by SrPeixinho
[link] [14 comments]
Categories: Incoming News

help with some code

haskell-cafe - Fri, 07/11/2014 - 4:01pm
Hi Cafe,

I've got some code where a user can provide a filter to subscribe to a particular type of event. The events come in as XML and are parsed using the toEvent method. The code works fine but is repetitive, since each event record potentially has the information needed to parse the XML using the field names and types. So is there a way to get rid of the repetition and streamline the code? Should I be using TH or lenses or something else entirely?

Here is the paste: http://lpaste.net/107338

Thanks for any pointers!
Grant
Categories: Offsite Discussion

ANNOUNCE: GHC version 7.8.3

General haskell list - Fri, 07/11/2014 - 2:40pm
==============================================================
The (Interactive) Glasgow Haskell Compiler -- version 7.8.3
==============================================================

The GHC Team is pleased to announce a new patchlevel release of GHC, 7.8.3. This is an important bugfix release relative to 7.8.2 (with over 50 defects fixed), so we highly recommend upgrading from the previous 7.8 releases.

The full release notes are here:

https://www.haskell.org/ghc/docs/7.8.3/html/users_guide/release-7-8-3.html

How to get it
~~~~~~~~~~~~~

The easy way is to go to the web page, which should be self-explanatory:

https://www.haskell.org/ghc/

We supply binary builds in the native package format for many platforms, and the source distribution is available from the same place. Packages will appear as they are built - if the package for your system isn't available yet, please try again later.

Background
~~~~~~~~~~

Haskell is a standard lazy functional programming language. GHC is
Categories: Incoming News

Recovering preconditions from Data.Constraint.(:-)

Haskell on Reddit - Fri, 07/11/2014 - 1:12pm

In the constraints package's Data.Constraint module, the 'a :- b' type expresses a dependency: the Constraint 'b' is entailed by the Constraint 'a'.

e.g.

(Show a, Show b) :- Show (a,b)

Now, for my use case: I am trying to write a Show1 instance for a GADT:

data Fmt r a where
  Int  :: Fmt r Int
  (:&) :: Fmt r a -> Fmt r b -> Fmt r (a,b)
  Lift :: r a -> Fmt r a
  ...

In brief, (Show1 (Fmt r)) would allow me to show any Fmt r a whenever (Show a). However, when I try to write showsPrec1 for the case:

p :& q -> ...

I know from the type of showsPrec1 that there is an instance (Show (b,c)), since I have the original constraint (Show a) for Fmt r a, and in the (:&) case, the type a is refined to (b,c). But, I have been unable to convince the type checker that-- no, really-- there are instances (Show b, Show c), and so I should be able to call showsPrec1 on p and q.

I've turned to the constraints package to help me manage this explicitly, but I haven't figured out how, yet.

How can I recover the constraints (Show b, Show c) in this situation?

I suppose it comes down to being able to write a function:

super :: Dict a -> (b :- a) -> Dict b
super Dict = _

Okay, so in the general case, super is impossible. It falls prey to the fallacy of 'P -> Q; Q; therefore P' (affirming the consequent). I'm not satisfied with that answer, though. In the Show example, we have an instance

instance (Show a, Show b) :=> Show (a,b) where
  ins = Sub Dict

which models the actual instance

instance (Show a, Show b) => Show (a,b) where
  showsPrec d (a,b) = ...

Why shouldn't we be able to derive knowledge of the preconditions, given proof of the consequent? Isn't the instance for Show (a,b) actually stating that Show holds for (a,b) iff Show holds for a and for b, i.e. a biconditional rather than an implication?
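For what it's worth, the general super really is underivable: an instance context is a sufficient condition, not a necessary one, so GHC cannot treat it as a biconditional. A common workaround is to pack the needed evidence into the GADT constructor itself, so it never has to be recovered from Show (a, b). The sketch below uses a simplified, single-parameter Fmt with invented names, not the poster's exact type:

```haskell
{-# LANGUAGE GADTs #-}

data Fmt a where
  IntF :: Fmt Int
  (:&) :: Fmt a -> Fmt b -> Fmt (a, b)
  Lift :: Show a => a -> Fmt a  -- Show evidence packed into the constructor

-- No Show constraint needed at the call site: each case either refines
-- the type (IntF), recurses structurally ((:&)), or unpacks the stored
-- dictionary (Lift).
render :: Fmt a -> a -> String
render IntF     n      = show n
render (p :& q) (x, y) = "(" ++ render p x ++ ", " ++ render q y ++ ")"
render (Lift _) x      = show x

main :: IO ()
main = putStrLn (render (IntF :& Lift "hi") (1, "bye"))
```

The structural recursion in the (:&) case sidesteps the need for a Show (a, b) dictionary entirely, which is often enough to write a Show1-style instance without any constraint plumbing.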

submitted by resrvsgate
[link] [7 comments]
Categories: Incoming News

Embedding version info in executables

haskell-cafe - Fri, 07/11/2014 - 10:26am
What are existing solutions for embedding version info (git revision, build date/time, versions of dependencies) in Haskell programs?

Roman
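One common pattern for the git-revision part is a Template Haskell splice that shells out at compile time. The sketch below is illustrative, not any specific library's API; it assumes the build runs inside a git checkout and that the process package is available:

```haskell
{-# LANGUAGE TemplateHaskell #-}
module BuildInfo (gitRevision) where

import Language.Haskell.TH (Q, Exp, runIO, litE, stringL)
import System.Process (readProcess)

-- Ask git for the current revision at *compile* time and splice the
-- answer in as a plain string literal, so it is baked into the binary.
gitRevision :: Q Exp
gitRevision = do
  out <- runIO (readProcess "git" ["rev-parse", "--short", "HEAD"] "")
  litE (stringL (takeWhile (/= '\n') out))
```

It has to be used from another module (a splice cannot be run in the module that defines it), e.g. putStrLn ("revision: " ++ $(gitRevision)). For the package's own version, the cabal-generated Paths_<pkg> module exposes a version value; versions of dependencies are harder and typically need build-system support.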
Categories: Offsite Discussion

Edward Z. Yang: Type classes: confluence, coherence and global uniqueness

Planet Haskell - Fri, 07/11/2014 - 10:07am

Today, I'd like to talk about some of the core design principles behind type classes, a wildly successful feature in Haskell. The discussion here is closely motivated by the work we are doing at MSRC to support type classes in Backpack. While I was doing background reading, I was flummoxed to discover widespread misuse of the terms "confluence" and "coherence" with respect to type classes. So in this blog post, I want to settle the distinction, and propose a new term, "global uniqueness of instances", for the property which people have colloquially been referring to as confluence and coherence.

Let's start with the definitions of the two terms. Confluence is a property that comes from term-rewriting: a set of instances is confluent if, no matter what order constraint solving is performed, GHC will terminate with a canonical set of constraints that must be satisfied for any given use of a type class. In other words, confluence says that we won't conclude that a program doesn't type check just because we swapped in a different constraint solving algorithm.

Confluence's closely related twin is coherence (defined in the paper "Type classes: exploring the design space"). This property states that every different valid typing derivation of a program leads to a resulting program that has the same dynamic semantics. Why could differing typing derivations result in different dynamic semantics? The answer is that context reduction, which picks out type class instances, elaborates into concrete choices of dictionaries in the generated code. Confluence is a prerequisite for coherence, since one can hardly talk about the dynamic semantics of a program that doesn't type check.

So, what is it that people often refer to when they compare Scala type classes to Haskell type classes? I am going to refer to this as global uniqueness of instances, defined as follows: in a fully compiled program, for any type, there is at most one instance resolution for a given type class. Languages with local type class instances, such as Scala, generally do not have this property, and this assumption is a very convenient one when building abstractions like sets.

So, what properties does GHC enforce, in practice? In the absence of any type system extensions, GHC employs a set of rules to ensure that type class resolution is confluent and coherent. Intuitively, it achieves this by having a very simple constraint solving algorithm (generate wanted constraints and solve wanted constraints) and then requiring the set of instances to be nonoverlapping, ensuring there is only ever one way to solve a wanted constraint. Overlap is a more stringent restriction than either confluence or coherence, and via the OverlappingInstances and IncoherentInstances extensions, GHC allows a user to relax this restriction "if they know what they're doing."

Surprisingly, however, GHC does not enforce global uniqueness of instances. Imported instances are not checked for overlap until we attempt to use them for instance resolution. Consider the following program:

-- T.hs
data T = T

-- A.hs
import T
instance Eq T where

-- B.hs
import T
instance Eq T where

-- C.hs
import A
import B

When compiled with one-shot compilation, C will not report overlapping instances unless we actually attempt to use the Eq instance in C. This is by design: ensuring that there are no overlapping instances eagerly requires eagerly reading all the interface files a module may depend on.

We might summarize these three properties in the following manner. Culturally, the Haskell community expects global uniqueness of instances to hold: the implicit global database of instances should be confluent and coherent. GHC, however, does not enforce uniqueness of instances: instead, it merely guarantees that the subset of the instance database it uses when it compiles any given module is confluent and coherent. GHC does do some tests when an instance is declared to see if it would result in overlap with visible instances, but the check is by no means perfect; truly, type-class constraint resolution has the final word. One mitigating factor is that in the absence of orphan instances, GHC is guaranteed to eagerly notice when the instance database has overlap (assuming that the instance declaration checks actually worked...)

Clearly, the fact that GHC's lazy behavior is surprising to most Haskellers means that the lazy check is mostly good enough: a user is likely to discover overlapping instances one way or another. However, it is relatively simple to construct example programs which violate global uniqueness of instances in an observable way:

-- A.hs
module A where
data U = X | Y deriving (Eq, Show)

-- B.hs
module B where
import Data.Set
import A

instance Ord U where
  compare X X = EQ
  compare X Y = LT
  compare Y X = GT
  compare Y Y = EQ

ins :: U -> Set U -> Set U
ins = insert

-- C.hs
module C where
import Data.Set
import A

instance Ord U where
  compare X X = EQ
  compare X Y = GT
  compare Y X = LT
  compare Y Y = EQ

ins' :: U -> Set U -> Set U
ins' = insert

-- D.hs
module Main where
import Data.Set
import A
import B
import C

test :: Set U
test = ins' X $ ins X $ ins Y $ empty

main :: IO ()
main = print test

-- OUTPUT
$ ghc -Wall -XSafe -fforce-recomp --make D.hs
[1 of 4] Compiling A                ( A.hs, A.o )
[2 of 4] Compiling B                ( B.hs, B.o )
B.hs:5:10: Warning: Orphan instance: instance [safe] Ord U
[3 of 4] Compiling C                ( C.hs, C.o )
C.hs:5:10: Warning: Orphan instance: instance [safe] Ord U
[4 of 4] Compiling Main             ( D.hs, D.o )
Linking D ...
$ ./D
fromList [X,Y,X]

Locally, all type class resolution was coherent: in the subset of instances each module had visible, type class resolution could be done unambiguously. Furthermore, the types of ins and ins' discharge type class resolution, so that in D when the database is now overlapping, no resolution occurs, so the error is never found.

It is easy to dismiss this example as an implementation wart in GHC, and continue pretending that global uniqueness of instances holds. However, the problem with global uniqueness of instances is that it is inherently nonmodular: you might find yourself unable to compose two components because they accidentally defined the same type class instance, even though these instances are plumbed deep in the implementation details of the components. This is a big problem for Backpack, or really any module system whose mantra of separate modular development seeks to guarantee that linking will succeed if the library writer and the application writer develop to a common signature.

Categories: Offsite Blogs

Dominic Orchard: Automatic SIMD Vectorization for Haskell and ICFP 2013

Planet Haskell - Fri, 07/11/2014 - 9:00am

I had a great time at ICFP 2013 this year where I presented my paper “Automatic SIMD Vectorization for Haskell”, which was joint work with Leaf Petersen and Neal Glew of Intel Labs. The full paper and slides are available online. Our paper details the vectorization process in the Intel Labs Haskell Research Compiler (HRC) which gets decent speedups on numerical code (between 2-7x on 256-bit vector registers). It was nice to be able to talk about HRC and share the results. Paul (Hai) Liu also gave a talk at the Haskell Symposium which has more details about HRC than the vectorization paper (see the paper here with Neal Glew, Leaf Petersen, and Todd Anderson). Hopefully there will be a public release of HRC in future.  

Still more to do

It’s been exciting to see the performance gains in compiled functional code over the last few years, and it’s encouraging to see that there is still much more we can do and explore. HRC outperforms GHC on roughly 50% of the benchmarks, showing some interesting trade-offs going on in the two compilers. HRC is particularly good at compiling high-throughput numerical code, thanks to various strictness/unboxing optimisations (and the vectorizer), but there is still more to be done.

Don’t throw away information about your programs

One thing I emphasized in my talk was the importance of keeping, not throwing away, the information encoded in our programs as we progress through the compiler stack. In the HRC vectorizer project, Haskell’s Data.Vector library was modified to distinguish between mutable array operations and “initializing writes”, a property which then gets encoded directly in HRC’s intermediate representation. This makes vectorization discovery much easier. We aim to preserve as much effect information as possible in the IR from the original Haskell source.

This connected nicely with something Ben Lippmeier emphasised in his Haskell Symposium paper this year (“Data Flow Fusion with Series Expressions in Haskell”, joint with Manuel Chakravarty, Gabriele Keller and Amos Robinson). They provide a combinator library for first-order non-recursive dataflow computations which is guaranteed to be optimised using flow fusion (outperforming current stream fusion techniques). The important point Ben made is that, if your program fits the pattern, this optimisation is guaranteed. As well as being good for the compiler, this provides an obvious cost model for the user (no more games trying to coax the compiler into optimising in a particular way).

This is something that I have explored in the Ypnos array language, where the syntax is restricted to give (fairly strong) language invariants that guarantee parallelism and various optimisations, without undecidable analyses. The idea is to make static as much effect and coeffect (context dependence) information as possible. In Ypnos, this was so successful that I was able to encode Ypnos’ language invariant of no out-of-bounds array access directly in Haskell’s type system (shown in the DSL’11 paper; this concept was also discussed briefly in my short language design essay).

This is a big selling point for DSLs in general: restrict a language such that various program properties are statically decidable, facilitating verification and optimisation.

Ypnos has actually had some more development in the past year, so if things progress further, there may be some new results to report on. I hope to be posting again soon about more research, including the ongoing work with Tomas Petricek on coeffect systems, and various other things I have been playing with. – D


Categories: Offsite Blogs

OCL 2014: Submission Deadline Extended by One Week

General haskell list - Fri, 07/11/2014 - 6:25am
(Apologies for duplicates)

**************************************************************
**  Submission Deadline Extended to July 18th, 2014         **
**************************************************************

CALL FOR PAPERS

14th International Workshop on OCL and Textual Modeling
Applications and Case Studies (OCL 2014)

Co-located with ACM/IEEE 17th International Conference on
Model Driven Engineering Languages and Systems (MODELS 2014)

September 30, 2014, VALENCIA, SPAIN
http://www.software.imdea.org/OCL2014/

Modeling started out with UML and its precursors as a graphical notation. Such visual representations enable direct intuitive capturing of reality, but some of their features are difficult to formalize and lack the level of precision required to create complete and unambiguous specifications. Limitations of the graphical notations encouraged the development of text-based modeling languages that
Categories: Incoming News


GHC-7.8.3 is out!

Haskell on Reddit - Fri, 07/11/2014 - 6:18am
Categories: Incoming News

ANN: FFI bindings to cuBLAS and cuSPARSE

haskell-cafe - Fri, 07/11/2014 - 6:00am
I have written FFI bindings to the cuBLAS and cuSPARSE libraries, which are CUDA libraries for executing linear algebra computations on the GPU. It's a relatively straightforward translation of the C API. It's slightly novel in that I use language-c and Template Haskell to parse the C headers and create the FFI declarations, avoiding the boilerplate that may otherwise be necessary, even using a preprocessor such as c2hs.

http://hackage.haskell.org/package/cublas-0.2.0.0

I've done a similar thing with a subset of the MAGMA GPU library. It's less polished, and the installation process is more unforgiving, so I haven't put it up on Hackage.

https://github.com/bmsherman/magma-gpu

Finally, I've written a library which abstracts the immutable API of hmatrix and provides a pure, hmatrix-like interface for cuBLAS/MAGMA, enabling simultaneous development of linear algebra programs using either hmatrix or the above GPU bindings as backends. Additionally, I have written "medium-level" mutable and immutable interfac
Categories: Offsite Discussion