News aggregator

Limitations of type families as type-level functions

Haskell on Reddit - Mon, 03/30/2015 - 9:16am

I need to better understand the limits of type-families as type-level functions.

For example, here I'm trying to craft a data type that contains, in its type, a function f :: * -> Constraint to test whether all values of some type f c => c :: * can be replaced with values of a type c' :: *.

{-# LANGUAGE GADTs #-}
{-# LANGUAGE LambdaCase #-}
{-# LANGUAGE TypeFamilies #-}
{-# LANGUAGE ConstraintKinds #-}
module Temp where

import GHC.Prim

type family Both (a :: * -> Constraint) (b :: * -> Constraint) (x :: *) :: Constraint where
  Both a b x = (a x, b x)

type family None (a :: *) :: Constraint where
  None a = ()

data Expression (f :: * -> Constraint) (c :: *) (v :: *) where
  Constant :: c -> Expression None c v
  Variable :: Eq v => v -> Expression None c v
  -- ...
  Negate :: (Num c, f c) => Expression f c v -> Expression (Both Num f) c v
  -- ...
  Recip :: (Fractional c, f c) => Expression f c v -> Expression (Both Fractional f) c v
  -- ...
  Log :: (Floating c, f c) => Expression f c v -> Expression (Both Floating f) c v
  -- ...

embed :: f c' => (c -> c') -> Expression f c v -> Expression f c' v
embed f = \case
  Constant c -> Constant $ f c
  Variable v -> Variable v
  Negate e   -> Negate $ embed f e
  -- ...
  Recip e    -> Recip $ embed f e
  -- ...
  Log e      -> Log $ embed f e
  -- ...

This doesn't work (in ghc-7.8.3), which doesn't thoroughly surprise me, but I don't know why it doesn't work.

From the errors:

Temp.hs:30:17:
    Could not deduce (f1 c') arising from a use of ‘Negate’
    from the context (f c') ...
    or from (f ~ Both Num f1, Num c, f1 c) ...

Temp.hs:30:17:
    Could not deduce (Num c') arising from a use of ‘Negate’
    from the context (f c') ...
    or from (f ~ Both Num f1, Num c, f1 c) ...

...

It appears to be correctly inferring the value of f in each case, but not expanding f ~ Both SomeTypeClass f1 in the context (f c') to get (SomeTypeClass c', f1 c').
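
For reference, one well-known limitation in this neighbourhood is that type families must always be fully saturated: a partial application like Both Num f is never a first-class type of kind * -> Constraint. A minimal sketch of the saturated/unsaturated contrast (a hypothetical example, and not necessarily the whole story behind the errors above):

{-# LANGUAGE TypeFamilies #-}
{-# LANGUAGE ConstraintKinds #-}
import GHC.Prim -- for Constraint, as in the post above

type family Both (a :: * -> Constraint) (b :: * -> Constraint) (x :: *) :: Constraint where
  Both a b x = (a x, b x)

-- Fully saturated: the family reduces and the constraint is usable.
ok :: Both Num Show a => a -> String
ok = show

-- Unsaturated: rejected. A type family cannot be partially applied, so
-- nothing of kind (* -> Constraint) can ever be built from Both:
-- newtype Wrap (f :: * -> Constraint) = Wrap ()
-- bad :: Wrap (Both Num Show)
-- bad = Wrap ()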

What are some good things to read to get a better intuition for the limitations of type families, so I don't waste time trying to get them to do something they cannot?

submitted by rampion
Categories: Incoming News

Terra language: Lua + LLVM JIT + multi-stage metaprogramming

haskell-cafe - Mon, 03/30/2015 - 8:28am
Bulat Ziganshin wrote: You have omitted MetaOCaml. One may argue about the convenience of MetaOCaml, but one has to acknowledge that it is a typed and hygienic staged language. (The TExp subset of the new TH is typed and hygienic, but it does not permit any effects in the generator, which are often necessary.) As to low level, it turns out to be relatively straightforward to interface MetaOCaml with LLVM, so that OCaml becomes a very powerful, typed and hygienic ``macro language'' for LLVM.
Categories: Offsite Discussion

Index for local documentation is incomplete

haskell-cafe - Mon, 03/30/2015 - 7:40am
Dear Haskellers, I have a problem with local Haddock documentation. Specifically, the documentation gets built but the index.html is not updated. As an example, here is the bottom part of the log of a package I installed a few days ago: The documentation files are present in ~/.cabal/share/doc/i386-linux-ghc-7.8.3/filemanip-0.3.6.3/html/index.html but if I open `~/.cabal/share/doc/index.html`, there is no `System.FilePath.Find` module. cabal version is 1.22.0.0, GHC 7.8.3. Any idea on how to diagnose this?
Categories: Offsite Discussion

Well-Typed.Com: OverloadedRecordFields revived

Planet Haskell - Mon, 03/30/2015 - 7:39am

Way back in the summer of 2013, with support from the Google Summer of Code programme, I implemented a GHC extension called OverloadedRecordFields to address the oft-stated desire to improve Haskell’s record system. This didn’t get merged into GHC HEAD at the time, because the implementation cost outweighed the benefits. Now, however, I’m happy to report that Well-Typed are sponsoring the work required to get an improved version into GHC. Moreover, the first part of the work is already up for review on Phabricator.

The crucial thing that has enabled OverloadedRecordFields to get going again is that we’ve found a way to factor the design differently, so that we get a much better power-to-weight ratio for the extension by splitting it into two parts.

Part 1: Records with duplicate field labels

The first step is to cut down the core OverloadedRecordFields extension as much as possible. The essential idea is the same as it ever was, namely that a single module should be able to use the same field name in multiple datatypes, as in this example:

data Person  = Person  { personId :: Int, name :: String }
data Address = Address { personId :: Int, address :: String }

These definitions are forbidden in normal Haskell, because personId is defined twice, but the OverloadedRecordFields extension will permit them and instead postpone name conflict checking to use sites. The basic extension will require that fields are used in such a way that the relevant datatype is always unambiguously determined, and the meanings of record construction, pattern matching, selection and update will not change. This means that the extension can always be enabled for an existing module and it will continue to compile unchanged, an important property that was not true of the previous design.

The Haskell syntax for record construction and pattern-matching is always unambiguous, because it mentions the data constructor, which means that code like this is perfectly fine:

p = Person { personId = 1, name = "Donald" }

getId (Person { personId = i }) = i

On the other hand, record selector functions are potentially ambiguous. The name and address selectors can be used without restrictions, and with their usual types, but it will not be possible to use personId as a record selector if both versions are in scope (although we will shortly see an alternative).

Record update is a slightly more interesting case, because it may or may not be ambiguous depending on the fields being updated. In addition, since updates are a special syntactic form, the ambiguity can be resolved using a type signature. For example, this update would be ambiguous and hence rejected by the compiler:

f x = x { personId = 0 } -- is x a Person or an Address?

On the other hand, all these updates are unambiguous:

g :: Person -> Person
g x = x { personId = 0 }                  -- top-level type signature

h x = x { personId = 0 } :: Person        -- type signature outside

k x = (x :: Person) { personId = 0 }      -- type signature inside

l x = x { personId = 0, name = "Daffy" }  -- only Person has both fields

Overall, this extension requires quite a bit of rewiring inside GHC to distinguish between field labels, which may be overloaded, and record selector function names, which are always unambiguous. However, it requires nothing conceptually complicated. As mentioned above, the implementation of this part is available for review on Phabricator.

Part 2: Polymorphism over record fields

While the OverloadedRecordFields extension described in part 1 is already useful, it is a relatively minor relaxation of the Haskell scoping rules. Another important piece of the jigsaw is some way to refer to fields that may belong to multiple datatypes. For example, we would like to be able to write a function that selects the personId field from any type that has such a field, rather than being restricted to a single datatype. Much of the unavoidable complexity of the previous OverloadedRecordFields design came from treating all record selectors in an overloaded way.

But since this is new functionality, it can use a new syntax, tentatively a prefix # sign (meaning that use of # as an operator will require a space afterwards when the extension is enabled). This means that it will be possible to write #personId for the overloaded selector function. Since we have a syntactic cue, it is easy to identify such overloaded uses of selector functions, without looking at the field names that are in scope.

Typeclasses and type families will be used to implement polymorphism over fields belonging to record types, though the details are beyond the scope of this blog post. For example, the following definition is polymorphic over all types r that have a personId :: Int field:

getId :: r { personId :: Int } => r -> Int
getId x = #personId x

Moreover, we are not limited to using #personId as a selector function. The same syntax can also be given additional interpretations, allowing overloaded updates and making it possible to produce lenses for fields without needing Template Haskell. In fact, the syntax is potentially useful for things that have nothing to do with records, so it will be available as a separate extension (implied by, but distinct from, OverloadedRecordFields).
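
To give a flavour of how this might hang together (a sketch only, with hypothetical names; the real design lives on the GHC wiki and may differ), a use of #personId could elaborate to a method of a class indexed by a type-level field name, with instances generated by the compiler:

{-# LANGUAGE DataKinds #-}
{-# LANGUAGE KindSignatures #-}
{-# LANGUAGE MultiParamTypeClasses #-}
{-# LANGUAGE FunctionalDependencies #-}
{-# LANGUAGE FlexibleInstances #-}
import Data.Proxy (Proxy(..))
import GHC.TypeLits (Symbol)

-- Hypothetical: #personId would become 'getField (Proxy :: Proxy "personId")'.
class HasField (name :: Symbol) r a | name r -> a where
  getField :: Proxy name -> r -> a

data Person = Person { _personId :: Int, _name :: String }

-- An instance the compiler would generate for the personId field:
instance HasField "personId" Person Int where
  getField _ = _personId

getId :: HasField "personId" r Int => r -> Int
getId = getField (Proxy :: Proxy "personId")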

Further reading

More details of the redesigned extensions are available on the GHC wiki, along with implementation notes for GHC hackers. Last year, I gave a talk about the previous design, which is still a good guide to how the types work under the hood, even though it predates the redesign.

Categories: Offsite Blogs

Generalizing "unlift" functions with monad-control

haskell-cafe - Mon, 03/30/2015 - 6:33am
I'm trying to extract an "unlift" function from monad-control, which would allow stripping off a layer of a transformer stack in some cases. It's easy to see that this works well for ReaderT, e.g.:

{-# LANGUAGE RankNTypes #-}
{-# LANGUAGE TypeFamilies #-}

import Control.Monad.Trans.Control
import Control.Monad.Trans.Reader

newtype Unlift t = Unlift { unlift :: forall n b. Monad n => t n b -> n b }

askRun :: Monad m => ReaderT r m (Unlift (ReaderT r))
askRun = liftWith (return . Unlift)

The reason this works is that the `StT` associated type for `ReaderT` just returns the original type, i.e. `type instance StT (ReaderT r) m a = a`. In theory, we should be able to generalize `askRun` to any transformer for which that applies. However, I can't figure out any way to express that generalized type signature in a way that GHC accepts it. It seems like the following should do the trick:

askRunG :: ( MonadTransControl t
           , Monad m
           , b ~ StT t b
           )
        => t m (Unlift t)
askRunG
Categories: Offsite Discussion

Is it possible for cabal sandboxes to use $HOME/.cabal/lib?

haskell-cafe - Mon, 03/30/2015 - 5:05am
Hi all, There are some packages that I use again and again, and I would like to install them once (but not in a system-wide location) and then have cabal sandboxes use that install, instead of installing them in every sandbox, which is time-consuming. Is there a way to do that? For example, I noticed that cabal, when operating inside a sandbox, can still find the packages installed system-wide, but not the ones in $HOME/.cabal/lib. Can I change that? Thanks
Categories: Offsite Discussion

Ian Ross: C2HS 0.25.1 "Snowmelt"

Planet Haskell - Mon, 03/30/2015 - 3:43am
C2HS 0.25.1 "Snowmelt" March 30, 2015

I took over the day-to-day support for C2HS about 18 months ago and have now finally cleaned up all the issues on the GitHub issue tracker. It took a lot longer than I was expecting, mostly due to pesky “real work” getting in the way. Now seems like a good time to announce the 0.25.1 “Snowmelt” release of C2HS and to summarise some of the more interesting new C2HS features.

Regression suite and Travis testing

When I first started working on C2HS, I kept breaking things and getting emails letting me know that such-and-such a package no longer worked. That got boring pretty quickly, so I wrote a Shelly-driven regression suite to build a range of packages that use C2HS to check for breakages. This now runs on Travis CI so that whenever a C2HS change is pushed to GitHub, as well as the main C2HS test suite, a bunch of C2HS-dependent packages are built. This has been pretty handy for avoiding some stupid mistakes.

Enum handling

Thanks to work contributed by Philipp Balzarek, the treatment of the mapping between C enum values and Haskell Enum types is now much better than it was. The C enum/Haskell Enum association is kind of an awkward fit, since the C and Haskell worlds make really quite different assumptions about what an “enumerated” type is, and the coincidence of names is less meaningful than you might hope. We might have to do some more work on that in the future: I’ve been thinking about whether it would be good to have a CEnum class in Foreign.C.Types to capture just the features of C enums that can be mapped to Haskell types in a sensible way.
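
Purely as a sketch of that idea (no such class exists in base today, and the names are hypothetical), it might look like this, capturing nothing more than the integer conversions:

import Foreign.C.Types (CInt)

-- A C enum is just a named collection of integer values, so the class
-- only needs the two conversions:
class CEnum a where
  toCEnum   :: a -> CInt
  fromCEnum :: CInt -> a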

Finalizers for foreign pointers

You can now say things like:

#include <stdio.h>

{#pointer *FILE as File foreign finalizer fclose newtype#}

{#fun fopen as ^ {`String', `String'} -> `File'#}
{#fun fileno as ^ {`File'} -> `Int'#}

main :: IO ()
main = do
  f <- fopen "tst.txt" "w"
  ...

and the file handle f will be cleaned up by a call to fclose via the Haskell garbage collector. This encapsulates a very common use case for handling pointers to C structures allocated by library functions. Previously there was no direct way to associate finalizers with foreign pointers in C2HS, but now it’s easy.
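
As a hypothetical continuation of the example above (reusing the fopen and fileno hooks), there is no explicit close anywhere; fclose fires when f is garbage collected:

main :: IO ()
main = do
  f  <- fopen "tst.txt" "w"
  fd <- fileno f   -- use the File like any other marshalled value
  print fd
  -- no explicit close: the fclose finalizer runs when f becomes unreachable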

Easy access to preprocessor constants

C2HS has a new const hook for directly accessing the value of C preprocessor constants – you can just say {#const FOO#} to use the value of a constant FOO defined in a C header in Haskell code.
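
For instance, this minimal sketch (assuming INT_MAX from the system limits.h) turns a preprocessor constant into an ordinary Haskell value, with no FFI call involved:

#include <limits.h>

intMax :: Int
intMax = {#const INT_MAX#}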

Special case argument marshalling

I’ve implemented a couple of special mechanisms for argument marshalling that were requested. The first of these is a little esoteric, but an example should make it clear. A common pattern in some C libraries is to have code that looks like this:

typedef struct {
    int   a;
    float b;
    char  dummy;
} oid;

void func(oid *obj, int aval, float bval);
int oid_a(oid *obj);
float oid_b(oid *obj);

Here the function func takes a pointer to an oid structure and fills in the values in the structure, and the other functions take oid pointers and do various things with them. Dealing with functions like func through the Haskell FFI is tedious because you need to allocate space for an oid structure, marshall a pointer to the allocated space and so on. Now though, the C2HS code

{#pointer *oid as Oid foreign newtype#}

{#fun func as ^ {+, `Int', `Float'} -> `Oid'#}

generates Haskell code like this:

newtype Oid = Oid (ForeignPtr Oid)

withOid :: Oid -> (Ptr Oid -> IO b) -> IO b
withOid (Oid fptr) = withForeignPtr fptr

func :: Int -> Float -> IO Oid
func a2 a3 =
  mallocForeignPtrBytes 12 >>= \a1'' ->
  withForeignPtr a1'' $ \a1' ->
  let {a2' = fromIntegral a2} in
  let {a3' = realToFrac a3} in
  func'_ a1' a2' a3' >>
  return (Oid a1'')

This allocates the right amount of space using the fast mallocForeignPtrBytes function and deals with all the marshalling for you. The special + parameter in the C2HS function hook definition triggers this (admittedly rather specialised) case.
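
A hypothetical use of the generated binding: note that func takes no Oid argument at all, because the + parameter told C2HS to do the allocation itself:

main :: IO ()
main = do
  o <- func 42 3.14   -- space for the oid is allocated behind the scenes
  withOid o print     -- the raw Ptr is still reachable when we need it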

The second kind of “special” argument marshalling is more general. A lot of C libraries include functions where small structures are passed “bare”, i.e. not as pointers. The Haskell FFI doesn’t include a means to marshal arguments of this type, which makes using libraries of this kind painful, with a lot of boilerplate marshalling code needed (just the kind of thing C2HS is supposed to eliminate!). The solution I came up with for C2HS is to add an argument annotation for function hooks that says that a structure pointer should really be passed as a bare structure. In such cases, C2HS then generates an additional C wrapper function to marshal between structure pointer and bare structure arguments. An example will make this clear. Suppose you have some code in a C header:

typedef struct {
    int x;
    int y;
} coord_t;

coord_t *make_coord(int x, int y);
void free_coord(coord_t *coord);
int coord_x(coord_t c, int dummy);

Here, the coord_x function takes a bare coord_t structure as a parameter. To bind to these functions in C2HS code, we write this:

{#pointer *coord_t as CoordPtr foreign finalizer free_coord newtype#}

{#fun pure make_coord as makeCoord {`Int', `Int'} -> `CoordPtr'#}
{#fun pure coord_x as coordX {%`CoordPtr', `Int'} -> `Int'#}

Here, the % annotation on the CoordPtr argument to the coordX function hook tells C2HS that this argument needs to be marshalled as a bare structure. C2HS then generates Haskell code as usual, but also an extra .chs.c file containing wrapper functions. This C code needs to be compiled and linked to the Haskell code.

This is kind of new and isn’t yet really supported by released versions of Cabal. I’ve made some Cabal changes to support this, which have been merged and will hopefully go into the next or next but one Cabal release. When that’s done, the handling of the C wrapper code will be transparent – Cabal will know that C2HS has generated these extra C files and will add them to the “C sources” list for whatever it’s building.

Binding to variadic C functions

Previously, variadic C functions weren’t supported in C2HS at all. Now though, you can do fun things like this:

#include <stdio.h>

{#fun variadic printf[int] as printi {`String', `Int'} -> `()'#}
{#fun variadic printf[int, int] as printi2 {`String', `Int', `Int'} -> `()'#}
{#fun variadic printf[const char *] as prints {`String', `String'} -> `()'#}

You need to give distinct names for the Haskell functions to be bound to different calling sequences of the underlying C function, and because there’s no other way of finding them out, you need to specify explicit types for the arguments you want to pass in the place of C’s ... variadic argument container (that’s what the C types in the square brackets are). Once you do that, you can call printf and friends to your heart’s content. (The user who wanted this feature wanted to use it for calling Unix ioctl…)
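
Usage is then straightforward (a sketch, given the three hooks above):

main :: IO ()
main = do
  printi  "one int: %d\n" 42
  printi2 "two ints: %d, %d\n" 1 2
  prints  "a string: %s\n" "hello"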

User-defined default marshallers

A big benefit of C2HS is that it tries quite hard to manage the associations between C and Haskell types and the marshalling of arguments between C and Haskell. To that end, we have a lot of default marshallers that allow you very quickly to write FFI bindings. However, we can’t cover every case. There were a few long-standing issues (imported from the original Trac issue tracker when I moved the project to GitHub) asking for default marshalling for various C standard or “standardish” typedefs. I held off on trying to fix those problems for a long time, mostly because I thought that fixing them one at a time as special cases would be a little futile and would just devolve into endless additions of “just one more” case.

In the end, I implemented a general scheme to allow users to explicitly associate C typedef names with Haskell types and to define default marshallers between them. As an example, using this facility, you can write code to marshal Haskell String values to and from C wide character strings like this:

#include <wchar.h>

{#typedef wchar_t CWchar#}

{#default in `String' [wchar_t *] withCWString* #}
{#default out `String' [wchar_t *] peekCWString* #}

{#fun wcscmp {`String', `String'} -> `Int'#}
{#fun wcscat {`String', `String'} -> `String'#}
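
A hypothetical use of those hooks; wcscmp is the safe one to demonstrate, since wcscat as bound here would write past the end of the temporary buffer that withCWString allocates for its first argument:

main :: IO ()
main = do
  r <- wcscmp "haskell" "haskell"
  print r   -- 0: the two wide-character strings compare equal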

I think that’s kind of fun…

Miscellany

As well as the features described above, there’s a lot more that’s been done over the last 18 months: better handling of structure tags and typedefs; better cross-platform support (OS X, FreeBSD and Windows); lots more default marshallers; support for parameterised pointer types; some vague gestures in the direction of “backwards compatibility” (basically just a C2HS_MIN_VERSION macro); and just in the last couple of days, some changes to deal with marshalling of C bool values (really C99 _Bool) which aren’t supported directly by the Haskell FFI (so again require some wrapper code and some other tricks).
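
For the version macro, usage would presumably follow the familiar Cabal MIN_VERSION pattern (the exact arity here is an assumption; check the C2HS documentation):

#if C2HS_MIN_VERSION(0,25,1)
-- code relying on post-0.25.1 features
#endif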

Contributors

As well as myself and Manuel Chakravarty, the original author of C2HS, the following people have contributed to C2HS development over the last 18 months (real names where known, GitHub handles otherwise):

  • Anton Dessiatov
  • Boyun Tang
  • Cindy Wang
  • Dimitri Sabadie
  • Facundo Dominguez
  • Index Int
  • j-keck
  • Kai Harries
  • Merijn Verstraaten
  • Michael Steele
  • Philipp Balzarek
  • RyanGlScott
  • Sivert Berg
  • watashi
  • Zejun Wu

Many thanks to all of them, and many thanks also to Benedikt Huber, who maintains the language-c package on which C2HS is critically dependent!

What next?

All of the work I’ve done on C2HS has been driven purely by user demand, based on issues I imported from the original Trac issue tracker and then on things that people have asked for on GitHub. (Think of it as a sort of call-by-need exploration of the C2HS design space.) I’m now anticipating that since I’ve raised my head above the parapet by touting all these shiney new features, I can expect a new stream of bug reports to come in…

One potential remaining large task is to “sort out” the Haskell C language libraries, of which there are now at least three, all with different pros and cons. The language-c library used in C2HS has some analysis capabilities that aren’t present in the other libraries, but the other libraries (notably Geoffrey Mainland’s language-c-quote and Manuel’s language-c-inline) support more recent dialects of C. Many of the issues with C2HS on OS X stem from modern C features that occur in some of the OS X headers that the language-c package just doesn’t recognise. Using one of the other C language packages might alleviate some of those problems. To do that though, some unholy mushing-together of language-c and one of these other packages has to happen, in order to bring the analysis capabilities of language-c to the other package. That doesn’t look like much fun at all, so I might ignore the problem and hope it goes away.

I guess longer term the question is whether tools like C2HS really have a future. There are better approaches to FFI programming being developed by research groups (Manuel’s is one of them: this talk is pretty interesting) so maybe we should just wait until they’re ready for prime time. On the other hand, quite a lot of people seem to use C2HS, and it is pretty convenient.

One C2HS design decision I’ve recently had to modify a little is that C2HS tries to use only information available via the “official” Haskell FFI. Unfortunately, there are situations where that just isn’t enough. The recent changes to marshal C99 _Bool values are a case in point. In order to determine offsets into structures containing _Bool members, you need to know how big a _Bool is. Types that are marshalled by the Haskell FFI are all instances of Storable, so you can just use the size method from Storable for this. However, the Haskell FFI doesn’t know anything about _Bool, so you end up having to “query” the C compiler for the information by generating a little C test program that you compile and run. (You can find out which C compiler to use from the output of ghc --info, which C2HS thus needs to run first.) This is all pretty nasty, but there’s no obvious other way to do it.
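
A hedged sketch of that trick (not C2HS's actual code): write a tiny C program, compile it with the compiler reported by ghc --info, run it, and read the answer off its standard output:

import System.Process (callProcess, readProcess)

-- Determine sizeof(_Bool) by asking the C compiler directly.
sizeofBool :: FilePath -> IO Int
sizeofBool cc = do
  writeFile "c2hs_bool_test.c" $ unlines
    [ "#include <stdio.h>"
    , "int main(void) { printf(\"%d\", (int) sizeof(_Bool)); return 0; }"
    ]
  callProcess cc ["-o", "c2hs_bool_test", "c2hs_bool_test.c"]
  fmap read (readProcess "./c2hs_bool_test" [] "")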

This makes me think, since I’m having to do this anyway, that it might be worth reorganising some of C2HS’s structure member offset calculation code to use the same sort of “query the C compiler” approach. There are some cases (e.g. structures within structures) where it’s just not possible to reliably calculate structure member offsets from the size and alignment information available through the Haskell FFI – the C compiler is free to insert padding between structure members, and you can’t work out just by looking when a particular compiler is going to do that. Generating little C test programs and compiling and running them allows you to get the relevant information “straight from the horse’s mouth”… (I don’t know whether this idea really has legs, but it’s one thing I’m thinking about.)

Categories: Offsite Blogs

ANN: unification-fd 0.10.0

haskell-cafe - Mon, 03/30/2015 - 2:12am
--------------------------------------------
Categories: Offsite Discussion

ANN: stm-chans 3.0.0.3

libraries list - Mon, 03/30/2015 - 1:58am
--------------------------------------------
Categories: Offsite Discussion

ANN: logfloat 0.13.3

haskell-cafe - Mon, 03/30/2015 - 1:49am
--------------------------------------------
Categories: Offsite Discussion

Platform 2015.2.0.0 Alpha 2 for debian based systems

libraries list - Mon, 03/30/2015 - 1:24am
I've uploaded a build of *Haskell Platform 2015.2.0.0 Alpha 2* for generic debian linux systems. It includes GHC 7.10.1, using the generic linux bindist. This is a tarball, not a package. Installation instructions:

cd /
sudo tar xvf ...downloaded-tarfile...
sudo /usr/local/haskell/ghc-7.10.1-x86-64/bin/activate-hs

Notes:

  • Built on Ubuntu 14.04 LTS
  • Needs the following packages installed: build-essential libbsd-dev libgmp-dev libtinfo-dev zlib1g-dev
  • The OpenGL packages need the OpenGL libs, which on a non-GUI system such as a server can be gotten by installing: freeglut3-dev libgl1-mesa-dev libglu1-mesa-dev

- Mark
Categories: Offsite Discussion

[announce] pipes-cliff - library for streaming to and from processes with Pipes

haskell-cafe - Sun, 03/29/2015 - 11:09pm
Sometimes you have to write Haskell code to interact with external processes. What a bother. The System.Process module makes it easy to send data in and out, but what if you want to stream your data in constant time and space? Then you need to cobble together your own solution using the provided Handles. Or you can use pipes-cliff, which lets you use the excellent Pipes library to stream your data in and out. Some simple examples in the package get you started. https://hackage.haskell.org/package/pipes-cliff Also, take a look at the README.md file, which lists some similar packages. Maybe one of those will work better for you. https://github.com/massysett/pipes-cliff/blob/master/README.md
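
To make the contrast concrete, here is a hedged sketch (deliberately not the pipes-cliff API) of the hand-rolled Handle plumbing the package saves you from; note that this naive version can even deadlock if the process fills its output pipe before we start reading:

import Pipes
import qualified Pipes.Prelude as P
import System.IO (hClose)
import System.Process

main :: IO ()
main = do
  -- Launch 'tr' with pipes on stdin and stdout (the pattern match is
  -- partial, which is fine for a sketch).
  (Just hin, Just hout, _, ph) <- createProcess
    (proc "tr" ["a-z", "A-Z"]) { std_in = CreatePipe, std_out = CreatePipe }
  runEffect $ P.stdinLn >-> P.toHandle hin     -- stream our stdin to the process
  hClose hin                                   -- signal EOF
  runEffect $ P.fromHandle hout >-> P.stdoutLn -- stream its output back out
  _ <- waitForProcess ph
  return ()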
Categories: Offsite Discussion

FP Complete: Announcing: open sourcing of ide-backend

Planet Haskell - Sun, 03/29/2015 - 10:00pm

After many years of development, FP Complete is very happy and proud to announce the open sourcing of ide-backend. ide-backend has served as the basis for our development of School of Haskell and FP Haskell Center by providing a high-level, easy-to-use, and robust wrapper around the GHC API. We'd like to express our thanks to Duncan Coutts, Edsko de Vries, and Mikolaj Konarski for implementing this library on our behalf.

ide-backend provides a means to do a variety of tasks involving GHC, such as:

  • Compile code
  • Get compile error messages
  • Submit updated code for recompilation
  • Extract type information
  • Find usage locations
  • Run generated bytecode
  • Produce optimized executables

For much more information, you can see the Haddock documentation.

Members of the Commercial Haskell Special Interest Group have encouraged us to open source more of our work, to help them build more tools useful to real-world developers. We're happy to contribute.

ide-backend opens the possibility for many new and interesting tools. To give some ideas:

  • A basis for providing a fast development web server while working on a code base. The idea here is a generalized yesod devel, which compiles and runs your web application on each change.
  • Edward Kmett has a project idea of using ide-backend to extract and organize type information from a large number of packages
  • Editor plugins can be improved, simplified, and begin to share much more code than they do today
  • Lightweight tools for inspecting code
  • Refactoring tools

I've shared information about this repository with some maintainers of existing tools in the Haskell world already, and hopefully now, with the complete move to open source, we can get a much broader discussion going.

But today's release isn't just a code release; we also have demos! Edsko and Chris have been collaborating on some next-generation editor plugins, and have put together ide-backend-client with support for both Emacs and Atom. Chris has put together a screencast of his Emacs integration:

Screencast: https://www.youtube.com/embed/Cwi1p2CLW54

We also have an early prototype tool at FP Complete for inspecting a code base and getting type information, based on ide-backend, GHCJS, and React.

Demo: https://www.youtube.com/embed/FI3u8uqZ2Q4

Where we go from here

Open sourcing this library is just the first step.

  • Duncan is planning on writing a blog post describing the architecture employed by this library.
  • Edsko's ide-backend-client project is a great place to continue making contributions and playing with new ideas.
  • FP Complete intends to release more of our ide-backend based tooling in the future as it matures.
  • I've asked for a GSoC proposal on a better development web server, and I'm sure other proposals would be great as well.
  • FP Complete and Well Typed are both currently maintaining this library, and we are happy to have new people join the team.

I'm excited to hear everyone's thoughts on this library, and look forward to seeing some awesome tools appear.

Categories: Offsite Blogs

www.jdon.com

del.icio.us/haskell - Sun, 03/29/2015 - 8:32pm
Categories: Offsite Blogs