News aggregator

Call for Talk Proposals: Domain-Specific Language Design and Implementation 2014

General haskell list - Thu, 07/24/2014 - 4:19pm
*********************************************************************
CALL FOR TALK PROPOSALS

DSLDI 2014
Second Workshop on Domain-Specific Language Design and Implementation

October 20/21, 2014, Portland, USA
Co-located with SPLASH/OOPSLA
http://2014.splashcon.org/track/dsldi2014
*********************************************************************

Deadline for talk proposals: August 27, 2014

If designed and implemented well, domain-specific languages (DSLs) combine the best features of general-purpose programming languages (e.g., performance) with high productivity (e.g., ease of programming).

*** Workshop Goal ***

The goal of the DSLDI workshop is to bring together researchers and practitioners interested in sharing ideas on how DSLs should be designed, implemented, supported by tools, and applied in realistic application contexts. We are interested both in discovering how already known domains such as graph processing or machine learning can best be supported by DSLs, and in exploring new dom…
Categories: Incoming News

FHPC 2014 (and reminder about ICFP early reg)

General haskell list - Thu, 07/24/2014 - 2:27pm
The programme for the Workshop on Functional High Performance Computing (Sept. 4, immediately after ICFP) is available at
https://sites.google.com/site/fhpcworkshops/fhpc-2014/programme

It will be a very enjoyable workshop, so please consider attending.

This is also a reminder that the last day for early registration for ICFP and associated workshops is August 3. Online registration for both ICFP and FHPC starts here:
https://regmaster4.com/2014/ICFP14/ic01code/regsystem.php?control=register

with best wishes
Mary Sheeran

_______________________________________________
Haskell mailing list
Haskell< at >haskell.org
http://www.haskell.org/mailman/listinfo/haskell
Categories: Incoming News

Functional Jobs: Senior Haskell Developer at Plow Technologies (Full-time)

Planet Haskell - Thu, 07/24/2014 - 2:07pm

Plow Technologies is looking for an experienced Haskell developer who can lead software design. Deep understanding of the Haskell programming language is preferred. This person would be expected to work at all levels of our platform to optimize the performance, reliability, and maintainability of our code base. We want the kind of programmer who makes everyone else better by: (1) designing application programming interfaces and libraries that speed up development; and (2) teaching others through direct interaction. The skills desired for this position are rare, so remote work would definitely be an option. However, some direct interaction (including travel) should be expected.

Get information on how to apply for this position.

Categories: Offsite Blogs

Papers every haskeller should read

Haskell on Reddit - Thu, 07/24/2014 - 10:11am

What are some papers that every Haskeller should read?

submitted by incompetentacademic
[link] [33 comments]
Categories: Incoming News

determining the origin of imported symbols

haskell-cafe - Thu, 07/24/2014 - 9:29am
Hi everyone,

I'm looking for a way to analyze Haskell source to determine which module each imported symbol comes from. My goal is to transform source code like this:

import Data.List
...
main = do
  nums <- fmap (map read . words) getLine :: IO [Int]
  print $ sort nums

into code like this:

import qualified Prelude as B
import qualified Data.List as A
...
main = do
  nums <- B.fmap (B.map B.read B.. B.words) B.getLine :: B.IO [B.Int]
  B.print B.$ A.sort nums

That is, I want to qualify all imported symbols with a module alias. Can anyone suggest modules or programs I should look at?

Thanks,
ER
Categories: Offsite Discussion

Markov Text Generator & Randomness

haskell-cafe - Thu, 07/24/2014 - 3:29am
Hi -cafe,

I'm coding a Markov text generator (of order 1). Basically, you have a source text, and knowing the frequencies of pairs of consecutive words, you generate a somewhat syntactically correct text from it. Here are links to my code and to a source text you can use as an example:

test.txt: http://lpaste.net/raw/4004174907431714816
code: http://lpaste.net/4147715261379641344

The kicker is that this code generates sentences with consecutive words that never appear next to each other in the source text. For example, the code generated "They sat over at because old those the lighted.", but "over at" never occurs in the source text, so it shouldn't occur in a generated sentence. The makeDb function is correct, so my problem actually lies in generate and/or in draw. I think there's something about RVar that I messed up, but I don't see the problem. Any ideas?

Cheers,
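For reference, here is a hedged sketch of an order-1 database and a deterministic draw (my own illustration, not the poster's code, which uses RVar). Passing the random index in as a plain Int makes it easy to test the invariant the poster wants: a successor is only ever drawn from words that actually follow the current word.

```haskell
import qualified Data.Map.Strict as M

type Db = M.Map String [String]  -- word -> list of observed successors

-- Pair each word with its immediate successor; repeated entries in the
-- successor list encode the pair frequencies.
makeDb :: [String] -> Db
makeDb ws = M.fromListWith (++) (zip ws (map (: []) (tail ws)))

-- Draw a successor using an externally supplied random index (a
-- deterministic stand-in for RVar-based sampling).  Nothing means the
-- word has no observed successor, so generation must stop or restart.
draw :: Int -> Db -> String -> Maybe String
draw r db w = do
  succs <- M.lookup w db
  return (succs !! (r `mod` length succs))
```

Because draw only indexes into the successor list for the current word, a pair that never occurs in the source text can never be generated; if the poster's generate produces "over at", the bug is likely in how the sampled word is threaded back into the next lookup.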
Categories: Offsite Discussion

Haskell on JVM (using LLVM backend)

haskell-cafe - Thu, 07/24/2014 - 3:26am
Hello!

I was recently discussing with some friends the possibility of running Haskell programs on the JVM. I believe such a possibility could be a breakthrough for the popularity of Haskell and could be very interesting for new people and new usage scenarios. I have seen some topics, like these:

http://stackoverflow.com/questions/7261039/haskell-on-jvm
http://www.haskell.org/haskellwiki/GHC/FAQ#Why_isn.27t_GHC_available_for_.NET_or_on_the_JVM.3F
http://www.haskell.org/pipermail/haskell-cafe/2009-June/063454.html

but they are a little old and do NOT mention the LLVM backend. If we've got the LLVM backend in GHC right now, why can't we just use something like LLJVM to convert the LLVM IR into JVM bytecode? I understand that the LLVM bytecode (obtained from GHC) has all the optimizations applied (including tail-call optimization), so it **could** be possible to just run it on the JVM?

All the best,
Wojciech Daniło
Categories: Offsite Discussion

Locating Modules in Haskell

Haskell on Reddit - Thu, 07/24/2014 - 2:22am

What is the rationale behind the location of the standard modules in Haskell?

For instance, why are Monoid, Functor, etc. located in Data, while Applicative, Monad, Category, etc. are located in Control?

submitted by BanX
[link] [10 comments]
Categories: Incoming News

GHCi: Behave nicely on `-e`, like `ghc` and other programs

libraries list - Thu, 07/24/2014 - 12:29am
Like many programming language environments, GHC offers a handy `-e` option for evaluating an expression and then returning to the shell:

$ ghc -e '2 + 2'
4

One would expect the interpreter, GHCi, to offer a similar flag, but it surprisingly rejects it:

$ ghci -e '2 + 2'
ghc: on the commandline: cannot use `--interactive' with `-e'
Usage: For basic information, try the `--help' option.

I think this behavior is quite unintuitive: when I pass `-e <exp>` to ghci, or pass `--interactive -e <exp>` to ghc, I expect the expression to be evaluated as the leading expression in an interactive interpreter session. Could we please tweak ghc to make it slightly more intuitive when these flags are used together?
Categories: Offsite Discussion

Enhancement: Default cabal to `-p` profiling enabled

libraries list - Thu, 07/24/2014 - 12:18am
I love GHC's profiling support, and like to use it to analyze the performance of my Haskell applications. However, profiling an application is difficult when it depends on any third-party libraries, as cabal doesn't include profiling information by default. Fortunately, cabal can reinstall a library with profiling support:

cabal install --reinstall -p <library>

Unfortunately, cabal is a bit of a simpleton, so this will fail unless that library's dependencies are also installed with profiling enabled:

cabal install --reinstall -p <libraryX> <libraryY> <libraryZ> ...

For example, a user who wants to profile his die-rolling program must run:

$ sudo apt-get install haskell-platform haskell-platform-doc haskell-platform-prof
$ sudo cabal install --reinstall -p mwc-random rvar random-fu random-source mersenne-random-pure64 stateref flexible-defaults th-extras MonadPrompt math-functions erf vector-th-unbox monad-loops random-shuffle MonadRandom

And that long list of packages must be slowly grown one at a time…
Categories: Offsite Discussion

Retrieving information about type families

haskell-cafe - Wed, 07/23/2014 - 7:48pm
Dear Café,

My quest for obtaining information about type families continues. Now I have a simple question: how should I access the information about "type instance"s via the GHC API? My aim is to do so after type checking, that is, to get that information from a TypecheckedModule. However, I haven't yet been able to touch the right buttons to make it work ;(

Thanks in advance,
Alejandro
Categories: Offsite Discussion

GhcPlugin-writing and "finding things"

glasgow-user - Wed, 07/23/2014 - 5:06pm
Dear GHC-ers,

I'm working on a plugin for GHC that should help compile the library with which this plugin is to ship. What this plugin does is traverse the CoreProgram(s) to find things of types defined in my library and optimize them. I have worked out how to "find" things, but I was wondering whether the API could be improved for plugin-writers. For the sake of argument, I have the following:

- module Foo: library for users to import, containing functions, ADTs, etc.
- module Foo.Plugin: GhcPlugin that compiles out all uses of things in Foo

This example is trivial and I imagine GHC will have no trouble eliminating most cases of this, but imagine more complex stuff. Now, if I want to traverse the CoreProgram in my plugin, I need to find occurrences of these, so somewhere there's stuff like:

My problem is "getting" tcFoo in this example. Below is how I do it now. Maybe I'm being thick, or maybe there's just no simpler way. This is my 'plugin' function in Foo.Plugin:

I have the following questions:
Categories: Offsite Discussion

Neil Mitchell: Applicative vs Monadic build systems

Planet Haskell - Wed, 07/23/2014 - 1:11pm

Summary: Shake is a monadic build system, and monadic build systems are more powerful than applicative ones.

Several people have wondered if the dependencies in the Shake build system are monadic, and if Make dependencies are applicative. In this post I'll try and figure out what that means, and show that the claim is somewhat true.

Gergo recently wrote a good primer on the concepts of Applicative, Monads and Arrows (it is worth reading the first half if you are unfamiliar with monad or applicative). Using a similar idea, we can model a simple build system as a set of rules:

rules :: [(FilePath, Action String)]
rules = [("a+b", do a <- need "a"; b <- need "b"; return (a ++ b))
        ,("a" , return "Hello ")
        ,("b" , return "World")
        ]

Each rule is on a separate line, containing a pair of the file the rule produces (e.g. a for the second rule) and the action that produces the file's contents (e.g. return "Hello "). I've used need to allow a rule to use the contents of another file, so the rule for a+b depends on the files a and b, then concatenates their contents. We can run these rules to produce all the files. We've written these rules assuming Action is a Monad, using the do notation for monads. However, for the above build system, we can restrict ourselves to Applicative functions:

rules = [("a+b", (++) <$> need "a" <*> need "b")
        ,("a" , pure "Hello ")
        ,("b" , pure "World")
        ]
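The payoff of the applicative form can be made concrete. Without a Monad instance, need can do nothing with the file contents except record that the file was demanded, so the full dependency list falls out of a purely static traversal. A minimal sketch, using a hypothetical analysis-only Action rather than Shake's real type:

```haskell
-- Hypothetical Action that records dependencies but never reads real
-- file contents: need returns a dummy string, so it can only be
-- combined applicatively, never inspected.
data Action a = Action [FilePath] a

instance Functor Action where
  fmap f (Action ds x) = Action ds (f x)

instance Applicative Action where
  pure = Action []
  Action ds f <*> Action es x = Action (ds ++ es) (f x)

need :: FilePath -> Action String
need file = Action [file] ""   -- remember the dependency statically

deps :: Action a -> [FilePath]
deps (Action ds _) = ds
```

Here deps ((++) <$> need "a" <*> need "b") is ["a","b"], computed without touching any file. The same trick fails once >>= is available, because the second need may depend on a value that only exists at run time.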

If Action is applicative but not monadic then we can statically (without running any code operating on file contents) produce a dependency graph. If Action is monadic we can't generate a graph upfront, but there are some build systems that cannot be expressed applicatively. In particular, using a monad we can write a "dereferencing" build system:

rules = [("!a", do a <- need "a"; need a)
        ,("a" , pure "b")
        ,("b" , pure "Goodbye")
        ]

To build the file !a we first require the file a (which produces the contents b), then we require the file b (which produces the contents Goodbye). Note that the first rule has turned b the content into b the file name. In general, moving information from file contents to a file name requires a monad. Alternatively stated, a monad lets you choose future dependencies based on the results of previous dependencies.
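To make the dereferencing example runnable, here is a minimal toy interpreter (my own sketch, not Shake's implementation): Action is just a function from the rule set to a value, and need looks a target up and builds it on the spot, with no caching or cycle detection.

```haskell
import Data.Maybe (fromJust)

-- A toy monadic Action: a computation that may consult the rule set
-- while producing its result.
newtype Action a = Action { runAction :: [(FilePath, Action String)] -> a }

instance Functor Action where
  fmap f (Action g) = Action (f . g)

instance Applicative Action where
  pure x = Action (const x)
  Action f <*> Action x = Action (\rs -> f rs (x rs))

instance Monad Action where
  Action x >>= f = Action (\rs -> runAction (f (x rs)) rs)

-- Build the named target by running its rule (assumes the rule exists).
need :: FilePath -> Action String
need file = Action (\rs -> runAction (fromJust (lookup file rs)) rs)

rules :: [(FilePath, Action String)]
rules = [ ("!a", do a <- need "a"; need a)
        , ("a" , pure "b")
        , ("b" , pure "Goodbye")
        ]

build :: FilePath -> String
build file = runAction (need file) rules
```

With these definitions, build "!a" evaluates to "Goodbye": the contents of a become the name of the next dependency, which is exactly the step an applicative-only Action cannot express.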

One realistic example (from the original Shake paper), is building a .tar file from the list of files contained in a file. Using Shake we can write the Action:

contents <- readFileLines "list.txt"
need contents
cmd "tar -cf" [out] contents

The only build systems that I'm aware of that are monadic are redo, SCons and Shake-inspired build systems (including Shake itself, Jenga in OCaml, and several Haskell alternatives).

While it is the case that Shake is monadic, and that monadic build systems are more powerful than applicative ones, it is not the case that Make is applicative. In fact, almost no build systems are purely applicative. Looking at the build shootout, every build system tested can implement the !a example (provided the file a is not a build product), despite several systems being based on applicative dependencies.

Looking at Make specifically, it's clear that the output: input1 input2 formulation of dependencies is applicative in nature. However, there are at least two aspects I'm aware of that increase the power of Make:

  • Using $(shell cat list.txt) I can splice the contents of list.txt into the Makefile, reading the contents of list.txt before the dependencies are parsed.
  • Using -include file.d I can include additional rules that are themselves produced by the build system.

It seems every "applicative" build system contains some mechanism for extending its power. I believe some are strictly less powerful than monadic systems, while others may turn out to be an encoding of monadic rules. However, I think that an explicitly monadic definition provides a clearer foundation.

Categories: Offsite Blogs

Looking for list comprehensions use cases

glasgow-user - Wed, 07/23/2014 - 12:57pm
Haskellers,

recently I've been looking into the possibility of creating some new optimisations for GHC. These would be mostly aimed at list comprehensions. Here's where I need your help:

1. Do you have complex list comprehension usage examples from real code? By complex I mean nested list comprehensions, reading from more than one list ([ ... | x <- xs, y <- ys, ... ]), etc.

2. Do you have list comprehension code that you had to optimize by hand because GHC was unable to make it fast enough?

Janek
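As one concrete illustration of the kind of comprehension being asked about (my example, not from the post): multiple dependent generators plus a guard in a single comprehension.

```haskell
-- Pythagorean triples up to n: three nested generators, where the
-- later ranges depend on the earlier variables, plus a guard.
triples :: Int -> [(Int, Int, Int)]
triples n = [ (a, b, c) | c <- [1 .. n], b <- [1 .. c], a <- [1 .. b]
            , a * a + b * b == c * c ]
```

For example, triples 13 yields (3,4,5), (6,8,10), and (5,12,13); the dependent ranges are what make desugaring and fusion of such comprehensions non-trivial.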
Categories: Offsite Discussion