News aggregator

Yesod Web Framework: Beginner friendly code and APIs

Planet Haskell - Fri, 10/30/2015 - 1:30am

I just flew from Tel Aviv to Los Angeles. It's a long flight, and includes significant travel time outside of the flight itself. There are two results of that: (1) I've slept something like 8 hours in the past 72, and (2) I've had a lot of time to play with random code sitting on my laptop and think about it from different perspectives. I'm going to talk about (2), and use (1) as an excuse for anything ridiculous I say. Note that this blog post is just exploring ideas, not giving any real statements.

The following two pieces of code are identical in functionality:

someFunc1 key =
    case lookup key someMap of
        Nothing    -> getDefaultValue
        Just value -> otherFunc value

someFunc2 = maybe getDefaultValue otherFunc . flip lookup someMap

Which is better? I'll start with objective facts from my own experience:

  1. When I was a beginner Haskeller, I would have been able to write something like someFunc1
  2. When I was a beginner Haskeller, I would have had to puzzle out someFunc2 for a long time if I read it, and almost certainly couldn't have written it
  3. Past the beginner phase, reading code like someFunc2 taught me new ways to compose Haskell code easily
  4. As an intermediate Haskeller, I definitely got the feeling that I should be writing my code in someFunc2 style (point-free, avoid pattern matching, use function composition)
  5. I seem to receive pull requests on a regular (though not frequent) basis rewriting code in the style of someFunc1 to look like someFunc2
  6. I have a gut reaction that someFunc2 is better
  7. someFunc2 is shorter
  8. And yet, to this day, I think I can still parse and understand someFunc1 more easily, and I also believe I will more quickly spot a bug in that style

There are quite a few "I got the feeling" statements above. Those are objective statements about my subjective observations; I'd be really interested in whether other people have had the same kinds of observations over time, or if my experience is unique. But now, the truly subjective/exploratory part:

It seems like my progression as a Haskeller pushes me to write in a harder-to-parse style to make my code shorter, satisfying some base need for "better" code, even though, by most of the measurements I just made, the longer, explicit, pattern-matching code is in fact better.

There are certainly advantages to point-free style though, right? Some more complicated combinators - like foldr - result in clearer code, plus code that generalizes to more data types than just a list (thanks to Foldable). In some cases (like stream fusion and other rewrite rules) point-free style may be more efficient; I know that in conduit await >>= maybe f g is more efficient than pattern matching.
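To make the Foldable point concrete, here is a small illustration (the function and names are just for this example, not from any real library): the same point-free definition works unchanged over lists, Maps, and any other Foldable.

import qualified Data.Map as Map

-- One point-free definition, many containers.
totalLen :: Foldable f => f String -> Int
totalLen = foldr ((+) . length) 0

demo :: (Int, Int)
demo = ( totalLen ["ab", "cde"]                          -- 5, over a list
       , totalLen (Map.fromList [(1, "x"), (2, "yz")])   -- 3, over a Map
       )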

To try and generalize my point beyond point-free style: we've been having some discussions about more code sharing in the web framework space recently. One point that's come up is a difference between Servant and Yesod regarding explicitness of parameters. For example, in Yesod (and I think other WAI frameworks), there's a function available inside your handler code to look up request headers. In Servant, a dependency on such a piece of data is explicit at the type level. (There are similar issues around short-circuiting requests, and explicitness of different content representations.)
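To make the contrast concrete, here is a toy sketch of the two styles. It is deliberately not the real Yesod or Servant API, just an illustration of implicit versus explicit access to a request header.

import Control.Monad.Reader (Reader, asks, runReader)

newtype Request = Request { reqHeaders :: [(String, String)] }

type Handler = Reader Request

-- Implicit style: the handler can reach into the ambient request at will.
handlerImplicit :: Handler String
handlerImplicit = do
    mua <- asks (lookup "User-Agent" . reqHeaders)
    return (maybe "unknown client" ("hello " ++) mua)

-- Explicit style: the header dependency is visible in the type.
handlerExplicit :: Maybe String -> String
handlerExplicit = maybe "unknown client" ("hello " ++)

demo :: (String, String)
demo =
    ( runReader handlerImplicit (Request [("User-Agent", "curl")])
    , handlerExplicit (Just "curl")
    )

The first handler can reach for any header it likes without telling anyone; the second can only see what its type admits, which is the kind of explicitness that makes things like generated API bindings possible.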

My 5-year-more-Haskell-experienced self is very much intrigued by the Servant approach. It seems more sound. And yet, I still interact regularly with less experienced Haskellers. And just this past week, this kind of requirement to declare all data explicitly came up in practice for a customer, and made adopting Haskell harder for them.

I'm feeling the same tension again. The Yesod API seems to appeal to what the beginner would expect at a straightforward level: if you want to send a redirect, you just call the redirect function, and don't need to worry about changing type signatures around. Need a request header? Ask for it. However, I know that being explicit about these things gives great benefit (in Servant's case, it has the ability to generate API bindings that Yesod doesn't have). But even so, today when I write a web app in Haskell, I still like the ability to use the "beginner-friendly" API without the explicitness.

This is where I'm stuck. There are clearly benefits to each kind of approach in these dichotomies. And I'm sure there are more dichotomies than I've listed. The kind of questions I've been pondering are:

  1. If I would design an API differently today than in the past, based on my greater Haskell experience, what does that mean? Is the new API better? Or have I lost touch with what's most useful, and am instead chasing a less important ideal?
  2. Is there an inherent tension between beginner-friendly and expert-friendly APIs for some of these things, and are the two groups best served by separating out the APIs?
  3. When do I give in to the voice that says "you could rewrite that to be cleaner," and when do I say "no, the simple code is better, don't complicate it in the name of cleanliness"?

Sorry for the mostly unintelligible rambling. In the interest of sometimes listening to the less experienced person inside of me, I'm posting this without much review, in case I'm actually hitting on an important topic. If not, please ignore me.

And in case anyone's curious, a lot of this came up when working on wai-frontend-monad, which attempts to extract the HandlerT transformer from Yesod into something more generally useful, and in a more modern coding style. That work (combined with lack of sleep on a plane) seems to have opened up some existential dilemmas ;).

Categories: Offsite Blogs

leksah with UTF-8 files

Haskell on Reddit - Fri, 10/30/2015 - 12:11am

Hi,

Does anyone know why leksah 0.10.0.4 on OS X Yosemite does not display my UTF-8 encoded source files correctly? (I am using utf8-string 1.0.1.1.) Candy is disabled. Output to the console through leksah does not display UTF-8 correctly either.

submitted by mariatsji
Categories: Incoming News

wren gayle romano: Limitations of strongly-typed ABTs

Planet Haskell - Thu, 10/29/2015 - 11:17pm

Last time I talked a bit about ABTs; in particular, I introduced the notion of strongly-typed ABTs (or "GABTs" if you prefer) and showed how we can extend the basic idea of ABTs to guarantee well-typedness in addition to well-aritiedness. However, I also made a note that ensuring this sort of well-typedness runs counter to what Neel and other CMUers often do. One of my colleagues here at IU noticed the reason, so I thought I'd write a bit more about it.

The issue at stake here is how general we can make our ABT library, to minimize the amount of boilerplate needed whenever inventing a new language. By encoding object-language type systems into the kinding of the ABT, we restrict the possible object languages we can use the ABT implementation for (namely those object languages with type systems that can be embedded into whatever kinding the ABT has). To put a finer point on it, using the kinds presented in the previous post you cannot have binders in your type system. This means no System F, and no dependent types. This is unfortunate as the whole point of ABTs is to capture binding structure once and for all!

However, I'd like to reiterate that, for our purposes in Hakaru this limitation is no restriction. Hakaru is simply-typed, so there are no type-level binders in sight. Moreover, we do a lot of program transformations in Hakaru. By using GABTs we can have GHC verify that our program transformations will never produce Hakaru code which is ill-typed, and that our program transformations will always produce Hakaru code of an appropriate type (e.g., the same type as the input term, for things like partial evaluation; but we have a number of type-changing transformations too). Thus, even though our GABT library could not be reused for implementing languages with type-level binders, it still provides a substantial benefit for those languages without type-level binders.

Although our GABTs cannot handle type-level binders, that does not mean we're restricted to only working with simply typed languages. For example, intersection types are not usually thought of as "simple types"; but they do not require binders and so they're fine. More generally, Larry Moss is engaged in a research project where he asks, "given infinite time, how far could Aristotle have gotten in logic?" By which he means, given the Aristotelian restriction to syllogistic logics (i.e., ones without the quantifiers introduced by Frege), what are the limits in what we can cover? It turns out that we can cover quite a lot. Some syllogistic logics go beyond the power of the "Peano–Frege" boundary: they can handle comparing cardinality of sets! A good pictorial summary of this line of research is on slide 2 of this talk; and a bit more about the complexity results is given in this talk (the requisite picture is on slide 39).

Categories: Offsite Blogs

Why is this reactive-banana code slow?

Haskell on Reddit - Thu, 10/29/2015 - 11:14pm

I've been messing around with the idea of using haskell for working with audio data recently, and I figured I'd give FRP a try since it seems like a cool idea. I went with reactive-banana, and everything was going well until I realized that even generating a saw wave was taking far too long. I then rewrote it using foldM to achieve roughly the same effect, and got a significant speedup.

foldM version:
    real    0m0.307s
    user    0m0.260s
    sys     0m0.040s

reactive-banana version:
    real    0m4.554s
    user    0m4.470s
    sys     0m0.067s

Additionally, when expanding my code to generate 3 saw waves simultaneously, the foldM version stayed at around 0.3 seconds, while the reactive-banana version tripled to around 12 seconds.

Now, I expected that the abstraction of FRP wouldn't come free, but I didn't expect the speed difference to be so drastic. This leads me to believe that I may be using the library incorrectly.

I've uploaded the code, which generates a small part of Fur Elise: https://github.com/unknownloner/slow-frp

Am I using reactive-banana incorrectly here, or does it really have that much overhead?

As a side note, the audio is output at 44100 Hz in S16_LE format, so if you want to listen to it to verify that it is correct, you can pipe it to a program like aplay, or import it as raw data into Audacity.
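For reference, generating a saw wave sample by sample with a plain accumulating loop looks roughly like the following sketch; the names, sample-rate handling, and 16-bit scaling here are assumptions for illustration, not the code in the linked repo.

import Data.Int (Int16)

-- Accumulate a phase in [0,1) per sample and map it onto the signed
-- 16-bit range expected by S16_LE output.
sawSamples :: Double -> Double -> Int -> [Int16]
sawSamples sampleRate freq n =
    [ round ((2 * phase - 1) * 32767) | phase <- take n (iterate step 0) ]
  where
    inc    = freq / sampleRate
    step p = let p' = p + inc in p' - fromIntegral (floor p' :: Int)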

submitted by Unknownloner
Categories: Incoming News

haskell war game

Haskell on Reddit - Thu, 10/29/2015 - 7:25pm

Hey, I'm working on a project for my college class: we need to recreate the card game War in Haskell. It's only a 100-level class, so we only know the basics. We're stuck on making the war function recur while keeping the cards played in each repeated war available to the winner. So far this is what we have:

war :: ([Int], [Int]) -> ([Int], [Int])
war (x:xs, y:ys)
    | length (x:xs) < 3 = ([], [y] ++ ys ++ [x] ++ xs)
    | length (y:ys) < 3 = ([x] ++ xs ++ [y] ++ ys, [])
    | head (drop 3 xs) > head (drop 3 ys) =
        (drop 4 xs ++ [y] ++ take 4 ys ++ [x] ++ take 4 xs, drop 4 ys)
    | head (drop 3 xs) < head (drop 3 ys) =
        (drop 4 xs, drop 4 ys ++ [x] ++ take 4 xs ++ [y] ++ take 4 ys)
    | otherwise =
        war (drop 4 xs ++ [x] ++ take 4 xs, drop 4 ys ++ [y] ++ take 4 ys)

If you guys could help, it would be greatly appreciated.

submitted by ASAPberndaddy
Categories: Incoming News

Using pipes or conduit for a GUI

Haskell on Reddit - Thu, 10/29/2015 - 7:06pm

Does anyone here have any experience using pipes or conduit for a GUI?

I've been exploring the Haskell GUI and OpenGL ecosystems for some personal projects. I really like the core ideas of FRP and reactive-banana, but I've recently discovered the various streaming libraries for Haskell. I think they could make some nice composable GUI components, and I thought that surely someone has thought of this before and could provide some info or tips.

Using a streaming library for a GUI looks like Haskell's analog of Reactive Extensions. I have had a wonderful experience using Rx for a large GUI I am building at work, and I think conduit or pipes could provide a similarly pleasant set of abstractions.

As an example use case, I am currently working with handling GLFW/OpenGL mouse click, scroll, and movement events. I am trying to manually handle the state between events, such as recording the previous cursor position and any pressed mouse buttons. In Rx, I could easily compose some streams of these events and perform simple operations on the streams to determine if the user clicked or dragged. Some examples of the usefulness of a pipeline for mouse events:

  1. If there is a mouse button press and a mouse button release immediately after in the pipeline, then the user meant to do a click (not a drag). In a purely callback-based approach, I have to keep a data structure that records all mouse events in a TVar and query it every time the callback is called. I also have to record if any mouse movement occurred between button events. Ugh!
  2. If there is a mouse button press event, then movement events, then a release event, then I can assume the user dragged. Again, this is vastly simpler than the purely callback-based approach with a TVar.
  3. I can accumulate the currently pressed buttons into a list to determine if I should do a drag action with the left button, right button, or both. When a button is pressed, add it to the list, and when it is released, remove it from the list. I assume I can do some kind of accumulation like this in the streaming library.

Anyway, I really like the idea of structuring GUI behavior as "pipelines". I am just trying to figure out which style I like better between FRP, Rx, and now pipes/conduit.
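As an illustration of points 1 and 2, a click-versus-drag classifier can be written as a single stream transformer. This is only a sketch, assuming a recent conduit (ConduitT, await, yield, and awaitForever from Data.Conduit) and made-up MouseEvent/Gesture types rather than real GLFW events.

{-# LANGUAGE LambdaCase #-}
import Data.Conduit

data MouseEvent = Press | Move | Release deriving (Eq, Show)
data Gesture    = Click | Drag            deriving (Show)

-- Turn a stream of raw mouse events into a stream of gestures: a press
-- followed directly by a release is a Click, a press with movement in
-- between is a Drag.
gestures :: Monad m => ConduitT MouseEvent Gesture m ()
gestures = awaitForever $ \case
    Press -> go False
    _     -> return ()              -- ignore events outside a press
  where
    go moved = await >>= \case
        Nothing      -> return ()
        Just Move    -> go True
        Just Press   -> go moved
        Just Release -> yield (if moved then Drag else Click)

The per-gesture state (did the mouse move?) lives in the recursion rather than in a shared TVar, which is the simplification described above.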

submitted by jdreaver
Categories: Incoming News

Jasper Van der Jeugt: Tries and elegant Scope Checking

Planet Haskell - Thu, 10/29/2015 - 6:00pm
Introduction

This blogpost is mostly based upon a part of the talk I recently gave at the Haskell eXchange. I discussed scope checking – also referred to as scope analysis or renaming. While the talk focussed on Ludwig, a DSL used to program Fugue, the ideas around scope checking are broadly applicable, so in this blogpost we use a simple toy language.

> {-# LANGUAGE DeriveFoldable    #-}
> {-# LANGUAGE DeriveFunctor     #-}
> {-# LANGUAGE DeriveTraversable #-}
> import qualified Data.HashMap.Strict    as HMS
> import           Data.Hashable          (Hashable)
> import           Data.List              (foldl')
> import           Data.Either.Validation (Validation (..),
>                                          validationToEither)
> import           Prelude                hiding (lookup)

This part of a compiler/interpreter is concerned with resolving occurrence names to full names. Occurrence names are just what the programmer uses in the source file, while full names contain more information.

I think this is an interesting area to explore. The vast majority of articles about creating parsers and interpreters just use Strings as names, in order to keep things simple (which is of course fully justified). This blogpost, on the other hand, explains what you can do if things become a bit more complicated.

Consider the following Haskell snippet:

import qualified Data.HashMap.Strict as HMS

emptyThing = HMS.empty

HMS.empty is an occurrence name. The full name, on the other hand, is something like unordered-containers-0.2.5.1:Data.HashMap.Base.empty. Let’s get started by representing these types in Haskell:

> -- E.g. ["HMS", "empty"].
> type OccName = [String]
>
> -- E.g. ["Data", "HashMap", "Strict"]
> type ModuleName = [String]
>
> -- Just an example of what sort of things can be in 'FullName'.
> data BindingScope = ToplevelScope | LocalScope
>     deriving (Show)
>
> data FullName = FullName
>     { fnOccName      :: !OccName
>     , fnModuleName   :: !ModuleName
>     , fnBindingScope :: !BindingScope
>     } deriving (Show)

Note that this is just a toy example. Firstly, we can use more efficient representations for the above, and we might want to add newtype safety. Secondly, we might also want to store other things in FullName, for example the package where the name originated. The FullName record can really get quite big.

Now that we have two name types, OccName and FullName, we can parametrise our abstract syntax tree over the name type.

> data Expr n
>     = Literal Int
>     | Add (Expr n) (Expr n)
>     | Var n
>     deriving (Show)

Now, we can formalise the problem of scope checking a bit more: it is a function which turns an Expr OccName into an Expr FullName.

Tries

In order to implement this, it is clear that we need some sort of "Map" to store the FullName information. The specific data structure we will use is a Trie. Tries are somewhat similar to Radix trees, but significantly simpler. We will implement one here for educational purposes.

A Trie k v can be seen as a mapping from lists of keys to values, so it could be defined as:

type Trie k v = HMS.HashMap [k] v

However, there is a nicer representation which we will need in order to support some fast operations.

First, we need a quick-and-dirty strict Maybe type.

> data M a = J !a | N
>     deriving (Foldable, Functor, Show, Traversable)

Note how we automatically added Foldable, Functor and Traversable instances for this type. Thanks GHC!

Then, we can define Trie in a recursive way:

> data Trie k v = Trie
>     { tValue    :: !(M v)
>     , tChildren :: !(HMS.HashMap k (Trie k v))
>     } deriving (Foldable, Functor, Show, Traversable)

We can have a value at the root (tValue), and then the other elements in the Trie are stored under the first key of their key list (in tChildren).

Now it is time to construct some machinery to create Tries. The empty Trie [1] is really easy:

> empty :: Trie k v
> empty = Trie N HMS.empty

Let’s draw the empty Trie as a simple box with an N value, since it has no value and no children.

The empty trie

We can also define a function to create a Trie with a single element. If the list of keys is empty, we simply have a J value at the root. Otherwise, we define the function recursively.

> singleton :: (Eq k, Hashable k) => [k] -> v -> Trie k v
> singleton []       x = Trie (J x) HMS.empty
> singleton (k : ks) x = Trie N (HMS.singleton k (singleton ks x))

As an example, this is the result of the call singleton ["foo", "bar"] "Hello World".

A singleton trie

We can skip insert and simply create a unionWith function instead. This function unifies two Tries, while allowing you to pass in a function that decides how to merge the two values if there is a key collision.

> unionWith
>     :: (Eq k, Hashable k)
>     => (v -> v -> v) -> Trie k v -> Trie k v -> Trie k v
> unionWith f (Trie v1 c1) (Trie v2 c2) =
>     Trie v $ HMS.unionWith (unionWith f) c1 c2
>   where
>     v = case (v1, v2) of
>         (N,   _)   -> v2
>         (_,   N)   -> v1
>         (J x, J y) -> J (f x y)

The bulk of the work is of course done by HMS.unionWith. This is the result of calling unionWith (\x _ -> x) (singleton ["foo"] "Hello") (singleton ["bar"] "World"):

unionWith example

For convenience, we can then extend unionWith to work on lists:

> unionsWith
>     :: (Eq k, Hashable k)
>     => (v -> v -> v) -> [Trie k v] -> Trie k v
> unionsWith f = foldl' (unionWith f) empty

A last function we need to modify tries is prefix. This function prefixes a whole Trie by nesting it under a list of keys. Because of the way our Trie is represented, this can be done efficiently and we don’t need to change every key.

> prefix :: (Eq k, Hashable k) => [k] -> Trie k v -> Trie k v
> prefix []       trie = trie
> prefix (k : ks) trie = Trie N $ HMS.singleton k (prefix ks trie)

This is the result of prefixing the trie from the previous example with ["qux"]:

prefix example

In addition to creating Tries, we also need to be able to lookup stuff in the Trie. All we need for that is a simple lookup function:

> lookup :: (Eq k, Hashable k) => [k] -> Trie k v -> Maybe v
> lookup [] (Trie N     _) = Nothing
> lookup [] (Trie (J x) _) = Just x
> lookup (k : ks) (Trie _ children) = do
>     trie <- HMS.lookup k children
>     lookup ks trie

These are all the Trie functions we need. A real implementation would, of course, offer more.

The scope type

Now, recall that we’re trying to resolve the occurrence names in a module into full names. We will tackle this from the opposite direction: we’ll gather up all the names which are in scope into one place. After this, actually, resolving an occurrence name is as simple as performing a lookup.

In order to gather up all these names we need some datatype – which is, of course, the Trie we just implemented!

> type Scope a = Trie String a

We will differentiate between two different kinds of scopes (hence the a). An AmbiguousScope might contain duplicate names. In that case, we want to throw an error or show a warning to the user. In an UnambiguousScope, on the other hand, we know precisely what every name refers to.

> type AmbiguousScope   = Scope [FullName]
> type UnambiguousScope = Scope FullName

Let’s first focus on building AmbiguousScopes. We will later see how we can validate these and convert them into an UnambiguousScope.

Building a scope for one module

In order to build a scope, let’s start with a simple case. Let’s look at a sample module in our DSL and construct a scope just for that module.

module Calories.Fruit where

apple  = 52
banana = 89

We need to have some intuition for how such a module is represented in Haskell. Let’s try to keep things as simple as possible:

> data Module n = Module
>     { mName     :: !ModuleName
>     , mBindings :: [Binding n]
>     } deriving (Show)
>
> data Binding n = Binding
>     { bName :: !n
>     , bBody :: !(Expr n)
>     } deriving (Show)

We can define a function to convert this module into a local Scope which contains all the bindings in the module. In order to keep things simple, we assume every binding in a module is always exported.

> scopeFromModule :: Module OccName -> AmbiguousScope
> scopeFromModule m =
>     unionsWith (++) $ map scopeFromBinding (mBindings m)
>   where
>     scopeFromBinding :: Binding OccName -> AmbiguousScope
>     scopeFromBinding b = singleton (bName b)
>         [ FullName
>             { fnOccName      = bName b
>             , fnModuleName   = mName m
>             , fnBindingScope = ToplevelScope
>             }
>         ]

For our example module, we obtain something like:

The fruit module scope

Multiple imports

Of course, a realistic program will import multiple modules. Imagine a program with the following import list:

import Calories.Fruit
import qualified Calories.Pie as Pie

-- An apple and an apple pie!
combo = apple + Pie.apple

In order to build the Scope for the program, we need three more things:

  1. Joining a bunch of Scopes, one for each import statement (plus the local scope, and maybe a builtin scope…);
  2. Qualifying a Scope, so that the qualified imports end up under the right name;
  3. Finally, converting the AmbiguousScope into an UnambiguousScope.

The first one is easy, since we have our Trie machinery.

> unionScopes :: [AmbiguousScope] -> AmbiguousScope
> unionScopes = unionsWith (++)

So is the second:

> qualifyScope :: [String] -> AmbiguousScope -> AmbiguousScope
> qualifyScope = prefix

We can now build the scope for our little program. It is:

> myScope :: AmbiguousScope
> myScope = unionScopes
>     [ scopeFromModule myModule  -- Defines 'combo'
>     , scopeFromModule fruitModule
>     , qualifyScope ["Pie"] $ scopeFromModule pieModule
>     ]

We get something like:

myScope

Great! So now the problem is that we’re left with an AmbiguousScope instead of an UnambiguousScope. Fortunately we can convert between those fairly easily, because Trie (and by extension Scope) is Traversable:

> toUnambiguousScope
>     :: AmbiguousScope -> Validation [ScopeError] UnambiguousScope
> toUnambiguousScope = traverse $ \fullNames -> case fullNames of
>     [single] -> pure single
>     []       -> Failure [InternalScopeError "empty list in scope"]
>     multiple -> Failure [AmbiguousNames multiple]

It is perhaps worth noting that this behaviour is different from GHC [2].

By using the Validation Applicative, we ensure that we get as many error messages as we can. We have a nice datatype which describes our possible errors:

> data ScopeError
>     = AmbiguousNames [FullName]
>     | NotInScope OccName
>     | InternalScopeError String  -- For other failures
>     deriving (Show)

Scope checking an expression

That is everything we needed to build an UnambiguousScope, so we can now scope check a program. The actual scope checking itself is very straightforward:

> scExpr
>     :: UnambiguousScope -> Expr OccName
>     -> Validation [ScopeError] (Expr FullName)
> scExpr _ (Literal x) = pure (Literal x)
> scExpr s (Add x y)   = Add <$> scExpr s x <*> scExpr s y
> scExpr s (Var n)     = Var <$> scOccName s n
>
> scOccName
>     :: UnambiguousScope -> OccName
>     -> Validation [ScopeError] FullName
> scOccName s n = case lookup n s of
>     Just fullName -> pure fullName
>     Nothing       -> Failure [NotInScope n]

Conclusion

I have described a simple and (in my opinion) elegant approach to scope checking. I hope it is inspiring if you are ever in a situation where modules would be a nice extension to some DSL (or full-fledged programming language) you are implementing.

We’ve also seen how one can implement a Trie in a reasonably easy way. These often come in handy when you are modelling some sort of hierarchical Map.

This entire blogpost is written in Literate Haskell, and works as a standalone example for scope checking. If you feel up to the challenge, try to add Let-bindings as an exercise! You can find the raw .lhs file here.

Appendix

This is the rest of the source code to this blogpost, in order to make it testable (and hackable!).

> fruitModule :: Module OccName
> fruitModule = Module
>     { mName     = ["Calories.Fruit"]
>     , mBindings =
>         [ Binding ["apple"]  (Literal 52)
>         , Binding ["banana"] (Literal 89)
>         ]
>     }
>
> pieModule :: Module OccName
> pieModule = Module
>     { mName     = ["Calories.Pie"]
>     , mBindings =
>         [ Binding ["apple"]     (Literal 240)
>         , Binding ["blueberry"] (Literal 371)
>         ]
>     }
>
> myModule :: Module OccName
> myModule = Module
>     { mName     = ["Main"]
>     , mBindings =
>         [ Binding ["combo"] $ Add
>             (Var ["apple"])
>             (Var ["Pie", "apple"])
>         ]
>     }
>
> scModule
>     :: UnambiguousScope -> Module OccName
>     -> Validation [ScopeError] (Module FullName)
> scModule s (Module n bs) = Module n <$> traverse (scBinding s) bs
>
> scBinding
>     :: UnambiguousScope -> Binding OccName
>     -> Validation [ScopeError] (Binding FullName)
> scBinding s (Binding n e) = Binding <$> scOccName s n <*> scExpr s e
>
> main :: IO ()
> main = do
>     let ambiguous = unionScopes
>             [ scopeFromModule fruitModule
>             , qualifyScope ["Pie"] $ scopeFromModule pieModule
>             , scopeFromModule myModule
>             ]
>
>     print $ do
>         unambiguous <- validationToEither $ toUnambiguousScope ambiguous
>         validationToEither $ scModule unambiguous myModule
  1. Actually, in this representation, there is no “the” empty trie, since one can represent an empty trie in infinite ways.

  2. GHC only reports ambiguity errors for imported names when they are actually used, not when they are imported. We could also achieve this behaviour by continuing with the AmbiguousScope and throwing an error from scOccName when there is ambiguity.
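A sketch of that alternative, reusing the types defined above (shown outside the literate source, and only as an illustration):

scOccNameLazy
    :: AmbiguousScope -> OccName
    -> Validation [ScopeError] FullName
scOccNameLazy s n = case lookup n s of
    Just [single]       -> pure single
    Just multiple@(_:_) -> Failure [AmbiguousNames multiple]
    _                   -> Failure [NotInScope n]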

Categories: Offsite Blogs

Proposal: make minimumBy/maximumBy go through foldl', not foldr1

libraries list - Thu, 10/29/2015 - 4:34pm
I just realized (thanks to this reddit thread [1]) that minimumBy is now implemented through foldr1. Previously (before FTP), minimumBy from Data.List used foldl1, while minimumBy from Data.Foldable used foldr1. I think that the way these were merged was a mistake.

Let's admit that the absolute majority of use cases for minimumBy are numbers, for which the comparison functions are strict in both arguments. Even for something like Bool, where comparison could potentially be lazy in one of the arguments, the standard compare method is derived and is strict.

Now, for Foldable, there isn't necessarily a connection between the direction and the way the fold is performed. For example, for Data.Map.Map both folds are tail-recursive. Yet foldr is non-tail-recursive at least for:

- good old lists
- anything that uses the same convention for foldl/foldr as lists
- anything that defines its Foldable instance through conversion to lists
- anything that uses the default implementation of foldr/foldr1

I also put a simple microb
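For concreteness, a foldl'-based definition along the lines the proposal argues for might look like the following sketch (an illustration, not the actual patch):

import qualified Data.Foldable as F
import Data.Maybe (fromMaybe)

-- A strict left fold: the accumulator is forced at every step, so no
-- chain of comparison thunks builds up, even for plain lists.
minimumBy' :: Foldable t => (a -> a -> Ordering) -> t a -> a
minimumBy' cmp =
    fromMaybe (error "minimumBy': empty structure") . F.foldl' step Nothing
  where
    step Nothing  y = Just y
    step (Just x) y = Just $! case cmp x y of
        GT -> y
        _  -> x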
Categories: Offsite Discussion

2nd CfP: Haskell in Leipzig (Germany) 2015

haskell-cafe - Thu, 10/29/2015 - 1:10pm
HaL-10: Haskell in Leipzig (December 4/5)
http://nfa.imn.htwk-leipzig.de/HAL2015/

We are proud to present Joachim Breitner (nomeata) as our invited speaker. The submission deadline (November 2) is approaching!

See you - Johannes Waldmann (PC chair)
Categories: Offsite Discussion

2nd CfP: Haskell in Leipzig (Germany) 2015

General haskell list - Thu, 10/29/2015 - 1:08pm
HaL-10: Haskell in Leipzig (December 4/5)
http://nfa.imn.htwk-leipzig.de/HAL2015/

We are proud to present Joachim Breitner (nomeata) as our invited speaker. The submission deadline (November 2) is approaching!

See you - Johannes Waldmann (PC chair)
Categories: Incoming News

Why does this Haskell run 400 times slower than the equivalent OCaml?

Haskell on Reddit - Thu, 10/29/2015 - 10:12am

UPDATE: due to the changes suggested here the program is now...slower! Execution time has gone from 640 seconds to 880.

UPDATE 2: Actually, if I remove the -threaded option, performance improves from 640 to 620 seconds.

UPDATE 3: It looks like there is a chunk of code missing in the Haskell version, and this is causing the problem. I will fix and upload.

FINAL UPDATE: New code checked in; the GHC version now runs in about 0.7 seconds vs. OCaml's 2.5.

This repo contains part of John Harrison's automatic theorem prover code from his book "Handbook of Practical Logic and Automated Reasoning", along with a literal translation of some of that code (specifically, the MESON prover) into 500 lines of Haskell (thanks to Ruben Zilibowitz for this code). Unfortunately, the Haskell version runs about 400 times more slowly than the OCaml version, and I'm trying to find out why.

% cd ocaml
% apt-get install camlp5
% ocaml
# #use "init.ml";;
# meson wishnu;;
Searching with depth limit 0
Searching with depth limit 1
Searching with depth limit 2
Searching with depth limit 3
[counts to 16 twice.]
Searching with depth limit 14
Searching with depth limit 15
Searching with depth limit 16
- : int list = [16; 16]
#

To run the Haskell version:

% cd haskell
% ghc -O2 Main.hs -o test
% ./test
Searching with depth limit 0
Searching with depth limit 1
Searching with depth limit 2
Searching with depth limit 3
[... fifteen minutes later ...]
Searching with depth limit 16
[16,16]

My question is, what the heck?

submitted by dsfox
Categories: Incoming News