News aggregator

Edward Z. Yang: Parsec: “try a <|> b” considered harmful

Planet Haskell - Sat, 05/17/2014 - 7:46pm
tl;dr The scope of backtracking try should be minimized, usually by placing it inside the definition of a parser.

Have you ever written a Parsec parser and gotten a really uninformative error message?

"test.txt" (line 15, column 7): unexpected 'A' expecting end of input

The line and the column are randomly somewhere in your document, and you're pretty sure you should be in the middle of some stack of parser combinators. But wait! Parsec has somehow concluded that the document should be ending immediately. You noodle around and furthermore discover that the true error is some ways after the actually reported line and column.

You think, “No wonder Parsec gets such a bad rep about its error handling.”

Assuming that your grammar in question is not too weird, there is usually a simple explanation for an error message like this: the programmer sprinkled their code with too many backtracking try statements, and the backtracking has destroyed useful error state. In effect, at some point the parser failed for the reason we wanted to report to the user, but an enclosing try statement forced the parser to backtrack and try another (futile) possibility.

This can be illustrated by way of an example. A Haskeller is playing around with parser combinators and decides to test out their parsing skills by writing a parser for Haskell module imports:

stmt ::= import qualified A as B | import A

Piggy-backing off of Parsec’s built in token combinators (and the sample code), their first version might look something like this:

import Text.Parsec
import qualified Text.Parsec.Token as P
import Text.Parsec.Language (haskellDef)

data Stmt = QualifiedImport String String
          | Import String
          deriving (Show)

pStmt = pQualifiedImport <|> pImport

pQualifiedImport = do
    reserved "import"
    reserved "qualified"
    i <- identifier
    reserved "as"
    i' <- identifier
    return (QualifiedImport i i')

pImport = do
    reserved "import"
    i <- identifier
    return (Import i)

lexer = P.makeTokenParser (haskellDef
    { P.reservedNames = P.reservedNames haskellDef ++ ["qualified", "as"] })

identifier = P.identifier lexer
reserved = P.reserved lexer

parseStmt input = parse (pStmt >> eof) "(unknown)" input

Unfortunately, the parser doesn't work for regular imports—they get this error message:

*Main> parseStmt "import Foo"
Left "(unknown)" (line 1, column 8):
unexpected "F"
expecting "qualified"

After a little Googling, they discover that Parsec doesn’t backtrack by default. Well, that’s fine; why not just insert a try into the parser?

pStmt = try pQualifiedImport <|> pImport

This fixes both parses and suggests the following rule for writing future parsers:

If I need choice over multiple parsers, but some of these parsers might consume input, I better tack a try onto each of the parsers, so that I can backtrack.

Unbeknownst to the user, they have introduced bad error reporting behavior:

*Main> parseStmt "import qualified Foo s B"
Left "(unknown)" (line 1, column 17):
unexpected reserved word "qualified"
expecting letter or digit or "#"

Wait a second! The error we wanted was that there was an unexpected identifier s, when we were expecting as. But rather than reporting an error when this occurred, Parsec backtracked and attempted to match the pImport rule, only failing once that rule failed. By then, the knowledge that one of our choice branches had failed was lost forever.

How can we fix it? The problem is that our code backtracks when we, the developer, know it will be futile. In particular, once we have parsed import qualified, we know that the statement is, in fact, a qualified import, and we shouldn’t backtrack anymore. How can we get Parsec to understand this? Simple: reduce the scope of the try backtracking operator:

pStmt = pQualifiedImport <|> pImport

pQualifiedImport = do
    try $ do
        reserved "import"
        reserved "qualified"
    i <- identifier
    reserved "as"
    i' <- identifier
    return (QualifiedImport i i')

Here, we have moved the try from pStmt into pQualifiedImport, and we only backtrack if import qualified fails to parse. Once it parses, we consume those tokens and we are now committed to the choice of a qualified import. The error messages get correspondingly better:

*Main> parseStmt "import qualified Foo s F"
Left "(unknown)" (line 1, column 22):
unexpected "s"
expecting "as"

The moral of the story: The scope of backtracking try should be minimized, usually by placing it inside the definition of a parser. Some amount of cleverness is required: you have to be able to identify how much lookahead is necessary to commit to a branch, which generally depends on how the parser is used. Fortunately, many languages are constructed specifically so that the necessary lookahead is not too large, and for the types of projects I might use Parsec for, I’d be happy to sacrifice this modularity.
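(An aside not in the original post: for this particular grammar you can go further and avoid try altogether by left-factoring, so that both alternatives share the leading import keyword. A sketch, reusing the combinators defined above and relying on the token-level reserved combinator failing without consuming input, which it does because makeTokenParser builds it with an internal try:)

pStmt = do
    reserved "import"
    pQualifiedRest <|> pImportRest
  where
    pQualifiedRest = do
        reserved "qualified"
        i <- identifier
        reserved "as"
        i' <- identifier
        return (QualifiedImport i i')
    pImportRest = do
        i <- identifier
        return (Import i)

This is exactly the modularity trade-off mentioned above: the two import forms are no longer independent parsers.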

Another way of looking at this fiasco is that Parsec is at fault: it shouldn’t offer an API that makes it so easy to mess up error messages—why can’t it automatically figure out what the necessary lookahead is? While a traditional parser generator can achieve this (and improve efficiency by avoiding backtracking altogether in our earlier example), there are some fundamental reasons why Parsec (and monadic parser combinator libraries like it) cannot automatically determine what the lookahead needs to be. This is one of the reasons (among many) why many Haskellers prefer faster parsers which simply don’t try to do any error handling at all.

Why, then, did I write this post in the first place? There is still a substantial amount of documentation recommending the use of Parsec, and a beginning Haskeller is more likely than not going to implement their first parser in Parsec. And if someone is going to write a Parsec parser, you might as well spend a little time to limit your backtracking: it can make working with Parsec parsers a lot more pleasant.

Categories: Offsite Blogs

My first "real" Haskell program - how am I doing?

Haskell on Reddit - Sat, 05/17/2014 - 5:35pm

It's a compiler from MiniJava to MIPS: https://github.com/chrismwendt/MiniJava

This compiler is modeled after the one we wrote in compilers class in Java. I had been itching to give it a shot in Haskell but couldn't muster the confidence to dive in until a few weeks ago.

I have probably spent on the order of a few hundred hours learning and programming in Haskell, starting out with some of the 99 Haskell problems and moving on to complete roughly 70 Project Euler problems. This is my first Haskell program which is over 100 lines, and I'm pretty happy with the result, but I would love to get feedback from more experienced Haskellers.

I really tried to branch out from what I was familiar with and use appropriate packages like parsec, lens, fgl, disjoint-set, monad transformers, and applicative style. Let me know how I'm doing =)

submitted by Total_1mmersion
[link] [7 comments]
Categories: Incoming News

New with haskell. Fixing a stack space overflow? And making code faster and/or more idiomatic in general?

Haskell on Reddit - Sat, 05/17/2014 - 4:43pm

I got the code for modular exponentiation from Rosetta Code, and it does work with toy cases. I'm trying a bigger case out now and while I don't see how something so lazy is running out of stack space, it managed to do so.

So the code is as follows. I tried to stay away from loops and go with what I would do functionally, but I still use the "take 2" because I only need one result from a list of a ton of results, and I know that once it finds the second result it will terminate. The comments in the annotation explain what I'm trying to do, and editing and fixing it up there is helpful. Don't feel too bad about helping me here, since this is derived from an assignment for a math class not related to Haskell, working on a problem we were never asked to tackle. If it makes any difference, I already know I passed the math class, and this code won't be useful for people taking the class in the future unless they already know enough about the problem to make using this code overkill.

Code pasted here, annotation of my thoughts in comments on bottom: http://lpaste.net/104249

So any help on not using up so much stack space (I really don't know why it is using more than the bare minimum to be honest) would be appreciated, as well as comments on general coding style since this is a way I can play with the arbitrary math in Haskell as part of a larger goal of learning haskell and adding it to my toolbox of languages. The problem seemed well suited for a functional language, as well as a fast one.
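(For illustration only, since the linked paste is not reproduced here: the usual culprit for a stack overflow in this kind of loop is a lazy accumulator, and the usual fix is to force it at each step. A generic sketch of modular exponentiation by repeated squaring with strict accumulators:)

{-# LANGUAGE BangPatterns #-}

-- b^e mod m, accumulating strictly so that no chain of unevaluated
-- (acc * x) `mod` m thunks builds up.
powMod :: Integer -> Integer -> Integer -> Integer
powMod _ _ 1 = 0
powMod b e m = go (b `mod` m) e 1
  where
    go _ 0 acc = acc
    go !x n !acc
      | odd n     = go ((x * x) `mod` m) (n `div` 2) ((acc * x) `mod` m)
      | otherwise = go ((x * x) `mod` m) (n `div` 2) acc

The same idea (bang patterns, seq, or Data.List.foldl') applies to any left-fold-shaped loop in the search.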

Speaking of fast: I notice that even when I compile with -O2 and -threaded and tell it to use 4 threads (or 8, but I'm on a 4 core processor) it does use all the cores, but only about 30 percent max on each. I get more performance than using 1 core would give me, but if I can get 100% utilization of even 3 of the 4 cores (i.e. at least 300% of my cpu, instead of the 120% I can get right now) I would be happy. Do you know what is "wasting" cpu cycles and stopping it from fully utilizing the CPU?
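(A sketch under assumptions, since the real search loop is only linked above: if each candidate is tested independently over a bounded range, forcing the per-candidate results in parallel chunks with Control.Parallel.Strategies usually keeps the cores busy, because the sparked work is actually evaluated rather than deferred to the main thread. The test predicate here is a made-up stand-in.)

import Control.Parallel.Strategies (parListChunk, rdeepseq, withStrategy)

-- Note: this forces the whole candidate list, so it suits a bounded range
-- rather than an infinite lazy search.
searchPar :: (Integer -> Bool) -> [Integer] -> [Integer]
searchPar test candidates =
    map fst
  . filter snd
  . withStrategy (parListChunk 1024 rdeepseq)
  $ map (\n -> (n, test n)) candidates

Compile with -threaded and run with +RTS -N so the sparks can run on all cores.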

Thanks in advance for any help! If anyone here knows D, I am trying it with BigInt too, but even though D does offer simpler parallelism than other languages in the class, it still seems rough around the edges. Besides, using std.range.iota to parallelize testing each number doesn't let me use BigInt, so it might be a useless exercise there.

Any tips on the math (Only test odd numbers, if you have a proof that there are no even answers for example) are useful too but I'm mostly looking for help on the stack space issue and general coding style tips.

Thanks!

EDIT: News from an 8 core CPU (with more memory as well): I haven't run out of stack space yet, but I'm still only getting 30% usage overall (this time no single core is at 100%; all 8 cores are in use), so even though I see it running with 8 threads, each CPU is giving me at most 25% utilization at any point in time, and often only 15%.

EDIT 2: Forgot to knock on wood, ran out of stack space on the 8 core machine too. I would wonder if giving it all 16 gigs would do better than only 8, but the bigger question is what is even using 8 gigs to begin with?

submitted by maccam912
[link] [25 comments]
Categories: Incoming News

hs2bf commentary

del.icio.us/haskell - Sat, 05/17/2014 - 4:00pm
Categories: Offsite Blogs

Was directed here for help with why ghc is failing to find mtl.

haskell-cafe - Sat, 05/17/2014 - 3:49pm
This is the error message which started it all off: http://lpaste.net/104242

Then the following conversation happened:

==============================================================================
--- JPMoresmau ------
First package to fail is exceptions. Just open a console and type in

cabal install exceptions

EclipseFP tries to put some parallel install flags that may make the errors less visible (a Cabal thing, not an EclipseFP thing). I really don't like the look of getting warnings in c code, though...

--- haskell_beginner ------
$ cabal install exceptions
Resolving dependencies...
Configuring exceptions-0.6.1...
/var/folders/3r/gvk584k50jb253024p4wxy3r0000gn/T/5187.c:1:12: warning: control reaches end of non-void function [-Wreturn-type]
int foo() {}
           ^
1 warning generated.
Building exceptions-0.6.1...
Preprocessing library exceptions-0.6.1...
<command line>: cannot satisfy -package-id mtl-2.1.2-94c72af955e94b8d7b2f359dadd0cb62
Categories: Offsite Discussion

What Features Would You Like to Find in a Haskell IDE?

Haskell on Reddit - Sat, 05/17/2014 - 10:11am

In case the community decides to build a Haskell editor/IDE from scratch, how do you imagine its layout and design? What particular features would you want to find in it? How do you imagine the debugging procedure being handled by the tool? ...

In case you are comfortable with an existing tool, would you share your configurations so others can benefit from them?

submitted by BanX
[link] [88 comments]
Categories: Incoming News

Research position "Coalgebraic Logic Programming for Type Inference"

haskell-cafe - Sat, 05/17/2014 - 9:38am
We have a fixed-term position at the School of Computing, Dundee for a postdoctoral researcher to work on the project Coalgebraic Logic Programming for Type Inference: a new generation of languages for parallelism and corecursion. More details are available below and at http://staff.computing.dundee.ac.uk/katya/CoALP/ For further inquiries please email me: Katya Komendantskaya <katya< at >computing.dundee.ac.uk>

School of Computing, University of Dundee
Postdoctoral Researcher in Coalgebraic Logic Programming for Type Inference
Fixed-term position for 2 years (extension possible).
Start date: between 1 July 2014 and 1 October 2014.
Salary scale: between £29,837 and £33,562 per annum.
Closing Date for applications: 16 June 2014.

The School of Computing at the University of Dundee invites applications for a postdoctoral researcher to work on an interdisciplinary project "Coalgebraic Logic Programming for type inference: a new generation of languages for parallelism and corecursion" ( http
Categories: Offsite Discussion

Dan Piponi (sigfpe): Types, and two approaches to problem solving

Planet Haskell - Sat, 05/17/2014 - 9:22am
Introduction

There are two broad approaches to problem solving that I see frequently in mathematics and computing. One is attacking a problem via subproblems, and another is attacking a problem via quotient problems. The former is well known though I’ll give some examples to make things clear. The latter can be harder to recognise but there is one example that just about everyone has known since infancy.

Subproblems

Consider sorting algorithms. A large class of sorting algorithms, including quicksort, break a sequence of values into two pieces. The two pieces are smaller so they are easier to sort. We sort those pieces and then combine them, using some kind of merge operation, to give an ordered version of the original sequence. Breaking things down into subproblems is ubiquitous and is useful far outside of mathematics and computing: in cooking, in finding our path from A to B, in learning the contents of a book. So I don’t need to say much more here.

Quotient problems

The term quotient is a technical term from mathematics. But I want to use the term loosely to mean something like this: a quotient problem is what a problem looks like if you wear a certain kind of filter over your eyes. The filter hides some aspect of the problem that simplifies it. You solve the simplified problem and then take off the filter. You now ‘lift’ the solution of the simplified problem to a solution to the full problem. The catch is that your filter needs to match your problem so I’ll start by giving an example where the filter doesn’t work.

Suppose we want to add a list of integers, say: 123, 423, 934, 114. We can try simplifying this problem by wearing a filter that makes numbers fuzzy so we can’t distinguish numbers that differ by less than 10. When we wear this filter 123 looks like 120, 423 looks like 420, 934 looks like 930 and 114 looks like 110. So we can try adding 120+420+930+110. This is a simplified problem and in fact this is a common technique to get approximate answers via mental arithmetic. We get 1580. We might hope that when wearing our filters, 1580 looks like the correct answer. But it doesn’t. The correct answer is 1594. This filter doesn’t respect addition in the sense that if a looks like a’ and b looks like b’ it doesn’t follow that a+b looks like a’+b’.

To solve a problem via quotient problems we usually need to find a filter that does respect the original problem. So let’s wear a different filter that allows us just to see the last digit of a number. Our original problem now looks like summing the list 3, 3, 4, 4. We get 4. This is the correct last digit. If we now try a filter that allows us to see just the last two digits we see that summing 23, 23, 34, 14 does in fact give the correct last two digits. This is why the standard elementary school algorithms for addition and multiplication work through the digits from right to left: at each stage we’re solving a quotient problem but the filter only respects the original problem if it allows us to see the digits to the right of some point, not digits to the left. This filter does respect addition in the sense that if a looks like a’ and b looks like b’ then a+b looks like a’+b’.
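(A small aside in Haskell, my notation rather than the author's: the "respects addition" property of the last-digit filter can be written down directly.)

lastDigit :: Integer -> Integer
lastDigit n = n `mod` 10

-- If a looks like a' and b looks like b' through this filter, then a + b looks
-- like a' + b': the filtered sum depends only on the filtered inputs.
respectsAddition :: Integer -> Integer -> Bool
respectsAddition a b = lastDigit (a + b) == lastDigit (lastDigit a + lastDigit b)

-- For the example in the text: lastDigit (123 + 423 + 934 + 114) == 4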

Another example of the quotient approach is to look at the knight’s tour problem in the case where two opposite corners have been removed from the chessboard. A knight’s tour is a sequence of knight’s moves that visit each square on a board exactly once. If we remove opposite corners of the chessboard, there is no knight’s tour of the remaining 62 squares. How can we prove this? If you don’t see the trick you can get caught up in all kinds of complicated reasoning. So now put on a filter that removes your ability to see the spatial relationships between the squares so you can only see the colours of the squares. This respects the original problem in the sense that a knight’s move goes from a black square to a white square, or from a white square to a black square. The filter doesn’t stop us seeing this. But now it’s easier to see that there are two more squares of one colour than the other and so no knight’s tour is possible. We didn’t need to be able to see the spatial relationships at all.

(Note that this is the same trick as we use for arithmetic, though it’s not immediately obvious. If we think of the spatial position of a square as being given by a pair of integers (x, y), then the colour is given by x+y modulo 2. In other words, by the last digit of x+y written in binary. So it’s just the see-only-digits-on-the-right filter at work again.)

Wearing filters while programming

So now think about developing some code in a dynamic language like Python. Suppose we execute the line:

a = 1

The Python interpreter doesn’t just store the integer 1 somewhere in memory. It also stores a tag indicating that the data is to be interpreted as an integer. When you come to execute the line:

b = a+1

it will first examine the tag in a indicating its type, in this case int, and use that to determine what the type for b should be.

Now suppose we wear a filter that allows us to see the tag indicating the type of some data, but not the data itself. Can we still reason about what our program does?

In many cases we can. For example we can, in principle, deduce the type of

a+b*(c+1)/(2+d)

if we know the types of a, b, c, d. (As I’ve said once before, it’s hard to make any reliable statement about a bit of Python code so let's suppose that a, b, c and d are all either of type int or type float.) We can read and understand quite a bit of Python code wearing this filter. But it’s easy to go wrong. For example consider

if a > 1:
    return 1.0
else:
    return 1

The type of the result depends on the value of the variable a. So if we’re wearing the filter that hides the data, then we can’t predict what this snippet of code does. When we run it, it might return an int sometimes and a float other times, and we won’t be able to see what made the difference.

In a statically typed language you can predict the type of an expression knowing the type of its parts. This means you can reason reliably about code while wearing the hide-the-value filter. As a result, almost any programming problem can be split into two parts: a quotient problem where you forget about the values, and then the problem of lifting a solution to the quotient problem to a solution to the full problem. Or to put that in more conventional language: designing your data and function types, and then implementing the code that fits those types.
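(A one-line Haskell counterpart to the Python snippet above, my example rather than the author's: both branches of an if must have the same type, so the type of the expression is known without looking at the value of a.)

f :: Int -> Double
f a = if a > 1 then 1.0 else 1  -- the literal 1 in the else branch is a Double here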

I chose to make the contrast between dynamic and static languages just to make the ideas clear, but actually you can happily use similar reasoning for both types of language. Compilers for statically typed languages give you a lot of assistance if you choose to solve your programming problems this way.

A good example of this at work is given in Haskell. If you're writing a compiler, say, you might want to represent a piece of code as an abstract syntax tree, and implement algorithms that recurse through the tree. In Haskell the type system is strong enough that once you’ve defined the tree type the form of the recursion algorithms is often more or less given. In fact, it can be tricky to implement tree recursion incorrectly and have the code compile without errors. Solving the quotient problem of getting the types right gets you much of the way towards solving the full problem.
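(A minimal sketch of that point, with hypothetical types rather than anything from the post: once the tree type is written down, the shape of the recursion is largely dictated by it.)

data Expr = Lit Int
          | Add Expr Expr
          | Mul Expr Expr

-- One equation per constructor; the type checker complains if a case is
-- handled with the wrong number or kind of arguments.
eval :: Expr -> Int
eval (Lit n)   = n
eval (Add a b) = eval a + eval b
eval (Mul a b) = eval a * eval b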

And that’s my main point: types aren’t simply a restriction mechanism to help you avoid making mistakes. Instead they are a way to reduce some complex programming problems to simpler ones. But the simpler problem isn’t a subproblem, it’s a quotient problem.

Dependent types

Dependently typed languages give you even more flexibility with what filters you wear. They allow you to mix up values and types. For example both C++ and Agda (to pick an unlikely pair) allow you to wear filters that hide the values of elements in your arrays while allowing you to see the length of your arrays. This makes it easier to concentrate on some aspects of your problem while completely ignoring others.
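(A sketch of that idea in Haskell using GHC extensions, my example; Agda or C++ would express it differently.)

{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

-- The length of a vector is visible in its type even when the elements are not:
-- a filter that shows lengths but hides values.
data Nat = Z | S Nat

data Vec (n :: Nat) a where
  VNil  :: Vec 'Z a
  VCons :: a -> Vec n a -> Vec ('S n) a

-- Safe head: the type rules out the empty vector entirely.
vhead :: Vec ('S n) a -> a
vhead (VCons x _) = x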

Notes

I wrote the first draft of this a couple of years ago but never published it. I was motivated to post by a discussion kicked off by Voevodsky on the TYPES mailing list http://lists.seas.upenn.edu/pipermail/types-list/2014/001745.html

This article isn’t a piece of rigorous mathematics and I’m using mathematical terms as analogies.

The notion of a subproblem isn’t completely distinct from a quotient problem. Some problems are both, and in fact some problems can be solved by transforming them so they become both.
More generally, looking at computer programs through different filters is one approach to abstract interpretation http://en.wikipedia.org/wiki/Abstract_interpretation. The intuition section there (http://en.wikipedia.org/wiki/Abstract_interpretation#Intuition) has much in common with what I’m saying.
Categories: Offsite Blogs

Haskell for the Evil Genius

del.icio.us/haskell - Sat, 05/17/2014 - 8:47am
Categories: Offsite Blogs

Wanting to learn Haskell, then I hit this error before I can even start.

Haskell on Reddit - Sat, 05/17/2014 - 1:07am
Resolving dependencies...
Configuring asn1-encoding-0.8.1.3...
Configuring blaze-builder-0.3.3.2...
Configuring crypto-random-0.0.7...
Configuring exceptions-0.6.1...
Downloading fgl-5.5.0.1...
Downloading haskell-src-exts-1.15.0.1...
Downloading hslua-0.3.12...
Downloading mime-types-0.1.0.4...
Downloading pem-0.2.2...
Building crypto-random-0.0.7...
Building asn1-encoding-0.8.1.3...
Building blaze-builder-0.3.3.2...
Building exceptions-0.6.1...
Downloading regex-pcre-builtin-0.94.4.8.8.35...
Downloading zip-archive-0.2.2.1...
Failed to install exceptions-0.6.1
Last 10 lines of the build log ( /Users/cschwenz/.cabal/logs/exceptions-0.6.1.log ):
/var/folders/3r/gvk584k50jb253024p4wxy3r0000gn/T/3376.c:1:12: warning: control reaches end of non-void function [-Wreturn-type]
int foo() {}
           ^
1 warning generated.
Building exceptions-0.6.1...
Preprocessing library exceptions-0.6.1...
<command line>: cannot satisfy -package-id mtl-2.1.2-94c72af955e94b8d7b2f359dadd0cb62
    (use -v for more information)
Failed to install blaze-builder-0.3.3.2
Last 10 lines of the build log ( /Users/cschwenz/.cabal/logs/blaze-builder-0.3.3.2.log ):
/var/folders/3r/gvk584k50jb253024p4wxy3r0000gn/T/3370.c:1:12: warning: control reaches end of non-void function [-Wreturn-type]
int foo() {}
           ^
...
           ^
1 warning generated.
Building regex-pcre-builtin-0.94.4.8.8.35...
Preprocessing library regex-pcre-builtin-0.94.4.8.8.35...
<command line>: cannot satisfy -package-id regex-base-0.93.2-f9403610b59f8cc474edd63a82806d18
    (use -v for more information)
Building haskell-src-exts-1.15.0.1...
Installed haskell-src-exts-1.15.0.1
Updating documentation index /Users/cschwenz/Library/Haskell/doc/index.html
cabal: Error: some packages failed to install:
Graphalyze-0.14.1.0 depends on regex-pcre-builtin-0.94.4.8.8.35 which failed to install.
SourceGraph-0.7.0.6 depends on regex-pcre-builtin-0.94.4.8.8.35 which failed to install.
aeson-0.7.0.4 depends on scientific-0.2.0.2 which failed to install.
asn1-encoding-0.8.1.3 failed during the building phase. The exception was: ExitFailure 1
asn1-parse-0.8.1 depends on asn1-encoding-0.8.1.3 which failed to install.
blaze-builder-0.3.3.2 failed during the building phase. The exception was: ExitFailure 1
blaze-html-0.7.0.2 depends on blaze-builder-0.3.3.2 which failed to install.
blaze-markup-0.6.1.0 depends on blaze-builder-0.3.3.2 which failed to install.
conduit-1.1.2.1 depends on semigroups-0.14 which failed to install.
connection-0.2.1 depends on pem-0.2.2 which failed to install.
cookie-0.4.1.1 depends on blaze-builder-0.3.3.2 which failed to install.
cprng-aes-0.5.2 depends on crypto-random-0.0.7 which failed to install.
crypto-numbers-0.2.3 depends on crypto-random-0.0.7 which failed to install.
crypto-pubkey-0.2.4 depends on crypto-random-0.0.7 which failed to install.
crypto-random-0.0.7 failed during the building phase. The exception was: ExitFailure 1
exceptions-0.6.1 failed during the building phase. The exception was: ExitFailure 1
fgl-5.5.0.1 failed during the building phase. The exception was: ExitFailure 1
graphviz-2999.17.0.0 depends on fgl-5.5.0.1 which failed to install.
highlighting-kate-0.5.8.1 depends on regex-pcre-builtin-0.94.4.8.8.35 which failed to install.
hslua-0.3.12 failed during the building phase. The exception was: ExitFailure 1
http-client-0.3.2.2 depends on mime-types-0.1.0.4 which failed to install.
http-client-tls-0.2.1.1 depends on pem-0.2.2 which failed to install.
http-conduit-2.1.2 depends on pem-0.2.2 which failed to install.
http-types-0.8.4 depends on blaze-builder-0.3.3.2 which failed to install.
lifted-base-0.2.2.2 depends on transformers-base-0.4.2 which failed to install.
mime-types-0.1.0.4 failed during the building phase. The exception was: ExitFailure 1
mmorph-1.0.3 failed during the building phase. The exception was: ExitFailure 1
monad-control-0.3.3.0 depends on transformers-base-0.4.2 which failed to install.
pandoc-1.12.4.2 depends on regex-pcre-builtin-0.94.4.8.8.35 which failed to install.
pandoc-types-1.12.3.3 depends on scientific-0.2.0.2 which failed to install.
pem-0.2.2 failed during the building phase. The exception was: ExitFailure 1
polyparse-1.9 failed during the building phase. The exception was: ExitFailure 1
publicsuffixlist-0.1 failed during the building phase. The exception was: ExitFailure 1
regex-pcre-builtin-0.94.4.8.8.35 failed during the building phase. The exception was: ExitFailure 1
resourcet-1.1.2.2 depends on transformers-base-0.4.2 which failed to install.
scientific-0.2.0.2 failed during the building phase. The exception was: ExitFailure 1
semigroups-0.14 failed during the building phase. The exception was: ExitFailure 1
socks-0.5.4 failed during the building phase. The exception was: ExitFailure 1
streaming-commons-0.1.2.4 depends on blaze-builder-0.3.3.2 which failed to install.
tagsoup-0.13.1 failed during the building phase. The exception was: ExitFailure 1
texmath-0.6.6.1 depends on xml-1.3.13 which failed to install.
tls-1.2.7 depends on pem-0.2.2 which failed to install.
transformers-base-0.4.2 failed during the building phase. The exception was: ExitFailure 1
void-0.6.1 depends on semigroups-0.14 which failed to install.
wl-pprint-text-1.1.0.2 failed during the building phase. The exception was: ExitFailure 1
x509-1.4.11 depends on pem-0.2.2 which failed to install.
x509-store-1.4.4 depends on pem-0.2.2 which failed to install.
x509-system-1.4.5 depends on pem-0.2.2 which failed to install.
x509-validation-1.5.0 depends on pem-0.2.2 which failed to install.
xml-1.3.13 failed during the building phase. The exception was: ExitFailure 1
yaml-0.8.8.2 depends on semigroups-0.14 which failed to install.
zip-archive-0.2.2.1 failed during the building phase. The exception was: ExitFailure 1

(Full error text available upon request. Error text trimmed due to max character limit.)

What I was doing (following the instructions in "Beginning Haskell"):
1. Installed XCode
2. Installed XCode command line tools
3. Installed Eclipse
4. Installed EclipseFP
5. Restarted Eclipse
6. Checked the boxes for "Install optional helper executables (...)" and "Install for current user only"
7. Voila, errors!

While I am a Haskell beginner, I do know my way around a few other programming languages. From this small body of knowledge I make the following observations:
* If you want your language to be taken as more than an academic language, don't have errors in commonly used code (the exceptions package has had this exact same error for at least two months now).
* There is a distinct lack of resources targeted at any level of expertise other than Haskell guru for working through problems such as the one above (a.k.a., I know this is the wrong place to post this but I am doing so anyhow out of desperation).

submitted by haskell_beginner
[link] [37 comments]
Categories: Incoming News

System.Process and -threaded

haskell-cafe - Sat, 05/17/2014 - 12:44am
Hello, I'm writing a little networking wrapper around a sub-process (mplayer -idle -slave) and I'm running into some issues with the System.Process API. This is the program: When compiled with -threaded, the mplayer process gets zombified and hangs until I shut down the program. When compiled with the non-threaded RTS (that's what it's called, correct?) I can successfully send a few commands, but then mplayer freezes. When I strace mplayer, this error is what it gets stuck on. ioctl(0, TIOCGWINSZ, 0x7fff2897a070) = -1 ENOTTY (Inappropriate ioctl for device) Apparently that means I'm trying to communicate with it as though it were a typewriter. How fitting :) The commands are all simple strings as documented here: http://www.mplayerhq.hu/DOCS/tech/slave.txt My questions are these: is there anything I need to take care of when handling sub-processes like this, specifically while writing to stdin of the process, and with particular regard to -threaded? Does anybody spot a problem or something I'm o
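(Not the poster's program, which is not shown in this excerpt, but a minimal sketch of the usual setup for talking to an mplayer slave: pipe its stdin, use line buffering, and flush after every command. The file name and the single command are made up for illustration.)

import System.IO (BufferMode (LineBuffering), hFlush, hPutStrLn, hSetBuffering)
import System.Process

main :: IO ()
main = do
  (Just hin, _, _, ph) <-
    createProcess (proc "mplayer" ["-idle", "-slave", "song.mp3"])
                  { std_in = CreatePipe }
  hSetBuffering hin LineBuffering
  hPutStrLn hin "pause"   -- a slave-mode command, one per line
  hFlush hin
  _ <- waitForProcess ph
  return ()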
Categories: Offsite Discussion

ICFP 2014 Student Research Competition: Call for Submissions

haskell-cafe - Fri, 05/16/2014 - 9:29pm
====================================================================== CALL FOR SUBMISSION SRC< at >ICFP 2014 Gothenburg, Sweden 1-3 September 2014 http://www.icfpconference.org/icfp2014/src.html Co-located with the International Conference on Functional Programming (ICFP 2014) ====================================================================== Student Research Competition ------------------------ This year ICFP will host a Student Research Competition where undergraduate and postgraduate students can present posters. The SRC at the ICFP 2014 consists of three rounds: Extended abstract round: All students are encouraged to submit an extended abstract outlining their research (800 words). Poster session at ICFP 2014: Based on the abstracts, a panel of judges will select the most promising entrants to participate in the poster session which will take place at ICF
Categories: Offsite Discussion

ICFP 2014 Student Research Competition: Call for Submissions

General haskell list - Fri, 05/16/2014 - 9:29pm
====================================================================== CALL FOR SUBMISSION SRC< at >ICFP 2014 Gothenburg, Sweden 1-3 September 2014 http://www.icfpconference.org/icfp2014/src.html Co-located with the International Conference on Functional Programming (ICFP 2014) ====================================================================== Student Research Competition ------------------------ This year ICFP will host a Student Research Competition where undergraduate and postgraduate students can present posters. The SRC at the ICFP 2014 consists of three rounds: Extended abstract round: All students are encouraged to submit an extended abstract outlining their research (800 words). Poster session at ICFP 2014: Based on the abstracts, a panel of judges will select the most promising entrants to participate in the poster session which will take place at ICF
Categories: Incoming News

what did it take for you to get comfortable with Haskell?

Haskell on Reddit - Fri, 05/16/2014 - 8:43pm

Is it books you read, courses, meetings, projects you worked on... etc.?

submitted by pyThat
[link] [28 comments]
Categories: Incoming News

How do I learn Fay?

Haskell on Reddit - Fri, 05/16/2014 - 6:20pm

I've recently become interested in Haskell, and I'm working on my first project, a tic-tac-toe game. I have gotten the game to work on the command line, but I would also like to build a better interface with Fay/Javascript.

The problem is that I haven't found any good tutorials or explanations of Fay. Does anyone know any good resources for Fay, or should I just study the examples that are provided in the package?

submitted by Judde10
[link] [4 comments]
Categories: Incoming News

Parent Modules: Common Functions or Re-Exportation?

haskell-cafe - Fri, 05/16/2014 - 6:13pm
So as a relatively long-term user of Haskell at this point, one issue which I've never found a simple solution to is the one stated in the subject. That is, given modules Foo, Foo.Bar, and Foo.Baz, should Foo re-export Bar and Baz, or should Foo provide common functions for Bar and Baz? In the first case, the common functions would have to be provided by some third module Foo.Util, which for some reason I find unsatisfying, as it means all of my modules have these Util after Util after Util modules floating around. My natural tendency is to follow the second case, but then of course when it comes to actually using the code that I've written, I no longer have such a nicely exposed interface outside of the particular library. One simply can't eat one's cake and have it too. Maybe this problem is just silly, but perhaps others have always been left feeling ambivalent across similar lines, and have somehow found a pleasant solution for when this particular crossroads is reached? Cheers, - Sacha Sokol
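(For the first option, a sketch using the hypothetical module names from the question; the shared-helpers module name is made up.)

-- Foo simply re-exports its children; common functions that Foo.Bar and
-- Foo.Baz both need would live in something like Foo.Internal.
module Foo
  ( module Foo.Bar
  , module Foo.Baz
  ) where

import Foo.Bar
import Foo.Baz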
Categories: Offsite Discussion