News aggregator

I'd like to write a shell in Haskell as a project. Any advice?

Haskell on Reddit - Mon, 11/16/2015 - 9:19am

Basically, using higher-order functions in a shell would probably be fun. Of course, we do have the pipe, which feels like some kind of map, but with things like zip and groupBy I can see this getting very interesting quickly. Making the shell typed is also something I'm thinking about.

If I do end up doing this, this will be my first big (for me) project. What should I look up? I still do not really grok monads (well, I understand Maybe and [], and Reader to some extent), so should I focus on that before doing this?

Yes, I know this has been done before. :-/
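For a flavour of the idea, here's a small sketch (mine, not from the post) of shell-style commands as plain functions over lines, where the pipe is just left-to-right composition and groupBy-style operations come along for free:

```haskell
import Data.List (group, isInfixOf, sort)

type Cmd = [String] -> [String]

-- The Unix pipe is just left-to-right function composition.
(|>) :: Cmd -> Cmd -> Cmd
(|>) = flip (.)

-- A grep-alike, as an ordinary higher-order function.
grep :: String -> Cmd
grep pat = filter (pat `isInfixOf`)

-- Something plain pipes don't give you directly: group lines by their
-- first word and count each group (roughly a typed "sort | uniq -c").
countByFirstWord :: [String] -> [(String, Int)]
countByFirstWord =
  map (\g -> (head g, length g)) . group . sort . map firstWord
  where
    firstWord l = case words l of
      (w : _) -> w
      []      -> ""
```

Pipelines like `grep "GET" |> map (++ "!")` then compose exactly the way ordinary functions do.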

submitted by octatoan
[link] [10 comments]
Categories: Incoming News

Can someone explain Data.Vector.Fusion.Bundle?

Haskell on Reddit - Mon, 11/16/2015 - 6:21am


I noticed the vector package introduced new stream fusion primitives in version 0.11: Chunk and Bundle. Now, how do they work? What are the benefits?

And, concretely, does this abstraction help fuse different consumers of one stream, as would be needed in the following code?

filterMax :: Vector Int -> (Vector Int, Int)
filterMax vec1 =
  let vec2 = map (+ 1) vec1
      vec3 = filter (> 0) vec2
      n    = fold max 0 vec3
  in (vec3, n)

(Example taken from here)
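For background, pre-0.11 fusion was built on a single-element stream; a toy version (heavily simplified, with invented names) looks like the code below. As I understand it, Bundle generalises the Yield step so one state transition can produce a whole Chunk of elements, which helps bulk operations like concatenation and copying, but — as far as I know — neither representation fuses multiple consumers of one shared stream, which is what the filterMax example would need.

```haskell
{-# LANGUAGE ExistentialQuantification #-}

-- A toy single-element stream in the style of the vector package's
-- fusion framework (names invented; the real Bundle also carries
-- size hints and chunked steps).
data Step s a = Done | Skip s | Yield a s

data Stream a = forall s. Stream (s -> Step s a) s

sfromList :: [a] -> Stream a
sfromList xs = Stream next xs
  where
    next []       = Done
    next (y : ys) = Yield y ys

-- map and filter just rewrite the step function: no intermediate
-- list or vector is ever built.
smap :: (a -> b) -> Stream a -> Stream b
smap f (Stream next s0) = Stream next' s0
  where
    next' s = case next s of
      Done       -> Done
      Skip s'    -> Skip s'
      Yield x s' -> Yield (f x) s'

sfilter :: (a -> Bool) -> Stream a -> Stream a
sfilter p (Stream next s0) = Stream next' s0
  where
    next' s = case next s of
      Done    -> Done
      Skip s' -> Skip s'
      Yield x s'
        | p x       -> Yield x s'
        | otherwise -> Skip s'

-- A fold consumes the stream in a single tail-recursive loop.
sfoldl :: (b -> a -> b) -> b -> Stream a -> b
sfoldl f z (Stream next s0) = go z s0
  where
    go acc s = case next s of
      Done       -> acc
      Skip s'    -> go acc s'
      Yield x s' -> go (f acc x) s'
```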

submitted by alexbiehl
[link] [2 comments]
Categories: Incoming News

Erik de Castro Lopo: Forgive me Curry and Howard for I have Sinned.

Planet Haskell - Mon, 11/16/2015 - 5:22am

Forgive me Curry and Howard for I have sinned.

For the last several weeks, I have been writing C++ code. I've been doing some experimentation in the area of real-time audio Digital Signal Processing, and for that kind of work C++ actually is a better fit than Haskell.

Haskell is simply not a good fit here because I need:

  • To be able to guarantee (by inspection) that there is zero memory allocation/de-allocation in the real-time inner processing loop.
  • Things like IIR filters are inherently stateful, with their internal state being updated on every input sample.

There is however one good thing about coding C++; I am constantly reminded of all the sage advice about C++ I got from my friend Peter Miller who passed away a bit over a year ago.

Here is an example of the code I'm writing:

class iir2_base
{
public :
    // An abstract base class for 2nd order IIR filters.
    iir2_base () ;

    // Virtual destructor does nothing.
    virtual ~iir2_base () { }

    inline double process (double in)
    {
        unsigned minus2 = (minus1 + 1) & 1 ;

        double out = b0 * in + b1 * x [minus1] + b2 * x [minus2]
                        - a1 * y [minus1] - a2 * y [minus2] ;

        minus1 = minus2 ;
        x [minus1] = in ;
        y [minus1] = out ;

        return out ;
    }

protected :
    // iir2_base internal state (all statically allocated).
    double b0, b1, b2 ;
    double a1, a2 ;
    double x [2], y [2] ;
    unsigned minus1 ;

private :
    // Disable copy constructor etc.
    iir2_base (const iir2_base &) ;
    iir2_base & operator = (const iir2_base &) ;
} ;
Categories: Offsite Blogs

Eric Kidd: Bare Metal Rust 3: Configure your PIC to handle interrupts correctly

Planet Haskell - Mon, 11/16/2015 - 5:16am

Want to build your own kernel in Rust? See Bare Metal Rust to get started.

We're almost ready to write a keyboard driver in Rust! But first, we need to deal with two obstacles: setting up the PIC, and handling interrupts without crashing. This is one of the most frustrating steps, as Julia Evans explains in her hilarious and very helpful post After 5 days, my OS doesn't crash when I press a key:

  1. Turn interrupts on (sti).
  2. The OS AGAIN crashes every time I press a key. Read “I Can’t Get Interrupts Working” again. There is a section called “I’m receiving EXC9 instead of IRQ1 when striking a key?!” Feel on top of this.
  3. Remap the PIC so that interrupt i gets mapped to i + 32, because of an Intel design bug. This basically looks like just typing in a bunch of random numbers, but it works.
  4. THE OS IS STILL CRASHING WHEN I PRESS A KEY. This continues for 2 days.

We're going to follow Julia Evans' roadmap. (She saved me several days of suffering.) And once we're past these next few obstacles, things will get easier. Let's talk to the PIC first.

The 8259/8259A Programmable Interrupt Controller

We're going to take the retro approach here, and handle interrupts using the 8259 PIC. You can read all about it on the OSDev wiki, as usual. The PIC works fine in 64-bit mode, but someday, if we want to support multiple processors and I/O, we'll eventually need to support the newer APIC and IOAPIC. But for now, let's keep it simple.

Technically, the x86 architecture has two PIC chips, usually known as PIC1 and PIC2. PIC1 handles external interrupts 0–7, and PIC2 handles 8–15. PIC2 is actually chained into interrupt 2 on PIC1, which means that we'll frequently need to talk to them as a pair.

Unfortunately, the modern x86 architecture reserves CPU interrupts 0–31 for processor exceptions. This means that when we press a key, the CPU will think it just received the "EXC9" mentioned by Julia Evans, which the Intel manual tells me is a "Coprocessor-Segment-Overrun Exception." So we need to tell our PIC that, no, MacGyver and Miami Vice are no longer cutting-edge television, that there's this new-fangled thing called 386 Protected Mode, and that it needs to start mapping interrupts at offset 32.

Read more…

Categories: Offsite Blogs

Johan Tibell: The design of the Strict Haskell pragma

Planet Haskell - Mon, 11/16/2015 - 4:47am

Since the Strict Haskell pragma just landed in HEAD, thanks to the hard work of my GSoC student Adam Sandberg Ericsson, I thought I'd share my thoughts behind the design.

First, is this making Haskell a strict language? No. This pragma lets you reverse the default on a per-module basis. Since the pragma will never be used in every module (most notably those in base) this doesn't make Haskell a strict language. Making Haskell strict is not possible at this point.

This pragma (and its companion StrictData)

  • lets us get rid of some of the syntactic noise and error-proneness in code that needs to be (mostly) strict and

  • lets us experiment with how switching the default might affect code, in particular code with performance problems.

The first is a quite practical "might be useful in real code soon" kind of thing and the second is a "let's do some science and see what happens" kind of thing.

What's actually in GHC 8.0?

There are two new pragmas, Strict and StrictData, where the second is a subset of the former.

Strict makes all constructor fields and bindings (i.e. let/where/case/function arguments) strict, as if you had put a bang pattern on all of them.

StrictData does the same, but only to constructor fields.
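A small illustration, written with today's bang patterns, of exactly what StrictData changes about constructor fields (the type names here are made up):

```haskell
import Control.Exception (SomeException, evaluate, try)

-- Written by hand with a bang: StrictPair's fields are what the same
-- declaration without bangs would mean under {-# LANGUAGE StrictData #-}.
data LazyPair   = LazyPair   Int Int
data StrictPair = StrictPair !Int !Int

demo :: IO Bool
demo = do
  -- A lazy field tolerates an undefined argument as long as we never
  -- inspect it...
  second <- evaluate (case LazyPair undefined 2 of LazyPair _ y -> y)
  -- ...while a strict constructor forces its fields at construction,
  -- so this throws and `try` catches the exception.
  r <- try (evaluate (StrictPair undefined 2))
         :: IO (Either SomeException StrictPair)
  pure (second == 2 && either (const True) (const False) r)
```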

Both affect only definitions in the current module. For example, using StrictData doesn't mean that the tuple types defined in base have strict fields and using Strict doesn't mean that existing fmap instances are strict, etc. Having a Strict pragma affecting other modules wouldn't make much sense, because those other modules might rely on laziness for correctness.

See the wiki page for more details on the actual semantics and implementation.

Practical reasons for this pragma

Writing production quality [1] Haskell code often detracts from the beauty of Haskell. For example, instead of writing

data Vector2 = V2 Double Double

you need to write

data Vector2 = V2 {-# UNPACK #-} !Double {-# UNPACK #-} !Double

when you define a data type with small (e.g. Int or Double) fields. Even for larger fields you often want to make the field strict (and often unpacked). This syntactic noise hurts readability.

The same applies to functions. Here's an example from unordered-containers (slightly modified for pedagogical purposes):

insert :: (Eq k, Hashable k) => k -> v -> HashMap k v -> HashMap k v
insert k v m = go h k v 0 m
  where
    h = hash k
    go !h !k !x !_ Empty = Leaf h (L k x)
    ...

This function is entirely strict, so all those bangs are just noise. In addition, knowing why they need to be there is not entirely obvious.

The new Strict pragma, together with a previous change I made to unpacking, lets us write the code we want to write because

  • the Strict pragma lets us leave out the bangs but still get strict fields [2] and

  • default unpacking of small fields (see -funbox-small-strict-fields) gets us the unpacking without the pragmas.

So with GHC 8.0 you can write e.g. the definition of Vector2 you want to write, but get the representation you need.

When can I use this in my code?

The first thing you should consider is whether you want to only try StrictData or also Strict. The former is a quite modest change; you're probably (mostly) using strict constructor fields anyway so using StrictData will just remove some noise and force you to be explicit about when you actually want lazy fields. I'm hoping some application developers will start using StrictData.

Using the Strict pragma would be a bigger change for your code base (I expect), so that should be approached with more caution. If nothing else it could be interesting to compile your own code with it and see what effect that has, both in terms of correctness (i.e. how often do you actually rely on laziness) and performance.

So when can you actually use this stuff?

If you're maintaining libraries that have lots of users and you need to support the three latest major GHC releases: when GHC 8.4 is out. Doing it before is probably not worth it as you'd need to use CPP. Just use bang patterns until then.

If you're writing an application where you can pick just a single GHC version and stick with it, you can try to use Strict/StrictData as soon as GHC 8.0 is out.

Scientific reasons for this pragma

For quite some time I've wanted to do an empirical analysis of lazy-by-default languages, in particular on performance and correctness.

While we could compare e.g. OCaml with Haskell that's not a great comparison as the two languages differ in quite a few aspects (e.g. one uses monads as a way to control side effects, the GC implementations are different, etc).

What I plan to do is to take a number of sample programs e.g.

  • all programs on StackOverflow tagged with "haskell" and "performance", for a selection of programs with performance problems,
  • all of nofib, or
  • all of nofib but with base compiled with Strict (or StrictData),
  • some more realistic programs,
  • etc.

and see what happens if we compile them with Strict or StrictData.

I'd like to look at the results along two axes: does the program produce the same result under both defaults and if it does, does the strict default change the performance? Of course, doing some analysis on the causes e.g. where do programmers rely on laziness and what's the main reason strictness affects performance would be interesting as well.

[1] Defining "production quality code" probably deserves a whole post on its own. For the purpose of this post, think of it as meaning code without surprises, in the sense that it produces the expected result using the expected computational resources. For an example of non-production quality code, consider a CSV library that needs 1.4GB of RAM to parse a 100MB CSV file.

[2] Here it's actually enough to use the StrictData pragma, which is a subset of the Strict pragma.

Categories: Offsite Blogs

Russell O'Connor: Stochastic Elections Canada 2015 Results

Planet Haskell - Sun, 11/15/2015 - 10:20pm

It is time to announce the results from Stochastic Elections Canada for the 42nd General Election.

Every vote counts with the stochastic election process, so we had to wait until all election results were validated and certified before we could announce our results. However, stochastic election results are not very sensitive to small changes to the number of votes counted. The distributions for each candidate are typically only slightly adjusted.

Now that the last judicial recount has been completed, we can announce our MP selection.

2015 Stochastic Election Simulation Results

  Party                       Seats   Seat Percentage   Vote Percentage
  Liberal                       139   41.1%             39.5%
  Conservative                  105   31.1%             31.9%
  NDP-New Democratic Party       63   18.6%             19.7%
  Bloc Québécois                 19   5.62%             4.66%
  Green Party                    11   3.25%             3.45%
  Christian Heritage Party        1   0.296%            0.0870%

Results by province and by riding are available (electoral districts on page 2).

The results were generated from Elections Canada data. One hundred and seventy-four elected candidates differ from the actual 2015 election outcome.

The Christian Heritage Party holds the balance of power in this parliament. Assuming a Liberal party member becomes speaker of the house, that means the Liberals together with the Bloc Québécois and Green Party have 168 votes and the Conservative and NDP together have 168 votes. In this case, it is the Christian Heritage Party vote that would break the tie.

Unfortunately, Liberal leader Justin Trudeau, with 52.0% of the vote in the riding of Papineau, still lost to Maxime Claveau of the Bloc Québécois, who had 12.2% of the vote. Apparently it is now the Bloc Québécois’s turn to represent their supporters in Papineau. If Justin Trudeau wants to be prime minister, his best bet is to try to be appointed to the Senate and rule from there. Similarly, NDP leader Tom Mulcair lost to Liberal candidate Rachel Bendayan in the riding of Outremont. Perhaps there is a deal to be struck between the Liberals and the NDP to get their leaders appointed to the Senate.

This is only one example of the results of a stochastic election. Because of the stochastic nature of the election process, actual results may differ.
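For readers wondering what the stochastic rule means operationally, here is a sketch (my own, not the site's actual code) of the random-ballot selection for one riding: pick a ballot uniformly at random and elect whoever it names, so each candidate wins with probability exactly equal to their vote share. The random index k (with 0 <= k < total votes) would come from any RNG, e.g. System.Random's randomRIO.

```haskell
-- Walk the vote counts, treating k as a position among all cast
-- ballots; the candidate whose block of ballots contains position k
-- is elected.
pickAt :: Int -> [(String, Int)] -> String
pickAt _ [] = error "ran past the ballot boxes"
pickAt k ((c, n) : rest)
  | k < n     = c                    -- ballot k belongs to candidate c
  | otherwise = pickAt (k - n) rest  -- skip c's n ballots
```

With counts [("Liberal", 5), ("Bloc", 3)], positions 0–4 elect the Liberal and positions 5–7 elect the Bloc candidate, i.e. probabilities 5/8 and 3/8.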

In Canada’s election process, it is sometimes advantageous to not vote for one’s preferred candidate. The stochastic election system is the only system in which it is always best to vote for your preferred candidate. Therefore, if the 2015 election had actually used a stochastic election system, people would have been free to vote for their true preferences. The outcome could be somewhat different than what this simulation illustrates.

Related info

Categories: Offsite Blogs

wxHaskell can now be run multiple times in GHCi

haskell-cafe - Sun, 11/15/2015 - 9:14pm
L.S.,

Good news for people who want to develop wxHaskell programs by experimenting in GHCi: wxHaskell programs can now be run multiple times in GHCi (GHC 7.10.2, 32 bit).

Running a wxHaskell program in the 64 bit version of the interpreter results in the message:

  addDLL: could not load DLL

This is caused by a GHC bug and is reported in GHC bug ticket 3242 [0]; it should be solved in GHC 8.0.1. (The 32 bit GHCi crashes when exited, after running a wxHaskell program, but this is not a problem for application developers.)

Regards,
Henk-Jan van Tuyl

[0]
Categories: Offsite Discussion

What does "reverse [0..]" try to do in GHCI?

Haskell on Reddit - Sun, 11/15/2015 - 8:58pm

I know that it's nonsensical, but out of curiosity I tried it and it hangs. Not unexpected, but it raised a few questions for me.

First, how would you check for an error like this? A pattern guard if length > someBigNumber? Or isn't there some way the interpreter could know that this is an infinite list that it can't reverse?

Second, is there a name for this kind of bug? Does it belong to any class of errors that have terms I might not know? Or would this just be referred to as an "infinite loop" or "runtime error?"

Last question is what actual steps is the interpreter taking here? Where does it actually hang?
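As for where it actually hangs: reverse is (morally) an accumulator loop whose base case is the end of the input, and [0..] has no end, so the loop just conses forever without ever returning. A sketch of the standard definition (not GHC's exact source) makes this visible:

```haskell
-- The classic accumulator definition of reverse: nothing is produced
-- until the [] case is reached, which an infinite list never is.
reverse' :: [a] -> [a]
reverse' = go []
  where
    go acc []       = acc              -- only here does output appear
    go acc (x : xs) = go (x : acc) xs  -- [0..] stays in this case forever
```

On a finite list this terminates as expected; on `reverse' [0..]` the `go` loop runs indefinitely, which is exactly the hang observed in GHCi.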

Edit: Thanks for all of the detailed answers. You're awesome. I'm familiar with infinite loops and the halting problem, but with Haskell's rep, I didn't expect to run into it so casually. All the info given has made it clear that it's not a "footgun," which was maybe the reason for my curiosity. A lot of takeaways here. Thanks again. Also, this one implementation on SO was great:

isInfinite x = length x `seq` False

submitted by tailanyways
[link] [35 comments]
Categories: Incoming News

Type-level lazy bool operations

haskell-cafe - Sun, 11/15/2015 - 3:13pm
Hello, cafe! There are some type-level boolean operations (If, &&, ||, Not) in Data.Type.Bool. Are they lazy enough? I ran into a problem, and it seems that the reason is a non-lazy "If", i.e. both the on-True and on-False types are being calculated. Does it really work this way? Could it be changed? Then, looking into the source, I see this definition for (&&)
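For reference, the If in Data.Type.Bool is a closed type family along these lines (reproduced from memory, so check the source for your base version). Reduction matches only on the condition, so the discarded branch is not needed to reduce If itself; the trouble the poster describes tends to arise when the constraint solver tries to normalise a branch's arguments anyway, e.g. a looping family under UndecidableInstances.

```haskell
{-# LANGUAGE DataKinds, PolyKinds, TypeFamilies, TypeOperators #-}
import Data.Type.Equality ((:~:) (Refl))

-- A closed type family in the shape of Data.Type.Bool's If: the
-- equations match only on the Bool condition.
type family If (c :: Bool) (t :: k) (e :: k) :: k where
  If 'True  t e = t
  If 'False t e = e

-- Compile-time checks that each condition selects the right branch.
_true :: If 'True Int Char :~: Int
_true = Refl

_false :: If 'False Int Char :~: Char
_false = Refl
```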
Categories: Offsite Discussion

Template Haskell quasiquotation questions

Haskell on Reddit - Sun, 11/15/2015 - 2:09pm

While writing some boilerplate for using the users library, I realized that I could rather easily metaprogram the boilerplate. Excited for an opportunity to finally learn Template Haskell, I dug in and produced the following code:

functionLevels :: Type -> Int
functionLevels = go 0
  where
    go :: Int -> Type -> Int
    go n (AppT (AppT ArrowT _) rest) = go (n+1) rest
    go n (ForallT _ _ rest)          = go n rest
    go n _                           = n

getType :: Info -> Maybe Type
getType (ClassOpI _ t _ _) = Just t
getType (DataConI _ t _ _) = Just t
getType (VarI _ t _ _)     = Just t
getType (TyVarI _ t)       = Just t
getType _                  = Nothing

deriveReader :: Name -> DecsQ
deriveReader rd = mapM (decForFunc rd)
  [ 'destroyUserBackend
  , 'housekeepBackend
  , 'getUserIdByName
  , 'getUserById
  , 'listUsers
  , 'countUsers
  , 'createUser
  , 'updateUser
  , 'updateUserDetails
  , 'authUser
  , 'deleteUser
  ]

decForFunc :: Name -> Name -> Q Dec
decForFunc reader fn = do
  info <- reify fn
  arity <- maybe (reportError "Unable to get arity of name" >> return 0)
                 (return . functionLevels)
                 (getType info)
  varNames <- replicateM (arity - 1) (newName "arg")
  b <- newName "b"
  let fnName     = mkName . nameBase $ fn
      bound      = AppE (VarE '(>>=)) (VarE reader)
      binder     = AppE bound . LamE [VarP b]
      varExprs   = map VarE (b : varNames)
      fullExpr   = foldl AppE (VarE fn) varExprs
      liftedExpr = AppE (VarE 'liftIO) fullExpr
      final      = binder liftedExpr
      varPat     = map VarP varNames
  return $ FunD fnName [Clause varPat (NormalB final) []]

I feel like this would be much more readable and writable if I could use splicing and quasi quotation for the various bits, but I'm not sure how to eg. convert a list of VarPs into a Q Pat that would work in a [d| fnName varPs = ... |].

The generated code ends up looking like:

updateUserDetails arg_ask4 arg_ask5 = (>>=) backend (\ b_ask6 -> liftIO (WU.updateUserDetails b_ask6 arg_ask4 arg_ask5))

which, in a more legible expression, is:

updateUserDetails a b = do back <- backend liftIO (WU.updateUserDetails back a b)

How would you go about this sort of thing?

submitted by ephrion
[link] [2 comments]
Categories: Incoming News

Status of Percent Encoding and x-www-urlencoded in Haskell web services

Haskell on Reddit - Sun, 11/15/2015 - 1:17pm

Hi everybody, I've been wrestling with x-www-urlencoded for a couple days now, and have found myself a bit perplexed.

From what it seems, Yesod has some custom parsing going on under the hood to make sure the decoding is correct. I did a bit more investigation (i.e., asked a follow-up question on SO) and found it to be a bit more tricky than I expected.

A quick google search led me to the urlencoded library, which looked fairly promising at first (it uses MonadError via fail, instant red-flag for me :\ ), but after further investigation it actually gets parsing incorrect - query strings don't necessarily need a value associated with a key - it's all well and fine to do something like ?key1=val1&key2&key3=val3; no val2 is required.

This leads me to believe that the best method we have to decoding percent-encoded strings is with Network.HTTP.Base.urlDecode, then manually break the string into a [(String, Maybe String)] (and do + substitution afterward).
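For concreteness, the manual split described above can be sketched in plain base like this (percent-decoding and '+' substitution deliberately left out; each key and value would still be run through urlDecode afterwards):

```haskell
-- Split a raw query string on '&', then each piece on the first '=',
-- keeping bare keys as (key, Nothing).
parseQuery :: String -> [(String, Maybe String)]
parseQuery = map pair . splitOn '&'
  where
    splitOn c s = case break (== c) s of
      (chunk, [])       -> [chunk]
      (chunk, _ : rest) -> chunk : splitOn c rest
    pair kv = case break (== '=') kv of
      (k, [])    -> (k, Nothing)  -- bare key, no '=' present
      (k, _ : v) -> (k, Just v)
```

On the example from the post, `parseQuery "key1=val1&key2&key3=val3"` keeps key2 without forcing a value onto it, which is exactly where the urlencoded library goes wrong.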

I feel like we can do better - maybe an attoparsec parser would better serve us? What methods do you use to perform form-data parsing? Any help with this would be greatly appreciated.

EDIT: Just found http-types' urlDecode, too.

submitted by Spewface
[link] [10 comments]
Categories: Incoming News

Haskell Platform Plans

Haskell on Reddit - Sun, 11/15/2015 - 4:50am
Categories: Incoming News

Brent Yorgey: A strange representation of Z6

Planet Haskell - Sat, 11/14/2015 - 11:34pm

On my other blog I am writing about a proof of the Lucas-Lehmer test, and today in the course of working up some examples I stumbled across this little gem.

Let M be a monoid, and let M* denote the subset of elements of M which actually have an inverse. Then it is not hard to show that M* is a group: the identity is its own inverse and hence is in M*; it is closed under the monoid operation since if a and b have inverses then so does ab (namely, b⁻¹a⁻¹); and clearly the inverse of every element in M* is also in M*, because being an inverse also implies having one.

Now let M = {a + b√3 : a, b ∈ ℤ₃}, where the operation is multiplication, but the coefficients a and b are reduced modulo 3. For example, (2 + √3)(2 + 2√3) = 4 + 6√3 + 2·3 = 10 + 6√3 = 1. This does turn out to be associative, and is clearly commutative; and 1 is the identity. I wrote a little program to see which elements have inverses, and it turns out that the three elements with a = 0 do not, but the other six do. So M* is an Abelian group of order 6; but there’s only one such group, namely, the cyclic group ℤ₆. And, sure enough, M* turns out to be generated by 2 + √3 (and equally by its inverse 2 + 2√3).

Categories: Offsite Blogs

Darcs vs Git

haskell-cafe - Sat, 11/14/2015 - 9:28pm
Hi friends!

I love Haskell very much, so I'm very interested in using Darcs. Is there some documentation explaining the differences between Darcs and Git? Is there something that makes Darcs more robust than Git? Does Darcs have some Emacs support (something like `magit`)?

And one other question. Currently, I have no collaborators and mostly use Git by myself for version control, e.g. to revert some bad code, to have a nice log with code diffs and comments, etc. But in the future I will need something like GitHub for collaboration; is there a good hub solution for Darcs already?

_______________________________________________
Haskell-Cafe mailing list
Haskell-Cafe< at >
Categories: Offsite Discussion