News aggregator

Acid-State using IxSet to keep only keys in memory

Haskell on Reddit - Fri, 04/24/2015 - 7:53pm

Acid state is awesome, but keeping all of my data in memory is a bit scary. One of the solutions to this problem suggested on the wiki is as follows:

Another potential solution is to create a special data-structure like IxSet which stores only the keys in RAM, and uses mmap and/or enumerators to transparently read the values from disk without loading them all into RAM at once.

How would I go about doing this? Can anyone share some sample code or links to reading that might help me accomplish this?
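One way to approach the wiki's suggestion is to keep only an in-memory map from keys to file offsets, with the values in an append-only file on disk. The sketch below is purely illustrative and uses names invented here (Index, store, fetch); it is not acid-state or IxSet code, just the keys-in-RAM/values-on-disk idea in miniature:

```haskell
import qualified Data.ByteString.Char8 as B
import qualified Data.Map.Strict as M
import System.IO

-- Only the keys (plus offset/length) live in RAM;
-- the values sit in an append-only file on disk.
type Index k = M.Map k (Integer, Int)

-- Append a value to the data file and record where it landed.
store :: Ord k => Handle -> Index k -> k -> B.ByteString -> IO (Index k)
store h ix k v = do
  hSeek h SeekFromEnd 0
  off <- hTell h
  B.hPut h v
  return (M.insert k (off, B.length v) ix)

-- Look the key up in RAM, then seek and read the value from disk.
fetch :: Ord k => Handle -> Index k -> k -> IO (Maybe B.ByteString)
fetch h ix k = case M.lookup k ix of
  Nothing         -> return Nothing
  Just (off, len) -> do
    hSeek h AbsoluteSeek off
    fmap Just (B.hGet h len)
```

A real solution would still need durability for the index itself (which is where acid-state would come in) and some story for compaction, but the memory footprint is proportional to the keys, not the values.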

submitted by broma0
[link] [4 comments]
Categories: Incoming News

How do you run Haskell on an Amazon EC2 cluster?

Haskell on Reddit - Fri, 04/24/2015 - 7:46pm

Does anyone have any experience running Haskell on an Amazon EC2 cluster? I'd like to do some data analytics and I expect to use 10-30 nodes; I'm curious how this can be done in Haskell. Of course, projects like Apache Spark are always options, but I'd rather not have to learn a new framework from scratch. I've looked at Cloud Haskell, but haven't come across any specific documentation on how to actually deploy it on a cluster. Any tips/links would be greatly appreciated!

submitted by bitmadness
[link] [14 comments]
Categories: Incoming News

Workshop on Type Inference, May 12

General haskell list - Fri, 04/24/2015 - 6:46pm
Hello everyone, as the 12th of May is getting closer I would like to invite you yet again to the Workshop on Type Inference and Automated Proving at the University of Dundee. Please send me an email if you wish to attend, as described below; it helps us with organization. For those who have not decided yet, I am pleased to announce that we have one more speaker: Conor McBride will give a (revolutionary?) talk. Best regards, Frantisek Farka

The Theory of Computation group at the University of Dundee invites you to the

WORKSHOP ON TYPE INFERENCE AND AUTOMATED PROVING
Tuesday the 12th of May, 12PM to 6PM
School Of Computing, University of Dundee

Refreshments have been kindly funded by SICSA and will be available from 12:00, with talks beginning at 12:45. For the detailed programme plea
Categories: Incoming News

Diagrams: Diagrams 1.3

Planet Haskell - Fri, 04/24/2015 - 6:00pm

by Brent Yorgey on April 25, 2015

Tagged as: release, features, announcement, 1.3.

Diagrams 1.3

The diagrams team is very pleased to announce the release of diagrams 1.3. The actual release to Hackage happened a week or so ago, and by now I think we have most of the little kinks ironed out. This is an exciting release that represents a great deal of effort from a lot of people (see the list of contributors at the end of this post). Here's a quick rundown of some of the new features. If you're just looking for help porting your diagrams code to work with 1.3, see the migration guide.

Path intersection

Using the functions intersectPointsP and intersectPointsT, it is now possible to find the points of intersection between two paths or two trails, respectively. This is not so hard for paths with straight segments, but for cubic Bezier curves, finding intersection points is nontrivial!

> {-# LANGUAGE TupleSections #-}
>
> example :: Diagram B
> example = mconcat (map (place pt) points) <> mconcat ellipses
>   where
>     ell      = circle 1 # scaleX 2
>     pt       = circle 0.05 # fc blue # lw none
>     ellipses = [ ell # rotateBy r # translateX (2*r) | r <- [0, 1/12 .. 5/12] ]
>     points   = allPairs ellipses >>= uncurry (intersectPointsP' 1e-8)
>     allPairs []     = []
>     allPairs (x:xs) = map (x,) xs ++ allPairs xs

Note that this feature is something of a "technology preview" in diagrams 1.3: the API will probably change and grow in the next release (for example, giving a way to find the parameters of intersection points).

Affine maps and projections

Affine maps have been added to Diagrams.LinearMap that can map between different vector spaces. The Deformable class has also been generalised to map between spaces. Helper functions have been added to Diagrams.ThreeD.Projection for basic orthographic and perspective projections.

In the below example, we construct a 3-dimensional path representing the wireframe of a simple house, and then project it into 2 dimensions using perspective, orthographic, and isometric projections.

> import Diagrams.ThreeD.Transform (translateZ)
> import Diagrams.ThreeD.Projection
> import Diagrams.LinearMap (amap)
> import Linear.Matrix ((!*!))
>
> box :: Path V3 Double
> box = Path [f p1 ~~ f p2 | p1 <- ps, p2 <- ps, quadrance (p1 .-. p2) == 4]
>   where
>     ps = getAllCorners $ fromCorners (-1) 1
>     f  = fmap fromIntegral
>
> roof :: Path V3 Double
> roof = Path
>   [ mkP3 1 1 1 ~~ mkP3 1 0 1.4
>   , mkP3 1 (-1) 1 ~~ mkP3 1 0 1.4
>   , mkP3 1 0 1.4 ~~ mkP3 (-1) 0 1.4
>   , mkP3 (-1) 1 1 ~~ mkP3 (-1) 0 1.4
>   , mkP3 (-1) (-1) 1 ~~ mkP3 (-1) 0 1.4
>   ]
>
> door :: Path V3 Double
> door = fromVertices
>   [ mkP3 1 (-0.2) (-1)
>   , mkP3 1 (-0.2) (-0.4)
>   , mkP3 1 (0.2) (-0.4)
>   , mkP3 1 (0.2) (-1)
>   ]
>
> house = door <> roof <> box
>
> -- Perspective projection
> -- these bits are from Linear.Projection
> m  = lookAt (V3 3.4 4 2.2) zero unitZ
> pm = perspective (pi/3) 0.8 1 3 !*! m
>
> pd = m44Deformation pm
> perspectiveHouse = stroke $ deform pd (translateZ (-1) house)
>
> -- Orthogonal projection
> am = lookingAt (mkP3 3.4 4 2.2) zero zDir
> orthogonalHouse = stroke $ amap am house
>
> -- Isometric projection (specialised orthogonal)
> isometricHouse = stroke $ isometricApply zDir house
>
> example :: Diagram B
> example = hsep 1 . map (sized (mkHeight 3) . centerXY) $
>   [ perspectiveHouse, orthogonalHouse, isometricHouse ]

Note that this should also be considered a "technology preview". Future releases of diagrams will likely include higher-level ways to do projections.

Grouping for opacity

A few backends (diagrams-svg, diagrams-pgf, and diagrams-rasterific as of version 1.3.1) support grouped transparency. The idea is that transparency can be applied to a group of diagrams as a whole rather than to individual diagrams. The difference is in what happens when diagrams overlap: if they are individually transparent, the overlapping section will be darker, as if two pieces of colored cellophane were overlapped. If transparency is applied to a group instead, the transparency is uniformly applied to the rendered output of the group of diagrams, as if a single piece of colored cellophane were cut out in the shape of the group of diagrams.

An example should help make this clear. In the example on the left below, the section where the two transparent circles overlap is darker. On the right, the call to opacityGroup means that the entire shape formed from the union of the two circles is given a uniform opacity; there is no darker region where the circles overlap.

> cir = circle 1 # lw none # fc red
> overlap = (cir <> cir # translateX 1)
>
> example = hsep 1 [ overlap # opacity 0.3, overlap # opacityGroup 0.3 ]
>             # centerX
>          <> rect 9 0.1 # fc lightblue # lw none

Visualizing envelopes and traces

Some new functions have been added to help visualize (approximations of) the envelope and trace of a diagram. For example:

> d1, d2 :: Diagram B
> d1 = circle 1
> d2 = (pentagon 1 === roundedRect 1.5 0.7 0.3)
>
> example = hsep 1
>   [ (d1 ||| d2) # showEnvelope' (with & ePoints .~ 360) # showOrigin
>   , (d1 ||| d2) # center # showEnvelope' (with & ePoints .~ 360) # showOrigin
>   , (d1 ||| d2) # center # showTrace' (with & tPoints .~ 20) # showOrigin
>   ]

Better command-line looping

For some time, many diagrams backends have had a "looped compilation mode", where the diagram-rendering executables produced take a --loop option, causing them to watch for changes to a source file, recompile and relaunch themselves. Support for this feature is now greatly improved. We have switched to use fsnotify, which eliminates polling and allows the feature to work on Windows for the first time (previous versions depended on the unix package). The output of the --loop mode has also been improved.

New backends

The diagrams-postscript, diagrams-svg, diagrams-cairo, diagrams-gtk, and diagrams-rasterific backends are all still going strong and fully supported. We now have several new backends as well: diagrams-html5 generates Javascript which sets up an HTML5 canvas containing the diagram; diagrams-canvas also targets HTML5 canvas, but uses the blank-canvas package to interact directly with an HTML5 canvas, enabling various sorts of interactivity. We also have a new backend diagrams-pgf which generates PGF/TikZ code suitable for including in \(\TeX\) documents—one can even have embedded text in diagrams typeset by \(\TeX\), allowing e.g. mathematical formulas as labels for things in your diagram.

Generalized numerics

It used to be that diagrams were hard-coded to use Double. As of version 1.3, Double is no longer baked in: diagrams are now parameterized by a suitable numeric type. It's too early to tell what the full implications of this will be, but in theory it opens up opportunities for things like automatic differentiation, constraint solving, and using diagrams in conjunction with deeply embedded DSLs.

This feature in particular was a tough nut to crack and is the fruit of a lot of labor. I want to especially highlight the work of Jan Bracker, Allan Gardner, and Frank Staals, all of whom did a lot of work attempting to generalize diagrams in this way. Although their code ultimately did not get merged, we learned a lot from their attempts! The fourth attempt, by Chris Chalmers, actually stuck. A big factor in his success was to simultaneously replace the vector-space package with linear, which turns out to work very nicely with diagrams and with generalized numeric types in particular.

Note that this is a Very Breaking Change as the types of almost everything changed. Anything which used to take a single type representing a vector space (such as R2) as an argument now takes two arguments, one for the structure/dimension of the vector space (e.g. V2) and one for the numeric/scalar type. See the migration guide for more specific information and help upgrading!

And lots more...

Of course, there are lots of other miscellaneous improvements, added type class instances, new lenses and prisms, bug fixes, and the like. For a full rundown see the release notes. There is also lots more exciting stuff in the pipeline!


We now have a team of 5 people working very actively on diagrams (Chris Chalmers, Daniel Bergey, Jeff Rosenbluth, Ryan Yates, and myself), with many, many more contributing here and there. As a way to say thank you for all the great contributions and hard work by many in the community, below is a (probably incomplete) list of people who have contributed to diagrams in some way — all 67 of them! We welcome involvement from anyone, regardless of experience. If you'd like to get involved, come chat with us on the #diagrams IRC channel on freenode, or send a message to the mailing list.

Diagrams contributors:

Alexis Praga / Allan Gardner / Andy Gill / Anthony Cowley / Bartosz Nitka / Ben Gamari / Brent Yorgey / Carlos Scheidegger / Carter Tazio Schonwald / Chris Mears / Christopher Chalmers / Claude Heiland-Allen / Conal Elliott / Daniel Bergey / Daniel Kröni / Daniel Wagner / Daniil Frumin / Deepak Jois / Denys Duchier / Dominic Steinitz / Doug Beardsley / Felipe Lessa / Florent Becker / Gabor Greif / Hans Höglund / Heinrich Apfelmus / Ian Ross / Jan Bracker / Jeffrey Rosenbluth / Jeremy Gibbons / Jeroen Bransen / Jim Snavely / Joachim Breitner / Joel Burget / John Lato / John Tromp / Jonas Haag / Kanchalai Suveepattananont / Kaspar Emanuel / Konrad Madej / Konstantin Zudov / Luite Stegeman / Michael Sloan / Michael Thompson / Moiman / Niklas Haas / Peter Hall / Pontus Granström / Robbie Gleichman / Robert Vollmert / Ryan Scott / Ryan Yates / Sam Griffin / Scott Walck / Sergei Trofimovich / sleepyMonad / soapie / Steve Sprang / Steven Smith / Tad Doxsee / Taneb / Taru Karttunen / Tillmann Vogt / Tim Docker / Vilhelm Sjöberg / Vincent Berthoux / Yiding Jia

Categories: Offsite Blogs

Problems with understanding article about IO (

Haskell on Reddit - Fri, 04/24/2015 - 5:33pm

The article says that having

getchar :: Int -> (Char, Int)

will guarantee that while evaluating get2chars

get2chars _ = [a,b] where (a,i) = getchar 1
                          (b,_) = getchar i

a will be evaluated before b because the second call of getchar depends on the Int returned by the first call. But as I understand it, this only guarantees that we will evaluate this Int (denoted by i) before b. Shouldn't there be additional conditions on getchar, like strictness?
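The questioner's worry can be made concrete with a tiny pure stand-in for the article's getchar (this model is invented here, not from the article): nothing forces the Int "world token" to be evaluated, so the data dependency alone imposes no ordering.

```haskell
-- A pure stand-in for the article's getchar. It is lazy in its
-- Int "world token": the token is never inspected, so the second
-- call can proceed without the first one's result ever being forced.
getchar :: Int -> (Char, Int)
getchar _ = ('x', 42)

get2chars :: Int -> String
get2chars _ = [a, b]
  where
    (a, i) = getchar 1
    (b, _) = getchar i   -- "depends" on i, but never forces it

-- Even an undefined token works, showing no evaluation is forced:
demo :: Char
demo = fst (getchar undefined)
```

So yes: without a strictness condition on getchar (forcing its argument before producing a result), the dependency chain does not guarantee evaluation order.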

submitted by AnkalagonBlack
[link] [9 comments]
Categories: Incoming News

Ord for partially ordered sets

libraries list - Fri, 04/24/2015 - 2:06pm
What is the validity of defining an Ord instance for types for which mathematically the `compare` function is partially ordered? Specifically, I have a pull request for fgl [1] to add Ord instances for the graph types (based upon the Ord instances for Data.Map and Data.IntMap, which I believe are themselves partially ordered), and I'm torn as to the soundness of adding these instances.

It might be useful in Haskell code (the example given is to use graphs as keys in a Map) but mathematically speaking it is not possible to compare two arbitrary graphs. What are people's thoughts on this? What's more important: potential usefulness/practicality or mathematical correctness?

(Of course, the correct answer is to have a function of type a -> a -> Maybe Ordering :p)

[1]:
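The `a -> a -> Maybe Ordering` idea from the message is easy to sketch for sets, a standard partially ordered example (the name partialCompare is invented here):

```haskell
import qualified Data.Set as S

-- Partial comparison by set inclusion:
-- Nothing means the two sets are incomparable.
partialCompare :: Ord a => S.Set a -> S.Set a -> Maybe Ordering
partialCompare x y
  | x == y             = Just EQ
  | x `S.isSubsetOf` y = Just LT
  | y `S.isSubsetOf` x = Just GT
  | otherwise          = Nothing
```

An Ord instance, by contrast, must return some Ordering for every pair, which is exactly where the soundness question comes from.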
Categories: Offsite Discussion

Export ($!) from Data.Function

libraries list - Fri, 04/24/2015 - 1:29pm
Hello, Right now the ($!) operator can be imported either from Prelude or GHC.Base. The latter is specific to GHC, so it makes sense to avoid it. There are reasons to avoid Prelude too (e.g. because it pollutes the name space, changes too often, etc.). We need a home module for all identifiers from Prelude, so that one can import them without importing Prelude. Data.Function seems to be a good candidate for the ($!) operator's home module. It already exports the ($) operator. Thanks, Yuras
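For context, ($!) is just strict application and can be defined with seq; the sketch below names it strictApply to avoid clashing with the Prelude (the definition matches the classic one, though GHC's actual implementation has varied slightly across versions):

```haskell
-- Strict application: force the argument to WHNF before
-- applying the function. This is essentially what ($!) does.
strictApply :: (a -> b) -> a -> b
strictApply f x = x `seq` f x
```

The discussion is purely about which module should export it, not about its definition.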
Categories: Offsite Discussion

Toby Goodwin: Building ghc-7.8.4 for Debian 7 (wheezy) with Stackage

Planet Haskell - Fri, 04/24/2015 - 1:27pm
Building ghc-7.8.4 for Debian 7 (wheezy) with Stackage

This article is a recipe for building ghc-7.8.4 from source, along with the other tools you need to use Haskell. Why would you do this rather than use haskell-platform? Well, the latest haskell-platform is now over a year old, and ships with ghc-7.8.3, so perhaps you want or need the latest compiler in the 7.8 series. Or perhaps you're just curious as to how it might be done.

We're also going to ensure that any packages we install to get things going will be the ones from the current Stackage LTS.

So far, I've only tried this recipe on a Debian 7 (wheezy) server, although apart from the very first step it should be the same for any distro.

Install a binary ghc

The very first step is to install a functioning binary build of haskell-platform from somewhere... presumably your distro's packages:

# apt-get install haskell-platform

(Throughout this recipe, I will prefix commands to be run as root with # and commands to be run as any non-root user with $. For the root commands, you can of course either use a root shell, or prefix each command with sudo.)

We're also going to need some other bits and pieces to build a new ghc. On a pretty minimal Debian 7, all I needed was this:

# apt-get install make libncurses5-dev

Build and install ghc

This is all straightforward. We want the default installation paths under /usr/local, so there is nothing to pass to configure. Build as some non-root user:

$ wget
$ tar xfJ ghc-7.8.4-src.tar.xz
$ cd ghc-7.8.4
$ sh configure
$ make -j5

In the last line, the number 5 should be 1 more than the CPU cores you have available.

Now install as root:

# cd /home/toby/ghc-7.8.4
# make install

Bootstrap cabal

This is the tricky part, that took me a while to work out. There is a very loose coupling between ghc and cabal, but they do need to agree on some things: cabal needs to install libraries to where ghc is going to look for them! The distro supplied cabal-1.14 won't work properly with our newly installed ghc as it uses the wrong value for libsubdir. The symptom of this is that any library including C code will install happily, but anything depending on such a library will then produce errors like these:

/usr/bin/ld: cannot find -lHStf-random-0.5-ghc7.8.4 /usr/bin/ld: cannot find -lHSprimitive-0.6-ghc7.8.4 /usr/bin/ld: cannot find -lHSrandom-1.1-ghc7.8.4

The cleanest way to fix this is to perform a two-stage boot of cabal: use cabal-1.14 to install a “stage1” cabal-1.18, then clean everything out and use that stage1 cabal to install a fully functional “stage2” cabal-1.18. The particular version of cabal we are using will be the one from the current LTS stackage:

# cd
# cabal update
# cabal install cabal-install==
# cp .cabal/bin/cabal .
# rm -r .ghc .cabal
# ./cabal update

The last line creates a new cabal configuration file that just needs a couple of tweaks. First, we want to reset the user-install flag. You could do that in your favourite text editor, or with this handy sed script:

# sed -i '/user-install/auser-install: False' .cabal/config

And we want to append the LTS stackage global file to the end of the file:

# u=''
# wget -q -O- $u >> .cabal/config

Now we're ready to perform the second stage install:

# ./cabal install cabal-install
# rm ./cabal
# hash -r # sacrifice goat to the great god Bash

Install other tools

Finally, to finish off, install the other basic tools:

# cabal install alex # cabal install happy
Categories: Offsite Blogs

Mark Jason Dominus: Easy exhaustive search with the list monad

Planet Haskell - Fri, 04/24/2015 - 8:02am

(Haskell people may want to skip this article about Haskell, because the technique is well-known in the Haskell community.)

Suppose you would like to perform an exhaustive search. Let's say for concreteness that we would like to solve this cryptarithm puzzle:

  S E N D
+ M O R E
---------
M O N E Y

This means that we want to map the letters S, E, N, D, M, O, R, Y to distinct digits 0 through 9 to produce a five-digit and two four-digit numerals which, when added in the indicated way, produce the indicated sum.

(This is not an especially difficult example; my 10-year-old daughter Katara was able to solve it, with some assistance, in about 30 minutes.)

If I were doing this in Perl, I would write up either a recursive descent search or a solution based on a stack or queue of partial solutions which the program would progressively try to expand to a full solution, as per the techniques of chapter 5 of Higher-Order Perl. In Haskell, we can use the list monad to hide all the searching machinery under the surface. First a few utility functions:

import Control.Monad (guard)

digits = [0..9]

to_number = foldl (\a -> \b -> a*10 + b) 0

remove rs ls = foldl remove' ls rs
  where remove' ls x = filter (/= x) ls

to_number takes a list of digits like [1,4,3] and produces the number they represent, 143. remove takes two lists and returns all the things in the second list that are not in the first list. There is probably a standard library function for this but I don't remember what it is. This version is quadratic, but who cares.
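The standard function the author couldn't recall is probably Data.List's (\\), which removes one occurrence of each element of its second argument from its first; since the digit lists here are duplicate-free, it behaves the same as remove (with the arguments flipped):

```haskell
import Data.List ((\\))

-- remove rs ls keeps the elements of ls not in rs;
-- for duplicate-free lists this is just ls \\ rs.
removeViaDifference :: Eq a => [a] -> [a] -> [a]
removeViaDifference rs ls = ls \\ rs
```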

Now the solution to the problem is:

-- S E N D
-- + M O R E
-- ---------
-- M O N E Y
solutions = do
  s <- remove [0] digits
  e <- remove [s] digits
  n <- remove [s,e] digits
  d <- remove [s,e,n] digits
  let send = to_number [s,e,n,d]
  m <- remove [0,s,e,n,d] digits
  o <- remove [s,e,n,d,m] digits
  r <- remove [s,e,n,d,m,o] digits
  let more = to_number [m,o,r,e]
  y <- remove [s,e,n,d,m,o,r] digits
  let money = to_number [m,o,n,e,y]
  guard $ send + more == money
  return (send, more, money)

Let's look at just the first line of this:

solutions = do
  s <- remove [0] digits
  …

The do notation is syntactic sugar for

(remove [0] digits) >>= \s -> …

where “…” is the rest of the block. To expand this further, we need to look at the overloading for >>=, which is implemented differently for every type. The value on the left of >>= is a list, and the definition of >>= for lists makes this equivalent to:

concat $ map (\s -> …) (remove [0] digits)

where “…” is the rest of the block.

So the variable s is bound to each of 1,2,3,4,5,6,7,8,9 in turn, the rest of the block is evaluated for each of these nine possible bindings of s, and the nine returned lists of solutions are combined (by concat) into a single list.
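This concat-of-maps behaviour can be seen in miniature with a two-binding example (not from the article; added here for illustration):

```haskell
-- Each binding in the list monad acts as a nested loop:
pairs :: [(Int, Char)]
pairs = do
  x <- [1, 2]
  c <- "ab"
  return (x, c)

-- The desugared form computes the same list:
pairs' :: [(Int, Char)]
pairs' = concat (map (\x -> concat (map (\c -> [(x, c)]) "ab")) [1, 2])
```

Both produce [(1,'a'),(1,'b'),(2,'a'),(2,'b')], one pair per combination of choices.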

The next line is the same:

e <- remove [s] digits

for each of the nine possible values for s, we loop over nine values for e (this time including 0 but not including whatever we chose for s) and evaluate the rest of the block. The nine resulting lists of solutions are concatenated into a single list and returned to the previous map call.

n <- remove [s,e] digits d <- remove [s,e,n] digits

This is two more nested loops.

let send = to_number [s,e,n,d]

At this point the value of send is determined, so we compute and save it so that we don't have to repeatedly compute it each time through the following 300 loop executions.

m <- remove [0,s,e,n,d] digits o <- remove [s,e,n,d,m] digits r <- remove [s,e,n,d,m,o] digits let more = to_number [m,o,r,e]

Three more nested loops and another computation.

y <- remove [s,e,n,d,m,o,r] digits let money = to_number [m,o,n,e,y]

Yet another nested loop and a final computation.

guard $ send + more == money return (send, more, money)

This is the business end. I find guard a little tricky so let's look at it slowly. There is no binding (<-) in the first line, so these two lines are composed with >> instead of >>=:

(guard $ send + more == money) >> (return (send, more, money))

which is equivalent to:

(guard $ send + more == money) >>= (\_ -> return (send, more, money))

which means that the values in the list returned by guard will be discarded before the return is evaluated.

If send + more == money is true, the guard expression yields [()], a list of one useless item, and then the following >>= loops over this one useless item, discards it, and yields a list containing the tuple (send, more, money) instead.

But if send + more == money is false, the guard expression yields [], a list of zero useless items, and then the following >>= loops over these zero useless items, never runs return at all, and yields an empty list.

The result is that if we have found a solution at this point, a list containing it is returned, to be concatenated into the list of all solutions that is being constructed by the nested concats. But if the sum adds up wrong, an empty list is returned and concatenated instead.
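The [()]-versus-[] behaviour of guard is easy to check on a small example:

```haskell
import Control.Monad (guard)

-- guard in the list monad: a False condition yields [],
-- pruning that branch of the search entirely.
evens :: [Int]
evens = do
  x <- [1 .. 10]
  guard (even x)   -- [()] when even x, [] otherwise
  return x
```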

After a few seconds, Haskell generates and tests 1.36 million choices for the eight bindings, and produces the unique solution:


That is:

  S E N D        9 5 6 7
+ M O R E      + 1 0 8 5
---------      ---------
M O N E Y      1 0 6 5 2

It would be an interesting and pleasant exercise to try to implement the same underlying machinery in another language. I tried this in Perl once, and I found that although it worked perfectly well, between the lack of the do-notation's syntactic sugar and Perl's clumsy notation for lambda functions (sub { my ($s) = @_; … } instead of \s -> …) the result was completely unreadable and therefore unusable. However, I suspect it would be even worse in Python because of semantic limitations of that language. I would be interested to hear about this if anyone tries it.

[ Addendum: Thanks to Tony Finch for pointing out the η-reduction I missed while writing this at 3 AM. ]

[ Addendum: Several people so far have misunderstood the question about Python in the last paragraph. The question was not to implement an exhaustive search in Python; I had no doubt that it could be done in a simple and clean way, as it can in Perl. The question was to implement the same underlying machinery, including the list monad and its bind operator, and to find the solution using the list monad.

[ Peter De Wachter has written in with a Python solution that, while not using the list monad, I think clearly demonstrates that the problems I was worried about will not arise, at least for this task. I hope to post his solution in the next few days. ]

Categories: Offsite Blogs

Paul Hudak

Lambda the Ultimate - Fri, 04/24/2015 - 6:26am

This is sad news indeed. I am sure almost everyone here has read at least one paper by Paul, and many knew him personally. When I first started thinking about programming languages I was fascinated by DSLs, and his work was simply inspiring. His voice will be missed.

Discussions of Paul Hudak's work

Update: There is some confusion about the situation. Please see the comments for further information.

Categories: Offsite Discussion

low-cost matrix rank?

haskell-cafe - Thu, 04/23/2015 - 11:34pm
Noticing that diagrams 1.3 has moved from vector-space to linear, I decided to check them both for a function to compute the rank of a matrix. Neither seems to have it. While I'm doing quite a bit of work with 2- and 3-element vectors, the only thing I do with matrices is take their rank, as part of verifying that the faces of a polyhedron actually make a polyhedron. So I'm looking for a relatively light-weight way of doing so that will work with a recent (7.8 or 7.10) ghc release. Or maybe getting such a function added to an existing library. Anyone have any suggestions? Thanks, Mike
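Absent a library, a light-weight rank can be computed directly with Gaussian elimination over plain lists. This is a sketch, not an answer from the thread; the name rank and the 1e-9 tolerance are choices made here:

```haskell
-- Rank of a matrix (given as a list of rows) by Gaussian elimination.
rank :: [[Double]] -> Int
rank = go
  where
    eps = 1e-9
    go rs = case filter (not . null) rs of
      []  -> 0
      rs' -> case break (\r -> abs (head r) > eps) rs' of
        -- first column is (numerically) all zeros: drop the column
        (_, [])              -> go (map tail rs')
        -- found a pivot row: eliminate its column from the other rows
        (above, pivot:below) ->
          1 + go (map (reduce pivot) (above ++ below))
    -- subtract the right multiple of the pivot row, dropping column 0
    reduce p r = zipWith (\pc rc -> rc - (head r / head p) * pc)
                         (tail p) (tail r)
```

For the small matrices involved in checking polyhedron faces this is plenty fast; for anything serious, an hmatrix-style SVD-based rank would be numerically safer.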
Categories: Offsite Discussion

First non-trivial Haskell project, suggestions to make code more idiomatic?

Haskell on Reddit - Thu, 04/23/2015 - 11:17pm

Hi, recently I implemented Red-Black trees in Haskell for my algorithms class. Before this project, all I did were minor problems in Real World Haskell and LYAH. There aren't many resources out there on idiomatic coding in haskell, so I was wondering if you guys could give me obvious suggestions for writing nicer, more idiomatic, code. Thanks.

submitted by CookieManifesto
[link] [12 comments]
Categories: Incoming News

LOLA 2015: Deadline extended to May 11

General haskell list - Thu, 04/23/2015 - 10:42pm
CALL FOR TALK PROPOSALS

LOLA 2015: Syntax and Semantics of Low Level Languages
Sunday, 5 July 2015, Kyoto, Japan
A satellite workshop of ICALP/LICS

/Important Dates/
Abstract submission: Monday, 11 May 2015 (extended)
Author notification: Friday, 22 May 2015 (extended)
LOLA 2015 workshop: Sunday, 5 July 2015

/Invited Speakers/
Nikos Tzevelekos, Queen Mary University of London, UK
Katsuhiro Ueno, Tohoku University, Japan

/Workshop Description/
It has been understood since the late 1960s that tools and structures arising in mathematical logic and proof theory can usefully be applied to the design of high level programming languages, and to the development of reasoning principles for such languages. Yet low level languages, such as machine code, and the compilation of high level languages into low level ones have traditionally
Categories: Incoming News

CfP LPNMR 2015: ***Last call***,registration closes in 33 hours

General haskell list - Thu, 04/23/2015 - 10:29pm
[apologies for any cross-posting]

Call for Papers

13th International Conference on Logic Programming and Non-monotonic Reasoning
LPNMR 2015
Lexington, KY, USA
September 27-30, 2015
(Collocated with the 4th Conference on Algorithmic Decision Theory 2015)

AIMS AND SCOPE

LPNMR 2015 is the thirteenth in the series of international meetings on logic programming and non-monotonic reasoning. LPNMR is a forum for exchanging ideas on declarative logic programming, non-monotonic reasoning, and knowledge representation. The aim of the conference is to facilitate interactions between researchers and practitioners interested in the design and implementation of logic-b
Categories: Incoming News

Holding On and Letting Go

Haskell on Reddit - Thu, 04/23/2015 - 10:19pm
Categories: Incoming News

Space leak with recursion

haskell-cafe - Thu, 04/23/2015 - 6:27pm
Hi, I'm trying to make an observable (code below) that listens to two ZeroMQ sockets and publishes the received messages on a third. Every message that is received gets an incremented sequence number before it is sent on the publishing socket. To keep the current sequence number, a state object is passed to the polling function, and after handling the message the polling function is called again with the updated state. Unfortunately this recursive calling of the polling function creates a space leak. Any suggestions how to fix this?

Note: for brevity I left the incrementing of the sequence number and the sending on the publishing socket out of the code.

Kind regards, Martijn Rijkeboer

--- Code ---

module Observable
  ( run
  ) where

import Control.Monad (void)
import Data.Int (Int64)
import System.ZMQ4

data State = State
  { nextSeqNum    :: !Int64
  , listenSocket  :: !(Socket Pull)
  , publishSocket :: !(Socket Pub)
  , snapSocket    :: !(Socket Router)
  }

run :: IO ()
run = do
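A common cause of this kind of leak is that the sequence number is threaded lazily through the recursion, accumulating a chain of (+1) thunks. A minimal model of the pattern and its usual fix (this is illustrative, not the poster's actual ZeroMQ code):

```haskell
{-# LANGUAGE BangPatterns #-}

-- Leaky shape: without optimization, the counter builds up
-- k unevaluated (+1) thunks before the base case forces them.
leakyLoop :: Int -> Int -> Int
leakyLoop n 0 = n
leakyLoop n k = leakyLoop (n + 1) (k - 1)

-- Fixed shape: the bang pattern forces the counter on every step,
-- so the state stays a plain evaluated Int.
strictLoop :: Int -> Int -> Int
strictLoop !n 0 = n
strictLoop !n k = strictLoop (n + 1) (k - 1)
```

The strict fields in the poster's State record help only if the State constructor is actually forced on each iteration; a bang pattern (or seq) on the state argument of the polling function accomplishes that.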
Categories: Offsite Discussion

Danny Gratzer: SML for Haskellers

Planet Haskell - Thu, 04/23/2015 - 6:00pm
Posted on April 24, 2015 Tags: sml, haskell

Inspired by ezyang’s OCaml for Haskellers I decided to write something similar for SML. If you already know OCaml I also recommend Adam Chlipala’s guide

I’ll follow mostly the same structure as Edward’s article so we’ll have

{- Haskell -} (* SML *)

What Do They Have in Common

SML and Haskell have quite a lot in common

Common types:

() | Int | Integer | Char | Bool | String | Double | (A, B, C)
unit | int | | char | bool | string | real | (A * B * C)


() | 1 | 'a' | True | "hello" | 3.14 | (1, 2, 3)
() | 1 | #"a" | true | "hello" | 3.14 | (1, 2, 3)

Common operators

== | /= | not | && | || | ++ | !!
= | <> | not | andalso | orelse | ^ | String.sub

Type variables:

a b
'a 'AnythingGoes

Function application:

f x y z
f x y z


\x -> ...
fn x => ...


if True then 1 else 0
if true then 1 else 0

Pattern matching

case x of
  Nothing -> ...
  Just a  -> ...

case x of
    NONE => ...
  | SOME a => ...

Top level functions support pattern matching in both:

factorial 0 = 1
factorial n = n * factorial (n - 1)

fun factorial 0 = 1
  | factorial n = n * factorial (n - 1)

Top level bindings can be declared without the sugar for currying as well.

f = \x -> \y -> x
val f = fn x => fn y => x

We can have top level patterns in both as well:

(a, b) = (1, 2)
val (a, b) = (1, 2)

Type synonyms:

type Three a b = (a, a, b)
type ('a, 'b) three = 'a * 'a * 'b

Data types:

data List a = Cons a (List a) | Nil
datatype 'a list = Cons of 'a * 'a list | Nil

Notice that in ML, data type constructors can only take one argument. This means they often end up taking a tuple (or record). They are, however, normal functions, unlike in OCaml.

Type annotations:

f :: a -> a
f x = x

fun f (x : 'a) : 'a = x

Type annotations for expressions:

(1 + 1 :: Int)
(1 + 1 : int)

Let bindings:

let x = 1 in x + x
let val x = 1 in x + x end

Declare a new mutable reference:

newIORef True
ref true

Modify a mutable reference:

writeIORef r False
r := false

Read a mutable reference:

readIORef r
! r

Making exceptions:

data MyExn = Exn String; instance Exception ... where
exception Exn of string

Raising an exception:

throw (Exn "uh oh")
raise Exn "uh oh"

Catching an exception:

catch e $ \(Exn s) -> s
e handle Exn s => s

Since SML isn’t a purely functional language, none of the last couple of constructs listed live in anything monadic like their Haskell siblings. The type of r := false is just unit, not IO () or something.

What Is SML Missing

Aside from the obvious things, like SML being strict so it’s missing pervasive lazy evaluation, SML is missing some things from Haskell.

The biggest gap I stumble across in SML is the lack of higher-kinded polymorphism:

Haskell:

    data Fix f = Fix (f (Fix f))

SML:

    datatype 'f fix = Fix of ('f fix) 'f (* Urk, syntax error *)

Even applying a type variable is a syntax error! As this might suggest to you, SML’s type system is much simpler than what we have in Haskell. It doesn’t have a notion of type families, GADTs, fancy kinds, data type promotion, etc, etc. SML is really limited to the areas of the Haskell type system you’d be accustomed to after reading Learn You A Haskell! Just algebraic data types, functions, and polymorphism.

Aside from this, SML doesn't have guards, nor much of the syntactic sugar that Haskell has. A nice exception is lambda case, which is written

fn 0 => 1 | 1 => 2 | n => 0

Additionally, SML doesn't have significant indentation, which means that occasionally awkward parentheses are necessary. For example

    case x of
        true  => (case !r of x => x + 1)
      | false => (r := 1; 2)

The parentheses are mandatory.

On the stranger side, SML has records (discussed later) but they don’t have a functional updating operation. This is a pain to be honest. Also related, SML has a somewhat nasty habit of allowing for ad-hoc overloading in the way most languages do: certain expressions are “blessed” with unutterable types that must be inferred from context. There are only a few of these, +, *, and record accessors being among them. I’m personally not a huge fan, but in practice this is almost never an issue.
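Records are covered in more detail below, but as a quick sketch of what the missing functional update means in practice (the `coord` type and `setX` helper are just illustrative names): "updating" a field means rebuilding the whole record by hand.

```sml
type coord = {x : int, y : int}

(* No analogue of Haskell's r { x = 3 }: every field must be respelled. *)
fun setX (newX : int) ({x = _, y} : coord) : coord = {x = newX, y = y}

val moved = setX 3 {x = 0, y = 7}  (* {x = 3, y = 7} *)
```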

Finally, ML doesn't have Haskell-style type classes. I don't miss them; some people would.

What Is Haskell Missing (in Comparison)

Aside from the obvious things, like Haskell being lazy and therefore missing pervasive eager evaluation, SML has a couple of interesting things that Haskell lacks.

Of course SML has actual modules. I’ve explained a bit about them earlier. This alone is reason enough to write some ML. Additionally, SML has a saner notion of records. Records are a type in and of themselves. This means we can have something like

type coord = {x : int, y : int}

However, since this is just a type synonym we don’t actually need to declare it. Accessors are written #x to access the field x from a record. SML doesn’t have a very advanced record system so #x isn’t typeable. It’s overloaded to access a field from some record and the concrete record must be inferrable from context. This often means that while we can have free floating records, the inference woes make us want to wrap them in constructors like so

datatype coord = Coord of {x : int, y : int}
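Backing up a step, here's a small sketch of the inference issue that motivates this wrapping (`getX` is just an illustrative name): an annotation pins down which record `#x` selects from, while the unannotated version is rejected.

```sml
(* Fine: the annotation determines the full record type. *)
fun getX (c : {x : int, y : int}) = #x c

(* Rejected with an "unresolved flex record" error: nothing tells the
   checker which record type c has.

   fun getX' c = #x c
*)
```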

This has the nice upshot that record accessors aren't horribly broken with multiple constructors. Let's say we had

    datatype user
      = Person of {firstName : string, lastName : string}
      | Robot of {owner : string, guid : int}

We can’t apply #firstName to an expression of type user. It’s ill-typed since user isn’t a record, it has a constructor which contains a record. In order to apply #firstName we have to pattern match first.
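For instance, here's a sketch of extracting `firstName` (repeating the datatype, with the required `of` on each constructor, so the snippet is self-contained):

```sml
datatype user
  = Person of {firstName : string, lastName : string}
  | Robot of {owner : string, guid : int}

(* #firstName can't be applied to a user; we match on Person first. *)
fun firstName (Person {firstName = f, ...}) = SOME f
  | firstName (Robot _) = NONE
```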

Finally, SML has a real, honest-to-goodness specification. In fact, SML is so well specified that it's been completely mechanized: there is an actual machine-checked proof that SML is type safe. The practical upshot of this is that SML is rock solid. There's a definitive right answer to what a program should do, and that answer comes from the specification, not from whatever some blessed implementation happens to do. In fact, there are quite a few SML compilers, and they're all reasonably compliant. Two noteworthy ones:

  1. SML/NJ - An interactive system for SML. This provides a REPL and is what we use at CMU for our introductory functional programming courses.
  2. MLton - A whole-program optimizing compiler. MLton produces stupidly fast code but takes significantly longer to compile.

Since SML is fully standardized, I generally develop with SML/NJ and eventually feed the program to MLton when I want it to run fast.

Also, modules are amazing, have I mentioned modules yet?
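Since they keep coming up, here's the smallest possible taste of what modules look like (the `COUNTER`/`Counter` names are mine, not from any library): a signature describes an interface, a structure implements it, and opaque ascription (`:>`) hides the fact that `t` is really `int`.

```sml
signature COUNTER =
sig
  type t
  val zero : t
  val tick : t -> t
  val read : t -> int
end

structure Counter :> COUNTER =
struct
  type t = int
  val zero = 0
  fun tick c = c + 1
  fun read c = c
end

(* Outside the structure, Counter.t is abstract:
   Counter.read (Counter.tick Counter.zero) is fine,
   but Counter.zero + 1 is a type error. *)
```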

Wrap Up

So now that we've gone through most of the basic syntactic constructs of SML, most ML code should be readable. This is great, because there are some interesting pieces of ML code to read. In particular, these wonderful books use ML:

  • Purely Functional Data Structures
  • Compiling With Continuations
  • Modern Compiler Implementation in ML

I recommend all three of these books heartily. If you're looking to learn about compilers, the last one in particular is the best introduction I'm aware of. The second is an in-depth look at a trick for compiling strict functional languages.

Other general books on ML, if you decide to give SML a more serious look:

  • ML for the Working Programmer
  • Elements of ML Programming
  • Programming in Standard ML

The course I'm currently TAing is based around the last one, which is conveniently available online for free.

