News aggregator

Need help understanding lazysplines

Haskell on Reddit - Fri, 08/08/2014 - 11:28pm

The documentation is rather sparse, but from reading the source and trying to understand the examples, it seems that duckDeathAtAge defines a piecewise function where duckDeathAtAge at x gives the probability that the duck will die at age x. Then survival at x gives the probability that the duck will live to age x. I'm not fully understanding how the recursion works here, and the other examples seem almost impenetrable.

Some googling led me to the announcement of a talk by the author, but I couldn't find slides or video on the websites linked at the end of the paper.

Can anyone with more experience with the library point me to additional resources to learn about the topic?
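For what it's worth, here is a hand-rolled, discrete-time sketch of the duckDeathAtAge/survival relationship described above. It does not use lazysplines or its API at all; it only illustrates the kind of lazy, self-referential definition involved, with a made-up constant hazard rate:

-- per-step probability of dying, given survival so far (made-up constant)
hazard :: [Double]
hazard = repeat 0.1

-- survival !! n: probability of still being alive at step n,
-- defined lazily in terms of its own earlier values
survival :: [Double]
survival = 1 : zipWith (\s h -> s * (1 - h)) survival hazard

-- deathAt !! n: probability of dying exactly at step n
deathAt :: [Double]
deathAt = zipWith (*) survival hazard

Here take 5 survival gives [1.0,0.9,0.81,0.729,0.6561]; as far as I can tell, the library plays a similar trick with splines instead of lists.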

submitted by precalc
[link] [3 comments]
Categories: Incoming News

Yesod Web Framework: Deprecating yesod-platform

Planet Haskell - Fri, 08/08/2014 - 11:10pm

I want to deprecate the yesod-platform, and instead switch to Stackage server as the recommended installation method for Yesod for end users. To explain why, let me explain the purpose of yesod-platform, the problems I've encountered maintaining it, and how Stackage Server can fit in. I'll also explain some unfortunate complications with Stackage Server.

Why yesod-platform exists

Imagine a simpler Yesod installation path:

  1. cabal install yesod-bin, which provides the yesod executable.
  2. yesod init to create a scaffolding.
  3. cabal install inside that directory, which downloads and installs all of the necessary dependencies.

This in fact used to be the installation procedure, more or less. However, this led to a number of user problems:

  • Back in the earlier days of cabal-install, it was difficult for the dependency solver to find a build plan in this situation. Fortunately, cabal-install has improved drastically since then.
    • This does still happen occasionally, especially with packages with restrictive upper bounds. Using --max-backjumps=-1 usually fixes that.
  • It sometimes happens that one of Yesod's upstream dependencies breaks Yesod, either by accidentally changing an API or by introducing a runtime bug.

This is where yesod-platform comes into play. Instead of leaving it up to cabal-install to track down a consistent build plan, it specifies exact versions of all dependencies so that every user gets the same, known-good plan.
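For illustration, a pinned dependency list in yesod-platform.cabal looks roughly like the hypothetical fragment below (the version numbers are invented, not the real ones):

build-depends: yesod            == 1.2.6
             , yesod-core       == 1.2.16
             , shakespeare      == 2.0.1
             , wai              == 3.0.1
             -- ...and so on, one exact pin per transitive dependency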

Conflicts with GHC deps/Haskell Platform

Yesod depends on aeson. So logically, yesod-platform should have a strict dependency on aeson. We try to always use the newest versions of dependencies, so today, that would be aeson == 0.8.0.0. In turn, this demands text >= 1.1.1.0. However, if you look at the Haskell Platform changelog, there's no version of the platform that provides a new enough version of text to support that constraint.

yesod-platform could instead specify an older version of aeson, but that would unnecessarily constrain users who aren't sticking to the Haskell Platform versions (which, in my experience, is the majority of users). This would also cause more dependency headaches down the road, as you'd now also need to force older versions of packages like criterion.

To avoid this conflict, yesod-platform has taken the approach of simply omitting constraints on any packages in the platform, as well as any packages with strict bounds on those packages. And if you look at yesod-platform today, you'll see that there is no mention of aeson or text.

A similar issue pops up for packages that are a dependency of the GHC package (a.k.a., GHC-the-library). The primary problem there is the binary package. In this case, the allowed version of the package depends on which version of GHC is being used, not the presence or absence of the Haskell Platform.

This results in two problems:

  • It's very difficult to maintain this list of excluded packages correctly. I get a large number of bug reports about these kinds of build plan problems.

  • We're giving up quite a bit of the guaranteed buildability that yesod-platform was supposed to provide. If aeson 0.7.0.4 (as an example) doesn't work with yesod-form, yesod-platform won't be able to prevent such a build plan from happening.

There's also an issue with the inability to specify dependencies on executable-only packages, like alex, happy, and yesod-bin.

Stackage Server

Stackage Server solves exactly the same problem: it provides a consistent set of packages that can be installed together. Unlike yesod-platform, its snapshots are distinguished by GHC version. And it's far simpler to maintain: firstly, I'm already maintaining Stackage Server full time; and secondly, all of the testing work is handled by a largely automated process.

So here's what I'm proposing: I'll deprecate the yesod-platform package, and change the Yesod quickstart guide to have the following instructions:

  • Choose an appropriate Stackage snapshot from stackage.org
  • Modify your cabal config file appropriately (a sketch of this step follows the list)
  • cabal install yesod-bin alex happy
  • Use yesod init to set up a scaffolding
  • cabal install --enable-tests in the new directory
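
Roughly what the config step looks like in practice (the snapshot identifier below is a placeholder; the real remote-repo line is copied from the chosen snapshot's page on stackage.org):

-- in ~/.cabal/config: disable the default Hackage remote-repo...
-- remote-repo: hackage.haskell.org:http://hackage.haskell.org/packages/archive
-- ...and add the line shown for your snapshot, e.g.:
remote-repo: stackage:http://www.stackage.org/stackage/<snapshot-id>
-- then run `cabal update` to fetch the snapshot's package index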

For users wishing to live closer to the bleeding edge, the option of simply not using Stackage is always available. That gives more control over package versions, but sacrifices some stability.

The problems

There are a few issues that need to be ironed out.

  • cabal sandbox does not allow changing the remote-repo. Fortunately, Luite seems to have this solved, so hopefully this won't be a problem for long. Until then, you can either use a single Stackage snapshot for all your development, or use a separate sandboxing technology like hsenv.

  • Haskell Platform conflicts still exist. The problem I mentioned above with aeson and text is a real problem. The theoretically correct solution is to create a Stackage snapshot for GHC 7.8 + Haskell Platform. And if there's demand for that, I'll bite the bullet and do it, but it's not an easy bullet to bite. But frankly, I'm not hearing a lot of users saying that they want to peg Haskell Platform versions specifically.

    In fact, the only users who really seem to want to stick to Haskell Platform versions are Windows users, and the main reason for this is the complexity in installing the network package on Windows. I think there are three possible solutions to this issue, without forcing Windows users onto old versions of packages:

    1. Modify the network package to be easier to install on Windows. I really hope this sees some progress. If this is too unstable to be included in the official Hackage release, we could instead have an experimental Stackage snapshot for Windows with that modification applied.
    2. Tell Windows users to simply bypass Stackage and yesod-platform, with the possibility of more build problems on that platform.
      • We could similarly recommend Windows users develop in a Linux virtual machine/Docker image.
    3. Provide a Windows distribution of GHC + cabal-install + network. With the newly split network/network-uri, this is a serious possibility.

Despite these issues, I think Stackage Server is a definite improvement on yesod-platform on Linux and Mac, and will likely still improve the situation on Windows, once we figure out the Haskell Platform problems.

I'm not making any immediate changes. I'd very much like to hear from people using Yesod on various operating systems about how these changes would affect them.

Categories: Offsite Blogs

Get structured data out of a C/C++ library in Haskell

haskell-cafe - Fri, 08/08/2014 - 3:34pm
Hello everybody. I'm new to the list, so I'd like to say hello to you. I'm a computer science student and an early practitioner of Haskell. I've decided to implement my next project in Haskell, but I need to interface with a C++ library. I've read all the wiki material on the matter and I have an understanding of how the FFI to C works in general.

The library in question is a SAT solver for LTL formulas, and all I need to do is to be able to create the AST of the formula (Haskell will do the parsing), pass it to the library, and then get back the reply. From the C++ point of view, the AST of a formula consists simply of objects linked together with raw pointers. Nodes are all of the same type, with an internal enum that specifies the kind of node, so there's no inheritance hierarchy or fancy things.

What I would like to do is to be able to declare an algebraic data type that represents the AST and somehow mangle it into a form that can be passed to the C++ function of the library (that I can wrap into an ex
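For what it's worth, the usual shape of such a binding looks roughly like the sketch below. The foreign function name ltl_mk_node and its argument layout are hypothetical stand-ins for an extern "C" wrapper you would write around the real library, and the Formula type is likewise only an illustration:

{-# LANGUAGE ForeignFunctionInterface #-}

import Foreign.Ptr (Ptr, nullPtr)
import Foreign.C.Types (CInt (..))

-- opaque stand-in for the C++ node type
data CNode

-- hypothetical extern "C" wrapper: tag, atom id, left child, right child
foreign import ccall "ltl_mk_node"
  c_mkNode :: CInt -> CInt -> Ptr CNode -> Ptr CNode -> IO (Ptr CNode)

-- an illustrative Haskell-side AST
data Formula
  = Atom Int
  | Neg Formula
  | Conj Formula Formula
  | Until Formula Formula

-- rebuild the pointer-linked node graph on the C++ side
toC :: Formula -> IO (Ptr CNode)
toC (Atom n)    = c_mkNode 0 (fromIntegral n) nullPtr nullPtr
toC (Neg a)     = do pa <- toC a
                     c_mkNode 1 0 pa nullPtr
toC (Conj a b)  = do pa <- toC a
                     pb <- toC b
                     c_mkNode 2 0 pa pb
toC (Until a b) = do pa <- toC a
                     pb <- toC b
                     c_mkNode 3 0 pa pb

On the C++ side the wrapper would just allocate a node with the given tag and children and return the pointer; a Storable-based layout driven entirely from Haskell is another option.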
Categories: Offsite Discussion

Data families and classes to simulate GADT example

Haskell on Reddit - Fri, 08/08/2014 - 2:25pm

This works:

data One a where
  A :: One ()
  B :: One Int
  C :: One Int

blah :: One a -> a
blah A = ()
blah B = 10
blah C = 42

but this doesn't:

class One a where
  data Two a

instance One () where
  data Two () = A

instance One Int where
  data Two Int = B | C

blah :: One a => Two a -> a
blah A = undefined
blah _ = undefined

Presumably because it doesn't have a proof that unifies a with (), but is there any way to get it to work?
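One way that does work (a sketch of a possible workaround, not necessarily what you're after) is to move blah into the class itself, so each instance provides the clauses for its own constructors and no cross-instance unification is needed:

{-# LANGUAGE TypeFamilies #-}

class One a where
  data Two a
  blah :: Two a -> a

instance One () where
  data Two () = A
  blah A = ()

instance One Int where
  data Two Int = B | C
  blah B = 10
  blah C = 42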

submitted by haskellthrowaway
[link] [11 comments]
Categories: Incoming News

How are polymorphic numeric functions implemented?

Haskell on Reddit - Fri, 08/08/2014 - 2:08pm

Are numbers boxed with a vtable for looking up functions at runtime, or, since Haskell knows the concrete types at compile time (I think?), is it like C++ templates, where a copy of each function is created for each concrete type that is needed? If it's the latter, and you had a gigantic program that right at the beginning "switched" on a value read in from the user into branches for X different number types, would your executable be X times bigger? Something like:

switch (user input) {
  case 1: longCalculation (read n :: Double);
  case 2: longCalculation (read n :: Integer);
  ... etc.
}

This has no relevance to anything, just weird curiosity.
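For what it's worth, the usual answer is dictionary passing: the class constraint turns into an extra argument, so one copy of the function serves every numeric type, and GHC only duplicates code if it chooses (or is told) to specialise. The NumDict record below is a hand-rolled illustration of that idea, not GHC's actual dictionary representation:

-- a hand-written stand-in for (part of) the Num dictionary
data NumDict a = NumDict { add :: a -> a -> a, mul :: a -> a -> a }

-- one definition, usable at any type for which a dictionary is supplied
square :: NumDict a -> a -> a
square d x = mul d x x

intDict :: NumDict Int
intDict = NumDict (+) (*)

doubleDict :: NumDict Double
doubleDict = NumDict (+) (*)

main :: IO ()
main = print (square intDict 7, square doubleDict 1.5)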

submitted by chromaticburst
[link] [17 comments]
Categories: Incoming News

Brent Yorgey: Maniac week

Planet Haskell - Fri, 08/08/2014 - 1:43pm

Inspired by Bethany Soule (and indirectly by Nick Winter, and also by the fact that my dissertation defense and the start of the semester are looming), I am planning a “maniac week” while Joyia and Noah will be at the beach with my family (I will join them just for the weekend). The idea is to eliminate as many distractions as possible and to do a ton of focused work. Publicly committing (like this) to a time frame, ground rules, and to putting up a time-lapse video of it afterwards are what actually make it work—if I don’t succeed I’ll have to admit it here on my blog; if I waste time on Facebook the whole internet will see it in the video; etc. (There’s actually no danger of wasting time on Facebook in particular since I have it blocked, but you get the idea.)

Here are the rules:

  • I will start at 6pm (or thereabouts) on Friday, August 8.
  • I will continue until 10pm on Wednesday, August 13, with the exception of the morning of Sunday, August 10 (until 2pm).
  • I will get at least 7.5 hours of sleep each night.
  • I will not eat cereal for any meal other than breakfast.
  • I will reserve 3 hours per day for things like showering, eating, and just plain resting.  Such things will be tracked by the TagTime tag “notwork”.
  • I will spend the remaining 13.5 hours per day working productively. Things that will count as productive work:
    • Working on my dissertation
    • Course prep for CS 354 (lecture and assignment planning, etc.) and CS 134 (reading through the textbook); making anki decks with names and faces for both courses
    • Updating my academic website (finish converting to Hakyll 4; add potential research and independent study topics for undergraduates)
    • Processing FogBugz tickets
    • I may work on other research or coding projects (e.g. diagrams) each day, but only after spending at least 7 hours on my dissertation.
  • I will not go on IRC at all during the week.  I will disable email notifications on my phone (but keep the phone around for TagTime), and close and block gmail in my browser.  I will also disable the program I use to check my UPenn email account.
  • For FogBugz tickets which require responding to emails, I will simply write the email in a text file and send it later.
  • I may read incoming email and write short replies on my phone, but will keep it to a bare minimum.
  • I will not read any RSS feeds during the week.  I will block feedly in my browser.
  • On August 18 I will post a time-lapse video of August 8-13.  I’ll probably also write a post-mortem blog post, if I feel like I have anything interesting to say.
  • I reserve the right to tweak these rules (by editing this post) up until August 8 at 6pm.  After that point it’s shut up and work time, and I cannot change the rules any more.

And no, I’m not crazy. You (yes, you) could do this too.


Categories: Offsite Blogs

Six Points About Type Safety

Haskell on Reddit - Fri, 08/08/2014 - 8:38am
Categories: Incoming News

Visualising Haskell function execution

haskell-cafe - Fri, 08/08/2014 - 6:30am
Hey all,

Last weekend my friend Steve and I did a small project for visualising Haskell function execution in the browser. It's meant to be used in education, and uses a tiny custom parser. I figured it could be of interest to anyone here learning or teaching Haskell: https://stevekrouse.github.io/hs.js/

To see it in action, scroll a bit down to the red-bordered box, click on "map", and then keep clicking on each new line. I hope it can be useful to someone.

Cheers, JP
Categories: Offsite Discussion

How to improve zipWith's performance

haskell-cafe - Fri, 08/08/2014 - 4:24am
Dear all,

I wrote some code for clustering with Data.Clustering.Hierarchical, but it's slow. I used profiling and changed some of the code, but I don't understand why zipWith takes so much time (even after I changed the list to a vector). My code is below; any advice would be appreciated.

======================
main = do
    ....
    let cluster = dendrogram SingleLinkage vectorList getVectorDistance
    ....

getExp2 v1 v2 = d*d
  where d = v1 - v2

getExp v1 v2
  | v1 == v2 = 0
  | otherwise = getExp2 v1 v2

tfoldl d = DV.foldl1' (+) d

changeDataType :: Int -> Double
changeDataType d = fromIntegral d

getVectorDistance :: (a, DV.Vector Int) -> (a, DV.Vector Int) -> Double
getVectorDistance v1 v2 = fromIntegral $ tfoldl dat
  where
    l1 = snd v1
    l2 = snd v2
    dat = DV.zipWith getExp l1 l2
=======================================

Built with: ghc -prof -fprof-auto -rtsopts -O2 log_cluster.hs
Run with: log_cluster.exe +RTS -p

The profiling result is:

log_cluster.exe +RTS -p -RTS
total time =
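One common fix for this kind of code (a sketch under my own assumptions, not a tested change to the post above) is to use unboxed vectors and drop the v1 == v2 special case, which computes the same value as the general branch anyway, so the whole distance becomes a single fused zipWith/sum pass:

import qualified Data.Vector.Unboxed as VU

getVectorDistance :: (a, VU.Vector Int) -> (a, VU.Vector Int) -> Double
getVectorDistance (_, l1) (_, l2) =
    fromIntegral (VU.sum (VU.zipWith sqDiff l1 l2))
  where
    -- squared difference of two components
    sqDiff v1 v2 = let d = v1 - v2 in d * d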
Categories: Offsite Discussion

Bryan O'Sullivan: criterion 1.0

Planet Haskell - Fri, 08/08/2014 - 4:02am

Almost five years after I initially released criterion, I'm delighted to announce a major release with a large number of appealing new features.

As always, you can install the latest goodness using cabal install criterion, or fetch the source from github.

Please let me know if you find criterion useful!

New documentation

I built both a home page and a thorough tutorial for criterion. I've also extended the inline documentation and added a number of new examples.

All of the documentation lives in the github repo, so if you'd like to see something improved, please send a bug report or pull request.

New execution engine

Criterion's model of execution has evolved, becoming vastly more reliable and accurate. It can now measure events that take just a few hundred picoseconds.

benchmarking return ()
time                 512.9 ps   (512.8 ps .. 513.1 ps)

While almost all of the core types have changed, criterion should remain API-compatible with the vast majority of your benchmarking code.

New metrics

In addition to wall-clock time, criterion can now measure and regress on the following metrics:

  • CPU time
  • CPU cycles
  • bytes allocated
  • number of garbage collections
  • number of bytes copied during GC
  • wall-clock time spent in mutator threads
  • CPU time spent running mutator threads
  • wall-clock time spent doing GC
  • CPU time spent doing GC
Linear regression

Criterion now supports linear regression of a number of metrics.

Here's a regression conducted using --regress cycles:iters:

cycles:              1.000 R²   (1.000 R² .. 1.000 R²)
  iters              47.718     (47.657 .. 47.805)

The first line of the output is the R² goodness-of-fit measure for this regression, and the second is the number of CPU cycles (measured using the rdtsc instruction) to execute the operation in question (integer division).
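For concreteness, a benchmark of roughly this shape (the name and operands are mine, not the ones behind the output above) produces such a regression when the compiled program is run with --regress cycles:iters:

import Criterion.Main

main :: IO ()
main = defaultMain
  [ -- measure integer division; whnf re-applies the function on each iteration
    bench "integer division" $ whnf (1000000007 `div`) (7 :: Int)
  ]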

This next regression uses --regress allocated:iters to measure the number of bytes allocated while constructing an IntMap of 40,000 values.

allocated:           1.000 R²   (1.000 R² .. 1.000 R²)
  iters              4.382e7    (4.379e7 .. 4.384e7)

(That's a little under 42 megabytes.)

New outputs

While its support for active HTML has improved, criterion can also now output JSON and JUnit XML files.

New internals

Criterion has received its first spring cleaning, and is much easier to understand as a result.

Acknowledgments

I was inspired to do some of this work by the efforts of the authors of the OCaml Core_bench package.

Categories: Offsite Blogs

Can I safely delete the older files in my .cabal folder?

Haskell on Reddit - Fri, 08/08/2014 - 12:58am

My current .cabal folder looks like

.cabal
├── packages
│   └── hackage.haskell.org
│       └── 00-index.tar
├── lib
│   ├── x86_64-osx-ghc-7.8.2
│   └── x86_64-osx-ghc-7.8.3
└── setup-exe-cache
    ├── setup-Configure-Cabal-1.18.1.3-x86_64-osx-ghc-7.8.3
    ├── setup-Simple-Cabal-1.18.1.3-x86_64-osx-ghc-7.8.3
    ├── setup-Configure-Cabal-1.18.1.3-x86_64-osx-ghc-7.8.2
    └── setup-Simple-Cabal-1.18.1.3-x86_64-osx-ghc-7.8.2

Can I delete all of the older (ghc-7.8.2) entries?

submitted by abhishkk65
[link] [2 comments]
Categories: Incoming News

How do parametricity and type classes interact?

Haskell on Reddit - Thu, 08/07/2014 - 11:43pm

I've heard responses that they do not interact since type classes are ad-hoc and parametricity requires parametric polymorphism as the name indicates.

But in Wadler's paper Theorems for Free! he gives examples of some theorems (given a : A → A′ and b : B → B′):

sort : ∀X. (X → X → Bool) → [X] → [X]
    if   for all x, y ∈ A, (x < y) = (a x <′ a y)
    then map a ∘ sort (<) = sort (<′) ∘ map a

fold : ∀X. ∀Y. (X → Y → Y) → Y → [X] → Y
    if   for all x ∈ A, y ∈ B, b (x ⊕ y) = (a x) ⊛ (b y) and b u = u′
    then b ∘ fold (⊕) u = fold (⊛) u′ ∘ map a

sort corresponds to Haskell's Data.List.sortBy, but you could also view both of these parametrically polymorphic functions as ad-hoc functions whose type-class constraints (Ord and Foldable) have been reified into arguments. So is it valid to view an ad-hoc polymorphic function such as \x -> x == x, with type Eq a => a -> Bool, as having the parametric type (a -> a -> Bool) -> a -> Bool (ignoring (/=))?
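A concrete rendering of that reification (the function names here are mine, just for illustration): the Eq constraint becomes an explicit comparison argument, and the second version has a fully parametric type to which the free-theorem machinery applies directly.

-- the ad-hoc version: the comparison comes from the Eq dictionary
reflexive :: Eq a => a -> Bool
reflexive x = x == x

-- the "reified" version: the comparison is an ordinary argument
reflexive' :: (a -> a -> Bool) -> a -> Bool
reflexive' eq x = eq x x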

submitted by haskellthrowaway
[link] [11 comments]
Categories: Incoming News

Looking for name of a concept (relating to parametricity)

Haskell on Reddit - Thu, 08/07/2014 - 11:27pm

Given a function of the following type:

f :: [a] -> [a]

Due to parametricity, the only information f has available for making choices is the length of its input list, since that is the only element-independent “information content” of a list.

Is there a name for this information, i.e. the information content you can retrieve from a data type when one or more of its type parameters is parametrically polymorphic?
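As a toy illustration of the claim (the function name and behaviour are invented here): a polymorphic list function can branch only on what it can observe without inspecting elements, i.e. the length/shape of the list.

f :: [a] -> [a]
f xs
  | even (length xs) = reverse xs   -- the decision depends only on the length
  | otherwise        = drop 1 xs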

submitted by haskellthrowaway
[link] [16 comments]
Categories: Incoming News

How does Accelerate performance compare to handwritten OpenCL/CUDA?

Haskell on Reddit - Thu, 08/07/2014 - 10:57pm

I guess it would be much slower; otherwise, why would anyone use CUDA? But has anyone benchmarked it? How does it compare?

submitted by SrPeixinho
[link] [7 comments]
Categories: Incoming News

[ANN] rtorrent-state 0.1.0.0

haskell-cafe - Thu, 08/07/2014 - 8:07pm
Hi,

rtorrent-state is a library for working with the rtorrent state files (SOMEHASH.torrent.rtorrent) placed in your session directory. If you're an rtorrent user and have ever had to manually muck around with those files, you should be able to use this library to make your life easier. For example, you can stop all torrents in your session directory with just:

overFilesIn "rtorrent/session/dir" stopTorrent

The way it works is by parsing the session files, modifying the resulting data type and serialising it back into the file. I haven't done any optimisation, but I had no problems with a test sample of 100,000 files. I still need to add IOException handling and maybe some extra utility functions, but otherwise I consider the library finished.

Thanks
Categories: Offsite Discussion