I remember from a long time ago that Clean seemed almost a viable competitor to Haskell in the lazy functional language space. It looks like the latest Clean release is from 2011. My recollection is that early on the implementation suffered from some licensing issues. Anyone still dabbling with Clean? The uniqueness typing seems like something worthy of not getting lost in the sands of time. Is Mercury the only language still pursuing uniqueness?
submitted by sleepingsquirrel
[link] [20 comments]
I'm particularly interested in the typesafe compilers.
submitted by wcb10
[link] [18 comments]
I've started to learn it and it seems really unusual that Haskell allows integers to be of unlimited size by default. How does it implement its Integer type and some of the corresponding operations, like addition?
submitted by coffeecoffeecoffeee
[link] [11 comments]
Hello! I am proud to present my new game on the Play Store.
It's a game for everyone: Turtlerise.
The game has a simple mechanic: you need to make the turtle go up; the higher it goes, the higher your score. To make it go up, you place rocks for it to bounce on. Other elements in the game make it more interesting.
The gameplay is fast and challenging, and scoring is not as easy as it seems!
Facebook integration is built in, so you can play with your friends and compete to see who can go higher.
- Turtlerise at Play Store: https://play.google.com/store/apps/details?id=com.turtlecorp.turtlerise
- Turtlerise at Facebook: https://www.facebook.com/turtlerise
Last Friday, I attended Well Typed’s training course on the various extensions available in Haskell - specifically, those available in GHC. I had a terrific time, and as I feel Well Typed’s courses go somewhat unnoticed, it deserves a write-up. Despite having a fairly wide knowledge of type-level magic, I always felt my knowledge was a little… ad hoc. I can confidently say that I no longer feel that is the case.
First though, what was this course about? As you no doubt know, Haskell is well known for being a very type-safe language. However, if we limit ourselves to “vanilla” Haskell, we can quickly run into problems. Andres began with a motivating example: a basic quiz application, where we have a list of Questions and a corresponding list of Answers. We can model these in Haskell as [Question] and [Answer] - but there’s very little that the type system is doing to help us build programs that manipulate this data. For example, scoring the questions and answers should be a case of zipping the two lists together - but if the two lists aren’t the same length, then we certainly won’t calculate the right score!
Andres is fantastic at breaking complicated topics apart into small pieces. If you haven’t seen his talk explaining free monads from last year’s Haskell eXchange, I highly recommend it. The Well Typed course progressed in a very similar way. From the original problem statement, we first tried to write length-indexed lists using newtypes with phantom types, but noticed that this isn’t a particularly useful abstraction. We quickly moved to GADTs and rephrased our data as Vec n Question and Vec n Answer, which already lets us write a much sounder form of zipWith for scoring. This material alone is well worth learning, as GADTs are a good solution for a wide range of problems (Andres nicely explained that GADTs are a good way to model relations between types).
However, we can go further with this data type. One problem we noticed was that this type wasn’t restrictive enough - GHC will quite happily accept types like Vec Bool Char, which is completely meaningless! Even though we can’t construct any terms of this type, it would be good if we were prevented from making such mistakes as soon as possible. Using the recent data type promotion functionality, we addressed this problem, and considered Vec :: Nat -> * -> * a good solution for our application so far.
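To make the progression above concrete, here is a minimal sketch of a promoted-Nat-indexed vector with a length-safe zipWith. The names (Nat, Vec, vzipWith, vtoList) are my own reconstruction, not necessarily the course's exact code:

```haskell
{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

-- Type-level naturals via datatype promotion:
data Nat = Z | S Nat

-- A vector indexed by its length; Vec Bool Char is now a kind error.
data Vec (n :: Nat) a where
  VNil  :: Vec 'Z a
  VCons :: a -> Vec n a -> Vec ('S n) a

-- zipWith that only accepts vectors of equal length:
vzipWith :: (a -> b -> c) -> Vec n a -> Vec n b -> Vec n c
vzipWith _ VNil         VNil         = VNil
vzipWith f (VCons x xs) (VCons y ys) = VCons (f x y) (vzipWith f xs ys)

-- Forget the length index to recover an ordinary list:
vtoList :: Vec n a -> [a]
vtoList VNil         = []
vtoList (VCons x xs) = x : vtoList xs
```

Note that scoring a quiz with vzipWith can no longer go wrong at runtime: zipping a Vec n Question against a Vec m Answer with n /= m is rejected by the type checker.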
We then extended the application to support multiple types of questions and answers (true/false vs. quantity questions), and reached the limit of Vec. We generalised a step further to a type of heterogeneous list called Env:

```haskell
data Env :: [k] -> (k -> *) -> * where
  Nil  :: Env '[] f
  (:*) :: f a -> Env as f -> Env (a ': as) f
```
We moved machinery from Vec to Env to extend our application, still maintaining a huge amount of type safety. However, the more general type introduced problems, and we diverged out to understand how type class derivation works, and observed the need for higher-rank polymorphism to write zipWith for this data type.
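A sketch of what that higher-rank zipWith for Env can look like; the helper names (I, example, ezipWith, elen) are mine, chosen for illustration:

```haskell
{-# LANGUAGE DataKinds, GADTs, KindSignatures, PolyKinds, RankNTypes, TypeOperators #-}

data Env :: [k] -> (k -> *) -> * where
  Nil  :: Env '[] f
  (:*) :: f a -> Env as f -> Env (a ': as) f
infixr 5 :*

-- A minimal identity functor for building example values:
newtype I a = I a

example :: Env '[Bool, Int] I
example = I True :* I 3 :* Nil

-- zipWith over Env needs a *higher-rank* argument: the combining
-- function must work uniformly at every element type in the list.
ezipWith :: (forall a. f a -> g a -> h a)
         -> Env as f -> Env as g -> Env as h
ezipWith _ Nil       Nil       = Nil
ezipWith f (x :* xs) (y :* ys) = f x y :* ezipWith f xs ys

-- The length is fixed by the type-level list, but we can still count it:
elen :: Env as f -> Int
elen Nil       = 0
elen (_ :* xs) = 1 + elen xs
```

Without RankNTypes, a combining function of type f a -> g a -> h a would be pinned to a single a, and the recursive call at the tail's element types would not type check.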
After a short break, we explored operations on Env in further detail, comparing against the following “weakly typed Haskell” code¹:

```haskell
task :: [Question] -> [Answer] -> (Text -> Bool) -> Maybe String
task qs as p = do
  i <- findIndex p qs
  let a = as !! i
  return (show a)
```
This code is clearly very dangerous: !! can fail at runtime if we have the wrong index. How do we move this over to Env? We’d like to keep the shape of the program the same, so how would we write a type-safe !! function? Here we began to understand how functions that return Int and Bool throw information away, and that we need to somehow preserve information when we work with richer types. Rather than pointing into a list with an Int, we built a Ptr object that gives us a type-safe way to point into Env, and then we compared this against the standard implementation of Peano numbers to build more intuition.
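A minimal sketch of such a Ptr, under the assumption (mine, based on the description above) that it mirrors Peano numerals at the type level. A value of type Ptr as a is evidence that a occurs in the list as, so projection can never fail at runtime, unlike !!:

```haskell
{-# LANGUAGE DataKinds, GADTs, KindSignatures, PolyKinds, TypeOperators #-}

data Env :: [k] -> (k -> *) -> * where
  Nil  :: Env '[] f
  (:*) :: f a -> Env as f -> Env (a ': as) f
infixr 5 :*

-- A type-safe pointer into an Env: Peano-style, but each "successor"
-- also records which element type it points at.
data Ptr :: [k] -> k -> * where
  PZero :: Ptr (a ': as) a
  PSucc :: Ptr as b -> Ptr (a ': as) b

-- Total projection: no Maybe, no runtime failure possible.
prj :: Ptr as a -> Env as f -> f a
prj PZero     (x :* _)  = x
prj (PSucc p) (_ :* xs) = prj p xs

newtype I a = I a

example :: Env '[Bool, Int] I
example = I True :* I 42 :* Nil
```

Compare prj's type with (!!) :: [a] -> Int -> a: the Int told us nothing about whether the index was in range, whereas a Ptr cannot even be constructed for an out-of-range position.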
All of this is great, but it’s not very practical - we often have data that lives outside our lovely type-safe sandbox. For example, we’d probably want to store the questions in a database, and receive answers from website form submissions. Here we learnt how we can move from a weakly typed setting to stronger types through decision procedures, and how our “check” functions actually witness more type information in the process. We saw a need for existential types and how these can give us one way of encoding type information that we don’t know statically.
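One common way this plays out (my sketch, not the course's exact code) is packing a runtime list into a length-indexed vector whose length is hidden behind an existential:

```haskell
{-# LANGUAGE DataKinds, GADTs, KindSignatures, RankNTypes #-}

data Nat = Z | S Nat

data Vec (n :: Nat) a where
  VNil  :: Vec 'Z a
  VCons :: a -> Vec n a -> Vec ('S n) a

vtoList :: Vec n a -> [a]
vtoList VNil         = []
vtoList (VCons x xs) = x : vtoList xs

-- The length of runtime data isn't known statically, so we hide
-- the index n behind an existential:
data SomeVec a where
  SomeVec :: Vec n a -> SomeVec a

fromList :: [a] -> SomeVec a
fromList []     = SomeVec VNil
fromList (x:xs) = case fromList xs of
  SomeVec v -> SomeVec (VCons x v)

-- CPS-style variant: the continuation must treat the length abstractly,
-- which is exactly what "we don't know n statically" means.
withVec :: [a] -> (forall n. Vec n a -> r) -> r
withVec xs k = case fromList xs of
  SomeVec v -> k v
```

Decision procedures then fit on top: a "check" comparing two such hidden lengths can return a witness that they are equal, letting the strongly typed zipWith from earlier be applied to data that arrived from a database or a web form.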
With a movement towards proofs, we saw how we can use :~: and the new Data.Type.Equality module to introduce new information into the type system - specifically constructing proofs on natural numbers to implement a type safe reverse :: Vec n a -> Vec n a function. This is a technique I was somewhat aware of and had briefly seen in Agda, but had certainly never seen done in Haskell. I must say, I’m quite impressed with how natural it is with the new Data.Type.Equality module!
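To illustrate the flavour of these proofs, here is a small lemma in the same style, proved with singleton naturals and :~: from Data.Type.Equality. The specific lemma (n + 0 = n, which the type-safe reverse typically needs) and the names are my example, not necessarily the course's code:

```haskell
{-# LANGUAGE DataKinds, GADTs, KindSignatures, TypeFamilies, TypeOperators #-}

import Data.Type.Equality

data Nat = Z | S Nat

type family Plus (n :: Nat) (m :: Nat) :: Nat where
  Plus 'Z     m = m
  Plus ('S n) m = 'S (Plus n m)

-- Singleton naturals let us do induction at the value level:
data SNat (n :: Nat) where
  SZ :: SNat 'Z
  SS :: SNat n -> SNat ('S n)

-- Proof by induction that n + 0 = n. Matching on Refl in the
-- recursive case brings the equation Plus n 'Z ~ n into scope,
-- which lets GHC accept the outer Refl.
plusZero :: SNat n -> Plus n 'Z :~: n
plusZero SZ     = Refl
plusZero (SS n) = case plusZero n of Refl -> Refl
```

With lemmas like this in hand, gcastWith (or a plain case on Refl) converts between Vec (Plus n 'Z) a and Vec n a, which is exactly the step a proof-carrying reverse needs.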
Finally, we wrapped the day up with a look at type families to perform type level addition of natural numbers, and saw how associated types can be used with type classes.
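A sketch of both ideas, assuming the usual presentation (Add, vappend, and the Container/Elem class are my illustrative names): type-level addition lets GHC verify that appending vectors adds their lengths, and an associated type ties a type function to a class:

```haskell
{-# LANGUAGE DataKinds, GADTs, KindSignatures, TypeFamilies #-}

data Nat = Z | S Nat

-- Type-level addition as a closed type family:
type family Add (n :: Nat) (m :: Nat) :: Nat where
  Add 'Z     m = m
  Add ('S n) m = 'S (Add n m)

data Vec (n :: Nat) a where
  VNil  :: Vec 'Z a
  VCons :: a -> Vec n a -> Vec ('S n) a

-- Appending vectors adds their lengths, and GHC checks it:
vappend :: Vec n a -> Vec m a -> Vec (Add n m) a
vappend VNil         ys = ys
vappend (VCons x xs) ys = VCons x (vappend xs ys)

vtoList :: Vec n a -> [a]
vtoList VNil         = []
vtoList (VCons x xs) = x : vtoList xs

-- An associated type: each container type declares its element type.
class Container c where
  type Elem c :: *
  empty  :: c
  insert :: Elem c -> c -> c

instance Container [a] where
  type Elem [a] = a
  empty  = []
  insert = (:)
```

The definition of vappend mirrors the equations of Add clause for clause; that correspondence between value-level recursion and type-family reduction is what makes the signature go through.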
There’s a lot of material I’ve left out, such as singleton types and various caveats on the extensions we were using; it’s remarkable just how much was squeezed into a day’s studying. As I said before, Andres has a very systematic and reasoned approach to teaching, which lets the student see the small incremental steps while also being able to check progress against a bigger picture.
Well Typed offer a wide range of courses - not just courses on the plethora of extensions to GHC. If you’re studying Haskell and want to truly solidify your knowledge, I can definitely recommend Well Typed’s work. Thanks again, Andres!
Of course, we’d normally write this with zipWith, so it’s not particularly idiomatic Haskell. However, code like this does crop up, and this was an easy example for educational purposes.↩
I've been trying Freenet on-and-off (well, mostly off) for over a decade now. I like the technical foundations it's built upon, but I always felt the implementation and presentation are lacking.
One day I decided to have a look at its code; maybe I could help out. That's what I thought, anyway... I don't want to offend anyone, so let's just say I did not like what I saw.
Just to make a point, I decided to re-implement (a subset of) what Freenet is in Haskell. I'm not good at names, so the result is called A Distributed Store. I am pleased to say it works better than I expected (please read the README on GitHub for some details). But I feel like I've lost track right now.
Until recently it always was obvious to me what to do next, so I just did it. Now I'm out of low-hanging fruits and could use some feedback. There's still lots of stuff to do:
- simply connecting to every node you possibly can is not a problem at the current network size of ~20, but it won't be good if the network ever grows
- I'd love to have a bundled implementation of FMS (Freenet Message System, a spam-free, pseudonymous forum system built on top of Freenet)
- Proper support for up/downloading large files might be attractive for some audiences as well
- Someone knowledgeable in crypto might do a review, preferably before people actually start to bet their asses on this. Then again, I've never heard of anyone being busted for what they were doing on Freenet. Maybe because nobody is doing anything on Freenet. Avoid success at all cost, you know? :-)
- Someone knowledgeable in Haskell might point out the antipatterns I'm applying.
So, may I beg for some feedback dear Redditors? Or maybe someone hears his/her inner Cypherpunk and wants to jump on hacking on this? Yeah, that would be great...
(this is my first reddit post, so please pardon whatever I'm doing wrong)
submitted by waldheinz
[link] [34 comments]
I'd like to package the Haskell Platform and GHC (for the GHC API) together in one bundle in order to distribute it with a Mac app. (Specifically, I am working on a Mac app for IHaskell, so beginners can download it to immediately get started playing with Haskell.)
Does anyone have any experience with this? What's the easiest way to do this? My list of dependencies is fairly long:
- Haskell Platform (well, all packages in it)
- A few other packages installed via cabal
- A native library (libzmq); cabal packages depend on it
- Functioning Python > 2.6ish
- GHC API
My current best idea is to package this all in a VirtualBox VM. I need to run a server that my Mac app client can use, so I would mount shared folders in the VM (so the VM can read/write to disk) and expose some ports from the VM. If this is the best solution, what Linux distro would you suggest using in the VM? I'd want something very lightweight.
I've also considered using Docker somehow, as IHaskell is already packaged with a Dockerfile. However, I'm not sure how I'd package Docker so that it's all doable via a single Mac app install.
Thanks! I know this isn't directly related to Haskell, but I'm hoping someone here has experience packaging Haskell applications.
submitted by NiftyIon
[link] [3 comments]
It's been over two years since the last major release of ekg. Ever since the first release I knew that there were a number of features I wanted to have in ekg that I didn't implement back then. This release adds most of them.

Integration with other monitoring systems
When I first wrote ekg I knew it only solved half of the program monitoring problem. Good monitoring requires two things:
- a way to track what your program is doing, and
- a way to gather and persist that data in a central location.
The latter is necessary because
- you don't want to lose your data if your program crashes (i.e. ekg only stores metrics in memory),
- you want to get an aggregate picture of your whole system over time, and
- you want to define alarms that go off if some metric passes some threshold.
Ekg has always done (1): it provides a way to define metrics and inspect their values, e.g. using your web browser or curl.
Ekg could help you to do (2), as you could use the JSON API to sample metrics and then push them to an existing monitoring solution, such as Graphite or Ganglia. However, it was never really convenient.
Today, (2) gets much easier.

Statsd integration
A network daemon that ... listens for statistics, like counters and timers, sent over UDP and sends aggregates to one or more pluggable backend services (e.g., Graphite).
Statsd is quite popular and has both client and server implementations in multiple languages. It supports quite a few backends, such as Graphite, Ganglia, and a number of hosted monitoring services. It's also quite easy to install and configure (although many of the backends it uses are not).
Ekg can now be integrated with statsd, using the ekg-statsd package. With a few lines you can have your metrics sent to a statsd:

```haskell
main = do
    store <- newStore
    -- Register some metrics with the metric store:
    registerGcMetrics store
    -- Periodically flush metrics to statsd:
    forkStatsd defaultStatsdOptions store
```
ekg-statsd can be used either together with ekg, if you also want the web interface, or standalone, if the dependencies pulled in by ekg are too heavyweight for your application or if you don't care about the web interface. ekg has been extended so it can share the Server's metric store with other parts of the application:

```haskell
main = do
    handle <- forkServer "localhost" 8000
    forkStatsd defaultStatsdOptions (serverMetricStore handle)
```
Once you set up statsd and e.g. Graphite, the above lines are enough to make your metrics show up in Graphite.

Integration with your monitoring systems
The ekg APIs have been re-organized and the package split such that it's much easier to write your own package to integrate with the monitoring system of your choice. The core API for tracking metrics has been split out from the ekg package into a new ekg-core package. Using this package, the ekg-statsd implementation could be written in a mere 121 lines.
While integrating with other systems was technically possible in the past, using the ekg JSON API, it was both inconvenient and wasted CPU cycles generating and parsing JSON. Now you can get an in-memory representation of the metrics at a given point in time using the System.Metrics.sampleAll function:

```haskell
-- | Sample all metrics. Sampling is /not/ atomic in the sense that
-- some metrics might have been mutated before they're sampled but
-- after some other metrics have already been sampled.
sampleAll :: Store -> IO Sample

-- | A sample of some metrics.
type Sample = HashMap Text Value

-- | The value of a sampled metric.
data Value = Counter !Int64
           | Gauge !Int64
           | Label !Text
           | Distribution !Stats
```
All that ekg-statsd does is call sampleAll periodically and convert the returned Values to UDP packets that it sends to statsd.

Namespaced metrics
In a large system each component may want to contribute their own metrics to the set of metrics exposed by the program. For example, the Snap web server might want to track the number of requests served, the latency for each request, the number of requests that caused an internal server error, etc. To allow several components to register their own metrics without name clashes, ekg now supports namespaces.
Namespaces also make it easier to navigate metrics in UIs. For example, Graphite gives you a tree-like navigation of metrics based on their namespaces.
In ekg, dots in metric names are now interpreted as namespace separators. For example, the default GC metric names now all start with "rts.gc.". Snap could, for example, prefix all its metric names with "snap.". While this doesn't make collisions impossible, it should make them much less likely.
If your library wants to provide a set of metrics for the application, it should provide a function that looks like this:

```haskell
registerFooMetrics :: Store -> IO ()
```
The function should call the various register functions in System.Metrics. It should also document which metrics it registers. See System.Metrics.registerGcMetrics for an example.

A new metric type for tracking distributions
It's often desirable to track the distribution of some event. For example, you might want to track the distribution of response times for your webapp, so you can get notified if things are slow all of a sudden and so you can try to optimize the latency.
The new Distribution metric lets you do that.
Every time an event occurs, simply call the add function:

```haskell
add :: Distribution -> Double -> IO ()
```
The add function takes a value which could represent e.g. the number of milliseconds it took to serve a request.
When the distribution metric is later sampled, you're given a value that summarizes the distribution by providing you with the mean, variance, min/max, and so on.
The implementation uses an online algorithm to track these statistics so it uses O(1) memory. The algorithm is also numerically stable so the statistics should be accurate even for long-running programs.
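The post doesn't spell out the algorithm, but a standard choice with exactly these properties (O(1) memory, numerically stable) is Welford's online algorithm. A minimal sketch, my own illustration rather than ekg's actual code:

```haskell
-- A running summary of all values seen so far, in constant space.
data Stats = Stats
  { count :: !Int
  , mean  :: !Double
  , m2    :: !Double   -- sum of squared deviations from the mean
  , smin  :: !Double
  , smax  :: !Double
  }

emptyStats :: Stats
emptyStats = Stats 0 0 0 (1/0) (-1/0)

-- Welford's update: fold each event into the summary. Updating the
-- mean incrementally (rather than keeping a raw sum of squares)
-- is what keeps this numerically stable for long-running programs.
addSample :: Stats -> Double -> Stats
addSample (Stats n mu m2' lo hi) x =
  let n'    = n + 1
      delta = x - mu
      mu'   = mu + delta / fromIntegral n'
      m2''  = m2' + delta * (x - mu')
  in Stats n' mu' m2'' (min lo x) (max hi x)

-- Population variance of the samples seen so far.
variance :: Stats -> Double
variance s
  | count s < 2 = 0
  | otherwise   = m2 s / fromIntegral (count s)
```

For example, feeding in per-request latencies with foldl addSample emptyStats yields a summary whose mean and variance can be read off at any point, without ever storing the individual events.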
While it didn't make this release, in the future you can look forward to being able to track quantiles and keep histograms of the events. This will let you track e.g. the 95th-percentile response time of your webapp.

Counters and gauges are always 64 bits
To keep ekg more efficient even on 32-bit platforms, counters and gauges were stored as Int values. However, if a counter is increased 10,000 times per second, which isn't unusual for a busy server, such a counter would wrap around in less than 2.5 days. Therefore all counters and gauges are now stored as 64-bit values. While this is technically a breaking change, it shouldn't affect the majority of users.

Improved performance and multi-core scaling
I received a report of contention in ekg when multiple cores were used. This prompted me to improve the scaling of all metric types. The difference is quite dramatic on my heavy-contention benchmark:

            +RTS -N1    +RTS -N6
    Before    1.998s     82.565s
    After     0.117s      0.247s
The benchmark updates a single counter concurrently in 100 threads, performing 100,000 increments per thread. It was run on a 6 core machine. The cause of the contention was atomicModifyIORef, which has been replaced by an atomic-increment instruction. There are some details on the GHC Trac.
In short, you shouldn't see contention issues anymore. If you do, I still have some optimizations that I didn't apply because the implementation should already be fast enough.
Brian McFadden nails it in The Nib. If you want to do something about it, look to Fight for the Future.