I've been playing around with Haskell in my spare time for a while now and have a reasonable grasp on how to implement algorithms and data structures idiomatically, but beyond this there seems to be a lack of good documentation and tutorials on how to actually architect larger systems in Haskell.
I've had great success with Scala and F# on some pretty complex systems, and I'd like to take the next step and start using Haskell in production as well - but this requires I cure my addiction to mutability and OO, and I'm not quite sure exactly how I should architect my applications. I can understand at a basic level how to use monads to help perform computations and maintain state at a localised level, but I'm not sure how to take this to the next step and actually architect a larger solution - i.e. something that maintains state across multiple separate modules with their own encapsulated state.
Are there any good guides that I've overlooked that can provide examples, tutorials, or questions that are focussed more on building real world applications (rather than mathematical algorithms or data structures)?
Edit: Thanks guys - some great advice!

submitted by zoomzoom83
This assignment is driving me crazy!!! How do I solve it? http://cs.anu.edu.au/student/comp1100/1-Labs-Assignments.html

I'm a new student to programming. I have only been programming for seven weeks and this is due tomorrow. I'm stuck; I feel like an idiot. My only option is to hand in subpar work.
If you guys have any resources on how to do this, that would be great.

submitted by CanberraStudent
Today, let's look at the continuation-function.
Now, here is an interesting bird, if there ever was one.
And there is, and it's called the continuation-function. So there!
First of all, let's define what a function does. A function is simply an arrow that points from one thing to another.
Let's tighten that up, a bit.
A function from A -> B maps a value in the A-category to the B-category, and, in some cases, the B-category doesn't even have to be a different category. For example, the successor function maps an integer value to an integer value:
succ 5 = 6
succ 41 = 42
etc.
But a function can map from one type to another type:
number_of_letters "foo" = 3
number_of_letters "bar" = 3
number_of_letters "food" = 4
etc.
So, let's talk about the 'types' of the functions as things unto themselves.
So, succ is a function that maps integers to integers, e.g.: 5 -> 6 and 41 -> 42, etc, and we show this mapping of types thus:
succ :: Integer -> Integer
And number_of_letters maps strings to integers, and so we represent that as:
number_of_letters :: String -> Integer
Now there are generalizations of functions. We can say, for example, that we have a function, f, that takes an object of type a and maps that object to a different kind of object of type b. We show this representation thusly:
f :: a -> b
We don't specify the types when we declare the function here; the definition of the function may flesh that out, or, when we actually come down to applying the function to an actual object, the object will specify the input type and the actual result of the function will specify the output.
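As a sketch of that generalization, here is one polymorphic function used at two concrete types; each call site fixes a and b. (The name apply is mine, chosen for illustration; it reuses the document's succ and letter-counting examples.)

```haskell
-- A fully generic f :: (a -> b) -> a -> b; the type variables
-- are pinned down only at the point of application.
apply :: (a -> b) -> a -> b
apply f x = f x

main :: IO ()
main = do
  print (apply succ (5 :: Integer))  -- a = Integer, b = Integer
  print (apply length "food")        -- a = String,  b = Int
```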
This generalization or abstraction is a very powerful tool in the right hands.
And even in the wrong hands.
For it allows us to express truths of objects regardless of what kind of object they are: you can express universal truths in mathematics. Or, you can even, if you wish, partially constrain the types of the objects, and therefore express more specific, or more existential, truths, and there are various ways of going about doing that.
(This type-constraining may or may not be the subject of some future entry. Type-constraints are not the point of this entry.)
(So I do not digress. In this case.)
(Or I do digress, in that I am stating I'm not digressing.)
So, some functions operate with specific kinds of objects (and are not of general interest here) and some functions operate with universals.
Now, there are special kinds of functions that, simply by their types, their declarations, express a truth. The truth-expressing function we'll examine here is the continuation-function.
The continuation function's type is as follows:
c :: (p -> r) -> r
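A minimal Haskell sketch of a value wrapped as a continuation-function of that type (the name suspend is mine, not a standard function): given any consumer k :: p -> r, it hands k the stored p and returns the r.

```haskell
-- Turn a plain value into a continuation-function of type
-- (p -> r) -> r: "I'm holding a p; give me your p -> r and
-- I'll give you back its r."
suspend :: p -> ((p -> r) -> r)
suspend x = \k -> k x

main :: IO ()
main = do
  print (suspend (5 :: Integer) succ)  -- feed succ in
  print (suspend "foo" length)         -- feed length in
```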
What is this type declaration? And what does it declare, actually?
The type declaration says this:
Given that I'm operating with an object that, itself, is a function from p -> r, I give an r result.
This sounds something like a syllogism (you can look that up if you're a bit rusty on syllogisms), and that's what it is, but it's also rather (or very) reductionist.
In other words it says: If I'm given something that proves r from a given p, then I can prove (or provide you) r.
No, really: wait. You're saying I give you 'p proves r' and you'll give me r? What's the point?
Actually, from proof theory's standpoint, there isn't one. If you already have p -> r then you also have r, too, don't you?
The kicker is this: in certain domains.
Because, what if I give you a function p -> r but p is false? Well then p -> r gives you nothing, nada, zip, zilch! So you can just take your r and ...
Well, you may not even have an r in that case, anyway, so the rest of my statement can remain unsaid.
So in the domain where there is no 'false' or no negation, the (p -> r) -> r works, and it is intuitively obvious: I've got p -> r so I've also got r, given that I've got p.
Math is hard.
The obvious has to be stated, or else falsehood and inconsistency rears its ugly, nondeterministic head, and we have to wake up and smell the coffee.
But, okay, we're in the continuum of truths, where I do have a p, and p is true, and I have a p -> r, so I've got me an r, so what's the point of continuations?
Well, now that I've got p -> r, I can get my r out any time I want to: now, or later, 'later' meaning that the context has entirely changed and the world has moved on. But if I want to 'reset the clock' or 'turn back time,' I now can! All I do is feed my p into my p -> r function and I get my r out in the state when I originally got it.
Importantly, not now messed up by everything else that's come along after.
And therefore the name of the continuation-function, because at time
(p -> r) -> r
I create a continuation from that point, and then I go along and do something else, but then I have my p and I continue from the continuation point in the context of the state of the prior continuation.
The classic, humorous example:
I buy chocolate mousse cake and a latte, create the continuation, eat the cake, drink the coffee, gain weight after eating and drinking.
I take my little continuation function, my weight goes right back down (because I'm now in the old context of 'before I ate the cake'), and there, lo, and behold! is the cake, all ready to be eaten. So, I sit down, and enjoy my cake. Again. As often as I'd like.
With continuation-functions, you can have your cake, and eat it, too.
But it gets better than even that!
How? you demand.
Thank you for asking.
Because there's this little thing called the 'continuation-passing style.' When you're working with continuations, you never really ... activate continuations. You just pass them around, as if they're arguments, composing them together, and eventually you pass in your activation function after you're all done and your whole system gets executed or proved all in one go.
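A tiny sketch of that style in Haskell (the function names are mine): each function takes its continuation as an extra argument, and nothing actually runs until the final activation function (here, print) is passed in.

```haskell
-- Continuation-passing style: each function hands its result
-- to the continuation k instead of returning it directly.
succCPS :: Integer -> (Integer -> r) -> r
succCPS n k = k (n + 1)

doubleCPS :: Integer -> (Integer -> r) -> r
doubleCPS n k = k (n * 2)

main :: IO ()
main =
  -- Compose by nesting continuations; the whole pipeline only
  -- executes when print goes in at the end: (5 + 1) * 2.
  succCPS 5 (\s -> doubleCPS s print)
```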
But here comes the better.
Because, usually, continuation-functions are of some generic type p -> r, you can exercise your system with one set of types, then, when you find that wasn't exactly what you were looking for, you just simply ...
... do nothing.
You don't change one thing in your proof system, you just feed in the new, or desired, type at the beginning of the proof, and all the types adjust themselves along the way.
Here's what a coding effort is, in the traditional sense. You code a solution, or so you thought, but somewhere in the middle, you wanted to do something different.
If, instead, you were using the continuation-passing style, instead of changing that middle bit, you just simply pass in the continuation you actually wanted instead. Zero code change.
Or, you code a solution, or so you thought, but you were using the wrong types, and it could be something as simple as changing from British Imperial to Metric (and therefore you just lose a satellite at launch, is all), or it could be something as drastic as using plowshares instead of swords, I don't know.
But the thing is, in the traditional coding paradigm, you're stuck, you have to change all the functions, what type of arguments they take and what value-types they return.
Seventy-two hours. A team in the Nation's Capital spent seventy-two hours recoding an integer to a string because the two-billionth row was reached and the entire system crashed: the computer couldn't index higher than that. And some frikken genius noticed the indices weren't being used as numbers, they were just row identifiers, so they could be arbitrary-length strings instead of integers with an artificial maximum.
Seventy-two frikken hours of everybody's life wasted on that snafu. A whole weekend.
I'm a consultant. I got paid for my trouble. Some other folks didn't.
That system was coded in the traditional sense.
If the continuation-passing style had been used, how much recoding would we have had to do?
Because p, the input type, which before was (arbitrarily) designated as an integer, could now simply be a string. There were no counting functions used against the indices, just simply identifying functions: "Are you row x?" "Why, yes, I am." "Well, then, we're done."
That would have been our seventy-two hours: a simple "well, then, we're done."
Okay, smarty-pants, if continuations are so hot, then why isn't everyone using them?
Actually, everybody coding in the language Scheme is using continuations, and transparently at that. A joke from the BrainF community is that they have continuations built into their language: it's called the enter key.
So why isn't everybody else?
I don't know.
Actually, I do.
Continuations require you to work with functions-as-arguments as a coding style, so instead of just coding the for-loop or the if-statement, you actually have to think, and to think strategically, about what you're doing with the system. Meaning you have to know what you're doing, not only right here and right now, but with the entire system or subsystem. You have to know the details, but you also have to know the overall plan.
And having a good grasp of everything is just too much for some people. For most people who just want to code that for-loop.
But what are you actually doing with that for-loop? Why do you have that if-statement? Why are you adding those two numbers? What story is your code telling the reader, be it the customer who will use the system or you, again, months later, when you come back to that part of the system and try to decipher what the heck you were doing, and why ...
Well, then, looking at code that I've looked at, that's just way too hard for most. That for-loop? It's tight! What need is it supposed to be addressing? I have no idea. And all the unit tests on the code (which for most coders, is 'none') are so helpful in clearing up these open questions.
But, now, with continuations, does teasing out the meaning of a particular piece of code become harder or easier?
Well, it actually becomes much harder, in most cases. Why? Because this particular piece of code depends on a continuation passed in (or injected) from elsewhere, and depending (a 'dependency') on what's injected, the entire behavior changes.
This is the dependency-injection style of coding, or, instead of object-oriented, it becomes aspect-oriented, and most traditional, imperative-style coders simply loathe the aspect-oriented approach for its dearth of for-loops and if-statements.
Most software projects fail. Spectacularly, with huge cost overruns and a complete failure to deliver on promised feature sets.
Spectacularly as in 'billions of dollars' spectacularly. I'm not speaking hypothetically.
Two projects that used generic typing and the continuation-function passing style made over forty million dollars, and the problem with these projects is that they overdelivered on promised feature sets and the users got accustomed to working with software that actually worked. I talked with both product owners of those projects. Nice guys. Really relaxed outlook on life.
Eh. But I digress. As always.
The continuation, it's a function c :: (p -> r) -> r that gives you the power 'to rewind time,' as it were. And what does that give you?
You have the freedom to experiment, because if a particular approach doesn't work, you hit the continuation-'button' and try again with something else, ...
... without having to retool.
Or, if you were working with one set of types, and the situation or the story changes and you now have to work with an entirely new set of things doing a similar set of processes in the work-flow, you hit the continuation-'button' with the new types, ... without having to retool ... and try it with the new types. Got your solution? Great! Need to change something in the middle? Okay, just pass in a new continuation and try again.
'Programming,' ... 'coding' is considered brittle and expensive, and for the most part it is, but there's one thing that just works and works sweet.
The World-wide Web.
There's this very flexible style of coding to web-programming called functional-reactive programming. Web-frameworks have been prototyped, stood up, changed on-the-fly, what have you, using this functional-reactive style.
Guess what's under the hood of this approach.
The little, plain-old continuation-function.
See you tomorrow.
Unless I reset my continuation-function from before I wrote this article.
But then I'd have to rewrite this article. Oops! Let's press forward, then!
For an infix operator you can form a section, i.e., a use of the operator with one operand left out. For instance, (* 2) leaves out the first operand, and Haskell defines this to be the same as (\ x -> x * 2). Regarding :: as an operator, we should be able to write (:: type), and it should have the obvious meaning (\ x -> x :: type).
I suggest, and I plan to send this to the haskell-prime mailing list, that Haskell should adopt this small extension.
Why? First, the extension is very lightweight and adds almost no extra intellectual weight for anyone learning Haskell. I'd argue it makes the language simpler because it allows :: to be treated more like an infix operator. But without use cases this would probably not be enough of an argument.

Example 1

We want to make a function, canonDouble, that takes a string representing a Double and changes it to the standard Haskell string representing this Double. E.g. canonDouble "0.1e1" == "1.0". A first attempt might look like this:
canonDouble :: String -> String
canonDouble = show . read -- WRONG!
This is, of course, wrong since the compiler cannot guess that the type between read and show should be a Double. We can convey this type information in different ways, e.g.:
canonDouble :: String -> String
canonDouble = show . asDouble . read
  where asDouble :: Double -> Double
        asDouble x = x
This is somewhat clumsy. Using my proposed extension we can instead write:
canonDouble :: String -> String
canonDouble = show . (:: Double) . read
This has the obvious meaning, and succinctly describes what we want.

Example 2

In ghc 7.8 there is a new, better implementation of Data.Typeable. It used to be (before ghc 7.8) that to get a TypeRep for some type you would have to have a value of that type. E.g., typeOf True gives the TypeRep for the Bool type. If we don't have a value handy of the type, then we will have to make one, e.g., by using undefined. So we could write typeOf (undefined :: Bool).
This way of using undefined is rather ugly, and relies on non-strictness to work. Ghc 7.8 adds a new, cleaner way of doing it:
typeRep :: proxy a -> TypeRep
The typeRep function does not need an actual value, but just a proxy for the value. A common proxy is the Proxy type from Data.Proxy:
data Proxy a = Proxy
Using this type we can now get the TypeRep of a Bool by writing typeRep (Proxy :: Proxy Bool). Note that in the type signature of typeRep the proxy is a type variable. This means we can use other values as proxies, e.g., typeRep ([] :: [Bool]).
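As a runnable sketch in today's Haskell (no extension needed), here are both proxy flavors from the text, using Data.Typeable and Data.Proxy:

```haskell
import Data.Typeable (typeRep)
import Data.Proxy (Proxy (..))

main :: IO ()
main = do
  -- The usual proxy value:
  print (typeRep (Proxy :: Proxy Bool))
  -- Any value whose type unifies with 'proxy a' works, e.g. an
  -- empty list, where proxy = [] and a = Bool:
  print (typeRep ([] :: [Bool]))
```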
We can in fact use anything as a proxy that has a structure that unifies with proxy a. For instance, if we want a proxy for the type T we could use T -> T, which is the same as (->) T T. The (->) T part makes up the proxy and the last T makes up the a.
The extension I propose provides an easy way to write a function of type T -> T, just write (:: T). So to get a TypeRep for Bool we can simply write typeRep (:: Bool). Doesn't that look (deceptively) simple?
In fact, my driving force for coming up with this language extension was to get an easy and natural way to write type proxies, and I think using (:: T) for a type proxy is as easy and natural as it gets (even if the reason it works is rather peculiar).
Implementation

I've implemented the extension in one Haskell compiler; it was very easy to add and it works as expected. Since it was so easy, I'll implement it for ghc as well, and the ghc maintainers can decide if they want to merge it. I suggest this new feature be available under the language extension name SignatureSections.
Extensions

Does it make sense to do a left section of ::? I.e., does (expr ::) make sense? In current Haskell that does not make sense, since it would be an expression that lacks an argument that is a type. Haskell doesn't currently allow explicit type arguments, but if it ever does, this could be considered.
With the definition that (:: T) is the same as (\ x -> x :: T), any use of quantified or qualified types as T will give a type error. E.g., (:: [a]), which is (\ x -> x :: [a]), is a type error. You could imagine a different desugaring of (:: T), namely (id :: T -> T). Now (:: [a]) desugars to (id :: [a] -> [a]), which is type correct. In general, we have to keep quantifiers and qualifiers at the top, i.e., (:: forall a . a) turns into (id :: forall a . a -> a).
Personally, I'm not convinced this more complex desugaring is worth the extra effort.
On a whim (partly inspired by something I read in /r/haskell a short while ago) I tried to compile the following in GHC...

module Main (main) where

data Test = Test x
  where x = Maybe x

main :: IO ()
main = putStrLn "Hello"
Unsurprisingly, I got a syntax error referring to that where clause.
But it seems intuitive to me that while that particular recursive type doesn't make much sense, a where clause for types could be useful. Maybe only for abbreviating long type signatures, but still useful.
And perhaps even that recursive type has some theoretical validity in a depth-of-Just-constructors-expressing-natural-numbers way.
Am I making sense, or is this silly?
Has supporting where in types been proposed before?

submitted by ninereeds314
My last blog post detailed a number of changes I was going to be making for package consolidation. A number of those have gone through already; this blog post is just a quick summary of the changes.

shakespeare
shakespeare is now a single package. hamlet, shakespeare-css, shakespeare-js, shakespeare-i18n, shakespeare-text, and servius have all been merged in and marked as deprecated. I've also uploaded new, empty versions of those deprecated packages. This means that, in order to support both the old and new versions of shakespeare, you just need to ensure that you have both the shakespeare and deprecated packages listed in your cabal file. In other words, if previously you depended on hamlet, now you should depend on hamlet and shakespeare. When you're ready to drop backwards compatibility, simply put a lower bound of >= 2.0 on shakespeare and remove the deprecated packages.
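As a sketch, the two stages of a cabal file might look like this (version bounds are illustrative, not prescriptive):

```cabal
-- Transitional: depend on both the real package and the
-- (now empty) deprecated package.
build-depends: shakespeare
             , hamlet

-- After dropping backwards compatibility: lower-bound
-- shakespeare and remove the deprecated package.
build-depends: shakespeare >= 2.0
```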
(Note: this method for dealing with deprecated packages is identical for all future deprecations; I won't detail the steps in the rest of this blog post.)

conduit
conduit-extra now subsumes attoparsec-conduit, blaze-builder-conduit, network-conduit, and zlib-conduit. It also includes three modules that used to be in conduit itself: .Text, .Binary, and .Lazy. To deal with this change, simply adding conduit-extra to your dependencies should be sufficient.
The other changes have to do with resourcet. In particular:
- Data.Conduit no longer reexports identifiers from resourcet and monad-control. These should be imported directly from their sources.
- Instead of defining its own MonadThrow typeclass, resourcet now uses the MonadThrow typeclass from the exceptions package. For backwards compatibility, Control.Monad.Trans.Resource provides monadThrow as an alias for the new throwM function.
- The Resource monad had a confusing name, in that it wasn't directly related to the ResourceT transformer. I've renamed it to Acquire, and put it in its own module (Data.Acquire).
- I'm actually very happy with Acquire, and think it's a great alternative to hard-coding either the bracket pattern or resourcet into libraries. I'm hoping to add better support to WAI for Acquire, and blog a bit more about the usage of Acquire.
- MonadUnsafeIO has been removed entirely. All of its functionality can be replaced with MonadPrim and MonadBase (for example, see the changes to blaze-builder-conduit).
- MonadActive, which is only needed for Data.Conduit.Lazy, has been moved to that module.
http-client-multipart has been merged into http-client. In addition, instead of using the failure package, http-client now uses the exceptions package.
http-client-conduit has been merged into http-conduit. I've also greatly expanded the Network.HTTP.Client.Conduit module to contain what I consider its next-gen API. In particular:
- No usage of ResumableSource.
- Instead of explicit ResourceT usage, it uses the Acquire monad and bracket pattern (acquireResponse, withResponse).
- Instead of explicitly passing around a Manager, it uses MonadReader and the HasHttpManager typeclass.
I'm curious how people like the new API. I have no plans on removing or changing the current Network.HTTP.Conduit module; this is merely an alternative approach.

Updated yesod-platform
I've also released a new version of yesod-platform that uses the new versions of the packages above. A number of packages on Hackage still depend on conduit 1.0, but I've sent quite a few pull requests in the past few days to get things up-to-date. Thankfully, maintaining compatibility with both 1.0 and 1.1 is pretty trivial.
Functional Geometry and the Traité de Lutherie by Harry Mairson, Brandeis University.
We describe a functional programming approach to the design of outlines of eighteenth-century string instruments. The approach is based on the research described in François Denis's book, Traité de lutherie. The programming vernacular for Denis's instructions, which we call functional geometry, is meant to reiterate the historically justified language and techniques of this musical instrument design. The programming metaphor is entirely Euclidean, involving straightedge and compass constructions, with few (if any) numbers, and no Cartesian equations or grid. As such, it is also an interesting approach to teaching programming and mathematics without numerical calculation or equational reasoning.
The advantage of this language-based, functional approach to lutherie is founded in the abstract characterization of common patterns in instrument design. These patterns include not only the abstraction of common straightedge and compass constructions, but of higher-order conceptualization of the instrument design process. We also discuss the role of arithmetic, geometric, harmonic, and subharmonic proportions, and the use of their rational approximants.
Quark Games was established in 2008 with the mission to create hardcore games for the mobile and tablet platforms. By focusing on making high quality, innovative, and engaging games, we aim to redefine mobile and tablet gaming as it exists today.
We seek to gather a group of individuals who are ambitious but humble professionals who are relentless in their pursuit of learning and sharing knowledge. We're looking for people who share our passion for games, aren’t afraid to try new and different things, and inspire and push each other to personal and professional success.
As a Server Game Developer, you’ll be responsible for implementing server related game features. You’ll be working closely with the server team to create scalable infrastructure as well as the client team for feature integration. You’ll have to break out of your toolset to push boundaries on technology to deliver the most robust back end to our users.
What you’ll do every day
- Develop and maintain features and systems necessary for the game
- Collaborate with team members to create and manage scalable architecture
- Work closely with Client developers on feature integration
- Solve real-time problems at a large scale
- Evaluate new technologies and products
What you can bring to the role
- Ability to get stuff done
- Desire to learn new technologies and design patterns
- Care about creating readable, reusable, well documented, and clean code
- Passion for designing and building systems to scale
- Excitement for building and playing games
Bonus points for
- Experience with a functional language (Erlang, Elixir, Haskell, Scala, Julia, Rust, etc.)
- Experience with a concurrent language (Erlang, Elixir, Clojure, Go, Scala, etc.)
- Being a polyglot programmer and having experience with a wide range of languages (Ruby, C#, and Objective-C)
- Experience with database integration and management for NoSQL systems (Riak, Couchbase, Redis, etc.)
- Experience with server operations, deployment, and with tools such as Chef or Puppet
- Experience with system administration
Get information on how to apply for this position.
This is just a quick follow-up to my previous post. We have now released Haddock 2.14.2, which contains a few minor changes. The reason for this release is to get a few quick patches in. No fancy overview today, just quick mentions. Here is the relevant part of the changelog:
Changes in version 2.14.2
Always drop --split-objs GHC flag for performance reasons (#292)
Print kind signatures on GADTs (#85)
Drop single leading whitespace when reasonable from @-style blocks (#201)
Fix crashes associated with exporting data family record selectors (#294)
#201 was the annoying aesthetics bug I mentioned last time and that is now fixed.
#294 was a bug we’re glad to have gotten rid of now: it was only reported recently but I imagine more and more projects would have started to hit it.
#292 should improve performance considerably in some special cases, such as when Template Haskell is being used.
#85 was just a quick resolution of a years-old ticket; I think you’ll find it useful.
I predict that this is the version that will ship with GHC 7.8.1 and I don’t think we’ll have any more 2.14.x releases.
Ideally I’d like to get well under 100 open tickets for the next release (there are currently 117 open).
Some things I will be concentrating on next are splitting up Haddock into a few packages and working on the Hoogle back-end. The Hoogle back-end is incredibly broken, which is a shame considering Hoogle is a very useful service. We want to make the maintainers' lives easier.
Splitting up Haddock into a few packages will be of great advantage to people wishing to use (parts of) Haddock as a library without adding a dependency on a specific version of GHC to their program. It should also become much easier to implement and maintain your own back-ends.
If you are interested in helping out with Haddock, we’d love to have you. Pop into #haddock on Freenode, make some noise and wait for someone to respond. Alternatively, contact me through other means.
PS: While I realise that some of my posts make it on reddit, I myself do not use it. You’re welcome to discuss these but if you leave questions or messages to me on reddit, I will almost certainly not see them. If you want my attention, please either use e-mail or IRC. Thanks!
Does anyone have a motivating example of why you would ever want or need object-oriented style classes in Haskell (such as OHaskell)? I haven't found the desire to use OO in Haskell.
I'm curious why the OCaml implementation of Caml is so popular (among Caml implementations) and yet the OO based Haskell efforts have died.

submitted by milksteaksonthehouse