… please don't stop! I'm getting a real good laugh. Favorite tweet so far:
This afternoon's debate: conduit vs pipes. THEY MEAN THE SAME THING, YOU ASSHATS! It's every GODDAMN day with these kids.
The answers by edwardkmett, Tekmo, and snoyberg are gold.

submitted by 8d5rbz6p
There has been some talk about what a Foldable instance is even supposed to be constrained to do, and about why not just any implementation that satisfies the type signature is acceptable. The documentation just calls a good implementation "suitable".
So why can't every foldMap just be foldMap _ _ = mempty? Or every toList be toList _ = []?
I'm proposing a universal property to supply (at least one) law to Foldable:

toList :: Foldable t => t a -> [a]
An implementation f of toList is considered a "proper implementation" if, for any other function g satisfying the above type signature, one can construct a unique k such that:

g = k . f
For example, for the "bad instance" above, k = const [].
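To make the factorization concrete, here is a small sketch specialized to lists (the function names are mine, for illustration only):

```haskell
-- The real toList for lists is just the identity.
properToList :: [a] -> [a]
properToList = id

-- The degenerate implementation that discards everything.
badToList :: [a] -> [a]
badToList _ = []

-- badToList factors through properToList via k = const []:
k :: [a] -> [a]
k = const []
```

Since badToList = k . properToList, the degenerate instance carries strictly less information, which is exactly what the proposed property is meant to capture.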
What this does is basically say that if an instance can "extract more information" than another, it is the better one. So the best instances are the ones that extract and preserve the most possible information.
This actually allows for multiple implementations of toList per type. For example, consider:

toList' []     = []
toList' (x:xs) = x : x : xs
But I think the different implementations are all related by unique isomorphisms; that is, you can turn the output of toList' into the output of the real toList, and vice versa.
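As a quick illustration of that back-and-forth conversion (the helper name undup is mine), the duplicated-head variant can be undone by dropping its first element, and toList' itself rebuilds the duplicated form:

```haskell
-- The hypothetical head-duplicating variant, written out in full:
toList' :: [a] -> [a]
toList' []     = []
toList' (x:xs) = x : x : xs

-- Convert its output back to the standard toList output for lists
-- by dropping the duplicated head:
undup :: [a] -> [a]
undup []       = []
undup (_:rest) = rest
```

So undup . toList' recovers the ordinary list, and applying toList' to that recovers the duplicated form, witnessing the two implementations carrying the same information.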
And besides, Traversable has this same problem as well, even with all of its fancy laws: Nevermind, it doesn't >_>
Anyways, this is my humble proposal for a Foldable law that might give the Foldable detractors (it has no laws! it doesn't mean anything! any implementation is ok!) at least something.

submitted by mstksg
Recently the Haskell community has been engaged in a very intense discussion around potential changes to the Prelude (aka "burning bridges" or "FTP"). Here's the most recent incarnation of the discussion for context. The changes under discussion are non-trivial, and many people are putting in a huge amount of energy to try and make Haskell the best it can be. And to be clear: I'm talking about people arguing on both sides of this discussion, and people trying to moderate it. As someone who's mostly been sitting on the sidelines in this one, I want to start by expressing a big thank you to everyone working on this.
(If anyone's wondering why I'm mostly sitting this one out, it's because I don't feel very strongly about it either way. I think there are great arguments going both ways, and over the past week I've fluctuated between being -0.2 on the proposal and being +0.2.)
When a big discussion like this happens, it's easy for people to misinterpret it as something unhealthy. I'm here to remind everyone that, in fact, the opposite is true: what we're seeing now is the sign of an incredibly healthy community, based on an amazing language, that is undertaking extraordinary things. And there are of course some warts being revealed too, but those are relatively minor, and I believe we will be able to overcome them.
So to begin my cheerleading:
The fact that we can even consider doing this is amazing. I don't think very many languages could sustain a significant rewrite of their most core library. Let's just keep things in perspective here: even the worst case scenario damage from this change involves updating some documentation and modifying a relatively small amount of code in such a way that will be backwards compatible with old versions of the library. This is a true testament not only to the power of the Haskell language, but to the thoughtfulness with which this proposal was made.
The discussion has been incredibly civil. This topic had all the makings for an internet flame war: strongly held opinions, good arguments on both sides, lots of time and effort involved, and Reddit. I am happy to say that I have not seen a single personal attack in the entire discussion. Almost every piece of discourse has been beyond reproach, and the few times where things have gotten close to crossing the line, people on both sides have politely expressed that sentiment, leading to the original content being removed.
To some extent, I think we're all a bit spoiled by how good the civility in the Haskell world is, and we should take a moment to appreciate it. That's not to say we should ever expect any less, but we should feel comfortable patting ourselves on the back a bit.
We're dynamically coming up with new, better processes. When opinions are so strongly divided, it's difficult to make any kind of progress. As a community, we're adapting quickly and learning how to overcome that. As you can see in the thread I linked above, we now have a clear path forward: a feedback form that will be processed by Simon PJ and Simon Marlow, who will make the ultimate decision. This process is clear, and we couldn't be more fortunate to have such great and well respected leaders in our community.
Nothing else has stopped. If you look at issue trackers, commit logs, and mailing list discussions, you can see that while many members of the community are actively participating in this discussion, nothing else has ground to a halt. We're a dynamic community with many things going on, so the ability to digest a major issue while still moving forward elsewhere is vital.
That said, I think there are still some areas for improvement. The biggest one lies with the core libraries committee, of which I'm a member. We need to learn how to be better at communicating with the community about these kinds of large scale changes. I'm taking responsibility for that problem, so if you don't see improvements on that front in the next few weeks, you can blame me.
More generally, I think there are some process and communications improvements that can be made at various places in the community. I know that's an incredibly vague statement, but that's all I have for the moment. I intend to follow up in the coming weeks with more concrete points and advice on how to improve things.
In sum: Haskell's an amazing language, which has attracted an amazing community. This discussion doesn't detract from that statement, but rather emphasizes it. Like any group, we can still learn to do a few things better, but we've demonstrated time and again (including right now!) that we have the strength to learn and improve, and I'm certain we'll do so again.
I'm proud to be part of this community, and everyone else should be as well.
Recruit IT are looking for a Senior Developer to join a bleeding edge Big Data organisation in the New Zealand market. You will bring a proven background in big data systems, business intelligence, and/or data warehouse technologies along with a preference for functional programming and open source solutions.
You will be playing an integral part in ensuring the growth and performance of a state of the art Big Data platform. To do this you will need to have a good understanding of the importance of analytics and a variety of Big Data technologies.
Your experience to date will include:
Experience with big data, business intelligence, and data warehouse applications. This will include: Hadoop, Hive, Spring, Pivotal HD, Cloud Foundry, HAWQ, Greenplum, MongoDB, Cassandra, Hortonworks, Cloudera, MapReduce, Flume, or Sqoop, to name a few!
Ideally, functional programming experience including Scala, Haskell, Lisp, Python, etc.
Passion for bleeding edge tech is a MUST!
If you are interested in finding out more, apply online to Kaleb at Recruit IT with your CV and an overview of your current situation.
Get information on how to apply for this position.
Hello! I'm pretty new to Haskell, just trying to piece things together through sites and articles. However, I have yet to find a source that answers the question of taking user input and placing it into a list.
import System.Environment

prompt x = do
  putStrLn x
  number <- getLine
  return number

main = do
  let numbersList =  -- create a list for all numbers
  num <- prompt "Please input a number: "
  if (read num :: Int) /= 0
    then do -- valid number
      numbersList ++ (read num :: Int) -- UNSURE HOW TO PUT THE NUMBER INTO THE LIST
    else do -- check for primes
      print ("done")

submitted by DESU-troyer
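One common pattern for this (a sketch of one approach, not the only answer, and the names are mine rather than the original poster's): instead of trying to append to an existing list, build the list by recursion, consing each valid number onto the result of reading the rest.

```haskell
-- Read numbers from the user until 0 is entered; 0 ends the input
-- and is not included in the result.
collectNumbers :: IO [Int]
collectNumbers = do
  putStrLn "Please input a number: "
  num <- readLn                 -- readLn parses the whole line as an Int
  if num /= 0
    then do
      rest <- collectNumbers    -- gather the remaining numbers
      return (num : rest)       -- put this number at the front
    else return []

-- Pure analogue of the same idea, for reference:
-- keep numbers up to (but not including) the first 0.
upToZero :: [Int] -> [Int]
upToZero = takeWhile (/= 0)

-- In main:  do { nums <- collectNumbers; print nums }
```

The key shift is that Haskell lists aren't mutated in place; each recursive call returns a new, longer list.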
The Foldable-Traversable proposal (aka FTP) has spawned a lot of debate in the Haskell community.
Here I want to analyze the specific concern which Ben Moseley raised in his post, FTP dangers.
Ben’s general point is that more polymorphic (specifically, ad-hoc polymorphic, i.e. using type classes) functions are less readable and reliable than their monomorphic counterparts.
On the other hand, Tony Morris and Chris Allen argue on twitter that polymorphic functions are more readable due to parametricity.
Is that true, however? Are the ad-hoc generalized functions more parametric than the monomorphic versions?

February 12, 2015
Technically, the Functor-based type is more parametric. A function with type (a -> b) -> [a] -> [b] is something like map, except it may drop, duplicate, or reorder some elements. On the other hand, Functor f => (a -> b) -> f a -> f b has to be fmap.
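For instance (the function name is mine), here is a well-typed inhabitant of map's monomorphic type that is clearly not map:

```haskell
-- Has map's type, but also reverses the list.
notQuiteMap :: (a -> b) -> [a] -> [b]
notQuiteMap f = map f . reverse
```

Nothing like this typechecks at the fully general type Functor f => (a -> b) -> f a -> f b, because list-specific operations such as reverse are not available for an arbitrary f.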
But this is a trick question! The first thing we see in the code is the function’s name, not its type. What carries more information, map or fmap? (Assuming both come from the current Prelude.) Certainly map. When fmap is instantiated at the list type, it is nothing more than map. When we see fmap, we know that it may or may not be map. When we see map, we know it is map and nothing else.
The paradox is that there are more functions with map’s type than fmap’s, but there are more functions with fmap’s name than map’s. Even though fmap is more parametric, that doesn’t win us much.
Nevertheless, is there a benefit in using more parametric functions in your code? No. If it were true, we’d all be pumping our code with «parametricity» by writing id 3 instead of 3. You can’t get more parametric than id.
Merely using parametric functions doesn’t make code better. Parametricity may pay off when we’re defining polymorphic parametric functions in our code instead of their monomorphic instantiations, since parametric types are more constrained and we’re more likely to get a compile error should we do anything stupid.
(It won’t necessarily pay off; the type variables and constraints do impose a tax on types’ readability.)
But if you have an existing, monomorphic piece of code that works with lists, simply replacing Data.List functions with Data.Foldable ones inside it, ceteris paribus, will not make your code any safer or more readable.
After a recent chat with Simon Meier, we decided that I would take over the maintenance of the exceedingly popular blaze-builder package.
Of course, this package has been largely superseded by the new builder shipped inside bytestring itself. The point of this new release is to offer a smooth migration path from the old to the new.
If you have a package that only uses the public interface of the old blaze-builder, all you should have to do is compile it against blaze-builder-0.4 and you will in fact be using the new builder. If your program fails to compile against the old public interface, or there’s any change in the semantics of your program, then please file a bug against my blaze-builder repository.
If you are looking for a function to convert Blaze.ByteString.Builder.Builder to Data.ByteString.Builder.Builder or back, it is id. These two types are exactly the same, as the former is just a re-export of the latter. Thus inter-operation between code that uses the old interface and the new should be efficient and painless.
The one caveat is that the old implementation has all but disappeared, and programs and libraries that touch the old internal modules will need to be updated.
This compatibility shim is especially important for those libraries that have the old blaze-builder as part of their public interface, as now you can move to the new builder without breaking your interface.
There are a few things to consider in order to make this transition as painless as possible, however: libraries that touch the old internals should probably move to the new bytestring builder as soon as possible, while libraries that depend only on the public interface should probably hold off for a bit and continue to use this shim.
For example, blaze-builder is part of the public interface of both the Snap Framework and postgresql-simple. Snap touches the old internals, while postgresql-simple uses only the public interface. Both libraries are commonly used together in the same projects.
There would be some benefit to postgresql-simple to move to the new interface. However, let’s consider the hypothetical situation where postgresql-simple has transitioned, and Snap has not. This would cause problems for any project that 1.) depends on this compatibility shim for interacting with postgresql-simple, and 2.) uses Snap.
Any such project would have to put off upgrading postgresql-simple until Snap is updated, or interact with postgresql-simple through the new bytestring builder interface while continuing to use the old blaze-builder interface for Snap. The latter option could range anywhere from trivial to extremely painful, depending on how entangled the usage of Builders is between postgresql-simple and Snap.
By comparison, as long as postgresql-simple continues to use the public blaze-builder interface, it can easily use either the old or new implementation. If postgresql-simple holds off until after Snap makes the transition, then there’s little opportunity for these sorts of problems to arise.
The issue was that a connection from the pool wasn’t reserved for the duration of the transaction. This meant that the individual queries of a transaction could be issued on different connections, and that queries from other requests could be issued on the connection that’s in a transaction. Setting the maximum size of the pool to a single connection fixes the first problem, but not the second.
At Hac Phi 2014, Doug and I finally sat down and got serious about fixing this issue. The fix did require breaking the interface in a fairly minimal fashion. Snaplet-postgresql-simple now offers the withPG and liftPG operators that will exclusively reserve a single connection for a duration, and in turn uses withPG to implement withTransaction.
We were both amused by the fact that apparently a fair number of people have been using snaplet-postgresql-simple, in some cases even with transactions, without obviously noticing the issue. One could speculate about the reasons why, but Doug did mention that he pretty much never uses transactions. So in response, I came up with a list of five common use cases: the first three involve changing the database, and the last two are useful even in a read-only context.
Transactions allow one to make a group of logically connected changes so that either all of them are reflected in the resulting state of the database, or none of them are. So if anything fails before the commit, say due to a coding error or even something outside the control of the software, the database isn't polluted with partially applied changes.
Databases that provide durability, like PostgreSQL, are limited in the number of transactions per second by the rotational speed of the disk they are writing to. Thus individual DML statements are rather slow, as each PostgreSQL statement that isn’t run in an explicit transaction is run in its own individual, implicit transaction. Batching multiple insert statements into a single transaction is much faster.
This use case is relatively less important when writing to a solid state disk, which is becoming increasingly common. Alternatively, PostgreSQL allows a client program to turn synchronous_commit off for the connection, or even just for a single transaction, if sacrificing a small amount of durability is acceptable for the task at hand.
Avoiding Race Conditions
Transactional databases, like Software Transactional Memory, do not automatically eliminate all race conditions, they only provide a toolbox for avoiding and managing them. Transactions are the primary tool in both toolboxes, though there are considerable differences around the edges.
Cursors are one of several methods to stream data out of PostgreSQL, and you'll almost always want to use them inside a single transaction.² One advantage that cursors have over the other streaming methods is that one can interleave the cursor with other queries, updates, and cursors over the same connection, and within the same transaction.
Running multiple queries against a single snapshot
If you use the REPEATABLE READ or higher isolation level, then every query in the transaction will be executed on a single snapshot of the database.
So I no longer have any reservations about using snaplet-postgresql-simple if it is a good fit for your application, and I do recommend that you learn to use transactions effectively if you are using Postgres. Perhaps in a future post, I’ll write a bit about picking an isolation level for your postgres transactions.
There is the WITH HOLD option for keeping a cursor open after a transaction commits, but this just runs the cursor to completion, storing the data in a temporary table. That might occasionally be acceptable in some contexts, but it is definitely not streaming.↩