News aggregator

Which chapters of Real World Haskell are still relevant?

Haskell on Reddit - Fri, 08/15/2014 - 9:35pm

I read through Learn You a Haskell when I first started. That went really well, and since then I've been doing some web, Parsec, FFI, and database code, but I've always stayed away from Real World Haskell due to the complaints about it being out of date. Looking through the book now, though, many parts seem worth reading. Can someone provide a list of chapters that are either:

  1. still highly relevant,
  2. good, but to be read with the understanding that the content may no longer be best practice, or
  3. outdated and worth skipping.
submitted by singularai
Categories: Incoming News

Rules for class methods and Safe Haskell

glasgow-user - Fri, 08/15/2014 - 9:10pm
Hi, the module Control.Arrow declares a set of rewrite rules for the Arrow class. It is marked “Trustworthy”, probably to allow these rules to actually fire. Now, these rules are only correct for class instances that actually satisfy the arrow laws. If the author of another module defines an instance of Arrow that does not respect the laws, that other module could still be considered “Safe” by GHC, although the rules from Control.Arrow are now bogus. Is this considered a problem? All the best, Wolfgang
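To make the concern concrete, here is a minimal sketch (names are mine, purely hypothetical) of a module that GHC accepts under `Safe` while defining an Arrow instance that breaks the laws the Control.Arrow rules rely on. The `Count` arrow records how many `arr` steps it was built from, so a rule like `arr f . arr g = arr (f . g)` would change observable behaviour if it fired:

```haskell
{-# LANGUAGE Safe #-}
-- Hypothetical law-breaking Arrow in a module GHC accepts as Safe.
import Prelude hiding (id, (.))
import Control.Category (Category (..))
import Control.Arrow (Arrow (..))

-- An arrow that counts how many primitive steps it is composed of.
data Count b c = Count Int (b -> c)

steps :: Count b c -> Int
steps (Count n _) = n

runCount :: Count b c -> b -> c
runCount (Count _ f) = f

instance Category Count where
  id = Count 0 (\x -> x)
  Count m g . Count n f = Count (m + n) (\x -> g (f x))

instance Arrow Count where
  arr = Count 1                                   -- every 'arr' counts as one step
  first (Count n f) = Count n (\(x, y) -> (f x, y))
```

The violation: `arr g . arr f` has two steps but `arr (g . f)` has one, so rewriting the former to the latter (as the Control.Arrow rules do) is not meaning-preserving for this instance, yet nothing about the module is flagged as unsafe.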
Categories: Offsite Discussion

Are safe coercions safe in the sense of Safe Haskell?

glasgow-user - Fri, 08/15/2014 - 9:04pm
Hi, I would expect the function coerce :: Coercible a b => a -> b to be safe in the sense of Safe Haskell. However, the Data.Coerce module is marked “Unsafe”. The coerce function is also available via GHC.Exts and GHC.Prim; the former module is marked “Unsafe”, but the latter is (surprisingly) marked “Safe-Inferred”. What are the reasons behind this? All the best, Wolfgang
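For context, a sketch of the benign face of coerce (type and function names below are illustrative, not from the post): it converts between representationally equal types, even under a container, without traversing anything at runtime.

```haskell
import Data.Coerce (coerce)

-- A zero-cost wrapper: Age has the same runtime representation as Int.
newtype Age = Age Int
  deriving (Eq, Show)

-- coerce goes through the Coercible constraint with no traversal:
-- the whole list is reused, not rebuilt.
toAges :: [Int] -> [Age]
toAges = coerce

fromAges :: [Age] -> [Int]
fromAges = coerce
```

The presumable reason for the “Unsafe” marking is that coerce can look through newtypes whose constructors a library deliberately hides, breaking invariants maintained by smart constructors; role annotations exist so authors can forbid that, but a blanket “Safe” for Data.Coerce would predate such opt-outs being in place everywhere.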
Categories: Offsite Discussion

local mins

haskell-cafe - Fri, 08/15/2014 - 8:33pm
Some time ago we had a discussion, and now I am ready to present my algorithm for counting local minima. It is linear in the imperative case and, I hope, in the functional case too. I spent hours debugging it...
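The algorithm itself is not included in the digest, but assuming "local min" means an interior element strictly smaller than both of its neighbours, a single linear pass can be sketched as:

```haskell
-- Count interior elements strictly smaller than both neighbours.
-- One pass over three staggered views of the list: O(n).
localMins :: Ord a => [a] -> Int
localMins xs =
  length [ y | (x, y, z) <- zip3 xs (drop 1 xs) (drop 2 xs)
             , y < x && y < z ]
```

For example, localMins [3,1,2,0,4] counts the 1 and the 0, giving 2.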
Categories: Offsite Discussion

Problem with popular GHC.Generics example?

Haskell on Reddit - Fri, 08/15/2014 - 8:15pm

The popular GHC.Generics serialization example illustrates how to serialize a sum datatype (a :+: b), with a 0 bit representing a and a 1 bit representing b.

However, consider a :+: b :+: c.

If the compiler treats this as (a :+: b) :+: c, then a is 00, b is 01, c is 1.

If it's a :+: (b :+: c), then a is 0, b is 10, c is 11.

The compiler's choice, even though (as I understand it) it is consistent within any given compilation, could differ in a later one.

The manual states not to depend on the ordering, which the example does.
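The concern can be reproduced without relying on any particular nesting. Below is a sketch of such a generic encoder (class and function names are mine, not the original example's): it emits a 0 bit on L1 and a 1 bit on R1, so the code each constructor receives depends entirely on how GHC nests :+: in the derived representation. The only property one can safely assert is that the encodings are distinct:

```haskell
{-# LANGUAGE DeriveGeneric, TypeOperators, FlexibleContexts #-}
import GHC.Generics

-- Generic bit encoder for constructor choice only
-- (fields are ignored to keep the sketch small).
class GBits f where
  gbits :: f p -> [Int]

instance GBits U1 where
  gbits U1 = []

instance (GBits f, GBits g) => GBits (f :+: g) where
  gbits (L1 x) = 0 : gbits x   -- left branch: emit a 0 bit
  gbits (R1 x) = 1 : gbits x   -- right branch: emit a 1 bit

instance GBits f => GBits (M1 i c f) where
  gbits (M1 x) = gbits x       -- skip metadata wrappers

data T = A | B | C deriving (Generic, Show)

bitsOf :: (Generic t, GBits (Rep t)) => t -> [Int]
bitsOf = gbits . from
```

Whether bitsOf A yields [0] or [0,0] depends on whether Rep T is derived as a :+: (b :+: c) or (a :+: b) :+: c, which is precisely the ordering the manual warns against depending on.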

submitted by Sgeo
Categories: Incoming News

wren gayle romano: Citation, Recitation, and Resuscitation

Planet Haskell - Fri, 08/15/2014 - 6:26pm

Citation is a necessary practice for any sort of intellectual engagement, whether formal or colloquial, and whether academic or activistic. It is crucial to give credit to the originators of ideas— for ethical honesty: to acknowledge those who've enlightened you; for professional honesty: to make clear where your contributions begin; and for intellectual honesty: to allow others to read the sources for themselves and to follow up on other extensions and criticisms of that work.

When encountering a new idea or text, I often engage in a practice I call "encitation". In order to more thoroughly understand and ingrain a text's intellectual content, I try (temporarily) to view all other ideas and arguments through its lens. This is why when I was reading Whipping Girl I was citing it left and right, just as when I was reading Killing Rage I quoted it incessantly. To understand structuralism, I embraced the structuralist theory and viewed all things in structuralist terms; to understand functionalism, or Marxism, or Freudianism, or performativity, I did the same. Of course, every framework is incomplete and emphasizes certain things to the exclusion of observing others; so viewing the world entirely from within any single framework distorts your perception of reality. The point of the exercise is not to embrace the framework per se, it's to roleplay the embracing of it. The point of this roleplay is to come to understand the emphases and limitations of the framework— not abstractly but specifically. This is especially important for trying to understand frameworks you disagree with. When we disagree with things, the instinct is to discount everything they say. But it's intellectually dishonest to refuse to understand why you disagree. And it's counterproductive, since you cannot debunk the theory nor convince people to change their minds without knowing and addressing where they're coming from.

I engage in encitation not only for anthropological or philosophical ideas, I also do it for mathematical ideas. By trying to view all of mathematics through a particular idea or framework, you come to understand both what it's good at and what it cannot handle. That's one of the things I really love about the way Jason Eisner teaches NLP and declarative methods. While it's brutal to give people a framework (like PCFGs or SAT solving) and then ask them to solve a problem just barely outside of what that framework can handle, it gives you a deep understanding of exactly where and why the framework fails. This is the sort of knowledge you usually have to go out into industry and beat your head against for a while before you see it. But certain fields, like anthropology and writing, do try to teach encitation as a practice for improving oneself. I wonder how much of Jason's technique comes from his background in psychology. Regardless, this practice is one which should, imo, be used (and taught explicitly) more often in mathematics and computer science. A lot of the arguing over OO vs FP would go away if people did this. Instead, we only teach people hybridized approaches, and they fail to internalize the core philosophical goals of notions like objects, functions, types, and so on. These philosophical goals can be at odds, and even irreconcilable, but that does not make one or the other "wrong". The problem with teaching only hybridized approaches is that this irreconcilability means necessarily compromising on the full philosophical commitment to these goals. Without understanding the full philosophical goals of these different approaches, we cannot accurately discuss why sometimes one philosophy is more expedient or practical than another, and yet why that philosophy is not universally superior to others.

The thing to watch out for, whether engaging in the roleplay of encitation or giving citations for actual work, is when you start reciting quotes and texts like catechisms. Once things become a reflexive response, that's a sign that you are no longer thinking. Mantras may be good for meditation, but they are not good critical praxis. This is, no doubt, what Aoife is referring to when she castigates playing Serano says. This is also why it's so dangerous to engage with standardized narratives. The more people engage in recitations of The Narrative, the more it becomes conventionalized and stripped of whatever humanity it may once have had. Moreover, reiterating The Narrative to everyone you meet is the surest way to drive off anyone who doesn't believe in that narrative, or who believes the content but disagrees with the message. Even if I was "born this way", saying so doesn't make it any more true or any more acceptable to those who would like Jesus to save me from myself. More to the point, saying so places undue emphasis on one very tiny aspect of the whole. I'd much rather convince people of the violent nature of gender enculturation, and get them to recognize the psychological damage that abuse causes, than get them to believe that transgender has a natal origin.

As time goes on, we ask different questions. Consequently, we end up discarding old theories and embracing new ones when the old theory cannot handle our new questions. In our tireless pursuit of the "truth", educators are often reticent to teach defunct theories because we "know" they are "wrong". The new theory is "superior" in being able to address our new questions, but we often lose track of the crucial insights of the old theory along the way. For this reason, it's often important to revive old theories in order to re-highlight those insights and to refocus on old questions which may have become relevant once more. In a way, this revitalization is similar to encitation: the goal is not to say that the old theory is "right", the goal is to understand what the theory is saying and why it's important to say those things.

But again, one must be careful. When new theories arise, practitioners of the immediately-old theory often try to derail the asking of new questions by overemphasizing the questions which gave rise to the preceding theory. This attempt to keep moribund theories on life support often fuels generational divides: the new theoreticians cannot admit to any positives of the old theory lest they undermine their own work, while the old theoreticians feel like they must defend their work against the unrelenting tide lest it be lost forever. I think this is part of why radfems have been spewing such vitriol lately. The theoretical framework of radical feminism has always excluded and marginalized trans women, sex workers, and countless others; but the framework does not justify doxxing, stalking, and harassing those women who dare refute the tenets of The Doctrine. This reactionary violence bears a striking resemblance to the violence of religious fundamentalists1. And as with the religious fundamentalists, I think the reactionary violence of radfems stems from living in a world they can no longer relate to or make sense of.

Major changes in mathematics often result in similar conflicts, though they are seldom so violent. The embracing/rejection of constructivism as a successor to classical mathematics. The embracing/rejection of category theory as an alternative to ZFC set theory. Both of these are radical changes to the philosophical foundations of mathematical thought, and both of these are highly politicized, with advocates on both sides who refuse to hear what the other side is saying. Bob Harper's ranting and railing against Haskell and lazy evaluation is much the same. Yes, having simple cost models and allowing benign side effects is important; but so is having simple semantic models and referential transparency. From where we stand now, those philosophical goals seem to be at odds. But before we can make any progress on reconciling them, we must be willing to embrace both positions long enough to understand their crucial insights and to objectively recognize where and how both fail.

[1] To be clear: I do not draw this analogy as a way of insulting radfems; only to try and make sense of their behavior. There are many religious people (even among those who follow literalist interpretations of their religious texts) who are not terrorists; so too, there are women who believe in the radfem ideology and don't support the behavior of TERFs, SWERFs, etc. It is important to recognize both halves of each community in order to make sense of either side's reactions; and it's important to try to understand the mechanism that leads to these sorts of splits. But exploring this analogy any further is off-topic for this post. Perhaps another time.


Categories: Offsite Blogs

Functor explorer

Haskell on Reddit - Fri, 08/15/2014 - 6:04pm

Just a little proof-of-concept I whipped up in a couple hours based on this idea:

One thing that would be neat to demonstrate: you write an expression like fmap (+2), and below it a bunch of data structures are printed out (maybe a, [a], (a,), Tree a, Vector a, ((->) a), Void, etc.), showing live what happens when you manipulate fmap's first argument. This might really be a neat way to build intuitions about what fmap is. Similarly you could do it for return and >>= in Monad. These kinds of classes are all about that kind of "small piece of code" -> "applies to a vast number of instances, and they all do different things".

It abuses tryhaskell for its querying in a daft way so expect it to be slow. In a proper implementation one could use a much faster backend.
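The intuition the quoted idea describes can be previewed in plain Haskell without any tool at all; a small sketch over a few base Functor instances (the names demoMaybe etc. are just for illustration):

```haskell
-- One function, many containers: each Functor instance decides
-- where (+2) is applied.
demoMaybe :: Maybe Int
demoMaybe = fmap (+2) (Just 1)          -- Just 3

demoList :: [Int]
demoList = fmap (+2) [1, 2, 3]          -- [3, 4, 5]

demoPair :: (String, Int)
demoPair = fmap (+2) ("label", 1)       -- second component only: ("label", 3)

demoFun :: Int -> Int
demoFun = fmap (+2) (* 10)              -- composition: \x -> x * 10 + 2
```

The same fmap (+2) means "maybe apply", "apply to every element", "apply to the second slot", and "post-compose", which is exactly the many-instances contrast the post wants to show live.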

submitted by chrisdoner
Categories: Incoming News

ANNOUNCE: vty 5.2.0 and vty-examples 5.2.0

General haskell list - Fri, 08/15/2014 - 5:54pm
5.2.0

- Corrected handling of Color240 values.
- Squashed warnings. Thanks jtdaugherty!
- The Config structure now specifies the file descriptors to use. The defaults are the stdInput and stdOutput file descriptors. Previously Vty used stdInput for input and the following code for output: hDuplicate stdout >>= handleToFd >>= (`hSetBuffering` NoBuffering). The difference was required by Vty.Inline; now, Vty.Inline uses the Config structure options to achieve the same effect.
- Removed: derivedVtime, derivedVmin, inputForCurrentTerminal, inputForNameAndIO, outputForCurrentTerminal, outputForNameAndIO
- Added: inputForConfig, outputForConfig
- Updates to vty-rogue from jtdaugherty. Thanks!
- The oldest version of GHC tested to support vty is 7.4.2.

-Corey O'Connor
Categories: Incoming News

lit - a *modern* literate programming tool

Haskell on Reddit - Fri, 08/15/2014 - 5:26pm

I built a literate programming tool this summer, and would be curious to get some feedback.

Current literate programming tools fall into several categories:

  • require TeX
  • lack actual macro expansion (docco and friends, literatehaskell)
  • outdated and difficult to build

I think lit offers some compelling features. I would love to get some feedback.

More about literate programming...

submitted by cdosborn
Categories: Incoming News

ANNOUNCE: BOB Conference 2015 Berlin

haskell-cafe - Fri, 08/15/2014 - 2:58pm
BOB Conference 2015, Berlin, 23.1.2015. CALL FOR CONTRIBUTIONS. Deadline: September 30, 2014. Do you drive advanced software engineering methods, implement ambitious architectures, and keep an eye on cutting-edge innovation? Attend this conference, meet people who share your goals, and get to know the best software tools and technologies available today. We strive to offer you a day full of new experiences and impressions that you can use to immediately improve your daily life as a software developer. If you share our vision and want to contribute, submit a proposal for a talk or tutorial! We are looking for talks about best-of-breed software technology, e.g.: - functional programming - reactive programming - micro-service architectures
Categories: Offsite Discussion

Functional Programming Job Opportunities

haskell-cafe - Fri, 08/15/2014 - 2:38pm
The Risk & Analytics team at Barclays currently has opportunities for functional programmers with experience of investment banking technology and quantitative analytics. If you're interested, please contact me (stephen dot t dot west at barclays dot com). Regards, Stephen.
Categories: Offsite Discussion

Using `jack` on Windows, sounds realistic?

haskell-cafe - Fri, 08/15/2014 - 12:56pm
Hi all, I've recently been using the great `jack` library from Henning Thielemann (cc'ed) on my Linux computer. It works like a charm, but I was wondering about Windows support. Since JACK runs on Windows, I'm hoping I might be able to build my Haskell application that uses `jack` on Windows as well. Is that realistic? Has anyone tried already? Thanks in advance. Note: I plan to have another backend using `portaudio`, which would allow me to be cross-platform anyway... but using JACK brings ASIO support, which is really needed for my use case (a real-time synthesizer).
Categories: Offsite Discussion

Ken T Takusagawa: [leuzkdqp] Units

Planet Haskell - Fri, 08/15/2014 - 12:39pm

Some notes on dimensional quantities and type systems:

Addition, subtraction, assignment, and comparison should fail if the units are incompatible.

Multiplication, division, and exponentiation by a rational dimensionless power always work.  These operations assume commutativity.
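These two rules (addition demands identical units, multiplication combines them) can be sketched in Haskell with type-level exponents. This is a toy with just metres and seconds, using Nat exponents from GHC.TypeLits; real libraries such as units or dimensional are far more general, and Nat cannot express division, which hints at why this is harder than it looks:

```haskell
{-# LANGUAGE DataKinds, KindSignatures, TypeOperators #-}
import GHC.TypeLits

-- A quantity tagged with (metre, second) exponents at the type level.
newtype Q (metre :: Nat) (second :: Nat) = Q Double
  deriving Show

type Length = Q 1 0
type Area   = Q 2 0

-- Addition: the exponents must match exactly, or it is a type error.
qadd :: Q m s -> Q m s -> Q m s
qadd (Q a) (Q b) = Q (a + b)

-- Multiplication: exponents are added, producing a new unit.
qmul :: Q m1 s1 -> Q m2 s2 -> Q (m1 + m2) (s1 + s2)
qmul (Q a) (Q b) = Q (a * b)
```

Here qadd (Q 1 :: Length) (Q 1 :: Area) is rejected at compile time, while qmul (Q 2 :: Length) (Q 3 :: Length) :: Area is accepted.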

Distinguishing addition from multiplication vaguely reminds me of the difference between floating point and fixed point.

Unit conversion: a quantity can be read in one set of units then shown in another set.  Abstractly it does not exist as a real number in either.

Converting between different families of units requires exact linear algebra on rational numbers.

In some functions, units pass through just fine.  Others, e.g., trigonometric, require dimensionless numbers.

Not all dimensionless numbers are the same unit: adding an angle to the fine structure constant seems as meaningless as adding a foot to a volt.  But multiplying them could be reasonable.

One can take any compound type with a dimensionless internal type and turn it into a new compound type with that internal type having units.  But should this be considered a "new" type?  Of course, this is useless unless the internal type defines arithmetic operations: "True" miles per hour seems meaningless.

Creating such compound types is analogous to the "function" idea above by viewing a compound type as a data constructor function of a base type.  Constructors do not do operations which can fail, like addition, so the function always succeeds.

Creating a list populated by successive time derivatives of position seems like a useful thing to be able to do.  But every element of the list will have different dimensions, which violates the naive idea of a list being items all of the same type.

We would like to catch all dimensionality errors at compile time, but this may not be possible.  The extreme example would be implementing the "units" program.  Is that an anomaly?

It is OK to add a vector to a coordinate (which has an origin) but not a coordinate to a coordinate.  There seems to be a concept of units and "delta" units.

It is OK to subtract coordinates to get a delta.

Maybe multiplying coordinates is also illegal.
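These rules are the affine-space discipline, sketched here in one dimension (the names Point and Delta are hypothetical): the types admit point-plus-delta and point-minus-point, and simply provide no operation for adding or multiplying two coordinates.

```haskell
-- One-dimensional affine sketch: coordinates vs. displacements.
newtype Point = Point Double deriving (Eq, Show)  -- has an origin
newtype Delta = Delta Double deriving (Eq, Show)  -- origin-independent

-- point + delta is a point
offset :: Point -> Delta -> Point
offset (Point p) (Delta d) = Point (p + d)

-- point - point is a delta
diff :: Point -> Point -> Delta
diff (Point p) (Point q) = Delta (p - q)

-- deltas form a group among themselves
dadd :: Delta -> Delta -> Delta
dadd (Delta a) (Delta b) = Delta (a + b)
```

Because no Point + Point or Point * Point operation is provided, the dubious operations are not merely checked but unrepresentable.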

Coordinates versus vectors, units versus delta units, seems like an orthogonal problem to "regular" units.  Separate them in software so one can use either concept independently, for example, distinguishing dimensionless from delta dimensionless.

Go further than just "delta" to distinguish first, second, etc., differences.

An X component and Y component of a vector might have the same units, say, length, but one wants to avoid adding them, as this is typically a typo.  But sometimes, for rotations, one does add them.

A Haskell Wiki page: The units package seems promising.

Categories: Offsite Blogs

[PROPOSAL] Add `FiniteBits(count{Leading,Trailing}Zeros)`

libraries list - Fri, 08/15/2014 - 9:22am
Hello *,

As GHC 7.10.1 will have support for new assembler-optimized CLZ & CTZ[1] primops[2], it would be useful to also provide a convenient high-level interface to avoid having to work with -XMagicHash and unboxed values. To this end, I hereby propose to add two new methods to the 'FiniteBits' class, specifically:

    class Bits b => FiniteBits b where
        {- ... -}

        countLeadingZeros :: b -> Int
        countLeadingZeros x = (w-1) - go (w-1)
          where
            go i | i < 0       = i -- no bit set
                 | testBit x i = i
                 | otherwise   = go (i-1)
            w = finiteBitSize x

        countTrailingZeros :: b -> Int
        countTrailingZeros x = go 0
          where
            go i | i >= w      = i
                 | testBit x i = i
                 | otherwise   = go (i+1)
            w = finiteBitSize x

The full patch (including Haddock doc-strings) is available for code review. I suggest we try to keep the discussion/voting/bikeshedding about the proposal proper here
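The proposed default definitions can be transcribed as ordinary top-level functions and sanity-checked by hand (clz and ctz are my names for this sketch, not part of the proposal):

```haskell
import Data.Bits (FiniteBits, finiteBitSize, testBit)
import Data.Word (Word8)

-- Transcription of the proposed countLeadingZeros default:
-- scan from the most significant bit down to the first set bit.
clz :: FiniteBits b => b -> Int
clz x = (w - 1) - go (w - 1)
  where
    go i | i < 0       = i  -- no bit set at all
         | testBit x i = i
         | otherwise   = go (i - 1)
    w = finiteBitSize x

-- Transcription of the proposed countTrailingZeros default:
-- scan from the least significant bit up to the first set bit.
ctz :: FiniteBits b => b -> Int
ctz x = go 0
  where
    go i | i >= w      = i
         | testBit x i = i
         | otherwise   = go (i + 1)
    w = finiteBitSize x
```

For a Word8, clz 1 is 7 and ctz 8 is 3; for zero both return the full bit width, 8, matching the convention of the hardware instructions.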
Categories: Offsite Discussion