News aggregator

Are safe coercions safe in the sense of Safe Haskell?

glasgow-user - Fri, 08/15/2014 - 9:04pm
Hi, I would expect the function coerce :: Coercible a b => a -> b to be safe in the sense of Safe Haskell. However, the Data.Coerce module is marked “Unsafe”. The coerce function is also available via GHC.Exts and GHC.Prim. The former module is marked “Unsafe”, but the latter is (surprisingly) marked “Safe-Inferred”. What are the reasons behind this? All the best, Wolfgang
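
For reference, here is what the function in question does in ordinary use (a minimal sketch of my own, not part of the original mail): coerce converts between representationally equal types at zero runtime cost, for example a list of a newtype and a list of its underlying type.

    import Data.Coerce (coerce)

    newtype Age = Age Int deriving Show

    ages :: [Age]
    ages = [Age 1, Age 2, Age 3]

    main :: IO ()
    main = print (coerce ages :: [Int])   -- prints [1,2,3] without traversing the list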
Categories: Offsite Discussion

local mins

haskell-cafe - Fri, 08/15/2014 - 8:33pm
Some time ago we had a discussion, and now I am ready to present my algorithm for counting local minima. It is linear in the imperative case and, I hope, in the functional case too. I spent hours debugging it...
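
For comparison, a straightforward linear-time count of strict local minima can be written in one pass. This is my own sketch, not the poster's algorithm, and it ignores the endpoints, which is an assumption:

    -- Count elements that are strictly smaller than both neighbours.
    localMins :: Ord a => [a] -> Int
    localMins xs = length [ y | (x, y, z) <- zip3 xs (drop 1 xs) (drop 2 xs)
                              , y < x && y < z ]

    main :: IO ()
    main = print (localMins [3, 1, 4, 1, 5, 9, 2, 6])   -- 3: the two 1s and the 2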
Categories: Offsite Discussion

Problem with popular GHC.Generics example?

Haskell on Reddit - Fri, 08/15/2014 - 8:15pm

The popular GHC.Generics serialization example at http://www.haskell.org/ghc/docs/latest/html/users_guide/generic-programming.html#idp25226064 illustrates how to serialize a sum datatype (a :+: b), with a 0 bit representing a and a 1 bit representing b.

However, consider a :+: b :+: c.

If the compiler treats this as (a :+: b) :+: c, then a is 00, b is 01, c is 1.

If it's a :+: (b :+: c), then a is 0, b is 10, c is 11.

The compiler's choice of nesting, even though (as I understand it) it is consistent within any given compile, could differ in a later one.

The manual, at http://hackage.haskell.org/package/generic-deriving-1.6.3/docs/Generics-Deriving-Base.html#g:9 states not to depend on the ordering, which the example does.
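
To make the dependence concrete, here is a minimal, self-contained variant of the users-guide serializer (simplified by me to emit a list of bits instead of using a Put monad; the Bit type and the class name are mine). Whatever patterns it prints for A, B and C are determined entirely by how the derived Rep nests the (:+:) sum, which is exactly the detail the documentation says not to rely on.

    {-# LANGUAGE DeriveGeneric, TypeOperators #-}
    import GHC.Generics

    data Bit = O | I deriving Show

    class GSerialize f where
      gput :: f a -> [Bit]

    instance GSerialize U1 where                     -- constructors with no fields
      gput U1 = []

    instance (GSerialize a, GSerialize b) => GSerialize (a :+: b) where
      gput (L1 x) = O : gput x                       -- 0 bit for the left branch
      gput (R1 x) = I : gput x                       -- 1 bit for the right branch

    instance GSerialize a => GSerialize (M1 i c a) where
      gput (M1 x) = gput x                           -- metadata wrappers pass through

    data T = A | B | C deriving (Show, Generic)

    main :: IO ()
    main = mapM_ (\t -> putStrLn (show t ++ " -> " ++ show (gput (from t)))) [A, B, C]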

submitted by Sgeo
Categories: Incoming News

wren gayle romano: Citation, Recitation, and Resuscitation

Planet Haskell - Fri, 08/15/2014 - 6:26pm

Citation is a necessary practice for any sort of intellectual engagement, whether formal or colloquial, and whether academic or activistic. It is crucial to give credit to the originators of ideas— for ethical honesty: to acknowledge those who've enlightened you; for professional honesty: to make clear where your contributions begin; and for intellectual honesty: to allow others to read the sources for themselves and to follow up on other extensions and criticisms of that work.

When encountering a new idea or text, I often engage in a practice I call "encitation". In order to more thoroughly understand and ingrain a text's intellectual content, I try (temporarily) to view all other ideas and arguments through its lens. This is why when I was reading Whipping Girl I was citing it left and right, just as when I was reading Killing Rage I quoted it incessantly. To understand structuralism, I embraced the structuralist theory and viewed all things in structuralist terms; to understand functionalism, or Marxism, or Freudianism, or performativity, I did the same. Of course, every framework is incomplete and emphasizes certain things to the exclusion of observing others; so viewing the world entirely from within any single framework distorts your perception of reality. The point of the exercise is not to embrace the framework per se, it's to roleplay the embracing of it. The point of this roleplay is to come to understand the emphases and limitations of the framework— not abstractly but specifically. This is especially important for trying to understand frameworks you disagree with. When we disagree with things, the instinct is to discount everything they say. But it's intellectually dishonest to refuse to understand why you disagree. And it's counterproductive, since you cannot debunk the theory nor convince people to change their minds without knowing and addressing where they're coming from.

I engage in encitation not only for anthropological or philosophical ideas, I also do it for mathematical ideas. By trying to view all of mathematics through a particular idea or framework, you come to understand both what it's good at and what it cannot handle. That's one of the things I really love about the way Jason Eisner teaches NLP and declarative methods. While it's brutal to give people a framework (like PCFGs or SAT solving) and then ask them to solve a problem just barely outside of what that framework can handle, it gives you a deep understanding of exactly where and why the framework fails. This is the sort of knowledge you usually have to go out into industry and beat your head against for a while before you see it. But certain fields, like anthropology and writing, do try to teach encitation as a practice for improving oneself. I wonder how much of Jason's technique comes from his background in psychology. Regardless, this practice is one which should, imo, be used (and taught explicitly) more often in mathematics and computer science. A lot of the arguing over OO vs FP would go away if people did this. Instead, we only teach people hybridized approaches, and they fail to internalize the core philosophical goals of notions like objects, functions, types, and so on. These philosophical goals can be at odds, and even irreconcilable, but that does not make one or the other "wrong". The problem with teaching only hybridized approaches is that this irreconcilability means necessarily compromising on the full philosophical commitment to these goals. Without understanding the full philosophical goals of these different approaches, we cannot accurately discuss why sometimes one philosophy is more expedient or practical than another, and yet why that philosophy is not universally superior to others.

The thing to watch out for, whether engaging in the roleplay of encitation or giving citations for actual work, is when you start reciting quotes and texts like catechisms. Once things become a reflexive response, that's a sign that you are no longer thinking. Mantras may be good for meditation, but they are not good critical praxis. This is, no doubt, what Aoife is referring to when she castigates playing Serano says. This is also why it's so dangerous to engage with standardized narratives. The more people engage in recitations of The Narrative, the more it becomes conventionalized and stripped of whatever humanity it may once have had. Moreover, reiterating The Narrative to everyone you meet is the surest way to drive off anyone who doesn't believe in that narrative, or who believes the content but disagrees with the message. Even if I was "born this way", saying so doesn't make it any more true or any more acceptable to those who would like Jesus to save me from myself. More to the point, saying so places undue emphasis on one very tiny aspect of the whole. I'd much rather convince people of the violent nature of gender enculturation, and get them to recognize the psychological damage that abuse causes, than get them to believe that transgender has a natal origin.

As time goes on, we ask different questions. Consequently, we end up discarding old theories and embracing new ones when the old theory cannot handle our new questions. In our tireless pursuit of the "truth", educators are often reticent to teach defunct theories because we "know" they are "wrong". The new theory is "superior" in being able to address our new questions, but we often lose track of the crucial insights of the old theory along the way. For this reason, it's often important to revive old theories in order to re-highlight those insights and to refocus on old questions which may have become relevant once more. In a way, this revitalization is similar to encitation: the goal is not to say that the old theory is "right", the goal is to understand what the theory is saying and why it's important to say those things.

But again, one must be careful. When new theories arise, practitioners of the immediately-old theory often try to derail the asking of new questions by overemphasizing the questions which gave rise to the preceding theory. This attempt to keep moribund theories on life support often fuels generational divides: the new theoreticians cannot admit to any positives of the old theory lest they undermine their own work, while the old theoreticians feel like they must defend their work against the unrelenting tide lest it be lost forever. I think this is part of why radfems have been spewing such vitriol lately. The theoretical framework of radical feminism has always excluded and marginalized trans women, sex workers, and countless others; but the framework does not justify doxxing, stalking, and harassing those women who dare refute the tenets of The Doctrine. This reactionary violence bears a striking resemblance to the violence of religious fundamentalists[1]. And as with the religious fundamentalists, I think the reactionary violence of radfems stems from living in a world they can no longer relate to or make sense of.

Major changes in mathematics often result in similar conflicts, though they are seldom so violent. The embracing/rejection of constructivism as a successor to classical mathematics. The embracing/rejection of category theory as an alternative to ZFC set theory. Both of these are radical changes to the philosophical foundations of mathematical thought, and both of these are highly politicized, with advocates on both sides who refuse to hear what the other side is saying. Bob Harper's ranting and railing against Haskell and lazy evaluation is much the same. Yes, having simple cost models and allowing benign side effects is important; but so is having simple semantic models and referential transparency. From where we stand now, those philosophical goals seem to be at odds. But before we can make any progress on reconciling them, we must be willing to embrace both positions long enough to understand their crucial insights and to objectively recognize where and how both fail.

[1] To be clear: I do not draw this analogy as a way of insulting radfems; only to try and make sense of their behavior. There are many religious people (even among those who follow literalist interpretations of their religious texts) who are not terrorists; so too, there are women who believe in the radfem ideology and don't support the behavior of TERFs, SWERFs, etc. It is important to recognize both halves of each community in order to make sense of either side's reactions; and it's important to try to understand the mechanism that leads to these sorts of splits. But exploring this analogy any further is off-topic for this post. Perhaps another time.

Categories: Offsite Blogs

Functor explorer

Haskell on Reddit - Fri, 08/15/2014 - 6:04pm

http://chrisdone.com/fmap

Just a little proof-of-concept I whipped up in a couple hours based on this idea:

One thing that would be neat to demonstrate would be that you write an expression like fmap (+2) and then below it you have a bunch of data structures printed out. Maybe a, [a], (a,), Tree a, Vector a, (-> a), Void, etc. and it shows live what happens when you manipulate the fmap first argument. This might really be a neat way to build intuitions about what fmap is. Similarly you could do it for return and >>= in Monad. These kinds of classes are all about that kind of "small piece of code" -> "applies to a vast number of instances and they all do different things".

It abuses tryhaskell for its querying in a daft way so expect it to be slow. In a proper implementation one could use a much faster backend.
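
For anyone who wants the flavour of the idea without the web front end, here is a tiny offline sketch (my own, not part of the linked page): one function, fmap'd over several different Functor instances, each doing something different.

    main :: IO ()
    main = do
      print (fmap (+2) (Just 1))                        -- Just 3
      print (fmap (+2) [1, 2, 3])                       -- [3,4,5]
      print (fmap (+2) ("label", 1))                    -- ("label",3): maps the second component
      print (fmap (+2) (Right 1 :: Either String Int))  -- Right 3
      print (fmap (+2) (+10) 1)                         -- 13: fmap on functions is composition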

submitted by chrisdoner
Categories: Incoming News

ANNOUNCE: vty 5.2.0 and vty-examples 5.2.0

General haskell list - Fri, 08/15/2014 - 5:54pm
5.2.0
- Corrected handling of Color240 values.
- Squashed warnings. Thanks jtdaugherty!
- The Config structure now specifies the file descriptors to use. The default is the stdInput and stdOutput file descriptors. Previously Vty used stdInput for input and the following code for output:
    hDuplicate stdout >>= handleToFd >>= (`hSetBuffering` NoBuffering)
  The difference was required by Vty.Inline. Now, Vty.Inline uses the Config structure options to achieve the same effect.
- Removed: derivedVtime, derivedVmin, inputForCurrentTerminal, inputForNameAndIO, outputForCurrentTerminal, outputForNameAndIO
- Added: inputForConfig, outputForConfig
- Updates to vty-rogue from jtdaugherty. Thanks!
- The oldest version of GHC tested to support vty is 7.4.2.

-Corey O'Connor coreyoconnor< at >gmail.com http://corebotllc.com/
Categories: Incoming News

lit - a *modern* literate programming tool

Haskell on Reddit - Fri, 08/15/2014 - 5:26pm

I built a literate programming tool this summer, and would be curious to get some feedback.

Current literate programming tools fall into several categories:

  • require tex
  • lack actual macro expansion (docco and friends, literate Haskell)
  • outdated and difficult to build

I think lit offers some compelling features. I would love to get some feedback.

More about literate programming...

submitted by cdosborn
Categories: Incoming News

ANNOUNCE: BOB Conference 2015 Berlin

haskell-cafe - Fri, 08/15/2014 - 2:58pm
BOB Conference 2015 Berlin 23.1.2015 http://bobkonf.de/2015/ CALL FOR CONTRIBUTIONS English: http://bobkonf.de/2015/cfp.html German: http://bobkonf.de/2015/cfp.html Deadline: September 30, 2014 Do you drive advanced software engineering methods, implement ambitious architectures, and stay open to cutting-edge innovation? Attend this conference, meet people who share your goals, and get to know the best software tools and technologies available today. We strive to offer you a day full of new experiences and impressions that you can use to immediately improve your daily life as a software developer. If you share our vision and want to contribute, submit a proposal for a talk or tutorial! We are looking for talks about best-of-breed software technology, e.g.: - functional programming - reactive programming - micro-service architec
Categories: Offsite Discussion

Functional Programming Job Opportunities

haskell-cafe - Fri, 08/15/2014 - 2:38pm
The Risk & Analytics team at Barclays currently has opportunities for functional programmers with experience of investment banking technology and quantitative analytics. If you're interested, please contact me (stephen dot t dot west at barclays dot com). Regards, Stephen.
Categories: Offsite Discussion

Using `jack` on Windows, sounds realistic?

haskell-cafe - Fri, 08/15/2014 - 12:56pm
Hi all, I've recently been using the great `jack` library from Henning Thielemann (cc'ed) on my Linux computer. It works like a charm, but I was wondering about Windows support. Since JACK itself runs on Windows, I'm hoping I might be able to build my Haskell application that uses `jack` on Windows as well. Is that realistic? Has anyone tried it already? Thanks in advance. Note: I plan to have another backend using 'portaudio', which would allow me to be cross-platform anyway... but using JACK brings ASIO support, which is really needed for my use case (a real-time synthesizer).
Categories: Offsite Discussion

Ken T Takusagawa: [leuzkdqp] Units

Planet Haskell - Fri, 08/15/2014 - 12:39pm

Some notes on dimensional quantities and type systems:

Addition, subtraction, assignment, and comparison should fail if the units are incompatible.

Multiplication, division, and exponentiation by a rational dimensionless power always work.  These operations assume commutativity.
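As a concrete (and deliberately tiny) illustration of these two rules, here is a sketch with a single base dimension tracked as a type-level natural. It is my own toy encoding, not the units package, and because the exponent is a Nat it cannot express division or rational powers. Addition only type-checks when the exponents match, while multiplication always succeeds and adds the exponents.

    {-# LANGUAGE DataKinds, KindSignatures, TypeOperators #-}
    import GHC.TypeLits

    -- A quantity carrying the exponent of one base dimension (say, length).
    newtype Qty (len :: Nat) = Qty Double deriving Show

    add :: Qty n -> Qty n -> Qty n          -- only for matching units
    add (Qty x) (Qty y) = Qty (x + y)

    mul :: Qty n -> Qty m -> Qty (n + m)    -- always works; exponents add
    mul (Qty x) (Qty y) = Qty (x * y)

    metres :: Double -> Qty 1
    metres = Qty

    area :: Qty 2
    area = metres 3 `mul` metres 4          -- fine: 1 + 1 = 2

    -- badSum = add (metres 3) area         -- rejected: Qty 1 is not Qty 2
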

Distinguishing addition from multiplication vaguely reminds me of the difference between floating point and fixed point.

Unit conversion: a quantity can be read in one set of units then shown in another set.  Abstractly it does not exist as a real number in either.

Converting between different families of units requires exact linear algebra on rational numbers.

In some functions, units pass through just fine.  Others, e.g., trigonometric, require dimensionless numbers.

Not all dimensionless numbers are the same unit: adding an angle to the fine structure constant seems as meaningless as adding a foot to a volt.  But multiplying them could be reasonable.

One can take any compound type with a dimensionless internal type and turn it into a new compound type with that internal type having units.  But should this be considered a "new" type?  Of course, this is useless unless the internal type defines arithmetic operations: "True" miles per hour seems meaningless.

Creating such compound types is analogous to the "function" idea above by viewing a compound type as a data constructor function of a base type.  Constructors do not do operations which can fail, like addition, so the function always succeeds.

Creating a list populated by successive time derivatives of position seems like a useful thing to be able to do.  But every element of the list will have different dimensions, which violates the naive idea of a list being items all of the same type.
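One way to make such a list well-typed is to index the container by how many derivatives it holds, so that each element's unit is determined by its position. The following sketch is my own encoding, not from the post; it tracks metres per second to the n-th power with a type-level Nat.

    {-# LANGUAGE DataKinds, GADTs, KindSignatures, TypeOperators #-}
    import GHC.TypeLits

    -- The n-th time derivative of position has units m/s^n.
    newtype MetresPerSecN (n :: Nat) = MPS Double deriving Show

    -- Not an ordinary list: the head is the highest derivative held.
    data Derivs (n :: Nat) where
      DNil  :: Derivs 0
      DCons :: MetresPerSecN n -> Derivs n -> Derivs (n + 1)

    position     :: MetresPerSecN 0
    position     = MPS 100
    velocity     :: MetresPerSecN 1
    velocity     = MPS 0
    acceleration :: MetresPerSecN 2
    acceleration = MPS (-9.8)

    kinematics :: Derivs 3
    kinematics = DCons acceleration (DCons velocity (DCons position DNil))
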

We would like to catch all dimensionality errors at compile time, but this may not be possible.  The extreme example would be implementing the "units" program.  Is that an anomaly?

It is OK to add a vector to a coordinate (which has an origin) but not a coordinate to a coordinate.  There seems to be a concept of units and "delta" units.

It is OK to subtract coordinates to get a delta.

Maybe multiplying coordinates is also illegal.
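
A minimal sketch of that split (the names are mine; the Point wrapper in libraries such as linear makes the same distinction more generally): coordinates and deltas get separate types, the permitted combinations get functions, and the illegal ones simply have no corresponding operation.

    newtype Coord = Coord Double deriving Show   -- has an origin
    newtype Delta = Delta Double deriving Show   -- a difference of coordinates

    offset :: Coord -> Delta -> Coord            -- coordinate + delta: allowed
    offset (Coord p) (Delta d) = Coord (p + d)

    diff :: Coord -> Coord -> Delta              -- coordinate - coordinate: a delta
    diff (Coord p) (Coord q) = Delta (p - q)

    -- No Num instance is given for Coord, so Coord + Coord and Coord * Coord
    -- do not type-check at all.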

Coordinates versus vectors, units versus delta units, seems like an orthogonal problem to "regular" units.  Separate them in software so one can use either concept independently, for example, distinguishing dimensionless from delta dimensionless.

Go further than just "delta" to distinguish first, second, etc., differences.

An X component and Y component of a vector might have the same units, say, length, but one wants to avoid adding them, as this is typically a typo.  But sometimes, for rotations, one does add them.

A Haskell Wiki page: http://www.haskell.org/haskellwiki/Physical_units. The units package seems promising.

Categories: Offsite Blogs

[PROPOSAL] Add `FiniteBits(count{Leading,Trailing}Zeros)`

libraries list - Fri, 08/15/2014 - 9:22am
Hello *, As GHC 7.10.1 will have support for new assembler-optimized CLZ & CTZ[1] primops[2], it'd be useful to provide also a convenient high-level interface to avoid having to work with -XMagicHash and unboxed values. To this end, I hereby propose to add two new methods to the 'FiniteBits' class, specifically

    class Bits b => FiniteBits b where
        {- ... -}

        countLeadingZeros :: b -> Int
        countLeadingZeros x = (w-1) - go (w-1)
          where
            go i | i < 0       = i  -- no bit set
                 | testBit x i = i
                 | otherwise   = go (i-1)
            w = finiteBitSize x

        countTrailingZeros :: b -> Int
        countTrailingZeros x = go 0
          where
            go i | i >= w      = i
                 | testBit x i = i
                 | otherwise   = go (i+1)
            w = finiteBitSize x

The full patch (including Haddock doc-strings) is available for code review at https://phabricator.haskell.org/D158 I suggest to try to keep the discussion/voting/bikeshedding about the proposal proper here
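
As a quick sanity check of those defaults (worked out by hand from the definitions above; the methods did ship in base 4.8 with GHC 7.10):

    import Data.Bits (countLeadingZeros, countTrailingZeros)
    import Data.Word (Word8)

    main :: IO ()
    main = do
      print (countLeadingZeros  (1 :: Word8))   -- 7, since 1 is 0b00000001
      print (countTrailingZeros (8 :: Word8))   -- 3, since 8 is 0b00001000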
Categories: Offsite Discussion

APLAS 2014: Call for Posters and Demo

General haskell list - Fri, 08/15/2014 - 6:24am
(Apologies for multiple copies.) Call for Posters and Demos: APLAS 2014 12th Asian Symposium on Programming Languages and Systems November 17-19, 2014 Singapore http://www.math.nagoya-u.ac.jp/~garrigue/APLAS2014/ Submission due: 15 September 2014 (Monday), 23:59 GMT Notification: 22 September 2014 (Monday) ========== BACKGROUND ========== APLAS aims to stimulate programming language research by providing a forum for the presentation of latest results and the exchange of ideas in programming languages and systems. APLAS is based in Asia, but is an international forum that serves the worldwide programming language community. APLAS is sponsored by the Asian Association for Foundation of Software (AAFS) founded by Asian researchers in cooperation with many researchers from Europe and the USA. Past APLAS symposiums were successfully held in Melbourne ('13), Kyoto ('12), Kenting ('11), Shanghai ('10), Seoul ('09), Bangalore (
Categories: Incoming News

Asking for help on MVC, pure Models and IO. Partially related to the mvc library.

Haskell on Reddit - Fri, 08/15/2014 - 4:00am

Hello,

I'm currently writing a small program based upon the MVC pattern. I noticed that many of my design choices in terms of Controller => Model => View resemble the mvc library. However, I have a problem which I don't know how to solve. I tried to get some hints from the mvc library, but from what I understand I would face the same problem there, too. The problem goes as follows.

This is meant to be a window management system. The Model is a representation of all windows as clients. A Client contains the window id and additional information such as coordinates, size, and maybe a title. Controllers produce events and Views draw or render the Model.

A Controller produces an event notifying that a window was created. This event includes x and y coordinates as well as width and height. Now I can wrap this window in a client and add it to the model purely. However, to decide whether I should add the window at all, I need to make an impure query. In addition, client information such as the title also requires impure calls.

I came up with some solutions but in my opinion they are all unsatisfactory:

  • Allow IO for modifying the Model
  • Use the Interpreter pattern to purify the Model and still get IO (a rough sketch of this option appears below)
  • Work around the issue by having the Views decide which Client should be drawn and which additional information needs to be gathered (this defeats the idea of using the Model to cache information instead of querying it every time)
  • Give up on the Model entirely and abstract the Controller using the Interpreter pattern. The Interpreter would then become the View.
    Downside: every Controller would have to keep a Model of its own if necessary, possibly duplicating some code.

I'm a bit at a loss here. Having a pure Model is nice and makes sense, but there are practical needs for impure IO. How could I improve this?
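
Here is a rough sketch of the Interpreter option from the list above (all names are invented for illustration and nothing here is taken from the mvc library): the pure Model never performs IO; a step either finishes or describes the impure question it needs answered together with a pure continuation, and a small interpreter at the edge of the program supplies the answers.

    type WinId = Int

    data Event  = WindowCreated WinId Int Int Int Int   -- id, x, y, width, height

    data Client = Client
      { clientId    :: WinId
      , clientTitle :: String
      , clientRect  :: (Int, Int, Int, Int)
      }

    type Model = [Client]

    data Step
      = Done Model
      | ShouldManage WinId (Bool   -> Step)   -- impure query: manage this window?
      | FetchTitle   WinId (String -> Step)   -- impure query: what is its title?

    -- Pure: decides which questions to ask and how to continue afterwards.
    step :: Model -> Event -> Step
    step model (WindowCreated i x y w h) =
      ShouldManage i $ \ok ->
        if not ok
          then Done model
          else FetchTitle i $ \t ->
                 Done (Client i t (x, y, w, h) : model)

    -- Impure: the only place IO happens.
    runStep :: (WinId -> IO Bool) -> (WinId -> IO String) -> Step -> IO Model
    runStep _ _ (Done m)           = return m
    runStep q t (ShouldManage i k) = q i >>= runStep q t . k
    runStep q t (FetchTitle   i k) = t i >>= runStep q t . k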

submitted by jrk-
Categories: Incoming News

HaskellWiki page about SDL

haskell-cafe - Thu, 08/14/2014 - 11:43pm
L.S., Does the problem with SDL on OS X, as mentioned on the HaskellWiki page about SDL[0], still exist? This text is quite old and I would like to remove it. Regards, Henk-Jan van Tuyl [0] https://www.haskell.org/haskellwiki/SDL#Haskell-SDL_with_Mac_OS_X
Categories: Offsite Discussion

maintainer needed: extensible-effects

haskell-cafe - Thu, 08/14/2014 - 7:33pm
Hey cafe, is there anybody interested in taking over maintainership of the extensible-effects package? I'm rather busy and not keeping a good eye on it.
Categories: Offsite Discussion