# News aggregator

### Understanding the State monad

I am just trying to understand Haskell and I am stuck on the State monad. First of all, I am confused about where it is defined (all the other monads I know about, such as IO, Maybe, List, and functions, seem to be easily accessible to me). Secondly, I wish to write a function that reads input (using IO), updates some inner state, and once in a while prints the state. The infinite loop would probably have to be a recursion of main :: IO (). Can anybody please (pretty pretty please) sketch such a function for me, so that I can analyze it? If yes, please use standard Haskell types/functions so that I can trace them in the Prelude and such, rather than custom-made code (where possible). Thank you.

submitted by jd823592 [link] [22 comments]
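For what it's worth, the State monad itself is not in the Prelude: it lives in Control.Monad.State, provided by the transformers/mtl packages. A loop of the kind asked for can be sketched with only base, though; everything below (using the running total of line lengths as the "inner state", printing every third step, stopping at end of input) is an illustrative choice, not a fixed recipe:

```haskell
import Data.IORef
import System.IO (isEOF)

-- One pure state update per line of input; here the state is just a
-- running total of line lengths (an arbitrary, illustrative choice).
step :: Int -> String -> Int
step acc line = acc + length line

-- Read input, update the inner state, and print it once in a while.
-- The "infinite" loop is plain recursion in IO, as the poster guessed;
-- it stops at end of input so the sketch is easy to test.
loop :: IORef Int -> Int -> IO ()
loop ref n = do
  end <- isEOF
  if end
    then return ()
    else do
      line <- getLine
      modifyIORef ref (`step` line)
      s <- readIORef ref
      if n `mod` 3 == 0
        then print s          -- "once in a while prints the state"
        else return ()
      loop ref (n + 1)

main :: IO ()
main = do
  ref <- newIORef 0
  loop ref 1
```

Once the IORef version makes sense, the same structure can be rewritten with `StateT Int IO ()` from Control.Monad.State.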

### Looking for a library for parsing the "aeson" Value

IMO there are ways to improve over the Parser API of the "aeson" library. Are there any alternatives for parsing a Value into Haskell data structures?

submitted by nikita-volkov [link] [7 comments]

### Proposal: "cabal gen-bounds" for easy generation of dependency version bounds

### mightybyte: "cabal gen-bounds": easy generation of dependency version bounds

In my last post I showed how release dates are not a good way of inferring version bounds. The package repository should not make assumptions about what versions you have tested against. You need to tell it. But from what I've seen there are two problems with specifying version bounds:

- Lack of knowledge about how to specify proper bounds
- Unwillingness to take the time to do so

Early in my Haskell days, the first time I wrote a cabal file I distinctly remember getting to the dependencies section and having no idea what to put for the version bounds. So I just ignored them and moved on. The result of that decision is that I can no longer build that app today. I would really like to, but it's just not worth the effort to try.

It wasn't until much later that I learned about the PVP and how to properly set bounds. But even then, there was still an obstacle: it can take some time to add appropriate version bounds to all of a package's dependencies. So even if you know the correct scheme to use, you might not want to take the time to apply it.

Both of these problems are surmountable. And in the spirit of doing that, I would like to propose a "cabal gen-bounds" command. It would check all dependencies to see which ones are missing upper bounds and output correct bounds for them. I have implemented this feature and it is available at https://github.com/mightybyte/cabal/tree/gen-bounds. Here is what it looks like to use this command on the cabal-install package:

```
$ cabal gen-bounds
Resolving dependencies...
The following packages need bounds and here is a suggested starting point.
You can copy and paste this into the build-depends section in your .cabal
file and it should work (with the appropriate removal of commas).

Note that version bounds are a statement that you've successfully built and
tested your package and expect it to work with any of the specified package
versions (PROVIDED that those packages continue to conform with the PVP).
Therefore, the version bounds generated here are the most conservative
based on the versions that you are currently building with. If you know
your package will work with versions outside the ranges generated here,
feel free to widen them.

network >= 2.6.2 && < 2.7,
network-uri >= 2.6.0 && < 2.7,
```

The user can then paste these lines into their build-depends section. They are formatted in a way that facilitates easy editing as the user finds more versions (either newer or older) that the package builds with. This serves both to educate users and to automate the process. I think this removes one of the main frustrations people have about upper bounds and is a step in the right direction of getting more Hackage packages to supply them. Hopefully it will be merged upstream and be available in cabal-install in the future.
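For illustration, after pasting the suggested lines and tidying the commas, the relevant part of the .cabal file might look like this sketch (only the two packages from the example output are shown):

```
build-depends: network     >= 2.6.2 && < 2.7,
               network-uri >= 2.6.0 && < 2.7
```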

### Thiago Negri: Dunning-Kruger effect on effort estimates

The experiment and the poll come first, as I don't want to infect you with my idea before you answer the questions. If you are in the mood for reading a short story and answering a couple of questions, keep reading. In case you are only interested in my ideas, you may skip the first part.

I won't give any discussion of the subject. I'm just throwing my ideas at the internet; be warned.

Part 1. The experiment

You have to estimate the effort needed to complete a particular software development task. You may use any tool you'd like to do it, but you will only get as much information as I give you now. You will use all the technologies that you already know, so you won't have any learning-curve overhead, and you will not encounter any technical difficulty while doing the task.

Our customer is bothered by missing his co-workers' birthdays. He wants to know all co-workers that are celebrating a birthday or have just celebrated one, so he can send a "happy birthday" message first thing in the morning, when he has just turned on his computer. To avoid sending duplicate messages, he doesn't want to see the same person on the list on multiple days.

Your current software system already has all of the company's workers with their birthdates and their relationships, so you can figure out pretty easily who the user's co-workers are and when everyone's birthday is.

Now, stop reading further, take your time and estimate the effort of this task by answering the following poll.

<script charset="utf-8" src="http://static.polldaddy.com/p/9030565.js" type="text/javascript"></script>

<noscript>Estimate your effort</noscript>

Okay, now I'll give you more information about it and ask for your estimate again.

Some religions do not celebrate birthdays, and some people get really mad when receiving a "happy birthday" message. To avoid this, you also need to check whether each user wants to make their birthdate public.

By the way, the customer's company closes on the weekend, so you need to take into account that on Monday you will need to show birthdays that happened over the weekend, not only those of the current day.

This also applies to holidays. Holidays are a bit harder, as they depend on the city of the employee; different cities may have different holidays.

Oh, and don't forget to take into account that the user may have missed a day, so he needs to see everyone that he would have seen on the day that he missed work.

Now, take your time and estimate again.

<script charset="utf-8" src="http://static.polldaddy.com/p/9030566.js" type="text/javascript"></script>

<noscript>Estimate your effort - II</noscript>

Part 2. The Dunning-Kruger effect on estimates

I don't know if the little story above tricked you or not, but that same story tricked me in real life. :)

The Dunning-Kruger effect is described on Wikipedia as:

"[...] a cognitive bias wherein relatively unskilled individuals suffer from illusory superiority, mistakenly assessing their ability to be much higher than is accurate. This bias is attributed to a metacognitive inability of the unskilled to accurately evaluate their own ability level. Conversely, highly skilled individuals may underestimate their relative competence, erroneously assuming that tasks that are easy for them are also easy for others."

I'm seeing that this effect contributes to making the task of estimating effort completely inaccurate by nature, as it always pulls toward a bad outcome. If you know little about a task, you will overestimate your knowledge and consequently underestimate the effort to accomplish it. If you know much, you will underestimate your knowledge and consequently overestimate the effort.

I guess one way to minimize this problem is to remove knowledge up to the point where you have left only the essentials needed to complete the task. Sort of what Taleb calls "via negativa" in his book Antifragile.

What do you think? Does this make any sense to you?

### Kill forked threads in ghci

### LPNMR 2015 - Call for participation

### STABILIZER: Statistically Sound Performance Evaluation

My colleague Mike Rainey described this paper as one of the nicest he's read in a while.

STABILIZER: Statistically Sound Performance Evaluation

Charlie Curtsinger, Emery D. Berger

2013

Researchers and software developers require effective performance evaluation. Researchers must evaluate optimizations or measure overhead. Software developers use automatic performance regression tests to discover when changes improve or degrade performance. The standard methodology is to compare execution times before and after applying changes.

Unfortunately, modern architectural features make this approach unsound. Statistically sound evaluation requires multiple samples to test whether one can or cannot (with high confidence) reject the null hypothesis that results are the same before and after. However, caches and branch predictors make performance dependent on machine-specific parameters and the exact layout of code, stack frames, and heap objects. A single binary constitutes just one sample from the space of program layouts, regardless of the number of runs. Since compiler optimizations and code changes also alter layout, it is currently impossible to distinguish the impact of an optimization from that of its layout effects.

This paper presents STABILIZER, a system that enables the use of the powerful statistical techniques required for sound performance evaluation on modern architectures. STABILIZER forces executions to sample the space of memory configurations by repeatedly re-randomizing layouts of code, stack, and heap objects at runtime. STABILIZER thus makes it possible to control for layout effects. Re-randomization also ensures that layout effects follow a Gaussian distribution, enabling the use of statistical tests like ANOVA. We demonstrate STABILIZER's efficiency (< 7% median overhead) and its effectiveness by evaluating the impact of LLVM’s optimizations on the SPEC CPU2006 benchmark suite. We find that, while -O2 has a significant impact relative to -O1, the performance impact of -O3 over -O2 optimizations is indistinguishable from random noise.

One take-away of the paper is the following validation technique: they verify, empirically, that their randomization technique results in a Gaussian distribution of execution times. This does not guarantee that they found all the sources of measurement noise, but it guarantees that the sources of noise they handled are properly randomized, and that their effects can be reasoned about rigorously using the usual tools of statisticians. Having a Gaussian distribution gives you much more than just "hey, taking the average over these runs makes you resilient to {weird hardware effect blah}"; it lets you compute p-values and, in general, use statistics.
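As a toy illustration of the kind of test that Gaussian-distributed timings enable (this is not code from the paper, and the numbers are made up): two configurations can be compared with a standard two-sample statistic, here Welch's t, written in plain Haskell:

```haskell
import Data.List (genericLength)

mean :: [Double] -> Double
mean xs = sum xs / genericLength xs

-- Unbiased sample variance
variance :: [Double] -> Double
variance xs = sum [(x - m) ^ (2 :: Int) | x <- xs] / (genericLength xs - 1)
  where m = mean xs

-- Welch's t-statistic for two independent samples of execution times;
-- a large |t| suggests the two configurations genuinely differ.
welchT :: [Double] -> [Double] -> Double
welchT a b = (mean a - mean b) / sqrt (variance a / na + variance b / nb)
  where na = genericLength a
        nb = genericLength b

main :: IO ()
main = print (welchT [10.1, 10.3, 9.9, 10.2] [10.8, 11.0, 10.9, 11.1])
```

With many randomized-layout runs per configuration, the t-statistic (or ANOVA across several configurations, as the paper uses) yields the p-values the post refers to.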

### ETAPS 2016 call for papers

### State of the Haskell ecosystem - August 2015

Interesting survey.

Based on a brief look I am not sure I agree with all the conclusions/rankings. But most seem to make sense and the Notable Libraries and examples in each category are helpful.

### ANN: react-flux initial release

I am announcing the initial release of react-flux. It is a GHCJS package for React based on the Flux design.

I spent some effort writing good haddocks, so the haddock documentation is the best place to learn the library. There is also a TODO example application.

It differs significantly from the other two React bindings, react-haskell and ghcjs-react. In particular, the major difference is how events are handled. In the Flux design, the state is moved out of the view, and handlers produce actions which transform the state. Thus there is a one-way flow of data from the store into the view. In contrast, react-haskell and ghcjs-react both have event signals propagating up the React component tree, transforming state at each node. In particular, react-haskell, with its InSig and OutSig, has the signals propagate up the tree, optionally transforming state at each node and changing the type of the signal.
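The one-way flow can be caricatured in a few lines of plain Haskell (this is only a sketch of the Flux idea, not the react-flux API; the Action type and all names here are invented for illustration):

```haskell
-- A toy store: the only way state changes is by dispatching an action.
data Action = Increment | Reset   -- hypothetical actions

newtype Store s = Store { storeState :: s }

-- The store applies a transform to its state; views never mutate it.
dispatch :: (a -> s -> s) -> a -> Store s -> Store s
dispatch transform action (Store s) = Store (transform action s)

counter :: Action -> Int -> Int
counter Increment n = n + 1
counter Reset     _ = 0

-- A view is a pure function of the store's state (one-way data flow).
render :: Store Int -> String
render (Store n) = "count: " ++ show n

main :: IO ()
main = putStrLn (render (dispatch counter Increment (Store 0)))
```

In the signal-based bindings, by contrast, the state transformation would happen at each node the event passes through on its way up the tree.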

I have had success in the past with the Flux design in JavaScript, and wanted to bring it to GHCJS. At first I tried to work with or slightly modify react-haskell, but the design difference is too fundamental. I then tried to at least share code with react-haskell, but there is unfortunately nothing that can be shared. The element creation, class definition, and event handlers are all significantly different due to the difference in how events are handled. Therefore, I made it a separate package.

submitted by wuzzeb [link] [4 comments]

### ANN: ghc-mod-5.3.0.0

### Show Haskell: Python Dependency Graphing

Hello Haskellers,

I've been reading this sub and introductory Haskell materials for a while, and finally decided to try to *actually learn* the language by building something interesting in it.

I work as a Python/Django developer, and one of my frustrations when dealing with legacy code is circular dependencies, which, to my thinking, represent broader architectural problems. For this reason, I'd been thinking about building an application that graphs Python dependencies, so I decided to do it in Haskell in order to combine various experimental activities into one project:

- Graphs in general, and graphs in a functional language
- Parsing of Python source
- File-system stuff, including locating files and directories

It's still a work-in-progress and has some issues, and some of my goals have not yet been realized, but here's what I've got so far: https://github.com/pellagic-puffbomb/haskpy-dependency-graphs.

I found myself accumulating loads of questions, but maybe I'll just post the highlights here:

I tried to use if-then-else here and got the confusing message "ifThenElse not in scope". Isn't if-then-else built-in? (Errantly copying junk from cabal files without thinking about what it is...)
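For what it's worth, that error is the signature of the RebindableSyntax extension (easily picked up by copying extension lists from another project's cabal file): with it enabled, GHC desugars if/then/else into a call to whatever `ifThenElse` is in scope, and none exists by default. A minimal sketch:

```haskell
{-# LANGUAGE RebindableSyntax #-}
import Prelude  -- RebindableSyntax implies NoImplicitPrelude

-- With RebindableSyntax on, if/then/else calls this function,
-- so it must be defined (or imported) by hand.
ifThenElse :: Bool -> a -> a -> a
ifThenElse True  t _ = t
ifThenElse False _ f = f

main :: IO ()
main = putStrLn (if (1 :: Int) < 2 then "yes" else "no")
```

Removing the extension (rather than defining `ifThenElse`) is the usual fix when it was enabled by accident.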

I wrote this chunk and then afterward had the thought that there's an issue of context here that monads may solve, but I couldn't really piece it together. If I made my datatype an instance of Monad, is it possible that I could write this chunk in a simpler way?

I will happily take any other comments you have. Thanks for checking it out.

submitted by erewok [link] [13 comments]

### oddsFrom3 function

### Flycheck (emacs) now supports stack

### What are Haskellers' critiques of PHP?

Not really expecting a variety of opinions (if any) in this one, to be honest.

submitted by zarandysofia [link] [26 comments]

### Mark Jason Dominus: A message to the aliens, part 4/23 (algebra)

Earlier articles: Introduction · Common features · Page 1 (numerals) · Page 2 (arithmetic) · Page 3 (exponents)

This is page 4 of the *Cosmic Call* message. An explanation follows.

Reminder: page 1 explained the ten digits:

0, 1, 2, 3, 4, 5, 6, 7, 8, 9

And the equal sign. Page 2 explained the four basic arithmetic operations and some associated notions:

addition, subtraction, multiplication, division, negation, ellipsis (…), decimal point, indeterminate

This page, headed with the glyph for “mathematics”, describes the solution of simple algebraic equations and defines glyphs for three variables, which we may as well call x, y, and z.

Each equation is introduced by a locution which means “solve for”. This somewhat peculiar “solve” glyph will not appear again until page 23.

For example, the second equation is:

**Solve for:**

The solution, 6, is given over on the right.

After the fourth line, the equations to be solved change from simple numerical equations in one variable to more abstract algebraic relations between the three variables: each asks to solve for one variable in terms of the others, and then states the solution.

The next-to-last line uses a decimal fraction in the exponent. On the previous page, the rational fraction form was used; had the same style been followed, the exponent would have been written as a ratio instead.

Finally, the last line defines y = x³ and then, instead of an algebraic solution, gives a graph of the resulting relation, with axes labeled. The scale on the axes is not the same; the x-coordinate increases from 0 to 20 pixels, but the y-coordinate increases from 0 to 8000 pixels, because 20³ = 8000. If the axes were to the same scale, the curve would go up by 8,000 pixels. Notice that the curve does not peek above the x-axis until fairly far along. The authors could have stated that this was the graph of y = x³, but chose not to.

I also wonder what the aliens will make of the arrows on the axes. I think the authors want to show that our coordinates increase going up and to the left, but this seems like a strange and opaque way to do that. A better choice would have been to use a function with an asymmetric graph.

(After I wrote that, I learned that similar concerns were voiced about the use of a directional arrow in the Pioneer plaque. Wikipedia says: “An article in Scientific American criticized the use of an arrow because arrows are an artifact of hunter-gatherer societies like those on Earth; finders with a different cultural heritage may find the arrow symbol meaningless.”)

The next article will discuss page 5, shown at right. Try to figure it out before then.