# News aggregator

### Cabal config file - default install flags?

### Can the "read" function execute functions?

For example, if I have *f n = n^2*, is there a way I can do *read "f 5"* so that it returns *25*? Is it possible to execute functions through the read function?

EDIT: Or any other function for that matter. I'm just wondering if it's possible.
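A minimal sketch of one answer: Haskell's read only parses data, it cannot execute code. To evaluate a string like "f 5" you have to parse it yourself and dispatch on a table of known functions. The names evalCall and table below are hypothetical, invented for illustration:

```haskell
-- Hypothetical mini-evaluator: parse "name arg" ourselves, then look the
-- name up in a table of known functions. `read` is only used to parse the
-- numeric argument, never to run code.
evalCall :: String -> Maybe Integer
evalCall s = case words s of
  [name, arg] -> ($ read arg) <$> lookup name table
  _           -> Nothing
  where
    table :: [(String, Integer -> Integer)]
    table = [ ("f", \n -> n * n) ]  -- f n = n^2, as in the question
```

With this, evalCall "f 5" gives Just 25, and an unknown function name gives Nothing. Anything more general (evaluating arbitrary expressions) needs a real interpreter such as the hint package.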

submitted by k3DW

### Whatever happened to the `layers` package?

You'll find an old discussion here: http://www.reddit.com/r/haskell/comments/1xnmiv/monad_layers_an_alternative_to_transformers/

It seems quite promising, but the hackage package is not up-to-date with the GitHub source. Is anyone actively using this?

Some context: "Monad layers, an alternative to transformers: The author claims [layers package's] superiority over transformers, but given that in my whole Haskell experience I have never heard anything about that package I am interested why, also whether the author's claims are true and in general opinions from the community."

submitted by eacameron

### haskell-game/sdl2 · GitHub

### neurocyte/ghc-android · GitHub

### Roman Cheplyaka: Spiral similarity solves an IMO problem

While recovering from a knee surgery, I entertained myself by solving a geometry problem from the last International Mathematical Olympiad. My solution, shown below, is an example of using plane transformations (spiral similarity, in this case) to prove geometric statements.

**Problem** *(IMO 2014, Problem 4)*

Points \(P\) and \(Q\) lie on side \(BC\) of acute-angled triangle \(ABC\) such that \(\angle PAB=\angle BCA\) and \(\angle CAQ = \angle ABC\). Points \(M\) and \(N\) lie on lines \(AP\) and \(AQ\), respectively, such that \(P\) is the midpoint of \(AM\) and \(Q\) is the midpoint of \(AN\). Prove that the lines \(BM\) and \(CN\) intersect on the circumcircle of triangle \(ABC\).

**Solution.** Let \(\angle BAC = \alpha\).

\[\angle APB = \pi - \angle PAB - \angle PBA = \pi - \angle ACB - \angle CBA = \alpha\]

Let \(B_1\) and \(C_1\) be such points that \(B\) and \(C\) are midpoints of \(AB_1\) and \(AC_1\), respectively.

Consider a spiral similarity \(h\) such that \(h(B_1)=A\) and \(h(B)=C\) (it necessarily exists).

Now we shall prove that \(h(M)=N\), i.e. that \(h\) transforms the green \(\triangle B_1BM\) into the magenta \(\triangle ACN\).

Being a spiral similarity, \(h\) rotates all lines by the same angle. It maps \(B_1B\) to \(AC\), therefore that angle equals \(\angle(B_1B, AC)=\pi-\alpha\). (We need to be careful to measure all rotations in the same direction; on my drawing it is clockwise.)

\(h(A)=C_1\), since \(h\) preserves length ratios. So \(h(AM)\) (where \(AM\) denotes the line, not the segment) is a line that passes through \(h(A)=C_1\). It also needs to be parallel to \(BC\), because \(\angle (AM,BC)=\pi-\alpha\) is the rotation angle of \(h\). \(C_1B_1\) is the unique such line (\(C_1B_1 \parallel BC\) by the midline theorem).

Since \(h(AM)=C_1B_1\) and \(h(MB_1)=NA\), \[h(M)=h(AM\cap MB_1)=h(AM)\cap h(MB_1)=C_1B_1\cap NA=N.\]

Let \(Z\) be the intersection point of lines \(BM\) and \(CN\). Now that we know that \(h(BM)=CN\), we can deduce that \(\angle BZC=\angle(BM,CN)=\pi-\alpha\) (the rotation angle). And because \(\angle BAC+\angle BZC=\pi\), \(Z\) lies on the circumcircle of \(ABC\).
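The final step uses the standard converse of the inscribed angle theorem, which can be stated as:

```latex
% Criterion used in the final step: if B and C are fixed and the points A, Z
% lie on opposite sides of line BC, then
\[
\angle BAC + \angle BZC = \pi
\quad\Longleftrightarrow\quad
A,\, B,\, Z,\, C \text{ lie on one circle.}
\]
```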

### What are the current best practices for installing and maintaining Haskell development environment, with multiple versions of GHC, installed global packages, etc.

I know this question is probably here quite often, but after not having worked in Haskell for around 8 months, I've found that a lot of things have changed, and my old way of working doesn't work anymore.

I'm splitting this into multiple questions, so that it is easier to answer them.

- It seems that Stackage LTS is the way to go if one wants to avoid cabal hell while installing global packages (like ghc-mod, yesod-bin, hlint, etc.)?
- What version of cabal-install is the go-to one? Since cabal itself constantly prompts for new updates, I ended up installing 1.22.x, which turns out to be incompatible with ghc-mod on GHC 7.8 (https://github.com/kazu-yamamoto/ghc-mod/issues/429), and now I'm stuck, not knowing whether I should upgrade GHC, downgrade cabal-install, or choose a completely different approach.
- Is there a point in using sandboxes when one uses Stackage? I used to use sandboxes for everything, but if Stackage LTS avoids the versioning issues, are they still needed?
- What is the correct way of installing global packages, such as (ghc-mod, hlint, ..., or even lens)? Should I rm -rf ~/.ghc ~/.cabal, install a specific version of cabal, download stackage config and then proceed with installing everything else? Or is there a better way?
- What is the best way of installing GHC on OS X? There's Haskell Platform, Homebrew GHC/Cabal formulas, and Haskell for OS X. I've been told that Homebrew formulas were broken a few months ago, and that Haskell Platform is not good if you plan on doing anything but the most basic things. And now that I'm looking at versions of Cabal, it seems that Haskell for OS X uses cabal-install 1.20.0.3, while Stackage LTS locks at 1.18.0.6 ... wouldn't this cause problems? One can also download the binary build of GHC and install it manually, which seemed to work quite well in the past, but it wasn't so easy to uninstall.
- Similar to the previous question, what is a good way of maintaining multiple GHC versions? There are versions of packages that only work under GHC 7.10, and others that don't, so it seems quite important to be able to switch with ease, without having to reinstall everything.

I'm sorry if this looks like a rant, but I just can't seem to find this kind of information written up in one place. There are snippets and guides and tutorials, and everyone is recommending something else (since there are multiple solutions), but also every time I ask on IRC or somewhere about an issue like this, it ends up being a problem in my setup and I have to reinstall a bunch of packages to get everything working again.

**TL;DR: What is the best and most versatile way to install GHC and global packages while avoiding reinstalling every single thing every time in a sandbox?** It's not that I don't like sandboxes, but my fastest quad-core machine still takes over half an hour to build some packages, which just kills the "let's quickly start up a new project and play around" kind of work.

**edit:** It'd be great if we had a list of the install options, with pros&cons and recommendations when to use which one, such as "don't use Haskell Platform if you want to use bleeding edge packages, because of X", etc.


### alevy/postgresql-orm


### Lenses: Real World

### Side conditions on Data.Profunctor.Choice?

### JP Moresmau: Searching for food using a LambdaNet neural network

The problem is simple. In a rectangular world, there is food in one place. The food "smells", so each position in the world has a smell associated with it; the stronger the smell, the closer the food. Can we have a neural network that can navigate to the food?

A few definitions:

```haskell
-- | Direction to go to
data Direction = TopLeft | Left | BottomLeft | Top | Center | Bottom | TopRight | Right | BottomRight
  deriving (Show,Read,Eq,Ord,Bounded,Enum)

-- | Strength of the smell
type Smell = Int

-- | Input information
type Input = [(Direction,Smell)]

-- | Position in world
type Position = (Int,Int)

-- | Size of the world
type Size = (Int,Int)

-- | Maximum number of steps to take
type Max = Int

-- | Number of directions
dirLength :: Int
dirLength = 1 + fromEnum (maxBound :: Direction)

-- | The world
data World = World
  { wSize   :: Size                  -- ^ size
  , wSmell  :: Smell                 -- ^ Smell of the food position
  , wSmells :: DM.Map Position Smell -- ^ All smell strengths by position
  } deriving (Show,Read,Eq,Ord)

-- | Function deciding in which direction to move
type StepFunction = Position -> Input -> Direction
```

Fundamental is the concept of Direction, since we want to move. From a given position in the world, we get nine directions and their associated smells (staying in the same place, Center, counts as one of them). The function that decides what to do in a given position, given the smells of all the neighbouring positions, is called StepFunction.

The algorithm is easy to write for a human brain:

```haskell
-- | The base algorithm: just go toward the highest smell
baseAlg :: StepFunction
baseAlg _ = fst . maximumBy (comparing snd)
```

Note that we ignore the current position; we only work with the input structure.
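As a quick sanity check with made-up smell values (the Input below is invented for illustration, and the definitions are repeated so the snippet stands alone), baseAlg simply picks the direction with the strongest smell:

```haskell
import Prelude hiding (Left, Right)  -- Direction reuses Left/Right names
import Data.List (maximumBy)
import Data.Ord (comparing)

data Direction = TopLeft | Left | BottomLeft | Top | Center | Bottom | TopRight | Right | BottomRight
  deriving (Show, Read, Eq, Ord, Bounded, Enum)

type Smell = Int
type Input = [(Direction, Smell)]
type Position = (Int, Int)
type StepFunction = Position -> Input -> Direction

baseAlg :: StepFunction
baseAlg _ = fst . maximumBy (comparing snd)

-- With these invented smells the strongest neighbour wins:
-- baseAlg (0,0) [(Top,3),(Center,2),(Bottom,7),(Left,1)] == Bottom
```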

On top of that, we need functions to build the world with the proper smell indicators, run the algorithm until we find the food, etc. All this code can be found in the GitHub project, but it is not really critical for our understanding of neural networks. One function of interest runs one step of the algorithm, exposing the intermediate structures generated:

```haskell
-- | Perform one step and return the information generated: direction/smell input, direction output
algStepExplain :: World -> StepFunction -> Position -> (Position,([(Direction,Smell)],Direction))
```

We get the new position back, and the second element of the tuple contains the input and the output of the StepFunction.

What we want to do is train a neural network, which should be easy since we have an algorithm we know will work well to find the best position to move to, and then use that network as an implementation of StepFunction.

The hardest part of neural network programming is designing the input and output structures so that they adequately represent the information about your problem in a format the network can deal with. Here we have a fixed input size: the smells of the 9 neighbouring positions. The StepFunction returns a Direction, and Direction is an enum of nine values, so the output of the network can also be 9 values, the highest of which indicates the direction chosen by the network.
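This output scheme is exactly a one-hot encoding. A small self-contained sketch (encode and decode are illustrative names, not part of the post's code, and plain lists stand in for LambdaNet's Vector):

```haskell
import Prelude hiding (Left, Right)  -- Direction reuses Left/Right names

data Direction = TopLeft | Left | BottomLeft | Top | Center | Bottom | TopRight | Right | BottomRight
  deriving (Show, Eq, Ord, Bounded, Enum)

-- One-hot encoding: the chosen direction's slot gets 1, the other eight get 0.
encode :: Direction -> [Float]
encode d = [ if fromEnum d == i then 1 else 0 | i <- [0 .. 8] ]

-- Decoding: the index of the highest output picks the Direction.
decode :: [Float] -> Direction
decode xs = toEnum $ snd $ maximum $ zip xs [0 ..]
```

Encoding then decoding any Direction round-trips back to itself, which is what lets us read the network's 9 outputs as a single chosen direction.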

The networks in LambdaNet require Vectors as their input and output data, so let's format the inputs:

```haskell
-- | Format the inputs suitably for the network
formatInputs :: World -> [(Direction,Smell)] -> Vector Float
formatInputs w = fromList . map (\i -> fromIntegral (snd i) / fromIntegral (wSmell w))
```

So an input of 1 means we're on the food itself, and the input value decreases the further we are from the food, while staying between 0 and 1.

If we have a network, the implementation of StepFunction is straightforward:

```haskell
-- | Use the network to give the answer
neuralAlg :: World -> Network Float -> StepFunction
neuralAlg w n _ is = toEnum $ maxIndex $ predict (formatInputs w is) n
```

We format the input, run predict, retrieve the index of the maximum value in the output vector, and use that as the index in the Direction enum. We just need a trained network!

To get one, we generate the training data from a given world: we list all possible positions in the world, calculate the corresponding inputs, and run the basic algorithm on each input to get the optimal answer. For the resulting direction we set the output value to 1, and to 0 for all the others:

```haskell
-- | Training data: for each position in the world, use the base algorithm to get the training answer
trainData :: World -> [(Vector Float, Vector Float)]
trainData w = map onePos $ allPositions w
  where
    onePos p =
      let (_,(is,dir)) = algStepExplain w baseAlg p
          os = map (\(d,_) -> if dir == d then 1 else 0) is
      in (formatInputs w is, fromList os)
```

From here, we unimaginatively reuse the LambdaNet tutorial code to build a network...

```haskell
-- | Create the network
buildNetwork :: RandomGen g => g -> Network Float
buildNetwork g = createNetwork normals g $ replicate 3 $ LayerDefinition sigmoidNeuron dirLength connectFully
```

And train it:

```haskell
-- | Train a network on several given worlds
train :: Network Float -> [World] -> Network Float
train n ws =
  let t   = BackpropTrainer (3 :: Float) quadraticCost quadraticCost'
      dat = concatMap trainData ws
  in trainUntilErrorLessThan n t online dat 0.01
```

What is critical here is that we train the network on several different worlds. I tried training on only one world; the resulting network performed well on worlds of the same size or smaller, but not on bigger worlds, because it was overfitted to the actual smell values. Training on even just two quite different worlds brought big improvements in the intelligence of the network, at the cost of longer learning time.

Once the network is trained, you can run it on several different worlds and see how it can find the food. There is a simple visualization module that allows you to see clearly the moves, for example:

Iteration 1

```
##########
#........#
#........#
#........#
#...X....#
#........#
#........#
#........#
#.......@#
#........#
#........#
##########
```

(X being the food, @ the current position)

Iteration 3

```
##########
#........#
#........#
#........#
#...X....#
#........#
#........#
#......@.#
#........#
#........#
#........#
##########
```

Iteration 6

```
##########
#........#
#........#
#........#
#...@....#
#........#
#........#
#........#
#........#
#........#
#........#
##########
```

Yummy!

If you're interested, the full source code, with tasty unit tests, is on GitHub.

This is of course very basic, and it only begs to be enhanced with more complicated worlds (maybe with walls, several sources of food, places with no smell at all, etc.). What do you do when you don't know the best algorithm yourself? Maybe I'll come back later to find out!

### [ANN] dtw - Dynamic Time Warping

I am pleased to announce the first public release of my dynamic time warping (DTW) implementation: https://github.com/fhaust/dtw

DTW is used in several fields to measure the similarity of time series. Both *standard* and *fast* (approximative) DTW algorithms are implemented.
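For readers unfamiliar with the algorithm, here is a minimal sketch of standard DTW written from the textbook dynamic-programming definition. This is purely illustrative; dtwNaive is an invented name and NOT the package's actual API:

```haskell
import Data.Array

-- Naive O(n*m) dynamic time warping: minimal accumulated cost of a
-- monotone alignment between two series, given a local cost function.
dtwNaive :: (a -> a -> Double) -> [a] -> [a] -> Double
dtwNaive cost xs ys = table ! (n, m)
  where
    n  = length xs
    m  = length ys
    xa = listArray (1, n) xs
    ya = listArray (1, m) ys
    -- table!(i,j): cheapest alignment of the first i xs with the first j ys
    -- (lazily self-referential array, so each cell is computed once)
    table = listArray ((0, 0), (n, m))
              [ cell i j | i <- [0 .. n], j <- [0 .. m] ]
    cell 0 0 = 0
    cell _ 0 = 1 / 0  -- infinity: a non-empty prefix cannot match an empty one
    cell 0 _ = 1 / 0
    cell i j = cost (xa ! i) (ya ! j)
             + minimum [ table ! (i - 1, j)       -- stretch ys
                       , table ! (i, j - 1)       -- stretch xs
                       , table ! (i - 1, j - 1) ] -- advance both
```

For example, dtwNaive (\a b -> abs (a - b)) [1,2] [1,2,2,2] evaluates to 0, because DTW is allowed to stretch the shorter series; the fast approximative variant restricts the search to a narrow band around the diagonal.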

I am publishing this as version 0.9, with the plan to bump it up to 1.0 once I get some feedback here. So please have a look at the code and tell me if there are any obvious flaws, glaring no-gos, or ignored best practices.

As this is a fundamental algorithm I am especially interested in problems regarding long term maintainability.

submitted by goliatskipson