News aggregator

Program "Three-of-a-crime" in Haskell

Haskell on Reddit - Sat, 03/29/2014 - 6:13pm

Here is the problem. Three-of-a-crime is a simple logic game for up to 3 players. There are 7 different criminals. The computer randomly chooses three of these (the "perpetrators"), but doesn't tell the players which are chosen. The computer then puts down three random criminals; 0, 1, or 2 of these may be actual perpetrators. The computer also tells the players how many (but not which) of the three criminals are perpetrators. Each player may either guess which three criminals are the actual perpetrators or pass. If a player guesses wrong, she is out of the game and the other players continue. If no player chooses to guess, the computer puts down another three randomly chosen criminals (again, 0, 1, or 2 of which may be actual perpetrators) and tells the players how many (but not which) of these are actual perpetrators. Players can again use logic to deduce the three actual perpetrators and may guess. Play continues until some player guesses correctly or until all players have guessed incorrectly.

Here is what I have done till now

import Data.List
import System.Random

diff_select :: Int -> IO [Char]
diff_select n = diff_select' n ['a','b','c','d','e','f','g']

diff_select' :: Int -> [a] -> IO [a]
diff_select' 0 _  = return []
diff_select' _ [] = error "too few elements to choose from"
diff_select' n xs = do
  r <- randomRIO (0, length xs - 1)
  let remaining = take r xs ++ drop (r + 1) xs
  rest <- diff_select' (n - 1) remaining
  return (xs !! r : rest)

This will generate 3 random elements from the elements ['a'..'g']

Can anyone help me from there on?
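From there, the deduction half of the game can be written as a pure function, before worrying about the game loop: keep only the 3-criminal combinations consistent with every clue seen so far. A minimal sketch (the function and type names are mine, not part of the problem statement):

```haskell
import Data.List (intersect, subsequences)

-- A clue: the three criminals the computer put down, and how many of
-- them are actual perpetrators.
type Clue = (String, Int)

-- All 3-element combinations of the 7 criminals consistent with the clues.
candidates :: [Clue] -> [String]
candidates clues =
  [ c
  | c <- subsequences ['a' .. 'g']
  , length c == 3
  , all (\(shown, n) -> length (c `intersect` shown) == n) clues
  ]
```

A player can safely guess exactly when candidates returns a single combination.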

submitted by jitendra8911
Categories: Incoming News

Christopher Done: One step forward, two steps back

Planet Haskell - Sat, 03/29/2014 - 6:00pm

The issue is that programming languages don’t go forward, they move sideways or diagonally, or sometimes backwards.

A new car comes out, and it has some cool feature: hey, it has road surface detection and changes the steering accordingly! But it should also come with all the old stuff that you've come to expect: comfy seats, seatbelts, air conditioner, heated windows, wipers, proximity detection, power steering, cruise control, etc.

With new programming languages, what you tend to get is a chassis, engine and steering wheel, and the road surface detection.

Here is a list of cool ideas that have been discovered and implemented in programming languages, but which do not, in their entirety, make up any existing language:

The moral is: if you're inventing a new general purpose programming language and you have some clue that it's going to be adopted, I implore you to thoroughly explore all of the features above within the existing languages that do them well, and think about adding them to your language.

As a follow-up post I might make a matrix of the top, say, 30, general purpose programming languages and all the features that they tick off.

  1. Also known as quotation, quasiquotes, macros, templating, mixins, etc.

  2. So, numbers, strings, vectors, patterns, etc.

  3. Preferably static. Also known as implicit parameters, contexts, etc.

  4. Also known as “hot-swapping”, “live update”, “plugins”, etc.

Categories: Offsite Blogs

Parsing with HXT

Haskell on Reddit - Sat, 03/29/2014 - 3:55pm

I originally posted this on /r/haskellquestions, but read that asking questions of this sort was fine here as well...

I'm trying to accomplish what I would imagine is a trivial task, yet it has given me a lot of trouble.

The problem is, quite simply:

Given an HTML table row,

<tr>
  <td>john</td>
  <td>25 Mar 2013</td>
  <td>123</td>
</tr>

and a datatype,

data Field = IntF Int | DateF UTCTime | StrF String

how would I parse it to the following?

[StrF "john", DateF 2014-03-26 00:00:00 UTC, IntF 123]

My problem is that HXT seems to operate on groups of nodes, but for this problem I think I need to be able to consider each node separately, taking into account its position.

I've explored a lot of potential solutions, but haven't come upon one that is satisfactory.

Here is some more context:

Is the problem in my approach, or in my understanding of Arrows / Hxt?
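For what it's worth, the positional part doesn't have to live inside the arrow pipeline: once HXT has delivered the three cell strings (however they are extracted), the per-column parsing is a plain zipWith. A sketch with hedged details (the date format string and the fall-back-to-StrF behaviour are my guesses, not from the post):

```haskell
import Data.Time (UTCTime, defaultTimeLocale, parseTimeM)

data Field = IntF Int | DateF UTCTime | StrF String
  deriving (Show, Eq)

-- One parser per column position.
cellParsers :: [String -> Field]
cellParsers =
  [ StrF
  , \s -> maybe (StrF s) DateF (parseTimeM True defaultTimeLocale "%e %b %Y" s)
  , IntF . read
  ]

-- Apply the i-th parser to the i-th cell text.
parseRow :: [String] -> [Field]
parseRow = zipWith ($) cellParsers
```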


submitted by matchi
Categories: Incoming News

What's the Arrow operation in Bananas, Lenses, Envelopes, and Barbed Wire

Haskell on Reddit - Sat, 03/29/2014 - 1:02pm

On page 7 of Functional Programming with Bananas, Lenses, Envelopes and Barbed Wire (pdf), the authors define an "arrow" operation:

Arrow: The operation --> that forms the function space D --> D' of continuous functions from D to D' has as its action on functions the 'wrapping' function:

(f --> g) h = g . h . f

While I understand the right-hand side of the equation and see how it serves as a "wrapping" of h by f and g, I don't understand what that has to do with a "function space of continuous functions".

The section also says that:

Closely related to the --> are the combinators:
curry f x y = f (x,y)
uncurry f (x, y) = f x y
eval (f, x) = f x

But likewise, I don't see how that's related to "wrapping".

Could someone please explain what's going on here? I feel like I'm missing an important categorical concept or construction, especially because this definition immediately follows the definitions for sums and products.
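It may help to write the operation down in Haskell. D --> D' is the object "functions from D to D'", and the definition gives the action of --> on morphisms: given f :: a' -> a and g :: b -> b', it turns an h :: a -> b into a function a' -> b'. That is the sense in which it "wraps" h, and it is what the Profunctor vocabulary calls dimap, specialised to functions:

```haskell
-- The paper's definition, verbatim: (f --> g) h = g . h . f.
-- Note the contravariance: f points *into* the domain of h.
(-->) :: (a' -> a) -> (b -> b') -> (a -> b) -> (a' -> b')
(f --> g) h = g . h . f
```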

submitted by ulularem
Categories: Incoming News

ERDI Gergo: My first computer

Planet Haskell - Sat, 03/29/2014 - 7:06am

tl;dr: I've built a computer on a Xilinx FPGA using Kansas Lava, based on a virtual machine design from the mid seventies.

I would be lying if I said I always wanted to build my own computer. For a very long time, hardware didn't tickle my curiosity at all; and even today, I prefer staying away from dangling wires and soldering irons. I like my computing platforms to Just Work™, and hardware problems are just a huge hassle. But then in 2010 some coworkers of mine started getting into electronics, and I learned from them just enough to start hooking some ICs up on a breadboard; it seemed like a very fun diversion from all the high-level, abstract, softwarey stuff. In some sense, it filled the same void in me that assembly programming probably would have. But even back in the day, on the Commodore 64, I was looking for more BASIC extensions instead of going downwards to assembly / machine code.

One thing led to another and I was creating a Brainfuck CPU on a Papilio FPGA. It was a very satisfying experience, plus, I got to learn about a completely new world (that of digital hardware design and hardware description languages). So ever since then, I had a voice in the back of my head saying I should go all in, and implement a proper full computer, with I/O and all. But jumping straight into replicating a real computer seemed like trying to run before I could walk.

I can't remember now how I bumped into the CHIP-8 platform, but when I read about it in detail, I realized this was something I, a self-taught HDL guy, could realistically implement. To recap, it's a virtual machine spec from the mid seventies intended to be implemented on home computers of the time. It is super low performance, meaning I could implement everything the most naïve way possible and still get something workable out of it: graphics is 64×32 black & white, RAM is short of 4 KBytes, CPU speed is not really part of the spec, and the only timers provided run at 60 Hz.

I can do this!

Setting the scene

The FPGA board that I targeted is a Papilio One 500K, which has 500K... somethings called "system gates", and about 40KByte of SRAM. The Arcade MegaWing provides a D-sub 15 VGA connector and two PS/2 ports, among some other stuff that I am not using for this project.

The home computer targeted by the original CHIP-8 spec only had a four-by-four keypad, so I am not 100% happy about using a PS/2 keyboard for the input. Maybe a later version could use a bespoke keypad connected to the unused pins of the LogicStart MegaWing. However, by using a PS/2 keyboard, I didn't have to worry about the hardware side of things and could just implement a decoder for the PS/2 protocol.

In terms of software, there are several games available for the CHIP-8. Maybe even dozens! But just for fun, after finishing the machine itself, I ended up writing my own game as well: a crappy port of the currently trending 2048 game.

Understanding the CHIP-8

I would say the CPU itself is a RISC architecture, in that it has 16 registers and not a whole lot of operations. But I'm not sure it would be called RISC back in its day.

The aforementioned 64×32 one-bit graphics is manipulated via a sprite interface: there's a special CPU opcode for xor-blitting an 8-pixel-wide, variable-height sprite onto any part of the screen. I went with the obvious way of implementing it as a 2048-bit frame buffer.
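In the byte-aligned case (the real opcode must also handle x positions that straddle two bytes), one sprite row combines with one framebuffer byte like this; the names are mine, not the project's:

```haskell
import Data.Bits (xor, (.&.))
import Data.Word (Word8)

-- Xor one 8-pixel sprite row into a byte-aligned framebuffer byte.
-- The Bool is CHIP-8's collision flag: True iff the blit turned off
-- at least one pixel that was already set.
blitRow :: Word8 -> Word8 -> (Word8, Bool)
blitRow screen sprite = (screen `xor` sprite, screen .&. sprite /= 0)
```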

To validate my understanding, I first created a software emulator in Haskell. That unearthed a ton of edge cases and situations where I was not reading the spec with enough attention. Once I had the emulator working well enough to play the sample games I found, I enumerated the modules I'll need to implement.

A TODO list

Since I've already done the Brainfuck CPU, I didn't foresee too much difficulty in implementing the CPU proper (oh boy was I wrong). However, the world of peripherals was a new one to me.

I've never looked into VGA signal timings in detail before, and for some reason I just assumed that it's going to be just as complicated as PAL, about which I knew just enough to know that you have to generate all kinds of elaborate sync patterns. So actually reading the VGA spec was a relief, and I quickly came up with a scheme where my CHIP-8 computer would be running at 50 MHz, so that it can easily implement the 25 MHz pixel clock needed for 640×480@60 Hz. I went with this mode because it has several advantages:

  • It refreshes the screen at 60 Hz, so I can synchronize the CHIP-8 timers to it
  • It has square pixel aspect ratio
  • It is a "default" mode, so I can be confident that if I generate the correct signal, modern LCDs will be able to display it. In fact, for testing, I later got a 7" CCTV LCD with a VGA input on the cheap, so that I didn't have to make room for the 21" TFT I had lying around. The CCTV monitor supports three VGA video modes, and 640×480@60 Hz is one of them.
  • By dividing both the horizontal and vertical resolution by 8, I can get 80×60 which doesn't leave too much unused border around 64×32. I could use all horizontal screen real estate if I divided by 10 instead, to get 64×48, but I have no idea how I could divide my horizontal/vertical position counter by 10; whereas dividing by 8 is a simple matter of shifting to the right by 3 bits.
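The divide-by-8 trick from the last point, written out in plain Haskell rather than HDL terms:

```haskell
import Data.Bits (shiftR)

-- Map a 640x480 VGA pixel position onto the 80x60 grid of 8x8 blocks:
-- dividing each counter by 8 is just a right shift by 3 bits.
toBlock :: (Int, Int) -> (Int, Int)
toBlock (x, y) = (x `shiftR` 3, y `shiftR` 3)
```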

The Papilio One itself has a clock of 32 MHz, and I first hoped that 32 MHz and 25 MHz were close enough that I could just generate a signal using the 32 MHz clock as the pixel clock. Turns out that's not quite how signal timings work.

Luckily, I've found out that the Papilio One also has something called a DCM which can divide and multiply the clock signal. I was able to go to 25 or 50 MHz easily with it. It's a bit of a black box component: I had to run a wizard in the Xilinx IDE to get a small binary file describing my DCM parameters, and then I integrated it into my build process by running another Xilinx program on it which spits out some magic bitstream.

The PS/2 protocol is a simple serial protocol with a slow (~10 kHz) clock and one parity bit per 8 data bits. Decoding it into a stream of bytes was a straightforward thing once I hunted down an actual PS/2 keyboard, since it turned out those USB-to-PS/2 dongles don't really work with regular USB keyboards; rather, the keyboard itself has to be able to "speak" PS/2 over the dongle. So I ended up getting a proper PS/2 keyboard from Mustafa Centre (thanks to Viki's sister Dori for picking it up for me); god only knows why they still had one in stock.
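The frame format is small enough to sketch as a pure function (the start/parity/stop layout and LSB-first bit order come from the PS/2 spec; clock-edge sampling is omitted and the function name is mine):

```haskell
import Data.Word (Word8)

-- Decode one 11-bit PS/2 frame: start bit low, 8 data bits LSB first,
-- odd parity over the data bits, stop bit high.
decodeFrame :: [Bool] -> Maybe Word8
decodeFrame (start : rest)
  | not start
  , (dataBits, [parity, stop]) <- splitAt 8 rest
  , stop
  , odd (length (filter id (parity : dataBits)))
  = Just (sum [2 ^ i | (i, True) <- zip [0 :: Int ..] dataBits])
decodeFrame _ = Nothing
```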


The Xilinx tool suite seems to be aimed at being used from the IDE. This has several drawbacks: version control is complicated because generated artifacts litter the source tree; the editor in the IDE is of course crap compared to Emacs; and most importantly, I didn't intend to manually write either VHDL or Verilog. As I've mentioned before in my post about the Brainfuck CPU, I've found both of these hardware description languages to be lacking in the same way mainstream imperative programming languages are: the tools you, the developer, are given to introduce your own abstractions are very poor. So I planned to use Kansas Lava, an HDL embedded in Haskell.

Now, the first thing to note about Kansas Lava is that, as nice as it is, the software package itself is bit-rotten. The latest version available on Hackage cannot even be compiled with GHC 7.6. While the change to fix that is trivial, after that, you can easily bump into bugs in the Lava compiler itself. But more on that later. Later versions available from GitHub are not even self-consistent between the various dependencies. I've put my patched-up version of Kansas Lava and some of the packages it depends on on GitHub, and I'm trying to get Andy Gill to allow me to upload the bundle as an update to Hackage. Don says I should maybe say fuck it and create my own Singapore Lava fork, just to stay with the Lava tradition of a twisty little maze of incompatible forks, all alike.

However, when it all works, it is amazing. I was able to extend the library of Papilio-specific Lava code that I created for the Brainfuck project with reusable modules for the VGA signal generator and the PS/2 decoder, in such a way that they should be very easy to re-use in any other future projects. And it's all type-safe, so for example, the Papilio Arcade MegaWing VGA port is statically enforced to be 4+4+4 bits, whereas the LogicStart MegaWing's is 3+3+2.

But there was one bug that left a bitter aftertaste in my mouth. Once I had both the CPU and the VGA parts working and I started running some test programs on the thing, I noticed that the framebuffer blitting was exhibiting "or" behaviour instead of "xor". Running the same code in the Lava simulator and dumping the contents of the framebuffer showed the correct "xor" behaviour. After weeks of frustration and a complete rework of the communication system between the CPU, the VGA signal generator and the frame buffer to add memory read bussing, I finally took a look at the Lava compiler's source code, only to find that it simulates the xor2 primitive as xor but compiles it to or. How do you not notice this bug? Has the Kansas Lava suite been used by anyone for anything at all in the history of ever???

End result

The final result, after much trial and error, looking at the Lava simulator's output, and poring over the code, is available here. I'm reasonably happy about the code base, except for the implementation of the CPU itself, which feels a bit spaghetti to me, especially around the parts where it's waiting for results to come back from main RAM or the video framebuffer.

Below are some photographs and videos of it running two games: the memory puzzle game Hidden and the 2048 clone mentioned above. Unfortunately, the Tetris game from the screenshot above seems to have a bug in its input handling, in that it samples the keyboard at an unthrottled frequency; thus, even the shortest keypress sends the falling piece to the edge of the wall. I'll eventually need to disassemble the game and fix this.

The VGA signal generator is not as neat as it could be, because it doesn't do pre-fetching. This means by the time the current CHIP-8 pixel's value is read from the framebuffer, it is already too late, the first of the 8 real pixels for the given virtual CHIP-8 pixel should already have been drawn. This results in some ghosting. But since I know what's causing it, I will hopefully be able to fix this in some next version. But right now I'm too happy to have the whole thing just working.

[embedded videos]

Categories: Offsite Blogs

Inline makes program slow?

haskell-cafe - Sat, 03/29/2014 - 2:43am
Hi cafe,

Inline can sometimes cause problems, at least in the following case:

import qualified Data.Vector as V
import Data.List

f ∷ String → (String → Int → Char) → [Int] → String
f str g idx = map (g str) idx

h ∷ String → Int → Char
{-# INLINE h #-}
h s i = (V.fromList $ sort s) V.! i

slow ∷ String → [Int] → String
slow str = f str h

fast ∷ String → [Int] → String
fast str = map ((V.fromList $ sort str) V.!)

main = do
  let testString = replicate 100000 'a'
      iterations = replicate 1000 100
  putStrLn $ fast testString iterations
  putStrLn $ slow testString iterations

Without inline (remove the INLINE pragma), "slow" would be much faster. I suspect this is because GHC can build a "persistent structure" for the partially applied function. After inline, each call of "g" will try to build a new vector. How can I tell GHC not to inline some specific functions? Or are there other ways to resolve this issue?
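To the closing question: GHC's NOINLINE pragma does exactly that. A stand-in sketch (the function here is mine, not the one from the mail):

```haskell
import Data.List (sort)

-- A NOINLINE pragma tells GHC never to inline this particular function.
expensive :: String -> Int -> Char
{-# NOINLINE expensive #-}
expensive s i = sort s !! i

main :: IO ()
main = putStrLn (map (expensive "haskell") [0 .. 2])
```

Whether that alone recovers the sharing of the sorted vector is a separate question, but it is the answer to "how do I stop GHC inlining a specific function".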
Categories: Offsite Discussion

I am interested in how the Haskell runtime manages memory and how it takes advantage of laziness and purity.

Haskell on Reddit - Fri, 03/28/2014 - 6:54pm

Can you provide me with relevant links [and maybe explanations] please ?

submitted by nawfel_bgh
Categories: Incoming News

[Question] Do Typeclasses Make Sense Here? (free-game)

Haskell on Reddit - Fri, 03/28/2014 - 6:09pm

I've recently started porting my code from gloss to free-game. While I'm generally happy with some things (the amount of documentation, screen vs. cartesian coordinates, and richer text handling), I've been struggling with the library author's choice of typeclasses, rather than ADTs and functions, to represent a lot of abstractions.

In particular, the Affine typeclass. Where I'd have expected things like circle, polygon, etc. to be constructor functions that produced values of some ADT, they are typeclass methods which (I think) produce them.

As a solid example, I wanted to create a rectangle function. I'd have expected to write something like:

-- pretending Picture2D is an ADT
rectangle :: Double -> Double -> Picture2D

But instead, had to write:

rectangle :: (Picture2D p) => Double -> Double -> p ()

What does the author of this library gain by using typeclasses so liberally? I'm sure that, as a novice, I'm missing some obvious points here.
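One common answer (my illustration, with invented names, not free-game's actual classes): keeping the operations in a typeclass means the same drawing code can be run by many interpreters, e.g. a real renderer, a test logger, or a bounding-box calculator, whereas an ADT would pin it to one representation:

```haskell
-- A toy "drawing" class with one operation.
class Monad m => Draw m where
  segment :: (Double, Double) -> (Double, Double) -> m ()

-- Works in *any* Draw monad, just like free-game's class-polymorphic code.
rect :: Draw m => Double -> Double -> m ()
rect w h = do
  segment (0, 0) (w, 0)
  segment (w, 0) (w, h)
  segment (w, h) (0, h)
  segment (0, h) (0, 0)

-- One possible interpreter: log the segments instead of rendering them.
newtype Logged a = Logged { runLogged :: ([String], a) }

instance Functor Logged where
  fmap f (Logged (w, a)) = Logged (w, f a)

instance Applicative Logged where
  pure a = Logged ([], a)
  Logged (w1, f) <*> Logged (w2, a) = Logged (w1 ++ w2, f a)

instance Monad Logged where
  Logged (w1, a) >>= k = let Logged (w2, b) = k a in Logged (w1 ++ w2, b)

instance Draw Logged where
  segment p q = Logged (["segment " ++ show p ++ " " ++ show q], ())
```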

submitted by emarshall85
Categories: Incoming News

Vincent Hanquez: Listing licenses with cabal-db

Planet Haskell - Fri, 03/28/2014 - 6:00pm

Following discussions with fellow haskellers regarding the need to be careful when adding packages that could depend on GPL or proprietary licenses, it turns out it's not easy to get your dependencies' licenses listed.

It would be convenient to be able to ask the hackage database those things, and this is exactly the sort of query cabal-db is meant to answer.

cabal-db, for the ones that missed the previous announcement, is a simple program that, using the index.tar.gz file, is able to recursively display or search into packages and dependencies.

The license subcommand mainly prints a summary of the licenses of a package and its dependencies, but it can also display the tree of licenses. Once a package has been listed, it will not appear again in the tree, even if another package depends on it.

A simple example is better than many words:

$ cabal-db license -s -t BNFC
BNFC: GPL
  process: BSD3
  unix: BSD3
  time: BSD3
  old-locale: BSD3
  base: BSD3
  deepseq: BSD3
  array: BSD3
  bytestring: BSD3
  filepath: BSD3
  directory: BSD3
  pretty: BSD3
  mtl: BSD3
  transformers: BSD3
  containers: BSD3
== license summary ==
BSD3: 14
GPL: 1

cabal-db only uses the license listed in the license field of cabal files, so if the field is set incorrectly, cabal-db has no way to know.

Categories: Offsite Blogs

Make Illegal State *Transitions* Unrepresentable?

Haskell on Reddit - Fri, 03/28/2014 - 3:38pm

You may have seen this lecture by Yaron Minsky for some Harvard undergraduates. The title of the lecture is Effective ML, but many of these principles apply to Haskell as well. There is a part of the lecture around 18:05 where he talks a bit about

  • Make illegal states unrepresentable

I guess this got me thinking. Making illegal states unrepresentable is something I'm familiar with and the example he gives is pretty simple.

But what about making illegal state transitions unrepresentable? I've only had a year of self-taught experience with Haskell/FP so I'm wondering what are (hopefully practical) strategies that I can use to manage this?

This is probably a pretty vague question, so I'm hoping also to start a conversation to help me clarify it.
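One well-known strategy, sketched with my own toy example (not from the linked thread): put the state into a phantom type parameter, so that functions encode the legal transitions and the type checker rejects the rest:

```haskell
{-# LANGUAGE DataKinds, KindSignatures #-}

data DoorState = Open | Closed

-- The state lives only in the type; at runtime a Door is just its name.
newtype Door (s :: DoorState) = Door { doorName :: String }

openDoor :: Door 'Closed -> Door 'Open
openDoor = Door . doorName

closeDoor :: Door 'Open -> Door 'Closed
closeDoor = Door . doorName

-- openDoor (openDoor d) does not typecheck: opening an already-open door
-- is an illegal transition, and it is now unrepresentable.
```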

submitted by chebertapps
Categories: Incoming News

What are some useful tricks you can do with Haskell that you wish you knew earlier?

Haskell on Reddit - Fri, 03/28/2014 - 1:12pm

There are some really useful Haskell libs that solve problems you wouldn't even know about if you weren't told beforehand. For example, there is a library that allows you to write a list-operating function once and automatically get its inverse. That could cut in half the work of building things like an editor for a view of some data.

This led me to wonder: how many neat libraries are out there that every programmer should know about before starting a new project?

submitted by SrPeixinho
Categories: Incoming News

Could Monad fail be removed and replaced by modified bind (>>=) functions?

Haskell on Reddit - Fri, 03/28/2014 - 12:16pm

I recently asked this SO question:

The answer made me realize that the claim I've heard many times, that "do-notation is just syntax sugar used to replace >> and >>=", isn't as straightforward as I would have thought. I now know that do-notation and the Monad fail function are related.

I now see do-notation as indispensable instead of a mere syntax-sugar-convenience. I know it's technically just syntax sugar, but desugaring do-notation with certain Monads seems difficult. If there is a generally applicable way of desugaring do-notation I would like to know about it. So far I've only seen naive approaches to desugaring which ignore the error handling behavior of do-notation.

I've gathered this is a controversial design, and I don't want to restart a debate that has surely happened before. But I do want to ask:

Is Monad fail really necessary? It seems to me that by removing Monad fail and changing the bind (>>=) implementation you could get the same "error handling" behavior from the Monad while simplifying how do-notation works. Could this be done?
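For reference, the connection the question describes, sketched against a current GHC (today fail lives in the MonadFail class; in 2014 it was still a Monad method, but the desugaring has the same shape): a failable pattern on the left of <- compiles to an explicit case whose fallback calls fail.

```haskell
-- A failable pattern in do-notation...
firstJusts :: MonadFail m => m (Maybe Int) -> m Int
firstJusts mx = do
  Just x <- mx
  return x

-- ...desugars roughly to:
firstJusts' :: MonadFail m => m (Maybe Int) -> m Int
firstJusts' mx = mx >>= \v -> case v of
  Just x -> return x
  _      -> fail "Pattern match failure in do expression"
```

In the list monad, fail _ = [], so both versions silently drop the Nothings; that is the "error handling" behaviour the post refers to.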

submitted by Buttons840
Categories: Incoming News