# News aggregator

### How can I improve performance on defining instance Ord?

Hi, I'm a Haskell noob reading "Beginning Haskell". I got stuck on Exercise 4.3, which is practice in getting used to Set and Map, using the Client data type defined as follows:

    data Client i = GoOrg      { clientId :: i, clientName :: String }
                  | Company    { clientId :: i, clientName :: String
                               , companyId :: Integer, person :: Person, duty :: String }
                  | Individual { clientId :: i, person :: Person, responsible :: Bool }
                  deriving (Show, Read, Eq)

and I defined an instance for it as below:

    instance (Eq i, Ord i) => Ord (Client i) where
      (<) = (<) `on` clientId

    -- (Company 1 "A" 1 (Person "XXX" "YYY" Female) "") < (GoOrg 2 "B") => True
    -- (Company 3 "A" 1 (Person "XXX" "YYY" Female) "") < (GoOrg 2 "B") => False

But it didn't work: I got an "out of memory" error from

    Set.fromList [ (Company 1 "A" 1 (Person "XXX" "YYY" Female) "")
                 , (GoOrg 2 "B")
                 , (Company 3 "C" 2 (Person "XX" "YY" Male) "") ]

although it worked with deriving (Ord) instead of defining the instance, or with fromDistinctAscList instead of fromList.

I have also checked with

    foldr min (maxval) [ (Company 1 "A" 1 (Person "XXX" "YYY" Female) "")
                       , (GoOrg 2 "B")
                       , (Company 3 "C" 2 (Person "XX" "YY" Male) "") ]

and got the same result as above, in spite of the list being quite short.

How can I implement that instance correctly? Thank you!
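Likely cause, for reference: Ord's minimal complete definition is compare (or (<=)); an instance that defines only (<) leaves the default compare and (<=) defined in terms of each other, looping forever, which matches the out-of-memory symptom. A sketch of a working instance (simplified types; the Person/Gender definitions are assumptions matching the examples, not from the book):

```haskell
import Data.Function (on)
import qualified Data.Set as Set

data Gender = Male | Female deriving (Show, Eq, Ord)
data Person = Person String String Gender deriving (Show, Eq, Ord)

data Client i = GoOrg      { clientId :: i, clientName :: String }
              | Company    { clientId :: i, clientName :: String
                           , companyId :: Integer, person :: Person, duty :: String }
              | Individual { clientId :: i, person :: Person, responsible :: Bool }
              deriving (Show, Eq)

-- Ord's minimal complete definition is `compare` or `(<=)`; defining only
-- (<) leaves the class defaults calling each other, so define `compare`.
instance Ord i => Ord (Client i) where
  compare = compare `on` clientId

main :: IO ()
main = print (map clientId (Set.toList (Set.fromList
         [ Company 1 "A" 1 (Person "XXX" "YYY" Female) ""
         , GoOrg 2 "B"
         , Company 3 "C" 2 (Person "XX" "YY" Male) "" ])))
-- [1,2,3]
```

With `compare` defined, Set.fromList terminates and orders clients by id.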

submitted by comrebi [link] [9 comments]

### Hackage "500 Internal Server Error"

### Hoogle usage - can't find Control.Monad.Par

Hoogle has me stumped. I'm looking for the name of the package that implements module Control.Monad.Par. I tried

Control.Monad.Par

+Control.Monad.Par

What's the proper syntax?

Annoyingly, Google finds it just fine: https://www.google.com/search?q=Control.Monad.Par

submitted by ignorantone [link] [5 comments]

### "Spinless Tagless G-Machine" erratum?

### Why do haskellers prefer nondescriptive variables?

I'm working my way through the Haskell School of Music, and no matter where I turn (blogs, LYAH, RWH...), everyone uses short, nondescriptive variables. This really makes it difficult to understand certain concepts. Communities like Ruby and Python are very verbose, and in retrospect I think that's why those languages were easier to grasp.

Consider how much easier it is to understand (albeit a trivial example)

head :: [list] -> first_item

Than

head :: [a] -> a

If code examples were more verbose in the way their type definitions were laid out, I know for me personally that would help a great deal in deciphering more advanced concepts. So why exactly do all of these tutorials use such cryptic variables? It reminds me of my school days dealing with unreadable academic C++ code.
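As an aside, nothing stops you from writing the longer style yourself: type variables can be any lowercase identifier, not just a single letter. A quick sketch (the function name is illustrative):

```haskell
-- Type variables may be any lowercase identifier, so a "verbose"
-- signature in the style the poster wants is perfectly legal Haskell:
firstItem :: [element] -> element
firstItem (x:_) = x
firstItem []    = error "firstItem: empty list"

main :: IO ()
main = print (firstItem [10, 20, 30 :: Int])
-- 10
```

Tutorials tend to prefer `a` because parametricity makes the short name carry the whole story: a total function `[a] -> a` can only produce an element of the list.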

submitted by acconrad [link] [53 comments]

### Alp Mestanogullari: Testing attoparsec parsers with hspec

Almost all haskellers end up, some day, having to write a parser. But then, that’s not really a problem because writing parsers in Haskell isn’t really annoying, like it tends to be elsewhere. Of special interest to us is attoparsec, a **very fast** parser combinator library. It lets you combine small, simple parsers to express how data should be extracted from that specific format you’re working with.

For example, suppose you want to parse something of the form |<any char>| where <any char> can be… well, any character. We obviously only care about that precise character sitting there – once the input is processed, we don’t really care about these | anymore. This is a no-brainer with attoparsec.

    module Parser where

    import Data.Attoparsec.Text

    weirdParser :: Parser Char
    weirdParser = do   -- attoparsec's Parser type has a useful monad instance
      char '|'         -- matches just '|', fails on any other char
      c <- anyChar     -- matches any character and returns it
      char '|'         -- matches just '|', like on the first line
      return c         -- return the inner character we parsed

Here we go, we have our parser. If you’re a bit lost with these combinators, feel free to switch back and forth between this article and the documentation of Data.Attoparsec.Text.

This parser will fail if any of the 3 smaller parsers I’m using fail. If there’s more input than just what we’re interested in, the additional content will be left unconsumed.

Let’s now see our parser in action, by loading it in ghci and trying to feed it various inputs.

First, we want to be able to type in Text values directly without using conversion functions from/to Strings. For that reason, we enable the OverloadedStrings extension. We also import Data.Attoparsec.Text because in addition to containing char and anyChar, it contains the functions that let us run a parser on some input (*make sure attoparsec is installed*).

Data.Attoparsec.Text contains a parse function, which takes a parser and some input, and yields a Result. A Result will let us know whether the parser failed, with some diagnostic information; whether it was on its way to successfully parsing a value but didn’t get enough input (imagine we feed just "|x" to our parser: it won’t fail, because the input looks almost exactly like what we want to parse, except that it doesn’t have the terminal '|', so attoparsec will just tell us it needs more input to complete – or fail); or, finally, whether everything went smoothly, in which case it hands back a successfully parsed Char (in our case), along with any unconsumed input.

Why do we care about this? Because when we test our parsers with hspec-attoparsec, we’ll be able to test the kind of Result our parser leaves us with, among other things.

Back to concrete things, let’s run our parser on a valid input.

    λ> parse weirdParser "|x|"
    Done "" 'x'

That means it successfully parsed our inner 'x' between two '|'s. What if we have more input than necessary for the parser?

    λ> parse weirdParser "|x|hello world"
    Done "hello world" 'x'

Interesting! It successfully parsed our 'x' and also tells us "hello world" was left unconsumed, because the parser didn’t need to go that far into the input string to extract the information we want.

But, if the input looks right but lets the parser halfway through completing, what happens?

    λ> parse weirdParser "|x"
    Partial _

Here, the input is missing the final | that would make the parser succeed. So we’re told that the parser has *partially* succeeded, meaning that on this input it has been running successfully but hasn’t yet parsed everything it’s supposed to. What that Partial holds isn’t just an underscore but a function to resume the parsing with some more input (a continuation). The Show instance for parsers just writes a _ in place of functions.

Ok, and now, how about we feed some “wrong data” to our parser?

    λ> parse weirdParser "bbq"
    Fail "bbq" ["'|'"] "Failed reading: satisfy"

Alright! Equipped with this minimal knowledge of attoparsec, we’ll now see how we can test our parser.

**Introducing hspec-attoparsec**

Well, I happen to be working on an HTML parsing library based on *attoparsec*, and I’ve been using hspec for all my testing needs these past few months – working with the author surely helped, hello Simon! – so I wanted to check whether I could come up with a minimalist API for testing attoparsec parsers.

If you don’t know how to use hspec, I warmly recommend visiting hspec.github.io; it is well documented.

So let’s first get the boilerplate out of our way.

    {-# LANGUAGE OverloadedStrings #-}
    module ParserSpec where

    -- we import Text, this will be our input type
    import Data.Text (Text)
    -- we import hspec, to run the test suite
    import Test.Hspec
    -- we import 'hspec-attoparsec'
    import Test.Hspec.Attoparsec
    -- we import the module where our parser is defined
    import Parser

    main :: IO ()
    main = hspec spec

    spec :: Spec
    spec = return () -- this is temporary, we'll write our tests here

And sure enough, we can already get this running in ghci (ignore the warnings, they are just saying that we’re not yet using our parser or hspec-attoparsec), although it’s quite useless:

    λ> :l example/Parser.hs example/ParserSpec.hs
    [1 of 2] Compiling Parser           ( example/Parser.hs, interpreted )
    [2 of 2] Compiling ParserSpec       ( example/ParserSpec.hs, interpreted )

    example/ParserSpec.hs:8:1: Warning:
        The import of ‘Test.Hspec.Attoparsec’ is redundant
          except perhaps to import instances from ‘Test.Hspec.Attoparsec’
        To import instances alone, use: import Test.Hspec.Attoparsec()

    example/ParserSpec.hs:10:1: Warning:
        The import of ‘Parser’ is redundant
          except perhaps to import instances from ‘Parser’
        To import instances alone, use: import Parser()

    Ok, modules loaded: Parser, ParserSpec.
    λ> ParserSpec.main

    Finished in 0.0001 seconds
    0 examples, 0 failures

Alright, let’s first introduce a couple of tests where our parser should succeed.

    spec :: Spec
    spec = do
      describe "weird parser - success cases" $ do
        it "successfully parses |a| into 'a'" $
          ("|a|" :: Text) ~> weirdParser `shouldParse` 'a'

        it "successfully parses |3| into '3'" $
          ("|3|" :: Text) ~> weirdParser `shouldParse` '3'

        it "successfully parses ||| into '|'" $
          ("|||" :: Text) ~> weirdParser `shouldParse` '|'

We’re using two things from hspec-attoparsec:

- (~>), which connects some input to a parser and extracts either an error string or an actual value, depending on how the parsing went.
- shouldParse, which takes the result of (~>) and compares it to what you expect the value to be. If the parsing fails, the test won’t pass, obviously, and hspec-attoparsec will report that the parsing failed. If the parsing succeeds, the parsed value is compared to the expected one and a proper error message is reported with both values printed out.

Running them gives:

    λ> ParserSpec.main

    weird parser - success cases
      - successfully parses |a| into 'a'
      - successfully parses |3| into '3'
      - successfully parses ||| into '|'

    Finished in 0.0306 seconds
    3 examples, 0 failures

If we modify our first test case by expecting 'b' instead of 'a', while still having "|a|" as input, we get:

    λ> ParserSpec.main

    weird parser - success cases
      - successfully parses |a| into 'b' FAILED [1]
      - successfully parses |3| into '3'
      - successfully parses ||| into '|'
      - successfully parses a digit character from |3|

    1) weird parser - success cases successfully parses |a| into 'b'
    expected: 'b'
     but got: 'a'

    Randomized with seed 1330009810

    Finished in 0.0267 seconds
    4 examples, 1 failure
    *** Exception: ExitFailure 1

Nice! But what else can we test? Well, we can test that what we parse satisfies some predicate, for example. Let’s add the following to spec:

    -- you have to add: import Data.Char (isDigit)
    -- in the import list

    it "successfully parses a digit character from |3|" $
      ("|3|" :: Text) ~> weirdParser `parseSatisfies` isDigit

where

    parseSatisfies :: Show a
                   => Either String a -- ^ result of ~>
                   -> (a -> Bool)     -- ^ predicate the parsed value should satisfy
                   -> Expectation     -- ^ resulting hspec expectation

And we get:

    λ> ParserSpec.main

    weird parser - success cases
      - successfully parses |a| into 'a'
      - successfully parses |3| into '3'
      - successfully parses ||| into '|'
      - successfully parses a digit character from |3|

    Finished in 0.0012 seconds
    4 examples, 0 failures

Great, what else can we do? Well, sometimes we don’t really care about the concrete values produced; we just want to test that the parser succeeds or fails on some precise inputs we have, because that’s how it’s supposed to behave, and we want a guarantee that future changes won’t affect the parser’s behavior on these inputs. This is what shouldFailOn and shouldSucceedOn are for. Let’s add a couple more tests:

    spec :: Spec
    spec = do
      describe "weird parser - success cases" $ do
        it "successfully parses |a| into 'a'" $
          ("|a|" :: Text) ~> weirdParser `shouldParse` 'a'

        it "successfully parses |3| into '3'" $
          ("|3|" :: Text) ~> weirdParser `shouldParse` '3'

        it "successfully parses ||| into '|'" $
          ("|||" :: Text) ~> weirdParser `shouldParse` '|'

        it "successfully parses a digit character from |3|" $
          ("|3|" :: Text) ~> weirdParser `parseSatisfies` isDigit

        -- NEW
        it "successfully parses |\160|" $
          weirdParser `shouldSucceedOn` ("|\160|" :: Text)

      -- NEW
      describe "weird parser - failing cases" $ do
        it "fails to parse |x-" $
          weirdParser `shouldFailOn` ("|x-" :: Text)

        it "fails to parse ||/" $
          weirdParser `shouldFailOn` ("||/" :: Text)

where

    shouldSucceedOn :: (Source p s s' r, Show a)
                    => p s' a -- ^ parser to run
                    -> s      -- ^ input string
                    -> Expectation

    shouldFailOn :: (Source p s s' r, Show a)
                 => p s' a -- ^ parser to run
                 -> s      -- ^ input string
                 -> Expectation

And we run our new tests:

    λ> :l example/Parser.hs example/ParserSpec.hs
    [1 of 2] Compiling Parser           ( example/Parser.hs, interpreted )
    [2 of 2] Compiling ParserSpec       ( example/ParserSpec.hs, interpreted )
    Ok, modules loaded: Parser, ParserSpec.
    λ> ParserSpec.main

    weird parser - success cases
      - successfully parses |a| into 'a'
      - successfully parses |3| into '3'
      - successfully parses ||| into '|'
      - successfully parses a digit character from |3|
      - successfully parses | |

    weird parser - failing cases
      - fails to parse |x-
      - fails to parse ||/

    Finished in 0.0015 seconds
    7 examples, 0 failures

I think by now you probably understand how to use the library, so I’ll just show the last useful function: leavesUnconsumed. This one lets you inspect the unconsumed part of the input, if there is any. Using it, you can easily describe how eager in consuming the input your parsers should be.

    describe "weird parser - leftovers" $
      it "leaves \"fooo\" unconsumed in |a|fooo" $
        ("|a|fooo" :: Text) ~?> weirdParser `leavesUnconsumed` "fooo"

Right now, hspec-attoparsec will only consider leftovers when the parser succeeds. I’m not really sure whether we should return Fail’s unconsumed input or not.

**Documentation**

The code lives at github.com/alpmestan/hspec-attoparsec; the package is on hackage, where you can also view the documentation. A good source of examples is the package’s own test suite, which you can view in the repo. The example used in this article also lives in the repo, under example/. Let me know through github or by email about any question, feedback, PR, etc.

Posted on June 18, 2014

### Package Maintenance: hfuse

### Philip Wadler: Mairi McFadyen's Open Letter to J. K. Rowling

Mairi McFadyen wrote a heartfelt response to J. K. Rowling, at National Collective.

To be asked, ‘what kind of country do you want to live in?’ is the most wonderful gift. Many people have taken this opportunity to empower themselves with knowledge. They are actively engaged in the world, not passively accepting of the status quo. They could have chosen to remain, in your own words, ‘comfortably within the bounds of their own experience, never troubling to wonder how things might be improved.’ They could remain switched off. Now we frequently overhear the #indyref discussed passionately at the taxi rank at 3 o’clock in the morning on a Friday night; in the chippy queue; at the hairdressers. It is being discussed by high school leavers: full of hope, full of promise for life and all the joy and wonders and pain it brings. ... I do not believe that independence will be easy or will somehow magically cure society’s problems. What this historical moment provides us with is an unmatched opportunity to participate in the writing of our own future. We have a chance to liberate ourselves from the stranglehold of austere Westminster politics and lead by example. We must ask ourselves, what really matters? ...We need a new story to live by. In your own words: “We do not need magic to change the world, we carry all the power we need inside ourselves already: we have the power to imagine better.”

### Philip Wadler: Scottish independence: Prof Patrick Dunleavy says Treasury claims 'ludicrous'

### Philip Wadler: The the impotence of proofreading

Video: https://www.youtube.com/watch?v=dp9Hb8LAgqs

### Brent Yorgey: Anafunctors

This is part four in a series of posts on avoiding the axiom of choice (part one, part two, part three).

In my previous post, we considered the “Axiom of Protoequivalence”—that is, the statement that every fully faithful, essentially surjective functor (*i.e.* every *protoequivalence*) is an equivalence—and I claimed that in a traditional setting this is equivalent to the axiom of choice. However, intuitively it feels like AP “ought to” be true, whereas AC must be rejected in constructive logic.

One way around this is by generalizing functors to *anafunctors*, which were introduced by Makkai (1996). The original paper is difficult going, since it is full of tons of detail, poorly typeset, and can only be downloaded as seven separate postscript files. There is also quite a lot of legitimate depth to the paper, which requires significant categorical sophistication (more than I possess) to fully understand. However, the basic ideas are not too hard to grok, and that’s what I will present here.

It’s important to note at the outset that anafunctors are much more than just a technical device enabling the Axiom of Protoequivalence. More generally, if everything in category theory is supposed to be done “up to isomorphism”, it is a bit suspect that functors have to be defined for objects *on the nose*. Anafunctors can be seen as a generalization of functors, where each object in the source category is sent not just to a single object, but to an entire *isomorphism class* of objects, without privileging any particular object in the class. In other words, anafunctors are functors whose “values are specified only up to unique isomorphism”.

Such functors represent a many-to-many relationship between objects of the source category C and objects of the target category D. Normal functors, as with any function, may of course map multiple objects of C to the same object in D. The novel aspect is the ability to have a single object of C correspond to multiple objects of D. The key idea is to add a class of “specifications” which mediate the relationship between objects in the source and target categories, in exactly the same way that a “junction table” must be added to support a many-to-many relationship in a database schema, as illustrated below:

On the left is a many-to-many relation between a set of shapes and a set of numbers. On the right, this relation has been mediated by a “junction table” containing a set of “specifications”—in this case, each specification is simply a pair of a shape and a number—together with two mappings (one-to-many relations) from the specifications to both of the original sets, such that a specification maps to a shape s and a number n if and only if s and n were originally related.
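The junction-table picture can be made concrete with a toy sketch (illustrative names and data, not from the post):

```haskell
-- A toy "junction table": a many-to-many relation between shapes and
-- numbers, mediated by a list of specifications.
data Shape = Circle | Square | Triangle deriving (Show, Eq)

-- Each specification is simply a pair relating one shape to one number.
specs :: [(Shape, Int)]
specs = [(Circle, 1), (Circle, 2), (Square, 2), (Triangle, 3)]

-- The two one-to-many projections out of the junction table.
shapeOf :: (Shape, Int) -> Shape
shapeOf = fst

numberOf :: (Shape, Int) -> Int
numberOf = snd

-- A shape s and a number n are related iff some specification
-- projects to both of them.
related :: Shape -> Int -> Bool
related s n = any (\spec -> shapeOf spec == s && numberOf spec == n) specs

main :: IO ()
main = print (related Circle 2, related Square 1)
-- (True,False)
```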

In particular, an *anafunctor* F from a category C to a category D is defined as follows.

- There is a class S of
*specifications*. - There are two functions, σ : S → Ob C and τ : S → Ob D, mapping specifications to objects of C and D.

S, σ, and τ together define a many-to-many relationship between objects of C and objects of D. An object d of D is called a *specified value of F at c* if there is some specification s ∈ S such that σ(s) = c and τ(s) = d, in which case we write F_s(c) = d. Moreover, d is *a value of F at c* (not necessarily a *specified* one) if there is some s for which F_s(c) = d.

The idea now is to impose additional conditions which ensure that the anafunctor F “acts like” a regular functor from C to D.

- Functors are defined on all objects; so we require each object c of C to have at least one specification which corresponds to it, that is, σ must be surjective.
- Functors transport morphisms as well as objects. For each pair of specifications s, t ∈ S (the middle of the below diagram) and each morphism f : σ(s) → σ(t) in C (the left-hand side below), there must be a morphism F_{s,t}(f) : τ(s) → τ(t) in D (the right-hand side):

- Functors preserve identities: for each s ∈ S we should have F_{s,s}(id_{σ(s)}) = id_{τ(s)}.
- Finally, functors preserve composition: for all s, t, u ∈ S (in the middle below), f : σ(s) → σ(t), and g : σ(t) → σ(u) (the left side below), it must be the case that F_{s,u}(g ∘ f) = F_{t,u}(g) ∘ F_{s,t}(f):

Our initial intuition was that an anafunctor should map objects of C to isomorphism classes of objects in D. This may not be immediately apparent from the definition, but is in fact the case. In particular, the identity morphism id_c maps to isomorphisms between specified values of c; that is, under the action of an anafunctor, an object c together with its identity morphism “blows up” into an isomorphism class (aka a *clique*). To see this, let s, t ∈ S be two different specifications corresponding to c, that is, σ(s) = σ(t) = c. Then by preservation of composition and identities, we have F_{t,s}(id_c) ∘ F_{s,t}(id_c) = F_{s,s}(id_c) = id_{τ(s)}, and symmetrically F_{s,t}(id_c) ∘ F_{t,s}(id_c) = id_{τ(t)}, so F_{s,t}(id_c) and F_{t,s}(id_c) constitute an isomorphism between τ(s) and τ(t).

There is an alternative, equivalent definition of anafunctors, which is somewhat less intuitive but usually more convenient to work with: an anafunctor from C to D is a *category* S of specifications together with a span of *functors* σ : S → C and τ : S → D, where σ is fully faithful and (strictly) surjective on objects.

Note that in this definition, σ must be *strictly* (as opposed to *essentially*) surjective on objects, that is, for every object c of C there is some s ∈ S such that σ(s) = c, rather than only requiring σ(s) ≅ c. Given this strict surjectivity on objects, it is equivalent to require σ to be full, as in the definition above, or to be (strictly) surjective on the class of all morphisms.

We are punning on notation a bit here: in the original definition of anafunctor, S is a set and σ and τ are functions on objects, whereas in this more abstract definition S is a category and σ and τ are functors. Of course, the two are closely related: given a span of functors σ : S → C and τ : S → D, we may simply take the objects of S as the class of specifications, and the actions of the functors σ and τ on objects as the functions from specifications to objects of C and D. Conversely, given a class of specifications S and functions σ and τ, we may construct the category whose objects are the specifications and whose morphisms s → t are the morphisms σ(s) → σ(t) in C. From this category to C, we construct the functor given by σ on objects and the identity on morphisms; the other functor maps a morphism f : s → t to F_{s,t}(f) : τ(s) → τ(t) in D.

Every functor F : C → D can be trivially turned into an anafunctor, taking the objects of C themselves as the specifications. Anafunctors also compose. Given compatible anafunctors F : C → D and G : D → E, consider the action of their composite on objects: each object of C may map to multiple objects of E, via objects of D. Each such mapping corresponds to a zig-zag path from an object c of C through an object d of D to an object e of E. In order to *specify* such a path it suffices to give the pair of specifications (s, t), which determines c, d, and e. Note, however, that not every pair of specifications corresponds to a valid path, but only those which agree on the middle object d ∈ D. Thus, we may take the set of pairs (s, t) with τ_F(s) = σ_G(t) as the set of specifications for the composite G ∘ F, with σ(s, t) = σ_F(s) and τ(s, t) = τ_G(t). On morphisms, (G ∘ F)_{(s,t),(u,v)}(f) = G_{t,v}(F_{s,u}(f)). It is not hard to check that this satisfies the anafunctor laws.
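In database terms, composing specification tables is just a relational join on the middle column, keeping exactly the zig-zag paths that agree in the middle. A toy sketch (hypothetical names, object-level only):

```haskell
-- A specification table as a list of (source, target) pairs; composing two
-- tables keeps exactly those zig-zag paths whose middle objects agree,
-- mirroring how anafunctor composition pairs compatible specifications.
composeSpecs :: Eq b => [(a, b)] -> [(b, c)] -> [(a, c)]
composeSpecs f g = [ (x, z) | (x, y) <- f, (y', z) <- g, y == y' ]

main :: IO ()
main = print (composeSpecs [("s1", 1), ("s2", 2)]
                           [(1 :: Int, 'p'), (2, 'q'), (2, 'r')])
-- [("s1",'p'),("s2",'q'),("s2",'r')]
```

Note how "s2" ends up related to two targets: the many-to-many character survives composition.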

If you know what a pullback is, note that the same thing can also be defined at a higher level in terms of spans: Cat, the category of all (small) categories, is complete, and in particular has pullbacks, so we may construct a new anafunctor from C to E by taking a pullback of τ_F and σ_G and then composing appropriately.

One can go on to define ananatural transformations between anafunctors, and show that together these constitute a 2-category which is analogous to the usual 2-category of (small) categories, functors, and natural transformations; in particular, there is a fully faithful embedding of the latter into the former, which moreover is an equivalence if AC holds.

To work in category theory based on set theory and classical logic, while avoiding AC, one is therefore justified in “mixing and matching” functors and anafunctors as convenient, but discussing them all as if they were regular functors (except when defining a particular anafunctor). Such usage can be formalized by turning everything into an anafunctor, and translating functor operations and properties into corresponding operations and properties of anafunctors.

However, as I will argue in some future posts, there is a better solution, which is to throw out set theory as a foundation of category theory and start over with homotopy type theory. In that case, thanks to a generalized notion of equality, regular functors act like anafunctors, and in particular AP holds.

**References**

Makkai, Michael. 1996. “Avoiding the Axiom of Choice in General Category Theory.” *Journal of Pure and Applied Algebra* 108 (2). Elsevier: 109–73.

### 1 = 0?

### Why won't my code work? (Beginner)

I'm a Haskell noob and my previous programming experience is mainly in JavaScript.

I'm trying to make an infinite set that follows a pattern. Why does this not work?

    import Data.List

    sndLast l = last (init l)

    enunBlock :: [Int] -> [Int] -- [2,2,2] -> [3,2] | [3] -> [1,3]
    enunBlock [] = []
    enunBlock l = [length l, head l]

    enunPattern = [1] ++ concat (map enunBlock (group (sndLast enunPattern)))

sndLast and enunBlock work, but enunPattern doesn't. The error states:

    Couldn't match type 'Int' with '[Int]'
    Expected type: [[Int]]
      Actual type: [Int]
    In the first argument of 'sndLast', namely 'enunPattern'
    In the first argument of 'group', namely '(sndLast enunPattern)'
    In the second argument of 'map', namely '(group (sndLast enunPattern))'
    Failed, modules loaded: none

Thank you!
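For context: the error says sndLast enunPattern is a single Int, while group needs a list. The self-describing "look-and-say" stream that enunBlock appears to be aiming at is usually built as a list of lists; a sketch for comparison (my illustration, not the poster's intended code):

```haskell
import Data.List (group)

-- The classic self-describing "look-and-say" step: replace each run of
-- equal elements by its length followed by the element itself.
lookAndSay :: [Int] -> [Int]
lookAndSay = concatMap (\run -> [length run, head run]) . group

-- The infinite sequence of patterns, each describing the previous one.
enunPattern :: [[Int]]
enunPattern = iterate lookAndSay [1]

main :: IO ()
main = print (take 4 enunPattern)
-- [[1],[1,1],[2,1],[1,2,1,1]]
```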

submitted by Loonybinny [link] [22 comments]

### Douglas M. Auclair (geophf): No Roam at the Inn(sbruck)

Okay, so, for yesterday's 1HaskellADay problem (about 'collect'ing 'stuff' from 'room's), we’re looking at a pathing kind of problem, and that is

    {- You have a list of rooms, each room has two doors and contains one
       element. To enter a room, the first door must be opened. To leave a
       room, the second door must be open. It means that if you want to move
       from one room to the next one, both the second door of the first room
       and the first door of the second room must be open. -}

    data Room a = Room (Bool, Bool) a

    {- | Given a list of rooms, we want to go as far as we can and collect
         the elements in the rooms. This is the purpose of the function
         `collect`.

Examples:

    >>> collect [Room (True, True) 1, Room (True, False) 2, Room (True, False) 3]
    [1,2]

    >>> collect [Room (False, True) 1, Room (True, False) 2, Room (True, False) 3]
    []

    >>> collect [Room (True, True) 'a', Room (False, True) 'b', Room (True, True) 'c']
    ['a']
    -}

    collect :: [Room Int] -> [Int]
    collect = undefined

Okay, so right off the bat, you see the type is just ... wrong. Look at the type signature. It does not allow the third example at all, so, really collect’s type is:

collect :: [Room a] -> [a]

Ouch.
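For comparison, a plain-Haskell sketch of collect at the corrected type (my own take, not part of the problem set):

```haskell
data Room a = Room (Bool, Bool) a

-- Walk the rooms, collecting contents for as long as the doors allow:
-- we must be able to enter (first door open), and we continue past a
-- room only if its second door is also open.
collect :: [Room a] -> [a]
collect [] = []
collect (Room (False, _) _ : _)     = []
collect (Room (True, out) x : rest) = x : if out then collect rest else []

main :: IO ()
main = do
  print (collect [Room (True, True) 1, Room (True, False) 2, Room (True, False) 3])
  -- [1,2]
  print (collect [Room (False, True) 1, Room (True, False) 2, Room (True, False) 3])
  -- []
  print (collect [Room (True, True) 'a', Room (False, True) 'b', Room (True, True) 'c'])
  -- "a"
```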

So, anyway (you know, running the examples before publishing them ... *might* be prudent).

This particular, problem, albeit trivial, brings to mind two analogues: one is proof search, because we’re going from ‘room’ to ‘room’ (truth to truth) collecting ‘stuff’ (lemmas), and we stop when we hit a closed door (the proof search stops at a falsehood). So if we look at it this way, then all we need to do is build us some generic proof search algorithm, and make the Room type hypotheses, and we’re done.

But we don’t *need* to do this in Idris, as Idris already has a proof search algorithm.

It’s called, conveniently enough: ‘proof search.’

UGH! TOO EASY!

The other thing this little problem recalled to me was the optimization algorithms I was exploring, including GAs (genetic algorithms) and ACO (ant colony optimization), but the pathing is *way* too easy for these kinds of hammers.

Now if we had room defined something like:

data Room a = Room String [Room a] a

*THEN* we’re cooking with gas, because if you have a house, your rooms sometimes have one door, sometimes have several, so:

(bleh! draw a picture of our house here!)

And in this example here the rooms are identified (as they aren’t in today’s problem), and a room can have multiple egress points. Direction isn’t defined either. Another thing: this graphic is *not* acyclic. So how do we deal with that?

Maybe *now* it’s time to call in the ants, eh?

At any rate, my solution to today’s problem was a rather straightforward Idris exercise:

    module corn -- as in maze (pronounced mah-ees)

    data Roam a = Room (Bool, Bool) a -- note: Roam, Room

    collect : List (Roam a) -> List a -- note: NOT Int, but _a_, duh!
    collect [] = []
    collect ((Room (False, _) _) :: _) = []
    collect ((Room (True, x) y) :: rest) = y :: (if x then (collect rest) else [])

and checking totality:

    soln-2014-06-16*> :total collect
    corn.collect is Total

Note that Room defined by itself in Idris is a circular-type error. Since every value is typeful and every type is a value (‘valueful’?), then even instances of a (disjoint- or tagged-)type are a type, themselves, so

data Room a = Room (Bool, Bool) a

doesn’t work in Idris, ... at least, it didn’t work for me. So that’s why I ‘Roam’ed about my ‘Room’s for my solution.

But now we look at the problem of ‘shouldbe’ because

    sample1 `shouldbe` [1,2]
    sample2 `shouldbe` []
    sample3 `shouldbe` ['a']

or type-failure, because the function type is [Room Int] -> [Int], *not* [Room a] -> [a], amirite?!?!

I’m *SO* the right.

So here I tried to make the above provable properties. If I could prove the above statements, then I knew my implementation did what I said what it was supposed to do (which, I hope, is what was asked of me, too).

So, my first stab at it was to show decidability. I had gotten wrapped around this axle (of decidability) because from the nGrams problem from before I could show that it was decidable that a String equals a String, but I couldn’t prove a String equaled a String (because there are obvious counterexamples, e.g. “Joe” == “Smart” is false. (sorry, Joe. It’s not personal; it’s just the truth. You *are not* smart, even if you do (or you don’t) *have* smarts (as the case may be))).

So I went for decidability, because that worked in the existential case for me before when I was verifying:

    isDecEqStr : String -> Bool
    isDecEqStr str = case (decEq (concat $ nGrams 1 str) str) of
      Yes (_) => True
      No (_)  => False

For any (existing and non-empty) String str, we can show that it is decidable that

concat $ nGrams 1 str = str

... it’s like a join, right?

Anyway.

So, I thought we could show the decidability of collect over some List (Roam a).

So, we start with

    isDecEqRoam : Eq a => List (Roam a) -> List a -> Bool
    isDecEqRoam rooms stuff = (collect rooms) == stuff

and that works fine. But to prove that? Hm. Lessee.

    sample1Is12 : { dec : (isDecEqRoam sample1 [1,2]) } -> dec = True
    sample1Is12 = refl

Nope. We get the following error when we try to compile our module:

    When elaborating type of corn.sample1is12:
    Can’t unify Bool with Type

(with the same ‘specifically’)

So, maybe we try ...

Yeah, I iterated and iterated over decidability until I realized that I could just unify the resulting type of isDecEqRoam with True and I’d be done with it. I didn’t need decidability at all, because I wasn’t deciding that a list of things could equal a list of other (the same) things, no: I was unifying booleans, and that *is* provable. Woot!

    sample1Is12 : (isDecEqRoam sample1 [1,2]) = True
    sample1Is12 = refl

    soln-2014-06-16*> sample1Is12
    refl : True = True

and so then:

    sample2IsEmpty : (isDecEqRoam sample2 []) = True
    sample2IsEmpty = refl

    soln-2014-06-16*> sample2IsEmpty
    refl : True = True

    sample3IsA : (isDecEqRoam sample3 ['a']) = True
    sample3IsA = refl

    soln-2014-06-16*> sample3IsA
    refl : True = True

YAY! ‘Unit’ ‘testing’ by proofs!

(Granted these are very simple (tautological, in fact) proofs, but I have a warm-fuzzy knowing that I proved my samples are true from my simple collect function. YAY!)

But what does that get us, besides proving that what I wrote actually *is* what I said it would be?

The thing is ... this:

    badSample : (isDecEqRoam sample3 ['b']) = True
    badSample = refl

If I insert the above function (the above proof), the *program won’t compile!*

By putting these assertions, I’m not (just) ‘unit testing,’ nor am I catching unsafe things at runtime (throwing an ‘AssertionException,’ or whatever, aborting the program instead of allowing the program to proceed under false assumptions). No, I’m doing neither of these things, or, more precisely, I’m doing more than either of those things: much more.

What I am doing is this: I am *proving* my program is correct, and the proof is that I get a compiled object. The above ‘sampleXisY’ functions are proof-carrying code: I’ve just certified my program as safe and correct for the given samples at compile-time.

Cool story, bro!

### Hodor, a brainfuck dialect

Hi guys,

For fun, I implemented an interpreter for Brainfuck and OokOok. I also added a variation of OokOok named Hodor (it's just the "Ook" replaced by a "Hodor"). Is Hodor Turing-complete? :p

More seriously, I'm a beginner Haskeller, and I would appreciate some feedback on my code: suggestions, critiques of style, design, etc.

Thank you very much. Remi

submitted by remusao[link] [29 comments]
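Remi's code isn't shown in this excerpt, but for readers curious what the core of such an interpreter can look like, here is a minimal Brainfuck evaluator sketch in Haskell. It is output-only (no `,` input handling), and all of the names (`Op`, `Tape`, `run`, …) are mine, not taken from the submitted interpreter:

```haskell
import Data.Char (chr)

-- Minimal Brainfuck sketch (no ',' input). Illustrative names only.
data Op = IncP | DecP | Inc | Dec | Out | Loop [Op]

-- Tape zipper: cells left of the head, the current cell, cells right.
data Tape = Tape [Int] Int [Int]

emptyTape :: Tape
emptyTape = Tape (repeat 0) 0 (repeat 0)

-- Parse source into a nested Op list; '[' .. ']' become Loop nodes.
parse :: String -> [Op]
parse = fst . go
  where
    go []       = ([], [])
    go (']':cs) = ([], cs)                       -- close current loop
    go ('[':cs) = let (body, rest)  = go cs
                      (ops,  rest') = go rest
                  in (Loop body : ops, rest')
    go (c:cs)   = let (ops, rest) = go cs
                  in case lookup c table of
                       Just op -> (op : ops, rest)
                       Nothing -> (ops, rest)    -- other chars are comments
    table = [('>', IncP), ('<', DecP), ('+', Inc), ('-', Dec), ('.', Out)]

-- Execute, threading the tape and accumulating output.
exec :: [Op] -> Tape -> String -> (Tape, String)
exec []       t               out = (t, out)
exec (op:ops) t@(Tape ls x rs) out = case op of
  IncP -> let (r:rs') = rs in exec ops (Tape (x:ls) r rs') out
  DecP -> let (l:ls') = ls in exec ops (Tape ls' l (x:rs)) out
  Inc  -> exec ops (Tape ls (x + 1) rs) out
  Dec  -> exec ops (Tape ls (x - 1) rs) out
  Out  -> exec ops t (out ++ [chr x])
  Loop body
    | x == 0    -> exec ops t out                -- skip the loop
    | otherwise -> let (t', out') = exec body t out
                   in exec (op:ops) t' out'      -- re-test after the body

run :: String -> String
run prog = snd (exec (parse prog) emptyTape "")
```

For example, `run "++++++++[>++++++++<-]>+."` evaluates to `"A"`. As for the Turing-completeness question: since Hodor is described as a token-for-token renaming of OokOok, which itself maps one-to-one onto Brainfuck's eight commands, it inherits Brainfuck's Turing-completeness.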

### Lee Pike: SmartCheck: Redux

A few years ago, I started playing with the idea of making counterexamples from failed tests easier to understand. (If you’re like me, you spend a lot of time debugging.) Specifically, I started from QuickCheck, a test framework for Haskell, which has since been ported to many other languages. From this, a tool called SmartCheck was born.

In this post, I’m not going to describe the technical details of SmartCheck. There’s a paper I wrote for that, and it was just accepted for publication at the 2014 Haskell Symposium (warning: the paper linked to will change slightly as I prepare the final version).

Rather, I want to give a bit of perspective on the project.

As one reviewer for the paper noted, the paper is not fundamentally about functional programming but about testing. This is exactly right. From the perspective of software testing in general, I believe there are two novel contributions (correct me ASAP if I’m wrong!):

- It combines counterexample generation with counterexample reduction, as opposed to delta-debugging-based approaches, which perform deterministic reduction given a specific counterexample. One possible benefit is that SmartCheck can help avoid stopping at local minima during reduction, since new random values are generated while shrinking. **Update: as John Regehr points out in the comments below, his group has already done this in the domain of C programs. See the paper.**
- Perhaps the coolest contribution is generalizing sets of counterexamples into formulas that characterize them.
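To make the delta-debugging contrast concrete, here is a toy deterministic reducer in plain Haskell — my own illustration of the idea, not SmartCheck's algorithm. It greedily tries single-element deletions and stops when no deletion still fails, which is exactly the kind of local minimum mentioned above:

```haskell
-- Toy deterministic reducer (illustration only, not SmartCheck):
-- greedily delete one element at a time while the property still fails.
deletions :: [a] -> [[a]]
deletions xs = [take k xs ++ drop (k + 1) xs | k <- [0 .. length xs - 1]]

reduce :: ([a] -> Bool) -> [a] -> [a]
reduce failing xs =
  case filter failing (deletions xs) of
    (c:_) -> reduce failing c
    []    -> xs   -- local minimum: no single deletion still fails

-- Example "bug": the property fails whenever both 1 and 2 are present.
bug :: [Int] -> Bool
bug xs = 1 `elem` xs && 2 `elem` xs
```

Here `reduce bug [3,1,4,2,5]` shrinks to `[1,2]`; greedy deletion happens to reach the minimum in this toy case, but with interacting elements it can get stuck, which is where generating fresh random values during shrinking helps.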

I’d be interested to see how the work would be received in the software testing world, but I suppose first it’d need to be ported to a more mainstream language, like C/C++/Java.

Compared to QuickCheck specifically: QuickCheck didn’t use to have generics for implementing shrinking functions; recent versions include them, and they’re quite good. In many cases, SmartCheck outperforms QuickCheck, generating smaller counterexamples faster. Features like counterexample generalization are unique to SmartCheck, but being able to test functions is unique to QuickCheck. Moreover, I should say that SmartCheck uses QuickCheck in its implementation to do some of the work of finding a counterexample, so thanks, QuickCheck developers!

When you have a tool that purports to improve on QuickCheck in some respects, it’s natural to look for programs/libraries that use QuickCheck to test it. I found that surprisingly few Hackage packages have QuickCheck test suites, particularly given how well known QuickCheck is. The one quintessential program that does contain QuickCheck tests is Xmonad, but I challenge you to think of a few more off the top of your head! This really is a shame.

The lack of (public) real-world examples is a challenge when developing new testing tools, especially when you want to compare your tool against previous approaches. More generally, in testing, there seems to be a lack of standardized benchmarks. What we really need is an analogue of the SPEC CPU2000 performance benchmarks for property-based testing, in Haskell or any other language for that matter. I think this would be a great contribution to the community.

In 1980, Boyer and Moore developed a linear-time majority vote algorithm and verified an implementation of it. It took until 1991 to publish it after many failed tries. Indeed, in the decade between invention and publication, others had generalized their work, and it being superseded was one of the reasons reviewers gave for rejecting it! (The full story can be found here.)

To a small extent, I can empathize. I submitted a (slightly rushed) version of the paper a couple of years ago to ICFP, in what was a year of record submissions. One reviewer was positive, one lukewarm, and one negative. I didn’t think about the paper much over the following year or two, but I got a couple of requests to put the project on Hackage, a couple of reports on usage, and a couple of inquiries about how to cite the draft paper. So after making a few improvements to the paper and implementation, I decided to try again to publish it, and it finally will be.

As I noted above, this is not particularly a Haskell paper. However, an exciting aspect of the Haskell community, and more generally, the functional programming community, is that it is often exploring the fringes of what is considered to be core programming language research at the time. I’m happy to be part of the fringe.