I just wrote my first "real" Haskell program: a Game of Life implementation. Would anyone be willing to critique my code? It's here
Regarding the code:
- I ran into trouble with IO (random and writeout)
- I probably should have used a 2d array to represent the board
- There are some performance issues (related to all the concatenations in the SVG code?)
Any thoughts/comments are most welcome, please don't hold back! Thanks.
Thanks for the great comments everybody! I have started updating the code with the suggested changes. In particular: fixing the randomness bug, making evolution and writeboard "pure(r)" and simplifying functions.
I've updated the link above to point to the new version, the original code can be found here.
submitted by stmu
As a spin off from teaching programming to my 10 year old son and his friends, we have published a sprite and pixel art editor for the iPad, called BigPixel, which you can get from the App Store. (It has a similar feature set as the earlier Haskell version, but is much prettier!)
* Atom feed support
* Hosted on https:// (with heartbleed patched)
* Sources hosted on github
* Main blog over at https://blog.cppcabrera.com
* All content from here ported over
* All posts tagged appropriately
I've documented the process as my first new post at the new location: Learning Hakyll and Setting Up
At this point, the one feature I'd like to add to my static blog soon is the ability to have a Haskell-only feed. I'll be working on that over the coming week.
Thanks again for reading, and I hope you'll enjoy visiting the new site!
The ML Family workshop intends to attract the entire family of ML languages, whether related by blood to the original ML or not. Our slogan is ``Higher-order, Typed, Inferred, Strict''. Designers and users of the languages fitting the description have many issues in common, from data representation and garbage collection to fancy type system features. As an example, some form of type classes or implicits has been tried or been looked into in several languages of the broad ML family. We hope the ML Family workshop is a good forum to discuss these issues.
Also new this year is a category of submissions -- informed opinions -- to complement research presentations, experience reports and demos. We specifically invite arguments about language features, be they types, garbage collection, implicits or something else -- but the arguments must be good and justified. Significant personal experience does count as justification, as do empirical studies or formal proofs. We would be delighted if language implementors or long-time serious users could tell, with examples from their long experience, what has worked out and what has not in their language.
The deadline for submitting an abstract of the presentation, up to 2 PDF pages, is in a month. Please consider submitting and attending!
'G' is for Gödel-Gentzen Translation.
Okay, this is going to feel a little ... weird.
Cue Morpheus marlinspiking Neo through the spine to reinject him into the Matrix.
So, what is 'true' and what is 'false'?
In Latin-based languages there is only 'non,' but there is no word for 'yes.' You'd think there is, but the most common one, 'Si,' comes from 'sic,' 'thus.' And the alternative 'Oc' (from history: "The people who say 'Oc'") is from 'hoc,' 'this.'
There is no 'yes' in Latin, just 'no.'
In mathematics, there is no 'no': there are only instances, but not 'not something.'
I mean, there is, but there isn't.
You weirded out yet?
The problem is logic.
There are universals: 'For every x, ...'
And existentials: 'There is some x that, ...'
But no 'non's.
So, if there aren't 'non-existentials,' then we can't really formulate 'not' with any authority.
So we can't say, 'That person is not Julia.'
We can say who she is, but we can't identify her by who she isn't.
So what to do? If 'not' really is the absence of something, and mathematics models what is, then how do you model what isn't?
Some logics just assume 'not' into existence ...
... You see the problem? You're creating a 'nothing' or a 'voider' or some such. How do you create something that isn't?
But some logics do exactly that: 'not' 'false' 'no' they all ... 'exist' in these logics.
From antiquity, too: classical logic admits 'not' into the theory and uses 'not' to prove its theorems.
There it is. (Or more precisely: there it isn't!) (So if it isn't there, then where is it? Nowhere, right?)
(My head hurts.)
So, okay, some logics admit 'not.' And, since some of these logics are from antiquity, there's a lot of foundational work based in these logics that is extremely useful, practical, all that.
It'd be a shame to throw all that away.
But here's the problem. A lot of recent developments have started to question everything, about everything, because, after all, a set of rules or assumptions limits you to a set of world-views. Questioning these views allows you to open new avenues to explore.
So, classical logic admits 'not,' but a more recent logic, Intuitionistic logic, does not allow 'not.'
So what? Intuitionistic logic can go take a hike, right?
Well, the thing is, it's called Intuitionistic logic for a reason, and that is, this logic is a closer model to what our intuitions show us to be the 'real world,' whatever that may be. In particular, Intuitionistic logic is a very good descriptor of type theory that is heavily used in computer science, and, furthermore, (or, 'so, therefore,') statements of Intuitionistic logic have a direct mapping into (some) computer programming languages.
Again, so what?
The 'so what' is this: a statement of Intuitionistic logic has the strength of veracity to it. It can be proved. If you can map that down to a piece of a computer program, then you know, for sure, that that piece of code is correct, and verifiably so (that is, if your mapping is sound, and there are ways to prove the soundness of a mapping, as well, so we got ya covered there, too).
The same cannot be said for classical logic. Because we make the (axiomatic) assumption of 'not,' we get into weird circularities of the types when we try to map statements of classical logic down to computer program fragments.
(Logic is ... hard. It's very dogged in exhausting all possible paths an argument can take.
Argument: "I can't take anymore!"
Logic: "Well, tough!"
Logic is just so ... unsympathetic!
The contrapositive statement: when somebody's being sympathetic? ... or sincere? They are not being logical. So you're getting somebody to agree with you, and a shoulder to cry on, but is somebody saying 'aw, that's okay, really!' what you really need? Or would brutal honesty get you out of the fix you're in?)
(But I digress.)
So, what do we do? We have all this neat stuff in classical logic, but we can't use it. We have a logic we can use in Intuitionistic Logic, but do we have to reinvent the wheel to use it?
Then, along came a little guy by the name of Gödel, following up on some really interesting work that Gentzen did. Gentzen was able to prove things about arithmetic (both 'primitive recursive' and 'first-order') using something called construction, that is, by not using 'not,' but by constructing the proof for every number (the 'ordinals') in the arithmetic (and since the numbers are ... 'infinite,' there are a lot of them). This is called using constructive theory, or 'ordinal analysis.'
He invented the thing.
Gödel used ordinal analysis (along with his own genius innovations) to prove that the Principia Mathematica system was incomplete, but he also used Gentzen's work to provide a mapping between classical logic and intuitionistic logic.
You see, classical logic has three axioms, that is, three statements we accept as 'true' or foundational to the logic:
1. p -> (q -> p) If you have a p, then q implies p is true
2. (p -> (q -> r)) -> ((p -> q) -> (p -> r)) urgh. You read it.
3. ¬¬p -> p "not-not p" implies p (if you don't have a not-p, then you have p)
We can live with the first two in intuitionistic logic, but the third one uses 'not,' and so can't be handled directly in that logic.
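Under the types-as-propositions reading, "we can live with the first two" can be demonstrated directly: a Haskell term of the corresponding type is an intuitionistic proof. A minimal sketch (the names ax1 and ax2 are mine):

```haskell
-- Axioms 1 and 2 of classical logic, proved by writing terms of
-- those types (Curry-Howard). No such total term exists for axiom 3.
ax1 :: p -> (q -> p)
ax1 p _ = p

ax2 :: (p -> (q -> r)) -> ((p -> q) -> (p -> r))
ax2 f g x = f x (g x)
```

These are just `const` and the S combinator under other names, which is no accident.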
So, here's the thing, '¬' or 'not' is really just a short-hand for saying
¬p == p -> ⊥
That is 'p implies "bottom"' where ⊥ or 'bottom' is an unprovable statement in the logic: false, fail, void, icky-poo. More rigorously: a type that ... isn't, and this is known as an 'uninhabited type.'
You know: that.
So ¬¬p can also be said thus: (p -> ⊥) -> ⊥
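The short-hand is directly expressible in Haskell, with the uninhabited type Data.Void.Void standing in for ⊥. A sketch (the names Not and doubleNegIntro are mine):

```haskell
import Data.Void (Void)

-- 'not p' as short-hand for 'p implies bottom'
type Not p = p -> Void

-- p -> ¬¬p is perfectly intuitionistic: hand p to the refutation.
doubleNegIntro :: p -> Not (Not p)
doubleNegIntro p k = k p

-- The classical direction, ¬¬p -> p, has no total implementation:
-- nothing conjures a p out of a mere consumer of refutations.
```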
And then there are two other translation functions from classical logic to intuitionistic logic that Gödel provides:
⊥^⊥ -> ⊥
(p -> q)^⊥ -> (p^⊥ -> q^⊥)
That's great, but it really doesn't get us anything, because we are still in classical logic, so we need a translator from classical logic to intuitionistic logic, and we use ⊥ as a superscript ...
... zzzzzz, whoopsie, continuing this convo in the next entry ... *blush*
Why are we relying on numeric identifiers for dependencies when we have so much richer information available?
Reading the recent thread about a proposal to change the PVP has revived a question I've had about how Haskell manages dependencies.
When a package is successfully compiled, GHC has almost perfect information about its dependencies and exports.
Presumably, you can build a package at the time you're releasing it, so why not add all this information to the package itself so it can be used by the package management system?
list each function/constructor dependency with the names of its source package and module, along with the types the function/constructor is constrained to match in order for the package being released to compile.
So if I'm releasing package foo and it contains one module:
module Foo where

foo :: [String]
foo = replicate (id 3) (id "foo")
Then my dependencies should be
base/Prelude.id :: Num a => a -> _foo_Foo_0
base/Prelude.id :: String -> _foo_Foo_1
base/Prelude.replicate :: _foo_Foo_0 -> _foo_Foo_1 -> [String]
conversely, list each function/constructor the package being released provides along with its full type as a package signature.
foo/Foo.foo :: [String]
Handle datatype and typeclass dependencies/provided similarly.
Handle typeclass instances similarly, with a caveat. The package management system would need to decide whether to handle silent re-export by
- EITHER explicitly listing the original source package of the instance as a dependency, and not listing re-exported instances in the package signature.
- OR listing the proximate source package of the instance as a dependency, and listing all re-exported instances in the package signature.
Both options have their benefits and drawbacks.
The machine-generated dependency listing and package signature should be included in the release. The semantic version is used only for the human side of the interaction, while these two files determine how the package manager perceives this version of the package.
Then, when trying to install a package, the package manager can check its dependency listing, and use that to generate a combination of specific package versions whose package signatures unify with the dependencies.
So now the dependency upper bound is defined in terms of what's actually needed, rather than some arbitrary identifier.
Even if it's computationally infeasible for the package manager to find a satisfying set of dependency versions, that can be worked around. The package manager could certainly check efficiently whether a given set of dependency versions' package signatures satisfies the dependency list of a package. This means that after a package is released, users could propose various dependency version sets to some service that the package manager talks to, and the package manager could verify that the proposed sets work.
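To make the cheap direction of that check concrete, here is a deliberately toy sketch (all types and names invented; a real checker would unify types rather than compare rendered strings):

```haskell
-- A signature entry pairs a fully qualified name with a rendered type.
type Entry = (String, String)

-- A proposed set of dependency versions works for a package if every
-- required entry appears verbatim among the provided entries.
-- Verification is a linear scan; it's the *search* for a good set
-- that may be infeasible.
satisfies :: [Entry] -> [Entry] -> Bool
satisfies provided required = all (`elem` provided) required
```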
Notably, as the developer of a package, I don't need to do anything when new versions of my dependencies come out that:
- add new functions/constructors
- make the type of a function/constructor I use more general
- make the type of a function/constructor I use more specific in a way that still unifies with my dependency list
The one risk I can see is if one of my dependencies modifies a function/constructor I use but doesn't change its type. A function becomes partial, or changes the format of the strings it produces, or something like that. Errors from this would show up at run-time, not compile-time, and would have to be revealed through testing.
But perhaps we can reuse the idea of a service that the package manager talks to here, and allow users to flag specific dependency sets as bad in this case. This spares other users from reencountering the same issue.
tl;dr - why don't we take advantage of types to let the package manager decide what dependency versions a package should be compile-time compatible with?
submitted by rampion
I'm chugging through Learn You a Haskell, like everyone says to do. A code sample in the book uses the following construct to delete the nth item in a list:
let newStuff = delete (stuff !! n) stuff
Am I missing something, or is scanning the list twice like this inefficient? I tried looking in Data.List for some sort of deleteNth function, such that you could write:
let newStuff = deleteNth n stuff
and only scan the list once, but I was unable to find such a function.
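There is indeed no such function in Data.List, but one is easy to write with splitAt, which traverses only up to index n. A sketch (deleteNth is the poster's name, not a library function):

```haskell
-- Delete the element at index n in a single pass up to n.
-- Indices past the end of the list leave it unchanged.
deleteNth :: Int -> [a] -> [a]
deleteNth n xs = case splitAt n xs of
  (before, _ : after) -> before ++ after
  (before, [])        -> before
```

For example, deleteNth 1 "abc" gives "ac".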
Edit: Sorry, went to sleep after asking. The code sample is from the Input and Output chapter, in the "Todo" program. The list of todo items has already been zipped up with [0..], so it's at least guaranteed to delete the right thing, since every element of the list is a string starting with a different number.
submitted by the_noodle
'H' is 'G'++
(Mathematical humor ... gotta love it!)
(although the symbol '++' to signify 'Mathematical Humor' is an egregious misnomer, as the 'auto-increment' operator is not mathematical in any sense at all! How do you auto-increment the number 4 to become the number 5? You don't!)
(Not that I'm digressing or anything, but ...)
(continuing right along from the previous post, la-di-dah, as if I didn't fall asleep right on my laptop, or anything like that) ... to get there, we now enhance Gödel's translation functions: instead of translating to ⊥ (the uninhabited type), we choose some (arbitrary) type, k, as the translation target, and then do our translation:
⊥^k -> k
(p -> q)^k -> (p^k -> q^k)
The beauty of this is that k, our arbitrary type, can be anything we choose, be it integers or person-objects, or whatever unit of logic we are having the conversation about.
We can now lift the conversation from classical logic to intuitionistic logic, because we now have a translation for 'not' that intuitionistic logic can deal with, and that is:
¬p == p -> ⊥
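In programming terms, swapping ⊥ for an arbitrary answer type k is exactly continuation-passing style. A sketch (the names NotK and returnCPS are mine):

```haskell
-- '¬ₖ p' becomes 'p -> k', a continuation; '¬ₖ¬ₖ p' is then a
-- CPS computation: something that, given a consumer of p, answers in k.
type NotK k p = p -> k

-- p implies ¬ₖ¬ₖ p, for any k whatsoever: this is CPS 'return'.
returnCPS :: p -> NotK k (NotK k p)
returnCPS x k = k x
```

So the choice of k really can be "whatever unit of logic we are having the conversation about": it is just the answer type of the continuation.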
Now, translating from intuitionistic logic to executable (programming) code, that's been covered in the literature by other articles, ... maybe I'll take that under 'i' for intuitionistic logic or 't' for program translation, but not here.
As promised, the Gödel numbering.
There are actually many ways to skin this Schrödinger's cat. One of them, not Gödel's, is to recode what a 'number' is.
For example, in the programming language Forth, at base 36, every number, and letter, is a number!
0 is 0
1 is 1
...
9 is 9
A is 10
B is 11
...
Z is 35
There it is: base thirty-six. In base thirty-six, the number 'number' is
foldl (\total c -> total * 36 + (ord c - ord 'a' + 10)) 0 "number"
Huh. A pretty big number.
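Spelled out as a self-contained function that handles digits as well as letters (base36 is my name for it):

```haskell
import Data.Char (isDigit, ord, toLower)

-- Interpret a string as a base-36 numeral: 0-9, then a-z for 10-35.
-- foldl, not foldr, so the string is read left to right.
base36 :: String -> Integer
base36 = foldl step 0
  where
    step total c = total * 36 + toInteger (value c)
    value c
      | isDigit c = ord c - ord '0'
      | otherwise = ord (toLower c) - ord 'a' + 10
```

For the curious: base36 "number" is 1442151747. A pretty big number indeed.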
So one encoding is to assign primitive functions to numbers (which now include letters), and when we string them together we have a unique number for each formula.
So, if we give
B = \x y z -> x(yz)
C = \x y z -> xzy
I = \x -> x
K = \x y -> x
M = \x -> xx
S = \x y z -> xz(yz)
We can actually write statements of truth using these letters, like:
... anything else
(you see a problem, however: I did have to use open and close parentheses, so those two symbols must also be introduced as numbers).
And the above formulae are actually numbers as well as being formulae, that is, statements of truth that can be reasoned about.
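All of these except M translate directly into Haskell definitions (M, being self-application, does not typecheck without a recursive type, which is its own story):

```haskell
-- the classic combinators as ordinary Haskell functions
b x y z = x (y z)     -- B: composition
c x y z = x z y       -- C: argument flip
i x = x               -- I: identity
k x _ = x             -- K: constant
s x y z = x z (y z)   -- S: substitution/application
-- m x = x x          -- M: not typeable in plain Haskell
```

For instance, S K K behaves as I: s k k 3 is 3.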
That's one numbering; the alphabet-soup that Combinator logic is.
Another is even simpler, reducing down to the simplest pair of combinators (including parenthesization (that, actually, is a word, regardless of what my stupid spell-checker says.) (stupid spell-checker)), and for that we have the iota programming language, or just ι to its friends.
Programs look like this in ι:
Needless to say, that, too, can be reduced to a number which can be reasoned about.
Gödel went in neither of these directions. Instead of taking a functional/lambda approach, he chose to represent mathematical logic in ordinal form, and instead of using combinators to eliminate variable-representation from his statements, he also represented logical variables in his enumerated set of symbols,(1) choosing a basic set of logical symbols to represent as numbers, so each number became, not a function, but a statement of truth.
He would encode a statement of truth by its (lexical) ordering, so:
if ¬ were encoded as 1 and 'p' were encoded as 2, then the statement ¬¬p would be represented as powers of primes:
2^1 * 3^1 * 5^2
And by the fundamental theorem of arithmetic, each unique input statement yields a unique number.
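The scheme is only a few lines of Haskell: symbol codes in, one big number out (godelNum is my name for it):

```haskell
-- Gödel-number a sequence of symbol codes: the i-th prime is raised
-- to the code of the i-th symbol, and the powers are multiplied.
godelNum :: [Integer] -> Integer
godelNum codes = product (zipWith (^) primes codes)
  where
    primes = sieve [2 ..]
    sieve (p : ns) = p : sieve [n | n <- ns, n `mod` p /= 0]
```

With ¬ as 1 and p as 2, godelNum [1,1,2] encodes ¬¬p as 2^1 * 3^1 * 5^2 = 150.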
But how do you represent numbers here?
Gödel used Peano numbers, which, like tally marks, are unary:
z = 0
sz = 1
ssz = 2
sssz = 3
... you get the point
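Peano numerals are a one-liner as a Haskell data type, which makes the "stack of s's over a z" picture literal:

```haskell
-- unary (Peano) naturals: a numeral is a stack of S's over Z
data Peano = Z | S Peano

toInt :: Peano -> Int
toInt Z     = 0
toInt (S n) = 1 + toInt n

-- build the Peano numeral for n (assumes n >= 0)
fromInt :: Int -> Peano
fromInt 0 = Z
fromInt n = S (fromInt (n - 1))
```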
So if 's' is encoded as 3 and 'z' as 4,
then the number 12 is 12 s's followed by a 'z,' which is
2^3 * 3^3 * 5^3 * ... * 37^3 * 41^4
which is a number so big that if I had a dollar for every 's' in its resulting Peano representation, ... well, I'd have a lot of dollars!(2)
Well, so using simple logic symbols (that created very, very, ... very big numbers) (bigger than infinity + 1, even!) (Okay, now I have to go to confession), Gödel was able to create a predicate that determined if a formula was determinable in Principia Mathematica. He named it det(f) ... I suppose because 'Gertrude' was taken, already, so he had to settle for 'det.'
It also could be because 'det' implies 'determinable,' and not because of unrequited love, but mathematicians' hearts are always breaking over a girl named Agatha (just ask one of the world's most famous mathematicians, I'm sure you've heard his name: Brahms. Johannes to his buddies) or Gertrude or Millicent or Chrysanthemum.
But I digress.
So, but what happens when you run a formula that never returns through det?
For example, the formula M is \x -> x x, so the formula M M reduces as follows:
M M -> (\x -> x x) M -> substituting M for x gets you M M -> (\x -> x x) M -> substituting ... forever.
M M gets you M M and when you try to reduce that it gets you M M.
M is known as the looping function. It gives you loops, functionally, or drives you loopy.
Either one. Or both.
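In Haskell, self-application needs a recursive newtype to typecheck; with one, M is definable, M M spins forever exactly as above, and the very same trick yields a fixed-point combinator we can actually run. A sketch (the names D, selfApply, and fix' are mine):

```haskell
-- a type whose values can be applied to themselves
newtype D a = D (D a -> a)

-- M: self-application
selfApply :: D a -> a
selfApply (D f) = f (D f)

-- omega :: a
-- omega = selfApply (D selfApply)  -- M M: reduces to itself, forever

-- the same self-application used productively: a fixed-point combinator
fix' :: (a -> a) -> a
fix' f = selfApply (D (\x -> f (selfApply x)))

factorial :: Integer -> Integer
factorial = fix' (\rec n -> if n == 0 then 1 else n * rec (n - 1))
```

Uncommenting omega and forcing it is the looping function doing its one job.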
So, the point is that det(M M) is ...
There is no answer to that.
And that's the problem, because now you have a case where det itself is ... wait for it.
det is not det.
That's like saying p = ¬p or 5 isn't 5 anymore.
Once Gödel had that, he encoded det as a Gödel number (it was 'rather' large)(like: infinity + 2)(... but not) and then showed that ¬ det(det) was true inside the Principia Mathematica, and using only the axioms and rules that the Principia Mathematica allowed.
The Principia Mathematica, a system Russell (with Whitehead) created and worked so hard to show was complete and consistent ...
Well, Gödel showed that it was incomplete, and showed it entirely within the Principia Mathematica framework.
That show, that proof, was named Gödel's Incompleteness Theorem (again, because Gertrude was taken, at the time), and has forever changed the tenor and tone of the conversation of mathematics since.
Meta-epilogue: 'H' is for Hilbert
Now I was going to go on a rant about how ++ is just plain bad in every sense of the word. Once you can change the number 5 into the number 6, then you have just opened up a box that you can never close again.
In Category Theory a function has identity and composition, but if you can change something, anything, like, the function, then what does identity mean? ('I' is for isomorphism. But later, not now).
Identity loses its meaning in the face of mutability, and provable properties about your theories and your system at large go out the window.
But I'm not going to talk about 'H' is for G++ and how mutability is just plain bad, bad, bad.
I'm going to talk a little bit about Hilbert.
Okay, so this dude was on fire. He basically invented mathematical logic, as a concept, and stood up before the whole world, holding in his hands his list of unsolved problems in math (there were 23 of them), and he shook his fist and said the famous last words: "We must know. We will know!"
Then he died. Then Gödel proved mathematics incomplete, intrinsically so.
Not only that: Gödel, and others using his work, showed at least three of the problems were unsolvable. It wasn't that the solution was hard to get to without computers or supercomputers (which they didn't have at the time), nor even quantum computers ('Q' is for qubit) (later! later!); they were intrinsically unsolvable because of Gödel's incompleteness theorem.
I mean, he died brimming with hope. He had his words carved on his gravestone: "We must know. We will know."
And now, we know we will never, ever know.
'H' is for Hilbert. He thought we were big enough to conquer all of mathematics.
But they did name Hilbert-space after him. So he does have that going for him.
(1) This smacks of coding in the good old days of FORTRAN where you could have any integral variable you wanted, insofar as it was named 'i' or 'j' or 'k' ... what? You want more variables? Go write a different program! Jeez! Why would anybody want a variable named something other than i, j, or k, anyway!
(2) 1154720154130223108008498916387587352955801000 of them, in fact! I'd buy that for a dollar!
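That figure can be checked directly: taking 's' as code 3 and 'z' as code 4, twelve s's and a final z give the first twelve primes cubed times 41 to the fourth (names in the sketch are mine):

```haskell
-- Gödel number of the Peano numeral for 12, i.e. "ssssssssssssz",
-- with 's' coded as 3 and 'z' as 4: the i-th prime is raised to the
-- code of the i-th symbol.
primes :: [Integer]
primes = sieve [2 ..]
  where sieve (p : ns) = p : sieve [n | n <- ns, n `mod` p /= 0]

godelTwelve :: Integer
godelTwelve = product (zipWith (^) primes (replicate 12 3 ++ [4]))
```

Since the Peano representation of a number n contains n s's, the dollar count is the Gödel number itself.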