Why isn't there a clamp function in Data.Ord? Something like this:

clamp :: (Ord a) => a -> a -> a -> a
clamp mn mx = max mn . min mx
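A quick standalone sketch of the proposed definition in action (written out as its own file, since Data.Ord offers no such function here):

```haskell
-- The proposed clamp: pin a value into the closed range [mn, mx].
clamp :: Ord a => a -> a -> a -> a
clamp mn mx = max mn . min mx

main :: IO ()
main = do
  print (clamp 0 10 42)    -- above the range: pinned to 10
  print (clamp 0 10 (-5))  -- below the range: pinned to 0
  print (clamp 0 10 7)     -- inside the range: unchanged
</imports>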
We have min and max; clamp is a very useful function as well, and a very famous one.

submitted by _skp
I haven't looked into how the Haskell language is standardized, but I have noticed that articles and libraries seem to liberally use extensions specific to GHC, or at least that GHC lists as syntactic extensions. In comparison, the Hugs compiler looks pretty boring and no one talks about it much.
I learned to program with C++, and that community cares very much about standard conformance, but we also have multiple compilers on multiple OS's, and we want to write code that works with all of them at once. So when referring to Haskell, should one assume that everything conforms to GHC, even if GHC does not define Haskell? On a related note: does anyone use a compiler other than GHC, including Hugs, and why?

submitted by SplinterOfChaos
'I' is for isomorphic
Howdy, howdy, howdy! ('H' is for Howdy, but howdy ends with the Y-combinator, so we'll talk about that when we cover the letter 'Y,' as in 'Y' is for Recursion.)
(I laugh, but the Y-combinator isn't necessarily ... well, anything, and not even recursive, because it could be its opposite: inductive) (and by 'opposite' you do know I mean 'dual.')
(Uh, yeah, that.)
So, this new math, this new way of counting, the new way of proving everything, the new, new way of proving you can't prove everything, like, oh, location and velocity at one point.
What's really neat is these new maths prove the coast of Britain is how long, precisely?
Trick question. You may say, 'Oh, the coast of Britain is about 2,000 miles long,' but you're forgetting that the British don't use the British Imperial system to measure distance anymore, 'cause that's how those wild Brits roll.
The metric system, it's so perfectly square, it's cool! (Square, geddit?)
... for those cool brits, because I guarantee you it's below 30° outside on the 2,000 miles of Britain's coast.
It's also a trick question, because using fractal geometries ('fract': 'to break' into smaller and smaller pieces), Mandelbrot, the inventor of the geometry, measured a very precise length of the British coast:

It's of infinite length. Immeasurably so. Mandelbrot knows. He measured it.
Bully for him. Question: did he actually walk those zillion + 1 metres? Huh? Did he? Huh?
All these mathematicians making these bold proclamations, like 'oh, the British coast, when more and more exactly measured, is of infinite length!'
But do they put their moneys (pounds? why is money so heavy? or euros? Why is money so cute and feminine? euro from Europa. And why is Zeus always turning himself into a bull to chase the ladiez? These are imponderables that one faces, from time to time.)
(I mean like all over Greek mythology, girlz be havin' the hots for bulls and having bull-kids that spawn really, really bad fanfiction ... you do know what Catching Fire and Mockingjay are based on, don't you? "Oh, I, Medea will be tribute for my little sister, 'cause I suddenly totally kick ass with the bow, because Battle Royale was made a decade earlier and a decade times better, but nobody knows Battle Royale nor Greek Mythology, so I'll just crib it for some tons of moneys!" Like Fifty Shades of Grey was originally really bad Twilight fanfiction, which isn't hard to find: really bad Twilight fanfiction.)
(Now Twilight came about because Connecticut kicked a girl to the curb who had really bad self-esteem issues, so she wrote a book about that ... and about sparkly vampires. And twenty-eight publishing companies told her: 'You're joking, right?' Then Summit made a really bad movie about it, and all the goth girls lined up to see it, and that's when people said, 'Huh, I guess it's got something,' like Harry Potter, which is a story that, as far as I can tell, is about a fat uncle who wants to punch this nerdlinger, who fed his owl once in seven books, in the face, but he never got that pleasure.)
(The magic of Harry Potter? He has a zero-maintenance owl, and more magic? "Gryffindor wins!" Because why? Because of PFM, that's why!)
(But don't worry if you missed it the first time, because the exact same thing repeats in the next six books, but with just more and more pages, and a kitchen elf with mismatched socks and pro-labour sentiments.)
Yeah. 'I' is for isomorphic, ... that's what I was talking about. I'm sure of it.
So, the question of equality comes up in this new world of not caring what you're operating on, as the operators or functions are the bees' knees.
So how do you tell if two things are 'equal' and what does 'equal' mean now? I mean, saying
5 == 5
is simple enough in the category of numbers, but what about the categories of functions on numbers: how do you measure equality there? Or how about where the units are categories? In the category of categories, how do you know one category is 'equal' to another? Or, testing the identity function: when an object is transformed by a function that may or may not be the identity, how can you tell the object in the codomain is 'the same' as the object in the domain before the function was applied?
It's actually pretty simple. What's hard is saying two things are equal without a clear definition of what equality is.
So let's give the categorical definition of equality: isomorphism. That's a word from the Greek, iso, meaning 'the same' and morphism, meaning 'shape.'
Equality is that object A is equivalent to object B if they both have the 'same' 'shape.'
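In Haskell terms, this can be sketched directly (the Iso type and the Switch example below are my own illustrations, not from any particular library):

```haskell
-- An isomorphism: a pair of functions that are mutually inverse,
-- witnessing that a and b have the 'same shape.'
data Iso a b = Iso { to :: a -> b, from :: b -> a }

-- A hypothetical two-value type with the same 'shape' as Bool.
data Switch = On | Off deriving (Eq, Show)

boolSwitch :: Iso Bool Switch
boolSwitch = Iso (\b -> if b then On else Off) (On ==)

main :: IO ()
main = do
  print (to boolSwitch True)                    -- On
  print (from boolSwitch (to boolSwitch True))  -- round-trips back to True
```

Bool and Switch are different types with different names, but the round-trip shows they are interchangeable: that is the categorical sense of 'equal.'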
Now, for things without a shape that you're familiar with ...
For example, numbers have 'shape' in their magnitude, ...
and parallelograms have shape in their ... well, shape.
But what is the 'shape' of a function? You can't look 'inside' a function to inspect its shape, just as you don't look inside a number to see that the number 3 has the same shape as the sum of the numbers 1 and 2.
Question: what's 'inside' a number?
Answer: Nothing.
Solved that problem that's been bugging mathematicians since Plato and Euclid, so, moving on.
That problem's solved, but that still leaves the question of the shape of functions, and then the shape of categories.
Well, in some cases the shape of things is undecidable. For example, one way to determine (if you can) the shape of functions, or to see if two functions are equivalent, is to feed the same arguments to each of the functions and check that, for the same input values, both functions return the same output values (because the output values have the same shape) (again, that aggravatingly ambiguous test!); if so, we have equivalence.
The problem is if you have something like this function:
let f x = f (x + 1)
Then what is the answer to that?
f 0 = f 1 which is f 2 which is f 3 which is f 4 ...
You'll never be able to determine the shape of that function, because for each value you give to f, it seeks the solution by monotonically increasing the argument, ad infinitum.
So what shape is f? And what if g is
let g y = f (y + 1)
does g have the same shape as f? Sure, ... right? But how do you verify that?
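One hedged way to 'verify' it, at least partially, is extensional testing on a finite sample of inputs, which can refute equality but never fully confirm it (a sketch; agreeOn is my own name for this):

```haskell
-- Compare two functions on a finite sample of inputs. A False result
-- proves the functions differ; a True result is only evidence.
agreeOn :: Eq b => [a] -> (a -> b) -> (a -> b) -> Bool
agreeOn xs f g = all (\x -> f x == g x) xs

main :: IO ()
main = do
  print (agreeOn [0 .. 100 :: Int] (\x -> x + x) (* 2))  -- True: mere evidence
  print (agreeOn [0 .. 100 :: Int] (\x -> x + x) (^ 2))  -- False: a genuine proof they differ
```

Of course, for the f above this test never returns at all, because f never returns on any input: that is exactly the undecidability being described.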
Or how about
let h z = h (g (f z))
What shape is h? Is it the same shape as f or g? or neither of them?
These are valid formulations, but how can we reason about them and then across them to make statements of truth of how these functions relate to each other that we can prove? For many numbers (but not all!) (take that!), many functions, and even many categories, determining equality, or, correctly, isomorphism, is decidable: we inspect the shape (usually with a shape-defining function), see that they are the same, and say, behold, objects A and B are isomorphic! And we're done.
Which is a vast improvement over what we had before:
"I assume for every p, p == p is true."
Which is saying something equals itself because it equals itself.
But then how do you know p == q is true if you don't know what q may be: p, or not-p, or something in the same shape as p? If q is the same shape as p, inhabiting it, is it 'equal' to p?
In Category Theory, we say, 'yes.'
Why do we say 'yes' so easily?
Because we don't care what p and q are.
We care what the functions do. And a function taking either p or q as argument returns the same value, every time.
Good enough (verifiably so) for me. Our functions behave consistently. Let's move on.
Oh, in Latin, there is no 'j.' Not that I'm being IVDGMENTAL or anything.
Here's a neat, little number. Tiny. Infinitesimal, in fact.
The number: *.
That's right. * ('star') is a number. In combinatorial game theory it is the value of the two-player game in which whoever has the next move wins.
The neat thing about * is this: no other number is equal to it, and yet it is neither greater than, less than, nor equal to zero. While

1 = 1

is a provable (and proved) statement of truth, this infinitesimal has its own statement of truth:

* || 0

This reads: "star is incomparable with (or: confused with) zero."
Okay, so, now you have a conversation-starter at all the cocktail parties and soirées that you're attending, you stunning socialite, you.
Don't say I never gave you anything. Have a star.
geophf to move and win.
After all, I did give you the *-move.
So: 'I' for isomorphic ... or is it for the incomparable infinitesimal that isn't?
'M' is for 'mum's the word' from me.
Sometimes, I have a bug in a complex map and filter chain, such as a wrong value coming out the end, or a value that should have been filtered out remaining in the list. How do you deal with this? I've been creating a pair of each value and itself, so that I can see which input resulted in the wrong output, then rewriting my chain to work on the first element of the tuple, but this is tedious. Is there a better way?

submitted by Dooey
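One less tedious approach, sketched here with Debug.Trace from base (not a rewrite of the asker's actual chain): splice traceShowId between stages to watch intermediate values flow by on stderr, without restructuring anything into tuples.

```haskell
import Debug.Trace (traceShowId)

-- traceShowId prints its argument (to stderr) and returns it unchanged,
-- so it can be inserted between any two stages of a chain.
result :: [Int]
result = filter even . map traceShowId . map (* 3) $ [1 .. 4]

main :: IO ()
main = print result
```

Each tripled value (3, 6, 9, 12) appears on stderr as it passes through, while the final result [6,12] is unaffected; deleting the `map traceShowId` stage restores the original chain exactly.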
I just wrote my first "real" Haskell program: a game of life implementation. Would anyone be willing to critique my code? It's here.
Regarding the code:
- I ran into trouble with IO (random and writeout)
- I probably should have used a 2d array to represent the board
- There are some performance issues (related to all the concatenations in the svg code?)
Any thoughts/comments are most welcome, please don't hold back! Thanks.
Thanks for the great comments everybody! I have started updating the code with the suggested changes. In particular: fixing the randomness bug, making evolution and writeboard "pure(r)" and simplifying functions.
I've updated the link above to point to the new version; the original code can be found here.

submitted by stmu
As a spin-off from teaching programming to my 10-year-old son and his friends, we have published a sprite and pixel art editor for the iPad, called BigPixel, which you can get from the App Store. (It has a similar feature set to the earlier Haskell version, but is much prettier!)
* Atom feed support
* Served over https:// (with Heartbleed patched)
* Sources hosted on GitHub
* Main blog over at https://blog.cppcabrera.com
* All content from here ported over
* All posts tagged appropriately
I've documented the process as my first new post at the new location: Learning Hakyll and Setting Up
At this point, the one feature I'd like to add to my static blog soon is the ability to have a Haskell-only feed. I'll be working on that over the coming week.
Thanks again for reading, and I hope you'll enjoy visiting the new site!
The ML Family workshop intends to attract the entire family of ML languages, whether related by blood to the original ML or not. Our slogan is ``Higher-order, Typed, Inferred, Strict''. Designers and users of the languages fitting the description have many issues in common, from data representation and garbage collection to fancy type system features. As an example, some form of type classes or implicits has been tried or been looked into in several languages of the broad ML family. We hope the ML Family workshop is a good forum to discuss these issues.
Also new this year is a category of submissions -- informed opinions -- to complement research presentations, experience reports and demos. We specifically invite arguments about language features, be they types, garbage collection, implicits or something else -- but the arguments must be good and justified. Significant personal experience does count as justification, as do empirical studies or formal proofs. We would be delighted if language implementors or long-time serious users could tell, with examples from their long experience, what has worked out and what has not in their language.
The deadline for submitting an abstract of the presentation, up to 2 PDF pages, is in a month. Please consider submitting and attending!
'G' is for Gödel-Gentzen Translation.
Okay, this is going to feel a little ... weird.
Cue Morpheus marlinspiking Neo through the spine to reinject him into the Matrix.
So, what is 'true' and what is 'false'?
In Latin-based languages there is only 'non,' but there is no word for 'yes.' You'd think there is, but the most common one, 'si,' is from 'sic,' 'this.' And the alternative 'oc' (from history: "the people who say 'oc'") is from 'hoc,' 'this,' too.
There is no 'yes' in Latin, just 'no.'
In mathematics, there is no 'no': there are only instances of things, but no 'not something.'
I mean, there is, but there isn't.
You weirded out yet?
The problem is logic.
There are universals: 'For every x, ...'
And existentials: 'There is some x that, ...'
But no 'non's.
So, if there aren't 'non-existentials,' then we can't really formulate 'not' with any authority.
So we can't say, 'That person is not Julia.'
We can say who she is, but we can't identify her by who she isn't.
So what to do? If 'not' really is the absence of something, and mathematics models what is, then how do you model what isn't?
Some logics just assume 'not' into existence ...
... You see the problem? You're creating a 'nothing' or a 'voider' or some such. How do you create something that isn't?
But some logics do exactly that: 'not' 'false' 'no' they all ... 'exist' in these logics.
From antiquity, too: classical logic admits 'not' into the theory and uses 'not' to prove its theorems.
There it is. (Or more precisely: there it isn't!) (So if it isn't there, then where is it? Nowhere, right?)
(My head hurts.)
So, okay, some logics admit 'not.' And, since some of these logics are from antiquity, there's a lot of foundational work based in these logics that are extremely useful, practical, all that.
It'd be a shame to throw all that away.
But here's the problem. A lot of recent developments have started to question everything, about everything, because, after all, a set of rules or assumptions limit you to a set of world-views. Questioning these views allows you to open new avenues to explore.
So, classical logic admits 'not,' but a more recent logic, Intuitionistic logic, does not allow 'not.'
So what? Intuitionistic logic can go take a hike, right?
Well, the thing is, it's called Intuitionistic logic for a reason, and that is, this logic is a closer model to what our intuitions show us to be the 'real world,' whatever that may be. In particular, Intuitionistic logic is a very good descriptor of type theory that is heavily used in computer science, and, furthermore, (or, 'so, therefore,') statements of Intuitionistic logic have a direct mapping into (some) computer programming languages.
Again, so what?
The 'so what' is this: a statement of Intuitionistic logic has the strength of veracity to it. It can be proved. If you can map that down to a piece of a computer program, then you know, for sure, that that piece of code is correct, and verifiably so (that is, if your mapping is sound, and there are ways to prove the soundness of a mapping, as well, so we got ya covered there, too).
The same cannot be said for classical logic. Because we make the (axiomatic) assumption of 'not,' we get into weird circularities of the types when we try to map statements of classical logic down to computer program fragments.
(Logic is ... hard. It's very dogged in exhausting all possible paths an argument can take.
Argument: "I can't take anymore!"Logic: "Well, tough!"
Logic is just so ... unsympathetic!
The contrapositive statement: when somebody's being sympathetic? ... or sincere? They are not being logical. So you're getting somebody to agree with you, and a shoulder to cry on, but is somebody saying 'aw, that's okay, really!' what you really need? Or would brutal honesty get you out of the fix you're in?)
(But I digress.)
So, what do we do? We have all this neat stuff in classical logic, but we can't use it. We have a logic we can use in Intuitionistic Logic, but do we have to reinvent the wheel to use it?
Then, along came a little guy by the name of Gödel, following up on some really interesting work that Gentzen did. Gentzen was able to prove things about arithmetic (both 'primitively recursive' and 'first-order') using something called construction, that is, by not using 'not,' but by constructing the proof for every number (the 'ordinals') in the arithmetic (and since the ordinals are infinite, there are a lot of them). This is called using constructive theory, or 'ordinal analysis.'
He invented the thing.
Gödel used related techniques (along with his own genius innovations) to prove that the Principia Mathematica system could not establish its own consistency, but he also built on Gentzen's work to provide a mapping between classical logic and intuitionistic logic.
You see, classical logic has three axioms, that is, three statements we accept as 'true' or foundational to the logic:
1. p -> (q -> p) If you have a p, then q implies p is true
2. (p -> (q -> r)) -> ((p -> q) -> (p -> r)) Urgh. You read it.
3. ¬¬p -> p "not-not p" implies p (if you don't have a not-p, then you have p)
We can live with the first two in intuitionistic logic, but the third one uses 'not,' and so can't be handled directly in that logic.
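Under the Curry-Howard reading, 'we can live with the first two' is very concrete: they are the K and S combinators, and writing them down as programs is their proof (a sketch; the axiom1/axiom2 names are mine):

```haskell
-- Axiom 1 is the K combinator (Prelude's const): a proof of p -> (q -> p).
axiom1 :: p -> (q -> p)
axiom1 p _ = p

-- Axiom 2 is the S combinator: a proof of (p -> (q -> r)) -> ((p -> q) -> (p -> r)).
axiom2 :: (p -> (q -> r)) -> ((p -> q) -> (p -> r))
axiom2 f g x = f x (g x)

-- Axiom 3, double-negation elimination, has no such program: no total
-- function inhabits ((p -> Void) -> Void) -> p.
main :: IO ()
main = print (axiom1 (5 :: Int) "ignored")
```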
So, here's the thing, '¬' or 'not' is really just a short-hand for saying
¬p == p -> ⊥
That is 'p implies "bottom"' where ⊥ or 'bottom' is an unprovable statement in the logic: false, fail, void, icky-poo. More rigorously: a type that ... isn't, and this is known as an 'uninhabited type.'
You know: that.
So ¬¬p can also be said thus: (p -> ⊥) -> ⊥
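This shorthand maps directly into Haskell, where the uninhabited type is Data.Void.Void (a hedged sketch; Not, dni, and contra are my own names):

```haskell
import Data.Void (Void)

-- 'not p' as 'p implies bottom.'
type Not p = p -> Void

-- Double-negation *introduction* is constructive: we can write the program.
dni :: p -> Not (Not p)
dni p notP = notP p

-- Contraposition is constructive, too.
contra :: (p -> q) -> (Not q -> Not p)
contra f notQ = notQ . f

-- Double-negation *elimination*, Not (Not p) -> p, has no program:
-- that is exactly the classical axiom intuitionistic logic rejects.
main :: IO ()
main = putStrLn "typechecks"
```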
And then there are two other translation functions from classical logic to intuitionistic logic that Gödel provides:
⊥⊥ -> ⊥
(p -> q)⊥ -> (p⊥ -> q⊥)
That's great, but it really doesn't get us anything, because we are still in classical logic, so we need a translator from classical logic to intuitionistic logic, and we use ⊥ as a superscript ...
... zzzzzz, whoopsie, continuing this convo in the next entry ... *blush*
Why are we relying on numeric identifiers for dependencies when we have so much richer information available?
Reading the recent thread about a proposal to change the PVP has revived a question I've had about how Haskell manages dependencies.
When a package is successfully compiled, GHC has almost perfect information about its dependencies and exports.
Presumably, you can build a package at the time you're releasing it, so why not add all this information to the package itself so it can be used by the package management system?
List each function/constructor dependency with the names of its source package and module, along with the types the function/constructor is constrained to match in order for the package being released to compile.
So if I'm releasing package foo and it contains one module:

module Foo where

foo :: [String]
foo = replicate (id 3) (id "foo")
Then my dependencies should be

base/Prelude.id :: Num a => a -> _foo_Foo_0
base/Prelude.id :: String -> _foo_Foo_1
base/Prelude.replicate :: _foo_Foo_0 -> _foo_Foo_1 -> [String]
Conversely, list each function/constructor the package being released provides, along with its full type, as a package signature:

foo/Foo.foo :: [String]
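For concreteness, the hypothetical foo package's one module compiles and evaluates as you'd expect (renamed Main here so it runs directly; the _foo_Foo_0-style placeholder type names above are the proposal's invention, not GHC output):

```haskell
module Main where

-- The example module from the proposal.
foo :: [String]
foo = replicate (id 3) (id "foo")

main :: IO ()
main = print foo
```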
Handle datatype and typeclass dependencies/provides similarly.
Handle typeclass instances similarly, with a caveat. The package management system would need to decide whether to handle silent re-export by
- EITHER explicitly listing the original source package of the instance as a dependency, and not listing re-exported instances in the package signature.
- OR listing the proximate source package of the instance as a dependency, and listing all re-exported instances in the package signature.
Both options have their benefits and drawbacks.
The machine-generated dependency listing and package signature should be included in the release. The semantic version is used only for the human side of the interaction, while these two files determine how the package manager perceives this version of the package.
Then, when trying to install a package, the package manager can check its dependency listing, and use that to generate a combination of specific package versions whose package signatures unify with the dependencies.
So now the dependency upper bound is defined in terms of what's actually needed, rather than some arbitrary identifier.
Even if it's computationally infeasible for the package manager to find a satisfying set of dependency versions, that can be worked around. The package manager could certainly check efficiently whether a given set of dependency versions' package signatures satisfies the dependency list of a package. This means that after a package is released, users could propose various dependency version sets to some service that the package manager talks to, and the package manager could verify the proposed sets could work.
Notably, as the developer of a package, I don't need to do anything when new versions of my dependencies come out that:
- add new functions/constructors
- make the type of a function/constructor I use more general
- make the type of a function/constructor I use more specific in a way that still unifies with my dependency list
The one risk I can see is if one of my dependencies modifies a function/constructor I use but doesn't change its type. A function becomes partial, or changes the format of the strings it produces, or something like that. Errors from this would be at run-time, not compile-time, and would have to be revealed through testing.
But perhaps we can reuse the idea of a service that the package manager talks to here, and allow users to flag specific dependency sets as bad in this case. This spares other users from reencountering the same issue.
tl;dr - why don't we take advantage of types to let the package manager decide what dependency versions a package should be compile-time compatible with?

submitted by rampion
I'm chugging through Learn You a Haskell, like everyone says to do. A code sample in the book uses the following construct to delete the nth item in a list:

let newStuff = delete (stuff !! n) stuff
Am I missing something, or is scanning the list twice like this inefficient? I tried looking in Data.List for some sort of deleteNth function, such that you could write:

let newStuff = deleteNth n stuff
and only scan the list once, but I was unable to find such a function.
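A single-pass version is only a few lines to write yourself (deleteNth is a hypothetical name; Data.List indeed has no such function):

```haskell
-- Delete the element at index n in one pass; an out-of-range n
-- leaves the list unchanged.
deleteNth :: Int -> [a] -> [a]
deleteNth _ []     = []
deleteNth 0 (_:xs) = xs
deleteNth n (x:xs) = x : deleteNth (n - 1) xs

main :: IO ()
main = do
  putStrLn (deleteNth 2 "abcde")  -- drops 'c'
  putStrLn (deleteNth 9 "abc")    -- out of range: unchanged
```

An alternative one-liner with the same effect is `take n stuff ++ drop (n + 1) stuff`, which also avoids the `!!`-then-`delete` double scan.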
Edit: Sorry, went to sleep after asking. The code sample is from the Input and Output chapter, in the "Todo" program. The list of todo items has already been zipped up with [0..], so it's at least guaranteed to delete the right thing, since every element of the list is a string starting with a different number.

submitted by the_noodle