News aggregator

Haskell-Platform vs GHC/Cabal

Haskell on Reddit - 2 hours 6 min ago

What is your view on using the Haskell-Platform installer versus just installing GHC and Cabal and managing it as you go?

submitted by sigzero
Categories: Incoming News

wren gayle romano: Out of the hospital

Planet Haskell - 4 hours 51 sec ago

I woke up feeling terrible last Monday, and by midnight I was on a bed in the ER. Spent the next few days in the hospital: had surgery on Wednesday, got released on Thursday. Since then I've been resting about the house, and have been recovering quickly. It was my first "real" surgery. (I've had other surgical procedures, but they aren't the sort that doctors mean when they ask if you've "had any surgeries".) Before saying what I had done, I'd like to make a point about social awareness.

Do you recognize these symptoms?

  • sharp shooting pain in the chest, possibly extending to shoulders and neck/throat, lightheadedness/dizziness, shortness of breath.
  • dark urine, yellowing skin/eyes, nausea/vomiting, difficulty concentrating, sleepiness.
  • urinating often, increased thirst/hunger, blurry vision, abrasions heal slowly, tingling/pain/numbness in hands/feet.
  • dull pain in the pit of your stomach, possibly extending to back or right shoulder, possibly increasing after eating fatty foods, doesn't abate in different positions, fever and chills.

In this day and age I'd expect any moderately educated person to recognize the first three immediately: heart disease, liver disease, and diabetes (type 2). Both heart disease and diabetes have had powerful ad campaigns to increase awareness. Liver disease hasn't had that advantage, but the symptoms of jaundice are mentioned in the side-effect reports of most medications, and they're pretty memorable to boot. The last one I never would have recognized until it happened to me. And, frankly, the ER doctors had a hell of a time figuring out what it might be based only on my symptoms. I felt feverish at the time, though my temperature was normal. This was the first out-and-out attack, so I couldn't talk about how often it happened nor say whether it got worse after eating fatty foods. Knowing all the symptoms now, I can look back and see that this has been a long time coming; but at the moment all I could tell the docs was: intense dull pain in the pit of my stomach, doesn't get better or worse when changing position.

These are the symptoms of gallbladder disease. Women are more than twice as likely as men to get it. Women on hormone replacement therapy are more likely to get it. Many women are hit with it during the end of pregnancy— so much so that nurses remark on the number of cholecystectomy patients with one-week old babies. There's something about estrogen (or fluctuations thereof) that doesn't play nicely with the gallbladder. So I mention this for all the women, especially trans women, in the audience. Of course, none of this is to say that there aren't plenty of men who suffer from the same. Prior to going to the ER I'd heard almost nothing about gallbladder disease, other than knowing that gallstones were a thing. But in the scarce week since then I've lost track of how many women have told me about their cholecystectomies. With how common it is, I think this is one of those diseases that we as a society should be able to recognize— like diabetes, heart attacks, and jaundice.

So yeah, I'm down an organ now. There aren't too many side effects of having your gallbladder removed (compared to removing other organs), though it does mean I'll have to watch my diet. I've been doing that anyway; now it's just different things to look for. I'll have to put the high-protein low-carb diet on hold for a couple months, since I need to reduce fat intake until my body gets used to the new me. Also worth noting: apparently losing weight quickly (as with the 30 pounds I dropped last fall) can increase the risk of gallstones. So if you're dropping weight, you should be sure to monitor things and try to flush/cleanse your gallbladder.

That's it for now. Goodnight and good health.

Categories: Offsite Blogs

Roman Cheplyaka: Lens is unidiomatic Haskell

Planet Haskell - 6 hours 20 min ago

Edward Kmett writes:

Ironically if I had to say anything from the traffic in my inbox and on #haskell about it, it is mostly the old guard who gets disgruntled by lens.

So let me try and explain why that is. I’ll go ahead and say this: lens is unidiomatic Haskell.

By which I mean that lens isn’t like any other library that we normally use. It doesn’t follow the conventions of Haskell APIs, on which I elaborate below.

Now let me clarify that this doesn’t necessarily mean that lens is a bad library. It’s an unusual library. It’s almost a separate language, with its own idioms, embedded in Haskell.

It is as unusual as, say, Yesod is. But unlike Yesod, which follows Haskell’s spirit, not letter (syntax), lens, I argue, follows Haskell’s letter, but not its spirit.

So here’s why I think lens violates the spirit of Haskell:

  1. In Haskell, types provide a pretty good explanation of what a function does. Good luck deciphering lens types.

    Here’s a random function signature I picked from lens:

    below :: Traversable f => APrism' s a -> Prism' (f s) (f a)

    Despite having some (fairly basic) understanding of what prisms are, this signature tells me nothing at all, even despite the usage of type synonyms.

    So you have to rely on documentation much more than on types. Yeah, just like in Ruby.

  2. Usually, types in Haskell are rigid. This leads to a distinctive style of composing programs: look at the types and see what fits where. This is impossible with lens, which takes overloading to a level mainstream Haskell probably hasn’t seen before.

    We have to learn the new language of the lens combinators and how to compose them, instead of enjoying our knowledge of how to compose Haskell functions. Formally, lens types are Haskell function types, but while with ordinary Haskell functions you immediately see from types whether they can be composed, with lens functions this is very hard in practice.

  3. The size of the lens API is comparable to the size of what I’d call «core Haskell» (i.e. more or less the base library). It is also similar in spirit to base: it has a large number of trivial combinations of basic functions, in order to create a base vocabulary in which bigger programs are expressed.

    Ordinary libraries, instead, give only basic functions/combinators, and rely on the vocabulary provided by base (or lens) to compose them together.

    This is why I regard lens as a language in its own right. And this demonstrates why learning lens is hard: surely learning a new language is harder than learning a new library.

  4. Dependencies. A library as basic in its purpose as lens ideally should have almost no dependencies at all. Instead, other libraries should depend on it and implement its interface. (Or even do it without depending on it, as is possible with lens.)

    A package implementing lenses that depends on a JSON parsing library and a gzip compression library sounds almost like a joke to me.

    OTOH, it kind of makes sense if you think about lens as a language. It just ships with a “rich standard library”. Nice!

  5. Backward composition of lenses. It’s a minor issue, and I wouldn’t mention it if it wasn’t a great demonstration of how lens goes against the conventions of Haskell.
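The backward order in point 5 is easy to see concretely. Below is a minimal van Laarhoven lens built from base alone (the names view, set, _1', and _2' are my own sketch, not imports from the lens package): chaining with plain (.) reads outermost-component-first, the reverse of how ordinary function composition reads.

```haskell
{-# LANGUAGE RankNTypes #-}
import Data.Functor.Const (Const (..))
import Data.Functor.Identity (Identity (..))

-- The van Laarhoven encoding: a lens really is a Haskell function type.
type Lens s a = forall f. Functor f => (a -> f a) -> s -> f s

view :: Lens s a -> s -> a
view l = getConst . l Const

set :: Lens s a -> a -> s -> s
set l x = runIdentity . l (Identity . const x)

_1' :: Lens (a, b) a
_1' f (a, b) = fmap (\a' -> (a', b)) (f a)

_2' :: Lens (a, b) b
_2' f (a, b) = fmap (\b' -> (a, b')) (f b)

main :: IO ()
main = do
  -- _1' . _2' focuses the second slot of the first component:
  -- read left to right, like OO field access, not like (f . g).
  print (view (_1' . _2') ((1, 2), 3))   -- 2
  print (set (_1' . _2') 9 ((1, 2), 3))  -- ((1,9),3)
```

That the "backward" order falls out of ordinary (.) is a consequence of the encoding, which is part of why lens reads so differently from the rest of Haskell.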

Note that I’m not trying to make a judgment here (although my tone probably does give away my attitude towards lens). I’m simply explaining why people may dislike and resist it.

Nor am I trying to argue against any particular design decision of lens. I’m sure they all have valid rationale behind them.

I just hope that someone will write an idiomatic Haskell library as powerful as (or close to) lens, with perhaps a different set of compromises made. Otherwise, I’m afraid we all will have to learn this new language sooner or later.

Categories: Offsite Blogs

Are there plans for support in GHC for caching/memoization of the return values of pure functions?

Haskell on Reddit - 8 hours 7 min ago

It seems like an obvious optimization. Take the following function (and imagine it as a pure function that is significantly more expensive to compute, such as a hash function):

square :: Int -> Int
square x = x * x

If the platform could recognize that the result of square 16 is always 256, because it's a pure, deterministic function, then it could keep the previously calculated results in a hashtable keyed by the argument.

There are obvious performance gains to be had if the hashtable lookup is significantly faster and cheaper than computing the result every time.

I understand it's not too difficult to implement in the language, but I'm wondering if it has ever been discussed as a potential feature for GHC itself, as an automatic (or optional) optimization for non-monadic functions.
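For the record, GHC has no such feature (a cache like that trades unbounded memory for speed, and deciding when it pays off is hard), but laziness makes the hand-rolled version short. A common sketch, using a lazily built list as the table:

```haskell
-- Naive recursive fib is exponential; backing it with a lazy list
-- means each result is computed at most once, then read from the list.
fib :: Int -> Integer
fib = (table !!)
  where
    table = map f [0 ..]   -- the lazily filled "memo table"
    f 0 = 0
    f 1 = 1
    f n = fib (n - 2) + fib (n - 1)

main :: IO ()
main = print (fib 60)
```

Libraries like memoize and MemoTrie package this pattern up with better-than-list lookup structures.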

submitted by DroidLogician
Categories: Incoming News

What can you NOT do with macro tree transducers?

Haskell on Reddit - 8 hours 47 min ago

I was recently investigating macro tree transducers as a model of computation due to their great computing characteristics. Specifically, every MTT can be composed, so, for example, coding your program with MTTs would mean you get things like Stream Fusion on arbitrary recursive functions for free.

Now I'm wondering: what can't you do with macro tree transducers? What are practical algorithms, necessary in daily programming, that can't be done with those?

submitted by SrPeixinho
Categories: Incoming News

Douglas M. Auclair (geophf): 'T' is for Theorem-proving

Planet Haskell - 10 hours 40 min ago

'T' is for Theorem-proving
So, you've been theorem-proving all your life.
Let's just get that one out there.
When you're given a math problem that says:
"Solve x + 1 = 2 (for x) (of course)"
And you say, "Why, x is 1, of course," quick as thinking, you've just proved a theorem.
Here, let me show you:
x + 1 = 2 (basis)
   -1 = -1 (reflexive)
x + 0 = 1 (addition)
x     = 1 (identity)
Q.E.D. ('cause you rock)
So, yeah. Theorem proving is this, correction: theorem proving is simply this: going from step 1, to step 2 to step 3 until you've got to where you want to be.
How do you go from step 1 to step 2, etc? The same way you do everything! You follow the rules you're given.
Let's prove a theorem, right now.
So, in classical logic, we have a theorem that says
p → p
That is, if you've got a p, that implies (that you've got a) p.
But proving that? How do we go about doing that?
Well, in classical logic, you're given three basic axioms (thanks to sigfpe for his article "Adventures in Classic Land"):
1. p → (q → p)
2. (p → (q → r)) → ((p → q) → (p → r))
3. ¬¬p → p
So, p → p. How do we get there? Let's start with axiom 1 above.
1. p → ((p → p) → p)     axiom 1 (where q = (p → p))
2. (p → ((p → p) → p)) → ((p → (p → p)) → (p → p))     axiom 2 (where q = (p → p) and r = p)
3. (p → (p → p)) → (p → p)     modus ponens
4. p → (p → p)     axiom 1 (where q = p)
5. p → p     modus ponens
Q.E.D. (a.k.a. whew!)
I used something called modus ponens in the above proof. It says, basically, that if you've proved something that you're depending on, you can drop it. Or, logically:
p → q, p       ∴ q
Or: we have "p implies q," and we've proved p, so now we can just use q.
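As a Haskell aside: under the Curry–Howard reading, axioms 1 and 2 are exactly the types of the K and S combinators, modus ponens is function application, and the five-step proof above is the classic S K K = I derivation. A sketch, with my own names:

```haskell
-- Axiom 1 is the type of K (i.e. const); axiom 2 is the type of S.
k :: p -> q -> p
k x _ = x

s :: (p -> q -> r) -> (p -> q) -> p -> r
s f g x = f x (g x)

-- The theorem p -> p, assembled exactly as in the proof:
-- two applications (modus ponens) of instances of the axioms.
i :: p -> p
i = s k k

main :: IO ()
main = print (i 42)   -- the proof term behaves as the identity
```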
Now, there's been a lot of thought put into theory, coming up with theorems and proving them, and this has been over a long time, from Aristotle up to now. The big question for a long time was that ...
Well, theorem proving is a lot of hard work. Is there any way to automate it?
So there's been this quest to make theorem proving mechanical.
But to make theorem-proving mechanical, you have to mechanize everything, the axioms, the rules, and the process. Up until recent history, theory has been a lot of great minds spouting truths, and 'oh, here's a new way to look at the world!'
And that has been great (it has), but it hasn't helped mechanize things.
Then along came Frege. What Frege did was to give us a set of symbols that represented logical connectives and then symbols that represented things.
And there you had it: when you have objects and their relations, you have an ontology, a knowledge-base. Frege provided the tools to represent knowledge as discrete things that could be manipulated by following the rules of the (symbolized) relations.
He gave abstraction to knowledge and a uniform way of manipulating those abstractions, so, regardless of the underlying knowledge, be it about money or chickens or knowledge itself, it could be represented and then manipulated.
That way you could go from step 1 to step 2 to step 3, etc, until you arrived at your conclusion, or, just as importantly, arrived at a conclusion (including one that might possibly say what you were trying to prove was inadmissible).
Since Frege, there has been (a lot of) refinement to his system, but we've been using his system ever since because it works so well. He was the one who came up with the concept of types ('T' is for Types), and from that we've improved logic to be able to deliver this blog post to you on a laptop that is, at base, a pile of sand constantly proving theorems in a descendant of Frege's logic.
Let's take a look at one such mechanized system. It's a logical framework called Twelf. The first example proof from the Princeton team that uses this system is 'implies true,' and it goes like this:
imp_true: pf B -> pf (A imp B) = ...
That's the declaration. It's saying: the proof of B, or pf B, implies the proof of A implies B, or pf (A imp B).
How can you claim such a thing?
Well, you prove it.
imp_true: pf B -> pf (A imp B) =
   [p1: pf B]   % you have the proof of B
   % ... but now what? for we don't have the proof that A implies B.
So the 'now what' is our theorem-proving adventure. Mechanized.
We don't have the proof that A implies B, but we have a logical loop-hole (called a hole): an assumption that any proposition has a proof:
hole: {C} pf C.
Which says, mechanized, what I just said in words. So, with that:
imp_true: pf B -> pf (A imp B) = [p1: pf B] (hole (A imp B)).
And there you go.
... that is, if you're fine with a big, gaping loophole in your proof of such a thing.
If you are satisfied, then, well, here, sit on this rusty nail until you get unsatisfied.
So there.
Okay, so we have an implication, but we need to prove that.
So we introduce the implication into the proof, which is defined as:
imp_i: (pf A -> pf B) -> pf (A imp B)
SO! we have that, and can use it:
imp_true: pf B -> pf (A imp B) = [p1 : pf B] (imp_i ([p2: pf A] (hole B))).
So, we have the proof of A and that leads to B. But wait! We already have B, don't we? It's p1 (that is [p1 : pf B])
imp_true: pf B -> pf (A imp B) = [p1 : pf B] (imp_i ([p2 : pf A] p1)).
This is what theorem-proving is: you start with what you want, e.g.:
imp_true: pf B -> pf (A imp B)
which is "implies-true is that if you have B proved, then you have the proof that (A implies B) is true, too."
Then you take what you've got:
pf B
And give it a value:
[p1 : pf B]
Then you apply your rules (in this case, implication introduction, or 'imp_i') to fill the holes in your proof, to get:
[p1: pf B] (imp_i [p2: pf A] p1)
Which is the demonstration of your proof, and that's why we say 'Q.E.D.,' or 'quod erat demonstrandum': 'thus it has been demonstrated.'
Ta-dah! Now you have all the tools you need to prove theorems.
Now go prove Fermat's Last Theorem.
(I am so mean!)
Okay, okay, so maybe Fermat's Last Theorem is a bit much, but if you want to try your hand at proving some theorems, there's a list of some simple theorems to prove.
Categories: Offsite Blogs

Is a left-to-right coding style ok?

Haskell on Reddit - 19 hours 15 min ago

In exploring the lens library, I recently discovered the general purpose combinators. I find using (&) (flip ($)) addictive. Below is an example of summing all the odd numbers from 1 to 10 and printing the result.


print . foldr (+) 0 . filter odd $ [1..10]


[1..10] & filter odd & Prelude.foldr (+) 0 & print

My brain works from the raw materials at hand to the final objective. Get a list from 1 to 10, filter the odd numbers, sum them, and then print the result. For me, writing right-to-left code involves a lot of left-arrowing as I go through these steps.

What I really like about left-to-right is when you need to move from pure code to a monad. Assume that the list you are odding and summing is coming from an IO effect:


print =<< (liftM $ foldr (+) 0 . filter odd) (return [1..10] :: IO [Int])

left-to-right (<&> is flip fmap)

(return [1..10] :: IO [Int]) <&> (filter odd) <&> foldr (+) 0 >>= print

It took a lot longer for me to work out the liftM / =<< / (.) combo required to keep it a one-liner, compared with the rather snazzy <&> with a natural >>= to deal with print.
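For what it's worth, newer versions of base ship both operators without the lens dependency: (&) lives in Data.Function and (<&>) in Data.Functor. A sketch of both pipelines, assuming a reasonably recent GHC:

```haskell
import Data.Function ((&))
import Data.Functor ((<&>))

main :: IO ()
main = do
  -- pure pipeline, reading left to right
  [1 .. 10] & filter odd & foldr (+) 0 & print   -- 25
  -- effectful source: <&> maps over the result, >>= hands it to print
  (return [1 .. 10] :: IO [Int]) <&> filter odd <&> foldr (+) 0 >>= print
```

Both (&) and (<&>) are infixl 1, the same fixity as (>>=), which is what lets the mixed pure/monadic pipeline read in one direction without parentheses.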

Do other people code like this, and would it be considered to be idiomatic (enough that it doesn't annoy others sharing my code)? And are there major problems with adoption of a left-to-right coding style?

submitted by tonyday567
Categories: Incoming News

Is there a library class for splitting a value into a part and rebuild-the-value function?

Haskell on Reddit - Tue, 04/22/2014 - 9:56pm

I've just been reviewing some basic functional programming ideas and playing with functions as values-with-holes. One thing I came up with was...

class Cut a b where
  cut :: a -> (b, b -> a)
  -- snd (cut x) $ fst (cut x) = x

The cut function breaks a piece out of a value, returning the pair of the piece and a reconstruct-the-value function. The law states that if I use the reconstruct function with the same piece I cut out I get back the original value.

This looks like it might be a useful idea, so is something like it already in a library somewhere? Are there any papers I should read that use a similar class in an interesting way?

I feel like I'm missing something obvious, so sorry if this is dumb.
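For concreteness, here is one instance (the class and its law are the poster's; the instance and the functional dependency, which I added so the examples type-check without annotations, are my own illustration). The shape a -> (b, b -> a) is essentially the getter-plus-setter, or "store", presentation of a lens, which is likely the connection being half-remembered:

```haskell
{-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies, FlexibleInstances #-}

class Cut a b | a -> b where
  cut :: a -> (b, b -> a)
  -- law: snd (cut x) (fst (cut x)) == x

-- Cut the first component out of a pair; the rebuild function
-- closes over the rest of the value.
instance Cut (x, y) x where
  cut (x, y) = (x, \x' -> (x', y))

main :: IO ()
main = do
  let (piece, rebuild) = cut ((1, "rest") :: (Int, String))
  print (rebuild piece)   -- (1,"rest"): the law holds
  print (rebuild 42)      -- (42,"rest")
```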

submitted by ninereeds314
Categories: Incoming News

How can I tell GHC to pre-compute all constants?

Haskell on Reddit - Tue, 04/22/2014 - 6:57pm

Consider this program:

even_fibs = take 200000 $ filter even $ map fst $ iterate (\(a,b) -> (b,a+b)) (0,1)
solve x = sum_digits $ sum $ take x even_fibs
solutions = map solve [0..200000]
sum_digits = sum . map (read . return) . show
main = readLn >>= (\x -> print $ solutions !! x)

Now mind this compilation/execution:

$ ghc -O2 programa.hs -o programa
$ echo 100000 | time ./programa
282042
        3.13 real         2.11 user         0.90 sys

There is a problem with that: solutions is a constant, static list. It has 200000 elements and none of them depends on any variable. It can be calculated at compile time, and in fact that is what I want, or else I wouldn't have coded it like that. Yet GHC does not do this. As a result, a program that is supposed to be instantaneous needs 3 seconds to run. How can I force GHC to do it?
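One honest workaround (a sketch, not a GHC optimization): Template Haskell splices run during compilation, so lifting a value forces it to be computed at compile time and baked into the binary as a literal. A small stand-in for the poster's table:

```haskell
{-# LANGUAGE TemplateHaskell #-}
-- The splice is evaluated while compiling; the compiled program
-- just prints a literal instead of redoing the sum at run time.
import Language.Haskell.TH.Syntax (lift)

precomputed :: Integer
precomputed = $(lift (sum [1 .. 100000 :: Integer]))

main :: IO ()
main = print precomputed   -- 5000050000, computed at compile time
```

For a 200000-element table this would bloat the binary considerably, which is one reason GHC never does this kind of evaluation automatically.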

submitted by SrPeixinho
Categories: Incoming News

Handy - A Haskell-based Virtual Machine for executing ARM Assembly Code

Haskell on Reddit - Tue, 04/22/2014 - 6:03pm

For my undergraduate dissertation I wrote a simulator that executes ARM assembly code for the older ARM4T instruction set architecture using Haskell.

It's my first significant project in Haskell and there are some things I would do differently if I were to start it again today, so I can't attest to the quality of my code. I'm quite happy with the result, though, and wanted to share it. I would be open to any critique people can offer.

The entire project including my dissertation is available here:

To view the dissertation run pdflatex main in the report directory and open main.pdf.

submitted by minuteman3
Categories: Incoming News

Job offer at Create-Net, Italy

Haskell on Reddit - Tue, 04/22/2014 - 2:25pm

Hi guys, I'm surprised I didn't get any reply from the community to this job offer, which I posted some time ago:

While it's not 100% guaranteed that you'd work in Haskell all the time, the position is still very interesting, with technologies such as constraint programming (I'm currently exploring SMT with the SBV library), cloud computing, and energy efficiency.

submitted by kaukau
Categories: Incoming News

Just a little decider

Haskell on Reddit - Tue, 04/22/2014 - 1:51pm

Just wanted to share a little snippet of code that I wrote to help me decide on the order of songs for a recording session. Given that I had around 20 song choices, I thought I'd throw them in a binary tree and let some insertion strategy handle the sorting for me.

In the end I got a result after 79 decisions and I am quite happy with the resulting order.

The Code:

(And yes ... unsafePerformIO is bad ... but this was hacked together in like 10 minutes and probably resulted in a more qualified decision result.)

submitted by goliatskipson
Categories: Incoming News

WJWH/HPi - Tue, 04/22/2014 - 1:33pm
Categories: Offsite Blogs

Hayoo Alfred 2 workflow for functions and packages search

Haskell on Reddit - Tue, 04/22/2014 - 1:24pm

I don't really like hoogle because sometimes it does not find functions from some packages like hxt. That's why I created an alfred hayoo workflow. Big thanks to @carlosgaldino's alfred-hoogle-workflow for showing me how to do that. Get it!

submitted by prettynatty
Categories: Incoming News

Douglas M. Auclair (geophf): 'S' is for Simply Summing

Planet Haskell - Tue, 04/22/2014 - 1:06pm

'S' is for ... simply summing.
Okay, this post was going to be about symmetries and super-symmetries, then I was going to transition to sieves, like the Sieve of Eratosthenes, and how that relates to fizz-buzz, ...
But then I thought: 'eh.'
Yeah, that's what I thought: 'eh.'
I'm a moody brute, filled with deep, indiscernible thoughts, 'cause that's how I roll.
So, yeah: eh. This one will be a simple post today.
Sum the number 1.
Yeah, I just said that; humor me.
Okay, the sum of 1 is 1. Toughie.
Next, sum 1 + 2.
3, right? Still with me?
Okay, 1 + 2 + 3.
Got it? 6. No probs.
Now: sum 1 to 10. No laptop help, just sum it yourself.
A little more fun, right? So, that's 55.
Okay, no computers: sum 1 to 100.
Hm, hm, hm. Done yet?
La-di-dah. You're not even trying, are you? You're just reading this post.
Okay, the sum from 1 to 100, inclusive, no computers is ... 5,050.
1 to 1000: 500,500.
Pattern? You bet.
1 to ... oh, I don't know: 123:
That's a mite harder, but, no computers:
6,200 + 1,240 + 186 = 7,626.
Okay, let's verify, by asking my Haskell interpreter: 
Prelude> sum [1 .. 123]
7626
Yeppers, I was right. I was also as fast as a person typing a program into their computer to solve it, if they were using a functional programming language, and much, much faster than a person using a computer if they weren't and called up Excel, for example (loading ... loading ... loading ... blue screen of death) (You really need to virus-clean your computer, because I see you don't have a Mac, do you, tsk, tsk!) or if they pulled out their HP calculator from their high school days because they are that old, boys and girls!
You see this thing in my hand, children? This is not a smart phone, it's a calculator.
Children: Ooh! Papa, can I borrow it for a sec ...
Children, borrowing it, then puzzling over it: Um, Dad, ... where's the Plants vs. Zombies 2 app?
Yeah, a calculator, it calculates. And that's it. No google, no nothing, just numbers and numeric operations.
Children, looking uncomprehendingly at the thing in their hands, then tossing the cootied thing away and running to their rooms, screaming, remembering with horror the time Dear Old Dad got out an LP and told them that music came out of it, but ... it didn't have playlists!
Yeah. Calculator. But would your calculator get you that sum that fast? I mean, after you go to the CVS to buy a new battery. Those things are long-lasting, but thirty years later? And you expect your calculator to still just work?
Pfft! Heh.
Pick a number (natural number), any number, I'll sum it for you, from the origin.
Sum 1 to 31415.
31415 * 15708 = 314,150,000 + 157,075,000 + 21,990,500 + 0 + 251,320 = 493,466,820
Okay, that took a bit longer, let's see what Haskell says:
Prelude> sum [1..31415]
493466820
Okay. Damn! ... Sometimes I impress even myself.
How do I do this? How do I add all the numbers from 1 to 31,415 and in under a minute? And perfectly, perfectly, correctly?
Well, I'll tell you.
But first, I'll tell you a story.
Once upon a time, there was a little boy named Gauss, and he was a naughty, little, precocious lad, always getting into trouble for dunking Susie Derkins' pigtails in the inkwell at his desk, and that would be fine if she had jet black hair, but you know the Viking types, right? All blond-haired, blue-eyed things, and getting your honey-blond long-flowing locks dunked into ink was not on the agenda for poor, little Susie, who cried, and had to be sent home to console herself with an appointment at the beautician's who thought the Goth-look actually fit Susie well, so she came back the next day as Suze-in-black-leather-and-studs which was disruptive for the class in other ways, so Gauss found himself at detention again.
But this time the math teacher had had enough of little Gauss' pronouncements and recitations of π to sixty-seven places, and decided to teach the little scamp but good.
Not that I'm channeling, or anything. It's not 'Mary Sue'-writing; it's 'walking in your character's moccasins a mile' ... even though Gauss probably didn't wear moccasins all that often, anyway.
Still with me?
So, mean-old Herr Schopenhauer told little Gauss. "Okay, twerp, this is gonna be a light one on you. You can go home after you sum the number from one to one hundred."
Huh? One to one hundred? But that will take all night!
"That's right," said mean old Herr Bergermeister Meisterburger, "so you'd better get crackin'!" And he cackled evilly with Schadenfreude.
The Head-Master had several names, don't you know, ... and a predilection for Schadenfreude with his afternoon tea and muffin.
So, what could little Gauss do? He got cracking, the poor lamb.
1 = 1
1 + 2 = 3
1 + 2 + 3 = 6
1 + 2 + 3 + 4 = 10 ... obviously, you just add the last number to the previous sum
1 + 2 + 3 + 4 + 5 = 15 ... now, wait a minute ...
1 + 2 + 3 + 4 + 5 + 6 = 21 ... there seems to be another pattern here ...
1 + 2 + 3 + 4 + 5 + 6 + 7 = 28 ... Gauss looked at the chalkboard, then ...
Got it! Little Gauss thought.
Then he wrote:
1 + 2 + 3 + ... + 98 + 99 + 100 = 5,050.
"G'nite, Teach!" Gauss crowed, and off he skipped home with Suze, where they shared a chocolate milk and a strawberry poptart ... before all that sugar glaze had been added to the crust (shudder).
And Herr Herren HerringSchönheit was left stuttering a "Vat? Vat? Git back here, you rapscallion!"
Okay, what's a rap-scallion? Is it a thug onion with 'tude, a glock, and a triple-platinum pedigree?
But all Herr whatz-his-name could do was sputter, because little Gauss got it.
The sum from one (zero, actually) to any number reduces to a simple formula:
sum [1..n] = n * (n + 1) / 2
Either n or n + 1 is even, so one of them is divisible by two, so it works.
Try it out:
1 = 1 * (1 + 1) / 2 = 1 * 2 / 2 = 1
1 + 2 = 2 * (2 + 1) / 2 = 2 * 3 / 2 = 3
1 + 2 + 3 = 3 * (3 + 1) / 2 = 3 * 4 / 2 = 3 * 2 = 6
... so obviously:
1 + 2 + 3 + ... + 31,415 = 31,415 * (31,415 + 1) / 2 = 31,415 * 15,708 = that big number I computed earlier. Yeah, almost five-hundred million!
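The closed form is easy to check mechanically against the interpreter's naive sum (a quick sketch):

```haskell
-- Gauss's closed form versus the naive left-to-right sum.
gauss :: Integer -> Integer
gauss n = n * (n + 1) `div` 2

main :: IO ()
main = do
  print (gauss 123)                                       -- 7626
  print (gauss 31415)                                     -- 493466820
  print (all (\n -> gauss n == sum [1 .. n]) [0 .. 500])  -- True
```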
Put that into your "ceci n'est pas une pipe" and smoke it. I'd buy that for a dollar.
Now, there's something you can impress your friends at Happy Hour with.
Okay, so you can sum 1 to ... whatevs, but what if a friend asks you, very kindly, and very politely, 'Okay, Ms. Smartypants, how about summing 1,000 to 2,000? What's that, huh? Betcha can't do that! Huh? Amirite? Amirite? Lookacha sweating bullets now, arencha? So, well, what is it? Huh?"
Yeah, really politely. Like that.
Now you could say, "But-but-but, Gauss' summer[-function, not -time] doesn't work like that."
But then you'd have feet of clay and egg on your face.
And with this hot summer coming, you don't want egg on your face. Trust me on that one.
So. What to do?
Think about it, that's what.
You simply make your 1,000 your zero by scaling each number back by 1,000, then get your sum, then scale each number forward again by 1,000. For each time you applied the scale, you apply it to the result as well.
So, the sum of 1,000 to 2,000 inclusive is:
1,000 + 1,001 + 1,002 + ... 1,998 + 1,999 + 2,000 =
scaled back is
0 + 1 + 2 + ... + 998 + 999 + 1000 = 1000 * (1000 + 1) / 2 = 500,500
Now scale each number forward again, by adding 1,000 back to each number. You did that 1,001 times, right? (Don't forget to count the zero.) That's a rescalation (that's a word now) of 1,001,000. Add that to your sum to get 1,501,500.
There's your answer.
Let's check:
Prelude> sum [1000..2000]
1501500
Bingo! Tickets! This must be the Front Row!
And you, very politely, just told your friend where they could put their superior 'tude, too.
Win-win! Good times; good times!
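All that rescaling collapses into one closed form. The helper name rangeSum is my own, but the arithmetic is exactly the dance above: Gauss on the zero-shifted range, plus the shift once per term.

```haskell
-- Sum of [a..b]: Gauss on the zero-based range of (b - a + 1) terms,
-- then add the shift a back once per term.
rangeSum :: Integer -> Integer -> Integer
rangeSum a b = gauss (b - a) + a * (b - a + 1)
  where gauss n = n * (n + 1) `div` 2

main :: IO ()
main = print (rangeSum 1000 2000)   -- 1501500
```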
See y'all tomorrow. By the way, what's the number of moves needed to solve the Towers of Hanoi? Not that 'Towers of Hanoi' is going to be my topic for 'T.'
Not at all.
Categories: Offsite Blogs