# News aggregator

### Ken T Takusagawa: [zljfhron] Lagged Fibonacci digits

**Part 1: Two newest taps**

The Fibonacci sequence modulo 10 starting with 0 and 1 repeats with a period of 60, the Babylonians' favorite number: 0 1 1 2 3 5 8 3 1 4 5 9 4 3 7 0 7 7 4 1 5 6 1 7 8 5 3 8 1 9 0 9 9 8 7 5 2 7 9 6 5 1 6 7 3 0 3 3 6 9 5 4 9 3 2 5 7 2 9 1. One could imagine a very confusing digital clock in which two digits slide by every minute (or second).

Other starting pairs give other periods:

Period 20: 0 2 2 4 6 0 6 6 2 8 0 8 8 6 4 0 4 4 8 2.

Period 12: 2 1 3 4 7 1 8 9 7 6 3 9. (Also usable for a clock.)

Period 4: 4 2 6 8.

Period 3: 0 5 5.

Period 1: 0 0.

These periods sum to 100, which exhausts all possibilities of two starting digits.
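These periods are easy to check mechanically. Below is a small sketch (mine, not from the original post) that finds the period of a starting pair by iterating the state (a, b) -> (b, (a + b) mod 10) until the start pair recurs; since this step map is invertible, every starting pair lies on a cycle, so the loop terminates with the exact period.

```haskell
-- Period of the Fibonacci recurrence mod 10 for a given starting pair:
-- iterate (a, b) -> (b, (a + b) `mod` 10) until the start pair recurs.
-- The step is invertible (a = (c - b) `mod` 10), so the start always recurs.
fibPeriod :: (Int, Int) -> Int
fibPeriod start = go (step start) 1
  where
    step (a, b) = (b, (a + b) `mod` 10)
    go s n
      | s == start = n
      | otherwise  = go (step s) (n + 1)
```

In GHCi, `map fibPeriod [(0,1), (0,2), (2,1), (4,2), (0,5), (0,0)]` reproduces the periods 60, 20, 12, 4, 3, and 1 listed above.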

**Part 2: Lagged Fibonacci, newest and oldest taps**

Start with the digits 0 0 0 0 0 0 1 at the top of a 7-column tableau. Fill in the following rows left to right by adding, modulo 10, the digit to the left and the digit immediately above. For the leftmost digit of each row, the digit "to the left" is the last digit of the previous row. This is equivalent to the recurrence a[i] = (a[i-1] + a[i-7]) mod 10. The sequence repeats with an impressively long period of 2480437. The original motivation was a method for a person to produce by hand a page of seemingly random digits with little effort. The tableau begins

0 0 0 0 0 0 1

1 1 1 1 1 1 2

3 4 5 6 7 8 0

3 7 2 8 5 3 3

6 3 5 3 8 1 4

and ends 2480437 digits later:

5 1 7 4 0 4 8

3 4 1 5 5 9 7

0 4 5 0 5 4 1

1 5 0 0 5 9 0

1 6 6 6 1 0 0

1 7 3 9 0 0 0

1 8 1 0 0 0 0

1 9 0 0 0 0 0

1
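The sequence itself is short to generate. Here is a sketch (my own, not the post's source code) that keeps the last seven digits as the rolling state in a Data.Sequence, the structure the post mentions using:

```haskell
import qualified Data.Sequence as Seq
import Data.Sequence ((|>))

-- The lag-7 sequence a[i] = (a[i-1] + a[i-7]) `mod` 10 as an infinite
-- list, carrying the last seven digits as the state.
lagged7 :: [Int]
lagged7 = seed ++ go (Seq.fromList seed)
  where
    seed = [0, 0, 0, 0, 0, 0, 1]
    go s =
      let oldest = Seq.index s 0  -- a[i-7]
          newest = Seq.index s 6  -- a[i-1]
          next   = (newest + oldest) `mod` 10
      in next : go (Seq.drop 1 s |> next)
```

`take 21 lagged7` reproduces the first three rows of the tableau above.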

Periods of other column widths (lag lengths) and their factorizations, starting with 0 0 ... 1:

1 4 = 2*2

2 60 = 2*2*3*5

3 217 = 7*31

4 1560 = 2*2*2*3*5*13

5 168 = 2*2*2*3*7

6 196812 = 2*2*3*3*7*11*71

7 2480437 = 127*19531

8 15624 = 2*2*2*3*3*7*31

9 28515260 = 2*2*5*73*19531

10 1736327236 = 2*2*7*19*31*127*829

11 249032784 = 2*2*2*2*3*7*11*13*71*73

12 203450520 = 2*2*2*3*5*7*13*31*601

13 482322341820 = 2*2*3*5*11*17*31*71*19531

The sequence of periods 4, 60, 217, 1560, 168, ... does not appear in the OEIS. For the first four terms I have confirmed that these are the longest possible periods over all starting seeds, not just 0 0 ... 1. It is curious that the factor 19531 occurs multiple times.

**Part 3: Two oldest taps**

We consider the recurrence a[i] = (a[i-6] + a[i-7]) mod 10, that is, tapping the two "oldest" values. To calculate the next digit on a seven-column tableau, add the number above to the number above and to the right (northeast). Alternatively, the same sequence can be produced on a more compact six-column tableau by adding the number above and the number above and to the left (northwest). This recurrence repeats with a period of 661416, smaller than the period of the corresponding lag-7 sequence in Part 2.

Periods for various lag lengths are given below. They seem neither uniformly longer nor uniformly shorter than those in Part 2.
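The table below can be reproduced by brute force: the state-update map is invertible, so the 0 0 ... 0 1 seed lies on a cycle and the first return to the seed is the exact period. A sketch (not the author's cycle-detection code):

```haskell
-- Period of a[i] = (a[i-n+1] + a[i-n]) `mod` 10 (the two "oldest" taps)
-- for lag n, starting from 0 0 ... 0 1: the state is the last n digits,
-- and we step it until the starting state recurs.
oldestTapsPeriod :: Int -> Int
oldestTapsPeriod n = go (step start) 1
  where
    start = replicate (n - 1) 0 ++ [1]
    -- next digit = oldest + second-oldest; drop the oldest, append the new.
    step (a : b : rest) = b : rest ++ [(a + b) `mod` 10]
    step xs             = xs  -- lags below 2 are not meaningful here
    go s k
      | s == start = k
      | otherwise  = go (step s) (k + 1)
```

For lag 2 this is just the ordinary Fibonacci recurrence, so `oldestTapsPeriod 2` gives 60, matching the first row of the table.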

2 60 = 2*2*3*5

3 168 = 2*2*2*3*7

4 1560 = 2*2*2*3*5*13

5 16401 = 3*7*11*71

6 196812 = 2*2*3*3*7*11*71

7 661416 = 2*2*2*3*7*31*127

8 15624 = 2*2*2*3*3*7*31

9 8894028 = 2*2*3*11*13*71*73

10 1736327236 = 2*2*7*19*31*127*829

11 3712686852 = 2*2*3*7*31*73*19531

12 203450520 = 2*2*2*3*5*7*13*31*601

13 25732419240 = 2*2*2*3*5*11*17*31*71*521

The sequence 60, 168, 1560, ... does not appear in the OEIS. But the analogous sequence modulo 2 (instead of modulo 10) is A046932, and curiously, the analogous sequence of Part 2 taken modulo 2 seems to be the exact same sequence. Also, there's the factor 19531 again. Searching for that number on the OEIS gives a hint of what is going on. The VIC cipher used this recurrence in base 10 with width 10.

**Source code**

Here is a Haskell implementation including Floyd's cycle-detection algorithm. It is only a partial implementation of cycle detection because it does not go back to test that the found period is not a multiple of a shorter period. I'm hoping this second step wasn't necessary.
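For reference, that second step is not needed if all three phases of Floyd's algorithm are run: after the tortoise and hare meet, two further passes recover the cycle start and the exact (minimal) cycle length. A generic sketch, independent of the post's code:

```haskell
-- Floyd's cycle detection on the orbit of f from x0: returns (mu, lambda),
-- the index where the cycle begins and its exact (minimal) length, so no
-- separate check against multiples of a shorter period is required.
floydCycle :: Eq a => (a -> a) -> a -> (Int, Int)
floydCycle f x0 = (mu, lambda)
  where
    -- Phase 1: tortoise moves 1 step, hare 2, until they meet in the cycle.
    meet t h
      | t == h    = t
      | otherwise = meet (f t) (f (f h))
    m = meet (f x0) (f (f x0))
    -- Phase 2: restart one pointer at x0; moving both one step at a time,
    -- they first coincide at the start of the cycle.
    findStart t h k
      | t == h    = (k, t)
      | otherwise = findStart (f t) (f h) (k + 1)
    (mu, start) = findStart x0 m 0
    -- Phase 3: walk once around the cycle to measure its exact length.
    walk h k
      | h == start = k
      | otherwise  = walk (f h) (k + 1)
    lambda = walk (f start) 1
```

Applied to the Fibonacci-mod-10 state map from Part 1, `floydCycle (\(a, b) -> (b, (a + b) `mod` 10)) (0, 1)` gives (0, 60): the seed is already on the cycle, and the minimal period is 60.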

Because the generated sequences were implemented as giant lists, it was very easy to have catastrophic space leaks. I've left in old versions of code demonstrating a few such failures. Even printing out a list followed by its length required some care. This aspect of Haskell, compared to an imperative programming language, I strongly dislike.

The code is far from optimized, most notably because it was an exercise in learning to use Data.Sequence to hold the state. Unboxed arrays would probably have been better.

### Haskell Platform 2014.2.0.0 Release Candidate 3


### Seeking comments on proposed fusion rules

### Does ghc leak memory or is something else going on?

If I do "cabal install cabal-install" then while it is building Cabal I get:

    [46 of 78] Compiling Distribution.Simple.Setup ( Distribution/Simple/Setup.hs, dist/build/Distribution/Simple/Setup.o )
    ghc: out of memory (requested 1048576 bytes)

But if I do a "cabal unpack Cabal" and then go into the directory and do a "cabal build", I get the same error; however, I can just run "cabal build" again and it picks up where it left off and finishes just fine. This seems to happen with any package that contains a lot of files to be compiled. Is this a memory leak in GHC, or is something else going on? Is there any way to tell cabal to compile each file with a fresh ghc process, instead of reusing the same ghc process for the whole build, to at least work around the problem?

submitted by haskellnoob

### wren gayle romano: Transitioning is a mindfuck.

*[Content warning: discussion of rape culture and child abuse]*

Transitioning is a mindfuck. Doesn't matter how prepared you are, how sure you are, how long and deeply you've thought about gender/sexuality issues. Outside of transitioning we have no way of inhabiting more than one position in any given discourse. Sure, we can understand other positions on an intellectual level, we may even sympathize with them, but we cannot empathize with what we have not ourselves experienced, and even having experienced something in the past does not mean we can continue to empathize with it in the present. Julia Serano emphasizes this epistemic limit in her books. And it's no wonder that no matter how prepared you may be, completely uprooting your sense of self and reconfiguring the way the world sees, interprets, and interacts with you is going to fundamentally alter whatever notions you had going into it all.

Since transitioning none of the major details of my identity have changed. I'm still a woman. Still feminine. Still a flaming lesbo. Still kinky, poly, and childfree. Still attracted to the same sorts of people. Still into the same sorts of fashion (though now I can finally act on that). Still interested in all the same topics, authors, and academic pursuits. And yet, despite —or perhaps because of— all this consistency, transitioning is still a mindfuck.


### wren gayle romano: New skin

Hello all,

I just changed the theme/skin for my blog and have been playing around with new fonts, CSS, handling of footnotes, etc. Let me know what you think and whether you run into any issues (especially on older posts). It's been years since I've done webdev, long before CSS3 and HTML5, so things are a bit different from what I'm used to.

In other news, I haven't heard back from the Haskell Planet admins about getting the feed switched over to Haskell/coding/math content only. So, if you've been annoyed by the OT, sorry bout that!


### FP Complete: FP Haskell Center is Going Free

As you know, FP Complete’s mission is to drive the wide-scale adoption of Haskell functional programming. As our next step in this mission, I’m pleased to announce we are releasing our full FP Haskell Center (FPHC) free for developers of open projects. This model will be very familiar to Haskellers and GitHub users.

FPHC has been live online for a little less than a year and has been very successful. I’m proud of our hard-working team who have achieved so much so quickly, using many important open-source components as well as some very good commercial services. Because of this progress and the support and activity from our users, we have reached the point where we can give even more back to the Haskell community and do more to promote the overall advancement of Haskell.

As of October 1, users of our free Community or “Open Publish” edition will have access to features previously offered only to FPHC Commercial customers. This includes complete remote git support, complete git command line integration for Emacs clients, as well as git “mega repos,” and inter-project dependencies.

With these increased free features, the paid Personal edition is no longer needed. If you have a Personal license, we’ll stop charging you, and will renew you to a free Open Publish license. Similarly, Academic accounts will also convert to Open Publish accounts. For more details see the schedule below and our FAQ.

Unlike Commercial licenses, Open Publish accounts will automatically publish all projects on the FPHC site with each commit. Open Publish accounts aren’t available on private servers, and won’t include a shared or private FP Application Server. Of course we continue to offer the paid Commercial license for those who need it — but most people won’t.

We’re confident that access to more capable tools will inspire others to be more involved with pushing Haskell forward, and we believe this move will better suit open source projects and independent developers.

Now that our corporate clients are expanding their work orders with us, we can offer even more for open source developers. This has also afforded us the opportunity to focus on our own innovative Haskell projects which we will be rolling out over time. Our corporate clients are excited by what we’ve been able to deliver with our Haskell-built tools, and this is just the beginning. Of course we also continue to contribute to Yesod, Fay, GHC, Stackage, and other important Haskell community projects beyond FP Haskell Center itself.

Innovation is about motivation, and we hope our free tools for open source Haskell developers provide the resources and motivation to build more projects. What has already been amassed in our School of Haskell is proof that the Haskell community knows how to build great resources given the right tools and forums. Keep up the amazing work and let us know what else we can do to help.

Aaron Contorer, CEO, FP Complete

Planned Release Schedule:

- As of July 31, we will no longer be offering the yearly Personal edition. The monthly edition will continue to be available to allow Personal subscribers to continue using git while we prepare the new FPHC Open Publish license. At this time we will continue to offer FPHC Professional and Community licenses.
- On August 30, we will stop accepting new monthly Personal edition subscriptions.
- On October 1, we will release the new enhanced free Open Publish Community edition and discontinue online purchases and monthly subscriptions. At this time, all previous Community and Personal licenses will become FPHC Open Publish licenses. New yearly paid FPHC Commercial licenses may still be ordered by contacting FP Complete Sales.

### [ANN] hsimport 0.5: configurable pretty printing and placing of imports

hsimport is a command line program for extending the import list of a Haskell source file.

There's an integration for the vim editor, which can automatically extend the import list for the symbol under the cursor.

hsimport 0.5 changes:

- configurable pretty printing of imports
- configurable placing of new imports
- support of multi line imports
- better handling of incomplete/invalid Haskell source files


### Bad interaction of inlinePerformIO and mutablevectors

### Newtype derivation understanding problem

### Feedback on my probability distribution package?

Hello Reddit!

I made a package for manipulating finite discrete probability distributions, and before submitting it to Hackage I would love to get some feedback from you.

The code is available on github: https://github.com/redelmann/haskell-distribution

The package has modules for creating and transforming distributions, measuring them, efficiently sampling random values from them and plotting them to file.

I'm wondering if including the plotting feature is worthwhile considering the number of dependencies. Would you put that module in a package of its own, or would you leave it as is?

I am also unsure about the Data.Distribution.Aggregator module. The naming of the module and some of its functions doesn't feel right. The point of the module is to provide functions that can modify the probabilities in a list of value-probability pairs. This is useful, for instance, for computing cumulative probability distributions.

Thanks for any feedback you might have!

submitted by wargotad

### Gabriel Gonzalez: Scalable program architectures

Haskell design patterns differ from mainstream design patterns in one important way:

**Conventional architecture:** Combine several components together of type A to generate a "network" or "topology" of type B.

**Haskell architecture:** Combine several components together of type A to generate a new component of the same type A, indistinguishable in character from its substituent parts.

This distinction affects how the two architectural styles evolve as code bases grow.

The conventional architecture requires layering abstraction on top of abstraction:

Oh no, these Bs are not connectable, so let's make a network of Bs and call that a C.

Well, I want to assemble several Cs, so let's make a network of Cs and call that a D.

...

Wash, rinse, and repeat until you have an unmanageable tower of abstractions.

With a Haskell-style architecture, you don't need to keep layering on abstractions to preserve combinability. When you combine things together the result is still itself combinable. You don't distinguish between components and networks of components.

In fact, this principle should be familiar to anybody who knows basic arithmetic. When you combine a bunch of numbers together you get back a number:

3 + 4 + 9 = 16

Zero or more numbers go in and exactly one number comes out. The resulting number is itself combinable. You don't have to learn about "web"s of numbers or "web"s of "web"s of numbers.

If elementary school children can master this principle, then perhaps we can, too. How can we make programming more like addition?

Well, addition is simple because we have (+) and 0. (+) ensures that we can always convert **more than** one number into exactly one number:

    (+) :: Int -> Int -> Int

... and 0 ensures that we can always convert **less than** one number into exactly one number by providing a suitable default:

    0 :: Int

This will look familiar to Haskell programmers: these type signatures resemble the methods of the Monoid type class:

    class Monoid m where
        -- `mappend` is analogous to `(+)`
        mappend :: m -> m -> m

        -- `mempty` is analogous to `0`
        mempty :: m

In other words, the Monoid type class is the canonical example of this Haskell architectural style. We use mappend and mempty to combine 0 or more ms into exactly 1 m. The resulting m is still combinable.
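A tiny concrete illustration (mine, not from the post): lists are monoids under concatenation, and pairs of monoids are monoids componentwise, so mconcat collapses any number of compound "components" into exactly one component of the same type, which is itself still combinable:

```haskell
import Data.Monoid (mconcat, mempty)

-- Pairs of monoids combine componentwise, so a list of compound
-- components collapses into one component of the same type.
combined :: ([Int], String)
combined = mconcat [([1, 2], "foo"), ([3], "bar"), (mempty, mempty)]
-- combined == ([1, 2, 3], "foobar")
```

Note that the empty pair `(mempty, mempty)` contributes nothing, exactly as 0 contributes nothing to a sum.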

Not every Haskell abstraction implements Monoid, nor does it have to, because category theory takes this basic Monoid idea and generalizes it to more powerful domains. Each generalization retains the same basic principle of preserving combinability.

For example, a Category is just a typed monoid, where not all combinations type-check:

    class Category cat where
        -- `(.)` is analogous to `(+)`
        (.) :: cat b c -> cat a b -> cat a c

        -- `id` is analogous to `0`
        id :: cat a a

... a Monad is like a monoid where we combine functors "vertically":

    -- Slightly modified from the original type class
    class Functor m => Monad m where
        -- `join` is analogous to `(+)`
        join :: m (m a) -> m a

        -- `return` is analogous to `0`
        return :: a -> m a
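This join-based presentation is interchangeable with the standard one: join can be defined as `(>>= id)`, and for lists it is simply concatenation, collapsing two "vertical" layers into one. A quick check (my example, not from the post):

```haskell
import Control.Monad (join)

-- For lists, join :: [[a]] -> [a] is concatenation: two layers of
-- component collapse into one, which is again combinable.
joined :: [Int]
joined = join [[1, 2], [3], []]
-- joined == [1, 2, 3]
```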

... and an Applicative is like a monoid where we combine functors "horizontally":

    -- Greatly modified from, but equivalent to, the original type class
    class Functor f => Applicative f where
        -- `mult` is analogous to `(+)`
        mult :: f a -> f b -> f (a, b)

        -- `unit` is analogous to `0`
        unit :: f ()
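To see that this really is the familiar Applicative, mult and unit can be recovered from the standard class operations (a sketch of the equivalence in one direction):

```haskell
-- `mult` and `unit` defined via the standard Applicative operations:
-- horizontal combination pairs up results, and `unit` is the empty
-- combination, analogous to 0.
mult :: Applicative f => f a -> f b -> f (a, b)
mult fa fb = (,) <$> fa <*> fb

unit :: Applicative f => f ()
unit = pure ()
```

For lists, `mult [1, 2] "ab"` pairs every element of the first list with every element of the second.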

Category theory is full of generalized patterns like these, all of which try to preserve that basic intuition we had for addition. We convert **more than** one *thing* into exactly one *thing* using something that resembles addition and we convert **less than** one *thing* into exactly one *thing* using something that resembles zero. Once you learn to think in terms of these patterns, programming becomes as simple as basic arithmetic: combinable components go in and exactly one combinable component comes out.

These abstractions scale limitlessly because they always preserve combinability, so we never need to layer further abstractions on top. This is one reason why you should learn Haskell: you learn how to build flat architectures.