News aggregator

Robert Harper: A few new papers

Planet Haskell - Fri, 08/01/2014 - 12:40pm

I’ve just updated my web page with links to some new papers that are now available:

  1. “Homotopical Patch Theory” by Carlo Angiuli, Ed Morehouse, Dan Licata, and Robert Harper. To appear, ICFP, Gothenburg, October 2014. We’re also preparing an expanded version with a new appendix containing material that didn’t make the cut for ICFP. (Why do we still have such rigid space limitations?  And why do we have such restricted pre-publication deadlines as we go through the charade of there being a “printing” of the proceedings? One day soon CS will step into its own bright new future.) The point of the paper is to show how to apply basic methods of homotopy theory to various equational theories of patches for various sorts of data. One may see it as an application of functorial semantics in HoTT, in which theories are “implemented” by interpretation into a universe of sets. The patch laws are necessarily respected by any such interpretation, since they are just cells of higher dimension and functors must behave functorially at all dimensions.
  2. “Cache Efficient Functional Algorithms” by Guy E. Blelloch and Robert Harper. To appear, Comm. ACM Research Highlight this fall.  Rewritten version of the POPL 2013 paper meant for a broad CS audience.  Part of a larger effort to promote integration of combinatorial theory with logical and semantic theory, two theory communities that, in the U.S. at least, ignore each other completely.  (Well, to be plain about it, it seems to me that the ignoring goes more in one direction than the other.)  Cost semantics is one bridge between the two schools of thought, abandoning the age-old “reason about the compiled code” model used in algorithm analysis.  Here we show that one can reason about spatial locality at the abstract level, without having to drop down to the low level of how data structures are represented and allocated in memory.
  3. “Refining Objects (Preliminary Summary)” by Robert Harper and Rowan Davies. To appear, Luca Cardelli 60th Birthday Celebration, Cambridge, October 2014.  A paper I’ve been meaning to write sometime over the last 15 years, and finally saw the right opportunity, with Luca’s symposium coming up and Rowan Davies visiting Carnegie Mellon this past spring.  Plus it was a nice project to get me started working again after I was so rudely interrupted this past fall and winter.  Provides a different take on typing for dynamic dispatch that avoids the ad hoc methods introduced for OOP, instead deploying standard structural and behavioral typing techniques to do more with less.  This paper is a first cut, a proof of concept, but it is clear that much more can be said here, all within the framework of standard proof-theoretic and realizability-theoretic interpretations of types.  It would help to have read the relevant parts of PFPL, particularly the under-development second edition, which provides the background elided in the paper.
  4. “Correctness of Compiling Polymorphism to Dynamic Typing” by Kuen-Bang Hou (Favonia), Nick Benton, and Robert Harper, draft (summer 2014).  Classically, polymorphic type assignment starts with untyped λ-terms and assigns types to them as descriptions of their behavior.  Viewed as a compilation strategy for a polymorphic language, type assignment is rather crude in that every expression is compiled in uni-typed form, complete with the overhead of run-time classification and class checking.  A more subtle strategy is to maintain as much structural typing as possible, resorting to the use of dynamic typing (recursive types, naturally) only for variable types.  The catch is that polymorphic instantiation requires computation to resolve the incompatibility between, say, a bare natural number, which you want to compute with, and its encoding as a value of the one true dynamic type, which you never want but are stuck with in dynamic languages.  In this paper we work out an efficient compilation scheme that maximizes statically available information, and makes use of dynamic typing only insofar as the program demands we do so.  Of course there are better ways to compile polymorphism, but this style is essentially forced on you by virtual machines such as the JVM, so it is worth studying the correctness properties of the translation, which we do here making use of a combination of structural and behavioral typing.

I hope to comment here more fully on these papers in the near future, but I also have a number of other essays queued up to go out as soon as I can find the time to write them.  Meanwhile, other deadlines loom large.

[Update: added a fourth item neglected in the first draft. Revised formatting. Added links to people. Added a brief summary of the patch theory paper. Minor typographical corrections.]

[Update: the promised expanded version of the forthcoming ICFP paper is now available.]


Filed under: Programming, Research Tagged: behavioral typing, cache efficient algorithms, compilation, cost semantics, dynamic dispatch, homotopy type theory, ICFP, polymorphism, structural typing, type refinements
Categories: Offsite Blogs

Splitting Network.URI from the network package

libraries list - Fri, 08/01/2014 - 12:31pm
This was brought up last year[1], and I'd like to bring it up again, based on a recent issue I was working through with a user[2]. I realize that this is a breaking change, but:

1. It's a minor breaking change: you simply need to add an extra package to your build-depends.
2. The problems caused by having a parsec dependency in network can be severe, especially for new users (I'll describe the details after the proposal).

Concretely, I believe we should do the following:

1. Create a new package, network-uri, version 2.5.0.0, which exposes no modules and has an upper bound of `network < 2.6`.
2. Create a second release of network-uri, version 3.0.0.0, which provides the Network.URI module verbatim as provided by the network package today, and has a lower bound of `network >= 3.0`.
3. Release network version 3.0.0.0, with no changes from the currently released version, except that (a) no Network.URI module is provided, and (b) there is no parsec dependency.

I don't remember how the discussion went last time, but
Categories: Offsite Discussion

Does web-based email harm mailman lists?

haskell-cafe - Fri, 08/01/2014 - 10:46am
Beautiful haskell people,

Ever noticed the lacunae on some list threads? Someone hits reply and, instead of reflecting via mailman, it goes directly to the OP. The OP notices the absence of To: haskell-cafe and adds it back in in their reply to the reply. End result? The thread looks like the OP having a convo with themselves, unless you look at the quoted parts, which you have to click to reveal in web-based email.

The convention, say with a google-groups based mailing list, is that conversations on the mailing list are public by default. With some manual C&P, you can email responses in private. For cafe participants using web-based email, the situation is reversed, through no fault of their own.

Approx 18 months ago, the haskell-beginners list suffered the same problem [1]. After some digging, it looks like there's a configurable option to Do The Right Thing: http://ccit.mines.edu/Mailman-FAQ#25

I was also privately emailed that there are downsides I wasn't aware of: Reply-To Munging Considered Harmful http://www.unicom.
Categories: Offsite Discussion

Attempting to create a new monad / dsl - need help

Haskell on Reddit - Fri, 08/01/2014 - 9:26am

I may be trying to do something impossible here but would like confirmation.

Requirements:

  • Creating a task requires only a task name and a function

    createTask :: String -> (TaskContext -> IO a) -> Task a
  • Task results can be used by subsequent tasks

    do x <- firstTask; secondTask x;

The above is really easy and straightforward.

Here is the hard part:

  • Task metadata can be accessed without actually running the task.
  • In other words, I would like to be able to get the names, order, and total count of all tasks to be run without ever running the task.

I suspect there is some sort of indexed state/continuation monad that could do this, but I haven't been able to figure it out.
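
(Not part of the original question: a minimal sketch of one common approach, pairing the static metadata with the action. TaskContext here is a hypothetical stand-in for whatever the poster has in mind. Composed applicatively, the names stay statically known; a full Monad instance is exactly what breaks this, since the tasks produced by >>= depend on runtime results.)

{-# LANGUAGE DeriveFunctor #-}

data TaskContext = TaskContext            -- hypothetical placeholder

-- A task carries its static metadata (the names, in order) next to the action.
data Task a = Task { taskNames :: [String], runTask :: TaskContext -> IO a }
  deriving Functor

createTask :: String -> (TaskContext -> IO a) -> Task a
createTask name act = Task [name] act

-- Applicative composition only combines the metadata; nothing is run.
instance Applicative Task where
  pure x                    = Task [] (\_ -> pure x)
  Task ns f <*> Task ms x   = Task (ns ++ ms) (\ctx -> f ctx <*> x ctx)

-- taskNames (createTask "first" act1 *> createTask "second" act2)
--   == ["first", "second"], without running anything.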

Any help would be appreciated.

submitted by recty
[link] [13 comments]
Categories: Incoming News

What would be necessary to use ZeroVM as a GHC back end?

Haskell on Reddit - Fri, 08/01/2014 - 9:06am

I just wonder (naive as I am) whether it is possible to use GHC to create ZeroVM binaries.

At first glance I see that GHC uses GCC (or LLVM) for compilation and ZeroVM compilation is "just another GCC toolchain". So how hard could that be? (Famous last words)

ZeroVM: http://docs.zerovm.org

submitted by goliatskipson
[link] [6 comments]
Categories: Incoming News

Tor project and Haskell (FPL) ,..

haskell-cafe - Fri, 08/01/2014 - 7:58am
I wasn't trying to offend anybody.... Only trying to provoke a lively discussion in order to lift all boats. Sometimes one sentence IMHO provokes thought. In any case I appreciate all responders :-) Kind thanks, friends. Vasya
Categories: Offsite Discussion

Well-Typed.Com: Debugging Haskell at assembly level by scripting lldb in Python

Planet Haskell - Fri, 08/01/2014 - 3:17am

Haskell programmers tend to spend far less time with debuggers than programmers in other languages. Partly this is because for pure code debuggers are of limited value anyway, and Haskell’s type system is so strong that a lot of bugs are caught at compile time rather than at runtime. Moreover, Haskell is a managed language – like Java, say – and errors are turned into exceptions. Unlike in unmanaged languages such as C, “true” runtime errors such as segmentation faults almost never happen.

I say “almost” because they can happen: either because of bugs in ghc or the Haskell runtime, or because we are doing low level stuff in our own Haskell code. When they do happen we have to drop down to a system debugger such as lldb or gdb, but debugging Haskell at that level can be difficult because Haskell’s execution model is so different from the execution model of imperative languages. In particular, compiled Haskell code barely makes any use of the system stack or function calls, and uses a continuation passing style instead (see my previous blog posts Understanding the Stack and Understanding the RealWorld). In this blog post I will explain a technique I sometimes use to help diagnose low-level problems.

Since I work on OS X I will be using lldb as my debugger. If you are using gdb you can probably use similar techniques; The LLDB Debugger shows how gdb and lldb commands correlate, and the ghc wiki also lists some tips. However, I have no experience with scripting gdb, so your mileage may vary.

Description of the problem

As our running example I will use a bug that I was tracking down in a client project. The details of the project don’t matter so much, except that this project happens to use the GHC API to compile Haskell code—at runtime—into bytecode and then run it; moreover, it also—dynamically, at runtime—loads C object files into memory.

In one example run it loads the (compiled) C code

#include <stdio.h>

void hello(void) {
  printf("hello\n");
}

and then compiles and runs this Haskell code:

{-# LANGUAGE ForeignFunctionInterface #-}

module Main where

foreign import ccall "hello" hello :: IO ()

main = hello

Sadly, however, this resulted in a total system crash.

Starting point

By attaching lldb to the running process we got a tiny bit more information about the crash:

* thread #4: tid = 0x3550aa, 0x000000010b3b8226, stop reason = EXC_BAD_ACCESS (code=1, address=0x0)
    frame #0: 0x000000010b3b8226
->  0x10b3b8226: addb   %al, (%rax)
    0x10b3b8228: addb   %al, (%rax)
    0x10b3b822a: addb   %al, (%rax)
    0x10b3b822c: addb   %al, (%rax)

It turns out we have a null-pointer dereference here. Anybody who has spent any time debugging Intel assembly code, however, will realize that this particular instruction

addb %al, (%rax)

is in fact the decoding of zero:

(lldb) memory read -c 8 0x10b3b8226
0x10b3b8226: 00 00 00 00 00 00 00 00  ........

In other words, chances are good we were never meant to execute this instruction at all. Unfortunately, asking lldb for a backtrace tells us absolutely nothing new:

(lldb) bt
* thread #4: tid = 0x3550aa, 0x000000010b3b8226, stop reason = EXC_BAD_ACCESS (code=1, address=0x0)
  * frame #0: 0x000000010b3b8226

Finding a call chain

The lack of a suitable backtrace in lldb is not surprising, since compiled Haskell code barely makes use of the system stack. Instead, the runtime maintains its own stack, and code is compiled into a continuation passing style. For example, if we have the Haskell code

functionA :: IO ()
functionA = do .. ; functionB ; ..

functionB :: IO ()
functionB = do .. ; functionC ; ..

functionC :: IO ()
functionC = .. crash ..

main :: IO ()
main = functionA

and step through its execution in lldb, then when we ask for a backtrace as we start executing functionA all we get is

(lldb) bt
* thread #1: tid = 0x379731, 0x0000000100000a20 Main`A_functionA_info, queue = 'com.apple.main-thread', stop reason = breakpoint 1.1
  * frame #0: 0x0000000100000a20 Main`A_functionA_info

with no mention of main. Similarly, the backtraces on entry to functions B and C are

* thread #1: tid = 0x379731, 0x0000000100000b90 Main`B_functionB_info, queue = 'com.apple.main-thread', stop reason = breakpoint 2.1
  * frame #0: 0x0000000100000b90 Main`B_functionB_info

and

* thread #1: tid = 0x379731, 0x0000000100000c88 Main`C_functionC_info, queue = 'com.apple.main-thread', stop reason = breakpoint 3.1
  * frame #0: 0x0000000100000c88 Main`C_functionC_info

none of which is particularly informative. However, stepping manually through the program we do first see function A on the (singleton) call stack, then function B, and finally function C. Thus, by the time we reach function C, we have discovered a call chain A, B, C—it’s just that it involves quite a bit of manual work.

Scripting lldb

Fortunately, lldb can be scripted (see Using Scripting and Python to Debug in LLDB and the LLDB Python Reference). What we want to do is keep stepping through the code, showing the top-level (and quite possibly only) function at the top of the call stack at each step, until we crash.

We can use the following Python script to do this:

import lldb

def step_func(debugger, command, result, internal_dict):
    thread = debugger.GetSelectedTarget().GetProcess().GetSelectedThread()

    while True:
        thread.StepOver()

        stream = lldb.SBStream()
        thread.GetStatus(stream)
        description = stream.GetData()
        print description

        if thread.GetStopReason() == lldb.eStopReasonException:
            break

def __lldb_init_module(debugger, dict):
    debugger.HandleCommand('command script add -f %s.step_func sf' % __name__)

For the above example, we might use this as follows: we load our application into lldb

# lldb Main
Current executable set to 'Main' (x86_64).

register our new command sf

(lldb) command script import mystep.py

set a breakpoint where we want to start stepping

(lldb) breakpoint set -n A_functionA_info
Breakpoint 1: where = Main`A_functionA_info, address = 0x0000000100000b90

run to the breakpoint:

(lldb) run
Process 54082 launched: 'Main' (x86_64)
Process 54082 stopped
* thread #1: tid = 0x384510, 0x0000000100000b90 Main`A_functionA_info, queue = 'com.apple.main-thread', stop reason = breakpoint 1.1
    frame #0: 0x0000000100000b90 Main`A_functionA_info

and then use sf to find a call-trace until we crash:

(lldb) sf
...
* thread #1: tid = 0x384510, 0x0000000100000bf0 Main`B_functionB_info, queue = 'com.apple.main-thread', stop reason = instruction step over
    frame #0: 0x0000000100000bf0 Main`B_functionB_info
Main`B_functionB_info:
...
* thread #1: tid = 0x384510, 0x0000000100000c78 Main`C_functionC_info, queue = 'com.apple.main-thread', stop reason = instruction step over
    frame #0: 0x0000000100000c78 Main`C_functionC_info
Main`C_functionC_info:
...
* thread #1: tid = 0x384510, 0x0000000100000d20 Main`crash + 16 at crash.c:3, queue = 'com.apple.main-thread', stop reason = EXC_BAD_ACCESS (code=1, address=0x0)
    frame #0: 0x0000000100000d20 Main`crash + 16 at crash.c:3

Note that if you are using the threaded runtime, you may have to select which thread you want to step through before calling sf:

(lldb) thread select 4
(lldb) sf

Tweaking the script

You will probably want to tweak the above script in various ways. For instance, in the application I was debugging, I wanted to step into each assembly language instruction but over each C function call, mostly because lldb was getting confused with the call stack. I also added a maximum step count:

import shlex  # needed here to split the optional max-count argument

def step_func(debugger, command, result, internal_dict):
    args = shlex.split(command)
    if len(args) > 0:
        maxCount = int(args[0])
    else:
        maxCount = 100

    thread = debugger.GetSelectedTarget().GetProcess().GetSelectedThread()

    i = 0
    while True:
        frame = thread.GetFrameAtIndex(0)
        file  = frame.GetLineEntry().GetFileSpec().GetFilename()
        inC   = type(file) is str and file.endswith(".c")

        if inC:
            thread.StepOver()
        else:
            thread.StepInstruction(False)

        stream = lldb.SBStream()
        thread.GetStatus(stream)
        description = stream.GetData()
        print i
        print description

        i += 1
        if thread.GetStopReason() == lldb.eStopReasonException or i > maxCount:
            break

You may want to tweak this step into/step over behaviour to suit your application; certainly you don’t want to have a call trace involving every step taken in the Haskell RTS or worse, in the libraries it depends on.

Back to the example

Rather than printing every step along the way, it may also be useful to simply remember the step-before-last and show that on a crash; often it is sufficient to know what happened just before the crash. Indeed, in the application I was debugging the call stack just before the crash was:

2583
thread #3: tid = 0x35e743, 0x00000001099da56d libHSrts_thr_debug-ghc7.8.3.20140729.dylib`schedule(initialCapability=0x0000000109a2f840, task=0x00007fadda404550) + 1533 at Schedule.c:470, stop reason = step over
    frame #0: 0x00000001099da56d libHSrts_thr_debug-ghc7.8.3.20140729.dylib`schedule(initialCapability=0x0000000109a2f840, task=0x00007fadda404550) + 1533 at Schedule.c:470
   467      }
   468
   469      case ThreadInterpret:
-> 470      cap = interpretBCO(cap);
   471      ret = cap->r.rRet;
   472      break;
   473
2584
thread #3: tid = 0x35e743, 0x0000000103106226, stop reason = EXC_BAD_ACCESS (code=1, address=0x0)
    frame #0: 0x0000000103106226
->  0x103106226: addb   %al, (%rax)
    0x103106228: addb   %al, (%rax)
    0x10310622a: addb   %al, (%rax)
    0x10310622c: addb   %al, (%rax)

This is a lot more helpful than the backtrace, as we now have a starting point: something went wrong when running the bytecode interpreter (remember that the application was compiling and running some Haskell code at runtime).

To pinpoint the problem further, we can set a breakpoint in interpretBCO and run sf again (the way we defined sf it steps over any C function calls by default). This time we get to:

4272
thread #4: tid = 0x35f43a, 0x000000010e77c548 libHSrts_thr_debug-ghc7.8.3.20140729.dylib`interpretBCO(cap=0x000000010e7e7840) + 18584 at Interpreter.c:1463, stop reason = step over
    frame #0: 0x000000010e77c548 libHSrts_thr_debug-ghc7.8.3.20140729.dylib`interpretBCO(cap=0x000000010e7e7840) + 18584 at Interpreter.c:1463
   1460     tok = suspendThread(&cap->r, interruptible ? rtsTrue : rtsFalse);
   1461
   1462     // We already made a copy of the arguments above.
-> 1463     ffi_call(cif, fn, ret, argptrs);
   1464
   1465     // And restart the thread again, popping the stg_ret_p frame.
   1466     cap = (Capability *)((void *)((unsigned char*)resumeThread(tok) - STG_FIELD_OFFSET(Capability,r)));
4273
thread #4: tid = 0x35f43a, 0x0000000107eba226, stop reason = EXC_BAD_ACCESS (code=1, address=0x0)
    frame #0: 0x0000000107eba226
->  0x107eba226: addb   %al, (%rax)
    0x107eba228: addb   %al, (%rax)
    0x107eba22a: addb   %al, (%rax)
    0x107eba22c: addb   %al, (%rax)

OK, now we are really getting somewhere. Something is going wrong when we are doing a foreign function call. Let’s re-run the application once more, setting a breakpoint at ffi_call:

(lldb) breakpoint set -n ffi_call
Breakpoint 1: where = libffi.dylib`ffi_call + 29 at ffi64.c:421, address = 0x0000000108e098dd
(lldb) cont
Process 51476 resuming
Process 51476 stopped
* thread #4: tid = 0x360fd3, 0x0000000108e098dd libffi.dylib`ffi_call(cif=0x00007fb262f00000, fn=0x00000001024a4210, rvalue=0x000000010b786ac0, avalue=0x000000010b7868a0) + 29 at ffi64.c:421, stop reason = breakpoint 1.1
    frame #0: 0x0000000108e098dd libffi.dylib`ffi_call(cif=0x00007fb262f00000, fn=0x00000001024a4210, rvalue=0x000000010b786ac0, avalue=0x000000010b7868a0) + 29 at ffi64.c:421
   418    /* If the return value is a struct and we don't have a return value
   419       address then we need to make one.  Note the setting of flags to
   420       VOID above in ffi_prep_cif_machdep.  */
-> 421    ret_in_memory = (cif->rtype->type == FFI_TYPE_STRUCT
   422                     && (cif->flags & 0xff) == FFI_TYPE_VOID);
   423    if (rvalue == NULL && ret_in_memory)
   424      rvalue = alloca (cif->rtype->size);

and let’s take a look at the function we’re about to execute:

(lldb) disassemble -s fn
   0x1024a4210: pushq  %rbp
   0x1024a4211: movq   %rsp, %rbp
   0x1024a4214: leaq   (%rip), %rdi
   0x1024a421b: popq   %rbp
   0x1024a421c: jmpq   0x1024a4221
   0x1024a4221: pushq  $0x6f6c6c65
   0x1024a4226: addb   %al, (%rax)
   0x1024a4228: addb   %al, (%rax)
   0x1024a422a: addb   %al, (%rax)
   0x1024a422c: addb   %al, (%rax)
   0x1024a422e: addb   %al, (%rax)

We were expecting to execute hello:

# otool -tV hello.o
hello.o:
(__TEXT,__text) section
_hello:
0000000000000000  pushq  %rbp
0000000000000001  movq   %rsp, %rbp
0000000000000004  leaq   L_str(%rip), %rdi       ## literal pool for: "hello"
000000000000000b  popq   %rbp
000000000000000c  jmpq   _puts

and if you compare this with the code loaded into memory it all becomes clear. The jump instruction in the object file

jmpq _puts

contains a symbolic reference to puts; but the jump in the code that we are about to execute in fact jumps to the next instruction in memory:

0x1024a421c: jmpq   0x1024a4221
0x1024a4221: ...

In other words, the loaded object file has not been properly relocated, and when we try to call puts we end up jumping into nowhere. At this point the bug was easily resolved.

Further Reading

We have barely scratched the surface here of what we can do with lldb or gdb. In particular, ghc maintains quite a bit of runtime information that we can inspect with the debugger. Tim Schröder has an excellent blog post about inspecting ghc’s runtime data structures with gdb, and Nathan Howell has written some extensive lldb scripts to do the same, although they may now be somewhat outdated. See also the reddit discussion about this blog post.

Categories: Offsite Blogs

Ken T Takusagawa: [zljfhron] Lagged Fibonacci digits

Planet Haskell - Fri, 08/01/2014 - 1:33am

Part 1: Two newest taps

The Fibonacci sequence modulo 10 starting with 0 and 1 repeats with a period of 60, the Babylonians' favorite number: 0 1 1 2 3 5 8 3 1 4 5 9 4 3 7 0 7 7 4 1 5 6 1 7 8 5 3 8 1 9 0 9 9 8 7 5 2 7 9 6 5 1 6 7 3 0 3 3 6 9 5 4 9 3 2 5 7 2 9 1. One could imagine a very confusing digital clock in which two digits slide by every minute (or second).

Other starting pairs give other periods:
Period 20: 0 2 2 4 6 0 6 6 2 8 0 8 8 6 4 0 4 4 8 2.
Period 12: 2 1 3 4 7 1 8 9 7 6 3 9. (Also usable for a clock.)
Period 4: 4 2 6 8.
Period 3: 0 5 5.
Period 1: 0 0.
These periods sum to 100, which exhausts all possibilities of two starting digits.
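
(A quick sketch, not from the post, to check these periods: the step (a, b) -> (b, (a + b) mod 10) is invertible, so every starting pair lies on a cycle, and the period is simply the first return to the start.)

-- Period of the Fibonacci-mod-10 sequence for a given starting pair.
fibPeriod :: (Int, Int) -> Int
fibPeriod p0 = 1 + length (takeWhile (/= p0) (tail (iterate step p0)))
  where step (a, b) = (b, (a + b) `mod` 10)

-- fibPeriod (0,1) == 60, fibPeriod (0,2) == 20, fibPeriod (2,1) == 12,
-- fibPeriod (4,2) == 4,  fibPeriod (0,5) == 3,  fibPeriod (0,0) == 1;
-- the cycle lengths 60 + 20 + 12 + 4 + 3 + 1 = 100 cover all starting pairs.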

Part 2: Lagged Fibonacci, newest and oldest taps

Start with the digits 0 0 0 0 0 0 1 at the top of a 7-column tableau. Fill in the following rows left-to-right by adding, modulo 10, the digit to the left and the digit immediately above. For the leftmost digit on each row, the digit "to the left" is the last digit on the previous row. This is equivalent to the recurrence a[i] = (a[i-1] + a[i-7]) mod 10. The sequence repeats with an impressively long period of 2480437. The original motivation was a method for a person to produce by hand, with little effort, a page of seemingly random digits. The tableau begins

0 0 0 0 0 0 1
1 1 1 1 1 1 2
3 4 5 6 7 8 0
3 7 2 8 5 3 3
6 3 5 3 8 1 4

and ends 2480437 digits later:

5 1 7 4 0 4 8
3 4 1 5 5 9 7
0 4 5 0 5 4 1
1 5 0 0 5 9 0
1 6 6 6 1 0 0
1 7 3 9 0 0 0
1 8 1 0 0 0 0
1 9 0 0 0 0 0
1

Periods of other column widths (lag lengths) and their factorizations, starting with 0 0 ... 1:
1 4 = 2*2
2 60 = 2*2*3*5
3 217 = 7*31
4 1560 = 2*2*2*3*5*13
5 168 = 2*2*2*3*7
6 196812 = 2*2*3*3*7*11*71
7 2480437 = 127*19531
8 15624 = 2*2*2*3*3*7*31
9 28515260 = 2*2*5*73*19531
10 1736327236 = 2*2*7*19*31*127*829
11 249032784 = 2*2*2*2*3*7*11*13*71*73
12 203450520 = 2*2*2*3*5*7*13*31*601
13 482322341820 = 2*2*3*5*11*17*31*71*19531

The sequence of periods 4, 60, 217, 1560, 168 does not appear on OEIS. I have confirmed that the first four of these are the longest possible periods among all start seeds, not just 0 0 0 ... 1. It is curious that the factor 19531 occurs multiple times.

Part 3: Two oldest taps

We consider the recurrence a[i] = (a[i-6] + a[i-7]) mod 10, that is, the two "oldest" values. To calculate the next digit on a seven-column tableau, add the number above to the number above and to the right (northeast). Or, this could also be done with a more compact six-column tableau, adding the number above and the number above and to the left (northwest). This recurrence repeats with a period of 661416, smaller than the corresponding lag-7 sequence in Part 2.

Periods for lag lengths are given below. The periods seem neither uniformly longer nor shorter than Part 2.

2 60 = 2*2*3*5
3 168 = 2*2*2*3*7
4 1560 = 2*2*2*3*5*13
5 16401 = 3*7*11*71
6 196812 = 2*2*3*3*7*11*71
7 661416 = 2*2*2*3*7*31*127
8 15624 = 2*2*2*3*3*7*31
9 8894028 = 2*2*3*11*13*71*73
10 1736327236 = 2*2*7*19*31*127*829
11 3712686852 = 2*2*3*7*31*73*19531
12 203450520 = 2*2*2*3*5*7*13*31*601
13 25732419240 = 2*2*2*3*5*11*17*31*71*521

The sequence 60, 168, 1560 does not appear in OEIS. But the analogous sequence modulo 2 (instead of modulo 10) is A046932, and curiously, the analogous sequence of Part 2 modulo 2 seems to be the exact same sequence. Also, there's the factor 19531 again. Searching for the number on OEIS gives a hint of what is going on. The VIC cipher used this recurrence in base 10 with width 10.

Source code

Here is a Haskell implementation including Floyd's cycle-detection algorithm. It is only a partial implementation of cycle detection because it does not go back to test that the found period is not a multiple of a shorter period. I'm hoping this second step wasn't necessary.
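
(For readers without the linked code, here is a small, unoptimized sketch. It is not the author's implementation: it searches directly for the first return of the start window rather than using Floyd's algorithm, and uses plain lists rather than Data.Sequence. For lags of at least 2 the step map is invertible, so the start window must recur and the search terminates.)

-- One step of the lag-n recurrence a[i] = (a[i-1] + a[i-n]) mod 10,
-- acting on a window of the last n digits, oldest first.
step :: [Int] -> [Int]
step w = tail w ++ [(head w + last w) `mod` 10]

-- Period of the sequence starting from a given window.
period :: [Int] -> Int
period s0 = 1 + length (takeWhile (/= s0) (tail (iterate step s0)))

main :: IO ()
main = mapM_ (print . period) [ replicate (n - 1) 0 ++ [1] | n <- [2 .. 7] ]
-- Prints 60, 217, 1560, 168, 196812, 2480437, matching the table in Part 2
-- (the lag-7 case takes a couple of million steps, so expect it to be slow).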

Because the generated sequences were implemented as giant lists, it was very easy to have catastrophic space leaks. I've left in old versions of code demonstrating a few such failures. Even printing out a list followed by its length required some care. This aspect of Haskell, compared to an imperative programming language, I strongly dislike.

The code is far from optimized, most notably because it was an exercise in learning to use Data.Sequence to hold the state. Unboxed arrays would probably have been better.

Categories: Offsite Blogs

Haskell Platform 2014.2.0.0 Release Candidate 3

haskell-cafe - Thu, 07/31/2014 - 9:48pm
Small update to the Haskell Platform 2014.2.0.0 release: We have new Release Candidate 3 versions of the source tarball... and a new generic-linux bindist of the platform!

- source tarball: haskell-platform-2014.2.0.0-srcdist-RC3.tar.gz <http://www.ozonehouse.com/mark/platform/haskell-platform-2014.2.0.0-srcdist-RC3.tar.gz>
- generic linux: haskell-platform-2014.2.0.0-unknown-linux-x86_64-RC3.tar.gz <http://www.ozonehouse.com/mark/platform/haskell-platform-2014.2.0.0-unknown-linux-x86_64-RC3.tar.gz>

*Windows and OS X users: There are no RC3 versions - as the RC2 versions seem to be holding up fine!*

*General*
- hptool (and hence the ./platform.sh script) takes a new --prefix parameter that is used for generic (non-OS X, non-Windows) builds: it sets the root under which Haskell installations are located. Defaults to /usr/local/haskell. Everything will be placed in a directory named ghc-7.8.3-<arch> under this prefix.
- activate-hs script for default Posix-like builds
- sma
Categories: Offsite Discussion

[ANN] hsimport 0.5: configurable pretty printing and placing of imports

haskell-cafe - Thu, 07/31/2014 - 8:59pm
Hi all,

hsimport[1] is a command line program for extending the import list of a Haskell source file. There's an integration[2] for the vim editor, which can automatically extend the import list for the symbol under the cursor.

hsimport 0.5 changes:
- configurable[3] pretty printing of imports
- configurable placing of new imports
- support for multi-line imports
- better handling of incomplete/invalid Haskell source files

Greetings,
Daniel

[1] https://github.com/dan-t/hsimport
[2] https://github.com/dan-t/vim-hsimport
[3] https://github.com/dan-t/hsimport/blob/master/README.md
Categories: Offsite Discussion

Seeking comments on proposed fusion rules

libraries list - Thu, 07/31/2014 - 7:38pm
The rules: lpaste.net/108508

The boring explanation: As I mentioned earlier, we currently can't fuse things like foldr (x : build g); the foldr can't see the build. Earlier, I considered a simple cons/build rule that would rewrite x : build g to build (\c n -> x `c` g c n). I wasn't quite satisfied with this approach, largely because it treats foldr c n (x1 : x2 : ... : xn : build g) one way, and foldr c n ([x1, x2, ..., xn] ++ build g) another way. The former expands out the foldr, while the latter (via various rules involving augment) translates into a foldr over [x1, x2, ..., xn]. I therefore came up with the idea of translating x1 : x2 : ... : xn : build g into [x1, x2, ..., xn] ++ build g, and then letting the current rules do their thing. dolio and carter (in #ghc) were concerned about what would happen if fusion *didn't* occur, in which case the resulting code appears to be *worse*. So then I wrote some rules to fix that up, which actually look likely to be good rules in general. They turn [x1:x2:...:xn
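
(Not from the linked paste: a standalone sketch of the two forms being discussed, using a local copy of build, just to make the claimed equivalence concrete.)

{-# LANGUAGE RankNTypes #-}

build :: (forall b. (a -> b -> b) -> b -> b) -> [a]
build g = g (:) []

-- The simple cons/build rewrite described above:
--   x : build g   ==>   build (\c n -> x `c` g c n)
consBuild :: a -> (forall b. (a -> b -> b) -> b -> b) -> [a]
consBuild x g = build (\c n -> x `c` g c n)

-- Both sides denote the same list, e.g.
--   1 : build (\c n -> 2 `c` n)  ==  consBuild 1 (\c n -> 2 `c` n)  ==  [1,2]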
Categories: Offsite Discussion

Does ghc leak memory or is something else going on?

Haskell on Reddit - Thu, 07/31/2014 - 5:33pm

If I do "cabal install cabal-install" then while it is building Cabal I get:

[46 of 78] Compiling Distribution.Simple.Setup ( Distribution/Simple/Setup.hs, dist/build/Distribution/Simple/Setup.o )
ghc: out of memory (requested 1048576 bytes)

If instead I do a "cabal unpack Cabal", go into the directory, and run "cabal build", I get the same error, but I can just run "cabal build" again and it picks up where it left off and finishes just fine. This seems to happen with any package that contains a lot of files to be compiled. Is this a memory leak in GHC or is something else going on? Is there any way to tell cabal to compile each file with a fresh ghc process instead of reusing the same ghc process for the whole thing, to at least work around the problem more easily?

submitted by haskellnoob
[link] [4 comments]
Categories: Incoming News

wren gayle romano: Transitioning is a mindfuck.

Planet Haskell - Thu, 07/31/2014 - 3:48pm

[Content warning: discussion of rape culture and child abuse]

Transitioning is a mindfuck. Doesn't matter how prepared you are, how sure you are, how long and deeply you've thought about gender/sexuality issues. Outside of transitioning1 we have no way of inhabiting more than one position in any given discourse. Sure, we can understand other positions on an intellectual level, we may even sympathize with them, but we cannot empathize with what we have not ourselves experienced, and even having experienced something in the past does not mean we can continue to empathize with it in the present. Julia Serano emphasizes this epistemic limit in her books. And it's no wonder that no matter how prepared you may be, completely uprooting your sense of self and reconfiguring the way the world sees, interprets, and interacts with you is going to fundamentally alter whatever notions you had going into it all.

Since transitioning none of the major details of my identity have changed. I'm still a woman. Still feminine. Still a flaming lesbo. Still kinky, poly, and childfree. Still attracted to the same sorts of people. Still into the same sorts of fashion (though now I can finally act on that). Still interested in all the same topics, authors, and academic pursuits. And yet, despite —or perhaps because of— all this consistency, transitioning is still a mindfuck.

Read more... )

comments
Categories: Offsite Blogs

wren gayle romano: New skin

Planet Haskell - Thu, 07/31/2014 - 3:47pm

Hello all,

I just changed the theme/skin for my blog and have been playing around with new fonts, css, handling of footnotes, etc. Let me know what you think and whether you run into any issues (especially on older posts). It's been years since I've done webdev, long before CSS3 and HTML5, so things are a bit different than I'm used to.

In other news, I haven't heard back from the Haskell Planet admins about getting the feed switched over to Haskell/coding/math content only. So, if you've been annoyed by the OT, sorry bout that!



comments
Categories: Offsite Blogs
