# News aggregator

### FFI nested structs - malloc - free

### notepad++ && haskell -> is there a template?

Hi, I'm searching for a template plugin or something like that which structures my Haskell code in npp, so that all my '=' are on the same level, like this:

```haskell
func :: a -> a
func a | a == True  = 1
       | a == False = 2
```

also it would be nice to have a possibility to fold functions/modules etc.

Is there a way to do all this in Notepad++?

Bonus question: is it possible to get some syntax style/look like you can get with Emacs (-> replaced by real arrows, etc.)?

Thx for your help

submitted by baxbear

### Philip Wadler: Bright Club: Computability

### [Jobs] Galois is hiring

### What was your next step after you gave up and washed your hands of any hope of ever binding that massive C++ library you have to use for work to Haskell?

I suspect you moved on to F# or Lua. Maybe you just embraced the new improved C++14. I'd like to hear from you so I know what my options are.

submitted by rdfox

### Danny Gratzer: A Proof of Church Rosser in Twelf

An important property of any term rewriting system (a system of rules for saying when one term can be rewritten into another) is called confluence. In a term rewriting system, more than one rule may apply at a time; confluence states that it doesn’t matter in what order we apply these rules. In other words, there’s a sort of diamond property in our system:

```
      Starting Term
         /     \
 Rule 1 /       \ Rule 2
       /         \
      B           C
       \         /
a bunch \       / of rules
 later   \     /
          \   /
      Same end point
```

In words (and not a crappy ascii picture):

- Suppose we have some term A
- The system lets us rewrite A to B
- The system lets us rewrite A to C

Then there is some term D such that two things hold:

- The system lets us rewrite B to D in some number of rewrites
- The system lets us rewrite C to D with a different series of rewrites

In the specific case of lambda calculus, confluence is referred to as the “Church-Rosser theorem”. This theorem has several important corollaries, including that the normal form of any lambda term is unique. To see this, remember that a normal form is always “at the bottom” of diamonds like the one we drew above. This means that if some term had multiple steps to take, they all must converge before one of them reaches a normal form. If any of them did hit a normal form first, they couldn’t complete the diamond.
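To make the diamond concrete, here's a quick Python sketch of a toy rewriting system (my own invention for illustration, not the lambda calculus from the post) in which every order of rule application reaches the same normal form:

```python
# A toy term-rewriting system to illustrate confluence. Terms are nested
# tuples; two rules ('add' with a zero on either side) can overlap on the
# same term, so rewriting is nondeterministic.

def rewrites(t):
    """All terms reachable from t in exactly one rewrite step."""
    out = []
    if isinstance(t, tuple) and t[0] == 'add':
        _, a, b = t
        if a == 0: out.append(b)                        # rule 1: add 0 b -> b
        if b == 0: out.append(a)                        # rule 2: add a 0 -> a
        out += [('add', a2, b) for a2 in rewrites(a)]   # compatibility rules
        out += [('add', a, b2) for b2 in rewrites(b)]
    return out

def normal_forms(t):
    """All normal forms reachable from t, under every possible rewrite order."""
    rs = rewrites(t)
    if not rs:
        return {t}
    return set().union(*(normal_forms(r) for r in rs))

# 'add 0 (add x 0)' can step two different ways, but both orders converge:
term = ('add', 0, ('add', 'x', 0))
print(normal_forms(term))  # {'x'}
```

Here `rewrites` plays the role of the step relation and `normal_forms` chases every reduction order; confluence shows up as the result being a singleton set.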

#### Proving Church-Rosser

In this post I’d like to go over a proof of the Church-Rosser theorem in Twelf, everyone’s favorite mechanized metalogic. To follow along if you don’t know Twelf, perhaps some shameless self linking will help.

We need to start by actually defining lambda calculus. In keeping with Twelf style, we laugh at those restricted by the bounds of inductive types and use higher order abstract syntax to get binding for free.

```
term : type.
ap   : term -> term -> term.
lam  : (term -> term) -> term.
```

We have two constructors: ap, which applies one term to another, and the interesting one, lam, which embeds the LF function space term -> term into term. This actually makes sense because term isn’t an inductive type, just a type family with a few members. There’s no underlying induction principle with which we can derive contradictions. To be perfectly honest I’m not sure how the proof of soundness of something like Twelf's %total mechanism proceeds. If a reader is feeling curious, I believe this is the appropriate paper to read.

With this, we can write something like λx. x x as lam [x] ap x x.

Now on to evaluation. We want to talk about things as a term rewriting system, so we opt for a small step evaluation approach.

```
step : term -> term -> type.
step/b   : step (ap (lam F) A) (F A).
step/ap1 : step (ap F A) (ap F' A)
            <- step F F'.
step/ap2 : step (ap F A) (ap F A')
            <- step A A'.
step/lam : step (lam [x] M x) (lam [x] M' x)
            <- ({x} step (M x) (M' x)).

step* : term -> term -> type.
step*/z : step* A A.
step*/s : step* A C
           <- step A B
           <- step* B C.
```

We start with the 4 sorts of steps you can make in this system. 3 of them are merely “if you can step somewhere else, you can pull the rewrite out”; I’ve heard these referred to as compatibility rules. This is what ap1, ap2 and lam do, lam being the most interesting since it deals with going under a binder. Finally, the main rule is step/b, which defines beta reduction. Note that HOAS gives us this for free as application.
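As a sanity check on these rules, here's a small Python model (my own sketch, not part of the post; it uses de Bruijn indices where the Twelf version gets binding for free from HOAS) of the same four step rules, plus a naive normalizer so we can watch two different reduction orders converge:

```python
# Terms: ('var', n) | ('ap', f, a) | ('lam', body), with de Bruijn indices.

def shift(t, d, cutoff=0):
    """Shift free variables (index >= cutoff) in t by d."""
    if t[0] == 'var':
        return ('var', t[1] + d) if t[1] >= cutoff else t
    if t[0] == 'ap':
        return ('ap', shift(t[1], d, cutoff), shift(t[2], d, cutoff))
    return ('lam', shift(t[1], d, cutoff + 1))

def subst(t, j, s):
    """Substitute s for variable j in t."""
    if t[0] == 'var':
        return s if t[1] == j else t
    if t[0] == 'ap':
        return ('ap', subst(t[1], j, s), subst(t[2], j, s))
    return ('lam', subst(t[1], j + 1, shift(s, 1)))

def steps(t):
    """All one-step reducts of t, mirroring step/b, step/ap1, step/ap2, step/lam."""
    out = []
    if t[0] == 'ap':
        f, a = t[1], t[2]
        if f[0] == 'lam':                              # step/b: beta reduction
            out.append(shift(subst(f[1], 0, shift(a, 1)), -1))
        out += [('ap', f2, a) for f2 in steps(f)]      # step/ap1
        out += [('ap', f, a2) for a2 in steps(a)]      # step/ap2
    elif t[0] == 'lam':
        out += [('lam', b2) for b2 in steps(t[1])]     # step/lam
    return out

def normalize(t):
    """step*: keep taking the first available step until none applies."""
    while steps(t):
        t = steps(t)[0]
    return t

# (\x. x x) ((\y. y) v) has two distinct one-step reducts...
ident = ('lam', ('var', 0))
dup = ('lam', ('ap', ('var', 0), ('var', 0)))
term = ('ap', dup, ('ap', ident, ('var', 5)))
b, c = steps(term)
# ...but both normalize to the same term, as Church-Rosser predicts:
print(normalize(b) == normalize(c))  # True
```

Note that `steps` is genuinely nondeterministic (it returns every reduct), which is exactly the situation confluence is about.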

Finally, step* is for a series of steps: we either have no steps, or a step followed by another series of steps. Now we want to prove a couple of theorems about our system. These are mostly liftings of the “compatibility rules” up to working on step*s. The first is the lifting of ap1.

```
step*/left : step* F F' -> step* (ap F A) (ap F' A) -> type.
%mode +{F : term} +{F' : term} +{A : term}
      +{In : step* F F'} -{Out : step* (ap F A) (ap F' A)}
      (step*/left In Out).

- : step*/left step*/z step*/z.
- : step*/left (step*/s S* S) (step*/s S'* (step/ap1 S))
     <- step*/left S* S'*.

%worlds (lam-block) (step*/left _ _).
%total (T) (step*/left T _).
```

*Note: the mode specification I’m using is a little peculiar. It needs to be this verbose because otherwise A mode-errors. Type inference is peculiar.*

The theorem says that if F steps to F' in several steps, for all A, ap F A steps to ap F' A in many steps. The actual proof is quite boring, we just recurse and apply step/ap1 until everything type checks.

Note that the world specification for step*/left is a little strange. We use the block lam-block because later one of our theorems needs this. The block is just

```
%block lam-block : block {x : term}.
```

We need to annotate this on all our theorems because Twelf’s world subsumption checker isn’t convinced that lam-block can subsume the empty worlds we check some of our theorems in. Ah well.

Similarly to step*/left there is step*/right. The proof is 1 character off so I won’t duplicate it.

```
step*/right : step* A A' -> step* (ap F A) (ap F A') -> type.
```

Finally, we have step*/lam, the lifting of the compatibility rule for lambdas. This one is a little more fun since it actually works by pattern matching on functions.

```
step*/lam : ({x} step* (F x) (F' x)) -> step* (lam F) (lam F') -> type.
%mode step*/lam +A -B.

- : step*/lam ([x] step*/z) step*/z.
- : step*/lam ([x] step*/s (S* x) (S x)) (step*/s S'* (step/lam S))
     <- step*/lam S* S'*.

%worlds (lam-block) (step*/lam _ _).
%total (T) (step*/lam T _).
```

What’s fun here is that we’re inducting on a dependent function. So the first case matches [x] step*/z and the second [x] step*/s (S* x) (S x). Other than that we just use step/lam to lift up S and recurse to lift up S* in the second case.

We need one final (more complicated) lemma about substitution, subst. It states that if A steps to A', then F A steps to F A' in many steps, for all F. The proof proceeds by induction on the derivation that A steps to A'.

*This is the lemma that actually needs the world with lam-blocks*

Now the actual proof. The first two cases are for constant functions and the identity function.

```
- : subst ([x] A) S step*/z.
- : subst ([x] x) S (step*/s step*/z S).
```

In the case of a constant function, the results of F A and F A' are the same, so we don’t need to step at all. In the case of the identity function we just step with the step from A to A'.

In the next case, we deal with nested lambdas.

```
- : subst ([x] lam ([y] F y x)) S S'*
     <- ({y} subst (F y) S (S* y))
     <- step*/lam S* S'*.
```

Here we recurse, but we carefully do this under a pi type. The reason for doing this is that we’re recursing on the open body of the inner lambda. This has a free variable, and we need a pi type in order to actually apply F to something to get at the body. Otherwise this just uses step*/lam to lift the step across the body to the step across lambdas.

Finally, application.

```
- : subst ([x] ap (F x) (A x)) S S*
     <- subst F S F*
     <- subst A S A*
     <- step*/left F* S1*
     <- step*/right A* S2*
     <- join S1* S2* S*.
```

This looks complicated, but isn’t so bad. We first recurse, and then use various compatibility lemmas to plumb the results of the recursive calls to the right parts of the final term. Since there are two individual pieces of stepping, one for the argument and one for the function, we use join (transitivity of step*: it concatenates two step* derivations) to slap them together.

With this, we’ve got all our lemmas

```
%worlds (lam-block) (subst _ _ _).
%total (T) (subst T _ _).
```

#### The Main Theorem

Now that we have all the pieces in place, we’re ready to state and prove confluence. Here’s our statement in Twelf.

```
confluent : step A B -> step A C -> step* B D -> step* C D -> type.
%mode confluent +A +B -C -D.
```

Unfortunately, there’s a bit of a combinatorial explosion with this. There are approximately 3 * 3 + 1 = 10 cases for this theorem. And thanks to the lemmas we’ve proven, they’re all boring.

First we have the cases where step A B is a step/ap1.

```
- : confluent (step/ap1 S1) (step/ap1 S2) S1'* S2'*
     <- confluent S1 S2 S1* S2*
     <- step*/left S1* S1'*
     <- step*/left S2* S2'*.
- : confluent (step/ap1 S1) (step/ap2 S2)
     (step*/s step*/z (step/ap2 S2))
     (step*/s step*/z (step/ap1 S1)).
- : confluent (step/ap1 (step/lam F) : step (ap _ A) _) step/b
     (step*/s step*/z step/b)
     (step*/s step*/z (F A)).
```

In the first case, we have two ap1s. We recurse on the smaller S1 and S2 and then immediately use one of our lemmas to lift the results of the recursive call, which step the function part of the ap we’re looking at, to work across the whole ap term. In the second case, we’re stepping the function in one and the argument in the other. In order to bring these to a common term we just apply the first step to the resulting term of the second step and vice versa. This means that we’re doing something like this:

```
       F A
      /   \
  S1 /     \ S2
    /       \
 F' A       F A'
    \       /
  S2 \     / S1
      \   /
      F' A'
```

This clearly commutes, so this case goes through. For the final case, we’re applying a lambda to some term so we can beta reduce. On one side we step the body of the lambda somehow, and on the other we immediately substitute. Now we do something clever. What is a proof that lam A steps to lam B? It’s a proof that for any x, A x steps to B x. In fact, it’s just a function from x to such a step from A x to B x. So we have that lying around in F. So to step from the beta-reduced term G A to G' A all we do is apply F to A! The other direction is just beta-reducing ap (lam G') A to the desired G' A.

In the next set of cases we deal with ap2!

```
- : confluent (step/ap2 S1) (step/ap2 S2) S1'* S2'*
     <- confluent S1 S2 S1* S2*
     <- step*/right S1* S1'*
     <- step*/right S2* S2'*.
- : confluent (step/ap2 S1) (step/ap1 S2)
     (step*/s step*/z (step/ap1 S2))
     (step*/s step*/z (step/ap2 S1)).
- : confluent (step/ap2 S) (step/b : step (ap (lam F) _) _)
     (step*/s step*/z step/b)
     S1*
     <- subst F S S1*.
```

The first two cases are almost identical to what we’ve seen before. The key difference is in the third case. This is again where we’re stepping something on one side and beta-reducing on the other. We can’t use the nice free stepping provided by F here, since we’re stepping the argument, not the function. For this we appeal to subst, which lets us step F A to F A' using S1*, exactly as required. The other direction is trivial just like it was in the ap1 case; we just have to step ap (lam F) A' to F A', which is done with beta reduction.

I’m not going to detail the cases to do with step/b as the first argument because they’re just mirrors of the cases we’ve looked at before. That only leaves us with one more case, the case for step/lam.

```
- : confluent (step/lam F1) (step/lam F2) F1'* F2'*
     <- ({x} confluent (F1 x) (F2 x) (F1* x) (F2* x))
     <- step*/lam F1* F1'*
     <- step*/lam F2* F2'*.
```

This is just like all the other “diagonal” cases, like confluent (ap1 S1) (ap1 S2) .... We first recurse (this time using a pi to unbind the body of the lambda) and then use compatibility rules in order to get something we can give back from confluent. And with this, we can actually prove that lambda calculus is confluent.

```
%worlds (lam-block) (confluent _ _ _ _).
%total (T) (confluent T _ _ _).
```

#### Wrap Up

We went through a fairly significant proof here, but the end results were interesting at least. One nice thing this proof illustrates is how well HOAS lets us encode these proofs. It’s a very Twelf-y approach to use lambdas to represent bindings. All in all, it’s a fun proof.

### A Poor Man's Type Level Debugger.

I stumbled upon a trick to step through the typechecker. Basically run the program repeatedly while incrementing the context stack and see where it stops.

To demonstrate here's a gist that has:

- a type level list type AToF = A (B (C (D (E (F ())))))
- a type level function Contains that checks if its argument is in that list.
- a value level function run that drives Contains.

Running the file with this command line:

```
for i in {0..21}; do
  ghc -fcontext-stack=${i} -i TypeLevelDebugger.hs -e "run (undefined :: AToF) (undefined :: F ())"
done
```

generates:

```
<interactive>:1:1:
    Context reduction stack overflow; size = 1
    Use -fcontext-stack=N to increase stack size to N
      TypeEqual (A ()) (F ()) match
    In the expression: run (undefined :: AToF) (undefined :: F ())
    In an equation for ‘it’:
      it = run (undefined :: AToF) (undefined :: F ())

<interactive>:1:1: ... No ~ match ...
<interactive>:1:1: ... TypeEqual (B ()) (F ()) match ...
<interactive>:1:1: ... No ~ match ...
<interactive>:1:1: ... TypeEqual (C ()) (F ()) match ...
<interactive>:1:1: ... No ~ match ...
<interactive>:1:1: ... TypeEqual (D ()) (F ()) match ...
<interactive>:1:1: ... No ~ match ...
<interactive>:1:1: ... TypeEqual (E ()) (F ()) match ...
<interactive>:1:1: ... No ~ match ...
<interactive>:1:1: ... TypeEqual (F ()) (F ()) match ...
<interactive>:1:1: ... yesOrNo0 ~ Yes ...

<interactive>:1:1:
    No instance for (Show Yes) arising from a use of ‘print’
    In a stmt of an interactive GHCi command: print it
    ...
```

The third line of each error contains the type level trace. On all but the first iteration the noise has been deleted.
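The trick itself is language-agnostic: bound a recursive search, raise the bound one step at a time, and read the failure messages as a trace. Here's a Python re-enactment (an invented example, with an explicit "fuel" parameter standing in for GHC's context stack):

```python
# Step through a recursive search by limiting its depth, exactly as the
# post does with -fcontext-stack. Each failure message shows how far the
# search got before running out of "stack".

def contains(lst, x, fuel):
    """Recursive membership test that fails loudly when fuel runs out."""
    if fuel == 0:
        raise RuntimeError("stack exhausted while matching %r" % lst[0])
    if not lst:
        return False
    return lst[0] == x or contains(lst[1:], x, fuel - 1)

trace = []
for fuel in range(1, 8):           # the analogue of `for i in {0..21}`
    try:
        contains(list('ABCDEF'), 'F', fuel)
    except RuntimeError as e:
        trace.append(str(e))

for line in trace:
    print(line)                    # one "trace frame" per failed run
```

Each iteration that fails contributes one frame, so reading the messages in order reconstructs the path the recursion takes, just like the `TypeEqual ... match` lines above.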

Now you can actually step through lens at the type level!

Enjoy!

submitted by deech

### Call for participation (FCRC+LCTES)

### Any clever idea for a quick way to browse through Hackage?

Every time I need to use a function from some library on Hackage, the usual process is:

1. Open the browser.
2. Google something like "hackage gloss".
3. Click on the module I want - Graphics.Gloss.Data.Picture.
4. Finally read the function's description...

Since my internet is terrible, this process is currently a bottleneck in my Haskell programming and is affecting my workflow. Is there any way to improve looking for documentation? Some command line tool would be great. (*If you somehow use Hoogle for that purpose, please explain how - I couldn't figure out how to make it useful on the command line.*)


### FP Complete: School of Haskell 2.0

As announced by Michael Snoyman a couple weeks ago, we are going to be releasing an open source version of the School of Haskell! The SoH provides interactive documentation and tutorials for newcomers and advanced haskellers alike. This interactivity comes in the form of editable code, inline in the webpage, letting you try things out as you learn them.

We are dedicated to supporting the community with excellent Haskell tools and infrastructure. As this is a community project, I'm writing this post to explain our plans, and to ask for feedback and ideas.

We are focused on eliminating any obstacles which have historically discouraged or prevented people from contributing to the School of Haskell. In particular, here are the changes we're planning:

- **Open source!** We'll encourage community participation on improving the service, via the bug tracker and pull requests. Anyone will be free to host their own version of the site.
- **Creative Commons license!** While our old license was not intended to be restrictive, it turned away some potential contributors.
- **Git!** Git will be used to store the SoH markdown pages. The core school repo will be hosted on GitHub. This means that editing, history, organization, and collaboration will all be based on a system that many people understand well.
- **Interactive code on any website!** Any website will be able to include editable and runnable Haskell code. All you'll need to do is include a script tag like `<script src="https://www.schoolofhaskell.com/soh.js">`. When run, this script will replace code divs with the editor. This means that haddocks could potentially have examples that you can play with directly in the browser!

Please help us guide this work to best serve you and the community!

Should all user content be licensed under the Creative Commons license? One option is to relicense all user content when it gets moved to the soh-migration-snapshot repo. Another option is to add a footnote to each page noting that its license is the SoH 1.0 license. After users migrate to their own repo, they are free to remove this footnote.

Should the markdown format change? We're considering switching to CommonMark, but perhaps having a stable format for migration is more important?

Does the system of git repos sound good (it's described below)? One issue is that it will require a little bit of setup for users to continue editing their SoH content. Is this acceptable to folks? Should we seek to streamline the process via the GitHub API?

Any other ideas? We've already got a fair amount of work cut out for us, but we'd love to hear more ideas, particularly of the low-hanging-fruit variety.

The editor service will allow Haskell to be edited in the browser and run on the server. This service is independent of the markdown rendering service, described below.

#### Editor Features

Our plan is to support the following features in the SoH code editor / runner:

- Standard input / output will use a **console emulator** in the browser. This means that things like ncurses apps will work!
- **GHCi support**. Initially this will likely be a bit of a hack, directly running ghci rather than using ide-backend.
- **Type information** will be available for sub-expressions, simply by selecting a sub-expression in the code. -fdefer-type-errors will be used by default. This will likely be set up such that type errors still prevent running the code, but type info is still yielded by GHC.
- Support for looking up the **haddocks** associated with an identifier. Info from GHC will be used for this, and the results will open in an IFrame.
- **Go-to-definition** will tell you where an identifier is bound.
- The capability to **serve websites**. This will work by detecting when the user's code is acting as an HTTP server, and opening an IFrame / separate window for interacting with it.

This new version is going to be offering quite a lot of information! In order to have somewhere to put it all, there will be a tabbed info pane at the bottom of the snippet (much like the pane at the bottom of FPHC).

#### Implementation Changes

We've learned a lot about writing this sort of application since the initial creation of the School of Haskell. In particular, we're planning on making the following changes, although things aren't quite set in stone yet:

Using GHCJS instead of Fay. Back when the SoH was created, GHCJS wasn't quite ready, and so Fay was picked. Fay was quite nice for the task, but having full-blown GHC Haskell is even nicer.

Using Docker for managing Linux containers and images.

Using Ace instead of CodeMirror. Our experience with using Ace for stackage-view was quite positive. This will address a number of open bugs, and allow us to reuse some existing GHCJS bindings to Ace.

Using websockets instead of HTTP requests. Due to usage of Amazon's Elastic Load Balancer, in the past we couldn't use websockets.

The protocol will likely be based on ide-backend-client.

The markdown rendering service allows rendering of SoH markdown into documentation / tutorials which contain active Haskell code. We hope to make this largely backwards compatible with the old markdown format, as we'll be migrating content from the old site.

Our plan is to have a central GitHub repo which hosts the main "school" section. For example, it might be placed at https://github.com/schoolofhaskell/school. This will be a central place to put Haskell educational content, and we will heartily encourage pull requests.

You'll also be able to give the server a URL to your own publicly accessible git repo. The contents of this repo will then be served as SoH pages, possibly with a URL like https://schoolofhaskell.com/user/user-name/tutorial-name. To facilitate the migration, we'll provide a tarball of your current SoH pages, in a form readable by the new implementation.

All existing SoH URLs at https://fpcomplete.com will redirect to https://schoolofhaskell.com. In order to do this, we need to migrate the content to the new system. The current plan for this is to have a repo, possibly at https://github.com/schoolofhaskell/soh-migration-snapshot, which will be used as a fallback for users who haven't yet specified a repo for their account.

### Threepenny-gui 0.6 released — featuring a JavaScript FFI over HTTP (module Foreign.JavaScript)

### Building a JSON REST API in Haskell

### ANN: Important lambdabot update (5.0.2.1 release)

### Keegan McAllister: Modeling garbage collectors with Alloy: part 1

Formal methods for software verification are usually seen as a high-cost tool that you would only use on the most critical systems, and only after extensive informal verification. The Alloy project aims to be something completely different: a lightweight tool you can use at any stage of everyday software development. With just a few lines of code, you can build a simple model to explore design issues and corner cases, even before you've started writing the implementation. You can gradually make the model more detailed as your requirements and implementation get more complex. After a system is deployed, you can keep the model around to evaluate future changes at low cost.

Sounds great, doesn't it? I have only a tiny bit of prior experience with Alloy and I wanted to try it out on something more substantial. In this article we'll build a simple model of a garbage collector, visualize its behavior, and fix some problems. This is a warm-up for exploring more complex GC algorithms, which will be the subject of future articles.

I won't describe the Alloy syntax in full detail, but you should be able to follow along if you have some background in programming and logic. See also the Alloy documentation and especially the book *Software Abstractions: Logic, Language, and Analysis* by Daniel Jackson, which is a very practical and accessible introduction to Alloy. It's a highly recommended read for any software developer.

You can download Alloy as a self-contained Java executable, which can do analysis and visualization and includes an editor for Alloy code.

#### The model

We will start like so:

```
open util/ordering [State]

sig Object { }

one sig Root extends Object { }

sig State {
  pointers: Object -> set Object,
  collected: set Object,
}
```

The garbage-collected heap consists of Objects, each of which can point to any number of other Objects (including itself). There is a distinguished object Root which represents everything that's accessible *without* going through the heap, such as global variables and the function call stack. We also track which objects have already been garbage-collected. In a real implementation these would be candidates for re-use; in our model they stick around so that we can detect use-after-free.
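The same model is easy to mirror in a few lines of Python (a sketch of the idea, not Alloy; the object names and the dict-of-sets heap representation are invented here): liveness is reachability from Root, and collection sweeps everything else.

```python
# A concrete sketch of the Alloy heap model: each object maps to the set
# of objects it points to, 'Root' is the distinguished root, and a GC step
# collects exactly the unreachable objects.

def live(pointers, root='Root'):
    """Objects reachable from root via pointers (reflexive-transitive)."""
    seen, stack = set(), [root]
    while stack:
        obj = stack.pop()
        if obj not in seen:
            seen.add(obj)
            stack.extend(pointers.get(obj, ()))
    return seen

def gc(pointers, objects, collected):
    """One GC step: collect every object not reachable from the root."""
    return collected | (objects - live(pointers))

objects = {'Root', 'A', 'B', 'C'}
pointers = {'Root': {'A'}, 'A': {'A'}, 'B': {'C'}}  # B and C are garbage
print(gc(pointers, objects, set()))  # a set containing 'B' and 'C'
```

This is of course an implementation rather than a specification; the point of the Alloy version is that it states *only* the properties, and lets the analyzer find behaviors we didn't anticipate.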

The open statement invokes a library module to provide a total ordering on States, which we will interpret as the progression of time. More on this later.

#### Relations

In the code that follows, it may look like Alloy has lots of different data types, overloading operators with total abandon. In fact, all these behaviors arise from an exceptionally simple data model:

Every value is a *relation*; that is, a set of tuples of the same non-zero length.

When each tuple has length 1, we can view the relation as a set. When each tuple has length 2, we can view it as a binary relation and possibly as a function. And a singleton set is viewed as a single atom or tuple.

Since everything in Alloy is a relation, each operator has a single definition in terms of relations. For example, the operators . and [] are syntax for a flavor of relational join. If you think of the underlying relations as a database, then Alloy's clever syntax amounts to an object-relational mapping that is at once very simple and very powerful. Depending on context, these joins can look like field access, function calls, or data structure lookups, but they are all described by the same underlying framework.
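As a concrete illustration (a Python sketch with made-up atoms, not Alloy syntax), the dot join on relations-as-tuple-sets is one line, and "field access" like s.pointers falls out of it:

```python
# Alloy's dot join on binary-or-wider relations, modeled as sets of tuples:
# R . S joins tuples whose last/first atoms match, dropping the shared column.

def join(r, s):
    return {a[:-1] + b[1:] for a in r for b in s if a[-1] == b[0]}

# A 'pointers' relation of type {(State, Object, Object)} and a state atom:
pointers = {('S1', 'Root', 'A'), ('S1', 'A', 'B')}
state = {('S1',)}

# state . pointers reads like field access: "the pointers in state S1"
print(join(state, pointers))  # the pairs ('Root', 'A') and ('A', 'B')
```

The same `join` also behaves like a lookup: joining `{('Root',)}` against a binary relation returns the set of objects Root points to, which is exactly how `Root.*(s.pointers)` style expressions get their meaning.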

The elements of the tuples in a relation are *atoms*, which are indivisible and have no meaning individually. Their meaning comes entirely from the relations and properties we define. Ultimately, atoms all live in the same universe, but Alloy gives "warnings" when the type system implied by the sig declarations can prove that an expression is always the empty relation.

Here are the relations implied by our GC model, as tuple sets along with their types:

```
Object:    {Object} = {O1, O2, ..., Om}
Root:      {Root}   = {Root}
State:     {State}  = {S1, S2, ..., Sn}

pointers:  {(State, Object, Object)}
collected: {(State, Object)}

first:     {State} = {S1}
last:      {State} = {Sn}
next:      {(State, State)} = {(S1, S2), (S2, S3), ..., (S(n-1), Sn)}
```

The last three relations come from the util/ordering library. Note that a sig implicitly creates some atoms.

#### Dynamics

The live objects are everything reachable from the root:

```
fun live(s: State): set Object {
  Root.*(s.pointers)
}
```

*(s.pointers) constructs the reflexive, transitive closure of the binary relation s.pointers; that is, the set of objects reachable from each object.

Of course the GC is only part of a system; there's also the code that actually uses these objects, which in GC terminology is called the *mutator*. We can describe the action of each part as a predicate relating "before" and "after" states.

```
pred mutate(s, t: State) {
  t.collected = s.collected
  t.pointers != s.pointers
  all a: Object - t.live |
    t.pointers[a] = s.pointers[a]
}

pred gc(s, t: State) {
  t.pointers = s.pointers
  t.collected = s.collected + (Object - s.live)
  some t.collected - s.collected
}
```

The mutator cannot collect garbage, but it can change the pointers of any live object. The GC doesn't touch the pointers, but it collects any dead object. In both cases we require that *something* changes in the heap.

It's time to state the overall facts of our model:

```
fact {
  no first.collected
  first.pointers = Root -> (Object - Root)
  all s: State - last |
    let t = s.next |
      mutate[s, t] or gc[s, t]
}
```

This says that in the initial state, no object has been collected, and every object is in the root set except Root itself. This means we don't have to model allocation as well. Each state except the last must be followed by a mutator step or a GC step.

The syntax all x: e | P says that the property P must hold for every tuple x in e. Alloy supports a variety of quantifiers like this.

#### Interacting with Alloy

The development above looks nice and tidy — I hope — but in reality, it took a fair bit of messing around to get to this point. Alloy provides a highly interactive development experience. At any time, you can visualize your model as a collection of concrete examples. Let's do that now by adding these commands:

```
pred Show {}
run Show for 5
```

Now we select this predicate from the "Execute" menu, then click "Show". The visualizer provides many options to customise the display of each atom and relation. The config that I made for this project is "projected over State", which means you see a graph of the heap at one moment in time, with forward/back buttons to reach the other States.

After clicking around a bit, you may notice some oddities:

The root isn't a heap object; it represents all of the pointers that are reachable *without* accessing the heap. So it's meaningless for an object to point to the root. We can exclude these cases from the model easily enough:

```
fact {
  all s: State | no s.pointers.Root
}
```

(This can also be done more concisely as part of the original sig.)

Now we're ready to check the essential safety property of a garbage collector:

```
assert no_dangling {
  all s: State | no (s.collected & s.live)
}

check no_dangling for 5 Object, 10 State
```

And Alloy says:

```
Executing "Check no_dangling for 5 Object, 10 State"...
   8338 vars. 314 primary vars. 17198 clauses. 40ms.
   Counterexample found. Assertion is invalid. 14ms.
```

Clicking "Counterexample" brings up the visualization:

Whoops, we forgot to say that only pointers to live objects can be stored! We can fix this by modifying the mutate predicate:

```
pred mutate(s, t: State) {
  t.collected = s.collected
  t.pointers != s.pointers
  all a: Object - t.live |
    t.pointers[a] = s.pointers[a]

  // new requirement!
  all a: t.live |
    t.pointers[a] in s.live
}
```
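To see in miniature what the counterexample was exploiting, here's a Python re-enactment (a hand-rolled sketch with invented object names, not generated by Alloy) of a mutator that stores a pointer to an already-collected object when the new requirement is absent:

```python
# Without the "store only live pointers" restriction, a mutate step can
# resurrect a collected object, violating no_dangling.

def live(pointers, root='Root'):
    """Objects reachable from root via the pointers relation."""
    seen, stack = set(), [root]
    while stack:
        o = stack.pop()
        if o not in seen:
            seen.add(o)
            stack.extend(pointers.get(o, ()))
    return seen

objects = {'Root', 'A', 'B'}
pointers = {'Root': {'A'}}             # B is unreachable
collected = objects - live(pointers)   # GC step: B gets collected

pointers['A'] = {'B'}                  # illegal mutate: stores a dead pointer

dangling = collected & live(pointers)  # no_dangling is now violated
print(dangling)
```

The fixed `mutate` above forbids the marked line, because `B` is not in `s.live` at the time of the store.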

With the result:

```
Executing "Check no_dangling for 5 Object, 10 State"...
   8617 vars. 314 primary vars. 18207 clauses. 57ms.
   No counterexample found. Assertion may be valid. 343ms.
```

#### SAT solvers and bounded model checking

"May be" valid? Fortunately this has a specific meaning. We asked Alloy to look for counterexamples involving at most 5 objects and 10 time steps. This bounds the search for counterexamples, but it's still vastly more than we could ever check by exhaustive brute force search. (See where it says "8617 vars"? Try raising 2 to that power.) Rather, Alloy turns the bounded model into a Boolean formula, and feeds it to a SAT solver.

This all hinges on one of the weirdest things about computing in the 21st century. In complexity theory, SAT (along with many equivalents) is the prototypical "hardest problem" in NP. Why do we intentionally convert our problem into an instance of this "hardest problem"? I guess for me it illustrates a few things:

- The huge gulf between worst-case complexity (the subject of classes like NP) and the average or "typical" cases that we encounter in the real world. For more on this, check out Impagliazzo's "Five Worlds" paper.
- The fact that real-world difficulty involves a coordination game. SAT solvers got so powerful because everyone agrees SAT is *the* problem to solve. Standard input formats and public competitions were a key part of the amazing progress over the past decade or two.

Of course SAT solvers aren't *quite* omnipotent, and Alloy can quickly get overwhelmed when you scale up the size of your model. Applicability to the real world depends on the *small scope hypothesis*:

If an assertion is invalid, it probably has a small counterexample.

Or equivalently:

Systems that fail on large instances almost always fail on small instances with similar properties.

This is far from a sure thing, but it already underlies a lot of approaches to software testing. With Alloy we have the certainty of proof *within* the size bounds, so we don't have to resort to massive scale to find rare bugs. It's difficult (but not impossible!) to imagine a GC algorithm that absolutely cannot fail on fewer than 6 nodes, but is buggy for larger heaps. *Implementations* will often fall over at some arbitrary resource limit, but algorithms and models are more abstract.

It's not surprising that our correctness property

```
all s: State | no (s.collected & s.live)
```

holds, since it's practically a restatement of the garbage collection "algorithm":

```
t.collected = s.collected + (Object - s.live)
```

Because reachability is built into Alloy, via transitive closure, the simplest model of a garbage collector does not really describe an implementation. In the next article we'll look at *incremental* garbage collection, which breaks the reachability search into small units and allows the mutator to run in-between. This is highly desirable for interactive or real-time apps; it also complicates the algorithm quite a bit. We'll use Alloy to uncover some of these complications.

In the meantime, you can play around with the simple GC model and ask Alloy to visualize any scenario you like. For example, we can look at runs where the final state includes at least 5 pointers, and at least one collected object:

```
pred Show {
  #(last.pointers) >= 5
  some last.collected
}

run Show for 5
```

Thanks for reading! You can find the code in a GitHub repository which I'll update if/when we get around to modeling more complex GCs.

### Attempt at getting primes

Alright, so I just started learning (as of an hour ago I ended up reading about scanl etc. in Learn You a Haskell) and I tried to write something that would get all the primes under 100. I suppose there are way better/quicker ways to figure out those primes, but I'm just curious whether the way I typed it out is fine.

It uses 'trial division'.

So here's the code:

```haskell
module Main where

import Data.Fixed

getPrimes :: [Float]
getPrimes = takeWhile (<100) (filter checkIfPrime [2.0,3.0..])

checkIfPrime :: Float -> Bool
checkIfPrime x
  | x == 2 || x == 3 = True
  | (x `mod'` 2 == 0) || (x `mod'` 3 == 0) = False
  | 0.0 `elem` map (mod' x) numberList = False
  | otherwise = True
  where numberList = numbersToCheck (sqrt x)

numbersToCheck :: Float -> [Float]
numbersToCheck x = takeWhile (<=x) (5 : 7 : map (+6) (numbersToCheck x))

main :: IO ()
main = print getPrimes
```

submitted by Piegie
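For comparison (this is not from the original post): the usual advice for code like this is to work over the integers rather than Float, since floating-point equality and mod' are fragile for divisibility tests. The same trial-division idea, sketched in Python:

```python
# Trial division over the integers: a number n > 1 is prime iff no
# d with 2 <= d <= sqrt(n) divides it.

def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:    # only check divisors up to sqrt(n)
        if n % d == 0:
            return False
        d += 1
    return True

primes_under_100 = [n for n in range(2, 100) if is_prime(n)]
print(primes_under_100[:5])  # [2, 3, 5, 7, 11]
```

The Haskell version above is the same algorithm, except that it skips multiples of 2 and 3 up front and then only tests candidate divisors of the form 6k ± 1.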