News aggregator

Danny Gratzer: Notes on Focusing

Planet Haskell - Sun, 10/26/2014 - 6:00pm
Posted on October 27, 2014 Tags: types

I’ve been spending a lot of time whacking my head on focusing literature. I’d like to jot down some intuition around what a focused system is and how it relates to the rest of the world. I’m going to steer clear of actually proving things but I will point out where a proof would be needed.

What Is Focusing

In a nutshell, focusing is a strategy for building proofs that minimizes the number of choices available at each step. Focusing is thus amenable to mechanization, since a computer is very good at applying lots of deterministic procedures but incredibly bad at making nondeterministic choices.

Now when we set out to define a focused system we usually do something like

  1. Formalize our logical framework with natural deduction
  2. Translate our framework into a sequent calculus
  3. Transform our sequent calculus into a focused variant

At each of these steps there’s a proof that says something like “System 2 is sound and complete with respect to System 1”. We can then chain these proofs together to show that we can transform any nonfocused proof into a focused one (focalization) and the reverse (de-focalization).

In order to actually carry out these proofs there’s a fair amount of work and pain. Usually we’ll need something like cut elimination and/or identity expansion.

Groundwork

Now before we go on to define an example logic, let’s notice a few things. First off, in sequent calculus there are left and right rules. Left rules decompose known facts into other known facts while right rules transform our goal. There’s also an identity sequent which more or less just states

A is an atom
—————————————
  Γ, A → A

This is a bit boring though.

Now certain rules are invertible: their conclusion implies their premises in addition to the reverse. For example, if I say you must prove A ∧ B, clearly we’ll have to prove both A and B; there’s no alternative set of rule applications that lets us circumvent proving A and B.

This means that if we’re mechanically trying to prove something of the form A ∧ B, we can immediately apply the right rule that decomposes ∧ into 2 goals.

We call these sorts of rules invertible or asynchronous. Dually, there are rules that, when applied, can transform our goal into something impossible to prove. Consider ⊥ ∨ ⊤: clearly applying the rule that transforms this into ⊥ would be a bad idea!

Now if we begin classifying all the left and right rules we’ll notice that they tend to fall into 2 categories

  • Things with invertible left rules and noninvertible right rules
  • Things with noninvertible left rules and invertible right rules

We dub the first group “positive” things and the second “negative” things. This is called polarization and isn’t strictly necessary but greatly simplifies a lot of our system.
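To make the classification a bit more concrete, here’s a tiny Haskell sketch (my own rendering, nothing official) that just tags the connectives we’ve seen with the polarity their invertible rules suggest:

-- Polarity assignment for the unpolarized connectives discussed so far.
data Polarity = Positive | Negative
  deriving (Eq, Show)

data Connective = Conj | Disj | Impl | Truth | Falsity
  deriving (Eq, Show)

-- ∧, ∨, ⊤ and ⊥ have invertible left rules, so we mark them positive;
-- ⊃ has an invertible right rule, so it is negative. (∧ and ⊤ also have
-- perfectly good negative versions, as discussed next.)
polarity :: Connective -> Polarity
polarity Conj    = Positive
polarity Disj    = Positive
polarity Truth   = Positive
polarity Falsity = Positive
polarity Impl    = Negative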

Now there are a few things that could be considered both positive and negative. For example we can consider ∧ as positive with

Γ → A⁺    Γ → B⁺
————————————————
   Γ → A⁺ ∧ B⁺

 Γ, A⁺, B⁺ → C
—————————————————
 Γ, A⁺ ∧ B⁺ → C

In this case, the key determiner of the polarity of ∧ comes from its subcomponents. We can just treat ∧ as positive along with its subcomponents, and with an appropriate negative dual ∧⁻ our proof system will still be complete.

As a quick example, implication ⊃ is negative. Its right rule is invertible:

 Γ, A → B
——————————
Γ → A ⊃ B

While its left rule isn’t:

Γ, A ⊃ B → A    Γ, B, A ⊃ B → C
————————————————————————————————
          Γ, A ⊃ B → C

We could easily have something like ⊥ ⊃ ⊤ in our context, but using this rule would entail (heh) proving ⊥! Urk. If our system applied this rule remorselessly, we’d quickly end up in a divergent proof search.

An Actual Focused System

Do note that these typing rules are straight out of Rob Simmons’ paper, linked below

Now that we’ve actually seen some examples of invertible rules and polarized connectives, let’s see how this all fits into a coherent system. There is one critical change we must make to the structure of our judgments: an addition to the form _ → _. Instead of just an unordered multiset on the left, in order to properly do inversion we change this to Γ; Ω ⊢ A, where Ω is an ordered list of propositions waiting to be broken down by inversion.

Furthermore, since we’re dealing with a polarized calculus, we occasionally want to view positive things as negative and vice versa. For this we have shifts, ↓ and ↑. When we’re focusing on some proposition and we reach a shift, we pop out of the focused portion of our judgment.
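As a rough Haskell sketch (mine, not the post’s or the paper’s), the polarized syntax with shifts might be rendered like this; atoms and the identity rules are left out to keep the later fragments short:

-- Positive propositions A⁺
data Pos
  = Pos :\/: Pos      -- A⁺ ∨ B⁺
  | Pos :/\+: Pos     -- A⁺ ∧ B⁺
  | TopP              -- ⊤⁺
  | Bot               -- ⊥
  | Down Neg          -- ↓A⁻, a negative proposition viewed positively
  deriving Show

-- Negative propositions A⁻
data Neg
  = Pos :=>: Neg      -- A⁺ ⊃ B⁻
  | Neg :/\-: Neg     -- A⁻ ∧ B⁻
  | TopN              -- ⊤⁻
  | Up Pos            -- ↑A⁺, a positive proposition viewed negatively
  deriving Show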

Our system is broken up into 3 essentially separate judgments. In the first judgment we basically apply as many invertible rules in as many places as we can.

 Γ, A⁻; Ω ⊢ U
——————————————
Γ; ↓A⁻, Ω ⊢ U

Γ; A⁺, Ω ⊢ U    Γ; B⁺, Ω ⊢ U
—————————————————————————————
      Γ; A⁺ ∨ B⁺, Ω ⊢ U

  Γ; A⁺, B⁺, Ω ⊢ U
————————————————————
 Γ; A⁺ ∧ B⁺, Ω ⊢ U

——————————————
 Γ; ⊥, Ω ⊢ U

We first look at how to break down Ω into simpler forms. The idea is that we keep going until there’s nothing left in Ω. Ω can only contain positive propositions, so eventually we’ll decompose everything into shifts (which we move into Γ), ⊤⁺ (which we just drop on the floor), or ⊥ (which means we’re done). These are all invertible rules, so we can safely apply them eagerly without changing the provability of our goal.
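Continuing the sketch, the inversion phase comes out as a function that eats Ω from the front; attack (defined a bit further down, once we’ve seen the remaining judgments) takes over when Ω is empty, and the fuel argument is just a crude bound I use to keep the toy search from looping:

type Ctx = [Neg]           -- Γ: the persistent context of negative propositions

data Goal = GoalN Neg      -- U still has right-invertible structure to decompose
          | GoalP Pos      -- U is stable: time to pick a focus

-- Γ; Ω ⊢ U, applying the invertible left rules eagerly.
invert :: Int -> Ctx -> [Pos] -> Goal -> Bool
invert fuel gamma omega goal = case omega of
  []                 -> attack fuel gamma goal
  Down n : rest      -> invert fuel (n : gamma) rest goal       -- move A⁻ into Γ
  (a :\/: b) : rest  -> invert fuel gamma (a : rest) goal       -- prove both branches
                     && invert fuel gamma (b : rest) goal
  (a :/\+: b) : rest -> invert fuel gamma (a : b : rest) goal   -- split A⁺ ∧ B⁺
  TopP : rest        -> invert fuel gamma rest goal             -- drop ⊤⁺ on the floor
  Bot : _            -> True                                    -- ⊥ on the left: done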

Once we’ve moved everything out of Ω we can make a choice. If U is “stable”, meaning that we can’t easily break it down further, we can pick something negative out of our context and focus on it:

 Γ; [A⁻] ⊢ U
—————————————
Γ, A⁻; • ⊢ U

This pops us into the next judgment in our calculus. However, if U is not stable, then we have to decompose it further as well.

 Γ; • ⊢ A⁺
———————————
Γ; • ⊢ ↑A⁺

——————————
Γ; • ⊢ ⊤⁻

  Γ; A⁺ ⊢ B⁻
——————————————
Γ; • ⊢ A⁺ ⊃ B⁻

Γ; • ⊢ A⁻    Γ; • ⊢ B⁻
———————————————————————
     Γ; • ⊢ A⁻ ∧ B⁻

If we have a negative connective at the top level we can decompose that further, leaving us with a strictly smaller goal. Finally, we may reach a positive proposition with nothing in Ω. In this case we focus on the right.

 Γ ⊢ [A⁺]
——————————
Γ; • ⊢ A⁺
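In the sketch, decomposing an unstable goal and choosing a focus at a stable one fit naturally into one function; focusR and focusL are defined under the two focused judgments below:

-- Γ; • ⊢ U: right inversion on a negative goal, then focusing at a stable goal.
attack :: Int -> Ctx -> Goal -> Bool
attack fuel gamma (GoalN n) = case n of
  Up p      -> attack fuel gamma (GoalP p)        -- ↑A⁺: the goal A⁺ is now stable
  a :=>: b  -> invert fuel gamma [a] (GoalN b)    -- A⁺ ⊃ B⁻: push A⁺ into Ω
  a :/\-: b -> attack fuel gamma (GoalN a)        -- decompose A⁻ ∧ B⁻
            && attack fuel gamma (GoalN b)
  TopN      -> True
attack fuel gamma (GoalP p)
  | fuel <= 0 = False                             -- give up rather than loop forever
  | otherwise = focusR (fuel - 1) gamma p                               -- focus right, or
             || any (\n -> focusL (fuel - 1) gamma n (GoalP p)) gamma   -- focus on some A⁻ in Γ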

Now we’re in a position to discuss these two focused judgments. If we focus on the right we decompose positive connectives

—————————
Γ ⊢ [⊤⁺]

Γ; • ⊢ A⁻
——————————
Γ ⊢ [↓A⁻]

  Γ ⊢ [A⁺]
—————————————
Γ ⊢ [A⁺ ∨ B⁺]

  Γ ⊢ [B⁺]
—————————————
Γ ⊢ [A⁺ ∨ B⁺]

Γ ⊢ [A⁺]    Γ ⊢ [B⁺]
—————————————————————
    Γ ⊢ [A⁺ ∧ B⁺]

These rules follow the ones we’ve already seen. If we encounter a shift, we stop focusing. Otherwise we decompose the topmost positive connective. Looking at these, you should see that sometimes these rules will lead us to a “mistake”. Imagine if we applied the 4th rule to ⊤ ∨ ⊥! This is why these rules are segregated into a separate judgment.
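In the running sketch, right focus is where the genuine choices show up, most visibly at ∨:

-- Γ ⊢ [A⁺]: right focus.
focusR :: Int -> Ctx -> Pos -> Bool
focusR fuel gamma p = case p of
  TopP      -> True                                        -- ⊤⁺ is immediate
  Bot       -> False                                       -- no rule for ⊥ under focus
  a :\/: b  -> focusR fuel gamma a || focusR fuel gamma b  -- commit to a disjunct
  a :/\+: b -> focusR fuel gamma a && focusR fuel gamma b
  Down n    -> attack fuel gamma (GoalN n)                 -- ↓A⁻: stop focusing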

In this judgment’s dual we essentially apply the exact same rules to the left of the turnstile and on negative connectives.

  Γ; A⁺ ⊢ U
————————————
Γ; [↑A⁺] ⊢ U

Γ ⊢ [A⁺]    Γ; [B⁻] ⊢ U
————————————————————————
    Γ; [A⁺ ⊃ B⁻] ⊢ U

   Γ; [A⁻] ⊢ U
—————————————————
Γ; [A⁻ ∧ B⁻] ⊢ U

   Γ; [B⁻] ⊢ U
—————————————————
Γ; [A⁻ ∧ B⁻] ⊢ U
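And left focus rounds out the sketch:

-- Γ; [A⁻] ⊢ U: left focus.
focusL :: Int -> Ctx -> Neg -> Goal -> Bool
focusL fuel gamma n u = case n of
  Up p      -> invert fuel gamma [p] u                        -- ↑A⁺: stop focusing, invert A⁺
  a :=>: b  -> focusR fuel gamma a && focusL fuel gamma b u   -- prove A⁺, keep focusing on B⁻
  a :/\-: b -> focusL fuel gamma a u || focusL fuel gamma b u -- commit to a conjunct
  TopN      -> False                                          -- no left rule for ⊤⁻ under focus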

That wraps up our focused system. The idea is that we now have a much more restricted system which can still prove everything our original, unfocused system could. A computer can easily be programmed to do focused search, since far fewer rules are applicable at each step and there’s much less backtracking. I think Pfenning has referred to this as removing most of the “don’t-care” nondeterminism from our rules.
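For completeness, the sketch gets a top-level entry point; the fuel bound is arbitrary, and the two comments check the examples from earlier:

-- Prove a closed negative proposition by starting in the inversion judgment.
prove :: Neg -> Bool
prove a = invert 1000 [] [] (GoalN a)

-- prove (Bot :=>: TopN) == True    -- ⊥ ⊃ ⊤ is provable
-- prove (Up Bot)        == False   -- ⊥ alone is not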

Wrap Up

I’m going to wrap up the post here. Proving focalization or even something like cut elimination is quite fiddly and I have no desire at all to try to transcribe it (painfully) into markdown and get it wrong in the process.

Instead, now that you have some idea of what focusing is about, go read Rob Simmons’ paper. It provides a clear account of proving everything necessary to show that a focused system is sound and complete with respect to its unfocused counterpart.

Cheers

Categories: Offsite Blogs

NFM 2015 - final call for papers

General haskell list - Sun, 10/26/2014 - 3:40pm
CALL FOR PAPERS The 7th NASA Formal Methods Symposium http://www.NASAFormalMethods.org/nfm2015 27 – 29 April 2015 Pasadena, California, USA Paper Submission: *** 10 Nov 2014 *** THEME The widespread use and increasing complexity of mission- and safety-critical systems require advanced techniques that address their specification, verification, validation, and certification. The NASA Formal Methods Symposium is a forum for theoreticians and practitioners from academia, industry, and government, with the goals of identifying challenges and providing solutions to achieving assurance in mission- and safety-critical systems. Within NASA such systems include for example autonomous robots, separation assurance algorithms for aircraft, Next Generation Air Transportation (NextGen), and autonomous rendezvous and docking for spacecraft. Moreover, emerging paradigms such as property-based design, code generation, and safety cases are bringing with them new challenges a
Categories: Incoming News

Counting in binary?

Haskell on Reddit - Sun, 10/26/2014 - 1:46pm

A little while ago on here, I swear I saw someone use a few functions from Data.List on the string "01" and ended up getting ["0", "1", "01", "10", "100", "101"...], but I can't figure out for the life of me what he did. Does anyone know?

Edit: Binary is hard.
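One plausible reconstruction (just a guess at what the poster saw, and it uses replicateM from Control.Monad rather than Data.List):

import Control.Monad (replicateM)

-- All strings over "01", grouped by length.
binaryStrings :: [String]
binaryStrings = concatMap (\n -> replicateM n "01") [1 ..]

-- take 6 binaryStrings == ["0","1","00","01","10","11"]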

submitted by Undo_all
[link] [15 comments]
Categories: Incoming News

Q: How to apply a polymorphic function to a Dynamic value in GHC 7.8

Haskell on Reddit - Sun, 10/26/2014 - 8:22am

There is a question http://stackoverflow.com/questions/10889682/how-to-apply-a-polymorphic-function-to-a-dynamic-value and a solution from Edward Kmett that no longer works in GHC 7.8 because of Kind-polymorphic Typeable. Edward also proposed "Proposal: Simplify/Generalize Data.Dynamic": http://comments.gmane.org/gmane.comp.lang.haskell.libraries/22732 where he says "Now it becomes possible to write code that does polymorphic things underneath the Dynamic wrapper, such as Lennart's example once more, but now in a principled way." Does anybody know what this principled way is?

submitted by andrb
[link] [14 comments]
Categories: Incoming News

Tracking down where an instance comes from

haskell-cafe - Sun, 10/26/2014 - 4:14am
Hi, I'm in a situation where we have a 140+ module program (Yi) with some 30+ packages of dependencies and somewhere along the line, this little guy gets in: instance [safe] Show (a -> b) -- Defined in ‘Text.Show.Functions’ Of course this automatically infects everything that imports Yi. This is bad. What would be the best way to track down through which import this actually comes in? There's nothing directly importing Text.Show.Functions and I wonder if there's a better way than ‘remove imports one by one going deeper each time you hit it’ which might take a while, especially as there seems to be no way to unload instances except restarting GHCi.
Categories: Offsite Discussion

Polymorphic functions over string libraries

haskell-cafe - Sun, 10/26/2014 - 2:36am
Hi. I was wondering what would be the best way to create a polymorphic function over any possible string library (Text, String, ByteString, etc). For instance, imagine I have to read a file with text, transform this text in some way and output it to another file, or to the console. If I wanted to use String, I'd just do this:

transform :: String -> String
main = readFile "input.txt" >>= writeFile "output.txt" . transform

But if I wanted to use Text instead, I'd have to use this:

import qualified Data.Text.IO as T
transform :: Text -> Text
main = T.readFile "input.txt" >>= T.writeFile "output.txt" . transform

Idem for ByteString. I was wondering if there was a way to create these computations in a generic way, for any kind of string library, something like this:

class StringLibrary s where
  sReadFile :: FilePath -> IO s
  sWriteFile :: FilePath -> s -> IO ()
  ...

So then I'd just have this:

transform :: StringLibrary s => s -> s
main = sReadFile "input.txt" >>= sWriteFile "output.txt" . transfo
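A minimal sketch of how such a class could be filled in for String and Text (using the names from the post; this isn't an existing library):

{-# LANGUAGE FlexibleInstances #-}
import qualified Data.Text as T
import qualified Data.Text.IO as TIO

class StringLibrary s where
  sReadFile  :: FilePath -> IO s
  sWriteFile :: FilePath -> s -> IO ()

instance StringLibrary String where
  sReadFile  = readFile
  sWriteFile = writeFile

instance StringLibrary T.Text where
  sReadFile  = TIO.readFile
  sWriteFile = TIO.writeFile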
Categories: Offsite Discussion

Monads: external questions

haskell-cafe - Sun, 10/26/2014 - 12:23am
As opposed to the internal logic of monads, how they work, I hope to start a discussion about their external logic: how and why to use monads. design ------ How do monads change the way one * thinks about a problem? * structures data? * refactors? * tests? Should I always be giving the monads a lot of cognitive bandwidth, because they reorder the way everything should be, or is it an investment with a high initial cognitive cost but requiring little maintenance thereafter? what is their common framework? ------------------------------- Monads let data reach farther than it otherwise would. Subjectively, they feel like a controlled way of violating encapsulation. Are there other, deeper or more specific, commonalities that explain why monads are a good way to implement exceptions, output, state, and perhaps other services? varieties --------- In practice, are all monads either for exceptions, state or output? If not, what other goals are monads well suited to? How should multiple contexts coexist? What's
Categories: Offsite Discussion

List functions from type families?

Haskell on Reddit - Sat, 10/25/2014 - 6:08pm

I'm trying to get some type-level logic to implement anonymous "flat" sums. I can't figure out how to deal with recursive matching cases though. Something like this compiles, but it doesn't work as well as it should:

class Remove (l :: [*]) (t :: *) (ll :: [*]) | l t -> ll
instance (x ~ t) => Remove (x ': l) t l
instance Remove l t ll => Remove (x ': l) t (x ': ll)

What I'd really like is something like a direct translation of the normal list function

type family Remove (l :: [*]) (t :: *) :: [*]
type instance Remove (t ': l) t = l
type instance Remove (x ': l) t = x ': Remove l t

but GHC complains about conflicting family instances. Similarly for other list functions like elem. Is there any way to massage the compiler into accepting this?
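One sketch that appears to work, assuming GHC 7.8's closed type families (whose equations may overlap and are matched top to bottom):

{-# LANGUAGE DataKinds, TypeFamilies, TypeOperators #-}

type family Remove (l :: [*]) (t :: *) :: [*] where
  Remove '[]      t = '[]
  Remove (t ': l) t = l
  Remove (x ': l) t = x ': Remove l t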

submitted by dogirardo
[link] [1 comment]
Categories: Incoming News

Finished reading Learn You a Haskell, please recommend me book number 2:

Haskell on Reddit - Sat, 10/25/2014 - 4:49pm

So, I finished reading Learn You a Haskell (http://learnyouahaskell.com/) and now I need book number two.

I am now taking recommendations: I'd like a book specifically that covers comonads and lens. Anybody know a good second Haskell book to read after your first?

Edit: wow, it's really nice to see I got a lot of recommendations for this. I'm very proud to be learning Haskell, there's a very positive community behind it no doubt.

submitted by alphonse23
[link] [24 comments]
Categories: Incoming News

a simpler way to declare typeclass instances

haskell-cafe - Sat, 10/25/2014 - 4:42pm
Hello, I am trying to understand how typeclasses work. I know that they can be used as follows (from the manual):

data Foo = Foo {x :: Integer, str :: String}
instance Eq Foo where
  (Foo x1 str1) == (Foo x2 str2) = (x1 == x2) && (str1 == str2)

I am wondering, why is the following seemingly unambiguous syntax not allowed too?

data Foo = Foo { x :: Integer, str :: String }
instance Eq Foo
  (Foo x1 str1) == (Foo x2 str2) = (x1 == x2) && (str1 == str2)

If it was allowed, it seems that it could also be applied to records:

class HasName r where
  name :: r -> String

data Bird = Bird { name :: String, wingNumber :: Integer }
data Person = Person { name :: String, likesBirds :: Bool }

instance HasName Bird
instance HasName Person

Alexey.
Categories: Offsite Discussion

Recursion on TypeNats

glasgow-user - Sat, 10/25/2014 - 2:53pm
If you define your own type level naturals by promoting

data Nat = Z | S Nat

you can define data families recursively, for example

data family Power :: Nat -> * -> *
data instance Power Z a = PowerZ
data instance Power (S n) a = PowerS a (Power n a)

But if you use the built-in type level Nat, I can find no way to do the same thing. You can define a closed type family

type family Power (n :: Nat) a where
  Power 0 a = ()
  Power n a = (a, Power (n-1) a)

but this isn't the same thing (and requires UndecidableInstances). Have I missed something? The user guide page is pretty sparse, and not up to date anyway. If not, are there plans to add a "Successor" constructor to Nat? I would have thought this was the main point of using Nat rather than Int. Barney.
Categories: Offsite Discussion

Call-for-help: Add XDGBDS support to `directory` package

libraries list - Sat, 10/25/2014 - 10:43am
Hello *, I'd like to invite everyone with knowledge of the XDG Base Directory Specification[1] to contribute to https://github.com/haskell/directory/issues/6 From what I gathered, it appears to me that adhering to the XDGBD Spec (on Linux at least) is desirable and "strongly encouraged" by Linux distributions[2], so I think this ought to be implemented sooner rather than later (maybe even in time to be part of GHC 7.10.1) This issue was originally filed against GHC, since it affects the location of the `~/.ghc` folder (and fwiw this would also affect cabal's `~/.cabal` folder location). Thanks, hvr [1]: http://standards.freedesktop.org/basedir-spec/basedir-spec-latest.html [2]: https://wiki.debian.org/XDGBaseDirectorySpecification
Categories: Offsite Discussion

aeson-t: Transform JSON

Haskell on Reddit - Sat, 10/25/2014 - 10:07am
Categories: Incoming News

Reassociating trees in Template Haskell AST's

Haskell on Reddit - Sat, 10/25/2014 - 4:41am

Disclaimer: crossposted on StackOverflow.

I'm upgrading a library where I translate Haskell to another language. Right now I'm using Meta.Parse to read in a Haskell module, and get back its TemplateHaskell AST, as described here.

The problem I'm running into is that, when I run the parse, I get a bunch of infix operators parsed as UInfixE and UInfixP, meaning that they are of unresolved associativity.

The description of the associativity talks about how the Haskell compiler resolves these.

I'm wondering, is there a function available which can do this reassociation on the trees I get from parsing? I'm looking for something like this:

reassoc :: [Dec] -> [Dec]

I could write the massive AST traversal that does this, but it seems like it would be a massive amount of boilerplate, and clearly the function already exists in some form (hopefully in a form that plays nice with TH).

Does such a thing exist? Is there an easy way to get rid of unresolved infix operators in the AST I get from parsing declarations?

Even a function which, given the Name of an operator, could give its precedence and associativity, would be very useful.

submitted by jmite
[link] [comment]
Categories: Incoming News

Are there ANY good game engines for haskell?

Haskell on Reddit - Fri, 10/24/2014 - 7:54pm

I want to write a really simple interactive physics simulation, and I can't for the life of me find a game engine (really meaning a graphics library with mouse and keyboard functionality). Does anyone know of a good one?

submitted by Undo_all
[link] [38 comments]
Categories: Incoming News