News aggregator

Finding Lisp

del.icio.us/haskell - Mon, 10/27/2014 - 2:50am
Categories: Offsite Blogs

Is there a better way to write the isPrime function?

Haskell on Reddit - Mon, 10/27/2014 - 2:46am

Hi all,

Is there a better way to write an isPrime function in Haskell?

isPrime :: (Integral a) => a -> Bool
isPrime k
  | k <= 1    = False
  | otherwise = 0 `notElem` map (mod k) [2 .. k - 1]

Thank You

Vinay
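A common refinement (a sketch, not from the thread): trial division only needs to check divisors up to the square root of k, since any composite k has a factor no larger than √k.

  -- Same results as the original, but O(√k) divisions instead of O(k).
  -- Rounding through Double is an assumption of this sketch; it is
  -- fine for modestly sized k.
  isPrime' :: Integral a => a -> Bool
  isPrime' k
    | k <= 1    = False
    | otherwise = all (\d -> k `mod` d /= 0) [2 .. isqrt k]
    where
      isqrt n = floor (sqrt (fromIntegral n :: Double))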

submitted by andiamvinay
Categories: Incoming News

b.hatena.ne.jp

del.icio.us/haskell - Sun, 10/26/2014 - 9:27pm
Categories: Offsite Blogs

Hiding module *exports*

glasgow-user - Sun, 10/26/2014 - 8:28pm
(Not to be confused with the "hiding import behavior" discussion also going on.)

Currently, I'm able to write "module Foo where" to export everything defined in Foo. If, though, I add to the module some definitions which I don't want to export...

  data Lockbox = MkLockbox Int
  internalFunction = ...

...I then have to explicitly enumerate everything that I *do* want the module to export, and add to that list each time I add to the module. I propose that instead, we're able to simply say what we mean:

  module Foo hiding (Lockbox(MkLockbox), internalFunction) where

I think its semantics are immediately clear to the reader. There's a little bit of bikeshedding that needs to happen (e.g. is "hiding (Foo(..))" sufficient to hide the type Foo and not just its constructors), but are people +1 on this? I've frequently wanted this behavior.

Tom
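For contrast, a minimal sketch of the status quo the proposal would improve on (publicFunction is a hypothetical name added for illustration):

  module Foo
    ( Lockbox          -- export the type but not the MkLockbox constructor
    , publicFunction
    ) where

  data Lockbox = MkLockbox Int

  internalFunction :: Lockbox -> Int
  internalFunction (MkLockbox n) = n

  publicFunction :: Int -> Lockbox
  publicFunction = MkLockbox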
Categories: Offsite Discussion

Danny Gratzer: Notes on Focusing

Planet Haskell - Sun, 10/26/2014 - 6:00pm
Posted on October 27, 2014
Tags: types

I’ve been spending a lot of time whacking my head on focusing literature. I’d like to jot down some intuition around what a focused system is and how it relates to the rest of the world. I’m going to steer clear of actually proving things but I will point out where a proof would be needed.

What Is Focusing

In a nutshell, focusing is a strategy for building proofs that minimizes the number of choices available at each step. Focusing is thus amenable to mechanization, since a computer is very good at applying lots of deterministic procedures but incredibly bad at nondeterministic choice.

Now when we set out to define a focused system we usually do something like

  1. Formalize our logical framework with natural deduction
  2. Translate our framework into a sequent calculus
  3. Transform our sequent calculus into a focused variant

At each of these steps there’s a proof that says something like “System 2 is sound and complete with respect to System 1”. We can then chain these proofs together to show that any unfocused proof can be transformed into a focused one (focalization) and vice versa (de-focalization).

In order to actually carry out these proofs there’s a fair amount of work and pain. Usually we’ll need something like cut elimination and/or identity expansion.

Groundwork

Now before we go on to define an example logic, let’s notice a few things. First off, in sequent calculus there are left and right rules. Left rules decompose known facts into other known facts while right rules transform our goal. There’s also an identity sequent which more or less just states

A is an atom
—————————————
  Γ, A → A

This is a bit boring though.

Now certain rules are invertible: their conclusion implies their premise in addition to the reverse. For example, if I say you must prove A ∧ B, clearly we’ll have to prove both A and B; there’s no alternative set of rule applications that lets us circumvent proving each of them.

This means that if we’re mechanically trying to prove something of the form A ∧ B, we can immediately apply the right rule that decomposes ∧ into 2 goals.
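As a tiny illustration (my own encoding, not from the post), eagerly applying the invertible right rules for ∧ and ⊤ splits a goal into forced subgoals with no loss of provability:

  data Prop = Atom String | Top | Prop :/\: Prop | Prop :\/: Prop

  -- Apply invertible right rules exhaustively; the returned list is
  -- the set of subgoals any proof must establish anyway.
  invertRight :: Prop -> [Prop]
  invertRight (a :/\: b) = invertRight a ++ invertRight b  -- ∧R: both subgoals
  invertRight Top        = []                              -- ⊤R: nothing to prove
  invertRight p          = [p]  -- atoms and ∨: no invertible right rule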

We call these sorts of rules invertible or asynchronous. Dually, there are rules that, when applied, can transform our goal into something impossible to prove. Consider ⊥ ∨ ⊤: clearly applying the rule that transforms this into ⊥ would be a bad idea!

Now if we begin classifying all the left and right rules, we’ll notice that they tend to fall into 2 categories

  • Things with invertible left rules and noninvertible right rules
  • Things with noninvertible left rules and invertible right rules

We dub the first group “positive” things and the second “negative” things. This is called polarization and isn’t strictly necessary but greatly simplifies a lot of our system.

Now there are a few things that could be considered both positive and negative. For example we can consider ∧ as positive with

Γ → A⁺    Γ → B⁺
————————————————
  Γ → A⁺ ∧ B⁺

Γ, A⁺, B⁺ → C
——————————————
Γ, A⁺ ∧ B⁺ → C

In this case, the key determiner for the polarity of ∧ comes from its subcomponents. We can just treat ∧ as positive along with its subcomponents; with an appropriate dual ∧⁻, our proof system will still be complete.

As a quick example, implication ⊃ is negative. Its right rule is invertible:

Γ, A → B
—————————
Γ → A ⊃ B

while its left rule isn’t:

Γ, A ⊃ B → A    Γ, B, A ⊃ B → C
————————————————————————————————
         Γ, A ⊃ B → C

We could easily have something like ⊥ ⊃ ⊤, but using this rule would entail (heh) proving ⊥! Urk. If our system applied this rule remorselessly, we’d quickly end up in a divergent proof search.

An Actual Focused System

Do note that these typing rules are straight out of Rob Simmons’ paper, linked below.

Now that we’ve actually seen some examples of invertible rules and polarized connectives, let’s see how this all fits into a coherent system. There is one critical change we must make to the structure of our judgments: an addition to the form _ → _. Instead of just an unordered multiset on the left, in order to properly do inversion we change this to Γ; Ω ⊢ A where Ω is an ordered list of propositions we intend to focus on.

Furthermore, since we’re dealing with a polarized calculus, we occasionally want to view positive things as negative and vice versa. For this we have shifts, ↓ and ↑. When we’re focusing on some proposition and we reach a shift, we pop out of the focused portion of our judgment.

Our system is broken up into 3 essentially separate judgments. In the first judgment we apply as many invertible rules in as many places as we can.

Γ, A⁻; Ω ⊢ U
——————————————
Γ; ↓A⁻, Ω ⊢ U

Γ; A⁺, Ω ⊢ U    Γ; B⁺, Ω ⊢ U
—————————————————————————————
     Γ; A⁺ ∨ B⁺, Ω ⊢ U

Γ; A⁺, B⁺, Ω ⊢ U
——————————————————
Γ; A⁺ ∧ B⁺, Ω ⊢ U

———————————
Γ; ⊥, Ω ⊢ U

We first look at how to break down Ω into simpler forms. The idea is that we keep going until there’s nothing left in Ω. Ω can only contain positive propositions, so eventually we’ll decompose everything into shifts (which we move into Γ), ⊤⁺ (which we just drop on the floor), or ⊥ (which means we’re done). These are all invertible rules, so we can safely apply them eagerly without changing the provability of our goal.
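A minimal sketch of this inversion phase (my own encoding, not Simmons’; Neg is a stub just to keep it self-contained):

  data Neg = NAtom String   -- stand-in for negative propositions
  data Pos = PTop | PBot | POr Pos Pos | PAnd Pos Pos | Down Neg

  -- invert gamma omega: exhaustively apply the invertible left rules,
  -- returning the stable contexts we still owe a proof under.  A
  -- branch containing ⊥ is closed outright and contributes nothing.
  invert :: [Neg] -> [Pos] -> [[Neg]]
  invert gamma []          = [gamma]
  invert gamma (p : omega) = case p of
    Down a   -> invert (a : gamma) omega      -- shift ↓A⁻ moves into Γ
    PAnd a b -> invert gamma (a : b : omega)  -- ∧⁺ decomposes in place
    POr a b  -> invert gamma (a : omega)
             ++ invert gamma (b : omega)      -- ∨ demands both branches
    PTop     -> invert gamma omega            -- ⊤⁺ is dropped on the floor
    PBot     -> []                            -- ⊥ closes this branch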

Once we’ve moved everything out of Ω we can make a choice. If U is “stable”, meaning that we can’t easily break it down further, we can pick something negative out of our context and focus on it:

Γ; [A⁻] ⊢ U
————————————
Γ, A⁻; • ⊢ U

This pops us into the next judgment in our calculus. However, if U is not stable, then we have to decompose it further as well.

Γ; • ⊢ A⁺
———————————
Γ; • ⊢ ↑A⁺

—————————
Γ; • ⊢ ⊤⁻

Γ; A⁺ ⊢ B⁻
——————————————
Γ; • ⊢ A⁺ ⊃ B⁻

Γ; • ⊢ A⁻    Γ; • ⊢ B⁻
———————————————————————
     Γ; • ⊢ A⁻ ∧ B⁻

If we have a negative connective at the top level we can decompose it further, leaving us with a strictly smaller goal. Finally, we may reach a positive proposition with nothing left in Ω. In this case we focus on the right:

Γ ⊢ [A⁺]
——————————
Γ; • ⊢ A⁺

Now we’re in a position to discuss these two focused judgments. If we focus on the right, we decompose positive connectives:

————————
Γ ⊢ [⊤⁺]

Γ; • ⊢ A⁻
——————————
Γ ⊢ [↓A⁻]

Γ ⊢ [A⁺]
——————————————
Γ ⊢ [A⁺ ∨ B⁺]

Γ ⊢ [B⁺]
——————————————
Γ ⊢ [A⁺ ∨ B⁺]

Γ ⊢ [A⁺]    Γ ⊢ [B⁺]
—————————————————————
    Γ ⊢ [A⁺ ∧ B⁺]

These rules follow the ones we’ve already seen. If we encounter a shift, we stop focusing. Otherwise we decompose the topmost positive connective. Looking at these, you should see that sometimes these rules will lead us to a “mistake”. Imagine if we applied the 4th rule to ⊤ ∨ ⊥! This is why these rules are segregated into a separate judgment.
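To make the “mistake” concrete, here is a toy right-focus phase (my encoding again, not from the post): disjunction is the one genuinely nondeterministic step, and lazy (||) is where a wrong choice gets backtracked.

  data P = PTop | PBot | P :\/: P | P :/\: P

  focusRight :: P -> Bool
  focusRight PTop       = True
  focusRight PBot       = False
  focusRight (a :/\: b) = focusRight a && focusRight b  -- both, deterministic
  focusRight (a :\/: b) = focusRight a || focusRight b  -- choose, may backtrack

  -- focusRight (PTop :\/: PBot) == True: committing to the right
  -- disjunct first would be the "mistake"; trying the left recovers.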

In this judgment’s dual we essentially apply the exact same rules, now to the left of the turnstile and on negative connectives:

Γ; A⁺ ⊢ U
————————————
Γ; [↑A⁺] ⊢ U

Γ ⊢ [A⁺]    Γ; [B⁻] ⊢ U
————————————————————————
    Γ; [A⁺ ⊃ B⁻] ⊢ U

Γ; [A⁻] ⊢ U
—————————————————
Γ; [A⁻ ∧ B⁻] ⊢ U

Γ; [B⁻] ⊢ U
—————————————————
Γ; [A⁻ ∧ B⁻] ⊢ U

That wraps up our focused system. The idea is that we now have a much more limited system which can express the same things our original, unfocused system could. A computer can easily be programmed to do a focused search, since with far less backtracking there are fewer rules applicable at each step. I think Pfenning has referred to this as removing most of the “don’t-care” nondeterminism from our rules.

Wrap Up

I’m going to wrap up the post here. Proving focalization or even something like cut elimination is quite fiddly and I have no desire at all to try to transcribe it (painfully) into markdown and get it wrong in the process.

Instead, now that you have some idea of what focusing is about, go read Rob Simmons’ paper. It provides a clear account of proving everything necessary to show that a focused system is complete and sound with respect to its unfocused counterpart.

Cheers

Categories: Offsite Blogs

NFM 2015 - final call for papers

General haskell list - Sun, 10/26/2014 - 3:40pm
CALL FOR PAPERS

The 7th NASA Formal Methods Symposium
http://www.NASAFormalMethods.org/nfm2015
27 – 29 April 2015
Pasadena, California, USA

Paper Submission: *** 10 Nov 2014 ***

THEME

The widespread use and increasing complexity of mission- and safety-critical systems require advanced techniques that address their specification, verification, validation, and certification. The NASA Formal Methods Symposium is a forum for theoreticians and practitioners from academia, industry, and government, with the goals of identifying challenges and providing solutions to achieving assurance in mission- and safety-critical systems. Within NASA such systems include for example autonomous robots, separation assurance algorithms for aircraft, Next Generation Air Transportation (NextGen), and autonomous rendezvous and docking for spacecraft. Moreover, emerging paradigms such as property-based design, code generation, and safety cases are bringing with them new challenges a
Categories: Incoming News

Counting in binary?

Haskell on Reddit - Sun, 10/26/2014 - 1:46pm

A little while ago on here, I swear I saw someone use a few functions from Data.List on the string "01" and end up with ["0", "1", "01", "10", "100", "101"...], but I can't for the life of me figure out what he did. Does anyone know?
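One well-known one-liner produces something close via the list monad (whether it is what the poster saw is a guess):

  import Control.Monad (replicateM)

  -- All strings over "01", ordered by length: replicateM n "01"
  -- enumerates every length-n string in the list monad.
  binaryStrings :: [String]
  binaryStrings = [1 ..] >>= \n -> replicateM n "01"

  -- take 6 binaryStrings == ["0","1","00","01","10","11"]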

Edit: Binary is hard.

submitted by Undo_all
Categories: Incoming News

Q: How to apply a polymorphic function to a Dynamic value in GHC 7.8

Haskell on Reddit - Sun, 10/26/2014 - 8:22am

There is a question http://stackoverflow.com/questions/10889682/how-to-apply-a-polymorphic-function-to-a-dynamic-value with a solution from Edward Kmett that no longer works in GHC 7.8 because of kind-polymorphic Typeable. Edward also proposed "Proposal: Simplify/Generalize Data.Dynamic": http://comments.gmane.org/gmane.comp.lang.haskell.libraries/22732 where he says "Now it becomes possible to write code that does polymorphic things underneath the Dynamic wrapper, such as Lennart's example once more, but now in a principled way." Does anybody know what this principled way is?
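For context, a minimal sketch (an editor's illustration, not the "principled way" being asked about) of what Data.Dynamic already handles: applying a monomorphic function under Dynamic, which is exactly where the polymorphic case breaks down.

  import Data.Dynamic

  -- dynApply succeeds only when the function's argument type matches
  -- the Dynamic's payload exactly; a polymorphic f has no single such
  -- monomorphic type, hence the question.
  example :: Maybe Int
  example = fromDynamic =<< dynApply (toDyn ((+ 1) :: Int -> Int))
                                     (toDyn (41 :: Int))
  -- example == Just 42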

submitted by andrb
Categories: Incoming News

Tracking down where an instance comes from

haskell-cafe - Sun, 10/26/2014 - 4:14am
Hi,

I'm in a situation where we have a 140+ module program (Yi) with some 30+ packages of dependencies, and somewhere along the line this little guy gets in:

  instance [safe] Show (a -> b) -- Defined in ‘Text.Show.Functions’

Of course this automatically infects everything that imports Yi. This is bad.

What would be the best way to track down through which import this actually comes in? There's nothing directly importing Text.Show.Functions, and I wonder if there's a better way than ‘remove imports one by one, going deeper each time you hit it’, which might take a while, especially as there seems to be no way to unload instances except restarting GHCi.
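A small sketch of why this "infects" everything (an editor's illustration; the actual import chain in Yi is unknown). Instances always travel with imports, even empty ones:

  import Text.Show.Functions ()  -- orphan: instance Show (a -> b)

  main :: IO ()
  main = print ((+ 1) :: Int -> Int)
  -- With the orphan in scope this prints "<function>"; without the
  -- import it is a compile error (no instance for Show (Int -> Int)).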
Categories: Offsite Discussion

Polymorphic functions over string libraries

haskell-cafe - Sun, 10/26/2014 - 2:36am
Hi. I was wondering what would be the best way to create a polymorphic function over any possible string library (Text, String, ByteString, etc). For instance, imagine I have to read a file with text, transform this text in some way and output it to another file, or to the console.

If I wanted to use String, I'd just do this:

  transform :: String -> String

  main = readFile "input.txt" >>= writeFile "output.txt" . transform

But if I wanted to use Text instead, I'd have to use this:

  import qualified Data.Text.IO as T

  transform :: Text -> Text

  main = T.readFile "input.txt" >>= T.writeFile "output.txt" . transform

Idem for ByteString. I was wondering if there was a way to create these computations in a generic way, for any kind of string library, something like this:

  class StringLibrary s where
    sReadFile :: FilePath -> IO s
    sWriteFile :: FilePath -> s -> IO ()
    ...

So then I'd just have this:

  transform :: StringLibrary s => s -> s

  main = sReadFile "input.txt" >>= sWriteFile "output.txt" . transfo
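A minimal sketch of how the class the poster describes could be filled in (assuming the text package; transform = id is a stand-in):

  {-# LANGUAGE FlexibleInstances #-}
  import qualified Data.Text as T
  import qualified Data.Text.IO as TIO

  class StringLibrary s where
    sReadFile  :: FilePath -> IO s
    sWriteFile :: FilePath -> s -> IO ()

  instance StringLibrary String where
    sReadFile  = readFile
    sWriteFile = writeFile

  instance StringLibrary T.Text where
    sReadFile  = TIO.readFile
    sWriteFile = TIO.writeFile

  transform :: StringLibrary s => s -> s
  transform = id   -- stand-in transformation

  -- The annotation picks the concrete string type once, at the edge.
  main :: IO ()
  main = (sReadFile "input.txt" :: IO T.Text)
           >>= sWriteFile "output.txt" . transform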
Categories: Offsite Discussion

Monads: external questions

haskell-cafe - Sun, 10/26/2014 - 12:23am
As opposed to the internal logic of monads, how they work, I hope to start a discussion about their external logic: how and why to use monads.

design
------

How do monads change the way one

  * thinks about a problem?
  * structures data?
  * refactors?
  * tests?

Should I always be giving the monads a lot of cognitive bandwidth, because they reorder the way everything should be, or is it an investment with a high initial cognitive cost but requiring little maintenance thereafter?

what is their common framework?
-------------------------------

Monads let data reach farther than it otherwise would. Subjectively, they feel like a controlled way of violating encapsulation. Are there other, deeper or more specific, commonalities that explain why monads are a good way to implement exceptions, output, state, and perhaps other services?

varieties
---------

In practice, are all monads either for exceptions, state or output? If not, what other goals are monads well suited to? How should multiple contexts coexist? What's
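On the "varieties" question, one counterexample worth keeping in mind (an editor's illustration): the list monad models nondeterministic choice, which is none of exceptions, state, or output.

  -- Each bind explores every alternative; the result collects all
  -- combinations, i.e. a small nondeterministic computation.
  pairs :: [(Int, Int)]
  pairs = do
    x <- [1, 2]
    y <- [10, 20]
    return (x, y)
  -- pairs == [(1,10),(1,20),(2,10),(2,20)]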
Categories: Offsite Discussion