It seems that every time a blog post about Stackage comes out, someone comments on Reddit how excited they are for Stackage to be ready for GHC 7.10. Unfortunately, there's still an open issue about packages that have incorrect bounds or won't compile.
Well, as about 30 different package authors probably figured out today, I decided to take a crack at compiling Stackage with 7.10. This involved changing around some bounds, sending some pull requests, setting some expected test failures, and unfortunately temporarily removing a few packages from the build. But the result is well worth it: we now have a working build of Stackage with GHC 7.10!
To give some more information: this snapshot has 1028 packages, compared to 1106 for GHC 7.8. When updating the build-constraints.yaml file, I added the phrase "GHC 7.10" next to every modification I made. I encourage people to take a look at the file and see if there are any projects you'd like to send a pull request to and add GHC 7.10 support. If you do so, please ping me once the change is on Hackage so I can add it back to Stackage.
The question is: what do we do now? I'm interested in other opinions, but my recommendation is:
- Early next week, I switch over the official nightly builds to start using GHC 7.10. LTS 2 will continue running, and will use GHC 7.8 as it does already. (LTS releases never change GHC major version.)
- We work on improving the package and test coverage for GHC 7.10.
- No earlier than July 1 do we release LTS 3, which will support GHC 7.10. If there is concern that the release isn't ready yet, we can hold off an extra month (though I doubt that will be necessary).
To ease the burden on package maintainers, LTS support cycles do not overlap. LTS 2 will be supported for a minimum of 3 months from its initial release (April 1, 2015), which is why LTS 3 will be released no sooner than July 1, 2015.
I'm quite surprised and excited that Stackage was able to move to GHC 7.10 so quickly. Thank you to all the package authors for updating your code so quickly!
- April 30th, 2015: "SHOW ME DA MONAY!" http://lpaste.net/3352992723589136384 for today's #haskell problem Simple? Sure! Solution? Yes. http://lpaste.net/7331259237240143872
- April 29th, 2015: We take stock of the Stochastic Oscillator http://lpaste.net/8447434917217828864 for today's #haskell problem #trading We are so partially stoched for a partial solution for the Stochastic Oscillator http://lpaste.net/4307607333212520448
- April 28th, 2015: Today's #haskell puzzle as a ken-ken solver http://lpaste.net/6211501623257071616 a solution (beyond my ... ken) is defined at http://lpaste.net/929006498481176576
- April 27th, 2015: Rainy days and Mondays do not stop the mail, nor today's #haskell problem! http://lpaste.net/6468251516921708544 The solution posted at http://lpaste.net/6973841984536444928 … shows us view-patterns and how to spell the word 'intercalate'.
- April 24th, 2015: Bidirectionally (map) yours! for today's #haskell problem http://lpaste.net/1645129197724631040 A solution to this problem is posted at http://lpaste.net/540860373977268224
- April 23rd, 2015: Today's #haskell problem looks impossible! http://lpaste.net/6861042906254278656 So this looks like this is a job for ... KIM POSSIBLE! YAY! @sheshanaag offers a solution at http://lpaste.net/131309 .
- April 22nd, 2015: "I need tea." #BritishProblems "I need clean data" #EveryonesPipeDream "Deletia" today's #haskell problem http://lpaste.net/2343021306984792064 Deletia solution? Solution deleted? Here ya go! http://lpaste.net/5973874852434542592
- April 21st, 2015: In which we learn about Tag-categories, and then Levenshtein distances between them http://lpaste.net/2118427670256549888 for today's #haskell problem Okay, wait: is it a group of categories or a category of groups? me confused! A solution to today's #haskell at http://lpaste.net/8855539857825464320
- April 20th, 2015: Today we can't see the forest for the trees, so let's change that http://lpaste.net/3949027037724803072 A solution to our first day in the tag-forest http://lpaste.net/4634897048192155648 ... make sure you're marking your trail with breadcrumbs!
- April 17th, 2015: No. Wait. You wanted line breaks with that, too? Well, why didn't you say so in the first place? http://lpaste.net/8638783844922687488 Have some curry with a line-breaky solution at http://lpaste.net/8752969226978852864
- April 16th, 2015: "more then." #okaythen Sry, not sry, but here's today's #haskell problem: http://lpaste.net/6680706931826360320 I can't even. lolz. rofl. lmao. whatevs. And a big-ole-blob-o-words is given as the solution http://lpaste.net/2810223588836114432 for today's #haskell problem. It ain't pretty, but... there it is
- April 15th, 2015: Poseidon's trident or Andrew's Pitchfork analysis, if you prefer, for today's #haskell problem http://lpaste.net/5072355173985157120
- April 14th, 2015: Refining the SMA-trend-ride http://lpaste.net/3856617311658049536 for today's #haskell problem. Trending and throttling doesn't ... quite get us there, but ... solution: http://lpaste.net/9223292936442085376
- April 13th, 2015: In today's #haskell problem we learn zombies are comonadic, and like eating SMA-brains. http://lpaste.net/8924989388807471104 Yeah. That. Hold the zombies, please! (Or: when $40k net profit is not enough by half!) http://lpaste.net/955577567060951040
- April 10th, 2015: Today's #haskell problem delivered with much GRAVITAS, boils down to: don't be a dumb@$$ when investing http://lpaste.net/5255378926062010368 #KeepinItReal The SMA-advisor is REALLY chatty, but how good is it? TBD, but here's a very simple advisor: http://lpaste.net/109712 Backtesting for this strategy is posted at http://lpaste.net/109687 (or: how a not so good buy/sell strategy give you not so good results!)
- April 9th, 2015: A bit of analysis of historical stock data http://lpaste.net/6960188425236381696 for today's #haskell problem A solution to the SMA-analyses part is posted at http://lpaste.net/3427480809555099648
- April 8th, 2015: MOAR! MOAR! You clamor for MOAR real-world #haskell problems, and how can I say no? http://lpaste.net/5198207211930648576 Downloading stock screens Hint: get the screens from a web service; look at, e.g.: https://code.google.com/p/yahoo-finance-managed/wiki/YahooFinanceAPIs A 'foldrM'-solution to this problem is posted at http://lpaste.net/2729747257602605056
- April 7th, 2015: Looking at a bit of real-world #haskell for today's stock (kinda-)screen-scraping problem at http://lpaste.net/5737110678548774912 Hint: perhaps you'd like to solve this problem using tagsoup? https://hackage.haskell.org/package/tagsoup *GASP* You mean ... it actually ... works? http://lpaste.net/1209131365107236864 A MonadWriter-y tagsoup-y Monoidial-MultiMap-y solution
- April 6th, 2015: What do three men teaching all of high school make, beside today's #haskell problem? http://lpaste.net/667230964799242240 Tired men, of course! Thanks, George Boole! Three Men and a High School, SOLVED! http://lpaste.net/7942804585247145984
- April 3rd, 2015: reverseR that list like a Viking! Rrrrr! for today's problem http://lpaste.net/8513906085948555264 … #haskell Totes cheated to get you the solution http://lpaste.net/1880031563417124864 used a library that I wrote, so, like, yeah, totes cheated! ;)
- April 2nd, 2015: We're breaking new ground for today's #haskell problem: let's reverse lists... relationally. And tag-type some values http://lpaste.net/389291192849793024 After several fits and starts @geophf learns how to reverse a list... relationally http://lpaste.net/7875722904095162368 and can count to the nr 5, as well
- April 1st, 2015: Take a drink of today's #haskell problem: love potion nr9 http://lpaste.net/435384893539614720 because, after all: all we need is love, la-di-dah-di-da! A solution can be found au shaque d'amour posted at http://lpaste.net/6859866252718899200
This really saddens me. I don't have much to say but that he is a great man and he has inspired me a lot. Let's hold him and his family in light.

submitted by enzozhc
I’ve just released the first version of a Haskell library for principled, cross-platform & extensible hashing of types, which includes an implementation of the FNV-1a algorithm. It is available on Hackage, and can be installed with:

```
cabal install hashabler
```
hashabler is a rewrite of the hashable library by Milan Straka and Johan Tibell, having the following goals:
- Extensibility; it should be easy to implement a new hashing algorithm on any Hashable type, for instance if one needed more hash bits
- Honest hashing of values, and principled hashing of algebraic data types (see e.g. #30)
- Cross-platform consistent hash values, with a versioning guarantee. Where possible we ensure morally identical data hashes to identical values regardless of processor word size and endianness.
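For context, the FNV-1a algorithm the library implements is tiny. Here is a sketch of the 32-bit variant over raw bytes (this is the textbook algorithm, not hashabler's actual implementation):

```haskell
import Data.Bits (xor)
import Data.List (foldl')
import Data.Word (Word32, Word8)

-- FNV-1a, 32-bit: start from the offset basis; for each byte,
-- XOR it in and multiply by the FNV prime (wrapping at 32 bits).
fnv1a32 :: [Word8] -> Word32
fnv1a32 = foldl' step 2166136261
  where
    step h b = (h `xor` fromIntegral b) * 16777619
```

Getting the consistent cross-platform hashes described above is then a matter of always feeding morally identical values to the hash as the same byte sequence.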
I started writing a fast concurrent bloom filter variant, but found none of the existing libraries fit my needs. In particular hashable was deficient in a number of ways:
- The number of hash bits my data structure requires can vary based on user parameters, and possibly be more than the 64 bits supported by hashable
- Users might like to serialize their bloom filter and store it, pass it to other machines, or work with it in a different language, so we need
- hash values that are consistent across platforms
- some guarantee of consistency across library versions
I was also very concerned about the general approach taken for algebraic types, which results in collisions, the use of “hashing” numeric values to themselves, dubious combining functions, etc. It wasn’t at all clear to me how to ensure my data structure wouldn’t be broken if I used hashable. See below for a very brief investigation into the hash goodness of the two libraries.
There isn’t interest in supporting my use case or addressing these issues in hashable (see e.g. #73, #30, and #74) and apparently hashable is working in practice for people, but maybe this new package will be useful for some other folks.

Hash goodness of hashable and hashabler, briefly
Hashing-based data structures assume some “goodness” of the underlying hash function, and may depend on the goodness of the hash function in ways that aren’t always clear or well-understood. “Goodness” also seems to be somewhat subjective, but can be expressed statistically in terms of bit-independence tests, and avalanche properties, etc.; various things that e.g. smhasher looks at.
I thought for fun I’d visualize some distributions, as that’s easier for my puny brain to understand than statistics. We visualize 32-bit hashes by quantizing by 64x64 and mapping that to a pixel following a hilbert curve to maintain locality of hash values. Then when multiple hash values fall within the same 64x64 pixel, we darken the pixel, and finally mark it red if we can’t go any further to indicate clipping.
It’s easy to cherry-pick inputs that will result in some bad behavior by hashable, but below I’ve tried to show some fairly realistic examples of strange or less-good distributions in hashable. I haven’t analysed these at all. Images are cropped ¼ size, but are representative of the whole 32-bit range.
First, here’s a hash of all [Ordering] of size 10 (~59K distinct values):
Next here’s the hash of one million (Word8,Word8,Word8) (having a domain ~ 16 mil):
I saw no difference when hashing English words, which is good news as that’s probably a very common use case.

Please help
If you could test the library on a big endian machine and let me know how it goes, that would be great. See here.
You can also check out the TODOs scattered throughout the code and send pull requests. I may not be able to get to them until June, but will be very grateful!

P.S. hire me
I’m always open to interesting work or just hearing about how companies are using haskell. Feel free to send me an email at firstname.lastname@example.org
Find the type error in the following Haskell expression:

```haskell
if null xs then tail xs else xs
```
You can’t, of course: this program is obviously nonsense unless you’re a typechecker. The trouble is that only certain computations make sense if the null xs test is True, whilst others make sense if it is False. However, as far as the type system is concerned, the type of the then branch is the type of the else branch is the type of the entire conditional. Statically, the test is irrelevant. Which is odd, because if the test really were irrelevant, we wouldn’t do it. Of course, tail  doesn’t go wrong – well-typed programs don’t go wrong – so we’d better pick a different word for the way they do go.
The above quote is an opening paragraph of Conor McBride’s “Epigram: Practical Programming with Dependent Types” paper. As always, Conor makes a good point – this test is completely irrelevant for the typechecker although it is very relevant at run time. Clearly the type system fails to accurately approximate runtime behaviour of our program. In this short post I will show how to fix this in Haskell using dependent types.
The problem is that the types used in this short program carry no information about the manipulated data. This is true both for the Bool returned by null xs, which contains no evidence of the result, as well as for lists, which store no information about their length. As some of you probably realize, the latter is easily fixed by using vectors, ie. length-indexed lists:

```haskell
data N = Z | S N -- natural numbers

data Vec a (n :: N) where
  Nil  :: Vec a Z
  Cons :: a -> Vec a n -> Vec a (S n)
```
The type of a vector encodes its length, which means that the type checker can now be aware whether it is dealing with an empty vector. Now let’s write null and tail functions that work on vectors:

```haskell
vecNull :: Vec a n -> Bool
vecNull Nil        = True
vecNull (Cons _ _) = False

vecTail :: Vec a (S n) -> Vec a n
vecTail (Cons _ tl) = tl
```
vecNull is nothing surprising – it returns True for an empty vector and False for a non-empty one. But the tail function for vectors differs from its implementation for lists. tail from Haskell’s standard prelude is not defined for an empty list, so calling it on one results in an exception (that would be the case in Conor’s example). But the type signature of vecTail requires that the input vector is non-empty. As a result we can rule out the Nil case. That also means that Conor’s example will no longer typecheck¹. But how can we write a correct version of this example, one that removes the first element of a vector only when it is non-empty? Here’s an attempt:

```haskell
shorten :: Vec a n -> Vec a m
shorten xs = case vecNull xs of
  True  -> xs
  False -> vecTail xs
```
That however won’t compile: now that we’ve written a type-safe tail function, the typechecker requires a proof that the vector passed to it as an argument is non-empty. The weak link in this code is the vecNull function. It tests whether a vector is empty but delivers no type-level proof of the result. In other words we need:

```haskell
vecNull' :: Vec a n -> IsNull n
```
ie. a function with a result type carrying the information about the length of the list. This data type will have a runtime representation isomorphic to Bool, ie. it will be an enumeration with two constructors, and the type index will correspond to the length of a vector:

```haskell
data IsNull (n :: N) where
  Null    :: IsNull Z
  NotNull :: IsNull (S n)
```
Null represents empty vectors, NotNull represents non-empty ones. We can now implement a version of vecNull that carries proof of the result at the type level:

```haskell
vecNull' :: Vec a n -> IsNull n
vecNull' Nil        = Null
vecNull' (Cons _ _) = NotNull
```
The type signature of vecNull' says that the return type must have the same index as the input vector. Pattern matching on the Nil case provides the type checker with the information that the n index of Vec is Z. This means that the return value in this case must be Null – the NotNull constructor is indexed with S and that obviously does not match Z. Similarly, in the Cons case the return value must be NotNull. However, replacing vecNull in the definition of shorten with our new vecNull' will again result in a type error. The problem comes from the type signature of shorten:

```haskell
shorten :: Vec a n -> Vec a m
```
By indexing input and output vectors with different length indices – n and m – we tell the typechecker that these are completely unrelated. But that is not true! Knowing the input length n, we know exactly what the result should be: if the input vector is empty the result vector is also empty; if the input vector is not empty it should be shortened by one. Since we need to express this at the type level we will use a type family:

```haskell
type family Pred (n :: N) :: N where
  Pred Z     = Z
  Pred (S n) = n
```
(In a fully-fledged dependently-typed language we would write a normal function and then apply it at the type level.) Now we can finally write:

```haskell
shorten :: Vec a n -> Vec a (Pred n)
shorten xs = case vecNull' xs of
  Null    -> xs
  NotNull -> vecTail xs
```
This definition should not go wrong. Trying to swap the expressions in the branches will result in a type error.
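For readers who want to play with this, the whole development compiles as a single file; this sketch collects the post's definitions (with vecNull' written with a prime) plus a toList helper for inspecting results at runtime:

```haskell
{-# LANGUAGE DataKinds, GADTs, KindSignatures, TypeFamilies #-}

data N = Z | S N

data Vec a (n :: N) where
  Nil  :: Vec a 'Z
  Cons :: a -> Vec a n -> Vec a ('S n)

data IsNull (n :: N) where
  Null    :: IsNull 'Z
  NotNull :: IsNull ('S n)

vecNull' :: Vec a n -> IsNull n
vecNull' Nil        = Null
vecNull' (Cons _ _) = NotNull

vecTail :: Vec a ('S n) -> Vec a n
vecTail (Cons _ tl) = tl

type family Pred (n :: N) :: N where
  Pred 'Z     = 'Z
  Pred ('S n) = n

shorten :: Vec a n -> Vec a (Pred n)
shorten xs = case vecNull' xs of
  Null    -> xs
  NotNull -> vecTail xs

-- Forget the length so we can inspect results at runtime.
toList :: Vec a n -> [a]
toList Nil         = []
toList (Cons x xs) = x : toList xs
```

Here shorten (Cons 1 (Cons 2 Nil)) evaluates to Cons 2 Nil, while shorten Nil is just Nil, and both facts are tracked in the types.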
¹ Assuming we don’t abuse Haskell’s unsoundness as a logic, eg. by using undefined.
What are some good examples of open-source Haskell services/daemons that have been set up to work with modern init systems like systemd/upstart? I am hoping to find a good role model for the service I am working on making. Particularly, I am interested in how it handles configuration files, reloading configuration files when getting a HUP signal, and error logging.
Ryan

submitted by ryantm
I don't want to bother the authors of DependentHaskell and that's why I'm asking here, hoping someone knows something.
I have much hope in seeing this project become part of GHC (many thanks to the developers working on it), so I'd like to know:
How is the work going?
Is it still in an early phase or is there already substantial work being done?
How much time before a possible release? (Just want to know if it is feasible in months, years, or more.)

submitted by game_coder
Hello folks. It’s been a busy month so I haven’t had much a chance to write but I think now’s a good time to talk about another compiler related subject: continuation passing style conversion.
When you’re compiling a functional language (in a sane way) your compiler mostly consists of phases which run over the AST and simplify it. For example, in a language with pattern matching, it’s almost certainly the case that we can write something like

```haskell
case x of
  (1, 2) -> True
  (_, _) -> False
```
Wonderfully concise code. However, it’s hard to compile nested patterns like that. In the compiler, we might simplify this to

```haskell
case x of
  (a, b) ->
    case a of
      1 -> case b of
             2 -> True
             _ -> False
      _ -> False
```
note to future me, write a pattern matching compiler
We’ve transformed our large nested pattern into a series of simpler, unnested patterns. The benefit here is that this maps more directly to a series of conditionals (or jumps).
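To see that the two forms agree, both can be written out and compared directly (matchNested and matchFlat are hypothetical names, just for illustration):

```haskell
-- The nested pattern, as a programmer would write it.
matchNested :: (Int, Int) -> Bool
matchNested (1, 2) = True
matchNested (_, _) = False

-- The flattened form a pattern-match compiler would produce.
matchFlat :: (Int, Int) -> Bool
matchFlat p = case p of
  (a, b) -> case a of
    1 -> case b of
           2 -> True
           _ -> False
    _ -> False
```

Both functions return True exactly on (1, 2), but the second one only ever matches one constructor at a time.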
Now one of the biggest decisions in any compiler is what to do with expressions. We want to get rid of complicated nested expressions because chances are our compilation target doesn’t support them. In my second to last post we transformed a functional language into something like SSA. In this post, we’re going to walk through a different intermediate representation: CPS.

What is CPS
CPS is a restriction of how a functional language works. In CPS we don’t have nested expressions anymore. We instead have a series of lets which telescope out, each binding a “flat” expression. This is the process of “removing expressions” from our language. A compiler is probably targeting something with a much weaker notion of expressions (like assembly), and so we change our tree-like structure into something more linear.
Additionally, no functions return. Instead, they take a continuation and when they’re about to return they pass their value to it instead. This means that conceptually, all functions are transformed from a -> b to (a, b -> void) -> void. Logically, this is actually a reasonable thing to do: it corresponds to mapping a proposition b to ¬ ¬ b. What’s cool here is that since each function call calls a continuation instead of returning its result, we can imagine each function just transferring control over to some other part of the program instead of returning. This leads to a very slick and efficient way of implementing CPSed function calls, as we’ll see.
This means we’d change something like

```haskell
fact n = if n == 0 then 1 else n * fact (n - 1)
```
into

```haskell
fact n k =
  if n == 0
    then k 1
    else let n' = n - 1
         in fact n' (\r -> let r' = n * r in k r')
```
To see what’s going on here we
- Added an extra argument to fact, its return continuation
- In the first branch, we pass the result to the continuation instead of returning it
- In the next branch, we lift the nested expression n - 1 into a flat let binding
- We add an extra argument to the recursive call, the continuation
- In this continuation, we multiply the result of the recursive call by n (note that we close over n here; this lambda is a real lambda)
- Finally, we pass the final result to the original continuation k.
The only tree-style-nesting here comes from the top if expression, everything else is completely linear.
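Since Haskell has first-class functions, the CPS'd fact above runs as-is; we just hand it an ordinary function as the continuation (the type signature is added here for clarity):

```haskell
-- CPS'd factorial: instead of returning, pass the result to k.
factCPS :: Int -> (Int -> r) -> r
factCPS n k =
  if n == 0
    then k 1
    else let n' = n - 1
         in factCPS n' (\r -> let r' = n * r in k r')
```

Running it with the identity continuation, factCPS 5 id, gives 120; note how every multiplication happens inside a continuation rather than after a return.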
Let’s formalize this process by converting Simply Typed Lambda Calculus (STLC) to CPS form.

STLC to CPS
First things first, we specify an AST for normal STLC.

```haskell
data Tp = Arr Tp Tp | Int deriving Show

data Op = Plus | Minus | Times | Divide

-- The Tp in App refers to the return type, it's helpful later
data Exp a = App (Exp a) (Exp a) Tp
           | Lam Tp (Scope () Exp a)
           | Num Int -- No need for binding here since we have Minus
           | IfZ (Exp a) (Exp a) (Exp a)
           | Binop Op (Exp a) (Exp a)
           | Var a
```
We’ve supplemented our lambda calculus with natural numbers and some binary operations because it makes things a bit more fun. Additionally, we’re using bound to deal with bindings for lambdas. This means there’s a terribly boring monad instance lying around that I won’t bore you with.
To convert to CPS, we first need to figure out how to convert our types. Since CPS functions never return we want them to go to Void, the unoccupied type. However, since our language doesn’t allow Void outside of continuations, and doesn’t allow functions that don’t go to Void, let’s bundle them up into one new type Cont a which is just a function from a -> Void. However, this presents us with a problem, how do we turn an Arr a b into this style of function? It seems like our function should take two arguments, a and b -> Void so that it can produce a Void of its own. However, this requires products since currying isn’t possible with the restriction that all functions return Void! Therefore, we supplement our CPS language with pairs and projections for them.
Now we can write the AST for CPS types and a conversion between Tp and it.

```haskell
data CTp = Cont CTp | CInt | CPair CTp CTp

cpsTp :: Tp -> CTp
cpsTp (Arr l r) = Cont $ CPair (cpsTp l) (Cont (cpsTp r))
cpsTp Int       = CInt
```
The only interesting thing here is how we translate function types, but we talked about that above. Now for expressions.
We want to define a new data type that encapsulates the restrictions of CPS. In order to do this we factor out our data types into “flat expressions” and “CPS expressions”. Flat expressions are things like values and variables while CPS expressions contain things like “Jump to this continuation” or “Branch on this flat expression”. Finally, there are let expressions to perform various operations on expressions.

```haskell
data LetBinder a = OpBind Op (FlatExp a) (FlatExp a)
                 | ProjL a
                 | ProjR a
                 | Pair (FlatExp a) (FlatExp a)

data FlatExp a = CNum Int | CVar a | CLam CTp a (CExp a)

data CExp a = Let a (LetBinder a) (CExp a)
            | CIf (FlatExp a) (CExp a) (CExp a)
            | Jump (FlatExp a) (FlatExp a)
            | Halt (FlatExp a)
```
Lets let us bind the results of a few “primitive operations” over values and variables to a fresh variable. This is where things like “incrementing a number” happen. Additionally, in order to create a pair or access its elements we need to use a Let.
Notice that here application is spelled Jump, hinting that it really is just a jmp and doesn’t deal with the stack in any way. Since they’re all jumps, we cannot overflow the stack as we would with a normal calling convention. To seal off the chain of function calls we have Halt: it takes a FlatExp and returns it as the result of the program.
Expressions here are also parameterized over variables but we can’t use bound with them (for reasons that deserve a blogpost-y rant :). Because of this we settle for just ensuring that each a is globally unique.
So now instead of having a bunch of nested Exps, we have flat expressions which compute exactly one thing and linearize the tree of expressions into a series of flat ones with let binders. It’s still not quite “linear” since both lambdas and if branches let us have something tree-like.
We can now define conversion to CPS with one major helper function:

```haskell
cps :: (Eq a, Enum a)
    => Exp a
    -> (FlatExp a -> Gen a (CExp a))
    -> Gen a (CExp a)
```
This takes an expression and a “continuation” and produces a CExp. We have some monad-gen stuff going on here because we need unique variables. The “continuation” is an actual Haskell function. So our function breaks an expression down to a FlatExp and then feeds it to the continuation.

```haskell
cps (Var a) c = c (CVar a)
cps (Num i) c = c (CNum i)
```
The first two cases are easy since variables and numbers are already flat expressions; they go straight into the continuation.

```haskell
cps (IfZ i t e) c = cps i $ \ic -> CIf ic <$> cps t c <*> cps e c
```
For IfZ we first recurse on the i. Then once we have a flattened computation representing i, we use CIf and recurse.

```haskell
cps (Binop op l r) c =
  cps l $ \fl ->
    cps r $ \fr ->
      gen >>= \out ->
        Let out (OpBind op fl fr) <$> c (CVar out)
```
Like before, we use cps to recurse on the left and right sides of the expression. This gives us two flat expressions which we use with OpBind to compute the result and bind it to out. Now that we have a variable for the result we just toss it to the continuation.

```haskell
cps (Lam tp body) c = do
  [pairArg, newCont, newArg] <- replicateM 3 gen
  let body' = instantiate1 (Var newArg) body
  cbody <- cps body' (return . Jump (CVar newCont))
  c (CLam (cpsTp tp) pairArg $
      Let newArg (ProjL pairArg) $
      Let newCont (ProjR pairArg) $
      cbody)
```
Converting a lambda is a little bit more work. It needs to take a pair, so a lot of the work is projecting out the left component (the argument) and the right component (the continuation). With those two things in hand we recurse in the body using the continuation supplied as an argument. The actual code makes this process look a little out of order. Remember that we only use cbody once we’ve bound the projections to newArg and newCont respectively.

```haskell
cps (App l r tp) c = do
  arg  <- gen
  cont <- CLam (cpsTp tp) arg <$> c (CVar arg)
  cps l $ \fl ->
    cps r $ \fr ->
      gen >>= \pair ->
        return $ Let pair (Pair fr cont) (Jump fl (CVar pair))
```
For application we just create a lambda for the current continuation. We then evaluate the left and right sides of the application using recursive calls. Now that we have a function to jump to, we create a pair of the argument and the continuation and bind it to a name. From there, we just jump to fl, the function. Turning the continuation into a lambda is a little strange; it’s also why we needed an annotation for App. The lambda uses the return type of the application and constructs a continuation that maps a to c a. Note that c a is a Haskell expression with the type CExp a.

```haskell
convert :: Exp Int -> CExp Int
convert = runGen . flip cps (return . Halt)
```
With this, we’ve written a nice little compiler pass to convert expressions into their CPS forms. By doing this we’ve “eliminated expressions”. Everything is now flat and evaluation basically proceeds by evaluating one small computation and using the result to compute another and another.
There are still some things left to compile out before this is machine code, though:
- Closures - these can be converted to explicitly pass records with closure conversion
- Hoist lambdas out of nested scope - this gets rid of anonymous functions, something we don’t have in C or assembly
- Make allocation explicit - Allocate a block of memory for a group of let statements and have them explicitly move the results of their computations to it
- Register allocation - Cleverly choose whether to store some particular variable in a register or load it in as needed.
Once we’ve done these steps we’ve basically written a compiler. However, they’re all made easier by the fact that we’ve compiled out expressions and (really) function calls with our conversion to CPS; it makes the process much, much simpler.

Wrap Up
CPS conversion is a nice alternative to something like STG machines for lazy languages or SSA for imperative ones. As far as I’m aware the main SML compiler (SML/NJ) compiles code in this way. As does Ur/Web, if I’m not crazy. Additionally, the course entitled “Higher Order, Typed Compilation” which is taught here at CMU uses CPS conversion to make compiling SML really quite pleasant.
In fact, someone (Andrew Appel?) once wrote a paper noting that SSA and CPS are actually the same. The key difference is that in SSA we merge multiple blocks together using the phi function. In CPS, we just let multiple source blocks jump to the same destination block (continuation). You can see this in our conversion of IfZ to CPS: instead of using phi to merge in the two branches, they both just use the continuation to jump to the remainder of the program. It makes things a little simpler because no pass needs to worry about merging values with explicit phi nodes.
Finally, if you’re compiling a language like Scheme with call/cc, using CPS conversion makes the whole thing completely trivial. All you do is define call/cc at the CPS level as

```
call/cc (f, c) = f ((λ (x, c') → c x), c)
```
So instead of using the continuation supplied to us in the expression we give to f, we use the one for the whole call/cc invocation! This causes us to not return into the body of f but instead to carry on the rest of the program as if f had returned whatever value x is. This is how my old Scheme compiler did things, I put figuring out how to implement call/cc off for a week before I realized it was a 10 minute job!
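A curried Haskell rendering of that one-liner makes the trick concrete (a sketch, with the pair (f, c) split into two arrows rather than the tuple the CPS IR would use):

```haskell
-- A CPS computation producing an `a` with final answer type `r`.
type Cont r a = (a -> r) -> r

-- call/cc: hand f a continuation that discards its own return
-- continuation c' and jumps straight to the outer continuation c.
callCC :: ((a -> Cont r b) -> Cont r a) -> Cont r a
callCC f c = f (\x _c' -> c x) c
```

With the identity as the top-level continuation, callCC (\exit -> exit 10) id evaluates to 10: the body "escapes" through exit instead of using its own continuation, exactly as described above.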
by Brent Yorgey on April 30, 2015
A little over a year ago, Christopher Mears wrote a nice article on how to match up mouse clicks in a GTK window with parts of a diagram. The only downside was that to make it work, you had to explicitly construct the diagram in such a way that its coordinate system precisely matched the coordinates of the window you wanted to use, so that there was essentially no "translation" to do. This was unfortunate, since constructing a diagram in a particular global coordinate system is not a very "diagrams-y" sort of thing to do. However, the 1.3 release of diagrams includes a new feature that makes matching up mouse clicks and diagrams much easier and more idiomatic, and I thought it would be worth updating Chris's original example to work more idiomatically in diagrams 1.3. The complete code is listed at the end.
First, here's how we construct the house. This is quite different from the way Chris did it; I have tried to make it more idiomatic by focusing on local relationships of constituent pieces, rather than putting everything at absolute global coordinates. We first create all the constituent pieces:

```haskell
-- The diagram to be drawn, with features tagged by strings.
prettyHouse :: QDiagram Cairo V2 Double [String]
prettyHouse = house
  where
    roof    = triangle 1 # scaleToY 0.75 # centerY # fc blue
    door    = rect 0.2 0.4 # fc red
    handle  = circle 0.02 # fc black
    wall    = square 1 # fc yellow
    chimney = fromOffsets [0 ^& 0.25, 0.1 ^& 0, 0 ^& (-0.4)]
            # closeTrail # strokeT # fc green
            # centerX
            # named "chimney"
    smoke = mconcat
      [ circle 0.05 # translate v
      | v <- [ zero, 0.05 ^& 0.15 ]
      ]
      # fc grey
```
We then put the pieces together, labelling each by its name with the value function. Diagrams can be valuated by any monoid; when two diagrams are combined, the value at each point will be the mappend of the values of the two component diagrams. In this case, each point in the final diagram will accumulate a list of Strings corresponding to the pieces of the house which are under that point. Note how we use combinators like vcat and mconcat, alignments like alignB, snugL and snugR, and a named subdiagram (the chimney) to position the components relative to each other. (You can click on any of the above function names to go to their documentation!)

>     house = vcat
>       [ mconcat
>         [ roof    # snugR # value ["roof"]
>         , chimney # snugL # value ["chimney"]
>         ]
>         # centerX
>       , mconcat
>         [ handle # translate (0.05 ^& 0.2) # value ["handle"]
>         , door   # alignB # value ["door"]
>         , wall   # alignB # value ["wall"]
>         ]
>       ]
>       # withName "chimney" (\chim ->
>           atop (smoke # moveTo (location chim) # translateY 0.4
>                       # value ["smoke"]
>                )
>         )
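The pointwise-mappend behaviour of query values can be mimicked in a few lines of plain Haskell. This is a hypothetical mini-model, not the diagrams API: a "diagram" is reduced to a function from points to a monoidal value, and overlaying diagrams mappends their values at every point, just as overlapping house parts accumulate a list of labels.

```haskell
-- Hypothetical mini-model of diagrams' query semantics.
type Query v = (Double, Double) -> v

-- A labelled axis-aligned rectangle: its query yields [name] inside,
-- mempty ([]) outside.
rectQ :: String -> (Double, Double) -> (Double, Double) -> Query [String]
rectQ name (x0, y0) (x1, y1) (x, y)
  | x0 <= x && x <= x1 && y0 <= y && y <= y1 = [name]
  | otherwise                                = []

-- Overlaying diagrams mappends their queries pointwise.
overlay :: Monoid v => [Query v] -> Query v
overlay qs p = mconcat (map ($ p) qs)

-- A point inside both the wall and the door collects both labels.
houseQ :: Query [String]
houseQ = overlay [rectQ "wall" (0,0) (1,1), rectQ "door" (0.4,0) (0.6,0.4)]
```

Here `houseQ (0.5, 0.2)` yields `["wall","door"]`, while a point outside everything yields the empty list; `sample` on a real QDiagram behaves analogously.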
Now, when we render the diagram to a GTK window, we can get diagrams to give us an affine transformation that mediates between the diagram's local coordinates and the GTK window's coordinates. I'll just highlight a few pieces of the code; the complete listing can be found at the end of the post. We first create an IORef to hold the transformation:

> gtk2DiaRef <- (newIORef mempty :: IO (IORef (T2 Double)))
We initialize it with the identity transformation. We use the renderDiaT function to get not only a rendering action but also the transformation from diagram to GTK coordinates; we save the inverse of the transformation in the IORef (since we will want to convert from GTK to diagram coordinates):

> let (dia2gtk, (_,r)) = renderDiaT Cairo
>       (CairoOptions "" (mkWidth 250) PNG False)
>       prettyHouse
>
> -- store the inverse of the diagram -> window coordinate transformation
> -- for later use in interpreting mouse clicks
> writeIORef gtk2DiaRef (inv dia2gtk)
(Note that if it is possible for the first motion notify event to happen before the expose event, then such mouse motions will be computed to correspond to the wrong part of the diagram, but who cares.) Now, when we receive a mouse click, we apply the stored transformation to convert to a point in diagram coordinates, and pass it to the sample function to extract a list of house components at that location.

> (x,y) <- eventCoordinates
>
> -- transform the mouse click back into diagram coordinates.
> gtk2Dia <- liftIO $ readIORef gtk2DiaRef
> let pt' = transform gtk2Dia (p2 (x,y))
>
> liftIO $ do
>   putStrLn $ show (x,y) ++ ": "
>            ++ intercalate " " (sample prettyHouse pt')
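The reason storing the inverse works can be seen with a toy affine map. This is an illustrative sketch with hypothetical names, not diagrams' T2 type: a diagram-to-window transformation built from a scale and a translation, whose inverse maps window (mouse) coordinates back to diagram coordinates.

```haskell
-- A toy affine transformation: scale by s, then translate by (tx, ty).
data Affine = Affine { scaleFactor :: Double, offset :: (Double, Double) }

applyA :: Affine -> (Double, Double) -> (Double, Double)
applyA (Affine s (tx, ty)) (x, y) = (s * x + tx, s * y + ty)

-- The inverse undoes the translation, then the scale.
invertA :: Affine -> Affine
invertA (Affine s (tx, ty)) = Affine (1 / s) (-tx / s, -ty / s)
```

For any point p, `applyA (invertA a) (applyA a p)` gives back p, which is exactly the round trip a mouse click makes: the renderer applies dia2gtk, and the event handler applies the stored inverse.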
The final product ends up looking and behaving identically to the video that Chris made.
Finally, here's the complete code. A lot of it is just boring standard GTK setup.

> import Control.Monad (void)
> import Control.Monad.IO.Class (liftIO)
> import Data.IORef
> import Data.List (intercalate)
> import Diagrams.Backend.Cairo
> import Diagrams.Backend.Cairo.Internal
> import Diagrams.Prelude
> import Graphics.UI.Gtk
>
> main :: IO ()
> main = do
>   -- Ordinary Gtk setup.
>   void initGUI
>   w  <- windowNew
>   da <- drawingAreaNew
>   w `containerAdd` da
>   void $ w `on` deleteEvent $ liftIO mainQuit >> return True
>
>   -- Make an IORef to hold the transformation from window to diagram
>   -- coordinates.
>   gtk2DiaRef <- (newIORef mempty :: IO (IORef (T2 Double)))
>
>   -- Render the diagram on the drawing area and save the transformation.
>   void $ da `on` exposeEvent $ liftIO $ do
>     dw <- widgetGetDrawWindow da
>
>     -- renderDiaT returns both a rendering result as well as the
>     -- transformation from diagram to output coordinates.
>     let (dia2gtk, (_,r)) = renderDiaT Cairo
>           (CairoOptions "" (mkWidth 250) PNG False)
>           prettyHouse
>
>     -- store the inverse of the diagram -> window coordinate transformation
>     -- for later use in interpreting mouse clicks
>     writeIORef gtk2DiaRef (inv dia2gtk)
>
>     renderWithDrawable dw r
>     return True
>
>   -- When the mouse moves, show the coordinates and the objects under
>   -- the pointer.
>   void $ da `on` motionNotifyEvent $ do
>     (x,y) <- eventCoordinates
>
>     -- transform the mouse click back into diagram coordinates.
>     gtk2Dia <- liftIO $ readIORef gtk2DiaRef
>     let pt' = transform gtk2Dia (p2 (x,y))
>
>     liftIO $ do
>       putStrLn $ show (x,y) ++ ": "
>                ++ intercalate " " (sample prettyHouse pt')
>     return True
>
>   -- Run the Gtk main loop.
>   da `widgetAddEvents` [PointerMotionMask]
>   widgetShowAll w
>   mainGUI
>
> -- The diagram to be drawn, with features tagged by strings.
> prettyHouse :: QDiagram Cairo V2 Double [String]
> prettyHouse = house
>   where
>     roof    = triangle 1 # scaleToY 0.75 # centerY # fc blue
>     door    = rect 0.2 0.4 # fc red
>     handle  = circle 0.02 # fc black
>     wall    = square 1 # fc yellow
>     chimney = fromOffsets [0 ^& 0.25, 0.1 ^& 0, 0 ^& (-0.4)]
>             # closeTrail # strokeT # fc green
>             # centerX
>             # named "chimney"
>     smoke = mconcat
>       [ circle 0.05 # translate v
>       | v <- [ zero, 0.05 ^& 0.15 ]
>       ]
>       # fc grey
>     house = vcat
>       [ mconcat
>         [ roof    # snugR # value ["roof"]
>         , chimney # snugL # value ["chimney"]
>         ]
>         # centerX
>       , mconcat
>         [ handle # translate (0.05 ^& 0.2) # value ["handle"]
>         , door   # alignB # value ["door"]
>         , wall   # alignB # value ["wall"]
>         ]
>       ]
>       # withName "chimney" (\chim ->
>           atop (smoke # moveTo (location chim) # translateY 0.4
>                       # value ["smoke"]
>                )
>         )
Sky Welch's Fractal Maps updates Alasdair Corbett's Mandelbrot Maps. It renders faster, and displays the two curves side-by-side (rather than displaying one large and one tiny). Mandelbrot Maps is the most popular Mandelbrot app in Google Play, with over 10,000 downloads. I expect Fractal Maps to catch up soon. Try it today! (Disclaimer/boast: both Sky and Alasdair produced their software as part of UG4 projects under my supervision.)
This is my first utility written in Haskell, and really the first stateful Haskell program I have written. I mainly made this for myself since I run multiple computers with different window managers and wanted something better than messing about with rc files.
Writing this in Haskell was quite enjoyable, especially since this is the first time I have worked with a monad stack (using reader, journal, and state). The biggest issue I had, and by far the biggest timesink, was dealing with IO exceptions. Much of the code is still messy and deserves a refactor, but I'd love any comments on what I have and what I can do better.
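One common pattern for taming IO exceptions is to catch them at the boundary with try and convert them into ordinary values. This is a generic sketch, not code from the utility above; readConfig and the fallback are hypothetical names.

```haskell
import Control.Exception (IOException, evaluate, try)

-- Read a config file, falling back to a default if anything goes wrong.
-- The contents are forced inside try so that lazy-IO read errors are
-- caught here rather than escaping later.
readConfig :: FilePath -> IO String
readConfig path = do
  r <- try (readFile path >>= evaluate . forceStr)
       :: IO (Either IOException String)
  pure (either (const defaultConfig) id r)
  where
    forceStr s    = length s `seq` s
    defaultConfig = ""
```

Annotating the result of try pins down which exception type is caught; without the annotation, GHC cannot infer whether you meant IOException, SomeException, or something else.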
As far as the program itself goes, all the core functionality is present. I'm testing it on three separate systems and haven't encountered any major bugs yet. Some useful features, like service deletion and an interactive mode, are currently missing, but I'll do a major refactor/cleanup before adding more.

submitted by Watley