News aggregator

Recursion Schemes

Haskell on Reddit - Mon, 05/11/2015 - 6:00am
Categories: Incoming News

Is this functional reactive programming

haskell-cafe - Mon, 05/11/2015 - 1:36am
What I want to be able to do is something like this:

    do x <- newSTRef 2
       y <- newSTRef 3
       z <- letSTRef (x + y)
       r1 <- readSTRef z
       writeSTRef x 5
       r2 <- readSTRef z
       return (r1, r2)

This should return (6,15). The "letSTRef" is what's new: the value it returns can change when the parts that make up its defining expression change. I understand the syntax above isn't going to work as written (I'd have to use Applicative at least, I'd imagine), but my main question is: does something like this already exist? Is it functional reactive programming, or is it something else? I don't want to reinvent the wheel if this kind of idea is already implemented and I just haven't recognised it.
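
One way to get this behaviour without reaching for a full FRP library is to make the derived ref store a computation instead of a value, so every read re-runs it against the current contents of its inputs. The sketch below is only illustrative: DerivedRef, letSTRef', and readDerived are made-up names, not an existing API.

    import Control.Monad.ST
    import Data.STRef

    -- A "derived ref" holds the computation that reads its inputs,
    -- so each read recomputes from their current values.
    newtype DerivedRef s a = DerivedRef (ST s a)

    letSTRef' :: ST s a -> ST s (DerivedRef s a)
    letSTRef' = pure . DerivedRef

    readDerived :: DerivedRef s a -> ST s a
    readDerived (DerivedRef act) = act

    example :: (Int, Int)
    example = runST $ do
      x <- newSTRef 2
      y <- newSTRef 3
      z <- letSTRef' ((+) <$> readSTRef x <*> readSTRef y)
      r1 <- readDerived z
      writeSTRef x 5
      r2 <- readDerived z
      return (r1, r2)  -- (5,8) with (+); the (6,15) in the post suggests (*) was intended
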
Categories: Offsite Discussion

FLOPS 2016: promoting cross-fertilization across the whole of declarative programming, in both theory and practice

Lambda the Ultimate - Mon, 05/11/2015 - 1:16am
LtU is generally not an appropriate venue for posting calls for papers, but there have been exceptions when the CFP has exceptionally wide appeal. Hopefully FLOPS 2016 might qualify.
http://www.info.kochi-tech.ac.jp/FLOPS2016/

FLOPS has been established to promote cooperation between logic and functional programmers, hence the name. This year we have taken the name exceptionally seriously, to cover the whole extent of declarative programming, which also includes program transformation, rewriting, and extracting programs from proofs of their correctness. There is another strong emphasis on cross-fertilization among people developing theory, people writing tools and language systems using that theory, and the users of these tools. We specifically ask the authors to make their papers understandable to the wide audience of declarative programmers and researchers.

As you can see from the Program Committee list, the members have done first-rate theoretical work and are also known for their languages, tools and libraries. The PC will appreciate good practical work. Incidentally, there is a special category, "System Descriptions", that FLOPS has always been known for. We would really like to see more submissions in that category.

One can see even on LtU that there is some rift between theoreticians and practitioners: Sean McDirmid's messages come to mind. He does have many good points. We really hope that FLOPS will help repair this rift.

Categories: Offsite Discussion

Simple import-counter to help understand codebases

Haskell on Reddit - Sun, 05/10/2015 - 10:03pm

https://gist.github.com/mitchellwrosen/049352bd23357be322d7

Whenever I'm reading over a codebase, I like to start with the modules that have the fewest in-project dependencies, so I wrote this simple script. Example usage:

    ./explore ~/auto/src
    0 Control.Auto.Blip.Internal
    1 Control.Auto.Blip
    1 Control.Auto.Interval
    1 Control.Auto.Serialize
    2 Control.Auto.Generate
    2 Control.Auto.Process
    2 Control.Auto.Run
    5 Control.Auto.Effects
    5 Control.Auto.Switch
    9 Control.Auto.Time
    11 Control.Auto.Process.Random
    37 Control.Auto

Here, I'd start with Control.Auto.Blip and work my way up the abstraction hierarchy.
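
For anyone curious how little code such a tool needs, here is a rough sketch of the same idea; it is not the linked gist, and the crude textual scan of module/import lines is an assumption about how one might do it. It recursively collects the project's .hs files, extracts module names and imports, and counts how many imports point back into the project:

    import System.Directory (doesDirectoryExist, getDirectoryContents)
    import System.Environment (getArgs)
    import System.FilePath ((</>), takeExtension)
    import Data.List (sortOn)
    import qualified Data.Set as Set

    -- Recursively collect the .hs files under a directory.
    hsFiles :: FilePath -> IO [FilePath]
    hsFiles dir = do
      entries <- fmap (filter (`notElem` [".", ".."])) (getDirectoryContents dir)
      fmap concat (mapM classify entries)
      where
        classify entry = do
          let path = dir </> entry
          isDir <- doesDirectoryExist path
          if isDir then hsFiles path
                   else pure [path | takeExtension path == ".hs"]

    -- Crude textual extraction of the module header and import targets.
    moduleName :: String -> Maybe String
    moduleName src =
      case [rest | l <- lines src, ("module":rest) <- [words l]] of
        ((name:_):_) -> Just name
        _            -> Nothing

    importedModules :: String -> [String]
    importedModules src =
      [ m | l <- lines src
          , ("import":rest) <- [words l]
          , (m:_) <- [filter (/= "qualified") rest] ]

    main :: IO ()
    main = do
      [root] <- getArgs                          -- e.g. ./explore ~/auto/src
      files  <- hsFiles root
      srcs   <- mapM readFile files
      let mods    = [ (m, importedModules s) | s <- srcs, Just m <- [moduleName s] ]
          known   = Set.fromList (map fst mods)  -- modules defined in the project
          counted = [ (length (filter (`Set.member` known) is), m) | (m, is) <- mods ]
      mapM_ (\(n, m) -> putStrLn (show n ++ " " ++ m)) (sortOn fst counted)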

Hopefully this is useful to someone else! And if you can make this work for mutually recursive modules... I love you.

submitted by MitchellSalad
Categories: Incoming News

First Call for Papers for IFL 2015

haskell-cafe - Sun, 05/10/2015 - 9:02pm
Hello,

Please find below the first call for papers for IFL 2015. Please forward it to anyone you think may be interested. Apologies for any duplicates you may receive.

Best regards,
Jurriaan Hage
Publicity Chair of IFL

---

IFL 2015 - Call for Papers

27th SYMPOSIUM ON IMPLEMENTATION AND APPLICATION OF FUNCTIONAL LANGUAGES - IFL 2015
University of Koblenz-Landau, Koblenz, Germany
In cooperation with ACM SIGPLAN
September 14-16, 2015
http://ifl2015.wikidot.com/

Scope

The goal of the IFL symposia is to bring together researchers actively engaged in the implementation and application of functional and function-based programming languages. IFL 2015 will be a venue for researchers to present and discuss new ideas and concepts, work in progress, and publication-ripe results related to the implementation and application of functional languages and function-based programming.

Peer-review

Following the IFL tradition, IFL 2015 will use a post-symposium review process to produce the formal proceedings. A
Categories: Offsite Discussion

FP Complete: Secure package distribution: ready to roll

Planet Haskell - Sun, 05/10/2015 - 6:00pm

We're happy to announce that all users of Haskell packages can now securely download packages. As a tl;dr, here are the changes you need to make:

  1. Add the relevant GPG key by following the instructions
  2. Install stackage-update and stackage-install: cabal update && cabal install stackage
  3. From now on, replace usage of cabal update with stk update --verify --hashes
  4. From now on, replace usage of cabal install ... with stk install ...

This takes advantage of the all-cabal-hashes repository, which contains cabal files modified to include package hashes and sizes. The way we generate all-cabal-hashes is interesting in its own right, but I won't shoehorn that discussion into this blog post; a separate post describing our lightweight architecture for it is coming soon.

Note that this is an implementation of Mathieu's secure distribution proposal, with some details modified to work with the current state of our tooling (i.e., lack of package hash information from Hackage).

How it works

The all-cabal-hashes repository contains all of the cabal files Hackage knows about. These cabal files are tweaked to have a few extra metadata fields, including cryptographic hashes of the package tarball and the size of the package, in bytes. (It also contains the same data in a JSON file, which is what we currently use due to cabal issue #2585.) There is also a tag on the repo, current-hackage, which always points at the latest commit and is GPG signed. (If you're wondering, we use a tag instead of just commit signing since it's easier to verify a tag signature.)
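
As a rough illustration of how a consumer might represent that metadata (the field names below are hypothetical, not the actual all-cabal-hashes schema), a download tool could parse the per-package JSON into a record carrying the expected digests and size:

    {-# LANGUAGE OverloadedStrings #-}
    import Data.Aeson (FromJSON (..), withObject, (.:))
    import qualified Data.Map.Strict as Map
    import qualified Data.Text as T

    -- Hypothetical field names; the real schema is whatever the
    -- all-cabal-hashes repository publishes.
    data PackageDownload = PackageDownload
      { pdHashes :: Map.Map T.Text T.Text  -- e.g. "SHA512" -> hex digest
      , pdSize   :: Int                    -- tarball size in bytes
      } deriving Show

    instance FromJSON PackageDownload where
      parseJSON = withObject "PackageDownload" $ \o ->
        PackageDownload <$> o .: "package-hashes"
                        <*> o .: "package-size"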

When you run stk update --verify --hashes, it fetches the latest content from that repository, verifies the GPG signature, generates a 00-index.tar file, and places it in the same location that cabal update would place it. At this point, you have a verified package index on your local machine, which contains cryptographic hashes and sizes for each package tarball.
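
Under the hood these are ordinary Git and GPG operations. A rough sketch of the verification step (made-up function name; not stackage-update's actual code) could shell out like this:

    import System.Process (callProcess)

    -- Fetch the index repository and verify the GPG signature on the
    -- current-hackage tag. `git tag -v` exits non-zero on a bad or missing
    -- signature, which makes callProcess throw and aborts the update.
    verifyIndexRepo :: FilePath -> IO ()
    verifyIndexRepo repoDir = do
      callProcess "git" ["-C", repoDir, "fetch", "--tags", "origin"]
      callProcess "git" ["-C", repoDir, "tag", "-v", "current-hackage"]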

Now, when you run stk install ..., the stackage-install tool handles all downloads for you (subject to some caveats, like cabal issue #2566). stackage-install will look up all of the hashes and sizes that are present in your package index, and verify them during download. In particular:

  • If the server tries to send more data than expected, the download stops immediately and an exception is thrown.
  • If the server sends less data than expected, an exception is thrown.
  • If the hash does not match what was expected, an exception is thrown.

Only when the hash and size match does the file get written. In this way, tarballs are only made available to the rest of your build tools after they have been verified.
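
As a simplified illustration of that check (the real tool verifies incrementally while the download streams; this sketch, with made-up names, checks a fully received body using the cryptonite package):

    import qualified Data.ByteString.Lazy as BL
    import Control.Exception (Exception, throwIO)
    import Crypto.Hash (Digest, SHA512, hashlazy)

    data VerifyError
      = SizeMismatch Integer Integer  -- expected, actual
      | HashMismatch String String    -- expected, actual
      deriving Show

    instance Exception VerifyError

    -- Write the tarball to disk only if both the size and the SHA512
    -- digest match what the verified package index promised.
    verifyAndWrite :: FilePath -> Integer -> String -> BL.ByteString -> IO ()
    verifyAndWrite dest expectedSize expectedHash body
      | actualSize /= expectedSize = throwIO (SizeMismatch expectedSize actualSize)
      | actualHash /= expectedHash = throwIO (HashMismatch expectedHash actualHash)
      | otherwise                  = BL.writeFile dest body
      where
        actualSize = fromIntegral (BL.length body)
        actualHash = show (hashlazy body :: Digest SHA512)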

What about Windows?

In mailing list discussions, some people were concerned about supporting Windows, in particular that Git and GPG may be difficult to install and configure on Windows. But as I shared on Google+ last week, MinGHC will now be shipping with both of those tools. I've tested things myself on Windows with the new versions of MinGHC, stackage-update, and stackage-install, and the instructions above worked without a hitch.

Of course, if others discover problems (either on Windows or elsewhere), please report them so they can be fixed.

Speed and reliability

In addition to the security benefits of this tool chain, there are also two other obvious benefits. By downloading the package index updates via Git, we are able to download only the differences since the last time we downloaded. This leads to less bandwidth usage and a quicker download.

This toolchain also replaces connections to Hackage with two high-reliability services: Amazon S3 (which holds the package contents) and Github. Using off-the-shelf, widely used services in place of hosting everything ourselves reduces our community burden and increases our ecosystem's reliability.

Caveats

There are unfortunately still some caveats with this.

  • The biggest hole in the fence is that we have no way of securing distribution of packages from Hackage itself. While all-cabal-hashes downloads the package index from Hackage via HTTPS (avoiding MITM attacks), there are still other attack vectors to be concerned about (such as breaching the Hackage server itself). The improved Hackage security page documents many of these concerns. Ideally, Hackage would be modified to perform package index signing itself.
  • Due to cabal issue #2566, it's still possible that cabal-install may download packages for you instead of stackage-install, though these situations should be rare. Hopefully integrating this download code directly with a build tool will eliminate that weakness.
  • There is still no verification of package author signatures, so that if someone's Hackage credentials are compromised (which is unfortunately very probable), a corrupted package could be present. This is something Chris Done and Tim Dysinger are working on. We're looking for others in the community to work with us on pushing forward on this. If you're interested, please contact us.

Using preexisting tools

What's great about this toolchain is how shallow it is. All of the heavy lifting is handled by Git, GPG, Amazon S3, Github, and (as you'll see in a later blog post) Travis CI. We mostly just wrap around these high quality tools and services. Not only was this a practical decision (reduce development time and code burden), but also a security decision. Instead of creating a Haskell-only security and distribution framework, we're reusing the same components that are being tried and tested on a daily basis by the greater software community. While this doesn't guarantee the tooling we use is bug free, it does mean that the "many eyeballs" principle applies.

Using preexisting tools also means that we open up the possibility of use cases never before considered. For example, someone contacted me (anonymity preserved) about a use case where he wanted to be able to identify which version of Hackage was being used. Until now, such a concept didn't exist. With a Git-based package index, the Hackage version can be identified by its commit.

I'm sure others will come up with new and innovative tricks to pull off, and I look forward to hearing about them.

Categories: Offsite Blogs

(Beginner) Integrating Persistent and Scotty, parts II and III

Haskell on Reddit - Sun, 05/10/2015 - 3:07pm

Last week I wrote a post on integrating Persistent and Scotty, and received a lot of great feedback. I've been working on integrating the feedback since, and figured I'd post my findings:

Part II: The Yak Shavening

In this episode, I discovered why it's critical to keep code compartmentalized. There was some language pragma required for Persistent that conflicted with the scottyT function, and the result was a type error I couldn't understand (and that seemed unGoogleable). I'm planning on posting a complete minimal reproduction to get more insight.

Part III: The It Worksening

And in this one, I managed to wire up the Reader monad stack, building up my understanding of how these transformer things work.

The Github repository that hosts the code is here: https://github.com/parsonsmatt/scotty-persistent-example

I'd love any suggestions, feedback, etc. I personally found that writing out my learning experience was really helpful for solidifying what I was learning, and I hope it can be useful for others.

submitted by ephrion
Categories: Incoming News

[ANN] servant 0.4.0 released (+new website)

haskell-cafe - Sun, 05/10/2015 - 1:38pm
Hello everyone,

We're happy to announce the releases of:

- servant 0.4.0
- servant-server 0.4.0
- servant-client 0.4.0
- servant-jquery 0.4.0
- servant-docs 0.4.0
- servant-blaze 0.4.0
- servant-lucid 0.4.0

to Hackage, as well as a new website (same URL as before: http://haskell-servant.github.io/), which features a tutorial that's much more informative than the getting-started guide we had for the previous version. The tutorial is available at http://haskell-servant.github.io/tutorial

The highlights for this release are:

- Multiple content-type support (with servant-blaze and servant-lucid offering a way to encode data for the HTML content type)
- Handlers in monads other than `EitherT`
- Response headers support
- Safe links to endpoints
- Saner types for aborting early in request handlers (the `Left` branch in `EitherT`)

For more details about the new features, please read the release post: http://haskell-servant.github.io/posts/2015-05-10-servant-0.4-released.html

Cheers
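
For readers who haven't seen the new style, a small API type shows what the multiple-content-type support looks like at the type level. The routes and payload types below are made up for illustration, not taken from the release:

    {-# LANGUAGE DataKinds, TypeOperators #-}
    module ExampleApi where

    import Servant.API

    -- Each endpoint now lists the content types it can be rendered as;
    -- servant-blaze and servant-lucid supply the HTML side for payload
    -- types with the appropriate instances.
    type ExampleAPI =
           "greet"   :> Capture "name" String :> Get '[JSON, PlainText] String
      :<|> "version" :> Get '[JSON] Int
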
Categories: Offsite Discussion

servant 0.4 released

Haskell on Reddit - Sun, 05/10/2015 - 6:39am
Categories: Incoming News

FP Complete: Guest post: Haskell at Front Row

Planet Haskell - Sat, 05/09/2015 - 12:00pm

Alexandr Kurilin from Front Row Education recently wrote an article about their usage of Haskell for the Commercial Haskell Special Interest Group. I asked his permission to post that article to our blog as well.

The mission

Front Row Education was founded to change the way math education is done in a modern day classroom. In the web universe we have all sorts of great tools for tracking, analyzing and incentivising user behavior: complex analytics, rich data visualizations, a/b testing, studying usage patterns over time, cohort analysis, gamification etc. We figured: instead of using the above to have granny click on more ads, let's make these powerful techniques available to teachers, parents and school administrators to make math education more engaging and effective.

Front Row allows schools to track student progress over time, identify areas of struggle, and learn how to address them, all while encouraging more quality practice. Learning math this way becomes an interactive and compelling experience, providing immediate feedback and adjusting content with every answer. As students practice, they generate rich data that school staff uses to continuously course-correct and fill in the gaps.

Numerous experiments from past years show that making Front Row a regular part of a math classroom leads to improved conceptual understanding, a lower rate of students falling behind, and improved scores on state tests. As of today Front Row helps over a million students in their regular math practice, and has been used in over 30% of US K-8 schools.

Our journey to Haskell

As of today Front Row uses Haskell for anything that needs to run on a server and is more complex than a 20-line Ruby script. This includes most web services, cron-driven mailers, command-line support tools, applications for processing and validating content created by our teachers, and more. We've been using Haskell actively in production since 2014.

At the time of the switch we were already familiar with the functional programming world. The central piece of the Front Row system is the JSON API used by both the student and teacher web experiences. I wrote the first version of the API in 2013 in Clojure on top of the Ring/Compojure micro-framework. At the time I didn't have plans for the API to grow to serve the kind of size and traffic we see today: it was mostly a way for me to really dive into functional programming and to understand the design challenges that other popular frameworks have had to confront.

Building your own framework is a fantastic learning experience, but it is also a significant commitment: without investing a ton of time and effort into the framework, you'll end up with something very bare-bones that is hard to turn into a production-quality, fully-featured application. It takes innumerable iterations to make a framework extensible, modular and well maintained, and that is hard with a team of 1-3 developers busy with the dozens of other tasks that a fast-moving startup demands.

Clojure at the time didn't offer any alternatives as far as web frameworks were concerned, and we were already starting to see the critical weakness inherent in building large modular systems in dynamically typed languages: refactoring is a serious pain and something you will avoid at all costs, because it's hard to ensure you're not breaking anything. It's not that bad if you have ONE codebase that doesn't have dependencies, but once you get into two digits you're in for a bad time.

Switching to Haskell and the Yesod framework seemed like a natural step forward: a strongly typed, purely functional, highly expressive language that would finally allow refactoring and moving fast to be painless. On top of it, a beautifully designed, extensible web framework with years of polish, one of the best high-performance web servers in the industry, extreme attention to type safety, and an all-star team of OSS contributors supporting it.

Moving from Clojure to Haskell didn't feel like a massive jump: a lot of concepts translate pretty closely, although Haskell offers a much richer vocabulary than just maps and vecs. Monads, type classes, IO etc. eventually clicked, and it was smooth sailing after that.

Advantages of using Haskell

Where does Haskell fit into all of this, you ask? As the development team of a small, early-stage edtech startup, we have two main goals:

  1. Iterate as fast as possible on new educational concepts, business model experiments and user feedback. Basically, crank out as much code as possible while keeping the quality bar very high.
  2. Stretch our runway; be conservative with our very limited resources.

Haskell fits in pretty well with both of these goals.

Static typing

First of all, static typing is essential when it comes to keeping the system always in a working state. Coming from a dynamically typed universe, it's surprising how much time you can save on writing unit tests, because you are getting more certainty from the compiler: no more null exceptions, no type mismatches in function calls, no more forgetting to deal with the empty-list case, etc. A whole class of pesky, incredibly common and banal bugs is eliminated from your work: you now have more bandwidth to implement user stories instead of obsessing over whether your application will blow up due to a sloppy oversight.
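
A tiny example of the kind of thing the compiler takes off your plate: encode "there may be no result" in the type, and every caller is forced to handle the empty case before the code will even build.

    -- Making the empty-list case explicit in the type; callers cannot
    -- forget to handle Nothing without the compiler complaining.
    safeHead :: [a] -> Maybe a
    safeHead []      = Nothing
    safeHead (x : _) = Just x

    describeFirst :: [String] -> String
    describeFirst names =
      case safeHead names of
        Nothing   -> "no students yet"
        Just name -> "first student: " ++ name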

I still remember one of my biggest Haskell/Yesod "aha" moments: not only does Yesod make sure that routes in your HTML are type-safe, but even image files linked in tags are verified to exist on disk by the compiler. No .jpg, no build, it's that simple. It's a level of guarantee that dramatically increases your confidence in the code at barely any cost.
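
Roughly, that guarantee comes from the Yesod.Static helpers: staticFiles walks the static directory at compile time and generates one identifier per file, so templates can only refer to files that actually exist. A minimal sketch (the file name is invented):

    {-# LANGUAGE TemplateHaskell #-}
    module StaticRefs where

    import Yesod.Static (staticFiles)

    -- At compile time this generates identifiers such as img_logo_png for
    -- each file found under static/; referring to a file that is not on
    -- disk becomes a compile error rather than a broken link in production.
    staticFiles "static"

    -- A Hamlet template would then contain something like
    --   <img src=@{StaticR img_logo_png}>
    -- and deleting static/img/logo.png stops the whole site from building.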

Modularity

Modularity is another big one. We have a central module at the bottom of every one of our web applications, APIs, tools and cron binaries. This module wraps the database entities and the SQL logic necessary to access them. It also provides a lot of common shared functionality that should not be implemented more than once. Since the schema changes very aggressively, we need a way to make sure our applications are updated ASAP; we can't wait for things to blow up in production. Updating our entity definitions in that one module prevents every application built on top of it from compiling again until the change is dealt with.

No more API call mismatches, no more using an old schema, no more apps running against an old deprecated version that can lead to breaking the db state. As many others have stated, Haskell is the first language out there that feels like it manages to achieve true modularity: purity and defining what context a function is allowed to run in ensure that a library call can lead to no surprises. Testing side-effect free functions is much simpler than continuously dealing with system state.
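
Concretely, the shared module is little more than a Persistent entity block that every service imports. The entities below are invented for illustration, but the mechanism is the standard Database.Persist.TH one:

    {-# LANGUAGE EmptyDataDecls, FlexibleContexts, GADTs,
                 GeneralizedNewtypeDeriving, MultiParamTypeClasses,
                 OverloadedStrings, QuasiQuotes, TemplateHaskell,
                 TypeFamilies #-}
    module Models where

    import Database.Persist.TH

    -- Rename or retype a field here and every application importing this
    -- module stops compiling until it has been updated to match.
    share [mkPersist sqlSettings, mkMigrate "migrateAll"] [persistLowerCase|
    Student
        name       String
        gradeLevel Int
        deriving Show
    Answer
        student    StudentId
        correct    Bool
        deriving Show
    |]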

Efficiency

Regarding the second point, why would Haskell stretch your runway? Simple. You're writing fewer bugs, you're reusing more code, new developers are causing less damage, and you have more room to deal with technical debt before it bites you. Purity and static types allow a team to aggressively refactor the codebase without having to worry that they might have forgotten to update something: a combination of a light layer of spec-style tests and a very picky compiler provides most of what you need to make refactoring a non-issue. More refactoring = more long-term productivity, higher team morale, more pride in one's work. Doing the same with Ruby is about as fun as pulling teeth.

All of the above adds up to needing fewer developers, as less time is spent on maintenance, which ultimately equals a higher chance of your company getting somewhere thanks to the more frequent iterations. The more stuff you try, the more likely you are to find or expand that business mechanic that will carry your business forward.

Trouble in paradise

This is not to say that everything is perfect, though; there's still plenty of room for improvement in the ecosystem.

Building

Build times, especially once the whole constellation of Yesod and Persistent packages is brought into the mix, are not insignificant. It still takes a good 5-10 minutes to build our larger web application on our beefiest machines. There are optimizations in this space which we haven't adopted yet, such as caching already-built object files to avoid re-compiling them every time, so I'm confident this will be a non-issue in the near future, but it's still worth being aware of. GHC works hard; you need to provide it with enough juice or time to let it do its job.

Testing

The testing frameworks out there are still fairly spartan from the developer experience standpoint. If you test Yesod with hspec, the premier BDD library for Haskell, there's currently no way to insert a bunch of rows into the database during fixtures and pass the results into the individual test cases. You have to wrap each test case in additional function calls to pull that off, adding more boilerplate to your tests.

Additionally, it's not possible to find out which one of your specific test cases failed when checking for multiple conditions within the same "it" block. This means that if you need to check the state of the system after an HTTP request, you have no clue which one of the checks failed.
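
To make the limitation concrete, here is a sketch of the pattern in question (the expectations are stand-ins for real post-request checks): several expectations share one it block, so the run reports a single failing item for the whole block rather than naming the individual check.

    import Test.Hspec

    main :: IO ()
    main = hspec $
      describe "state after submitting an answer" $
        it "records the answer and updates the score" $ do
          -- Both checks live in one spec item; splitting them into separate
          -- `it` blocks is the usual way to have them reported individually.
          (2 + 2) `shouldBe` (4 :: Int)
          length "abc" `shouldBe` 3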

Fortunately the developer(s) behind these libraries are responsive and happy to look into improvements. At the very least they're glad to point other developers in the right direction towards a PR.

This has in general been my experience with the Haskell community: things aren't perfect, but folks are always looking for a way to improve the ecosystem and want Haskell to be the best language to develop in. People are trying to carve out their little slice of paradise, and are willing to put in the hard work to make it happen.

Docs

Documentation is still not quite there and the initial onboarding of new developers is still rough. There are only so many snippets to Google for, compared to e.g. Ruby and Python. A lot of documentation is very barebones and requires diving straight into the source, which is fine for a proficient Haskeller, but not for an already terrified beginner.

Many times I've witnessed senior developers get very frustrated when something wouldn't compile for hours and they couldn't find any help to move forward: be prepared to assist them before they get too grumpy. Some projects are better about it than others: Yesod and Persistent have extensive documentation, and the FP Complete crew have numerous tutorials out there to help. New books come out once in a while with fresher snippets: the time-tested Real World Haskell is now fairly outdated, but the more recent Beginning Haskell is perfectly relevant. Many channels on IRC are available: #haskell-beginners, #haskell and #yesod, although sometimes it can take work to get the answer you're looking for. More than once I've heard the comment that documentation seems to be written by wizards for other wizards, and if you're a lowly initiate, you will have a rough time.

I've personally had the privilege to help all of our developers skill up in Haskell and Yesod, and I've become a huge believer in the power of having someone more experienced guide you along the way. What took me several months of learning, mostly by myself, now takes our developers a couple of weeks of quality coaching. It took me a while to grok monads, type classes, type families, etc.; however, properly guided developers can figure them out in a matter of hours. Having a good teacher on your team will speed adoption within the organization immensely.

Strength in numbers

We once experienced a very frustrating issue that got us thinking about our full commitment to Haskell as a company.

When we switched our main API to Yesod (a full rewrite), we almost immediately ran into an issue where the API would burn up close to 95% of available CPU on whatever AWS EC2 instance it was hosted on. We upgraded machines, just to see if we could cheat our way out of fixing this by throwing money at the problem, and even with a $600/mo 16-core box, the API still managed to flood all of the available cores with barely any traffic hitting it. I personally spent a good week banging my head against it: was it resource contention? Was it a really big oversight in one of my handlers? Was it misconfiguration? Was it something about the EC2 environment? Why doesn't this reproduce AT ALL under profiling? Was it our database connection pooling? I threw a lot of screenshots and code samples at the community, both on Google Groups and IRC: nobody else had ever seen anything like it. Uh oh... All the while customer support requests are pouring in, teachers are aggravated, and the team is looking at the devs and "their latest shiny toy", tapping their collective foot.

This is the part where picking exotic tooling for your stack can be a dangerous beast: "given enough eyeballs, all bugs are shallow", and when only a dozen teams out there are using your libraries at your scale, you are on your own when it comes to fixing issues. With Rails, there's enough volume of developers that there will be enough projects of every scale to burn-in your tool of choice. That's simply not the case with Haskell's usage numbers.

What this means is that if you're planning to bet the farm on Haskell, you need to be ready and comfortable with the idea that you might have to get your hands dirty, and that you might be the first person to figure out a solution to the problem you're seeing. This requirement is pretty much non-existent in .NET / Ruby / Python et al. Start small, start simple, let the tooling grow on you as you gain experience. Start with tools that aren't mission critical until you're more confident.

However..

It bears mentioning that the above concerns are being actively addressed by the community and the state of things is rapidly improving:

  • Cabal, the Haskell package manager, was a real pain to work with just a few years ago, and "cabal hell" is still part of the Haskell vernacular. However, with sandboxes and the consistent version snapshots provided by FP Complete as Stackage LTS, that problem has been mostly resolved.
  • Build times are slow, but the community is coming up with improvements such as halcyon that should alleviate things considerably.
  • Docs have gotten dramatically better over the past couple of years. There's been a big push towards keeping fresh, community-maintained, easy-to-follow and beginner-friendly instructions such as those provided by Chris Allen's Learn Haskell. We now even have IRC channels tailored specifically for beginners, e.g. #haskell-beginners . Today newcomers become more productive much faster than they did a few years ago.
  • The community has been recently doing a better job at outreach and we've seen many new developers come make Haskell a permanent part of their toolbox. With more participants, tools get more fully-featured and more maintained.

Conclusion

It's a very exciting time in the history of computing to jump on the Haskell train. Yes, the community is tiny and one might get little hand-holding compared to more popular ecosystems; however, Haskell offers obvious benefits to software teams who can power through the initial pain period.

Today Haskell offers some of the best tools around for delivering quality software quickly and reliably, minimizing maintenance cost while maximizing developer enjoyment. To me Haskell is that dream of "developer happiness" that we were promised many years ago by the Ruby community: I can write beautiful, short, expressive and readable code that will perform phenomenally and stand the test of time and continuous change. What more can I ask for?

Categories: Offsite Blogs

Pycket: A Tracing JIT For a Functional Language

Lambda the Ultimate - Sat, 05/09/2015 - 11:53am

Pycket: A Tracing JIT For a Functional Language
Spenser Bauman, Carl Friedrich Bolz, Robert Hirschfeld, Vasily Kirilichev, Tobias Pape, Jeremy Siek, and Sam Tobin-Hochstadt
2015

We present Pycket, a high-performance tracing JIT compiler for Racket. Pycket supports a wide variety of the sophisticated features in Racket such as contracts, continuations, classes, structures, dynamic binding, and more. On average, over a standard suite of benchmarks, Pycket outperforms existing compilers, both Racket’s JIT and other highly-optimizing Scheme compilers. Further, Pycket provides much better performance for proxies than existing systems, dramatically reducing the overhead of contracts and gradual typing. We validate this claim with performance evaluation on multiple existing benchmark suites.

The Pycket implementation is of independent interest as an application of the RPython meta-tracing framework (originally created for PyPy), which automatically generates tracing JIT compilers from interpreters. Prior work on meta-tracing focuses on bytecode interpreters, whereas Pycket is a high-level interpreter based on the CEK abstract machine and operates directly on abstract syntax trees. Pycket supports proper tail calls and first-class continuations. In the setting of a functional language, where recursion and higher-order functions are more prevalent than explicit loops, the most significant performance challenge for a tracing JIT is identifying which control flows constitute a loop -- we discuss two strategies for identifying loops and measure their impact.

Categories: Offsite Discussion