News aggregator

A Java bytecode interpreter written in Haskell

haskell-cafe - Tue, 05/24/2016 - 6:43pm
(Please keep expectations low for now; this is just a weekend project.) I have written, in Haskell, something that aspires to be a Java Virtual Machine (but I don't call it a JVM yet as it doesn't fully comply with the spec). The code is available here: https://github.com/edom/haji This is similar to Frege [3], but while Frege aims to run a variant of Haskell on Java, this project tries the other direction: running a subset of Java on Haskell. Thanks. Best, Erik Some related projects: [1] https://github.com/MateVM/MateVM [2] https://hackage.haskell.org/package/hs-java [3] https://github.com/Frege/frege [4] https://wiki.haskell.org/GHC:FAQ#Why_isn.27t_GHC_available_for_.NET_or_on_the_JVM.3F [5] https://github.com/levans/Open-Quark
Categories: Offsite Discussion

Dev/tools/git/Haskell role in London

haskell-cafe - Tue, 05/24/2016 - 5:12pm
https://donsbot.wordpress.com/2016/05/24/haskell-devtoolsgit-role-at-standard-chartered-london/ I have a new role at SCB this time in the "Modelling Infrastructure" team to work on our Haskell-based continuous integration and testing system on top of git. This is a London role. More details in the linked post.
Categories: Offsite Discussion

GHC 8.0.1 Compile time regression + Trac ticket privileges

haskell-cafe - Tue, 05/24/2016 - 1:36pm
[x-post from Reddit reply] Thanks so much for all the hard work! Unfortunately I have a compile-time performance regression to report on my FLTKHS <https://github.com/deech/fltkhs> library. I also don't have a minimal example. There is a demo that took about 15 seconds to compile and link with 7.10.3, but with no changes now takes over a minute with 8.0.1. I've reproduced this across machines and operating systems. Since there was interest expressed in using this example as a benchmark, if any GHC devs are still willing to help, I'm willing to walk them through getting the library set up etc.; it's not a long process. The tip of my GitHub branch has been updated to build with GHC 8.0.1. I made myself a Trac account but apparently I don't have ticket privileges, so it won't let me create one. My username is 'deech'. Thanks! -deech
Categories: Offsite Discussion

Don Stewart (dons): Haskell dev/tools/git/… role at Standard Chartered (London)

Planet Haskell - Tue, 05/24/2016 - 9:36am

The Modelling Infrastructure (MI) team at Standard Chartered has an open position for a typed functional programming developer, based in London. MI are a dev/ops-like team responsible for the continuous delivery, testing, tooling and general developer efficiency of the Haskell-based analytics package used by the bank. They work closely with other Haskell dev teams in the bank, providing developer tools, testing and automation on top of our git ecosystem.

The role involves improving the ecosystem for developers and further automation of our build, testing and release infrastructure. You will work with devs in London, as part of the global MI team (based in Singapore and China). Development is primarily in Haskell. Knowledge of the Shake build system and Bake continuous integration system would be helpful. Strong git skills would be an advantage. Having a keen eye for analytics, data analysis and data-driven approaches to optimizing tooling and workflows is desirable.

This is a permanent, associate director-equivalent position in London.

Experience writing typed APIs to external systems such as databases, web services, and pub/sub platforms is very desirable. We like working code, so if you have Hackage or github libraries, we definitely want to see them. We also like StackOverflow answers, blog posts, academic papers, or other arenas where you can show broad FP ability. Demonstrated ability to write Haskell-based tooling around git systems would be super useful.

The role requires physical presence in London, at either our Basinghall or Old Street site. Remote work is not an option. No financial background is required. Contracting-based positions are also possible if desired.

More info about our development process is in the 2012 PADL keynote, and a 2013 HaskellCast interview.

If this sounds exciting to you, please send your resume to me – donald.stewart <at> sc.com


Tagged: jobs
Categories: Offsite Blogs

Stack build with GHC 8?

haskell-cafe - Tue, 05/24/2016 - 9:10am
Hi, My name appears in the Stackage GHC 8 mega-issue [ https://github.com/fpco/stackage/issues/1476] and I'd like to upgrade my packages. I suspect it's just a case of relaxing the upper bound on base to <4.10 but I'd quite like to check that this does, in fact, work. However I'm not sure how to go about doing this since there don't yet seem to be any stackage snapshots that use GHC 8. What's the best way to get around this? Cheers, David
Categories: Offsite Discussion

FP Complete: store: a new and efficient binary serialization library

Planet Haskell - Mon, 05/23/2016 - 11:00pm

A couple months ago, Michael Snoyman wrote a blog post describing an experiment in an efficient implementation of binary serialization. Since then, we've developed this approach into a new package for efficient serialization of Haskell datatypes. I'm happy to announce that today we are putting out the initial release of our new store package!

The store package takes a different approach than most prior serialization packages, in that performance is prioritized over other concerns. In particular, we do not make many guarantees about binary compatibility, and instead favor machine representations. For example, the binary and cereal packages use big endian encodings for numbers, whereas x86 machines use little endian. This means that to encode + decode numbers on an x86 machine, those packages end up swapping all of the individual bytes around twice!

To serialize a value, store first computes its size and allocates a properly sized ByteString. This keeps the serialization logic simple and fast, rather than mixing in logic to allocate new buffers. For datatypes that need to visit many values to compute their size, this can be inefficient - the datatype is traversed once to compute the size and once to do the serialization. However, for datatypes with constant size, or vectors of datatypes with constant size, it is possible to very quickly compute the total required size. List / set / map-like Store instances all implement this optimization when their elements have constant size.
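To make the size-first strategy concrete, here is a small self-contained sketch of the idea. The Size type below mirrors the ConstSize / VarSize distinction discussed later in this post, but everything here is defined locally for illustration; it is not the package's actual code.

    -- Illustration only: a locally defined Size type in the spirit of store's
    -- ConstSize / VarSize, showing why constant-size elements give O(1) sizing.
    import qualified Data.Vector as V
    import Data.Word (Word64)

    data Size a = ConstSize Int | VarSize (a -> Int)

    sizeOfWord64 :: Size Word64
    sizeOfWord64 = ConstSize 8

    -- A vector of constant-size elements: total size is a length prefix plus
    -- length * element size, computed without visiting the elements.
    sizeOfVector :: Size a -> Size (V.Vector a)
    sizeOfVector (ConstSize n) = VarSize (\v -> 8 + n * V.length v)
    sizeOfVector (VarSize f)   = VarSize (\v -> 8 + V.sum (V.map f v))  -- O(n) fallback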

store comes with instances for most datatypes from base, vector, bytestring, text, containers, and time. You can also use either GHC generics or Template Haskell to derive efficient instances for your datatypes.
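As a quick sketch of what using the library looks like, based on the description above: a datatype derives Generic, gets a Store instance from the generic default, and round-trips through the encode / decode entry points. The datatype here is made up, and the exact module and function names should be checked against the package documentation.

    {-# LANGUAGE DeriveGeneric #-}
    import Data.Store (Store, encode, decode)
    import GHC.Generics (Generic)

    data Point = Point { px :: Double, py :: Double }
      deriving (Show, Eq, Generic)

    -- Generic default instance; Template Haskell can derive a more
    -- size-aware one, but this needs no extra code.
    instance Store Point

    roundTrip :: Point -> Either String Point
    roundTrip p = case decode (encode p) of
      Left err -> Left (show err)
      Right q  -> Right q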

Benchmark Results

I updated the serial-bench with store. Happily, store is even faster than any of the implementations we had in the benchmark.

See the detailed report here. Note that the x-axis is measured in microseconds taken to serialize a 100 element Vector where each element occupies at least 17 bytes. store is actually performing these operations in under a microsecond (431ns to encode, 906ns to decode). The results for binary have been omitted from this graph as they blow out the x-axis scale, taking around 8 times longer than cereal and nearly 100x longer than store.

We could actually write a benchmark even more favorable to store, if we used storable or unboxed vectors! In that case, store essentially implements a memcpy.
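For anyone who wants to poke at that claim, a criterion benchmark along the following lines would exercise the memcpy-like path; this is my own sketch, not the serial-bench code, and it assumes store ships an instance for unboxed vectors as the announcement suggests.

    -- Sketch of a criterion benchmark for encoding an unboxed vector of Doubles.
    import Criterion.Main (bench, defaultMain, nf)
    import qualified Data.Vector.Unboxed as U
    import Data.Store (encode)

    main :: IO ()
    main = defaultMain
      [ bench "store/encode 100 Doubles" (nf encode vec) ]
      where
        vec :: U.Vector Double
        vec = U.enumFromN 0 100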

Speeding up stack builds

Now, the benchmark is biased towards the use case we are concerned with - serializing a Vector of a small datatype which always takes up the same amount of space. store was designed with this variety of use case in mind, so naturally it excels in this benchmark. But let's say we choose a case that isn't exactly store's strong suit - how well does it perform? In our experiments, it seems that store does a darn good job of that too!

The development version of stack now uses store for serializing caches of info needed by the build.

With store (~0.082 seconds):

2016-05-23 19:52:06.964518: [debug] Trying to decode /home/mgsloan/.stack/indices/Hackage/00-index.cache @(stack_I9M2eJwnG6d3686aQ2OkVk:Data.Store.VersionTagged src/Data/Store/VersionTagged.hs:49:5)
2016-05-23 19:52:07.046851: [debug] Success decoding /home/mgsloan/.stack/indices/Hackage/00-index.cache @(stack_I9M2eJwnG6d3686aQ2OkVk:Data.Store.VersionTagged src/Data/Store/VersionTagged.hs:58:13) 21210280 bytes

With binary (~0.197 seconds):

2016-05-23 20:22:29.855724: [debug] Trying to decode /home/mgsloan/.stack/indices/Hackage/00-index.cache @(stack_4Jm00qpelFc1pPl4KgrPav:Data.Binary.VersionTagged src/Data/Binary/VersionTagged.hs:55:5)
2016-05-23 20:22:30.053367: [debug] Success decoding /home/mgsloan/.stack/indices/Hackage/00-index.cache @(stack_4Jm00qpelFc1pPl4KgrPav:Data.Binary.VersionTagged src/Data/Binary/VersionTagged.hs:64:13) 20491950 bytes

So this part of stack is now twice as fast!

Extras

Beyond the core of store's functionality, this initial release also provides:

  • Data.Store.Streaming - functions for using Store for streaming serialization with conduit. This makes it so that you don't need to have everything in memory at once when serializing / deserializing. For applications involving lots of data, this can be essential to having reasonable performance, or even functioning at all.

    This allows us to recoup the benefits of lazy serialization, without paying for the overhead when we don't need it. This approach is also more explicit / manual with regards to the laziness - the user must determine how their data will be streamed into chunks.

  • Data.Store.TypeHash, which provides utilities for computing hashes based on the structural definitions of datatypes. The purpose of this is to provide a mechanism for tagging serialized data in such a way that deserialization issues can be anticipated.

    This is included in the store package for a couple reasons:

    1. It is quite handy to include these hashes with your encoded datatypes. The assumption is that any structural differences are likely to correspond with serialization incompatibilities. This is particularly true when the generics / TH deriving is used rather than custom instances.

    2. It uses store on Template Haskell types in order to compute a ByteString. This allows us to directly use cryptographic hashes from the cryptohash package to get a hash of the type info.

  • Data.Store.TH not only provides a means to derive Store instances for your datatypes, but it also provides utilities for checking them via smallcheck and hspec. This makes it easy to check that all of your datatypes do indeed serialize properly.

These extras were the more recently added parts of store, and so are likely to change quite a bit from the current API. The entirety of store is quite new, and so is also subject to API change while it stabilizes. That said, we encourage you to give it a try for your application!

TH cleverness

Usually, we directly use Storable instances to implement Store. In functionality, Storable is very similar to Store. The key difference is that Store instances can take up a variable amount of space, whereas Storable types must use a constant number of bytes. The store package also provides the convenience of the Peek and Poke monads, so defining custom Store instances is quite a bit more convenient than writing Storable instances by hand.

Data.Store.TH.Internal defines a function deriveManyStoreFromStorable, which does the following:

  • Reifies all Store instances
  • Reifies all Storable instances
  • Defines a Store instance for every type that has a Storable instance but does not yet have a Store instance

In the future, store will likely provide such a function for users, restricted to deriving Store instances only for types in the current package or module. For now, this is just an internal convenience.

I noticed that the Storable instance for Bool is a bit wasteful with its bytes. Rather inexplicably, perhaps due to alignment concerns, it takes up a whopping 4 bytes to represent a single bit of info:

    instance Storable Bool where
       sizeOf _          = sizeOf (undefined::HTYPE_INT)
       alignment _       = alignment (undefined::HTYPE_INT)
       peekElemOff p i   = liftM (/= (0::HTYPE_INT)) $ peekElemOff (castPtr p) i
       pokeElemOff p i x = pokeElemOff (castPtr p) i (if x then 1 else 0::HTYPE_INT)

We'd prefer to just use a single byte. Since deriveManyStoreFromStorable skips types that already have Store instances, all I needed to do was define my own instance for Bool. To do this, I used the derive function from the new th-utilities package (blog post pending!):

$($(derive [d| instance Deriving (Store Bool) |]))

This is a bit of a magical incantation - it runs code at compile time which generates an efficient instance Store Bool where .... We could also use generic deriving, and rely on the method defaults to just write instance Store Bool. However, this can be less efficient, because the generics instances will yield a VarSize for its size, whereas the TH instance is smart enough to yield ConstSize. In practice, this is the difference between having an O(1) implementation for size :: Size (Vector MyADT), and having an O(n) implementation. The O(1) implementation just multiplies the element size by the length, whereas the O(n) implementation needs to ask each element for its size.
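For comparison, here is roughly the shape of instance that incantation generates, written out by hand for a stand-in newtype so it does not clash with the instance the package ships. The class method and constructor names follow the store API as described in this post; treat the details as a sketch.

    -- Sketch: a one-byte instance with a constant size, reusing the Word8 instance.
    import Data.Store   -- Store class, Size(ConstSize), Peek (exact exports approximate)
    import Data.Word (Word8)

    newtype MyBool = MyBool Bool

    instance Store MyBool where
      size = ConstSize 1                                    -- always exactly one byte
      poke (MyBool b) = poke (if b then 1 else 0 :: Word8)
      peek = do
        w <- peek :: Peek Word8
        return (MyBool (w /= 0))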

Categories: Offsite Blogs

CFP: ACM MobiWac 2016 - November 13 - 17, 2016, Malta

General haskell list - Mon, 05/23/2016 - 7:16pm
** We apologize if you receive multiple copies of this message ** ================================================================== The 14th ACM International Symposium on Mobility Management and Wireless Access (MobiWac 2016) (in conjunction with the 19th ACM MSWiM) November 13 - 17, 2016 - Malta http://mobiwac-symposium.org/ ================================================================== The 14th ACM International Symposium on Mobility Management and Wireless Access (MobiWac 2016) will be held in conjunction with MSWiM 2016 (the 19th ACM International Symposium on Modeling, Analysis and Simulation of Wireless and Mobile Systems) from November 13 to 17, 2016 in Malta. The MOBIWAC series of events are intended to provide an international forum for the discussion and presentation of original ideas, recent results and achievements by research
Categories: Incoming News

Stefan Jacholke: Hello World

Planet Haskell - Mon, 05/23/2016 - 6:00pm

So, first off, I got accepted into Haskell Summer of Code 2016. The project is to develop a visual functional block-based language for CodeWorld, under Chris Smith. The project is in the same vein as Scratch, although it will be based on functional principles and will be a subset of Haskell. The project idea is described more fully in the Project Proposal.

Some of the other goals of this site include tracking the progress of the project and interacting with the community.

I’m a graduate student studying Computer and Electronic Engineering and aside from Summer of Code reasons for doing the project, I’m also interested in programming languages (and obviously Haskell) and hope to learn a good deal through doing such a project.

P.S. I’m also explicitly giving permission to be included on Planet Haskell

Categories: Offsite Blogs

Ketil Malde: Why we should stop talking, and start to prepare for climate change

Planet Haskell - Mon, 05/23/2016 - 2:00pm

The other day, I attended a meeting organized by my local University. Part of a series dealing with the Horizon 2020 themes, this one dealt with energy - and specifically, how we should replace our non-sustainable dependency on fossil fuels.

Professionally led by a well-known political journalist, it started with an introductory talk by a mathematician working with geothermal energy, specifically simulating fracturing of rock. Knowledge about the structure of cracks and fractures deep below can be used in the construction of geothermal energy plants - they produce power basically by pumping cold water down, and hot water up - so exploiting rock structure can make them more effective. It was an interesting talk, with a lot of geekish enthusiasm for the subject.

Then there was a panel of three; one politician, one solar panel evangelist-salesperson, and a geographer(?). And discussion ensued, everybody was talking about their favorite stuff on clean energy, and nobody really objected or criticized anything.

Which, I think, highlights the problem.

When they opened for questions from the public, the first one to raise her voice was a tall, enthusiastic lady in a red dress. She was a bit annoyed by all the talk about economy and things, and why don't we just fix this?

And she is right - we can. It's just a question of resources. I recently looked at the numbers for Poland, which is one of the big coal-users in Europe1, producing about 150 TWh of electricity2 per year from coal.

Using the (now rather infamous) Olkiluoto reactor as a baseline, the contract price for unit 3 was €3 billion (but will probably end up at 2-3 times that in reality). Unit 1 and 2 which are in operation have about the same capacity, and deliver about 15 TWh/year. So, depending on how you want to include cost overruns, we can replace all coal-based electricity production in Poland with ten Olkiluoto-sized reactors for €30-80 billion. (I think it is reasonable to assume that if you build ten, you will eventually learn to avoid overruns and get closer to the price tag. On the other hand, the contractor might not give you as favorable quotes today as they gave Finland.)

Similarly, the Topaz solar power plant in the Californian desert, cost $2.4 billion to build, and delivers something above one TWh/year. Again, scaling up, we would need maybe 130 of these, and a total cost of about € 280 billion. (Granted, there are some additional challenges here, for instance, anybody going to Poland will immediately notice the lack of Californian deserts at low latitudes.3)
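For anyone who wants to check the back-of-envelope figures above, the arithmetic is just this (all inputs are the rough numbers quoted in this post, nothing more authoritative):

    -- Back-of-envelope check of the figures quoted above (post's rough numbers only).
    reactorsNeeded :: Double
    reactorsNeeded = 150 / 15              -- 150 TWh/yr of coal power, 15 TWh/yr per reactor = 10

    nuclearCostBnEur :: (Double, Double)
    nuclearCostBnEur = (10 * 3, 10 * 8)    -- ten reactors at EUR 3-8 bn each = EUR 30-80 bn

    solarPlantsNeeded :: Double
    solarPlantsNeeded = 150 / 1.15         -- ~130 Topaz-sized plants at a bit over 1 TWh/yr each

    solarCostBnEur :: Double
    solarCostBnEur = solarPlantsNeeded * 2.15  -- ~EUR 280 bn at roughly EUR 2.15 bn per plant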

So yes: we can solve this. But we don't. I can see the economic argument - we're talking about major investments. But more importantly, the debate was almost entirely focused on the small stuff. The seller of solar panels was talking at length about how the government should improve the situation for people selling solar panels. The academics were talking about how the government should invest in more research. The journalist was talking about Vandana Shiva - whom I'm not going to discuss in any detail, except notice that she is very good at generating headlines. The politician was talking about how he would work to fund all these good causes. And the topics drifted off, until at the end somebody from the audience brought up regulations of snow scooter use, apparently a matter of great concern to him personally, but hardly very relevant.

So these people, kind-spirited and idealistic as they are, are not part of the solution. Politicians and activists happily travel to their glorious meetings in Doha and Copenhagen, but they won't discuss shutting down Norwegian coal mines producing about two million tons of coal per year, corresponding to a full 10% of Norway's entire CO2 emissions. And unlike oil, which is a major source of income, this mine runs with huge losses -- last year, it had to be subsidized with more than € 50 million. Climate is important, but it turns out the jobs for the handful of people employed by this mine are more so. And thus realpolitik trumps idealism. Sic transit gloria mundi.

Subsidized by well-meaning politicians and pushed by PR-conscious business managers, we'll get a handful of solar panels on a handful of buildings. That their contribution almost certainly is as negative for the climate as it is for the economy doesn't matter. We'll get some academic programs, which as always will support research into whatever can be twisted into sounding policy-compliant. And everything else continues on its old trajectory.

  1. Poland is the second largest coal consumer in Europe. Interestingly, the only reason they are number two is that Germany is number one. And, ironically, the panel would often point to Germany as an illustration of successful subsidies and policies favoring renewable energy.

  2. Note that electricity is only a small part of total energy; when people talk about electricity generation, it is usually to make their favorite technology look better than it is. It sounds better to say that solar power produces 1% of global electricity than 0.2% of global energy, doesn't it?

  3. As far as I can find, the largest solar park in Scandinavia is in Västerås. This is estimated to deliver 1.2GWh from 7000m² of photovoltaic panels over a 4.5 ha area. Compared to Topaz's 25 km², that's slightly less than 0.2% of the size and 0.1% of the power output. At SEK 20M, it's also about 0.1% of the cost, which is surprisingly inexpensive. But these numbers seem to be from the project itself, who at the same time claims the power suffices for "400 apartments". In my apartment, 3000kWh is just one or two winter months, which makes me a bit suspicious about the rest of the calculations. Another comparison could be Neuhardberg, at slightly less than € 300 million and 145MWp capacity, but which apparently only translates to 20GWh(?). If that is indeed correct, Poland would need seven thousand of those, at a € 2100 billion price tag.

Categories: Offsite Blogs

Call for papers: 21st International Conference on Engineering of Complex Computer Systems (ICECCS 2016), Dubai, United Arab Emirates, November 6-8, 2016

General haskell list - Mon, 05/23/2016 - 12:28pm
21st International Conference on Engineering of Complex Computer Systems (ICECCS 2016) || November 6-8, Dubai, United Arab Emirates || http://www.aston.ac.uk/eas/about-eas/academic-groups/computer-science/iceccs-2016/ Overview --------------------- Over the past several years, we have seen a rapidly rising emphasis on designing, implementing and managing complex computer systems to help us deal with an increasingly volatile, globalised complex world. These systems are critical for dealing with the Grand Challenge problems we are facing in the 21st century, including health care, urbanization, education, energy, finance, and job creation. These complex computer systems are frequently distributed over heterogeneous networks and process large amounts of data. Performance, real-time behavior, fault tolerance, security, adaptability, development time and cost, and long-life concerns are the key issues. The goal of this conference is to bring together industrial, academic, and government experts, from a variety of user domains
Categories: Incoming News

Brent Yorgey: Towards a new programming languages course: ideas welcome!

Planet Haskell - Mon, 05/23/2016 - 11:02am

tl;dr: This fall, I will be teaching an undergraduate PL course, with a focus on practical language design principles and tools. Feedback, questions, assignments you can share with me, etc. are all most welcome!

This fall, I will be teaching an undergraduate course on programming languages. It’s eminently sensible to ask a new hire to take on a course in their specialty, and one might think I would be thrilled. But in a way, I am dreading it.

It’s my own fault, really. In my hubris, I have decided that I don’t like the ways that PL courses are typically taught. So this summer I have to buckle down and actually design the course I do want to teach. It’s not that I’m dreading the course itself, but rather the amount of work it will take to create it!

I’m not a big fan of the sort of “survey of programming languages” course that gets taught a lot, where you spend three or four weeks on each of three or four different languages. I am not sure that students really learn much from the experience (though I would be happy to hear any reports to the contrary). At best it feels sort of like making students “eat their vegetables”—it’s not much fun but it will make them grow big and strong in some general sense.1 It’s unlikely that students will ever use the surveyed languages again. You might hope that students will think to use the surveyed languages later in their career because they were exposed to them in the course; but I doubt it, because three or four weeks is hardly enough to get any real sense for a language and where it might be useful. I think the only real argument for this sort of course is that it “exposes students to new ways of thinking”. While that is certainly true, and exposing students to new ways of thinking is important—essentially every class should be doing it, in one way or another—I think there are better ways to go about it.

In short, I want to design a course that will not only expose students to new ideas and ways of thinking, but will also give them some practical skills that they might actually use in their career. I started by considering the question: what does the field of programming languages uniquely have to offer to students that is both intellectually worthwhile (by my own standards) and valuable to them? Specifically, I want to consider students who go on to do something other than be an academic in PL: what do I want the next generation of software developers and academics in other fields to understand about programming languages?

A lightbulb finally turned on for me when I realized that while the average software developer will probably never use, say, Prolog, they almost certainly will develop a domain-specific language at some point—quite possibly without even realizing they are doing it! In fact, if we include embedded domain-specific languages, then in essence, anyone developing any API at all is creating a language. Even if you don’t want to extend the idea of “embedded domain-specific language” quite that far, the point is that the tools and ideas of language design are widely applicable. Giving students practice designing and implementing languages will make them better programmers.

So I want my course to focus on language design, encompassing both big ideas (type systems, semantics) as well as concrete tools (parsing, ASTs, type checking, interpreters). We will use a functional programming language (specifically, Haskell) for several reasons: to expose the students to a programming paradigm very different from the languages they already know (mainly Java and Python); because FP languages make a great platform for starting to talk about types; and because FP languages also make a great platform for building language-related tools like parsers, type checkers, etc. and for building embedded domain-specific languages. Notably, however, we will only use Haskell: though we will probably study other types of languages, we will use Haskell as a medium for our study, e.g. by implementing simplified versions of them in Haskell. So while the students will be exposed to a number of ideas, there is really only one concrete language they will be exposed to. The hope is that by working in a single language all semester, the students may actually end up with enough experience in the language that they really do go on to use it again later.
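As a tiny, concrete illustration of the kind of thing I mean by using Haskell as a medium, here is the sort of artifact the tools part of the course might build up to: an abstract syntax tree for a toy arithmetic language and an interpreter for it. This is just a sketch for this post, not an actual assignment.

    -- A toy arithmetic language: abstract syntax plus an interpreter.
    data Expr
      = Lit Integer
      | Add Expr Expr
      | Mul Expr Expr
      | Neg Expr
      deriving Show

    eval :: Expr -> Integer
    eval (Lit n)   = n
    eval (Add a b) = eval a + eval b
    eval (Mul a b) = eval a * eval b
    eval (Neg a)   = negate (eval a)

    -- eval (Add (Lit 2) (Mul (Lit 3) (Neg (Lit 4))))  ==  -10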

As an aside, an interesting challenge/opportunity comes from the fact that approximately half the students in the class will have already taken my functional programming class this past spring, and will therefore be familiar with Haskell. On the challenge side, how do I teach Haskell to the other half of the class without boring the half that already knows it? Part of the answer might lie in emphasis: I will be highlighting very different aspects of the language from those I covered in my FP course, though of course there will necessarily be overlap. On the opportunity side, however, I can also ask: how can I take advantage of the fact that half the class will already know Haskell? For example, can I design things in such a way that they help the other half of the class get up to speed more quickly?

In any case, here’s my current (very!) rough outline for the semester:

  1. Introduction to FP (Haskell) (3 weeks)
  2. Type systems & foundations (2-3 weeks)
    • lambda calculus
    • type systems
  3. Tools for language design and implementation (4 weeks)
    • (lexing &) parsing, ASTs
    • typechecking
    • interpreters
    • (very very basics of) compilers (this is not a compilers course!)
  4. Domain-specific languages (3 weeks)
  5. Social aspects? (1 week)
    • language communities
    • language adoption

My task for the rest of the summer is to develop a more concrete curriculum, and to design some projects. This will likely be a project-based course, where the majority of the points will be concentrated in a few big projects—partly because the nature of the course lends itself well to larger projects, and partly to keep me sane (I will be teaching two other courses at the same time, and having lots of small assignments constantly due is like death by a thousand cuts).

I would love feedback of any kind. Do you think this is a great idea, or a terrible one? Have you, or anyone you know of, ever run a similar course? Do you have any appropriate assignments you’d like to share with me?

  1. Actually, I love vegetables, but anyway.


Categories: Offsite Blogs

Chris Smith: CodeWorld’s Big Decisions

Planet Haskell - Mon, 05/23/2016 - 10:08am

Reflecting back on the last 6 years of developing and teaching with CodeWorld, there are a number of decisions that were unique, and often even controversial, that define the project.  For the record, here are eight of the biggest decisions I’ve made with CodeWorld, and the reasons for them.

1. Teaching functional programming

There are plenty of efforts around to teach coding in schools.  Most of them focus on standard imperative programming languages: for example, Python, or JavaScript, or even Java (which is a horrible choice, but is entrenched due to its role in the Advanced Placement curriculum and exams).  Most of these efforts don’t think much about functional programming.

Regular readers of this blog are probably familiar with functional programming, but for those who aren’t, you should understand that it’s really a rather different paradigm from most typical programming.  It’s not just another syntax, with a few different features.  Instead, it’s a whole new way of breaking down problems and expressing solutions.  Basic ideas taught in the first few weeks of traditional computer programming courses – for example, loops – just don’t exist at all.  And other really central ideas, like functions and variables, have a completely different meaning.

I’m not quite alone in teaching functional programming, though.  Matthias Felleisen and Shriram Krishnamurthi started a sizable effort to teach Scheme at the K12 level in the 1990s, and Emmanuel Schanzer created a Scheme/Racket based curriculum called Bootstrap, which is heavily based on functional programming.  I’ve made the same choice, and for much the same reason.

In the end, while functional programming is very different from the mainstream of computer programming, it is very similar to something else: mathematics.  Functions and variables in the functional programming world may mean something different from the same words in Python or JavaScript; but they mean the same thing as functions and variables in mathematics.

In fact, I never set out to teach “coding” at all!  My goal is to teach mathematics more effectively.  But mathematics education suffers from the weakness that students who make a mistake often don’t find out about it until days later!  By that time, whatever confusion of ideas led to the error has long been forgotten.  CodeWorld began as my attempt to get students to directly manipulate things like functions, expressions, and variables, and get immediate feedback about whether the result makes sense, and whether it does what they intended.  For that purpose, a functional programming language is perfect for the job!

2. Teaching Haskell

Even after the switch to functional programming, I still surprise a lot of people by telling them I teach middle school students in Haskell!  Let’s face it: Haskell has a bit of a reputation as a mind-bending and difficult language to learn, and it sometimes even deserves the reputation.  This is, after all, the programming language community with more Ph.D. students per capita than any other, and where people hold regular conversations about applying the Yoneda lemma to help solve their coding challenges!

But it doesn’t have to be!  Haskell also has some advantages over almost anything else, for someone looking to work with tangible algebra and mathematical notation.

First of all, the language semantics really are comparable to mathematics.  Haskell is often called purely functional, meaning that it doesn’t just enable the use of functional programming ideas, but in fact embodies them!  By contrast, most other widely used functional languages are impure.  In an impure functional language, a function is actually the same complicated notion of a procedure or recipe that it is in an imperative language, but it is conventional (and the language offers powerful features to help with this) to stick to a subset that’s consistent with mathematics, most of the time.  That’s often a fine trade-off in a software engineering world, where the additional complexity is sometimes needed; but in education, when I tell a student that a function is really just a set of ordered pairs, I don’t want to have to later qualify this statement with “… except for this magical function here, which produces a random number.”

Even more importantly, basic syntax looks almost exactly like mathematics  (or at least, it can).  Bootstrap, for example, gets the semantics right, but looking through sample student workbooks, there’s quite a bit of “here’s how you write this in math; now write it in Racket.”  By contrast, when teaching with CodeWorld, we’ve been able to effectively explain the programming language as a set of conventions for typing math directly for the computer.  There are obviously still some differences – both at the surface level, like using * for multiplication and ^ for exponents, and at a deeper level, like distinguishing between variables and constructors on the left-hand side of equations.  But in practice, this has been easily understood by students as limitations and tweaks in which math notation CodeWorld understands.  It feels like a dialect, not a new language.

(It’s worth pointing out that Racket also includes a purely functional language subset that’s used by Bootstrap, though the syntax is different.  Shriram Krishnamurthi has mentioned Pyret, as well, which among other nice properties closes some of the ground between Scheme and mathematics notation, at least for expressions.  You still can’t just write “f(x) = x + 5” to define a function, though.)
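To make the notation point concrete, here is roughly what a small CodeWorld program looks like: a function is defined exactly the way students write it in algebra class, and then used in a drawing. The library functions shown (drawingOf, pictures, translated, solidCircle) are from the CodeWorld standard library as I recall it, so treat the exact names as approximate.

    -- A definition in ordinary algebra notation, then a row of circles whose
    -- radius is given by f, placed with a list comprehension.
    f(x) = x / 4 + 2

    main = drawingOf(pictures([ translated(solidCircle(f(x)), x, 0) | x <- [-6, -3 .. 6] ]))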

So what about the mind-bending parts of Haskell?  It turns out most of them are optional!  It took some effort, but as I’ll mention later, I have removed things like type classes (including the dreaded monads) and many unnecessary uses of higher-order functions.  What’s left is a thin wrapper around notation that students are already learning in Algebra anyway.

3. Using the Gloss programming model

Of course, a programming language by itself isn’t a complete tool.  You also need libraries!  The next big decision was to base CodeWorld on the programming model of Ben Lippmeier’s Gloss library.

Gloss is an interesting choice on its own.  The programming model is very simple.  Everything is a pretty comprehensible mathematical thing.  It’s probably too simple for sizable projects, and you could make the case that teaching it is letting down students who want to be able to scale their programming skills up to larger projects.  But again, it has two advantages that I believe outweigh this concern.

First, it’s tangible.  Outside of Gloss, much of the current thinking around building interactive applications in functional programming environments centers around FRP (Functional Reactive Programming).  FRP defines a few abstract concepts (“events” and “behaviors”), and then hides what they look like or how they work.  Of course, strong abstraction is a foundation of software engineering.  But it’s not a foundation of learning, or of mathematics!  Indeed, Elm also recently (and probably with even less justification, given its less educational audience) dropped FRP in favor of tangible functions, as well.  The advantages of concrete and tangible types that students can get their heads around are hard to overstate.

Second, again, this choice better supports building an understanding of mathematical modeling.  In addition to it being easier for a middle school student to understand a value of type Number -> Picture than the more abstract Behavior Picture from FRP (or the even more obtuse non-terminating while-loop of the imperative world), it also gives them experience with understanding how real phenomena are modeled using simple ideas from mathematics.  Later programs are built using initial values and step functions, along with explicitly bundled state.  This gently starts to introduce general patterns of thinking about change in ways that will come up again far down the road: in the study of linear algebra, calculus, differential equations, and dynamical systems!

Of course, there’s a cost here.  I wouldn’t point someone to Gloss for a real-world project.  Even something as simple as a single GUI component can be complicated and fragmented, since students have to separately connect the state, initial value, behavior over time, and event handling.  But the cost in encapsulation is most keenly felt in larger projects by more experienced programmers who can find this sort of plumbing work tedious.  Typical introductory programming students still have a lot to learn from connecting these pieces and understanding how to make them work together.
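For a sense of what that model looks like in practice: an animation really is just a function from the number of seconds elapsed to a Picture. The function names below (animationOf, rotated, solidRectangle) are from the CodeWorld library as I remember it; the exact spellings are approximate.

    -- At t seconds, draw a bar rotated by an amount that grows with time.
    scene(t) = rotated(solidRectangle(6, 1), 45 * t)

    main = animationOf(scene)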

4. Replacing the Prelude

Once I had Haskell and Gloss in place, the next big choice made by CodeWorld was to replace the Haskell prelude with a customized version.  GHC, the most popular Haskell compiler, provides a lot of power to customize the language by making changes to libraries.  This extends even to the meaning of literal text and numbers in the source code!

One reason for replacing the Prelude was to keep the complexity of a first working program as low as possible.  For students who are just starting out, every word or piece of punctuation is an obstacle.  Haskell has always done better on this front than Java, which requires defining a class, and a member function with a variety of options.  But adding import statements definitely doesn’t fit the vision articulated above of the programming language as a thin wrapper around mathematical notation.  So the modified Prelude puts all of the built-in CodeWorld functions in scope automatically, without the need to import additional modules.  As a result, a minimal CodeWorld program is one line long.

A second reason for replacing the Prelude was to remove a lot of the programming jargon and historical accidents in Haskell.  Some of this is so entrenched that experienced programmers don’t even notice it any more.  For example, even the word “string” to denote a bit of text is a holdout from how computer programmers thought of their work in the mid 20th century.  (CodeWorld calls the analogous type Text, instead, and also keeps it separate from lists.)  Haskell itself has introduced its own jargon, which is confusing to students as well.

But the most important consequence of replacing the Prelude is that advanced language constructs, like type classes and monads, can be hidden.  These features haven’t actually been removed from CodeWorld, but they are not used in the standard library, so that students who don’t intend to use them will not see them at all.  This made more changes necessary, such as collapsing Haskell’s numeric type class hierarchy into a single type, called Number.  Perhaps the most interesting adaptation was the implementation of the (==) operator for equality comparison, without a type class constraint.  This was done by Luite, by inspecting the runtime representation of the values in the GHCJS runtime (see below).

5. Intentionally foiling imperative thinking

Sometimes, it seems that the dogma of the functional programming language community (and Haskellers in particular) is that programmers are corrupted by imperative languages, and that a programmer learning a functional language for their first experience would have a much easier time.  I haven’t found that to be 100% true.  Perhaps it’s because even students with no prior programming experience have still been told, for example, to think of a program as a list of instructions.  Or perhaps it’s something more intrinsic in the human brain.  I don’t know for sure.

But what I do know for sure is that even with no previous experience, middle school students will gravitate toward imperative semantics unless they are carefully held back!  Because of this, another choice made by CodeWorld, and one of the main differences from Gloss, is that it makes some changes to intentionally trip up students who try to think of their CodeWorld expressions as an imperative sequence of instructions.

One example of such a change: in Gloss, a list of pictures is overlaid from back to front.  In CodeWorld, though, the order is reversed.  Combining pictures, whether via the pictures function, or the & operator, is done from front to back.  The reason is that as I observed students in my classes, I realized that many of them had devised a subtly wrong understanding of the language semantics: namely, that circle(1) was not a circle, but instead a command to draw a circle, and that the & operator simply meant to do one thing, and then the next, and the pictures ended up overlaying each other because of the painter’s algorithm.  Because of this misunderstanding, they struggled to apply or understand other operations, like translation or rotation, in a natural way.  After swapping the order of parameters, students who form such a hypothesis will immediately have it proven wrong.  (The analogous mistake now would be to assume that & means to do the second thing first, and no student I’m aware of has made that error.)
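A small example of the resulting reading order, using CodeWorld library names as I recall them (treat the exact names as approximate): the picture written first ends up in front, so the code reads like a description of the layered scene rather than a sequence of drawing commands.

    -- The circle is listed first, so it is drawn in front of the square.
    main = drawingOf(colored(solidCircle(3), red) & colored(solidRectangle(8, 8), blue))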

A similar situation exists with colors.  In Gloss, the color function changes the color only of parts of a picture that don’t already have a color!  This means that the semantic model of the Picture type in Gloss is quite complex indeed.  Instead of just being a visual shape, a Gloss Picture is a shape where some parts have fixed color, but others have unspecified color, and the color function operates on that value by fixing any unspecified bits to the given color.  Indeed, the most sensible way to understand these values is in terms of the implementation: that the color function sets a current color in the graphics context, which is used for that subtree, but only if it’s not changed first.  This is a leaky implementation!  It is fixed by CodeWorld, where applying a color to a picture overrides any existing coloring.

Another change that helped a lot with this was to carefully remove the use of verbs for function names in the CodeWorld standard library.  I observed verbs misleading students many times.  Sometimes, they expected that use of a function would permanently change the value of its parameter.  Other times, they even expected a function like rotate to turn a picture into an animation that keeps moving!  The key idea they are missing is that functions are not actions, but rather just relations between values.  Such relations are better (even if it’s sometimes awkward) described somewhere on a scale between nouns and adjectives, rather than verbs.  The way the code reads after this change once again acts as a roadblock to students who try to build on an incorrect understanding.

6. Embracing the web

Beyond the programming language and libraries, another important choice in CodeWorld was to strongly adopt the web as a medium.  The first version of the platform in 2010 was a relatively early adopter of web-based programming tools!  However, the execution model (using SafeHaskell to run student code in a trusted way on the server and stream frames to the client) was definitely doomed from the start.  It was a hack, which worked for one class, but was hardly scalable.

Things got better with the advent of Haskell-to-JavaScript compilers.  I built a first prototype of this in 2012 using Fay, but ultimately settled on GHCJS, which is just an amazing project.  Now students can write very capable code implementing complete games and other applications, all running locally in their browsers with very reasonable performance.

This decision was important for a few reasons.  The first is compatibility and universal access.  Schools have whatever devices they have access to: Chromebooks, bring-your-own-device plans, etc.  Students themselves are constantly switching devices, or leaving theirs at home.  Depending on a locally installed application – or saving student projects on a local disk – for a class at the middle school level would be a disaster.  Because CodeWorld is all web-based, they can work from any system they wish, and have full access to all of their saved projects.

The second reason a web-based environment was important is that sharing is a huge part of student motivation.  Because the CodeWorld server remembers all compiled code by its MD5 hash, students can send projects to each other simply by copying and pasting an appropriate URL into an email, chat message, or text message.  It is difficult to express how helpful this has been.

Despite the advantages of the web, though, I am hoping to soon have export of student projects to mobile applications, as well.  The development environment will remain web-based, but created applications can be installed as apps.  It’s likely that someone will be working on this feature over the summer.

7. Supporting mathematics education

Another big decision made by CodeWorld, and hinted at already, was to often sacrifice traditional computer programming education for better mathematics.  This has been done with a hodge-podge of small changes, such as:

  • De-emphasizing programming concepts like abstraction, maps and folds, and higher-order functions, in favor of approaches like list comprehensions that look more like mathematics.
  • Uncurrying all functions in the standard library.  This is easily the most controversial decision I’ve made for the Haskell community, but it’s really just a special case of de-emphasizing higher order functions.  After uncurrying, functions can always be written in standard mathematical notation, such as f(x) or f(x, y).
  • The coordinate plane uses a mathematical orientation.  Gloss’s coordinate plane looks like computer screen coordinates, with (0, 0) in the top left.  CodeWorld’s plane puts (0, 0) at the center, and it orients the positive y axis to point up.  These just match conventions.
  • CodeWorld also rescales the coordinates so that the plane extends from -10 to +10 in both dimensions, rather than counting in pixels.  This turns out to have been an amazing choice!  It simultaneously allows students to do low-precision placement of shapes on the plane without multi-digit arithmetic, and introduces decimals for added precision.  In the end, this combination better supports middle school mathematics than the alternative.  (A short example follows this list.)
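Here is the kind of placement this enables: whole scenes laid out with single-digit coordinates, plus a decimal where a little more precision helps (CodeWorld-style uncurried calls; exact library names approximate).

    -- The visible plane runs from -10 to 10 on both axes.
    main = drawingOf(
        translated(solidCircle(1), -5,  5) &
        translated(solidCircle(1),  5,  5) &
        translated(rectangle(7, 2.5), 0, -6))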

Another change here was originally an accident.  CodeWorld, from the beginning, did not implement using any kind of image file in a program.  Originally, this was because I hadn’t bothered to implement a UI for uploading assets to use in the web-based programs!  But after teaching with it, I don’t regret it at all.  I’ve had other teachers tell me the same thing.  By giving students only geometric primitives, not images copied and pasted from the web, as the tools for their projects, they are more creative and work with a lot more mathematics in the process.

8. Opting for student-led projects

The final big decision on my list doesn’t pertain to the web site or tools at all, but is about the organization of classes.  There are a lot of efforts out there to encourage students to learn to code.  Hour of Code encourages teachers to devote an hour to programming activities and games.  Many organizations are running day-long activities in Processing or Scratch or Greenfoot.  Bootstrap started with once-a-week after school programs using Racket, and has scaled up from there.  I’ve volunteered as a mentor and team lead for weekend hackathons by organizations like Black Girls Code.

These are great!  I wouldn’t discourage anyone from jumping in and doing what they can.  But in many cases, they seem to miss the opportunity for student creativity.  There’s a tendency for a lot of organizations to create very guided activities, or shy away from anything that might get a student off the beaten path.  Early versions of the Bootstrap curriculum, for example, encouraged kids to build games, but designed a game from start to finish (in terms of generic words like the “player”, “target”, and “danger”), and gave students limited creative choices in the process.  (Bootstrap has since expanded into a more open-ended Bootstrap 2 curriculum, as well.)  Hour of Code consists almost entirely of scripted activities that feel more like playing a game than building one, which makes sense because they are intended to be completed in an hour.  The BGC hackathon mentioned above was limited to use of a drag-and-drop GUI design tool, and devoted more time to having students sit in presentations about startup business models and UX design than letting them create something impressive of their own.

So one way that CodeWorld has been different from many of these activities is that I’ve tried to plan from the very beginning of the course for students to decide on, design, and implement their own ideas from the ground up.  Sometimes that means taking longer, and taking smaller steps.  From the very beginning, projects in the class aren’t plugging bits into a designed program, but rather creating things of their own choosing, at the level students are capable of doing creatively from scratch at that point.  It means that I don’t even start talking about games until halfway through the class.  But I think it’s important to let students dig in at each step and express themselves by creating something that’s deeply and uniquely theirs.  Along the way, they spend a lot more time tinkering and trying out things; even trying out different possible overall organizations of their programs!

I think CodeWorld has been very successful at this.  When students in CodeWorld create their own games, they really create their own games.  They work differently, and have different designs.

Here are a few examples from various classes, all written by students between 12 and 14 years old:

  • Gnome Maze  Use WASD keys to help a gnome navigate the maze and find the gold.
  • Donkey Pong  One player uses W and S, the other uses the up and down cursor keys.  Hit the ball back and forth.
  • Dot Grab  One player uses WASD, and the other uses the arrow keys.  Race to eat the most dots.
  • Yo Grandma!  Save an old lady in a wheelchair from various hazards by dragging attachments onto her wheelchair.
  • Jacob the Fish  Help Jacob dodge sushi and eat minnows, and avoid becoming a snack for an even larger fish
  • Knight-Wizard-Archer  A twist on rock/paper/scissors, with fantasy characters
  • Popcorn Cat  Drop the cat to eat the popcorn, but dodge dogs

Categories: Offsite Blogs

Help moving a bounds check into a case branch

haskell-cafe - Mon, 05/23/2016 - 4:27am
Sorry for the terrible subject line, but I'm a bit stuck here. A couple days ago I overhauled Data.Sequence.splitAt, greatly improving its performance. You can see the new implementation at https://github.com/haskell/containers/blob/e8b1f664a631e3795dfd14f2d8c2b39c906284cf/Data/Sequence.hs#L2346 . There's one spot where I'm still stuck, however. Much like the original implementation, I check that the splitting index is before the end of the sequence (to ensure correctness). I go further, in fact, checking that the splitting index is positive, in order to avoid allocating a new tree-top in a trivial split. Logically, this check should be moved into the Deep case branch in splitTreeE, to avoid pattern matching on the top of the tree twice and performing redundant comparisons. However, when I make that move, the split/append benchmark gets worse. When I looked at the Core, it seemed that I confused GHC somehow. I ended up with a join point taking an extra argument that it totally ignored. I can fix that bench
Categories: Offsite Discussion

Tom Schrijvers: IFL 2016: 1st Call for Papers

Planet Haskell - Mon, 05/23/2016 - 3:03am

IFL 2016 - Call for papers
28th SYMPOSIUM ON IMPLEMENTATION AND APPLICATION OF FUNCTIONAL LANGUAGES - IFL 2016
KU Leuven, Belgium
In cooperation with ACM SIGPLAN
August 31 - September 2, 2016
https://dtai.cs.kuleuven.be/events/ifl2016/
Scope
The goal of the IFL symposia is to bring together researchers actively engaged in the implementation and application of functional and function-based programming languages. IFL 2016 will be a venue for researchers to present and discuss new ideas and concepts, work in progress, and publication-ripe results related to the implementation and application of functional languages and function-based programming.
Peer-review
Following the IFL tradition, IFL 2016 will use a post-symposium review process to produce the formal proceedings. All participants of IFL 2016 are invited to submit either a draft paper or an extended abstract describing work to be presented at the symposium. At no time may work submitted to IFL be simultaneously submitted to other venues; submissions must adhere to ACM SIGPLAN's republication policy:
http://www.sigplan.org/Resources/Policies/Republication
The submissions will be screened by the program committee chair to make sure they are within the scope of IFL, and will appear in the draft proceedings distributed at the symposium. Submissions appearing in the draft proceedings are not peer-reviewed publications. Hence, publications that appear only in the draft proceedings are not subject to the ACM SIGPLAN republication policy. After the symposium, authors will be given the opportunity to incorporate the feedback from discussions at the symposium and will be invited to submit a revised full article for the formal review process. From the revised submissions, the program committee will select papers for the formal proceedings considering their correctness, novelty, originality, relevance, significance, and clarity. The formal proceedings will appear in the International Conference Proceedings Series of the ACM Digital Library.
Important dates
August 1: Submission deadline for draft papers
August 3: Notification of acceptance for presentation
August 5: Early registration deadline
August 12: Late registration deadline
August 22: Submission deadline for pre-symposium proceedings
August 31 - September 2: IFL Symposium
December 1: Submission deadline for post-symposium proceedings
January 31, 2017: Notification of acceptance for post-symposium proceedings
March 15, 2017: Camera-ready version for post-symposium proceedings
Submission details
Prospective authors are encouraged to submit papers or extended abstracts to be published in the draft proceedings and to present them at the symposium. All contributions must be written in English. Papers must use the new ACM two-column conference format, which can be found at:
http://www.acm.org/publications/proceedings-template
For the pre-symposium proceedings we adopt a 'weak' page limit of 12 pages. For the post-symposium proceedings the page limit of 12 pages is firm.
Authors submit through EasyChair:
https://easychair.org/conferences/?conf=ifl2016
Topics
IFL welcomes submissions describing practical and theoretical work as well as submissions describing applications and tools in the context of functional programming. If you are not sure whether your work is appropriate for IFL 2016, please contact the PC chair at tom.schrijvers@cs.kuleuven.be. Topics of interest include, but are not limited to:
- language concepts
- type systems, type checking, type inferencing
- compilation techniques
- staged compilation
- run-time function specialization
- run-time code generation
- partial evaluation
- (abstract) interpretation
- metaprogramming
- generic programming
- automatic program generation
- array processing
- concurrent/parallel programming
- concurrent/parallel program execution
- embedded systems
- web applications
- (embedded) domain specific languages
- security
- novel memory management techniques
- run-time profiling, performance measurements
- debugging and tracing
- virtual/abstract machine architectures
- validation, verification of functional programs
- tools and programming techniques
- (industrial) applications
Peter Landin Prize
The Peter Landin Prize is awarded to the best paper presented at the symposium every year. The honored article is selected by the program committee based on the submissions received for the formal review process. The prize carries a cash award equivalent to 150 Euros.
Programme committee
Chair: Tom Schrijvers, KU Leuven, Belgium
- Sandrine Blazy, University of Rennes 1, France
- Laura Castro, University of A Coruña, Spain
- Jacques Garrigue, Nagoya University, Japan
- Clemens Grelck, University of Amsterdam, The Netherlands
- Zoltan Horvath, Eotvos Lorand University, Hungary
- Jan Martin Jansen, Netherlands Defence Academy, The Netherlands
- Mauro Jaskelioff, CIFASIS/Universidad Nacional de Rosario, Argentina
- Patricia Johann, Appalachian State University, USA
- Wolfram Kahl, McMaster University, Canada
- Pieter Koopman, Radboud University Nijmegen, The Netherlands
- Shin-Cheng Mu, Academia Sinica, Taiwan
- Henrik Nilsson, University of Nottingham, UK
- Nikolaos Papaspyrou, National Technical University of Athens, Greece
- Atze van der Ploeg, Chalmers University of Technology, Sweden
- Matija Pretnar, University of Ljubljana, Slovenia
- Tillmann Rendel, University of Tübingen, Germany
- Christophe Scholliers, Universiteit Gent, Belgium
- Sven-Bodo Scholz, Heriot-Watt University, UK
- Melinda Toth, Eotvos Lorand University, Hungary
- Meng Wang, University of Kent, UK
- Jeremy Yallop, University of Cambridge, UK
Venue
The 28th IFL will be held in association with the Faculty of Computer Science, KU Leuven, Belgium. Leuven is centrally located in Belgium and can be easily reached from Brussels Airport by train (~15 minutes). The venue in the Arenberg Castle park can be reached by foot, bus or taxi from the city center. See the website for more information on the venue.
Categories: Offsite Blogs

Representation of 3-D objects in non-continuous space

haskell-cafe - Sun, 05/22/2016 - 6:57pm
I've been poking at the problem that I've talked about in the following threads. https://groups.google.com/forum/#!searchin/haskell-cafe/michael$20litchard|sort:date/haskell-cafe/n0Tc29UUgoQ/iitt3z3PCwAJ https://groups.google.com/forum/#!searchin/haskell-cafe/michael$20litchard|sort:date/haskell-cafe/qD2kaZ9qpEA/jTDAp8KoCgAJ And my misguided conclusions here https://groups.google.com/forum/#!topic/haskell-cafe/PMtYhVQ5nNQ I'm trying to write a clone in Haskell of the space-system implemented in http://swmud.org. The biggest error in my thinking so far is assuming I could do without spatial extent. Nope, these objects in space will have to have spatial extent. So no octree for me. The advice and comments from the first two threads prompted me to investigate R-trees. I could only find specifics about how to describe 2-D, until I found this paper on layered R-trees. http://www.isprs.org/proceedings/XXXIII/congress/part4/1216_XXXIII-part4.pdf This looks like what I want. Here's my re-formulation of the c
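
A minimal sketch of what "3-D spatial extent" could look like in Haskell: an axis-aligned bounding box, which is the building block an R-tree (layered or otherwise) indexes on. The names are illustrative and not taken from any particular library:

-- Axis-aligned bounding box in three dimensions.
data AABB = AABB
  { minX, minY, minZ :: !Double
  , maxX, maxY, maxZ :: !Double
  } deriving (Show, Eq)

-- Two boxes overlap iff their extents overlap on every axis.
overlaps :: AABB -> AABB -> Bool
overlaps a b =
     minX a <= maxX b && minX b <= maxX a
  && minY a <= maxY b && minY b <= maxY a
  && minZ a <= maxZ b && minZ b <= maxZ a

-- Smallest box covering both arguments; internal R-tree nodes would store
-- the union of their children's boxes.
union :: AABB -> AABB -> AABB
union a b = AABB (min (minX a) (minX b)) (min (minY a) (minY b)) (min (minZ a) (minZ b))
                 (max (maxX a) (maxX b)) (max (maxY a) (maxY b)) (max (maxZ a) (maxZ b))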
Categories: Offsite Discussion

Going insane over overlapping instances

haskell-cafe - Sun, 05/22/2016 - 6:05pm
This is driving me nuts. So I have a type class HasStructParser I've defined. Details are irrelevant except that if you have HasStructParser defined, then ToJSON is also defined. So I have: instance HasStructParser s => ToJSON s where ... But now, any type that defines ToJSON any other way causes an Overlapping Instances error: the conflict is between the real ToJSON implementation and the one derived from HasStructParser, despite the fact that there is no implementation of HasStructParser for the given type. Now, I don't want to allow Overlapping Instances, because if there are *real* overlapping instances, I want that to be an error. For instance, if a structure did implement HasStructParser and some other implementation of ToJSON, I want to know. I suppose I could go: newtype JSON a = JSON a instance HasStructParser s => ToJSON (JSON s) where ... But this strikes me as being ugly: now I have to add pointless JSON constructors everywhere I want to convert to
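
A small sketch of the newtype workaround mentioned at the end; the HasStructParser method shown here is a hypothetical stand-in, since the real class isn't given in the post:

import Data.Aeson (ToJSON (..), Value, object, (.=))
import Data.String (fromString)

-- Hypothetical stand-in for the poster's class; only its shape matters here.
class HasStructParser a where
  structFields :: a -> [(String, Value)]

-- The wrapper keeps this instance from overlapping with any type's
-- hand-written ToJSON instance.
newtype JSON a = JSON a

instance HasStructParser s => ToJSON (JSON s) where
  toJSON (JSON s) = object [ fromString k .= v | (k, v) <- structFields s ]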
Categories: Offsite Discussion

Stefan Jacholke: Project Proposal

Planet Haskell - Sun, 05/22/2016 - 6:00pm
Visual functional block-based programming language for CodeWorld

Introduction

The goal of this project is to develop a functional, block-based visual programming language, similar to Scratch and other such languages, based on a subset of Haskell. The project will extend CodeWorld and will use its API. It will feature a user interface that allows the user to snap, drag and drop blocks in order to construct CodeWorld programs.

The language will be a prototype as a full language is beyond the current scope and timeframe. Future work and stretch goals are presented as well.

The project is an extension to CodeWorld, an educational web-based programming environment. A visual language is a great way to get students started with programming, and a functional language may be well suited to this kind of composition.

Outline

User interface

Development of a friendly user interface.

  • A user-friendly interface will be designed and implemented
  • Bootstrap or a similar HTML/CSS framework will be used to design the interface. The user interface logic will be implemented using JavaScript and jQuery.
  • The user interface of the project will utilize Blockly.
  • Blockly will be adapted to match the functional style and various blocks will be created in order to match the CodeWorld API.
Generator

Haskell code generation

  • Blockly applications turn blocks into code in order to execute.
  • The project will generate valid Haskell CodeWorld programs from the blocks.
  • The code generation will be done using GHCJS with an intermediate block language layer handling the visual language before generating valid CodeWorld code.
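
As a hypothetical illustration of the target (assuming the standard CodeWorld Haskell API, with drawingOf, colored, translated, solidCircle and rectangle), the generator might emit a program along these lines from a handful of connected blocks:

import CodeWorld

main :: IO ()
main = drawingOf scene
  where
    -- A red circle sitting above a blue rectangle, combined into one picture.
    scene = colored red (translated 0 2 (solidCircle 1))
            & colored blue (rectangle 4 2)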
Blocks
  • Each shape represents a type. A block (that consists of a shape) will have multiple slots or parameters into which other blocks can be inserted.
  • Allow creation of top-level definition blocks / functions that can be reused. A separate tab/panel for constructing definitions will be available in the interface.
Validation and Error messages

Validation and verification of a valid program.

  • Ensure only valid programs can be constructed. Blocks should only be able to be connected if their types match.
  • If snapping does not occur, visually tell the user why (tell them what type was expected, if possible)
  • If an error does occur it should be displayed in a friendly manner.
  • When hovering over a slot it should display what input type is expected.
  • Blocks should indicate their output type.
Polymorphic types

Some blocks will have to handle polymorphic types.

  • Blocks may have multiple slots; if one slot gets connected, the other slots might change color if they are of the same type. For example, if we have an IF (condition) (consequence) (alternative) block, then when either consequence or alternative is connected, the other slot should reflect the same type.
  • Blocks might also change color to reflect their type.
  • Minimal handling of polymorphic types will be included in order to accommodate CodeWorld’s API. Complete integration is seen as a stretch goal (if time allows).
  • User data can be constructed from a set of basic types. Constructors and destructors will allow manipulations of the data.
CodeWorld integration
  • Blocks to reflect CodeWorld functions and data types, such that CodeWorld programs can be built.
  • Most CodeWorld functions are monomorphic; simulations, however, require a polymorphic state/world type. We propose that such data types be constructed from existing primitive types using constructors and destructors, as sketched below.
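
A hypothetical sketch of this idea, with the simulation's "world" built from primitive types and manipulated only through constructor/destructor-style helper blocks (the names are illustrative):

-- The world is just a pair of Doubles; no custom ADT is needed.
type World = (Double, Double)   -- (position, velocity)

makeWorld :: Double -> Double -> World   -- constructor block
makeWorld = (,)

position, velocity :: World -> Double    -- destructor blocks
position = fst
velocity = snd

-- One simulation step expressed entirely through those helpers.
step :: Double -> World -> World
step dt w = makeWorld (position w + dt * velocity w) (velocity w)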
What might need to be excluded
  • Custom algebraic data types. Since blocks are predefined there may not be a way to extract data from a user-defined data type.
Timeline

Weekly communication will be made with the mentor to ensure the project is on track.

Before June 12 - Research into current visual block-based programming languages such as Scratch: what good ideas they utilize and what can be carried over to this project. Research into some of the current problems that a functional visual programming language faces.

June 12th - Deliverable - Mock up design of the user interface. Overview of what blocks might be supported.

June 13th - June 30th - Design of the functional block-based language. Set up CodeWorld and Blockly.

July 1st - July 30th - Set up a basic interface, incorporate Blockly, and interface with CodeWorld. Discover and overcome unexpected challenges.

July 31st - Deliverable - Basic user interface set up. It should be possible to build a basic program using the user interface.

August 1st - August 15th - Implement CodeWorld functions and types. Complete Blockly code generation.

August 16th - Deliverable - It should be possible to build CodeWorld example programs.

August 16th - September 2nd - Polish user interface, improve error messages, fix bugs. Some leeway for the unexpected.

I have limited availability from 4 - 7 September due to a conference. It does not seem to interfere with any important dates.

I will communicate with my mentor if there are any other difficulties.

Stretch Goals

If time allows and a good solution presents itself, the following may be implemented (but they are not part of the core project); otherwise, they are presented as good ideas for future versions:

  • Pattern Matching
  • Support for ADT’s
  • Support for editing list comprehensions, live previews of list comprehensions
  • Full recursion support
  • First class functions
About me

Name: Stefan Jacholke
University: NWU Potchefstroom
Course: M.Eng Computer
Degree: B.Eng Computer and Electronic
Email: stefanjacholke@gmail.com
Github: https://github.com/stefan-j

I’m a first-year computer engineering graduate student at NWU in South Africa, studying Network Optimization and Planning.

My current interests include Programming Languages, Algorithms, and Data structures.

While I have only recently started developing in Haskell, I have used it for:

  • Developing my own simple programming language
  • Developed a simple math expression simplifier
  • Am currently developing a mobile application for coffee purchases; the backend for the application is written in Haskell (it handles credit card purchases, queries to the POS system, and the interface between the app and the database)
  • Using Haskell for algorithmic problems (HackerRank, Codejam)
  • Used FFI binding to interface with CPLEX (commercial linear programming solver)
  • Using it to develop my framework for Metro Ethernet optimization (Master’s project)

I have experience in web development, mostly through some of my undergraduate courses. In particular, I have:

  • Developed a Real Estate web site for advertisements using ASP.NET, MySQL, and ReactJS
  • Developed a file management site for uploading and downloading files (similar to Dropbox, though simpler) using PHP, Bootstrap, jQuery and AJAX

I have various experience with other technologies (though unrelated to the project at hand), and a few other projects in different languages.

I have some small open source contributions listed on my Github profile.

More information can be given on request.

Sources

This project is based on Chris Smith’s proposal for a block-based UI for CodeWorld.

Categories: Offsite Blogs

Dan Burton: Stackage LTS and GHC 8.0

Planet Haskell - Sun, 05/22/2016 - 5:19pm
The release of GHC 8.0.1 has recently been announced. Hooray! People are already asking about when LTS Haskell will include the new GHC. While I’m also excited for this to happen as soon as possible, it’s worth taking a look … Continue reading →
Categories: Offsite Blogs

ANNOUNCE: testbench-0.1.0.0

haskell-cafe - Sun, 05/22/2016 - 3:23pm
I've just released a new library onto Hackage that aims to help you write comparison-oriented benchmarks by: a) reducing the duplication found when using criterion directly; b) letting you test your benchmarked values/functions to ensure that they have the same result/satisfy a given predicate; and c) providing more comparison-oriented output. I've written more about it here: https://ivanmiljenovic.wordpress.com/2016/05/23/test-your-benchmarks/ Or you could go straight to the Hackage page here: http://hackage.haskell.org/package/testbench
Categories: Offsite Discussion

Ivan Lazar Miljenovic: Test your benchmarks!

Planet Haskell - Sun, 05/22/2016 - 8:21am

There are lies, damn lies and benchmarks.
Old Jungle saying

testbench is a new library designed to make it easier to write comparison benchmarks, ensure that they return the correct value and thus help prevent unintentional bias in benchmarks.

Motivation

About a year ago, I was working on some Haskell code that I wanted to compare to existing implementations. In Haskell we of course have the wonderful criterion library for writing benchmarks, and whilst I’ve found it really helpful for telling whether a particular function has been improving in performance as I work on it, I felt that it was a bit clunky for directly comparing implementations against each other (there used to be a bcompare function, but it hasn’t existed since version 1.0.0.0, which came out in August 2014).

When I tried looking at how others have approached this problem, I found that they did so by just directly using the bench and bgroup functions. From my point of view, there are two problems with this approach:

  1. There is a lot of duplication required with this: you would typically have something along the lines of:

         [ bench "f1" $ nf f1 a
         , bench "f2" $ nf f2 a
         ...
         ]

    Because of this duplication, it is too easy to have benchmarks that nominally compare two (or more) functions/values but accidentally end up comparing apples to oranges (e.g. using whnf instead of nf; a sketch of this slip follows this list).

  2. The output generated by criterion – especially as of version 1.0.0.0 – is rather verbose and tends not to lend itself well to directly comparing results across multiple benchmarks. I personally find myself starting to get swamped looking at the terminal output if there are more than a few benchmarks, and the HTML report is even worse. As I said above, it’s great when I’m directly looking at just how one function compares as I tweak it, but not when I want to compare multiple functions.
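
To make the duplication concrete, here is a small sketch of the kind of direct criterion comparison described in point 1; f1, f2 and the input a are placeholders, and the second entry deliberately shows the whnf/nf slip:

import Criterion.Main (bench, bgroup, defaultMain, nf, whnf)

-- Placeholder functions and input, purely for illustration.
f1, f2 :: Int -> [Int]
f1 n = [1 .. n]
f2 n = replicate n 1

a :: Int
a = 1000

main :: IO ()
main = defaultMain
  [ bgroup "compare"
      [ bench "f1" $ nf   f1 a  -- forces the full result
      , bench "f2" $ whnf f2 a  -- oops: only forces the outermost constructor
      ]
  ]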

Whilst I kept looking at existing comparison benchmarks, I even came across an example where a comparison ended up nominally showing that f1 was faster than f2… except that the result of f1 was a value with an O(1) implementation of rnf, whereas the result of f2 had an O(n) definition. I don’t know if this was intentional (I think it probably wasn’t), and even with this rectified f1 was still faster… but the difference in runtimes – whilst minor in comparison to the performance difference between the two functions – is non-negligible.

This demonstrated to me the desirability not only of having a wrapper around criterion to reduce the verbosity of comparison benchmarks, but also of being able to produce unit tests to ensure the comparison criteria are satisfied.

It’s taken me longer than I wished to produce a syntax that I was both happy with and would actually work (with lots of fighting against GHC in the form of “Why won’t you accept this? Oh, wait, now I get it; that makes sense… but can’t you accept it anyway? Pretty please?”), but I’ve now finally gotten it to a usable form and am hence releasing it.

testbench is now available on Hackage with the source on GitHub.

Example

As extremely simple and contrived examples, consider the following:

main :: IO ()
main = testBench $ do
  -- Monomorphic comparisons
  compareFunc "List length"
              (\n -> length (replicate n ()) == n)
              (testWith (@? "Not as long as specified") <> benchNormalForm)
              (mapM_ (\n -> comp ("len == " ++ show n) n) [1..5])

  -- Polymorphic comparisons.
  --
  -- Currently it isn't possible to use a Proxy as the argument to the
  -- function, so we're using 'undefined' to specify the type.
  compareFuncConstraint (Proxy :: Proxy (CUnion Eq Num))
                        "Number type equality"
                        (join (==) . (0`asTypeOf`))
                        (baseline "Integer" (undefined :: Integer) <> benchNormalForm)
                        $ do comp "Int"      (undefined :: Int)
                             comp "Rational" (undefined :: Rational)
                             comp "Float"    (undefined :: Float)
                             comp "Double"   (undefined :: Double)

When this is run, the result on the console is:

Cases: 9  Tried: 9  Errors: 0  Failures: 0

                         Mean      MeanLB    MeanUB    Stddev    StddevLB  StddevUB  OutlierVariance
List length
  len == 1               22.15 ns  21.86 ns  22.88 ns  1.505 ns  742.2 ps  2.826 ns  83%
  len == 2               22.64 ns  22.49 ns  22.87 ns  602.0 ps  449.5 ps  825.7 ps  43%
  len == 3               23.39 ns  23.16 ns  23.78 ns  1.057 ns  632.6 ps  1.553 ns  68%
  len == 4               23.70 ns  23.51 ns  23.95 ns  773.3 ps  567.9 ps  1.050 ns  53%
  len == 5               24.14 ns  23.96 ns  24.71 ns  962.4 ps  307.5 ps  1.886 ns  63%
Number type equality
  Integer                12.59 ns  12.48 ns  12.80 ns  538.0 ps  312.4 ps  944.2 ps  67%
  Int                    12.79 ns  12.69 ns  12.98 ns  463.6 ps  320.0 ps  665.2 ps  59%
  Rational               12.77 ns  12.67 ns  12.93 ns  395.1 ps  290.0 ps  535.9 ps  51%
  Float                  13.13 ns  12.88 ns  13.42 ns  869.7 ps  667.3 ps  1.212 ns  83%
  Double                 12.74 ns  12.57 ns  13.02 ns  704.6 ps  456.5 ps  1.047 ns  78%

You can see on the top line we’ve had nine tests (run using HUnit):

  • From the first group we’ve specified that all five values must return True.
  • From the second group, we’ve specified that all inputs must return the same value as for the Integer case.

Since all the tests passed, the benchmarks are run. The output for these is a tabular format to make it easier to do vertical comparisons (though in this case the variances are all high so we should take them with a grain of salt).

Caveats

Whilst I’m quite pleased with the API for defining the actual tests/benchmarks (subject to what GHC will let me write), there’s still scope for more functionality (e.g. support for IO-based benchmarks).

However, the default output (as seen above) isn’t configurable. It’s possible to get the individual tests and benchmarks out and feed them explicitly to HUnit and criterion respectively, but if you’re after this particular output then you have to wait until all the benchmarks are complete before the results are printed. There is also no support for saving results to file (either as a CSV of all the results or an HTML report), for controlling how the benchmarks are run (minimum time spent on each benchmark, etc.), or for any other option currently offered by criterion.

If there is enough interest I can look at adding these in; but this satisfies my itch for now whilst getting this library out there for people to start trying out.


Filed under: Haskell
Categories: Offsite Blogs