News aggregator

RV 2016, Deadline for abstract on May 20 (AoE)

General haskell list - Thu, 05/19/2016 - 10:41am
Following several requests, the deadlines have been extended as follows:
- Abstract deadline: Friday May 20 (AoE).
- Paper and tutorial deadline: Friday May 27 (AoE).

RV 2016, 16th International Conference on Runtime Verification
September 23-30, Madrid, Spain
http://rv2016.imag.fr

Scope: Runtime verification is concerned with monitoring and analysis of software and hardware system executions. Runtime verification techniques are crucial for system correctness, reliability, and robustness; they are significantly more powerful and versatile than conventional testing, and more practical than exhaustive formal verification. Runtime verification can be used prior to deployment (for testing, verification, and debugging purposes) and after deployment (for ensuring reliability, safety, and security, and for providing fault containment, recovery, and online system repair). Topics of interest to the conference include: - specification
Categories: Incoming News

Brent Yorgey: How to print things

Planet Haskell - Thu, 05/19/2016 - 10:33am

I have finally finished up writing my guide on how to print things, based on this blog post from January on my other (math) blog. It specifically enumerates the pros and cons of various methods for printing and reading loose-leaf documents (the sort of thing that academics do quite a bit, when they print out a journal article to read).

The main motivation for writing the page is to explain the (to my knowledge, novel) Möbius method for printing and reading double-sided, like this:

I actually now use this in practice. As compared to the usual method of printing double-sided, this has several advantages:

  • You always perform the exact same action after finishing a page; there is no need to remember whether you are on an even or an odd page.
  • Consecutive pages are always on different sheets of paper, so it is easy to refer to several nearby pages at once. There is never any need to keep flipping a sheet back and forth to refer to the previous page (as there is with traditional double-sided printing).

There are new things to say about traditional double-sided printing as well. I now know of several different algorithms for reading double-sided documents, each with its pros and cons; previously I had not even considered that there might be more than one way to do it.


Categories: Offsite Blogs

Proposal reminder: Add functions to get consecutive elements to Data.List

libraries list - Thu, 05/19/2016 - 6:27am
The discussion period for this proposal ends soon (31 May). So far I count 1 for and 2 against the proposal. Joachim Breitner made a good enumeration of some advantages of adding these to base. Here is an enumeration of pros:

* Availability in Data.List gives this pattern a common name.
* A common name for this makes code easier to read and decreases the risk of getting the definition wrong.
* The argument won't have to be repeated, which makes it easier to chain the functions.
* List-fusion potential.

Tobias Florek pointed out that `zip <*> tail` can be used to define this inline without the need for repeating the argument, and made a reference to the Fairbairn threshold. This is elegant, but I am afraid that people might consider it obscure code golfing if used.

Cheers, Johan Holmquist
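
For context, the pattern being proposed looks roughly like this (a sketch only; the name below is a placeholder, since the exact API under discussion is not quoted in this excerpt):

-- Pair every element with its successor, e.g.
-- consecutives [1,2,3,4] == [(1,2),(2,3),(3,4)]
consecutives :: [a] -> [(a, a)]
consecutives xs = zip xs (drop 1 xs)

-- The point-free spelling mentioned above, using the function Applicative:
consecutives' :: [a] -> [(a, a)]
consecutives' = zip <*> tail
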
Categories: Offsite Discussion

What's your favorite flavor of Iterator type

haskell-cafe - Wed, 05/18/2016 - 10:53pm
Hello,

We know about Foldable, but sometimes you just want more functionality, like: give me the rest of the string! Or a function to build pieces back together. I've been experimenting a bit and have come up with 6 flavors of Iterators that do the same thing. Of course they all work for containers like ByteString and Text.

1) Haskell98 version (I like)

data Iterator98 list ele = Iterator98
  { next98   :: Maybe (ele, Iterator98 list ele)
  , ...
  , rest98   :: list
  , concat98 :: [list] -> list
  }

-- How we can create an Iterator98
listIter98 :: [a] -> Iterator98 [a] a

-- How the sum type looks
sum98 :: (Num n) => Iterator98 listN n -> n

Performance: *3

I'll also usually give the type of the constructor and sum functions. I also benchmarked the sum functions for [] and compared them to the best sum function I could come up with (which is significantly faster than the sum in Prelude!!! because it's strict. Whoever came up with the idea of making it n
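
To make this first flavour concrete, here is a self-contained sketch that fills in plausible definitions (my reconstruction, not the poster's code: the message above elides some fields, and the strictness in sum98 is my own choice):

{-# LANGUAGE BangPatterns #-}

data Iterator98 list ele = Iterator98
  { next98   :: Maybe (ele, Iterator98 list ele)  -- current element plus the iterator for the rest
  , rest98   :: list                              -- everything not yet consumed, as a whole container
  , concat98 :: [list] -> list                    -- glue pieces of the container back together
  }

-- Build an Iterator98 over an ordinary list.
listIter98 :: [a] -> Iterator98 [a] a
listIter98 xs = Iterator98
  { next98   = case xs of
                 []     -> Nothing
                 (y:ys) -> Just (y, listIter98 ys)
  , rest98   = xs
  , concat98 = concat
  }

-- Strictly sum every element the iterator yields.
sum98 :: Num n => Iterator98 list n -> n
sum98 = go 0
  where
    go !acc it = case next98 it of
                   Nothing       -> acc
                   Just (x, it') -> go (acc + x) it'
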
Categories: Offsite Discussion

CFP (Approaching Deadline): 19th ACM/IEEE MSWiM 2016

General haskell list - Wed, 05/18/2016 - 4:00pm
Call-For-Papers: 19th ACM*/IEEE* MSWiM 2016
Malta, Nov 13-17, 2016
http://www.mswimconf.com/2016

IMPORTANT: Submission deadline: May 30th, 2016

*Pending Upon Approval

ACM/IEEE* MSWiM 2016 is the 19th Annual International Conference on Modeling, Analysis and Simulation of Wireless and Mobile Systems. MSWiM is an international forum dedicated to in-depth discussion of Wireless and Mobile systems, networks, algorithms and applications, with an emphasis on rigorous performance evaluation. MSWiM is a highly selective conference with a long track record of publishing innovative ideas and breakthroughs. MSWiM 2016 will be held in Malta, Nov 13-17, 2016.

Authors are encouraged to submit full papers presenting new research related to the theory or practice of all aspects of modeling, analysis and simulation of mobile and wireless systems. Submitted papers must
Categories: Incoming News

SPLASH'16: 1st Call for Contributions to Collocated Events

General haskell list - Wed, 05/18/2016 - 11:57am
ACM Conference on Systems, Programming, Languages, and Applications: Software for Humanity (SPLASH'16)
Amsterdam, The Netherlands
Sun 30 October - Fri 4 November, 2016
http://2016.splashcon.org
https://twitter.com/splashcon
https://www.facebook.com/SPLASHCon/
Sponsored by ACM SIGPLAN

Combined Call for Contributions to SPLASH tracks, collocated conferences, symposia and workshops:
- SPLASH-I, Doctoral Symposium, Student Research Competition, Programming Languages Mentoring Workshop, Posters
- Dynamic Languages Symposium (DLS)
- Generative Programming: Concepts & Experiences (GPCE)
- Software Language Engineering (SLE)
- Scala Symposium
- Workshops: AGERE, DSLDI, DSM, FOSD, ITSLE, LWC@SLE, META, MOBILE!, NOOL, PLATEAU, Parsing@SLE, REBLS, RUMPLE, SA-MDE, SEPS, VMIL, WODA

The ACM SIGPLAN conference on Systems, Programming, Languages and Applications: Software for Humanity (SPLASH) embraces all aspects
Categories: Incoming News

Lee Pike: Max: Phase 1 Report

Planet Haskell - Wed, 05/18/2016 - 10:24am

I sent the following R&D report to my colleagues, but a few other folks outside Galois have expressed interest in the project, so I’m sharing it more broadly.

 

Subject: Max: Phase 1 Report

As some of you know, about nine months ago, I started a skunk-works R&D project with Brooke. Autonomous systems are all the rage these days, so we decided to try to create one. First, I have to be honest and say that although I participated in the initial project kickoff, I mostly played a supporting role after that. Brooke did all the major software and hardware development. (If you’ve worked with me on a project, this should sound pretty familiar.) Once Brooke started development, she really threw herself into it. She seemed to be working on things day and night, and it even looked a bit painful at times. She sensed she was getting close to an alpha release a few days ago, and after a final four-hour sprint, she made the release at 2:30am on Mother’s Day! We are officially out of stealth mode.

We call our project “Machine Awareness X”, or Max for short. The current system is capable of basic knowledge discovery and processing. Now, don’t get too excited; we expect at least a few years before it’s able to do something interesting with the knowledge acquired. I won’t go into the technical details, but the programming model is very biological—think “Game of Life” on steroids. At this point, we’ll have to continue to provide guidance and some rules, but it’s basically self-programming.

Following a “Johnny Ive” approach to design, Max has basically one notification method. It’s a fairly piercing audio signal used whenever his power supply is running low or there’s a hardware problem. (Frankly, sometimes it seems to just go off for no reason at all.) We designed it to be loud enough to hear across the house, but I wish it had a volume control. Next time! Otherwise, the package is quite attractive, in my opinion, even cute. Unfortunately, at 7lbs. 8oz., the hardware is heavier than even a decade-old laptop, and we expect new versions to require an even larger form factor. Fortunately, we designed the system to be self-propelling, although it’ll take a few years before that hardware is developed (the software isn’t ready for it anyways).

There’s still quite a bit of work to do. Our back-of-the-envelope estimate is that we’ll have to spend just short of two decades caring for Max before he’s fully autonomous. Even more disappointingly, we’re estimating having to spend up to a quarter million (in today’s dollars) in upkeep and maintenance! (Sadly, while others are interested in playing with the system intermittently, nobody seems that interested in joining us as early investors.) Despite all the training we’re planning to provide, the system seems too complicated to guarantee certain behaviors. For example, while more general than an autonomous car, it may take more than 15 years of training before his software is capable of piloting a conventional automobile.

I’m guessing some of you are wondering about commercialization opportunities. The good news: we expect Max to be useful to society (we haven’t found a killer app yet, though) and to generate quite a bit of revenue over its lifetime. The bad news: we don’t expect it to start producing reliable revenue for more than 20 years. What’s more, it has a lot of upkeep expenses that will only grow with time. This might sound like science fiction, but we imagine he might even replicate himself in the distant future, and will likely pour his revenues into his own replicants. In short, we don’t expect to make a dime from the project.

More seriously, we had a kid; mom and baby are doing fine. See you soon.

Regards,
Papa Pike


Categories: Offsite Blogs

Call for contribution, PLRR 2016 (Parametricity, Logical Relations & Realizability), CSL affiliated workshop

General haskell list - Wed, 05/18/2016 - 9:59am
CALL FOR CONTRIBUTIONS
Workshop PLRR 2016: Parametricity, Logical Relations & Realizability
September 2, Marseille, France
http://lama.univ-savoie.fr/plrr2016
Satellite workshop of CSL 2016: http://csl16.lif.univ-mrs.fr/

BACKGROUND
The workshop PLRR 2016 aims at presenting recent work on parametricity, logical relations and realizability, and at encouraging interaction between those communities. The areas of interest include, but are not limited to:
* Kleene's intuitionistic realizability,
* Krivine's classical realizability,
* other extensions of the Curry-Howard correspondence,
* links between forcing and the Curry-Howard correspondence,
* parametricity,
* logical relations,
* categorical models,
* applications to programming languages.

INVITED SPEAKERS
Neil Ghani (University of Strathclyde)
Nick Benton (Microsoft Research, Cambridge)

CONTRIBUTED TALKS
We so
Categories: Incoming News

Yesod Web Framework: Are unused import warnings harmful?

Planet Haskell - Wed, 05/18/2016 - 12:15am

Which of the following snippets of code is better?

#if MIN_VERSION_base(4,8,0)
import Control.Applicative ((<*))
#else
import Control.Applicative ((<*), pure)
#endif

Versus:

import Control.Applicative ((<*), pure)

If you work on a project that supports multiple GHC versions, enable extra warnings via -Wall, and actually like to get your code to compile without any warnings, you'll probably say that the former is better. I'm going to claim, however, that any sane human being knows intuitively that the latter is the better version of the code, for multiple reasons:

  • It doesn't require a language extension to be enabled
  • It's much shorter without losing any useful information to the reader
  • It's more robust to future changes: if you need to add an import, you don't have to remember to update two places

However, if you look through my code bases, and the code bases of many other open source Haskell authors, you'll find the former examples regularly. I'm beginning to come to the conclusion that we've been attacking this problem the wrong way, and what we should be doing is:

  • Turning on -Wall in our code
  • Either modifying -Wall in GHC to not warn about unused imports, or explicitly disabling unused-import warnings via -fno-warn-unused-imports
  • As many of us already do, religiously using Travis CI to check multiple GHC versions to avoid accidental regressions
  • In our Travis builds, starting to turn on -Werror

Maintaining complex CPP in our imports is sometimes a necessary evil, such as when APIs change. But when we are simply doing it to work around changes in what Prelude or other modules export, it's an unnecessary evil. This is similar to the change to GHC a few years back which allowed hiding (isNotExported) to not generate a warning: it made it much easier to deal with the now-no-longer-present Prelude.catch function.
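
As a minimal sketch of the warning-suppression route suggested above (my illustration, not code from the post; the module name is hypothetical), the CPP-free import can be kept as-is while silencing only the unused-import warning:

{-# OPTIONS_GHC -fno-warn-unused-imports #-}
module Example where

-- On base >= 4.8 the explicit import of 'pure' is redundant (it is
-- re-exported from the Prelude), but the pragma above stops -Wall from
-- turning that into a warning, so no CPP is needed.
import Control.Applicative ((<*), pure)

Everything else that -Wall (and -Werror on CI) checks is still reported as usual.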

While it's true that removing unused imports is a nice thing to do to our codebases from time to time, their presence does not actually indicate any potential issues with our code. My concern with the presence of these warnings is that they will lead to one of two situations:

  • We simply accept that our libraries generate warnings when compiled, which ends up hiding actionable warnings via a terrible signal-to-noise ratio
  • In an effort to clean up all warnings, we end up creating hideous messes like those above, or breaking backwards compatibility with old versions of dependencies

I haven't actually started making these modifications to my libraries, as I'm not yet fully convinced that this is a good idea. There are also other points in this design space, like explicitly marking some imports as redundant, though that would require some deeper changes to GHC and wouldn't be usable until we drop support for all current GHC versions.

Categories: Offsite Blogs

Is there a way to query the availability of an extension? Or could we?

haskell-cafe - Tue, 05/17/2016 - 10:19pm
We have __GLASGOW_HASKELL__ to tell us what GHC version we're running (if we're running GHC), and Cabal sets up MIN_VERSION_blah macros and (when applicable) the __HADDOCK_VERSION__ macro. But what if we're running some other compiler? It seems rather painful to have to write code that enables or disables various extensions based on what version N of compiler C happens to support. Not to mention that this is even a pain when just dealing with GHC, since it involves digging through release notes or waiting to see how Travis throws up. Is there some better way? If not, could we add one? __Have_ScopedTypeVariables__ could tell us if the ScopedTypeVariables extension is available. Then instead of saying "We need ScopedTypeVariables" when we can (painfully) do without, we can just use it precisely when we have it.
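
To make the suggestion concrete, here is a minimal sketch of how such a macro might be used, assuming the hypothetical __Have_ScopedTypeVariables__ name from the message (it does not exist today; one would currently approximate this with a __GLASGOW_HASKELL__ version test):

{-# LANGUAGE CPP #-}
#ifdef __Have_ScopedTypeVariables__
{-# LANGUAGE ScopedTypeVariables #-}
#endif
module Example where  -- hypothetical module

-- With the extension, the inner signature can mention the outer 'a';
-- without it, we fall back to a version that needs no inner signature.
#ifdef __Have_ScopedTypeVariables__
doubledLength :: forall a. [a] -> Int
doubledLength xs = go xs * 2
  where
    go :: [a] -> Int
    go = length
#else
doubledLength :: [a] -> Int
doubledLength xs = length xs * 2
#endif
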
Categories: Offsite Discussion

Proposal: Add a catamorphism on Trees

libraries list - Tue, 05/17/2016 - 10:03pm
Daniel Wagner would like to add the following straightforward function to Data.Tree. I think this is a grand idea.

foldTree :: (a -> [b] -> b) -> Tree a -> b
foldTree f = go
  where go (Node x ts) = f x (map go ts)
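
For illustration (my examples, not part of the proposal), here are a couple of small folds one could write with it; the proposed definition is repeated so the snippet stands alone:

import Data.Tree (Tree(..))

foldTree :: (a -> [b] -> b) -> Tree a -> b
foldTree f = go where go (Node x ts) = f x (map go ts)

-- Number of nodes in the tree.
size :: Tree a -> Int
size = foldTree (\_ ts -> 1 + sum ts)

-- Height of the tree (a single node has height 1).
depth :: Tree a -> Int
depth = foldTree (\_ ts -> 1 + maximum (0 : ts))
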
Categories: Offsite Discussion

Brent Yorgey: In praise of Beeminder

Planet Haskell - Tue, 05/17/2016 - 9:13pm

In January 2013, I wrote a post explaining my use of Beeminder, after using it for six months. Well, I’ve now been using it continuously for almost four years! It has become such an integral part of my life and workflow that I literally don’t know what I would do if it went away. So I decided it was high time to write another blog post about Beeminder. This time, instead of enumerating things I am currently using it for, I will focus on things I have accomplished with the help of Beeminder. There is little doubt in my mind that I am much awesomer today than I would have been without Beeminder.

First, what is Beeminder? Here’s what I wrote three and a half years ago, which I think is still a good description:

The basic idea is that it helps you keep track of progress on any quantifiable goals, and gives you short-term incentive to stay on track: if you don’t, Beeminder takes your money. But it’s not just about the fear of losing money. Shiny graphs tracking your progress coupled with helpfully concrete short-term goals (“today you need to write 1.3 pages of that paper”) make for excellent positive motivation, too.

The key property that makes Beeminder work so well for me is that it makes long-term goals into short-term ones. I am a terrible procrastinator—due to hyperbolic discounting I can be counted on to pretty much ignore anything with long-term rewards or consequences. A vague sense that I ought to take better care of my bike is not enough to compel me to action in the present; but “inflate your tires and grease your chain before midnight or else pay $5” is.

So, what have I accomplished over the past three years?

  • First, the big one: I finished my PhD! A PhD dissertation is much too big to put off until the last minute. Writing my thesis proposal, researching and writing my dissertation itself, and making slides for my defense were all largely driven by Beeminder goals. I am honestly not sure if I would have been able to finish otherwise.
  • I landed two jobs: first, a one-year position at Williams College, and now a tenure-track position at Hendrix College. Preparing application materials and applying for academic jobs is not much fun, and it was a really tough slog putting everything together, especially while I was teaching at Williams last year. Having a Beeminder goal for spending time on my application materials was absolutely critical.
  • I finished watching every single Catsters video and writing up a topologically sorted guide to the series.
  • Since March 2015, when I started cranking up my Beeminder goal for writing blog posts again, I have written over 80 posts on my two blogs. I also wrote more than 40 posts in late 2012-early 2013 using another goal (the gap from 2013-2015 was when I was writing my dissertation instead of blogging!).
  • Over the past three years, I have spent about 1 hour per week (typically spread over 3 or 4 days) learning biblical Hebrew. It adds up to almost 150 hours of Hebrew study—which doesn’t sound like a whole lot, but almost every minute of it was quality, focused study time. And since it has been so spread out, the material is quite firmly embedded in my long-term memory. I recently finished working through the entire introductory textbook I was using, while doing every single exercise in the associated workbook. I am still far from being an expert, but I can actually read simple things now.
  • Over the past 6 months I lost more than 15 pounds.
  • Since September I have been swimming two mornings a week: when I started, I could barely do two laps before feeling like I was going to be sick; now, I can swim 500m in under 9 minutes (just under double world record pace =).

There are lots of other things I use Beeminder for, but these are the accomplishments I am proudest of. If you want to do awesome things but can never quite seem to find the time or motivation to do them, give it a try!


Categories: Offsite Blogs

ANN: new #haskell-atom channel, Atom setup doc

haskell-cafe - Mon, 05/16/2016 - 10:26pm
Hi all, Recently I helped a newcomer set up Haskell and Atom (the text editor/IDE - not the embedded systems DSL), and also for the first time succeeded in getting a "modern IDE experience" working with my own projects. I've saved my notes so far - I hope you'll also find them useful: https://github.com/simonmichael/haskell-atom-setup In the process I found some issues, looked for help in many places, and wished the #haskell-atom IRC channel existed. So I've started it: #haskell-atom on Freenode I'm an Emacs man, but I try all the available Haskell IDEs periodically. Atom is the first one where I've succeeded in getting inline error reporting working, and it's the only one I could recommend to a new programmer or a mainstream IDE lover right now. So I think this channel is worth having, and will grow. All welcome! Best, -Simon
Categories: Offsite Discussion

representing institutions in haskell

haskell-cafe - Mon, 05/16/2016 - 8:13pm
Has anyone here taken an interest in representing institutions [1] in Haskell? Institutions originated in work on CLEAR [2], which was the first algebraic specification language with rigorous semantics. Institutions take some time to explain, or unravel if you follow the reference, but they come with a convenient slogan: "Truth is invariant under change of notation." The stretch goal of institutions is transforming logics while preserving satisfaction. They are conceived in terms of categorical abstractions, so it's natural to represent them in categories. I hope some folks here find institutions a useful domain to apply their work on categories. I've attached two samples in Haskell that use some standard libraries: Arrow, Monad, Category and Dual. The first sample, ri-0, outlines the essentials: the category of signatures, constructions in the categories, signature morphisms, sentences, models, and the satisfaction condition. Using plain old Nat reduces the complexity of the exposition. I provide a
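
For readers wondering what the categorical encoding might look like, here is a tiny sketch of my own (it is not the poster's attached ri-0 sample): signature morphisms modelled as plain functions form a Category in the Control.Category sense, which is the kind of structure the satisfaction condition is stated over.

import Prelude hiding (id, (.))
import Control.Category

-- Signature morphisms as plain functions between (types of) signature
-- symbols; this is just the function category wearing an
-- institution-flavoured name.
newtype SigMor a b = SigMor { applySig :: a -> b }

instance Category SigMor where
  id = SigMor id
  SigMor g . SigMor f = SigMor (g . f)
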
Categories: Offsite Discussion

All combinations in order.

haskell-cafe - Mon, 05/16/2016 - 6:48pm
Hmm, I tried to post this via Google Groups but it was rejected, so I'm trying to mail the list instead. Sorry for any duplicates.

Hi list!

How do I transform a list of streams of elements into a stream of lists of elements, such that all combinations of exactly one element from each input stream are present? I.e. something like this:

type Stream a = [a]

tr :: [Stream e] -> Stream [e]
tr = sequence

But I want the output lists ordered so that the ones built from early elements come first. The above starts with the first elements from each stream but does not treat the streams fairly: all the elements of the last stream are used before the next element of the second-to-last stream is used, i.e.

> tr [[1..3], [1..2], [1..3]]
[[1,1,1],[1,1,2],[1,1,3],[1,2,1].....

I don't want the third element of one stream before I have seen all the combinations of the first two elements of each stream, and so on. In this case I want something like this:

[[1,1,1], [1,1,2], [1,2,1], [2,1,1], [1,2,2], [2,1,2], [2,2,1], [2,2,2
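
One way to get the requested order for finite inputs (a sketch only, under the assumption that the streams are finite lists; the name is a placeholder) is to enumerate index combinations and sort them by the largest index used, then by the total of the indices:

import Data.List (sortOn)

-- All combinations of one element per list, sorted so that every
-- combination drawn from the first k elements of each list appears
-- before any combination that reaches further into some list.
fairCombinations :: [[a]] -> [[a]]
fairCombinations xss = map pick (sortOn key (sequence (map indices xss)))
  where
    indices xs = [0 .. length xs - 1]
    key ixs    = (maximum (0 : ixs), sum ixs, ixs)
    pick ixs   = zipWith (!!) xss ixs

For the example above, fairCombinations [[1..3], [1..2], [1..3]] begins with exactly the eight lists requested; unlike sequence, though, it only works when all the input lists are finite.
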
Categories: Offsite Discussion

[Final CFP] Haskell 2016

General haskell list - Mon, 05/16/2016 - 6:03pm
ACM SIGPLAN CALL FOR SUBMISSIONS
Haskell Symposium 2016
Nara, Japan, 22-23 September 2016, directly after ICFP
http://www.haskell.org/haskell-symposium/2016

The ACM SIGPLAN Haskell Symposium 2016 will be co-located with the International Conference on Functional Programming (ICFP 2016) in Nara, Japan. The Haskell Symposium aims to present original research on Haskell, discuss practical experience and future development of the language, and to promote other forms of denotative programming.

Topics of interest include:
* Language Design, with a focus on possible extensions and modifications of Haskell as well as critical discussions of the status quo;
* Theory, such as formal semantics of the present language or future extensions, type systems, effects, metatheory, and
Categories: Incoming News

[Final CFP] Haskell 2016

haskell-cafe - Mon, 05/16/2016 - 6:03pm
ACM SIGPLAN CALL FOR SUBMISSIONS
Haskell Symposium 2016
Nara, Japan, 22-23 September 2016, directly after ICFP
http://www.haskell.org/haskell-symposium/2016

The ACM SIGPLAN Haskell Symposium 2016 will be co-located with the International Conference on Functional Programming (ICFP 2016) in Nara, Japan. The Haskell Symposium aims to present original research on Haskell, discuss practical experience and future development of the language, and to promote other forms of denotative programming.

Topics of interest include:
* Language Design, with a focus on possible extensions and modifications of Haskell as well as critical discussions of the status quo;
* Theory, such as formal semantics of the present language or future extensions, type systems, effects, metatheory, and
Categories: Offsite Discussion

Magnus Therning: CMake, ExternalData, and custom fetch script

Planet Haskell - Mon, 05/16/2016 - 6:00pm

I failed to find a concrete example on how to use the CMake module ExternalData with a custom fetch script. Since I finally manage to work out how to use it I thought I’d try to help out the next person who needs to go down this route.

Why ExternalData?

I thought I’d start with a short justification of why I was looking at the module at all.

At work I work with a product that processes images and video. When writing tests we often need some rather large files (from MiB to GiB) as input. The two obvious options are:

  1. Check the files into our Git repo, or
  2. Put them on shared storage

Neither of these are very appealing. The former just doesn’t feel quite right, these are large binary files that rarely, if ever, change, why place them under version control at all? And if they do change the Git repo is likely to balloon in size and impact cloning times negatively. The latter makes it difficult to run our tests on a machine that isn’t on the office network and any changes to the files will break older tests, unless we always only add files, never modify any in place. On the other hand, the former guarantees that the files needed for testing are always available and it is possible to modify the files without breaking older tests. The pro of the latter is that we only download the files needed for the current tests.

ExternalData is one option to address this. On some level it feels like it offers a combination of both options above:

  • It’s possible to use the shared storage
  • When the shared storage isn’t available it’s possible to fall back on downloading the files via other means
  • The layout of the storage is such that modifying in place is much less likely
  • Only the files needed for the current tests will be downloaded when building off-site
The object store

We do our building in docker images that do have our shared storage mapped in, so I’d like them to take advantage of that. At the same time I want the builds performed off-site to download the files. To get this behaviour I defined two object stores:

set(ExternalData_OBJECT_STORES
  ${CMAKE_BINARY_DIR}/ExternalData/Objects
  /mnt/shared/over/nfs/Objects
)

The module will search each of these for the required files and download only if they aren’t found. Downloaded files will be put into the first of the stores. Oh, and it’s very important that the first store is given with an absolute path!

The store on the shared storage looks something like this:

/mnt/share/over/nfs/Objects
└── MD5
    ├── 94ed17f9b6c74a732fba7b243ab945ff
    └── a2036177b190fbee6e9e038b718f1c20

I can then drop a file MyInput.avi.md5 in my source tree with the md5 of the real file (e.g. a2036177b190fbee6e9e038b718f1c20) as the content. Once that is done I can follow the example found in the introduction of the reference documentation.

curl vs sftp

So far so good. This now works on-site, but for off-site use I need to fetch the needed files. The last section of the reference documentation is called Custom Fetch Scripts. It mentions that files are normally downloaded using file(DOWNLOAD). Neither there, nor in the documentation for file, is there a mention of what is used under the hood to fetch the files. After asking in #cmake I found out that it’s curl. While curl does handle SFTP, I didn’t get it to work with my known_hosts file, nor with my SSH agent (both from OpenSSH). On the other hand, it was rather easy to configure sftp to fetch a file from the internet-facing SSH server we have. Now I just had to hook it into CMake somehow.

Custom fetch script

As the section on “Custom Fetch Scripts” mentions, three things are needed:

  1. Specify the script via the ExternalDataCustomScript:// protocol.
  2. Tell CMake where it can find the fetch script.
  3. The fetch script itself.

The first two steps are done by providing a URL template and pointing to the script via a special variable:

set(ExternalData_URL_TEMPLATES
  "ExternalDataCustomScript://sftp/mnt/shared/over/nfs/Objects/%(algo)/%(hash)")
set(ExternalData_CUSTOM_SCRIPT_sftp
  ${CMAKE_SOURCE_DIR}/cmake/FetchFromSftp.cmake)

It took me a ridiculous amount of time to work out how to write a script that turns out to be rather short. This is an experience that seems to repeat itself when using CMake; it could say something about me, or something about CMake.

get_filename_component(FFS_ObjStoreDir ${ExternalData_CUSTOM_FILE} DIRECTORY)
get_filename_component(FFS_InputFilename ${ExternalData_CUSTOM_LOCATION} NAME)
get_filename_component(FFS_OutputFilename ${ExternalData_CUSTOM_FILE} NAME)

execute_process(COMMAND sftp sftp.company.com:/${ExternalData_CUSTOM_LOCATION}
  RESULT_VARIABLE FFS_SftpResult
  OUTPUT_QUIET
  ERROR_VARIABLE FFS_SftpErr
)

if(FFS_SftpResult)
  set(ExternalData_CUSTOM_ERROR "Failed to fetch from SFTP - ${FFS_SftpErr}")
else(FFS_SftpResult)
  file(MAKE_DIRECTORY ${FFS_ObjStoreDir})
  file(RENAME ${FFS_InputFilename} ${FFS_ObjStoreDir}/${FFS_OutputFilename})
endif(FFS_SftpResult)

This script is run with cmake -P in the binary dir of the CMakeLists.txt where the test is defined, which means it’s oblivious about the project it’s part of. PROJECT_BINARY_DIR is empty and CMAKE_BINARY_DIR is the same as CMAKE_CURRENT_BINARY_DIR. This is the reason why the first store in ExternalData_OBJECT_STORES has to be an absolute path – it’s very difficult, if not impossible, to find the correct placement of the object store otherwise.

Categories: Offsite Blogs

Ken T Takusagawa: [szamjfab] Error messages and documentation for FTP

Planet Haskell - Mon, 05/16/2016 - 3:47pm

One of the criticisms of the Foldable/Traversable Proposal (FTP) in Haskell is that error messages get more confusing and documentation gets harder to understand.  Both of these problems could be addressed with improvements to tools.

Errors when calling a polymorphic function with a Foldable or Traversable context could have additional text repeating what the error message would be if the function were specialized to lists.

Haddock could generate additional documentation for a polymorphic function with Foldable or Traversable context: generate, as documentation, what the type signature would be if the function were specialized to lists.  Or, the type variable could be named (renamed) "list":

mapM :: (Traversable list, Monad m) => (a -> m b) -> list a -> m (list b)
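
For comparison, the list-specialised signature that such generated documentation could display alongside the general one (this is simply mapM's pre-FTP type):

mapM :: Monad m => (a -> m b) -> [a] -> m [b]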

Categories: Offsite Blogs

LambdaCube: Ambient Occlusion Fields

Planet Haskell - Mon, 05/16/2016 - 2:57pm

Recently I created a new example that we added to the online editor: a simple showcase using ambient occlusion fields. This is a lightweight method to approximate ambient occlusion in real time using a 3D lookup table.

There is no single best method for calculating ambient occlusion, because various approaches shine under different conditions. For instance, screen-space methods are more likely to perform better when shading (very) small features, while working at the scale of objects or rooms requires solutions that work in world space, unless the artistic intent calls for a deliberately unrealistic look. VR applications especially favour world-space effects due to the increased need for temporal and spatial coherence.

I was interested in finding a reasonable approximation of ambient occlusion by big moving objects without unpleasant temporal artifacts, so it was clear from the beginning that a screen-space postprocessing effect was out of the question. I also ruled out approaches based on raymarching and raytracing, because I wanted it to be lightweight enough for mobile and other low-performance devices, and support any possible occluder shape defined as a triangle mesh. Being physically accurate was less of a concern for me as long as the result looked convincing.

First of all, I did a little research on world-space methods. I quickly found two solutions that are the most widely cited:

  1. Ambient Occlusion Fields by Kontkanen and Laine, which uses a cube map to encode the occlusion by a single object. Each entry of the map contains coefficients for an approximation function that returns the occlusion term given the distance from the centre of the object in the direction corresponding to the entry. They also describe a way to combine the occlusion terms originating from several objects by exploiting blending.
  2. Ambient Occlusion Fields and Decals in Infamous 2, which is a more direct approach that stores occlusion information (amount and general direction) in a 3D texture fitted around the casting object. This allows a more accurate reconstruction of occlusion, especially close to or within the convex hull of the object, at the cost of higher memory requirements.

I thought the latter approach was promising and created a prototype implementation. However, I was unhappy with the results exactly where I expected this method to shine: inside and near the object, and especially when it should have been self-shadowing.

After exhausting my patience for hacks, I had a different idea: instead of storing the general direction and strength of the occlusion at every sampling point, I’d directly store the strength in each of the six principal (axis-aligned) directions. The results surpassed all my expectations! The shading effect was very well-behaved and robust in general, and all the issues with missing occlusion went away instantly. While this meant increasing the number of terms from 4 to 6 for each sample, thanks to the improved behaviour the sampling resolution could be reduced enough to more than make up for it – consider that decreasing resolution by only 20% is enough to nearly halve the volume.

The real beef of this method is in the preprocessing step to generate the field, so let’s go through the process step by step. First of all, we take the bounding box of the object and add some padding to capture the domain where we want the approximation to work:

Next, we sample occlusion at every point by rendering the object on an 8×8 cube map as seen from that point. We just want a black and white image where the occluded parts are white. There is no real need for higher resolution or antialiasing, as we’ll have more than 256 samples affecting each of the final terms. Here’s what the cube maps look like (using 10x10x10 sampling points for the illustration):

Now we need a way to reduce each cube map to just six occlusion terms, one for each of the principal directions. The obvious thing to do is to define them as averages over half cubes. E.g. the up term is an average of the upper half of the cube, the right term is derived from the right half etc. For better accuracy, it might help to weight the samples of the cube map based on the relative directions they represent, but I chose not to do this because I was satisfied with the outcome even with simple averaging, and the difference is unlikely to be significant. Your mileage may vary.

The resulting terms can be stored in two RGB texels per sample, either in a 3D texture or a 2D one if your target platform has no support for the former (looking at you, WebGL).

To recap, here’s the whole field generation process in pseudocode:

principal_directions = {left, down, back, right, up, forward}
for each sample_index in (1, 1, 1) to (x_res, y_res, z_res)
    pos = position of the grid point at sample_index
    sample = black and white 8x8 cube map capture of object at pos
    for each dir_index in 1 to 6
        dir = principal_directions[dir_index]
        hemisphere = all texels of sample in the directions at acute angle with dir
        terms[dir_index] = average(hemisphere)
    field_negative[sample_index] = (r: terms[1], g: terms[2], b: terms[3])
    field_positive[sample_index] = (r: terms[4], g: terms[5], b: terms[6])

This is what it looks like when sampling at a resolution of 32x32x32 (negative XYZ terms on top, positive XYZ terms on bottom):

The resulting image is basically a voxelised representation of the object. Given this data, it is very easy to extract the final occlusion term during rendering. The key equation is the following:

occlusion = dot(minField(p), max(-n, 0)) + dot(maxField(p), max(n, 0)), where

  • p = the position of the sampling point in field space (this is normalised, i.e. (0,0,0) corresponds to one corner of the original bounding box used to generate the samples, and (1,1,1) covers the opposite corner)
  • n = the normal of the surface in occluder local space
  • minField = function to sample the minimum/negative terms (a single texture lookup if we have a 3D texture, two lookups and a lerp if we have a 2D texture)
  • maxField = function to sample the maximum/positive terms

All we’re doing here is computing a weighted sum of the occlusion terms, where the weights are the clamped dot products of n with the six principal directions. These weights happen to be the same as the individual components of the normal, so instead of doing six dot products, we can get them by zeroing out the negative terms of n and -n.
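
The same weighted sum written out as a small Haskell sketch (plain Haskell with a made-up Vec3 type, purely to spell out the arithmetic; in the actual renderer this computation lives in the shader):

data Vec3 = Vec3 !Float !Float !Float

dot3 :: Vec3 -> Vec3 -> Float
dot3 (Vec3 ax ay az) (Vec3 bx by bz) = ax*bx + ay*by + az*bz

-- Per-component max with zero, i.e. max(v, 0) in shader notation.
clampPos :: Vec3 -> Vec3
clampPos (Vec3 x y z) = Vec3 (max x 0) (max y 0) (max z 0)

neg :: Vec3 -> Vec3
neg (Vec3 x y z) = Vec3 (negate x) (negate y) (negate z)

-- Weighted sum of the six directional occlusion terms: minField and maxField
-- return the negative- and positive-direction samples at p, and n is the
-- surface normal in occluder local space.
occlusion :: (Vec3 -> Vec3) -> (Vec3 -> Vec3) -> Vec3 -> Vec3 -> Float
occlusion minField maxField p n =
  dot3 (minField p) (clampPos (neg n)) + dot3 (maxField p) (clampPos n)
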
Putting aside the details of obtaining p and n for a moment, let’s look at the result. Not very surprisingly, the ambient term computed from the above field suffers from aliasing, which is especially visible when moving the object. Blurring the field with an appropriate kernel before use can completely eliminate this artifact. I settled on the following 3x3x3 kernel:

1 4 1   4  9 4   1 4 1
4 9 4   9 16 9   4 9 4
1 4 1   4  9 4   1 4 1

Also, since the field is finite in size, I decided to simply fade out the terms to zero near the edge to improve the transition at the boundary. In the Infamous 2 implementation they opted for remapping the samples so the highest boundary value would be zero, but this means that every object has a different mapping that needs to be fixed with other magic coefficients later on. Here’s a comparison of the original (left) and the blurred (right) fields:

Back to the problem of sampling. Most of the work is just transforming points and vectors between coordinate systems, so it can be done in the vertex shader. Let’s define a few transformations:

  • F – occluder local to (normalised) field space, i.e. map the bounding box in the occluder’s local space to the range (0,0,0)-(1,1,1); this matrix is tied to the baked field, therefore it’s constant
  • O – occluder local to world space, i.e. occluder model transformation
  • R – receiver local to world space, i.e. receiver model transformation

I’ll use the f, o, and r subscripts to denote that a point or vector is in field, occluder local or receiver local space; e.g. p_f is the field space position and n_r is the receiver’s local surface normal. When rendering an occluder, our constant input is F, O, R, and the per-vertex input is p_r and n_r. Given this data, we can now derive the values of p and n needed for sampling:

n = n_o = normalize(O⁻¹ * R * n_r)
p = p_f + n * bias = F * O⁻¹ * R * p_r + n * bias

The bias factor is inversely proportional to the field’s resolution (I’m using 1/32 in the example, but it could also be a non-uniform scale if the field is not cube shaped), and its role is to prevent surfaces from shadowing themselves. Note that we’re not transforming the normal into field space, since that would alter its direction.

And that’s pretty much it! So far I’m very pleased with the results. One improvement I believe might be worth looking into is reducing the number of terms from 6 to 4 per sample, so we can halve the amount of texture space and lookups needed. To do this, I’d pick the normals of a regular tetrahedron instead of the six principal directions for reference, and compute 4 dot products in the vertex shader (the 4 reference normals could be packed in a 4×3 matrix N) to determine the contribution of each term:

weights = N * n_o = (dot(n_o, n_1), dot(n_o, n_2), dot(n_o, n_3), dot(n_o, n_4))
occlusion = dot(field(p), max(weights, 0))

As soon as LambdaCube is in a good shape again, I’m planning to augment our beloved Stunts example with this effect.

Categories: Offsite Blogs