News aggregator

Neil Mitchell: me :: Standard Chartered -> Barclays

Planet Haskell - Fri, 07/22/2016 - 6:03am

Summary: I've left Standard Chartered and will be starting at Barclays shortly.

I've been working at Standard Chartered for almost 8 years, but a few weeks ago I handed in my notice. While at Standard Chartered I got to work with some very smart people, on some very interesting projects. I have learnt a lot about whether Haskell works at large scales (it does!), what mistakes you can make and how to avoid those mistakes. My work at Standard Chartered led to the Shake build system, along with hundreds of other projects that alas remain locked in the proprietary banking world. I've used our in-house strict Haskell (Mu) to write lots of systems, and cemented my opinion of whether languages should be strict or lazy (lazy!). I've taught many people Haskell, and sold the values of functional programming to everyone I met.

After all that, I've decided to take a job at Barclays and will be starting mid-August. In this new role I plan to use real GHC-Haskell, with all the advanced features it enables (existentials, laziness etc.), and Cabal/Stack to leverage all of Hackage. My experience of building on top of C/C++ libraries for the last 8 years has made me envious of the amazing quality of Haskell libraries (to a first approximation, they might be a little slower than the C/C++ on average, but they won't leak, are multithread safe, and someone has thought about the corner cases). With any luck, I'll also be able to contribute some more of these libraries back to the community.

Once I officially start at Barclays I'll be hiring a team of London-based Haskell programmers to work with. More details to follow in a few weeks. For now, I will say that for hiring decisions I'm really interested in seeing actual code. If you are interested in a Haskell job anywhere you should probably have a GitHub account with some code on it. If you've done any fun projects, hacked around with an encoding of de Bruijn indices with type families, or tried to implement a JavaScript parser using only functions from base which have at most 1 vowel, shove it up. Having libraries on Hackage is even better. I judge people by their best code, not their worst, so more code is always better.

In case people are curious, here are a few questions I expect to be asked:

  • Does this mean Standard Chartered is going to stop using Haskell? No. Standard Chartered has lots of Haskell programmers and will be continuing with Mu (aka the Haskell language, but not the GHC implementation).
  • What does this mean for my open source libraries? For most, it means nothing, or they will improve because I might be using them in my day job. I don't think Standard Chartered is the biggest user of any of my libraries anymore. The one exception is Bake, which was designed quite specifically to address the needs of Standard Chartered. I may continue to work on it, I may pass maintainership over to someone at Standard Chartered, or I may do nothing further with it. Time will tell, but the source will remain available indefinitely.
  • I got a call from a headhunter for Barclays, is that you? No. Currently Barclays are recruiting someone to work on FPF using Haskell. That's a totally different team. My primary method of recruitment will be this blog, not headhunters. Anyone who ever wants a job with me should talk to me, not people who suggest they are working on my behalf :).
  • Are you hiring right now? No. I will be on the 15th of August once I join.
Categories: Offsite Blogs

Robin KAY: HsQML 0.3.4.1 released: From Zürich

Planet Haskell - Fri, 07/22/2016 - 2:22am
Last night I arrived in Zürich for ZuriHac 2016 and, in the brief space between finding my way from the airport and falling asleep, released HsQML 0.3.4.1. You can download the latest release of this Haskell binding to the Qt Quick GUI framework from Hackage as usual.

This is purely a bug-fix release, fixing building with newer versions of Cabal (1.24, GHC 8) and Qt (5.7+). It also resolves some difficulties with building the library on MacOS and I've revised the MacOS install documentation with more information.

In other news, I've decommissioned the old Trac bug-tracker as I accidentally broke it some time ago during a server upgrade and failed to notice. I take this as a bad sign for its effectiveness, so please just e-mail me with any issues instead. I enjoy hearing from people trying to use the library and I always try to reply with assistance as quickly as possible, so don't be shy.

release-0.3.4.1 - 2016.07.21

  * Added support for Cabal 1.24 API.
  * Fixed linking shared builds against MacOS frameworks (needs Cabal 1.24+).
  * Fixed building with Qt 5.7.
Categories: Offsite Blogs

Proposal: Add gcoerceWith to Data.Type.Coercion

libraries list - Fri, 07/22/2016 - 12:18am
In Data.Type.Equality, we have both

castWith :: (a :~: b) -> a -> b

and

gcastWith :: (a :~: b) -> (a ~ b => r) -> r

But in Data.Type.Coercion we only have

coerceWith :: Coercion a b -> a -> b

It seems to me that for the sake of consistency, we should add

gcoerceWith :: Coercion a b -> (Coercible a b => r) -> r
gcoerceWith Coercion a = a

David Feuer
_______________________________________________
Libraries mailing list
Libraries< at >haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries
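To see how the proposed combinator would be used, here is a small self-contained sketch (mine, not part of the proposal; the Age newtype and agesOf function are invented for illustration). Matching on the Coercion constructor brings Coercible a b into scope, exactly as gcastWith does for a ~ b, so coerce can then be applied to whole structures:

{-# LANGUAGE RankNTypes #-}
import Data.Coerce (Coercible, coerce)
import Data.Type.Coercion (Coercion(..))

-- The definition from the proposal above.
gcoerceWith :: Coercion a b -> (Coercible a b => r) -> r
gcoerceWith Coercion a = a

newtype Age = Age Int

-- Given evidence that Int and Age are coercible, convert a whole list at once.
agesOf :: Coercion Int Age -> [Int] -> [Age]
agesOf co xs = gcoerceWith co (coerce xs)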
Categories: Offsite Discussion

Mark Jason Dominus: A hack for getting the email address Git will use for a commit

Planet Haskell - Thu, 07/21/2016 - 2:25am

Today I invented a pretty good hack.

Suppose I have branch topic checked out. It often happens that I want to

git push origin topic:mjd/topic

which pushes the topic branch to the origin repository, but on origin it is named mjd/topic instead of topic. This is a good practice when many people share the same repository. I wanted to write a program that would do this automatically.

So the question arose, how should the program figure out the mjd part? Almost any answer would be good here: use some selection of environment variables, the current username, a hard-wired default, and the local part of Git's user.email configuration setting, in some order. Getting user.email is easy (git config --get user.email) but it might not be set and then you get nothing. If you make a commit but have no user.email, Git doesn't mind. It invents an address somehow. I decided that I would like my program to do exactly what Git does when it makes a commit.

But what does Git use for the committer's email address if there is no user.email set? This turns out to be complicated. It consults several environment variables in some order, as I suggested before. (It is documented in git-commit-tree if you are interested.) I did not want to duplicate Git's complicated procedure, because it might change, and because duplicating code is a sin. But there seemed to be no way to get Git to disgorge this value, short of actually making a commit and examining it.

So I wrote this command, which makes a commit and examines it:

git log -1 --format=%ce $(git-commit-tree HEAD^{tree} < /dev/null)

This is extremely weird, but aside from that it seems to have no concrete drawbacks. It is pure hack, but it is a hack that works flawlessly.

What is going on here? First, the $(…) part:

git-commit-tree HEAD^{tree} < /dev/null

The git-commit-tree command is what git-commit uses to actually create a commit. It takes a tree object, reads a commit message from standard input, writes a new commit object, and prints its SHA1 hash on standard output. Unlike git-commit, it doesn't modify the index (git-commit would use git-write-tree to turn the index into a tree object) and it doesn't change any of the refs (git-commit would update the HEAD ref to point to the new commit.) It just creates the commit.

Here we could use any tree, but the tree of the HEAD commit is convenient, and HEAD^{tree} is its name. We supply an empty commit message from /dev/null.

Then the outer command runs:

git log -1 --format=%ce $(…)

The $(…) part is replaced by the SHA1 hash of the commit we just created with git-commit-tree. The -1 flag to git-log gets the log information for just this one commit, and the --format=%ce tells git-log to print out just the committer's email address, whatever it is.

This is fast—nearly instantaneous—and cheap. It doesn't change the state of the repository, except to write a new object, which typically takes up 125 bytes. The new commit object is not attached to any refs and so will be garbage collected in due course. You can do it in the middle of a rebase. You can do it in the middle of a merge. You can do it with a dirty index or a dirty working tree. It always works.

(Well, not quite. It will fail if run in an empty repository, because there is no HEAD^{tree} yet. Probably there are some other similarly obscure failure modes.)

I called the shortcut git-push program git-pusho but I dropped the email-address-finder into git-get, which is my storehouse of weird “How do I find out X” tricks.

I wish my best work of the day had been a little bit more significant, but I'll take what I can get.

[ Addendum: Twitter user @shachaf has reminded me that the right way to do this is

git var GIT_COMMITTER_IDENT

which prints out something like

Mark Jason Dominus (陶敏修) <mjd@plover.com> 1469102546 -0400

which you can then parse. @shachaf also points out that a Stack Overflow discussion of this very question contains a comment suggesting the same weird hack! ]

Categories: Offsite Blogs

Different behaviour with -XAllowAmbiguousTypes in 7.10.3b and 8.0.1

haskell-cafe - Wed, 07/20/2016 - 4:52pm
Hi, while porting a library to Haskell, which deals with persisting finite state automata to various stores, depending on the user's choice and the instances provided for types s(tate) e(vent) a(ction), I ran into different behaviour in ghc 7.10.3b and 8.0.1 related to ambiguity checks. This is a minimised and somewhat contrived example: With GHC 8.0.1 I get: However, when I remove the type signature for get: GHC inferred the exact same type I provided, but this time it compiles successfully. When going back to GHC 7.10.3b, without type signature: So my questions are: *) What's with the different behaviour depending on whether the type sig is inferred or provided? *) Is GHC 7.10.3b or 8.0.1 closer to the correct behaviour w.r.t. -XAllowAmbiguousTypes? -- Regards, Max Amanshauser. _______________________________________________ Haskell-Cafe mailing list To (un)subscribe, modify options or view archives go to: http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe Only members subscribed via
Categories: Offsite Discussion

pattern match on forall'ed data

haskell-cafe - Wed, 07/20/2016 - 4:18pm
Hi all, I'm surprised this doesn't work:

data SomeData = forall e. (Typeable e, Eq e) => SomeData e

(===) :: (Typeable a, Typeable b, Eq a, Eq b) => a -> b -> Bool
(===) x y = cast x == Just y

test :: SomeData' -> Bool
test (SomeData' e) | e === Nothing = True
test _ = False

It says

Could not deduce (Eq a1) arising from a use of ‘===’

How can I achieve something of the same effect? Thanks Corentin
_______________________________________________
Haskell-Cafe mailing list
To (un)subscribe, modify options or view archives go to:
http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe
Only members subscribed via the mailman list are allowed to post.
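For reference, a minimal sketch of one way to get the intended behaviour (my reading of the question, not a reply from the thread): the error arises because a bare Nothing has the ambiguous type Maybe a, so GHC cannot pick Eq and Typeable instances for it. Giving the literal a concrete type makes the comparison go through; the Maybe () annotation below is an arbitrary choice for illustration.

{-# LANGUAGE ExistentialQuantification #-}
import Data.Typeable (Typeable, cast)

data SomeData = forall e. (Typeable e, Eq e) => SomeData e

(===) :: (Typeable a, Typeable b, Eq a, Eq b) => a -> b -> Bool
x === y = cast x == Just y

-- Annotating Nothing resolves the ambiguity; the guard holds only if the
-- packed value is a Maybe () that equals Nothing.
test :: SomeData -> Bool
test (SomeData e) | e === (Nothing :: Maybe ()) = True
test _ = False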
Categories: Offsite Discussion

RFC: New version of haskell-src-exts without simplified AST

haskell-cafe - Wed, 07/20/2016 - 2:09pm
Dear all, For several months I have had a branch of haskell-src-exts which has been updated for all the new features in GHC 8. However, I have not yet released the changes as in this version I also removed the simplified AST. To those unfamiliar, HSE provides two ASTs, one which has source locations and one which doesn't. I have removed the one which doesn't. This simplifies maintenance and I think also makes the library easier to use. I have held back releasing this version as there are some people who use the simplified AST. The changes necessary for the new version are mechanical and easy to make. For example, David Fox updated haskell-names to use my branch [1] with few problems. The question is, are there any users of the library who object to this change? Matt [1]: https://github.com/haskell-suite/haskell-names/pull/73 _______________________________________________ Haskell-Cafe mailing list To (un)subscribe, modify options or view archives go to: http://mail.haskell.org/cgi-bin/mailman/listinf
Categories: Offsite Discussion

Function to replace given element in list

General haskell list - Tue, 07/19/2016 - 9:08pm
Hi, I'm trying to make a custom function to replace a given element in a list. Code:

let i = elemIndex toReplace lst
in case i of
     Just i -> let z = splitAt i lst
                   x = fst z
                   y = (snd z)
               in init x x ++ newNmr x ++ y
     Nothing -> [5]

Error: I've tried a lot, but I always got an error. What am I doing wrong? Thanks!
_______________________________________________
Haskell mailing list
Haskell< at >haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell
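The quoted code does not compile as written (among other things, init x x applies init to two arguments, and splitAt keeps the matched element at the head of the second half). A minimal sketch of one way to write the function being described, assuming the goal is to replace the first occurrence of an element; the name replaceFirst and the fallback of returning the list unchanged are my choices, not the poster's:

import Data.List (elemIndex)

-- Replace the first occurrence of old with new; leave the list alone if absent.
replaceFirst :: Eq a => a -> a -> [a] -> [a]
replaceFirst old new lst =
  case elemIndex old lst of
    Just i  -> let (before, _ : after) = splitAt i lst
               in before ++ new : after
    Nothing -> lst

-- Example: replaceFirst 3 99 [1,2,3,4,3] == [1,2,99,4,3]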
Categories: Incoming News

Local Haskell talk in Leipzig, 27 July 2016 at 3:30pm in HTWK

haskell-cafe - Tue, 07/19/2016 - 6:11pm
Dear (filter (`livesInOrCloseTo` Leipzig_Germany) haskellers), we would like to invite you to a talk by Stephen Blackheath, "The why and how of Functional Reactive Programming with Java/Sodium", on Wednesday, 27 July 2016, at 15:30 in Room Gu 102, Gutenberg-Bau, HTWK Leipzig. Abstract: <https://portal.imn.htwk-leipzig.de/termine/vortrag-the-why-and-how-of-functional-reactive-programming-with-java-sodium? Geographical coordinates: <http://www.openstreetmap.org/?mlat=51.31248&mlon=12.37498#map=18/51.31248/12.37498> Afterwards (~ 17:00) there will be a get-together in the Cafe Suedbrause <http://suedbrause.de>. Stop Listening! Johannes Waldmann and Heinrich Apfelmus _______________________________________________ Haskell-Cafe mailing list To (un)subscribe, modify options or view archives go to: http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe Only members subscribed via the mailman list are allowed to post.
Categories: Offsite Discussion

Call for Participation: ICFP 2016

haskell-cafe - Tue, 07/19/2016 - 6:44am
[ Early registration ends 17 August. ] ===================================================================== Call for Participation ICFP 2016 21st ACM SIGPLAN International Conference on Functional Programming and affiliated events September 18 - September 24, 2016 Nara, Japan http://conf.researchr.org/home/icfp-2016 ===================================================================== ICFP provides a forum for researchers and developers to hear about the latest work on the design, implementations, principles, and uses of functional programming. The conference covers the entire spectrum of work, from practice to theory, including its peripheries. A full week dedicated to functional programming: 1 conference, 1 symposium, 10 workshops, tutorials, programming contest results, student research competition, and mentoring workshop * Overview and affiliated events: http://conf.researchr.org/home/icfp-2016 * Program: http://conf.researchr.org/program/icfp-2016/program-icfp-2016 * Accepted Papers:
Categories: Offsite Discussion

ANN: blockhash-0.1.0.0

General haskell list - Tue, 07/19/2016 - 5:07am
Hi all! I'm pleased to announce the first release of blockhash, a perceptual image hash calculation tool based on the algorithm described in "Block Mean Value Based Image Perceptual Hashing" by Bian Yang, Fan Gu and Xiamu Niu.

https://hackage.haskell.org/package/blockhash
https://github.com/kseo/blockhash

Program:

Usage: blockhash [-q|--quick] [-b|--bits ARG] filenames
  blockhash

Available options:
  -h,--help                Show this help text
  -q,--quick               Use quick hashing method
  -b,--bits ARG            Create hash of size N^2 bits.

Library:

import qualified Codec.Picture as P
import Data.Blockhash
import qualified Data.Vector.Generic as VG
import qualified Data.Vector.Unboxed as V

printHash :: FilePath -> IO ()
printHash filename = do
  res <- P.readImage filename
  case res of
    Left err -> putStrLn ("Fail to read: " ++ filename)
    Right dynamicImage -> do
      let rgbaImage = P.convertRGBA8 dynamicImage
          pixels = VG.convert (P.imageData rgbaImage)
          image =
Categories: Incoming News

Linking to a DLL with relative path on Windows using stack build

haskell-cafe - Mon, 07/18/2016 - 4:06pm
Hello all, I am writing a Haskell program that controls a scientific instrument (a very sensitive camera). The instrument vendor makes a Windows DLL available that exposes a C interface. I have already set up the FFI bindings. I am building the application using Stack on Windows 8. But I am not sure how I can get “stack build” to find the DLL at link time. My Google searches have not cleared things up. My questions: 1. What global directory should I place the DLL in so “stack build” will pick it up? I tried placing it in C:\Windows\SysWOW64 but that doesn’t work. I realize I could add an "extra-lib-dirs” argument in the .cabal file but I would like to know what the default linker search paths are. But in fact I prefer to not install the DLL in a hardcoded, global location. Since the instrument is attached to a specific measurement computer, there is no point in having it installed on the developer PCs. I would prefer to have it all self-contained into a single project. In my C++ development
Categories: Offsite Discussion

Announcing MuniHac 2016

General haskell list - Mon, 07/18/2016 - 3:45pm
Hi fellow Haskellers! Together with Alexander Lehmann from TNG Technology Consulting GmbH and Andres Löh from Well-Typed LLP, I am organizing a new Haskell Hackathon that will take place in Munich, from Friday September 2 - Sunday September 4. TNG is graciously offering to host the Hackathon at their premises. This Hackathon is in the tradition of other Haskell Hackathons such as ZuriHac, HacBerlin, UHac and others. We have capacity for 80-100 Haskellers to collaborate on any project they like. Hacking on Haskell projects will be the main focus of the event, but we will also have a couple of talks by renowned Haskellers. More details and a link to the registration platform can be found on www.munihac.de Hope to see you in Munich! Best regards, Michael
Categories: Incoming News

Neil Mitchell: Why did Stack stop using Shake?

Planet Haskell - Mon, 07/18/2016 - 11:57am

Summary: Stack originally used Shake. Now it doesn't. There are reasons for that.

The Stack tool originally used the Shake build system, as described on the page about Stack's origins. Recently Edward Yang asked why doesn't Stack still use Shake - a very interesting question. I've taken the information shared in that mailing list thread and written it up, complete with my comments and distortions/inferences.

Stack is all about building Haskell code, in ways that obey dependencies and perform minimal rebuilds. Already in Haskell the dependency story is somewhat muddied. GHC (as available through ghc --make) does advanced dependency tracking, including header includes and custom Template Haskell dependency directives. You can also run ghc in single-shot mode, compiling a file at a time, but the result is about 3x slower and GHC will still do some dependency tracking itself anyway. Layered on top of ghc --make is Cabal which is responsible for tracking dependencies with .cabal files, configured Cabal information and placing things into the GHC package database. Layered on top of that is Stack, which has multiple projects and needs to track information about which Stackage snapshot is active and shared build dependencies.

Shake is good at taking complex dependencies and hiding all the messy details. However, for Stack many of these messy details were the whole purpose of the project. When Michael Snoyman and Chris Done were originally writing Stack they didn't have much experience with Shake, and opted to go for simplicity and directly managing the pieces, which they viewed to be less risky.

Now that Stack is written, and works nicely, the question changes to whether it is worth changing existing working code to make use of Shake. Interestingly, at the heart of Stack there is a "Shake-lite" - see Control.Concurrent.Execute. This piece could certainly be replaced by Shake, but what would the benefit be? Looking at it with my Shake implementer's hat on, there are a few things that spring to mind:


  • This existing code is O(n^2) in lots of places. For the size of Stack projects, compared to the time required to compile Haskell, that probably doesn't matter.


  • Shake persists the dependencies, but the Stack code does not seem to. Would that be useful? Or is the information already persisted elsewhere? Would Shake persisting the information make stack builds which had nothing to do go faster? (The answer is almost certainly yes.)


  • Since the code is only used on one project it probably isn't as well tested as Shake, which has a lot of tests. On the other hand, it has a lot less features, so a lot less scope for bugs.


  • The code makes a lot of assumptions about the information fed to it. Shake doesn't make such assumptions, and thus invalid input is less likely to fail silently.


  • Shake has a lot of advanced dependency forms such as resources. Stack currently blocks when simultaneous configures are tried, whereas Shake would schedule other tasks to run (a minimal sketch of resources follows this list).


  • Shake has features such as profiling that are not worth creating for a single project, but that when bundled in the library can be a useful free feature.
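To make the resources point concrete, here is a minimal, self-contained sketch of Shake's resource feature (my illustration, not code from Stack or Shake's internals; the rule names and the echoed message are placeholders): at most one "configure" step runs at a time, while other rules remain free to run in parallel.

import Development.Shake

main :: IO ()
main = shakeArgs shakeOptions $ do
  -- A resource with capacity 1 acts as a lock: only one holder at a time.
  configureLock <- newResource "configure" 1
  want ["pkg-a.configured", "pkg-b.configured"]
  "*.configured" %> \out -> do
    withResource configureLock 1 $
      liftIO $ putStrLn ("configuring " ++ out)  -- stand-in for Setup configure
    writeFile' out ""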

In some ways Stack as it stands avoids a lot of the best selling points about Shake:


  • If you have lots of complex interdependencies, Shake lets you manage
    them nicely. That's not really the case for Stack, but is in large
    heterogeneous build systems, e.g. the GHC build system.


  • If you are writing things quickly, Shake lets you manage
    exceptions/retries/robustness quickly. For a project which has the
    effort invested that Stack does, that's less important, but for things
    like MinGHC (something Stack killed), it was critically important because no one cared enough to do all this nasty engineering.


  • If you are experimenting, Shake provides a lot of pieces (resources,
    parallelism, storage) that help explore the problem space without
    having to do lots of work at each iteration. That might mean Shake is
    more of a benefit at the start of a project than in a mature project.

If you are writing a version of Stack from scratch, I'd certainly recommend thinking about using Shake. I suspect it probably does make sense for Stack to switch to Shake eventually, to simplify ongoing maintenance, but there's no real hurry.

Categories: Offsite Blogs

Edward Z. Yang: What Template Haskell gets wrong and Racket gets right

Planet Haskell - Mon, 07/18/2016 - 9:19am

Why are macros in Haskell terrible, but macros in Racket great? There are certainly many small problems with GHC's Template Haskell support, but I would say that there is one fundamental design point which Racket got right and Haskell got wrong: Template Haskell does not sufficiently distinguish between compile-time and run-time phases. Confusion between these two phases leads to strange claims like “Template Haskell doesn’t work for cross-compilation” and stranger features like -fexternal-interpreter (whereby the cross-compilation problem is “solved” by shipping the macro code to the target platform to be executed).

The difference in design can be seen simply by comparing the macro systems of Haskell and Racket. This post assumes knowledge of either Template Haskell, or Racket, but not necessarily both.

Basic macros. To establish a basis of comparison, let’s compare how macros work in Template Haskell as opposed to Racket. In Template Haskell, the primitive mechanism for invoking a macro is a splice:

{-# LANGUAGE TemplateHaskell #-}
module A where
import Language.Haskell.TH   -- provides litE and intPrimL
val = $( litE (intPrimL 2) )

Here, $( ... ) indicates the splice, which runs ... to compute an AST which is then spliced into the program being compiled. The syntax tree is constructed using library functions litE (literal expression) and intPrimL (integer primitive literal).

In Racket, the macros are introduced using transformer bindings, and invoked when the expander encounters a use of this binding:

#lang racket
(define-syntax macro
  (lambda (stx)
    (datum->syntax #'int 2)))
(define val macro)

Here, define-syntax defines a macro named macro, which takes in the syntax stx of its usage, and unconditionally returns a syntax object representing the literal two (constructed using datum->syntax, which converts Scheme data into ASTs which construct them).

Template Haskell macros are obviously less expressive than Racket's (an identifier cannot directly invoke a macro: splices are always syntactically obvious); conversely, it is easy to introduce a splice special form to Racket (hat tip to Sam Tobin-Hochstadt for this code—if you are not a Racketeer don’t worry too much about the specifics):

#lang racket
(define-syntax (splice stx)
  (syntax-case stx ()
    [(splice e)
     #'(let-syntax ([id (lambda _ e)])
         (id))]))
(define val (splice (datum->syntax #'int 2)))

I will reuse splice in some further examples; it will be copy-pasted to keep the code self-contained but not necessary to reread.

Phases of macro helper functions. When writing large macros, it's frequently desirable to factor out some of the code in the macro to a helper function. We will now refactor our example to use an external function to compute the number two.

In Template Haskell, you are not allowed to define a function in a module and then immediately use it in a splice:

{-# LANGUAGE TemplateHaskell #-}
module A where
import Language.Haskell.TH
f x = x + 1
val = $( litE (intPrimL (f 1)) ) -- ERROR
-- A.hs:5:26:
--     GHC stage restriction:
--       ‘f’ is used in a top-level splice or annotation,
--       and must be imported, not defined locally
--     In the splice: $(litE (intPrimL (f 1)))
-- Failed, modules loaded: none.

However, if we place the definition of f in a module (say B), we can import and then use it in a splice:

{-# LANGUAGE TemplateHaskell #-}
module A where
import Language.Haskell.TH
import B (f)
val = $( litE (intPrimL (f 1)) ) -- OK

In Racket, it is possible to define a function in the same file you are going to use it in a macro. However, you must use the special-form define-for-syntax which puts the function into the correct phase for a macro to use it:

#lang racket
(define-syntax (splice stx)
  (syntax-case stx ()
    [(splice e)
     #'(let-syntax ([id (lambda _ e)])
         (id))]))
(define-for-syntax (f x) (+ x 1))
(define val (splice (datum->syntax #'int (f 1))))

If we attempt to simply (define (f x) (+ x 1)), we get an error “f: unbound identifier in module”. The reason for this is Racket’s phase distinction. If we (define f ...), f is a run-time expression, and run-time expressions cannot be used at compile-time, which is when the macro executes. By using define-for-syntax, we place the expression at compile-time, so it can be used. (But similarly, f can now no longer be used at run-time. The only communication from compile-time to run-time is via the expansion of a macro into a syntax object.)

If we place f in an external module, we can also load it. However, we must once again indicate that we want to bring f into scope as a compile-time object:

(require (for-syntax f-module))

As opposed to the usual (require f-module).

Reify and struct type transform bindings. In Template Haskell, the reify function gives Template Haskell code access to information about defined data types:

{-# LANGUAGE TemplateHaskell #-}
module A where
import Language.Haskell.TH
data Single a = Single a
$(reify ''Single >>= runIO . print >> return [])

This example code prints out information about Single at compile time. Compiling this module gives us the following information about Single:

TyConI (DataD [] A.Single [PlainTV a_1627401583] [NormalC A.Single [(NotStrict,VarT a_1627401583)]] [])

reify is implemented by interleaving splices and typechecking: all top-level declarations prior to a top-level splice are fully typechecked prior to running the top-level splice.

In Racket, information about structures defined using the struct form can be passed to compile-time via a structure type transformer binding:

#lang racket
(require (for-syntax racket/struct-info))
(struct single (a))
(define-syntax (run-at-compile-time stx)
  (syntax-case stx ()
    [(run-at-compile-time e)
     #'(let-syntax ([id (lambda _ (begin e #'(void)))])
         (id))]))
(run-at-compile-time
  (print (extract-struct-info (syntax-local-value (syntax single)))))

Which outputs:

'(.#<syntax:3:8 struct:single> .#<syntax:3:8 single> .#<syntax:3:8 single?> (.#<syntax:3:8 single-a>) (#f) #t)

The code is a bit of a mouthful, but what is happening is that the struct macro defines single as a syntax transformer. A syntax transformer is always associated with a compile-time lambda, which extract-struct-info can interrogate to get information about the struct (although we have to faff about with syntax-local-value to get our hands on this lambda—single is unbound at compile-time!)

Discussion. Racket’s compile-time and run-time phases are an extremely important idea. They have a number of consequences:

  1. You don’t need to run your run-time code at compile-time, nor vice versa. Thus, cross-compilation is supported trivially because only your run-time code is ever cross-compiled.
  2. Your module imports are separated into run-time and compile-time imports. This means your compiler only needs to load the compile-time imports into memory to run them; as opposed to Template Haskell which loads all imports, run-time and compile-time, into GHC's address space in case they are invoked inside a splice.
  3. Information cannot flow from run-time to compile-time: thus any compile-time declarations (define-for-syntax) can easily be compiled prior to performing expansion simply by ignoring everything else in a file.

Racket was right, Haskell was wrong. Let’s stop blurring the distinction between compile-time and run-time, and get a macro system that works.

Postscript. Thanks to a tweet from Mike Sperber which got me thinking about the problem, and a fascinating breakfast discussion with Sam Tobin-Hochstadt. Also thanks to Alexis King for helping me debug my extract-struct-info code.

Further reading. To learn more about Racket's macro phases, one can consult the documentation Compile and Run-Time Phases and General Phase Levels. The phase system is also described in the paper Composable and Compilable Macros.

Categories: Offsite Blogs

parsec: get line numbers of lexemes

haskell-cafe - Sun, 07/17/2016 - 5:50pm
hello, I am experimenting with parsec. I am building a very simple language. I have an (almost) wholly functioning language: syntactic check, AST building and byte code generation. Now I want to improve error checking (type checking more precisely). This should be done on the basis of the AST, but in the AST I lose the line number information needed for precise error messages. So my question is: is there a way to get the line numbers at the parser level? I would then store them in the AST for the semantic checks. Thanks for your answers, Olivier _______________________________________________ Haskell-Cafe mailing list To (un)subscribe, modify options or view archives go to: http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe Only members subscribed via the mailman list are allowed to post.
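One common answer (a minimal sketch of my own, not taken from the thread) is Parsec's getPosition, which returns the current SourcePos: capture it at the start of each construct and store it in the AST node. The Expr type and intLit parser below are made-up placeholders.

import Text.Parsec
import Text.Parsec.String (Parser)

-- A hypothetical AST node that carries the position it was parsed at.
data Expr = IntLit SourcePos Integer
  deriving Show

intLit :: Parser Expr
intLit = do
  pos <- getPosition            -- line and column where the lexeme starts
  n   <- read <$> many1 digit
  return (IntLit pos n)

-- Later passes can report errors using sourceLine pos and sourceColumn pos.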
Categories: Offsite Discussion

OCL 2016: ** Deadline Extension ** Submit Your Paper Until July 24, 2016

General haskell list - Sun, 07/17/2016 - 10:13am
(Apologies for duplicates) If you are working on the foundations, methods, or tools for OCL or textual modelling, you should now finalise your submission for the OCL workshop! *** The submission deadline has been extended to July 24th, 2016! *** CALL FOR PAPERS 16th International Workshop on OCL and Textual Modeling Co-located with ACM/IEEE 19th International Conference on Model Driven Engineering Languages and Systems (MODELS 2016) October 2, 2016, Saint-Malo, France http://oclworkshop.github.io Modeling started out with UML and its precursors as a graphical notation. Such visual representations enable direct intuitive capturing of reality, but some of their features are difficult to formalize and lack the level of precision required to create complete and unambiguous specifications. Limitations of the graphical notations encouraged the development of text-based modeling languages that either integrate with or replace graphical notations
Categories: Incoming News
