News aggregator

Linking to a DLL with relative path on Windows using stack build

haskell-cafe - Mon, 07/18/2016 - 4:06pm
Hello all, I am writing a Haskell program that controls a scientific instrument (a very sensitive camera). The instrument vendor makes a Windows DLL available that exposes a C interface. I have already set up the FFI bindings. I am building the application using Stack on Windows 8. But I am not sure how I can get “stack build” to find the DLL at link time. My Google searches have not cleared things up. My questions: 1. What global directory should I place the DLL in so “stack build” will pick it up? I tried placing it in C:\Windows\SysWOW64 but that doesn’t work. I realize I could add an "extra-lib-dirs” argument in the .cabal file but I would like to know what the default linker search paths are. But in fact I prefer to not install the DLL in a hardcoded, global location. Since the instrument is attached to a specific measurement computer, there is no point in having it installed on the developer PCs. I would prefer to have it all self-contained into a single project. In my C++ development
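
As a rough sketch of the "extra-lib-dirs" route mentioned above (the lib directory, executable name and CameraSDK library name are placeholders, not taken from the post), the per-project variant would look something like this in the .cabal file:

executable camera-app
  main-is:          Main.hs
  build-depends:    base
  default-language: Haskell2010
  -- extra-lib-dirs/extra-libraries are standard Cabal fields; the path and
  -- library name here are made up. A relative path keeps the import library
  -- inside the project tree, though some Cabal versions may want it absolute.
  extra-lib-dirs:   lib
  extra-libraries:  CameraSDK

For a machine-wide SDK location, stack.yaml also accepts an extra-lib-dirs list of directories, which avoids touching the .cabal file at all.
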
Categories: Offsite Discussion

Announcing MuniHac 2016

General haskell list - Mon, 07/18/2016 - 3:45pm
Hi fellow Haskellers! Together with Alexander Lehmann from TNG Technology Consulting GmbH and Andres Löh from Well-Typed LLP, I am organizing a new Haskell Hackathon that will take place in Munich, from Friday September 2 - Sunday September 4. TNG is graciously offering to host the Hackathon at their premises. This Hackathon is in the tradition of other Haskell Hackathons such as ZuriHac, HacBerlin, UHac and others. We have capacity for 80-100 Haskellers to collaborate on any project they like. Hacking on Haskell projects will be the main focus of the event, but we will also have a couple of talks by renowned Haskellers. More details and a link to the registration platform can be found on www.munihac.de Hope to see you in Munich! Best regards, Michael
Categories: Incoming News

Announcing MuniHac 2016

haskell-cafe - Mon, 07/18/2016 - 3:39pm
Hi fellow Haskellers! Together with Alexander Lehmann from TNG Technology Consulting GmbH and Andres Löh from Well-Typed LLP, I am organizing a new Haskell Hackathon that will take place in Munich, from Friday September 2 - Sunday September 4. TNG is graciously offering to host the Hackathon at their premises. This Hackathon is in the tradition of other Haskell Hackathons such as ZuriHac, HacBerlin, UHac and others. We have capacity for 80-100 Haskellers to collaborate on any project they like. Hacking on Haskell projects will be the main focus of the event, but we will also have a couple of talks by renowned Haskellers. More details and a link to the registration platform can be found on www.munihac.de Hope to see you in Munich! Best regards, Michael
Categories: Offsite Discussion

Neil Mitchell: Why did Stack stop using Shake?

Planet Haskell - Mon, 07/18/2016 - 11:57am

Summary: Stack originally used Shake. Now it doesn't. There are reasons for that.

The Stack tool originally used the Shake build system, as described on the page about Stack's origins. Recently Edward Yang asked why Stack doesn't still use Shake - a very interesting question. I've taken the information shared in that mailing list thread and written it up, complete with my comments and distortions/inferences.

Stack is all about building Haskell code, in ways that obey dependencies and perform minimal rebuilds. Already in Haskell the dependency story is somewhat muddied. GHC (as available through ghc --make) does advanced dependency tracking, including header includes and custom Template Haskell dependency directives. You can also run ghc in single-shot mode, compiling a file at a time, but the result is about 3x slower and GHC will still do some dependency tracking itself anyway. Layered on top of ghc --make is Cabal which is responsible for tracking dependencies with .cabal files, configured Cabal information and placing things into the GHC package database. Layered on top of that is Stack, which has multiple projects and needs to track information about which Stackage snapshot is active and shared build dependencies.

Shake is good at taking complex dependencies and hiding all the messy details. However, for Stack many of these messy details were the whole purpose of the project. When Michael Snoyman and Chris Done were originally writing Stack they didn't have much experience with Shake, and opted to go for simplicity and directly managing the pieces, which they viewed to be less risky.

Now that Stack is written, and works nicely, the question changes to whether it is worth changing existing working code to make use of Shake. Interestingly, at the heart of Stack there is a "Shake-lite" - see Control.Concurrent.Execute. This piece could certainly be replaced by Shake, but what would the benefit be? Looking at it with my Shake implementer's hat on, there are a few things that spring to mind:


  • This existing code is O(n^2) in lots of places. For the size of Stack projects, compared to the time required to compile Haskell, that probably doesn't matter.


  • Shake persists the dependencies, but the Stack code does not seem to. Would that be useful? Or is the information already persisted elsewhere? Would Shake persisting the information make stack builds which had nothing to do go faster? (The answer is almost certainly yes.)


  • Since the code is only used on one project it probably isn't as well tested as Shake, which has a lot of tests. On the other hand, it has far fewer features, so far less scope for bugs.


  • The code makes a lot of assumptions about the information fed to it. Shake doesn't make such assumptions, and thus invalid input is less likely to fail silently.


  • Shake has a lot of advanced dependency forms such as resources. Stack currently blocks when simultaneous configures are tried, whereas Shake would schedule other tasks to run (see the sketch just after this list).


  • Shake has features such as profiling that are not worth creating for a single project, but that when bundled in the library can be a useful free feature.
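
To make the resource bullet concrete, here is a minimal sketch using Shake's newResource/withResource; the package names and the trivial "configure" step are invented for illustration, this is not Stack code:

import Development.Shake

main :: IO ()
main = shakeArgs shakeOptions $ do
    want ["pkgA.configured", "pkgB.configured"]
    -- at most one rule may hold this resource at any moment
    configureLock <- newResource "configure" 1
    "*.configured" %> \out ->
        withResource configureLock 1 $
            -- stand-in for a real per-package configure step
            writeFile' out "configured"

While one rule holds the lock, any other runnable rules keep executing in parallel, which is the scheduling behaviour contrasted with Stack's blocking above.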

In some ways Stack as it stands avoids a lot of the best selling points of Shake:


  • If you have lots of complex interdependencies, Shake lets you manage
    them nicely. That's not really the case for Stack, but is in large
    heterogeneous build systems, e.g. the GHC build system.


  • If you are writing things quickly, Shake lets you manage
    exceptions/retries/robustness quickly. For a project which has the
    effort invested that Stack does, that's less important, but for things
    like MinGHC (something Stack killed), it was critically important because no one cared enough to do all this nasty engineering.


  • If you are experimenting, Shake provides a lot of pieces (resources,
    parallelism, storage) that help explore the problem space without
    having to do lots of work at each iteration. That might mean Shake is
    more of a benefit at the start of a project than in a mature project.

If you are writing a version of Stack from scratch, I'd certainly recommend thinking about using Shake. I suspect it probably does make sense for Stack to switch to Shake eventually, to simplify ongoing maintenance, but there's no real hurry.

Categories: Offsite Blogs

Edward Z. Yang: What Template Haskell gets wrong and Racket gets right

Planet Haskell - Mon, 07/18/2016 - 9:19am

Why are macros in Haskell terrible, but macros in Racket great? There are certainly many small problems with GHC's Template Haskell support, but I would say that there is one fundamental design point which Racket got right and Haskell got wrong: Template Haskell does not sufficiently distinguish between compile-time and run-time phases. Confusion between these two phases leads to strange claims like “Template Haskell doesn’t work for cross-compilation” and stranger features like -fexternal-interpreter (whereby the cross-compilation problem is “solved” by shipping the macro code to the target platform to be executed).

The difference in design can be seen simply by comparing the macro systems of Haskell and Racket. This post assumes knowledge of either Template Haskell, or Racket, but not necessarily both.

Basic macros. To establish a basis of comparison, let’s compare how macros work in Template Haskell as opposed to Racket. In Template Haskell, the primitive mechanism for invoking a macro is a splice:

{-# LANGUAGE TemplateHaskell #-}
module A where
import Language.Haskell.TH
val = $( litE (intPrimL 2) )

Here, $( ... ) indicates the splice, which runs ... to compute an AST which is then spliced into the program being compiled. The syntax tree is constructed using library functions litE (literal expression) and intPrimL (integer primitive literal).

In Racket, the macros are introduced using transformer bindings, and invoked when the expander encounters a use of this binding:

#lang racket

(define-syntax macro
  (lambda (stx)
    (datum->syntax #'int 2)))

(define val macro)

Here, define-syntax defines a macro named macro, which takes in the syntax stx of its usage, and unconditionally returns a syntax object representing the literal two (constructed using datum->syntax, which converts Scheme data into ASTs which construct them).

Template Haskell macros are obviously less expressive than Racket's (an identifier cannot directly invoke a macro: splices are always syntactically obvious); conversely, it is easy to introduce a splice special form to Racket (hat tip to Sam Tobin-Hochstadt for this code—if you are not a Racketeer don’t worry too much about the specifics):

#lang racket

(define-syntax (splice stx)
  (syntax-case stx ()
    [(splice e)
     #'(let-syntax ([id (lambda _ e)])
         (id))]))

(define val (splice (datum->syntax #'int 2)))

I will reuse splice in some further examples; it will be copy-pasted to keep the code self-contained but not necessary to reread.

Phases of macro helper functions. When writing large macros, it's frequently desirable to factor out some of the code in the macro to a helper function. We will now refactor our example to use an external function to compute the number two.

In Template Haskell, you are not allowed to define a function in a module and then immediately use it in a splice:

{-# LANGUAGE TemplateHaskell #-}
module A where
import Language.Haskell.TH
f x = x + 1
val = $( litE (intPrimL (f 1)) ) -- ERROR
-- A.hs:5:26:
--     GHC stage restriction:
--       ‘f’ is used in a top-level splice or annotation,
--       and must be imported, not defined locally
--     In the splice: $(litE (intPrimL (f 1)))
-- Failed, modules loaded: none.

However, if we place the definition of f in a module (say B), we can import and then use it in a splice:

{-# LANGUAGE TemplateHaskell #-}
module A where
import Language.Haskell.TH
import B (f)
val = $( litE (intPrimL (f 1)) ) -- OK

In Racket, it is possible to define a function in the same file you are going to use it in a macro. However, you must use the special-form define-for-syntax which puts the function into the correct phase for a macro to use it:

#lang racket

(define-syntax (splice stx)
  (syntax-case stx ()
    [(splice e)
     #'(let-syntax ([id (lambda _ e)])
         (id))]))

(define-for-syntax (f x) (+ x 1))

(define val (splice (datum->syntax #'int (f 1))))

If we attempt to simply (define (f x) (+ x 1)), we get an error “f: unbound identifier in module”. The reason for this is Racket’s phase distinction. If we (define f ...), f is a run-time expression, and run-time expressions cannot be used at compile-time, which is when the macro executes. By using define-for-syntax, we place the expression at compile-time, so it can be used. (But similarly, f can now no longer be used at run-time. The only communication from compile-time to run-time is via the expansion of a macro into a syntax object.)

If we place f in an external module, we can also load it. However, we must once again indicate that we want to bring f into scope as a compile-time object:

(require (for-syntax f-module))

As opposed to the usual (require f-module).

Reify and struct type transformer bindings. In Template Haskell, the reify function gives Template Haskell code access to information about defined data types:

{-# LANGUAGE TemplateHaskell #-}
module A where
import Language.Haskell.TH
data Single a = Single a
$(reify ''Single >>= runIO . print >> return [] )

This example code prints out information about Single at compile time. Compiling this module gives us the following information about Single:

TyConI (DataD [] A.Single [PlainTV a_1627401583] [NormalC A.Single [(NotStrict,VarT a_1627401583)]] [])

reify is implemented by interleaving splices and typechecking: all top-level declarations prior to a top-level splice are fully typechecked prior to running the top-level splice.

In Racket, information about structures defined using the struct form can be passed to compile-time via a structure type transformer binding:

#lang racket
(require (for-syntax racket/struct-info))

(struct single (a))

(define-syntax (run-at-compile-time stx)
  (syntax-case stx ()
    [(run-at-compile-time e)
     #'(let-syntax ([id (lambda _ (begin e #'(void)))])
         (id))]))

(run-at-compile-time
  (print (extract-struct-info (syntax-local-value (syntax single)))))

Which outputs:

'(.#<syntax:3:8 struct:single> .#<syntax:3:8 single> .#<syntax:3:8 single?> (.#<syntax:3:8 single-a>) (#f) #t)

The code is a bit of a mouthful, but what is happening is that the struct macro defines single as a syntax transformer. A syntax transformer is always associated with a compile-time lambda, which extract-struct-info can interrogate to get information about the struct (although we have to faff about with syntax-local-value to get our hands on this lambda—single is unbound at compile-time!)

Discussion. Racket’s compile-time and run-time phases are an extremely important idea. They have a number of consequences:

  1. You don’t need to run your run-time code at compile-time, nor vice versa. Thus, cross-compilation is supported trivially because only your run-time code is ever cross-compiled.
  2. Your module imports are separated into run-time and compile-time imports. This means your compiler only needs to load the compile-time imports into memory to run them; as opposed to Template Haskell which loads all imports, run-time and compile-time, into GHC's address space in case they are invoked inside a splice.
  3. Information cannot flow from run-time to compile-time: thus any compile-time declarations (define-for-syntax) can easily be compiled prior to performing expansion, simply by ignoring everything else in the file.

Racket was right, Haskell was wrong. Let’s stop blurring the distinction between compile-time and run-time, and get a macro system that works.

Postscript. Thanks to a tweet from Mike Sperber which got me thinking about the problem, and a fascinating breakfast discussion with Sam Tobin-Hochstadt. Also thanks to Alexis King for helping me debug my extract-struct-info code.

Further reading. To learn more about Racket's macro phases, one can consult the documentation Compile and Run-Time Phases and General Phase Levels. The phase system is also described in the paper Composable and Compilable Macros.

Categories: Offsite Blogs

parsec get line number of lexems

haskell-cafe - Sun, 07/17/2016 - 5:50pm
hello, I am experimenting with parsec. I am building a very simple language. I have an (almost) wholly functioning language: syntactic check, AST building and byte code generation. Now I want to improve error checking (type checking more precisely). This should be done on the AST basis, but in the AST I lose the line number information for precise error messages. So my question is: is there a way to get the line numbers at the parser level? I would then store it in the AST for semantic check purposes. Thanks for your answers, Olivier
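
A minimal sketch of the usual approach (not from the thread): parsec's getPosition returns the current SourcePos, which can be stored in each AST node and consulted later during type checking; the Expr type here is invented for illustration.

import Text.Parsec
import Text.Parsec.String (Parser)

data Expr = Num SourcePos Integer
          | Var SourcePos String
          deriving Show

number :: Parser Expr
number = do
  pos <- getPosition              -- line/column before the token is consumed
  n   <- read <$> many1 digit
  return (Num pos n)

variable :: Parser Expr
variable = Var <$> getPosition <*> many1 letter

-- later, e.g. when reporting a type error:
exprLine :: Expr -> Int
exprLine (Num pos _) = sourceLine pos
exprLine (Var pos _) = sourceLine pos
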
Categories: Offsite Discussion

OCL 2016: ** Deadline Extension ** Submit Your Paper Until July 24, 2016

General haskell list - Sun, 07/17/2016 - 10:13am
(Apologies for duplicates) If you are working on the foundations, methods, or tools for OCL or textual modelling, you should now finalise your submission for the OCL workshop! *** The submission deadline has been extended to July 24th, 2016! *** CALL FOR PAPERS 16th International Workshop on OCL and Textual Modeling Co-located with ACM/IEEE 19th International Conference on Model Driven Engineering Languages and Systems (MODELS 2016) October 2, 2016, Saint-Malo, France http://oclworkshop.github.io Modeling started out with UML and its precursors as a graphical notation. Such visual representations enable direct intuitive capturing of reality, but some of their features are difficult to formalize and lack the level of precision required to create complete and unambiguous specifications. Limitations of the graphical notations encouraged the development of text-based modeling languages that either integrate with or replace graphical notations
Categories: Incoming News

OCL 2016: ** Deadline Extension ** Submit Your Paper Until July 24, 2016

haskell-cafe - Sun, 07/17/2016 - 10:13am
(Apologies for duplicates) If you are working on the foundations, methods, or tools for OCL or textual modelling, you should now finalise your submission for the OCL workshop! *** The submission deadline has been extended to July 24th, 2016! *** CALL FOR PAPERS 16th International Workshop on OCL and Textual Modeling Co-located with ACM/IEEE 19th International Conference on Model Driven Engineering Languages and Systems (MODELS 2016) October 2, 2016, Saint-Malo, France http://oclworkshop.github.io Modeling started out with UML and its precursors as a graphical notation. Such visual representations enable direct intuitive capturing of reality, but some of their features are difficult to formalize and lack the level of precision required to create complete and unambiguous specifications. Limitations of the graphical notations encouraged the development of text-based modeling languages that either integrate with or replace graphical notations
Categories: Offsite Discussion

JP Moresmau: Another Web-based Haskell IDE

Planet Haskell - Sat, 07/16/2016 - 8:16am
After giving up on EclipseFP, I've worked a bit on haskell-ide-engine and leksah, contributing little things here and there to try to make the Haskell IDE ecosystem a little bit better. But at some point, I tried to update the GTK libraries on my Ubuntu machine to get leksah to run, and broke my whole desktop. Hours of fun followed to get back to a working system. So I thought again about my efforts last year to have a web-based IDE for Haskell, because using the browser as the UI saves users a lot of pain: no UI libraries to install or update!

I started another little effort that I call "reload", both because it's another take on something I had started before and of course because it issues ":reload" commands to ghci when you change files. I have changed the setup, though. Now I use Scotty for the back end, with a REST API, and I use a pure Javascript front-end, with the Polymer library providing the web component framework and material design. I also use a web socket to send back GHCi results from the back end to the browser. I still use ghcid for the back end; maybe one day when haskell-ide-engine is released I can use that instead.

The functionality is still fairly simple: there is a file browser on the left, and the editor (I'm using the ACE web editor) on the right. There is no save button; any change is automatically saved to disk (you use source version control, right?). On the server, there is a GHCi session for each cabal component in your project, any change causes a reload, and you can see the errors/warnings in a menu and in the editor's annotations. You can build, run tests and benchmarks, and I've just added ":info" support. The fact that we're using GHCi makes it fast, but I'm sure there are loads of wrinkles to iron out still.

Anyway, if you're interested in a test ride, just clone from Github and get going!
Categories: Offsite Blogs

Passing a cabal flag to stack

haskell-cafe - Sat, 07/16/2016 - 4:22am
I’ve been trying to compile hoodle following the instructions in the readme: https://github.com/wavewave/hoodle Went further than year or two earlier when attempt got mired in cabal-hell This time stack build went through (after a couple of deb packages needing installation) Now I get this error on trying to run GTK+ 2.x symbols detected. Using GTK+ 2.x and GTK+ 3 in the same process is not supported Asking on the hoodle mailing list I was told that one has to build poppler with option -fgtk3 Where/How to give that? Looking at https://github.com/commercialhaskell/stack/issues/191 I tried the following: 1. Delete every file/directory under .stack-work that has the word poppler 2. stack build poppler --flag gtk:gtk3 3. stack build Seems to have built Ok I get the same warning that stack seems to be generally giving — dozens of them — viz ============ No packages found in snapshot which provide a "gtk2hsC2hs" executable, which is a build-tool dependency of "glib" Missing build-tools may be caused
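
For reference, the two usual places to set a Cabal flag with stack, assuming the flag really is the poppler package's gtk3 flag as the hoodle list suggested:

# in stack.yaml, so the setting persists across builds
flags:
  poppler:
    gtk3: true

or, one-off on the command line, stack build --flag poppler:gtk3 (note the package:flag form).
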
Categories: Offsite Discussion

Call For Participation: WADT 2016

General haskell list - Fri, 07/15/2016 - 1:58pm
Registration for WADT 2016 is now open. Early registration ends on: Monday, July 18, 2016. Note that we can offer a number of reduced rate places for students / young researchers to attend WADT'16, who are not registered as an author for a paper. These places are limited to early registration. Link: http://cs.swan.ac.uk/wadt16/ When Sep 21, 2016 - Sep 24, 2016 Where Gregynog, UK Submission Deadline June 17, 2016 (extended) Notification July 3, 2016 (extended) Final Version Due July 15, 2016 AIMS AND SCOPE The algebraic approach to system specification encompasses many aspects of the formal design of software systems. Originally born as formal method for reasoning about abstract data types, it now covers new specification frameworks and programming paradigms (such as object-oriented, aspect-oriented, agent-oriented, logic and higher-order functional programming) as well as a wide range of application areas (including informatio
Categories: Incoming News

Well-Typed.Com: Announcing MuniHac

Planet Haskell - Fri, 07/15/2016 - 12:31pm

We are happy to announce

MuniHac

Friday, September 2 – Sunday, September 4, 2016, Munich

This Hackathon is intended for everyone who is interested in writing programs in Haskell, whether beginner or expert, whether hobbyist or professional.

In the tradition of other Haskell Hackathons such as ZuriHac, HacBerlin, UHac and many more, the plan is to bring together up to a hundred Haskell enthusiasts to work together on any Haskell-related projects they like, to share experiences, and to learn new things.

This Hackathon is organized by TNG Technology Consulting GmbH and Well-Typed LLP.

Attendance is free of charge, but there is a limited capacity, so you must register!

We are going to set up a mentor program and special events for Haskell newcomers. So if you are a Haskell beginner, you are very much welcome! And if you’re an expert, we’d appreciate if you’d be willing to spend some of your time during the Hackathon mentoring newcomers. We will ask you about this during the registration process.

We’re also planning to have a number of keynote talks at the Hackathon. We’re going to announce these soon.

We hope to see you in Munich!

Categories: Offsite Blogs

another instance of MonadError

haskell-cafe - Thu, 07/14/2016 - 11:19pm
Hello, IO is an instance of MonadError IOException... However I also need to make it an instance of MonadError String... Is it possible? I'm trying to instantiate this class: class (Typeable n, Monad n, Applicative n, MonadError String n) => EvMgt n where ... instance EvMgt IO where... Any idea?
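
A sketch of one common way out (my reading, not from the thread): MonadError has a functional dependency m -> e, so IO can be an instance for only one error type (IOException); a stack such as ExceptT String IO does provide MonadError String, and IO actions are lifted into it. The EvM name and "config.txt" are made up for illustration.

import Control.Exception (IOException, try)
import Control.Monad.Except (ExceptT, runExceptT, throwError)
import Control.Monad.IO.Class (liftIO)

type EvM = ExceptT String IO

-- run an IO action, turning any IOException into a String error
io :: IO a -> EvM a
io act = do
  r <- liftIO (try act)
  case r of
    Left e  -> throwError (show (e :: IOException))
    Right x -> return x

example :: EvM String
example = do
  s <- io (readFile "config.txt")
  if null s then throwError "empty config" else return s

main :: IO ()
main = runExceptT example >>= print
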
Categories: Offsite Discussion

Douglas M. Auclair (geophf): 1HaskellADay 1Liners June 2016

Planet Haskell - Thu, 07/14/2016 - 9:22pm
  • June 13th, 2016:
    You want this list: [1, -1, 1, -1, ...]
    How would you produce this value in #Haskell ?
    • Wai Lee Chin Feman @wchinfeman
      https://gist.github.com/skatenerd/08d70c45499e1610206a
      (set plop to be identity, and set transformstate to be (*) -1)
    • Philipp Maier @AkiiZedd `iterate negate 1’
    • Patrick Mylund @pmylund concat $ repeat [1, (-1)]
      • Gary Fixler @gfixler No need for the parens in a list.
    • Jeff Foster @fffej and Kevin Meredith @Gentmen
      iterate (* (- 1)) 1
    • Spencer Janssen @spencerjanssen and Андреев Кирилл @nonaem00
      cycle [1, -1]
      • Philipp Maier @AkiiZedd:
        I’m curious: Since concat is O(n) wouldn’t it take more and more time depending on how many items you take?
      • Patrick Mylund @pmylund Looks like they compile to the same thing https://gist.github.com/patrickmn/9a92ab2a088018b2c0631f3bcfd60ebe
      • Philipp Maier @AkiiZedd I’m actually surprised the compiler can optimise this away :o Thanks for showing me ddump-simpl!
      • Eyal Lotem @EyalL concat is foldr (++), not foldl. O(1) work is done to produce the next item. [1,-1]++([1,-1]++(...
    • David Turner @DaveCTurner I'd actually write 'cycle [1,-1]' but I like the elegant, alliterative obscurity of   'iterate negate 1'
    • Fatih Karakurt @karakfa alt=1:[-x|x<-alt]
Categories: Offsite Blogs

14th ACM MobiWac 2016, MALTA

General haskell list - Thu, 07/14/2016 - 4:11pm
** We apologize if you receive multiple copies of this message ** ================================================================== The 14th ACM International Symposium on Mobility Management and Wireless Access (MobiWac 2016) November 13 - 17, 2016 - Malta http://mobiwac-symposium.org/ ================================================================== The MOBIWAC series of event is intended to provide an international forum for the discussion and presentation of original ideas, recent results and achievements by researchers, students, and systems developers on issues and challenges related to mobility management and wireless access protocols. To keep up with the technological developments, we also open up new areas such as mobile cloud computing starting from this year. Authors are encouraged to submit both theoretical and practical results of significance on all aspects of wire
Categories: Incoming News

Mark Jason Dominus: Surprising reasons to use a syntax-coloring editor

Planet Haskell - Thu, 07/14/2016 - 9:15am

[ Danielle Sucher reminded me of this article I wrote in 1998, before I had a blog, and I thought I'd repatriate it here. It should be interesting as a historical artifact, if nothing else. Thanks Danielle! ]

I avoided syntax coloring for years, because it seemed like a pretty stupid idea, and when I tried it, I didn't see any benefit. But recently I gave it another try, with Ilya Zakharevich's `cperl-mode' for Emacs. I discovered that I liked it a lot, but for surprising reasons that I wasn't expecting.

I'm not trying to start an argument about whether syntax coloring is good or bad. I've heard those arguments already and they bore me to death. Also, I agree with most of the arguments about why syntax coloring is a bad idea. So I'm not trying to argue one way or the other; I'm just relating my experiences with syntax coloring. I used to be someone who didn't like it, but I changed my mind.

When people argue about whether syntax coloring is a good idea or not, they tend to pull out the same old arguments and dust them off. The reasons I found for using syntax coloring were new to me; I'd never seen anyone mention them before. So I thought maybe I'd post them here.

Syntax coloring is when the editor understands something about the syntax of your program and displays different language constructs in different fonts. For example, cperl-mode displays strings in reddish brown, comments in a sort of brick color, declared variables (in my) in gold, builtin function names (defined) in green, subroutine names in blue, labels in teal, and keywords (like my and foreach) in purple.

The first thing that I noticed about this was that it was easier to recognize what part of my program I was looking at, because each screenful of the program had its own color signature. I found that I was having an easier time remembering where I was or finding the parts I was looking for when I scrolled around in the file. I wasn't doing this consciously; I couldn't have described the color scheme of any particular part of the program, but having red, gold, and purple blotches all over made it easier to tell parts of the program apart.

The other surprise I got was that I was having more fun programming. I felt better about my programs, and at the end of the day, I felt better about the work I had done, just because I'd spent the day looking at a scoop of rainbow sherbet instead of black and white. It was just more cheerful to work with varicolored text than monochrome text. The reason I had never noticed this before was that the other coloring editors I used had ugly, drab color schemes. Ilya's scheme won here by using many different hues.

I haven't found many of the other benefits that people say they get from syntax coloring. For example, I can tell at a glance whether or not I failed to close a string properly—unless the editor has screwed up the syntax coloring, which it does often enough to ruin the benefit for me. And the coloring also slows down the editor. But the two benefits I've described more than outweigh the drawbacks for me. Syntax coloring isn't a huge win, but it's definitely a win.

If there's a lesson to learn from this, I guess it's that it can be valuable to revisit tools that you rejected, to see if you've changed your mind. Nothing anyone said about it was persuasive to me, but when I tried it I found that there were reasons to do it that nobody had mentioned. Of course, these reasons might not be compelling for anyone else.

Addenda 2016

Looking back on this from a distance of 18 years, I am struck by the following thoughts:

  1. Syntax highlighting used to make the editor really slow. You had to make a real commitment to using it or not. I had forgotten about that. Another victory for Moore’s law!

  2. Programmers used to argue about it. Apparently programmers will argue about anything, no matter how ridiculous. Well okay, this is not a new observation. Anyway, this argument is now finished. Whether people use it or not, they no longer find the need to argue about it. This is a nice example that sometimes these ridiculous arguments eventually go away.

  3. I don't remember why I said that syntax highlighting “seemed like a pretty stupid idea”, but I suspect that I was thinking that the wrong things get highlighted. Highlighters usually highlight the language keywords, because they're easy to recognize. But this is like highlighting all the generic filler words in a natural language text. The words you want to see are exactly the opposite of what is typically highlighted.

    Syntax highlighters should be highlighting the semantic content like expression boundaries, implied parentheses, boolean subexpressions, interpolated variables and other non-apparent semantic features. I think there is probably a lot of interesting work to be done here. Often you hear programmers say things like “Oh, I didn't see that the trailing comma was actually a period.” That, in my opinion, is the kind of thing the syntax highlighter should call out. How often have you heard someone say “Oh, I didn't see that while there”?

  4. I have been misspelling “arguments” as “argmuents” for at least 18 years.

Categories: Offsite Blogs

efficient operations on immutable structures

haskell-cafe - Thu, 07/14/2016 - 7:07am
Hi again all. From some online research, I understand that operations on complex immutable structures (for example, a "setter" function a -> (Int, Int) -> Matrix -> Matrix which alters one element in the Matrix) is not necessarily inefficient in Haskell, because the (compiler? runtime?) has some means of sharing unchanged values between the input and output structure. What I am not clear on, however, is how this works, and how you ensure that this happens. Could someone perhaps elaborate, or point me to a really good read on the subject?
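
A sketch that may help make the sharing point concrete (the Tree type and the path encoding are invented for illustration):

-- Only the nodes on the path from the root to the changed position are
-- rebuilt; every untouched subtree of the old value is reused (shared) by
-- the new value. Nothing special is needed from the compiler: the new
-- structure simply points at pieces of the old one, and the garbage
-- collector keeps whatever remains reachable.
data Tree a = Leaf | Node (Tree a) a (Tree a)

-- A made-up path representation: False = go left, True = go right.
setAt :: [Bool] -> a -> Tree a -> Tree a
setAt _      _ Leaf         = Leaf                  -- path not present
setAt []     x (Node l _ r) = Node l x r            -- l and r are shared
setAt (b:bs) x (Node l v r)
  | b         = Node l v (setAt bs x r)             -- l is shared as-is
  | otherwise = Node (setAt bs x l) v r             -- r is shared as-is

A Matrix backed by a tree-shaped structure (for example Data.Map keyed by (Int, Int)) gets the same behaviour for free: a single-element setter costs O(log n) and shares everything else, whereas a flat array would have to be copied wholesale.
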
Categories: Offsite Discussion

Proposal: Add `restriction` to Data.Map and Data.IntMap

libraries list - Thu, 07/14/2016 - 5:45am
Cale Gibbard proposes the following: Data.IntMap.restriction :: IntSet -> IntMap a -> IntMap a Data.Map.restriction :: Ord k => Set k -> Map k a -> Map k a In each case, the map is filtered to contain only the keys that are also found in the set. This can be implemented efficiently using a slightly stripped-down version of Data.Map.intersection. David Feuer
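
As a reading aid, a naive sketch of the proposed semantics (not the proposed implementation, which would reuse a stripped-down Data.Map.intersection for efficiency):

import qualified Data.Map as Map
import qualified Data.Set as Set

restriction :: Ord k => Set.Set k -> Map.Map k a -> Map.Map k a
restriction s = Map.filterWithKey (\k _ -> k `Set.member` s)

-- e.g. restriction (Set.fromList [1,3]) (Map.fromList [(1,'a'),(2,'b'),(3,'c')])
--      == Map.fromList [(1,'a'),(3,'c')]
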
Categories: Offsite Discussion

Proposal: Add Foldable helper to Control.DeepSeq

libraries list - Thu, 07/14/2016 - 4:16am
As I describe in https://github.com/haskell/deepseq/issues/17 it is possible to implement an NFData instance for any Foldable type, and we can offer a function to help do that: data Unit = Unit instance Monoid Unit where mempty = Unit Unit `mappend` Unit = Unit -- strict in both arguments, unlike () rnfFoldable :: (Foldable f, NFData a) => f a -> () rnfFoldable xs = foldMap (\x -> rnf x `seq` Unit) xs `seq` () This could be used like so: instance NFData a => NFData (F a) where rnf = rnfFoldable This version forces from left to right. It would be possible to offer another version that forces from right to left: data Unit2 = Unit2 instance Monoid Unit2 where mempty = Unit2 x `mappend` Unit2 = x `seq` Unit2 rnfFoldableRTL :: (Foldable f, NFData a) => f a -> () rnfFoldableRTL xs = foldMap (\x -> rnf x `seq` Unit2) xs `seq` ()
Categories: Offsite Discussion

How to support multiple string types in Haskell?

haskell-cafe - Thu, 07/14/2016 - 1:40am
Hi all, There are multiple string types in Haskell – String, lazy/strict ByteString, lazy/strict Text to name a few. So to make a string handling function maximally reusable, it needs to support multiple string types. One approach used by TagSoup library is to make a type class StringLike which represents the polymorphic string type and uses it where a String type is normally needed. For example, parseTags :: StringLike str => str -> [Tag str] Here parseTags takes a StringLike type instead of a fixed string type. Users of TagSoup can pick any of String, lazy/strict ByteString, lazy/strict Text because they are all instances of StringLike type class. It seems StringLike type class is quite generic but it is used only in the TagSoup package. This makes me wonder what is the idiomatic way to support multiple string types in Haskell. What other approaches do we have? Thanks, Kwang Yul Seo
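
For comparison, a minimal sketch of the type-class approach described above; the StrLike class, its methods and startsWith are invented for illustration and are much smaller than TagSoup's real StringLike.

{-# LANGUAGE FlexibleInstances #-}
import qualified Data.Text as T

class StrLike s where
  strUncons :: s -> Maybe (Char, s)
  fromStr   :: String -> s

instance StrLike String where
  strUncons []     = Nothing
  strUncons (c:cs) = Just (c, cs)
  fromStr          = id

instance StrLike T.Text where
  strUncons = T.uncons
  fromStr   = T.pack

-- a small "string handling function" that works at both types
startsWith :: StrLike s => Char -> s -> Bool
startsWith c s = case strUncons s of
  Just (c', _) -> c == c'
  Nothing      -> False
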
Categories: Offsite Discussion