News aggregator

OT - targeting many systems

haskell-cafe - Mon, 01/13/2014 - 11:15pm
I love Haskell, and it gets many things right. However, it is hard, for example, to target JS in an efficient way. Sometimes I just need a quick and dirty (somewhat typesafe) program which gets the job done - sometimes it's JS, sometimes a console tool, sometimes web/client code (also smartphone applets). haxe.org comes close, but while the language is fun to use it still has some shortcomings - and sometimes I hit unexpected bugs. The real question I have is: Haskell is cool, being lazy is cool, having OO is cool (because you can target Java/Flash/.. easily) - how to get the best of all worlds? So I am thinking about creating my own language. It should look more like a source code generator than a language - one which contains different sub-universes. The idea is to code some parts - such as a parser - in a Haskell-like dialect, then reuse them in a quick and dirty OO world (Java-like). Of course an additional target could be just plain old C, because C++ is known to be very complex - and there are tons of legacy code which should
Categories: Offsite Discussion

Enabling TypeHoles by default

glasgow-user - Mon, 01/13/2014 - 8:42pm
Hello, As discussed on ghc-devs, I propose to enable -XTypeHoles in GHC by default. Rationale: (1) This way holes are far easier to use; just entering "_" lets you check the type of a subexpression, with no need to add "-XTypeHoles". (2) This affects error messages only; i.e. the set of programs that compile stays the same - Haskell 2010. The only exception is that if you use -fdefer-type-errors, then a program with a hole compiles, but this seems in line with the philosophy of -fdefer-type-errors. If so: would you like it to be in 7.8, or to wait a cycle? My preference is 7.8; two people (John and Richard) suggested 7.10. -KG
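
For reference, a minimal sketch of what a typed hole looks like in practice (module and function names invented for illustration; the exact wording of GHC's message may differ):

    module Holes where

    f :: [Int] -> Int
    f xs = foldr _ 0 xs
    -- With -XTypeHoles, GHC reports something like:
    --   Found hole '_' with type: Int -> Int -> Int
    -- With -fdefer-type-errors as well, this becomes a warning and the
    -- module still compiles.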
Categories: Offsite Discussion

Edward Z. Yang: Ott ⇔ PLT Redex

Planet Haskell - Mon, 01/13/2014 - 6:11pm

Ott and PLT Redex are a pair of complementary tools for the working semanticist. Ott is a tool for writing definitions of programming languages in a nice ASCII notation, which can then be typeset in LaTeX or used to generate definitions for a theorem prover (e.g. Coq). PLT Redex is a tool for specifying and debugging operational semantics. Both tools are easy to install, which is a big plus (cough K Framework cough). Since the tools are quite similar, I thought it might be interesting to do a comparison of how various common tasks are done in both languages. (Also, I think the Redex manual is pretty terrible.)

Variables. In Ott, variables are defined by way of metavariables (metavar x), which then serve as variables (by either using the metavariable alone, or suffixing it with a number, index variable or tick).

In Redex, there is no notion of a metavariable; a variable is just another production. There are a few different ways to say that a production is a variable: the simplest method is to use variable-not-otherwise-mentioned, which automatically prevents keywords from acting as variables. There are also several other variable patterns - variable, variable-except and variable-prefix - which afford more control over which symbols are considered variables. side-condition may also be useful if you have a function which classifies variables.

Grammar. Both Ott and Redex can identify ambiguous matches. Ott will error when it encounters an ambiguous parse. Redex, on the other hand, will produce all valid parses; while this is not so useful when parsing terms, it is quite useful when specifying non-deterministic operational semantics (although this can have bad performance implications). check-redundancy may be useful to identify ambiguous patterns.

Binders. In Ott, binders are explicitly declared in the grammar using bind x in t; there is also a binding language for collecting binders for pattern-matching. Ott can also generate substitution/free variable functions for the semantics. In Redex, binders are not stated in the grammar; instead, they are implemented solely in the reduction language, usually using substitution (Redex provides a workhorse substitution function for this purpose), and explicitly requiring a variable to be fresh. Redex does have a special-form in the metalanguage for doing let-binding (term-let), which substitutes immediately.

Lists. Ott supports two forms of lists: dot forms and list comprehensions. A dot form looks like x1 , .. , xn and requires an upper bound. A list comprehension looks like </ xi // i IN 1 .. n />; the bounds can be omitted. A current limitation of Ott is that it doesn't understand how to deal with nested dot forms; this can be worked around by doing a comprehension over a production, and then elsewhere stating the appropriate equalities the production satisfies.

Redex supports lists using ellipsis patterns, which look like (e ...). There is no semantic content here: the ellipsis simply matches zero or more copies of e, which can lead to nondeterministic matches when there are multiple ellipses. Nested ellipses are supported, and simply result in nested lists. Bounds can be specified using side-conditions; Redex also supports a limited form of bounding using named ellipses (e.g. ..._1), where all ellipses with the same name must have the same length.

Semantics. Ott is agnostic to whatever semantics you want to define; arbitrary judgments can be specified. One can also define judgments as usual in Redex, but Redex provides special support for evaluation semantics, in which a semantics is given in terms of evaluation contexts, thus allowing you to avoid the use of structural rules. So a usual use-case is to define a normal expression language, extend the language to have evaluation contexts, and then define a reduction-relation using in-hole to do context decomposition. The limitation is that if you need to do anything fancy (e.g. multi-hole evaluation contexts), you will have to fall back to judgment forms.

Type-setting. Ott supports type-setting by translation into LaTeX. Productions can have custom LaTeX associated with them, which is used to generate their output. Redex has a pict library for directly typesetting into PDF or Postscript; it doesn’t seem like customized typesetting is an intended use-case for PLT Redex, though it can generate reasonable Lisp-like output.

Conclusion. If I had to name the biggest difference between Ott and PLT Redex, it is that Ott is primarily concerned with the abstract semantic meaning of your definitions, whereas PLT Redex is primarily concerned with how you would go about matching against syntax (running it). One way to see this is in the fact that in Ott, your grammar is a BNF, which is fed into a CFG parser, whereas in PLT Redex, your grammar is a pattern language for the pattern-matching machine. This should not be surprising: one would expect each tool's design philosophy to hew toward its intended usage.

Categories: Offsite Blogs

High level overview of GHCi?

haskell-cafe - Mon, 01/13/2014 - 2:45pm
Dear Café, I was reading http://www.aosabook.org/en/ghc.html. Figure 5.2 gives a high level overview of the compiler passes when compiling Haskell with GHC. Is anyone aware of a similar figure that gives an overview of the passes when Haskell code is interpreted with GHCi? Thank you! Maarten Faddegon
Categories: Offsite Discussion

Future imports in GHC/Base

haskell-cafe - Mon, 01/13/2014 - 1:47pm
Hello all, I was wondering if there is any particular reason for Haskell/GHC not having a kind of "import from the future", similar to Python's import __future__ or, in a different take, to stuff like Modernizr for JavaScript. Has this been done already? My quite simple use case: I could really use the Foldable instance for Either, and I also ended up defining my own 'isLeft' and 'isRight'. I felt guilty of re-inventing the wheel after seeing them defined elsewhere - they were so obvious, of course. Then I saw them defined in GHC HEAD - but unfortunately not all packages support it yet. And I can only guess that installing base 4.7 on top of GHC 7.6.3 would certainly result in Cabal hell. I ended up copying the parts that I needed. Anyone else doing this? My proposal is simple, and quite restricted: 1) Include only stuff with no dependencies on new compiler features. 2) Focus mainly on additional typeclass instances, 3) or on new functions that otherwise should not interfere with existing code. 4) Use CPP to translate the code
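
A minimal sketch of the CPP approach in point (4), assuming the code is built with Cabal (which defines the MIN_VERSION_base macro); the module name Compat.Either is invented for illustration:

    {-# LANGUAGE CPP #-}
    module Compat.Either (isLeft, isRight) where

    #if MIN_VERSION_base(4,7,0)
    -- base >= 4.7 already ships these in Data.Either; just re-export them.
    import Data.Either (isLeft, isRight)
    #else
    -- Fallback definitions for older base versions.
    isLeft :: Either a b -> Bool
    isLeft (Left _) = True
    isLeft _        = False

    isRight :: Either a b -> Bool
    isRight (Right _) = True
    isRight _         = False
    #endif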
Categories: Offsite Discussion

upgrading from Xcode4 to Xcode5 on Mountain Lion

haskell-cafe - Mon, 01/13/2014 - 1:37pm
Hi, I have had the Xcode 4 -> 5 upgrade pending for a while on my Mountain Lion machine. Do any of you have pointers to the fixes needed in order to get Xcode 5 running with the current Haskell Platform? From time to time I've seen people having trouble, and I don't really know if it's recommended to upgrade on Mountain Lion... regards, Angel Alvarez (GMAIL) angeljalvarezmiguel< at >gmail.com
Categories: Offsite Discussion

ANN: tree-view-0.1

haskell-cafe - Mon, 01/13/2014 - 1:20pm
tree-view is a package for rendering trees as foldable HTML and Unicode art. http://hackage.haskell.org/package/tree-view

Example:

    *Data.Tree.View> drawTree $ Node "Add" [Node "Sub" [Node "3" [], Node "Mul" [Node "1" [], Node "2" []]], Node "4" []]
    Add
    ├╴Sub
    │ ├╴3
    │ └╴Mul
    │   ├╴1
    │   └╴2
    └╴4

/ Emil
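
For the curious, the same example as a standalone program (a sketch; it assumes drawTree :: Tree String -> IO (), which is what the GHCi session above suggests):

    import Data.Tree (Tree (Node))
    import Data.Tree.View (drawTree)

    main :: IO ()
    main = drawTree $
        Node "Add"
            [ Node "Sub" [Node "3" [], Node "Mul" [Node "1" [], Node "2" []]]
            , Node "4" []
            ]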
Categories: Offsite Discussion

Emacs - HaskellWiki

del.icio.us/haskell - Mon, 01/13/2014 - 11:07am
Categories: Offsite Blogs

any idea why binary isn't building on OSX

haskell-cafe - Mon, 01/13/2014 - 10:09am
src/Data/Binary/Get.hs:423:3: error: invalid preprocessing directive
  #-}
   ^
src/Data/Binary/Get.hs:511:53: warning: missing terminating ' character [-Winvalid-pp-token]
Categories: Offsite Discussion

Restrict values in type

haskell-cafe - Mon, 01/13/2014 - 5:38am
Hi, I'm quite new to Haskell, and have been loving exploring it. I've always been a huge fan of languages that let me catch errors at compile time, finding dynamic languages like Python a nightmare to work in. I'm finding that with Haskell I can take this compile-time checking even further than in most static languages, and it has gotten me rather excited. So I was wondering if there is a Haskell way of solving my problem. I'm trying to represent an image made up of a list of strokes. Strokes are either lines, arcs or spots, and can be made using different pen shapes.

    data Image = Image [Stroke]

    data Stroke
        = Line Point Point PenShape
        | Arc Point Point Point PenShape
        | Spot Point PenShape

    data PenShape
        = Circle Float
        | Rectangle Float Float
        | ArbitraryPen -- Stuff (not relevant)

And this is all great and works. But now I have a problem. I want to extend this such that Arc strokes are only allowed to have the Circle pen shape, and Lines are only allowed to have the Rectangle or Circle pen shapes
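
One possible way to encode that restriction in the types (a sketch only, using GADTs and DataKinds; the names PenKind and LinePen and the Point definition are invented for illustration):

    {-# LANGUAGE GADTs, DataKinds, KindSignatures, FlexibleInstances #-}
    module Strokes where

    type Point = (Float, Float)

    data PenKind = CircleK | RectangleK | ArbitraryK

    -- PenShape is now indexed by the kind of pen it describes.
    data PenShape (k :: PenKind) where
        Circle       :: Float -> PenShape 'CircleK
        Rectangle    :: Float -> Float -> PenShape 'RectangleK
        ArbitraryPen :: PenShape 'ArbitraryK

    -- An empty class listing the pen kinds a Line may use.
    class LinePen (k :: PenKind)
    instance LinePen 'CircleK
    instance LinePen 'RectangleK

    data Stroke where
        Line :: LinePen k => Point -> Point -> PenShape k -> Stroke
        Arc  :: Point -> Point -> Point -> PenShape 'CircleK -> Stroke
        Spot :: Point -> PenShape k -> Stroke

    newtype Image = Image [Stroke]

    -- Arc (0,0) (1,0) (1,1) (Rectangle 1 2)   -- rejected at compile time
    -- Line (0,0) (1,1) (Circle 0.5)           -- accepted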
Categories: Offsite Discussion

CSE 230, Winter 2012 - Home

del.icio.us/haskell - Mon, 01/13/2014 - 4:24am
Categories: Offsite Blogs

Haddock changes pushed upstream

haskell-cafe - Mon, 01/13/2014 - 3:37am
Hi all, As some of you might know, I hacked on Haddock over the summer as part of GSOC. After some tedious fighting with the GHC validation process and help from the guys on ghc-devs, the changes were finally pushed upstream a couple of hours ago. While I'm waiting for the maintainer to make a call on the official release, you can already get the changes if you compile GHC HEAD. For those not so eager to compile HEAD, 7.8 should be coming out soon and the changes will be in that. You can read the brief changelog at [1] and the updated documentation at [2]. We were careful not to make changes to any existing syntax that would result in a large number of packages breaking. If you're a package maintainer, here are some things to consider: * If your documentation looks fine as it is now, you're probably fine. Read the changelog [1], as it mentions some issues that were fixed. Amongst others, Haddock will now link qualified function names properly, so that's something to look out for. * none of your documentation should get p
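
As a small illustration of the qualified-name linking mentioned above (a hypothetical module; the behaviour described is from the changelog, and the exact rendering depends on the Haddock version):

    module Example (mySort) where

    import qualified Data.List

    -- | Sort a list. The reference to 'Data.List.sort' in this comment is a
    -- qualified identifier; the updated Haddock hyperlinks such names.
    mySort :: Ord a => [a] -> [a]
    mySort = Data.List.sort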
Categories: Offsite Discussion

Structural typing of records in Haskell?

haskell-cafe - Mon, 01/13/2014 - 3:00am
Are there statically typed languages that treat records with structural typing, either imperative or functional? Why should records not be structurally typed in Haskell? From what I understand, in the code below foo cannot take a Rec2, even though Rec1 and Rec2 are essentially the same.

    data Rec1 = Rec1 { a :: Int, b :: Bool }
    data Rec2 = Rec2 { a :: Int, b :: Bool }

    foo :: Rec1 -> Bool

Rec1 and Rec2 could be in totally different code libraries. I've read that preventing Rec2 from being used in foo is good for type safety, in that Rec1 and Rec2 are likely intended to have semantically different meanings, and allowing interchangeability breaks this. But then why is map structurally typed? map takes an argument of type a -> b, and suppose some other higher-order function bar also takes an argument of type a -> b. Should map instead have the type below, which prevents a function of type a -> b semantically intended for bar from being accidentally used in map?

    newtype Mapper a b = Mapper { fn :: a -> b }
    map :: Mapper a
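
The usual workaround for the record half of this question is to make foo polymorphic over a class capturing the shared structure (a sketch; the class HasAB and its methods are invented, and the fields are renamed because duplicate field names in one module would clash):

    module Records where

    data Rec1 = Rec1 { r1a :: Int, r1b :: Bool }
    data Rec2 = Rec2 { r2a :: Int, r2b :: Bool }

    -- Stands in for "any record with an Int field and a Bool field".
    class HasAB r where
        getA :: r -> Int
        getB :: r -> Bool

    instance HasAB Rec1 where
        getA = r1a
        getB = r1b

    instance HasAB Rec2 where
        getA = r2a
        getB = r2b

    -- foo now accepts both Rec1 and Rec2.
    foo :: HasAB r => r -> Bool
    foo r = getB r && getA r > 0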
Categories: Offsite Discussion

different behaviours with or without putStrLn

haskell-cafe - Mon, 01/13/2014 - 12:07am
Hi guys, I'm experiencing different behaviours with or without a "putStrLn"! :( Basically, with the following code, I want the evaluation to really happen on the "evaluate". I found out that it doesn't: it is evaluated elsewhere (I don't know where). If I put a putStrLn (commented below), the evaluation really happens there.

    execCommand :: (TVar MyState) -> StateT MyState IO () -> IO ()
    execCommand ts sm = do
        s   <- atomically $ readTVar ts
        s'  <- execStateT sm s
        s'' <- evaluate s'  -- evaluation should happen here, but it doesn't
        -- putStrLn $ displayMulti $ _multi s''
        atomically $ writeTVar ts s''

To give you more context, I have a state that, when evaluated, might not terminate. So I added a watchdog (like in mueval) that will kill the thread in case the evaluation doesn't terminate. That's why I need to be sure of where the evaluation takes place. Thanks! Corentin
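
A likely explanation, with a hedged sketch of a fix: evaluate only forces its argument to weak head normal form, so most of the state is still an unevaluated thunk at that point and gets forced later (e.g. by the putStrLn). Using Control.DeepSeq.force makes the whole state be evaluated inside execCommand, which is what a watchdog needs. This assumes MyState can be given an NFData instance; MyState itself comes from the original program:

    import Control.Concurrent.STM (TVar, atomically, readTVar, writeTVar)
    import Control.DeepSeq (force)  -- assumes: instance NFData MyState
    import Control.Exception (evaluate)
    import Control.Monad.State (StateT, execStateT)

    execCommand :: TVar MyState -> StateT MyState IO () -> IO ()
    execCommand ts sm = do
        s   <- atomically $ readTVar ts
        s'  <- execStateT sm s
        s'' <- evaluate (force s')  -- forces the whole state, not just WHNF
        atomically $ writeTVar ts s''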
Categories: Offsite Discussion

access programs own documentation

haskell-cafe - Sun, 01/12/2014 - 8:11pm
Hi guys, I'd like to access, from within my program, the program's own Haddock documentation. From the Cabal autogenerated Paths module, I can access the location of the program's data files, binaries etc. (getBinDir, getLibDir, getDataDir), but not the location of the docs. How can I do that? For example, on my machine: datadir = "/home/kau/.cabal/share/i386-linux-ghc-7.6.3/Nomyx-0.4.1" The documentation is generated in "/home/kau/.cabal/share/doc/i386-linux-ghc-7.6.3/Nomyx-0.4.1", but that depends on the configuration. Thanks, Corentin
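
For context, a minimal sketch of the generated Paths module interface being referred to (the module name Paths_Nomyx is inferred from the package name; note there is no doc-directory accessor, which is exactly the gap in question):

    import Paths_Nomyx (getBinDir, getDataDir, getLibDir)

    main :: IO ()
    main = mapM_ (>>= putStrLn) [getBinDir, getLibDir, getDataDir]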
Categories: Offsite Discussion

UTP-2014 Unifying Theories of Programming - call forpapers

General haskell list - Sun, 01/12/2014 - 7:15pm
**********************************************************************
5th International Symposium on Unifying Theories of Programming
co-located with FM2014
May 12 - 13, 2014, Singapore
http://www.comp.nus.edu.sg/~pat/UTP2014/index.html
**********************************************************************

CALL FOR PAPERS

Interest in the fundamental problem of the combination of formal notations and theories of programming has grown consistently in recent years. The theories define, in various different ways, many common notions, such as abstraction, refinement, choice, termination, feasibility, locality, concurrency and communication. Despite these differences, such theories may be unified in a way which greatly facilitates their study and comparison. Moreover, such a unification offers a means of combining different languages describing various facets and artifacts of software development in a seamless, logically consi
Categories: Incoming News