News aggregator

Announcing tasty-rerun

Haskell on Reddit - Mon, 01/20/2014 - 5:38am
Categories: Incoming News

RFC: include a cabal-install executable in future GHC releases

glasgow-user - Mon, 01/20/2014 - 2:02am
Hey everyone, I'd like to propose that GHC releases from 7.8.1 onwards include a cabal-install (a.k.a. cabal) executable, but not the library dependencies of cabal-install that aren't already distributed with GHC (unless GHC should have those deps baked in, and there are very, very good reasons not to do that). Currently, if someone wants just a basic Haskell install of the freshest GHC, they have to install a GHC bindist and then do a bootstrap build of cabal-install by hand (if they want to actually get anything done :) ). This is not a friendly situation for folks who are new to Haskell tooling but want to try out Haskell development on a server-style VM or the like! Point being: it'd be great for Haskell usability (and would save egads amounts of config time, even for seasoned users) if the GHC bindists / installers included a cabal-install binary. Thoughts? -Carter
Categories: Offsite Discussion

A modest proposal (re the Platform)

libraries list - Mon, 01/20/2014 - 1:14am
Looks like GHC 7.8 is pretty near release. And while I know that we really like to have a GHC out for a while, and perhaps see the .1 release, before we incorporate it into the Platform, this GHC, while including many new and anticipated things, seems pretty well hammered on. Combine that with the fact that the now two-month-late (all my fault) HP release, 2013.4.0.0, isn't slated to have all that much new in it, in part because it has the same GHC as the last HP release. Now, it would look really foolish, and be taken poorly (methinks), if we released an HP this month only to have GHC 7.8 come out in early Feb. Folks would really be scratching their heads and wondering about the Platform. SO - I'm proposing ditching the now-late 2013.4.0.0 (I admit, I'm finding it hard to get excited about it!) and instead moving right to putting out 2014.2.0.0, aimed for mid-March to mid-April. This release would have several big changes:

- GHC 7.8
- New Shake-based build for the Platform
- Support for validation via package tests
Categories: Offsite Discussion

Mark Jason Dominus: Notes on a system for abbreviating SQL queries

Planet Haskell - Sun, 01/19/2014 - 4:18pm

(This post inaugurates a new section on my blog, for incomplete notes. It often happens that I have some idea, usually for software, and I write up a bunch of miscellaneous notes about it, and then never work on it. I'll use this section to post some of those notes, mainly just because I think they might be interesting, but also in the faint hope that someone might get interested and do something with it.)

Why are simple SQL queries so verbose?

For example:

UPDATE batches b
  join products p using (product_id)
  join clients c using (client_id)
SET b.scheduled_date = NOW()
WHERE b.scheduled_date > NOW()
  and b.batch_processor = 'batchco'
  and c.login_name = 'mjd' ;

(This is 208 characters.)

I guess about two-thirds of this is unavoidable, but those join-using clauses ought to be omittable, or inferrable, or abbreviatable, or something.

b.batch_processor should be abbreviatable to at least batch_processor, since that's the only field in those three tables with that name. In fact it could probably be inferred from b_p. Similarly c.login_name -> login_name -> log or l_n.

update batches set sch_d = NOW() where sch_d > NOW() and bp = 'batchco' and cl.ln = 'mjd'

(Only 94 characters.)

cl.ln is inferrable: Only two tables begin with cl. None of the field names in the client_transaction_view table look like ln. So cl.ln unambiguously means client.login_name.
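As a sketch of that resolution rule (in Python, against a made-up schema; the table and field lists here are assumptions for illustration, not the real database):

```python
# Hypothetical schema: table name -> field names.
TABLES = {
    "clients": ["client_id", "login_name"],
    "client_transaction_view": ["client_id", "transaction_id", "amount"],
}

def candidates(qualified, tables=TABLES):
    """All table.field pairs an abbreviation like 'cl.ln' could mean.

    The table part must be a prefix of the table name; the field part
    is read as one letter per underscore-separated word.  This is one
    possible rule, not the only one.
    """
    tbl_ab, fld_ab = qualified.split(".")
    hits = []
    for tname, fields in tables.items():
        if not tname.startswith(tbl_ab):
            continue
        for field in fields:
            parts = field.split("_")
            if len(fld_ab) == len(parts) and all(
                p.startswith(a) for a, p in zip(fld_ab, parts)
            ):
                hits.append(f"{tname}.{field}")
    return hits
```

Exactly one hit means the abbreviation debreviates unambiguously; zero or several would trigger a request for clarification.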

Then the question arises of how to join the batches to the clients. This is the only really interesting part of this project, and the basic rule is that it shouldn't do anything really clever. There is a graph, which the program can figure out from looking at the foreign key constraints. And the graph should clearly have a short path from batches through products to clients.
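One way to sketch that "nothing clever" rule is a breadth-first search over the foreign-key graph, so the shortest join path always wins. The graph below is an assumed toy version of the schema, not anything derived from a real database:

```python
from collections import deque

# Hypothetical foreign-key graph: an edge means the schema allows a join.
FK_GRAPH = {
    "batches":  ["products"],
    "products": ["batches", "clients"],
    "clients":  ["products"],
}

def join_path(start, goal, graph=FK_GRAPH):
    """Shortest chain of tables joining start to goal (plain BFS)."""
    seen = {start}
    queue = deque([[start]])
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no join path exists
```

Because BFS explores paths in order of length, it never invents a long, clever route when a short one exists.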

bp might be globally ambiguous, but it can be disambiguated by assuming it's in one of the three tables involved.

If something is truly ambiguous, we can issue an intelligent request for clarification:

"bp" is ambiguous. Did you mean: 1. batches.batch_processor 2. batches.bun_predictor 0. None of the above which? _ Overview
  1. Debreviate table names
  2. Figure out required joins and join fields
  3. Debreviate field names

Can 1 and 2 really be separated? They can in the example above, but maybe not in general.

I think separating 3 and putting it at the end is a good idea: don't try to use field name abbreviations to disambiguate and debreviate table names. Only go the other way. But this means that we can't debreviate cl, since it might be client_transaction_view.

What if something like cl were left as ambiguous after stage 1, and disambiguated only in stage 3? Then information would be unavailable to the join resolution, which is the part that I really want to work.

About abbreviations

Abbreviations for batch_processor:

bp, b_p, ba_pr, batch_p

There is a tradeoff here: the more different kinds of abbreviations you accept, the more likely there are to be ambiguities.
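That tradeoff can be made concrete with a deliberately liberal matcher (a sketch; the matching rule itself is an assumption):

```python
def matches(abbrev, name):
    """Does abbrev plausibly abbreviate a snake_case name?

    Accepts both underscore styles ('b_p', 'ba_pr', 'batch_p') and the
    bare-initials style ('bp'): each abbreviated part must be a prefix
    of the corresponding part of the full name.
    """
    parts = name.split("_")
    if "_" in abbrev:
        abbr_parts = abbrev.split("_")
    else:
        abbr_parts = list(abbrev)  # 'bp' -> ['b', 'p']
    return len(abbr_parts) == len(parts) and all(
        p.startswith(a) for a, p in zip(abbr_parts, parts)
    )
```

Accepting the bare-initials style is exactly what makes "bp" collide with both batch_processor and bun_predictor; a stricter rule set would have fewer collisions but reject more abbreviations.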

About table inference

There could also be a preferences file that lists precedences for tables and fields: if it lists clients, then anything that could debreviate to clients or to client_transaction_view automatically debreviates to clients. The first iteration could just be a list of table names.
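A first iteration of that preference lookup might be no more than this (sketch; the file format and table names are assumptions):

```python
PREFERRED = ["clients"]  # first iteration: just a list of table names

def resolve(candidates, preferred=PREFERRED):
    """Pick the preferred candidate when an abbreviation is ambiguous."""
    favoured = [c for c in candidates if c in preferred]
    if len(favoured) == 1:
        return favoured[0]
    return None  # still ambiguous, or no preference applies
```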

About join inference

Short join paths are preferred to long join paths.

If it takes a long time to generate the join graph, cache it. Build it automatically on the first run, and then rebuild it on request later on.
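The cache logic is ordinary memoize-to-disk; a sketch (file format and function names are assumptions), where "rebuild on request" is just deleting the cache file:

```python
import json
import os

def load_join_graph(cache_path, build_graph):
    """Return the FK join graph, building and caching it on first use."""
    if os.path.exists(cache_path):
        with open(cache_path) as fh:
            return json.load(fh)   # later runs: read the cache
    graph = build_graph()          # first run: expensive schema walk
    with open(cache_path, "w") as fh:
        json.dump(graph, fh)
    return graph
```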

More examples

(this section blank)

Implementation notes

Maybe convert the input to a SQL::Abstract first, then walk the resulting structure, first debreviating names, then inserting joins, then debreviating the rest of the names. Then you can output the text version of the result if you want to.

Note that this requires that the input be valid SQL. Your original idea for the abbreviated SQL began with

update set batches.sch_d = NOW()

rather than

update batches set sch_d = NOW()

but the original version would probably be ruled out by this implementation. In this case that is not a big deal, but this choice of implementation might rule out more desirable abbreviations in the future.

Correcting dumb mistakes in the SQL language design might be in Quirky's purview. For example, suppose you do

select * from table where (something)

Application notes

RJBS said he would be reluctant to use the abbreviated version of a query in a program. I agree: it would be very foolish to do so, because adding a table or a field might change the meaning of an abbreviated SQL query that was written into a program ten years ago and has worked ever since. This project was never intended to abbreviate queries in program source code.

Quirky is mainly intended for one-off queries. I picture it going into an improved replacement for the MySQL command-line client. It might also find use in throwaway programs. I also picture a command-line utility that reads your abbreviated query and prints the debreviated version for inserting into your program.

Miscellaneous notes

(In the original document this section was blank. I have added here some notes I made in pen on a printout of the foregoing, on an unknown date.)

Maybe also abbreviate update => u, where => w, and => &. This cuts the abbreviated query from 94 to 75 characters.
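A sketch of that keyword pass (the exact savings depend on which substitutions the rule set accepts; the whole-word regex rule here is an assumption):

```python
import re

# One-character forms for a few SQL keywords.
ABBREV = {"update": "u", "where": "w", "and": "&"}

def shrink(query):
    """Collapse whole-word SQL keywords to their one-character forms."""
    return re.sub(
        r"[A-Za-z_]+",
        lambda m: ABBREV.get(m.group(0).lower(), m.group(0)),
        query,
    )
```

Matching whole identifier tokens (rather than substrings) keeps the "and" inside a word like "command" from being mangled.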

Since debreviation is easier [than join inference] do it first!

Another idea: "id" always means the main table's primary key field.

Categories: Offsite Blogs

Surprising interaction between ordinary comments and unary negation; recommended solution?

Haskell on Reddit - Sun, 01/19/2014 - 8:54am

Wadler's Law notwithstanding, line comments interact surprisingly with unary minus: they break extensional equality where it would (perhaps) be expected.

Prelude> (-23)
-23
Prelude> -23
-23
Prelude> 42-(-23)
65
Prelude> 42--23
42

An accepted proposal to change lexing of comments does exist, but it does not address this quirk. There is also the NegativeSyntax article which does not seem to be associated with any ticket, but is linked to from the wiki description of the NegationBindsTightly proposal, which has been "under discussion" for not quite four years.

It may seem trivial, but what would be the most straightforward way to remove this kind of ambiguity? That last GHCi input line should either produce a correct result or throw an error.

submitted by hispirus
[link] [10 comments]
Categories: Incoming News

ANN: unsafely, flexible access control for unsafe operations and instances

haskell-cafe - Sun, 01/19/2014 - 7:30am
Yesterday, I uploaded the library `unsafely` to Hackage. This package provides access control for unsafe operations and instances. Its purpose is somewhat similar to GHC's `NullaryTypeClasses`[^1] extension, but it permits more flexible access control. With this package, you can tag functions and type-class instances as *unsafe* in a type constraint. This library is useful when:

* You want to restrict access to *unsafe* operations by type constraint
* You have to provide some *unsafe* type instances for practical reasons

For example, when writing a computer algebra system with type classes, the `Double` type doesn't even form a semiring, but we need the instance `Semiring Double` if we want to combine symbolic computation and numerical methods. A simple example:

```haskell
{-# LANGUAGE FlexibleContexts, FlexibleInstances, RankNTypes #-}
{-# LANGUAGE UndecidableInstances #-}
module Main where im
```
Categories: Offsite Discussion

evaluating CAFs at compile time

haskell-cafe - Sun, 01/19/2014 - 2:14am
So I have a large CAF which is expensive to generate. Theoretically it should be possible to evaluate it totally at compile time, resulting in a bunch of constructor calls, ideally fully de-thunked and in the read-only segment of the binary. In the absence of an "eval at compile time" pragma, it seemed like TH should be able to do this, and searching haskell-cafe I turned up an old post by Wren where he does basically that; I eventually discovered the Lift class. However, if I understand Lift correctly (and I don't really understand much of TH), you need to create instances for every type you wish to generate, which seems like it would be a pain. Even if they can be automatically derived, it would spread TH dependency gunk throughout the whole program. Is this true? Is there a library that does the equivalent of an "eval at compile time" pragma? (Wren's proposed QAF library seems to have never made it t
Categories: Offsite Discussion

structured-haskell-mode screencast - Sat, 01/18/2014 - 10:32pm
Categories: Offsite Blogs
