News aggregator

heroku-buildpack-haste

haskell-cafe - Thu, 09/04/2014 - 2:47pm
Just to let you know that I created a Heroku buildpack that installs GHC and Haste in the running environment, so that web applications can invoke ghc, cabal, hastec, haste-inst, etc. I use it for the creation of a Haste IDE, but it may have other uses. https://github.com/agocorona/heroku-buildpack-haste
Categories: Offsite Discussion

Extensible Entities

Haskell on Reddit - Thu, 09/04/2014 - 1:54pm
Categories: Incoming News

Versioning modules, not packages

Haskell on Reddit - Thu, 09/04/2014 - 12:53pm

Let's say we have a big package, like lens.

Now, whenever there is a breaking change in any one of its modules, however small it may be, we have to bump the major version number (or else we're being evil). Bumping it means that packages that specify dependencies in the way recommended in the package versioning policy won't work with this new version, even if the change doesn't apply to them.

So my question/suggestion is the following: wouldn't it be cool if not only packages, but also modules were versioned?

Example/idea (backwards compatible, "opt-in"):

name: a-package
version: 1.0

library
  exposed-modules: Package.M1, Package.M2

No new syntax yet, M1 and M2 inherit a-package's version.

Now we release a new major version, with changes that only affect M2.

name: a-package
version: 1.1

library
  exposed-modules: Package.M1-1.0, Package.M2

M1 didn't change, so we specify that its version is still 1.0 (proposed new syntax). We didn't specify a version for M2, which means that it inherits the package's version, so it's at 1.1 now.

If there is a package that was using a-package's M1 like this (another sprinkle of new syntax ahead):

name: another-package
version: 1.0

library
  build-depends: a-package (Package.M1 == 1.0.*)

then another-package will still work with the newly released a-package-1.1, as it doesn't use M2.

Pros:

  • less dependency hell
  • updates that only change a couple numbers in a .cabal file are needed less often

Cons:

  • now it's not enough to just look at a package's version when getting dependencies; .cabal files also need to be inspected
  • if there's not one, but multiple numbers, it's easier to make a mistake

Thoughts? I can't be the first person to get this idea... what are the downsides, why do no package managers work this way?

submitted by willIEverGraduate
[link] [14 comments]
Categories: Incoming News

call by need vs call by value

Haskell on Reddit - Thu, 09/04/2014 - 10:40am

Given an AST, it is easy to see how a call-by-value strategy lets us split the AST into chunks and compute the dependencies in parallel: we simply evaluate all the arguments in parallel and pass them to the corresponding functions. But how can we split and compute in parallel the dependencies of an AST using a call-by-need evaluation strategy? Can someone please point me to some references?
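For the call-by-value half of the question, the usual picture (a toy sketch using the parallel package's Strategies, not code from the thread) is to spark the argument subtrees and combine the results:

import Control.Parallel.Strategies (runEval, rpar, rseq)

data Expr = Lit Int
          | Add Expr Expr

-- Call-by-value parallelism: evaluate both operands in parallel,
-- then apply the operator.
eval :: Expr -> Int
eval (Lit n)   = n
eval (Add l r) = runEval $ do
  l' <- rpar (eval l)   -- spark the left subtree
  r' <- rseq (eval r)   -- evaluate the right subtree now
  _  <- rseq l'         -- wait for the left result
  return (l' + r')

The difficulty under call by need is that it is not known in advance which arguments will ever be demanded, so evaluating them all in parallel may waste work.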

Thanks.

submitted by felipeZ
[link] [6 comments]
Categories: Incoming News

Ian Ross: Non-diffusive atmospheric flow #5: pre-processing

Planet Haskell - Thu, 09/04/2014 - 6:54am
Non-diffusive atmospheric flow #5: pre-processing September 4, 2014

Note: there are a couple of earlier articles that I didn’t tag as “haskell” so they didn’t appear in Planet Haskell. They don’t contain any Haskell code, but they cover some background material that’s useful to know (#3 talks about reanalysis data and what Z500 is, and #4 displays some of the characteristics of the data we’re going to be using). If you find terms here that are unfamiliar, they might be explained in one of these earlier articles.

The code for this post is available in a Gist.

Before we can get into the “main analysis”, we need to do some pre-processing of the Z500 data. In particular, we are interested in large-scale spatial structures, so we want to subsample the data spatially. We are also going to look only at the Northern Hemisphere winter, so we need to extract temporal subsets for each winter season. (The reason for this is that winter is the season where we see the most interesting changes between persistent flow regimes. And we look at the Northern Hemisphere because it’s where more people live, so it’s more familiar to more people.) Finally, we want to look at variability about the seasonal cycle, so we are going to calculate “anomalies” around the seasonal cycle.

We’ll do the spatial and temporal subsetting as one pre-processing step and then do the anomaly calculation separately, just for simplicity.

Spatial and temporal subsetting

The title of the paper we’re trying to follow is “Observed Nondiffusive Dynamics in Large-Scale Atmospheric Flow”, so we need to decide what we mean by “large-scale” and to subset our data accordingly. The Z500 data from the NCEP reanalysis dataset is at 2.5° × 2.5° resolution, which turns out to be a little finer than we need, so we’re going to extract data on a 5° × 5° grid instead. We’ll also extract only the Northern Hemisphere data, since that’s what we’re going to work with.

For the temporal subsetting, we need to take 181 days of data for each year starting on 1 November each year. Since the data starts at the beginning of 1948 and goes on to August 2014 (which is when I’m writing this), we’ll have 66 years of data, from November 1948 until April 2014. As usual when handling dates, there’s some messing around because of leap years, but here it basically just comes down to which day of the year 1 November is in a given year, so it’s not complicated.

The daily NCEP reanalysis geopotential height data comes in one file per year, with all the pressure levels used in the dataset bundled up in each file. That means that the geopotential height variable in each file has coordinates: time, level, latitude, longitude, so we need to slice out the 500 mb level as we do the other subsetting.

All this is starting to sound kind of complicated, and this brings us to a regrettable point about dealing with this kind of data – it’s messy and there’s a lot of boilerplate code to read and manipulate coordinate metadata. This is true pretty much whatever language you use for processing these multi-dimensional datasets and it’s kind of unavoidable. The trick is to try to restrict this inconvenient stuff to the pre-processing phase by using a consistent organisation of your data for the later analyses. We’re going to do that here by storing all of our Z500 anomaly data in a single NetCDF file, with 181-day long winter seasons back-to-back for each year. This will make time and date processing trivial.
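To make that last claim concrete, here is a tiny sketch (a hypothetical helper, not code from the post) of the index arithmetic the back-to-back layout buys us:

-- Decompose a time index in the output file into (winter-starting year,
-- day within the 181-day season) by plain integer arithmetic.
winterDay :: Int -> (Int, Int)
winterDay it = (1948 + it `div` 181, it `mod` 181)
-- winterDay 0   == (1948, 0)   -- first day of the 1948/49 winter (1 November 1948)
-- winterDay 181 == (1949, 0)   -- first day of the 1949/50 winter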

The code for the data subsetting is in the subset.hs program in the Gist. We’ll deal with it in a few bites.

Skipping the imports that we need, as well as a few “helper” type synonym definitions, the first thing that we need to do is to open one of the input NCEP NetCDF files to extract the coordinate metadata information. This listing shows how we do this:

Right refnc <- openFile $ indir </> "hgt.1948.nc"
let Just nlat = ncDimLength <$> ncDim refnc "lat"
    Just nlon = ncDimLength <$> ncDim refnc "lon"
    Just nlev = ncDimLength <$> ncDim refnc "level"
let (Just lonvar) = ncVar refnc "lon"
    (Just latvar) = ncVar refnc "lat"
    (Just levvar) = ncVar refnc "level"
    (Just timevar) = ncVar refnc "time"
    (Just zvar) = ncVar refnc "hgt"
Right lon <- get refnc lonvar :: SVRet CFloat
Right lat <- get refnc latvar :: SVRet CFloat
Right lev <- get refnc levvar :: SVRet CFloat

We open the first of the NetCDF files (I’ve called the directory where I’ve stored these things indir) and use the hnetcdf ncDim and ncVar functions to get the dimension and variable metadata for the latitude, longitude, level and time dimensions; we then read the complete contents of the “coordinate variables” (for level, latitude and longitude) as Haskell values (here, SVRet is a type synonym for a storable vector wrapped up in the way that’s returned from the hnetcdf get functions).

Once we have the coordinate variable values, we need to find the index ranges to use for subsetting. For the spatial subsetting, we find the start and end ranges for the latitudes that we want (17.5°N-87.5°N) and for the level, we find the index of the 500 mb level:

let late = vectorIndex LT FromEnd lat 17.5
    lats = vectorIndex GT FromStart lat 87.5
    levi = vectorIndex GT FromStart lev 500.0

using a helper function to find the correspondence between coordinate values and indexes:

data IndexStart = FromStart | FromEnd

vectorIndex :: (SV.Storable a, Ord a)
            => Ordering -> IndexStart -> SV.Vector a -> a -> Int
vectorIndex o s v val = case (go o, s) of
  (Nothing, _)        -> (-1)
  (Just i, FromStart) -> i
  (Just i, FromEnd)   -> SV.length v - 1 - i
  where go LT = SV.findIndex (>= val) vord
        go GT = SV.findIndex (<= val) vord
        vord = case s of
                 FromStart -> v
                 FromEnd   -> SV.reverse v
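As a quick sanity check of these helpers (toy values assuming the standard 2.5° global NCEP grid of 73 latitudes and 144 longitudes, not numbers taken from the post):

import qualified Data.Vector.Storable as SV

demoLat :: SV.Vector Float
demoLat = SV.fromList [90, 87.5 .. -90]   -- 73 latitudes, stored north to south

-- vectorIndex GT FromStart demoLat 87.5  ==  1    (lats)
-- vectorIndex LT FromEnd   demoLat 17.5  ==  29   (late)
--
-- With nlon = 144 this gives the subsampled output grid used later:
--   (late - lats) `div` 2 + 1  ==  15 latitudes
--   nlon `div` 2               ==  72 longitudes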

For the temporal subsetting, we just work out what day of the year 1 November is for leap and non-leap years – since November and December together are 61 days, for each winter season we need those months plus the first 120 days of the following year:

let inov1non = 305 - 1
    -- ^ Index of 1 November in non-leap years.
    wintertsnon = [0..119] ++ [inov1non..365-1]
    -- ^ Indexes of all winter days for non-leap years.
    inov1leap = 305 + 1 - 1
    -- ^ Index of 1 November in leap years.
    wintertsleap = [0..119] ++ [inov1leap..366-1]
    -- ^ Indexes of all winter days for leap years.
    winterts1948 = [inov1leap..366-1]
    winterts2014 = [0..119]
    -- ^ Indexes for winters in start and end years.

This is kind of hokey, and in some cases you do need to do more sophisticated date processing, but this does the job here.

Once we have all this stuff set up, the critical part of the subsetting is easy – for each input data file, we figure out what range of days we need to read, then use a single call to getS from hnetcdf (“get slice”):

forM_ winterts $ \it -> do
  -- Read a slice of a single time-step of data: Northern
  -- Hemisphere (17.5-87.5 degrees), 5 degree resolution, 500
  -- mb level only.
  let start = [it, levi, lats, 0]
      count = [1, 1, (late - lats) `div` 2 + 1, nlon `div` 2]
      stride = [1, 1, 2, 2]
  Right slice <- getS nc zvar start count stride :: RepaRet2 CShort

Here, we have a set of start indexes, a set of counts and a set of strides, one for each dimension in our input variable. Since the input geopotential height files have dimensions of time, level, latitude and longitude, we have four entries in each of our start, count and stride lists. The start values are the current day from the list of days we need (called it in the code), the level we’re interested in (levi), the start latitude index (lats) and zero, since we’re going to get the whole range of longitude. The count list gets a single time step, a single level, and a number of latitude and longitude values based on taking every other entry in each direction (since we’re subsetting from a spatial resolution of 2.5° × 2.5° to a resolution of 5° × 5°). Finally, for the stride list, we use a stride of one for the time and level directions (which doesn’t really matter anyway, since we’re only reading a single entry in each of those directions) and a stride of two in the latitude and longitude directions (which gives us the “every other one” subsetting in those directions).

All of the other code in the subsetting program is involved in setting up the output file and writing the Z500 slices out. Setting up the output file is slightly tedious (this is very common when dealing with NetCDF files – there’s always lots of metadata to be managed), but it’s made a little simpler by copying attributes from one of the input files, something that doing this in Haskell makes quite a bit easier than in Fortran or C. The next listing shows how the NcInfo for the output file is created, which can then be passed to the hnetcdf withCreateFile function to actually create the output file:

let outlat = SV.fromList $ map (lat SV.!) [lats,lats+2..late]
    outlon = SV.fromList $ map (lon SV.!) [0,2..nlon-1]
    noutlat = SV.length outlat
    noutlon = SV.length outlon
    outlatdim = NcDim "lat" noutlat False
    outlatvar = NcVar "lat" NcFloat [outlatdim] (ncVarAttrs latvar)
    outlondim = NcDim "lon" noutlon False
    outlonvar = NcVar "lon" NcFloat [outlondim] (ncVarAttrs lonvar)
    outtimedim = NcDim "time" 0 True
    outtimeattrs = foldl' (flip M.delete) (ncVarAttrs timevar)
                   ["actual_range"]
    outtimevar = NcVar "time" NcDouble [outtimedim] outtimeattrs
    outz500attrs = foldl' (flip M.delete) (ncVarAttrs zvar)
                   ["actual_range", "level_desc", "valid_range"]
    outz500var = NcVar "z500" NcShort [outtimedim, outlatdim, outlondim]
                 outz500attrs
    outncinfo = emptyNcInfo (outdir </> "z500-subset.nc") #
                addNcDim outlatdim # addNcDim outlondim # addNcDim outtimedim #
                addNcVar outlatvar # addNcVar outlonvar # addNcVar outtimevar #
                addNcVar outz500var

Although we can mostly just copy the coordinate variable attributes from one of the input files, we do need to do a little bit of editing of the attributes to remove some things that aren’t appropriate for the output file. Some of these things are just conventions, but there are some tools that may look at these attributes (actual_range, for example) and may complain if the data doesn’t match the attribute. It’s easier just to remove the suspect ones here.

This isn’t pretty Haskell by any means, and the hnetcdf library could definitely do with having some higher-level capabilities to help with this kind of file processing. I may add some based on the experimentation I’m doing here – I’m developing hnetcdf in parallel with writing this!

Anyway, the result of running the subsetting program is a single NetCDF file containing 11946 days (66 × 181) of Z500 data at a spatial resolution of 5° × 5°. We can then pass this on to the next step of our processing.

Seasonal cycle removal

In almost all investigations in climate science, the annual seasonal cycle stands out as the largest form of variability (not always true in the tropics, but in the mid-latitudes and polar regions that we’re looking at here, it’s more or less always true). The problem, of course, is that the seasonal cycle just isn’t all that interesting. We learn about the difference between summer and winter when we’re children, and although there is much more to say about seasonal variations and how they interact with other phenomena in the climate system, much of the time they just get in the way of seeing what’s going on with those other phenomena.

So what do we do? We “get rid” of the seasonal cycle by looking at what climate scientists call “anomalies”, which are basically just differences between the values of whatever variable we’re looking at and values from a “typical” year. For example, if we’re interested in daily temperatures in Innsbruck over the course of the twentieth century, we construct a “typical” year of daily temperatures, then for each day of our 20th century time series, we subtract the “typical” temperature value for that day of the year from the measured temperature value to give a daily anomaly. Then we do whatever analysis we want based on those anomalies, perhaps looking for inter-annual variability on longer time scales, or whatever.
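As a minimal illustration of that recipe (hypothetical names, ignoring leap years for simplicity), the whole anomaly operation is just a subtraction aligned on day of year:

anomalies :: [Double]   -- "typical" year: one climatological value per day of year
          -> [Double]   -- observed daily series, starting on the first day of some year
          -> [Double]   -- daily anomalies
anomalies typical obs = zipWith (-) obs (cycle typical)

The real code below does exactly this, but for 181-day winter seasons of gridded data read from disk.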

This approach is very common, and it’s what we’re going to do for our Northern Hemisphere winter-time Z500 data here. To do this, we need to do two things: we need to calculate a “typical” annual cycle, and we need to subtract that typical annual cycle from each year of our data.

OK, so what’s a “typical” annual cycle? First, let’s say a word about what we mean by “annual cycle” in this case. We’re going to treat each spatial point in our data independently, trusting to the natural spatial correlation in the geopotential height to smooth out any shorter-term spatial inhomogeneities in the typical patterns. We then do some sort of averaging in time to generate a “typical” annual cycle at each spatial point. It’s quite common to use the mean annual cycle over a fixed period for this purpose (a period of 30 years is common: 1960-1990, say). In our case, we’re going to use the mean annual cycle across all 66 years of data that we have. Here’s how we calculate this mean annual cycle (this is from the seasonal-cycle.hs program in the Gist):

-- Each year has 181 days, so 72 x 15 x 181 = 195480 data values.
let ndays = 181
    nyears = ntime `div` ndays

-- Use an Int array to accumulate values for calculating mean annual
-- cycle.  Since the data is stored as short integer values, this is
-- enough to prevent overflow as we accumulate the 2014 - 1948 = 66
-- years of data.
let sh = Repa.Z Repa.:. ndays Repa.:. nlat Repa.:. nlon
    slicecount = [ndays, nlat, nlon]
    zs = take (product slicecount) (repeat 0)
    init = Repa.fromList sh zs :: FArray3 Int

-- Computation to accumulate a single year's data.
let doone current y = do
      -- Read one year's data.
      let start = [(y - 1948) * ndays, 0, 0]
      Right slice <- getA innc z500var start slicecount :: RepaRet3 CShort
      return $! Repa.computeS $ current Repa.+^ (Repa.map fromIntegral slice)

-- Accumulate all data.
total <- foldM doone init [1948..2013]

-- Calculate the final mean annual cycle.
let mean = Repa.computeS $
           Repa.map (fromIntegral . (`div` nyears)) total :: FArray3 CShort

Since each year of data has 181 days, a full year’s data has 72 × 15 × 181 data values (72 longitude points, 15 latitude points, 181 days), i.e. 195,480 values. In the case here, since we have 66 years of data, there are 12,901,680 data values altogether. That’s a small enough number that we could probably slurp the whole data set into memory in one go for processing. However, we’re not going to do that, because there are plenty of cases in climate data analysis where the data sets are significantly larger than this, and you need to do “off-line” processing, i.e. to read data from disk piecemeal for processing.

We do a monadic fold (using the standard foldM function) over the list of years of data, and for each year we read a single three-dimensional slice of data representing the whole year and add it to the current accumulated sum of all the data. (This is what the doone function does: the only slight wrinkle here is that we need to deal with conversion from the short integer values stored in the data file to the Haskell Int values that we use in the accumulator. Otherwise, it’s just a question of applying Repa’s element-wise addition operator to the accumulator array and the year’s data.) Once we’ve accumulated the total values across all years, we divide by the number of years and convert back to short integer values, giving a short integer valued array containing the mean annual cycle – a three-dimensional array with one entry for each day in our 181-day winter season and for each latitude and longitude in the grid we’re using.

Once we have the mean annual cycle with which we want to calculate anomalies, determining the anomalies is simply a matter of subtracting the mean annual cycle from each year’s data, matching up longitude, latitude and day of year for each data point. The next listing shows the main loop that does this, reading a single day of data at a time, then subtracting the appropriate slice of the mean annual cycle to produce anomaly values (Repa’s slice function is handy here) and writing these to an output NetCDF file:

let count = [1, nlat, nlon]
forM_ [0..ntime-1] $ \it -> do
  -- Read time slice.
  Right slice <- getA innc z500var [it, 0, 0] count :: RepaRet2 CShort
  -- Calculate anomalies and write out.
  let sl = Repa.Z Repa.:. (it `mod` 181) Repa.:. Repa.All Repa.:. Repa.All
      anom = Repa.computeS $ slice Repa.-^ (Repa.slice mean sl) :: FArray2 CShort
  putA outnc outz500var [it, 0, 0] count anom

The only thing we have to be a little bit careful about when we create the final anomaly output file is that we need to remove some of the attributes from the Z500 variable: because we’re now working with differences between actual values and our “typical” annual cycle, we no longer need the add_offset and scale_factor attributes that are used to convert from the stored short integer values to floating point geopotential height values. Instead, the values that we store in the anomaly file are the actual geopotential height anomaly values in metres.
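For reference, the packing convention those two attributes implement (a standard NetCDF convention, stated here as background rather than code from the post) is:

-- physical value = scale_factor * stored value + add_offset
unpack :: Double -> Double -> Int -> Double
unpack scaleFactor addOffset stored =
  scaleFactor * fromIntegral stored + addOffset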

After doing all this pre-processing, what we end up with is a single NetCDF file containing 66 winter seasons of daily Z500 anomalies for the region we’re interested in. The kind of rather boring data processing we’ve had to do to get to this point is pretty typical for climate data processing – you almost always need to do some sort of subsetting of your data, you often want to remove signals that aren’t interesting (like the annual cycle here, although things can get much more complicated than that). This kind of thing is unavoidable, and the best that you can really do is to try to organise things so that you do the pre-processing once and end up with data in a format that’s then easy to deal with for further processing. That’s definitely the case here, where we have fixed-length time series (181 days per winter) for each year, so we don’t need to do any messing around with dates.

In other applications, pre-processing can be a much bigger job. For functional brain imaging applications, for example, as well as extracting a three-dimensional image from the output of whatever (MRI or CT) scanner you’re using, you often need to do something about the low signal-to-noise ratios that you get, you need to compensate for subject motion in the scanner during the measurements, you need to compensate for time-varying physiological nuisance signals (breathing and heart beat), you need to spatially warp images to match an atlas image to enable inter-subject comparison, and so on. And all that is before you get to doing whatever statistical analysis you’re really interested in.

We will look at some of these “warts and all” pre-processing cases for other projects later on, but for now the small amount of pre-processing we’ve had to do here is enough. Now we can start with the “main analysis”.

Before we do that though, let’s take a quick look at what these anomaly values look like. The two figures below show anomaly plots for the same time periods for which the original Z500 data is shown in the plots in this earlier article. The “normal” anomaly plots are a bit more variable than the original Z500 plots, but the persistent pattern over the North Atlantic during the blocking episode in the second set of plots is quite clear. This gives us some hope that we’ll be able to pick out these persistent patterns in the data relatively easily.

First, the “normal” anomalies:



then the “blocking” anomalies:



data-analysis haskell
Categories: Offsite Blogs

Haskell Weekly News: Issue 304

haskell-cafe - Thu, 09/04/2014 - 6:41am
Welcome to issue 304 of the HWN, an issue covering crowd-sourced bits of information about Haskell from around the web. This issue covers from August 24 to 30, 2014

Quotes of the Week

  * pjdelport: Haskell: Lazy evaluation, eager debugging.
  * trap_exit: isn't hoogle awesome? the first time I used it, I was like "for the first time in my life, google sucks"

Top Reddit Stories

  * Hython - a Haskell-powered Python 3 interpreter
    Domain: github.com, Score: 95, Comments: 22
    Original: [1] http://goo.gl/aI16qA
    On Reddit: [2] http://goo.gl/N67fnX
  * Using Emacs for Haskell development
    Domain: github.com, Score: 89, Comments: 47
    Original: [3] http://goo.gl/tuPL91
    On Reddit: [4] http://goo.gl/RxVWFQ
  * Ivory Language
    Domain: ivorylang.org, Score: 70, Comments: 36
    Original: [5] http://goo.gl/xJJ68c
    On Reddit: [6] http://goo.gl/Vz9cTX
  * A taste of Cabalized Backpack : Inside 206-105
    Domain: blog.ezyang.com, Score: 61, Comments: 73
    Orig
Categories: Offsite Discussion

Suggestions on how to approach writing a specific type of server.

Haskell on Reddit - Thu, 09/04/2014 - 3:43am

We have a requirement at work for a task server. And we are thinking about writing it in haskell.

The server will do the following:

  • Wait for a connection
  • Get a json as payload.
  • The json payload will contain 3 fields. URL, TIME, CONTENT.
  • The job of the task server is to "POST" the "CONTENT" to the "URL" at the given "TIME"
  • This is the only job of this server. Nothing else. It will not be serving any html pages for now.

These are the constraints.

  • It should be scalable. When adding more servers, the load must be balanced.
  • If the "POST" does not work the first time, it should retry a specified number of times before failing
  • Should be robust and reliable. A power failure should not corrupt the data.
  • It should be fast

So far, I'm leaning towards using warp, but not sure if that's a good choice. The reason for this post is to get suggestions and opinions from more knowledgeable folks in the haskell community as to what libraries I can use and how I may proceed in writing this server. I wouldn't bother much if it was a personal project. Since this will be in production, I want to make sure I start on the right foot.
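For what it's worth, a minimal sketch of just the receiving endpoint (assuming warp and aeson; the Task field names and port are illustrative, and scheduling, retries and persistence, the actual hard parts, are omitted):

{-# LANGUAGE DeriveGeneric, OverloadedStrings #-}

import Data.Aeson (FromJSON, decode)
import GHC.Generics (Generic)
import Network.HTTP.Types (status200, status400)
import Network.Wai (Application, responseLBS, strictRequestBody)
import Network.Wai.Handler.Warp (run)

-- Shape of the JSON payload described above (field names are illustrative).
data Task = Task { url :: String, time :: String, content :: String }
  deriving (Show, Generic)

instance FromJSON Task

app :: Application
app req respond = do
  body <- strictRequestBody req
  case (decode body :: Maybe Task) of
    Just task -> do
      -- A real server would persist the task and schedule the POST here.
      print task
      respond $ responseLBS status200 [] "accepted"
    Nothing -> respond $ responseLBS status400 [] "bad payload"

main :: IO ()
main = run 8080 app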

Thanks

submitted by desijays
[link] [14 comments]
Categories: Incoming News

Proposal: Use uninterruptibleMask for cleanup actions in Control.Exception

libraries list - Wed, 09/03/2014 - 9:56pm
I'd like to propose a change in the behavior of Control.Exception to help guarantee cleanups are not accidentally lost. For example, bracket is defined as:

bracket before after thing =
  mask $ \restore -> do
    a <- before
    r <- restore (thing a) `onException` after a
    _ <- after a
    return r

This definition has a serious problem: "after a" (in either the exception handling case, or the ordinary case) can include interruptible actions which abort the cleanup. This means bracket does not in fact guarantee the cleanup occurs. For example:

readMVar = bracket takeMVar putMVar
-- If async exception occurs during putMVar, MVar is broken!

withFile .. = bracket (openFile ..) hClose
-- Async exception during hClose leaks the file handle!

Interruptible actions during "before" are fine (as long as "before" handles them properly). Interruptible actions during "after" are virtually always a bug -- at best leaking a resource, and at worst breaking the program's invariants. I propose changing all the cleanup ha
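For context on the truncated proposal text: a minimal sketch of the suggested direction (a hypothetical bracketU, not the actual patch) is to run the cleanup under uninterruptibleMask_ so that interruptible operations inside it cannot abort it:

import Control.Exception (mask, onException, uninterruptibleMask_)

-- Like bracket, but the cleanup action, once started, cannot be cut
-- short by an asynchronous exception.
bracketU :: IO a -> (a -> IO b) -> (a -> IO c) -> IO c
bracketU before after thing =
  mask $ \restore -> do
    a <- before
    r <- restore (thing a) `onException` uninterruptibleMask_ (after a)
    _ <- uninterruptibleMask_ (after a)
    return r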
Categories: Offsite Discussion

References on how to compile lazy functional languages

Haskell on Reddit - Wed, 09/03/2014 - 4:56pm

Hello to all!

I'm interested in implementing an LLVM compiler for a lazy version of the simply typed lambda calculus, as an exercise. Which articles / books should I read in order to start this task? I know that there is the SPJ book "Implementation of Functional Programming Languages". Is this a good start, or is there a more recent reference?

Thanks!

submitted by rodrigogribeiro
[link] [3 comments]
Categories: Incoming News

Package bounds approximation

haskell-cafe - Wed, 09/03/2014 - 9:48am
Dear Cafe,

I was wondering if there exists a tool to approximate what would be the minimal versions of the dependencies required by a package. I know that calculating this exactly is a bit intractable, but I was considering whether a tool that works as follows would work:

1) Compile the package with known to work dependencies
2) For each version of dependency as d:
     For each function in d used by My Package:
       * Perform unification of the resulting type when compiled with known to work dependencies and the type exported by the api of the version of the dependency in consideration.
   If all unifications succeed, mark the version as compatible, incompatible otherwise

This approximation is obviously not complete. Nevertheless, I would like to get opinions about whether this would be a good/useful/feasible approximation? Does the current GHC api export enough functionality for this package to be feasible? Are there alternatives? I was consider doing this as my `hack`
Categories: Offsite Discussion

ICFP "The Ghost of Church" -- collaborate?

Haskell on Reddit - Wed, 09/03/2014 - 9:00am

I am at ICFP right now (it's great!). There seems to be some kind of mystery going on called "The Ghost of Church". It involves some QR-codes that are put up at the conference venue here and there.

I have worked on it a bit, but I am utterly stuck! Has anyone else here taken a look at it? Has anyone solved it yet?

Shall we collaborate and win this thing with "team /r/haskell"? We could use this thread to share information.

submitted by icfpfan
[link] [3 comments]
Categories: Incoming News

"Lower case" infix type operator variables

haskell-cafe - Wed, 09/03/2014 - 7:03am
I spent a few moments confused by the fact that TypeOperators was insufficient to allow the following type to be parsed

    constA :: Arrow (~>) => b -> (a ~> b)

My current intuition is that since I *can* write things like

    newtype (~>) a b = A (a -> b)

there is clashing in the type operator space for “upper case” and “lower case” identifiers. Is it possible or advisable to mitigate this clash and provide some syntax for “type operator variables”?

Joseph
Categories: Offsite Discussion

GHC not able to detect impossible GADT pattern

glasgow-user - Wed, 09/03/2014 - 6:59am
I’ve been trying to stretch GHC’s type system again and somehow managed to get myself into a position where GHC warns about a non-exhaustive pattern where adding the (according to GHC) missing pattern results in a type error (as intended by me). It seems that even with closed type families GHC can’t infer some patterns can never occur? Complete code to reproduce the issue is included below; the non-exhaustive pattern happens in the local definitions ‘f’ of push and pull. I’m open to suggestions for other approaches.

Kind regards,
Merijn

{-# LANGUAGE ConstraintKinds #-}
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE GADTs #-}
{-# LANGUAGE RankNTypes #-}
{-# LANGUAGE ScopedTypeVariables #-}
{-# LANGUAGE TypeFamilies #-}
{-# LANGUAGE TypeOperators #-}
{-# LANGUAGE UndecidableInstances #-}

module Test where

import Data.Proxy
import GHC.Exts

data Message

data SocketType = Dealer | Push | Pull

data SocketOperation = Read | Write

type family Restrict (a :: SocketOperation) (as :: [SocketOperation]) ::
Categories: Offsite Discussion

Hackage update, part 4

Haskell on Reddit - Wed, 09/03/2014 - 5:13am
Categories: Incoming News