News aggregator

LTS Haskell

Haskell on Reddit - Sun, 12/07/2014 - 3:07am
Categories: Incoming News

FP Complete: Backporting bug fixes: Towards LTS Haskell

Planet Haskell - Sun, 12/07/2014 - 12:00am

The concept I'll be describing here is strongly related to GPS Haskell, something Mark, Duncan, and I started working on at ICFP. I'll expand on the relation to that project in the questions section below.

There's a very simple, easily understood problem that I'm sure many of us writing software in Haskell have faced: we want to have a fixed API to write code against, but we also want to get bug fixes for our dependencies. (And dare I say it, possibly even some new features.) Currently, there's no easy way to do that. Curated systems like Stackage provide us with fixed APIs to code against, but to get the benefit of even one tiny new bug fix, we currently need to move over to a brand new snapshot, providing a brand new API, which, crucially, may include arbitrary breaking changes compared to the old API. (The same applies to Linux distributions like Debian and Nix, and to the Haskell Platform as well.)

This is a well understood problem in a different context: Linux distributions themselves. What we have today in Stackage is akin to Debian unstable: a rolling release system where you update your entire environment to a new state of being. Each state of being is internally consistent, but may not be compatible with what you've done locally. In the case of Debian, your config files might break, for instance. In the case of Haskell, your program may no longer compile.

By contrast, we have systems like Ubuntu Long Term Support (LTS). In an LTS release, bug fixes are backported to a stable version for an extended period of time. This allows users to have stability without stagnation. Over the next month, a few of us in the community will be working towards the goal of an experimental "LTS Haskell" kind of project, and hope to have it ready to start testing by January. This blog post is intended to describe how this will work, and encourage people to both provide feedback to improve the system, and to get involved in the project.

The process

On January 1, 2015, we're going to take the most recent Stackage unstable snapshot and promote it to be "LTS Haskell 1.0". It will have its own URL on stackage.org, and will be tracked in a Github repo. On a regular basis (tentatively: once a week), we'll take all of the packages in this snapshot and bump them to the newest version from Hackage with the same major version number (see "example bump" below). We'll then run the full Stackage build/test procedure on this new set of package versions. Once this passes, we'll release this as "LTS Haskell 1.1". That's the whole process.

The significance of this is that every release in the 1.X series will have a backwards-compatible API with previous releases, in the same sense that we're used to with minor version bumps. That means that, barring issues of name collisions, your code will continue to compile with new releases. However, you will also get new features rolled out in minor version bumps of your dependencies and, more importantly, bug fixes that have been released.

After a certain period of time (tentatively: three months, see questions below), we'll again take the newest unstable Stackage snapshot and call that LTS Haskell 2.0. There will be an overlap period where both LTS Haskell 1 and 2 are supported (tentatively: one month), and then LTS Haskell 1 will be retired in favor of LTS Haskell 2. This will give users a chance to upgrade to a new supported release. Note that even after being retired, the old snapshots will still be available for use, the only question is whether bugfixes will still be backported.

Example bump

To clarify the bump procedure, consider the following fictitious set of packages in LTS Haskell 1.0:

  • foo-2.4.1
  • bar-3.2.2
  • baz-5.1.9

After 1.0 is released, the following releases are made to Hackage:

  • foo-2.4.1.1 is released with a bug fix
  • bar-3.2.3 is released with a new feature, which doesn't break backwards compatibility
  • baz-5.2.0 is released, which is a breaking change

In our bumping procedure, we would replace foo-2.4.1 with foo-2.4.1.1, since it has the same major version number. Similarly, bar-3.2.2 would be bumped to 3.2.3. However, baz-5.1.9 would not be bumped to baz-5.2.0, since that introduces a breaking API change. (baz's author, however, would be able to make a baz-5.1.9.1 or baz-5.1.10 release, and those would be included in the next bump.)
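To make the rule concrete, here is a minimal sketch of the version check involved (my illustration, not part of the actual Stackage tooling): under the PVP, the major version is the first two components, so a candidate release is an acceptable bump only if those components are unchanged.

import Data.Version (Version(..))

-- A sketch of the "same major version" rule: under the PVP the major
-- version is the first two components (A.B).
sameMajor :: Version -> Version -> Bool
sameMajor old new = take 2 (versionBranch old) == take 2 (versionBranch new)

-- sameMajor (Version [2,4,1] []) (Version [2,4,1,1] [])  ==  True
-- sameMajor (Version [5,1,9] []) (Version [5,2,0]   [])  ==  False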

Design goals

There are two primary design goals underlying this simple process.

  1. We want the smallest change possible for users, and the smallest amount of work to be created for library authors. To use LTS Haskell, you would just modify your remote-repo, like you do today to use Stackage. (And hopefully in the future, even that will be simplified, once changes are adopted by the Haskell Platform and Hackage.) Library authors already release their code to Hackage with bugfixes. Instead of making them go through a process to get their changes adopted, we will automatically include them.

  2. We want to make the process as automatic as possible. The process listed above allows a new LTS Haskell candidate to be produced with zero human intervention (though some massaging may be necessary for funny situations on Hackage, see questions section below). Making the process automatic makes it that much easier to provide regular releases.

Note that these design goals are built around what's made Stackage such a successful project: minimal author dependencies, simple user story, and automation. I believe we can recreate that success, with even greater benefit now.

A request to library authors

There is one change that library authors can make that would improve the experience: support the current LTS Haskell major version of your packages, and provide bug fixes for them. That means that, if you're the maintainer of foo, LTS Haskell has foo-1.2.1, you've released foo-1.3.0, and a bug is discovered in both the 1.2 and 1.3 versions, please fix the bug with both a foo-1.2.1.1 and a foo-1.3.0.1 release. This not only helps LTS Haskell users, but also library users in general who are looking to avoid unnecessary API changes.

Questions

This sounds a lot like GPS Haskell. What's the difference? It should sound very similar; the goal of this project is to be a testing ground for what GPS Haskell will become. GPS Haskell involves multiple moving parts: Stackage, the Haskell Platform, and Hackage. It's difficult to coordinate work among all those parts and keep stability. Stackage is well set up to support experiments like this by having the multiple snapshot concept built in. The goal is to try out this idea, shake out the flaws, and hopefully when we've stabilized it, the Haskell Platform and Hackage will be ready to adopt it.

Ubuntu LTS doesn't allow new features to be added. Why are you allowing new features in addition to bugfixes? I'll admit that I originally argued against adding new features, while Mark was in favor of it. Ultimately, I changed my mind for two reasons: I saw people asking for this exact thing to be present in Stackage, and allowing backporting of new features eases the maintenance burden of library authors considerably, which is an incredible selling point. If there's demand in the future for a bugfix-only version of this, I'd be very much open to exploring the idea. But I think it's pragmatic to start initial investigation with this.

Is LTS Haskell a separate project from Stackage? I'd describe it more as an extension of Stackage, with the goal to ultimately expand to multiple projects: Stackage, the Haskell Platform, and Hackage. Said another way: on a code level, this is clearly an extension of the Stackage tooling. But ideologically, it's trying to adopt the best features of all three of those projects in a way that all of them will be able to take advantage of.

Why such short support windows? The strawmen of three months between releases and a one-month grace period are ridiculously short support windows. The reason I propose them is that, as I mentioned in the design goals, we want the smallest delta from how people work today. Right now, there is no concept of stable versions, and we're trying to introduce it. Starting off with a more standard support window of something like two years would be a radical shift in how library users and authors operate today. Three months is very short, but it's long enough for us to test the process. As time goes on, we should have serious community discussions on how long a support window we should have. (I, for one, am fully in favor of extending it well beyond three months.)

What kind of funny Hackage situations do you mean above? I mentioned above that manual intervention may sometimes be necessary. Consider the following situation: foo-1.1 depends on bar-1.1, and both are included in LTS Haskell 1.0. bar-1.2 is then released, which, by the rules stated above, will not be included in LTS Haskell 1.1. foo-1.1.1 is also released, which should be included. However, suppose that foo-1.1.1 has a lower bound of bar >= 1.2. Even though foo itself isn't changing its API, it's demanding an API change for another package. In this case, we'd have to disallow foo-1.1.1 from being included in LTS Haskell 1.1. I'm not sure if we'll be able to automate this kind of detection.
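As a rough illustration of the extra check (purely a sketch of the idea, not how any actual tooling works), a candidate bump is admissible only if its lower bounds are already satisfied by the versions pinned in the snapshot:

import Data.Version (Version)
import qualified Data.Map as Map

-- Hypothetical helper: every lower bound declared by the candidate must be
-- satisfied by the version already pinned in the snapshot, i.e. the bump
-- must not demand an API change of another package.
admissible :: Map.Map String Version  -- versions pinned in the snapshot
           -> Map.Map String Version  -- the candidate's lower bounds
           -> Bool
admissible pinned bounds =
    and (Map.elems (Map.intersectionWith (>=) pinned bounds))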

As a side note, I've long considered this a shortcoming of the Package Versioning Policy's stance on when version bumps are required, and have debated proposing a change. I'm still debating that proposal, but wouldn't object if someone else wants to make that proposal instead.

How does this affect Linux distributions? It doesn't necessarily affect them at all. However, LTS Haskell could be a very interesting set of packages for Linux distros to track, for all the same reasons given above regarding backported bugfixes. In this sense, you can think of LTS Haskell as having multiple delivery mechanisms. We're experimenting with one delivery mechanism via stackage.org now; we can have future delivery mechanisms via Debian, Fedora, Nix, and even with direct support in Hackage/cabal-install.

What can I do to help? The areas that jump to mind are:

  • Discuss on the Stackage mailing list, in Reddit discussions, etc., to flesh out ideas and shake out flaws early
  • Test the snapshots, especially on Mac and Windows
  • This is a project for the community. Once the initial code gets written, improving the code base is as easy as submitting a pull request!
  • Get more packages into Stackage over the next month, so that LTS Haskell 1.0 is as complete as possible.

Do you have any more details? I originally wrote a two-part blog post with much more detail, but got feedback that the content was a bit too dense, so I rewrote the content in the format here. There's still lots of information present in those blog posts that may be of interest to some, so I've posted them as a Github Gist in case they're useful to anyone. (Note: I'm not aware of contradictions between that Gist and this post. If there are contradictions, this post takes precedence.)

What does this blog post have to do with the recent Stackage survey? Nothing directly, yet. The survey is intended to help gather more information about how people are using Stackage, and to help us make more informed decisions with future Stackage and LTS Haskell work. This blog post was written before I posted the survey, and has not incorporated the survey results in any way. Stay tuned for more information about those results in a separate blog post.

Categories: Offsite Blogs

Gabriel Gonzalez: A very general API for relational joins

Planet Haskell - Sat, 12/06/2014 - 10:49pm

Maps and tuples are useful data types for modeling relational operations. For example, suppose we have the following table, indexed by the Id column:

| Id | First Name | Last Name |
|----|------------|-----------|
| 0  | Gabriel    | Gonzalez  |
| 1  | Oscar      | Boykin    |
| 2  | Edgar      | Codd      |

We can model that as a Map where the key is the Id column and the value is a tuple of FirstName and LastName:

import Data.Map (Map) -- from the `containers` library
import qualified Data.Map as Map

type Id = Int
type FirstName = String
type LastName = String

names :: Map Id (FirstName, LastName)
names = Map.fromList
    [ (0, ("Gabriel", "Gonzalez"))
    , (1, ("Oscar"  , "Boykin"  ))
    , (2, ("Edgar"  , "Codd"    ))
    ]

Now suppose we have another table containing Twitter handles, also indexed by Id:

| Id | Twitter Handle |
|----|----------------|
| 0  | GabrielG439    |
| 1  | posco          |
| 3  | avibryant      |

We can also encode that as a Map:

type TwitterHandle = String

handles :: Map Id TwitterHandle
handles = Map.fromList
    [ (0, "GabrielG439")
    , (1, "posco"      )
    , (3, "avibryant"  )
    ]

One of the nice properties of Maps is that you can join them together:

import Control.Applicative

-- I'm using `join2` to avoid confusion with `Control.Monad.join`
join2 :: Map k v1 -> Map k v2 -> Map k (v1, v2)
join2 = ... -- Implementation elided for now
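(For reference, one way the elided Map version could be written, as a sketch, is with intersectionWith from containers; note that it needs an Ord k constraint:)

join2 :: Ord k => Map k v1 -> Map k v2 -> Map k (v1, v2)
join2 = Map.intersectionWith (,)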

What if we could generalize join2 to work on types other than Map? Perhaps we could use the Applicative interface to implement join2:

join2 :: Applicative f => f a -> f b -> f (a, b)
join2 = liftA2 (,)

However, the Map type cannot implement Applicative in its current form. The reason is that there is no sensible definition for pure:

pure :: v -> Map k v
pure = ???

This would require a Map that was defined for every key, which we cannot encode. Or can we?

Tables

Well, who says we need to use the Map type from containers? What if I were to encode my Map this way:

import Prelude hiding (lookup)

-- | A map encoded as a lookup function
newtype Table k v = Table { lookup :: k -> Maybe v }

-- | Encode a traditional map as a lookup function
from :: Ord k => Map k v -> Table k v
from m = Table (\k -> Map.lookup k m)

This new type of Map only permits a single operation: lookup, but because we constrain our API to this single operation we can now implement the full Applicative interface:

instance Functor (Table k) where
    fmap f (Table g) = Table (\k -> fmap f (g k))
    -- Same as: Table (fmap (fmap f) g)

instance Applicative (Table k) where
    pure v = Table (\k -> Just v)
    -- Same as: Table (pure (pure v))

    Table f <*> Table x = Table (\k -> f k <*> x k)
    -- Same as: Table (liftA2 (<*>) f x)

We can promote conventional Maps to this new Table type using the above from function:

names' :: Table Id (FirstName, LastName)
names' = from names

handles' :: Table Id TwitterHandle
handles' = from handles

... and now the more general Applicative join2 will work on these two tables:

>>> let table = join2 names' handles'
>>> :type table
table :: Table Id ((FirstName, LastName), TwitterHandle)
>>> lookup table 0
Just (("Gabriel","Gonzalez"),"GabrielG439")
>>> lookup table 1
Just (("Oscar","Boykin"),"posco")
>>> lookup table 2
Nothing

However, in its present form we can't dump the table's contents because we don't know which keys are present in the table. Let's fix that by adding an additional field to the Table type listing the keys. We will treat functions as being defined for all keys:

import Data.Set (Set)
import qualified Data.Set as Set

data Keys k = All | Some (Set k)

instance Ord k => Num (Keys k) where
    fromInteger 0         = Some Set.empty
    fromInteger n | n > 0 = All

    All     + _       = All
    _       + All     = All
    Some s1 + Some s2 = Some (Set.union s1 s2)

    All     * ks      = ks
    ks      * All     = ks
    Some s1 * Some s2 = Some (Set.intersection s1 s2)

-- | A map encoded as a lookup function
data Table k v = Table
    { keys   :: Keys k
    , lookup :: k -> Maybe v
    }

-- | Encode a traditional map as a lookup function
from :: Ord k => Map k v -> Table k v
from m = Table
    { keys   = Some (Set.fromList (Map.keys m))
    , lookup = \k -> Map.lookup k m
    }

Even after extending Table with keys we can still implement the Applicative interface:

instance Functor (Table k) where
    fmap f (Table ks g) = Table ks (fmap (fmap f) g)

instance Ord k => Applicative (Table k) where
    pure v =
        Table 1 (pure (pure v))

    Table s1 f <*> Table s2 x =
        Table (s1 * s2) (liftA2 (<*>) f x)

... and now we can add a Show instance, too!

instance (Show k, Show v) => Show (Table k v) where
    show (Table ks f) = case ks of
        All    -> "<function>"
        Some s -> unlines (do
            k <- Set.toList s
            let Just v = f k
            return (show (k, v)) )

Let's give it a test drive:

>>> names'
(0,("Gabriel","Gonzalez"))
(1,("Oscar","Boykin"))
(2,("Edgar","Codd"))

>>> handles'
(0,"GabrielG439")
(1,"posco")
(3,"avibryant")

>>> join2 names' handles'
(0,(("Gabriel","Gonzalez"),"GabrielG439"))
(1,(("Oscar","Boykin"),"posco"))

So far, so good!

Alternative

However, we need to support more than just inner joins. We'd like to support left, right, and outer joins, too.

Conceptually, a left join is one in which values from the right table may be optionally present. One way we could implement this would be to define a function that converts a finite map to a function defined on all keys. This function will return Nothing for keys not present in the original finite map and Just for keys that were present:

optional :: Table k v -> Table k (Maybe v)
optional (Table ks f) =
    Table All (\k -> fmap Just (f k) <|> pure Nothing)

Then we could define leftJoin in terms of join2 and optional:

leftJoin :: Table k a -> Table k b -> Table k (a, Maybe b)
leftJoin t1 t2 = join2 t1 (optional t2)

However, if we try to compile the above code, the compiler will give us a really interesting error message:

Ambiguous occurrence ‘optional’
It could refer to either ‘Main.optional’,
or ‘Control.Applicative.optional’

Apparently, Control.Applicative has an optional function, too. Let's pause to check out the type of this mysterious function:

optional :: Alternative f => f v -> f (Maybe v)

Wow! That type signature is surprisingly similar to the one we wrote. In fact, if Table k implemented the Alternative interface, the types would match.

Alternative is a type class (also provided by Control.Applicative) that greatly resembles the Monoid type class:

class Applicative f => Alternative f where
    empty :: f a

    (<|>) :: f a -> f a -> f a

... and the core Alternative laws are identical to the Monoid laws:

x <|> empty = x

empty <|> x = x

(x <|> y) <|> z = x <|> (y <|> z)

Let's dig even deeper and see how optional uses this Alternative type class:

optional v = fmap Just v <|> pure Nothing

Even the implementation is eerily similar! This strongly suggests that we should find a way to make our Table type implement Alternative. Perhaps something like this would work:

instance Ord k => Alternative (Table k) where
    empty =
        Table 0 (pure empty)

    Table ks1 f1 <|> Table ks2 f2 =
        Table (ks1 + ks2) (liftA2 (<|>) f1 f2)

Compare this to our Applicative instance (duplicated here):

instance Ord k => Applicative (Table k) where
    pure v =
        Table 1 (pure (pure v))

    Table s1 f <*> Table s2 x =
        Table (s1 * s2) (liftA2 (<*>) f x)

The Alternative instance mirrors our Applicative instance except that we:

  • replace pure v with empty
  • replace (<*>) with (<|>)
  • replace 1 with 0
  • replace (*) with (+)

... and what's really neat is that now the optional function from Control.Applicative behaves just like the original optional function we wrote. (Exercise: Use equational reasoning to verify this)
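Here is a sketch of that equational reasoning, using the instances above (informal, not mechanically checked):

optional (Table ks f)
    = fmap Just (Table ks f) <|> pure Nothing
    = Table ks (fmap (fmap Just) f) <|> Table 1 (pure (pure Nothing))
    = Table (ks + 1) (liftA2 (<|>) (fmap (fmap Just) f) (pure (pure Nothing)))
    = Table All (\k -> fmap Just (f k) <|> pure Nothing)

-- fromInteger 1 = All, so ks + 1 = All, and the resulting lookup function is
-- exactly the one in the hand-written optional.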

Derived joins

Armed with this Alternative instance, we can now implement leftJoin in terms of the optional provided by Control.Applicative:

leftJoin t1 t2 = join2 t1 (optional t2)

Sure enough, it works:

>>> leftJoin names' handles'
(0,(("Gabriel","Gonzalez"),Just "GabrielG439"))
(1,(("Oscar","Boykin"),Just "posco"))
(2,(("Edgar","Codd"),Nothing)

Let's check out the type that the compiler infers for leftJoin:

>>> :type leftJoin
leftJoin :: Alternative f => f a -> f b -> f (a, Maybe b)

Notice how there's no longer anything Table-specific about leftJoin. It works for anything that implements the Alternative interface. I could leftJoin two Maybes if I really wanted to:

>>> leftJoin (Just 1) (Just 2)
Just (1,Just 2)
>>> leftJoin (Just 1) Nothing
Just (1,Nothing)
>>> leftJoin Nothing (Just 1)
Nothing

... or two lists:

>>> leftJoin [1, 2] [3, 4]
[(1,Just 3),(1,Just 4),(1,Nothing),(2,Just 3),(2,Just 4),(2,Nothing)]

In fact, I don't even really need specialized leftJoin or rightJoin functions. optional is sufficiently light-weight that I could inline a right join on the fly:

>>> join2 (optional names') handles'
(0,(Just ("Gabriel","Gonzalez"),"GabrielG439"))
(1,(Just ("Oscar","Boykin"),"posco"))
(3,(Nothing,"avibryant"))

What happens if I try to do an "outer join"?

>>> -- DISCLAIMER: Technically not the same as an SQL outer join
>>> let o = join2 (optional names') (optional handles')
>>> o
<function>

The above "outer join" is defined for all keys (because both sides are optional), so we get back a function! While we can't list the Table (because it's conceptually infinite), we can still perform lookups on it:

>>> lookup o 0
Just (Just ("Gabriel","Gonzalez"),Just "GabrielG439")
>>> lookup o 2
Just (Just ("Edgar","Codd"),Nothing)
>>> lookup o 3
Just (Nothing,Just "avibryant")
>>> lookup o 4
Just (Nothing, Nothing)

... and if we were to join our "infinite" table against a finite table, we get back a finite table (Exercise: Try it! Define a new finite table to join against o and see what happens)
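As a quick illustration of that exercise (using a made-up extra table), the intersection of Some keys with All keys is Some, so the result is finite again:

ages :: Table Id Int
ages = from (Map.fromList [(0, 30), (3, 25)])

-- >>> join2 o ages
-- (0,((Just ("Gabriel","Gonzalez"),Just "GabrielG439"),30))
-- (3,((Nothing,Just "avibryant"),25))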

What's nice about optional is that we can easily left-join or right-join in multiple tables at a time. If I had four tables of types:

t1 :: Table k a
t2 :: Table k b
t3 :: Table k c
t4 :: Table k d

... I could left join t2, t3, and t4 into t1 by just writing:

liftA4 (,,,) t1 (optional t2) (optional t3) (optional t4)
    :: Table k (a, Maybe b, Maybe c, Maybe d)

Now that I think about it, I don't even really need to provide join2/join3/join4/join5 since they are not much shorter than using the liftA family of functions in Control.Applicative:

-- Exercise: What would `join1` be?
join2 = liftA2 (,)
join3 = liftA3 (,,)
join4 = liftA4 (,,,)
join5 = liftA5 (,,,,)
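(Note: base's Control.Applicative only exports liftA, liftA2 and liftA3; the larger variants used above would need to be defined by hand, for example along these lines:)

liftA4 :: Applicative f => (a -> b -> c -> d -> e)
       -> f a -> f b -> f c -> f d -> f e
liftA4 f a b c d = f <$> a <*> b <*> c <*> d

liftA5 :: Applicative f => (a -> b -> c -> d -> e -> r)
       -> f a -> f b -> f c -> f d -> f e -> f r
liftA5 f a b c d e = f <$> a <*> b <*> c <*> d <*> e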

In other words, I can implement almost any imaginable join just by using liftA{n} and some permutation of optionals. I don't even know what I'd call this join:

liftA5 (,,,,) t1 (optional t2) t3 (optional t4) t5

... but the beauty is that I don't have to give a name for it. I can easily write anonymous joins on the fly using the Control.Applicative module. Moreover, the above code will work for anything that implements the Alternative interface.

Conclusion

Control.Applicative provides a very general API for relational joins: the Alternative type class (which includes Applicative, since Applicative is a super-class of Alternative). Perhaps Control.Applicative could be improved slightly by providing the join{n} family of functions listed above, but it's still highly usable in its present state.

Note that this trick only works for relational abstractions embedded within Haskell. This API can be generalized for external relational data stores (e.g. Postgres), which I will cover in a subsequent post.

Categories: Offsite Blogs

Magnus Therning: FP101x completed

Planet Haskell - Sat, 12/06/2014 - 6:00pm
TL;DR

As of last evening I finished the last exercise/homework of the course FP101x Introduction to Functional Programming. I mostly signed up for fun, but did intend all along to go through with it, watch the lectures, do the homework… in short, take the course ;)

Now that it’s done I’m happy I did. It gave about as much as I expected on the programming side, and it gave me some experience with using the edX MOOC. I will suggest the course to colleagues and I’m already looking around for the next course to take myself.

Thoughts about the course

The course was, on the whole, very good. It was well planned; I felt it started from a level suitable for people unfamiliar with FP and then progressed at a reasonable pace. It should be noted though that the lectures probably aren't enough for someone who's new to FP; reading the book or researching the subjects on one's own is required to follow along.

There were a few things in the course that I don't really see the point of, though. The thing that stands out in particular is the 12th lecture, which covers proofs by induction of functions that work on lists. I think it would have been worthwhile spending some time on making more "realistic programs." Furthermore, picking the "Countdown Problem" as the topic of an entire lecture feels very "academic". Personally I would have liked an example closer to the kinds of problems that occur in a professional developer's work, to better reflect that FP can be used to solve them.

Thoughts about the edX MOOC

The edX site is fairly nice to work with. There are several things that I liked from the beginning:

  1. Several methods of logging in are offered; I opted to use my Google+ account, since I like to avoid creating yet another account if I can.
  2. Interacting with the site is natural and finding my way around was easy from the start.
  3. The page I land on after logging in shows my current courses, and it's easy to find my way from there.
  4. There’s a page showing my progress in the course; that turned out to be the page I go to all the time since I can find the next lecture from there.

There are also a few things I find really irritating:

  1. In many of the exercises the code isn’t plain text but rather images; this makes it impossible to copy the code and paste it into a REPL.
  2. In some cases the code in the exercises, especially in the answer options, was very badly laid out. For instance, in FP101x there are several questions where the answer options are code and rather wordy; due to the layout, the code becomes unnecessarily difficult to read. (And thanks to the previous issue with the site, it was not an option to copy the code and reformat it to make it easier to read.)
Categories: Offsite Blogs

Q: Re-wrapping nested "envelope" types

Haskell on Reddit - Sat, 12/06/2014 - 5:56pm

I have a type which is effectively a fixed-size array (elements of the same type):

data T3 a = T3 a a a deriving Show

and another type with a fixed number of elements of differing types, let's say a two-tuple. I want to do the following conversion:

rewrap32 :: T3 (a, b) -> (T3 a, T3 b)
rewrap32 (T3 (x1, x2) (y1, y2) (z1, z2)) = (T3 x1 y1 z1, T3 x2 y2 z2)

rewrap32 $ T3 (1, 'a') (2, 'b') (3, 'c')
-- (T3 1 2 3,T3 'a' 'b' 'c')

I wonder if there is a more generic way to perform such a conversion. I can make T3 Foldable, and then I would be able to iterate over the elements, but in order to construct the two new T3's I'd have to make a Monoid around a to be able to add one element at a time. If that is possible at all, I cannot imagine how to do it.

And if it is worth the effort.

And what shall I do if I need to re-wrap it in the opposite direction?

rewrap23 :: (T3 a, T3 b) -> T3 (a, b)
rewrap23 (T3 x1 y1 z1, T3 x2 y2 z2) = T3 (x1, x2) (y1, y2) (z1, z2)

Am I trying to solve an unsolvable non-problem? Or is there some common approach to similar tasks?
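For reference, a minimal sketch of one common approach (my own illustration, not from the thread): give T3 Functor and Applicative instances, after which the two directions are just an "unzip" and a "zip":

import Control.Applicative (liftA2)

data T3 a = T3 a a a deriving Show

instance Functor T3 where
    fmap f (T3 x y z) = T3 (f x) (f y) (f z)

instance Applicative T3 where
    pure x = T3 x x x
    T3 f g h <*> T3 x y z = T3 (f x) (g y) (h z)

-- "Unzip": project each component with fmap.
rewrap32 :: T3 (a, b) -> (T3 a, T3 b)
rewrap32 t = (fmap fst t, fmap snd t)

-- "Zip": combine the two structures pointwise.
rewrap23 :: (T3 a, T3 b) -> T3 (a, b)
rewrap23 (ta, tb) = liftA2 (,) ta tb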

edit: Thanks for the advice, particularly /u/tomejaguar. I am dumb...

submitted by lukewarm
[link] [5 comments]
Categories: Incoming News

Bug in GHC parser?

Haskell on Reddit - Sat, 12/06/2014 - 5:59am

In ghci:

Prelude> :t Just . read
Just . read :: Read b => String -> Maybe b
Prelude> :t Just.read

<interactive>:1:1: Not in scope: ‘Just.read’

(It doesn't work in non-interactive mode either.)

Aren't Just.read and Just . read supposed to be the same? Is it a bug in GHC?

submitted by maxigit
[link] [8 comments]
Categories: Incoming News

Tom Schrijvers: PPDP 2015: Call for Papers

Planet Haskell - Sat, 12/06/2014 - 2:47am
==========================================================================

  Call for papers
  17th International Symposium on
 Principles and Practice of Declarative Programming
     PPDP 2015

         Special Issue of Science of Computer Programming (SCP)

   Siena, Italy, July 14-16, 2015
   (co-located with LOPSTR 2015)

 http://costa.ls.fi.upm.es/ppdp15

==========================================================================

 SUBMISSION DEADLINE: 20 MARCH, 2015

PPDP  2015  is a  forum  that  brings  together researchers  from  the
declarative  programming communities, including  those working  in the
logic,  constraint  and  functional  programming paradigms,  but  also
embracing languages, database  languages, and knowledge representation
languages. The  goal is  to stimulate research  in the use  of logical
formalisms  and  methods  for  specifying, performing,  and  analyzing
computations,   including   mechanisms   for   mobility,   modularity,
concurrency,  object-orientation,  security,  verification and  static
analysis. Papers related to the use of declarative paradigms and tools
in industry and education are especially solicited. Topics of interest
include, but are not limited to

* Functional programming
* Logic programming
* Answer-set programming
* Functional-logic programming
* Declarative visual languages
* Constraint Handling Rules
* Parallel implementation and concurrency
* Monads, type classes and dependent type systems
* Declarative domain-specific languages
* Termination, resource analysis and the verification of declarative programs
* Transformation and partial evaluation of declarative languages
* Language extensions for security and tabulation
* Probabilistic modeling in a declarative language and modeling reactivity
* Memory management and the implementation of declarative systems
* Practical experiences and industrial application

This   year  the  conference   will  be   co-located  with   the  25th
International   Symposium  on   Logic-Based   Program  Synthesis   and
Transformation (LOPSTR 2015).

The conference will  be held in Siena, Italy.   Previous symposia were
held  at Canterbury  (UK),  Madrid (Spain),  Leuven (Belgium),  Odense
(Denmark), Hagenberg (Austria),  Coimbra (Portugal), Valencia (Spain),
Wroclaw (Poland),  Venice (Italy), Lisboa  (Portugal), Verona (Italy),
Uppsala  (Sweden),   Pittsburgh  (USA),  Florence   (Italy),  Montreal
(Canada), and Paris (France). You might have a look at the contents of
past PPDP symposia.

Papers  must  describe original  work,  be  written  and presented  in
English, and must not substantially overlap with papers that have been
published  or   that  are  simultaneously  submitted   to  a  journal,
conference, or  workshop with refereed proceedings.  Work that already
appeared in  unpublished or informally  published workshop proceedings
may be submitted (please contact the PC chair in case of questions).

After the symposium, a selection of the best papers will be invited to
extend their submissions in the light of the feedback solicited at the
symposium.   The papers  are expected  to include  at least  30% extra
material over and above the PPDP version. Then, after another round of
reviewing, these revised  papers will be published in  a special issue
of SCP with a target publication date by Elsevier of 2016.

Important Dates

   Abstract Submission:      14 March, 2015
   Paper submission:      20 March, 2015
   Notification:      14 May,   2015
   Camera-ready:      To be announced

   Symposium:      14-16 July, 2015

Authors  should  submit  an  electronic  copy of  the  full  paper  in
PDF. Papers  should be  submitted to the  submission website  for PPDP
2015. Each submission must include  on its first page the paper title;
authors  and   their  affiliations;   abstract;  and  three   to  four
keywords. The keywords will be used to assist the program committee in
selecting appropriate  reviewers for the paper.  Papers should consist
of   the   equivalent  of   12   pages   under   the  ACM   formatting
guidelines.  These   guidelines  are  available   online,  along  with
formatting templates  or style files. Submitted papers  will be judged
on the basis of significance, relevance, correctness, originality, and
clarity. They should  include a clear identification of  what has been
accomplished and  why it is  significant. Authors who wish  to provide
additional material to  the reviewers beyond the 12-page  limit can do
so in  clearly marked appendices:  reviewers are not required  to read
such appendices.

Program Committee

    Michael Adams, University of Utah, USA
    Puri Arenas, Complutense University of Madrid, Spain
    Amir Ben-Amram, Tel-Aviv Academic College, Israel
    Ines Castro, Universidade do Porto, Portugal
    Patrick Cousot, New York University, USA
    Gregory Duck, National University of Singapore, Singapore
    Fabio Fioravanti, University of Chieti-Pescara, Italy
    Thom Frühwirth, University of Ulm, Germany
    Roberto Giacobazzi, University of Verona, Italy
    Michael Hanus, CAU Kiel, Germany
    Andy King, University of Kent, UK
    F. López-Fraguas, Complutense University of Madrid, Spain
    Ian Mackie, University of Sussex, UK
    Dale Miller, INRIA and LIX/Ecole Polytechnique, France
    Torsten Schaub, University of Potsdam, Germany
    Tom Schrijvers KU Leuven, Belgium
    Frank D. Valencia, CNRS and LIX, Ecole Polytechnique, France
    German Vidal, Universitat Politecnica de Valencia, Spain
    Marina Vos, University of Bath, UK
    Nobuko Yoshida, Imperial College London, UK

Program Chair

    Elvira Albert
    Complutense University of Madrid
    C/ Profesor Garcia Santesmases
    E-28040 Madrid, Spain
    Email: elvira@sip.ucm.es

Symposium Chair

    Moreno Falaschi
    Department of information engineering and mathematics
    University of Siena, Italy
    Email: moreno.falaschi@unisi.it
Categories: Offsite Blogs

Ask For Review: My First Haskell Project (Machine Learning and SGD Algorithm)

Haskell on Reddit - Sat, 12/06/2014 - 2:08am

I recently met Haskell and have since been fascinated by it. I can't explain the reason. Maybe simply because Haskell is so different from what I have been using: Java for infrastructure, Python for data crunching and machine learning, R for data analysis and plotting. I wanted to learn Haskell, so I implemented one of my favorite machine learning algorithms: stochastic gradient descent (SGD). Here is the link to it. The core of the algorithm is implemented in Train.hs; the basic idea is to iterate over the input data points and update a weight vector accordingly.

I used the most obvious language syntax and features. There are inevitably many parts that are not Haskell-ish (I didn't even tune foldr vs foldl), so any suggestion is welcome. The aspects that I am aware of and would most like suggestions on are:

  • Heap consumption and lazy evaluation. One of the advantages of SGD is its low computational and space complexity. The data points (e.g. 10 million) do not need to fit in memory because they are processed one by one, so the memory consumption is supposed to be constant. However, the heap consumption of my Haskell implementation seems to grow worse than linearly (e.g. for 100 MB of data, it consumes over 5 GB of memory until I manually kill it). I appreciate the elegance of lazily loading the data, but I guess the extra heap consumption is due to other lazy evaluation? It seems that I can force strict evaluation in certain places. Given the nature of the algorithm, which parts need to be forced strict? Is there a risk that strictness annotations spread all over the code? How do you solve such problems in general? (A minimal sketch of one common technique appears after this list.)

  • Language features for iterative algorithms. My algorithm iterates over the data set multiple times, and a weight vector is updated each time a data point is seen. I basically implemented it with two levels of foldr (which would be two levels of for-loop in Java). This certainly involves encoding all the to-be-updated variables and passing functions (or partially applied functions) around. Is there any existing FP abstraction for iterative algorithms (or, say, many levels of foldr, not necessarily recursion)?

  • What else can be improved or replaced?
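As a rough sketch of the usual answer to the first point (the types and update rule below are hypothetical stand-ins, not the code from the actual project): a strict left fold forces the accumulated weights at every step instead of building up a chain of thunks.

import Data.List (foldl')

-- Hypothetical stand-ins for the project's real types.
type Weights = [Double]
type Point   = ([Double], Double)   -- (features, label)

-- One SGD update for a linear model with squared error (illustrative only).
step :: Double -> Weights -> Point -> Weights
step rate w (xs, y) =
    let err = sum (zipWith (*) w xs) - y
    in  zipWith (\wi xi -> wi - rate * err * xi) w xs

-- One pass over the data.  foldl' forces the accumulator at each step rather
-- than building one giant thunk; note it only evaluates to weak head normal
-- form, so with a plain list of Doubles the elements can still pile up as
-- thunks.  In practice you would use strict fields, an unboxed vector, or
-- deepseq on the weights as well.
onePass :: Double -> Weights -> [Point] -> Weights
onePass rate = foldl' (step rate)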

update 1 (2014-12-07)

I included some information on how one would run the code and where to find sample data.

I also did some basic memory tuning following tom-md and bgamari, i.e. removing the call to length x and similar ones. I have posted the information on the same GitHub page. Unfortunately, I do not see any improvement, and the memory consumption is linear rather than the expected constant. Any further advice would be much appreciated.

submitted by causality0
[link] [3 comments]
Categories: Incoming News

New gtk2hs 0.12.4 release

gtk2hs - Wed, 11/21/2012 - 12:56pm

Thanks to John Lato and Duncan Coutts for the latest bugfix release! The latest packages should be buildable on GHC 7.6, and the cairo package should behave a bit nicer in ghci on Windows. Thanks to all!

~d

Categories: Incoming News