News aggregator

New Functional Programming Job Opportunities

haskell-cafe - Mon, 01/19/2015 - 7:00pm
Here are some functional programming job opportunities that were posted recently:

Functional Software Developer at Moixa Technology
http://functionaljobs.com/jobs/8778-functional-software-developer-at-moixa-technology

Cheers, Sean Murphy
FunctionalJobs.com
Categories: Offsite Discussion

Noam Lewis: Introducing SJS, a type inferer and checker for JavaScript (written in Haskell)

Planet Haskell - Mon, 01/19/2015 - 6:12pm

TL;DR: SJS is a type inferer and checker for JavaScript, in early development. The core inference engine is working, but various features and support for the full browser JS environment and libraries are in the works.

SJS (Haskell source on github) is an ongoing effort to produce a practical tool for statically verifying JavaScript code. The type system is designed to support a safe subset of JS, not a superset of JS. That is, sometimes, otherwise valid JS code will not pass type checking with SJS. The reason for not allowing the full dynamic behavior of JS is to guarantee more safety and, as a bonus, to allow fully unambiguous type inference.

The project is still in early development, but the core inference engine is more or less feature complete; the main thing that's missing is support for all of JS's built-in functions and methods, and for those present in a browser environment.

Compare to:

  • Google Closure Compiler, whose primary goal is “making JavaScript download and run faster”, but which also has a fairly complex, type-annotation-centric type-checking feature. The type system is rather Java-like, with “shallow” or local type inference. Generics are supported at a very basic level. I should write a blog post about the features and limitations of Closure. It’s a very stable project, in production use at Google for several years now. I’ve used it myself on a few production projects. Written in Java. They seem to be working on a new type inference engine, but I don’t know what features it will have.
  • Facebook Flow, which was announced a few weeks ago (just as I was putting the finishing touches on my core type checker code!), has a much more advanced type checker (compared to Closure), and seems to be based on data flow analysis. I haven’t gotten around to exploring exactly what Flow does, but it seems to be much closer in design to SJS, and as a project obviously has many more resources. There are certain differences in the way Flow infers types; I’ll explore those in the near future.
  • TypeScript: a superset of JS that translates into plain JS. Being a superset of JS means that it includes all of the awful parts of JS! I asked about disabling those bad features a while back (around version 0.9); from what I’ve checked, version 1.4 still seems to include them.
  • Other something-to-JS languages, such as PureScript, Roy, Haste, and GHCJS (a full Haskell to JS compiler). These all have various advantages. SJS is aimed at being able to run the code you wrote in plain JS. There are many cases where this is either desired or required.

Of all existing tools, Flow seems to be the closest to what I aim to achieve with SJS. However, SJS supports type system features such as polymorphism which are not currently supported by Flow. On the other hand, Flow has Facebook behind it, and will surely evolve in the near future.

Closure seems to be designed for adapting an existing JS code base. It includes features such as implicit union types and/or a dynamic “any” type, and as far as I know it doesn’t infer polymorphic types. The fundamental difference between SJS and some of the alternatives is that I’ve designed SJS for more safety, by supporting a (wide) subset of JS and disallowing certain dynamic typing idioms, such as assigning values of different types to the same variable (in the future this may be relaxed a bit when the types are used in different scopes, but that’s a whole other story).

Ok, let’s get down to the good stuff:

Features:

  • Full type inference: no type annotations necessary.
  • Parametric polymorphism (aka “generics”), based on Hindley-Milner type inference.
  • Row-type polymorphism, otherwise known as “static duck typing”.
  • Recursive types for true representation of object-oriented methods.
  • Correct handling of JS’s this dynamic scoping rules.

Support for type annotations, either for adding specific constraints or for documentation, is planned.

Polymorphism is value restricted, ML-style.

Equi-recursive types are constrained to at least include a row type in the recursion to prevent inference of evil recursive types.

Examples

Note: An ongoing goal is to improve readability of type signatures and error messages.

Basic

JavaScript:

var num = 2;
var arrNums = [num, num];

SJS infers (for arrNums):

[TNumber]

That is, an array of numbers.

Objects:

var obj = { something: 'hi', value: num };

Inferred type:

{something: TString, value: TNumber}

That is, an object with two properties: ‘something’, of type string, and ‘value’ of type number.

Functions and this

In JS, this is one of the truly awful parts. this is a dynamically scoped variable that takes on values depending on how the current function was invoked. SJS knows about this (pun intended) and infers types for functions indicating what this must be.

For example:

function useThisData() { return this.data + 3; }

SJS infers:

(this: {data: TNumber, ..l} -> TNumber)

In words: a function which expects this to be an object with at least one property, “data” of type number. It returns a number.

If we call a function that needs this incorrectly, SJS will be angry:

> useThisData();
Error: Could not unify: {data: TNumber, ..a} with TUndefined

Because we called useThisData without a preceding object property access (e.g. obj.useThisData), it will get undefined for this. SJS is telling us that our expected type for this is not unifiable with the type undefined.

Polymorphism

Given the following function:

function makeData(x) { return {data: x}; }

SJS infers the following type:

((this: a, b) -> {data: b})

In words: A function that takes anything for its this, and an argument of any type, call it b. It returns an object containing a single field, data of the same type b as the argument.

Row-type polymorphism (static duck typing)

Given the following function:

function getData(obj) { return obj.data; }

SJS infers:

((this: h, {data: i, ..j}) -> i)

In words: a function taking any type for this, and a parameter that contains at least one property, named “data” that has some type i (could be any type). The function returns the same type i as the data property.

SJS is an ongoing project – I hope to blog about specific implementation concerns or type system features soon.


Categories: Offsite Blogs

Jasper Van der Jeugt: Haskell Design Patterns: .Extended Modules

Planet Haskell - Mon, 01/19/2015 - 6:00pm
Introduction

For a long time, I have wanted to write a series of blogposts about Design Patterns in Haskell. This never really worked out. It is hard to write about Design Patterns.

First off, I have been writing Haskell for a long time, so mostly things feel natural and I do not really think about code in terms of Design Patterns.

Additionally, I think there is a very, very thin line between what we call “Design Patterns” and what we call “Common Sense”. Too much on one side of the line, and you sound like a complete idiot. Too much on the other side of the line, and you sound like a pretentious fool who needs five UML diagrams in order to write a 100-line program.

However, in the last year, I have both been teaching more Haskell, and I have been reading even more code written by other people. The former made me think harder about why I do things, and the latter made me notice patterns I hadn’t thought of before, in particular if they were formulated in another way.

This has given me a better insight into these patterns, so I hope to write a couple of blogposts like this over the next couple of months. We will see how it goes – I am not exactly a prolific blogger.

The first blogpost deals with what I call .Extended Modules. While the general idea has probably been around for a while, the credit for this specific scheme goes to Bas van Dijk, Simon Meier, and Thomas Schilling.

.Extended Modules: the problem

This problem mainly revolves around the organisation of code.

Haskell allows for building complex applications out of small functions that compose well. Naturally, if you are building a large application, you end up with a lot of these small functions.

Imagine we are building some web application, and we have a small function that takes a value and then sends it to the browser as JSON:

json :: (MonadSnap m, Aeson.ToJSON a) => a -> m ()
json x = do
    modifyResponse $ setContentType "application/json"
    writeLBS $ Aeson.encode x

The question is: where do we put this function? In small projects, these seem to inevitably end up inside the well-known Utils module. In larger, or more well-organised projects, it might end up in Foo.Web or Foo.Web.Utils.

However, if we think outside of the box, and disregard dependency problems and libraries including every possible utility function one can write, it is clearer where this function should go: in Snap.Core.

Putting it in Snap.Core is obviously not a solution – imagine the trouble library maintainers would have to deal with in order to include all these utility functions.

The basic scheme

The scheme we use to solve this is simple yet powerful: in our own application’s non-exposed modules list, we add Snap.Core.Extended.

src/Snap/Core/Extended.hs:

{-# LANGUAGE OverloadedStrings #-}
module Snap.Core.Extended
    ( module Snap.Core
    , json
    ) where

import qualified Data.Aeson as Aeson

import Snap.Core

json :: (MonadSnap m, Aeson.ToJSON a) => a -> m ()
json x = do
    modifyResponse $ setContentType "application/json"
    writeLBS $ Aeson.encode x

The important thing to notice here is the re-export of module Snap.Core. This means that, everywhere in our application, we can use import Snap.Core.Extended as a drop-in replacement for import Snap.Core.

This also makes sharing code in a team easier. For example, say that you are looking for a catMaybes for Data.Vector.

Before, I would have considered either defining this in a where clause, or locally as a non-exported function. This works for single-person projects, but not when different people are working on different modules: you end up with five implementations of this method, scattered throughout the codebase.

With this scheme, however, it’s clear where to look for such a method: in Data.Vector.Extended. If it’s not there, you add it.
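As an illustration, such a catMaybes might be sketched like this (shown here as a standalone snippet with a demo main; under the scheme it would live in Data.Vector.Extended next to a re-export of Data.Vector — the implementation below is my own guess, not code from the post):

```haskell
import qualified Data.Vector as V

-- Keep only the Just values, analogous to Data.Maybe.catMaybes for lists.
catMaybes :: V.Vector (Maybe a) -> V.Vector a
catMaybes = V.concatMap (maybe V.empty V.singleton)

main :: IO ()
main = print (catMaybes (V.fromList [Just 1, Nothing, Just (3 :: Int)]))
```

Running it prints [1,3]: the Nothing is dropped and the Just values survive.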

Aside from utility functions, this scheme also works great for orphan instances. For example, if we want to serialize a HashMap k v by converting it to [(k, v)], we can add a Data.HashMap.Strict.Extended module.

src/Data/HashMap/Strict/Extended.hs:

{-# OPTIONS_GHC -fno-warn-orphans #-}
module Data.HashMap.Strict.Extended
    ( module Data.HashMap.Strict
    ) where

import Data.Binary (Binary (..))
import Data.Hashable (Hashable)
import Data.HashMap.Strict

instance (Binary k, Binary v, Eq k, Hashable k) =>
        Binary (HashMap k v) where
    put = put . toList
    get = fmap fromList get

A special case of these .Extended modules is Prelude.Extended. Since you will typically import Prelude.Extended into almost all modules in your application, it is a great way to add a bunch of (very) common imports from base, so import noise is reduced.

This is, of course, quite subjective. Some might want to add a few specific functions to Prelude (as illustrated below), and others might prefer to add all of Control.Applicative, Data.List, Data.Maybe, and so on.

src/Prelude/Extended.hs:

module Prelude.Extended
    ( module Prelude
    , foldl'
    , fromMaybe
    ) where

import Data.List  (foldl')
import Data.Maybe (fromMaybe)
import Prelude

Scaling up

The basic scheme breaks once our application consists of several cabal packages.

If we have a package acmecorp-web, which depends on acmecorp-core, we would have to expose Data.HashMap.Strict.Extended from acmecorp-core, which feels weird.

A simple solution is to create an unordered-containers-extended package (which is not uploaded to the public Hackage for obvious reasons). Then, you can export Data.HashMap.Strict.Extended from there.

This solution creates quite a lot of overhead. Having many modules is fine, since they are easy to manage – they are just files after all. Managing many packages, however, is harder: every package introduces a significant amount of overhead: for example, repos need to be maintained, and dependencies need to be managed explicitly in the cabal file.

An alternative solution is to simply put all of these modules together in a hackage-extended package. This solves the maintenance overhead and still gives you a very clean module hierarchy.

Conclusion

After using this scheme for over a year in a large, constantly evolving Haskell application, it is clear to me that this is a great way to organise and share code in a team.

A side-effect of this scheme is that it becomes very convenient to consider some utility functions from these .Extended modules for inclusion in their respective libraries, since they all live in the same place. If they do get added, just remove the originals from hackage-extended, and the rest of your code doesn’t even break!

Thanks to Alex Sayers for proofreading!

Categories: Offsite Blogs

Experimenting with a tagged ReaderT

Haskell on Reddit - Mon, 01/19/2015 - 5:36pm

The mtl defines MonadReader r m such that it is impossible to make m an instance of multiple distinct MonadReader r, due to the functional dependency m -> r. This makes one's life difficult when trying to build a modular design: to emulate multiple readers, the MonadReader r m, HasFoo r, HasBar r pattern seems to be the best one can do (cf this link).
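Concretely, that workaround might look something like the following sketch (the Foo/Bar environment types and accessor names are hypothetical, not taken from the linked discussion):

```haskell
import Control.Monad.Reader

newtype Foo = Foo String
newtype Bar = Bar Int

-- One concrete environment, with a class per piece so components
-- only demand the fields they actually use.
class HasFoo r where getFoo :: r -> Foo
class HasBar r where getBar :: r -> Bar

data Env = Env { envFoo :: Foo, envBar :: Bar }

instance HasFoo Env where getFoo = envFoo
instance HasBar Env where getBar = envBar

-- A single MonadReader layer; modularity comes from the Has* constraints.
describe :: (MonadReader r m, HasFoo r, HasBar r) => m String
describe = do
    Foo s <- asks getFoo
    Bar n <- asks getBar
    return (s ++ "/" ++ show n)

main :: IO ()
main = putStrLn (runReader describe (Env (Foo "app") (Bar 7)))
```

Running it prints app/7. The cost is that every component must thread the same concrete Env, which is the modularity pain the post is reacting to.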

What about adding a tag to distinguish various MonadReader layers within a monad stack? The point is basically to turn the functional dependency from m -> r into tag m -> r. I didn't find any study/implementation of this idea, so I've started implementing it myself here. The current implementation has some drawbacks:

  • a tag type must be defined for each layer
  • the MonadReader TagType ContextType m syntax looks a bit cluttered
  • the generic instance for ReaderT t r requires some type-level hacks
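The core idea might be sketched roughly as follows (this is my own minimal reconstruction with hypothetical FooTag/BarTag names, not the code from the linked repository, and it omits the generic ReaderT instance mentioned above):

```haskell
{-# LANGUAGE MultiParamTypeClasses #-}
{-# LANGUAGE FunctionalDependencies #-}
{-# LANGUAGE FlexibleInstances #-}

import Control.Monad.Trans.Class (lift)
import Control.Monad.Trans.Reader (ReaderT, ask, runReaderT)

-- One tag type per reader layer (the first drawback listed above).
data FooTag = FooTag
data BarTag = BarTag

newtype Foo = Foo String
newtype Bar = Bar Int

-- The fundep is now (tag, m) -> r rather than m -> r, so a single
-- stack can carry several reader layers with distinct environments.
class Monad m => TaggedReader tag r m | tag m -> r where
    taggedAsk :: tag -> m r

type App = ReaderT Foo (ReaderT Bar IO)

instance TaggedReader FooTag Foo App where
    taggedAsk _ = ask

instance TaggedReader BarTag Bar App where
    taggedAsk _ = lift ask

prog :: App String
prog = do
    Foo s <- taggedAsk FooTag
    Bar n <- taggedAsk BarTag
    return (s ++ " " ++ show n)

main :: IO ()
main = putStrLn =<< runReaderT (runReaderT prog (Foo "hello")) (Bar 42)
```

Running it prints hello 42; both environments are reached through the same class, selected by tag rather than by environment type.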

Still, I'm using it in an intermediate project to achieve full modularity, and up to now it has proven really useful. I'm even considering packaging it into a standalone library, but first I'd like some advice from the community: what do you think of the idea? Is it worth packaging? Do you have improvement suggestions?

Thank you for your answers.

submitted by k0ra1
[link] [comment]
Categories: Incoming News

Tentative PayPal client library

Haskell on Reddit - Mon, 01/19/2015 - 4:51pm

Hey folks,

I've been working on a client library for basic PayPal functionality. Anybody interested in taking a look before I release it to Hackage?

Here's the GitHub page: https://github.com/fanjam/paypal-adaptive-hoops

And here's my candidate Hackage upload: http://hackage.haskell.org/package/paypal-adaptive-hoops-0.4.0.1/candidate

Any constructive criticism is welcome.

Thanks so much!

submitted by darkgold
[link] [4 comments]
Categories: Incoming News

http-client: proxy environment variable support

haskell-cafe - Mon, 01/19/2015 - 3:53pm
Neil Mitchell opened an issue[1] for http_proxy and https_proxy environment variable support in http-client. I've written that support, and it's ready to go, but there's an open question: what should the default behavior be? In particular, should environment variables, by default, be checked to determine the proxy, or not?

Arguments each way.

In favor of using environment variables:

  • Matches behavior of many other tools and libraries
  • Allows application users control without requiring a code change from application writers

Against using environment variables:

  • It's a change in behavior vs what http-client does today (though that could certainly be seen as just a missing feature)
  • Environment variables will implicitly change the behavior of code, which generally speaking can be problematic

I'm leaning towards having the default behavior be:

  • If the user explicitly chooses a proxy setting on the manager, use that
  • If the user explicitly sets a proxy value on the Request, use that
  • If the environment
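The precedence being proposed could be sketched like this (a hypothetical stand-alone sketch with its own Proxy type and naive parsing, not http-client's actual API):

```haskell
import System.Environment (lookupEnv)

-- Hypothetical stand-in for http-client's proxy type.
data Proxy = Proxy String Int deriving Show

-- Manager-level setting wins, then the request-level setting,
-- then the http_proxy environment variable, else no proxy.
resolveProxy :: Maybe Proxy -> Maybe Proxy -> IO (Maybe Proxy)
resolveProxy managerP requestP =
    case (managerP, requestP) of
        (Just p, _) -> return (Just p)
        (_, Just p) -> return (Just p)
        _           -> fmap (fmap parseProxy) (lookupEnv "http_proxy")
  where
    -- Naive "host:port" parsing, just for the sketch.
    parseProxy s = case break (== ':') s of
        (h, ':' : p) -> Proxy h (read p)
        (h, _)       -> Proxy h 80

main :: IO ()
main = print =<< resolveProxy Nothing (Just (Proxy "localhost" 3128))
```

With no manager-level setting, the request-level value is used, so this prints Just (Proxy "localhost" 3128) regardless of the environment.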
Categories: Offsite Discussion

GHC Weekly News - 2015/01/19

Haskell on Reddit - Mon, 01/19/2015 - 3:41pm
Categories: Incoming News

The GHC Team: GHC Weekly News - 2015/01/19

Planet Haskell - Mon, 01/19/2015 - 3:35pm

Hi *,

It's time for some more GHC news! The GHC 7.10 release is closing in, which has been the primary place we're focusing our attention. In particular, we're hoping RC2 will be Real Soon Now.

Some notes from the past GHC HQ meetings this week:

  • GHC 7.10 is still rolling along smoothly, and it's expected that RC2 will be cut this Friday, January 23rd. Austin sent out an email about this to ghc-devs, so we can hopefully get all the necessary fixes in.
  • Currently, GHC HQ isn't planning on focusing many cycles on any GHC 7.10 tickets that aren't highest priority. We're otherwise going to fix things as we see fit, at our leisure - but a highest priority bug is a showstopper for us. This means if you have something you consider a showstopper for the next release, you should bump the priority on the ticket and yell at us!
  • We otherwise think everything looks pretty smooth for 7.10.1 RC2 - our libraries are updated, and most of the currently queued patches (with a few minor exceptions) are done and merged.

Some notes from the mailing list include:

  • Austin has alerted everyone that soon, Phabricator will run all builds with ./validate --slow, which will increase the time taken for most builds, but will catch a wider array of bugs in commits and submitted patches - there are many cases the default ./validate script still doesn't catch. https://www.haskell.org/pipermail/ghc-devs/2015-January/008030.html
  • Johan Tibell asked about some clarifications for the HsBang datatype inside GHC. In response, Simon came back with some clarifications, comments, and refactorings, which greatly helped Johan. https://www.haskell.org/pipermail/ghc-devs/2015-January/007905.html
  • Richard Eisenberg had a question about the vectoriser: can we disable it? DPH seems to have stagnated a bit recently, bringing into question the necessity of keeping it on. There hasn't been anything done yet, but it looks like the build will get lighter, with a few fewer modules, soon: https://www.haskell.org/pipermail/ghc-devs/2015-January/007986.html
  • Jan Stolarek has a simple question: what English spelling do we aim for in GHC? It seems that while GHC accepts an assortment of British and American English spellings in its pragmas (e.g. SPECIALIZE and SPECIALISE), the compiler itself sports an assortment of British/American identifiers of its own! https://www.haskell.org/pipermail/ghc-devs/2015-January/007999.html

Closed tickets the past few weeks include: #9966, #9904, #9969, #9972, #9934, #9967, #9875, #9900, #9973, #9890, #5821, #9984, #9997, #9998, #9971, #10000, #10002, #9243, #9889, #9384, #8624, #9922, #9878, #9999, #9957, #7298, and #9836.

Categories: Offsite Blogs

Project euler problem #3: comparing my solution to the official.

Haskell on Reddit - Mon, 01/19/2015 - 3:26pm

I'm just looking for some insight and feedback. I came up with this answer for problem 3; find the largest prime factor of 600851475143.

sqrtInt n = round $ sqrt $ fromIntegral n

factors n = filter (\x -> n `rem` x == 0) [2..(sqrtInt n)]

prime n = null $ factors n

main = do
    let number = 600851475143
    let primes = filter prime $ factors number
    print (last primes)

How does this stack up against the official answer given in the Haskell wiki? My answer makes a lot more sense to me coming from an imperative programming background and the magic in the official answer kind of scares me. I am curious, how does my answer compare in terms of efficiency and why?

primes = 2 : filter (null . tail . primeFactors) [3,5..]

primeFactors n = factor n primes
  where
    factor n (p:ps)
        | p*p > n        = [n]
        | n `mod` p == 0 = p : factor (n `div` p) (p:ps)
        | otherwise      = factor n ps

problem_3 = last (primeFactors 600851475143)

submitted by Chronic8888
[link] [7 comments]
Categories: Incoming News

CFP Bx'15: 4th International Workshop on Bidirectional Transformations

General haskell list - Mon, 01/19/2015 - 2:13pm
CALL FOR PAPERS

Fourth International Workshop on Bidirectional Transformations (Bx 2015)
L'Aquila, Italy (co-located with STAF, July 20-24, 2015)
http://bx-community.wikidot.com/bx2015:home

Bidirectional transformations (Bx) are a mechanism for maintaining the consistency of at least two related sources of information. Such sources can be relational databases, software models and code, or any other document following standard or ad-hoc formats. Bx are an emerging topic in a wide range of research areas, with prominent presence at top conferences in several different fields (namely databases, programming languages, software engineering, and graph transformation), but with results in one field often getting limited exposure in the others. Bx 2015 is a dedicated venue for Bx in all relevant fields, and is part of a workshop series that was created in order to promote cross-disciplinary research and awareness in the area. As suc
Categories: Incoming News

garbage collection for a data structure

haskell-cafe - Mon, 01/19/2015 - 2:10pm
Hi, I was wondering if there was a way to check whether a particular data structure gets garbage collected in a program. A friendly person pointed me to System.Mem.Weak on the Haskell-Beginner list - however I've been unable to verify how it works, so I'm bumping it to this list. See the following toy program: I was trying to see whether the output would contain "garbage collected". I wondered if performGC is a nudge rather than an immediate "garbage collect now" instruction, and performGC is not actually performed? Or I've misunderstood finalizers in this context and they would not actually be executed when z gets garbage collected?

import System.Mem.Weak
import System.Mem (performGC)
import Control.Concurrent (threadDelay)

main :: IO ()
main = do
    let x = 5
        y = "done"
        z = 3
    a <- mkWeak z x (Just (putStrLn "garbage collected"))
    performGC
    threadDelay 20000000
    print y

Thank you, Elise
Categories: Offsite Discussion

GHC Trac hits ticket #10000

Haskell on Reddit - Mon, 01/19/2015 - 1:39pm
Categories: Incoming News

Book announcement: Robert Kowalski, LOGIC FOR PROBLEM SOLVING, REVISITED

General haskell list - Mon, 01/19/2015 - 12:13pm
New Book

Robert Kowalski
LOGIC FOR PROBLEM SOLVING, REVISITED
ISBN 9783837036299

Also available as E-Book: http://books.google.de/books?id=6vh1BQAAQBAJ&hl=en

Algorithm = Logic + Control

Robert Kowalski revisits his classic text on Computational Logic in the light of subsequent developments, extending it by a substantial commentary of fifty pages.
Categories: Incoming News

Programs for Cheap! (pdf)

Haskell on Reddit - Mon, 01/19/2015 - 8:55am
Categories: Incoming News