News aggregator

Functional Jobs: Mid/Senior Software Development Engineer at Lookingglass Cyber Solutions (Full-time)

Planet Haskell - Thu, 04/16/2015 - 3:46pm

Lookingglass is the world leader in cyber threat intelligence management. We collect and process all-source intelligence, connecting organizations to valuable information through our cyber threat intelligence monitoring and management platform.

Our solutions allow customers to continuously monitor threats far and near, such as the presence of botnets, hosts associated with cybercriminal networks, unexpected route changes and the loss of network resiliency.

We are seeking qualified Software Development Engineers to join our team!

Required Skills & Experience:

  • US Citizen or Green Card Holder
  • Location: MD/VA based
  • Bachelor’s or Master’s degree in computer science, engineering, information systems, or mathematics
  • 3–5 yrs experience with the full development life cycle, shipping products
  • 2+ yrs experience with functional and OO languages – have worked in functional paradigms with immutable data models
  • 3+ yrs building distributed-system products including messaging, NoSQL, and RPC/RMI mechanisms – including building Service-Oriented Architectures
  • Proficiency with data structure and algorithm analysis
  • Experience working in a TDD environment

Nice to Have:

  • Product development experience in network security, content security or cyber threat intelligence
  • Experience with concurrency models
  • Experience with key-value distributed databases
  • Experience deploying production software in Haskell, OCaml, Clojure, or Lisp
  • Comfortable writing one or more of the following: JavaScript, Go, Ruby

At Lookingglass, we believe our employees are our greatest assets. We offer competitive salaries and a full benefits package, with available medical, dental, vision, and disability insurance, a 401k retirement plan, and stock options. We offer generous PTO, a well-supplied kitchen, and regular team activities. Most importantly, we offer the opportunity to build a world-class product with a team of talented engineers.

Get information on how to apply for this position.

Categories: Offsite Blogs

Functional Jobs: Senior/Principal Software Development Engineer at Lookingglass Cyber Solutions (Full-time)

Planet Haskell - Thu, 04/16/2015 - 3:46pm

Are you an experienced software engineer in security, networking, cloud and big data? Are you interested in cyber security or improving the security of the Internet? Do you push yourself to be creative and innovative and expect the same of others?

At Lookingglass, we are driven and passionate about what we do. We believe that teams, not individuals, deliver great products. We inspire each other and our customers every day with technology that improves the security of the Internet and of our customers. Behind our success is a team that thrives on collaboration and creativity, delivering meaningful impact to our customers.

We are currently seeking qualified Senior/Principal Software Development Engineers to join our team!

Required Skills & Experience:

  • US Citizen or Green Card Holder
  • Location: MD/VA or CA based
  • Bachelor’s or Master’s degree in computer science, engineering, information systems, or mathematics
  • Strong technical leader with proven experience leading project technologies and mentoring junior development team members
  • 7–10 yrs experience with the full development life cycle, shipping products
  • 4–8 yrs experience with functional and OO languages – have worked in functional paradigms with immutable data models
  • 5+ yrs building distributed-system products including messaging, NoSQL, RPC/RMI mechanisms, and Service-Oriented Architectures
  • Proficiency with data structures, algorithm analysis, and concurrent programming
  • Experience working in a TDD environment
  • Comfortable with aggressive refactoring
  • Architectural and design skills to map a solution across hardware, software, and business layers in a distributed architecture

Nice to Have:

  • Product development experience in network security, content security or cyber threat intelligence
  • Experience with CSP concurrency models
  • Experience with key-value distributed databases
  • Experience deploying production software in Haskell, OCaml, Clojure, or Lisp
  • Polyglot programmer with production experience in imperative, declarative, OO, functional, strongly/weakly typed, static/dynamic, interpreted/compiled languages

Lookingglass believes our employees are our greatest assets. We offer competitive salaries and a full benefits package, with available medical, dental, vision, and disability insurance, a 401k retirement plan, and stock options. We offer generous PTO, a well-supplied kitchen, and regular team activities. Most importantly, we offer the opportunity to build a world-class product with a team of talented engineers.

Get information on how to apply for this position.

Categories: Offsite Blogs

Visual control/data flow within a Haskell expression

Haskell on Reddit - Thu, 04/16/2015 - 11:16am

So there was a recent LightTable discussion that made me think about what a pure, statically typed language could offer in terms of "next-gen editors". Apart from explicit state and FRP libraries enabling live programming and time-travelling debuggers à la Elm, I was thinking about features like visualizing control and data flow right there in the editor. Compared to imperative languages, where data and control traditionally flow "top-to-bottom", following the data flow in functional languages is a bit harder, since call-graph dependencies can jump all over the place, e.g.:

f x = [...] y [...]
  where
    b = h w [...]
    y = g w x z [...]
    z = j b [...]
    w = h x

So I was thinking, what speaks against something like this:

[1] [2]

Hovering over different identifiers would show the links between the dependencies. Functions defined somewhere else could be shown in hovering windows next to the currently inspected code paragraph and so on. All of this of course nicely integrated into some future graphics-supporting version of vim/emacs :)

SourceGraph only shows dependencies between functions, without delving into the actual definitions.

I guess I'm not the first one to think of this, are there any projects trying to do something similar?

submitted by _pka
[link] [4 comments]
Categories: Incoming News

Ongoing IHG work to improve Hackage security

haskell-cafe - Thu, 04/16/2015 - 10:33am
All,

The IHG members identified Hackage security as an important issue some time ago, and my colleague Austin and I have been working on a design and implementation. The details are in this blog post: http://www.well-typed.com/blog/2015/04/improving-hackage-security

We should have made more noise earlier about the fact that we're working on this. We saw that it was important to finally write this up now because other similar ideas are under active discussion and we don't want to cause too much unnecessary duplication.

The summary is this: We're implementing a system to significantly improve Hackage security. It's based on a sensible design (The Update Framework) by proper crypto experts. The basic system is fully automatic and covers all packages on Hackage. A proposed extension would give further security improvements for individual packages at the cost of a modest effort from package authors. http://theupdateframework.com/

It will also allow the secure use of untrusted public Hackage mirrors, w…
Categories: Offsite Discussion

Functions for type level lists?

Haskell on Reddit - Thu, 04/16/2015 - 9:45am

In some of my work on trying to implement relational algebra in Haskell, I often find myself writing functions (more accurately, closed type families) that work on type-level lists. Basic stuff, like subtraction, intersection, lookup, etc. Is there a package where someone has already done all of these? I'm already aware of Data.Promotion.Prelude.List from the singletons package, but the way it determines type-level equality makes me uncomfortable (https://github.com/goldfirere/singletons/issues/106). So, if anyone has seen any packages that are basically Data.List but on the type level, I would appreciate hearing about them (plus, it just seems like someone would have done this already).
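
(For concreteness, here is the kind of closed type family the post means – Data.List operations lifted to type-level lists. Member and Delete are illustrative names, not from any particular package:)

{-# LANGUAGE DataKinds, PolyKinds, TypeFamilies, TypeOperators #-}

-- Type-level membership test. Closed type families permit the
-- non-linear second equation (both occurrences of x must match).
type family Member (x :: k) (xs :: [k]) :: Bool where
  Member x '[]       = 'False
  Member x (x ': xs) = 'True
  Member x (y ': xs) = Member x xs

-- Type-level delete (removes every occurrence of x).
type family Delete (x :: k) (xs :: [k]) :: [k] where
  Delete x '[]       = '[]
  Delete x (x ': xs) = Delete x xs
  Delete x (y ': xs) = y ': Delete x xs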

submitted by andrewthad
[link] [13 comments]
Categories: Incoming News

Philip Wadler: Blame and Coercion: Together Again for the First Time

Planet Haskell - Thu, 04/16/2015 - 9:33am

Blame and Coercion: Together Again for the First Time
Jeremy Siek, Peter Thiemann, Philip Wadler.
PLDI, June 2015.

C#, Dart, Pyret, Racket, TypeScript, VB: many recent languages integrate dynamic and static types via gradual typing. We systematically develop three calculi for gradual typing and the relations between them, building on and strengthening previous work. The calculi are: λB, based on the blame calculus of Wadler and Findler (2009); λC, inspired by the coercion calculus of Henglein (1994); and λS, inspired by the space-efficient calculus of Herman, Tomb, and Flanagan (2006) and the threesome calculus of Siek and Wadler (2010). While λB is little changed from previous work, λC and λS are new. Together, λB, λC, and λS provide a coherent foundation for design, implementation, and optimisation of gradual types.

We define translations from λB to λC and from λC to λS. Much previous work lacked proofs of correctness or had weak correctness criteria; here we demonstrate the strongest correctness criterion one could hope for, that each of the translations is fully abstract. Each of the calculi reinforces the design of the others: λC has a particularly simple definition, and the subtle definition of blame safety for λB is justified by the simple definition of blame safety for λC. Our calculus λS is implementation-ready: the first space-efficient calculus that is both straightforward to implement and easy to understand. We give two applications: first, using full abstraction from λC to λS to validate the challenging part of full abstraction between λB and λC; and, second, using full abstraction from λB to λS to easily establish the Fundamental Property of Casts, which required a custom bisimulation and six lemmas in earlier work.
Categories: Offsite Blogs

Philip Wadler: A complement to blame

Planet Haskell - Thu, 04/16/2015 - 9:32am


A complement to blame
Philip Wadler.
SNAPL, May 2015.

Contracts, gradual typing, and hybrid typing all permit less-precisely typed and more-precisely typed code to interact. Blame calculus encompasses these, and guarantees blame safety: blame for type errors always lays with less-precisely typed code. This paper serves as a complement to the literature on blame calculus: it elaborates on motivation, comments on the reception of the work, critiques some work for not properly attending to blame, and looks forward to applications. No knowledge of contracts, gradual typing, hybrid typing, or blame calculus is assumed.
Categories: Offsite Blogs

Developing 3D Tic-Tac-Toe on Arduino in Haskell

Haskell on Reddit - Thu, 04/16/2015 - 7:39am

Hi Haskell folks on Reddit, and thanks to @CarstenKönig for pointing me here :)

I am building a three-dimensional tic-tac-toe game. I have already built the game itself, which runs in a command prompt (https://github.com/ryo0ka/tictactoe3d), and now I am ready to build a system that runs the game on a cube of bi-color LED lamps with an input button.

As the input through the button and the output through the cube will happen simultaneously, this is going to be my first concurrent/reactive program in a functional language. I am completely clueless. I have read and understood the basics of those programming concepts, but can't quite apply them to my problem.

  • What kind of strategy and library is suited to this problem? (I don't know if this is a legitimate question.) I'm so lost that I need advice even about where to start.

  • (In the strategy/library above,) How can an event listener (or anything that reacts to certain events in the system) "update" data, what does using such mutable data look like (I expect it is really a mutable reference, since the data themselves should be immutable), and how does that preserve referential transparency (please ignore this question if it's nonsense)?

  • (In the strategy/library above,) How does it deal with input that the user can unintentionally send many times? For example, it's hard for us to "tap" a button at just one point in time. I want my program to recognize such a burst of inputs as a single input (see the debounce sketch below).
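
On that last point, a debounce can be as simple as "swallow everything until the pin has been released again". Below is a minimal, library-agnostic sketch in IO; readButton stands in for however the pin is actually polled (hArduino's own monad differs), and the delays are arbitrary guesses:

import Control.Concurrent (threadDelay)

-- Poll until a press is seen, skip the contact bounce, then wait for
-- the release, so one physical press registers as one logical input.
debouncedPress :: IO Bool -> IO ()
debouncedPress readButton = waitPress
  where
    waitPress = do
      pressed <- readButton
      if pressed
        then threadDelay 20000 >> waitRelease  -- 20 ms: ignore bounce
        else threadDelay 5000  >> waitPress    -- poll every 5 ms
    waitRelease = do
      pressed <- readButton
      if pressed
        then threadDelay 5000 >> waitRelease   -- still held down
        else return ()                         -- released: press complete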

About the lamps, button, and other components: I am using the hArduino library (http://hackage.haskell.org/package/hArduino), which basically does all the FFI work and provides free access to Arduino boards. So I have the means to operate them and am ready to dive into the FRP implementation phase.

Also, I've already soldered up the cube and everything (http://i.imgur.com/XJ4P1ZQ.png).

Thank you so much for reading my weird English. I would appreciate it a lot if you could give me any guidance or information, and stick with this topic for my further questions. Thank you again!

This question originally comes from my post on Stack Overflow (http://stackoverflow.com/questions/29674926/developing-a-simple-reactive-program-with-haskell). Thank you @CarstenKönig again for guiding me here :)

submitted by ryo0ka
[link] [6 comments]
Categories: Incoming News

Well-Typed.Com: Improving Hackage security

Planet Haskell - Thu, 04/16/2015 - 3:17am

The members of the Industrial Haskell Group are funding work to improve the security of packages downloaded from Hackage. The IHG members agreed to fund this task some time ago and my colleague Austin and I have been working on the design and implementation.

In this post I will explain the problem and the design of our solution.

TL;DR

We’re implementing a system to significantly improve Hackage security. It’s based on a sensible design (The Update Framework) by proper crypto experts. The basic system is fully automatic and covers all packages on Hackage. A proposed extension would give further security improvements for individual packages at the cost of a modest effort from package authors.

It will also allow the secure use of untrusted public Hackage mirrors, which is the simplest route to better Hackage reliability. As a bonus we’re including incremental index downloads to reduce cabal update wait times. And it’s all fully backwards compatible.

Goals

The general aim is to make a significant incremental improvement to Hackage security, given limited resources to spend on the project. So that means covering as many users, packages and likely threats as we can.

We have had to think carefully about the implementation effort and what improvements will give the biggest security benefits. Our general approach has been to go for the biggest improvements first, but to leave the door open to further tightening later.

Crypto-humility and initial thoughts

Both Austin and I have some experience with cryptography and computer security. Austin used to work for a computer security company, and many years ago I worked at an anti-virus company on the problem of securing the software update process. So we know the general options. Importantly, we also know enough to know that we don’t know everything. Hopefully the combination will be enough to avoid security disasters like the recent Docker image download vulnerability.

The basic options are:

  • end-to-end verification by having package authors sign their packages; and/or
  • having the Hackage server sign the packages (or rather just the index)

Our initial thoughts were that having the server sign the index gives a bigger bang for the buck than a scheme with author signing. This is because with the server doing the signing we can cover all packages and not require any knowledge or effort by either the package authors or the developers downloading the packages. With author signing we fear we would not be able to cover the majority of packages, at least not in the near term. Without covering all the packages we cannot have untrusted Hackage mirrors. Our rough estimate was also that index signing would be less effort to implement. Of course we’re well aware that index signing provides shallower security than the end-to-end verification with author signing.

So while we started with a general idea of the kind of solutions that would fit with Hackage and our budget, we looked around for designs and analysis from better experts than ourselves.

Austin pointed out some prior art called The Update Framework (abbreviated TUF). TUF is an architecture for securing software update systems, though there is also a reference implementation in Python. It has been designed by a number of academics, based on a few published papers and practical experience with the Tor Project’s update system. There is ongoing work to use TUF to secure Python and Ruby’s package distribution systems (both of which are fairly similar to Hackage).

What is good about TUF is that it considers the problem as a whole, with a wide range of threats. It has what security researchers call a “threat model”, which is needed to be able to discuss with clarity what kinds of threats a system can protect against. As an architecture it is also flexible enough to cover a fairly wide range of update systems: one can use subsets and still be clear about what protections one has.

The elements of TUF are still based on the basic ideas of end-to-end signing and index signing, but it is put together in a comprehensive way. There are several attacks that TUF considers and protects against that we did not consider with our initial thoughts on index signing.

Indeed even if one goes for a system based only on end-to-end author-based signing (e.g. individual GPG-signed packages) then there are still several possible attacks. In retrospect this is clear, but it was a bit of a surprise since packages individually GPG-signed by their authors is often considered as the gold standard.

The Update Framework

TUF is designed to protect against adversaries that can interfere with network traffic between you and the repository server, for example someone operating a wifi network you're connected to, or in control of a proxy or operating a mirror of the repository. To a degree it is also designed to protect against adversaries that can compromise a repository server itself.

In a full implementation of the architecture, TUF subsumes both server-based index signing and end-to-end verification with author signing.

It relies on a number of different cryptographic keys with different roles. There are one or more root keys and a few subordinate keys for particular purposes. It does not rely on an external PKI, like the public certificate authority system that SSL/TLS uses.

The key idea with having multiple keys with different roles is that one can design a system in such a way that compromising one key is not enough to break the whole system. This relies on different keys living in different places so that compromising one doesn’t automatically imply compromising the others. Some keys have to live on public facing servers and so are at greatest risk, while others can be kept offline or on private servers and so are much harder to compromise.

Attacks

TUF provides protection against a number of attacks. If you want to imagine a concrete example, consider that the attacker runs or has compromised a mirror of Hackage that you use, and then think about each of these attacks:

  • Supplying modified packages to end users.

  • Rollback attacks where an attacker gets a client to install an older version of a package than a version the client previously knew about. Consider for example a case where the older package might have known security vulnerabilities.

  • Freeze attacks where the attacker prevents the entire set of packages from being updated (e.g. by always responding with an old snapshot).

  • Mix and match attacks where the attacker supplies combinations of packages or package metadata that never existed in the upstream repository.

  • Extraneous dependencies attacks where the attacker causes clients to install dependencies that were not the intended dependencies. For example, with Hackage we might sign individual .tar.gz files but the .cabal metadata is kept separately and the attacker could control that.

  • Wrong author attacks where an otherwise legitimate author supplies a package that they are not authorised to supply.

  • Slow retrieval attacks where the attacker causes the download to be so slow that updates effectively never complete, and without the user noticing. Of course a malicious mirror can always prevent downloads, but it shouldn’t go unnoticed.

  • Endless data attacks where the attacker responds to requests with huge amounts of download data that causes problems for the client. For example a client should always securely know the size of a package before downloading it.

Interestingly, a naive approach based only on authors signing packages solves just the first of the attacks above. All the others are still possible, and several of them can lead directly to compromised client systems, while others can force you to use old, possibly vulnerable software, or simply DoS your system.

Roles for keys

TUF defines a number of roles for keys:

  • Root role key (or keys). The root keys are used to sign the other keys. This is the ultimate root of the trust chain. These keys would be kept very securely, preferably password-protected and offline.

  • Snapshot role key. This is used to sign the package index. This protects the package metadata and prevents mix and match and extraneous dependency attacks.

  • Timestamp role key. This is used to ensure clients only accept an up-to-date package index, e.g. at most a day old. This prevents freeze or rollback attacks.

  • Target role key. This is used for signing individual packages. It is not used directly to sign packages but to sign delegation information. There would be per-author or per-package target keys. The top-level target key is used to say which target key is allowed to be used for which package. This is what gives end-to-end author signing, and the signed delegation information protects against wrong author attacks.

  • Mirror role key (optional). This isn't actually needed for mirroring; it's just for securely distributing a list of official mirrors.

Client verification process

In a full implementation, including delegated target keys, the client download and verification process goes like this:

  • The client downloads a very small timestamp file signed by the timestamp key.
  • It verifies the timestamp file signature (including checking that the timestamp key is signed by the root key(s) and is not yet expired).
  • The timestamp file includes a hash and size of the current package index.

At this point we know (with cryptographic evidence) whether the package index has changed and what size and hash to expect when we download it. We also have a bound on the freshness of this information because the timestamp signature includes a short (e.g. day or two) expiry time.

  • If the timestamp file indicates that the package index has changed then the client downloads the package index.
  • The client verifies the hash of the package index matches the expected one from the timestamp file.
  • The package index is also signed by the snapshot key and so the client verifies this signature too.
  • The package index contains hashes and sizes of all the packages.

At this point we have downloaded the package index successfully and know (again with cryptographic evidence) that it has not been tampered with since being created on the server and so we can trust all the metadata in the index.

  • The client may choose to download an individual package.
  • The client verifies that the hash of the package matches the expected one from the package index.
  • The package index also contains a signature for the package signed by one of the delegated target keys.
  • It also contains a set of target key delegation information, signed by the root target key, listing which delegated target keys are allowed to sign which packages.
  • The client verifies the key delegation information.
  • The client verifies the given target key is allowed to sign for the given package (by looking it up in the key delegation information).
  • The client verifies the package signature with the given target key.

At this point we have downloaded an individual package and we know (with cryptographic evidence) that the package tarball has not been modified since it was created on the author’s system, and that this author is authorised to provide this named package.
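
To make the order of checks concrete, here is a much-simplified Haskell sketch of the first stage of that process (accepting a downloaded index). The types and the verifySig/hashOf stubs are illustrative stand-ins, not the real implementation's API:

import Control.Monad (unless)
import Data.ByteString (ByteString)
import qualified Data.ByteString as BS

-- A value paired with a detached signature.
data Signed a = Signed { payload :: a, signature :: ByteString }

-- What the tiny timestamp file asserts about the current index.
data Timestamp = Timestamp
  { tsExpires :: Int          -- expiry time, e.g. seconds since epoch
  , indexSize :: Int
  , indexHash :: ByteString
  }

-- Stubs: a real client would verify an ed25519 signature and compute
-- a crypto hash here, having first checked the timestamp key itself
-- against the root key(s).
verifySig :: ByteString -> Signed a -> Bool
verifySig _key _signed = undefined

hashOf :: ByteString -> ByteString
hashOf _bytes = undefined

-- Accept an index only if a valid, unexpired timestamp vouches for
-- its size and hash; each failure corresponds to an attack above.
checkIndex :: ByteString -> Int -> Signed Timestamp -> ByteString -> Either String ()
checkIndex tsKey now signedTs indexBytes = do
  unless (verifySig tsKey signedTs)             $ Left "bad timestamp signature"
  let ts = payload signedTs
  unless (now <= tsExpires ts)                  $ Left "timestamp expired (freeze attack?)"
  unless (BS.length indexBytes == indexSize ts) $ Left "unexpected index size (endless data?)"
  unless (hashOf indexBytes == indexHash ts)    $ Left "index hash mismatch"
  Right ()

The package and delegation checks have the same shape: each artefact is accepted only against a size, hash, or signature that an already-verified artefact vouched for.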

Discussion

The different keys provide assurance of slightly different (but somewhat overlapping) things. If we only care about man-in-the-middle attacks then we only need the timestamp key and signature, since it includes the hash of the index which includes hashes of the packages.

The addition of the snapshot key to sign the index allows the snapshot key to be kept on a different system from the public-facing index server (think of an organisation like Red Hat with an internal system only available to people uploading packages). This timestamp/snapshot separation would allow the public-facing index server to be compromised with the damage being limited to freeze attacks (and the duration of this is bounded).

The addition of the target key (and delegated target keys) gives additional protection in the case that the index server is compromised. When target keys are being used, an attacker that has taken control of the index server can perform freeze attacks, rollback and mix and match attacks, but cannot change the contents of individual packages, nor provide packages that they are not authorised to provide.

Again, it is important to emphasise that all the keys need to be kept in separate places for them to have any security benefit.

Note that slow download and endless data attacks can be prevented because except for the initial tiny timestamp file, the client knows (securely) the expected download size and so carefully written download code can at least notice slow download and endless data attacks.
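
As a sketch of what "carefully written download code" means, assuming a hypothetical chunked-receive primitive (a real client would also enforce a timeout or minimum transfer rate to catch slow retrieval):

import qualified Data.ByteString as BS

-- Accept at most the signed, expected number of bytes. recvChunk is
-- a stand-in: it returns the next chunk, or empty at end of stream.
downloadBounded :: Int -> IO BS.ByteString -> IO (Either String BS.ByteString)
downloadBounded expected recvChunk = go BS.empty
  where
    go acc
      | BS.length acc > expected  = return (Left "more data than promised (endless data?)")
      | BS.length acc == expected = return (Right acc)
      | otherwise = do
          chunk <- recvChunk
          if BS.null chunk
            then return (Left "connection closed before the expected size")
            else go (BS.append acc chunk)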

The Update Framework for Hackage

TUF is an architecture (albeit with a reference implementation) designed to be flexible enough to cover many use cases (distros, individual apps etc) and so we are free to adapt it for our particular use case.

For the case of Hackage as currently designed, the timestamp/snapshot key distinction does not add anything, as both keys would need to be on the same system. This is because the Hackage server is public facing both for download and upload. There would need to be a separate more secure upload server for the timestamp/snapshot distinction to add value. So for the Hackage implementation we will merge the timestamp and snapshot keys into a single package index signing key.

We also need to adapt the format and layout of the various bits of TUF metadata to fit the existing Hackage index format. For the most part this is fairly straightforward. The timestamp file can be provided as a separate file over HTTP, since it needs to be small and downloaded before the package index. All the other files that TUF specifies can be included inside the Hackage index file. This is straightforward because the Hackage index is simply a tarball and is designed to be extensible with extra file entries.

Our development plan had us start with the package index signing but keeping in mind the target keys feature so that it can be added later without any compatibility problems. We have enough funding to complete the package index signing and we hope that with this clear design roadmap we can secure either further funding or community volunteer effort.

Implementing target keys later would involve client tools (like cabal) generating keys and an out-of-band mechanism to submit them to be signed by the holder of the central target key (in practice the Hackage admins). For updating who is allowed to upload which package, a semi-manual process will be needed for the central target key holders to check and authorise re-signing the key delegation information. Tools like cabal would also need to be extended to sign tarballs and upload the signatures. So there would be some extra work for package authors but it could be kept fairly minimal. The main cost would be that changing the set of maintainers for a package on Hackage would become an asynchronous process involving a Hackage admin re-signing something.

As a final point, Hackage allows package authors and Hackage trustees to edit .cabal file metadata after release. So when we add target keys then these parts of the index should be protected with the target keys, not just the main package index key.

Comparing with other approaches

HTTPS

HTTPS is often mentioned as a way to improve Hackage security. As the TUF papers make clear, relying on the public PKI for security leaves a lot to be desired, given all the well-documented problems with trusting all the public certificate authorities in the world. In addition, relying on HTTPS does not allow for untrusted mirrors, whereas TUF’s index signing approach does allow that. Finally, on a practical note HTTPS is unfortunately complicated to support sanely on all platforms (OpenSSL is horrible, and distributing it for platforms that don’t come with it is painful). By contrast, TUF only relies on a crypto hash and signature verification algorithm. There are many choices for these including good choices that are small enough to embed existing audited C code directly into a Haskell library or tool. We plan to use ed25519 and in particular the C implementation extracted from NaCl.

GnuPG

Another approach that has been discussed and had some development effort is one based on author signing with GnuPG by Nikita Karetnikov. It was designed to fit with the existing Hackage index format with signatures being included in the package index tarball. The primary difficulty with a GnuPG-based system is in establishing the web-of-trust to a sufficient degree that signature verification can be done fully automatically without supervision from the user.

More GnuPG

Another recent GnuPG-based proposal comes from Chris Done. This proposal is under active discussion so in the spirit of constructive criticism it’s worth comparing it with TUF.

The first general point is that TUF has been designed by academic experts in the subject, based both on research and existing real-world systems. Our crypto-humility should cover not just crypto algorithms but extend to whole system designs. As I mentioned at the beginning of this post, though my initial thoughts on package index signing looked quite similar to the root+snapshot key part of the TUF system, there are many extra details that TUF covers that I would not have considered. With these in mind we can already spot underspecified areas in the proposal. For example one part of the spec says “Downloads the index from the web server or loads from cache if not modified”, but this leaves open the question of how we know securely if the index has been modified or not. TUF addresses this with the signed timestamp file. This kind of detail is not something I would have considered either, but TUF identifies the problem (that relying on HTTP cache control headers would be vulnerable to MITM) and has a solution. Generally in a system designed by crypto amateurs like us we would expect to have such holes. So we can do better by picking an existing expert design. TUF seems like a reasonable choice since it is designed to be somewhat flexible to fit different situations, and it’s being considered for both Python and Ruby’s package systems.

More specifically, the GnuPG-based proposal relies on a web of trust in which individual end users have to decide who they trust and configure their tools appropriately. A more recent version of the proposal includes signed mappings from packages to signers, which are basically statements of “I trust so and so to upload package foo”, but you still need to trust the author of that statement.

The system seems to be designed primarily for enterprise users who are using a carefully selected set of packages where a level of manual supervision to decide which authors to trust is justified. By contrast TUF’s target key delegation system is designed to be fully automatic for end users.

With TUF’s target key delegation, the chain of trust is very clear and simple: the root key signs the target key, which signs delegated keys and delegation information, and the delegated keys sign individual packages.

I also think it’s unfortunate that the signatures are not integrated as part of the Hackage index as in Nikita’s system (though it is in keeping with the general 3rd party design). On the positive side, the system has the flexibility to have 3rd parties make claims of trustworthiness, such as “I have manually audited this code” or “My CI system says this builds ok without obviously attempting to attack the system”. That extra complexity adds to the amount of manual supervision needed and seems to have been left out of the latest version of the proposal.

Finally because the proposal has nothing corresponding to the TUF snapshot key signature then all the corresponding attacks are possible, including rollback, freeze and mix-and-match attacks.

A central authority

A suggestion that has come up in recent discussion is that there should be a central authority for who is allowed to upload which package and that that information should be transparent. The criticism is that while Hackage is currently that central authority, it does not generate any verifiable evidence.

With TUF the target key delegation information is precisely that cryptographic evidence. The holders of the target key are that central authority, and the signed delegation information is the cryptographic evidence of who is allowed to upload what, and that information is distributed to all clients.

Incremental updates

There has been a design for incremental updates of the Hackage index floating around for some years. With the TUF approach of putting more signatures into the index the pain of slow Hackage index downloads would become critical so our design incorporates that existing design for incremental updates to the index. This means most index updates will be 10s of kilobytes rather than multiple megabytes. This is actually even better than rsync (and likely better than git). The trick is that the tar format was originally designed to be append only (for tape drives) and so if the server simply updates the index in an append only way then the clients only need to download the tail (with appropriate checks and fallback to a full update). Effectively the index becomes an append only transaction log of all the package metadata changes. This is also fully backwards compatible.
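
The client side of that trick fits in a few lines. A sketch, assuming a hypothetical ranged-GET helper fetchFrom (real code would also verify the grown index against the signed timestamp/snapshot and fall back to a full download on any mismatch):

import qualified Data.ByteString as BS

-- Hypothetical: an HTTP GET with a Range header, returning all bytes
-- from the given offset to the end of the remote file.
fetchFrom :: Int -> IO BS.ByteString
fetchFrom = undefined

-- Because the index tarball only ever grows by appending, updating is
-- just "download the tail and append it".
updateIndex :: FilePath -> IO ()
updateIndex indexFile = do
  old <- BS.readFile indexFile
  new <- fetchFrom (BS.length old)
  BS.writeFile indexFile (BS.append old new)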

Conclusion

We think implementing TUF for Hackage will be a major improvement to security. The initial implementation without target keys will be completely transparent to both end users and authors, and then the later addition of per-author target keys will further improve security by guarding against certain attacks even if the central server is compromised.

We accept that although we are programming experts, we are not crypto experts. We are hopeful that by using TUF we are reusing an existing expert crypto design and will not fall into the traps that typically bedevil our fellow crypto amateurs.

Categories: Offsite Blogs

Advice needed on how to improve some code

haskell-cafe - Thu, 04/16/2015 - 1:57am
Hello,

I am seeking some advice on how I might improve a bit of code. The function in question reads and parses part of a binary protocol, storing the parsed info as it proceeds. parseDeviceData is called by parseDevice (shown further down). It looks to me like there should be a more concise, less repetitive way to do what parseDeviceData does. Any advice on this would be greatly appreciated.

parseDeviceData :: P.Payload -> Parser P.Payload
parseDeviceData pl =
  let mdm = P.dataMask ( P.payloadData pl )
  in  ( let pld = P.payloadData pl
        in  if testBit mdm ( fromEnum D.Sys )
              then parseDeviceSysData >>= ( \s ->
                     return ( pl { P.payloadData = pld { P.sysData = Just s } } ) )
              else return pl )
      >>= ( \pl' ->
        let pld = P.payloadData pl'
        in  if testBit mdm ( fromEnum D.GPS )
              then parseDeviceGPSData >>= ( \s ->
                     return ( pl' { P.payloadData = pld { P…
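
The repetition above is one pattern instantiated per field – "if this bit of the mask is set, run a sub-parser and store its result" – so one possible answer is to name that pattern once and fold over a table of fields. A self-contained toy sketch (the poster's real types and sub-parsers are replaced by stand-ins):

import Control.Monad (foldM)
import Data.Bits (testBit)
import Data.Word (Word8)

-- Toy stand-in for the poster's payload type.
data Payload = Payload { sysData :: Maybe Int, gpsData :: Maybe Int }
  deriving Show

-- The repeated pattern, named once: if bit i of the mask is set,
-- run the sub-parser and store its result, else pass through.
whenBit :: Monad m => Word8 -> Int -> m a -> (a -> Payload -> Payload) -> Payload -> m Payload
whenBit mask i parse store pl
  | testBit mask i = parse >>= \s -> return (store s pl)
  | otherwise      = return pl

-- The whole parser becomes a fold over a field table instead of a
-- hand-written >>= chain; adding a field adds one line here.
parseDeviceData :: Monad m => Word8 -> m Payload
parseDeviceData mask = foldM (flip ($)) (Payload Nothing Nothing)
  [ whenBit mask 0 (return 1) (\s p -> p { sysData = Just s })  -- stand-in for Sys
  , whenBit mask 1 (return 2) (\s p -> p { gpsData = Just s })  -- stand-in for GPS
  ]
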
Categories: Offsite Discussion

Workshop on Type Inference and Automated Proving

General haskell list - Wed, 04/15/2015 - 10:28pm
*********************************************************************
WORKSHOP ON TYPE INFERENCE AND AUTOMATED PROVING
Tuesday the 12th of May, 12PM to 6PM
School Of Computing, University of Dundee
http://staff.computing.dundee.ac.uk/frantisekfarka/tiap/
*********************************************************************

Refreshments will be available from 12:00, with talks beginning at 12:45. For the detailed programme please see the above website.

Talks:

  • Tom Schrijvers (Katholieke Universiteit Leuven): GADTs Meet Their Match: Pattern-matching Warnings that Account for GADTs, Guards, and Laziness
  • Bob Atkey (University of Strathclyde): An Algebraic Approach to Typechecking and Elaboration
  • Edwin Brady (University of St Andrews): Implementing a Dependently Typed Programming Language
  • Peng Fu (University of Dundee): Nontermination Analysis for Evidence Construction in Type Class Inference
  • Adam Gundry (Well-Typed LLP…
Categories: Incoming News

bed & breakfast library update?

libraries list - Wed, 04/15/2015 - 7:52pm
I've asked a couple of times over the last eight months for an update of this library that would compile on 7.8 to be published, and gotten no response. If you're not able or interested in maintaining this library any longer, please let the community know so that someone else can take over those duties. If you can't or don't respond, I'll have to assume you're no longer able to, and see about getting another maintainer.

Thank you,
Mike
Categories: Offsite Discussion

Leuven Haskell User Group meeting on April 21

General haskell list - Wed, 04/15/2015 - 7:38pm
You are kindly invited to the special edition of the Leuven Haskell User Group, with a guest presentation by Amplidata. For details see: https://groups.google.com/d/msg/leuven-haskell/VNcTKtFPGL0/gQhPWtxTbg4J
Categories: Incoming News

Haskell on Windows instructions - is MinGHC version update needed?

General haskell list - Wed, 04/15/2015 - 5:50pm
Hi,

Currently, the Haskell website's Windows download page (https://www.haskell.org/downloads/windows) offers two alternatives, MinGHC and Haskell Platform. The MinGHC alternative has a link to the MinGHC-7.8.3 installer. Should this be updated to MinGHC-7.8.4? I ask this because Long Term Support Haskell 2.3 is now using GHC 7.8.4. If you want to update the link, you only need to change the URL from 7.8.3 to 7.8.4.

Howard
Categories: Incoming News

Joachim Breitner: Talk and Article on Monads for Reverse Engineering

Planet Haskell - Wed, 04/15/2015 - 5:32pm

In a recent project of mine, a tool to analyze and create files for the Ravensburger Tiptoi pen, I used two interesting Monads with good results:

  • A parser monad that, while parsing, remembers what parts of the file were used for what and provides, for example, an annotated hex dump.
  • A binary writer monad that allows you to reference and write out offsets to positions in the file that are only determined “later” in the monad, using MonadFix (a toy version of the trick is sketched below).
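
For readers who have not seen that MonadFix trick, here is a self-contained toy version (not the code from the talk): a lazy state monad tracks the current byte offset, and RecursiveDo lets us emit an offset before reaching the position it points to.

{-# LANGUAGE RecursiveDo #-}
import Control.Monad.State.Lazy

type Out = State (Int, [Int])   -- (offset so far, output bytes in reverse)

emit :: Int -> Out ()
emit b = modify (\(o, bs) -> (o + 1, b : bs))

offset :: Out Int
offset = gets fst

-- Write a pointer to the "table" before the table exists; mdo ties
-- the knot via MonadFix, which the lazy State monad supports.
file :: Out ()
file = mdo
  emit tableOffset   -- forward reference, resolved two lines down
  emit 0xFF          -- some other content
  tableOffset <- offset
  emit 42            -- the table itself

-- reverse (snd (execState file (0, []))) == [2, 255, 42]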

As that’s quite neat, I wrote a blog post about it for the German blog funktionale-programmierung.de, and also gave a talk at the Karlsruhe functional programmers group. If you know some German, enjoy; if not, wait until I have a reason to hold the talk in English. (As a matter of fact, I did hold the talk in English, but only spontaneously, so the text is in German only so far.)

Categories: Offsite Blogs

Syntax for has type and has kind

Haskell on Reddit - Wed, 04/15/2015 - 4:18pm

This is probably not the right place to ask, but I didn't think it belonged on SO. Currently, :: is used to express has type in type signatures, as well as has kind with extensions like KindSignatures, PolyKinds, ConstraintKinds, DataKinds, etc.

Would it be more clear/concise if we use :k for has kind, and use :t as an alternative of :: to denote has type? In GHCi, :t is already short for :type and :k for :kind, so I think this would make a nice correspondence between the language syntax and the interpreter.

Granted, I'm pretty new to all these advanced language extensions, but from time to time, when I see :: pop up in code where I don't expect it, I have to pause and think: is :: here for a type, or for a kind?
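
(For readers who have not hit this yet, a minimal example of the overloading in question – the same :: symbol annotating a kind on one line and a type on the next:)

{-# LANGUAGE DataKinds, KindSignatures #-}

-- Here :: states a kind: the parameter a must have kind Bool.
data Flag (a :: Bool) = Flag

-- Here :: states a type: f has type Flag 'True.
f :: Flag 'True
f = Flag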

Any thoughts?

submitted by lurking-about
[link] [6 comments]
Categories: Incoming News

Noam Lewis: Identity Crisis

Planet Haskell - Wed, 04/15/2015 - 4:10pm

Compared to other tools adding static types to JavaScript, Infernu’s main strengths are full type inference and strong type safety. Here are a few examples.

Identity Function

Here is the simplest possible function:

function id(x) { return x; }

TypeScript

TypeScript’s compiler tsc can generate a .d.ts from that:

declare function id(x: any): any;

That’s no good! We can pass any type we want to our function, but the return type is not tied to the argument type – it is treated as anything, basically untyped. So we can do this:

var n = 'hi';
n = id(5);

And TypeScript will output the following .d.ts:

declare var n: string;

That seems wrong: n is assigned a number via id(5). But wait – there is a way to turn off inference of any types (with --noImplicitAny). If we try that on our identity function, we get:

id.ts(1,13): error TS7006: Parameter 'x' implicitly has an 'any' type.

Explicit Generics

Oops. Ok, but TypeScript has generics! Let’s try that: the TypeScript handbook gives exactly the example we need – we just write the type out explicitly, like so:

function identity<T>(arg: T): T { return arg; }

Great! We got what we needed, but without type inference.

Flow

Facebook’s Flow has a type system that’s (slightly?) different from TypeScript’s, and apparently works differently. Let’s try it. We can use the flow suggest command to get suggested typing (I’m using version 0.7). Here’s what we get for a single file containing only the identity function above: nothing. It doesn’t suggest any type. Ok, let’s try using our id in a way that makes no sense, to induce an error (after all, type checkers are used to find errors). Here’s bad_id.js:

/* @flow */
function id(x) { return x; }
var n = 'hi';
n = id(5);
var z = n; // added so we can see what flow says the type of n is here.

(Note: The /* @flow */ header is used to tell flow that it should look at this file.)
Run flow suggest bad_id.js and you get a diff-style output. I’ve ‘applied’ it to make it easier to read – here’s what flow suggests:

function id(x: number): number { return x; }
var n: string | number = 'hi';
n = id(5);
var z: number = n;

Interesting! We managed to get something without reverting to explicit type annotations. But we didn’t get an error!

First, id was inferred to take and return number, apparently because that’s the only way we’ve used it. It would be interesting to see what happens when we use id several times with different types – we’ll try that soon.

Second, n was given a union type string | number, because it takes on both types during its lifetime. It may be a matter of taste, but I would rather not have the type checker deduce implicit union types in this case (n = 'hi'; n = 5;) – instead I would expect that to be an error.

The unique (and impressive) part is that flow is able to tell that z is only ever going to have number values, and so it’s safe to assign it that type. That’s probably a result of the flow analysis they do.

Now let’s try calling id several times, with different types:

/* @flow */
function id(x) { return x; }
id(5);
id('hi');

Flow suggests:

function id(x: string | number): string | number { return x; }

Uh oh – does this mean the argument and result types are no longer tied to each other? If I pass in a number, will the compiler check that I use the result only as a number (and not as a string)? Let's try using it: doing var n = id(5), flow suggests:

var n: string | number = id(5);

Despite n only ever being assigned a number, it now has type string | number. So apparently, union types propagate implicitly, infecting everything on the way.

Explicit Generics

Fortunately, flow too has generics, and again straight out of the manual we find:

/* @flow */
function foo<X>(x: X): X { return x; }

Great! We got what we needed, but without type inference.

Infernu

Let’s get down to business. Infernu says:

// id : a.(b -> b)
function id(x) { return x; }

Cool! Without any help from us, Infernu figured out the most generic type. Take a type b, return the same type b. The magic of polymo…Wait, what’s that a. thing?

Well, JavaScript has this nice keyword called this which is dynamically scoped, meaning that this is bound to different things depending on how your function is invoked and not on how it’s defined. For example:

var obj = {
  hat: { type: 'top' },
  getHatType: function() { return this.hat.type; }
};
obj.getHatType(); // ok.
var f = obj.getHatType;
f(); // oops! TypeError: Cannot read property 'type' of undefined

Nasty stuff. Every JavaScript programmer should know this.

Fortunately, Infernu is here to save you. It infers not only what arguments and return types a function has, but also what this must be for the function call to work, and verifies that you use it correctly.

Infernu type signatures for functions have the following format (subject to change without notice, bla bla bla):

this.(arguments -> result)

So for our var f = obj.getHatType example, Infernu says:

// f : {hat: {type: d, ..f}, ..e}.(() -> d)
var f = obj.getHatType;

Decomposing that type signature, we read that this is expected to be an object containing at least a property called ‘hat’ which is an object with at least a property called ‘type’ of some type d. The function takes no arguments (hence the empty ()) and returns the same type d that was taken from the hat.type property of this. (The ellipsis stuff ..f and ..e is due to row-type polymorphism, which will be elaborated upon in a future blog post.)

Back to our identity function, we examine the signature again: a.(b -> b) – the type of this is given an unconstrained type parameter a – so Infernu is telling us explicitly that this is allowed to be anything for our identity function. Yippy!

Summary

We saw that both TypeScript and Flow (and Google Closure too, which I haven't shown) support generics that can express the identity function properly. They also offer weak forms of type inference that sometimes yield weakly-typed results. Infernu, on the other hand, will infer generic types automatically, and prefers to fail over giving weak typings.

There are many known discussions about subtyping (inheritance)-based type systems, represented here by TypeScript and Flow, vs. parametric polymorphism (represented here by Infernu). There are known pros and cons to both sides, but one important result is that type inference is just easier when there is no subtyping involved.
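
For comparison, GHC gives the same one-liner the fully generic type with no annotation at all – the behaviour Infernu brings to JavaScript:

id' x = x

-- GHCi reports the inferred type:
-- λ> :t id'
-- id' :: t -> t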

Infernu is designed to take advantage of that.


Tagged: Infernu, Javascript
Categories: Offsite Blogs