News aggregator

FP Complete: The new haskell-ide repo

Planet Haskell - Sun, 10/25/2015 - 11:00pm

Recently Alan Zimmerman announced on the haskell-cafe mailing list that there is a new haskell-ide project, with a new GitHub repository, a mailing list, and an IRC channel. Some people have been concerned that this effort fragments existing efforts, including ide-backend (the open sourced library FP Complete announced earlier this year). I clarified this on Reddit, but wanted to take the opportunity to do so on this blog as well (and, additionally, throw some extra attention on the haskell-ide project).

Alan's announcement did not come in a vacuum; about two weeks ago, he reached out to others for feedback on a potential project. There were some side channel discussions that I was involved in, all of which were very much in favor of (and excited about!) this project. To quote myself from Reddit, we reached the following conclusion:

Both the ghc-mod and ide-backend maintainers have agreed to contribute code to this new repository and then rebase the old repos on this. The reason we're using a new repo instead of modifying one of the existing ones is so that the existing projects experience no disruption during this migration process. If this was a new set of people starting a new project without support from existing projects, I'd agree with you. But Alan's reached out to existing players already, which is an important distinction.

Michael Sloan - the current maintainer of ide-backend and one of the primary developers of both School of Haskell and FP Haskell Center - is already getting involved in this project. It's too early to decide exactly what the future of ide-backend will be relative to haskell-ide, but we're not ruling anything out. Anything from rebasing ide-backend to use haskell-ide internally, all the way to deprecating ide-backend in favor of haskell-ide, is on the table. We'll do whatever makes the most sense to help the Haskell community create great tooling.

Related to this project: a number of people have been following the development of stack-ide. We started that project not realizing how quickly existing tooling (like ghc-mod and hdevtools) would adopt support for Stack, and certainly not expecting this new haskell-ide project to offer a unifying force in the Haskell tooling space. To avoid fragmentation, we're currently holding off on further word on stack-ide, hoping instead that collaboration will help improve existing tooling not just for the Stack use case, but for cabal, cabal sandboxes, and other cases people have been targeting.

Since I'm already discussing IDE stuff, let me publicly give an answer I've given privately to a few people. A number of individuals have asked about the future of the FP Haskell Center codebase, and the possibility of open sourcing it. The summary answer is:

  • Everything related to School of Haskell is being open sourced. Most of that is done already, the rest is just in the last few stages of code cleanup.
  • The current state of the FP Haskell Center code base is difficult to simply give out, since it's based on some superseded technologies. For example, we created it in a world where Docker didn't exist, and have quite a few LXC scripts to make it work. We also have some difficult-to-manage build scripts that could be replaced by Stack. We've cleaned all of this up for School of Haskell, but have not done the same to the FP Haskell Center codebase.
  • As a general policy, we don't like to just drop unsupported code on the community at FP Complete. If there are maintainers that are interested in taking over the current FPHC codebase, we can discuss transferring it over. But simply open sourcing without any support tends not to be a helpful move (instead, it distracts from other, active projects, which we don't want to do).
  • One possibility going forward is that, once the School of Haskell web service is up, running, and stable, a new IDE project could be started that targets that same API. We're not planning on running such a project at FP Complete, but we'd be happy to provide some feedback, and include necessary changes to the SoH service to make it work.

I hope this excites others as much as it excites me: some concerted efforts on improving tooling can hopefully go a long way. A big thank you to Alan for coordinating this effort, and to Michael Sloan for leading the charge from the FP Complete side. I'm optimistic that we'll see some real strides forward in the near future.

Categories: Offsite Blogs

Does Yesod have a console for playing with persistent models?

Haskell on Reddit - Sun, 10/25/2015 - 6:58pm

It's something I'd really like to use. I've found several posts about the possibility, e.g. the Yesod Wish List. But nothing concrete yet.

(I'm gathering this info for a web framework comparison matrix for my next project: )

EDIT: Here's a screenshot of the Rails REPL showing what I'm talking about.

submitted by whither-the-dog
[link] [9 comments]
Categories: Incoming News

Christopher Done: Idle thoughts: More open, more free software

Planet Haskell - Sun, 10/25/2015 - 6:00pm

I’m a bit busy, these are just some idle thoughts.

I just upgraded my Android OS to some other kind of dessert name and a bunch of stuff changed in a way I had no desire for.

It made me think about the virtues of open source software. I can just go and change it! Free software means benefiting from the work of others without being shackled by them at the same time.

And then about the problem with open source software: only skilled developers with specific knowledge are able to approach the codebase of an app they use, update it, and then use that new software in a continuous and smooth way. Everyone else’s hands are effectively tied behind their backs.

So that got me thinking about how software could be more “open” than simply “open source”, if it was inherently more configurable. And also about better migration information from one piece of software to the next.

So I imagined a world in which when I get an update for a piece of software I could see a smart diff, as a regular human, of what the new UI and behaviour looks like, how it changed. This button moved there, changed color. Pressing this button used to exhibit X behaviour, now that behaviour is more complicated, or more limited, to trigger this action, and so on.

I believe that a properly declarative UI library with explicit state modeling, such as in Elm or whatnot, could actually handle a thing like that, but that it would have to be designed from the bottom up like that. And every component would need to have some “mock” meta-data about it, so that the migration tool could say “here’s what the old UI looks like with lorem ipsum data in it and here’s what that same data, migrated, looks like in the new UI” and you could interact with this fake UI on fake data, with no consequences. Or interact with the user’s data in a read-only “fake” way.
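As a toy illustration of that idea, in plain Haskell rather than Elm and with all names hypothetical: once the UI is ordinary data, a “smart diff” between two versions is just a structural comparison whose results can be reported in human terms.

```haskell
-- Toy sketch: a UI as plain data, so two versions can be diffed
-- structurally and described to a non-programmer. All names are
-- hypothetical; a real tool would also diff behaviour, not just layout.
data Widget
  = Button { wLabel :: String, wColor :: String }
  | Column [Widget]
  deriving (Eq, Show)

diffUI :: Widget -> Widget -> [String]
diffUI (Button l1 c1) (Button l2 c2) =
     [ "button label changed: " ++ l1 ++ " -> " ++ l2 | l1 /= l2 ]
  ++ [ "button color changed: " ++ c1 ++ " -> " ++ c2 | c1 /= c2 ]
diffUI (Column xs) (Column ys)
  | length xs == length ys = concat (zipWith diffUI xs ys)
diffUI old new =
  [ "widget replaced: " ++ show old ++ " -> " ++ show new ]
```

For example, diffing a grey "OK" button against a blue "OK" button reports only the colour change, which is exactly the kind of thing a regular human could read before accepting an update.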

You could say: actually, no, I want to configure that this button will stay where it is, that the theme will stay my current dark theme, etc.

You could visualize state changes in the UI such as with the time traveling thing in Elm or React and make new decision trees, or perhaps pick between built-in behaviours.

But one key idea could be this: when you update software, unless you’re removing the ability to do something completely (e.g. the server won’t even respond to that RPC call any more), you should indicate that in the intelligent “software diff”. Then the user can say: no, I still want to use that, and now they have a “patched” or “forked” version of the software locally, one that the maintainers of the software don’t have to worry about.

Normally configuring software is a thing developers manually hard code into the product. It seems obviously better to make software inherently configurable, from a free software perspective at least (not from a proprietary locked-in perspective).

Of course, you could still write code at any time; drop down to that level whenever you like. But if most of the code can be self-describing, at least in a high-level “do this thing or that thing” way, it would be far more accessible to general users than code itself, which at the moment is magic and, for the most part, beyond my interest to go and patch.

Categories: Offsite Blogs

Dimitri Sabadie: luminance, episode 0.6: UBO, SSBO, Stackage.

Planet Haskell - Sun, 10/25/2015 - 4:29pm

Up to now, luminance has been lacking two cool features: UBO and SSBO. Both are buffer-backed uniform techniques. That is, a way to pass uniforms to shader stages through buffers.

The latest version of luminance has one of the two features. UBO were added and SSBO will follow for the next version, I guess.

What is UBO?

UBO stands for Uniform Buffer Object. Basically, it enables you to create uniform blocks in GLSL and feed them with buffers. Instead of passing values directly to the uniform interface, you just write whatever values you want to buffers, and then pass the buffer as a source for the uniform block.

Such a technique has a lot of advantages. Among them, you can pass a lot of values at once. It’s also handy when you want to pass instances of a structure (in the GLSL source code). You can also use them to share uniforms between several shader programs, as well as to quickly swap out the whole set of uniforms in use.

In luminance, you need several things. First things first, you need… a buffer! More specifically, you need a buffer Region to store values in. However, you cannot use just any kind of region: you have to use a region that can hold values that will be fetched from shaders. This is done with a type called UB a. A buffer of UB a can be used as a UBO.

Let’s say you want to store colors in a buffer, so that you can use them in your fragment shader. We’ll want three colors to shade a triangle. We need to create the buffer and get the region:

colorBuffer :: Region RW (UB (V3 Float)) <- createBuffer (newRegion 3)

The explicit type is there so that GHC can infer the correct types for the Region. As you can see, nothing fancy, except that we just don’t want a Region RW (V3 Float) but a Region RW (UB (V3 Float)). Why RW?

Then, we’ll want to store colors in the buffer. Easy peasy:

colors :: [V3 Float]
colors = [V3 1 0 0,V3 0 1 0,V3 0 0 1] -- red, green, blue

writeWhole colorBuffer (map UB colors)

At this point, colorBuffer represents a GPU buffer that holds three colors: red, green and blue. The next part is to get the uniform interface. That part is experimental in terms of exposed interface, but the core idea will remain the same. You’re given a function to build UBO uniforms as you also have a function to build simple and plain uniforms in createProgram:

createProgram shaderList $ \uni uniBlock -> {- … -}

Don’t spend too much time reading the signature of that function. You just have to know that uni is a function that takes an Either String Natural – either a uniform’s name or its integral semantic – and gives you a mapped U in return, and that uniBlock does the same thing, but for uniform blocks instead.

Here’s our vertex shader:

in vec2 co;
out vec4 vertexColor;

// This is the uniform block, called "Colors" and storing three colors
// as an array of three vec3 (RGB).
uniform Colors {
  vec3 colors[3];
};

void main() {
  gl_Position = vec4(co, 0., 1.);
  vertexColor = vec4(colors[gl_VertexID], 1.);
}

So we want to get a U a mapped to that "Colors" uniform block. Easy!

(program,colorsU) <- createProgram shaderStages $ \_ uniBlock -> uniBlock "Colors"

And that’s all! The type of colorsU is U (Region rw (UB (V3 Float))). You can then gather colorBuffer and colorsU in a uniform interface to send colorBuffer to colorsU!

You can find the complete sample here.

Finally, you can augment the type you can use UB with by implementing the UniformBlock typeclass. You can derive the Generic typeclass and then use a default instance:

data MyType = {- … -} deriving (Generic)

instance UniformBlock MyType -- we’re good to go with buffers of MyType!

luminance, luminance-samples and Stackage

I added luminance and luminance-samples into Stackage. You can then find them in the nightly snapshots and the future LTS ones.

What’s next?

I plan to add stencil support for the framebuffer, because it’s missing and people might like it included. I will of course add support for SSBO as soon as I can. I also need to work on cheddar, but that project is complex and I’m still stuck on design decisions.

Thanks for reading and for your feedback. Have a great week!

Categories: Offsite Blogs

Glib- build failure: multiple definition of `__debugbreak' (ghc-7.11.20151024)

haskell-cafe - Sun, 10/25/2015 - 3:17pm
In my latest attempt to finally build the gtk3 package with ghc-head 'ghc-master' (7.11.20151024) for a current project under Windows x64, using the latest msys2 version and its supplied gtk3 libraries (mingw64/mingw-w64-x86_64-gtk3 3.18.2-1), I encountered this cryptic (linking) error. (See the complete log for the command './Setup build -v3' attached.) I should add that I'm rather a beginner with regards to the Haskell language and its package distribution system, cabal. Thus all thoughts, ideas and suggestions on how to fix this problem are welcome.

Best regards
Burkhard

Complete build output in the msys2 shell using the mingw64 script:

$ ./Setup build -v3
Component build order: library
creating dist\build
creating dist\build\autogen
Building glib-
Environment: [("","C:=C:\\Windows\\System32"),("ACLOCAL_PATH","C:\\MSYS2\\mingw64\\share\\aclocal;C:\\MSYS2\\usr\\share\\aclocal"),("ALLUSERSPROFILE","C:\\ProgramData"),("APPDATA","C:\\Users\\PC-08\\AppData\\Roaming"),("CHECKDEF","C:\\Applications\\w
Categories: Offsite Discussion

Avoiding Dependency loops

haskell-cafe - Sun, 10/25/2015 - 9:36am
Hello all,

I just split up a program I'm working on into several source files and ran into the following difficulty: module Process implements Process, which has a field Runner. Runner is basically a function which alters a 'System' state. So Process needs Runner, Runner needs System, and thus Process needs System. Module System implements, among other things, a collection of Processes (because Processes can alter its state). So System needs Process. Et voilà, I have a loop.

What I did was to leave the type of the Runner unspecified in Process, i.e. I now have (Process r), where r is the type of the Runner. Thus Process no longer needs to know about System. This does work, but it feels strange, and I'm a bit worried that this is an indication of a design flaw, but I cannot see it.
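The workaround described in the post can be sketched in one file (all names here are hypothetical): making Process polymorphic in its runner type r means the Process module mentions no System, and System ties the knot by instantiating r with its own Runner type.

```haskell
-- What module Process would export; it knows nothing about System.
data Process r = Process
  { procName   :: String
  , procRunner :: r
  }

-- What module System would export; it imports Process and closes the loop.
newtype Runner = Runner { runStep :: System -> System }

newtype System = System { sysProcesses :: [Process Runner] }
```

The classic alternative is a pair of mutually recursive modules joined by a .hs-boot file, but the type parameter keeps Process genuinely decoupled, and often easier to test, so it need not indicate a design flaw.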
Categories: Offsite Discussion

Projectional editing: Separating a program's AST from its presentation

haskell-cafe - Sun, 10/25/2015 - 5:08am
These ideas were touched on in a previous thread [1]. Our work would be easier if, as data, we separated a program's AST from its layout. If, for instance, the order in which a library's functions are presented on the page were stored as a "projection", separate from the AST that defined those functions, then one could reorder the functions without obscuring the history of changes to anything that was "moved".

I assume I am not alone in making a lot of edits aimed solely at allowing me to read or traverse the code faster? Those within-function changes of presentation, like across-function changes of order, obscure the record of functional (as opposed to cosmetic) changes. They don't have to.

Moreover, if projections and the AST were separate data, one could use contradictory hierarchies|projections|layouts for the same data. One projection might group functions by their common purpose (an "is-tree"), while another grouped them by priority to the reader-rewriter (a "do-tree"). Given current technology,
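A toy Haskell model of the separation being proposed (types are hypothetical): the AST stores definitions keyed by stable ids, and a projection is nothing but an ordering over those ids, so reordering functions edits the projection while the AST, and hence the functional change history, stays untouched.

```haskell
import qualified Data.Map as M

type DefId = Int

-- The AST proper: definition bodies keyed by stable identifiers.
newtype AST = AST (M.Map DefId String)

-- A projection carries presentation order only; several contradictory
-- projections (an "is-tree", a "do-tree", ...) can coexist over one AST.
newtype Projection = Projection [DefId]

-- Render the definitions in the order a given projection prescribes.
render :: Projection -> AST -> [String]
render (Projection order) (AST defs) =
  [ body | i <- order, Just body <- [M.lookup i defs] ]
```

Under this model, a cosmetic reorder is a change to a Projection value and a functional edit is a change to the AST map, so the two histories separate by construction.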
Categories: Offsite Discussion

Yesod Web Framework: Resurrecting servius

Planet Haskell - Sun, 10/25/2015 - 2:00am

A while ago, I wrote a small package called servius, a simple executable that serves static files with Warp and additionally renders Hamlet and Lucius templates. In some earlier package consolidation, the tool became part of shakespeare, and it was eventually commented out (due to concerns that the dependency list on Hackage looked too big).

Today, I resurrected this package and added support for rendering Markdown files as well. I often end up working on Markdown files (such as for this blog, the Haskell Documentation project, and the FP Complete blog), and being able to easily view the rendered files in a browser is useful.

As it stands, the three specially-handled file types are Hamlet (.hamlet), Lucius (.lucius), and Markdown (.markdown and .md). If others wish to add more templating or markup languages to this list, I'm more than happy to accept pull requests.

Final note: this package is currently uploaded using the pvp-bounds feature of Stack, so don't be surprised when the version bounds on Hackage are more restrictive than those in the repo itself.

Categories: Offsite Blogs

Dogelang - a Haskell dialect that compiles to CPython bytecode

Haskell on Reddit - Sat, 10/24/2015 - 6:06pm

I was looking around for a haskell dialect that plays with the existing python ecosystem and came across dogelang (link)

Seems like an interesting haskell dialect. The name alone is sufficient to keep success at bay though. What do you think?

submitted by klaxion
[link] [14 comments]
Categories: Incoming News

Haskell webserver framework

Haskell on Reddit - Sat, 10/24/2015 - 1:45pm

I am looking for a lightweight framework for building web interfaces in Haskell; to be more precise, I'm looking for 'the' standard framework for doing this. My background is in Erlang, where the de facto standard has become the cowboy framework.

So the question is what is the standard framework for building scalable http services?

submitted by fold_left
[link] [19 comments]
Categories: Incoming News

GHC on NUMA 72 core (2 processor) machine cannot use more than 50% of CPU. Why?

Haskell on Reddit - Sat, 10/24/2015 - 10:59am

I'm writing a ray tracer in Haskell as a demonstration, but I've hit a barrier. For the seminar I have a 72-core (2 processors with HT) NUMA x64 machine running Windows 7. I ran my ray tracer with different settings, but I cannot get past a clean 50% CPU barrier. I know that under Windows there are so-called processor groups for machines with more than 64 cores.

Do you know whether the GHC runtime can utilize them on Windows?

submitted by varosi
[link] [7 comments]
Categories: Incoming News

On type safety for core Scala: "From F to DOT: Type Soundness Proofs with Definitional Interpreters"

Lambda the Ultimate - Sat, 10/24/2015 - 7:45am

From F to DOT: Type Soundness Proofs with Definitional Interpreters by Tiark Rompf and Nada Amin:

Scala's type system unifies aspects of ML-style module systems, object-oriented, and functional programming paradigms. The DOT (Dependent Object Types) family of calculi has been proposed as a new theoretic foundation for Scala and similar expressive languages. Unfortunately, type soundness has only been established for a very restricted subset of DOT (muDOT), and it has been shown that adding important Scala features such as type refinement or extending subtyping to a lattice breaks at least one key metatheoretic property such as narrowing or subtyping transitivity, which are usually required for a type soundness proof.
The first main contribution of this paper is to demonstrate how, perhaps surprisingly, even though these properties are lost in their full generality, a richer DOT calculus that includes both type refinement and a subtyping lattice with intersection types can still be proved sound. The key insight is that narrowing and subtyping transitivity only need to hold for runtime objects, but not for code that is never executed. Alas, the dominant method of proving type soundness, Wright and Felleisen's syntactic approach, is based on term rewriting, which does not make an adequate distinction between runtime and type assignment time.
The second main contribution of this paper is to demonstrate how type soundness proofs for advanced, polymorphic, type systems can be carried out with an operational semantics based on high-level, definitional interpreters, implemented in Coq. We present the first mechanized soundness proof for System F<: based on a definitional interpreter. We discuss the challenges that arise in this setting, in particular due to abstract types, and we illustrate in detail how DOT-like calculi emerge from straightforward generalizations of the operational aspects of F<:.

Not only do they solve a problem that has been open for 12 years, but they also deploy interesting techniques to make the proof possible and simple. As they write themselves, that includes the first type-soundness proof for F<: using definitional interpreters (that is, at least according to some, denotational semantics).

Understated Twitter announcement here.

Categories: Offsite Discussion

A couple of quick questions about recursion-schemes

Haskell on Reddit - Sat, 10/24/2015 - 6:35am

I am looking at the recursion-schemes package (not for the first time), and would like to improve my understanding of some things:

  1. Mu and Nu are supposed to be the least and greatest fixed points, respectively, yet they both have both Foldable and Unfoldable instances. Is it the case that if you wanted to consistently distinguish between data and codata, that you would then only have Foldable (Mu f) and Unfoldable (Nu f), but not vice versa?

  2. How come there are three? Does Fix correspond to Mu or to Nu... or, somehow, to both, or neither? (Is there a fourth hanging out somewhere offstage?)

  3. I notice that if you write data ListF a x = Nil | Cons a x and plug that into Fix, you get a potentially-infinite list: "codata", while if you use data ListF' a x = Nil' | Cons' a !x, you get a strictly-finite list: "data". Does the strictness of the recursive position always determine whether the result is data or codata? What if I do something silly like data WeirdTreeF a x = Leaf | Branch x a !x, where it's heterogenous?

  4. What if I plug those into Mu and Nu? Is data-vs.-codataness determined by the fixed point operator, or by the base functor?

  5. Is there a word (instead of "data-vs.-codataness") to encompass data and codata, in the way that "color" is a word to encompass red and green, or "polarity" is for positive and negative?
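For readers following along, here is a hand-rolled sketch of the three fixed-point types the questions refer to, written out directly rather than imported from recursion-schemes (so details may differ from the package's definitions):

```haskell
{-# LANGUAGE DeriveFunctor #-}
{-# LANGUAGE ExistentialQuantification #-}
{-# LANGUAGE RankNTypes #-}

-- Fix: the "plain" fixed point, agnostic between data and codata.
newtype Fix f = Fix { unFix :: f (Fix f) }

-- Mu: least fixed point, defined by how it is consumed (a fold).
newtype Mu f = Mu (forall a. (f a -> a) -> a)

-- Nu: greatest fixed point, defined by how it is produced (an unfold).
data Nu f = forall a. Nu (a -> f a) a

-- The list base functor from question 3.
data ListF a x = NilF | ConsF a x deriving Functor

-- A catamorphism over Fix, for a taste of how the pieces fit together.
cata :: Functor f => (f a -> a) -> Fix f -> a
cata alg = alg . fmap (cata alg) . unFix

sumList :: Fix (ListF Int) -> Int
sumList = cata alg
  where
    alg NilF        = 0
    alg (ConsF a s) = a + s
```

Note how Mu is characterized entirely by folding and Nu entirely by unfolding, which is the asymmetry question 1 is getting at.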

submitted by glaebhoerl
[link] [13 comments]
Categories: Incoming News

type error formatting

glasgow-user - Sat, 10/24/2015 - 3:48am
Here's a typical simple type error from GHC:

Derive/Call/India/Pakhawaj.hs:142:62:
    Couldn't match type ‘Text’ with ‘(a1, Syllable)’
    Expected type: [([(a1, Syllable)], [Sequence Bol])]
      Actual type: [([Syllable], [Sequence Bol])]
    Relevant bindings include
      syllables :: [(a1, Syllable)]
        (bound at Derive/Call/India/Pakhawaj.hs:141:16)
      best_match :: [(a1, Syllable)]
                    -> Maybe (Int, ([(a1, Syllable)], [(a1, Sequence Bol)]))
        (bound at Derive/Call/India/Pakhawaj.hs:141:5)
    In the second argument of ‘mapMaybe’, namely ‘all_bols’
    In the second argument of ‘($)’, namely
      ‘mapMaybe (match_bols syllables) all_bols’

I've been having more trouble than usual reading GHC's errors, and I finally spent some time to think about it. The problem is that this new "relevant bindings include" section gets in between the expected and actual types (I still don't like that wording but I've gotten used to it), which is the most critica
Categories: Offsite Discussion