I've tried 'time', but the interface confuses me. Is it just me, or do others have the same experience?
submitted by eckyputr
After having released version 0.9 of my reactive-banana library, I now want to discuss the significant API changes that I have planned for the next release of the library, version number 1.0. These changes will not be backward compatible.
Since its early iterations (version 0.2), the goal of reactive-banana has been to provide an efficient push-based implementation of functional reactive programming (FRP) that uses (a variation of) the continuous-time semantics as pioneered by Conal Elliott and Paul Hudak. Don’t worry, this will stay that way. The planned API changes may be radical, but they are not meant to change the direction of the library.
I intend to make two major changes:
The API for dynamic event switching will be changed to use a monadic approach, and will become more similar to that of the sodium FRP library. Feedback that I have received indicates that the current approach using phantom types is just too unwieldy.
The type Event a will be changed to only allow a single event occurrence per moment, rather than multiple simultaneous occurrences. In other words, the types in the module Reactive.Banana.Experimental.Calm will become the new default.
These changes are not entirely cast in stone yet; they are still open for discussion. If you have an opinion on these matters, please do not hesitate to write a comment here, send me an email, or join the discussion about the monadic API on github!
The new API is not without precedent: I have already implemented a similar design in my threepenny-gui library. It works pretty well there and nobody complained, so I have good reason to believe that everything will be fine.
Still, for completeness, I want to summarize the rationale for these changes in the following sections.

Dynamic Event Switching
One major impediment for early implementations of FRP was the problem of so-called time leaks. The key insight to solving this problem was to realize that the problem was inherent to the FRP API itself and can only be solved by restricting certain types. The first solution with first-class events (i.e. not arrowized FRP) that I know of comes from an article by Gergely Patai [pdf].
In particular, the essential insight is that any FRP API which includes the functions

accumB  :: a -> Event (a -> a) -> Behavior a
switchB :: Behavior a -> Event (Behavior a) -> Behavior a
with exactly these types is always leaky. The first combinator accumulates a value similar to scanl, whereas the second combinator switches between different behaviors – that’s why it’s called “dynamic event switching”. A more detailed explanation of the switchB combinator can be found in a previous blog post.
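To make the analogy with scanl concrete, here is a toy list-based model of these combinators. This is a sketch for intuition only: the type Time, the helper accumE (the event-valued sibling of accumB), and the strictly-before sampling rule are modeling assumptions, not the library's actual push-based implementation.

```haskell
-- Toy list-based model of the semantics (for intuition only; the real
-- implementation is push-based and does not materialize these lists).
type Time = Int
type Event a    = [(Time, a)]   -- occurrences, ordered by time
type Behavior a = Time -> a     -- a value that varies over time

-- accumE accumulates like scanl: each occurrence applies its function
-- to the accumulator and emits the new value.
accumE :: a -> Event (a -> a) -> Event a
accumE _ []           = []
accumE a ((t, f) : e) = (t, f a) : accumE (f a) e

-- accumB holds the most recently accumulated value; only occurrences
-- strictly before the sample time count.
accumB :: a -> Event (a -> a) -> Behavior a
accumB a e t = last (a : [ x | (t', x) <- accumE a e, t' < t ])

main :: IO ()
main = do
  print (accumE 0 [(1, (+1)), (3, (*2))])    -- [(1,1),(3,2)]
  print (accumB 0 [(1, (+1)), (3, (*2))] 2)  -- 1
```

The model makes the start-time dependence visible: the result of accumB depends on which occurrences have already happened when the accumulation begins.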
One solution to the problem is to put the result of accumB into a monad which indicates that the result of the accumulation depends on the "starting time" of the event. The combinators now have the types

accumB  :: a -> Event (a -> a) -> Moment (Behavior a)
switchB :: Behavior a -> Event (Behavior a) -> Behavior a
This was the aforementioned proposal by Gergely and has been implemented for some time in the sodium FRP library.
A second solution, which was inspired by an article by Wolfgang Jeltsch [pdf], is to introduce a phantom type to keep track of the starting time. This idea can be expanded to be equally expressive as the monadic approach. The combinators become

accumB  :: a -> Event t (a -> a) -> Behavior t a
switchB :: Behavior t a -> Event t (forall s. Moment s (Behavior s a)) -> Behavior t a
Note that the accumB combinator keeps its simple, non-monadic form, but the type of switchB now uses an impredicative type. Moreover, there is a new type Moment t a, which tags a value of type a with a time t. This is the approach that I had chosen to implement in reactive-banana.
There is also a more recent proposal by Atze van der Ploeg and Koen Claessen [pdf], which dissects the accumB function into other, more primitive combinators and attributes the time leak to one of the parts. But it essentially ends up with a monadic API as well, i.e. the first of the two mentioned alternatives for restricting the API.
When implementing reactive-banana, I intentionally decided to try out the second alternative, simply in order to explore a region of the design space that sodium did not. With the feedback that people have sent me over the years, I feel that now is a good time to assess whether this region is worth staying in or whether it’s better to leave.
The main disadvantage of the phantom type approach is that it relies not just on rank-n types, but also on impredicative polymorphism, for which GHC has only poor support. To make it work, we need to wrap the quantified type in a new data type, like this:

newtype AnyMoment f a = AnyMoment (forall t. Moment t (f t a))
Note that we also have to parametrize over a type constructor f, so that we are able to write the type of switchB as

switchB :: forall t a. Behavior t a -> Event t (AnyMoment Behavior a) -> Behavior t a
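To see why this wrapping is awkward in practice, here is a minimal sketch with hypothetical stand-in types: the real Moment and Behavior carry far more structure, and the names always and at are invented for illustration. Only the quantifier plumbing is modeled.

```haskell
{-# LANGUAGE RankNTypes #-}

-- Hypothetical stand-ins for the library types; the phantom parameter t
-- only tags values with a starting time and has no runtime content.
newtype Moment t a    = Moment a
newtype Behavior t a  = Behavior a
newtype AnyMoment f a = AnyMoment (forall t. Moment t (f t a))

-- Wrapping: the value handed to AnyMoment must be polymorphic in t.
always :: a -> AnyMoment Behavior a
always x = AnyMoment (Moment (Behavior x))

-- Unwrapping: we must pick a concrete time to instantiate t (here ()).
at :: AnyMoment Behavior a -> Behavior () a
at (AnyMoment m) = case m of Moment b -> b

main :: IO ()
main = case at (always (42 :: Int)) of Behavior x -> print x
```

Even in this stripped-down sketch, every use site has to thread an explicit wrap or unwrap through the rank-2 field, which is exactly the ceremony the monadic API avoids.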
Unfortunately, wrapping and unwrapping the AnyMoment constructor and getting the "forall"s right can be fairly tricky, rather tedious, outright confusing, or all three at once. As Oliver Charles puts it in an email to me:
Right now you're required to provide an AnyMoment, which in turn means you have to trim, and then you need a FrameworksMoment, and then an execute, and then you've forgotten what you were doing! :-)
Another disadvantage is that the phantom type t "taints" every abstraction that a library user may want to build on top of Event and Behavior. For instance, imagine a GUI widget where some aspects are modeled by a Behavior. Then the type of the widget has to include a phantom parameter t indicating the time at which the widget was created. Ugh.
On the other hand, the main advantage of the phantom type approach is that the accumB combinator can keep its simple non-monadic type. Library users who don’t care much about higher-order combinators like switchB are not required to learn about the Moment monad. This may be especially useful for beginners.
However, in my experience, even though the first-order API can carry you quite far when using FRP, at some point you will invariably end up in a situation where the expressivity of dynamic event switching is absolutely necessary. For instance, this happens when you want to manage a dynamic collection of widgets, as demonstrated by the BarTab.hs example for the reactive-banana-wx library. The initial advantage for beginners evaporates quickly when they are faced with managing impredicative polymorphism.
In the end, to fully explore the potential of FRP, I think it is important to make dynamic event switching as painless as possible. That's why I think that switching to the monadic approach is a good idea.

Simultaneous event occurrences
The second change is probably less controversial, but it also breaks backward compatibility.
The API includes a combinator for merging two event streams:

union :: Event a -> Event a -> Event a
If we think of Event as a list of values with timestamps, Event a = [(Time,a)], this combinator works like this:

union ((timex,x):xs) ((timey,y):ys)
    | timex <  timey = (timex,x) : union xs ((timey,y):ys)
    | timex >  timey = (timey,y) : union ((timex,x):xs) ys
    | timex == timey = ??
But what happens if the two streams have event occurrences that happen at the same time?
Before answering this question, one might try to argue that simultaneous event occurrences are very unlikely. This is true for external events like mouse movement or key presses, but not true at all for “internal” events, i.e. events derived from other events. For instance, the event e and the event fmap (+1) e certainly have simultaneous occurrences.
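In the toy list model this is easy to see: mapping over an event stream transforms the values but keeps every timestamp, so e and fmap (+1) e occur at exactly the same times. Here mapE is a stand-in for the Functor instance, and the concrete stream is invented for the example.

```haskell
type Time = Int
type Event a = [(Time, a)]

-- Stand-in for fmap on events: map the values, keep the timestamps.
mapE :: (a -> b) -> Event a -> Event b
mapE f e = [ (t, f x) | (t, x) <- e ]

main :: IO ()
main = do
  let e = [(1, 10), (2, 20)] :: Event Int
  print (map fst e)              -- [1,2]
  print (map fst (mapE (+1) e))  -- [1,2]: identical timestamps, so every
                                 -- occurrence is simultaneous with one of e
```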
In fact, reasoning about the order in which simultaneous occurrences of “internal” events should be processed is one of the key difficulties of programming graphical user interfaces. In response to a timer event, should one first draw the interface and then update the internal state, or should one do it the other way round? The order in which state is updated can be very important, and the goal of FRP should be to highlight this difficulty whenever necessary.
In the old semantics (reactive-banana versions 0.2 to 0.9), using union to merge two event streams with simultaneous occurrences would result in an event stream where some occurrences may happen at the same time. They are still ordered, but carry the same timestamp. In other words, for a stream of events

e :: Event a
e = [(t1,a1), (t2,a2), …]
it was possible that some timestamps coincide, for example t1 == t2. The occurrences are still ordered from left to right, though.
In the new semantics, all event occurrences are required to have different timestamps. In order to ensure this, the union combinator will be removed entirely and replaced by a combinator

unionWith :: (a -> a -> a) -> Event a -> Event a -> Event a
unionWith f ((timex,x):xs) ((timey,y):ys)
    | timex <  timey = (timex,x)     : unionWith f xs ((timey,y):ys)
    | timex >  timey = (timey,y)     : unionWith f ((timex,x):xs) ys
    | timex == timey = (timex,f x y) : unionWith f xs ys
where the first argument gives an explicit prescription for how simultaneous events are to be merged.
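This merge can be checked directly in the toy list model. The sketch below spells out the base cases for empty streams that the prose definition elides; the sample streams are invented for the example.

```haskell
type Time = Int
type Event a = [(Time, a)]

-- unionWith in the list model, base cases included.
unionWith :: (a -> a -> a) -> Event a -> Event a -> Event a
unionWith _ xs [] = xs
unionWith _ [] ys = ys
unionWith f ((tx, x) : xs) ((ty, y) : ys)
  | tx < ty   = (tx, x)     : unionWith f xs ((ty, y) : ys)
  | tx > ty   = (ty, y)     : unionWith f ((tx, x) : xs) ys
  | otherwise = (tx, f x y) : unionWith f xs ys

main :: IO ()
main = print (unionWith (+) [(1, 1), (3, 3)] ([(1, 10), (2, 2)] :: Event Int))
-- the simultaneous occurrences at time 1 are merged: [(1,11),(2,2),(3,3)]
```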
The main advantage of the new semantics is that it simplifies the API. For instance, with the old semantics, we also needed two combinators

collect :: Event a -> Event [a]
spill   :: Event [a] -> Event a
to collect simultaneous occurrences within an event stream. This is no longer necessary with the new semantics.
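In the list model with simultaneous occurrences, these two combinators can be sketched as follows. This is a toy model: it assumes streams are ordered by time, so occurrences sharing a timestamp are adjacent.

```haskell
import Data.Function (on)
import Data.List (groupBy)

type Time = Int
type Event a = [(Time, a)]

-- collect groups occurrences that share a timestamp (old semantics);
-- adjacent grouping suffices because streams are ordered by time.
collect :: Event a -> Event [a]
collect e = [ (fst (head g), map snd g) | g <- groupBy ((==) `on` fst) e ]

-- spill flattens such a grouped stream again.
spill :: Event [a] -> Event a
spill e = [ (t, x) | (t, xs) <- e, x <- xs ]

main :: IO ()
main = do
  let e = [(1, 'a'), (1, 'b'), (2, 'c')]
  print (collect e)          -- [(1,"ab"),(2,"c")]
  print (spill (collect e))  -- back to the original stream
```

Under the new semantics every timestamp is distinct, so collect would always produce singleton lists and both combinators become redundant.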
Another example is the following: imagine that we have an input event e :: Event Int whose values are numbers, and we want to create an event that sums all the numbers. In the old semantics with multiple simultaneous events, the event and behavior defined as

bsum :: Behavior Int
esum :: Event Int
esum = accumE 0 ((+) <$> e)
bsum = stepper 0 esum
are different from those defined by

bsum = accumB 0 ((+) <$> e)
esum = (+) <$> bsum <@> e
The reason is that accumE will take into account simultaneous occurrences, but the behavior bsum will not change until after the current moment in time. With the new semantics, both snippets are equal, and accumE can be expressed in terms of accumB.
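The discrepancy can be replayed in the toy list model. Here stepper samples the last value strictly before the given time, applyB is a stand-in for the <@> operator, and the input stream with two simultaneous occurrences at time 1 is invented for the example; the two snippets disagree exactly at that moment.

```haskell
type Time = Int
type Event a    = [(Time, a)]
type Behavior a = Time -> a

-- Accumulate like scanl, emitting each intermediate value.
accumE :: a -> Event (a -> a) -> Event a
accumE _ []           = []
accumE a ((t, f) : e) = (t, f a) : accumE (f a) e

-- The behavior holds the last value from occurrences strictly before t.
stepper :: a -> Event a -> Behavior a
stepper a e t = last (a : [ x | (t', x) <- e, t' < t ])

-- Sample a behavior at each event occurrence (stand-in for <@>).
applyB :: Behavior (a -> b) -> Event a -> Event b
applyB b e = [ (t, b t x) | (t, x) <- e ]

main :: IO ()
main = do
  let e = [(1, 1), (1, 2), (2, 5)] :: Event Int   -- simultaneous at time 1
      -- first snippet:  esum = accumE 0 ((+) <$> e)
      esum1 = accumE 0 [ (t, (+ x)) | (t, x) <- e ]
      -- second snippet: bsum = accumB 0 ((+) <$> e), i.e. stepper 0 esum1
      bsum  = stepper 0 esum1
      esum2 = applyB (\t x -> bsum t + x) e
  print esum1  -- [(1,1),(1,3),(2,8)]
  print esum2  -- [(1,1),(1,2),(2,8)]: the behavior is still stale at time 1
```

With distinct timestamps, the simultaneous step disappears and both computations yield the same stream.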
The main disadvantage of the new semantics is that the programmer has to think more explicitly about the issue of simultaneity when merging event streams. But I have argued above that this is actually a good thing.
In the end, I think that removing simultaneous occurrences from a single event stream and emphasizing the unionWith combinator is a good idea. If required, the programmer can always use an explicit list type Event [a] to handle these situations.
(It just occurred to me that maybe a type class instance

instance Monoid a => Monoid (Event a)
could give us the best of both worlds.)
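In the toy list model this instance is straightforward to sketch. Note two assumptions: on modern GHC the proposed Monoid instance splits into a Semigroup instance plus mempty, and Event is wrapped in a newtype here purely so the model can carry class instances; this is a sketch, not the library's API.

```haskell
-- Toy list model, wrapped so we can attach class instances.
newtype Event a = Event [(Int, a)] deriving (Eq, Show)

-- unionWith on the underlying lists, merging simultaneous occurrences.
unionW :: (a -> a -> a) -> [(Int, a)] -> [(Int, a)] -> [(Int, a)]
unionW _ xs [] = xs
unionW _ [] ys = ys
unionW f ((tx, x) : xs) ((ty, y) : ys)
  | tx < ty   = (tx, x)     : unionW f xs ((ty, y) : ys)
  | tx > ty   = (ty, y)     : unionW f ((tx, x) : xs) ys
  | otherwise = (tx, f x y) : unionW f xs ys

-- Merging simultaneous occurrences with the value monoid gives back
-- the flexibility of the old semantics without a separate union.
instance Semigroup a => Semigroup (Event a) where
  Event xs <> Event ys = Event (unionW (<>) xs ys)

instance Monoid a => Monoid (Event a) where
  mempty = Event []

main :: IO ()
main = print (Event [(1, [1]), (2, [2])] <> Event [(1, [10 :: Int])])
```

Choosing a = [b] recovers the collect-style behavior, while any other monoid gives an implicit unionWith.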
This summarizes my rationale for these major and backward incompatible API changes. As always, I appreciate your comments!
With the no-reinstall cabal project coming soon, it seems that cabal is back on track to face the stack attack.
Which one do you use, and why?
submitted by maxigit
This afternoon I'll be getting on a plane to Vancouver for ICFP. I'm looking forward to seeing many friends, of course, but I also enjoy meeting new people—whether or not they are "famous", whether or not I think they can "advance my career". So I'll just throw this out there: if you will be in Vancouver this week and would like to meet me, just leave a comment and I will make a point of trying to find you to chat! I'll be attending the Haskell Implementors' Workshop, the Ally Skills Tutorial, ICFP itself, the Haskell Symposium, and FARM, but there's also plenty of time to chat in the hallway or over a meal.
Over this summer, Vishal Agrawal has been working on a GSoC project to move Cabal to a more Nix-like package management system. More simply, he is working to make it so that you'll never get one of these errors from cabal-install again:

Resolving dependencies...
In order, the following would be installed:
directory-18.104.22.168 (reinstall) changes: time-1.4.2 -> 1.5
process-22.214.171.124 (reinstall)
extra-1.0 (new package)
cabal: The following packages are likely to be broken by the reinstalls:
process-126.96.36.199
hoogle-4.2.35
haskell98-188.8.131.52
ghc-7.8.3
Cabal-184.108.40.206
...
However, these patches change a nontrivial number of moving parts in Cabal and cabal-install, so it would be very helpful to have willing guinea pigs to help us iron out some bugs before we merge it into Cabal HEAD. As your prize, you'll get to run "no-reinstall Cabal": Cabal should never tell you it can't install a package because some reinstalls would be necessary.
Here's how you can help:
- Make sure you're running GHC 7.10. Earlier versions of GHC have a hard limitation that doesn't allow you to reinstall a package multiple times against different dependencies. (Actually, it would also be useful if you test with GHC 7.8, but mostly to make sure we haven't introduced any regressions here.)
- git clone https://github.com/ezyang/cabal.git (I've added some extra corrective patches on top of Vishal's version in the course of my testing) and git checkout cabal-no-pks.
- In the Cabal and cabal-install directories, run cabal install.
- Try building things without a sandbox and see what happens! (When I test, I've tried installing multiple versions of Yesod at the same time.)
It is NOT necessary to clear your package database before testing. If you completely break your Haskell installation (unlikely, but could happen), you can do the old trick of clearing out your .ghc and .cabal directories (don't forget to save your .cabal/config file) and rebootstrapping with an old cabal-install.
Please report problems here, or to this PR in the Cabal tracker. Or chat with me in person next week at ICFP. :)
Anyone have thoughts on this talk by C++/D guru Andrei Alexandrescu? The talk focuses on a pretty low-level C++ memory allocator use case, but since generic programming is an important paradigm, I'm curious what people here think.
EDIT: link https://www.youtube.com/watch?v=mCrVYYlFTrA
submitted by klaxion
I want to write concurrent programs in an event-based way. I've seen how to design this somewhat using mutex patterns with MVar, but it seems like I have to design a lot of the patterns in main using do notation, forking IO actions that "listen" to a particular variable. I would like an Observer pattern that is a little cleaner: one that lets me write success, error, and completion callbacks, and that has an API that allows me to "trigger" an event rather than using putMVar. Does anyone know of a library that does that?
submitted by umib0zu
Pretty much what the title says: what GUI library do you recommend? I'm on GNU/Linux, if that matters.
submitted by Fruxel
I was playing around with J. Garrett Morris's take on extensible variants (available here), but I can't get it to work (using the type family encoding).
Using his type family encoding I can get the following simple DSL to compile:

evalConst (Const x) r = x
evalSum (Plus x y) r = r x + r y

mkConst e  = In (Inl (Const e))
mkPlus e f = In (Inr (Plus e f))

eval' = cases (evalConst ? evalSum)

main = do
  let x = eval' (mkPlus (mkConst 1) (mkConst 2))
  print x
  return ()
However, if I try to extend the DSL, it fails to compile with an ambiguous type error (the order of operands to the (?) function doesn't seem to affect anything other than how mkProduct would be implemented):

evalProduct (Times x y) r = r x * r y
mkProduct e f = In (Inl (Times e f))

eval'' = cases (evalProduct ? (evalConst ? evalSum))

main = do
  let x = eval'' (mkProduct (mkConst 3) (mkPlus (mkConst 1) (mkConst 2)))
  print x
  return ()
Am I missing something, or does this not actually work without instance chaining? The code is available on github.
A second question for anyone familiar with this line of academic research: is this line of research actually worth pursuing? I can't see how this would lead to something that is actually practical, like monad transformers. I love this kind of work because it is fun and has some beauty to it, but it feels like data types à la carte are a dead end.
submitted by meta_circular
I'm pretty new to Haskell and wanted to deploy a Snap webapp to Heroku.
I found two buildpacks for Haskell on Heroku, one using Cabal and the other using Halcyon. However, I'm using stack locally and I feel a bit uneasy about using one build tool locally and another one in production.
I had a few questions about this setup:
- Is there a Heroku buildpack that uses stack?
- Does it matter if I use stack locally but cabal/halcyon during deploy?
- Is it worth building a Heroku buildpack that uses stack?
Thanks for your time!
submitted by bash125
Every time you upload or upvote something, it gets saved to your LocalStorage for the site. Once something gets downvoted to zero, it disappears. When you go to the site, whatever is in your LocalStorage gets uploaded and upvoted again. So stories can come and go as users connect and disconnect, and only the most popular stories will always be visible on the site (since at least one connected user needs to have uploaded or upvoted a story for it to be visible).
Of course, there is still a server that could decide to censor stories or modify text, but at least you can always check that what you have on YOUR machine is the data you wanted. You can always copy that data elsewhere easily for safekeeping (browser developer tools let you inspect your LocalStorage content).
I write and maintain a lot of documentation, both open source and commercial. Quite a bit of the documentation I maintain is intended to be collaborative documentation. Over the past few years, through my own observations and insights from others (yes, this blog post is basically a rip-off of a smaller comment by Greg), I've come up with a theory on collaborative documentation, and I'm interested in feedback.
tl;dr: people don't seem to trust Wiki content, nor explore it, and they're also more nervous about editing it. Files imply: this is officially part of the project, and people feel comfortable sending a PR.
When talking about documentation, there are three groups to consider: the maintainers, the contributors, and the readers. The most obvious medium for collaborative documentation is a Wiki. Let's see how each group sees a Wiki:
Maintainers believe they're saying "feel free to make any changes you want, the Wiki is owned by the community." By doing that, they are generally hoping to greatly increase collaboration.
Contributors, however, seem to be intimidated by a Wiki. Most contributors do not feel completely confident in their ability to add correct content, adhere to standards, fit into the right outline, etc. So paradoxically, by making the medium as open as possible, the Wiki discourages contribution.*
Readers of documentation greatly appreciate well structured content, and want to be able to trust the accuracy of the content they're reading. Wikis do not inspire this confidence. Despite my previous comments about contributors, readers are (logically) concerned that Wiki content may have been written by someone uninformed, or may have fallen out of date.
By contrast, let's take a different model for documentation: Markdown files in a Github repository:
Maintainers have it easy: they maintain documentation together with their code. The documentation can be forked and merged just like the code itself.
Contributors, at least in my experience, seem to love this. I've gotten dozens (maybe even hundreds) of people sending minor to major pull requests to documentation I maintain on open source projects this way. Examples range from the simplest (inline API documentation) to the theoretically most complex (the content for the Yesod book). Since our target audience is developers, and developers already know how to send pull requests, this just feels natural.
Readers trust content of the repository itself much more. It's more official, because it means someone with commit access to the project agreed that this should belong here.
This discussion came up for me again when I started thinking about writing a guide for the Haskell tool stack. I got halfway through writing this blog post two weeks ago, and decided to finish it when discussing with other stack maintainers why I decided to make this a file instead of another Wiki page. Their responses were good confirmation to this theory:
Ok, that makes a lot of sense to me. We might want to consider moving reference material to files (for example, the stack.yaml documentation). Another nice thing about that is that it means the docs follow the versions (so no more confusion about whether the stack.yaml page is for current master vs. latest release).
I can confirm this sentiment is buried in me somewhere, I've definitely felt this way (as a user/developer contributing little bits). On a more technical note, the workflow with editing the wiki doesn't offer up space for review - it is a done deal, there is no PR.
While Wikis still have their place (at the very least when collaborating with non-technical people), I'm quite happy with the file-as-a-collaborative-document workflow (that- I again admit- Greg introduced me to). My intended behavior moving forward is:
- Keep documentation in the same repo as the project
- Be liberal about who has commit access to repos
* I've seen a similar behavior with code itself: while many people (myself included in the past) are scared to give too many people commit access to a repository, my experience (following some advice from Edward Kmett) with giving access more often rather than less has never led to bad maintainer decisions. Very few people are actually malicious, and most will be cautious about breaking a project they love. (Thought experiment: how would you act if you were suddenly given commit access to a major open source project (GHC/Linux/etc)? I'm guessing you wouldn't go through making serious modifications without asking for your work to be reviewed.)