News aggregator

ghc-mod not finding stack project dependencies

Haskell on Reddit - Mon, 01/11/2016 - 8:36pm

I've set up emacs for use with ghc-mod by following this tutorial. It works fine, but it doesn't detect imported dependencies like parsec. Is there a way to link ghc-mod to the current stack project?

submitted by JD95
[link] [6 comments]
Categories: Incoming News

Proposal: Add Peano numbers to base

libraries list - Mon, 01/11/2016 - 7:29pm
I have seen these redefined many times now. Examples:

https://hackage.haskell.org/package/numericpeano-0.2.0.0/docs/Numeric-Peano.html#t:Nat
https://hackage.haskell.org/package/numeric-prelude-0.4.2/docs/Number-Peano.html#t:T
https://hackage.haskell.org/package/type-fun-0.0.1/docs/TypeFun-Data-Peano.html#t:N
https://hackage.haskell.org/package/number-0.1.1.0/docs/Data-Number-Peano.html#t:Nat
https://hackage.haskell.org/package/Peano-0.0.4/docs/Data-Peano.html#t:Peano

I often see them used as DataKinds. Also, operations on them can be lazy, which is sometimes useful. I filed a ticket: https://ghc.haskell.org/trac/ghc/ticket/11402

Discussion period: 2 weeks
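For readers who haven't met the construction, here is a minimal sketch of Peano naturals illustrating the laziness point above (our own illustration; the final API proposed in the ticket may differ):

```haskell
data Nat = Z | S Nat deriving (Eq, Show)

-- Addition recurses only on the first argument.
add :: Nat -> Nat -> Nat
add Z     n = n
add (S m) n = S (add m n)

-- An infinite natural number: perfectly fine under laziness.
infinity :: Nat
infinity = S infinity

-- Comparison forces only as many constructors as it needs, so it can
-- terminate even when one argument is infinite.
leq :: Nat -> Nat -> Bool
leq Z     _     = True
leq (S _) Z     = False
leq (S m) (S n) = leq m n
```

Here `leq (S (S Z)) infinity` returns True after inspecting just two constructors, something a strict numeric type cannot do.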
Categories: Offsite Discussion

[RFC] github-0.14.0 release candidate

haskell-cafe - Mon, 01/11/2016 - 6:54pm
Hi, there are a lot of breaking changes in the upcoming github package, which provides access to the GitHub API, v3 [1]. I'd like to hear feedback before pushing the actual release. Since there are already many breaking changes, it would be nice to avoid introducing new ones again soon after. So if you spot something which can still be fixed, don't hesitate to contact me or create an issue on GitHub [3]. You can find the release candidate on Hackage [2].

Cheers, Oleg Grenrus

[1]: https://developer.github.com/v3/
[2]: http://hackage.haskell.org/package/github-0.14.0/candidate
[3]: https://github.com/phadej/github/issues

_______________________________________________
Haskell-Cafe mailing list
Haskell-Cafe< at >haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe
Categories: Offsite Discussion

Is the GHC.TypeLits API unfinished?

Haskell on Reddit - Mon, 01/11/2016 - 4:20pm

I decided to check out the new GHC.TypeLits module today and got stuck trying to implement the simplest of "dependently-typed" functionality. For example, here's a length-indexed vector.

data Vec (n :: Nat) a where
  -- Nice integer literal syntax
  Nil :: Vec 0 a
  -- Uses the GHC.TypeLits.+ type family
  Cons :: a -> Vec n a -> Vec (n + 1) a

Any function that uses this type should have a KnownNat n constraint, but already we run into trouble when matching a Cons constructor because for some KnownNat n, n ~ m+1 we don't know KnownNat m. Presumably we need this constraint to implement recursive functions and such.

So, I tossed the constraint into the type:

Cons :: KnownNat n => a -> Vec n a -> Vec (n + 1) a

Seems amiss.

Next up, (!).

(!) :: (KnownNat i, KnownNat n, CmpNat i n ~ LT) => Vec n a -> proxy i -> a

And now, I get stuck in the same way because doing value-level pattern matching does not affect any type equalities:

(Cons x xs) ! i
  | natVal i == 0 = x -- here, we know that CmpNat i n' ~ LT, where n' ~ n + 1, but the type checker doesn't
  | otherwise = ???

So, my question is, is this API usable for common type-level tasks? I feel like a lot of convenience is lost by not having an inductively defined Nat kind, like the familiar promoted data Nat = Z | S Nat. But, I'm new to this stuff and may be missing something.
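For comparison, the inductive encoding mentioned above can be sketched like this (a sketch of the familiar alternative, not GHC.TypeLits code):

```haskell
{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

-- The promoted, inductively defined Nat:
data Nat = Z | S Nat

data Vec (n :: Nat) a where
  Nil  :: Vec 'Z a
  Cons :: a -> Vec n a -> Vec ('S n) a

-- Matching on Cons exposes the predecessor index structurally, so no
-- KnownNat-style constraint is needed to recurse:
vmap :: (a -> b) -> Vec n a -> Vec n b
vmap _ Nil         = Nil
vmap f (Cons x xs) = Cons (f x) (vmap f xs)

vhead :: Vec ('S n) a -> a
vhead (Cons x _) = x
```

The price is losing the literal syntax and the constraint solver's support for +; which trade-off wins depends on the task.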

Thanks!

submitted by MitchellSalad
[link] [9 comments]
Categories: Incoming News

Porting this application to Haskell

Haskell on Reddit - Mon, 01/11/2016 - 2:46pm

I just got hired by a small start-up (http://dialectica.us/) that is creating an interactive web application for high school and university debaters to flow/record debates. They are considering switching implementations from AngularJS to something more scalable and efficient. Would Yesod be a viable option to suggest?

Does it have good support for interactive web applications and this sort of image manipulation?

Mock up: https://dialectica.firebaseapp.com/#/

submitted by ChavXO
[link] [comment]
Categories: Incoming News

Plugin system for an e-commerce/CMS website? (XPOST from /r/haskellquestions)

Haskell on Reddit - Mon, 01/11/2016 - 2:36pm

Note: Recently I posted this question on /r/haskellquestions with no response

I deal a lot with e-commerce, but in interpreted languages. Often they have some type of plugin/module system that you can purchase modules from 3rd parties and plug them into the system. How would you do that in Haskell? Perhaps with Yesod?

It would seem that this would be hard to do while the server is running, and would instead require the plugins to be present at compile time, which is fine.

Perhaps some type of Event-Observer system?

New to Haskell and not sure the best way to achieve this, preferably with as much type safety as possible.

Additional clarification: It seems that in the CMS/e-commerce field, the frameworks/platforms that succeed are the ones that provide a way for 3rd parties to create plugins/modules, since people can customize their installation for what they need™. I was trying to think of the best way to do that in Haskell so that you also get good type safety, but I don't have the experience to design this properly.

Mostly this is a curiosity, but on the side I wouldn't mind creating an e-commerce platform, and I would definitely want it to have some type of plugin system.

submitted by sbditto85
[link] [5 comments]
Categories: Incoming News

Why does shift's function return result in Cont and reset reinsert result into Cont again?

haskell-cafe - Mon, 01/11/2016 - 12:28pm
Hi. I've read "Introduction to Programming with Shift and Reset" by Kenichi Asai and Oleg Kiselyov, where shift and reset are defined as [code lost from this excerpt]. But why should the function f return its result in Cont, which I'll unwrap immediately? And why should I reinsert the delimited continuation's result into Cont again in reset? Wouldn't it be simpler (better?) to just define shift/reset like [code lost from this excerpt]? Moreover, the bubble-up semantics proof of Cont bubble elimination is (literally) correct only with reset' (my bubble implementation was included below; maybe I've written it wrong?). -- Dmitriy Matrosov
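For context, the shift/reset definitions from the Asai/Kiselyov tutorial that the message refers to look roughly like this (a reconstruction over a plain Cont newtype, not the poster's exact code):

```haskell
newtype Cont r a = Cont { runCont :: (a -> r) -> r }

-- reset delimits the context: it runs its body to completion and then
-- re-injects the pure result into Cont.
reset :: Cont a a -> Cont r a
reset m = Cont $ \k -> k (runCont m id)

-- shift captures the current delimited continuation k; the body f
-- returns its result in Cont, which is unwrapped immediately with id.
shift :: ((a -> r) -> Cont r r) -> Cont r a
shift f = Cont $ \k -> runCont (f k) id
```

The question is precisely about the Cont-then-id round trips here: whether f could return r directly, and whether reset needs to rewrap its result.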
Categories: Offsite Discussion

A Haskell program for beginners?

Haskell on Reddit - Mon, 01/11/2016 - 11:08am

My programming experience thus far only includes console applications - I mainly made programs to take in carefully prepared data and output the correct answer(s) as a way to learn/test algorithms. As such, a lot of computing terminology beyond algorithmics is gibberish to me.

I used Visual Studio because I could just install it and start writing programs straight away. Visual Studio could also tell me exactly which line was erroring, and show the contents of all variables at breakpoints - along with all the convenience features like commenting/un-commenting selected code, automatic indentation, and auto-complete.

What (free or low cost) Windows programs are there for Haskell that would give me the same kind of assistance and convenience? (at least when writing console applications)

submitted by MrPRambo
[link] [15 comments]
Categories: Incoming News

Haskell books to be published in 2016?

Haskell on Reddit - Mon, 01/11/2016 - 9:56am

Other than "Haskell Programming from first principles", are there any Haskell books on the horizon to be published in 2016?

submitted by rnnnnr
[link] [10 comments]
Categories: Incoming News

New release of the book Haskell Programming from first principles

haskell-cafe - Mon, 01/11/2016 - 9:45am
I'd been reticent in the past to announce the book on the mailing list, but it's pretty comprehensive now and we have enough ecstatic readers learning Haskell with it that I thought I'd share what we've been up to. We're writing this Haskell book (http://haskellbook.com/) because many have found learning Haskell to be difficult and it doesn't have to be. We have a strong focus on writing it to be a book for learning and teaching - it's not just a reference or review of topics. Particularly, we strive to make the book suitable for self-learners. We think Haskell is a really nice language and learning Haskell should be as nice as using it is. The new release puts the book at 26 chapters and 1,156 pages. You can track our progress here: http://haskellbook.com/progress.html The latest release included parser combinators, composing types, and monad transformers. My coauthor Julie Moronuki had never programmed before learning Haskell to work with me on this book. She has written about using the book to teach her 10-year-old son.
Categories: Offsite Discussion

Bill Atkins: GCD and Parallel Collections in Swift

Planet Haskell - Mon, 01/11/2016 - 7:39am
One of the benefits of functional programming is that it's straightforward to parallelize operations. Common FP idioms like map, filter and reduce can be adapted so they run on many cores at once, letting you get instant parallelization wherever you find a bottleneck.

The benefits of these parallel combinators are huge. Wherever you find a bottleneck in your program, you can simply replace your call to map with a call to a parallel map and your code will be able to take advantage of all the cores on your system. On my eight-core system, for example, simply using a parallel map can theoretically yield an eight-fold speed boost. Of course, there are a few reasons you might not see that theoretical speed improvement: namely, the overhead of creating threads, splitting up the work, synchronizing data between the threads, etc. Nevertheless, if you profile your code and focus on hotspots, you can see tremendous improvements with simple changes.

Swift doesn't yet come with parallel collections functions, but we can build them ourselves, using Grand Central Dispatch:
// requires Swift 2.0 or higher
extension Array {
    public func pmap<T>(transform: (Element -> T)) -> [T] {
        guard !self.isEmpty else {
            return []
        }

        var result: [(Int, [T])] = []

        let group = dispatch_group_create()
        let lock = dispatch_queue_create("pmap queue for result", DISPATCH_QUEUE_SERIAL)

        let step: Int = max(1, self.count / NSProcessInfo.processInfo().activeProcessorCount) // step can never be 0

        for var stepIndex = 0; stepIndex * step < self.count; stepIndex++ {
            let capturedStepIndex = stepIndex

            var stepResult: [T] = []
            dispatch_group_async(group, dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0)) {
                for i in (capturedStepIndex * step)..<((capturedStepIndex + 1) * step) {
                    if i < self.count {
                        let mappedElement = transform(self[i])
                        stepResult += [mappedElement]
                    }
                }

                dispatch_group_async(group, lock) {
                    result += [(capturedStepIndex, stepResult)]
                }
            }
        }

        dispatch_group_wait(group, DISPATCH_TIME_FOREVER)

        return result.sort { $0.0 < $1.0 }.flatMap { $0.1 }
    }
}
pmap takes the same arguments as map but runs the function across all of your system's CPUs. Let's break the function down, step by step.
  1. In the case of an empty array, pmap returns early, since the overhead of splitting up the work and synchronizing the results is non-trivial. We might take this even further by falling back to standard map for arrays with a very small element count.
  2. Create a Grand Central Dispatch group that we can associate with the GCD blocks we'll run later on. Since all of these blocks will be in the same group, the invoking thread can wait for the group to be empty at the end of the function and know for certain that all of the background work has finished before returning to the caller.
  3. Create a dedicated, sequential lock queue to control access to the result array. This is a common pattern in GCD: simulating a mutex with a sequential queue. Since a sequential queue will never run two blocks simultaneously, we can be sure that whatever operations we perform in this queue will be isolated from one another.
  4. Next, pmap breaks the array up into "steps", based on the host machine's CPU count (since this is read at runtime from NSProcessInfo, this function will automatically scale up to use all available cores). Each step is dispatched to one of GCD's global background queues. In the invoking thread, this for loop will run very, very quickly, since all it does is add closures to background queues.
  5. The main for loop iterates through each "step," capturing the stepIndex in a local variable, capturedStepIndex. If we don't do this, the closures passed to dispatch_group_async will all refer to the same storage location - as the for loop increments, all of the workers will see stepIndex increase by one and will all operate on the same step. By capturing the variable, each worker has its own copy of stepIndex, which never changes as the for loop proceeds.
  6. We calculate the start and end indices for this step. For each array element in that range, we call transform on the element and add it to this worker's local stepResult array. Because it's unlikely that the number of elements in the array will be exactly divisible by a given machine's processor count, we check that i never goes beyond the end of the array, which could otherwise happen in the very last step.
  7. After an entire step has been processed, we add this worker's results to the master result array. Since the order in which workers will finish is nondeterministic, each element of the result array is a tuple containing the stepIndex and the transformed elements in that step's range. We use the lock queue to ensure that all changes to the result array are synchronized. 
      • Note that we only have to enter this critical section once for each core - an alternative implementation of pmap might create a single master result array of the same size as the input and set each element to its mapped result as it goes. But this would have to enter the critical section once for every array element, instead of just once for each CPU, generating more memory and processor contention and benefiting less from spatial locality. 
      • We use dispatch_group_async on the lock queue, rather than a plain dispatch_async, because we want to be sure that the worker's changes have been applied to the result array before the group is considered done. If we used a plain dispatch_async, the scheduler could very easily finish all of the step blocks but leave one or more of these critical-section blocks unprocessed, leaving us with an incomplete result.
  8. Back on the original thread, we call dispatch_group_wait, which waits until all blocks in the group have completed. At this point, we know that all work has been done and all changes to the master results array have been made.
  9. The final line sorts the master array by stepIndex (since steps finish in a nondeterministic order) and then flattens the master array in that order.
To see how this works, let's create a simple profile function:
func profile<A>(desc: String, block: () -> A) -> Void {
    let start = NSDate().timeIntervalSince1970
    block()

    let duration = NSDate().timeIntervalSince1970 - start
    print("Profiler: completed \(desc) in \(duration * 1000)ms")
}
We'll test this out using a simple function called slowCalc, which adds a small sleep delay before each calculation, to ensure that each map operation does enough work. In production code, you should never sleep in code submitted to a GCD queue - this is purely to simulate a slow calculation for demonstration purposes. Without this little delay, the overhead of parallelization would be too great to see a speedup:

func slowCalc(x: Int) -> Int {
    NSThread.sleepForTimeInterval(0.1)
    return x * 2
}

let smallTestData: [Int] = [Int](0..<10)
let largeTestData = [Int](0..<300)

profile("large dataset (sequential)") { largeTestData.map { slowCalc($0) } }
profile("large dataset (parallel)") { largeTestData.pmap { slowCalc($0) } }
On my eight-core machine, this results in:
Profiler: completed large dataset (sequential) in 31239.7990226746ms
Profiler: completed large dataset (parallel) in 4005.04493713379ms

a 7.8-fold increase, which is about what you'd expect.
It's important to remember that if each iteration doesn't do enough work, the overhead of splitting up the work, setting up worker blocks and synchronizing data access will far outweigh the time savings of parallelization. The amount of overhead involved can be surprising. This code is identical to the above, except that it doesn't add the extra delay.

profile("large dataset (sequential, no delay)") { largeTestData.map { $0 * 2 } }
profile("large dataset (parallel, no delay)") { largeTestData.pmap { $0 * 2 } }
On my machine, it results in:
Profiler: completed large dataset (sequential, no delay) in 53.4629821777344ms
Profiler: completed large dataset (parallel, no delay) in 161.548852920532ms

The parallel version is three times slower than the sequential version! This is a really important consideration when using parallel collection functions:
  1. Make sure that each of your iterations does enough work to make parallelization worth it.
  2. Parallel collections are not a panacea - you can't just sprinkle them throughout your code and assume you'll get a performance boost. You still need to profile for hotspots, and it's important to focus on bottlenecks found through profiling, rather than hunches about what parts of your code are slowest.
  3. Modern CPUs are blindingly fast - basic operations like addition or multiplication are so fast that it's not worth parallelizing these, unless your array is very large.
You can use the same techniques to implement a parallel filter function:

// requires Swift 2.0 or higher
extension Array {
    public func pfilter(includeElement: Element -> Bool) -> [Element] {
        guard !self.isEmpty else {
            return []
        }

        var result: [(Int, [Element])] = []

        let group = dispatch_group_create()
        let lock = dispatch_queue_create("pmap queue for result", DISPATCH_QUEUE_SERIAL)

        let step: Int = max(1, self.count / NSProcessInfo.processInfo().activeProcessorCount) // step can never be 0

        for var stepIndex = 0; stepIndex * step < self.count; stepIndex++ {
            let capturedStepIndex = stepIndex

            var stepResult: [Element] = []
            dispatch_group_async(group, dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0)) {
                for i in (capturedStepIndex * step)..<((capturedStepIndex + 1) * step) {
                    if i < self.count && includeElement(self[i]) {
                        stepResult += [self[i]]
                    }
                }

                dispatch_group_async(group, lock) {
                    result += [(capturedStepIndex, stepResult)]
                }
            }
        }

        dispatch_group_wait(group, DISPATCH_TIME_FOREVER)

        return result.sort { $0.0 < $1.0 }.flatMap { $0.1 }
    }
}
This code is almost exactly identical to pmap - only the logic in the inner for loop is different.

We can now start using these combinators together (again, we have to use a slowed-down predicate function in order to see the benefit from parallelization):

func slowTest(x: Int) -> Bool {
    NSThread.sleepForTimeInterval(0.1)
    return x % 2 == 0
}

profile("large dataset (sequential)") { largeTestData.filter { slowTest($0) }.map { slowCalc($0) } }
profile("large dataset (sequential filter, parallel map)") { largeTestData.filter { slowTest($0) }.pmap { slowCalc($0) } }
profile("large dataset (parallel filter, sequential map)") { largeTestData.pfilter { slowTest($0) }.map { slowCalc($0) } }
profile("large dataset (parallel filter, parallel map)") { largeTestData.pfilter { slowTest($0) }.pmap { slowCalc($0) } }
which results in:
Profiler: completed large dataset (sequential) in 1572.28803634644ms
Profiler: completed large dataset (sequential filter, parallel map) in 1153.90300750732ms
Profiler: completed large dataset (parallel filter, sequential map) in 642.061948776245ms
Profiler: completed large dataset (parallel filter, parallel map) in 231.456995010376ms
Using one parallel combinator gives a slight improvement; combining the two parallel operations gives us an almost sevenfold performance improvement over the basic sequential implementation.
Here are some other directions to pursue:
  1. Implement parallel versions of find, any/exists and all. These are tricky because their contracts stipulate that processing stops as soon as they have a result. So you'll have to find some way to stop your parallel workers as soon as the function has its answer.
  2. Implement a parallel version of reduce. The benefit of doing this is that reduce is a "primitive" higher-order function - you can easily implement pmap and pfilter given an existing parallel reduce function.
  3. Generalize these functions to work on all collections (not just arrays), using Swift 2's protocol extensions.
Categories: Offsite Blogs

Bill Atkins: NSNotificationCenter, Swift and blocks

Planet Haskell - Mon, 01/11/2016 - 7:32am
The conventional way to register observers with NSNotificationCenter is to use the target-action pattern. While this gets the job done, it's inherently not type-safe.

For example, the following Swift snippet will compile perfectly:

    NSNotificationCenter.defaultCenter().addObserver(self, selector: Selector("itemAdded:"),
        name: MyNotificationItemAdded, object: nil)
even though at runtime it will fail unless self has a method named itemAdded that takes exactly one parameter (leaving off that last colon in the selector will turn this line into a no-op). Plus, this method gives you no way to take advantage of Swift's closures, which would allow the observer to access local variables in the method that adds the observer and would eliminate the need to create a dedicated method to handle the event.
A better way to do this is to use blocks. And NSNotificationCenter does include a block-based API:
    NSNotificationCenter.defaultCenter().addObserverForName(MyNotificationItemAdded, object: nil, queue: nil) { note in
        // ...
    }
This is much nicer, especially with Swift's trailing closure syntax. There are no method names to be looked up at runtime, we can refer to local variables in the method that registered the observer and we can perform small bits of logic in reaction to events without having to create and name dedicated methods.
The catch comes in resource management. It's very important that an object remove its event observers when it's deallocated, or else NSNotificationCenter will try to invoke methods on invalid pointers.
The traditional target-action method has the one advantage that we can easily handle this requirement with a single call in deinit:
  deinit {
    NSNotificationCenter.defaultCenter().removeObserver(self)
  }
With the block API, however, since there is no explicit target object, each call to addObserverForName returns "an opaque object to act as observer." So your observer class would need to track all of these objects and then remove them all from the notification center in deinit, which is a pain.
In fact, the hassle of having to do bookkeeping on the observer objects almost cancels out the convenience of using the block API. Frustrated by this situation, I sat down and created a simple helper class, NotificationManager:
class NotificationManager {
  private var observerTokens: [AnyObject] = []

  deinit {
    deregisterAll()
  }

  func deregisterAll() {
    for token in observerTokens {
      NSNotificationCenter.defaultCenter().removeObserver(token)
    }

    observerTokens = []
  }

  func registerObserver(name: String!, block: (NSNotification! -> Void)) {
    let newToken = NSNotificationCenter.defaultCenter().addObserverForName(name, object: nil, queue: nil, usingBlock: block)

    observerTokens.append(newToken)
  }

  func registerObserver(name: String!, forObject object: AnyObject!, block: (NSNotification! -> Void)) {
    let newToken = NSNotificationCenter.defaultCenter().addObserverForName(name, object: object, queue: nil, usingBlock: block)

    observerTokens.append(newToken)
  }
}
First, this simple class provides a Swift-specialized API around NSNotificationCenter.  It provides an additional convenience method without an object parameter (rarely used, in my experience) to make it easier to use trailing-closure syntax. But most importantly, it keeps track of the observer objects generated when observers are registered, and removes them when the object is deinit'd.

A client of this class can simply keep a member variable of type NotificationManager and use it to register its observers. When the parent class is deallocated, the deinit method will automatically be called on its NotificationManager member variable, and its observers will be properly disposed of:

class MyController: UIViewController {
  private let notificationManager = NotificationManager()

  override init() {
    notificationManager.registerObserver(MyNotificationItemAdded) { note in
      println("item added!")
    }

    super.init()
  }

  required init(coder: NSCoder) {
    fatalError("decoding not implemented")
  }
}
When the MyController instance is deallocated, its NotificationManager member variable will be automatically deallocated, triggering the call to deregisterAll that will remove the dead objects from NSNotificationCenter.
In my apps, I add a notificationManager instance to my common UIViewController base class so I don't have to explicitly declare the member variable in all of my controller subclasses.

Another benefit of using my own wrapper around NSNotificationCenter is that I can add useful functionality, like group observers: an observer that's triggered when any one of a group of notifications are posted:

struct NotificationGroup {
  let entries: [String]

  init(_ newEntries: String...) {
    entries = newEntries
  }
}

extension NotificationManager {
  func registerGroupObserver(group: NotificationGroup, block: (NSNotification! -> Void)) {
    for name in group.entries {
      registerObserver(name, block: block)
    }
  }
}
This can be a great way to easily set up an event handler to run when, for example, an item is changed in any way at all:
    let MyNotificationItemsChanged = NotificationGroup(
      MyNotificationItemAdded,
      MyNotificationItemDeleted,
      MyNotificationItemMoved,
      MyNotificationItemEdited
    )

    notificationManager.registerGroupObserver(MyNotificationItemsChanged) { note in
      // ...
    }
Categories: Offsite Blogs

ANN: true-name 0.1.0.0 released

haskell-cafe - Mon, 01/11/2016 - 6:44am
It is with some shame that I announce ‘true-name’, a package to assist one in violating those pesky module abstraction boundaries via the magick of Template Haskell. http://hackage.haskell.org/package/true-name Take ‘Control.Concurrent.Chan’ for example; you can get your grubby mitts on the ‘Chan’ data constructor, despite it not being exported. Here we pattern match on it, and bind ‘chanR’ and ‘chanW’ to the ‘MVar’s containing the read and write ends of the channel respectively: Now, the type of ’chanR’ references the unexported ‘Stream’ and ‘ChItem’ types. We need the ‘ChItem’ data constructor―which is hidden under a few indirections—but that's not a problem: This gives us a rather dodgy ‘peekChan’. This sort of thing is usually a Bad Idea™, but may sometimes be more palatable than the alternatives. Full example: https://github.com/liyang/true-name/blob/master/sanity.hs Taking another example, suppose we want the ‘Array’ type constructor hidde
Categories: Offsite Discussion

ANN: true-name 0.1.0.0 released

General haskell list - Mon, 01/11/2016 - 6:40am
It is with some shame that I announce ‘true-name’, a package to assist one in violating those pesky module abstraction boundaries via the magick of Template Haskell. http://hackage.haskell.org/package/true-name Take ‘Control.Concurrent.Chan’ for example; you can get your grubby mitts on the ‘Chan’ data constructor, despite it not being exported. Here we pattern match on it, and bind ‘chanR’ and ‘chanW’ to the ‘MVar’s containing the read and write ends of the channel respectively: Now, the type of ’chanR’ references the unexported ‘Stream’ and ‘ChItem’ types. We need the ‘ChItem’ data constructor―which is hidden under a few indirections—but that's not a problem: This gives us a rather dodgy ‘peekChan’. This sort of thing is usually a Bad Idea™, but may sometimes be more palatable than the alternatives. Full example: https://github.com/liyang/true-name/blob/master/sanity.hs Taking another example, suppose we want the ‘Array’ type constructor hidde
Categories: Incoming News

Access files one at a time?

Haskell on Reddit - Mon, 01/11/2016 - 3:43am

Hello,

I am writing a function that reads and writes data to the hard disk. Since the application is multithreaded, I think the data on disk should be accessed one at a time. Can anyone tell me how I can do that?
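One common approach (a sketch with names of our own invention, not a standard API) is to funnel all disk access through a single MVar used as a mutex:

```haskell
import Control.Concurrent.MVar

-- An exclusive lock guarding all access to the file.
newtype FileLock = FileLock (MVar ())

newFileLock :: IO FileLock
newFileLock = FileLock <$> newMVar ()

-- withMVar takes the lock, runs the action, and releases the lock even
-- if the action throws, so two threads can never touch the file at once.
withFileLock :: FileLock -> IO a -> IO a
withFileLock (FileLock m) action = withMVar m (\_ -> action)
```

Each thread then wraps its readFile/writeFile calls in withFileLock; for heavier needs, a dedicated worker thread draining a Chan of requests is another standard pattern.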

submitted by notooth1
[link] [7 comments]
Categories: Incoming News

New release of the book Haskell Programming from first principles

Haskell on Reddit - Mon, 01/11/2016 - 1:46am

I'd been reticent in the past to ping y'all about each release, but it's pretty comprehensive now and we have enough ecstatic readers learning Haskell with it that I thought I'd update y'all on what we've been up to.

We're writing this Haskell book (http://haskellbook.com/) because many have found learning Haskell to be difficult and it doesn't have to be. We have a strong focus on writing it to be a book for learning and teaching - it's not just a reference or review of topics. Particularly, we strive to make the book suitable for self-learners. We think Haskell is a really nice language and learning Haskell should be as nice as using it is.

The new release puts the book at 26 chapters and 1,156 pages. You can track our progress here: http://haskellbook.com/progress.html

The latest release included parser combinators, composing types, and monad transformers.

My coauthor Julie Moronuki has never programmed before learning Haskell to work with me on this book. She has written about using the book to teach her 10 year old son as well https://superginbaby.wordpress.com/2015/04/08/teaching-haskell-to-a-10-year-old-day-1/

Julie has also written about learning Haskell more generally https://superginbaby.wordpress.com/2015/05/30/learning-haskell-the-hard-way/

We'll be looking for a press to do a print run of the book soon as it's about 80% done. If anyone has any pointers or recommendations on whom to work with, particularly university presses, please email me. My email can be found on my Github profile at https://github.com/bitemyapp/

If you've been reading the book, please speak up and share your thoughts. We have some reader feedback on the site at http://haskellbook.com/feedback.html

submitted by Mob_Of_One
[link] [110 comments]
Categories: Incoming News

String manipulation Guide?

Haskell on Reddit - Sun, 01/10/2016 - 11:32pm

I haven't found a complete guide introducing string manipulation in Haskell, covering things such as OverloadedStrings, String, Text, ByteString, and formatting (pretty-printing)...

-- update: I found an outline by fpco: all about strings
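As a taste of the first topic on that list, OverloadedStrings lets one string literal inhabit several string types (a small illustration of ours, using the bytestring package that ships with GHC):

```haskell
{-# LANGUAGE OverloadedStrings #-}

import qualified Data.ByteString.Char8 as B

-- The same literal is elaborated via IsString into whichever type the
-- context demands.
asString :: String
asString = "hello"

asBytes :: B.ByteString
asBytes = "hello"
```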

submitted by n00bomb
[link] [7 comments]
Categories: Incoming News

A bug in multiset-0.3.0 package; not sure how to report

Haskell on Reddit - Sun, 01/10/2016 - 9:19pm

I found an easy-to-fix bug in the multiset-0.3.0 package, which lists 'libraries@haskell.org' as its maintainer. I'm not sure how to report it -- the mailing list doesn't accept my e-mails.

The bug is as follows: in Data.MultiSet, the functions maxView and minView have identical definitions. In maxView, one should replace deleteFindMin with deleteFindMax.

submitted by ilyagr
[link] [2 comments]
Categories: Incoming News

How to tell which MVar we are BlockedIndefinitelyOnMVar on?

Haskell on Reddit - Sun, 01/10/2016 - 8:10pm

Is there a way to get the MVar from a BlockedIndefinitelyOnMVar exception, or otherwise attach metadata to an MVar that is retrievable on BlockedIndefinitelyOnMVar?
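There is no built-in way to recover the MVar from the exception, but one workable pattern (our own sketch, not a standard API) is to attach a label at each blocking call site and rethrow with it:

```haskell
import Control.Concurrent.MVar
import Control.Exception

-- Wrap takeMVar so that an indefinite block is re-raised with a
-- human-readable label identifying which MVar was involved.
labeledTakeMVar :: String -> MVar a -> IO a
labeledTakeMVar label mv =
  takeMVar mv `catch` \BlockedIndefinitelyOnMVar ->
    throwIO (userError ("blocked indefinitely on MVar: " ++ label))
```

The same wrapper works for readMVar or putMVar; the label survives where the MVar's identity does not.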

submitted by fread2281
[link] [3 comments]
Categories: Incoming News