News aggregator

Commutativity and Unordered Sets in Haskell

Haskell on Reddit - Mon, 07/27/2015 - 8:29pm

Before you start flinging rotten tomatoes, I realize that commutative operations aren't important in Haskell for a reason. That being said, there was an interesting discussion on #haskell today - how can Set be foldable, when sets are supposed to be unordered?

It was quickly answered that Haskell's Set is indeed ordered. But this still didn't quench my interest, so I decided to make a library for commutative binary operations.

It turns out to parallel Monoid and Foldable very closely, but there is an interesting result - I can't conceivably make a correct Commutative instance at the binary operation level. Pure functional programming demands that functions be consistent - f a should mean the same thing every time. Commutativity, however, says that f a b may just as well be f b a at any point - the order doesn't matter.

So, I pulled out my stapler, unsafePerformIO, and voilà! We have "commutative" instances that randomly choose between a <~> b and b <~> a.
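The post doesn't inline the code, but the trick presumably looks something like this sketch (the class, the <~> operator, and the use of System.Random are my guesses from the description):

import System.IO.Unsafe (unsafePerformIO)
import System.Random (randomIO)

-- A commutative binary operation: a <~> b is supposed to equal b <~> a.
class Commutative a where
  (<~>) :: a -> a -> a

-- "Check" commutativity with a coin flip: pick a random argument order
-- on every call. unsafePerformIO makes this wildly impure; it is a
-- joke, not a technique.
commute :: Commutative a => a -> a -> a
commute a b
  | unsafePerformIO (randomIO :: IO Bool) = a <~> b
  | otherwise                             = b <~> a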

A cleaner solution might be to define the commute operation at the set level, much like mconcat. So, I bantered around and made "Mergeable" - the analogue to an unordered Foldable (which has semantics about as terrible as it sounds). From this, I made instances for lists, both possibly empty and non-empty, sketched below.
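Again guessing at the shapes from the description, the set-level version might look like the following sketch, with a non-empty list instance so merge stays total:

-- merge is to a commutative operation what mconcat is to a Monoid;
-- because the operation commutes, traversal order should not matter.
class Mergeable t where
  merge :: Commutative a => t a -> a

-- A non-empty list always has a seed element to fold from.
data NonEmpty a = a :| [a]

instance Mergeable NonEmpty where
  merge (x :| xs) = foldr (<~>) x xs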

Where do we go from here? Is there any hope for our obsessions? I also made a basic unordered set implementation which could serve as another instance for Mergeable, but I don't see much practical / performant utility.

Has there been much history of commutative attempts in Haskell? What are the underlying issues? Would we actually gain anything (for the compiler) to state that a whole set can be evaluated in any order, or that the set can be partitioned in memory in any way?

submitted by Spewface
[link] [7 comments]
Categories: Incoming News

Netwire 5: how to use Events?

Haskell on Reddit - Mon, 07/27/2015 - 3:13pm

I am trying to understand how Events are supposed to be used in Netwire 5. I failed to google up any meaningful tutorials or examples.

(Virtually) all functions that produce events are wires that wrap their input in an Event and produce it as output:

something -> Wire s e m a (Event a)

(Virtually) all functions that consume events expect, as part of their input, another Wire wrapped in an Event:

Wire s e m a (b, Event (Wire s e m a b)) -> Wire s e m a b

How to jump from one to the other?! I feel dumb. As an example, I would like to produce the number of events that had occurred up to now:

module Main where

import Control.Monad.IO.Class
import Control.Wire
import Prelude hiding ((.))

-- Produce the number of events that occurred up to the present time
countWire :: (HasTime t s, Monad m) => Int -> Wire s () m (Event a) Int
countWire init = ????????

main :: IO ()
main = testWireM liftIO clockSession_ $ countWire 0 . periodic 2 . pure ()
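One plausible shape for the missing definition, as an untested sketch: netwire 5's accumE can count the occurrences and hold can latch the most recent count, with <|> supplying the initial count before the first event arrives (assuming I'm reading the netwire API right):

countWire :: (HasTime t s, Monad m) => Int -> Wire s () m (Event a) Int
countWire n = hold . accumE (\c _ -> c + 1) n <|> pure n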

Thanks in advance

submitted by lukewarm
[link] [3 comments]
Categories: Incoming News

What is the difference between . and $

Haskell on Reddit - Mon, 07/27/2015 - 12:48pm

Isn't $ a more general-purpose . ? I'm just confused as to why both exist.
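A quick illustration of the difference, using only standard Prelude definitions:

-- (.) :: (b -> c) -> (a -> b) -> a -> c    composes two functions
-- ($) :: (a -> b) -> a -> b                applies one, at the lowest precedence

halveThenNegate :: Int -> Int
halveThenNegate = negate . (`div` 2)   -- composition builds a new function

applied :: Int
applied = negate $ 10 `div` 2          -- ($) just saves writing negate (10 `div` 2)

So ($) isn't a more general (.): one applies a function, the other composes two. They look interchangeable only because both are commonly used to avoid parentheses.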

submitted by czipperz
[link] [29 comments]
Categories: Incoming News

General purpose libraries

Haskell on Reddit - Mon, 07/27/2015 - 11:21am

If you think the title doesn't make sense, let me explain.

Many libraries are specific to a particular domain, e.g. operations research, machine learning, parsing, etc.

Other libraries (like Scalaz in Scala) are more general in scope and change the way we structure our code, no matter the domain.

While we know when to look for a library of the first type, we usually stumble upon libraries of the second type and only then realize how useful they are.

So here's my question: are there libraries of the second type I should know of?

submitted by Kiuhnm
[link] [17 comments]
Categories: Incoming News

Examples of "Functional Thinking" successes in industry?

Haskell on Reddit - Mon, 07/27/2015 - 10:29am

By Functional Thinking I don't just mean using a functional language, or even making everything a pure function. While true FP languages are gaining traction, most projects are still done in languages where full functional style makes no sense. By Functional Thinking I mean generally organizing a project into multiple smaller composable pieces, using contracts or whatever, rather than the monolithic framework style most often seen. Essentially, I'm looking for examples of, and info on, how to transfer styles like u/tekmo's category/functor design pattern and Conal's(?) denotational design to industry settings.

submitted by dogodel
[link] [20 comments]
Categories: Incoming News

LambdaCube 3D tour summary

Haskell on Reddit - Mon, 07/27/2015 - 10:14am
Categories: Incoming News

Munich Haskell Meeting, 2015-07-29 < at > 19:30

haskell-cafe - Mon, 07/27/2015 - 10:02am
Dear all,

This week, our monthly Munich Haskell Meeting will take place again on Wednesday, July 29 at Cafe Puck at 19h30. For details see here: http://www.haskell-munich.de/dates

If you plan to join, please add yourself to this dudle so we can reserve enough seats! It is OK to add yourself to the dudle anonymously or pseudonymously. https://dudle.inf.tu-dresden.de/haskell-munich-jul-2015/

Everybody is welcome!

cu,
Categories: Offsite Discussion

LambdaCube: The LambdaCube 3D Tour

Planet Haskell - Mon, 07/27/2015 - 3:29am
Prologue

This blog has been silent for a long time, so it’s definitely time for a little update. A lot of things have happened since the last post! Our most visible achievement from this period is the opening of the official website for LambdaCube 3D at ‒ wait for it! ‒ lambdacube3d.com, which features an interactive editor among other things. The most important, on the other hand, is that Péter Diviánszky joined the team in the meantime and has been very active in the development of the latest LambdaCube generation. Oh yes, we have a completely new system now, and the big change is that LambdaCube 3D is finally an independent DSL instead of a language embedded in Haskell. Consult the website for further details.

Soon we’ll be writing more about the work done recently, but the real meat of this post is our summer adventure: the LambdaCube 3D Tour. This is the short story of how Csaba and Péter presented LambdaCube 3D during a 7-week tour around Europe.

The idea

Csaba started to work on LambdaCube 3D full time in the autumn of 2014 and Péter joined him at the same time. We knew that we should advertise our work somehow, preferably without involving individual sponsors. Csaba already had experience with meetups and hackathons, and Péter had bought his first car in the summer, so it seemed feasible to travel around Europe by car and give presentations for various self-organised tech communities.

Planning

At the end of March 2015 we decided that the tour should start around ZuriHac 2015, which we wanted to participate in anyway. We had a preliminary route which was quite close to the final itinerary. (A map of the route appears in the original post.)

We included Munich because we knew that Haskell is used there in multiple ways, and we had missed it the previous year on the way to ZuriHac 2014. We included Paris later because a friend of ours was at MGS 2015; while chatting with Nicolas Rolland he mentioned our tour, and Nicolas got interested and invited us to Paris. We included Nottingham because Ambrus Kaposi, Péter's former student, was doing research on HoTT there and invited us too. We included Helsinki because Gergely, our team member and Csaba's former Haskell teacher, lives there, and we had also heard that lots of indie game developers work in Helsinki. The other destinations were included mainly because we knew there are Haskell communities there.

Preparation

After deciding on the starting time, we made sure to fully focus on the tour. We made a detailed work schedule for 4 × 2 weeks so that we would have enough content for the presentations. The work went surprisingly well: we reached most milestones, although the schedule was quite tight. We did more for the success of LambdaCube 3D in this two-month period than in the previous half year.

We had a rehearsal presentation in Budapest at Prezi, mainly in front of our friends. It was a good idea because we got lots of feedback on our style of presentation. We felt that the presentation was not that successful but we were happy that we could learn from it.

We hardly dealt with other details, like where we would sleep or how we would travel. All we did was the minimum effort to kickstart the trip: Péter took his 14-year-old Peugeot 206 in for an inspection, he sent some CouchSurfing requests, and Csaba asked some of his friends whether they could host us. We bought food for the tour on the morning we left Budapest. Two oszkar.com (car-sharing service) clients were waiting for us at the car while we were packing our stuff for the tour.

Travelling

Up until Vienna there were four of us in the car with big packs on our laps. This was rather inconvenient, so we mostly skipped transporting others later on. Otherwise, the best moments were when we occasionally gave a ride to friends or hitchhikers.

Péter was relatively new to driving, but by the end of the tour he had come to appreciate travelling on smaller local roads, because highways turned out to be a bit boring for him. Csaba was the navigator, using open-source offline navigation software running on an NVidia Shield, which we won at the Function 2014 demoscene party.

Being lazy about actual route planning backfired only once, when it combined with a bad decision: we bought an expensive EuroTunnel ticket instead of going to England by ferry. Even with this expense, the whole journey including all costs was about the same price as attending a one-week conference somewhere in Europe. This was possible because we only paid for accommodation once, in a hostel. Otherwise we were staying with friends or CouchSurfing hosts. We slept in a tent on two occasions. During our tour we learned about the Swedish law of The Right of Public Access (Allemansrätten), which allows everyone to visit somebody else’s land, bathe in and boat on their waters, and pick wild flowers, mushrooms and berries. We didn’t have the chance to try all these activities, though.

All of our hosts were amazing and very different people, living in different conditions. We hope that we can return the hospitality either to them or to someone else in need in Budapest.

Presentations

Our main task was to give one presentation every 3-4 days on average, which was quite a convenient rate. We were lucky because our first presentation, in Munich, was strikingly successful. We got so many questions during and after the presentation that we could only leave four hours after the beginning. At first we gave free-style presentations, but near the end we switched to using slides because they are altogether more reliable. Sometimes it was refreshing to diverge from the presentation and speak about Agda or laziness if the audience was interested in it.

Summary

We reached approximately 200 people over 12 presentations, and we had lots of private discussions about our interests. All this was combined with a varied summer vacation full of new impressions. We think of it as a total success!

In hindsight, there are a few things we would do differently if we were to begin the tour now:

  • First of all, we would arrange more private appointments. For example, we were invited by Nicolas Rolland to have dinner at Microsoft Research in Cambridge. We wanted to greet Simon Peyton Jones, so we went up to his room. We could see him through the glass, talking on the phone, but we didn’t want to disturb him because we had no appointment with him.
  • If we had spent more time improving our presentation during the tour, it would have made a lot of difference. We couldn’t really do this because at first we were patching the most annoying shortcomings in our software, and later we were not focused enough (we had to adapt to the ever-changing environment).
  • Half the time would have been enough to achieve our primary goal. 7 weeks is a really long time for such a tour.
  • No meetups on Friday-Saturday-Sunday. Especially no meetups on Fridays in London…
  • No meetups in the height of summer; the beginning of June is still OK.

It matters a lot to us that the importance of our work was confirmed by the feedback we received during the trip. It’s also clear that C/C++ programmers would only be interested in an already successful system. On the other hand, functional programmers (especially Haskellers) are open to cooperation right now.

What’s next?

The tour ended two weeks ago, and by now we have had enough rest to carry on with development, taking into account all the feedback from the tour. We shall write about our plans for LambdaCube 3D in another blog post soon enough. Stay tuned!

Categories: Offsite Blogs

Daniel Mlot (duplode): Migrating a Project to stack

Planet Haskell - Mon, 07/27/2015 - 12:00am

This post consists of notes on how I converted one of my Haskell projects to stack. It provides a small illustration of how flexible stack can be in accommodating project organisation quirks on the way towards predictable builds. If you want to see the complete results, here are links to the Bitbucket repository of Stunts Cartography, the example project I am using, and specifically to the source tree immediately after the migration.

The first decision to make when migrating a project is which Stackage snapshot to pick. It had been a while since I last updated my project, and building it with the latest versions of all its dependencies would require a few adjustments. That being so, I chose to migrate to stack before any further patches. Since one of the main dependencies was diagrams 1.2, I went for lts-2.19, the most recent LTS snapshot with that version of diagrams [1].

$ stack init --resolver lts-2.19

stack init creates a stack.yaml file based on an existing cabal file in the current directory. The --resolver option can be used to pick a specific snapshot.

One complicating factor in the conversion to stack was that two of the extra dependencies, threepenny-gui-0.5.0.0 (one major version behind the current one) and zip-conduit, wouldn’t build with the LTS snapshot plus current Hackage without version bumps in their cabal files. Fortunately, stack deals very well with situations like this, in which minor changes to some dependency are needed. I simply forked the dependencies on GitHub, pushed the version bumps to my forks and referenced the commits in the remote GitHub repository in stack.yaml. A typical entry for a Git commit in the packages section looks like this:

- location:
    git: https://github.com/duplode/zip-conduit
    commit: 1eefc8bd91d5f38b760bce1fb8dd16d6e05a671d
  extra-dep: true

Keeping customised dependencies in public remote repositories is an excellent solution. It enables users to build the package without further intervention without requiring developers to clumsily bundle the source tree of the dependencies with the project, or waiting for a pull request to be accepted upstream and reach Hackage.

With the two tricky extra dependencies being offloaded to Git repositories, the next step was using stack solver to figure out the rest of them:

$ stack solver --modify-stack-yaml
This command is not guaranteed to give you a perfect build plan
It's possible that even with the changes generated below, you will still need to do some manual tweaking
Asking cabal to calculate a build plan, please wait
extra-deps:
- parsec-permutation-0.1.2.0
- websockets-snap-0.9.2.0
Updated /home/duplode/Development/stunts/diagrams/stack.yaml

Here is the final stack.yaml:

flags:
  stunts-cartography:
    repldump2carto: true
packages:
- '.'
- location:
    git: https://github.com/duplode/zip-conduit
    commit: 1eefc8bd91d5f38b760bce1fb8dd16d6e05a671d
  extra-dep: true
- location:
    git: https://github.com/duplode/threepenny-gui
    commit: 2dd88e893f09e8e31378f542a9cd253cc009a2c5
  extra-dep: true
extra-deps:
- parsec-permutation-0.1.2.0
- websockets-snap-0.9.2.0
resolver: lts-2.19

repldump2carto is a flag defined in the cabal file. It is used to build a secondary executable. Beyond demonstrating how the flags section of stack.yaml works, I added it because stack ghci expects all possible build targets to have been built [2].
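For reference, the flag itself is declared in the project's cabal file; the stanza presumably looks roughly like this (a sketch based on the description above, not the actual file):

flag repldump2carto
  description: Build the repldump2carto secondary executable
  default:     False

executable repldump2carto
  -- built only when the flag is set, e.g. via the stack.yaml above
  if !flag(repldump2carto)
    buildable: False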

As I have GHC 7.10.1 from my Linux distribution and the LTS 2.19 snapshot is made for GHC 7.8.4, I needed stack setup as an additional step. That command locally installs (in ~/.stack) the GHC version required by the chosen snapshot.

That pretty much concludes the migration. All that is left is demonstrating: stack build to compile the project…

$ stack build
JuicyPixels-3.2.5.2: configure
Boolean-0.2.3: download
# etc. (Note how deps from Git are handled seamlessly.)
threepenny-gui-0.5.0.0: configure
threepenny-gui-0.5.0.0: build
threepenny-gui-0.5.0.0: install
zip-conduit-0.2.2.2: configure
zip-conduit-0.2.2.2: build
zip-conduit-0.2.2.2: install
# etc.
stunts-cartography-0.4.0.3: configure
stunts-cartography-0.4.0.3: build
stunts-cartography-0.4.0.3: install
Completed all 64 actions.

… stack ghci to play with it in GHCi…

$ stack ghci
Configuring GHCi with the following packages: stunts-cartography
GHCi, version 7.8.4: http://www.haskell.org/ghc/  :? for help
Loading package ghc-prim ... linking ... done.
Loading package integer-gmp ... linking ... done.
Loading package base ... linking ... done.
-- etc.
Ok, modules loaded: GameState, Annotation, Types.Diagrams, Pics, Pics.MM,
Annotation.Flipbook, Annotation.LapTrace, Annotation.LapTrace.Vec,
Annotation.LapTrace.Parser.Simple, Annotation.Parser, Types.CartoM,
Parameters, Composition, Track, Util.Misc, Pics.Palette, Output,
Util.ByteString, Util.ZipConduit, Replay, Paths, Util.Reactive.Threepenny,
Util.Threepenny.Alertify, Widgets.BoundedInput.
*GameState> :l src/Viewer.hs -- The Main module.
-- etc.
*Main> :main
Welcome to Stunts Cartography.
Open your web browser and navigate to localhost:10000 to begin.
Listening on http://127.0.0.1:10000/
[27/Jul/2015:00:55:11 -0300] Server.httpServe: START, binding to [http://127.0.0.1:10000/]

… and looking at the build output in the depths of .stack-work:

$ .stack-work/dist/x86_64-linux/Cabal-1.18.1.5/build/sc-trk-viewer/sc-trk-viewer
Welcome to Stunts Cartography 0.4.0.3.
Open your web browser and navigate to localhost:10000 to begin.
Listening on http://127.0.0.1:10000/
[26/Jul/2015:20:02:54 -0300] Server.httpServe: START, binding to [http://127.0.0.1:10000/]

With the upcoming stack 0.2 it will be possible to use stack build --copy-bins --local-bin-path <path> to copy any executables built as part of the project to a path. If the --local-bin-path option is omitted, the default is ~/.local/bin. (In fact, you can already copy executables to ~/.local/bin with stack 0.1.2 through stack install. However, I don’t want to overemphasise that command, as stack install not being equivalent to cabal install can cause some confusion.)

Hopefully this report will give you an idea of what to expect when migrating your projects to stack. Some details may appear a little strange, given how familiar cabal-install workflows are, and some features are still being shaped. All in all, however, stack works very well already: it definitely makes setting up reliable builds easier. The stack repository at GitHub, and especially the wiki therein, offers lots of helpful information, in case you need further details and usage tips.

Footnotes:
  [1] As a broader point, it just seems polite to pick an LTS snapshot rather than a nightly for a public project when possible. It is more likely that those interested in building your project already have a specific LTS rather than an arbitrary nightly.

  [2] That being so, a more natural arrangement would be treating repldump2carto as a full-blown subproject by giving it its own cabal file and adding it to the packages section. I would then be able to load only the main project in GHCi with stack ghci stunts-cartography.


Post licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

Categories: Offsite Blogs

How can I learn the theoretical side of haskell?

Haskell on Reddit - Sun, 07/26/2015 - 9:50pm

I see that most of the books on Haskell show how to use haskell for practical projects. I'm fascinated by the theory which seems to come so naturally to haskell developers!

I have an undergrad degree in Comp. Sci., which included the usual set of discrete math and theoretical classes, which I did my best to ignore (15 years ago I was even more dumb than I am now). Three or four years ago I spent several months reading whatever would show up on lambda-the-ultimate.org. This doesn't mean much, beyond the fact that I am aware of the existence of type theory, lambda calculus, category theory, etc. I've worked through the first few chapters of Ben Pierce's "Types and Programming Languages."

What I find very exciting is captured by Conal Elliott's recent talk about what FRP actually means. His emphasis is on getting the model correct: building a theoretical model of some concept in a few lines of code, then checking that model for correctness, and building software based on those models rather than in an ad-hoc manner.

Haskell's monads, monoids, applicatives, arrows and other types (which I don't really understand) seem to capture important abstractions which show up again and again in various domains (again, Conal's example is relevant).
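[Editorial aside: as one tiny concrete illustration of that recurrence (my example, not the poster's), the Monoid abstraction alone lets a single traversal accumulate a sum, a trace and a Boolean check at once, because monoids compose:

import Data.Monoid

-- One foldMap, three monoids at once: Sum, [], and Any.
summary :: (Sum Int, [String], Any)
summary = foldMap (\n -> (Sum n, [show n], Any (even n))) [1, 2, 3]
-- summary == (Sum 6, ["1","2","3"], Any True)
]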

I want to become fluent in these abstractions. I basically want to understand papers related to these technologies. I get the impression that I need a course in abstract math (groups, fields, rings, etc.) in order to form a foundation.

I should point out that I'm not interested in theory for the sake of theory. I'm ultimately interested in it because, as far as I can tell, it provides a very firm foundation for building practical software. If I can think of a domain in terms of a handful of rules, rather than 40k lines of code which includes database connections, network error handling, etc., I can build more interesting systems.

Rather than learn a set of techniques, I'm more interested in the foundations.

Any pointers to books or set of steps I should take? Unfortunately I can't pursue a full-on academic degree, since I'm already doing so for a different subject.

submitted by noclaf
[link] [14 comments]
Categories: Incoming News

Bill Atkins: GCD and Parallel Collections in Swift

Planet Haskell - Sun, 07/26/2015 - 7:40pm
One of the benefits of functional programming is that it's straightforward to parallelize operations. Common FP idioms like map, filter and reduce can be adapted so they run on many cores at once, letting you get instant parallelization wherever you find a bottleneck.

The benefits of these parallel combinators are huge. Wherever you find a bottleneck in your program, you can simply replace your call to map with a call to a parallel map and your code will be able to take advantage of all the cores on your system. On my eight-core system, for example, simply using a different function can theoretically yield an eight-fold speed boost. (Of course, there are a few reasons you might not see that theoretical speed improvement: namely, the overhead of creating threads, splitting up the work, synchronizing data between the threads, etc. Nevertheless, if you profile your code and focus on hotspots, you can see tremendous improvements with simple changes.)

Swift doesn't yet come with parallel collections functions, but we can build them ourselves, using Grand Central Dispatch:
// requires Swift 2.0 or higher
extension Array {
    public func pmap<T>(transform: (Element -> T)) -> [T] {
        guard !self.isEmpty else {
            return []
        }

        var result: [(Int, [T])] = []

        let group = dispatch_group_create()
        let lock = dispatch_queue_create("pmap queue for result", DISPATCH_QUEUE_SERIAL)

        let step: Int = max(1, self.count / NSProcessInfo.processInfo().activeProcessorCount) // step can never be 0

        for var stepIndex = 0; stepIndex * step < self.count; stepIndex++ {
            let capturedStepIndex = stepIndex

            var stepResult: [T] = []
            dispatch_group_async(group, dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0)) {
                for i in (capturedStepIndex * step)..<((capturedStepIndex + 1) * step) {
                    if i < self.count {
                        let mappedElement = transform(self[i])
                        stepResult += [mappedElement]
                    }
                }

                dispatch_group_async(group, lock) {
                    result += [(capturedStepIndex, stepResult)]
                }
            }
        }

        dispatch_group_wait(group, DISPATCH_TIME_FOREVER)

        return result.sort { $0.0 < $1.0 }.flatMap { $0.1 }
    }
}
pmap takes the same arguments as map but simply runs the function across all of your system's CPUs. Let's break the function down, step by step.
  1. In the case of an empty array, pmap returns early, since the overhead of splitting up the work and synchronizing the results is non-trivial. We might take this even further by falling back to a sequential map for arrays with a very small element count.
  2. Create a Grand Central Dispatch group that we can associate with the GCD blocks we'll run later on. Since all of these blocks will be in the same group, the invoking thread can wait for the group to be empty at the end of the function and know for certain that all of the background work has finished.
  3. Create a dedicated, sequential lock queue to control access to the result array. This is a common pattern in GCD: simulating a mutex with a sequential queue. Since a sequential queue will never run two blocks simultaneously, we can be sure that whatever operations we perform in this queue will be isolated from one another.
  4. Next, pmap breaks the array up into "steps", based on the host machine's CPU count (since this is read from NSProcessInfo, this function will automatically receive performance benefits when run on machines with more cores). Each step is dispatched to one of GCD's global background queues. In the invoking thread, this for loop will run very, very quickly, since all it does is add closures to background queues.
  5. The main for loop iterates through each "step," capturing the stepIndex in a local variable. If we don't do this, the closures passed to dispatch_group_async will all refer to the same storage location - as the for loop increments, all of the workers will see stepIndex increase by one and will all operate on the same step. By capturing the variable, each worker has its own copy of stepIndex, which never changes as the for loop proceeds.
  6. We calculate the start and end indices for this step. For each array element in that range, we call transform on the element and add it to this worker's local stepResult array. Because it's unlikely that the number of elements in the array will be divisible by a given machine's processor count, we check that i never goes beyond the end of the array, which could otherwise happen in the very last step.
  7. After an entire step has been processed, we add this worker's results to the master result array. Since the order in which workers will finish is nondeterministic, each element of the result array is a tuple containing the stepIndex and the transformed elements in that step's range. We use the lock queue to ensure that all changes to the result array are synchronized. 
      • Note that we only have to enter this critical section once for each core - an alternative implementation of pmap might create a single master result array of the same size as the input and set each element to its mapped result as it goes. But this would have to enter the critical section once for every array element, instead of just once for each CPU, generating more memory and processor contention and benefiting less from spatial locality. 
      • We use dispatch_group_async on the lock queue instead of plain dispatch_async because we want to be sure that the worker's changes have been applied to the result array before the group is considered finished. If we used dispatch_async outside the group, the scheduler could very easily finish all of the step blocks but leave one or more of these critical-section blocks unprocessed, leaving us with an incomplete result.
  8. On the original thread, we call dispatch_group_wait, which waits until all blocks in the group have completed. At this point, we know that all work has been done and all changes to the master results array have been made.
  9. The final line sorts the master array by stepIndex (since steps finish in a nondeterministic order) and then flattens the master array in that order.
To see how this works, let's create a simple profile function:
func profile<A>(desc: String, block: () -> A) -> Void {
    let start = NSDate().timeIntervalSince1970
    block()

    let duration = NSDate().timeIntervalSince1970 - start
    print("Profiler: completed \(desc) in \(duration * 1000)ms")
}
We'll test this out using a simple function called slowCalc, which adds a small sleep delay before each calculation, to ensure that each map operation does enough work (in production code, you should never sleep in code submitted to a GCD queue - this is purely for demonstration purposes). Without this little delay, the overhead of parallelization would be too great:

func slowCalc(x: Int) -> Int {
    NSThread.sleepForTimeInterval(0.1)
    return x * 2
}

let smallTestData: [Int] = [Int](0..<10)
let largeTestData = [Int](0..<300)

profile("large dataset (sequential)") { largeTestData.map { slowCalc($0) } }
profile("large dataset (parallel)") { largeTestData.pmap { slowCalc($0) } }
On my eight-core machine, this results in:
Profiler: completed large dataset (sequential) in 31239.7990226746ms
Profiler: completed large dataset (parallel) in 4005.04493713379ms

a 7.8-fold increase, which is about what you'd expect.
It's important to remember that if each iteration doesn't do enough work, the overhead of splitting up work, setting up worker blocks and synchronizing data access will far outweigh the time savings of parallelization. The amount of overhead involved can be surprising. This code is identical to the above, except that it doesn't add the extra delay.

profile("large dataset (sequential, no delay)") { largeTestData.map { $0 * 2 } }profile("large dataset (parallel, no delay)") { largeTestData.pmap { $0 * 2 } }
On my machine, it results in:
Profiler: completed large dataset (sequential, no delay) in 53.4629821777344ms
Profiler: completed large dataset (parallel, no delay) in 161.548852920532ms

The parallel version is three times slower than the sequential version! This is a really important consideration when using parallel collection functions:
  1. Make sure that each of your iterations does enough work to make parallelization worth it.
  2. Parallel collections are not a panacea - you can't just sprinkle them throughout your code and assume you'll get a performance boost. You still need to profile for hotspots, and it's important to focus on bottlenecks found through profiling, rather than hunches about what parts of your code are slowest.
  3. Modern CPUs are blindingly fast - an addition or multiplication operation is so fast that it's not worth parallelizing these, unless your array is very large.
You can use the same techniques to implement a parallel filter function:

// requires Swift 2.0 or higher
extension Array {
    public func pfilter(includeElement: Element -> Bool) -> [Element] {
        guard !self.isEmpty else {
            return []
        }

        var result: [(Int, [Element])] = []

        let group = dispatch_group_create()
        let lock = dispatch_queue_create("pmap queue for result", DISPATCH_QUEUE_SERIAL)

        let step: Int = max(1, self.count / NSProcessInfo.processInfo().activeProcessorCount) // step can never be 0

        for var stepIndex = 0; stepIndex * step < self.count; stepIndex++ {
            let capturedStepIndex = stepIndex

            var stepResult: [Element] = []
            dispatch_group_async(group, dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0)) {
                for i in (capturedStepIndex * step)..<((capturedStepIndex + 1) * step) {
                    if i < self.count && includeElement(self[i]) {
                        stepResult += [self[i]]
                    }
                }

                dispatch_group_async(group, lock) {
                    result += [(capturedStepIndex, stepResult)]
                }
            }
        }

        dispatch_group_wait(group, DISPATCH_TIME_FOREVER)

        return result.sort { $0.0 < $1.0 }.flatMap { $0.1 }
    }
}
This code is almost identical to pmap - only the logic in the inner for loop is different.

We can now start using these combinators together (again, we have to use a slowed-down predicate function in order to see the benefit from parallelization):

func slowTest(x: Int) -> Bool {
    NSThread.sleepForTimeInterval(0.1)
    return x % 2 == 0
}

profile("large dataset (sequential)") { largeTestData.filter { slowTest($0) }.map { slowCalc($0) } }
profile("large dataset (sequential filter, parallel map)") { largeTestData.filter { slowTest($0) }.pmap { slowCalc($0) } }
profile("large dataset (parallel filter, sequential map)") { largeTestData.pfilter { slowTest($0) }.map { slowCalc($0) } }
profile("large dataset (parallel filter, parallel map)") { largeTestData.pfilter { slowTest($0) }.pmap { slowCalc($0) } }
which results in:
Profiler: completed large dataset (sequential) in 1572.28803634644ms
Profiler: completed large dataset (sequential filter, parallel map) in 1153.90300750732ms
Profiler: completed large dataset (parallel filter, sequential map) in 642.061948776245ms
Profiler: completed large dataset (parallel filter, parallel map) in 231.456995010376ms
Using one parallel combinator gives a slight improvement; combining the two parallel operations gives us nearly a sevenfold performance improvement over the basic sequential implementation.
Here are some other directions to pursue:
  1. Implement parallel versions of find, any/exists and all. These are tricky because their contracts stipulate that processing stops as soon as they have a result. So you'll have to find some way to stop your parallel workers as soon as the function has its answer.
  2. Implement a parallel version of reduce. The benefit of doing this is that reduce is a "primitive" higher-order function - you can easily implement pmap and pfilter given a parallel reduce function.
  3. Generalize these functions to work on all collections (not just arrays), using Swift 2's protocol extensions.
Categories: Offsite Blogs

Ken T Takusagawa: [hqoeierf] Planck frequency pitch standard

Planet Haskell - Sun, 07/26/2015 - 6:56pm

We present a fanciful alternative to the musical pitch standard A440 by having some piano key note, not necessarily A4, take a frequency equal to the Planck frequency divided by an integer perfect power (equivalently, a period that is an integer perfect power multiple of the Planck time interval).

Let Pf = Planck frequency = 1/plancktime = 1/(5.3910604e-44 s) = 1.8549226e+43 Hz.

We first consider some possibilities that modify A = 440 Hz as little as possible. Sharpness or flatness is given in cents, where 100 cents = 1 semitone.
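(For reference, the offset in cents between a frequency f and a reference f0 is 1200 * log2(f / f0).)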

F3 = Pf / 725926^7 = 174.6141 Hz, or A = 440.0000 Hz, offset = -0.00003 cents
G3 = Pf / 714044^7 = 195.9977 Hz, or A = 440.0000 Hz, offset = -0.00013 cents
E3 = Pf / 135337^8 = 164.8137 Hz, or A = 439.9999 Hz, offset = -0.00030 cents
G3 = Pf / 132437^8 = 195.9978 Hz, or A = 440.0001 Hz, offset = 0.00045 cents
D#5 = Pf / 31416^9 = 622.2542 Hz, or A = 440.0001 Hz, offset = 0.00053 cents
A#3 = Pf / 12305^10 = 233.0825 Hz, or A = 440.0011 Hz, offset = 0.00442 cents
C#5 = Pf / 1310^13 = 554.3690 Hz, or A = 440.0030 Hz, offset = 0.01176 cents
A#3 = Pf / 360^16 = 233.0697 Hz, or A = 439.9770 Hz, offset = -0.09058 cents
A#1 = Pf / 77^22 = 58.2814 Hz, or A = 440.0824 Hz, offset = 0.32419 cents
D#4 = Pf / 50^24 = 311.2044 Hz, or A = 440.1095 Hz, offset = 0.43060 cents
E1 = Pf / 40^26 = 41.1876 Hz, or A = 439.8303 Hz, offset = -0.66769 cents
B5 = Pf / 22^30 = 990.0232 Hz, or A = 441.0052 Hz, offset = 3.95060 cents
F#3 = Pf / 10^41 = 185.4923 Hz, or A = 441.1774 Hz, offset = 4.62660 cents
A7 = Pf / 7^47 = 3537.6749 Hz, or A = 442.2094 Hz, offset = 8.67126 cents
G6 = Pf / 3^84 = 1549.3174 Hz, or A = 434.7625 Hz, offset = -20.73121 cents
G#7 = Pf / 2^132 = 3406.9548 Hz, or A = 451.1929 Hz, offset = 43.48887 cents

Next some modifications of other pitch standards, used by continental European orchestras.

Modifications of A = 441 Hz:

C#6 = Pf / 106614^8 = 1111.2503 Hz, or A = 441.0000 Hz, offset = -0.00007 cents
F2 = Pf / 39067^9 = 87.5055 Hz, or A = 441.0000 Hz, offset = -0.00011 cents
G#2 = Pf / 38322^9 = 104.0620 Hz, or A = 440.9995 Hz, offset = -0.00184 cents
G1 = Pf / 6022^11 = 49.1109 Hz, or A = 441.0006 Hz, offset = 0.00240 cents
B5 = Pf / 22^30 = 990.0232 Hz, or A = 441.0052 Hz, offset = 0.02044 cents
F#3 = Pf / 10^41 = 185.4923 Hz, or A = 441.1774 Hz, offset = 0.69644 cents
A7 = Pf / 7^47 = 3537.6749 Hz, or A = 442.2094 Hz, offset = 4.74110 cents
E7 = Pf / 5^57 = 2673.2253 Hz, or A = 446.0410 Hz, offset = 19.67702 cents
G6 = Pf / 3^84 = 1549.3174 Hz, or A = 434.7625 Hz, offset = -24.66137 cents
G#7 = Pf / 2^132 = 3406.9548 Hz, or A = 451.1929 Hz, offset = 39.55871 cents

Modifications of A = 442 Hz:

D#6 = Pf / 547981^7 = 1250.1649 Hz, or A = 442.0000 Hz, offset = 0.00014 cents
G6 = Pf / 530189^7 = 1575.1097 Hz, or A = 442.0002 Hz, offset = 0.00086 cents
G#6 = Pf / 525832^7 = 1668.7709 Hz, or A = 442.0003 Hz, offset = 0.00116 cents
F#4 = Pf / 122256^8 = 371.6759 Hz, or A = 441.9996 Hz, offset = -0.00170 cents
A5 = Pf / 30214^9 = 883.9990 Hz, or A = 441.9995 Hz, offset = -0.00194 cents
F#4 = Pf / 11744^10 = 371.6767 Hz, or A = 442.0006 Hz, offset = 0.00242 cents
A7 = Pf / 217^17 = 3535.9843 Hz, or A = 441.9980 Hz, offset = -0.00769 cents
D2 = Pf / 151^19 = 73.7503 Hz, or A = 442.0024 Hz, offset = 0.00939 cents
A2 = Pf / 62^23 = 110.4885 Hz, or A = 441.9539 Hz, offset = -0.18072 cents
D#3 = Pf / 38^26 = 156.2976 Hz, or A = 442.0764 Hz, offset = 0.29903 cents
D#4 = Pf / 37^26 = 312.6662 Hz, or A = 442.1768 Hz, offset = 0.69244 cents
A7 = Pf / 7^47 = 3537.6749 Hz, or A = 442.2094 Hz, offset = 0.81985 cents
E7 = Pf / 5^57 = 2673.2253 Hz, or A = 446.0410 Hz, offset = 15.75576 cents
G6 = Pf / 3^84 = 1549.3174 Hz, or A = 434.7625 Hz, offset = -28.58262 cents
G#7 = Pf / 2^132 = 3406.9548 Hz, or A = 451.1929 Hz, offset = 35.63745 cents

Modifications of A = 443 Hz:

F#5 = Pf / 590036^7 = 745.0342 Hz, or A = 443.0000 Hz, offset = 0.00003 cents
C7 = Pf / 508595^7 = 2107.2749 Hz, or A = 443.0000 Hz, offset = -0.00007 cents
F7 = Pf / 488038^7 = 2812.8743 Hz, or A = 442.9999 Hz, offset = -0.00020 cents
B2 = Pf / 140193^8 = 124.3126 Hz, or A = 442.9998 Hz, offset = -0.00093 cents
A5 = Pf / 109676^8 = 885.9985 Hz, or A = 442.9992 Hz, offset = -0.00296 cents
B7 = Pf / 25564^9 = 3978.0160 Hz, or A = 443.0012 Hz, offset = 0.00456 cents
G#1 = Pf / 5988^11 = 52.2668 Hz, or A = 442.9982 Hz, offset = -0.00722 cents
B1 = Pf / 391^16 = 62.1581 Hz, or A = 443.0125 Hz, offset = 0.04895 cents
A6 = Pf / 226^17 = 1772.0760 Hz, or A = 443.0190 Hz, offset = 0.07422 cents
F7 = Pf / 163^18 = 2811.5701 Hz, or A = 442.7946 Hz, offset = -0.80308 cents
A#3 = Pf / 60^23 = 234.8805 Hz, or A = 443.3954 Hz, offset = 1.54462 cents
E6 = Pf / 35^26 = 1326.0401 Hz, or A = 442.5128 Hz, offset = -1.90507 cents
E2 = Pf / 34^27 = 82.8696 Hz, or A = 442.4704 Hz, offset = -2.07100 cents
C#2 = Pf / 18^33 = 69.8768 Hz, or A = 443.6902 Hz, offset = 2.69500 cents
A7 = Pf / 7^47 = 3537.6749 Hz, or A = 442.2094 Hz, offset = -3.09255 cents
E7 = Pf / 5^57 = 2673.2253 Hz, or A = 446.0410 Hz, offset = 11.84337 cents
G#7 = Pf / 2^132 = 3406.9548 Hz, or A = 451.1929 Hz, offset = 31.72506 cents

Planck time is not known to high precision, owing to uncertainty in the gravitational constant G. Fortunately, and coincidentally, musical instruments are not tuned to greater than 7 significant digits of precision either.

Source code in Haskell. The algorithm is not clever; it simply brute-forces every perfect integer power multiple of Planck time, with base less than 1 million, within the range of an 88-key piano. The code can also base the fundamental frequency on the hydrogen 21 cm line or on the cesium frequency used for atomic clocks.
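[Editorial aside: the linked source is not reproduced here, but a minimal sketch of the described brute force, a reconstruction rather than the author's code, might look like this:

-- Untested reconstruction of the described search.
planckFrequency :: Double
planckFrequency = 1 / 5.3910604e-44  -- Hz

-- An 88-key piano spans A0 (27.5 Hz) to C8 (roughly 4186 Hz).
pianoLo, pianoHi :: Double
pianoLo = 27.5
pianoHi = 4186.01

-- Offset in cents from the nearest equal-tempered pitch, with A4 = 440 Hz.
centsOffset :: Double -> Double
centsOffset f = c - 100 * fromIntegral (round (c / 100) :: Integer)
  where c = 1200 * logBase 2 (f / 440)

-- Every frequency Pf / b^k with base b < 1000000 that lands on the piano,
-- paired with its offset from the nearest tempered pitch.
candidates :: [(Integer, Integer, Double, Double)]
candidates =
  [ (b, k, f, centsOffset f)
  | k <- [2 .. 150]
  , let lo = max 2      (ceiling ((planckFrequency / pianoHi) ** recip (fromIntegral k)))
        hi = min 999999 (floor   ((planckFrequency / pianoLo) ** recip (fromIntegral k)))
  , b <- [lo .. hi]
  , let f = planckFrequency / fromIntegral b ^^ k
  ]

main :: IO ()
main = mapM_ print [c | c@(_, _, _, off) <- candidates, abs off < 0.001]

Rather than scanning every base for every exponent, the comprehension derives, for each exponent k, the range of bases whose result lands inside the piano's compass.]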

Inspired by Scientific pitch, which set C4 = 2^8 Hz = 256 Hz, or A = 430.538964609902 Hz, offset = -37.631656229590796 cents.

Categories: Offsite Blogs