News aggregator

Ken T Takusagawa: [hqoeierf] Planck frequency pitch standard

Planet Haskell - Sun, 07/26/2015 - 6:56pm

We present a fanciful alternative to the musical pitch standard A440 by having some piano key note, not necessarily A4, have a period that is a perfect integer power multiple of the Planck time interval (equivalently, a frequency that is the Planck frequency divided by a perfect integer power).

Let Pf = Planck frequency = 1/plancktime = 1/(5.3910604e-44 s) = 1.8549226e+43 Hz.

We first consider some possibilities of modifying A = 440 Hz as little as possible.  Sharpness or flatness is given in cents, where 100 cents = 1 semitone.

F3 = Pf / 725926^7 = 174.6141 Hz, or A = 440.0000 Hz, offset = -0.00003 cents
G3 = Pf / 714044^7 = 195.9977 Hz, or A = 440.0000 Hz, offset = -0.00013 cents
E3 = Pf / 135337^8 = 164.8137 Hz, or A = 439.9999 Hz, offset = -0.00030 cents
G3 = Pf / 132437^8 = 195.9978 Hz, or A = 440.0001 Hz, offset = 0.00045 cents
D#5 = Pf / 31416^9 = 622.2542 Hz, or A = 440.0001 Hz, offset = 0.00053 cents
A#3 = Pf / 12305^10 = 233.0825 Hz, or A = 440.0011 Hz, offset = 0.00442 cents
C#5 = Pf / 1310^13 = 554.3690 Hz, or A = 440.0030 Hz, offset = 0.01176 cents
A#3 = Pf / 360^16 = 233.0697 Hz, or A = 439.9770 Hz, offset = -0.09058 cents
A#1 = Pf / 77^22 = 58.2814 Hz, or A = 440.0824 Hz, offset = 0.32419 cents
D#4 = Pf / 50^24 = 311.2044 Hz, or A = 440.1095 Hz, offset = 0.43060 cents
E1 = Pf / 40^26 = 41.1876 Hz, or A = 439.8303 Hz, offset = -0.66769 cents
B5 = Pf / 22^30 = 990.0232 Hz, or A = 441.0052 Hz, offset = 3.95060 cents
F#3 = Pf / 10^41 = 185.4923 Hz, or A = 441.1774 Hz, offset = 4.62660 cents
A7 = Pf / 7^47 = 3537.6749 Hz, or A = 442.2094 Hz, offset = 8.67126 cents
G6 = Pf / 3^84 = 1549.3174 Hz, or A = 434.7625 Hz, offset = -20.73121 cents
G#7 = Pf / 2^132 = 3406.9548 Hz, or A = 451.1929 Hz, offset = 43.48887 cents

Next, some modifications of other pitch standards used by continental European orchestras.

Modifications of A = 441 Hz:

C#6 = Pf / 106614^8 = 1111.2503 Hz, or A = 441.0000 Hz, offset = -0.00007 cents
F2 = Pf / 39067^9 = 87.5055 Hz, or A = 441.0000 Hz, offset = -0.00011 cents
G#2 = Pf / 38322^9 = 104.0620 Hz, or A = 440.9995 Hz, offset = -0.00184 cents
G1 = Pf / 6022^11 = 49.1109 Hz, or A = 441.0006 Hz, offset = 0.00240 cents
B5 = Pf / 22^30 = 990.0232 Hz, or A = 441.0052 Hz, offset = 0.02044 cents
F#3 = Pf / 10^41 = 185.4923 Hz, or A = 441.1774 Hz, offset = 0.69644 cents
A7 = Pf / 7^47 = 3537.6749 Hz, or A = 442.2094 Hz, offset = 4.74110 cents
E7 = Pf / 5^57 = 2673.2253 Hz, or A = 446.0410 Hz, offset = 19.67702 cents
G6 = Pf / 3^84 = 1549.3174 Hz, or A = 434.7625 Hz, offset = -24.66137 cents
G#7 = Pf / 2^132 = 3406.9548 Hz, or A = 451.1929 Hz, offset = 39.55871 cents

Modifications of A = 442 Hz:

D#6 = Pf / 547981^7 = 1250.1649 Hz, or A = 442.0000 Hz, offset = 0.00014 cents
G6 = Pf / 530189^7 = 1575.1097 Hz, or A = 442.0002 Hz, offset = 0.00086 cents
G#6 = Pf / 525832^7 = 1668.7709 Hz, or A = 442.0003 Hz, offset = 0.00116 cents
F#4 = Pf / 122256^8 = 371.6759 Hz, or A = 441.9996 Hz, offset = -0.00170 cents
A5 = Pf / 30214^9 = 883.9990 Hz, or A = 441.9995 Hz, offset = -0.00194 cents
F#4 = Pf / 11744^10 = 371.6767 Hz, or A = 442.0006 Hz, offset = 0.00242 cents
A7 = Pf / 217^17 = 3535.9843 Hz, or A = 441.9980 Hz, offset = -0.00769 cents
D2 = Pf / 151^19 = 73.7503 Hz, or A = 442.0024 Hz, offset = 0.00939 cents
A2 = Pf / 62^23 = 110.4885 Hz, or A = 441.9539 Hz, offset = -0.18072 cents
D#3 = Pf / 38^26 = 156.2976 Hz, or A = 442.0764 Hz, offset = 0.29903 cents
D#4 = Pf / 37^26 = 312.6662 Hz, or A = 442.1768 Hz, offset = 0.69244 cents
A7 = Pf / 7^47 = 3537.6749 Hz, or A = 442.2094 Hz, offset = 0.81985 cents
E7 = Pf / 5^57 = 2673.2253 Hz, or A = 446.0410 Hz, offset = 15.75576 cents
G6 = Pf / 3^84 = 1549.3174 Hz, or A = 434.7625 Hz, offset = -28.58262 cents
G#7 = Pf / 2^132 = 3406.9548 Hz, or A = 451.1929 Hz, offset = 35.63745 cents

Modifications of A = 443 Hz:

F#5 = Pf / 590036^7 = 745.0342 Hz, or A = 443.0000 Hz, offset = 0.00003 cents
C7 = Pf / 508595^7 = 2107.2749 Hz, or A = 443.0000 Hz, offset = -0.00007 cents
F7 = Pf / 488038^7 = 2812.8743 Hz, or A = 442.9999 Hz, offset = -0.00020 cents
B2 = Pf / 140193^8 = 124.3126 Hz, or A = 442.9998 Hz, offset = -0.00093 cents
A5 = Pf / 109676^8 = 885.9985 Hz, or A = 442.9992 Hz, offset = -0.00296 cents
B7 = Pf / 25564^9 = 3978.0160 Hz, or A = 443.0012 Hz, offset = 0.00456 cents
G#1 = Pf / 5988^11 = 52.2668 Hz, or A = 442.9982 Hz, offset = -0.00722 cents
B1 = Pf / 391^16 = 62.1581 Hz, or A = 443.0125 Hz, offset = 0.04895 cents
A6 = Pf / 226^17 = 1772.0760 Hz, or A = 443.0190 Hz, offset = 0.07422 cents
F7 = Pf / 163^18 = 2811.5701 Hz, or A = 442.7946 Hz, offset = -0.80308 cents
A#3 = Pf / 60^23 = 234.8805 Hz, or A = 443.3954 Hz, offset = 1.54462 cents
E6 = Pf / 35^26 = 1326.0401 Hz, or A = 442.5128 Hz, offset = -1.90507 cents
E2 = Pf / 34^27 = 82.8696 Hz, or A = 442.4704 Hz, offset = -2.07100 cents
C#2 = Pf / 18^33 = 69.8768 Hz, or A = 443.6902 Hz, offset = 2.69500 cents
A7 = Pf / 7^47 = 3537.6749 Hz, or A = 442.2094 Hz, offset = -3.09255 cents
E7 = Pf / 5^57 = 2673.2253 Hz, or A = 446.0410 Hz, offset = 11.84337 cents
G#7 = Pf / 2^132 = 3406.9548 Hz, or A = 451.1929 Hz, offset = 31.72506 cents

Planck time is not known to high precision due to uncertainty in the gravitational constant G.  Fortunately (and coincidentally), musical instruments are not tuned to more than 7 significant digits of precision either.

Source code in Haskell. The algorithm is not clever; it simply brute forces every perfect integer power multiple of Planck time, with base less than 1 million, that falls within the range of an 88-key piano.  The code can also base the fundamental frequency off the hydrogen 21 cm line or off the frequency of cesium used for atomic clocks.
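The search loop can be sketched as follows (this is my reconstruction, not the author's actual code; `candidatesFor` and `centsOffset` are hypothetical names). For each base b, only a narrow window of exponents can land Pf / b^k inside the piano range, so we compute that window directly instead of trying every exponent:

```haskell
planckFrequency :: Double
planckFrequency = 1 / 5.3910604e-44  -- Hz, about 1.8549226e43

-- 88-key piano range: A0 = 27.5 Hz up to C8, about 4186 Hz.
inPianoRange :: Double -> Bool
inPianoRange f = f >= 27.5 && f <= 4186.01

-- All exponents k such that planckFrequency / b^k lands on the piano.
candidatesFor :: Integer -> [(Integer, Integer, Double)]
candidatesFor b =
  [ (b, k, f)
  | k <- [kLo .. kHi]
  , let f = planckFrequency / (fromIntegral b ^^ k)
  , inPianoRange f
  ]
  where
    b'  = fromIntegral b :: Double
    kLo = floor   (logBase b' (planckFrequency / 4186.01))
    kHi = ceiling (logBase b' (planckFrequency / 27.5))

-- The full brute-force search: every base below one million.
candidates :: [(Integer, Integer, Double)]
candidates = concatMap candidatesFor [2 .. 999999]

-- Offset in cents from the nearest equal-tempered key (A4 = 440 Hz).
centsOffset :: Double -> Double
centsOffset f =
  let cents = 1200 * logBase 2 (f / 440)
  in cents - 100 * fromIntegral (round (cents / 100) :: Integer)
```

For instance, `candidatesFor 725926` recovers the F3 = Pf / 725926^7 entry at the top of the table.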

Inspired by Scientific pitch, which set C4 = 2^8 Hz = 256 Hz, or A = 430.538964609902 Hz, offset = -37.631656229590796 cents.

Categories: Offsite Blogs

Roman Cheplyaka: Better Yaml Parsing

Planet Haskell - Sun, 07/26/2015 - 2:00pm

Michael Snoyman’s yaml package reuses aeson’s interface (the Value type and ToJSON & FromJSON classes) to specify how datatypes should be serialized and deserialized.

It’s not a secret that aeson’s primary goal is raw performance. This goal may be at odds with the goal of YAML: being human readable and writable.

In this article, I’ll explain how a better way of parsing human-written YAML may work. The second direction – serializing to YAML – also needs attention, but I’ll leave it out for now.

Example: Item

To demonstrate where the approach taken by the yaml package is lacking, I’ll use the following running example.

{-# LANGUAGE OverloadedStrings #-}

import Data.Aeson (FromJSON(..), withObject, withText, (.:), (.:?), (.!=))
import Data.Yaml (decodeEither)
import Data.Text (Text)
import Control.Applicative

data Item = Item
  Text -- title
  Int  -- quantity
  deriving Show

The fully-specified Item in YAML may look like this:

title: Shampoo
quantity: 100

In our application, most of the time the quantity will be 1, so we’ll allow two alternative simplified forms. In the first one, the quantity field is omitted and defaulted to 1:

title: Shampoo

In the second form, the object will be flattened to a bare string:

Shampoo
Here’s a reasonably idiomatic way to write an aeson parser for this format:

defaultQuantity :: Int
defaultQuantity = 1

instance FromJSON Item where
  parseJSON v = parseObject v <|> parseString v
    where
      parseObject = withObject "object" $ \o ->
        Item
          <$> o .: "title"
          <*> o .:? "quantity" .!= defaultQuantity
      parseString = withText "string" $ \t ->
        return $ Item t defaultQuantity

Shortcomings of FromJSON

The main requirement for a format written by humans is error detection and reporting.

Let’s see how the parser we’ve defined copes with human errors.

> decodeEither "{title: Shampoo, quanity: 2}" :: Either String Item
Right (Item "Shampoo" 1)

An unexpected result, isn’t it? If you look closer, you’ll notice that the word quantity is misspelled. But our parser didn’t have any problem with that. Such a typo may go unnoticed for a long time and quietly affect how your application works.

For another example, let’s say I am a returning user who vaguely remembers the YAML format for Items. I might have written something like

*Main Data.ByteString.Char8> decodeEither "{name: Shampoo, quanity: 2}" :: Either String Item
Left "when expecting a string, encountered Object instead"

“That’s weird. I could swear this app accepted some form of an object where you could specify the quantity. But apparently I’m wrong, it only accepts simple strings.”

How to fix it

Check for unrecognized fields

To address the first problem, we need to know the set of acceptable keys. This set is impossible to extract from a FromJSON parser, because it is buried inside an opaque function.

Let’s change parseJSON to have type FieldParser a, where FieldParser is an applicative functor that we’ll define shortly. The values of FieldParser can be constructed with combinators:

field
  :: Text     -- ^ field name
  -> Parser a -- ^ value parser
  -> FieldParser a

optField
  :: Text     -- ^ field name
  -> Parser a -- ^ value parser
  -> FieldParser (Maybe a)

The combinators are analogous to the ones I described in JSON validation combinators.

So how do we implement FieldParser? One (“initial”) way is to use a free applicative functor and later interpret it in two ways: as a FromJSON-like parser and as a set of valid keys.

But there’s another (“final”) way which is to compose the applicative functor from components, one per required semantics. The semantics of FromJSON is given by ReaderT Object (Either ParseError). The semantics of a set of valid keys is given by Constant (HashMap Text ()). We take the product of these semantics to get the implementation of FieldParser:

newtype FieldParser a = FieldParser
  (Product
    (ReaderT Object (Either ParseError))
    (Constant (HashMap Text ()))
    a)

Notice how I used HashMap Text () instead of HashSet Text? This is a trick to be able to subtract this from the object (represented as HashMap Text Value) later.

Another benefit of this change is that it’s no longer necessary to give a name to the object (often called o), which I’ve always found awkward.
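To make the idea concrete, here is a self-contained toy version of this design (my names and simplifications, not the article's actual implementation: plain `Map String`/`Set String` stand in for aeson's `Object` and the `HashMap Text ()` trick, and the product of the two semantics is inlined as a pair of record fields):

```haskell
import qualified Data.Map as M
import qualified Data.Set as S

-- A toy Value standing in for aeson's.
data Value = VString String | VInt Int
  deriving Show

-- A field parser carries both semantics at once: how to read the object,
-- and which keys it recognizes.
data FieldParser a = FieldParser
  { runFields :: M.Map String Value -> Either String a
  , keySet    :: S.Set String
  }

instance Functor FieldParser where
  fmap f (FieldParser r ks) = FieldParser (fmap f . r) ks

instance Applicative FieldParser where
  pure x = FieldParser (const (Right x)) S.empty
  FieldParser rf kf <*> FieldParser rx kx =
    FieldParser (\o -> rf o <*> rx o) (S.union kf kx)

field :: String -> (Value -> Either String a) -> FieldParser a
field name p = FieldParser
  (\o -> maybe (Left ("missing field: " ++ name)) p (M.lookup name o))
  (S.singleton name)

optField :: String -> (Value -> Either String a) -> FieldParser (Maybe a)
optField name p = FieldParser
  (\o -> traverse p (M.lookup name o))
  (S.singleton name)

-- Running a parser first rejects any key it does not know about.
runChecked :: FieldParser a -> M.Map String Value -> Either String a
runChecked fp o =
  case S.toList (S.difference (M.keysSet o) (keySet fp)) of
    []  -> runFields fp o
    bad -> Left ("unrecognized fields: " ++ unwords bad)

-- The running Item example, with quantity defaulting to 1:
itemP :: FieldParser (String, Int)
itemP =
  (,) <$> field "title" asString
      <*> (maybe 1 id <$> optField "quantity" asInt)
  where
    asString (VString s) = Right s
    asString _           = Left "expected a string"
    asInt (VInt n) = Right n
    asInt _        = Left "expected an int"
```

Running `itemP` against an object with a misspelled `quanity` key now fails with “unrecognized fields: quanity” instead of silently succeeding.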

Improve error messages

Aeson’s approach to error messages is straightforward: it tries every alternative in turn and, if none succeeds, it returns the last error message.

There are two approaches to get a more sophisticated error reporting:

  1. Collect errors from all alternatives and somehow merge them. Each error would carry its level of “matching”. An alternative that matched the object but failed at key lookup matches better than the one that expected a string instead of an object. Thus the error from the first alternative would prevail. If there are multiple errors on the same level, we should try to merge them. For instance, if we expect an object or a string but got an array, then the error message should mention both object and string as valid options.

  2. Limited backtracking. This is what Parsec does. In our example, when it was determined that the object was “at least somewhat” matched by the first alternative, the second one would have been abandoned. This approach is rather restrictive: if you have two alternatives each expecting an object, the second one will never fire. The benefit of this approach is its efficiency (sometimes real, sometimes imaginary), since we never explore more than one alternative deeply.

It turns out, when parsing Values, we can remove some of the backtracking without imposing any restrictions. This is because we can “factor out” common parser prefixes. If we have two parsers that expect an object, this is equivalent to having a single parser expecting an object. To see this, let’s represent a parser as a record with a field per JSON “type”:

data Parser a = Parser
  { parseString :: Maybe (Text -> Either ParseError a)
  , parseArray  :: Maybe (Vector Value -> Either ParseError a)
  , parseObject :: Maybe (HashMap Text Value -> Either ParseError a)
  , ...
  }

Writing a function Parser a -> Parser a -> Parser a which merges individual fields is then a simple exercise.

Why is every field wrapped in Maybe? How’s Nothing different from Just $ const $ Left "..."? This is so that we can see which JSON types are valid and give a better error message. If we tried to parse a JSON number as an Item, the error message would say that it expected an object or a string, because only those fields of the parser would be Just values.
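A cut-down sketch of that record and its merge (my names; only two components are shown, and a plain `String` error type stands in for `ParseError`):

```haskell
import Control.Applicative ((<|>))

-- One Maybe component per input "type".
data Parser a = Parser
  { onString :: Maybe (String -> Either String a)
  , onInt    :: Maybe (Int    -> Either String a)
  }

-- Merging two parsers: where both handle the same type, try the first
-- component and fall back to the second on error; otherwise keep whichever
-- component exists.
orParser :: Parser a -> Parser a -> Parser a
orParser p q = Parser
  { onString = merge (onString p) (onString q)
  , onInt    = merge (onInt p) (onInt q)
  }
  where
    merge (Just f) (Just g) = Just (\x -> either (const (g x)) Right (f x))
    merge a b               = a <|> b

-- Which input types a parser accepts: exactly the Just components.
-- This is what a good error message reports when nothing matches.
expected :: Parser a -> [String]
expected p =
  [ "string" | Just _ <- [onString p] ] ++
  [ "number" | Just _ <- [onInt p] ]

stringP :: Parser String
stringP = Parser { onString = Just Right, onInt = Nothing }

showP :: Parser String
showP = Parser { onString = Just Right, onInt = Just (Right . show) }

failP :: Parser String
failP = Parser { onString = Just (const (Left "nope")), onInt = Nothing }
```

Because `expected` merely inspects which components are `Just`, a merged parser can report every type it accepts, as described above.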


As you might notice, the Parser type above can be mechanically derived from the Value datatype itself. In my actual implementation, I use generics-sop with great success to reduce the boilerplate. To give you an idea, here’s the real definition of the Parser type:

newtype ParserComponent a fs =
  ParserComponent (Maybe (NP I fs -> Either ParseError a))

newtype Parser a = Parser (NP (ParserComponent a) (Code Value))

We can then apply a Parser to a Value using this function.

I’ve implemented this YAML parsing layer for our needs at Signal Vine. We are happy to share the code, in case someone is interested in maintaining this as an open source project.

Categories: Offsite Blogs

fun fact: value of truncate -Infinity depends on optimisation level

haskell-cafe - Sun, 07/26/2015 - 1:45pm
Just for general amusement - I was debugging very strange behaviour in a straightforward deterministic program, where the output value seemed to depend on ghc's optimization level. Well, it turned out that I (accidentally) computed truncate (logBase 2 0) :: Int, and that's 0 with -O0, but -9223372036854775808 with -O2.
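A minimal reproduction (the per-optimisation-level results are the ones reported above; only the infinity itself is guaranteed):

```haskell
-- logBase 2 0 is negative infinity; what 'truncate' turns that into is
-- the part that varied with optimisation level in the report.
badInput :: Double
badInput = logBase 2 0

-- Reported as 0 under -O0 and -9223372036854775808 (minBound :: Int)
-- under -O2, so this value should not be relied upon.
truncated :: Int
truncated = truncate badInput
```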
Categories: Offsite Discussion

My first Haskell project, a Ruby Marshal parser, feedback wanted

Haskell on Reddit - Sun, 07/26/2015 - 9:17am

Hey all

In my spare time over the last month or two I've been working on writing my first Haskell library and would greatly appreciate feedback from other Haskellers before uploading it to Hackage.

It's a small library that uses cereal to parse a subset of Ruby objects serialised with Ruby's Marshal serialisation format. The reason for doing so is to enable me -- and maybe others too -- to gradually migrate Rails applications from Ruby over to Haskell.

Any feedback greatly appreciated! Cheers!

submitted by unsymbol
[link] [14 comments]
Categories: Incoming News

Proposal: Add missing Monoid for ZipList

libraries list - Sat, 07/25/2015 - 9:50pm

There's a Monoid that matches what the Applicative for ZipList does that seems to be missing:

instance Monoid a => Monoid (ZipList a) where
  mempty = pure mempty
  mappend = liftA2 mappend

It's been brought up before. Not only is it useful when it's the Monoid you want, but it serves an educational purpose for highlighting the relationship between Monoid and Applicative as well. Are there any good reasons not to have it? I'd like to limit discussion to two weeks.
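To see the zipwise behaviour, here is the proposed instance sketched on a local wrapper (my `ZL` wrapper avoids defining the orphan instance here, and the `Semigroup` instance is required by modern base):

```haskell
import Control.Applicative (ZipList(..), liftA2)
import Data.Monoid (Sum(..))

newtype ZL a = ZL { getZL :: ZipList a }

instance Monoid a => Semigroup (ZL a) where
  ZL a <> ZL b = ZL (liftA2 mappend a b)

instance Monoid a => Monoid (ZL a) where
  mempty = ZL (pure mempty)  -- an infinite stream of mempty values

-- Combine two lists elementwise with mappend, truncating at the shorter.
zipConcat :: Monoid a => [a] -> [a] -> [a]
zipConcat xs ys = getZipList (getZL (ZL (ZipList xs) <> ZL (ZipList ys)))
```

Note that `mempty` is an infinite stream of `mempty` values, which is exactly the identity for zipping.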
Categories: Offsite Discussion

Is it possible to run IO action in the QuickCheck Gen monad?

Haskell on Reddit - Sat, 07/25/2015 - 8:10pm

If I have an IO action that generates some random value of type T, is there a way to convert that into the implementation of arbitrary in Arbitrary T?

submitted by enzozhc
[link] [12 comments]
Categories: Incoming News

Why does this work?

Haskell on Reddit - Sat, 07/25/2015 - 11:47am

I've got the following function declaration:

grabRange :: Ord a => a -> a -> [a] -> [a]
grabRange l u = filter (\x -> l < x && x < u)

And it works with:

grabRange 3 12 [1,2,3,4,5,67,7,8,9,67,6,54,4,56,6,7,7,5,4,12,11]
[4,5,7,8,9,6,4,6,7,7,5,4,11]

It doesnt make sense in my limited knowledge of currying:

a -> (a -> ([a]???? -> ([a])))

How come I don't have to name the [a] and where does it get the x from?

Edit: Sorry for the late response. Compliments to the Haskell community. Never have I gotten this many, and this elaborate, answers to my questions!

submitted by schrodingers_paradox
[link] [20 comments]
Categories: Incoming News

A problem with the presentation of monads

Haskell on Reddit - Sat, 07/25/2015 - 4:18am

Basically, they aren't clear enough.


  • There is no syntax indicating the code in a do block is being transformed, since it is being hidden.

  • After desugaring a do block, you find uses of (>>=), which is a type class function.

  • Virtually no modules document their implementation of bind.

The consequence is that in order to understand what's actually happening in a do block, you need to read the source code for a module.
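A small example of the gap being described (mine, not the post's): the two definitions below are the same program, but only the second shows the (>>=) whose behaviour the Maybe instance defines:

```haskell
-- The do block…
action :: Maybe Int
action = do
  x <- Just 1
  y <- Just 2
  return (x + y)

-- …and the binds it desugars to. What (>>=) does here is decided entirely
-- by Maybe's Monad instance, which is the documentation gap the post is about.
action' :: Maybe Int
action' = Just 1 >>= \x -> Just 2 >>= \y -> return (x + y)
```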

In my mind, this is a large factor in making monads seem magical.

Simple solution:

  • Document your bind function.

This should give a sense of how specific monads operate internally.

I think authors should be encouraged to document more class functions in general. Unfortunately, I don't know a natural way to accomplish this outside of a culture shift.


  • Still no obvious indication of a code transformation in a do block.
submitted by fruitbooploops
[link] [44 comments]
Categories: Incoming News

Dimitri Sabadie: Introducing Luminance, a safer OpenGL API

Planet Haskell - Sat, 07/25/2015 - 4:03am

A few weeks ago, I was writing Haskell lines for a project I had been working on for a very long time. That project was a 3D engine. There are several posts about it on my blog; feel free to check them out.

The thing is… Times change. As time passes, I become more mature about what I do in the Haskell community. I’m a demoscener, and I need to be productive. Writing a whole 3D engine for such a purpose is a good thing, but I was going round and round in circles, changing the whole architecture every now and then. I couldn’t make up my mind, and I couldn’t help it. So I decided to stop working on that, and move on.

If you are a Haskell developer, you might already know Edward Kmett. Each talk with him is always interesting and I always end up with new ideas and new knowledge. Sometimes, we talk about graphics, and sometimes, he tells me that writing a 3D engine from scratch and releasing it to the community is not a very good move.

I’ve been thinking about that, and in the end, I agree with Edward. There are two reasons that make such a project hard and not very interesting for a community:

  1. a good “3D engine” is a specialized one – for FPS games, for simulations, for sport games, for animation, etc. If we know what the player will do, we can optimize a lot of stuff and put less detail into unimportant parts of the visuals. For instance, some games don’t really care about skies, so they can use simple skyboxes with nice textures to bring a nice touch of atmosphere without destroying performance. In a game like a flight simulator, skyboxes have to be avoided in favour of other techniques to provide a correct experience to players. Even though an engine could provide both techniques, apply that problem to almost everything – i.e. space partitioning, for instance – and you end up with a nightmare to code ;
  2. an engine can be a very bloated piece of software – because of point 1. It’s very hard to keep an engine up to date regarding technologies, and to make everyone happy, especially if the engine targets a large audience of people – i.e. hackage.

Point 2 might be strange to you, but that’s often the case. Building a flexible 3D engine is a very hard and non-trivial task. Because of point 1, you utterly need to restrict things in order to get the required level of performance or design. There are people out there – especially in the demoscene world – who can build up 3D engines quickly. But keep in mind those engines are limited to demoscene applications, and enhancing them to support something else is not a trivial task. In the end, you might end up with a lot of bloated code you’ll eventually zap later on to build something different for another purpose – eh, demoscene is about going dirty, right?! ;)


So… Let’s go back to the basics. In order to include everyone, we need to provide something that everyone can download, install, learn and use. Something like OpenGL. For Haskell, I highly recommend using gl. It’s built against the gl.xml file – released by Khronos. If you need sound, you can use the complementary library I wrote, using the same name convention, al.

The problem with that is the fact that OpenGL is a low-level API, especially for newcomers or people who need to get things done quickly. The part that bothers – wait, no, annoys – me the most is the fact that OpenGL is a very old library which was designed two decades ago. And we suffer from that. A lot.

OpenGL is a stateful graphics library. That means it maintains a state, a context, in order to work properly. Maintaining a context or state is a legit need, don’t get it twisted. However, if the design of the API doesn’t fit such a way of dealing with the state, we come across a lot of problems. Is there one programmer who hasn’t experienced black screens yet? I don’t think so.

The OpenGL API exposes a lot of functions that perform side effects. Because OpenGL is weakly typed – almost all objects you can create in OpenGL share the same GL(u)int type, which is very wrong – you might end up doing nasty things. Worse, it uses an internal binding system to select the objects you want to operate on. For instance, if you want to upload data to a texture object, you need to bind the texture before calling the texture upload function. If you don’t, well, that’s bad for you. There’s no way to verify code safety at compile time.

You’re not convinced yet? OpenGL doesn’t tell you directly how to change things on the GPU side. For instance, do you think you have to bind your vertex buffer before performing a render, or is it sufficient to bind the vertex array object only? All those questions don’t have direct answers, and you’ll need to dig in several wikis and forums to get your answers – the answer to that question is “Just bind the VAO, pal.”

What can we do about it?

Several attempts to enhance that safety have come up. The first thing we have to do is to wrap all OpenGL object types into proper types. For instance, we need several types for Texture and Framebuffer.

Then, we need a way to ensure that we cannot call a function if the context is not set up for it. There are a few ways to do that. For instance, indexed monads can be a good start. However, I tried that, and I can tell you it’s way too complicated. You end up with very long types that make things barely readable. See this and this for excerpts.


In my desperate quest to provide a safer OpenGL API, I decided to create a library from scratch called luminance. That library is not really an OpenGL safe wrapper, but it’s very close to being that.

luminance provides the same objects as OpenGL does, but via a safer way to create, access and use them. It’s an effort to provide safe abstractions suited for graphics applications, without destroying performance. It’s not a 3D engine. It’s a rendering framework. There are no lights, asset managers or that kind of feature. It’s just a tiny, simple and powerful API.


luminance is still a huge work in progress. However, I can already show an example. The following example opens a window but doesn’t render anything. Instead, it creates a buffer on the GPU and performs several simple operations on it.

-- Several imports.
import Control.Monad.IO.Class ( MonadIO(..) )
import Control.Monad.Trans.Resource -- from the resourcet package
import Data.Foldable ( traverse_ )
import Graphics.Luminance.Buffer
import Graphics.Luminance.RW
import Graphics.UI.GLFW -- from the GLFW-b package
import Prelude hiding ( init ) -- clash with GLFW-b’s init function

windowW,windowH :: Int
windowW = 800
windowH = 600

windowTitle :: String
windowTitle = "Test"

main :: IO ()
main = do
  -- Initiate the OpenGL context with GLFW.
  windowHint (WindowHint'Resizable False)
  windowHint (WindowHint'ContextVersionMajor 3)
  windowHint (WindowHint'ContextVersionMinor 3)
  windowHint (WindowHint'OpenGLForwardCompat False)
  windowHint (WindowHint'OpenGLProfile OpenGLProfile'Core)
  window <- createWindow windowW windowH windowTitle Nothing Nothing
  makeContextCurrent window
  -- Run our application, which needs a (MonadIO m,MonadResource m) => m ()
  -- value. We traverse_ so that we just terminate if we’ve failed to create
  -- the window.
  traverse_ (runResourceT . app) window

-- GPU regions. For this example, we’ll just create two regions. One of floats
-- and the other of ints. We’re using read/write (RW) regions so that we can
-- send values to the GPU and read them back.
data MyRegions = MyRegions {
    floats :: Region RW Float
  , ints   :: Region RW Int
  }

-- Our logic.
app :: (MonadIO m,MonadResource m) => Window -> m ()
app window = do
  -- We create a new buffer on the GPU, getting back regions of typed data
  -- inside of it. For that purpose, we provide a monadic type used to build
  -- regions through the 'newRegion' function.
  region <- createBuffer $
    MyRegions
      <$> newRegion 10
      <*> newRegion 5
  clear (floats region) pi -- clear the floats region with pi
  clear (ints region) 10 -- clear the ints region with 10
  readWhole (floats region) >>= liftIO . print -- print the floats as an array
  readWhole (ints region) >>= liftIO . print -- print the ints as an array
  floats region `writeAt` 7 $ 42 -- write 42 at index=7 in the floats region
  floats region @? 7 >>= traverse_ (liftIO . print) -- safe getter (Maybe)
  floats region @! 7 >>= liftIO . print -- unsafe getter
  readWhole (floats region) >>= liftIO . print -- print the floats as an array

Those read/write regions could also have been made read-only or write-only. For such regions, some functions can’t be called, and trying to do so will make your compiler angry and throw errors at you.

Up to now, the buffers are created persistently and coherently. That might cause issues with OpenGL synchronization, but I’ll wait for benchmarks before changing that part. If benchmarking spots performance bottlenecks, I’ll introduce more buffers and regions to deal with special cases.

luminance doesn’t force you to use a specific windowing library. You can then embed it into any kind of host library.

What’s to come?

luminance is very young. At the moment of writing this article, it’s only 26 commits old. I just wanted to present it so that people know it exists; it will be released as soon as possible. The idea is to provide a library that, if you use it, won’t create black screens because of framebuffer incorrectness or buffer issues. It’ll ease debugging OpenGL applications and prevent you from making nasty mistakes.

I’ll keep posting about luminance as I get new features implemented.

As always, keep the vibe, and happy hacking!

Categories: Offsite Blogs

R-like Box plot using Frames and Chart

Haskell on Reddit - Fri, 07/24/2015 - 11:43pm

While I was doing one of those online stats courses I tried to replicate some of the R exploratory things with Haskell. I almost immediately got stuck with doing the box plots. After looking around a bit and being unable to find anything, I tried to put something together myself using acowley's Frames library and Chart.

I thought I could post it here in case someone finds it helpful, and also for some feedback on how to improve it.

Thanks to /u/acowley for some great help in getting this working.

submitted by rrottier
[link] [10 comments]
Categories: Incoming News

non-ASCII filepaths in a C function

libraries list - Fri, 07/24/2015 - 10:52pm
In my 'soxlib' package I have written a binding to

sox_format_t * sox_open_read(
    char const * path,
    sox_signalinfo_t const * signal,
    sox_encodinginfo_t const * encoding,
    char const * filetype);

I construct the C filepath "path" from a Haskell FilePath using Foreign.C.String.withCString. This works for ASCII and non-ASCII characters on Linux. However, non-ASCII characters make sox_open_read fail on Windows. What is the correct way to convert a FilePath to "char *"?
Categories: Offsite Discussion

Memoization in Haskell

Haskell on Reddit - Fri, 07/24/2015 - 4:28pm

Hi there. I'm kind of a beginner at Haskell but I'm trying to broaden my programming abilities now that I got some time off. I've been trying a bunch of Project Euler problems I haven't solved yet with Haskell and I've run into a problem regarding the lack of mutable variables: I don't know how to handle memoization.

Can someone tell me how dynamic programming is possible in Haskell even though there are no mutable variables? I'm sure there must be some way to do this.
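For what it's worth, the standard sketch of the lazy, variable-free answer (not part of the original post) memoizes by sharing a lazily built structure instead of mutating a table:

```haskell
-- 'fibs' is a single shared list: each entry is computed at most once,
-- and later entries reuse earlier ones through ordinary list indexing.
fibs :: [Integer]
fibs = map fib [0 ..]
  where
    fib 0 = 0
    fib 1 = 1
    fib n = fibs !! (n - 1) + fibs !! (n - 2)
```

Sharing the list is what a memo table gives you in an imperative language; no mutable variable is needed.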

submitted by FlaconPunch
[link] [10 comments]
Categories: Incoming News