News aggregator

Mark Jason Dominus: A message to the aliens, part 6/23 (chemistry)

Planet Haskell - Fri, 08/21/2015 - 7:33am

Earlier articles: Introduction Common features Page 1 (numerals) Page 2 (arithmetic) Page 3 (exponents) Page 4 (algebra) Page 5 (geometry)

This is page 6 of the Cosmic Call message. An explanation follows.

The 10 digits again:


Page 6 discusses fundamental particles of matter, the structure of the hydrogen and helium atoms, and defines glyphs for the most important chemical elements.

Depicted at top left is the hydrogen atom, with a proton in the center and an electron circulating around the outside. This diagram is equated to the glyph for hydrogen.

The diagram for helium is similar but has two electrons, and its nucleus has two protons and also two neutrons.


The illustrations may puzzle the aliens, depending on how they think of atoms. (Feynman once said that this idea of atoms as little solar systems, with the electrons traveling around the nucleus like planets, was a hundred years old and out of date.) But the accompanying mass and charge data should help clear things up. The first formula says that the mass of the proton is 1836 times the mass of the electron, and that 1836, independent of the units used and believed to be a universal and fundamental constant, ought to be a dead giveaway about what is being discussed here.

If you want to communicate fundamental constants, you have a bit of a problem. You can't tell the aliens that the speed of light is furlongs per fortnight without first explaining furlongs and fortnights (as is actually done on a later page). But the proton-electron mass ratio is dimensionless; it's 1836 in every system of units. (Although the value is actually known to be 1836.15267; I don't know why a more accurate value wasn't given.)

This is the first use of subscripts in the document. It also takes care of introducing the symbol for mass. The following formula does the same for charge : .

The next two formulas, accompanying the illustration of the helium atom, describe the mass (1.00138 protons) and charge (zero) of the neutron. I wonder why the authors went for the number 1.00138 here instead of writing the neutron-electron mass ratio of 1838 for consistency with the previous ratio. I also worry that this won't be enough for the aliens to be sure about the meaning of . The 1836 is as clear as anything can be, but the 0 and -1 of the corresponding charge ratios could in principle be a lot of other things. Will the context be enough to make clear what is being discussed? I suppose it has to; charge, unlike mass, comes in discrete units and there is nothing like the 1836.

The second half of the page reiterates the symbols for hydrogen and helium and defines symbols for eight other chemical elements. Some of these appear in organic compounds that will be discussed later; others are important constituents of the Earth. It also introduces a symbol for “union” or “and”: . For example, sodium is described as having 11 protons and 12 neutrons.


Most of these new glyphs are not especially mnemonic, except for hydrogen—and aluminium, which is spectacular.

The blog is going on hiatus until early September. When it returns, the next article will discuss page 7, shown at right. It has three errors. Can you find them? (Click to enlarge.)

Categories: Offsite Blogs

What are Haskellers' critiques of F# and OCaml?

Haskell on Reddit - Fri, 08/21/2015 - 7:06am

I enjoyed so much the discussions about "What are Haskellers' critiques of Clojure?", "What are Haskellers' critiques of Scala?", and similar, that I'd like to also hear opinions about ML-family languages.

submitted by gsscoder
[link] [111 comments]
Categories: Incoming News

How is it possible that compiled code is slower than interpreted?

Haskell on Reddit - Fri, 08/21/2015 - 5:15am

I would normally post this in /r/haskellquestions but I feel that this is probably relevant to a broader audience.

I just noticed that the compiled version of a test program runs about twice (2.23x on average) as slow as the interpreted version of the same program.

I have no clue why this is possible, so I want to ask you to speculate on probable causes and/or ways to track this down.

I am using GHC 7.10.1 and 7.10.2 on a current Arch Linux and Ubuntu 14.04.2. Compiling with and without -fllvm and with several -O levels.

Code to reproduce this can be found here: (you'll need too). Just do the usual cabal sandbox setup and cabal run dance. This will display the time per iteration.

submitted by goliatskipson
[link] [31 comments]
Categories: Incoming News

SQL optimizer in Haskell

Haskell on Reddit - Fri, 08/21/2015 - 4:40am

I need to implement a SQL optimizer for a research project. I know how to do that in C++ but I am not really looking forward to it (writing an optimizer is really tedious). I am now thinking whether Haskell would be a good match for that task. But since my Haskell experience is limited I need to answer a few questions before I can design the system and I am wondering whether some people in this community would be kind enough to give me some pointers. First question: does anyone see any big problems in using Haskell to do that?

The reason I think Haskell is a good match is the following: an optimizer generates a relational algebra expression out of a SQL query. It then converts this expression into other, equivalent expressions and tests them against a cost model. At some point it decides that the ordering of the operators is good enough (using some heuristics, since optimizing a SQL query is an NP-complete problem) and generates an execution plan out of the resulting expression. While you can of course do this in C++, Haskell has some benefits:

  • Expression conversions need to be correct (otherwise the user will get a wrong result) - since we are talking about an algebra the type system should help me to write correct code.
  • It handles a lot of tree structures. Conceptually, for every step the optimizer generates a new tree and throws the old away - this sounds very functional to me.
  • Haskell seems to be fast enough (since we have strong storage and execution separation scaling out the processing is trivial and I am willing to sacrifice a few CPU cycles there - my hope is of course that I will be, in return, able to get a better optimizer).
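The rewrite-and-cost loop described above fits naturally onto algebraic data types. A minimal sketch (all names, the single rewrite rule, and the cost numbers are invented for illustration, not taken from any real optimizer):

```haskell
import Data.List (minimumBy)
import Data.Ord (comparing)

-- A toy relational-algebra expression type.
data Rel
  = Scan String          -- base table
  | Select String Rel    -- predicate kept abstract as a String
  | Join Rel Rel
  deriving (Show, Eq)

-- Estimated result size and estimated execution cost, kept separate:
-- a selection must still read its input, but it shrinks the output.
size :: Rel -> Int
size (Scan _)     = 100
size (Select _ e) = size e `div` 10
size (Join l r)   = size l * size r `div` 50

cost :: Rel -> Int
cost (Scan _)     = 100
cost (Select _ e) = cost e + size e
cost (Join l r)   = cost l + cost r + size l * size r

-- One equivalence-preserving rewrite: push a selection below a join.
-- (A real optimizer would check which side the predicate refers to.)
pushSelect :: Rel -> [Rel]
pushSelect (Select p (Join l r)) = [Join (Select p l) r]
pushSelect _                     = []

-- Pick the cheapest among the original and its one-step rewrites.
optimize :: Rel -> Rel
optimize e = minimumBy (comparing cost) (e : pushSelect e)
```

Because `Rel` values are immutable, every candidate plan can be generated and costed without defensive copying, which is exactly the "new tree each step" style the post describes.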

One problem is that the lower level is implemented in C++, so I will have to implement a C interface to communicate with Haskell (but this is feasible). But I am unsure whether I will be able to handle the following potential problems:

  • We use our own threading model (since we have better knowledge about the workload than the OS or another general runtime, like the one from Haskell). To execute a query you need to pass a function (which would be Haskell's execution tree) to a client handle that will run the function in another thread. This should be fine unless the Haskell runtime makes some assumptions about the threading or tries to do its own threading. Also, are there hidden locks? Since we have fibers these might result in deadlocks and poor performance (the GC is one problem for sure, but maybe there are others).
  • The operators within the execution model will be special iterators. My plan is to translate them to a Haskell list and lazy evaluation should make sure that the iterator does not get forwarded further than needed. Is this assumption correct? I don't want to use a state monad here (or at least I don't want to propagate it all the way up)...
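The lazy-list assumption in the last bullet can be checked with a tiny sketch. `Debug.Trace` is used here only to make demand observable; a real operator translation would of course be pure:

```haskell
import Debug.Trace (trace)

-- A hypothetical operator's output modeled as an infinite lazy list;
-- each "row" is traced when (and only when) it is actually produced.
rows :: [Int]
rows = [ trace ("producing row " ++ show i) i | i <- [1 ..] ]

-- The consumer pulls exactly three rows; rows 4, 5, ... are never
-- produced, i.e. the iterator is not forwarded further than needed.
firstThree :: [Int]
firstThree = take 3 rows
```

Running this prints three "producing row" messages and no more, which is the behaviour the post is hoping for, so the assumption is correct as long as the list is consumed lazily (no `length`, `seq`-ing the spine, etc.).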

I am grateful for answers and sorry for the long post and the fuzzy question

submitted by cppd
[link] [6 comments]
Categories: Incoming News

Which Map implementation is better optimised for memory?

Haskell on Reddit - Fri, 08/21/2015 - 4:08am

I am stream-processing lots of data and accumulating some stuff from the stream along the road. The data I accumulate is simply a map (or, sometimes, a map of maps) where a ByteString is the key and an Int is the value. Consider a word counting example; it is somewhat close to what I am doing. The data itself isn't that huge, it only takes ~200MB when written to disk, but it takes ~15-18 gigabytes in memory when I use Data.Map. I have tried things like IntMap and HashMap, but they don't help much in terms of memory. Judy arrays are super fine, but since they can only accept Word as keys they are not very useful in my situation.

So here is the question: is there a "better" implementation of Map that is more optimised for memory consumption? The current ratio of 200MB:18GB doesn't seem to be very usable...
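For the word-counting shape described above, two things often matter more than the choice of map library: strict values (so the map holds evaluated Ints rather than chains of `(+1)` thunks) and copied keys (so small ByteString slices don't pin the large input chunks they were cut from). A hedged sketch of both fixes:

```haskell
import qualified Data.ByteString.Char8 as B
import qualified Data.Map.Strict as M
import Data.List (foldl')

-- Word counting with Data.Map.Strict and a strict left fold:
-- insertWith from the Strict module evaluates the combined value, so
-- no thunk chains accumulate (a common cause of blowup with lazy Data.Map).
-- B.copy detaches each key from the original input chunk, so the map
-- doesn't keep whole source ByteStrings alive via tiny slices.
countWords :: B.ByteString -> M.Map B.ByteString Int
countWords = foldl' bump M.empty . B.words
  where
    bump m w = M.insertWith (+) (B.copy w) 1 m
```

Whether this closes the 200MB:18GB gap for the actual workload is of course only testable against the real data, but these two changes are the usual first suspects.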

submitted by alexeyraga
[link] [13 comments]
Categories: Incoming News

OcaPic: Programming PIC microcontrollers in OCaml

Lambda the Ultimate - Fri, 08/21/2015 - 12:33am

Most embedded systems development is done in C. It's rare to see a functional programming language target any kind of microcontroller, let alone an 8-bit microcontroller with only a few kB of RAM. But the team behind the OcaPic project has somehow managed to get OCaml running on a PIC18 microcontroller. To do so, they created an efficient OCaml virtual machine in PIC assembler (~4kB of program memory), and utilized some clever techniques to postprocess the compiled bytecode to reduce heap usage, eliminate unused closures, reduce indirections, and compress the bytecode representation. Even if you're not interested in embedded systems, you may find some interesting ideas there for reducing overheads or dealing with constrained resource budgets.

Categories: Offsite Discussion

A general data-type construction function

Haskell on Reddit - Thu, 08/20/2015 - 10:45pm

Hey guys, I've just had a simple idea of the general construction interface for data-types using a typeclass. Please see the following GHCi session:

$ ghci -XNoImplicitPrelude
GHCi, version 7.10.2: :? for help
> import BasePrelude
BasePrelude> :set -XFlexibleInstances
BasePrelude> :{
BasePrelude| class Constructor a where
BasePrelude|   construct :: a
BasePrelude|
BasePrelude| instance Constructor (a -> b -> (a, b)) where
BasePrelude|   construct = (,)
BasePrelude|
BasePrelude| instance Constructor (a -> b -> c -> (a, b, c)) where
BasePrelude|   construct = (,,)
BasePrelude|
BasePrelude| data T3 a b c = T3 a b c deriving (Show)
BasePrelude|
BasePrelude| instance Constructor (a -> b -> c -> T3 a b c) where
BasePrelude|   construct = T3
BasePrelude| :}
BasePrelude> construct 'a' 'b' 'c' :: (Char, Char, Char)
('a','b','c')
BasePrelude> construct 'a' 'b' 'c' :: T3 Char Char Char
T3 'a' 'b' 'c'
BasePrelude> construct <$> pure 'a' <*> pure 'b' <*> pure 'c' :: Maybe (Char, Char, Char)
Just ('a','b','c')

Things to notice: the Constructor instance gets determined by the result type, so it may be inferred from your outer function signature or you'll have to specify it yourself as I do in the session above. The typeclass is identical to Default, so it may be used instead.

Let's discuss whether the thing is viable and needs releasing.

submitted by nikita-volkov
[link] [13 comments]
Categories: Incoming News

ANN: cabal-bounds 1.0.0

haskell-cafe - Thu, 08/20/2015 - 10:23pm
cabal-bounds[1] is a command line program for managing the bounds/versions of the dependencies in a cabal file.

Changes for 1.0.0
=================
* automatically find the cabal and setup-config files
* ignore the base library by default

Perhaps the two most relevant use cases:

Initialize Bounds
=================
If you have started a new project, created a cabal file, added dependencies to it, built it, and now want to set the lower and upper bounds of the dependencies according to the versions currently used by the build, then you can just call:

$> cabal-bounds update

This call will update the bounds of the dependencies of the cabal file in the working directory.

Raise the Upper Bounds
======================
If you have several cabalized projects, then it can be quite time consuming to keep the bounds of your dependencies up to date. Especially if you're following the package versioning policy, then you want to raise your upp
Categories: Offsite Discussion

WTH is up with the Numeric type??

Haskell on Reddit - Thu, 08/20/2015 - 10:12pm

EDIT: The question is meant to read

WTH is up with the Natural type??

For those of you who don't know (I didn't know until about a week ago), Haskell has a new (since base 4.8) unbounded, unsigned integral number type called Natural, where Natural is to Word as Integer is to Int:

As you can see from the link, it has been placed in its own brand spanking new module, Numeric.Natural.
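For anyone who hasn't tried it, the practical difference from Integer is that going below zero is an error rather than a negative number (and unlike Word there is no silent wraparound). A small sketch; on GHC 7.10 the failure surfaces as an ArithException:

```haskell
import Numeric.Natural (Natural)
import Control.Exception (ArithException, evaluate, try)

-- Natural is unbounded like Integer, but subtraction that would go
-- below zero throws an arithmetic underflow exception instead of
-- wrapping around (Word) or producing a negative value (Integer).
safeMinus :: Natural -> Natural -> IO (Either ArithException Natural)
safeMinus a b = try (evaluate (a - b))
```

So `safeMinus 10 4` yields a `Right` value, while `safeMinus 3 5` yields a `Left` underflow, which is arguably the behaviour you want when a quantity is natural by construction.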

Why is it in its own module, and why the funny name?

I guess the answer to the first question is "Because of all the type astronauts creating their own Nat(ural) types while trying to add dependent types to Haskell." I think the module name is bizarre, though.

What gives?

submitted by BoteboTsebo
[link] [15 comments]
Categories: Incoming News

octree implementation - followup to "How to model outer space for a MUD"

haskell-cafe - Thu, 08/20/2015 - 9:30pm
It looks like Octree is what I want if I can solve a particular problem. First let me articulate what I am looking for.

(1) Objects in 3-d space with no spatial extent. Collision is determined by two objects occupying the same point.
(2) Efficient insert-delete-update.

#2 seems to be the problem. I found this library, thankfully. I believe it's exactly what I am looking for, but for quadtrees. I'm a little overwhelmed as this is a new data structure for me. I believe if I can grok what is going on in this library, I can take these ideas and extend them to the octree. I'm referring specifically to this module and these functions:

setLocation :: Eq a => Location -> a -> QuadTree a -> QuadTree a
setLocation = set . atLocation

atLocation (having difficulty cutting and pasting this to mail.) I'd like to be able to visualize how the tree is being compressed, it may help to be able to see wha
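As a baseline before committing to an octree: given requirement (1), that objects have no spatial extent and collision means sharing a point, a plain map keyed by coordinates already provides O(log n) insert, delete, and move. A hedged sketch (these names are invented, not the API of any octree or quadtree library):

```haskell
import qualified Data.Map.Strict as M

type Pos = (Int, Int, Int)

-- Each occupied point maps to the objects sitting exactly there.
type Space a = M.Map Pos [a]

insertAt :: Pos -> a -> Space a -> Space a
insertAt p x = M.insertWith (++) p [x]

deleteAt :: Eq a => Pos -> a -> Space a -> Space a
deleteAt p x = M.update (dropEmpty . filter (/= x)) p
  where
    dropEmpty [] = Nothing   -- remove empty cells from the map entirely
    dropEmpty ys = Just ys

moveTo :: Eq a => Pos -> Pos -> a -> Space a -> Space a
moveTo from to x = insertAt to x . deleteAt from x

-- Everything at a given point; two or more entries means a collision.
objectsAt :: Pos -> Space a -> [a]
objectsAt p = M.findWithDefault [] p
```

An octree earns its keep when you also need spatial range queries ("everything within this box"); for pure point-collision and update, the map may already be enough.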
Categories: Offsite Discussion

how to do invertible type families?

Haskell on Reddit - Thu, 08/20/2015 - 3:55pm

Back in the days of fundeps we could define a type class that gives us an invertible type function, e.g.: class Foo a b | a -> b, b -> a. These days we can use type families to implement each direction of that invertible type function, but this runs into coherence issues during type inference, because just using type families loses track of the fact that the two families are related. Is there any way to state, once and for all, that a ~ Foo (UnFoo a) and b ~ UnFoo (Foo b) hold for all types?
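For comparison, the fundep formulation still works and keeps both directions in one relation: the `a -> b, b -> a` annotation is exactly the bijectivity statement that two unrelated type families lose. A small sketch (the class and instance are invented for illustration):

```haskell
{-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies #-}

-- "a -> b, b -> a" tells the solver the mapping is bijective, so
-- knowing either side determines the other during inference.
class Iso a b | a -> b, b -> a where
  to   :: a -> b
  from :: b -> a

instance Iso Int Bool where
  to n   = n /= 0
  from b = if b then 1 else 0
```

Here `to (3 :: Int)` infers its result type from the fundep, and `from False` infers Int, with no annotation at the call site. (Later GHC releases, from 8.0 on, add injective type families via TypeFamilyDependencies, but that only recovers one direction per family, not the two-way coherence asked about here.)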

submitted by winterkoninkje
[link] [5 comments]
Categories: Incoming News

ANN: cabal-bounds 1.0.0

Haskell on Reddit - Thu, 08/20/2015 - 3:24pm

cabal-bounds[1] is a command line program for managing the bounds/versions of the dependencies in a cabal file.

Changes for 1.0.0
  • automatically find the cabal and setup-config files
  • ignore the base library by default

Perhaps the two most relevant use cases:

Initialize Bounds

If you have started a new project, created a cabal file, added dependencies to it, build it, and now want to set the lower and upper bounds of the dependencies according to the currently used versions of the build, then you can just call:

$> cabal-bounds update

This call will update the bounds of the dependencies of the cabal file in the working directory.

Raise the Upper Bounds

If you have several cabalized projects, then it can be quite time consuming to keep the bounds of your dependencies up to date. Especially if you're following the package versioning policy, then you want to raise your upper bounds from time to time, to allow the building with newer versions of the dependencies.

cabal-bounds tries to automate this update process to some degree. So a typical update process might look like:

# update the version infos of all libraries
$> cabal update

# drop the upper bound of all dependencies of the cabal file in the working directory
$> cabal-bounds drop --upper

# create a cabal sandbox for building your project; this ensures that you're really using
# the newest available versions of the dependencies, otherwise you would be constrained
# to the already installed versions
$> cabal sandbox init

# build your project
$> cabal install

# update the upper bound of all dependencies of the cabal file in the working directory
$> cabal-bounds update --upper

Please consult the README for further information.


submitted by dan00
[link] [9 comments]
Categories: Incoming News

How to build API wrapper client in Haskell (best practice ) ?

Haskell on Reddit - Thu, 08/20/2015 - 2:05pm

Hi there

I am planning to write a small REST API client using Haskell. It will allow users to authenticate, make some searches in the database, and do some updates to their profile. So I was wondering if there are any suggestions, tutorials, best practices, architectures, or patterns for approaching these problems?

Any suggestions and links are welcome !

thanks in advance

submitted by raw909
[link] [6 comments]
Categories: Incoming News

An implementation of the board game Diplomacy

Haskell on Reddit - Thu, 08/20/2015 - 12:51pm

Announcing the diplomacy library and server.

If you like to play Diplomacy face-to-face, try using this server! Each player can participate using their own smartphone or laptop.

submitted by alexvieth
[link] [11 comments]
Categories: Incoming News

How to accomplish edits of a large dataset in Haskell

Haskell on Reddit - Thu, 08/20/2015 - 12:01pm

I have a pet project of making a little MUD (Multi User Dungeon) with Haskell to learn the language better. I currently have a Haskell function that will dig out a 'dungeon' area and hold information about each of the rooms with the connections between them. I even have some separated-out IO functions to save/load them! Neat.

So here's the tricky part I can't wrap my head around. What I want to do is allow the user to load up a 'dungeon', then traverse to each room. Ok, not hard. But I want them to be able to edit the description of the room. I obviously can't just edit the description 'field' of the room, as everything's immutable.

Currently my structure is that I have a list of Room data types that include description as part of their definition. I also have a list of Connection data types that contain references to two rooms, one for either end. So if I alter a Room, it feels like I have to recreate the Room and Connection lists, and both are going to be a pain! There has to be a better way of doing this.
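One common approach to the pain described above: key the rooms by an id in a Map, and store connections as ids rather than Room values. Then editing a description is a local record update plus Map.adjust, and neither the other rooms nor the connections need recreating. A sketch with invented names:

```haskell
import qualified Data.Map.Strict as M

type RoomId = Int

data Room = Room
  { roomDesc  :: String
  , roomExits :: [RoomId]   -- ids, not Rooms, so edits stay local
  } deriving (Show, Eq)

type Dungeon = M.Map RoomId Room

-- Record-update syntax builds a new Room sharing every unchanged field;
-- M.adjust rebuilds only the path to that one key and shares the rest
-- of the map, so "recreating" the dungeon is cheap and automatic.
setDesc :: RoomId -> String -> Dungeon -> Dungeon
setDesc rid d = M.adjust (\r -> r { roomDesc = d }) rid
```

The immutability is still there; it just stops being painful once the structure shares everything you didn't touch.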

Of course, I could just manually edit the file's saved descriptions, but that's not the point of learning functional programming. ;)

Any advice/ideas appreciated!

submitted by nevertras
[link] [4 comments]
Categories: Incoming News

Help with Understanding the Output of Haskell Program Coverage

Haskell on Reddit - Thu, 08/20/2015 - 11:58am

Hi Everyone,

I wanted to learn about my test coverage and learned from the Haskell Wiki about the hpc tool. Since I am using stack, I was able to do stack test --coverage to get a test coverage report, which is nice.

I do have a few questions in understanding these reports and how to improve my test coverage. I am happy to update the Wiki based on the responses I get here.

I am looking at the generated HPC html file and the questions I have are:

  1. How do I exclude some files from hpc report when using stack? If I cannot, do I have to run the hpc tool manually with the exclude flag? Should the binary generated by stack be used for this?

  2. What are Alternatives mentioned in the hpc-index.html file?

  3. What's the difference between bold and non-bold lines in the report for a specific module? The RWH book says 'we see the actual source of the program, marked up in bold yellow for code that wasn't tested, and code that was executed simply bold.' What about those lines that are not bolded? How do they impact the test coverage report?

  4. I see a lot of yellow highlights for things like deriving (Show) -- the Show is highlighted. I am not sure how this impacts the test coverage report. I derive them so I can print them out and debug during development. Is it recommended that I remove them in the final code before testing? I want to get a pretty accurate view and don't want these non-evaluated expressions to impact it.

In my module that declares most of the custom types for persistent, I have something like (I know I can use `FromJSON, etc. in deriving and I have it like below because I sometimes manually create a json object so easy to add):

data CustomType = Manual | Auto
  deriving (Show, Eq, Generic)

instance FromJSON CustomType
instance ToJSON CustomType

derivePersistField "CustomType"

The hpc report has the data and instance lines as non-bold (#3 above) and only derivePersistField is bolded. It says 40 covered out of a total of 90 or so, which indicates poor coverage. Most of these things are data / instance declarations. Should I be doing something else to get better coverage? If these are fine, how do I ignore them so the report gives a realistic picture? (Note: I am converting them to and from JSON frequently in the app and those endpoints are being tested.)

Many data fields were created mainly to send back JSON. I put some values in them and return them to the client. In the HPC report, it has flagged many of the fields within my record as 'never been called' -- it is a bit puzzling to me as wouldn't they be called at the time of creating the JSON? For example: Validation messages are being sent to the client and I have a declaration that looks like:

data ValidationError = ValidationError
  { field   :: Text
  , message :: Text
  } deriving (Show, Generic)

instance ToJSON ValidationError

I see yellow highlights for field and message. I am getting back validation errors in my client, so why would these fields never be called? I set them and call toJSON, and I would expect the JSON call to access the fields.

Appreciate your help.

submitted by ecognium
[link] [2 comments]
Categories: Incoming News

Instead of a kitchen sink language Haskell should be expressed as a base language and customizations of that language

Haskell on Reddit - Thu, 08/20/2015 - 11:39am

The base language might look something like

type Var = String

data Core = ModuleName String
          | Exports [Export]
          | Imports [Import]
          | Decls [Decl]
          ...

type Decl = Data
          | Fun Var Expression
          | TypeSig Var Type
          ...

data Expression = Case ...
                | LetVar Var Expression
                | Lambda Var Expression
                ...

An example of a core program might look like

module Foo where

exports fooBar

imports
  System.IO (...)
  Data.String (...)
  Data.Bool (...)

fooBar = \happy -> case happy of
  True  -> putStrLn "hello"
  False -> putStrLn "go away"

Then we could express all sorts of expansions to the base language. The PatternBindings extension would allow us to use pattern bindings instead of just binding to a var. The IfThenElse extension would allow the use of 'if' instead of 'case'. The Do extension introduces the obvious. Type classes would be introduced by the TypeClasses extension.

In addition to extensions we would also have "flavours". A flavour is just a known collection of base extensions and alternative extensions. So for example we could have

flavour Haskell98 builtin98 avail98
builtin98 = [IfThenElse, Do, PatternBindings]
avail98   = [RankNTypes, TypeClasses]

flavour Haskell2010 builtin2010 avail2010
builtin2010 = [Haskell98, GADTs, ...]
avail2010   = [TupleSections, MultiwayIf, ...]

The yearly language standardization process would consist of collecting extension proposals and deciding what will be considered builtin for that year, or just available. Then effort is made to ensure that the accepted extensions all play well together.

The advantages of this approach would be having a well-factored language and reducing standardization stalls.


I have simplified my thinking. A flavour is just a set of compatible extensions.

And to further this idea of giving a more abstract description of haskell-like languages, we would write the standard something like the following:

A haskell-like language should have a way to:

  • Define a module
  • Declare exports
  • Create a type class
  • Declare a datatype
  • Declare a type alias
  • Give a type signature
  • Define a function
  • Declare the fixity of a function
  • Give an expression
  • Bind expressions to variables
  • Pattern match on an expression

In a haskell-like language there are expressions like the following: numbers, lists, characters, strings, bools, bytes, ..., arithmetic, logical operations, lambda.

We would also give rules regarding the semantics of haskell-like languages and express them with Core.

We would have extensions. For example, PatternBindings and FunctionClauses and a LetPatterns extension that depends on those two extensions. With these extensions we have the syntax for let and the current way of defining functions and have specified a portion of the syntax of haskell98.

We would also use extensions to mark changes in semantics.

submitted by zandekar
[link] [17 comments]
Categories: Incoming News