"Four fours" is an interesting mathematical puzzle: given four instances of the digit 4, represent all integers using mathematical symbols and operators in common use. The allowed symbols and the ways of combining the fours vary from version to version of the puzzle (sometimes .4 is allowed, or the power operator).
I was thinking that it would be fun to solve it in Haskell, generating all possible solutions and taking the first n. But that led me to an interesting question: is there a way to generate a lazy list of all possible values of an algebraic data type?
In my idea I would have:

    data Fours = F | FF | FFF | FFFF

to represent 4, 44, 444 and 4444. You can easily add .4, .44, .444 and .4444. Then the auxiliary functions:

    digits :: Fours -> Double
    digits F = 1
    ...

    value :: Fours -> Double
    value F = 4
    ...
(Of course, there could be more interesting representations, like (Four Int Int) indicating the number of fours before and after the decimal point, but I am more interested in a type with a finite number of values).
And the expression type:

    data Expr = Num Fours | Sum Expr Expr | Rest Expr Expr | Mul Expr Expr
              | Div Expr Expr | Fact Expr | Sqrt Expr

    eval :: Expr -> Double
    eval (Num a) = value a
    ...
Now, if I were able to get a list of all possible Exprs, I would filter the ones that contain exactly four fours, sort them by value, and voilà! But I guess that would imply constructing a non-trivial map from Int -> Expr. I'm looking for an elegant way to get all possible answers.

submitted by mvaliente2001
[link] [9 comments]
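One common answer (a sketch under my own assumptions, not the poster's code) is to enumerate expressions indexed by the number of fours they contain, so "exactly four fours" becomes structural rather than a filter over an infinite list:

```haskell
-- Sketch: enumerate every Expr containing exactly n fours, so the
-- four-fours candidates are simply `exprs 4`.  Only a subset of the
-- post's constructors is shown; Div, Fact, Sqrt etc. would extend the
-- comprehension in the same way.
data Fours = F | FF | FFF | FFFF deriving (Show, Eq, Enum, Bounded)
data Expr  = Num Fours | Sum Expr Expr | Mul Expr Expr deriving Show

-- how many fours a literal uses (F = 1, ..., FFFF = 4)
fours :: Fours -> Int
fours = succ . fromEnum

-- exprs n: all expressions built from exactly n fours (n >= 1);
-- terminates because both sides of a binary operator use fewer fours
exprs :: Int -> [Expr]
exprs n =
  [ Num f | f <- [minBound .. maxBound], fours f == n ] ++
  [ op l r | i  <- [1 .. n - 1]
           , op <- [Sum, Mul]
           , l  <- exprs i
           , r  <- exprs (n - i) ]
```

Each `exprs n` is a finite list, so `exprs 4` can be evaluated, sorted by `eval`, and searched lazily.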
First off, I really don't know anything about Haskell. My only exposure to the language has come from my heavy use of xmonad, which is written and configured in Haskell. I have managed to write a config that suits my purposes quite well, but that doesn't mean I understand it.
My problem is that I have something that looks like this in my code:

    myStartupHook = spawn "xsetbg ~/somebackground.png"
which sets the background image of the desktop. I like this. However, I am using MATLAB and have found out that Java sucks balls in the most impressively obscure ways. Xmonad and Java do not play nice together, and to get things working I need a line like:

    myStartupHook = setWMName "LG3D"
My question is: how do I combine these two desirable commands into one? Is that possible?

submitted by Registar
[link] [11 comments]
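For what it's worth, the likely answer (my assumption about the config, not taken from the thread): `myStartupHook` is an action in xmonad's `X` monad, so the two actions can simply be sequenced with do-notation (or `(>>)`):

```haskell
-- Config fragment (assumes xmonad and xmonad-contrib are installed):
-- X is a monad, so startup actions compose like any monadic actions.
import XMonad
import XMonad.Hooks.SetWMName (setWMName)

myStartupHook :: X ()
myStartupHook = do
  spawn "xsetbg ~/somebackground.png"   -- set the wallpaper
  setWMName "LG3D"                      -- appease Java/MATLAB
```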
As the performance of the standard RWS monad transformers is said to be abysmal, I rewrote RWS-like transformers (here).
These transformers are not really generic Writer instances (they only work on lists; I needed them for a system where I mostly cons onto the log, so this made sense to me), and they doubled the speed of my program.
Is there already something like that on Hackage? Do you think it would be worth having there?

submitted by bartavelle
[link] [5 comments]
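Without seeing the linked code, here is a hedged sketch of the kind of list-specialised Writer the post describes (the names and details are my assumptions): the log is built by consing, with a single reverse at the end, instead of a monoid append on every tell:

```haskell
-- Hypothetical sketch of a Writer specialised to a list log built by
-- cons.  Reversing once at the end is cheaper than appending to the
-- right of the log on every write.
newtype ListWriter w a = ListWriter { runLW :: [w] -> (a, [w]) }

instance Functor (ListWriter w) where
  fmap f m = ListWriter $ \ws -> let (a, ws') = runLW m ws in (f a, ws')

instance Applicative (ListWriter w) where
  pure a    = ListWriter $ \ws -> (a, ws)
  mf <*> ma = ListWriter $ \ws ->
    let (f, ws')  = runLW mf ws
        (a, ws'') = runLW ma ws'
    in (f a, ws'')

instance Monad (ListWriter w) where
  m >>= k = ListWriter $ \ws ->
    let (a, ws') = runLW m ws in runLW (k a) ws'

-- log one item: O(1) cons, no mappend
tell1 :: w -> ListWriter w ()
tell1 w = ListWriter $ \ws -> ((), w : ws)

-- run the computation and put the log back in emission order
runListWriter :: ListWriter w a -> (a, [w])
runListWriter m = let (a, ws) = runLW m [] in (a, reverse ws)
```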
So, I'm trying to write a game using reactive-banana and SDL. I've set up a game loop based on this. The problem is that when I run the game it's just a black screen; it doesn't render the white cube as it is supposed to. I've even tried making ship a constant event of some value and it still doesn't render. I've also double-checked the render function, and it does work if I just loop it. Anyone got an idea of what might be wrong?
Here is my code: https://gist.github.com/klrr/7660892

submitted by klrr_
[link] [7 comments]
Amdahl's law for predicting the future of multicores considered harmful. B.H.H. Juurlink, C. H. Meenderinck, ACM SIGARCH Computer Architecture News, Volume 40 Issue 2, May 2012, Pages 1-9.
Several recent works predict the future of multicore systems or identify scalability bottlenecks based on Amdahl's law. Amdahl's law implicitly assumes, however, that the problem size stays constant, but in most cases more cores are used to solve larger and more complex problems. There is a related law known as Gustafson's law which assumes that runtime, not the problem size, is constant. In other words, it is assumed that the runtime on p cores is the same as the runtime on 1 core and that the parallel part of an application scales linearly with the number of cores. We apply Gustafson's law to symmetric, asymmetric, and dynamic multicores and show that this leads to fundamentally different results than when Amdahl's law is applied. We also generalize Amdahl's and Gustafson's laws and study how this quantitatively affects the dimensioning of future multicore systems.
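The contrast between the two laws can be made concrete (these are the standard textbook formulations, assumed rather than taken from the paper; f is the parallelizable fraction, p the number of cores):

```haskell
-- Amdahl: fixed problem size; speedup is bounded above by 1 / (1 - f).
-- Gustafson: fixed runtime; the parallel part grows with p, so the
-- scaled speedup grows linearly in p.
amdahl, gustafson :: Double -> Double -> Double
amdahl    f p = 1 / ((1 - f) + f / p)
gustafson f p = (1 - f) + f * p
```

With f = 0.9 and p = 100, Amdahl gives a speedup of about 9.2 while Gustafson gives 90.1, which is why the two laws lead to such different conclusions about how to dimension future multicores.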
Holden Karau: I've written a book "Fast Data Processing with Spark" which covers Python, Scala, and Java
Fast Data Processing with Spark covers how to write distributed MapReduce-style programs with Spark. The book guides you through every step required to write effective distributed programs, from setting up your cluster and interactively exploring the API to deploying your job to the cluster.
Personally, while the fast nature of Spark is not to be understated, I really enjoy its functional style APIs and find it a lovely environment to code in.
Basically, how would one insert a node at the nth level of a tree, such as the one from Data.Tree?
Example:

    insertAtLevel 2 (Node 3 [Node 4 [], Node 5 [Node 2 []]]) (Node 3 [])

would give

    Node 3 [Node 4 [], Node 5 [Node 2 [], Node 3 []]]

submitted by Undo_all
[link] [8 comments]
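One way to answer it (a hedged sketch; the name `insertAtLevel` comes from the post, but the exact semantics are my assumption) is to recurse down the tree, appending the new subtree to the children of every node at level n - 1, counting the root as level 0:

```haskell
import Data.Tree (Tree (..))

-- Insert `new` so that it ends up at level n (root = level 0), by
-- appending it to the children of every node at level n - 1.
insertAtLevel :: Int -> Tree a -> Tree a -> Tree a
insertAtLevel 1 (Node x ts) new = Node x (ts ++ [new])
insertAtLevel n (Node x ts) new
  | n > 1     = Node x [ insertAtLevel (n - 1) t new | t <- ts ]
  | otherwise = Node x ts          -- n <= 0: nothing sensible to do
```

Note that unlike the post's expected output, this inserts under every node at the level above (under Node 4 as well as Node 5); inserting along a single branch would additionally need a path argument to select that branch.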
I was just messing around with the loeb function that was posted here a few days ago and wanted to try to make it effectful:

    mmoeb f x = mfix go where go g = f ($ g) x

    mloeb :: (MonadFix m, Traversable t) => t (t b -> m b) -> m (t b)
    mloeb = mmoeb mapM
Then we just make up an IO-indexing function like:

    getIndex xs = getLine >>= (\x -> return $ xs !! read x)
and then you can run:

    mloeb [getIndex, getIndex, const $ return 3]
which works like you'd expect. Obviously you can substitute for mapM anything that lets you do:

    fmapM :: (Monad m, Functor f) => (a -> m b) -> f a -> m (f b)
I'd kind of like to be able to provide a 'seed' value: for example, if I wanted to make a Game of Life, I could have a grid of the rule functions and a seed grid, and just take the sequence. I'm not sure exactly how to do that, though.

submitted by onomatic
[link] [7 comments]
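As a sanity check of the definitions above (a small demo I'm adding, not from the post), the knot-tying also works in a pure MonadFix such as Identity, where mloeb recovers the behaviour of the original loeb:

```haskell
import Control.Monad.Fix (MonadFix, mfix)
import Data.Functor.Identity (Identity (..))

-- mmoeb/mloeb exactly as in the post
mmoeb f x = mfix go where go g = f ($ g) x

mloeb :: (MonadFix m, Traversable t) => t (t b -> m b) -> m (t b)
mloeb = mmoeb mapM

-- A tiny pure "spreadsheet": cell 0 is constant, cell 1 reads cell 0.
demo :: [Int]
demo = runIdentity (mloeb [const (pure 1), \cells -> pure (head cells + 1)])
```

Evaluating `demo` gives `[1, 2]`: mfix threads the final list back into each cell's function, just as loeb does, but inside a monad. Identity's lazy newtype pattern matching is what lets the knot tie here; a strict monad would diverge.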