I was just looking back at my first big Haskell project, which you can find here: https://github.com/Julek/klyslee
It's a small "A.I." which creates music through genetic algorithms. I put a couple of the tunes it made at various points online at https://soundcloud.com/klyslee/tracks (plus one now as a "restart"; be careful of your ears on the earliest ones!).
Looking back on this code, it's a bit embarrassing now, so I was wondering: what have you guys learnt looking back at your first big projects? submitted by julek1024
[link] [17 comments]
Now that I've been using Haskell for almost a year I find going back to imperative languages incredibly frustrating. I went and brushed up on my Python and the biggest problem I had with it was how verbose it is (which frustrated me more than the lack of compile time optimisation or type safety...).
I realise that what I'm really liking about Haskell more than anything is how terse it is. Once you understand the syntax, it's not just quick to write but also easy to understand.
I've seen a bit of J around and it looks like it might push that a bit far. Anyone have any experience with it? Once you understand the syntax, is it possible to decipher what someone else's code means, or is it a "write-only language"?
What imperative languages do Haskellers out there like? submitted by TheCriticalSkeptic
[link] [84 comments]
Fellow archers (haha),
I've been working with Arrows and their various formulations recently. In particular, I've created a quasiquoter ArrowInit for proc-do notation that allows me to implement the CCA package without a pre-processor, by de-sugaring the proc/do and then lifting functions appropriately. I am interested in a more generic endeavor: using a similar quasiquoter to take arbitrary notation and optimize/normalize it while degrading gracefully. The more restrictive formulations (DeepArrow, circat, GArrow, Profunctor) allow for optimizations. My idea would be for a quasiquoter to detect which variant is possible for a particular expression, and then to instantiate for that restricted version.
There seems to be a constant low-level history of this sort of effort regarding Arrows, and I think the key is to expose the nice features of proc-do notation in these efforts. If this works, someone can draw out something like:

    runItA = runConcurrently [arrow|
      proc (a,b) -> do
        y <- getURL -< a
        z <- getURL -< b
        return (y,z) |]
and obtain the equivalent of using (<*>) or (***), where

    runItV = [arrow|
      proc n -> do
        a <- A -< n
        y <- B -< a
        z <- C -< a
        return (y,z) |]
would be something like A >>> (B &&& C), but in a restricted arrow (note: no use of arr, and all right-hand-side expressions are unchanged variables, so no arbitrary functions are needed). This allows users to define optimized versions of <*>, ***, &&&, and so on, while still getting to use comfortable proc notation with a range of abstractions.
Which of the restricted Arrow approaches would be ideal, and what sort of direction should an effort like this go in? Some similar approaches seem to have died out and I'd like to avoid that fate. Is there a particular project I should join forces with? Any direction/advice would be appreciated.
In a sense, this is similar to the ApplicativeDo proposal, but a step in between Applicative and Monad. ApplicativeDo notices when an expression can become an applicative; "ArrowDo" may notice when an expression can become an arrow-like expression.

    -- (syntax is a bit off, but the idea should be clear)
    runItC n = do
      a <- A -< n
      b <- B -< a
      C -< (a,b)
This has no Applicative expression due to the reuse of a, but it can be an Arrow, A >>> (returnA &&& B) >>> C, instead of a monad. It can also be a 'restricted Arrow' due to the lack of arbitrary expressions, only tupling rearrangement.

    runItD n = do
      Just a <- A -< n+1
      b <- B -< a+2
      d <- D -< a+3
      C -< (a,b+d)
This variant should look something like:

    arr (\n -> n+1) >>> A >>> arr (\(Just a) -> (a,(a+2,a+3)))
      >>> second (B *** D >>> arr (\(b,d) -> b+d)) >>> C
I think all this should be possible. Worthwhile? Needed? I'm not sure, but I'm willing to give it a shot. It also connects various abstractions cleanly into a single framework. So far I've been able to implement the ArrowInit variant of this scheme. submitted by tomberek
[link] [12 comments]
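The combinator shape discussed above can be sketched with plain functions, i.e. the (->) instance of Arrow. The functions aA, aB, aC below are hypothetical stand-ins for the arrows A, B, C in the post; this is only an illustration of what A >>> (B &&& C) computes, not the quasiquoter itself:

    import Control.Arrow ((>>>), (&&&))

    -- Hypothetical stand-ins for the arrows A, B, C above.
    aA, aB, aC :: Int -> Int
    aA = (+ 1)
    aB = (* 2)
    aC = subtract 3

    -- A >>> (B &&& C): run the input through A, then fan the
    -- result out to B and C and pair their outputs.
    runItV :: Int -> (Int, Int)
    runItV = aA >>> (aB &&& aC)

    main :: IO ()
    main = print (runItV 10)  -- aA 10 = 11, so (aB 11, aC 11) = (22,8)

Note that no arr appears here either: the whole pipeline is built from >>> and &&&, which is exactly what makes the restricted-arrow optimizations possible.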
I tried this in GHC:

    main = do
      let a = 2
      print a
      let a = a + 1
      print a
It prints "2" and hangs. But "let" is supposed to create a new scope, right? The "a" on the left-hand side of "a = a+1" and the "a" on the right-hand side are different, so why the (apparent) infinite recursion?
I'm told that the above is just a sugared version of the following:

    main = (\a -> (print a >> ((\a -> print a) (a+1)))) 2
And this works fine. It prints "2", then "3", and then quits.
So why doesn't the first one do the same thing? submitted by ggchappell
[link] [19 comments]
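The short answer, sketched below: let-bindings in Haskell are recursive (letrec), so in "let a = a + 1" the a on the right-hand side refers to the very a being defined, which loops forever. A let in do-notation desugars to "let ... in ...", not to a lambda application, so the non-recursive lambda version behaves differently. Renaming the second binding avoids the problem:

    -- A let-binding is recursive: "let a = a + 1" defines a in
    -- terms of itself and never terminates. Using a fresh name
    -- makes the reference to the outer binding unambiguous.
    main :: IO ()
    main = do
      let a = 2 :: Int
      print a            -- prints 2
      let b = a + 1      -- refers to the outer a
      print b            -- prints 3

Recursive lets are what make definitions like "let xs = 1 : xs" work, so this behaviour is a feature; it just bites when you try to shadow a name with a definition that mentions it.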
Hello, just wanted to know where the issues in each of these code snippets are coming from.
Couldn't match expected type `Integer' with actual type `m0 Integer'. Just trying to return an altered result. What would the declaration be if I wanted to return possible doubles as well?

    f :: Integer -> Integer
    f x = do
      return ((x*x*x*x*x) - 10*(x*x*x) + 30*(x))
Couldn't match type `Int' with `Double'. Basically want to return fractions, passing in numerous parameters as well.

    ternarySearch :: IO () -> Int -> Int -> Int -> Int -> Double
    ternarySearch f a b tau = do
      if (abs (b - a) < tau)
        then do return (a + b) / 2
        else do return 5

submitted by DESU-troyer
[link] [4 comments]
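A sketch of the likely fixes (the names and signatures here are guesses reconstructed from the error messages): the first error comes from wrapping a pure result in do/return, which forces a monadic type m0 Integer; the second comes from mixing Int and Double, which fromIntegral bridges:

    -- Pure arithmetic needs no do/return; dropping the wrapper
    -- removes the "actual type m0 Integer" mismatch.
    f :: Integer -> Integer
    f x = (x*x*x*x*x) - 10*(x*x*x) + 30*x

    -- To produce a Double from Int inputs, convert explicitly.
    -- (midpoint is an illustrative name, not from the post.)
    midpoint :: Int -> Int -> Double
    midpoint a b = fromIntegral (a + b) / 2

    main :: IO ()
    main = do
      print (f 2)          -- 32 - 80 + 60 = 12
      print (midpoint 1 4) -- 2.5

If f should "return possible doubles as well", one option is a polymorphic signature such as f :: Num a => a -> a, which works at both Integer and Double.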
Sandboxes are great, but having to build them from scratch is time (and disk space) consuming, which is annoying. Wouldn't it be great if we could copy an existing one when starting a new one? Sadly, this is known not to work, but what doesn't seem to be so widely known is that we can copy package DBs, so that we can have packages registered in one sandbox that live in another. This can be a huge time saver.
Details in the "Copying Sandboxes" section of Comprehensive Haskell Sandboxes, Revisited. submitted by edsko
[link] [13 comments]