News aggregator

Baltimore Haskell Users Group?

Haskell on Reddit - Fri, 04/25/2014 - 3:14pm

So I think Haskell is rad. But it's also a bit confusing. I'd like to get better. I lazily Googled and couldn't find any mention of a Baltimore-area users group. To the best of anyone's knowledge, does one exist? Sorry if this is the wrong sub for this.

submitted by sdroadie
[link] [20 comments]
Categories: Incoming News

mth

del.icio.us/haskell - Fri, 04/25/2014 - 2:09pm
Categories: Offsite Blogs

Which language to use for working through Okasaki?

Haskell on Reddit - Fri, 04/25/2014 - 1:46pm

I'm starting to work through Okasaki's 'Purely functional data structures'. The main text uses Standard ML, which is strict (except for the lazy keyword, which is used only occasionally). The appendix contains a translation of the programs into Haskell, which is lazy (seq and friends are not used). Does that mean they are not equivalent? I would imagine that the question of (non-)laziness is crucial for analyzing the running time of the algorithms.
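For reference, the crux of that question shows up in the book's banker's queue. Here is a rough Haskell rendering (my sketch, not the appendix's verbatim code): the suspended f ++ reverse r comes for free from Haskell's laziness, which is exactly what the book's made-up 'lazy' keyword has to supply in SML.

data Queue a = Queue !Int [a] !Int [a]  -- front length, front, rear length, rear

empty :: Queue a
empty = Queue 0 [] 0 []

-- Restore the invariant |rear| <= |front|. The rotation 'f ++ reverse r'
-- is a suspension whose cost the amortized analysis charges to earlier snocs.
check :: Int -> [a] -> Int -> [a] -> Queue a
check lf f lr r
  | lr <= lf  = Queue lf f lr r
  | otherwise = Queue (lf + lr) (f ++ reverse r) 0 []

snoc :: Queue a -> a -> Queue a
snoc (Queue lf f lr r) x = check lf f (lr + 1) (x : r)

uncons :: Queue a -> Maybe (a, Queue a)
uncons (Queue _ [] _ _)        = Nothing
uncons (Queue lf (x : f) lr r) = Just (x, check (lf - 1) f lr r)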

Also, if I choose Standard ML, which implementation should I use? It looks like the book uses a made-up dialect with a 'lazy' keyword and a $ operator.

submitted by MintyGrindy
[link] [13 comments]
Categories: Incoming News

Lens from the ground up

Haskell on Reddit - Fri, 04/25/2014 - 1:39pm
Categories: Incoming News

What is this meta-pattern?

Haskell on Reddit - Fri, 04/25/2014 - 11:08am

Consider the following type classes:

class Functor f => Applicative f where
  pure  :: a -> f a
  (<*>) :: f (a -> b) -> f a -> f b

class Functor f => Monoidal f where
  unit :: f ()
  (*)  :: f a -> f b -> f (a,b)

These are equivalent, but that's tangential. Consider instead the similarity in types between (<*>) and ($), unit and (), and (*) and (,). They're all very similar, in that we can derive the types of the former from the latter like so:

derive f (a -> b) = f a -> derive f b
derive f a        = f a

So the pattern here is: hit each arg with the functor, and also the final return type. Somehow, Applicative is just "functions" with this derived type, and Monoidal is just "tuples" with this derived type.
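As it happens, the derive rule above can be written down directly as a closed type family. Here is a sketch of mine (hypothetical names; needs GHC 7.8 or later). The witnesses use monomorphic argument types, because a bare type variable would leave the family application stuck:

{-# LANGUAGE TypeFamilies #-}

type family Derive (f :: * -> *) (t :: *) where
  Derive f (a -> b) = f a -> Derive f b
  Derive f a        = f a

-- Derive f ((Int -> Bool) -> Int -> Bool) = f (Int -> Bool) -> f Int -> f Bool,
-- the type of (<*>) at those instantiations:
apDerived :: Applicative f => Derive f ((Int -> Bool) -> Int -> Bool)
apDerived = (<*>)

-- Derive f () = f (), the type of unit:
unitDerived :: Applicative f => Derive f ()
unitDerived = pure ()

-- Derive f (Int -> Bool -> (Int, Bool)) = f Int -> f Bool -> f (Int, Bool):
pairDerived :: Applicative f => Derive f (Int -> Bool -> (Int, Bool))
pairDerived fa fb = (,) <$> fa <*> fb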

What's the general principle here? Are there other similar patterns, e.g. for monad's type pattern? What's going on here, in general, when some structure like functions or tuples is lifted into these functorial domains?

submitted by psygnisfive
[link] [11 comments]
Categories: Incoming News

Cryptol

del.icio.us/haskell - Fri, 04/25/2014 - 9:12am
Categories: Offsite Blogs

Novice haskell programmer here --- having trouble getting contents of file

Haskell on Reddit - Fri, 04/25/2014 - 8:29am

How can I get the contents of this file back to my main program and still be able to close the file?

getContentsOfFile :: String -> IO String
getContentsOfFile f = do
  handle <- openFile "PriceData/Daily/Yhoo.csv" ReadMode
  contents <- hGetContents handle
  --hClose handle
  return contents

main :: IO ()
main = do
  contents <- getContentsOfFile "PriceData/Daily/Yhoo.csv"
  putStr contents

submitted by dhjdhj
[link] [11 comments]
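A hedged sketch of one common fix (mine, not from the thread): hGetContents reads lazily, so closing the handle first truncates the result; forcing the contents while the handle is still open avoids that. This version also actually uses the argument f, which the original ignores:

import System.IO
import Control.Exception (evaluate)

getContentsOfFile :: FilePath -> IO String
getContentsOfFile f = withFile f ReadMode $ \handle -> do
  contents <- hGetContents handle
  _ <- evaluate (length contents)  -- force the whole file before withFile closes the handle
  return contents

main :: IO ()
main = getContentsOfFile "PriceData/Daily/Yhoo.csv" >>= putStr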
Categories: Incoming News

Feedback on first Haskell program

Haskell on Reddit - Fri, 04/25/2014 - 8:20am

Hello,

I wrote my first Haskell program; it is a command-line utility to parse JSON and apply a Lens expression to the input.
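(To give a rough idea of the shape of such a tool, here is a sketch of mine using lens-aeson; it is not the poster's actual code, which is linked below:)

{-# LANGUAGE OverloadedStrings #-}
import Control.Lens ((^?))
import Data.Aeson.Lens (key, _String)
import qualified Data.ByteString.Lazy.Char8 as BL

main :: IO ()
main = do
  json <- BL.getContents
  -- e.g. prints Just "Ada" for input {"name":{"first":"Ada"}}
  print (json ^? key "name" . key "first" . _String)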

You can see the program on Github

There are just a few lines of code, so maybe there is not much to comment on, but I would appreciate any feedback or improvement suggestions.

Thank you!

submitted by Kototama
[link] [4 comments]
Categories: Incoming News

Cryptol

del.icio.us/haskell - Fri, 04/25/2014 - 2:47am
Categories: Offsite Blogs

Thoughts on Hakyll vs. Octopress for an experienced Haskeller but HTML newbie

Haskell on Reddit - Thu, 04/24/2014 - 11:31pm

I'm starting a personal website/blog, just the typical thing you see a lot of programmers keeping. I'm looking at using Github Pages, but as I don't want to hand-code everything, I'll be using a static site generator. I'm choosing between Hakyll and Octopress, but I wanted to get some opinions before I put in the effort of learning and installing one.

I'll start by saying that I'm comfortable with Monads, EDSLs, and Haskell in general. Conversely, I have next to no HTML/CSS skill. What I'm really wondering is, how much hand-HTML does Hakyll require you to do?

In particular:

  1. Does Hakyll have any support for visual "themes"? If not, are there libraries/examples of visual layouts I could choose from?

  2. Does Hakyll integrate well with Github pages?

  3. How hard is it to get things like a Facebook like button, twitter sidebar, comments, etc. working in Hakyll? Is there anything equivalent to Octopress's plugins?

  4. Octopress advertises itself as dealing well with code snippets. Does Hakyll deal well with code within pages?

I realize that many of these things will not be built into Hakyll, so cases where it's not built in but there are lots of easy examples to take from are welcome.
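(For context, here is a minimal sketch of what a Hakyll site definition looks like, using Hakyll's standard combinators; the file paths are hypothetical. The hand-written HTML is confined to the templates directory:)

{-# LANGUAGE OverloadedStrings #-}
import Hakyll

main :: IO ()
main = hakyll $ do
  -- copy stylesheets through, minified
  match "css/*" $ do
    route idRoute
    compile compressCssCompiler

  -- markdown posts: pandoc renders them, a template wraps them
  match "posts/*.md" $ do
    route (setExtension "html")
    compile $ pandocCompiler
      >>= loadAndApplyTemplate "templates/post.html" defaultContext
      >>= relativizeUrls

  match "templates/*" $ compile templateCompiler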

Recommendations of one over the other are welcome, but I'm mostly looking for feedback and knowledge from people who have used Hakyll (and Octopress, though that's more tangential to this subreddit).

Thanks!

EDIT: Generalized to "comments" instead of just "disqus comments"

submitted by jmite
[link] [15 comments]
Categories: Incoming News

Douglas M. Auclair (geophf): 'T' is for Theorem-proving

Planet Haskell - Thu, 04/24/2014 - 8:36pm

'T' is for Theorem-proving
So, you've been theorem-proving all your life.
Let's just get that one out there.
When you're given a math problem that says:
"Solve x + 1 = 2 (for x) (of course)"
And you say, "Why, x is 1, of course," quick as thinking, you've just proved a theorem.
Here, let me show you:
x + 1 = 2    (basis)
   -1 = -1   (reflexive)
x + 0 = 1    (addition)
x     = 1    (identity)
Q.E.D. ('cause you rock)
So, yeah. Theorem proving is this, correction: theorem proving is simply this: going from step 1, to step 2 to step 3 until you've got to where you want to be.
How do you go from step 1 to step 2, etc? The same way you do everything! You follow the rules you're given.
Let's prove a theorem, right now.
So, in classical logic, we have a theorem that says
p → p
That is, if you've got a p, that implies (that you've got a) p.
Toughie!
But proving that? How do we go about doing that?
Well, in classical logic, you're given three basic axioms (thanks to sigfpe for his article "Adventures in Classic Land"):
1. p → (q → p)
2. (p → (q → r)) → ((p → q) → (p → r))
3. ¬¬p → p
So, p → p. How do we get there? Let's start with axiom 1 above.
1. p → ((p → p) → p)                                 axiom 1 (where q = (p → p))
2. (p → ((p → p) → p)) → ((p → (p → p)) → (p → p))   axiom 2 (where q = (p → p) and r = p)
3. (p → (p → p)) → (p → p)                           modus ponens (1, 2)
4. p → (p → p)                                       axiom 1 (where q = p)
5. p → p                                             modus ponens (4, 3)
Q.E.D. (a.k.a. whew!)
I used something called modus ponens in the above proof. It says, basically, that if you've proved something that you're depending on, you can drop it. Or, logically:
p → q, p
∴ q
Or: we have "p implies q" and we've proved p, so now we can just use q.
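(An aside in Haskell, since we're on Planet Haskell; this sketch is mine, not the original post's. By the Curry-Howard correspondence, axiom 1 is the type of const, the K combinator, axiom 2 is the type of the S combinator, and the five-step proof above is literally the term S K K:)

axiom1 :: p -> (q -> p)
axiom1 = const            -- K

axiom2 :: (p -> (q -> r)) -> ((p -> q) -> (p -> r))
axiom2 f g x = f x (g x)  -- S

-- steps 1-5 of the proof, replayed as term construction: S K K = id
identity :: p -> p
identity = axiom2 axiom1 axiom1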
Now, a lot of thought has been put into logic, coming up with theorems and proving them, over a long time, from Aristotle up to now. And the big question for a long while was this:
Well, theorem proving is a lot of hard work. Is there any way to automate it?
So there's been this quest to make theorem proving mechanical.
But to make theorem-proving mechanical, you have to mechanize everything, the axioms, the rules, and the process. Up until recent history, theory has been a lot of great minds spouting truths, and 'oh, here's a new way to look at the world!'
And that has been great (it has), but it hasn't helped mechanize things.
Then along came Frege. What Frege did was to give us a set of symbols that represented logical connectives and then symbols that represented things.
And there you had it: when you have objects and their relations, you have an ontology, a knowledge base. Frege provided the tools to represent knowledge as discrete things that could be manipulated by following the rules of the (symbolized) relations.
He gave abstraction to knowledge and a uniform way of manipulating those abstractions, so, regardless of the underlying knowledge, be it about money or chickens or knowledge itself, it could be represented and then manipulated.
That way you could go from step 1 to step 2 to step 3, etc, until you arrived at your conclusion, or, just as importantly, arrived at a conclusion (including one that might possibly say what you were trying to prove was inadmissible).
Since Frege, there has been (a lot of) refinement to his system, but we've been using his system ever since, because it works so well. He was the one who came up with the concept of types ('T' is for Types), and from that we've improved logic to the point of delivering this blog post to you on a laptop that is, at base, a pile of sand constantly proving theorems in a descendant of Frege's logic.
Let's take a look at one such mechanized system. It's a logical framework called Twelf. The first example proof from the Princeton team that uses this system is 'implies true,' and it goes like this:
imp_true: pf B -> pf (A imp B) = ...
That's the declaration. It's saying: the proof of B, or pf B, implies the proof of A implies B, or pf (A imp B).
How can you claim such a thing?
Well, you prove it.
imp_true: pf B -> pf (A imp B) =
   [p1: pf B]   % you have the proof of B
                % ... but now what? for we don't have the proof that A implies B.
So the 'now what' is our theorem-proving adventure. Mechanized.
We don't have the proof that A implies B, but we have a logical loophole (called a hole): a stand-in that asserts, for any proposition we name, a proof of it:
hole: {C} pf C.
Which says, mechanized, what I just said in words. So, with that:
imp_true: pf B -> pf (A imp B) = [p1: pf B] (hole (A imp B)).
And there you go.
... that is, if you're fine with a big, gaping loophole in your proof of such a thing.
If you are satisfied, then, well, here, sit on this rusty nail until you get unsatisfied.
So there.
Okay, so we have an implication, but we need to prove that.
So we introduce the implication into the proof, which is defined as:
imp_i: (pf A -> pf B) -> pf (A imp B)
SO! we have that, and can use it:
imp_true: pf B -> pf (A imp B) = [p1 : pf B] (imp_i ([p2: pf A] (hole B))).
So, we have the proof of A and that leads to B. But wait! We already have B, don't we? It's p1 (that is [p1 : pf B])
BAM!
imp_true: pf B -> pf (A imp B) = [p1 : pf B] (imp_i ([p2 : pf A] p1)).
DONE!
This is what theorem-proving is: you start with what you want, e.g.:
imp_true: pf B -> pf (A imp B)
which is "implies-true is that if you have B proved, then you have the proof that (A implies B) is true, too."
Then you take what you've got:
pf B
And give it a value:
[p1 : pf B]
Then you apply your rules (in this case, implication introduction, or 'imp_i') to fill the holes in your proof, to get:
[p1: pf B] (imp_i [p2: pf A] p1)
Which is the demonstration of your proof, and that's why we say 'Q.E.D.', or 'quod erat demonstrandum': 'which was to be demonstrated.'
Ta-dah! Now you have all the tools you need to prove theorems.
Now go prove Fermat's Last Theorem.
eheh.
(I am so mean!)
Okay, okay, so maybe Fermat's Last Theorem is a bit much, but if you want to try your hand at proving some theorems, there's a list of some simple theorems to prove.
Categories: Offsite Blogs

Lenses don't compose backwards.

Haskell on Reddit - Thu, 04/24/2014 - 8:34pm

Lenses don't compose backwards. I know that there are a lot of people who believe this, but it's just not true. I'm not sure where this misunderstanding comes from, but I suspect it's from thinking of lenses as fancy field accessors. After all, a field accessor composes left to right, doesn't it? If I have t :: (a,(b,c)) and I want to get out the b element, I have to do fst.snd :: (a,(b,c)) -> b. The lens way to do it is all wonky, by comparison: while _1 corresponds to fst and _2 to snd, I have to write _2._1, which is backwards!

Except all of that is wrong. Lenses are not accessors. The right way to think about lenses is as focusers. A lens "focuses" on a particular location in a structure, to do something in that location. Let's look at another kind of focuser first: map functions. Consider the following three map functions:

map      :: (a -> b) -> [a] -> [b]
mapTree  :: (a -> b) -> Tree a -> Tree b
mapMaybe :: (a -> b) -> Maybe a -> Maybe b

Now of course these are all familiar to you and are the fmaps of the obvious functors, but look at what they do abstractly: they take an action on elements --- an a -> b --- and lift it to actions on structures containing those elements --- [a] -> [b], etc. That is to say, they focus simultaneously on all of the a elements in a structure, and then let you act on those elements. Now look at the type of a lens:

_1 :: Functor f => (a -> f b) -> (a,c) -> f (b,c)

If we ignore the functor stuff briefly, we can see that it looks an awful lot like the type of a map function. In fact, with the right choice of f, it is a map function! Namely, it would be

mapFst :: (a -> b) -> (a,c) -> (b,c)

Lenses, like map functions, focus on elements. However, unlike map functions, lenses have the capacity to focus on individual elements, rather than all elements simultaneously, and they don't simply make it possible to act on the elements but also to retrieve them. The functor stuff in that type gives us options: pick f = Identity and we get a map function, pick f = Const a and we get back an accessor, etc.
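(Spelling that last remark out for _1, with a small runnable sketch of my own:)

import Data.Functor.Identity (Identity (..))
import Data.Functor.Const (Const (..))

_1 :: Functor f => (a -> f b) -> (a, c) -> f (b, c)
_1 f (a, c) = fmap (\b -> (b, c)) (f a)

-- f = Identity recovers the map function mapFst:
mapFst :: (a -> b) -> (a, c) -> (b, c)
mapFst g = runIdentity . _1 (Identity . g)

-- f = Const a recovers the accessor fst:
getFst :: (a, c) -> a
getFst = getConst . _1 Const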

But, how does this explain why lenses don't in fact compose backwards? Well, if lenses are analogous to maps, but more general, then we ought to look at how maps compose. Here's a perfectly good composition of map functions:

map.mapTree.mapMaybe :: (a -> b) -> [Tree (Maybe a)] -> [Tree (Maybe b)]

Notice how the left-most map function is associated with the outermost structure, and so forth. This makes complete sense from the perspective of maps-as-focusers: the first map focuses on elements of a [], the second on elements of a Tree, and the third on elements of a Maybe. When we finally feed it some action to perform, it pushes that action down into the focused locations, and performs it. Nothing mysterious.

But now let's look at some special other map functions:

mapHead :: (a -> a) -> [a] -> [a]
mapHead f (x:xs) = f x : xs

mapTop :: (a -> a) -> Tree a -> Tree a
mapTop f (Branch x l r) = Branch (f x) l r

mapJust :: (a -> b) -> Maybe a -> Maybe b
mapJust f (Just x) = Just (f x)

We can compose these together like so:

mapHead.mapTop.mapJust :: (a -> a) -> [Tree (Maybe a)] -> [Tree (Maybe a)]

This is also not a magical thing to do. It's just map, mapTree, and mapMaybe specialized to only apply in one location instead of all locations. So these map functions only focus on one place. This composite map mapHead.mapTop.mapJust is not mysterious at all, and definitely isn't "composed backwards".

If you agree with me so far, then you now know why lenses don't compose backwards. Why? Because, remember, if we pick our lens's f correctly, we get a map. And guess what? That's just what we've done:

_head :: Functor f => (a -> f a) -> [a] -> f [a]
_top  :: Functor f => (a -> f a) -> Tree a -> f (Tree a)
_just :: Functor f => (a -> f b) -> Maybe a -> f (Maybe b)

_head._top._just :: Functor f => (a -> f a) -> [Tree (Maybe a)] -> f [Tree (Maybe a)]

_head at f = Identity is just mapHead, _top is just mapTop, and _just is just mapJust.

The composite lens just peeks into the head of the list, into the top tree node's label, and into the element of the Maybe, and applies a function. It pushes down one layer of structure at a time, outside in. But that's not backwards at all, since these are just those very same maps focused on only one element instead of all of them, with a slightly fancier type!

So how can maps compose normally, but when we rename them and make their types slightly more general, they suddenly compose "backwards"? They can't.

Lenses don't compose backwards.

submitted by psygnisfive
[link] [35 comments]
Categories: Incoming News

How to get Haskell value out of request url using Yesod

Haskell on Reddit - Thu, 04/24/2014 - 7:18pm

Suppose I have a Yesod route like:

/foo/#SomeDataType

Is there a handler that will parse a request to /foo/data to give me a SomeDataType? Surely Yesod itself must parse the route and bind a SomeDataType value to a handler. I want to do something like that myself. Where should I look?
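(A hedged pointer, not from the thread: Yesod parses # route pieces via the PathPiece class from the path-pieces package, so fromPathPiece can be reused directly. SomeDataType below is a stand-in:)

import Data.Text (Text)
import Web.PathPieces (PathPiece (..))

newtype SomeDataType = SomeDataType Int
  deriving (Show)

instance PathPiece SomeDataType where
  toPathPiece (SomeDataType n) = toPathPiece n
  fromPathPiece t = SomeDataType <$> fromPathPiece t

-- parse one URL path segment the way Yesod's generated router does
parseSegment :: Text -> Maybe SomeDataType
parseSegment = fromPathPiece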

submitted by 2piix
[link] [2 comments]
Categories: Incoming News

Ken T Takusagawa: [ffuprlwq] Counting

Planet Haskell - Thu, 04/24/2014 - 6:45pm

import Control.Monad (liftM2);

countWith :: [a] -> [[a]];
countWith l = [] : liftM2 (flip (:)) (countWith l) l;

Enumerates the length-zero list, then all the length-1 lists, then the length-2 lists, and so on, for a given list of elements (the "digits"). Vaguely inspired by Perl's .. operator, which creates a list, e.g., for("a".."zzz"). Unlike Perl, the lists increase little-endianly.

> countWith "01"
["", "0", "1", "00", "10", "01", "11", "000", "100", "010", "110", "001", "101", "011", "111", "0000", "1000", "0100", "1100", "0010", "1010", "0110", "1110", "0001", "1001", "0101", "1101", "0011", "1011", "0111", "1111", "00000", "10000", "01000", "11000", "00100", "10100", "01100", "11100", "00010" ...

Here is a more understandable way of writing it. The list is used as the nondeterminism monad.

countWith l = [] : do { t <- countWith l; h <- l; return $ h:t; };
Categories: Offsite Blogs