Does Haskell have any use in the real world that cannot be accomplished by traditional sequential programming (imperative, I believe it's called)?
Most of the applications I've seen are just remakes of some other ones, but in Haskell. Based on this, why should I give up a good chunk of my time to learn Haskell rather than giving up a good chunk of my time to learn more about, say, C++ (OpenGL, Unreal, etc.) or C# (Qt, Unity)?
submitted by Zyborg23
[link] [12 comments]
A couple months ago, after finally "getting" Monads and do-notation, I was hit with a sense that the answer was very anti-climactic. I was also kind of annoyed, because it felt like this allegedly-complicated thing could be explained much more succinctly, without any cute language, without any metaphors, without allusions to any other programming languages, and in a way that really sticks.
Basically, start by assuming a competent reader.
Then demonstrate data types, and then pattern-based overloading.
Don't say things like, "Haskell is about composability". (That meant nothing for a few chapters.)
Instead, talk about:
- function-composition operators like the '.' in "(f . g) x" (and '.' is a great starting point, because most people remember it from math class);
- the fact that such operators allow chaining, as in "(f3 . f2 . f1) x" (again, easy to grasp);
- if they know any POSIX shell scripting (or even scripting with cmd.exe), talk about the shell pipe operator as another kind of function-composition operator that allows chaining with "command1 <x | command2 | command3" (ok, so there's one metaphor, except that it's not that far off);
- the fact that Haskell encourages the use of many different kinds of function-composition operators;
- the fact that >>= is one such operator, which naturally leads into
- the fact that >>= allows operations on x to be chained as in "x >>= f1 >>= f2 >>= f3"; and
- the fact that >>= is defined using the same pattern-based overloading mentioned earlier (so that the >>= evaluated at run-time depends on the left-side operand).
Then talk about do-notation de-sugaring and the monad laws (to show when the nested lambdas are equivalent to the above chaining)... and then you're pretty much done.
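For example, here's the kind of minimal sketch I have in mind (the halve example is my own illustration, not from any existing tutorial):

    -- ">>=" picks its implementation from the type of the left-side operand;
    -- here that type is Maybe, so a Nothing anywhere aborts the chain.
    halve :: Int -> Maybe Int
    halve n = if even n then Just (n `div` 2) else Nothing

    -- Chaining with >>=, exactly as in "x >>= f1 >>= f2 >>= f3":
    chained :: Maybe Int
    chained = Just 40 >>= halve >>= halve >>= halve   -- Just 5

    -- The same chain in do-notation; GHC de-sugars it to the >>= version:
    sugared :: Maybe Int
    sugared = do
      a <- halve 40
      b <- halve a
      halve b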
(And if they're confused in any particular part of that, they can ask, and we can answer with a couple of simple examples.)
Then you can talk about composability.
For someone with experience in C, C++, Python, or Java, that should be doable in a very short time.
And then the reader (or viewer; maybe it should be a Khan Academy-style video) can understand simple programs that use IO inside do-notation in main, and then they can understand why such programs can behave correctly even though, at first glance, they don't seem to check e.g. for EOF or the absence of IO errors.
Wouldn't it have been nice if all that had been laid out for you on day one so that you could actually make sense of small, practical programs?
But then, maybe I just didn't look in the right place. Does a tutorial like that exist already?
[EDIT] /u/bss03 was one of several to give me a good reason why I don't want to try writing any such tutorial at this time.
False alarm, everyone! (:
Thanks to everyone who posted a link, especially Dan Burton, who just re-posted his Zero-Analogy blog entry in response to this thread.
[EDIT 2] Do any of these links belong in the sidebar?
submitted by 0wx
[link] [101 comments]
My first and biggest concern is simply that it's harder to read code which uses highly polymorphic functions unnecessarily.
You can see that even by considering plain old fmap vs map: it's harder to read "fmap f . fmap g" than "map f . map g", because with the former you have to manually search for more information about which Functor is being used in each case.
My second concern is that the more you overload, the more you risk having something unexpected happen:
Say we have two variables:
*Borders.Base.Utils> let a' = ["alice", "bob"]
*Borders.Base.Utils> let a = (True, a')
we want to count the characters so we type:
*Borders.Base.Utils> length $ concat a
...but we've accidentally forgotten the prime character...
... right now we get:
Couldn't match expected type ‘[[a0]]’
            with actual type ‘(Bool, [[Char]])’
In the first argument of ‘concat’, namely ‘a’
In the second argument of ‘($)’, namely ‘concat a’
so we fix it and get our desired result:
*Borders.Base.Utils> length $ concat a'
8
...but under the FTP proposals (where concat would become Data.Foldable.concat) we get:
*Borders.Base.Utils> length $ Data.Foldable.concat a
2
(because pairs are Foldable in their second argument).
This cannot be a good thing.
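To see the footgun in isolation, here is a small self-contained sketch (the names are mine) of how the Foldable instance for pairs behaves:

    import qualified Data.Foldable as F

    pair :: (Bool, [String])
    pair = (True, ["alice", "bob"])

    -- The Foldable instance for ((,) a) folds over the *second* component only,
    -- so F.concat sees just ["alice", "bob"]:
    wrong :: Int
    wrong = length (F.concat pair)       -- 2, silently

    -- What we actually wanted: the characters of the strings themselves:
    right :: Int
    right = length (concat (snd pair))   -- 8 ("alicebob")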
I believe that the more generalised functions of FTP should be opt-in (i.e. you should - as at present - need to import them explicitly).
Having problems with my indentation and alignment in Notepad++: after working on a .lhs script for a while, when I reopened it today I found everything to be unaligned. Please help!
I'm working on a Haskell program which is currently over 500 lines. I'm very careful to make sure all my functions have neat and identical alignment, but I opened Notepad++ today to discover that all my functions are no longer aligned neatly and the whole thing looks a mess. As far as I'm aware I've not done anything, and it seems to be the case with all my scripts; I wondered if anyone had any idea what is going on?
Here's an example of what I mean:

    fun :: Int -> Int
    fun a = a

has become:

    fun :: Int -> Int
    fun a = a
I know it's not super important, but it's very frustrating, and I wondered if anyone knew a fix?
EDIT: I'm an idiot, thanks for all your help, it's sorted now!
submitted by LJackso
[link] [15 comments]
Lookingglass is seeking a qualified Senior Development Engineer to join our team!
Are you an experienced senior software engineer in security, networking, cloud and big data? Are you interested in cyber security or improving the security of the Internet? Do you push yourself to be creative and innovative and expect the same of others?
At Lookingglass, we are driven and passionate about what we do. We believe that teams, not individuals, deliver great products. We inspire each other and our customers every day with technology that improves the security of the Internet and of our customers. Behind our success is a team that thrives on collaboration and creativity, delivering meaningful impact to our customers.

Skills & Requirements

Required:
- US Citizen or Green Card Holder
- Location: MD/VA or CA based
- Bachelor's or Master's degree in computer science, engineering, information systems, or mathematics
- 5-8 yrs experience with the full development life-cycle, with shipped products
- 4-8 yrs experience with functional (particularly Clojure) and OO languages – have worked in functional paradigms with immutable data models
- 5+ yrs building distributed system products including messaging, NoSQL, RPC/RMI mechanisms – including building Service Oriented Architectures
- Proficiency with data structure and algorithm analysis
- Experience working in a TDD environment
- Comfortable with aggressive refactoring
- Architectural and design skills to map a solution across hardware, software, and business layers in a distributed architecture
- Product development experience in network security, content security or cybersecurity intelligence
- Experience with CSP concurrency models
- Experience with key-value distributed databases
- Experience deploying production software in Haskell, OCaml, or Lisp
Get information on how to apply for this position.
So I've been using Haskell for a while now, and I was just reading through Wadler's "The essence of functional programming" and I saw that in that paper he refers to what we now call "return" as "unitM" (that is, the unit operation of monads). How did the standard term become "return", and was there concern about naming confusion with "return" in imperative languages?
submitted by hemamorphy
[link] [33 comments]
I write a lot about types. Up until now, however, I've only made passing references to the thing I've actually been studying in most of my free time lately: proof theory. Now I have a good reason for this: the proof theory I'm interested in is undeniably intertwined with type theory and computer science as a whole. In fact, you occasionally see someone draw the triangle

             Type Theory
              /       \
             /         \
    Proof Theory ---- Category Theory
This nicely summarizes the lay of the land in the world I'm interested in. People will often pick up something well understood in one corner of the triangle and drag it off to another, producing a flurry of new ideas and papers. It's all very exciting and leads to really cool stuff. I think the most talked-about example lately is homotopy type theory, which drags a mathematical structure (weak ∞-groupoids) off to type theory!
If you read the [unprofessional, mostly incorrect, and entirely more fun to read] blog posts on these subjects you’ll find most of the lip service is paid to category theory and type theory with poor proof theory shunted off to the side.
In this post, I'd like to jot down my notes on Frank Pfenning's introduction-to-proof-theory materials to change that in some small way.

What is Proof Theory?
The obvious question is just "What is proof theory?". The answer is that proof theory is the study of proofs. In this world we study proofs as first-class mathematical objects which we prove interesting things about. This is the branch of math that formalizes our handwavy notion of a proof into a precise object governed by rules.
We can then prove things like "Given a derivation of Γ ⊢ A and another derivation of Γ, A ⊢ B, we can produce a derivation of Γ ⊢ B." Such a theorem is utterly crazy unless we can formalize what it means to derive something.
From this we grow beautiful little sets of rules and construct derivations with them. Later, we can drag these derivations off to type theory and use them to model all sorts of wonderful phenomena. My most recent personal example was when folks noticed that the rules for modal logic perfectly capture what the semantics of static pointers ought to be.
So, in short, proof theory is devoted to answering the question that every single one of your math classes dodged:

"Professor, what exactly is a proof?"

Basic Building Blocks
In every logic that we'll study, we'll keep circling back to two core objects: judgments and propositions. The best explanation of judgments I've read comes from Frank Pfenning:
A judgment is something we may know, that is, an object of knowledge. A judgment is evident if we in fact know it.
So judgments are the things we’ll structure our logic around. You’ve definitely heard of one judgment: A true. This judgment signifies whether or not some proposition A is true. Judgments can be much fancier though: we might have a whole bunch of judgments like n even, A possible or A resource.
These judgments act across various syntactic objects. In particular, from our point of view we'll understand the meaning of a proposition by the ways we can prove it, that is, by the proofs that make A true evident.
We prove a judgment J through inference rules. An inference rule takes the form

    J₁  J₂  ...  Jₓ
    ———————————————
           J
This should be read as "when J₁, J₂, … and Jₓ hold, then so does J". Here the things above the line are premises and the ones below are conclusions. What we'll do is define a bunch of these inference rules and use them to construct proofs of judgments. For example, we might have the inference rules

                     n even
    ——————     ————————————
    0 even     S(S(n)) even
for the judgment n even. We can then form proofs to show that n even holds for some particular n.

    ——————
    0 even
    ————————————
    S(S(0)) even
    ——————————————————
    S(S(S(S(0)))) even
This tree, for example, is evidence that 4 even holds. We apply the second inference rule to S(S(S(S(0)))) first. This leaves us with one premise to show, S(S(0)) even. For this we repeat the process and end up with the new premise 0 even. For this we can apply the first inference rule, which has no premises, completing our proof.
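For Haskell readers, one way to internalize this (my own sketch, not from Pfenning's notes) is that a judgment's inference rules define a datatype of derivations, and a proof is just a value of that type:

    {-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

    data Nat = Z | S Nat

    -- One constructor per inference rule for "n even":
    data Even :: Nat -> * where
      ZeroEven :: Even 'Z                      -- the rule with no premises
      StepEven :: Even n -> Even ('S ('S n))   -- n even ⇒ S(S(n)) even

    -- The derivation tree above, showing that 4 is even:
    fourEven :: Even ('S ('S ('S ('S 'Z))))
    fourEven = StepEven (StepEven ZeroEven)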
One judgment we'll often see is A prop. It simply says that A is a well-formed proposition: not necessarily true, but syntactically well formed. This judgment is defined inductively over the structure of A. An example rule would be

    A prop  B prop
    ——————————————
      A ∧ B prop
This says that A ∧ B (A and B) is a well-formed proposition if and only if A and B are! We can imagine a whole bunch of these rules

    ——————    ——————    A prop  B prop
    ⊤ prop    ⊥ prop    ——————————————   ...
                          A ∨ B prop
that lay out the propositions of our logic. This doesn't yet tell us how to prove any of these propositions to be true, but it's a start. After we formally specify which sentences are propositions in our logic, we need to discuss how to prove that one is true. We do this with a different judgment, A true, which is once again defined inductively.
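Again in Haskell terms (a sketch of mine): the A prop rules are just an inductive definition, one constructor per formation rule:

    -- Well-formed propositions; each "... prop" rule becomes a constructor.
    data Prop
      = Top            -- ⊤ prop
      | Bot            -- ⊥ prop
      | And Prop Prop  -- A ∧ B prop
      | Or  Prop Prop  -- A ∨ B prop
      | Imp Prop Prop  -- A ⊃ B prop (appears later in the post)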
For example, we might want to give meaning to the proposition A ∧ B. To do this, we define its meaning through the inference rules for proving that A ∧ B true. In this case we have the rule

    A true  B true
    —————————————— (∧I)
      A ∧ B true
I claim that this defines the meaning of ∧: to prove a conjunction to be true, we must prove its left and right halves. The rather proof-theoretic thing we've done here is say that the meaning of something is given by what we use to prove it. This is sometimes called the "verificationist perspective". Finally, note that I annotated this rule with the name ∧I simply for convenience, so I can refer to it later.
Now that we know what A ∧ B means, what does having a proof of it imply? Well, we should be able to "get out what we put in", which would mean we'd have two inference rules

    A ∧ B true    A ∧ B true
    ——————————    ——————————
      A true        B true
We'll refer to these rules as ∧E1 and ∧E2 respectively.
Now for a bit of terminology, rules that let us “introduce” new proofs of propositions are introduction rules. Once we have a proof, we can use it to construct other proofs. The rules for how we do that are called elimination rules. That’s why I’ve been adding I’s and E’s to the ends of our rule names.
How do we convince ourselves that these rules are correct with respect to our understanding of ∧? This question leads us to the first sort of proofs-about-proofs we'll make.

Local Soundness and Completeness
What we want to say is that the introduction and elimination rules match up. This should mean that anytime we prove something by an introduction rule followed immediately by an elimination rule, we should be able to rewrite the proof to avoid this duplication. This also hints that the rules aren't too powerful: we can't prove anything with the elimination rules that we didn't have a proof for at some point already.
For ∧ this proof looks like this:

    D   E
    –   –
    A   B
    ————— ∧I          D
    A ∧ B      ⇒      –
    ————— ∧E1         A
      A
So whenever we introduce a ∧ and then eliminate it with ∧E1, we can always rewrite our proof to not use the elimination rule. Here notice that D and E range over derivations in this proof. They represent a chain of rule applications that let us produce an A or B in the end. Note that I got a bit lazy and started omitting the true judgments; this is something I'll do a lot, since it's mostly unambiguous.
The proof for ∧E2 is similar.

    D   E
    –   –
    A   B
    ————— ∧I          E
    A ∧ B      ⇒      –
    ————— ∧E2         B
      B
Given this, we say that the elimination rules for ∧ are "locally sound". That is, when used immediately after an introduction rule, they don't let us produce anything truly new.
Next we want to show that if we have a proof of A ∧ B, the elimination rules give us enough information that we can pick the proof apart and produce a reassembled A ∧ B.

                    D           D
                  —————       —————
      D           A ∧ B       A ∧ B
    —————    ⇒    ————— ∧E1   ————— ∧E2
    A ∧ B           A           B
                  ————————————————— ∧I
                        A ∧ B
This somewhat confusing derivation takes our original proof of A ∧ B, pulls it apart into proofs of A and B, and uses these to assemble a new proof of A ∧ B. This means that our elimination rules give us back all the information we put in, so we say they're locally complete.
These two properties combined, local soundness and completeness, are how we show that an elimination rule is balanced with its introduction rule.
If you're more comfortable with programming languages (I am), our local soundness property is equivalent to stating that

    fst (a, b) ≡ a
    snd (a, b) ≡ b
and local completeness is that

    a ≡ (fst a, snd a)
The first equations are reductions and the second is an expansion. These correspond to the beta and eta rules we expect a programming language to have! This is a nice little example of why proof theory is useful: it gives a systematic way to define some parts of the behavior of a program. Given the logic a programming language gives rise to, we can double-check that all the rules are locally sound and complete, which gives us confidence that our language isn't horribly broken.
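Spelling the correspondence out a little further (a sketch; the and* names are mine): ∧ is the pair type, ∧I is the pair constructor, and the eliminations are fst and snd, so local soundness and completeness become exactly the equations above.

    andIntro :: a -> b -> (a, b)   -- ∧I
    andIntro x y = (x, y)

    andElim1 :: (a, b) -> a        -- ∧E1
    andElim1 = fst

    andElim2 :: (a, b) -> b        -- ∧E2
    andElim2 = snd

    -- Local soundness (beta):   andElim1 (andIntro x y) ≡ x
    -- Local completeness (eta): p ≡ (andElim1 p, andElim2 p)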
Hypothetical Judgments

Before I wrap up this post, I want to talk about one last important concept in proof theory: judgments with hypotheses. This is best illustrated by trying to write the introduction and elimination rules for "implies" or "entailment", written A ⊃ B.
Clearly A ⊃ B is supposed to mean that we can prove B true assuming A true to be provable. In other words, we can construct a derivation of the form

    A true
    ——————
      .
      .
      .
    ——————
    B true
We can notate our rules then as

    —————— u
    A true
    ——————
      .
      .
      .
    ——————
    B true
    —————————— u        A ⊃ B   A
    A ⊃ B true          —————————
                          B true
This notation is a bit clunky, so we'll opt for a new one: Γ ⊢ J. In this notation, Γ is some list of judgments we assume to hold and J is the thing we want to show holds. Generally we'll end up with the rule

    J ∈ Γ
    —————
    Γ ⊢ J
This captures the fact that Γ contains assumptions we may (or may not) use to prove our goal. This specific rule may vary depending on how we want to express how assumptions work in our logic (substructural logics spring to mind here). For our purposes, this is the most straightforward characterization of how this ought to work.
Our hypothetical judgments come with a few rules which we call "structural rules". They modify the structure of the judgment, rather than any particular proposition we're trying to prove.

Weakening:

      Γ ⊢ J
    —————————
    Γ, Γ' ⊢ J

Contraction:

    Γ, A, A, Γ' ⊢ J
    ———————————————
     Γ, A, Γ' ⊢ J

Exchange:

    Γ' = permute(Γ)   Γ' ⊢ A
    ————————————————————————
            Γ ⊢ A
Finally, we get a substitution principle. This allows us to eliminate some of the assumptions we made to prove a theorem.

    Γ ⊢ A   Γ, A ⊢ B
    ————————————————
         Γ ⊢ B
These five rules give meaning to our hypothetical judgments. We can then restate our formulation of entailment with less clunky notation as

    A prop  B prop
    ——————————————
      A ⊃ B prop

    Γ, A ⊢ B          Γ ⊢ A ⊃ B   Γ ⊢ A
    —————————         ——————————————————
    Γ ⊢ A ⊃ B               Γ ⊢ B
One thing in particular to note here is that entailment actually internalizes the notion of hypothetical judgments into our logic. This is the aspect of it that makes it behave so differently from the other connectives we looked at.
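For the programming-language reading (again my own sketch): A ⊃ B is the function type, ⊃I is lambda-abstraction (which discharges the hypothesis u), ⊃E is application, and the substitution principle is just a let-binding.

    -- ⊃E, i.e. modus ponens, is function application:
    impElim :: (a -> b) -> a -> b
    impElim f x = f x

    -- ⊃I discharges a hypothesis with a lambda. For instance, the derivation
    -- "from hypotheses A and A ⊃ B, conclude B", with both discharged:
    example :: a -> (a -> b) -> b
    example = \x f -> f x

    -- The substitution principle as a program: given a proof of A and a
    -- proof of B from hypothesis A, build a proof of B.
    subst :: a -> (a -> b) -> b
    subst x f = let hypA = x in f hypA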
As an exercise to the reader: prove the local soundness and completeness of these rules.

Conclusion
In this post we've laid out a bunch of rules and I've hinted that a bunch more are possible. When put together, these rules define a logic using "natural deduction", a particular way of specifying proofs that uses inference rules rather than axioms or something entirely different.
Hopefully I've inspired you to poke a bit further into proof theory; if so, I heartily recommend Frank Pfenning's lectures at the Oregon Programming Languages Summer School.