Hello fellow Haskellers! Some friends from DTU and I are currently working on a project where we are learning Haskell, and we have run into a problem.
We have a data set from an accelerometer in the form [[Double, Double, Double, Double]], where the last Double in each sample is a timestamp. We want to double integrate the data so that we can recover positions. Can anyone help us?
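One way to attack this (not from the original post, and all names here are made up for illustration) is cumulative trapezoidal integration applied twice per axis: integrating acceleration once gives velocity, integrating again gives position. The sketch below assumes each sample is [ax, ay, az, t] with t in seconds, and zero initial velocity and position.

```haskell
import Data.List (transpose)

type Sample = [Double]  -- [ax, ay, az, t], with t the timestamp in seconds

-- Cumulative trapezoidal integral of (value, time) pairs,
-- starting from 0 at the first timestamp.
cumTrapz :: [(Double, Double)] -> [(Double, Double)]
cumTrapz []                = []
cumTrapz ((v0, t0) : rest) = (0, t0) : go 0 v0 t0 rest
  where
    go _   _  _  []            = []
    go acc vp tp ((v, t) : vs) =
      let acc' = acc + (vp + v) / 2 * (t - tp)
      in  (acc', t) : go acc' v t vs

-- Integrating acceleration once gives velocity; twice gives position.
positions1D :: [(Double, Double)] -> [(Double, Double)]
positions1D = cumTrapz . cumTrapz

-- [x, y, z, t] for every input sample, assuming v(0) = 0 and x(0) = 0.
positions :: [Sample] -> [Sample]
positions samples = transpose (map axis [0, 1, 2] ++ [ts])
  where
    ts     = map (!! 3) samples
    axis i = map fst (positions1D (zip (map (!! i) samples) ts))
```

With constant acceleration 2 along x and samples at t = 0, 1, 2, this yields x positions 0, 1, 4, matching x(t) = t². In practice raw accelerometer data also needs gravity removal and drift correction before integrating, but the numerics are just this.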
Best regards, a humble Haskeller

submitted by kisum1994
First-order formal logic is a standard topic in computer science. Not so second-order logic, which, though used by default in fields of mathematics such as topology and analysis, is usually not treated in standard courses on mathematical logic. For today’s Theory Lunch I discussed some classical theorems that hold for first-order logic but fail for second-order logic; my talk was based on Boolos’ classical textbook.
We consider languages made of symbols that represent either objects, or functions, or relations: in particular, unary relations, or equivalently, sets. A sentence on such a language is a finite sequence of symbols from the language and from the standard logical connectives and quantifiers (∧ for conjunction, ∨ for disjunction, ¬ for negation, ∀ and ∃ for universal and existential quantification, etc.) according to the usual rules, such that every variable is bound by some quantifier. A first-order sentence only has quantifiers on objects, while a second-order sentence can have quantifiers on functions and relations (in particular, sets) as well.
For example, the set Q of axioms of Robinson arithmetic is made of the following first-order sentences on the language {0, s, +, ·, <}:

1. ∀x (s(x) ≠ 0)
2. ∀x ∀y (s(x) = s(y) → x = y)
3. ∀x (x ≠ 0 → ∃y (x = s(y)))
4. ∀x (x + 0 = x)
5. ∀x ∀y (x + s(y) = s(x + y))
6. ∀x (x · 0 = 0)
7. ∀x ∀y (x · s(y) = x · y + x)
8. ∀x ∀y (x < y ↔ ∃z (s(z) + x = y))
Of course, second-order logic is much more expressive than first order logic. The natural question is: how much?
The answer is: possibly, too much more than we would like.
To discuss how it is so, we recall the notion of model. Informally, a model of a set of sentences is a “world” where all the sentences in the set are true. For instance, the set ℕ of natural numbers with the usual zero, successor, addition, multiplication, and ordering is a model of Q. A model for a set of sentences is also a model for every theorem of that set, i.e., every sentence that can be derived in finitely many steps from those of the given set by applying the standard rules of logic.
For sets of first-order sentences, the following four results are standard:
Compactness theorem. (Tarski and Mal’tsev) Given a set Γ of first-order sentences, if every finite subset of Γ has a model, then Γ has a model.

Upwards Löwenheim-Skolem theorem. If a set of first-order sentences has a model of infinite cardinality κ, then it also has models of every cardinality λ ≥ κ.
Downwards Löwenheim-Skolem theorem. If a set of first-order sentences on a finite or countable language has a model, then it also has a finite or countable model.
Completeness theorem. (Gödel) Given a set Γ of first-order sentences, if a first-order sentence φ is true in every model of Γ, then φ is a theorem of Γ.
All of these facts fail for second-order theories. Let us see how:
We start by considering the following second-order sentence:

C : ∃z ∃f ∀X ((X(z) ∧ ∀u (X(u) → X(f(u)))) → ∀u X(u))

Lemma 1. The sentence C is true in a model M if and only if the universe of M is at most countable.

The informal reason is that C intuitively means:
the universe is a monoid on a single generator
Let us now consider the following second-order sentence:

I : ∃z ∃f (∀u (f(u) ≠ z) ∧ ∀u ∀v (f(u) = f(v) → u = v))

Lemma 2. The sentence I is true in a model M if and only if the universe of M is infinite.

The informal reason is that I intuitively means:
the universe contains a copy of the natural numbers
Theorem 1. Both Löwenheim-Skolem theorems fail for sets of second-order sentences.
Proof. The set {C, I} only has countably infinite models, which contradicts the upwards Löwenheim-Skolem theorem. The set {¬C} only has uncountably infinite models, which contradicts the downwards one. ∎
Let us now consider the set Q ∪ {Ind} of all the sentences of Q together with the following second-order sentence:

Ind : ∀X ((X(0) ∧ ∀u (X(u) → X(s(u)))) → ∀u X(u))

Clearly, Ind is the induction principle, which is an axiom in second-order Peano arithmetic, but only an axiom scheme in first-order PA.
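As an aside, a proof assistant makes the contrast concrete: in Lean 4 (the snippet below is a sketch, assuming a recent toolchain) induction over the naturals is literally one statement whose ∀ P quantifies over all predicates, exactly the quantification over sets that first-order PA must approximate with one scheme instance per formula.

```lean
-- Induction as a single statement quantifying over all predicates P:
-- the outer ∀ P : Nat → Prop is a second-order quantifier.
theorem nat_induction (P : Nat → Prop)
    (base : P 0) (step : ∀ n, P n → P (n + 1)) : ∀ n, P n := by
  intro n
  induction n with
  | zero => exact base
  | succ n ih => exact step n ih
```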
Lemma 3. Every model of Q ∪ {Ind} is isomorphic to the set ℕ of natural numbers with the usual zero, successor, addition, multiplication, and ordering.

The informal reason is that Q, though finite, is powerful enough to tell numbers from each other: therefore, in every model of Q, each numeral s^n(0) (the n-th iteration of the successor s, starting from 0) can be denoted by at most one item in the universe of the model. On the other hand, Ind is powerful enough to ensure that every item of the universe is denoted by some numeral.
Theorem 2. The compactness theorem fails for sets of second-order sentences.
Proof. Let c be a constant symbol outside the language of Q. Consider the set T made of all the sentences from Q ∪ {Ind} and all the sentences of the form c ≠ s^n(0), one for each n ∈ ℕ. Then every finite subset S of T has a model, which can be obtained from the set of natural numbers by interpreting c as some number strictly greater than all of the values n such that the sentence c ≠ s^n(0) belongs to S. However, a model of T would also be a model of Q ∪ {Ind}, hence isomorphic to the set of natural numbers: but then c would have to denote some numeral s^n(0), contradicting the corresponding sentence of T, so no interpretation of the constant c is possible within such a model. ∎
We can now prove
Theorem 3. The completeness theorem does not hold for second-order sentences.
In other words, second-order logic is semantically inadequate: it is no longer true that all “unequivocally true” sentences are theorems. The proof will be based on the following two facts:
Fact 1. (Gödel) The set of the first-order formulas which are true in every model of Q is recursively enumerable.

Fact 2. (Tarski) The set of first-order formulas which are true in ℕ is not recursively enumerable.

Fact 1 is actually a consequence of the completeness theorem: the set of first-order formulas which are true in every model of Q is the same as the set of first-order sentences that are provable from Q, and that set is recursively enumerable by producing every possible proof! To prove Theorem 3 it will thus be sufficient to prove that Fact 1 does not hold for second-order sentences.
Proof of Theorem 3. We identify Q ∪ {Ind} with the conjunction A of all its formulas, which are finitely many.

Let φ be a first-order sentence in the language of Q. Because of what we saw while discussing the compactness theorem, φ is true in ℕ if and only if it is true in every model of Q ∪ {Ind}: this, in turn, is the same as saying that A → φ is true in every model of the language of Q. Indeed, let M be a model of the language: if M is isomorphic to ℕ, then A → φ is true in M if and only if φ is true in ℕ; if M is not isomorphic to ℕ, then A is false in M, which makes A → φ true in M. This holds whatever φ is.

Fix a Gödel numbering for sentences. There exists a recursive function that, for every sentence φ, transforms the Gödel number of the first-order sentence φ into the Gödel number of the second-order sentence A → φ.

Suppose now, for the sake of contradiction, that the set of second-order sentences that are true in every model of the language of Q is recursively enumerable. Then we could get a recursive enumeration of the set of first-order sentences which are true in the standard model ℕ of Q by taking the Gödel number of such a sentence φ, turning it into that of A → φ via the aforementioned recursive function, and feeding the latter number to the semialgorithm for second-order sentences that are true in every model of the language. But because of Tarski’s result, no such recursive enumeration exists. ∎
- George S. Boolos, John P. Burgess, and Richard C. Jeffrey. Computability and Logic. Fifth edition. Cambridge University Press, 2007.
Elevence reinvents how software helps financial firms do business.
We build our solution on formal methods and functional programming. We care about open source, distributed systems, and cryptography. We are growing our core software engineering team in Zurich.
You are a developer with expert functional programming skills in one of the following domains:
- Compiler development
- Formal methods and theorem provers
- Distributed systems
- Algorithmic trading
You have 2+ years of experience in functional programming (preferably Haskell), and care about delivering high-quality code within an agile team.
Get information on how to apply for this position.
Hello, I implemented two different merge sorts here: https://github.com/huseyinyilmaz/datastructures/blob/b79ffc2648f91550bc792a5a5ad01138661beb07/haskell/src/Sorting.hs#L24-L60

One is top-down and the other is bottom-up. If I had to guess, I would say the bottom-up version should be faster, because it does not need to find the middle of the list in order to split it. But that is not the case at all. For some reason mergesort runs with reasonable complexity and mergesort2 is just terrible.

|------------+-----------+------------+--------------|
| item count | 1000      | 10.000     | 100.000      |
|------------+-----------+------------+--------------|
| mergesort  | 0.03 secs | 0.29 secs  | 3.60 secs    |
| mergesort2 | 0.15 secs | 19.28 secs | 2634.78 secs |
|------------+-----------+------------+--------------|
Can somebody see why that might be?

submitted by yilmazhuseyin
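Without seeing exactly where mergesort2 goes wrong, a common cause of such blow-ups in bottom-up merge sorts is re-traversing or appending to the list of runs on every step instead of halving its length on every pass. For comparison, here is a textbook bottom-up merge sort sketch (the names are illustrative, not taken from the linked repo):

```haskell
-- Bottom-up merge sort: start from singleton runs and merge adjacent
-- runs pairwise, so the number of runs halves on every pass and the
-- total work is O(n log n).
msort :: Ord a => [a] -> [a]
msort [] = []
msort xs = go (map (: []) xs)
  where
    go [run] = run
    go runs  = go (mergePairs runs)

    mergePairs (a : b : rest) = merge a b : mergePairs rest
    mergePairs runs           = runs

    -- Standard two-way merge of sorted lists.
    merge [] bs = bs
    merge as [] = as
    merge aas@(a : as) bbs@(b : bs)
      | a <= b    = a : merge as bbs
      | otherwise = b : merge aas bs
```

If a bottom-up version instead keeps one growing accumulator and merges each new element or run into it, every merge rescans the accumulator and the total cost degrades toward O(n²), which matches the roughly 100× slowdown per 10× input in the table.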
I'm an engineer at Karius, a "stealth-mode" biotech startup in Menlo Park, CA, and we're looking to hire a few folks to write software (and we use Haskell!).
Currently only hiring locally or with relocation (though that could change in the future, so feel free to get in touch regardless!).
We are a team of crazy biologists, engineers, data scientists and clinicians on a mission to change forever the way infectious diseases are diagnosed and treated. We face incredibly interesting challenges in software engineering, machine learning and molecular biology, as we push the limits of diagnostics and genomic technologies. We're hiring computational biologists, software engineers and data scientists.
If you're a software engineer, we're looking for experience in front-end, back-end, web development, infrastructure, devops, bioinformatics, and machine learning. We have a varied list of challenges: we build large data processing pipelines to analyze data from in-house DNA sequencers, separate the signal from the noise, extract what we need, and visualize it in ways that are helpful for scientists and doctors; we build web apps and tools for biologists and doctors to plan, conduct, and analyze experiments; and we work closely with molecular biologists to analyze the data generated by these experiments and develop novel computational biology methods.
Our technology stack, as of right now:
- Python (for bioinformatics)
- Rails (for one backend codebase in maintenance mode)
- React and ES6 (for front-end interfaces)
- Haskell (for infrastructure and new development)
- Backed by AWS and Docker
We just put our first large Haskell application into production and are planning on continuing with Haskell; this is an opportunity to use Haskell at a cutting-edge biotechnology startup.
If any of this sounds exciting to you, please don't hesitate to get in touch with us by emailing Greg Stock at firstname.lastname@example.org.
Take a look at our job postings on AngelList for more detail, though they won't say much about Haskell.

submitted by NiftyIon
I'm learning calculus for the first time. If you were me, how would you use Haskell to get calculus-fu and increase Haskell-fu?
Is there a library or set of libraries that I could use in the context of the course, i.e., in class and during homework, that would effectively let me do everything in Haskell?

Specifically, I want to be able to write programs for my problem sets/homework, and write my class notes, in Haskell. Bonus points if I could graph stuff inside of GHCi (I could try to implement a grapher myself, but... I would prefer to just have one to refer to).
I have a practical question. I've completed Brent Yorgey's CIS194, so I have a decent grasp of Haskell up to and including Monads/Applicative-Functors.
I have a bachelor's in philosophy, and have taken a few logic courses. I returned to school last fall, with the intention of taking as many math courses as I possibly can as preparation to enter a graduate program in logic/cs.
I'm taking Calculus I for the first time. I want to do everything I possibly can to do well in the course, but also to get better at Haskell. How would I do this?
Last semester, I took pre-calculus, and while I could implement a lot of the functions, I eventually ran into problems with taking square roots, imaginary numbers, trigonometric functions, etc.
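One approach that handles derivatives of square roots and trigonometric functions uniformly is forward-mode automatic differentiation with dual numbers (the `ad` package on Hackage does this properly; the hand-rolled sketch below, with names of my own choosing, is small enough to load straight into GHCi):

```haskell
-- A dual number carries a value together with its derivative;
-- overloading the numeric classes makes ordinary formulas differentiable.
data Dual = Dual Double Double
  deriving (Eq, Show)

instance Num Dual where
  Dual a a' + Dual b b' = Dual (a + b) (a' + b')
  Dual a a' - Dual b b' = Dual (a - b) (a' - b')
  Dual a a' * Dual b b' = Dual (a * b) (a' * b + a * b')  -- product rule
  abs    (Dual a a')    = Dual (abs a) (a' * signum a)
  signum (Dual a _)     = Dual (signum a) 0
  fromInteger n         = Dual (fromInteger n) 0

instance Fractional Dual where
  Dual a a' / Dual b b' = Dual (a / b) ((a' * b - a * b') / (b * b))  -- quotient rule
  fromRational r        = Dual (fromRational r) 0

instance Floating Dual where
  pi                = Dual pi 0
  exp   (Dual a a') = Dual (exp a)   (a' * exp a)
  log   (Dual a a') = Dual (log a)   (a' / a)
  sqrt  (Dual a a') = Dual (sqrt a)  (a' / (2 * sqrt a))
  sin   (Dual a a') = Dual (sin a)   (a' * cos a)
  cos   (Dual a a') = Dual (cos a)   (negate a' * sin a)
  sinh  (Dual a a') = Dual (sinh a)  (a' * cosh a)
  cosh  (Dual a a') = Dual (cosh a)  (a' * sinh a)
  asin  (Dual a a') = Dual (asin a)  (a' / sqrt (1 - a * a))
  acos  (Dual a a') = Dual (acos a)  (negate a' / sqrt (1 - a * a))
  atan  (Dual a a') = Dual (atan a)  (a' / (1 + a * a))
  asinh (Dual a a') = Dual (asinh a) (a' / sqrt (a * a + 1))
  acosh (Dual a a') = Dual (acosh a) (a' / sqrt (a * a - 1))
  atanh (Dual a a') = Dual (atanh a) (a' / (1 - a * a))

-- Derivative of f at x: seed the derivative slot with 1.
diff :: (Dual -> Dual) -> Double -> Double
diff f x = let Dual _ d = f (Dual x 1) in d
```

In GHCi, `diff (\x -> x * x + sin x) 0` evaluates to 1.0, since d/dx (x² + sin x) = 2x + cos x is 1 at x = 0.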
I know how to use emacs, tex, and bash.

submitted by socratesthefoolish