Is there a name for defining recursive functions as infinite lists of input/output pairs by zipping two inductive datatypes?
Recursive functions are usually defined by directly calling a function inside its own body:

    Nat = Z | S Nat

    double Z     = Z
    double (S x) = S (S (double x))
What if, instead of defining them this way, we just enumerated two recursive datatypes and zipped them?
To be more descriptive, mind the following functions:

    enum Nat = [Z, S Z, S S Z, S S S Z, ...]

    a # b = \x -> snd (zip (enum a) (enum b) !! indexOf x (enum a))
That is, enum enumerates the elements of a recursive datatype. Now, using #, there is an interesting way to define some recursive functions:

    Even = Z | S (S Even)

    double = Nat # Even
This makes double equivalent to:

    double x = snd ([(0,0),(1,2),(2,4),(3,6),...] !! x)
In other words, instead of defining the function recursively, we just created an infinite list with the input/output pairs of that function, by zipping two recursive datatypes together. I've never heard of this approach being used, so my question is: is there a name for this? Any relevant papers? What kinds of functions can be defined this way?

submitted by SrPeixinho
[link] [3 comments]
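The enumeration-zipping idea from this post can be sketched in runnable Haskell. Note that the names (`enumNat`, `enumEven`) and the lookup-based definition of `#` are my own glosses on the post's pseudocode, not anything standard:

```haskell
import Data.Maybe (fromJust)

data Nat = Z | S Nat deriving (Eq, Show)

-- Enumerate Nat: Z, S Z, S (S Z), ...
enumNat :: [Nat]
enumNat = iterate S Z

-- Enumerate the "Even" shape: Z, S (S Z), S (S (S (S Z))), ...
enumEven :: [Nat]
enumEven = iterate (S . S) Z

-- a # b: find x in a's enumeration and return the element of b's
-- enumeration sitting at the same position. Terminates because x is
-- guaranteed to occur in the (infinite) enumeration of its own type.
(#) :: Eq a => [a] -> [b] -> a -> b
(as # bs) x = fromJust (lookup x (zip as bs))

double :: Nat -> Nat
double = enumNat # enumEven
```

With this, `double (S (S Z))` walks the paired enumerations and yields `S (S (S (S Z)))`.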
In the Stackage maintainer's agreement, there's a section about keeping your package compatible with the newest versions of all dependencies. What the maintainer's agreement doesn't (yet) discuss is when it's important to be compatible with old versions of a package. The reasons for this are not immediately obvious, especially as it affects a smaller subset of the Hackage author population. This blog post will cover some of the reasons for this goal.
The original impetus for writing this was to get one specific message across: please continue supporting transformers-0.3.0.0! For the explanation of why, please keep reading.

Non-upgradeable packages
The simplest case to discuss is packages like base and template-haskell. Not only are these packages shipped with GHC, but they cannot be upgraded. As a result, if you have a package that says base >= 4.7, it will only work with GHC 7.8 and later. Users who are still using 7.6 (or 7.4... or earlier... yes, those people do in fact exist) will have no means of using your package.
That of course brings up the question of how many versions of GHC you want to support. I'd highly recommend always supporting the most recent Haskell Platform release, as many users (especially Windows users) stick to that. Going back one extra version isn't a bad idea either, especially as some distributions (e.g., Ubuntu) tend to ship relatively old GHC versions.

Upgradeable, GHC-shipped packages
This issue is more subtle. In addition to non-upgradeable packages, GHC includes a number of packages which can be installed separately, resulting in one copy of the package in your global database, and one in your user database. (Yes, you can also install into the global database, but I'm covering the common case here.) Examples of these packages are bytestring, binary, and containers.
The first problem with this is that it can lead to end-user confusion. How many of you have tried working in GHCi, or just compiling code with ghc --make, and gotten a message along the lines of "Could not match type ByteString with ByteString"? That usually comes from two versions of a package being available.
Now that's just a bit of an annoyance, and building your code with cabal will almost always avoid it. But there's a second, more serious problem. Some of these upgradeable packages are in turn depended upon by non-upgradeable packages. For example, template-haskell depends on containers. As a result, imagine if you try to use containers 0.5 and template-haskell when on GHC 7.4. Since template-haskell depends on containers-0.4.2.1, you'll run into issues.
Another problem is the ghc package (aka GHC-the-library). With GHC 7.8.2, I have the following dependencies for the installed ghc package:

    depends: Cabal-1.18.1.3 array-0.5.0.0 base-4.7.0.0
             bin-package-db-0.0.0.0 bytestring-0.10.4.0
             containers-0.5.5.1 directory-1.2.1.0 filepath-1.3.0.2
             hoopl-3.10.0.1 hpc-0.6.0.1 process-1.2.0.0
             template-haskell-2.9.0.0 time-1.4.2
             transformers-0.3.0.0 unix-2.7.0.1
So if I try to use, for example, transformers 0.4.1.0 and a package requiring ghc at the same time, I'll run into a conflict. And there are actually a large number of such packages; just doctest has over 100 dependencies.

Haskell Platform
The last reason is the one I hear the most pushback about from package authors. The Haskell Platform pegs users at specific versions of dependencies. For example, the most recent HP release pegs text at 0.11.3.1. Now imagine that you write a package that depends on text >= 1.0. A user with the Haskell Platform installed will likely get warnings from cabal when installing your package about conflicting versions of text, and possibly breaking other packages that depend on it.
I can tell you what I've personally done about this situation. For my open source packages, I make sure to keep compatibility with the Haskell Platform released version of a package. Sometimes this does lead to some ugliness. Two examples are:
- streaming-commons has to have a copy of some of the streaming text code, since it was not available before text 1.1. (And due to an issue with cabal, we can't even conditionally include the code.)
- In chunked-data, I wasn't able to rely upon the hGetChunk function, and instead needed to use CPP to include a far less efficient backup approach when using older versions of text.
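In practice, keeping Haskell Platform compatibility mostly comes down to widening the lower bounds in the cabal file so the HP-pegged versions stay in range. A sketch (the exact bounds here are illustrative, not a recommendation for any particular package):

```
build-depends: base          >= 4.5     && < 4.8
             , text          >= 0.11.3  && < 1.2
             , transformers  >= 0.3     && < 0.5
```

With ranges like these, cabal can still select text 0.11.3.1 for an HP user while picking the newest version for everyone else.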
In the Stackage project, I run versions of the build both with and without Haskell Platform constraints. There are actually a whole slew of conditionals in the version selection which say "if you're using HP, then use this older version of a dependency." However, as time goes on, more and more packages are simply not supporting the HP-pegged versions of packages anymore.

Future changes
I'm not commenting here on the value of HP-pegged versions, but simply pointing out a reality: if you want your users to have a good experience, especially Windows users, it's probably a good idea to keep compatibility with the older HP-provided versions. I also think the ramifications of the HP approach really need to be discussed by the community; at the moment there seems to be little discussion of the HP's impact.
Also, regarding the packages shipped with GHC: there have certainly been discussions about improving this situation. I know that removing the Cabal dependency from ghc has been discussed, and would certainly improve the situation somewhat. If others want to kick off a conversation on improving things, I'd be happy to participate, but I frankly don't have any concrete ideas on how to make things better right now.
Someone was asking about cabal on #haskell just earlier tonight, specifically what it does do and what it does not do, and a discussion ensued. I commented that once I understood why sandboxes were needed at all, I had somewhat of an epiphany in the sense that now I cannot begin to fathom why cabal was implemented the way it was to begin with.
That is to say, since cabal is a tool that is (with caveats) used for downloading libraries to use in development of software, it makes practically no sense for it to install packages to a global repository, when the very nature of software development means that you might need to be working on different programs that require different versions of the same package.
One person on IRC (timthelion) pointed out that there had been a project called hellno (https://github.com/db81/hellno) that manages cabal packages by building them only once and then copying binary packages of the correct versions around as needed.
So, I was wondering: why doesn't cabal do this? Is there an effort to do something like this? I ask here because it wouldn't surprise me if there are some parts of the picture I'm missing completely, as I don't really know much about the intricacies of package management. I know that with sandboxes we basically get the same thing, but we have to build the packages from scratch. IMO that is more than a minor inconvenience, considering the dependency graphs of some packages.

submitted by IceDane
[link] [22 comments]
I have seen Hölder's inequality and Minkowski's inequality proved in several ways but this seems the most perspicuous (to me at any rate).

Young's Inequality

If $a, b \ge 0$ and $p, q > 1$ such that

$$\frac{1}{p} + \frac{1}{q} = 1$$

then

$$ab \le \frac{a^p}{p} + \frac{b^q}{q}$$

A $p$ and $q$ satisfying the premise are known as conjugate indices.

Since $-\log$ is convex we have

$$-\log\left(\frac{s}{p} + \frac{t}{q}\right) \le -\frac{1}{p}\log s - \frac{1}{q}\log t$$

Substituting in the appropriate values $s = a^p$ and $t = b^q$ gives

$$\log ab = \frac{1}{p}\log a^p + \frac{1}{q}\log b^q \le \log\left(\frac{a^p}{p} + \frac{b^q}{q}\right)$$

Now take exponents.

Hölder's Inequality

Let $p$ and $q$ be conjugate indices with $1 < p < \infty$ and let $f \in L^p(\mu)$ and $g \in L^q(\mu)$, then $fg \in L^1(\mu)$ and

$$\|fg\|_1 \le \|f\|_p \|g\|_q$$

By Young's inequality

$$\frac{|f(x)\,g(x)|}{\|f\|_p \|g\|_q} \le \frac{1}{p}\frac{|f(x)|^p}{\|f\|_p^p} + \frac{1}{q}\frac{|g(x)|^q}{\|g\|_q^q}$$

and integrating both sides gives $\|fg\|_1 / (\|f\|_p \|g\|_q) \le 1/p + 1/q = 1$, which is the result.

By applying a counting measure to $\mathbb{N}$ we also obtain the sequence form

$$\sum_{i=1}^\infty |x_i y_i| \le \left(\sum_{i=1}^\infty |x_i|^p\right)^{1/p} \left(\sum_{i=1}^\infty |y_i|^q\right)^{1/q}$$

Minkowski's Inequality

If $f, g \in L^p(\mu)$ with $1 \le p < \infty$ then $f + g \in L^p(\mu)$ and

$$\|f + g\|_p \le \|f\|_p + \|g\|_p$$

By Hölder's inequality (applied to each term, with $(p-1)q = p$)

$$\int |f+g|^p \, d\mu \le \int |f|\,|f+g|^{p-1} \, d\mu + \int |g|\,|f+g|^{p-1} \, d\mu \le \left(\|f\|_p + \|g\|_p\right) \left(\int |f+g|^p \, d\mu\right)^{1/q}$$

and $\int |f+g|^p \, d\mu$ is finite since $L^p(\mu)$ is a vector space, so dividing through by $\left(\int |f+g|^p \, d\mu\right)^{1/q}$ gives the result.
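As a quick numerical sanity check on Young's inequality (a throwaway sketch, not part of the proof), one can test the bound on a grid of points; the function name and grid are of course arbitrary:

```haskell
-- Spot-check Young's inequality: a*b <= a^p/p + b^q/q,
-- where q = p/(p-1) is the conjugate index of p > 1.
youngHolds :: Double -> Double -> Double -> Bool
youngHolds p a b = a * b <= a ** p / p + b ** q / q + 1e-9  -- small slack for rounding
  where q = p / (p - 1)
```

For instance, `and [youngHolds p a b | p <- [1.5, 2, 3], a <- [0, 0.5 .. 3], b <- [0, 0.5 .. 3]]` evaluates to `True`.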
Goal: Maintainable cross-platform code that is compatible with multiple versions of its dependencies.
Conventional Haskell solution: Use CPP in traditional (aka K&R style) mode to preprocess Haskell files to adapt to platform, environment, and dependency specifics. Traditional CPP is used so that CPP doesn't make too many assumptions about C syntax. The cabal user guide gives CPP examples further giving the impression that this is the preferred and standard approach.
Problem: clang authors prefer ANSI CPP and, reluctantly, only support some bits of traditional CPP. This is creating problems on OSX Mavericks where clang is the default and clang's CPP rejects some Haskell sources that previously worked with gcc's CPP. As several people have pointed out to me, it is a hack to use CPP for Haskell in the first place. Our hack is bad and we should feel bad :)
Question: What is a solid and practical alternative to CPP? I want a solution that is solid and known good. I'm optimistically looking for suggestions that have worked over several ghc releases and on OSX, Windows, and Linux.
Below is a list of alternatives that I'm vaguely aware of. I haven't actually tried them yet and so I may have some of the details wrong. Suggestions, corrections, experience reports, and pros&cons lists are all greatly appreciated:
- Supply a custom preprocessor with each package that needs CPP support and feed that to ghc. This could work as long as the preprocessor is fairly general. cpphs has been around for a while, but when lens tried to use it they hit some rough edges.
- Put cpp-options: -traditional in the cabal file: lens-4.1.1 and newer uses this, but it may not do anything? clang's cpp --help lists -traditional-cpp but not -traditional. Does it accept both for compatibility with gcc? Also, I checked ghc-7.6.1 and ghc-7.6.3 (linux and OSX Mavericks, respectively) and both are passing -traditional to CPP already.
- Continue (ab)using CPP and:
- test each release with clang's CPP. This could get messy as it would likely require isolating CPP bits into standalone modules with very controlled use of syntax to reduce the risk of CPP hitting a syntax error.
- require gcc's CPP. I believe this is the current direction that ghc is moving in. It makes OS X installs of ghc a bit more involved. It also has a tendency to generate more bug reports, as anyone who uses the wrong CPP runs the risk of thinking a particular package is broken.
Thank you for your time!
I understand my immediate CPP issue a bit better now. Traditional CPP doesn't understand # and ## (the stringification and token concatenation operations defined in ANSI CPP). Furthermore, gcc's CPP strips out comments and spaces as it goes. So you can get token concatenation this way:

    #define C(a,b) a/**/b
C(Foo,Bar) would become the token FooBar. clang, on the other hand, tokenizes a and b while treating the comment as a token separator, so it generates Foo Bar.
I rewrote the macro to work with ANSI CPP, but now I can't get ghc to invoke CPP without -traditional. I tried adding -optP-ansi but it gets ignored because -traditional is also on the command line. When I use -pgmP cpp, it fails because clang's CPP doesn't use the same command line options as gcc's CPP.
I can get rid of -traditional by using -pgmP gcc -optP-E -optP-ansi, but that is wrong because it may not be the same gcc that ghc is using.
I rewrote the macros to work with both traditional and ANSI CPP. It's not my favorite solution, as it means more boilerplate.

submitted by dagit
[link] [15 comments]
I wrote a simple server that I could connect to over telnet to run some basic commands, but things are breaking and throwing exceptions and it's not telling me why.

submitted by all_you_need_to_know
[link] [3 comments]
I am learning how to do testing in Haskell. What quickcheck/smallcheck style framework do you guys recommend I use? I'm having trouble understanding the differences. Many of the pages I've seen that make comparisons talk about QuickCheck, but they don't establish whether they are referring to QuickCheck 1 or QuickCheck 2.

submitted by cessationoftime
[link] [9 comments]
foldr seems more mathematically pure but can have bad stack usage. Is there a theoretically grounded and non-ugly way to get the advantages of foldl' (while staying general)? I guess, as this seems to suggest, we can simply rely on the compiler for optimizations, but that seems silly.
Why is foldr prettier than foldl'? Consider the definition of foldr:

    foldr onCons onNil = go
      where
        go []     = onNil
        go (x:xs) = onCons x (go xs)
foldl' is ugly in comparison:

    foldl' onCons = go
      where
        go onNil []     = onNil
        go onNil (x:xs) = let onNil' = onCons onNil x
                          in seq onNil' (go onNil' xs)

submitted by sstewartgallus
[link] [7 comments]
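One theoretically tidy answer to the question above is that a strict left fold can itself be expressed as a right fold, which is how GHC's list-fusion machinery treats foldl'. A sketch (the function name is mine):

```haskell
-- foldl' written in terms of foldr: the foldr builds a chain of
-- continuations that thread a strict accumulator left to right.
foldlStrictViaFoldr :: (b -> a -> b) -> b -> [a] -> b
foldlStrictViaFoldr f z xs = foldr step id xs z
  where
    -- each element becomes a function awaiting the accumulator;
    -- `seq` forces it before passing it on, avoiding thunk buildup
    step x k = \acc -> acc `seq` k (f acc x)
```

For example, `foldlStrictViaFoldr (+) 0 [1..1000000 :: Int]` runs in constant stack, just like foldl'.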
So, I had to write a program in Haskell and a program in Java that do the same thing. I'm new to Haskell, but I managed to get a basic program working. However, I'm struggling with what to write about in the written part of the homework.
It says we need to
Identify any features of the languages which might have an adverse effect on the quality of the programs.
Look at how the respective languages support expressibility of data and control abstractions and discuss this
Any pointers? I think this is quite a tough one, but any help is much appreciated.
Edit: I believe we're expected to compare and contrast in a declarative vs imperative sort of way.

submitted by the16
[link] [10 comments]