Someone was asking about cabal on #haskell just earlier tonight, specifically what it does do and what it does not do, and a discussion ensued. I commented that once I understood why sandboxes were needed at all, I had somewhat of an epiphany in the sense that now I cannot begin to fathom why cabal was implemented the way it was to begin with.
That is to say: cabal is a tool that is (with caveats) used for downloading libraries for use in software development, so it makes practically no sense for it to install packages to a global repository, when the very nature of software development means you might be working on different programs that require different versions of the same package.
One person on IRC (timthelion) pointed out that there has been a project called hellno (https://github.com/db81/hellno) that manages cabal packages by building them only once and then copying binary packages of the correct versions around as needed.
So, I was wondering: why doesn't cabal do this? Is there an effort to do something like it? I ask here because it wouldn't surprise me if there are parts of the picture I'm missing completely, as I don't really know much about the intricacies of package management. I know that with sandboxes we basically get the same thing, but we have to build the packages from scratch, and IMO that is more than a minor inconvenience, considering the dependency graphs of some packages.
submitted by IceDane
I have seen Hölder's inequality and Minkowski's inequality proved in several ways but this seems the most perspicuous (to me at any rate).

Young's Inequality

If $a, b \ge 0$ and $p, q > 1$ such that
$$\frac{1}{p} + \frac{1}{q} = 1,$$
then
$$ab \le \frac{a^p}{p} + \frac{b^q}{q}.$$
A $p$ and $q$ satisfying the premise are known as conjugate indices.

Since $\exp$ is convex we have
$$\frac{x}{p} + \frac{y}{q} \le \ln\left(\frac{e^x}{p} + \frac{e^y}{q}\right).$$
Substituting in appropriate values ($x = \ln a^p$ and $y = \ln b^q$) gives
$$\ln(ab) \le \ln\left(\frac{a^p}{p} + \frac{b^q}{q}\right).$$
Now take exponents.

Hölder's Inequality

Let $p$ and $q$ be conjugate indices and let $f \in L^p$ and $g \in L^q$. Then $fg \in L^1$ and
$$\|fg\|_1 \le \|f\|_p \, \|g\|_q.$$

By Young's inequality, applied pointwise with $a = |f(x)|/\|f\|_p$ and $b = |g(x)|/\|g\|_q$ and then integrated,
$$\frac{\|fg\|_1}{\|f\|_p \, \|g\|_q} \le \frac{1}{p} + \frac{1}{q} = 1.$$

By applying a counting measure to $\mathbb{N}$ we also obtain the sequence version:
$$\sum_n |x_n y_n| \le \left(\sum_n |x_n|^p\right)^{1/p} \left(\sum_n |y_n|^q\right)^{1/q}.$$

Minkowski's Inequality

For $f, g \in L^p$,
$$\|f + g\|_p \le \|f\|_p + \|g\|_p.$$

By Hölder's inequality, applied to $|f|\,|f+g|^{p-1}$ and to $|g|\,|f+g|^{p-1}$ (using $(p-1)q = p$),
$$\|f+g\|_p^p \le \int |f|\,|f+g|^{p-1} + \int |g|\,|f+g|^{p-1} \le \left(\|f\|_p + \|g\|_p\right) \|f+g\|_p^{p/q},$$
and $\|f+g\|_p^p$ is finite since $L^p$ is a vector space, so dividing through by $\|f+g\|_p^{p/q}$ gives the result.
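As a quick numeric sanity check of Young's inequality (a sketch, not part of the proof; the helper `young` and the sample values are my own, and a small epsilon absorbs floating-point error):

```haskell
-- Check Young's inequality a*b <= a^p/p + b^q/q for conjugate p and q.
young :: Double -> Double -> Double -> Bool
young p a b = a * b <= a ** p / p + b ** q / q + 1e-9
  where q = p / (p - 1)  -- conjugate index: 1/p + 1/q = 1

main :: IO ()
main = print (and [ young p a b | p <- [1.5, 2, 3]
                                , a <- [0, 0.5, 1, 2, 10]
                                , b <- [0, 0.5, 1, 2, 10] ])  -- True
```

Equality is attained exactly when $a^p = b^q$ (e.g. $p = q = 2$, $a = b$), which is why the check includes matching sample points.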
Goal: Maintainable cross-platform code that is compatible with multiple versions of its dependencies.
Conventional Haskell solution: Use CPP in traditional (a.k.a. K&R style) mode to preprocess Haskell files to adapt to platform, environment, and dependency specifics. Traditional mode is used so that CPP doesn't make too many assumptions about C syntax. The cabal user guide gives CPP examples, further giving the impression that this is the preferred, standard approach.
Problem: clang authors prefer ANSI CPP and, reluctantly, only support some bits of traditional CPP. This is creating problems on OSX Mavericks where clang is the default and clang's CPP rejects some Haskell sources that previously worked with gcc's CPP. As several people have pointed out to me, it is a hack to use CPP for Haskell in the first place. Our hack is bad and we should feel bad :)
Question: What is a solid and practical alternative to CPP? I want a solution that is solid and known good. I'm optimistically looking for suggestions that have worked over several ghc releases and on OSX, Windows, and Linux.
Below is a list of alternatives that I'm vaguely aware of. I haven't actually tried them yet and so I may have some of the details wrong. Suggestions, corrections, experience reports, and pros&cons lists are all greatly appreciated:
- Supply a custom preprocessor with each package that needs CPP support and feed that to ghc. This could work as long as the preprocessor is fairly general. cpphs has been around for a while, but when lens tried to use it they hit some rough edges.
- Put cpp-options: -traditional in the cabal file: lens-4.1.1 and newer use this, but it may not do anything? clang's cpp --help lists -traditional-cpp but not -traditional. Does it accept both for compatibility with gcc? Also, I checked ghc-7.6.1 and ghc-7.6.3 (Linux and OSX Mavericks, respectively) and both already pass -traditional to CPP.
- Continue (ab)using CPP and:
- test each release with clang's CPP. This could get messy, as it would likely require isolating CPP bits into standalone modules with very controlled use of syntax to reduce the risk of CPP hitting a syntax error.
- require gcc's CPP. I believe this is the current direction that ghc is moving in. It requires that OSX installs of ghc are a bit more involved. It also has a tendency to generate more bug reports as anyone who uses the wrong CPP runs the risk of thinking a particular package is broken.
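For the custom-preprocessor option above, my understanding (unverified across platforms, so treat this as a sketch) is that cpphs can be swapped in per-package via GHC flags in the cabal file, since cpphs provides a --cpp flag for drop-in CPP compatibility:

```
  ghc-options: -pgmP cpphs -optP--cpp
  build-tools: cpphs
```

This at least keeps the choice of preprocessor under the package's control instead of depending on whichever cc the ghc install happens to find.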
Thank you for your time!
I understand my immediate CPP issue a bit better now. Traditional CPP doesn't understand # and ## (the stringification and token-concatenation operators defined in ANSI CPP). However, gcc's traditional CPP strips out comments and spaces as it goes, so you can get token concatenation this way:

#define C(a,b) a/**/b

With gcc, C(Foo,Bar) becomes the single token FooBar. clang, on the other hand, tokenizes a and b while treating the comment as a token separator, so it generates Foo Bar.
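For reference, the ANSI spelling of that macro uses the ## operator (this is the standard ANSI CPP paste operator, which is presumably the rewrite described below):

```
#define C(a,b) a ## b
```

With this definition, both gcc's and clang's CPP expand C(Foo,Bar) to the single token FooBar.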
I rewrote the macro to work with ANSI CPP, but now I can't get ghc to invoke CPP without -traditional. I tried adding -optP-ansi but it gets ignored because -traditional is also on the command line. When I use -pgmP cpp, it fails because clang's CPP doesn't use the same command line options as gcc's CPP.
I can get rid of -traditional by using -pgmP gcc -optP-E -optP-ansi, but that is wrong because it may not be the same gcc that ghc is using.
I rewrote the macros to work with both traditional and ANSI CPP. Not my favorite solution; it means more boilerplate.
submitted by dagit
I wrote a simple server that I could connect to over telnet to run some basic commands, but things are breaking and throwing exceptions and it's not telling me why.
submitted by all_you_need_to_know
I am learning how to do testing in Haskell. Which QuickCheck/smallcheck-style framework do you recommend I use? I'm having trouble understanding the differences. Many of the comparison pages I've seen talk about QuickCheck, but they don't establish whether they are referring to QuickCheck 1 or QuickCheck 2.
submitted by cessationoftime
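For what it's worth, the core idea behind these libraries can be sketched in a few lines of plain Haskell. This is a hand-rolled, smallcheck-flavoured exhaustive check over small inputs, not either library's actual API (QuickCheck instead generates random inputs via its Arbitrary class; the names here are my own):

```haskell
-- Enumerate every list of Ints of length <= d with elements in [0 .. d],
-- smallcheck-style: "if a property fails, it fails on a small case".
smallLists :: Int -> [[Int]]
smallLists d = concat [ sequence (replicate n [0 .. d]) | n <- [0 .. d] ]

-- A typical property: reversing twice is the identity.
prop_revRev :: [Int] -> Bool
prop_revRev xs = reverse (reverse xs) == xs

main :: IO ()
main = print (all prop_revRev (smallLists 3))  -- True
```

The practical difference between the libraries is mostly in how inputs are produced (random generation vs. bounded exhaustive enumeration) and how failing cases are shrunk and reported.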
foldr seems more mathematically pure but can have bad stack usage. Is there a theoretically grounded and non-ugly way to get the advantages of foldl' (while staying general)? I guess, as this seems to suggest, we can simply rely on the compiler for optimizations, but that seems silly.
Why is foldr prettier than foldl'? Consider the definition of foldr:

foldr onCons onNil = go
  where
    go []     = onNil
    go (x:xs) = onCons x (go xs)
foldl' is ugly in comparison:

foldl' onCons = go
  where
    go acc []     = acc
    go acc (x:xs) = let acc' = onCons acc x
                    in seq acc' (go acc' xs)

submitted by sstewartgallus
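As a quick illustration of the stack-usage point (a sketch using the library foldl' from Data.List rather than the definition above):

```haskell
import Data.List (foldl')

main :: IO ()
main = do
  -- foldl' forces the accumulator at each step, so summing a million
  -- Ints runs in constant stack space.
  print (foldl' (+) 0 [1 .. 1000000 :: Int])
  -- By contrast, foldr (+) 0 [1 .. 1000000] would build a chain of a
  -- million pending additions before any (+) could reduce.
```

The seq in the definition above is doing exactly this forcing; without it (plain foldl), the accumulator itself becomes a chain of thunks.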