I have seen Hölder’s inequality and Minkowski’s inequality proved in several ways, but this seems the most perspicuous (to me, at any rate).

Young’s Inequality
If $a, b \ge 0$ and $p, q > 1$ such that $\frac{1}{p} + \frac{1}{q} = 1$, then $$ab \le \frac{a^p}{p} + \frac{b^q}{q}.$$
A pair $p$ and $q$ satisfying the premise are known as conjugate indices.
Since $x \mapsto -\ln x$ is convex we have, for $s, t > 0$, $$-\ln\left(\frac{s}{p} + \frac{t}{q}\right) \le -\frac{\ln s}{p} - \frac{\ln t}{q}.$$
Substituting in $s = a^p$ and $t = b^q$ gives $$\ln(ab) = \frac{\ln a^p}{p} + \frac{\ln b^q}{q} \le \ln\left(\frac{a^p}{p} + \frac{b^q}{q}\right).$$
Now take exponents.

Hölder’s Inequality
Let $p$ and $q$ be conjugate indices with $1 < p < \infty$, and let $f \in L^p(\mu)$ and $g \in L^q(\mu)$; then $fg \in L^1(\mu)$ and $$\|fg\|_1 \le \|f\|_p \, \|g\|_q.$$
By Young’s inequality, for each $x$, $$\frac{|f(x)|}{\|f\|_p} \cdot \frac{|g(x)|}{\|g\|_q} \le \frac{|f(x)|^p}{p \, \|f\|_p^p} + \frac{|g(x)|^q}{q \, \|g\|_q^q},$$ and integrating both sides gives $\frac{\|fg\|_1}{\|f\|_p \|g\|_q} \le \frac{1}{p} + \frac{1}{q} = 1$ (when $0 < \|f\|_p, \|g\|_q < \infty$; the remaining cases are immediate).
By applying Hölder’s inequality to the counting measure on $\mathbb{N}$ we also obtain $$\sum_k |a_k b_k| \le \left(\sum_k |a_k|^p\right)^{1/p} \left(\sum_k |b_k|^q\right)^{1/q}.$$
Minkowski’s Inequality

Let $1 \le p < \infty$ and let $f, g \in L^p(\mu)$; then $\|f+g\|_p \le \|f\|_p + \|g\|_p$. For $p = 1$ this is just the pointwise triangle inequality integrated, so assume $p > 1$ and let $q$ be its conjugate index. Since $|f+g|^p \le |f+g|^{p-1}(|f| + |g|)$, we have $$\|f+g\|_p^p \le \int |f+g|^{p-1}|f| \, d\mu + \int |f+g|^{p-1}|g| \, d\mu.$$ By Hölder’s inequality $$\int |f+g|^{p-1}|f| \, d\mu \le \left(\int |f+g|^{(p-1)q} \, d\mu\right)^{1/q} \|f\|_p = \|f+g\|_p^{p/q} \, \|f\|_p$$ (using $(p-1)q = p$), and likewise for the term with $g$, so $$\|f+g\|_p^p \le \|f+g\|_p^{p/q} \left(\|f\|_p + \|g\|_p\right),$$ and $\|f+g\|_p$ is finite since $L^p(\mu)$ is a vector space, so dividing by $\|f+g\|_p^{p/q}$ and using $p - \frac{p}{q} = 1$ gives the result.
Goal: Maintainable cross-platform code that is compatible with multiple versions of its dependencies.
Conventional Haskell solution: Use CPP in traditional (aka K&R style) mode to preprocess Haskell files to adapt to platform, environment, and dependency specifics. Traditional CPP is used so that CPP doesn't make too many assumptions about C syntax. The cabal user guide gives CPP examples further giving the impression that this is the preferred and standard approach.
Problem: clang authors prefer ANSI CPP and, reluctantly, only support some bits of traditional CPP. This is creating problems on OSX Mavericks where clang is the default and clang's CPP rejects some Haskell sources that previously worked with gcc's CPP. As several people have pointed out to me, it is a hack to use CPP for Haskell in the first place. Our hack is bad and we should feel bad :)
Question: What is a solid and practical alternative to CPP? I want a solution that is solid and known good. I'm optimistically looking for suggestions that have worked over several ghc releases and on OSX, Windows, and Linux.
Below is a list of alternatives that I'm vaguely aware of. I haven't actually tried them yet, so I may have some of the details wrong. Suggestions, corrections, experience reports, and pros & cons lists are all greatly appreciated:
- Supply a custom preprocessor with each package that needs CPP support and feed that to ghc. This could work as long as the preprocessor is fairly general. cpphs has been around for a while, but when lens tried to use it they hit some rough edges.
- Put cpp-options: -traditional in the cabal file: lens-4.1.1 and newer uses this, but it may not do anything? clang's cpp --help lists -traditional-cpp but not -traditional. Does it accept both for compatibility with gcc? Also, I checked ghc-7.6.1 and ghc-7.6.3 (linux and OSX Mavericks, respectively) and both are passing -traditional to CPP already.
- Continue (ab)using CPP and:
- test each release with clang's CPP. This could get messy, as it would likely require isolating CPP bits into standalone modules with very controlled use of syntax to reduce the risk of CPP hitting a syntax error.
- require gcc's CPP. I believe this is the current direction that ghc is moving in. It makes OSX installs of ghc a bit more involved, and it has a tendency to generate more bug reports, as anyone who uses the wrong CPP runs the risk of thinking a particular package is broken.
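As a concrete sketch of the first alternative: I believe cpphs can be swapped in through the cabal file, roughly like this (assumes cpphs is installed; the --cpp flag asks cpphs to accept cpp-compatible command-line options, which is what ghc passes to its preprocessor — exact field names may vary by cabal version):

```
  ghc-options: -pgmPcpphs -optP--cpp
  build-tools: cpphs
```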
Thank you for your time!
I understand my immediate CPP issue a bit better now. Traditional CPP doesn't understand # and ## (the stringification and token-concatenation operators defined in ANSI CPP). Furthermore, gcc's CPP strips out comments and spaces as it goes, so you can get token concatenation this way:

#define C(a,b) a/**/b
C(Foo,Bar) would become the token FooBar. clang, on the other hand, tokenizes a and b while treating the comment as a token separator, so it generates Foo Bar.
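To make that concrete, here is a minimal sketch of the trick in a Haskell file (names are illustrative; it only works when ghc's preprocessor is a traditional-mode CPP like gcc's — under clang's CPP the expansion becomes two tokens and compilation fails, which is exactly the problem described above):

```haskell
{-# LANGUAGE CPP #-}
-- A traditional-mode CPP deletes the comment without inserting a
-- space, so C(foo,Bar) pastes into the single identifier fooBar.
#define C(a,b) a/**/b

fooBar :: Int   -- illustrative name
fooBar = 42

main :: IO ()
main = print (C(foo,Bar))
```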
I rewrote the macro to work with ANSI CPP, but now I can't get ghc to invoke CPP without -traditional. I tried adding -optP-ansi but it gets ignored because -traditional is also on the command line. When I use -pgmP cpp, it fails because clang's CPP doesn't use the same command line options as gcc's CPP.
I can get rid of -traditional by using -pgmP gcc -optP-E -optP-ansi, but that is wrong because it may not be the same gcc that ghc is using.
I rewrote the macros to work with both traditional and ANSI CPP. Not my favorite solution. It means more boilerplate.

submitted by dagit
I wrote a simple server that I could connect to over telnet to run some basic commands, but things are breaking and throwing exceptions and it's not telling me why.

submitted by all_you_need_to_know
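One common first step (a sketch, not the poster's code) is to wrap each handler in Control.Exception's try, so exceptions get printed instead of silently killing the connection thread:

```haskell
import Control.Exception (SomeException, try)

-- Run an action, printing any exception instead of letting it
-- escape silently (e.g. inside a forkIO'd connection handler).
logExceptions :: IO () -> IO ()
logExceptions action = do
    result <- try action
    case result of
        Left e   -> putStrLn ("handler died: " ++ show (e :: SomeException))
        Right () -> return ()

main :: IO ()
main = logExceptions (error "boom")
```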
I am learning how to do testing in Haskell. What quickcheck/smallcheck-style framework do you guys recommend I use? I'm having trouble understanding the differences. Many of the pages I've seen that make comparisons talk about QuickCheck, but they don't establish whether they are referring to QuickCheck 1 or QuickCheck 2.

submitted by cessationoftime
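For reference, a minimal QuickCheck 2 property test (assuming the QuickCheck package is installed) looks like this:

```haskell
import Test.QuickCheck (quickCheck)

-- reversing a list twice should give back the original list
prop_reverseTwice :: [Int] -> Bool
prop_reverseTwice xs = reverse (reverse xs) == xs

main :: IO ()
main = quickCheck prop_reverseTwice
```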
foldr seems more mathematically pure but can have bad stack usage. Is there a theoretically grounded and non-ugly way to get the advantages of foldl' (and is also general)? I guess, as this seems to suggest, we can simply rely on the compiler for optimizations, but that seems silly.
Why is foldr prettier than foldl'? Consider the definition of foldr:

foldr onCons onNil = go
  where
    go []     = onNil
    go (x:xs) = onCons x (go xs)
foldl' is ugly in comparison:

foldl' onCons = go
  where
    go onNil []     = onNil
    go onNil (x:xs) = let onNil' = onCons onNil x
                      in seq onNil' (go onNil' xs)

submitted by sstewartgallus
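A quick way to see why the ugliness buys something (a small sketch; the strict left fold lives in Data.List):

```haskell
import Data.List (foldl')

main :: IO ()
main =
    -- foldl' forces the accumulator at each step, so summing a large
    -- list runs in constant stack space; the lazy foldl (or foldr with
    -- a strict operator like (+)) would instead build a million-deep
    -- chain of thunks before evaluating anything.
    print (foldl' (+) 0 [1 .. 1000000 :: Int])
```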
So, I had to write a program in Haskell and a program in Java that do the same thing. I'm new to Haskell, but I managed to get a basic program working. However, I'm struggling with what to write about in the written part of the homework.
It says we need to:
- Identify any features of the languages which might have an adverse effect on the quality of the programs.
- Look at how the respective languages support expressibility of data and control abstractions, and discuss this.
Any pointers? I think this is quite a tough one, but any help is much appreciated.
Edit: I believe we're expected to compare and contrast in a declarative vs imperative sort of way.

submitted by the16
Hi Haskell community.
I'm benchmarking Haskell compilers, but most of the benchmarks I have are completely contrived and don't relate to how Haskell is used in the wild. So I'm looking for libraries which are in common use (i.e. they're on Hackage) and easy to benchmark. The haskell-src-exts package is an example of what I'm looking for. One benchmark result for parsing Haskell code is worth more than all the fibonacci, n-queens, and prime-number benchmarks put together. Please help me find more libraries where performance improvements actually matter.
So, if you're using a Haskell library for anything CPU intensive, please let me know. I'd love to include it in my benchmark suite.
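For instance, the haskell-src-exts case mentioned above could be measured with something along these lines (a sketch assuming the criterion and haskell-src-exts packages; the tiny inline source is illustrative, and whnf sidesteps needing an NFData instance for the parse result):

```haskell
import Criterion.Main (bench, defaultMain, whnf)
import Language.Haskell.Exts (parseModule)

main :: IO ()
main = do
    -- a real benchmark would read a large module from disk
    let src = "module Demo where\nmain :: IO ()\nmain = putStrLn \"hi\"\n"
    defaultMain
        [ bench "parseModule" (whnf parseModule src) ]
```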
The end result will look something like this: http://mirror.seize.it/linux-ghc-7.8.2-2014.05.11.html and this http://mirror.seize.it/linux-ajhc-ghc-uhc-2014.05.11.html
(PS. I've already looked at nofib and fibon.)

submitted by Lemmih