News aggregator

ICFP 2014 : Program - Thu, 09/18/2014 - 11:48pm
Categories: Offsite Blogs

Proposal: Add a few extra members to Foldable and Traversable classes

libraries list - Thu, 09/18/2014 - 10:37pm
To go along with Herbert Valerio Riedel's moving functions from Foldable and Traversable to Prelude, I realized that `null` and `size` make sense for all Foldables; Reid W. Barton noted that these can have optimized implementations for many containers. He also noted that scanl1 and scanr1 make sense for general Traversables. I therefore propose:

Add `null` and `size` to the Foldable class. Default implementations:

    null = foldr (\_ _ -> False) True
    size = foldl' (\acc _ -> acc + 1) 0

Add `scanl1` and `scanr1` to the Traversable class. Default implementations are a little less simple.

David
_______________________________________________
Libraries mailing list Libraries< at >
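Concretely, the proposed defaults can be written today as free functions over any Foldable (a sketch; the primed names just avoid clashing with the Prelude):

```haskell
import Data.Foldable (foldl')

-- Sketch of the proposed default implementations as ordinary functions.
null' :: Foldable t => t a -> Bool
null' = foldr (\_ _ -> False) True

size' :: Foldable t => t a -> Int
size' = foldl' (\acc _ -> acc + 1) 0

main :: IO ()
main = do
  print (null' ([] :: [Int]))  -- True
  print (null' (Just 3))       -- False
  print (size' [10, 20, 30])   -- 3
```

The `foldr` default short-circuits on the first element, which is why `null` is cheap even for infinite structures, while `size` must force the whole spine.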
Categories: Offsite Discussion

Reimplementing Streams, Lists, and Tries with Fix, Free, Product and Compose

Haskell on Reddit - Thu, 09/18/2014 - 7:34pm

So I've been playing around with some wrapper types I'm sure I've only rediscovered, using them to reimplement some standard types:

newtype Product1 f g a = Product1 { getProduct1 :: (f a, g a) }
instance (Functor f, Functor g) => Functor (Product1 f g) where
  fmap f = Product1 . (fmap f *** fmap f) . getProduct1

newtype Sum1 f g a = Sum1 { getSum1 :: Either (f a) (g a) }
instance (Functor f, Functor g) => Functor (Sum1 f g) where
  fmap f = Sum1 . (fmap f +++ fmap f) . getSum1

newtype Fix1 f a = Fix1 { getFix1 :: f (Fix1 f) a }
instance Functor (f (Fix1 f)) => Functor (Fix1 f) where
  fmap f = Fix1 . fmap f . getFix1

newtype Compose1 f g a b = Compose1 { getCompose1 :: f (g a) b }
instance (Functor (f (g a))) => Functor (Compose1 f g a) where
  fmap f = Compose1 . fmap f . getCompose1

newtype Double1 f g a = Double1 { getDouble1 :: f g g a }
instance Functor (f g g) => Functor (Double1 f g) where
  fmap f = Double1 . fmap f . getDouble1

type Free1 f = Fix1 (Sum1 Identity `Compose1` f)
-- Free1 f a ~ Either a (f (Free1 f) a)

type Link = Product1 Identity
-- Link f a ~ (a, f a)

type Stream = Fix1 Link
-- Stream a ~ (a, Stream a)

type List = Fix1 (Compose Maybe `Compose1` Link)
-- List a ~ Maybe (a, List a)

type NonEmptyList = Fix1 (Link `Compose1` Compose Maybe)
-- NonEmptyList a ~ (a, Maybe (NonEmptyList a))

type Trie f = Fix1 (Product1 Maybe `Compose1` Compose f)
-- Trie f a ~ (Maybe a, f (Trie f a))

type TrieForest f = Fix1 (Compose f `Compose1` Product1 Maybe)
-- TrieForest f a ~ f (Maybe a, TrieForest f a)

type BinaryTree = Free1 (Double1 Product1)
-- BinaryTree a ~ Either a (BinaryTree a, BinaryTree a)

type BST i = Free1 (Product1 (Const i) `Compose1` Double1 Product1)
-- BST i a ~ Either a (BST i a, i, BST i a)

Are there more standard names for these types? Is there a collective name for this kind of variant, or somewhere I can go to read up more on them?
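To see how the encoding plays out, here is a self-contained sketch of the Stream case, re-deriving just the pieces of the post it needs (`cons`, `headS`, `tailS`, `takeS` and `nats` are illustrative helpers, not from the post):

```haskell
import Data.Functor.Identity (Identity (..))

-- Just the pieces of the post needed for Stream.
newtype Product1 f g a = Product1 { getProduct1 :: (f a, g a) }
newtype Fix1 f a = Fix1 { getFix1 :: f (Fix1 f) a }

type Link = Product1 Identity  -- Link f a ~ (a, f a)
type Stream = Fix1 Link        -- Stream a ~ (a, Stream a)

cons :: a -> Stream a -> Stream a
cons x xs = Fix1 (Product1 (Identity x, xs))

headS :: Stream a -> a
headS = runIdentity . fst . getProduct1 . getFix1

tailS :: Stream a -> Stream a
tailS = snd . getProduct1 . getFix1

-- An infinite stream of naturals; take a finite prefix to inspect it.
nats :: Stream Integer
nats = go 0 where go n = cons n (go (n + 1))

takeS :: Int -> Stream a -> [a]
takeS k s
  | k <= 0    = []
  | otherwise = headS s : takeS (k - 1) (tailS s)

main :: IO ()
main = print (takeS 5 nats)  -- [0,1,2,3,4]
```

Unrolling the types, `Fix1 Link a` really is a head paired with another `Fix1 Link a`, which is exactly the usual stream.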

submitted by rampion
[link] [4 comments]
Categories: Incoming News

Confusion with Parsec

Haskell on Reddit - Thu, 09/18/2014 - 5:35pm

I'm following Let's Build a Browser Engine in Haskell, and trying to write all the code myself, with just the signatures (or function names) as a starting point.

I got most of it working, but I have a big bug in my code. I finally compared it to the code from the tutorial and narrowed it down to one part. I've modified my code to be almost identical, highlighting just the main difference which is breaking for me.

My Code:

parseElement = do
  (tag, attrs) <- between (char '<') (char '>') tagData
  children <- parseChildren
  string $ "</" ++ tag ++ ">"
  return $ Dom.elem (T.pack tag) attrs children

parseChildren = spaces >> manyTill parseChild end
  where end = eof <|> (lookAhead (string "</") >> return ())
        parseChild = spacesAfter parseNode

Tutorial Code:

parseElement = do
  (tag, attrs) <- between (char '<') (char '>') tagData
  children <- parseChildren
  string $ tag ++ ">" -- "</" is consumed by parseChildren, maybe bad form?
  return $ Dom.elem (T.pack tag) attrs children

parseChildren = spaces >> manyTill parseChild end
  where end = eof <|> (try (string "</") >> return ())
        parseChild = spacesAfter parseNode

So the difference is that I'm looking ahead for "</" inside parseChildren, in order to know it's the last child node, and then consuming it inside parseElement. I only just saw the "maybe bad form?" part of the comment in the code when I pasted it here, as it was off the screen in my editor, but that was essentially the reason why I tried to approach it this way. For some reason, the "</" is still consumed and I get an error:

unexpected "f" expecting "</"

(where the input text is "<p><foo></foo</p>").

submitted by peterjoel
[link] [4 comments]
Categories: Incoming News

Confused about AllowAmbiguousTypes

haskell-cafe - Thu, 09/18/2014 - 4:00pm
I am confused about the behaviour of the AllowAmbiguousTypes extension. I tried the basic example of the user manual: Using ghc 7.8.3, without the extension, this will cause a type error at the type signature (line 2). Adding AllowAmbiguousTypes causes an error at the call site instead (line 4). So far so good. However, if I replace the (C a) constraint with (Num a), no type error is reported, and the example runs fine! This is true with or without the extension. Using ghc 7.6.3, however, the example is still rejected with an "ambiguous constraint" error. Is this a bug in ghc 7.8? Is ghc somehow able to prove the Num constraint even though a is universally quantified?
Categories: Offsite Discussion

Chung-chieh Shan: Derailing

Planet Haskell - Thu, 09/18/2014 - 11:33am

A type checker that derails:
“show :: a -> String”
“x :: Int”
“show x”
“but you can’t show functions”

A proof checker that derails:
“Birds fly. Tweety the Canary is a bird. So Tweety flies.”
“But Fido the Ostrich doesn’t fly.”

Another type checker that derails:
“main = putStrLn s”
“OK, how about s = show id”

Another proof checker that derails:
“Fido doesn’t fly.”
“But Fido is a bird, and birds fly.”

A helpful type checker:
“show id”
“but you can’t show functions”

A helpful proof checker:
“Birds fly. Fido is a bird. So Fido flies.”
“But Fido is an ostrich, and ostriches don’t fly.”

Another helpful type checker:
“main = putStrLn s”
“OK, how about s = show 42”

Another helpful proof checker:
“Tweety doesn’t fly.”
“But Tweety is a bird, and birds fly.”

Categories: Offsite Blogs

JP Moresmau: Learning Scala via unit tests

Planet Haskell - Thu, 09/18/2014 - 11:07am
Last month I followed a big data training session, and we used Scala to write algorithms for Spark. I had looked at Scala some years ago but I think at the time the Eclipse support wasn't very good (the pot calling the kettle black, eh?), and it piqued my curiosity again. So I started looking at tutorials, and found Scala Labs. The idea is interesting: you get a project with sources and tests, and you need to make all the tests pass. The source code for both the code to change and the unit tests is heavily documented, and guides you through the language nicely. And seeing the green ticks appear always triggers the proper reward areas of my brain (-:

I had a little issue getting the code to compile using Eclipse, since there's a sbt-launch-0.13.0.jar library at the root of the project, and Eclipse used that as the top library, and it had a weird version of the List class that wasn't genericized! I removed that jar from the class path and everything worked fine.

I'm not aware of a similar tutorial for Haskell, but that would be a good idea!
Categories: Offsite Blogs

Stdlib Monad error messages unhelpful.

Haskell on Reddit - Thu, 09/18/2014 - 11:00am

I was just writing a small thing from /r/dailyprogrammers.

f :: (Eq a) => a -> (Int, a) -> Writer [(Int, a)] (Int, a)
f y (n, x)
  | y == x    = return (n + 1, x)
  | otherwise = do
      tell (n, x)
      return (1, y)

I forgot to wrap the (n, x) in brackets to make it a list. The error message I got was not very helpful at all, saying

Could not deduce (MonadWriter (Int, a)
                    (WriterT [(Int, a)] Data.Functor.Identity.Identity))
  arising from a use of ‘tell’
from the context (Eq a)
  bound by the type signature for
             f :: Eq a => a -> (Int, a) -> Writer [(Int, a)] (Int, a)
  at 180.hs:42:5-57
In a stmt of a 'do' block: tell (n, x)
In the expression:
  do { tell (n, x);
       return (1, y) }
In an equation for ‘f’:
    f y (n, x)
      | y == x = return (n + 1, x)
      | otherwise = do { tell (n, x);
                         return (1, y) }

I did try to read it, but it didn't mention the lack of a Monoid constraint at all, or anything. Eventually, I just wrote

data Writer w a = Writer w a

instance (Monoid w) => Monad (Writer w) where
  return x = Writer mempty x
  (Writer w a) >>= f = let Writer w' b = f a in Writer (w `mappend` w') b

tell x = Writer x ()

And then I got a reasonable message,

Couldn't match expected type ‘[(Int, a)]’ with actual type ‘(Int, a)’

and I realized what went wrong. Is there some way to get reasonable error messages without re-writing the stdlib to be less polymorphic? I already had type signatures on everything. Is there import Control.Monad.Basic.Writer or something so I could get reasonable errors?
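For reference, the fix that the second, clearer error points at is simply to log a one-element list. Here is a base-only sketch with a hand-rolled Writer (modernised with the Functor/Applicative instances current GHCs require, so it compiles standalone without mtl):

```haskell
newtype Writer w a = Writer { runWriter :: (a, w) }

instance Functor (Writer w) where
  fmap f (Writer (a, w)) = Writer (f a, w)

instance Monoid w => Applicative (Writer w) where
  pure a = Writer (a, mempty)
  Writer (f, w) <*> Writer (a, w') = Writer (f a, w <> w')

instance Monoid w => Monad (Writer w) where
  Writer (a, w) >>= f = let Writer (b, w') = f a in Writer (b, w <> w')

tell :: w -> Writer w ()
tell w = Writer ((), w)

f :: Eq a => a -> (Int, a) -> Writer [(Int, a)] (Int, a)
f y (n, x)
  | y == x    = return (n + 1, x)
  | otherwise = do
      tell [(n, x)]  -- the fix: a one-element list, not a bare pair
      return (1, y)

main :: IO ()
main = print (runWriter (f 'b' (3, 'a')))  -- ((1,'b'),[(3,'a')])
```

With the log monomorphised to a list like this, passing a bare pair to `tell` fails with the simple "couldn't match [(Int, a)] with (Int, a)" message rather than a missing-instance error.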

submitted by Tyr42
[link] [14 comments]
Categories: Incoming News

The <&> for things with <|>

Haskell on Reddit - Thu, 09/18/2014 - 8:44am

The Alternative typeclass lets us talk about a relationship where we get the first succeeding item. It seems useful to also be able to get the last succeeding item only if all items succeed (this is essentially extending the "boolean or" analogy for <|> to "boolean and"):

a <&> b = join $ fmap (\x -> case x of { Nothing -> a; Just _ -> b }) (optional a)

This requires Monad, and there is quite likely a better way to do it. I'm wondering if others have defined something similar before. Also, any advice on a better way to write it would be welcome, especially if it can be done without Monad.
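Specialised to Maybe, the poster's definition behaves like a short-circuiting "and": it returns the last value only when every argument succeeds (a runnable sketch using the definition verbatim):

```haskell
import Control.Applicative (Alternative, optional)
import Control.Monad (join)

-- The poster's definition: succeed with the last value only if all succeed.
(<&>) :: (Alternative f, Monad f) => f a -> f a -> f a
a <&> b = join $ fmap (\x -> case x of { Nothing -> a; Just _ -> b }) (optional a)

main :: IO ()
main = do
  print (Just 1 <&> Just 2 :: Maybe Int)   -- Just 2
  print (Nothing <&> Just 2 :: Maybe Int)  -- Nothing
  print (Just 1 <&> Nothing :: Maybe Int)  -- Nothing
```

Note that `a` appears twice in the body, so for effectful functors the first action may be run again on failure; that re-running is one reason the Monad constraint is hard to avoid here.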

submitted by singpolyma
[link] [21 comments]
Categories: Incoming News

Improving ghc error messages

haskell-cafe - Thu, 09/18/2014 - 8:13am
Interesting article: I do hope some of these changes make it into a future ghc. I'd particularly like to highlight this one: No instance for (Show ([a0] -> Int)) arising from a use of `show' Possible fix: add an instance declaration for (Show ([a0] -> Int)) The second line looks like it was added by someone who'd read an article about making error messages helpful: "suggest a possible way to fix the error". Like creating a Show instance for a function type!?! I must have seen that "possible fix" 1000 times, and 999 of them it was nothing like the right fix. Now I know just to ignore it. Simply removing the second line would be an improvement: it doesn't add anything to the first line. Toby.
Categories: Offsite Discussion

Bound and free variables

haskell-cafe - Thu, 09/18/2014 - 7:49am
Hi *, I have a simple question about terminology regarding bound and free variables. Assume I have:

let f x = let g y = ... in g c
in ...

Now:
- `c` is free in `g` and `f`
- `y` is bound in `g`
- `x` is free in `g`
- `x` is bound in `f`

What about `y` in `f`? Is it also bound in `f`? If so then it certainly is bound in a different way than `x`. Is there terminology that allows us to distinguish these different forms of bound variables? Janek
Categories: Offsite Discussion

Ian Ross: Non-diffusive atmospheric flow #6: principal components analysis

Planet Haskell - Thu, 09/18/2014 - 7:39am
Non-diffusive atmospheric flow #6: principal components analysis September 18, 2014

The pre-processing that we’ve done hasn’t really got us anywhere in terms of the main analysis we want to do – it’s just organised the data a little and removed the main source of variability (the seasonal cycle) that we’re not interested in. Although we’ve subsetted the original geopotential height data both spatially and temporally, there is still a lot of data: 66 years of 181-day winters, each day of which has 72×15 Z500 values. This is a very common situation to find yourself in if you’re dealing with climate, meteorological, oceanographic or remote sensing data. One approach to this glut of data is something called dimensionality reduction, a term that refers to a range of techniques for extracting “interesting” or “important” patterns from data so that we can then talk about the data in terms of how strong these patterns are instead of what data values we have at each point in space and time.

I’ve put the words “interesting” and “important” in quotes here because what’s interesting or important is up to us to define, and determines the dimensionality reduction method we use. Here, we’re going to side-step the question of determining what’s interesting or important by using the de facto default dimensionality reduction method, principal components analysis (PCA). We’ll take a look in detail at what kind of “interesting” and “important” PCA gives us a little later.

PCA is, in principle, quite a simple method, but it causes many people endless problems. There are some very good reasons for this:

  • PCA is in some sense nothing more than a generic change of basis operation (with the basis we change to chosen in a special way). The result of this is that a lot of the terminology used about PCA is also very generic, and hence very confusing (words like “basis”, “component”, “eigenvector”, “projection” and so on could mean more or less anything in this context!).

  • PCA is used in nearly every field where multivariate data is analysed, and is the archetypical “unsupervised learning” method. This means that it has been invented, reinvented, discovered and rediscovered many times, under many different names. Some other names for it are: empirical orthogonal function (EOF) analysis, the Karhunen-Loève decomposition, proper orthogonal decomposition (POD), and there are many others. Each of these different fields also uses different terms for the different outputs from PCA. This is very confusing: some people talk about principal components, some about empirical orthogonal functions and principal component time series, some about basis functions, and so on. Here, we’re going to try to be very clear and careful about the names that we use for things to try to alleviate some of the confusion.

  • There is a bit of a conceptual leap that’s necessary to go from very basic examples of using PCA to using PCA to analyse the kind of spatio-temporal data we have here. I used to say something like: “Well, there’s a nice two-dimensional example, and it works just the same in 100 dimensions, so let’s just apply it to our atmospheric data!” A perfectly reasonable response to that is: “WHAT?! Are you an idiot?”. Here, we’re going to take that conceptual leap slowly, and describe exactly how the “change of basis” view of PCA works for spatio-temporal data.

  • There are some aspects of the scaling of the different outputs from PCA that are really confusing. In simple terms, PCA breaks your data down into two parts, and you could choose to put the units of your data on either one of those parts, normalising the other part. Which one you put the units on isn’t always an obvious choice and it’s really easy to screw things up if you do it wrong. We’ll look at this carefully here.

So, there’s quite a bit to cover in the next couple of articles. In this article, we will: explain the basic idea of PCA with a very simple (two-dimensional!) example; give a recipe for how to perform PCA on a data set; talk about why PCA works from an algebraic standpoint; talk about how to do these calculations in Haskell. Then in the next article, we will: describe exactly how we do PCA on spatio-temporal data; demonstrate how to perform PCA on the Z500 anomaly data; show how to visualise the Z500 PCA results and save them for later use. What we will end up with from this stage of our analysis is a set of “important” spatial patterns (we’ll see what “important” means for PCA) and time series of how strong each of those spatial patterns is at a particular point in time. The clever thing about this decomposition is that we can restrict our attention to the few most “important” patterns and discard all the rest of the variability in the data. That makes the subsequent exploration of the data much simpler.

The basic idea of PCA

We’re going to take our first look at PCA using a very simple example. It might not be immediately obvious how the technique we’re going to develop here will be applicable to the spatio-temporal Z500 data we really want to analyse, but we’ll get to that a little later, after we’ve seen how PCA works in this simple example and we’ve done a little algebra to get a clearer understanding of just why the “recipe” we’re going to use works the way that it does.

Suppose we go to the seaside and measure the shells of mussels1. We’ll measure the length and width of each shell and record the data for each mussel as a two-dimensional (length, width) vector. There will be variation in the sizes and shapes of the mussels, some longer, some shorter, some fatter, some skinnier. We might end up with data that looks something like what’s shown below, where there’s a spread of length in the shells around a mean of about 5 cm, a spread in the width of shells around a mean of about 3 cm, and there’s a clear correlation between shell length and width (see Figure 1 below). Just from eyeballing this picture, it seems apparent that maybe measuring shell length and width might not be the best way to represent this data – it looks as though it could be better to think of some combination of length and width as measuring the overall “size” of a mussel, and some other combination of length and width as measuring the “fatness” or “skinniness” of a mussel. We’ll see how a principal components analysis of this data extracts these two combinations in a clear way.

The code for this post is available in a Gist. The Gist contains a Cabal file as well as the Haskell source, to make it easy to build. Just do something like this to build and run the code in a sandbox:

git clone pca-2d
cd pca-2d
cabal sandbox init
cabal install
./.cabal-sandbox/bin/pca-2d

Just for a slight change, I’m going to produce all the plots in this section using Haskell, specifically using the Chart library. We’ll use the hmatrix library for linear algebra, so the imports we end up needing are:

import Control.Monad
import Numeric.LinearAlgebra.HMatrix
import Graphics.Rendering.Chart.Easy hiding (Matrix, Vector, (|>), scale)
import Graphics.Rendering.Chart.Backend.Cairo

There are some name overlaps between the monadic plot interface provided by the Graphics.Rendering.Chart.Easy module and hmatrix, so we just hide the overlapping ones.

We generate 500 synthetic data points:

-- Number of test data points.
n :: Int
n = 500

-- Mean, standard deviation and correlation for two dimensions of test
-- data.
meanx, meany, sdx, sdy, rho :: Double
meanx = 5.0 ; meany = 3.0 ; sdx = 1.2 ; sdy = 0.6 ; rho = 0.75

-- Generate test data.
generateTestData :: Matrix Double
generateTestData =
  let seed = 1023
      mean = 2 |> [ meanx, meany ]
      cov = matrix 2 [ sdx^2       , rho*sdx*sdy
                     , rho*sdx*sdy , sdy^2 ]
      samples = gaussianSample seed n mean cov
  in fromRows $ filter ((> 0) . minElement) $ toRows samples

The mussel shell length and width values are generated from a two-dimensional Gaussian distribution, where we specify mean and standard deviation for both shell length and width, and the correlation between the length and width (as the usual Pearson correlation coefficient). Given this information, we can generate samples from the Gaussian distribution using hmatrix’s gaussianSample function. (If we didn’t have this function, we would calculate the Cholesky decomposition of the covariance matrix we wanted, generate samples from a pair of standard one-dimensional Gaussian distributions and multiply two-dimensional vectors of these samples by one of the Cholesky factors of the covariance matrix – this is just what the gaussianSample function does for us.) We do a little filtering in generateTestData to make sure that we don’t generate any negative values2.
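The Cholesky trick mentioned in the aside can be sketched by hand for the 2×2 case (base-only; `cholesky2` and `correlate` are illustrative names, not hmatrix functions):

```haskell
-- Lower Cholesky factor L of a 2x2 covariance [[a, b], [b, c]], assuming
-- a > 0 and positive definiteness, so that cov = L * transpose L.
cholesky2 :: Double -> Double -> Double -> ((Double, Double), (Double, Double))
cholesky2 a b c =
  let l11 = sqrt a
      l21 = b / l11
      l22 = sqrt (c - l21 * l21)
  in ((l11, 0), (l21, l22))

-- Turn an uncorrelated standard-normal sample (z1, z2) into a sample with
-- the desired covariance by multiplying it by L.
correlate :: ((Double, Double), (Double, Double))
          -> (Double, Double) -> (Double, Double)
correlate ((l11, _), (l21, l22)) (z1, z2) = (l11 * z1, l21 * z1 + l22 * z2)

main :: IO ()
main = do
  -- the covariance used in the post: sdx = 1.2, sdy = 0.6, rho = 0.75
  let l = cholesky2 (1.2 ^ 2) (0.75 * 1.2 * 0.6) (0.6 ^ 2)
  print l
  print (correlate l (1, 0))
```

Multiplying standard-normal vectors by L gives samples with covariance L Lᵀ, which is exactly the target covariance matrix.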

The main program that drives the generation of the plots we’ll look at below is:

main :: IO ()
main = do
  let dat = generateTestData
      (varexp, evecs, projs) = pca dat
      (mean, cov) = meanCov dat
      cdat = fromRows $ map (subtract mean) $ toRows dat
  forM_ [(PNG, "png"), (PDF, "pdf"), (SVG, "svg")] $ \(ptype, suffix) -> do
    doPlot ptype suffix dat evecs projs 0
    doPlot ptype suffix cdat evecs projs 1
    doPlot ptype suffix cdat evecs projs 2
    doPlot ptype suffix cdat evecs projs 3
  putStrLn $ "FRACTIONAL VARIANCE EXPLAINED: " ++ show varexp

and you can see the doPlot function that generates the individual plots in the Gist. I won’t say a great deal about the plotting code, except to observe that the new monadic API to the Chart library makes generating this kind of simple plot in Haskell no harder than it would be using Gnuplot or something similar. The plot code produces one of four plots depending on an integer parameter, which ranges from zero (the first plot above) to three. Because we’re using the Cairo backend to the Chart library, we can generate image output in any of the formats that Cairo supports – here we generate PDF (to insert into LaTeX documents), SVG (to insert into web pages) and PNG (for a quick look while we’re playing with the code).

The main program above is pretty simple: generate test data, do the PCA calculation (by calling the pca function, which we’ll look at in detail in a minute), do a little bit of data transformation to help with plotting, then call the doPlot function for each of the plots we want. Here are the plots we produce, which we’ll refer to below as we work through the PCA calculation:

Synthetic mussel shell test data for two-dimensional PCA example.

Centred synthetic mussel shell test data for two-dimensional PCA example.

PCA eigenvectors for two-dimensional PCA example.

Data projection onto PCA eigenvectors for two-dimensional PCA example.

Let’s now run through the “recipe” for performing PCA, looking at the figures above in parallel with the code for the pca function:

 1  pca :: Matrix Double -> (Vector Double, Matrix Double, Matrix Double)
 2  pca xs = (varexp, evecs, projs)
 3    where (mean, cov) = meanCov xs
 4          (_, evals, evecCols) = svd cov
 5          evecs = fromRows $ map evecSigns $ toColumns evecCols
 6          evecSigns ev = let maxelidx = maxIndex $ cmap abs ev
 7                             sign = signum (ev ! maxelidx)
 8                         in cmap (sign *) ev
 9          varexp = scale (1.0 / sumElements evals) evals
10          projs = fromRows $ map project $ toRows xs
11          project x = evecs #> (x - mean)

We’ll look at just why this recipe works in the next section, but for the moment, let’s just see what happens:

  1. We start with our original mussel shell data (Figure 1 above).

  2. We calculate the mean and covariance of our data (line 3 of the pca function listing). PCA analyses the deviations of our data from the mean, so we effectively look at “centred” data, as shown in Figure 2, where we’ve just removed the mean from each coordinate in our data. The mean and covariance calculation is conveniently done using hmatrix’s meanCov function.

  3. Then we calculate the eigendecomposition of the covariance matrix. Because the covariance matrix is a real symmetric matrix, by construction, we know that the eigenvectors will form a complete set that we can use as a basis to represent our data. (We’re going to blithely ignore all questions of possible degeneracy here – for real data, “almost surely” means always!) Here, we do the eigendecomposition using a singular value decomposition (line 4 in the listing of the pca function). The singular values give us the eigenvalues and the right singular vectors give us the eigenvectors. The choice here to use SVD (via hmatrix’s svd function) rather than some other means of calculating an eigendecomposition is based primarily on the perhaps slightly prejudiced idea that SVD has the best and most stable implementations – here, hmatrix calls out to LAPACK to do this sort of thing, so there’s probably not much to choose, since the other eigendecomposition implementations in LAPACK are also good, but my prejudice in favour of SVD remains! If you want some better justification for why SVD is “the” matrix eigendecomposition, take a look at this very interesting historical review of the development of SVD: G. W. Stewart (1993). On the early history of the singular-value decomposition. SIAM Rev. 35(4), 551-566.

  4. We do a little manipulation of the directions of the eigenvectors (lines 5-8 in the listing), flipping the signs of them to make the largest components point in the positive direction – this is mostly just to make the eigenvectors look good for plotting. The eigenvectors are shown in Figure 3: we’ll call them e1 (the one pointing to the upper right) and e2 (the one pointing to the upper left). Note that these are unit vectors. We’ll talk about this again when we look at using PCA for spatio-temporal data.

  5. Once we have unit eigenvectors, we can project our (centred) data points onto these eigenvectors (lines 10 and 11 of the listing: the project function centres a data point by taking off the mean, then projects onto each eigenvector using hmatrix’s matrix-vector product operator #>). Figure 4 shows in schematic form how this works – we pick out one data point in green and draw lines parallel and orthogonal to the eigenvectors showing how we project the data point onto the eigenvectors. Doing this for each data point is effectively just a change of basis: instead of representing our centred data value by measurements along the x- and y-axes, we represent it by measurements in the directions of e1 and e2. We’ll talk more about this below as well.

  6. Finally, the eigenvalues from the eigendecomposition of the covariance matrix tell us something about how much of the total variance in our input data is “explained” by the projections onto each of the eigenvectors. I’ve put the word “explained” in quotes because I don’t think it’s a very good word to use, but it’s what everyone says. Really, we’re just saying how much of the data variance lies in the direction of each eigenvector. Just as you can calculate the variance of the mussel length and width individually, you can calculate the variance of the projections onto the eigenvectors. The eigenvalues from the PCA eigendecomposition tell you how much variance there is in each direction, and we calculate the “fraction of variance explained” for each eigenvector and return it from the pca function.
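The explained-variance computation in the last step is simple enough to state on its own: normalise the eigenvalues by their sum. A base-only sketch of what the varexp line in pca computes (no hmatrix needed; `varianceExplained` is an illustrative name):

```haskell
-- Fraction of total variance "explained" by each eigenvector: each
-- eigenvalue divided by the sum of all eigenvalues.
varianceExplained :: [Double] -> [Double]
varianceExplained evals = map (/ sum evals) evals

main :: IO ()
main = print (varianceExplained [4.0, 1.0])  -- [0.8,0.2]
```

The fractions always sum to one, so truncating to the leading components gives an immediate measure of how much variance is being discarded.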

So, the pca function returns three things: eigenvalues (actually fractional explained variance calculated from the eigenvalues) and eigenvectors from the PCA eigendecomposition, plus projections of each of the (centred) data points onto each of the eigenvectors. The terminology for all these different things is very variable between different fields. We’re going to sidestep the question of what these things are called by always explicitly referring to PCA eigenvectors (or, later on when we’re dealing with spatio-temporal data, PCA eigenpatterns), PCA explained variance fractions and PCA projected components. These terms are a bit awkward, but there’s no chance of getting confused this way. We could choose terminology from one of the fields where PCA is commonly used, but that could be confusing for people working in other fields, since the terminology in a lot of cases is not very well chosen.

Together, the PCA eigenvectors and PCA projected components constitute nothing more than a change of orthonormal basis for representing our input data – the PCA output contains exactly the same information as the input data. (Remember that the PCA eigenvectors are returned as unit vectors from the pca function, so we really are just looking at a simple change of basis.) So it may seem as though we haven’t really done anything much interesting with our data. The interesting thing comes from the fact that we can order the PCA eigenvectors in decreasing order of the explained variance fraction. If we find that data projected onto the first three (say) eigenvectors explains 80% of the total variance in our data, then we may be justified in considering only those three components. In this way, PCA can be used as a dimensionality reduction method, allowing us to use low-dimensional data analysis and visualisation techniques to deal with input data that has high dimensionality.

This is exactly what we’re going to do with the Z500 data: we’re going to perform PCA, and take only the leading PCA eigenvectors and components, throwing some information away. The way that PCA works guarantees that the set of orthogonal patterns we keep are the “best” patterns in terms of explaining the variance in our data. We’ll have more to say about this in the next section when we look at why our centre/calculate covariance/eigendecomposition recipe works.

The algebra of PCA

In the last section, we presented a “recipe” for PCA (at least for two-dimensional data): centre the data; calculate the covariance matrix; calculate the eigendecomposition of the covariance matrix; project your centred data points onto the eigenvectors. The eigenvalues give you a measure of the proportion of the variance in your data in the direction of the corresponding eigenvector. And the projection of the data points onto the PCA eigenvectors is just a change of basis, from whatever original basis your data was measured in (mussel shell length and width as the two components of each data point in the example) to a basis with the PCA eigenvectors as basis vectors.
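For a 2×2 covariance matrix the whole recipe collapses to closed form, which makes a handy sanity check on the eigendecomposition step (a base-only sketch; `pca2` is an illustrative name, not the article's hmatrix-based pca):

```haskell
-- Eigenvalues (in decreasing order) and the leading unit eigenvector of the
-- symmetric 2x2 covariance matrix [[a, b], [b, c]], assuming b /= 0.
pca2 :: Double -> Double -> Double -> ((Double, Double), (Double, Double))
pca2 a b c =
  let tr = a + c
      disc = sqrt ((a - c) ^ 2 + 4 * b * b)
      l1 = (tr + disc) / 2
      l2 = (tr - disc) / 2
      -- (b, l1 - a) solves (A - l1 I) v = 0; normalise to unit length
      nrm = sqrt (b * b + (l1 - a) ^ 2)
  in ((l1, l2), (b / nrm, (l1 - a) / nrm))

main :: IO ()
main = print (pca2 2 1 2)  -- eigenvalues 3 and 1; eigenvector along (1,1)
```

For the correlated mussel-style covariance [[2, 1], [1, 2]] this gives eigenvalues 3 and 1 with the leading eigenvector along the diagonal, matching the intuition that the leading direction is the joint "size" axis.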

So why does this work? Obviously, you can use whatever basis you like to describe your data, but why is the PCA eigenbasis useful and interesting? I’ll explain this quite quickly, since it’s mostly fairly basic linear algebra, and you can read about it in more detail in more or less any linear algebra textbook3.

To start with, let’s review some facts about eigenvectors and eigenvalues. For a matrix A, an eigenvector u and its associated eigenvalue λ satisfy

$$ A u = \lambda u. $$
The first thing to note is that any scalar multiple of u is also an eigenvector, so an eigenvector really refers to a “direction”, not to a specific vector with a fixed magnitude. If we multiply both sides of this by uT and rearrange a little, we get

$$ \lambda = \frac{u^T A u}{u^T u}. $$
The denominator of the fraction on the right hand side is just the squared length of the vector u. Now, we can find the largest eigenvalue λ1 and corresponding eigenvector u1 by solving the optimisation problem

$$ u_1 = \operatorname*{arg\,max}_{u^T u = 1} u^T A u, $$

where for convenience, we’ve restricted the optimisation to find a unit eigenvector, and we find λ1 directly from the fact that $A u_1 = \lambda_1 u_1$.

We can find the next largest (in magnitude) eigenvalue and corresponding eigenvector of the matrix A by projecting the rows of A into the subspace orthogonal to u1 to give a new matrix A1 and solving the optimisation problem

$$ u_2 = \operatorname*{arg\,max}_{u^T u = 1} u^T A_1 u, $$

finding the second largest eigenvalue λ2 from $A_1 u_2 = \lambda_2 u_2$. Further eigenvectors and eigenvalues can be found in order of decreasing eigenvalue magnitude by projecting into subspaces orthogonal to all the eigenvectors found so far and solving further optimisation problems.

This link between this type of optimisation problem and the eigenvectors and eigenvalues of a matrix is the key to understanding why PCA works the way that it does. Suppose that we have centred our (K-dimensional) data, and that we call the N centred data vectors xi, i = 1, 2, …, N. If we now construct an N×K matrix X whose rows are the xi, then the sample covariance of the data is

$$ C = \frac{1}{N - 1} X^T X. $$
Now, given a direction represented as a unit vector u, we can calculate the data variance in that direction as $\|X u\|^2$, so that if we want to know the direction in which the data has the greatest variance, we solve an optimisation problem of the form

$$ u_1 = \operatorname*{arg\,max}_{u^T u = 1} (X u)^T X u = \operatorname*{arg\,max}_{u^T u = 1} u^T C u. $$

But this optimisation problem is just the eigendecomposition optimisation problem for the covariance matrix C. This demonstrates that we can find the directions of maximum variance in our data by looking at the eigendecomposition of the covariance matrix C in decreasing order of eigenvalue magnitude.

There are a couple of things to add to this. First, the covariance matrix is, by construction, a real symmetric matrix, so its eigenvectors form a complete basis – this means that we really can perform a change of basis from our original data to the PCA basis with no loss of information. Second, because the eigenvectors of the covariance matrix are orthogonal, the projections of our data items onto the eigenvector directions (what we’re going to call the PCA projected components) are uncorrelated. We’ll see some consequences of this when we look at performing PCA on the Z500 data. Finally, and related to this point, it’s worth noting that PCA is a linear operation – the projected components are linearly uncorrelated, but that doesn’t mean that there can’t be some nonlinear relationship between them. There are generalisations of PCA to deal with this case, but we won’t be talking about them for the purposes of this analysis.

Everything we’ve done here is pretty straightforward, but you might be wondering why we would want to change to this PCA basis at all. What’s the point? As I noted above, and it’s worth reiterating, the most common use for PCA, and the way that we’re going to use it with the Z500 data, is as a dimensionality reduction method. For the Z500 data, we have, for each day we’re looking at, 72×15=1080 spatial points, which is a lot of data to look at and analyse. What we usually do is to perform PCA, then ignore all but the first few leading PCA eigenvectors and projected components. Because of the way the optimisation problems described above are set up, we can guarantee that the leading m PCA eigenvectors span the m-dimensional subspace of the original data space containing the most data variance, and we can thus convince ourselves that we aren’t missing interesting features of our data by taking only those leading components. We’ll see how this works in some detail when we do the PCA analysis of the Z500 data, but in the mussel measurement case, this would correspond to thinking just of the projection of the mussel length and width data along the leading e1 eigendirection, so reducing the measurements to a single “size” parameter that neglects the variation in fatness or skinniness of the mussels. (This two-dimensional case is a bit artificial. Things will make more sense when we look at the 1080-dimensional case for the Z500 data.)

  1. Well, not really, since I live in the mountains of Austria and there aren’t too many mussels around here, so I’ll generate some synthetic data!

  2. Obviously, a Gaussian distribution is not right for quantities like lengths that are known to be positive, but here we’re just generating some data for illustrative purposes, so we don’t care all that much. If we were trying to model this kind of data though, we’d have to be more careful.

  3. I like Gilbert Strang’s Linear Algebra and its Applications, although I’ve heard from some people that they think it’s a bit hard for a first textbook on the subject – if you’ve had any exposure to this stuff before though, it’s good.

data-analysis haskell
Categories: Offsite Blogs

cryptography and homomorphic library support?

haskell-cafe - Thu, 09/18/2014 - 5:38am
Hi, I just perused the crypto section of Hackage and I didn't see any support for homomorphic functions. I didn't see anything .. maybe I missed it. If there's no support, I wouldn't mind giving a shot at working on this as long as no one has already implemented it. If someone is working on this, please inform me of contact info so perhaps I can join the team. Kind regards, Vasya
Categories: Offsite Discussion

Haskell Weekly News: Issue 306

General haskell list - Thu, 09/18/2014 - 5:28am
Welcome to issue 306 of the HWN, an issue covering crowd-sourced bits of information about Haskell from around the web. This issue covers from September 07 to 13, 2014 Quotes of the Week * prophile: at this point ekmett is more part of the language than a practitioner of it * Qfwfq: Simon is a title, not a forename. Top Reddit Stories * Haskell for all: Morte: an intermediate language for super-optimizing functional programs Domain:, Score: 110, Comments: 83 Original: [1] On Reddit: [2] * AMP has landed in GHC Main! Domain:, Score: 88, Comments: 71 Original: [3] On Reddit: [4] * [Meta] What's with all these downvotes on beginner questions? Domain: self.haskell, Score: 69, Comments: 67 Original: [5] On Reddit: [6] * Haskell Implementors Workshop 2014 videos [youtube] Doma
Categories: Incoming News

optional ghc-options based on GHC version?

haskell-cafe - Thu, 09/18/2014 - 3:38am
Hi, I have a project here where GHC's option -flate-dmd-anal gets out some extra speed but the flag is not available on 7.6.3. How can I specify in the cabal file that I only want to pass this through when building with 7.8.3 or newer? It'd be a shame to not take advantage of the extra speed on 7.8.3 just because we want to support 7.6.3 too for a bit longer. I was unable to find the info online. Thanks.
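(One way to do this, sketched here with a hypothetical library stanza; only the -flate-dmd-anal flag comes from the question. Cabal supports conditionals on the compiler implementation and version:)

```cabal
library
  exposed-modules: MyModule   -- hypothetical module name
  ghc-options:     -O2
  -- only pass the flag to compilers that understand it
  if impl(ghc >= 7.8)
    ghc-options: -flate-dmd-anal
```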
Categories: Offsite Discussion

Boolean formula: part2

haskell-cafe - Wed, 09/17/2014 - 8:48pm
Hello! I have the following code:

    buildFormulas :: [Variable] -> Int -> [Formula]
    buildFormulas vars 0 = map Var vars
    buildFormulas vars n = join $ forM [0 .. n - 1] $ \i -> do
        x <- buildFormulas vars i
        y <- buildFormulas vars (n - i - 1)
        z <- buildFormulas vars (n - 1)
        let t1 = case z of
                   (Negation _) -> []
                   _            -> [Negation z]
        t1 ++ [conj x y, impl x y, disj x y]

It is correct, typechecks, but it is insanely slow (calculating take 35 (buildFormulas [P, Q, R] 5) hangs my computer). My guess is that it builds too much before returning first results. Can you suggest a more efficient way to enumerate formulas over a set of variables? -- Best regards, Dmitry Bogatov <KAction< at >>, Free Software supporter, esperantisto and netiquette guardian. GPG: 54B7F00D _______________________________________________ Haskell-Cafe mailing list Haskell-Cafe< at >
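(One possible restructuring, sketched with a made-up Formula type since the poster's isn't shown in full: the slowness comes from recomputing buildFormulas for the same size exponentially many times, so memoise each size in a lazily built table and each size is enumerated only once, with results streaming out incrementally:)

```haskell
-- Illustrative Formula type; plain Chars stand in for the poster's P, Q, R.
data Formula = Var Char
             | Negation Formula
             | Conj Formula Formula
             | Impl Formula Formula
             | Disj Formula Formula
  deriving Show

-- formulasOfSize vars !! n enumerates formulas with n connectives.
-- Sharing the table means each size is computed once, not once per caller.
formulasOfSize :: [Char] -> [[Formula]]
formulasOfSize vars = table
  where
    table = map go [0 ..]
    go 0 = map Var vars
    go n = [ op x y | i  <- [0 .. n - 1]
                    , x  <- table !! i
                    , y  <- table !! (n - i - 1)
                    , op <- [Conj, Impl, Disj] ]
        ++ [ Negation z | z <- table !! (n - 1), notNeg z ]
    notNeg (Negation _) = False
    notNeg _            = True

main :: IO ()
main = mapM_ print (take 5 (formulasOfSize "PQR" !! 5))
```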
Categories: Offsite Discussion