Eigenvectors and covariance
Finding the directions a transformation leaves alone.
Directions that stay put
Take a matrix and apply it to a vector. In general, the output points in a different direction than the input. The vector gets rotated, sheared, sent somewhere new. That is what matrices do.
But try this specific matrix and this specific vector:

$$\begin{pmatrix} 2 & 1 \\ 0 & 3 \end{pmatrix} \begin{pmatrix} 1 \\ 1 \end{pmatrix} = \begin{pmatrix} 3 \\ 3 \end{pmatrix} \tag{1}$$

The input is $(1, 1)$. The output is $(3, 3)$. Same direction. The vector was stretched by a factor of 3, but it did not rotate. The matrix, which in general rotates and shears everything, left this particular direction alone.
Is that a coincidence of the input we chose? Let's try another:

$$\begin{pmatrix} 2 & 1 \\ 0 & 3 \end{pmatrix} \begin{pmatrix} 1 \\ 0 \end{pmatrix} = \begin{pmatrix} 2 \\ 0 \end{pmatrix} \tag{2}$$

The input is $(1, 0)$. The output is $(2, 0)$. Again: same direction, just scaled, this time by a factor of 2. A second direction that the matrix does not rotate.
Now try something generic, like $(0, 1)$: the output is $(1, 3)$. That is not a multiple of $(0, 1)$. This vector does get rotated. So the property is special to particular directions.
These special directions are called eigenvectors, and they turn out to be the key to PCA, to stability analysis in dynamical systems, and to understanding covariance matrices.
Eigenvectors and eigenvalues
A direction that a matrix scales but does not rotate is called an eigenvector. The scale factor is its eigenvalue. In symbols:

$$A v = \lambda v \tag{3}$$

The matrix $A$ acts on $v$ and produces the same vector $v$, multiplied by the number $\lambda$. For the matrix above, we found two eigenvectors: $(1, 1)$ with eigenvalue 3, and $(1, 0)$ with eigenvalue 2.
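A quick numerical check of both eigenpairs, using a minimal sketch in NumPy (the specific matrix here is my reading of the example: trace 5, determinant 6, a single off-diagonal 1, eigenvalues 3 and 2):

```python
import numpy as np

# A concrete 2x2 matrix with eigenvalues 3 and 2 (assumed to match
# the example above: trace 5, determinant 6, one off-diagonal entry).
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])

v1 = np.array([1.0, 1.0])   # eigenvector with eigenvalue 3
v2 = np.array([1.0, 0.0])   # eigenvector with eigenvalue 2

print(A @ v1)   # [3. 3.] = 3 * v1: scaled, not rotated
print(A @ v2)   # [2. 0.] = 2 * v2

# np.linalg.eig recovers the same eigenpairs (columns of vecs are
# unit-length eigenvectors, in no guaranteed order).
vals, vecs = np.linalg.eig(A)
print(np.sort(vals))   # [2. 3.]
```

`np.linalg.eig` returns eigenvectors scaled to unit length, so they are multiples of the hand-picked $(1,1)$ and $(1,0)$, not identical to them.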
What does the eigenvalue tell you? If $\lambda > 1$, the vector gets stretched. If $0 < \lambda < 1$, it gets compressed. If $\lambda < 0$, it gets flipped. If $\lambda = 0$, it gets sent to zero: the matrix annihilates that direction entirely. (That is a null-space direction, from the previous post.)
Watch the figure. A circle of unit vectors gets mapped to an ellipse. Most directions change angle. But the eigenvector directions stay on their original line: they land on the ellipse at exactly the point you would reach by walking along the same direction, just farther from or closer to the origin. The eigenvalue is the stretch factor.

Some matrices have fewer than $n$ independent eigenvectors (defective matrices). Others have complex eigenvalues, which correspond to rotations rather than pure scalings. We set both aside here. Complex eigenvalues will matter in the dynamics posts, where they produce oscillations.
For neural dynamics, eigenvalues control stability. If your system evolves as $x_{t+1} = A x_t$, then along an eigenvector direction the system simply scales: $x_t = \lambda^t x_0$. When $|\lambda| < 1$, the component decays. When $|\lambda| > 1$, it explodes. The eigenvalues are the knobs controlling how each independent mode of the dynamics grows or shrinks.
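A toy simulation makes the stability claim concrete. The dynamics matrix below is diagonal and entirely hypothetical, chosen so one mode decays and one explodes:

```python
import numpy as np

# Iterate x_{t+1} = A x_t. One mode has |lambda| < 1, one has |lambda| > 1.
A = np.array([[0.9, 0.0],
              [0.0, 1.1]])

x = np.array([1.0, 1.0])
for _ in range(50):
    x = A @ x

print(x)   # first mode has shrunk to ~0.005, second has grown to ~117
```

After 50 steps the decaying mode is essentially gone ($0.9^{50} \approx 0.005$) while the unstable mode dominates ($1.1^{50} \approx 117$).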
Diagonalization
In Post 2, we said that the same transformation looks different in different bases. A messy matrix in one basis might become simple in another. Eigenvectors tell you which basis makes it simplest.
Our matrix has eigenvectors $(1, 1)$ and $(1, 0)$. These are two independent vectors, so they form a basis. What does the matrix look like in this basis?

In the eigenvector basis, the matrix sends $(1, 1)$ to $3 \cdot (1, 1)$ and $(1, 0)$ to $2 \cdot (1, 0)$. No mixing between the two directions. The matrix, which in the standard basis had an off-diagonal entry (the 1 in the top-right corner, coupling the two coordinates), becomes diagonal:

$$\Lambda = \begin{pmatrix} 3 & 0 \\ 0 & 2 \end{pmatrix} \tag{4}$$

Each axis scales independently. No cross-talk. This is the eigendecomposition. Stack the eigenvectors as columns of a matrix $V$ and the eigenvalues on the diagonal of $\Lambda$:

$$A = V \Lambda V^{-1} \tag{5}$$
Read this as a three-step process. Convert from the standard basis to the eigenvector basis (multiply by $V^{-1}$). Scale each coordinate independently by its eigenvalue (multiply by $\Lambda$). Convert back (multiply by $V$). In the eigenvector basis, the transformation has no mixing between dimensions. All the complexity of the matrix was just a consequence of looking at it in the wrong coordinates.
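The three steps can be carried out literally. A sketch, using the assumed example matrix and its eigenvectors $(1,1)$ and $(1,0)$:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])     # assumed example matrix
V = np.array([[1.0, 1.0],      # columns are the eigenvectors
              [1.0, 0.0]])
L = np.diag([3.0, 2.0])        # matching eigenvalues

x = np.array([0.5, -2.0])            # any vector
coords = np.linalg.solve(V, x)       # step 1: into the eigenvector basis
scaled = L @ coords                  # step 2: scale each coordinate
back = V @ scaled                    # step 3: back to the standard basis

print(np.allclose(back, A @ x))                  # True
print(np.allclose(V @ L @ np.linalg.inv(V), A))  # True
```

The detour through the eigenvector basis lands exactly where applying $A$ directly does, which is all that $A = V \Lambda V^{-1}$ says.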
The eigenvalues are invariant. Switch to any basis and they stay the same: they are a property of the transformation, not of the coordinates. The trace (sum of diagonal entries) equals $\lambda_1 + \lambda_2 = 3 + 2 = 5$. The determinant equals $\lambda_1 \lambda_2 = 6$. Both are invariant too. The entries of the matrix change with the basis. These quantities do not.

Two matrices related by $B = P^{-1} A P$ for some invertible $P$ are called similar. Similar matrices represent the same transformation in different bases. They share eigenvalues, trace, determinant, and rank.
The covariance matrix
Now let's build a specific matrix from neural data and see what its eigenvectors tell us.
You record from two neurons over many time bins. Center the data: subtract each neuron's mean, so the average firing rate is zero. At each time bin $t$, you have a centered firing-rate vector $x_t$.
Take one time bin and form the outer product: $x_t x_t^\top$. This is a 2-by-2 matrix:

$$x_t x_t^\top = \begin{pmatrix} x_{1,t}^2 & x_{1,t}\,x_{2,t} \\ x_{2,t}\,x_{1,t} & x_{2,t}^2 \end{pmatrix} \tag{6}$$

The diagonal entries are the squared firing rates. The off-diagonal entries are the products of the two neurons' rates. If both neurons fire above their mean at the same time, the off-diagonal product is positive. If one is above and the other below, it is negative. This one matrix captures the pattern of co-activation at a single moment.
Notice that each outer product has rank 1: every row is a scaled copy of every other row. One moment of data gives you a rank-1 snapshot of co-fluctuation.
Now average over all time bins:

$$C = \frac{1}{T} \sum_{t=1}^{T} x_t x_t^\top = \frac{1}{T} X^\top X \tag{7}$$

where $X$ is the centered data matrix (rows are time bins, columns are neurons). This is the covariance matrix. Entry $C_{ij}$ tells you how much neurons $i$ and $j$ tend to fluctuate together. Entry $C_{ii}$ is the variance of neuron $i$.

Equation (7) uses $T$ in the denominator rather than $T - 1$. The sample covariance uses $T - 1$, which corrects for bias. The distinction does not matter for our purposes. What matters: always center first. If you forget, the first principal component may simply point toward the mean instead of capturing variance. Cunningham and Yu [4] and Yu et al. [8] (GPFA) both work with centered covariance structure as the starting point for neural dimensionality reduction.
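Here is that recipe on simulated data (the firing rates, means, and covariance below are hypothetical, chosen only for illustration):

```python
import numpy as np

# Simulated recording: 1000 time bins from two correlated "neurons".
rng = np.random.default_rng(0)
rates = rng.multivariate_normal(mean=[5.0, 8.0],
                                cov=[[4.0, 3.0], [3.0, 9.0]],
                                size=1000)

X = rates - rates.mean(axis=0)   # center first: subtract each neuron's mean
C = X.T @ X / X.shape[0]         # equation (7), with T in the denominator

print(C)  # close to [[4, 3], [3, 9]], up to sampling noise
print(np.allclose(C, np.cov(rates, rowvar=False, bias=True)))  # True
```

The last line checks the two-line recipe against NumPy's own `np.cov` (with `bias=True` for the $1/T$ normalization).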
Let's make this concrete. Suppose your two neurons have covariance matrix:

$$C = \begin{pmatrix} 4 & 3 \\ 3 & 9 \end{pmatrix} \tag{8}$$

Neuron 1 has variance 4, neuron 2 has variance 9, and their covariance is 3 (positively correlated: when one is above its mean, the other tends to be too). The total variance is $4 + 9 = 13$, which is the trace.
Now: what are this matrix's eigenvectors?
The spectral theorem
The covariance matrix has two properties that make it much easier to work with than a general matrix.
First, it is symmetric: $C = C^\top$. The covariance of neuron 1 with neuron 2 equals the covariance of neuron 2 with neuron 1. This follows from the outer product: $(x_t x_t^\top)^\top = x_t x_t^\top$.
Second, it is positive semidefinite: for any vector $v$,

$$v^\top C v \ge 0 \tag{9}$$

Why? Expand it: $v^\top C v = \frac{1}{T} \sum_t v^\top x_t x_t^\top v = \frac{1}{T} \sum_t (x_t^\top v)^2$. A sum of squares. Always nonnegative. The quantity $v^\top C v$ has a name: it is the variance of the data projected onto the direction $v$. So equation (9) says variance is never negative. Good.
These two properties together give the spectral theorem. For a symmetric matrix:
The eigenvectors are orthogonal. They point in perpendicular directions. And the eigenvalues are all real numbers (no complex eigenvalues, no oscillations).
For a positive semidefinite matrix, the eigenvalues are additionally nonnegative. The proof is one line: if $Cv = \lambda v$ with $v^\top v = 1$, then $v^\top C v = \lambda v^\top v = \lambda$, and since $C$ is PSD, $v^\top C v \ge 0$, so $\lambda \ge 0$. Zero eigenvalues mean the data do not move in that direction. Positive eigenvalues mean there is real variance there.
So the eigendecomposition of the covariance matrix simplifies to:

$$C = Q \Lambda Q^\top \tag{10}$$

where $Q$ is orthogonal (its columns are perpendicular, each of unit length) and $\Lambda$ is diagonal with nonnegative entries. Compare this to the general eigendecomposition from equation (5). For symmetric matrices, $Q^{-1} = Q^\top$. The inverse is the transpose. Free. No computation. This is the payoff of orthogonality we saw in the basis post.

The spectral theorem says: every real symmetric matrix can be diagonalized by an orthogonal change of basis, with real eigenvalues. Most of multivariate statistics rests on this single fact.
Let's find the eigenvalues of our covariance matrix from equation (8). Solving the characteristic equation $\det(C - \lambda I) = 0$: $(4 - \lambda)(9 - \lambda) - 9 = \lambda^2 - 13\lambda + 27 = 0$, which gives $\lambda = \frac{13 \pm \sqrt{61}}{2}$. The eigenvalues are $\lambda_1 \approx 10.41$ and $\lambda_2 \approx 2.59$.
Check: $\lambda_1 + \lambda_2 = 13$, which is the trace (total variance). The eigenvalues split the total variance into two perpendicular components. The first eigenvector direction captures 10.41 out of 13 units of variance. The second captures 2.59. Most of the action is along one direction.
That direction is the first principal component.
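In code, the symmetric case gets its own routine. A sketch with `np.linalg.eigh`, which exploits symmetry to return real eigenvalues in ascending order and orthonormal eigenvectors:

```python
import numpy as np

C = np.array([[4.0, 3.0],
              [3.0, 9.0]])

# eigh is the routine for symmetric matrices: real eigenvalues,
# ascending order, orthonormal eigenvectors as columns of Q.
vals, Q = np.linalg.eigh(C)
print(vals)          # approximately [2.59, 10.41]
print(vals.sum())    # the trace, 13: total variance
print(Q.T @ Q)       # the identity: eigenvectors are orthonormal
```

The column `Q[:, -1]` (largest eigenvalue) is the first principal component direction.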
How did we find those eigenvalues? We solved the characteristic equation $\det(A - \lambda I) = 0$, which gives a polynomial whose roots are the eigenvalues. For a 2-by-2 matrix this is a quadratic; for an $n \times n$ matrix it is degree $n$. Once you have the eigenvalues, the eigenvectors come from finding the null space of $A - \lambda I$ for each $\lambda$.

Nobody solves the characteristic equation for large matrices. Iterative methods (QR algorithm, Lanczos) find eigenvalues without forming the polynomial. But the characteristic equation is the right way to understand what eigenvalues are: roots of a polynomial defined by the matrix.
Polynomials can have complex roots. A rotation matrix like $\begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}$ has eigenvalues $\pm i$: no real direction is preserved, because a 90-degree rotation moves every direction. Complex eigenvalues come in conjugate pairs $r e^{\pm i\theta}$, where $r$ controls growth/decay and $\theta$ controls rotation rate. For neural dynamics, complex eigenvalues mean the system oscillates. The rotational dynamics in motor cortex [3] correspond to complex eigenvalues of the estimated transition matrix.
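The rotation example is easy to check numerically:

```python
import numpy as np

# The 90-degree rotation preserves no real direction; its eigenvalues
# are a purely imaginary conjugate pair of magnitude 1.
R = np.array([[0.0, -1.0],
              [1.0,  0.0]])
vals = np.linalg.eigvals(R)
print(vals)           # 0+1j and 0-1j
print(np.abs(vals))   # [1. 1.]: pure rotation, no growth or decay
```

Magnitude 1 is the signature of rotation without growth or decay, matching the $r e^{\pm i\theta}$ picture with $r = 1$.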
Matrix powers make this precise. From $A = V \Lambda V^{-1}$, raising $A$ to the $k$-th power gives $A^k = V \Lambda^k V^{-1}$. Each eigenvalue gets raised independently. A real eigenvalue with $|\lambda| < 1$ decays exponentially; with $|\lambda| > 1$, it explodes. A complex eigenvalue with $r < 1$ spirals inward; with $r = 1$, it orbits forever.
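A one-check sketch of $A^k = V \Lambda^k V^{-1}$, again on the assumed example matrix:

```python
import numpy as np

# Power the eigenvalues, not the matrix.
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])   # assumed example, eigenvalues 3 and 2
vals, V = np.linalg.eig(A)

A10 = V @ np.diag(vals ** 10) @ np.linalg.inv(V)
print(np.allclose(A10, np.linalg.matrix_power(A, 10)))  # True
```

Ten matrix multiplications collapse into two basis changes and a pair of scalar powers.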
One more tool the covariance matrix gives us. Take a unit vector $v$ and compute $v^\top C v$. Expanding with $C = \frac{1}{T} X^\top X$ gives $\frac{1}{T} \| X v \|^2$: the variance of the data projected onto direction $v$. This is a quadratic form. The covariance matrix encodes a variance field: point in any direction and read off the spread. The eigenvectors are where this field is extremal: the largest eigenvalue is the maximum variance in any direction.
What if you want all directions to have the same variance? Define $W = \Lambda^{-1/2} Q^\top$ and transform: $y_t = W x_t$. The new covariance is $W C W^\top = \Lambda^{-1/2} Q^\top (Q \Lambda Q^\top) Q \Lambda^{-1/2} = I$. This is whitening: rotate to the eigenvector basis, then scale each direction by one over its standard deviation. The result has unit variance in every direction and zero correlation between directions. CCA needs this preprocessing to work, and we will see why in that post.
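A whitening sketch on simulated data (the covariance used to generate the data is hypothetical; $W$ is built from the empirical covariance, so the whitened covariance comes out exactly the identity up to float error):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.multivariate_normal([0.0, 0.0], [[4.0, 3.0], [3.0, 9.0]], size=2000)
X -= X.mean(axis=0)                # center

C = X.T @ X / len(X)
vals, Q = np.linalg.eigh(C)
W = np.diag(vals ** -0.5) @ Q.T    # rotate to eigenbasis, divide by std dev

Y = X @ W.T                        # whiten every time bin
C_white = Y.T @ Y / len(Y)
print(np.round(C_white, 6))        # the identity matrix
```

Unit variance in every direction, zero correlation between directions: the variance field is now a circle, not an ellipse.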
What comes next
We now have all the pieces. A neural population's firing-rate data gives you a covariance matrix. That matrix is symmetric and positive semidefinite, so the spectral theorem applies: it can be diagonalized by an orthogonal matrix of eigenvectors, with nonnegative eigenvalues on the diagonal.
The eigenvectors are perpendicular directions. In the eigenvector basis, the covariance matrix is diagonal: each direction has its own variance, and there is no correlation between directions. The eigenvalues tell you the variance along each eigenvector direction: the largest eigenvalue picks out the direction of maximum spread, the second-largest picks out the direction of maximum spread perpendicular to the first, and so on.
And because the eigenvector basis is orthonormal, extracting coordinates is just dot products (equation (2) from Post 2). Reconstructing from a subset of coordinates is just the truncated sum (equation (4) from that same post). Keep the first $k$ eigenvectors — the ones with the largest eigenvalues — and you capture as much variance as possible in $k$ dimensions.
That is PCA. No new operation. It is the spectral theorem applied to the covariance matrix, combined with the truncated reconstruction formula from the basis post. Every concept in this series played a role: vectors as population states, independence as the number of real dimensions, dot products as coordinates, basis change as the core modeling decision, column space as what a matrix preserves, projection as the best approximation in a subspace, and eigenvectors as the directions that make the covariance matrix diagonal.
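The whole recipe fits in a few lines. A sketch on simulated three-neuron data (the covariance below is hypothetical, chosen for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.multivariate_normal([0.0, 0.0, 0.0],
                            [[9.0, 4.0, 2.0],
                             [4.0, 6.0, 1.0],
                             [2.0, 1.0, 3.0]], size=5000)
X -= X.mean(axis=0)              # center

C = X.T @ X / len(X)             # covariance
vals, Q = np.linalg.eigh(C)      # spectral theorem (ascending eigenvalues)
k = 2
Qk = Q[:, -k:]                   # keep the k largest-eigenvalue directions

scores = X @ Qk                  # coordinates: just dot products
X_hat = scores @ Qk.T            # truncated reconstruction

# Residual variance equals the sum of the discarded eigenvalues exactly.
resid = ((X - X_hat) ** 2).sum() / len(X)
print(resid, vals[:-k].sum())          # these two match
print(vals[-k:].sum() / vals.sum())    # fraction of variance captured
```

Covariance, spectral theorem, truncated reconstruction: every line is one of the pieces assembled above, and the residual-variance check is the "as much variance as possible" claim in numbers.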
One decomposition ties everything together. The singular value decomposition factorizes any matrix — square or rectangular, symmetric or not — into a rotation, a scaling, and another rotation. It turns out to be the computational backbone of PCA, CCA, least squares, and subspace identification. That is the next post.
References
- Strang, G. Introduction to Linear Algebra, 6th ed. Wellesley-Cambridge Press, 2023.
- 3Blue1Brown. "Essence of Linear Algebra" video series, 2016.
- Churchland, M. M., Cunningham, J. P., Kaufman, M. T., et al. "Neural population dynamics during reaching," Nature, vol. 487, pp. 51-56, 2012.
- Cunningham, J. P. and Yu, B. M. "Dimensionality reduction for large-scale neural recordings," Nature Neuroscience, vol. 17, pp. 1500-1509, 2014.
- Axler, S. Linear Algebra Done Right, 4th ed. Springer, 2024.
- Strang, G. "The fundamental theorem of linear algebra," The American Mathematical Monthly, vol. 100, no. 9, pp. 848-855, 1993.
- Safaie, M., Chang, J. C., Park, J., et al. "Preserved neural dynamics across animals performing similar behaviour," Nature, vol. 623, pp. 765-771, 2023.
- Yu, B. M., Cunningham, J. P., Santhanam, G., et al. "Gaussian-process factor analysis for low-dimensional single-trial analysis of neural population activity," Journal of Neurophysiology, vol. 102, no. 1, pp. 614-635, 2009.