Eigenvalues and Eigenvectors
(ns neandersolve.eigen
  (:require
   [uncomplicate.fluokitten.core :refer [fmap!]]
   [uncomplicate.neanderthal
    [core :refer [axpy! col copy dia entry entry! gd ge mm mrows mv nrm2
                  raw scal vctr view-ge view-tr]]
    [native :refer [dge]]
    [linalg :refer [ev! org qrf trf tri!]]
    [math :refer [abs pow]]]))
Linear transformations generally change both the magnitude and the direction of a vector. Certain special vectors, however, keep their direction under the transformation and are merely scaled by a factor. These vectors reveal intrinsic properties of the transformation that are crucial for understanding its behavior.
The Eigenvalue Equation
For a square matrix A, a nonzero vector v is an eigenvector with eigenvalue λ if

Av = λv

Applying A to such a vector then only scales it by the factor λ, leaving its direction unchanged.
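To make the equation concrete, here is a minimal plain-Python check (independent of Neanderthal), using a symmetric 2×2 matrix whose eigenpairs are known in closed form:

```python
# Plain-Python check of the eigenvalue equation A v = lambda v.
# The symmetric matrix [[2, 1], [1, 2]] has closed-form eigenpairs:
# lambda = 3 with v = (1, 1), and lambda = 1 with v = (1, -1).
A = [[2.0, 1.0], [1.0, 2.0]]

def mat_vec(A, v):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(a * x for a, x in zip(row, v)) for row in A]

for lam, v in [(3.0, [1.0, 1.0]), (1.0, [1.0, -1.0])]:
    Av = mat_vec(A, v)
    residual = max(abs(a - lam * x) for a, x in zip(Av, v))
    print(lam, residual)  # residual is exactly 0 for these integer eigenpairs
```

Both residuals are exactly zero here because the eigenpairs are integral; for computed eigenpairs we will instead check that the residual norm is small.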
Power Iteration Method
The power iteration method provides a simple way to find the dominant eigenvalue and its corresponding eigenvector. Starting with a random vector x_0, we repeatedly multiply by A and normalize: x_{k+1} = A x_k / ||A x_k||.
This process converges to the eigenvector corresponding to the largest (in magnitude) eigenvalue. The convergence rate depends on the ratio between the largest and second-largest eigenvalues.
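The loop itself is only a few lines. As a language-neutral sketch (plain Python here, independent of the Neanderthal version that follows), applied to the 3×3 test matrix used throughout this section:

```python
import math

def mat_vec(A, x):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(a * v for a, v in zip(row, x)) for row in A]

def power_iteration(A, tol=1e-12, max_iter=200):
    """Repeatedly apply A and normalize; the norm estimates the dominant
    eigenvalue, and the normalized vector estimates its eigenvector."""
    x = [1.0] * len(A)
    lam_prev = 0.0
    for _ in range(max_iter):
        y = mat_vec(A, x)
        lam = math.sqrt(sum(v * v for v in y))  # ||A x||
        x = [v / lam for v in y]
        if abs(lam - lam_prev) < tol:
            break
        lam_prev = lam
    return lam, x

A = [[4.0, 2.0, 3.0], [2.0, 5.0, 1.0], [3.0, 1.0, 6.0]]
lam, v = power_iteration(A)
print(lam)  # converges to ~9.1438954...
print(v)    # ~[0.57, 0.44, 0.69]
```

The successive eigenvalue estimates stabilize quickly here because the dominant eigenvalue (about 9.14) is roughly twice the second-largest (about 4.38).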
(defn power-iteration-step
  "Performs a single step of the power iteration method.
   Returns [new-lambda new-vector]"
  [A x]
  (let [y (mv A x)
        lambda (nrm2 y)
        x-next (scal (/ 1.0 lambda) y)]
    [lambda x-next]))
(defn power-iteration
  "Implements the power method for finding the dominant eigenvalue and eigenvector.
   Returns [eigenvalue eigenvector] after convergence or max iterations."
  ([A]
   (power-iteration A 1e-10 100))
  ([A tol max-iter]
   (let [n (mrows A)
         x0 (entry! (vctr A n) 1.0)]
     (loop [x x0
            iter 0
            lambda-prev 0.0]
       (let [[lambda x-next] (power-iteration-step A x)]
         (if (or (< (abs (- lambda lambda-prev)) tol)
                 (>= iter max-iter))
           [lambda x-next]
           (recur x-next (inc iter) lambda)))))))
Consider a simple example matrix:
(def test-matrix (dge 3 3 [4 2 3
                           2 5 1
                           3 1 6]))
The power iteration method finds its dominant eigenvalue and vector:
(power-iteration test-matrix)
[9.143895446103055
 #RealBlockVector[double, n:3, stride:1]
 [   0.57    0.44    0.69 ]
]
QR Algorithm
While power iteration finds the dominant eigenvalue, the QR algorithm can compute all eigenvalues simultaneously. The method is based on the QR decomposition, which factors a matrix A into a product A = QR of an orthogonal matrix Q and an upper-triangular matrix R.
The algorithm proceeds iteratively:
- Start with A_0 = A
- At step k, compute the factorization A_k = Q_k R_k
- Form A_{k+1} = R_k Q_k
Because A_{k+1} = Q_k^T A_k Q_k, every iterate is similar to A and therefore has the same eigenvalues. As k grows, A_k converges to an upper-triangular matrix whose diagonal entries are the eigenvalues.
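The same recipe can be sketched in plain Python with a hand-rolled Gram-Schmidt QR factorization (illustrative only; the Neanderthal implementation below relies on the optimized `qrf`/`org` routines instead):

```python
import math

def qr_decompose(A):
    """QR factorization by modified Gram-Schmidt on the columns of A.
    Returns (Q, R) with orthonormal columns in Q and upper-triangular R."""
    n = len(A)
    V = [[A[i][j] for i in range(n)] for j in range(n)]  # columns of A
    Q = []                                               # orthonormal columns
    R = [[0.0] * n for _ in range(n)]
    for j in range(n):
        v = V[j]
        for i in range(j):
            R[i][j] = sum(q * x for q, x in zip(Q[i], v))
            v = [x - R[i][j] * q for x, q in zip(v, Q[i])]
        R[j][j] = math.sqrt(sum(x * x for x in v))
        Q.append([x / R[j][j] for x in v])
    Qm = [[Q[j][i] for j in range(n)] for i in range(n)]  # Q as a row matrix
    return Qm, R

def mat_mul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def qr_algorithm(A, iters=150):
    """Unshifted QR iteration: A_{k+1} = R_k Q_k."""
    Ak = [row[:] for row in A]
    for _ in range(iters):
        Q, R = qr_decompose(Ak)
        Ak = mat_mul(R, Q)
    return [Ak[i][i] for i in range(len(A))]

A = [[4.0, 2.0, 3.0], [2.0, 5.0, 1.0], [3.0, 1.0, 6.0]]
d = qr_algorithm(A)
print(d)  # diagonal converges toward ~[9.14, 4.38, 1.47]
```

For this symmetric test matrix the diagonal settles onto the eigenvalues in descending order; the returned values are roots of the characteristic polynomial λ³ - 15λ² + 60λ - 59.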
(defn qr-step
  "Performs a single step of QR iteration.
   Returns the next matrix in the sequence."
  [A]
  (let [fact (qrf A)                            ; QR factorization of A
        Q (org fact)                            ; orthogonal factor Q
        ;; R is stored in the :or field of the factorization
        R (view-tr (:or fact) {:uplo :upper})]  ; upper-triangular factor R
    (mm R Q)))
(defn qr-algorithm
  "Implements the QR algorithm for computing all eigenvalues.
   Returns a vector of eigenvalues after convergence."
  ([A]
   (qr-algorithm A 1e-10 100))
  ([A tol max-iter]
   (let [A0 (copy A)]
     (loop [Ak A0
            k 0]
       (let [Ak+1 (qr-step Ak)
             diff (nrm2 (axpy! -1 (dia Ak+1) (dia Ak)))]
         (if (or (< diff tol) (>= k max-iter))
           (dia Ak+1)
           (recur Ak+1 (inc k))))))))
Let’s examine the convergence on our test matrix:
(qr-algorithm test-matrix)
#RealBlockVector[double, n:3, stride:4]
[   9.14    4.38    1.47 ]
Inverse Iteration
Once we have eigenvalues, we can find their corresponding eigenvectors using inverse iteration. This method is based on the observation that if λ is an eigenvalue of A, then (A - μI)^-1 has the eigenvalue 1/(λ - μ), which is largest in magnitude for the eigenvalue λ closest to the shift μ.
The algorithm applies power iteration to (A - μI)^-1:
- Start with a random vector x_0
- Solve (A - μI) y = x_k for y
- Set x_{k+1} = y / ||y||
This process converges to the eigenvector corresponding to the eigenvalue closest to μ; choosing μ to be a computed eigenvalue therefore recovers its eigenvector.
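These three steps can be sketched in plain Python, with a naive Gaussian-elimination solver standing in for Neanderthal's `trf`/`tri!` machinery, shifting near the middle eigenvalue (about 4.38) of this section's test matrix:

```python
import math

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [bv] for row, bv in zip(A, b)]  # augmented matrix
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))  # pivot row
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):  # back substitution
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def inverse_iteration(A, mu, iters=50):
    """Inverse iteration with shift mu: converges (up to sign) to the
    eigenvector whose eigenvalue is closest to mu."""
    n = len(A)
    B = [[A[i][j] - (mu if i == j else 0.0) for j in range(n)]  # A - mu*I
         for i in range(n)]
    x = [1.0] * n
    for _ in range(iters):
        y = solve(B, x)                       # y = (A - mu*I)^-1 x
        norm = math.sqrt(sum(v * v for v in y))
        x = [v / norm for v in y]             # normalize
    return x

A = [[4.0, 2.0, 3.0], [2.0, 5.0, 1.0], [3.0, 1.0, 6.0]]
v = inverse_iteration(A, 4.4)            # shift near the middle eigenvalue
Av = [sum(a * s for a, s in zip(row, v)) for row in A]
lam = sum(a * s for a, s in zip(Av, v))  # Rayleigh quotient of the result
print(lam)  # ~4.3845
```

Because the shift 4.4 sits much closer to 4.38 than to either other eigenvalue, convergence is extremely fast; the Rayleigh quotient of the converged vector recovers the eigenvalue.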
(defn inverse-iteration
  "Implements inverse iteration for finding eigenvector given eigenvalue.
   Returns the corresponding eigenvector after convergence."
  ([A lambda]
   (inverse-iteration A lambda 1e-10 100))
  ([A lambda tol max-iter]
   (let [n (mrows A)
         I (dge n n)
         _ (dotimes [i n] (entry! I i i 1.0))
         scaled-I (scal (- lambda) I)             ; -λI
         A-lambda-I (axpy! 1.0 (copy A) scaled-I) ; A - λI
         x0 (entry! (vctr A n) 1.0)]
     (loop [x x0
            iter 0]
       (let [y (copy x)
             ;; Create a fresh LU factorization for each solve
             LU (trf (copy A-lambda-I))
             ;; Solve using tri! on the fresh LU factorization
             y-next (mv (tri! LU) y)
             y-norm (nrm2 y-next)
             x-next (scal (/ 1.0 y-norm) y-next)]
         (if (or (< (nrm2 (axpy! -1 x-next x)) tol)
                 (>= iter max-iter))
           x-next
           (recur x-next (inc iter))))))))
Using our test matrix and an eigenvalue from QR algorithm:
(def lambda1 (entry (qr-algorithm test-matrix) 0))

(def v1 (inverse-iteration test-matrix lambda1))
We can verify this is indeed an eigenpair:
(let [Av (mv test-matrix v1)
      lambdav (scal lambda1 v1)]
  (nrm2 (axpy! -1 Av lambdav)))
6.670682632020425E-12
The small residual norm confirms that (lambda1, v1) is indeed an eigenpair of the test matrix.
Complete Eigendecomposition
While the previous methods find individual eigenvalues and eigenvectors, many applications require the complete eigendecomposition. A diagonalizable matrix A can be factored as

A = V Λ V^-1

where the columns of V are the eigenvectors of A and Λ is a diagonal matrix holding the corresponding eigenvalues.
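As a quick numeric sanity check of the factorization A = V Λ V^-1, here is a plain-Python reconstruction using a symmetric 2×2 matrix whose orthonormal eigenvectors are known exactly (for a symmetric matrix, V^-1 = V^T):

```python
import math

s = 1 / math.sqrt(2.0)
V = [[s, s], [s, -s]]   # orthonormal eigenvectors of [[2, 1], [1, 2]] as columns
L = [3.0, 1.0]          # corresponding eigenvalues

def mat_mul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

Lambda = [[L[0], 0.0], [0.0, L[1]]]
Vt = [[V[j][i] for j in range(2)] for i in range(2)]  # V^-1 = V^T here
A_rebuilt = mat_mul(V, mat_mul(Lambda, Vt))
print(A_rebuilt)  # ~[[2, 1], [1, 2]]
```

Multiplying the three factors back together recovers the original matrix up to floating-point rounding.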
The implementation handles both symmetric and general matrices efficiently:
- For symmetric matrices, we use specialized algorithms that preserve symmetry
- For general matrices, we handle potential complex eigenvalues
- In both cases, eigenvalues are sorted by magnitude for consistency
(defn eigendecomposition
  "Computes complete eigendecomposition of a matrix.
   Returns [eigenvalues eigenvectors] as matrices with eigenvalues sorted in descending order."
  [A]
  (let [n (mrows A)
        symmetric? (instance? uncomplicate.neanderthal.internal.cpp.structures.RealUploMatrix A)
        ;; Compute eigenvalues and eigenvectors based on the matrix type
        [eigenvals eigenvecs]
        (if symmetric?
          ;; For symmetric matrices, use the matrix directly
          (let [d (gd A n)          ;; diagonal matrix for the eigenvalues
                evecs (ge A n n)]   ;; matrix for the eigenvectors
            (ev! A (view-ge (dia d)) nil evecs)
            ;; extract the diagonal values directly into a vector
            [(fmap! identity (dia d)) evecs])
          ;; For general matrices, eigenvalues may have imaginary parts
          (let [evals (raw (ge A n 2))
                evecs (ge A n n)]
            (ev! (copy A) evals nil evecs)
            ;; extract the first column (real parts) directly
            [(fmap! identity (col evals 0)) evecs]))
        ;; Result containers for the sorted values
        sorted-vals (vctr A n)
        sorted-vecs (ge A n n)
        ;; Indices sorted by absolute eigenvalue magnitude
        perm (vec (sort-by #(- (abs (entry eigenvals %))) (range n)))]
    ;; Copy values in sorted order
    (dotimes [i n]
      (let [src-idx (perm i)]
        (entry! sorted-vals i (entry eigenvals src-idx))
        ;; axpy! performs an efficient column copy
        (axpy! 1.0 (col eigenvecs src-idx) (col sorted-vecs i))))
    [sorted-vals sorted-vecs]))
(eigendecomposition test-matrix)
[#RealBlockVector[double, n:3, stride:1]
[   9.14    4.38    1.47 ]
 #RealGEMatrix[double, mxn:3x3, layout:column]
   ▥       ↓       ↓       ↓       ┓
   →       0.57   -0.02    0.82
   →       0.44   -0.83   -0.33
   →       0.69    0.55   -0.47
   ┗                               ┛
]
(defn eigendecomposition!
  "In-place version of eigendecomposition that modifies the input matrices.
   Requires pre-allocated eigenvals and eigenvecs matrices of correct dimensions."
  [A eigenvals eigenvecs]
  (ev! (copy A) eigenvals nil eigenvecs))
Verification and Testing
To ensure the correctness of our eigendecomposition, we verify that each computed pair (λ, v) satisfies the eigenvalue equation Av = λv.
The verification process checks:
- Eigenvalue equation: Av = λv
- Vector normalization: ||v|| = 1
- For symmetric matrices, orthogonality: v_i · v_j = 0 for i ≠ j
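For the symmetric case, the orthogonality check is just a dot product. A minimal plain-Python illustration, using the closed-form eigenvectors of [[2, 1], [1, 2]]:

```python
import math

s = 1 / math.sqrt(2.0)
v1, v2 = [s, s], [s, -s]  # eigenvectors of the symmetric matrix [[2, 1], [1, 2]]

dot = sum(a * b for a, b in zip(v1, v2))                   # orthogonality
norms = [math.sqrt(sum(a * a for a in v)) for v in (v1, v2)]  # normalization
print(dot, norms)  # ~0 and ~[1.0, 1.0]: the pair is orthonormal
```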
(defn is-eigenpair?
  "Verifies if (lambda, v) is an eigenpair of matrix A within tolerance."
  ([A lambda v]
   (is-eigenpair? A lambda v 1e-8))
  ([A lambda v tol]
   (let [Av (mv A v)
         lambdav (scal lambda v)]
     (< (nrm2 (axpy! -1 Av lambdav)) tol))))
(require '[neandersolve.descriptive :refer
           [center! standardize! min-max! feature-scale!]])
(defn test-eigenpairs
  "Tests and prints the eigenpairs of a given matrix.
   Returns [eigenvals eigenvecs] from its eigendecomposition."
  [A]
  (let [[eigenvals eigenvecs] (eigendecomposition A)
        n (mrows A)]
    (println "\nTesting eigenpairs:")
    (loop [i 0]
      (when (< i n)
        (let [lambda (entry eigenvals i)
              v (col eigenvecs i)
              Av (mv A v)
              lambdav (scal lambda v)
              diff (nrm2 (axpy! -1 Av lambdav))]
          (println "Eigenvalue" i ":" lambda)
          (println "Eigenvector" i ":" (seq v))
          (println "Difference |Av - λv|:" diff)
          (println "Is eigenpair?" (is-eigenpair? A lambda v))
          (println)
          (recur (inc i)))))
    [eigenvals eigenvecs]))
We can test our implementation with different matrix preprocessing:
(test-eigenpairs test-matrix)
[#RealBlockVector[double, n:3, stride:1]
[   9.14    4.38    1.47 ]
 #RealGEMatrix[double, mxn:3x3, layout:column]
   ▥       ↓       ↓       ↓       ┓
   →       0.57   -0.02    0.82
   →       0.44   -0.83   -0.33
   →       0.69    0.55   -0.47
   ┗                               ┛
]
Original matrix
(test-eigenpairs (standardize! (copy test-matrix)))
[#RealBlockVector[double, n:3, stride:1]
[   2.1     1.1     2.46E-17 ]
 #RealGEMatrix[double, mxn:3x3, layout:column]
   ▥       ↓       ↓       ↓       ┓
   →       0.16    0.68    0.32
   →       0.77    0.05    0.76
   →      -0.62   -0.73    0.57
   ┗                               ┛
]
Standardized
(test-eigenpairs (center! (copy test-matrix)))
[#RealBlockVector[double, n:3, stride:1]
[   4.53    1.47   -0.00 ]
 #RealGEMatrix[double, mxn:3x3, layout:column]
   ▥       ↓       ↓       ↓       ┓
   →       0.08   -0.81    0.60
   →       0.74    0.34    0.68
   →      -0.67    0.47    0.43
   ┗                               ┛
]
Centered
(test-eigenpairs (min-max! (copy test-matrix)))
[#RealBlockVector[double, n:3, stride:1]
[   1.45    1.00    0.55 ]
 #RealGEMatrix[double, mxn:3x3, layout:column]
   ▥       ↓       ↓       ↓       ┓
   →       0.67    0.00   -0.67
   →      -0.00    0.85    0.00
   →      -0.75   -0.53    0.75
   ┗                               ┛
]
Min-max scaled
(test-eigenpairs (feature-scale! (copy test-matrix)))
[#RealBlockVector[double, n:3, stride:1]
[   2.15    1.13   -0.29 ]
 #RealGEMatrix[double, mxn:3x3, layout:column]
   ▥       ↓       ↓       ↓       ┓
   →       0.21   -0.70    0.37
   →       0.74   -0.10    0.73
   →      -0.64    0.71    0.57
   ┗                               ┛
]
Feature scaled
Matrix Powers and Similarity
One powerful application of eigendecomposition is the efficient computation of matrix powers. For a diagonalizable matrix A = V Λ V^-1, powers reduce to A^k = V Λ^k V^-1, so only the diagonal entries of Λ need to be raised to the k-th power.
This allows us to compute high powers of matrices without repeated multiplication, which is particularly useful in:
- Markov chains: Computing steady-state distributions
- Dynamical systems: Analyzing long-term behavior
- Network analysis: Computing multi-step connections
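Sketched in plain Python for the same 2×2 matrix that appears in the comment block below (column-major [-4 3 -6 5], i.e. rows [[-4, -6], [3, 5]], with exact eigenvalues 2 and -1):

```python
def mat_mul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# A = [[-4, -6], [3, 5]] has eigenvalues 2 and -1 with eigenvectors
# (1, -1) and (2, -1), so A^k = V diag(2^k, (-1)^k) V^-1.
A = [[-4.0, -6.0], [3.0, 5.0]]
V = [[1.0, 2.0], [-1.0, -1.0]]
V_inv = [[-1.0, -2.0], [1.0, 1.0]]  # det(V) = 1, so the inverse is exact

k = 9
Dk = [[2.0 ** k, 0.0], [0.0, (-1.0) ** k]]  # only the diagonal is powered
via_eigen = mat_mul(V, mat_mul(Dk, V_inv))

direct = A
for _ in range(k - 1):  # repeated multiplication, for comparison
    direct = mat_mul(direct, A)

print(via_eigen)  # [[-514.0, -1026.0], [513.0, 1025.0]]
print(direct)     # identical result, but k - 1 full matrix products
```

The eigendecomposition route costs two matrix products regardless of k, while the direct route costs k - 1 of them.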
The implementation demonstrates this with a simple example:
(comment
  (let [A (dge 2 2 [-4 3 -6 5])
        evec (dge 2 2)
        eval (ev! (copy A) (raw A) nil evec)
        inv-evec (tri! (trf evec))
        d (mm inv-evec (mm A evec))]   ; d = V^-1 A V = Λ
    (fmap! (pow 9) (dia d))))          ; raise the eigenvalues to the 9th power
source: src/neandersolve/eigen.clj