image processing - PCA using SVD in OpenCV


I have a matrix M of dimension m x n. M contains n data samples, each of dimension m, where m is much larger than n.

Now the question is: what are the steps to compute the PCA of M using SVD in OpenCV, keeping only the eigenvectors that contain 99% of the total load or energy?

You first need to compute the covariance matrix C from the data matrix M. You can either use OpenCV's calcCovarMatrix function or compute C = (M - mu)' x (M - mu), assuming the data samples are stored as rows in M, mu is the mean of the data samples, and A' denotes the transpose of A.
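Here is a minimal sketch of that step, assuming the OpenCV 3.x/4.x C++ API and that M is an n x m cv::Mat with one sample per row (loaded elsewhere):

#include <opencv2/core.hpp>

cv::Mat M;   // n x m data matrix, one sample per row (assumed loaded elsewhere, CV_64F)
cv::Mat C;   // output: m x m covariance matrix
cv::Mat mu;  // output: 1 x m mean of the samples

// COVAR_NORMAL -> compute the "true" covariance (M - mu)' * (M - mu)
// COVAR_ROWS   -> each row of M is one sample
// COVAR_SCALE  -> divide by the number of samples
cv::calcCovarMatrix(M, C, mu,
                    cv::COVAR_NORMAL | cv::COVAR_ROWS | cv::COVAR_SCALE,
                    CV_64F);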

Next, perform SVD on C: U S U' = svd(C), where U' is U transposed. In this case the V' from the SVD is the same as U', because C is symmetric and positive definite (if C has full rank) or positive semidefinite (if it is rank deficient). U contains the eigenvectors of C.
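Continuing the sketch, cv::SVD::compute performs the decomposition in one call (the exact output layouts are worth double-checking in the docs for your version):

cv::Mat S;   // singular values in descending order (column vector)
cv::Mat U;   // left singular vectors, one per column: the eigenvectors of C
cv::Mat Vt;  // right singular vectors, transposed; equals U' here since C is symmetric

cv::SVD::compute(C, S, U, Vt);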

What you want is to keep k of the eigenvectors, i.e. k columns (or rows? check the OpenCV docs on whether it returns the eigenvectors as rows or columns) of U: those whose singular values in the matrix S are the k largest ones and whose sum divided by the sum of all singular values is >= 0.99. The singular values here correspond to the variances of the corresponding features in the feature vectors, so keeping the top k retains 0.99, i.e. 99% of the variance/energy.
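A minimal sketch of that selection, assuming the S and U from the call above (cv::SVD::compute returns the singular values already sorted in descending order):

// Pick the smallest k whose cumulative energy reaches 99% of the total.
double total = cv::sum(S)[0];
double accum = 0.0;
int k = 0;
while (k < S.rows && accum / total < 0.99) {
    accum += S.at<double>(k);
    ++k;
}

cv::Mat Uk = U.colRange(0, k);  // the first k eigenvectors (as columns): the PCA basis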

These eigenvectors, packed into a matrix Uk, are the PCA basis. Because these eigenvectors happen to be orthogonal to each other, the transpose of Uk, Uk', is the projection matrix. To get the dimension-reduced point of a new test sample x, compute x_reduced = Uk' * (x - mu);
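The projection step in the same sketch, plus the built-in shortcut: recent OpenCV versions provide a cv::PCA constructor that takes a retained-variance argument and does all of the above in one shot (check that your version has this overload before relying on it):

cv::Mat x;                                 // new test sample, m x 1 column vector, CV_64F (assumed given)
cv::Mat muCol = mu.t();                    // calcCovarMatrix returned mu as 1 x m, so transpose it
cv::Mat x_reduced = Uk.t() * (x - muCol);  // k x 1 dimension-reduced sample

// One-shot alternative: keep enough components to retain 99% of the variance.
cv::PCA pca(M, cv::Mat(), cv::PCA::DATA_AS_ROW, 0.99);
cv::Mat x_reduced2 = pca.project(x.t());   // project expects the sample laid out like the rows of M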

