Kernel Principal Component Analysis
Looking for more research papers to read, I scanned my Hands-On Machine Learning notes for the papers referenced there; this is one of them. These papers are mainly on machine learning and deep learning topics.
Reference: Kernel Principal Component Analysis Paper
Introduction
A new method for performing a nonlinear form of Principal Component Analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map; for instance, the space of all possible five-pixel products in 16×16 images. This paper gives the derivation of the method and presents experimental results on polynomial feature extraction for pattern recognition.
Assume for the moment that our data mapped into feature space, $\Phi(x_1), \ldots, \Phi(x_M)$, is centered, i.e. $\sum_{k=1}^{M} \Phi(x_k) = 0$. To do PCA for the covariance matrix

$$\bar{C} = \frac{1}{M} \sum_{j=1}^{M} \Phi(x_j)\Phi(x_j)^\top,$$
we have to find Eigenvalues $\lambda \geq 0$ and Eigenvectors $V \in F \setminus \{0\}$ satisfying $\lambda V = \bar{C} V$. All solutions $V$ lie in the span of $\Phi(x_1), \ldots, \Phi(x_M)$. This implies that we may consider the equivalent system

$$\lambda\,(\Phi(x_k) \cdot V) = (\Phi(x_k) \cdot \bar{C} V) \quad \text{for all } k = 1, \ldots, M,$$
and that there exist coefficients $\alpha_1, \ldots, \alpha_M$ such that

$$V = \sum_{i=1}^{M} \alpha_i\, \Phi(x_i).$$
Substituting and defining an $M \times M$ matrix $K$ by

$$K_{ij} := (\Phi(x_i) \cdot \Phi(x_j)),$$
we arrive at

$$M \lambda\, K \alpha = K^2 \alpha,$$
where $\alpha$ denotes the column vector with entries $\alpha_1, \ldots, \alpha_M$. To find solutions to the above, we solve the Eigenvalue problem

$$M \lambda\, \alpha = K \alpha$$
for nonzero Eigenvalues. We normalize the solutions $\alpha^k$ belonging to nonzero Eigenvalues by requiring that the corresponding vectors in $F$ be normalized, i.e. $(V^k \cdot V^k) = 1$. This translates to

$$\lambda_k\, (\alpha^k \cdot \alpha^k) = 1,$$

where $\lambda_k = M\lambda$ denotes the $k$-th nonzero eigenvalue of $K$.
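As a concrete illustration, here is a minimal NumPy sketch of this eigenproblem and normalization step. It assumes a precomputed kernel matrix `K` for data that is already centered in feature space, as in the derivation above; the function name `kpca_alphas` and the `n_components` parameter are my own, not from the paper.

```python
import numpy as np

def kpca_alphas(K, n_components):
    """Solve M*lambda*alpha = K*alpha and normalize the alphas (sketch)."""
    # eigh handles the symmetric matrix K; eigenvalues come back in
    # ascending order with unit-norm eigenvectors as columns.
    eigvals, eigvecs = np.linalg.eigh(K)
    # Keep the largest nonzero eigenvalues and their eigenvectors (the alphas).
    order = np.argsort(eigvals)[::-1][:n_components]
    lambdas_K = eigvals[order]          # eigenvalues of K, i.e. M * lambda
    alphas = eigvecs[:, order]
    # Enforce lambda_k * (alpha^k . alpha^k) = 1 so each V^k has unit length in F.
    alphas = alphas / np.sqrt(lambdas_K)
    return alphas, lambdas_K
```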
For principal component extraction, we compute projections of the image of a test point $\Phi(x)$ onto the Eigenvectors $V^k$ in $F$ according to

$$(V^k \cdot \Phi(x)) = \sum_{i=1}^{M} \alpha_i^k\, (\Phi(x_i) \cdot \Phi(x)).$$

Note that neither this projection nor the matrix $K$ requires $\Phi$ explicitly: both only involve dot products in $F$, which can be evaluated by a kernel function $k(x, y) = (\Phi(x) \cdot \Phi(y))$ on the inputs.
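Continuing the sketch above, projecting a new point onto the extracted components might look like the following; the helper name `kpca_project` and the `kernel` callable are assumptions for illustration, not the paper's notation.

```python
import numpy as np

def kpca_project(x, X, alphas, kernel):
    """Project a new point x onto the extracted nonlinear components (sketch)."""
    # (V^k . Phi(x)) = sum_i alpha_i^k * k(x_i, x) for every retained component k.
    k_vec = np.array([kernel(x_i, x) for x_i in X])   # shape (M,)
    return alphas.T @ k_vec                           # shape (n_components,)
```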
Kernels that have been successfully used in support vector machines include polynomial kernels, radial basis functions, and sigmoid kernels.
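For illustration, these three kernel families could be written as small functions like the ones below; the parameter values (degree, width, and sigmoid slope/offset) are arbitrary choices for the sketch, not values taken from the paper.

```python
import numpy as np

def polynomial_kernel(x, y, d=3):
    # k(x, y) = (x . y)^d
    return np.dot(x, y) ** d

def rbf_kernel(x, y, sigma=1.0):
    # k(x, y) = exp(-||x - y||^2 / (2 * sigma^2))
    return np.exp(-np.linalg.norm(x - y) ** 2 / (2.0 * sigma ** 2))

def sigmoid_kernel(x, y, kappa=1.0, theta=0.0):
    # k(x, y) = tanh(kappa * (x . y) + theta)
    return np.tanh(kappa * np.dot(x, y) + theta)
```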