Incremental and Decremental SVM Learning
While looking for more research papers to read, I went through my Hands-On Machine Learning notes and collected the many papers referenced there; they cover mainly machine learning and deep learning topics. This is one of those papers.
Reference Incremental and Decremental SVM Learning Paper
Introduction
An on-line recursive algorithm for training support vector machines, one vector at a time, is presented. Training a support vector machine (SVM) requires solving a quadratic programming (QP) problem in a number of coefficients equal to the number of training examples. For very large datasets, standard numeric techniques for QP become infeasible. Practical techniques decompose the problem into manageable subproblems over part of the data or, in the limit, perform iterative pairwise or component-wise optimization. The on-line alternative presented here instead formulates the exact solution for the training data plus one new point in terms of the solution for the existing data.
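To make the "one coefficient per training example" point concrete, here is a minimal sketch using scikit-learn's `SVC` (a batch QP solver, not the paper's incremental algorithm): the dual solution carries one coefficient per example, and only the support vectors keep a nonzero one. The toy data and all variable names are illustrative.

```python
import numpy as np
from sklearn.svm import SVC

# Toy two-class data: two Gaussian blobs (illustrative only).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1.0, 1.0, (20, 2)),
               rng.normal(1.0, 1.0, (20, 2))])
y = np.array([0] * 20 + [1] * 20)

# Batch training solves the dual QP over all 40 examples at once.
clf = SVC(kernel="linear", C=1.0).fit(X, y)

# The QP has one dual coefficient alpha_i per training example;
# in the solution, only the support vectors have alpha_i != 0.
n_sv = len(clf.support_)
print(f"training examples: {len(X)}, support vectors: {n_sv}")
print("stored dual coefficients:", clf.dual_coef_.shape)  # (1, n_sv)
```

Every additional training point enlarges this QP, which is what motivates an update rule that extends an existing exact solution by one point instead of re-solving from scratch.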
Incremental learning and, in particular, decremental unlearning offer a simple and computationally efficient scheme for on-line SVM training and exact leave-one-out evaluation of the generalization performance on the training data. The procedures can be directly extended to a broader class of kernel learning machines with convex quadratic cost functional under linear constraints, including SV regression.
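What decremental unlearning buys is exact leave-one-out (LOO) evaluation without full retraining. The sketch below computes the same quantity the slow way, by retraining an SVM with each point removed in turn; it uses scikit-learn rather than the paper's update rule, and the function name and data are hypothetical.

```python
import numpy as np
from sklearn.model_selection import LeaveOneOut
from sklearn.svm import SVC


def loo_error(X, y, C=1.0):
    """Naive leave-one-out error: retrain once per held-out point.

    Decremental unlearning computes the same LOO estimate by removing
    one point from an already-trained solution instead of refitting.
    """
    errors = 0
    for train_idx, test_idx in LeaveOneOut().split(X):
        clf = SVC(kernel="linear", C=C).fit(X[train_idx], y[train_idx])
        errors += int(clf.predict(X[test_idx])[0] != y[test_idx][0])
    return errors / len(X)


rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-1.5, 1.0, (8, 2)),
               rng.normal(1.5, 1.0, (8, 2))])
y = np.array([0] * 8 + [1] * 8)
err = loo_error(X, y)
print(f"leave-one-out error: {err:.3f}")
```

The naive version costs one full QP solve per training point; the decremental scheme reaches the same numbers at a fraction of that cost, which is what makes exact LOO practical as a model-selection criterion.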