Sequential Minimal Optimization: A Fast Algorithm for Training Support Vector Machines
While looking for more research papers to read, I went through my Hands-On Machine Learning notes and collected the papers referenced there. This is one of them; these papers are mainly on machine learning and deep learning topics.
Reference: Sequential Minimal Optimization: A Fast Algorithm for Training Support Vector Machines (Paper)
Introduction
This paper proposes a new algorithm for training support vector machines: Sequential Minimal Optimization, or SMO. Training a support vector machine requires the solution of a very large quadratic programming (QP) optimization problem. SMO breaks this large QP problem into a series of the smallest possible QP problems, each of which is solved analytically, avoiding a time-consuming numerical QP optimization as an inner loop. The amount of memory required for SMO is linear in the training set size, which allows SMO to handle very large training sets. Because matrix computation is avoided, SMO scales somewhere between linear and quadratic in the training set size for various test problems, while the standard chunking SVM algorithm scales somewhere between linear and cubic in the training set size.
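To make the "smallest possible QP" idea concrete, here is a minimal Python sketch of the analytic step at SMO's core: jointly optimizing one pair of Lagrange multipliers while holding the rest fixed. The function name `smo_pair_step`, the fixed pair `(i, j)`, the linear kernel, and the default `C` and `tol` values are my own simplifications for illustration; the paper's full algorithm adds heuristics for choosing which pair to optimize next, plus handling of the degenerate curvature case.

```python
import numpy as np

def smo_pair_step(alphas, b, X, y, i, j, C=1.0, tol=1e-5):
    """One analytic SMO update over the multiplier pair (i, j).

    Simplified sketch: linear kernel only, no pair-selection heuristics.
    Labels y are in {-1, +1}; the SVM output is u = w.x - b.
    """
    if i == j:
        return alphas, b, False

    # Kernel entries (linear kernel: plain dot products).
    Kii, Kjj, Kij = X[i] @ X[i], X[j] @ X[j], X[i] @ X[j]

    # Errors E_k = u_k - y_k, where u_k = sum_m alpha_m y_m K(x_m, x_k) - b.
    E_i = (alphas * y) @ (X @ X[i]) - b - y[i]
    E_j = (alphas * y) @ (X @ X[j]) - b - y[j]

    # The equality constraint confines the pair to a line segment [L, H].
    if y[i] != y[j]:
        L, H = max(0.0, alphas[j] - alphas[i]), min(C, C + alphas[j] - alphas[i])
    else:
        L, H = max(0.0, alphas[i] + alphas[j] - C), min(C, alphas[i] + alphas[j])
    if L >= H:
        return alphas, b, False

    # Curvature of the objective along that segment.
    eta = Kii + Kjj - 2.0 * Kij
    if eta <= 0:  # degenerate case; the paper treats it separately
        return alphas, b, False

    # Analytic unconstrained optimum for alpha_j, clipped to the segment.
    a_j = float(np.clip(alphas[j] + y[j] * (E_i - E_j) / eta, L, H))
    if abs(a_j - alphas[j]) < tol:
        return alphas, b, False
    # alpha_i moves in the opposite direction to keep sum_k alpha_k y_k = 0.
    a_i = alphas[i] + y[i] * y[j] * (alphas[j] - a_j)

    # Recompute the threshold so the KKT conditions hold at the new point.
    b_i = E_i + y[i] * (a_i - alphas[i]) * Kii + y[j] * (a_j - alphas[j]) * Kij + b
    b_j = E_j + y[i] * (a_i - alphas[i]) * Kij + y[j] * (a_j - alphas[j]) * Kjj + b
    if 0 < a_i < C:
        b = b_i
    elif 0 < a_j < C:
        b = b_j
    else:
        b = (b_i + b_j) / 2.0

    alphas = alphas.copy()
    alphas[i], alphas[j] = a_i, a_j
    return alphas, b, True
```

A full trainer would wrap this step in the paper's two nested loops of heuristics that pick which pair (i, j) to optimize next, sweeping until every multiplier satisfies the KKT conditions.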
SVMs have empirically been shown to give good generalization performance on a wide variety of problems such as handwritten character recognition, face detection, pedestrian detection, and text categorization. Vladimir Vapnik invented support vector machines in 1979. In its simplest, linear form, an SVM is a hyperplane that separates a set of positive examples from a set of negative examples with maximum margin. In the linear case, the margin is defined by the distance of the hyperplane to the nearest of the positive and negative examples.
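Written out (in the paper's notation, where u is the SVM output and b the threshold), the linear SVM and its maximum-margin training problem are:

```latex
% Output of a linear SVM on input x; u = 0 is the separating hyperplane:
u = \vec{w} \cdot \vec{x} - b

% The margin is m = 1 / \lVert \vec{w} \rVert_2, so maximizing the margin
% amounts to minimizing \lVert \vec{w} \rVert subject to separation:
\min_{\vec{w},\, b} \ \tfrac{1}{2} \lVert \vec{w} \rVert^2
\quad \text{s.t.} \quad y_i \left( \vec{w} \cdot \vec{x}_i - b \right) \ge 1, \ \forall i
```

It is the dual of this constrained minimization, expressed over one Lagrange multiplier per training example, that becomes the large QP problem SMO decomposes.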