Reduction techniques for training support vector machines
Date Issued
2002
Author(s)
Lin, Kuan-Ming
Identifier
2006092712285158514
Abstract
Recently, two reduction techniques aimed at saving training time for SVM problems with nonlinear kernels were proposed. Instead of solving the standard SVM formulation, these methods explicitly alter the SVM formulation, and the solutions of the altered problems are used to classify data. The first approach, the reduced support vector machine (RSVM) [21], preselects a subset of the data as support vectors and solves a smaller optimization problem. The second approach [11] uses incomplete Cholesky factorization (ICF) to obtain a low-rank approximation of the kernel matrix, which yields an easier optimization problem. We find that several issues regarding their practical use have not been fully discussed. For example, it is not known whether they possess generalization ability comparable to that of the standard SVM. In addition, we would like to know how large a problem must be before they outperform the standard SVM in training time. In this thesis we show that the formulation of each technique is already in the form of a linear SVM, and we discuss several suitable implementations. Experiments indicate that, in general, the test accuracy of both techniques is slightly lower than that of the standard SVM. In addition, for problems with up to tens of thousands of data points, if the percentage of support vectors is not high, existing SVM implementations are quite competitive in training time. Thus, the two techniques are mainly useful for either larger problems or problems with many support vectors. The experiments in this thesis also serve as comparisons of (1) different implementations of linear SVM; (2) the standard SVM with linear and quadratic cost functions; and (3) two ICF algorithms for positive definite dense matrices.
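For readers unfamiliar with ICF, the following is a minimal NumPy sketch of a pivoted incomplete Cholesky factorization of a kernel matrix, in the spirit of the low-rank approach of [11]. The function name, pivoting rule, and tolerance parameter here are illustrative assumptions, not the thesis's implementation.

```python
import numpy as np

def incomplete_cholesky(K, rank, tol=1e-8):
    """Pivoted incomplete Cholesky sketch (illustrative, not the thesis's code):
    returns G of shape (n, rank) with K ~= G @ G.T for a PSD kernel matrix K."""
    n = K.shape[0]
    G = np.zeros((n, rank))
    d = np.diag(K).astype(float)              # residual diagonal of K - G G^T
    for j in range(rank):
        i = int(np.argmax(d))                 # pivot on largest residual diagonal
        if d[i] <= tol:                       # residual numerically zero: stop early
            return G[:, :j]
        # new column: K[:, i] minus the contribution of columns already built
        G[:, j] = (K[:, i] - G[:, :j] @ G[i, :j]) / np.sqrt(d[i])
        d -= G[:, j] ** 2                     # update residual diagonal
    return G

# Toy usage: rank-20 approximation of an RBF kernel matrix
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 10))
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-0.5 * sq)                         # RBF kernel matrix
G = incomplete_cholesky(K, rank=20)
print(np.linalg.norm(K - G @ G.T))            # approximation error
```

Since K ≈ GGᵀ, the optimization can work with the n × r factor G rather than the full n × n kernel matrix, which is what makes the resulting problem easier; this also connects to the thesis's observation that the altered formulation takes the form of a linear SVM over the rows of G.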
Publisher
Taipei: Department of Computer Science and Information Engineering, National Taiwan University
Type
other
File(s)
Name
ntuthesis.pdf
Size
278.1 KB
Format
Adobe PDF
Checksum
(MD5):668b8b4aba60e7eea783fb72dc902d70
