Department of Computer Science and Information Engineering, National Taiwan University (國立臺灣大學資訊工程學系)
Author: Lin, Kuan-Ming
Year: 2002
Record dates: 2006-09-27, 2018-07-05
Handle: http://ntur.lib.ntu.edu.tw//handle/246246/2006092712285158514

Abstract:
Recently, two kinds of reduction techniques aimed at saving training time for SVM problems with nonlinear kernels were proposed. Instead of solving the standard SVM formulation, these methods explicitly alter the formulation, and the solutions of the altered problems are used to classify data. The first approach, the reduced support vector machine (RSVM) [21], preselects a subset of the data as support vectors and solves a smaller optimization problem. The second approach [11] uses incomplete Cholesky factorization (ICF) to obtain a low-rank approximation of the kernel matrix, yielding an easier optimization problem. We find that several issues concerning their practical use have not been fully discussed. For example, it is not known whether they possess generalization ability comparable to that of the standard SVM. In addition, we would like to see for how large a problem they outperform the standard SVM in training time. In this thesis we show that the formulation of each technique is already in the form of a linear SVM, and we discuss several suitable implementations. Experiments indicate that, in general, the test accuracy of both techniques is slightly lower than that of the standard SVM. Moreover, for problems with up to tens of thousands of data points, if the percentage of support vectors is not high, existing SVM implementations are quite competitive in training time. Thus, the two techniques are mainly useful either for larger problems or for those with many support vectors.
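The ICF idea above can be illustrated with a short sketch. The following Python/NumPy function (the name `icf` and the greedy largest-diagonal pivot rule are illustrative assumptions, not necessarily the exact algorithm of [11]) computes a factor G with K ≈ G Gᵀ; training an SVM with the approximated kernel then amounts to a linear SVM on the rows of G, which is the reduction to linear form the abstract describes.

```python
import numpy as np

def icf(K, rank):
    """Greedy pivoted incomplete Cholesky factorization (illustrative sketch).

    Returns G of shape (n, rank) with K ~= G @ G.T, where K is a
    symmetric positive semidefinite kernel matrix.
    """
    n = K.shape[0]
    G = np.zeros((n, rank))
    d = np.diag(K).astype(float).copy()   # residual diagonal of K - G @ G.T
    for j in range(rank):
        i = int(np.argmax(d))             # pivot: largest residual diagonal
        if d[i] <= 0.0:                   # numerically rank-deficient: stop early
            return G[:, :j]
        # j-th column of the factor, taken from the residual K - G @ G.T
        G[:, j] = (K[:, i] - G @ G[i, :]) / np.sqrt(d[i])
        d -= G[:, j] ** 2
    return G

# Demo: a rank-3 PSD matrix is recovered almost exactly with rank=3.
X = np.random.default_rng(0).standard_normal((20, 3))
K = X @ X.T
G = icf(K, 3)
print(np.allclose(G @ G.T, K))
```

Since only columns `K[:, i]` of the kernel matrix are touched, such a factorization never needs the full n-by-n kernel in memory, which is what makes the low-rank route attractive for large problems.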
Experiments in this thesis also serve as comparisons of (1) different implementations for linear SVM; (2) the standard SVM using linear and quadratic cost functions; and (3) two ICF algorithms for positive definite dense matrices.

Title: Reduction techniques for training support vector machines
Language: Chinese (Traditional)
Full text: http://ntur.lib.ntu.edu.tw/bitstream/246246/2006092712285158514/1/ntuthesis.pdf