
Libsvm is a simple, easy-to-use, and efficient software for SVM
classification and regression. It solves C-SVM classification, nu-SVM
classification, one-class-SVM, epsilon-SVM regression, and nu-SVM
regression. It also provides an automatic model selection tool for
C-SVM classification. This document explains the use of libsvm.

Libsvm is available at
http://www.csie.ntu.edu.tw/~cjlin/libsvm
Please read the COPYRIGHT file before using libsvm.
Table of Contents
=================

- Quick Start
- Installation and Data Format
- `svm-train' Usage
- `svm-predict' Usage
- `svm-scale' Usage
- Tips on Practical Use
- Examples
- Precomputed Kernels
- Library Usage
- Java Version
- Building Windows Binaries
- Additional Tools: Sub-sampling, Parameter Selection, Format checking, etc.
- Python Interface
- Additional Information
Quick Start
===========

If you are new to SVM and the data is not large, please go to the
`tools' directory and use easy.py after installation. It does
everything automatically -- from data scaling to parameter selection.

Usage: easy.py training_file [testing_file]

More information about parameter selection can be found in
`tools/README'.
Installation and Data Format
============================

On Unix systems, type `make' to build the `svm-train' and `svm-predict'
programs. Run them without arguments to show their usage.

On other systems, consult `Makefile' to build them (e.g., see
'Building Windows binaries' in this file) or use the pre-built
binaries (Windows binaries are in the directory `windows').

The format of training and testing data files is:

<label> <index1>:<value1> <index2>:<value2> ...
.
.
.

Each line contains an instance and is ended by a '\n' character. For
classification, <label> is an integer indicating the class label
(multi-class is supported). For regression, <label> is the target
value, which can be any real number. For one-class SVM, it is not used,
so it can be any number. Except for precomputed kernels (explained in
another section), each <index>:<value> pair gives a feature (attribute)
value: <index> is an integer starting from 1 and <value> is a real
number. Indices must be in ASCENDING order. Labels in the testing
file are only used to calculate accuracy or errors. If they are
unknown, just fill the first column with any numbers.
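
For illustration, a hypothetical two-class data set with three
training instances could look like this (features with value zero are
simply omitted, which is what makes the format sparse):

	+1 1:0.708 2:1 3:1
	-1 1:0.583 3:-1 4:-0.421
	+1 2:1 4:0.5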
A sample classification data set included in this package is
`heart_scale'. To check if your data is in a correct form, use
`tools/checkdata.py' (details in `tools/README').

Type `svm-train heart_scale', and the program will read the training
data and output the model file `heart_scale.model'. If you have a test
set called heart_scale.t, then type `svm-predict heart_scale.t
heart_scale.model output' to see the prediction accuracy. The `output'
file contains the predicted class labels.
There are some other useful programs in this package.

svm-scale:

	This is a tool for scaling input data files.

svm-toy:

	This is a simple graphical interface which shows how SVM
	separates data in a plane. You can click in the window to
	draw data points. Use the "change" button to choose class
	1, 2, or 3 (i.e., up to three classes are supported), the "load"
	button to load data from a file, the "save" button to save data
	to a file, the "run" button to obtain an SVM model, and the
	"clear" button to clear the window. You can enter options at the
	bottom of the window; the syntax of options is the same as for
	`svm-train'.

	Note that "load" and "save" consider data in the
	classification but not the regression case. Each data point
	has one label (the color), which must be 1, 2, or 3, and two
	attributes (x-axis and y-axis values) in [0,1].

	Type `make' in respective directories to build them.

	You need the Qt library to build the Qt version.
	(available from http://www.trolltech.com)

	You need the GTK+ library to build the GTK version.
	(available from http://www.gtk.org)

	The pre-built Windows binaries are in the `windows'
	directory. We use Visual C++ on a 32-bit machine, so the
	maximal cache size is 2GB.
`svm-train' Usage
=================

Usage: svm-train [options] training_set_file [model_file]
options:
-s svm_type : set type of SVM (default 0)
	0 -- C-SVC
	1 -- nu-SVC
	2 -- one-class SVM
	3 -- epsilon-SVR
	4 -- nu-SVR
-t kernel_type : set type of kernel function (default 2)
	0 -- linear: u'*v
	1 -- polynomial: (gamma*u'*v + coef0)^degree
	2 -- radial basis function: exp(-gamma*|u-v|^2)
	3 -- sigmoid: tanh(gamma*u'*v + coef0)
	4 -- precomputed kernel (kernel values in training_set_file)
-d degree : set degree in kernel function (default 3)
-g gamma : set gamma in kernel function (default 1/k)
-r coef0 : set coef0 in kernel function (default 0)
-c cost : set the parameter C of C-SVC, epsilon-SVR, and nu-SVR (default 1)
-n nu : set the parameter nu of nu-SVC, one-class SVM, and nu-SVR (default 0.5)
-p epsilon : set the epsilon in loss function of epsilon-SVR (default 0.1)
-m cachesize : set cache memory size in MB (default 100)
-e epsilon : set tolerance of termination criterion (default 0.001)
-h shrinking : whether to use the shrinking heuristics, 0 or 1 (default 1)
-b probability_estimates : whether to train an SVC or SVR model for probability estimates, 0 or 1 (default 0)
-wi weight : set the parameter C of class i to weight*C in C-SVC (default 1)
-v n : n-fold cross validation mode

The k in the -g option means the number of attributes in the input data.

The -v option randomly splits the data into n parts and calculates
cross validation accuracy/mean squared error on them.

See the libsvm FAQ for the meaning of outputs.
`svm-predict' Usage
===================

Usage: svm-predict [options] test_file model_file output_file
options:
-b probability_estimates : whether to predict probability estimates, 0 or 1 (default 0); for one-class SVM only 0 is supported

model_file is the model file generated by svm-train.
test_file is the test data you want to predict.

svm-predict will produce output in the output_file.
`svm-scale' Usage
=================

Usage: svm-scale [options] data_filename
options:
-l lower : x scaling lower limit (default -1)
-u upper : x scaling upper limit (default +1)
-y y_lower y_upper : y scaling limits (default: no y scaling)
-s save_filename : save scaling parameters to save_filename
-r restore_filename : restore scaling parameters from restore_filename

See 'Examples' in this file for examples.
Tips on Practical Use
=====================

* Scale your data. For example, scale each attribute to [0,1] or [-1,+1].
* For C-SVC, consider using the model selection tool in the tools directory.
* nu in nu-SVC/one-class-SVM/nu-SVR approximates the fraction of training
  errors and support vectors.
* If data for classification are unbalanced (e.g., many positive and
  few negative), try different penalty parameters C by -wi (see
  examples below).
* Specify larger cache size (i.e., larger -m) for huge problems.
Examples
========

> svm-scale -l -1 -u 1 -s range train > train.scale
> svm-scale -r range test > test.scale

Scale each feature of the training data to be in [-1,1]. Scaling
factors are stored in the file range and then used for scaling the
test data.

> svm-train -s 0 -c 5 -t 2 -g 0.5 -e 0.1 data_file

Train a classifier with RBF kernel exp(-0.5|u-v|^2), C=5, and
stopping tolerance 0.1.

> svm-train -s 3 -p 0.1 -t 0 data_file

Solve SVM regression with linear kernel u'v and epsilon=0.1
in the loss function.

> svm-train -c 10 -w1 1 -w-1 5 data_file

Train a classifier with penalty 10 for class 1 and penalty 50
for class -1.

> svm-train -s 0 -c 100 -g 0.1 -v 5 data_file

Do five-fold cross validation for the classifier using
the parameters C = 100 and gamma = 0.1.

> svm-train -s 0 -b 1 data_file
> svm-predict -b 1 test_file data_file.model output_file

Obtain a model with probability information and predict test data with
probability estimates.
Precomputed Kernels
===================

Users may precompute kernel values and input them as training and
testing files. Then libsvm does not need the original
training/testing sets.

Assume there are L training instances x1, ..., xL. Let K(x, y) be the
kernel value of two instances x and y. The input formats are:

New training instance for xi:

<label> 0:i 1:K(xi,x1) ... L:K(xi,xL)

New testing instance for any x:

<label> 0:? 1:K(x,x1) ... L:K(x,xL)

That is, in the training file the first column must be the "ID" of
xi. In testing, ? can be any value.

All kernel values including ZEROs must be explicitly provided. Any
permutation or random subsets of the training/testing files are also
valid (see examples below).

Note: the format is slightly different from the precomputed kernel
package released in libsvmtools earlier.

Examples:

	Assume the original training data has three four-feature
	instances and the testing data has one instance:

	15  1:1 2:1 3:1 4:1
	45      2:3     4:3
	25          3:1

	15  1:1     3:1

	If the linear kernel is used, we have the following new
	training/testing sets:

	15  0:1 1:4 2:6  3:1
	45  0:2 1:6 2:18 3:0
	25  0:3 1:1 2:0  3:1

	15  0:? 1:2 2:0  3:1

	? can be any value. For example, the entry 1:4 in the first
	training row is K(x1,x1) = 1*1 + 1*1 + 1*1 + 1*1 = 4.

	Any subset of the above training file is also valid. For example,

	25  0:3 1:1 2:0  3:1
	45  0:2 1:6 2:18 3:0

	implies that the kernel matrix is

		[K(2,2) K(2,3)] = [18 0]
		[K(3,2) K(3,3)] = [0  1]
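
As a sketch of how such files can be generated, the following small C
program (an illustration, not part of libsvm) computes the
linear-kernel training rows of the example above from densely stored
data:

	#include <stdio.h>

	/* Toy data: the three four-feature training instances from
	   the example above, stored densely. */
	#define L 3        /* number of training instances */
	#define NFEAT 4    /* number of features */

	static const double X[L][NFEAT] = {
		{1, 1, 1, 1},
		{0, 3, 0, 3},
		{0, 0, 1, 0},
	};
	static const int Y[L] = {15, 45, 25};

	/* Linear kernel K(u,v) = u'*v. */
	static double dot(const double *u, const double *v)
	{
		double s = 0;
		for (int k = 0; k < NFEAT; k++)
			s += u[k] * v[k];
		return s;
	}

	int main(void)
	{
		/* Emit one precomputed-kernel row per training instance:
		   <label> 0:<ID> 1:K(xi,x1) ... L:K(xi,xL) */
		for (int i = 0; i < L; i++) {
			printf("%d 0:%d", Y[i], i + 1);
			for (int j = 0; j < L; j++)
				printf(" %d:%g", j + 1, dot(X[i], X[j]));
			printf("\n");
		}
		return 0;
	}

Running this prints exactly the three new training rows shown above.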
Library Usage
=============

These functions and structures are declared in the header file
`svm.h'. You need to #include "svm.h" in your C/C++ source files and
link your program with `svm.cpp'. You can see `svm-train.c' and
`svm-predict.c' for examples showing how to use them. We define
LIBSVM_VERSION in svm.h, so you can check the version number.

Before you classify test data, you need to construct an SVM model
(`svm_model') using training data. A model can also be saved in
a file for later use. Once an SVM model is available, you can use it
to classify new data.
- Function: struct svm_model *svm_train(const struct svm_problem *prob,
					const struct svm_parameter *param);

    This function constructs and returns an SVM model according to
    the given training data and parameters.

    struct svm_problem describes the problem:

	struct svm_problem
	{
		int l;
		double *y;
		struct svm_node **x;
	};

    where `l' is the number of training data, and `y' is an array
    containing their target values (integers in classification, real
    numbers in regression). `x' is an array of pointers, each of which
    points to a sparse representation (array of svm_node) of one
    training vector.

    For example, if we have the following training data:

    LABEL    ATTR1    ATTR2    ATTR3    ATTR4    ATTR5
    -----    -----    -----    -----    -----    -----
      1        0       0.1      0.2      0        0
      2        0       0.1      0.3     -1.2      0
      1       0.4      0        0        0        0
      2        0       0.1      0        1.4      0.5
      3      -0.1     -0.2      0.1      1.1      0.1

    then the components of svm_problem are:

	l = 5

	y -> 1 2 1 2 3

	x -> [ ] -> (2,0.1) (3,0.2) (-1,?)
	     [ ] -> (2,0.1) (3,0.3) (4,-1.2) (-1,?)
	     [ ] -> (1,0.4) (-1,?)
	     [ ] -> (2,0.1) (4,1.4) (5,0.5) (-1,?)
	     [ ] -> (1,-0.1) (2,-0.2) (3,0.1) (4,1.1) (5,0.1) (-1,?)

    where (index,value) is stored in the structure `svm_node':

	struct svm_node
	{
		int index;
		double value;
	};

    index = -1 indicates the end of one vector.
    struct svm_parameter describes the parameters of an SVM model:

	struct svm_parameter
	{
		int svm_type;
		int kernel_type;
		int degree;	/* for poly */
		double gamma;	/* for poly/rbf/sigmoid */
		double coef0;	/* for poly/sigmoid */

		/* these are for training only */
		double cache_size;	/* in MB */
		double eps;		/* stopping criteria */
		double C;		/* for C_SVC, EPSILON_SVR, and NU_SVR */
		int nr_weight;		/* for C_SVC */
		int *weight_label;	/* for C_SVC */
		double *weight;		/* for C_SVC */
		double nu;		/* for NU_SVC, ONE_CLASS, and NU_SVR */
		double p;		/* for EPSILON_SVR */
		int shrinking;		/* use the shrinking heuristics */
		int probability;	/* do probability estimates */
	};

    svm_type can be one of C_SVC, NU_SVC, ONE_CLASS, EPSILON_SVR, NU_SVR.

	C_SVC:		C-SVM classification
	NU_SVC:		nu-SVM classification
	ONE_CLASS:	one-class-SVM
	EPSILON_SVR:	epsilon-SVM regression
	NU_SVR:		nu-SVM regression

    kernel_type can be one of LINEAR, POLY, RBF, SIGMOID, PRECOMPUTED.

	LINEAR:		u'*v
	POLY:		(gamma*u'*v + coef0)^degree
	RBF:		exp(-gamma*|u-v|^2)
	SIGMOID:	tanh(gamma*u'*v + coef0)
	PRECOMPUTED:	kernel values in training_set_file

    cache_size is the size of the kernel cache, specified in megabytes.
    C is the cost of constraint violation (we usually use 1 to 1000).
    eps is the stopping criterion (we usually use 0.00001 in nu-SVC,
    0.001 in others). nu is the parameter in nu-SVM, nu-SVR, and
    one-class-SVM. p is the epsilon in the epsilon-insensitive loss
    function of epsilon-SVM regression. shrinking = 1 means shrinking
    is conducted; = 0 otherwise. probability = 1 means a model with
    probability information is obtained; = 0 otherwise.

    nr_weight, weight_label, and weight are used to change the penalty
    for some classes (if the weight for a class is not changed, it is
    set to 1). This is useful for training a classifier with unbalanced
    input data or with asymmetric misclassification costs.

    nr_weight is the number of elements in the arrays weight_label and
    weight. Each weight[i] corresponds to weight_label[i], meaning that
    the penalty of class weight_label[i] is scaled by a factor of
    weight[i].

    If you do not want to change the penalty for any of the classes,
    just set nr_weight to 0.

    *NOTE* Because svm_model contains pointers to svm_problem, you can
    not free the memory used by svm_problem if you are still using the
    svm_model produced by svm_train().

    *NOTE* To avoid wrong parameters, svm_check_parameter() should be
    called before svm_train().
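
    Putting these pieces together, here is a minimal sketch of a
    complete training run on part of the example data above. The
    parameter values and the output file name `example.model' are
    illustrative choices, not prescriptions:

	#include <stdio.h>
	#include <stdlib.h>
	#include "svm.h"

	int main(void)
	{
		/* Sparse encodings of three training vectors from the
		   example above, each terminated by index = -1. */
		struct svm_node x0[] = {{2, 0.1}, {3, 0.2}, {-1, 0}};
		struct svm_node x1[] = {{2, 0.1}, {3, 0.3}, {4, -1.2}, {-1, 0}};
		struct svm_node x2[] = {{1, 0.4}, {-1, 0}};
		struct svm_node *x[] = {x0, x1, x2};
		double y[] = {1, 2, 1};

		struct svm_problem prob = {3, y, x};

		/* Illustrative parameter choices (C-SVC, RBF kernel). */
		struct svm_parameter param;
		param.svm_type = C_SVC;
		param.kernel_type = RBF;
		param.degree = 3;
		param.gamma = 0.2;	/* 1/k for k = 5 attributes */
		param.coef0 = 0;
		param.cache_size = 100;
		param.eps = 1e-3;
		param.C = 1;
		param.nr_weight = 0;
		param.weight_label = NULL;
		param.weight = NULL;
		param.nu = 0.5;
		param.p = 0.1;
		param.shrinking = 1;
		param.probability = 0;

		/* Check the parameters before training, as noted above. */
		const char *err = svm_check_parameter(&prob, &param);
		if (err) {
			fprintf(stderr, "parameter error: %s\n", err);
			return 1;
		}

		struct svm_model *model = svm_train(&prob, &param);
		svm_save_model("example.model", model);

		/* prob must stay alive while model is in use. */
		svm_destroy_model(model);
		svm_destroy_param(&param);
		return 0;
	}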
- Function: double svm_predict(const struct svm_model *model,
			       const struct svm_node *x);

    This function does classification or regression on a test vector x
    given a model.

    For a classification model, the predicted class for x is returned.
    For a regression model, the function value of x calculated using
    the model is returned. For a one-class model, +1 or -1 is
    returned.
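
    For instance, a minimal sketch that predicts one sparse test
    vector with a previously saved model (the file name
    `example.model' continues the training sketch above):

	#include <stdio.h>
	#include "svm.h"

	int main(void)
	{
		struct svm_model *model = svm_load_model("example.model");
		if (model == NULL)
			return 1;

		/* Test vector (2,0.1) (4,1.0), terminated by index = -1. */
		struct svm_node x[] = {{2, 0.1}, {4, 1.0}, {-1, 0}};
		double label = svm_predict(model, x);
		printf("predicted label: %g\n", label);

		svm_destroy_model(model);
		return 0;
	}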
- Function: void svm_cross_validation(const struct svm_problem *prob,
	const struct svm_parameter *param, int nr_fold, double *target);

    This function conducts cross validation. Data are separated into
    nr_fold folds. Under the given parameters, each fold is
    sequentially validated using the model trained on the remaining
    folds. Predicted labels (of all prob's instances) in the
    validation process are stored in the array called target.

    The format of prob is the same as that for svm_train().
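
    As an illustrative fragment (continuing the training sketch above,
    with stdio.h, stdlib.h, and svm.h included), cross validation
    accuracy for a classification problem can be computed from the
    target array like this:

	/* Sketch: 5-fold cross validation accuracy, assuming `prob'
	   and `param' are set up as in the training example above. */
	int nr_fold = 5;
	double *target = malloc(prob.l * sizeof(double));
	int correct = 0;

	svm_cross_validation(&prob, &param, nr_fold, target);
	for (int i = 0; i < prob.l; i++)
		if (target[i] == prob.y[i])
			correct++;
	printf("CV accuracy = %g%%\n", 100.0 * correct / prob.l);
	free(target);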
- Function: int svm_get_svm_type(const struct svm_model *model);

    This function gives the svm_type of the model. Possible values of
    svm_type are defined in svm.h.

- Function: int svm_get_nr_class(const svm_model *model);

    For a classification model, this function gives the number of
    classes. For a regression or a one-class model, 2 is returned.

- Function: void svm_get_labels(const svm_model *model, int* label)

    For a classification model, this function outputs the names of
    labels into an array called label. For regression and one-class
    models, label is unchanged.
- Function: double svm_get_svr_probability(const struct svm_model *model);

    For a regression model with probability information, this function
    outputs a value sigma > 0. For test data, we consider the
    probability model: target value = predicted value + z, where z
    follows the Laplace distribution with density
    e^(-|z|/sigma)/(2*sigma).

    If the model is not for SVR or does not contain the required
    information, 0 is returned.
- Function: void svm_predict_values(const svm_model *model,
				    const svm_node *x, double* dec_values)

    This function gives decision values on a test vector x given a
    model.

    For a classification model with nr_class classes, this function
    gives nr_class*(nr_class-1)/2 decision values in the array
    dec_values, where nr_class can be obtained from the function
    svm_get_nr_class. The order is label[0] vs. label[1], ...,
    label[0] vs. label[nr_class-1], label[1] vs. label[2], ...,
    label[nr_class-2] vs. label[nr_class-1], where label can be
    obtained from the function svm_get_labels.

    For a regression model, dec_values[0] is the function value of x
    calculated using the model. For a one-class model, dec_values[0]
    is +1 or -1.
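
    The pairwise ordering can be walked with a double loop over
    i < j. A fragment sketching this (continuing the sketches above,
    with a trained classification model `model' and test vector `x'):

	/* Sketch: print the pairwise decision values. The k-th entry
	   of dec corresponds to label[i] vs. label[j] for i < j. */
	int nr_class = svm_get_nr_class(model);
	int *labels = malloc(nr_class * sizeof(int));
	double *dec = malloc(nr_class * (nr_class - 1) / 2 * sizeof(double));

	svm_get_labels(model, labels);
	svm_predict_values(model, x, dec);

	int k = 0;
	for (int i = 0; i < nr_class; i++)
		for (int j = i + 1; j < nr_class; j++, k++)
			printf("%d vs. %d: %g\n", labels[i], labels[j], dec[k]);

	free(labels);
	free(dec);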
- Function: double svm_predict_probability(const struct svm_model *model,
	    const struct svm_node *x, double* prob_estimates);

    This function does classification or regression on a test vector x
    given a model with probability information.

    For a classification model with probability information, this
    function gives nr_class probability estimates in the array
    prob_estimates. nr_class can be obtained from the function
    svm_get_nr_class. The class with the highest probability is
    returned. For regression/one-class SVM, the array prob_estimates
    is unchanged and the returned value is the same as that of
    svm_predict.
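
    A fragment sketching its use (continuing the sketches above; the
    model is assumed to have been trained with param.probability = 1,
    and svm_check_probability_model is described below):

	/* Sketch: class probability estimates for a test vector x.
	   The i-th estimate corresponds to the i-th entry of the
	   array filled by svm_get_labels. */
	if (svm_check_probability_model(model)) {
		int nr_class = svm_get_nr_class(model);
		int *labels = malloc(nr_class * sizeof(int));
		double *probs = malloc(nr_class * sizeof(double));

		svm_get_labels(model, labels);
		double pred = svm_predict_probability(model, x, probs);

		printf("predicted label: %g\n", pred);
		for (int i = 0; i < nr_class; i++)
			printf("P(y=%d) = %g\n", labels[i], probs[i]);

		free(labels);
		free(probs);
	}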
- Function: const char *svm_check_parameter(const struct svm_problem *prob,
					    const struct svm_parameter *param);

    This function checks whether the parameters are within the feasible
    range of the problem. It should be called before calling
    svm_train() and svm_cross_validation(). It returns NULL if the
    parameters are feasible; otherwise an error message is returned.

- Function: int svm_check_probability_model(const struct svm_model *model);

    This function checks whether the model contains the required
    information to do probability estimates. If so, it returns
    +1. Otherwise, 0 is returned. It should be called
    before calling svm_get_svr_probability and
    svm_predict_probability.

- Function: int svm_save_model(const char *model_file_name,
			       const struct svm_model *model);

    This function saves a model to a file; it returns 0 on success, or
    -1 if an error occurs.

- Function: struct svm_model *svm_load_model(const char *model_file_name);

    This function returns a pointer to the model read from the file,
    or a null pointer if the model could not be loaded.

- Function: void svm_destroy_model(struct svm_model *model);

    This function frees the memory used by a model.

- Function: void svm_destroy_param(struct svm_parameter *param);

    This function frees the memory used by a parameter set.
Java Version
============

The pre-compiled java class archive `libsvm.jar' and its source files are
in the java directory. To run the programs, use

java -classpath libsvm.jar svm_train <arguments>
java -classpath libsvm.jar svm_predict <arguments>
java -classpath libsvm.jar svm_toy
java -classpath libsvm.jar svm_scale <arguments>

Note that you need Java 1.5 (5.0) or above to run it.

You may need to add the Java runtime library (like classes.zip) to the
classpath. You may need to increase the maximum Java heap size.

Library usage is similar to the C version. These functions are available:

public class svm {
	public static final int LIBSVM_VERSION=286;
	public static svm_model svm_train(svm_problem prob, svm_parameter param);
	public static void svm_cross_validation(svm_problem prob, svm_parameter param, int nr_fold, double[] target);
	public static int svm_get_svm_type(svm_model model);
	public static int svm_get_nr_class(svm_model model);
	public static void svm_get_labels(svm_model model, int[] label);
	public static double svm_get_svr_probability(svm_model model);
	public static void svm_predict_values(svm_model model, svm_node[] x, double[] dec_values);
	public static double svm_predict(svm_model model, svm_node[] x);
	public static double svm_predict_probability(svm_model model, svm_node[] x, double[] prob_estimates);
	public static void svm_save_model(String model_file_name, svm_model model) throws IOException
	public static svm_model svm_load_model(String model_file_name) throws IOException
	public static String svm_check_parameter(svm_problem prob, svm_parameter param);
	public static int svm_check_probability_model(svm_model model);
}

The library is in the "libsvm" package.
Note that in the Java version, svm_node[] is not ended with a node
whose index = -1.
Building Windows Binaries
=========================

Windows binaries are in the directory `windows'. To build them via
Visual C++, use the following steps:

1. Open a DOS command box (or Visual Studio Command Prompt) and change
to the libsvm directory. If the environment variables of VC++ have not
been set, type

"C:\Program Files\Microsoft Visual Studio 8\VC\bin\vcvars32.bat"

You may have to modify the above according to which version of VC++
you have and where it is installed.

2. Type

nmake -f Makefile.win clean all

3. (optional) To build the Python interface, download and install
Python. Edit Makefile.win and change PYTHON_INC and PYTHON_LIB to match
your Python installation. Type

nmake -f Makefile.win python

and then copy windows\python\svmc.pyd to the python directory.

Another way is to build them from the Visual C++ environment. See
details in the libsvm FAQ.
Additional Tools: Sub-sampling, Parameter Selection, Format checking, etc.
==========================================================================

See the README file in the tools directory.
Python Interface
================

See the README file in the python directory.
Additional Information
======================

If you find LIBSVM helpful, please cite it as

Chih-Chung Chang and Chih-Jen Lin, LIBSVM: a library for
support vector machines, 2001. Software available at
http://www.csie.ntu.edu.tw/~cjlin/libsvm

The LIBSVM implementation document is available at
http://www.csie.ntu.edu.tw/~cjlin/papers/libsvm.pdf

For any questions and comments, please email cjlin@csie.ntu.edu.tw

Acknowledgments:
This work was supported in part by the National Science
Council of Taiwan via the grant NSC 89-2213-E-002-013.
The authors thank their group members and users
for many helpful discussions and comments. They are listed in
http://www.csie.ntu.edu.tw/~cjlin/libsvm/acknowledgements