SVMstruct: Support Vector Machine for Complex Outputs
Author: Thorsten Joachims <thorsten@joachims.org>
Cornell University, Department of Computer Science
Version: 2.00
SVMstruct is a Support Vector Machine (SVM) algorithm for predicting multivariate outputs. Unlike regular SVMs, which consider only univariate predictions as in classification and regression, SVMstruct can predict complex objects like trees, sequences, or sets. Examples of problems with complex outputs are natural language parsing, sequence alignment in protein homology detection, Markov models for part-of-speech tagging, and many more.
The sparse approximation algorithm implemented in SVMstruct is described in [1][2]. The implementation is based on the SVMlight quadratic optimizer [3].
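The underlying training problem, in the margin-rescaling variant described in [1], can be written as roughly the following quadratic program (typeset here in LaTeX; under slack rescaling the constraint instead reads \langle\mathbf{w},\delta\Psi_i(\bar{y})\rangle \ge 1 - \xi_i/\Delta(y_i,\bar{y})):

    \min_{\mathbf{w},\,\boldsymbol{\xi}\ge 0}\;
      \frac{1}{2}\lVert\mathbf{w}\rVert^{2}
      + \frac{C}{n}\sum_{i=1}^{n}\xi_{i}
    \qquad\text{s.t.}\qquad
    \forall i,\;\forall\bar{y}\in\mathcal{Y}\setminus\{y_{i}\}:\;
    \bigl\langle\mathbf{w},\,\Psi(x_{i},y_{i})-\Psi(x_{i},\bar{y})\bigr\rangle
      \;\ge\;\Delta(y_{i},\bar{y})-\xi_{i}

Here \Psi(x,y) is the joint feature map over inputs and outputs, and \Delta(y,\bar{y}) is the loss of predicting \bar{y} when y is correct. Since the number of constraints is generally exponential in the size of the output, the algorithm in [1] adds only the most violated constraints incrementally, which is the sparse approximation referred to above.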
SVMstruct can be thought of as an API for implementing different kinds of complex prediction algorithms. Currently, we have implemented several learning tasks as instantiations of this API.
SVMstruct can be downloaded from:

http://kodiak.cs.cornell.edu/svm_struct/current/svm_struct.tar.gz

The archive contains the source code of the most recent version of SVMstruct as well as the source code of the SVMlight quadratic optimizer. Unpack the archive using the shell command:
gunzip -c svm_struct.tar.gz | tar xvf -

This expands the archive into the current directory, which now contains all relevant files. You can compile SVMstruct with the empty API using the command:
make

It will output some warnings, since the functions of the API are only templates and do not return values as required. However, it should produce the executable svm_struct_learn. To implement your own instantiation, you will need to edit the API files svm_struct_api_types.h and svm_struct_api.c.
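To give a concrete feel for what an instantiation involves, below is a minimal C sketch for k-class classification. The types and signatures are deliberately simplified placeholders (the real API functions additionally take STRUCTMODEL and STRUCT_LEARN_PARM arguments and represent Psi(x,y) as sparse SVECTOR feature vectors), so treat this as an illustration rather than the definitive interface:

    /* Hypothetical, simplified instantiation sketch for k-class
       classification; the real structs belong in svm_struct_api_types.h
       and the functions in svm_struct_api.c. */

    typedef struct { double *features; int num_features; } PATTERN; /* input x  */
    typedef struct { int class_id; } LABEL;                         /* output y */

    /* Loss Delta(y, ybar): plain zero/one loss. */
    double loss(LABEL y, LABEL ybar)
    {
        return (y.class_id == ybar.class_id) ? 0.0 : 1.0;
    }

    /* Most violated constraint under margin rescaling (-r 2):
       argmax over ybar of [ Delta(y, ybar) + w . Psi(x, ybar) ],
       where score(x, ybar) stands in for the model score w . Psi. */
    LABEL most_violated_constraint(PATTERN x, LABEL y, int num_classes,
                                   double (*score)(PATTERN, LABEL))
    {
        LABEL best = { 1 }, cand;
        double best_val = -1e300;
        for (cand.class_id = 1; cand.class_id <= num_classes; cand.class_id++) {
            double val = loss(y, cand) + score(x, cand);
            if (val > best_val) { best_val = val; best = cand; }
        }
        return best;
    }

The search for the most violated constraint is the only part that must explore the output space: for multi-class problems it is a simple loop as above, while for sequences or trees it becomes a Viterbi- or parsing-style dynamic program.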
You can then train a model by calling, for example:

svm_struct_learn -c 1.0 train.dat model.dat

This trains an SVM on the training set train.dat and outputs the learned rule to model.dat, using the regularization parameter C set to 1.0. The format of the training file and the model file depends on the particular instantiation of SVMstruct. Other options are:
General options:
         -?          -> this help
         -v [0..3]   -> verbosity level (default 1)
         -y [0..3]   -> verbosity level for svm_light (default 0)
Learning options:
         -c float    -> C: trade-off between training error and margin
                        (default 0.01)
         -d [1,2]    -> L-norm to use for slack variables. Use 1 for L1-norm,
                        use 2 for squared slacks. (default 1)
         -r [0..]    -> Rescaling method to use for loss.
                        1: slack rescaling
                        2: margin rescaling
                        (default 1)
         -l [0..]    -> Loss function to use.
                        0: zero/one loss
                        (default 0)
Optimization options (see [1]):
         -q [2..]    -> maximum size of QP-subproblems (default 10)
         -n [2..q]   -> number of new variables entering the working set
                        in each iteration (default n = q). Set n < q to
                        prevent zig-zagging.
         -e float    -> eps: Allow that error for termination criterion
                        (default 0.01)
         -h [5..]    -> number of iterations a variable needs to be optimal
                        before considered for shrinking (default 100)
Output options:
         -a string   -> write all alphas to this file after learning
                        (in the same order as in the training set)
Structure learning options:
         -u* string  -> custom parameters that can be adapted for struct
                        learning. The * can be replaced by any character
                        and there can be multiple options starting with -u.

The options starting with -u are those specific to the instantiation. For more details on the meaning of these options, consult reference [1].
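For illustration, a call along the following lines would train with squared slacks, margin rescaling, and a looser termination threshold (the parameter values and the -ut option are hypothetical; which -u options exist depends entirely on your instantiation):

svm_struct_learn -c 5.0 -d 2 -r 2 -e 0.1 -ut 0.5 train.dat model.dat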
[1] I. Tsochantaridis, T. Hofmann, T. Joachims, and Y. Altun. Support Vector Machine Learning for Interdependent and Structured Output Spaces. ICML, 2004.
[2] T. Joachims. Learning to Align Sequences: A Maximum Margin Approach. Technical Report, August 2003.
[3] T. Joachims. Making Large-Scale SVM Learning Practical. In: Advances in Kernel Methods - Support Vector Learning, B. Schölkopf, C. Burges, and A. Smola (eds.), MIT Press, 1999.