Figure 1: Different hyperplanes separate data points.
A hyperplane is defined through \mathbf{w},b as the set of points \mathcal{H}=\left\{\mathbf{x}\,|\,\mathbf{w}^T\mathbf{x}+b=0\right\}. Let the margin \gamma be defined as the distance from the hyperplane to the closest point across both classes.
Consider some point \mathbf{x}. Let \mathbf{d} be the vector from \mathcal{H} to \mathbf{x} of minimum length. Let \mathbf{x}^P be the projection of \mathbf{x} onto \mathcal{H}. It then follows that:
\mathbf{x}^P=\mathbf{x}-\mathbf{d}\in\mathcal{H}.
\mathbf{d} is parallel to \mathbf{w}, so \mathbf{d}=\alpha\mathbf{w} for some \alpha\in\mathbb{R}.
\mathbf{x}^P\in\mathcal{H}, which implies \mathbf{w}^T\mathbf{x}^P+b=0,
therefore \mathbf{w}^T\mathbf{x}^P+b=\mathbf{w}^T(\mathbf{x}-\mathbf{d})+b=\mathbf{w}^T(\mathbf{x}-\alpha\mathbf{w})+b=0,
which implies \alpha=\frac{\mathbf{w}^T\mathbf{x}+b}{\mathbf{w}^T\mathbf{w}}.
The length of \mathbf{d}:
\left\|\mathbf{d}\right\|_{2}=\sqrt{\mathbf{d}^T\mathbf{d}}=\sqrt{\alpha^2\,\mathbf{w}^T\mathbf{w}}=\frac{\left|\mathbf{w}^T\mathbf{x}+b\right|}{\sqrt{\mathbf{w}^T\mathbf{w}}}=\frac{\left|\mathbf{w}^T\mathbf{x}+b\right|}{\left\|\mathbf{w}\right\|_{2}}
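As a quick sanity check, the formulas above can be verified numerically. The following is a minimal sketch with an arbitrary, hypothetical choice of \mathbf{w}, b, and \mathbf{x} (none of these values come from the notes):

```python
import numpy as np

# Hypothetical hyperplane w^T x + b = 0 and point x (illustrative values only).
w = np.array([2.0, 1.0])
b = -1.0
x = np.array([3.0, 4.0])

alpha = (w @ x + b) / (w @ w)   # alpha = (w^T x + b) / (w^T w)
d = alpha * w                   # d is parallel to w
x_P = x - d                     # projection of x onto the hyperplane

print(w @ x_P + b)                          # ~0: x_P lies on the hyperplane
print(np.linalg.norm(d))                    # length of d
print(abs(w @ x + b) / np.linalg.norm(w))   # |w^T x + b| / ||w||_2, the same value
```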
Margin of \mathcal{H} with respect to D:
\gamma(\mathbf{w},b)=\min_{\mathbf{x}\in D}\frac{\left | \mathbf{w}^T\mathbf{x}+b \right |}{\left \| \mathbf{w} \right \|_{2}}
By definition, the margin and hyperplane are scale invariant: \gamma(\beta\mathbf{w},\beta b)=\gamma(\mathbf{w},b), \forall \beta \neq 0
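A small sketch (again with made-up data points) that computes this margin and checks the scale invariance numerically:

```python
import numpy as np

def margin(w, b, X):
    """gamma(w, b) = min over rows x of X of |w^T x + b| / ||w||_2."""
    return np.min(np.abs(X @ w + b)) / np.linalg.norm(w)

# Hypothetical data set D (rows are the points x_i).
X = np.array([[1.0, 2.0], [2.0, 0.5], [-1.0, -1.0], [-2.0, 0.0]])
w, b = np.array([1.0, 1.0]), -0.5

print(margin(w, b, X))               # gamma(w, b)
print(margin(3.0 * w, 3.0 * b, X))   # same value: gamma(beta*w, beta*b) = gamma(w, b)
```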
Note that if the hyperplane is such that \gamma is maximized, it must lie right in the middle of the two classes. In other words, \gamma must be the distance from the hyperplane to the closest point of both classes. (If not, you could move the hyperplane towards the data points of the class that is further away and increase \gamma, which contradicts that \gamma is maximized.)
Because the hyperplane and the margin are scale invariant, we can fix the scale of \mathbf{w},b any way we want; let us choose it such that \min_{\mathbf{x}\in D}\left|\mathbf{w}^T\mathbf{x}+b\right|=1. Then our objective becomes: \max_{\mathbf{w},b}\frac{1}{\left \| \mathbf{w} \right \|_{2}}\cdot 1 = \min_{\mathbf{w},b}\left \| \mathbf{w} \right \|_{2} = \min_{\mathbf{w},b}\left \| \mathbf{w} \right \|_{2}^{2} (where we made use of the fact that f(z)=z^2 is a monotonically increasing function for z\geq 0 and \|\mathbf{w}\|_2\geq 0, so maximizing \frac{1}{\|\mathbf{w}\|_2} is equivalent to minimizing \|\mathbf{w}\|_2^2).
This re-scaling constraint, together with the requirement that every point lie on the correct side of the hyperplane, is still hard to deal with; however, luckily we can show that (for the optimal solution) these constraints are equivalent to a much simpler formulation. The new optimization problem becomes: \begin{align} &\min_{\mathbf{w},b}\mathbf{w}^T\mathbf{w}&\\ &\textrm{s.t.} \ \ \ \forall i \ y_{i}(\mathbf{w}^T \mathbf{x}_{i}+b) \geq 1 & \end{align}
This new formulation is a quadratic optimization problem: the objective is quadratic and the constraints are all linear. We can solve it efficiently with any QP (Quadratic Programming) solver. It has a unique solution whenever a separating hyperplane exists. It also has a nice interpretation: find the simplest hyperplane (where simpler means smaller \mathbf{w}^\top\mathbf{w}) such that all inputs lie at least 1 unit away from the hyperplane on the correct side.
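As one possible illustration (not part of the original notes), the problem above can be handed to a generic solver directly. The sketch below uses cvxpy on a small, hypothetical linearly separable data set:

```python
import cvxpy as cp
import numpy as np

# Hypothetical linearly separable data: rows of X with labels y in {-1, +1}.
X = np.array([[2.0, 2.0], [3.0, 3.0], [-2.0, -2.0], [-3.0, -1.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])

w = cp.Variable(2)
b = cp.Variable()
objective = cp.Minimize(cp.sum_squares(w))        # minimize w^T w
constraints = [cp.multiply(y, X @ w + b) >= 1]    # y_i (w^T x_i + b) >= 1 for all i
cp.Problem(objective, constraints).solve()

print(w.value, b.value)  # parameters of the maximum-margin hyperplane
```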
If the data is low dimensional, it is often the case that there is no separating hyperplane between the two classes. In this case, there is no solution to the optimization problems stated above. We can fix this by allowing the constraints to be violated ever so slightly with the introduction of slack variables: \begin{matrix} \min_{\mathbf{w},b}\mathbf{w}^T\mathbf{w}+C\sum_{i=1}^{n}\xi _{i} \\ s.t. \ \forall i \ y_{i}(\mathbf{w}^T\mathbf{x}_{i}+b)\geq 1-\xi_i \\ \forall i \ \xi_i \geq 0 \end{matrix} The slack variable \xi_i allows the input \mathbf{x}_i to be closer to the hyperplane (or even to be on the wrong side), but there is a penalty in the objective function for such "slack". If C is very large, the SVM becomes very strict and tries to get all points onto the correct side of the hyperplane. If C is very small, the SVM becomes very loose and may "sacrifice" some points to obtain a simpler (i.e. lower \|\mathbf{w}\|_2^2) solution.
Figure 2: Five plots showing the decision boundary of the soft-margin SVM on example data for C=0.01, 0.1, 1, 10, 100.
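The qualitative effect of C can be reproduced with any off-the-shelf SVM implementation. A sketch using scikit-learn on randomly generated data (hypothetical blobs, not the data from Figure 2):

```python
import numpy as np
from sklearn.svm import SVC

# Two hypothetical Gaussian blobs as a stand-in for the example data.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(+1.5, 1.0, size=(50, 2)),
               rng.normal(-1.5, 1.0, size=(50, 2))])
y = np.hstack([np.ones(50), -np.ones(50)])

for C in [0.01, 0.1, 1, 10, 100]:
    clf = SVC(kernel="linear", C=C).fit(X, y)
    # Larger C: stricter fit (fewer violations); smaller C: smaller ||w||_2, more slack.
    print(C, np.linalg.norm(clf.coef_), clf.score(X, y))
```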