10: Empirical Risk Minimization

Cornell CS 4/5780 (Spring 2024)


Recap

Remember the unconstrained SVM formulation
$$\min_{\mathbf{w}}\; C\underbrace{\sum_{i=1}^{n}\max\left[1-y_i\underbrace{(\mathbf{w}^\top\mathbf{x}_i+b)}_{h(\mathbf{x}_i)},\,0\right]}_{\text{Hinge Loss}}\;+\;\underbrace{\|\mathbf{w}\|_2^2}_{l_2\text{ Regularizer}}$$
The hinge loss is the SVM's error function of choice, whereas the $l_2$-regularizer penalizes (overly) complex solutions. This is an example of empirical risk minimization with a loss function $\ell$ and a regularizer $r$,
$$\min_{\mathbf{w}}\;\frac{1}{n}\underbrace{\sum_{i=1}^{n}\ell(h_{\mathbf{w}}(\mathbf{x}_i),y_i)}_{\text{Loss}}\;+\;\underbrace{\lambda r(\mathbf{w})}_{\text{Regularizer}},$$
where the loss function $\ell$ is a continuous function that penalizes training error, and the regularizer $r$ is a continuous function that penalizes classifier complexity. Here, we define $\lambda$ as $\frac{1}{C}$ from the previous lecture.[1]
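As a quick sanity check on the formulation above, the unconstrained SVM objective can be evaluated directly; a minimal numpy sketch (the function `svm_objective` and the toy data are ours, purely illustrative):

```python
import numpy as np

def svm_objective(w, b, X, y, C):
    # C * sum_i max[1 - y_i (w^T x_i + b), 0]  +  ||w||_2^2
    # X: (n, d) matrix with one training point per row; y: labels in {-1, +1}.
    margins = y * (X @ w + b)
    hinge = np.maximum(1.0 - margins, 0.0).sum()
    return C * hinge + np.dot(w, w)

# Two linearly separable toy points.
X = np.array([[1.0, 0.0], [-1.0, 0.0]])
y = np.array([1.0, -1.0])
w = np.array([1.0, 0.0])

# Both points sit exactly on the margin, so the hinge term vanishes
# and only the regularizer ||w||_2^2 = 1 remains.
print(svm_objective(w, 0.0, X, y, C=1.0))  # 1.0
```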

Commonly Used Binary Classification Loss Functions

Different machine learning algorithms use different loss functions; Table 4.1 shows just a few (here we assume $y_i \in \{+1, -1\}$):

Hinge Loss: $\ell(h_{\mathbf{w}}(\mathbf{x}_i), y_i) = \max\left[1 - h_{\mathbf{w}}(\mathbf{x}_i)\,y_i,\; 0\right]^p$
  • Usage: standard SVM ($p=1$); (differentiable) squared hinge-loss SVM ($p=2$)
  • When used for the standard SVM, the loss function denotes the size of the margin between the linear separator and its closest points in either class. Only differentiable everywhere with $p=2$.

Log Loss: $\ell(h_{\mathbf{w}}(\mathbf{x}_i), y_i) = \log\left(1 + e^{-h_{\mathbf{w}}(\mathbf{x}_i)\,y_i}\right)$
  • Usage: logistic regression
  • One of the most popular loss functions in machine learning, since its outputs are well-calibrated probabilities.

Exponential Loss: $\ell(h_{\mathbf{w}}(\mathbf{x}_i), y_i) = e^{-h_{\mathbf{w}}(\mathbf{x}_i)\,y_i}$
  • Usage: AdaBoost
  • This function is very aggressive: the loss of a mis-prediction increases exponentially with the value of $-h_{\mathbf{w}}(\mathbf{x}_i)\,y_i$. This can lead to nice convergence results, for example in the case of AdaBoost, but it can also cause problems with noisy data.

Zero-One Loss: $\ell(h_{\mathbf{w}}(\mathbf{x}_i), y_i) = \delta\left(\operatorname{sign}(h_{\mathbf{w}}(\mathbf{x}_i)) \neq y_i\right)$
  • Usage: the actual classification loss
  • Non-continuous and thus impractical to optimize.

Table 4.1: Loss Functions With Classification, $y \in \{-1, +1\}$

Quiz: What do all these loss functions look like with respect to $z = y\,h(\mathbf{x})$?

Figure 4.1: Plots of Common Classification Loss Functions - x-axis: $h(\mathbf{x}_i)\,y_i$, or "correctness" of prediction; y-axis: loss value
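These curves are easy to reproduce numerically; a small numpy sketch of the four losses from Table 4.1 as functions of $z = h_{\mathbf{w}}(\mathbf{x}_i)\,y_i$ (the function names are ours):

```python
import numpy as np

# Each classification loss from Table 4.1 as a function of z = h_w(x_i) * y_i.
def hinge(z, p=1):
    return np.maximum(1.0 - z, 0.0) ** p

def log_loss(z):
    return np.log(1.0 + np.exp(-z))

def exponential(z):
    return np.exp(-z)

def zero_one(z):
    # 1 if sign(h(x_i)) != y_i (i.e. z <= 0), else 0.
    return np.where(z > 0, 0.0, 1.0)

z = 0.5  # correct prediction, but inside the margin
print(hinge(z), log_loss(z), exponential(z), zero_one(z))
# Note that hinge and exponential upper-bound the zero-one loss here.
```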

Some questions about the loss functions:
1. Which functions are strict upper bounds on the 0/1-loss?
2. What can you say about the hinge loss and the log loss as $z \to -\infty$?

Some additional notes on loss functions:
1. As $z \to -\infty$, the log loss and the hinge loss become increasingly parallel.
2. The exponential loss and the hinge loss are both upper bounds of the zero-one loss. (For the exponential loss, this is an important aspect in AdaBoost, which we will cover later.)
3. The zero-one loss is zero when the prediction is correct, and one when incorrect.

Commonly Used Regression Loss Functions

Regression algorithms (where a prediction can lie anywhere on the real-number line) also have their own host of loss functions:

Squared Loss: $\ell(h(\mathbf{x}_i), y_i) = (h(\mathbf{x}_i) - y_i)^2$
  • Most popular regression loss function
  • Estimates the mean label
  • Also known as Ordinary Least Squares (OLS)
  • 🙂 Differentiable everywhere
  • 😡 Somewhat sensitive to outliers/noise

Absolute Loss: $\ell(h(\mathbf{x}_i), y_i) = |h(\mathbf{x}_i) - y_i|$
  • Also a very popular loss function
  • Estimates the median label
  • 🙂 Less sensitive to noise
  • 😡 Not differentiable at $0$

Huber Loss: $\ell(h(\mathbf{x}_i), y_i) = \frac{1}{2}(h(\mathbf{x}_i) - y_i)^2$ if $|h(\mathbf{x}_i) - y_i| < \delta$, otherwise $\delta\left(|h(\mathbf{x}_i) - y_i| - \frac{\delta}{2}\right)$
  • Also known as smooth absolute loss
  • "Best of both worlds" of squared and absolute loss
  • Once-differentiable
  • Takes on the behavior of the squared loss when the residual is small, and of the absolute loss when the residual is large

Log-Cosh Loss: $\ell(h(\mathbf{x}_i), y_i) = \log\left(\cosh(h(\mathbf{x}_i) - y_i)\right)$, where $\cosh(x) = \frac{e^x + e^{-x}}{2}$
  • 🙂 Similar to the Huber loss, but twice differentiable everywhere
  • 😡 More expensive to compute

Table 4.2: Loss Functions With Regression, i.e. $y \in \mathbb{R}$

Quiz: What do the loss functions in Table 4.2 look like with respect to $z = h(\mathbf{x}_i) - y_i$?

Figure 4.2: Plots of Common Regression Loss Functions - x-axis: $h(\mathbf{x}_i) - y_i$, or "error" of prediction; y-axis: loss value
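These are likewise easy to tabulate; a minimal numpy sketch of the Table 4.2 losses as functions of the residual $r = h(\mathbf{x}_i) - y_i$, with the Huber threshold fixed at $\delta = 1$ (an illustrative choice; function names are ours):

```python
import numpy as np

# Regression losses from Table 4.2 as functions of the residual r = h(x_i) - y_i.
def squared(r):
    return r ** 2

def absolute(r):
    return np.abs(r)

def huber(r, delta=1.0):
    # Quadratic for |r| < delta, linear beyond; the two pieces join once-differentiably.
    return np.where(np.abs(r) < delta,
                    0.5 * r ** 2,
                    delta * (np.abs(r) - delta / 2.0))

def log_cosh(r):
    return np.log(np.cosh(r))

r = np.array([0.1, 5.0])
print(huber(r))  # small residual: behaves like squared loss; large: like absolute loss
```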

Regularizers

When we investigate regularizers, it helps to change the formulation of the optimization problem from an unconstrained to a constrained formulation, to obtain better geometric intuition:
$$\min_{\mathbf{w},b}\;\sum_{i=1}^{n}\ell(h_{\mathbf{w}}(\mathbf{x}_i),y_i) + \lambda r(\mathbf{w}) \;\Longleftrightarrow\; \min_{\mathbf{w},b}\;\sum_{i=1}^{n}\ell(h_{\mathbf{w}}(\mathbf{x}_i),y_i) \;\text{ subject to: } r(\mathbf{w}) \le B$$
For each $\lambda \ge 0$, there exists $B \ge 0$ such that the two formulations above are equivalent, and vice versa. In previous sections, we have already seen the $l_2$-regularizer in the context of SVMs, Ridge Regression, and Logistic Regression. Besides the $l_2$-regularizer, other types of useful regularizers and their properties are listed in Table 4.3.
$l_2$-Regularization: $r(\mathbf{w}) = \mathbf{w}^\top\mathbf{w} = \|\mathbf{w}\|_2^2$
  • 🙂 Strictly convex
  • 🙂 Differentiable
  • 😡 Uses weights on all features, i.e. relies on all features to some degree (ideally we would like to avoid this); such solutions are known as Dense Solutions

$l_1$-Regularization: $r(\mathbf{w}) = \|\mathbf{w}\|_1$
  • Convex (but not strictly)
  • 😡 Not differentiable at $0$ (the point which minimization is intended to bring us to)
  • Effect: sparse (i.e. not dense) solutions

$l_p$-Norm: $\|\mathbf{w}\|_p = \left(\sum_{i=1}^{d} |w_i|^p\right)^{1/p}$
  • 😡 Non-convex (for $0 < p < 1$)
  • 🙂 Very sparse solutions (if $0 < p < 1$)
  • 😡 Not differentiable; initialization dependent

Table 4.3: Most popular Regularizers

Figure 4.3: Plots of Common Regularizers

Famous Special Cases
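For intuition, the norms in Table 4.3 can be compared directly on a small weight vector; a tiny numpy sketch (`lp_norm` is our own helper, not a library function):

```python
import numpy as np

def lp_norm(w, p):
    # (sum_i |w_i|^p)^(1/p); a true norm only for p >= 1.
    return np.sum(np.abs(w) ** p) ** (1.0 / p)

w = np.array([3.0, -4.0, 0.0])
print(lp_norm(w, 2))    # l2: sqrt(9 + 16) = 5.0
print(lp_norm(w, 1))    # l1: 7.0
print(lp_norm(w, 0.5))  # p < 1: penalizes spreading weight across many features
```

Comparing $p < 1$ values for a spread-out vector versus a concentrated one shows why small $p$ induces very sparse solutions.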

This section includes several special cases that deal with risk minimization, such as Ordinary Least Squares, Ridge Regression, Lasso, and Logistic Regression. Table 4.4 provides information on their loss functions, regularizers, as well as solutions.

Ordinary Least Squares: $\min_{\mathbf{w}} \frac{1}{n}\sum_{i=1}^{n} (\mathbf{w}^\top\mathbf{x}_i - y_i)^2$
  • Squared loss
  • No regularization
  • Closed form solution: $\mathbf{w} = (\mathbf{X}\mathbf{X}^\top)^{-1}\mathbf{X}\mathbf{y}^\top$, where $\mathbf{X} = [\mathbf{x}_1, \ldots, \mathbf{x}_n]$ and $\mathbf{y} = [y_1, \ldots, y_n]$

Ridge Regression: $\min_{\mathbf{w},b} \frac{1}{n}\sum_{i=1}^{n} (\mathbf{w}^\top\mathbf{x}_i + b - y_i)^2 + \lambda\|\mathbf{w}\|_2^2$
  • Squared loss
  • $l_2$-regularization
  • Closed form solution: $\mathbf{w} = (\mathbf{X}\mathbf{X}^\top + \lambda\mathbf{I})^{-1}\mathbf{X}\mathbf{y}^\top$

Lasso: $\min_{\mathbf{w},b} \frac{1}{n}\sum_{i=1}^{n} (\mathbf{w}^\top\mathbf{x}_i + b - y_i)^2 + \lambda\|\mathbf{w}\|_1$
  • 🙂 Sparsity inducing (good for feature selection)
  • 🙂 Convex
  • 😡 Not strictly convex (no unique solution)
  • 😡 Not differentiable (at $0$)
  • Solve with (sub-)gradient descent or SVEN

Elastic Net: $\min_{\mathbf{w},b} \frac{1}{n}\sum_{i=1}^{n} (\mathbf{w}^\top\mathbf{x}_i + b - y_i)^2 + \alpha\|\mathbf{w}\|_1 + (1-\alpha)\|\mathbf{w}\|_2^2$, with $\alpha \in (0, 1)$
  • 🙂 Strictly convex (i.e. unique solution)
  • 🙂 Sparsity inducing (good for feature selection)
  • 🙂 Dual of squared-loss SVM, see SVEN
  • 😡 Non-differentiable

Logistic Regression: $\min_{\mathbf{w},b} \frac{1}{n}\sum_{i=1}^{n} \log\left(1 + e^{-y_i(\mathbf{w}^\top\mathbf{x}_i + b)}\right)$
  • Often $l_1$- or $l_2$-regularized
  • Solve with gradient descent
  • $\Pr(y \mid \mathbf{x}) = \frac{1}{1 + e^{-y(\mathbf{w}^\top\mathbf{x} + b)}}$

Linear Support Vector Machine: $\min_{\mathbf{w},b}\; C\sum_{i=1}^{n} \max\left[1 - y_i(\mathbf{w}^\top\mathbf{x}_i + b),\; 0\right] + \|\mathbf{w}\|_2^2$
  • Typically $l_2$-regularized (sometimes $l_1$)
  • Quadratic program
  • When kernelized, leads to sparse solutions
  • The kernelized version can be solved very efficiently with specialized algorithms (e.g. SMO)

Table 4.4: Special Cases
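The OLS and ridge closed forms can be verified numerically; a sketch assuming the convention that rows of $\mathbf{X}$ are training points, under which the solutions read $(\mathbf{X}^\top\mathbf{X})^{-1}\mathbf{X}^\top\mathbf{y}$ and $(\mathbf{X}^\top\mathbf{X} + \lambda\mathbf{I})^{-1}\mathbf{X}^\top\mathbf{y}$ (the toy data is ours):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 100, 3
X = rng.normal(size=(n, d))          # one training point per row
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true                       # noise-free labels, so OLS recovers w_true

# OLS: w = (X^T X)^{-1} X^T y  (solve is more stable than forming the inverse)
w_ols = np.linalg.solve(X.T @ X, X.T @ y)

# Ridge: w = (X^T X + lambda * I)^{-1} X^T y; the regularizer shrinks w.
lam = 0.1
w_ridge = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

print(np.allclose(w_ols, w_true))                       # True
print(np.linalg.norm(w_ridge) < np.linalg.norm(w_ols))  # True: ridge shrinks
```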

Some additional notes on the special cases:
1. Ridge Regression is very fast and, if the data isn't too high dimensional, can be solved in closed form (in just one line of code).
2. There is an interesting connection between Ordinary Least Squares and the first principal component of PCA (Principal Component Analysis). PCA also minimizes squared loss, but measures the perpendicular (orthogonal) distance between each point and the line, rather than the vertical distance used by OLS.

[1] In Bayesian machine learning, it is common to optimize $\lambda$, but for the purposes of this class, it is assumed to be fixed.