Remember the Bayes Optimal classifier: if we are provided with $P(X,Y)$, we can predict the most likely label for $x$, formally $\operatorname{argmax}_y P(y|x)$. It is therefore worth considering whether we can estimate $P(X,Y)$ directly from the training data. If this is possible (to a good approximation), we could then use the Bayes Optimal classifier in practice on our estimate of $P(X,Y)$.
In fact, many supervised learning algorithms can be viewed as estimating $P(X,Y)$. Generally, they fall into two categories: generative learning, where we estimate $P(X,Y) = P(X|Y)P(Y)$, and discriminative learning, where we estimate $P(Y|X)$ directly.
Suppose you find a coin and it's ancient and very valuable. Naturally, you ask yourself, "What is the probability that this coin comes up heads when I toss it?"
You toss it $n = 10$ times and obtain the following sequence of outcomes: $D = \{H, T, T, H, H, H, T, T, T, T\}$. Based on these samples, how would you estimate $P(H)$?
We observed $n_H = 4$ heads and $n_T = 6$ tails. So, intuitively,
$$P(H) \approx \frac{n_H}{n_H + n_T} = \frac{4}{10} = 0.4.$$
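This counting estimate is easy to reproduce in code. A minimal sketch (the variable names are just for illustration):

```python
# Estimate P(H) by counting heads in the observed sample.
D = ["H", "T", "T", "H", "H", "H", "T", "T", "T", "T"]

n_H = D.count("H")  # number of observed heads: 4
n_T = D.count("T")  # number of observed tails: 6

p_heads = n_H / (n_H + n_T)
print(p_heads)  # 0.4
```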
The estimator we just mentioned is the Maximum Likelihood Estimate (MLE). For MLE you typically proceed in two steps: First, you make an explicit modeling assumption about what type of distribution your data was sampled from. Second, you set the parameters of this distribution so that the data you observed is as likely as possible.
Let us return to the coin example. A natural assumption about a coin toss is that the distribution of the observed outcomes is a binomial distribution. The binomial distribution has two parameters $n$ and $\theta$, and it captures the distribution of $n$ independent Bernoulli (i.e. binary) random events that have a positive outcome with probability $\theta$. In our case $n$ is the number of coin tosses, and $\theta$ is the probability of the coin coming up heads (i.e. $P(H) = \theta$). Formally, the binomial distribution is defined as
$$P(D \mid \theta) = \binom{n_H + n_T}{n_H} \theta^{n_H} (1 - \theta)^{n_T},$$
where $\binom{n_H + n_T}{n_H}$ is the binomial coefficient, counting the number of possible sequences with $n_H$ heads and $n_T$ tails.
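To build intuition for how the likelihood behaves as a function of $\theta$, here is a short sketch that evaluates $P(D \mid \theta)$ on a grid of candidate values (it assumes numpy and scipy are available):

```python
# Evaluate the binomial likelihood P(D | theta) for a grid of candidate
# parameter values, using the observed counts n_H = 4, n_T = 6.
import numpy as np
from scipy.stats import binom

n_H, n_T = 4, 6
thetas = np.linspace(0.01, 0.99, 99)

# binom.pmf(k, n, p): probability of k successes in n Bernoulli(p) trials.
likelihoods = binom.pmf(n_H, n_H + n_T, thetas)

# The grid value with the highest likelihood is 0.4.
print(thetas[np.argmax(likelihoods)])
```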
MLE Principle: Find $\hat\theta$ to maximize the likelihood of the data, $P(D; \theta)$:
$$\hat\theta_{\mathrm{MLE}} = \operatorname{argmax}_\theta \, P(D; \theta)$$
Often we can solve this maximization problem with a simple two-step procedure: 1. plug in all the terms for the distribution and take the log of the function; 2. compute its derivative and equate it with zero. Taking the log of the likelihood (often referred to as the log-likelihood) does not change its maximum (as the log is a monotonic function and the likelihood is positive), but it turns all products into sums, which are much easier to deal with when you differentiate. Equating the derivative with zero is a standard way to find an extreme point. (To be precise, you should verify that it really is a maximum and not a minimum, by verifying that the second derivative is negative.)
Returning to our binomial distribution, we can now plug in the definition and compute the log-likelihood:
$$\begin{aligned}
\hat\theta_{\mathrm{MLE}} &= \operatorname{argmax}_\theta \, P(D; \theta) \\
&= \operatorname{argmax}_\theta \, \binom{n_H + n_T}{n_H} \theta^{n_H} (1 - \theta)^{n_T} \\
&= \operatorname{argmax}_\theta \, \log \binom{n_H + n_T}{n_H} + n_H \log(\theta) + n_T \log(1 - \theta) \\
&= \operatorname{argmax}_\theta \, n_H \log(\theta) + n_T \log(1 - \theta)
\end{aligned}$$
We can then solve for $\theta$ by taking the derivative and equating it with zero:
$$\frac{n_H}{\theta} - \frac{n_T}{1 - \theta} = 0 \;\Longrightarrow\; n_H (1 - \theta) = n_T \theta \;\Longrightarrow\; \hat\theta_{\mathrm{MLE}} = \frac{n_H}{n_H + n_T}$$
This recovers exactly the intuitive counting estimate from before.
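As a quick numerical sanity check, we can maximize the log-likelihood over a fine grid and compare against the closed-form solution (a sketch assuming numpy):

```python
# Verify the closed-form MLE by maximizing the log-likelihood
#   n_H * log(theta) + n_T * log(1 - theta)
# over a fine grid of theta values.
import numpy as np

n_H, n_T = 4, 6
thetas = np.linspace(1e-6, 1 - 1e-6, 100_001)

log_likelihood = n_H * np.log(thetas) + n_T * np.log(1 - thetas)

print(thetas[np.argmax(log_likelihood)])  # approximately 0.4
print(n_H / (n_H + n_T))                  # 0.4, the closed form
```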
Assume you have a hunch that $\theta$ is close to 0.5, but your sample size is small, so you don't trust your estimate.
Simple fix: Add $m$ imaginary throws that would result in $\theta'$ (e.g. $\theta' = 0.5$). Add $m$ heads and $m$ tails to your data.
$$\hat\theta = \frac{n_H + m}{n_H + n_T + 2m}$$
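A small sketch of this smoothed estimator (the function name `smoothed_estimate` is made up for illustration):

```python
# Smoothed estimate of P(H): add m imaginary heads and m imaginary tails,
# pulling the estimate toward theta' = 0.5.
def smoothed_estimate(n_H, n_T, m):
    return (n_H + m) / (n_H + n_T + 2 * m)

print(smoothed_estimate(4, 6, 0))       # 0.4, the plain MLE
print(smoothed_estimate(4, 6, 10))      # ~0.467, pulled toward 0.5
print(smoothed_estimate(400, 600, 10))  # ~0.402, the data dominates
```

Note how the imaginary throws matter a lot when the sample is small but wash out as the real sample grows.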
Model $\theta$ as a random variable, drawn from a distribution $P(\theta)$. Note that $\theta$ is not a random variable associated with an event in a sample space. In frequentist statistics, this is forbidden. In Bayesian statistics, this is allowed, and you can specify a prior belief $P(\theta)$ defining what values you believe $\theta$ is likely to take on.
Now, we can look at (recall Bayes' rule!)
$$P(\theta \mid D) = \frac{P(D \mid \theta)\, P(\theta)}{P(D)},$$
where $P(D \mid \theta)$ is the likelihood of the data given the parameter $\theta$, $P(\theta)$ is the prior distribution over $\theta$, and $P(\theta \mid D)$ is the posterior distribution over $\theta$ given the data $D$.
A natural choice for the prior $P(\theta)$ is the Beta distribution:
$$P(\theta) = \frac{\theta^{\alpha - 1} (1 - \theta)^{\beta - 1}}{B(\alpha, \beta)},$$
where $B(\alpha, \beta)$ is the normalization constant (the Beta function) and $\alpha, \beta$ are hyperparameters encoding the prior belief.
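To get a feel for the hyperparameters, the following sketch (assuming scipy is available) evaluates the Beta density for a few choices of $\alpha$ and $\beta$; with $\alpha = \beta$ the prior is symmetric around 0.5, and larger values concentrate it more tightly there:

```python
# Evaluate the Beta(alpha, beta) prior density at a few values of theta.
from scipy.stats import beta

for a, b in [(1, 1), (2, 2), (10, 10)]:
    densities = [round(beta.pdf(t, a, b), 3) for t in (0.1, 0.5, 0.9)]
    print(f"alpha={a}, beta={b}: {densities}")
```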
Why is the Beta distribution a good fit? First, it only assigns probability mass to values in $[0, 1]$, which is exactly the range of $\theta$. Second, it has the same functional form as the binomial likelihood, i.e. it is a conjugate prior: the posterior
$$P(\theta \mid D) \propto P(D \mid \theta)\, P(\theta) \propto \theta^{n_H + \alpha - 1} (1 - \theta)^{n_T + \beta - 1}$$
is again a Beta distribution, namely $\mathrm{Beta}(n_H + \alpha,\, n_T + \beta)$.
So far, we have a distribution over $\theta$. How can we get an estimate for $\theta$? One common choice is the maximum a posteriori (MAP) estimate, the mode of the posterior:
$$\hat\theta_{\mathrm{MAP}} = \operatorname{argmax}_\theta \, P(\theta \mid D) = \operatorname{argmax}_\theta \, \log P(D \mid \theta) + \log P(\theta) = \frac{n_H + \alpha - 1}{n_H + n_T + \alpha + \beta - 2}$$
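In code, the MAP estimate is again a one-liner (`map_estimate` is an illustrative name, not a library function):

```python
# MAP estimate under a Beta(alpha, beta) prior: the mode of the
# Beta(n_H + alpha, n_T + beta) posterior.
def map_estimate(n_H, n_T, alpha, beta):
    return (n_H + alpha - 1) / (n_H + n_T + alpha + beta - 2)

print(map_estimate(4, 6, 1, 1))    # 0.4: a flat Beta(1,1) prior recovers the MLE
print(map_estimate(4, 6, 10, 10))  # ~0.464: the prior pulls the estimate toward 0.5
```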
A few comments: As $n_H + n_T \to \infty$, the MAP estimate converges to the MLE, i.e. the influence of the prior vanishes as we observe more data. For small sample sizes, however, the prior dominates, and MAP can be a much better estimate than MLE if the prior belief is accurate (and a much worse one if it is not). Also note that for $\alpha = \beta = 1$ (a uniform prior) MAP reduces exactly to MLE.
Note that MAP is only one way to get an estimator. There is much more information in $P(\theta \mid D)$, and it seems like a shame to simply compute the mode and throw away all other information. A true Bayesian approach is to use the posterior predictive distribution directly to make predictions about the label $Y$ of a test sample with features $X$:
$$P(Y \mid D, X) = \int_\theta P(Y, \theta \mid D, X)\, d\theta = \int_\theta P(Y \mid \theta, D, X)\, P(\theta \mid D)\, d\theta$$
Unfortunately, in general this integral is intractable in closed form.
Our coin toss example is one exception where the integral is actually tractable. To make predictions using $\theta$, we can compute
$$\begin{aligned}
P(\text{heads} \mid D) &= \int_\theta P(\text{heads}, \theta \mid D)\, d\theta \\
&= \int_\theta P(\text{heads} \mid \theta, D)\, P(\theta \mid D)\, d\theta && \text{(chain rule: } P(A, B \mid C) = P(A \mid B, C)\, P(B \mid C)\text{)} \\
&= \int_\theta \theta\, P(\theta \mid D)\, d\theta \\
&= \mathbb{E}[\theta \mid D] = \frac{n_H + \alpha}{n_H + \alpha + n_T + \beta},
\end{aligned}$$
i.e. the posterior predictive probability of heads is simply the mean of the Beta posterior.
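A sketch comparing the three estimators on our coin data (`predictive_heads` is an illustrative name; it implements the posterior mean derived above):

```python
# Posterior predictive probability of heads for the Beta-binomial model:
# P(heads | D) = E[theta | D] = (n_H + alpha) / (n_H + alpha + n_T + beta).
def predictive_heads(n_H, n_T, alpha, beta):
    return (n_H + alpha) / (n_H + alpha + n_T + beta)

n_H, n_T, alpha, beta = 4, 6, 10, 10

print(n_H / (n_H + n_T))                                   # MLE: 0.4
print((n_H + alpha - 1) / (n_H + n_T + alpha + beta - 2))  # MAP: ~0.464
print(predictive_heads(n_H, n_T, alpha, beta))             # posterior mean: ~0.467
```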
As always, the differences are subtle. In MLE we maximize $\log P(D; \theta)$; in MAP we maximize $\log P(D \mid \theta) + \log P(\theta)$. So, essentially, in MAP we only add the term $\log P(\theta)$ to our optimization. This term is independent of the data and penalizes the parameters if $\theta$ deviates too much from what we believe is reasonable. We will later revisit this as a form of regularization, where $\log P(\theta)$ will be interpreted as a measure of classifier complexity.