Cornell
Department of Computer Science Colloquium
4:15pm, Thursday December 13th, 2001
B17 Upson Hall
Learning Theory for Large Models
David McAllester
AT&T Research Labs
http://www.research.att.com/~dmac/
Occam's razor provides a foundation for learning theory --- simpler
models should generalize better from limited data. In practice, however,
the best performing models are often large enough to memorize the training
data; in language modeling, for example, the best performing models memorize
the training data. The talk will present a non-Bayesian theoretical
framework for understanding data-memorizing models.
This theoretical framework includes a new general approach to proving
concentration inequalities with applications to the particular case of
leave-one-out performance estimators.
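To make the last point concrete, the following is a minimal sketch (not from
the talk) of what a leave-one-out performance estimator computes: each training
example is held out in turn, the model is fit on the remaining data, and the
held-out example is used as a test point. The 1-nearest-neighbor classifier and
the toy 1-D dataset below are hypothetical choices for illustration only.

```python
def nn_predict(train, x):
    """Predict the label of x as the label of its nearest training point."""
    _, label = min(train, key=lambda point: abs(point[0] - x))
    return label

def loo_error(data):
    """Leave-one-out error: fraction of points misclassified when each
    point is held out and the model is fit on the rest."""
    errors = 0
    for i, (x, y) in enumerate(data):
        rest = data[:i] + data[i + 1:]
        if nn_predict(rest, x) != y:
            errors += 1
    return errors / len(data)

# Toy data: (feature, label) pairs with two well-separated classes.
data = [(0.0, 0), (0.2, 0), (0.4, 0), (1.0, 1), (1.2, 1), (1.4, 1)]
print(loo_error(data))  # 0.0 on this separable toy set
```

The estimator's average over held-out points is what the concentration
inequalities mentioned above would bound: how far this empirical estimate can
deviate from the true generalization error.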