Mathematics & Machine Learning Seminar
Recent empirical successes of deep learning have exposed significant gaps in our fundamental understanding of learning and optimization mechanisms. Modern best practices for model selection directly contradict the methodologies suggested by classical analyses. Similarly, the efficiency of the SGD-based local methods used to train modern models appears at odds with standard intuitions from optimization theory.
First, I will present empirical and mathematical evidence that necessitates revisiting classical statistical notions such as overfitting. I will then discuss the emerging understanding of generalization and, in particular, the "double descent" risk curve, which extends the classical U-shaped generalization curve beyond the interpolation threshold.
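As a rough illustration of the phenomenon (a minimal sketch, not the speaker's experiments), the following Python snippet fits minimum-norm least-squares models on random Fourier features of increasing width to a noisy one-dimensional regression task. The test error typically rises as the number of features approaches the number of training points (the interpolation threshold) and then descends again as the model becomes heavily over-parameterized. All function names, parameter values, and the choice of random features are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression task: noisy samples of a smooth target function.
n_train, n_test = 40, 500
def target(x):
    return np.sin(2 * np.pi * x)

x_train = rng.uniform(-1, 1, n_train)
y_train = target(x_train) + 0.1 * rng.standard_normal(n_train)
x_test = np.linspace(-1, 1, n_test)
y_test = target(x_test)

def random_fourier_features(x, n_features, scale=5.0, seed=1):
    # Map scalar inputs to n_features random cosine features;
    # the fixed seed keeps train and test features consistent.
    r = np.random.default_rng(seed)
    w = scale * r.standard_normal(n_features)
    b = r.uniform(0, 2 * np.pi, n_features)
    return np.cos(np.outer(x, w) + b)

# Sweep the model size through the interpolation threshold (n_features = n_train).
for n_features in [5, 10, 20, 40, 80, 160, 320, 640]:
    Phi_train = random_fourier_features(x_train, n_features)
    Phi_test = random_fourier_features(x_test, n_features)
    # Minimum-norm least-squares solution via the pseudo-inverse; past the
    # interpolation threshold this is the minimum-norm interpolant.
    coef = np.linalg.pinv(Phi_train) @ y_train
    train_mse = np.mean((Phi_train @ coef - y_train) ** 2)
    test_mse = np.mean((Phi_test @ coef - y_test) ** 2)
    print(f"{n_features:4d} features  train MSE {train_mse:.4f}  test MSE {test_mse:.4f}")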
Second, I will discuss why the loss landscapes of over-parameterized neural networks are generically not convex, even locally. Instead, they satisfy the Polyak-Lojasiewicz (PL) condition across most of the parameter space, which provides a powerful framework for optimization in general over-parameterized models and allows SGD-type methods to converge to a global minimum.
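For reference, in one common formulation (the notation here, with L for the loss, w for the parameters, and a constant μ > 0, is an assumption rather than the speaker's), the PL condition reads

\[ \tfrac{1}{2}\,\|\nabla L(w)\|^{2} \;\ge\; \mu \bigl( L(w) - \inf_{w'} L(w') \bigr) \quad \text{for all } w. \]

Together with smoothness of L, this inequality is known to yield linear convergence of gradient-based methods to a global minimum even when L is not convex.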
Finally, I will briefly comment on feature learning, which appears to be a key ingredient in the success of modern deep learning.