Computing + Mathematical Sciences Seminar
Abstract: Optimization has always played a central role in machine learning: advances in optimization and mathematical programming have greatly influenced machine learning models and algorithms. However, the connection between optimization and learning is much deeper. In this talk, I will argue that it is beneficial to view learning problems directly as corresponding optimization problems. Taking this stand, I will demonstrate how the mirror descent algorithm (a generalization of the gradient descent algorithm) is both universal and near-optimal for convex learning problems. In obtaining this result, we shall use the online convex optimization paradigm as a key intermediate tool. We will see how the sample complexity of learning problems is inherently tied to the efficiency of the corresponding convex optimization problems.
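To make the mirror descent connection concrete, the following is a minimal illustrative sketch (not from the talk) of online mirror descent with the entropic mirror map on the probability simplex, i.e. the exponentiated-gradient update; the function name, step size eta, and toy linear losses are assumptions for illustration. With the squared-Euclidean mirror map the same scheme reduces to ordinary online gradient descent, which is the sense in which mirror descent generalizes it.

    import numpy as np

    def online_mirror_descent(gradients, eta, d):
        """Online mirror descent on the probability simplex with the
        entropic mirror map (exponentiated-gradient update)."""
        x = np.full(d, 1.0 / d)          # start at the uniform distribution
        iterates = [x.copy()]
        for g in gradients:
            x = x * np.exp(-eta * g)     # gradient step in the dual (mirror) space
            x /= x.sum()                 # project back onto the simplex
            iterates.append(x.copy())
        return iterates

    # Toy usage: linear losses f_t(x) = <g_t, x> with random gradients.
    rng = np.random.default_rng(0)
    d, T, eta = 5, 100, 0.1
    gs = [rng.normal(size=d) for _ in range(T)]
    iters = online_mirror_descent(gs, eta, d)
    avg_iterate = np.mean(iters[:-1], axis=0)  # averaging is standard in online-to-batch conversions
    print(avg_iterate)

The averaged iterate reflects the online-to-batch route the abstract alludes to: regret bounds for the online updates translate into sample complexity guarantees for the corresponding learning problem.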