CMX Student/Postdoc Seminar
The first part of this talk is concerned with competitive optimization, where multiple agents try to minimize their respective objectives, each of which depends on all agents' actions. We propose competitive gradient descent (CGD) and show that it is a natural generalization of gradient descent to competitive optimization, with good practical performance. We then show how ideas from information geometry can be used to extend CGD to competitive mirror descent (CMD), which can incorporate a wide range of convex constraints.
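As a sketch of the two-player case, with players minimizing f(x, y) and g(x, y) over x and y respectively and step size \eta, an update of this kind can be written as

\Delta x = -\eta \,\bigl(\mathrm{Id} - \eta^2 D_{xy}^2 f \, D_{yx}^2 g\bigr)^{-1} \bigl(\nabla_x f - \eta \, D_{xy}^2 f \, \nabla_y g\bigr),
\Delta y = -\eta \,\bigl(\mathrm{Id} - \eta^2 D_{yx}^2 g \, D_{xy}^2 f\bigr)^{-1} \bigl(\nabla_y g - \eta \, D_{yx}^2 g \, \nabla_x f\bigr),

i.e. the Nash equilibrium of a regularized bilinear approximation of the local game; when the mixed second derivatives vanish, it reduces to simultaneous gradient descent on the two objectives.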
The second part of the talk is concerned with the approximate factorization of dense kernel matrices arising as Green's matrices of elliptic PDEs or as covariance matrices of smooth Gaussian processes. For a given sparsity pattern, we show that the optimal (in KL divergence) sparse inverse-Cholesky factor of the kernel matrix can be computed in closed form and in an embarrassingly parallel fashion. By exploiting the conditional independence properties of finitely smooth Gaussian processes, we show that these factors can be computed in near-linear complexity, improving on the state of the art for fast solvers of Green's matrices of elliptic PDEs.
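A minimal sketch of this per-column computation, assuming a kernel matrix Theta, a prescribed column sparsity pattern given as index sets s_i whose first entry is the column index itself, and NumPy (all names here are illustrative):

# Sketch: closed-form columns of a sparse inverse-Cholesky factor L,
# with L @ L.T approximating inv(Theta) in KL divergence.
import numpy as np

def kl_optimal_columns(Theta, sparsity):
    """Theta: (n, n) SPD kernel matrix; sparsity: list of index arrays s_i, with s_i[0] == i."""
    n = Theta.shape[0]
    L = np.zeros((n, n))
    # Each column depends only on the small submatrix Theta[s_i, s_i],
    # so this loop can be executed fully in parallel.
    for i, s_i in enumerate(sparsity):
        K = Theta[np.ix_(s_i, s_i)]
        e1 = np.zeros(len(s_i))
        e1[0] = 1.0
        v = np.linalg.solve(K, e1)        # K^{-1} e_1
        L[s_i, i] = v / np.sqrt(v[0])     # column formula: K^{-1} e_1 / sqrt(e_1^T K^{-1} e_1)
    return L

Since each column touches only the submatrix Theta[s_i, s_i], the columns can be computed independently of one another, and the overall cost is governed by the sizes of the index sets s_i.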