Caltech/UCLA/USC Joint Analysis Seminar
UCLA, MS6221
Given a desired target distribution and an initial guess for its samples, what is the best way to evolve the locations of the samples so that they accurately represent the desired distribution? A classical solution to this problem is to evolve the samples according to Langevin dynamics, a stochastic particle method for the Fokker-Planck equation. In today's talk, I will contrast this with a nonlocal, deterministic particle method inspired by the porous medium equation. Using the Wasserstein gradient flow structure of the equations and Serfaty's framework for Gamma-convergence of gradient flows, I will show that, as the number of samples increases and the interaction scale goes to zero, the interacting particle system indeed converges to a solution of the porous medium equation. I will close by discussing practical implications of this result for both sampling and the training dynamics of two-layer neural networks. This is based on joint work with Karthik Elamvazhuthi, Matt Haberland, and Olga Turanova.
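For orientation, a sketch of the standard equations behind the abstract follows; the notation ($V$, $\rho$, $m$, $\varphi_\varepsilon$) and the particular form of the deterministic particle scheme are mine, not necessarily the speaker's, and the blob-style update shown is one plausible instance of the nonlocal method alluded to above, written out here only in the simplest case $m = 2$.

```latex
% A hedged background sketch, not the speaker's notation.
% Langevin dynamics: each sample X_t follows the SDE
\[
  dX_t = -\nabla V(X_t)\,dt + \sqrt{2}\,dW_t,
\]
% whose law \rho(t) solves the Fokker--Planck equation,
\[
  \partial_t \rho = \nabla \cdot (\rho \nabla V) + \Delta \rho,
\]
% the 2-Wasserstein gradient flow of the relative entropy
% \int \rho \log\bigl(\rho / e^{-V}\bigr)\,dx.
% The porous medium equation,
\[
  \partial_t \rho = \Delta(\rho^m), \qquad m > 1,
\]
% is the 2-Wasserstein gradient flow of
% E(\rho) = \tfrac{1}{m-1} \int \rho^m \,dx.
% For m = 2, regularizing E with a symmetric mollifier \varphi_\varepsilon,
% E_\varepsilon(\mu) = \int (\varphi_\varepsilon * \mu)\,d\mu, and taking the
% gradient flow over particles \mu = \sum_j m_j \delta_{X_j} gives the
% deterministic, nonlocal ODE system
\[
  \dot X_i = -2 \sum_j m_j \,\nabla \varphi_\varepsilon(X_i - X_j).
\]
```

Note that no noise appears in the last equation: diffusion emerges from the interaction in the joint limit the abstract describes, where the number of particles grows and the interaction scale $\varepsilon$ tends to zero.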