EE Systems Seminar
Abstract:
Non-local means (NLM) is a well-known and influential image denoising algorithm. Since its publication in 2005, NLM has been widely cited and used as a baseline against which many more advanced denoising algorithms are compared. However, its high computational complexity remains an open issue in the image processing community.
In this talk, I will present a scalable NLM algorithm called Monte-Carlo Non-Local Means (MCNLM). Unlike classical NLM, which computes the distances between every pair of pixel patches in the image, MCNLM computes distances only for a randomly selected subset of patch pairs. Two major analytical questions about MCNLM will be discussed. First, using large deviation theory, I will provide theoretical performance guarantees for MCNLM under any random sampling strategy. Second, I will discuss the optimal sampling pattern, which maximizes the rate of convergence.
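To make the sampling idea concrete, the following is a minimal sketch of estimating a single denoised pixel. Classical NLM would weight every patch in the image; here only a random subset of candidate patches is used. This is an illustrative toy, not the paper's implementation: the function name, parameters (`patch_radius`, `n_samples`, filter parameter `h`), and the uniform sampling pattern are all assumptions for exposition.

```python
import numpy as np

def mcnlm_denoise_pixel(noisy, i, j, patch_radius=3,
                        n_samples=200, h=0.1, rng=None):
    """Monte-Carlo NLM estimate of pixel (i, j): average over a random
    subset of patches instead of all patches in the image.
    (Illustrative sketch; parameter names are assumptions.)"""
    rng = np.random.default_rng() if rng is None else rng
    H, W = noisy.shape
    r = patch_radius
    pad = np.pad(noisy, r, mode='reflect')
    ref = pad[i:i + 2*r + 1, j:j + 2*r + 1]   # patch centered at (i, j)

    # Uniform random sampling of candidate pixel locations
    ys = rng.integers(0, H, n_samples)
    xs = rng.integers(0, W, n_samples)

    num, den = 0.0, 0.0
    for y, x in zip(ys, xs):
        cand = pad[y:y + 2*r + 1, x:x + 2*r + 1]
        d2 = np.mean((ref - cand) ** 2)       # squared patch distance
        w = np.exp(-d2 / (h * h))             # NLM exponential weight
        num += w * noisy[y, x]
        den += w
    return num / den                          # weighted average of samples
```

Because each weight is positive, the estimate is a convex combination of sampled pixel values, so it stays within the image's intensity range; the full algorithm applies this idea to every pixel with a carefully chosen (non-uniform) sampling distribution.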
MCNLM adds only marginal memory and programming cost to the original NLM algorithm, yet it scales to very large problems. In our experiments, apart from denoising an image using the noisy image itself, we also applied MCNLM to denoise image patches using external databases. On a database containing 10 billion patches, we demonstrate a speedup of three orders of magnitude.
(Joint work with Todd Zickler and Yue Lu)