In recent years, denoising diffusion models have become a central area of research owing to their prevalence in the rapidly expanding field of generative AI. While recent statistical advances have provided theoretical insights into the generative capabilities of idealised denoising diffusion models for high-dimensional target data, practical implementations often incorporate thresholding procedures into the generating process to address problems arising from the unbounded state space of such models. This mismatch between the theoretical design and the practical implementation of diffusion models has been addressed empirically by instead using a reflected diffusion process, which remains confined to a bounded domain, as the driver of noise. In this talk, we investigate statistical guarantees for these denoising reflected diffusion models. In particular, we establish minimax optimal rates of convergence in total variation, up to a polylogarithmic factor, under Sobolev smoothness assumptions. Our main contributions are a rigorous statistical analysis of this novel class of denoising reflected diffusion models and a refined methodology for score approximation in both time and space, achieved through spectral decomposition and rigorous neural network analysis.
This talk is based on joint work with Asbjørn Holk and Lukas Trottner.
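As a rough illustration of the reflected dynamics mentioned above, the following sketch simulates a generic reflected diffusion on the unit interval: an Euler–Maruyama discretisation of an Ornstein–Uhlenbeck-type SDE, with each step folded back into [0, 1] by mirror reflection at the boundaries so the state space stays bounded. This is only an assumed toy construction for intuition (the function names `reflect` and `reflected_ou`, the drift, and all parameter choices are illustrative), not the model or methodology analysed in the talk.

```python
import numpy as np

def reflect(x, lo=0.0, hi=1.0):
    # Fold a point back into [lo, hi] by mirror reflection at the boundaries.
    width = hi - lo
    y = (np.asarray(x, dtype=float) - lo) % (2 * width)
    y = np.where(y > width, 2 * width - y, y)
    return lo + y

def reflected_ou(x0, n_steps=1000, dt=1e-3, theta=1.0, sigma=1.0, seed=0):
    # Euler-Maruyama scheme for a mean-reverting SDE, with mirror reflection
    # applied after every step so the trajectory never leaves [0, 1].
    rng = np.random.default_rng(seed)
    x = np.atleast_1d(np.asarray(x0, dtype=float)).copy()
    path = [x.copy()]
    for _ in range(n_steps):
        drift = -theta * (x - 0.5)  # illustrative drift pulling toward 0.5
        noise = sigma * np.sqrt(dt) * rng.standard_normal(x.shape)
        x = reflect(x + drift * dt + noise)
        path.append(x.copy())
    return np.stack(path)  # shape: (n_steps + 1, dim)
```

Unlike a thresholding step bolted onto an unbounded process, the reflection here is part of the dynamics itself, which is the conceptual point behind using reflected diffusions as the noise driver.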
Personal website of Claudia Strauch