Analysis Seminar

Yunan Yang, Cornell University
Adaptive State-Dependent Diffusion for Global Optimization With and Without Gradient

Monday, February 5, 2024 - 2:30pm
Malott 406

We develop and analyze a stochastic optimization strategy with and without derivative/gradient information. A key feature is the state-dependent adaptive variance, which is a fundamental difference between our approach and simulated annealing (SA). We prove global convergence in probability with an algebraic rate in both scenarios, much faster than the classic logarithmic convergence, and give quantitative results in numerical examples.
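To make the state-dependent adaptive variance concrete, here is a minimal sketch of overdamped Langevin dynamics whose noise amplitude shrinks with the objective value. The specific coupling sqrt(2 f(x)) (assuming min f = 0), the test objective, and the step sizes are all illustrative assumptions, not necessarily the scheme analyzed in the talk.

```python
import numpy as np

def f(x):
    # Test objective with global minimum f(0) = 0.
    return 0.5 * x**2

def grad_f(x):
    return x

def adaptive_langevin(x0, dt=0.01, n_steps=2000, seed=0):
    """Euler--Maruyama for dX = -f'(X) dt + sqrt(2 f(X)) dW.

    The diffusion coefficient sqrt(2 f(X)) is state-dependent and
    vanishes at the global minimum, so the noise switches itself off
    as the iterate approaches the minimizer (illustrative choice,
    unlike simulated annealing's externally scheduled temperature).
    """
    rng = np.random.default_rng(seed)
    x = x0
    for _ in range(n_steps):
        noise = np.sqrt(2.0 * f(x) * dt) * rng.standard_normal()
        x = x - grad_f(x) * dt + noise
    return x

x_final = adaptive_langevin(x0=3.0)
```

Because the variance is tied to the current state rather than to a global cooling schedule, the noise decays automatically near the minimizer instead of on a prescribed logarithmic timetable.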

A striking fact is the derivative-free result: convergence can be achieved without explicit gradient information and even without comparing different objective function values, as established methods such as the simplex method and simulated annealing do. The analysis for the derivative-free case involves the study of a diffusion equation with a strongly degenerate diffusion coefficient, whose well-posedness is in question. We regularize the PDE and obtain a finite-time estimate, which shows the concentration of the solution at the global minimum of the target optimization problem.
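In the same spirit, the derivative-free case can be sketched as a driftless diffusion whose coefficient degenerates at the minimum, so neither gradients nor function-value comparisons are ever used. The coupling sqrt(2 f(x)) below is again an assumed illustrative form; its degeneracy at the minimizer mirrors the strongly degenerate diffusion coefficient discussed above.

```python
import numpy as np

def f(x):
    # Test objective with global minimum f(0) = 0.
    return 0.5 * x**2

def derivative_free_diffusion(x0, dt=0.01, n_steps=10000, seed=1):
    """Euler--Maruyama for the driftless SDE dX = sqrt(2 f(X)) dW.

    No gradient is evaluated and no two function values are compared;
    the iterate concentrates at the minimizer only because the
    diffusion coefficient vanishes there (illustrative choice).
    """
    rng = np.random.default_rng(seed)
    x = x0
    for _ in range(n_steps):
        x = x + np.sqrt(2.0 * f(x) * dt) * rng.standard_normal()
    return x

x_final = derivative_free_diffusion(x0=3.0)
```

The sample path wanders freely where f is large and is progressively trapped where f is small, which is the mechanism behind the concentration estimate mentioned in the abstract.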