
A Theoretical and Empirical Comparison of Gradient Approximations in Derivative-Free Optimization

4 March 2021, 17:00 - 18:00

Luiss Research Seminars Room, Luiss Campus

Speaker: Katya Scheinberg, Cornell University

Abstract

In this paper, we analyze several methods for approximating the gradient of a noisy function using only function values. These methods include finite differences, linear interpolation, Gaussian smoothing, and smoothing on a unit sphere. The methods differ in the number of function samples used, the choice of sample points, and the way in which the gradient approximation is derived. For each method, we derive bounds on the number of samples and on the sampling radius that guarantee favorable convergence properties for a line-search or fixed-step-size descent method. To this end, we use the results in [Berahas et al., 2019] and show how each method can satisfy the sufficient conditions established there. We also analyze convergence when these conditions hold only with sufficiently large probability at each iteration, as is the case for Gaussian smoothing and smoothing on a unit sphere. Finally, we present numerical results evaluating the quality of the gradient approximations, as well as their performance in conjunction with a line-search derivative-free optimization algorithm.
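As an illustrative sketch only (not code from the paper, and not the authors' implementation), the snippet below shows two of the gradient estimators mentioned in the abstract: a forward finite-difference estimator and a Monte Carlo estimator based on Gaussian smoothing. The function names, the sampling radius `sigma`, and the test function are assumptions made for the example.

```python
import numpy as np

def finite_difference_gradient(f, x, sigma):
    """Forward finite differences: one extra function evaluation per coordinate.

    f     : noisy scalar-valued function
    x     : point at which to estimate the gradient
    sigma : sampling radius (finite-difference step)
    """
    n = x.size
    fx = f(x)
    g = np.zeros(n)
    for i in range(n):
        e = np.zeros(n)
        e[i] = 1.0
        g[i] = (f(x + sigma * e) - fx) / sigma
    return g

def gaussian_smoothing_gradient(f, x, sigma, num_samples, rng=None):
    """Monte Carlo estimate of the gradient of the Gaussian-smoothed function.

    Draws num_samples random directions u ~ N(0, I) and averages the
    one-sided directional difference quotients (f(x + sigma*u) - f(x)) / sigma
    scaled by the direction u.
    """
    rng = np.random.default_rng() if rng is None else rng
    fx = f(x)
    g = np.zeros(x.size)
    for _ in range(num_samples):
        u = rng.standard_normal(x.size)
        g += (f(x + sigma * u) - fx) / sigma * u
    return g / num_samples

# Example usage on a noisy quadratic; the true gradient at x is 2*x.
rng = np.random.default_rng(0)
noisy_f = lambda x: float(x @ x + 1e-3 * rng.standard_normal())
x0 = np.array([1.0, -2.0, 0.5])
print(finite_difference_gradient(noisy_f, x0, sigma=1e-2))
print(gaussian_smoothing_gradient(noisy_f, x0, sigma=1e-2, num_samples=200))
```

The contrast between the two sketches mirrors a trade-off discussed in the abstract: finite differences use a fixed, deterministic set of sample points, while Gaussian smoothing uses random directions, so its accuracy guarantees hold only with some probability at each iteration.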