Dr. Clayton Webster
Oden Institute for Computational Engineering and Sciences,
The University of Texas at Austin, Austin, TX
Lirio AI Research & Behavioral Reinforcement and Learning Lab (BReLL),
Lirio, LLC, Knoxville, TN

"Smoothing-based gradient descent for high-dimensional nonconvex optimization"

Tuesday, Nov 7, 2023

Abstract:

This talk focuses on a class of smoothing-based gradient descent methods for high-dimensional non-convex optimization problems. In particular, Gaussian smoothing is employed to define a nonlocal gradient that reduces high-frequency noise, small variations, and rapid fluctuations in the computation of the descent directions, while preserving the structure and features of the loss landscape. The amount of smoothing is controlled by the standard deviation of the Gaussian distribution: larger values produce broader and more pronounced smoothing effects, while smaller values preserve more details of the function. The resulting Gaussian smoothing gradient descent (GSmoothGD) approach helps gradient descent navigate away from, or avoid altogether, local minima, thereby substantially enhancing its performance on non-convex optimization problems. This work also provides rigorous theoretical error estimates on the rate of convergence of the GSmoothGD iterates, which exemplify the impact of the underlying function's convexity, smoothness, and input dimension, as well as of the smoothing radius. We also present several strategies to combat the curse of dimensionality and to update the smoothing parameter, aimed at diminishing the impact of local minima and thereby rendering the attainment of global minima more achievable. Computational evidence complements the theory and shows the effectiveness of the GSmoothGD method compared to other smoothing-based algorithms, momentum-based approaches, backpropagation-based techniques, and classical gradient-based algorithms from numerical optimization. Finally, applications to various personalization tasks using the MNIST, CIFAR10, and Spotify datasets demonstrate the advantage of GSmoothGD when used to solve reinforcement learning problems.
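As a rough illustration of the idea described in the abstract (not the speaker's implementation), the Gaussian-smoothed surrogate f_sigma(x) = E[f(x + sigma*u)] with u ~ N(0, I) admits a well-known zeroth-order Monte Carlo gradient estimator, which can then drive an ordinary descent loop with a shrinking smoothing parameter. Function names, sample counts, and the decay schedule below are illustrative assumptions:

```python
import numpy as np

def gsmooth_grad(f, x, sigma=1.0, n_samples=256, rng=None):
    """Monte Carlo estimate of the gradient of the Gaussian-smoothed
    surrogate f_sigma(x) = E_{u~N(0,I)}[f(x + sigma*u)], using the
    identity  grad f_sigma(x) = E[u * (f(x + sigma*u) - f(x))] / sigma."""
    rng = np.random.default_rng() if rng is None else rng
    u = rng.standard_normal((n_samples, x.shape[0]))
    fx = f(x)
    diffs = np.array([f(x + sigma * ui) for ui in u]) - fx
    return (diffs[:, None] * u).mean(axis=0) / sigma

def gsmooth_gd(f, x0, sigma=2.0, lr=0.1, steps=200, decay=0.99, rng=None):
    """Descent on the smoothed surrogate; shrinking sigma gradually
    restores the fine details of the original landscape."""
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x -= lr * gsmooth_grad(f, x, sigma, rng=rng)
        sigma *= decay
    return x
```

A large initial sigma flattens spurious local minima so the iterates can escape them; the decay schedule then sharpens the surrogate back toward the original objective, which is one simple stand-in for the smoothing-parameter updating strategies the talk discusses.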
