
Yura Malitsky: Adaptive Gradient Descent without Descent

Abstract: In this talk I will present some recent results for the most classical optimization method — gradient descent. We will show that a simple zero-cost rule is sufficient to completely automate gradient descent. The method adapts to the local geometry, with convergence guarantees depending only on the smoothness in a neighborhood of a solution. The presentation is based on joint work with K. Mishchenko; see https://arxiv.org/abs/1910.09529.
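For a concrete picture of the kind of "zero-cost rule" referred to above, here is a minimal sketch of the adaptive stepsize from the cited arXiv preprint: the stepsize is updated from the observed change in iterates and gradients, with no line search and no knowledge of the Lipschitz constant. The function name `adgd`, the initial stepsize, and the quadratic test problem are choices made here for illustration, not part of the announcement.

```python
import numpy as np

def adgd(grad, x0, iters=1000):
    """Adaptive gradient descent (sketch of the rule in arXiv:1910.09529).

    The stepsize lam_k is the minimum of a controlled growth term
    sqrt(1 + theta_{k-1}) * lam_{k-1} and a local inverse-curvature
    estimate ||x_k - x_{k-1}|| / (2 ||grad_k - grad_{k-1}||).
    """
    x_prev = np.asarray(x0, dtype=float)
    g_prev = grad(x_prev)
    lam_prev = 1e-10          # tiny initial stepsize (illustrative choice)
    theta = np.inf            # theta_0 = +inf: first step uses only the curvature term
    x = x_prev - lam_prev * g_prev
    for _ in range(iters):
        g = grad(x)
        dx = np.linalg.norm(x - x_prev)
        dg = np.linalg.norm(g - g_prev)
        if dg == 0.0:         # gradient unchanged: (near-)stationary, stop
            break
        lam = min(np.sqrt(1.0 + theta) * lam_prev, dx / (2.0 * dg))
        theta = lam / lam_prev
        x_prev, g_prev, lam_prev = x, g, lam
        x = x - lam * g       # plain gradient step with the adaptive stepsize
    return x

# Usage: minimize f(x) = 0.5 * x^T A x for a mildly ill-conditioned A
A = np.diag([1.0, 10.0])
x_star = adgd(lambda v: A @ v, np.array([5.0, 5.0]))
```

Note that nothing in the loop enforces monotone descent of the objective, which is exactly the point of the talk's title: the iterates may overshoot locally, yet the stepsize rule keeps the method convergent.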

Time: Fri 2021-10-15, 11:00 - 12:00

Location: Seminar room 3721

Language: English

Lecturer: Yura Malitsky, Linköping University

The seminar will also be available via Zoom meeting:
https://kth-se.zoom.us/j/63658381373

Content owner: Per Enqvist
Belongs to: Department of Mathematics
Last changed: 2021-10-06