Patrick Rebeschini: Implicit Regularization for Optimal Sparse Recovery

Time: Mon 2019-10-07 15.15 - 16.15

Location: F11, KTH

Participating: Patrick Rebeschini, Oxford

Abstract

Ridge regression is a fundamental paradigm in machine learning and statistics, and it has long been known to be closely connected to the implicit regularization properties of gradient descent methods, cf. early stopping. Over the past decade, this connection has sparked research in a variety of directions aimed at developing computationally efficient estimators, including acceleration, mini-batching, averaging, sketching, sub-sampling, preconditioning, and decentralization. Sparse recovery is another cornerstone of modern statistics and learning frameworks. Yet, perhaps surprisingly, here the connection to implicit regularization is not as well developed. Most results in the literature only involve limit statements (holding at convergence, for infinitesimal step sizes), apply to regimes with no (or limited) noise, and do not focus on computational efficiency. In this talk, we address the following question: Can we establish an implicit regularization theory for gradient descent that yields optimal sparse recovery in noisy settings, achieving minimax rates at the same computational cost as reading the data? We will highlight the key ideas behind some first results in this direction, along with a few related research directions.
(Based on joint work with Tomas Vaškevičius and Varun Kanade)
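As a rough illustration of the implicit-regularization-via-early-stopping idea mentioned in the abstract (not the specific estimator or analysis from the talk), the sketch below runs plain gradient descent on a noisy, synthetic sparse regression instance and tracks the estimation error over iterations; the problem sizes, noise level, step size, and the oracle stopping rule are all assumptions made for the example.

```python
# Minimal sketch: gradient descent on least squares, stopped early,
# on a synthetic sparse-recovery instance. Illustrative only; not the
# algorithm from the talk. All sizes and constants below are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
n, d, s = 100, 200, 5                 # samples, dimension, sparsity (assumed)
X = rng.standard_normal((n, d)) / np.sqrt(n)
w_star = np.zeros(d)
w_star[:s] = 1.0                      # sparse ground-truth coefficients
y = X @ w_star + 0.1 * rng.standard_normal(n)   # noisy observations

w = np.zeros(d)
step = 0.5                            # constant step size (assumption)
errors = []
for t in range(2000):
    grad = X.T @ (X @ w - y)          # gradient of 0.5 * ||Xw - y||^2
    w -= step * grad
    errors.append(np.linalg.norm(w - w_star))

t_best = int(np.argmin(errors))       # oracle early-stopping time, for illustration
print(f"best estimation error {errors[t_best]:.3f} at iteration {t_best}; "
      f"error at the last iteration {errors[-1]:.3f}")
```

In typical runs the estimation error first decreases and then degrades as the iterates begin to fit the noise, so the number of iterations acts as a regularization parameter, which is the early-stopping phenomenon the abstract refers to.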