
Paul Beame: Time-space tradeoffs for learning finite functions from random evaluations, with applications to polynomials

Time: Mon 2019-03-25 12:00

Location: Room 4523, Lindstedtsvägen 5 - floor 5, KTH

Participating: Paul Beame


PRACTICALITIES
Lunch is served at 12:00 noon (register at www.csc.kth.se/tcs/seminars/lunch/2019-03-25 at the latest the Saturday before, but preferably earlier). The presentation starts at 12:10 pm and ends at 1 pm. Those who wish to continue reconvene after a short break for 1½-2 hours of more detailed technical discussions.

ABSTRACT
We develop an extension of recent analytic methods for obtaining time-space tradeoff lower bounds for problems of learning from uniformly random labelled examples. With our methods we can obtain bounds for learning concept classes of finite functions from random evaluations even when the sample space of random inputs can be significantly smaller than the concept class of functions and the function values can be from an arbitrary finite set.

To obtain our results, we reduce the time-space complexity of learning from random evaluations to the question of how much the corresponding evaluation matrix amplifies the 2-norms of almost uniform probability distributions. To analyze the latter, we formulate it as a semidefinite program and analyze its dual. (Similar results to ours using related but somewhat different techniques were independently shown by Garg, Raz, and Tal.)
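The amplification quantity can be made concrete on a toy example. The Python sketch below (an illustration of the kind of quantity involved, not the authors' semidefinite-programming analysis) builds the ±1 evaluation matrix for the class of parity functions on 4 bits and compares how much it stretches the 2-norm of a uniform prior over the class versus a prior concentrated on a single function; the choice of parities over \(F_2\) and the ±1 recoding of values are illustrative assumptions.

import itertools
import numpy as np

n = 4
points = list(itertools.product([0, 1], repeat=n))  # sample space: all n-bit inputs
funcs = list(itertools.product([0, 1], repeat=n))   # concept class: parities <a, x>

# Evaluation matrix with rows indexed by functions and columns by inputs,
# values recoded as +/-1: M[a, x] = (-1)^{<a, x> mod 2}.
M = np.array([[(-1) ** (np.dot(a, x) % 2) for x in points] for a in funcs])

uniform = np.full(len(funcs), 1.0 / len(funcs))  # uniform prior over the class
point = np.zeros(len(funcs))                     # prior concentrated on one function
point[0] = 1.0

print("||M^T uniform||_2 =", np.linalg.norm(M.T @ uniform))            # 1.0
print("||M^T point||_2   =", np.linalg.norm(M.T @ point))              # 4.0
print("top singular value =", np.linalg.svd(M, compute_uv=False)[0])   # 4.0

The uniform prior is barely amplified, while the concentrated prior is stretched by the full operator norm; bounding this gap for almost uniform distributions is what controls how fast a memory-limited learner's state can become informative.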

As applications we show that any algorithm that learns an n-variate polynomial function of degree at most d over any prime field \(F_p\) with probability \(p^{-O(n)}\) or with prediction advantage \(p^{-O(n)}\), given evaluations on randomly chosen inputs, either requires space \(\Omega(n(\log_p N)/d)\) or time \(p^{\Omega(n/d)}\), where \(N=(n/d)^{\Theta(d)}\) is the dimension of the space of such polynomials. These bounds, which are based on new bounds on the bias of polynomials over \(F_p\), are asymptotically optimal for polynomials of arbitrary constant degree and constant \(p\), since they match the tradeoffs achieved by natural learning algorithms for the problems.
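For intuition about the parameter \(N\), a standard monomial count gives the dimension directly (sketched here for the range \(d < p\), where no reductions \(x^p = x\) interfere): the monomials of total degree at most \(d\) in \(n\) variables form a basis, so

\[
N \;=\; \binom{n+d}{d},
\qquad
\left(\frac{n}{d}\right)^{d} \;\le\; \binom{n+d}{d} \;\le\; \left(\frac{e(n+d)}{d}\right)^{d},
\]

which yields \(N=(n/d)^{\Theta(d)}\) whenever \(d\le n\).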

Joint work with Shayan Oveis Gharan and Xin Yang.

The speaker was invited by Jakob Nordström.

For more information about upcoming seminars, visit