Advances in Parallel and Private Stochastic Optimization from Ball Acceleration
Consider an oracle which takes a point x and returns the minimizer of a convex function f over an ℓ_2 ball of radius r around x. While it is straightforward to show that ≈ r^{-1} queries to this oracle suffice to minimize f to high accuracy over a unit ball, perhaps surprisingly, we recently established that ≈ r^{-2/3} queries is the tight rate up to logarithmic factors. The resulting framework, known as ball acceleration, has advanced the state of the art for a host of fundamental optimization problems exhibiting local stability. I will give an overview of the ball acceleration framework, its approximation-tolerant implementation, and its applications, with an emphasis on parallel and private variants of stochastic convex optimization, as well as outstanding open directions.
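To make the oracle model concrete, here is a minimal sketch of the straightforward ≈ r^{-1} scheme (not the accelerated r^{-2/3} method): repeatedly query the ball oracle and jump to the returned minimizer. For illustration we assume a toy quadratic f(x) = ||x - x*||^2, whose ball oracle has a closed form; the names `ball_oracle` and `naive_ball_method` are illustrative, not from the talk.

```python
import numpy as np

def ball_oracle(x, x_star, r):
    # Exact ball oracle for the toy objective f(x) = ||x - x_star||^2:
    # the minimizer of f over the radius-r ball around x lies on the
    # segment from x to x_star, at distance min(r, ||x - x_star||) from x.
    d = np.linalg.norm(x_star - x)
    if d <= r:
        return x_star.copy()
    return x + r * (x_star - x) / d

def naive_ball_method(x0, x_star, r, eps=1e-6):
    # Naive scheme: each oracle call moves distance r toward the minimizer,
    # so starting at unit distance, about 1/r calls suffice.
    x, queries = x0.copy(), 0
    while np.linalg.norm(x - x_star) > eps:
        x = ball_oracle(x, x_star, r)
        queries += 1
    return x, queries

x_star = np.zeros(3)
x0 = np.array([1.0, 0.0, 0.0])   # unit distance from the minimizer
x, queries = naive_ball_method(x0, x_star, r=0.1)
# queries is roughly 1/r = 10
```

Ball acceleration improves on this by choosing the query centers adaptively, rather than always re-centering at the previous oracle output, which is how the r^{-2/3} rate is achieved.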