Computer Science/Discrete Mathematics Seminar II

Sparsifying Sums of Functions

Consider a function on $\mathbb{R}^n$ that can be written as a sum of functions $f = f_1 + f_2 + \cdots + f_m$, for $m \gg n$.

The question of approximating $f$ by a reweighted sum using only a small number of the summands has many applications in CS theory, mathematical analysis, data science, and numerical linear algebra.
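One standard way to make this goal precise (a common formulation in this line of work, not quoted from the announcement) is to ask for nonnegative weights $c_1, \dots, c_m$, only $s \ll m$ of which are nonzero, such that

$$(1-\epsilon)\, f(x) \;\le\; \sum_{i=1}^{m} c_i f_i(x) \;\le\; (1+\epsilon)\, f(x) \qquad \text{for all } x \in \mathbb{R}^n.$$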

We'll see that sparsification is possible in substantial generality. For instance, if the functions $\{f_i\}$ are arbitrary norms, symmetric submodular functions, or come from a general class of linear models, then one can sparsify down to a number of terms that is nearly linear in the dimension. As an application, we obtain algorithms with near-optimal running time for a variety of linear regression problems, including $\ell_p$ regression for $1 < p < 2$.

The most common way of building sparsifiers is by randomly sampling the functions $f_i$ to produce a sparse reweighting. Attempting to analyze such a sampling algorithm naturally leads to the study of chaining methods, entropy numbers, etc. In the first part of the talk, we will walk through such a proof for the special case of spectral sparsification, where each $f_i$ is the square of a linear function, i.e., $f_i(x) := \langle a_i, x \rangle^2$ for some $a_i \in \mathbb{R}^n$. Then, time permitting, we will discuss extensions beyond this case.
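To make the sampling approach concrete in the spectral case, here is a minimal sketch of leverage-score sampling, a standard scheme for this setting (the function name spectral_sparsify, the constant C, and the exact sampling rate below are illustrative assumptions, not taken from the talk):

import numpy as np

def spectral_sparsify(A, eps, seed=0):
    """Sketch: sparsify f(x) = sum_i <a_i, x>^2 = ||Ax||^2 by leverage-score sampling.
    A   : (m, n) array whose rows are the vectors a_i, with m >> n.
    eps : target multiplicative accuracy.
    Returns weights w (length m, mostly zero) so that sum_i w_i <a_i, x>^2
    approximates ||Ax||^2 within a (1 +/- eps) factor with high probability.
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    # Leverage score of row i: tau_i = a_i^T (A^T A)^+ a_i; these sum to rank(A) <= n.
    G = np.linalg.pinv(A.T @ A)
    tau = np.einsum('ij,jk,ik->i', A, G, A)
    # Keep row i with probability p_i = min(1, C * tau_i * log(n) / eps^2), where C is
    # an illustrative absolute constant whose value comes from the concentration argument.
    C = 8.0
    p = np.minimum(1.0, C * tau * np.log(max(n, 2)) / eps ** 2)
    keep = rng.random(m) < p
    w = np.zeros(m)
    w[keep] = 1.0 / p[keep]  # reweight so the sampled sum is unbiased for ||Ax||^2
    return w

# Quick check on synthetic data: the sparsified sum should be close to ||Ax||^2.
A = np.random.default_rng(1).normal(size=(5000, 20))
w = spectral_sparsify(A, eps=0.25)
x = np.random.default_rng(2).normal(size=20)
print(np.sum(w * (A @ x) ** 2) / np.linalg.norm(A @ x) ** 2)  # roughly 1.0

In this standard analysis, the leverage scores sum to at most $n$, so the expected number of retained terms is $O(n \log n / \epsilon^2)$, i.e., nearly linear in the dimension; the chaining and entropy-number machinery mentioned above enters when proving that the approximation holds simultaneously for all $x$.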

Based on joint work with Arun Jambulapati, James Lee, and Aaron Sidford.

Date & Time

November 21, 2023 | 10:30am – 12:30pm

Location

Simonyi Hall 101 and Remote Access