We use critical block sensitivity, a new complexity measure
introduced by Huynh and Nordstrom (STOC 2012), to study the
communication complexity of search problems. Our main result is a
simple proof that if S is a search problem with high
critical...
We introduce and study a new type of learning problem for
probability distributions over the Boolean hypercube \(\{-1,1\}^n\). As in
the standard PAC learning model, a learning problem in our
framework is defined by a class C of Boolean functions over
{−1...
Given a data stream \(D = \{p_1,p_2,...,p_m\}\) of size \(m\) of
numbers from \(\{1,..., n\}\), the frequency of \(i\) is defined as
\(f_i = |\{j: p_j = i\}|\). The \(k\)-th frequency moment of \(D\)
is defined as \(F_k = \sum_{i=1}^n f_i^k\). We...
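The frequency moments defined above can be computed exactly in one pass when memory is not a concern; a minimal sketch (the streaming literature this abstract belongs to is about approximating them in sublinear space):

```python
from collections import Counter

def frequency_moment(stream, k):
    # f_i = |{j : p_j = i}|, the number of occurrences of item i
    freq = Counter(stream)
    # F_k = sum over distinct items of f_i^k
    return sum(f ** k for f in freq.values())

# F_1 is the stream length; F_0 is the number of distinct items.
```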
Classical matrix perturbation bounds, such as Weyl's (for
eigenvalues) and Davis-Kahan (for eigenvectors), have long
played an important role in various areas: numerical
analysis, combinatorics, theoretical computer science...
Motivated by questions in Social Choice Theory, I will consider
the following extremal problem in Combinatorial Geometry. Call a
sequence of vectors of length n with −1, 1 entries feasible if it
contains a subset whose sum is positive in more than n...
The goal of (general-purpose) program obfuscation is to make an
arbitrary computer program "unintelligible" while preserving its
functionality. The problem of program obfuscation is well studied
both in theory and in practice. Yet despite its...
In this lecture, I will talk about moment-based SDP hierarchies
(which are duals of SOS relaxations for polynomial optimization) in
the context of graph partitioning. The focus will be on a certain
way of rounding such hierarchies, whose quality is...
For a permutation \(p\), let \(S_n(p)\) be the number of permutations on
\(n\) letters avoiding \(p\). Stanley and Wilf conjectured that, for each
permutation \(p\), \(S_n(p)^{1/n}\) tends to a finite limit \(L(p)\). Marcus and
Tardos proved the Stanley-Wilf conjecture by a...
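For small \(n\), the quantity \(S_n(p)\) can be checked by brute force; a minimal sketch (the helper names are mine, not from the abstract), where a permutation contains \(p\) if some subsequence is order-isomorphic to \(p\):

```python
from itertools import combinations, permutations

def contains_pattern(perm, p):
    # perm contains p if some k-subsequence of perm has the same
    # relative order as p
    k = len(p)
    for idx in combinations(range(len(perm)), k):
        sub = [perm[i] for i in idx]
        if all((sub[a] < sub[b]) == (p[a] < p[b])
               for a in range(k) for b in range(a + 1, k)):
            return True
    return False

def S(n, p):
    # S_n(p): count permutations of {1,...,n} avoiding the pattern p
    return sum(1 for perm in permutations(range(1, n + 1))
               if not contains_pattern(perm, p))
```

For example, \(S_n(132)\) is the \(n\)-th Catalan number, so `S(4, (1, 3, 2))` gives 14.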
A common way of lower bounding the expansion of a graph is by
looking at the second smallest eigenvalue of its Laplacian matrix.
Also known as the easy direction of Cheeger's inequality, this
bound becomes too weak when the expansion is o(1). In 2004...
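The eigenvalue bound above is easy to compute directly; a minimal sketch (my own illustration, not from the talk) using the unnormalized Laplacian \(L = D - A\):

```python
import numpy as np

def laplacian_lambda2(adj):
    # L = D - A; the second-smallest eigenvalue of L lower-bounds expansion
    # (the "easy direction" of Cheeger's inequality)
    deg = np.diag(adj.sum(axis=1))
    L = deg - adj
    return np.linalg.eigvalsh(L)[1]

# On the n-cycle, lambda_2 = 2 - 2*cos(2*pi/n) = Theta(1/n^2), while the
# expansion itself is Theta(1/n) -- illustrating how the bound degrades
# when the expansion is o(1).
```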
Deep learning, a modern version of neural nets, is increasingly
seen as a promising way to implement AI tasks such as speech
recognition and image recognition. Most current algorithms are
heuristic and have no provable guarantees. This talk will...