Computer Science/Discrete Mathematics Seminar II
Learnability and Automatizability
We consider the complexity of properly learning concept classes, i.e., when the learner must output a hypothesis of the same form as the unknown concept. We present the following new upper and lower bounds on well-known concept classes:

1. We show that unless NP = RP, there is no polynomial-time learning algorithm for DNF formulae where the hypothesis is an OR-of-thresholds. Note that as special cases, we show that neither DNF nor OR-of-thresholds is properly learnable unless NP = RP. Previous hardness results required strong restrictions on the size of the output DNF formula. We also prove that it is NP-hard to learn the intersection of $\ell \geq 2$ halfspaces by the intersection of $k$ halfspaces for any constant $k \geq 0$. Previous work held only for the case $k = \ell$.

2. Assuming that $\mathrm{NP} \not\subseteq \mathrm{DTIME}(2^{n^{\epsilon}})$ for a certain constant $\epsilon < 1$ (or, alternatively, the intractability of a problem in the W[P] hierarchy of parameterized complexity), we show that it is not possible to learn size-$s$ decision trees by size-$s^{k}$ decision trees for any $k \geq 0$. Previous hardness results for learning decision trees held for $k \leq 2$.

3. We present the first non-trivial upper bounds on properly learning DNF formulae and decision trees. In particular, we show how to learn size-$s$ DNF by DNF in time $2^{\tilde{O}(\sqrt{n \log s})}$, and how to learn size-$s$ decision trees by decision trees in time $n^{O(\log s)}$.

The hardness results for DNF formulae and intersections of halfspaces are obtained via specialized graph products that amplify the hardness of approximating the chromatic number, together with recent work on the hardness of approximate hypergraph coloring. The hardness results for decision trees, as well as the new upper bounds, are obtained by developing a connection between automatizability in proof complexity and learnability, a connection which may have other applications.

Joint work with Mark Braverman, Vitaly Feldman, Adam R. Klivans, and Toniann Pitassi.
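For context on the $n^{O(\log s)}$ bound in item 3, one classical but improper route reaches the same running time: a size-$s$ decision tree has rank at most $\log_2 s$ (a rank-$r$ tree has at least $2^r$ leaves), rank-$r$ trees can be rewritten as $r$-decision lists (Blum, 1992), and Rivest's greedy algorithm (1987) finds a consistent $k$-decision list in time $n^{O(k)}$. The sketch below illustrates only that greedy consistency search, not the automatizability-based proper learner of the talk; the function name and data encoding are illustrative assumptions.

```python
from itertools import combinations, product

def learn_decision_list(sample, n, k):
    """Greedy cover (Rivest-style): find a k-decision list consistent with
    `sample`, a list of (x, y) pairs with x a tuple of n bits and y in {0, 1}.
    A term is a tuple of (index, required_bit) literals; the empty term ()
    matches every example and serves as the default rule. Returns a list of
    (term, label) rules, or None if no consistent k-decision list exists."""
    def matches(term, x):
        return all(x[i] == b for i, b in term)

    # Enumerate all conjunctions of at most k literals: O((2n)^k) terms.
    terms = [tuple(zip(idxs, bits))
             for r in range(k + 1)
             for idxs in combinations(range(n), r)
             for bits in product((0, 1), repeat=r)]

    remaining = list(sample)
    rules = []
    while remaining:
        for term in terms:
            covered = [(x, y) for x, y in remaining if matches(term, x)]
            labels = {y for _, y in covered}
            # A nonempty, label-homogeneous term is always safe to emit:
            # some consistent k-decision list still exists for the leftovers.
            if covered and len(labels) == 1:
                rules.append((term, labels.pop()))
                remaining = [(x, y) for x, y in remaining
                             if not matches(term, x)]
                break
        else:
            return None  # no k-decision list is consistent with the sample
    return rules
```

Taking $k = \lceil \log_2 s \rceil$ gives an $n^{O(\log s)}$-time consistency search for size-$s$ decision trees, but the hypothesis is a decision list; producing a decision tree as output, i.e., proper learning in the same time, is what the automatizability connection in the talk delivers.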