Theoretical Machine Learning Seminar
Assumption-free prediction intervals for black-box regression algorithms
There has been tremendous progress in designing accurate black-box prediction methods (boosting, random forests, bagging, neural nets, etc.), but for deployment in the real world it is useful to quantify uncertainty beyond making point predictions. I will summarize recent work that my collaborators and I have done over the last few years on designing a large class of methods that enable predictive inference without any assumptions on the algorithm or on the distribution of the covariates and outcomes, relying only on the exchangeability of the training and test points. I will cover some past work by others, some recent work by the BaCaRaTi group (Rina Barber, Emmanuel Candes, myself, Ryan Tibshirani), and some ongoing work with Arun Kumar Kuchibhotla and Chirag Gupta. [Relevant papers: https://arxiv.org/abs/1905.02928, https://arxiv.org/abs/1910.10562, https://arxiv.org/abs/1903.04684, https://arxiv.org/abs/1904.06019]
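As a concrete illustration of the style of method the talk describes, here is a minimal sketch of split conformal prediction, one of the simplest members of this class. The random-forest regressor, the simulated data, and all parameter choices below are illustrative assumptions, not specifics from the talk; the point is that any black-box regressor can be wrapped this way, with coverage guaranteed by exchangeability alone.

```python
# Minimal sketch of split conformal prediction (assumed example, not the
# speaker's exact method): wrap any black-box regressor to get
# distribution-free prediction intervals under exchangeability.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Simulated exchangeable data; any real dataset would do.
X = rng.normal(size=(2000, 5))
y = X[:, 0] + np.sin(X[:, 1]) + rng.normal(scale=0.5, size=2000)

# Split into a proper training set and a calibration set.
X_train, y_train = X[:1000], y[:1000]
X_cal, y_cal = X[1000:], y[1000:]

# The black box: here a random forest, but the guarantee does not depend on it.
model = RandomForestRegressor(random_state=0).fit(X_train, y_train)

# Absolute residuals on the calibration set serve as conformity scores.
scores = np.abs(y_cal - model.predict(X_cal))

# The finite-sample-corrected (1 - alpha) quantile of the scores,
# ceil((n + 1) * (1 - alpha)) / n, yields coverage >= 1 - alpha whenever
# the test point is exchangeable with the calibration points.
alpha = 0.1
n = len(scores)
level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
q = np.quantile(scores, level, method="higher")

# Prediction interval for a new point: [f(x) - q, f(x) + q].
x_new = rng.normal(size=(1, 5))
pred = model.predict(x_new)[0]
print(f"90% prediction interval: [{pred - q:.2f}, {pred + q:.2f}]")
```

Note that the guarantee is marginal over the randomness in the data: no assumption is placed on the regressor or on the data distribution beyond exchangeability of the calibration and test points.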