Computer Science/Discrete Mathematics Seminar II
Omniprediction and Multigroup Fairness
Consider a scenario where we are learning a predictor whose predictions will be evaluated by their expected loss. What if we do not know the precise loss function at the time of learning, beyond some generic properties (such as convexity)? What if the same predictor will be used in several future applications, each with its own loss function? Can we still learn predictors that have strong guarantees?
This motivates the notion of omnipredictors: predictors with strong loss minimization guarantees across a broad family of loss functions, relative to a benchmark hypothesis class. Omniprediction turns out to be intimately connected to multigroup fairness notions such as multicalibration, and also to other topics like boosting, swap regret minimization, and the approximate rank of matrices. This talk will present some recent work in this area, emphasizing these connections.
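For concreteness, here is one common way the guarantee is stated; the notation ($\mathcal{L}$, $\mathcal{C}$, $k_\ell$, $\varepsilon$) is introduced here for illustration and is not part of the abstract. A predictor $\tilde{p}$ is an $(\mathcal{L}, \mathcal{C}, \varepsilon)$-omnipredictor if for every loss $\ell \in \mathcal{L}$ there is a simple post-processing $k_\ell$, depending only on $\ell$ and not on the data, such that
\[
  \mathbb{E}_{(x,y)\sim\mathcal{D}}\bigl[\ell\bigl(k_\ell(\tilde{p}(x)),\, y\bigr)\bigr]
  \;\le\;
  \min_{c \in \mathcal{C}} \mathbb{E}_{(x,y)\sim\mathcal{D}}\bigl[\ell\bigl(c(x),\, y\bigr)\bigr] + \varepsilon .
\]
In words, a single predictor learned without knowledge of the loss is, after trivial post-processing, competitive with the best hypothesis in the benchmark class $\mathcal{C}$ simultaneously for every loss in $\mathcal{L}$.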