Theoretical Machine Learning Seminar
Instance-Hiding Schemes for Private Distributed Learning
An important problem today is how to allow multiple distributed entities to train a shared neural network on their private data while protecting data privacy. Federated learning is a standard framework for distributed deep learning, and one would like to ensure full privacy within that framework. Previously proposed methods, such as homomorphic encryption and differential privacy, come with drawbacks such as large computational overhead or a large drop in accuracy. This work introduces a new and simple encryption of training data that hides the information in it while still allowing its use in the usual deep learning pipeline. The encryption is inspired by the classic notion of instance hiding in cryptography. Experiments show that it allows training with fairly small effect on final accuracy.
We also give some theoretical analysis of privacy guarantees for this encryption, showing that violating privacy requires attackers to solve a difficult computational problem.
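The abstract does not spell out the construction, but to make the idea concrete, here is a minimal NumPy sketch of one mixup-style instance-hiding step: each private image is blended with randomly chosen public images and then masked by a random pixelwise sign flip. The function name, the Dirichlet mixing weights, and the choice of k here are illustrative assumptions, not the authors' exact scheme.

```python
import numpy as np

def instance_hide(private_images, private_labels, public_images, k=4):
    """Illustrative sketch of a mixup-style instance-hiding encryption.

    Each encrypted example is a random convex combination of one private
    image with k-1 randomly chosen public images, followed by a random
    pixelwise sign flip. Inputs are assumed to be NumPy arrays, with
    one-hot label vectors. All parameter choices are assumptions made
    for illustration only.
    """
    encrypted_images, encrypted_labels = [], []
    for x, y in zip(private_images, private_labels):
        # Random mixing coefficients that sum to 1.
        lam = np.random.dirichlet(np.ones(k))
        # Blend the private image with k-1 randomly chosen public images.
        others = public_images[np.random.choice(len(public_images), k - 1)]
        mixed = lam[0] * x + np.tensordot(lam[1:], others, axes=1)
        # Random pixelwise sign flip masks low-level image statistics.
        sigma = np.random.choice([-1.0, 1.0], size=mixed.shape)
        encrypted_images.append(sigma * mixed)
        # Scale the label by the private image's mixing weight
        # (public images are treated as unlabeled here).
        encrypted_labels.append(lam[0] * y)
    return np.stack(encrypted_images), np.stack(encrypted_labels)
```

The encrypted images and labels produced this way could then be fed to a standard training loop unchanged, which is the property the abstract emphasizes: the encryption slots into the usual deep learning pipeline.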
Joint work with Yangsibo Huang, Zhao Song, and Kai Li. To appear at ICML 2020.
Location
Remote Access Only - see link below
Notes
We welcome broad participation in our seminar series. To receive login details, interested participants will need to fill out a registration form accessible from the link below. Upcoming seminars in this series can be found here.