University of Massachusetts Amherst


Rachel Cummings, Adaptive Learning With Robust Generalization Guarantees

Machine Learning and Friends Lunch
March 17

who: Rachel Cummings, Caltech
when: 12:00pm Thursday, 3/17
where: cs151
pizza: Antonio's
generous sponsor: Yahoo!

Abstract
(Joint work with Katrina Ligett, Kobbi Nissim, Aaron Roth, and Steven Zhiwei Wu)
The traditional notion of generalization --- i.e., learning a hypothesis whose empirical error is close to its true error --- is surprisingly brittle. As has recently been noted [DFH+15b], even if several algorithms have this guarantee in isolation, the guarantee need not hold if the algorithms are composed adaptively. In this paper, we study three notions of generalization ---increasing in strength--- that are robust to post-processing and amenable to adaptive composition, and examine the relationships between them. We call the weakest such notion Robust Generalization. A second, intermediate, notion is the stability guarantee known as differential privacy. The strongest guarantee we consider we call Perfect Generalization. We prove that every hypothesis class that is PAC learnable is also PAC learnable in a robustly generalizing fashion, albeit with an exponential blowup in sample complexity. We conjecture that a stronger version of this theorem also holds that avoids any blowup in sample complexity (and, in fact, it would, subject to a longstanding conjecture [LW86, War03]). It was previously known that differentially private algorithms satisfy robust generalization. In this paper, we show that robust generalization is a strictly weaker concept, and that there is a learning task that can be carried out subject to robust generalization guarantees, yet cannot be carried out subject to differential privacy, answering an open question of [DFH+15a]. We also show that perfect generalization is a strictly stronger guarantee than differential privacy, but that, nevertheless, many learning tasks can be carried out subject to the guarantees of perfect generalization.
Bio
Rachel Cummings is a Ph.D. candidate in Computing and Mathematical Sciences at the California Institute of Technology. She received her B.A. degrees in Mathematics and Economics from the University of Southern California and her M.S. degree in Computer Science from Northwestern University. Her research interests lie at the intersection of computer science and economics, specifically problems surrounding algorithmic game theory, data privacy, and learning theory. She won the Best Paper Award at DISC 2014, and she is the recipient of a Simons Award for Graduate Students in Theoretical Computer Science.