Abstract: While deep learning has demonstrated success on many tasks, the point estimates provided by standard deep models can lead to overfitting and offer no uncertainty quantification on predictions. Yet when models are applied to critical domains such as autonomous driving, precision health care, or criminal justice, reliable measurement of a model's predictive uncertainty may be as crucial as the correctness of its predictions. At the same time, recent literature has paid increasing attention to separating sources of predictive uncertainty, with the goal of distinguishing uncertainty that is reducible through additional data collection from uncertainty that reflects stochasticity inherent in the data-generating process. In this talk, we examine a number of deep (Bayesian) models that promise to capture complex forms of predictive uncertainty, as well as metrics commonly used to evaluate such uncertainties. We aim to highlight the strengths and limitations of both the models and the metrics, and we discuss potential ways to improve each in ways that are meaningful for downstream tasks.
Bio: Weiwei received her Ph.D. in pure math from Wesleyan University, where she specialized in higher categorical structures in algebraic topology; her postdoctoral work at Goettingen University involved categorification of knot invariants. At Harvard, Weiwei works with Finale Doshi-Velez (Harvard dtak) on deep Bayesian and generative models, focusing on modeling complex forms of uncertainty and noise, as well as on developing inference methods that enforce downstream task desiderata.