The unprecedented prediction accuracy of modern machine learning invites its use in a wide range of real-world domains, including autonomous robots, fine-grained computer vision, scientific experimental design, and many others. To create trustworthy AI systems, we must safeguard machine learning methods against catastrophic failures and provide calibrated uncertainty estimates. In safety-critical settings such as autonomous driving and health care, for example, we must account for uncertainty and guarantee performance before deploying these systems in the real world. A key challenge in such applications is that test cases are often not well represented by the pre-collected training data. To properly leverage learning in these domains, especially safety-critical ones, we must go beyond the conventional paradigm of maximizing average prediction accuracy under generalization guarantees that rely on strong distributional relationships between training and test examples.
In this talk, I will describe a distributionally robust learning framework that offers accurate uncertainty estimation and rigorous guarantees under data distribution shift. This framework yields appropriately conservative yet still accurate predictions to guide real-world decision-making, and it integrates easily with modern deep learning. I will showcase the practicality of this framework in applications to agile robotic control and computer vision. I will also survey other real-world applications that could benefit from this framework in future work.
Anqi (Angie) Liu is an Assistant Professor in the Department of Computer Science at the Whiting School of Engineering at Johns Hopkins University. She is broadly interested in developing principled machine learning algorithms for building more reliable, trustworthy, and human-compatible AI systems in the real world. Her research focuses on enabling machine learning algorithms to be robust to changing data and environments, to provide accurate and honest uncertainty estimates, and to take human preferences and values into account during interaction. She is particularly interested in high-stakes applications that concern the safety and societal impact of AI. Previously, she completed a postdoc in the Department of Computing and Mathematical Sciences at the California Institute of Technology. She obtained her Ph.D. from the Department of Computer Science at the University of Illinois at Chicago. She was selected as a 2020 EECS Rising Star. Her publications appear in top machine learning conferences such as NeurIPS, ICML, ICLR, AAAI, and AISTATS.