Calendar

Oct
2
Wed
LCSR Seminar: David Blei “The Blessings of Multiple Causes” @ Hackerman B-17
Oct 2 @ 12:00 pm – 1:00 pm

Abstract:

Causal inference from observational data is a vital problem, but it comes with strong assumptions. Most methods require that we observe all confounders, variables that affect both the causal variables and the outcome variables. But whether we have observed all confounders is a famously untestable assumption. We describe the deconfounder, a way to do causal inference with weaker assumptions than the classical methods require.
How does the deconfounder work? While traditional causal methods measure the effect of a single cause on an outcome, many modern scientific studies involve multiple causes, different variables whose effects are simultaneously of interest. The deconfounder uses the correlation among multiple causes as evidence for unobserved confounders, combining unsupervised machine learning and predictive model checking to perform causal inference. We demonstrate the deconfounder on real-world data and simulation studies, and describe the theoretical requirements for the deconfounder to provide unbiased causal estimates.
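To make the recipe concrete, here is a minimal sketch of the deconfounder under simplifying assumptions: scikit-learn’s FactorAnalysis stands in for the factor model, the data are synthetic, and the predictive model check is omitted. None of the names below come from the code accompanying the paper ([*] below).

```python
# A minimal deconfounder sketch (assumptions: sklearn factor model,
# synthetic data, predictive check omitted).
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n, m = 2000, 10                                 # units, causes

# Synthetic data: one unobserved confounder z drives all causes and the outcome.
z = rng.normal(size=(n, 1))
causes = z @ rng.normal(size=(1, m)) + 0.5 * rng.normal(size=(n, m))
true_effects = rng.normal(size=m)
outcome = causes @ true_effects + 2.0 * z[:, 0] + 0.1 * rng.normal(size=n)

# Step 1: fit a factor model to the causes alone (the outcome is not used).
fa = FactorAnalysis(n_components=1, random_state=0).fit(causes)

# Step 2: the inferred per-unit factor serves as a substitute confounder.
z_hat = fa.transform(causes)

# Step 3: estimate causal effects, adjusting for the substitute confounder.
adjusted = LinearRegression().fit(np.hstack([causes, z_hat]), outcome)
naive = LinearRegression().fit(causes, outcome)

print("naive mean abs. bias:   ", np.abs(naive.coef_ - true_effects).mean())
print("adjusted mean abs. bias:", np.abs(adjusted.coef_[:m] - true_effects).mean())
```

Because the causes are correlated only through z, the fitted factor absorbs the confounding signal, and the adjusted coefficients land closer to the true effects than the naive regression’s.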


Bio:

David Blei is a Professor of Statistics and Computer Science at Columbia University, and a member of the Columbia Data Science Institute. He studies probabilistic machine learning, including its theory, algorithms, and applications. David has received several awards for his research, including a Sloan Fellowship (2010), an Office of Naval Research Young Investigator Award (2011), a Presidential Early Career Award for Scientists and Engineers (2011), a Blavatnik Faculty Award (2013), the ACM-Infosys Foundation Award (2013), a Guggenheim Fellowship (2017), and a Simons Investigator Award (2019). He is the co-editor-in-chief of the Journal of Machine Learning Research, and a fellow of the ACM and the IMS.
[*] https://arxiv.org/abs/1805.06826


 LCSR Seminar Video Link

Oct
9
Wed
LCSR Seminar: Zhou Yu “Enabling Machines with Situational Awareness, Communication, and Decision-Making Abilities Leveraging Multimodal Information” @ Hackerman B-17
Oct 9 @ 12:00 pm – 1:00 pm

Abstract:

Humans interact with other humans or the world through information from various channels, including vision, audio, language, and haptics. To simulate intelligence, machines require similar abilities to process and combine information from different channels to acquire better situational awareness, better communication ability, and better decision-making ability. In this talk, we describe three projects. In the first study, we enable a robot to utilize both vision and audio information to achieve better user understanding; we then use incremental language generation to improve the robot’s communication with a human. In the second study, we utilize multimodal history tracking to optimize policy planning in task-oriented visual dialogs. In the third project, we tackle the well-known trade-off between dialog response relevance and policy effectiveness in visual dialog generation. We propose a new machine learning procedure that alternates between supervised learning and reinforcement learning to optimize language generation and policy planning jointly in visual dialogs. We will also cover some recent ongoing work on image synthesis through dialogs, and on generating social multimodal dialogs with a blend of GIFs and words.
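As a rough illustration of such an alternating procedure, the skeleton below interleaves a supervised pass over human dialogs with a reinforcement pass over self-generated rollouts. `model`, `dialog_data`, and `env` are hypothetical stand-ins for a dialog model, a corpus, and a user simulator, not the speaker’s implementation.

```python
# Schematic alternating SL/RL loop for joint generation and policy learning.
# All objects here are hypothetical placeholders, not a real library API.
def train_alternating(model, dialog_data, env, epochs=10):
    for _ in range(epochs):
        # Supervised phase: keep generated responses fluent and relevant
        # by fitting to human responses in context.
        for context, human_response in dialog_data:
            model.fit_step(context, human_response)

        # Reinforcement phase: improve policy effectiveness by optimizing
        # task reward on dialogs the model generates itself.
        context, done = env.reset(), False
        while not done:
            response = model.generate(context)
            context, reward, done = env.step(response)
            model.policy_gradient_step(response, reward)
    return model
```

Alternating the two phases is one way to trade off response relevance (anchored by the supervised phase) against policy effectiveness (pushed by the reinforcement phase).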


Bio:

Zhou Yu is an Assistant Professor in the Computer Science Department at UC Davis. She received her PhD from Carnegie Mellon University in 2017. Zhou is interested in building robust and multi-purpose dialog systems using fewer data points and less annotation. She also works on language generation and on vision-and-language tasks. Zhou’s work on persuasive dialog systems recently received an ACL 2019 best paper nomination. Zhou was featured in Forbes’ 2018 30 Under 30 in Science for her work on multimodal dialog systems, and her team recently won the 2018 Amazon Alexa Prize, with a $500,000 cash award, for building an engaging social bot.


 LCSR Seminar Video Link

Oct
16
Wed
LCSR Seminar: Nikolai Matni “Safety and robustness guarantees with learning in the loop” @ Hackerman B-17
Oct 16 @ 12:00 pm – 1:00 pm

Abstract:

In this talk, we present recent progress towards developing learning-based control strategies for the design of safe and robust autonomous systems. Our approach is to recognize that machine learning algorithms produce inherently uncertain estimates or predictions, and that this uncertainty must be explicitly quantified (e.g., using non-asymptotic guarantees from contemporary high-dimensional statistics) and accounted for (e.g., using robust control/optimization) when designing safety-critical systems. We focus on the optimal control of unknown systems, and show that by integrating modern tools from high-dimensional statistics and robust control, we can provide, to the best of our knowledge, the first end-to-end finite-data robustness, safety, and performance guarantees for learning and control. We also briefly highlight how these ideas can be extended to the large-scale distributed setting by similarly integrating tools from structured linear inverse problems with tools from distributed robust and optimal control. As a whole, these results provide a rigorous and contemporary perspective on safe reinforcement learning as applied to continuous control. We conclude with our vision for a general theory of safe learning and control, with the ultimate goal being the design of robust and high-performing data-driven autonomous systems.
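As a concrete, heavily simplified instance of the first stage of such a pipeline, the sketch below identifies an unknown linear system from one noisy trajectory by least squares and prints the parameter errors, the quantities that finite-data bounds control and that a robust synthesis step would then guard against. The dynamics, noise levels, and horizon are illustrative choices, not taken from the speaker’s papers.

```python
# System identification sketch: estimate x_{t+1} = A x_t + B u_t + w_t
# from a single randomly excited trajectory, via least squares.
import numpy as np

rng = np.random.default_rng(0)
n, m, T = 3, 2, 200                       # state dim, input dim, horizon
A = np.array([[1.01, 0.01, 0.00],
              [0.01, 1.01, 0.01],
              [0.00, 0.01, 1.01]])        # a marginally unstable system
B = rng.normal(size=(n, m))

# Roll out the true system with random exploratory inputs.
X = np.zeros((T + 1, n))
U = rng.normal(size=(T, m))
for t in range(T):
    X[t + 1] = A @ X[t] + B @ U[t] + 0.1 * rng.normal(size=n)

# Least squares on the regression x_{t+1} ~ [x_t; u_t].
Z = np.hstack([X[:-1], U])                # (T, n + m) regressor matrix
Theta, *_ = np.linalg.lstsq(Z, X[1:], rcond=None)
A_hat, B_hat = Theta[:n].T, Theta[n:].T

# Spectral-norm errors: what a finite-data bound certifies, and what a
# robust controller must be designed to tolerate.
print("||A - A_hat||_2 =", np.linalg.norm(A - A_hat, 2))
print("||B - B_hat||_2 =", np.linalg.norm(B - B_hat, 2))
```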


Bio:

Nikolai Matni is an assistant professor in the Department of Electrical and Systems Engineering at the University of Pennsylvania, where he is also a member of the GRASP Lab, the PRECISE Center, and the Applied Mathematics and Computational Science graduate group. Prior to joining Penn, Nikolai was a postdoctoral scholar in EECS at UC Berkeley, and before that a postdoctoral scholar in Computing and Mathematical Sciences at Caltech. He received his Ph.D. in Control and Dynamical Systems from Caltech in June 2016, and holds B.A.Sc. and M.A.Sc. degrees in Electrical Engineering from the University of British Columbia, Vancouver, Canada. His research interests broadly encompass the use of learning, optimization, and control in the design and analysis of safety-critical and data-driven cyber-physical systems. Nikolai was awarded the IEEE CDC 2013 Best Student Paper Award (the first-ever sole-author winner) and the IEEE ACC 2017 Best Student Paper Award (as co-advisor).

 LCSR Seminar Video Link

Oct
23
Wed
LCSR Seminar: Cornelia Fermuller @ Hackerman B-17
Oct 23 @ 12:00 pm – 1:00 pm

Abstract:

TBA


Bio:

TBA

 LCSR Seminar Video Link

Oct
30
Wed
LCSR Seminar: Career Services “Interviewing” @ Hackerman B-17
Oct 30 @ 12:00 pm – 1:00 pm

Abstract:

TBA


Bio:

TBA


 LCSR Seminar Video Link

Nov
6
Wed
LCSR Seminar: Xinyan Deng “Learn to Fly Like a Hummingbird by an At-scale Bio-inspired Robot: The Highly Robust, Resilient, and Maneuverable Flapping Flight” @ Hackerman B-17
Nov 6 @ 12:00 pm – 1:00 pm

Abstract:

Flying insects and hummingbirds demonstrate remarkable aerial maneuverability, robustness, and resilience to their environment and to morphological changes. Upon a looming threat, a hummingbird can perform a rapid 180-degree escape turn in just six wingbeats; a hawk moth can adjust in real time to wing-area loss during hovering; migrating butterflies can tolerate wind-gust disturbances while flying thousands of miles. The interest in micro air vehicles capable of hovering and fast maneuvers has led to several efforts to develop bio-inspired insect and hummingbird robots. However, only through an understanding of their underlying flight mechanisms can we create novel robots with the key bio-inspired principles that allow them to approach the performance of their natural counterparts. To this end, we use a combined approach of dynamics and control theory, fluid experiments, and robotic platforms. In this talk I will highlight our recent findings, including: 1) learning extreme maneuvers such as rapid escapes and tight body flips on an at-scale hummingbird robot equipped with just two actuators; 2) sensing through flapping wings and their resilience to cluttered environments and dynamic morphological damage; 3) flapping wings in turbulence and their gust-mitigating potential.


Bio:

Xinyan Deng is an Associate Professor in the School of Mechanical Engineering at Purdue University. She received her B.S. degree from the School of Electrical Engineering and Automation at Tianjin University, and her Ph.D. degree from the Department of Mechanical Engineering at the University of California, Berkeley. Her background is in controls and robotics, and her research interests include the principles of aerial and aquatic locomotion in animals, bio-inspired robots, and the cyber-physical security of autonomous systems. She received the NSF CAREER Award in 2006 for her research on flying insects and robots, and the B.F.S. Schaefer Outstanding Faculty Scholar Award from Purdue University in 2015. Her work is highly interdisciplinary and has appeared in top robotics, biology, fluids, and computer science journals and conferences. She served as Co-Chair of the Technical Committee on Bio-robotics of the IEEE Robotics and Automation Society from 2009 to 2013, and has chaired and co-chaired various IEEE and ASME conference workshops, NSF workshops, and conference symposia and sessions on bio-inspired robotics. Her research has been funded by federal agencies including NSF, AFOSR, AFRL, and ONR.


 LCSR Seminar Video Link

Nov
13
Wed
LCSR Seminar: Juan Wachs “The Cyber Touch – Empowering Medical Robots Through Gestures” @ Hackerman B-17
Nov 13 @ 12:00 pm – 1:00 pm

Abstract:

At present, the only robots in the operating room are those that extend surgeons’ capabilities through tele-operation, such as the da Vinci robot. A new type of robot is emerging, however, that understands natural language, and especially non-verbal language such as gestures, the main form of interaction in the operating room. It also turns out that medics and first responders use a combination of communication modalities to collaborate with robots outside the OR, in austere settings such as the battlefield or rural areas. Endowing robots with the capability of recognizing intention through body language, and of predicting and informing the surgical team about future surgical tasks, is a key challenge in trauma care. In this talk, I will highlight three applications that showcase robots working with doctors in a semi-autonomous manner. This work has applications for the DoD, and was made possible through collaborations with hospitals: the Indiana University School of Medicine, the Naval Medical Center Portsmouth in Norfolk, VA, and Womack Army Medical Center at Fort Bragg, North Carolina. Significant breakthroughs in this research have led to major publications in venues such as Annals of Surgery and Surgery. News coverage of this work includes NPR (“Surgical Technology Aims to Mimic ‘Teleporting’”) and Inside Indiana Business (Sept. 28, 2015); more recently it was featured in WIRED magazine (“How Technology is Helping Surgeons Collaborate from Across the World”, 07/2018) and on Inside Indiana Business with Gerry Dick (TV show, October 4, 2018). The research showcased is supported through the kind generosity of the DoD and NSF.


Bio:

Dr. Juan Wachs is the James A. and Sharon M. Tompkins Rising Star Associate Professor in the School of Industrial Engineering at Purdue University, Professor of Biomedical Engineering (by courtesy), and an Adjunct Associate Professor of Surgery at the IU School of Medicine. He is the director of the Intelligent Systems and Assistive Technologies (ISAT) Lab at Purdue, and he is affiliated with the Regenstrief Center for Healthcare Engineering. He completed postdoctoral training at the Naval Postgraduate School’s MOVES Institute under a National Research Council Fellowship from the National Academies of Sciences. Dr. Wachs received his B.Ed.Tech in Electrical Education from ORT Academic College, at the Hebrew University of Jerusalem campus, and his M.Sc. and Ph.D. in Industrial Engineering and Management from Ben-Gurion University of the Negev, Israel. He is the recipient of the 2013 Air Force Young Investigator Award, a 2015 Helmsley Senior Scientist Fellowship, a 2016 Fulbright U.S. Scholar award, the James A. and Sharon M. Tompkins Rising Star Associate Professorship (2017), and an ACM Distinguished Speaker appointment (2018). He is also an Associate Editor of IEEE Transactions on Human-Machine Systems and of Frontiers in Robotics and AI.


 LCSR Seminar Video Link

Nov
20
Wed
LCSR Seminar: Guoquan Huang “Visual-Inertial State Estimation” @ Hackerman B-17
Nov 20 @ 12:00 pm – 1:00 pm

Abstract:

As autonomous vehicles emerge in many different application domains, from self-driving cars and drone delivery to underwater survey, state estimation, one of the most important enabling technologies for autonomous systems, becomes more important than ever before. While tremendous progress in autonomous navigation has been made in the past decades, many challenges still remain. For example, many current state estimation algorithms for robot localization tend to become inconsistent (i.e., the state estimates are biased and the error covariance estimates differ from the true ones), causing mission failure in a short period of time. If the resources available to vehicles are limited, designing consistent, efficient estimators becomes even more challenging. In this talk, I will present some of our recent work on taking up these challenges. I will discuss our observability-based methodology for improving estimation consistency, and deep learning for loop closure, in the context of simultaneous localization and mapping (SLAM) and visual-inertial navigation systems (VINS). In particular, I will highlight our recent results on visual-inertial state estimation and its extensions.
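One standard diagnostic for the inconsistency described above, not specific to the speaker’s work, is the normalized estimation error squared (NEES): averaged over Monte Carlo runs, a consistent estimator’s NEES should be close to the state dimension, while an overconfident estimator’s is markedly larger. The toy sketch below illustrates this with made-up numbers.

```python
# NEES consistency check on a toy Gaussian estimator.
import numpy as np

def nees(x_true, x_est, P_est):
    """Normalized estimation error squared: e^T P^{-1} e."""
    e = x_true - x_est
    return float(e @ np.linalg.solve(P_est, e))

rng = np.random.default_rng(0)
dim, runs = 3, 2000
x_true = rng.normal(size=dim)
P = 0.5 * np.eye(dim)                      # covariance the estimator reports

# Consistent estimator: actual errors match the reported covariance.
good = [nees(x_true, x_true + rng.multivariate_normal(np.zeros(dim), P), P)
        for _ in range(runs)]

# Overconfident estimator: actual errors are 4x larger than reported,
# the typical failure mode the abstract describes for SLAM/VINS.
bad = [nees(x_true, x_true + rng.multivariate_normal(np.zeros(dim), 4 * P), P)
       for _ in range(runs)]

print("avg NEES, consistent:   ", np.mean(good))   # ~ dim = 3
print("avg NEES, overconfident:", np.mean(bad))    # ~ 4 * dim = 12
```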


Bio:

Guoquan (Paul) Huang is currently an Assistant Professor of Mechanical Engineering (ME), Electrical and Computer Engineering (ECE), and Computer and Information Sciences (CIS) at the University of Delaware (UD), where he is leading the Robot Perception and Navigation Group (RPNG). He also holds an Adjunct Professor position at Zhejiang University, China. He was a Senior Consultant (2016-2018) at the Huawei 2012 Laboratories and a Postdoctoral Associate (2012-2014) at MIT CSAIL (Marine Robotics). He received the B.Eng. (2002) in Automation (Electrical Engineering) from the University of Science and Technology Beijing, China, and the M.Sc. (2009) and Ph.D. (2013) in Computer Science from the University of Minnesota. From 2003 to 2005, he was a Research Assistant with the Department of Electrical Engineering, Hong Kong Polytechnic University. His research interests include sensing, localization, mapping, perception, and navigation of autonomous ground, aerial, and underwater vehicles. Dr. Huang received the 2006 Academic Excellence Fellowship from the University of Minnesota, the 2011 Chinese Government Award for Outstanding Self-Financed Students Abroad, the 2015 UD Research Award (UDRF), the 2016 NSF CRII Award, the 2017 UD Makerspace Faculty Fellowship, the 2018 SATEC Robotics Delegation (one of ten US experts invited by ASME), the 2018 Google Daydream Faculty Research Award, and the 2019 Google AR/VR Faculty Research Award, and was a finalist for the 2009 Best Paper Award from the Robotics: Science and Systems Conference (RSS).


 LCSR Seminar Video Link

Dec
4
Wed
LCSR Seminar: Career Services @ Hackerman B-17
Dec 4 @ 12:00 pm – 1:00 pm

Abstract:

TBA


Bio:

TBA


 LCSR Seminar Video Link

Jan
29
Wed
LCSR Seminar: Robert Pless “Supporting Sex Trafficking Investigations with Deep Metric Learning” @ Hackerman B-17
Jan 29 @ 12:00 pm – 1:00 pm

Abstract:

This talk shares work to develop traffickCam, a system to support sex trafficking investigations by recognizing the hotel rooms in pictures of trafficking victims. I’ll share context for this project and the ways this system is currently being used at the National Center for Missing and Exploited Children, as well as the special challenges that come from this problem domain, such as dramatic differences between rooms within a hotel and the similarity of rooms across chains. Attacking these problems led us to specific improvements in large-scale classification with Deep Metric Learning, including novel training algorithms, visual explainability, and new visualization approaches to compare and understand the representations these models learn.
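For a sense of the underlying machinery, the sketch below runs one triplet-loss training step of the kind used in deep metric learning: the anchor and positive would be photos of the same hotel room, and the negative a photo of a different room. The tiny network and random tensors are placeholders, assuming PyTorch; this is not the traffickCam model.

```python
# One deep-metric-learning step with a triplet margin loss (PyTorch).
import torch
import torch.nn as nn

embed = nn.Sequential(                     # toy embedding network
    nn.Flatten(),
    nn.Linear(3 * 64 * 64, 128), nn.ReLU(),
    nn.Linear(128, 32),
)
# Pull same-room pairs together in embedding space, push others apart.
loss_fn = nn.TripletMarginLoss(margin=0.2)
opt = torch.optim.Adam(embed.parameters(), lr=1e-3)

# Stand-in batch: 16 anchor/positive/negative image triplets.
anchor, positive, negative = (torch.randn(16, 3, 64, 64) for _ in range(3))

loss = loss_fn(embed(anchor), embed(positive), embed(negative))
opt.zero_grad()
loss.backward()
opt.step()
print("triplet loss:", loss.item())
```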

Bio:

Robert Pless is the Patrick and Donna Martin Professor and Chair of Computer Science at George Washington University. Dr. Pless was born at Johns Hopkins Hospital in 1972, received a Bachelor’s degree in Computer Science from Cornell University in 1994, and earned a PhD from the University of Maryland, College Park in 2000. He was on the faculty of Computer Science at Washington University in St. Louis from 2000 to 2017. His research focuses on geometrical and statistical computer vision.


 This talk will be recorded. Click Here for all of the recorded seminars for the 2019-2020 academic year.

Laboratory for Computational Sensing + Robotics