In recent years, advances in deep learning and GPU-based computing have enabled significant progress in several areas of robotics, including visual recognition, real-time tracking, object manipulation, and learning-based control. This progress has turned applications such as autonomous driving and delivery tasks in warehouses, hospitals, or hotels into realistic scenarios. However, robust manipulation in complex settings remains an open research problem. Various research efforts show promising results on individual pieces of the manipulation puzzle, including manipulator control, touch sensing, object pose detection, task and motion planning, and object pickup. In this talk, I will present our work on integrating such components into a complete manipulation system. Specifically, I will describe a robot manipulator that can open and close cabinet doors and drawers in a kitchen, detect and pick up objects, and move these objects to desired locations. Our baseline system is designed to be applicable in a wide variety of environments, relying only on 3D articulated models of the kitchen and the relevant objects. I will discuss lessons learned so far, and various research directions toward enabling more robust and general manipulation systems that do not rely on existing models.
Dieter Fox is Senior Director of Robotics Research at NVIDIA. He is also a Professor in the Paul G. Allen School of Computer Science & Engineering at the University of Washington, where he heads the UW Robotics and State Estimation Lab. Dieter obtained his Ph.D. from the University of Bonn, Germany. His research is in robotics and artificial intelligence, with a focus on state estimation and perception applied to problems such as mapping, object detection and tracking, manipulation, and activity recognition. He has published more than 200 technical papers and is the co-author of the textbook “Probabilistic Robotics”. He is a Fellow of the IEEE and the AAAI, and recipient of the 2020 Pioneer in Robotics and Automation Award. Dieter also received several best paper awards at major robotics, AI, and computer vision conferences. He was an editor of the IEEE Transactions on Robotics, program co-chair of the 2008 AAAI Conference on Artificial Intelligence, and program chair of the 2013 Robotics: Science and Systems conference.
Surgical robots offer a potential future for combating doctor shortages, decreased access to care, and long wait times. My lab has worked toward developing autonomous surgical robots that can break the dependency on having a human surgeon perform each procedure, an approach that does not scale to meet the growing patient population and that suffers from large, unpredictable variability in doctors' experience, training, and even day-to-day alertness. However, with very limited exceptions, we (as roboticists) are not there yet. The hurdles facing surgical robotics AI and automation comprise a host of multidisciplinary problems, from challenging computer vision problems in robot and scene estimation, to control challenges with flexible and complex surgical instrumentation, to sub-second reactive motion planning in constrained and dynamic environments. In this talk, I will show how my lab's research toward autonomous surgical robots has led us to develop computationally efficient methods for deformable SLAM, model-free robot learning, neural motion planning, and machine learning models for trajectory optimization. Furthermore, I will show how these techniques, many of which are data-driven, generalize not only to different surgical robots (both commercially available and those developed in the lab) but also to a broader set of applications across robot manipulation and bio-inspired robotics.
Michael Yip is an Assistant Professor of Electrical and Computer Engineering at UC San Diego, an IEEE RAS Distinguished Lecturer, a Hellman Fellow, and Director of the Advanced Robotics and Controls Laboratory (ARCLab). His group currently focuses on solving problems in data-efficient and computationally efficient robot control and motion planning through the use of various forms of learned representations, including deep learning and reinforcement learning strategies. His lab applies these ideas to surgical robotics and the automation of surgical procedures. Previously, Dr. Yip's research investigated different facets of haptics, soft robotics, artificial muscles, computer vision, and teleoperation. Dr. Yip's work has been recognized through several best paper awards at ICRA, including the inaugural best paper award for IEEE's Robotics and Automation Letters. Dr. Yip was previously a Research Associate with Disney Research Los Angeles in 2014, a Visiting Professor with Amazon Robotics' Machine Learning and Computer Vision group in Seattle, WA, in 2018, and a Visiting Professor at Stanford University in 2019. He received a B.Sc. in Mechatronics Engineering from the University of Waterloo, an M.S. in Electrical Engineering from the University of British Columbia, and a Ph.D. in Bioengineering from Stanford University.