This talk is an overview of our ongoing research in cataract surgery within the Language of Surgery project. Videos of the surgical field are a rich and easily accessible source of data on the extent and nature of care provided to patients in the operating room. This is a tremendous opportunity to demystify care in the surgical field, which is otherwise a black-box in many ways. Methods to analyze videos of the surgical field have multiple applications, one of which is a co-pilot for surgeons that supports their learning in the operating room throughout their career. Using cataract surgery as the prototype, this talk covers research to enable a learning platform that provides objective skill assessments and personalized feedback for surgeons.
Swaroop Vedula is a medical doctor with surgical training and an epidemiologist, with post-doctoral training in computer science. He is a research faculty member in the Malone Center for Engineering in Healthcare. Dr. Vedula’s research spans technology for objective skill assessment and personalized feedback for surgeons, surgical data science to analyze care in the operating room and its association with patient outcomes, robotic assistance for skill acquisition and surgical coaching, and explainable prediction models for clinical decision support.
Human interaction with the physical world is increasingly mediated by automation — planes assist pilots, cars assist drivers, and robots assist surgeons. Such semi-autonomous machines will eventually pervade our world, doing dull and dirty work, assisting the elderly and disabled, and responding to disasters. Recent results (e.g. from the DARPA Robotics Challenge) demonstrate that, once a robot reaches a task area and grasps the necessary tool, handle, or wheel, it can plan and execute whole-body motions to accomplish complex goals. However, robots frequently lose their balance and fall en route to tasks, necessitating human supervision and intervention. Integrating legged machines into daily life will require safe and stable telelocomotion, that is, robot ambulation guided by humans. This talk presents our efforts to tackle the telelocomotion problem from the bottom up and the top down, analyzing contact-rich robot dynamics to derive design principles for intrinsically stable terradynamics, and leveraging the theory of human sensorimotor learning and control to design provably safe interfaces for nonlinear control systems, including legged robots.
Sam Burden earned his BS with Honors in Electrical Engineering from the University of Washington, Seattle, in 2008. He earned his PhD in Electrical Engineering and Computer Sciences from the University of California, Berkeley, in 2014, where he subsequently spent one year as a Postdoctoral Scholar. In 2015, he returned to UW EE (now ECE) as an Assistant Professor; in 2016, he received a Young Investigator Program award from the Army Research Office (ARO-YIP). Sam is broadly interested in discovering and formalizing principles of sensorimotor control. Specifically, he focuses on applications in dynamic and dexterous robotics, neuromechanical motor control, and human-cyber-physical systems. In his spare time, he teaches robotics to students of all ages in classrooms and campus events.
Peter A. Sheppard – Sr. Intellectual Property Manager, Johns Hopkins Technology Ventures
“Intellectual Property Primer For Conflict of Interest Training.”
Laura M. Evans – Senior Policy Associate, Director, Homewood IRB
“Conflicts of Interest: Identification, Review, and Management.”
Abstract: Despite 50 years of research, robots remain remarkably clumsy, limiting their reliability for warehouse order fulfillment, robot-assisted surgery, and home decluttering. The First Wave of grasping research is purely analytical, applying variations of screw theory to exact knowledge of pose, shape, and contact mechanics. The Second Wave is purely empirical: end-to-end hyperparametric function approximation (aka Deep Learning) based on human demonstrations or time-consuming self-exploration. A “New Wave” of research considers hybrid methods that combine analytic models with stochastic sampling and Deep Learning models. I’ll present this history with new results from our lab on grasping diverse and previously-unknown objects.
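The hybrid “New Wave” idea above — combining analytic grasp models with stochastic sampling and learned predictors — can be illustrated with a minimal sketch. All names and scoring heuristics below are hypothetical stand-ins, not the speaker’s actual pipeline: candidate grasps are sampled stochastically, then ranked by the product of a placeholder analytic metric and a placeholder learned robustness estimate.

```python
import random

def sample_grasp_candidates(n, seed=0):
    """Stochastically sample planar grasp candidates as (x, y, angle) tuples."""
    rng = random.Random(seed)
    return [(rng.uniform(-1, 1), rng.uniform(-1, 1), rng.uniform(0, 3.14159))
            for _ in range(n)]

def analytic_quality(grasp):
    """Stand-in for an analytic (e.g. wrench-space) metric: favor grasps near the object center."""
    x, y, _ = grasp
    return 1.0 - min(1.0, (x * x + y * y) ** 0.5)

def learned_robustness(grasp):
    """Stand-in for a learned model predicting robustness under pose/shape uncertainty."""
    _, _, theta = grasp
    return 0.5 + 0.5 * abs(theta - 1.5707) / 1.5707  # placeholder heuristic

def plan_grasp(n_samples=100):
    """Hybrid pipeline: sample stochastically, rank by analytic score x learned score."""
    candidates = sample_grasp_candidates(n_samples)
    return max(candidates, key=lambda g: analytic_quality(g) * learned_robustness(g))
```

In a real system the analytic term would come from grasp mechanics and the learned term from a network trained on demonstrations or simulated trials; the structure of the ranking loop is what the sketch is meant to convey.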
Bio: Ken Goldberg is the William S. Floyd Distinguished Chair in Engineering at UC Berkeley and an award-winning roboticist, filmmaker, artist and popular public speaker on AI and robotics. Ken trains the next generation of researchers and entrepreneurs in his research lab at UC Berkeley; he has published over 300 papers, 3 books, and holds 9 US Patents. Ken’s artwork has been featured in 70 art exhibits including the 2000 Whitney Biennial. He is a pioneer in technology and artistic visual expression, bridging the “two cultures” of art and science. With unique skills in communication and creative problem solving, invention, and thinking on the edge, Ken has presented over 600 invited lectures at events around the world.
Robot-assisted surgery (RAS) systems incorporate highly dexterous tools, hand tremor filtering, and motion scaling to enable a minimally invasive surgical approach, reducing collateral damage and patient recovery times. However, current state-of-the-art telerobotic surgery requires a surgeon to operate every motion of the robot, resulting in long procedure times and inconsistent results. The advantages of autonomous robotic functionality have been demonstrated in applications outside of medicine, such as manufacturing and aviation. A limited form of autonomous RAS with pre-planned functionality was introduced in orthopedic procedures, radiotherapy, and cochlear implants. Efforts in automating soft tissue surgeries have so far been limited to elemental tasks such as knot tying, needle insertion, and executing predefined motions. The fundamental problems in soft tissue surgery include unpredictable shape changes, tissue deformations, and perception challenges.
My research goal is to transform current manual and teleoperated robotic soft tissue surgery into autonomous robotic surgery, improving patient outcomes by reducing the reliance on the operating surgeon, eliminating human errors, and increasing precision and speed. This presentation will introduce our Intelligent Medical Robotic Systems and Equipment (IMERSE) lab and discuss our novel strategies to overcome the challenges encountered in soft tissue autonomous surgery. Presentation topics will include: a) a robotic system for supervised autonomous laparoscopic anastomosis, b) magnetically steered robotic suturing, c) development of patient-specific biodegradable nanofiber tissue-engineered vascular grafts to optimally repair congenital heart defects (CHD), and d) our work on COVID-19 mitigation in ICU robotics, safe testing, and safe intubation.
Bio: Axel Krieger, PhD, and his IMERSE team joined LCSR in July 2020. He is an Assistant Professor in the Department of Mechanical Engineering at the Johns Hopkins University. He is leading a team of students, scientists, and engineers in the research and development of robotic tools and laparoscopic devices. Projects include the development of a surgical robot called the smart tissue autonomous robot (STAR) and the use of 3D printing for surgical planning and patient-specific implants. Professor Krieger is an inventor on over twenty patents and patent applications. Licensees of his patents include medical device start-ups Activ Surgical and PeriCor as well as industry leaders such as Siemens, Philips, and Intuitive Surgical. Before joining the Johns Hopkins University, Professor Axel Krieger was Assistant Professor in Mechanical Engineering at the University of Maryland and Assistant Research Professor and Program Lead for Smart Tools at the Sheikh Zayed Institute for Pediatric Surgical Innovation at Children’s National. He has several years of experience in private industry at Sentinelle Medical Inc and Hologic Inc. His role within these organizations was Product Leader, developing devices and software systems from concept to FDA approval and market introduction. Dr. Krieger completed his undergraduate and master’s degrees at the University of Karlsruhe in Germany and his doctorate at Johns Hopkins, where he pioneered an MRI-guided prostate biopsy robot used in over 50 patient procedures at three hospitals.
Over the last few years, advances in deep learning and GPU-based computing have enabled significant progress in several areas of robotics, including visual recognition, real-time tracking, object manipulation, and learning-based control. This progress has turned applications such as autonomous driving and delivery tasks in warehouses, hospitals, or hotels into realistic application scenarios. However, robust manipulation in complex settings is still an open research problem. Various research efforts show promising results on individual pieces of the manipulation puzzle, including manipulator control, touch sensing, object pose detection, task and motion planning, and object pickup. In this talk, I will present our work in integrating such components into a complete manipulation system. Specifically, I will describe a robot manipulator that can open and close cabinet doors and drawers in a kitchen, detect and pick up objects, and move these objects to desired locations. Our baseline system is designed to be applicable in a wide variety of environments, relying only on 3D articulated models of the kitchen and the relevant objects. I will discuss lessons learned so far, and various research directions toward enabling more robust and general manipulation systems that do not rely on existing models.
Dieter Fox is Senior Director of Robotics Research at NVIDIA. He is also a Professor in the Paul G. Allen School of Computer Science & Engineering at the University of Washington, where he heads the UW Robotics and State Estimation Lab. Dieter obtained his Ph.D. from the University of Bonn, Germany. His research is in robotics and artificial intelligence, with a focus on state estimation and perception applied to problems such as mapping, object detection and tracking, manipulation, and activity recognition. He has published more than 200 technical papers and is the co-author of the textbook “Probabilistic Robotics”. He is a Fellow of the IEEE and the AAAI, and recipient of the 2020 Pioneer in Robotics and Automation Award. Dieter also received several best paper awards at major robotics, AI, and computer vision conferences. He was an editor of the IEEE Transactions on Robotics, program co-chair of the 2008 AAAI Conference on Artificial Intelligence, and program chair of the 2013 Robotics: Science and Systems conference.
Surgical robots offer a potential future for combating doctor shortages, decreased access to care, and longer wait times. My lab has looked toward developing autonomous surgical robots that can break the dependency on having a human surgeon perform each procedure, which is not scalable to meet the increasing population of patients, and suffers from large and unpredictable variability in doctors’ experience, training, and even day-to-day alertness. However, with very limited exceptions, we (as roboticists) are not there yet — the hurdles facing surgical robotics AI and automation comprise a host of multidisciplinary problems, from challenging computer vision problems in robot and scene estimation, to control challenges with flexible and complex surgical instrumentation, to sub-second reactive motion planning in constrained and dynamic environments. In this talk, I will show how my lab’s research toward autonomous surgical robots has led us to develop computationally efficient methods for deformable SLAM, model-free robot learning, neural motion planning, and machine learning models for trajectory optimization. Furthermore, I will show how these techniques, many of which are driven by data, are ubiquitous in that they extend not only to different surgical robots (both commercially available and those developed in the lab) but also to a broader set of applications across robot manipulation and bio-inspired robotics.
Michael Yip is an Assistant Professor of Electrical and Computer Engineering at UC San Diego, IEEE RAS Distinguished Lecturer, Hellman Fellow, and Director of the Advanced Robotics and Controls Laboratory (ARCLab). His group currently focuses on solving problems in data-efficient and computationally efficient robot control and motion planning through the use of various forms of learning representations, including deep learning and reinforcement learning strategies. His lab applies these ideas to surgical robotics and the automation of surgical procedures. Previously, Dr. Yip’s research has investigated different facets of haptics, soft robotics, artificial muscles, computer vision, and teleoperation. Dr. Yip’s work has been recognized through several best paper awards at ICRA, including the inaugural best paper award for IEEE’s Robotics and Automation Letters. Dr. Yip has previously been a Research Associate with Disney Research Los Angeles in 2014, a Visiting Professor with Amazon Robotics’ Machine Learning and Computer Vision group in Seattle, WA in 2018, and a Visiting Professor at Stanford University in 2019. He received a B.Sc. in Mechatronics Engineering from the University of Waterloo, an M.S. in Electrical Engineering from the University of British Columbia, and a Ph.D. in Bioengineering from Stanford University.
Automating the design and creation of complex robots is challenging due to the complexity of the design search space and the physical processes required. To address this, new approaches are required to understand how to design and optimize robotic structures for a given task. This talk introduces a number of techniques and processes for the computational design of robots, focusing on automated design, rapid fabrication, and task-specific learning. This includes approaches ranging from biologically inspired design, to terrain-optimized robots developed by searching over tens of thousands of possible designs, to Bayesian approaches for rapid task learning. Different application scenarios for these approaches are also presented. The talk concludes with a vision for the future in which bespoke robots can be automatically created for a given task.
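The design-search idea described in the abstract — scoring tens of thousands of candidate designs against a task objective — can be sketched minimally as follows. The design parameters (`leg_length`, `gait_freq`) and the simulated cost function are hypothetical stand-ins for illustration only, not the speaker’s actual design pipeline:

```python
import itertools

def simulate_cost_of_transport(leg_length, gait_freq):
    """Hypothetical stand-in for a terrain simulation scoring a design (lower is better)."""
    return (leg_length - 0.3) ** 2 + 0.1 * (gait_freq - 2.0) ** 2

def search_designs():
    """Exhaustively score a parameterized design space (here 50 x 40 = 2,000 candidates)."""
    leg_lengths = [0.1 + 0.01 * i for i in range(50)]   # meters
    gait_freqs = [0.5 + 0.05 * j for j in range(40)]    # Hz
    return min(itertools.product(leg_lengths, gait_freqs),
               key=lambda d: simulate_cost_of_transport(*d))
```

In practice, an exhaustive sweep like this is where a Bayesian surrogate model helps: rather than evaluating every design, the surrogate proposes the next candidate to simulate, making much larger design spaces tractable.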
Josie Hughes completed her undergraduate, master’s, and PhD degrees at the University of Cambridge. She finished her PhD in 2018, developing robots that utilize embodied mechanics and sensory coordination for advanced capabilities. Her research focused on manipulation, sensor technologies, and new approaches for designing and fabricating complex anthropomorphic manipulators. Josie is now working as a Post-Doctoral Research Associate in the Distributed Robotics Lab, MIT. At MIT she is working on computational design methods, wearable technologies, and novel robot fabrication methods. Her work has been published in Science Robotics, Nature Machine Intelligence, Soft Robotics, and many other journals and conferences. Additionally, she has led teams that have won more than five international robotics competitions.