The advent of laparoscopic cholecystectomy almost 30 years ago changed forever the way surgeons visualize and interact with target anatomy. Patients continue to benefit from different yet related image-guided therapies that also provide access to pathology by minimally invasive means. As we continue to depend upon images to guide and inform patient interventions, it is instructive to review the advances made in surgical visualization over its recent history and to look ahead to the issues that must be addressed to optimize interventional visualization. These issues will be reviewed from the perspective of a clinician rather than a computer scientist or physicist, with attention also paid to the often neglected topics of ergonomics and human factors in surgical visualization.
Dr. Park is Chairman of the Department of Surgery at Anne Arundel Medical Center in Annapolis, MD, and Professor of Surgery at Johns Hopkins University School of Medicine. Dr. Park has made major advances in laparoscopic techniques for complex hernia repair and foregut and spleen surgery.
Previously Dr. Park was the Dr. Alex Gillis Professor and Chairman of the Department of Surgery at Dalhousie University in Halifax, NS. Prior to this appointment, Park served as the Campbell and Jeanette Plugge Professor and Vice Chair for the Department of Surgery, the Head of the Division of General Surgery at the University of Maryland Medical Center, and the Chair of the Maryland Advanced Simulation, Training, Research, and Innovation (MASTRI) Center.
He is a member of the American Surgical Association, and is a Fellow of the Royal College of Surgeons of Canada, the American College of Surgeons, and the College of Surgeons of East, Central and Southern Africa. Having a long-held commitment to the training of surgeons in sub-Saharan Africa, he is a past president of the Pan-African Academy of Christian Surgeons (PAACS).
Currently a member of the Board of Directors of SAGES, he has also served as the Fellowship Council’s founding President and as its Board Chair. He is editor-in-chief of Surgical Innovation. The author of over 250 scholarly articles and book chapters, he is widely published in the areas of hernia, solid organ laparoscopy, foregut surgery, surgical education, the “Operating Room of the Future,” and surgical ergonomics. Dr. Park holds 20 patents and has been instrumental in the development and application of new technologies in endoscopic surgery.
Heart failure (HF) represents a significant healthcare burden in the United States and worldwide. With a prevalence of 5.7 million in the US, HF costs the nation an estimated $30.7 billion each year. About half of people who develop HF die within 5 years of diagnosis.
Continuous monitoring of cardiac function in HF using implantable electronic devices suggests reductions in mortality, all-cause hospitalizations, and HF-related hospitalizations. However, most current monitoring approaches collect data (heart rate, pressure, oxygen saturation, metabolites) that are derivative representations of the heart's primary function: mechanical pumping.
Current therapy for end-stage HF, when medical management options have been exhausted, includes heart, lung or heart-lung transplantation, or mechanical circulatory support when a donor organ is not available. Several ventricular assist devices (VADs) provide short and long-term mechanical circulatory support for either left or right ventricles, or both. The ventricles have a complex geometry and contraction pattern that involves coordinated motion of the ventricular free walls and the ventricular septum. Current VAD designs do not address these anatomic and physiologic features of the ventricles, as the VADs are designed as pumps that unload the target ventricle by rerouting blood through an artificial circuit. Moreover, blood contact with the artificial circuit necessitates permanent anticoagulation and predisposes patients to bleeding and thromboembolic complications.
We have designed 1) implantable stretchable sensors that continuously acquire myocardial strain data and 2) soft robotic VADs (SR-VADs) with ventricular septal bracing as innovative approaches to continuously monitor ventricular function and to assist native ventricular contraction in end-stage HF. We demonstrated proof of concept in large animal studies by showing that functional prototypes can be safely and rapidly implanted on a beating heart and function for several hours. Future directions include designing sensors that capture multiaxial strain signals, manufacturing soft actuators that fully mimic ventricular motion, incorporating sensors for organ-in-the-loop control, and validating the approach in longer-term studies.
Nikolay V. Vasilyev graduated from Sechenov First Moscow State Medical University. He completed his residency and fellowship training in cardiovascular surgery at the Bakoulev Center for Cardiovascular Surgery in Moscow, and his research fellowship at the Cleveland Clinic, Cleveland, Ohio, USA. Dr. Vasilyev currently serves as a Staff Scientist at the Department of Cardiac Surgery at Boston Children’s Hospital and as an Assistant Professor of Surgery at the Division of Surgery at Harvard Medical School. His research has focused on the development of image-guided beating-heart cardiovascular interventions and cardiac surgical robotics. This includes clinically driven device design, development of imaging techniques and image processing, computer modeling, and simulation. To date Dr. Vasilyev has published over fifty peer-reviewed papers and five book chapters, and has received four patents, with four more applications pending. He is a member of the European Association of Cardiothoracic Surgery, where he served on the International Co-Operation Committee, and a member of the American Heart Association and the American Society for Artificial Internal Organs. He is a Co-Founder and a Director of the start-up company Nido Surgical Inc.
Bats have a complex skeletal morphology, with both ball-and-socket and revolute joints that interconnect the bones and muscles to create a musculoskeletal system with over 40 degrees of freedom, some of which are passive. Replicating this biological system in a small, lightweight, low-power air vehicle is not only infeasible, but also undesirable; trajectory planning and control for such a system would be intractable, precluding any possibility for synthesizing complex agile maneuvers, or for real-time control. Thus, our goal is to design a robot whose kinematic structure is topologically much simpler than a bat’s, while still providing the ability to mimic the bat-wing morphology during flapping flight, and to find optimal trajectories that exploit the natural system dynamics, enabling effective controller design.
The kinematic design of our robot is driven by motion capture experiments using live bats. In particular, we use principal component analysis to capture the essential bat-wing shape information, and solve a nonlinear optimization problem to determine the optimal kinematic parameters for a simplified parallel kinematic wing structure. We then derive the Lagrangian dynamic equations for this system, along with a model for the aerodynamic forces. We use a shooting-based optimizer to locate physically feasible, periodic solutions to this system, and an event-based control scheme is then derived in order to track the desired trajectory. We demonstrate our results with flight experiments on our robotic bat.
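The shape-analysis step described above can be sketched as follows. Each motion-capture frame is treated as a flattened vector of 3D wing-marker coordinates, and principal component analysis extracts the low-dimensional modes that capture most of the wing-shape variation. The function names, dimensions, and synthetic data here are illustrative assumptions, not the authors' actual pipeline.

```python
import numpy as np

def wing_shape_modes(frames, n_modes=3):
    """frames: (num_frames, 3 * num_markers) array of marker coordinates.

    Returns the mean shape, the first n_modes principal components, and
    the fraction of variance each mode explains.
    """
    mean_shape = frames.mean(axis=0)
    centered = frames - mean_shape
    # SVD of the centered data matrix yields the principal components
    # directly (rows of vt), with singular values in descending order.
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    explained = (s ** 2) / np.sum(s ** 2)
    return mean_shape, vt[:n_modes], explained[:n_modes]

# Synthetic data standing in for motion-capture frames (10 markers, 3D).
rng = np.random.default_rng(0)
frames = rng.normal(size=(200, 30))
mean_shape, modes, explained = wing_shape_modes(frames)
```

A handful of such modes, rather than the full 40-plus degrees of freedom, is what makes the simplified parallel kinematic wing structure tractable to optimize.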
Seth Hutchinson is Professor and KUKA Chair for Robotics in the School of Interactive Computing at the Georgia Institute of Technology, where he also serves as Associate Director of the Institute for Robotics and Intelligent Machines. His research in robotics spans the areas of planning, sensing, and control. He has published more than 200 papers on these topics, and is coauthor of the books “Principles of Robot Motion: Theory, Algorithms, and Implementations,” published by MIT Press, and “Robot Modeling and Control,” published by Wiley.
Hutchinson currently serves on the editorial board of the International Journal of Robotics Research and chairs the steering committee of the IEEE Robotics and Automation Letters. He was Founding Editor-in-Chief of the IEEE Robotics and Automation Society’s Conference Editorial Board (2006-2008) and Editor-in-Chief of the IEEE Transactions on Robotics (2008-2013).
Hutchinson is an Emeritus Professor of Electrical and Computer Engineering at the University of Illinois at Urbana-Champaign, where he was Professor of ECE until 2018, serving as Associate Head for Undergraduate Affairs from 2001 to 2007. He received his Ph.D. from Purdue University in 1988. Hutchinson is a Fellow of the IEEE.
Everybody knows that adding fiducial markers to a scene will improve the performance of Structure-from-Motion (SfM) algorithms for vision-based 3D reconstruction, but nobody knows exactly how. I’ll show you several obvious ways to use markers that work poorly. Then, I’ll show you a simple but less obvious way to use them that seems to work very well.
Timothy Bretl comes from the University of Illinois at Urbana-Champaign, where he is both an Associate Professor and the Associate Head for Undergraduate Programs in the Department of Aerospace Engineering. He holds an affiliate appointment in the Coordinated Science Laboratory, where he leads a research group that works on a diverse set of projects in robotics and neuroscience (http://bretl.csl.illinois.edu/). He has also received every award for undergraduate teaching that is granted by his department, college, and campus.
The functionality of artificial manipulators could be enhanced by artificial “haptic intelligence” that enables the identification of object features via touch for semi-autonomous decision-making and/or display to a human operator. This could be especially useful when complementary sensory modalities, such as vision, are unavailable. I will highlight past and present work to enhance the functionality of artificial hands in human-machine systems. I will describe efforts to develop multimodal tactile sensor skins, and to teach robots how to haptically perceive salient geometric features such as edges and fingertip-sized bumps and pits using machine learning techniques. I will describe the use of reinforcement learning to teach robots goal-based policies for a functional contour-following task: the closure of a ziplock bag. Our Contextual Multi-Armed Bandits approach tightly couples robot actions to the tactile and proprioceptive consequences of the actions, and selects future actions based on prior experiences, the current context, and a functional task goal. Finally, I will describe current efforts to develop real-time capabilities for the perception of tactile directionality, and to develop models for haptically locating objects buried in granular media. Real-time haptic perception and decision-making capabilities could be used to advance semi-autonomous robot systems and reduce the cognitive burden on human teleoperators of devices ranging from wheelchair-mounted robots to explosive ordnance disposal robots.
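The action-selection idea above can be sketched with a minimal epsilon-greedy contextual bandit: the "context" stands in for tactile and proprioceptive features, and the "arms" for candidate robot actions. The class, the linear reward model, and all parameters are illustrative assumptions, not the speaker's actual algorithm.

```python
import numpy as np

class ContextualBandit:
    """Epsilon-greedy contextual bandit with one linear reward model per arm."""

    def __init__(self, n_arms, dim, epsilon=0.1, seed=0):
        self.rng = np.random.default_rng(seed)
        self.epsilon = epsilon
        # Ridge-regression-style sufficient statistics per arm.
        self.A = [np.eye(dim) for _ in range(n_arms)]
        self.b = [np.zeros(dim) for _ in range(n_arms)]

    def select(self, context):
        # Explore with probability epsilon; otherwise pick the arm whose
        # estimated reward for this context is highest.
        if self.rng.random() < self.epsilon:
            return int(self.rng.integers(len(self.A)))
        scores = [context @ np.linalg.solve(A, b)
                  for A, b in zip(self.A, self.b)]
        return int(np.argmax(scores))

    def update(self, arm, context, reward):
        # Accumulate statistics for the chosen arm only, coupling the
        # action to its observed consequences.
        self.A[arm] += np.outer(context, context)
        self.b[arm] += reward * context
```

In a contour-following setting, the context vector might encode recent tactile readings and the reward might measure progress along the bag seam; the bandit then prefers actions that historically advanced the task in similar contexts.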
Veronica J. Santos is an Associate Professor in the Mechanical and Aerospace Engineering Department at the University of California, Los Angeles, and Director of the UCLA Biomechatronics Lab (http://BiomechatronicsLab.ucla.edu). Dr. Santos received her B.S. in mechanical engineering with a music minor from the University of California at Berkeley (1999), was a Quality and R&D Engineer at Guidant Corporation, and received her M.S. and Ph.D. in mechanical engineering with a biometry minor from Cornell University (2007). As a postdoc at the University of Southern California, she contributed to the development of a biomimetic tactile sensor for prosthetic hands. From 2008 to 2014, Dr. Santos was an Assistant Professor of Mechanical and Aerospace Engineering at Arizona State University. Her research interests include human hand biomechanics, human-machine systems, haptics, tactile sensors, machine perception, prosthetics, and robotics for grasp and manipulation. Dr. Santos was selected for an NSF CAREER Award (2010), three engineering teaching awards (2012, 2013, 2017), an ASU Young Investigator Award (2014), and as a National Academy of Engineering Frontiers of Engineering Education Symposium participant (2010). She currently serves as an Editor for the IEEE International Conference on Robotics and Automation (2017-2019), an Associate Editor for the ASME Journal of Mechanisms and Robotics (2016-2019), and an Associate Editor for the ACM Transactions on Human-Robot Interaction (2018-2021).
Autonomous driving has been an active area of research and development over the last decade. Despite considerable progress, there are many open challenges, including automated driving in dense and urban scenes. In this talk, we give an overview of our recent work on simulation and navigation technologies for autonomous vehicles. We present a novel simulator, AutonoVi-Sim, that uses recent developments in physics-based simulation, robot motion planning, game engines, and behavior modeling. We describe novel methods for interactive simulation of multiple vehicles with unique steering or acceleration limits, taking into account vehicle dynamics constraints. In addition, AutonoVi-Sim supports navigation for non-vehicle traffic participants such as cyclists and pedestrians. AutonoVi-Sim also facilitates data analysis, allowing for capturing video from the vehicle’s perspective, exporting sensor data such as relative positions of other traffic participants, camera data for a specific sensor, and detection and classification results. We highlight its performance in traffic and driving scenarios. We also present novel multi-agent simulation algorithms using reciprocal velocity obstacles that can model the behavior and trajectories of different traffic agents in dense scenarios, including cars, buses, bicycles, and pedestrians. We also present novel methods for extracting trajectories from videos and use them for behavior modeling and safe navigation.
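A toy version of the velocity-obstacle idea underlying the reciprocal velocity obstacle (RVO) approach mentioned above can be sketched as follows: an agent's velocity relative to a neighbor must stay outside the set of relative velocities that lead to collision within a time horizon. This is a simplified sampled check for 2D disc agents; all names and parameters are illustrative, not the AutonoVi-Sim implementation.

```python
import numpy as np

def in_velocity_obstacle(p_a, p_b, v_rel, r_sum, horizon=5.0, steps=50):
    """True if relative velocity v_rel brings A within r_sum of B at some
    time in [0, horizon] (sampled approximation of the VO cone)."""
    rel = np.asarray(p_b, dtype=float) - np.asarray(p_a, dtype=float)
    for t in np.linspace(0.0, horizon, steps):
        if np.linalg.norm(rel - t * np.asarray(v_rel, dtype=float)) < r_sum:
            return True
    return False

def pick_safe_velocity(p_a, p_b, v_b, candidates, r_sum):
    """Return the first candidate velocity for A whose relative velocity
    with respect to B lies outside the velocity obstacle."""
    for v_a in candidates:
        v_rel = np.asarray(v_a, dtype=float) - np.asarray(v_b, dtype=float)
        if not in_velocity_obstacle(p_a, p_b, v_rel, r_sum):
            return v_a
    return None
```

The reciprocal variant additionally splits the avoidance responsibility between the two agents so that both deviating by half produces oscillation-free motion; that refinement is omitted here for brevity.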
Dinesh Manocha is the Paul Chrisman Iribe Chair in Computer Science & Electrical and Computer Engineering at the University of Maryland, College Park. He is also the Phi Delta Theta/Matthew Mason Distinguished Professor Emeritus of Computer Science at the University of North Carolina at Chapel Hill. He has won many awards, including an Alfred P. Sloan Research Fellowship, the NSF CAREER Award, the ONR Young Investigator Award, and the Hettleman Prize for scholarly achievement. His research interests include multi-agent simulation, virtual environments, physically-based modeling, and robotics. He has published more than 500 papers and supervised more than 36 PhD dissertations. He is an inventor on 9 patents, several of which have been licensed to industry. His work has been covered by the New York Times, NPR, the Boston Globe, the Washington Post, ZDNet, and a DARPA Legacy press release. He was a co-founder of Impulsonic, a developer of physics-based audio simulation technologies, which was acquired by Valve Inc. He is a Fellow of AAAI, AAAS, ACM, and IEEE and also received the Distinguished Alumni Award from IIT Delhi. See http://www.cs.umd.edu/~dm
Perception precedes action, in both the biological world as well as the technologies maturing today that will bring us autonomous cars, aerial vehicles, robotic arms and mobile platforms. The problem of probabilistic state estimation via sensor measurements takes on a variety of forms, resulting in information about our own motion as well as the structure of the world around us. In this talk, I will discuss some approaches that my research group has been developing that focus on estimating these quantities online and in real-time in extreme environments where dust, fog and other visually obscuring phenomena are widely present and when sensor calibration is altered or degraded over time. These approaches include new techniques in computer vision, visual-inertial SLAM, geometric reconstruction, nonlinear optimization, and even some sensor development. The methods I discuss have an application-specific focus to ground vehicles in the subterranean environment, but are also currently deployed in the agriculture, search and rescue, and industrial human-robot collaboration contexts.
Chris Heckman is an Assistant Professor and the Jacques Pankove Faculty Fellow in the Department of Computer Science at the University of Colorado at Boulder, where he also holds appointments in the Aerospace Engineering Sciences and Electrical and Computer Engineering departments. Professor Heckman earned his B.S. in Mechanical Engineering from UC Berkeley in 2008 and his Ph.D. in Theoretical and Applied Mechanics from Cornell University in 2012, where he was an NSF Graduate Research Fellow. He had postdoctoral appointments at the Naval Research Laboratory in Washington, D.C. as an NRC Research Associate, and in the Autonomous Robotics and Perception Group at CU Boulder as a Research Scientist, before joining the faculty there in 2016. He currently is leading one of the funded competition teams in the DARPA Subterranean Challenge; his past work has been funded by NSF, DARPA and multiple industry partners. His research focuses on developing mathematical and systems-level frameworks for autonomous control and perception, particularly vision and sensor fusion. His work applies concepts of nonlinear dynamical systems to the design of control systems for autonomous agents, in particular ground and aquatic vehicles, enabling them to navigate uncertain and rapidly-changing environments. A hallmark of his research is the implementation of these systems on experimental platforms.
My lab creates medical robots not only for minimally invasive surgery, but also for tissue regeneration. This talk will describe two of our technologies. The first consists of a class of robot implants designed to apply traction forces over a period of weeks inside the body so as to induce the regeneration of soft tissues. Applications include lengthening the esophagus and bowel for the treatment of congenital defects and disease. The second technology is a type of continuum robot that is based on concentrically combining pre-curved elastic tubes. We are using this technology to create multi-armed systems for intracranial endoscopic surgery. We are also developing image-guided catheters that can navigate autonomously inside the blood-filled beating heart.
Pierre E. Dupont is Chief of Pediatric Cardiac Bioengineering and holder of the Edward P. Marram Chair at Boston Children’s Hospital. He is also a Professor of Surgery at Harvard Medical School. His research group develops robotic instrumentation and imaging technology for medical applications. He received the BS, MS and PhD degrees in Mechanical Engineering from Rensselaer Polytechnic Institute, Troy, NY, USA. After graduation, he was a Postdoctoral Fellow in the School of Engineering and Applied Sciences at Harvard University, Cambridge, MA, USA. He subsequently moved to Boston University, Boston, MA, USA where he was a Professor of Mechanical Engineering and Biomedical Engineering. He is an IEEE Fellow, a Senior Editor for the IEEE Transactions on Robotics and a member of the Advisory Board for Science Robotics.
As the world is getting instrumented with numerous sensors, cameras, and robots, there is potential to transform fields as diverse as environmental monitoring, search and rescue, security and surveillance, localization and mapping, and structure inspection. One of the great technical challenges is to control the sensors, cameras, and robots intelligently in order to extract useful information. In this talk, I will present a unified approach for autonomous information acquisition, aimed at improving the accuracy and efficiency of tracking evolving phenomena of interest. I will formulate a decision problem for maximizing relevant information measures and focus on the design of scalable control strategies for multiple sensing systems. First, I will present an approximation algorithm for non-greedy informative planning with linear Gaussian models. The approach reduces the complexity in the length of the planning horizon and in the number of sensors and provides suboptimality guarantees. An application to active multi-robot localization and mapping will be presented. Next, I will remove the linear Gaussian assumptions and will address active object recognition and robot localization using detected objects. The techniques presented in this talk offer an effective and scalable approach for controlled information acquisition.
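The linear Gaussian planning setting discussed above can be sketched with a greedy variant: each sensor i observes z_i = H_i x + noise, and sensors are chosen one at a time to minimize the posterior uncertainty, measured by the log-determinant of the covariance after a Kalman measurement update. This is a minimal greedy baseline, not the talk's non-greedy approximation algorithm, and the matrices are illustrative placeholders.

```python
import numpy as np

def posterior_cov(P, H, R):
    """Kalman measurement update of covariance P for observation model
    z = H x + v, with v ~ N(0, R)."""
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    return (np.eye(P.shape[0]) - K @ H) @ P

def greedy_select(P0, sensors, k):
    """Greedily pick k sensors (list of (H, R) pairs) that most reduce
    the log-determinant of the posterior covariance."""
    chosen, P = [], P0
    for _ in range(k):
        best, best_cost = None, np.inf
        for i, (H, R) in enumerate(sensors):
            if i in chosen:
                continue
            # Smaller log-det means less residual uncertainty.
            cost = np.linalg.slogdet(posterior_cov(P, H, R))[1]
            if cost < best_cost:
                best, best_cost = i, cost
        chosen.append(best)
        P = posterior_cov(P, *sensors[best])
    return chosen
```

The non-greedy planning described in the talk extends this idea over a multi-step horizon while keeping the complexity manageable in the horizon length and the number of sensors.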
George J. Pappas is the Joseph Moore Professor and Chair of the Department of Electrical and Systems Engineering at the University of Pennsylvania. He also holds a secondary appointment in the Departments of Computer and Information Science and Mechanical Engineering and Applied Mechanics. He is a member of the GRASP Lab and the PRECISE Center. He has previously served as the Deputy Dean for Research in the School of Engineering and Applied Science. His research focuses on control theory and in particular hybrid systems, embedded systems, and hierarchical and distributed control systems, with applications to unmanned aerial vehicles, distributed robotics, green buildings, and biomolecular networks. He is a Fellow of IEEE, and has received various awards such as the Antonio Ruberti Young Researcher Prize, the George S. Axelby Award, and the National Science Foundation PECASE.