The functionality of artificial manipulators could be enhanced by artificial “haptic intelligence” that enables the identification of object features via touch for semi-autonomous decision-making and/or display to a human operator. This could be especially useful when complementary sensory modalities, such as vision, are unavailable. I will highlight past and present work to enhance the functionality of artificial hands in human-machine systems. I will describe efforts to develop multimodal tactile sensor skins, and to teach robots how to haptically perceive salient geometric features such as edges and fingertip-sized bumps and pits using machine learning techniques. I will describe the use of reinforcement learning to teach robots goal-based policies for a functional contour-following task: the closure of a ziplock bag. Our Contextual Multi-Armed Bandits approach tightly couples robot actions to the tactile and proprioceptive consequences of the actions, and selects future actions based on prior experiences, the current context, and a functional task goal. Finally, I will describe current efforts to develop real-time capabilities for the perception of tactile directionality, and to develop models for haptically locating objects buried in granular media. Real-time haptic perception and decision-making capabilities could be used to advance semi-autonomous robot systems and reduce the cognitive burden on human teleoperators of devices ranging from wheelchair-mounted robots to explosive ordnance disposal robots.
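The Contextual Multi-Armed Bandits framing above couples action selection to observed consequences and prior experience. As a rough illustration (not the lab's actual method), the following is a minimal epsilon-greedy contextual bandit sketch; the contexts and actions are hypothetical stand-ins for tactile/proprioceptive states and robot moves, and the reward signal is assumed to encode task progress:

```python
import random

class ContextualBandit:
    """Minimal epsilon-greedy contextual multi-armed bandit.

    Hypothetical illustration: contexts and actions are small discrete
    sets standing in for tactile/proprioceptive states and robot moves.
    """

    def __init__(self, actions, epsilon=0.1, seed=0):
        self.actions = list(actions)
        self.epsilon = epsilon
        self.rng = random.Random(seed)
        self.counts = {}   # (context, action) -> number of pulls
        self.values = {}   # (context, action) -> running mean reward

    def select(self, context):
        # Explore with probability epsilon; otherwise exploit the
        # action with the best estimated reward in this context.
        if self.rng.random() < self.epsilon:
            return self.rng.choice(self.actions)
        return max(self.actions,
                   key=lambda a: self.values.get((context, a), 0.0))

    def update(self, context, action, reward):
        # Incrementally update the running mean reward estimate.
        key = (context, action)
        n = self.counts.get(key, 0) + 1
        self.counts[key] = n
        mean = self.values.get(key, 0.0)
        self.values[key] = mean + (reward - mean) / n
```

In use, the robot would observe a context (e.g., a tactile feature near the bag seal), call `select`, execute the action, and feed the task-goal reward back through `update`; over repeated trials the policy concentrates on actions whose tactile consequences advance the goal.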
Veronica J. Santos is an Associate Professor in the Mechanical and Aerospace Engineering Department at the University of California, Los Angeles, and Director of the UCLA Biomechatronics Lab (http://BiomechatronicsLab.ucla.edu). Dr. Santos received her B.S. in mechanical engineering with a music minor from the University of California at Berkeley (1999), was a Quality and R&D Engineer at Guidant Corporation, and received her M.S. and Ph.D. in mechanical engineering with a biometry minor from Cornell University (2007). As a postdoc at the University of Southern California, she contributed to the development of a biomimetic tactile sensor for prosthetic hands. From 2008 to 2014, Dr. Santos was an Assistant Professor of Mechanical and Aerospace Engineering at Arizona State University. Her research interests include human hand biomechanics, human-machine systems, haptics, tactile sensors, machine perception, prosthetics, and robotics for grasp and manipulation. Dr. Santos was selected for an NSF CAREER Award (2010), three engineering teaching awards (2012, 2013, 2017), an ASU Young Investigator Award (2014), and as a National Academy of Engineering Frontiers of Engineering Education Symposium participant (2010). She currently serves as an Editor for the IEEE International Conference on Robotics and Automation (2017-2019), an Associate Editor for the ASME Journal of Mechanisms and Robotics (2016-2019), and an Associate Editor for the ACM Transactions on Human-Robot Interaction (2018-2021).
Autonomous driving has been an active area of research and development over the last decade. Despite considerable progress, many open challenges remain, including automated driving in dense and urban scenes. In this talk, we give an overview of our recent work on simulation and navigation technologies for autonomous vehicles. We present a novel simulator, AutonoVi-Sim, that builds on recent developments in physics-based simulation, robot motion planning, game engines, and behavior modeling. We describe novel methods for interactive simulation of multiple vehicles with unique steering or acceleration limits, taking vehicle dynamics constraints into account. In addition, AutonoVi-Sim supports navigation for non-vehicle traffic participants such as cyclists and pedestrians. AutonoVi-Sim also facilitates data analysis, allowing for capturing video from the vehicle’s perspective and exporting sensor data such as relative positions of other traffic participants, camera data for a specific sensor, and detection and classification results. We highlight its performance in traffic and driving scenarios. We also present novel multi-agent simulation algorithms based on reciprocal velocity obstacles that can model the behavior and trajectories of different traffic agents in dense scenarios, including cars, buses, bicycles, and pedestrians. Finally, we present novel methods for extracting trajectories from videos and using them for behavior modeling and safe navigation.
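The velocity-obstacle idea underlying the multi-agent algorithms above can be sketched compactly. The following is a simplified sampling-based velocity-obstacle example, not the full reciprocal formulation used in the talk's work: candidate velocities are sampled on a circle, any candidate whose closest approach to a moving obstacle within the time horizon violates the combined radius is discarded, and the safe candidate nearest the preferred velocity is chosen. All parameters (`radius`, `horizon`, etc.) are illustrative assumptions:

```python
import math

def vo_velocity(pos, pref_vel, obstacles, radius=0.5, horizon=5.0,
                speed=1.0, samples=72):
    """Pick the sampled velocity closest to the preferred velocity that
    avoids all obstacles within the time horizon.

    Simplified velocity-obstacle sketch (not full reciprocal VO):
    `obstacles` is a list of (position, velocity) pairs; all agents
    are discs sharing the same `radius`.
    """
    def collides(v):
        for (opos, ovel) in obstacles:
            # Relative position and velocity of the obstacle w.r.t. us.
            rx, ry = opos[0] - pos[0], opos[1] - pos[1]
            vx, vy = ovel[0] - v[0], ovel[1] - v[1]
            # Time of closest approach, clamped to [0, horizon].
            vv = vx * vx + vy * vy
            t = 0.0 if vv == 0 else max(0.0, min(horizon,
                                                 -(rx * vx + ry * vy) / vv))
            cx, cy = rx + vx * t, ry + vy * t
            if math.hypot(cx, cy) < 2 * radius:  # discs would overlap
                return True
        return False

    # Candidate set: standing still, plus velocities sampled on a circle.
    candidates = [(0.0, 0.0)]
    for i in range(samples):
        ang = 2 * math.pi * i / samples
        candidates.append((speed * math.cos(ang), speed * math.sin(ang)))
    safe = [v for v in candidates if not collides(v)]
    if not safe:
        return (0.0, 0.0)  # no collision-free sample: stop
    return min(safe, key=lambda v: math.hypot(v[0] - pref_vel[0],
                                              v[1] - pref_vel[1]))
```

For example, an agent heading toward a stationary obstacle directly ahead will select a slightly deflected velocity rather than its preferred straight-line one. Reciprocal variants additionally split the avoidance responsibility between agents so that pairs of agents do not oscillate.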
Dinesh Manocha is the Paul Chrisman Iribe Chair in Computer Science & Electrical and Computer Engineering at the University of Maryland College Park. He is also the Phi Delta Theta/Matthew Mason Distinguished Professor Emeritus of Computer Science at the University of North Carolina – Chapel Hill. He has won many awards, including Alfred P. Sloan Research Fellow, the NSF Career Award, the ONR Young Investigator Award, and the Hettleman Prize for scholarly achievement. His research interests include multi-agent simulation, virtual environments, physically-based modeling, and robotics. He has published more than 500 papers and supervised more than 36 PhD dissertations. He is an inventor of 9 patents, several of which have been licensed to industry. His work has been covered by the New York Times, NPR, Boston Globe, Washington Post, ZDNet, as well as DARPA Legacy Press Release. He was a co-founder of Impulsonic, a developer of physics-based audio simulation technologies, which was acquired by Valve Inc. He is a Fellow of AAAI, AAAS, ACM, and IEEE and also received the Distinguished Alumni Award from IIT Delhi. See http://www.cs.umd.edu/~dm
Perception precedes action, both in the biological world and in the technologies maturing today that will bring us autonomous cars, aerial vehicles, robotic arms, and mobile platforms. The problem of probabilistic state estimation via sensor measurements takes on a variety of forms, resulting in information about our own motion as well as the structure of the world around us. In this talk, I will discuss some approaches that my research group has been developing that focus on estimating these quantities online and in real-time in extreme environments, where dust, fog, and other visually obscuring phenomena are widely present and where sensor calibration is altered or degraded over time. These approaches include new techniques in computer vision, visual-inertial SLAM, geometric reconstruction, nonlinear optimization, and even some sensor development. The methods I discuss have an application-specific focus on ground vehicles in the subterranean environment, but are also currently deployed in agriculture, search and rescue, and industrial human-robot collaboration contexts.
Chris Heckman is an Assistant Professor and the Jacques Pankove Faculty Fellow in the Department of Computer Science at the University of Colorado at Boulder, where he also holds appointments in the Aerospace Engineering Sciences and Electrical and Computer Engineering departments. Professor Heckman earned his B.S. in Mechanical Engineering from UC Berkeley in 2008 and his Ph.D. in Theoretical and Applied Mechanics from Cornell University in 2012, where he was an NSF Graduate Research Fellow. He held postdoctoral appointments at the Naval Research Laboratory in Washington, D.C. as an NRC Research Associate, and in the Autonomous Robotics and Perception Group at CU Boulder as a Research Scientist, before joining the faculty there in 2016. He currently leads one of the funded competition teams in the DARPA Subterranean Challenge; his past work has been funded by NSF, DARPA, and multiple industry partners. His research focuses on developing mathematical and systems-level frameworks for autonomous control and perception, particularly vision and sensor fusion. His work applies concepts of nonlinear dynamical systems to the design of control systems for autonomous agents, in particular ground and aquatic vehicles, enabling them to navigate uncertain and rapidly changing environments. A hallmark of his research is the implementation of these systems on experimental platforms.
My lab creates medical robots not only for minimally invasive surgery, but also for tissue regeneration. This talk will describe two of our technologies. The first consists of a class of robot implants designed to apply traction forces over a period of weeks inside the body so as to induce the regeneration of soft tissues. Applications include lengthening the esophagus and bowel for the treatment of congenital defects and disease. The second technology is a type of continuum robot that is based on concentrically combining pre-curved elastic tubes. We are using this technology to create multi-armed systems for intracranial endoscopic surgery. We are also developing image-guided catheters that can navigate autonomously inside the blood-filled beating heart.
Pierre E. Dupont is Chief of Pediatric Cardiac Bioengineering and holder of the Edward P. Marram Chair at Boston Children’s Hospital. He is also a Professor of Surgery at Harvard Medical School. His research group develops robotic instrumentation and imaging technology for medical applications. He received the BS, MS and PhD degrees in Mechanical Engineering from Rensselaer Polytechnic Institute, Troy, NY, USA. After graduation, he was a Postdoctoral Fellow in the School of Engineering and Applied Sciences at Harvard University, Cambridge, MA, USA. He subsequently moved to Boston University, Boston, MA, USA where he was a Professor of Mechanical Engineering and Biomedical Engineering. He is an IEEE Fellow, a Senior Editor for the IEEE Transactions on Robotics and a member of the Advisory Board for Science Robotics.
As the world becomes instrumented with numerous sensors, cameras, and robots, there is potential to transform fields as diverse as environmental monitoring, search and rescue, security and surveillance, localization and mapping, and structure inspection. One of the great technical challenges is to control the sensors, cameras, and robots intelligently in order to extract useful information. In this talk, I will present a unified approach for autonomous information acquisition, aimed at improving the accuracy and efficiency of tracking evolving phenomena of interest. I will formulate a decision problem for maximizing relevant information measures and focus on the design of scalable control strategies for multiple sensing systems. First, I will present an approximation algorithm for non-greedy informative planning with linear Gaussian models. The approach reduces the complexity in the length of the planning horizon and in the number of sensors, and provides suboptimality guarantees. An application to active multi-robot localization and mapping will be presented. Next, I will remove the linear Gaussian assumptions and address active object recognition and robot localization using detected objects. The techniques presented in this talk offer an effective and scalable approach for controlled information acquisition.
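To make the information-acquisition objective concrete, here is a simple baseline sketch, not the non-greedy algorithm described in the abstract: under an assumed linear-Gaussian model with independent scalar states, each candidate sensor directly measures one state with known noise variance, its mutual-information gain is 0.5 * log(1 + var / noise), and sensors are chosen greedily under a budget, with variances shrunk by the scalar Kalman update after each selection:

```python
import math

def greedy_sensor_selection(prior_vars, sensors, budget):
    """Greedily choose sensors to maximize mutual information.

    Hypothetical linear-Gaussian sketch: the state is a vector of
    independent Gaussian variables with variances `prior_vars`; each
    sensor is a (state_index, noise_var) pair measuring one state
    directly. The information gain of a sensor on its state is
    0.5 * log(1 + var / noise_var), and selecting it shrinks that
    state's variance via the scalar Kalman update.
    """
    vars_ = list(prior_vars)
    chosen = []
    for _ in range(budget):
        best, best_gain = None, 0.0
        for k, (i, noise) in enumerate(sensors):
            if k in chosen:
                continue
            gain = 0.5 * math.log(1.0 + vars_[i] / noise)
            if gain > best_gain:
                best, best_gain = k, gain
        if best is None:
            break  # no sensor adds information
        chosen.append(best)
        i, noise = sensors[best]
        # Scalar Kalman update: posterior variance after the measurement.
        vars_[i] = vars_[i] * noise / (vars_[i] + noise)
    return chosen, vars_
```

Because mutual information is submodular here, this greedy baseline already carries a (1 - 1/e) suboptimality guarantee; the talk's contribution concerns non-greedy planning over a horizon, with complexity reductions in the horizon length and number of sensors.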
George J. Pappas is the Joseph Moore Professor and Chair of the Department of Electrical and Systems Engineering at the University of Pennsylvania. He also holds a secondary appointment in the Departments of Computer and Information Sciences and Mechanical Engineering and Applied Mechanics. He is a member of the GRASP Lab and the PRECISE Center. He has previously served as the Deputy Dean for Research in the School of Engineering and Applied Science. His research focuses on control theory and, in particular, hybrid systems, embedded systems, and hierarchical and distributed control systems, with applications to unmanned aerial vehicles, distributed robotics, green buildings, and biomolecular networks. He is a Fellow of IEEE, and has received various awards such as the Antonio Ruberti Young Researcher Prize, the George S. Axelby Award, and the National Science Foundation PECASE.
LCSR Holiday Potluck
Friday, December 14th
You can help by contributing your favorite holiday dish (regional specialties strongly encouraged!) to this potluck get-together – sign up here for your yummy contribution.
Main dishes will be provided, as will plates, napkins, utensils, etc.
Representing a major paradigm shift from open surgery, minimally invasive surgery (MIS) assisted by robots and sensing is emerging, accessing surgical targets via either keyholes or natural orifices. Delicate and safe manipulation is challenging due to the constraints imposed by the mode of robotic access, the confined workspace, complicated surgical environments, and the limits of available technologies, particularly for endoluminal curvilinear targeting and guidance. Addressing these challenges and aiming at human-centered flexible robots, this talk will share our recent biorobotics research on continuum mechanisms, compliance modulation, delicate sensing, and collaborative human-robot interaction, mostly in the context of medical applications. Compliant continuum robots with embodied intelligence allow us to bypass critically important intracranial or intracorporeal structures, conform to curvy passages, and gain direct access to target sites under proper planning and navigation, thus significantly reducing the invasiveness and trauma of surgery.
Hongliang Ren is currently an assistant professor leading a research group on medical mechatronics in the Biomedical Engineering Department of the National University of Singapore (NUS). He is an affiliated Principal Investigator for the Singapore Institute of Neurotechnology (SINAPSE), the NUS (Suzhou) Research Institute, and the Advanced Robotics Center at NUS. Dr. Ren received his Ph.D. in Electronic Engineering (specialized in Biomedical Engineering) from The Chinese University of Hong Kong (CUHK) in 2008. Prior to joining NUS, he was a Research Fellow at The Johns Hopkins University, Children’s Hospital Boston & Harvard Medical School, and Children’s National Medical Center, USA. His main areas of interest include biorobotics and intelligent control, medical mechatronics, computer-integrated surgery, and multisensor data fusion in surgical robotics. Dr. Ren is an IEEE Senior Member and currently serves as an Associate Editor for IEEE Transactions on Automation Science & Engineering (T-ASE) and Medical & Biological Engineering & Computing (MBEC). He is the recipient of the NUS Young Investigator Award, the IAMBE Early Career Award (2018), and the Interstellar Early Career Investigator Award (2018).
Robotic-assisted surgery (RAS) systems, such as the da Vinci (Intuitive Surgical), incorporate highly dexterous tools, hand tremor filtering, and motion scaling to enable a minimally invasive surgery (MIS) approach, reducing collateral damage and patient recovery times. However, current state-of-the-art tele-robotic surgery requires the surgeon to control every motion of the robot, resulting in long procedure times and inconsistent results. The advantages of autonomous robotic functionality have been demonstrated in applications outside of medicine, ranging from manufacturing to self-driving cars. A limited form of autonomous RAS with pre-planned functionality was introduced in bony orthopedic procedures, radiotherapy, and cochlear implants. Efforts in automating deformable and unstructured soft tissue surgeries have so far been limited to elemental tasks such as knot tying, needle insertion, and executing predefined motions.
The goal of this research is to develop a robotic surgical system to perform complex soft tissue surgeries such as anastomosis and tumor resections autonomously – with the ultimate goal of improving surgical outcome and reducing procedure times. The system consists of a lightweight robot arm, custom interchangeable robotic tools for suturing and electro-cautery, a plenoptic three-dimensional and near-infrared fluorescent (NIRF) imaging system, and autonomous robot control algorithms. We demonstrated that the outcome of supervised autonomous anastomoses is superior to surgery performed by expert surgeons and RAS techniques in ex vivo and in vivo porcine studies. We also demonstrated autonomous tumor resection results using visual servoing with consistent tumor margins.
Dr. Axel Krieger joined the Department of Mechanical Engineering in the University of Maryland's Clark School of Engineering as an Assistant Professor in 2017. He leads a group of students, scientists, and engineers in the research and development of robotic tools and laparoscopic devices. Projects include the development of a surgical robot called the smart tissue autonomous robot (STAR) and the use of 3D printing for surgical planning and patient-specific implants. Professor Krieger holds several licensed patents for biomedical devices. Before joining the University of Maryland, he was Assistant Research Professor and program lead for Smart Tools at the Sheikh Zayed Institute for Pediatric Surgical Innovation at Children’s National. He has several years of experience in private industry at Sentinelle Medical Inc. and Hologic Inc., where he was a Product Leader developing devices and software systems from concept to FDA approval and market introduction. Dr. Krieger completed his undergraduate and master’s degrees at the University of Karlsruhe in Germany and his doctorate at Johns Hopkins, where he pioneered an MRI-guided prostate biopsy robot used in over 50 patient procedures at three hospitals.