Calendar

Mar
4
Wed
Marcia O’Malley: Natural Sensory Feedback for Intuitive Prosthesis Control @ B17 Hackerman Hall
Mar 4 @ 12:00 pm – 1:00 pm

Abstract

Able-bodied individuals can easily take for granted the dexterous capabilities of the human hand. Key to our ability to easily manipulate common objects are the rich sensory cues conveying force and object properties, often without the need for visual attention to the task. For amputees, these manipulations can require significant time, visual attention, and cognitive effort due to the lack of sensory feedback in even the most advanced prosthetic hands. In this talk I will describe our approach to improving dexterous manipulation with prosthetic hands, and a series of experiments that have provided new insight into the importance of providing natural sensory feedback cues to the residual limb for prosthesis users. I will also briefly describe the other major research thrusts of my group, including robotic rehabilitation of the upper limb following stroke and incomplete spinal cord injury, and quantitative assessment of motor skill for training in virtual environments, with a special focus on endovascular surgical procedures.

Speaker Bio

Marcia O’Malley received the B.S. degree in mechanical engineering from Purdue University in 1996, and the M.S. and Ph.D. degrees in mechanical engineering from Vanderbilt University in 1999 and 2001, respectively. She is currently Professor of Mechanical Engineering and of Computer Science at Rice University and directs the Mechatronics and Haptic Interfaces Lab. She is an Adjunct Associate Professor in the Departments of Physical Medicine and Rehabilitation at both Baylor College of Medicine and the University of Texas Medical School at Houston. Additionally, she is the Director of Rehabilitation Engineering at TIRR-Memorial Hermann Hospital, and is a co-founder of Houston Medical Robotics, Inc. Her research addresses issues that arise when humans physically interact with robotic systems, with a focus on training and rehabilitation in virtual environments. In 2008, she received the George R. Brown Award for Superior Teaching at Rice University. O’Malley is a 2004 ONR Young Investigator and the recipient of the NSF CAREER Award in 2005. She is a Fellow of the American Society of Mechanical Engineers, and currently serves on the editorial board for the ASME Journal of Mechanisms and Robotics.

Mar
11
Wed
Terry Peters: Augmented Reality and Ultrasound for Interventional Cardiac Guidance @ B17 Hackerman Hall
Mar 11 @ 12:00 pm – 1:00 pm

Abstract

Many intra-cardiac interventions are performed either via open-heart surgery or using minimally invasive approaches, where instrumentation is introduced into the cardiac chambers via the vascular system or heart wall. While many of the latter procedures are performed under x-ray guidance, for some of these x-ray imaging is not appropriate, and ultrasound is the preferred intra-operative imaging modality. One such procedure involves the repair of a mitral-valve leaflet using an instrument introduced into the heart via the apex. The standard of care for this procedure employs a 3D trans-esophageal probe for guidance, but primarily in its bi-plane mode, with full 3D being used only sporadically. In spite of the clinical success of this procedure, many problems are encountered during the navigation of the instrument to the site of the therapy. To overcome these difficulties, we have developed a guidance platform that tracks the US probe and instrument, and augments the US images with virtual elements representing the instrument and target, to optimise the navigation process. Results of using this approach in animal studies have demonstrated increased performance in multiple metrics, including total tool distance from the ideal pathway, total navigation time, and total tool path length, by factors of 3, 4, and 5 respectively, as well as a 40-fold reduction in the number of times an instrument intruded into potentially unsafe zones in the heart. Ongoing work involves the application of these ideas to aortic valve replacement as well.
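
At the heart of such a tracked-guidance platform is a chain of rigid-body transforms that maps the tracked instrument into the coordinate frame of the ultrasound image, so that virtual elements can be overlaid in the right place. The sketch below illustrates that composition with numpy; the frame names and calibration transform are illustrative assumptions, not the authors' actual software interface.

```python
import numpy as np

def tool_tip_in_us_image(T_tracker_probe, T_probe_image,
                         T_tracker_tool, p_tip_tool):
    """Map the instrument tip into ultrasound-image coordinates.

    Each T_a_b is a 4x4 homogeneous transform taking frame-b points
    into frame a. T_tracker_probe and T_tracker_tool come from the
    tracking system; T_probe_image comes from ultrasound calibration.
    """
    T_image_tracker = np.linalg.inv(T_tracker_probe @ T_probe_image)
    p = np.append(p_tip_tool, 1.0)            # homogeneous coordinates
    return (T_image_tracker @ T_tracker_tool @ p)[:3]

# Illustrative call with identity poses: the tip maps straight through.
I4 = np.eye(4)
print(tool_tip_in_us_image(I4, I4, I4, np.zeros(3)))
```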

Speaker Bio

Dr. Terry Peters is a Scientist in the Imaging Research Laboratories at the Robarts Research Institute (RRI), London, ON, Canada, and Professor in the Departments of Medical Imaging and Medical Biophysics at Western University, London, Canada, as well as a member of the Graduate Programs in Neurosciences and Biomedical Engineering. He is also an adjunct Professor at McGill University in Montreal. He received his graduate training in Electrical Engineering at the University of Canterbury in New Zealand, under the direction of Professor Richard Bates. His PhD work dealt with fundamental issues in Computed Tomography image reconstruction, and resulted in a seminal paper on the topic in 1971, just prior to the beginning of CT’s commercial development and worldwide application. For the past 30 years, his research has built on this foundation, focusing on the application of computational hardware and software advances to medical imaging modalities in surgery and therapy. Starting in 1978 at the Montreal Neurological Institute (MNI), Dr. Peters’ lab pioneered many of the image-guidance techniques and applications for image-guided neurosurgery. In 1997, he was recruited by the Robarts Research Institute at Western to establish a focus on image-guided surgery and therapy within the Robarts Imaging Research Laboratories. His lab has expanded over the past seventeen years to encompass image-guided procedures of the heart, brain and abdomen. He has authored over 250 peer-reviewed papers and book chapters, a similar number of abstracts, and has delivered over 200 invited presentations. He has mentored over 85 trainees at the Masters, Doctoral and Postdoctoral levels.

He is a Fellow of the Institute of Electrical and Electronics Engineers, the Canadian College of Physicists in Medicine, the Canadian Organization of Medical Physicists, the American Association of Physicists in Medicine, the Australasian College of Physical Scientists and Engineers in Medicine, the MICCAI Society, and the Institute of Physics. In addition, he received the Dean’s Award for Research Excellence at Western University in 2011, the Hellmuth Prize for Achievement in Research from Western in 2012, and the Medical Image Computing and Computer Assisted Intervention (MICCAI) Society’s Enduring Impact Award in 2014.

Mar
25
Wed
Magnus Egerstedt: From Global Properties to Local Rules in Multi-Agent Systems @ B17 Hackerman Hall
Mar 25 @ 12:00 pm – 1:00 pm

Abstract

The last few years have seen significant progress in our understanding of how one should structure multi-robot systems. New control, coordination, and communication strategies have emerged and, in this talk, we discuss some of these developments. In particular, we will show how one can go from global, geometric, team-level specifications to local coordination rules for achieving and maintaining formations, area coverage, and swarming behaviors. One aspect of this concerns how users can interact with networks of mobile robots in order to inject new, global information and objectives. We will also investigate what global objectives are fundamentally implementable in a distributed manner on a collection of spatially distributed and locally interacting agents.
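
As a flavor of what "local rules" means here, the sketch below simulates the standard linear consensus protocol, in which each agent moves using only the relative positions of its network neighbors, yet the team converges to a global agreement point; the graph, step size, and agent count are illustrative choices, not specifics from the talk.

```python
import numpy as np

def consensus_step(x, neighbors, dt=0.02):
    """One Euler step of the local rule x_i' = -sum_{j in N(i)} (x_i - x_j).

    Each agent senses only its neighbors' relative positions, so the rule
    is purely local; graph connectivity makes global agreement emerge.
    """
    dx = np.zeros_like(x)
    for i, nbrs in enumerate(neighbors):
        for j in nbrs:
            dx[i] -= x[i] - x[j]
    return x + dt * dx

# Four agents on a line graph converge to a common point.
x = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
neighbors = [[1], [0, 2], [1, 3], [2]]
for _ in range(1000):
    x = consensus_step(x, neighbors)
print(x.round(3))  # all rows approximately equal to the initial mean
```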

Speaker Bio

Magnus Egerstedt is the Schlumberger Professor in the School of Electrical and Computer Engineering at the Georgia Institute of Technology, where he serves as Associate Chair for Research and External Affairs. He received the M.S. degree in Engineering Physics and the Ph.D. degree in Applied Mathematics from the Royal Institute of Technology, Stockholm, Sweden, the B.A. degree in Philosophy from Stockholm University, and was a Postdoctoral Scholar at Harvard University. Dr. Egerstedt conducts research in the areas of control theory and robotics, with particular focus on the control and coordination of complex networks, such as multi-robot systems, mobile sensor networks, and cyber-physical systems. He is the Deputy Editor-in-Chief of the IEEE Transactions on Control of Network Systems, the director of the Georgia Robotics and Intelligent Systems Laboratory (GRITS Lab), a Fellow of the IEEE, and a recipient of the ECE/GT Outstanding Junior Faculty Member Award, the HKN Outstanding Teacher Award, the Alum of the Year Award from the Royal Institute of Technology, and the U.S. National Science Foundation CAREER Award.

Apr
1
Wed
CANCELLED George J. Pappas: Active Information Acquisition with Mobile Robots and Configurable Sensing Systems @ B17 Hackerman Hall
Apr 1 @ 12:00 pm – 1:00 pm

Due to circumstances beyond our control, today’s seminar has been cancelled.

Abstract

As the world is getting instrumented with numerous sensors, cameras, and robots, there is potential to transform industries as diverse as environmental monitoring, search and rescue, security and surveillance, localization and mapping, and structure inspection. Successful estimation techniques for gathering information in these scenarios have been designed and implemented. However, one of the great technical challenges today is to intelligently control the sensors, cameras, and robots in order to extract information actively and autonomously. In this talk, I will present a unified approach for active information acquisition, aimed at improving the accuracy and efficiency of tracking evolving phenomena of interest. I formulate a decision problem for maximizing relevant information measures and focus on the design of scalable control strategies for multiple sensing systems. First, I will present a greedy approach for information acquisition via applications in source seeking and mobile robot localization. Next, information acquisition with a longer planning horizon will be considered in the context of linear Gaussian models. I will develop an approximation algorithm with suboptimality guarantees to reduce the complexity in the planning horizon and the number of sensors and will present an application to active multi-robot localization and mapping. Finally, non-greedy information acquisition with general sensing models will be used for active object recognition. The techniques presented in this talk offer an effective and scalable approach for controlled information acquisition with multiple sensing systems.
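
For the linear Gaussian setting mentioned above, the information gain of a candidate measurement can be computed in closed form from the state covariance, which is what makes greedy strategies attractive. Below is a minimal, hypothetical sketch of greedy sensor selection by mutual information (log-det reduction of the posterior covariance); the interface and models are illustrative, not the speaker's code.

```python
import numpy as np

def greedy_sensor_selection(Sigma, sensors, k):
    """Greedily choose k sensors z_i = H_i x + v_i, v_i ~ N(0, R_i),
    each step picking the one that most reduces log det of the state
    covariance (i.e., maximizes mutual information with the state).
    """
    chosen = []
    for _ in range(k):
        best_i, best_gain, best_Sigma = None, -np.inf, None
        for i, (H, R) in enumerate(sensors):
            if i in chosen:
                continue
            S = H @ Sigma @ H.T + R               # innovation covariance
            K = Sigma @ H.T @ np.linalg.inv(S)    # Kalman gain
            Sigma_post = (np.eye(len(Sigma)) - K @ H) @ Sigma
            gain = 0.5 * (np.linalg.slogdet(Sigma)[1]
                          - np.linalg.slogdet(Sigma_post)[1])
            if gain > best_gain:
                best_i, best_gain, best_Sigma = i, gain, Sigma_post
        chosen.append(best_i)
        Sigma = best_Sigma
    return chosen, Sigma

# Two candidate sensors, each observing one coordinate of a 2D state.
sensors = [(np.array([[1.0, 0.0]]), np.array([[0.1]])),
           (np.array([[0.0, 1.0]]), np.array([[0.1]]))]
print(greedy_sensor_selection(np.eye(2), sensors, k=2)[0])
```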

Speaker Bio

George J. Pappas is the Joseph Moore Professor and Chair of the Department of Electrical and Systems Engineering at the University of Pennsylvania. He also holds a secondary appointment in the Departments of Computer and Information Sciences, and Mechanical Engineering and Applied Mechanics. He is a member of the GRASP Lab and the PRECISE Center. He has previously served as the Deputy Dean for Research in the School of Engineering and Applied Science. His research focuses on control theory and, in particular, hybrid systems, embedded systems, and hierarchical and distributed control systems, with applications to unmanned aerial vehicles, distributed robotics, green buildings, and biomolecular networks. He is a Fellow of the IEEE, and has received various awards such as the Antonio Ruberti Young Researcher Prize, the George S. Axelby Award, the O. Hugo Schuck Best Paper Award, and the National Science Foundation PECASE.

Apr
8
Wed
Julie Shah: Integrating Robots into Team-Oriented Environments @ B17 Hackerman Hall
Apr 8 @ 12:00 pm – 1:00 pm

Abstract

Recent advances in computation, sensing, and hardware enable robots to perform an increasing percentage of traditionally manual tasks in manufacturing. Yet the assembly mechanic often cannot be removed entirely from the process. This provides new economic motivation to explore opportunities where human workers and industrial robots may work in close physical collaboration. In this talk, I present the development of new algorithmic techniques for collaborative plan execution that scale to real-world industrial applications. I also discuss the design of new models for robot planning, which use insights and data derived from the planning and execution strategies employed by successful human teams to support more seamless robot participation in human work practices. This includes models for human-robot team training, which involves hands-on practice to clarify the sequencing and timing of actions, and for team planning, which includes communication to negotiate and clarify the allocation and sequencing of work. The aim is to support both the human and robot workers in co-developing a common understanding of task responsibilities and information requirements, to produce more effective human-robot partnerships.

Speaker Bio

Julie Shah is an Assistant Professor in the Department of Aeronautics and Astronautics at MIT and leads the Interactive Robotics Group of the Computer Science and Artificial Intelligence Laboratory. Shah received her SB (2004) and SM (2006) from the Department of Aeronautics and Astronautics at MIT, and her PhD (2010) in Autonomous Systems from MIT. Before joining the faculty, she worked at Boeing Research and Technology on robotics applications for aerospace manufacturing. She has developed innovative methods for enabling fluid human-robot teamwork in time-critical, safety-critical domains, ranging from manufacturing to surgery to space exploration. Her group draws on expertise in artificial intelligence, human factors, and systems engineering to develop interactive robots that emulate the qualities of effective human team members to improve the efficiency of human-robot teamwork. In 2014, Shah was recognized with an NSF CAREER award for her work on “Human-aware Autonomy for Team-oriented Environments,” and by the MIT Technology Review TR35 list as one of the world’s top innovators under the age of 35. Her work on industrial human-robot collaboration was also recognized by the Technology Review as one of the 10 Breakthrough Technologies of 2013, and she has received international recognition in the form of best paper awards and nominations from the International Conference on Automated Planning and Scheduling, the American Institute of Aeronautics and Astronautics, the IEEE/ACM International Conference on Human-Robot Interaction, the International Symposium on Robotics, and the Human Factors and Ergonomics Society.

Apr
15
Wed
Jim Freudenberg: 15 years of Embedded Control Systems at the University of Michigan @ B17 Hackerman Hall
Apr 15 @ 12:00 pm – 1:00 pm

Abstract

In 1998 we were asked by the local automotive industry to create a class to better train engineers to write embedded control software. We have now taught the class for 15 years at the University of Michigan and at ETH Zurich, and are currently serving almost 200 students per year. In this talk, we describe the motivations for the class from an automotive perspective, as well as the instructional lab, which uses a state-of-the-art industry microprocessor (Freescale MPC5643L) and haptic wheels. We then describe the NSF-sponsored research into haptics and cyber-physical systems that we perform in the lab, in collaboration with Professor Brent Gillespie of the UofM Mechanical Engineering Department.
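
As a taste of the kind of exercise a haptic wheel supports, the sketch below simulates a 1 kHz virtual spring-damper loop of the sort students might implement on the microcontroller; the gains, inertia, and simulated plant are illustrative stand-ins for the real encoder and motor driver calls, not material from the course.

```python
# Virtual spring-damper rendered at 1 kHz, with the wheel dynamics
# simulated in place of the real encoder/PWM driver routines.
K_SPRING = 0.05   # virtual spring stiffness (N*m/rad), illustrative
B_DAMP = 0.002    # virtual damping (N*m*s/rad), illustrative
J_WHEEL = 1e-3    # simulated wheel inertia (kg*m^2)
DT = 0.001        # 1 kHz control period (s)

theta, omega = 1.0, 0.0          # wheel starts deflected by 1 rad
for _ in range(5000):            # 5 seconds of simulated time
    torque = -K_SPRING * theta - B_DAMP * omega   # haptic control law
    omega += (torque / J_WHEEL) * DT              # simulated wheel dynamics
    theta += omega * DT
print(f"final angle: {theta:.4f} rad")  # settles near the virtual detent at 0
```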

Apr
22
Wed
Louis Whitcomb: Nereid Under-Ice: A Remotely Operated Underwater Vehicle for Oceanographic Access Under Ice
Apr 22 @ 12:00 pm – 1:00 pm

Abstract

This talk reports recent advances in underwater robotic vehicle research to enable novel oceanographic operations in extreme ocean environments, with a focus on two recent novel vehicles developed by a team comprising the speaker and his collaborators at the Woods Hole Oceanographic Institution. First, the development and operation of the Nereus underwater robotic vehicle will be briefly described, including successful scientific observation and sampling dive operations at hadal depths of 10,903 m on an NSF-sponsored expedition to the Challenger Deep of the Mariana Trench – the deepest place on Earth. Second, the development and first sea trials of the new Nereid Under-Ice (NUI) underwater vehicle will be described. NUI is a novel remotely operated underwater robotic vehicle capable of being teleoperated beneath the ice under remote real-time human supervision. We report the results of NUI’s first under-ice deployments during a July 2014 expedition aboard R/V Polarstern at 83°N, 6°W in the Arctic Ocean – approximately 200 km NE of Greenland.

Speaker Bio

Louis L. Whitcomb completed a B.S. in Mechanical Engineering in 1984 and a Ph.D. in Electrical Engineering in 1992 at Yale University. From 1984 to 1986 he was a research and development engineer with the GMFanuc Robotics Corporation in Detroit, Michigan. He joined the Department of Mechanical Engineering at the Johns Hopkins University in 1995, after postdoctoral fellowships at the University of Tokyo and the Woods Hole Oceanographic Institution. His research focuses on the navigation, dynamics, and control of robot systems – including industrial, medical, and underwater robots. Whitcomb is a principal investigator of the Nereus and Nereid Under-Ice projects. He is the former (founding) Director of the JHU Laboratory for Computational Sensing and Robotics. Whitcomb is presently Professor and Chairman of the Department of Mechanical Engineering, with a secondary appointment in Computer Science, at the Johns Hopkins University’s Whiting School of Engineering. Whitcomb received teaching awards at Johns Hopkins in 2001, 2002, 2004, and 2011, and was awarded a National Science Foundation CAREER Award and an Office of Naval Research Young Investigator Award. He is a Fellow of the IEEE. He is also an Adjunct Scientist in the Department of Applied Ocean Physics and Engineering, Woods Hole Oceanographic Institution.

Apr
29
Wed
David Han: Science of Autonomy in DOD Basic Research @ B17 Hackerman Hall
Apr 29 @ 12:00 pm – 1:00 pm

Abstract

The Department of Defense (DOD) funds long-term basic research in a wide variety of scientific and engineering fields with the goal of exploiting new knowledge to enhance, and where possible transform, future capabilities. DOD-funded research is known for high-risk endeavors that have led to paradigm shifts in the nation’s technical capabilities. Revolutionary technologies such as GPS, Gallium Arsenide (GaAs) Microwave Electronics, Magnetic Random Access Memory (MRAM), and the Kalman filter are a few examples.

Recently, the Science of Autonomy (SoA) came into focus as one of the DOD research interests, with expectations of great impact. The effort addresses critical multi-disciplinary research challenges that cut across different DOD departments and warfighting areas/domains. It involves control theory, computational intelligence, human factors, and related fields such as biology/animal behavior/cognition, economics/management theory, cognitive science/psychology, and neuroscience. Today’s presentation will briefly describe the background, some key aspects of these challenges, and the programs tackling them.

Speaker Bio

David Han is the Associate Director for Basic Research in Machine Intelligence and Robotics in the Office of the Assistant Secretary of Defense for Research & Engineering (ASD (R&E)). The Basic Research Office (BRO) of the ASD (R&E) oversees the entire basic research portfolio of the US DoD. He is an ASME Fellow and an IEEE Senior Member, and was certified as a Professional Engineer (PE) in the mechanical branch in the State of Hawaii in 1985. Dr. Han received a BS from Carnegie-Mellon University, and an MSE and a PhD from Johns Hopkins University.

The early part of his career included naval nuclear engineering at Pearl Harbor Naval Shipyard and design engineering at the R.M. Towill Corporation in Honolulu. He was a research engineer at the Naval Surface Warfare Center (NSWC) at White Oak in the underwater weapons program, and worked as senior professional staff at the Johns Hopkins University Applied Physics Laboratory (JHU APL) in naval missile defense and satellite power programs. He was a visiting associate professor at the University of Maryland, College Park and the Deputy Director of the Center for Energetic Concepts Development (CECD), and was also the Distinguished IWS Chair Professor of the Systems Engineering Department of the US Naval Academy in Annapolis. He spent over eleven years in total as a program officer at the Office of Naval Research (ONR), managing basic and applied research and advanced technology programs. From 2012 to 2014 he served as the Deputy Director of Research of ONR, overseeing the Discovery and Invention (D&I) portfolio of over $900 million annually of basic and applied research.

Dr. Han has authored or coauthored over 60 peer-reviewed papers, including 4 book chapters. He has taught at Johns Hopkins University, the University of Maryland Baltimore County, and Korea University. His research interests include image/speech processing and recognition, machine learning, and human-robot interaction.

Jun
11
Thu
Fijs W.B. van Leeuwen: Image guidance technologies as an add-on to robotic surgery @ 320 Hackerman Hall
Jun 11 @ 12:00 pm – 1:00 pm

Abstract

Image-guided interventions are increasingly gaining interest from the surgical and radiological disciplines. In addition to radiological imaging technologies, radionuclear imaging and optical imaging have the potential to visualize molecular features of disease. To fully exploit the potential of these technologies, it is essential to understand the advantages and shortcomings that come with them. Based on this knowledge, the clinically applied guidance approaches and well-known surgical planning modalities such as US, CT, MRI, SPECT, and PET can be placed in perspective. The approaches used for the radionuclear and optical modalities are often complementary, a feature that can be exploited further with the use of hybrid tracers, whose molecular parameters can be detected both at depth (using radionuclear imaging modalities) and superficially (using optical imaging modalities).

The (da Vinci) robot provides an ideal platform to effectively integrate these image guidance technologies into clinical routine, and provides a solid basis for the international dissemination of successful technologies.

In his talk Dr. van Leeuwen will illustrate the clinical implementation of a range of radionuclear and optical guidance technologies during robot assisted laparoscopic prostatectomy (RALP) combined with sentinel lymph node dissection.

Speaker Bio

Fijs completed his master’s in Chemistry in the Bioinorganic and Coordination Chemistry group (prof. dr. Jan Reedijk; Leiden Institute of Chemistry), followed by a PhD at the MESA+ Institute for Nanotechnology (University of Twente) in the former Supramolecular Chemistry and Technology group headed by prof. dr. David Reinhoudt. During this period he also performed research at the Irradiation & Development business unit (dr. Ronald Schram) of the Dutch Nuclear Research and Consultancy Group (NRG) in Petten. After obtaining his PhD he made the shift to biomedical research by pursuing a postdoctoral fellowship in the Chemical Biology group (dr. Huib Ovaa) at the department of Cellular Biochemistry of the Netherlands Cancer Institute – Antoni van Leeuwenhoek Hospital (NKI-AvL). After being awarded a personal VENI grant from the Dutch Research Council, he moved, within the NKI-AvL, to the clinical departments of Radiology and Nuclear Medicine, where he became a senior postdoctoral fellow in the medical image processing group of dr. Kenneth Gilhuijs. Under the guidance of the diagnostic oncology division heads, initially prof. dr. Marc van de Vijver and later prof. dr. Laura van ‘t Veer, he started to set up his own molecular imaging research line. In 2009 he obtained a personal cancer career award from the Dutch Cancer Society (KWF) for the development of multimodal imaging agents and was appointed associate staff scientist at the NKI-AvL. In 2010 he obtained a VIDI grant from the Dutch Research Council for the development of imaging agents for surgical guidance. Soon after this he moved to the Leiden University Medical Center (LUMC) to become an associate professor at the department of Radiology (2011). Here he received an ERC Starting Grant (2012) for the illumination of peripheral nerve structures. At the LUMC he heads the highly multidisciplinary Interventional Molecular Imaging laboratory, wherein the “from molecule to man” principle is actively pursued.

Jun
15
Mon
Nicolas Padoy: Radiation Exposure Monitoring in the Hybrid Operating Room using a Multi-camera RGBD System @ B17 Hackerman Hall
Jun 15 @ 10:00 am – 11:00 am

Abstract

The growing use of image-guided minimally invasive surgical procedures is confronting clinicians and surgical staff with new radiation exposure risks from X-ray imaging devices. Furthermore, the current surgical practice of wearing a single dosimeter at chest level to measure radiation exposure does not provide a sufficiently accurate estimation of radiation absorption throughout the body. Therefore, our aim is to develop a global radiation awareness system that can more accurately estimate intra-operative radiation exposure, thereby increasing staff awareness of radiation exposure risks and enabling the implementation of well-adapted safety measures.

In this talk, I will present our work towards the development of such a system. I will first present a computer vision approach that combines data from wireless dosimeters with the simulation of radiation propagation in order to compute a global radiation risk map in the area near the X-ray device. A multi-camera RGBD system is used to estimate the layout of the room and display the estimated risk map using augmented reality. By using real-time wireless dosimeters in our system, we can both calibrate the simulation and validate its accuracy at specific locations in real time.

I will then describe our recent work on human pose estimation and activity recognition using RGBD data recorded during real X-ray guided interventions. Among other applications, the pose estimation of the persons present in the room will allow the computation of radiation exposure per body part over time, and the recognition of surgical activities will permit the correlation of these activities with the radiation risk they pose to staff and clinicians.
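
To make the calibration idea concrete, here is a deliberately simplified sketch: scattered radiation is modeled with an inverse-square falloff from a single scatter source, and the simulated field is scaled by least squares so that it agrees with the wireless dosimeter readings. A real propagation simulation and room model would replace both assumptions; the names and interface are hypothetical.

```python
import numpy as np

def calibrated_risk_map(grid_pts, source, dosi_pts, dosi_readings):
    """Toy radiation risk map for the area near the X-ray device.

    grid_pts: (N, 3) points where risk is estimated (e.g., room voxels).
    source: (3,) assumed scatter source position.
    dosi_pts/dosi_readings: dosimeter positions and measured dose rates,
    used to least-squares-scale the inverse-square model.
    """
    def inv_sq(pts):
        return 1.0 / np.maximum(np.sum((pts - source) ** 2, axis=1), 1e-6)

    sim = inv_sq(np.asarray(dosi_pts, dtype=float))
    scale = sim @ np.asarray(dosi_readings, dtype=float) / (sim @ sim)
    return scale * inv_sq(np.asarray(grid_pts, dtype=float))

# A dosimeter 1 m from the source reading 2.0 units calibrates the map.
grid = np.array([[0.5, 0.0, 0.0], [2.0, 0.0, 0.0]])
print(calibrated_risk_map(grid, np.zeros(3), [[1.0, 0.0, 0.0]], [2.0]))
```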

Speaker Bio

Nicolas Padoy is an Assistant Professor at the University of Strasbourg, holding a Chair of Excellence in medical robotics within the ICube laboratory. He leads the research group CAMMA on Computational Analysis and Modeling of Medical Activities, which focuses on computer vision, activity recognition and the applications thereof to surgical workflow analysis and human-machine cooperation during surgery. He graduated with a Maîtrise in Computer Science from the Ecole Normale Supérieure de Lyon in 2003 and with a Diploma in Computer Science from the Technische Universität München (TUM), Munich, in 2005. He completed his PhD jointly between the Chair for Computer Aided Medical Procedures at TUM and the INRIA group MAGRIT in Nancy. Subsequently, he was a postdoctoral researcher and later an Assistant Research Professor in the Laboratory for Computational Sensing and Robotics at the Johns Hopkins University, USA.
