Calendar

Oct
7
Wed
Peter Kazanzides: Remote Teleoperation for Satellite Servicing (“Satellite Surgery”) @ B17 Hackerman Hall
Oct 7 @ 12:00 pm – 1:00 pm

Abstract

We are developing methods for telerobotic on-orbit servicing of spacecraft under ground-based supervisory control, in which human operators perform tasks in the presence of uncertainty and telemetry time delays of several seconds. As an initial application, we consider the case where the remote slave robot is teleoperated to cut the tape that secures a flap of multi-layer insulation (MLI) over a satellite access panel. This talk will present a delay-tolerant control methodology, using virtual fixtures, hybrid position/force control, and environment modeling, that is robust to modeling and registration errors. The task model is represented by graphical primitives and virtual fixtures on the teleoperation master and by a hybrid position/force controller on the slave robot. The virtual fixtures guide the operator through a model-based simulation of the task, and the goal of the slave controller is to reproduce this action (after a few seconds of delay) or, if measurements are not consistent with the models, to stop motion and alert the operator. Experiments, including IRB-approved multi-user studies, are performed on a ground-based test platform in which the master console of a da Vinci Research Kit is used to teleoperate a Whole Arm Manipulator (WAM) robot.
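As a rough illustration (not code from the talk), a guidance-type virtual fixture can be sketched as a filter that attenuates commanded motion in directions off a desired path; the function name and the compliance parameter below are illustrative:

```python
import numpy as np

def virtual_fixture_filter(cmd_vel, fixture_dir, compliance=0.1):
    """Project a commanded velocity onto a guidance ('virtual fixture') direction.

    cmd_vel:     3-vector operator velocity command
    fixture_dir: 3-vector along the desired path (e.g. the tape-cutting line)
    compliance:  0 = hard fixture (motion only along the path),
                 1 = no fixture (command passes through unchanged)
    """
    d = fixture_dir / np.linalg.norm(fixture_dir)
    along = np.dot(cmd_vel, d) * d      # component along the path
    across = cmd_vel - along            # component off the path
    return along + compliance * across  # attenuate off-path motion

# With a hard fixture, only motion along the path survives.
v = virtual_fixture_filter(np.array([1.0, 1.0, 0.0]),
                           np.array([1.0, 0.0, 0.0]), compliance=0.0)
```

In a real system a filter like this would run on the master console, while the slave's hybrid position/force controller enforces the contact constraint locally.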

 

Speaker Bio

Peter Kazanzides has been working in the field of surgical robotics since 1989, when he started as a postdoctoral researcher with Russell Taylor at the IBM T.J. Watson Research Center. Dr. Kazanzides co-founded Integrated Surgical Systems (ISS) in November 1990 to commercialize the robotic hip replacement research performed at IBM and the University of California, Davis. As Director of Robotics and Software, he was responsible for the design, implementation, validation and support of the ROBODOC System, which has been used for more than 20,000 hip and knee replacement surgeries. Dr. Kazanzides joined the Engineering Research Center for Computer-Integrated Surgical Systems and Technology (CISST ERC) in December 2002 and currently holds an appointment as a Research Professor of Computer Science at Johns Hopkins University. This talk highlights the extension of his research in computer assisted surgery to encompass “surgery” on satellites.

Oct
14
Wed
Nabil Simaan: Modeling and Control of Intelligent Surgical Robots for Enabling Complementary Situational Awareness @ B17 Hackerman Hall
Oct 14 @ 12:00 pm – 1:00 pm

Abstract

In the past two decades, surgical robots have been used as tools that augment human dexterity and manipulation capabilities. Our research goal at the Advanced Robotics and Mechanism Applications (ARMA) laboratory is to extend this concept of augmenting user skill to include assistance in sensing, sub-task execution, and situational awareness within the context of surgery. Motivating surgical applications from cochlear implant surgery, retinal micro-surgery, and minimally invasive, less-invasive, and natural-orifice surgery will be presented. Within the context of these surgical applications, we will focus on our efforts in modeling, designing, and controlling intelligent surgical robots capable of sensing the environment and using the sensed information to assist task execution. The talk will describe recent results on the design and control of continuum robots capable of detecting and localizing contact. A modeling approach for compliant motion control of continuum robots will be presented, along with clinical motivation from minimally invasive surgery of the upper airways and trans-urethral resection of bladder cancer. Time permitting, we will also describe assistive telemanipulation frameworks for micro-stent deployment and for cochlear implant electrode array insertion.

 

Speaker Bio

Dr. Nabil Simaan received his Ph.D. in mechanical engineering from the Technion—Israel Institute of Technology in 2002. His Master's and Ph.D. research focused on the design, synthesis, and singularity analysis of parallel robots for medical applications, as well as stiffness synthesis and modulation for parallel robots with actuation and kinematic redundancies. His graduate advisor was Dr. Moshe Shoham. In 2003, he was a Postdoctoral Research Scientist at the Johns Hopkins University National Science Foundation (NSF) Engineering Research Center for Computer-Integrated Surgical Systems and Technology (ERC-CISST), where he focused on minimally invasive robotic assistance in confined spaces under the supervision of Dr. Russell H. Taylor. In 2005, he joined Columbia University, New York, NY, as an Assistant Professor of mechanical engineering and the Director of the Advanced Robotics and Mechanisms Applications (ARMA) Laboratory. In 2009, he received the NSF CAREER Award for young investigators to design new algorithms and robots for safe interaction with the anatomy. He was promoted to Associate Professor in 2010 and subsequently joined Vanderbilt University. He is a Senior Member of the IEEE. He serves as an Editor for the IEEE International Conference on Robotics and Automation (ICRA), an Associate Editor for IEEE Transactions on Robotics (T-RO), an Editorial Board member of Robotica, an Area Chair for Robotics: Science and Systems (2014, 2015), and Corresponding Co-Chair of the IEEE Technical Committee on Surgical Robotics.

Oct
21
Wed
Marco Pavone: Certifiable Planning for Autonomous Vehicles @ B17 Hackerman Hall
Oct 21 @ 12:00 pm – 1:00 pm

Abstract

This talk addresses the problem of designing motion planning algorithms with rigorous correctness guarantees, with the goal of making planning for autonomous vehicles trusted and certifiable. In the first part of the talk, I will consider the problem of de-randomizing popular sampling-based motion planning algorithms such as the probabilistic roadmap (PRM) algorithm. Randomization, in fact, makes several tasks challenging, including certification and use of offline computation. Leveraging properties of deterministic low-dispersion sequences, I will show that there exist deterministic versions of PRM (and related batch-processing algorithms) that are deterministically asymptotically optimal, enjoy deterministic convergence rates, have improved computational and space complexity properties, and provide superior practical performance.
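As background, the deterministic low-dispersion sequences referred to above can be generated very simply; a common example (an assumption here, not necessarily the specific sequence used in this work) is the Halton sequence, which could seed a deterministic PRM's sample set:

```python
def halton(i, base):
    """i-th element (1-indexed) of the van der Corput sequence in the given base."""
    f, r = 1.0, 0.0
    while i > 0:
        f /= base
        r += f * (i % base)
        i //= base
    return r

def halton_points(n, bases=(2, 3)):
    """n deterministic low-dispersion points in the unit square.

    Unlike i.i.d. random samples, these points are reproducible and
    cover the space with provably bounded dispersion, which is what
    enables deterministic convergence guarantees for PRM-style planners.
    """
    return [tuple(halton(i, b) for b in bases) for i in range(1, n + 1)]

pts = halton_points(100)
```

A PRM built on such points would then connect each sample to neighbors within a suitably chosen radius, exactly as in the randomized version, but with the sample set fixed in advance.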

 

In the second part of the talk, I will switch to the problem of motion planning under uncertainty. I will present a novel framework whereby motion plans are selected by sampling via Monte Carlo the execution of a reference tracking controller. I will discuss the design of statistical variance-reduction techniques, namely control variates and importance sampling, to make such a sampling procedure amenable to real-time implementation. The advantages of this framework include asymptotic correctness of collision probability estimation and the availability of associated confidence estimates.
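To illustrate the control-variate idea on a toy problem (a generic sketch, not the framework from the talk): a quantity correlated with the collision indicator, whose mean is known, is used to cancel part of the Monte Carlo noise.

```python
import random

def cv_estimate(samples, f, g, g_mean):
    """Monte Carlo estimate of E[f(X)] using g(X), with known mean g_mean,
    as a control variate to reduce variance."""
    n = len(samples)
    fx = [f(x) for x in samples]
    gx = [g(x) for x in samples]
    mf = sum(fx) / n
    mg = sum(gx) / n
    # Optimal coefficient beta = Cov(f, g) / Var(g), estimated from the samples.
    cov = sum((a - mf) * (b - mg) for a, b in zip(fx, gx)) / (n - 1)
    var = sum((b - mg) ** 2 for b in gx) / (n - 1)
    beta = cov / var if var > 0 else 0.0
    return mf - beta * (mg - g_mean)

# Toy example: 'collision' when a Gaussian tracking deviation exceeds a
# clearance of 1.0; the deviation itself (known mean 0) is the control variate.
random.seed(0)
xs = [random.gauss(0.0, 1.0) for _ in range(20000)]
p_hat = cv_estimate(xs,
                    f=lambda x: 1.0 if x > 1.0 else 0.0,
                    g=lambda x: x, g_mean=0.0)
```

Because the indicator and the deviation are positively correlated, the corrected estimator has lower variance than the plain sample mean while remaining unbiased.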

 

I will conclude the talk by discussing applications in the domain of spacecraft autonomous maneuvering.

 

Speaker Bio

Dr. Marco Pavone is an Assistant Professor of Aeronautics and Astronautics at Stanford University, where he is the Director of the Autonomous Systems Laboratory. Before joining Stanford, he was a Research Technologist within the Robotics Section at the NASA Jet Propulsion Laboratory. He received a Ph.D. degree in Aeronautics and Astronautics from the Massachusetts Institute of Technology in 2010. Dr. Pavone’s areas of expertise lie in the fields of controls and robotics. His main research interests are in the development of methodologies for the analysis, design, and control of autonomous systems, with an emphasis on autonomous aerospace vehicles and large-scale robotic networks. He is a recipient of an NSF CAREER Award, a NASA Early Career Faculty Award, and a Hellman Faculty Scholar Award, and was named a NASA NIAC Fellow in 2011. He is currently serving as an Associate Editor for the IEEE Control Systems Magazine. His work has been reported in many scientific publications as well as popular press outlets, including ABC, NBC, The Economist, Forbes, and Reuters.

Oct
28
Wed
LCSR/ERC Seminar: Bernhard Fuerst and Jie (Jack) Zhang @ B17 Hackerman Hall
Oct 28 @ 12:00 pm – 1:00 pm

Bernhard Fuerst

Robotics and Multi-Modal Imaging in Computer Assisted Interventions

Abstract

Providing the desired and correct image information to the surgeon during an intervention is crucial for reducing task load and surgery duration while improving accuracy and patient outcomes. Our approach is to automate simple tasks (e.g., robotic ultrasound), provide novel imaging techniques (e.g., da Vinci SPECT), and fuse information from different images. This talk will therefore focus on our work on imaging techniques applicable during medical interventions and on the registration of images from the same or different imaging modalities.

 

Speaker Bio

Bernhard Fuerst is a research engineer at the Engineering Research Center at Johns Hopkins University. He received his Bachelor’s degree in Biomedical Computer Science from the University for Medical Technology in Austria in 2009, where his research applied semantic search techniques to the life sciences. His Master’s degree in Biomedical Computing was awarded by the Technical University of Munich, Germany, in 2011. During his studies he joined Siemens Corporate Research in Princeton to research biomechanical simulations for compensating respiratory motion, and Georgetown University to investigate meta-optimization techniques using particle swarm optimizers. Since joining The Johns Hopkins University in 2013, he has worked on establishing Dr. Nassir Navab’s research group, focusing on robotic ultrasound, minimally invasive nuclear imaging, and intraoperative imaging technologies.

 

and

Jie (Jack) Zhang

A Low-Power Pixel-Wise Coded Exposure CMOS Imager for Insect-Based Sensor Nodes

Abstract

There is growing interest in converting insects such as beetles or dragonflies into carriers for low-power image sensors for simultaneous localization and mapping (SLAM) tasks. These small biological entities have excellent maneuverability and can enter extreme areas not accessible to humans. Due to physical constraints, CMOS video cameras mounted on the insects must be small and low power. They must also provide video with high spatial resolution (high pixel counts, good SNR) and high temporal resolution (high frame rate, low motion blur) as the scene changes under different lighting conditions.

To address these tradeoffs simultaneously, we present a CMOS image sensor using a compressed-sensing-based pixel-wise coded exposure imaging method. This architecture can provide up to 20x higher frame rate, with both high spatial and temporal resolution, compared to a traditional image sensor with the same readout speed. The imager consumes 41 µW while providing 100 fps video.
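A minimal simulation of the pixel-wise coded exposure idea (illustrative only; the actual sensor architecture and reconstruction method are the subject of the talk): each pixel integrates light over its own short exposure window within a longer readout period, so a single coded readout mixes temporal information from the whole video cube, which a sparse (compressed-sensing) solver would then unscramble.

```python
import numpy as np

rng = np.random.default_rng(0)

def coded_exposure_capture(video, exposure_len):
    """Single coded readout of a (T, H, W) video cube.

    Each pixel integrates only during its own exposure window of
    `exposure_len` frames, starting at a pseudo-random frame. Returns the
    coded image and the per-pixel shutter pattern (the sensing mask).
    """
    T, H, W = video.shape
    starts = rng.integers(0, T - exposure_len + 1, size=(H, W))
    coded = np.zeros((H, W))
    mask = np.zeros_like(video)  # per-pixel, per-frame shutter state
    for t in range(T):
        open_now = (starts <= t) & (t < starts + exposure_len)
        mask[t] = open_now
        coded += video[t] * open_now
    return coded, mask

video = rng.random((16, 8, 8))       # 16 high-speed frames, 8x8 pixels
coded, mask = coded_exposure_capture(video, exposure_len=4)
```

The short per-pixel windows keep motion blur low, while the single readout per cube is what keeps power consumption down; reconstruction (omitted here) exploits the sparsity of natural video.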

 

Bio

Jie (Jack) Zhang received the B.Sc. degree in Electrical Engineering in 2010 from The Johns Hopkins University, Baltimore, MD, where he is currently pursuing the Ph.D. degree in Electrical Engineering. He was an International Scholar with the Ultra Low Power – Biomedical Circuit group at IMEC, Belgium, from October 2011 to July 2012. His research focuses on image sensor design, compressive sensing, and analog and mixed-signal circuits for biomedical applications.

Nov
4
Wed
Carla Pugh: Signatures: What can Sensors and Motion Tracking Technology Tell us about Technical Skills Performance? @ B17 Hackerman Hall
Nov 4 @ 12:00 pm – 1:00 pm

Speaker Bio

Dr. Carla Pugh is currently Vice-Chair of Education and Patient Safety in the Department of Surgery at the University of Wisconsin–Madison. She is also Director of UW Health’s Inter-Professional Simulation Program. Her clinical area of expertise is acute care surgery. Dr. Pugh obtained her undergraduate degree in Neurobiology at U.C. Berkeley and her medical degree at the Howard University School of Medicine. Upon completing her surgical training at Howard University Hospital, she went to Stanford University and obtained a PhD in Education. Her research interests include the use of simulation technology for medical and surgical education. Dr. Pugh holds a method patent on the use of sensor and data-acquisition technology to measure and characterize the sense of touch. Currently, over two hundred medical and nursing schools are using one of her sensor-enabled training tools for their students and trainees. The use of simulation technology to assess and quantitatively define hands-on clinical skills is one of her major research areas. In addition to obtaining an NIH R01 in 2010 (to validate a sensorized device for high-stakes clinical skills assessments), her work has received numerous awards from medical and engineering organizations. In 2011, Dr. Pugh received the Presidential Early Career Award for Scientists and Engineers. Dr. Pugh is also the developer of several decision-based simulators that are currently being used to assess intra-operative judgment and team skills. This work was recently funded by a $2 million grant from the Department of Defense.

Dec
2
Wed
Greg Hager: From Mimicry to Mastery: Creating Machines that Augment Human Skill
Dec 2 @ 12:00 pm – 1:00 pm

Abstract

We are entering an era in which people will interact with smart machines to enhance the physical aspects of their lives, just as smart mobile devices have revolutionized how we access and use information. Robots already provide surgeons with physical enhancements that improve their ability to cure disease; we are seeing the first generation of robots that collaborate with humans to enhance productivity in manufacturing; and a new generation of startups is looking at ways to enhance our day-to-day lives through automated driving and delivery.

 

In this talk, I will use examples from surgery and manufacturing to frame some of the broad scientific, technological, and commercial trends that are converging to fuel progress on human-machine collaborative systems. I will describe how surgical robots can be used to observe surgeons “at work” and to define a “language of manipulation” from data, mirroring the statistical revolution in speech processing. With these models, it is possible to recognize, assess, and intelligently augment surgeons’ capabilities. Beyond surgery, new advances in perception, coupled with the steadily declining costs and increasing capabilities of manipulation systems, have opened up new science and commercialization opportunities around manufacturing assistants that can be instructed “in situ.” Finally, I will close with some thoughts on the broader challenges still to be surmounted before we can create true collaborative partners.

 

Speaker Bio

Gregory D. Hager is the Mandell Bellmore Professor of Computer Science at Johns Hopkins University. His research interests include collaborative and vision-based robotics, time-series analysis of image data, and medical applications of image analysis and robotics. He has published over 300 articles and books in these areas. Professor Hager is also Chair of the Computing Community Consortium, a board member of the Computing Research Association, and is currently a member of the governing board of the International Federation of Robotics Research. In 2014, he was awarded a Hans Fischer Fellowship in the Institute of Advanced Study of the Technical University of Munich where he also holds an appointment in Computer Science. He is a fellow of the IEEE for his contributions to Vision-Based Robotics, and has served on the editorial boards of IEEE TRO, IEEE PAMI, and IJCV. Professor Hager received his BA in Mathematics and Computer Science Summa Cum Laude at Luther College (1983), and his MS (1986) and PhD (1988) from the University of Pennsylvania. He was a Fulbright Fellow at the University of Karlsruhe, and was on the faculty of Yale University prior to joining Johns Hopkins. He is founding CEO of Clear Guide Medical.

Apr
6
Wed
LCSR Seminar: Ali Kamen: Imaging for Personalized Healthcare @ B17 Hackerman Hall
Apr 6 @ 12:00 pm – 1:00 pm

Abstract

In this talk, I will give a perspective on past and current trends in medical imaging, particularly the role of imaging in personalized medicine. I will then outline the core technologies enabling these advancements, with a specific focus on empirical and mechanistic modeling. I will also demonstrate example clinical applications in which mechanistic and empirical models derived from imaging are used for treatment planning and therapy-outcome analysis, and I will conclude with a future outlook on the use of imaging across the healthcare continuum.

Sep
14
Wed
Rebecca Schulman: Toward Robotic Materials: Self-Assembling Programmable Adaptive Structures with Molecules @ B17 Hackerman Hall
Sep 14 @ 12:00 pm – 1:00 pm

Abstract

While robots at the human size scale are generally composed of structures that are moved by a small set of actuators that shift materials or components with a well-defined shape, other principles for designing moving structures can control movement at the micron scale. For example, cells can move by disassembling parts of their rigid skeleton, or cytoskeleton, and reassembling new components in a different location. The structures that are disassembled and reassembled are often filaments that grow, shrink and form junctions between one another. Networks of rigid filaments serve as a cheap, reusable, movable scaffold that shapes and reshapes the cell.

Could we design synthetic materials to perform tasks of engineering interest at the micron scale? I’ll describe how we are using ideas from DNA nanotechnology to build synthetic filaments, how we can program where and when filaments assemble and disassemble, and how they organize. We are able to use quantitative control over microscopic parameters, modeling, and automated analysis to build increasingly sophisticated structures that can find, connect, and move locations in the environment, form architectures, and heal when damaged.


Laboratory for Computational Sensing + Robotics