Calendar

Oct
28
Wed
LCSR/ERC Seminar: Bernhard Fuerst and Jie (Jack) Zhang @ B17 Hackerman Hall
Oct 28 @ 12:00 pm – 1:00 pm

Bernhard Fuerst

Robotics and Multi-Modal Imaging in Computer Assisted Interventions

Abstract

Providing the desired and correct image information to the surgeon during an intervention is crucial to reducing the task load and duration of the surgery while increasing accuracy and improving patient outcomes. Our approach is to automate simple tasks (e.g., robotic ultrasound), provide novel imaging techniques (e.g., da Vinci SPECT), and fuse information from different images. This talk will therefore focus on our work on imaging techniques applicable during medical interventions, and on the registration of images from the same or different imaging modalities.

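For readers unfamiliar with the registration problem mentioned above: multimodal registration typically reduces to optimizing an image-similarity metric over candidate alignments. Below is a minimal illustrative sketch, in Python, of mutual information, a standard metric for comparing images from different modalities; the toy images and bin count are invented for the example, and this is not the speaker’s actual pipeline.

import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Mutual information between two images of equal shape.

    MI is a standard similarity metric for multimodal registration:
    it rewards statistical dependence rather than identical
    intensities (useful when, e.g., CT and ultrasound render the
    same anatomy with very different gray values).
    """
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()              # joint probability
    px = pxy.sum(axis=1, keepdims=True)    # marginal of img_a
    py = pxy.sum(axis=0, keepdims=True)    # marginal of img_b
    nz = pxy > 0                           # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

# Toy usage: a registration loop would repeatedly transform one image
# and keep the transform that maximizes this score.
a = np.random.rand(64, 64)
b = 2.0 * a + 0.1 * np.random.rand(64, 64)   # dependent, different intensities
print(mutual_information(a, b))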

Speaker Bio

Bernhard Fuerst is a research engineer at the Engineering Research Center at Johns Hopkins University. He received his Bachelor’s degree in Biomedical Computer Science from the University for Medical Technology in Austria in 2009, while researching improvements to the life sciences through semantic search techniques. His Master’s degree in Biomedical Computing was awarded by the Technical University of Munich, Germany, in 2011. During his studies he joined Siemens Corporate Research in Princeton to research biomechanical simulations for compensation of respiratory motion, and Georgetown University to investigate techniques for meta-optimization using particle swarm optimizers. Since joining Johns Hopkins University in 2013, he has worked on establishing Dr. Nassir Navab’s research group, focusing on robotic ultrasound, minimally invasive nuclear imaging, and intraoperative imaging technologies.


and

Jie (Jack) Zhang

A Low-Power Pixel-Wise Coded Exposure CMOS Imager for Insect-Based Sensor Nodes

Abstract

There is growing interest in converting insects such as beetles or dragonflies into carriers for low-power image sensors for simultaneous localization and mapping (SLAM) tasks. These small biological entities have excellent maneuverability and can enter extreme areas not accessible to humans. Due to physical constraints, CMOS video cameras mounted on the insects must be small and low power. They must also provide video with high spatial resolution (high pixel counts, good SNR) and high temporal resolution (high frame rate, low motion blur) while the scene changes under different lighting conditions.

To address these tradeoffs simultaneously, we present a CMOS image sensor with a compressed-sensing-based pixel-wise coded exposure imaging method. This architecture can provide up to a 20× higher frame rate, with both high spatial and high temporal resolution, compared to a traditional image sensor with the same readout speed. The imager consumes 41 µW while providing 100 fps video.

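As background to the abstract above, the sketch below illustrates the encoding side of pixel-wise coded exposure: each pixel integrates light only while its own shutter code is open, so a single readout mixes several sub-frames, which a compressed-sensing solver later unmixes into high-frame-rate video. The array sizes and the random code are assumptions for illustration, not the presented sensor’s design.

import numpy as np

rng = np.random.default_rng(0)

T, H, W = 8, 32, 32                 # sub-frames per readout, image size
video = rng.random((T, H, W))       # stand-in for the true high-speed scene

# Pixel-wise coded exposure: every pixel opens its shutter for one
# randomly placed contiguous bump of sub-frames, so neighboring pixels
# sample different moments in time within the same readout.
codes = np.zeros((T, H, W))
start = rng.integers(0, T - 2, size=(H, W))     # per-pixel bump start
for t in range(T):
    codes[t] = (t >= start) & (t < start + 3)   # 3-sub-frame exposure bump

coded_frame = (codes * video).sum(axis=0)       # one sensor readout

# Reconstruction (not shown) solves a compressed-sensing problem, e.g.
#   minimize ||Psi x||_1  subject to  sum_t codes[t] * x[t] = coded_frame,
# recovering T sub-frames from one readout: up to T-times the frame
# rate at the same readout speed.
print(coded_frame.shape)   # (32, 32): one frame encodes 8 sub-frames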

Bio

Jie (Jack) Zhang received the B.Sc. degree in Electrical Engineering in 2010 from Johns Hopkins University, Baltimore, MD, where he is currently pursuing the Ph.D. degree in Electrical Engineering. From October 2011 to July 2012, he was an International Scholar with the Ultra Low Power – Biomedical Circuit group at IMEC, Belgium. His research focuses on image sensor design, compressive sensing, and analog and mixed-signal circuits for biomedical applications.

Nov
4
Wed
Carla Pugh: Signatures: What can Sensors and Motion Tracking Technology Tell us about Technical Skills Performance? @ B17 Hackerman Hall
Nov 4 @ 12:00 pm – 1:00 pm

Speaker Bio

Dr. Carla Pugh is currently Vice Chair of Education and Patient Safety in the Department of Surgery at the University of Wisconsin–Madison. She is also Director of UW Health’s Inter-Professional Simulation Program. Her clinical area of expertise is acute care surgery. Dr. Pugh obtained her undergraduate degree in Neurobiology at U.C. Berkeley and her medical degree at Howard University School of Medicine. Upon completion of her surgical training at Howard University Hospital, she went to Stanford University and obtained a PhD in Education. Her research interests include the use of simulation technology for medical and surgical education. Dr. Pugh holds a method patent on the use of sensor and data acquisition technology to measure and characterize the sense of touch. Currently, over two hundred medical and nursing schools use one of her sensor-enabled training tools for their students and trainees. The use of simulation technology to assess and quantitatively define hands-on clinical skills is one of her major research areas. In addition to obtaining an NIH R01 in 2010 (to validate a sensorized device for high-stakes clinical skills assessments), her work has received numerous awards from medical and engineering organizations. In 2011, Dr. Pugh received the Presidential Early Career Award for Scientists and Engineers. Dr. Pugh is also the developer of several decision-based simulators that are currently being used to assess intra-operative judgment and team skills. This work was recently funded by a $2 million grant from the Department of Defense.

Nov
11
Wed
Aaron Ames: Towards the Humanoid Robots of Science Fiction @ B17 Hackerman Hall
Nov 11 @ 12:00 pm – 1:00 pm

Abstract

The humanoid robot DURUS was unveiled to the public in the midst of the DARPA Robotics Challenge (DRC). While the main competition took place in the stadium, DURUS took part in the Robot Endurance Test with the goal of demonstrating locomotion that is an order of magnitude more efficient than existing bipedal walking on humanoid robots, e.g., the ATLAS robot utilized in the DRC. During this accessible public demonstration of humanoid robotic walking, DURUS walked continuously for over 2½ hours covering over 2 km—all on a single 1.1 kWh battery. At the core of this success was a methodology for designing and realizing dynamic and efficient walking gaits on bipedal robots through a mathematical framework that utilizes hybrid systems models coupled with nonlinear controllers that provably result in stable locomotion. This mathematical foundation allowed for the full utilization of novel mechanical components of DURUS, including: efficient cycloidal gearboxes (allowing for almost lossless transmission of power) and compliant elements at the ankles (absorbing the impacts at foot-strike). Through this combination of formal controller design and novel mechanical design, the humanoid robot DURUS was able to achieve the most efficient walking ever recorded on a humanoid robot. This talk will outline the key elements of the methodology used to achieve this result, demonstrate the extensibility to other bipedal robots and robotic assistive devices, e.g., prostheses, and consider the question: when will the humanoid robots of science fiction become science fact?

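For context, the hybrid systems models mentioned in the abstract alternate continuous dynamics with discrete impact maps triggered by a guard condition. Below is a minimal sketch of that flow-guard-reset loop using SciPy, applied to the simplest hybrid system (a bouncing point mass) purely to illustrate the formalism; it is not the DURUS gait model, and all constants are illustrative.

import numpy as np
from scipy.integrate import solve_ivp

def flight(t, x):                 # continuous dynamics: x = [height, velocity]
    return [x[1], -9.81]

def guard(t, x):                  # guard: ground contact at height zero
    return x[0]
guard.terminal = True             # stop integration at the event
guard.direction = -1              # only trigger while falling

def impact_map(x, restitution=0.8):
    return [0.0, -restitution * x[1]]   # discrete reset at impact

x, t0 = [1.0, 0.0], 0.0
for step in range(5):             # alternate flow and reset, like strides
    sol = solve_ivp(flight, (t0, t0 + 10.0), x, events=guard, max_step=0.01)
    t0, x = sol.t[-1], impact_map(sol.y[:, -1])
    print(f"impact {step}: t={t0:.3f} s, post-impact velocity={x[1]:.3f} m/s")

A gait designer replaces the flight dynamics with the robot’s swing-phase equations of motion, the guard with foot-strike detection, and the reset with a rigid impact map, then searches for controller parameters that render the resulting cycle provably stable.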

Speaker Bio

Aaron D. Ames joined the Georgia Institute of Technology in July 2015 as an Associate Professor in the George W. Woodruff School of Mechanical Engineering and the School of Electrical and Computer Engineering. Prior to joining Georgia Tech, he was an Associate Professor and Morris E. Foster Faculty Fellow II in Mechanical Engineering at Texas A&M University, with joint appointments in Electrical & Computer Engineering and Computer Science & Engineering. Dr. Ames received a B.S. in Mechanical Engineering and a B.A. in Mathematics from the University of St. Thomas in 2001, and he received an M.A. in Mathematics and a Ph.D. in Electrical Engineering and Computer Sciences from UC Berkeley in 2006. He served as a Postdoctoral Scholar in Control and Dynamical Systems at the California Institute of Technology from 2006 to 2008. At UC Berkeley, he was the recipient of the 2005 Leon O. Chua Award for achievement in nonlinear science and the 2006 Bernard Friedman Memorial Prize in Applied Mathematics. Dr. Ames received the NSF CAREER award in 2010 for his research on bipedal robotic walking and its applications to prosthetic devices, and is the recipient of the 2015 Donald P. Eckman Award recognizing an outstanding young engineer in the field of automatic control. His lab designs, builds, and tests novel bipedal robots, humanoids, and prostheses with the goal of achieving human-like bipedal robotic walking and translating these capabilities to robotic assistive devices.


Dec
2
Wed
Greg Hager: From Mimicry to Mastery: Creating Machines that Augment Human Skill
Dec 2 @ 12:00 pm – 1:00 pm

Abstract

We are entering an era where people will interact with smart machines to enhance the physical aspects of their lives, just as smart mobile devices have revolutionized how we access and use information. Robots already provide surgeons with physical enhancements that improve their ability to cure disease; we are seeing the first generation of robots that collaborate with humans to enhance productivity in manufacturing; and a new generation of startups is looking at ways to enhance our day-to-day existence through automated driving and delivery.


In this talk, I will use examples from surgery and manufacturing to frame some of the broad science, technology, and commercial trends that are converging to fuel progress on human-machine collaborative systems. I will describe how surgical robots can be used to observe surgeons “at work” and to define a “language of manipulation” from data, mirroring the statistical revolution in speech processing. With these models, it is possible to recognize, assess, and intelligently augment surgeons’ capabilities. Beyond surgery, new advances in perception, coupled with steadily declining costs and increasing capabilities of manipulation systems, have opened up new science and commercialization opportunities around manufacturing assistants that can be instructed “in-situ.” Finally, I will close with some thoughts on the broader challenges still to be surmounted before we are able to create true collaborative partners.

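The “language of manipulation” analogy borrows machinery from speech recognition: streams of motion data are segmented into gesture-level units using statistical sequence models such as hidden Markov models. The sketch below illustrates the idea on synthetic data; the gesture names, two-dimensional features, and choice of the hmmlearn library are assumptions for illustration, not the actual pipeline used in this work.

import numpy as np
from hmmlearn.hmm import GaussianHMM   # pip install hmmlearn

rng = np.random.default_rng(1)

# Stand-in for da Vinci kinematics: a stream of low-dimensional motion
# features (think tool-tip velocities). Real systems train on labeled
# surgical gestures; here two synthetic regimes play that role.
reach  = rng.normal(loc=[1.0, 0.0],  scale=0.2, size=(200, 2))
suture = rng.normal(loc=[-1.0, 0.5], scale=0.2, size=(200, 2))
X = np.vstack([reach, suture, reach])   # one trial: reach, suture, reach

# Fit a 2-state HMM and decode the gesture sequence, analogous to how
# phoneme models segment continuous speech.
model = GaussianHMM(n_components=2, covariance_type="diag",
                    n_iter=50, random_state=0)
model.fit(X)
states = model.predict(X)
print(states[:5], states[250:255], states[-5:])   # per-sample segment labels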

Speaker Bio

Gregory D. Hager is the Mandell Bellmore Professor of Computer Science at Johns Hopkins University. His research interests include collaborative and vision-based robotics, time-series analysis of image data, and medical applications of image analysis and robotics. He has published over 300 articles and books in these areas. Professor Hager is also Chair of the Computing Community Consortium, a board member of the Computing Research Association, and currently a member of the governing board of the International Federation of Robotics Research. In 2014, he was awarded a Hans Fischer Fellowship in the Institute of Advanced Study of the Technical University of Munich, where he also holds an appointment in Computer Science. He is a Fellow of the IEEE for his contributions to vision-based robotics, and has served on the editorial boards of IEEE TRO, IEEE PAMI, and IJCV. Professor Hager received his BA in Mathematics and Computer Science summa cum laude from Luther College (1983), and his MS (1986) and PhD (1988) from the University of Pennsylvania. He was a Fulbright Fellow at the University of Karlsruhe, and was on the faculty of Yale University prior to joining Johns Hopkins. He is the founding CEO of Clear Guide Medical.

Jan
29
Fri
LCSR Special Seminar: Nobuhiko Sugano and Yoshinobu Sato @ 228 Malone Hall
Jan 29 @ 10:00 am – 11:00 am

Nobuhiko Sugano: “CAOS in THA”

Abstract

Various systems of computer-assisted orthopaedic surgery (CAOS) for total hip arthroplasty (THA) have been developed since the early 1990s. These include computer-assisted preoperative planning, robotic devices, navigation, and patient-specific surgical templates. The first clinically applied system was an active robotic system (ROBODOC), which performed femoral implant cavity preparation as programmed preoperatively. Several reports on cementless THA with ROBODOC showed better stem alignment and less variance in limb-length inequality on radiographic evaluation, a lower incidence of pulmonary embolic events on trans-esophageal echocardiography, and less stress shielding on DEXA analysis compared with conventional manual methods. On the other hand, some studies raised issues with active systems, including a steep learning curve, muscle and nerve damage, and technical complications such as procedure stops due to bone motion during cutting (requiring re-registration) and registration failure. Semi-active robotic systems such as Acrobot and Rio were developed to ease surgeon acceptance. The drill bit at the tip of the robotic arm is moved by the surgeon’s hand, but it cannot move outside a milling path boundary defined according to 3D-image-based preoperative planning.

Thanks to advances in 3D sensor technology, navigation systems were developed. Navigation is a passive system that does not perform any actions on the patient; it only provides information and guidance to the surgeon, who still uses conventional tools to perform the surgery. There are three types of navigation: CT-based navigation, imageless navigation, and fluoroscopic navigation. CT-based navigation is the most accurate, but the preoperative planning on CT images takes time, which increases cost and radiation exposure. Imageless navigation does not use CT images, but its accuracy depends on the technique of landmark pointing, and it does not take into account the individual uniqueness of anatomy. Fluoroscopic navigation is good for trauma and spine surgeries, but its benefits are limited in hip and knee reconstruction surgeries. Several studies have shown that cup alignment with navigation is more precise than with conventional mechanical instruments, and that navigation is useful for optimizing limb length, range of motion, and stability. Recently, patient-specific templates based on CT images have attracted attention, and some early reports on cup placement and resurfacing showed improved accuracy of the procedures. These various CAOS systems have pros and cons. Nonetheless, CAOS is a useful tool that helps surgeons accurately perform what they intend to do in order to better achieve their clinical objectives. Thus, it is important that the surgeon fully understands what he or she should be trying to achieve in THA for each patient.

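CT-based navigation depends on registering the tracked patient anatomy to the preoperative CT, and the “registration failure” noted in the abstract is a failure of exactly this step. Below is a minimal sketch of classic paired-point rigid registration (the SVD method of Arun et al.) in NumPy; the fiducial count and noise level are invented for the example.

import numpy as np

def rigid_register(P, Q):
    """Least-squares rigid transform so that Q ≈ R @ P + t.

    P, Q: (3, N) arrays of matched fiducials, e.g. landmarks touched
    with a tracked pointer vs. the same points located in the CT.
    """
    p0, q0 = P.mean(axis=1, keepdims=True), Q.mean(axis=1, keepdims=True)
    H = (P - p0) @ (Q - q0).T
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = q0 - R @ p0
    return R, t

# Toy check: recover a known rotation and translation from noisy fiducials.
rng = np.random.default_rng(2)
P = rng.random((3, 6))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
Q = R_true @ P + np.array([[5.0], [1.0], [-2.0]])
Q += 0.001 * rng.standard_normal((3, 6))    # tracker noise
R, t = rigid_register(P, Q)
print(np.round(R - R_true, 3))   # ~0: the pose was recovered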

Bio

Dr. Nobuhiko Sugano received his MD (1985) and PhD (1994) from Osaka University Graduate School of Medicine in Japan. In 1995 he was invited to Baylor College of Medicine in Houston, Texas, USA, where he worked on three-dimensional morphologic analysis of the hip joint as an Assistant Professor with Prof. Philip C. Noble. That work was employed in the design of hip joint implants for Japanese patients, and in 2002 he received the Otto Aufranc Award from the Hip Society for the research. This computational research also led him to become a leading orthopaedic surgeon in the area of computer-aided surgery beginning in 1997. He developed a CT-based navigation system with his colleagues at Osaka University, and its clinical trial started in 1998. He has also conducted several clinical studies using ROBODOC. His main interests in this area are robotics, navigation, and postoperative motion analysis for arthroplasty. He received the Maurice E. Müller Award for Excellence in Computer Assisted Surgery in 2011. He is currently a professor in the Department of Orthopaedic Medical Engineering at Osaka University Graduate School of Medicine and Osaka University Hospital.


and


Yoshinobu Sato: “Automated musculoskeletal segmentation and THA planning from CT images”


Bio

Yoshinobu Sato received his B.S., M.S., and Ph.D. degrees in Information and Computer Sciences from Osaka University, Japan, in 1982, 1984, and 1988, respectively. From 1988 to 1992, he was a Research Engineer at the NTT Human Interface Laboratories. In 1992, he joined the Division of Functional Diagnostic Imaging of Osaka University Medical School. From 1996 to 1997, he was a Research Fellow in the Surgical Planning Laboratory, Harvard Medical School and Brigham and Women’s Hospital. From 1999 to 2014, he was an Associate Professor at Osaka University Graduate School of Medicine. Since 2014, he has been a Professor of Information Science at the Nara Institute of Science and Technology (NAIST), Japan. His research interests include medical image analysis, computer-assisted surgery, and computational anatomy. Dr. Sato was Program Chair of MICCAI 2013. He is a member of IEEE, MICCAI, and CAOS-International, and an editorial board member of the Medical Image Analysis journal and the International Journal of Computer Assisted Radiology and Surgery.

Feb
3
Wed
Henry Lin: Virtual Reality Surgical Simulation: “It’s not just a game. It’s a matter of saving lives.” @ B17 Hackerman Hall
Feb 3 @ 12:00 pm – 1:00 pm

Abstract

Increasingly, robotic technologies are targeting the general public. The adoption of these technologies depends on understanding the human-machine user experience. Research into how users learn to use a technology (learning curves) and how to train users (training methodologies) is crucial in driving its success. Intuitive Surgical’s da Vinci surgical system is an interesting case study in how a complex machine can positively impact a highly technical and sensitive field: surgery. To augment hands-on training for the da Vinci, the company introduced the VR-based da Vinci Skills Simulator, which provides hands-on technical training on the surgeon console. In addition to the cost and time benefits of training on the simulator, it also provides various forms of feedback and evaluation. This talk will discuss the research that goes into developing an effective VR-based surgical simulator: from designing modules with clear training goals, to developing proper metrics for feedback and analysis, to implementing proper scoring systems, to scientifically validating the modules.

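One common pattern behind simulator scoring systems like the one described above is to normalize each raw exercise metric against expert benchmarks and then combine the normalized values into a weighted composite. The sketch below shows only that generic pattern; the metric names, benchmark values, and weights are invented and are not the da Vinci Skills Simulator’s actual scoring system.

def normalize(value, expert, worst):
    """Map a raw metric onto [0, 1], where the expert benchmark scores 1."""
    score = (worst - value) / (worst - expert)
    return max(0.0, min(1.0, score))

def composite_score(metrics, benchmarks, weights):
    """Weighted average of normalized metrics, scaled to 0-100."""
    total = 0.0
    for name, value in metrics.items():
        expert, worst = benchmarks[name]
        total += weights[name] * normalize(value, expert, worst)
    return 100.0 * total / sum(weights.values())

# Hypothetical trial of a suturing exercise.
trial = {"time_s": 95.0, "economy_of_motion_cm": 310.0, "collisions": 2.0}
benchmarks = {"time_s": (60.0, 240.0),            # (expert, worst acceptable)
              "economy_of_motion_cm": (250.0, 800.0),
              "collisions": (0.0, 10.0)}
weights = {"time_s": 1.0, "economy_of_motion_cm": 1.0, "collisions": 2.0}
print(f"overall score: {composite_score(trial, benchmarks, weights):.1f}/100")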

Bio

Henry Lin manages the Surgical Simulation Development and Research Team at Intuitive Surgical. He started in the Medical Research Group investigating surgical skill evaluation, then moved to the Simulation Group to apply his research within the surgical simulation environment. He received his Ph.D. from Johns Hopkins University in 2010, in the Computational Interaction and Robotics Lab under the guidance of Dr. Gregory Hager. His dissertation research, “Surgical Motion,” focused on understanding surgeons’ technical skill through the analysis of da Vinci kinematics data and video. He received the 2005 MICCAI Best Student Paper Award and the Link Fellowship for his work. After JHU, Dr. Lin spent two years as a postdoc at the NIAAA at NIH studying brain morphology changes due to alcohol abuse. He maintains his academic interests: publishing research manuscripts, serving as a reviewer for technical conferences, including MICCAI and M2CAI, and reviewing for Intuitive Surgical’s clinical and technical grant programs. Dr. Lin also holds degrees in Computer Science from Columbia University and Carnegie Mellon University.

Feb
10
Wed
LCSR Seminar: Bilge Mutlu: Human-Centered Principles and Methods for Designing Robotic Technologies @ B17 Hackerman Hall
Feb 10 @ 12:00 pm – 1:00 pm

Abstract

The increasing emergence of robotic technologies that serve as automated tools, assistants, and collaborators promises tremendous benefits in everyday settings from the home to manufacturing facilities. While robotic technologies promise interactions that can be far more complex than those with conventional technologies, their successful integration into the human environment requires these interactions to also be natural and intuitive. To achieve complex but intuitive interactions, designers and developers must simultaneously understand and address human and computational challenges. In this talk, I will present my group’s work on building human-centered guidelines, methods, and tools to address these challenges in order to facilitate the design of robotic technologies that are more effective, intuitive, acceptable, and even enjoyable. In particular, through a series of projects, this work demonstrates how marrying knowledge about people with computational methods can enable effective user interactions with social, assistive, and telepresence robots, as well as the development of novel tools and methods that support complex design tasks across the key stages of the design process. The talk will also cover our ongoing work that applies these guidelines to the development of real-world applications of robotic technology and targets the successful integration of these technologies into everyday settings.


Bio

Bilge Mutlu is an associate professor of computer science, psychology, and industrial engineering at the University of Wisconsin–Madison. He received his Ph.D. degree from Carnegie Mellon University’s Human-Computer Interaction Institute in 2009. His background combines training in interaction design, human-computer interaction, and robotics with industry experience in product design and development. Dr. Mutlu is a former Fulbright Scholar and the recipient of the NSF CAREER award as well as several best paper awards and nominations, including at HRI 2008, HRI 2009, HRI 2011, UbiComp 2013, IVA 2013, RSS 2013, HRI 2014, CHI 2015, and ASHA 2015. His research has been covered by national and international press, including New Scientist, MIT Technology Review, Discovery News, Science Nation, and Voice of America. He has served on the Steering Committee of the HRI Conference and the Editorial Board of IEEE Transactions on Affective Computing, co-chaired the Program Committees for ROMAN 2016, HRI 2015, ROMAN 2015, and ICSR 2011 and the Program Sub-committees on Design for CHI 2013 and CHI 2014, and served on the organizing committee for HRI 2017.

Feb
17
Wed
LCSR Seminar Cancelled @ B17 Hackerman Hall
Feb 17 @ 12:00 pm – 1:00 pm

Laboratory for Computational Sensing + Robotics