SimpleITK is a simplified, open-source, multi-language interface to the National Library of Medicine’s Insight Segmentation and Registration Toolkit (ITK), a C++ open-source image analysis toolkit that is widely used in academia and industry. SimpleITK is available in multiple programming languages, including Python, R, Java, C#, C++, Lua, Ruby, and Tcl. Binary versions of the toolkit are available for the GNU Linux, Apple OS X, and Microsoft Windows operating systems. For researchers, the toolkit facilitates rapid prototyping and evaluation of image-analysis workflows with minimal effort in their programming language of choice. For educators and students, the toolkit’s concise interface and support of scripting languages facilitate experimentation with well-known algorithms, allowing them to focus on algorithmic understanding rather than low-level programming skills.
The toolkit development process follows best software engineering practices including code reviews and continuous integration testing, with results displayed online allowing everyone to gauge the status of the current code and any code that is under consideration for incorporation into the toolkit. User support is available through a dedicated mailing list, the project’s Wiki, and on GitHub. The source code is freely available on GitHub under an Apache-2.0 license (github.com/SimpleITK/SimpleITK). In addition, we provide a development environment which supports collaborative research and educational activities in the Python and R programming languages using the Jupyter notebook web application. It too is freely available on GitHub under an Apache-2.0 license (github.com/InsightSoftwareConsortium/SimpleITK-Notebooks).
The first part of the presentation will describe the motivation underlying the development of SimpleITK, its development process and its current state. The second part of the presentation will be a live demonstration illustrating the capabilities of SimpleITK as a tool for reproducible research.
Dr. Ziv Yaniv is a senior computer scientist with the Office of High Performance Computing and Communications at the National Library of Medicine, and at TAJ Technologies Inc. He obtained his Ph.D. in computer science from The Hebrew University of Jerusalem, Jerusalem, Israel. Previously, he was an assistant professor in the Department of Radiology at Georgetown University and a principal investigator at Children’s National Hospital in Washington, DC. He was chair of SPIE Medical Imaging: Image-Guided Procedures, Robotic Interventions, and Modeling (2013-2016) and program chair for the Information Processing in Computer Assisted Interventions (IPCAI) 2016 conference.
He believes in the curative power of open research and has been actively involved in the development of several free open-source toolkits, including the Image-Guided Surgery Toolkit (IGSTK), the Insight Segmentation and Registration Toolkit (ITK), and SimpleITK.
This talk will show that attitude Kalman filters can be simple in design while also being robust and accurate despite the highly nonlinear nature of attitude (i.e., orientation) estimation. Three different filters are discussed, all using quaternions and small-angle approximations of attitude errors: an Extended Kalman filter and an Unscented Kalman filter for a gyro-based situation, and an Extended Kalman filter for a gyro-less one. In addition to the three-axis attitude, all of the filters also estimate corrections to the angular velocity: random-walk-modeled biases in the gyro-based case, and first-order-Markov-modeled corrections in the gyro-less case, in which the angular velocity is computed from mass properties and control data.
The filters are evaluated using extensive real and simulated data from low-Earth-orbiting NASA satellites such as the Tropical Rainfall Measuring Mission; the Solar, Anomalous, and Magnetospheric Particle Explorer; the Earth Radiation Budget Satellite; the Wide Field Infrared Explorer; and the Fast Auroral Snapshot Explorer. The evaluations predominantly involve stressing “magnetometer-only” scenarios, i.e., using only a three-axis magnetometer to sense the attitude. Comparisons are made with attitude and rate knowledge obtained using coarse sensors and single-frame algorithms, and also with results from an Unscented Kalman filter with a more complicated attitude parameterization.
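To make the quaternion bookkeeping concrete, here is a minimal sketch of attitude propagation from an angular-velocity input, the prediction step underlying the filters above. The rates, step size, and first-order integrator are illustrative; a flight filter would also propagate the error covariance and estimate the gyro bias:

```python
import numpy as np

def quat_mult(q, p):
    # Hamilton product of quaternions in scalar-first convention [w, x, y, z]
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = p
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def propagate(q, omega, dt):
    # First-order integration of the kinematics dq/dt = 0.5 * q ⊗ [0, omega]
    dq = 0.5 * quat_mult(q, np.concatenate(([0.0], omega)))
    q = q + dq * dt
    return q / np.linalg.norm(q)   # renormalize to stay on the unit sphere

q = np.array([1.0, 0.0, 0.0, 0.0])   # identity attitude
omega = np.array([0.0, 0.0, 0.1])    # 0.1 rad/s about the body z-axis
dt = 0.01
for _ in range(1000):                # 10 s of propagation
    q = propagate(q, omega, dt)
# accumulated rotation is ~1 rad about z, i.e. q ≈ [cos(0.5), 0, 0, sin(0.5)]
```

The small-angle error states mentioned in the abstract live in the three-dimensional tangent space of this unit-quaternion manifold, which is what lets the filters avoid the unit-norm constraint in their covariance arithmetic.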
Dr. Murty Challa received a B.Sc. in physics from Andhra University, Visakhapatnam, India, and a Ph.D. in physics from the University of Georgia, Athens, Georgia. His professional interests and activities include: estimation and data fusion algorithms such as Kalman filters, batch estimators, and simultaneous localization and mapping; track correlation/association; guidance, navigation, and control for spacecraft and unmanned vehicles; missile defense; quantum computing; statistical mechanics; computational physics; and solid state physics/materials science. He is currently a member of the Senior Professional Staff of the Johns Hopkins Applied Physics Laboratory (JHU/APL), Maryland, USA. Prior to JHU/APL, he was senior staff at the Institute for Defense Analyses, Alexandria, VA, and at Computer Sciences Corporation supporting NASA Goddard Space Flight Center, Greenbelt, MD. Dr. Challa’s academic positions include post-doctoral appointments in physics at Michigan State University and Virginia Commonwealth University, and an adjunct position in physics at George Washington University. He has also served as a consultant to Iridium Satellite, LLC.
This talk and demonstration will give an overview of the open-source Robot Operating System (ROS) software ecosystem for robot systems development. ROS is a modular open-source software system whose core is a publish-subscribe middleware system for C++ and Python under Linux that supports message passing, recording and playback of messages, distributed parameters, and extensive introspection tools. Other open-source publish-subscribe systems well known to robotics developers include Lightweight Communications and Marshalling (LCM) and the Mission Oriented Operating Suite (MOOS). In addition to message passing, the ROS ecosystem offers an extensive set of tools and software packages that employ standard message definitions for robotics, a robot geometry library and description language, device interface libraries for numerous COTS devices, localization and navigation packages, three-dimensional visualization with Rviz, and physics-based robot simulation with Gazebo. ROS is now the most widely used software system for robotics research. Also see: http://www.ros.org
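The publish-subscribe pattern at the core of ROS (and of LCM and MOOS) can be sketched in a few lines of plain Python. This toy in-process broker is only conceptual: the real systems add networked transport, typed message definitions, and name resolution, and the class and topic names below are invented for illustration:

```python
from collections import defaultdict

class Broker:
    """Toy in-process publish-subscribe broker (conceptual sketch only)."""

    def __init__(self):
        self._subs = defaultdict(list)   # topic name -> list of callbacks

    def subscribe(self, topic, callback):
        # Register a callback to receive every message on a topic
        self._subs[topic].append(callback)

    def publish(self, topic, msg):
        # Deliver a message to all current subscribers of the topic
        for cb in self._subs[topic]:
            cb(msg)

broker = Broker()
received = []
broker.subscribe("/chatter", received.append)   # a "listener" node
broker.publish("/chatter", "hello")             # a "talker" node
# received == ["hello"]
```

The key property, shared by ROS topics, is that publishers and subscribers never reference each other directly; they are coupled only through the topic name, which is what makes nodes independently replaceable and introspectable.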
If you are unfamiliar with ROS, you are encouraged to bring your notebook computer to this seminar so that you can browse the web pages mentioned in this talk in real time.
Louis L. Whitcomb is a Professor in the Department of Mechanical Engineering, with a secondary appointment in Computer Science, at the Johns Hopkins University’s Whiting School of Engineering. His research focuses on the navigation, dynamics, and control of robot systems, with applications to robotics in extreme environments, including space and underwater robots. He is the former (founding) Director of the JHU Laboratory for Computational Sensing and Robotics. He received teaching awards at Johns Hopkins in 2001, 2002, 2004, and 2011, and was awarded the NSF CAREER Award and the ONR Young Investigator Award. He is a Fellow of the IEEE.
Autonomy in robotic surgery offers improved surgical efficiency and decreased surgeon fatigue, and can potentially lead to improved surgical outcomes. This talk presents our latest efforts towards surgical robotic autonomy: 1) sensorless gripping-force and state estimation on the Raven II surgical robot, 2) motion planning based on Recurrent Neural Networks (RNNs), and 3) software design for preoperative planning in endoscopic sinus and skull base surgeries. In force-estimation experiments, we compared a non-parametric estimator (Gaussian Process Regression) with a method based on a nonlinear filter (the Unscented Kalman Filter) and found that the former’s estimates are more conservative than the latter’s. In motion planning, we demonstrated that RNN control schemes can be easily adapted to various optimization targets and constraints, such as joint-velocity regularization or soft obstacle avoidance. In preoperative planning, we demonstrated the design of interactive preoperative surgical planning software, which helps us to better understand endoscopic surgeries.
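As an illustration of the non-parametric estimator mentioned above, here is a textbook Gaussian Process Regression sketch in NumPy. The kernel choice, hyperparameters, and the synthetic input-to-force data are invented for illustration; they are not the Raven II models from the talk:

```python
import numpy as np

def rbf_kernel(a, b, length=1.0, variance=1.0):
    # Squared-exponential covariance between 1-D point sets a and b
    d2 = (a[:, None] - b[None, :]) ** 2
    return variance * np.exp(-0.5 * d2 / length ** 2)

def gp_predict(x_train, y_train, x_test, noise=1e-2):
    # Standard GP regression: posterior mean and variance at x_test
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf_kernel(x_test, x_train)
    mean = Ks @ np.linalg.solve(K, y_train)
    cov = rbf_kernel(x_test, x_test) - Ks @ np.linalg.solve(K, Ks.T)
    return mean, np.diag(cov)

# Invented stand-in data: mapping a command signal to a measured grip force
x = np.linspace(0.0, 2.0, 20)
y = np.sin(x)                         # placeholder force response
mean, var = gp_predict(x, y, np.array([1.0]))
```

Unlike a parametric filter such as the UKF, the GP carries an explicit posterior variance, which is one route to the conservative behavior the abstract reports.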
Yangming Li is an acting instructor in the BioRobotics Lab at the University of Washington. Prior to this position, he was an associate professor at the Institute of Intelligent Machines, Chinese Academy of Sciences, where he received the Young Researcher Award (2011) from the National Science Foundation of China. He also worked as a postdoc at the University of Michigan. He received his PhD in 2010 through a joint program between the University of Science and Technology of China and the University of Michigan. His early research interest was Simultaneous Localization and Mapping; he is now interested in improving surgical outcomes with robotic surgery.
Thin, flexible robots able to bend and elongate can help surgeons reach deeper and more accurately into the human body than ever before, through increasingly smaller incisions. This talk will cover recent breakthroughs in design, control, and sensing that are rapidly pushing the boundaries of surgical robotics to smaller scales, greater accuracy, and more effective interaction with surgeons. Mechanics-based models of elastic robots provide the basis for these advancements, which in turn provide the raw materials necessary for building effective surgical robotic systems. These systems can offer autonomous, teleoperated, or hand-held surgeon-robot interactions. The talk will cover both recent advancements in concentric tube robots, and also new ideas in reconfigurable parallel continuum robots that assemble inside the body. An important theme of the talk will be the fascinating process of partnering with surgeons to create robots suitable for real-world operating room environments that have the potential to be powerful weapons in the fight against lung disease, brain tumors, hemorrhagic stroke, epilepsy, deafness, and urologic disorders.
Robert J. Webster III received his B.S. in electrical engineering from Clemson University in 2002, and his M.S. and Ph.D. in mechanical engineering from the Johns Hopkins University in 2004 and 2007, respectively, where he worked in the CISST-ERC, which was the precursor to the LCSR. In 2008 he joined the mechanical engineering faculty of Vanderbilt University, where he currently directs the Medical Engineering and Discovery Laboratory. He founded and serves on the steering committee for the Vanderbilt Institute for Surgery and Engineering, which brings together physicians and engineers to solve challenging clinical problems. Prof. Webster’s research interests include surgical robotics, medical device design, image-guided surgery, and continuum robotics. He is a recipient of the IEEE Robotics and Automation Society Early Career Award, the National Science Foundation CAREER Award, the Robotics Science and Systems Early Career Spotlight Award, the IEEE Volz Award, and the Award for Excellence in Teaching from Vanderbilt University.
Unmanned aerial-aquatic vehicles (UAAVs) have the potential to dramatically improve remote access to underwater environments. In particular, fixed-wing UAAVs offer a promising means of enabling efficient locomotion in both aerial and aquatic domains through the use of a lifting surface. In this talk, I will present our approach for enabling multi-domain mobility with a small fixed-wing UAAV consisting of a single propeller and a delta-wing planform. To this end, I will describe how our approach, which relies almost entirely on commercial off-the-shelf (COTS) components, uses feedback control and optimal trajectory design to solve the water-exit problem. I will demonstrate, through both simulation and hardware experiments, that our approach is indeed feasible and that it has the potential to offer a robust, low-cost solution for enabling mobility in and across the air and water domains.
Dr. Joseph Moore is a member of the Senior Professional Staff at the Johns Hopkins University Applied Physics Lab (JHU/APL). In 2014, he received his Ph.D. in Mechanical Engineering from the Massachusetts Institute of Technology, where he was a Graduate Research Assistant in the Robot Locomotion Group and developed control algorithms for robust post-stall perching with a fixed-wing unmanned aerial vehicle (UAV). He also holds a B.S. in both Mechanical and Electrical Engineering from Rensselaer Polytechnic Institute. While at JHU/APL, Dr. Moore has continued to develop control algorithms for enabling agile flight with fixed-wing UAVs. He has also worked on developing algorithms for control and motion planning of mobile manipulators and heterogeneous multi-robot teams. His current work focuses on extreme short-field landings with fixed-wing UAVs, unmanned aerial-aquatic vehicles (UAAVs), and the development of algorithms for nonlinear model predictive control.
This seminar presents essential interviewing skills for engineers, including interviewing for jobs in industry and academia and for admission to graduate school. Interviewing well is a skill that takes preparation and practice. This seminar is part of the occasional LCSR seminar series on professional development.
Louis L. Whitcomb is Professor and former Chair (2013-2017) of the Department of Mechanical Engineering, with a secondary appointment in Computer Science, at the Johns Hopkins University’s Whiting School of Engineering. He was the founding Director (2007-2013) of the JHU Laboratory for Computational Sensing and Robotics, where he is presently interim director of the Robotics MSE Program. He completed a B.S. in Mechanical Engineering in 1984 and a Ph.D. in Electrical Engineering in 1992 at Yale University. From 1984 to 1986 he was an R&D engineer with the GMFanuc Robotics Corporation in Detroit, Michigan. He joined the Department of Mechanical Engineering at the Johns Hopkins University in 1995, after postdoctoral fellowships at the University of Tokyo and the Woods Hole Oceanographic Institution. His research focuses on the navigation, dynamics, and control of robot systems, with applications to robotics in extreme environments, including space and underwater robots. Whitcomb is a co-principal investigator of the Nereus and Nereid Under-Ice projects. He received teaching awards at Johns Hopkins in 2001, 2002, 2004, and 2011, and was awarded an NSF CAREER Award and an ONR Young Investigator Award. He is a Fellow of the IEEE and an Adjunct Scientist in the Department of Applied Ocean Physics and Engineering at the Woods Hole Oceanographic Institution.
The ocean presents a number of unique challenges to successful exploration and data collection. Robotic systems and techniques provide a means to overcome the deep ocean’s unique challenges. The autonomous underwater vehicle Sentry operates alongside the tethered ROV Jason and the manned submersible Alvin to provide the National Science Foundation’s deep-water research capability. Sentry has completed over 450 dives in support of science operations in over 8 years of operations. This talk will review the unique operating challenges the ocean presents, describe the AUV Sentry and the data it collects, and introduce some future directions in underwater robotics research.
Ian Vaughn is a Research Engineer at the Woods Hole Oceanographic Institution in Woods Hole, MA. He conducts at-sea operations and on-shore engineering work with the AUV Sentry with an emphasis on data processing and software development. Previously, he completed a PhD in Ocean Engineering at the University of Rhode Island using the Ocean Exploration Trust’s Hercules ROV. Ian has completed research cruises with ocean robots all over the world.
Ian Vaughn, PhD.
Deep Submergence Laboratory
Department of Applied Ocean Physics and Engineering
Woods Hole Oceanographic Institution
Woods Hole, MA, USA
Implementing frequency response using grid-connected inverters is a popular alternative for mitigating the dynamic degradation experienced in low-inertia power systems. However, such a solution faces several challenges, as inverters do not intrinsically possess the natural response to power fluctuations that synchronous generators have. Thus, to synthetically generate “virtual” inertia, inverters need to take frequency measurements, which are usually noisy, and subsequently change their output power, which is therefore delayed. As a result, it is not a priori clear whether virtual inertia will indeed mitigate the degradation or whether some alternative control strategy will be necessary. In this talk, we present a comprehensive analysis and design framework that provides the tools required to answer this question. First, we develop novel stability analysis tools for power systems that allow for the decentralized design of inverter-based controllers. The method requires that each inverter satisfy a standard H-infinity design requirement that depends on the dynamics of the components and inverters at each bus and on the aggregate susceptance of the transmission lines connected to it. It is robust to network and delay uncertainty and, when no network information is available, reduces to the standard passivity condition for stability. Second, by selecting relevant performance outputs and signal norms, we define system-wide performance metrics that explicitly quantify the effect of frequency measurement noise and power disturbances on overall system performance. Using a novel modal decomposition, we derive closed-form expressions for system performance that explicitly capture the impact of network topology, generator and inverter control parameters, and machine-rating heterogeneity.
Finally, we leverage this framework to design a new dynamic droop control (iDroop) mechanism for grid-connected inverters that exploits classical lead/lag compensation to outperform standard droop control and virtual inertia alternatives in both joint noise and disturbance mitigation and delay robustness.
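The intuition behind replacing a static droop gain with lead/lag dynamics can be sketched numerically. The gains and corner frequencies below are invented for illustration and are not the actual iDroop design; the point is only that a lag-type compensator can keep the same DC (steady-state) droop gain while attenuating high-frequency measurement noise:

```python
import numpy as np

k0 = 20.0            # static droop gain (1/R), hypothetical value
wz, wp = 10.0, 1.0   # zero and pole frequencies (rad/s); wz > wp gives a lag

def static_droop(jw):
    # Standard droop: a flat gain at all frequencies
    return k0 * np.ones_like(jw)

def dynamic_droop(jw):
    # Lag compensator with the same DC gain but reduced gain at high frequency
    return k0 * (1 + jw / wz) / (1 + jw / wp)

w = np.logspace(-2, 3, 500)          # frequency grid, 0.01 to 1000 rad/s
jw = 1j * w
static_hf = abs(static_droop(jw)[-1])    # gain near 1000 rad/s
dynamic_hf = abs(dynamic_droop(jw)[-1])  # ~ k0 * wp / wz, i.e. 10x smaller
dynamic_dc = abs(dynamic_droop(1j * 1e-4))  # ~ k0, steady-state droop kept
```

The noise entering through frequency measurements is concentrated at high frequencies, so shaping the controller’s gain this way trades nothing at steady state for a large reduction in noise amplification, which is the kind of joint noise and disturbance trade-off the iDroop design optimizes.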
Enrique Mallada is an assistant professor of electrical and computer engineering at Johns Hopkins University. Before joining Hopkins in 2016, he was a post-doctoral fellow at the Center for the Mathematics of Information at the California Institute of Technology from 2014 to 2016. He received his ingeniero en telecomunicaciones degree from Universidad ORT, Uruguay, in 2005 and his Ph.D. degree in electrical and computer engineering with a minor in applied mathematics from Cornell University in 2014. Dr. Mallada was awarded the ECE Director’s Ph.D. Thesis Research Award for his dissertation in 2014, the Cornell University’s Jacobs Fellowship in 2011 and the Organization of American States scholarship from 2008 to 2010. His research interests lie in the areas of control, networked dynamics, and optimization, with applications to engineering networks such as power systems and the Internet.
Artificial intelligence has begun to impact healthcare in areas including electronic health records, medical images, and genomics. But one aspect of healthcare that has been largely left behind thus far is the physical environments in which healthcare delivery takes place: hospitals and assisted living facilities, among others. In this talk I will discuss my work on endowing hospitals with ambient intelligence, using computer vision-based human activity understanding in the hospital environment to assist clinicians with complex care. I will first present an implementation of an AI-Assisted Hospital where we have equipped units at two partner hospitals with visual sensors. I will then discuss my work on human activity understanding, a core problem in computer vision. I will present deep learning methods for dense and detailed recognition of activities, and efficient action detection, important requirements for ambient intelligence. I will discuss these in the context of two clinical applications, hand hygiene compliance and automated documentation of intensive care unit activities. Finally, I will present work and future directions for integrating this new source of healthcare data into the broader clinical data ecosystem, towards full realization of an AI-Assisted Hospital.
Serena Yeung is a PhD candidate at Stanford University in the Artificial Intelligence Lab, advised by Fei-Fei Li and Arnold Milstein. Her research focuses on deep learning and computer vision algorithms for video understanding and human activity recognition. More broadly, she is passionate about using these algorithms to equip healthcare spaces with ambient intelligence, in particular an AI-Assisted Hospital. Serena is the lead graduate student in the Stanford Partnership in AI-Assisted Care (PAC), a collaboration between the Stanford School of Engineering and School of Medicine. She interned at Facebook AI Research in 2016, and Google Cloud AI in 2017. She was also co-instructor for Stanford’s CS231N course on Convolutional Neural Networks for Visual Recognition in 2017.