Robotic platforms now deliver vast amounts of sensor data from large unstructured environments. Processing and interpreting these data poses unique challenges in bridging the gap between prerecorded datasets and the field. This talk will present recent work on applying deep learning techniques to robotic perception. Deep learning has driven success in many computer vision tasks through the use of standardized datasets. We focus on solutions to several novel problems that arise when attempting to deploy such techniques on fielded robotic systems. The themes of the talk are twofold: 1) How can we integrate such learning techniques with the traditional probabilistic tools that are well known in robotics? and 2) Are there ways of avoiding the labor-intensive human labeling required for supervised learning? These questions give rise to several lines of research based on dimensionality reduction, adversarial learning, and simulation. We will show this work applied to three domains: self-driving cars, acoustic localization, and optical underwater reconstruction. The talk will present results on field data from the monitoring of Australia’s coral reefs, the archeological mapping of a 5,000-year-old submerged city, and the operation of a level-4 self-driving car in urban environments.
Matthew Johnson-Roberson is Assistant Professor of Engineering in the Department of Naval Architecture & Marine Engineering and the Department of Electrical Engineering and Computer Science at the University of Michigan. He received a PhD from the University of Sydney in 2010. There he worked on Autonomous Underwater Vehicles for long-term environment monitoring. Upon joining the University of Michigan faculty in 2013, he created the DROP (Deep Robot Optical Perception) Lab, which researches a wide variety of perception problems in robotics including SLAM, 3D reconstruction, scene understanding, data mining, and visualization. He has held prior postdoctoral appointments with the Centre for Autonomous Systems – CAS at KTH Royal Institute of Technology in Stockholm and the Australian Centre for Field Robotics at the University of Sydney. He is a recipient of the NSF CAREER award (2015).
Deep networks are very successful at many visual tasks, but their performance still falls far short of human visual abilities. Humans can learn from a few examples with very weak supervision, can adapt to unknown factors such as occlusion, and can generalize from objects we know to objects we have never seen. This talk will describe some state-of-the-art work on deep networks but also discuss some of their limitations.
Alan Yuille received his B.A. in mathematics from the University of Cambridge in 1976, and completed his Ph.D. in theoretical physics at Cambridge in 1980. He then held a postdoctoral position with the Physics Department, University of Texas at Austin, and the Institute for Theoretical Physics, Santa Barbara. He then became a research scientist at the Artificial Intelligence Laboratory at MIT (1982-1986) and followed this with a faculty position in the Division of Applied Sciences at Harvard (1986-1995), rising to the position of associate professor. From 1995-2002 he worked as a senior scientist at the Smith-Kettlewell Eye Research Institute in San Francisco. From 2002-2016 he was a full professor in the Department of Statistics at UCLA with joint appointments in Psychology, Computer Science, and Psychiatry. In 2016 he became a Bloomberg Distinguished Professor in Cognitive Science and Computer Science at Johns Hopkins University. He has won a Marr prize, a Helmholtz prize, and is a Fellow of IEEE.
The sophistication of Unmanned Aerial Vehicles (UAVs), otherwise known as drones, is increasing while their cost is decreasing and is quickly approaching consumer prices. This technology, like most others, brings tremendous value to humanity but also poses challenges. This dichotomy has motivated our research, from the early 1990s to develop more capable platforms and, more recently, to explore technologies that can mitigate the risks associated with drone proliferation. We will present examples of our work on both sides of this spectrum. One area with tremendous potential impact is the sensor and processing (payload) side. We have been thought leaders on computational sensors and are on a path to reaching size, weight, and power constraints commensurate with or exceeding those of biological equivalents. This revolution in integrated sensing and computing is likely to enable a new class of autonomous and highly capable systems. In particular, we are exploring the interface between biological and engineered systems. Biological creatures are highly efficient, autonomous, and mobile with minimal sensory requirements. Their endurance and mobility remain unmatched, especially as size decreases, and are the subject of intense research. We believe that solutions building on the best of both worlds may produce better performance than either on its own, and our focus is on the optimal integration of engineered payloads with natural hosts. Another, complementary area of our research is small robotics, with a recent focus on endoscopic medical procedures. In particular, we are developing a self-propelled aiding endoscope based on biomimetic peristaltic locomotion; potential solutions may reside in what is becoming known as soft robotics.
Dr. Rizk is currently an Associate Research Professor for JHU ECE, a lecturer for JHU ME, a Science and Technology (S&T) and Innovation consultant for JHU APL, local industries, and government leadership, and an entrepreneur. Prior to Nov 2016, he was a Principal Staff, Systems/Lead Engineer, S&T Advisor, Innovation Lead, member of the S&T committee, and member of the Innovation Steering Group for the Air and Missile Defense Sector at APL. He has had 15 intellectual property filings since 2014 and received 9 internal and external achievement awards. He has been recognized as a top innovator, thought leader, and successful Principal Investigator, and has demonstrated an effective model for R&D that yielded multiple innovative and far-reaching concepts and technologies. He was a pioneer in UAV technology and led a small team that developed and demonstrated the first four-rotor (quadcopter) UAV system in the early 1990s. More recently, he has been the forerunner in developing a new multi-mode / multi-mission sensor architecture that is low C-SWaP and likely to revolutionize the associated missions/applications space and platforms. In addition, he is currently developing a new vision for future unmanned systems. Dr. Rizk has been teaching the Mechatronics courses at JHU since Spring of 2015 and is developing a new design course to be offered in Fall 2017 for which he was awarded a teaching innovation grant. During his APL tenure, he also provided systems engineering and S&T support to senior DOD leadership and large acquisition programs. In addition to providing effective technical, innovative, and mentoring leadership and management, Dr. Rizk has demonstrated a collaborative spirit, successfully working with various FFRDCs, government labs, academia, and industry of various sizes. He also made key contributions during his time at Rockwell Aerospace, McDonnell Douglas, and Boeing. He is a senior member of IEEE and AIAA, and a member of AUVSI.
Image-guided therapy is a clinical procedure performed under 2-D or 3-D image guidance, such as MRI or CT, to accurately deliver surgical devices to diseased or cancerous tissue. This emerging field is interdisciplinary, combining robotics, computer science, engineering, and medicine. Image-guided therapy allows faster, safer, and more accurate minimally invasive surgery and diagnosis. In this talk, Dr. Tse will present the technological challenges in the field, followed by his research in MRI-guided therapy for brachytherapy, ablation, and stem cell treatment in the prostate, the heart, and the spine. These procedures combine the latest imaging and robotic technology in minimally invasive therapy.
Dr. Zion Tse is an Assistant Professor in the College of Engineering and the Principal Investigator of the Medical Robotics Lab at the University of Georgia. Formerly, he was a visiting scientist in the Center for Interventional Oncology at National Institutes of Health, and a research fellow in the Radiology Department at Harvard Medical School, Brigham and Women’s Hospital. He received his PhD in Medical Robotics from Imperial College London, UK. His academic and professional experience has related to mechatronics, medical devices and surgical robotics. Dr. Tse has designed and prototyped a broad range of novel clinical devices, most of which have been tested in animal and human trials.
Human-controlled robotic systems can greatly improve healthcare by synthesizing information, sharing knowledge with the human operator, and assisting with the delivery of care. This talk will highlight projects related to new technology for surgical simulation and training, as well as a more in-depth discussion of a novel teleoperated robotic system that enables complex needle-based medical procedures that are currently not possible. The central element of this work is understanding how to integrate the human with the physical system in an intuitive and natural way, and how to leverage the relative strengths of the human and the mechatronic system to improve outcomes.
Ann Majewicz completed B.S. degrees in Mechanical Engineering and Electrical Engineering at the University of St. Thomas, the M.S.E. degree in Mechanical Engineering at Johns Hopkins University, and the Ph.D. degree in Mechanical Engineering at Stanford University. Dr. Majewicz joined the Department of Mechanical Engineering as an Assistant Professor in August 2014, where she directs the Human-Enabled Robotic Technology Laboratory. She holds a courtesy appointment in the Department of Surgery at UT Southwestern Medical Center. Her research interests focus on the interface between humans and robotic systems, with an emphasis on improving the delivery of surgical and interventional care, both for the patient and the provider.
This is the Fall 2017 Kick-Off Seminar, presenting an overview of LCSR, useful information, and an introduction to the faculty and labs.
Robots hold promise in assisting people in a variety of domains including healthcare services, household chores, collaborative manufacturing, and educational learning. In supporting these activities, robots need to engage with humans in cooperative interactions in which they work together toward a common goal in a socially intuitive manner. Such interactions require robots to coordinate actions, predict task intent, direct attention, and convey relevant information to human partners. In this talk, I will present how techniques in human-computer interaction, artificial intelligence, and robotics can be applied in a principled manner to create and study intuitive interactions between humans and robots. I will demonstrate social, cognitive, and task benefits of effective human-robot teams in various application contexts. I will discuss broader impacts of my research, as well as future directions of my research focusing on intuitive computing.
Chien-Ming Huang is an Assistant Professor of Computer Science in the Whiting School of Engineering at The Johns Hopkins University. His research seeks to enable intuitive interactions between humans and machines to augment human capabilities. Dr. Huang received his Ph.D. in Computer Science at the University of Wisconsin–Madison in 2015, his M.S. in Computer Science at the Georgia Institute of Technology in 2010, and his B.S. in Computer Science at National Chiao Tung University in Taiwan in 2006. His research has been awarded a Best Paper Runner-Up at Robotics: Science and Systems (RSS) 2013 and has received media coverage from MIT Technology Review, Tech Insider, and Science Nation.
The ability to manufacture micro-scale sensors and actuators has inspired the robotics community for over 30 years. There have been huge success stories; MEMS inertial sensors have enabled an entire market of low-cost, small UAVs. However, the promise of ant-scale robots has largely gone unrealized. Ants can move at high speeds on surfaces from picnic tables to front lawns, but the few legged microrobots that have walked have done so at slow speeds (< 1 body length/sec) on smooth silicon wafers. In addition, the vision of large numbers of microfabricated sensors interacting directly with the environment has suffered in part due to the brittle materials used in microfabrication. This talk will present our progress in the design of sensors, mechanisms, and actuators that utilize new microfabrication processes to incorporate materials with widely varying moduli and functionality to achieve more robustness, dynamic range, and complexity in smaller packages. Results include skins of soft tactile or strain sensors with high dynamic range, new models of bio-inspired jumping mechanisms, and magnetically actuated legged microrobots from 1 gram down to 1 milligram that provide insights into simple design and control for high-speed locomotion in small-scale mobile robots.
Sarah Bergbreiter joined the University of Maryland, College Park in 2008 and is currently an Associate Professor of Mechanical Engineering, with a joint appointment in the Institute for Systems Research. She received her B.S.E. degree in Electrical Engineering from Princeton University in 1999, and the M.S. and Ph.D. degrees from the University of California, Berkeley in 2004 and 2007 with a focus on microrobotics. Her research uses inspiration from microsystems and biology to improve robotics performance at all scales. She has been awarded several honors including the DARPA Young Faculty Award in 2008, the NSF CAREER Award in 2011, and the Presidential Early Career Award for Scientists and Engineers (PECASE) in 2013 for her research on engineering robotic systems down to sub-millimeter size scales. She also received the Best Conference Paper Award at IEEE ICRA 2010 on her work incorporating new materials into microrobotics and the NTF Award at IEEE IROS 2011 for early demonstrations of jumping microrobots. She currently serves on DARPA’s Microsystems Exploratory Council and as an associate editor for IEEE Transactions on Robotics and ASME Journal on Mechanisms and Robotics.
SimpleITK is a simplified, open source, multi-language interface to the National Library of Medicine’s Insight Segmentation and Registration Toolkit (ITK), a C++ open source image analysis toolkit which is widely used in academia and industry. SimpleITK is available in multiple programming languages including: Python, R, Java, C#, C++, Lua, Ruby, and Tcl. Binary versions of the toolkit are available for the GNU Linux, Apple OS X, and Microsoft Windows operating systems. For researchers, the toolkit facilitates rapid prototyping and evaluation of image-analysis workflows with minimal effort using their programming language of choice. For educators and students, the toolkit’s concise interface and support of scripting languages facilitates experimentation with well-known algorithms, allowing them to focus on algorithmic understanding rather than low-level programming skills.
The toolkit development process follows best software engineering practices including code reviews and continuous integration testing, with results displayed online allowing everyone to gauge the status of the current code and any code that is under consideration for incorporation into the toolkit. User support is available through a dedicated mailing list, the project’s Wiki, and on GitHub. The source code is freely available on GitHub under an Apache-2.0 license (github.com/SimpleITK/SimpleITK). In addition, we provide a development environment which supports collaborative research and educational activities in the Python and R programming languages using the Jupyter notebook web application. It too is freely available on GitHub under an Apache-2.0 license (github.com/InsightSoftwareConsortium/SimpleITK-Notebooks).
The first part of the presentation will describe the motivation underlying the development of SimpleITK, its development process and its current state. The second part of the presentation will be a live demonstration illustrating the capabilities of SimpleITK as a tool for reproducible research.
Dr. Ziv Yaniv is a senior computer scientist with the Office of High Performance Computing and Communications, at the National Library of Medicine, and at TAJ Technologies Inc. He obtained his Ph.D. in computer science from The Hebrew University of Jerusalem, Jerusalem, Israel. Previously he was an assistant professor in the Department of Radiology, Georgetown University, and a principal investigator at Children’s National Hospital in Washington DC. He was chair of SPIE Medical Imaging: Image-Guided Procedures, Robotic Interventions, and Modeling (2013-2016) and program chair for the Information Processing in Computer Assisted Interventions (IPCAI) 2016 conference.
He believes in the curative power of open research, and has been actively involved in the development of several free open source toolkits, including the Image-Guided Surgery Toolkit (IGSTK), the Insight Segmentation and Registration Toolkit (ITK), and SimpleITK.
This talk will show that attitude Kalman filters can be simple in design while remaining robust and accurate despite the highly nonlinear nature of attitude (i.e., orientation) estimation. Three different filters are discussed, all using quaternions and small-angle approximations of attitude errors: an Extended Kalman filter and an Unscented Kalman filter for a gyro-based situation, and an Extended Kalman filter for a gyro-less one. In addition to the three-axis attitude, all of the filters also estimate corrections to the angular velocity: random-walk-modeled biases in the gyro-based case, and first-order-Markov-modeled corrections in the gyro-less case, which involves angular velocity computed from mass properties and control data.
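To make the gyro-based propagation step concrete, the following much-simplified sketch (pure Python; function names and the vector-first quaternion convention are illustrative choices, not taken from the talk) subtracts the estimated gyro bias from the measured rate and integrates the attitude quaternion over one time step:

```python
import math

def quat_mult(q, p):
    # Hamilton product; quaternions stored as [x, y, z, w] (vector first).
    x1, y1, z1, w1 = q
    x2, y2, z2, w2 = p
    return [w1 * x2 + w2 * x1 + y1 * z2 - z1 * y2,
            w1 * y2 + w2 * y1 + z1 * x2 - x1 * z2,
            w1 * z2 + w2 * z1 + x1 * y2 - y1 * x2,
            w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2]

def propagate(q, omega_meas, bias, dt):
    """One propagation step: correct the measured angular rate with the
    estimated bias, then rotate q by the resulting small rotation."""
    w = [omega_meas[i] - bias[i] for i in range(3)]  # bias-corrected rate
    angle = math.sqrt(sum(c * c for c in w)) * dt
    if angle < 1e-12:
        return q                                     # negligible rotation
    axis = [c * dt / angle for c in w]
    s, c = math.sin(angle / 2.0), math.cos(angle / 2.0)
    dq = [axis[0] * s, axis[1] * s, axis[2] * s, c]
    qn = quat_mult(q, dq)
    n = math.sqrt(sum(v * v for v in qn))            # renormalize
    return [v / n for v in qn]

# Example: rotate about z at 0.1 rad/s for 10 s (100 steps of 0.1 s)
# with zero bias; the result should be a 1-rad rotation about z.
q = [0.0, 0.0, 0.0, 1.0]
for _ in range(100):
    q = propagate(q, [0.0, 0.0, 0.1], [0.0, 0.0, 0.0], 0.1)
```

In the filters described above, the small-angle attitude error and the bias correction are what the Kalman update actually estimates; this sketch shows only the deterministic propagation they are applied to.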
The filters are evaluated using extensive real and simulated data from low-Earth orbiting NASA satellites such as the Tropical Rainfall Measurement Mission, the Solar, Anomalous, and Magnetospheric Particle Explorer, the Earth Radiation Budget Satellite, the Wide Field Infrared Explorer, and the Fast Auroral Snapshot Explorer. The evaluations predominantly involve stressing “magnetometer-only” scenarios, i.e., using only a three-axis magnetometer to sense the attitude. Comparisons are made with attitude and rate knowledge obtained using coarse sensors and single-frame algorithms, and also with results from an Unscented Kalman filter with a more complicated attitude parameterization.
Dr. Murty Challa received a B.Sc. in physics from Andhra University, Visakhapatnam, India, and a Ph.D. in physics from the University of Georgia, Athens, Georgia. His professional interests and activities include: estimation and data fusion algorithms such as Kalman filters, batch estimators, and simultaneous localization and mapping; track correlation/association; guidance, navigation, and control for spacecraft and unmanned vehicles; missile defense; quantum computing; statistical mechanics; computational physics; and solid state physics/materials science. He is currently a member of the Senior Professional Staff of the Johns Hopkins Applied Physics Laboratory (JHU/APL), Maryland, USA. Prior to JHU/APL, he was senior staff at the Institute for Defense Analyses, Alexandria, VA, and at Computer Sciences Corporation supporting NASA Goddard Space Flight Center, Greenbelt, MD. Dr. Challa’s academic positions include post-doctoral appointments in physics at Michigan State University and Virginia Commonwealth University, and an adjunct position in physics at George Washington University. He has also served as a consultant to Iridium Satellite, LLC.