Robotic platforms now deliver vast amounts of sensor data from large unstructured environments. Processing and interpreting this data poses many unique challenges in bridging the gap between prerecorded datasets and the field. This talk will present recent work on applying deep learning techniques to robotic perception. Deep learning has driven success in many computer vision tasks through the use of standardized datasets. We focus on solutions to several novel problems that arise when attempting to deploy such techniques on fielded robotic systems. The themes of the talk are twofold: 1) How can we integrate such learning techniques into the traditional probabilistic tools that are well known in robotics? and 2) Are there ways of avoiding the labor-intensive human labeling required for supervised learning? These questions give rise to several lines of research based on dimensionality reduction, adversarial learning, and simulation. We will show this work applied to three domains: self-driving cars, acoustic localization, and optical underwater reconstruction. The talk will present results on field data from the monitoring of Australia's coral reefs, the archeological mapping of a 5,000-year-old submerged city, and the operation of a level-4 self-driving car in urban environments.
Matthew Johnson-Roberson is an Assistant Professor of Engineering in the Department of Naval Architecture & Marine Engineering and the Department of Electrical Engineering and Computer Science at the University of Michigan. He received his PhD from the University of Sydney in 2010, where he worked on Autonomous Underwater Vehicles for long-term environmental monitoring. Upon joining the University of Michigan faculty in 2013, he created the DROP (Deep Robot Optical Perception) Lab, which researches a wide variety of perception problems in robotics including SLAM, 3D reconstruction, scene understanding, data mining, and visualization. He previously held postdoctoral appointments with the Centre for Autonomous Systems (CAS) at KTH Royal Institute of Technology in Stockholm and the Australian Centre for Field Robotics at the University of Sydney. He is a recipient of the NSF CAREER award (2015).
Deep networks are very successful on many visual tasks, but their performance still falls far short of human visual abilities. Humans can learn from a few examples with very weak supervision, adapt to unknown factors like occlusion, and generalize from objects we know to objects we do not. This talk will describe some state-of-the-art work on deep networks but also discuss some of their limitations.
Alan Yuille received his B.A. in mathematics from the University of Cambridge in 1976, and completed his Ph.D. in theoretical physics at Cambridge in 1980. He then held a postdoctoral position with the Physics Department, University of Texas at Austin, and the Institute for Theoretical Physics, Santa Barbara. He subsequently became a research scientist at the Artificial Intelligence Laboratory at MIT (1982-1986) and followed this with a faculty position in the Division of Applied Sciences at Harvard (1986-1995), rising to the position of associate professor. From 1995-2002 he worked as a senior scientist at the Smith-Kettlewell Eye Research Institute in San Francisco. From 2002-2016 he was a full professor in the Department of Statistics at UCLA with joint appointments in Psychology, Computer Science, and Psychiatry. In 2016 he became a Bloomberg Distinguished Professor in Cognitive Science and Computer Science at Johns Hopkins University. He has won a Marr Prize and a Helmholtz Prize, and is a Fellow of the IEEE.
The sophistication of Unmanned Aerial Vehicles (UAVs), otherwise known as drones, is increasing while their cost is decreasing and quickly approaching consumer prices. This technology, like most others, adds tremendous value to humanity but also poses challenges. This dichotomy has motivated our research, from developing more capable platforms in the early 1990s to, more recently, exploring technologies that can mitigate the risks associated with drone proliferation. We will present examples of our work on both sides of this spectrum. One area with tremendous potential impact is the sensor and processing (payload) side. We have been thought leaders on computational sensors and are on a path to reaching size, weight, and power constraints commensurate with, or exceeding, those of biological equivalents. This revolution in integrated sensing and computing is likely to enable a new class of autonomous and very capable systems. In particular, we are exploring the interface between biological and engineered systems. Biological creatures are highly efficient, autonomous, and mobile with minimal sensory requirements. Their endurance and mobility remain far unmatched, especially as size decreases, and are the subject of intense research. We believe that solutions building on the best of both worlds may produce better performance than either on its own, and our focus is on the optimal integration of engineered payloads with natural hosts. A complementary area of our research is small robotics, with a recent focus on endoscopic medical procedures. In particular, we are developing a self-propelled aiding endoscope based on biomimetic peristaltic locomotion; potential solutions may reside in what is becoming known as soft robotics.
Dr. Rizk is currently an Associate Research Professor for JHU ECE, a lecturer for JHU ME, a Science and Technology (S&T) and Innovation consultant for JHU APL, local industries, and government leadership, and an entrepreneur. Prior to Nov 2016, he was a Principal Staff, Systems/Lead Engineer, S&T Advisor, Innovation Lead, member of the S&T committee, and member of the Innovation Steering Group for the Air and Missile Defense Sector at APL. He has had 15 intellectual property filings since 2014 and received 9 internal and external achievement awards. He has been recognized as a top innovator, thought leader, and successful Principal Investigator, and has demonstrated an effective model for R&D that yielded multiple innovative and far-reaching concepts and technologies. He was a pioneer in UAV technology and led a small team that developed and demonstrated the first four-rotor (quadcopter) UAV system in the early 1990s. More recently, he has been the forerunner in developing a new multi-mode/multi-mission sensor architecture that is low C-SWaP and likely to revolutionize the associated missions/applications space and platforms. In addition, he is currently developing a new vision for future unmanned systems. Dr. Rizk has been teaching the Mechatronics courses at JHU since Spring 2015 and is developing a new design course to be offered in Fall 2017, for which he was awarded a teaching innovation grant. During his APL tenure, he also provided systems engineering and S&T support to senior DOD leadership and large acquisition programs. In addition to providing effective technical, innovative, and mentoring leadership and management, Dr. Rizk has demonstrated a collaborative spirit, successfully working with various FFRDCs, government labs, academia, and industry of various sizes. He also made key contributions during his time at Rockwell Aerospace, McDonnell Douglas, and Boeing. He is a senior member of IEEE and AIAA, and a member of AUVSI.
Image-guided therapy is a clinical procedure performed under 2-D or 3-D image guidance, such as MRI or CT, to accurately deliver surgical devices to diseased or cancerous tissue. This emerging field is interdisciplinary, combining the technologies of robotics, computer science, engineering, and medicine. Image-guided therapy allows faster, safer, and more accurate minimally invasive surgery and diagnosis. In this talk, Dr. Tse will present the technological challenges in the field, followed by his research in MRI-guided therapy for brachytherapy, ablation, and stem cell treatment in the prostate, the heart, and the spine. These procedures combine the latest imaging and robotic technologies in minimally invasive therapy.
Dr. Zion Tse is an Assistant Professor in the College of Engineering and the Principal Investigator of the Medical Robotics Lab at the University of Georgia. Formerly, he was a visiting scientist in the Center for Interventional Oncology at the National Institutes of Health, and a research fellow in the Radiology Department at Harvard Medical School, Brigham and Women's Hospital. He received his PhD in Medical Robotics from Imperial College London, UK. His academic and professional experience spans mechatronics, medical devices, and surgical robotics. Dr. Tse has designed and prototyped a broad range of novel clinical devices, most of which have been tested in animal and human trials.
Grid of the Future: Controlling the Edge
The evolution of the grid faces significant challenges if it is to integrate more energy from renewable generation and other Distributed Energy Resources (DERs). To maintain the grid's reliability and turn intermittent power sources into major contributors to the U.S. energy mix, we have to think about the grid differently and design it to be smarter and more flexible.
ARPA-E is interested in disruptive technologies that enable increased integration of DERs through real-time adaptation while maintaining grid reliability and reducing costs for customers with smart technologies. The potential impact is significant, with projected annual energy savings of more than 3 quadrillion BTU and annual CO2 emissions reductions of more than 250 million metric tons.
This talk will identify opportunities in developing next-generation control technologies and grid operation paradigms that address these challenges and enable secure, stable, and reliable transmission and distribution of electrical power. Innovative approaches to the coordinated management of bulk generation, DERs, flexible loads, and storage assets with multiple roles and revenue streams will be discussed. A summary of the ARPA-E NODES (Network Optimized Distributed Energy Systems) Program, which funds development of these technologies, will be presented.
Dr. Sonja Glavaski is a Program Director at the Advanced Research Projects Agency-Energy (ARPA-E), overseeing a portfolio of projects developing innovative and disruptive technologies that facilitate energy efficiency, enable more efficient renewable energy generation, and make the electricity grid more responsive and resilient. Her technical focus areas are data analytics and distributed control of complex cyber-physical systems, with emphasis on the operations and security of energy systems. Dr. Glavaski spearheaded the development of, and currently helms, the ARPA-E NODES Program, which aims to develop transformational grid management and control methods to create a virtual energy storage system based on the use of flexible loads and distributed energy resources (DERs).
Prior to joining ARPA-E, Dr. Glavaski served as Control Systems Group Leader at United Technologies Research Center, advancing knowledge and technology in the area of control and intelligent systems. Before UTRC, Dr. Glavaski led key programs at Eaton Innovation Center and Honeywell Labs. During her 20-plus-year career, Dr. Glavaski has contributed significantly to technical advancements in numerous product areas, including energy systems, hybrid vehicles, energy-efficient building HVAC/R systems, and aircraft systems.
Dr. Glavaski received her Ph.D. and M.S. in Electrical Engineering from the California Institute of Technology, and her Dipl. Ing. and M.S. in Electrical Engineering from the University of Belgrade.
Human-controlled robotic systems can greatly improve healthcare by synthesizing information, sharing knowledge with the human operator, and assisting with the delivery of care. This talk will highlight projects related to new technology for surgical simulation and training, as well as a more in-depth discussion of a novel teleoperated robotic system that enables complex needle-based medical procedures that are currently not possible. The central element of this work is understanding how to integrate the human with the physical system in an intuitive and natural way, and how to leverage the relative strengths of the human and the mechatronic system to improve outcomes.
Ann Majewicz completed B.S. degrees in Mechanical Engineering and Electrical Engineering at the University of St. Thomas, the M.S.E. degree in Mechanical Engineering at Johns Hopkins University, and the Ph.D. degree in Mechanical Engineering at Stanford University. Dr. Majewicz joined the Department of Mechanical Engineering as an Assistant Professor in August 2014, where she directs the Human-Enabled Robotic Technology Laboratory. She holds a courtesy appointment in the Department of Surgery at UT Southwestern Medical Center. Her research interests focus on the interface between humans and robotic systems, with an emphasis on improving the delivery of surgical and interventional care, both for the patient and the provider.
Carl Kaiser, PhD
AUV Program Manager
National Deep Submergence Facility
Woods Hole Oceanographic Institution
Over the last 15 years, Autonomous Underwater Vehicles (AUVs) have migrated from finicky experiments to a mature capability providing routine operational support to deep-sea scientists. Moreover, the boundaries of science that can be conducted with AUVs are advancing rapidly and in unexpected directions. The AUV Sentry entered the National Deep Submergence Facility (NDSF) in 2010 and has completed more than 420 dives in support of ocean science. Sentry operates up to 190 days per year and is a "fly-away" system that can be shipped to a vessel of opportunity anywhere in the world by land, sea, or air freight. Sentry has a unique design emphasizing maneuverability, operation over steep terrain, and extreme mission flexibility. It carries a wide range of standard sensors, including a multibeam echo sounder, a sidescan sonar, a sub-bottom profiler, a high-resolution color camera, and a variety of water chemistry sensors. A substantial number of custom sensors have been added, and recently even sampling has been performed. Payload re-configuration between cruises, and even between dives, is routine, and tens of new capabilities are added every year.
Increasingly, acoustic communications are being used to interact with AUVs mid-mission for monitoring or mission intervention. However, these capabilities are still new, and we have only scratched the surface of what is possible.
This talk will begin with a presentation of the AUV Sentry and typical science missions. It will then discuss the present state of the art in acoustic interaction and will conclude with a look at possible future directions for these technologies.
Dr. Carl Kaiser holds Bachelor's, Master's, and PhD degrees in Mechanical Engineering and Robotics from Colorado State University. Following graduate school, he made a brief foray into the corporate world of Southeast Asian manufacturing and supply chains before returning to academia. He has been at Woods Hole Oceanographic Institution since 2010, where he is the Autonomous Underwater Vehicle Program Manager for the National Deep Submergence Facility as well as a Woods Hole Oceanographic Institution principal investigator focusing on novel applications of, and technologies for, Autonomous Underwater Vehicles in the deep ocean. He has spent more than a year at sea with various deep submergence vehicles and several additional months in the field with them in various ports or shallow-water test facilities.
Avik De is a PhD candidate in the GRASP Laboratory at the University of Pennsylvania, advised by Dr. Daniel Koditschek. He graduated with a BS/MS in Mechanical Engineering from Johns Hopkins University in 2010, during which he performed an empirical study on how and when human beings inject feedback to stabilize a one-dimensional paddle juggling task. Bio-inspiration remains a key research interest, and during his PhD he shifted his efforts to modular/compositional control of dynamic locomotion, as well as the design of dynamic locomotor systems. He co-founded Ghost Robotics in 2016, commercializing research that led to the creation of a family of power-dense direct-drive legged robots with high actuation bandwidth and proprioceptive sensing capabilities. He helped create the curriculum for two online courses on Coursera: "Robotics: Mobility" and "Robotics: Capstone".