Carl Kaiser, PhD
AUV Program Manager
National Deep Submergence Facility
Woods Hole Oceanographic Institution
Over the last 15 years, Autonomous Underwater Vehicles (AUVs) have matured from finicky experiments into a capability providing routine operational support to deep-sea scientists. Moreover, the boundaries of science that can be conducted with AUVs are advancing rapidly and in unexpected directions. The AUV Sentry entered the National Deep Submergence Facility (NDSF) in 2010 and has completed more than 420 dives in support of ocean science. Sentry operates up to 190 days per year and is a “fly-away” system that can be shipped to a vessel of opportunity anywhere in the world by land, sea, or air freight. Sentry has a unique design emphasizing maneuverability, operation over steep terrain, and extreme mission flexibility. It carries a wide range of standard sensors, including a multibeam echo sounder, a sidescan sonar, a sub-bottom profiler, a high-resolution color camera, and a variety of water-chemistry sensors. A substantial number of custom sensors have been added, and recently even sampling has been performed. Payload reconfiguration between cruises, and even between dives, is routine, and dozens of new capabilities are added every year.
Increasingly, acoustic communications are being used to interact with AUVs mid-mission for monitoring or mission intervention. However, these capabilities are still new, and we have only scratched the surface of what is possible.
This talk will begin with a presentation of the AUV Sentry and typical science missions. It will then discuss the present state of the art in acoustic interaction and will conclude with a look at possible future directions for these technologies.
Dr. Carl Kaiser holds bachelor’s, master’s, and PhD degrees in Mechanical Engineering and Robotics from Colorado State University. Following graduate school, he made a brief foray into the corporate world of Southeast Asian manufacturing and supply chains before returning to academia. He has been at Woods Hole Oceanographic Institution since 2010 and is the Autonomous Underwater Vehicle Program Manager for the National Deep Submergence Facility, as well as a Woods Hole Oceanographic Institution principal investigator focusing on novel applications of, and technologies for, Autonomous Underwater Vehicles in the deep ocean. He has spent more than a year at sea with various deep submergence vehicles and several additional months in the field with them in various ports or shallow-water test facilities.
Avik De is a PhD candidate in the GRASP Laboratory at the University of Pennsylvania, advised by Dr. Daniel Koditschek. He graduated with a BS/MS in Mechanical Engineering from Johns Hopkins University in 2010, during which he performed an empirical study of how and when human beings inject feedback to stabilize a one-dimensional paddle-juggling task. Bio-inspiration remains a key research interest, and during his PhD he shifted his efforts to modular/compositional control of dynamic locomotion, as well as the design of dynamic locomotor systems. He co-founded Ghost Robotics in 2016, commercializing research that led to the creation of a family of power-dense direct-drive legged robots with high actuation bandwidth and proprioceptive sensing capabilities. He helped create the curricula for two online courses on Coursera: “Robotics: Mobility” and “Robotics: Capstone”.
This is the Fall 2017 Kick-Off Seminar, presenting an overview of LCSR, useful information, and an introduction to the faculty and labs.
Robots hold promise in assisting people in a variety of domains including healthcare services, household chores, collaborative manufacturing, and educational learning. In supporting these activities, robots need to engage with humans in cooperative interactions in which they work together toward a common goal in a socially intuitive manner. Such interactions require robots to coordinate actions, predict task intent, direct attention, and convey relevant information to human partners. In this talk, I will present how techniques in human-computer interaction, artificial intelligence, and robotics can be applied in a principled manner to create and study intuitive interactions between humans and robots. I will demonstrate social, cognitive, and task benefits of effective human-robot teams in various application contexts. I will discuss broader impacts of my research, as well as future directions of my research focusing on intuitive computing.
Chien-Ming Huang is an Assistant Professor of Computer Science in the Whiting School of Engineering at The Johns Hopkins University. His research seeks to enable intuitive interactions between humans and machines to augment human capabilities. Dr. Huang received his Ph.D. in Computer Science at the University of Wisconsin–Madison in 2015, his M.S. in Computer Science at the Georgia Institute of Technology in 2010, and his B.S. in Computer Science at National Chiao Tung University in Taiwan in 2006. His research has been awarded a Best Paper Runner-Up at Robotics: Science and Systems (RSS) 2013 and has received media coverage from MIT Technology Review, Tech Insider, and Science Nation.
The ability to manufacture micro-scale sensors and actuators has inspired the robotics community for over 30 years. There have been huge success stories; MEMS inertial sensors have enabled an entire market of low-cost, small UAVs. However, the promise of ant-scale robots has largely gone unfulfilled. Ants can move at high speeds on surfaces from picnic tables to front lawns, but the few legged microrobots that have walked have done so at slow speeds (< 1 body length/sec) on smooth silicon wafers. In addition, the vision of large numbers of microfabricated sensors interacting directly with the environment has suffered in part due to the brittle materials used in microfabrication. This talk will present our progress in the design of sensors, mechanisms, and actuators that utilize new microfabrication processes to incorporate materials with widely varying moduli and functionality to achieve more robustness, dynamic range, and complexity in smaller packages. Results include skins of soft tactile or strain sensors with high dynamic range, new models of bio-inspired jumping mechanisms, and magnetically actuated legged microrobots from 1 gram down to 1 milligram that provide insights into simple design and control for high-speed locomotion in small-scale mobile robots.
Sarah Bergbreiter joined the University of Maryland, College Park in 2008 and is currently an Associate Professor of Mechanical Engineering, with a joint appointment in the Institute for Systems Research. She received her B.S.E. degree in Electrical Engineering from Princeton University in 1999, and the M.S. and Ph.D. degrees from the University of California, Berkeley in 2004 and 2007 with a focus on microrobotics. Her research uses inspiration from microsystems and biology to improve robotics performance at all scales. She has been awarded several honors including the DARPA Young Faculty Award in 2008, the NSF CAREER Award in 2011, and the Presidential Early Career Award for Scientists and Engineers (PECASE) in 2013 for her research on engineering robotic systems down to sub-millimeter size scales. She also received the Best Conference Paper Award at IEEE ICRA 2010 on her work incorporating new materials into microrobotics and the NTF Award at IEEE IROS 2011 for early demonstrations of jumping microrobots. She currently serves on DARPA’s Microsystems Exploratory Council and as an associate editor for IEEE Transactions on Robotics and ASME Journal on Mechanisms and Robotics.
Incredible biological mechanisms have emerged through evolution, and they can provide a wellspring of inspiration for engineers. One promising area of biological inspiration is the design of devices and robots made of compliant materials, as part of a larger field of research in soft robotics. In this talk, the soft robotics research currently underway in the mLab at Oregon State University will be presented. Soft active materials designed and researched in the mLab include liquid metal, biodegradable elastomers, and electroactive materials. Bioinspired mechanisms include octopus-inspired soft muscles, gecko-inspired adhesives, and soft wearable sensors. However, the biological mechanisms that serve as sources of inspiration are made of materials vastly more compliant than the metal and plastic that engineers and roboticists normally use. To imitate and improve on nature’s designs, we must create mechanisms from materials like fabric and rubber, which are difficult to integrate into traditional fabrication techniques. To address these limitations, the mLab is also innovating in multi-material 3D printing to rapidly and directly fabricate soft robots. Though significant challenges remain to be solved, the development of such soft materials and devices promises to bring robots more and more into our daily lives.
Dr. Yiğit Mengüç works at the interface of mechanical science and robotics, creating soft devices inspired by nature and applied to robotics. He received his B.S. (2006) at Rice University, and his M.S. (2008) and Ph.D. (2011) in Mechanical Engineering at Carnegie Mellon University. He completed his postdoctoral work at Harvard University’s Wyss Institute for Biologically Inspired Engineering in 2014 and is now an assistant professor of Robotics and Mechanical Engineering at Oregon State University, where he founded and leads the mLab. He received an Office of Naval Research Young Investigator Program (ONR YIP) Award in 2016 to develop cephalopod-inspired robots.
SimpleITK is a simplified, open source, multi-language interface to the National Library of Medicine’s Insight Segmentation and Registration Toolkit (ITK), a C++ open source image analysis toolkit which is widely used in academia and industry. SimpleITK is available in multiple programming languages including: Python, R, Java, C#, C++, Lua, Ruby, and TCL. Binary versions of the toolkit are available for the GNU Linux, Apple OS X, and Microsoft Windows operating systems. For researchers, the toolkit facilitates rapid prototyping and evaluation of image-analysis workflows with minimal effort using their programming language of choice. For educators and students, the toolkit’s concise interface and support of scripting languages facilitates experimentation with well-known algorithms, allowing them to focus on algorithmic understanding rather than low-level programming skills.
The toolkit development process follows best software engineering practices including code reviews and continuous integration testing, with results displayed online allowing everyone to gauge the status of the current code and any code that is under consideration for incorporation into the toolkit. User support is available through a dedicated mailing list, the project’s Wiki, and on GitHub. The source code is freely available on GitHub under an Apache-2.0 license (github.com/SimpleITK/SimpleITK). In addition, we provide a development environment which supports collaborative research and educational activities in the Python and R programming languages using the Jupyter notebook web application. It too is freely available on GitHub under an Apache-2.0 license (github.com/InsightSoftwareConsortium/SimpleITK-Notebooks).
The first part of the presentation will describe the motivation underlying the development of SimpleITK, its development process and its current state. The second part of the presentation will be a live demonstration illustrating the capabilities of SimpleITK as a tool for reproducible research.
Dr. Ziv Yaniv is a senior computer scientist with the Office of High Performance Computing and Communications at the National Library of Medicine, and at TAJ Technologies Inc. He obtained his Ph.D. in computer science from The Hebrew University of Jerusalem, Jerusalem, Israel. Previously he was an assistant professor in the Department of Radiology, Georgetown University, and a principal investigator at Children’s National Hospital in Washington, DC. He was chair of SPIE Medical Imaging: Image-Guided Procedures, Robotic Interventions, and Modeling (2013-2016) and program chair for the Information Processing in Computer Assisted Interventions (IPCAI) 2016 conference.
He believes in the curative power of open research, and has been actively involved in the development of several free open source toolkits, including the Image-Guided Surgery Toolkit (IGSTK), the Insight Segmentation and Registration Toolkit (ITK), and SimpleITK.
This talk will show that attitude Kalman filters can be simple in design while also being robust and accurate despite the highly nonlinear nature of attitude (i.e., orientation) estimation. Three different filters are discussed, all using quaternions and small-angle approximations of attitude errors: an Extended Kalman filter as well as an Unscented Kalman filter for a gyro-based situation, and an Extended Kalman filter for a gyro-less one. In addition to the three-axis attitude, all of the filters also estimate corrections to the angular velocity: random-walk-modeled biases in the gyro-based case, and first-order-Markov-modeled corrections in the gyro-less case, which involves angular velocity computed from mass properties and control data.
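The propagation step common to these quaternion filters can be sketched in a few lines. The fragment below is only an illustrative outline of the small-angle, bias-corrected quaternion update (quaternions stored as [x, y, z, w]); the rates, step size, and bias value are made-up examples, not values from the talk.

```python
import numpy as np

def quat_mult(q, p):
    """Hamilton product of quaternions stored as [x, y, z, w]."""
    x1, y1, z1, w1 = q
    x2, y2, z2, w2 = p
    return np.array([
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
    ])

def propagate(q, bias, omega_meas, dt):
    """One attitude step using the bias-corrected gyro rate."""
    omega = omega_meas - bias            # remove the estimated gyro bias
    dtheta = omega * dt                  # small rotation vector over dt
    dq = np.append(0.5 * dtheta, 1.0)    # first-order (small-angle) quaternion
    q_new = quat_mult(q, dq)
    return q_new / np.linalg.norm(q_new) # renormalize to a unit quaternion

# Example: rotate about the body z-axis at 0.1 rad/s for 10 s, zero bias.
q = np.array([0.0, 0.0, 0.0, 1.0])       # identity attitude
bias = np.zeros(3)
for _ in range(1000):
    q = propagate(q, bias, np.array([0.0, 0.0, 0.1]), 0.01)
# q now represents ~1 rad about z: q[2] ≈ sin(0.5), q[3] ≈ cos(0.5)
```

In a full multiplicative EKF, the covariance is propagated for the three-component attitude-error vector plus the bias states, and measurement updates (e.g., from a magnetometer) correct that small-angle error, which is then folded back into the quaternion exactly as above.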
The filters are evaluated using extensive real and simulated data from low-Earth-orbiting NASA satellites such as the Tropical Rainfall Measurement Mission; the Solar, Anomalous, and Magnetospheric Particle Explorer; the Earth Radiation Budget Satellite; the Wide Field Infrared Explorer; and the Fast Auroral Snapshot Explorer. The evaluations predominantly involve stressing “magnetometer-only” scenarios, i.e., using only a three-axis magnetometer to sense the attitude. Comparisons are made with attitude and rate knowledge obtained using coarse sensors and single-frame algorithms, and also with results from an Unscented Kalman filter with a more complicated attitude parameterization.
Dr. Murty Challa received a B.Sc. in physics from Andhra University, Visakhapatnam, India, and a Ph.D. in physics from the University of Georgia, Athens, Georgia. His professional interests and activities include: estimation and data-fusion algorithms such as Kalman filters, batch estimators, and simultaneous localization and mapping; track correlation/association; guidance, navigation, and control for spacecraft and unmanned vehicles; missile defense; quantum computing; statistical mechanics; computational physics; and solid-state physics/materials science. He is currently a member of the Senior Professional Staff of the Johns Hopkins Applied Physics Laboratory (JHU/APL), Maryland, USA. Prior to JHU/APL, he was senior staff at the Institute for Defense Analyses, Alexandria, VA, and at Computer Sciences Corporation supporting NASA Goddard Space Flight Center, Greenbelt, MD. Dr. Challa’s academic positions include post-doctoral appointments in physics at Michigan State University and Virginia Commonwealth University, and an adjunct position in physics at George Washington University. He has also served as a consultant to Iridium Satellite, LLC.
This talk and demonstration will give an overview of the open-source Robot Operating System (ROS) software ecosystem for robot systems development. ROS is a modular open-source software system whose core is a publish-subscribe middleware system for C++ and Python under Linux that supports message passing, recording and playback of messages, distributed parameters, and extensive introspection tools. Other open-source publish-subscribe systems well known to robotics developers include Lightweight Communications and Marshalling (LCM) and the Mission Oriented Operating Suite (MOOS). In addition to message passing, the ROS ecosystem offers an extensive set of tools and software packages that employ standard message definitions for robotics, a robot geometry library and description language, device interface libraries for numerous COTS devices, localization and navigation packages, three-dimensional visualization with Rviz, and physics-based robot simulation with Gazebo. ROS is now the most widely used software system for robotics research. Also see: http://www.ros.org
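The publish-subscribe pattern at the core of ROS can be illustrated with a toy broker in plain Python. ROS itself provides this machinery through `rospy`/`roscpp` over a network; the sketch below is only a self-contained illustration of the pattern, with topic names that mimic ROS conventions.

```python
from collections import defaultdict

class TinyBroker:
    """Minimal topic-based publish-subscribe, in the spirit of ROS topics."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        """Register a callback to be invoked for each message on a topic."""
        self._subscribers[topic].append(callback)

    def publish(self, topic, message):
        """Deliver a message to every subscriber of the topic."""
        for callback in self._subscribers[topic]:
            callback(message)

broker = TinyBroker()
received = []
broker.subscribe("/odom", received.append)      # cf. rospy.Subscriber(...)
broker.publish("/odom", {"x": 1.0, "y": 2.0})   # cf. rospy.Publisher.publish(...)
print(received)  # [{'x': 1.0, 'y': 2.0}]
```

The key design property, in ROS as in this sketch, is that publishers and subscribers never reference each other directly; they are coupled only by the topic name and the message definition, which is what makes ROS nodes independently replaceable and introspectable.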
If you are unfamiliar with ROS, you are encouraged to bring your notebook computer to this seminar so that you can browse, in real time, the web pages mentioned in this talk.
Louis L. Whitcomb is a Professor in the Department of Mechanical Engineering, with a secondary appointment in Computer Science, at the Johns Hopkins University’s Whiting School of Engineering. His research focuses on the navigation, dynamics, and control of robot systems, with applications to robotics in extreme environments including space and underwater robots. He is the former (founding) Director of the JHU Laboratory for Computational Sensing and Robotics. He received teaching awards at Johns Hopkins in 2001, 2002, 2004, and 2011, and was awarded the NSF CAREER Award and the ONR Young Investigator Award. He is a Fellow of the IEEE.