Unmanned aerial-aquatic vehicles (UAAVs) have the potential to dramatically improve remote access to underwater environments. In particular, fixed-wing UAAVs offer a promising means of enabling efficient locomotion in both aerial and aquatic domains through the use of a lifting surface. In this talk, I will present our approach for enabling multi-domain mobility with a small fixed-wing UAAV consisting of a single propeller and a delta-wing planform. To this end, I will describe how our approach, which relies almost entirely on commercial off-the-shelf (COTS) components, uses feedback control and optimal trajectory design to solve the water-exit problem. I will demonstrate, through both simulation and hardware experiments, that our approach is feasible and that it has the potential to offer a robust, low-cost solution for enabling mobility in and across the air and water domains.
Dr. Joseph Moore is a member of the Senior Professional Staff at the Johns Hopkins University Applied Physics Lab (JHU/APL). In 2014, he received his Ph.D. in Mechanical Engineering from the Massachusetts Institute of Technology, where he was a Graduate Research Assistant in the Robot Locomotion Group and developed control algorithms for robust post-stall perching with a fixed-wing unmanned aerial vehicle (UAV). He also holds a B.S. in both Mechanical and Electrical Engineering from Rensselaer Polytechnic Institute. While at JHU/APL, Dr. Moore has continued to develop control algorithms for enabling agile flight with fixed-wing UAVs. He has also worked on developing algorithms for control and motion planning of mobile manipulators and heterogeneous multi-robot teams. His current work focuses on extreme short-field landings with fixed-wing UAVs, unmanned aerial-aquatic vehicles (UAAVs), and the development of algorithms for nonlinear model predictive control.
This seminar presents essential interviewing skills for engineers seeking academic and industrial positions, including interviewing for jobs in industry and academia and for graduate school admission. Interviewing well is a skill that takes preparation and practice. This seminar is part of the occasional LCSR seminar series on professional development.
Louis L. Whitcomb is Professor and former Chair (2013-2017) of the Department of Mechanical Engineering, with a secondary appointment in Computer Science, at the Johns Hopkins University’s Whiting School of Engineering. He was the founding Director (2007-2013) of the JHU Laboratory for Computational Sensing and Robotics, where he is presently interim director of the Robotics MSE Program. He completed a B.S. in Mechanical Engineering in 1984 and a Ph.D. in Electrical Engineering in 1992 at Yale University. From 1984 to 1986 he was an R&D engineer with the GMFanuc Robotics Corporation in Detroit, Michigan. He joined the Department of Mechanical Engineering at the Johns Hopkins University in 1995, after postdoctoral fellowships at the University of Tokyo and the Woods Hole Oceanographic Institution. His research focuses on the navigation, dynamics, and control of robot systems, with applications to robotics in extreme environments including space and underwater robots. Whitcomb is a co-principal investigator of the Nereus and Nereid Under-Ice projects. He received teaching awards at Johns Hopkins in 2001, 2002, 2004, and 2011, and was awarded an NSF CAREER Award and an ONR Young Investigator Award. He is a Fellow of the IEEE and an Adjunct Scientist in the Department of Applied Ocean Physics and Engineering at the Woods Hole Oceanographic Institution.
The ocean presents a number of unique challenges to successful exploration and data collection. Robotic systems and techniques provide a means to overcome the deep ocean’s unique challenges. The autonomous underwater vehicle Sentry operates alongside the tethered ROV Jason and the manned submersible Alvin to provide the National Science Foundation’s deep-water research capability. Sentry has completed over 450 dives in support of science operations in over 8 years of operations. This talk will review the unique operating challenges the ocean presents, describe the AUV Sentry and the data it collects, and introduce some future directions in underwater robotics research.
Ian Vaughn is a Research Engineer at the Woods Hole Oceanographic Institution in Woods Hole, MA. He conducts at-sea operations and on-shore engineering work with the AUV Sentry with an emphasis on data processing and software development. Previously, he completed a PhD in Ocean Engineering at the University of Rhode Island using the Ocean Exploration Trust’s Hercules ROV. Ian has completed research cruises with ocean robots all over the world.
Ian Vaughn, PhD.
Deep Submergence Laboratory
Department of Applied Ocean Physics and Engineering
Woods Hole Oceanographic Institution
Woods Hole, MA, USA
Implementing frequency response using grid-connected inverters is one of the popular alternatives for mitigating the dynamic degradation experienced in low-inertia power systems. However, such a solution faces several challenges, as inverters do not intrinsically possess the natural response to power fluctuations that synchronous generators have. Thus, to synthetically generate “virtual” inertia, inverters need to take frequency measurements, which are usually noisy, and subsequently change their output power, which is therefore delayed. As a result, it is not clear a priori whether virtual inertia will indeed mitigate the degradation, or whether some alternative control strategy will be necessary. In this talk, we present a comprehensive analysis and design framework that provides the tools required to answer this question. First, we develop novel stability analysis tools for power systems, which allow for the decentralized design of inverter-based controllers. The method requires that each inverter satisfy a standard H-infinity design requirement that depends on the dynamics of the components and inverters at each bus and the aggregate susceptance of the transmission lines connected to it. It is robust to network and delay uncertainty, and when no network information is available it reduces to the standard passivity condition for stability. Second, by selecting relevant performance outputs and signal norms, we define system-wide performance metrics that explicitly quantify the effect of frequency measurement noise and power disturbances on overall system performance. Using a novel modal decomposition, we derive closed-form expressions for system performance that explicitly capture the impact of network topology, generator and inverter control parameters, and machine rating heterogeneity.
Finally, we leverage this framework to design a new dynamic droop control (iDroop) mechanism for grid-connected inverters that exploits classical lead/lag compensation to outperform standard droop control and virtual inertia alternatives in both joint noise and disturbance mitigation and delay robustness.
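As a back-of-the-envelope illustration of the distinction the abstract draws (a toy single-bus, first-order model with hypothetical names and parameters, not the speaker’s actual framework): added inertia, virtual or physical, shapes the transient rate of change of frequency, while the droop gain sets the steady-state frequency deviation.

```python
def rocof_and_ss(M=5.0, M_v=0.0, D=1.0, R_d=0.05, P_step=-0.5):
    """Toy single-bus swing model: (M + M_v) dw/dt = -(D + 1/R_d) w + P_step.

    -w / R_d is the inverter droop response and M_v is the synthetic
    ("virtual") inertia contributed by the inverter. Returns the initial
    rate of change of frequency (RoCoF) after a step disturbance P_step,
    and the steady-state frequency deviation.
    """
    rocof = P_step / (M + M_v)       # transient: total inertia limits RoCoF
    w_ss = P_step / (D + 1.0 / R_d)  # steady state: set by damping and droop gain
    return rocof, w_ss

# Doubling the effective inertia halves the initial RoCoF but leaves the
# steady-state deviation, which is fixed by the droop gain, unchanged.
print(rocof_and_ss(M_v=0.0))
print(rocof_and_ss(M_v=5.0))
```

In this simplified picture the two mechanisms address different aspects of frequency response, which is why the joint noise, disturbance, and delay analysis described above is needed to compare them fairly.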
Enrique Mallada is an assistant professor of electrical and computer engineering at Johns Hopkins University. Before joining Hopkins in 2016, he was a post-doctoral fellow at the Center for the Mathematics of Information at the California Institute of Technology from 2014 to 2016. He received his ingeniero en telecomunicaciones degree from Universidad ORT, Uruguay, in 2005 and his Ph.D. degree in electrical and computer engineering, with a minor in applied mathematics, from Cornell University in 2014. Dr. Mallada was awarded the ECE Director’s Ph.D. Thesis Research Award for his dissertation in 2014, Cornell University’s Jacobs Fellowship in 2011, and the Organization of American States scholarship from 2008 to 2010. His research interests lie in the areas of control, networked dynamics, and optimization, with applications to engineering networks such as power systems and the Internet.
Artificial intelligence has begun to impact healthcare in areas including electronic health records, medical images, and genomics. But one aspect of healthcare that has been largely left behind thus far is the physical environments in which healthcare delivery takes place: hospitals and assisted living facilities, among others. In this talk I will discuss my work on endowing hospitals with ambient intelligence, using computer vision-based human activity understanding in the hospital environment to assist clinicians with complex care. I will first present an implementation of an AI-Assisted Hospital where we have equipped units at two partner hospitals with visual sensors. I will then discuss my work on human activity understanding, a core problem in computer vision. I will present deep learning methods for dense and detailed recognition of activities, and efficient action detection, important requirements for ambient intelligence. I will discuss these in the context of two clinical applications, hand hygiene compliance and automated documentation of intensive care unit activities. Finally, I will present work and future directions for integrating this new source of healthcare data into the broader clinical data ecosystem, towards full realization of an AI-Assisted Hospital.
Serena Yeung is a PhD candidate at Stanford University in the Artificial Intelligence Lab, advised by Fei-Fei Li and Arnold Milstein. Her research focuses on deep learning and computer vision algorithms for video understanding and human activity recognition. More broadly, she is passionate about using these algorithms to equip healthcare spaces with ambient intelligence, in particular an AI-Assisted Hospital. Serena is the lead graduate student in the Stanford Partnership in AI-Assisted Care (PAC), a collaboration between the Stanford School of Engineering and School of Medicine. She interned at Facebook AI Research in 2016, and Google Cloud AI in 2017. She was also co-instructor for Stanford’s CS231N course on Convolutional Neural Networks for Visual Recognition in 2017.
Master’s & PhD Robotics students: come prepare for Robotics Industry Day’s employer networking event. Get elevator pitch tips and practice networking with peers. Pizza will be served. Please make sure to register for the event.
The Laboratory for Computational Sensing and Robotics will highlight its elite robotics students and showcase cutting-edge research projects in areas that include Medical Robotics, Extreme Environments Robotics, Human-Machine Systems for Manufacturing, BioRobotics and more. JHU Robotics Industry Day will take place from 8 a.m. to 3 p.m. in Hackerman Hall on the Homewood Campus at Johns Hopkins University.
Robotics Industry Day will provide top companies and organizations in the private and public sectors with access to the LCSR’s forward-thinking, solution-driven students. The event will also serve as an informal opportunity to explore university-industry partnerships.
You will experience dynamic presentations and discussions, observe live demonstrations, and participate in speed networking sessions that afford you the opportunity to meet Johns Hopkins’ most talented robotics students before they graduate.
|8:00||Registration and Continental Breakfast||Glass Pavilion, Levering Hall|
|8:30||Welcome: Larry Nagahara, Dean|
|8:35||Introduction to LCSR: Russell H. Taylor, Director|
|8:55||Louis Whitcomb, LCSR Education|
|9:05||Gregory Hager, Director of MECH|
|9:20||Brian Roberts, NASA|
|9:35||Bruce Lichorowic, Galen Robotics|
|9:50||Ashley Llorens, JHU Applied Physics Lab|
|10:30||Christy Wyskiel, Johns Hopkins Technology Ventures|
|10:45||Simon DiMaio, Intuitive Surgical|
|11:00||Benjamin Gibbs, READY Robotics|
|11:15||Clif Burdette, Acoustic MedSystems|
|11:30||Chien-Ming Huang, New LCSR Faculty|
|11:50||Closing: Russell H. Taylor, Director|
|12:00||Lunch||Glass Pavilion, Levering Hall|
|12:30-2:00||Poster + Demo Session||Hackerman Hall, Robotorium|
|2:00-3:00||Student and Industry Reception||Hackerman Hall, 320|
|3:30-5:30||SAB Meeting||Malone 107|
Please contact Ashley Moriarty if you have any questions.
Speaker: Prof. Samuel Kadoury, Ph.D., P.Eng., Polytechnique Montreal, Canada Research Chair in Medical Imaging and Assisted Interventions
Spinal deformities such as adolescent idiopathic scoliosis are complex 3D deformations of the musculoskeletal trunk. For the past two decades, 3D spine reconstructions obtained from diagnostic scans have helped orthopedists assess the severity of deformations and establish treatment strategies. However, these procedures required significant manual intervention and were not suited for routine clinical practice. This presentation will describe computational methods recently developed in our lab, based on machine learning and statistical analysis, to automatically reconstruct the personalized spine geometry from X-rays, classify various deformation patterns in 3D, predict disease progression, and perform intra-operative guidance during surgical procedures, with the use of biomechanical simulation models and multi-modal registration. Experiments performed at the CHU Sainte-Justine Hospital on adolescent patients demonstrate the potential clinical benefit of capturing statistical variations in the spine geometry to help diagnose and treat this disease.
Samuel Kadoury is an associate professor in the Computer and Software Engineering Department at Polytechnique Montreal and a researcher at the University of Montreal Research Hospital Center. He is the director of the Medical Image Computing and Analysis Lab at Polytechnique Montreal and holds the Canada Research Chair in Medical Imaging and Assisted Interventions. He obtained his M.Sc. from McGill University and his Ph.D. in biomedical engineering from the University of Montreal, with his thesis on orthopedic imaging. He completed a post-doctoral fellowship at Ecole Centrale de Paris and was a clinical research scientist for Philips Research at the National Institutes of Health, developing image-guided systems for liver and prostate cancer. Dr. Kadoury has published and presented his work in a number of conferences and journals such as Radiology, ISMRM, IEEE TMI, Medical Image Analysis, MICCAI, and IPCAI, and served as Area Chair for conferences such as MICCAI and CVPR. He has also been granted five international and US patents in the field of image-guided interventions and is co-recipient of the NIH merit award and the RSNA Cum Laude Award for his work in artificial intelligence for liver cancer detection.
This presentation will discuss how symmetric and asymmetric motions can be used for training and rehabilitation. Many daily tasks require that a person use both hands simultaneously, such as moving a large book or opening the lid on a jar. Such bimanual tasks are difficult for people who have had a stroke, but the tight neural coupling across the body has been hypothesized to allow individuals to self-rehabilitate by physically coupling their hands. The interaction discussed here separates the task and guidance forces: one hand is guided through the motion while the user actively recreates that motion with the other hand, which receives the task-related forces. This method builds on the ability of humans to easily move their hands through similar paths, such as drawing two circles, compared with the difficulty of simultaneously drawing a square with one hand and a circle with the other. Experiments were performed to characterize the reference frames, interaction stiffnesses, and trajectories that humans can recreate.
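The mirror-symmetric guidance described above can be sketched in a few lines (a toy illustration with hypothetical names, not the speaker’s implementation): reflecting one hand’s trajectory about the body midline yields the reference path for the other hand.

```python
import numpy as np

def mirrored_path(path, midline_x=0.0):
    """Reflect an (N, 2) array of (x, y) hand positions about a vertical
    midline, giving the mirror-symmetric path for the opposite hand."""
    mirrored = path.copy()
    mirrored[:, 0] = 2.0 * midline_x - mirrored[:, 0]
    return mirrored

# One hand draws a circle to the right of the midline ...
t = np.linspace(0.0, 2.0 * np.pi, 100)
left = np.stack([0.15 + 0.05 * np.cos(t), 0.30 + 0.05 * np.sin(t)], axis=1)
# ... and the guided hand traces its mirror image on the left.
right = mirrored_path(left)
```

The choice of reference frame (mirror-symmetric versus a pure translation of the path) is exactly one of the factors the experiments above were designed to characterize.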
The second half of this presentation will focus on gait rehabilitation for individuals with asymmetric impairments. Asymmetric gait arises from many impairments, such as leg-length discrepancy, prosthetic use, and stroke. Using a model of gait based on kinematic synchronization, it is shown that some types of symmetry can be restored in a person with an asymmetric impairment, but not simultaneously in both motions and forces. To balance the limitation that some asymmetry will always remain, studies of gait perception are used to establish bounds on what appears symmetric even when it is not perfectly so. One rehabilitation method, the Gait Enhancing Mobile Shoe (GEMS), uses an exaggerated asymmetric motion to generate an after-effect with an improved walking pattern. The GEMS uses a Kinetic Shape wheel to passively redirect the user’s natural downward forces during walking into a backward motion that generates a corrective after-effect. The Kinetic Shape has also been applied to the tip of a walking crutch to assist locomotion. At the conclusion of this talk, you should have a better understanding of the symmetries and asymmetries that exist in your daily motions.
Dr. Kyle Reed is an Associate Professor in the Department of Mechanical Engineering at the University of South Florida. He was a Post-Doctoral Scholar in the Laboratory for Computational Sensing and Robotics at Johns Hopkins University from 2007 to 2009. He received his PhD from Northwestern University in 2007 and his B.S. from the University of Tennessee in 2001, both in Mechanical Engineering. He has received funding from NSF, NIH, the Florida High Tech Corridor, and industry. His research interests are in medical/rehabilitation robotics and human-centered robotics, which include designing intuitive and cooperative devices that interact with humans, as well as engineering education. More information about Dr. Reed and his research can be found at http://reedlab.eng.usf.edu
Steered particles offer a method for targeted therapy, interventions, and drug delivery in regions inaccessible to large robots. Magnetic actuation has the benefits of requiring no tethers, operating from a distance, and, in some cases, allowing imaging for feedback (e.g., MRI). However, for MRI and setups where the distances between external magnets are much larger than the robot workspace, the magnetic field is approximately uniform across the workspace. Moreover, the system is severely under-actuated when there are more particles than control inputs. In my talk I’ll share tricks we use to overcome this underactuation for coverage, manipulation, self-assembly, and steering large numbers of particles. You can help: visit http://swarmcontrol.net and play some games.
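To give a flavor of how underactuation can be overcome (a toy 1-D sketch under assumed names and dynamics, not the algorithms presented in the talk): two particles that receive the identical control input can still be driven to distinct goals if a boundary wall blocks one of them, breaking the symmetry of the shared input.

```python
def shared_input_positions(x1, x2, g1, g2):
    """Drive two particles that receive the *same* displacement input to
    distinct goals, using a wall at x = 0 (positions clip to >= 0).

    Assumes x1 <= x2, g1 <= g2, and initial spacing x2 - x1 >= g2 - g1,
    since pushing against the wall can only reduce relative spacing.
    """
    d = g2 - g1
    # Step 1: push both left until particle 2 sits at x = d;
    # particle 1 is pinned against the wall at 0.
    u = -(x2 - d)
    x1 = max(0.0, x1 + u)
    x2 = max(0.0, x2 + u)
    # Step 2: translate both right so particle 1 lands on g1
    # (particle 2 then lands on g1 + d = g2).
    u = g1 - x1
    return x1 + u, x2 + u

print(shared_input_positions(1.0, 5.0, 2.0, 4.0))  # -> (2.0, 4.0)
```

With a single shared input and one wall, relative spacing can only shrink, which hints at why steering large numbers of particles requires the richer bag of tricks the talk covers.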
Aaron Becker’s passion is robotics and control. As an Assistant Professor in Electrical and Computer Engineering at the University of Houston, he is currently building a robotics lab. NSF selected Aaron for a CAREER award in 2016 to study massive manipulation with swarms: using a shared input to drive large populations of robots to arbitrary goal states. Becker won the Best Paper award at IROS 2014. As a Research Fellow in a joint appointment with Boston Children’s Hospital and Harvard Medical School, he implemented robots powered and controlled by the magnetic field of an MRI. As a Postdoctoral Research Associate at Rice University, Aaron investigated control of distributed systems and nanorobotics with experts in those fields. His online game http://swarmcontrol.net seeks to understand the best ways for a human to control a swarm of robots, crowdsourcing experiments to a community of online players. Aaron earned his PhD in Electrical & Computer Engineering at the University of Illinois at Urbana-Champaign.