Abstract: Networked control systems arise in a wide range of applications. These systems typically have a global control objective, while the control is distributed and relies only on local feedback from a neighborhood around each site. In this talk, I will address the question of what this implies in terms of limitations to the overall performance of such systems, in particular as the networks grow large. We consider networked dynamical systems with double integrator dynamics, controlled with linear consensus-like algorithms. Such systems can be used to model, for example, vehicular formation dynamics and synchronization in electric power networks. We assume that the systems are subject to distributed disturbances and study performance in terms of H2 norm metrics that capture the notion of network coherence. In the context of power networks, we also show how such metrics can be used to quantify losses due to non-equilibrium power flows. With localized, static feedback control, there are known performance limitations that cause these metrics to scale unfavorably with the network size. We discuss the underlying reasons for these unfavorable scalings and propose distributed dynamic feedback controllers, which under certain conditions alleviate the limitations of static feedback.
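To make the H2 coherence metric concrete, here is a minimal numerical sketch (my own illustration, not taken from the talk): for a ring of N double integrators under static consensus feedback with hypothetical position and velocity gains `beta` and `gamma`, the graph Laplacian diagonalizes the closed loop, and each nonzero eigenvalue contributes an independent second-order subsystem whose H2 contribution follows from an observability Gramian.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def ring_laplacian(n):
    """Graph Laplacian of an n-node ring (nearest-neighbor coupling)."""
    L = 2.0 * np.eye(n)
    for i in range(n):
        L[i, (i + 1) % n] -= 1.0
        L[i, (i - 1) % n] -= 1.0
    return L

def h2_squared(L, beta=1.0, gamma=1.0):
    """Squared H2 norm from white disturbances to position deviations for
    xddot = -beta*L*x - gamma*L*xdot + w, summed over the decoupled modes."""
    total = 0.0
    for lam in np.linalg.eigvalsh(L):
        if lam < 1e-9:
            continue  # the network-average mode is excluded from the deviation output
        A = np.array([[0.0, 1.0], [-beta * lam, -gamma * lam]])
        B = np.array([[0.0], [1.0]])
        C = np.array([[1.0, 0.0]])
        # observability Gramian P: A^T P + P A + C^T C = 0; mode's H2^2 = B^T P B
        P = solve_continuous_lyapunov(A.T, -C.T @ C)
        total += float(B.T @ P @ B)
    return total

for n in (10, 20, 40, 80):
    print(n, h2_squared(ring_laplacian(n)))  # grows quickly with network size
```

Each mode's contribution equals 1/(2·beta·gamma·lambda²) in closed form; because the smallest ring-Laplacian eigenvalues shrink like 1/N², the total grows rapidly with N, which is the unfavorable scaling the talk addresses.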
Bio: Emma Tegling (née Sjödin) received her Ph.D. degree in Electrical Engineering in January 2019, and her B.Sc. and M.Sc. degrees, both in Engineering Physics, in 2011 and 2013. All degrees are from KTH Royal Institute of Technology. At present, she is a postdoctoral researcher with the Division of Decision and Control Systems at KTH. Emma has also spent time as a visiting researcher at California Institute of Technology in 2011, the Johns Hopkins University in 2013 and the University of California at Santa Barbara in 2015. Prior to her doctoral work, she was a strategy consultant with Ericsson. Emma’s research interests are within analysis and control of large-scale networked systems, with a particular focus on highly distributed power grids.
Ultrasound is ubiquitous in clinical practice because it is safe, portable, inexpensive, and real-time. However, ultrasound image quality is much lower than that of MRI or X-ray CT because ultrasound contrast is typically low and ultrasonic images are rife with speckle. We have been developing techniques to improve ultrasonic imaging by providing new sources of image contrast and improving spatial resolution. These techniques include the development of quantitative ultrasound, ultrasound tomography with limited-angle backscatter, novel super-resolution beamforming techniques, and coding techniques for effectively improving transducer bandwidth. In addition to imaging, we have developed communication protocols using ultrasound as the communication channel and have demonstrated data rates capable of streaming high-definition video. In this talk we will discuss different applications of these ultrasonic imaging and communication techniques. Specifically, we will show how quantitative ultrasound approaches have been successful at classifying tissue state, monitoring focused ultrasound therapy, detecting early response of breast cancer to neoadjuvant chemotherapy, and automatically detecting nerves in the imaging field. We will demonstrate how our super-resolution technique can improve image quality for specific imaging tasks such as detecting bright specular scatterers. Finally, we will discuss the ability of ultrasound to act as the communication channel for implanted medical devices.
Professor Oelze was born in Hamilton, New Zealand in 1971. He earned a B.S. in Physics and Mathematics (1994, Harding University) and a Ph.D. in Physics (2000, University of Mississippi). From 2000 to 2002, Dr. Oelze was a postdoc in the Department of Electrical and Computer Engineering (ECE) in the Bioacoustics Research Laboratory at the University of Illinois at Urbana-Champaign (UIUC). From 2002 to 2004, he was an NIH fellow conducting research in quantitative ultrasound techniques for biomedical ultrasound applications in cancer detection. Dr. Oelze joined the ECE faculty at UIUC in 2005, where he continues to serve as a Professor and Associate Head for Graduate Affairs. He is also a Professor in the Carle Illinois College of Medicine. His research interests include biomedical ultrasound, quantitative ultrasound imaging for improving cancer diagnostics and monitoring therapy response, ultrasound bioeffects, ultrasound tomography techniques, ultrasound-based therapy, beamforming, and applications of coded excitation to ultrasonic imaging. Dr. Oelze is a fellow of the AIUM, a senior member of the IEEE, and a member of the ASA. He is a member of the Technical Program Committee of the IEEE Ultrasonics Symposium. He currently serves as an associate editor-in-chief of IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control, associate editor of Ultrasonic Imaging, and associate editor of IEEE Transactions on Biomedical Engineering.
The Laboratory for Computational Sensing and Robotics will highlight its elite robotics students and showcase cutting-edge research projects in areas that include Medical Robotics, Extreme Environments Robotics, Human-Machine Systems for Manufacturing, BioRobotics and more. JHU Robotics Industry Day will take place from 8 a.m. to 4 p.m. in Levering Hall on the Homewood Campus at Johns Hopkins University.
Robotics Industry Day will provide top companies and organizations in the private and public sectors with access to the LCSR’s forward-thinking, solution-driven students. The event will also serve as an informal opportunity to explore university-industry partnerships.
You will experience dynamic presentations and discussions, observe live demonstrations, and participate in speed networking sessions that afford you the opportunity to meet Johns Hopkins’ most talented robotics students before they graduate.
|9:00 am||Registration Open and Breakfast||Glass Pavilion|
|9:30 am||Welcome WSE: Larry Nagahara, Associate Dean for Research||Glass Pavilion|
|9:35 am||Introduction to LCSR: Russell H. Taylor, Director||Glass Pavilion|
|9:55 am||LCSR Education: Louis Whitcomb, Deputy Director||Glass Pavilion|
|10:05 am||Student Research Talk – Farshid Alambeigi||Glass Pavilion|
|10:15 am||Student Research Talk – Will Pryor||Glass Pavilion|
|10:25 am||Student Research Talk – Ayushi Sinha||Glass Pavilion|
|10:35 am||Student Research Talk – Zak Harris||Glass Pavilion|
|10:45 am||Coffee Break||Glass Pavilion|
|11:05 am||Industry Talk – Ameet Jain from Philips||Glass Pavilion|
|11:20 am||Industry Talk – Amy Blank from Barrett Technology||Glass Pavilion|
|11:40 am||Mathias Unberath, New LCSR Faculty||Glass Pavilion|
|11:59 am||Closing: Russell H. Taylor, Director||Glass Pavilion|
|12:00 pm||Lunch: Resume Roundtables||Glass Pavilion|
|Location: Hackerman Hall|
|1:00-3:00 pm||Poster and Demo Session||Hackerman Hall|
|1:45-2:45 pm||Guided Krieger Tours||Meet next to 134|
|3:00-4:00 pm||Student and Industry Reception||Hackerman South Lobby|
|5:30-7:30 pm||Alumni Dinner||Gertrude’s Terrace|
Please contact Ashley Moriarty if you have any questions.
Aero- and hydrodynamics have helped us understand how animals fly and swim, and have guided the development of aerial and aquatic vehicles and robots that move through air and water rapidly, agilely, and efficiently. By contrast, we know surprisingly little about how terrestrial animals move so well in natural terrain, and even the best robots still struggle in complex terrain such as earthquake rubble, cluttered buildings, forest floor, and Martian rocks. Such mobility is required for important applications like search and rescue, structural examination, environmental monitoring, and planetary exploration.
By integrating biology, engineering, and physics studies and developing new experimental tools and theoretical models, our lab is creating the new field of terradynamics to describe complex locomotor-terrain interaction (analogous to fluid-structure interaction), and using terradynamics to better understand animal locomotion and advance robot locomotion in complex terrain.
In this talk, I will give an overview of research in my lab at Hopkins over the last three years to create terradynamics for locomotion in complex 3-D terrain. Particularly, I will highlight: (1) How we create “locomotion energy landscapes” to understand how insects and legged robots transition between different forms of movement to traverse highly cluttered terrain. (2) How limbless snakes traverse large steps and inspire a snake robot that outperforms previous ones. I will also briefly survey other recent and ongoing projects in the lab.
Chen Li is an Assistant Professor in the Department of Mechanical Engineering at Johns Hopkins University, and affiliated with JHU’s Laboratory for Computational Sensing and Robotics (LCSR). Dr. Li received his B.S. degree from Peking University in 2005 and Ph.D. degree from Georgia Institute of Technology in 2011, both in physics. From 2012 to 2015, he performed postdoctoral research in integrative biology and robotics at University of California, Berkeley.
Dr. Li is a recipient of a Miller Research Fellowship from the University of California, Berkeley in 2012, a Burroughs Wellcome Fund Career Award at the Scientific Interface in 2015, an Army Research Office Young Investigator Award in 2017, and a Beckman Young Investigator Award in 2018. He was selected as an alumnus of the Kavli Frontiers of Science, National Academy of Sciences, in 2019. His research achievements have been recognized by publication in prestigious journals including Science and PNAS, as well as selection for one Best Paper (Advanced Robotics 2017), two Highlight Papers (IROS 2016, Bioinspiration & Biomimetics 2015), and two Best Student Papers (Robotics: Science & Systems 2012, Society for Integrative & Comparative Biology 2009).
For more information, please visit https://li.me.jhu.edu.
This talk will focus on modeling and advanced control of robots that physically or cognitively interact with humans. This type of interaction is found in devices that assist and augment human capabilities, as well as in devices that provide motor rehabilitation therapy to impaired individuals. The first part of the talk will present research on myoelectric control interfaces for a variety of robotic mechanisms, including results of a novel method for robust myoelectric control of robots. This work supports a shift in myoelectric control schemes toward proportional simultaneous controls learned through the development of unique muscle synergies. The ability to enhance, retain, and generalize control, without needing to recalibrate or retrain the system, supports control schemes that promote synergy development, rather than user-specific decoders trained on a subset of existing synergies, for efficient myoelectric interfaces designed for long-term use. The second part of the talk will focus on a novel approach to robotic interventions for gait therapy that takes advantage of mechanisms of inter-limb coordination, using a novel robotic system, the Variable Stiffness Treadmill (VST), developed in the HORC Lab at ASU. The methods and results presented will lay the foundation for model-based rehabilitation strategies for impaired walkers. Finally, results on a novel control interface between humans and multi-agent systems will be presented, in which a human user controls a swarm of unmanned aerial vehicles (UAVs) by providing high-level commands to these agents. The proposed brain-swarm interface allows for advancements in high-level swarm information perception, augmenting the decision capabilities of manned-unmanned systems and promoting symbiosis between human and machine systems for comprehensive situational awareness.
Panagiotis (Panos) Artemiadis received the Diploma and the Ph.D. degree in mechanical engineering from the National Technical University of Athens, Athens, Greece, in 2003 and 2009, respectively. From 2007 to 2009, he was a Visiting Researcher at Brown University and the Toyota Technological Institute at Chicago. From 2009 to 2011, he was a Postdoctoral Research Associate in the Mechanical Engineering Department at the Massachusetts Institute of Technology (MIT). Since 2011, he has been with Arizona State University, where he is currently an Associate Professor in the Mechanical and Aerospace Engineering Department and the Director of the Human-Oriented Robotics and Control Laboratory (http://horc.engineering.asu.edu/). He is also the Graduate Program Chair for the new M.S. degree in Robotics and Autonomous Systems at ASU. His research interests include rehabilitation robotics, control systems, system identification, brain–machine interfaces, and human–swarm interaction. He serves as Editor-in-Chief and Associate Editor for several scientific journals and on scientific committees; three of his papers have been nominated for or received best paper awards, and he has received many awards for his research and teaching (more info at http://www.public.asu.edu/~partemia/). He is the recipient of the 2014 DARPA Young Faculty Award and the 2014 AFOSR Young Investigator Award, as well as the 2017 ASU Fulton Exemplar Faculty Award. He is the (co-)author of over 80 papers in scientific journals and peer-reviewed conferences, as well as 9 patents (3 issued, 6 pending).
Deliberate navigation in previously unseen environments and detection of novel object instances are among the key functionalities of intelligent agents engaged in fetch-and-delivery tasks. While data-driven deep learning approaches have fueled rapid progress in object category recognition and semantic segmentation by exploiting large amounts of labelled data, extending this learning paradigm to the robotic setting comes with challenges.
To overcome the need for large amounts of labelled data for training object instance detectors, we use active self-supervision provided by a robot traversing an environment. Knowledge of ego-motion enables the agent to effectively associate multiple object hypotheses, which serve as training data for learning novel object embeddings from unlabelled data. Object detectors trained in this manner achieve higher mAP than off-the-shelf detectors trained on the same limited data.
I will also describe an approach to semantic target-driven navigation, which entails finding a way through a complex environment to a target object. The proposed approach learns navigation policies on top of representations that capture spatial layout and semantic contextual cues. This choice of representation exploits models trained on large standard vision datasets and enables better generalization and the joint use of simulated environments and real images for effective training of navigation policies.
Jana Kosecka is a Professor in the Department of Computer Science, George Mason University. She obtained her Ph.D. in Computer Science from the University of Pennsylvania. Following her Ph.D., she was a postdoctoral fellow in the EECS Department at the University of California, Berkeley. She is a recipient of the David Marr Prize and the National Science Foundation CAREER Award. Jana is chair of the IEEE Technical Committee on Robot Perception, an Associate Editor of IEEE Robotics and Automation Letters and the International Journal of Computer Vision, and a former editor of IEEE Transactions on Pattern Analysis and Machine Intelligence. She has held visiting positions at Stanford University, Google, and Nokia Research. She is a co-author of the monograph An Invitation to 3-D Vision: From Images to Geometric Models. Her general research interests are in computer vision and robotics; in particular, she is interested in ‘seeing’ systems engaged in autonomous tasks, the acquisition of static and dynamic models of environments by means of visual sensing, and human-computer interaction.
12:00 pm Presentation: Shahriar Sefati
Title: FBG-Based Position Estimation of Highly Deformable Continuum Manipulators: Model-Dependent vs. Data-Driven Approaches
Abstract: Fiber Bragg Grating (FBG) sensing is a promising strategy for flexible medical instruments and continuum manipulators. Conventional shape sensing techniques using FBGs find the curvature at discrete FBG active areas and integrate curvature over the length of the continuum dexterous manipulator (CDM) to estimate tip position. However, due to the limited number of sensing locations and the many geometric assumptions involved, these methods are prone to large error propagation, especially when the CDM undergoes large deflections or interacts with obstacles. In this talk, I will give an overview of the complications in using conventional tip position estimation methods that depend on a sensor model, and propose a new data-driven method that overcomes these challenges. The method’s performance is evaluated on a CDM developed for orthopedic applications, and the results are compared to conventional model-dependent methods during large-deflection bending and interactions with obstacles.
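As a rough illustration of the conventional, model-dependent pipeline (a toy planar sketch with hypothetical function and variable names, not the method evaluated in the talk), the tip position can be estimated by interpolating the discrete FBG curvature readings along the arc length, integrating once to get the bending angle, and once more to get the tip coordinates:

```python
import numpy as np

def tip_from_curvature(s_fbg, kappa_fbg, length, n=200):
    """Estimate the planar tip position of a continuum manipulator by
    interpolating curvature measured at discrete FBG arc-length locations
    s_fbg and integrating along the backbone."""
    s = np.linspace(0.0, length, n)
    kappa = np.interp(s, s_fbg, kappa_fbg)  # assumes smoothly varying curvature
    ds = np.diff(s)
    # bending angle: theta(s) = integral of curvature (trapezoidal rule)
    theta = np.concatenate(([0.0], np.cumsum(0.5 * (kappa[1:] + kappa[:-1]) * ds)))
    # tip position: integrate the unit tangent (midpoint rule)
    theta_mid = 0.5 * (theta[1:] + theta[:-1])
    x = float(np.sum(np.cos(theta_mid) * ds))
    y = float(np.sum(np.sin(theta_mid) * ds))
    return x, y

# constant-curvature check: kappa = 1 over length pi/2 bends the tip to (1, 1)
print(tip_from_curvature([0.0, np.pi / 4, np.pi / 2], [1.0, 1.0, 1.0], np.pi / 2))
```

The sparse sensing locations and the planar, smooth-curvature assumptions baked into this kind of model are exactly the error sources the abstract describes, which the data-driven alternative is designed to avoid.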
Bio: Shahriar Sefati is a Ph.D. candidate in the Department of Mechanical Engineering at Johns Hopkins University, affiliated with Biomechanical and Image-Guided Surgical Systems (BIGSS), part of the Laboratory for Computational Sensing and Robotics (LCSR). He received his B.S. degree in Mechanical Engineering from Sharif University of Technology in 2014 and his M.S.E. degree in Computer Science from Johns Hopkins University in 2017. He was a robotics and controls engineering intern at Verb Surgical, Inc. in summer 2018. Shahriar’s research focuses on continuum manipulators and flexible robotics for less-invasive surgery.
12:30 pm Presentation: Iulian Iordachita
Title: Safe Robot-assisted Retinal Surgery
Abstract: Modern patient health care involves maintenance and restoration of health by medication or surgical intervention. This talk focuses on surgical procedures, like retinal surgery, where surgeons perform high-risk but necessary treatments while facing significant technical and human limitations in an extremely constrained environment. Inaccuracy in tool positioning and movement is among the important factors limiting performance in retinal surgery. The challenges are further exacerbated by the fact that in the majority of contact events, the forces encountered are below the tactile perception of the surgeon. Inability to detect surgically relevant forces leads to a lack of control over potentially injurious factors that result in complications. This situation is less than optimal and can benefit significantly from recent advances in robot assistance, sensor feedback, and human-machine interface design. Robotic assistance may be ideally suited to address common problems encountered in most (micro)manipulation tasks, including hand tremor, poor tool manipulation resolution, and accessibility, and to open up surgical strategies that are beyond human capability. Various force sensors have been developed for microsurgery and minimally invasive surgery. Optical fiber strain sensors, specifically fiber Bragg gratings (FBGs), are very sensitive, capable of detecting sub-microstrain changes, and are small, lightweight, biocompatible, sterilizable, multiplexable, and immune to electrostatic and electromagnetic noise. In retinal surgery, FBG-based force-sensing tools can provide the information needed to guide the surgeon through a maneuver, effectively reduce forces with improved precision, and potentially improve the safety and efficacy of the surgical procedure.
Optical fiber-based sensorized instruments, combined with robot-assisted (micro)surgery, could address these limitations by integrating novel technology that transcends human sensorimotor capabilities into robotic systems that provide the surgeon with both real-time, clinically relevant information and physical support, with the ultimate goal of improving clinical care and enabling novel therapies.
Modeling and control problems generally get harder as more degrees of freedom (DoF) are involved, suggesting that moving with many legs or grasping with many fingers should be difficult to describe. In this talk I will show how insights from geometric mechanics, a theory developed about 20-30 years ago by Marsden, Ostrowski, and Bloch, might turn that notion on its head. I will motivate the claim that when enough legs contact the ground, the complexity associated with momentum is gone, replaced by a problem of slipping contacts. In this regime, equations of motion are replaced by a “connection”, which is both simple to estimate in a data-driven form and easy to simulate by adopting some non-conventional friction models. The talk will include a brief introduction to geometric mechanics, and consist mostly of results showing that: (i) this class of models is more general than it may seem at first; (ii) these models can be used for very rapid hardware-in-the-loop gait optimization of both simple and complex robots; (iii) they motivate a simple motion model that fits experimental results remarkably well. If successful, this research agenda could improve motion planning speeds for multi-contact robotic systems by several orders of magnitude, and explain how simple animals can move so well with many limbs.
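To make the "connection" idea concrete, here is a minimal data-driven sketch (my own toy example with an invented connection matrix, not a model from the talk): in the many-contact, kinematic regime, the body velocity xi is a shape-dependent linear function of the shape velocity rdot, xi = A(r) rdot (sign conventions vary across the literature). Holding the shape region fixed, A reduces to a constant matrix that ordinary least squares recovers directly from motion data:

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical constant local connection: 3 body-velocity components
# (forward, lateral, yaw) driven by 2 shape velocities
A_true = np.array([[ 0.5, -0.2],
                   [ 0.1,  0.3],
                   [-0.4,  0.6]])

# synthetic "experiment": commanded shape velocities and the body
# velocities they produce, with a little measurement noise
rdot = rng.normal(size=(500, 2))
xi = rdot @ A_true.T + 0.01 * rng.normal(size=(500, 3))

# estimating the connection is an ordinary least-squares regression
A_hat, *_ = np.linalg.lstsq(rdot, xi, rcond=None)
A_hat = A_hat.T

print(np.round(A_hat, 3))  # close to A_true
```

This is what "simple to estimate in a data-driven form" amounts to in the simplest case: no momentum states, no equations of motion, just a regression from shape velocity to body velocity, which is why hardware-in-the-loop identification and gait optimization can be so fast.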
Shai Revzen is an Assistant Professor at the University of Michigan, Ann Arbor. His primary appointment is in the Department of Electrical Engineering and Computer Science in the College of Engineering. He holds a courtesy faculty appointment in the Department of Ecology and Evolutionary Biology and is an Assistant Director of the Michigan Robotics Institute. He received his Ph.D. in Integrative Biology from the University of California at Berkeley and an M.Sc. in Computer Science from the Hebrew University of Jerusalem. In addition to his academic work, Shai was Chief Architect, R&D, of the convergent systems division of Harmonic Lightwaves (HLIT) and a co-founder of Bio-Systems Analysis, a biomedical technology start-up. As principal investigator of the Biologically Inspired Robotics and Dynamical Systems (BIRDS) Lab, Shai sets the research agenda and approach of the lab: a focus on fundamental science, realizing its transformative influence on robotics and other technology. Work in the lab is split roughly equally among robotics, mathematics, and biology.