Calendar

Feb
10
Wed
LCSR Seminar: Shan Lin “Exploring Robust Real-time Instrument Segmentation for Endoscopic Sinus Surgery” @ https://wse.zoom.us/s/94623801186
Feb 10 @ 12:00 pm – 1:00 pm


Abstract:

Vision-based surgical instrument segmentation, which aims to detect instrument regions in surgical images, is often a critical component of computer- and robot-assisted surgical systems. While advanced algorithms, including deep CNN models, have achieved promising segmentation results, several limitations remain: (1) the robustness and generalization ability of existing algorithms are still insufficient for challenging surgical images, and (2) deep networks usually carry a high computation cost, which needs to be addressed for time-sensitive applications during surgery. In this talk, I will present two algorithms that address these challenges. First, I will introduce a lightweight CNN that achieves better segmentation performance with lower inference time on low-quality endoscopic sinus surgery videos than several advanced deep networks. I will then discuss a domain adaptation method that transfers knowledge learned from relevant labeled datasets to instrument segmentation on an unlabeled dataset.
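
The efficiency theme can be made concrete with a small sketch. Below is a minimal, hypothetical encoder-decoder that uses depthwise-separable convolutions, a standard trick for reducing inference cost in lightweight segmentation networks; it illustrates the general idea only and is not the architecture from the talk.

```python
# A minimal, hypothetical lightweight segmentation network (illustration
# only, not the architecture from the talk). Depthwise-separable
# convolutions cut multiply-adds versus full 3x3 convolutions, which is
# one common route to real-time inference on surgical video.
import torch
import torch.nn as nn

class SeparableConv(nn.Module):
    """Depthwise 3x3 followed by pointwise 1x1 convolution."""
    def __init__(self, c_in, c_out):
        super().__init__()
        self.depthwise = nn.Conv2d(c_in, c_in, 3, padding=1, groups=c_in)
        self.pointwise = nn.Conv2d(c_in, c_out, 1)
        self.bn = nn.BatchNorm2d(c_out)

    def forward(self, x):
        return torch.relu(self.bn(self.pointwise(self.depthwise(x))))

class LightSegNet(nn.Module):
    """Tiny encoder-decoder producing a per-pixel instrument logit."""
    def __init__(self):
        super().__init__()
        self.enc1 = SeparableConv(3, 16)
        self.enc2 = SeparableConv(16, 32)
        self.pool = nn.MaxPool2d(2)
        self.dec = SeparableConv(32, 16)
        self.up = nn.Upsample(scale_factor=2, mode="bilinear",
                              align_corners=False)
        self.head = nn.Conv2d(16, 1, 1)

    def forward(self, x):
        x = self.enc1(x)
        x = self.pool(self.enc2(x))
        x = self.up(self.dec(x))
        return self.head(x)  # threshold logits at 0 for a binary mask

frame = torch.randn(1, 3, 256, 256)  # stand-in for one endoscopic frame
mask_logits = LightSegNet()(frame)   # shape: (1, 1, 256, 256)
```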


Biography:

Shan Lin is a PhD candidate in the Electrical and Computer Engineering department at the University of Washington working with Prof. Blake Hannaford on medical robotics. Her research focuses on surgical instrument segmentation and skill assessment.


Feb
17
Wed
LCSR Seminar: James Bellingham “Ocean Observing in the Age of Robots” @ https://wse.zoom.us/s/94623801186
Feb 17 @ 12:00 pm – 1:00 pm


Abstract:

Progress in the ocean sciences has been fundamentally limited by the high cost of observing the ocean interior, which in turn has been driven by the necessity that humans go to sea to make those measurements. That linkage is being broken. We are on the cusp of an age in which robotic systems will operate routinely without the on-site attendance of humans. In this talk I will discuss the design of survey-class Autonomous Underwater Vehicles and multi-platform observing systems, some implications for the future of marine systems, and the impact on how we do science at sea. These topics are impossible to discuss without considering the larger ocean technology enterprise. The use of robotics has been a key enabler for the offshore oil and gas industry and is making large inroads into defense. As robots become more capable and accessible, their impacts will spread, enabling entirely new ocean enterprises. Marine robotics thus promises both to greatly improve our ability to observe the ocean and to provide a powerful enabling technology for ocean industries.


Biography:

James G. Bellingham’s research activities center on the creation of new, high-performance classes of underwater robots and the design and operation of large-scale multi-platform field programs. He has led and participated in research expeditions around the world, from the Arctic to the Antarctic. Jim founded the Consortium for Marine Robotics at the Woods Hole Oceanographic Institution (WHOI), founded the Autonomous Underwater Vehicles Laboratory at MIT, and co-founded Bluefin Robotics. He was Director of Engineering and Chief Technologist at the Monterey Bay Aquarium Research Institute (MBARI). Jim serves on numerous advisory boards and National Academies studies. His honors include the Lockheed Martin Award for Ocean Science and Engineering, selection as MIT’s Fourteenth Robert Bruce Wallace Lecturer, the Blue Innovation Rising Tides Award, and the Navy Superior Public Service Award.


Feb
24
Wed
LCSR Seminar: Hao Su “High-Performance Soft Wearable Robots for Human Augmentation and Rehabilitation” @ https://wse.zoom.us/s/94623801186
Feb 24 @ 12:00 pm – 1:00 pm


Abstract:

Wearable robots for physical augmentation of humans are the new frontier of robotics, but they are typically rigid, bulky, and confined to lab settings for steady-state walking assistance. To overcome those challenges, the first part of the talk will present a new design paradigm that leverages high-torque-density motors to enable the electrification of robotic actuation. With this paradigm, our rigid and soft robots achieve unprecedented performance, including the most lightweight powered exoskeleton, high compliance, and high-bandwidth human-robot interaction. The second part of the talk will focus on AI-powered controllers that estimate human dynamics and assist multimodal locomotion, helping wearers walk longer, squat more, jump higher, and swim faster. We also use robots as tools for scientific discovery to explore new research fields, including wearable robots for pediatric rehabilitation and pain relief for musculoskeletal disorders. Our breakthrough advances in bionic limbs will provide greater mobility and new hope to those with physical disabilities. We envision that our work will enable a paradigm shift in wearable robots from lab-bound rehabilitation machines to ubiquitous personal robots for workplace injury prevention, pediatric and elderly rehabilitation, home care, and space exploration.
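
For readers unfamiliar with how such controllers deliver assistance, here is a minimal sketch of gait-phase-based torque assistance, a common strategy in exoskeleton control. All parameter values are assumptions for illustration; this is not the speaker’s controller.

```python
# A sketch of gait-phase-based torque assistance (assumed profile values;
# not the speaker's controller). Phase is estimated from time since the
# last heel strike; the torque command is looked up along the profile.
import numpy as np

PROFILE_PHASE = np.linspace(0.0, 1.0, 11)   # fraction of gait cycle
PROFILE_TORQUE = np.array([0, 2, 5, 9, 12, 8, 3, 0, 0, 0, 0])  # N*m

def assist_torque(t_since_heel_strike, stride_period):
    """Interpolate the assistance profile at the current gait phase."""
    phase = (t_since_heel_strike / stride_period) % 1.0
    return float(np.interp(phase, PROFILE_PHASE, PROFILE_TORQUE))

print(assist_torque(0.45, 1.1))  # torque command in mid-stance, ~11.6 N*m
```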


Biography:

Hao Su is the Irwin Zahn Endowed Assistant Professor in the Department of Mechanical Engineering at the City University of New York, City College, and Director of the Biomechatronics and Intelligent Robotics (BIRO) Lab. He was a postdoctoral research fellow at Harvard University and the Wyss Institute for Biologically Inspired Engineering. Before that, he was a Research Scientist at Philips Research North America, where he designed robots for lung and cardiac surgery. He received his Ph.D. from Worcester Polytechnic Institute. Dr. Su received the NSF CAREER Award, the Best Medical Robotics Paper Runner-up Award at the IEEE International Conference on Robotics and Automation (ICRA), and the Philips Innovation Transfer Award. His research is sponsored by the NSF (National Robotics Initiative, Cyber-Physical Systems, Future of Work), an NIH R01, the National Institute on Disability, Independent Living, and Rehabilitation Research (NIDILRR), and the Toyota Mobility Foundation. He currently directs the Center of Assistive and Personal Robotics for Independent Living (APRIL), funded by the National Science Foundation and the Department of Health and Human Services.


Mar
3
Wed
LCSR Seminar: Chad Jenkins “Semantic Robot Programming… and Maybe Making the World a Better Place” @ https://wse.zoom.us/s/94623801186
Mar 3 @ 12:00 pm – 1:00 pm


Abstract:

The vision of interconnected, heterogeneous autonomous robots in widespread use is a coming reality that will reshape our world. Similar to “app stores” for modern computing, people at varying levels of technical background will contribute to “robot app stores” as designers and developers. However, current paradigms for programming robots beyond simple cases remain inaccessible to all but the most sophisticated developers and researchers. For people to fluently program autonomous robots, a robot must be able to interpret user instructions that accord with that user’s model of the world. The challenge is that many aspects of such a model are difficult or impossible for the robot to sense directly. We posit that a critical missing component is the grounding of semantic symbols in a manner that addresses both uncertainty in low-level robot perception and intentionality in high-level reasoning. Such a grounding will enable robots to work fluidly with human collaborators to perform tasks that require extended goal-directed autonomy.


I will present our efforts toward accessible and general methods of robot programming from the demonstrations of human users. Our recent work has focused on Semantic Robot Programming (SRP), a declarative paradigm for robot programming by demonstration that builds on semantic mapping. In contrast to procedural methods for motion imitation in configuration space, SRP is suited to generalizing user demonstrations of goal scenes in the workspace, such as for manipulation in cluttered environments. SRP extends our efforts to crowdsource robot learning from demonstration at scale through messaging protocols suited to web/cloud robotics. With such scaling of robotics in mind, I will discuss prospects for cultivating both equal opportunity and technological excellence in the context of broadening and strengthening Title IX and Title VI.
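
To convey the declarative flavor (specifying what the goal scene should look like rather than how to reach it), here is a toy representation of a goal scene as a set of symbolic relations. The encoding is hypothetical, not the actual SRP system.

```python
# A toy goal-scene encoding: the user declares the desired end state as
# symbolic relations; the robot is free to reach it by any motion plan.
# This representation is hypothetical, not the actual SRP system.
goal_scene = {
    ("on", "mug", "tray"),
    ("on", "tray", "table"),
    ("clear", "mug"),
}

def satisfied(goal, observed):
    """The goal holds when every declared relation appears in the scene."""
    return goal <= observed  # set inclusion

observed_scene = {
    ("on", "mug", "tray"), ("on", "tray", "table"),
    ("clear", "mug"), ("on", "book", "table"),
}
assert satisfied(goal_scene, observed_scene)
```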


Biography:

Odest Chadwicke Jenkins, Ph.D., is a Professor of Computer Science and Engineering and Associate Director of the Robotics Institute at the University of Michigan. Prof. Jenkins earned his B.S. in Computer Science and Mathematics at Alma College (1996), M.S. in Computer Science at Georgia Tech (1998), and Ph.D. in Computer Science at the University of Southern California (2003). He previously served on the faculty of Brown University in Computer Science (2004-15). His research addresses problems in interactive robotics and human-robot interaction, primarily focused on mobile manipulation, robot perception, and robot learning from demonstration. His research often intersects topics in computer vision, machine learning, and computer animation. Prof. Jenkins has been recognized as a Sloan Research Fellow and is a recipient of the Presidential Early Career Award for Scientists and Engineers (PECASE). His work has also been supported by Young Investigator awards from the Office of Naval Research (ONR), the Air Force Office of Scientific Research (AFOSR), and the National Science Foundation (NSF). Prof. Jenkins is currently serving as Editor-in-Chief of the ACM Transactions on Human-Robot Interaction. He is a Fellow of the American Association for the Advancement of Science and the Association for the Advancement of Artificial Intelligence, and a Senior Member of the Association for Computing Machinery and the Institute of Electrical and Electronics Engineers. He is an alumnus of the Defense Science Study Group (2018-19).


Mar
10
Wed
LCSR Seminar: Peter Kazanzides “Robotics and mixed reality to assist human task performance” @ https://wse.zoom.us/s/94623801186
Mar 10 @ 12:00 pm – 1:00 pm


Abstract:

The capabilities of artificial intelligence and robotics have advanced significantly in recent years, but many tasks still require human involvement or oversight for at least some phases. This is especially true for critical tasks, such as surgery or space operations, where the costs of failure are high. We therefore consider approaches, such as mixed reality visualization, interactive interfaces, and mechanical assistance, that can enable more effective partnerships between humans and machines. This presentation will highlight several example applications of computer-assisted intervention in the operating room and in space.


Biography:

Peter Kazanzides received the Ph.D. degree in electrical engineering from Brown University in 1988 and began work on surgical robotics as a postdoctoral researcher, advised by Russell H. Taylor, at the IBM T.J. Watson Research Center. Dr. Kazanzides co-founded Integrated Surgical Systems (ISS) in November 1990 to commercialize the robotic hip replacement research performed at IBM and the University of California, Davis. As Director of Robotics and Software, he was responsible for the design, implementation, validation and support of the ROBODOC System, which has been used for more than 20,000 hip and knee replacement surgeries. Dr. Kazanzides joined Johns Hopkins University in December 2002 and is currently appointed as a Research Professor of Computer Science. He is a member of the Laboratory for Computational Sensing and Robotics (LCSR) and directs the Sensing, Manipulation and Real-Time Systems (SMARTS) lab. His research interests include medical robotics, space robotics, and mixed reality, which share the common themes of human/machine interfaces to keep the human in the loop, real-time sensing to account for uncertainty, and system engineering to enable deployment in the real world.


Mar
17
Wed
LCSR Seminar: Joe Moore “Precision Post-Stall Flight with Aerobatic Fixed-Wing UAVs” @ https://wse.zoom.us/s/94623801186
Mar 17 @ 12:00 pm – 1:00 pm


Abstract:

Fixed-wing unmanned aerial vehicles (UAVs) offer significant performance advantages over rotary-wing UAVs in terms of speed, endurance, and efficiency. However, these vehicles have traditionally been severely limited with regard to take-off, landing, and overall maneuverability. In this talk, I will discuss our recent efforts to exploit post-stall aerodynamics to dramatically increase the agility of fixed-wing UAVs. I will first present results in precision post-stall landing and demonstrate that previous results in fixed-wing perching can be scaled to larger vehicles. I will then discuss our efforts to achieve quadcopter-like agility with fixed-wing vehicles navigating constrained environments. Our approach relies on a receding-horizon nonlinear model predictive control (NMPC) strategy to reduce the vehicle’s minimum turning radius via “post-stall turns”. We demonstrate this approach on a small UAV with a 24-inch wingspan in indoor environments and on a larger UAV with a 42-inch wingspan in an urban environment. Finally, I will discuss ongoing work to address challenges such as onboard sensing, automatic take-off, and aerobatic fixed-wing swarms.
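
The receding-horizon idea can be sketched in a few lines. The toy below uses a placeholder point-mass model and assumed horizon values; a real post-stall controller would substitute a nonlinear aerodynamic model and a fast solver.

```python
# A toy receding-horizon NMPC loop (illustrative only). The placeholder
# point-mass model, horizon, and weights are assumptions; a real
# post-stall controller would use a nonlinear aerodynamic model.
import numpy as np
from scipy.optimize import minimize

DT, HORIZON = 0.05, 20  # 50 ms steps, 1 s lookahead

def dynamics(x, u):
    """Point-mass stand-in for x' = f(x, u): state [px, py, vx, vy]."""
    return x + DT * np.array([x[2], x[3], u[0], u[1]])

def cost(u_flat, x0, goal):
    """Tracking cost plus a small control-effort penalty over the horizon."""
    x, total = x0.copy(), 0.0
    for u in u_flat.reshape(HORIZON, 2):
        x = dynamics(x, u)
        total += np.sum((x[:2] - goal) ** 2) + 1e-3 * np.sum(u ** 2)
    return total

def nmpc_step(x0, goal, u_warm):
    """Solve over the horizon; return the first control and the full plan."""
    res = minimize(cost, u_warm.ravel(), args=(x0, goal), method="L-BFGS-B")
    plan = res.x.reshape(HORIZON, 2)
    return plan[0], plan

x, goal = np.zeros(4), np.array([10.0, 0.0])
plan = np.zeros((HORIZON, 2))
for _ in range(5):                  # closed loop: apply first input, re-plan
    u0, plan = nmpc_step(x, goal, plan)
    x = dynamics(x, u0)             # here the "plant" is the model itself
```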


Biography:

Dr. Joseph Moore is a member of the senior technical staff at the Johns Hopkins University Applied Physics Laboratory and an Assistant Research Professor in the Department of Mechanical Engineering at the JHU Whiting School of Engineering. Dr. Moore received his Ph.D. in Mechanical Engineering from the Massachusetts Institute of Technology in 2014, where he demonstrated that the LQR-Trees algorithm can generate a robust post-stall perching controller for a fixed-wing glider. At JHU/APL, Dr. Moore has developed control, localization, and motion-planning algorithms for air, ground, and hybrid aerial-aquatic vehicles. His paper on the design and analysis of a fixed-wing aerial-aquatic vehicle was nominated for Best UAV Paper at ICRA 2018. He is the Principal Investigator and Project Manager for the ONR Short-field Landing Program, which seeks to enable aggressive post-stall landing maneuvers with large Group 1 Unmanned Aerial Systems. He is also the Principal Investigator of JHU/APL’s DARPA OFFSET Sprint 4 and Sprint 5 efforts, which seek to develop a swarm of aerobatic fixed-wing vehicles capable of high-speed navigation in urban environments.


Mar
24
Wed
LCSR Seminar: Jeremy D. Brown and Chien-Ming Huang “Human Subjects Experiments in Robotics Research” @ https://wse.zoom.us/s/94623801186
Mar 24 @ 12:00 pm – 1:00 pm


Abstract:

TBA


Biography:

Jeremy D. Brown, the John C. Malone Assistant Professor in the Department of Mechanical Engineering, explores the interface between humans and robots, with a specific focus on medical applications and haptic feedback. Brown is a graduate of the Atlanta University Center’s Dual Degree Engineering Program, earning bachelor’s degrees in Applied Physics and Mechanical Engineering from Morehouse College and the University of Michigan, respectively. He received his MSE and PhD in Mechanical Engineering from the University of Michigan, where he worked on haptic feedback for upper-extremity prosthetic devices. Prior to joining Johns Hopkins in 2017, he was a postdoctoral research fellow at the University of Pennsylvania.

Chien-Ming Huang, a John C. Malone Assistant Professor in the Department of Computer Science, studies human-machine teaming and creates innovative, intuitive, personalized technologies that provide social, physical, and behavioral support for people with a variety of abilities and characteristics, including children with autism spectrum disorders. Huang joined the Hopkins faculty in 2017. In 2018, he was selected for the Association for Computing Machinery’s (ACM) Conference on Human Factors in Computing Systems (CHI) Early Career Symposium and for the New Educators Workshop of the ACM’s Special Interest Group on Computer Science Education. As a PhD candidate, Huang received “Best Paper Runner-up” and “Best Student Poster Runner-up” honors at the 2013 Robotics: Science and Systems (RSS) conference and was named a 2012 Human-Robot Interaction (HRI) Pioneer.


Mar
31
Wed
LCSR Seminar: Auke Ijspeert “Investigating animal locomotion using biorobots” @ https://wse.zoom.us/s/94623801186
Mar 31 @ 12:00 pm – 1:00 pm


Abstract:

The ability to move efficiently in complex environments is a fundamental property of both animals and robots, and the problem of locomotion and movement control is an area in which neuroscience, biomechanics, and robotics can fruitfully interact. In this talk, I will present how biorobots and numerical models can be used to explore the interplay of the four main components underlying animal locomotion, namely central pattern generators (CPGs), reflexes, descending modulation, and the musculoskeletal system. Moving from lamprey to human locomotion, I will present a series of models suggesting that the respective roles of these components have changed during evolution, with a dominant role for CPGs in lamprey and salamander locomotion and a more important role for sensory feedback and descending modulation in human locomotion. I will also present a recent project showing how robotics can provide scientific tools for paleontology. Finally, I will discuss properties of these models that are useful for controlling locomotion in robots and lower-limb exoskeletons.
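
A minimal sketch of the CPG idea: a chain of coupled phase oscillators whose coupling enforces a constant phase lag between segments, producing a traveling wave like that of lamprey or salamander swimming. Parameter values here are assumptions for illustration, not taken from the speaker’s models.

```python
# A chain of coupled phase oscillators as a minimal CPG (assumed
# parameters, for illustration). Coupling pulls neighboring segments
# toward a fixed phase lag, so the phases lock into a traveling wave.
import numpy as np

N, FREQ, K = 8, 1.0, 4.0              # segments, intrinsic freq (Hz), coupling
PHASE_LAG = 2 * np.pi / N             # one full wave along the body
theta = np.random.uniform(0, 2 * np.pi, N)
dt = 0.01

for _ in range(3000):                 # 30 s of integration
    dtheta = 2 * np.pi * FREQ * np.ones(N)
    for i in range(N):
        if i > 0:
            dtheta[i] += K * np.sin(theta[i - 1] - theta[i] - PHASE_LAG)
        if i < N - 1:
            dtheta[i] += K * np.sin(theta[i + 1] - theta[i] + PHASE_LAG)
    theta += dt * dtheta

motor_setpoints = np.sin(theta)       # rhythmic command per body segment
```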


Biography:

Auke Ijspeert has been a professor at EPFL (Lausanne, Switzerland) since 2002, where he heads the Biorobotics Laboratory. He holds a BSc/MSc in physics from EPFL (1995) and a PhD in artificial intelligence from the University of Edinburgh (1999). He is an IEEE Fellow. His research interests lie at the intersection of robotics, computational neuroscience, nonlinear dynamical systems, and applied machine learning. He is interested in using numerical simulations and robots to gain a better understanding of animal locomotion, and in using inspiration from biology to design novel types of robots and controllers. He is also investigating how to assist persons with limited mobility using exoskeletons and assistive furniture.


Apr
7
Wed
LCSR Seminar: Robin Murphy “From the World Trade Center to the COVID-19 Pandemic: Robots and Disasters” @ https://wse.zoom.us/s/94623801186
Apr 7 @ 12:00 pm – 1:00 pm


Abstract:

This talk will describe how ground, aerial, and marine robots have been used in disasters, most recently the coronavirus pandemic. So far during the pandemic, 338 instances of robots in 48 countries have been reported protecting healthcare workers from unnecessary exposure, handling the surge in demand for clinical care, preventing infections, restoring economic activity, and maintaining individual quality of life. The uses span six sociotechnical work domains and 29 different use cases representing different missions, robot work envelopes, and human-robot interaction dyads. The dataset also confirms a model of adoption of robotics technology for disasters. Adoption favors robots that maximize suitability for established use cases while minimizing the risk of malfunction, hidden workload costs, or unintended consequences, as measured by NASA Technology Readiness Assessment metrics. Regulations do not present a major barrier, but availability, whether in terms of inventory or prohibitively high costs, does. The model suggests that to be prepared for future events, roboticists should partner with responders now, investigate how to rapidly manufacture complex, reliable robots on demand, and conduct fundamental research on predicting and mitigating risk in extreme or novel environments.


Biography:

Dr. Robin R. Murphy is the Raytheon Professor of Computer Science and Engineering at Texas A&M University, a TED speaker, and an IEEE and ACM Fellow. She helped create the fields of disaster robotics and human-robot interaction, deploying robots to 29 disasters in five countries, including the 9/11 World Trade Center, Fukushima, the Syrian boat refugee crisis, Hurricane Harvey, and the Kilauea volcanic eruption. Murphy’s contributions to robotics have been recognized with the ACM Eugene L. Lawler Award for Humanitarian Contributions, a US Air Force Exemplary Civilian Service Award medal, the AUVSI Foundation’s Al Aube Award, and the Motohiro Kisoi Award for Rescue Engineering Education (Japan). She wrote the best-selling textbook Introduction to AI Robotics (2nd edition, 2019) and the award-winning Disaster Robotics (2014), and serves as an editor for the science fiction/science fact focus series of the journal Science Robotics. She co-chaired the White House OSTP and NSF workshops on robotics for infectious diseases and recently co-chaired the National Academy of Engineering/Computing Community Consortium workshop on robots for COVID-19.


Apr
21
Wed
LCSR Seminar: Gordon Berman “Measuring behavior across scales” @ https://wse.zoom.us/s/94623801186
Apr 21 @ 12:00 pm – 1:00 pm


Abstract:

When we think of animal behavior, what typically comes to mind are actions – running, eating, swimming, grooming, flying, singing, resting. Behavior, however, is more than the catalogue of motions that an organism can perform. Animals organize their repertoire of actions into sequences and patterns whose underlying dynamics last much longer than any particular behavior. How an organism modulates these dynamics affects its success at accessing food, reproducing, and myriad other tasks essential for survival. Animals regulate these patterns of behavior via many interacting internal states (hunger, reproductive cycle, age, etc.) that we cannot directly measure. Studying the dynamics of these hidden states has accordingly proven challenging, due to a lack of measurement techniques and theoretical understanding. In this talk, I will outline our efforts to uncover the latent dynamics that underlie long-timescale structure in animal behavior. Looking across a variety of organisms, we use a novel methodology to measure animals’ full behavioral repertoires and find a non-trivial form of long-timescale dynamics that cannot be explained using standard mathematical frameworks. I will show how temporal coarse-graining can be used to understand how these dynamics are generated, and how the resulting coarse-grained states can be related to the internal states governing behavior through a combination of machine learning techniques and dynamical systems modeling. Inferring these hidden dynamics presents a new opportunity to generate insights into the neural and physiological mechanisms that animals use to select actions.
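
As a toy illustration of temporal coarse-graining, consider a four-behavior Markov chain with two “sticky” groups: the slow (second) eigenvector of the transition matrix splits the fine-grained behaviors into metastable coarse states and yields their implied relaxation timescale. This is a schematic example, not the speaker’s actual methodology.

```python
# A schematic example of temporal coarse-graining (not the speaker's
# actual methodology): for a four-behavior Markov chain with two sticky
# groups, the sign of the second eigenvector labels the metastable
# coarse states, and its eigenvalue sets their relaxation timescale.
import numpy as np

T = np.array([[0.90, 0.08, 0.01, 0.01],    # assumed transition matrix
              [0.08, 0.90, 0.01, 0.01],
              [0.01, 0.01, 0.90, 0.08],
              [0.01, 0.01, 0.08, 0.90]])

evals, evecs = np.linalg.eig(T)
order = np.argsort(-evals.real)            # eigenvalue 1 first, then slow modes
slow = evecs[:, order[1]].real             # slowest non-stationary mode
coarse_label = (slow > 0).astype(int)      # -> [0, 0, 1, 1] (or flipped)
timescale = -1.0 / np.log(evals.real[order[1]])  # ~25 steps for this T
print(coarse_label, timescale)
```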

Biography:

Gordon J. Berman, Ph.D., is an Assistant Professor of Biology at Emory University, Co-Director of the Simons-Emory International Consortium on Motor Control, and Chair of Recruitment for the Emory Neuroscience Graduate Program. His lab uses theoretical, computational, and data-driven approaches to gain quantitative insight into entire repertoires of animal behaviors, aiming to make connections to the neurobiology, genetics, and evolutionary histories that underlie them.

