Abstract:
TBA
Biography:
Jeremy D. Brown, the John C. Malone Assistant Professor in the Department of Mechanical Engineering, explores the interface between humans and robotics, with a specific focus on medical applications and haptic feedback. Brown is a graduate of the Atlanta University Center’s Dual Degree Engineering Program, earning bachelor’s degrees in Applied Physics and Mechanical Engineering from Morehouse College and the University of Michigan, respectively. He received his MSE and PhD in Mechanical Engineering at the University of Michigan, where he worked on haptic feedback for upper-extremity prosthetic devices. Prior to joining Johns Hopkins in 2017, he was a postdoctoral research fellow at the University of Pennsylvania.
Chien-Ming Huang, a John C. Malone Assistant Professor in the Department of Computer Science, studies human-machine teaming and creates innovative, intuitive, personalized technologies that provide social, physical, and behavioral support for people with a variety of abilities and characteristics, including children with autism spectrum disorders. Huang, who joined the Hopkins faculty in 2017, has received several honors, including being named a John C. Malone Assistant Professor at JHU. In 2018, he was selected for the Early Career Symposium of the Association for Computing Machinery's (ACM) Conference on Human Factors in Computing Systems (CHI) and the New Educators Workshop of the ACM Special Interest Group on Computer Science Education. As a PhD candidate, Huang received "Best Paper Runner-up" and "Best Student Poster Runner-up" honors at the 2013 Robotics: Science and Systems (RSS) conference and was named a 2012 Human-Robot Interaction (HRI) Pioneer.
Abstract:
The ability to move efficiently in complex environments is a fundamental property of both animals and robots, and the problem of locomotion and movement control is an area in which neuroscience, biomechanics, and robotics can fruitfully interact. In this talk, I will present how biorobots and numerical models can be used to explore the interplay of the four main components underlying animal locomotion, namely central pattern generators (CPGs), reflexes, descending modulation, and the musculoskeletal system. Going from lamprey to human locomotion, I will present a series of models suggesting that the respective roles of these components have changed during evolution, with a dominant role for CPGs in lamprey and salamander locomotion and a more important role for sensory feedback and descending modulation in human locomotion. I will also present a recent project showing how robotics can provide scientific tools for paleontology. Finally, I will discuss properties of these models that are of interest for the locomotion control of robots and lower-limb exoskeletons.
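For readers unfamiliar with CPG models, a common mathematical abstraction is a chain of coupled phase oscillators whose fixed phase lags produce a traveling wave along the body, as in lamprey- and salamander-style swimming controllers. The sketch below is a minimal, illustrative version of that idea; the segment count, coupling weights, and phase lags are assumptions for demonstration, not the specific models presented in the talk.

```python
import numpy as np

# Minimal chain-of-oscillators CPG sketch (illustrative parameters).
# Each body segment i has a phase theta_i; coupling pulls neighbors
# toward a fixed phase lag, producing a traveling wave along the chain.
N = 10                 # number of body segments (assumed)
freq = 1.0             # intrinsic oscillation frequency, Hz (assumed)
w = 4.0                # coupling weight (assumed)
lag = 2 * np.pi / N    # desired phase lag between adjacent segments
dt = 0.01

theta = np.random.uniform(0, 2 * np.pi, N)  # random initial phases
for step in range(5000):
    dtheta = 2 * np.pi * freq * np.ones(N)
    for i in range(N):
        if i > 0:       # coupling from the rostral (head-side) neighbor
            dtheta[i] += w * np.sin(theta[i - 1] - theta[i] - lag)
        if i < N - 1:   # coupling from the caudal (tail-side) neighbor
            dtheta[i] += w * np.sin(theta[i + 1] - theta[i] + lag)
    theta += dt * dtheta

# Motor output per segment, e.g. joint-angle setpoints for a robot:
output = np.sin(theta)
```

One appeal of this structure for robotics is that the wave pattern is an attractor: after a perturbation, the coupled oscillators converge back to the coordinated gait without replanning.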
Biography:
Auke Ijspeert has been a professor at EPFL (Lausanne, Switzerland) since 2002, where he heads the Biorobotics Laboratory. He holds a BSc/MSc in physics from EPFL (1995) and a PhD in artificial intelligence from the University of Edinburgh (1999). He is an IEEE Fellow. His research interests are at the intersection of robotics, computational neuroscience, nonlinear dynamical systems, and applied machine learning. He is interested in using numerical simulations and robots to gain a better understanding of animal locomotion, and in using inspiration from biology to design novel types of robots and controllers. He is also investigating how to assist people with limited mobility using exoskeletons and assistive furniture.
Abstract:
This talk will describe how ground, aerial, and marine robots have been used in disasters, most recently the coronavirus pandemic. So far during the pandemic, 338 instances have been reported of robots in 48 countries protecting healthcare workers from unnecessary exposure, handling the surge in demand for clinical care, preventing infections, restoring economic activity, and maintaining individual quality of life. The uses span six sociotechnical work domains and 29 different use cases representing different missions, robot work envelopes, and human-robot interaction dyads. The dataset also confirms a model of adoption of robotics technology for disasters. Adoption favors robots that maximize suitability for established use cases while minimizing the risk of malfunction, hidden workload costs, or unintended consequences, as measured by the NASA Technical Readiness Assessment metrics. Regulations do not present a major barrier, but availability does, whether in terms of inventory or prohibitively high costs. The model suggests that in order to be prepared for future events, roboticists should partner with responders now, investigate how to rapidly manufacture complex, reliable robots on demand, and conduct fundamental research on predicting and mitigating risk in extreme or novel environments.
Biography:
Dr. Robin R. Murphy is the Raytheon Professor of Computer Science and Engineering at Texas A&M University, a TED speaker, and an IEEE and ACM Fellow. She helped create the fields of disaster robotics and human-robot interaction, deploying robots to 29 disasters in five countries including the 9/11 World Trade Center, Fukushima, the Syrian boat refugee crisis, Hurricane Harvey, and the Kilauea volcanic eruption. Murphy's contributions to robotics have been recognized with the ACM Eugene L. Lawler Award for Humanitarian Contributions, a US Air Force Exemplary Civilian Service Award medal, the AUVSI Foundation's Al Aube Award, and the Motohiro Kisoi Award for Rescue Engineering Education (Japan). She has written the best-selling textbook Introduction to AI Robotics (2nd edition 2019) and the award-winning Disaster Robotics (2014), and serves as an editor of the science fiction/science fact focus series for the journal Science Robotics. She co-chaired the White House OSTP and NSF workshops on robotics for infectious diseases and recently co-chaired the National Academy of Engineering/Computing Community Consortium workshop on robots for COVID-19.
Abstract:
When we think of animal behavior, what typically comes to mind are actions – running, eating, swimming, grooming, flying, singing, resting. Behavior, however, is more than the catalogue of motions that an organism can perform. Animals organize their repertoire of actions into sequences and patterns whose underlying dynamics last much longer than any particular behavior. How an organism modulates these dynamics affects its success at accessing food, reproducing, and myriad other tasks essential for survival. Animals regulate these patterns of behavior via many interacting internal states (hunger, reproductive cycle, age, etc.) that we cannot directly measure. Studying these hidden states' dynamics, accordingly, has proven challenging due to a lack of measurement techniques and theoretical understanding. In this talk, I will outline our efforts to uncover the latent dynamics that underlie long-timescale structure in animal behavior. Looking across a variety of organisms, we use a novel methodology for measuring animals' full behavioral repertoires, revealing a non-trivial form of long-timescale dynamics that cannot be explained using standard mathematical frameworks. I will present how temporal coarse-graining can be used to understand how these dynamics are generated, and how the resulting coarse-grained states can be related to the internal states governing behavior through a combination of machine learning techniques and dynamical systems modeling. Inferring these hidden dynamics presents a new opportunity to generate insights into the neural and physiological mechanisms that animals use to select actions.
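As a rough illustration of what temporal coarse-graining can mean in this setting, the toy sketch below estimates a Markov transition matrix from a sequence of discrete behavior labels and groups behaviors into metastable coarse-grained states using the slowest non-trivial eigenvector (a PCCA-style split). The random labels and two-state split are stand-in assumptions, not the lab's actual pipeline.

```python
import numpy as np

# Toy sketch of temporal coarse-graining of a behavioral sequence.
# Real data would be a long sequence of discrete behavior labels;
# here random labels stand in purely so the code runs end to end.
labels = np.random.randint(0, 5, size=10000)
n = labels.max() + 1

# Count transitions between consecutive behaviors, then row-normalize
# to get a Markov transition matrix T (rows sum to 1).
T = np.zeros((n, n))
for a, b in zip(labels[:-1], labels[1:]):
    T[a, b] += 1
T /= T.sum(axis=1, keepdims=True)

# Slow dynamics live in the right eigenvectors whose eigenvalues are
# closest to 1; the leading eigenvector is trivially constant.
evals, evecs = np.linalg.eig(T)
order = np.argsort(-evals.real)
slow = evecs[:, order[1]].real   # slowest non-trivial mode

# A crude two-state coarse-graining: partition behaviors by the sign
# structure of that slow eigenvector.
coarse_state = (slow > 0).astype(int)
print("behavior -> coarse state:", coarse_state)
```

On real behavioral data, eigenvalues near 1 indicate coarse states that persist far longer than any individual action, which is the long-timescale structure the abstract refers to.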
Biography:
Gordon J. Berman, Ph.D., is an Assistant Professor of Biology at Emory University, Co-Director of the Simons-Emory International Consortium on Motor Control, and Chair of Recruitment for the Emory Neuroscience Graduate Program. His lab uses theoretical, computational, and data-driven approaches to gain quantitative insight into entire repertoires of animal behaviors, aiming to make connections to the neurobiology, genetics, and evolutionary histories that underlie them.
Abstract:
Autonomous systems offer the promise of providing greater safety and access. However, this positive impact will only be achieved if the underlying algorithms that control such systems can be certified to behave robustly. This talk will describe a pair of techniques grounded in infinite-dimensional optimization to address this challenge.
The first technique, which is called Reachability-based Trajectory Design, constructs a parameterized representation of the forward reachable set, which it then uses in concert with predictions to enable real-time, certified collision checking. This approach, which is guaranteed to generate not-at-fault behavior, is demonstrated on a variety of real-world platforms including ground vehicles, manipulators, and walking robots. The second technique is a modeling method that allows one to represent a nonlinear system as a linear system in the infinite-dimensional space of real-valued functions. By applying this modeling method, one can employ well-understood linear model predictive control techniques to robustly control nonlinear systems. The utility of this approach is verified on a soft robot control task.
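The second technique is in the spirit of Koopman-operator methods, where one fits a linear operator on a dictionary of observables of the state, for example via extended dynamic mode decomposition (EDMD). The sketch below shows an EDMD-style least-squares fit on a toy pendulum; the dynamics, observable dictionary, and data sizes are illustrative assumptions rather than the speaker's specific formulation.

```python
import numpy as np

# EDMD-style sketch: lift a nonlinear system into a feature space where
# its dynamics are approximately linear (illustrative toy example).
def f(x):
    # Toy nonlinear dynamics: one Euler step of a damped pendulum, dt=0.05
    th, om = x
    return np.array([th + 0.05 * om,
                     om + 0.05 * (-np.sin(th) - 0.1 * om)])

def lift(x):
    # Assumed dictionary of observables: the lifted "linear" state
    th, om = x
    return np.array([th, om, np.sin(th), np.cos(th), om * np.cos(th)])

# Collect lifted snapshot pairs (x_k, x_{k+1}) from random initial states
X, Y = [], []
for _ in range(2000):
    x = np.random.uniform(-1, 1, 2)
    X.append(lift(x))
    Y.append(lift(f(x)))
X, Y = np.array(X).T, np.array(Y).T

# Least-squares fit of a linear operator A with lift(x_{k+1}) ~ A lift(x_k).
# A can then be handed to standard linear MPC machinery.
A = Y @ np.linalg.pinv(X)
print("lifted linear model A:\n", np.round(A, 3))
```

The design choice here is the trade: a harder nonlinear control problem is exchanged for a higher-dimensional but linear one, for which robust MPC tools are well understood.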
Biography:
Ram Vasudevan is an assistant professor in Mechanical Engineering and the Robotics Institute at the University of Michigan. He received a BS in Electrical Engineering and Computer Sciences, an MS in Electrical Engineering, and a PhD in Electrical Engineering, all from the University of California, Berkeley. He is a recipient of the NSF CAREER Award and the ONR Young Investigator Award. His work has received best paper awards at the IEEE Conference on Robotics and Automation, the ASME Dynamic Systems and Control Conference, and the IEEE OCEANS Conference, and has been a finalist for Best Paper at Robotics: Science and Systems.
The Spring 2021 Final Project Presentation Session for Computer Integrated Surgery II will be held Thursday, May 6th from 18:00 to 21:00 Eastern Time via Zoom. This year, we have 18 amazing projects supported by graduate students, faculty members, surgeons, and companies. We are excited to invite you to join our event to see what the students have achieved through their efforts over the past semester.
Connection information
Join Zoom Meeting: https://wse.zoom.us/j/635091574
Meeting ID: 635 091 574
Password: 001987
Agenda
– 18:00—18:10 Arrival and greetings
– 18:10—18:30 One-minute teaser presentations
– 18:30—20:30 Interactive session in breakout rooms
– 20:30—20:40 Reconvene and announce finalists
– 20:40—20:55 Presentations by finalists
– 20:55—21:00 Announcement of Best Project winner
The Spring 2021 Final Project Presentation Session for Deep Learning will be held Tuesday, May 11th from 9:00 a.m. to 12:00 p.m. Eastern Time via Zoom. This year, we have many amazing projects to review and celebrate. We are excited to invite you to join our event to see what the students have achieved through their efforts over the past semester.
Join Zoom Meeting
https://wse.zoom.us/j/98898500603?pwd=dlY2RHlUZXhFUXErK1J6bHcxVUNGdz09
The closing ceremonies of the Computational Sensing and Medical Robotics (CSMR) REU are set to take place Friday, August 6 from 9am until 3pm at this Zoom link. Seventeen undergraduate students from across the country are eager to share the culmination of their 10 weeks of work this summer.
The schedule for the day is listed below, but each presentation is featured in more detail in the program. The event is open to the public and it is not necessary to RSVP.
2021 REU Final Presentations

| Time | Presenter | Project Title | Faculty Mentor | Student/Postdoc/Research Engineer Mentors |
| --- | --- | --- | --- | --- |
| 9:00 | Ben Frey | Deep Learning for Lung Ultrasound Imaging of COVID-19 Patients | Muyinatu Bell | Lingyi Zhao |
| 9:15 | Camryn Graham | Optimization of a Photoacoustic Technique to Differentiate Methylene Blue from Hemoglobin | Muyinatu Bell | Eduardo Gonzalez |
| 9:30 | Ariadna Rivera | Autonomous Quadcopter Flying and Swarming | Enrique Mallada | Yue Shen |
| 9:45 | Katie Sapozhnikov | Force Sensing Surgical Drill | Russell Taylor | Anna Goodridge |
| 10:00 | Savannah Hays | Evaluating SLANT Brain Segmentation using CALAMITI | Jerry Prince | Lianrui Zuo |
| 10:15 | Ammaar Firozi | Robustness of Deep Networks to Adversarial Attacks | René Vidal | Kaleab Kinfu, Carolina Pacheco |
| 10:30 | Break | | | |
| 10:45 | Karina Soto Perez | Brain Tumor Segmentation in Structural MRIs | Archana Venkataraman | Naresh Nandakumar |
| 11:00 | Jonathan Mi | Design of a Small Legged Robot to Traverse a Field of Multiple Types of Large Obstacles | Chen Li | Ratan Othayoth, Yaqing Wang, Qihan Xuan |
| 11:15 | Arko Chatterjee | Telerobotic System for Satellite Servicing | Peter Kazanzides, Louis Whitcomb, Simon Leonard | Will Pryor |
| 11:30 | Lauren Peterson | Can a Fish Learn to Ride a Bicycle? | Noah Cowan | Yu Yang |
| 11:45 | Josiah Lozano | Robotic System for Mosquito Dissection | Russell Taylor, Iulian Iordachita | Anna Goodridge |
| 12:00 | Zulekha Karachiwalla | Application of Dual-Modality Haptic Feedback within Surgical Robotics | Jeremy Brown | |
| 12:15 | Break | | | |
| 1:00 | James Campbell | Understanding Overparameterization from Symmetry | René Vidal | Salma Tarmoun |
| 1:15 | Evan Dramko | Establishing FDR Control for Genetic Marker Selection | Soledad Villar, Jeremias Sulam | N/A |
| 1:30 | Chase Lahr | Modeling Dynamic Systems Through a Classroom Testbed | Jeremy Brown | Mohit Singhala |
| 1:45 | Anire Egbe | Object Discrimination Using Vibrotactile Feedback for Upper Limb Prosthetic Users | Jeremy Brown | |
| 2:00 | Harrison Menkes | Measuring Proprioceptive Impairment in Stroke Survivors (Pre-Recorded) | Jeremy Brown | |
| 2:15 | Deliberations | | | |
| 3:00 | Winner Announced | | | |
Mark Savage is the Johns Hopkins Life Design Educator for Engineering Master's Students, advising on all aspects of career development and the internship and job search, with the Handshake Career Management System as an essential tool. Look for weekly newsletters, which will soon be emailed to Homewood WSE Master's students on Sunday nights.
Abstract:
Robots currently have the capacity to help people in several fields, including health care, assisted living, and manufacturing, where robots must share physical space and actively interact with people in teams. The performance of these teams depends upon how fluently all team members can jointly perform their tasks. To be successful within a group, a robot requires the ability to perceive other members' actions, model interaction dynamics, predict future actions, and adapt its plans accordingly in real time. In the Collaborative Robotics Lab (CRL), we develop novel perception, prediction, and planning algorithms for robots to fluently coordinate and collaborate with people in complex human environments. In this talk, I will highlight various challenges of deploying robots in real-world settings and present our recent work tackling several of these challenges.
Biography:
Tariq Iqbal is an Assistant Professor of Systems Engineering and Computer Science (by courtesy) at the University of Virginia (UVA). Prior to joining UVA, he was a Postdoctoral Associate in the Computer Science and Artificial Intelligence Lab (CSAIL) at MIT. He received his Ph.D. in CS from the University of California San Diego (UCSD). Iqbal leads the Collaborative Robotics Lab (CRL), which focuses on building robotic systems that work alongside people in complex human environments, such as factories, hospitals, and educational settings. His research group develops artificial intelligence, computer vision, and machine learning algorithms to enable robots to solve problems in these domains.