Sponsored by the Hopkins Robotics Alumni Network, the Laboratory for Computational Sensing + Robotics, and the Healthcare Affinity group
Join us as we hear from Dr. Ayushi Sinha, Senior Scientist in the Precision Diagnosis & Image Guided Therapy department at Philips Research North America. Dr. Sinha will discuss her time at Hopkins, her career journey, and her current role. We’ll have time for Q&A with our speaker and time to network with one another. This program will be presented via Zoom. A link will be shared with you in advance.
Disclaimer: The perspectives and opinions expressed by the speaker(s) during this program are those of the speaker(s) and not necessarily those of Johns Hopkins University. The scheduling of any speaker at an alumni event or program does not constitute the University’s endorsement of the speaker’s perspectives and opinions.
Ayushi Sinha is a Senior Scientist in the Precision Diagnosis & Image Guided Therapy department at Philips Research North America. She currently leads a project focused on using machine learning to improve workflow during X-ray guided minimally invasive procedures and has worked on improving guidance during biopsy procedures in her previous roles at Philips. She also leads a group focused on generating intellectual property around machine learning solutions for X-ray guided interventions.
Ayushi completed her Ph.D. at Johns Hopkins University with Russ Taylor and Greg Hager in the Department of Computer Science, with a focus on using statistical shape models to improve guidance during endoscopic sinus procedures. She continued at Hopkins as a postdoctoral fellow and research faculty member to explore unsupervised learning in image-based tool tracking. Before her Ph.D., Ayushi received a Master of Science in Engineering degree in Computer Science from Hopkins, working with Misha Kazhdan, and a Bachelor of Science degree in Computer Science and a Bachelor of Arts degree in Mathematics from Providence College.
The middle of the spring semester ushers in interview season for many students seeking internship or full-time employment opportunities. Mark Savage, Life Design Educator for Engineering Master’s Students, will walk you through what to expect and how to ace the job interview. Time permitting, we may also discuss the Elevator Pitch in preparation for your upcoming Robotics Industry Day. Remember to convey some of those 800 skills that relate to the jobs you’ll be discussing.
Update Jan 28: Industry Day will now be virtual, since we cannot predict what the COVID climate will be. To reduce Zoom fatigue, we are splitting the event into two half days: Industry Day will run Monday, March 21, 1–4 pm, and Tuesday, March 22, 1–4 pm.
**Monday 3/21**

| Time | Session |
| --- | --- |
| 1:00 pm | Welcome WSE: Larry Nagahara, Associate Dean for Research |
| 1:05 pm | Introduction to LCSR: Russell H. Taylor, Director |
| 1:25 pm | LCSR Education: Louis Whitcomb, Deputy Director |
| 1:40 pm | Student Research Talk 1 – Max Li |
| 1:50 pm | Student Research Talk 2 – Will Pryor |
| 2:00 pm | Student Research Talk 3 – Neha Thomas |
| 2:10 pm | Student Research Talk 4 – Filip Aronshtein and Peter Weiss |
| 2:30 pm | JHTV – Seth Zonies |
| 2:45 pm | Industry Talk – Gouthami Chintalapani, Siemens |
| 3:05 pm | Industry Talk – Vinutha Kallem, Waymo |
| 3:35 pm | New Faculty Talk – Axel Krieger |
| 3:55 pm | New Faculty Talk – Mathias Unberath |
| 4:15 pm | Closing: Russell H. Taylor, Director |

**Tuesday 3/22 (Gather Town)**

| Time | Session |
| --- | --- |
| 1:00–3:00 pm | Poster and Demo Session |
| 3:00–4:00 pm | Student and Industry Resume Review |
The Laboratory for Computational Sensing and Robotics will highlight its elite robotics students and showcase cutting-edge research projects in areas that include Medical Robotics, Extreme Environments Robotics, Human-Machine Systems for Manufacturing, BioRobotics and more.
Robotics Industry Day will provide top companies and organizations in the private and public sectors with access to the LCSR’s forward-thinking, solution-driven students. The event will also serve as an informal opportunity to explore university-industry partnerships.
You will experience dynamic presentations and discussions, observe live demonstrations, and participate in speed networking sessions that afford you the opportunity to meet Johns Hopkins’ most talented robotics students before they graduate.
Please contact Ashley Moriarty if you have any questions.
Locomotion in living systems and bio-inspired robots requires the generation and control of oscillatory motion. While a common method to generate motion is through modulation of time-dependent “clock” signals, in this talk we will motivate and study an alternative: generating oscillations through autonomous limit-cycle systems. Limit-cycle oscillators for robotics have many desirable properties, including adaptive behaviors, entrainment between oscillators, and potential simplification of motion control. I will present several examples of the generation and control of autonomous oscillatory motion in bio-inspired robotics. First, I will describe our recent work to study the dynamics of wingbeat oscillations in “asynchronous” insects and how we can build these behaviors into micro-aerial vehicles. In the second part of this talk, I will describe how limit-cycle gait generation in collective robots can enable swarms to synchronize their movement through contact, without communication. More broadly, I hope to motivate why we should look to autonomous dynamical systems for designing and controlling emergent locomotor behaviors in bio-inspired robotics.
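For readers who want a concrete picture of an autonomous limit-cycle oscillator, the sketch below simulates a Hopf oscillator, a standard textbook model for gait generation in the central-pattern-generator literature. It is a minimal illustration, not code from the speaker's work; coupling the states of several such oscillators is one common route to the entrainment behaviors mentioned above.

```python
import numpy as np

# Hopf oscillator: a textbook autonomous limit-cycle system often used
# as a central pattern generator (CPG). Trajectories converge to a
# circle of radius sqrt(mu) with angular frequency omega, from almost
# any initial condition -- no time-dependent "clock" signal required.
def hopf_step(x, y, mu=1.0, omega=2.0 * np.pi, dt=1e-3):
    r2 = x * x + y * y
    dx = (mu - r2) * x - omega * y
    dy = (mu - r2) * y + omega * x
    return x + dt * dx, y + dt * dy

x, y = 0.1, 0.0            # start well inside the limit cycle
for _ in range(5000):      # integrate 5 s with forward Euler
    x, y = hopf_step(x, y)

print(f"radius after transient: {np.hypot(x, y):.3f}")  # approx 1.0
```

The key property is that the oscillation amplitude and frequency are built into the dynamics themselves, so perturbations decay back to the same cycle rather than accumulating as they would with an open-loop clock signal.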
Dr. Nick Gravish received his PhD from Georgia Tech where he used robots as physical models to motivate and study aspects of biological locomotion. During his post-doc Gravish worked in the microrobotics lab of Rob Wood at Harvard, where he gained expertise in designing and studying insect-scale robots. Gravish is currently an assistant professor at UC San Diego in the Mechanical and Aerospace Engineering department. His lab bridges the gap between bio-inspiration, biomechanics, and robotics, towards the development of new bio-inspired robotic technologies to improve the adaptability and resilience of mobile robots.
Designing robots for human interaction is a multifaceted challenge involving the robot’s intelligent behavior, physical form, mechanical structure, and interaction schema. Our lab develops and studies human-centered robots using a combination of methods from AI, Design, and Human-Computer Interaction. This talk focuses on three recent projects, two concerning the design of a new robot, and one that tackles designing robots that help human designers.
Guy Hoffman is Associate Professor and the Mills Family Faculty Fellow in the Sibley School of Mechanical and Aerospace Engineering at Cornell University. Prior to that, he was an Assistant Professor at IDC Herzliya and co-director of the IDC Media Innovation Lab. Hoffman holds a Ph.D. from MIT in the field of human-robot interaction. He heads the Human-Robot Collaboration and Companionship (HRC²) group, studying the algorithms, interaction schema, and designs enabling close interactions between people and personal robots in the workplace and at home. Among other projects, Hoffman developed the world’s first human-robot joint theater performance and the first real-time improvising human-robot Jazz duet. His research papers have won several top academic awards, including Best Paper awards at robotics conferences in 2004, 2006, 2008, 2010, 2013, 2015, 2018, 2019, 2020, and 2021. His TEDx talk is one of the most viewed online talks on robotics, watched more than 3 million times.
Many successful approaches to robotic locomotion and manipulation rely on high-quality simulation tools. Many such approaches are “bottom-up” in a modeling sense, accounting for all internal forces and environmental interactions explicitly. These “bottom-up” models are used either beforehand (as in reinforcement learning) and/or in real time. However, various types of robots are getting smaller, softer, and more complex (e.g., bio-hybrid actuators). Some robots lean on low-precision manufacturing and fabrication techniques, and many robots are now being asked to operate in hard-to-characterize natural interfaces like the human body. Such attributes can render “bottom-up” simulators impractical for expected use cases on various research frontiers, such as micro-biomedical robots and soft robots deployed in uncharacterized environments.

In this talk I will revisit the reconstruction equation, a result from the geometric mechanics literature that offers a “top-down” view of Lagrangian systems, permitting insights into generalizable system behaviors along a spectrum of friction-momentum dominance. I will show how these tools permit rapid modeling of high-complexity robots in their operating environment without the need to specify CAD models or any explicit forces. I will also discuss a related strength and weakness of the approach resulting from its use of symmetries. Surprisingly, results in simulation and hardware indicate that even with eight-jointed systems, useful behavioral models can be computed from tens of cycles of data. This suggests that high-degree-of-freedom robots can adjust and excel in situations where explicit force models are poorly understood.

I will also briefly discuss a framework for robot recovery that leans on these tools, as well as a metric for a robot’s ability to cover the local space of motions, computed on the Lie algebra of the position space. The metric allows primitives to be valued for their contribution to the space of composed motions rather than just their individual qualities. Results here include a Dubins car that learns how to turn left (with its steering wheel restricted to only turn right) in less than a second, as well as a robot made of tree branches that learns to walk around the laboratory with less than twelve minutes of experimental data. I hope to motivate the general use of structural reductions as we pursue modeling and control of the next generation of high-complexity robots.
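For context, the reconstruction equation referenced above is commonly written, in the kinematic (friction-dominated) regime, in the following textbook form from the geometric mechanics literature (notation varies by author):

$$\xi \;=\; g^{-1}\dot{g} \;=\; -\mathbf{A}(r)\,\dot{r},$$

where $g$ is the group (position) variable, $r$ is the vector of shape variables, and $\mathbf{A}(r)$ is the local connection mapping shape velocity to body velocity. In the mixed friction-momentum regime, a momentum term is typically added, giving $\xi = -\mathbf{A}(r)\,\dot{r} + \boldsymbol{\Gamma}(r)\,p$ for a generalized momentum $p$. The “top-down” appeal is that $\mathbf{A}(r)$ can be fit from observed motion data without enumerating the underlying contact and friction forces.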
Dr. Brian Bittner received a B.S. from Carnegie Mellon and a Ph.D. from Michigan, where he researched the theory, simulation, and application of physics-informed machine learning for in situ behavior modeling and optimization. He has sought out cross-disciplinary environments for research, collaborating with physicists, biologists, and mathematicians to bring insights from these fields into robotic systems. Bittner is currently a research scientist at the Applied Physics Lab, where he works on approaches to modeling and control for soft robots and underwater manipulation.
The talk will present a survey of my research activities, with a more detailed presentation of our guidance system for robot-assisted prostate cancer surgery. The majority of prostate cancer surgery is carried out with the da Vinci surgical system. Tracking of instruments and hand-eye calibration of this robotic system enable the overlay of pre-operative magnetic resonance imaging by registration to real-time ultrasound. This enables visualization of sub-surface anatomy and cancer. We will discuss our system design and our visualization and registration approaches.
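As background, hand-eye calibration of the kind mentioned above is classically posed as the matrix equation below; this is the standard general formulation, not a claim about the speaker’s specific method:

$$A_i X = X B_i, \qquad i = 1, \dots, n,$$

where each $A_i$ is a relative motion of the robot arm, $B_i$ is the corresponding relative motion observed by the imaging sensor, and $X$ is the unknown, fixed transform between the two frames, solved over many paired motions.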
We will also discuss instrumentation for force sensing using the da Vinci Research Kit, and a new approach to teleguidance for ultrasound examinations.
Tim Salcudean is a Professor in the Department of Electrical and Computer Engineering at the University of British Columbia, where he holds the C.A. Laszlo Chair in Biomedical Engineering. He is cross-appointed with the UBC School of Biomedical Engineering and the Vancouver Prostate Centre. He is on the steering committee of the IPCAI conference and on the Editorial Board of the International Journal of Robotics Research. He is a Fellow of the IEEE, of MICCAI, and of the Canadian Academy of Engineering. His research interests are in medical robotics, medical image analysis, and elastography imaging.
Flexible and soft medical robots offer capabilities beyond those of conventional rigid-link robots due to their ability to traverse confined spaces and conform to highly curved paths. They also offer potential for improved safety due to their inherent compliance. In this talk, I will present several new robot designs for various surgical applications. In particular, I will discuss our work on soft, growing robots that achieve locomotion by material extending from their tip. I will discuss limitations in miniaturizing such robots, along with methods for actively steering, sensing, and controlling them. Finally, I will discuss new sensing and human-in-the-loop control paradigms that are aimed at improving the performance of flexible surgical robots.
Tania Morimoto is an Assistant Professor in the Department of Mechanical and Aerospace Engineering and in the Department of Surgery at the University of California, San Diego. She received the B.S. degree from the Massachusetts Institute of Technology, Cambridge, MA, and the M.S. and Ph.D. degrees from Stanford University, Stanford, CA, all in mechanical engineering. Her research lab focuses on the design and control of flexible continuum robots for increased dexterity and accessibility in uncertain environments, particularly for minimally invasive surgical interventions. The lab is also working to address the challenges of designing human-in-the-loop interfaces for controlling these flexible and soft robots, including the integration of haptic feedback to improve surgical outcomes. She is a recipient of the Hellman Fellowship (2021), the Beckman Young Investigator Award (2022), and the NSF CAREER Award (2022).