How many skills do you think you have? Mark Savage, Life Design Educator for Engineering Master's Students, will explain how the truth may far exceed your estimate. Knowing, understanding, and communicating your major skills will prove useful as you pursue jobs and internships.
Dr. Ghazi, MD, FEBU, MHPE, received his medical education from Cairo University, Egypt, in 2000, where he also completed his Urology residency (2001-2005). He completed a series of fellowships in minimally invasive urological surgery in Paris and Austria (2009-2011), where he received accreditation from the European Board of Urology. He then completed an Endourology and robotic surgery fellowship at the University of Rochester Medical Center, New York (2011-2013), after which he was appointed Assistant Professor of Urology at the University of Rochester (2013).
Dr. Ghazi specializes in the diagnosis and minimally invasive treatment of urological cancers as well as complex stone disease. In addition, he has pursued research grants in education, simulation research, and surgical training. To enhance his educational background, he was awarded the George Corner Dean's Teaching Fellowship (2014-2016), completed the Harvard Macy Institute Program for Educators in Health Professions in 2016, and earned a Master's in Health Professions Education at the Warner School of Education, University of Rochester (2016-2020). He is currently enrolled in a two-year Senior Leadership Education and Development Program at the University of Rochester.
The pandemic exacerbated inequities faced by people with disabilities and healthcare workers; both are at high risk of adverse physical and mental health outcomes. Robots alone are not going to fix these major societal problems; however, our work explores how we can design technology to lessen the burden of systemic ableism and healthcare system stress. I will discuss several of our recent projects in acute care and community health contexts. In acute care, we are building hospital-based robots that support the clinical workforce with item delivery, telemedicine, and decision support. In community health, we are creating interactive and adaptive systems that aim to extend the reach of cognitive neurorehabilitative therapies, provide respite to overburdened caregivers, and explore how technology might mediate positive interactions during hardship. We focus on building robots that can adaptively team with and longitudinally learn from people, and personalize and tailor their behavior.
Dr. Laurel Riek is a professor in Computer Science and Engineering at the University of California, San Diego, with a joint appointment in the Department of Emergency Medicine, and is affiliated with the Contextual Robotics Institute and Design Lab. Dr. Riek directs the Healthcare Robotics Lab and leads research in human-robot teaming and health informatics, with a focus on autonomous robots that work proximately with people. Riek's current research interests include long-term learning, robot perception, and personalization, with applications in acute care, neurorehabilitation, and home health. Dr. Riek received a Ph.D. in Computer Science from the University of Cambridge and a B.S. in Logic and Computation from Carnegie Mellon. Riek served as a Senior Artificial Intelligence Engineer and Roboticist at The MITRE Corporation from 2000 to 2008, working on learning and vision systems for robots, and held the Clare Boothe Luce chair in Computer Science and Engineering at the University of Notre Dame from 2011 to 2016. Dr. Riek has received the NSF CAREER Award, AFOSR Young Investigator Award, and Qualcomm Research Award, and was named one of ASEE's 20 Faculty Under 40. Dr. Riek is the HRI 2023 General Co-Chair, served as Program Co-Chair for HRI 2020, and serves on the editorial boards of T-RO and THRI.
While human interaction remains key to compassionate treatment, medical robotics holds the potential to improve surgical processes by enabling the scaling of forces and actuation, providing safe and individualized treatment to patients, and allowing for efficient use of healthcare personnel and resources. Machine learning algorithms and standardization of processes can increase the quality of medical diagnoses and treatments, particularly when analyzing large quantities of data. Technical and robotic systems can thus support medical staff in all steps of a medical process.
This talk introduces several assistive robotic systems for minimally invasive surgical procedures being researched at the Health Robotics and Automation Lab at KIT, Germany. On the one hand, we will discuss steerable, flexible robotic tools for medical applications that require delicate tissue handling. On the other hand, we will present cognitive robotic surgeons and augmented reality support in the operating room for applications in laparoscopy and neurosurgery.
Franziska Mathis-Ullrich is Assistant Professor for Medical Robotics at the Karlsruhe Institute of Technology (KIT) in Germany. Her primary research focus is on minimally invasive, cognition-controlled robotic systems and embedded machine learning, with emphasis on applications in surgery. She received her B.Sc. and M.Sc. degrees in mechanical engineering and robotics in 2009 and 2012, respectively, and obtained her Ph.D. in Microrobotics from ETH Zurich in 2017. Since 2019, she has been an Assistant Professor with the Health Robotics and Automation Laboratory at KIT.
Prof. Mathis-Ullrich is vice-president of the German Society for Computer- and Robot-Assisted Surgery (CURAC). She has received the IEEE ICRA Best Paper Award in Medical Robotics (2014) and the IEEE BioRob Best Student Paper Award (2016), and her team twice won first prize in the ICRA Microassembly Challenge (2014 & 2015). Furthermore, she was named to the prestigious Forbes "30 under 30" list (2017).
Sponsored by the Hopkins Robotics Alumni Network, the Laboratory for Computational Sensing + Robotics, and the Healthcare Affinity
Join us as we hear from Dr. Ayushi Sinha, Senior Scientist in the Precision Diagnosis & Image Guided Therapy department at Philips Research North America. Dr. Sinha will discuss her time at Hopkins, her career journey, and her current role. We’ll have time for Q&A with our speaker and time to network with one another. This program will be presented by Zoom. A link will be shared with you in advance.
Disclaimer: The perspectives and opinions expressed by the speaker(s) during this program are those of the speaker(s) and not, necessarily, those of Johns Hopkins University and the scheduling of any speaker at an alumni event or program does not constitute the University’s endorsement of the speaker’s perspectives and opinions.
Ayushi Sinha is a Senior Scientist in the Precision Diagnosis & Image Guided Therapy department at Philips Research North America. She currently leads a project focused on using machine learning to improve workflow during X-ray guided minimally invasive procedures and has worked on improving guidance during biopsy procedures in her previous roles at Philips. She also leads a group focused on generating intellectual property around machine learning solutions for X-ray guided interventions.
Ayushi completed her Ph.D. at Johns Hopkins University with Russ Taylor and Greg Hager in the Department of Computer Science with a focus on using statistical shape models to improve guidance during endoscopic sinus procedures. She continued at Hopkins as a postdoctoral fellow and research faculty to explore unsupervised learning in image-based tool tracking. Before her Ph.D., Ayushi received a Master of Science in Engineering degree in Computer Science at Hopkins working with Misha Kazhdan, and a Bachelor of Science degree in Computer Science and a Bachelor of Arts degree in Mathematics at Providence College.
The middle of the spring semester can usher in interview season for many students seeking internship or full-time employment opportunities. Mark Savage, Life Design Educator for Engineering Master's Students, will walk you through what to expect and how to ace the job interview. Time permitting, we may also discuss the elevator pitch in preparation for your upcoming Robotics Industry Day. Remember to convey some of those 800 skills that relate to the jobs you'll be discussing.
Update Jan 28: Industry Day will now be virtual, as we cannot predict the future COVID climate. To reduce Zoom fatigue, we are splitting the event into two half days: Monday, March 21, 1-4 pm, and Tuesday, March 22, 1-4 pm.
|1:00 pm||Welcome WSE: Larry Nagahara, Associate Dean for Research|
|1:05 pm||Introduction to LCSR: Russell H. Taylor, Director|
|1:25 pm||LCSR Education: Louis Whitcomb, Deputy Director|
|1:40 pm||Student Research Talk 1 – Max Li|
|1:50 pm||Student Research Talk 2 – Will Pryor|
|2:00 pm||Student Research Talk 3 – Neha Thomas|
|2:10 pm||Student Research Talk 4 – Filip Aronshtein and Peter Weiss|
|2:30 pm||JHTV – Seth Zonies|
|2:45 pm||Industry Talk – Gouthami Chintalapani, Siemens|
|3:05 pm||Industry Talk – Vinutha Kallem, Waymo|
|3:35 pm||New Faculty Talk – Axel Krieger|
|3:55 pm||New Faculty Talk – Mathias Unberath|
|4:15 pm||Closing: Russell H. Taylor, Director|
|Tuesday 3/22||Gather Town:|
|1:00-3:00pm||Poster and Demo Session|
|3:00-4:00pm||Student and Industry Resume Review|
The Laboratory for Computational Sensing and Robotics will highlight its elite robotics students and showcase cutting-edge research projects in areas that include Medical Robotics, Extreme Environments Robotics, Human-Machine Systems for Manufacturing, BioRobotics and more.
Robotics Industry Day will provide top companies and organizations in the private and public sectors with access to the LCSR’s forward-thinking, solution-driven students. The event will also serve as an informal opportunity to explore university-industry partnerships.
You will experience dynamic presentations and discussions, observe live demonstrations, and participate in speed networking sessions that afford you the opportunity to meet Johns Hopkins' most talented robotics students before they graduate.
Please contact Ashley Moriarty if you have any questions.
Locomotion in living systems and bio-inspired robots requires the generation and control of oscillatory motion. While a common method is to generate motion through modulation of time-dependent "clock" signals, in this talk we will motivate and study an alternative: generating oscillations through autonomous limit-cycle systems. Limit-cycle oscillators for robotics have many desirable properties, including adaptive behaviors, entrainment between oscillators, and potential simplification of motion control. I will present several examples of the generation and control of autonomous oscillatory motion in bio-inspired robotics. First, I will describe our recent work studying the dynamics of wingbeat oscillations in "asynchronous" insects and how we can build these behaviors into micro-aerial vehicles. In the second part of the talk, I will describe how limit-cycle gait generation in collective robots can enable swarms to synchronize their movement through contact, without communication. More broadly, I hope to motivate why we should look to autonomous dynamical systems for designing and controlling emergent locomotor behaviors in bio-inspired robotics.
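To make the limit-cycle idea concrete, here is a minimal sketch (not from the talk; the oscillator choice and parameters are illustrative) of a Hopf oscillator, a canonical limit-cycle system often used as a central pattern generator in bio-inspired robotics. Regardless of initial conditions, trajectories are attracted to a circular limit cycle, which is exactly the adaptivity property described above:

```python
import math

# Hopf oscillator:
#   dx/dt = (mu - r^2) * x - omega * y
#   dy/dt = (mu - r^2) * y + omega * x
# where r^2 = x^2 + y^2. Any nonzero initial state converges to a
# circular limit cycle of radius sqrt(mu), traversed at frequency omega.

def simulate_hopf(x, y, mu=1.0, omega=2.0 * math.pi, dt=1e-3, steps=20000):
    """Integrate the Hopf oscillator with forward Euler."""
    for _ in range(steps):
        r2 = x * x + y * y
        dx = (mu - r2) * x - omega * y
        dy = (mu - r2) * y + omega * x
        x += dx * dt
        y += dy * dt
    return x, y

# Start far from the limit cycle; the state is attracted to radius sqrt(mu).
xf, yf = simulate_hopf(0.1, 0.0, mu=1.0)
print(math.hypot(xf, yf))  # ~1.0
```

Because the attraction is built into the dynamics, a perturbation (say, a foot strike) is absorbed automatically, with no replanning of a clock signal, which is one reason limit-cycle generators can simplify motion control.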
Dr. Nick Gravish received his PhD from Georgia Tech where he used robots as physical models to motivate and study aspects of biological locomotion. During his post-doc Gravish worked in the microrobotics lab of Rob Wood at Harvard, where he gained expertise in designing and studying insect-scale robots. Gravish is currently an assistant professor at UC San Diego in the Mechanical and Aerospace Engineering department. His lab bridges the gap between bio-inspiration, biomechanics, and robotics, towards the development of new bio-inspired robotic technologies to improve the adaptability and resilience of mobile robots.
Designing robots for human interaction is a multifaceted challenge involving the robot’s intelligent behavior, physical form, mechanical structure, and interaction schema. Our lab develops and studies human-centered robots using a combination of methods from AI, Design, and Human-Computer Interaction. This talk focuses on three recent projects, two concerning the design of a new robot, and one that tackles designing robots that help human designers.
Guy Hoffman is Associate Professor and the Mills Family Faculty Fellow in the Sibley School of Mechanical and Aerospace Engineering at Cornell University. Prior to that, he was an Assistant Professor at IDC Herzliya and co-director of the IDC Media Innovation Lab. Hoffman holds a Ph.D. from MIT in the field of human-robot interaction. He heads the Human-Robot Collaboration and Companionship (HRC²) group, studying the algorithms, interaction schema, and designs enabling close interactions between people and personal robots in the workplace and at home. Among other projects, Hoffman developed the world's first human-robot joint theater performance and the first real-time improvising human-robot Jazz duet. His research papers have won several top academic awards, including Best Paper awards at robotics conferences in 2004, 2006, 2008, 2010, 2013, 2015, 2018, 2019, 2020, and 2021. His TEDx talk is one of the most viewed online talks on robotics, watched more than 3 million times.
Many successful approaches to robotic locomotion and manipulation rely on high-quality simulation tools. Many such approaches are "bottom-up" in a modeling sense, accounting for all internal forces and environmental interactions explicitly. These "bottom-up" models are used either offline (as in reinforcement learning), in real time, or both. However, various types of robots are getting smaller, softer, and more complex (e.g., bio-hybrid actuators). Some robots lean on low-precision manufacturing and fabrication techniques, and many robots are now being asked to operate in hard-to-characterize natural interfaces like the human body. Such attributes can render "bottom-up" simulators impractical for expected use cases on various research frontiers, such as micro-biomedical robots and soft robots deployed in uncharacterized environments. In this talk I will revisit the reconstruction equation, a result from the geometric mechanics literature that offers a "top-down" view of Lagrangian systems, permitting insights into generalizable system behaviors along a spectrum of friction-momentum dominance. I will show how these tools permit rapid modeling of high-complexity robots in their operating environment without requiring CAD models or any explicit forces. I will also discuss a related strength and weakness of the approach resulting from the use of symmetries. Surprisingly, results in simulation and hardware indicate that even with eight-jointed systems, useful behavioral models can be computed from tens of cycles of data. This suggests that high-degree-of-freedom robots can adjust and excel in situations where explicit force models are poorly understood. I will also briefly discuss a framework for robot recovery that leans on these tools, as well as a metric for a robot's ability to cover the local space of motions, computed on the Lie algebra of the position space.
The metric allows primitives to be valued for their contribution to the space of composed motions rather than just their individual qualities. Results include a Dubins car that learns how to turn left (with its steering wheel restricted to only turn right) in less than a second, as well as a robot made of tree branches that learns to walk around the laboratory with less than twelve minutes of experimental data. I hope to motivate the general use of structural reductions as we pursue modeling and control of the next generation of high-complexity robots.
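For readers unfamiliar with the reconstruction equation mentioned in the abstract, a standard sketch from the geometric mechanics literature follows (the notation is the field's convention, not specific to this talk): for a system with position `g` (a Lie group element) and internal shape variables `r`, the body velocity is determined by the shape velocity through a local connection.

```latex
% Kinematic (friction-dominated) form: body velocity \xi, an element of
% the Lie algebra of the position space, is a linear map of shape velocity,
% with \mathbf{A}(r) the local connection.
\xi = g^{-1}\dot{g} = -\mathbf{A}(r)\,\dot{r}

% Along the friction--momentum spectrum, a generalized momentum term
% with distribution \boldsymbol{\Gamma}(r) augments the kinematic part:
\xi = -\mathbf{A}(r)\,\dot{r} + \boldsymbol{\Gamma}(r)\,p
```

This is what makes the approach "top-down": the maps \(\mathbf{A}(r)\) and \(\boldsymbol{\Gamma}(r)\) can be fit directly from observed motion data, with no CAD model or explicit force accounting required.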
Dr. Brian Bittner received a B.S. from Carnegie Mellon and a Ph.D. from Michigan, where he researched the theory, simulation, and application of physics-informed machine learning for in situ behavior modeling and optimization. He has sought out cross-disciplinary research environments, collaborating with physicists, biologists, and mathematicians to bring insights from these fields into robotic systems. Bittner is currently a research scientist at the Applied Physics Lab, working on approaches to modeling and control for soft robots and underwater manipulation.