BEGIN:VCALENDAR VERSION:2.0 PRODID:-//128.220.36.25//NONSGML kigkonsult.se iCalcreator 2.26.9// CALSCALE:GREGORIAN METHOD:PUBLISH X-FROM-URL:https://lcsr.jhu.edu X-WR-TIMEZONE:America/New_York BEGIN:VTIMEZONE TZID:America/New_York X-LIC-LOCATION:America/New_York BEGIN:STANDARD DTSTART:20231105T020000 TZOFFSETFROM:-0400 TZOFFSETTO:-0500 RDATE:20241103T020000 TZNAME:EST END:STANDARD BEGIN:DAYLIGHT DTSTART:20240310T020000 TZOFFSETFROM:-0500 TZOFFSETTO:-0400 RDATE:20250309T020000 TZNAME:EDT END:DAYLIGHT END:VTIMEZONE BEGIN:VEVENT UID:ai1ec-11992@lcsr.jhu.edu DTSTAMP:20240319T071248Z CATEGORIES: CONTACT:Ashley Moriarty\; amoriar2@jhu.edu DESCRIPTION:Link for Live Seminar\nLink for Recorded seminars – 2020/2021 s chool year\n \nAbstract:\nTBA\n \nBiography:\nJeremy D. Brown\, the John C. Malone Assistant Professor in the Department of Mechanical Engineering\ , explores the interface between humans and robotics\, with a specific foc us on medical applications and haptic feedback. Brown is a graduate of the Atlanta University Center’s Dual Degree Engineering Program\, earning bac helor’s degrees in Applied Physics and Mechanical Engineering from Morehou se College and the University of Michigan\, respectively. He received his MSE and PhD in Mechanical Engineering at the University of Michigan\, wher e he worked on haptic feedback for upper-extremity prosthetic devices. Pri or to joining Johns Hopkins in 2017\, he was a postdoctoral research fello w at the University of Pennsylvania.\nChien-Ming Huang\, a John C. Malone Assistant Professor in the Department of Computer Science\, studies human- machine teaming and creates innovative\, intuitive\, personalized technolo gies to provide social\, physical\, and behavioral support for people with a variety of abilities and characteristics\, including children with auti sm spectrum disorders. Huang\, who joined the Hopkins faculty in 2017\, ha s received several awards\, including being named a prestigious John C. 
Ma lone Assistant Professor at JHU. In 2018\, he was selected for the Associa tion for Computing Machinery’s (ACM) Conference on Human Factors in Comput ing Systems (referred to as CHI) Early Career Symposium and its New Educat ors Workshop for the ACM’s Special Interest Group on Computer Science Educ ation. As a PhD candidate\, Huang received “Best Paper Runner-up” and “Be st Student Poster Runner-up” honors at the 2013 Robotics: Science and Syst ems (RSS) conference and was named a 2012 Human Robot Interaction (HRI) Pi oneer.\n DTSTART;TZID=America/New_York:20210324T120000 DTEND;TZID=America/New_York:20210324T130000 LOCATION:https://wse.zoom.us/s/94623801186 SEQUENCE:0 SUMMARY:LCSR Seminar: “Human Subjects Experiments in Robotics Research” URL:https://lcsr.jhu.edu/events/human-subjects-experiments-in-robotics-rese arch/ X-COST-TYPE:free X-ALT-DESC;FMTTYPE=text/html:\\n\\n
\\n\n
Abstract:
\nTBA
\n\n
Biography:
\nJeremy D. Brown\, the John C. Malone Assistant Professor in the Department of Mecha nical Engineering\, explores the interface between humans and robotics\, w ith a specific focus on medical applications and haptic feedback. Brown is a graduate of the Atlanta University Center’s Dual Degree Engineering Pro gram\, earning bachelor’s degrees in Applied Physics and Mechanical Engine ering from Morehouse College and the University of Michigan\, respectively . He received his MSE and PhD in Mechanical Engineering at the University of Michigan\, where he worked on haptic feedback for upper-extremity prost hetic devices. Prior to joining Johns Hopkins in 2017\, he was a postdocto ral research fellow at the University of Pennsylvania.
\nChien-Ming Huang\, a John C. Malone Assistant Professor in the Department of Computer Science\, studies human-machine teaming and creates innovative\, intuitiv e\, personalized technologies to provide social\, physical\, and behaviora l support for people with a variety of abilities and characteristics\, inc luding children with autism spectrum disorders. Huang\, who joined the Hop kins faculty in 2017\, has received several awards\, including being named a prestigious John C. Malone Assistant Professor at JHU. In 2018\, he was selected for the Association for Computing Machinery’s (ACM) Conference o n Human Factors in Computing Systems (referred to as CHI) Early Career Sym posium and its New Educators Workshop for the ACM’s Special Interest Group on Computer Science Education. As a PhD candidate\, Huang received “Best Paper Runner-up” and “Best Student Poster Runner-up” honors at the 2013 R obotics: Science and Systems (RSS) conference and was named a 2012 Human R obot Interaction (HRI) Pioneer.
\n\n END:VEVENT BEGIN:VEVENT UID:ai1ec-11864@lcsr.jhu.edu DTSTAMP:20240319T071248Z CATEGORIES: CONTACT:Ashley Moriarty\; amoriar2@jhu.edu DESCRIPTION:Link for Live Seminar\nLink for Recorded seminars – 2020/2021 s chool year\n \nAbstract:\nThe ability to efficiently move in complex envir onments is a fundamental property both for animals and for robots\, and th e problem of locomotion and movement control is an area in which neuroscie nce\, biomechanics\, and robotics can fruitfully interact. In this talk\, I will present how biorobots and numerical models can be used to explore t he interplay of the four main components underlying animal locomotion\, na mely central pattern generators (CPGs)\, reflexes\, descending modulation\ , and the musculoskeletal system. Going from lamprey to human locomotion\, I will present a series of models that tend to show that the respective r oles of these components have changed during evolution with a dominant rol e of CPGs in lamprey and salamander locomotion\, and a more important role for sensory feedback and descending modulation in human locomotion. I wil l also present a recent project showing how robotics can provide scientifi c tools for paleontology. Interesting properties for robot and lower-limb exoskeleton locomotion control will finally be discussed.\n \nBiography:\n Auke Ijspeert is a professor at EPFL (Lausanne\, Switzerland) since 2002\, and head of the Biorobotics Laboratory. He has a BSc/MSc in physics from EPFL (1995)\, a PhD in artificial intelligence from the University of Edin burgh (1999). He is an IEEE Fellow. His research interests are at the inte rsection between robotics\, computational neuroscience\, nonlinear dynamic al systems and applied machine learning. He is interested in using numeric al simulations and robots to gain a better understanding of animal locomot ion\, and in using inspiration from biology to design novel types of robot s and controllers. 
He is also investigating how to assist persons with lim ited mobility using exoskeletons and assistive furniture.\n DTSTART;TZID=America/New_York:20210331T120000 DTEND;TZID=America/New_York:20210331T130000 LOCATION:https://wse.zoom.us/s/94623801186 SEQUENCE:0 SUMMARY:LCSR Seminar: Auke Ijspeert “Investigating animal locomotion using biorobots” URL:https://lcsr.jhu.edu/events/auke-ijspeert/ X-COST-TYPE:free X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\n
\n
Abstract:
\nThe ability to efficiently move in complex environments is a fundame ntal property both for animals and for robots\, and the problem of locomot ion and movement control is an area in which neuroscience\, biomechanics\, and robotics can fruitfully interact. In this talk\, I will present how b iorobots and numerical models can be used to explore the interplay of the four main components underlying animal locomotion\, namely central pattern generators (CPGs)\, reflexes\, descending modulation\, and the musculoske letal system. Going from lamprey to human locomotion\, I will present a se ries of models that tend to show that the respective roles of these compon ents have changed during evolution with a dominant role of CPGs in lamprey and salamander locomotion\, and a more important role for sensory feedbac k and descending modulation in human locomotion. I will also present a rec ent project showing how robotics can provide scientific tools for paleonto logy. Interesting properties for robot and lower-limb exoskeleton locomoti on control will finally be discussed.
\n\n
Biography:
\nAuke Ijspeert is a professor at EPFL (Lausanne\, Switzer land) since 2002\, and head of the Biorobotics Laboratory. He has a BSc/MS c in physics from EPFL (1995)\, a PhD in artificial intelligence from the University of Edinburgh (1999). He is an IEEE Fellow. His research interes ts are at the intersection between robotics\, computational neuroscience\, nonlinear dynamical systems and applied machine learning. He is intereste d in using numerical simulations and robots to gain a better understanding of animal locomotion\, and in using inspiration from biology to design no vel types of robots and controllers. He is also investigating how to assis t persons with limited mobility using exoskeletons and assistive furniture .
\n\n END:VEVENT BEGIN:VEVENT UID:ai1ec-11865@lcsr.jhu.edu DTSTAMP:20240319T071248Z CATEGORIES: CONTACT:Ashley Moriarty\; amoriar2@jhu.edu DESCRIPTION:Link for Live Seminar\nLink for Recorded seminars – 2020/2021 s chool year\n \nAbstract:\nThis talk will describe how ground\, aerial\, an d marine robots have been used in disasters\, most recently the coronaviru s pandemic. During the pandemic so far\, 338 instances of robots in 48 cou ntries protecting healthcare workers from unnecessary exposure\, handling the surge in demand for clinical care\, preventing infections\, restoring economic activity\, and maintaining individual quality of life have been r eported. The uses span six sociotechnical work domains and 29 different u se cases representing different missions\, robot work envelopes\, and huma n-robot interaction dyads. The dataset also confirms a model of adoption of robotics technology for disasters. Adoption favors robots that maximize the suitability for established use cases while minimizing risk of malfun ction\, hidden workload costs\, or unintended consequences as measured by the NASA Technical Readiness Assessment metrics. Regulations do not presen t a major barrier but availability\, either in terms of inventory or prohi bitively high costs\, does. The model suggests that in order to be prepar ed for future events\, roboticists should partner with responders now\, in vestigate how to rapidly manufacture complex\, reliable robots on demand\, and conduct fundamental research on predicting and mitigating risk in ext reme or novel environments.\\\n \nBiography:\nDr. Robin R. Murphy is the R aytheon Professor of Computer Science and Engineering at Texas A&M Univers ity\, a TED speaker\, and an IEEE and ACM Fellow. 
She helped create the fields of disaster robotics and human-robot interaction\, deploying robots to 29 disasters in five countries including the 9/11 World Trade Center\, Fukushima\, the Syrian boat refugee crisis\, Hurricane Harvey\, and the Kilauea volcanic eruption. Murphy’s contributions to robotics have been recognized with the ACM Eugene L. Lawler Award for Humanitarian Contributions\, a US Air Force Exemplary Civilian Service Award medal\, the AUVSI Foundation’s Al Aube Award\, and the Motohiro Kisoi Award for Rescue Engineering Education (Japan). She has written the best-selling textbook Introduction to AI Robotics (2nd edition 2019) and the award-winning Disaster Robotics (2014)\, plus serving as an editor for the science fiction/science fact focus series for the journal Science Robotics. She co-chaired the White House OSTP and NSF workshops on robotics for infectious diseases and recently co-chaired the National Academy of Engineering/Computing Community Consortium workshop on robots for COVID-19.\n DTSTART;TZID=America/New_York:20210407T120000 DTEND;TZID=America/New_York:20210407T130000 LOCATION:https://wse.zoom.us/s/94623801186 SEQUENCE:0 SUMMARY:LCSR Seminar: Robin Murphy “From the World Trade Center to the COVID-19 Pandemic: Robots and Disasters” URL:https://lcsr.jhu.edu/events/robin-murphy/ X-COST-TYPE:free X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\n
\n
Abstract:
\nThis talk will describe how ground\, aerial\, and marine robots have been used in disasters\, most recently the coronavirus pandemic. During t he pandemic so far\, 338 instances of robots in 48 countries protecting he althcare workers from unnecessary exposure\, handling the surge in demand for clinical care\, preventing infections\, restoring economic activity\, and maintaining individual quality of life have been reported. The uses s pan six sociotechnical work domains and 29 different use cases representin g different missions\, robot work envelopes\, and human-robot interaction dyads. The dataset also confirms a model of adoption of robotics technolo gy for disasters. Adoption favors robots that maximize the suitability for established use cases while minimizing risk of malfunction\, hidden workl oad costs\, or unintended consequences as measured by the NASA Technical R eadiness Assessment metrics. Regulations do not present a major barrier bu t availability\, either in terms of inventory or prohibitively high costs\ , does. The model suggests that in order to be prepared for future events \, roboticists should partner with responders now\, investigate how to rap idly manufacture complex\, reliable robots on demand\, and conduct fundame ntal research on predicting and mitigating risk in extreme or novel enviro nments.\\
\n\n
Biography:
\nDr. Robin R. Murphy is the Raytheon Professor of Computer Science and Engineering at Texas A&M University\, a TED speaker\, and an IEEE and ACM Fellow. She helped create the fields of disaster robotics and human-robot interaction\, deploying robots to 29 disasters in five countries including the 9/11 World Trade Center\, Fukushima\, the Syrian boat refugee crisis\, Hurricane Harvey\, and the Kilauea volcanic eruption. Murphy’s contributions to robotics have been recognized with the ACM Eugene L. Lawler Award for Humanitarian Contributions\, a US Air Force Exemplary Civilian Service Award medal\, the AUVSI Foundation’s Al Aube Award\, and the Motohiro Kisoi Award for Rescue Engineering Education (Japan). She has written the best-selling textbook Introduction to AI Robotics (2nd edition 2019) and the award-winning Disaster Robotics (2014)\, plus serving as an editor for the science fiction/science fact focus series for the journal Science Robotics. She co-chaired the White House OSTP and NSF workshops on robotics for infectious diseases and recently co-chaired the National Academy of Engineering/Computing Community Consortium workshop on robots for COVID-19.
\n\n END:VEVENT BEGIN:VEVENT UID:ai1ec-11869@lcsr.jhu.edu DTSTAMP:20240319T071248Z CATEGORIES: CONTACT:Ashley Moriarty\; amoriar2@jhu.edu DESCRIPTION:Link for Live Seminar\nLink for Recorded seminars – 2020/2021 school year\n \nAbstract:\nWhen we think of animal behavior\, what typically comes to mind are actions – running\, eating\, swimming\, grooming\, flying\, singing\, resting. Behavior\, however\, is more than the catalogue of motions that an organism can perform. Animals organize their repertoire of actions into sequences and patterns whose underlying dynamics last much longer than any particular behavior. How an organism modulates these dynamics affects its success at accessing food\, reproducing\, and myriad other tasks essential for survival. Animals regulate these patterns of behavior via many interacting internal states (hunger\, reproductive cycle\, age\, etc.) that we cannot directly measure. Studying these hidden states’ dynamics\, accordingly\, has proven challenging due to a lack of measurement techniques and theoretical understanding. In this talk\, I will outline our efforts to uncover the latent dynamics that underlie long timescale structure in animal behavior. Looking across a variety of organisms\, we use a novel methodology to measure animals’ full behavioral repertoires to find the existence of a non-trivial form of long timescale dynamics that cannot be explained using standard mathematical frameworks. I will present how temporal coarse-graining can be used to understand how these dynamics are generated and how the resulting coarse-grained states can be related to the internal states governing behavior through a combination of machine learning techniques and dynamical systems modeling. Inferring these hidden dynamics presents a new opportunity to generate insights into the neural and physiological mechanisms that animals use to select actions.\nBiography:\nGordon J.
Berman\, Ph.D.\, Assistant Professor of Biology\, Emory University\; Co-Director\, Simons-Emory International Consortium on Motor Control\; Chair of Recruitment for the Emory Neuroscience Graduate Program. Our lab uses theoretical\, computational\, and data-driven approaches to gain quantitative insight into entire repertoires of animal behaviors\, aiming to make connections to the neurobiology\, genetics\, and evolutionary histories that underlie them. Get more information here.\n DTSTART;TZID=America/New_York:20210421T120000 DTEND;TZID=America/New_York:20210421T130000 LOCATION:https://wse.zoom.us/s/94623801186 SEQUENCE:0 SUMMARY:LCSR Seminar: Gordon Berman “Measuring behavior across scales” URL:https://lcsr.jhu.edu/events/gordon-berman/ X-COST-TYPE:free X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\n
\n
Abstract:
\nWhen we think of animal behavior\, what typically comes to mind are actions – running\, eating\, swimming\, grooming\, flying\, singing\, resting. Behavior\, however\, is more than the catalogue of motions that an organism can perform. Animals organize their repertoire of actions into sequences and patterns whose underlying dynamics last much longer than any particular behavior. How an organism modulates these dynamics affects its success at accessing food\, reproducing\, and myriad other tasks essential for survival. Animals regulate these patterns of behavior via many interacting internal states (hunger\, reproductive cycle\, age\, etc.) that we cannot directly measure. Studying these hidden states’ dynamics\, accordingly\, has proven challenging due to a lack of measurement techniques and theoretical understanding. In this talk\, I will outline our efforts to uncover the latent dynamics that underlie long timescale structure in animal behavior. Looking across a variety of organisms\, we use a novel methodology to measure animals’ full behavioral repertoires to find the existence of a non-trivial form of long timescale dynamics that cannot be explained using standard mathematical frameworks. I will present how temporal coarse-graining can be used to understand how these dynamics are generated and how the resulting coarse-grained states can be related to the internal states governing behavior through a combination of machine learning techniques and dynamical systems modeling. Inferring these hidden dynamics presents a new opportunity to generate insights into the neural and physiological mechanisms that animals use to select actions.
\nBiography:
\n
Gordon J. Berman\, Ph.D.\, Assistant Professor of Biology\, Emory University\; Co-Director\, Simons-Emory International Consortium on Motor Control\; Chair of Recruitment for the Emory Neuroscience Graduate Program. Our lab uses theoretical\, computational\, and data-driven approaches to gain quantitative insight into entire repertoires of animal behaviors\, aiming to make connections to the neurobiology\, genetics\, and evolutionary histories that underlie them. Get more information here.
\n\n END:VEVENT BEGIN:VEVENT UID:ai1ec-11871@lcsr.jhu.edu DTSTAMP:20240319T071248Z CATEGORIES: CONTACT:Ashley Moriarty\; amoriar2@jhu.edu DESCRIPTION:Link for Live Seminar\nLink for Recorded seminars – 2020/2021 s chool year\n \nAbstract:\nAutonomous systems offer the promise of providin g greater safety and access. However\, this positive impact will only be a chieved if the underlying algorithms that control such systems can be cert ified to behave robustly. This talk will describe a pair of techniques gro unded in infinite dimensional optimization to address this challenge.\nThe first technique\, which is called Reachability-based Trajectory Design\, constructs a parameterized representation of the forward reachable set\, w hich it then uses in concert with predictions to enable real-time\, certif ied\, collision checking. This approach\, which is guaranteed to generate not-at-fault behavior\, is demonstrated across a variety of different real -world platforms including ground vehicles\, manipulators\, and walking ro bots. The second technique is a modeling method that allows one to represe nt a nonlinear system as a linear system in the infinite-dimensional space of real-valued functions. By applying this modeling method\, one can empl oy well-understood linear model predictive control techniques to robustly control nonlinear systems. The utility of this approach is verified on a s oft robot control task.\n \nBiography:\nRam Vasudevan is an assistant prof essor in Mechanical Engineering and the Robotics Institute at the Universi ty of Michigan. He received a BS in Electrical Engineering and Computer Sc iences\, an MS degree in Electrical Engineering\, and a PhD in Electrical Engineering all from the University of California\, Berkeley. He is a reci pient of the NSF CAREER Award and the ONR Young Investigator Award. 
His work has received best paper awards at the IEEE Conference on Robotics and Automation\, the ASME Dynamic Systems and Control Conference\, and the IEEE OCEANS Conference\, and has been a finalist for best paper at Robotics: Science and Systems.\n DTSTART;TZID=America/New_York:20210428T120000 DTEND;TZID=America/New_York:20210428T130000 LOCATION:https://wse.zoom.us/s/94623801186 SEQUENCE:0 SUMMARY:LCSR Seminar: Ram Vasudevan “How I Learned to Stop Worrying and Start Loving Lifting to Infinite Dimensions” URL:https://lcsr.jhu.edu/events/ram-vasudevan/ X-COST-TYPE:free X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\n
\n
Abstract:
\nAutonomous systems offer the promise of providing greater safety and access. However\, this positive impact will only be achieved if the under lying algorithms that control such systems can be certified to behave robu stly. This talk will describe a pair of techniques grounded in infinite di mensional optimization to address this challenge.
\nThe first techni que\, which is called Reachability-based Trajectory Design\, constructs a parameterized representation of the forward reachable set\, which it then uses in concert with predictions to enable real-time\, certified\, collisi on checking. This approach\, which is guaranteed to generate not-at-fault behavior\, is demonstrated across a variety of different real-world platfo rms including ground vehicles\, manipulators\, and walking robots. The sec ond technique is a modeling method that allows one to represent a nonlinea r system as a linear system in the infinite-dimensional space of real-valu ed functions. By applying this modeling method\, one can employ well-under stood linear model predictive control techniques to robustly control nonli near systems. The utility of this approach is verified on a soft robot con trol task.
\n\n
Biography:
\nRam Vasudevan is an assistant professor in Mechanical Engineering and the Robotics Institute at the University of Michigan. He received a BS in Electrical Engineering and Computer Sciences\, an MS degree in Electrical Engineering\, and a PhD in Electrical Engineering\, all from the University of California\, Berkeley. He is a recipient of the NSF CAREER Award and the ONR Young Investigator Award. His work has received best paper awards at the IEEE Conference on Robotics and Automation\, the ASME Dynamic Systems and Control Conference\, and the IEEE OCEANS Conference\, and has been a finalist for best paper at Robotics: Science and Systems.
\n\n END:VEVENT BEGIN:VEVENT UID:ai1ec-12059@lcsr.jhu.edu DTSTAMP:20240319T071248Z CATEGORIES: CONTACT:https://wse.zoom.us/j/635091574 DESCRIPTION:The Spring 2021 Final Project Presentation Session for Computer Integrated Surgery II will be held Thursday\, May 6th from 18:00 to 21:00 Eastern Time via Zoom. This year\, we have 18 amazing projects supported by grad students\, faculty\, surgeons\, and companies. We are excited to invite you to join our event to see what the students have achieved with their effort over the past semester.\nConnection information\nJoin Zoom Meeting: https://wse.zoom.us/j/635091574\nMeeting ID: 635 091 574\nPassword: 001987\n \nAgenda\n– 18:00—18:10 Arrival and greetings\n– 18:10—18:30 1 minute teaser presentation\n– 18:30—20:30 Interactive session in breakout rooms\n– 20:30—20:40 Reconvene and announce finalists\n– 20:40—20:55 Presentations by finalists\n– 20:55—21:00 Announcement of Best Project winner DTSTART;TZID=America/New_York:20210506T180000 DTEND;TZID=America/New_York:20210506T210000 SEQUENCE:0 SUMMARY:Computer Integrated Surgery 2 Final Project Presentations URL:https://lcsr.jhu.edu/events/computer-integrated-surgery-2-final-project-presentations/ X-COST-TYPE:free X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\n
The Spring 2021 Final Project Presentation Session for Computer Integrated Surgery II will be held Thursday\, May 6th from 18:00 to 21:00 Eastern Time via Zoom. This year\, we have 18 amazing projects supported by grad students\, faculty\, surgeons\, and companies. We are excited to invite you to join our event to see what the students have achieved with their effort over the past semester.
\nConnection information
\nJoin Zoom Meeting: https://wse.zoom.us/j/635091574
\nMeeting ID: 635 091 574
\nPassword: 001987
\n\n
Agenda
\n– 18:00—18:10 Arrival and greetings
\n– 18:10—18:30 1 minute teaser presentation
\n– 18:30—20:30 Interactive session in breakout rooms
\n– 20:30—20:40 Reconvene and announce finalists
\n– 20:40—20:55 Presentations by finalists
\n– 20:55—21:00 Announcement of Best Project winner
\n END:VEVENT BEGIN:VEVENT UID:ai1ec-12064@lcsr.jhu.edu DTSTAMP:20240319T071248Z CATEGORIES: CONTACT:https://wse.zoom.us/j/98898500603?pwd=dlY2RHlUZXhFUXErK1J6bHcxVUNGd z09 DESCRIPTION:The Spring 2021 Final Project Presentation Session for Deep Lea rning will be held Tuesday\, May 11th from 9-12pm Eastern Time via Zoom. T his year\, we have many amazing projects to review and celebrate. We are e xcited to invite you to join our event to see what the students have achie ved with the effort of the past semester.\n\nWelcome (Mathias)\nDeep Learn ing at Intuitive Surgical (Omid Mohareri)\nPoster Pitch Presentations\, ea ch group has 1.5 min (this will take ~45 min)\nBreakouts (every group has their own breakout room\, this will last around 1h)\nAwards and Closing\n \n \nJoin Zoom Meeting\nhttps://wse.zoom.us/j/98898500603?pwd=dlY2RHlUZXhF UXErK1J6bHcxVUNGdz09 DTSTART;TZID=America/New_York:20210511T090000 DTEND;TZID=America/New_York:20210511T120000 SEQUENCE:0 SUMMARY:Deep Learning Final Project Presentations URL:https://lcsr.jhu.edu/events/deep-learning-final-project-presentations/ X-COST-TYPE:free X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\nThe Spring 20 21 Final Project Presentation Session for Deep Learning w ill be held Tuesday\, May 11th from 9-12pm Eas tern Time via Zoom. This year\, we have many amazing proj ects to review and celebrate. We are excited to invite you to join our eve nt to see what the students have achieved with the effort of the past seme ster.
\n\n
Join Zoom Meeting
\nhttps://wse.zoom.us/j/98898500603?pwd=dlY2RHlUZXhFUXErK1J6bHcxVUNGdz09
The closing ceremonies of the Computational Sensing and Medical Robotics (CSMR) REU are set to take place Friday\, August 6 from 9am until 3pm at this Zoom link. Seventeen undergraduate students from across the country are eager to share the culmination of their work from the past 10 weeks of this summer.
\nThe schedule for the day is listed below\, but each presentation is featured in more detail in the program. The event is open to the public and it is not necessary to RSVP.
\n
\n
\n
2021 REU Final Presentations
| Time | Presenter | Project Title | Faculty Mentor | Student/Postdoc/Research Engineer Mentors |
|  | Ben Frey | Deep Learning for Lung Ultrasound Imaging of COVID-19 Patients | Muyinatu Bell | Lingyi Zhao |
| 9:15 | Camryn Graham | Optimization of a Photoacoustic Technique to Differentiate Methylene Blue from Hemoglobin | Muyinatu Bell | Eduardo Gonzalez |
|  | Ariadna Rivera | Autonomous Quadcopter Flying and Swarming | Enrique Mallada | Yue Shen |
| 9:45 | Katie Sapozhnikov | Force Sensing Surgical Drill | Russell Taylor | Anna Goodridge |
|  | Savannah Hays | Evaluating SLANT Brain Segmentation using CALAMITI | Jerry Prince | Lianrui Zuo |
| 10:15 | Ammaar Firozi | Robustness of Deep Networks to Adversarial Attacks | René Vidal | Kaleab Kinfu\, Carolina Pacheco |
| 10:30 |  |  |  |  |
| 10:45 | Karina Soto Perez | Brain Tumor Segmentation in Structural MRIs | Archana Venkataraman | Naresh Nandakumar |
| 11:00 | Jonathan Mi | Design of a Small Legged Robot to Traverse a Field of Multiple Types of Large Obstacles | Chen Li | Ratan Othayoth\, Yaqing Wang\, Qihan Xuan |
| 11:15 | Arko Chatterjee | Telerobotic System for Satellite Servicing | Peter Kazanzides\, Louis Whitcomb\, Simon Leonard | Will Pryor |
| 11:30 | Lauren Peterson | Can a Fish Learn to Ride a Bicycle? | Noah Cowan | Yu Yang |
| 11:45 | Josiah Lozano | Robotic System for Mosquito Dissection | Russell Taylor\, Iulian Iordachita | Anna Goodridge |
| 12:00 | Zulekha Karachiwalla | Application of dual modality haptic feedback within surgical robotics | Jeremy Brown |  |
| 12:15 |  |  |  |  |
| 1:00 | James Campbell | Understanding Overparameterization from Symmetry | René Vidal | Salma Tarmoun |
| 1:15 | Evan Dramko | Establishing FDR Control For Genetic Marker Selection | Soledad Villar\, Jeremias Sulam | N/A |
| 1:30 | Chase Lahr | Modeling Dynamic Systems Through a Classroom Testbed | Jeremy Brown | Mohit Singhala |
| 1:45 | Anire Egbe | Object Discrimination Using Vibrotactile Feedback for Upper Limb Prosthetic Users | Jeremy Brown |  |
|  | Harrison Menkes | Measuring Proprioceptive Impairment in Stroke Survivors (Pre-Recorded) | Jeremy Brown |  |
| 2:15 | Deliberations |  |  |  |
| 3:00 | Winner Announced |  |  |  |
\n
Mark Savage is the Johns Hopkins Life Design Educator for Engineering Masters Students\, advising on all aspects of career development and the internship/job search\, with the Handshake Career Management System as a necessary tool. Look for weekly newsletters\, which will soon be emailed to Homewood WSE Masters Students on Sunday nights.
\n END:VEVENT BEGIN:VEVENT UID:ai1ec-12289@lcsr.jhu.edu DTSTAMP:20240319T071248Z CATEGORIES: CONTACT:Ashley Moriarty\; amoriar2@jhu.edu\; https://wse.zoom.us/s/94623801 186 DESCRIPTION:Link for Live Seminar\nLink for Recorded seminars – 2021/2022 s chool year\n \n \nAbstract:\nRobots currently have the capacity to help pe ople in several fields\, including health care\, assisted living\, and man ufacturing\, where the robots must share physical space and actively inter act with people in teams. The performance of these teams depends upon how fluently all team members can jointly perform their tasks. To be successfu l within a group\, a robot requires the ability to perceive other members’ actions\, model interaction dynamics\, predict future actions\, and adapt their plans accordingly in real-time. In the Collaborative Robotics Lab ( CRL)\, we develop novel perception\, prediction\, and planning algorithms for robots to fluently coordinate and collaborate with people in complex h uman environments. In this talk\, I will highlight various challenges of d eploying robots in real-world settings and present our recent work to tack le several of these challenges.\n \nBiography:\nTariq Iqbal is an Assistan t Professor of Systems Engineering and Computer Science (by courtesy) at t he University of Virginia (UVA). Prior to joining UVA\, he was a Postdocto ral Associate in the Computer Science and Artificial Intelligence Lab (CSA IL) at MIT. He received his Ph.D. in CS from the University of California San Diego (UCSD). Iqbal leads the Collaborative Robotics Lab (CRL)\, which focuses on building robotic systems that work alongside people in complex human environments\, such as factories\, hospitals\, and educational sett ings. His research group develops artificial intelligence\, computer visio n\, and machine learning algorithms to enable robots to solve problems in these domains. 
DTSTART;TZID=America/New_York:20210915T120000 DTEND;TZID=America/New_York:20210915T130000 LOCATION:https://wse.zoom.us/s/94623801186 SEQUENCE:0 SUMMARY:LCSR Seminar: Tariq Iqbal “Toward Fluent Collaboration in Human-Rob ot Teams” URL:https://lcsr.jhu.edu/events/tariq-iqbal/ X-COST-TYPE:free X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\n\n
\n
Abstract:
\nRobots currently have the capacity to help people in several fields\, including health care\, assisted living\, and manufacturing\, where the robots must share physical space and actively interact with people in teams. The performance of these teams depends upon how fluently all team members can jointly perform their tasks. To be successful within a group\, a robot requires the ability to perceive other members’ actions\, model interaction dynamics\, predict future actions\, and adapt its plans accordingly in real time. In the Collaborative Robotics Lab (CRL)\, we develop novel perception\, prediction\, and planning algorithms for robots to fluently coordinate and collaborate with people in complex human environments. In this talk\, I will highlight various challenges of deploying robots in real-world settings and present our recent work to tackle several of these challenges.
\n\n
Biography:
\nTariq Iqbal is an Assistant Professor of Systems Engineering and Computer Science (by courtesy) at the University of Virginia (UVA). Prior to joining UVA\, he was a Postdoctoral Associate in the Computer Science and Artificial Intelligence Lab (CSAIL) at MIT. He received his Ph.D. in CS from the University of California San Diego (UCSD). Iqbal leads the Collaborative Robotics Lab (CRL)\, which focuses on building robotic systems that work alongside people in complex human environments\, such as factories\, hospitals\, and educational settings. His research group develops artificial intelligence\, computer vision\, and machine learning algorithms to enable robots to solve problems in these domains.
\n END:VEVENT BEGIN:VEVENT UID:ai1ec-12292@lcsr.jhu.edu DTSTAMP:20240319T071248Z CATEGORIES: CONTACT:Ashley Moriarty\; amoriar2@jhu.edu\; https://wse.zoom.us/s/94623801 186 DESCRIPTION:Link for Live Seminar\nLink for Recorded seminars – 2021/2022 s chool year\n \nAbstract: We describe an approach for incorporating prior k nowledge into machine learning algorithms. We aim at applications in physi cs and signal processing in which we know that certain operations must be embedded into the algorithm. Any operation that allows computation of a gr adient or sub-gradient towards its inputs is suited for our framework. We derive a maximal error bound for deep nets that demonstrates that inclusio n of prior knowledge results in its reduction. Furthermore\, we show exper imentally that known operators reduce the number of free parameters. We ap ply this approach to various tasks ranging from computed tomography image reconstruction over vessel segmentation to the derivation of previously un known imaging algorithms. As such\, the concept is widely applicable for m any researchers in physics\, imaging and signal processing. We assume that our analysis will support further investigation of known operators in oth er fields of physics\, imaging and signal processing.\nShort Bio: Prof. Dr . Andreas Maier was born on 26th of November 1980 in Erlangen. He studied Computer Science\, graduated in 2005\, and received his PhD in 2009. From 2005 to 2009 he was working at the Pattern Recognition Lab at the Computer Science Department of the University of Erlangen-Nuremberg. His major res earch subject was medical signal processing in speech data. 
In this period \, he developed the first online speech intelligibility assessment tool – PEAKS – that has been used to analyze over 4.000 patient and control subje cts so far.\nFrom 2009 to 2010\, he started working on flat-panel C-arm CT as post-doctoral fellow at the Radiological Sciences Laboratory in the De partment of Radiology at the Stanford University. From 2011 to 2012 he joi ned Siemens Healthcare as innovation project manager and was responsible f or reconstruction topics in the Angiography and X-ray business unit.\nIn 2 012\, he returned the University of Erlangen-Nuremberg as head of the Medi cal Reconstruction Group at the Pattern Recognition lab. In 2015 he became professor and head of the Pattern Recognition Lab. Since 2016\, he is mem ber of the steering committee of the European Time Machine Consortium. In 2018\, he was awarded an ERC Synergy Grant “4D nanoscope”. Current resear ch interests focuses on medical imaging\, image and audio processing\, dig ital humanities\, and interpretable machine learning and the use of known operators.\n \n \n DTSTART;TZID=America/New_York:20210922T120000 DTEND;TZID=America/New_York:20210922T130000 LOCATION:https://wse.zoom.us/s/94623801186 SEQUENCE:0 SUMMARY:LCSR Seminar: Andreas Maier “Known Operator Learning – An Approach to Unite Machine Learning\, Signal Processing and Physics” URL:https://lcsr.jhu.edu/events/andreas-maier/ X-COST-TYPE:free X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\n\n
Abstract: We describe an approach for incorporating prior knowledge into machine learning algorithms. We aim at applications in physics and signal processing in which we know that certain operations must be embedded into the algorithm. Any operation that allows computation of a gradient or sub-gradient towards its inputs is suited for our framework. We derive a maximal error bound for deep nets that demonstrates that inclusion of prior knowledge results in its reduction. Furthermore\, we show experimentally that known operators reduce the number of free parameters. We apply this approach to various tasks\, ranging from computed tomography image reconstruction and vessel segmentation to the derivation of previously unknown imaging algorithms. As such\, the concept is widely applicable for many researchers in physics\, imaging\, and signal processing. We assume that our analysis will support further investigation of known operators in other fields of physics\, imaging\, and signal processing.
\nShort Bio: Prof. Dr. Andreas Maier was born on 26 November 1980 in Erlangen. He studied Computer Science\, graduated in 2005\, and received his PhD in 2009. From 2005 to 2009 he worked at the Pattern Recognition Lab at the Computer Science Department of the University of Erlangen-Nuremberg. His major research subject was medical signal processing in speech data. In this period\, he developed the first online speech intelligibility assessment tool – PEAKS – which has been used to analyze over 4\,000 patient and control subjects so far.
\nFrom 2009 to 2010\, he worked on flat-panel C-arm CT as a post-doctoral fellow at the Radiological Sciences Laboratory in the Department of Radiology at Stanford University. From 2011 to 2012 he joined Siemens Healthcare as an innovation project manager and was responsible for reconstruction topics in the Angiography and X-ray business unit.
\nIn 2012\, he returned to the University of Erlangen-Nuremberg as head of the Medical Reconstruction Group at the Pattern Recognition Lab. In 2015 he became professor and head of the Pattern Recognition Lab. Since 2016\, he has been a member of the steering committee of the European Time Machine Consortium. In 2018\, he was awarded an ERC Synergy Grant\, “4D nanoscope”. His current research interests focus on medical imaging\, image and audio processing\, digital humanities\, interpretable machine learning\, and the use of known operators.
\n
\n
\n END:VEVENT BEGIN:VEVENT UID:ai1ec-12297@lcsr.jhu.edu DTSTAMP:20240319T071248Z CATEGORIES: CONTACT:Ashley Moriarty\; amoriar2@jhu.edu\; https://wse.zoom.us/s/94623801 186 DESCRIPTION:Link for Live Seminar\nLink for Recorded seminars – 2021/2022 s chool year\n \nAbstract:\nThe unprecedented prediction accuracy of modern machine learning beckons for its application in a wide range of real-world applications\, including autonomous robots\, fine-grained computer vision \, scientific experimental design\, and many others. In order to create tr ustworthy AI systems\, we must safeguard machine learning methods from cat astrophic failures and provide calibrated uncertainty estimates. For examp le\, we must account for the uncertainty and guarantee the performance for safety-critical systems\, like autonomous driving and health care\, befor e deploying them in the real world. A key challenge in such real-world app lications is that the test cases are not well represented by the pre-colle cted training data. To properly leverage learning in such domains\, espec ially safety-critical ones\, we must go beyond the conventional learning p aradigm of maximizing average prediction accuracy with generalization guar antees that rely on strong distributional relationships between training a nd test examples.\n \nIn this talk\, I will describe a distributionally ro bust learning framework that offers accurate uncertainty estimation and ri gorous guarantees under data distribution shift. This framework yields app ropriately conservative yet still accurate predictions to guide real-world decision-making and is easily integrated with modern deep learning. I wi ll showcase the practicality of this framework in applications on agile ro botic control and computer vision. 
I will also introduce a survey of othe r real-world applications that would benefit from this framework for futur e work.\n \nBiography:\nAnqi (Angie) Liu is an Assistant Professor in the Department of Computer Science at the Whiting School of Engineering of the Johns Hopkins University. She is broadly interested in developing princip led machine learning algorithms for building more reliable\, trustworthy\, and human-compatible AI systems in the real world. Her research focuses o n enabling the machine learning algorithms to be robust to the changing da ta and environments\, to provide accurate and honest uncertainty estimates \, and to consider human preferences and values in the interaction. She is particularly interested in high-stake applications that concern the safet y and societal impact of AI. Previously\, she completed her postdoc in the Department of Computing and Mathematical Sciences of the California Insti tute of Technology. She obtained her Ph.D. from the Department of Computer Science of the University of Illinois at Chicago. She has been selected a s the 2020 EECS Rising Stars. Her publications appear in top machine learn ing conferences like NeurIPS\, ICML\, ICLR\, AAAI\, and AISTATS. DTSTART;TZID=America/New_York:20210929T120000 DTEND;TZID=America/New_York:20210929T130000 LOCATION:https://wse.zoom.us/s/94623801186 SEQUENCE:0 SUMMARY:LCSR Seminar: Angie Liu “Towards Trustworthy AI: Distributionally R obust Learning under Data Shift” URL:https://lcsr.jhu.edu/events/angie-liu/ X-COST-TYPE:free X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\n
\n
Abstract:
\nThe unprecedented prediction accuracy of modern machine learning beckons for its application in a wide range of real-world applications\, including autonomous robots\, fine-grained computer vision\, scientific experimental design\, and many others. In order to create trustworthy AI systems\, we must safeguard machine learning methods from catastrophic failures and provide calibrated uncertainty estimates. For example\, we must account for the uncertainty and guarantee the performance of safety-critical systems\, like autonomous driving and health care\, before deploying them in the real world. A key challenge in such real-world applications is that the test cases are not well represented by the pre-collected training data. To properly leverage learning in such domains\, especially safety-critical ones\, we must go beyond the conventional learning paradigm of maximizing average prediction accuracy with generalization guarantees that rely on strong distributional relationships between training and test examples.
\n\n
In this talk\, I will describe a distributionally robust learning framework that offers accurate uncertainty estimation and rigorous guarantees under data distribution shift. This framework yields appropriately conservative yet still accurate predictions to guide real-world decision-making and is easily integrated with modern deep learning. I will showcase the practicality of this framework in applications on agile robotic control and computer vision. I will also introduce a survey of other real-world applications that would benefit from this framework for future work.
\n\n
Biography:
\nAnqi (Angie) Liu is an Assistant Professor in the Department of Computer Science at the Whiting School of Engineering of Johns Hopkins University. She is broadly interested in developing principled machine learning algorithms for building more reliable\, trustworthy\, and human-compatible AI systems in the real world. Her research focuses on enabling machine learning algorithms to be robust to changing data and environments\, to provide accurate and honest uncertainty estimates\, and to consider human preferences and values in interaction. She is particularly interested in high-stakes applications that concern the safety and societal impact of AI. Previously\, she completed her postdoc in the Department of Computing and Mathematical Sciences at the California Institute of Technology. She obtained her Ph.D. from the Department of Computer Science at the University of Illinois at Chicago. She was selected as a 2020 EECS Rising Star. Her publications appear in top machine learning conferences such as NeurIPS\, ICML\, ICLR\, AAAI\, and AISTATS.
\n END:VEVENT BEGIN:VEVENT UID:ai1ec-12300@lcsr.jhu.edu DTSTAMP:20240319T071248Z CATEGORIES: CONTACT:Ashley Moriarty\; amoriar2@jhu.edu\; https://wse.zoom.us/s/94623801 186 DESCRIPTION:Link for Live Seminar\nLink for Recorded seminars – 2021/2022 s chool year\n \nAbstract:\nDeployment of autonomous vehicles (AV) on public roads promises increases in efficiency and safety\, and requires intellig ent situation awareness. We wish to have autonomous vehicles that can lear n to behave in safe and predictable ways\, and are capable of evaluating r isk\, understanding the intent of human drivers\, and adapting to differen t road situations. This talk describes an approach to learning and integra ting risk and behavior analysis in the control of autonomous vehicles. I w ill introduce Social Value Orientation (SVO)\, which captures how an agent ’s social preferences and cooperation affect interactions with other agent s by quantifying the degree of selfishness or altruism. SVO can be integra ted in control and decision making for AVs. I will provide recent examples of self-driving vehicles capable of adaptation.\n \nBiography:\nDaniela R us is the Andrew (1956) and Erna Viterbi Professor of Electrical Engineeri ng and Computer Science\, Director of the Computer Science and Artificial Intelligence Laboratory (CSAIL) at MIT\, and Deputy Dean of Research in th e Schwarzman College of Computing at MIT. Rus’ research interests are in r obotics and artificial intelligence. The key focus of her research is to d evelop the science and engineering of autonomy. Rus is a Class of 2002 Mac Arthur Fellow\, a fellow of ACM\, AAAI and IEEE\, a member of the National Academy of Engineering\, and of the American Academy of Arts and Sciences . She is a senior visiting fellow at MITRE Corporation. She is the recipie nt of the Engelberger Award for robotics. She earned her PhD in Computer S cience from Cornell University. 
DTSTART;TZID=America/New_York:20211006T120000 DTEND;TZID=America/New_York:20211006T130000 LOCATION:https://wse.zoom.us/s/94623801186 SEQUENCE:0 SUMMARY:LCSR Seminar: Daniela Rus “Learning Risk and Social Behavior in Mix ed Human-Autonomous Vehicles Systems” URL:https://lcsr.jhu.edu/events/daniela-rus/ X-COST-TYPE:free X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\n\n
Abstract:
\nDeployment of autonomous vehicles (AVs) on public roads promises increases in efficiency and safety\, and requires intelligent situation awareness. We wish to have autonomous vehicles that can learn to behave in safe and predictable ways\, and are capable of evaluating risk\, understanding the intent of human drivers\, and adapting to different road situations. This talk describes an approach to learning and integrating risk and behavior analysis in the control of autonomous vehicles. I will introduce Social Value Orientation (SVO)\, which captures how an agent’s social preferences and cooperation affect interactions with other agents by quantifying the degree of selfishness or altruism. SVO can be integrated in control and decision making for AVs. I will provide recent examples of self-driving vehicles capable of adaptation.
\n\n
Biography:
\nDaniela Rus is the Andrew (1956) and Erna Viterbi Professor of Electrical Engineering and Computer Science\, Director of the Computer Science and Artificial Intelligence Laboratory (CSAIL) at MIT\, and Deputy Dean of Research in the Schwarzman College of Computing at MIT. Rus’ research interests are in robotics and artificial intelligence. The key focus of her research is to develop the science and engineering of autonomy. Rus is a Class of 2002 MacArthur Fellow\, a fellow of ACM\, AAAI and IEEE\, a member of the National Academy of Engineering\, and of the American Academy of Arts and Sciences. She is a senior visiting fellow at MITRE Corporation. She is the recipient of the Engelberger Award for robotics. She earned her PhD in Computer Science from Cornell University.
\n END:VEVENT BEGIN:VEVENT UID:ai1ec-12307@lcsr.jhu.edu DTSTAMP:20240319T071248Z CATEGORIES: CONTACT:Ashley Moriarty\; amoriar2@jhu.edu\; https://wse.zoom.us/s/94623801 186 DESCRIPTION:Link for Live Seminar\nLink for Recorded seminars – 2021/2022 s chool year\n \n \nAbstract:\nDigital cameras have dramatically changed int erventional and surgical procedures. Modern operating rooms utilize a rang e of cameras to minimize invasiveness or provide vision beyond human capab ilities in magnification\, spectra or sensitivity. Such surgical cameras p rovide the most informative and rich signal from the surgical site contain ing information about activity and events as well as physiology and tissue function. This talk will highlight some of the opportunities for computer vision in surgical applications and the challenges in translation to clin ically usable systems.\n \nBio: \nDan Stoyanov is a Professor of Robot Vis ion in the Department of Computer Science at University College London\, D irector of the Wellcome/EPSRC Centre for Interventional and Surgical Scien ces (WEISS)\, a Royal Academy of Engineering Chair in Emerging Technologie s and Chief Scientist at Digital Surgery Ltd. Dan first studied electronic s and computer systems engineering at King’s College London before complet ing a PhD in Computer Science at Imperial College London where he speciali zed in medical image computing.\n DTSTART;TZID=America/New_York:20211013T120000 DTEND;TZID=America/New_York:20211013T130000 LOCATION:https://wse.zoom.us/s/94623801186 SEQUENCE:0 SUMMARY:LCSR Seminar: Danail Stoyanov “Towards Understanding Surgical Scene s Using Computer Vision” URL:https://lcsr.jhu.edu/events/danail-stoyanov/ X-COST-TYPE:free X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\n\n
\n
Abstract:
\nDigital cameras have dramatically changed interventional and surgical procedures. Modern operating rooms utilize a range of cameras to minimize invasiveness or provide vision beyond human capabilities in magnification\, spectra\, or sensitivity. Such surgical cameras provide the most informative and rich signal from the surgical site\, containing information about activity and events as well as physiology and tissue function. This talk will highlight some of the opportunities for computer vision in surgical applications and the challenges in translation to clinically usable systems.
\n\n
Bio:
\nDan Stoyanov is a Professor of Robot Vision in the Department of Computer Science at University College London\, Director of the Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS)\, a Royal Academy of Engineering Chair in Emerging Technologies\, and Chief Scientist at Digital Surgery Ltd. Dan first studied electronics and computer systems engineering at King’s College London before completing a PhD in Computer Science at Imperial College London\, where he specialized in medical image computing.
\n< p> \n END:VEVENT BEGIN:VEVENT UID:ai1ec-12310@lcsr.jhu.edu DTSTAMP:20240319T071248Z CATEGORIES: CONTACT:Ashley Moriarty\; amoriar2@jhu.edu\; https://wse.zoom.us/s/94623801 186 DESCRIPTION:Link for Live Seminar\nLink for Recorded seminars – 2021/2022 s chool year\n \n \nAbstract:\nI will discuss recent efforts at CinfonIA in enhancing interpretability in deep neural networks through the use of adve rsarial robustness and multimodal information.\n \nBiography:\nPablo Arbel áez received the PhD with honors in Applied Mathematics from the Universit é Paris Dauphine in 2005. He was Senior Research Scientist with the Comput er Vision Group at UC Berkeley from 2007 to 2014. He currently holds a fac ulty position in the Department of Biomedical Engineering at Universidad d e los Andes in Colombia. Since 2020\, he leads the Center for Research and Formation in Artificial Intelligence (CinfonIA) at UniAndes. His research interests are in computer vision and machine learning\, in which he has w orked on several problems\, including perceptual grouping\, object recogni tion and the analysis of biomedical images. DTSTART;TZID=America/New_York:20211020T120000 DTEND;TZID=America/New_York:20211020T130000 LOCATION:https://wse.zoom.us/s/94623801186 SEQUENCE:0 SUMMARY:LCSR Seminar: Pablo Arbelaez “Towards Robust Artificial Intelligenc e” URL:https://lcsr.jhu.edu/events/pablo-arbelaez/ X-COST-TYPE:free X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\n\n
\n
Abstract:
\nI will discuss recent efforts at CinfonIA in enhancing interpretability in deep neural networks through the use of adversarial robustness and multimodal information.
\n\n
Biography:
\nPablo Arbeláez received the PhD with honors in Applied Mathematics from the Université Paris Dauphine in 2005. He was a Senior Research Scientist with the Computer Vision Group at UC Berkeley from 2007 to 2014. He currently holds a faculty position in the Department of Biomedical Engineering at Universidad de los Andes in Colombia. Since 2020\, he has led the Center for Research and Formation in Artificial Intelligence (CinfonIA) at UniAndes. His research interests are in computer vision and machine learning\, in which he has worked on several problems\, including perceptual grouping\, object recognition\, and the analysis of biomedical images.
\n END:VEVENT BEGIN:VEVENT UID:ai1ec-12334@lcsr.jhu.edu DTSTAMP:20240319T071248Z CATEGORIES: CONTACT:Ashley Moriarty\; amoriar2@jhu.edu\; https://wse.zoom.us/s/94623801 186 DESCRIPTION:Link for Live Seminar\nLink for Recorded seminars – 2021/2022 s chool year\n \nLCSR Faculty “Interviewing for Jobs in Academia and Industr y”\n \nSpeakers: Louis Whitcomb\, Marin Kobilarov\, and the LCSR Faculty\n Abstract:\nThis LCSR professional development seminar will review the proc ess of interviewing for jobs in academia (e.g. faculty\, post-doc\, and sc ientist positions) and industry (e.g. engineering\, scientist\, and manage ment positions)\, and will provide tips and best-practices for successful interviewing.\n DTSTART;TZID=America/New_York:20211027T120000 DTEND;TZID=America/New_York:20211027T130000 LOCATION:https://wse.zoom.us/s/94623801186 SEQUENCE:0 SUMMARY:LCSR Seminar: LCSR Faculty “Interviewing for Jobs in Academia and I ndustry” URL:https://lcsr.jhu.edu/events/lcsr-faculty/ X-COST-TYPE:free X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\n\n
\n
Speakers: Louis Whitcomb\, Marin Kobilarov\, and the LCSR Faculty
\nAbstract:
\nThis LCSR professional development seminar will review the process of interviewing for jobs in academia (e.g. faculty\, post-doc\, and scientist positions) and industry (e.g. engineering\, scientist\, and management positions)\, and will provide tips and best practices for successful interviewing.
\n END:VEVENT BEGIN:VEVENT UID:ai1ec-12312@lcsr.jhu.edu DTSTAMP:20240319T071248Z CATEGORIES: CONTACT:Ashley Moriarty\; amoriar2@jhu.edu\; https://wse.zoom.us/s/94623801 186 DESCRIPTION:Link for Live Seminar\nLink for Recorded seminars – 2021/2022 s chool year\n \nAbstract:\nThere are more than 2 million industrial robots used worldwide every day\, and yet these devices represent one of the most fragmented technologies in the world. With more than 100 brands of indust rial robots\, each with their own proprietary\, difficult to learn softwar e and programming languages\, we are not seeing the exponential growth we expected out of robots. The computer industry faced a similar challenge in the early 1980s with the advent of the PC\, and computers did not see exp losive growth until a few key platforms emerged that focused on making com puters accessible to end users\, and run on a common software platform. At READY robotics\, we believe the same is true for robots\, and that is why we are building Forge/OS\, our “Windows” for the robotics space that lets every robot speak the same language and provide the same award winning us er experience to end-users. We will talk about how this technology came ab out\, how we think it can change the future\, and discuss the journey from the initial research performed at Johns Hopkins University up to today.\n \nBiography:\nKel Guerin has been working in the robotics space for more than 10 years\, focusing on the design and usability of a wide variety of robots\, including systems for space exploration\, deep mining\, surgery\, and industrial manufacturing. While obtaining his Ph.D. from Johns Hopkin s University (Defended 2016)\, Kel worked specifically on the challenge of making industrial robots more flexible and easy to use. The result was hi s award-winning Forge Operating System and easy-to-use programming interfa ce for industrial robots. 
Kel spun out his technology into READY Robotics\ , an industrial robotics start-up he co-founded in 2016. His work has been featured in the Wall Street Journal\, Forbes\, and READY’s products have been called “the Swiss Army knife of robots” by Inc. magazine. DTSTART;TZID=America/New_York:20211103T120000 DTEND;TZID=America/New_York:20211103T130000 LOCATION:https://wse.zoom.us/s/94623801186 SEQUENCE:0 SUMMARY:LCSR Seminar: Kel Guerin “Building an End-User Focused Operating Sy stem for Robotics” URL:https://lcsr.jhu.edu/events/kel-guerin/ X-COST-TYPE:free X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\n
\n
Abstract:
\nThere are more than 2 million industrial robots used worldwide every day\, and yet these devices represent one of the most fragmented technologies in the world. With more than 100 brands of industrial robots\, each with their own proprietary\, difficult-to-learn software and programming languages\, we are not seeing the exponential growth we expected out of robots. The computer industry faced a similar challenge in the early 1980s with the advent of the PC\, and computers did not see explosive growth until a few key platforms emerged that focused on making computers accessible to end users and running on a common software platform. At READY Robotics\, we believe the same is true for robots\, and that is why we are building Forge/OS\, our “Windows” for the robotics space\, which lets every robot speak the same language and provides the same award-winning user experience to end users. We will talk about how this technology came about\, how we think it can change the future\, and discuss the journey from the initial research performed at Johns Hopkins University up to today.
\n\n
B iography:
\nKel Guerin has been working in the robotics space for more than 10 years\, focusing on the design and usability of a wide variety of robots\, including systems for space exploration\, deep mining\, surgery\, and industrial manufacturing. While obtaining his Ph.D. from Johns Hopkins University (defended 2016)\, Kel worked specifically on the challenge of making industrial robots more flexible and easy to use. The result was his award-winning Forge Operating System and easy-to-use programming interface for industrial robots. Kel spun out his technology into READY Robotics\, an industrial robotics start-up he co-founded in 2016. His work has been featured in the Wall Street Journal and Forbes\, and READY’s products have been called “the Swiss Army knife of robots” by Inc. magazine.
\n END:VEVENT BEGIN:VEVENT UID:ai1ec-12322@lcsr.jhu.edu DTSTAMP:20240319T071248Z CATEGORIES: CONTACT:Ashley Moriarty\; amoriar2@jhu.edu\; https://wse.zoom.us/s/94623801 186 DESCRIPTION:Link for Live Seminar\nLink for Recorded seminars – 2021/2022 s chool year\n \n \nAbstract:\nTBA\n \nBiography:\nTBA DTSTART;TZID=America/New_York:20211110T120000 DTEND;TZID=America/New_York:20211110T130000 LOCATION:https://wse.zoom.us/s/94623801186 SEQUENCE:0 SUMMARY:LCSR Seminar: Maya Cakmak URL:https://lcsr.jhu.edu/events/maya-cakmak/ X-COST-TYPE:free X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\n\n
\n
Abstract:
\nTBA
\n\n
Biography:
\nTBA
\n END:VEVENT BEGIN:VEVENT UID:ai1ec-12324@lcsr.jhu.edu DTSTAMP:20240319T071248Z CATEGORIES: CONTACT:Ashley Moriarty\; amoriar2@jhu.edu\; https://wse.zoom.us/s/94623801 186 DESCRIPTION:Link for Live Seminar\nLink for Recorded seminars – 2021/2022 s chool year\n \nAbstract:\nRobot-assisted surgery (RAS) has gained momentum over the last few decades with nearly 1\,200\,000 RAS procedures performe d in 2019 alone using the da Vinci Surgical System\, the most widely used surgical robotics platform. The current state-of-the-art surgical robotic systems use only a single endoscope to view the surgical field. In this ta lk\, we present a novel design of an additional “pickup” camera that can b e integrated into the da Vinci Surgical System. We then explore the benefi ts of our design for human-robot interaction (HRI) and autonomy in RAS. On the HRI side\, we show how this “pickup” camera improves depth perception as well as how its additional view can lead to better surgical training. On the autonomy side\, we show how automating the motion of this camera pr ovides better visualization of the surgical scene. Finally\, we show how t his automation work inspires the design of novel execution models of the a utomation of surgical subtasks\, leading to superhuman performance.\n \nBi ography:\nAlaa Eldin Abdelaal is a PhD candidate at the Robotics and Contr ol Laboratory at the University of British Columbia and a visiting graduat e scholar at the Computational Interaction and Robotics Lab at Johns Hopki ns University. He holds a B.Sc. in Computer and Systems Engineering from M ansoura University in Egypt and a M.Sc. in Computing Science from Simon Fr aser University in Canada. His research interests are at the intersection of autonomy and human-robot interaction for human skill augmentation and d ecision support with application to surgical robotics. His work is co-advi sed by Dr. Tim Salcudean and Dr. Gregory Hager. 
His research has been reco gnized with the Best Bench-to-Bedside Paper Award at the International Con ference on Information Processing in Computer-Assisted Interventions (IPCA I) 2019. He is the recipient of the Vanier Canada Graduate Scholarship\, t he most prestigious scholarship for PhD students in Canada. DTSTART;TZID=America/New_York:20211117T120000 DTEND;TZID=America/New_York:20211117T130000 LOCATION:https://wse.zoom.us/s/94623801186 SEQUENCE:0 SUMMARY:LCSR Seminar: Alaa Eldin Abdelaal “An “Additional View” on Human-Ro bot Interaction and Autonomy in Robot-Assisted Surgery” URL:https://lcsr.jhu.edu/events/alaa-eldin-abdelaal/ X-COST-TYPE:free X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\n\n
Abstract:
\nRobot-assisted surgery (RAS) has gained momentum over the last few decades\, with nearly 1\,200\,000 RAS procedures performed in 2019 alone using the da Vinci Surgical System\, the most widely used surgical robotics platform. The current state-of-the-art surgical robotic systems use only a single endoscope to view the surgical field. In this talk\, we present a novel design of an additional “pickup” camera that can be integrated into the da Vinci Surgical System. We then explore the benefits of our design for human-robot interaction (HRI) and autonomy in RAS. On the HRI side\, we show how this “pickup” camera improves depth perception as well as how its additional view can lead to better surgical training. On the autonomy side\, we show how automating the motion of this camera provides better visualization of the surgical scene. Finally\, we show how this automation work inspires the design of novel execution models of the automation of surgical subtasks\, leading to superhuman performance.
\n\n
Biography:
\nAlaa Eldin Abdelaal is a PhD candidate at the Robotics and Control Laboratory at the University of British Columbia and a visiting graduate scholar at the Computational Interaction and Robotics Lab at Johns Hopkins University. He holds a B.Sc. in Computer and Systems Engineering from Mansoura University in Egypt and an M.Sc. in Computing Science from Simon Fraser University in Canada. His research interests are at the intersection of autonomy and human-robot interaction for human skill augmentation and decision support with application to surgical robotics. His work is co-advised by Dr. Tim Salcudean and Dr. Gregory Hager. His research has been recognized with the Best Bench-to-Bedside Paper Award at the International Conference on Information Processing in Computer-Assisted Interventions (IPCAI) 2019. He is the recipient of the Vanier Canada Graduate Scholarship\, the most prestigious scholarship for PhD students in Canada.
\n END:VEVENT BEGIN:VEVENT UID:ai1ec-12339@lcsr.jhu.edu DTSTAMP:20240319T071248Z CATEGORIES: CONTACT:Ashley Moriarty\; amoriar2@jhu.edu\; https://wse.zoom.us/s/94623801 186 DESCRIPTION:Link for Live Seminar\nLink for Recorded seminars – 2021/2022 s chool year\n \n \nAbstract:\nIn this seminar\, we will have a panel of thr ee LCSR faculty\, Dr. Peter Kazanzides\, Dr. Marin Kobilarov\, and Dr. Axe l Krieger discussing their experience in commercializing robotic research through licensing and start-up. The panel will include questions and answe r sessions with the audience.\n DTSTART;TZID=America/New_York:20211201T120000 DTEND;TZID=America/New_York:20211201T130000 LOCATION:https://wse.zoom.us/s/94623801186 SEQUENCE:0 SUMMARY:LCSR Seminar: LCSR Faculty “Panel on commercialization of robotics research” URL:https://lcsr.jhu.edu/events/tba-3/ X-COST-TYPE:free X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\n\n
\n
Abstract:
\nIn this seminar\, we will have a panel of three LCSR faculty\, Dr. Peter Kazanzides\, Dr. Marin Kobilarov\, and Dr. Axel Krieger\, discussing their experience in commercializing robotics research through licensing and start-ups. The panel will include question-and-answer sessions with the audience.
\n\n END:VEVENT BEGIN:VEVENT UID:ai1ec-12597@lcsr.jhu.edu DTSTAMP:20240319T071248Z CATEGORIES: CONTACT:Ashley Moriarty\; amoriar2@jhu.edu\; https://wse.zoom.us/s/94623801 186 DESCRIPTION:Link for Live Seminar\nLink for Recorded seminars – 2021/2022 s chool year\n \nAbstract:\nAn enduring goal of AI and robotics has been to build a robot capable of robustly performing a wide variety of tasks in a wide variety of environments\; not by sequentially being programmed (or ta ught) to perform one task in one environment at a time\, but rather by int elligently choosing appropriate actions for whatever task and\nenvironment it is facing. This goal remains a challenge. In this talk I’ll describe r ecent work in our lab aimed at the goal of general-purpose robot manipulat ion by integrating task-and-motion planning with various forms of model le arning. In particular\, I’ll describe approaches to manipulating objects w ithout prior shape models\, to acquiring composable sensorimotor skills\, and to exploiting past experience for more efficient planning.\n \nBiograp hy:\nTomas Lozano-Perez is professor in EECS at MIT\, and a member of CSAI L. He was a recipient of the 2011 IEEE Robotics Pioneer Award and a co-rec ipient of the 2021 IEEE Robotics and Automation Technical Field Award. He is a Fellow of the AAAI\, ACM\, and\nIEEE. DTSTART;TZID=America/New_York:20220126T120000 DTEND;TZID=America/New_York:20220126T130000 LOCATION:https://wse.zoom.us/s/94623801186 SEQUENCE:0 SUMMARY:LCSR Seminar: Tomas Lozano-Perez “Generalization in Planning and Le arning for Robotic Manipulation” URL:https://lcsr.jhu.edu/events/tomas-lozano-perez/ X-COST-TYPE:free X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\n
\n
Abstract:
\n
An enduring goal of AI and robotics has been to build a robot capable of robustly performing a wide variety of tasks in a wide variety of environments\; not by sequentially being programmed (or taught) to perform one task in one environment at a time\, but rather by intelligently choosing appropriate actions for whatever task and environment it is facing. This goal remains a challenge. In this talk I’ll describe recent work in our lab aimed at the goal of general-purpose robot manipulation by integrating task-and-motion planning with various forms of model learning. In particular\, I’ll describe approaches to manipulating objects without prior shape models\, to acquiring composable sensorimotor skills\, and to exploiting past experience for more efficient planning.
\n
Biography:
\nTomas Lozano-Perez is a professor in EECS at MIT\, and a member of CSAIL. He was a recipient of the 2011 IEEE Robotics Pioneer Award and a co-recipient of the 2021 IEEE Robotics and Automation Technical Field Award. He is a Fellow of the AAAI\, ACM\, and IEEE.
\n
Abstract:
\nWhile many robots are currently deployable in factories\, warehouses\, and homes\, their autonomous deployment requires either the deployment environments to be highly controlled\, or the deployment to only entail executing one single preprogrammed task. These deployable robots do not learn to address changes and to improve performance. For uncontrolled environments and for novel tasks\, current robots must seek help from highly skilled robot operators for teleoperated (not autonomous) deployment.
\n\n
In this talk\, I will present two approaches to removing these limitati ons by learning to enable autonomous deployment in the context of mobile r obot navigation\, a common core capability for deployable robots: (1) Adap tive Planner Parameter Learning utilizes existing motion planners\, fine-t unes these systems using simple interactions with non-expert users before autonomous deployment\, adapts to different deployment environments\, and produces robust autonomous navigation\; (2) Learning from Hallucination en ables agile navigation in highly-constrained deployment environments by ex ploring in a completely safe training environment and creating synthetic o bstacle configurations to learn from. Building on robust autonomous naviga tion\, I will discuss my vision toward a hardened\, reliable\, and resilie nt robot fleet which is also task-efficient and continually learns from ea ch other and from humans.
\n\n
Biography:
\nXuesu Xiao is an incoming Assistant Professor in the Department of Computer Science at George Mason University starting Fall 2022. Currently\, he is a roboticist on The Everyday Robot Project at X\, The Moonshot Factory\, and a research affiliate in the Department of Computer Science at The University of Texas at Austin. Dr. Xiao’s research focuses on field robotics\, motion planning\, and machine learning. He develops highly capable and intelligent mobile robots that are robustly deployable in the real world with minimal human supervision. Dr. Xiao received his Ph.D. in Computer Science from Texas A&M University in 2019\, his Master of Science in Mechanical Engineering from Carnegie Mellon University in 2015\, and dual Bachelor of Engineering degrees in Mechatronics Engineering from Tongji University and FH Aachen University of Applied Sciences in 2013.
\n END:VEVENT BEGIN:VEVENT UID:ai1ec-12615@lcsr.jhu.edu DTSTAMP:20240319T071248Z CATEGORIES: CONTACT:Ashley Moriarty\; amoriar2@jhu.edu\; https://wse.zoom.us/s/94623801 186 DESCRIPTION:Link for Live Seminar\nLink for Recorded seminars – 2021/2022 s chool year\n \nAbstract:\nThis presentation overviews a number of the proj ects related to image-guided intervention that have taken place in my lab at the Robarts Research Institute at Western University in recent years. P rojects cover applications in Image-guided Neurosurgery\, Cardiac surgery\ , as well as the role of simulation phantoms and technologies as motion m agnification and mixed reality in image-guided interventions.\n \nBiograp hy:\nDr. Terry Peters is a Scientist in the Imaging Research Laboratories at the Robarts Research Institute\, London\, ON\, Canada\, and is Professo r Emeritus in the Departments of Medical Imaging and Medical Biophysics\, and the School of Biomedical Engineering\, at Western University. He obtai ned his PhD in Electrical Engineering at the University of Canterbury in C hristchurch NZ\, in the field image reconstruction for CT in 1974\, and f ollowing some time as a Medical Physicist at the Christchurch Hospital\, joined the Montreal Neurological Institute at McGill University as a rese arch scientist in 1978. In 1997 he joined the Imaging Research Labs at th e Robarts Research Institute at Western University in London Canada\, wher e he expanded his research focus to encompass image-guided procedures in m ultiple organ systems. He has authored over 350 peer-reviewed papers\, bo oks and book chapters\, and has mentored over 100 trainees. Dr Peters is a Fellow of several academic and professional societies including the IEE E\, the MICCAI Society\, the Royal Society of Canada. 
DTSTART;TZID=America/New_York:20220209T120000 DTEND;TZID=America/New_York:20220209T130000 LOCATION:https://wse.zoom.us/s/94623801186 SEQUENCE:0 SUMMARY:LCSR Seminar: Terry Peters “A journey in Image-guided Intervention” URL:https://lcsr.jhu.edu/events/terry-peters/ X-COST-TYPE:free X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\n\n
Abstract:
\nThis presentation overviews a number of the projects related to image-guided intervention that have taken place in my lab at the Robarts Research Institute at Western University in recent years. Projects cover applications in image-guided neurosurgery and cardiac surgery\, as well as the role of simulation phantoms and of technologies such as motion magnification and mixed reality in image-guided interventions.
\n\n
Biography:
\nDr. Terry Peters is a Scientist in the Imaging Research Laboratories at the Robarts Research Institute\, London\, ON\, Canada\, and is Professor Emeritus in the Departments of Medical Imaging and Medical Biophysics\, and the School of Biomedical Engineering\, at Western University. He obtained his PhD in Electrical Engineering at the University of Canterbury in Christchurch\, NZ\, in the field of image reconstruction for CT in 1974\, and following some time as a Medical Physicist at Christchurch Hospital\, joined the Montreal Neurological Institute at McGill University as a research scientist in 1978. In 1997 he joined the Imaging Research Labs at the Robarts Research Institute at Western University in London\, Canada\, where he expanded his research focus to encompass image-guided procedures in multiple organ systems. He has authored over 350 peer-reviewed papers\, books\, and book chapters\, and has mentored over 100 trainees. Dr. Peters is a Fellow of several academic and professional societies\, including the IEEE\, the MICCAI Society\, and the Royal Society of Canada.
\n END:VEVENT BEGIN:VEVENT UID:ai1ec-12619@lcsr.jhu.edu DTSTAMP:20240319T071248Z CATEGORIES: CONTACT:Ashley Moriarty\; amoriar2@jhu.edu\; https://wse.zoom.us/s/94623801 186 DESCRIPTION:Link for Live Seminar\nLink for Recorded seminars – 2021/2022 s chool year\n \nHow many skills do think you have? Mark Savage\, Life Desi gn Educator for Engineering Masters Students will explain how the truth ma y far exceed your estimate. Knowing\, understanding\, and communicating your major skills will prove useful as you pursue jobs and internships.\n DTSTART;TZID=America/New_York:20220216T120000 DTEND;TZID=America/New_York:20220216T130000 LOCATION:https://wse.zoom.us/s/94623801186 SEQUENCE:0 SUMMARY:LCSR Seminar: Mark Savage (Life Design Educator) “Skills” URL:https://lcsr.jhu.edu/events/tbd-2/ X-COST-TYPE:free X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\n\n
How many skills do you think you have? Mark Savage\, Life Design Educator for Engineering Master’s Students\, will explain how the truth may far exceed your estimate. Knowing\, understanding\, and communicating your major skills will prove useful as you pursue jobs and internships.
\n\n END:VEVENT BEGIN:VEVENT UID:ai1ec-12624@lcsr.jhu.edu DTSTAMP:20240319T071248Z CATEGORIES: CONTACT:Ashley Moriarty\; amoriar2@jhu.edu\; https://wse.zoom.us/s/94623801 186 DESCRIPTION:Link for Live Seminar\nLink for Recorded seminars – 2021/2022 s chool year\n \nAbstract:\nTBA\n \nBiography:\nDr. Ghazi MD\, FEBU\, MHPE\, received his medical education from Cairo University\, Egypt in 2000\, wh ere he also completed his Urology residency 2001-2005. He completed a seri es of fellowships in minimal invasive Urological surgery\, in Paris and Au stria (2009-2011)\, where he received accreditation from the European Boar d of Urology. He completed an Endourology and robotic surgery fellowship a t the University of Rochester Medical Center\, New York (2011-2013)\, afte r which was appointed Assistant professor of Urology at the University of Rochester (2013).\nDr. Ghazi specializes in the diagnosis and minimal inva sive treatment of urological cancers as well as complex stone disease. In addition he perused research grants in education\, simulation research and surgical training. To enhance his educational background\, he was awarded the George Corner Deans Teaching fellowship (2014-2016)\, completed the H arvard Macy Institute program for Educators in Health Professions in 2016 and a Masters in Health Professions Education program at the Warner School of Education\, University of Rochester (2016-2020). He is currently enrol led in a 2-year Senior Leadership Education and Development Program at the University of Rochester. DTSTART;TZID=America/New_York:20220223T120000 DTEND;TZID=America/New_York:20220223T130000 LOCATION:https://wse.zoom.us/s/94623801186 SEQUENCE:0 SUMMARY:LCSR Seminar: Ahmed Ghazi “Enhancing Surgical Robotic Innovations t hrough the integration of Novel Simulation Technologies” URL:https://lcsr.jhu.edu/events/ahmed-ghazi/ X-COST-TYPE:free X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\n
\n
Abstract:
\nTBA
\n\n
Biography:
\nDr. Ghazi\, MD\, FEBU\, MHPE\, received his medical education from Cairo University\, Egypt in 2000\, where he also completed his Urology residency from 2001-2005. He completed a series of fellowships in minimally invasive urological surgery in Paris and Austria (2009-2011)\, where he received accreditation from the European Board of Urology. He completed an Endourology and robotic surgery fellowship at the University of Rochester Medical Center\, New York (2011-2013)\, after which he was appointed Assistant Professor of Urology at the University of Rochester (2013).
\nDr. Ghazi specializes in the diagnosis and minimally invasive treatment of urological cancers as well as complex stone disease. In addition\, he pursued research grants in education\, simulation research\, and surgical training. To enhance his educational background\, he was awarded the George Corner Dean’s Teaching Fellowship (2014-2016)\, completed the Harvard Macy Institute program for Educators in Health Professions in 2016\, and completed a Master’s in Health Professions Education program at the Warner School of Education\, University of Rochester (2016-2020). He is currently enrolled in a 2-year Senior Leadership Education and Development Program at the University of Rochester.
\n END:VEVENT BEGIN:VEVENT UID:ai1ec-12625@lcsr.jhu.edu DTSTAMP:20240319T071248Z CATEGORIES: CONTACT:Ashley Moriarty\; amoriar2@jhu.edu\; https://wse.zoom.us/s/94623801 186 DESCRIPTION:Link for Live Seminar\nLink for Recorded seminars – 2021/2022 s chool year\n \nAbstract:\nThe pandemic exacerbated inequities faced by peo ple with disabilities and healthcare workers — both are at high risk of ad verse physical and mental health outcomes. Robots alone are not going to f ix these major societal problems\; however\, our work explores how we can design technology to lessen the burden of systemic ableism and healthcare system stress. I will discuss several of our recent projects in acute care and community health contexts. In acute care\, we are building hospital-b ased robots to support the clinical workforce\, to support item delivery\, telemedicine\, and decision support. In community health\, we are creatin g interactive and adaptive systems that aim to extend the reach of cogniti ve neurorehabilitative therapies\, provide respite to overburdened caregiv ers\, and explore how technology might serve as a means for mediating posi tive interactions during hardship. We focus on building robots that can ad aptively team with and longitudinally learn from people\, and personalize and tailor their behavior.\n \nBiography:\nDr. Laurel Riek is a professor in Computer Science and Engineering at the University of California\, San Diego\, with a joint appointment in the Department of Emergency Medicine\, and is affiliated with the Contextual Robotics Institute and Design Lab. Dr. Riek directs the Healthcare Robotics Lab and leads research in human-r obot teaming and health informatics\, with a focus on autonomous robots th at work proximately with people. Riek’s current research interests include long term learning\, robot perception\, and personalization\; with applic ations in acute care\, neurorehabilitation\, and home health. Dr. Riek rec eived a Ph.D. 
in Computer Science from the University of Cambridge\, and B .S. in Logic and Computation from Carnegie Mellon. Riek served as a Senior Artificial Intelligence Engineer and Roboticist at The MITRE Corporation from 2000-2008\, working on learning and vision systems for robots\, and h eld the Clare Boothe Luce chair in Computer Science and Engineering at the University of Notre Dame from 2011-2016. Dr. Riek has received the NSF CA REER Award\, AFOSR Young Investigator Award\, Qualcomm Research Award\, an d was named one of ASEE’s 20 Faculty Under 40. Dr. Riek is the HRI 2023 Ge neral Co-Chair and served as the Program Co-Chair for HRI 2020\, and serve s on the editorial boards of T-RO and THRI. DTSTART;TZID=America/New_York:20220302T120000 DTEND;TZID=America/New_York:20220302T130000 LOCATION:https://wse.zoom.us/s/94623801186 SEQUENCE:0 SUMMARY:LCSR Seminar: Laurel Riek “Robots in Hospitals and in the Community : Supporting Wellbeing and Furthering Health Equity” URL:https://lcsr.jhu.edu/events/laurel-riek/ X-COST-TYPE:free X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\n\n
Abstract:
\nThe pandemic exacerbated inequities faced by people with disabilities and healthcare workers — both are at high risk of adverse physical and mental health outcomes. Robots alone are not going to fix these major societal problems\; however\, our work explores how we can design technology to lessen the burden of systemic ableism and healthcare system stress. I will discuss several of our recent projects in acute care and community health contexts. In acute care\, we are building hospital-based robots to support the clinical workforce through item delivery\, telemedicine\, and decision support. In community health\, we are creating interactive and adaptive systems that aim to extend the reach of cognitive neurorehabilitative therapies\, provide respite to overburdened caregivers\, and explore how technology might serve as a means for mediating positive interactions during hardship. We focus on building robots that can adaptively team with and longitudinally learn from people\, and personalize and tailor their behavior.
\n\n
Biography:
\nDr. Laurel Riek is a professor in Computer Science and Engineering at the University of California\, San Diego\, with a joint appointment in the Department of Emergency Medicine\, and is affiliated with the Contextual Robotics Institute and Design Lab. Dr. Riek directs the Healthcare Robotics Lab and leads research in human-robot teaming and health informatics\, with a focus on autonomous robots that work proximately with people. Riek’s current research interests include long-term learning\, robot perception\, and personalization\, with applications in acute care\, neurorehabilitation\, and home health. Dr. Riek received a Ph.D. in Computer Science from the University of Cambridge and a B.S. in Logic and Computation from Carnegie Mellon. Riek served as a Senior Artificial Intelligence Engineer and Roboticist at The MITRE Corporation from 2000-2008\, working on learning and vision systems for robots\, and held the Clare Boothe Luce chair in Computer Science and Engineering at the University of Notre Dame from 2011-2016. Dr. Riek has received the NSF CAREER Award\, the AFOSR Young Investigator Award\, and the Qualcomm Research Award\, and was named one of ASEE’s 20 Faculty Under 40. Dr. Riek is the HRI 2023 General Co-Chair\, served as the Program Co-Chair for HRI 2020\, and serves on the editorial boards of T-RO and THRI.
\n END:VEVENT BEGIN:VEVENT UID:ai1ec-12635@lcsr.jhu.edu DTSTAMP:20240319T071248Z CATEGORIES: CONTACT:Ashley Moriarty\; amoriar2@jhu.edu\; https://wse.zoom.us/s/94623801 186 DESCRIPTION:Link for Live Seminar\nLink for Recorded seminars – 2021/2022 s chool year\n \nAbstract:\nWhile human interaction remains key to a caring treatment\, medical robotics holds the potential to improve surgical proce sses through enabling scaling of forces and actuation\, providing safe and individual treatments to patients\, and allowing for efficient use of hea lth care personnel and resources. Machine learning algorithms and standard ization of processes can increase the quality of medical diagnosis and tre atments\, particularly when analyzing large quantities of data. Technical and robotic systems can thus support the medical staff in all steps of a m edical process.\nThis talk introduces several assistive robotic systems fo r minimally invasive surgical procedures being researched at the Health Ro botics and Automation Lab at KIT\, Germany. On one hand\, we will discuss steerable flexible robotic tools for medical applications that require del icate tissue handling. On the other hand\, cognitive robotic surgeons and augmented reality support in the operation room are presented for applicat ion in laparoscopy and neurosurgery.\n \nBiography:\nFranziska Mathis-Ullr ich is Assistant Professor for Medical Robotics at the Karlsruhe Institute of Technology (KIT) in Germany. Her primary research focus is on minimall y invasive and cognition controlled robotic systems and embedded machine l earning with emphasis on applications in surgery. She received her B.Sc. a nd M.Sc. degrees in mechanical engineering and robotics in 2009 and 2012 a nd obtained her Ph.D. in 2017 in Microrobotics from ETH Zurich\, respectiv ely. Since 2019\, she has been an Assistant Professor with the Health Robo tics and Automation Laboratory at KIT.\nProf. 
Mathis-Ullrich is vice-presi dent of the German Society for Computer- and Robot-assisted Surgery (CURAC ) and has received the IEEE ICRA Best Paper Award in Medical Robotics (201 4)\, the IEEE BioRob Best Student Paper Award (2016) and won twice with he r team the first prize of the ICRA Microassembly Challenge (2014 & 2015). Furthermore\, she made it onto the prestigious Forbes “30 under 30” list ( 2017). DTSTART;TZID=America/New_York:20220309T120000 DTEND;TZID=America/New_York:20220309T130000 LOCATION:https://wse.zoom.us/s/94623801186 SEQUENCE:0 SUMMARY:LCSR Seminar: Franziska Mathis-Ullrich “Cognitive Robotics and Embe dded AI for minimally invasive Surgery” URL:https://lcsr.jhu.edu/events/franziska-mathis-ullrich/ X-COST-TYPE:free X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\n\n
Abstract:
\nWhile human interaction remains key to caring treatment\, medical robotics holds the potential to improve surgical processes by enabling scaling of forces and actuation\, providing safe and individualized treatments to patients\, and allowing for efficient use of healthcare personnel and resources. Machine learning algorithms and standardization of processes can increase the quality of medical diagnoses and treatments\, particularly when analyzing large quantities of data. Technical and robotic systems can thus support the medical staff in all steps of a medical process.
\nThis talk introduces several assistive robotic systems for minimally invasive surgical procedures being researched at the Health Robotics and Automation Lab at KIT\, Germany. On one hand\, we will discuss steerable\, flexible robotic tools for medical applications that require delicate tissue handling. On the other hand\, cognitive robotic surgeons and augmented reality support in the operating room are presented for application in laparoscopy and neurosurgery.
\n\n
Biography:
\nFranziska Mathis-Ullrich is Assistant Professor for Medical Robotics at the Karlsruhe Institute of Technology (KIT) in Germany. Her primary research focus is on minimally invasive and cognition-controlled robotic systems and embedded machine learning\, with emphasis on applications in surgery. She received her B.Sc. and M.Sc. degrees in mechanical engineering and robotics in 2009 and 2012\, respectively\, and obtained her Ph.D. in Microrobotics from ETH Zurich in 2017. Since 2019\, she has been an Assistant Professor with the Health Robotics and Automation Laboratory at KIT.
\nProf. Mathis-Ullrich is vice-president of the German Society for Computer- and Robot-assisted Surgery (CURAC) and has received the IEEE ICRA Best Paper Award in Medical Robotics (2014) and the IEEE BioRob Best Student Paper Award (2016)\, and twice won with her team the first prize of the ICRA Microassembly Challenge (2014 & 2015). Furthermore\, she made it onto the prestigious Forbes “30 under 30” list (2017).
\n END:VEVENT BEGIN:VEVENT UID:ai1ec-12682@lcsr.jhu.edu DTSTAMP:20240319T071248Z CATEGORIES: CONTACT:Jamie Seward\; 1-800-JHU-JHU1 (548-5481)\; alumevents@jhu.edu\; htt ps://events.jhu.edu/form/roboticsinhealthcare DESCRIPTION: \n\nSponsored by the Hopkins Robotics Alumni Network\, the Lab oratory for Computational Sensing + Robotics\, and the Healthcare Affinity \nJoin us as we hear from Dr. Ayushi Sinha\, Senior Scientist in the Preci sion Diagnosis & Image Guided Therapy department at Philips Research North America. Dr. Sinha will discuss her time at Hopkins\, her career journey\ , and her current role. We’ll have time for Q&A with our speaker and time to network with one another. This program will be presented by Zoom. A lin k will be shared with you in advance.\nDisclaimer: The perspectives and op inions expressed by the speaker(s) during this program are those of the sp eaker(s) and not\, necessarily\, those of Johns Hopkins University and the scheduling of any speaker at an alumni event or program does not constitu te the University’s endorsement of the speaker’s perspectives and opinions .\n\n\n\n\n\n\nAyushi Sinha\nSenior Scientist\, Phillips\n\n\nAyushi Sinha is a Senior Scientist in the Precision Diagnosis & Image Guided Therapy d epartment at Philips Research North America. She currently leads a project focused on using machine learning to improve workflow during X-ray guided minimally invasive procedures and has worked on improving guidance during biopsy procedures in her previous roles at Philips. She also leads a grou p focused on generating intellectual property around machine learning solu tions for X-ray guided interventions.\nAyushi completed her Ph.D. at Johns Hopkins University with Russ Taylor and Greg Hager in the Department of C omputer Science with a focus on using statistical shape models to improve guidance during endoscopic sinus procedures. 
She continued at Hopkins as a postdoctoral fellow and research faculty to explore unsupervised learning in image-based tool tracking. Before her Ph.D.\, Ayushi received a Master of Science in Engineering degree in Computer Science at Hopkins working w ith Misha Kazhdan\, and a Bachelor of Science degree in Computer Science a nd a Bachelor of Arts degree in Mathematics at Providence College.\n\nFOLL OW\nLinkedIn\nPersonal Website DTSTART;TZID=America/New_York:20220311T120000 DTEND;TZID=America/New_York:20220311T130000 SEQUENCE:0 SUMMARY:Robotics in Healthcare: A Conversation with Ayushi Sinha\, PhD (Eng ineering ’18) URL:https://lcsr.jhu.edu/events/robotics-in-healthcare-a-conversation-with- ayushi-sinha-phd-engineering-18/ X-COST-TYPE:free X-WP-IMAGES-URL:thumbnail\;https://lcsr.jhu.edu/wp-content/uploads/2022/01/ Alumni.png\;600\;330\,medium\;https://lcsr.jhu.edu/wp-content/uploads/2022 /01/Alumni.png\;600\;330\,large\;https://lcsr.jhu.edu/wp-content/uploads/2 022/01/Alumni.png\;600\;330\,full\;https://lcsr.jhu.edu/wp-content/uploads /2022/01/Alumni.png\;600\;330 X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\n\n
Sponsored by the Hopkins Robotics Alumni Network\, the Laboratory for Computational Sensing + Robotics\, and the Healthcare Affinity
\nJoin us as we hear from Dr. Ayushi Sinha\, Senior Scientist in the Precision Diagnosis & Image Guided Therapy department at Philips Research North America. Dr. Sinha will discuss her time at Hopkins\, her career journey\, and her current role. We’ll have time for Q&A with our speaker and time to network with one another. This program will be presented by Zoom. A link will be shared with you in advance.
\nDisclaimer: The perspectives and opinions expressed by the speaker(s) during this program are those of the speaker(s) and not\, necessarily\, those of Johns Hopkins University\, and the scheduling of any speaker at an alumni event or program does not constitute the University’s endorsement of the speaker’s perspectives and opinions.
\nAyushi Sinha is a Senior Scientist in the Precision Diagnosis & Image Guided Therapy department at Philips Research North America. She currently leads a project focused on using machine learning to improve workflow during X-ray guided minimally invasive procedures and has worked on improving guidance during biopsy procedures in her previous roles at Philips. She also leads a group focused on generating intellectual property around machine learning solutions for X-ray guided interventions.
\nAyushi completed her Ph.D. at Johns Hopkins University with Russ Taylor and Greg Hager in the Department of Computer Science with a focus on using statistical shape models to improve guidance during endoscopic sinus procedures. She continued at Hopkins as a postdoctoral fellow and research faculty member to explore unsupervised learning in image-based tool tracking. Before her Ph.D.\, Ayushi received a Master of Science in Engineering degree in Computer Science at Hopkins working with Misha Kazhdan\, and a Bachelor of Science degree in Computer Science and a Bachelor of Arts degree in Mathematics at Providence College.
\n\n
Mid-term Spring Semester can usher in the interview season for many students seeking internship or full-time employment opportunities. Mark Savage\, Life Design Educator for Engineering Master’s Students\, will walk you through what to expect and how to ace the job interview. Time permitting\, we may also discuss the Elevator Pitch in preparation for your upcoming Robotics Industry Day. Remember to convey some of those 800 skills that relate to some of the jobs you’ll be discussing.
\n\n END:VEVENT BEGIN:VEVENT UID:ai1ec-12449@lcsr.jhu.edu DTSTAMP:20240319T071248Z CATEGORIES: CONTACT:Ashley Moriarty\; 410-516-6841\; ashleymoriarty@jhu.edu DESCRIPTION:Update Jan 28: Industry Day will now be virtual as we won’t kno w the COVID climate in the future. In order to reduce zoom fatigue\, we ar e splitting the event into 2 half days. Industry Day will be Monday March 21 1-4pm and and Tuesday March 22 1-4pm.\n2022 Industry Day Agenda/Program \n\n\n\nMonday 3/21\nZoom\n\n\n1:00 pm\nWelcome WSE: Larry Nagahara\, Asso ciate Dean for Research\n\n\n1:05 pm\nIntroduction to LCSR: Russell H. Tay lor\, Director\n\n\n1:25 pm\nLCSR Education: Louis Whitcomb\, Deputy Direc tor\n\n\n1:40 pm\nStudent Research Talk 1 – Max Li\n\n\n1:50 pm\nStudent R esearch Talk 2 – Will Pryor\n\n\n2:00 pm\nStudent Research Talk 3 – Neha T homas\n\n\n2:10 pm\nStudent Research Talk 4 – Filip Aronshtein and Peter W eiss\n\n\n2:20 pm\nBreak\n\n\n2:30 pm\nJHTV – Seth Zonies\n\n\n2:45 pm\nIn dustry Talk – Gouthami Chintalapani\, Siemens\n\n\n3:05 pm\nIndustry Talk – Vinutha Kallem\, Waymo\n\n\n3:25 pm\nBreak\n\n\n3:35 pm\nNew Faculty Tal k – Axel Krieger\n\n\n3:55 pm\nNew Faculty Talk – Mathias Unberath\n\n\n4: 15 pm\nClosing: Russell H. Taylor\, Director\n\n\n\n \n\n\n\nTuesday 3/22 \nGather Town:\n\n\n1:00-3:00pm\nPoster and Demo Session\n\n\n3:00-4:00pm \nStudent and Industry Resume Review\n\n\n4:00-5:00pm\nNetworking Receptio n\n\n\n\n \nThe Laboratory for Computational Sensing and Robotics will hig hlight its elite robotics students and showcase cutting-edge research proj ects in areas that include Medical Robotics\, Extreme Environments Robotic s\, Human-Machine Systems for Manufacturing\, BioRobotics and more.\nRobot ics Industry Day will provide top companies and organizations in the priva te and public sectors with access to the LCSR’s forward-thinking\, solutio n-driven students. 
The event will also serve as an informal opportunity to explore university-industry partnerships.\nYou will experience dynamic presentations and discussions\, observe live demonstrations\, and participate in speed networking sessions that afford you the opportunity to meet Johns Hopkins’ most talented robotics students before they graduate.\nPlease contact Ashley Moriarty if you have any questions.\n\nDownload our NEW 2022 Industry Day Book\n \nTickets: https://forms.gle/YUfHzMXBy6t6FdBn8. DTSTART;TZID=America/New_York:20220321T130000 DTEND;TZID=America/New_York:20220321T160000 LOCATION:Zoom SEQUENCE:0 SUMMARY:2022 JHU Robotics Industry Day URL:https://lcsr.jhu.edu/events/jhu-robotics-industry-day-2022/ X-COST-TYPE:external X-WP-IMAGES-URL:thumbnail\;https://lcsr.jhu.edu/wp-content/uploads/2017/11/6722-LCSR-book-final-pdf.jpg\;553\;716\,medium\;https://lcsr.jhu.edu/wp-content/uploads/2017/11/6722-LCSR-book-final-pdf.jpg\;553\;716\,large\;https://lcsr.jhu.edu/wp-content/uploads/2017/11/6722-LCSR-book-final-pdf.jpg\;553\;716\,full\;https://lcsr.jhu.edu/wp-content/uploads/2017/11/6722-LCSR-book-final-pdf.jpg\;553\;716 X-ALT-DESC;FMTTYPE=text/html:\\n\\n
Update Jan 28: Industry Day will now be virtual as we won’t know the COVID climate in the future. In order to reduce zoom fatigue\, we are splitting the event into 2 half days. Industry Day will be Monday March 21 1-4pm and Tuesday March 22 1-4pm.
\n2022 Industry Day Agenda/Program
\nMonday 3/21 | \nZoom | \n
1:00 pm | \nWelcome WSE: Larry Nagahara\, Associate Dean for Research | \n
1:05 pm | \nIntroduction to LCSR: Russell H. Taylor\, Director | \n
1:25 pm | \nLCSR Education: Louis Whitcomb\, Deputy Director | \n
1:40 pm | \nStudent Research Talk 1 – Max Li | \n
1:50 pm | \nStudent Research Talk 2 – Will Pryor | \n
2:00 pm | \nStudent Research Talk 3 – Neha Thomas | \n
2:10 pm | \nStudent Research Talk 4 – Filip Aronshtein and Peter Weiss | \n
2:20 pm | \nBreak | \n
2:30 pm | \nJHTV – Seth Zonies | \n
2:45 pm | \nIndustry Talk – Gouthami Chintalapani\, Siemens | \n
3:05 pm | \nIndustry Talk – Vinutha Kallem\, Waymo | \n
3:25 pm | \nBreak | \n
3:35 pm | \nNew Faculty Talk – Axel Krieger | \n
3:55 pm | \nNew Faculty Talk – Mathias Unberath | \n
4:15 pm | \nClosing: Russell H. Taylor\, Director | \n
\n
Tuesday 3/22 | \nGather Town: | \n
1:00-3:00pm | \nPoster and Demo Session | \n
3:00-4:00pm | \nStudent and Industry Resume Review | \n
4:00-5:00pm | \nNetworking Reception | \n\n\n
\n
The Laboratory for Computa tional Sensing and Robotics will highlight its elite robotics students and showcase cutting-edge research projects in areas that include Med ical Robotics\, Extreme Environments Robotics\, Human-Machine Systems for Manufacturing\, BioRobotics and more.
\nRobotics Industry Day will provide top companies and organizations in the private and public sectors with access to the LCSR’s forward-thinking\, solution-driven students. The event will also serve as an informal opportunity to explore university-industry partnerships.
\nYou will experience dynamic presentations and discussions\, observe live demonstrations\, and participate in speed networking sessions that afford you the opportunity to meet Johns Hopkins’ most talented robotics students before they graduate.
\nPlease contact Ashley Moriarty if you have any questions.
\n\n
Tickets: https://forms.gle/YUfHzMXBy6t6FdBn8.
X-TICKETS-URL:https://forms.gle/YUfHzMXBy6t6FdBn8 END:VEVENT BEGIN:VEVENT UID:ai1ec-12632@lcsr.jhu.edu DTSTAMP:20240319T071248Z CATEGORIES: CONTACT:Ashley Moriarty\; amoriar2@jhu.edu\; https://wse.zoom.us/s/94623801186 DESCRIPTION:Link for Live Seminar\nLink for Recorded seminars – 2021/2022 school year\n \nAbstract:\nLocomotion in living systems and bio-inspired robots requires the generation and control of oscillatory motion. While a common method to generate motion is through modulation of time-dependent “clock” signals\, in this talk we will motivate and study an alternative method of oscillatory generation through autonomous limit-cycle systems. Limit-cycle oscillators for robotics have many desirable properties including adaptive behaviors\, entrainment between oscillators\, and potential simplification of motion control. I will present several examples of the generation and control of autonomous oscillatory motion in bio-inspired robotics. First\, I will describe our recent work to study the dynamics of wingbeat oscillations in “asynchronous” insects and how we can build these behaviors into micro-aerial vehicles. In the second part of this talk I will describe how limit-cycle gait generation in collective robots can enable swarms to synchronize their movement through contact and without communication. More broadly in this talk I hope to motivate why we should look to autonomous dynamical systems for designing and controlling emergent locomotor behaviors in bio-inspired robotics.\n \nBiography:\nDr. Nick Gravish received his PhD from Georgia Tech where he used robots as physical models to motivate and study aspects of biological locomotion. During his post-doc Gravish worked in the microrobotics lab of Rob Wood at Harvard\, where he gained expertise in designing and studying insect-scale robots. Gravish is currently an assistant professor at UC San Diego in the Mechanical and Aerospace Engineering department. His lab bridges the gap between bio-inspiration\, biomechanics\, and robotics\, towards the development of new bio-inspired robotic technologies to improve the adaptability and resilience of mobile robots. DTSTART;TZID=America/New_York:20220330T120000 DTEND;TZID=America/New_York:20220330T130000 LOCATION:https://wse.zoom.us/s/94623801186 SEQUENCE:0 SUMMARY:LCSR Seminar: Nick Gravish “Design and control of emergent oscillations for flapping-wing flyers and synchronizing swarms” URL:https://lcsr.jhu.edu/events/nick-gravish/ X-COST-TYPE:free X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\n\n
Abstract:
\nLocomotion in living systems and bio-inspired robots requires the generation and control of oscillatory motion. While a common method to generate motion is through modulation of time-dependent “clock” signals\, in this talk we will motivate and study an alternative method of oscillatory generation through autonomous limit-cycle systems. Limit-cycle oscillators for robotics have many desirable properties including adaptive behaviors\, entrainment between oscillators\, and potential simplification of motion control. I will present several examples of the generation and control of autonomous oscillatory motion in bio-inspired robotics. First\, I will describe our recent work to study the dynamics of wingbeat oscillations in “asynchronous” insects and how we can build these behaviors into micro-aerial vehicles. In the second part of this talk I will describe how limit-cycle gait generation in collective robots can enable swarms to synchronize their movement through contact and without communication. More broadly in this talk I hope to motivate why we should look to autonomous dynamical systems for designing and controlling emergent locomotor behaviors in bio-inspired robotics.
\n\n
Biography:
\nDr. Nick Gravish received his PhD from Georgia Tech where he used robots as physical models to motivate and study aspects of biological locomotion. During his post-doc Gravish worked in the microrobotics lab of Rob Wood at Harvard\, where he gained expertise in designing and studying insect-scale robots. Gravish is currently an assistant professor at UC San Diego in the Mechanical and Aerospace Engineering department. His lab bridges the gap between bio-inspiration\, biomechanics\, and robotics\, towards the development of new bio-inspired robotic technologies to improve the adaptability and resilience of mobile robots.
\n END:VEVENT BEGIN:VEVENT UID:ai1ec-12639@lcsr.jhu.edu DTSTAMP:20240319T071248Z CATEGORIES: CONTACT:Ashley Moriarty\; amoriar2@jhu.edu\; https://wse.zoom.us/s/94623801186 DESCRIPTION:Link for Live Seminar\nLink for Recorded seminars – 2021/2022 school year\n \nAbstract:\nDesigning robots for human interaction is a multifaceted challenge involving the robot’s intelligent behavior\, physical form\, mechanical structure\, and interaction schema. Our lab develops and studies human-centered robots using a combination of methods from AI\, Design\, and Human-Computer Interaction. This talk focuses on three recent projects\, two concerning the design of a new robot\, and one that tackles designing robots that help human designers.\n \nBiography:\nGuy Hoffman is Associate Professor and the Mills Family Faculty Fellow in the Sibley School of Mechanical and Aerospace Engineering at Cornell University. Prior to that he was an Assistant Professor at IDC Herzliya and co-director of the IDC Media Innovation Lab. Hoffman holds a Ph.D. from MIT in the field of human-robot interaction. He heads the Human-Robot Collaboration and Companionship (HRC²) group\, studying the algorithms\, interaction schema\, and designs enabling close interactions between people and personal robots in the workplace and at home. Among others\, Hoffman developed the world’s first human-robot joint theater performance\, and the first real-time improvising human-robot Jazz duet. His research papers won several top academic awards\, including Best Paper awards at robotics conferences in 2004\, 2006\, 2008\, 2010\, 2013\, 2015\, 2018\, 2019\, 2020\, and 2021. His TEDx talk is one of the most viewed online talks on robotics\, watched more than 3 million times. DTSTART;TZID=America/New_York:20220406T120000 DTEND;TZID=America/New_York:20220406T130000 LOCATION:https://wse.zoom.us/s/94623801186 SEQUENCE:0 SUMMARY:LCSR Seminar: Guy Hoffman “Designing Robots and Designing with Robots” URL:https://lcsr.jhu.edu/events/guy-hoffman/ X-COST-TYPE:free X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\n\n
Abstract:
\nDesigning robots for human interaction is a multifaceted challenge involving the robot’s intelligent behavior\, physical form\, mechanical structure\, and interaction schema. Our lab develops and studies human-centered robots using a combination of methods from AI\, Design\, and Human-Computer Interaction. This talk focuses on three recent projects\, two concerning the design of a new robot\, and one that tackles designing robots that help human designers.
\n\n
Biography:
\nGuy Hoffman is Associate Professor and the Mills Family Faculty Fellow in the Sibley School of Mechanical and Aerospace Engineering at Cornell University. Prior to that he was an Assistant Professor at IDC Herzliya and co-director of the IDC Media Innovation Lab. Hoffman holds a Ph.D. from MIT in the field of human-robot interaction. He heads the Human-Robot Collaboration and Companionship (HRC²) group\, studying the algorithms\, interaction schema\, and designs enabling close interactions between people and personal robots in the workplace and at home. Among others\, Hoffman developed the world’s first human-robot joint theater performance\, and the first real-time improvising human-robot Jazz duet. His research papers won several top academic awards\, including Best Paper awards at robotics conferences in 2004\, 2006\, 2008\, 2010\, 2013\, 2015\, 2018\, 2019\, 2020\, and 2021. His TEDx talk is one of the most viewed online talks on robotics\, watched more than 3 million times.
\n END:VEVENT BEGIN:VEVENT UID:ai1ec-12649@lcsr.jhu.edu DTSTAMP:20240319T071248Z CATEGORIES: CONTACT:Ashley Moriarty\; amoriar2@jhu.edu\; https://wse.zoom.us/s/94623801186 DESCRIPTION:Link for Live Seminar\nLink for Recorded seminars – 2021/2022 school year\n \nAbstract:\nMany successful approaches to robotic locomotion and manipulation operate with high quality simulation tools. Many such approaches are “bottom-up” in a modeling sense\, accounting for all internal forces and environmental interactions explicitly. These “bottom-up” models are used either beforehand (such as in reinforcement learning) and/or in real time. However\, various types of robots are getting smaller\, softer\, and more complex (e.g. bio-hybrid actuators). Some robots lean on low-precision manufacturing and fabrication techniques\, and many robots are now being asked to operate in hard-to-characterize\, natural interfaces like the human body. Such attributes can render “bottom-up” simulators impractical for expected use cases on various research frontiers\, such as micro-biomedical robots and soft robots deployed in uncharacterized environments. In this talk I will revisit the reconstruction equation\, a result from the geometric mechanics literature that offers a “top-down” view of Lagrangian systems\, permitting insights into generalizable system behaviors along a spectrum of friction-momentum dominance. I will show how these tools can permit rapid modeling of high complexity robots in their operating environment without the requirement to specify CAD models or any explicit forces. I will also discuss a related strength and weakness of the approach resulting from the use of symmetries. Surprisingly\, results in simulation and hardware indicate that even with eight-jointed systems\, useful behavioral models can be computed from tens of cycles of data. This suggests that high degree of freedom robots can adjust and excel in situations where explicit force models are poorly understood. I will also briefly discuss a framework for robot recovery that leans on these tools as well as a metric for a robot’s ability to cover the local space of motions\, computed on the Lie algebra of the position space. The metric allows primitives to be valued for their contribution to the space of composed motions rather than just their individual qualities. Results here include a Dubins car that can learn how to turn left (with its steering wheel restricted to only turn right) in less than a second as well as a robot made of tree branches that can learn to walk around the laboratory with less than twelve minutes of experimental data. I hope to motivate the general use of structural reductions as we pursue modeling and control of the next generation of high complexity robots.\n \nBiography:\nDr. Brian Bittner received a B.S. from Carnegie Mellon and a PhD from Michigan where he researched the theory\, simulation\, and application of physics informed machine learning for in situ behavior modeling and optimization. He has sought out cross-disciplinary environments for research\, collaborating with physicists\, biologists\, and mathematicians\, working to facilitate insights from these fields into robotic systems. Bittner is currently a research scientist at the Applied Physics Lab. He is currently working on approaches to modeling and control for soft robots and underwater manipulation. DTSTART;TZID=America/New_York:20220413T120000 DTEND;TZID=America/New_York:20220413T130000 LOCATION:https://wse.zoom.us/s/94623801186 SEQUENCE:0 SUMMARY:LCSR Seminar: Brian Bittner “Data-driven geometric mechanics: top-down tools for in situ robotic modeling and adaptation to injury” URL:https://lcsr.jhu.edu/events/brian-bittner/ X-COST-TYPE:free X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\n\n
Abstract:
\nMany successful approaches to robotic locomotion and manipulation operate with high quality simulation tools. Many such approaches are “bottom-up” in a modeling sense\, accounting for all internal forces and environmental interactions explicitly. These “bottom-up” models are used either beforehand (such as in reinforcement learning) and/or in real time. However\, various types of robots are getting smaller\, softer\, and more complex (e.g. bio-hybrid actuators). Some robots lean on low-precision manufacturing and fabrication techniques\, and many robots are now being asked to operate in hard-to-characterize\, natural interfaces like the human body. Such attributes can render “bottom-up” simulators impractical for expected use cases on various research frontiers\, such as micro-biomedical robots and soft robots deployed in uncharacterized environments. In this talk I will revisit the reconstruction equation\, a result from the geometric mechanics literature that offers a “top-down” view of Lagrangian systems\, permitting insights into generalizable system behaviors along a spectrum of friction-momentum dominance. I will show how these tools can permit rapid modeling of high complexity robots in their operating environment without the requirement to specify CAD models or any explicit forces. I will also discuss a related strength and weakness of the approach resulting from the use of symmetries. Surprisingly\, results in simulation and hardware indicate that even with eight-jointed systems\, useful behavioral models can be computed from tens of cycles of data. This suggests that high degree of freedom robots can adjust and excel in situations where explicit force models are poorly understood. I will also briefly discuss a framework for robot recovery that leans on these tools as well as a metric for a robot’s ability to cover the local space of motions\, computed on the Lie algebra of the position space. The metric allows primitives to be valued for their contribution to the space of composed motions rather than just their individual qualities. Results here include a Dubins car that can learn how to turn left (with its steering wheel restricted to only turn right) in less than a second as well as a robot made of tree branches that can learn to walk around the laboratory with less than twelve minutes of experimental data. I hope to motivate the general use of structural reductions as we pursue modeling and control of the next generation of high complexity robots.
\n\n
Biography:
\nDr. Brian Bittner received a B.S. from Carnegie Mellon and a PhD from Michigan where he researched the theory\, simulation\, and application of physics informed machine learning for in situ behavior modeling and optimization. He has sought out cross-disciplinary environments for research\, collaborating with physicists\, biologists\, and mathematicians\, working to facilitate insights from these fields into robotic systems. Bittner is currently a research scientist at the Applied Physics Lab. He is currently working on approaches to modeling and control for soft robots and underwater manipulation.
\n END:VEVENT BEGIN:VEVENT UID:ai1ec-12654@lcsr.jhu.edu DTSTAMP:20240319T071248Z CATEGORIES: CONTACT:Ashley Moriarty\; amoriar2@jhu.edu\; https://wse.zoom.us/s/94623801186 DESCRIPTION:Link for Live Seminar\nLink for Recorded seminars – 2021/2022 school year\n \nAbstract:\nThe talk will present a survey of my research activities\, with more detailed presentation of our guidance system for robot-assisted prostate cancer surgery. The majority of prostate cancer surgery is carried out with the da Vinci surgical system. Tracking of instruments and hand-eye calibration of this robotic system enables the overlay of pre-operative magnetic resonance imaging by registration to real-time ultrasound. This enables visualization of sub-surface anatomy and cancer. We will discuss our system design\, visualization and registration approaches.\nWe will also discuss instrumentation for force sensing using the da Vinci Research Kit\, and a new approach to teleguidance for ultrasound examinations.\n \nBiography:\nTim Salcudean is a Professor with the Department of Electrical and Computer Engineering\, where he holds the C.A. Laszlo Chair in Biomedical Engineering. He is cross-appointed with the UBC School of Biomedical Engineering and the Vancouver Prostate Centre. He is on the steering committee of the IPCAI conference and on the Editorial Board of the International Journal of Robotics Research. He is a Fellow of IEEE\, MICCAI and of the Canadian Academy of Engineering. His research interests are in medical robotics\, medical image analysis and elastography imaging. DTSTART;TZID=America/New_York:20220420T120000 DTEND;TZID=America/New_York:20220420T130000 LOCATION:https://wse.zoom.us/s/94623801186 SEQUENCE:0 SUMMARY:LCSR Seminar: Tim Salcudean “Ultrasound-based guidance for robot assisted prostate surgery” URL:https://lcsr.jhu.edu/events/tim-salcudean/ X-COST-TYPE:free X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\n\n
Abstract:
\nThe talk will present a survey of my research activities\, with more detailed presentation of our guidance system for robot-assisted prostate cancer surgery. The majority of prostate cancer surgery is carried out with the da Vinci surgical system. Tracking of instruments and hand-eye calibration of this robotic system enables the overlay of pre-operative magnetic resonance imaging by registration to real-time ultrasound. This enables visualization of sub-surface anatomy and cancer. We will discuss our system design\, visualization and registration approaches.
\nWe will also discuss instrumentation for force sensing using the da Vinci Research Kit\, and a new approach to teleguidance for ultrasound examinations.
\n\n
Biography:
\nTim Salcudean is a Professor with the Department of Electrical and Computer Engineering\, where he holds the C.A. Laszlo Chair in Biomedical Engineering. He is cross-appointed with the UBC School of Biomedical Engineering and the Vancouver Prostate Centre. He is on the steering committee of the IPCAI conference and on the Editorial Board of the International Journal of Robotics Research. He is a Fellow of IEEE\, MICCAI and of the Canadian Academy of Engineering. His research interests are in medical robotics\, medical image analysis and elastography imaging.
\n END:VEVENT BEGIN:VEVENT UID:ai1ec-12660@lcsr.jhu.edu DTSTAMP:20240319T071248Z CATEGORIES: CONTACT:Ashley Moriarty\; amoriar2@jhu.edu\; https://wse.zoom.us/s/94623801186 DESCRIPTION:Link for Live Seminar\nLink for Recorded seminars – 2021/2022 school year\n \nPanelists:\n\nCourtney Schmitt\, BE-3U Lead Controls Engineer\, Blue Origin (JHU Class of 2018\, B.S. Mechanical Engineering\, JHU Class of 2019\, M.S.E. Robotics)\n\nCourtney is the lead controls engineer at Blue Origin for the BE-3U Engine. Courtney joined Blue Origin in 2019 after completing her Master's in Robotics and her Bachelor's in Mechanical Engineering at Johns Hopkins University. Before joining Blue Origin\, Courtney completed an internship at Virgin Galactic and worked at an autonomous underwater vehicle startup. While at Johns Hopkins she participated in a variety of research including a project to map the locations of black holes in the universe\, researching autonomous vehicles\, and working with the Space Telescope Science Institute for her senior design capstone project. In 2018\, she was selected to receive the Brooke Owens Fellowship\, a competitive fellowship awarded to women pursuing careers in the space industry. Courtney was also recently named to the 20 under 35 SSPI list in 2021.\nOutside of work hours\, Courtney volunteered for the Community School of Baltimore during her undergraduate studies as a STEM educator. She was a co-founder and President of the JHU chapter of the Students for the Exploration and Development of Space (SEDS) with the goal of bringing together the space community at JHU across a variety of majors and disciplines. She has frequently mentored students as well as an all-girls First Robotics team from the Museum of Flight in Washington.\n\n\nRachel Hegeman\, Software Engineer\, Waymo (JHU Class of 2016\, B.S. Mechanical Engineering & B.S. Applied Math\, JHU Class of 2019\, M.S.E. Computer Science)\n\nHi\, my name is Rachel and I like robots. I started working with underwater robots at JHU in the Dynamical Systems and Control Lab (DSCL) in 2015\, and after graduating undergrad I spent a few years with the JHU Applied Physics Lab (JHU APL) working on experimental reconnaissance systems. At this time\, I also worked with the Biomechanical and Image Guided Surgical Systems Lab (BIGSS) to ascertain the efficacy of various autonomous surgical systems and imaging techniques. I got my Master's in Computer Science in 2019\, and then headed west for a job with Waymo working on reasoning within the vehicle’s planner. At Waymo\, I just recently transferred teams to focus more on trajectory optimization.\n\n\nAnn Majewicz Fey\n\n2010 JHU MSE ME JHU\, PhD Stanford\, recently tenured @ UT Dallas: Ann Majewicz Fey http://sites.utexas.edu/herolab/people/\n\n\nAmanda Zimmet\n\n2017 JHU BME PhD focusing in robotics: Amanda Zimmet – Senior Algorithms Analytics Engineer at Auris Health\, A Robotic Bronchoscopy company owned by J&J. https://www.linkedin.com/public-profile/settings\n\n\n\n DTSTART;TZID=America/New_York:20220427T120000 DTEND;TZID=America/New_York:20220427T130000 LOCATION:https://wse.zoom.us/s/94623801186 SEQUENCE:0 SUMMARY:LCSR Seminar: Panel on Careers in Robotics A Panel Discussion With Experts From Industry and Academia URL:https://lcsr.jhu.edu/events/panel-on-careers-in-robotics-a-panel-discussion-with-experts-from-industry-and-academia/ X-COST-TYPE:free X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\n\n
Panelists:
\n\n\n END:VEVENT BEGIN:VEVENT UID:ai1ec-13064@lcsr.jhu.edu DTSTAMP:20240319T071248Z CATEGORIES: CONTACT:Ashley Moriarty\; amoriar2@jhu.edu DESCRIPTION:Link for Live Seminar\nLink for Recorded seminars – 2022/2023 school year\n \n DTSTART;TZID=America/New_York:20220831T120000 DTEND;TZID=America/New_York:20220831T130000 LOCATION:Hackerman B17 SEQUENCE:0 SUMMARY:LCSR Seminar: Welcome Townhall “Review of LCSR” URL:https://lcsr.jhu.edu/events/townhall2022/ X-COST-TYPE:free X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\n
\n
\n END:VEVENT BEGIN:VEVENT UID:ai1ec-13074@lcsr.jhu.edu DTSTAMP:20240319T071248Z CATEGORIES: CONTACT:Ashley Moriarty\; amoriar2@jhu.edu DESCRIPTION:Link for Live Seminar\nLink for Recorded seminars – 2022/2023 school year\n \nAbstract:\nFlexible and soft medical robots offer capabilities beyond those of conventional rigid-link robots due to their ability to traverse confined spaces and conform to highly curved paths. They also offer potential for improved safety due to their inherent compliance. In this talk\, I will present several new robot designs for various surgical applications. In particular\, I will discuss our work on soft\, growing robots that achieve locomotion by material extending from their tip. I will discuss limitations in miniaturizing such robots\, along with methods for actively steering\, sensing\, and controlling them. Finally\, I will discuss new sensing and human-in-the-loop control paradigms that are aimed at improving the performance of flexible surgical robots.\nBio:\nTania Morimoto is an Assistant Professor in the Department of Mechanical and Aerospace Engineering and in the Department of Surgery at the University of California\, San Diego. She received the B.S. degree from Massachusetts Institute of Technology\, Cambridge\, MA\, and the M.S. and Ph.D. degrees from Stanford University\, Stanford\, CA\, all in mechanical engineering. Her research lab focuses on the design and control of flexible continuum robots for increased dexterity and accessibility in uncertain environments\, particularly for minimally invasive surgical interventions. They are also working to address the challenges of designing human-in-the-loop interfaces for controlling these flexible and soft robots\, including the integration of haptic feedback to improve surgical outcomes. She is a recipient of the Hellman Fellowship (2021)\, the Beckman Young Investigator Award (2022)\, and the NSF CAREER Award (2022). DTSTART;TZID=America/New_York:20220907T120000 DTEND;TZID=America/New_York:20220907T130000 LOCATION:Hackerman B17 SEQUENCE:0 SUMMARY:LCSR Seminar: Tania Morimoto “Flexible Surgical Robots: Design\, Sensing\, and Control” URL:https://lcsr.jhu.edu/events/tania-morimoto/ X-COST-TYPE:free X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\n
\n
Abstract:
\nFlexible and soft medical robots offer capabilities beyond those of conventional rigid-link robots due to their ability to traverse confined spaces and conform to highly curved paths. They also offer potential for improved safety due to their inherent compliance. In this talk\, I will present several new robot designs for various surgical applications. In particular\, I will discuss our work on soft\, growing robots that achieve locomotion by material extending from their tip. I will discuss limitations in miniaturizing such robots\, along with methods for actively steering\, sensing\, and controlling them. Finally\, I will discuss new sensing and human-in-the-loop control paradigms that are aimed at improving the performance of flexible surgical robots.
\nBio:
\nTania Morimoto is an Assistant Professor in the Department of Mechanical and Aerospace Engineering and in the Department of Surgery at the University of California\, San Diego. She received the B.S. degree from Massachusetts Institute of Technology\, Cambridge\, MA\, and the M.S. and Ph.D. degrees from Stanford University\, Stanford\, CA\, all in mechanical engineering. Her research lab focuses on the design and control of flexible continuum robots for increased dexterity and accessibility in uncertain environments\, particularly for minimally invasive surgical interventions. They are also working to address the challenges of designing human-in-the-loop interfaces for controlling these flexible and soft robots\, including the integration of haptic feedback to improve surgical outcomes. She is a recipient of the Hellman Fellowship (2021)\, the Beckman Young Investigator Award (2022)\, and the NSF CAREER Award (2022).
\n END:VEVENT BEGIN:VEVENT UID:ai1ec-13084@lcsr.jhu.edu DTSTAMP:20240319T071248Z CATEGORIES: CONTACT:Ashley Moriarty\; amoriar2@jhu.edu DESCRIPTION:Link for Live Seminar\nLink for Recorded seminars – 2022/2023 school year\n \nAbstract:\nExtreme globalization\, war in the Western World\, and COVID-19 are together presenting an unprecedented challenge for humanity. Engineering intelligent systems and robotics can help to counter-balance the negative effects in a number of ways. Potential technology-driven solutions include the emergence of medical robots\, Surgical Data Science\, AI-based support for early anomaly detection and health diagnosis\, rescue robotics\, smart agrifood robotic solutions and beyond. Many of these areas are addressed by the various applied research projects of the University Research and Innovation Center (EKIK) at Óbuda University. This presentation highlights through examples the role that robotics and automation can play in living up to global challenges. The talk will also cover the ethical implications of robotics research\, in both the emergency and post-pandemic world\, with a specific focus on the 2015 UN Sustainable Development Goals.\n DTSTART;TZID=America/New_York:20220914T120000 DTEND;TZID=America/New_York:20220914T130000 LOCATION:Hackerman B17 SEQUENCE:0 SUMMARY:LCSR Seminar: Tamas Haidegger “Medtech research and beyond at Óbuda University” URL:https://lcsr.jhu.edu/events/tamas-haidegger/ X-COST-TYPE:free X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\n\n
Abstract:
\nExtreme globalization\, war in the Western World\, and COVID-19 are together presenting an unprecedented challenge for humanity. Engineering intelligent systems and robotics can help to counter-balance the negative effects in a number of ways. Potential technology-driven solutions include the emergence of medical robots\, Surgical Data Science\, AI-based support for early anomaly detection and health diagnosis\, rescue robotics\, smart agrifood robotic solutions and beyond. Many of these areas are addressed by the various applied research projects of the University Research and Innovation Center (EKIK) at Óbuda University. This presentation highlights through examples the role that robotics and automation can play in living up to global challenges. The talk will also cover the ethical implications of robotics research\, in both the emergency and post-pandemic world\, with a specific focus on the 2015 UN Sustainable Development Goals.
\n\n END:VEVENT BEGIN:VEVENT UID:ai1ec-13079@lcsr.jhu.edu DTSTAMP:20240319T071248Z CATEGORIES: CONTACT:Ashley Moriarty\; amoriar2@jhu.edu DESCRIPTION:Link for Live Seminar\nLink for Recorded seminars – 2022/2023 s chool year\n \nWorkshop Description: TBA\n \n DTSTART;TZID=America/New_York:20220921T120000 DTEND;TZID=America/New_York:20220921T130000 LOCATION:Hackerman B17 SEQUENCE:0 SUMMARY:LCSR Seminar: Mark Savage “Elevator Pitch Workshop” URL:https://lcsr.jhu.edu/events/mark-savage-3/ X-COST-TYPE:free X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\n
\n END:VEVENT BEGIN:VEVENT UID:ai1ec-13096@lcsr.jhu.edu DTSTAMP:20240319T071248Z CATEGORIES: CONTACT:Ashley Moriarty\; amoriar2@jhu.edu DESCRIPTION:Link for Live Seminar\nLink for Recorded seminars – 2022/2023 s chool year\n \nAbstract:\nHuman motor learning depends on a suite of brain mechanisms that are driven by different signals and operate on timescales ranging from minutes to years. Understanding these processes requires id entifying how new movement patterns are normally acquired\, retained\, and generalized\, as well as the effects of distinct brain lesions. The lect ure will focus on normal and abnormal motor learning\, and how we can use this information to improve rehabilitation for individuals with neurologic al damage.\n \nBio:\nDr. Amy Bastian is a neuroscientist who has made impo rtant contributions to the neuroscience of sensorimotor control. She is t he Chief Science Officer at the Kennedy Krieger Institute\, and Director o f the motion analysis laboratory that studies the neural control of human movement. Dr. Bastian is also a Professor of Neuroscience\, Neurology and PM&R at the Johns Hopkins University School of Medicine. Dr. Bastian is a recognized and highly accomplished neuroscientists whose interests inclu de understanding cerebellar function/dysfunction\, locomotor learning mech anisms\, motor learning in development\, and how to rehabilitate people wi th many types of neurological diseases. DTSTART;TZID=America/New_York:20220928T120000 DTEND;TZID=America/New_York:20220928T130000 LOCATION:Hackerman B17 SEQUENCE:0 SUMMARY:LCSR Seminar: Amy Bastian “Learning and relearning human movement” URL:https://lcsr.jhu.edu/events/amy-bastien/ X-COST-TYPE:free X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\n
\n< /HTML> END:VEVENT BEGIN:VEVENT UID:ai1ec-13091@lcsr.jhu.edu DTSTAMP:20240319T071248Z CATEGORIES: CONTACT:Ashley Moriarty\; amoriar2@jhu.edu DESCRIPTION:Link for Live Seminar\nLink for Recorded seminars – 2022/2023 s chool year\n \nAbstract: Planning\, the ability to imagine different futur es and select one assessed to have high value\, is one of the most vaunted of animal capacities. As such it has been a central target of artificial intelligence work from the origins of that field\, in addition to being a focus of neuroscience and cognitive science. These separate and sometimes synergistic traditions are combined in our new work exploring the origin a nd mechanics of planning in animals. We will show how mammals evade autono mous robot “predators” in complex large arenas. We have discovered that de pending on the arrangement and density of barriers to vision\, animals app ear to carefully manage their uncertainty about the predator’s location in order to reach their goal. Their behavior appears unlikely to be driven b y cached responses that were successful in the past\, but rather based on planning during brief pauses over which they peek at the hidden robot adve rsary that is looking for them. After peeking\, they re-route to avoid the predator.\n \nBio: Malcolm A. MacIver is a group leader of the Center for Robotics and Biosystems at Northwestern University\, with a joint appoint ment between Mechanical Engineering and Biomedical Engineering\, and court esy appointments in the Department of Neurobiology and the Department of C omputer Science. His work focuses on extracting principles underlying anim al behavior\, focusing on interactions between biomechanics\, sensory syst ems\, and planning circuits. He then incorporates these principles into bi orobotic systems or simulations of the animal in its environment for syner gy between technological and scientific advances. 
For this work he received the 2009 Presidential Early Career Award for Science and Engineering from President Obama at the White House. MacIver has also developed interactive science-inspired art installations that have exhibited internationally\, and consults for science fiction film and TV series makers. DTSTART;TZID=America/New_York:20221005T120000 DTEND;TZID=America/New_York:20221005T130000 LOCATION:Hackerman B17 SEQUENCE:0 SUMMARY:LCSR Seminar: Malcolm MacIver “Biological planning deciphered via AI algorithms and robot-animal competition in partially observable environments” URL:https://lcsr.jhu.edu/events/malcolm-maciver/ X-COST-TYPE:free X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\n
\n END:VEVENT BEGIN:VEVENT UID:ai1ec-13268@lcsr.jhu.edu DTSTAMP:20240319T071248Z CATEGORIES: CONTACT: DESCRIPTION:LCSR Career Fair – October 7\, 2022 1-4pm EDT\nWe would like to invite you to participate in the first annual Johns Hopkins Robotics Care er Fair. The event will be completely online (virtual) on Gather.town. Th e goal is to help connect students and industry with internships and jobs. The tentative schedule include a keynote speaker from 1-2pm\, elevator pi tch practice with Industry professionals working one-on-one with students from 2-3pm\, and then virtual company job-fair from 3-4pm in which each co mpany/organization will have a dedicated virtual “table” to meet with our students.\n\n\n\nFriday 10/7\n\nGather.Town\n\n\n\n1:00 pm\nKeynote Speake r: Keynote by Stephen Aylward\, Senior Director of Strategic Initiatives a t Kitware\n\n\n2:00 pm\nElevator Pitch Practice: For students with Industr y Professionals\n\n\n\n3:00 pm\nVirtual Company Job Fair\n\n\n\n\nIf you w ould like to participate\, please email Ashley Moriarty by Thursday Septem ber 15\, 2022. More info on our Industry Page\n \nKeynote Speaker: Stephen Aylward “Do something slightly different”\n \n\nAbstract: This talk explo res the increasing overlap that exists in academic and industry environmen ts\, the role of research and product development in those environments\, and how you can shape your career to succeed in either. It also explores how adopting the concepts and tools of open science can lead to success in both.\nBio: Stephen Aylward’s industry career began as an MS graduate sur rounded by PhDs in the AI research labs at McDonnell Douglas. He then rec eived a PhD in computer science and became a tenured associate professor i n the department of radiology at UNC. That was followed by him pivoting b ack to industry and founding Kitware’s office in North Carolina\, where he has had many roles as the company grew. 
He successfully patented and licensed software while in academia and played lead roles in the development of numerous open-source projects including ITK and 3D Slicer while in industry. He now serves as Senior Director of Strategic Initiatives at Kitware\, as an adjunct professor in computer science at UNC\, and as chair of the advisory board for the development of MONAI\, a leading open-source PyTorch library for medical AI. His NIH\, DARPA\, and DoD funded research currently focuses on point-of-care AI and developing quantitative ultrasound spectroscopy measures to aid in the care of trauma victims in ambulances\, emergency departments\, and intensive care units.\n DTSTART;TZID=America/New_York:20221007T130000 DTEND;TZID=America/New_York:20221007T160000 SEQUENCE:0 SUMMARY:JHU Robotics Career Fair URL:https://lcsr.jhu.edu/events/jhu-robotics-career-fair/ X-COST-TYPE:free X-WP-IMAGES-URL:thumbnail\;https://lcsr.jhu.edu/wp-content/uploads/2022/10/Stephen.jpg\;300\;300\,medium\;https://lcsr.jhu.edu/wp-content/uploads/2022/10/Stephen.jpg\;300\;300\,large\;https://lcsr.jhu.edu/wp-content/uploads/2022/10/Stephen.jpg\;300\;300\,full\;https://lcsr.jhu.edu/wp-content/uploads/2022/10/Stephen.jpg\;300\;300 X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\n
\n\n END:VEVENT BEGIN:VEVENT UID:ai1ec-13086@lcsr.jhu.edu DTSTAMP:20240319T071248Z CATEGORIES: CONTACT:Ashley Moriarty\; amoriar2@jhu.edu DESCRIPTION:Link for Live Seminar\nLink for Recorded seminars – 2022/2023 s chool year\n \nStudent Seminar 1:\n \nStudent Seminar 2:\n \n DTSTART;TZID=America/New_York:20221012T120000 DTEND;TZID=America/New_York:20221012T130000 LOCATION:Hackerman B17 SEQUENCE:0 SUMMARY:LCSR Seminar: Student Seminar URL:https://lcsr.jhu.edu/events/student-seminar/ X-COST-TYPE:free X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\n
\n END:VEVENT BEGIN:VEVENT UID:ai1ec-13110@lcsr.jhu.edu DTSTAMP:20240319T071248Z CATEGORIES: CONTACT:Ashley Moriarty\; amoriar2@jhu.edu DESCRIPTION:Link for Live Seminar\nLink for Recorded seminars – 2022/2023 s chool year\n \nAbstract:\nWhen a flapping bat propels through its fluidic environment\, it creates periodic air jets in the form of wake structures downstream of its flight path. The animal’s remarkable dexterity to quickl y manipulate these wakes with fine-grained\, fast body adjustments is key to retaining the force-moment needed for an all-time controllable flight\, even near stall conditions\, sharp turns\, and heel-above-head maneuvers. We refer to bats’ locomotion based on dexterously manipulating the fluidi c environment through dynamically versatile wing conformations as dynamic morphing wing flight.\nIn this talk\, I will describe some of the challeng es facing the design and control of dynamic morphing Micro Aerial Vehicles (MAV) and report our latest morphing flying robot design called Aerobat. Dynamic morphing is the defining characteristic of bat locomotion and is k ey to their agility and efficiency. Unlike a jellyfish whose body conforma tions are fully dominated by its passive dynamics\, a bat employs its acti ve and passive dynamics to achieve dynamic morphing within its gaitcycles with a notable degree of control over joint movements. Copying bats’ morph ing wings has remained an open engineering problem due to a classical robo t design challenge: having many active coordinates in MAVs is impossible b ecause of prohibitive design restrictions such as limited payload and powe r budget. I will propose a framework based on integrating low-power\, feed back-driven components within computational structures (mechanical structu res with computational resources) to address two challenges associated wit h gait generation and regulation. We call this framework Morphing via Inte grated Mechanical Intelligence and Control (MIMIC). 
Based on this framewor k\, my team at SiliconSynapse Laboratory at Northeastern University has co pied bat dynamically versatile wing conformations in untethered flight tes ts.\n \nBio:\nAlireza Ramezani is an assistant professor at the Department of Electrical & Computer Engineering at Northeastern University (NU). Bef ore joining NU in 2018\, he was a post-doc at Caltech’s Division of Engine ering and Applied Science. He received his Ph.D. degree in Mechanical Engi neering from the University of Michigan\, Ann Arbor\, with Jessy Grizzle. His research interests are the design of bioinspired robots with nontrivia l morphologies (high degrees of freedom and dynamic interactions with the environment)\, analysis\, and nonlinear\, closed-loop feedback design of l ocomotion systems. His designs have been featured in high-impact journals\ , including two cover articles in Science Robotics Magazine and research h ighlights in Nature. Alireza has received NASA’s Space Technology Mission Directorate’s Program Award in designing bioinspired locomotion systems fo r the exploration of the Moon and Mars craters two times. He is the recipi ent of Caltech’s Jet Propulsion Lab (JPL) Faculty Research Program Positio n. Alireza’s research has been covered by over 200 news outlets\, includin g The New York Times\, The Wall Street Journal\, The Associated Press\, CN N\, NBC\, and Euronews. Currently\, he is leading a $1 Million NSF project to design and control bat-inspired MAVs in the confined space of sewer ne tworks for monitoring and inspection. DTSTART;TZID=America/New_York:20221019T120000 DTEND;TZID=America/New_York:20221019T130000 LOCATION:Hackerman B17 SEQUENCE:0 SUMMARY:LCSR Seminar: Alireza Ramezani “Bat-inspired Dynamic Morphing Wing Flight Through Morphology and Control Design” URL:https://lcsr.jhu.edu/events/alireza-ramezani/ X-COST-TYPE:free X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\n
\n END:VEVENT BEGIN:VEVENT UID:ai1ec-13115@lcsr.jhu.edu DTSTAMP:20240319T071248Z CATEGORIES: CONTACT:Ashley Moriarty\; amoriar2@jhu.edu DESCRIPTION:Link for Live Seminar\nLink for Recorded seminars – 2022/2023 s chool year\n \nAbstract:\nI will present a bio-inspired fish simulation pl atform\, which we call “Foids”\, to generate realistic synthetic datasets for an use in computer vision algorithm training. This is a first-of-its-k ind synthetic dataset platform for fish\, which generates all the 3D scene s just with a simulation. One of the major challenges in deep learning bas ed computer vision is the preparation of the annotated dataset. It is alre ady hard to collect a good quality video dataset with enough variations\; moreover\, it is a painful process to annotate a sufficiently large video dataset frame by frame. This is especially true when it comes to a fish da taset because it is difficult to set up a camera underwater and the number of fish (target objects) in the scene can range up to 30\,000 in a fish c age on a fish farm. All of these fish need to be annotated with labels suc h as a bounding box or silhouette\, which can take hours to complete manua lly\, even for only a few minutes of video. We solve this challenge by int roducing a realistic synthetic dataset generation platform that incorporat es details of biology and ecology studied in the aquaculture field. Becaus e it is a simulated scene\, it is easy to generate the scene data with ann otation labels from the 3D mesh geometry data and transformation matrix. T o this end\, we develop an automated fish counting system utilizing the pa rt of synthetic dataset that shows comparable counting accuracy to human e yes\, which reduces the time compared to the manual process\, and reduces physical injuries sustained by the fish.\n \nBio: Masaki Nakada obtained a master degree in physics at Waseda University in Japan. 
Then\, he finishe d PhD in computer science at UCLA and worked as a postdoc for another year \, where he published a series of scientific papers. (https://www.masakina kada.com/) He devoted more than 10 years in the research of artificial lif e\, specifically in the area of biomechanical human simulation with muscul oskeletal models\, neuromuscular controllers\, and biomimetic vision. Prev iously\, he worked for Intel as a software engineer. He received MIT Tech nology Review Innovator Award Under 35\, Forbes Next 1000\, Institute for Digital Research and Education Postdoctoral Scholar Award\, Siggraph Thesi s Fast Forward Honorable mention\, TEEC Cup North American Entrepreneurshi p Competition in Silicon Valley\, Japan Student Services Organization Fell owship\, Rotary Ambassadorial Fellowship\, Itoh Foundation Fellowship\, En trepreneurship Foundation Fellowship\, Aoi Foundation Fellowship and winne r of several Startup business competition & hackathons. He founded NeuralX \, Inc (https://www.neuralx.ai/) in 2019 based on the IP he has developed over the decade of research. The company provides an interactive online fi tness service Presence.fit (https://www.presence.fit/)\, where it combines the power of human instructor and motion analytics AI\, which enables the m to provide highly interactive online fitness experience. DTSTART;TZID=America/New_York:20221026T120000 DTEND;TZID=America/New_York:20221026T130000 LOCATION:Hackerman B17 SEQUENCE:0 SUMMARY:LCSR Seminar: Masaki Nakada “Foids: Bio-Inspired Fish Simulation fo r Generating Synthetic Datasets” URL:https://lcsr.jhu.edu/events/masaki-nakada/ X-COST-TYPE:free X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\n< /p>\n
\n END:VEVENT BEGIN:VEVENT UID:ai1ec-13105@lcsr.jhu.edu DTSTAMP:20240319T071248Z CATEGORIES: CONTACT:Ashley Moriarty\; amoriar2@jhu.edu DESCRIPTION:Link for Live Seminar\nLink for Recorded seminars – 2022/2023 s chool year\n \nAbstract:\nThis talk will describe the robotics and AI acti vities and projects within JHU/APL’s Research and Exploratory Development Department. I will present motivating challenge problems faced by various defense\, military and medical sponsors across a number of government agen cies. Further\, I will highlight several research projects we are currentl y executing in the areas of robot manipulation\, navigation and human robo t interaction. Specifically\, the projects will highlight areas including underwater manipulation\, learned policies for off-road and complex terrai n navigation\, human robot interaction\, heterogenous robot teaming\, and fixed wing aerial navigation. Finally\, I will present areas of future res earch and exploration and possible intersections with LCSR.\n \nBio:\nKapi l Katyal is a principal researcher and robotics program manager in the Res earch and Exploratory Development Department at JHU/APL. He completed his PhD at JHU advised by Greg Hager on prediction and perception capabilities for robot navigation. He has worked at JHU/APL since 2007 on several proj ects spanning robot manipulation\, brain machine interfaces\, vision algor ithms for retinal prosthetics and robot navigation in complex terrains. He holds 5 patents and has co-authored over 30 publications in areas of robo tics and AI.\n \n DTSTART;TZID=America/New_York:20221102T120000 DTEND;TZID=America/New_York:20221102T130000 LOCATION:Hackerman B17 SEQUENCE:0 SUMMARY:LCSR Seminar: Kapil Katyal “Robot Manipulation and Navigation Resea rch at JHU/APL” URL:https://lcsr.jhu.edu/events/kapil-katyal/ X-COST-TYPE:free X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\n< /p>\n
\n END:VEVENT BEGIN:VEVENT UID:ai1ec-13134@lcsr.jhu.edu DTSTAMP:20240319T071248Z CATEGORIES: CONTACT:Ashley Moriarty\; amoriar2@jhu.edu DESCRIPTION:Link for Live Seminar\nLink for Recorded seminars – 2022/2023 s chool year\n \nAbstract: Robots have begun to transition from assembly lin es\, where they are physically separated from humans\, to human-populated environments and human-enhancing applications\, where interaction with peo ple is inevitable. With this shift\, research in human-robot interaction ( HRI) has grown to allow robots to work with and around humans on complex t asks\, augment and enhance people\, and provide the best support to them. In this talk\, I will provide an overview of the work performed in the HIR O Group and our efforts toward intuitive\, human-centered technologies for the next generation of robot workers\, assistants\, and collaborators. Mo re specifically\, I will present our research on: a) robots that are safe to people\, b) robots that are capable of operating in complex environment s\, and c) robots that are good teammates. In all\, this research will ena ble capabilities that were not previously possible\, and will impact work domains such as manufacturing\, construction\, logistics\, the home\, and health care.\n \nBio: Alessandro Roncone is Assistant Professor in the Com puter Science Department at University of Colorado Boulder. He received hi s B.Sc. summa cum laude in Biomedical Engineering in 2008\, and his M.Sc. summa cum laude in NeuroEngineering in 2011 from the University of Genoa\, Italy. In 2015 he completed his Ph.D. in Robotics\, Cognition and Interac tion Technologies from the Italian Institute of Technology [IIT]\, working on the iCub humanoid in the Robotics\, Brain and Cognitive Sciences depar tment and the iCub Facility. From 2015 to 2018\, he was Postdoctoral Assoc iate at the Social Robotics Lab in Yale University\, performing research i n Human-Robot Collaboration for advanced manufacturing. 
He joined as facul ty at CU Boulder in August 2018\, where he is director of the Human Intera ction and Robotics Group (HIRO\, https://hiro-group.ronc.one/ ) and co-dir ector of the Interdisciplinary Research Theme in Engineering Education Res earch and AI-augmented Learning (EER-AIL IRT\, https://www.colorado.edu/i rt/engineering-education-ai/ ).\n DTSTART;TZID=America/New_York:20221109T120000 DTEND;TZID=America/New_York:20221109T130000 LOCATION:Hackerman B17 SEQUENCE:0 SUMMARY:LCSR Seminar: Alessandro Roncone “Robots working with and around pe ople” URL:https://lcsr.jhu.edu/events/alessandro-roncone/ X-COST-TYPE:free X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\n
</p>\n
Abstract: Robots have begun to transition from as sembly lines\, where they are physically separated from humans\, to human- populated environments and human-enhancing applications\, where interactio n with people is inevitable. With this shift\, research in human-robot int eraction (HRI) has grown to allow robots to work with and around humans on complex tasks\, augment and enhance people\, and provide the best support to them. In this talk\, I will provide an overview of the work performed in the HIRO Group and our efforts toward intuitive\, human-centered techno logies for the next generation of robot workers\, assistants\, and collabo rators. More specifically\, I will present our research on: a) robots that are safe to people\, b) robots that are capable of operating in complex e nvironments\, and c) robots that are good teammates. In all\, this researc h will enable capabilities that were not previously possible\, and will im pact work domains such as manufacturing\, construction\, logistics\, the h ome\, and health care.
\n\n
Bio: Alessandro Roncone is an Assistant Professor in the Computer Science Department at the University of Colorado Boulder. He received his B.Sc. summa cum laude in Biomedical Engineering in 2008 and his M.Sc. summa cum laude in NeuroEngineering in 2011 from the University of Genoa\, Italy. In 2015 he completed his Ph.D. in Robotics\, Cognition and Interaction Technologies at the Italian Institute of Technology (IIT)\, working on the iCub humanoid in the Robotics\, Brain and Cognitive Sciences department and the iCub Facility. From 2015 to 2018\, he was a Postdoctoral Associate at the Social Robotics Lab at Yale University\, performing research in Human-Robot Collaboration for advanced manufacturing. He joined the faculty at CU Boulder in August 2018\, where he is director of the Human Interaction and Robotics Group (HIRO\, https://hiro-group.ronc.one/ ) and co-director of the Interdisciplinary Research Theme in Engineering Education Research and AI-augmented Learning (EER-AIL IRT\, https://www.colorado.edu/irt/engineering-education-ai/ ).
\n\n END:VEVENT BEGIN:VEVENT UID:ai1ec-13394@lcsr.jhu.edu DTSTAMP:20240319T071248Z CATEGORIES: CONTACT:Ashley Moriarty\; amoriar2@jhu.edu DESCRIPTION:Link for Live Seminar\nLink for Recorded seminars – 2022/2023 s chool year\n \nAbstract: The target of human flight in space is missions b eyond low earth orbit and the Lunar Gateway for deep space exploration and Missions to Mars. Several conditions\, such as the effect of weightlessne ss and radiations on the human body\, behavioral health decrements\, and c ommunication latency have to be considered. Telemedicine and telerobotic a pplications\, robot-assisted surgery with some hints on experimental surgi cal procedures carried out in previous missions\, have to be considered as well. The need for greater crew autonomy in dealing with health issues is related to the increasing severity of medical and surgical interventions that could occur in these missions\, and the presence of a highly trained surgeon on board would be recommended. A surgical robot could be a valuabl e aid but only insofar as it is provided with multiple functions\, includi ng the capability to perform certain procedures autonomously. Providing a multi-functional surgical robot is the new frontier. Research in this fiel d shall be paving the way for the development of new structured plans for human health in space\, as well as providing new suggestions for clinical applications on Earth.\n \nBio: Dr. Desire Pantalone MD is a general surge on with a particular interest in trauma surgery and emergency surgery. She is a staff surgeon in the Unit of Emergency Surgery and part of the Traum a Team of the University Hospital Careggi in Florence. She is also a speci alist in General Surgery and Vascular Surgery. She previously was a Resea rch Associate at the University of Chicago (IL) (Prof M. Michelassi) for O ncological Surgery and for Liver Transplantation and Hepatobiliary Surgery (Dr. J Emond). 
She is also an instructor for the Advanced Trauma Operativ e Management (American College of Surgeons Committee for Trauma) and a Fel low of the American College of Surgeons. She is also a Core Board member r esponsible for “Studies on traumatic events and surgery” in the ESA-Topica l Team on “Tissue Healing in Space: Techniques for promoting and monitorin g tissue repair and regeneration” for Life Science Activities.\n DTSTART;TZID=America/New_York:20221114T110000 DTEND;TZID=America/New_York:20221114T120000 LOCATION:Malone G33/35 SEQUENCE:0 SUMMARY:Special LCSR Seminar: Desire Pantalone “Robotic Surgery in Space” URL:https://lcsr.jhu.edu/events/desire-pantalone/ X-COST-TYPE:free X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\n
</p>\n
Abstract: Human spaceflight now targets missions beyond low Earth orbit and the Lunar Gateway\, toward deep-space exploration and missions to Mars. Several conditions must be considered\, such as the effects of weightlessness and radiation on the human body\, behavioral health decrements\, and communication latency. Telemedicine and telerobotic applications\, and robot-assisted surgery\, with some hints from experimental surgical procedures carried out in previous missions\, must be considered as well. The need for greater crew autonomy in dealing with health issues follows from the increasing severity of medical and surgical interventions that could occur on these missions\, and the presence of a highly trained surgeon on board would be recommended. A surgical robot could be a valuable aid\, but only insofar as it is provided with multiple functions\, including the capability to perform certain procedures autonomously. Providing a multi-functional surgical robot is the new frontier. Research in this field will pave the way for the development of new structured plans for human health in space\, as well as provide new suggestions for clinical applications on Earth.
\n\n
Bio: Dr. Desire Pantalone\, MD\, is a general surgeon with a particular interest in trauma surgery and emergency surgery. She is a staff surgeon in the Unit of Emergency Surgery and part of the Trauma Team of the University Hospital Careggi in Florence\, and a specialist in General Surgery and Vascular Surgery. She was previously a Research Associate at the University of Chicago (IL) for Oncological Surgery (Prof. M. Michelassi) and for Liver Transplantation and Hepatobiliary Surgery (Dr. J. Emond). She is an instructor for Advanced Trauma Operative Management (American College of Surgeons Committee on Trauma)\, a Fellow of the American College of Surgeons\, and a Core Board member responsible for “Studies on traumatic events and surgery” in the ESA Topical Team on “Tissue Healing in Space: Techniques for promoting and monitoring tissue repair and regeneration” for Life Science Activities.
\n\n END:VEVENT BEGIN:VEVENT UID:ai1ec-13129@lcsr.jhu.edu DTSTAMP:20240319T071248Z CATEGORIES: CONTACT:Ashley Moriarty\; amoriar2@jhu.edu DESCRIPTION:Link for Live Seminar\nLink for Recorded seminars – 2022/2023 s chool year\n \nStudent 1: Maia Stiber “Supporting Effective HRI via Flexib le Robot Error Management Using Natural Human Responses”\nAbstract: Unexpe cted robot errors during human-robot interaction are inescapable\; they ca n occur during any task and do not necessarily fit human expectations of p ossible errors. When left unmanaged\, robot errors’ impact on an interacti on harms task performance and user trust\, resulting in user unwillingness to work with a robot. Prior error management techniques often do not poss ess the versatility to appropriately address robot errors across tasks and error types as they frequently use task or error specific information for robust management. In this presentation\, I describe my work on exploring techniques for creating flexible error management through leveraging natu ral human responses (social signals) to robot errors as input for error de tection and classification across tasks\, scenarios\, and error types in p hysical human-robot interaction. I present an error detection method that uses facial reactions for real-time detection and temporal localization of robot error during HRI\, a flexible error-aware framework using traditio nal and social signal inputs that allow for improved error detection\, and an exploration on the effects of robot error severity on natural human re sponses. I will end my talk by discussing how my current and future work f urther investigates the use of social signals in the context of HRI for fl exible error detection and classification.\nBio: Maia Stiber is a Ph.D. ca ndidate in the Department of Computer Science\, co-advised by Dr. Chien-Mi ng Huang and Dr. Russell Taylor. 
Her work focuses on leveraging natural hu man responses to robot errors in an effort to develop flexible error manag ement techniques in support of effective human-robot interaction.\n \n \nS tudent 2: Akwasi Akwaboah “Neuromorphic Cognition and Neural Interfaces”\n Abstract: I present research at the Ralph Etienne-Cummings-led Computation al Sensor-Motor Systems Lab\, Johns Hopkins University on two fronts – (1) Neuromorphic Cognition (NC) focused on the emulation neural physiology at algorithmic and hardware levels\, and (2) Neural Interfaces with emphasis on electronics for neural MicroElectrode Array (MEA) characterization. Th e motivation for the NC front is as follows. The human brain expends a mer e 20 watts in learning and inference\, exponentially lower than state-of-t he-art large language models (GPT-3 and LaMDA). There is the need to innov ate sustainable AI hardware as the 3.4x compute doubling per month has dra stically outpaced Moore’s law\, i.e.\, a 2-year transistor doubling. Effor ts here are geared towards realizing biologically plausible learning rules such as the Hebb’s rule-based Spike-Timing-Dependent Plasticity (STDP) al gorithmically and in correspondingly low-power mixed analog-digital VLSI i mplements. On the same front of achieving a parsimonious artificial intell igence\, we are investigating the outcomes of using our models of the prim ate visual attention to selectively sparsify computation in deep neural ne tworks. At the NI front\, we are developing an open-source multichannel po tentiostat with parallel data acquisition capability. This work holds impl ications for rapid characterization and monitoring of neural MEAs often ad opted in neural rehabilitation and in neuroscientific experiments. A stand ard characterization technique is the Electrochemical Impedance (EI) Spect rometry. 
However\, the increasing channel counts in state-of-the-art MEAs (100x and 1000x) imposes the curse of prolonged acquisition time needed fo r high spectral resolution. Thus\, a truly parallel EI spectrometer made a vailable to the scientific community will ameliorate prolonged research ti me and cost.\nBio: Akwasi Akwaboah joined the Computational Sensory-Motor Systems (CSMS) Lab in Fall 2020 and is working towards his PhD. He receive d the MSE in Electrical Engineering from the Johns Hopkins University\, Ba ltimore\, MD in Summer 2022 en route the PhD. He received the B.Sc. Degree in Biomedical Engineering (First Class Honors) from the Kwame Nkrumah Uni versity of Science and Technology\, Ghana in 2017. He also received the M. S. degree in Electronics Engineering from Norfolk State University\, Norfo lk\, VA\, USA in 2020. His master’s thesis there focused on the formulatio n of a heuristically optimized computational model of a stem cell-derived cardiomyocyte with implications in cardiac safety pharmacology. He subsequ ently worked at Dr. James Weiland’s BioElectronic Vision Lab at the Univer sity of Michigan\, Ann Arbor\, MI\, USA in 2020\; where he collaborated on research in retinal prostheses\, calcium imaging and neural electrode cha racterization. His current interests include the following: neuromorphic c ircuits and systems\, bio-inspired algorithms\, computational biology\, an d neural interfaces. On the lighter side\, Akwasi loves to cook and listen to classical and Afrobeats music. He lives by Marie Curie’s quote – “Noth ing in life is to be feared\, it is only to be understood …”\n DTSTART;TZID=America/New_York:20221116T120000 DTEND;TZID=America/New_York:20221116T130000 LOCATION:Hackerman B17 SEQUENCE:0 SUMMARY:LCSR Seminar: Student Seminar URL:https://lcsr.jhu.edu/events/student-seminar-2/ X-COST-TYPE:free X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\n
</p>\n
Student 1: Maia Stiber “Supporting Effective HRI via Flexi ble Robot Error Management Using Natural Human Responses”
\nAbstract: Unexpected robot errors during human-robot interaction are ines capable\; they can occur during any task and do not necessarily fit human expectations of possible errors. When left unmanaged\, robot errors’ impac t on an interaction harms task performance and user trust\, resulting in u ser unwillingness to work with a robot. Prior error management techniques often do not possess the versatility to appropriately address robot errors across tasks and error types as they frequently use task or error specifi c information for robust management. In this presentation\, I describe my work on exploring techniques for creating flexible error management throug h leveraging natural human responses (social signals) to robot errors as i nput for error detection and classification across tasks\, scenarios\, and error types in physical human-robot interaction. I present an error detec tion method that uses facial reactions for real-time detection and tempora l localization of robot error during HRI\, a flexible error-aware framewo rk using traditional and social signal inputs that allow for improved erro r detection\, and an exploration on the effects of robot error severity on natural human responses. I will end my talk by discussing how my current and future work further investigates the use of social signals in the cont ext of HRI for flexible error detection and classification.
\nBio: M aia Stiber is a Ph.D. candidate in the Department of Computer Science\, co -advised by Dr. Chien-Ming Huang and Dr. Russell Taylor. Her work focuses on leveraging natural human responses to robot errors in an effort to deve lop flexible error management techniques in support of effective human-rob ot interaction.
\n\n
\n
Student 2: Akwasi Akwa boah “Neuromorphic Cognition and Neural Interfaces”
\nAbstract: I present research at the Ralph Etienne-Cummings-led Computational Sensory-Motor Systems Lab\, Johns Hopkins University\, on two fronts – (1) Neuromorphic Cognition (NC)\, focused on the emulation of neural physiology at the algorithmic and hardware levels\, and (2) Neural Interfaces (NI)\, with emphasis on electronics for neural MicroElectrode Array (MEA) characterization. The motivation for the NC front is as follows. The human brain expends a mere 20 watts in learning and inference\, exponentially lower than state-of-the-art large language models (GPT-3 and LaMDA). There is a need to innovate sustainable AI hardware\, as the 3.4x monthly compute doubling has drastically outpaced Moore’s law\, i.e.\, a 2-year transistor doubling. Efforts here are geared toward realizing biologically plausible learning rules\, such as Hebb’s rule-based Spike-Timing-Dependent Plasticity (STDP)\, algorithmically and in correspondingly low-power mixed analog-digital VLSI implementations. On the same front of achieving a parsimonious artificial intelligence\, we are investigating the outcomes of using our models of primate visual attention to selectively sparsify computation in deep neural networks. On the NI front\, we are developing an open-source multichannel potentiostat with parallel data acquisition capability. This work holds implications for the rapid characterization and monitoring of neural MEAs often adopted in neural rehabilitation and in neuroscientific experiments. A standard characterization technique is Electrochemical Impedance (EI) Spectrometry. However\, the increasing channel counts in state-of-the-art MEAs (100x and 1000x) impose the curse of prolonged acquisition time needed for high spectral resolution. Thus\, a truly parallel EI spectrometer made available to the scientific community will reduce research time and cost.
\nBio: Akwasi Akwaboah joined the Computational Sensory-Motor Systems (CSMS) Lab in Fall 2020 and is working towards his PhD. He received the MSE in Electrical Engineering from the Johns Hopkins University\, Baltimore\, MD\, in Summer 2022\, en route to the PhD. He received the B.Sc. degree in Biomedical Engineering (First Class Honors) from the Kwame Nkrumah University of Science and Technology\, Ghana\, in 2017. He also received the M.S. degree in Electronics Engineering from Norfolk State University\, Norfolk\, VA\, USA\, in 2020. His master’s thesis there focused on the formulation of a heuristically optimized computational model of a stem cell-derived cardiomyocyte with implications in cardiac safety pharmacology. He subsequently worked at Dr. James Weiland’s BioElectronic Vision Lab at the University of Michigan\, Ann Arbor\, MI\, USA\, in 2020\, where he collaborated on research in retinal prostheses\, calcium imaging\, and neural electrode characterization. His current interests include the following: neuromorphic circuits and systems\, bio-inspired algorithms\, computational biology\, and neural interfaces. On the lighter side\, Akwasi loves to cook and listen to classical and Afrobeats music. He lives by Marie Curie’s quote – “Nothing in life is to be feared\, it is only to be understood …”
\n\n END:VEVENT BEGIN:VEVENT UID:ai1ec-13124@lcsr.jhu.edu DTSTAMP:20240319T071248Z CATEGORIES: CONTACT:Ashley Moriarty\; amoriar2@jhu.edu DESCRIPTION:Link for Live Seminar\nLink for Recorded seminars – 2022/2023 s chool year\n \nPanel Speaker 1: Erin Sutton\, PhD\nGuidance and Control En gineer at the JHU Applied Physics Laboratory\nPh.D. Mechanical Engineering 2017\, M.S. Mechanical Engineering 2016\nErin Sutton is a mechanical engi neer at Johns Hopkins Applied Physics Laboratory. She received a BS in mec hanical engineering from the University of Dayton and an MS and a PhD in m echanical engineering from Johns Hopkins University. She spent a year at t he Naval Air Systems Command designing flight simulators before joining AP L in 2019. Her primary research interest is in enhancing existing guidance and control systems with autonomy\, and her recent projects range from hy personic missile defense to civil space exploration.\n \nPanel Speaker 2: Star Kim\, PhD\nJob title and affiliation: Management Consultant at McKins ey & Company\nPh.D. Mechanical Engineering 2021\nStar is an Associate at a global business management consulting firm\, McKinsey & Company. At JHU\, she worked on personalizing cardiac surgery by creating patient specific vascular conduits at Dr. Axel Krieger’s IMERSE lab. She made a virtual rea lity software for doctors to design and evaluate conduits for each patient . Her team filed a patent and founded a startup together\, which received funding from the State of Maryland. Before joining JHU\, she was at the Un iversity of Maryland\, College Park and the U.S. Food and Drug Administrat ion. 
There\, she developed and tested patient specific medical devices and systems such as virtual reality mental therapy and orthopedic surgical cu tting guides.\n \nPanel Speaker 3: Nicole Ortega\, MSE\nSenior Robotics an d Controls Engineer at Johnson and Johnson\, Robotics and Digital Solution s\nJHU MSE Robotics 2018\, JHU BS in Biomedical Engineering 2016\nAt Johns on and Johnson Nicole works on the Robotis and Controls team to improve th e accuracy of their laparoscopic surgery platform. Before joining J&J\, N icole worked as a contractor for NASA supporting Gateway and at Think Surg ical supporting their next generation knee arthroplasty robot.\n \nPanel S peaker 4: Ryan Keating\, MSE\nSoftware Engineer at Nuro\nBS Mechanical Eng ineering 2013\, MSE Robotics 2014\nBio: After finishing my degrees at JHU\ , I spent two and a half years working at Carnegie Robotics\, where I was primarily involved in the development of a land-mine sweeping robot and an inertial navigation system. Following a brief stint working at SRI Intern ational to prototype a sandwich-making robot system (yes\, really)\, I hav e been working on the perception team at Nuro for the past four and a half years. I’ve had the opportunity to work on various parts of the perceptio n stack over that time period\, but my largest contributions have been to our backup autonomy system\, our object tracking system\, and the evaluati on framework we use to validate changes to the perception system. DTSTART;TZID=America/New_York:20221130T120000 DTEND;TZID=America/New_York:20221130T130000 LOCATION:Hackerman B17 SEQUENCE:0 SUMMARY:LCSR Seminar: Careers in Robotics: A Panel Discussion With Experts From Industry and Academia URL:https://lcsr.jhu.edu/events/panel-tbd/ X-COST-TYPE:free X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\n
</p>\n
Panel Speaker 1: Erin Sutton\, PhD
\nGuidan ce and Control Engineer at the JHU Applied Physics Laboratory
\nPh.D . Mechanical Engineering 2017\, M.S. Mechanical Engineering 2016
\nE rin Sutton is a mechanical engineer at Johns Hopkins Applied Physics Labor atory. She received a BS in mechanical engineering from the University of Dayton and an MS and a PhD in mechanical engineering from Johns Hopkins Un iversity. She spent a year at the Naval Air Systems Command designing flig ht simulators before joining APL in 2019. Her primary research interest is in enhancing existing guidance and control systems with autonomy\, and he r recent projects range from hypersonic missile defense to civil space exp loration.
\n\n
Panel Speaker 2: Star Kim\, PhD
\nJob title and affiliation: Management Consultant at McKinsey & Company
\nPh.D. Mechanical Engineering 2021
\nStar is an Associate at McKinsey & Company\, a global business management consulting firm. At JHU\, she worked on personalizing cardiac surgery by creating patient-specific vascular conduits in Dr. Axel Krieger’s IMERSE lab. She created virtual reality software for doctors to design and evaluate conduits for each patient. Her team filed a patent and founded a startup together\, which received funding from the State of Maryland. Before joining JHU\, she was at the University of Maryland\, College Park\, and the U.S. Food and Drug Administration. There\, she developed and tested patient-specific medical devices and systems such as virtual reality mental therapy and orthopedic surgical cutting guides.
\n\n
Panel Speaker 3: Nicole O rtega\, MSE
\nSenior Robotics and Controls Engineer at John son and Johnson\, Robotics and Digital Solutions
\nJHU MSE Robotics 2018\, JHU BS in Biomedical Engineering 2016
\nAt Johnson and Johnson\, Nicole works on the Robotics and Controls team to improve the accuracy of their laparoscopic surgery platform. Before joining J&J\, Nicole worked as a contractor for NASA supporting Gateway\, and at Think Surgical supporting their next-generation knee arthroplasty robot.
\n\n
Panel Speaker 4: Ryan Keating\, MSE\nSoftware Engineer at Nuro
\nBS Mechanical Engineering 2013\, MSE Robotics 2014
\nBio: After finishing my degrees at JHU\, I spent two and a half years work ing at Carnegie Robotics\, where I was primarily involved in the developme nt of a land-mine sweeping robot and an inertial navigation system. Follow ing a brief stint working at SRI International to prototype a sandwich-mak ing robot system (yes\, really)\, I have been working on the perception te am at Nuro for the past four and a half years. I’ve had the opportunity to work on various parts of the perception stack over that time period\, but my largest contributions have been to our backup autonomy system\, our ob ject tracking system\, and the evaluation framework we use to validate cha nges to the perception system.
\n END:VEVENT BEGIN:VEVENT UID:ai1ec-13401@lcsr.jhu.edu DTSTAMP:20240319T071248Z CATEGORIES: CONTACT:Ashley Moriarty\; amoriar2@jhu.edu DESCRIPTION: \n \nAll LCSR members\, their families\, and significant other s are invited to the:\n \nUgly (or normal) Sweater Bash \nFriday\, Decembe r 9th\n5:00PM-7:00PM\nGlass Pavilion\n \nYou can help by contributing your favorite holiday dish (regional specialties strongly encouraged!) to this pot-luck get together (you don’t have to bring anything to participate). Main dishes will be provided\, as will plates\, napkins\, utensils\, etc. Click here to sign up\n \nThere will a gingerbread decorating contest and prizes for best/ugliest sweater!\n \n \n \n \n \n \n \n \n DTSTART;TZID=America/New_York:20221209T170000 DTEND;TZID=America/New_York:20221209T190000 LOCATION:Levering Hall - Glass Pavilion SEQUENCE:0 SUMMARY:LCSR Winter Potluck/ Ugly Sweater Bash URL:https://lcsr.jhu.edu/events/winter-potluck/ X-COST-TYPE:free X-WP-IMAGES-URL:thumbnail\;https://lcsr.jhu.edu/wp-content/uploads/2022/11/ 2022-Holiday-Potluck.png\;576\;577\,medium\;https://lcsr.jhu.edu/wp-conten t/uploads/2022/11/2022-Holiday-Potluck.png\;576\;577\,large\;https://lcsr. jhu.edu/wp-content/uploads/2022/11/2022-Holiday-Potluck.png\;576\;577\,ful l\;https://lcsr.jhu.edu/wp-content/uploads/2022/11/2022-Holiday-Potluck.pn g\;576\;577 X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\n\n
All LCSR members\, their families\, and significant others are invited to the:
\n\n
Ugly (or normal) Sweater Bash
\nFriday\, December 9th
\n5:00PM-7:00PM
\nGlass Pavilion
\nYou can help by contributing your favorite holiday dish (regional specialties strongly encouraged!) to this pot-luck get-together (you don’t have to bring anything to participate). Main dishes will be provided\, as will plates\, napkins\, utensils\, etc. Click here to sign up.
\n\n
There will be a gingerbread decorating contest and prizes for the best/ugliest sweater!
\n\n
\n END:VEVENT BEGIN:VEVENT UID:ai1ec-13554@lcsr.jhu.edu DTSTAMP:20240319T071248Z CATEGORIES: CONTACT: DESCRIPTION:Recovering the Sense of Touch for Robotic Surgery and Surgical Training \n \nBy Ugur Tumerdem\nAssistant. Professor of Mechanical Enginee ring at Marmara University\nVisiting Assistant Professor of Mechanical Eng ineering at Johns Hopkins University\nFulbright Visiting Research Scholar 2022/23\n \nAbstract\n \nWhile robotic surgery systems have revolutionized the field of minimally invasive surgery in the past 25 years\, their bigg est disadvantage since their inception is the lack of haptic feedback to t he surgeon. While controlling robotic instrument with teleoperation surgeo ns operate without their sense of touch and rely on only visual feedback w hich can result in unwanted complications.\n \nIn this seminar\, I am goin g to talk about our recent and ongoing work to recover the lost sense of t ouch in robotic surgery\, through new motion control laws\, haptic teleope ration and machine learning algorithms as well as novel mechanism design. Major hurdles to providing reliable haptic feedback in robotic surgery sys tems are the difficulty in obtaining reliable force measurements/estimatio ns from robotic laparoscopic instruments and the lack of transparent teleo peration architectures which can guarantee stability under environment unc ertainty or communication delays. I will be talking about our approaches to solving these issues and on our ongoing project to achieve haptic feedb ack on the da Vinci Research Kit. 
As an extension of the technology we are developing\, I will also be discussing how the proposed haptic control ap proaches can be used to connect multiple surgeons through haptic interface s to enable a new haptic training approach in surgical robotics.\n \nBio\n Ugur Tumerdem is an Assistant Professor of Mechanical Engineering at Marma ra University\,Istanbul\, Turkey and a Visiting Professor of Mechanical En gineering at Johns Hopkins University as the recipient of a Fulbright Visi ting Research Fellowship in the academic year 2022/2023. Prof. Tumerdem re ceived his B.Sc. in Mechatronics Engineering from Sabanci University\, Ist anbul\, Turkey in 2005\, his M.Sc. and Ph.D. degrees in Integrated Design Engineering from Keio University\, Tokyo\, Japan in 2007 and 2010 respecti vely. He worked as a postdoctoral researcher at IBM Research – Tokyo in 20 11 before joining Marmara University.\n \n \n DTSTART;TZID=America/New_York:20230125T120000 DTEND;TZID=America/New_York:20230125T130000 SEQUENCE:0 SUMMARY:LCSR Seminar: Ugur Tumerdem “Recovering the Sense of Touch for Robo tic Surgery and Surgical Training” URL:https://lcsr.jhu.edu/events/lcsr-seminar-ugar-tumerdem/ X-COST-TYPE:free X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\n
Recov ering the Sense of Touch for Robotic Surgery and Surgical Training
\n\n
By Ugur Tumerdem
\nAssistant Professor of Mechanical Engineering at Marmara University
\nVisiting Assistant Professor of Mechanical Engineering at Johns Hopkins University
\nFulbright Visiting Research Scholar 2022/23
\n\n
Abstract
\n\n
While robotic surgery systems have revolutionized the field of minimally invasive surgery in the past 25 years\, their biggest disadvantage since their inception has been the lack of haptic feedback to the surgeon. While controlling robotic instruments via teleoperation\, surgeons operate without their sense of touch and rely only on visual feedback\, which can result in unwanted complications.
\n\n
In this seminar\, I am going to talk about our recent and ongoing work to recover the lost sense of touch in ro botic surgery\, through new motion control laws\, haptic teleoperation and machine learning algorithms as well as novel mechanism design. Major hurd les to providing reliable haptic feedback in robotic surgery systems are t he difficulty in obtaining reliable force measurements/estimations from ro botic laparoscopic instruments and the lack of transparent teleoperation a rchitectures which can guarantee stability under environment uncertainty o r communication delays. I will be talking about our approaches to solving these issues and on our ongoing project to achieve haptic feedback on the da Vinci Research Kit. As an extension of the technology we are developin g\, I will also be discussing how the proposed haptic control approaches c an be used to connect multiple surgeons through haptic interfaces to enabl e a new haptic training approach in surgical robotics.
\n
\nBio
\nUgur Tumerdem is an Assistant Professor of Mechanical Engineering at Marmara University\, Istanbul\, Turkey\, and a Visiting Professor of Mechanical Engineering at Johns Hopkins University as the recipient of a Fulbright Visiting Research Fellowship in the academic year 2022/2023. Prof. Tumerdem received his B.Sc. in Mechatronics Engineering from Sabanci University\, Istanbul\, Turkey\, in 2005\, and his M.Sc. and Ph.D. degrees in Integrated Design Engineering from Keio University\, Tokyo\, Japan\, in 2007 and 2010\, respectively. He worked as a postdoctoral researcher at IBM Research – Tokyo in 2011 before joining Marmara University.
\n\n
\n
\n END:VEVENT BEGIN:VEVENT UID:ai1ec-13586@lcsr.jhu.edu DTSTAMP:20240319T071248Z CATEGORIES: CONTACT:Lydia\; lcsr-gsa@jhu.edu DESCRIPTION:We are super excited for you to join us on our first LCSR socia l event of this spring term! We are planning to get together for an ice-sk ating session on Thursday January 26th @6:00 pm at the JHU ice rink\, foll owed by an informal happy hour at the Charles Village Pub (we are not prov iding food or drinks this time). The ice rink on the night of the 26th is dedicated to JHU grad students\, so it’s a good opportunity to mingle with peeps from other departments as well! If you are interested in joining us \, please sign up on this google form – we will be taking in people on a f irst come first serve basis.\n \nWe currently have 27 available tickets op en only to LCSR students. However\, you are free to bring in extra guests by signing them up yourselves at this link (please read through JHU’s poli cy on bringing in non JHU affiliated guests on their website). We will be meeting up at Hackerman breezeway at 5:40pm to head together as a group be cause all tickets are registered under 3 of our committee members’ names. The skating session will last for 1.5 hours.\n \nFAQs:\n\nDo I need to kno w how to skate? Nope. You are all welcome to join\, no matter much or litt le you know about ice-skating!\nDo I need to bring in anything? No. Come a s you are! JHU will provide skates\, that’s all you’re going to need. Just wear thicker socks for added comfort/padding. But you’re welcome to bring in your own skates or protective equipment (knee/butt pads) if you wish. \n\n \nLastly\, we wanted to emphasize that the aforementioned date is TEN TATIVE and weather dependent. Should the clouds bless us with rain on that Thursday\, we will need to postpone the event. 
We will send an email on Monday January 23rd to confirm the final date\, but it will most likely be a Thursday or Friday either the week of January 23 or 30.\n \nLooking forward to cruising with you soon ⛸️⛸️!\nTickets: https://docs.google.com/forms/d/e/1FAIpQLSf35TqIHkRdGBtobJCT-4t5E-Bw8DE6sFffGxIxjF5489vW_Q/viewform. DTSTART;TZID=America/New_York:20230126T180000 DTEND;TZID=America/New_York:20230126T193000 SEQUENCE:0 SUMMARY:GSA Ice Skating Social URL:https://lcsr.jhu.edu/events/gsa-ice-skating-social/ X-COST-TYPE:external X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\n
We are super excited for you to join us on our first LCSR social event of this spring term! We are planning to get together for an ice-skating session on Thursday January 26th @6:00 pm at the JHU ice rink\, followed by an informal happy hour at the Charles Village Pub (we are not providing food or drinks this time). The ice rink on the night of the 26th is dedicated to JHU grad students\, so it’s a good opportunity to mingle with peeps from other departments as well! If you are interested in joining us\, please sign up on this Google form – we will be taking people on a first-come\, first-served basis.
\n\n
We currently have 27 available tickets open only to LCSR students. However\, you are free to bring extra guests by signing them up yourselves at this link (please read through JHU’s policy on bringing non-JHU-affiliated guests on their website). We will be meeting up at the Hackerman breezeway at 5:40pm to head over together as a group because all tickets are registered under three of our committee members’ names. The skating session will last for 1.5 hours.
\n\n
FAQs:
\n\n
Lastly\, we wanted to emphasize that the aforementioned date is TENTATIVE and weather dependent. Should the clouds bless us with rain on that Thursday\, we will need to postpone the event. We will send an email on Monday January 23rd to confirm the final date\, but it will most likely be a Thursday or Friday either the week of January 23 or 30.
\n\n
Looking forward to cruising with you soon ⛸️⛸️!
\nTickets: https://docs.google.com/forms/d/e/1FAIpQLSf35TqIHkRdGBtobJCT-4t5E-Bw8DE6sFffGxIxjF5489vW_Q/viewform.
X-TAGS;LANGUAGE=en-US:gsa X-TICKETS-URL:https://docs.google.com/forms/d/e/1FAIpQLSf35TqIHkRdGBtobJCT-4t5E-Bw8DE6sFffGxIxjF5489vW_Q/viewform END:VEVENT BEGIN:VEVENT UID:ai1ec-13522@lcsr.jhu.edu DTSTAMP:20240319T071248Z CATEGORIES: CONTACT:Christy Brooks\; cbrook53@jhu.edu DESCRIPTION:Link for Live Seminar\nLink for Recorded seminars – 2022/2023 school year\n \nAbstract:\nRecent Results and Future Challenges for Autonomous Underwater Vehicles in Ocean Exploration\nDr. Dana R. Yoerger\nSenior Scientist\nDept of Applied Ocean Physics and Engineering\nWoods Hole Oceanographic Institution\n \nIn the past two decades\, engineers and scientists have used robots to study basic processes in the deep ocean including the Mid-Ocean Ridge\, coral habitats\, volcanoes\, and the deepest trenches. We have also used such vehicles to investigate the environmental impact of the Deepwater Horizon oil spill and to investigate ancient and modern shipwrecks. More recently\, we are expanding our efforts to include the mesopelagic or “twilight zone”\, which extends vertically in the ocean from about 200 to 1000m\, where sunlight ceases to penetrate. This regime is particularly under-explored and poorly understood\, due in large part to the logistical and technological challenges in accessing it. However\, knowledge of this vast region is critical for many reasons\, including understanding the global carbon cycle – and Earth’s climate – and for managing biological resources. This talk will show results from our past expeditions and look to future challenges.\n \nBio:\nDr. Dana Yoerger is a Senior Scientist at the Woods Hole Oceanographic Institution and a researcher in robotics and autonomous vehicles. He supervises the research and academic program of graduate students studying oceanographic engineering through the MIT/WHOI Joint Program in the areas of control\, robotics\, and design. Dr.
Yoerger has been a key contributor to the remotely operated vehicle Jason\; to the Autonomous Benthic Explorer known as ABE\; to the autonomous underwater vehicle Sentry\; to the hybrid remotely operated vehicle Nereus\, which reached the bottom of the Mariana Trench in 2009\; and most recently to Mesobot\, a hybrid robot for midwater exploration. Dr. Yoerger has gone to sea on over 90 oceanographic expeditions\, exploring the Mid-Ocean Ridge\, mapping underwater seamounts and volcanoes\, surveying ancient and modern shipwrecks\, studying the environmental effects of the Deepwater Horizon oil spill\, and participating in the recent effort that located the Voyage Data Recorder from the merchant vessel El Faro. His current research focuses on robots for exploring the midwater regions of the world’s ocean. Dr. Yoerger has served on several National Academies committees and is a member of the Research Board of the Gulf of Mexico Research Initiative. He has a PhD in mechanical engineering from the Massachusetts Institute of Technology and is a Fellow of the IEEE.\n \n \n DTSTART;TZID=America/New_York:20230201T120000 DTEND;TZID=America/New_York:20230201T130000 LOCATION:Hackerman B17 SEQUENCE:0 SUMMARY:LCSR Seminar: Dana Yoerger “Recent Results and Future Challenges for Autonomous Underwater Vehicles in Ocean Exploration” URL:https://lcsr.jhu.edu/events/lcsr-seminar-tbd-8/ X-COST-TYPE:free X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\n\n
Abstract:
\nDr. Dana R. Yoerger
\nSenior Scientist
\nDept of Applied Ocean Physics and Engineering
\nWoods Hole Oceanographic Institution
\n\n
In the past two decades\, engineers and scientists have used robots to study basic processes in the deep ocean including the Mid-Ocean Ridge\, coral habitats\, volcanoes\, and the deepest trenches. We have also used such vehicles to investigate the environmental impact of the Deepwater Horizon oil spill and to investigate ancient and modern shipwrecks. More recently\, we are expanding our efforts to include the mesopelagic or “twilight zone”\, which extends vertically in the ocean from about 200 to 1000m\, where sunlight ceases to penetrate. This regime is particularly under-explored and poorly understood\, due in large part to the logistical and technological challenges in accessing it. However\, knowledge of this vast region is critical for many reasons\, including understanding the global carbon cycle – and Earth’s climate – and for managing biological resources. This talk will show results from our past expeditions and look to future challenges.
\n\n
Bio:
\nDr. Dana Yoerger is a Senior Scientist at the Woods Hole Oceanographic Institution and a researcher in robotics and autonomous vehicles. He supervises the research and academic program of graduate students studying oceanographic engineering through the MIT/WHOI Joint Program in the areas of control\, robotics\, and design. Dr. Yoerger has been a key contributor to the remotely operated vehicle Jason\; to the Autonomous Benthic Explorer known as ABE\; to the autonomous underwater vehicle Sentry\; to the hybrid remotely operated vehicle Nereus\, which reached the bottom of the Mariana Trench in 2009\; and most recently to Mesobot\, a hybrid robot for midwater exploration. Dr. Yoerger has gone to sea on over 90 oceanographic expeditions\, exploring the Mid-Ocean Ridge\, mapping underwater seamounts and volcanoes\, surveying ancient and modern shipwrecks\, studying the environmental effects of the Deepwater Horizon oil spill\, and participating in the recent effort that located the Voyage Data Recorder from the merchant vessel El Faro. His current research focuses on robots for exploring the midwater regions of the world’s ocean. Dr. Yoerger has served on several National Academies committees and is a member of the Research Board of the Gulf of Mexico Research Initiative. He has a PhD in mechanical engineering from the Massachusetts Institute of Technology and is a Fellow of the IEEE.
\n\n
\n
\n END:VEVENT BEGIN:VEVENT UID:ai1ec-13562@lcsr.jhu.edu DTSTAMP:20240319T071248Z CATEGORIES: CONTACT: DESCRIPTION:Bagels! DTSTART;TZID=America/New_York:20230206T103000 DTEND;TZID=America/New_York:20230206T130000 SEQUENCE:0 SUMMARY:GSA: Bagel Day URL:https://lcsr.jhu.edu/events/gsa-bagel-day/ X-COST-TYPE:free X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\n
Bagels!
\n X-TAGS;LANGUAGE=en-US:gsa END:VEVENT BEGIN:VEVENT UID:ai1ec-13530@lcsr.jhu.edu DTSTAMP:20240319T071248Z CATEGORIES: CONTACT:Christy Brooks\; cbrook53@jhu.edu DESCRIPTION:Link for Live Seminar\nLink for Recorded seminars – 2022/2023 school year\n \nMark Savage is the Johns Hopkins Life Design Educator for Engineering Master’s Students\, advising on all aspects of career development and the internship/job search\, with the Handshake Career Management System as a necessary tool. Look for weekly newsletters\, which will soon be emailed to Homewood WSE Master’s Students on Sunday nights.\n \n \n DTSTART;TZID=America/New_York:20230208T120000 DTEND;TZID=America/New_York:20230208T130000 LOCATION:Hackerman B17 SEQUENCE:0 SUMMARY:LCSR Seminar: Mark Savage “Resumes” URL:https://lcsr.jhu.edu/events/lcsr-seminar-tbd-3/ X-COST-TYPE:free X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\n\n
Mark Savage is the Johns Hopkins Life Design Educator for Engineering Master’s Students\, advising on all aspects of career development and the internship/job search\, with the Handshake Career Management System as a necessary tool. Look for weekly newsletters\, which will soon be emailed to Homewood WSE Master’s Students on Sunday nights.
\n\n
\n
\n END:VEVENT BEGIN:VEVENT UID:ai1ec-13532@lcsr.jhu.edu DTSTAMP:20240319T071248Z CATEGORIES: CONTACT:Christy Brooks\; cbrook53@jhu.edu DESCRIPTION: \nLink for Live Seminar\nLink for Recorded seminars – 2022/2023 school year\n \nAbstract:\nAll models are wrong\, and too many are directed inward. The Internal Model Principle of control engineering directs our attention (and modeling proficiency) to what makes the world around us patterned and predictable. It says that driving a model of that patterned or predictable behavior in a feedback loop is the only way to achieve perfect tracking or disturbance rejection. In the spirit of “some models are useful”\, I will present a control system model of humans tracking moving targets on a screen using a mouse and cursor. Simple analyses reveal this controller’s robustness to visual blanking\, and experiments (even experiments conducted remotely during the pandemic) provide ample support. Extensions that combine feedforward and feedback control complete the picture and complement existing literature in human motor behavior\, most of which is focused on modeling the system under control rather than the environment.\nBio:\nBrent Gillespie is a Professor of Mechanical Engineering and Robotics at the University of Michigan. He received a Bachelor of Science in Mechanical Engineering from the University of California\, Davis in 1986\, a Master of Music from the San Francisco Conservatory of Music in 1989\, and a Ph.D. in Mechanical Engineering from Stanford University in 1996. His research interests include haptic interface\, human motor behavior\, haptic shared control\, and robot-assisted rehabilitation after neurological injury. Prof.
Gillespie’s awards include the Popular Science Invention Award (2016)\, the University of Michigan Provost’s Teaching Innovation Prize (2012)\, and the Presidential Early Career Award for Scientists and Engineers (2001).\n DTSTART;TZID=America/New_York:20230215T120000 DTEND;TZID=America/New_York:20230215T130000 LOCATION:Hackerman B17 SEQUENCE:0 SUMMARY:LCSR Seminar: Brent Gillespie “Predicting Human Behavior in Predictable Environments Using the Internal Model Principle” URL:https://lcsr.jhu.edu/events/lcsr-seminar-brent-gillespie-2/ X-COST-TYPE:free X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\n
\n
\n
Abstract:
\nAll models are wrong\, and too many are directed inward. The Internal Model Principle of control engineering directs our attention (and modeling proficiency) to what makes the world around us patterned and predictable. It says that driving a model of that patterned or predictable behavior in a feedback loop is the only way to achieve perfect tracking or disturbance rejection. In the spirit of “some models are useful”\, I will present a control system model of humans tracking moving targets on a screen using a mouse and cursor. Simple analyses reveal this controller’s robustness to visual blanking\, and experiments (even experiments conducted remotely during the pandemic) provide ample support. Extensions that combine feedforward and feedback control complete the picture and complement existing literature in human motor behavior\, most of which is focused on modeling the system under control rather than the environment.
\nBio:
\nBrent Gillespie is a Professor of Mechanical Engineering and Robotics at the University of Michigan. He received a Bachelor of Science in Mechanical Engineering from the University of California\, Davis in 1986\, a Master of Music from the San Francisco Conservatory of Music in 1989\, and a Ph.D. in Mechanical Engineering from Stanford University in 1996. His research interests include haptic interface\, human motor behavior\, haptic shared control\, and robot-assisted rehabilitation after neurological injury. Prof. Gillespie’s awards include the Popular Science Invention Award (2016)\, the University of Michigan Provost’s Teaching Innovation Prize (2012)\, and the Presidential Early Career Award for Scientists and Engineers (2001).
\n\n END:VEVENT BEGIN:VEVENT UID:ai1ec-13534@lcsr.jhu.edu DTSTAMP:20240319T071248Z CATEGORIES: CONTACT:Christy Brooks\; cbrook53@jhu.edu DESCRIPTION:Link for Live Seminar\nLink for Recorded seminars – 2022/2023 school year\n \n \nAbstract:\nOver 70% of our world is underwater\, but less than 1% of the world’s oceans have been mapped at resolutions greater than 100m per pixel. Regular inspection\, mapping\, and data collection in marine environments is essential for a whole host of reasons\, including gaining a scientific understanding of our planet\, civil infrastructure maintenance\, and safe navigation. However\, manual inspection/data collection using divers is expensive\, dangerous\, time-consuming\, and tedious work.\n \nIn this talk\, I will discuss the use of autonomous underwater vehicles (AUVs) and autonomous surface vessels (ASVs) to automatically and intelligently map\, inspect\, and collect information in unstructured marine environments. In particular\, we will discuss the problems present in this space as well as the contributions my lab is making towards addressing these problems\, including i) the development of a general-purpose marine robotics testbed at BYU\, ii) the development of a marine robotics simulator called HoloOcean (https://holoocean.readthedocs.io/en/stable/)\, iii) advancements in marine robotic localization using Lie groups\, and iv) preliminary results towards expert-guided topic modeling and intelligent data collection.\n \nBio:\nDr. Joshua Mangelson holds PhD and Master’s degrees in Robotics from the University of Michigan. After completing his degrees\, he served as a post-doctoral fellow at Carnegie Mellon University before joining the Electrical and Computer Engineering faculty at Brigham Young University in 2020. His qualifications include demonstrated expertise in robotic perception\, mapping\, and localization\, with a particular focus on marine robotics.
He has extensive experience leading marine robotic field trials in various locations around the world\, including San Diego\, Hawaii\, Boston\, northern Michigan\, and Utah. In 2018\, his work on multi-robot mapping received the Best Multi-Robot Paper Award at the IEEE ICRA conference and 1st place in the IEEE OCEANS Student Poster Competition. He is currently serving as an associate editor for The International Journal of Robotics Research (IJRR) and the IEEE/RSJ IROS Conference.\n DTSTART;TZID=America/New_York:20230222T120000 DTEND;TZID=America/New_York:20230222T130000 LOCATION:Hackerman B17 SEQUENCE:0 SUMMARY:LCSR Seminar: Joshua Mangelson “Steps Towards Intelligent Autonomous Underwater Inspection and Data Collection” URL:https://lcsr.jhu.edu/events/joshua-mangelson/ X-COST-TYPE:free X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\n
\n
\n
Abstract:
\nOver 70% of our world is underwater\, but less than 1% of the world’s oceans have been mapped at resolutions greater than 100m per pixel. Regular inspection\, mapping\, and data collection in marine environments is essential for a whole host of reasons\, including gaining a scientific understanding of our planet\, civil infrastructure maintenance\, and safe navigation. However\, manual inspection/data collection using divers is expensive\, dangerous\, time-consuming\, and tedious work.
\n\n
In this talk\, I will discuss the use of autonomous underwater vehicles (AUVs) and autonomous surface vessels (ASVs) to automatically and intelligently map\, inspect\, and collect information in unstructured marine environments. In particular\, we will discuss the problems present in this space as well as the contributions my lab is making towards addressing these problems\, including i) the development of a general-purpose marine robotics testbed at BYU\, ii) the development of a marine robotics simulator called HoloOcean (https://holoocean.readthedocs.io/en/stable/)\, iii) advancements in marine robotic localization using Lie groups\, and iv) preliminary results towards expert-guided topic modeling and intelligent data collection.
\n\n
Bio:
\nDr. Joshua Mangelson holds PhD and Master’s degrees in Robotics from the University of Michigan. After completing his degrees\, he served as a post-doctoral fellow at Carnegie Mellon University before joining the Electrical and Computer Engineering faculty at Brigham Young University in 2020. His qualifications include demonstrated expertise in robotic perception\, mapping\, and localization\, with a particular focus on marine robotics. He has extensive experience leading marine robotic field trials in various locations around the world\, including San Diego\, Hawaii\, Boston\, northern Michigan\, and Utah. In 2018\, his work on multi-robot mapping received the Best Multi-Robot Paper Award at the IEEE ICRA conference and 1st place in the IEEE OCEANS Student Poster Competition. He is currently serving as an associate editor for The International Journal of Robotics Research (IJRR) and the IEEE/RSJ IROS Conference.
\n\n END:VEVENT BEGIN:VEVENT UID:ai1ec-13536@lcsr.jhu.edu DTSTAMP:20240319T071248Z CATEGORIES: CONTACT:Ashley Moriarty\; amoriar2@jhu.edu DESCRIPTION:Link for Live Seminar\nLink for Recorded seminars – 2022/2023 school year\n \nUlas Berk Karli and Shiye (Sally) Cao “What if it is wrong: effects of power dynamics and trust repair strategy on trust and compliance in HRI.”\nAbstract: Robotic systems designed to work alongside people are susceptible to technical and unexpected errors. Prior work has investigated a variety of strategies aimed at repairing people’s trust in the robot after its erroneous operations. In this work\, we explore the effect of post-error trust repair strategies (promise and explanation) on people’s trust in the robot under varying power dynamics (supervisor and subordinate robot). Our results show that\, regardless of the power dynamics\, promise is more effective at repairing user trust than explanation. Moreover\, people found a supervisor robot with verbal trust repair to be more trustworthy than a subordinate robot with verbal trust repair. Our results further reveal that people are prone to complying with the supervisor robot even if it is wrong. We discuss the ethical concerns in the use of a supervisor robot and potential interventions to prevent improper compliance in users for more productive human-robot collaboration.\n \nBio: Ulas Berk Karli is an MSE student in Robotics at LCSR\, Johns Hopkins University. He received the Bachelor of Science degree in Mechanical Engineering with a double major in Computer Engineering from Koc University\, Istanbul in 2021. His research interests are Human-Robot Collaboration and Robot Learning for HRI.\nShiye Cao is a first-year Ph.D. student in the Department of Computer Science\, co-advised by Dr. Chien-Ming Huang and Dr. Anqi Liu.
She received the Bachelor of Science degree in Computer Science with a second major in Applied Mathematics and Statistics from Johns Hopkins University in 2021\, and the Master of Science in Engineering in Computer Science from Johns Hopkins University in 2022. Her work focuses on user trust and reliance in human-machine collaborative tasks.\n \n \nEugene Lin “Robophysical modeling of spider vibration sensing of prey on orb webs”\nAbstract: Orb-weaving spiders are functionally blind and detect prey-generated web vibrations through vibration sensors at their leg joints to locate and identify prey caught in their (near) planar webs. Previous studies focused on how spiders use web geometry\, silk properties\, and web pre-tension to modulate vibration sensing. Spiders can also dynamically adjust their posture while sensing prey\, which may be a form of active sensing (Hung\, Corver\, Gordus\, 2022\, APS March Meeting). However\, whether this is true and how it works is poorly understood\, due to the difficulty of measuring the dynamics of the entire prey-web-spider interaction system all at once. Here\, we developed a robophysical model of the system to test this hypothesis of active sensing and discover its principles. Our model consists of a vibrating prey robot and a spider robot that can adjust its posture\, with torsional springs at leg joints and accelerometers to measure joint vibration. Both robots are attached to a physical web made of cords with qualitatively similar properties to real spider web threads. Load cells measure web pre-tension\, and a high-speed camera system measures web vibrations and robot movement. Preliminary results showed vibration attenuation through the web from the prey robot.
We are currently studying the complex effects of the spider robot’s dynamic posture change on vibration propagation across the web and leg joints by systematically varying the parameters of prey robot vibration\, spider robot leg posture\, and web pre-tension.\n \nBio: Eugene Lin is a third-year PhD student in Dr. Chen Li’s lab (Terradynamics Lab). His work focuses on understanding environmental sensing on suspended\, sparse terrain. He received a B.S. in Mechanical Engineering at the University of California\, San Diego. He recently presented this work at the annual SICB conference and will present it again at the annual APS March Meeting.\n \n \nAishwarya Pantula “Pick a Side: Untethered Gel Crawlers That Can Break Symmetry”\nAbstract: The development of untethered soft crawling robots\, programmed to respond to environmental stimuli and precisely maneuverable across size scales\, has been paramount to the fields of soft robotics\, drug delivery\, and autonomous smart devices. Of particular relevance are reversible thermoresponsive hydrogels\, which swell and shrink in the temperature range of 30-60 °C\, for operating such untethered soft robots in human physiological and ambient conditions. While crawling has been demonstrated by thermoresponsive hydrogels\, they need surface modifications in the form of ratchets\, asymmetric patterning\, or constraints to achieve unidirectional motion.\nHere we demonstrate and validate a new mechanism for untethered\, unidirectional crawling for multisegmented gel crawlers built from an active thermoresponsive poly(N-isopropylacrylamide) (pNIPAM) and passive polyacrylamide (pAAM) on flat unpatterned surfaces. By connecting bilayers of different geometries and thicknesses using a centrally suspended gel linker\, we create a morphological gradient along the fore-aft axis\, which leads to an asymmetry in the contact forces during the swelling and deswelling of our crawler.
We thoroughly explain our mechanism using experiments and finite element simulations and\, using experiments\, demonstrate that we can tune the generated asymmetry and\, in turn\, increase the displacement of the crawler by varying linker stiffness\, morphology\, and the number of bilayer segments. We believe this mechanism can be widely applied across fields of study to create the next generation of autonomous shape-changing and smart locomotors.\nBio: Aishwarya is a 4th-year Ph.D. candidate in the lab of Dr. David Gracias at Johns Hopkins University\, USA. Her research focuses on exploring smart materials like stimuli-responsive hydrogels\, combining them with novel patterning methods like 3D/4D printing\, imprint molding\, lithography\, etc.\, and using different mechanical design strategies to create untethered biomimetic actuators and locomotors across size scales for soft robotics and biomedical devices.\n \n \nMaia Stiber “On using social signals to enable flexible error-aware HRI.”\nAbstract: Prior error management techniques often do not possess the versatility to appropriately address robot errors across tasks and scenarios. Their fundamental framework involves explicit\, manual error management and implicit\, domain-specific-information-driven error management\, tailoring their response for specific interaction contexts. We present a framework for approaching error-aware systems by adding implicit social signals as another information channel to create more flexibility in application. To support this notion\, we introduce a novel dataset (composed of three data collections) with a focus on understanding natural facial action unit (AU) responses to robot errors during physical-based human-robot interactions—varying across task\, error\, people\, and scenario.
Analysis of the dataset reveals that\, through the lens of error detection\, using AUs as input into error management affords flexibility to the system and has the potential to improve error detection response rate. In addition\, we provide an example real-time interactive robot error management system using the error-aware framework.\n \nBio: Maia Stiber is a 4th-year Ph.D. candidate in the Department of Computer Science\, co-advised by Dr. Chien-Ming Huang and Dr. Russell Taylor. She received a B.S. in Computer Science from Caltech in 2019 and an M.S.E. in Computer Science from Johns Hopkins University in 2021. Her work focuses on leveraging natural human responses to robot errors in an effort to develop flexible error management techniques in support of effective human-robot interaction.\n \nVictor Antony “Co-designing with older adults\, for older adults: robots to promote physical activity.”\nAbstract: Lack of physical activity has severe negative health consequences for older adults and limits their ability to live independently. Robots have been proposed to help engage older adults in physical activity (PA)\, albeit with limited success. There is a lack of robust understanding of older adults’ needs and wants from robots designed to engage them in PA. In this paper\, we report on the findings of a co-design process in which older adults\, physical therapy experts\, and engineers designed robots to promote PA in older adults. We found a variety of motivators for and barriers against PA in older adults\; we then conceptualized a broad spectrum of possible robotic support and found that robots can play various roles to help older adults engage in PA. This exploratory study elucidated several overarching themes and emphasized the need for personalization and adaptability.
This work highlights key design features that researchers and engineers should consider when developing robots to engage older adults in PA\, and underscores the importance of involving various stakeholders in the design and development of assistive robots.\n \nBio: Victor Antony is a second-year Ph.D. student in the Department of Computer Science\, advised by Dr. Chien-Ming Huang. He received the Bachelor of Science degree in Computer Science from the University of Rochester in 2021. His work focuses on social robots for well-being.\n DTSTART;TZID=America/New_York:20230301T120000 DTEND;TZID=America/New_York:20230301T130000 LOCATION:Hackerman B17 SEQUENCE:0 SUMMARY:LCSR Seminar: Student Seminars URL:https://lcsr.jhu.edu/events/lcsr-seminar-student/ X-COST-TYPE:free X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\n
\n
Ulas Berk Karli and Shiye (Sally) Cao “What if it is wrong: effects of power dynamics and trust repair strategy on trust and compliance in HRI.”
\nAbstract: Robotic systems designed to work alongside people are susceptible to technical and unexpected errors. Prior work has investigated a variety of strategies aimed at repairing people’s trust in the robot after its erroneous operations. In this work\, we explore the effect of post-error trust repair strategies (promise and explanation) on people’s trust in the robot under varying power dynamics (supervisor and subordinate robot). Our results show that\, regardless of the power dynamics\, promise is more effective at repairing user trust than explanation. Moreover\, people found a supervisor robot with verbal trust repair to be more trustworthy than a subordinate robot with verbal trust repair. Our results further reveal that people are prone to complying with the supervisor robot even if it is wrong. We discuss the ethical concerns in the use of a supervisor robot and potential interventions to prevent improper compliance in users for more productive human-robot collaboration.
\n\n
Bio: Ulas Berk Karli is an MSE student in Robotics at LCSR\, Johns Hopkins University. He received the Bachelor of Science degree in Mechanical Engineering with a double major in Computer Engineering from Koc University\, Istanbul in 2021. His research interests are Human-Robot Collaboration and Robot Learning for HRI.
\nShiye Cao is a first-year Ph.D. student in the Department of Computer Science\, co-advised by Dr. Chien-Ming Huang and Dr. Anqi Liu. She received the Bachelor of Science degree in Computer Science with a second major in Applied Mathematics and Statistics from Johns Hopkins University in 2021\, and the Master of Science in Engineering in Computer Science from Johns Hopkins University in 2022. Her work focuses on user trust and reliance in human-machine collaborative tasks.
\n\n
\n
Eugene Lin “Robophysical modeling of spider vibration sensing of prey on orb webs”
\nAbstract: Orb-weaving spiders are functionally blind and detect prey-generated web vibrations through vibration sensors at their leg joints to locate and identify prey caught in their (near) planar webs. Previous studies focused on how spiders use web geometry\, silk properties\, and web pre-tension to modulate vibration sensing. Spiders can also dynamically adjust their posture while sensing prey\, which may be a form of active sensing (Hung\, Corver\, Gordus\, 2022\, APS March Meeting). However\, whether this is true and how it works is poorly understood\, due to the difficulty of measuring the dynamics of the entire prey-web-spider interaction system all at once. Here\, we developed a robophysical model of the system to test this hypothesis of active sensing and discover its principles. Our model consists of a vibrating prey robot and a spider robot that can adjust its posture\, with torsional springs at leg joints and accelerometers to measure joint vibration. Both robots are attached to a physical web made of cords with qualitatively similar properties to real spider web threads. Load cells measure web pre-tension\, and a high-speed camera system measures web vibrations and robot movement. Preliminary results showed vibration attenuation through the web from the prey robot. We are currently studying the complex effects of the spider robot’s dynamic posture change on vibration propagation across the web and leg joints by systematically varying the parameters of prey robot vibration\, spider robot leg posture\, and web pre-tension.
\n\n
Bio: Eugene Lin is a third-year PhD student in Dr. Chen Li’s lab (Terradynamics lab). His work focuses on understanding environmental sensing on suspended\, sparse terrain. He received a B.S. in Mechanical Engineering at the University of California\, San Diego. He recently presented this work at the annual SICB conference and will present it again at the annual APS March meeting.
\n\n
\n
Aishwarya Pantula “Pick a Side: Untethered Gel Crawlers That Can Break Symmetry”
\nAbstract: The development of untethered soft crawling robots programmed to respond to environmental stimuli and precisely maneuverable across size scales has been paramount to the fields of soft robotics\, drug delivery\, and autonomous smart devices. Of particular relevance are reversible thermoresponsive hydrogels\, which swell and shrink in the temperature range of 30–60 °C\, for operating such untethered soft robots in human physiological and ambient conditions. While crawling has been demonstrated by thermoresponsive hydrogels\, they need surface modifications in the form of ratchets\, asymmetric patterning\, or constraints to achieve unidirectional motion.
\nHere we demonstrate and validate a new mechanism for untethered\, unidirectional crawling for multisegmented gel crawlers built from an active thermoresponsive poly(N-isopropylacrylamide) (pNIPAM) and passive polyacrylamide (pAAM) on flat unpatterned surfaces. By connecting bilayers of different geometries and thicknesses using a centrally suspended gel linker\, we create a morphological gradient along the fore-aft axis\, which leads to an asymmetry in the contact forces during the swelling and deswelling of our crawler. We thoroughly explain our mechanism using experiments and finite element simulations and\, using experiments\, demonstrate that we can tune the generated asymmetry and\, in turn\, increase the displacement of the crawler by varying linker stiffness\, morphology\, and the number of bilayer segments. We believe this mechanism can be widely applied across fields of study to create the next generation of autonomous shape-changing and smart locomotors.
\nBio: Aishwarya is a 4th-year Ph.D. candidate in the lab of Dr. David Gracias at Johns Hopkins University\, USA. Her research focuses on exploring smart materials like stimuli-responsive hydrogels\, combining them with novel patterning methods like 3D/4D printing\, imprint molding\, lithography\, etc.\, and using different mechanical design strategies to create untethered biomimetic actuators and locomotors across size scales for soft robotics and biomedical devices.
\n\n
\n
Maia Stiber “On using social signals to enable flexible error-aware HRI.”
\nAbstract: Prior error management techniques often do not possess the versatility to appropriately address robot errors across tasks and scenarios. Their fundamental framework involves explicit\, manual error management and implicit error management driven by domain-specific information\, tailoring the response to specific interaction contexts. We present a framework for approaching error-aware systems by adding implicit social signals as another information channel to create more flexibility in application. To support this notion\, we introduce a novel dataset (composed of three data collections) focused on understanding natural facial action unit (AU) responses to robot errors during physical human-robot interactions\, varying across task\, error\, people\, and scenario. Analysis of the dataset reveals that\, through the lens of error detection\, using AUs as input to error management affords flexibility to the system and has the potential to improve the error detection response rate. In addition\, we provide an example real-time interactive robot error management system using the error-aware framework.
\n\n
Bio: Maia Stiber is a 4th-year Ph.D. candidate in the Department of Computer Science\, co-advised by Dr. Chien-Ming Huang and Dr. Russell Taylor. She received a B.S. in Computer Science from Caltech in 2019 and an M.S.E. in Computer Science from Johns Hopkins University in 2021. Her work focuses on leveraging natural human responses to robot errors in an effort to develop flexible error management techniques in support of effective human-robot interaction.
\n\n
Victor Antony “Co-designing with older adults\, for older adults: robots to promote physical activity.”
\nAbstract: Lack of physical activity has severe negative health consequences for older adults and limits their ability to live independently. Robots have been proposed to help engage older adults in physical activity (PA)\, albeit with limited success. There is a lack of robust understanding of older adults’ needs and wants from robots designed to engage them in PA. In this paper\, we report on the findings of a co-design process in which older adults\, physical therapy experts\, and engineers designed robots to promote PA in older adults. We found a variety of motivators for and barriers against PA in older adults\; we then conceptualized a broad spectrum of possible robotic support and found that robots can play various roles in helping older adults engage in PA. This exploratory study elucidated several overarching themes and emphasized the need for personalization and adaptability. This work highlights key design features that researchers and engineers should consider when developing robots to engage older adults in PA\, and underscores the importance of involving various stakeholders in the design and development of assistive robots.
\n\n
Bio: Victor Antony is a second-year Ph.D. student in the Department of Computer Science\, advised by Dr. Chien-Ming Huang. He received the Bachelor of Science degree in Computer Science from the University of Rochester in 2021. His work focuses on social robots for well-being.
\n\n END:VEVENT BEGIN:VEVENT UID:ai1ec-13540@lcsr.jhu.edu DTSTAMP:20240319T071248Z CATEGORIES: CONTACT:Christy Brooks DESCRIPTION:Link for Live Seminar\nLink for Recorded seminars – 2022/2023 s chool year\nAllison Okamura: “Wearable Haptic Devices for Ubiquitous Commu nication”\nAbstract:\nHaptic devices allow touch-based information transfe r between humans and intelligent systems\, enabling communication in a sal ient but private manner that frees other sensory channels. For such device s to become ubiquitous\, their physical and computational aspects must be intuitive and unobtrusive. The amount of information that can be transmitt ed through touch is limited in large part by the location\, distribution\, and sensitivity of human mechanoreceptors. Not surprisingly\, many haptic devices are designed to be held or worn at the highly sensitive fingertip s\, yet stimulation using a device attached to the fingertips precludes na tural use of the hands. Thus\, we explore the design of a wide array of ha ptic feedback mechanisms\, ranging from devices that can be actively touch ed by the fingertips to multi-modal haptic actuation mounted on the arm. W e demonstrate how these devices are effective in virtual reality\, human-m achine communication\, and human-human communication.\n \nBio:\nAllison Ok amura received the BS degree from the University of California at Berkeley \, and the MS and PhD degrees from Stanford University. She is the Richard W. Weiland Professor of Engineering at Stanford University in the mechani cal engineering department\, with a courtesy appointment in computer scien ce. She is an IEEE Fellow and is the co-general chair of the 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems and a deputy d irector of the Wu Tsai Stanford Neurosciences Institute. 
Her awards includ e the IEEE Engineering in Medicine and Biology Society Technical Achieveme nt Award\, IEEE Robotics and Automation Society Distinguished Service Awar d\, and Duca Family University Fellow in Undergraduate Education. Her acad emic interests include haptics\, teleoperation\, virtual reality\, medical robotics\, soft robotics\, rehabilitation\, and education. For more infor mation\, please see the CHARM Lab website. DTSTART;TZID=America/New_York:20230308T120000 DTEND;TZID=America/New_York:20230308T130000 LOCATION:Hackerman B17 SEQUENCE:0 SUMMARY:LCSR Seminar: Allison Okamura “Wearable Haptic Devices for Ubiquito us Communication” URL:https://lcsr.jhu.edu/events/allison-okamura/ X-COST-TYPE:free X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\n
Allison Okamura: “Wearable Haptic Devices for Ubiquitous Communication”
\nAbstract:
\nHaptic devices allow touch-based information transfer between humans and intelligent systems\, enabling communication in a salient but private manner that frees other sensory channels. For such devices to become ubiquitous\, their physical and computational aspects must be intuitive and unobtrusive. The amount of information that can be transmitted through touch is limited in large part by the location\, distribution\, and sensitivity of human mechanoreceptors. Not surprisingly\, many haptic devices are designed to be held or worn at the highly sensitive fingertips\, yet stimulation using a device attached to the fingertips precludes natural use of the hands. Thus\, we explore the design of a wide array of haptic feedback mechanisms\, ranging from devices that can be actively touched by the fingertips to multi-modal haptic actuation mounted on the arm. We demonstrate how these devices are effective in virtual reality\, human-machine communication\, and human-human communication.
\n\n
Bio:
\nAllison Okamura received the BS degree from the University of California at Berkeley\, and the MS and PhD degrees from Stanford University. She is the Richard W. Weiland Professor of Engineering at Stanford University in the mechanical engineering department\, with a courtesy appointment in computer science. She is an IEEE Fellow\, the co-general chair of the 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems\, and a deputy director of the Wu Tsai Stanford Neurosciences Institute. Her awards include the IEEE Engineering in Medicine and Biology Society Technical Achievement Award\, the IEEE Robotics and Automation Society Distinguished Service Award\, and Duca Family University Fellow in Undergraduate Education. Her academic interests include haptics\, teleoperation\, virtual reality\, medical robotics\, soft robotics\, rehabilitation\, and education. For more information\, please see the CHARM Lab website.
\n END:VEVENT BEGIN:VEVENT UID:ai1ec-13542@lcsr.jhu.edu DTSTAMP:20240319T071248Z CATEGORIES: CONTACT:Christy Brooks DESCRIPTION:Link for Live Seminar\nLink for Recorded seminars – 2022/2023 s chool year\n \nAbstract: From genetic engineering to direct to consumer ne urotechnology to ChatGPT\, it is a standard refrain that science outpaces the development of ethical norms and governance. Further\, technologies in creasingly cross boundaries from medicine to the consumer market to law en forcement and beyond\, in ways that our existing governance structures are not equipped to address. Finally\, our standard governance approaches to addressing ethical issues related to new technologies fail to address popu lation and societal-level impacts. This talk will demonstrate the above th rough a series of examples and describe ongoing work by the US National Ac ademies and others to address these challenges.\n \nBio: Debra JH Mathews\ , PhD\, MA\, is the Associate Director for Research and Programs for the J ohns Hopkins Berman Institute of Bioethics\, and an Associate Professor in the Department of Genetic Medicine\, Johns Hopkins University School of M edicine. Within the JHU Institute for Assured Autonomy\, Dr. Mathews serve s as the Ethics & Governance Lead. Her academic work focuses on ethics and policy issues raised by emerging technologies\, with particular focus on genetics\, stem cell science\, neuroscience\, synthetic biology\, and arti ficial intelligence. Dr. Mathews helped found and lead The Hinxton Group\, an international collective of scientists\, ethicists\, policymakers and others\, interested in ethical and well-regulated science\, and whose work focuses primarily on stem cell research. She has been a member of the Boa rd of Directors of the International Neuroethics Society since 2015\, and is currently President-Elect. In addition to her academic work\, Dr. 
Mathe ws has spent time at the Genetics and Public Policy Center\, the US Depart ment of Health and Human Services\, the Presidential Commission for the St udy of Bioethical Issues\, and the National Academy of Medicine working in various capacities on science policy.\nDr. Mathews earned her PhD in gene tics from Case Western Reserve University\, as well as a concurrent Master ’s in bioethics. She completed a Post-Doctoral Fellowship in genetics at J ohns Hopkins\, and the Greenwall Fellowship in Bioethics and Health Policy at Johns Hopkins and Georgetown Universities.\n \n DTSTART;TZID=America/New_York:20230315T120000 DTEND;TZID=America/New_York:20230315T130000 LOCATION:Hackerman B17 SEQUENCE:0 SUMMARY:LCSR Seminar: Debra Mathews “Ethics and Governance of Emerging Tech nologies” URL:https://lcsr.jhu.edu/events/lcsr-seminar-debra-mathews-2/ X-COST-TYPE:free X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\n< /p>\n
Abstract: From genetic engineering to direct-to-consumer neurotechnology to ChatGPT\, it is a standard refrain that science outpaces the development of ethical norms and governance. Further\, technologies increasingly cross boundaries from medicine to the consumer market to law enforcement and beyond\, in ways that our existing governance structures are not equipped to address. Finally\, our standard governance approaches to addressing ethical issues related to new technologies fail to address population- and societal-level impacts. This talk will demonstrate the above through a series of examples and describe ongoing work by the US National Academies and others to address these challenges.
\n\nBio: Debra JH Mathews\, PhD\, MA\, is the Associate Director for Research and Programs for the Johns Hopkins Berman Institute of Bioethics\, and an Associate Professor in the Department of Genetic Medicine\, Johns Hopkins University School of Medicine. Within the JHU Institute for Assured Autonomy\, Dr. Mathews serves as the Ethics & Governance Lead. Her academic work focuses on ethics and policy issues raised by emerging technologies\, with particular focus on genetics\, stem cell science\, neuroscience\, synthetic biology\, and artificial intelligence. Dr. Mathews helped found and lead The Hinxton Group\, an international collective of scientists\, ethicists\, policymakers\, and others interested in ethical and well-regulated science\, whose work focuses primarily on stem cell research. She has been a member of the Board of Directors of the International Neuroethics Society since 2015\, and is currently President-Elect. In addition to her academic work\, Dr. Mathews has spent time at the Genetics and Public Policy Center\, the US Department of Health and Human Services\, the Presidential Commission for the Study of Bioethical Issues\, and the National Academy of Medicine\, working in various capacities on science policy.\n
Dr. Mathews earned her PhD in genetics from Case Western Reserve University\, as well as a concurrent Master’s in bioethics. She completed a Post-Doctoral Fellowship in genetics at Johns Hopkins\, and the Greenwall Fellowship in Bioethics and Health Policy at Johns Hopkins and Georgetown Universities.
\n\n
\n END:VEVENT BEGIN:VEVENT UID:ai1ec-13422@lcsr.jhu.edu DTSTAMP:20240319T071248Z CATEGORIES: CONTACT:Ashley Moriarty\; 410-516-6841\; ashleymoriarty@jhu.edu DESCRIPTION:2023 Industry Day Agenda/Program\nZoom Link for Morning Session \n\n\n\nFriday 3/24\nLocation: Glass Pavilion – Levering Hall\n\n\n8:30 AM \nRegistration Open and Breakfast\n\n\n9:00 AM\nWelcome\n\n\n9:05 AM\nIntr oduction to LCSR – Russell H. Taylor\, Director\n\n\n9:20 AM\nLCSR Educati on – Louis Whitcomb\, Deputy Director\n\n\n9:25 AM\nIAA – James Bellingham and Anton Dahbura\n\n\n9:30 AM\nStudent Research Talk – Max Li\n\n\n9:42 AM\nStudent Research Talk – Divya Ramesh\n\n\n9:55 AM\nStudent Research Ta lk – Michael Kam\n\n\n10:07 AM\nStudent Research Talk – Di Cao\n\n\n10:20 AM\nCoffee Break\n\n\n10:40 AM\nJohns Hopkins Tech Ventures – Seth Zonies \n\n\n10:55 AM\nIndustry Talk – Ankur Kapoor\, Siemens\n\n\n11:15 AM\nIndu stry Talk – William Tan\, GE\n\n\n11:35 AM\nNew LCSR Faculty – Alejandro M artin-Gomez\,\n\n\n11:55 AM\nClosing – Russell H. Taylor\, Director\n\n\n1 2:00 PM\nLunch – Resume Roundtables\n\n\n\n\n\n\n1:30-4:00 PM\nPoster and Demo Session (Hackerman Hall)\n\n\n 1:45-3:45 PM\n Guided Krieg er Hall Tours (meet outside Hackerman 134)\n\n\n\n\n\n\n4:00-5:00 PM\nAlum ni Reception (Shriver Hall – Clipper Room)\n\n\n\n \nThe Laboratory for Co mputational Sensing and Robotics will highlight its elite robotics student s and showcase cutting-edge research projects in areas that include Medica l Robotics\, Extreme Environments Robotics\, Human-Machine Systems\, BioRo botics and more.\nRobotics Industry Day will provide top companies and org anizations in the private and public sectors with access to the LCSR’s for ward-thinking\, solution-driven students. 
The event will also serve as an informal opportunity to explore university-industry partnerships.\nYou wil l experience dynamic presentations and discussions\, observe live demonstr ations\, and participate in speed networking sessions that afford you the opportunity to meet Johns Hopkins most talented robotics students before t hey graduate.\nPlease contact Ashley Moriarty if you have any questions.\n \nNEW LCSR Industry Day Book 2023\n\n\n\n\nPlease contact Ashley Moriarty if you have any questions.\n \nTickets: https://forms.gle/c8DwoVkfnPTsSbcY 7. DTSTART;TZID=America/New_York:20230324T090000 DTEND;TZID=America/New_York:20230324T160000 LOCATION:Levering Hall - Glass Pavilion @ 3400 N Charles St\, Baltimore MD 21218 SEQUENCE:0 SUMMARY:2023 JHU Robotics Industry Day URL:https://lcsr.jhu.edu/events/jhu-robotics-industry-day-2023/ X-COST-TYPE:external X-WP-IMAGES-URL:thumbnail\;https://lcsr.jhu.edu/wp-content/uploads/2017/11/ header.png\;872\;130\,medium\;https://lcsr.jhu.edu/wp-content/uploads/2017 /11/header.png\;872\;130\,large\;https://lcsr.jhu.edu/wp-content/uploads/2 017/11/header.png\;872\;130\,full\;https://lcsr.jhu.edu/wp-content/uploads /2017/11/header.png\;872\;130 X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\n
Zoom Link for Morning Session
\nFriday 3/24
\nLocation: Glass Pavilion – Levering Hall
\n8:30 AM | Registration Open and Breakfast
\n9:00 AM | Welcome
\n9:05 AM | Introduction to LCSR – Russell H. Taylor\, Director
\n9:20 AM | LCSR Education – Louis Whitcomb\, Deputy Director
\n9:25 AM | IAA – James Bellingham and Anton Dahbura
\n9:30 AM | Student Research Talk – Max Li
\n9:42 AM | Student Research Talk – Divya Ramesh
\n9:55 AM | Student Research Talk – Michael Kam
\n10:07 AM | Student Research Talk – Di Cao
\n10:20 AM | Coffee Break
\n10:40 AM | Johns Hopkins Tech Ventures – Seth Zonies
\n10:55 AM | Industry Talk – Ankur Kapoor\, Siemens
\n11:15 AM | Industry Talk – William Tan\, GE
\n11:35 AM | New LCSR Faculty – Alejandro Martin-Gomez
\n11:55 AM | Closing – Russell H. Taylor\, Director
\n12:00 PM | Lunch – Resume Roundtables
\n1:30-4:00 PM | Poster and Demo Session (Hackerman Hall)
\n1:45-3:45 PM | Guided Krieger Hall Tours (meet outside Hackerman 134)
\n4:00-5:00 PM | Alumni Reception (Shriver Hall – Clipper Room)
\n
The Laboratory for Computational Sensing and Robotics will highlight its elite robotics students and showcase cutting-edge research projects in areas that include Medical Robotics\, Extreme Environments Robotics\, Human-Machine Systems\, BioRobotics\, and more.
\nRobotics Industry Day will provide top companies and organizations in the private and public sectors with access to the LCSR’s forward-thinking\, solution-driven students. The event will also serve as an informal opportunity to explore university-industry partnerships.
\nYou will experience dynamic presentations and discussions\, observe live demonstrations\, and participate in speed networking sessions that afford you the opportunity to meet Johns Hopkins’ most talented robotics students before they graduate.
\nPlease contact Ashley Moriarty if you have any questions.
\n\n
Tickets: https://forms.gle/c8DwoVkfnPTsSbcY7
X-TICKETS-URL:https://forms.gle/c8DwoVkfnPTsSbcY7 END:VEVENT BEGIN:VEVENT UID:ai1ec-13544@lcsr.jhu.edu DTSTAMP:20240319T071248Z CATEGORIES: CONTACT:Christy Brooks\; cbrook53@jhu.edu DESCRIPTION:Link for Live Seminar\nLink for Recorded seminars – 2022/2023 s chool year\n \nAbstract:\nThe open ocean is a massive 3D ecosystem respons ible for absorbing much of Earth’s excess heat and CO2 emissions produced by humans. A portion of the ocean’s carbon pump sequesters atmospheric ca rbon into the sediments of the deep sea. Quantifying the amount of this ca rbon exported to the deep and identifying the variables driving that expor t is vital to understanding how we might better mitigate the deleterious e ffects of climate change. The Monterey Bay Aquarium Institutes MBARI has d eveloped high endurance mobile robots to investigate ocean carbon transpor t. One of its vehicles\, the Benthic Rover has been working continuously o n the seafloor at 4000m for 6 years– measuring the spatial and temporal va riability of carbon export from the surface. This long-term dataset has re vealed that carbon enters the deep sea in large pulses of sinking detritus . MBARI is now focused on connecting these carbon pulses to processes in t he upper layers of the ocean. Exploring\, mapping and sampling the upper w ater column to uncover ocean productivity hotspots (HS) is a central/key i nitiative/goal requiring the collaboration of MBARI’s Long Range Autonomou s Underwater Vehicles (LRAUVs) as well as other complementary vehicles tha t are able to measure the full ecology of the hotspots from the microbes t o the whales.\nBio:\nBrett W. Hobson received a BS in Mechanical Engineeri ng from San Francisco State University in 1989. He began his ocean engine ering career at Deep Ocean Engineering in San Leandro California\, develop ing remotely operated vehicles (ROVs) and manned submarines. 
In 1992\, he helped start and run Deep Sea Discoveries where he helped develop and oper ate deep towed sonar and camera systems offshore the US\, Venezuela\, Spai n and the Philippians. In 1998\, he joined Nekton Research in North Carol ina to develop bio-inspired underwater vehicles for Navy applications. Aft er the sale of Nekton Research to iRobot in 2005\, Hobson joined the Monte rey Bay Aquarium Research Institute (MBARI) where he leads the Long Range Autonomous Underwater Vehicle (AUV) program overseeing the development and science operations of a fleet of AUVs. He also helped develop MBARI’s lo ng-endurance seafloor crawling Benthic Rover. Hobson holds a patent on the design of a biomimetic underwater vehicle and has been the Co-PI on large projects funded by NSF\, NASA\, and DHS projects aimed at developing nove l underwater vehicles for ocean science. DTSTART;TZID=America/New_York:20230329T120000 DTEND;TZID=America/New_York:20230329T130000 LOCATION:Hackerman B17 SEQUENCE:0 SUMMARY:LCSR Seminar: Brett Hobson “The development of robots for open ocea n ecology” URL:https://lcsr.jhu.edu/events/brett-hobson/ X-COST-TYPE:free X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\n< /p>\n
Abstract:
\nThe open ocean is a massive 3D ecosystem responsible for absorbing much of Earth’s excess heat and CO2 emissions produced by humans. A portion of the ocean’s carbon pump sequesters atmospheric carbon into the sediments of the deep sea. Quantifying the amount of this carbon exported to the deep and identifying the variables driving that export is vital to understanding how we might better mitigate the deleterious effects of climate change. The Monterey Bay Aquarium Research Institute (MBARI) has developed high-endurance mobile robots to investigate ocean carbon transport. One of its vehicles\, the Benthic Rover\, has been working continuously on the seafloor at 4000 m for 6 years\, measuring the spatial and temporal variability of carbon export from the surface. This long-term dataset has revealed that carbon enters the deep sea in large pulses of sinking detritus. MBARI is now focused on connecting these carbon pulses to processes in the upper layers of the ocean. Exploring\, mapping\, and sampling the upper water column to uncover ocean productivity hotspots (HS) is a central goal requiring the collaboration of MBARI’s Long Range Autonomous Underwater Vehicles (LRAUVs) as well as other complementary vehicles that are able to measure the full ecology of the hotspots from the microbes to the whales.
\nBio:
\nBrett W. Hobson received a BS in Mechanical Engineering from San Francisco State University in 1989. He began his ocean engineering career at Deep Ocean Engineering in San Leandro\, California\, developing remotely operated vehicles (ROVs) and manned submarines. In 1992\, he helped start and run Deep Sea Discoveries\, where he helped develop and operate deep towed sonar and camera systems offshore the US\, Venezuela\, Spain\, and the Philippines. In 1998\, he joined Nekton Research in North Carolina to develop bio-inspired underwater vehicles for Navy applications. After the sale of Nekton Research to iRobot in 2005\, Hobson joined the Monterey Bay Aquarium Research Institute (MBARI)\, where he leads the Long Range Autonomous Underwater Vehicle (AUV) program\, overseeing the development and science operations of a fleet of AUVs. He also helped develop MBARI’s long-endurance seafloor-crawling Benthic Rover. Hobson holds a patent on the design of a biomimetic underwater vehicle and has been the Co-PI on large projects funded by NSF\, NASA\, and DHS aimed at developing novel underwater vehicles for ocean science.
\n END:VEVENT BEGIN:VEVENT UID:ai1ec-13546@lcsr.jhu.edu DTSTAMP:20240319T071248Z CATEGORIES: CONTACT:LCSR\; lcsr-admin@jhu.edu DESCRIPTION:Link for Live Seminar\nLink for Recorded seminars – 2022/2023 s chool year\n \nBenjamin D. Killeen “An Autonomous X-ray Image Acquisition and Interpretation System for Assisting Percutaneous Pelvic Fracture Fixat ion”\nAbstract: Percutaneous fracture fixation involves multiple X-ray acq uisitions to determine adequate tool trajectories in bony anatomy. In orde r to reduce time spent adjusting the X-ray imager’s gantry\, avoid excess acquisitions\, and anticipate inadequate trajectories before penetrating b one\, we propose an autonomous system for intra-operative feedback that co mbines robotic X-ray imaging and machine learning for automated image acqu isition and interpretation\, respectively. Our approach reconstructs an ap propriate trajectory in a two-image sequence\, where the optimal second vi ewpoint is determined based on analysis of the first image. The reconstruc ted corridor and K-wire pose are compared to determine likelihood of corti cal breach\, and both are visualized for the clinician in a mixed reality environment that is spatially registered to the patient and delivered by a n optical see-through head-mounted display. We assess the upper bounds on system performance through in silico evaluation across 11 CTs with fractur es present\, in which the corridor and K-wire are adequately reconstructed . In post-hoc analysis of radiographs across 3 cadaveric specimens\, our s ystem determines the appropriate trajectory to within 2.8 ± 1.3 mm and 2.7 ± 1.8°. An expert user study with an anthropomorphic phantom demonstrates how our autonomous\, integrated system requires fewer images and lower mo vement to guide and confirm adequate placement compared to current clinica l practice.\nBio: A 4th year Ph.D. candidate at Johns Hopkins University\, Benjamin D. 
Killeen is interested in intelligent surgical systems that im prove patient outcomes. His recent work involves realistic simulation of i nterventional X-ray imaging for the purpose of developing AI-integrated su rgical systems. Benjamin is a member of the Advanced Robotics and Computat ionally Augmented Environments (ARCADE) research group\, led by Mathias Un berath\, as well as the President of the LCSR Graduate Student Association (GSA) and Sports Officer for the MICCAI Student Board. In 2019\, he earne d a B.A. in Computer Science with Honors from the University of Chicago\, with a minor in Physics\, and he has completed internships at IBM Research – Almaden\, Epic Systems\, and Intuitive Surgical. In his spare time\, he enjoys bouldering and creative writing.\n \nDivya Ramesh “Studying terres trial fish locomotion on wet deformable substrates”\nAbstract: Many amphib ious fishes can make forays onto land. The water-land interface often has wet deformable substrates like mud and sand\, whose strength changes as th ey get dryer or wetter\, challenging locomotion. Most previous terrestrial locomotion studies of fishes focused on quantifying kinematics\, muscle c ontrol\, and functional morphology. Yet\, without quantifying how the comp lex mechanics of wet deformable substrates affect ground reaction forces d uring locomotion\, we cannot fully understand how these locomotor features interact with the environment to permit performance. Here\, we used contr olled mud as a model wet deformable substrate and developed methods to pre pare mud into spatially uniform and temporally stable states and tools to characterize its strength. As a first step to understand how mud strength impact locomotion\, we studied the Atlantic mudskipper (Periophthalmus bar barus) moving on a thicker and a thinner mud\, which differs in strength b y a factor of two. 
The animal performed similar “crutching” walks on mud o f both strengths\, with only a slight reduction in speed on the thinner mu d (from 0.39 ± 0.12 to 0.32 ± 0.14 body length/s\, P < 0.05\, ANOVA). Howe ver\, it jumped more frequently on the thinner mud (from 1.2 ± 0.7 to 3.2 ± 1.6 times per minute\, P < 0.05\, ANOVA)\, likely due to it sticking mor e to the belly and fins and hindering walking.\nBio: Divya Ramesh is a fou rth year PhD student in Dr. Chen Li’s lab (Terradynamics lab). Her current work focuses in studying and understanding amphibious fish locomotion on wet deformable substrates. Her previous work focused in using contact sens ing to study and understand limbless locomotion of snakes and snake-robot on 3-D terrains. She received a BTech in Electronics and Communication Eng ineering from VIT University (Vellore\, India) and MSE in Electrical Engin eering from University of Pennsylvania. She has published in IEEE RA-L (pr esented at ICRA 2020) and presented at ICRA 2022. This work was presented in SICB 2023 where she was a finalist for Best Student Presentation in the Division of Comparative Biomechanics.\n \nGargi Sadalgekar “Template-leve l robophysical models for studying sustained terrestrial locomotion of amp hibious fish”\nAbstract: Studying terrestrial locomotion of amphibious fis hes informs how early tetrapods may have invaded land. The water-land inte rface often has wet\, deformable substrates like mud and sand that challen ge locomotion. Recent progress has been made on understanding limbed and l imbless tetrapod locomotion by studying robots as active physical models o f model organisms. Robophysical models complement animals with their high controllability and repeatability for systematic experiments. They also co mplement theoretical and computational models because they enact physical laws in the real world\, which is especially useful for studying locomotio n in complex terrain. 
Here\, we created the first robophysical models for studying sustained terrestrial locomotion of amphibious fishes on controlled mud as a model wet deformable substrate. Our three robots are on the template level (lowest degree-of-freedom to generate a target locomotor behavior) and represent mudskippers\, ropefish\, and bichirs\, which use appendicular\, axial\, and axial-appendicular strategies\, respectively. The mudskipper robot rotated two fins in phase to raise the body and “crutch” forward on mud. The ropefish robot used body lateral undulation to “surface-swim” on mud. The bichir robot combined body undulation and out-of-phase fin rotations to “army-crawl” forward on mud. Each robot generated qualitatively similar locomotion on mud to its model organism. We are currently refining the robots and performing systematic experiments on mud of a wide range of strengths.\nBio: Gargi Sadalgekar is a second-year master’s student in the Terradynamics Lab at Johns Hopkins University and is interested in developing bio-inspired robots to investigate locomotion in extreme environments. Her current work focuses on developing low-order robophysical models of amphibious fish to uncover general principles of locomotion over wet deformable substrates\, and this work was presented at SICB 2023\, where she was a finalist for Best Student Presentation in the Division of Comparative Biomechanics. Gargi received a BSE in Mechanical and Aerospace Engineering from Princeton University with a minor in Robotics and Information Systems.\n \nYaqing Wang “Force sensing can help robots reconstruct potential energy landscape and guide locomotor transitions to traverse large obstacles”\nAbstract: Legged robots already excel at maintaining stability during upright walking and running to step over small obstacles.
However\, they must further traverse large obstacles comparable to body size to enable a broader range of applications like search and rescue in rubble and sample collection in rocky Martian hills. Our lab’s recent research demonstrated that legged robots can traverse large obstacles if they can be destabilized to transition across various locomotor modes. When viewed on a potential energy landscape of the system\, which results from physical interaction with obstacles\, these locomotor transitions are strenuous barrier-crossing transitions between landscape basins. Because potential energy landscape gradients are closely related to terrain reaction forces and torques\, we hypothesize that sensing obstacle interaction forces allows landscape reconstruction\, which can guide robots to cross barriers at the saddle to make transitions more easily (analogous to crossing a mountain ridge at its saddle). Here\, we created a robophysical model with custom 3-axis force sensors and surface contact sensors to measure forces and contacts during interaction with large obstacles. We found that the measured forces indeed captured potential energy landscape gradients well\, and we could use the locally measured gradients to roughly reconstruct the potential energy landscape. Our future work will investigate how to enable robots to make locomotor transitions at the landscape saddle based on local landscape reconstruction.\nBio: Yaqing Wang is a fourth-year PhD student in Dr. Chen Li’s lab (Terradynamics Lab). His work focuses on understanding locomotor transitions in biological and bio-inspired terrestrial locomotion. He received a B.S. in Mechanical Engineering at Tsinghua University in China. He recently presented this work at the annual APS March Meeting.
DTSTART;TZID=America/New_York:20230405T120000 DTEND;TZID=America/New_York:20230405T130000 LOCATION:Hackerman B17 SEQUENCE:0 SUMMARY:LCSR Seminar: Student Seminar URL:https://lcsr.jhu.edu/events/lcsr-seminar-student-2/ X-COST-TYPE:free X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\n</p>\n
Benjamin D. Killeen “An Autonomous X-ray Image Acquisition and Interpretation System for Assisting Percutaneous Pelvic Fracture Fixation”
\nAbstract: Percutaneous fracture fixation involves multiple X-ray acquisitions to determine adequate tool trajectories in bony anatomy. In order to reduce time spent adjusting the X-ray imager’s gantry\, avoid excess acquisitions\, and anticipate inadequate trajectories before penetrating bone\, we propose an autonomous system for intra-operative feedback that combines robotic X-ray imaging and machine learning for automated image acquisition and interpretation\, respectively. Our approach reconstructs an appropriate trajectory in a two-image sequence\, where the optimal second viewpoint is determined based on analysis of the first image. The reconstructed corridor and K-wire pose are compared to determine the likelihood of cortical breach\, and both are visualized for the clinician in a mixed reality environment that is spatially registered to the patient and delivered by an optical see-through head-mounted display. We assess the upper bounds on system performance through in silico evaluation across 11 CTs with fractures present\, in which the corridor and K-wire are adequately reconstructed. In post-hoc analysis of radiographs across 3 cadaveric specimens\, our system determines the appropriate trajectory to within 2.8 ± 1.3 mm and 2.7 ± 1.8°. An expert user study with an anthropomorphic phantom demonstrates how our autonomous\, integrated system requires fewer images and less movement to guide and confirm adequate placement compared to current clinical practice.
\nBio: A 4th-year Ph.D. candidate at Johns Hopkins University\, Benjamin D. Killeen is interested in intelligent surgical systems that improve patient outcomes. His recent work involves realistic simulation of interventional X-ray imaging for the purpose of developing AI-integrated surgical systems. Benjamin is a member of the Advanced Robotics and Computationally Augmented Environments (ARCADE) research group\, led by Mathias Unberath\, as well as the President of the LCSR Graduate Student Association (GSA) and Sports Officer for the MICCAI Student Board. In 2019\, he earned a B.A. in Computer Science with Honors from the University of Chicago\, with a minor in Physics\, and he has completed internships at IBM Research – Almaden\, Epic Systems\, and Intuitive Surgical. In his spare time\, he enjoys bouldering and creative writing.
\n\n
Divya Ramesh “Studying terrestrial fish locomotion on wet deformable substrates”
\nAbstract: Many amphibious fishes can make forays onto land. The water-land interface often has wet deformable substrates like mud and sand\, whose strength changes as they get drier or wetter\, challenging locomotion. Most previous terrestrial locomotion studies of fishes focused on quantifying kinematics\, muscle control\, and functional morphology. Yet\, without quantifying how the complex mechanics of wet deformable substrates affect ground reaction forces during locomotion\, we cannot fully understand how these locomotor features interact with the environment to permit performance. Here\, we used controlled mud as a model wet deformable substrate and developed methods to prepare mud into spatially uniform and temporally stable states and tools to characterize its strength. As a first step to understand how mud strength impacts locomotion\, we studied the Atlantic mudskipper (Periophthalmus barbarus) moving on a thicker and a thinner mud\, which differ in strength by a factor of two. The animal performed similar “crutching” walks on mud of both strengths\, with only a slight reduction in speed on the thinner mud (from 0.39 ± 0.12 to 0.32 ± 0.14 body length/s\, P < 0.05\, ANOVA). However\, it jumped more frequently on the thinner mud (from 1.2 ± 0.7 to 3.2 ± 1.6 times per minute\, P < 0.05\, ANOVA)\, likely because mud stuck more to its belly and fins\, hindering walking.
\nBio: Divya Ramesh is a fourth-year PhD student in Dr. Chen Li’s lab (Terradynamics Lab). Her current work focuses on studying and understanding amphibious fish locomotion on wet deformable substrates. Her previous work focused on using contact sensing to study and understand limbless locomotion of snakes and snake robots on 3-D terrains. She received a BTech in Electronics and Communication Engineering from VIT University (Vellore\, India) and an MSE in Electrical Engineering from the University of Pennsylvania. She has published in IEEE RA-L (presented at ICRA 2020) and presented at ICRA 2022. This work was presented at SICB 2023\, where she was a finalist for Best Student Presentation in the Division of Comparative Biomechanics.
\n\n
Gargi Sadalgekar “Template-level robophysical models for studying sustained terrestrial locomotion of amphibious fish”
\nAbstract: Studying terrestrial locomotion of amphibious fishes informs how early tetrapods may have invaded land. The water-land interface often has wet\, deformable substrates like mud and sand that challenge locomotion. Recent progress has been made on understanding limbed and limbless tetrapod locomotion by studying robots as active physical models of model organisms. Robophysical models complement animals with their high controllability and repeatability for systematic experiments. They also complement theoretical and computational models because they enact physical laws in the real world\, which is especially useful for studying locomotion in complex terrain. Here\, we created the first robophysical models for studying sustained terrestrial locomotion of amphibious fishes on controlled mud as a model wet deformable substrate. Our three robots are on the template level (lowest degree-of-freedom to generate a target locomotor behavior) and represent mudskippers\, ropefish\, and bichirs\, which use appendicular\, axial\, and axial-appendicular strategies\, respectively. The mudskipper robot rotated two fins in phase to raise the body and “crutch” forward on mud. The ropefish robot used body lateral undulation to “surface-swim” on mud. The bichir robot combined body undulation and out-of-phase fin rotations to “army-crawl” forward on mud. Each robot generated qualitatively similar locomotion on mud to its model organism. We are currently refining the robots and performing systematic experiments on mud of a wide range of strengths.
\nBio: Gargi Sadalgekar is a second-year master’s student in the Terradynamics Lab at Johns Hopkins University and is interested in developing bio-inspired robots to investigate locomotion in extreme environments. Her current work focuses on developing low-order robophysical models of amphibious fish to uncover general principles of locomotion over wet deformable substrates\, and this work was presented at SICB 2023\, where she was a finalist for Best Student Presentation in the Division of Comparative Biomechanics. Gargi received a BSE in Mechanical and Aerospace Engineering from Princeton University with a minor in Robotics and Information Systems.
\n\nYaqing Wang “Force sensing can help robots reconstruct potential energy landscape and guide locomotor transitions to traverse large obstacles”
Abstract: Legged robots already excel at maintaining stability during upright walking and running to step over small obstacles. However\, they must further traverse large obstacles comparable to body size to enable a broader range of applications like search and rescue in rubble and sample collection in rocky Martian hills. Our lab’s recent research demonstrated that legged robots can traverse large obstacles if they can be destabilized to transition across various locomotor modes. When viewed on a potential energy landscape of the system\, which results from physical interaction with obstacles\, these locomotor transitions are strenuous barrier-crossing transitions between landscape basins. Because potential energy landscape gradients are closely related to terrain reaction forces and torques\, we hypothesize that sensing obstacle interaction forces allows landscape reconstruction\, which can guide robots to cross barriers at the saddle to make transitions more easily (analogous to crossing a mountain ridge at its saddle). Here\, we created a robophysical model with custom 3-axis force sensors and surface contact sensors to measure forces and contacts during interaction with large obstacles. We found that the measured forces indeed captured potential energy landscape gradients well\, and we could use the locally measured gradients to roughly reconstruct the potential energy landscape. Our future work will investigate how to enable robots to make locomotor transitions at the landscape saddle based on local landscape reconstruction.
\nBio: Yaqing Wang is a fourth-year PhD student in Dr. Chen Li’s lab (Terradynamics Lab). His work focuses on understanding locomotor transitions in biological and bio-inspired terrestrial locomotion. He received a B.S. in Mechanical Engineering at Tsinghua University in China. He recently presented this work at the annual APS March Meeting.
\n</BODY> END:VEVENT BEGIN:VEVENT UID:ai1ec-13548@lcsr.jhu.edu DTSTAMP:20240319T071248Z CATEGORIES: CONTACT:LCSR\; lcsr-admin@jhu.edu DESCRIPTION:Link for Live Seminar\nLink for Recorded seminars – 2022/2023 school year\n \nAbstract:\nEmotional intelligence for artificial systems is not a luxury but a necessity. It is paramount for many applications that require both short- and long-term engaging human–technology interactions\, including entertainment\, hospitality\, education\, and healthcare. However\, creating artificially intelligent systems and interfaces with social and emotional skills is a challenging task. Progress in industry and developments in academia provide a positive outlook\; however\, the artificial emotional intelligence of current technology is still quite limited. Creating technology with artificial emotional intelligence requires the development of perception\, learning\, action\, and adaptation capabilities\, and the ability to execute these pipelines in real time in human-AI interactions. Truly addressing these challenges relies on cross-fertilization of multiple research fields\, including psychology\, nonverbal behaviour understanding\, psychiatry\, vision\, social signal processing\, affective computing\, and human-computer and human-robot interaction.
My lab’s research has been pushing the state of the art in a wide spectrum of research topics in this area\, including the design and creation of new datasets\; novel feature representations and learning algorithms for sensing and understanding human nonverbal behaviours in solo\, dyadic and group settings\; designing short/long-term human-robot adaptive interactions for wellbeing\; and creating algorithmic solutions to mitigate the bias that creeps into these systems.\nIn this talk\, I will present the recent explorations of the Cambridge Affective Intelligence and Robotics Lab in these areas with insights for human embodied-AI interaction research.\nBio:\nHatice Gunes is a Professor of Affective Intelligence and Robotics (AFAR) and leads the AFAR Lab at the University of Cambridge’s Department of Computer Science and Technology. Her expertise is in the areas of affective computing and social signal processing\, cross-fertilising research in multimodal interaction\, computer vision\, signal processing\, machine learning\, and social robotics. She has published over 155 papers in these areas (H-index=36\, citations > 7\,300)\, with most recent works on lifelong learning for facial expression recognition\, fairness\, and affective robotics\; and longitudinal HRI for wellbeing. She has served as an Associate Editor for IEEE Transactions on Affective Computing\, IEEE Transactions on Multimedia\, and Image and Vision Computing Journal\, and has guest edited many Special Issues\, the latest ones being the 2022 Int’l Journal of Social Robotics Special Issue on Embodied Agents for Wellbeing\, 2022 Frontiers in Robotics and AI Special Issue on Lifelong Learning and Long-Term Human-Robot Interaction\, and 2021 IEEE Transactions on Affective Computing Special Issue on Automated Perception of Human Affect from Longitudinal Behavioural Data.
Other research highlights include the Outstanding PC Award at ACM/IEEE HRI’23\, RSJ/KROS Distinguished Interdisciplinary Research Award Finalist at IEEE RO-MAN’21\, Distinguished PC Award at IJCAI’21\, Best Paper Award Finalist at IEEE RO-MAN’20\, Finalist for the 2018 Frontiers Spotlight Award\, Outstanding Paper Award at IEEE FG’11\, and Best Demo Award at IEEE ACII’09. Prof Gunes is a former President of the Association for the Advancement of Affective Computing (2017-2019)\, is/was the General Co-Chair of ACM ICMI’24 and ACII’19\, and the Program Co-Chair of ACM/IEEE HRI’20 and IEEE FG’17. She was the Chair of the Steering Board of IEEE Transactions on Affective Computing (2017-2019) and was a member of the Human-Robot Interaction Steering Committee (2018-2021). Her research has been supported by various competitive grants\, with funding from Google\, the Engineering and Physical Sciences Research Council UK (EPSRC)\, Innovate UK\, British Council\, the Alan Turing Institute\, and EU Horizon 2020. In 2019\, she was awarded a prestigious EPSRC Fellowship to investigate adaptive robotic emotional intelligence for wellbeing (2019-2024) and has been named a Faculty Fellow of the Alan Turing Institute – UK’s national centre for data science and artificial intelligence (2019-2021). Prof Gunes is currently a Staff Fellow of Trinity Hall\, a Senior Member of the IEEE\, and a member of the AAAC. DTSTART;TZID=America/New_York:20230412T120000 DTEND;TZID=America/New_York:20230412T130000 LOCATION:Hackerman B17 SEQUENCE:0 SUMMARY:LCSR Seminar: Hatice Gunes “Emotional Intelligence for Human-Embodied AI Interaction” URL:https://lcsr.jhu.edu/events/hatice-gunes/ X-COST-TYPE:free X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\n</p>\n
Abstract:
\nEmotional intelligence for artificial systems is not a luxury but a necessity. It is paramount for many applications that require both short- and long-term engaging human–technology interactions\, including entertainment\, hospitality\, education\, and healthcare. However\, creating artificially intelligent systems and interfaces with social and emotional skills is a challenging task. Progress in industry and developments in academia provide a positive outlook\; however\, the artificial emotional intelligence of current technology is still quite limited. Creating technology with artificial emotional intelligence requires the development of perception\, learning\, action\, and adaptation capabilities\, and the ability to execute these pipelines in real time in human-AI interactions. Truly addressing these challenges relies on cross-fertilization of multiple research fields\, including psychology\, nonverbal behaviour understanding\, psychiatry\, vision\, social signal processing\, affective computing\, and human-computer and human-robot interaction. My lab’s research has been pushing the state of the art in a wide spectrum of research topics in this area\, including the design and creation of new datasets\; novel feature representations and learning algorithms for sensing and understanding human nonverbal behaviours in solo\, dyadic and group settings\; designing short/long-term human-robot adaptive interactions for wellbeing\; and creating algorithmic solutions to mitigate the bias that creeps into these systems.
\nIn this talk\, I will present the recent explorations of the Cambridge Affective Intelligence and Robotics Lab in these areas with insights for human embodied-AI interaction research.
\nBio:
\nHatice Gunes is a Professor of Affective Intelligence and Robotics (AFAR) and leads the AFAR Lab at the University of Cambridge’s Department of Computer Science and Technology. Her expertise is in the areas of affective computing and social signal processing\, cross-fertilising research in multimodal interaction\, computer vision\, signal processing\, machine learning\, and social robotics. She has published over 155 papers in these areas (H-index=36\, citations > 7\,300)\, with most recent works on lifelong learning for facial expression recognition\, fairness\, and affective robotics\; and longitudinal HRI for wellbeing. She has served as an Associate Editor for IEEE Transactions on Affective Computing\, IEEE Transactions on Multimedia\, and Image and Vision Computing Journal\, and has guest edited many Special Issues\, the latest ones being the 2022 Int’l Journal of Social Robotics Special Issue on Embodied Agents for Wellbeing\, 2022 Frontiers in Robotics and AI Special Issue on Lifelong Learning and Long-Term Human-Robot Interaction\, and 2021 IEEE Transactions on Affective Computing Special Issue on Automated Perception of Human Affect from Longitudinal Behavioural Data. Other research highlights include the Outstanding PC Award at ACM/IEEE HRI’23\, RSJ/KROS Distinguished Interdisciplinary Research Award Finalist at IEEE RO-MAN’21\, Distinguished PC Award at IJCAI’21\, Best Paper Award Finalist at IEEE RO-MAN’20\, Finalist for the 2018 Frontiers Spotlight Award\, Outstanding Paper Award at IEEE FG’11\, and Best Demo Award at IEEE ACII’09. Prof Gunes is a former President of the Association for the Advancement of Affective Computing (2017-2019)\, is/was the General Co-Chair of ACM ICMI’24 and ACII’19\, and the Program Co-Chair of ACM/IEEE HRI’20 and IEEE FG’17. She was the Chair of the Steering Board of IEEE Transactions on Affective Computing (2017-2019) and was a member of the Human-Robot Interaction Steering Committee (2018-2021).
Her research has been supported by various competitive grants\, with funding from Google\, the Engineering and Physical Sciences Research Council UK (EPSRC)\, Innovate UK\, British Council\, the Alan Turing Institute\, and EU Horizon 2020. In 2019\, she was awarded a prestigious EPSRC Fellowship to investigate adaptive robotic emotional intelligence for wellbeing (2019-2024) and has been named a Faculty Fellow of the Alan Turing Institute – UK’s national centre for data science and artificial intelligence (2019-2021). Prof Gunes is currently a Staff Fellow of Trinity Hall\, a Senior Member of the IEEE\, and a member of the AAAC.
\n</HTML> END:VEVENT BEGIN:VEVENT UID:ai1ec-13698@lcsr.jhu.edu DTSTAMP:20240319T071248Z CATEGORIES: CONTACT: DESCRIPTION:Zoom Link for Seminar Recorded seminars for the 2022/2023 school year \n \nAbstract:\nOld monkeys may have stories. Some could be lessons learned to help overcome obstacles. The first part of this seminar discusses classical robotic applications in industry and critical factors in their development and application. The second part discusses intelligent manufacturing with the use of data and easy-to-use analytics\, necessary in modern-day manufacturing. Moving forward\, some opportunities in robotics in intelligent manufacturing are discussed.\n \n \nBio:\nDr. Day was previously a Senior VP of Foxconn Automation Technology. Dr. Day began his career in 1970 as a co-op equipment development engineer at IBM Burlington\, VT\, and later continued working in the manufacturing automation field with General Motors\, Fanuc\, Rockwell Automation\, Stoneridge\, and Foxconn. Dr. Day was the founder of Foxbot\, with 80\,000 units deployed in various applications. In June 2016\, Dr. Day received the Joseph F. Engelberger Award from the Robotic Industries Association for a lifetime career contribution in the automotive and electronic industries.\n DTSTART;TZID=America/New_York:20230418T110000 DTEND;TZID=America/New_York:20230418T120000 LOCATION:106 Latrobe Hall SEQUENCE:0 SUMMARY:LCSR Seminar: Chia Day “Robotics in Intelligent Manufacturing” URL:https://lcsr.jhu.edu/events/lcsr-seminar-chia-day-robotics-in-intelligent-manufacturing/ X-COST-TYPE:free X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\nZoom Link for Seminar Recorded seminars for the 2022/2023 school year
\n\n
Abstract:
\nOld monkeys may have stories. Some could be lessons learned to help overcome obstacles. The first part of this seminar discusses classical robotic applications in industry and critical factors in their development and application. The second part discusses intelligent manufacturing with the use of data and easy-to-use analytics\, necessary in modern-day manufacturing. Moving forward\, some opportunities in robotics in intelligent manufacturing are discussed.
\n\n
\n
Bio:
\nDr. Day was previously a Senior VP of Foxconn Automation Technology. Dr. Day began his career in 1970 as a co-op equipment development engineer at IBM Burlington\, VT\, and later continued working in the manufacturing automation field with General Motors\, Fanuc\, Rockwell Automation\, Stoneridge\, and Foxconn. Dr. Day was the founder of Foxbot\, with 80\,000 units deployed in various applications. In June 2016\, Dr. Day received the Joseph F. Engelberger Award from the Robotic Industries Association for a lifetime career contribution in the automotive and electronic industries.
\n\n END:VEVENT BEGIN:VEVENT UID:ai1ec-13550@lcsr.jhu.edu DTSTAMP:20240319T071248Z CATEGORIES: CONTACT:LCSR\; lcsr-admin@jhu.edu DESCRIPTION: \nLink for Live Seminar\nLink for Recorded seminars – 2022/202 3 school year\n \n“Games Without Frontiers: Beating Super Mario Bros. 1-1 with a 3D-Printed Soft Robotic Hand”\n Ryan D. Sochol\, Ph.D.\n \nAssociat e Professor\, Department of Mechanical Engineering\nAffiliate Faculty\, Fi schell Department of Bioengineering\nExecutive Committee Member\, Maryland Robotics Center\nFischell Institute Fellow\, Robert E. Fischell Institute for Biomedical Devices\nAffiliate Faculty\, Institute for Systems Researc h\nJames Clark School of Engineering\nUniversity of Maryland\, College Par k\n \nAbstract:\n\nOver the past decade\, the field of “soft robotics” has established itself as uniquely suited for applications that would be diff icult or impossible to realize using traditional\, rigid-bodied robots. T he reliance on compliant materials that are often actuated by fluidic (e.g .\, hydraulic or pneumatic) means presents a number of inherent benefits f or soft robots\, particularly in terms of safety for human-robot interacti ons and adaptability for manipulating complex and/or delicate objects. Un fortunately\, progress has been impeded by broad challenges associated wit h controlling the underlying fluidics of such systems. In this seminar\, Prof. Ryan D. Sochol will discuss how his Bioinspired Advanced Manufacturi ng (BAM) Laboratory is leveraging the capabilities of two alternative type s of additive manufacturing (or “three-dimensional (3D) printing”) technol ogies to address these critical barriers. Specifically\, Prof. 
Sochol wil l describe his lab’s recent strategies for using the 3D nanoprinting appro ach\, “Two-Photon Direct Laser Writing”\, and the inkjet 3D printing techn ique\, “PolyJet 3D Printing”\, to engineer soft robotic systems that compr ise integrated fluidic circuitry… including a soft robotic “hand” that pla ys Nintendo.\n \nBiography:\nProf. Ryan D. Sochol is an Associate Professo r of Mechanical Engineering within the A. James Clark School of Engineerin g at the University of Maryland\, College Park. Prof. Sochol received his B.S. in Mechanical Engineering from Northwestern University in 2006\, and both his M.S. and Ph.D. in Mechanical Engineering from the University of California\, Berkeley\, in 2009 and 2011\, respectively\, with Doctoral Mi nors in Bioengineering and Public Health. Prior to joining the faculty at UMD\, Prof. Sochol served two primary academic roles: (i) as an NIH Postd octoral Trainee within the Harvard-MIT Division of Health Sciences & Techn ology\, Harvard Medical School\, and Brigham & Women’s Hospital\, and (ii) as the Director of the Micro Mechanical Methods for Biology (M3B) Laborat ory Program within the Berkeley Sensor & Actuator Center at UC Berkeley. Prof. Sochol also served as a Visiting Postdoctoral Fellow at the Universi ty of Tokyo. In 2019\, Prof. Sochol was elected Co-President of the Mid-A tlantic Micro/Nano Alliance. His group received IEEE MEMS Outstanding Stu dent Paper Awards in both 2019 and 2021 and the Springer Nature Best Paper Award (Runner-Up) in 2022. Prof. 
Sochol received the NSF CAREER Award i n 2020 and the Early Career Award from the IOP Journal of Micromechanics a nd Microengineering in 2021\, and was recently honored as an inaugural Ris ing Star by the journal\, Advanced Materials Technologies\, in 2023.\n DTSTART;TZID=America/New_York:20230419T120000 DTEND;TZID=America/New_York:20230419T130000 LOCATION:Hackerman B17 SEQUENCE:0 SUMMARY:LCSR Seminar: Ryan Sochol “Games without Frontiers: Beating Super M ario Bros. 1-1 with a 3D printed Soft Robotic Hand” URL:https://lcsr.jhu.edu/events/ryan-sochol/ X-COST-TYPE:free X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\n
\n
\n
“Games Without Frontiers: Beating Super Mario Bros. 1-1 with a 3D-Printed Soft Robotic Hand”
\nRyan D. Sochol\, Ph.D.\nAssociate Professor\, Department of Mechanical Engineering\nAffiliate Faculty\, Fischell Department of Bioengineering\nExecutive Committee Member\, Maryland Robotics Center\nFischell Institute Fellow\, Robert E. Fischell Institute for Biomedical Devices\nAffiliate Faculty\, Institute for Systems Research\nJames Clark School of Engineering\nUniversity of Maryland\, College Park
\n
Abstract:
\n
Over the past decade\, the field of “s oft robotics” has established itself as uniquely suited for applications t hat would be difficult or impossible to realize using traditional\, rigid- bodied robots. The reliance on compliant materials that are often actuate d by fluidic (e.g.\, hydraulic or pneumatic) means presents a num ber of inherent benefits for soft robots\, particularly in terms of safety for human-robot interactions and adaptability for manipulating complex an d/or delicate objects. Unfortunately\, progress has been impeded by broad challenges associated with controlling the underlying fluidics of such sy stems. In this seminar\, Prof. Ryan D. Sochol will discuss how his Bioinspired Advanced Manufacturing (BAM) Labo ratory is leveraging the capabilities of two alternative types of additive manufacturing (or “three-dimensional (3D) printing”) technologie s to address these critical barriers. Specifically\, Prof. Sochol will de scribe his lab’s recent strategies for using the 3D nanoprinting approach\ , “Two-Photon Direct Laser Writing”\, and the inkjet 3D printing technique \, “PolyJet 3D Printing”\, to engineer soft robotic systems that comprise integrated fluidic circuitry… including a soft robotic “hand” that plays N intendo.
\n\n
Biography:
\nProf. Ryan D. Sochol is an Associate Professor of Mechanical Engineering within the A. James Clark School of Engineering at the University of Maryland\, College Park. Prof. Sochol received his B.S. in Mechanical Engineering from Northwestern University in 2006\, and both his M.S. and Ph.D. in Mechanical Engineering from the University of California\, Berkeley\, in 2009 and 2011\, respectively\, with Doctoral Minors in Bioengineering and Public Health. Prior to joining the faculty at UMD\, Prof. Sochol served two primary academic roles: (i) as an NIH Postdoctoral Trainee within the Harvard-MIT Division of Health Sciences & Technology\, Harvard Medical School\, and Brigham & Women’s Hospital\, and (ii) as the Director of the Micro Mechanical Methods for Biology (M3B) Laboratory Program within the Berkeley Sensor & Actuator Center at UC Berkeley. Prof. Sochol also served as a Visiting Postdoctoral Fellow at the University of Tokyo. In 2019\, Prof. Sochol was elected Co-President of the Mid-Atlantic Micro/Nano Alliance. His group received IEEE MEMS Outstanding Student Paper Awards in both 2019 and 2021 and the Springer Nature Best Paper Award (Runner-Up) in 2022. Prof. Sochol received the NSF CAREER Award in 2020 and the Early Career Award from the IOP Journal of Micromechanics and Microengineering in 2021\, and was recently honored as an inaugural Rising Star by the journal\, Advanced Materials Technologies\, in 2023.
\n END:VEVENT BEGIN:VEVENT UID:ai1ec-13552@lcsr.jhu.edu DTSTAMP:20240319T071248Z CATEGORIES: CONTACT:LCSR\; lcsr-admin@jhu.edu DESCRIPTION: \nLink for Live Seminar\nLink for Recorded seminars – 2022/202 3 school year\n \nAyushi Sinha Ph.D.\nJob title and affiliation: Senior Sc ientist\, Philips\nJHU degrees and year(s) of degree(s): Ph.D. Computer Sc ience 2018\, MSE Computer Science 2014\nShort bio: Ayushi Sinha is a Senio r Scientist at Philips working on image guided therapy systems including C -arm X-ray imaging systems. Ayushi received a BS in Computer Science and a BA in Mathematics from Providence College\, RI\, and a MSE and Ph.D. in C omputer Science from Johns Hopkins University\, MD. She remained at Hopkin s as a Provost’s Postdoctoral Fellow followed by a Research Scientist befo re joining Philips in late 2019. Her primary research interest is in image analysis to enable automation of medical imaging systems and integration of multiple systems.\n \n \nCan Kocabalkanli M.S.E.\nJob title and affilia tion: Computer Vision Research Scientist at PediaMetrix\nJHU degrees and y ear(s) of degree(s): BS Mechanical Engineering 2019\, MSE Robotics 2020\nS hort bio: Originally from Istanbul\, Turkey\, Can came to JHU for his unde rgraduate degree and early on explored an interest in robotics through cou rsework and the robotics minor. He completed his master’s research and the sis under Prof. Taylor in 2020 on an autonomous endoscope safety system. S ince graduation\, Can has been working as a Computer Vision Research Scien tist at PediaMetrix\, a medical imaging startup focused on infant healthca re. There he has worked on developing\, deploying\, and validating image p rocessing and vision algorithms\, machine and deep learning models\, as we ll as acquiring 510(k) clearance. Since September 2022\, he has taken a le adership role in their R&D department. 
Can is interested in making healthcare more robust and accessible through innovation and technology and is the co-inventor of 2 US patents.\n \n \nMichael Kutzer Ph.D.\nJob title and affiliation: Associate Professor\, United States Naval Academy Department of Weapons\, Robotics\, and Control Engineering/Instructor\, JHU-EP Mechanical Engineering Program\nJHU degrees and year(s) of degree(s): M.S.E. Mechanical Engineering 2007\, Ph.D. Mechanical Engineering 2012\nShort bio: Mike Kutzer received his Ph.D. in mechanical engineering from the Johns Hopkins University\, Baltimore\, MD\, USA in 2012. He is currently an Associate Professor in the Weapons\, Robotics\, and Control Engineering Department (WRCE) at the United States Naval Academy (USNA). Prior to joining USNA\, he worked as a senior researcher in the Research and Exploratory Development Department of the Johns Hopkins University Applied Physics Laboratory (JHU/APL). His research interests include robotic manipulation\, computer vision and motion capture\, applications of and extensions to additive manufacturing\, mechanism design and characterization\, continuum manipulators\, redundant mechanisms\, and modular systems.\n \n \n DTSTART;TZID=America/New_York:20230426T120000 DTEND;TZID=America/New_York:20230426T130000 LOCATION:Hackerman B17 SEQUENCE:0 SUMMARY:LCSR Seminar: Careers in Robotics: A Panel Discussion With Experts From Industry and Academia URL:https://lcsr.jhu.edu/events/robotics/ X-COST-TYPE:free X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\n
\n
\n
Job title and affiliation: Senior Scientist\, Philips
\nShort bio: Ayushi Sinha is a Senior Scientist at Philips working on image guided therapy systems including C-arm X-ray imaging systems. Ayushi received a BS in Computer Science and a BA in Mathematics from Providence College\, RI\, and an MSE and Ph.D. in Computer Science from Johns Hopkins University\, MD. She remained at Hopkins as a Provost’s Postdoctoral Fellow followed by a Research Scientist before joining Philips in late 2019. Her primary research interest is in image analysis to enable automation of medical imaging systems and integration of multiple systems.
\n\n
<strong>
\nJob title and affiliation: Computer Vision Research Scientist at PediaMetrix
\nJHU degrees and year(s) of degree(s): BS Mechanical Engineering 2019\, MSE Robotics 2020
\n
\n
Job title and affiliation: Associate Professor\, United States Naval Academy Department of Weapons\, Robotics\, and Control Engineering/Instructor\, JHU-EP Mechanical Engineering Program
\nJHU degrees and year(s) of degree(s): M.S.E. Mechanical Engineering 2007\, Ph.D. Mechanical Engineering 2012
Short bio: Mike Kutzer received his Ph.D. in mechanical engineering from the Johns Hopkins University\, Baltimore\, MD\, USA in 2012. He is currently an Associate Professor in the Weapons\, Robotics\, and Control Engineering Department (WRCE) at the United States Naval Academy (USNA). Prior to joining USNA\, he worked as a senior researcher in the Research and Exploratory Development Department of the Johns Hopkins University Applied Physics Laboratory (JHU/APL). His research interests include robotic manipulation\, computer vision and motion capture\, applications of and extensions to additive manufacturing\, mechanism design and characterization\, continuum manipulators\, redundant mechanisms\, and modular systems.
\n\n
\n
\n END:VEVENT BEGIN:VEVENT UID:ai1ec-13826@lcsr.jhu.edu DTSTAMP:20240319T071248Z CATEGORIES: CONTACT: DESCRIPTION:Click here to see recording. DTSTART;TZID=America/New_York:20230830T120000 DTEND;TZID=America/New_York:20230830T130000 LOCATION:Hackerman B17 SEQUENCE:0 SUMMARY:LCSR Seminar: Welcome Townhall “Review of LCSR” URL:https://lcsr.jhu.edu/events/lcsr-seminar-welcome-townhall-review-of-lcsr-hackerman-b17/ X-COST-TYPE:free X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\n
Dr. Juan Wachs is a Professor and Faculty Scholar in the Industrial Engineering School at Purdue University\, Professor of Biomedical Engineering (by courtesy)\, and an Adjunct Associate Professor of Surgery at the IU School of Medicine. He is currently serving at NSF as a Program Director for robotics and AI programs at CISE. He is also the director of the Intelligent Systems and Assistive Technologies (ISAT) Lab at Purdue\, and he is affiliated with the Regenstrief Center for Healthcare Engineering. He completed postdoctoral training at the Naval Postgraduate School’s MOVES Institute under a National Research Council Fellowship from the National Academies of Sciences. Dr. Wachs received his B.Ed.Tech in Electrical Education from ORT Academic College\, at the Hebrew University of Jerusalem campus\, and his M.Sc and Ph.D in Industrial Engineering and Management from the Ben-Gurion University of the Negev\, Israel. He is the recipient of the 2013 Air Force Young Investigator Award\, the 2015 Helmsley Senior Scientist Fellowship\, a 2016 Fulbright U.S. Scholar award\, the James A. and Sharon M. Tompkins Rising Star Associate Professorship (2017)\, and recognition as an ACM Distinguished Speaker (2018). He is also an Associate Editor of IEEE Transactions on Human-Machine Systems and Frontiers in Robotics and AI.
\n\n
Click here for the recording of Dr. Wachs’ Seminar.
\n END:VEVENT BEGIN:VEVENT UID:ai1ec-13874@lcsr.jhu.edu DTSTAMP:20240319T071248Z CATEGORIES: CONTACT:Michele Greatti\; 4105166841\; mgreatt1@jhu.edu DESCRIPTION: DTSTART;TZID=America/New_York:20230908T130000 DTEND;TZID=America/New_York:20230908T150000 LOCATION:Taharka Brothers Ice Cream truck @ Outdoors\, between Shriver & Malone SEQUENCE:0 SUMMARY:LCSR Welcome (Back) Ice Cream Social URL:https://lcsr.jhu.edu/events/lcsr-welcome-back-ice-cream-social/ X-COST-TYPE:free X-WP-IMAGES-URL:thumbnail\;https://lcsr.jhu.edu/wp-content/uploads/2023/09/2023-Welcome-Back-Ice-Cream--223x300.jpg\;223\;300\,medium\;https://lcsr.jhu.edu/wp-content/uploads/2023/09/2023-Welcome-Back-Ice-Cream--223x300.jpg\;223\;300\,large\;https://lcsr.jhu.edu/wp-content/uploads/2023/09/2023-Welcome-Back-Ice-Cream--223x300.jpg\;223\;300\,full\;https://lcsr.jhu.edu/wp-content/uploads/2023/09/2023-Welcome-Back-Ice-Cream--223x300.jpg\;223\;300 X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\n\n\n
\n
WE ARE BACK WITH OUR MONDAY BAGELS TRADITION!!
\n\n
Please join us this coming Monday 09/11 at 10:30 am at the students’ office space in Hackerman 136/137 for some fresh morning bagels!! We will provide various cream cheese spreads\, and there will be a coffee machine\, water boiler\, and K-cups for you to enjoy as well (bring your own mugs though).\n
\n
Looking forward to seeing you all there!
\nCHEERIOS!
\n\n
Lydia & Benjamin
\n\n
The LCSR Graduate Student Association (LCSR-GSA)
\nLaboratory for Computational Sensing and Robotics
\nJohns Hopkins University
\n\n END:VEVENT BEGIN:VEVENT UID:ai1ec-13830@lcsr.jhu.edu DTSTAMP:20240319T071248Z CATEGORIES: CONTACT: DESCRIPTION:Mark S. Savage is Associate Director\, Life Design Lab & Life Design Educator for Engineering master’s Students at Johns Hopkins University.\nClick here for a link to Mark’s presentation. DTSTART;TZID=America/New_York:20230913T120000 DTEND;TZID=America/New_York:20230913T130000 LOCATION:Hackerman B17 SEQUENCE:0 SUMMARY:LCSR Seminar: Mark Savage “COMPOSING AN EFFECTIVE RESUME FOR JOB & INTERNSHIP APPLICATIONS” URL:https://lcsr.jhu.edu/events/lcsr-seminar-mark-savage-tbd-hackerman-b17/ X-COST-TYPE:free X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\nMark S. Savage is Associate Director\, Life Design Lab & Life Design Educator for Engineering master’s Students at Johns Hopkins University.
\nClick here for a link to Mark’s presentation.
\n END:VEVENT BEGIN:VEVENT UID:ai1ec-13832@lcsr.jhu.edu DTSTAMP:20240319T071248Z CATEGORIES: CONTACT: DESCRIPTION:Abstract: The haptic (touch) sensations felt when interacting with the physical world create a rich and varied impression of objects and their environment. Humans can discover a significant amount of information through touch with their environment\, allowing them to assess object properties and qualities\, dexterously handle objects\, and communicate social cues and emotions. Humans are spending significantly more time in the digital world\, however\, and are increasingly interacting with people and objects through a digital medium. Unfortunately\, digital interactions remain unsatisfying and limited\, representing the human as having only two sensory inputs: visual and auditory.\n \nThis talk will focus on methods for building haptic and multimodal models that can be used to create realistic virtual interactions in mobile applications and in VR. I will discuss data-driven modeling methods that involve recording force\, vibration\, and sound data from direct interactions with the physical objects. I will compare this to new methods using machine learning to generate and tune haptic models using human preferences.\n\nBio: Heather Culbertson is a Gabilan Assistant Professor of Computer Science at the University of Southern California. Her research focuses on the design and control of haptic devices and rendering systems\, human-robot interaction\, and virtual reality. In particular\, she is interested in creating haptic interactions that are natural and realistically mimic the touch sensations experienced during interactions with the physical world. Previously\, she was a research scientist in the Department of Mechanical Engineering at Stanford University. She received her PhD in the Department of Mechanical Engineering and Applied Mechanics (MEAM) at the University of Pennsylvania.
She is currently serving as Publications Chair for IEEE Haptics Symposium. Her awards include the NSF CAREER Award\, IEEE Technical Committee on Haptics Early Career Award\, and the Okawa Research Foundation Award.\n \nClick here to watch a video recording of this presentation. DTSTART;TZID=America/New_York:20230920T120000 DTEND;TZID=America/New_York:20230920T130000 LOCATION:Hackerman B17 SEQUENCE:0 SUMMARY:LCSR Seminar: Heather Culbertson “Using Data for Increased Realism with Haptic Modeling and Devices” URL:https://lcsr.jhu.edu/events/lcsr-seminar-heather-culbertson-tbd/ X-COST-TYPE:free X-WP-IMAGES-URL:thumbnail\;https://lcsr.jhu.edu/wp-content/uploads/2023/07/HCulbertson-225x300.jpg\;186\;248\,medium\;https://lcsr.jhu.edu/wp-content/uploads/2023/07/HCulbertson-225x300.jpg\;186\;248\,large\;https://lcsr.jhu.edu/wp-content/uploads/2023/07/HCulbertson-225x300.jpg\;186\;248\,full\;https://lcsr.jhu.edu/wp-content/uploads/2023/07/HCulbertson-225x300.jpg\;186\;248 X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\nAbstract: The haptic (touch) sensations felt when interacting with the physical world create a rich and varied impression of objects and their environment. Humans can discover a significant amount of information through touch with their environment\, allowing them to assess object properties and qualities\, dexterously handle objects\, and communicate social cues and emotions. Humans are spending significantly more time in the digital world\, however\, and are increasingly interacting with people and objects through a digital medium. Unfortunately\, digital interactions remain unsatisfying and limited\, representing the human as having only two sensory inputs: visual and auditory.
\n\n
This talk will focus on methods for building haptic and multimodal models that can be used to create realistic virtual interactions in mobile applications and in VR. I will discuss data-driven modeling methods that involve recording force\, vibration\, and sound data from direct interactions with the physical objects. I will compare this to new methods using machine learning to generate and tune haptic models using human preferences.
\n\nBio: Heather Culbertson is a Gabilan Assistant Professor of Computer Science at the University of Southern California. Her research focuses on the design and control of haptic devices and rendering systems\, human-robot interaction\, and virtual reality. In particular\, she is interested in creating haptic interactions that are natural and realistically mimic the touch sensations experienced during interactions with the physical world. Previously\, she was a research scientist in the Department of Mechanical Engineering at Stanford University. She received her PhD in the Department of Mechanical Engineering and Applied Mechanics (MEAM) at the University of Pennsylvania. She is currently serving as Publications Chair for IEEE Haptics Symposium. Her awards include the NSF CAREER Award\, IEEE Technical Committee on Haptics Early Career Award\, and the Okawa Research Foundation Award.
\n\n
Click here to watch a video recording of this presentation.\n END:VEVENT BEGIN:VEVENT UID:ai1ec-13844@lcsr.jhu.edu DTSTAMP:20240319T071248Z CATEGORIES: CONTACT: DESCRIPTION:IROS practice talks by the students followed by a 5-minute Q&A session after each paper.\n\nHisashi Ishida\nJuan Barragan\nMichael Kam\nJiawei Liu\nJim Wang. DTSTART;TZID=America/New_York:20230927T120000 DTEND;TZID=America/New_York:20230927T130000 LOCATION:Hackerman B17 SEQUENCE:0 SUMMARY:LCSR Seminar: IROS paper presentations URL:https://lcsr.jhu.edu/events/lcsr-seminar-student-seminars/ X-COST-TYPE:free X-ALT-DESC;FMTTYPE=text/html:\\n\\n
\\nIROS practice talks by the students followed by a 5-minute Q&A session after each paper.\n
Abstract:
\nThe success in medical device development depends on alignment between the needs of patients\, providers\, and hospitals. In this talk I will cover 20 years of my journey in defining clinical needs and business objectives and developing products in the space of medical devices and robotics. I will discuss products across image guidance\, navigation\, ultrasound\, and robotics technologies\, starting with products in electrophysiology mapping and lung interventions\, and covering breakthroughs in quantitative imaging for ultrasound and AI-based ultrasound exams. We will talk about projects that worked and those that failed\, addressing key issues in the development cycle. In the final section\, I will cover surgical and interventional robotic developments at Johnson & Johnson.
\n\n \n
As VP of Robotic Strategy at Johnson & Johnson MedTech\, Aleksandra is leading Johnson & Johnson efforts in defining the future of surgical robotics. Johnson & Johnson MedTech is present in almost every operating room in the world\, with more than 75 million procedures each year. Aleksandra has over 20 years of experience in medical devices and robotics. Starting her career in Germany\, at RWTH Aachen University\, the Helmholtz Institute\, and the University Hospital\, Aleksandra obtained her PhD (Dr.-Ing.) with a specialization in surgical robotics. After graduate school\, Aleksandra spent 15 years at Philips in New York and Boston\, starting as a scientist developing products across different clinical areas (e.g.\, electrophysiology\, vascular interventions\, lung interventions\, cardiology) with a technical focus on image guidance\, navigation\, and robotics. In later years\, she became innovation lead for Ultrasound and subsequently Image Guided Therapy at Philips. Today\, she heads strategy for the leading surgical robotics company Johnson & Johnson. Aleksandra grew up in the former Yugoslavia (Montenegro and Serbia). She obtained her master’s degree (Dipl.-Ing.) in Electrical Engineering from Belgrade University in Serbia and her PhD in Engineering from RWTH Aachen University in Germany. A strong believer in formal education\, Aleksandra also holds an executive degree from the MIT Sloan School of Management and a certificate in Industrial Design from the Massachusetts College of Art and Design.
\n END:VEVENT BEGIN:VEVENT UID:ai1ec-13840@lcsr.jhu.edu DTSTAMP:20240319T071248Z CATEGORIES: CONTACT: DESCRIPTION:Continuum robots change their shape with elastic deformations rather than mechanical joints and often elastically deform under typical forces in their applications. They have advantages in some environments where geometry may be complex and not well-known in advance of operations\, which is a common feature of many applications outside of factory settings. Continuum robots leverage contact and deformation to complete tasks\, relying on passive mechanical behaviors in addition to software-based intelligence and traditional control systems. For example\, robots with slender\, snake-like\, elastic bodies can navigate the tortuous human anatomy like the colon or the esophagus to perform surgery\, or they can navigate through challenging industrial environments like pipelines and machinery to perform “minimally invasive” inspection and maintenance. However\, slender bodies and mechanical softness come with distinct engineering challenges. Many slender-bodied soft robots have adopted remote actuation approaches that suffer from exponentially worsening friction as they bend. Additionally\, many approaches to actuation result in an undesirable coupling between actuators. In this seminar\, I will describe our recent research that has focused on improving the understanding of continuum mechanism manipulator designs\, models\, and applications. Ongoing studies are aimed at (i) improving the design of electromechanically driven continuum robots\; (ii) investigating methods to mitigate friction in long\, slender devices\; and (iii) improving modeling approaches for continuum robots.\n \nBio:\nHunter B. Gilbert received the B.S. degree in mechanical engineering in 2010 from Rice University (Houston\, Texas)\, and the Ph.D. degree in mechanical engineering in 2016 from Vanderbilt University (Nashville\, Tennessee).
He conducted a postdoctoral fellowship in the Physical Intelligence Department of the Max Planck Institute for Intelligent Systems (Stuttgart\, Germany)\, supported by an Alexander von Humboldt Stiftung postdoctoral fellowship from 2016-2019. He is currently an Associate Professor of Mechanical Engineering at Louisiana State University\, where he is co-director of the Innovation in Control and Robotics Engineering (iCORE) research laboratory. He is an Associate Editor for the IEEE Robotics and Automation Letters and for Frontiers in Robotics and AI. His research interests are centered on several themes within applied mechanics and dynamic systems: mechanically “soft” or deformable robots\, systems and technologies focused on human health and safety\, and modeling of complex dynamic systems. DTSTART;TZID=America/New_York:20231025T120000 DTEND;TZID=America/New_York:20231025T130000 LOCATION:Hackerman B17 SEQUENCE:0 SUMMARY:LCSR Seminar: Hunter Gilbert “Continuum Robots: Addressing Challenges through Modeling\, Design\, and Control” URL:https://lcsr.jhu.edu/events/lcsr-seminar-hunter-gilbert-tbd/ X-COST-TYPE:free X-WP-IMAGES-URL:thumbnail\;https://lcsr.jhu.edu/wp-content/uploads/2023/07/HGilbert-236x300.jpg\;236\;300\,medium\;https://lcsr.jhu.edu/wp-content/uploads/2023/07/HGilbert-236x300.jpg\;236\;300\,large\;https://lcsr.jhu.edu/wp-content/uploads/2023/07/HGilbert-236x300.jpg\;236\;300\,full\;https://lcsr.jhu.edu/wp-content/uploads/2023/07/HGilbert-236x300.jpg\;236\;300 X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\nContinuum robots change their shape with elastic deformations rather than mechanical joints and often elastically deform under typical forces in their applications. They have advantages in some environments where geometry may be complex and not well-known in advance of operations\, which is a common feature of many applications outside of factory settings.
Continuum robots leverage contact and deformation to complete tasks\, relying on passive mechanical behaviors in addition to software-based intelligence and traditional control systems. For example\, robots with slender\, snake-like\, elastic bodies can navigate the tortuous human anatomy like the colon or the esophagus to perform surgery\, or they can navigate through challenging industrial environments like pipelines and machinery to perform “minimally invasive” inspection and maintenance. However\, slender bodies and mechanical softness come with distinct engineering challenges. Many slender-bodied soft robots have adopted remote actuation approaches that suffer from exponentially worsening friction as they bend. Additionally\, many approaches to actuation result in an undesirable coupling between actuators. In this seminar\, I will describe our recent research that has focused on improving the understanding of continuum mechanism manipulator designs\, models\, and applications. Ongoing studies are aimed at (i) improving the design of electromechanically driven continuum robots\; (ii) investigating methods to mitigate friction in long\, slender devices\; and (iii) improving modeling approaches for continuum robots.
\n\n
Hunter B. Gilbert received the B.S. degree in mechanical engineering in 2010 from Rice University (Houston\, Texas)\, and the Ph.D. degree in mechanical engineering in 2016 from Vanderbilt University (Nashville\, Tennessee). He conducted a postdoctoral fellowship in the Physical Intelligence Department of the Max Planck Institute for Intelligent Systems (Stuttgart\, Germany)\, supported by an Alexander von Humboldt Stiftung postdoctoral fellowship from 2016-2019. He is currently an Associate Professor of Mechanical Engineering at Louisiana State University\, where he is co-director of the Innovation in Control and Robotics Engineering (iCORE) research laboratory. He is an Associate Editor for the IEEE Robotics and Automation Letters and for Frontiers in Robotics and AI. His research interests are centered on several themes within applied mechanics and dynamic systems: mechanically “soft” or deformable robots\, systems and technologies focused on human health and safety\, and modeling of complex dynamic systems.
\n END:VEVENT BEGIN:VEVENT UID:ai1ec-13842@lcsr.jhu.edu DTSTAMP:20240319T071248Z CATEGORIES: CONTACT:Michele Greatti\; 4105166841\; lcsr-admin@jh.edu DESCRIPTION:Eric Diller\, Associate Professor\, Department of Mechanical and Industrial Engineering\, Robotics Institute\, Institute of Biomedical Engineering (cross-appointed)\; University of Toronto\n \nAbstract: Micro-scale mobile robots can physically access small spaces in a versatile and non-invasive manner. Such microrobots\, under several mm in size\, have unique potential applications for surgery\, sensing\, and drug delivery in healthcare\, microfactories\, and as scientific tools. These devices are powered and controlled remotely using externally applied magnetic fields for motion in 3D. This talk will introduce how we design and produce these tiny machines\, as well as how we create magnetic fields that can move them as functional robots inside the body. Moving microrobots for swimming\, crawling\, and grasping powered by these magnetic fields will be shown\, along with our progress towards medical applications for diagnosis in the gut and in neurosurgery.\nEric Diller received the B.S. and M.S. degrees in mechanical engineering from Case Western Reserve University in 2010 and the Ph.D. degree in mechanical engineering from Carnegie Mellon University in 2013. He is currently an Associate Professor in the Department of Mechanical and Industrial Engineering and the Robotics Institute at the University of Toronto\, where he is director of the Microrobotics Laboratory. His research interests include micro-scale robotics\, and features fabrication and control relating to remote actuation of micro-scale devices using magnetic fields\, micro-scale robotic manipulation\, and smart materials. He is an Associate Editor of the Journal of Micro-Bio Robotics\, and received the IEEE Robotics & Automation Society 2020 Early Career Award.
He has also received the 2018 Ontario Early Researcher Award\, the University of Toronto Innovation Award\, and the Canadian Society of Mechanical Engineering’s 2018 I.W. Smith Award for research contributions in medical microrobotics. He envisions an accessible future of medicine free of invasive colonoscopies\, open surgery\, and long recoveries.\nLab website: http://microrobotics.mie.utoronto.ca/ DTSTART;TZID=America/New_York:20231101T120000 DTEND;TZID=America/New_York:20231101T130000 LOCATION:Hackerman B17 SEQUENCE:0 SUMMARY:LCSR Seminar: Eric Diller “Micro-Scale Surgery: Using Magnetic Fields to Control Tiny Robots in the Gut and Brain” URL:https://lcsr.jhu.edu/events/lcsr-seminar-eric-diller-tbd/ X-COST-TYPE:free X-WP-IMAGES-URL:thumbnail\;https://lcsr.jhu.edu/wp-content/uploads/2023/07/Diller-300x290.jpg\;300\;290\,medium\;https://lcsr.jhu.edu/wp-content/uploads/2023/07/Diller-300x290.jpg\;300\;290\,large\;https://lcsr.jhu.edu/wp-content/uploads/2023/07/Diller-300x290.jpg\;300\;290\,full\;https://lcsr.jhu.edu/wp-content/uploads/2023/07/Diller-300x290.jpg\;300\;290 X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\n\n
Abstract: Micro-scale mobile robots can physically access small spaces in a versatile and non-invasive manner. Such microrobots\, under several mm in size\, have unique potential applications for surgery\, sensing\, and drug delivery in healthcare\, microfactories\, and as scientific tools. These devices are powered and controlled remotely using externally applied magnetic fields for motion in 3D. This talk will introduce how we design and produce these tiny machines\, as well as how we create magnetic fields that can move them as functional robots inside the body. Moving microrobots for swimming\, crawling\, and grasping powered by these magnetic fields will be shown\, along with our progress towards medical applications for diagnosis in the gut and in neurosurgery.
\nEric Diller received the B.S. and M.S. degrees in mechanical engineering from Case Western Reserve University in 2010 and the Ph.D. degree in mechanical engineering from Carnegie Mellon University in 2013. He is currently an Associate Professor in the Department of Mechanical and Industrial Engineering and the Robotics Institute at the University of Toronto\, where he is director of the Microrobotics Laboratory. His research interests include micro-scale robotics\, and features fabrication and control relating to remote actuation of micro-scale devices using magnetic fields\, micro-scale robotic manipulation\, and smart materials. He is an Associate Editor of the Journal of Micro-Bio Robotics\, and received the IEEE Robotics & Automation Society 2020 Early Career Award. He has also received the 2018 Ontario Early Researcher Award\, the University of Toronto Innovation Award\, and the Canadian Society of Mechanical Engineering’s 2018 I.W. Smith Award for research contributions in medical microrobotics. He envisions an accessible future of medicine free of invasive colonoscopies\, open surgery\, and long recoveries.
\nLab website: http://microrobotics.mie.utoronto.ca/
As robotics increasingly integrates into our social and professional spheres\, the question of how humans perceive and trust robots has gained prominence. Are robots regarded as utilitarian tools\, designed to fulfill tasks efficiently\, or are they embraced as teammates\, eliciting human-like trust? Some argue that humans interact with robots in a way that resembles social interactions with other humans\, a viewpoint aligned with the ‘computers are social actors’ (CASA) concept. Conversely\, proponents of the robot-as-a-tool view contend that humans perceive robots as non-human tools\, promoting the use of human-to-automation theories and trust measures. In this presentation\, we delve into these arguments and propose an empirical study aimed at shedding light on this debate.
\nHe holds the position of Professor in the School of Information at the University of Michigan and holds a number of distinguished memberships\, including AIS Distinguished Member Cum Laude and IEEE Senior Member. Dr. Robert obtained his Ph.D. in Information Systems from Indiana University\, where he was a BAT Fellow and KPMG Scholar. Currently\, he is the director of the Michigan Autonomous Vehicle Research Intergroup Collaboration (MAVRIC) and affiliated with various institutions\, including the University of Michigan Robotics Institute\, the National Center for Institutional Diversity at the University of Michigan\, and the Center for Computer-Mediated Communication at Indiana University. Additionally\, he is a member of the AAAS Community Advisory Board. Dr. Robert’s research interests revolve around human collaboration with technology\, which is reflected in his published works in leading information systems and information science journals as well as notable computer and robotics conferences. His research has garnered numerous accolades\, including best paper awards and nominations from the Journal of the Association for Information Systems\, the ACM Conference on Computer-Supported Cooperative Work\, SAE International\, and the ACM/IEEE International Conference on Human–Robot Interaction. Dr. Robert has received research funding from various sources\, such as the AAA Foundation\, the Automotive Research Center/U.S. Army\, the Army Research Laboratory\, the Toyota Research Institute\, MCity\, the Lieberthal-Rogel Center for Chinese Studies\, and the National Science Foundation. He has also been featured in print\, radio\, and television for major media outlets like ABC\, CBS\, CNN\, CNBC\, Michigan Radio\, Inc.\, the New York Times\, and the Associated Press.\n
\n
\n END:VEVENT BEGIN:VEVENT UID:ai1ec-13846@lcsr.jhu.edu DTSTAMP:20240319T071248Z CATEGORIES: CONTACT: DESCRIPTION:Abstract:\nImproving the capabilities of robots in medicine requires innovation in both robot design and computational methods. In this talk\, I will discuss recent research from my lab on both topics. I will present new continuum robot designs at both meso- and micro-scales intended for procedures in delicate tissues such as the brain and lungs. I will also present data-driven and model-driven algorithmic methods we have developed to model\, control\, and plan motions for continuum\, deformable robots and deformable tissue in the human body.\n \nBio:\nAlan Kuntz is an assistant professor in the Robotics Center and the Kahlert School of Computing at the University of Utah. He leads a highly interdisciplinary research lab consisting of computer scientists\, mechanical engineers\, electrical and computer engineers\, and applied mathematicians. His research focuses on the design of automation and machine learning methods for robots and on the mechanical design and control of novel robotic systems with healthcare applications.\n \nPrior to joining the University of Utah\, he was a postdoctoral research scholar at Vanderbilt University in the Vanderbilt Institute for Surgery and Engineering and the Department of Mechanical Engineering. He holds a BS in Computer Science from the University of New Mexico and an MS and PhD in Computer Science from the University of North Carolina at Chapel Hill.
DTSTART;TZID=America/New_York:20231115T120000 DTEND;TZID=America/New_York:20231115T130000 LOCATION:Hackerman B17 SEQUENCE:0 SUMMARY:LCSR Seminar: Alan Kuntz “Deformable tissue and deformable robots: novel algorithmic and mechanical solutions to leverage robots in surgical and interventional medicine” URL:https://lcsr.jhu.edu/events/lcsr-seminar-alan-kuntz-tbd/ X-COST-TYPE:free X-WP-IMAGES-URL:thumbnail\;https://lcsr.jhu.edu/wp-content/uploads/2023/07/ AlanKuntz-Small-297x300.jpg\;297\;300\,medium\;https://lcsr.jhu.edu/wp-con tent/uploads/2023/07/AlanKuntz-Small-297x300.jpg\;297\;300\,large\;https:/ /lcsr.jhu.edu/wp-content/uploads/2023/07/AlanKuntz-Small-297x300.jpg\;297\ ;300\,full\;https://lcsr.jhu.edu/wp-content/uploads/2023/07/AlanKuntz-Smal l-297x300.jpg\;297\;300 X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\n
Improving the capabilities of robots in medicine requires innovation in both robot design and computational methods. In this talk\, I will discuss recent research from my lab on both topics. I will present new continuum robot designs at both meso- and micro-scales intended for procedures in delicate tissues such as the brain and lungs. I will also present data-driven and model-driven algorithmic methods we have developed to model\, control\, and plan motions for continuum\, deformable robots and deformable tissue in the human body.
\n\n
Alan Kuntz is an assistant professor in the Ro botics Center and the Kahlert School of Computing at the University of Uta h. He leads a highly interdisciplinary research lab consisting of computer scientists\, mechanical engineers\, electrical and computer engineers\, a nd applied mathematicians. His research focuses on the design of automatio n and machine learning methods for robots and on the mechanical design and control of novel robotic systems with healthcare applications.
\n< /p>\n
Prior to joining the University of Utah\, he was a postdoctoral re search scholar at Vanderbilt University in the Vanderbilt Institute for Su rgery and Engineering and the Department of Mechanical Engineering. He hol ds a BS in Computer Science from the University of New Mexico and an MS an d PhD in Computer Science from the University of North Carolina at Chapel Hill.
\n END:VEVENT BEGIN:VEVENT UID:ai1ec-13850@lcsr.jhu.edu DTSTAMP:20240319T071248Z CATEGORIES: CONTACT: DESCRIPTION:Abstract:\nSoft and continuum robots have immense potential to assist humans with tasks that require navigation and manipulation in unstr uctured environments. In this talk\, I present my group’s research on the design\, modeling\, and control of a variety of soft and continuum robots. I begin by discussing soft vine-inspired robots\, which move through thei r environment by extending from their tip and are well suited for navigati on and manipulation within confined spaces. In particular\, I discuss our research on vine robot field deployment\, shape sensing\, force sensing\, and collapse modeling. I then present our research on two other bioinspire d robots: spider monkey tail-inspired robots for grasping objects\, and am oeba-inspired robots for navigation in confined spaces. Finally\, I discus s our research on soft wearable robots for replacing or assisting the moti on of the upper limbs. This research helps make robots more capable of ass isting humans in the unstructured environments of everyday life.\nBio:\nMa rgaret Coad joined the faculty at the University of Notre Dame in the fall of 2021\, and she is currently an Assistant Professor of Aerospace and Me chanical Engineering. She leads the Innovative Robotics and Interactive Sy stems (IRIS) Lab\, which explores the design\, modeling\, and control of i nnovative robotic systems to improve human health\, safety\, and productiv ity\; she also teaches courses in robotics and soft robotics. Prior to joi ning Notre Dame\, she completed her Ph.D. degree in 2021 and M.S. degree i n 2017 in Mechanical Engineering at Stanford University under the directio n of Professor Allison Okamura\, and her B.S. degree in 2015 in Mechanical Engineering at MIT. 
She won the Robotics and Automation Magazine Best Pap er Award for 2020 for her work on vine robots\, and she has been a finalis t for several Best Paper Awards at international robotics conferences. Out side of academics\, she plays ultimate frisbee and sings in choir. DTSTART;TZID=America/New_York:20231129T120000 DTEND;TZID=America/New_York:20231129T130000 LOCATION:Hackerman B17 SEQUENCE:0 SUMMARY:LCSR Seminar: Margaret Coad “Soft and Continuum Robots for Unstruct ured Environments” URL:https://lcsr.jhu.edu/events/lcsr-seminar-margaret-coad-tbd/ X-COST-TYPE:free X-WP-IMAGES-URL:thumbnail\;https://lcsr.jhu.edu/wp-content/uploads/2023/07/ coad-hs-1-300x300-1.jpg\;187\;187\,medium\;https://lcsr.jhu.edu/wp-content /uploads/2023/07/coad-hs-1-300x300-1.jpg\;187\;187\,large\;https://lcsr.jh u.edu/wp-content/uploads/2023/07/coad-hs-1-300x300-1.jpg\;187\;187\,full\; https://lcsr.jhu.edu/wp-content/uploads/2023/07/coad-hs-1-300x300-1.jpg\;1 87\;187 X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\nSoft and continuum robots have immense potential to assist humans w ith tasks that require navigation and manipulation in unstructured environ ments. In this talk\, I present my group’s research on the design\, modeli ng\, and control of a variety of soft and continuum robots. I begin by dis cussing soft vine-inspired robots\, which move through their environment b y extending from their tip and are well suited for navigation and manipula tion within confined spaces. In particular\, I discuss our research on vin e robot field deployment\, shape sensing\, force sensing\, and collapse mo deling. I then present our research on two other bioinspired robots: spide r monkey tail-inspired robots for grasping objects\, and amoeba-inspired r obots for navigation in confined spaces. Finally\, I discuss our research on soft wearable robots for replacing or assisting the motion of the upper limbs. 
This research helps make robots more capable of assisting humans i n the unstructured environments of everyday life.
\nMargaret Coad joined th e faculty at the University of Notre Dame in the fall of 2021\, and she is currently an Assistant Professor of Aerospace and Mechanical Engineering. She leads the Innovative Robotics and Interactive Systems (IRIS) Lab\, wh ich explores the design\, modeling\, and control of innovative robotic sys tems to improve human health\, safety\, and productivity\; she also teache s courses in robotics and soft robotics. Prior to joining Notre Dame\, she completed her Ph.D. degree in 2021 and M.S. degree in 2017 in Mechanical Engineering at Stanford University under the direction of Professor Alliso n Okamura\, and her B.S. degree in 2015 in Mechanical Engineering at MIT. She won the Robotics and Automation Magazine Best Paper Award for 2020 for her work on vine robots\, and she has been a finalist for several Best Pa per Awards at international robotics conferences. Outside of academics\, s he plays ultimate frisbee and sings in choir.
\n END:VEVENT BEGIN:VEVENT UID:ai1ec-13852@lcsr.jhu.edu DTSTAMP:20240319T071248Z CATEGORIES: CONTACT: DESCRIPTION:Abstract:\nThis seminar will provide PhD students and postdocs with some information on how to navigate the academic job market. The semi nar will touch on 1) benefits and possible challenges of the academic care er path\, 2) the many aspects of the academic job search (such as timing\, required documents\, interview schedule\, …)\, and 3) the essential tasks junior faculty (and people aspiring to be) must master quickly. DTSTART;TZID=America/New_York:20231206T120000 DTEND;TZID=America/New_York:20231206T130000 LOCATION:Hackerman B17 SEQUENCE:0 SUMMARY:LCSR Seminar: Professional Development Mathias Unberath “Applying f or Faculty Positions in Engineering Disciplines – A (Too) Brief Overview” URL:https://lcsr.jhu.edu/events/lcsr-seminar-professional-development/ X-COST-TYPE:free X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\nAbstract:
\nThis seminar will provide PhD students and postdocs with information on how to navigate the academic job market. It will touch on 1) the benefits and possible challenges of the academic career path\, 2) the many aspects of the academic job search (such as timing\, required documents\, interview schedule\, …)\, and 3) the essential tasks that junior faculty (and those aspiring to be junior faculty) must master quickly.
\n END:VEVENT BEGIN:VEVENT UID:ai1ec-14035@lcsr.jhu.edu DTSTAMP:20240319T071248Z CATEGORIES: CONTACT:Michele Greatti\; 4105166841\; lcsr-admin@jh.edu DESCRIPTION:Abstract:\nThe da Vinci Research Kit (dVRK) is an open research platform that couples open-source control electronics and software with t he mechanical components of the da Vinci surgical robot. This presentation will describe the dVRK system architecture\, followed by selected researc h enabled by this system\, including mixed reality for the first assistant \, autonomous camera motion\, and force estimation for bilateral teleopera tion. The presentation will conclude with an overview of the AccelNet Surg ical Robotics Challenge\, which includes both simulated and physical envir onments.\nBio:\nPeter Kazanzides received the Ph.D. degree in electrical e ngineering from Brown University in 1988. He began work on surgical roboti cs in March 1989 as a postdoctoral researcher with Russell Taylor at the I BM T.J. Watson Research Center and co-founded Integrated Surgical Systems (ISS) in November 1990. As Director of Robotics and Software at ISS\, he w as responsible for the design\, implementation\, validation and support of the ROBODOC System\, which has been used for more than 20\,000 hip and kn ee replacement surgeries. Dr. Kazanzides joined Johns Hopkins University D ecember 2002 and currently holds an appointment as a Research Professor of Computer Science. 
His research focuses on computer-integrated surgery\, s pace robotics and mixed reality.\n DTSTART;TZID=America/New_York:20240124T120000 DTEND;TZID=America/New_York:20240124T130000 LOCATION:Hackerman B17 @ 3400 N Charles St SEQUENCE:0 SUMMARY:LCSR Seminar: Peter Kazanzides “The da Vinci Research Kit: System D escription\, Research Highlights\, and Surgical Robotics Challenge” URL:https://lcsr.jhu.edu/events/lcsr-seminar-tbd-4/ X-COST-TYPE:free X-WP-IMAGES-URL:thumbnail\;https://lcsr.jhu.edu/wp-content/uploads/2020/08/ Kazanzides-Photo.jpg\;230\;231\,medium\;https://lcsr.jhu.edu/wp-content/up loads/2020/08/Kazanzides-Photo.jpg\;230\;231\,large\;https://lcsr.jhu.edu/ wp-content/uploads/2020/08/Kazanzides-Photo.jpg\;230\;231\,full\;https://l csr.jhu.edu/wp-content/uploads/2020/08/Kazanzides-Photo.jpg\;230\;231 X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\nAbstract:
\nThe da Vinci Research Kit (dVRK) is an open researc h platform that couples open-source control electronics and software with the mechanical components of the da Vinci surgical robot. This presentatio n will describe the dVRK system architecture\, followed by selected resear ch enabled by this system\, including mixed reality for the first assistan t\, autonomous camera motion\, and force estimation for bilateral teleoper ation. The presentation will conclude with an overview of the AccelNet Sur gical Robotics Challenge\, which includes both simulated and physical envi ronments.
\nBio:
\nPeter Kazanzides received the Ph.D. degree in electrical engineering from Brown University in 1988. He began work on surgical robotics in March 1989 as a postdoctoral researcher with Russell Taylor at the IBM T.J. Watson Research Center and co-founded Integrated Surgical Systems (ISS) in November 1990. As Director of Robotics and Software at ISS\, he was responsible for the design\, implementation\, validation and support of the ROBODOC System\, which has been used for more than 20\,000 hip and knee replacement surgeries. Dr. Kazanzides joined Johns Hopkins University in December 2002 and currently holds an appointment as a Research Professor of Computer Science. His research focuses on computer-integrated surgery\, space robotics and mixed reality.
\n\n END:VEVENT BEGIN:VEVENT UID:ai1ec-14037@lcsr.jhu.edu DTSTAMP:20240319T071248Z CATEGORIES: CONTACT:Michele Greatti\; 4105166841\; lcsr-admin@jh.edu DESCRIPTION:Dimitri Lezcano\, M.S.E.\nPhD Candidate\nLaboratory for Computa tional Sensing and Robotics\nTitle: Shape-Sensing\, Shape-Prediction and S ensor Location Optimization of FBG-Sensorized Needles\nAbstract:\nNeedle i nsertion is typical for surgical intervention including biopsy\, cryoablat ion\, and injection. To reduce tissue damage and for vital organ obstacle avoidance\, asymmetric bevel-tipped flexible needles are commonly used for needle insertion. The challenges for accurate needle placement in minimal ly-invasive surgeries requires modalities for determining the needle posit ion during insertion. Current modalities for 3D needle positioning\, like magnetic-resonance imaging\, computational tomography and ultrasound are e ither too slow\, require large amounts of radiation over sustained periods \, and/or are not precise enough. Embedding flexible bevel-tipped needles with fiber-Bragg grading (FBG) sensors enables for shape-sensing capabilit ies for accurate and real-time needle localization during needle insertion . Our shape-sensing model leverages Lie group theory and curvature sensing to not only provide accurate 3D shape-sensing\, but to perform shape-pred iction during needle insertion. The conducted experiments will demonstrate our model’s effectiveness on determining and predicting needle shape in i sotropic phantom tissue using a novel hybrid deep learning and model-based approach. Furthermore\, a presentation of constructive optimization of FB G sensor placement along with the framework to stochastically modeling nee dle shape-sensing\, will be included.\nBio:\nDimitri Lezcano is a fifth ye ar PhD candidate in LCSR’s Advanced Medical Instrumentation and Robotics L aboratory at Johns Hopkins University working with Professor Iulian Iordac hita and Professor Jin Seob Kim. He received a B.A. 
in Physics and Mathema tics from McDaniel College (2015) and M.S.E. in Robotics (2020) from Johns Hopkins University. His research focuses on the instrumentation and appli cation of flexible\, shape-sensing\, needles in minimally-invasive surgica l interventions.\n \nWangqu Liu\nPh.D. Candidate\, Gracias Lab\nDepartment of Chemical and Biomolecular Engineering\n \nTitle: Autonomous Untethered Microinjectors for Gastrointestinal Delivery of Insulin\nAbstract:\nDeliv ering macromolecular drugs like insulin via the gastrointestinal (GI) trac t is challenging due to the low stability of these drugs and their poor ab sorption through the tight GI epithelium. An innovative approach has been developed using untethered microscopic robots to break the epithelial barr ier and improve the delivery of these drugs.\nIn this talk\, I will presen t our research on autonomous untethered micro-robotic injectors for the ga strointestinal delivery of insulin. These submillimeter-sized microinjecto rs utilize thermally activated\, prestressed thin films that function like micro-spring-loaded latches. Once triggered by body temperature\, the arm s of microinjectors self-fold. The shape-changing motion allows the inject ion tip to penetrate the GI epithelium\, efficiently delivering insulin wi th bioavailability in line with intravenous injection. Due to their small size\, tunability in sizing and dosing\, wafer-scale fabrication\, and par allel\, autonomous operation\, we anticipate these microinjectors will sig nificantly advance drug delivery across the GI tract mucosa to the systemi c circulation safely. We will conclude the talk by discussing the future d evelopment of ingestible active drug delivery systems incorporating microi njectors.\nBio:\nWangqu is a Ph.D. candidate in the Chemical and Biomolecu lar Engineering department\, advised by Prof. David Gracias. 
His works mai nly focus on the development of shape-morphing microdevices and related sy stems for biomedical applications such as drug delivery\, minimally invasi ve surgery\, and biopsy. He received a B.Eng. in Chemical Engineering from Beijing Forestry University (2017)\, where he focused on developing bioma ss-based sustainable nanomaterials for water treatment. He later joined th e Johns Hopkins University for his M.Sc. (2019)\, where he worked on devel oping 3D printing functional hydrogel materials and shape-changing structu res. DTSTART;TZID=America/New_York:20240131T120000 DTEND;TZID=America/New_York:20240131T130000 LOCATION:Hackerman B17 @ 3400 N Charles St SEQUENCE:0 SUMMARY:LCSR Student Seminars: Dimitri Lezcano and Wangqu Liu URL:https://lcsr.jhu.edu/events/lcsr-seminar-tbd-5/ X-COST-TYPE:free X-WP-IMAGES-URL:thumbnail\;https://lcsr.jhu.edu/wp-content/uploads/2023/11/ Wangju-Liu-300x300.jpg\;241\;241\,medium\;https://lcsr.jhu.edu/wp-content/ uploads/2023/11/Wangju-Liu-300x300.jpg\;241\;241\,large\;https://lcsr.jhu. edu/wp-content/uploads/2023/11/Wangju-Liu-300x300.jpg\;241\;241\,full\;htt ps://lcsr.jhu.edu/wp-content/uploads/2023/11/Wangju-Liu-300x300.jpg\;241\; 241 X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\n
Title: Shape-Sensing\, Shape-Prediction and Sens or Location Optimization of FBG-Sensorized Needles
\nAbstract:
\nNeedle insertion is typical of surgical interventions including biopsy\, cryoablation\, and injection. To reduce tissue damage and to avoid vital organs and other obstacles\, asymmetric bevel-tipped flexible needles are commonly used. Accurate needle placement in minimally invasive surgeries requires modalities for determining the needle position during insertion. Current modalities for 3D needle positioning\, such as magnetic resonance imaging\, computed tomography\, and ultrasound\, are either too slow\, require large amounts of radiation over sustained periods\, or are not precise enough. Embedding flexible bevel-tipped needles with fiber Bragg grating (FBG) sensors enables shape-sensing capabilities for accurate\, real-time needle localization during insertion. Our shape-sensing model leverages Lie group theory and curvature sensing not only to provide accurate 3D shape-sensing but also to perform shape-prediction during needle insertion. The experiments presented will demonstrate our model’s effectiveness at determining and predicting needle shape in isotropic phantom tissue using a novel hybrid deep-learning and model-based approach. Furthermore\, I will present a constructive optimization of FBG sensor placement\, along with a framework for stochastically modeling needle shape-sensing.
Bio:
\nDimitr
i Lezcano is a fifth year PhD candidate in LCSR’s Advanced Medical Instrum
entation and Robotics Laboratory at Johns Hopkins University working with
Professor Iulian Iordachita and Professor Jin Seob Kim. He received a B.A.
in Physics and Mathematics from McDaniel College (2015) and M.S.E. in Rob
otics (2020) from Johns Hopkins University. His research focuses on the instrumentation and application of flexible\, shape-sensing needles in minimally invasive surgical interventions.
\n
\n
Title: Autonomous Untethered Microinjectors for Gastrointestinal Delivery of Insulin
\nAbstract:
\nDelivering macromolecular drugs like insulin via the gas
trointestinal (GI) tract is challenging due to the low stability of these
drugs and their poor absorption through the tight GI epithelium. An innova
tive approach has been developed using untethered microscopic robots to br
eak the epithelial barrier and improve the delivery of these drugs.
\nIn this talk\, I will present our research on autonomous untethered micr
o-robotic injectors for the gastrointestinal delivery of insulin. These su
bmillimeter-sized microinjectors utilize thermally activated\, prestressed
thin films that function like micro-spring-loaded latches. Once triggered
by body temperature\, the arms of microinjectors self-fold. The shape-cha
nging motion allows the injection tip to penetrate the GI epithelium\, eff
iciently delivering insulin with bioavailability in line with intravenous
injection. Due to their small size\, tunability in sizing and dosing\, wafer-scale fabrication\, and parallel\, autonomous operation\, we anticipate that these microinjectors will significantly and safely advance drug delivery across the GI tract mucosa into the systemic circulation. We will conclude the talk by discussing the future development of ingestible active drug delivery systems incorporating microinjectors.
Bio:
\nWangqu is a Ph.D. candidate in the Chemical and Biomolecular Engineering department\, advised by Prof. David Gracias. His work mainly focuses on the development of shape-morphing microdevices and related systems for biomedical applications such as drug delivery\, minimally invasive surgery\, and biopsy. He received a B.Eng. in Chemical Engineering from Beijing Forestry University (2017)\, where he focused on developing biomass-based sustainable nanomaterials for water treatment. He later joined Johns Hopkins University for his M.Sc. (2019)\, where he worked on 3D printing of functional hydrogel materials and shape-changing structures.
Abst
ract:
\nComplex and unstructured environments pose several c
hallenges for traditional rigid robot technologies.
\nInspired by biological systems\, soft robots offer a promising alternative to their rigid counterparts and demonstrate increased resilience and adaptability\, resulting in machines that can safely interact with natural environments.
\nMimicking how biological systems use their soft and dexterou
s body to interact with and exploit their surroundings entails addressing
multiple fundamental challenges related to the design\, manufacturing\, an
d control of soft robots.
\nIn this talk\, I will present our researc
h on developing new manufacturing methods to enable the fabrication of mul
ti-degrees-of-freedom soft robots with distributed actuation and multiscal
e features.
\nI will also discuss opportunities and challenges arising in deploying multi-degrees-of-freedom soft robots in the real world. Specifically\, I will introduce our work on methods to embed control and
computational capabilities onboard soft robots to increase their autonomy
\, focusing on our efforts towards enabling electronic control of multi-Do
F fluidic soft robots.
\nFinally\, I will present our work on the app
lication of soft robotic technologies in minimally invasive surgery. I wil
l discuss various applications\, including atraumatic manipulation of larg
e abdominal organs and accurate and effective manipulation of delicate str
uctures inside the beating heart.
\n
Bio:<
br />\nTom Ranzani received a Bachelor’s and Master’s degree in Biomedical
Engineering from the University of Pisa\, Italy. He did his Ph.D. at the
BioRobotics Institute of the Sant’Anna School of Advanced Studies. In 2014
\, he joined the Wyss Institute for Biologically Inspired Engineering at t
he Harvard John A. Paulson School of Engineering and Applied Sciences as a
postdoctoral fellow.
\nHe is currently an Assistant Professor in the
Department of Mechanical Engineering\, Biomedical Engineering\, and in th
e Division of Materials Science and Engineering at Boston University\, whe
re he established the Morphable Biorobotics Lab in 2018.
\nIn 2020 he
was awarded the NIH Trailblazer Award for New and Early Stage Investigato
rs.
\nHis research focuses on soft and bioinspired robotics with appl
ications ranging from underwater exploration to surgical and wearable devi
ces. He is interested in expanding the potential of soft robots across dif
ferent scales to develop novel reconfigurable soft-bodied robots capable o
f operating in environments where traditional robots cannot.
\n END:VEVENT BEGIN:VEVENT UID:ai1ec-14041@lcsr.jhu.edu DTSTAMP:20240319T071248Z CATEGORIES: CONTACT:Michele Greatti\; 4105166841\; lcsr-admin@jh.edu DESCRIPTION:Abstract:\nRecent technological advances in the field of surgical robotics have resulted in the development of a range of new techniques and technologies that have reduced patient trauma\, shortened hospitalization\, and improved diagnostic accuracy and therapeutic outcomes. Despite the many appreciated benefits of robot-assisted minimally invasive surgery (MIS)\, there are still significant drawbacks associated with these technologies\, including the dexterity\, intelligence\, and autonomy of the developed robotic devices and the prognosis design of medical devices and implants.\nThe dexterity limitation is associated with poor accessibility to the areas of interest and insufficient instrument control and ergonomics caused by the rigidity of conventional instruments and implants. In other words\, the ability to adequately access different target anatomy is still the main challenge of MIS and endoscopic procedures\, demanding specialized instrumentation\, sensing\, and control paradigms.\nTo enhance the safety of robot-assisted procedures\, current robotics research is also exploring new ways of providing synergistic intelligent semi/autonomous control between the surgeon and the robot. In this context\, the robot can perform certain surgical tasks autonomously under the supervision of the surgeon. However\, such autonomy not only requires understanding the robot’s perception of and adaptation to the dynamically changing environment of the tissue\, but it also requires understanding the mental workload and decision-making state of the surgeon as the decision-maker and key component of this system.
This demands Surgeon-Centric Brain-In-the-Loop Autonomous Control techniques.\nTo address these challenges\, this talk covers our efforts towards the engineering of surgery (surgineering) and bringing dexterity and autonomy to various robot-assisted minimally invasive surgical procedures. Particularly\, I will discuss our efforts towards enhancing the existing paradigms in spinal fixation\, colorectal cancer diagnosis\, and bioprinting of volumetric muscle loss injuries using continuum manipulators\, soft sensors\, flexible implants\, and semi/autonomous intelligent surgical robotic systems.\n \nBio:\nDr. Farshid Alambeigi has been an Assistant Professor in the Walker Department of Mechanical Engineering at the University of Texas at Austin since August 2019. He is also a core faculty member of Texas Robotics. Dr. Alambeigi received his Ph.D. in Mechanical Engineering from Johns Hopkins University in 2019. He also holds an M.Sc. degree (2017) in Robotics from Johns Hopkins University. In the summer of 2018\, Dr. Alambeigi received the 2019 Siebel Scholarship in recognition of academic excellence and demonstrated leadership. In 2020\, Dr. Alambeigi received the NIH NIBIB Trailblazer Career Award to develop novel flexible implants and robots for minimally invasive spinal fixation surgery. He has also received the prestigious 2022 NIH Director’s New Innovator Award to develop an in vivo bioprinting surgical robotic system for the treatment of volumetric muscle loss.\nAt the University of Texas at Austin\, Dr. Alambeigi directs the Advanced Robotic Technologies for Surgery (ARTS) Lab. His research focuses on developing highly dexterous\, situationally aware continuum manipulators\, soft robots\, and appropriate instruments and sensors designed for less invasive and minimally invasive treatment in various medical applications.
Utilizing these novel surgical instruments together with intelligent con trol algorithms\, the ARTS Lab in collaboration with the UT Dell Medical S chool will work toward engineering of the surgery (Surgineering) and partn ering dexterous intelligent robots with surgeons. Ultimately\, our goal is to augment the clinicians’ skills and quality of the surgery to further i mprove patient’s safety and outcomes. DTSTART;TZID=America/New_York:20240214T120000 DTEND;TZID=America/New_York:20240214T130000 LOCATION:Hackerman B17 @ 3400 N Charles St SEQUENCE:0 SUMMARY:LCSR Seminar Farshid Alambeigi\,”Surgineering Using Intelligent and Flexible Robotic Systems” URL:https://lcsr.jhu.edu/events/lcsr-seminar-tbd-7/ X-COST-TYPE:free X-WP-IMAGES-URL:thumbnail\;https://lcsr.jhu.edu/wp-content/uploads/2023/11/ Alambeigi-Farshid-300x200.jpeg\;300\;200\,medium\;https://lcsr.jhu.edu/wp- content/uploads/2023/11/Alambeigi-Farshid-300x200.jpeg\;300\;200\,large\;h ttps://lcsr.jhu.edu/wp-content/uploads/2023/11/Alambeigi-Farshid-300x200.j peg\;300\;200\,full\;https://lcsr.jhu.edu/wp-content/uploads/2023/11/Alamb eigi-Farshid-300x200.jpeg\;300\;200 X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\n
\n
Bio:
\nDr. Farshid Alambeigi has been an Assistant Professor in the Walker Department of Mechanical Engineering at the University of Texas at Austin since August 2019. He is also a core faculty member of Texas Robotics. Dr. Alambeigi received his Ph.D. in Mechanical Engineering from Johns Hopkins University in 2019. He also holds an M.Sc. degree (2017) in Robotics from Johns Hopkins University. In the summer of 2018\, Dr. Alambeigi received the 2019 Siebel Scholarship in recognition of academic excellence and demonstrated leadership. In 2020\, Dr. Alambeigi received the NIH NIBIB Trailblazer Career Award to develop novel flexible implants and robots for minimally invasive spinal fixation surgery. He has also received the prestigious 2022 NIH Director’s New Innovator Award to develop an in vivo bioprinting surgical robotic system for the treatment of volumetric muscle loss.
\n
At the University of Texas at Austin\, Dr. Alambeigi directs the Advanced Robotic Technologies for Surgery (ARTS) Lab. His research focuses on developing highly dexterous\, situationally aware continuum manipulators\, soft robots\, and appropriate instruments and sensors designed for less invasive and minimally invasive treatment in various medical applications. Utilizing these novel surgical instruments together with intelligent control algorithms\, the ARTS Lab\, in collaboration with the UT Dell Medical School\, will work toward the engineering of surgery (Surgineering) and the partnering of dexterous\, intelligent robots with surgeons. Ultimately\, the goal is to augment clinicians’ skills and the quality of surgery to further improve patient safety and outcomes.
Abstract:
\nConventional robotic systems are mo
st effective in structured environments with well-defined tasks. The next
frontier of robotics is to create systems that can operate in challenging
environments while autonomously adapting to changing and uncertain task re
quirements. In the field of modular self-reconfigurable robotics\, we appr
oach this challenge by designing a set of robotic building blocks that can
be combined to form a variety of robot morphologies. By autonomously rear
ranging these modules\, the system can change its shape to complete a wide
r variety of tasks than is possible with a fixed morphology.
In this talk\, I will present my research on a new modular robot\, the Variable Topology Truss (VTT). Most existing mo dular self-reconfigurable robots use cube-shaped modules that connect toge ther on a lattice or as a serial string of joints. These architectures are convenient when it comes to designing reconfiguration algorithms\, but th ey face serious practical challenges when it comes to scaling the system u p to solve larger tasks. Instead\, VTT uses a truss-based architecture. In dividual modules are beams which can extend or retract using a novel high- extension-ratio linear actuator: the Spiral Zipper. By connecting the beam modules together like a truss\, we can create large\, lightweight structu res with much greater structural efficiency than conventional modular arch itectures. Furthermore\, the length-changing ability of the Spiral Zipper allows the system to more flexibly adapt its scale and geometry without ne eding to use as many modules. However\, the truss architecture poses new c hallenges when it comes to motion and reconfiguration planning. I will dis cuss the hardware design of the VTT system as well as my research on colli sion-free motion and reconfiguration planning for this novel system.
\nBio:
\nAlexander Spinos received his Bachelor’s d
egree in Mechanical Engineering from Johns Hopkins University. He then joi
ned the Modlab in GRASP at the University of Pennsylvania\, where he recei
ved his PhD in Mechanical Engineering and Applied Mechanics. His dissertat
ion centered around the mechanical design and self-reconfiguration plannin
g of the Variable Topology Truss\, a modular self-reconfigurable parallel
robot. He is now a robotics researcher at the JHU Applied Physics Lab\, wh
ere he works on multi-robot planning and the design of novel robot hardwar
e.
Title : Decision Making with Internet-Scale Knowledge
\n\n
Abstract: Machine learning models pretrained on internet data have acquired broad knowledge about the world but struggle to solve complex tasks that require extended reasoning and planning. Sequential dec ision making\, on the other hand\, has empowered AlphaGo’s superhuman perf ormance\, but lacks visual\, language\, and physical knowledge about the w orld. In this talk\, I will present my research towards enabling decision making with internet-scale knowledge. First\, I will illustrate how langua ge models and video generation are unified interfaces that can integrate i nternet knowledge and represent diverse tasks\, enabling the creation of a generative simulator to support real-world decision-making. Second\, I wi ll discuss my work on designing decision making algorithms that can take a dvantage of generative language and video models as agents and environment s. Combining pretrained models with decision making algorithms can effecti vely enable a wide range of applications such as developing chatbots\, lea rning robot policies\, and discovering novel materials.
\n\nBio: Sherry is a final year PhD student at UC Berkeley advised by Pieter Abbeel and a senior research scientist at Google DeepMind. Her research aims to develop machine learning models with internet-scale knowledge to make better-than-human decisions. To this end\, she has developed techniques for generative modeling and representation learning from large-scale vision\, language\, and structured data\, coupled with developing algorithms for sequential decision making such as imitation learning\, planning\, and reinforcement learning. Sherry initiated and led the Foundation Models for Decision Making workshop at NeurIPS 2022 and 2023\, bringing together research communities in vision\, language\, planning\, and reinforcement learning to solve complex decision making tasks at scale. Before her current role\, Sherry received her Bachelor’s degree and Master’s degree from MIT advised by Patrick Winston and Julian Shun.
\n END:VEVENT BEGIN:VEVENT UID:ai1ec-14047@lcsr.jhu.edu DTSTAMP:20240319T071248Z CATEGORIES: CONTACT:Michele Greatti\; 4105166841\; lcsr-admin@jh.edu DESCRIPTION:Abstract:\nHumans can effortlessly construct rich mental representations of the 3D world from sparse input\, such as a single image. This is a core aspect of intelligence that helps us understand and interact with our surroundings and with each other. My research aims to build similar computational models–artificial intelligence methods that can perceive properties of the 3D structured world from images and videos. Despite remarkable progress in 2D computer vision\, 3D perception remains an open problem due to some unique challenges\, such as limited 3D training data and uncertainties in reconstruction.\nIn this talk\, I will discuss these challenges and explain how my research addresses them by posing vision as an inverse problem\, and by designing machine learning models with physics-inspired inductive biases. I will demonstrate techniques for reconstructing 3D faces and objects\, and for reasoning about uncertainties in scene reconstruction using generative models. I will then discuss how these efforts advance us toward scalable and generalizable visual perception and how they advance application domains such as robotics and computer graphics.\nBio:\nAyush Tewari is a postdoctoral researcher at MIT CSAIL with William Freeman\, Vincent Sitzmann\, and Joshua Tenenbaum. He previously completed his Ph.D. at the Max Planck Institute for Informatics\, advised by Christian Theobalt. His research interests lie at the intersection of computer vision\, computer graphics\, and machine learning\, focusing on 3D perception and its applications. Ayush was awarded the Otto Hahn medal from the Max Planck Society for his scientific contributions as a Ph.D. student.
DTSTART;TZID=America/New_York:20240306T120000 DTEND;TZID=America/New_York:20240306T130000 LOCATION:Hackerman B17 @ 3400 N Charles St SEQUENCE:0 SUMMARY:LCSR Seminar: Ayush Tewari\, “Learning to See the World in 3D” URL:https://lcsr.jhu.edu/events/lcsr-seminar-tbd-11/ X-COST-TYPE:free X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\nAbstract:
\nHumans can effortlessly construct rich mental representations of the 3D world from sparse input\, such as a single image. This is a core aspect of intelligence that helps us understand and interact with our surroundings and with each other. My research aims to build similar computational models–artificial intelligence methods that can perceive properties of the 3D structured world from images and videos. Despite remarkable progress in 2D computer vision\, 3D perception remains an open problem due to some unique challenges\, such as limited 3D training data and uncertainties in reconstruction.
\nIn this talk\, I will discuss these challenges and explain how my research addresses them by posing vision as an inverse problem\, and by designing machine learning models with physics-inspired inductive biases. I will demonstrate techniques for reconstructing 3D faces and objects\, and for reasoning about uncertainties in scene reconstruction using generative models. I will then discuss how these efforts advance us toward scalable and generalizable visual perception and how they advance application domains such as robotics and computer graphics.
Bio:
\nAyush Tewari is a postdoctoral researcher at MIT CSAIL with William Freeman\, Vincent Sitzmann\, and Joshua Tenenbaum. He previously completed his Ph.D. at the Max Planck Institute for Informatics\, advised by Christian Theobalt. His research interests lie at the intersection of computer vision\, computer graphics\, and machine learning\, focusing on 3D perception and its applications. Ayush was awarded the Otto Hahn medal from the Max Planck Society for his scientific contributions as a Ph.D. student.
\n END:VEVENT BEGIN:VEVENT UID:ai1ec-14049@lcsr.jhu.edu DTSTAMP:20240319T071248Z CATEGORIES: CONTACT:Michele Greatti\; 4105166841\; lcsr-admin@jh.edu DESCRIPTION: DTSTART;TZID=America/New_York:20240313T120000 DTEND;TZID=America/New_York:20240313T130000 LOCATION:Hackerman B17 @ 3400 N Charles St SEQUENCE:0 SUMMARY:LCSR Student Seminar: TBD URL:https://lcsr.jhu.edu/events/lcsr-seminar-tbd-12/ X-COST-TYPE:free END:VEVENT BEGIN:VEVENT UID:ai1ec-14207@lcsr.jhu.edu DTSTAMP:20240319T071248Z CATEGORIES: CONTACT:Michele Greatti\; 4105166841\; mgreatt1@jhu.edu DESCRIPTION:Abstract: Decision-making in robotics domains is complicated by continuous state and action spaces\, long horizons\, and sparse feedback. One way to address these challenges is to perform bilevel planning\, where decision-making is decomposed into reasoning about “what to do” (task planning) and “how to do it” (continuous optimization). Bilevel planning is powerful\, but it requires multiple types of domain-specific abstractions that are often difficult to design by hand. In this talk\, I will give an overview of my work on learning these abstractions from data. This work represents the first unified system for learning all the abstractions needed for bilevel planning. In addition to learning to plan\, I will also discuss planning to learn\, where the robot uses planning to collect additional data that it can use to improve its abstractions. My long-term goal is to create a virtuous cycle where learning improves planning and planning improves learning\, leading to a very general library of abstractions and a broadly competent robot.\n\nBio: Tom Silver is a final year PhD student at MIT EECS advised by Leslie Kaelbling and Josh Tenenbaum. His research is at the intersection of machine learning and planning with applications to robotics and often uses techniques from task and motion planning\, program synthesis\, and neuro-symbolic learning.
Before graduate school\, he was a researcher at Vicarious AI and received his B.A. from Harvard with highest honors in computer science and mathematics in 2016. He has also interned at Google Research (Brain Robotics) and currently splits his time between MIT and the Boston Dynamics AI Institute. His work is supported by an NSF fellowship and an MIT presidential fellowship.\n DTSTART;TZID=America/New_York:20240320T120000 DTEND;TZID=America/New_York:20240320T130000 LOCATION:Hackerman B17 SEQUENCE:0 SUMMARY:LCSR Seminar: Tom Silver\, “Learning and Planning with Relational Abstractions” URL:https://lcsr.jhu.edu/events/lcsr-seminar-tom-silver-title-tbd/ X-COST-TYPE:free X-WP-IMAGES-URL:thumbnail\;https://lcsr.jhu.edu/wp-content/uploads/2024/03/tom-silver.jpg\;200\;203\,medium\;https://lcsr.jhu.edu/wp-content/uploads/2024/03/tom-silver.jpg\;200\;203\,large\;https://lcsr.jhu.edu/wp-content/uploads/2024/03/tom-silver.jpg\;200\;203\,full\;https://lcsr.jhu.edu/wp-content/uploads/2024/03/tom-silver.jpg\;200\;203 X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\nAbstract: Decision-making in robotics domains is complicated by continuous state and action spaces\, long horizons\, and sparse feedback. One way to address these challenges is to perform bilevel planning\, where decision-making is decomposed into reasoning about “what to do” (task planning) and “how to do it” (continuous optimization). Bilevel planning is powerful\, but it requires multiple types of domain-specific abstractions that are often difficult to design by hand. In this talk\, I will give an overview of my work on learning these abstractions from data. This work represents the first unified system for learning all the abstractions needed for bilevel planning. In addition to learning to plan\, I will also discuss planning to learn\, where the robot uses planning to collect additional data that it can use to improve its abstractions.
My long-term goal is to create a virtuous cycle where learning improves planning and planning improves learning\, leading to a very general library of abstractions and a broadly competent robot.
\n\nBio: Tom Silver is a final year PhD student at MIT EECS advised by Leslie Kaelbling and Josh Tenenbaum. His research is at the intersection of machine learning and planning with applications to robotics and often uses techniques from task and motion planning\, program synthesis\, and neuro-symbolic learning. Before graduate school\, he was a researcher at Vicarious AI and received his B.A. from Harvard with highest honors in computer science and mathematics in 2016. He has also interned at Google Research (Brain Robotics) and currently splits his time between MIT and the Boston Dynamics AI Institute. His work is supported by an NSF fellowship and an MIT presidential fellowship.
\n END:VEVENT BEGIN:VEVENT UID:ai1ec-14051@lcsr.jhu.edu DTSTAMP:20240319T071248Z CATEGORIES: CONTACT:Michele Greatti\; 4105166841\; lcsr-admin@jh.edu DESCRIPTION: DTSTART;TZID=America/New_York:20240327T120000 DTEND;TZID=America/New_York:20240327T130000 LOCATION:Hackerman B17 @ 3400 N Charles St SEQUENCE:0 SUMMARY:LCSR Seminar: Faculty Candidate\, TBD URL:https://lcsr.jhu.edu/events/lcsr-seminar-tbd-13/ X-COST-TYPE:free END:VEVENT BEGIN:VEVENT UID:ai1ec-14053@lcsr.jhu.edu DTSTAMP:20240319T071248Z CATEGORIES: CONTACT:Michele Greatti\; 4105166841\; lcsr-admin@jh.edu DESCRIPTION:Model-Based Methods in Today’s Data-Driven Robotics Landscape\nSeth Hutchinson\, Georgia Tech\nAbstract:\nData-driven machine learning methods are making advances in many long-standing problems in robotics\, including grasping\, legged locomotion\, perception\, and more. There are\, however\, robotics applications for which data-driven methods are less effective. Data acquisition can be expensive\, time consuming\, or dangerous — to the surrounding workspace\, humans in the workspace\, or the robot itself. In such cases\, generating data via simulation might seem a natural recourse\, but simulation methods come with their own limitations\, particularly when nondeterministic effects are significant\, or when complex dynamics are at play\, requiring heavy computation and exposing the so-called sim2real gap.
Another alternative is to rely on a set of demonstrations\, limiting the amount of required data by careful curation of the training examples\; however\, these methods fail when confronted with problems that were not represented in the training examples (so-called out-of-distribution problems)\, and this precludes the possibility of providing provable performance guarantees.\nIn this talk\, I will describe recent work on robotics problems that do not readily admit data-driven solutions\, including flapping flight by a bat-like robot\, vision-based control of soft continuum robots\, a cable-driven graffiti-painting robot\, and ensuring safe operation of mobile manipulators in HRI scenarios. I will describe some specific difficulties that confront data-driven methods for these problems\, and describe how model-based approaches can provide workable solutions. Along the way\, I will also discuss how judicious incorporation of data-driven machine learning tools can enhance performance of these methods.\n\nBIO:\nSeth Hutchinson is the Executive Director of the Institute for Robotics and Intelligent Machines at the Georgia Institute of Technology\, where he is also Professor and KUKA Chair for Robotics in the School of Interactive Computing. Hutchinson received his Ph.D. from Purdue University in 1988\, and in 1990 joined the University of Illinois in Urbana-Champaign (UIUC)\, where he was a Professor of Electrical and Computer Engineering (ECE) until 2017\, serving as Associate Department Head of ECE from 2001 to 2007.\nHutchinson served as president of the IEEE Robotics and Automation Society (RAS) 2020-21. He has previously served as a member of the RAS Administrative Committee\, as the Editor-in-Chief for the “IEEE Transactions on Robotics” and as the founding Editor-in-Chief of the RAS Conference Editorial Board.
He has served on the organizing committees for more than 100 conferences\, has more than 300 publications on the topics of robotics and computer vision\, and is coauthor of the books “Robot Modeling and Control\,” published by Wiley\, “Principles of Robot Motion: Theory\, Algorithms\, and Implementations\,” published by MIT Press\, and the forthcoming “Introduction to Robotics and Perception\,” to be published by Cambridge University Press. He is a Fellow of the IEEE.\n DTSTART;TZID=America/New_York:20240403T120000 DTEND;TZID=America/New_York:20240403T130000 LOCATION:Hackerman B17 @ 3400 N Charles St SEQUENCE:0 SUMMARY:LCSR Seminar: Seth Hutchinson\, “Model-Based Methods in Today’s Data-Driven Robotics Landscape” URL:https://lcsr.jhu.edu/events/lcsr-seminar-tbd-14/ X-COST-TYPE:free X-WP-IMAGES-URL:thumbnail\;https://lcsr.jhu.edu/wp-content/uploads/2023/11/seth-hutchinson-9688-300x205.jpg\;300\;205\,medium\;https://lcsr.jhu.edu/wp-content/uploads/2023/11/seth-hutchinson-9688-300x205.jpg\;300\;205\,large\;https://lcsr.jhu.edu/wp-content/uploads/2023/11/seth-hutchinson-9688-300x205.jpg\;300\;205\,full\;https://lcsr.jhu.edu/wp-content/uploads/2023/11/seth-hutchinson-9688-300x205.jpg\;300\;205 X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\n
Model-Based Methods in Today’s Data-Driven Robotics Landscape
\nSeth Hutchinson\, Georgia Tech
Abstract:
\nData-driven machine learning methods are making advances in many long-standing problems in robotics\, including grasping\, legged locomotion\, perception\, and more. There are\, however\, robotics applications for which data-driven methods are less effective. Data acquisition can be expensive\, time consuming\, or dangerous — to the surrounding workspace\, humans in the workspace\, or the robot itself. In such cases\, generating data via simulation might seem a natural recourse\, but simulation methods come with their own limitations\, particularly when nondeterministic effects are significant\, or when complex dynamics are at play\, requiring heavy computation and exposing the so-called sim2real gap. Another alternative is to rely on a set of demonstrations\, limiting the amount of required data by careful curation of the training examples\; however\, these methods fail when confronted with problems that were not represented in the training examples (so-called out-of-distribution problems)\, and this precludes the possibility of providing provable performance guarantees.
In this talk\, I will describe recent work on robotics problems that do not readily admit data-driven solutions\, including flapping flight by a bat-like robot\, vision-based control of soft continuum robots\, a cable-driven graffiti-painting robot\, and ensuring safe operation of mobile manipulators in HRI scenarios. I will describe some specific difficulties that confront data-driven methods for these problems\, and describe how model-based approaches can provide workable solutions. Along the way\, I will also discuss how judicious incorporation of data-driven machine learning tools can enhance performance of these methods.
\n\nBIO:
\nSeth Hutchinson is the Executive Director of the Institute for Robotics and Intelligent Machines at the Georgia Institute of Technology\, where he is also Professor and KUKA Chair for Robotics in the School of Interactive Computing. Hutchinson received his Ph.D. from Purdue University in 1988\, and in 1990 joined the University of Illinois in Urbana-Champaign (UIUC)\, where he was a Professor of Electrical and Computer Engineering (ECE) until 2017\, serving as Associate Department Head of ECE from 2001 to 2007.
\nHutchinson served as president of the IEEE Robotics and Automation Society (RAS) 2020-21. He has previously served as a member of the RAS Administrative Committee\, as the Editor-in-Chief for the “IEEE Transactions on Robotics” and as the founding Editor-in-Chief of the RAS Conference Editorial Board. He has served on the organizing committees for more than 100 conferences\, has more than 300 publications on the topics of robotics and computer vision\, and is coauthor of the books “Robot Modeling and Control\,” published by Wiley\, “Principles of Robot Motion: Theory\, Algorithms\, and Implementations\,” published by MIT Press\, and the forthcoming “Introduction to Robotics and Perception\,” to be published by Cambridge University Press. He is a Fellow of the IEEE.
\n\n END:VEVENT BEGIN:VEVENT UID:ai1ec-14055@lcsr.jhu.edu DTSTAMP:20240319T071248Z CATEGORIES: CONTACT:Michele Greatti\; 4105166841\; lcsr-admin@jh.edu DESCRIPTION: DTSTART;TZID=America/New_York:20240410T120000 DTEND;TZID=America/New_York:20240410T130000 LOCATION:Hackerman B17 @ 3400 N Charles St SEQUENCE:0 SUMMARY:LCSR Seminar: Student URL:https://lcsr.jhu.edu/events/lcsr-seminar-student-3/ X-COST-TYPE:free END:VEVENT BEGIN:VEVENT UID:ai1ec-14057@lcsr.jhu.edu DTSTAMP:20240319T071248Z CATEGORIES: CONTACT:Michele Greatti\; 4105166841\; lcsr-admin@jh.edu DESCRIPTION: DTSTART;TZID=America/New_York:20240417T120000 DTEND;TZID=America/New_York:20240417T130000 LOCATION:Hackerman B17 @ 3400 N Charles St SEQUENCE:0 SUMMARY:LCSR Seminar: Faculty Candidate\, TBD URL:https://lcsr.jhu.edu/events/lcsr-seminar-tbd-15/ X-COST-TYPE:free END:VEVENT BEGIN:VEVENT UID:ai1ec-14059@lcsr.jhu.edu DTSTAMP:20240319T071248Z CATEGORIES: CONTACT:Michele Greatti\; 4105166841\; lcsr-admin@jh.edu DESCRIPTION: DTSTART;TZID=America/New_York:20240424T120000 DTEND;TZID=America/New_York:20240424T130000 LOCATION:Hackerman B17 @ 3400 N Charles St SEQUENCE:0 SUMMARY:LCSR Seminar: Professional Development URL:https://lcsr.jhu.edu/events/lcsr-seminar-professional-development-2/ X-COST-TYPE:free END:VEVENT END:VCALENDAR