BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//128.220.36.25//NONSGML kigkonsult.se iCalcreator 2.26.9//
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-FROM-URL:https://lcsr.jhu.edu
X-WR-TIMEZONE:America/New_York
BEGIN:VTIMEZONE
TZID:America/New_York
X-LIC-LOCATION:America/New_York
BEGIN:STANDARD
DTSTART:20231105T020000
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
RDATE:20241103T020000
TZNAME:EST
END:STANDARD
BEGIN:DAYLIGHT
DTSTART:20240310T020000
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
RDATE:20250309T020000
TZNAME:EDT
END:DAYLIGHT
END:VTIMEZONE
BEGIN:VEVENT
UID:ai1ec-11992@lcsr.jhu.edu
DTSTAMP:20240319T091411Z
CATEGORIES:
CONTACT:Ashley Moriarty\; amoriar2@jhu.edu
DESCRIPTION:
\n
Abstract:
TBA
\n\n
Biography:
\nJeremy D. Brown\, the John C. Malone Assistant Professor in the Department of Mechanical Engineering\, explores the interface between humans and robotics\, with a specific focus on medical applications and haptic feedback. Brown is a graduate of the Atlanta University Center’s Dual Degree Engineering Program\, earning bachelor’s degrees in Applied Physics and Mechanical Engineering from Morehouse College and the University of Michigan\, respectively. He received his MSE and PhD in Mechanical Engineering at the University of Michigan\, where he worked on haptic feedback for upper-extremity prosthetic devices. Prior to joining Johns Hopkins in 2017\, he was a postdoctoral research fellow at the University of Pennsylvania.
\nChien-Ming Huang\, a John C. Malone Assistant Professor in the Department of Computer Science\, studies human-machine teaming and creates innovative\, intuitive\, personalized technologies to provide social\, physical\, and behavioral support for people with a variety of abilities and characteristics\, including children with autism spectrum disorders. Huang\, who joined the Hopkins faculty in 2017\, has received several awards\, including his appointment as a John C. Malone Assistant Professor at JHU. In 2018\, he was selected for the Association for Computing Machinery’s (ACM) Conference on Human Factors in Computing Systems (referred to as CHI) Early Career Symposium and its New Educators Workshop for the ACM’s Special Interest Group on Computer Science Education. As a PhD candidate\, Huang received “Best Paper Runner-up” and “Best Student Poster Runner-up” honors at the 2013 Robotics: Science and Systems (RSS) conference and was named a 2012 Human Robot Interaction (HRI) Pioneer.
\n
DTSTART;TZID=America/New_York:20210324T120000
DTEND;TZID=America/New_York:20210324T130000
LOCATION:https://wse.zoom.us/s/94623801186
SEQUENCE:0
SUMMARY:LCSR Seminar: “Human Subjects Experiments in Robotics Research”
URL:https://lcsr.jhu.edu/events/human-subjects-experiments-in-robotics-research/
X-COST-TYPE:free
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-11864@lcsr.jhu.edu
DTSTAMP:20240319T091411Z
CATEGORIES:
CONTACT:Ashley Moriarty\; amoriar2@jhu.edu
DESCRIPTION:
\n
Abstract:
The ability to efficiently move in complex environments is a fundamental property both for animals and for robots\, and the problem of locomotion and movement control is an area in which neuroscience\, biomechanics\, and robotics can fruitfully interact. In this talk\, I will present how biorobots and numerical models can be used to explore the interplay of the four main components underlying animal locomotion\, namely central pattern generators (CPGs)\, reflexes\, descending modulation\, and the musculoskeletal system. Going from lamprey to human locomotion\, I will present a series of models that tend to show that the respective roles of these components have changed during evolution\, with a dominant role of CPGs in lamprey and salamander locomotion\, and a more important role for sensory feedback and descending modulation in human locomotion. I will also present a recent project showing how robotics can provide scientific tools for paleontology. Interesting properties for robot and lower-limb exoskeleton locomotion control will finally be discussed.
\n\n
Biography:
\nAuke Ijspeert has been a professor at EPFL (Lausanne\, Switzerland) since 2002\, where he heads the Biorobotics Laboratory. He holds a BSc/MSc in physics from EPFL (1995) and a PhD in artificial intelligence from the University of Edinburgh (1999). He is an IEEE Fellow. His research interests are at the intersection of robotics\, computational neuroscience\, nonlinear dynamical systems\, and applied machine learning. He is interested in using numerical simulations and robots to gain a better understanding of animal locomotion\, and in using inspiration from biology to design novel types of robots and controllers. He is also investigating how to assist persons with limited mobility using exoskeletons and assistive furniture.
\n
DTSTART;TZID=America/New_York:20210331T120000
DTEND;TZID=America/New_York:20210331T130000
LOCATION:https://wse.zoom.us/s/94623801186
SEQUENCE:0
SUMMARY:LCSR Seminar: Auke Ijspeert “Investigating animal locomotion using biorobots”
URL:https://lcsr.jhu.edu/events/auke-ijspeert/
X-COST-TYPE:free
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-11865@lcsr.jhu.edu
DTSTAMP:20240319T091411Z
CATEGORIES:
CONTACT:Ashley Moriarty\; amoriar2@jhu.edu
DESCRIPTION:
\n
Abstract:
This talk will describe how ground\, aerial\, and marine robots have been used in disasters\, most recently the coronavirus pandemic. During the pandemic so far\, 338 instances of robots in 48 countries protecting healthcare workers from unnecessary exposure\, handling the surge in demand for clinical care\, preventing infections\, restoring economic activity\, and maintaining individual quality of life have been reported. The uses span six sociotechnical work domains and 29 different use cases representing different missions\, robot work envelopes\, and human-robot interaction dyads. The dataset also confirms a model of adoption of robotics technology for disasters. Adoption favors robots that maximize the suitability for established use cases while minimizing risk of malfunction\, hidden workload costs\, or unintended consequences as measured by the NASA Technical Readiness Assessment metrics. Regulations do not present a major barrier\, but availability\, either in terms of inventory or prohibitively high costs\, does. The model suggests that in order to be prepared for future events\, roboticists should partner with responders now\, investigate how to rapidly manufacture complex\, reliable robots on demand\, and conduct fundamental research on predicting and mitigating risk in extreme or novel environments.
\n\n
Biography:
\nDr. Robin R. Murphy is the Raytheon Professor of Computer Science and Engineering at Texas A&M University\, a TED speaker\, and an IEEE and ACM Fellow. She helped create the fields of disaster robotics and human-robot interaction\, deploying robots to 29 disasters in five countries\, including the 9/11 World Trade Center\, Fukushima\, the Syrian boat refugee crisis\, Hurricane Harvey\, and the Kilauea volcanic eruption. Murphy’s contributions to robotics have been recognized with the ACM Eugene L. Lawler Award for Humanitarian Contributions\, a US Air Force Exemplary Civilian Service Award medal\, the AUVSI Foundation’s Al Aube Award\, and the Motohiro Kisoi Award for Rescue Engineering Education (Japan). She has written the best-selling textbook Introduction to AI Robotics (2nd edition).
DTSTART;TZID=America/New_York:20210407T120000
DTEND;TZID=America/New_York:20210407T130000
LOCATION:https://wse.zoom.us/s/94623801186
SEQUENCE:0
SUMMARY:LCSR Seminar: Robin Murphy “From the World Trade Center to the COVID-19 Pandemic: Robots and Disasters”
URL:https://lcsr.jhu.edu/events/robin-murphy/
X-COST-TYPE:free
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-11869@lcsr.jhu.edu
DTSTAMP:20240319T091411Z
CATEGORIES:
CONTACT:Ashley Moriarty\; amoriar2@jhu.edu
DESCRIPTION:
\n
Abstract:
When we think of animal behavior\, what typically comes to mind are actions – running\, eating\, swimming\, grooming\, flying\, singing\, resting. Behavior\, however\, is more than the catalogue of motions that an organism can perform. Animals organize their repertoire of actions into sequences and patterns whose underlying dynamics last much longer than any particular behavior. How an organism modulates these dynamics affects its success at accessing food\, reproducing\, and myriad other tasks essential for survival. Animals regulate these patterns of behavior via many interacting internal states (hunger\, reproductive cycle\, age\, etc.) that we cannot directly measure. Studying these hidden states’ dynamics\, accordingly\, has proven challenging due to a lack of measurement techniques and theoretical understanding. In this talk\, I will outline our efforts to uncover the latent dynamics that underlie long-timescale structure in animal behavior. Looking across a variety of organisms\, we use a novel methodology to measure animals’ full behavioral repertoires to find the existence of a non-trivial form of long-timescale dynamics that cannot be explained using standard mathematical frameworks. I will present how temporal coarse-graining can be used to understand how these dynamics are generated and how the found coarse-grained states can be related to the internal states governing behavior through a combination of machine learning techniques and dynamical systems modeling. Inferring these hidden dynamics presents a new opportunity to generate insights into the neural and physiological mechanisms that animals use to select actions.
\nBiography:
\nGordon J. Berman\, Ph.D.\, is an Assistant Professor of Biology at Emory University\, Co-Director of the Simons-Emory International Consortium on Motor Control\, and Chair of Recruitment for the Emory Neuroscience Graduate Program. His lab uses theoretical\, computational\, and data-driven approaches to gain quantitative insight into entire repertoires of animal behaviors\, aiming to make connections to the neurobiology\, genetics\, and evolutionary histories that underlie them.\n
DTSTART;TZID=America/New_York:20210421T120000
DTEND;TZID=America/New_York:20210421T130000
LOCATION:https://wse.zoom.us/s/94623801186
SEQUENCE:0
SUMMARY:LCSR Seminar: Gordon Berman “Measuring behavior across scales”
URL:https://lcsr.jhu.edu/events/gordon-berman/
X-COST-TYPE:free
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-11871@lcsr.jhu.edu
DTSTAMP:20240319T091411Z
CATEGORIES:
CONTACT:Ashley Moriarty\; amoriar2@jhu.edu
DESCRIPTION:
\n
Abstract:
Autonomous systems offer the promise of providing greater safety and access. However\, this positive impact will only be achieved if the underlying algorithms that control such systems can be certified to behave robustly. This talk will describe a pair of techniques grounded in infinite-dimensional optimization to address this challenge.
\nThe first technique\, which is called Reachability-based Trajectory Design\, constructs a parameterized representation of the forward reachable set\, which it then uses in concert with predictions to enable real-time\, certified collision checking. This approach\, which is guaranteed to generate not-at-fault behavior\, is demonstrated across a variety of different real-world platforms including ground vehicles\, manipulators\, and walking robots. The second technique is a modeling method that allows one to represent a nonlinear system as a linear system in the infinite-dimensional space of real-valued functions. By applying this modeling method\, one can employ well-understood linear model predictive control techniques to robustly control nonlinear systems. The utility of this approach is verified on a soft robot control task.
\n\n
Biography:
\nRam Vasudevan is an assistant professor in Mechanical Engineering and the Robotics Institute at the University of Michigan. He received a BS in Electrical Engineering and Computer Sciences\, an MS in Electrical Engineering\, and a PhD in Electrical Engineering\, all from the University of California\, Berkeley. He is a recipient of the NSF CAREER Award and the ONR Young Investigator Award. His work has received best paper awards at the IEEE Conference on Robotics and Automation\, the ASME Dynamic Systems and Control Conference\, and the IEEE OCEANS Conference\, and has been a finalist for best paper at Robotics: Science and Systems.
\n
DTSTART;TZID=America/New_York:20210428T120000
DTEND;TZID=America/New_York:20210428T130000
LOCATION:https://wse.zoom.us/s/94623801186
SEQUENCE:0
SUMMARY:LCSR Seminar: Ram Vasudevan “How I Learned to Stop Worrying and Start Loving Lifting to Infinite Dimensions”
URL:https://lcsr.jhu.edu/events/ram-vasudevan/
X-COST-TYPE:free
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-12059@lcsr.jhu.edu
DTSTAMP:20240319T091411Z
CATEGORIES:
CONTACT:https://wse.zoom.us/j/635091574
DESCRIPTION:
The Spring 2021 Final Project Presentation Session for Computer Integrated Surgery II will be held Thursday\, May 6th\, from 18:00 to 21:00 Eastern Time via Zoom. This year\, we have 18 amazing projects supported by graduate students\, faculty\, surgeons\, and companies. We are excited to invite you to join our event to see what the students have achieved through their effort over the past semester.
\nConnection information
\nJoin Zoom Meeting: https://wse.zoom.us/j/635091574
\nMeeting ID: 635 091 574
\nPassword: 001987
\n\n
Agenda
\n– 18:00—18:10 Arrival and greetings
\n– 18:10—18:30 1-minute teaser presentations
\n– 18:30—20:30 Interactive session in breakout rooms
\n– 20:30—20:40 Reconvene and announce finalists
\n– 20:40—20:55 Presentations by finalists
\n– 20:55—21:00 Announcement of Best Project winner
DTSTART;TZID=America/New_York:20210506T180000
DTEND;TZID=America/New_York:20210506T210000
SEQUENCE:0
SUMMARY:Computer Integrated Surgery 2 Final Project Presentations
URL:https://lcsr.jhu.edu/events/computer-integrated-surgery-2-final-project-presentations/
X-COST-TYPE:free
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-12064@lcsr.jhu.edu
DTSTAMP:20240319T091411Z
CATEGORIES:
CONTACT:https://wse.zoom.us/j/98898500603?pwd=dlY2RHlUZXhFUXErK1J6bHcxVUNGdz09
DESCRIPTION:The Spring 2021 Final Project Presentation Session for Deep Learning will be held Tuesday\, May 11th\, from 9am to 12pm Eastern Time via Zoom. This year\, we have many amazing projects to review and celebrate. We are excited to invite you to join our event to see what the students have achieved through their effort over the past semester.
\n\n
Join Zoom Meeting
\nhttps://wse.zoom.us/j/98898500603?pwd=dlY2RHlUZXhFUXErK1J6bHcxVUNGdz09
The closing ceremonies of the Computational Sensing and Medical Robotics (CSMR) REU are set to take place Friday\, August 6\, from 9am until 3pm at this Zoom link. Seventeen undergraduate students from across the country are eager to share the culmination of their work for the past 10 weeks this summer.
\nThe schedule for the day is listed below\, but each presentation is featured in more detail in the program. The event is open to the public and it is not necessary to RSVP.
\n\n
\n
2021 REU Final Presentations
| Time | Presenter | Project Title | Faculty Mentor | Student/Postdoc/Research Engineer Mentors |
| 9:00 | Ben Frey | Deep Learning for Lung Ultrasound Imaging of COVID-19 Patients | Muyinatu Bell | Lingyi Zhao |
| 9:15 | Camryn Graham | Optimization of a Photoacoustic Technique to Differentiate Methylene Blue from Hemoglobin | Muyinatu Bell | Eduardo Gonzalez |
| 9:30 | Ariadna Rivera | Autonomous Quadcopter Flying and Swarming | Enrique Mallada | Yue Shen |
| 9:45 | Katie Sapozhnikov | Force Sensing Surgical Drill | Russell Taylor | Anna Goodridge |
| 10:00 | Savannah Hays | Evaluating SLANT Brain Segmentation using CALAMITI | Jerry Prince | Lianrui Zuo |
| 10:15 | Ammaar Firozi | Robustness of Deep Networks to Adversarial Attacks | René Vidal | Kaleab Kinfu\, Carolina Pacheco |
| 10:30 | Break | | | |
| 10:45 | Karina Soto Perez | Brain Tumor Segmentation in Structural MRIs | Archana Venkataraman | Naresh Nandakumar |
| 11:00 | Jonathan Mi | Design of a Small Legged Robot to Traverse a Field of Multiple Types of Large Obstacles | Chen Li | Ratan Othayoth\, Yaqing Wang\, Qihan Xuan |
| 11:15 | Arko Chatterjee | Telerobotic System for Satellite Servicing | Peter Kazanzides\, Louis Whitcomb\, Simon Leonard | Will Pryor |
| 11:30 | Lauren Peterson | Can a Fish Learn to Ride a Bicycle? | Noah Cowan | Yu Yang |
| 11:45 | Josiah Lozano | Robotic System for Mosquito Dissection | Russell Taylor\, Iulian Iordachita | Anna Goodridge |
| 12:00 | Zulekha Karachiwalla | Application of dual modality haptic feedback within surgical robotics | Jeremy Brown | |
| 12:15 | Break | | | |
| 1:00 | James Campbell | Understanding Overparameterization from Symmetry | René Vidal | Salma Tarmoun |
| 1:15 | Evan Dramko | Establishing FDR Control for Genetic Marker Selection | Soledad Villar\, Jeremias Sulam | N/A |
| 1:30 | Chase Lahr | Modeling Dynamic Systems Through a Classroom Testbed | Jeremy Brown | Mohit Singhala |
| 1:45 | Anire Egbe | Object Discrimination Using Vibrotactile Feedback for Upper Limb Prosthetic Users | Jeremy Brown | |
| 2:00 | Harrison Menkes | Measuring Proprioceptive Impairment in Stroke Survivors (Pre-Recorded) | Jeremy Brown | |
| 2:15 | Deliberations | | | |
| | Winner Announced | | | |
\n
Mark Savage is the Johns Hopkins Life Design Educator for Engineering Masters Students\, advising on all aspects of career development and the internship/job search\, with the Handshake Career Management System as a necessary tool. Look for weekly newsletters\, soon to be emailed to Homewood WSE Masters Students on Sunday nights.
DTSTART;TZID=America/New_York:20210908T120000
DTEND;TZID=America/New_York:20210908T130000
LOCATION:https://wse.zoom.us/s/94623801186
SEQUENCE:0
SUMMARY:LCSR Seminar: Mark Savage “Telling Your Story: The Resume as a Marketing Tool”
URL:https://lcsr.jhu.edu/events/mark-savage/
X-COST-TYPE:free
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-12289@lcsr.jhu.edu
DTSTAMP:20240319T091411Z
CATEGORIES:
CONTACT:Ashley Moriarty\; amoriar2@jhu.edu\; https://wse.zoom.us/s/94623801186
DESCRIPTION:\n
\nAbstract:\n
Robots currently have the capacity to help people in several fields\, including health care\, assisted living\, and manufacturing\, where the robots must share physical space and actively interact with people in teams. The performance of these teams depends upon how fluently all team members can jointly perform their tasks. To be successful within a group\, a robot requires the ability to perceive other members’ actions\, model interaction dynamics\, predict future actions\, and adapt its plans accordingly in real time. In the Collaborative Robotics Lab (CRL)\, we develop novel perception\, prediction\, and planning algorithms for robots to fluently coordinate and collaborate with people in complex human environments. In this talk\, I will highlight various challenges of deploying robots in real-world settings and present our recent work to tackle several of these challenges.
\n\n
Biography:
\nTariq Iqbal is an Assistant Professor of Systems Engineering and Computer Science (by courtesy) at the University of Virginia (UVA). Prior to joining UVA\, he was a Postdoctoral Associate in the Computer Science and Artificial Intelligence Lab (CSAIL) at MIT. He received his Ph.D. in CS from the University of California San Diego (UCSD). Iqbal leads the Collaborative Robotics Lab (CRL)\, which focuses on building robotic systems that work alongside people in complex human environments\, such as factories\, hospitals\, and educational settings. His research group develops artificial intelligence\, computer vision\, and machine learning algorithms to enable robots to solve problems in these domains.
DTSTART;TZID=America/New_York:20210915T120000
DTEND;TZID=America/New_York:20210915T130000
LOCATION:https://wse.zoom.us/s/94623801186
SEQUENCE:0
SUMMARY:LCSR Seminar: Tariq Iqbal “Toward Fluent Collaboration in Human-Robot Teams”
URL:https://lcsr.jhu.edu/events/tariq-iqbal/
X-COST-TYPE:free
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-12292@lcsr.jhu.edu
DTSTAMP:20240319T091411Z
CATEGORIES:
CONTACT:Ashley Moriarty\; amoriar2@jhu.edu\; https://wse.zoom.us/s/94623801186
DESCRIPTION:\n
Abstract: We describe an approach for incorporating prior knowledge into machine learning algorithms. We aim at applications in physics and signal processing in which we know that certain operations must be embedded into the algorithm. Any operation that allows computation of a gradient or sub-gradient towards its inputs is suited for our framework. We derive a maximal error bound for deep nets that demonstrates that inclusion of prior knowledge results in its reduction. Furthermore\, we show experimentally that known operators reduce the number of free parameters. We apply this approach to various tasks ranging from computed tomography image reconstruction over vessel segmentation to the derivation of previously unknown imaging algorithms. As such\, the concept is widely applicable for many researchers in physics\, imaging\, and signal processing. We assume that our analysis will support further investigation of known operators in other fields of physics\, imaging\, and signal processing.
\nShort Bio:
Prof. Dr. Andreas Maier was born on the 26th of November 1980 in Erlangen. He studied Computer Science\, graduated in 2005\, and received his PhD in 2009. From 2005 to 2009 he was working at the Pattern Recognition Lab at the Computer Science Department of the University of Erlangen-Nuremberg. His major research subject was medical signal processing in speech data. In this period\, he developed the first online speech intelligibility assessment tool – PEAKS – that has been used to analyze over 4\,000 patient and control subjects so far.
\nFrom 2009 to 2010\, he worked on flat-panel C-arm CT as a post-doctoral fellow at the Radiological Sciences Laboratory in the Department of Radiology at Stanford University. From 2011 to 2012 he joined Siemens Healthcare as an innovation project manager and was responsible for reconstruction topics in the Angiography and X-ray business unit.
\nIn 2012\, he returned to the University of Erlangen-Nuremberg as head of the Medical Reconstruction Group at the Pattern Recognition Lab. In 2015 he became professor and head of the Pattern Recognition Lab. Since 2016\, he has been a member of the steering committee of the European Time Machine Consortium. In 2018\, he was awarded an ERC Synergy Grant “4D nanoscope”. His current research interests focus on medical imaging\, image and audio processing\, digital humanities\, and interpretable machine learning\, including the use of known operators.
\n
\n
DTSTART;TZID=America/New_York:20210922T120000
DTEND;TZID=America/New_York:20210922T130000
LOCATION:https://wse.zoom.us/s/94623801186
SEQUENCE:0
SUMMARY:LCSR Seminar: Andreas Maier “Known Operator Learning – An Approach to Unite Machine Learning\, Signal Processing and Physics”
URL:https://lcsr.jhu.edu/events/andreas-maier/
X-COST-TYPE:free
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-12297@lcsr.jhu.edu
DTSTAMP:20240319T091411Z
CATEGORIES:
CONTACT:Ashley Moriarty\; amoriar2@jhu.edu\; https://wse.zoom.us/s/94623801186
DESCRIPTION:
\n
Abstract:
\nThe unprecedented prediction accuracy of modern machine learning beckons for its application in a wide range of real-world applications\, including autonomous robots\, fine-grained computer vision\, scientific experimental design\, and many others. In order to create trustworthy AI systems\, we must safeguard machine learning methods from catastrophic failures and provide calibrated uncertainty estimates. For example\, we must account for the uncertainty and guarantee the performance for safety-critical systems\, like autonomous driving and health care\, before deploying them in the real world. A key challenge in such real-world applications is that the test cases are not well represented by the pre-collected training data. To properly leverage learning in such domains\, especially safety-critical ones\, we must go beyond the conventional learning paradigm of maximizing average prediction accuracy with generalization guarantees that rely on strong distributional relationships between training and test examples.
\n\n
In this talk\, I will describe a distributionally robust learning framework that offers accurate uncertainty estimation and rigorous guarantees under data distribution shift. This framework yields appropriately conservative yet still accurate predictions to guide real-world decision-making and is easily integrated with modern deep learning. I will showcase the practicality of this framework in applications on agile robotic control and computer vision. I will also introduce a survey of other real-world applications that would benefit from this framework for future work.
\n\n
Biography:
\nAnqi (Angie) Liu is an Assistant Professor in the Department of Computer Science at the Whiting School of Engineering of Johns Hopkins University. She is broadly interested in developing principled machine learning algorithms for building more reliable\, trustworthy\, and human-compatible AI systems in the real world. Her research focuses on enabling machine learning algorithms to be robust to changing data and environments\, to provide accurate and honest uncertainty estimates\, and to consider human preferences and values in interaction. She is particularly interested in high-stakes applications that concern the safety and societal impact of AI. Previously\, she completed her postdoc in the Department of Computing and Mathematical Sciences at the California Institute of Technology. She obtained her Ph.D. from the Department of Computer Science at the University of Illinois at Chicago. She was selected as a 2020 EECS Rising Star. Her publications appear in top machine learning conferences such as NeurIPS\, ICML\, ICLR\, AAAI\, and AISTATS.
DTSTART;TZID=America/New_York:20210929T120000
DTEND;TZID=America/New_York:20210929T130000
LOCATION:https://wse.zoom.us/s/94623801186
SEQUENCE:0
SUMMARY:LCSR Seminar: Angie Liu “Towards Trustworthy AI: Distributionally Robust Learning under Data Shift”
URL:https://lcsr.jhu.edu/events/angie-liu/
X-COST-TYPE:free
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-12300@lcsr.jhu.edu
DTSTAMP:20240319T091411Z
CATEGORIES:
CONTACT:Ashley Moriarty\; amoriar2@jhu.edu\; https://wse.zoom.us/s/94623801186
DESCRIPTION:\n
Abstract:
\nDeployment of autonomous vehicles (AV) on public roads promises increases in efficiency and safety\, and requires intelligent situation awareness. We wish to have autonomous vehicles that can learn to behave in safe and predictable ways\, and are capable of evaluating risk\, understanding the intent of human drivers\, and adapting to different road situations. This talk describes an approach to learning and integrating risk and behavior analysis in the control of autonomous vehicles. I will introduce Social Value Orientation (SVO)\, which captures how an agent’s social preferences and cooperation affect interactions with other agents by quantifying the degree of selfishness or altruism. SVO can be integrated in control and decision making for AVs. I will provide recent examples of self-driving vehicles capable of adaptation.
\n\n
Daniela Rus is the Andrew (1956) and Erna Viterbi Professor of Electrical Engineering and Computer Science\, Director of the Computer Science and Artificial Intelligence Laboratory (CSAIL) at MIT\, and Deputy Dean of Research in the Schwarzman College of Computing at MIT. Rus’ research interests are in robotics and artificial intelligence. The key focus of her research is to develop the science and engineering of autonomy. Rus is a Class of 2002 MacArthur Fellow\, a fellow of ACM\, AAAI and IEEE\, a member of the National Academy of Engineering\, and of the American Academy of Arts and Sciences. She is a senior visiting fellow at MITRE Corporation. She is the recipient of the Engelberger Award for robotics. She earned her PhD in Computer Science from Cornell University.
DTSTART;TZID=America/New_York:20211006T120000
DTEND;TZID=America/New_York:20211006T130000
LOCATION:https://wse.zoom.us/s/94623801186
SEQUENCE:0
SUMMARY:LCSR Seminar: Daniela Rus “Learning Risk and Social Behavior in Mixed Human-Autonomous Vehicles Systems”
URL:https://lcsr.jhu.edu/events/daniela-rus/
X-COST-TYPE:free
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-12307@lcsr.jhu.edu
DTSTAMP:20240319T091411Z
CATEGORIES:
CONTACT:Ashley Moriarty\; amoriar2@jhu.edu\; https://wse.zoom.us/s/94623801186
DESCRIPTION:\n
\nAbstract:\n
Digital cameras have dramatically changed interventional and surgical procedures. Modern operating rooms utilize a range of cameras to minimize invasiveness or provide vision beyond human capabilities in magnification\, spectra\, or sensitivity. Such surgical cameras provide the most informative and rich signal from the surgical site\, containing information about activity and events as well as physiology and tissue function. This talk will highlight some of the opportunities for computer vision in surgical applications and the challenges in translation to clinically usable systems.
\n\n
Bio:
\nDan Stoyanov is a Professor of Robot Vision in the Department of Computer Science at University College London\, Director of the Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS)\, a Royal Academy of Engineering Chair in Emerging Technologies\, and Chief Scientist at Digital Surgery Ltd. Dan first studied electronics and computer systems engineering at King’s College London before completing a PhD in Computer Science at Imperial College London\, where he specialized in medical image computing.
\n
DTSTART;TZID=America/New_York:20211013T120000
DTEND;TZID=America/New_York:20211013T130000
LOCATION:https://wse.zoom.us/s/94623801186
SEQUENCE:0
SUMMARY:LCSR Seminar: Danail Stoyanov “Towards Understanding Surgical Scenes Using Computer Vision”
URL:https://lcsr.jhu.edu/events/danail-stoyanov/
X-COST-TYPE:free
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-12310@lcsr.jhu.edu
DTSTAMP:20240319T091411Z
CATEGORIES:
CONTACT:Ashley Moriarty\; amoriar2@jhu.edu\; https://wse.zoom.us/s/94623801186
DESCRIPTION:
\n
\nAbstract:\n
I will discuss recent efforts at CinfonIA in enhancing interpretability in deep neural networks through the use of adversarial robustness and multimodal information.
\n\n
Biography:
\nPablo Arbeláez received the PhD with honors in Applied Mathematics from the Université Paris Dauphine in 2005. He was a Senior Research Scientist with the Computer Vision Group at UC Berkeley from 2007 to 2014. He currently holds a faculty position in the Department of Biomedical Engineering at Universidad de los Andes in Colombia. Since 2020\, he leads the Center for Research and Formation in Artificial Intelligence (CinfonIA) at UniAndes. His research interests are in computer vision and machine learning\, in which he has worked on several problems\, including perceptual grouping\, object recognition\, and the analysis of biomedical images.
DTSTART;TZID=America/New_York:20211020T120000
DTEND;TZID=America/New_York:20211020T130000
LOCATION:https://wse.zoom.us/s/94623801186
SEQUENCE:0
SUMMARY:LCSR Seminar: Pablo Arbelaez “Towards Robust Artificial Intelligence”
URL:https://lcsr.jhu.edu/events/pablo-arbelaez/
X-COST-TYPE:free
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-12334@lcsr.jhu.edu
DTSTAMP:20240319T091411Z
CATEGORIES:
CONTACT:Ashley Moriarty\; amoriar2@jhu.edu\; https://wse.zoom.us/s/94623801186
DESCRIPTION:\n
\n
Speakers: Louis Whitcomb\, Marin Kobilarov\, and the LCSR Faculty
\nAbstract:
\nThis LCSR professional development seminar will review the process of interviewing for jobs in academia (e.g. faculty\, post-doc\, and scientist positions) and industry (e.g. engineering\, scientist\, and management positions)\, and will provide tips and best-practices for successful interviewing.
DTSTART;TZID=America/New_York:20211027T120000 DTEND;TZID=America/New_York:20211027T130000 LOCATION:https://wse.zoom.us/s/94623801186 SEQUENCE:0 SUMMARY:LCSR Seminar: LCSR Faculty “Interviewing for Jobs in Academia and Industry” URL:https://lcsr.jhu.edu/events/lcsr-faculty/ X-COST-TYPE:free END:VEVENT BEGIN:VEVENT UID:ai1ec-12312@lcsr.jhu.edu DTSTAMP:20240319T091411Z CATEGORIES: CONTACT:Ashley Moriarty\; amoriar2@jhu.edu\; https://wse.zoom.us/s/94623801186 DESCRIPTION:
\n
Abstract:
\nThere are more than 2 million industrial robots used worldwide every day\, and yet these devices represent one of the most fragmented technologies in the world. With more than 100 brands of industrial robots\, each with their own proprietary\, difficult-to-learn software and programming languages\, we are not seeing the exponential growth we expected out of robots. The computer industry faced a similar challenge in the early 1980s with the advent of the PC\, and computers did not see explosive growth until a few key platforms emerged that focused on making computers accessible to end users and run on a common software platform. At READY Robotics\, we believe the same is true for robots\, and that is why we are building Forge/OS\, our “Windows” for the robotics space that lets every robot speak the same language and provides the same award-winning user experience to end-users. We will talk about how this technology came about\, how we think it can change the future\, and discuss the journey from the initial research performed at Johns Hopkins University up to today.\n
\n
Biography:
\nKel Guerin has been working in the robotics space for more than 10 years\, focusing on the design and usability of a wide variety of robots\, including systems for space exploration\, deep mining\, surgery\, and industrial manufacturing. While obtaining his Ph.D. from Johns Hopkins University (defended 2016)\, Kel worked specifically on the challenge of making industrial robots more flexible and easy to use. The result was his award-winning Forge Operating System and easy-to-use programming interface for industrial robots. Kel spun out his technology into READY Robotics\, an industrial robotics start-up he co-founded in 2016. His work has been featured in the Wall Street Journal and Forbes\, and READY’s products have been called “the Swiss Army knife of robots” by Inc. magazine.
DTSTART;TZID=America/New_York:20211103T120000 DTEND;TZID=America/New_York:20211103T130000 LOCATION:https://wse.zoom.us/s/94623801186 SEQUENCE:0 SUMMARY:LCSR Seminar: Kel Guerin “Building an End-User Focused Operating System for Robotics” URL:https://lcsr.jhu.edu/events/kel-guerin/ X-COST-TYPE:free END:VEVENT BEGIN:VEVENT UID:ai1ec-12322@lcsr.jhu.edu DTSTAMP:20240319T091411Z CATEGORIES: CONTACT:Ashley Moriarty\; amoriar2@jhu.edu\; https://wse.zoom.us/s/94623801186 DESCRIPTION:\n
\nAbstract:\n
TBA
\n\n
Biography:
\nTBA
DTSTART;TZID=America/New_York:20211110T120000 DTEND;TZID=America/New_York:20211110T130000 LOCATION:https://wse.zoom.us/s/94623801186 SEQUENCE:0 SUMMARY:LCSR Seminar: Maya Cakmak URL:https://lcsr.jhu.edu/events/maya-cakmak/ X-COST-TYPE:free END:VEVENT BEGIN:VEVENT UID:ai1ec-12324@lcsr.jhu.edu DTSTAMP:20240319T091411Z CATEGORIES: CONTACT:Ashley Moriarty\; amoriar2@jhu.edu\; https://wse.zoom.us/s/94623801186 DESCRIPTION:\n
Abstract:
\nRobot-assisted surgery (RAS) has gained momentum over the last few decades\, with nearly 1\,200\,000 RAS procedures performed in 2019 alone using the da Vinci Surgical System\, the most widely used surgical robotics platform. The current state-of-the-art surgical robotic systems use only a single endoscope to view the surgical field. In this talk\, we present a novel design of an additional “pickup” camera that can be integrated into the da Vinci Surgical System. We then explore the benefits of our design for human-robot interaction (HRI) and autonomy in RAS. On the HRI side\, we show how this “pickup” camera improves depth perception as well as how its additional view can lead to better surgical training. On the autonomy side\, we show how automating the motion of this camera provides better visualization of the surgical scene. Finally\, we show how this automation work inspires the design of novel execution models of the automation of surgical subtasks\, leading to superhuman performance.
\n\n
Biography:
\nAlaa Eldin Abdelaal is a PhD candidate at the Robotics and Control Laboratory at the University of British Columbia and a visiting graduate scholar at the Computational Interaction and Robotics Lab at Johns Hopkins University. He holds a B.Sc. in Computer and Systems Engineering from Mansoura University in Egypt and an M.Sc. in Computing Science from Simon Fraser University in Canada. His research interests are at the intersection of autonomy and human-robot interaction for human skill augmentation and decision support with application to surgical robotics. His work is co-advised by Dr. Tim Salcudean and Dr. Gregory Hager. His research has been recognized with the Best Bench-to-Bedside Paper Award at the International Conference on Information Processing in Computer-Assisted Interventions (IPCAI) 2019. He is the recipient of the Vanier Canada Graduate Scholarship\, the most prestigious scholarship for PhD students in Canada.
DTSTART;TZID=America/New_York:20211117T120000 DTEND;TZID=America/New_York:20211117T130000 LOCATION:https://wse.zoom.us/s/94623801186 SEQUENCE:0 SUMMARY:LCSR Seminar: Alaa Eldin Abdelaal “An “Additional View” on Human-Robot Interaction and Autonomy in Robot-Assisted Surgery” URL:https://lcsr.jhu.edu/events/alaa-eldin-abdelaal/ X-COST-TYPE:free END:VEVENT BEGIN:VEVENT UID:ai1ec-12339@lcsr.jhu.edu DTSTAMP:20240319T091411Z CATEGORIES: CONTACT:Ashley Moriarty\; amoriar2@jhu.edu\; https://wse.zoom.us/s/94623801186 DESCRIPTION:\n
\nAbstract:\n
In this seminar\, we will have a panel of three LCSR faculty\, Dr. Peter Kazanzides\, Dr. Marin Kobilarov\, and Dr. Axel Krieger\, discussing their experience in commercializing robotic research through licensing and start-ups. The panel will include question and answer sessions with the audience.
\nDTSTART;TZID=America/New_York:20211201T120000 DTEND;TZID=America/New_York:20211201T130000 LOCATION:https://wse.zoom.us/s/94623801186 SEQUENCE:0 SUMMARY:LCSR Seminar: LCSR Faculty “Panel on commercialization of robotics research” URL:https://lcsr.jhu.edu/events/tba-3/ X-COST-TYPE:free END:VEVENT BEGIN:VEVENT UID:ai1ec-12597@lcsr.jhu.edu DTSTAMP:20240319T091411Z CATEGORIES: CONTACT:Ashley Moriarty\; amoriar2@jhu.edu\; https://wse.zoom.us/s/94623801186 DESCRIPTION:
\n
Abstract:
\nAn enduring goal of AI and robotics has been to build a robot capable of robustly performing a wide variety of tasks in a wide variety of environments\; not by sequentially being programmed (or taught) to perform one task in one environment at a time\, but rather by intelligently choosing appropriate actions for whatever task and environment it is facing. This goal remains a challenge. In this talk I’ll describe recent work in our lab aimed at the goal of general-purpose robot manipulation by integrating task-and-motion planning with various forms of model learning. In particular\, I’ll describe approaches to manipulating objects without prior shape models\, to acquiring composable sensorimotor skills\, and to exploiting past experience for more efficient planning.
\n
Biography:
\nTomas Lozano-Perez is a professor in EECS at MIT\, and a member of CSAIL. He was a recipient of the 2011 IEEE Robotics Pioneer Award and a co-recipient of the 2021 IEEE Robotics and Automation Technical Field Award. He is a Fellow of the AAAI\, ACM\, and IEEE.
\n
Abstract:
\nWhile many robots are currently deployable in factories\, warehouses\, and homes\, their autonomous deployment requires either the deployment environments to be highly controlled\, or the deployment to only entail executing one single preprogrammed task. These deployable robots do not learn to address changes and to improve performance. For uncontrolled environments and for novel tasks\, current robots must seek help from highly skilled robot operators for teleoperated (not autonomous) deployment.
\n\n
In this talk\, I will present two approaches to removing these limitations by learning to enable autonomous deployment in the context of mobile robot navigation\, a common core capability for deployable robots: (1) Adaptive Planner Parameter Learning utilizes existing motion planners\, fine-tunes these systems using simple interactions with non-expert users before autonomous deployment\, adapts to different deployment environments\, and produces robust autonomous navigation\; (2) Learning from Hallucination enables agile navigation in highly-constrained deployment environments by exploring in a completely safe training environment and creating synthetic obstacle configurations to learn from. Building on robust autonomous navigation\, I will discuss my vision toward a hardened\, reliable\, and resilient robot fleet that is also task-efficient and whose robots continually learn from each other and from humans.
\n\n
Xuesu Xiao is an incoming Assistant Professor in the Department of Computer Science at George Mason University starting Fall 2022. Currently\, he is a roboticist on The Everyday Robot Project at X\, The Moonshot Factory\, and a research affiliate in the Department of Computer Science at The University of Texas at Austin. Dr. Xiao’s research focuses on field robotics\, motion planning\, and machine learning. He develops highly capable and intelligent mobile robots that are robustly deployable in the real world with minimal human supervision. Dr. Xiao received his Ph.D. in Computer Science from Texas A&M University in 2019\, Master of Science in Mechanical Engineering from Carnegie Mellon University in 2015\, and dual Bachelor of Engineering in Mechatronics Engineering from Tongji University and FH Aachen University of Applied Sciences in 2013. DTSTART;TZID=America/New_York:20220202T120000 DTEND;TZID=America/New_York:20220202T130000 LOCATION:https://wse.zoom.us/s/94623801186 SEQUENCE:0 SUMMARY:LCSR Seminar: Xuesu Xiao “Deployable Robots that Learn” URL:https://lcsr.jhu.edu/events/xuesu-xiao/ X-COST-TYPE:free END:VEVENT BEGIN:VEVENT UID:ai1ec-12615@lcsr.jhu.edu DTSTAMP:20240319T091411Z CATEGORIES: CONTACT:Ashley Moriarty\; amoriar2@jhu.edu\; https://wse.zoom.us/s/94623801186 DESCRIPTION:
\n
Abstract:
\nThis presentation overviews a number of the projects related to image-guided intervention that have taken place in my lab at the Robarts Research Institute at Western University in recent years. Projects cover applications in image-guided neurosurgery and cardiac surgery\, as well as the role of simulation phantoms and technologies such as motion magnification and mixed reality in image-guided interventions.
\n\n
Biography:
\nDr. Terry Peters is a Scientist in the Imaging Research Laboratories at the Robarts Research Institute\, London\, ON\, Canada\, and is Professor Emeritus in the Departments of Medical Imaging and Medical Biophysics\, and the School of Biomedical Engineering\, at Western University. He obtained his PhD in Electrical Engineering at the University of Canterbury in Christchurch\, NZ\, in the field of image reconstruction for CT in 1974\, and following some time as a Medical Physicist at the Christchurch Hospital\, joined the Montreal Neurological Institute at McGill University as a research scientist in 1978. In 1997 he joined the Imaging Research Labs at the Robarts Research Institute at Western University in London\, Canada\, where he expanded his research focus to encompass image-guided procedures in multiple organ systems. He has authored over 350 peer-reviewed papers\, books and book chapters\, and has mentored over 100 trainees. Dr. Peters is a Fellow of several academic and professional societies\, including the IEEE\, the MICCAI Society\, and the Royal Society of Canada.
DTSTART;TZID=America/New_York:20220209T120000 DTEND;TZID=America/New_York:20220209T130000 LOCATION:https://wse.zoom.us/s/94623801186 SEQUENCE:0 SUMMARY:LCSR Seminar: Terry Peters “A journey in Image-guided Intervention” URL:https://lcsr.jhu.edu/events/terry-peters/ X-COST-TYPE:free END:VEVENT BEGIN:VEVENT UID:ai1ec-12619@lcsr.jhu.edu DTSTAMP:20240319T091411Z CATEGORIES: CONTACT:Ashley Moriarty\; amoriar2@jhu.edu\; https://wse.zoom.us/s/94623801186 DESCRIPTION:\n
How many skills do you think you have? Mark Savage\, Life Design Educator for Engineering Masters Students\, will explain how the truth may far exceed your estimate. Knowing\, understanding\, and communicating your major skills will prove useful as you pursue jobs and internships.
\nDTSTART;TZID=America/New_York:20220216T120000 DTEND;TZID=America/New_York:20220216T130000 LOCATION:https://wse.zoom.us/s/94623801186 SEQUENCE:0 SUMMARY:LCSR Seminar: Mark Savage (Life Design Educator) “Skills” URL:https://lcsr.jhu.edu/events/tbd-2/ X-COST-TYPE:free END:VEVENT BEGIN:VEVENT UID:ai1ec-12624@lcsr.jhu.edu DTSTAMP:20240319T091411Z CATEGORIES: CONTACT:Ashley Moriarty\; amoriar2@jhu.edu\; https://wse.zoom.us/s/94623801186 DESCRIPTION:
\n
Abstract:
\nTBA
\n\n
Biography:
\nDr. Ghazi\, MD\, FEBU\, MHPE\, received his medical education from Cairo University\, Egypt in 2000\, where he also completed his Urology residency (2001-2005). He completed a series of fellowships in minimally invasive urological surgery in Paris and Austria (2009-2011)\, where he received accreditation from the European Board of Urology. He completed an Endourology and robotic surgery fellowship at the University of Rochester Medical Center\, New York (2011-2013)\, after which he was appointed Assistant Professor of Urology at the University of Rochester (2013).
\nDr. Ghazi specializes in the diagnosis and minimally invasive treatment of urological cancers as well as complex stone disease. In addition\, he pursued research grants in education\, simulation research\, and surgical training. To enhance his educational background\, he was awarded the George Corner Deans Teaching Fellowship (2014-2016)\, completed the Harvard Macy Institute Program for Educators in Health Professions in 2016\, and completed a Masters in Health Professions Education program at the Warner School of Education\, University of Rochester (2016-2020). He is currently enrolled in a 2-year Senior Leadership Education and Development Program at the University of Rochester.
DTSTART;TZID=America/New_York:20220223T120000 DTEND;TZID=America/New_York:20220223T130000 LOCATION:https://wse.zoom.us/s/94623801186 SEQUENCE:0 SUMMARY:LCSR Seminar: Ahmed Ghazi “Enhancing Surgical Robotic Innovations through the integration of Novel Simulation Technologies” URL:https://lcsr.jhu.edu/events/ahmed-ghazi/ X-COST-TYPE:free END:VEVENT BEGIN:VEVENT UID:ai1ec-12625@lcsr.jhu.edu DTSTAMP:20240319T091411Z CATEGORIES: CONTACT:Ashley Moriarty\; amoriar2@jhu.edu\; https://wse.zoom.us/s/94623801186 DESCRIPTION:\n
Abstract:
\nThe pandemic exacerbated inequities faced by people with disabilities and healthcare workers\; both are at high risk of adverse physical and mental health outcomes. Robots alone are not going to fix these major societal problems\; however\, our work explores how we can design technology to lessen the burden of systemic ableism and healthcare system stress. I will discuss several of our recent projects in acute care and community health contexts. In acute care\, we are building hospital-based robots to support the clinical workforce through item delivery\, telemedicine\, and decision support. In community health\, we are creating interactive and adaptive systems that aim to extend the reach of cognitive neurorehabilitative therapies\, provide respite to overburdened caregivers\, and explore how technology might serve as a means for mediating positive interactions during hardship. We focus on building robots that can adaptively team with and longitudinally learn from people\, and personalize and tailor their behavior.
\n\n
Biography:\n
Dr. Laurel Riek is a professor in Computer Science and Engineering at the University of California\, San Diego\, with a joint appointment in the Department of Emergency Medicine\, and is affiliated with the Contextual Robotics Institute and Design Lab. Dr. Riek directs the Healthcare Robotics Lab and leads research in human-robot teaming and health informatics\, with a focus on autonomous robots that work proximately with people. Riek’s current research interests include long term learning\, robot perception\, and personalization\, with applications in acute care\, neurorehabilitation\, and home health. Dr. Riek received a Ph.D. in Computer Science from the University of Cambridge\, and a B.S. in Logic and Computation from Carnegie Mellon. Riek served as a Senior Artificial Intelligence Engineer and Roboticist at The MITRE Corporation from 2000-2008\, working on learning and vision systems for robots\, and held the Clare Boothe Luce chair in Computer Science and Engineering at the University of Notre Dame from 2011-2016. Dr. Riek has received the NSF CAREER Award\, AFOSR Young Investigator Award\, Qualcomm Research Award\, and was named one of ASEE’s 20 Faculty Under 40. Dr. Riek is the HRI 2023 General Co-Chair\, served as the Program Co-Chair for HRI 2020\, and serves on the editorial boards of T-RO and THRI.
DTSTART;TZID=America/New_York:20220302T120000 DTEND;TZID=America/New_York:20220302T130000 LOCATION:https://wse.zoom.us/s/94623801186 SEQUENCE:0 SUMMARY:LCSR Seminar: Laurel Riek “Robots in Hospitals and in the Community: Supporting Wellbeing and Furthering Health Equity” URL:https://lcsr.jhu.edu/events/laurel-riek/ X-COST-TYPE:free END:VEVENT BEGIN:VEVENT UID:ai1ec-12635@lcsr.jhu.edu DTSTAMP:20240319T091411Z CATEGORIES: CONTACT:Ashley Moriarty\; amoriar2@jhu.edu\; https://wse.zoom.us/s/94623801186 DESCRIPTION:\n
Abstract:
\nWhile human interaction remains key to a caring treatment\, medical robotics holds the potential to improve surgical processes through enabling scaling of forces and actuation\, providing safe and individual treatments to patients\, and allowing for efficient use of health care personnel and resources. Machine learning algorithms and standardization of processes can increase the quality of medical diagnosis and treatments\, particularly when analyzing large quantities of data. Technical and robotic systems can thus support the medical staff in all steps of a medical process.
\nThis talk introduces several assistive robotic systems for minimally invasive surgical procedures being researched at the Health Robotics and Automation Lab at KIT\, Germany. On one hand\, we will discuss steerable flexible robotic tools for medical applications that require delicate tissue handling. On the other hand\, cognitive robotic surgeons and augmented reality support in the operating room are presented for application in laparoscopy and neurosurgery.
\n\n
Bio graphy:
\nFranziska Mathis-Ullrich is Assistant Professor for Medical Robotics at the Karlsruhe Institute of Technology (KIT) in Germany. Her primary research focus is on minimally invasive and cognition-controlled robotic systems and embedded machine learning\, with emphasis on applications in surgery. She received her B.Sc. and M.Sc. degrees in mechanical engineering and robotics in 2009 and 2012\, respectively\, and obtained her Ph.D. in Microrobotics from ETH Zurich in 2017. Since 2019\, she has been an Assistant Professor with the Health Robotics and Automation Laboratory at KIT.
\nProf. Mathis-Ullrich is vice-president of the German Society for Computer- and Robot-assisted Surgery (CURAC) and has received the IEEE ICRA Best Paper Award in Medical Robotics (2014) and the IEEE BioRob Best Student Paper Award (2016)\, and with her team twice won first prize in the ICRA Microassembly Challenge (2014 & 2015). Furthermore\, she made it onto the prestigious Forbes “30 under 30” list (2017).
DTSTART;TZID=America/New_York:20220309T120000 DTEND;TZID=America/New_York:20220309T130000 LOCATION:https://wse.zoom.us/s/94623801186 SEQUENCE:0 SUMMARY:LCSR Seminar: Franziska Mathis-Ullrich “Cognitive Robotics and Embedded AI for minimally invasive Surgery” URL:https://lcsr.jhu.edu/events/franziska-mathis-ullrich/ X-COST-TYPE:free END:VEVENT BEGIN:VEVENT UID:ai1ec-12682@lcsr.jhu.edu DTSTAMP:20240319T091411Z CATEGORIES: CONTACT:Jamie Seward\; 1-800-JHU-JHU1 (548-5481)\; alumevents@jhu.edu\; https://events.jhu.edu/form/roboticsinhealthcare DESCRIPTION:\n\n
Sponsored by the Hopkins Robotics Alumni Network\, the Laboratory for Computational Sensing + Robotics\, and the Healthcare Affinity
\nJoin us as we hear from Dr. Ayushi Sinha\, Senior Scientist in the Precision Diagnosis & Image Guided Therapy department at Philips Research North America. Dr. Sinha will discuss her time at Hopkins\, her career journey\, and her current role. We’ll have time for Q&A with our speaker and time to network with one another. This program will be presented by Zoom. A link will be shared with you in advance.
\nDisclaimer: The perspectives and opinions expressed by the speaker(s) during this program are those of the speaker(s) and not\, necessarily\, those of Johns Hopkins University\, and the scheduling of any speaker at an alumni event or program does not constitute the University’s endorsement of the speaker’s perspectives and opinions.
\nAyushi Sinha is a Senior Scientist in the Precision Diagnosis & Image Guided Therapy department at Philips Research North America. She currently leads a project focused on using machine learning to improve workflow during X-ray guided minimally invasive procedures and has worked on improving guidance during biopsy procedures in her previous roles at Philips. She also leads a group focused on generating intellectual property around machine learning solutions for X-ray guided interventions.
\nAyushi completed her Ph.D. at Johns Hopkins University with Russ Taylor and Greg Hager in the Department of Computer Science with a focus on using statistical shape models to improve guidance during endoscopic sinus procedures. She continued at Hopkins as a postdoctoral fellow and research faculty to explore unsupervised learning in image-based tool tracking. Before her Ph.D.\, Ayushi received a Master of Science in Engineering degree in Computer Science at Hopkins working with Misha Kazhdan\, and a Bachelor of Science degree in Computer Science and a Bachelor of Arts degree in Mathematics at Providence College.
\n\n
Mid-term Spring Semester can usher in the interview season for many students seeking internship or full-time employment opportunities. Mark Savage\, Life Design Educator for Engineering Masters Students\, will walk you through what to expect and how to ace the job interview. Time permitting\, we may also discuss the Elevator Pitch in preparation for your upcoming Robotics Industry Day. Remember to convey some of those 800 skills that relate to some of the jobs you’ll be discussing.
\nDTSTART;TZID=America/New_York:20220316T120000 DTEND;TZID=America/New_York:20220316T130000 LOCATION:https://wse.zoom.us/s/94623801186 SEQUENCE:0 SUMMARY:LCSR Seminar: Mark Savage “Interviews” URL:https://lcsr.jhu.edu/events/mark-savage-2/ X-COST-TYPE:free END:VEVENT BEGIN:VEVENT UID:ai1ec-12449@lcsr.jhu.edu DTSTAMP:20240319T091411Z CATEGORIES: CONTACT:Ashley Moriarty\; 410-516-6841\; ashleymoriarty@jhu.edu DESCRIPTION:
Update Jan 28: Industry Day will now be virtual\, as we won’t know the COVID climate in the future. In order to reduce Zoom fatigue\, we are splitting the event into 2 half days. Industry Day will be Monday\, March 21\, 1-4pm and Tuesday\, March 22\, 1-4pm.
\n2022 Industry Day Agenda/Program
\n
Tuesday 3/22 | Gather Town:
1:00-3:00pm | Poster and Demo Session
3:00-4:00pm | Student and Industry Resume Review
4:00-5:00pm | Networking Reception
\n
The Laboratory for Computational Sensing and Robotics will highlight its elite robotics students and showcase cutting-edge research projects in areas that include Medical Robotics\, Extreme Environments Robotics\, Human-Machine Systems for Manufacturing\, BioRobotics\, and more.\n
Robotics Industry Day will provide top companies and organizations in the private and public sectors with access to the LCSR’s forward-thinking\, solution-driven students. The event will also serve as an informal opportunity to explore university-industry partnerships.
\nYou will experience dynamic presentations and discussions\, observe live demonstrations\, and participate in speed networking sessions that afford you the opportunity to meet Johns Hopkins’ most talented robotics students before they graduate.
\nPlease contact Ashley Moriarty if you have any qu estions.
\n\n
Tickets: https://forms.gle/YUfHzMXBy6t6FdBn8.
DTSTART;TZID=America/New_York:20220321T130000 DTEND;TZID=America/New_York:20220321T160000 LOCATION:Zoom SEQUENCE:0 SUMMARY:2022 JHU Robotics Industry Day URL:https://lcsr.jhu.edu/events/jhu-robotics-industry-day-2022/ X-COST-TYPE:external X-WP-IMAGES-URL:thumbnail\;https://lcsr.jhu.edu/wp-content/uploads/2017/11/6722-LCSR-book-final-pdf.jpg\;553\;716\,medium\;https://lcsr.jhu.edu/wp-content/uploads/2017/11/6722-LCSR-book-final-pdf.jpg\;553\;716\,large\;https://lcsr.jhu.edu/wp-content/uploads/2017/11/6722-LCSR-book-final-pdf.jpg\;553\;716\,full\;https://lcsr.jhu.edu/wp-content/uploads/2017/11/6722-LCSR-book-final-pdf.jpg\;553\;716 X-TICKETS-URL:https://forms.gle/YUfHzMXBy6t6FdBn8 END:VEVENT BEGIN:VEVENT UID:ai1ec-12632@lcsr.jhu.edu DTSTAMP:20240319T091411Z CATEGORIES: CONTACT:Ashley Moriarty\; amoriar2@jhu.edu\; https://wse.zoom.us/s/94623801186 DESCRIPTION:\n
Abstract:
\nLocomotion in living systems and bio-inspired robots requires the generation and control of oscillatory motion. While a common method to generate motion is through modulation of time-dependent “clock” signals\, in this talk we will motivate and study an alternative method of oscillatory generation through autonomous limit-cycle systems. Limit-cycle oscillators for robotics have many desirable properties including adaptive behaviors\, entrainment between oscillators\, and potential simplification of motion control. I will present several examples of the generation and control of autonomous oscillatory motion in bio-inspired robotics. First\, I will describe our recent work to study the dynamics of wingbeat oscillations in “asynchronous” insects and how we can build these behaviors into micro-aerial vehicles. In the second part of this talk I will describe how limit-cycle gait generation in collective robots can enable swarms to synchronize their movement through contact and without communication. More broadly in this talk I hope to motivate why we should look to autonomous dynamical systems for designing and controlling emergent locomotor behaviors in bio-inspired robotics.
\n\n
Biography:
\nDr. Nick Gravish received his PhD from Georgia Tech\, where he used robots as physical models to motivate and study aspects of biological locomotion. During his post-doc\, Gravish worked in the microrobotics lab of Rob Wood at Harvard\, where he gained expertise in designing and studying insect-scale robots. Gravish is currently an assistant professor at UC San Diego in the Mechanical and Aerospace Engineering department. His lab bridges the gap between bio-inspiration\, biomechanics\, and robotics\, towards the development of new bio-inspired robotic technologies to improve the adaptability and resilience of mobile robots.
DTSTART;TZID=America/New_York:20220330T120000 DTEND;TZID=America/New_York:20220330T130000 LOCATION:https://wse.zoom.us/s/94623801186 SEQUENCE:0 SUMMARY:LCSR Seminar: Nick Gravish “Design and control of emergent oscillations for flapping-wing flyers and synchronizing swarms” URL:https://lcsr.jhu.edu/events/nick-gravish/ X-COST-TYPE:free END:VEVENT BEGIN:VEVENT UID:ai1ec-12639@lcsr.jhu.edu DTSTAMP:20240319T091411Z CATEGORIES: CONTACT:Ashley Moriarty\; amoriar2@jhu.edu\; https://wse.zoom.us/s/94623801186 DESCRIPTION:\n
Abstract:
\nDesigning robots for human interaction is a multifaceted challenge involving the robot’s intelligent behavior\, physical form\, mechanical structure\, and interaction schema. Our lab develops and studies human-centered robots using a combination of methods from AI\, Design\, and Human-Computer Interaction. This talk focuses on three recent projects: two concerning the design of a new robot\, and one that tackles designing robots that help human designers.
\n\n
Bio graphy:
\nGuy Hoffman is Associate Professor and the Mills Family Faculty Fellow in the Sibley School of Mechanical and Aerospace Engineering at Cornell University. Prior to that\, he was an Assistant Professor at IDC Herzliya and co-director of the IDC Media Innovation Lab. Hoffman holds a Ph.D. from MIT in the field of human-robot interaction. He heads the Human-Robot Collaboration and Companionship (HRC²) group\, studying the algorithms\, interaction schema\, and designs enabling close interactions between people and personal robots in the workplace and at home. Among others\, Hoffman developed the world’s first human-robot joint theater performance and the first real-time improvising human-robot Jazz duet. His research papers won several top academic awards\, including Best Paper awards at robotics conferences in 2004\, 2006\, 2008\, 2010\, 2013\, 2015\, 2018\, 2019\, 2020\, and 2021. His TEDx talk is one of the most viewed online talks on robotics\, watched more than 3 million times.
DTSTART;TZID=America/New_York:20220406T120000 DTEND;TZID=America/New_York:20220406T130000 LOCATION:https://wse.zoom.us/s/94623801186 SEQUENCE:0 SUMMARY:LCSR Seminar: Guy Hoffman “Designing Robots and Designing with Robots” URL:https://lcsr.jhu.edu/events/guy-hoffman/ X-COST-TYPE:free END:VEVENT BEGIN:VEVENT UID:ai1ec-12649@lcsr.jhu.edu DTSTAMP:20240319T091411Z CATEGORIES: CONTACT:Ashley Moriarty\; amoriar2@jhu.edu\; https://wse.zoom.us/s/94623801186 DESCRIPTION:\n
Abstract:
\nMany successful approaches to robotic locomotion and manipulation operate with high quality simulation tools. Many such approaches are “bottom-up” in a modeling sense\, accounting for all internal forces and environmental interactions explicitly. These “bottom-up” models are used either beforehand (such as in reinforcement learning) and/or in real time. However\, various types of robots are getting smaller\, softer\, and more complex (e.g. bio-hybrid actuators). Some robots lean on low-precision manufacturing and fabrication techniques\, and many robots are now being asked to operate in hard-to-characterize\, natural interfaces like the human body. Such attributes can render “bottom-up” simulators impractical for expected use cases on various research frontiers\, such as micro-biomedical robots and soft robots deployed in uncharacterized environments. In this talk I will revisit the reconstruction equation\, a result from the geometric mechanics literature that offers a “top-down” view of Lagrangian systems\, permitting insights into generalizable system behaviors along a spectrum of friction-momentum dominance. I will show how these tools can permit rapid modeling of high complexity robots in their operating environment without the requirement to specify CAD models or any explicit forces. I will also discuss a related strength and weakness of the approach resulting from the use of symmetries. Surprisingly\, results in simulation and hardware indicate that even with eight-jointed systems\, useful behavioral models can be computed from tens of cycles of data. This suggests that high degree of freedom robots can adjust and excel in situations where explicit force models are poorly understood. I will also briefly discuss a framework for robot recovery that leans on these tools\, as well as a metric for a robot’s ability to cover the local space of motions\, computed on the Lie algebra of the position space.
The metric allows primitives to be valued for their contribution to the space of composed motions rather tha n just their individual qualities. Results here include a Dubins car that can learn how to turn left (with its steering wheel restricted to only tur n right) in less than a second as well as a robot made of tree branches th at can learn to walk around the laboratory with less than twelve minutes o f experimental data. I hope to motivate the general use of structural redu ctions as we pursue modeling and control of the next generation of high co mplexity robots.
\n\n
Biography:
\nDr. Brian Bittner received a B.S. from Carnegie Mellon and a PhD from Michiga n where he researched the theory\, simulation\, and application of physics informed machine learning for in situ behavior modeling and opti mization. He has sought out cross-disciplinary environments for research\, collaborating with physicists\, biologists\, and mathematicians\, working to facilitate insights from these fields into robotic systems. Bittner is currently a research scientist at the Applied Physics Lab. He is currentl y working on approaches to modeling and control for soft robots and underw ater manipulation.
DTSTART;TZID=America/New_York:20220413T120000 DTEND;TZID=America/New_York:20220413T130000 LOCATION:https://wse.zoom.us/s/94623801186 SEQUENCE:0 SUMMARY:LCSR Seminar: Brian Bittner “Data-driven geometric mechanics: top-d own tools for in situ robotic modeling and adaptation to injury” URL:https://lcsr.jhu.edu/events/brian-bittner/ X-COST-TYPE:free END:VEVENT BEGIN:VEVENT UID:ai1ec-12654@lcsr.jhu.edu DTSTAMP:20240319T091411Z CATEGORIES: CONTACT:Ashley Moriarty\; amoriar2@jhu.edu\; https://wse.zoom.us/s/94623801 186 DESCRIPTION:\n
Abstract:
\nThe talk will present a survey of my research a ctivities\, with more detailed presentation of our guidance system for rob ot-assisted prostate cancer surgery. The majority of prostate cancer surge ry is carried out with the da Vinci surgical system. Tracking of instrumen ts and hand-eye calibration of this robotic system enables the overlay of pre-operative magnetic resonance imaging by registration to real-time ultr asound. This enables visualization of sub-surface anatomy and cancer. We w ill discuss our system design\, visualization and registration approaches.
\nWe will also discuss instrumentation for force sensing using the da Vinci Research Kit\, and a new approach to teleguidance for ultrasound examinations.
\n\n
Biography:
\nTim Sa lcudean is a Professor with the Department of Electrical and Computer Engi neering\, where he holds the C.A. Laszlo Chair in Biomedical Engineering. He is cross-appointed with the UBC School of Biomedical Engineering and th e Vancouver Prostate Centre. He is on the steering committee of the IPCAI conference and on the Editorial Board of the International Journal of Robo tics Research. He is a Fellow of IEEE\, MICCAI and of the Canadian Academy of Engineering. His research interests are in medical robotics\, medical image analysis and elastography imaging.
DTSTART;TZID=America/New_York:20220420T120000 DTEND;TZID=America/New_York:20220420T130000 LOCATION:https://wse.zoom.us/s/94623801186 SEQUENCE:0 SUMMARY:LCSR Seminar: Tim Salcudean “Ultrasound-based guidance for robot as sisted prostate surgery” URL:https://lcsr.jhu.edu/events/tim-salcudean/ X-COST-TYPE:free END:VEVENT BEGIN:VEVENT UID:ai1ec-12660@lcsr.jhu.edu DTSTAMP:20240319T091411Z CATEGORIES: CONTACT:Ashley Moriarty\; amoriar2@jhu.edu\; https://wse.zoom.us/s/94623801 186 DESCRIPTION:\n
Panelists:
\nDTSTART;TZID=America/New_York:20220427T120000 DTEND;TZID=America/New_York:20220427T130000 LOCATION:https://wse.zoom.us/s/94623801186 SEQUENCE:0 SUMMARY:LCSR Seminar: Panel on Careers in Robotics A Panel Discussion With Experts From Industry and Academia URL:https://lcsr.jhu.edu/events/panel-on-careers-in-robotics-a-panel-discus sion-with-experts-from-industry-and-academia/ X-COST-TYPE:free END:VEVENT BEGIN:VEVENT UID:ai1ec-13064@lcsr.jhu.edu DTSTAMP:20240319T091411Z CATEGORIES: CONTACT:Ashley Moriarty\; amoriar2@jhu.edu DESCRIPTION:
\n
DTSTART;TZID=America/New_York:20220831T120000 DTEND;TZID=America/New_York:20220831T130000 LOCATION:Hackerman B17 SEQUENCE:0 SUMMARY:LCSR Seminar: Welcome Townhall “Review of LCSR” URL:https://lcsr.jhu.edu/events/townhall2022/ X-COST-TYPE:free END:VEVENT BEGIN:VEVENT UID:ai1ec-13074@lcsr.jhu.edu DTSTAMP:20240319T091411Z CATEGORIES: CONTACT:Ashley Moriarty\; amoriar2@jhu.edu DESCRIPTION:
\n
Abstract:
\nFlexible and soft med ical robots offer capabilities beyond those of conventional rigid-link rob ots due to their ability to traverse confined spaces and conform to highly curved paths. They also offer potential for improved safety due to their inherent compliance. In this talk\, I will present several new robot desig ns for various surgical applications. In particular\, I will discuss our w ork on soft\, growing robots that achieve locomotion by material extending from their tip. I will discuss limitations in miniaturizing such robots\, along with methods for actively steering\, sensing\, and controlling them . Finally\, I will discuss new sensing and human-in-the-loop control parad igms that are aimed at improving the performance of flexible surgical robo ts.
\nBio:
\nTania Morimoto is an Assistant Professor in the D epartment of Mechanical and Aerospace Engineering and in the Department of Surgery at the University of California\, San Diego. She received the B.S . degree from Massachusetts Institute of Technology\, Cambridge\, MA\, and the M.S. and Ph.D. degrees from Stanford University\, Stanford\, CA\, all in mechanical engineering. Her research lab focuses on the design and con trol of flexible continuum robots for increased dexterity and accessibilit y in uncertain environments\, particularly for minimally invasive surgical interventions. They are also working to address the challenges of designi ng human-in-the-loop interfaces for controlling these flexible and soft ro bots\, including the integration of haptic feedback to improve surgical ou tcomes. She is a recipient of the Hellman Fellowship (2021)\, the Beckman Young Investigator Award (2022)\, and the NSF CAREER Award (2022).
DTSTART;TZID=America/New_York:20220907T120000 DTEND;TZID=America/New_York:20220907T130000 LOCATION:Hackerman B17 SEQUENCE:0 SUMMARY:LCSR Seminar: Tania Morimoto “Flexible Surgical Robots: Design\, Se nsing\, and Control” URL:https://lcsr.jhu.edu/events/tania-morimoto/ X-COST-TYPE:free END:VEVENT BEGIN:VEVENT UID:ai1ec-13084@lcsr.jhu.edu DTSTAMP:20240319T091411Z CATEGORIES: CONTACT:Ashley Moriarty\; amoriar2@jhu.edu DESCRIPTION:\n
Abstract:
\nExtreme globalization \, war in the Western World\, COVID-19 are presenting together an unpreced ented challenge for humanity. Engineering intelligent systems and robotics can help to counter-balance the negative effects in a number of ways. Pot ential technology-driven solutions include the emergence of medical robots \, Surgical Data Science\, AI-based support for early anomaly detection an d health diagnosis\, rescue robotics\, smart agrifood robotic solutions an d beyond. Much of these areas are addressed by the various applied researc h projects of the University Research and Innovation center (EKIK) at Óbud a University. This presentation highlights through examples the role that robotics and automation can play in living up to global challenges. The ta lk will also cover the ethical implications of robotics research\, in both the emergency and post-pandemic world\, with a specific focus on the 2015 UN Sustainable Development Goals.
\nDTSTART;TZID=America/New_York:20220914T120000 DTEND;TZID=America/New_York:20220914T130000 LOCATION:Hackerman B17 SEQUENCE:0 SUMMARY:LCSR Seminar: Tamas Haidegger “Medtech research and beyond at Óbuda University” URL:https://lcsr.jhu.edu/events/tamas-haidegger/ X-COST-TYPE:free END:VEVENT BEGIN:VEVENT UID:ai1ec-13079@lcsr.jhu.edu DTSTAMP:20240319T091411Z CATEGORIES: CONTACT:Ashley Moriarty\; amoriar2@jhu.edu DESCRIPTION:
\n
Workshop Description: TBA
\n\n
DTSTART;TZID=America/New_York:20220921T120000 DTEND;TZID=America/New_York:20220921T130000 LOCATION:Hackerman B17 SEQUENCE:0 SUMMARY:LCSR Seminar: Mark Savage “Elevator Pitch Workshop” URL:https://lcsr.jhu.edu/events/mark-savage-3/ X-COST-TYPE:free END:VEVENT BEGIN:VEVENT UID:ai1ec-13096@lcsr.jhu.edu DTSTAMP:20240319T091411Z CATEGORIES: CONTACT:Ashley Moriarty\; amoriar2@jhu.edu DESCRIPTION:
\n
Abstract:
\nHuman motor learning depends on a suite of brain mechanisms that are driven by different signal s and operate on timescales ranging from minutes to years. Understanding these processes requires identifying how new movement patterns are normall y acquired\, retained\, and generalized\, as well as the effects of distin ct brain lesions. The lecture will focus on normal and abnormal motor lea rning\, and how we can use this information to improve rehabilitation for individuals with neurological damage.
\n\n
Bio:
\nDr. A my Bastian is a neuroscientist who has made important contributions to the neuroscience of sensorimotor control. She is the Chief Science Officer a t the Kennedy Krieger Institute\, and Director of the motion analysis labo ratory that studies the neural control of human movement. Dr. Bastian is also a Professor of Neuroscience\, Neurology and PM&R at the Johns Hopkins University School of Medicine. Dr. Bastian is a recognized and highly ac complished neuroscientists whose interests include understanding cerebella r function/dysfunction\, locomotor learning mechanisms\, motor learning in development\, and how to rehabilitate people with many types of neurologi cal diseases.
DTSTART;TZID=America/New_York:20220928T120000 DTEND;TZID=America/New_York:20220928T130000 LOCATION:Hackerman B17 SEQUENCE:0 SUMMARY:LCSR Seminar: Amy Bastian “Learning and relearning human movement” URL:https://lcsr.jhu.edu/events/amy-bastien/ X-COST-TYPE:free END:VEVENT BEGIN:VEVENT UID:ai1ec-13091@lcsr.jhu.edu DTSTAMP:20240319T091411Z CATEGORIES: CONTACT:Ashley Moriarty\; amoriar2@jhu.edu DESCRIPTION:\n
Abstract: Planning\, the ability to imagine different futures and select one assessed to have high value\, is one of the most vaunted of animal capacities. As such\, it has been a central target of artificial intelligence work from the origins of that field\, in addition to being a focus of neuroscience and cognitive science. These separate and sometimes synergistic traditions are combined in our new work exploring the origin and mechanics of planning in animals. We will show how mammals evade autonomous robot “predators” in complex large arenas. We have discovered that\, depending on the arrangement and density of barriers to vision\, animals appear to carefully manage their uncertainty about the predator’s location in order to reach their goal. Their behavior appears unlikely to be driven by cached responses that were successful in the past\, but rather based on planning during brief pauses over which they peek at the hidden robot adversary that is looking for them. After peeking\, they re-route to avoid the predator.</p>
\n\n
Bio: Malcolm A. MacIver is a group leader of the Center for Robotics and Biosystems at Northwestern University\, with a joint appointment between Mechanical Engineering and Biomedical Engineering\, and courtesy appointments in the Department of Neurobiology and the Department of Computer Science. His work focuses on extracting principles underlying animal behavior\, focusing on interactions between biomechanics\, sensory systems\, and planning circuits. He then incorporates these principles into biorobotic systems or simulations of the animal in its environment for synergy between technological and scientific advances. For this work he received the 2009 Presidential Early Career Award for Science and Engineering from President Obama at the White House. MacIver has also developed interactive science-inspired art installations that have been exhibited internationally\, and consults for makers of science fiction film and TV series.</p>
DTSTART;TZID=America/New_York:20221005T120000 DTEND;TZID=America/New_York:20221005T130000 LOCATION:Hackerman B17 SEQUENCE:0 SUMMARY:LCSR Seminar: Malcolm MacIver “Biological planning deciphered via A I algorithms and robot-animal competition in partially observable environm ents” URL:https://lcsr.jhu.edu/events/malcolm-maciver/ X-COST-TYPE:free END:VEVENT BEGIN:VEVENT UID:ai1ec-13268@lcsr.jhu.edu DTSTAMP:20240319T091411Z CATEGORIES: CONTACT: DESCRIPTION:Friday 10/7 \n | \n\n | \n
1:00 pm</td>\n<td>Keynote Speaker: Stephen Aylward\, Senior Director of Strategic Initiatives at Kitware</td>\n</tr>
<tr>\n<td>2:00 pm</td>\n<td>Elevator Pitch Practice: For students with Industry Professionals</td>\n</tr>
<tr>\n<td>3:00 pm</td>\n<td>Virtual Company Job Fair</td>\n</tr>\n</tbody>\n</table>
\n
Keynote Spe aker: Stephen Aylward “Do something slightly different”
\n< strong>
\n\nAbstract: This talk e xplores the increasing overlap that exists in academic and industry enviro nments\, the role of research and product development in those environment s\, and how you can shape your career to succeed in either. It also explo res how adopting the concepts and tools of open science can lead to succes s in both.
\nBio: Stephen Aylward’s industry career began as an MS graduate surrounded by PhDs in the AI research labs at McD onnell Douglas. He then received a PhD in computer science and became a t enured associate professor in the department of radiology at UNC. That wa s followed by him pivoting back to industry and founding Kitware’s office in North Carolina\, where he has had many roles as the company grew. He s uccessfully patented and licensed software while in academia and played le ad roles in the development of numerous open-source projects including ITK and 3D Slicer while in industry. He now serves as Senior Director of Str ategic Initiatives at Kitware\, as an adjunct professor in computer scienc e at UNC\, and as chair of the advisory board for the development of MONAI \, a leading open-source PyTorch library for medical AI. His NIH\, DARPA\ , and DoD funded research currently focuses on point-of-care AI and develo ping quantitative ultrasound spectroscopy measures to aid in the care of t rauma victims in ambulances\, emergency departments\, and intensive care u nits.
\nDTSTART;TZID=America/New_York:20221007T130000 DTEND;TZID=America/New_York:20221007T160000 SEQUENCE:0 SUMMARY:JHU Robotics Career Fair URL:https://lcsr.jhu.edu/events/jhu-robotics-career-fair/ X-COST-TYPE:free X-WP-IMAGES-URL:thumbnail\;https://lcsr.jhu.edu/wp-content/uploads/2022/10/ Stephen.jpg\;300\;300\,medium\;https://lcsr.jhu.edu/wp-content/uploads/202 2/10/Stephen.jpg\;300\;300\,large\;https://lcsr.jhu.edu/wp-content/uploads /2022/10/Stephen.jpg\;300\;300\,full\;https://lcsr.jhu.edu/wp-content/uplo ads/2022/10/Stephen.jpg\;300\;300 END:VEVENT BEGIN:VEVENT UID:ai1ec-13086@lcsr.jhu.edu DTSTAMP:20240319T091411Z CATEGORIES: CONTACT:Ashley Moriarty\; amoriar2@jhu.edu DESCRIPTION:
\n
Student Seminar 1:
\n\n
St udent Seminar 2:
\n\n
DTSTART;TZID=America/New_York:20221012T120000 DTEND;TZID=America/New_York:20221012T130000 LOCATION:Hackerman B17 SEQUENCE:0 SUMMARY:LCSR Seminar: Student Seminar URL:https://lcsr.jhu.edu/events/student-seminar/ X-COST-TYPE:free END:VEVENT BEGIN:VEVENT UID:ai1ec-13110@lcsr.jhu.edu DTSTAMP:20240319T091411Z CATEGORIES: CONTACT:Ashley Moriarty\; amoriar2@jhu.edu DESCRIPTION:
\n
Abstract:
\nWhen a flapping bat p ropels through its fluidic environment\, it creates periodic air jets in t he form of wake structures downstream of its flight path. The animal’s rem arkable dexterity to quickly manipulate these wakes with fine-grained\, fa st body adjustments is key to retaining the force-moment needed for an all -time controllable flight\, even near stall conditions\, sharp turns\, and heel-above-head maneuvers. We refer to bats’ locomotion based on dexterou sly manipulating the fluidic environment through dynamically versatile win g conformations as dynamic morphing wing flight.
\nIn this talk\, I will describe some of the challenges facing the design and control of dyna mic morphing Micro Aerial Vehicles (MAV) and report our latest morphing fl ying robot design called Aerobat. Dynamic morphing is the defining charact eristic of bat locomotion and is key to their agility and efficiency. Unli ke a jellyfish whose body conformations are fully dominated by its passive dynamics\, a bat employs its active and passive dynamics to achieve dynam ic morphing within its gaitcycles with a notable degree of control over jo int movements. Copying bats’ morphing wings has remained an open engineeri ng problem due to a classical robot design challenge: having many active c oordinates in MAVs is impossible because of prohibitive design restriction s such as limited payload and power budget. I will propose a framework bas ed on integrating low-power\, feedback-driven components within computatio nal structures (mechanical structures with computational resources) to add ress two challenges associated with gait generation and regulation. We cal l this framework Morphing via Integrated Mechanical Intelligence and Contr ol (MIMIC). Based on this framework\, my team at SiliconSynapse Laboratory at Northeastern University has copied bat dynamically versatile wing conf ormations in untethered flight tests.
\n\n
Bio:
\nAlire za Ramezani is an assistant professor at the Department of Electrical & Co mputer Engineering at Northeastern University (NU). Before joining NU in 2 018\, he was a post-doc at Caltech’s Division of Engineering and Applied S cience. He received his Ph.D. degree in Mechanical Engineering from the Un iversity of Michigan\, Ann Arbor\, with Jessy Grizzle. His research intere sts are the design of bioinspired robots with nontrivial morphologies (hig h degrees of freedom and dynamic interactions with the environment)\, anal ysis\, and nonlinear\, closed-loop feedback design of locomotion systems. His designs have been featured in high-impact journals\, including two cov er articles in Science Robotics Magazine and research highlights in Nature . Alireza has received NASA’s Space Technology Mission Directorate’s Progr am Award in designing bioinspired locomotion systems for the exploration o f the Moon and Mars craters two times. He is the recipient of Caltech’s Je t Propulsion Lab (JPL) Faculty Research Program Position. Alireza’s resear ch has been covered by over 200 news outlets\, including The New York Time s\, The Wall Street Journal\, The Associated Press\, CNN\, NBC\, and Euron ews. Currently\, he is leading a $1 Million NSF project to design and cont rol bat-inspired MAVs in the confined space of sewer networks for monitori ng and inspection.
DTSTART;TZID=America/New_York:20221019T120000 DTEND;TZID=America/New_York:20221019T130000 LOCATION:Hackerman B17 SEQUENCE:0 SUMMARY:LCSR Seminar: Alireza Ramezani “Bat-inspired Dynamic Morphing Wing Flight Through Morphology and Control Design” URL:https://lcsr.jhu.edu/events/alireza-ramezani/ X-COST-TYPE:free END:VEVENT BEGIN:VEVENT UID:ai1ec-13115@lcsr.jhu.edu DTSTAMP:20240319T091411Z CATEGORIES: CONTACT:Ashley Moriarty\; amoriar2@jhu.edu DESCRIPTION:\n
Abstract:
\nI will present a bio- inspired fish simulation platform\, which we call “Foids”\, to generate re alistic synthetic datasets for an use in computer vision algorithm trainin g. This is a first-of-its-kind synthetic dataset platform for fish\, which generates all the 3D scenes just with a simulation. One of the major chal lenges in deep learning based computer vision is the preparation of the an notated dataset. It is already hard to collect a good quality video datase t with enough variations\; moreover\, it is a painful process to annotate a sufficiently large video dataset frame by frame. This is especially true when it comes to a fish dataset because it is difficult to set up a camer a underwater and the number of fish (target objects) in the scene can rang e up to 30\,000 in a fish cage on a fish farm. All of these fish need to b e annotated with labels such as a bounding box or silhouette\, which can t ake hours to complete manually\, even for only a few minutes of video. We solve this challenge by introducing a realistic synthetic dataset generati on platform that incorporates details of biology and ecology studied in th e aquaculture field. Because it is a simulated scene\, it is easy to gener ate the scene data with annotation labels from the 3D mesh geometry data a nd transformation matrix. To this end\, we develop an automated fish count ing system utilizing the part of synthetic dataset that shows comparable c ounting accuracy to human eyes\, which reduces the time compared to the ma nual process\, and reduces physical injuries sustained by the fish.
\n< p> \nBio: Masaki Nakada obtained a master degree in physics at Wase da University in Japan. Then\, he finished PhD in computer science at UCLA and worked as a postdoc for another year\, where he published a series of scientific papers. (https://w ww.masakinakada.com/) He devoted more than 10 years in the research of artificial life\, specifically in the area of biomechanical human simulat ion with musculoskeletal models\, neuromuscular controllers\, and biomimet ic vision. Previously\, he worked for Intel as a software engineer. He rec eived MIT Technology Review Innovator Award Under 35\, Forbes Next 1000\, Institute for Digital Research and Education Postdoctoral Scholar Award\, Siggraph Thesis Fast Forward Honorable mention\, TEEC Cup North American Entrepreneurship Competition in Silicon Valley\, Japan Student Services Or ganization Fellowship\, Rotary Ambassadorial Fellowship\, Itoh Foundation Fellowship\, Entrepreneurship Foundation Fellowship\, Aoi Foundation Fello wship and winner of several Startup business competition & hackathons. He founded NeuralX\, Inc (https://www.neu ralx.ai/) in 2019 based on the IP he has developed over the decade of research. The company provides an interactive online fitness service Prese nce.fit (https://www.presence.fit/ )\, where it combines the power of human instructor and motion analytics A I\, which enables them to provide highly interactive online fitness experi ence.
DTSTART;TZID=America/New_York:20221026T120000 DTEND;TZID=America/New_York:20221026T130000 LOCATION:Hackerman B17 SEQUENCE:0 SUMMARY:LCSR Seminar: Masaki Nakada “Foids: Bio-Inspired Fish Simulation fo r Generating Synthetic Datasets” URL:https://lcsr.jhu.edu/events/masaki-nakada/ X-COST-TYPE:free END:VEVENT BEGIN:VEVENT UID:ai1ec-13105@lcsr.jhu.edu DTSTAMP:20240319T091411Z CATEGORIES: CONTACT:Ashley Moriarty\; amoriar2@jhu.edu DESCRIPTION:\n
Abstract:
\nThis talk will descri be the robotics and AI activities and projects within JHU/APL’s Research a nd Exploratory Development Department. I will present motivating challenge problems faced by various defense\, military and medical sponsors across a number of government agencies. Further\, I will highlight several resear ch projects we are currently executing in the areas of robot manipulation\ , navigation and human robot interaction. Specifically\, the projects will highlight areas including underwater manipulation\, learned policies for off-road and complex terrain navigation\, human robot interaction\, hetero genous robot teaming\, and fixed wing aerial navigation. Finally\, I will present areas of future research and exploration and possible intersection s with LCSR.
\n\n
Bio:
\nKapil Katyal is a principal re searcher and robotics program manager in the Research and Exploratory Deve lopment Department at JHU/APL. He completed his PhD at JHU advised by Greg Hager on prediction and perception capabilities for robot navigation. He has worked at JHU/APL since 2007 on several projects spanning robot manipu lation\, brain machine interfaces\, vision algorithms for retinal prosthet ics and robot navigation in complex terrains. He holds 5 patents and has c o-authored over 30 publications in areas of robotics and AI.
\n\n
DTSTART;TZID=America/New_York:20221102T120000 DTEND;TZID=America/New_York:20221102T130000 LOCATION:Hackerman B17 SEQUENCE:0 SUMMARY:LCSR Seminar: Kapil Katyal “Robot Manipulation and Navigation Resea rch at JHU/APL” URL:https://lcsr.jhu.edu/events/kapil-katyal/ X-COST-TYPE:free END:VEVENT BEGIN:VEVENT UID:ai1ec-13134@lcsr.jhu.edu DTSTAMP:20240319T091411Z CATEGORIES: CONTACT:Ashley Moriarty\; amoriar2@jhu.edu DESCRIPTION:
\n
Abstract: Robots have begun to transition from assembly lines\, where they are physically separa ted from humans\, to human-populated environments and human-enhancing appl ications\, where interaction with people is inevitable. With this shift\, research in human-robot interaction (HRI) has grown to allow robots to wor k with and around humans on complex tasks\, augment and enhance people\, a nd provide the best support to them. In this talk\, I will provide an over view of the work performed in the HIRO Group and our efforts toward intuit ive\, human-centered technologies for the next generation of robot workers \, assistants\, and collaborators. More specifically\, I will present our research on: a) robots that are safe to people\, b) robots that are capabl e of operating in complex environments\, and c) robots that are good teamm ates. In all\, this research will enable capabilities that were not previo usly possible\, and will impact work domains such as manufacturing\, const ruction\, logistics\, the home\, and health care.
\n\n
DTSTART;TZID=America/New_York:20221109T120000 DTEND;TZID=America/New_York:20221109T130000 LOCATION:Hackerman B17 SEQUENCE:0 SUMMARY:LCSR Seminar: Alessandro Roncone “Robots working with and around pe ople” URL:https://lcsr.jhu.edu/events/alessandro-roncone/ X-COST-TYPE:free END:VEVENT BEGIN:VEVENT UID:ai1ec-13394@lcsr.jhu.edu DTSTAMP:20240319T091411Z CATEGORIES: CONTACT:Ashley Moriarty\; amoriar2@jhu.edu DESCRIPTION:
\n
Abstract: The target o f human flight in space is missions beyond low earth orbit and the Lunar G ateway for deep space exploration and Missions to Mars. Several conditions \, such as the effect of weightlessness and radiations on the human body\, behavioral health decrements\, and communication latency have to be consi dered. Telemedicine and telerobotic applications\, robot-assisted surgery with some hints on experimental surgical procedures carried out in previou s missions\, have to be considered as well. The need for greater crew auto nomy in dealing with health issues is related to the increasing severity o f medical and surgical interventions that could occur in these missions\, and the presence of a highly trained surgeon on board would be recommended . A surgical robot could be a valuable aid but only insofar as it is provi ded with multiple functions\, including the capability to perform certain procedures autonomously. Providing a multi-functional surgical robot is th e new frontier. Research in this field shall be paving the way for the dev elopment of new structured plans for human health in space\, as well as pr oviding new suggestions for clinical applications on Earth.
\n\n
Bio: Dr. Desire Pantalone MD is a general surgeon wi th a particular interest in trauma surgery and emergency surgery. She is a staff surgeon in the Unit of Emergency Surgery and part of the Trauma Tea m of the University Hospital Careggi in Florence. She is also a specialist in General Surgery and Vascular Surgery. She previously was a Research A ssociate at the University of Chicago (IL) (Prof M. Michelassi) for Oncolo gical Surgery and for Liver Transplantation and Hepatobiliary Surgery (Dr. J Emond). She is also an instructor for the Advanced Trauma Operative Man agement (American College of Surgeons Committee for Trauma) and a Fellow o f the American College of Surgeons. She is also a Core Board member respon sible for “Studies on traumatic events and surgery” in the ESA-Topical Tea m on “Tissue Healing in Space: Techniques for promoting and monitoring tis sue repair and regeneration” for Life Science Activities.
\nDTSTART;TZID=America/New_York:20221114T110000 DTEND;TZID=America/New_York:20221114T120000 LOCATION:Malone G33/35 SEQUENCE:0 SUMMARY:Special LCSR Seminar: Desire Pantalone “Robotic Surgery in Space” URL:https://lcsr.jhu.edu/events/desire-pantalone/ X-COST-TYPE:free END:VEVENT BEGIN:VEVENT UID:ai1ec-13129@lcsr.jhu.edu DTSTAMP:20240319T091411Z CATEGORIES: CONTACT:Ashley Moriarty\; amoriar2@jhu.edu DESCRIPTION:
\n
Student 1: Maia Stiber “Support ing Effective HRI via Flexible Robot Error Management Using Natural Human Responses”
\nAbstract: Unexpected robot errors during human -robot interaction are inescapable\; they can occur during any task and do not necessarily fit human expectations of possible errors. When left unma naged\, robot errors’ impact on an interaction harms task performance and user trust\, resulting in user unwillingness to work with a robot. Prior e rror management techniques often do not possess the versatility to appropr iately address robot errors across tasks and error types as they frequentl y use task or error specific information for robust management. In this pr esentation\, I describe my work on exploring techniques for creating flexi ble error management through leveraging natural human responses (social si gnals) to robot errors as input for error detection and classification acr oss tasks\, scenarios\, and error types in physical human-robot interactio n. I present an error detection method that uses facial reactions for real -time detection and temporal localization of robot error during HRI\, a f lexible error-aware framework using traditional and social signal inputs t hat allow for improved error detection\, and an exploration on the effects of robot error severity on natural human responses. I will end my talk by discussing how my current and future work further investigates the use of social signals in the context of HRI for flexible error detection and cla ssification.
\nBio: Maia Stiber is a Ph.D. candidate in the Departme nt of Computer Science\, co-advised by Dr. Chien-Ming Huang and Dr. Russel l Taylor. Her work focuses on leveraging natural human responses to robot errors in an effort to develop flexible error management techniques in sup port of effective human-robot interaction.
\n\n
\n
Stu dent 2: Akwasi Akwaboah “Neuromorphic Cognition and Neural Interfa ces”
\nAbstract: I present research at the Ralph Etienne-Cu mmings-led Computational Senso r-Motor Systems Lab\, Johns Hopkins University on two fronts – (1) Neu romorphic Cognition (NC) focused on the emulation neural physiology at alg orithmic and hardware levels\, and (2) Neural Interfaces with emphasis on electronics for neural MicroElectrode Array (MEA) characterization. The mo tivation for the NC front is as follows. The human brain expends a mere 20 watts in learning and inference\, exponentially lower than state-of-the-a rt large language models (GPT-3 and LaMDA). There is the need to innovate sustainable AI hardware as the 3.4x compute doubling per month has drastic ally outpaced Moore’s law\, i.e.\, a 2-year transistor doubling. Efforts h ere are geared towards realizing biologically plausible learning rules suc h as the Hebb’s rule-based Spike-Timing-Dependent Plasticity (STDP) algori thmically and in correspondingly low-power mixed analog-digital VLSI imple ments. On the same front of achieving a parsimonious artificial intelligen ce\, we are investigating the outcomes of using our models of the primate visual attention to selectively sparsify computation in deep neural networ ks. At the NI front\, we are developing an open-source multichannel potent iostat with parallel data acquisition capability. This work holds implicat ions for rapid characterization and monitoring of neural MEAs often adopte d in neural rehabilitation and in neuroscientific experiments. A standard characterization technique is the Electrochemical Impedance (EI) Spectrome try. However\, the increasing channel counts in state-of-the-art MEAs (100 x and 1000x) imposes the curse of prolonged acquisition time needed for hi gh spectral resolution. Thus\, a truly parallel EI spectrometer made avail able to the scientific community will ameliorate prolonged research time a nd cost.
\nBio: Akwasi Akwaboah joined the Computational Sensory-Motor Systems (CSMS) Lab in Fall 2020 and is working towards his PhD. He received the MSE in Electrical Engineering from Johns Hopkins University\, Baltimore\, MD in Summer 2022 en route to the PhD. He received the B.Sc. degree in Biomedical Engineering (First Class Honors) from the Kwame Nkrumah University of Science and Technology\, Ghana in 2017. He also received the M.S. degree in Electronics Engineering from Norfolk State University\, Norfolk\, VA\, USA in 2020. His master’s thesis there focused on the formulation of a heuristically optimized computational model of a stem cell-derived cardiomyocyte with implications in cardiac safety pharmacology. He subsequently worked at Dr. James Weiland’s BioElectronic Vision Lab at the University of Michigan\, Ann Arbor\, MI\, USA in 2020\, where he collaborated on research in retinal prostheses\, calcium imaging\, and neural electrode characterization. His current interests include neuromorphic circuits and systems\, bio-inspired algorithms\, computational biology\, and neural interfaces. On the lighter side\, Akwasi loves to cook and listen to classical and Afrobeats music. He lives by Marie Curie’s quote – “Nothing in life is to be feared\, it is only to be understood …”
\nDTSTART;TZID=America/New_York:20221116T120000 DTEND;TZID=America/New_York:20221116T130000 LOCATION:Hackerman B17 SEQUENCE:0 SUMMARY:LCSR Seminar: Student Seminar URL:https://lcsr.jhu.edu/events/student-seminar-2/ X-COST-TYPE:free END:VEVENT BEGIN:VEVENT UID:ai1ec-13124@lcsr.jhu.edu DTSTAMP:20240319T091411Z CATEGORIES: CONTACT:Ashley Moriarty\; amoriar2@jhu.edu DESCRIPTION:
\n
Panel Speaker 1: Erin Sutton\, PhD
\nGuidance and Control Engineer at the JHU Applied Physics Laboratory
\nPh.D. Mechanical Engineering 2017\, M.S. Mechanical Engineering 2016
\nErin Sutton is a mechanical engineer at the Johns Hopkins Applied Physics Laboratory. She received a BS in mechanical engineering from the University of Dayton and an MS and a PhD in mechanical engineering from Johns Hopkins University. She spent a year at the Naval Air Systems Command designing flight simulators before joining APL in 2019. Her primary research interest is in enhancing existing guidance and control systems with autonomy\, and her recent projects range from hypersonic missile defense to civil space exploration.
\n\n
Panel Speaker 2: Star Kim\, PhD
\nJob title and affiliation: Management Consultant at McKinsey & Company
\nPh.D. Mechanical Engineering 20 21
\nStar is an Associate at McKinsey & Company\, a global business management consulting firm. At JHU\, she worked on personalizing cardiac surgery by creating patient-specific vascular conduits in Dr. Axel Krieger’s IMERSE lab. She developed virtual reality software for doctors to design and evaluate conduits for each patient. Her team filed a patent and founded a startup together\, which received funding from the State of Maryland. Before joining JHU\, she was at the University of Maryland\, College Park and the U.S. Food and Drug Administration. There\, she developed and tested patient-specific medical devices and systems such as virtual reality mental therapy and orthopedic surgical cutting guides.
\n\n
Senior Robotics and Controls Engineer at Johnson and Johnson\, Robotics and Digital Solutions
\nJHU MSE Robotics 2018\, JHU BS in Biomedical Engineering 2016
\nAt Johnson and Johnson\, Nicole works on the Robotics and Controls team to improve the accuracy of their laparoscopic surgery platform. Before joining J&J\, Nicole worked as a contractor for NASA supporting Gateway and at Think Surgical supporting their next-generation knee arthroplasty robot.
\n\n
Panel Speaker 4: Ryan Keating\, MSE\n
Software Engineer at Nuro
\nBS Mechanical Engineering 2013\, MSE Robotics 2014
\nBio: After finishing my degrees at JHU\, I spent two and a half years working at Carnegie Robotics\, where I was primarily involved in the development of a land-mine sweeping robot and an inertial navigation system. Following a brief stint working at SRI International to prototype a sandwich-making robot system (yes\, really)\, I have been working on the perception team at Nuro for the past four and a half years. I’ve had the opportunity to work on various parts of the perception stack over that time period\, but my largest contributions have been to our backup autonomy system\, our object tracking system\, and the evaluation framework we use to validate changes to the perception system.
DTSTART;TZID=America/New_York:20221130T120000 DTEND;TZID=America/New_York:20221130T130000 LOCATION:Hackerman B17 SEQUENCE:0 SUMMARY:LCSR Seminar: Careers in Robotics: A Panel Discussion With Experts From Industry and Academia URL:https://lcsr.jhu.edu/events/panel-tbd/ X-COST-TYPE:free END:VEVENT BEGIN:VEVENT UID:ai1ec-13401@lcsr.jhu.edu DTSTAMP:20240319T091411Z CATEGORIES: CONTACT:Ashley Moriarty\; amoriar2@jhu.edu DESCRIPTION:\n\n
\n
All LCSR members\, their families\, and significant others are invited to the:
\n\n
Ugly (or normal) Sweater Bash
\nFriday\, December 9th
\n5:00PM-7:00PM
\nGlass Pavilion
\n
You can help by contributing your favorite holiday dish (regional specialties strongly encouraged!) to this pot-luck get-together (you don’t have to bring anything to participate). Main dishes will be provided\, as will plates\, napkins\, utensils\, etc. Click here to sign up
\n
There will a gingerbread decorating contest and priz es for best/ugliest sweater!
\n\n
\n
DTSTART;TZID=America/New_York:20221209T170000 DTEND;TZID=America/New_York:20221209T190000 LOCATION:Levering Hall - Glass Pavilion SEQUENCE:0 SUMMARY:LCSR Winter Potluck/ Ugly Sweater Bash URL:https://lcsr.jhu.edu/events/winter-potluck/ X-COST-TYPE:free X-WP-IMAGES-URL:thumbnail\;https://lcsr.jhu.edu/wp-content/uploads/2022/11/ 2022-Holiday-Potluck.png\;576\;577\,medium\;https://lcsr.jhu.edu/wp-conten t/uploads/2022/11/2022-Holiday-Potluck.png\;576\;577\,large\;https://lcsr. jhu.edu/wp-content/uploads/2022/11/2022-Holiday-Potluck.png\;576\;577\,ful l\;https://lcsr.jhu.edu/wp-content/uploads/2022/11/2022-Holiday-Potluck.pn g\;576\;577 END:VEVENT BEGIN:VEVENT UID:ai1ec-13554@lcsr.jhu.edu DTSTAMP:20240319T091411Z CATEGORIES: CONTACT: DESCRIPTION:
Recovering the Sense of Touch for Robotic Surgery and Surgical Training
\n\n
By Ugur Tumerdem
\nAssistant Professor of Mechanical Engineering at Marmara University
\nVisiting Assistant Professor of Mechanical Engineering at Johns Hopkins University
\nFulbright Visiting Research Scholar 2022/23
\n\n
Abstract
\n\n
While robotic surgery systems have revolutionized the field of minimally invasive surgery in the past 25 years\, their biggest disadvantage since their inception has been the lack of haptic feedback to the surgeon. While teleoperating robotic instruments\, surgeons work without their sense of touch and rely only on visual feedback\, which can result in unwanted complications.
\n\n
In this seminar\, I am going to talk about our recent and ongoing work to recover the lost sense of touch in robotic surgery through new motion control laws\, haptic teleoperation and machine learning algorithms\, as well as novel mechanism design. Major hurdles to providing reliable haptic feedback in robotic surgery systems are the difficulty of obtaining reliable force measurements/estimates from robotic laparoscopic instruments and the lack of transparent teleoperation architectures that can guarantee stability under environment uncertainty or communication delays. I will be talking about our approaches to solving these issues and about our ongoing project to achieve haptic feedback on the da Vinci Research Kit. As an extension of the technology we are developing\, I will also be discussing how the proposed haptic control approaches can be used to connect multiple surgeons through haptic interfaces to enable a new haptic training approach in surgical robotics.
\n\n
Bio
\nUgur Tumerdem is an Assistant Professor of Mechanical Engineering at Marmara University\, Istanbul\, Turkey and a Visiting Professor of Mechanical Engineering at Johns Hopkins University as the recipient of a Fulbright Visiting Research Fellowship in the academic year 2022/2023. Prof. Tumerdem received his B.Sc. in Mechatronics Engineering from Sabanci University\, Istanbul\, Turkey in 2005\, and his M.Sc. and Ph.D. degrees in Integrated Design Engineering from Keio University\, Tokyo\, Japan in 2007 and 2010 respectively. He worked as a postdoctoral researcher at IBM Research – Tokyo in 2011 before joining Marmara University.
\n\n
\n
DTSTART;TZID=America/New_York:20230125T120000 DTEND;TZID=America/New_York:20230125T130000 SEQUENCE:0 SUMMARY:LCSR Seminar: Ugur Tumerdem “Recovering the Sense of Touch for Robo tic Surgery and Surgical Training” URL:https://lcsr.jhu.edu/events/lcsr-seminar-ugar-tumerdem/ X-COST-TYPE:free END:VEVENT BEGIN:VEVENT UID:ai1ec-13586@lcsr.jhu.edu DTSTAMP:20240319T091411Z CATEGORIES: CONTACT:Lydia\; lcsr-gsa@jhu.edu DESCRIPTION:
We are super excited for you to join us for our first LCSR social event of this spring term! We are planning to get together for an ice-skating session on Thursday\, January 26th at 6:00 pm at the JHU ice rink\, followed by an informal happy hour at the Charles Village Pub (we are not providing food or drinks this time). The ice rink on the night of the 26th is dedicated to JHU grad students\, so it’s a good opportunity to mingle with peeps from other departments as well! If you are interested in joining us\, please sign up on this Google form – we will be admitting people on a first-come\, first-served basis.
\n\n
We currently have 27 available tickets open only to LCSR students. However\, you are free to bring in extra guests by signing them up yourselves at this link (please read through JHU’s policy on bringing in non-JHU-affiliated guests on their website). We will be meeting up at the Hackerman breezeway.
\n
FAQs:
\n\n
Lastly\, we wanted to emphasize that the aforementioned date is TENTATIVE and weather dependent. Should the clouds bless us with rain on that Thursday\, we will need to postpone the event. We will send an email on Monday\, January 23rd to confirm the final date\, but it will most likely be a Thursday or Friday either the week of January 23 or 30.
\n\n
Looking forward to cruisi ng with you soon ⛸️⛸️!
\nTickets: https://docs.google.com/forms/d/e/1FAIpQLSf35TqIHkRdGBtobJCT-4t5E-Bw8DE6sFffGxIxjF5489vW_Q/viewform.
DTSTART;TZID=America/New_York:20230126T180000 DTEND;TZID=America/New_York:20230126T193000 SEQUENCE:0 SUMMARY:GSA Ice Skating Social URL:https://lcsr.jhu.edu/events/gsa-ice-skating-social/ X-COST-TYPE:external X-TAGS;LANGUAGE=en-US:gsa X-TICKETS-URL:https://docs.google.com/forms/d/e/1FAIpQLSf35TqIHkRdGBtobJCT- 4t5E-Bw8DE6sFffGxIxjF5489vW_Q/viewform END:VEVENT BEGIN:VEVENT UID:ai1ec-13522@lcsr.jhu.edu DTSTAMP:20240319T091411Z CATEGORIES: CONTACT:Christy Brooks\; cbrook53@jhu.edu DESCRIPTION:\n
Abstract:
\nDr. Dana R. Yoerger
\nSenior Scientist
\nDept of Applied Ocean Physics and Engineering
\nWoods Hole Oceanographic Institution
\n\n
In the past two decades\, engineers and scientists have used robots to study basic processes in the deep ocean including the Mid-Ocean Ridge\, coral habitats\, volcanoes\, and the deepest trenches. We have also used such vehicles to investigate the environmental impact of the Deepwater Horizon oil spill and to investigate ancient and modern shipwrecks. More recently\, we are expanding our efforts to include the mesopelagic or “twilight zone”\, which extends vertically in the ocean from about 200 to 1000 m\, where sunlight ceases to penetrate. This regime is particularly under-explored and poorly understood\, due in large part to the logistical and technological challenges in accessing it. However\, knowledge of this vast region is critical for many reasons\, including understanding the global carbon cycle – and Earth’s climate – and for managing biological resources. This talk will show results from our past expeditions and look to future challenges.
\n\n
Bio:
\nDr. Dana Yoerger is a Senior Scientist at the Woods Hole Oceanographic Institution and a researcher in robotics and autonomous vehicles. He supervises the research and academic program of graduate students studying oceanographic engineering through the MIT/WHOI Joint Program in the areas of control\, robotics\, and design. Dr. Yoerger has been a key contributor to the remotely-operated vehicle Jason\; to the Autonomous Benthic Explorer known as ABE\; to the autonomous underwater vehicle Sentry\; to the hybrid remotely operated vehicle Nereus\, which reached the bottom of the Mariana Trench in 2009\; and most recently to Mesobot\, a hybrid robot for midwater exploration. Dr. Yoerger has gone to sea on over 90 oceanographic expeditions exploring the Mid-Ocean Ridge\, mapping underwater seamounts and volcanoes\, surveying ancient and modern shipwrecks\, studying the environmental effects of the Deepwater Horizon oil spill\, and locating the Voyage Data Recorder from the merchant vessel El Faro. His current research focuses on robots for exploring the midwater regions of the world’s ocean. Dr. Yoerger has served on several National Academies committees and is a member of the Research Board of the Gulf of Mexico Research Initiative. He has a PhD in mechanical engineering from the Massachusetts Institute of Technology and is a Fellow of the IEEE.
\n\n
\n
DTSTART;TZID=America/New_York:20230201T120000 DTEND;TZID=America/New_York:20230201T130000 LOCATION:Hackerman B17 SEQUENCE:0 SUMMARY:LCSR Seminar: Dana Yoerger “Recent Results and Future Challenges f or Autonomous Underwater Vehicles in Ocean Exploration” URL:https://lcsr.jhu.edu/events/lcsr-seminar-tbd-8/ X-COST-TYPE:free END:VEVENT BEGIN:VEVENT UID:ai1ec-13562@lcsr.jhu.edu DTSTAMP:20240319T091411Z CATEGORIES: CONTACT: DESCRIPTION:
Bagels!
DTSTART;TZID=America/New_York:20230206T103000 DTEND;TZID=America/New_York:20230206T130000 SEQUENCE:0 SUMMARY:GSA: Bagel Day URL:https://lcsr.jhu.edu/events/gsa-bagel-day/ X-COST-TYPE:free X-TAGS;LANGUAGE=en-US:gsa END:VEVENT BEGIN:VEVENT UID:ai1ec-13530@lcsr.jhu.edu DTSTAMP:20240319T091411Z CATEGORIES: CONTACT:Christy Brooks\; cbrook53@jhu.edu DESCRIPTION:\n
Mark Savage is the Johns Hopkins Life Design Educator for Engineering Masters Students\, advising on all aspects of career development and the internship/job search\, with the Handshake Career Management System as a necessary tool. Look for weekly newsletters to soon be emailed to Homewood WSE Masters Students on Sunday nights.
\n\n
\n
DTSTART;TZID=America/New_York:20230208T120000 DTEND;TZID=America/New_York:20230208T130000 LOCATION:Hackerman B17 SEQUENCE:0 SUMMARY:LCSR Seminar: Mark Savage “Resumes” URL:https://lcsr.jhu.edu/events/lcsr-seminar-tbd-3/ X-COST-TYPE:free END:VEVENT BEGIN:VEVENT UID:ai1ec-13532@lcsr.jhu.edu DTSTAMP:20240319T091411Z CATEGORIES: CONTACT:Christy Brooks\; cbrook53@jhu.edu DESCRIPTION:
\n
\n
Abstract:
\nAll models are wrong\, and too many are directed inward. The Internal Model Principle of control engineering directs our attention (and modeling proficiency) to what makes the world around us patterned and predictable. It says that driving a model of that patterned or predictable behavior in a feedback loop is the only way to achieve perfect tracking or disturbance rejection. In the spirit of “some models are useful”\, I will present a control system model of humans tracking moving targets on a screen using a mouse and cursor. Simple analyses reveal this controller’s robustness to visual blanking\, and experiments (even experiments conducted remotely during the pandemic) provide ample support. Extensions that combine feedforward and feedback control complete the picture and complement existing literature in human motor behavior\, most of which is focused on modeling the system under control rather than the environment.
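The Internal Model Principle in this abstract can be sketched in a few lines. This is a hedged illustration under assumed dynamics\, not the speaker's model of human tracking: for a discrete-time plant with a constant disturbance\, only a controller that embeds a model of the signal class (an integrator\, for constant references and disturbances) drives the tracking error to zero\; proportional feedback alone leaves a steady-state error. The plant\, gains\, and function name `simulate` are all hypothetical.

```python
def simulate(ki, kp, steps=2000, r=1.0, d=0.3):
    """Track constant reference r on plant y[k+1] = 0.9*y[k] + u[k] + d,
    where d is a constant disturbance. Returns the final tracking error."""
    y, z = 0.0, 0.0              # plant output, integrator state
    for _ in range(steps):
        e = r - y                # tracking error
        z += e                   # internal model of a constant signal
        u = kp * e + ki * z      # PI control; ki = 0 disables the model
        y = 0.9 * y + u + d
    return r - y

err_with_model = simulate(ki=0.05, kp=0.4)   # integrator in the loop
err_without    = simulate(ki=0.0,  kp=0.4)   # proportional only

# The internal-model controller drives the error to ~0;
# the proportional-only controller settles with a residual error.
print(err_with_model, err_without)
```

The design choice mirrors the principle: the integrator is exactly a generator of the constant-signal class being tracked and rejected\, which is why it\, and not a larger gain\, eliminates the steady-state error.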
\nBio:
\nBrent Gillespie is a Professor of Mechanical Engineering and Robotics at the University of Michigan. He received a Bachelor of Science in Mechanical Engineering from the University of California\, Davis in 1986\, a Master of Music from the San Francisco Conservatory of Music in 1989\, and a Ph.D. in Mechanical Engineering from Stanford University in 1996. His research interests include haptic interfaces\, human motor behavior\, haptic shared control\, and robot-assisted rehabilitation after neurological injury. Prof. Gillespie’s awards include the Popular Science Invention Award (2016)\, the University of Michigan Provost’s Teaching Innovation Prize (2012)\, and the Presidential Early Career Award for Scientists and Engineers (2001).
\nDTSTART;TZID=America/New_York:20230215T120000 DTEND;TZID=America/New_York:20230215T130000 LOCATION:Hackerman B17 SEQUENCE:0 SUMMARY:LCSR Seminar: Brent Gillespie “Predicting Human Behavior in Predict able Environments Using the Internal Model Principle” URL:https://lcsr.jhu.edu/events/lcsr-seminar-brent-gillespie-2/ X-COST-TYPE:free END:VEVENT BEGIN:VEVENT UID:ai1ec-13534@lcsr.jhu.edu DTSTAMP:20240319T091411Z CATEGORIES: CONTACT:Christy Brooks\; cbrook53@jhu.edu DESCRIPTION:
\n
\n
Abstract:
\nOver 70% of our world is underwater\, but less than 1% of the world’s oceans have been mapped at resolutions finer than 100 m per pixel. Regular inspection\, mapping\, and data collection in marine environments is essential for a whole host of reasons\, including gaining a scientific understanding of our planet\, civil infrastructure maintenance\, and safe navigation. However\, manual inspection and data collection using divers is expensive\, dangerous\, time-consuming\, and tedious work.
\n\n
In this talk\, I will discuss the use of autonomous underwater vehicles (AUVs) and autonomous surface vessels (ASVs) to automatically and intelligently map\, inspect\, and collect information in unstructured marine environments. In particular\, we will discuss the problems present in this space as well as the contributions my lab is making towards addressing these problems\, including i) the development of a general-purpose marine robotics testbed at BYU\, ii) the development of a marine robotics simulator called HoloOcean (https://holoocean.readthedocs.io/en/stable/)\, iii) advancements in marine robotic localization using Lie groups\, and iv) preliminary results towards expert-guided topic modeling and intelligent data collection.
\n\n
Bio:
\nDr. Joshua Mangelson holds PhD and Masters degrees in Robotics from the University of Michigan. After completing his degree\, he served as a post-doctoral fellow at Carnegie Mellon University before joining the Electrical and Computer Engineering faculty at Brigham Young University in 2020. He has demonstrated expertise in robotic perception\, mapping\, and localization\, with a particular focus on marine robotics. He has extensive experience leading marine robotic field trials in various locations around the world\, including San Diego\, Hawaii\, Boston\, northern Michigan\, and Utah. In 2018\, his work on multi-robot mapping received the Best Multi-Robot Paper Award at the IEEE ICRA conference and 1st place in the IEEE OCEANS Student Poster Competition. He is currently serving as an associate editor for The International Journal of Robotics Research (IJRR) and the IEEE/RSJ IROS Conference.
\nDTSTART;TZID=America/New_York:20230222T120000 DTEND;TZID=America/New_York:20230222T130000 LOCATION:Hackerman B17 SEQUENCE:0 SUMMARY:LCSR Seminar: Joshua Mangelson “Steps Towards Intelligent Autonomou s Underwater Inspection and Data Collection” URL:https://lcsr.jhu.edu/events/joshua-mangelson/ X-COST-TYPE:free END:VEVENT BEGIN:VEVENT UID:ai1ec-13536@lcsr.jhu.edu DTSTAMP:20240319T091411Z CATEGORIES: CONTACT:Ashley Moriarty\; amoriar2@jhu.edu DESCRIPTION:
\n
Ulas Berk Karli and Shiye (Sally) Cao “What if it is wrong: effects of power dynamics and trust repair strategy on trust and compliance in HRI.”
\nAbstract: Robotic systems designed to work alongside people are susceptible to technical and unexpected errors. Prior work has investigated a variety of strategies aimed at repairing people’s trust in the robot after its erroneous operations. In this work\, we explore the effect of post-error trust repair strategies (promise and explanation) on people’s trust in the robot under varying power dynamics (supervisor and subordinate robot). Our results show that\, regardless of the power dynamics\, promise is more effective at repairing user trust than explanation. Moreover\, people found a supervisor robot with verbal trust repair to be more trustworthy than a subordinate robot with verbal trust repair. Our results further reveal that people are prone to complying with the supervisor robot even if it is wrong. We discuss the ethical concerns in the use of a supervisor robot and potential interventions to prevent improper compliance in users for more productive human-robot collaboration.
\n\n
Bio: Ulas Berk Karli is an MSE student in Robotics at LCSR\, Johns Hopkins University. He received the Bachelor of Science degree in Mechanical Engineering\, with a double major in Computer Engineering\, from Koc University\, Istanbul in 2021. His research interests are Human-Robot Collaboration and Robot Learning for HRI.
\nShiye Cao is a first-year Ph.D. student in the Department of Computer Science\, co-advised by Dr. Chien-Ming Huang and Dr. Anqi Liu. She received the Bachelor of Science degree in Computer Science\, with a second major in Applied Mathematics and Statistics\, from Johns Hopkins University in 2021\, and the Master of Science in Engineering in Computer Science from Johns Hopkins University in 2022. Her work focuses on user trust and reliance in human-machine collaborative tasks.
\n\n
\n
Eugene Lin “Robophysical modeling of spider vibration sensing of prey on orb webs”
\nAbstract: Orb-weaving spiders are functionally blind and detect prey-generated web vibrations through vibration sensors at their leg joints to locate and identify prey caught in their (near) planar webs. Previous studies focused on how spiders use web geometry\, silk properties\, and web pre-tension to modulate vibration sensing. Spiders can also dynamically adjust their posture while sensing prey\, which may be a form of active sensing (Hung\, Corver\, Gordus\, 2022\, APS March Meeting). However\, whether this is true and how it works is poorly understood\, due to the difficulty of measuring the dynamics of the entire prey-web-spider interaction system all at once. Here\, we developed a robophysical model of the system to test this hypothesis of active sensing and discover its principles. Our model consists of a vibrating prey robot and a spider robot that can adjust its posture\, with torsional springs at leg joints and accelerometers to measure joint vibration. Both robots are attached to a physical web made of cords with qualitatively similar properties to real spider web threads. Load cells measure web pre-tension and a high-speed camera system measures web vibrations and robot movement. Preliminary results showed vibration attenuation through the web from the prey robot. We are currently studying the complex effects of the spider robot’s dynamic posture change on vibration propagation across the web and leg joints\, by systematically varying the parameters of prey robot vibration\, spider robot leg posture\, and web pre-tension.
\n\n
Bio: Eugene Lin is a third-year PhD student in Dr. Chen Li’s lab (Terradynamics Lab). His work focuses on understanding environmental sensing on suspended\, sparse terrain. He received a B.S. in Mechanical Engineering at the University of California\, San Diego. He recently presented this work at the annual SICB conference and will present it again at the annual APS March Meeting.
\n\n
\n
Aishwarya Pantula “Pick a Side: Untethered Gel Crawlers That Can Break Symmetry”
\nAbstract: The development of untethered soft crawling robots programmed to respond to environmental stimuli and precisely maneuverable across size scales has been paramount to the fields of soft robotics\, drug delivery\, and autonomous smart devices. Of particular relevance are reversible thermoresponsive hydrogels\, which swell and shrink in the temperature range of 30-60 °C\, for operating such untethered soft robots in human physiological and ambient conditions. While crawling has been demonstrated by thermoresponsive hydrogels\, they need surface modifications in the form of ratchets\, asymmetric patterning\, or constraints to achieve unidirectional motion.
\nHere we demonstrate and validate a new mechanism for untethered\, unidirectional crawling for multisegmented gel crawlers built from an active thermoresponsive poly(N-isopropylacrylamide) (pNIPAM) and passive polyacrylamide (pAAM) on flat unpatterned surfaces. By connecting bilayers of different geometries and thicknesses using a centrally suspended gel linker\, we create a morphological gradient along the fore-aft axis\, which leads to an asymmetry in the contact forces during the swelling and deswelling of our crawler. We thoroughly explain our mechanism using experiments and finite element simulations\, and demonstrate experimentally that we can tune the generated asymmetry and\, in turn\, increase the displacement of the crawler by varying linker stiffness\, morphology\, and the number of bilayer segments. We believe this mechanism can be widely applied across fields of study to create the next generation of autonomous shape-changing and smart locomotors.
\nBio: Aishwarya is a 4th-year Ph.D. candidate in the lab of Dr. David Gracias at Johns Hopkins University\, USA. Her research focuses on exploring smart materials like stimuli-responsive hydrogels\, combining them with novel patterning methods like 3D/4D printing\, imprint molding\, lithography\, etc.\, and using different mechanical design strategies to create untethered biomimetic actuators and locomotors across size scales for soft robotics and biomedical devices.
\n\n
\n
Maia Stiber “On using social signals to enable flexible error-aware HRI.”
\nAbstract: Prior error management techniques often do not possess the versatility to appropriately address robot errors across tasks and scenarios. Their fundamental framework involves explicit\, manual error management and implicit\, domain-specific-information-driven error management\, tailoring their response to specific interaction contexts. We present a framework for approaching error-aware systems by adding implicit social signals as another information channel to create more flexibility in application. To support this notion\, we introduce a novel dataset (composed of three data collections) with a focus on understanding natural facial action unit (AU) responses to robot errors during physical human-robot interactions\, varying across task\, error\, people\, and scenario. Analysis of the dataset reveals that\, through the lens of error detection\, using AUs as input to error management affords flexibility to the system and has the potential to improve the error detection response rate. In addition\, we provide an example real-time interactive robot error management system using the error-aware framework.
\n\n
Bio: Maia Stiber is a 4th-year Ph.D. candidate in the Department of Computer Science\, co-advised by Dr. Chien-Ming Huang and Dr. Russell Taylor. She received a B.S. in Computer Science from Caltech in 2019 and an M.S.E. in Computer Science from Johns Hopkins University in 2021. Her work focuses on leveraging natural human responses to robot errors in an effort to develop flexible error management techniques in support of effective human-robot interaction.
\n\n
Victor Antony “Co-designing with older adults\, for older adults: robots to promote physical activity.”
\nAbstract: Lack of physical activity has severe negative health consequences for older adults and limits their ability to live independently. Robots have been proposed to help engage older adults in physical activity (PA)\, albeit with limited success. There is a lack of robust understanding of older adults’ needs and wants from robots designed to engage them in PA. In this paper\, we report on the findings of a co-design process where older adults\, physical therapy experts\, and engineers designed robots to promote PA in older adults. We found a variety of motivators for and barriers against PA in older adults\; we then conceptualized a broad spectrum of possible robotic support and found that robots can play various roles to help older adults engage in PA. This exploratory study elucidated several overarching themes and emphasized the need for personalization and adaptability. This work highlights key design features that researchers and engineers should consider when developing robots to engage older adults in PA\, and underscores the importance of involving various stakeholders in the design and development of assistive robots.
\n\n
Bio: Victor Antony i s a second-year Ph.D. student in the Department of Computer Science\, advi sed by Dr. Chien-Ming Huang. He received the Bachelor of Science degree in Computer Science from the University of Rochester in 2021. His work focus es on Social Robots for well-being.
\nDTSTART;TZID=America/New_York:20230301T120000 DTEND;TZID=America/New_York:20230301T130000 LOCATION:Hackerman B17 SEQUENCE:0 SUMMARY:LCSR Seminar: Student Seminars URL:https://lcsr.jhu.edu/events/lcsr-seminar-student/ X-COST-TYPE:free END:VEVENT BEGIN:VEVENT UID:ai1ec-13540@lcsr.jhu.edu DTSTAMP:20240319T091411Z CATEGORIES: CONTACT:Christy Brooks DESCRIPTION:
Allison Okamura: “Wearable Haptic Devices for Ubiquitous Communication”
\nAbstract:
\nHaptic devices allow touch-based information transfer between humans and intelligent systems\, enabling communication in a salient but private manner that frees other sensory channels. For such devices to become ubiquitous\, their physical and computational aspects must be intuitive and unobtrusive. The amount of information that can be transmitted through touch is limited in large part by the location\, distribution\, and sensitivity of human mechanoreceptors. Not surprisingly\, many haptic devices are designed to be held or worn at the highly sensitive fingertips\, yet stimulation using a device attached to the fingertips precludes natural use of the hands. Thus\, we explore the design of a wide array of haptic feedback mechanisms\, ranging from devices that can be actively touched by the fingertips to multi-modal haptic actuation mounted on the arm. We demonstrate how these devices are effective in virtual reality\, human-machine communication\, and human-human communication.
\n\n
Bio:
\nAllison Okamura received the BS degree from the University of California at Berkeley\, and the MS and PhD degrees from Stanford University. She is the Richard W. Weiland Professor of Engineering at Stanford University in the mechanical engineering department\, with a courtesy appointment in computer science. She is an IEEE Fellow\, co-general chair of the 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems\, and a deputy director of the Wu Tsai Stanford Neurosciences Institute. Her awards include the IEEE Engineering in Medicine and Biology Society Technical Achievement Award\, the IEEE Robotics and Automation Society Distinguished Service Award\, and being named a Duca Family University Fellow in Undergraduate Education. Her academic interests include haptics\, teleoperation\, virtual reality\, medical robotics\, soft robotics\, rehabilitation\, and education. For more information\, please see the CHARM Lab website.
DTSTART;TZID=America/New_York:20230308T120000 DTEND;TZID=America/New_York:20230308T130000 LOCATION:Hackerman B17 SEQUENCE:0 SUMMARY:LCSR Seminar: Allison Okamura “Wearable Haptic Devices for Ubiquito us Communication” URL:https://lcsr.jhu.edu/events/allison-okamura/ X-COST-TYPE:free END:VEVENT BEGIN:VEVENT UID:ai1ec-13542@lcsr.jhu.edu DTSTAMP:20240319T091411Z CATEGORIES: CONTACT:Christy Brooks DESCRIPTION:\n
Abstract: From genetic engineering to direct-to-consumer neurotechnology to ChatGPT\, it is a standard refrain that science outpaces the development of ethical norms and governance. Further\, technologies increasingly cross boundaries from medicine to the consumer market to law enforcement and beyond\, in ways that our existing governance structures are not equipped to address. Finally\, our standard governance approaches to addressing ethical issues related to new technologies fail to address population and societal-level impacts. This talk will demonstrate the above through a series of examples and describe ongoing work by the US National Academies and others to address these challenges.
\n\n
Bio: Debra JH Mathews\, PhD\, MA\, is the Associate Director for Research and Programs for the Johns Hopkins Berman Institute of Bioethics\, and an Associate Professor in the Department of Genetic Medicine\, Johns Hopkins University School of Medicine. Within the JHU Institute for Assured Autonomy\, Dr. Mathews serves as the Ethics & Governance Lead. Her academic work focuses on ethics and policy issues raised by emerging technologies\, with particular focus on genetics\, stem cell science\, neuroscience\, synthetic biology\, and artificial intelligence. Dr. Mathews helped found and lead The Hinxton Group\, an international collective of scientists\, ethicists\, policymakers and others\, interested in ethical and well-regulated science\, and whose work focuses primarily on stem cell research. She has been a member of the Board of Directors of the International Neuroethics Society since 2015\, and is currently President-Elect. In addition to her academic work\, Dr. Mathews has spent time at the Genetics and Public Policy Center\, the US Department of Health and Human Services\, the Presidential Commission for the Study of Bioethical Issues\, and the National Academy of Medicine working in various capacities on science policy.
\nDr. Mathews earned her PhD in genetics from Case Western Reserve University\, as well as a concurrent Master’s in bioethics. She completed a Post-Doctoral Fellowship in genetics at Johns Hopkins\, and the Greenwall Fellowship in Bioethics and Health Policy at Johns Hopkins and Georgetown Universities.
\n\n
DTSTART;TZID=America/New_York:20230315T120000 DTEND;TZID=America/New_York:20230315T130000 LOCATION:Hackerman B17 SEQUENCE:0 SUMMARY:LCSR Seminar: Debra Mathews “Ethics and Governance of Emerging Technologies” URL:https://lcsr.jhu.edu/events/lcsr-seminar-debra-mathews-2/ X-COST-TYPE:free END:VEVENT BEGIN:VEVENT UID:ai1ec-13422@lcsr.jhu.edu DTSTAMP:20240319T091411Z CATEGORIES: CONTACT:Ashley Moriarty\; 410-516-6841\; ashleymoriarty@jhu.edu DESCRIPTION:\n
Friday 3/24 | \nLocation: Glass Pavilion – Levering Hall | \n
8:30 AM | \nRegistration Open and Breakfast | \n
9:00 AM | \nWelcome | \n
9:05 AM | \nIntroduction to LCSR – Russell H. Taylor\, Director | \n
9:20 AM | \nLCSR Education – Louis Whitcomb\, Deputy Director | \n
9:25 AM | \nIAA – James Bellingham and Anton Dahbura | \n
9:30 AM | \nStudent Research Talk – Max Li | \n
9:42 AM | \nStudent Research Talk – Divya Ramesh | \n
9:55 AM | \n|
10:07 AM | \nStudent Research Talk – Di Cao | \n
10:20 AM | \nCoffee Break | \n
10:40 AM | \nJohns Hopkins Tech Ventures – Seth Zonies | \n
10:55 AM | \nIndustry Talk – Ankur Kapoor\, Siemens | \n
11:15 AM | \nIndustry Talk – William Tan\, GE | \n
11:35 AM | \nNew LCSR Faculty – Alejandro Martin-Gomez | \n
11:55 AM | \nClosing – Russell H. Taylor\, Director | \n
12:00 PM | \nLunch – Resume Roundtables | \n
1:30-4:00 PM | \nPoster and Demo Session (Hackerman Hall) | \n
1:45-3:45 PM | \nGuided Krieger Hall Tours (meet outside Hackerman 134) | \n
4:00-5:00 PM | \nAlumni Reception (Shriver Hall – Clipper Room) | \n