The Spring 2021 Final Project Presentation Session for Computer Integrated Surgery II will be held Thursday, May 6th, from 18:00 to 21:00 Eastern Time via Zoom. This year we have 18 amazing projects supported by graduate students, faculty, surgeons, and companies. We are excited to invite you to join our event and see what the students have achieved through their efforts over the past semester.
Connection information
Join Zoom Meeting: https://wse.zoom.us/j/635091574
Meeting ID: 635 091 574
Password: 001987
Agenda
– 18:00—18:10 Arrival and greetings
– 18:10—18:30 1-minute teaser presentations
– 18:30—20:30 Interactive session in breakout rooms
– 20:30—20:40 Reconvene and announce finalists
– 20:40—20:55 Presentations by finalists
– 20:55—21:00 Announcement of Best Project winner
The Spring 2021 Final Project Presentation Session for Deep Learning will be held Tuesday, May 11th, from 9:00 am to 12:00 pm Eastern Time via Zoom. This year we have many amazing projects to review and celebrate. We are excited to invite you to join our event and see what the students have achieved through their efforts over the past semester.
Join Zoom Meeting
https://wse.zoom.us/j/98898500603?pwd=dlY2RHlUZXhFUXErK1J6bHcxVUNGdz09
The closing ceremony of the Computational Sensing and Medical Robotics (CSMR) REU is set to take place Friday, August 6, from 9 am until 3 pm at this Zoom link. Seventeen undergraduate students from across the country are eager to share the culmination of their work over the past 10 weeks of the summer.
The schedule for the day is listed below, and each presentation is described in more detail in the program. The event is open to the public, and no RSVP is necessary.
2021 REU Final Presentations

Time | Presenter | Project Title | Faculty Mentor | Student/Postdoc/Research Engineer Mentors
9:00 | Ben Frey | Deep Learning for Lung Ultrasound Imaging of COVID-19 Patients | Muyinatu Bell | Lingyi Zhao
9:15 | Camryn Graham | Optimization of a Photoacoustic Technique to Differentiate Methylene Blue from Hemoglobin | Muyinatu Bell | Eduardo Gonzalez
9:30 | Ariadna Rivera | Autonomous Quadcopter Flying and Swarming | Enrique Mallada | Yue Shen
9:45 | Katie Sapozhnikov | Force Sensing Surgical Drill | Russell Taylor | Anna Goodridge
10:00 | Savannah Hays | Evaluating SLANT Brain Segmentation using CALAMITI | Jerry Prince | Lianrui Zuo
10:15 | Ammaar Firozi | Robustness of Deep Networks to Adversarial Attacks | René Vidal | Kaleab Kinfu, Carolina Pacheco
10:30 | Break | | |
10:45 | Karina Soto Perez | Brain Tumor Segmentation in Structural MRIs | Archana Venkataraman | Naresh Nandakumar
11:00 | Jonathan Mi | Design of a Small Legged Robot to Traverse a Field of Multiple Types of Large Obstacles | Chen Li | Ratan Othayoth, Yaqing Wang, Qihan Xuan
11:15 | Arko Chatterjee | Telerobotic System for Satellite Servicing | Peter Kazanzides, Louis Whitcomb, Simon Leonard | Will Pryor
11:30 | Lauren Peterson | Can a Fish Learn to Ride a Bicycle? | Noah Cowan | Yu Yang
11:45 | Josiah Lozano | Robotic System for Mosquito Dissection | Russell Taylor, Iulian Iordachita | Anna Goodridge
12:00 | Zulekha Karachiwalla | Application of Dual Modality Haptic Feedback within Surgical Robotics | Jeremy Brown | |
12:15 | Break | | |
1:00 | James Campbell | Understanding Overparameterization from Symmetry | René Vidal | Salma Tarmoun
1:15 | Evan Dramko | Establishing FDR Control for Genetic Marker Selection | Soledad Villar, Jeremias Sulam | N/A
1:30 | Chase Lahr | Modeling Dynamic Systems Through a Classroom Testbed | Jeremy Brown | Mohit Singhala
1:45 | Anire Egbe | Object Discrimination Using Vibrotactile Feedback for Upper Limb Prosthetic Users | Jeremy Brown | |
2:00 | Harrison Menkes | Measuring Proprioceptive Impairment in Stroke Survivors (Pre-Recorded) | Jeremy Brown | |
2:15 | Deliberations | | |
3:00 | Winner Announced | | |
Mark Savage is the Johns Hopkins Life Design Educator for Engineering Masters Students, advising on all aspects of career development and the internship/job search, with the Handshake Career Management System as an essential tool. Look for weekly newsletters, which will soon be emailed to Homewood WSE master's students on Sunday nights.
Abstract:
Robots currently have the capacity to help people in several fields, including health care, assisted living, and manufacturing, where they must share physical space and actively interact with people in teams. The performance of these teams depends on how fluently all team members can jointly perform their tasks. To be successful within a group, a robot must be able to perceive other members' actions, model the interaction dynamics, predict future actions, and adapt its plans accordingly in real time. In the Collaborative Robotics Lab (CRL), we develop novel perception, prediction, and planning algorithms for robots to fluently coordinate and collaborate with people in complex human environments. In this talk, I will highlight various challenges of deploying robots in real-world settings and present our recent work tackling several of these challenges.
Biography:
Tariq Iqbal is an Assistant Professor of Systems Engineering and Computer Science (by courtesy) at the University of Virginia (UVA). Prior to joining UVA, he was a Postdoctoral Associate in the Computer Science and Artificial Intelligence Lab (CSAIL) at MIT. He received his Ph.D. in CS from the University of California San Diego (UCSD). Iqbal leads the Collaborative Robotics Lab (CRL), which focuses on building robotic systems that work alongside people in complex human environments, such as factories, hospitals, and educational settings. His research group develops artificial intelligence, computer vision, and machine learning algorithms to enable robots to solve problems in these domains.
Abstract: We describe an approach for incorporating prior knowledge into machine learning algorithms. We aim at applications in physics and signal processing in which we know that certain operations must be embedded into the algorithm. Any operation that allows computation of a gradient or sub-gradient towards its inputs is suited for our framework. We derive a maximal error bound for deep nets that demonstrates that inclusion of prior knowledge results in its reduction. Furthermore, we show experimentally that known operators reduce the number of free parameters. We apply this approach to various tasks ranging from computed tomography image reconstruction over vessel segmentation to the derivation of previously unknown imaging algorithms. As such, the concept is widely applicable for many researchers in physics, imaging and signal processing. We assume that our analysis will support further investigation of known operators in other fields of physics, imaging and signal processing.
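As a rough illustration of the known-operator idea (a sketch under assumptions, not the authors' implementation), the example below embeds a fixed, differentiable linear operator `A` inside a tiny model and learns only the remaining parameters; the gradient flows through the known operator via its adjoint `A.T`. The blur matrix and training data are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Known operator: a fixed, differentiable linear map (an assumed 1-D blur matrix).
n = 8
A = np.eye(n) * 0.6 + np.eye(n, k=1) * 0.2 + np.eye(n, k=-1) * 0.2

# Trainable part: a diagonal correction applied before the known operator.
w = np.ones(n)

# Synthetic training pair (illustrative only); the "true" correction is 2.0.
x = rng.normal(size=n)
y_true = A @ (2.0 * x)

def forward(w, x):
    # The known operator A is embedded in the model, not learned.
    return A @ (w * x)

lr = 0.05
for _ in range(400):
    residual = forward(w, x) - y_true
    # Gradient w.r.t. w passes through the known operator via its adjoint A.T.
    grad_w = (A.T @ residual) * x
    w -= lr * grad_w

loss = float(np.mean((forward(w, x) - y_true) ** 2))
print(round(loss, 6))  # the loss shrinks as w approaches the true correction
```

Only the few entries of `w` are free parameters here; everything inside `A` is prior knowledge, which is the sense in which known operators reduce the number of parameters to learn.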
Short Bio: Prof. Dr. Andreas Maier was born on November 26, 1980, in Erlangen. He studied Computer Science, graduated in 2005, and received his PhD in 2009. From 2005 to 2009 he worked at the Pattern Recognition Lab in the Computer Science Department of the University of Erlangen-Nuremberg. His main research subject was medical signal processing of speech data. During this period, he developed the first online speech intelligibility assessment tool, PEAKS, which has been used to analyze over 4,000 patients and control subjects to date.
From 2009 to 2010 he worked on flat-panel C-arm CT as a postdoctoral fellow at the Radiological Sciences Laboratory in the Department of Radiology at Stanford University. From 2011 to 2012 he was an innovation project manager at Siemens Healthcare, responsible for reconstruction topics in the Angiography and X-ray business unit.
In 2012 he returned to the University of Erlangen-Nuremberg as head of the Medical Reconstruction Group at the Pattern Recognition Lab. In 2015 he became professor and head of the Pattern Recognition Lab. Since 2016 he has been a member of the steering committee of the European Time Machine Consortium. In 2018 he was awarded an ERC Synergy Grant, "4D Nanoscope". His current research interests focus on medical imaging, image and audio processing, digital humanities, interpretable machine learning, and the use of known operators.
Abstract:
The unprecedented prediction accuracy of modern machine learning invites its use in a wide range of real-world applications, including autonomous robots, fine-grained computer vision, scientific experimental design, and many others. To create trustworthy AI systems, we must safeguard machine learning methods from catastrophic failures and provide calibrated uncertainty estimates. For example, we must account for uncertainty and guarantee performance in safety-critical systems, such as autonomous driving and health care, before deploying them in the real world. A key challenge in such real-world applications is that the test cases are not well represented by the pre-collected training data. To properly leverage learning in these domains, especially safety-critical ones, we must go beyond the conventional paradigm of maximizing average prediction accuracy, whose generalization guarantees rely on strong distributional relationships between training and test examples.
In this talk, I will describe a distributionally robust learning framework that offers accurate uncertainty estimation and rigorous guarantees under data distribution shift. This framework yields appropriately conservative yet still accurate predictions to guide real-world decision-making and is easily integrated with modern deep learning. I will showcase the practicality of this framework in applications on agile robotic control and computer vision. I will also introduce a survey of other real-world applications that would benefit from this framework for future work.
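One classical ingredient behind guarantees under distribution shift is correcting for covariate shift by importance-weighting the training loss with the density ratio q(x)/p(x). The sketch below illustrates that standard correction (not Prof. Liu's specific framework) on an invented one-dimensional example where the test inputs are shifted and the model is misspecified.

```python
import numpy as np

rng = np.random.default_rng(1)

# Covariate shift: training inputs from p = N(0, 1), test inputs from q = N(1.5, 1).
x_train = rng.normal(0.0, 1.0, size=5000)
x_test = rng.normal(1.5, 1.0, size=5000)
f = lambda x: x ** 2                  # invented nonlinear target function
y_train, y_test = f(x_train), f(x_test)

# Density ratio q(x)/p(x) for these two Gaussians, in closed form:
# exp(-(x-1.5)^2/2) / exp(-x^2/2) = exp(1.5*x - 1.125)
w = np.exp(1.5 * x_train - 1.125)

# Fit a misspecified linear model y ~ a*x, with and without importance weights.
a_plain = np.sum(x_train * y_train) / np.sum(x_train ** 2)
a_shifted = np.sum(w * x_train * y_train) / np.sum(w * x_train ** 2)

mse = lambda a: float(np.mean((y_test - a * x_test) ** 2))
print(mse(a_plain), mse(a_shifted))  # the reweighted fit has lower error under q
```

The unweighted fit is tuned to where the training data are dense, so it degrades badly on the shifted test region; reweighting trades some training-region accuracy for accuracy where the test distribution actually lives.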
Biography:
Anqi (Angie) Liu is an Assistant Professor in the Department of Computer Science at the Whiting School of Engineering of Johns Hopkins University. She is broadly interested in developing principled machine learning algorithms for building more reliable, trustworthy, and human-compatible AI systems in the real world. Her research focuses on enabling machine learning algorithms to be robust to changing data and environments, to provide accurate and honest uncertainty estimates, and to consider human preferences and values in interaction. She is particularly interested in high-stakes applications that concern the safety and societal impact of AI. Previously, she completed a postdoc in the Department of Computing and Mathematical Sciences at the California Institute of Technology. She obtained her Ph.D. from the Department of Computer Science at the University of Illinois at Chicago. She was selected as a 2020 EECS Rising Star. Her publications appear in top machine learning conferences such as NeurIPS, ICML, ICLR, AAAI, and AISTATS.
Abstract:
Deploying autonomous vehicles (AVs) on public roads promises gains in efficiency and safety, and requires intelligent situational awareness. We want autonomous vehicles that can learn to behave in safe and predictable ways and that are capable of evaluating risk, understanding the intent of human drivers, and adapting to different road situations. This talk describes an approach to learning and integrating risk and behavior analysis into the control of autonomous vehicles. I will introduce Social Value Orientation (SVO), which captures how an agent's social preferences and degree of cooperation affect its interactions with other agents by quantifying its selfishness or altruism. SVO can be integrated into control and decision making for AVs. I will present recent examples of self-driving vehicles capable of such adaptation.
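A common way to formalize SVO in this literature is as a preference angle that trades off an agent's own reward against another agent's reward: utility = cos(phi)*r_self + sin(phi)*r_other, with phi = 0 purely egoistic and phi = pi/4 prosocial. The merging scenario and payoff numbers below are hypothetical illustrations, not from the talk.

```python
import math

def svo_utility(r_self, r_other, phi):
    """SVO-weighted utility: phi = 0 is purely selfish, pi/2 purely altruistic."""
    return math.cos(phi) * r_self + math.sin(phi) * r_other

# A hypothetical merging scenario: "yield" costs this agent a little reward
# but helps the other driver; "cut_in" does the opposite.
actions = {"yield": (1.0, 3.0), "cut_in": (2.0, 0.0)}

for name, phi in [("egoistic", 0.0), ("prosocial", math.pi / 4)]:
    best = max(actions, key=lambda a: svo_utility(*actions[a], phi))
    print(name, "chooses", best)
# prints: egoistic chooses cut_in
#         prosocial chooses yield
```

The same payoffs yield different decisions as the angle changes, which is how estimating another driver's SVO lets an AV predict whether that driver will yield.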
Biography:
Daniela Rus is the Andrew (1956) and Erna Viterbi Professor of Electrical Engineering and Computer Science, Director of the Computer Science and Artificial Intelligence Laboratory (CSAIL) at MIT, and Deputy Dean of Research in the Schwarzman College of Computing at MIT. Rus’ research interests are in robotics and artificial intelligence. The key focus of her research is to develop the science and engineering of autonomy. Rus is a Class of 2002 MacArthur Fellow, a fellow of ACM, AAAI and IEEE, a member of the National Academy of Engineering, and of the American Academy of Arts and Sciences. She is a senior visiting fellow at MITRE Corporation. She is the recipient of the Engelberger Award for robotics. She earned her PhD in Computer Science from Cornell University.
Abstract:
Digital cameras have dramatically changed interventional and surgical procedures. Modern operating rooms utilize a range of cameras to minimize invasiveness or provide vision beyond human capabilities in magnification, spectra or sensitivity. Such surgical cameras provide the most informative and rich signal from the surgical site containing information about activity and events as well as physiology and tissue function. This talk will highlight some of the opportunities for computer vision in surgical applications and the challenges in translation to clinically usable systems.
Bio:
Dan Stoyanov is a Professor of Robot Vision in the Department of Computer Science at University College London, Director of the Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), a Royal Academy of Engineering Chair in Emerging Technologies and Chief Scientist at Digital Surgery Ltd. Dan first studied electronics and computer systems engineering at King’s College London before completing a PhD in Computer Science at Imperial College London where he specialized in medical image computing.
Abstract:
I will discuss recent efforts at CinfonIA in enhancing interpretability in deep neural networks through the use of adversarial robustness and multimodal information.
Biography:
Pablo Arbeláez received his PhD with honors in Applied Mathematics from the Université Paris Dauphine in 2005. He was a Senior Research Scientist with the Computer Vision Group at UC Berkeley from 2007 to 2014. He currently holds a faculty position in the Department of Biomedical Engineering at Universidad de los Andes in Colombia. Since 2020, he has led the Center for Research and Formation in Artificial Intelligence (CinfonIA) at UniAndes. His research interests are in computer vision and machine learning, where he has worked on several problems, including perceptual grouping, object recognition, and the analysis of biomedical images.