LCSR Research during COVID-19

Faced with the COVID-19 crisis, our LCSR professors have recognized the pressing need to develop open-source engineering solutions that address many aspects of the pandemic. This page highlights some of the new research opportunities that have emerged.

Telerobotics for the Intensive Care Unit (ICU)

For an infectious disease such as COVID-19, health care workers must don and doff personal protective equipment each time they enter the ICU, even to perform a simple task such as changing a setting on a ventilator or infusion pump. LCSR researchers are developing a quickly deployable solution that will allow health care workers to remotely operate equipment from outside the ICU. The LCSR team consists of Profs. Peter Kazanzides and Russell H. Taylor from the Department of Computer Science; Profs. Axel Krieger and Iulian Iordachita from the Department of Mechanical Engineering; and research scientists and technical staff members Balazs Vagvolgyi, Anton Deguet, and Anna Goodridge. Clinical collaborators include critical care doctors at Johns Hopkins Hospital and the University of Maryland Medical Center, faculty from the Johns Hopkins School of Nursing, and respiratory specialists from the Johns Hopkins Hospital. In addition, the team is working with the JHU Armstrong Institute for Patient Safety and Quality.

The team is exploring two robotic concepts: a Cartesian (XYZ) stage that attaches to the screen of the medical device and a conventional robot arm that is mounted near the medical device. The plan is to first deploy the XYZ robot on the most prevalent ventilator at Johns Hopkins Hospital, which has a touchscreen interface, while refining both robot designs to enable interaction with more complex medical device interfaces, such as infusion pumps and ventilators with knobs. All robotic systems will include at least one camera to provide live video feedback to the operator outside the ICU. With some robot designs, the operator could also command the robot arm to visually survey other parts of the ICU. All robot designs will include safety features, such as force sensing to ensure that they do not damage the equipment, and will be easy to clean and disinfect.
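To make the XYZ-stage concept concrete, the sketch below shows how a touchscreen target might be converted to stage coordinates, with a force-sensing check gating the press. Every name, scale factor, and threshold here is a hypothetical illustration, not the LCSR team's actual design.

```python
# Hypothetical sketch: map a touchscreen pixel target to Cartesian (XYZ)
# stage coordinates, and refuse to press if the measured contact force is
# too high. All values below are illustrative assumptions.

PIXELS_PER_MM = 5.0        # assumed screen scale (pixels per millimeter)
MAX_PRESS_FORCE_N = 2.0    # assumed safety limit on contact force

def pixel_to_stage(px, py, origin_mm=(10.0, 10.0)):
    """Convert a screen pixel (px, py) to stage coordinates in mm."""
    return (origin_mm[0] + px / PIXELS_PER_MM,
            origin_mm[1] + py / PIXELS_PER_MM)

def press_allowed(measured_force_n):
    """Gate the press motion on the force-sensor reading."""
    return measured_force_n < MAX_PRESS_FORCE_N

x_mm, y_mm = pixel_to_stage(100, 250)
print(x_mm, y_mm)          # stage target for the requested on-screen button
print(press_allowed(0.5))  # True: safe to press
print(press_allowed(3.0))  # False: abort, force limit exceeded
```

The force check mirrors the safety feature described above: motion is only ever commanded while the sensed force stays below a conservative limit.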



Facilitating Machine Learning Research to Inform Coronavirus Response

The overarching goal of this project is to collect region-specific data to understand why COVID-19 spreads differently in different communities. For example, one might speculate that the density of New York City leads to a higher rate of infection, but what can that trajectory tell us about how COVID-19 might spread in, say, Miami?

By collecting data such as population density and public transportation usage, we aim to identify similar areas and extend predictive models to regions where the spread is less advanced. By combining a similarity measure relating New York City, Seattle, or LA, for example, to Miami with a model describing how interventions, such as shelter-in-place orders, have affected COVID-19 transmission, we aim to understand how implementing or rolling back such interventions could affect transmission in the future.
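One simple way to realize such a similarity measure is cosine similarity between region feature vectors. The sketch below is purely illustrative; the feature values and the choice of cosine similarity are assumptions, not the project's actual data or method.

```python
import math

# Illustrative region-similarity measure: each region is a feature vector
# (e.g., population density, transit use), and similarity is the cosine of
# the angle between the vectors. All feature values here are made up.

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = (math.sqrt(sum(x * x for x in a)) *
            math.sqrt(sum(y * y for y in b)))
    return dot / norm

# hypothetical standardized features: [density, transit use, intl. travel]
regions = {
    "NYC":   [1.00, 0.95, 0.90],
    "Miami": [0.45, 0.20, 0.70],
    "LA":    [0.55, 0.30, 0.75],
}

# predict Miami's trajectory from the most similar well-observed region
sims = {name: cosine_similarity(v, regions["Miami"])
        for name, v in regions.items() if name != "Miami"}
print(max(sims, key=sims.get))  # region whose spread pattern we would borrow
```

A learned similarity over many more features would replace this hand-built one, but the mechanism of ranking candidate reference regions would be the same.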

So far, we’ve adopted an epidemiological model from Imperial College to estimate the effects of past interventions. Our model has shown that compared to the European countries that Imperial College has focused on, most US counties are in the earlier stages of the disease and have yet to effectively “flatten the curve”. Moving forward, we hope to incorporate new information on compliance with stay-at-home orders based on foot-traffic data, examining the public’s response in a region-specific manner and its effect on the transmission rate of COVID-19.
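The effect of an intervention on the epidemic peak can be illustrated with a toy discrete-time SIR model in which an intervention reduces the contact rate. This is a minimal stand-in for exposition only; it is not the Imperial College model, which infers intervention effects in a Bayesian semi-mechanistic framework, and all parameter values are assumptions.

```python
# Toy discrete-time SIR model: an intervention that cuts the contact rate
# "flattens the curve" by lowering the peak number of simultaneous infections.
# Parameters are illustrative, not fitted to any real region.

def sir_peak_infected(beta, gamma=0.1, intervention_day=None,
                      reduction=0.6, days=300, n=1_000_000, i0=100):
    s, i, r = n - i0, i0, 0
    peak = i
    for day in range(days):
        active = intervention_day is not None and day >= intervention_day
        b = beta * (1 - reduction) if active else beta
        new_inf = b * s * i / n      # new infections this day
        new_rec = gamma * i          # new recoveries this day
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        peak = max(peak, i)
    return peak

no_action = sir_peak_infected(beta=0.3)
lockdown = sir_peak_infected(beta=0.3, intervention_day=30)
print(lockdown < no_action)  # True: the intervention lowers the epidemic peak
```

Comparing the two runs shows the qualitative point in the text: an early reduction in the contact rate yields a far lower peak than the uncontrolled trajectory.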

The dataset we have collected to inform our research has won a Kaggle COVID-19 Dataset Award and is publicly available online.

The project is led by Mathias Unberath and is the result of a herculean effort by a group of students at Johns Hopkins University and LCSR. Special thanks go to Jie Ying Wu, Benjamin Killeen, Kinjal Shah, Anna Zapaishchykova, Philipp Nikutta, Aniruddha Tamhane, Shreya Chakraborty, Jinchi Wei, Tiger Gao, and Mareike Thies.


Alternative PPE Filter Materials

The COVID-19 pandemic has created a shortage of personal protective equipment (PPE) for health care workers and first responders worldwide. The LCSR team is sourcing and testing alternative filter materials for use in respirator masks. The group is working on particulate testing for several types of filter materials, as well as fit testing for open-source mask designs and designs developed in house. The team consists of research scientist Anna Goodridge from the LCSR, in collaboration with the WSE Manufacturing team led by Rich Middlestadt, including Niel Leon; from the Department of Biomedical Engineering, Jeff Siewerdsen, Zachary Baker, and Paul Hage; from the Department of Electrical Engineering, Kevin Gilboy; and from the Department of Environmental Health and Engineering in the Bloomberg School of Public Health, Ana Rule, Kirsten Koehler, Peter DeCarlo, and Ashley Newton. Clinical collaborators include critical care doctors at Johns Hopkins Hospital.

Deep learning course prepares students for success in AI careers

Students present final projects for Machine Learning: Deep Learning in Hackerman Hall


Artificial intelligence (AI), under development for decades, is now hitting the mainstream. Many industries – from healthcare and banking to retail and entertainment – are eager to invest in workers who can apply AI-powered tools to solve real-world problems.


To meet this soaring demand for AI talent, Johns Hopkins University now offers courses aimed at preparing students for successful careers in AI-related fields. One such course, offered by the Department of Computer Science, introduces students to deep learning, a subdiscipline of AI in which a computer tries to discover meaningful patterns from data to make decisions.


In deep learning, artificial neural networks process and learn information in a way that many argue resembles the human brain. And much like humans, deep learning algorithms learn from experience, performing the same task over and over to improve the outcome. Because they learn to map raw inputs directly to desired outputs, deep learning methods often outperform other machine learning approaches on complex problems in image classification, speech recognition, and computer vision.
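The idea of learning an input-to-output mapping through repetition can be shown with a tiny two-layer network trained on the XOR function. This numpy sketch is purely illustrative and is not drawn from the course materials.

```python
import numpy as np

# A minimal two-layer neural network learning XOR: each pass over the data
# ("experience") nudges the weights to reduce the prediction error.

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # hidden layer
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # output layer
sigmoid = lambda z: 1 / (1 + np.exp(-z))

losses = []
for _ in range(5000):                            # repeated practice
    h = sigmoid(X @ W1 + b1)                     # forward pass
    p = sigmoid(h @ W2 + b2)
    losses.append(float(np.mean((p - y) ** 2)))
    dp = 2 * (p - y) * p * (1 - p)               # backward pass (chain rule)
    dh = (dp @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ dp;  b2 -= 0.5 * dp.sum(0) # gradient descent step
    W1 -= 0.5 * X.T @ dh;  b1 -= 0.5 * dh.sum(0)

print(losses[0] > losses[-1])  # True: error shrinks with practice
```

Nothing is hand-coded about XOR itself; the network discovers the mapping solely by repeatedly comparing its outputs to the desired ones, which is the point the paragraph above makes.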


Everyday examples of deep learning in action include virtual assistants like Alexa or Siri and translation apps like Google Translate – and it’s how Amazon knows which products you might be interested in buying next.


If deep learning is the next frontier in artificial intelligence, Hopkins students are eager to explore the unknown. Machine Learning: Deep Learning will be offered for the fifth time this spring. Last semester, 90 students enrolled, with 70 students on the waiting list.  One reason the course is popular is that it attracts students from various disciplines, according to instructor Mathias Unberath, an assistant research professor of computer science and a member of the university’s Laboratory for Computational Sensing and Robotics and Malone Center for Engineering in Healthcare.


Biomedical engineering senior Hadley VanRenterghem opted in because she knows deep learning skills are in high demand in her field.


“Machine learning has amazing potential to contribute to new advances in healthcare. I took the class because knowing how to design and apply deep learning systems is going to be very useful for my future career plans in medical technology. One big thing I learned is that there isn’t one ‘best’ model for solving deep learning problems. You have to understand the motivation and mechanisms behind different approaches,” said VanRenterghem.


The MATHIAS project can classify cell types (including colon cancer tumor cells) in histopathological images


Throughout the semester, students learn to design and train neural network architectures using standardized data sets. For their final project, student teams must implement a deep learning model to solve a problem of their choosing.


VanRenterghem, along with fellow biomedical engineering students Matt Figdore and Max Xu, and computer science major Taha Baig, proposed applying deep learning to improve histopathology image analysis.


Histopathologists examine human tissue samples and return a diagnosis. Automatic histopathology image recognition could speed up the diagnostic process and improve the quality of diagnosis. To this end, VanRenterghem and team built a system consisting of two deep learning models. Using a deep clustering framework, they trained the first model to group unlabeled cell images with similar features, allowing the model to classify each cell type. Their second model could then successfully identify individual cell types in tissue slide images containing multiple cell types. The team says their system (appropriately called MATHIAS, an homage to their instructor) could help histopathologists quickly identify areas that are most likely to have tumor cells.
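The two-stage idea can be sketched schematically: cluster unlabeled feature vectors, then label new samples by their nearest cluster. The real system clusters learned deep features from cell images; the plain k-means routine and 2-D points below are purely illustrative stand-ins.

```python
# Schematic stand-in for the two-stage pipeline: (1) cluster unlabeled
# feature vectors with k-means, (2) classify new samples by nearest cluster.
# All data and cluster counts here are illustrative.

def nearest(p, centroids):
    """Index of the centroid closest to point p (squared Euclidean)."""
    return min(range(len(centroids)),
               key=lambda j: sum((a - b) ** 2 for a, b in zip(p, centroids[j])))

def kmeans(points, centroids, iters=10):
    for _ in range(iters):
        groups = [[] for _ in centroids]
        for p in points:                               # assignment step
            groups[nearest(p, centroids)].append(p)
        centroids = [tuple(sum(v) / len(g) for v in zip(*g)) if g else c
                     for g, c in zip(groups, centroids)]  # update step
    return centroids

# two well-separated blobs standing in for two cell types
points = [(0.1, 0.2), (0.2, 0.1), (0.0, 0.0),
          (5.0, 5.1), (5.2, 4.9), (4.8, 5.0)]
centroids = kmeans(points, centroids=[(0.0, 0.0), (1.0, 1.0)])

# stage 2: an unseen sample is labeled by its nearest learned cluster
print(nearest((0.15, 0.15), centroids), nearest((5.0, 5.0), centroids))
```

In the actual system a deep network both produces the features and refines them jointly with the cluster assignments, but the label-by-nearest-cluster logic is the same in spirit.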


The MATHIAS project won one of two Best Project Awards handed out by surgical robotics company Intuitive Surgical during the course’s final project demonstrations in December.


The other winning project explored training a neural network to solve jigsaw puzzles in a human-like fashion. Team members Shaoyan Pan, Matthew Pittman, Mike Peven, and Ben Killeen – graduate students in electrical and computer engineering, mechanical engineering, and computer science – were interested in how well their network could solve a puzzle of a cat. Puzzles are a popular deep learning task because the algorithm must identify reliable patterns and classify images in order to solve the puzzle.


Ultimately, the algorithm learned to solve the puzzle but performance varied depending on the content of the jigsaw pieces. Computerized puzzle pieces used for deep learning are perfectly square. Since the edges didn’t provide clues on how to solve the puzzle, the algorithm had to learn to put pieces together based only on its knowledge of what a cat is supposed to look like. This was challenging for the computer if, for example, several puzzle pieces only showed fur.
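The setup of such a jigsaw task can be sketched in a few lines: cut an image into perfectly square tiles, shuffle them with a hidden permutation, and let the model's job be recovering that permutation. The toy 4×4 "image" below is illustrative and unrelated to the team's cat dataset.

```python
import random

# Sketch of the jigsaw task setup: split a grid into square tiles, shuffle
# with a known permutation, and show that predicting that permutation (here,
# simply inverting it) reassembles the original image. Data is illustrative.

def to_tiles(img, t):
    """Split an n-by-n grid into (n//t)^2 tiles of size t-by-t, row-major."""
    n = len(img)
    return [[row[c:c + t] for row in img[r:r + t]]
            for r in range(0, n, t) for c in range(0, n, t)]

def from_tiles(tiles, t, n):
    """Reassemble row-major t-by-t tiles into an n-by-n grid."""
    per_row = n // t
    img = [[0] * n for _ in range(n)]
    for k, tile in enumerate(tiles):
        r0, c0 = (k // per_row) * t, (k % per_row) * t
        for dr in range(t):
            for dc in range(t):
                img[r0 + dr][c0 + dc] = tile[dr][dc]
    return img

img = [[r * 4 + c for c in range(4)] for r in range(4)]  # toy 4x4 "image"
tiles = to_tiles(img, 2)
perm = list(range(len(tiles)))
random.Random(0).shuffle(perm)                 # the hidden permutation
shuffled = [tiles[p] for p in perm]

# a model that predicts `perm` lets us invert the shuffle:
inverse = sorted(range(len(perm)), key=perm.__getitem__)
restored = from_tiles([shuffled[i] for i in inverse], 2, 4)
print(restored == img)  # True: recovering the permutation solves the puzzle
```

Because the tiles are perfect squares, their borders carry no geometric clues, which is exactly why the network must rely on image content, as described above.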


“We wanted the model to emulate unsupervised learning when solving the puzzle – to make decisions based on context clues much like humans do. That way, our model can perform different tasks that involve object recognition, rather than just one specialized task. The next step would be to test the model on other datasets and see how it works,” said Peven.


Developing a deep learning course can be tricky because the field moves so quickly, says Unberath. He wants his students to understand AI’s potential impact on society, not just its technical concepts. For example, Unberath has added a new lecture to the syllabus called Ethics, Fairness, and Human-Centered AI.


“We cover the basics in the first half of the semester, but then we start to talk about issues I think more creative minds should be considering: How can we learn from unannotated data? How can we detect and deal with bias in the data? And most fundamentally, which open problems can we tackle with learning-based algorithms to promote a fair and equitable society? Discussing these topics in class seems to resonate very well with our students.”


Written by Catherine Graham

Russell Taylor elected to the National Academy of Engineering

Election to the academy is considered among the highest professional distinctions accorded to an engineer and honors individuals who have made outstanding contributions to engineering research, practice, or education, and to the pioneering of new and developing fields of technology. Members are elected by current NAE members. This is a wonderful and well-deserved recognition of Russ’s accomplishments, as well as of the Whiting School’s leadership in engineering research and education and of the impact we are having on the world.

Russ was recognized for his contributions to the development of medical robotics and computer-integrated systems.

Russ directs WSE’s Laboratory for Computational Sensing and Robotics and was the founding director of the Engineering Research Center for Computer-Integrated Surgical Systems and Technology. His work focuses on robotics, human-machine cooperative systems, medical imaging and modeling, and computer-integrated interventional systems, and he is widely regarded as a pioneer in the early development of medical robotics technology and computer-integrated surgical systems. While at IBM’s T. J. Watson Research Center, Russ was the principal architect of what became the “Robodoc” system for joint replacement surgery, the first robotic assistant to be used in a major surgical procedure.

Among his other groundbreaking achievements were the development of a surgical planning and execution system for craniofacial osteotomies and a robotic system for minimally invasive endoscopic surgery, and many of the features and concepts he pioneered are now commonplace in medical robots. Russ continues to make substantial contributions in all aspects of medical robotics, including mechanism development, robot systems and control, image analysis and image guidance, and human-machine interfaces, across a wide range of application areas, including orthopedics, minimally invasive endoscopic surgery, image-guided needle placement, ophthalmology, otology, laryngology, sinus surgery, and radiation oncology.

Russ and other members of the newly elected class will be formally inducted during a ceremony at the NAE’s annual meeting on October 4, 2020, in Washington, D.C.

Meet the 2018 REU cohort!

This year we have welcomed students from all over the country to take part in the CSMR REU program. Click on the link to meet them!

Meet the 2018 Cohort

LCSR’s Russell Taylor elected a Fellow of the National Academy of Inventors

Russell H. Taylor, John C. Malone Professor in the Department of Computer Science and director of the Laboratory for Computational Sensing and Robotics, has been elected a Fellow of the National Academy of Inventors. Status as an NAI Fellow is a prestigious distinction bestowed on academic inventors who have created or facilitated outstanding inventions that have made a difference to people and to society.

Widely hailed as the father of medical robotics, Russ started his work in robotics research more than 40 years ago. He began focusing on medical robotics about 25 years ago, a time when the field was virtually nonexistent. His election as an NAI Fellow recognizes this early work, as well as his continuing role as a global leader in efforts to enhance medical treatment through the expanded use of robotic devices and computer-integrated tools. Russ has repeatedly said that his goal isn’t to produce tools that will replace doctors, but rather to give doctors new tools that can help them achieve better outcomes in treating their patients.

Russ is one of two WSE faculty members elected this year, bringing WSE’s total number of NAI Fellows to four, and JHU’s total number to 11. He will be formally inducted at NAI’s 7th Annual Conference in April.
