Vision-based surgical instrument segmentation, which aims to detect instrument regions in surgical images, is often a critical component of computer- and robot-assisted surgical systems. While advanced algorithms, including deep CNN models, have achieved promising segmentation results, several limitations remain: (1) the robustness and generalization ability of existing algorithms are still insufficient for challenging surgical images, and (2) deep networks usually come with a high computational cost, which needs to be addressed for time-sensitive applications during surgery. In this talk, I will present two algorithms that address these challenges. First, I will introduce a lightweight CNN that achieves better segmentation performance with less inference time on low-quality endoscopic sinus surgery videos than several advanced deep networks. I will then discuss a domain adaptation method that transfers knowledge learned from relevant labeled datasets to perform instrument segmentation on an unlabeled dataset.
Shan Lin is a PhD candidate in the Department of Electrical and Computer Engineering at the University of Washington, working with Prof. Blake Hannaford on medical robotics. Her research focuses on surgical instrument segmentation and surgical skill assessment.