Student 1: Juan Antonio Barragan, advised by Peter Kazanzides
“Context-aware CT guidance during diagnostic laparoscopy for advanced ovarian cancer”
Abstract
In advanced-stage ovarian cancer, malignant cells spread throughout the abdominal cavity, making complete tumor removal during cytoreductive surgery critical yet often infeasible when disease burden is high. Diagnostic laparoscopy and computed tomography (CT) provide complementary information on tumor spread and operability, but they are traditionally analyzed separately, resulting in fragmented assessments. This work presents an AI-based approach that provides context-aware CT navigation by detecting the abdominal region and visible organs in each video frame and then retrieving the corresponding CT slices and anatomical segmentations. A dedicated dataset of annotated laparoscopic videos is used to train region and organ detectors built on self-supervised learning (SSL) pretrained backbones, with comparisons to existing surgical vision baselines. The proposed models achieve a mean F1-score of 60.2 across seven abdominal regions and a mean average precision (mAP) of 80.3 across thirteen organs. A CT visualization tool further demonstrates real-time synchronization between the CT volume and the laparoscopic video, supporting improved intraoperative disease assessment and surgical decision-making.
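For readers who want a concrete picture of the per-frame pipeline the abstract describes, the sketch below shows one way such a loop could be wired up in Python: classify the abdominal region, detect visible organs, and look up the CT slices mapped to that region. This is a minimal illustration, not the authors' implementation; the region-to-slice mapping, the model interfaces, and all names are assumptions.

```python
# Minimal sketch (not the authors' implementation) of the per-frame pipeline:
# classify the abdominal region, detect visible organs, and retrieve the CT
# slices mapped to that region. The mapping and helper signatures are hypothetical.

from dataclasses import dataclass

@dataclass
class FrameContext:
    region: str                 # one of the seven abdominal regions
    organs: list[str]           # organ labels detected in the frame
    ct_slices: list[int]        # indices of preoperative CT slices mapped to the region

# Hypothetical mapping from abdominal region to a preoperative CT slice range.
REGION_TO_CT_SLICES = {
    "pelvis": range(180, 220),
    "right_upper_quadrant": range(60, 100),
    # ... remaining regions
}

def analyze_frame(frame, region_classifier, organ_detector) -> FrameContext:
    """Run both detectors on one video frame and attach the matching CT slices."""
    region = region_classifier(frame)      # e.g., SSL-pretrained backbone + region head
    organs = organ_detector(frame)         # e.g., detector returning visible organ labels
    ct_slices = list(REGION_TO_CT_SLICES.get(region, []))
    return FrameContext(region=region, organs=organs, ct_slices=ct_slices)
```

In a visualization tool like the one described, this per-frame context could drive which CT slices and segmentations are displayed alongside the live video.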
Bio
Juan Antonio Barragan is a Ph.D. student in the Department of Computer Science at Johns Hopkins University, advised by Professor Peter Kazanzides. His research focuses on advanced human–computer interfaces and image-guided navigation systems for surgery, drawing on digital-twin modeling, simulation frameworks, and deep learning. He received his M.S. in Industrial Engineering from Purdue University and his B.S. in Electronic Engineering from the Universidad Nacional de Colombia. During his Ph.D., he was awarded the Chateaubriand Fellowship, which supported a 10-month research stay at the University of Strasbourg under the supervision of Professor Nicolas Padoy.
Student 2: Yifan Yin, advised by Tianmin Shu
Abstract
Instruction following is central to embodied AI and human–robot interaction: it allows robots to interact with people through natural language, the interface we use every day. This talk explores how robots interpret intent and execute multi-step actions across tabletop and mobile manipulation tasks when instructions vary in both granularity and fidelity. We will outline key challenges and discuss emerging solutions that blend part-aware perception, hierarchical planning, and pragmatic reasoning over contextual cues. We will also highlight future research directions in this area.
Bio
Yifan Yin is a second-year Ph.D. student in Computer Science at Johns Hopkins University, advised by Prof. Tianmin Shu. His research spans embodied AI, robotics, and human–robot interaction, with a focus on 3D scene understanding, world models, multimodal policy learning, and integrated task and motion planning for assistance and collaboration.