Biomechanical- and Image-Guided Surgical Systems (BIGSS) – Mehran Armand
The Biomechanical- and Image-Guided Surgical Systems (BIGSS) laboratory is a collaboration between researchers at the Johns Hopkins University and the Johns Hopkins University Applied Physics Laboratory. The laboratory focuses on developing innovative computer-aided surgical guidance systems involving novel robots, advanced imaging, and real-time biomechanical assessments to improve surgical outcomes.
Photoacoustic & Ultrasonic Systems Engineering (PULSE) Lab – Muyinatu Bell
The PULSE Lab, directed by Dr. Muyinatu A. Lediju Bell, integrates light, sound, and robots to develop innovative biomedical imaging systems that simultaneously address unmet clinical needs and improve patient care. Our emphasis is on diagnostic and surgical ultrasound and photoacoustic technologies, with applications in neurosurgery, cancer detection and treatment, and women’s health. We maintain a constant eye toward interfacing our technologies with real patients to facilitate clinical translation. The PULSE Lab is affiliated with the Laboratory for Computational Sensing and Robotics, the Malone Center for Engineering in Healthcare, and the Carnegie Center for Surgical Innovation, with dedicated laboratory space at both the Johns Hopkins University Homewood campus and the Johns Hopkins School of Medicine.
Medical UltraSound Imaging and Intervention Collaboration (MUSiiC) – Emad Boctor
The MUSiiC research lab, headed by Dr. Emad Boctor, develops innovative ultrasound technologies for medical applications ranging from prostate and breast cancer treatment to liver ablation and brachytherapy. The group is a collaboration among researchers from the Johns Hopkins School of Medicine, the Johns Hopkins Whiting School of Engineering, and partners from other academic institutions and industry.
Haptics and Medical Robotics Lab (HAMR) – Jeremy Brown
The Haptics and Medical Robotics (HAMR) Laboratory seeks to extend the current knowledge surrounding the human perception of touch, especially as it relates to applications of human/robot interaction and collaboration. We are particularly interested in medical robotics applications such as minimally invasive surgical robots, upper-limb prosthetic devices, and rehabilitation robots. To solve many of the problems in these areas, we apply techniques from human perception, human motor control, neuromechanics, and control theory.
Robot and Protein Kinematics Lab (RPK) – Gregory Chirikjian
Dr. Gregory Chirikjian directs the Robot and Protein Kinematics Lab in LCSR. The lab conducts research in computational structural biology (in particular, the computational mechanics of large proteins), conformational statistics of biological macromolecules, motion planning for ‘hyper-redundant’ (snakelike) robots, hyper-redundant robotic manipulator arms, modular self-reconfigurable robots, applied mathematics (applications of group theory in engineering), and self-replicating robotic systems.
Locomotion in Mechanical and Biological Systems (LIMBS) Lab – Noah Cowan
The LIMBS laboratory, directed by Noah J. Cowan, strives to uncover principles of animal and robot sensory guidance. For animals this is an analysis problem: we reverse-engineer the biomechanical and neural control principles underlying animal movement. For robots this is a design problem: we incorporate biological inspiration and engineering insights to synthesize new approaches to robot control. This research program includes several projects in robot and animal (including human) sensing, navigation, and control.
Computational Interaction and Robotics Lab (CIRL) – Gregory Hager
The Computational Interaction and Robotics Laboratory, directed by Dr. Gregory Hager, is devoted to the study of problems that involve dynamic, spatial interaction at the intersection of imaging, robotics, and human-computer interaction. The laboratory has a number of ongoing projects in this area. The Language of Motion project seeks to develop new methods to recognize and evaluate skilled human manipulation, with a particular emphasis on surgery. Data is collected using a da Vinci Surgical robot and processed into gesture-based models that support skill evaluation, training, and human-robot collaborative task execution. The Manipulating and Perceiving Simultaneously (MAPS) project seeks to apply principles of computer vision to tactile sensing, with the goal of developing new methods for haptic object recognition. The lab’s most recent work aims to develop Generic Perception to support general-purpose manipulation of objects in the physical world. The laboratory also works in medical imaging, with a particular interest in interactive, image-based computer-aided diagnostic systems.
Intuitive Computing Laboratory – Chien-Ming Huang
The Intuitive Computing Laboratory seeks to innovate interactive robot systems to provide physical, social, and behavioral support personalized to people with various characteristics and needs. We are an interdisciplinary team that designs, builds, and studies intuitive interaction capabilities of robotic systems to improve their utilities and user experience. We draw on principles and techniques from human-computer interaction, robotics, and machine learning in our research and are interested in using our systems to address problems in the fields of health care, education, and collaborative manufacturing.
Advanced Medical Instrumentation and Robotics (AMIRo) – Iulian Iordachita
The Advanced Medical Instrumentation and Robotics Research Laboratory (AMIRo), directed by Dr. Iulian Iordachita, conducts research to aid and support robot-assisted medical technology, encompassing medical diagnosis, therapy, and clinical research. The main goal is to create future medical robots and devices that will help clinicians deliver earlier diagnoses and less invasive treatments at lower cost and in less time. Application areas include robot-assisted microsurgery, MRI-compatible mechatronic systems, image-guided procedures, optical fiber-based force and shape sensing, and small animal research platforms.
Sensing, Manipulation, and Real-Time Systems Laboratory (SMARTS Lab) – Peter Kazanzides
Dr. Peter Kazanzides heads the SMARTS lab, which works on components and integrated systems for computer-assisted surgery and robotics in extreme environments. This includes the development of mixed reality user interfaces and the integration of real-time sensing to enable robotic assistance in challenging environments, such as minimally invasive surgery, microsurgery, and outer space. Research in component technologies includes high-performance motor control, sensing, sensor fusion, and head-mounted displays. The lab also performs research in system architectures, applying component-based software engineering concepts to provide a uniform programming model for multi-threaded, multi-process, and multi-processor systems.
Autonomous Systems, Control, and Optimization Laboratory (ASCO) – Marin Kobilarov
The Autonomous Systems, Control and Optimization Laboratory (ASCO), directed by Dr. Marin Kobilarov, aims to develop intelligent robotic vehicles that can perceive, navigate, and accomplish challenging tasks in uncertain, dynamic, and highly constrained environments. The lab performs research in analytical and computational methods for mechanics, control, motion planning, and reasoning under uncertainty, and in the design and integration of novel mechanisms and embedded systems. Application areas include mobile robots, aerial vehicles, and nano satellites.
Our work focuses on both basic and translational research in the development of novel tools, imaging, and robot control techniques for medical robotics. Specifically, we investigate methodologies that (i) increase the intelligence and autonomy of medical robots and (ii) improve their image guidance, enabling them to perform previously impossible tasks, improve efficiency, and improve patient outcomes.
Terradynamics Lab – Chen Li
Aero- and hydrodynamics have helped us understand how animals fly and swim and have enabled the development of aerial and aquatic vehicles that move through air and water rapidly, agilely, and efficiently. By contrast, we know surprisingly little about how terrestrial animals move so well in nature, and even the best robots still struggle in complex terrains like building rubble, forest floors, mountain boulders, and cluttered indoor environments. In our lab, we are developing experimental tools and theoretical models to create the new field of terradynamics, which describes complex locomotor-terrain interactions, and we use terradynamics to better understand animal locomotion and advance robot locomotion in complex terrains.
Computer Aided Medical Procedures (CAMP) – Nassir Navab
The CAMP laboratory aims to develop next-generation solutions for computer-assisted interventions. The complexity of surgical environments requires us to study, model, and monitor surgical workflow, enabling the development of novel patient- and process-specific imaging and visualization methods. To meet requirements for flexibility and reliability, we work on novel robotized multi-modal imaging solutions, and to satisfy challenging usability requirements, we focus on data fusion and its interactive representation within augmented reality environments. The lab creates a bridge across the Atlantic Ocean by hosting researchers working at both of Prof. Navab’s groups: at JHU in Baltimore and at TU Munich.
Computer Integrated Interventional Systems (CIIS) Laboratory – Russell Taylor
Professor Russell Taylor directs the Computer Integrated Interventional Systems (CIIS) laboratory. This lab exists to develop surgical systems that integrate novel computer and human/machine interface technologies to revolutionize surgical procedures, extending the surgeon’s abilities to achieve better outcomes at lower cost. Recent research projects include robot-assisted microsurgery (the Steady Hand Eye Robot), surgical control and planning, snake robots, deformable human anatomical models, smart surgical instruments, treatment-plan optimization in radiation oncology, image overlay, a laparoscopic-assisted robot system, robot-assisted ultrasound, and MRI-compatible robotics.
Advanced Robotics and Computationally AugmenteD Environments (ARCADE) Lab – Mathias Unberath
In the Advanced Robotics and Computationally AugmenteD Environments (ARCADE) Lab, we pioneer research in computer vision, machine learning, augmented reality, and medical imaging to innovate collaborative systems that assist clinical professionals across the healthcare spectrum. We collaborate closely with care providers to understand clinical workflows, identify opportunities and constraints, and facilitate translation.
Dynamical Systems and Control Laboratory (DSCL) – Louis Whitcomb
Professor Louis Whitcomb directs the DSCL lab, whose research focuses on the navigation, dynamics, and control of linear and nonlinear dynamical systems, including observers, nonlinear systems analysis, modeling, and sensing relevant to robots that interact dynamically with extreme environments. Our focus is on problems motivated by several application areas that share a common underlying mathematical framework, including underwater robotics, space telerobotics, and medical robotics. Lab Director Louis Whitcomb and his students have participated in the development of numerous underwater vehicles for oceanographic science missions, including the Nereus hybrid underwater vehicle, which dove to the bottom of the Mariana Trench in 2009, and the Nereid Under-Ice (NUI) hybrid underwater vehicle, which was deployed under Arctic sea ice at 87 degrees North in 2016. Our methodology is to address fundamental theoretical issues with concise mathematical analysis and to experimentally validate our research results in actual working systems.
Computational Sensory-Motor Systems Lab (CSMS) – Ralph Etienne-Cummings
Dr. Ralph Etienne-Cummings directs the CSMS lab. The lab’s current research includes various experiments to understand neurophysiology of spinal neural circuits, to interface with them, to decode their sensory-motor relationships, and to use these relationships to control biomorphic robots. The lab is developing brain-like computational systems to mimic the object detection, recognition, and tracking found in humans and primates. The plan is to continue to expand this area of research, while leveraging the laboratory’s expertise in VLSI circuits and systems, visual and acoustical information processing, neuromorphic computation systems and biomorphic robotics.
Networked and Spatially Distributed Systems (NSDS) – Dennice Gayme
The Networked and Spatially Distributed Systems (NSDS) group, directed by Dr. Dennice Gayme, is concerned with characterizing, predicting, and controlling spatially distributed and networked systems in order to ensure stability and manage disturbances while also optimizing efficiency and performance. These systems are typically represented as dynamical systems interacting over a graph (e.g., transportation, communication, or power networks) or as partial differential equations (e.g., wind farms, wall turbulence, and power system oscillations). We develop theory and computational approaches for applications that lie at the interdisciplinary intersections of dynamical systems, controls, and fluid mechanics, such as coordinated control of wind farms and grid integration of renewable energy.
Photonics and Optoelectronics Laboratory – Jin U. Kang
The Photonics and Optoelectronics Laboratory, directed by Jin U. Kang, conducts experimental and theoretical investigations in photonics and optoelectronics, with an emphasis on developing novel fiber-optic imaging and sensor systems for medical applications. Specifically, the lab develops high-speed, real-time optical coherence tomography (OCT) systems that can guide surgical procedures and allow doctors to make an accurate prognosis of the surgical outcome. In addition, we develop an array of “smart surgical tools” that use fiber-optic OCT distal sensors to ensure safe and precise surgical maneuvers. A large effort is also devoted to developing sub-millimeter endoscopic imaging systems that allow imaging of brain activity in awake, freely moving mice.
Image Analysis and Communications Laboratory (IACL) – Jerry Prince
The Image Analysis and Communications Lab (IACL) in the Department of Electrical and Computer Engineering at Johns Hopkins University is led by Professor Jerry Prince. Research focuses on image and signal processing in medical imaging and video processing. Specific areas of technical interest include filter banks, wavelets, multivariate systems, signal decomposition, time-frequency and time-scale analysis, active contours and deformable geometry, computed tomography, magnetic resonance imaging, and optical flow.
Urology Robotics (URobotics) – Dan Stoianovici
Urology Robotics (URobotics) is a research and education program dedicated to advancing the technology used in urology. The lab specializes in the development of surgical robotic systems, especially robots for real-time image-guided interventions (IGI). Beyond urology, the instruments and systems created in the lab apply to other medical specialties, such as interventional radiology, and to industry. The program is built on a multidisciplinary, integrated team of students, engineers, and clinicians working in partnership from the bench to the bedside. The lab is part of the Brady Urological Institute (the Urology Department of Johns Hopkins Medicine) and is located at the Johns Hopkins Bayview Medical Center.
Vision, Dynamics and Learning Lab (VDL) – Rene Vidal
Our research spans a wide range of areas in biomedical imaging, computer vision, dynamics and controls, machine learning and robotics. In particular, we are interested in inference problems involving geometry, dynamics, photometry and statistics, such as (1) inferring models from images (image/video segmentation and structure from motion), static data (generalized PCA) or dynamic data (identification of hybrid systems), and (2) using such models to accomplish a complex mission (land a helicopter, pursue a team of evaders, follow a formation).