REU Projects 2016

Please select your top 3 projects from the list below and list the project numbers in your application.

Project 1: Active Sensing in Biological Systems
Faculty Mentor: Dr. Noah Cowan

Project Description:
Active sensing is the expenditure of energy—often in the form of movement—for the purpose of sensing. For example, when you sense the texture of an object, versus estimating its weight, you perform different hand motions. These specialized “active” hand movements match the encoding properties of the sensory receptors appropriate for the task. Our goal in this project is to analyze active sensing behavior in weakly electric fish. These fish produce and detect an electric field (analogous to an “electrical sonar”). They have sensory receptors all over their skin, and actively swim back and forth to enhance the sensory input to these receptors (much like you move your hand up and down to sense the weight of an object). Our goal is to develop quantitative mathematical models of active sensing so that we can translate its principles, in a rigorous way, into algorithms that could be implemented by a robotic system.
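To give a flavor of the kind of quantitative model involved, the sketch below simulates a hypothetical fish holding station at a refuge using simple proportional feedback plus a small "probing" oscillation. All parameters and the model form are illustrative inventions, not the lab's actual model:

```python
import math

def simulate_tracking(steps=2000, dt=0.01, k=5.0, amp=0.3, freq=1.0):
    """Toy active-sensing model: position x tracks a refuge at 0 via
    proportional feedback (-k * x), while a small sinusoidal "probe"
    motion continually stimulates the sensory receptors."""
    x = 1.0          # initial offset from the refuge
    trace = []
    for i in range(steps):
        t = i * dt
        probe = amp * math.cos(2 * math.pi * freq * t)  # active-sensing movement
        x += dt * (-k * x) + dt * probe                 # feedback plus probing
        trace.append(x)
    return trace

trace = simulate_tracking()
```

In this toy model the tracking error decays while a small residual oscillation persists, mirroring the back-and-forth swimming described above.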

Role of the student:
The REU student will be involved in biological experimentation using our custom real-time, closed-loop experimental system. In addition, he or she will perform data analysis, system programming, and, ideally, mathematical modeling of active sensing behavior.

Preferred Background Skills:
Undergraduate training in linear algebra and differential equations. Some knowledge of dynamical systems and control theory is highly desirable, as is knowledge of Matlab or another language (C, C++, Python). No specific experimental background in biological systems is required, but a willingness to get into the lab (with mentorship) and perform non-invasive behavioral experiments on live animals (specifically, fish) is essential.

Project 2: Design and Fabrication of Plasmonically-Enhanced Solution-Processed Infrared Photodetectors
Faculty Mentor: Dr. Susanna Thon

Project Description:
Infrared photon sensing and detection technology is of interest for a variety of applications in medicine, communications, and computing. Current infrared photodetectors rely on expensive materials that must be operated at low temperatures for efficient detection. The aim of this project is to use solution-processed materials, such as colloidal quantum dots and plasmonic metal nanoparticles, to build low-noise photodetectors for short-wave infrared (SWIR) radiation that can operate at room temperature and compete with traditional crystalline technology. The project will include computational design, chemical synthesis, device fabrication, and optical/electronic testing components.

Role of the Undergraduate Student:
The undergraduate researcher will be responsible for performing optical (finite-difference time-domain) simulations using commercial and existing software to design and optimize plasmonic enhancers for colloidal quantum dot photodetectors. Additionally, the undergraduate researcher will assist graduate students with colloidal materials synthesis and optical spectrophotometry measurements.

Preferred Background Skills:
Familiarity with Matlab is preferred. Some experience or comfort level with wet chemistry techniques is desirable but not required. All lab skills will be taught as-needed.

Project 3: Telerobotic System for Satellite Servicing
Faculty Mentors: Dr. Peter Kazanzides, Dr. Louis Whitcomb and Dr. Simon Leonard

Project Description: With some satellites entering their waning years, the space industry is facing the challenge of either replacing these expensive assets or developing the technology to repair, refuel, and service the existing fleet. Our goal is to perform robotic on-orbit servicing under the ground-based supervisory control of human operators, accomplishing tasks in the presence of uncertainty and time delays of several seconds. We have developed an information-rich telemanipulation environment, based on the master console of a customized da Vinci surgical robot, together with a set of tools specifically designed for in-orbit manipulation and servicing of space hardware. We have successfully demonstrated telerobotic removal of the insulating blanket flap that covers the spacecraft’s fuel access port, under software-imposed time delays of several seconds. We now wish to extend the models and tools to other telerobotic servicing operations.

Preferred Background Skills: Ability to develop mathematical models of satellite servicing tasks, implement models in C/C++, and perform experiments to determine model parameters.

Project 4: Photoacoustic Ultrasound Guidance for Neurosurgery
Faculty Mentors: Dr. Peter Kazanzides, Dr. Sungmin Kim and Dr. Muyinatu Lediju Bell

Project Description: Photoacoustic ultrasound refers to an ultrasound image that is formed by using a pulsed laser to excite selected tissue to create an acoustic wave that is detected by an ultrasound receiver array. This project explores the use of photoacoustic ultrasound to detect blood vessels behind bone during skull base surgery. We are developing a test platform based on a research da Vinci Surgical Robot. The project goal is to perform phantom experiments to quantify the accuracy of the photoacoustic measurements.

Preferred Background Skills: Ability to perform laboratory experiments and analyze results using Matlab. Experience with ultrasound and programming experience in C/C++ would be helpful, but not required.

Project 5: Modeling Human Attention in Oculus Rift
Faculty Mentor: Dr. Ralph Etienne-Cummings

Project Description: What draws your attention when you enter a room? When you look at a piece of art? When you survey nature? We can model these effects in software. Now we need to provide users with highlighted 3D images of areas of interest that they may have missed, and we need to know where they are looking. Hence, we need a virtual reality system that allows us to present 3D videos to a user and track their eyes and head, so that we can update the areas of most interest based on gaze. This will also allow us to monitor eye movements while the user visually surveys an area.

Role of the Undergraduate Student: The REU student will work with graduate students to convert our algorithms for real-time operation and overlay their output onto video that is piped to the Oculus Rift goggles.

Preferred Background Skills: Programming, some FPGA, some hardware.

Project 6: Machine Learning methods for Measuring (Disease) Activity Using Smart Phones: Application to Parkinson’s
Faculty Mentors: Prof. Suchi Saria (Machine Learning and Applied Statistics); Dr. Ray Dorsey (Neurology)

Project Description:
The goals of this project are to understand how smartphones can be used in everyday settings to monitor health in individuals with neurodegenerative disorders. In Parkinson’s, for example, medications lose their efficacy, and symptoms often reappear before it’s time for another dose. Real-time inference of an individual’s health status can allow for triggers that make individualized recommendations for initiating a clinic visit or increasing a dose. This project will involve developing novel methods for inferring changes in an individual’s health status from streaming time series of symptoms measured via smartphones, and methods for tailoring interventions to the patient. Our study has assembled the largest and most comprehensive database of smartphone-based measurements collected to date on individuals with Parkinson’s disease, and it continues to grow at a rate of several hundred GB per week.
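As an illustration of one simple family of methods for detecting symptom reappearance in a streaming signal, the sketch below runs a one-sided CUSUM change detector on a hypothetical tremor-severity stream. The data, baseline, drift, and threshold are invented for illustration; the project's actual methods will be developed with the mentors:

```python
def cusum(stream, baseline, drift=0.5, threshold=3.5):
    """One-sided CUSUM: return the first index where the cumulative
    positive deviation from baseline exceeds the threshold, else None."""
    s = 0.0
    for i, x in enumerate(stream):
        s = max(0.0, s + (x - baseline - drift))  # accumulate upward drift only
        if s > threshold:
            return i  # alarm: symptom level has shifted upward
    return None

# hypothetical tremor-severity stream: stable near 1.0, then shifting up
# as medication wears off
stream = [1.0, 0.9, 1.1, 1.0, 0.8, 2.6, 2.8, 3.1, 2.9, 3.0]
alarm = cusum(stream, baseline=1.0)
```

The drift term makes the detector ignore small fluctuations, so the alarm fires only after a sustained upward shift.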

Project 7: Early Detection of Adverse Events
Faculty Mentor: Prof. Suchi Saria (Machine Learning and Applied Statistics)

Project Description:
More than one in five patients suffers from a hospital-acquired infection. Data that are routinely collected in the hospital environment can be used to predict, in real time, which individuals are at risk for adverse clinical events. By identifying at-risk individuals early, clinicians can begin more aggressive therapies targeted to the specific event sooner and significantly decrease the risk of death. In this project, we will develop computational methods to infer an individual’s risk for adverse events over time based on the individual’s clinical temporal streams. The emphasis will be on developing methods that scale to large data sets and can be implemented in real time.
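For intuition, a toy risk model along these lines might combine the latest vital-sign value with its recent trend through a logistic function. The sketch below uses made-up heart-rate windows and hand-picked (not fitted) weights; it is a deliberately simplified stand-in for the methods the project will develop:

```python
import math

def risk_score(heart_rates, weights=(0.08, 0.9), bias=-9.0):
    """Toy real-time risk model: map the latest heart rate and its
    recent trend through a logistic function to a score in (0, 1).
    The weights and bias are illustrative, not learned from data."""
    latest = heart_rates[-1]
    trend = heart_rates[-1] - heart_rates[0]          # rise over the window
    z = bias + weights[0] * latest + weights[1] * (trend / len(heart_rates))
    return 1.0 / (1.0 + math.exp(-z))

stable = [72, 75, 71, 74, 73]            # hypothetical stable patient
deteriorating = [78, 90, 104, 118, 131]  # hypothetical deteriorating patient
```

A rising heart rate pushes both the level and trend terms up, so the deteriorating window scores far higher than the stable one.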

Project 8: Learning Fine-grained Action Models Using Video and Sensor Data
Mentors: Dr. Gregory Hager, Colin Lea

Project Description: Recent work in deep learning has sparked significant progress on many computer vision problems. Despite many successes in image recognition tasks, however, the effectiveness of these methods has been limited when modeling actions in video. In this project we leverage work from the robotics community to jointly train action models using video and domain-specific sensors such as accelerometers and robot joint positions. The goal is to train a model using multiple modalities and evaluate it using video alone. The focus may be in the surgical domain, where we want to model different phases of surgery from laparoscopic video, but our models will be applicable more broadly to general structured activities such as cooking.
Preferred Background Skills: Python or Matlab; computer vision, machine learning, and deep learning preferred

Project 9: Human Activity Recognition
Faculty Mentor: Dr. Rene Vidal
Project Description: The human visual system is exquisitely sensitive to an enormous range of human movements. We can differentiate between simple motions (left leg up vs. right hand down), actions (walking vs. running) and activities (playing vs. dancing). We can also identify friends by their walking styles, infer mood and intent from hand or arm gestures, or evaluate the grace and athleticism of a ballerina. Recently, significant progress has been made in automatically recognizing human activities in videos. Such advances have been made possible by the discovery of powerful image descriptors and the development of advanced classification techniques. However, the performance of the “features + classifiers” approach seems to be reaching a limit, which is still well below human performance.

Project Goals: The goal of this project is to develop algorithms for recognizing human movements, actions, and activities in unstructured and dynamically changing environments. For instance: recognizing face, hand, and arm gestures, human gaits, and daily activities (shaking hands, drinking, eating, etc.). Classical 2D representations will be merged with 3D data (motion capture, Kinect, accelerometers) in order to represent a human performing an action as a collection of 2D/3D pose/shape and 2D/3D dynamical models.

REU Goals: As part of the project, the intern will work alongside PhD students and develop novel algorithms for 3D human motion representation for activity recognition. The intern will implement code for these algorithms as well as test them on several databases. The intern will read research papers on activity recognition, 3D shape modeling, and motion capture-based recognition methods, and will learn new techniques to solve the above problem. Moreover, the intern will implement novel algorithms in MATLAB and C++ and become familiar with several computer vision and machine learning concepts. The intern will present their work to other graduate students and professors and will potentially be able to publish their research in computer vision conferences and journals. As part of the group, the intern will experience first-hand a rigorous and rewarding research environment. Experience in C++ and MATLAB coding and familiarity with classification techniques and dynamical systems are a plus.

Project 10: Automated Analysis of Cardiac Action Potentials from Human Embryonic Stem Cells
Faculty Mentor: Dr. Rene Vidal

Project Description: The use of human embryonic stem cell cardiomyocytes (hESC-CMs) in tissue transplantation and repair has led to major advances in cardiac regenerative medicine. However, to avoid potential arrhythmias, it is critical that hESC-CMs used in replacement therapy be electrophysiologically compatible with the adult atrial, ventricular, and nodal phenotypes. One approach to ensuring compatibility is by investigating the electrophysiological signature, or action potential (AP), of a cardiomyocyte.

Project Goals: The goal of this project is to tackle this problem by using machine learning techniques in order to provide objective measures for classifying action potentials derived by human embryonic stem cells by their shape. Our previous work has shown that by using a shape preserving distance framework called metamorphosis and computational models of adult APs, one can, with high accuracy, identify the phenotype (atrial, ventricular, or nodal) of the embryonic cardiomyocyte. Further, the framework provides an interpolation between the embryonic and mature APs that may be representative of the maturation process. Our current goal is to optimize the current framework for use in larger populations, as well as use the framework to investigate the efficacy of current biochemical methods for the purification of specific phenotype CMs.

REU Goals: In this project, the intern will work with PhD students to develop novel mathematical models for representing embryonic and mature cardiac action potentials (APs) and methods for classifying APs from cardiac time-series data. Further, the maturation interpolants will be used to update current, state-of-the-art computational cardiomyocyte models by introducing cell maturation. The intern will implement code to demonstrate their performance on synthetic data as well as a large AP dataset.
The intern will read a number of research papers on cardiac signal acquisition, electrophysiology, cardiac cell models, and machine learning, and will gain an understanding of the problem, its applications, and the techniques involved in tackling it. Moreover, the intern will implement novel algorithms in MATLAB and C++ and become familiar with analyzing cardiac time-series data as well as evaluating the developed methods on acquired datasets. The intern will present their work to other graduate students and professors and will potentially be able to publish their research in biomedical engineering conferences and journals. As part of the group, the intern will experience first-hand a rigorous and rewarding multi-disciplinary research environment.
Experience in C++ and MATLAB coding and familiarity with cardiac electrophysiology, signal processing, machine learning, or differential equations (partial and ordinary) is a plus.

Project 11: Analysis of Diffusion Magnetic Resonance Images
Faculty Mentor: Dr. Rene Vidal

Project Description: Diffusion Magnetic Resonance Imaging (DMRI) is a medical imaging technique that is used to estimate the anatomical network of neuronal fibers in the brain, in vivo, by measuring and exploiting the constrained diffusion properties of water molecules in the presence of bundles of neurons. Water molecules diffuse more readily along fibrous bundles (think of fiber-optic cables) than across them. Therefore, by measuring the relative rates of water diffusion along different spatial directions, we can estimate the orientations of fibers in the brain. In particular, one important type of DMRI technique that we will analyze is high angular resolution diffusion imaging (HARDI), which measures water diffusion with an increased level of angular resolution in order to better estimate the probability of fiber orientation, known as the Orientation Distribution Function (ODF). HARDI is an advance over the clinically popular Diffusion Tensor Imaging (DTI), which requires fewer angular measurements because of a Gaussian assumption that restricts the number of fiber orientations that can be estimated in each voxel. More accurate estimates of ODFs at the voxel level using HARDI lead to more accurate reconstructions of fiber networks. For instance, the extraction of neuronal fibers from HARDI can help understand brain anatomical and functional connectivity in the corpus callosum, cingulum, thalamic radiations, optic nerves, etc. DMRI has been vital in the understanding of brain development and neurological diseases such as multiple sclerosis, amyotrophic lateral sclerosis, stroke, Alzheimer’s disease, schizophrenia, autism, and reading disability.

Project Goals: To make DMRI beneficial in both diagnosis and clinical applications, it is of fundamental importance to develop computational and mathematical algorithms for analyzing this complex DMRI data. In this research area, we aim to develop methods for processing and analyzing HARDI data with an ultimate goal of applying these computational tools for robust disease classification and characterization. Possible project areas include:
1. Sparse HARDI Reconstruction: To develop advanced algorithms for the sparse representation and reconstruction of HARDI signals, with the goals of speeding up HARDI acquisition and compact data compression.
2. ODF Estimation: To develop advanced algorithms for computing accurate fields of Orientation Distribution Functions (ODFs) from HARDI data.
3. Fiber Segmentation: To develop advanced algorithms for automatically segmenting HARDI volumes into multiple regions corresponding to different structures in the brain.
4. HARDI Registration: To develop advanced algorithms for the registration of HARDI brain volumes that preserve fiber orientation information.
5. HARDI Feature Extraction: To develop methods for extracting features from high-dimensional HARDI data that can be exploited for ODF clustering, fiber segmentation, HARDI registration, and disease classification.
6. Disease Classification: To develop advanced classification techniques using novel HARDI feature representations to robustly classify and characterize neurological disease.
REU Goals: In our lab, the intern will work with a PhD student to complete a project within one or more of the areas mentioned above. The intern will read a number of research papers on DMRI and the state of the art, and will gain an understanding of the problem, its applications, and the techniques involved in tackling it. There are two aspects of the research that may be of interest to the applicant. One is more theoretical and involves developing mathematical theory to improve existing frameworks. The second is more computational and involves image processing, analysis, and algorithm implementation in MATLAB or C++. An applicant with interest and experience in both areas is ideal, but it is also possible to work on only one of the two aspects.
At the end of the internship period the student will present their work to other graduate students and professors and will potentially be able to publish their research in medical imaging conferences and journals. As part of the Vision Lab, the intern will experience first-hand a rigorous and rewarding research environment with a highly collaborative and supportive social element.
Experience in MATLAB or C++ and familiarity with image analysis or processing are a plus. Mathematical maturity is also favorable.

Project 12: Modeling and Teaching the Language of Surgery
Faculty Mentor: Dr. Rene Vidal

Project Description: To learn the art of surgery, medical students practice on a number of phantoms, but rarely on an actual patient. Recent technological advances have enabled the use of surgical robots for performing certain surgical procedures. However, surgical training still relies on the subjective visual evaluation by an expert. In fact, surgical expertise is more often than not judged by the number of practice hours rather than the actual expertise level. But how can we define and quantify surgical expertise more precisely?

Project Goals: The goal of this project is to develop a more objective way to evaluate the skill of a medical student or surgeon. For that purpose, we are using motion and video data acquired by the da Vinci robot to build models of surgical tasks that can be used for training and evaluation of medical students. In the same way that speech is divided into sentences, words, and phonemes, one can divide a surgical task into subtasks, such as suturing, knot-tying, etc., and each subtask into basic surgical motions, such as grabbing a needle, inserting a needle, etc. Given motion and video data from multiple tasks performed by different surgeons with different levels of expertise, our goal is to discover the basic surgical motions (the words) and the typical transitions among such basic surgical motions (the grammar). To discover this language of surgery, we are using advanced techniques from dynamical systems, machine learning, and computer vision. Such techniques automatically segment the data into different surgical motions and determine the most likely sequence of surgical motions being executed. To determine the skill level, we are looking at the way in which basic surgical motions are executed (measured in terms of kinematic and dynamic features) and the way in which transitions between basic surgical motions occur (e.g., novices tend to transition a lot). We thus build models that describe the different motions and the different transitions for different skill levels, and use these models to evaluate a surgeon's skill.
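The segmentation-and-decoding step described above is often formulated with hidden Markov-style models. As a self-contained illustration (the gesture names, observation symbols, and probabilities below are invented for the example, not the project's trained models), Viterbi decoding recovers the most likely gesture sequence from a stream of observed motion symbols:

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Return the most likely hidden state sequence for the observations."""
    # V[t][s] = (probability of best path ending in s at time t, predecessor)
    V = [{s: (start_p[s] * emit_p[s][obs[0]], None) for s in states}]
    for o in obs[1:]:
        row = {}
        for s in states:
            best = max(states, key=lambda p: V[-1][p][0] * trans_p[p][s])
            row[s] = (V[-1][best][0] * trans_p[best][s] * emit_p[s][o], best)
        V.append(row)
    # backtrack from the best final state
    last = max(states, key=lambda s: V[-1][s][0])
    path = [last]
    for row in reversed(V[1:]):
        path.append(row[path[-1]][1])
    return path[::-1]

# hypothetical surgical "words" and discretized motion observations
states = ("grab_needle", "insert_needle")
obs = ("reach", "reach", "pierce", "pierce")
start_p = {"grab_needle": 0.9, "insert_needle": 0.1}
trans_p = {"grab_needle": {"grab_needle": 0.6, "insert_needle": 0.4},
           "insert_needle": {"grab_needle": 0.1, "insert_needle": 0.9}}
emit_p = {"grab_needle": {"reach": 0.8, "pierce": 0.2},
          "insert_needle": {"reach": 0.2, "pierce": 0.8}}
path = viterbi(obs, states, start_p, trans_p, emit_p)
```

Here the decoder segments the observation stream into a "grab" phase followed by an "insert" phase, which is the flavor of automatic segmentation the project description refers to.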

REU Goals: In this internship, you will develop a set of mathematical tools for modeling skilled human activities (e.g. surgical tasks). The mathematical models investigated will include: language-based models, computer vision classification tools (e.g. bag of features), hybrid dynamical systems, and sparse-representation based approaches. These tools will provide new insights into the relationship between skill, style, and content in human motion. In addition, you will develop software using these tools for teaching, training and assistance of humans performing these activities. The data of a person who is trying to learn the activity will be compared with the model derived by observing experts, and both physical guidance and information display tools will be developed to provide feedback to the learner based on the expert model.
Experience in MATLAB and C++ coding and familiarity with computer vision, statistical language models or dynamical systems is a plus.

Project 13: Object Recognition
Faculty Mentor: Dr. Rene Vidal

Project Description: When a person is shown an image, he/she is able to immediately identify a variety of things: the various objects present in the image, their locations, their spatial extent, their categories, and the underlying 3D scene of which it is an image. The human visual system uses a combination of prior knowledge about the world and the information present in the image to perform this complicated task. We want to replicate this on a computer. This is broadly called object recognition, and it involves object detection (is there an object in this image? where is it located?), segmentation (which pixels contain the object?), categorization (what is the object’s class?), and pose estimation (what is the 3D location of the object in the scene?). We also want to perform all these tasks jointly, rather than in a pipeline approach, as knowledge of one task helps us perform the others better.

Project Goals: The project aims to develop object representations (models that capture prior knowledge about what the object looks like under varying viewing conditions) and techniques to perform the tasks of object detection, categorization, image segmentation, and pose estimation in a fast and efficient manner. We are developing a graph-theoretic approach in which different levels of abstraction, such as pixels, superpixels, object parts, object categories, their 3D pose, relative configuration in the scene, etc., are represented as nodes in a graph. Contextual relationships among different nodes are captured by an energy defined on this graph. In this energy, bottom-up costs capture local appearance and motion information among neighboring nodes. Each of the tasks corresponds to terms in the energy function (the top-down costs), which is then solved in a joint manner. We are also developing algorithms based on branch and bound (pose estimation task) and graph cuts (image segmentation task) for minimizing the energy, and methods based on structured output learning (structural SVMs) to learn its parameters.

REU Goals: As part of the project, the intern will help enhance our current framework for object recognition by improving the model to capture more subcategories, developing models for more object categories, and designing algorithms to utilize these models for various vision tasks. The intern will be exposed to current research in the area of Object Recognition and Scene Understanding. He/she will read a wide range of literature on topics such as image representation, clustering, classification, and graphical models. The intern will implement algorithms in Matlab/C++ and test them across various datasets. The intern will present their work to other graduate students and professors and will potentially be able to publish their research in computer vision conferences and journals. This project will help the intern gain a good understanding of the challenges and research opportunities in the area of Object Recognition and Scene Understanding. Experience in C++ and MATLAB coding and familiarity with image processing, computer vision, or statistical inference are a plus.

Project 14: Subspace Clustering
Faculty Mentor: Dr. Rene Vidal

Description: Consider the task of separating different moving objects in a video (e.g., running cars and walking people in a video clip of a street). While humans can easily solve this task, a computer is totally clueless, since all that it sees is a big chunk of ordered 0’s and 1’s. Fortunately for the computer, this problem has a specific property that allows an alternative, more computer-friendly approach: for all the points on the same moving object, the vectors built from their trajectories lie in a common subspace. Thus the problem boils down to the mathematical problem of separating different subspaces in the ambient space.

Project Goals: Given a set of data points drawn from multiple subspaces with unknown membership, we want to simultaneously cluster the data into the appropriate subspaces and find a low-dimensional subspace fitting each group of points. This problem is known as subspace clustering; besides the motion segmentation example above, it has applications in image segmentation, face clustering, hybrid system identification, etc. The Vision Lab has worked extensively on this topic and has developed geometric approaches, such as Generalized Principal Component Analysis, and spectral clustering approaches, such as Sparse Subspace Clustering. The performance of these algorithms on a motion segmentation benchmark and a face recognition database has been studied. The goal of the project is to further improve the algorithms for subspace clustering and to study their performance on tasks that have a multi-subspace structure.
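To make the subspace idea concrete, the toy sketch below clusters 2D points drawn from two lines through the origin (1-D subspaces) by connecting points whose directions nearly coincide and taking connected components. This is a deliberately simplified stand-in for methods like Sparse Subspace Clustering, not an implementation of them:

```python
import math

def subspace_cluster(points, sim_threshold=0.99):
    """Toy clustering for 1-D subspaces (lines through the origin):
    link points whose absolute cosine similarity exceeds the threshold,
    then return connected-component labels."""
    def cos_sim(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        nu = math.sqrt(sum(a * a for a in u))
        nv = math.sqrt(sum(b * b for b in v))
        return abs(dot) / (nu * nv)   # abs: opposite directions, same line

    n = len(points)
    labels = [-1] * n
    cluster = 0
    for i in range(n):
        if labels[i] != -1:
            continue
        stack = [i]                   # flood-fill one connected component
        labels[i] = cluster
        while stack:
            j = stack.pop()
            for k in range(n):
                if labels[k] == -1 and cos_sim(points[j], points[k]) > sim_threshold:
                    labels[k] = cluster
                    stack.append(k)
        cluster += 1
    return labels

# points on two lines: y = x and y = -2x
pts = [(1, 1), (2, 2), (-3, -3), (1, -2), (-2, 4), (0.5, -1)]
labels = subspace_cluster(pts)
```

Real subspace clustering methods handle higher-dimensional subspaces, noise, and unknown subspace dimensions, which is exactly where the research challenges described above lie.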

Internship Goals: As part of the project, the intern will work alongside PhD students and develop novel algorithms for subspace clustering. The intern will implement code for these algorithms as well as test them on several databases. The intern will learn the necessary background in machine learning, computer vision, and compressed sensing, and will read research papers on subspace clustering. Moreover, the intern will implement novel algorithms in MATLAB and apply them to different datasets. The intern will present their work to other graduate students and professors and will potentially be able to publish their research in computer vision conferences and journals. As part of the group, the intern will experience first-hand a rigorous and rewarding research environment.
A strong background in linear algebra and experience in MATLAB coding are a plus.

Project 15: Cerebellum Parcellation and Data Analysis of the Baltimore Longitudinal Study of Aging
Mentors: Dr. Jerry Prince, Amod Jog

Project Description: The aim of this project is to run an existing cerebellum parcellation algorithm on a cohort of healthy aging controls. The project will establish normative estimates of volumes for various regions of the cerebellum during the aging process. The project is an opportunity to learn about the anatomy of a core portion of the human brain while being engaged in a cutting-edge research project.
Preferred Skills: Matlab, Java, and basic image processing

Project 16: Development of Features for Segmentation & Registration of OCT Data
Mentors: Dr. Jerry Prince, Bhavna Antony

Project Description: New features that are scanner independent could improve the segmentation and registration of retinal optical coherence tomography (OCT) data. The REU student will develop these features and work on their incorporation into existing segmentation software.
Preferred Skills: Basic image processing and Matlab

Project 17: Trends of Thalamic Nuclei in the Baltimore Longitudinal Study of Aging
Mentors: Dr. Jerry Prince, Jeffrey Glaister

Project Description: The thalamus is made up of a system of myelinated nerve fibers that separate it into several subparts. This project will apply an existing parcellation algorithm to a large cohort of aging subjects with multiple time points, with subsequent data analysis to explore possible trends in the population.
Preferred Skills: Linux, Matlab, and basic statistics

Project 18: Image-Based Biomechanics
Mentors: Dr. Jerry Prince, A. David Gomez

Project Description: The goal is to implement, verify, and validate a technique for integrating image-based motion estimation and solid modeling of soft tissues. The experience will include an introduction to basic concepts in mechanical modeling using MRI information, and comparison of simulated results to experimental dynamic data.
Preferred Skills: Matlab, C++, and basic image processing or mechanics

Project 19: Software Environment and Virtual Fixtures for Medical Robotics
Mentors: Dr. Russell Taylor and Dr. Peter Kazanzides

Project Description: Our laboratory has an active ongoing research program to develop open source software for medical robotics research. This “Surgical Assistant Workstation (SAW)” environment includes modules for real time computer vision; video overlay graphics and visualization; software interfaces for “smart” surgical tools; software interfaces for imaging devices such as ultrasound systems, x-ray imaging devices, and video cameras; and interfaces for talking to multiple medical robots, including the Intuitive Surgical DaVinci robot, our microsurgical Steady Hand robots, the IBM/JHU LARS robot, the JHU/Columbia “Snake” robot, etc. A brief overview for this project may be found at https://www.cisst.org/saw/Main_Page. Students will contribute to this project by developing “use case” software modules and applications. Typical examples might include: using a voice control interface to enhance human-machine cooperation with the DaVinci robot; developing enhanced interfaces between open source surgical navigation software and the DaVinci or other surgical robots; or developing telesurgical demonstration projects with our research robots. However, the specific project will be defined in consultation with the student and our engineering team.
Preferred Skills: The student should have strong programming skills in C++. Some experience in computer graphics may also be desirable.

Project 20: Instrumentation and Steady-hand Control for New Robot for Head-and-Neck Surgery
Mentors: Prof. Russell Taylor, Dr. Jeremy Richmon (Otolaryngology), Dr. Masaru Ishii (Otolaryngology), Dr. Lee Akst (Otolaryngology), Dr. Matthew Stewart (Otolaryngology)

Project Description: Our laboratory is developing a new robot for head-and-neck surgery. Although the system may be used for “open” microsurgery, it is specifically designed for clinical applications in which long thin instruments are inserted into narrow cavities. Examples include endoscopic sinus surgery, transsphenoidal neurosurgery, laryngeal surgery, otologic surgery, and open microsurgery. Although it can be teleoperated, our expectation is that we will use “steady hand” control, in which both the surgeon and the robot hold the surgical instrument. The robot senses forces exerted by the surgeon on the tool and moves to comply. Since the motion is actually made by the robot, there is no hand tremor, the motion is very precise, and “virtual fixtures” may be implemented to enhance safety or otherwise improve the task. Possible projects include:
• Development of “phantoms” (anatomic models) for evaluation of the robot in realistic surgical applications.
• User studies comparing surgeon performance with/without robotic assistance on suitable artificial phantoms.
• Optimization of steady-hand control and development of virtual fixtures for a specific surgical application.
• Design of instrument adapters for the robot.

Preferred Skills: The student should have a background in biomedical instrumentation and an interest in developing clinically usable instruments and devices for surgery. Specific skills will depend on the project chosen. Experience in at least one of robotics, mechanical engineering, and C/C++ programming is important. Similarly, experience in statistical methods for reducing experimental data would be desirable.

Project 21: Statistical Modeling of 3D Anatomy
Mentor: Dr. Russell Taylor

Project Description: The goal of this project is creation of 3D statistical models of human anatomic variability from multiple CT and MRI scans. The project will involve processing multiple images from the Johns Hopkins Hospital, co-registering them, and performing statistical analyses. The resulting statistical models will be used in ongoing research on image segmentation and interventional imaging. We anticipate that the results will lead to joint publications involving the REU student as a co-author.
Preferred skills: Experience in computer vision, medical imaging, and/or statistical methods is highly desirable.
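A common approach to the kind of statistical model described above is a point-distribution model: co-registered shapes are flattened into vectors and PCA extracts the main modes of anatomic variability. The sketch below illustrates this on synthetic landmark data; all shapes, sizes, and numbers are illustrative, not data from the project.

```python
import numpy as np

# Synthetic stand-in for co-registered anatomy: each row is one subject's
# shape, flattened from N landmark points in 3D (here N = 4 for brevity).
rng = np.random.default_rng(0)
mean_shape = np.array([0.0, 0.0, 0.0,  1.0, 0.0, 0.0,
                       0.0, 1.0, 0.0,  0.0, 0.0, 1.0])
n_subjects = 50
# One dominant mode of variation plus small measurement noise.
mode = rng.normal(size=12)
mode /= np.linalg.norm(mode)
weights = rng.normal(scale=2.0, size=(n_subjects, 1))
shapes = mean_shape + weights * mode + rng.normal(scale=0.01, size=(n_subjects, 12))

# Point-distribution model: PCA on the centered shape vectors via SVD.
centered = shapes - shapes.mean(axis=0)
_, s, vt = np.linalg.svd(centered, full_matrices=False)
explained = s**2 / np.sum(s**2)

# The first principal component should capture almost all the variability.
print(f"variance explained by mode 1: {explained[0]:.3f}")
```

In a real shape model the rows of `vt` weighted by plausible coefficients generate new, anatomically plausible shapes for segmentation priors.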

Project 22: Accuracy Compensation for “Steady Hand” Cooperatively Controlled Robots
Mentor: Dr. Russell Taylor

Project Description: Many of our surgical robots are cooperatively controlled. In this form of robot control, both the robot and a human user (e.g., a surgeon) hold the tool. A force sensor in the robot’s tool holder measures the forces exerted by the human on the tool, and the robot moves to comply. Because the robot is doing the moving, there is no hand tremor, and the robot’s motion may be otherwise constrained by virtual fixtures to enforce safety barriers or otherwise provide guidance. However, any robot mechanism has some small amount of compliance, which can affect accuracy depending on how much force the human exerts on the tool. In this project, the student will use existing instrumentation in our lab to measure the displacement of a robot-held tool as various forces are exerted on it, and will develop mathematical models of the compliance. The student will then use these models to compensate for the compliance, helping the human place the tool accurately on predefined targets. We anticipate that the results will lead to joint publications involving the REU student as a co-author.
Preferred skills: The student should be familiar with basic laboratory skills, have a solid mathematical background, and should be familiar with computer programming. Familiarity with C++ would be a definite plus, but much of the programming work can likely be done in MATLAB or Python.
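One simple version of the calibrate-then-compensate idea assumes a linear compliance model, deflection = C · force, fit by least squares from force/displacement measurements. The sketch below uses made-up numbers; the linear model and all values are illustrative assumptions, not the lab's actual procedure.

```python
import numpy as np

# Hypothetical calibration data: forces applied at the tool (N) and the
# resulting tool-tip deflections (mm), assuming a linear model d = C f.
rng = np.random.default_rng(1)
C_true = np.array([[0.020, 0.002, 0.000],
                   [0.001, 0.030, 0.003],
                   [0.000, 0.002, 0.015]])  # mm per N (illustrative)
forces = rng.uniform(-5.0, 5.0, size=(40, 3))
deflections = forces @ C_true.T + rng.normal(scale=1e-4, size=(40, 3))

# Least-squares fit of the compliance matrix from the measurements.
X, *_ = np.linalg.lstsq(forces, deflections, rcond=None)
C_fit = X.T

# Compensation: offset the commanded position by the predicted deflection,
# aiming "upstream" of the sag so the tool lands on the target.
f = np.array([2.0, -1.0, 0.5])           # current sensed hand force
target = np.zeros(3)
compensated_command = target - C_fit @ f

print(np.round(C_fit, 3))
```

With the compensation applied, the residual placement error is governed by the model-fit error rather than the full mechanical deflection.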

Project 23: Voice Interfaces for Surgical Robot Systems

Mentor: Dr. Russell Taylor

Project Description: The goal of this project is the development of suitable voice interfaces for surgical robots such as the da Vinci surgical system. Surgical robots operate in an information-rich environment and often have several different operating behaviors. Although the actual motion of the robot is best controlled by conventional teleoperation or hands-on cooperative control, selecting the appropriate behavior, controlling the information displays shown to the surgeon, or other information-based interactions may require the surgeon to communicate without affecting what his or her hands are doing. Voice is a natural communication modality for this sort of interaction. In this project, the student will adapt a commercial voice recognition system to provide simple voice interactions with a surgical robot and will demonstrate this capability on one of our surgical robot systems.
Preferred Skills: The student should be a very strong C++ programmer. Familiarity with voice recognition or robotics would be a plus.

Project 24: Reconstruction of Light Position from Shape of Illumination

Mentors: Dr. Russell Taylor and Balazs Vagvolgyi

Project Description: In vitreoretinal surgery the surgeon uses a handheld instrument equipped with a fiber-optic light pipe to illuminate the retina from inside the eye. In microscope images, only the tip of the light pipe may be visible, not the shaft, so the location of the light pipe cannot be reliably determined by looking at the instrument itself. In this project we aim to determine the light pipe’s position and orientation from the shape of the illumination pattern it casts on the surface of the retina.
Preferred Skills: Image processing, programming (Matlab or C++)
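One way to see how the illumination shape encodes pose: a roughly circular beam striking a surface obliquely produces an elliptical spot, and the ellipse's axis ratio and orientation constrain the incidence angle. The sketch below demonstrates the moment-based part of that idea on a synthetic image; the geometry, image, and the cos(theta) ≈ minor/major relation are illustrative assumptions, not the project's actual method.

```python
import numpy as np

# Synthetic illumination pattern: an elliptical bright spot on a dark image.
h, w = 200, 200
yy, xx = np.mgrid[0:h, 0:w]
cx, cy, a, b, phi = 120.0, 90.0, 40.0, 20.0, np.deg2rad(30)
xr = (xx - cx) * np.cos(phi) + (yy - cy) * np.sin(phi)
yr = -(xx - cx) * np.sin(phi) + (yy - cy) * np.cos(phi)
mask = (xr / a) ** 2 + (yr / b) ** 2 <= 1.0

# Image moments of the thresholded spot give centroid and ellipse axes.
ys, xs = np.nonzero(mask)
mx, my = xs.mean(), ys.mean()
cov = np.cov(np.stack([xs - mx, ys - my]))
evals, evecs = np.linalg.eigh(cov)          # ascending eigenvalues
minor, major = np.sqrt(evals)               # proportional to semi-axes
orientation = np.arctan2(evecs[1, 1], evecs[0, 1])  # major-axis direction

# For a circular beam hitting a plane obliquely, the axis ratio roughly
# encodes the incidence angle: cos(theta) ~ minor/major.
ratio = minor / major
theta = np.degrees(np.arccos(ratio))
print(f"axis ratio {ratio:.2f}, inferred tilt ~ {theta:.0f} deg")
```

A real implementation would also have to handle the retina's curvature, vignetting, and the radiance falloff across the spot, which carry additional pose information.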

Project 25: Mosquito Dissection
Mentor: Dr. Russell Taylor

Project Description: We have a small collaboration project with a company that is developing a malaria vaccine. Part of the vaccine production process involves removing the salivary glands from mosquitoes. Our goal in the collaboration is to improve the technology for this process, in order to make it more efficient. Note: We use only non-infected mosquitoes for this research.
Preferred Skills: Mechanical design and fabrication

Project 26: Development of a New Remotely Operated Underwater Vehicle: Hardware Development
Mentor: Dr. Louis Whitcomb

Project Description: In this REU project, the student will work with other undergraduate and graduate students to develop a new remotely operated underwater robotic vehicle (ROV) that will be used for research in navigation, dynamics, and control of underwater vehicles. Our goal is to develop a neutrally buoyant tethered vehicle capable of agile six degree-of-freedom motion with six marine thrusters, and to develop new navigation and control system software using the open-source Robot Operating System (ROS).
Preferred Skills: Experience in CAD and modern manufacturing methods; please submit work examples with your application. C++ programming, Linux, and ROS programming experience desired but not required.

Project 27: Development of a New Remotely Operated Underwater Vehicle: Software Development
Mentor: Dr. Louis Whitcomb

Project Description: In this REU project, the student will work with other undergraduate and graduate students to develop a new remotely operated underwater robotic vehicle (ROV) that will be used for research in navigation, dynamics, and control of underwater vehicles. Our goal is to develop a neutrally buoyant tethered vehicle capable of agile six degree-of-freedom motion with six marine thrusters, and to develop new navigation and control system software using the open-source Robot Operating System (ROS).
Preferred Skills: Intermediate C++ programming, knowledge of Linux and version control systems such as git, ROS programming experience. Please submit ROS code examples with application.
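A core piece of the control software for a vehicle like this is thruster allocation: mapping a desired body-frame wrench (forces and torques) to individual thruster commands. The sketch below shows the standard pseudoinverse approach; the thruster geometry is entirely hypothetical and chosen only so the allocation matrix has full rank, not the actual vehicle layout.

```python
import numpy as np

# Column i of A maps unit thrust from thruster i to the body-frame wrench
# [Fx, Fy, Fz, Mx, My, Mz]. Positions/directions below are illustrative.
def thruster_column(direction, position):
    d = np.asarray(direction, float)
    r = np.asarray(position, float)
    return np.concatenate([d, np.cross(r, d)])  # force, then moment r x d

A = np.column_stack([
    thruster_column([1, 0, 0], [0.0,  0.3, 0.0]),   # fore/aft, starboard
    thruster_column([1, 0, 0], [0.0, -0.3, 0.2]),   # fore/aft, port (raised)
    thruster_column([0, 1, 0], [0.4,  0.0, 0.2]),   # lateral, bow (raised)
    thruster_column([0, 1, 0], [-0.4, 0.0, 0.0]),   # lateral, stern
    thruster_column([0, 0, 1], [0.3,  0.2, 0.0]),   # vertical, fore
    thruster_column([0, 0, 1], [-0.3, -0.2, 0.0]),  # vertical, aft
])

# Allocation: solve A u = tau for the per-thruster thrusts u.
tau = np.array([10.0, 0.0, 5.0, 0.0, 0.0, 1.0])  # desired body wrench
u = np.linalg.pinv(A) @ tau
residual = A @ u - tau                            # ~0 if A has full rank
```

Real allocators also enforce per-thruster saturation limits and thruster dynamics, but the pseudoinverse is the usual starting point.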

Project 28: Computer Vision for Surgical Data Science
Mentors: Dr. Gregory Hager and Dr. Swaroop Vedula

Project Description: We are working with surgical datasets captured on a simulator and in the operating room. The data include video images, tool motion kinematics, and metadata. Our overall goal is to apply machine learning techniques to these data and extract information that is useful for surgical skill evaluation and feedback. In this REU project, we will solve two problems related to that goal. The first is synchronizing the different data streams captured on the surgical simulator. The second is developing analytics on surgical video data that yield information useful for surgical education.
Role of the Undergraduate Student: The student will work closely with a graduate student and faculty. The first problem involves implementing a computer vision pipeline that extracts cues for certain events from the video data, matches those cues with logs from a tool motion data stream, and synchronizes the video and motion streams. The second problem involves analyzing surgical video images to extract information useful for automated skill assessment and feedback.
Preferred Background Skills: Linear algebra (well versed), computer vision (intermediate), robotics (basic). Coding and data analysis skills are required: C++ (intermediate) or Python (intermediate), OpenCV (intermediate).
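Once cues have been extracted from both streams, a standard way to estimate the offset between them is normalized cross-correlation of the cue signals. The toy sketch below assumes two streams at the same sample rate with one pure delay; the signals, names, and numbers are all illustrative stand-ins, not the project's data.

```python
import numpy as np

# Toy stand-ins for two streams that share underlying events but are
# offset in time (e.g., an activity signal derived from video frames and
# one derived from tool-motion logs).
rng = np.random.default_rng(2)
n = 500
activity = np.convolve(rng.normal(size=n), np.ones(10) / 10, mode="same")
true_lag = 37  # motion stream starts 37 samples after the video stream
motion = np.roll(activity, true_lag) + rng.normal(scale=0.05, size=n)

# Normalized cross-correlation: the argmax gives the inter-stream offset.
a = (activity - activity.mean()) / activity.std()
b = (motion - motion.mean()) / motion.std()
xcorr = np.correlate(b, a, mode="full")
lag = np.argmax(xcorr) - (n - 1)

print(f"estimated lag: {lag} samples")
```

In practice the streams may also differ in sample rate and drift over time, so a real pipeline would resample and possibly fit a time-warping model rather than a single constant lag.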

Laboratory for Computational Sensing + Robotics