REU Projects 2017

Please review this project list and pick your top 3 projects. Indicate these 3 choices in your REU application.

1. Active Sensing in Biological Systems
Mentor: Prof. Noah Cowan
Project Description: Active sensing is the expenditure of energy—often in the form of movement—for the purpose of sensing. For example, when you sense the texture of an object, versus estimating its weight, you perform different hand motions. These specialized “active” hand movements match the encoding properties of the sensory receptors appropriate for the task. Our goal in this project is to analyze the active sensing behavior of weakly electric fish. These fish produce and detect an electric field (something analogous to “electrical sonar”). They have sensory receptors all over their skin, and actively swim back and forth to enhance the sensory input to these receptors (much like you move your hand up and down to sense the weight of an object). Our goal is to develop quantitative mathematical models of active sensing so that we can translate its principles, in a rigorous way, into algorithms that could be implemented by a robotic system.
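To give a concrete sense of the modeling involved, the sketch below (Python with synthetic signals; the frequencies, gains, and sampling rate are invented, and the lab's actual pipeline may differ) estimates the gain and phase of a tracking response at a single stimulus frequency, a standard first step in system identification of tracking behavior:

    import numpy as np

    # Minimal sketch: empirical frequency response of tracking behavior.
    # 'stimulus' and 'response' stand in for a moving-stimulus position and
    # the fish's swimming trajectory; here they are synthetic.
    fs = 100.0                      # sampling rate (Hz), assumed
    t = np.arange(0, 60, 1 / fs)    # one 60 s trial
    stimulus = np.sin(2 * np.pi * 0.5 * t)               # 0.5 Hz stimulus motion
    response = 0.8 * np.sin(2 * np.pi * 0.5 * t - 0.4)   # attenuated, delayed tracking
    response += 0.05 * np.random.randn(t.size)           # measurement noise

    # Gain and phase at the drive frequency, from the Fourier transforms.
    f = np.fft.rfftfreq(t.size, 1 / fs)
    k = np.argmin(np.abs(f - 0.5))           # frequency bin closest to 0.5 Hz
    H = np.fft.rfft(response)[k] / np.fft.rfft(stimulus)[k]
    print(f"gain = {abs(H):.2f}, phase = {np.angle(H):.2f} rad")

Repeating this across stimulus frequencies yields an empirical frequency response (Bode plot) of the behavior, which can then be compared against candidate control models.
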
Role of the student: The REU student will be involved in biological experimentation using our custom real-time, closed-loop experimental system. In addition, he or she will perform data analysis, system programming, and, ideally, mathematical modeling of active sensing behavior.

Background Skills: Undergraduate training in linear algebra and differential equations. Some knowledge of dynamical systems and control theory is highly desirable, as is knowledge of Matlab or another programming language (C, C++, Python). No specific experimental background in biological systems is required, but the student must be comfortable getting into the lab (with mentorship) and performing non-invasive behavioral experiments on live animals (specifically, fish).

2. Modeling Human Attention in Oculus Rift
Mentor: Dr. Ralph Etienne-Cummings
Project Description: What draws your attention when you enter a room? Look at a piece of art? Survey nature? We can model these effects in software. Now we need to provide users with highlighted 3D images of areas of interest that they may have missed, and we need to know where they are looking. Hence, we need a virtual reality system that allows us to present 3D videos to a user and track their eyes and head, so we can update the areas of most interest based on gaze. This will also allow us to monitor eye movements while the user visually surveys an area.

Role of the Undergraduate Student: The REU student will work with graduate students to convert our algorithms for real-time operation and overlay the output onto video that is piped to the Oculus Rift goggles.

Preferred Background Skills: Programming, some FPGA, some hardware.

3. Cerebellum Parcellation and Data Analysis of the Baltimore Longitudinal Study of Aging
Mentors: Dr. Jerry Prince, Shuo Han
Project Description: The aim of this project is to run an existing cerebellum parcellation algorithm on a cohort of healthy aging controls. The project will establish normative estimates of the volumes of various regions of the cerebellum during the aging process. It is an opportunity to learn about the anatomy of a core portion of the human brain while being engaged in a cutting-edge research project.

Role of the REU Student: The REU student will use statistical methods to first determine whether there are algorithm failures in the processed data. The student will then look for relationships in the data that may relate to cognitive decline.
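As one simple illustration of such a check (a hedged sketch on synthetic volumes, not the lab's actual protocol), gross parcellation failures can be flagged as robust outliers in regional volume:

    import numpy as np

    # Sketch: flag possible parcellation failures as robust outliers in
    # regional volume. 'volumes' stands in for one cerebellar region's
    # volume across the cohort; values here are synthetic.
    rng = np.random.default_rng(0)
    volumes = rng.normal(10.0, 1.0, 200)   # plausible volumes (cm^3), made up
    volumes[:3] = [2.0, 25.0, 0.5]         # planted gross failures

    med = np.median(volumes)
    mad = np.median(np.abs(volumes - med))
    robust_z = 0.6745 * (volumes - med) / mad   # ~N(0,1) under normality
    suspect = np.where(np.abs(robust_z) > 3.5)[0]
    print("cases to review:", suspect)

The median/MAD version is preferred over an ordinary z-score here because the failures themselves would otherwise inflate the mean and standard deviation.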

Preferred Skills: Matlab, Java, and basic image processing

4. Development of Features for Segmentation & Registration of OCT Data
Mentors: Dr. Jerry Prince, Yufan He
Project Description: Retinal optical coherence tomography (OCT) is proving to be an important tool in the diagnosis and management of neurological diseases. Currently, algorithms that are used to derive quantitative measures from OCT scans are very much dependent on the scanner that is used. New features that are scanner independent could improve the segmentation and registration of these data.

Role of the REU Student: The REU student will investigate new features that are intended to be scanner independent and work on their incorporation in existing segmentation software.

Preferred Skills: Basic image processing and Matlab

5. Trends of Thalamic Nuclei in the Baltimore Longitudinal Study of Aging
Mentors: Dr. Jerry Prince, Jeffrey Glaister

Project Description: The thalamus is traversed by a system of myelinated nerve fibers that separate it into several subparts (nuclei). These nuclei can be visualized in magnetic resonance images, and their sizes and shapes can be compared.

Role of the REU Student: The REU student will apply an existing thalamus parcellation algorithm on a large cohort of aging subjects with multiple time-points. The student will use statistical analyses to determine whether the algorithm failed to work correctly and will analyze trends on the correctly-segmented data.

Preferred Skills: Linux, Matlab, and basic statistics

6. Image-Based Biomechanics
Mentors: Dr. Jerry Prince, Dr. A. David Gomez
Project Description: The project involves imaging the tongue during both normal and abnormal speech. Special magnetic resonance images (MRI) are acquired and analyzed to determine both the shape and the motion of the tongue. The goal is to implement, verify, and validate a technique for integrating image-based motion estimation with solid modeling of soft tissues.

Role of the REU Student: The REU student will learn the basic concepts of mechanical modeling and then use MRI image information to compare simulated results to experimental dynamic data.

Preferred Skills: Matlab, C++, and basic image processing or mechanics

7. Development of a New Remotely Operated Underwater Vehicle: Mechanical Development
Mentor: Dr. Louis Whitcomb
Description: This REU student project is to work with other undergraduate and graduate students to develop a new remotely operated underwater robotic vehicle (ROV) that will be used for research in navigation, dynamics, and control of underwater vehicles. Our goal is to develop a neutrally buoyant tethered vehicle capable of agile six degree-of-freedom motion with six marine thrusters, and to develop new navigation and control system software using the open-source Robot Operating System (ROS).

Required Skills: Experience in CAD (SolidWorks preferred) and modern manufacturing methods – please submit work examples with your application. C++ programming, Linux, and ROS programming experience desired but not required.

8. Development of a New Remotely Operated Underwater Vehicle: Electrical Development
Mentor: Dr. Louis Whitcomb
Project Description: This REU student project is to work with other undergraduate and graduate students to develop a new remotely operated underwater robotic vehicle (ROV) that will be used for research in navigation, dynamics, and control of underwater vehicles. Our goal is to develop a neutrally buoyant tethered vehicle capable of agile six degree-of-freedom motion with six marine thrusters, and to develop new navigation and control system software using the open-source Robot Operating System (ROS).

Required Skills: Experience in analog and digital circuit design, electronic CAD for circuit design, simulation, and PCB design (KiCad preferred), and modern circuit board manufacturing methods – please submit work examples with your application. C++ programming, Linux, and ROS programming experience desired but not required.

9. Development of a New Remotely Operated Underwater Vehicle: Software Development
Mentor: Dr. Louis Whitcomb
Description: This REU student project is to work with other undergraduate and graduate students to develop a new remotely operated underwater robotic vehicle (ROV) that will be used for research in navigation, dynamics, and control of underwater vehicles. Our goal is to develop a neutrally buoyant tethered vehicle capable of agile six degree-of-freedom motion with six marine thrusters, and to develop new navigation and control system software using the open-source Robot Operating System (ROS).
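To give a feel for the ROS side of the project, below is a minimal Python node (a sketch only: the topic name, message choice, and control scheme are illustrative assumptions, and running it requires a ROS installation) that publishes a six-DOF wrench setpoint such as a thruster allocator might consume:

    #!/usr/bin/env python
    # Minimal ROS sketch: publish a 6-DOF wrench setpoint at 10 Hz.
    # The 'cmd_wrench' topic name and the control scheme are assumptions,
    # not the project's actual interface.
    import rospy
    from geometry_msgs.msg import Wrench

    def main():
        rospy.init_node('wrench_setpoint')
        pub = rospy.Publisher('cmd_wrench', Wrench, queue_size=1)
        rate = rospy.Rate(10)
        while not rospy.is_shutdown():
            w = Wrench()
            w.force.z = 5.0    # gentle heave command (N), made up
            pub.publish(w)
            rate.sleep()

    if __name__ == '__main__':
        main()
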

Required Skills: Intermediate C++ programming, knowledge of Linux and version control systems such as git, ROS programming experience. Please submit ROS code examples with application.

10. Software Framework for Research in Semi-Autonomous Teleoperation
Mentors: Dr. Peter Kazanzides and Dr. Russell Taylor
Project Description: We have developed an open source hardware and software framework to turn retired da Vinci surgical robots into research platforms (da Vinci Research Kit, dVRK) and have disseminated this to more than 25 institutions around the world. The goal of this project is to contribute to the advancement of this research infrastructure. The specific task will take into account the student’s background and interests, but is expected to be one of the following: (1) 3D user interface software framework, (2) constrained optimization control framework, (3) Simulink Real-Time interface to the robot controller, (4) integration of alternative input devices and/or robots, or (5) development of dynamic models and simulators.

Preferred Background Skills: Student should have experience with at least one of the following programming environments: C/C++, Python, ROS, Matlab/Simulink.

11. Telerobotic System for Satellite Servicing
Mentors: Dr. Peter Kazanzides, Dr. Louis Whitcomb and Dr. Simon Leonard
Project Description: With some satellites entering their waning years, the space industry is facing the challenge of either replacing these expensive assets or developing the technology to repair, refuel, and service the existing fleet. Our goal is to perform robotic on-orbit servicing under ground-based supervisory control of human operators, performing tasks in the presence of uncertainty and time delays of several seconds. We have developed an information-rich telemanipulation environment, based on the master console of a customized da Vinci surgical robot, together with a set of tools specifically designed for in-orbit manipulation and servicing of space hardware. We have successfully demonstrated telerobotic removal of the insulating blanket flap that covers the spacecraft’s fuel access port, under software-imposed time delays of several seconds. We now wish to extend the models and tools to other telerobotic servicing operations.

Preferred Background Skills: Ability to develop mathematical models of satellite servicing tasks, implement models in C/C++, and perform experiments to determine model parameters.

12. Photoacoustic Guidance of Robotic Gynecological Surgeries with the da Vinci Robot
Faculty Mentors: Dr. Muyinatu A. Lediju Bell, Dr. Peter Kazanzides
Project Description: Photoacoustic imaging is an emerging technique that uses pulsed lasers to excite selected tissue and create an acoustic wave that is detected by ultrasound technology. This project explores the use of photoacoustic imaging to detect blood vessels behind tissues during minimally invasive gynecological surgeries, such as hysterectomies, endometriosis resection, and surgeries to remove uterine fibroids (myomectomies). We are developing a test platform based on a research da Vinci Surgical Robot. The project goal is to perform phantom experiments to quantify the accuracy of the photoacoustic measurements.
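One standard way to quantify such measurements (shown here as a minimal sketch on synthetic data, not the lab's protocol) is the contrast-to-noise ratio between a target region, such as a vessel, and the background:

    import numpy as np

    # Sketch: contrast-to-noise ratio (CNR) between a target (e.g., vessel)
    # ROI and a background ROI in a photoacoustic image. Data are synthetic.
    rng = np.random.default_rng(1)
    img = rng.normal(1.0, 0.2, (128, 128))      # background speckle, made up
    img[60:68, 60:68] += 2.0                    # bright vessel-like target

    target = img[60:68, 60:68].ravel()
    background = img[10:40, 10:40].ravel()
    cnr = abs(target.mean() - background.mean()) / np.sqrt(
        target.var() + background.var())
    print(f"CNR = {cnr:.2f}")

In a phantom experiment the ROIs would be drawn around known vessel and background locations, and the same metric tracked across imaging depths and laser energies.
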

Role of REU Student: Literature searches; phantom design and construction; integration of the photoacoustic imaging system with the da Vinci robot; hands-on experiments with the integrated system

Preferred Background Skills: Ability to perform laboratory experiments and analyze results using MATLAB; Experience with ultrasound and programming experience in C/C++ would be helpful, but not required.

13. Photoacoustic Image Guidance for Neurosurgery
Mentor: Dr. Muyinatu Lediju Bell
Project Description: Photoacoustic imaging is an emerging technique that uses pulsed lasers to excite selected tissue and create an acoustic wave that is detected by ultrasound technology. This project explores the use of photoacoustic imaging to detect blood vessels behind bone and other tissues during minimally invasive neurosurgery. The project goals are to perform phantom experiments to quantify the performance of the photoacoustic imaging system and to prepare for patient testing to determine clinical utility.
Role of REU Student: Build and test tissue-mimicking phantoms; perform experiments with cadaver head models; prepare a photoacoustic imaging system for clinical studies
Preferred Background Skills: Ability to perform laboratory experiments and analyze results; Programming experience in MATLAB, Python, and C/C++. Experience with ultrasound imaging and lasers would be helpful, but not required.

14. Photoacoustic Imaging of Nerve Blocks
Faculty Mentor: Dr. Muyinatu A. Lediju Bell
Project Description: Photoacoustic imaging has the potential to avoid injury to hidden nerves during surgery and possibly prevent surgery-related paralysis. The technique is implemented by using pulsed lasers to excite selected tissue and create an acoustic wave that is detected by ultrasound technology. This project explores the use of photoacoustic imaging to visualize nerve blocks for avoidance during multiple minimally invasive surgeries. The project goals are to design specialized light delivery systems and to perform experiments with nerve blocks to quantify photoacoustic imaging system capabilities.
Role of REU Student: Phantom design and construction; perform experiments with nerve blocks; data analysis and interpretation; interact and interface with clinical partners at the Johns Hopkins Hospital
Preferred Background Skills: Ability to perform laboratory experiments and analyze results; programming experience in MATLAB; experience with ultrasound imaging and optics would be helpful, but not required.

15. Photoacoustic Guided Spinal Fusion Surgery
Faculty Mentor: Dr. Muyinatu A. Lediju Bell
Project Description: Photoacoustic imaging has demonstrated capabilities to differentiate various bone properties, which can be particularly useful during surgeries that involve the spinal cord. This imaging technique uses pulsed lasers to excite selected tissue and create an acoustic wave that is detected by ultrasound technology. This project explores the use of photoacoustic imaging to distinguish cortical from cancellous bone during spinal fusion surgery. The project goals are to perform experiments with spinal bone specimens to quantify the performance of the photoacoustic imaging system.

Role of REU Student: Perform experiments with spinal bone specimens; data analysis and interpretation; interact and interface with clinical partners at the Johns Hopkins Hospital

Preferred Background Skills: Ability to perform laboratory experiments and analyze results; programming experience in MATLAB; experience with ultrasound imaging and lasers would be helpful, but not required.

16. Flexible Transparent Electrode Development for Infrared Optoelectronics
Mentor: Dr. Susanna Thon
Project Description: Infrared photon sensing and detection technology is of interest for a variety of applications in medicine, communications, and computing. Solution-processed materials, such as colloidal quantum dots, have the potential to act as low-noise, room-temperature photodetectors for short-wave infrared (SWIR) radiation, but integrating them with traditional read-out electronics is a challenge due to the need for flexible, transparent contacts and interlayers. The aim of this project is to develop new all-solution-processed materials, such as silver nanowires and colloidal metal oxides, as flexible transparent electrodes for optoelectronic devices including photodetectors, solar cells, and LEDs. The project will include computational design, chemical synthesis, device fabrication, and optical/electronic testing components.

Role of the Undergraduate Student: The undergraduate researcher will be in charge of doing optical measurements (broadband absorption/reflection/transmission spectroscopy) and simulations using commercial and existing software to characterize and optimize new transparent electrode materials. Additionally, the undergraduate researcher will fabricate thin-film transparent electrodes on flexible substrates, and assist graduate students with colloidal materials synthesis and device testing.

Preferred Background Skills: Familiarity with Matlab is preferred. Some experience or comfort level with wet chemistry techniques is desirable but not required. All lab skills will be taught as-needed.

17. Using Machine Learning for Learning Information from Clinical Notes
Mentor: Dr. Suchi Saria
Project Description: Clinical notes contain rich unstructured information about a patient’s condition during their hospital stay. Critical information like patient history, qualitative observations, and diagnoses may only be recorded in the clinical notes. However, the unstructured and highly technical nature of the notes makes this information hard to extract automatically. Moreover, most existing natural language processing frameworks are not adept at handling medical text. In this project we will use machine learning techniques to develop a pipeline for automatically extracting information from clinical notes that could later be used in building disease modeling systems. The resulting information is critical for building automated surveillance algorithms.
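As a toy illustration of what such a pipeline might look like at its simplest (the notes, labels, and task below are invented; the project's actual system will be far richer), a bag-of-words classifier can flag notes that suggest a condition of interest:

    # Toy sketch of a note-classification pipeline; real clinical NLP is far
    # more sophisticated. Notes and labels below are invented for illustration.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    notes = [
        "pt febrile, rising lactate, suspect sepsis",
        "routine follow-up, wound healing well",
        "hypotensive overnight, blood cultures drawn",
        "discharged home, no acute complaints",
    ]
    labels = [1, 0, 1, 0]   # 1 = concern for deterioration (invented)

    clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
    clf.fit(notes, labels)
    print(clf.predict(["afebrile, lactate normalizing, stable"]))
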

Preferred Background Skills: A good background in programming (at least one class in object-oriented programming and/or familiarity with C++, Java, or Python) is recommended. While useful, a medical background is not required. A class in natural language processing or experience implementing machine learning algorithms (even as a hobby project) will be seen as a plus but is not required. Most of all, we want to work with someone who is a self-starter and is eager to learn and deploy machine learning algorithms.

18. Human Activity Recognition
Mentor: Dr. Rene Vidal
Motivations. The human visual system is exquisitely sensitive to an enormous range of human movements. We can differentiate between simple motions (left leg up vs. right hand down), actions (walking vs. running) and activities (playing vs. dancing). We can also identify friends by their walking styles, infer mood and intent from hand or arm gestures, or evaluate the grace and athleticism of a ballerina. Recently, significant progress has been made in automatically recognizing human activities in videos. Such advances have been made possible by the discovery of powerful image descriptors and the development of advanced classification techniques. However, the performance of the “features + classifier” approach seems to be reaching a limit, which is still well below human performance.

Project Goals. The goal of this project is to develop algorithms for recognizing human movements, actions, and activities in unstructured and dynamically changing environments. Examples include recognizing face, hand, and arm gestures, human gaits, and daily activities (shaking hands, drinking, eating, etc.). Classical 2D representations will be merged with 3D data (motion capture, Kinect, accelerometers) in order to represent a human performing an action as a collection of 2D/3D pose/shape and 2D/3D dynamical models.
Internship Goals. As part of the project, the intern will work alongside PhD students and develop novel algorithms for 3-D human motion representation for activity recognition. The intern will implement code for these algorithms as well as test them on several databases. The intern will read research papers on activity recognition, 3D shape modeling, motion capture-based recognition methods, and will learn new techniques to solve the above problem. Moreover, the intern will implement novel algorithms in MATLAB and C++ and become familiar with several computer vision and machine learning concepts. The intern will present their work to other graduate students and professors and will potentially be able to publish their research in computer vision conferences and journals. As part of the group, the intern will experience first-hand a rigorous and rewarding research environment.
Preferred Background Skills: Experience in C++ and MATLAB coding and familiarity with classification techniques and dynamical systems is a plus.

19. Subspace Clustering
Mentor: Dr. Rene Vidal
Motivations. Consider the task of separating different moving objects in a video (e.g., running cars and walking people in a video clip of a street). While humans can easily solve this task, a computer finds itself clueless: all it sees is a big chunk of ordered 0’s and 1’s. Fortunately for the computer, this problem has a specific property that enables an alternative approach better suited to a machine: for all the points on the same moving object, the vectors built from their trajectories lie in a common subspace. The problem thus boils down to the mathematical problem of separating different subspaces in the ambient space.

Project Goals. Given a set of data points drawn from multiple subspaces with unknown membership, we want to simultaneously cluster the data into the appropriate subspaces and find a low-dimensional subspace fitting each group of points. This problem is known as subspace clustering; besides the motion segmentation mentioned above, it has applications in image segmentation, face clustering, hybrid system identification, etc. The Vision Lab has worked extensively on this topic and has developed geometric methods such as Generalized Principal Component Analysis and spectral clustering methods such as Sparse Subspace Clustering. The performance of these algorithms has been studied on a motion segmentation benchmark and a face recognition database. The goal of the project is thus to further improve algorithms for subspace clustering (a minimal sketch of the core self-expressiveness idea appears after this list). Possible research directions include:
1. To develop scalable algorithms that are able to deal with data that has millions of entries, e.g., the ImageNet database.
2. To develop algorithms that can deal with label-unbalanced data and improve clustering accuracy.
3. To develop algorithms that are able to deal with missing entries in the data, e.g., incomplete trajectories in the motion segmentation applications.
4. To develop robust algorithms that can effectively handle outliers in the data.
5. To develop algorithms that are able to deal with subspaces of high relative dimension.
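To give a flavor of the spectral-clustering side, the sketch below is a simplified stand-in for Sparse Subspace Clustering on synthetic data: it uses ridge-regularized least squares in place of the l1 program, expresses each point as a combination of the others, and clusters the resulting affinity spectrally.

    import numpy as np
    from sklearn.cluster import SpectralClustering

    # Sketch of the self-expressiveness idea behind subspace clustering:
    # each point is written as a combination of the *other* points, which
    # preferentially uses points from its own subspace. A ridge-regularized
    # least squares stands in for the sparse (l1) program of true SSC.
    rng = np.random.default_rng(0)
    B1 = rng.normal(size=(5, 2))            # basis of subspace 1
    B2 = rng.normal(size=(5, 2))            # basis of subspace 2
    X = np.hstack([B1 @ rng.normal(size=(2, 30)),
                   B2 @ rng.normal(size=(2, 30))])   # 5 x 60 data matrix

    N = X.shape[1]
    C = np.zeros((N, N))
    for i in range(N):
        idx = [j for j in range(N) if j != i]
        Xi = X[:, idx]
        # min_c ||x_i - Xi c||^2 + lam ||c||^2  (ridge stand-in for l1)
        c = np.linalg.solve(Xi.T @ Xi + 1e-3 * np.eye(N - 1), Xi.T @ X[:, i])
        C[idx, i] = c

    A = np.abs(C) + np.abs(C).T             # symmetric affinity matrix
    labels = SpectralClustering(n_clusters=2, affinity='precomputed',
                                random_state=0).fit_predict(A)
    print(labels)

Points from the same subspace tend to reconstruct each other, so the affinity is approximately block-diagonal and spectral clustering recovers the two groups.
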

Internship Goals. As part of the project, the intern will work alongside PhD students and develop novel algorithms for subspace clustering. The intern will implement code for these algorithms as well as test them on several databases. The intern will learn the necessary background in machine learning, computer vision, and compressed sensing, and will read research papers on subspace clustering. Moreover, the intern will implement novel algorithms in MATLAB and apply them to different datasets. The intern will present their work to other graduate students and professors and will potentially be able to publish their research in computer vision conferences and journals. As part of the group, the intern will experience first-hand a rigorous and rewarding research environment. A strong background in linear algebra and experience in MATLAB coding are a plus.

20. Object Recognition
Mentor: Dr. Rene Vidal
Motivations. When a person is shown an image, he/she is able to immediately identify a variety of things: the various objects present in the image, their locations, their spatial extent, their categories, and the underlying 3D scene of which it is an image. The human visual system uses a combination of prior knowledge about the world and the information present in the image to perform this complicated task. We want to replicate this on a computer. This is broadly called object recognition, and it involves object detection (is there an object in this image? where is it located?), segmentation (which pixels contain the object?), categorization (what is the object’s class?), and pose estimation (what is the 3D location of the object in the scene?). We also want to perform all these tasks jointly rather than in a pipeline, as knowledge of one task helps us perform the others better.

Project Goals. The project aims to develop object representations (models that capture prior knowledge about what the object looks like under varying viewing conditions) and techniques to perform the tasks of object detection, categorization, image segmentation, and pose estimation in a fast and efficient manner. We are developing a graph-theoretic approach in which different levels of abstraction, such as pixels, superpixels, object parts, object categories, their 3D pose, and their relative configuration in the scene, are represented as nodes in a graph. Contextual relationships among different nodes are captured by an energy defined on this graph. In this energy, bottom-up costs capture local appearance and motion information among neighboring nodes. Each of the tasks corresponds to terms in the energy function (the top-down costs), which is then minimized in a joint manner. We are also developing algorithms based on branch and bound (pose estimation) and graph cuts (image segmentation) for minimizing the energy, and methods based on structured output learning (structural SVMs) to learn its parameters.
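To make the energy formulation concrete, here is a toy version (invented costs, four nodes, brute-force minimization in place of graph cuts) of a labeling energy with bottom-up unary terms and a pairwise smoothness term:

    import itertools
    import numpy as np

    # Toy labeling energy on a 4-node chain:
    #   E(x) = sum_i U[i, x_i] + sum_(i,i+1) w * [x_i != x_(i+1)]
    # Real systems minimize such energies with graph cuts; the tiny size
    # here allows exhaustive enumeration instead.
    U = np.array([[0.2, 1.0],    # unary costs (e.g., appearance evidence),
                  [0.3, 0.8],    # invented for illustration
                  [0.9, 0.1],
                  [0.7, 0.4]])
    w = 0.5                       # Potts smoothness weight, invented

    def energy(x):
        unary = sum(U[i, xi] for i, xi in enumerate(x))
        pairwise = sum(w * (x[i] != x[i + 1]) for i in range(len(x) - 1))
        return unary + pairwise

    best = min(itertools.product([0, 1], repeat=4), key=energy)
    print(best, energy(best))
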

Internship Goals. As part of the project, the intern will help enhance our current framework for object recognition by improving the model to capture more sub-categories, develop models for more object categories and design algorithms to utilize these models for various vision tasks. The intern will be exposed to current research in the area of Object Recognition and Scene Understanding. He/she will read a lot of literature on a variety of topics like image representation, clustering, classification and graphical models. The intern will implement algorithms in Matlab/C++ and test them across various datasets. The intern will present their work to other graduate students and professors and will potentially be able to publish their research in computer vision conferences and journals. This project will help the intern gain a good understanding of challenges and research opportunities in the area of Object Recognition and Scene Understanding. Experience in C++ and MATLAB coding and familiarity with image processing, computer vision, or statistical inference is a plus.

21. Automated Analysis of Cardiac Action Potentials from Human Embryonic Stem Cells
Mentor: Dr. Rene Vidal
Motivations. The use of human embryonic stem cell cardiomyocytes (hESC-CMs) in tissue transplantation and repair has led to major advances in cardiac regenerative medicine. However, to avoid potential arrhythmias, it is critical that hESC-CMs used in replacement therapy be electrophysiologically compatible with the adult atrial, ventricular, and nodal phenotypes. One approach to ensuring compatibility is by investigating the electrophysiological signature, or action potential (AP), of a cardiomyocyte.

Project Goals. The goal of this project is to tackle this problem by using machine learning techniques to provide objective measures for classifying action potentials derived from human embryonic stem cells by their shape. Our previous work has shown that by using a shape-preserving distance framework called metamorphosis and computational models of adult APs, one can, with high accuracy, identify the phenotype (atrial, ventricular, or nodal) of the embryonic cardiomyocyte. Further, the framework provides an interpolation between the embryonic and mature APs that may be representative of the maturation process. Our current goal is to optimize the framework for use in larger populations, as well as to use it to investigate the efficacy of current biochemical methods for the purification of specific phenotype CMs.

Internship Goals. In this project, the intern will work with PhD students to develop novel mathematical models for representing embryonic and mature cardiac action potentials (APs) and methods for classifying APs from cardiac time-series data. Further, the maturation interpolants will be used to update current, state-of-the-art computational cardiomyocyte models by introducing cell maturation. The intern will implement code to demonstrate performance on synthetic data as well as a large AP dataset. The intern will read a number of research papers on cardiac signal acquisition, electrophysiology, cardiac cell models, and machine learning, and will develop an understanding of the problem, its applications, and the techniques involved in tackling it. Moreover, the intern will implement novel algorithms in MATLAB and C++ and become familiar with analyzing cardiac time-series data as well as evaluating the developed methods on acquired datasets. The intern will present their work to other graduate students and professors and will potentially be able to publish their research in biomedical engineering conferences and journals. As part of the group, the intern will experience first-hand a rigorous and rewarding multi-disciplinary research environment.
Experience in C++ and MATLAB coding and familiarity with cardiac electrophysiology, signal processing, machine learning, differential equations (partial and ordinary), and numerical optimization is a plus.

22. Analysis of Diffusion Magnetic Resonance Images
Mentor: Dr. Rene Vidal
Motivations. Diffusion Magnetic Resonance Imaging (DMRI) is a medical imaging technique that is used to estimate the anatomical network of neuronal fibers in the brain, in vivo, by measuring and exploiting the constrained diffusion properties of water molecules in the presence of bundles of neurons. Water molecules diffuse more readily along fibrous bundles (think of fiber-optic cables) than in directions against them. Therefore, by measuring the relative rates of water diffusion along different spatial directions, we can estimate the orientations of fibers in the brain. In particular, one important type of DMRI technique that we will analyze is high angular resolution diffusion imaging (HARDI), which measures water diffusion with an increased level of angular resolution in order to better estimate the probability of fiber orientation, known as the Orientation Distribution Function (ODF). HARDI is an advancement over the clinically popular Diffusion Tensor Imaging (DTI), which requires fewer angular measurements because of a Gaussian assumption that restricts the number of fiber orientations that can be estimated in each voxel. More accurate estimates of ODFs at the voxel level using HARDI lead to more accurate reconstructions of fiber networks. For instance, the extraction of neuronal fibers from HARDI can help understand brain anatomical and functional connectivity in the corpus callosum, cingulum, thalamic radiations, optic nerves, etc. DMRI has been vital in the understanding of brain development and neurological diseases such as multiple sclerosis, amyotrophic lateral sclerosis, stroke, Alzheimer’s disease, schizophrenia, autism, and reading disability.
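For reference, the Gaussian assumption behind DTI corresponds to the standard single-tensor signal model from the literature (stated here for orientation, not as part of this project's methods):

    S(g) = S_0 exp(-b g^T D g),

where g is the unit diffusion-encoding direction, b is the diffusion weighting, S_0 is the unweighted signal, and D is a 3x3 symmetric positive-definite diffusion tensor. A single tensor has only one dominant eigenvector per voxel, so at most one fiber orientation can be recovered there; this is exactly the restriction that HARDI and ODF estimation relax.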

Project Goals. To make DMRI beneficial in both diagnosis and clinical applications, it is of fundamental importance to develop computational and mathematical algorithms for analyzing this complex DMRI data. In this research area, we aim to develop methods for processing and analyzing HARDI data with an ultimate goal of applying these computational tools for robust disease classification and characterization. Possible project areas include:
1. ODF Estimation: To develop advanced algorithms for computing accurate fields of Orientation Distribution Functions (ODFs) from HARDI data.
2. Fiber Segmentation: To develop advanced algorithms for automatically segmenting HARDI volumes into multiple regions corresponding to different structures in the brain.
3. HARDI Registration: To develop advanced algorithms for the registration of HARDI brain volumes to preserve fiber orientation information.
4. HARDI Feature Extraction: To develop methods for extracting features from high-dimensional HARDI data that can be exploited for ODF clustering, fiber segmentation, HARDI registration and disease classification.
5. Disease Classification: To develop advanced classification techniques using novel HARDI feature representations to robustly classify and characterize neurological disease.

Internship Goals. In our lab, the intern will work with a PhD student to complete a project within one or more of the areas mentioned above. The intern will read a number of research papers on DMRI and the state of the art, and will develop an understanding of the problem, its applications, and the techniques involved in tackling it. There are two aspects of the research that may be of interest to the applicant. One is a more theoretical aspect that involves developing mathematical theories to improve existing frameworks. The second is a more computational aspect that involves more image processing, analysis, and algorithm implementation in MATLAB or C++. An applicant with some interest and experience in both areas is most favorable, but it is possible for an applicant to work on only one of the aspects as well. At the end of the internship period the student will present their work to other graduate students and professors and will potentially be able to publish their research in medical imaging conferences and journals. As part of the Vision Lab, the intern will experience first-hand a rigorous and rewarding research environment with a highly collaborative and supportive social element.
Experience in MATLAB or C++ and familiarity with image analysis or processing is a plus. Mathematical maturity is also favorable.

23. Software Environment and Virtual Fixtures for Medical Robotics
Mentors: Prof. Russell Taylor, Dr. Peter Kazanzides

Description: Our laboratory has an active ongoing research program to develop open source software for medical robotics research. This “Surgical Assistant Workstation (SAW)” environment includes modules for real-time computer vision; video overlay graphics and visualization; software interfaces for “smart” surgical tools; software interfaces for imaging devices such as ultrasound systems, x-ray imaging devices, and video cameras; and interfaces for talking to multiple medical robots, including the Intuitive Surgical da Vinci robot, our microsurgical Steady Hand robots, the IBM/JHU LARS robot, the JHU/Columbia “Snake” robot, etc. A brief overview of this project may be found at https://www.cisst.org/saw/Main_Page. Students will contribute to this project by developing “use case” software modules and applications. Typical examples might include: using a voice control interface to enhance human-machine cooperation with the da Vinci robot; developing enhanced interfaces between open source surgical navigation software and the da Vinci or other surgical robots; or developing telesurgical demonstration projects with our research robots. However, the specific project will be defined in consultation with the student and our engineering team.
Required Skills: The student should have strong programming skills in C++. Some experience in computer graphics may also be desirable.

24. Instrumentation and Steady-hand Control for New Robot for Head-and-Neck Surgery
Mentors: Prof. Russell Taylor, Dr. Jeremy Richmon (Otolaryngology), Dr. Masaru Ishii (Otolaryngology), Dr. Lee Akst (Otolaryngology), Dr. Matthew Stewart (Otolaryngology)
Description: Our laboratory is developing a new robot for head-and-neck surgery. Although the system may be used for “open” microsurgery, it is specifically designed for clinical applications in which long, thin instruments are inserted into narrow cavities. Examples include endoscopic sinus surgery, transsphenoidal neurosurgery, laryngeal surgery, otologic surgery, and open microsurgery. Although it can be teleoperated, our expectation is that we will use “steady hand” control, in which both the surgeon and the robot hold the surgical instrument. The robot senses forces exerted by the surgeon on the tool and moves to comply. Since the motion is actually made by the robot, there is no hand tremor, the motion is very precise, and “virtual fixtures” may be implemented to enhance safety or otherwise improve the task. Possible projects include:
• Development of “phantoms” (anatomic models) for evaluation of the robot in realistic surgical applications.
• User studies comparing surgeon performance with/without robotic assistance on suitable artificial phantoms.
• Optimization of steady-hand control and development of virtual fixtures for a specific surgical application
• Design of instrument adapters for the robot

Required Skills: The student should have a background in biomedical instrumentation and an interest in developing clinically usable instruments and devices for surgery. Specific skills will depend on the project chosen. Experience in at least one of robotics, mechanical engineering, and C/C++ programming is important. Similarly, experience in statistical methods for reducing experimental data would be desirable.

25. Statistical Modeling of 3D Anatomy
Mentor: Prof. Russell Taylor
Description: The goal of this project is the creation of 3D statistical models of human anatomic variability from multiple CT and MRI scans. The project will involve processing multiple images from the Johns Hopkins Hospital, co-registering them, and performing statistical analyses. The resulting statistical models will be used in ongoing research on image segmentation and interventional imaging. We anticipate that the results will lead to joint publications involving the REU student as a co-author.
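The statistical core of such models is often principal component analysis over co-registered shape vectors; the sketch below (synthetic landmarks, not hospital data) shows how modes of anatomic variation fall out of the covariance of aligned shapes:

    import numpy as np

    # Sketch of a point-distribution shape model: stack aligned landmark
    # coordinates into rows, then PCA gives a mean shape plus modes of
    # variation. Landmarks here are synthetic.
    rng = np.random.default_rng(0)
    n_subjects, n_landmarks = 50, 20
    mean_shape = rng.normal(size=2 * n_landmarks)          # flattened (x, y) pairs
    mode = rng.normal(size=2 * n_landmarks)                # one planted mode
    shapes = mean_shape + np.outer(rng.normal(size=n_subjects), mode)
    shapes += 0.05 * rng.normal(size=shapes.shape)         # residual noise

    X = shapes - shapes.mean(axis=0)
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    var_explained = s**2 / np.sum(s**2)
    print("variance explained by first 3 modes:", var_explained[:3].round(3))
    # New shapes can be synthesized as mean + sum_k b_k * Vt[k].
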
Required skills: Experience in computer vision, medical imaging, and/or statistical methods is highly desirable.

26. Accuracy Compensation for “Steady Hand” Cooperatively Controlled Robots
Mentor: Prof. Russell Taylor
Description: Many of our surgical robots are cooperatively controlled. In this form of robot control, both the robot and a human user (e.g., a surgeon) hold the tool. A force sensor in the robot’s tool holder senses forces exerted by the human on the tool, and the robot moves to comply. Because the robot is doing the moving, there is no hand tremor, and the robot’s motion may be otherwise constrained by virtual fixtures to enforce safety barriers or otherwise provide guidance. However, any robot mechanism has some small amount of compliance, which can affect accuracy depending on how much force the human exerts on the tool. In this project, the student will use existing instrumentation in our lab to measure the displacement of a robot-held tool as various forces are exerted on it and will develop mathematical models of the compliance. The student will then use these models to compensate for the compliance in order to assist the human in placing the tool accurately on predefined targets. We anticipate that the results will lead to joint publications involving the REU student as a co-author.
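As a simplified illustration of the fit-and-compensate idea (a one-dimensional sketch with synthetic numbers; the real system is six-DOF and the appropriate model form is part of the project), a linear compliance can be estimated by least squares and subtracted from the commanded position:

    import numpy as np

    # Sketch: fit a linear compliance model d = C f from (force, displacement)
    # measurements, then subtract the predicted deflection from the target.
    # Data are synthetic 1-D stand-ins for the 6-DOF problem.
    rng = np.random.default_rng(0)
    forces = rng.uniform(-5, 5, 100)                  # applied forces (N)
    C_true = 0.02                                     # mm/N, invented
    disp = C_true * forces + 0.001 * rng.normal(size=100)

    C_hat, *_ = np.linalg.lstsq(forces[:, None], disp, rcond=None)
    f_now, target = 3.0, 10.0                         # current handle force, goal (mm)
    command = target - C_hat[0] * f_now               # compensate predicted sag
    print(f"compliance {C_hat[0]:.4f} mm/N, corrected command {command:.3f} mm")
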
Required Skills: The student should have basic laboratory skills, a solid mathematical background, and familiarity with computer programming. Familiarity with C++ would be a definite plus, but much of the programming work can likely be done in MATLAB or Python.

27. Voice Interfaces for Surgical Robot Systems

Mentor: Prof. Russell Taylor
Description: The goal of this project is the development of suitable voice interfaces for surgical robots such as the da Vinci surgical system. Surgical robots operate in an information-rich environment and often have several different operating behaviors. Although the actual motion of the robot is best controlled by conventional teleoperation or hands-on cooperative control, selection of the appropriate behavior, control of information displays shown to the surgeon, or other information-based interactions may require the surgeon to communicate information without affecting what his or her hands are doing. Voice is a natural communication modality for this sort of interaction. In this project, the student will adapt a commercial voice recognition system to provide simple voice interactions with a surgical robot and will demonstrate this capability with one of our surgical robot systems.
Required Skill: The student should be a very strong C++ programmer. Familiarity with voice recognition or robotics would be a plus.

28. Reconstruction of Light Position from Shape of Illumination

Mentors: Prof. Russell Taylor, Balazs Vagvolgyi
Description: In vitreoretinal surgery, the surgeon uses a handheld instrument equipped with a fiber-optic light pipe to illuminate the retina from inside the eye. In microscope images, only the tip of the light pipe may be visible, not the shaft; thus the location of the light pipe cannot be reliably determined by looking at the instrument itself. In this project we aim to determine the light pipe’s position and orientation from the shape of the illumination pattern on the surface of the retina.
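One plausible first step (sketched below on a synthetic image with OpenCV; relating the fitted ellipse to the light pipe's pose is the actual research question) is to segment the bright illumination spot and fit an ellipse, whose center, axes, and orientation constrain the light pipe's position and orientation:

    import cv2
    import numpy as np

    # Sketch: segment the illumination pattern and fit an ellipse. The
    # synthetic image below stands in for a retinal view. OpenCV 4 return
    # signatures are assumed.
    img = np.zeros((240, 320), np.uint8)
    cv2.ellipse(img, (160, 120), (60, 35), 30, 0, 360, 255, -1)  # fake spot

    _, mask = cv2.threshold(img, 128, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    (cx, cy), (major, minor), ang = cv2.fitEllipse(max(contours, key=cv2.contourArea))
    print(f"center=({cx:.0f},{cy:.0f}) axes=({major:.0f},{minor:.0f}) angle={ang:.0f}")
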
Required Skills: Image processing; programming (Matlab or C++)

29. Animal Locomotion in Complex Terrain
Mentor: Prof. Chen Li
Description: Animals are fantastic locomotors and can move through almost any terrain. However, compared to flight and swimming, where we understand a great deal about how animals’ body forms and sensorimotor control help them exert appropriate forces to move fast and maneuver with agility, we know surprisingly little about how terrestrial animals do so. This is largely because, until very recently, terrestrial locomotion had been studied only on simple ground such as treadmills. To advance our understanding, we are studying how animals rapidly and efficiently traverse complex terrain. One model system that we are focusing on is cluttered terrain such as forest floor and building rubble (imagine applying this to advance search-and-rescue robots). We will perform systematic experiments to vary parameters of the locomotor (e.g., speed, angle of attack, body shape) and the terrain (e.g., surface friction, obstacle size and spatial density), and perform data analysis to understand how they affect locomotion.
Role of the student: The REU student will be involved in animal locomotion experiments to test high-performing animals such as cockroaches and snakes moving through laboratory models of complex terrain, and will perform data analysis (and potentially simple mechanics modeling). Main experimental techniques include high-speed imaging and a new automated insect testing arena that we developed.
Background Skills: A good understanding of Newtonian mechanics and strong MATLAB skills. No specific animal experiment background is required, but the student must be comfortable handling live insects and snakes (with mentorship).

30. Development of Novel Bio-inspired Robots for Multi-functional Locomotion in Complex Terrain
Mentor: Prof. Chen Li
Description: Animals are fantastic locomotors and can move through almost any terrain. One main capability that animals have but mobile robots still lack is multi-functional locomotion (e.g., running, climbing, burrowing, self-righting). We are systematically studying how animals achieve this and will use biological insights to guide the design and development of novel robots that begin to achieve multi-functional locomotion. For example, in recent studies we found that cockroaches have wings that are normally closed to form a rounded body shape that helps them traverse cluttered obstacles, but that, when flipped over, they open these wings to self-right; based on these insights, we have developed new robots that can transform between a rounded shell for obstacle traversal and opened wings for self-righting.
Role of the student: The REU student will be involved in the design, fabrication, and testing of novel robots using inspiration from our animal experiment observations.
Background Skills: Required: Arduino microcontroller circuit and programming, CAD design, 3D printing, machining; Preferred: MATLAB.

31. Robophysical System to Study Locomotor-terrain Interaction
Mentor: Prof. Chen Li
Description: A main challenge that has prevented robots from moving as well as animals in complex terrain is our lack of understanding of the forces between animals and robots (locomotors) and the surrounding terrain (imagine trying to make an airplane fly with no idea of how to generate lift!). A technical difficulty in studying this is that it is often hard to directly measure forces during the free locomotion of animals and robots in complex terrain. To address this challenge, we are developing novel automated “robophysical” systems. The idea is to apply robotic technology to move physical models of animals or robots through physical models of complex terrain to mimic free locomotion, and to measure the resulting forces and motions using sensors (IMU, force transducer) and imaging analysis, much like how one can use a wind tunnel to test airfoils at different angles of attack to better understand fluid-structure interaction.
Role of the student: The REU student will be involved in development, refinement, and testing of our novel robophysical system, and perform data analysis (and potentially simple mechanics modeling).
Background Skills: Arduino microcontroller, MATLAB, CAD design, 3D printing, machining, sensors.

32. Bimanual Haptic Feedback for Robotic Surgery Training
Mentor: Dr. Jeremy Brown
Description: Robotic minimally invasive surgery (RMIS) has transformed surgical practice over the last decade; teleoperated robots like Intuitive Surgical’s da Vinci provide surgeons with vision and dexterity that are far better than traditional minimally invasive approaches. Current commercially available surgical robots, however, lack support for rich haptic (touch-based) feedback, prohibiting surgeons from directly feeling how hard they are pressing on tissue or pulling on sutures. Expert surgeons learn to compensate for this lack of haptic feedback by using vision to estimate the robot’s interactions with surrounding tissue. Yet, moving from novice proficiency to that of an expert often takes a long time. In a collaboration with Dr. Katherine J. Kuchenbecker, we previously demonstrated that tactile feedback of the force magnitude applied by the surgical instruments during training helps trainees produce less force with the robot, even after the feedback is removed. This project seeks to build on these previous findings by refining and evaluating a bimanual haptic feedback system that produces a squeezing sensation on the trainee’s two wrists in proportion to the forces they produce with the left and right surgical robotic instruments. The research objective of this project is to test the hypothesis that this bimanual haptic feedback will accelerate the learning curve of trainees learning to perform robotic surgery.
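The feedback law itself can be simple; below is a hedged sketch of a proportional force-to-squeeze mapping with saturation (the gain, range, and interface are invented placeholders, not the actual device's API):

    # Sketch of a proportional force-to-squeeze mapping with saturation.
    # Gains, ranges, and the actuator interface are invented placeholders.
    def squeeze_command(instrument_force_n, gain=0.2, max_cmd=1.0):
        """Map sensed instrument force (N) to a normalized squeeze command."""
        return max(0.0, min(max_cmd, gain * instrument_force_n))

    # One command per wrist, driven by the corresponding instrument's force.
    left_cmd = squeeze_command(2.5)    # left instrument pressing at 2.5 N
    right_cmd = squeeze_command(0.4)
    print(left_cmd, right_cmd)

Much of the project's difficulty lies not in this mapping but in the mechatronic refinement of the wrist-squeeze devices and in the design of a sound human-subject study.
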
Role of the student: With supportive mentorship, the REU student will lead the refinement and evaluation of our current haptic feedback system, which involves mechanical, electrical, and computational components. He or she will then work closely with clinical partners to select clinically appropriate training tasks and will design, conduct, and analyze a human-subject experiment to evaluate the system.
Background skills: Experience with CAD, Matlab, and/or Python would be beneficial. Interest in working collaboratively with both engineering and clinical researchers. Mechatronic design experience and human-subject experiment experience would be helpful but are not required.

Laboratory for Computational Sensing + Robotics