Robotics Infrastructure Development Project

Welcome to the Robotics Infrastructure Development Project at the Laboratory for Computational Sensing and Robotics (LCSR) at Johns Hopkins University. Our project team is developing infrastructure for integrated sensing, modeling, and manipulation with robotic and human-machine systems. We are creating a publicly available framework of software, mechatronics, and hardware. Two major research platforms will be created: an assistive robotics system with arms and hands, and a robot-assisted surgical system with integrated, enhanced visualization. Researchers at other institutions will be able to use our disseminated materials to design their own systems, as well as visit LCSR to access our experimental platforms. This infrastructure will enable new science by facilitating difficult systems-level robotics research, broadening access to advanced robotic capabilities, and promoting the interchange of information in the field.

This Major Research Instrumentation project, called MRI: Development of Infrastructure for Integrated Sensing, Modeling, and Manipulation with Robotic and Human-Machine Systems, is sponsored by the National Science Foundation, grant no. CNS-0722943.
  • Development of Infrastructure for Integrated Sensing, Modeling, and Manipulation with Robotic and Human-Machine Systems

    “Integrated robotic systems that fuse multimodal sensory information to enhance models and manipulate the environment will positively impact human lives, particularly in health care, safety, and human assistance.”

    The Infrastructure Development project aims to create a publicly available system for the control of an integrated robot capable of movement and manipulation in complex, unstructured environments. The publicly available system will include software, mechatronics, and integrated hardware, all of which will aid in the control of the robot’s sensing, modeling, and manipulation capabilities. Two complementary research platforms will be developed to carry out the project’s goals: the first will be a bimanual dexterous manipulation system with integrated environment sensing and the ability to model rigid objects commonly found in human environments; the second will be a teleoperated surgical robotic system with integrated sensors that can acquire patient-specific deformable tissue models.
    Intellectual Merit
    When completed, the proposed infrastructure will enable investigators at JHU and elsewhere to commence a broad range of experimentally-driven research activities without laboriously re-tooling software and mechatronic interfaces for each new project. The infrastructure will facilitate research that integrates sensing, modeling and manipulation, including environment exploration and modeling, vision-based manipulation, human-robot interaction, computer integrated surgery, prosthetics, and tactile perception. JHU’s Laboratory for Computational Sensing and Robotics (LCSR) is uniquely positioned to develop this open-access robotic infrastructure due to the breadth and depth of both its fundamental and experimental robotics research activities.
    Broader Impact
    Integrated robotic systems that fuse multimodal sensory information to enhance models and manipulate the environment will positively impact human lives, particularly in health care, safety, and human assistance. The robotics research resulting from this infrastructure will directly impact several related disciplines, including neuroscience, rehabilitation, and surgery. Researchers worldwide will have access to our open source software and designs, accelerating the pace of robotics research while encouraging experimentation. The dissemination of this research will allow students at all levels to learn about robotic systems, and students can use the infrastructure to jump-start their research. The large research team, including the investigators, development staff, and users, is interdisciplinary and diverse. The project involves people at many career stages, from high school students to senior faculty.
  • The Infrastructure Development project will contribute substantially to the education of graduate, undergraduate, and high school students involved in the development and use of the platforms.
    A yearly undergraduate research assistant has been budgeted to participate in the development of software, mechatronics, and experimental platforms. These students will learn about low-level engineering development and have the opportunity to see their work applied in systems that affect a number of research activities. An undergraduate student will also be involved in developing the website and will learn about the infrastructure through the dissemination process. Beyond the students who are supported directly by this project, many undergraduate researchers in the LCSR will use the infrastructure. The investigators have an excellent collective record of involvement of undergraduates in world-class research, enabling them to publish papers, attend conferences, and go on to graduate school with prestigious research fellowships.

    We will participate, through the CISST ERC, in Research Experience for Undergraduates (REU), Research Experience for Teachers (RET), and K-12 outreach and diversity programs, providing substantial contact with those communities. We will also continue participating in the Women in Science and Engineering (WISE) program, in which local high-school girls do research rotations in each lab for a semester or year, for three afternoons per week. The infrastructure will also improve the research training and advance the careers of underrepresented groups in engineering. We will use the exciting topic of robotics to encourage young students, particularly women and minorities, to enter science, technology, engineering, and math (STEM) fields.
    Women in Science and Engineering (WISE) Students, Fall 2008:

    • Meghan Vu, mentored by Dr. Allison Okamura and undergraduate student Kamini Balaji
    • Taylor Harman, mentored by Dr. Greg Hager and graduate student Carol Reiley
  • Principal Investigators
    NAME PHONE EMAIL WEBSITE
    Allison M Okamura 410-516-7266 aokamura@jhu.edu Haptics Laboratory
    Noah J Cowan 410-516-5301 ncowan@jhu.edu LIMBS Laboratory
    Gregory D Hager 410-516-5521 hager@cs.jhu.edu CIRL
    Peter Kazanzides 410-516-5590 pkaz@cs.jhu.edu SMARTS Laboratory
    Russell H Taylor 410-516-6299 rht@cs.jhu.edu CISST
    Other JHU Investigators
    NAME POSITION DEPARTMENT
    Gregory Chirikjian Professor Mechanical Engineering
    Ralph Etienne-Cummings Professor Electrical and Computer Engineering
    Gabor Fichtinger Associate Research Professor Computer Science (also Associate Professor at Queen’s University)
    Stephen S. Hsiao Professor Neuroscience
    Jin Kang Professor Electrical and Computer Engineering
    Nitish Thakor Professor Biomedical Engineering
    Rene Vidal Assistant Professor Biomedical Engineering
    Louis Whitcomb Professor Mechanical Engineering
    Iulian Iordachita Assistant Research Professor Mechanical Engineering
    JHU Staff
    NAME POSITION
    Anton Deguet Senior Software Engineer
    Balazs Vagvolgyi Senior Software Engineer
    Collaborating Users
    NAME POSITION INSTITUTION
    Katherine Kuchenbecker Assistant Professor (formerly JHU Postdoc) University of Pennsylvania
    Nabil Simaan Associate Professor Vanderbilt University
    Marcia O’Malley Assistant Professor Rice University
    Kevin Cleary Technical Director, Sheikh Zayed Institute Children’s National Medical Center
    Jaydev Desai Associate Professor University of Maryland
    Chris Hasser Director of Applied Research Intuitive Surgical Inc.
    Robert Howe Professor Harvard University
    Partners
    Intuitive Surgical, Inc.
  • Hardware will be developed for both assistive and surgical robotics platforms.
    • The assistive robotics platform will be able to model common environments and assist humans in manipulation tasks.
    • The surgical robotics platform will focus on manipulating, sensing, and modeling deformable soft tissue using a modified version of the da Vinci surgical system.
      • Equipment includes:
        • Experimental custom system based on the da Vinci Classic
        • Clinical system (da Vinci S) on loan
        • An ultrasound machine
        • An eye tracking system
    Our software and mechatronic infrastructure must be applicable to many different types of hardware systems. We propose to construct two robotic platforms, with different sensing, modeling, and manipulation requirements, in order to guide development and ensure generality. The hardware will include a set of off-the-shelf components (some already owned and some to be purchased), including robot arms, artificial hands, tactile sensors, vision and imaging sensors, position trackers, and human input devices. A significant portion of the development work is to integrate these components, along with the software and mechatronic infrastructure, into two functional experimental platforms for robotics research. We emphasize that, although significant purchases are required to create these platforms, this is a developmental activity. These components cannot simply be assembled into a working system. Rather, they will be generically interfaced through the software infrastructure, and, for some manipulation components, the mechatronic interfaces described here are needed to control the system. The two experimental platforms we will develop, and the primary purchases we will make, are described below.

    Assistive Robotics

    We will develop a bimanual dexterous manipulation system with integrated self and environment sensing. This platform will enable research to acquire data from objects commonly found in natural environments and develop models to be applied to robots that assist humans in manipulation tasks. The sensors in this system will include force, position, tactile, and vision (cameras). The research related to this platform will enable the acquisition of models of rigid objects based on geometry, surface properties, and inertia.

    Surgical Robotics

    Two “human-in-the-loop” teleoperated surgical robotic systems are involved in this project. The first is a custom system with imaging and vision sensors that can acquire patient-specific deformable tissue models. The sensors will include a stereo endoscope, video cameras, an ultrasound scanner, and force sensors on surgical tools. The models will focus on deformable soft tissues that can be tracked using computer vision techniques and whose mechanical properties can be acquired using force and motion sensing. The manipulation system consists of two da Vinci master manipulators, “classic” patient-side manipulators with surgical tools, and 3-D visualization for the operator. In addition, a da Vinci S system is on loan from Intuitive Surgical, Inc. This clinical version of the robot is used in the JHU Swirnow Mock Operating Room for systems integration experiments in a realistic mock clinical setting.
  • Mechatronic interfaces exist between the control software and the sensors and actuators that interact with the physical world.

    We have designed a new controller using IEEE 1394 (Firewire) for communications. This controller consists of two boards: an IEEE-1394 FPGA Controller and a Quad Linear Amplifier. Key features of this controller are:

    • Provides functionality equivalent to the Low Power Motor Controller (LoPoMoCo), a custom board with an ISA bus interface developed in 2004
    • Uses an FPGA to bridge between the IEEE 1394 bus and the I/O hardware
    • All control software implemented on host PC (no embedded software development required)
    • Includes linear power amplifiers in bridge configuration, providing bidirectional motor control using a single power supply
    • Enables closed-loop control up to 8 kHz
    • FPGA has sufficient capacity for additional functionality, including a microprocessor
    • More detailed specifications are available on the project wiki.


    All design files are available in the public SVN and git repositories, with a Trac interface that includes a wiki for documentation. The repositories include the schematic and PCB design files (Altium Designer format) and the FPGA (Verilog) source code.

    The cost of one 4-axis board set (one FPGA and one QLA) is about $1,150, assuming a total run of at least 40 board sets. The price will decrease with larger quantities.

    See the Design Review 1 presentation (pdf); the resulting design decisions and user requirements are summarized below:

    DESIGN QUESTIONS AND CURRENT PLAN
    • Q: Use the 4-pin, 6-pin, or 9-pin IEEE-1394 connector?
      A: Use the 6-pin connector, so that the bus can provide logic power. Include a power input connector so that laptops can use a 4-pin to 6-pin adapter and a separate power supply. A 9-pin to 6-pin adapter cable can also be used.
    • Q: Should the controller consist of one or two physical boards?
      A: Two boards: one for the digital circuits (IEEE-1394 interface and FPGA) and one for the I/O and power amplifiers.
    • Q: How many axes per board?
      A: Four (same as the LoPoMoCo).
    • Q: What type of connectors for motors and sensors?
      A: DB9 for motors (assuming they can handle the current) and VHDCI-68 for sensors (same connectors and pinout as the LoPoMoCo).
    • Q: What is the required continuous current?
      A: At least 5 A, and probably not more than 8 A.
    • Q: What is the maximum motor supply voltage?
      A: Motors up to 48 V will be supported (probably with up to a 60 V supply).
    • Q: Should the amplifier control motor current, voltage, or either?
      A: Current control only (the LoPoMoCo supported either).

    For the experimental platforms being developed, most of the sensors and actuators will be provided by purchased subsystems. Just as open software interfaces are fundamental research enablers, it is crucial to have open mechatronic interfaces.

    Several years ago, we created a custom Low Power Motor Controller (LoPoMoCo) to support our research (see figure). This board was originally designed to drive a small snake-like robot developed for throat surgery, but was subsequently applied to other robotic systems. A custom solution was created because no commercially available boards satisfied the requirements: to perform voltage (speed) or current (torque) control of DC motors, enforce torque limits in hardware (even when configured for voltage control), and provide precise feedback of small motor currents (100 mA and lower). The LoPoMoCo is a half-length ISA board that provides all I/O and power amplification for 4 robot axes.

    Although this board satisfied our control requirements, we noted a few areas for improvement, such as the interface to the control computer. The main problem is that a board physically located inside the PC (whether ISA or PCI) requires a significant amount of cabling to the remote sensors and actuators. One solution is to create a distributed system with embedded microprocessors located near the sensors and actuators. For research, however, we prefer to implement all control on the PC because it provides a low-cost, high-performance processor and a familiar development environment. Thus, we are creating a new robot controller that uses IEEE 1394 (Firewire), rather than ISA or PCI, to communicate between the PC and the remote I/O hardware. A key feature of this controller is that the remote hardware uses an FPGA to provide the interface between the Firewire bus and the I/O devices; there is no remote microprocessor and therefore no additional overhead. This design will enable closed-loop control to be performed on the PC at frequencies of 8 kHz and higher, assuming a real-time operating system is employed.
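
    To make the division of labor concrete, here is a minimal sketch of the kind of host-side servo loop this architecture enables: the PC reads feedback over the bus, computes the control law, and writes current commands back within each servo period. The class and method names (FirewireBoard, read_encoder, write_current) and the gains are hypothetical placeholders for illustration, not the actual controller or cisst API.

        import time

        # Hypothetical stand-in for one 4-axis FPGA/QLA board set; a real
        # driver would perform these operations over IEEE 1394.
        class FirewireBoard:
            def read_encoder(self, axis):
                # Placeholder: would read encoder counts from the FPGA.
                return 0.0

            def write_current(self, axis, amps):
                # Placeholder: would command the linear power amplifier.
                pass

        def servo_loop(board, axis, setpoint, kp=0.01, kd=0.0005, hz=8000):
            """PD position loop running entirely on the host PC.

            The FPGA only bridges the bus to the I/O hardware; a real-time
            OS is assumed in order to hold the 125 microsecond period.
            """
            period = 1.0 / hz
            prev_err = 0.0
            while True:
                start = time.monotonic()
                pos = board.read_encoder(axis)                    # 1. read feedback
                err = setpoint - pos
                cmd = kp * err + kd * (err - prev_err) / period   # 2. control law
                prev_err = err
                board.write_current(axis, cmd)                    # 3. write actuation
                # 4. sleep out the remainder of the servo period
                time.sleep(max(0.0, period - (time.monotonic() - start)))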
  • The Infrastructure Development Project is releasing open source libraries and frameworks for sensing, modeling, and manipulation.
    Figure: System architecture
    Software development is an essential element of most robotics research. We are developing an infrastructure for the integration of sensing, modeling, and manipulation that applies across diverse application domains. Our starting point is the cisst software developed at JHU to support research in medical robotics and computer-assisted interventions. The cisst foundation libraries (e.g., vectors, matrices, transformations, numerical methods, class and object registries, logging) and multi-tasking support libraries (operating system abstraction, devices, tasks) are already available as open-source software at trac.lcsr.jhu.edu/cisst. We will augment them with open source releases of libraries for sensing, modeling, and manipulation, and we will integrate these libraries into two research platforms for surgical and assistive robotics.

    The system architecture will consist of application frameworks, built with both in-house (cisst) and external software libraries and toolkits, that can be customized by the addition of research components, as shown in the figure. The software is written primarily in C++, with Python interfaces, and supports conventional operating systems (e.g., Windows, Linux, and Mac OS X) as well as real-time operating systems such as RTAI/Linux (Real Time Application Interface for Linux).

    Researchers can customize the frameworks either by adding new code and compiling/linking with the entire framework, or by compiling “plugin” modules and dynamically loading them at runtime. The dynamic loading approach lowers the software development threshold: a researcher can create a new algorithm using just the libraries and tools that it requires. The framework also includes the Interactive Research Environment (IRE), a Python interface to our C++ software. Although a compiled language such as C++ provides efficiency, an interactive development environment is crucial for rapid prototyping, testing, and debugging.
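
    The dynamic-loading idea can be illustrated in miniature. The sketch below uses Python for brevity: the framework discovers a component module at runtime and calls an agreed-upon entry point, so a researcher ships only the algorithm. The module name and the register convention are hypothetical illustrations, not the cisst plugin API (which is based on C++ dynamic loading).

        import importlib

        def load_plugin(module_name, framework):
            """Load a research component at runtime and let it register itself.

            The framework is built once; new algorithms are added without
            recompiling it. The 'register' entry point is an assumed
            convention for this sketch, not part of cisst.
            """
            module = importlib.import_module(module_name)
            module.register(framework)   # plugin hooks its tasks/devices in
            return module

        # Hypothetical usage: load a researcher's tracking component by name.
        # load_plugin("my_lab.visual_tracking", framework)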
  • The data linked below is described in

    • James C. Gwilliam, Zachary Pezzementi, Erica Jantho, Allison M. Okamura, Steven Hsiao. “Human vs. Robotic Tactile Sensing: Detecting Lumps in Soft Tissue.” In Haptics Symposium, March 2010.

    It is stored in the .mat v5 format, readable by Matlab and Octave. Loading this file will provide a cell array named models, which is size 18-x-16, indexed by model number then by indentation level. Each element is a 12-x-6 tactile image.

    models{X,Y}

    will give a 12-x-6 image of model X at indentation level Y.

    X: Models 1 – 18 (shown below)
    Y: Indentation of model into sensor (0.25 mm – 4.0 mm, by 0.25 mm increments).
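
    For readers working outside Matlab, the data can also be loaded with SciPy; a minimal sketch follows (the filename is a placeholder for the downloaded file). scipy.io.loadmat returns the cell array as a NumPy object array, so Matlab’s 1-based models{X,Y} becomes 0-based [X-1, Y-1] indexing.

        import scipy.io

        # Placeholder filename for the downloaded .mat (v5) data file.
        data = scipy.io.loadmat("lump_models.mat")
        models = data["models"]           # 18x16 object array of tactile images

        x, y = 5, 8                       # model 5 at indentation level 8 (2.0 mm)
        image = models[x - 1, y - 1]      # Matlab is 1-based; NumPy is 0-based
        print(image.shape)                # (12, 6) image of sensor voltages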


    Models with embedded lumps

    The models are numbered in the following pattern:

    00-10 (H10)
    Depth B1 B2 B3
    D1 1 4 7
    D2 2 5 8
    D3 3 6 9
    00-30 (H30)
    Depth B1 B2 B3
    D1 10 13 16
    D2 11 14 17
    D3 12 15 18

    where the D and B values are as described in the paper (D1 = 1.5 mm, D2 = 2.5 mm, D3 = 3.5 mm; B1 = 6.5 mm, B2 = 9.5 mm, B3 = 12.5 mm). The image units denote sensor voltages. Please refer to the paper for additional details.
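
    Because the numbering pattern above is regular, the model number can be computed directly from the indices; the small helper below (our illustration, not from the paper) makes the mapping explicit.

        def model_number(hardness, depth, width):
            """Map (hardness, depth, width) indices to the model number 1-18.

            hardness: 1 for the H10 set (models 1-9), 2 for the H30 set (10-18)
            depth:    1-3 for D1-D3 (1.5, 2.5, 3.5 mm)
            width:    1-3 for B1-B3 (6.5, 9.5, 12.5 mm)
            """
            return 9 * (hardness - 1) + 3 * (width - 1) + depth

        assert model_number(1, 1, 1) == 1     # H10, D1, B1
        assert model_number(1, 3, 2) == 6     # H10, D3, B2
        assert model_number(2, 1, 3) == 16    # H30, D1, B3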
