Calendar

Mar
2
Wed
Cenk Cavusoglu: Towards Intelligent Robotic Surgical Assistants @ B17 Hackerman Hall
Mar 2 @ 12:00 pm – 1:00 pm

Abstract

Robotic surgical systems are becoming widely used in surgical specialties ranging from urologic and gynecological surgery to cardiothoracic surgery. State-of-the-art robotic surgical systems represent tremendous improvements over manual minimally invasive surgery, with 6-degree-of-freedom manipulators that provide improved dexterity and immersive interfaces that provide improved hand-eye coordination. These systems are also becoming platforms for information augmentation. However, they still have substantial shortcomings: robotic surgical manipulations are slower than open surgical manipulations, the systems lack high-fidelity haptic feedback, and issues with situational awareness remain.


In this talk, I will introduce the current state of our research towards the development of intelligent robotic surgical assistants.  The goal of this research is to develop robotic surgical systems that can act more like surgical assistants and less like master-slave controlled tools.  In the proposed paradigm, the robotic surgical system will have subtask automation capabilities for performing basic low-level manipulation tasks. The subtask automation will allow the surgeon to have a high-level interaction with the system rather than controlling it through low-level direct teleoperation.  Such a system would potentially reduce tedium from simple, repetitive tasks; assist the surgeon in complex manipulation tasks; and reduce the cognitive load on the surgeon.


Automated execution of surgical tasks requires development of new robotic planning, perception, and control algorithms for robustly performing robotic manipulation under substantial uncertainty.  The presentation will introduce our recent work on these three aspects of the problem.  I will first focus on our research on robotic perception algorithms.  Specifically, I will present algorithms for estimation of deformable object boundary constraints and material parameters, and for localization and tracking of surgical thread.  I will then introduce our work on planning algorithms, which focus on needle path planning for surgical suturing, and optimal needle grasp and entry port planning.  Finally, I will present control algorithms for needle driving and knot tying.
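The abstract does not spell out the planning algorithms, but one geometric premise that suturing planners commonly build on is that a rigid circular needle causes the least tissue damage when it rotates about the center of its own arc. Purely as an illustration of that idea (the function, parameters, and 2-D setup below are invented for this sketch, not taken from the talk), a circular-arc needle path could be generated as follows:

```python
import numpy as np

def circular_needle_path(entry, exit_pt, radius, n_steps=20):
    """Waypoints for a circular suture needle tip moving from an entry
    point to an exit point in a 2-D tissue cross-section, rotating about
    the center of the needle's own circle (the minimal-damage path).
    Illustrative only; names and parameters are invented for this sketch."""
    entry = np.asarray(entry, dtype=float)
    exit_pt = np.asarray(exit_pt, dtype=float)
    chord = exit_pt - entry
    half = np.linalg.norm(chord) / 2.0
    if half == 0.0 or half > radius:
        raise ValueError("entry/exit must be distinct and within the needle diameter")
    # The circle's center sits on the perpendicular bisector of the chord,
    # above the tissue surface; the tip then sweeps through the tissue.
    mid = (entry + exit_pt) / 2.0
    unit_normal = np.array([-chord[1], chord[0]]) / (2.0 * half)
    center = mid + np.sqrt(radius**2 - half**2) * unit_normal
    a0 = np.arctan2(entry[1] - center[1], entry[0] - center[0])
    a1 = np.arctan2(exit_pt[1] - center[1], exit_pt[0] - center[0])
    angles = np.linspace(a0, a1, n_steps)  # assumes the arc avoids the +/-pi cut
    return center + radius * np.column_stack([np.cos(angles), np.sin(angles)])

# Example: a needle of 4 mm radius with entry and exit points 6 mm apart.
waypoints = circular_needle_path((0.0, 0.0), (6.0, 0.0), radius=4.0)
```

A real suturing planner must additionally handle tissue deformation, 3-D entry and exit poses, and grasp constraints, which is where the optimal needle grasp and entry port planning mentioned above comes in.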


Bio

Cenk Cavusoglu is currently a Professor in the Department of Electrical Engineering and Computer Science at Case Western Reserve University (CWRU), with secondary appointments in Biomedical Engineering and in Mechanical and Aerospace Engineering.  He received the B.S. degree in Electrical and Electronic Engineering from the Middle East Technical University, Ankara, Turkey, in 1995, and the M.S. and Ph.D. degrees in Electrical Engineering and Computer Sciences from the University of California, Berkeley, in 1997 and 2000, respectively.  He was a Visiting Researcher at the INRIA Rhône-Alpes Research Center, Grenoble, France (1998); a Postdoctoral Researcher and Lecturer at the University of California, Berkeley (2000-2002); and a Visiting Associate Professor at Bilkent University, Ankara, Turkey (2009-2010).


Dr. Cavusoglu’s research spans the general areas of robotics and human-machine interfaces, with special emphasis on medical robotics and haptics.  Specifically, for the past twenty years he has been conducting research on all aspects of medical robotic systems, from control, mechanism, and system design to human-machine interfaces, haptics, and algorithms.


More information on Dr. Cavusoglu’s research can be found at his homepage: http://engr.case.edu/cavusoglu_cenk/


Mar
4
Fri
LCSR Industry Day (rescheduled)
Mar 4 @ 8:00 am – 3:00 pm

Johns Hopkins University


Homewood Campus

Events include:

–  Presentations on the latest developments in JHU robotics laboratories

–  Demonstrations, a poster session, and speed networking

Register now


High Bay Main


LCSR Industry Day

Draft Agenda

Note: Schedule subject to change


08:00             Registration/continental breakfast

08:30             Welcome

08:40             Introduction to LCSR, Russell Taylor, Director

08:50             Overview of LCSR Robotics Research

–  Medical Robotics & Computer-Assisted Surgery

–  Human-Machine Collaborative Systems

–  Robots for Extreme Environments

–  Biorobotics

09:10 – 10:30  Research and Commercialization Highlights

10:30              BREAK

10:45 – 11:30  New Faculty Talks with Dr. Chen Li and Dr. Enrique Mallada


11:45 – 1:30   LUNCH / Open Lab Poster + Demo sessions

1:30 – 2:45     Industry and Student Speed Networking sessions


Mar
9
Wed
Jeremy Brown: Smart Haptic Displays for Dexterous Manipulation of Telerobots @ B17 Hackerman Hall
Mar 9 @ 12:00 pm – 1:00 pm

Abstract

The human body is capable of dexterous manipulation in many different environments. Some environments, however, are challenging to access because of distance, scale, or limitations of the body itself. In many of these situations, access can be effectively restored via a telerobot, in which a human remotely controls a robot to perform the task. Dexterous manipulation through a telerobot is currently limited; it will be possible only if the interface between the operator’s body and the telerobot can accurately relay the sensory feedback resulting from the telerobot’s interactions with the environment.
This talk will focus on the scientific investigation of high fidelity haptic interfaces that adequately translate the interactions between the telerobot and its environment to the operator’s body through the sense of touch. I will introduce the theme of “Smart Haptic Displays,” which are capable of modulating their own dynamic properties to compensate for the dynamics of the body and the telerobot to ensure the environment dynamics are accurately presented to the operator. Along the way, I will highlight contributions I have already made for two specific telerobots: upper-limb prostheses and minimally invasive surgical robots. These contributions include an empirical validation of the utility of force feedback in body-powered prostheses and the creation of a testbed to compare various haptic displays for pinching palpation in robotic surgery. Finally, I will briefly introduce a novel approach I am currently investigating that utilizes haptic signals to automatically predict a surgical trainee’s skill on a minimally invasive surgical robotic platform. As this work progresses, it will lead to the creation of interfaces that provide the rich haptic sensations the body has come to expect, and will allow for dexterous manipulation in any environment whether or not access is mediated through a telerobot.
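As a toy illustration of the “Smart Haptic Display” idea described above (a 1-DoF model with invented inertia and damping values, not the actual hardware or controller from the talk), the display can add a force term that cancels its own dynamics so that the operator ideally feels only the force measured at the telerobot:

```python
# 1-DoF toy model of a "smart" haptic display: the motor command adds a
# term cancelling the display's own (assumed) inertia and damping, so the
# operator ideally feels only the force sensed at the telerobot.
# All values are invented for illustration, not from the talk.
M_DEV = 0.05  # assumed handle inertia [kg]
B_DEV = 0.20  # assumed handle damping [N*s/m]

def motor_force(f_env: float, vel: float, accel: float) -> float:
    """f_env: force measured at the telerobot [N];
    vel, accel: measured handle velocity [m/s] and acceleration [m/s^2]."""
    dynamics_compensation = M_DEV * accel + B_DEV * vel
    return f_env + dynamics_compensation

# Relay a 1.5 N contact force while the handle is moving:
u = motor_force(f_env=1.5, vel=0.02, accel=0.1)
print(f"motor command: {u:.3f} N")
```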


Bio

Jeremy D. Brown is a Postdoctoral Research Fellow in the Department of Mechanical Engineering and Applied Mechanics and the Haptics Group in the GRASP Lab at the University of Pennsylvania. He earned undergraduate degrees in applied physics and mechanical engineering from Morehouse College and the University of Michigan, and a PhD in mechanical engineering from the University of Michigan, where he worked in the HaptiX Laboratory. His research focuses on the interface between humans and robots, with a specific emphasis on medical applications and haptic feedback. His honors include the National Science Foundation (NSF) Graduate Research Fellowship and the Penn Postdoctoral Fellowship for Academic Diversity.

Mar
16
Wed
No Seminar: Spring Break @ B17 Hackerman Hall
Mar 16 @ 12:00 pm – 1:00 pm
Mar
23
Wed
David Held: Using Motion to Understand Objects in the Real World @ 320 Hackerman Hall
Mar 23 @ 9:00 am – 10:00 am

Abstract

Many robots today are confined to operate in relatively simple, controlled environments. One reason for this is that current methods for processing visual data tend to break down when faced with occlusions, viewpoint changes, poor lighting, and other challenging but common situations that occur when robots are placed in the real world. I will show that we can train robots to handle these variations by modeling the causes behind visual appearance changes. If we model how the world changes over time, we can be robust to the types of changes that objects often undergo. I demonstrate this idea in the context of autonomous driving, and I show how we can use this idea to improve performance on three different tasks: velocity estimation, segmentation, and tracking with neural networks. By modeling the causes of appearance changes over time, we can make our methods more robust to a variety of challenging situations that commonly occur in the real world, thus enabling robots to come out of the factory and into our lives.


Bio

David Held is a Postdoctoral Researcher at U.C. Berkeley working with Pieter Abbeel. He recently completed his Ph.D. in Computer Science at Stanford, doing research at the intersection of robotics, computer vision, and machine learning. His Ph.D. was co-advised by Sebastian Thrun and Silvio Savarese. David has also interned at Google, working on the self-driving car project. Before Stanford, he worked as a software developer for a startup company and was a researcher at the Weizmann Institute, working on building a robotic octopus. He received a B.S. in Mechanical Engineering from MIT in 2005, an M.S. in Mechanical Engineering from MIT in 2007, and an M.S. in Computer Science from Stanford in 2012, for which he was awarded the Best Master’s Thesis Award from the Computer Science Department.

Yaoliang Yu: The Computational, Statistical, and Practical Aspects of Machine Learning @ B17 Hackerman Hall
Mar 23 @ 12:00 pm – 1:00 pm

Abstract

The big data revolution has profoundly changed, among many other things, how we perceive business, research, and applications. However, in order to fully realize the potential of big data, certain computational and statistical challenges need to be addressed. In this talk, I will present my research on facilitating the deployment of machine learning methodologies and algorithms in big data applications. I will first present robust methods that are capable of accounting for uncertain or abnormal observations. Then I will present a generic regularization scheme that automatically extracts compact and informative representations from heterogeneous, multi-modal, multi-array, time-series, and structured data. Next, I will discuss two gradient algorithms that are computationally very efficient for our regularization scheme, and I will mention their theoretical convergence properties and computational requirements. Finally, I will present a distributed machine learning framework that allows us to process extremely large-scale datasets and models. I will conclude my talk by sharing some future directions that I am pursuing and plan to pursue.
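The abstract does not name the two gradient algorithms, but for composite objectives of the form min_x f(x) + λ·g(x) with a nonsmooth regularizer, proximal gradient methods are a standard family. As a hedged, generic sketch (not necessarily the algorithms from the talk), here is the ℓ1-regularized least-squares case, whose proximal step is soft-thresholding:

```python
import numpy as np

def prox_grad_lasso(A, b, lam, step, n_iter=500):
    """Proximal gradient descent for min_x 0.5*||Ax-b||^2 + lam*||x||_1.
    A generic example of a gradient algorithm for a regularized
    objective; not necessarily one of the algorithms from the talk."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)            # gradient of the smooth part
        z = x - step * grad                 # plain gradient step
        # proximal step for lam*||.||_1: soft-thresholding
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
    return x

# Synthetic example with a sparse ground truth.
rng = np.random.default_rng(0)
A = rng.normal(size=(50, 20))
x_true = np.zeros(20)
x_true[:3] = [2.0, -1.0, 0.5]
b = A @ x_true + 0.01 * rng.normal(size=50)
step = 1.0 / np.linalg.norm(A, 2) ** 2      # 1/L, L = Lipschitz constant of the gradient
x_hat = prox_grad_lasso(A, b, lam=0.1, step=step)
```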


Bio

Yaoliang Yu is currently a research scientist affiliated with the Center for Machine Learning and Health and with the Machine Learning Department at Carnegie Mellon University. He obtained his PhD in computing science (under Dale Schuurmans and Csaba Szepesvári) from the University of Alberta, Canada, in 2013, and he received the PhD Dissertation Award from the Canadian Artificial Intelligence Association in 2015.

Mar
30
Wed
Berk Gonenc: Force Sensing for Robotic Assistance in Retinal Microsurgery @ B17 Hackerman Hall
Mar 30 @ 12:00 pm – 1:00 pm

Abstract

Microsurgery ranks among the most challenging areas of surgical practice, requiring the manipulation of extremely delicate tissues through micron-scale maneuvers and the application of very small forces. Vitreoretinal surgery, the most technically demanding field of ophthalmic surgery, treats disorders of the retina, vitreous body, and macula, such as retinal detachment, diabetic retinopathy, macular hole, and epiretinal membrane. Recent advances in medical robotics have significant potential to address most of the challenges in vitreoretinal practice, and therefore to prevent trauma, lessen complications, minimize intra-operative surgeon effort, maximize surgeon comfort, and promote patient safety.

In this talk, I will present the development of novel force-sensing tools and robot control methods that produce integrated assistive surgical systems working in partnership with surgeons to overcome the current limitations of microsurgery, focusing specifically on membrane peeling and vein cannulation tasks in retinal microsurgery. Integrating high-sensitivity force sensing into ophthalmic instruments enables precise quantitative monitoring of applied forces. Auditory feedback based upon the measured forces can inform (and warn) the surgeon quickly during surgery and help prevent injury due to excessive forces. Using these tools on a robotic platform can attenuate the surgeon’s hand tremor, which improves tool manipulation accuracy. In addition, based upon certain force signatures, the robotic system can actively guide the tool toward clinical targets, compensate for any involuntary motion of the surgeon, or generate additional motion that makes the surgical task easier. I will present our latest experimental results with the force-sensing ophthalmic instruments on two distinct robotic platforms, the Steady Hand Robot and Micron, which show significant performance improvements in artificial dry phantoms and ex-vivo biological tissues.
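To make the auditory-feedback idea concrete (the thresholds and cue mapping below are invented for illustration; the instruments’ real calibration is not given in the abstract), a force-monitoring loop might map measured tip forces, which in retinal surgery are on the order of millinewtons, to discrete audio cues:

```python
# Toy mapping from measured tool-tip force to an audio cue level.
# Thresholds are hypothetical; forces in retinal microsurgery are on
# the order of millinewtons, but the instruments' actual calibration
# is not stated in the abstract.
SAFE_MN = 5.0      # below this: no sound
CAUTION_MN = 7.5   # above this: urgent warning

def warning_level(tip_force_mn: float) -> str:
    """Map a tip force in millinewtons to a discrete audio cue."""
    if tip_force_mn < SAFE_MN:
        return "silent"       # normal manipulation
    if tip_force_mn < CAUTION_MN:
        return "slow beep"    # approaching the limit
    return "fast beep"        # excessive force, warn immediately

for f_mn in (2.0, 6.1, 9.3):  # simulated force samples
    print(f"{f_mn} mN -> {warning_level(f_mn)}")
```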


Bio

Berk Gonenc is a Ph.D. candidate in Mechanical Engineering at Johns Hopkins University. He received his M.S. degree in Mechanical Engineering from Washington State University Vancouver in 2011 and then joined the Advanced Medical Instrumentation and Robotics Research Laboratory at Johns Hopkins University. He received his M.S.E. degree in Mechanical Engineering from Johns Hopkins University in 2014. His research focuses on developing smart instruments and robot systems for microsurgery.

Apr
6
Wed
LCSR Seminar: Ali Kamen: Imaging for Personalized Healthcare @ B17 Hackerman Hall
Apr 6 @ 12:00 pm – 1:00 pm

Abstract

In this talk, I will give a perspective on past and current trends in medical imaging, particularly regarding the role of imaging in personalized medicine. I will then outline the core technologies enabling these advancements, with a specific focus on empirical and mechanistic modeling. In addition, I will demonstrate example clinical applications in which mechanistic and empirical models derived from imaging are used for treatment planning and therapy outcome analysis. I will conclude with a future outlook for the utilization of imaging across the healthcare continuum.

Apr
13
Wed
Christie Lecture: Vijay Kumar “Flying Robots: Beyond UAVs” @ B17 Hackerman Hall
Apr 13 @ 12:00 pm – 1:00 pm

Abstract

Flying robots can operate in three-dimensional, indoor and outdoor environments. However, many challenges arise as we scale down the size of the robot, which is necessary for operating in cluttered environments. I will describe recent work in developing small, autonomous robots, and the design and algorithmic challenges in the areas of (a) control and planning, (b) state estimation and mapping, and (c) coordinating large teams of robots. I will also discuss applications to search and rescue, first response and precision farming. Publications and videos are available at kumarrobotics.org.


Bio

Dr. Vijay Kumar is the Nemirovsky Family Dean of Penn Engineering, with appointments in the Departments of Mechanical Engineering and Applied Mechanics, Computer and Information Science, and Electrical and Systems Engineering at the University of Pennsylvania. Dr. Kumar received his Bachelor of Technology degree from the Indian Institute of Technology, Kanpur, and his Ph.D. from The Ohio State University in 1987. He has been on the faculty of the Department of Mechanical Engineering and Applied Mechanics, with a secondary appointment in the Department of Computer and Information Science, at the University of Pennsylvania since 1987.

Dr. Kumar served as the Deputy Dean for Research in the School of Engineering and Applied Science from 2000-2004. He directed the GRASP Laboratory, a multidisciplinary robotics and perception laboratory, from 1998-2004. He was the Chairman of the Department of Mechanical Engineering and Applied Mechanics from 2005-2008. He served as the Deputy Dean for Education in the School of Engineering and Applied Science from 2008-2012. He then served as the Assistant Director of Robotics and Cyber-Physical Systems at the White House Office of Science and Technology Policy (2012-2013).

Dr. Kumar is a Fellow of the American Society of Mechanical Engineers (2003), a Fellow of the Institute of Electrical and Electronics Engineers (2005), and a member of the National Academy of Engineering (2013). His research interests are in robotics, specifically multi-robot systems and micro aerial vehicles. He has served on the editorial boards of the IEEE Transactions on Robotics and Automation, IEEE Transactions on Automation Science and Engineering, ASME Journal of Mechanical Design, ASME Journal of Mechanisms and Robotics, and the Springer Tracts in Advanced Robotics (STAR).


For More Information or to RSVP please contact Deana Santoni at dsantoni@jhu.edu.

Sponsored by the Department of Mechanical Engineering and by the JHU Student Section and the Baltimore Section of the American Society of Mechanical Engineers.

Apr
20
Wed
LCSR Seminar: Timothy Kowalewski “Measuring Surgical Skill: Crowds, Robots, and Beyond” @ B17 Hackerman Hall
Apr 20 @ 12:00 pm – 1:00 pm

Abstract

For over a decade, surgical educators have called for objective, quantitative methods to measure surgical skill. To date, no satisfactory method exists that is simultaneously accurate, scalable, and generalizable: that is, one whose scores correlate with patient outcomes, that can scale to cope with the 51 million annual surgeries in the United States, and that generalizes across the diversity of surgical procedures and specialties. This talk will review the promising results of exploiting crowdsourcing techniques to meet this need. The talk will also survey the limitations of this approach, fundamental problems in establishing ground truth for surgical skill evaluation, and steps toward exploiting surgical robotics data. The talk will conclude by proposing some future robotic approaches that may obviate the need for surgeons to master complex technical skills in the first place.
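As a minimal sketch of the crowdsourcing premise (the data and 1-5 rating scale here are fabricated purely for illustration, not results from the talk), one can pool many crowd ratings per performance and check how well the pooled score tracks an expert panel:

```python
import numpy as np

# Fabricated toy data (1-5 Likert scale) illustrating the premise only.
rng = np.random.default_rng(1)
expert = np.array([4.6, 2.1, 3.8, 1.5, 4.9])             # expert panel mean per video
crowd = expert[:, None] + rng.normal(0.0, 0.7, (5, 30))  # 30 noisy crowd raters
crowd = crowd.clip(1.0, 5.0)                             # keep ratings on the scale
crowd_mean = crowd.mean(axis=1)                          # pooled crowd score per video
r = np.corrcoef(expert, crowd_mean)[0, 1]                # crowd-vs-expert agreement
print(f"crowd-vs-expert correlation: {r:.2f}")
```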


Bio

Dr. Kowalewski completed his PhD in electrical engineering on quantitative surgical skill evaluation at the University of Washington’s Biorobotics Lab. This work was recognized with a best doctoral candidate award at the American College of Surgeons AEI Consortium on Surgical Robotics and Simulation. He was also a research scientist on DARPA’s “Traumapod: Operating Room of the Future” project. He has helped commercialize his PhD work as quantitative skill evaluation hardware (Simulab Corp., Seattle, WA), pioneered the use of crowdsourcing for high-volume assessment of surgical skills, and co-founded CSATS Inc. (Seattle, WA) to make these methods available to modern healthcare. This work has been published in JAMA Surgery and formally adopted by the American Urological Association for educational and certification needs. In 2012 he started the Medical Robotics and Devices Lab in the Department of Mechanical Engineering at the University of Minnesota, where he is currently an Assistant Professor.

Laboratory for Computational Sensing + Robotics