Calendar

Carla Pugh: Signatures: What can Sensors and Motion Tracking Technology Tell us about Technical Skills Performance? @ B17 Hackerman Hall
Nov 4 @ 12:00 pm – 1:00 pm

Speaker Bio

Dr. Carla Pugh is currently Vice Chair of Education and Patient Safety in the Department of Surgery at the University of Wisconsin–Madison. She is also Director of UW Health’s Inter-Professional Simulation Program. Her clinical area of expertise is Acute Care Surgery. Dr. Pugh obtained her undergraduate degree in Neurobiology at U.C. Berkeley and her medical degree at the Howard University School of Medicine. Upon completion of her surgical training at Howard University Hospital, she went to Stanford University and obtained a PhD in Education. Her research interests include the use of simulation technology for medical and surgical education. Dr. Pugh holds a method patent on the use of sensor and data-acquisition technology to measure and characterize the sense of touch. Currently, over two hundred medical and nursing schools are using one of her sensor-enabled training tools for their students and trainees. The use of simulation technology to assess and quantitatively define hands-on clinical skills is one of her major research areas. In addition to obtaining an NIH R01 in 2010 (to validate a sensorized device for high-stakes clinical skills assessments), her work has received numerous awards from medical and engineering organizations. In 2011, Dr. Pugh received the Presidential Early Career Award for Scientists and Engineers. Dr. Pugh is also the developer of several decision-based simulators that are currently being used to assess intra-operative judgment and team skills. This work was recently funded by a $2 million grant from the Department of Defense.

Greg Hager: From Mimicry to Mastery: Creating Machines that Augment Human Skill
Dec 2 @ 12:00 pm – 1:00 pm

Abstract

We are entering an era in which people will interact with smart machines to enhance the physical aspects of their lives, just as smart mobile devices have revolutionized how we access and use information. Robots already provide surgeons with physical enhancements that improve their ability to cure disease; we are seeing the first generation of robots that collaborate with humans to enhance productivity in manufacturing; and a new generation of startups is looking at ways to enhance our day-to-day existence through automated driving and delivery.

In this talk, I will use examples from surgery and manufacturing to frame some of the broad scientific, technological, and commercial trends that are converging to fuel progress on human-machine collaborative systems. I will describe how surgical robots can be used to observe surgeons “at work” and to define a “language of manipulation” from data, mirroring the statistical revolution in speech processing. With these models, it is possible to recognize, assess, and intelligently augment surgeons’ capabilities. Beyond surgery, new advances in perception, coupled with steadily declining costs and increasing capabilities of manipulation systems, have opened up new science and commercialization opportunities around manufacturing assistants that can be instructed “in situ.” Finally, I will close with some thoughts on the broader challenges still to be surmounted before we are able to create true collaborative partners.
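
To make the speech-processing analogy concrete, here is a minimal sketch of how a “language of manipulation” can be modeled: a hidden Markov model is fit to tool-motion kinematics, so that each hidden state plays the role of a motion primitive and decoding segments a trial into a gesture sequence. This is my own illustration rather than the speaker's actual pipeline, and the data layout and parameters below are hypothetical.

```python
# Illustrative sketch, not the speaker's actual pipeline: fit a hidden
# Markov model to tool-motion kinematics so that each hidden state acts
# as a motion primitive and decoding segments a trial into "gestures",
# much as a speech recognizer segments audio into phonemes.
import numpy as np
from hmmlearn import hmm  # pip install hmmlearn

# Hypothetical stand-in data: 30 s of tool-tip kinematics at 30 Hz, with
# columns such as x, y, z position and gripper angle.
rng = np.random.default_rng(0)
kinematics = rng.normal(size=(900, 4))

# Five hidden states play the role of primitives (e.g., reach, grasp,
# pull); the learned transition matrix encodes the "gesture grammar".
model = hmm.GaussianHMM(n_components=5, covariance_type="diag", n_iter=50)
model.fit(kinematics)

# Viterbi decoding labels every sample with its most likely primitive.
gesture_labels = model.predict(kinematics)
print(gesture_labels[:20])
```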


Speaker Bio

Gregory D. Hager is the Mandell Bellmore Professor of Computer Science at Johns Hopkins University. His research interests include collaborative and vision-based robotics, time-series analysis of image data, and medical applications of image analysis and robotics. He has published over 300 articles and books in these areas. Professor Hager is Chair of the Computing Community Consortium, a board member of the Computing Research Association, and currently a member of the governing board of the International Foundation of Robotics Research. In 2014, he was awarded a Hans Fischer Fellowship at the Institute for Advanced Study of the Technical University of Munich, where he also holds an appointment in Computer Science. He is a Fellow of the IEEE for his contributions to vision-based robotics, and has served on the editorial boards of IEEE TRO, IEEE PAMI, and IJCV. Professor Hager received his BA in Mathematics and Computer Science summa cum laude from Luther College (1983), and his MS (1986) and PhD (1988) from the University of Pennsylvania. He was a Fulbright Fellow at the University of Karlsruhe and was on the faculty of Yale University prior to joining Johns Hopkins. He is the founding CEO of Clear Guide Medical.

Henry Lin: Virtual Reality Surgical Simulation: “It’s not just a game. It’s a matter of saving lives.” @ B17 Hackerman Hall
Feb 3 @ 12:00 pm – 1:00 pm

Abstract

Increasingly, robotic technologies are targeting the general public. The adoption of these technologies depends on understanding the human-machine user experience. Research into how users learn to use a technology (learning curves) and how best to train them (training methodologies) is crucial in driving its success. Intuitive Surgical’s da Vinci surgical system is an interesting case study in how a complex machine can positively impact a highly technical and sensitive field: surgery. To augment hands-on training for the da Vinci, the company introduced the VR-based da Vinci Skills Simulator, which provides hands-on technical training on the surgeon console. In addition to the cost and time benefits of training on the simulator, it provides various forms of feedback and evaluation. This talk will discuss the research that goes into developing an effective VR-based surgical simulator – from designing modules with clear training goals, to developing proper metrics for feedback and analysis, to implementing proper scoring systems, to scientifically validating the modules.
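
As a concrete illustration of the scoring-system step, the sketch below shows one common way a simulator can fold raw exercise metrics into a single 0–100 proficiency score: each metric is normalized against expert and novice reference values, then combined as a weighted average. The metric names, weights, and benchmark values are hypothetical, not Intuitive Surgical’s actual formulas.

```python
# Illustrative only: combine raw exercise metrics into one 0-100 score.
# Metric names, weights, and benchmarks are hypothetical examples.
def composite_score(metrics, benchmarks, weights):
    """Map each raw metric onto 0-100 relative to expert/novice reference
    values (lower raw values are better), then return the weighted mean."""
    total, weight_sum = 0.0, 0.0
    for name, value in metrics.items():
        best, worst = benchmarks[name]           # expert vs. novice levels
        frac = (worst - value) / (worst - best)  # 1.0 at expert, 0.0 at novice
        subscore = 100.0 * min(max(frac, 0.0), 1.0)
        total += weights[name] * subscore
        weight_sum += weights[name]
    return total / weight_sum

metrics = {"time_s": 95.0, "path_length_cm": 310.0, "collisions": 2}
benchmarks = {"time_s": (60.0, 240.0),
              "path_length_cm": (200.0, 800.0),
              "collisions": (0.0, 10.0)}
weights = {"time_s": 0.4, "path_length_cm": 0.4, "collisions": 0.2}
print(f"score: {composite_score(metrics, benchmarks, weights):.1f}/100")
```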


Bio

Henry Lin manages the Surgical Simulation Development and Research Team at Intuitive Surgical. He started in the Medical Research Group, investigating surgical skill evaluation, and then moved to the Simulation Group to apply his research within the surgical simulation environment. He received his Ph.D. from Johns Hopkins University in 2010, working in the Computational Interaction and Robotics Lab under the guidance of Dr. Gregory Hager. His dissertation research, “Surgical Motion,” focused on understanding surgeons’ technical skill through the analysis of da Vinci kinematics data and video. He received the 2005 MICCAI Best Student Paper Award and the Link Fellowship for his work. After JHU, Dr. Lin spent two years as a postdoctoral fellow at the NIAAA at NIH, studying brain morphology changes due to alcohol abuse. He maintains his academic interests by publishing research manuscripts, reviewing for technical conferences including MICCAI and M2CAI, and reviewing for Intuitive Surgical’s clinical and technical grant programs. Dr. Lin also holds degrees in Computer Science from Columbia University and Carnegie Mellon University.

LCSR Seminar: Bilge Mutlu: Human-Centered Principles and Methods for Designing Robotic Technologies @ B17 Hackerman Hall
Feb 10 @ 12:00 pm – 1:00 pm

Abstract

The increasing emergence of robotic technologies that serve as automated tools, assistants, and collaborators promises tremendous benefits in everyday settings, from the home to manufacturing facilities. While robotic technologies promise interactions that can be far more complex than those with conventional technologies, their successful integration into the human environment requires these interactions to also be natural and intuitive. To achieve complex but intuitive interactions, designers and developers must simultaneously understand and address human and computational challenges. In this talk, I will present my group’s work on building human-centered guidelines, methods, and tools to address these challenges in order to facilitate the design of robotic technologies that are more effective, intuitive, acceptable, and even enjoyable. In particular, through a series of projects, this work demonstrates how marrying knowledge about people with computational methods can enable effective user interactions with social, assistive, and telepresence robots, as well as the development of novel tools and methods that support complex design tasks across the key stages of the design process. The talk will also cover our ongoing work that applies these guidelines to the development of real-world applications of robotic technology and targets the successful integration of these technologies into everyday settings.

Bio

Bilge Mutlu is an associate professor of computer science, psychology, and industrial engineering at the University of Wisconsin–Madison. He received his Ph.D. from Carnegie Mellon University’s Human-Computer Interaction Institute in 2009. His background combines training in interaction design, human-computer interaction, and robotics with industry experience in product design and development. Dr. Mutlu is a former Fulbright Scholar and the recipient of the NSF CAREER Award as well as several best-paper awards and nominations, including at HRI 2008, HRI 2009, HRI 2011, UbiComp 2013, IVA 2013, RSS 2013, HRI 2014, CHI 2015, and ASHA 2015. His research has been covered by national and international press, including New Scientist, MIT Technology Review, Discovery News, Science Nation, and Voice of America. He has served on the Steering Committee of the HRI Conference and the editorial board of IEEE Transactions on Affective Computing, and has co-chaired the Program Committees for RO-MAN 2016, HRI 2015, RO-MAN 2015, and ICSR 2011, the Program Sub-committees on Design for CHI 2013 and CHI 2014, and the organizing committee for HRI 2017.

LCSR Seminar Cancelled @ B17 Hackerman Hall
Feb 17 @ 12:00 pm – 1:00 pm

Anirudha Majumdar: Control of agile robots in complex environments with formal safety guarantees @ B17 Hackerman Hall
Feb 24 @ 12:00 pm – 1:00 pm

Abstract

The goal of my research is to develop algorithmic and theoretical techniques that push highly agile robotic systems to the brink of their hardware limits while guaranteeing that they operate safely despite uncertainty in the environment and dynamics. In this talk, I will describe my work on algorithms for the synthesis of feedback controllers that come with formal guarantees on the stability of the robot, and I will show how these controllers and certificates of stability can be used for robust planning in environments previously unseen by the system. To make these results possible, my work draws on computational tools such as sums-of-squares (SOS) programming and semidefinite programming from mathematical optimization, along with approaches from nonlinear control theory.
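
For readers unfamiliar with SOS certificates, the following is a generic sketch of how regional stability is typically certified in this literature (the notation is my own choice, not necessarily the talk's): one searches for a Lyapunov function and multiplier whose SOS constraints verify a region of attraction, and each constraint reduces to a semidefinite program.

```latex
% Generic SOS stability certificate (standard form from this literature;
% notation mine, not necessarily the talk's). For polynomial dynamics
% \dot{x} = f(x) with an equilibrium at the origin, the sublevel set
% { x : V(x) <= rho } is a certified region of attraction if, for some
% small epsilon > 0, a multiplier lambda(x) exists such that:
\begin{align*}
  V(x) - \epsilon \lVert x \rVert^{2} \;&\text{is SOS},\\
  -\nabla V(x)^{\top} f(x) + \lambda(x)\,\bigl(V(x) - \rho\bigr)
      - \epsilon \lVert x \rVert^{2} \;&\text{is SOS},\\
  \lambda(x) \;&\text{is SOS}.
\end{align*}
% Each "is SOS" constraint is equivalent to a semidefinite constraint on
% the polynomial's Gram matrix, so the search reduces to a semidefinite
% program.
```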

I will describe this work in the context of the problem of high-speed unmanned aerial vehicle (UAV) flight through cluttered environments previously unseen by the robot. In this context, the tools I have developed allow us to guarantee that the robot will fly through its environment in a collision-free manner despite uncertainty in the dynamics (e.g., wind gusts or modeling errors). The resulting hardware demonstrations on a fixed-wing airplane constitute one of the first examples of provably safe and robust control for robotic systems with complex nonlinear dynamics that need to plan in real time in environments with complex geometric constraints.

Bio

Anirudha Majumdar is a Ph.D. candidate in the Electrical Engineering and Computer Science department at MIT. He is a member of the Robot Locomotion Group at the Computer Science and Artificial Intelligence Lab, advised by Prof. Russ Tedrake. Ani received his undergraduate degree in Mechanical Engineering and Mathematics from the University of Pennsylvania, where he was a member of the GRASP Lab. His research is primarily in robotics: he works on algorithms for controlling highly dynamic robots, such as unmanned aerial vehicles, with formal guarantees on the safety of the system. Ani’s research has been recognized by the Siebel Foundation Scholarship and the Best Conference Paper Award at the International Conference on Robotics and Automation (ICRA) 2013.

Cenk Cavusoglu: Towards Intelligent Robotic Surgical Assistants @ B17 Hackerman Hall
Mar 2 @ 12:00 pm – 1:00 pm

Abstract

Robotic surgical systems are becoming widely used in surgical specialties ranging from urologic and gynecological surgery to cardiothoracic surgery. State-of-the-art robotic surgical systems represent tremendous improvements over manual minimally invasive surgery, with 6-degree-of-freedom manipulators that provide improved dexterity and immersive interfaces that provide improved hand-eye coordination. These systems are also becoming platforms for information augmentation. However, state-of-the-art robotic surgical systems still have substantial shortcomings: robotic surgical manipulations are slower than open surgical manipulations; the systems lack high-fidelity haptic feedback; and issues with situational awareness remain.

In this talk, I will present the current state of our research toward the development of intelligent robotic surgical assistants. The goal of this research is to develop robotic surgical systems that act more like surgical assistants and less like master-slave-controlled tools. In the proposed paradigm, the robotic surgical system has subtask automation capabilities for performing basic low-level manipulation tasks, allowing the surgeon to interact with the system at a high level rather than controlling it through low-level direct teleoperation. Such a system would potentially reduce the tedium of simple, repetitive tasks, assist the surgeon in complex manipulation tasks, and reduce the surgeon’s cognitive load.

Automated execution of surgical tasks requires the development of new robotic planning, perception, and control algorithms for robustly performing manipulation under substantial uncertainty. The presentation will introduce our recent work on these three aspects of the problem. I will first focus on our research on robotic perception algorithms; specifically, I will present algorithms for estimating deformable-object boundary constraints and material parameters, and for localizing and tracking surgical thread. I will then introduce our work on planning algorithms, which focuses on needle path planning for surgical suturing and on optimal needle grasp and entry-port planning. Finally, I will present control algorithms for needle driving and knot tying.
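
To give a flavor of the needle path planning problem, here is a minimal geometric sketch (my own simplification, not the speaker's planner): suturing needles are circular, so a natural path for one stitch is the arc of the needle's own radius passing through the tissue entry and exit points, and planning starts by locating the center of that arc.

```python
# Illustrative geometry sketch, not the speaker's planner: a circular
# suturing needle sweeps an arc of its own radius, so one stitch can be
# planned as the arc through the tissue entry and exit points.
import numpy as np

def suture_arc_center(entry, exit_, radius):
    """Center of the circular needle path through two points in a 2D
    tissue plane; returns the center on the side above the chord."""
    entry, exit_ = np.asarray(entry, float), np.asarray(exit_, float)
    chord = exit_ - entry
    d = np.linalg.norm(chord)
    if d > 2 * radius:
        raise ValueError("points farther apart than the needle diameter")
    midpoint = (entry + exit_) / 2.0
    normal = np.array([-chord[1], chord[0]]) / d  # unit normal to chord
    h = np.sqrt(radius**2 - (d / 2.0) ** 2)       # center-to-chord distance
    return midpoint + h * normal

# Example (units in mm): 8 mm bite with a needle of 6 mm radius.
center = suture_arc_center(entry=(0.0, 0.0), exit_=(8.0, 0.0), radius=6.0)
print(center)  # the needle tip then sweeps the arc about this center
```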


Bio

Cenk Cavusoglu is currently a Professor in the Department of Electrical Engineering and Computer Science at Case Western Reserve University (CWRU), with secondary appointments in Biomedical Engineering and in Mechanical and Aerospace Engineering. He received the B.S. degree in Electrical and Electronic Engineering from the Middle East Technical University, Ankara, Turkey, in 1995, and the M.S. and Ph.D. degrees in Electrical Engineering and Computer Sciences from the University of California, Berkeley, in 1997 and 2000, respectively. He was a Visiting Researcher at the INRIA Rhône-Alpes Research Center, Grenoble, France (1998); a Postdoctoral Researcher and Lecturer at the University of California, Berkeley (2000–2002); and a Visiting Associate Professor at Bilkent University, Ankara, Turkey (2009–2010).

Dr. Cavusoglu’s research spans the general areas of robotics and human-machine interfaces, with special emphasis on medical robotics and haptics. For the past twenty years, he has been conducting research on all aspects of medical robotic systems, from control, mechanism, and system design to human-machine interfaces, haptics, and algorithms.

More information on Dr. Cavusoglu’s research can be found on his homepage: http://engr.case.edu/cavusoglu_cenk/

Jeremy Brown: Smart Haptic Displays for Dexterous Manipulation of Telerobots @ B17 Hackerman Hall
Mar 9 @ 12:00 pm – 1:00 pm

Abstract

The human body is capable of dexterous manipulation in many different environments. Some environments, however, are challenging to access because of distance, scale, or limitations of the body itself. In many of these situations, access can be effectively restored via a telerobot, in which a human remotely controls a robot to perform the task. Dexterous manipulation through a telerobot is currently limited, and will be possible only if the interface between the operator’s body and the telerobot can accurately relay the sensory feedback resulting from the telerobot’s interactions with the environment.

This talk will focus on the scientific investigation of high-fidelity haptic interfaces that adequately translate the interactions between the telerobot and its environment to the operator’s body through the sense of touch. I will introduce the theme of “Smart Haptic Displays,” which are capable of modulating their own dynamic properties to compensate for the dynamics of the body and the telerobot, ensuring that the environment dynamics are accurately presented to the operator. Along the way, I will highlight contributions I have made for two specific telerobots: upper-limb prostheses and minimally invasive surgical robots. These contributions include an empirical validation of the utility of force feedback in body-powered prostheses and the creation of a testbed for comparing various haptic displays for pinching palpation in robotic surgery. Finally, I will briefly introduce a novel approach I am currently investigating that uses haptic signals to automatically predict a surgical trainee’s skill on a minimally invasive surgical robotic platform. As this work progresses, it will lead to interfaces that provide the rich haptic sensations the body has come to expect, enabling dexterous manipulation in any environment, whether or not access is mediated through a telerobot.
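
As background on what a haptic display computes, here is a minimal sketch of an impedance-type rendering law (generic textbook haptics, not the speaker's hardware or methods): each servo cycle, the device reads the tool position and commands a spring-damper force so the operator feels a stiff virtual wall. The gains and the 1 kHz rate are typical illustrative values.

```python
# Illustrative sketch of impedance-type haptic rendering (generic
# textbook example, not the speaker's system): command a spring-damper
# force so the operator feels a stiff virtual wall at position 0.
WALL_POS = 0.0      # wall surface location (m)
STIFFNESS = 2000.0  # virtual wall stiffness (N/m)
DAMPING = 5.0       # virtual wall damping (N*s/m)
DT = 0.001          # typical 1 kHz haptic servo period (s)

def wall_force(position, velocity):
    """Spring-damper wall: push back only while the tool penetrates it."""
    penetration = WALL_POS - position
    if penetration <= 0.0:
        return 0.0                # free space: render no force
    force = STIFFNESS * penetration - DAMPING * velocity
    return max(force, 0.0)        # the wall can only push, never pull

# One simulated step: tool 2 mm inside the wall, moving inward at 1 cm/s.
print(wall_force(position=-0.002, velocity=-0.01))  # ~4.05 N
```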


Bio

Jeremy D. Brown is a Postdoctoral Research Fellow in the Department of Mechanical Engineering and Applied Mechanics and the Haptics Group in the GRASP Lab at the University of Pennsylvania. He earned undergraduate degrees in applied physics and mechanical engineering from Morehouse College and the University of Michigan, respectively, and a Ph.D. in mechanical engineering from the University of Michigan, where he worked in the HaptiX Laboratory. His research focuses on the interface between humans and robots, with a specific emphasis on medical applications and haptic feedback. He has received several awards, including the National Science Foundation (NSF) Graduate Research Fellowship and the Penn Postdoctoral Fellowship for Academic Diversity.

Laboratory for Computational Sensing + Robotics