Calendar

Feb
4
Wed
Uluc Saranli: Model-Based Methods for Robotic Legged Locomotion @ B17 Hackerman Hall
Feb 4 @ 12:00 pm – 1:00 pm

Abstract

Legged mobility has long been a key research area in mobile robotics. In this context, accurate dynamic models of locomotory behaviors provide tools that are useful both for understanding biological systems and for constructing robots and controllers that realize these behaviors. In this talk, I will focus on the latter, using the spring-mass models that have been instrumental in the understanding and artificial realization of running behaviors. I will first describe our work on finding approximate analytic solutions for spring-mass models of running, whose stance dynamics are otherwise non-integrable. I will then show different applications of these solutions, including adaptive control, state estimation, and footstep planning for planar running. Finally, I will describe a new method for energy regulation through virtual tuning of damping properties in such systems, targeting a level of energy and power efficiency that has not been possible with previous methods.
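As background for readers unfamiliar with the model (the notation below is mine, not taken from the talk): the planar spring-loaded inverted pendulum (SLIP) template describes stance as a point mass m on a massless spring leg of stiffness k and rest length r_0, written in polar coordinates (r, θ) about the toe, with θ measured from the vertical:

```latex
m\,\ddot{r} = m\,r\,\dot{\theta}^{2} + k\,(r_{0} - r) - m g \cos\theta ,
\qquad
m\,r\,\ddot{\theta} = -2\,m\,\dot{r}\,\dot{\theta} + m g \sin\theta .
```

Because gravity enters the angular equation through the θ-dependent term, angular momentum about the toe is not conserved during stance; this is the source of the non-integrability mentioned above and the reason approximate analytic solutions are of interest.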


Speaker Bio

Dr. Uluç Saranlı is an Associate Professor in the Department of Computer Engineering at Middle East Technical University, Ankara, Turkey. He received his B.S. degree in Electrical and Electronics Engineering from Middle East Technical University in 1996, and his M.S. and Ph.D. degrees in Computer Science from the University of Michigan in 1998 and 2002, respectively. He was then a postdoctoral associate at the Robotics Institute at Carnegie Mellon University until 2005. Before joining Middle East Technical University in 2012, he was an Assistant Professor in the Department of Computer Engineering at Bilkent University. His research interests focus on autonomous robotic mobility, with specific contributions in the modeling, analysis, and control of legged locomotion and in behavioral planning for dynamically dexterous robot morphologies.

Feb
11
Wed
Christopher Prentice: Renaissance: An Application of Robotics in Spine Surgery @ B17 Hackerman Hall
Feb 11 @ 12:00 pm – 1:00 pm

Abstract

The clinical applications of surgical robotics have advanced greatly over the past two decades, with Intuitive Surgical's da Vinci platform the commercially dominant system in the field. In the past decade, robotic platforms have been introduced commercially for applications beyond assisting a laparoscopic surgical approach. One such platform is the Renaissance System from Mazor Robotics, which differs greatly from the da Vinci platform and is focused on spine and cranial surgery. This talk will provide a historical review of the genesis and maturation of the Renaissance platform, an overview of its current applications in spine surgery and neurosurgery, and a review of the system's current limitations and potential areas for further development.


Speaker Bio

Christopher Prentice is Chief Executive Officer of Mazor Robotics Inc., the US subsidiary of Mazor Robotics Ltd. He received his B.S. degree in Systems Engineering from the United States Military Academy at West Point in 1992, his MBA in 1994 from Western New England University, and his MHA in 2009 from the University of South Florida. His career in healthcare and medical devices began in 1997 and includes product management, marketing, sales, and finance roles with Ethicon Endo-Surgery, Intuitive Surgical, Tampa General Hospital, and Mazor Robotics. His product focus has been on laser, radiotherapy, and robotic platforms intended to advance the quality of surgical care.

Feb
18
Wed
Muyinatu A. Lediju Bell: Light, Sound, Action: Toward Clinical Ubiquity of Photoacoustic Systems by Integrating Optics, Acoustics, and Robotics @ B17 Hackerman Hall
Feb 18 @ 12:00 pm – 1:00 pm

Abstract


Photoacoustic imaging has gained widespread popularity in molecular and preclinical applications, yet it is often excluded from conversations among primary care physicians, surgeons, and interventional radiologists. As an imaging modality that relies on light transmission, optical absorption, and the subsequent generation of sound waves, three primary challenges hinder its clinical advancement: (1) acoustic clutter noise artifacts; (2) limited optical penetration depths; and (3) restrictive system designs that fix light sources relative to acoustic receivers.


In this talk, I propose the integration of optical, acoustic, and robotic principles to overcome existing challenges. Acoustic clutter, which plagues ultrasound and photoacoustic images alike, is mitigated with a novel short-lag spatial coherence (SLSC) beamformer that I developed, resulting in improved image quality and an effective tripling of optical penetration depths. It is advantageous over conventional methods when applied to longstanding and emerging clinical practices, including transcranial, prostate, and vascular photoacoustic imaging, as well as liver, fetal, and cardiac ultrasound imaging. I will describe the acoustic theories that enable these improvements and demonstrate feasibility with data from computer simulations, phantoms, ex vivo tissue, and in vivo animal and human studies. Finally, I will show that autonomous or cooperative robotic control relies on this optimal image quality to enhance the maneuverability of system components, and thereby facilitate the flexible separation of light delivery from acoustic reception. This work promises to expand the technical envelope of photoacoustic imaging systems and revolutionize clinical standards of care.
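For context, the SLSC beamformer mentioned above (written here in my own notation, following the published description rather than the talk itself) replaces conventional delay-and-sum amplitude at each pixel with a coherence measure: the normalized spatial correlation of the delayed receive-channel signals s_i(n), averaged over channel pairs separated by lag m and summed over the first M lags:

```latex
\hat{R}(m) =
\frac{1}{N-m}\sum_{i=1}^{N-m}
\frac{\sum_{n=n_{1}}^{n_{2}} s_{i}(n)\,s_{i+m}(n)}
     {\sqrt{\sum_{n=n_{1}}^{n_{2}} s_{i}^{2}(n)\,\sum_{n=n_{1}}^{n_{2}} s_{i+m}^{2}(n)}},
\qquad
V_{\mathrm{SLSC}} = \sum_{m=1}^{M} \hat{R}(m),
```

where N is the number of receive channels and the maximum lag M is kept small relative to the aperture (hence "short-lag"). Incoherent clutter contributes little correlation at short lags, which is what suppresses the noise sources listed above.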


Speaker Bio

Dr. Muyinatu A. Lediju Bell is a postdoctoral fellow at Johns Hopkins University. She received her Ph.D. in Biomedical Engineering from Duke University in 2012 and earned her B.S. in Mechanical Engineering with a minor in Biomedical Engineering from the Massachusetts Institute of Technology in 2006. Dr. Bell is a recipient of numerous awards and fellowships including a Whitaker International Fellowship to spend a year abroad conducting research in the United Kingdom (2009), the UNCF-Merck Graduate (2011) and Postdoctoral (2012) Research Fellowships, and the Ford Foundation Postdoctoral Fellowship for her commitment to using diversity as a resource for enriching the education of all students (2012). Her research interests include ultrasound and photoacoustic imaging, image-guided surgery, medical robotics, and medical device design.


Feb
25
Wed
Aaron Dollar: “Mechanical Intelligence” in Robotic Manipulation: Towards Human-level Dexterity in Robotic and Prosthetic Hands @ B17 Hackerman Hall
Feb 25 @ 12:00 pm – 1:00 pm

Abstract

The human hand is the pinnacle of dexterity: it can powerfully grasp a wide range of object sizes and shapes as well as delicately manipulate objects held within the fingertips. Current robotic and prosthetic systems, however, have only a fraction of that manual dexterity. My group attempts to close this gap in two main ways: the mechanical design of effective hands, and the study of human hand function and use for inspiration and performance benchmarking. In terms of hand design, we strongly prioritize passive mechanics, incorporating adaptive underactuated transmissions and carefully tuned compliance, and seek to maximize open-loop performance while minimizing complexity. To motivate and benchmark these efforts, we are examining human hand usage during daily activities and quantifying functional aspects such as precision manipulation workspaces. Besides describing these efforts, I will touch on other work in the lab related to legged robots, novel fabrication techniques, modular robots, and the study of non-human “hands”.


Speaker Bio

Aaron Dollar is the John J. Lee Associate Professor of Mechanical Engineering and Materials Science at Yale and is currently a Visiting Professor in the Department of Ecology and Evolutionary Biology at Brown. He earned a B.S. in Mechanical Engineering at UMass Amherst, S.M. and Ph.D. degrees in Engineering Science at Harvard, and was a postdoctoral associate at MIT in Health Sciences and Technology and the Media Lab. Prof. Dollar is the recipient of a number of awards, including young investigator awards from AFOSR, DARPA, NASA, and NSF, and is the founder of the IEEE Robotics and Automation Society Technical Committee on Mechanisms and Design and open-source teaching and research repositories RoboticsCourseWare.org and OpenRobotHardware.org.

Mar
4
Wed
Marcia O’Malley: Natural Sensory Feedback for Intuitive Prosthesis Control @ B17 Hackerman Hall
Mar 4 @ 12:00 pm – 1:00 pm

Abstract

Able-bodied individuals can easily take for granted the dexterous capabilities of the human hand. Key to our ability to manipulate common objects with ease are the rich sensory cues conveying force and object properties, often without the need for visual attention to the task. For amputees, these manipulations can require significant time, visual attention, and cognitive effort due to the lack of sensory feedback, even in the most advanced prosthetic hands. In this talk I will describe our approach to improving dexterous manipulation with prosthetic hands, and a series of experiments that have provided new insight into the importance of providing natural sensory feedback cues to the residual limb of prosthesis users. I will also briefly describe the other major research thrusts of my group, including robotic rehabilitation of the upper limb following stroke and incomplete spinal cord injury, and quantitative assessment of motor skill for training in virtual environments, with a special focus on endovascular surgical procedures.


Speaker Bio

Marcia O’Malley received the B.S. degree in mechanical engineering from Purdue University in 1996, and the M.S. and Ph.D. degrees in mechanical engineering from Vanderbilt University in 1999 and 2001, respectively. She is currently Professor of Mechanical Engineering and of Computer Science at Rice University and directs the Mechatronics and Haptic Interfaces Lab. She is an Adjunct Associate Professor in the Departments of Physical Medicine and Rehabilitation at both Baylor College of Medicine and the University of Texas Medical School at Houston. Additionally, she is the Director of Rehabilitation Engineering at TIRR-Memorial Hermann Hospital, and is a co-founder of Houston Medical Robotics, Inc. Her research addresses issues that arise when humans physically interact with robotic systems, with a focus on training and rehabilitation in virtual environments. In 2008, she received the George R. Brown Award for Superior Teaching at Rice University. O’Malley is a 2004 ONR Young Investigator and the recipient of the NSF CAREER Award in 2005. She is a Fellow of the American Society of Mechanical Engineers, and currently serves on the editorial board for the ASME Journal of Mechanisms and Robotics.

Mar
11
Wed
Terry Peters: Augmented Reality and Ultrasound for Interventional Cardiac Guidance @ B17 Hackerman Hall
Mar 11 @ 12:00 pm – 1:00 pm

Abstract

Many intra-cardiac interventions are performed either via open-heart surgery or using minimally invasive approaches in which instrumentation is introduced into the cardiac chambers via the vascular system or the heart wall. While many of the latter procedures are performed under x-ray guidance, for some of them x-ray imaging is not appropriate and ultrasound is the preferred intra-operative imaging modality. One such procedure is the repair of a mitral-valve leaflet using an instrument introduced into the heart via the apex. The standard of care for this procedure employs a 3D trans-esophageal probe for guidance, but primarily in its bi-plane mode, with full 3D used only sporadically. In spite of the clinical success of this procedure, many problems are encountered during navigation of the instrument to the site of the therapy. To overcome these difficulties, we have developed a guidance platform that tracks the US probe and instrument and augments the US images with virtual elements representing the instrument and target, to optimise the navigation process. Animal studies using this approach have demonstrated improved performance on multiple metrics, reducing total tool distance from the ideal pathway, total navigation time, and total tool path length by factors of 3, 4, and 5, respectively, as well as a 40-fold reduction in the number of times an instrument intruded into potentially unsafe zones in the heart. Ongoing work applies these ideas to aortic valve replacement as well.


Speaker Bio

Dr. Terry Peters is a Scientist in the Imaging Research Laboratories at the Robarts Research Institute (RRI), London, ON, Canada, and a Professor in the Departments of Medical Imaging and Medical Biophysics at Western University, London, Canada, as well as a member of the Graduate Programs in Neurosciences and Biomedical Engineering. He is also an Adjunct Professor at McGill University in Montreal. He received his graduate training in Electrical Engineering at the University of Canterbury in New Zealand, under the direction of Professor Richard Bates. His Ph.D. work dealt with fundamental issues in computed tomography image reconstruction and resulted in a seminal paper on the topic in 1971, just prior to the beginning of CT’s commercial development and worldwide application. For the past 30 years, his research has built on this foundation, focusing on the application of computational hardware and software advances to medical imaging modalities in surgery and therapy. Starting in 1978 at the Montreal Neurological Institute (MNI), Dr. Peters’ lab pioneered many of the image-guidance techniques and applications for image-guided neurosurgery. In 1997, he was recruited by the Robarts Research Institute at Western to establish a focus on image-guided surgery and therapy within the Robarts Imaging Research Laboratories. His lab has expanded over the past seventeen years to encompass image-guided procedures of the heart, brain, and abdomen. He has authored over 250 peer-reviewed papers and book chapters, a similar number of abstracts, and has delivered over 200 invited presentations. He has mentored over 85 trainees at the Masters, Doctoral, and Postdoctoral levels.


He is a Fellow of the Institute of Electrical and Electronics Engineers; the Canadian College of Physicists in Medicine; the Canadian Organization of Medical Physicists; the American Association of Physicists in Medicine; the Australasian College of Physical Scientists and Engineers in Medicine; the MICCAI Society; and the Institute of Physics. In addition, he received the Dean’s Award for Research Excellence at Western University in 2011, the Hellmuth Prize for Achievement in Research from Western in 2012, and the Medical Image Computing and Computer Assisted Intervention (MICCAI) Society’s Enduring Impact Award in 2014.

Mar
18
Wed
Spring Break: No Seminar
Mar 18 @ 12:00 pm – 1:00 pm
Mar
25
Wed
Magnus Egerstedt: From Global Properties to Local Rules in Multi-Agent Systems @ B17 Hackerman Hall
Mar 25 @ 12:00 pm – 1:00 pm

Abstract

The last few years have seen significant progress in our understanding of how one should structure multi-robot systems. New control, coordination, and communication strategies have emerged and, in this talk, we discuss some of these developments. In particular, we will show how one can go from global, geometric, team-level specifications to local coordination rules for achieving and maintaining formations, area coverage, and swarming behaviors. One aspect of this concerns how users can interact with networks of mobile robots in order to inject new, global information and objectives. We will also investigate what global objectives are fundamentally implementable in a distributed manner on a collection of spatially distributed and locally interacting agents.


Speaker Bio

Magnus Egerstedt is the Schlumberger Professor in the School of Electrical and Computer Engineering at the Georgia Institute of Technology, where he serves as Associate Chair for Research and External Affairs. He received the M.S. degree in Engineering Physics and the Ph.D. degree in Applied Mathematics from the Royal Institute of Technology, Stockholm, Sweden, the B.A. degree in Philosophy from Stockholm University, and was a Postdoctoral Scholar at Harvard University. Dr. Egerstedt conducts research in the areas of control theory and robotics, with particular focus on control and coordination of complex networks, such as multi-robot systems, mobile sensor networks, and cyber-physical systems. Magnus Egerstedt is the Deputy Editor-in-Chief for the IEEE Transactions on Network Control Systems, the director of the Georgia Robotics and Intelligent Systems Laboratory (GRITS Lab), a Fellow of the IEEE, and a recipient of the ECE/GT Outstanding Junior Faculty Member Award, the HKN Outstanding Teacher Award, the Alum of the Year Award from the Royal Institute of Technology, and the U.S. National Science Foundation CAREER Award.

Apr
8
Wed
Julie Shah: Integrating Robots into Team-Oriented Environments @ B17 Hackerman Hall
Apr 8 @ 12:00 pm – 1:00 pm

Abstract

Recent advances in computation, sensing, and hardware enable robots to perform an increasing share of traditionally manual tasks in manufacturing. Yet the assembly mechanic often cannot be removed entirely from the process. This provides new economic motivation to explore opportunities where human workers and industrial robots may work in close physical collaboration. In this talk, I present the development of new algorithmic techniques for collaborative plan execution that scale to real-world industrial applications. I also discuss the design of new models for robot planning, which use insights and data derived from the planning and execution strategies employed by successful human teams to support more seamless robot participation in human work practices. This includes models for human-robot team training, which involves hands-on practice to clarify the sequencing and timing of actions, and for team planning, which includes communication to negotiate and clarify the allocation and sequencing of work. The aim is to support both the human and robot workers in co-developing a common understanding of task responsibilities and information requirements, to produce more effective human-robot partnerships.


Speaker Bio

Julie Shah is an Assistant Professor in the Department of Aeronautics and Astronautics at MIT and leads the Interactive Robotics Group of the Computer Science and Artificial Intelligence Laboratory. Shah received her SB (2004) and SM (2006) from the Department of Aeronautics and Astronautics at MIT, and her PhD (2010) in Autonomous Systems from MIT. Before joining the faculty, she worked at Boeing Research and Technology on robotics applications for aerospace manufacturing. She has developed innovative methods for enabling fluid human-robot teamwork in time-critical, safety-critical domains, ranging from manufacturing to surgery to space exploration. Her group draws on expertise in artificial intelligence, human factors, and systems engineering to develop interactive robots that emulate the qualities of effective human team members to improve the efficiency of human-robot teamwork. In 2014, Shah was recognized with an NSF CAREER award for her work on “Human-aware Autonomy for Team-oriented Environments,” and by the MIT Technology Review TR35 list as one of the world’s top innovators under the age of 35. Her work on industrial human-robot collaboration was also recognized by the Technology Review as one of the 10 Breakthrough Technologies of 2013, and she has received international recognition in the form of best paper awards and nominations from the International Conference on Automated Planning and Scheduling, the American Institute of Aeronautics and Astronautics, the IEEE/ACM International Conference on Human-Robot Interaction, the International Symposium on Robotics, and the Human Factors and Ergonomics Society.

Apr
15
Wed
Jim Freudenberg: 15 years of Embedded Control Systems at the University of Michigan @ B17 Hackerman Hall
Apr 15 @ 12:00 pm – 1:00 pm

Abstract

In 1998 we were asked by the local automotive industry to create a class to better train engineers to write embedded control software. We have now taught the class for 15 years at the University of Michigan and at ETH Zurich, and currently serve almost 200 students per year. In this talk, we describe the motivations for the class from an automotive perspective, as well as the instructional lab, which uses a state-of-the-art industry microprocessor (the Freescale MPC5643L) and haptic wheels. We then describe the NSF-sponsored research into haptics and cyber-physical systems that we perform in the lab, in collaboration with Professor Brent Gillespie of the University of Michigan Mechanical Engineering Department.


Laboratory for Computational Sensing + Robotics