Calendar

Sep
30
Wed
LCSR Seminar: IP/COI Briefing @ https://wse.zoom.us/s/94623801186
Sep 30 @ 12:00 pm – 1:00 pm

Link for Live Seminar

Link for Recorded seminars – 2020/2021 school year

 

Peter A. Sheppard – Sr. Intellectual Property Manager, Johns Hopkins Technology Ventures

“Intellectual Property Primer For Conflict of Interest Training.”

 

Laura M. Evans – Senior Policy Associate, Director, Homewood IRB

“Conflicts of Interest: Identification, Review, and Management.”

Oct
7
Wed
LCSR Seminar: Ken Goldberg “The New Wave in Robot Grasping” @ https://wse.zoom.us/s/94623801186
Oct 7 @ 12:00 pm – 1:00 pm

Link for Live Seminar

Link for Recorded seminars – 2020/2021 school year

 

Abstract: Despite 50 years of research, robots remain remarkably clumsy, limiting their reliability for warehouse order fulfillment, robot-assisted surgery, and home decluttering.  The First Wave of grasping research is purely analytical, applying variations of screw theory to exact knowledge of pose, shape, and contact mechanics.  The Second Wave is purely empirical: end-to-end hyperparametric function approximation (aka Deep Learning) based on human demonstrations or time-consuming self-exploration.  A “New Wave” of research considers hybrid methods that combine analytic models with stochastic sampling and Deep Learning models.  I’ll present this history with new results from our lab on grasping diverse and previously-unknown objects.
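
In rough outline, a hybrid planner of the kind described above uses an analytic model to propose grasp candidates and a learned model to rank them. The Python sketch below illustrates that split under simplifying assumptions; the antipodal heuristic and the quality_net callable are hypothetical placeholders, not the speaker's actual pipeline.

    import numpy as np

    def sample_antipodal_grasps(points, normals, n_samples=200, rng=None):
        """Analytic stage: propose grasp candidates as antipodal point pairs
        whose surface normals roughly oppose each other (a crude force-closure test)."""
        rng = rng or np.random.default_rng()
        candidates = []
        for _ in range(n_samples):
            i, j = rng.choice(len(points), size=2, replace=False)
            if np.dot(normals[i], normals[j]) < -0.9:  # nearly opposed normals
                center = 0.5 * (points[i] + points[j])
                axis = points[j] - points[i]
                candidates.append((center, axis / (np.linalg.norm(axis) + 1e-9)))
        return candidates

    def pick_best_grasp(candidates, depth_image, quality_net):
        """Learned stage: score each analytic candidate with a trained model.
        `quality_net` is a placeholder for any callable mapping
        (depth image, grasp center, grasp axis) to a success probability."""
        scores = [quality_net(depth_image, c, a) for c, a in candidates]
        best = int(np.argmax(scores))
        return candidates[best], scores[best]

The analytic stage encodes geometric knowledge cheaply, while the learned stage absorbs the sensing noise and uncertainty that pure analysis handles poorly; that division of labor is the core of the hybrid argument.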

 

Bio: Ken Goldberg is the William S. Floyd Distinguished Chair in Engineering at UC Berkeley and an award-winning roboticist, filmmaker, artist and popular public speaker on AI and robotics. Ken trains the next generation of researchers and entrepreneurs in his research lab at UC Berkeley; he has published over 300 papers, 3 books, and holds 9 US Patents. Ken’s artwork has been featured in 70 art exhibits including the 2000 Whitney Biennial. He is a pioneer in technology and artistic visual expression, bridging the “two cultures” of art and science. With unique skills in communication and creative problem solving, invention, and thinking on the edge, Ken has presented over 600 invited lectures at events around the world.

 

Oct
14
Wed
LCSR Seminar: Axel Krieger “The IMERSE Lab: Designing Smarter Safer Surgery” @ https://wse.zoom.us/s/94623801186
Oct 14 @ 12:00 pm – 1:00 pm

Link for Live Seminar

Link for Recorded seminars – 2020/2021 school year

 

Abstract:

Robotic-assisted surgery (RAS) systems incorporate highly dexterous tools, hand tremor filtering, and motion scaling to enable a minimally invasive surgical approach, reducing collateral damage and patient recovery times. However, current state-of-the-art telerobotic surgery requires a surgeon to control every motion of the robot, resulting in long procedure times and inconsistent results. The advantages of autonomous robotic functionality have been demonstrated in applications outside of medicine, such as manufacturing and aviation. A limited form of autonomous RAS with pre-planned functionality was introduced in orthopedic procedures, radiotherapy, and cochlear implants. Efforts in automating soft tissue surgeries have so far been limited to elemental tasks such as knot tying, needle insertion, and executing predefined motions. The fundamental problems in soft tissue surgery include unpredictable shape changes, tissue deformations, and perception challenges.

 

My research goal is to transform current manual and teleoperated robotic soft tissue surgery to autonomous robotic surgery, improving patient outcomes by reducing the reliance on the operating surgeon, eliminating human errors, and increasing precision and speed. This presentation will introduce our Intelligent Medical Robotic Systems and Equipment (IMERSE) lab and discuss our novel strategies to overcome the challenges encountered in soft tissue autonomous surgery.  Presentation topics will include: a) a robotic system for supervised autonomous laparoscopic anastomosis, b) magnetically steered robotic suturing, c) development of patient specific biodegradable nanofiber tissue-engineered vascular grafts to optimally repair congenital heart defects (CHD), and d) our work on COVID-19 mitigation in ICU robotics, safe testing, and safe intubation.

 

Bio: Axel Krieger, PhD, and his IMERSE team joined LCSR in July 2020. He is an Assistant Professor in the Department of Mechanical Engineering at the Johns Hopkins University. He is leading a team of students, scientists, and engineers in the research and development of robotic tools and laparoscopic devices. Projects include the development of a surgical robot called the Smart Tissue Autonomous Robot (STAR) and the use of 3D printing for surgical planning and patient-specific implants. Professor Krieger is an inventor on over twenty patents and patent applications. Licensees of his patents include medical device start-ups Activ Surgical and PeriCor as well as industry leaders such as Siemens, Philips, and Intuitive Surgical. Before joining the Johns Hopkins University, Professor Krieger was an Assistant Professor in Mechanical Engineering at the University of Maryland and an Assistant Research Professor and Program Lead for Smart Tools at the Sheikh Zayed Institute for Pediatric Surgical Innovation at Children's National. He has several years of experience in private industry at Sentinelle Medical Inc. and Hologic Inc., where, as a Product Leader, he developed devices and software systems from concept to FDA approval and market introduction. Dr. Krieger completed his undergraduate and master's degrees at the University of Karlsruhe in Germany and his doctorate at Johns Hopkins, where he pioneered an MRI-guided prostate biopsy robot used in over 50 patient procedures at three hospitals.

 

Oct
21
Wed
LCSR Seminar: Dieter Fox “Toward robust manipulation in complex environments” @ https://wse.zoom.us/s/94623801186
Oct 21 @ 12:00 pm – 1:00 pm

Link for Live Seminar

Link for Recorded seminars – 2020/2021 school year

 

Abstract:

Over the last few years, advances in deep learning and GPU-based computing have enabled significant progress in several areas of robotics, including visual recognition, real-time tracking, object manipulation, and learning-based control. This progress has turned applications such as autonomous driving and delivery tasks in warehouses, hospitals, or hotels into realistic application scenarios. However, robust manipulation in complex settings is still an open research problem. Various research efforts show promising results on individual pieces of the manipulation puzzle, including manipulator control, touch sensing, object pose detection, task and motion planning, and object pickup. In this talk, I will present our work on integrating such components into a complete manipulation system. Specifically, I will describe a robot manipulator that can open and close cabinet doors and drawers in a kitchen, detect and pick up objects, and move these objects to desired locations. Our baseline system is designed to be applicable in a wide variety of environments, relying only on 3D articulated models of the kitchen and the relevant objects. I will discuss lessons learned so far, and various research directions toward enabling more robust and general manipulation systems that do not rely on existing models.

Bio:

Dieter Fox is Senior Director of Robotics Research at NVIDIA. He is also a Professor in the Paul G. Allen School of Computer Science & Engineering at the University of Washington, where he heads the UW Robotics and State Estimation Lab. Dieter obtained his Ph.D. from the University of Bonn, Germany.  His research is in robotics and artificial intelligence, with a focus on state estimation and perception applied to problems such as mapping, object detection and tracking, manipulation, and activity recognition. He has published more than 200 technical papers and is the co-author of the textbook “Probabilistic Robotics”. He is a Fellow of the IEEE and the AAAI, and recipient of the 2020 Pioneer in Robotics and Automation Award.  Dieter also received several best paper awards at major robotics, AI, and computer vision conferences. He was an editor of the IEEE Transactions on Robotics, program co-chair of the 2008 AAAI Conference on Artificial Intelligence, and program chair of the 2013 Robotics: Science and Systems conference.

Oct
28
Wed
LCSR Seminar: Michael Yip “Towards Autonomous Surgical Robots: New Strategies in Design, Control, and AI” @ https://wse.zoom.us/s/94623801186
Oct 28 @ 12:00 pm – 1:00 pm

Link for Live Seminar

Link for Recorded seminars – 2020/2021 school year

 

Abstract:

Surgical robots offer a potential future for combating doctor shortages, decreased access to care, and longer wait times. My lab has looked toward developing autonomous surgical robots that can break the dependency on having a human surgeon perform each procedure, which is not scalable to meet the increasing population of patients and suffers from large and unpredictable variability in doctors' experience, training, and even day-to-day alertness. However, with very limited exceptions, we (as roboticists) are not there yet — the hurdles facing surgical robotics AI and automation comprise a host of multidisciplinary problems, from challenging computer vision problems in robot and scene estimation, to control challenges with flexible and complex surgical instrumentation, to sub-second reactive motion planning in constrained and dynamic environments. In this talk, I will show how my lab's research toward autonomous surgical robots has led us to develop computationally efficient methods for deformable SLAM, model-free robot learning, neural motion planning, and machine learning models for trajectory optimization. Furthermore, I will show how these techniques, many of which are driven by data, are ubiquitous in that they extend not only to different surgical robots (both commercially available and those developed in the lab) but also to a broader set of applications across robot manipulation and bio-inspired robotics.

Bio:

Michael Yip is an Assistant Professor of Electrical and Computer Engineering at UC San Diego, IEEE RAS Distinguished Lecturer, Hellman Fellow,  and Director of the Advanced Robotics and Controls Laboratory (ARCLab). His group currently focuses on solving problems in data-efficient and computationally efficient robot control and motion planning through the use of various forms of learning representations, including deep learning and reinforcement learning strategies. His lab applies these ideas to surgical robotics and the automation of surgical procedures. Previously, Dr. Yip’s research has investigated different facets of haptics, soft robotics, artificial muscles, computer vision, and teleoperation. Dr. Yip’s work has been recognized through several best paper awards at ICRA, including the inaugural best paper award for IEEE’s Robotics and Automation Letters. Dr. Yip has previously been a Research Associate with Disney Research Los Angeles in 2014, a Visiting Professor with Amazon Robotics’ Machine Learning and Computer Vision group in Seattle, WA in 2018, and a Visiting Professor at Stanford University in 2019. He received a B.Sc. in Mechatronics Engineering from the University of Waterloo, an M.S. in Electrical Engineering from the University of British Columbia, and a Ph.D. in Bioengineering from Stanford University.

 

Nov
4
Wed
LCSR Seminar: Josie Hughes “Computational Design & Fabrication of Robots” @ https://wse.zoom.us/s/94623801186
Nov 4 @ 12:00 pm – 1:00 pm

Link for Live Seminar

Link for Recorded seminars – 2020/2021 school year

 

Abstract:

Automating the design and creation of complex robots is challenging due to the complexity of the design search space and of the physical processes required. To address this, new approaches are needed to understand how to design and optimize robotic structures for a given task. This talk introduces a number of techniques and processes for the computational design of robots, focusing on automated design, rapid fabrication, and task-specific learning. These include approaches ranging from biologically inspired design, to terrain-optimized robots developed by searching over 10,000s of possible designs, to Bayesian approaches for rapid task learning. Different application scenarios for these approaches are also presented. The talk concludes with a vision for the future in which bespoke robots can be automatically created for a given task.
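
As a rough illustration of the Bayesian search idea mentioned above (a sketch under stated assumptions, not the speaker's method), the snippet below runs Gaussian-process Bayesian optimization over two hypothetical design parameters, with simulate_robot standing in for a physics simulator that scores each candidate design.

    import numpy as np
    from scipy.stats import norm
    from sklearn.gaussian_process import GaussianProcessRegressor

    def simulate_robot(design):
        """Placeholder for a physics simulation that scores a design
        (here: leg length in m, gait frequency in Hz) on a target terrain."""
        leg_length, gait_freq = design
        return -(leg_length - 0.3) ** 2 - 0.1 * (gait_freq - 2.0) ** 2  # toy objective

    def optimize_design(n_init=5, n_iter=20, seed=0):
        rng = np.random.default_rng(seed)
        bounds = np.array([[0.1, 0.6], [0.5, 4.0]])  # [leg length, gait frequency]
        X = rng.uniform(bounds[:, 0], bounds[:, 1], size=(n_init, 2))
        y = np.array([simulate_robot(x) for x in X])
        gp = GaussianProcessRegressor(normalize_y=True)
        for _ in range(n_iter):
            gp.fit(X, y)
            # Score a random candidate pool by expected improvement over the best design so far.
            cand = rng.uniform(bounds[:, 0], bounds[:, 1], size=(1000, 2))
            mu, sigma = gp.predict(cand, return_std=True)
            imp = mu - y.max()
            z = imp / (sigma + 1e-9)
            ei = imp * norm.cdf(z) + sigma * norm.pdf(z)
            x_next = cand[np.argmax(ei)]
            X = np.vstack([X, x_next])
            y = np.append(y, simulate_robot(x_next))
        best = np.argmax(y)
        return X[best], y[best]

In practice the design parameterization, the candidate pool, and the objective would all come from the task and simulator at hand; the point of the surrogate model is simply to spend expensive simulations only on promising designs.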

 

Bio:

Josie Hughes completed her undergraduate, master's, and PhD degrees at the University of Cambridge. She finished her PhD in 2018, developing robots that utilize embodied mechanics and sensory coordination for advanced capabilities. Her research focused on manipulation, sensor technologies, and new approaches for designing and fabricating complex anthropomorphic manipulators. Josie is now working as a Post-Doctoral Research Associate in the Distributed Robotics Lab at MIT, where she works on computational design methods, wearable technologies, and novel robot fabrication methods. Her work has been published in Science Robotics, Nature Machine Intelligence, Soft Robotics, and many other conferences and journals. Additionally, she has led teams that have won over 5 international robotics competitions.

Nov
11
Wed
LCSR Seminar: Andinet Enquobahrie “Accelerating Medical Image Guided Intervention Research using Open Source Platforms” @ https://wse.zoom.us/s/94623801186
Nov 11 @ 12:00 pm – 1:00 pm

Link for Live Seminar

Link for Recorded seminars – 2020/2021 school year

 

Abstract:

Image-guided intervention techniques are replacing traditional intervention, surgery, and invasive procedures with minimally invasive techniques that incorporate medical imaging to guide the intervention. Patients prefer these procedures to open surgeries and interventions because they are typically less traumatic to the body and result in faster recovery times. Despite their many merits, image-guided intervention procedures are challenging due to restricted views and depth perception, limited mobility and maneuvering of surgical instruments, and, in some instances, poor tactile feedback, which makes it difficult to palpate organs. Virtual simulators and planning systems are powerful tools that allow clinicians to practice and rehearse their surgical and procedural skills in a risk-free environment. Software is an integral part of these virtual simulators and planners. Whether it is interfacing with a tracking device to collect position information from surgical instruments, integrating intra-operative and pre-operative images, controlling and guiding robots, or generating a 3D visualization to provide visual feedback to the clinician, software has a critical role. Open source software is playing a major role in increasing the pace of research and discovery in image-guided intervention systems by promoting collaborations between clinicians, biomedical engineers, and software developers across the globe. Kitware, Inc., a leader in the creation and support of open-source scientific computing software, is at the forefront of this type of effort. In this talk, I will provide an overview of image-guided intervention systems and discuss two NIH-funded image-guided intervention training projects currently led by Kitware: 1) a simulator that trains clinicians to improve procedural skill competence in real-time, ultrasound-guided renal biopsy, and 2) an interactive, patient-specific virtual surgical planning system for upper airway obstruction treatments.
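
As a rough illustration of the coordinate bookkeeping such guidance software performs (a generic sketch, not any specific Kitware tool or API), the snippet below maps a tracked instrument tip into pre-operative image coordinates by chaining rigid transforms; the frame names are hypothetical.

    import numpy as np

    def rigid_transform(rotation, translation):
        """Build a 4x4 homogeneous transform from a 3x3 rotation and a 3-vector translation."""
        T = np.eye(4)
        T[:3, :3] = rotation
        T[:3, 3] = translation
        return T

    def tool_tip_in_image(T_image_tracker, T_tracker_tool, tip_in_tool):
        """Express a tracked tool-tip position in pre-operative image coordinates
        by chaining transforms: image <- tracker <- tool."""
        tip_h = np.append(np.asarray(tip_in_tool), 1.0)
        return (T_image_tracker @ T_tracker_tool @ tip_h)[:3]

Here T_tracker_tool would come from the tracking device and T_image_tracker from an image-to-patient registration; the guidance display simply renders the resulting point on the pre-operative images in real time.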

 

Bio:

Dr. Enquobahrie received his Ph.D. in Electrical and Computer Engineering from Cornell University. He has an MBA from Poole College of Management at North Carolina State University with an emphasis in innovation management, product innovation, and technology evaluation and commercialization. Dr. Enquobahrie has authored or co-authored more than 70 publications in machine learning, image analysis, visualization, and image-guided intervention. He has served as a technical reviewer for several medical image analysis and image-guided intervention journals including Medical Imaging Computing and Computer Assisted Intervention (MICCAI), Computer Methods and Programs in Biomedicine, Academic Radiology, Journal of Digital Imaging, IEEE Transactions on Medical Imaging, and the IEEE International Conference on Robotics and Automation.

Nov
18
Wed
LCSR Seminar: Dan Bohus “Situated Interaction” @ https://wse.zoom.us/s/94623801186
Nov 18 @ 12:00 pm – 1:00 pm

Link for Live Seminar

Link for Recorded seminars – 2020/2021 school year

 

Abstract:

Situated language interaction is a complex, multimodal affair that extends well beyond the spoken word. When interacting, we use a wide array of non-verbal signals and incrementally coordinate with each other to simultaneously resolve several problems: we manage engagement, coordinate on taking turns, recognize intentions, and establish and maintain common ground as a basis for contributing to the conversation. Proximity and body pose, attention and gaze, head nods and hand gestures, prosody and facial expressions all play very important roles in this process. And just as advances in speech recognition opened up the field of spoken dialog systems a couple of decades ago, current advances in vision and other perceptual technologies are again opening up new horizons — we are starting to be able to build machines that computationally understand these social signals and the physical world around them, and participate in physically situated interactions and collaborations with people.

 

In this talk, using a number of research vignettes from work we have done over the last decade at Microsoft Research, I will draw attention to some of the challenges and opportunities that lie ahead of us in this exciting space. In particular, I will discuss issues with managing engagement and turn-taking in multiparty open-world settings, and more generally highlight the importance of timing and fine-grained coordination in situated language interaction. Finally, I will conclude by describing an open-source framework we are developing that promises to simplify the construction of physically situated interactive systems, and in the process further enable and accelerate research in this area.

 

Bio:

Dan Bohus is a Senior Principal Researcher in the Adaptive Systems and Interaction Group at Microsoft Research. His work centers on the study and development of computational models for physically situated spoken language interaction and collaboration. The long term question that shapes his research agenda is how can we enable interactive systems to reason more deeply about their surroundings and seamlessly participate in open-world, multiparty dialog and collaboration with people? Prior to joining Microsoft Research, Dan obtained his Ph.D. from Carnegie Mellon University.

Dec
2
Wed
LCSR Seminar – Life After Graduate School: Careers in Robotics – A Panel Discussion With Experts From Industry and Academia @ https://wse.zoom.us/s/94623801186
Dec 2 @ 12:00 pm – 1:00 pm

Link for Live Seminar

Link for Recorded seminars – 2020/2021 school year

 

Life After Graduate School: Careers in Robotics
A Panel Discussion With Experts From Industry and Academia
A Special LCSR Career Development Seminar

Please join us and a panel of robotics experts for a discussion of careers in robotics.

The panelists are:

Amy Blank, PhD
Senior Software Engineer and Manager
Barrett Advanced Robotics
Boston, Massachusetts

Muyinatu Bell, PhD
Assistant Professor
Department of Electrical and Computer Engineering
Department of Biomedical Engineering
Whiting School  of Engineering
Johns Hopkins University

Peter Kazanzides, PhD
Research Professor
Department of Computer Science
Whiting School  of Engineering
Johns Hopkins University

Cara LaPointe, PhD
Co-Director of the Johns Hopkins Institute for Assured Autonomy
Assured Intelligent Systems Program Manager
Johns Hopkins Applied Physics Laboratory

Moderator: Louis Whitcomb

Panelist Bios:

Amy Blank, PhD

Dr. Amy Blank is a Senior Software Engineer and Manager at Barrett Advanced Robotics, Boston, Massachusetts (https://advanced.barrett.com/), where she previously was a Senior Software Engineer. Dr. Blank received her undergraduate degree in Mechanical Engineering from the Pennsylvania State University in 2006 and completed her PhD in 2012 on proprioceptive motion feedback, task-dependent impedance, and their implications for upper-limb prosthesis control. She conducted post-doctoral research at LCSR on hybrid force/position control for teleoperation under large time delay using the Whole Arm Manipulator and the da Vinci Surgical System master console, and post-doctoral research at Rice University developing novel hardware, control algorithms, and haptic feedback systems for EMG-controlled robotic grippers.

Muyinatu Bell, PhD

Dr. Muyinatu Bell is an Assistant Professor of Electrical and Computer Engineering, Biomedical Engineering, and Computer Science at Johns Hopkins University, where she founded and directs the Photoacoustic and Ultrasonic Systems Engineering (PULSE) Lab. Dr. Bell earned a B.S. degree in Mechanical Engineering (biomedical engineering minor) from Massachusetts Institute of Technology (2006), received a Ph.D. degree in Biomedical Engineering from Duke University (2012), conducted research abroad as a Whitaker International Fellow at the Institute of Cancer Research and Royal Marsden Hospital in the United Kingdom (2009-2010), and completed a postdoctoral fellowship with the Engineering Research Center for Computer-Integrated Surgical Systems and Technology at Johns Hopkins University (2016). She is Associate Editor-in-Chief of IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control (T-UFFC), Associate Editor of IEEE Transactions on Medical Imaging, and holds patents for short-lag spatial coherence beamforming and photoacoustic-guided surgery. She is a recipient of multiple awards and honors, including MIT Technology Review’s Innovator Under 35 Award (2016), the NSF CAREER Award (2018), the NIH Trailblazer Award (2018), the Alfred P. Sloan Research Fellowship (2019), the ORAU Ralph E. Powe Jr. Faculty Enhancement Award (2019), and Maryland’s Outstanding Young Engineer Award (2019). She most recently received the inaugural IEEE UFFC Star Ambassador Lectureship Award (2020) from her IEEE society.

Peter Kazanzides, PhD

Peter Kazanzides received the Ph.D. degree in electrical engineering from Brown University in 1988 and began work on surgical robotics as a postdoctoral researcher at the IBM T.J. Watson Research Center. He co-founded Integrated Surgical Systems (ISS) in November 1990 to develop the ROBODOC System, which has been used for more than 20,000 hip and knee replacement surgeries. Dr. Kazanzides joined Johns Hopkins University in 2002, where he is appointed as a Research Professor of Computer Science. His current research is in the areas of medical robotics, space robotics and augmented reality.

Cara LaPointe, PhD

Dr. Cara LaPointe is a futurist who focuses on the intersection of technology, policy, ethics, and leadership. She is the Co-Director of the Johns Hopkins Institute for Assured Autonomy which works to ensure that autonomous systems are safe, secure, and trustworthy as they are increasingly integrated into every aspect of our lives. During more than two decades in the United States Navy, Dr. LaPointe held numerous roles in the areas of autonomous systems, acquisitions, ship design and production, naval force architecture, power and energy systems, and unmanned vehicle technology integration. At the Deep Submergence Lab of the Woods Hole Oceanographic Institution (WHOI), she conducted research in underwater autonomy and robotics, developing sensor fusion algorithms for deep-ocean autonomous underwater vehicle navigation.  Dr. LaPointe has served as an advisor to numerous global emerging technology initiatives and she is a frequent speaker on autonomy, artificial intelligence, blockchain, and other emerging technologies at a wide range of venues such as the United Nations, the World Bank, and the Organization for Economic Co-operation and Development. Dr. LaPointe is a patented engineer, a White House Fellow, and a French American Foundation Young Leader. She served for two Presidents as the Interim Director of the President’s Commission on White House Fellowships. She holds a Doctor of Philosophy in Mechanical and Oceanographic Engineering awarded jointly by the Massachusetts Institute of Technology (MIT) and WHOI, a Master of Science in Ocean Systems Management and a Naval Engineer degree from MIT, a Master of Philosophy in International Development Studies from the University of Oxford, and a Bachelor of Science in Ocean Engineering from the United States Naval Academy.

 

Jan
27
Wed
LCSR Seminar: Mahyar Fazlyab “Safe Deep Learning in Feedback Loops: A Robust Control Approach” @ https://wse.zoom.us/s/94623801186
Jan 27 @ 12:00 pm – 1:00 pm

Link for Live Seminar

Link for Recorded seminars – 2020/2021 school year

 

Abstract:

Neural networks have become increasingly effective at many difficult machine-learning tasks. However, the nonlinear and large-scale nature of deep neural networks (DNNs) makes them hard to analyze and, therefore, they are mostly used as black-box models without formal guarantees. This issue becomes even more complicated when DNNs are used in learning-enabled closed-loop systems, where a small perturbation can substantially impact the system being controlled. Therefore, it is of utmost importance to develop tools that can provide useful certificates of stability, safety, and robustness for DNN-driven systems.

 

In this talk, we present a convex optimization framework that can address several problems regarding deep neural networks. The main idea is to abstract hard-to-analyze components of a DNN (e.g., the nonlinear activation functions) with the formalism of quadratic constraints. This abstraction allows us to reason about various properties of DNNs (safety, robustness, stability in closed-loop settings, etc.) via semidefinite programming.
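
To give a flavor of the quadratic-constraint abstraction (one standard instantiation, not necessarily the exact form used in the talk): an activation function \varphi whose incremental slopes lie in [\alpha, \beta] (e.g. [0, 1] for ReLU or tanh) satisfies, for all x and y,

\[
\alpha (x - y)^2 \;\le\; \bigl(\varphi(x) - \varphi(y)\bigr)(x - y) \;\le\; \beta (x - y)^2,
\]

which can be rewritten as the quadratic constraint

\[
\begin{bmatrix} x - y \\ \varphi(x) - \varphi(y) \end{bmatrix}^{\top}
\begin{bmatrix} -2\alpha\beta & \alpha + \beta \\ \alpha + \beta & -2 \end{bmatrix}
\begin{bmatrix} x - y \\ \varphi(x) - \varphi(y) \end{bmatrix} \;\ge\; 0.
\]

Stacking one such constraint per neuron and combining them with the network's affine layers turns questions of safety, robustness, or closed-loop stability into a linear matrix inequality that an off-the-shelf semidefinite programming solver can check.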

 

Biography:

Mahyar Fazlyab will join the Department of Electrical and Computer Engineering as an assistant professor in July 2021. Currently, he is an assistant research professor at the Mathematical Institute for Data Science (MINDS) at Johns Hopkins University (JHU). Before that, Mahyar received his Ph.D. in Electrical and Systems Engineering (ESE) from the University of Pennsylvania (UPenn) in 2018, with a dual MA in Statistics from the Wharton School. He was also a postdoctoral fellow in the ESE Department at UPenn from 2018 to 2020. Mahyar's research interests are at the intersection of optimization, control, and machine learning. His current research focus is on the safety and stability of learning-enabled autonomous systems. Mahyar won the Joseph and Rosaline Wolf Best Doctoral Dissertation Award in 2019, awarded by the Department of Electrical and Systems Engineering at the University of Pennsylvania.