Calendar

Nov
11
Wed
LCSR Seminar: Andinet Enquobahrie “Accelerating Medical Image Guided Intervention Research using Open Source Platforms” @ https://wse.zoom.us/s/94623801186
Nov 11 @ 12:00 pm – 1:00 pm

Link for Live Seminar

Link for Recorded seminars – 2020/2021 school year

 

Abstract:

Image-guided intervention techniques are replacing traditional intervention, surgery, and invasive procedures with minimally invasive techniques that incorporate medical imaging to guide the intervention. Patients prefer these procedures to open surgeries and interventions because they are typically less traumatic to the body and result in faster recovery times. Despite their many merits, image-guided intervention procedures are challenging due to restricted views and depth perception, limited mobility and maneuvering of surgical instruments, and, in some instances, poor tactile feedback, which makes it difficult to palpate organs. Virtual simulators and planning systems are powerful tools that allow clinicians to practice and rehearse their surgical and procedural skills in a risk-free environment. Software is an integral part of these virtual simulators and planners. Whether it is interfacing with a tracking device to collect position information from surgical instruments, integrating intra-operative and pre-operative images, controlling and guiding robots, or generating a 3D visualization to provide visual feedback to the clinician, software plays a critical role. Open-source software is playing a major role in increasing the pace of research and discovery in image-guided intervention systems by promoting collaborations between clinicians, biomedical engineers, and software developers across the globe. Kitware, Inc., a leader in the creation and support of open-source scientific computing software, is at the forefront of this type of effort. In this talk, I will provide an overview of image-guided intervention systems and discuss two NIH-funded image-guided intervention training projects currently led by Kitware: 1) a simulator that trains clinicians to improve procedural skill competence in real-time, ultrasound-guided renal biopsy, and 2) an interactive, patient-specific virtual surgical planning system for upper airway obstruction treatments.
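As a purely illustrative sketch of one computation such guidance software performs, the snippet below aligns tracked intra-operative fiducial points with their pre-operative counterparts using the standard SVD-based least-squares rigid registration; the function and data here are hypothetical and are not taken from the talk or from any Kitware toolkit.

import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid transform (R, t) mapping Nx3 points src onto dst (SVD method of Arun et al.)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)            # centroids
    H = (src - src_c).T @ (dst - dst_c)                          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflections
    R = Vt.T @ D @ U.T
    t = dst_c - R @ src_c
    return R, t

# Hypothetical fiducials: pre-operative (image-frame) points and the same points
# reported by a tracker in the intra-operative frame (rotated 90 degrees and shifted).
pre_op = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0], [0.0, 10.0, 0.0], [0.0, 0.0, 10.0]])
R_true = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
tracked = pre_op @ R_true.T + np.array([5.0, 2.0, -3.0])
R, t = rigid_register(pre_op, tracked)
print(np.allclose(pre_op @ R.T + t, tracked))  # True: pre-operative points map into tracker space

In a real guidance system, a transform like this is what lets tracked instruments be rendered in the coordinate frame of the pre-operative images.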

 

Bio:

Dr. Enquobahrie received his Ph.D. in Electrical and Computer Engineering from Cornell University. He has an MBA from the Poole College of Management at North Carolina State University with an emphasis in innovation management, product innovation, and technology evaluation and commercialization. Dr. Enquobahrie has authored or co-authored more than 70 publications in machine learning, image analysis, visualization, and image-guided intervention. He has served as a technical reviewer for several medical image analysis and image-guided intervention journals and conferences, including Medical Image Computing and Computer Assisted Intervention (MICCAI), Computer Methods and Programs in Biomedicine, Academic Radiology, Journal of Digital Imaging, IEEE Transactions on Medical Imaging, and the IEEE International Conference on Robotics and Automation.

Nov
18
Wed
LCSR Seminar: Dan Bohus “Situated Interaction” @ https://wse.zoom.us/s/94623801186
Nov 18 @ 12:00 pm – 1:00 pm

Link for Live Seminar

Link for Recorded seminars – 2020/2021 school year

 

Abstract:

Situated language interaction is a complex, multimodal affair that extends well beyond the spoken word. When interacting, we use a wide array of non-verbal signals and incrementally coordinate with each other to simultaneously resolve several problems: we manage engagement, coordinate on taking turns, recognize intentions, and establish and maintain common ground as a basis for contributing to the conversation. Proximity and body pose, attention and gaze, head nods and hand gestures, prosody and facial expressions all play very important roles in this process. And just as advances in speech recognition opened up the field of spoken dialog systems a couple of decades ago, current advances in vision and other perceptual technologies are again opening up new horizons: we are starting to be able to build machines that computationally understand these social signals and the physical world around them, and that participate in physically situated interactions and collaborations with people.

 

In this talk, using a number of research vignettes from work we have done over the last decade at Microsoft Research, I will draw attention to some of the challenges and opportunities that lie ahead of us in this exciting space. In particular, I will discuss issues with managing engagement and turn-taking in multiparty open-world settings, and more generally highlight the importance of timing and fine-grained coordination in situated language interaction. Finally, I will conclude by describing an open-source framework we are developing that promises to simplify the construction of physically situated interactive systems, and in the process further enable and accelerate research in this area.

 

Bio:

Dan Bohus is a Senior Principal Researcher in the Adaptive Systems and Interaction Group at Microsoft Research. His work centers on the study and development of computational models for physically situated spoken language interaction and collaboration. The long-term question that shapes his research agenda is how we can enable interactive systems to reason more deeply about their surroundings and seamlessly participate in open-world, multiparty dialog and collaboration with people. Prior to joining Microsoft Research, Dan obtained his Ph.D. from Carnegie Mellon University.

Dec
2
Wed
LCSR Seminar – Life After Graduate School: Careers in Robotics – A Panel Discussion With Experts From Industry and Academia @ https://wse.zoom.us/s/94623801186
Dec 2 @ 12:00 pm – 1:00 pm

Link for Live Seminar

Link for Recorded seminars – 2020/2021 school year

 

Life After Graduate School: Careers in Robotics
A Panel Discussion With Experts From Industry and Academia
A Special LCSR Career Development Seminar

Please join us and a panel of robotics experts for a discussion of careers in robotics.

The panelists are:

Amy Blank, PhD
Senior Software Engineer and Manager
Barrett Advanced Robotics
Boston, Massachusetts

Muyinatu Bell, PhD
Assistant Professor
Department of Electrical and Computer Engineering
Department of Biomedical Engineering
Whiting School of Engineering
Johns Hopkins University

Peter Kazanzides, PhD
Research Professor
Department of Computer Science
Whiting School of Engineering
Johns Hopkins University

Cara LaPointe, PhD
Co-Director of the Johns Hopkins Institute for Assured Autonomy
Assured Intelligent Systems Program Manager
Johns Hopkins Applied Physics Laboratory

Moderator: Louis Whitcomb

Panelist Bios:

Amy Blank, PhD

Dr. Amy Blank is a Senior Software Engineer and Manager at Barrett Advanced Robotics (https://advanced.barrett.com/) in Boston, Massachusetts, where she previously served as a Senior Software Engineer. Dr. Blank received her undergraduate degree in Mechanical Engineering from the Pennsylvania State University in 2006 and completed her PhD in 2012 on proprioceptive motion feedback, task-dependent impedance, and their implications for upper-limb prosthesis control. She conducted post-doctoral research at LCSR on hybrid force/position control for teleoperation under large time delay using the Whole Arm Manipulator and the da Vinci Surgical System master console, and post-doctoral research at Rice University developing novel hardware, control algorithms, and haptic feedback systems for EMG-controlled robotic grippers.

Muyinatu Bell, PhD

Dr. Muyinatu Bell is an Assistant Professor of Electrical and Computer Engineering, Biomedical Engineering, and Computer Science at Johns Hopkins University, where she founded and directs the Photoacoustic and Ultrasonic Systems Engineering (PULSE) Lab. Dr. Bell earned a B.S. degree in Mechanical Engineering (biomedical engineering minor) from Massachusetts Institute of Technology (2006), received a Ph.D. degree in Biomedical Engineering from Duke University (2012), conducted research abroad as a Whitaker International Fellow at the Institute of Cancer Research and Royal Marsden Hospital in the United Kingdom (2009-2010), and completed a postdoctoral fellowship with the Engineering Research Center for Computer-Integrated Surgical Systems and Technology at Johns Hopkins University (2016). She is Associate Editor-in-Chief of IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control (T-UFFC), Associate Editor of IEEE Transactions on Medical Imaging, and holds patents for short-lag spatial coherence beamforming and photoacoustic-guided surgery. She is a recipient of multiple awards and honors, including MIT Technology Review’s Innovator Under 35 Award (2016), the NSF CAREER Award (2018), the NIH Trailblazer Award (2018), the Alfred P. Sloan Research Fellowship (2019), the ORAU Ralph E. Powe Jr. Faculty Enhancement Award (2019), and Maryland’s Outstanding Young Engineer Award (2019). She most recently received the inaugural IEEE UFFC Star Ambassador Lectureship Award (2020) from her IEEE society.

Peter Kazanzides, PhD

Peter Kazanzides received the Ph.D. degree in electrical engineering from Brown University in 1988 and began work on surgical robotics as a postdoctoral researcher at the IBM T.J. Watson Research Center. He co-founded Integrated Surgical Systems (ISS) in November 1990 to develop the ROBODOC System, which has been used for more than 20,000 hip and knee replacement surgeries. Dr. Kazanzides joined Johns Hopkins University in 2002, where he is appointed as a Research Professor of Computer Science. His current research is in the areas of medical robotics, space robotics and augmented reality.

Cara LaPointe, PhD

Dr. Cara LaPointe is a futurist who focuses on the intersection of technology, policy, ethics, and leadership. She is the Co-Director of the Johns Hopkins Institute for Assured Autonomy, which works to ensure that autonomous systems are safe, secure, and trustworthy as they are increasingly integrated into every aspect of our lives. During more than two decades in the United States Navy, Dr. LaPointe held numerous roles in the areas of autonomous systems, acquisitions, ship design and production, naval force architecture, power and energy systems, and unmanned vehicle technology integration. At the Deep Submergence Lab of the Woods Hole Oceanographic Institution (WHOI), she conducted research in underwater autonomy and robotics, developing sensor fusion algorithms for deep-ocean autonomous underwater vehicle navigation. Dr. LaPointe has served as an advisor to numerous global emerging technology initiatives, and she is a frequent speaker on autonomy, artificial intelligence, blockchain, and other emerging technologies at a wide range of venues such as the United Nations, the World Bank, and the Organization for Economic Co-operation and Development. Dr. LaPointe is a patented engineer, a White House Fellow, and a French American Foundation Young Leader. She served for two Presidents as the Interim Director of the President’s Commission on White House Fellowships. She holds a Doctor of Philosophy in Mechanical and Oceanographic Engineering awarded jointly by the Massachusetts Institute of Technology (MIT) and WHOI, a Master of Science in Ocean Systems Management and a Naval Engineer degree from MIT, a Master of Philosophy in International Development Studies from the University of Oxford, and a Bachelor of Science in Ocean Engineering from the United States Naval Academy.

 

Jan
27
Wed
LCSR Seminar: Mahyar Fazlyab “Safe Deep Learning in Feedback Loops: A Robust Control Approach” @ https://wse.zoom.us/s/94623801186
Jan 27 @ 12:00 pm – 1:00 pm

Link for Live Seminar

Link for Recorded seminars – 2020/2021 school year

 

Abstract:

Neural networks have become increasingly effective at many difficult machine-learning tasks. However, the nonlinear and large-scale nature of neural networks makes them hard to analyze, and therefore they are mostly used as black-box models without formal guarantees. This issue becomes even more complicated when deep neural networks (DNNs) are used in learning-enabled closed-loop systems, where a small perturbation can substantially impact the system being controlled. Therefore, it is of utmost importance to develop tools that can provide useful certificates of stability, safety, and robustness for DNN-driven systems.

 

In this talk, we present a convex optimization framework that can address several problems regarding deep neural networks. The main idea is to abstract hard-to-analyze components of a DNN (e.g., the nonlinear activation functions) with the formalism of quadratic constraints. This abstraction allows us to reason about various properties of DNNs (safety, robustness, stability in closed-loop settings, etc.) via semidefinite programming.
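To make the abstraction concrete, here is a minimal worked example (an illustration of the general idea, not material taken from the talk). A single ReLU unit y = max(0, x) is captured exactly by three quadratic constraints,

\[
y \ge 0, \qquad y - x \ge 0, \qquad y\,(y - x) = 0,
\]

each of which can be written as \( z^\top Q\, z \ge 0 \) (or \( = 0 \)) with \( z = (x, y, 1)^\top \) and a symmetric 3x3 matrix \( Q \); for instance,

\[
y\,(y - x) =
\begin{bmatrix} x & y & 1 \end{bmatrix}
\begin{bmatrix} 0 & -\tfrac{1}{2} & 0 \\ -\tfrac{1}{2} & 1 & 0 \\ 0 & 0 & 0 \end{bmatrix}
\begin{bmatrix} x \\ y \\ 1 \end{bmatrix}.
\]

Replacing every activation in a network with constraints of this form, and combining them with the affine layer equations and quadratic descriptions of the input set and of the property to be verified, yields a semidefinite program; a feasible certificate for that program then proves the property (for example, an output bound or a closed-loop stability condition) for all admissible inputs.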

 

Biography:

Mahyar Fazlyab will join the Department of Electrical and Computer Engineering as an assistant professor in July 2021. Currently, he is an assistant research professor at the Mathematical Institute for Data Science (MINDS) at Johns Hopkins University (JHU). Before that, Mahyar received his Ph.D. in Electrical and Systems Engineering (ESE) from the University of Pennsylvania (UPenn) in 2018, along with a master's degree in Statistics from the Wharton School. He was also a postdoctoral fellow in the ESE Department at UPenn from 2018 to 2020. Mahyar's research interests are at the intersection of optimization, control, and machine learning. His current research focus is on the safety and stability of learning-enabled autonomous systems. Mahyar won the Joseph and Rosaline Wolf Best Doctoral Dissertation Award in 2019, awarded by the Department of Electrical and Systems Engineering at the University of Pennsylvania.

 

Feb
3
Wed
LCSR Seminar: Overview of the human subjects research IRB review and approval process at Johns Hopkins School of Medicine and Homewood @ https://wse.zoom.us/s/94623801186
Feb 3 @ 12:00 pm – 1:00 pm

Link for Live Seminar

Link for Recorded seminars – 2020/2021 school year

 

Presenters:

Laura M. Evans – Senior Policy Associate, Director, Homewood IRB

Ken Borst – Associate Director, IRB Operations, SOM Admin Clinical Invest Human Subjects

 

Feb
10
Wed
LCSR Seminar: Shan Lin “Exploring Robust Real-time Instrument Segmentation for Endoscopic Sinus Surgery” @ https://wse.zoom.us/s/94623801186
Feb 10 @ 12:00 pm – 1:00 pm

Link for Live Seminar

Link for Recorded seminars – 2020/2021 school year

 

Abstract:

Vision-based surgical instrument segmentation, which aims to detect instrument regions in surgery images, is often a critical component of computer- or robot-assisted surgical systems. While advanced algorithms, including deep CNN models, have achieved promising instrument segmentation results, several limitations remain unsolved: (1) the robustness and generalization ability of existing algorithms are still insufficient for challenging surgery images, and (2) deep networks usually come with a high computation cost, which needs to be addressed for time-sensitive applications during surgery. In this talk, I will present two algorithms to address these challenges. First, I will introduce a lightweight CNN that achieves better segmentation performance with less inference time on low-quality endoscopic sinus surgery videos compared with several advanced deep networks. I will then discuss a domain adaptation method that can transfer knowledge learned from relevant, labeled datasets to perform instrument segmentation on an unlabeled dataset.
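To give a rough sense of what a "lightweight" segmentation network can look like (a generic sketch under my own assumptions, not the architecture from the talk), the PyTorch model below uses depthwise-separable convolutions, a standard way to cut parameter count and inference time, and produces a per-pixel instrument mask:

import torch
import torch.nn as nn

class DSConv(nn.Module):
    """Depthwise + pointwise convolution: far fewer parameters than a standard 3x3 conv."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, padding=1, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

class TinySegNet(nn.Module):
    """Minimal encoder-decoder for binary instrument segmentation (illustrative only)."""
    def __init__(self):
        super().__init__()
        self.enc1, self.enc2, self.enc3 = DSConv(3, 16), DSConv(16, 32), DSConv(32, 64)
        self.dec2, self.dec1 = DSConv(64, 32), DSConv(32, 16)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
        self.head = nn.Conv2d(16, 1, 1)  # one logit per pixel (instrument vs. background)

    def forward(self, x):
        x = self.enc1(x)
        x = self.enc2(self.pool(x))
        x = self.enc3(self.pool(x))
        x = self.dec1(self.up(self.dec2(self.up(x))))
        return self.head(x)

model = TinySegNet()
frame = torch.randn(1, 3, 256, 256)   # a dummy endoscopic video frame
mask_logits = model(frame)            # shape: (1, 1, 256, 256)
print(mask_logits.shape)

A deployed system would of course add skip connections, data augmentation, and the kind of domain adaptation described in the talk, but even this toy model illustrates the accuracy/latency trade-off involved.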

 

Biography:

Shan Lin is a PhD candidate in the Electrical and Computer Engineering department at the University of Washington working with Prof. Blake Hannaford on medical robotics. Her research focuses on surgical instrument segmentation and skill assessment.

 

Feb
17
Wed
LCSR Seminar: James Bellingham “Ocean Observing in the Age of Robots” @ https://wse.zoom.us/s/94623801186
Feb 17 @ 12:00 pm – 1:00 pm

Link for Live Seminar

Link for Recorded seminars – 2020/2021 school year

 

Abstract:

Progress in the ocean sciences has been fundamentally limited by the high cost of observing the ocean interior, which in turn has been driven by the necessity that humans go to sea to make those measurements. That linkage is being broken. We are on the cusp of an age where robotic systems will operate routinely without the on-site attendance of humans. In this talk I will discuss the design of survey-class Autonomous Underwater Vehicles and multi-platform observing systems, some implications for the future of marine systems, and the impact on how we do science at sea. These topics are impossible to discuss without considering the larger ocean technology enterprise. The use of robotics has been a key enabler for the offshore oil and gas industry and is making large inroads into defense. As robotics become more capable and accessible, their impacts will spread, enabling entirely new ocean enterprises. Thus marine robotics promises both to greatly improve our ability to observe the ocean and to offer a powerful enabling technology for ocean industries.

 

Biography:

James G. Bellingham's research activities center on the creation of new, high-performance classes of underwater robots and the design and operations of large-scale multi-platform field programs. He has led and participated in research expeditions around the world, from the Arctic to the Antarctic. Jim founded the Consortium for Marine Robotics at the Woods Hole Oceanographic Institution (WHOI), founded the Autonomous Underwater Vehicles Laboratory at MIT, and co-founded Bluefin Robotics. He was Director of Engineering and Chief Technologist at the Monterey Bay Aquarium Research Institute (MBARI). Jim serves on numerous advisory boards and National Academies studies. His honors include the Lockheed Martin Award for Ocean Science and Engineering, selection as MIT's fourteenth Robert Bruce Wallace Lecturer, the Blue Innovation Rising Tides Award, and the Navy Superior Public Service Award.

 

Feb
24
Wed
LCSR Seminar: Hao Su “High-Performance Soft Wearable Robots for Human Augmentation and Rehabilitation” @ https://wse.zoom.us/s/94623801186
Feb 24 @ 12:00 pm – 1:00 pm

Link for Live Seminar

Link for Recorded seminars – 2020/2021 school year

 

Abstract:

Wearable robots for physical augmentation of humans are the new frontier of robotics, but they are typically rigid, bulky, and limited to lab settings for steady-state walking assistance. To overcome those challenges, the first part of the talk will present a new design paradigm that leverages high-torque-density motors to enable the electrification of robotic actuation. With this paradigm, our rigid and soft robots achieve unprecedented performance, including the most lightweight powered exoskeleton, high compliance, and high-bandwidth human-robot interaction. The second part of the talk will focus on AI-powered controllers that estimate human dynamics and assist multimodal locomotion with superhuman performance: walking longer, squatting more, jumping higher, and swimming faster. We use robots as a tool for scientific discovery to explore new research fields, including wearable robots for pediatric rehabilitation and pain relief for musculoskeletal disorders. Our breakthrough advances in bionic limbs will provide greater mobility and new hope to those with physical disabilities. We envision that our work will enable a paradigm shift of wearable robots from lab-bound rehabilitation machines to ubiquitous personal robots for workplace injury prevention, pediatric and elderly rehabilitation, home care, and space exploration.

 

Biography:

Hao Su is the Irwin Zahn Endowed Assistant Professor in the Department of Mechanical Engineering at the City University of New York, City College. He is the Director of the Biomechatronics and Intelligent Robotics (BIRO) Lab. He was a postdoctoral research fellow at Harvard University and the Wyss Institute for Biologically Inspired Engineering. Before this role, he was a Research Scientist at Philips Research North America, where he designed robots for lung and cardiac surgery. He received his Ph.D. degree at Worcester Polytechnic Institute. Dr. Su received the NSF CAREER Award, the Best Medical Robotics Paper Runner-up Award at the IEEE International Conference on Robotics and Automation (ICRA), and the Philips Innovation Transfer Award. His research is sponsored by NSF (National Robotics Initiative, Cyber-Physical Systems, Future of Work), NIH R01, the National Institute on Disability, Independent Living, and Rehabilitation Research (NIDILRR), and the Toyota Mobility Foundation. He is currently directing a Center of Assistive and Personal Robotics for Independent Living (APRIL) funded by the National Science Foundation and the Department of Health and Human Services.

 

Mar
3
Wed
LCSR Seminar: Chad Jenkins “Semantic Robot Programming… and Maybe Making the World a Better Place” @ https://wse.zoom.us/s/94623801186
Mar 3 @ 12:00 pm – 1:00 pm

Link for Live Seminar

Link for Recorded seminars – 2020/2021 school year

 

Abstract:

The vision of interconnected, heterogeneous autonomous robots in widespread use is a coming reality that will reshape our world. Similar to “app stores” for modern computing, people at varying levels of technical background will contribute to “robot app stores” as designers and developers. However, current paradigms for programming robots beyond simple cases remain inaccessible to all but the most sophisticated developers and researchers. In order for people to fluently program autonomous robots, a robot must be able to interpret user instructions that accord with that user’s model of the world. The challenge is that many aspects of such a model are difficult or impossible for the robot to sense directly. We posit that a critical missing component is the grounding of semantic symbols in a manner that addresses both uncertainty in low-level robot perception and intentionality in high-level reasoning. Such a grounding will enable robots to fluidly work with human collaborators to perform tasks that require extended goal-directed autonomy.

 

I will present our efforts towards accessible and general methods of robot programming from the demonstrations of human users. Our recent work has focused on Semantic Robot Programming (SRP), a declarative paradigm for robot programming by demonstration that builds on semantic mapping. In contrast to procedural methods for motion imitation in configuration space, SRP is suited to generalize user demonstrations of goal scenes in workspace, such as for manipulation in cluttered environments. SRP extends our efforts to crowdsource robot learning from demonstration at scale through messaging protocols suited to web/cloud robotics. With such scaling of robotics in mind, prospects for cultivating both equal opportunity and technological excellence will be discussed in the context of broadening and strengthening Title IX and Title VI.

 

Biography:

Odest Chadwicke Jenkins, Ph.D., is a Professor of Computer Science and Engineering and Associate Director of the Robotics Institute at the University of Michigan. Prof. Jenkins earned his B.S. in Computer Science and Mathematics at Alma College (1996), M.S. in Computer Science at Georgia Tech (1998), and Ph.D. in Computer Science at the University of Southern California (2003). He previously served on the faculty of Brown University in Computer Science (2004-15). His research addresses problems in interactive robotics and human-robot interaction, primarily focused on mobile manipulation, robot perception, and robot learning from demonstration. His research often intersects topics in computer vision, machine learning, and computer animation. Prof. Jenkins has been recognized as a Sloan Research Fellow and is a recipient of the Presidential Early Career Award for Scientists and Engineers (PECASE). His work has also been supported by Young Investigator awards from the Office of Naval Research (ONR), the Air Force Office of Scientific Research (AFOSR) and the National Science Foundation (NSF). Prof. Jenkins is currently serving as Editor-in-Chief for the ACM Transactions on Human-Robot Interaction. He is a Fellow of the American Association for the Advancement of Science and the Association for the Advancement of Artificial Intelligence, and Senior Member of the Association for Computing Machinery and the Institute of Electrical and Electronics Engineers. He is an alumnus of the Defense Science Study Group (2018-19).

 

Mar
10
Wed
LCSR Seminar: Peter Kazanzides “Robotics and mixed reality to assist human task performance” @ https://wse.zoom.us/s/94623801186
Mar 10 @ 12:00 pm – 1:00 pm

Link for Live Seminar

Link for Recorded seminars – 2020/2021 school year

 

Abstract:

The capabilities of artificial intelligence and robotics have advanced significantly in recent years, but many tasks still require human involvement or oversight for at least some phases. This is especially true for critical tasks, such as surgery or space operations, where the costs of failure are high. We therefore consider approaches, such as mixed reality visualization, interactive interfaces and mechanical assistance, that can enable more effective partnerships between humans and machines. This presentation will highlight several examples in applications of computer-assisted interventions in the operating room and in space.

 

Biography:

Peter Kazanzides received the Ph.D. degree in electrical engineering from Brown University in 1988 and began work on surgical robotics as a postdoctoral researcher, advised by Russell H. Taylor, at the IBM T.J. Watson Research Center. Dr. Kazanzides co-founded Integrated Surgical Systems (ISS) in November 1990 to commercialize the robotic hip replacement research performed at IBM and the University of California, Davis. As Director of Robotics and Software, he was responsible for the design, implementation, validation and support of the ROBODOC System, which has been used for more than 20,000 hip and knee replacement surgeries. Dr. Kazanzides joined Johns Hopkins University in December 2002 and is currently appointed as a Research Professor of Computer Science. He is a member of the Laboratory for Computational Sensing and Robotics (LCSR) and directs the Sensing, Manipulation and Real-Time Systems (SMARTS) lab.  His research interests include medical robotics, space robotics, and mixed reality, which share the common themes of human/machine interfaces to keep the human in the loop, real-time sensing to account for uncertainty, and system engineering to enable deployment in the real world.