Calendar

LCSR Seminar: Daniela Rus “Learning Risk and Social Behavior in Mixed Human-Autonomous Vehicles Systems” @ https://wse.zoom.us/s/94623801186
Wed, Oct 6 @ 12:00 pm – 1:00 pm

Link for Live Seminar

Link for Recorded seminars – 2021/2022 school year

 

Abstract:

Deployment of autonomous vehicles (AVs) on public roads promises increases in efficiency and safety, and requires intelligent situation awareness. We wish to have autonomous vehicles that can learn to behave in safe and predictable ways, and that are capable of evaluating risk, understanding the intent of human drivers, and adapting to different road situations. This talk describes an approach to learning and integrating risk and behavior analysis in the control of autonomous vehicles. I will introduce Social Value Orientation (SVO), which captures how an agent’s social preferences and cooperation affect interactions with other agents by quantifying its degree of selfishness or altruism. SVO can be integrated into control and decision making for AVs. I will provide recent examples of self-driving vehicles capable of adaptation.
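
For context, a hedged aside rather than material from the talk: SVO is commonly quantified as an angle that weighs an agent’s own reward against the reward of another agent, and an agent’s utility is then a cosine/sine-weighted mix of the two. The Python sketch below illustrates that standard formulation; the function names and reward values are invented for illustration.

```python
import math

def svo_angle(reward_self: float, reward_other: float) -> float:
    """SVO as an angle in degrees: ~0 egoistic, ~45 prosocial, ~90 altruistic."""
    return math.degrees(math.atan2(reward_other, reward_self))

def svo_utility(reward_self: float, reward_other: float, phi_deg: float) -> float:
    """Utility an agent with SVO angle phi_deg assigns to a joint outcome."""
    phi = math.radians(phi_deg)
    return math.cos(phi) * reward_self + math.sin(phi) * reward_other

# Illustrative numbers: yielding at a merge costs the ego vehicle 1 unit of
# travel time but saves the other driver 3 units.
print(round(svo_angle(reward_self=-1.0, reward_other=3.0), 1))  # 108.4 degrees for this outcome
print(round(svo_utility(-1.0, 3.0, phi_deg=45.0), 2))           # 1.41: a prosocial agent favors yielding
```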

 

Biography:

Daniela Rus is the Andrew (1956) and Erna Viterbi Professor of Electrical Engineering and Computer Science, Director of the Computer Science and Artificial Intelligence Laboratory (CSAIL) at MIT, and Deputy Dean of Research in the Schwarzman College of Computing at MIT. Rus’ research interests are in robotics and artificial intelligence. The key focus of her research is to develop the science and engineering of autonomy. Rus is a Class of 2002 MacArthur Fellow, a fellow of ACM, AAAI, and IEEE, and a member of the National Academy of Engineering and the American Academy of Arts and Sciences. She is a senior visiting fellow at MITRE Corporation and the recipient of the Engelberger Award for robotics. She earned her PhD in Computer Science from Cornell University.

LCSR Seminar: Danail Stoyanov “Towards Understanding Surgical Scenes Using Computer Vision” @ https://wse.zoom.us/s/94623801186
Wed, Oct 13 @ 12:00 pm – 1:00 pm

Link for Live Seminar

Link for Recorded seminars – 2021/2022 school year

 

 

Abstract:

Digital cameras have dramatically changed interventional and surgical procedures. Modern operating rooms utilize a range of cameras to minimize invasiveness or to provide vision beyond human capabilities in magnification, spectra, or sensitivity. Such surgical cameras provide the most informative and rich signal from the surgical site, containing information about activity and events as well as physiology and tissue function. This talk will highlight some of the opportunities for computer vision in surgical applications and the challenges in translation to clinically usable systems.

 

Bio:

Dan Stoyanov is a Professor of Robot Vision in the Department of Computer Science at University College London, Director of the Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), a Royal Academy of Engineering Chair in Emerging Technologies, and Chief Scientist at Digital Surgery Ltd. Dan first studied electronics and computer systems engineering at King’s College London before completing a PhD in Computer Science at Imperial College London, where he specialized in medical image computing.

 

LCSR Seminar: Pablo Arbelaez “Towards Robust Artificial Intelligence” @ https://wse.zoom.us/s/94623801186
Wed, Oct 20 @ 12:00 pm – 1:00 pm

Link for Live Seminar

Link for Recorded seminars – 2021/2022 school year

 

 

Abstract:

I will discuss recent efforts at CinfonIA in enhancing interpretability in deep neural networks through the use of adversarial robustness and multimodal information.

 

Biography:

Pablo Arbeláez received his PhD with honors in Applied Mathematics from the Université Paris Dauphine in 2005. He was a Senior Research Scientist with the Computer Vision Group at UC Berkeley from 2007 to 2014. He currently holds a faculty position in the Department of Biomedical Engineering at Universidad de los Andes in Colombia. Since 2020, he has led the Center for Research and Formation in Artificial Intelligence (CinfonIA) at UniAndes. His research interests are in computer vision and machine learning, in which he has worked on several problems, including perceptual grouping, object recognition, and the analysis of biomedical images.

LCSR Seminar: LCSR Faculty “Interviewing for Jobs in Academia and Industry” @ https://wse.zoom.us/s/94623801186
Wed, Oct 27 @ 12:00 pm – 1:00 pm

Link for Live Seminar

Link for Recorded seminars – 2021/2022 school year

 


 

Speakers: Louis Whitcomb, Marin Kobilarov, and the LCSR Faculty

Abstract:
This LCSR professional development seminar will review the process of interviewing for jobs in academia (e.g., faculty, postdoc, and scientist positions) and industry (e.g., engineering, scientist, and management positions), and will provide tips and best practices for successful interviewing.

 

LCSR Seminar: Kel Guerin “Building an End-User Focused Operating System for Robotics” @ https://wse.zoom.us/s/94623801186
Wed, Nov 3 @ 12:00 pm – 1:00 pm

Link for Live Seminar

Link for Recorded seminars – 2021/2022 school year

 

Abstract:

There are more than 2 million industrial robots used worldwide every day, and yet these devices represent one of the most fragmented technologies in the world. With more than 100 brands of industrial robots, each with its own proprietary, difficult-to-learn software and programming languages, we are not seeing the exponential growth we expected from robots. The computer industry faced a similar challenge in the early 1980s with the advent of the PC, and computers did not see explosive growth until a few key platforms emerged that made computers accessible to end users and ran on a common software platform. At READY Robotics, we believe the same is true for robots, and that is why we are building Forge/OS, our “Windows” for the robotics space, which lets every robot speak the same language and provides the same award-winning user experience to end users. We will talk about how this technology came about, how we think it can change the future, and the journey from the initial research performed at Johns Hopkins University up to today.

 

Biography:

Kel Guerin has been working in the robotics space for more than 10 years, focusing on the design and usability of a wide variety of robots, including systems for space exploration, deep mining, surgery, and industrial manufacturing. While obtaining his Ph.D. from Johns Hopkins University (defended 2016), Kel worked specifically on the challenge of making industrial robots more flexible and easier to use. The result was his award-winning Forge Operating System and easy-to-use programming interface for industrial robots. Kel spun out his technology into READY Robotics, an industrial robotics start-up he co-founded in 2016. His work has been featured in the Wall Street Journal and Forbes, and READY’s products have been called “the Swiss Army knife of robots” by Inc. magazine.

LCSR Seminar: Maya Cakmak @ https://wse.zoom.us/s/94623801186
Wed, Nov 10 @ 12:00 pm – 1:00 pm

LCSR Seminar: Alaa Eldin Abdelaal “An “Additional View” on Human-Robot Interaction and Autonomy in Robot-Assisted Surgery” @ https://wse.zoom.us/s/94623801186
Wed, Nov 17 @ 12:00 pm – 1:00 pm

Link for Live Seminar

Link for Recorded seminars – 2021/2022 school year

 

Abstract:

Robot-assisted surgery (RAS) has gained momentum over the last few decades, with nearly 1,200,000 RAS procedures performed in 2019 alone using the da Vinci Surgical System, the most widely used surgical robotics platform. Current state-of-the-art surgical robotic systems use only a single endoscope to view the surgical field. In this talk, we present a novel design of an additional “pickup” camera that can be integrated into the da Vinci Surgical System. We then explore the benefits of our design for human-robot interaction (HRI) and autonomy in RAS. On the HRI side, we show how this “pickup” camera improves depth perception and how its additional view can lead to better surgical training. On the autonomy side, we show how automating the motion of this camera provides better visualization of the surgical scene. Finally, we show how this automation work inspires the design of novel execution models for the automation of surgical subtasks, leading to superhuman performance.

 

Biography:

Alaa Eldin Abdelaal is a PhD candidate at the Robotics and Control Laboratory at the University of British Columbia and a visiting graduate scholar at the Computational Interaction and Robotics Lab at Johns Hopkins University. He holds a B.Sc. in Computer and Systems Engineering from Mansoura University in Egypt and an M.Sc. in Computing Science from Simon Fraser University in Canada. His research interests are at the intersection of autonomy and human-robot interaction for human skill augmentation and decision support, with application to surgical robotics. His work is co-advised by Dr. Tim Salcudean and Dr. Gregory Hager. His research has been recognized with the Best Bench-to-Bedside Paper Award at the International Conference on Information Processing in Computer-Assisted Interventions (IPCAI) 2019. He is the recipient of the Vanier Canada Graduate Scholarship, the most prestigious scholarship for PhD students in Canada.

LCSR Seminar: LCSR Faculty “Panel on commercialization of robotics research” @ https://wse.zoom.us/s/94623801186
Wed, Dec 1 @ 12:00 pm – 1:00 pm

Link for Live Seminar

Link for Recorded seminars – 2021/2022 school year

 

 

Abstract:

In this seminar, a panel of three LCSR faculty, Dr. Peter Kazanzides, Dr. Marin Kobilarov, and Dr. Axel Krieger, will discuss their experience in commercializing robotics research through licensing and start-ups. The panel will include question-and-answer sessions with the audience.

 

LCSR Seminar: Tomas Lozano-Perez “Generalization in Planning and Learning for Robotic Manipulation” @ https://wse.zoom.us/s/94623801186
Wed, Jan 26 @ 12:00 pm – 1:00 pm

Link for Live Seminar

Link for Recorded seminars – 2021/2022 school year

 

Abstract:

An enduring goal of AI and robotics has been to build a robot capable of robustly performing a wide variety of tasks in a wide variety of environments; not by sequentially being programmed (or taught) to perform one task in one environment at a time, but rather by intelligently choosing appropriate actions for whatever task and environment it is facing. This goal remains a challenge. In this talk I’ll describe recent work in our lab aimed at the goal of general-purpose robot manipulation by integrating task-and-motion planning with various forms of model learning. In particular, I’ll describe approaches to manipulating objects without prior shape models, to acquiring composable sensorimotor skills, and to exploiting past experience for more efficient planning.

 

Biography:

Tomas Lozano-Perez is a professor in EECS at MIT and a member of CSAIL. He was a recipient of the 2011 IEEE Robotics Pioneer Award and a co-recipient of the 2021 IEEE Robotics and Automation Technical Field Award. He is a Fellow of the AAAI, ACM, and IEEE.

LCSR Seminar: Xuesu Xiao “Deployable Robots that Learn” @ https://wse.zoom.us/s/94623801186
Wed, Feb 2 @ 12:00 pm – 1:00 pm

Link for Live Seminar

Link for Recorded seminars – 2021/2022 school year

 

Abstract:

While many robots are currently deployable in factories, warehouses, and homes, their autonomous deployment requires either that the deployment environment be highly controlled or that the deployment entail executing only a single preprogrammed task. These deployable robots do not learn to address changes or to improve performance. For uncontrolled environments and for novel tasks, current robots must seek help from highly skilled robot operators for teleoperated (not autonomous) deployment.

 

In this talk, I will present two approaches to removing these limitations by learning to enable autonomous deployment in the context of mobile robot navigation, a common core capability for deployable robots: (1) Adaptive Planner Parameter Learning utilizes existing motion planners, fine-tunes these systems using simple interactions with non-expert users before autonomous deployment, adapts to different deployment environments, and produces robust autonomous navigation; (2) Learning from Hallucination enables agile navigation in highly constrained deployment environments by exploring in a completely safe training environment and creating synthetic obstacle configurations to learn from. Building on robust autonomous navigation, I will discuss my vision toward a hardened, reliable, and resilient robot fleet that is also task-efficient and whose robots continually learn from each other and from humans.
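
As a rough, hypothetical illustration of the first idea only (tuning an existing planner’s parameters from a short non-expert interaction), the toy sketch below selects, from a small candidate set, the parameter values whose rollout best matches a user demonstration. The stand-in planner, parameter names, and numbers are invented for illustration and are not the speaker’s actual method.

```python
from itertools import product
import numpy as np

def planner_rollout(max_speed: float, accel: float, horizon: int = 20) -> np.ndarray:
    """Stand-in planner: accelerate by `accel` each step up to `max_speed`, then hold."""
    v, profile = 0.0, []
    for _ in range(horizon):
        v = min(v + accel, max_speed)
        profile.append(v)
    return np.array(profile)

def tune_parameters(demo: np.ndarray,
                    speed_candidates=(0.5, 1.0, 1.5),
                    accel_candidates=(0.1, 0.2, 0.4)):
    """Pick the (max_speed, accel) pair whose rollout is closest to the demonstration."""
    best_params, best_err = None, float("inf")
    for max_speed, accel in product(speed_candidates, accel_candidates):
        err = float(np.mean((planner_rollout(max_speed, accel, len(demo)) - demo) ** 2))
        if err < best_err:
            best_params, best_err = (max_speed, accel), err
    return best_params, best_err

# A hypothetical teleoperated demonstration: cautious driving at roughly 1.0 m/s.
demo = planner_rollout(max_speed=1.0, accel=0.2) + np.random.normal(0.0, 0.02, 20)
print(tune_parameters(demo))  # should recover approximately (1.0, 0.2)
```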

 

Biography:

Xuesu Xiao is an incoming Assistant Professor in the Department of Computer Science at George Mason University, starting Fall 2022. Currently, he is a roboticist on The Everyday Robot Project at X, The Moonshot Factory, and a research affiliate in the Department of Computer Science at The University of Texas at Austin. Dr. Xiao’s research focuses on field robotics, motion planning, and machine learning. He develops highly capable and intelligent mobile robots that are robustly deployable in the real world with minimal human supervision. Dr. Xiao received his Ph.D. in Computer Science from Texas A&M University in 2019, a Master of Science in Mechanical Engineering from Carnegie Mellon University in 2015, and a dual Bachelor of Engineering in Mechatronics Engineering from Tongji University and FH Aachen University of Applied Sciences in 2013.