Calendar

Apr
10
Wed
LCSR Seminar: Jana Kosecka “Visual representations for navigation and object discovery” @ Hackerman B-17
Apr 10 @ 12:00 pm – 1:00 pm

Abstract:

Deliberate navigation in previously unseen environments and detection of novel object instances are some of the key functionalities of intelligent agents engaged in fetch-and-delivery tasks. While data-driven deep learning approaches have fueled rapid progress in object category recognition and semantic segmentation by exploiting large amounts of labeled data, extending this learning paradigm to the robotic setting comes with challenges.

 

To overcome the need for large amounts of labeled data for training object instance detectors, we use active self-supervision provided by a robot traversing an environment. Knowledge of ego-motion enables the agent to effectively associate multiple object hypotheses, which serve as training data for learning novel object embeddings from unlabeled data. The object detectors trained in this manner achieve higher mAP than off-the-shelf detectors trained on this limited data.
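As a rough illustration of how known ego-motion can turn raw detections into self-supervised training pairs, here is a minimal sketch. It is not the speaker's implementation; the geometry, distance threshold, and data are assumptions invented for the example.

```python
# Minimal sketch (not the speaker's code): using known ego-motion to associate
# object detections across frames, producing pseudo-labels for self-supervision.
# All data below is synthetic; in practice detections and poses come from the robot.
import numpy as np

def associate(dets_t0, dets_t1, T_01, max_dist=0.3):
    """Match 3D detection centroids at time t0 to those at t1 using the known
    rigid transform T_01 (4x4) between the two camera poses."""
    pts = np.hstack([dets_t0, np.ones((len(dets_t0), 1))])   # homogeneous coords
    pred = (T_01 @ pts.T).T[:, :3]                           # predicted positions at t1
    matches = []
    for i, p in enumerate(pred):
        d = np.linalg.norm(dets_t1 - p, axis=1)
        j = int(np.argmin(d))
        if d[j] < max_dist:                                  # gate on distance
            matches.append((i, j))                           # same-object hypothesis
    return matches

# toy example: one object seen from two poses differing by a 0.5 m translation
T_01 = np.eye(4); T_01[0, 3] = -0.5
dets_t0 = np.array([[1.0, 0.0, 2.0]])
dets_t1 = np.array([[0.5, 0.0, 2.0], [3.0, 1.0, 2.0]])
print(associate(dets_t0, dets_t1, T_01))   # [(0, 0)] -> one pseudo-labeled pair
```

Pairs associated in this way can then supervise the learning of the object embeddings described in the abstract, without any human labels.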

 

I will describe an approach to semantic target-driven navigation, which entails finding a way through a complex environment to a target object. The proposed approach learns navigation policies on top of representations that capture spatial layout and semantic contextual cues. This choice of representation exploits models trained on large standard vision datasets, enables better generalization, and allows simulated environments and real images to be used jointly for effective training of navigation policies.

 

Bio:

Jana Kosecka is a Professor in the Department of Computer Science at George Mason University. She obtained her Ph.D. in Computer Science from the University of Pennsylvania and was subsequently a postdoctoral fellow in the EECS Department at the University of California, Berkeley. She is a recipient of the David Marr Prize and the National Science Foundation CAREER Award. Jana is a chair of the IEEE Technical Committee on Robot Perception, an Associate Editor of IEEE Robotics and Automation Letters and the International Journal of Computer Vision, and a former editor of IEEE Transactions on Pattern Analysis and Machine Intelligence. She has held visiting positions at Stanford University, Google, and Nokia Research. She is a co-author of the monograph An Invitation to 3-D Vision: From Images to Geometric Models. Her general research interests are in computer vision and robotics. In particular, she is interested in ‘seeing’ systems engaged in autonomous tasks, the acquisition of static and dynamic models of environments by means of visual sensing, and human-computer interaction.

 

Recorded Spring 2019 Seminars

Apr
17
Wed
LCSR Seminar: 2 presentations @ Hackerman B-17
Apr 17 @ 12:00 pm – 1:00 pm

12:00pm Presentation: Shahriar Sefati

Title: FBG-Based Position Estimation of Highly Deformable Continuum Manipulators: Model-Dependent vs. Data-Driven Approaches

Abstract: Fiber Bragg Grating (FBG) sensing is a promising strategy for flexible medical instruments and continuum manipulators. Conventional FBG-based shape sensing techniques find the curvature at discrete FBG active areas and integrate curvature over the length of the continuum dexterous manipulator (CDM) to estimate tip position. However, due to the limited number of sensing locations and many geometrical assumptions, these methods are prone to large error propagation, especially when the CDM undergoes large deflections or interacts with obstacles. In this talk, I will give an overview of the complications of conventional, sensor-model-dependent tip position estimation methods and propose a new data-driven method that overcomes these challenges. The method’s performance is evaluated on a CDM developed for orthopedic applications, and the results are compared to conventional model-dependent methods during large-deflection bending and interaction with obstacles.
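To make the contrast concrete, below is a minimal sketch of the conventional, model-dependent pipeline: discrete curvature readings integrated under a piecewise-constant-curvature assumption. It is not the speaker's code; the planar simplification, segment lengths, and curvature values are invented for illustration.

```python
# Illustrative sketch of the conventional, model-dependent FBG pipeline that
# the talk contrasts with a data-driven one: piecewise-constant curvature at
# discrete sensing regions is integrated along the manipulator to estimate a
# 2D tip position. Values below are made up.
import numpy as np

def tip_from_curvature(kappas, seg_len):
    """Integrate planar piecewise-constant curvature segments to the tip."""
    x, y, theta = 0.0, 0.0, 0.0
    for k in kappas:
        if abs(k) < 1e-9:                      # straight segment
            x += seg_len * np.cos(theta)
            y += seg_len * np.sin(theta)
        else:                                  # circular arc of curvature k
            x += (np.sin(theta + k * seg_len) - np.sin(theta)) / k
            y += (np.cos(theta) - np.cos(theta + k * seg_len)) / k
        theta += k * seg_len
    return x, y

# three sensing regions spaced 20 mm apart, with assumed curvatures in 1/mm
print(tip_from_curvature(kappas=[0.01, 0.02, 0.015], seg_len=20.0))
```

A data-driven estimator of the kind proposed in the talk would instead learn a direct mapping from raw FBG measurements to tip position, sidestepping the geometric assumptions that make errors grow under large deflection or obstacle contact.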

Bio: Shahriar Sefati is a Ph.D. candidate in the Department of Mechanical Engineering at Johns Hopkins University, affiliated with the Biomechanical and Image-Guided Surgical Systems (BIGSS) lab, part of the Laboratory for Computational Sensing and Robotics (LCSR). He received his B.S. degree in Mechanical Engineering from Sharif University of Technology in 2014 and his M.S.E. degree in Computer Science from Johns Hopkins University in 2017. He was a robotics and controls engineering intern at Verb Surgical, Inc. in summer 2018. Shahriar’s research focuses on continuum manipulators and flexible robotics for less-invasive surgery.

 

12:30pm Presentation: Iulian Iordachita

Title: Safe Robot-assisted Retinal Surgery

Abstract: Modern patient health care involves maintenance and restoration of health by medication or surgical intervention. This talk focuses on surgical procedures, such as retinal surgery, where surgeons perform high-risk but necessary treatments while facing significant technical and human limitations in an extremely constrained environment. Inaccuracy in tool positioning and movement is among the important factors limiting performance in retinal surgery. The challenges are further exacerbated by the fact that in the majority of contact events, the forces encountered are below the tactile perception of the surgeon. The inability to detect surgically relevant forces leads to a lack of control over potentially injurious factors that result in complications. This situation is less than optimal and can benefit significantly from recent advances in robot assistance, sensor feedback, and human-machine interface design. Robotic assistance may be ideally suited to address common problems encountered in most (micro)manipulation tasks, including hand tremor, poor tool manipulation resolution, and limited accessibility, and to open up surgical strategies that are beyond human capability. Various force sensors have been developed for microsurgery and minimally invasive surgery. Optical fiber strain sensors, specifically fiber Bragg gratings (FBGs), are very sensitive, capable of detecting sub-microstrain changes, and are small, lightweight, biocompatible, sterilizable, multiplexable, and immune to electrostatic and electromagnetic noise. In retinal surgery, FBG-based force-sensing tools can provide the information needed to guide the surgeon through a maneuver, effectively reducing forces with improved precision and potentially improving the safety and efficacy of the surgical procedure. Optical fiber-based sensorized instruments, combined with robot-assisted (micro)surgery, could address current limitations in surgery by integrating novel technologies that transcend human sensorimotor capabilities into robotic systems that provide both real-time, surgically relevant information and physical support to the surgeon, with the ultimate goal of improving clinical care and enabling novel therapies.

 

Recorded Spring 2019 Seminars

Apr
24
Wed
LCSR Seminar: Shai Revzen “Geometric Mechanics and Robots with Multiple Contacts” @ Hackerman B-17
Apr 24 @ 12:00 pm – 1:00 pm

Abstract:

Modeling and control problems generally get harder the more degrees of freedom (DoF) are involved, suggesting that moving with many legs or grasping with many fingers should be difficult to describe. In this talk I will show how insights from the theory of geometric mechanics, developed about 20-30 years ago by Marsden, Ostrowski, and Bloch, might turn that notion on its head. I will motivate the claim that when enough legs contact the ground, the complexity associated with momentum is gone, replaced by a problem of slipping contacts. In this regime, the equations of motion are replaced by a “connection”, which is both simple to estimate in a data-driven form and easy to simulate by adopting some non-conventional friction models. The talk will contain a brief introduction to geometric mechanics and consist mostly of results showing that: (i) this class of models is more general than it may seem at first; (ii) the models can be used for very rapid hardware-in-the-loop gait optimization of both simple and complex robots; (iii) they motivate a simple motion model that fits experimental results remarkably well. If successful, this research agenda could improve motion planning speeds for multi-contact robotic systems by several orders of magnitude and explain how simple animals can move so well with many limbs.
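For readers unfamiliar with the term, the “connection” referred to above can be written in standard geometric-mechanics notation; this is the generic textbook form, not the specific model used in the talk.

```latex
% Principally kinematic regime: the body velocity is a linear function of the
% shape velocity, with no momentum state. Here g is the body frame, r the
% shape (joint) variables, and A(r) the local connection.
g^{-1}\dot{g} \;=\; -\mathbf{A}(r)\,\dot{r}
```

Estimating the local connection $\mathbf{A}(r)$ directly from data, rather than deriving it from first principles, is what makes the data-driven modeling and rapid gait optimization mentioned in the abstract possible.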

 

Bio:

Shai Revzen is an Assistant Professor at the University of Michigan, Ann Arbor. His primary appointment is in the Department of Electrical Engineering and Computer Science in the College of Engineering. He holds a courtesy faculty appointment in the Department of Ecology and Evolutionary Biology and is an Assistant Director of the Michigan Robotics Institute. He received his Ph.D. in Integrative Biology from the University of California at Berkeley, and an M.Sc. in Computer Science from the Hebrew University of Jerusalem. In addition to his academic work, Shai was Chief Architect for R&D of the convergent systems division of Harmonic Lightwaves (HLIT) and a co-founder of Bio-Systems Analysis, a biomedical technology start-up. As principal investigator of the Biologically Inspired Robotics and Dynamical Systems (BIRDS) lab, Shai sets the research agenda and approach of the lab: a focus on fundamental science, realizing its transformative influence on robotics and other technology. Work in the lab is split roughly equally between robotics, mathematics, and biology.

 

Recorded Spring 2019 Seminars

May
1
Wed
LCSR Seminar: Panel Discussion with Experts from Academia and Industry: Life After Grad School: Today’s Opportunities @ Hackerman B-17
May 1 @ 12:00 pm – 1:00 pm

Hosted by:

Ehsan Azimi and Shahriar Sefati

 

 

Recorded Spring 2019 Seminars

Sep
4
Wed
LCSR Seminar: Robotics Kickoff Town Hall @ Hackerman B-17
Sep 4 @ 12:00 pm – 1:00 pm
Sep
18
Wed
LCSR Seminar: Joseph Singapogu @ Hackerman B-17
Sep 18 @ 12:00 pm – 1:00 pm

Abstract:

TBA

 

Bio:

TBA

Sep
25
Wed
LCSR Seminar: IP/COI Laura Evans and Peter Sheppard @ Hackerman B-17
Sep 25 @ 12:00 pm – 1:00 pm

Peter A. Sheppard – Sr. Intellectual Property Manager, Johns Hopkins Technology Ventures

“Intellectual Property Primer For Conflict of Interest Training.”

 

Laura M. Evans – Senior Policy Associate, Director, Homewood IRB

“Conflicts of Interest: Identification, Review, and Management.”

 

 LCSR Seminar Video Link

Oct
2
Wed
LCSR Seminar: David Blei “The Blessings of Multiple Causes” @ Hackerman B-17
Oct 2 @ 12:00 pm – 1:00 pm

Abstract:

Causal inference from observational data is a vital problem, but it comes with strong assumptions. Most methods require that we observe all confounders, variables that affect both the causal variables and the outcome variables. But whether we have observed all confounders is a famously untestable assumption. We describe the deconfounder, a way to do causal inference with weaker assumptions than the classical methods require.
How does the deconfounder work? While traditional causal methods measure the effect of a single cause on an outcome, many modern scientific studies involve multiple causes, different variables whose effects are simultaneously of interest. The deconfounder uses the correlation among multiple causes as evidence for unobserved confounders, combining unsupervised machine learning and predictive model checking to perform causal inference. We demonstrate the deconfounder on real-world data and simulation studies, and describe the theoretical requirements for the deconfounder to provide unbiased causal estimates.
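A minimal sketch of this recipe follows, assuming a linear factor model as the unsupervised component and a simple held-out predictive check. It is illustrative only and is not the authors' code; see the arXiv link below for the actual algorithm and the assumptions it requires.

```python
# Illustrative sketch of the deconfounder recipe, not the authors' code.
# Synthetic data; a PPCA-style factor model stands in for whatever
# latent-variable model of the causes is appropriate.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n, n_causes = 2000, 8
z = rng.normal(size=(n, 1))                                  # unobserved confounder
causes = z @ rng.normal(size=(1, n_causes)) + 0.5 * rng.normal(size=(n, n_causes))

# 1) unsupervised model of the multiple causes
train, heldout = causes[:1500], causes[1500:]
fa = FactorAnalysis(n_components=1).fit(train)

# 2) predictive model check: the factor model should score held-out causes
#    roughly as well as the causes it was fit on
print("avg log-likelihood, train  :", round(fa.score(train), 3))
print("avg log-likelihood, heldout:", round(fa.score(heldout), 3))

# 3) substitute confounder: the inferred factors are then included alongside
#    the causes in the outcome model (details and the assumptions needed for
#    unbiased estimates are in the paper)
z_hat = fa.transform(causes)
print("substitute confounder shape:", z_hat.shape)
```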

 

Bio:

David Blei is a Professor of Statistics and Computer Science at Columbia University, and a member of the Columbia Data Science Institute. He studies probabilistic machine learning, including its theory, algorithms, and applications. David has received several awards for his research, including a Sloan Fellowship (2010), Office of Naval Research Young Investigator Award (2011), Presidential Early Career Award for Scientists and Engineers (2011), Blavatnik Faculty Award (2013), ACM-Infosys Foundation Award (2013), a Guggenheim Fellowship (2017), and a Simons Investigator Award (2019). He is the co-editor-in-chief of the Journal of Machine Learning Research. He is a fellow of the ACM and the IMS.
[*] https://arxiv.org/abs/1805.06826

 

 LCSR Seminar Video Link

Oct
9
Wed
LCSR Seminar: Zhou Yu “Enabling Machines with Situational Awareness, Communication, and Decision-Making Abilities Leveraging Multimodal Information” @ Hackerman B-17
Oct 9 @ 12:00 pm – 1:00 pm

Abstract:

Humans interact with other humans and with the world through information from various channels, including vision, audio, language, haptics, etc. To simulate intelligence, machines require similar abilities to process and combine information from different channels to acquire better situational awareness, better communication ability, and better decision-making ability. In this talk, we describe three projects. In the first study, we enable a robot to utilize both vision and audio information to achieve better user understanding, and then use incremental language generation to improve the robot’s communication with a human. In the second study, we utilize multimodal history tracking to optimize policy planning in task-oriented visual dialogs. In the third project, we tackle the well-known trade-off between dialog response relevance and policy effectiveness in visual dialog generation. We propose a new machine learning procedure that alternates between supervised learning and reinforcement learning to jointly optimize language generation and policy planning in visual dialogs. We will also cover some recent ongoing work on image synthesis through dialogs and on generating social multimodal dialogs with a blend of GIFs and words.

 

Bio:

Zhou Yu is an Assistant Professor in the Computer Science Department at UC Davis. She received her Ph.D. from Carnegie Mellon University in 2017. Zhou is interested in building robust and multi-purpose dialog systems using fewer data points and less annotation. She also works on language generation and on vision-and-language tasks. Zhou’s work on persuasive dialog systems recently received an ACL 2019 best paper nomination. She was featured in Forbes’ 2018 30 Under 30 in Science for her work on multimodal dialog systems. Her team recently won the 2018 Amazon Alexa Prize, with a $500,000 cash award, for building an engaging social bot.

 

 LCSR Seminar Video Link

Oct
16
Wed
LCSR Seminar: Nikolai Matni “Safety and robustness guarantees with learning in the loop” @ Hackerman B-17
Oct 16 @ 12:00 pm – 1:00 pm

Abstract:

In this talk, we present recent progress towards developing learning-based control strategies for the design of safe and robust autonomous systems. Our approach is to recognize that machine learning algorithms produce inherently uncertain estimates or predictions, and that this uncertainty must be explicitly quantified (e.g., using non-asymptotic guarantees of contemporary high-dimensional statistics) and accounted for (e.g., using robust control/optimization) when designing safety-critical systems. We focus on the optimal control of unknown systems, and show that by integrating modern tools from high-dimensional statistics and robust control, we can provide, to the best of our knowledge, the first end-to-end finite-data robustness, safety, and performance guarantees for learning and control. We also briefly highlight how these ideas can be extended to the large-scale distributed setting by similarly integrating tools from structured linear inverse problems with tools from distributed robust and optimal control. As a whole, these results provide a rigorous and contemporary perspective on safe reinforcement learning as applied to continuous control. We conclude with our vision for a general theory of safe learning and control, with the ultimate goal being the design of robust and high-performing data-driven autonomous systems.
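As a toy illustration of the "learning" half of this pipeline, the sketch below identifies an unknown linear system from rollouts by least squares and measures the resulting parameter error; it is not the speaker's method, and the system, noise level, and rollout counts are invented. In the framework described in the talk, a finite-sample bound on that error (computable without knowing the true system) is what the robust controller synthesis is designed against.

```python
# Illustrative sketch, not the speaker's method: least-squares identification
# of an unknown linear system x_{t+1} = A x_t + B u_t + w_t from rollouts,
# plus the estimation error a finite-sample bound would have to control.
import numpy as np

rng = np.random.default_rng(1)
A_true = np.array([[1.01, 0.1], [0.0, 0.99]])
B_true = np.array([[0.0], [1.0]])

# collect rollouts with random excitation
X, Z = [], []
for _ in range(200):
    x = np.zeros(2)
    for _ in range(20):
        u = rng.normal(size=1)
        x_next = A_true @ x + B_true @ u + 0.05 * rng.normal(size=2)
        Z.append(np.concatenate([x, u]))       # regressor [x_t; u_t]
        X.append(x_next)
        x = x_next
Z, X = np.array(Z), np.array(X)

# least-squares estimate of [A B]
Theta_hat = np.linalg.lstsq(Z, X, rcond=None)[0].T
A_hat, B_hat = Theta_hat[:, :2], Theta_hat[:, 2:]

# oracle error, shown only for illustration; in practice a non-asymptotic
# bound replaces it and the controller is synthesized to tolerate it
eps = max(np.linalg.norm(A_hat - A_true, 2), np.linalg.norm(B_hat - B_true, 2))
print("spectral-norm estimation error:", round(eps, 4))
```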

 

Bio:

Nikolai Matni is an Assistant Professor in the Department of Electrical and Systems Engineering at the University of Pennsylvania, where he is also a member of the GRASP Lab, the PRECISE Center, and the Applied Mathematics and Computational Science graduate group. Prior to joining Penn, Nikolai was a postdoctoral scholar in EECS at UC Berkeley. He has also held a position as a postdoctoral scholar in Computing and Mathematical Sciences at Caltech. He received his Ph.D. in Control and Dynamical Systems from Caltech in June 2016, and holds B.A.Sc. and M.A.Sc. degrees in Electrical Engineering from the University of British Columbia, Vancouver, Canada. His research interests broadly encompass the use of learning, optimization, and control in the design and analysis of safety-critical and data-driven cyber-physical systems. Nikolai was awarded the IEEE CDC 2013 Best Student Paper Award (the first-ever sole-author winner) and the IEEE ACC 2017 Best Student Paper Award (as co-advisor).

 LCSR Seminar Video Link
