Calendar

Apr
3
Wed
LCSR Seminar: Panagiotis Artemiadis “Modeling and Control of Human-Robot Interaction and Interfaces” @ Hackerman B-17
Apr 3 @ 12:00 pm – 1:00 pm

Abstract:

This talk will focus on modeling and advanced control of robots that physically or cognitively interact with humans. This type of interaction can be found in devices that assist and augment human capabilities, as well as in those that provide motor rehabilitation therapy to impaired individuals. The first part of the talk will present research on myoelectric control interfaces for a variety of robotic mechanisms, including results of a novel method for robust myoelectric control of robots. This work supports a shift in myoelectric control schemes towards proportional, simultaneous control learned through the development of unique muscle synergies. The ability to enhance, retain, and generalize control without recalibrating or retraining the system supports control schemes that promote synergy development, rather than user-specific decoders trained on a subset of existing synergies, for efficient myoelectric interfaces designed for long-term use.

The second part of the talk will focus on a novel approach to robotic interventions for gait therapy that takes advantage of mechanisms of inter-limb coordination, using a novel robotic system, the Variable Stiffness Treadmill (VST), developed in the HORC Lab at ASU. The methods and results of the presented approach lay the foundation for model-based rehabilitation strategies for impaired walkers.

Finally, results on a novel control interface between humans and multi-agent systems will be presented. The human user is in control of a swarm of unmanned aerial vehicles (UAVs) and can provide high-level commands to these agents. The proposed brain-swarm interface allows for advancements in swarm high-level information perception, augmenting the decision capabilities of manned-unmanned systems and promoting the symbiosis between human and machine systems for comprehensive situation awareness.
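
The synergy-based control idea above can be made concrete with a small sketch: a common way to extract muscle synergies is non-negative matrix factorization (NMF) of rectified, low-pass-filtered EMG, with the time-varying activation coefficients used as proportional, simultaneous commands. This is only an illustrative sketch of the general idea, not the speaker's method; the channel count, synergy count, and mapping to robot commands are assumptions.

```python
# Minimal sketch: extract muscle synergies from EMG envelopes with NMF and use the
# activation coefficients as proportional, simultaneous control signals.
# Channel count, synergy count, and command mapping are illustrative assumptions.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)

# Simulated rectified, low-pass-filtered EMG: 8 channels x 1000 time samples.
n_channels, n_samples = 8, 1000
emg_envelopes = rng.random((n_channels, n_samples))

# Factor EMG into W (channels x synergies) and H (synergies x time).
n_synergies = 3
model = NMF(n_components=n_synergies, init="nndsvda", max_iter=500, random_state=0)
W = model.fit_transform(emg_envelopes)   # muscle synergy vectors
H = model.components_                    # time-varying synergy activations

# Use the normalized synergy activations as simultaneous proportional commands,
# e.g. one activation per controlled degree of freedom of the robot.
commands = H / (H.max(axis=1, keepdims=True) + 1e-9)
print("synergy matrix shape:", W.shape, "command shape:", commands.shape)
```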

 

Bio:

Panagiotis (Panos) Artemiadis received the Diploma and the Ph.D. degree in mechanical engineering from the National Technical University of Athens, Athens, Greece, in 2003 and 2009, respectively. From 2007 to 2009 he was a Visiting Researcher at Brown University and the Toyota Technological Institute at Chicago. From 2009 to 2011, he was a Postdoctoral Research Associate in the Mechanical Engineering Department at the Massachusetts Institute of Technology (MIT). Since 2011, he has been with Arizona State University, where he is currently an Associate Professor in the Mechanical and Aerospace Engineering Department and the Director of the Human-Oriented Robotics and Control Laboratory (http://horc.engineering.asu.edu/). He is also the Graduate Program Chair for the new MS degree in Robotics and Autonomous Systems at ASU. His research interests include rehabilitation robotics, control systems, system identification, brain–machine interfaces, and human–swarm interaction. He serves as Editor-in-Chief and Associate Editor for several scientific journals and on scientific committees; three of his papers have been nominated for or received best paper awards, and he has received many awards for his research and teaching (more info at http://www.public.asu.edu/~partemia/). He is the recipient of the 2014 DARPA Young Faculty Award and the 2014 AFOSR Young Investigator Award, as well as the 2017 ASU Fulton Exemplar Faculty Award. He is the (co-)author of over 80 papers in scientific journals and peer-reviewed conferences, as well as 9 patents (3 issued, 6 pending).

 

Recorded Spring 2019 Seminars

Apr
10
Wed
LCSR Seminar: Jana Kosecka “Visual representations for navigation and object discovery” @ Hackerman B-17
Apr 10 @ 12:00 pm – 1:00 pm

Abstract:

Deliberate navigation in previously unseen environments and detection of novel object instances are key functionalities of intelligent agents engaged in fetch-and-delivery tasks. While data-driven deep learning approaches have fueled rapid progress in object category recognition and semantic segmentation by exploiting large amounts of labelled data, extending this learning paradigm to robotic settings comes with challenges.

 

To overcome the need for large amounts of labeled data for training object instance detectors, we use active self-supervision provided by a robot traversing an environment. Knowledge of ego-motion enables the agent to effectively associate multiple object hypotheses, which serve as training data for learning novel object embeddings from unlabelled data. Object detectors trained in this manner achieve higher mAP compared to off-the-shelf detectors trained on the same limited data.
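
As an illustration of this kind of ego-motion-based self-supervision, the sketch below associates an object detection across two frames by back-projecting it to 3D, applying the known camera motion, and re-projecting into the next image; nearby detections then share an instance label. The intrinsics, poses, and detections are made-up placeholders, not values from the actual system.

```python
# Minimal sketch: known ego-motion lets a robot associate detections of the same
# object instance across frames, yielding "free" training pairs.
import numpy as np

K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])              # pinhole intrinsics (assumed)

def backproject(u, v, depth, K):
    """Pixel + depth -> 3D point in the camera frame."""
    x = (u - K[0, 2]) * depth / K[0, 0]
    y = (v - K[1, 2]) * depth / K[1, 1]
    return np.array([x, y, depth])

def project(p_cam, K):
    """3D point in the camera frame -> pixel."""
    uvw = K @ p_cam
    return uvw[:2] / uvw[2]

# Object seen at pixel (400, 260) with depth 2.0 m in frame A.
p_A = backproject(400, 260, 2.0, K)

# Ego-motion from frame A to frame B (from odometry): camera moved 0.3 m toward the object.
R_BA = np.eye(3)
t_BA = np.array([0.0, 0.0, -0.3])
p_B = R_BA @ p_A + t_BA

# Predicted pixel in frame B; the nearest detection gets the same instance label.
u_pred, v_pred = project(p_B, K)
detections_B = [np.array([410.0, 262.0]), np.array([100.0, 50.0])]
match = min(detections_B, key=lambda d: np.linalg.norm(d - np.array([u_pred, v_pred])))
print("predicted pixel:", (round(u_pred, 1), round(v_pred, 1)), "matched detection:", match)
```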

 

I will also describe an approach to semantic target-driven navigation, which entails finding a way through a complex environment to a target object. The proposed approach learns navigation policies on top of representations that capture spatial layout and semantic contextual cues. This choice of representation exploits models trained on large standard vision datasets and enables better generalization, as well as joint use of simulated environments and real images for effective training of navigation policies.
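
A minimal sketch of what such a policy could look like is given below: a small network that consumes a per-class semantic map (e.g., the output of a pretrained segmentation model) together with a one-hot target class and emits discrete action logits. The architecture sizes and action set are illustrative assumptions, not the representation or policy used in the talk.

```python
# Minimal sketch of a navigation policy over a semantic representation.
import torch
import torch.nn as nn

N_CLASSES, MAP_H, MAP_W, N_ACTIONS = 20, 32, 32, 4  # e.g. forward, left, right, stop

class SemanticNavPolicy(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(N_CLASSES, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(),
        )
        self.head = nn.Sequential(
            nn.Linear(64 * (MAP_H // 4) * (MAP_W // 4) + N_CLASSES, 128), nn.ReLU(),
            nn.Linear(128, N_ACTIONS),
        )

    def forward(self, semantic_map, target_onehot):
        features = self.encoder(semantic_map)
        return self.head(torch.cat([features, target_onehot], dim=1))

policy = SemanticNavPolicy()
semantic_map = torch.rand(1, N_CLASSES, MAP_H, MAP_W)   # stand-in for segmentation output
target = torch.zeros(1, N_CLASSES); target[0, 5] = 1.0  # "find an object of class 5"
action = policy(semantic_map, target).argmax(dim=1)
print("chosen action index:", action.item())
```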

 

Bio:

Jana Kosecka is a Professor in the Department of Computer Science at George Mason University. She obtained her Ph.D. in Computer Science from the University of Pennsylvania. Following her Ph.D., she was a postdoctoral fellow in the EECS Department at the University of California, Berkeley. She is a recipient of the David Marr Prize and the National Science Foundation CAREER Award. Jana is chair of the IEEE Technical Committee on Robot Perception, an Associate Editor of IEEE Robotics and Automation Letters and the International Journal of Computer Vision, and a former editor of IEEE Transactions on Pattern Analysis and Machine Intelligence. She has held visiting positions at Stanford University, Google, and Nokia Research. She is a co-author of the monograph An Invitation to 3-D Vision: From Images to Geometric Models. Her general research interests are in computer vision and robotics. In particular, she is interested in ‘seeing’ systems engaged in autonomous tasks, acquisition of static and dynamic models of environments by means of visual sensing, and human-computer interaction.

 

Recorded Spring 2019 Seminars

Apr
17
Wed
LCSR Seminar: 2 presentations @ Hackerman B-17
Apr 17 @ 12:00 pm – 1:00 pm

12:00pm Presentation: Shahriar Sefati

Title: FBG-Based Position Estimation of Highly Deformable Continuum Manipulators: Model-Dependent vs. Data-Driven Approaches

Abstract: Fiber Bragg Grating (FBG) sensing is a promising strategy for flexible medical instruments and continuum manipulators. Conventional shape-sensing techniques using FBGs find the curvature at discrete FBG active areas and integrate curvature over the length of the continuum dexterous manipulator (CDM) to estimate tip position. However, due to the limited number of sensing locations and the many geometric assumptions involved, these methods are prone to large error propagation, especially when the CDM undergoes large deflections or interacts with obstacles. In this talk, I will give an overview of the complications of conventional, model-dependent tip position estimation methods and propose a new data-driven method that overcomes these challenges. The method’s performance is evaluated on a CDM developed for orthopedic applications, and the results are compared to conventional model-dependent methods during large-deflection bending and interactions with obstacles.
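
To make the contrast concrete, the sketch below shows the two estimation styles for a planar manipulator: a model-dependent estimate that integrates piecewise-constant curvature segment by segment, and a data-driven alternative that regresses tip position directly from FBG wavelength shifts. Segment lengths, curvature values, and the regression model are illustrative assumptions, not the parameters of the actual CDM.

```python
# Minimal sketch: model-dependent curvature integration vs. data-driven regression
# for planar continuum-manipulator tip estimation. All numbers are placeholders.
import numpy as np
from sklearn.linear_model import Ridge

def tip_from_curvature(curvatures, segment_length):
    """Model-dependent estimate: integrate piecewise-constant curvature along the body."""
    x, y, theta = 0.0, 0.0, 0.0
    for kappa in curvatures:
        # Advance along a circular arc of length `segment_length` and curvature kappa.
        if abs(kappa) < 1e-9:
            x += segment_length * np.cos(theta)
            y += segment_length * np.sin(theta)
        else:
            x += (np.sin(theta + kappa * segment_length) - np.sin(theta)) / kappa
            y += (np.cos(theta) - np.cos(theta + kappa * segment_length)) / kappa
        theta += kappa * segment_length
    return np.array([x, y])

# Curvature at three FBG active areas (1/mm), each covering a 20 mm segment.
print("model-based tip estimate:", tip_from_curvature([0.01, 0.02, 0.015], 20.0))

# Data-driven alternative: regress tip position directly from raw FBG wavelength shifts,
# sidestepping the constant-curvature and no-contact assumptions.
wavelength_shifts = np.random.rand(200, 9)   # 200 training samples, 9 FBG sensing areas
tip_positions = np.random.rand(200, 2)       # ground-truth tips (e.g. from a camera)
regressor = Ridge(alpha=1.0).fit(wavelength_shifts, tip_positions)
print("data-driven tip estimate:", regressor.predict(wavelength_shifts[:1]))
```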

Bio: Shahriar Sefati is a Ph.D. candidate in the Department of Mechanical Engineering at Johns Hopkins University, affiliated with the Biomechanical and Image-Guided Surgical Systems (BIGSS) group, part of the Laboratory for Computational Sensing and Robotics (LCSR). He received his B.S. degree in Mechanical Engineering from Sharif University of Technology in 2014 and his M.S.E. degree in Computer Science from Johns Hopkins University in 2017. He was also a robotics and controls engineering intern at Verb Surgical, Inc. in the summer of 2018. Shahriar’s research focuses on continuum manipulators and flexible robotics for less-invasive surgery.

 

12:30pm Presentation: Iulian Iordachita

Title: Safe Robot-assisted Retinal Surgery

Abstract: Modern patient health care involves maintenance and restoration of health by medication or surgical intervention. This research talk focuses solely on surgical procedures, like retinal surgery, where surgeons perform high-risk but necessary treatments while facing significant technical and human limitations in an extremely constrained environment. Inaccuracy in tool positioning and movement is among the important factors limiting performance in retinal surgery. The challenges are further exacerbated by the fact that in the majority of contact events, the forces encountered are below the tactile perception of the surgeon. The inability to detect surgically relevant forces leads to a lack of control over potentially injurious factors that result in complications. This situation is less than optimal and can benefit significantly from recent advances in robot assistance, sensor feedback, and human-machine interface design. Robotic assistance may be ideally suited to address common problems encountered in most (micro)manipulation tasks, including hand tremor, poor tool manipulation resolution, and accessibility, and to open up surgical strategies that are beyond human capability. Various force sensors have been developed for microsurgery and minimally invasive surgery. Optical fiber strain sensors, specifically fiber Bragg gratings (FBGs), are very sensitive, capable of detecting sub-microstrain changes, and are small, lightweight, biocompatible, sterilizable, multiplexable, and immune to electrostatic and electromagnetic noise. In retinal surgery, FBG-based force-sensing tools can provide the information needed to guide the surgeon through a maneuver, effectively reduce applied forces with improved precision, and potentially improve the safety and efficacy of the surgical procedure. Optical-fiber-based sensorized instruments, in combination with robot-assisted (micro)surgery, could address current limitations in surgery by integrating novel technology that transcends human sensorimotor capabilities into robotic systems that provide both real-time relevant information and physical support to the surgeon, with the ultimate goal of improving clinical care and enabling novel therapies.
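
As a rough illustration of the FBG-based force sensing mentioned above, the sketch below identifies a linear calibration from wavelength shifts to tool-tip forces by least squares and applies it to a new reading. The data and coefficients are synthetic placeholders, not measurements from an actual instrument.

```python
# Minimal sketch: least-squares calibration mapping FBG wavelength shifts to tip forces.
import numpy as np

rng = np.random.default_rng(1)

# Known applied forces (x/y components, mN) and the corresponding wavelength shifts
# from three FBG sensors (synthetic: a hidden linear response plus noise).
forces = rng.uniform(-10, 10, size=(100, 2))
hidden_response = np.array([[0.4, -0.1],
                            [0.1,  0.5],
                            [-0.2, 0.3]])   # assumed shift per unit force
shifts = forces @ hidden_response.T + 0.05 * rng.standard_normal((100, 3))

# Least-squares calibration: matrix mapping wavelength shifts back to tip forces.
calib, *_ = np.linalg.lstsq(shifts, forces, rcond=None)

# Estimate the force for a new reading.
new_shift = np.array([[1.2, 2.5, 0.4]])
print("estimated tip force (mN):", new_shift @ calib)
```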

 

Recorded Spring 2019 Seminars

Apr
24
Wed
LCSR Seminar: Shai Revzen “Geometric Mechanics and Robots with Multiple Contacts” @ Hackerman B-17
Apr 24 @ 12:00 pm – 1:00 pm

Abstract:

Modeling and control problems generally get harder the more Degrees of Freedom (DoF) are involved, suggesting that moving with many legs or grasping with many fingers should be difficult to describe. In this talk I will show how insights from the theory of geometric mechanics, a theory developed about 20-30 years ago by Marsden, Ostrowski, and Bloch, might turn that notion on its head. I will motivate the claim that when enough legs contact the ground, the complexity associated with momentum is gone, replaced by a problem of slipping contacts. In this regime, equations of motion are replaced by a “connection”, which is both simple to estimate in a data-driven form and easy to simulate by adopting some non-conventional friction models. The talk will contain a brief intro to geometric mechanics and consist mostly of results showing that: (i) this class of models is more general than may seem at first; (ii) they can be used for very rapid hardware-in-the-loop gait optimization of both simple and complex robots; (iii) they motivate a simple motion model that fits experimental results remarkably well. If successful, this research agenda could improve motion planning speeds for multi-contact robotic systems by several orders of magnitude, and explain how simple animals can move so well with many limbs.
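
A minimal sketch of the “connection” viewpoint: in the kinematic regime the body velocity is a shape-dependent linear function of the shape velocity, xi = -A(r) * r_dot, so the net displacement over a gait follows by integrating xi along the shape trajectory. The specific A(r) below is a toy example, not a model of any particular robot.

```python
# Minimal sketch: integrate body motion from a shape (gait) trajectory through a
# local connection A(r). The connection here is a toy illustration.
import numpy as np

def local_connection(r):
    """Toy local connection A(r): maps 2 shape velocities to (x, y, theta) body velocity."""
    a1, a2 = r
    return np.array([[np.cos(a2), 0.3],
                     [0.2,        np.sin(a1)],
                     [0.1 * a2,   -0.1 * a1]])

def integrate_gait(shape_traj, dt):
    """Integrate body pose g = (x, y, theta) driven by a shape trajectory."""
    g = np.zeros(3)
    for k in range(len(shape_traj) - 1):
        r = shape_traj[k]
        r_dot = (shape_traj[k + 1] - r) / dt
        xi = -local_connection(r) @ r_dot          # body-frame velocity
        # Rotate xi from the body frame into the world frame before integrating.
        c, s = np.cos(g[2]), np.sin(g[2])
        g += dt * np.array([c * xi[0] - s * xi[1], s * xi[0] + c * xi[1], xi[2]])
    return g

# One period of a circular gait in shape space.
t = np.linspace(0, 2 * np.pi, 500)
gait = np.stack([0.5 * np.cos(t), 0.5 * np.sin(t)], axis=1)
print("net displacement after one gait cycle:", integrate_gait(gait, t[1] - t[0]))
```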

 

Bio:

Shai Revzen is an Assistant Professor at the University of Michigan, Ann Arbor. His primary appointment is in the Department of Electrical Engineering and Computer Science in the College of Engineering. He holds a courtesy faculty appointment in the Department of Ecology and Evolutionary Biology and is an Assistant Director of the Michigan Robotics Institute. He received his Ph.D. in Integrative Biology from the University of California, Berkeley, and an M.Sc. in Computer Science from the Hebrew University of Jerusalem. In addition to his academic work, Shai was Chief Architect for R&D of the convergent systems division of Harmonic Lightwaves (HLIT) and a co-founder of Bio-Systems Analysis, a biomedical technology start-up. As principal investigator of the Biologically Inspired Robotics and Dynamical Systems (BIRDS) lab, Shai sets the research agenda and approach of the lab: a focus on fundamental science, realizing its transformative influence on robotics and other technology. Work in the lab is split roughly equally among robotics, mathematics, and biology.

 

Recorded Spring 2019 Seminars

May
1
Wed
LCSR Seminar: Panel Discussion with Experts from Academia and Industry: Life After Grad School: Today’s Opportunities @ Hackerman B-17
May 1 @ 12:00 pm – 1:00 pm

Hosted by:

Ehsan Azimi and Shahriar Sefati

 

 

Recorded Spring 2019 Seminars

Sep
4
Wed
LCSR Seminar: Robotics Kickoff Town Hall @ Hackerman B-17
Sep 4 @ 12:00 pm – 1:00 pm
Sep
18
Wed
LCSR Seminar: Joseph Singapogu @ Hackerman B-17
Sep 18 @ 12:00 pm – 1:00 pm

Abstract:

TBA

 

Bio:

TBA

Sep
25
Wed
LCSR Seminar: IP/COI Laura Evans and Peter Sheppard @ Hackerman B-17
Sep 25 @ 12:00 pm – 1:00 pm

Peter A. Sheppard – Sr. Intellectual Property Manager, Johns Hopkins Technology Ventures

“Intellectual Property Primer For Conflict of Interest Training.”

 

Laura M. Evans – Senior Policy Associate, Director, Homewood IRB

“Conflicts of Interest: Identification, Review, and Management.”

 

 LCSR Seminar Video Link

Oct
2
Wed
LCSR Seminar: David Blei “The Blessings of Multiple Causes” @ Hackerman B-17
Oct 2 @ 12:00 pm – 1:00 pm

Abstract:

Causal inference from observational data is a vital problem, but it comes with strong assumptions. Most methods require that we observe all confounders, variables that affect both the causal variables and the outcome variables. But whether we have observed all confounders is a famously untestable assumption. We describe the deconfounder, a way to do causal inference with weaker assumptions than the classical methods require.
How does the deconfounder work? While traditional causal methods measure the effect of a single cause on an outcome, many modern scientific studies involve multiple causes, different variables whose effects are simultaneously of interest. The deconfounder uses the correlation among multiple causes as evidence for unobserved confounders, combining unsupervised machine learning and predictive model checking to perform causal inference. We demonstrate the deconfounder on real-world data and simulation studies, and describe the theoretical requirements for the deconfounder to provide unbiased causal estimates.
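
A minimal sketch of the deconfounder recipe as described above: fit a factor model to the multiple causes, treat the inferred per-unit factors as a substitute confounder, and include them alongside the causes in the outcome model. The synthetic data and model choices (factor analysis, linear outcome regression) are illustrative assumptions, not the setup used in the paper.

```python
# Minimal sketch of the deconfounder on synthetic multi-cause data.
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n, n_causes = 2000, 10

# An unobserved confounder drives both the causes and the outcome.
confounder = rng.standard_normal((n, 1))
causes = confounder @ rng.standard_normal((1, n_causes)) + rng.standard_normal((n, n_causes))
true_effects = rng.standard_normal(n_causes)
outcome = causes @ true_effects + 2.0 * confounder[:, 0] + rng.standard_normal(n)

# Steps 1-2: a factor model on the causes yields a substitute confounder.
substitute = FactorAnalysis(n_components=1, random_state=0).fit_transform(causes)

# Step 3: outcome regression adjusting for the substitute confounder.
naive = LinearRegression().fit(causes, outcome)
adjusted = LinearRegression().fit(np.hstack([causes, substitute]), outcome)

print("naive coefficient error:   ", np.abs(naive.coef_ - true_effects).mean())
print("adjusted coefficient error:", np.abs(adjusted.coef_[:n_causes] - true_effects).mean())
```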

 

Bio:

David Blei is a Professor of Statistics and Computer Science at Columbia University, and a member of the Columbia Data Science Institute. He studies probabilistic machine learning, including its theory, algorithms, and application. David has received several awards for his research, including a Sloan Fellowship (2010), Office of Naval Research Young Investigator Award (2011), Presidential Early Career Award for Scientists and Engineers (2011), Blavatnik Faculty Award (2013), ACM-Infosys Foundation Award (2013), a Guggenheim fellowship (2017), and a Simons Investigator Award (2019). He is the co-editor-in-chief of the Journal of Machine Learning Research. He is a fellow of the ACM and the IMS.
[*] https://arxiv.org/abs/1805.06826

 

 LCSR Seminar Video Link

Oct
9
Wed
LCSR Seminar: Zhou Yu “Enabling Machines with Situational Awareness, Communication, and Decision-Making Abilities Leveraging Multimodal Information” @ Hackerman B-17
Oct 9 @ 12:00 pm – 1:00 pm

Abstract:

Humans interact with other humans and the world through information from various channels, including vision, audio, language, haptics, etc. To simulate intelligence, machines require similar abilities to process and combine information from different channels to acquire better situational awareness, better communication ability, and better decision-making ability. In this talk, we describe three projects. In the first study, we enable a robot to utilize both vision and audio information to achieve better user understanding, and then use incremental language generation to improve the robot’s communication with a human. In the second study, we utilize multimodal history tracking to optimize policy planning in task-oriented visual dialogs. In the third project, we tackle the well-known trade-off between dialog response relevance and policy effectiveness in visual dialog generation, proposing a new machine learning procedure that alternates between supervised learning and reinforcement learning to optimize language generation and policy planning jointly. We will also cover recent ongoing work on image synthesis through dialogs and on generating social multimodal dialogs with a blend of GIFs and words.
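
A minimal sketch of the alternating training scheme mentioned above: supervised (cross-entropy) updates on reference responses interleaved with policy-gradient (REINFORCE) updates driven by a task reward. The tiny model, synthetic data, and reward are placeholders standing in for full dialog and vision models, not the procedure proposed in the talk.

```python
# Minimal sketch: alternate supervised and reinforcement-learning updates on a toy model.
import torch
import torch.nn as nn

VOCAB, HIDDEN = 50, 64
model = nn.Sequential(nn.Embedding(VOCAB, HIDDEN), nn.Flatten(), nn.Linear(HIDDEN, VOCAB))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
ce_loss = nn.CrossEntropyLoss()

def reward_fn(action):
    # Placeholder task reward: prefer responses from the first half of the vocabulary.
    return (action < VOCAB // 2).float()

for step in range(100):
    context = torch.randint(0, VOCAB, (16, 1))          # synthetic dialog context tokens
    logits = model(context)
    if step % 2 == 0:
        # Supervised phase: match (synthetic) reference next tokens.
        reference = torch.randint(0, VOCAB, (16,))
        loss = ce_loss(logits, reference)
    else:
        # RL phase: sample a response token and reinforce it with the task reward.
        dist = torch.distributions.Categorical(logits=logits)
        action = dist.sample()
        loss = -(reward_fn(action) * dist.log_prob(action)).mean()
    optimizer.zero_grad(); loss.backward(); optimizer.step()
```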

 

Bio:

Zhou Yu is an Assistant Professor in the Computer Science Department at UC Davis. She received her PhD from Carnegie Mellon University in 2017. Zhou is interested in building robust and multi-purpose dialog systems using fewer data points and less annotation. She also works on language generation and vision-and-language tasks. Zhou’s work on persuasive dialog systems recently received an ACL 2019 best paper nomination. Zhou was featured in Forbes as a 2018 30 Under 30 in Science for her work on multimodal dialog systems. Her team recently won the 2018 Amazon Alexa Prize for building an engaging social bot, receiving a $500,000 cash award.

 

 LCSR Seminar Video Link
