Calendar

Sep
19
Wed
LCSR Seminar: Balazs Vagvolgyi “On-Orbit Robotic Satellite Servicing” @ Hackerman B17
Sep 19 @ 12:00 pm – 1:00 pm

Abstract

There are currently over two thousand satellites catalogued on-orbit. Most of them were designed with a finite service life, limited by the fuel available for attitude control and altitude boosts. When the fuel is consumed, or a fault occurs in a satellite, we presently lack the ability to conduct on-orbit refueling and repairs. NASA’s Space Shuttle Program enabled a variety of satellite servicing missions, but all were performed either by astronauts on spacewalks or by robots controlled by crew from within the spacecraft. The best-known examples are the Hubble Space Telescope servicing missions. However, the risks and cost of using astronauts make satellite servicing by humans prohibitive in all but a very few cases. NASA is currently developing the capabilities necessary to perform satellite servicing tasks telerobotically, with ground-based robot operators. The planned unmanned servicing spacecraft will be equipped with an array of sensors, remotely operated robotic arms, and servicing tools.

In the talk, I will give an overview of NASA’s past and future servicing missions and discuss the partnership between JHU’s Laboratory for Computational Sensing and Robotics (LCSR) and NASA’s Satellite Servicing Projects Division (SSPD) in developing novel robot control methods and robotic tools for upcoming missions. The research efforts at JHU-LCSR focus on two areas: facilitating the cutting of thermal insulation on satellites using force-sensitive robotic tools and dynamical modeling of the cutting process, and improving the situational awareness of robot operators performing complex manipulation tasks with limited visual feedback by employing mixed-reality visualization techniques.

 

Bio

Balazs P. Vagvolgyi is an Associate Research Scientist in the Laboratory for Computational Sensing and Robotics at the Johns Hopkins University. He holds an MSc in Computer Science. Before coming to JHU in 2006, he worked on the imaging pipeline of flat-panel interventional vascular X-ray systems at GE Healthcare. He briefly left Hopkins in 2013-2014 to build real-time imaging solutions for mobile devices as Chief Scientist for Spherical Inc. in San Francisco, CA. His professional interests and research focus on real-time computer vision and visualization, primarily in the context of robotics and medical interventions.

 

Recorded Fall 2018 Seminars

Sep
26
Wed
LCSR Seminar: Career Center “Resume Writing as a Job Search Strategy” @ Hackerman B17
Sep 26 @ 12:00 pm – 1:00 pm
Oct
3
Wed
LCSR Seminar: Adrian Park “Surgical Visualization – an evolution” @ Hackerman B17
Oct 3 @ 12:00 pm – 1:00 pm

Abstract

The advent of laparoscopic cholecystectomy almost 30 years ago changed forever the way surgeons visualize and interact with target anatomy. Patients continue to benefit from different yet related image-guided therapies that also allow access to pathology by minimally invasive means. As we continue to depend upon images to guide and inform patient interventions, it is instructive to review the advances made in surgical visualization over its recent history and to look ahead to the issues that must be addressed to optimize interventional visualization. These issues will be reviewed from the perspective of a clinician rather than a computer scientist or physicist, with attention also paid to the often neglected topics of ergonomics and human factors in surgical visualization.

 

Bio

Dr. Park is Chairman of the Department of Surgery at Anne Arundel Medical Center in Annapolis, MD, and Professor of Surgery at the Johns Hopkins University School of Medicine. He has made major advances in laparoscopic techniques for complex hernia repair and for foregut and spleen surgery.

Previously, Dr. Park was the Dr. Alex Gillis Professor and Chairman of the Department of Surgery at Dalhousie University in Halifax, NS. Prior to this appointment, he served as the Campbell and Jeanette Plugge Professor and Vice Chair of the Department of Surgery, the Head of the Division of General Surgery at the University of Maryland Medical Center, and the Chair of the Maryland Advanced Simulation, Training, Research, and Innovation (MASTRI) Center.

He is a member of the American Surgical Association and a Fellow of the Royal College of Surgeons of Canada, the American College of Surgeons, and the College of Surgeons of East, Central and Southern Africa. With a long-held commitment to the training of surgeons in sub-Saharan Africa, he is a past president of the Pan-African Academy of Christian Surgeons (PAACS).

Currently a member of the Board of Directors of SAGES, he has also served as the Fellowship Council’s founding President and as its Board Chair. He is Editor-in-Chief of Surgical Innovation. The author of over 250 scholarly articles and book chapters, he is widely published in the areas of hernia, solid-organ laparoscopy, foregut surgery, surgical education, the “Operating Room of the Future,” and surgical ergonomics. Dr. Park holds 20 patents and has been instrumental in the development and application of new technologies in endoscopic surgery.

Oct
10
Wed
LCSR Seminar: Nikolay Vasilyev “Implantable stretchable sensors and soft robotic assist devices for monitoring and therapy of heart failure” @ Hackerman B17
Oct 10 @ 12:00 pm – 1:00 pm

Abstract

Heart failure (HF) represents a significant healthcare burden in the United States and worldwide. With a prevalence of 5.7 million in the US, HF costs the nation an estimated $30.7 billion each year. About half of people who develop HF die within 5 years of diagnosis.

Continuous monitoring of cardiac function in HF using implantable electronic devices has been associated with reductions in mortality, all-cause hospitalizations, and HF-related hospitalizations. However, most current monitoring approaches collect data (heart rate, pressure, oxygen saturation, metabolites) that are only derivative representations of the heart’s primary function: mechanical pumping.

Current therapy for end-stage HF, when medical management options have been exhausted, includes heart, lung, or heart-lung transplantation, or mechanical circulatory support when a donor organ is not available. Several ventricular assist devices (VADs) provide short- and long-term mechanical circulatory support for the left or right ventricle, or both. The ventricles have a complex geometry and contraction pattern that involves coordinated motion of the ventricular free walls and the ventricular septum. Current VAD designs do not address these anatomic and physiologic features of the ventricles, as VADs are designed as pumps that unload the target ventricle by rerouting blood through an artificial circuit. Moreover, blood contact with the artificial circuit necessitates permanent anticoagulation and predisposes patients to bleeding and thromboembolic complications.

We have designed 1) implantable stretchable sensors that continuously acquire myocardial strain data and 2) soft robotic VADs (SR-VADs) with ventricular septal bracing as innovative approaches to continuously monitor ventricular function and to assist native ventricular contraction in end-stage HF. We demonstrated proof of concept in large animal studies by showing that functional prototypes can be safely and rapidly implanted on a beating heart and function for several hours. Future directions include designing sensors that capture multiaxial strain signal, manufacturing soft actuators that fully mimic ventricular motion, incorporating sensors for organ-in-the-loop control and validating the approach in longer-term studies.

 

Bio

Nikolay V. Vasilyev graduated from Sechenov First Moscow State Medical University. He completed his residency and fellowship training in cardiovascular surgery at the Bakoulev Center for Cardiovascular Surgery in Moscow, and his research fellowship at the Cleveland Clinic in Cleveland, Ohio, USA. Dr. Vasilyev currently serves as a Staff Scientist in the Department of Cardiac Surgery at Boston Children’s Hospital and as an Assistant Professor of Surgery in the Division of Surgery at Harvard Medical School. His research focuses on the development of image-guided beating-heart cardiovascular interventions and cardiac surgical robotics, including clinically driven device design, development of imaging techniques and image processing, and computer modeling and simulation. To date, Dr. Vasilyev has published over fifty peer-reviewed papers and five book chapters and received four patents, with four more applications pending. He is a member of the European Association for Cardio-Thoracic Surgery, where he served on the International Co-Operation Committee, and a member of the American Heart Association and the American Society for Artificial Internal Organs. He is a Co-Founder and Director of the start-up company Nido Surgical Inc.

 


Oct
17
Wed
LCSR Seminar: Seth Hutchinson “Design, Modeling and Control of a Biologically-Inspired Bat Robot” @ Hackerman B17
Oct 17 @ 12:00 pm – 1:00 pm

Abstract

Bats have a complex skeletal morphology, with both ball-and-socket and revolute joints that interconnect the bones and muscles to create a musculoskeletal system with over 40 degrees of freedom, some of which are passive. Replicating this biological system in a small, lightweight, low-power air vehicle is not only infeasible, but also undesirable; trajectory planning and control for such a system would be intractable, precluding any possibility for synthesizing complex agile maneuvers, or for real-time control. Thus, our goal is to design a robot whose kinematic structure is topologically much simpler than a bat’s, while still providing the ability to mimic the bat-wing morphology during flapping flight, and to find optimal trajectories that exploit the natural system dynamics, enabling effective controller design.

 

The kinematic design of our robot is driven by motion capture experiments using live bats. In particular, we use principal component analysis to capture the essential bat-wing shape information, and solve a nonlinear optimization problem to determine the optimal kinematic parameters for a simplified parallel kinematic wing structure. We then derive the Lagrangian dynamic equations for this system, along with a model for the aerodynamic forces. We use a shooting-based optimizer to locate physically feasible, periodic solutions to this system, and an event-based control scheme is then derived in order to track the desired trajectory. We demonstrate our results with flight experiments on our robotic bat.
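The dimensionality-reduction step described above can be sketched in a few lines. The following is only an illustration of PCA via SVD on motion-capture joint-angle frames; the data, array names, and the choice of three modes are hypothetical and not drawn from the speakers' actual pipeline:

```python
import numpy as np

# Hypothetical motion-capture data: 500 frames of 40 joint angles
rng = np.random.default_rng(0)
frames = rng.normal(size=(500, 40))

# Center the data and compute principal components via SVD
mean = frames.mean(axis=0)
centered = frames - mean
U, S, Vt = np.linalg.svd(centered, full_matrices=False)

# Keep the first k modes, which capture the dominant wing-shape variation
k = 3
modes = Vt[:k]                  # (k, 40) principal shape modes
coords = centered @ modes.T     # (frames, k) low-dimensional coordinates

# Reconstruct and measure how much variance the k modes explain
reconstruction = coords @ modes + mean
explained = (S[:k] ** 2).sum() / (S ** 2).sum()
```

In the actual work, the retained modes would then drive the nonlinear optimization over the simplified parallel kinematic wing structure.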

 

Bio

Seth Hutchinson is Professor and KUKA Chair for Robotics in the School of Interactive Computing at the Georgia Institute of Technology, where he also serves as Associate Director of the Institute for Robotics and Intelligent Machines. His research in robotics spans the areas of planning, sensing, and control. He has published more than 200 papers on these topics, and is coauthor of the books “Principles of Robot Motion: Theory, Algorithms, and Implementations,” published by MIT Press, and “Robot Modeling and Control,” published by Wiley.

Hutchinson currently serves on the editorial board of the International Journal of Robotics Research and chairs the steering committee of the IEEE Robotics and Automation Letters. He was Founding Editor-in-Chief of the IEEE Robotics and Automation Society’s Conference Editorial Board (2006-2008) and Editor-in-Chief of the IEEE Transactions on Robotics (2008-2013).

Hutchinson is an Emeritus Professor of Electrical and Computer Engineering at the University of Illinois at Urbana-Champaign, where he was Professor of ECE until 2018, serving as Associate Head for Undergraduate Affairs from 2001 to 2007. He received his Ph.D. from Purdue University in 1988. Hutchinson is a Fellow of the IEEE.

 


Oct
24
Wed
LCSR Seminar: Tim Bretl “What use are fiducial markers in Structure-from-Motion (SfM)?” @ Hackerman B17
Oct 24 @ 12:00 pm – 1:00 pm

Abstract

Everybody knows that adding fiducial markers to a scene will improve the performance of Structure-from-Motion (SfM) algorithms for vision-based 3D reconstruction, but nobody knows exactly how. I’ll show you several obvious ways to use markers that work poorly. Then, I’ll show you a simple but less obvious way to use them that seems to work very well.

 

Bio

Timothy Bretl comes from the University of Illinois at Urbana-Champaign, where he is both an Associate Professor and the Associate Head for Undergraduate Programs in the Department of Aerospace Engineering. He holds an affiliate appointment in the Coordinated Science Laboratory, where he leads a research group that works on a diverse set of projects in robotics and neuroscience (http://bretl.csl.illinois.edu/). He has also received every award for undergraduate teaching that is granted by his department, college, and campus.

 


Oct
31
Wed
LCSR Seminar: Career Services “Interview Strategies and Preparation” @ Hackerman B17
Oct 31 @ 12:00 pm – 1:00 pm
Nov
7
Wed
LCSR Seminar: Veronica Santos “Artificial haptic intelligence for human-machine systems” @ Hackerman B17
Nov 7 @ 12:00 pm – 1:00 pm

Abstract

The functionality of artificial manipulators could be enhanced by artificial “haptic intelligence” that enables the identification of object features via touch for semi-autonomous decision-making and/or display to a human operator. This could be especially useful when complementary sensory modalities, such as vision, are unavailable. I will highlight past and present work to enhance the functionality of artificial hands in human-machine systems. I will describe efforts to develop multimodal tactile sensor skins, and to teach robots how to haptically perceive salient geometric features such as edges and fingertip-sized bumps and pits using machine learning techniques. I will describe the use of reinforcement learning to teach robots goal-based policies for a functional contour-following task: the closure of a ziplock bag. Our Contextual Multi-Armed Bandits approach tightly couples robot actions to the tactile and proprioceptive consequences of the actions, and selects future actions based on prior experiences, the current context, and a functional task goal. Finally, I will describe current efforts to develop real-time capabilities for the perception of tactile directionality, and to develop models for haptically locating objects buried in granular media. Real-time haptic perception and decision-making capabilities could be used to advance semi-autonomous robot systems and reduce the cognitive burden on human teleoperators of devices ranging from wheelchair-mounted robots to explosive ordnance disposal robots.
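As a rough illustration of the contextual-bandit idea mentioned above (this is a generic epsilon-greedy sketch, not Dr. Santos's actual method; the class, context, and action names are made up), a learner can maintain per-(context, action) reward estimates and pick the best action for the current context:

```python
import random
from collections import defaultdict

class ContextualBandit:
    """Epsilon-greedy contextual bandit with per-(context, action) mean-reward estimates."""

    def __init__(self, actions, epsilon=0.1):
        self.actions = actions
        self.epsilon = epsilon
        self.counts = defaultdict(int)    # (context, action) -> number of pulls
        self.values = defaultdict(float)  # (context, action) -> running mean reward

    def select(self, context):
        if random.random() < self.epsilon:
            return random.choice(self.actions)  # explore
        # exploit: best known action for this context
        return max(self.actions, key=lambda a: self.values[(context, a)])

    def update(self, context, action, reward):
        key = (context, action)
        self.counts[key] += 1
        # incremental mean update
        self.values[key] += (reward - self.values[key]) / self.counts[key]
```

For example, with a tactile context like "edge_detected" and candidate actions like "slide_forward" or "regrasp", the learner gradually prefers whichever action has yielded progress toward the task goal in that context.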

 

Bio

Veronica J. Santos is an Associate Professor in the Mechanical and Aerospace Engineering Department at the University of California, Los Angeles, and Director of the UCLA Biomechatronics Lab (http://BiomechatronicsLab.ucla.edu). Dr. Santos received her B.S. in mechanical engineering with a music minor from the University of California at Berkeley (1999), was a Quality and R&D Engineer at Guidant Corporation, and received her M.S. and Ph.D. in mechanical engineering with a biometry minor from Cornell University (2007). As a postdoc at the University of Southern California, she contributed to the development of a biomimetic tactile sensor for prosthetic hands. From 2008 to 2014, Dr. Santos was an Assistant Professor of Mechanical and Aerospace Engineering at Arizona State University. Her research interests include human hand biomechanics, human-machine systems, haptics, tactile sensors, machine perception, prosthetics, and robotics for grasp and manipulation. Dr. Santos was selected for an NSF CAREER Award (2010), three engineering teaching awards (2012, 2013, 2017), an ASU Young Investigator Award (2014), and participation in a National Academy of Engineering Frontiers of Engineering Education Symposium (2010). She currently serves as an Editor for the IEEE International Conference on Robotics and Automation (2017-2019), an Associate Editor for the ASME Journal of Mechanisms and Robotics (2016-2019), and an Associate Editor for the ACM Transactions on Human-Robot Interaction (2018-2021).

 

 


Nov
9
Fri
Seminar: Dinesh Manocha “Autonomous Driving: Simulation and Navigation” @ Hackerman 320
Nov 9 @ 12:00 pm – 1:00 pm

Abstract

Autonomous driving has been an active area of research and development over the last decade. Despite considerable progress, there are many open challenges, including automated driving in dense and urban scenes. In this talk, we give an overview of our recent work on simulation and navigation technologies for autonomous vehicles. We present a novel simulator, AutonoVi-Sim, that uses recent developments in physics-based simulation, robot motion planning, game engines, and behavior modeling. We describe novel methods for interactive simulation of multiple vehicles with unique steering or acceleration limits, taking into account vehicle dynamics constraints. In addition, AutonoVi-Sim supports navigation for non-vehicle traffic participants such as cyclists and pedestrians. AutonoVi-Sim also facilitates data analysis, allowing for capturing video from the vehicle’s perspective and exporting sensor data such as the relative positions of other traffic participants, camera data for a specific sensor, and detection and classification results. We highlight its performance in traffic and driving scenarios. We also present novel multi-agent simulation algorithms using reciprocal velocity obstacles that can model the behavior and trajectories of different traffic agents in dense scenarios, including cars, buses, bicycles, and pedestrians. Finally, we present novel methods for extracting trajectories from videos and using them for behavior modeling and safe navigation.
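To give a flavor of velocity-obstacle-style avoidance, the sketch below samples candidate velocities and keeps the one closest to a preferred velocity that stays clear of another agent. This is a deliberate simplification for illustration only, not the reciprocal velocity obstacle formulation used in the speaker's work; all function names, radii, and horizons are assumptions:

```python
import math

def collides(p, v_rel, radius, horizon, steps=50):
    """True if relative motion brings the agents within the combined radius
    at any sampled time within the planning horizon."""
    for i in range(1, steps + 1):
        t = horizon * i / steps
        if math.hypot(p[0] + v_rel[0] * t, p[1] + v_rel[1] * t) < radius:
            return True
    return False

def pick_velocity(p_a, v_pref, p_b, v_b, radius=1.0, horizon=4.0):
    """Sample velocities around v_pref and return the collision-free one
    closest to v_pref (falling back to standing still)."""
    best, best_cost = (0.0, 0.0), float("inf")
    speed = math.hypot(*v_pref) or 1.0
    p = (p_b[0] - p_a[0], p_b[1] - p_a[1])  # relative position of agent b
    for k in range(16):
        ang = 2 * math.pi * k / 16
        for s in (0.5 * speed, speed):
            cand = (s * math.cos(ang), s * math.sin(ang))
            v_rel = (v_b[0] - cand[0], v_b[1] - cand[1])
            if collides(p, v_rel, radius, horizon):
                continue
            cost = math.hypot(cand[0] - v_pref[0], cand[1] - v_pref[1])
            if cost < best_cost:
                best, best_cost = cand, cost
    return best
```

In a head-on scenario the straight-ahead velocity is rejected and the agent sidesteps; a full reciprocal formulation would additionally split the avoidance effort between the two agents so their choices do not oscillate.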

 

Bio

Dinesh Manocha is the Paul Chrisman Iribe Chair in Computer Science & Electrical and Computer Engineering at the University of Maryland College Park. He is also the Phi Delta Theta/Matthew Mason Distinguished Professor Emeritus of Computer Science at the University of North Carolina – Chapel Hill. He has won many awards, including an Alfred P. Sloan Research Fellowship, the NSF CAREER Award, the ONR Young Investigator Award, and the Hettleman Prize for scholarly achievement. His research interests include multi-agent simulation, virtual environments, physically-based modeling, and robotics. He has published more than 500 papers and supervised more than 36 PhD dissertations. He is an inventor of 9 patents, several of which have been licensed to industry. His work has been covered by the New York Times, NPR, the Boston Globe, the Washington Post, ZDNet, and a DARPA Legacy press release. He was a co-founder of Impulsonic, a developer of physics-based audio simulation technologies, which was acquired by Valve Inc. He is a Fellow of AAAI, AAAS, ACM, and IEEE, and received the Distinguished Alumni Award from IIT Delhi. See http://www.cs.umd.edu/~dm

Nov
14
Wed
LCSR Seminar: Christoffer Heckman “Robust, real-time perception strategies for robotic platforms in challenging environments” @ Hackerman B17
Nov 14 @ 12:00 pm – 1:00 pm

Abstract

Perception precedes action, in both the biological world as well as the technologies maturing today that will bring us autonomous cars, aerial vehicles, robotic arms and mobile platforms. The problem of probabilistic state estimation via sensor measurements takes on a variety of forms, resulting in information about our own motion as well as the structure of the world around us. In this talk, I will discuss some approaches that my research group has been developing that focus on estimating these quantities online and in real-time in extreme environments where dust, fog and other visually obscuring phenomena are widely present and when sensor calibration is altered or degraded over time. These approaches include new techniques in computer vision, visual-inertial SLAM, geometric reconstruction, nonlinear optimization, and even some sensor development. The methods I discuss have an application-specific focus to ground vehicles in the subterranean environment, but are also currently deployed in the agriculture, search and rescue, and industrial human-robot collaboration contexts.

 

Bio

Chris Heckman is an Assistant Professor and the Jacques Pankove Faculty Fellow in the Department of Computer Science at the University of Colorado at Boulder, where he also holds appointments in the Aerospace Engineering Sciences and Electrical and Computer Engineering departments. Professor Heckman earned his B.S. in Mechanical Engineering from UC Berkeley in 2008 and his Ph.D. in Theoretical and Applied Mechanics from Cornell University in 2012, where he was an NSF Graduate Research Fellow. He had postdoctoral appointments at the Naval Research Laboratory in Washington, D.C. as an NRC Research Associate, and in the Autonomous Robotics and Perception Group at CU Boulder as a Research Scientist, before joining the faculty there in 2016. He currently is leading one of the funded competition teams in the DARPA Subterranean Challenge; his past work has been funded by NSF, DARPA and multiple industry partners. His research focuses on developing mathematical and systems-level frameworks for autonomous control and perception, particularly vision and sensor fusion. His work applies concepts of nonlinear dynamical systems to the design of control systems for autonomous agents, in particular ground and aquatic vehicles, enabling them to navigate uncertain and rapidly-changing environments. A hallmark of his research is the implementation of these systems on experimental platforms.

 


Laboratory for Computational Sensing + Robotics