Calendar

LCSR Seminar: Rebecca Kramer-Bottiglio “From Particles to Parts–Building Multifunctional Robots with Programmable Robotic Skins” @ Hackerman B-17
Feb 12 @ 12:00 pm – 1:00 pm

Abstract:

Robots generally excel at specific tasks in structured environments, but lack the versatility and adaptability required to interact with and locomote within the natural world. To increase versatility in robot design, my research group is developing soft robotic skins that can wrap around arbitrary deformable objects to induce the desired motions and deformations. The robotic skins integrate programmable composites to embed actuation and sensing into planar substrates that may be applied to, removed from, and transferred between different objects to create a multitude of controllable robots with different functions. During this talk, I will demonstrate the versatility of this soft robot design approach by showing robotic skins in a wide range of applications – including manipulation tasks, locomotion, and wearables – using the same 2D robotic skins reconfigured on the surface of various 3D soft, inanimate objects. Further, I will present recent work towards programmable composites derived from novel functional particulates that address the emerging need for variable stiffness properties, variable-trajectory motions, and embedded computation within the soft robotic skins.

Bio:

Rebecca Kramer-Bottiglio is the John J. Lee Assistant Professor of Mechanical Engineering and Materials Science at Yale University. She completed her B.S. at the Johns Hopkins University, M.S. at U.C. Berkeley, and Ph.D. at Harvard University. Prior to joining the faculty at Yale, she was an Assistant Professor of Mechanical Engineering at Purdue University. She currently serves as an Associate Editor of Soft Robotics, Frontiers in Robotics and AI, IEEE Robotics and Automation Letters, and Multifunctional Materials, and is an IEEE Distinguished Lecturer. She is the recipient of the NSF CAREER Award, the NASA Early Career Faculty Award, the AFOSR Young Investigator Award, the ONR Young Investigator Award, and the Presidential Early Career Award for Scientists and Engineers (PECASE), and was named to Forbes’ 30 under 30 list in 2015.

This talk will be recorded. Click Here for all of the recorded seminars for the 2019-2020 academic year.

LCSR Seminar: Cornelia Fermüller “Action perception at multiple time scales” @ Hackerman B-17
Feb 19 @ 12:00 pm – 1:00 pm

Abstract:

Understanding human activity is a very challenging task, but a prerequisite for the autonomy of robots interacting with humans. Solutions that generalize must involve not only perception but also cognition and a grounding in the motor system. Our approach is to describe complex actions as events at multiple time scales. At the lowest level, signals are chunked into primitive symbolic events, and these are then combined into increasingly complex events spanning longer and longer time intervals. The approach will be demonstrated through our work on creating visually learning robots, and the talk will describe some of its novel components: an architecture in which cognitive and linguistic processes communicate with the vision and motor systems in a dialog fashion; vision processes that parse objects and movements based on their attributes, spatial relations, and 3D geometry; the combination of tactile sensing with vision for better recognition; and approaches to capture long-term relations in observed activities.
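
To make the multi-scale idea concrete, here is a minimal sketch (not from the talk) of bottom-up event chunking: a hypothetical threshold detector turns a 1-D speed signal into "move"/"pause" primitives, and a toy composition rule then groups them into a longer "reach" event. The threshold, the primitive labels, and the composition grammar are all illustrative assumptions.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Event:
    label: str
    start: float                      # seconds
    end: float
    children: List["Event"] = field(default_factory=list)

def chunk_primitives(speed: List[float], dt: float,
                     thresh: float = 0.05) -> List[Event]:
    """Chunk a 1-D speed signal into 'move'/'pause' primitive events
    wherever the speed crosses a (hypothetical) threshold."""
    events: List[Event] = []
    t0, moving = 0.0, speed[0] > thresh
    for i, s in enumerate(speed[1:], start=1):
        if (s > thresh) != moving:
            events.append(Event("move" if moving else "pause", t0, i * dt))
            t0, moving = i * dt, (s > thresh)
    events.append(Event("move" if moving else "pause", t0, len(speed) * dt))
    return events

def compose(events: List[Event], pattern=("move", "pause"),
            label: str = "reach") -> List[Event]:
    """Combine primitives into a longer-time-scale event whenever the
    (toy, assumed) pattern of labels occurs in sequence."""
    out: List[Event] = []
    i = 0
    while i < len(events):
        window = tuple(e.label for e in events[i:i + len(pattern)])
        if window == pattern:
            seg = events[i:i + len(pattern)]
            out.append(Event(label, seg[0].start, seg[-1].end, seg))
            i += len(pattern)
        else:
            out.append(events[i])
            i += 1
    return out
```

Running compose repeatedly over its own output would yield events of longer and longer time spans, mirroring the hierarchy described in the abstract.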

Bio:

Cornelia Fermüller is a research scientist at the Institute for Advanced Computer Studies (UMIACS) at the University of Maryland at College Park. She holds a Ph.D. from the Technical University of Vienna, Austria, and an M.S. from the University of Technology, Graz, Austria, both in Applied Mathematics. She co-founded the Autonomy Cognition and Robotics (ARC) Lab and co-leads the Perception and Robotics Group at UMD. Her research is in the areas of Computer Vision, Human Vision, and Robotics. She studies and develops biologically inspired Computer Vision solutions for systems that interact with their environment. In recent years, her work has focused on the interpretation of human activities and on motion processing for fast active robots (such as drones) using bio-inspired event-based sensors as input.

http://users.umiacs.umd.edu/users/fer

This talk will be recorded. Click Here for all of the recorded seminars for the 2019-2020 academic year.

LCSR Seminar: Henry Astley “Using Robotic Models To Explore The Evolution Of Functional Morphology” @ Hackerman B-17
Feb 26 @ 12:00 pm – 1:00 pm

Abstract:

Living organisms face a wide range of physical challenges in their environments, yet frequently display exceptional performance. This performance is often correlated with morphological features, physiological differences, or particular behaviors, leading to the hypothesis that these traits are adaptations to improve performance. However, rigorously testing adaptations is extremely difficult, as a particular trait may be suboptimal due to lack of selective pressure, subject to tradeoffs and evolutionary constraints, or even be entirely non-adaptive. Furthermore, it can be difficult to even truly determine the function of some traits, as they may not be amenable to experimental manipulation or comparative analysis. However, techniques and tools from engineering are allowing biologists to test the functional consequences of previously untestable physical and behavioral traits, and even explore the performance consequences of alternative versions of traits. This can lead to a broader understanding of the trait itself and of the evolutionary pressures acting upon it, past and present. This talk will present several examples of how 3D printing and robotics have been used to establish the functional consequences of enigmatic morphologies and behaviors in snakes, early tetrapods, and fish, demonstrating the power of these techniques for providing biological insights.

Bio:

Henry Astley is currently an Assistant Professor at the University of Akron’s Biomimicry Research & Innovation Center (BRIC), working on animal locomotion and biomimetic robotics. Dr. Astley initially completed a B.S. in Aerospace Engineering at the Florida Institute of Technology before switching fields and completing a second B.S. and an M.S. in biology at the University of Cincinnati, focusing on arboreal snake locomotion. Dr. Astley did his Ph.D. on frog jumping at Brown University, followed by a postdoc at the Georgia Institute of Technology focusing on locomotion in granular media.

This talk will be recorded. Click Here for all of the recorded seminars for the 2019-2020 academic year.

Postponed until Spring 2021 – JHU Robotics Industry Day @ Glass Pavilion, Levering Hall
Mar 20 @ 9:00 am – 5:00 pm

After closely monitoring developments related to the COVID-19 outbreak, we have decided to postpone LCSR’s Industry Day on March 20th to the fall semester. The health and well-being of our guests, students, staff, and faculty are our top priority.

We apologize for the difficulty and inconvenience resulting from these changes. While this is not an easy decision, we believe it is in the best interest of all parties. We will share the rescheduled dates as soon as we have more information.

Please direct any questions to Ashley Moriarty (ashleymoriarty@jhu.edu).

The Laboratory for Computational Sensing and Robotics will highlight its elite robotics students and showcase cutting-edge research projects in areas that include Medical Robotics, Extreme Environments Robotics, Human-Machine Systems for Manufacturing, BioRobotics and more. JHU Robotics Industry Day will take place from 8 a.m. to 4 p.m. in Levering Hall on the Homewood Campus at Johns Hopkins University.

Robotics Industry Day will provide top companies and organizations in the private and public sectors with access to the LCSR’s forward-thinking, solution-driven students. The event will also serve as an informal opportunity to explore university-industry partnerships.

You will experience dynamic presentations and discussions, observe live demonstrations, and participate in speed networking sessions that afford you the opportunity to meet Johns Hopkins’ most talented robotics students before they graduate.

Please contact Ashley Moriarty if you have any questions.


Download our 2018 Industry Day booklet

LCSR Seminar: Brent Gillespie @ Hackerman B-17
Apr 1 @ 12:00 pm – 1:00 pm
LCSR Seminar: Robert Grupp “Computer-Assisted Fluoroscopic Navigation for Orthopaedic Surgery” @ Hackerman B-17
Apr 8 @ 12:00 pm – 1:00 pm

https://wse.zoom.us/j/348338196

Abstract:

In the absence of computer assistance, orthopaedic surgeons frequently rely on a challenging mental interpretation of fluoroscopy for intraoperative guidance. Existing computer-assisted navigation systems forgo this mental process and obtain accurate information about visually obstructed objects through the use of 3D imaging and additional intraoperative sensing hardware. This information is attained at the expense of increased invasiveness to patients and surgical workflows: patients are exposed to large amounts of ionizing radiation during 3D imaging and undergo additional, and larger, incisions in order to accommodate navigational hardware; non-standard equipment must be present in the operating room; and time-consuming data collections must be conducted intraoperatively. Using periacetabular osteotomy (PAO) as the motivating clinical application, we introduce methods for computer-assisted fluoroscopic navigation of orthopaedic surgery that remain minimally invasive to both patients and surgical workflows.

Partial computed tomography (CT) of the pelvis is obtained preoperatively, and surface models of the entire pelvis are reconstructed using a combination of thin plate splines and a statistical model of pelvis anatomy. Intraoperative navigation is implemented through a 2D/3D registration pipeline, between 2D fluoroscopy and the 3D patient models. This pipeline recovers relative motion of the fluoroscopic imager using patient anatomy as a fiducial, without any introduction of external objects. PAO bone fragment poses are computed with respect to an anatomical coordinate frame and are used to intraoperatively assess acetabular coverage of the femoral head. Convolutional neural networks perform semantic segmentation and detect anatomical landmarks in fluoroscopy, allowing for automation of the registration pipeline. Real-time tracking of PAO fragments is enabled through the intraoperative injection of BBs into the pelvis; fragment poses are automatically estimated from a single view in less than one second. A combination of simulated and cadaveric surgeries was used to design and evaluate the proposed methods.
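
As a rough illustration of the registration step described above, the sketch below shows a generic intensity-based 2D/3D registration loop: a 6-DoF pose of the CT volume is optimized so that a simulated radiograph (digitally reconstructed radiograph, DRR) best matches the measured fluoroscopy image. This is not the speaker’s implementation; render_drr stands for an assumed, user-supplied DRR renderer, and normalized cross-correlation is one possible similarity metric.

```python
import numpy as np
from scipy.optimize import minimize

def ncc(a: np.ndarray, b: np.ndarray) -> float:
    """Normalized cross-correlation between two images."""
    a = (a - a.mean()) / (a.std() + 1e-8)
    b = (b - b.mean()) / (b.std() + 1e-8)
    return float((a * b).mean())

def register_2d3d(fluoro: np.ndarray, ct_volume: np.ndarray,
                  render_drr, pose0: np.ndarray) -> np.ndarray:
    """Estimate the 6-DoF pose (3 rotations, 3 translations) that best
    aligns a DRR of the CT volume with the measured fluoroscopy image.

    render_drr(volume, pose) -> 2-D image is an assumed, user-supplied
    DRR renderer; it is not part of any specific library.
    """
    def cost(pose: np.ndarray) -> float:
        drr = render_drr(ct_volume, pose)
        return -ncc(drr, fluoro)          # minimize negative similarity

    # Powell's method is derivative-free, a common choice when the
    # similarity metric is not smoothly differentiable in the pose.
    result = minimize(cost, pose0, method="Powell")
    return result.x
```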

Bio:

Robert Grupp is a postdoctoral fellow at LCSR, primarily working with Mehran Armand in the Biomechanical and Image-Guided Surgical Systems Lab. He recently completed his PhD in the Department of Computer Science at Johns Hopkins University, advised by Russell Taylor. His current research focuses on medical image registration and aims to enable computer-assisted navigation during minimally invasive orthopaedic surgery. Some of this work was highlighted as a feature article in the February 2020 issue of IEEE Transactions on Biomedical Engineering. Prior to starting his PhD studies, Robert worked on various Synthetic Aperture Radar exploitation algorithms as part of the Automatic Target Recognition group at Northrop Grumman Electronic Systems. He received a BS in Computer Science and Mathematics from the University of Maryland, College Park.

LCSR Seminar: Shumon Dhar “Understanding the Principles of Swallowing Mechanics and the Use of Fluoroscopy in the Diagnosis and Management of Patients with Dysphagia” @ Hackerman B-17
Apr 15 @ 12:00 pm – 1:00 pm

The goals of today’s talk will be to understand the basic physiology of swallowing, using fluoroscopy as a guide. Fluoroscopy is considered an instrumental diagnostic tool for assessing swallowing safety and planning surgical intervention in the aerodigestive tract. Currently, fluoroscopy is interpreted through frame-by-frame analysis of the study, and in some high-quality centers formal protocols are used to quantify physiologic deviations in the swallow.

Bio: Dr. Shumon Dhar, Assistant Professor in the Dept. of Otolaryngology-Head & Neck Surgery, Division of Laryngology, has expertise in the comprehensive management of voice, upper airway, and swallowing disorders. Dr. Dhar is passionate about treating a variety of patients, including professional voice users, patients with upper airway stenosis, and those suffering from early cancer of the vocal cords. Dr. Dhar has unique training in bronchoesophagology, which positions him to treat patients with profound swallowing, reflux, and motility problems from a holistic perspective. He uses the latest diagnostic modalities within a multidisciplinary care model. He also offers minimally invasive endoscopic options for the treatment of GERD and Barrett’s esophagus. Advanced interventions performed by Dr. Dhar include endoscopic and open treatment of cricopharyngeus muscle dysfunction and Zenker’s diverticulum, complete pharyngoesophageal stenosis, vocal fold paralysis, and severe dysphagia in head and neck cancer survivors and patients with neuromuscular disease-related swallowing dysfunction.

LCSR Seminar: Nirav Patel “Robotic systems for MRI-guided Interventions and Retinal Surgeries: Journey Towards Clinical Translation” @ Hackerman B-17
Apr 22 @ 12:00 pm – 1:00 pm

Abstract:

In the first half of the talk, I will present image-guided (MRI) robotic interventions. Robotic assistance for precise placement of needle-like devices could improve the outcome of diagnostic or therapeutic interventions. Such interventions are often performed under image guidance; an image-guided robotic system can not only improve tool placement accuracy but also eliminate the need for manual registration of the surgical scene in the clinician’s mind. This talk will cover recent advances and challenges in developing MRI-guided robotic systems for percutaneous interventions. I will present the development journey of an integrated robotic system for MRI-guided ablation of brain tumors. I will also briefly present MRI-guided robotic systems for prostate biopsy and shoulder arthrography, focusing in particular on the clinical translation of these systems.

The second half of my talk will be about robot-assisted vitreoretinal surgeries. Vitreoretinal surgeries are among the most challenging procedures, demanding skills at the limit of human capabilities and requiring precise manipulation of multiple surgical instruments in a constrained environment through small openings in the white part of the eye, the sclera, with limited perception of tool-tissue interaction. To alleviate some of these challenges, robotic assistance has been explored to provide steady and precise tool manipulation capabilities. However, in a cooperatively controlled robotic system, the stiff robotic structure removes the tactile perception between the surgeon’s hand holding the tool and the sclera, which could result in injury to the sclera. In this part of the talk, I will present recent advancements in robot control strategies to protect the sclera from injury due to excessive forces, and an evaluation of the Steady Hand Eye Robot in in vivo studies.
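
One common way to implement such a safety strategy, sketched below under stated assumptions rather than as the Steady Hand Eye Robot’s actual controller, is an admittance-style cooperative control law that scales the commanded tool velocity toward zero as the measured scleral force approaches a limit. The gain, the 120 mN limit, and the sign convention are illustrative assumptions.

```python
import numpy as np

def coop_velocity(handle_force: np.ndarray, sclera_force: np.ndarray,
                  gain: float = 0.002,      # m/s per N, assumed admittance
                  f_limit: float = 0.120) -> np.ndarray:
    """Admittance-style cooperative control with a scleral force limit.

    The commanded tool velocity follows the surgeon's handle force, but
    is scaled toward zero as the measured tool-to-sclera force
    approaches f_limit; past the limit, a relief term backs the tool
    away. Sign convention: sclera_force is the force the tool applies
    to the sclera. All numbers here are illustrative assumptions.
    """
    v = gain * handle_force                      # nominal hands-on motion
    f_mag = float(np.linalg.norm(sclera_force))
    scale = max(0.0, 1.0 - f_mag / f_limit)      # 1 unloaded, 0 at limit
    v_safe = scale * v
    if f_mag > f_limit:                          # actively relieve overload
        v_safe -= gain * (f_mag - f_limit) * sclera_force / f_mag
    return v_safe
```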

Bio:

Nirav A. Patel received his B.E. in Computer Engineering from North Gujarat University in 2005, and his M.Tech. in Computer Science and Engineering from Nirma University in 2007. After completing his M.Tech., he worked on industrial robots at ABB’s corporate research center in Bangalore until 2009. From 2009 to 2012, he worked as a faculty member, teaching at the undergraduate and graduate levels. In 2012, he joined the Robotics Engineering Ph.D. program at Worcester Polytechnic Institute and received his doctorate in 2017. Since 2017, he has been working as a postdoctoral fellow with the Laboratory for Computational Sensing and Robotics (LCSR) at the Johns Hopkins University. His research interests include the development of image-guided robotic systems for percutaneous intervention, minimally invasive robotic surgeries, and the development of sensors and control strategies for safety in robot-assisted minimally invasive surgeries. He is particularly interested in the clinical translation of these technologies.

This talk will be recorded. Click Here for all of the recorded seminars for the 2019-2020 academic year.

If you aren’t able to join us live or want to review the presentation, here is the link; it should be up at least through the end of the spring semester. OneDrive Link

LCSR Seminar: Andy Ruina “Passive dynamics is a good basis for robot design and control. Not.” @ Hackerman B-17
Apr 29 @ 12:00 pm – 1:00 pm

Abstract:
Many airplanes can, or nearly can, glide stably without control. So, it seems natural that the first successful powered flight followed from mastery of gliding. Many bicycles can, or nearly can, balance themselves when in motion. Bicycle design seems to have evolved to gain this feature. Also, we can make toys and ‘robots’ that, like a stable glider or coasting bicycle, stably walk without motors or control in a remarkably human-like way. Again, it seems to make sense to use ‘passive dynamics’ as a core for developing the control of walking robots and to gain understanding of the control of walking people. That’s what I used to think. But, so far, this passive approach has not led to robust walking robots. What about human evolution? We didn’t evolve dynamic bodies and then learn to control them. Rather, people had elaborate control systems way back when we were fish and even worms. However, if control is paramount, why is it that uncontrolled passive-dynamic walkers walk so much like humans? It seems that energy-optimal, yet robust, control, perhaps a proxy for evolutionary development, arrives at solutions that have some features in common with passive dynamics. Instead of thinking of good powered walking as passive walking with a small amount of control added, I now think of good powered walking, human or robotic, as highly controlled, while optimized mostly for avoiding falls and, secondarily, for minimal actuator use. When well done, much of the motor effort, always at the ready, is usually titrated out, so the motion looks deceptively ‘passive’.

Speaker:
Andy Ruina, Mechanical Engineering, Cornell University
My graduate education was mostly in solid mechanics. That morphed into biomechanics, dynamics, and robotics. Recently, I have been primarily interested in the mechanics of underactuated motion and locomotion.

This talk will be recorded. Click Here for all of the recorded seminars for the 2019-2020 academic year.
