Calendar

Dec
4
Wed
LCSR Seminar: Career Services @ Hackerman B-17
Dec 4 @ 12:00 pm – 1:00 pm

Abstract:

TBA
Bio:

TBA

LCSR Seminar Video Link

Jan
29
Wed
LCSR Seminar: Robert Pless “Supporting Sex Trafficking Investigations with Deep Metric Learning” @ Hackerman B-17
Jan 29 @ 12:00 pm – 1:00 pm

Abstract:

This talk shares our work developing traffickCam, a system that supports sex trafficking investigations by recognizing the hotel rooms in pictures of trafficking victims. I'll share context for this project and the ways the system is currently being used at the National Center for Missing and Exploited Children, as well as challenges particular to this problem domain, such as dramatic differences among rooms within a single hotel and the similarity of rooms across chains. Attacking these problems led us to specific improvements in large-scale classification with deep metric learning, including novel training algorithms, visual explainability, and new visualization approaches to compare and understand the representations these models learn.
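The retrieval approach described above rests on deep metric learning, in which an embedding network is trained so that images of the same room land close together while images of different rooms are pushed apart. As a generic illustration only (toy vectors, not the speaker's actual system), the classic triplet margin loss captures this idea:

```python
import math

def l2_distance(a, b):
    """Euclidean distance between two embedding vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def triplet_margin_loss(anchor, positive, negative, margin=0.2):
    """Classic triplet loss: zero once the positive embedding is closer to
    the anchor than the negative by at least `margin`; otherwise it
    penalizes the remaining gap."""
    return max(0.0, l2_distance(anchor, positive)
                    - l2_distance(anchor, negative) + margin)

# Hypothetical 3-D embeddings: anchor and positive depict the same room.
anchor   = [1.0, 0.0, 0.0]
positive = [0.9, 0.1, 0.0]
negative = [0.0, 1.0, 0.0]
print(triplet_margin_loss(anchor, positive, negative))  # prints 0.0
```

In practice the embeddings come from a CNN and triplets are mined from large image batches; the toy vectors here merely show when the loss is zero versus positive.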

Bio:

Robert Pless is the Patrick and Donna Martin Professor and Chair of Computer Science at George Washington University. Dr. Pless was born at Johns Hopkins Hospital in 1972, received a bachelor's degree in Computer Science from Cornell University in 1994, and earned a PhD from the University of Maryland, College Park in 2000. He was on the faculty of Computer Science at Washington University in St. Louis from 2000 to 2017. His research focuses on geometrical and statistical computer vision.

This talk will be recorded. Click Here for all of the recorded seminars for the 2019-2020 academic year.

Feb
5
Wed
LCSR Seminar: Debra Mathews “Implementing ethics in autonomous systems” @ Hackerman B-17
Feb 5 @ 12:00 pm – 1:00 pm

Abstract:
AI is moving both faster and slower than the general public realizes: it is being deployed extensively across many domains, yet we are still very, very far from the sorts of autonomous robots imagined by Asimov, or by anyone who watched the Jetsons as a child or watches Westworld today. Overlapping sets of ethical issues are raised both by current stages and domains of AI development and use, and by the autonomy and uses we imagine and hope AI will have in the future. What is crucial is that we appreciate and attend to these issues today, so that we can maximize the benefits of this technology for humanity while minimizing the harms.

Bio:
Debra JH Mathews, PhD, MA, is the Assistant Director for Science Programs at the Johns Hopkins Berman Institute of Bioethics and an Associate Professor in the Department of Pediatrics at the Johns Hopkins School of Medicine. Dr. Mathews earned her PhD in genetics from Case Western Reserve University. Concurrent with her PhD, she earned a Master's degree in bioethics, also from Case. She completed a postdoctoral fellowship in genetics at Johns Hopkins, and the Greenwall Fellowship in Bioethics and Health Policy at Johns Hopkins and Georgetown Universities. Dr. Mathews has also spent time at the Genetics and Public Policy Center, the US Department of Health and Human Services, and the Presidential Commission for the Study of Bioethical Issues, working in various capacities on science policy. Dr. Mathews's academic work focuses on ethics and policy issues raised by emerging biotechnologies, with particular focus on genetics, stem cell science, neuroscience, and synthetic biology.

This talk will be recorded.

Feb
12
Wed
LCSR Seminar: Rebecca Kramer-Bottiglio “From Particles to Parts–Building Multifunctional Robots with Programmable Robotic Skins” @ Hackerman B-17
Feb 12 @ 12:00 pm – 1:00 pm

Abstract:

Robots generally excel at specific tasks in structured environments, but lack the versatility and adaptability required to interact with and locomote within the natural world. To increase versatility in robot design, my research group is developing soft robotic skins that can wrap around arbitrary deformable objects to induce the desired motions and deformations. The robotic skins integrate programmable composites to embed actuation and sensing into planar substrates that may be applied to, removed from, and transferred between different objects to create a multitude of controllable robots with different functions. During this talk, I will demonstrate the versatility of this soft robot design approach by showing robotic skins in a wide range of applications, including manipulation tasks, locomotion, and wearables, using the same 2D robotic skins reconfigured on the surface of various 3D soft, inanimate objects. Further, I will present recent work toward programmable composites derived from novel functional particulates that address the emerging need for variable stiffness, variable-trajectory motions, and embedded computation within the soft robotic skins.
Bio:

Rebecca Kramer-Bottiglio is the John J. Lee Assistant Professor of Mechanical Engineering and Materials Science at Yale University. She completed her B.S. at the Johns Hopkins University, M.S. at U.C. Berkeley, and Ph.D. at Harvard University. Prior to joining the faculty at Yale, she was an Assistant Professor of Mechanical Engineering at Purdue University. She currently serves as an Associate Editor of Soft Robotics, Frontiers in Robotics and AI, IEEE Robotics and Automation Letters, and Multifunctional Materials, and is an IEEE Distinguished Lecturer. She is the recipient of the NSF CAREER Award, the NASA Early Career Faculty Award, the AFOSR Young Investigator Award, the ONR Young Investigator Award, and the Presidential Early Career Award for Scientists and Engineers (PECASE), and was named to Forbes’ 30 under 30 list in 2015.

This talk will be recorded.

Feb
19
Wed
LCSR Seminar: Cornelia Fermüller “Action perception at multiple time scales” @ Hackerman B-17
Feb 19 @ 12:00 pm – 1:00 pm

Abstract:

Understanding human activity is a very challenging task, but a prerequisite for the autonomy of robots interacting with humans. Solutions that generalize must involve not only perception but also cognition and a grounding in the motor system. Our approach is to describe complex actions as events at multiple time scales. At the lowest level, signals are chunked into primitive symbolic events, and these are then combined into increasingly more complex events of longer and longer time spans. The approach will be demonstrated through our work on creating visually learning robots, and the talk will describe some of its novel components: an architecture that has cognitive and linguistic processes communicate with the vision and motor systems in a dialog fashion; vision processes that parse the objects and movements based on their attributes, spatial relations, and 3D geometry; the combination of tactile sensing with vision for better recognition; and approaches to cover long-term relations in observed activities.
Bio:

Cornelia Fermüller is a research scientist at the Institute for Advanced Computer Studies (UMIACS) at the University of Maryland at College Park.  She holds a Ph.D. from the Technical University of Vienna, Austria and an M.S. from the University of Technology, Graz, Austria, both in Applied Mathematics.  She co-founded the Autonomy Cognition and Robotics (ARC) Lab and co-leads the Perception and Robotics Group at UMD. Her research is in the areas of Computer Vision, Human Vision, and Robotics. She studies and develops biologically inspired Computer Vision solutions for systems that interact with their environment. In recent years, her work has focused on the interpretation of human activities, and on motion processing for fast active robots (such as drones) using as input bio-inspired event-based sensors.

http://users.umiacs.umd.edu/users/fer

This talk will be recorded.

Feb
26
Wed
LCSR Seminar: Henry Astley “Using Robotic Models To Explore The Evolution Of Functional Morphology” @ Hackerman B-17
Feb 26 @ 12:00 pm – 1:00 pm

Abstract:

Living organisms face a wide range of physical challenges in their environments, yet frequently display exceptional performance. This performance is often correlated with morphological features, physiological differences, or particular behaviors, leading to the hypothesis that these traits are adaptations that improve performance. However, rigorously testing adaptations is extremely difficult, as a particular trait may be suboptimal due to lack of selective pressure, subject to tradeoffs and evolutionary constraints, or even entirely non-adaptive. Furthermore, it can be difficult to determine the true function of some traits, as they may not be amenable to experimental manipulation or comparative analysis. However, techniques and tools from engineering are allowing biologists to test the functional consequences of previously untestable physical and behavioral traits, and even to explore the performance consequences of alternative versions of traits. This can lead to a broader understanding of the trait itself and of the evolutionary pressures acting upon it, past and present. This talk will present several examples of how 3D printing and robotics have been used to establish the functional consequences of enigmatic morphologies and behaviors in snakes, early tetrapods, and fish, demonstrating the power of these techniques for providing biological insights.
Bio:

Henry Astley is currently an Assistant Professor at the University of Akron's Biomimicry Research & Innovation Center (BRIC), working on animal locomotion and biomimetic robotics. Dr. Astley initially completed a B.S. in Aerospace Engineering at the Florida Institute of Technology before switching fields and completing a second B.S. and an M.S. in biology at the University of Cincinnati, focusing on arboreal snake locomotion. Dr. Astley did his Ph.D. on frog jumping at Brown University, followed by a postdoc at the Georgia Institute of Technology focusing on locomotion in granular media.

This talk will be recorded.

Mar
20
Fri
Postponed until Spring 2021 – JHU Robotics Industry Day @ Glass Pavilion, Levering Hall
Mar 20 @ 9:00 am – 5:00 pm

After closely monitoring developments related to the COVID-19 outbreak, we have decided to postpone LCSR’s Industry Day on March 20th to the fall semester. The health and well-being of our guests, students, staff, and faculty are our top priority.

We apologize for the difficulty and inconvenience resulting from these changes. While this is not an easy decision, we believe it is in the best interest of all parties. We will share information about when the postponed events will be rescheduled as soon as we have better information.
Please direct any questions to Ashley Moriarty (ashleymoriarty@jhu.edu).
The Laboratory for Computational Sensing and Robotics will highlight its elite robotics students and showcase cutting-edge research projects in areas that include Medical Robotics, Extreme Environments Robotics, Human-Machine Systems for Manufacturing, BioRobotics and more. JHU Robotics Industry Day will take place from 8 a.m. to 4 p.m. in Levering Hall on the Homewood Campus at Johns Hopkins University.

Robotics Industry Day will provide top companies and organizations in the private and public sectors with access to the LCSR’s forward-thinking, solution-driven students. The event will also serve as an informal opportunity to explore university-industry partnerships.

You will experience dynamic presentations and discussions, observe live demonstrations, and participate in speed networking sessions that afford you the opportunity to meet Johns Hopkins' most talented robotics students before they graduate.

Please contact Ashley Moriarty if you have any questions.


Download our 2018 Industry Day booklet

Apr
1
Wed
LCSR Seminar: Brent Gillespie @ Hackerman B-17
Apr 1 @ 12:00 pm – 1:00 pm
Apr
8
Wed
LCSR Seminar: Robert Grupp “Computer-Assisted Fluoroscopic Navigation for Orthopaedic Surgery” @ Hackerman B-17
Apr 8 @ 12:00 pm – 1:00 pm

https://wse.zoom.us/j/348338196
Abstract:

In the absence of computer assistance, orthopaedic surgeons frequently rely on a challenging mental interpretation of fluoroscopy for intraoperative guidance. Existing computer-assisted navigation systems forgo this mental process and obtain accurate information about visually obstructed objects through the use of 3D imaging and additional intraoperative sensing hardware. This information is attained at the expense of increased invasiveness to patients and surgical workflows. Patients are exposed to large amounts of ionizing radiation during 3D imaging and undergo additional, and larger, incisions in order to accommodate navigational hardware. Non-standard equipment must be present in the operating room, and time-consuming data collections must be conducted intraoperatively. Using periacetabular osteotomy (PAO) as the motivating clinical application, we introduce methods for computer-assisted fluoroscopic navigation of orthopaedic surgery that remain minimally invasive to both patients and surgical workflows.

Partial computed tomography (CT) of the pelvis is obtained preoperatively, and surface models of the entire pelvis are reconstructed using a combination of thin plate splines and a statistical model of pelvis anatomy. Intraoperative navigation is implemented through a 2D/3D registration pipeline, between 2D fluoroscopy and the 3D patient models. This pipeline recovers relative motion of the fluoroscopic imager using patient anatomy as a fiducial, without any introduction of external objects. PAO bone fragment poses are computed with respect to an anatomical coordinate frame and are used to intraoperatively assess acetabular coverage of the femoral head. Convolutional neural networks perform semantic segmentation and detect anatomical landmarks in fluoroscopy, allowing for automation of the registration pipeline. Real-time tracking of PAO fragments is enabled through the intraoperative injection of BBs into the pelvis; fragment poses are automatically estimated from a single view in less than one second. A combination of simulated and cadaveric surgeries was used to design and evaluate the proposed methods.
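The registration step in this pipeline can be understood as pose optimization against the fluoroscopic image. As a heavily simplified, hypothetical sketch (a landmark-based stand-in with made-up values, not the intensity-based pipeline the abstract describes), a candidate pose can be scored by projecting 3D model landmarks through a pinhole camera and comparing against the 2D landmark detections:

```python
import math

def project(point3d, focal_length=1000.0):
    """Pinhole projection of a camera-frame 3D point to 2D (pixels)."""
    x, y, z = point3d
    return (focal_length * x / z, focal_length * y / z)

def apply_pose(point3d, tz):
    """Toy 1-DoF 'pose': translate along the viewing axis only."""
    x, y, z = point3d
    return (x, y, z + tz)

def reprojection_error(landmarks3d, detections2d, tz):
    """Mean pixel distance between projected landmarks and detections."""
    total = 0.0
    for p3, p2 in zip(landmarks3d, detections2d):
        u, v = project(apply_pose(p3, tz))
        total += math.hypot(u - p2[0], v - p2[1])
    return total / len(landmarks3d)

# Hypothetical landmarks; detections are generated at tz = 50, so a
# coarse grid search over tz should recover that offset.
landmarks = [(10.0, 5.0, 500.0), (-20.0, 8.0, 520.0), (4.0, -15.0, 480.0)]
detections = [project(apply_pose(p, 50.0)) for p in landmarks]
best_tz = min(range(0, 101, 10),
              key=lambda tz: reprojection_error(landmarks, detections, tz))
print(best_tz)  # prints 50
```

The real system optimizes over full 6-DoF poses and uses image similarity between simulated and observed radiographs rather than a toy 1-DoF grid search, but the structure (project, compare, update the pose) is the same.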
Bio:

Robert Grupp is a postdoctoral fellow at LCSR, primarily working with Mehran Armand in the Biomechanical and Image-Guided Surgical Systems Lab. He recently completed his PhD in the Department of Computer Science at Johns Hopkins University, advised by Russell Taylor. His current research focuses on medical image registration and aims to enable computer-assisted navigation during minimally invasive orthopaedic surgery. Some of this work has been highlighted as a feature article in the February 2020 issue of IEEE Transactions on Biomedical Engineering. Prior to starting his PhD studies, Robert worked on various Synthetic Aperture Radar exploitation algorithms as part of the Automatic Target Recognition group at Northrop Grumman Electronic Systems. He received a BS in Computer Science and Mathematics from the University of Maryland, College Park.