Calendar

Sep
28
Wed
LCSR Seminar: Amy Bastian “Learning and relearning human movement” @ Hackerman B17
Sep 28 @ 12:00 pm – 1:00 pm

Link for Live Seminar

Link for Recorded seminars – 2022/2023 school year

 

Abstract:

Human motor learning depends on a suite of brain mechanisms that are driven by different signals and operate on timescales ranging from minutes to years.  Understanding these processes requires identifying how new movement patterns are normally acquired, retained, and generalized, as well as the effects of distinct brain lesions.  The lecture will focus on normal and abnormal motor learning, and how we can use this information to improve rehabilitation for individuals with neurological damage.

 

Bio:

Dr. Amy Bastian is a neuroscientist who has made important contributions to the neuroscience of sensorimotor control.  She is the Chief Science Officer at the Kennedy Krieger Institute and Director of its motion analysis laboratory, which studies the neural control of human movement.  Dr. Bastian is also a Professor of Neuroscience, Neurology, and PM&R at the Johns Hopkins University School of Medicine.  She is a recognized and highly accomplished neuroscientist whose interests include cerebellar function and dysfunction, locomotor learning mechanisms, motor learning in development, and how to rehabilitate people with many types of neurological disease.

Oct
5
Wed
LCSR Seminar: Malcolm MacIver “Biological planning deciphered via AI algorithms and robot-animal competition in partially observable environments” @ Hackerman B17
Oct 5 @ 12:00 pm – 1:00 pm

Link for Live Seminar

Link for Recorded seminars – 2022/2023 school year

 

Abstract: Planning, the ability to imagine different futures and select one assessed to have high value, is one of the most vaunted of animal capacities. As such it has been a central target of artificial intelligence work from the origins of that field, in addition to being a focus of neuroscience and cognitive science. These separate and sometimes synergistic traditions are combined in our new work exploring the origin and mechanics of planning in animals. We will show how mammals evade autonomous robot “predators” in complex large arenas. We have discovered that depending on the arrangement and density of barriers to vision, animals appear to carefully manage their uncertainty about the predator’s location in order to reach their goal. Their behavior appears unlikely to be driven by cached responses that were successful in the past, but rather based on planning during brief pauses over which they peek at the hidden robot adversary that is looking for them. After peeking, they re-route to avoid the predator.

 

Bio: Malcolm A. MacIver is a group leader of the Center for Robotics and Biosystems at Northwestern University, with a joint appointment between Mechanical Engineering and Biomedical Engineering and courtesy appointments in the Department of Neurobiology and the Department of Computer Science. His work focuses on extracting principles underlying animal behavior, concentrating on interactions between biomechanics, sensory systems, and planning circuits. He then incorporates these principles into biorobotic systems or simulations of the animal in its environment, for synergy between technological and scientific advances. For this work he received the 2009 Presidential Early Career Award for Scientists and Engineers from President Obama at the White House. MacIver has also developed interactive science-inspired art installations that have been exhibited internationally, and he consults for makers of science fiction films and TV series.

Oct
7
Fri
JHU Robotics Career Fair
Oct 7 @ 1:00 pm – 4:00 pm

LCSR Career Fair – October 7, 2022 1-4pm EDT

We would like to invite you to participate in the first annual Johns Hopkins Robotics Career Fair. The event will be held entirely online (virtual) on Gather.town.  The goal is to connect students and industry for internships and jobs. The tentative schedule includes a keynote speaker from 1-2pm, elevator pitch practice with industry professionals working one-on-one with students from 2-3pm, and a virtual company job fair from 3-4pm in which each company/organization will have a dedicated virtual “table” to meet with our students.

Friday 10/7
Gather.Town
1:00 pm Keynote: Stephen Aylward, Senior Director of Strategic Initiatives at Kitware
2:00 pm Elevator Pitch Practice: students one-on-one with industry professionals
3:00 pm Virtual Company Job Fair

If you would like to participate, please email Ashley Moriarty by Thursday, September 15, 2022. More info is available on our Industry Page.

 

Keynote Speaker: Stephen Aylward “Do something slightly different”

 

Abstract: This talk explores the increasing overlap that exists in academic and industry environments, the role of research and product development in those environments, and how you can shape your career to succeed in either.  It also explores how adopting the concepts and tools of open science can lead to success in both.

Bio: Stephen Aylward’s industry career began as an MS graduate surrounded by PhDs in the AI research labs at McDonnell Douglas.  He then received a PhD in computer science and became a tenured associate professor in the department of radiology at UNC.  He subsequently pivoted back to industry and founded Kitware’s office in North Carolina, where he has held many roles as the company grew.  He successfully patented and licensed software while in academia, and played lead roles in the development of numerous open-source projects, including ITK and 3D Slicer, while in industry.  He now serves as Senior Director of Strategic Initiatives at Kitware, as an adjunct professor in computer science at UNC, and as chair of the advisory board for the development of MONAI, a leading open-source PyTorch library for medical AI.  His NIH-, DARPA-, and DoD-funded research currently focuses on point-of-care AI and on developing quantitative ultrasound spectroscopy measures to aid in the care of trauma victims in ambulances, emergency departments, and intensive care units.

 

Oct
12
Wed
LCSR Seminar: Student Seminar @ Hackerman B17
Oct 12 @ 12:00 pm – 1:00 pm
Oct
19
Wed
LCSR Seminar: Alireza Ramezani “Bat-inspired Dynamic Morphing Wing Flight Through Morphology and Control Design” @ Hackerman B17
Oct 19 @ 12:00 pm – 1:00 pm

Link for Live Seminar

Link for Recorded seminars – 2022/2023 school year

 

Abstract:

When a flapping bat propels itself through its fluidic environment, it creates periodic air jets in the form of wake structures downstream of its flight path. The animal’s remarkable dexterity in quickly manipulating these wakes with fine-grained, fast body adjustments is key to retaining the force-moment needed for flight that remains controllable at all times, even near stall conditions, in sharp turns, and during heel-above-head maneuvers. We refer to bats’ locomotion based on dexterously manipulating the fluidic environment through dynamically versatile wing conformations as dynamic morphing wing flight.

In this talk, I will describe some of the challenges facing the design and control of dynamic morphing Micro Aerial Vehicles (MAVs) and report our latest morphing flying robot design, called Aerobat. Dynamic morphing is the defining characteristic of bat locomotion and is key to bats’ agility and efficiency. Unlike a jellyfish, whose body conformations are fully dominated by its passive dynamics, a bat employs both its active and passive dynamics to achieve dynamic morphing within its gait cycles, with a notable degree of control over joint movements. Copying bats’ morphing wings has remained an open engineering problem due to a classical robot design challenge: having many active coordinates in MAVs is impossible because of prohibitive design restrictions such as limited payload and power budget. I will propose a framework based on integrating low-power, feedback-driven components within computational structures (mechanical structures with computational resources) to address two challenges associated with gait generation and regulation. We call this framework Morphing via Integrated Mechanical Intelligence and Control (MIMIC). Based on this framework, my team at the SiliconSynapse Laboratory at Northeastern University has reproduced bats’ dynamically versatile wing conformations in untethered flight tests.

 

Bio:

Alireza Ramezani is an assistant professor in the Department of Electrical & Computer Engineering at Northeastern University (NU). Before joining NU in 2018, he was a postdoc at Caltech’s Division of Engineering and Applied Science. He received his Ph.D. in Mechanical Engineering from the University of Michigan, Ann Arbor, with Jessy Grizzle. His research interests include the design of bioinspired robots with nontrivial morphologies (high degrees of freedom and dynamic interactions with the environment), and the analysis and nonlinear, closed-loop feedback design of locomotion systems. His designs have been featured in high-impact journals, including two cover articles in Science Robotics and research highlights in Nature. Alireza has twice received NASA’s Space Technology Mission Directorate Program Award for designing bioinspired locomotion systems for the exploration of Moon and Mars craters. He is also the recipient of a Caltech Jet Propulsion Laboratory (JPL) Faculty Research Program position. His research has been covered by over 200 news outlets, including The New York Times, The Wall Street Journal, The Associated Press, CNN, NBC, and Euronews. Currently, he is leading a $1 million NSF project to design and control bat-inspired MAVs for monitoring and inspection in the confined space of sewer networks.

Oct
26
Wed
LCSR Seminar: Masaki Nakada “Foids: Bio-Inspired Fish Simulation for Generating Synthetic Datasets” @ Hackerman B17
Oct 26 @ 12:00 pm – 1:00 pm

Link for Live Seminar

Link for Recorded seminars – 2022/2023 school year

 

Abstract:

I will present a bio-inspired fish simulation platform, which we call “Foids”, that generates realistic synthetic datasets for use in training computer vision algorithms. This is a first-of-its-kind synthetic dataset platform for fish, generating all of the 3D scenes purely through simulation. One of the major challenges in deep-learning-based computer vision is the preparation of annotated datasets. It is already hard to collect a good-quality video dataset with enough variation; moreover, annotating a sufficiently large video dataset frame by frame is a painful process. This is especially true for fish datasets, because it is difficult to set up a camera underwater and the number of fish (target objects) in the scene can range up to 30,000 in a single cage on a fish farm. All of these fish need to be annotated with labels such as bounding boxes or silhouettes, which can take hours to complete manually, even for only a few minutes of video. We address this challenge by introducing a realistic synthetic dataset generation platform that incorporates details of fish biology and ecology studied in the aquaculture field. Because the scene is simulated, it is easy to generate scene data with annotation labels from the 3D mesh geometry and transformation matrices. Using part of this synthetic dataset, we develop an automated fish counting system whose accuracy is comparable to that of human annotators, reducing both the time required compared to the manual process and the physical injuries sustained by the fish.

 

Bio: Masaki Nakada obtained a master’s degree in physics at Waseda University in Japan. He then completed a PhD in computer science at UCLA and worked there as a postdoc for another year, publishing a series of scientific papers (https://www.masakinakada.com/). He devoted more than 10 years to research on artificial life, specifically biomechanical human simulation with musculoskeletal models, neuromuscular controllers, and biomimetic vision. Previously, he worked for Intel as a software engineer. His honors include the MIT Technology Review Innovators Under 35 award, Forbes Next 1000, the Institute for Digital Research and Education Postdoctoral Scholar Award, a SIGGRAPH Thesis Fast Forward honorable mention, the TEEC Cup North American Entrepreneurship Competition in Silicon Valley, the Japan Student Services Organization Fellowship, the Rotary Ambassadorial Fellowship, the Itoh Foundation Fellowship, the Entrepreneurship Foundation Fellowship, the Aoi Foundation Fellowship, and wins at several startup business competitions and hackathons. He founded NeuralX, Inc. (https://www.neuralx.ai/) in 2019 based on the IP he developed over a decade of research. The company provides an interactive online fitness service, Presence.fit (https://www.presence.fit/), which combines human instructors with motion-analytics AI to deliver a highly interactive online fitness experience.

Nov
2
Wed
LCSR Seminar: Kapil Katyal “Robot Manipulation and Navigation Research at JHU/APL” @ Hackerman B17
Nov 2 @ 12:00 pm – 1:00 pm

Link for Live Seminar

Link for Recorded seminars – 2022/2023 school year

 

Abstract:

This talk will describe the robotics and AI activities and projects within JHU/APL’s Research and Exploratory Development Department. I will present motivating challenge problems faced by defense, military, and medical sponsors across a number of government agencies. I will then highlight several research projects we are currently executing in the areas of robot manipulation, navigation, and human-robot interaction. Specifically, the projects span underwater manipulation, learned policies for off-road and complex terrain navigation, human-robot interaction, heterogeneous robot teaming, and fixed-wing aerial navigation. Finally, I will present areas of future research and exploration and possible intersections with LCSR.

 

Bio:

Kapil Katyal is a principal researcher and robotics program manager in the Research and Exploratory Development Department at JHU/APL. He completed his PhD at JHU, advised by Greg Hager, on prediction and perception capabilities for robot navigation. He has worked at JHU/APL since 2007 on projects spanning robot manipulation, brain-machine interfaces, vision algorithms for retinal prosthetics, and robot navigation in complex terrain. He holds five patents and has co-authored over 30 publications in robotics and AI.

 

 

Nov
9
Wed
LCSR Seminar: Alessandro Roncone “Robots working with and around people” @ Hackerman B17
Nov 9 @ 12:00 pm – 1:00 pm

Link for Live Seminar

Link for Recorded seminars – 2022/2023 school year

 

Abstract: Robots have begun to transition from assembly lines, where they are physically separated from humans, to human-populated environments and human-enhancing applications, where interaction with people is inevitable. With this shift, research in human-robot interaction (HRI) has grown to allow robots to work with and around humans on complex tasks, augment and enhance people, and provide the best support to them. In this talk, I will provide an overview of the work performed in the HIRO Group and our efforts toward intuitive, human-centered technologies for the next generation of robot workers, assistants, and collaborators. More specifically, I will present our research on: a) robots that are safe to people, b) robots that are capable of operating in complex environments, and c) robots that are good teammates. In all, this research will enable capabilities that were not previously possible, and will impact work domains such as manufacturing, construction, logistics, the home, and health care.

 

Bio: Alessandro Roncone is an Assistant Professor in the Computer Science Department at the University of Colorado Boulder. He received his B.Sc. summa cum laude in Biomedical Engineering in 2008 and his M.Sc. summa cum laude in NeuroEngineering in 2011 from the University of Genoa, Italy. In 2015 he completed his Ph.D. in Robotics, Cognition and Interaction Technologies at the Italian Institute of Technology (IIT), working on the iCub humanoid in the Robotics, Brain and Cognitive Sciences department and the iCub Facility. From 2015 to 2018 he was a Postdoctoral Associate at the Social Robotics Lab at Yale University, performing research in human-robot collaboration for advanced manufacturing. He joined the CU Boulder faculty in August 2018, where he directs the Human Interaction and Robotics Group (HIRO, https://hiro-group.ronc.one/) and co-directs the Interdisciplinary Research Theme in Engineering Education Research and AI-augmented Learning (EER-AIL IRT, https://www.colorado.edu/irt/engineering-education-ai/).

 

Nov
14
Mon
Special LCSR Seminar: Desire Pantalone “Robotic Surgery in Space” @ Malone G33/35
Nov 14 @ 11:00 am – 12:00 pm

Link for Live Seminar

Link for Recorded seminars – 2022/2023 school year

 

Abstract: The target of human spaceflight is now missions beyond low Earth orbit and the Lunar Gateway, for deep space exploration and missions to Mars. Several conditions must be considered, such as the effects of weightlessness and radiation on the human body, behavioral health decrements, and communication latency. Telemedicine and telerobotic applications, and robot-assisted surgery, with some hints on experimental surgical procedures carried out in previous missions, must be considered as well. The need for greater crew autonomy in dealing with health issues follows from the increasing severity of medical and surgical events that could occur on these missions, and the presence of a highly trained surgeon on board would be recommended. A surgical robot could be a valuable aid, but only insofar as it is provided with multiple functions, including the capability to perform certain procedures autonomously. Providing a multi-functional surgical robot is the new frontier. Research in this field will pave the way for new structured plans for human health in space, as well as new suggestions for clinical applications on Earth.

 

Bio: Dr. Desire Pantalone, MD, is a general surgeon with a particular interest in trauma and emergency surgery. She is a staff surgeon in the Unit of Emergency Surgery and part of the Trauma Team of the University Hospital Careggi in Florence. She is a specialist in General Surgery and Vascular Surgery. She was previously a Research Associate at the University of Chicago (IL) for Oncological Surgery (Prof. M. Michelassi) and for Liver Transplantation and Hepatobiliary Surgery (Dr. J. Emond). She is also an instructor for Advanced Trauma Operative Management (American College of Surgeons Committee on Trauma) and a Fellow of the American College of Surgeons. She is a Core Board member responsible for “Studies on traumatic events and surgery” in the ESA Topical Team on “Tissue Healing in Space: Techniques for promoting and monitoring tissue repair and regeneration” for Life Science Activities.

 

Nov
16
Wed
LCSR Seminar: Student Seminar @ Hackerman B17
Nov 16 @ 12:00 pm – 1:00 pm

Link for Live Seminar

Link for Recorded seminars – 2022/2023 school year

 

Student 1: Maia Stiber “Supporting Effective HRI via Flexible Robot Error Management Using Natural Human Responses”

Abstract: Unexpected robot errors during human-robot interaction are inescapable; they can occur during any task and do not necessarily fit human expectations of possible errors. When left unmanaged, robot errors harm task performance and user trust, leaving users unwilling to work with a robot. Prior error management techniques often lack the versatility to address robot errors across tasks and error types, as they frequently rely on task- or error-specific information. In this presentation, I describe my work on creating flexible error management by leveraging natural human responses (social signals) to robot errors as input for error detection and classification across tasks, scenarios, and error types in physical human-robot interaction. I present an error detection method that uses facial reactions for real-time detection and temporal localization of robot errors during HRI, a flexible error-aware framework that combines traditional and social signal inputs for improved error detection, and an exploration of the effects of robot error severity on natural human responses. I will end my talk by discussing how my current and future work further investigates the use of social signals for flexible error detection and classification in HRI.

Bio: Maia Stiber is a Ph.D. candidate in the Department of Computer Science, co-advised by Dr. Chien-Ming Huang and Dr. Russell Taylor. Her work focuses on leveraging natural human responses to robot errors in an effort to develop flexible error management techniques in support of effective human-robot interaction.

 

 

Student 2: Akwasi Akwaboah “Neuromorphic Cognition and Neural Interfaces”

Abstract: I present research at the Computational Sensory-Motor Systems Lab at Johns Hopkins University, led by Ralph Etienne-Cummings, on two fronts: (1) Neuromorphic Cognition (NC), focused on emulating neural physiology at the algorithmic and hardware levels, and (2) Neural Interfaces (NI), with emphasis on electronics for characterizing neural MicroElectrode Arrays (MEAs). The motivation for the NC front is as follows. The human brain expends a mere 20 watts in learning and inference, orders of magnitude less than state-of-the-art large language models (GPT-3 and LaMDA). There is a need for sustainable AI hardware, as compute demand, doubling roughly every 3.4 months, has drastically outpaced Moore’s law (transistor doubling every two years). Efforts here are geared towards realizing biologically plausible learning rules, such as Hebbian Spike-Timing-Dependent Plasticity (STDP), algorithmically and in correspondingly low-power mixed analog-digital VLSI implementations. On the same front of achieving parsimonious artificial intelligence, we are investigating the use of our models of primate visual attention to selectively sparsify computation in deep neural networks. On the NI front, we are developing an open-source multichannel potentiostat with parallel data acquisition capability. This work has implications for the rapid characterization and monitoring of neural MEAs, which are widely adopted in neural rehabilitation and neuroscientific experiments. A standard characterization technique is Electrochemical Impedance Spectroscopy (EIS). However, the increasing channel counts of state-of-the-art MEAs (hundreds to thousands of channels) impose prolonged acquisition times for high spectral resolution. A truly parallel EI spectrometer made available to the scientific community will therefore reduce research time and cost.

Bio: Akwasi Akwaboah joined the Computational Sensory-Motor Systems (CSMS) Lab in Fall 2020 and is working towards his PhD. He received an MSE in Electrical Engineering from Johns Hopkins University, Baltimore, MD in Summer 2022, en route to the PhD. He received a B.Sc. degree in Biomedical Engineering (First Class Honors) from the Kwame Nkrumah University of Science and Technology, Ghana, in 2017, and an M.S. degree in Electronics Engineering from Norfolk State University, Norfolk, VA, USA in 2020. His master’s thesis there focused on a heuristically optimized computational model of a stem cell-derived cardiomyocyte, with implications for cardiac safety pharmacology. He subsequently worked in Dr. James Weiland’s BioElectronic Vision Lab at the University of Michigan, Ann Arbor, MI, USA in 2020, where he collaborated on research in retinal prostheses, calcium imaging, and neural electrode characterization. His current interests include neuromorphic circuits and systems, bio-inspired algorithms, computational biology, and neural interfaces. On the lighter side, Akwasi loves to cook and listen to classical and Afrobeats music. He lives by Marie Curie’s quote: “Nothing in life is to be feared, it is only to be understood.”

 

Johns Hopkins University

Johns Hopkins University, Whiting School of Engineering

3400 North Charles Street, Baltimore, MD 21218-2608

Laboratory for Computational Sensing + Robotics