Calendar

LCSR Seminar: Alexander Spinos, “Variable Topology Truss: A Novel Approach to Modular Self-Reconfigurable Robots” @ Hackerman B17
Wed, Feb 21 @ 12:00 pm – 1:00 pm

“Variable Topology Truss: A Novel Approach to Modular Self-Reconfigurable Robots”

Abstract:
Conventional robotic systems are most effective in structured environments with well-defined tasks. The next frontier of robotics is to create systems that can operate in challenging environments while autonomously adapting to changing and uncertain task requirements. In the field of modular self-reconfigurable robotics, we approach this challenge by designing a set of robotic building blocks that can be combined to form a variety of robot morphologies. By autonomously rearranging these modules, the system can change its shape to complete a wider variety of tasks than is possible with a fixed morphology.

In this talk, I will present my research on a new modular robot, the Variable Topology Truss (VTT). Most existing modular self-reconfigurable robots use cube-shaped modules that connect on a lattice or as a serial chain of joints. These architectures simplify the design of reconfiguration algorithms, but they face serious practical challenges when scaled up to larger tasks. VTT instead uses a truss-based architecture: individual modules are beams that can extend or retract using a novel high-extension-ratio linear actuator, the Spiral Zipper. By connecting the beam modules together like a truss, we can create large, lightweight structures with much greater structural efficiency than conventional modular architectures. Furthermore, the length-changing ability of the Spiral Zipper lets the system adapt its scale and geometry more flexibly without needing as many modules. The truss architecture does, however, pose new challenges for motion and reconfiguration planning. I will discuss the hardware design of the VTT system as well as my research on collision-free motion and reconfiguration planning for this novel system.

Bio:
Alexander Spinos received his Bachelor’s degree in Mechanical Engineering from Johns Hopkins University. He then joined the Modlab in GRASP at the University of Pennsylvania, where he received his PhD in Mechanical Engineering and Applied Mechanics. His dissertation centered on the mechanical design and self-reconfiguration planning of the Variable Topology Truss, a modular self-reconfigurable parallel robot. He is now a robotics researcher at the JHU Applied Physics Lab, where he works on multi-robot planning and the design of novel robot hardware.

LCSR Seminar: Sherry Yang, “Decision Making with Internet-Scale Knowledge” @ Hackerman B17
Wed, Feb 28 @ 12:00 pm – 1:00 pm

Title: Decision Making with Internet-Scale Knowledge

Abstract: Machine learning models pretrained on internet data have acquired broad knowledge about the world but struggle to solve complex tasks that require extended reasoning and planning. Sequential decision making, on the other hand, has empowered AlphaGo’s superhuman performance, but lacks visual, language, and physical knowledge about the world. In this talk, I will present my research toward enabling decision making with internet-scale knowledge. First, I will illustrate how language models and video generation serve as unified interfaces that can integrate internet knowledge and represent diverse tasks, enabling the creation of a generative simulator to support real-world decision making. Second, I will discuss my work on designing decision making algorithms that can take advantage of generative language and video models as agents and environments. Combining pretrained models with decision making algorithms can enable a wide range of applications, such as developing chatbots, learning robot policies, and discovering novel materials.

Bio: Sherry is a final-year PhD student at UC Berkeley advised by Pieter Abbeel and a senior research scientist at Google DeepMind. Her research aims to develop machine learning models with internet-scale knowledge to make better-than-human decisions. To this end, she has developed techniques for generative modeling and representation learning from large-scale vision, language, and structured data, coupled with algorithms for sequential decision making such as imitation learning, planning, and reinforcement learning. Sherry initiated and led the Foundation Models for Decision Making workshop at NeurIPS 2022 and 2023, bringing together research communities in vision, language, planning, and reinforcement learning to solve complex decision making tasks at scale. Before her current role, Sherry received her Bachelor’s and Master’s degrees from MIT, advised by Patrick Winston and Julian Shun.

LCSR Seminar: Ayush Tewari, “Learning to See the World in 3D” @ Hackerman B17
Wed, Mar 6 @ 12:00 pm – 1:00 pm

Abstract:

Humans can effortlessly construct rich mental representations of the 3D world from sparse input, such as a single image. This is a core aspect of intelligence that helps us understand and interact with our surroundings and with each other. My research aims to build similar computational models: artificial intelligence methods that can perceive properties of the 3D structured world from images and videos. Despite remarkable progress in 2D computer vision, 3D perception remains an open problem due to unique challenges, such as limited 3D training data and uncertainties in reconstruction.
In this talk, I will discuss these challenges and explain how my research addresses them by posing vision as an inverse problem and by designing machine learning models with physics-inspired inductive biases. I will demonstrate techniques for reconstructing 3D faces and objects, and for reasoning about uncertainties in scene reconstruction using generative models. I will then discuss how these efforts advance us toward scalable and generalizable visual perception, and how they benefit application domains such as robotics and computer graphics.
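To make the inverse-problem framing concrete, this line of work is often written as an analysis-by-synthesis optimization; the notation below is an illustrative sketch, not taken from the talk:

$$ \hat{\theta} \;=\; \arg\min_{\theta} \; \mathcal{L}\big(\mathcal{R}(\theta),\, I\big) \;+\; \lambda\,\Omega(\theta) $$

Here $I$ is the observed image, $\theta$ the unknown 3D scene parameters (geometry, appearance, lighting, camera), $\mathcal{R}$ a rendering model mapping scene parameters to images, $\mathcal{L}$ a reconstruction loss, and $\Omega$ a prior encoding physics-inspired inductive biases. The generative models mentioned above can be viewed as replacing this single point estimate with a distribution over plausible scenes $\theta$.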

Bio:

Ayush Tewari is a postdoctoral researcher at MIT CSAIL with William Freeman, Vincent Sitzmann, and Joshua Tenenbaum. He previously completed his Ph.D. at the Max Planck Institute for Informatics, advised by Christian Theobalt. His research interests lie at the intersection of computer vision, computer graphics, and machine learning, focusing on 3D perception and its applications. Ayush was awarded the Otto Hahn medal from the Max Planck Society for his scientific contributions as a Ph.D. student.

LCSR Student Seminar: TBD @ Hackerman B17
Wed, Mar 13 @ 12:00 pm – 1:00 pm

LCSR Seminar: Tom Silver, “Learning and Planning with Relational Abstractions” @ Hackerman B17
Wed, Mar 20 @ 12:00 pm – 1:00 pm

Abstract: Decision-making in robotics domains is complicated by continuous state and action spaces, long horizons, and sparse feedback. One way to address these challenges is to perform bilevel planning, where decision-making is decomposed into reasoning about “what to do” (task planning) and “how to do it” (continuous optimization). Bilevel planning is powerful, but it requires multiple types of domain-specific abstractions that are often difficult to design by hand. In this talk, I will give an overview of my work on learning these abstractions from data. This work represents the first unified system for learning all the abstractions needed for bilevel planning. In addition to learning to plan, I will also discuss planning to learn, where the robot uses planning to collect additional data that it can use to improve its abstractions. My long-term goal is to create a virtuous cycle where learning improves planning and planning improves learning, leading to a very general library of abstractions and a broadly competent robot.
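As a rough illustration of the bilevel decomposition described above, here is a minimal sketch in Python; the interfaces (`task_planner`, `refine`) are hypothetical stand-ins, not the learned-abstractions system from the talk:

```python
# Minimal sketch of bilevel planning: a symbolic task planner proposes
# candidate plan skeletons ("what to do"), and a continuous optimizer
# tries to refine each skeleton into executable motions ("how to do it").

def bilevel_plan(init_state, goal, task_planner, refine, max_skeletons=10):
    for i, skeleton in enumerate(task_planner(init_state, goal)):
        if i >= max_skeletons:
            break                  # give up after a budget of skeletons
        motion = refine(skeleton)  # continuous optimization over motions
        if motion is not None:     # refinement found a feasible motion
            return skeleton, motion
    return None                    # report failure back to the task level
```

The abstractions discussed in the talk would supply exactly the pieces this loop takes for granted: the symbolic model that `task_planner` searches over, and the machinery that makes `refine` tractable.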

Bio: Tom Silver is a final year PhD student at MIT EECS advised by Leslie Kaelbling and Josh Tenenbaum. His research is at the intersection of machine learning and planning with applications to robotics and often uses techniques from task and motion planning, program synthesis, and neuro-symbolic learning. Before graduate school, he was a researcher at Vicarious AI and received his B.A. from Harvard with highest honors in computer science and mathematics in 2016. He has also interned at Google Research (Brain Robotics) and currently splits his time between MIT and the Boston Dynamics AI Institute. His work is supported by an NSF fellowship and an MIT presidential fellowship.

LCSR Seminar: Joseph Moore, “Assured robotic super-autonomy” @ Hackerman B17
Wed, Mar 27 @ 12:00 pm – 1:00 pm

Title: “Assured robotic super-autonomy”

Abstract: In recent years, we have observed the rise of robotic super-autonomy, where computational control and machine learning methods enable agile robots to far exceed the performance of conventional autonomous systems. However, as powerful as these methods are, they often fail to provide the robustness and performance guarantees required for real-world deployment. In this talk, I will present some of the research I am leading at the Johns Hopkins University Applied Physics Laboratory (JHU/APL) to develop dramatically more capable robots that can operate at the very edge of physical and computational limits. In particular, I will present our efforts to achieve agile autonomous flight with fixed-wing aerial vehicles for precision perching, landing, and high-speed navigation in constrained urban environments. I will also present our approaches for computing performance guarantees for these complex systems and control paradigms in the presence of environmental uncertainty and model mismatch. Finally, I will discuss potential future research directions for safe learning-based control, risk-aware multi-robot coordination, and assured control for nonlinear hybrid dynamical systems.

Bio: Dr. Joseph Moore is the Chief Scientist for Robotics in the Research and Exploratory Development Department at the Johns Hopkins University Applied Physics Laboratory (JHU/APL) and an Assistant Research Professor in the Mechanical Engineering Department at the JHU Whiting School of Engineering. Dr. Moore received his Ph.D. in Mechanical Engineering from the Massachusetts Institute of Technology, where he focused on control algorithms for robust post-stall perching with autonomous fixed-wing aerial vehicles. During his time at JHU/APL, Dr. Moore has developed control and planning strategies for hybrid unmanned aerial-aquatic vehicles, heterogeneous multi-robot teams, and aerobatic fixed-wing vehicles. He has served as the Principal Investigator for both Office of Naval Research (ONR) and Defense Advanced Research Projects Agency (DARPA) programs that have developed flight controllers for aggressive post-stall maneuvering with fixed-wing aerial vehicles to enable precision landing and high-speed navigation in constrained environments. He is also a Principal Investigator for the Army Research Laboratory (ARL) Tactical Behaviors for Autonomous Maneuver program, which seeks to enable tactical coordination of multi-robot teams in complex environments and terrains.

Website: https://www.jhuapl.edu/work/our-organization/research-and-exploratory-development/red-staff-directory/joseph-moore

Host: Anton Dahbura

Zoom: Meeting ID 955 8366 7779; Passcode 530803
https://wse.zoom.us/j/95583667779

LCSR Seminar: Seth Hutchinson, “Model-Based Methods in Today’s Data-Driven Robotics Landscape” @ Hackerman B17
Wed, Apr 3 @ 12:00 pm – 1:00 pm

Model-Based Methods in Today’s Data-Driven Robotics Landscape
Seth Hutchinson, Georgia Tech

Abstract:
Data-driven machine learning methods are making advances in many long-standing problems in robotics, including grasping, legged locomotion, perception, and more. There are, however, robotics applications for which data-driven methods are less effective. Data acquisition can be expensive, time-consuming, or dangerous, whether to the surrounding workspace, to humans in the workspace, or to the robot itself. In such cases, generating data via simulation might seem a natural recourse, but simulation methods come with their own limitations, particularly when nondeterministic effects are significant or when complex dynamics are at play, requiring heavy computation and exposing the so-called sim2real gap. Another alternative is to rely on a set of demonstrations, limiting the amount of required data by careful curation of the training examples; however, these methods fail when confronted with problems that were not represented in the training examples (so-called out-of-distribution problems), which precludes the possibility of providing provable performance guarantees.

In this talk, I will describe recent work on robotics problems that do not readily admit data-driven solutions, including flapping flight by a bat-like robot, vision-based control of soft continuum robots, a cable-driven graffiti-painting robot, and ensuring safe operation of mobile manipulators in HRI scenarios. I will describe some specific difficulties that confront data-driven methods for these problems and show how model-based approaches can provide workable solutions. Along the way, I will also discuss how judicious incorporation of data-driven machine learning tools can enhance the performance of these methods.

Bio:

Seth Hutchinson is the Executive Director of the Institute for Robotics and Intelligent Machines at the Georgia Institute of Technology, where he is also Professor and KUKA Chair for Robotics in the School of Interactive Computing. Hutchinson received his Ph.D. from Purdue University in 1988, and in 1990 joined the University of Illinois at Urbana-Champaign (UIUC), where he was a Professor of Electrical and Computer Engineering (ECE) until 2017, serving as Associate Department Head of ECE from 2001 to 2007.

Hutchinson served as President of the IEEE Robotics and Automation Society (RAS) in 2020-21. He previously served as a member of the RAS Administrative Committee, as the Editor-in-Chief of the “IEEE Transactions on Robotics,” and as the founding Editor-in-Chief of the RAS Conference Editorial Board. He has served on the organizing committees of more than 100 conferences, has more than 300 publications on the topics of robotics and computer vision, and is coauthor of the books “Robot Modeling and Control,” published by Wiley, “Principles of Robot Motion: Theory, Algorithms, and Implementations,” published by MIT Press, and the forthcoming “Introduction to Robotics and Perception,” to be published by Cambridge University Press. He is a Fellow of the IEEE.

LCSR Seminar: Glen Chou, “Toward End-to-end Reliable Robot Learning for Autonomy and Interaction” @ Hackerman B17
Wed, Apr 10 @ 12:00 pm – 1:00 pm

Abstract:

Robots must behave safely and reliably if we are to confidently deploy them in the real world around humans. To complete tasks, robots must manage a complex, interconnected autonomy stack of perception, planning, and control software. While machine learning has unlocked the potential for full-stack end-to-end control in the real world, these methods can be catastrophically unreliable. In contrast, model-based safety-critical control provides rigorous guarantees, but struggles to scale to real systems, where common assumptions, e.g., perfect task specification and perception, break down.

However, we need not choose between real-world utility and safety. By taking an end-to-end approach to safety-critical control that builds and leverages knowledge of where learned components can be trusted, we can build practical yet rigorous algorithms that can make real robots more reliable. I will first discuss how to make task specification easier and safer by learning hard constraints from human task demonstrations, and how we can plan safely with these learned specifications despite uncertainty. Then, given a task specification, I will discuss how we can reliably leverage learned dynamics and perception for planning and control by estimating where these learned models are accurate, enabling probabilistic guarantees for end-to-end vision-based control. Finally, I will provide perspectives on open challenges and future opportunities in assuring algorithms for space autonomy, including robust perception-based hybrid control algorithms for reliable data-driven robotic manipulation and human-robot collaboration.
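One simple way to picture “estimating where these learned models are accurate” is a data-density test around the training set; the sketch below is an illustrative toy under that assumption (names and thresholds are hypothetical), not the probabilistic machinery from the talk:

```python
import numpy as np

# Illustrative sketch: trust a learned dynamics or perception model only
# near its training data, here judged by Euclidean distance to the nearest
# training input. A planner could reject trajectory segments that leave
# this trusted set, keeping guarantees where the model is known to be good.

def in_trusted_domain(query, train_inputs, radius):
    """query: (d,) input vector; train_inputs: (N, d) array of training inputs."""
    dists = np.linalg.norm(train_inputs - query, axis=1)
    return float(dists.min()) <= radius

# Toy example with 2-D inputs.
train = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
print(in_trusted_domain(np.array([0.1, 0.1]), train, radius=0.5))  # True: near data
print(in_trusted_domain(np.array([3.0, 3.0]), train, radius=0.5))  # False: far away
```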

Bio:

Glen Chou is a postdoctoral associate at MIT CSAIL, advised by Prof. Russ Tedrake. His research focuses on end-to-end safety and reliability guarantees for learning-enabled robots that operate around humans. Previously, Glen received his PhD in Electrical and Computer Engineering from the University of Michigan in 2022, where he was advised by Profs. Dmitry Berenson and Necmiye Ozay. Prior to that, he received dual B.S. degrees in Electrical Engineering and Computer Science and in Mechanical Engineering from UC Berkeley in 2017. He is a recipient of the National Defense Science and Engineering Graduate (NDSEG) fellowship and the NSF Graduate Research Fellowship, and is a Robotics: Science and Systems Pioneer.

Website: https://glenchou.github.io/

Zoom: Meeting ID 955 8366 7779; Passcode 530803
https://wse.zoom.us/j/95583667779

LCSR Seminar: David Porfirio, “Robot Application Development: A Shifting Paradigm” @ Hackerman B17
Wed, Apr 17 @ 12:00 pm – 1:00 pm

Title: Robot Application Development: A Shifting Paradigm

Abstract:
Interfaces for Robot Application Development (RAD) have proven effective at empowering non-roboticist developers (i.e., robot end users and non-robotics domain experts) to specify tasks for robots to perform. Historically, RAD has adopted development paradigms that have strong ties to traditional computer programming. With recent advancements in robot artificial intelligence, however, there is a pressing need for RAD interfaces to serve instead as communication intermediaries between the developer and the robot. As communication intermediaries, these interfaces should be designed to harness any relevant developer knowledge that is unknown to the robot, while at the same time appropriately leveraging the robot’s intelligent capabilities and communicating this information back to the developer. This talk describes two separate research threads to facilitate developer-robot communication through RAD interfaces. The first thread investigates how interfaces should be designed to appropriately leverage the robot’s knowledge and capabilities. The second thread investigates how interfaces should be designed to elicit relevant tacit knowledge from developers.

Bio:
David Porfirio is an NRC Postdoctoral Research Associate at the U.S. Naval Research Laboratory. His interests lie in designing and evaluating user interfaces that facilitate robot end-user development, with the goal of making robot programming more accessible and reliable for non-roboticists. His work has been published in top-tier conferences in both human-robot interaction and human-computer interaction. Prior to his postdoctoral appointment, David received his Ph.D. in 2022 from the University of Wisconsin–Madison (UW–Madison), where he was advised by Drs. Bilge Mutlu and Aws Albarghouthi. During his Ph.D., he was supported by the NSF GRFP, a Microsoft Dissertation Grant, and a Cisco Wisconsin Distinguished Graduate Fellowship.

LCSR Seminar: Marin Kobilarov and Louis Whitcomb, “Interviewing for Jobs in Academia and Industry” @ Hackerman B17
Wed, Apr 24 @ 12:00 pm – 1:00 pm

Abstract:
This LCSR Professional Development Seminar focuses on essential skills for interviewing for technical jobs in industry and in academia.

Bio:

Marin Kobilarov is an Associate Professor at the Johns Hopkins University and a Principal Engineer at Zoox/Amazon. At JHU he leads the Autonomous Systems, Control and Optimization (ASCO) lab, which develops algorithms and software for planning, learning, and control of autonomous robotic systems. The lab focuses on computational theory at the intersection of planning and learning, and on the system integration and deployment of robots that can operate safely and efficiently in challenging environments.

Louis Whitcomb is a Professor of Mechanical Engineering at the Johns Hopkins University Whiting School of Engineering and an Adjunct Scientist at the Woods Hole Oceanographic Institution. His research focuses on the navigation, dynamics, and control of robot systems in extreme environments. He is the founding Director (2007-2013) of the JHU Laboratory for Computational Sensing and Robotics and former Chair (2013-2017) of the JHU Department of Mechanical Engineering. He has received numerous best paper and teaching awards. He is a Fellow of the IEEE.