Calendar

Apr
1
Wed
CANCELLED George J. Pappas: Active Information Acquisition with Mobile Robots and Configurable Sensing Systems @ B17 Hackerman Hall
Apr 1 @ 12:00 pm – 1:00 pm

Due to circumstances beyond our control, today’s seminar has been cancelled.
Abstract

As the world is getting instrumented with numerous sensors, cameras, and robots, there is potential to transform industries as diverse as environmental monitoring, search and rescue, security and surveillance, localization and mapping, and structure inspection. Successful estimation techniques for gathering information in these scenarios have been designed and implemented. However, one of the great technical challenges today is to intelligently control the sensors, cameras, and robots in order to extract information actively and autonomously. In this talk, I will present a unified approach for active information acquisition, aimed at improving the accuracy and efficiency of tracking evolving phenomena of interest. I formulate a decision problem for maximizing relevant information measures and focus on the design of scalable control strategies for multiple sensing systems. First, I will present a greedy approach for information acquisition via applications in source seeking and mobile robot localization. Next, information acquisition with a longer planning horizon will be considered in the context of linear Gaussian models. I will develop an approximation algorithm with suboptimality guarantees to reduce the complexity in the planning horizon and the number of sensors and will present an application to active multi-robot localization and mapping. Finally, non-greedy information acquisition with general sensing models will be used for active object recognition. The techniques presented in this talk offer an effective and scalable approach for controlled information acquisition with multiple sensing systems.
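For background on the greedy strategy mentioned in the abstract, here is a minimal sketch assuming a generic linear Gaussian setup (the function and models below are illustrative assumptions, not the algorithms from the talk): at each step, choose the sensor whose measurement maximizes the mutual-information gain 0.5 log det(H Σ Hᵀ + R) − 0.5 log det(R), then apply a Kalman covariance update.

    import numpy as np

    def greedy_sensor_selection(Sigma, sensors, k):
        """Greedily pick k linear Gaussian sensors z_i = H_i x + v_i, v_i ~ N(0, R_i),
        to maximize mutual information with a Gaussian prior of covariance Sigma.
        Illustrative sketch only -- not the seminar's actual algorithm."""
        chosen = []
        for _ in range(k):
            best_i, best_gain = None, -np.inf
            for i, (H, R) in enumerate(sensors):
                if i in chosen:
                    continue
                S = H @ Sigma @ H.T + R  # innovation covariance
                gain = 0.5 * (np.linalg.slogdet(S)[1] - np.linalg.slogdet(R)[1])
                if gain > best_gain:
                    best_i, best_gain = i, gain
            chosen.append(best_i)
            H, R = sensors[best_i]
            K = Sigma @ H.T @ np.linalg.inv(H @ Sigma @ H.T + R)  # Kalman gain
            Sigma = Sigma - K @ H @ Sigma  # posterior covariance after the measurement
        return chosen, Sigma

Because this mutual-information objective is monotone submodular in the chosen sensor set, the classical greedy analysis gives a (1 − 1/e) approximation guarantee, which is one reason greedy selection scales well.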


Speaker Bio

George J. Pappas is the Joseph Moore Professor and Chair of the Department of Electrical and Systems Engineering at the University of Pennsylvania. He also holds secondary appointments in the Departments of Computer and Information Science, and Mechanical Engineering and Applied Mechanics. He is a member of the GRASP Lab and the PRECISE Center. He previously served as the Deputy Dean for Research in the School of Engineering and Applied Science. His research focuses on control theory and, in particular, on hybrid systems, embedded systems, and hierarchical and distributed control systems, with applications to unmanned aerial vehicles, distributed robotics, green buildings, and biomolecular networks. He is a Fellow of the IEEE and has received various awards, including the Antonio Ruberti Young Researcher Prize, the George S. Axelby Award, the O. Hugo Schuck Best Paper Award, and the National Science Foundation PECASE.

Apr
8
Wed
Julie Shah: Integrating Robots into Team-Oriented Environments @ B17 Hackerman Hall
Apr 8 @ 12:00 pm – 1:00 pm

Abstract

Recent advances in computation, sensing, and hardware are enabling robots to perform an increasing share of traditionally manual tasks in manufacturing. Yet the assembly mechanic often cannot be removed entirely from the process. This provides new economic motivation to explore opportunities for human workers and industrial robots to work in close physical collaboration. In this talk, I present the development of new algorithmic techniques for collaborative plan execution that scale to real-world industrial applications. I also discuss the design of new models for robot planning that use insights and data derived from the planning and execution strategies employed by successful human teams to support more seamless robot participation in human work practices. These include models for human-robot team training, which involves hands-on practice to clarify the sequencing and timing of actions, and for team planning, which includes communication to negotiate and clarify the allocation and sequencing of work. The aim is to support both the human and robot workers in co-developing a common understanding of task responsibilities and information requirements, to produce more effective human-robot partnerships.

Speaker Bio

Julie Shah is an Assistant Professor in the Department of Aeronautics and Astronautics at MIT and leads the Interactive Robotics Group of the Computer Science and Artificial Intelligence Laboratory. Shah received her SB (2004) and SM (2006) from the Department of Aeronautics and Astronautics at MIT, and her PhD (2010) in Autonomous Systems from MIT. Before joining the faculty, she worked at Boeing Research and Technology on robotics applications for aerospace manufacturing. She has developed innovative methods for enabling fluid human-robot teamwork in time-critical, safety-critical domains, ranging from manufacturing to surgery to space exploration. Her group draws on expertise in artificial intelligence, human factors, and systems engineering to develop interactive robots that emulate the qualities of effective human team members to improve the efficiency of human-robot teamwork. In 2014, Shah was recognized with an NSF CAREER award for her work on “Human-aware Autonomy for Team-oriented Environments,” and by the MIT Technology Review TR35 list as one of the world’s top innovators under the age of 35. Her work on industrial human-robot collaboration was also recognized by the Technology Review as one of the 10 Breakthrough Technologies of 2013, and she has received international recognition in the form of best paper awards and nominations from the International Conference on Automated Planning and Scheduling, the American Institute of Aeronautics and Astronautics, the IEEE/ACM International Conference on Human-Robot Interaction, the International Symposium on Robotics, and the Human Factors and Ergonomics Society.

Apr
15
Wed
Jim Freudenberg: 15 years of Embedded Control Systems at the University of Michigan @ B17 Hackerman Hall
Apr 15 @ 12:00 pm – 1:00 pm

Abstract

In 1998 we were asked by the local automotive industry to create a class to better train engineers to write embedded control software. We have now taught the class for 15 years at the University of Michigan and at ETH Zurich, and we currently serve almost 200 students per year. In this talk, we describe the motivations for the class from an automotive perspective, as well as the instructional lab, which uses an industry state-of-the-art microprocessor (Freescale MPC5643L) and haptic wheels. We then describe the NSF-sponsored research into haptics and cyber-physical systems that we perform in the lab, in collaboration with Professor Brent Gillespie of the UofM Mechanical Engineering Department.

Apr
22
Wed
Louis Whitcomb: Nereid Under-Ice: A Remotely Operated Underwater Vehicle for Oceanographic Access Under Ice
Apr 22 @ 12:00 pm – 1:00 pm

Abstract

This talk reports recent advances in underwater robotic vehicle research that enable novel oceanographic operations in extreme ocean environments, with a focus on two recent vehicles developed by a team comprising the speaker and his collaborators at the Woods Hole Oceanographic Institution. First, the development and operation of the Nereus underwater robotic vehicle will be briefly described, including successful scientific observation and sampling dive operations at hadal depths of 10,903 m on an NSF-sponsored expedition to the Challenger Deep of the Mariana Trench – the deepest place on Earth. Second, the development and first sea trials of the new Nereid Under-Ice (NUI) underwater vehicle will be described. NUI is a novel remotely operated underwater robotic vehicle capable of being teleoperated under ice under remote real-time human supervision. We report the results of NUI’s first under-ice deployments during a July 2014 expedition aboard R/V Polarstern at 83° N, 6° W in the Arctic Ocean – approximately 200 km NE of Greenland.

Speaker Bio

Louis L. Whitcomb completed a B.S. in Mechanical Engineering in 1984 and a Ph.D. in Electrical Engineering in 1992 at Yale University. From 1984 to 1986 he was a research and development engineer with the GMFanuc Robotics Corporation in Detroit, Michigan. He joined the Department of Mechanical Engineering at the Johns Hopkins University in 1995, after postdoctoral fellowships at the University of Tokyo and the Woods Hole Oceanographic Institution. His research focuses on the navigation, dynamics, and control of robot systems, including industrial, medical, and underwater robots. Whitcomb is a principal investigator of the Nereus and Nereid Under-Ice projects. He is the former (founding) Director of the JHU Laboratory for Computational Sensing and Robotics. Whitcomb is presently Professor and Chairman of the Department of Mechanical Engineering, with a secondary appointment in Computer Science, at the Johns Hopkins University’s Whiting School of Engineering. Whitcomb received teaching awards at Johns Hopkins in 2001, 2002, 2004, and 2011, and was awarded a National Science Foundation CAREER Award and an Office of Naval Research Young Investigator Award. He is a Fellow of the IEEE. He is also an Adjunct Scientist in the Department of Applied Ocean Physics and Engineering at the Woods Hole Oceanographic Institution.

Apr
29
Wed
David Han: Science of Autonomy in DOD basic research @ B17 Hackerman Hall
Apr 29 @ 12:00 pm – 1:00 pm

Abstract

The Department of Defense (DOD) funds long-term basic research in a wide variety of scientific and engineering fields with the goal of exploiting new knowledge to enhance, and where possible transform, future capabilities. DOD-funded research is known for high-risk endeavors that have led to paradigm shifts in the nation’s technical capabilities. Revolutionary technologies such as GPS, gallium arsenide (GaAs) microwave electronics, magnetic random access memory (MRAM), and the Kalman filter are a few examples.

Recently, the Science of Autonomy (SoA) came into focus as one of the DOD research interests, with expectations of great impact. The effort addresses critical multidisciplinary research challenges that cut across the DOD departments and warfighting areas and domains. It draws on control theory, computational intelligence, human factors, and related fields such as biology, animal behavior and cognition, economics and management theory, cognitive science and psychology, and neuroscience. Today’s presentation will briefly describe the background, some key aspects of these challenges, and the programs tackling them.

Speaker Bio

David Han is the Associate Director for Basic Research in Machine Intelligence and Robotics in the Office of the Assistant Secretary of Defense for Research & Engineering (ASD(R&E)). The Basic Research Office (BRO) of the ASD(R&E) oversees the entire basic research portfolio of the US DoD. He is an ASME Fellow and an IEEE Senior Member, and was certified as a Professional Engineer (PE) in the mechanical branch in the State of Hawaii in 1985. Dr. Han received a BS from Carnegie Mellon University, and an MSE and a PhD from Johns Hopkins University.

The early part of his career included naval nuclear engineering at the Pearl Harbor Naval Shipyard and design engineering at the R.M. Towill Corporation in Honolulu. He was a research engineer at the Naval Surface Warfare Center (NSWC) at White Oak in the underwater weapons program, and worked as senior professional staff at the Johns Hopkins University Applied Physics Laboratory (JHU APL) in naval missile defense and satellite power programs. He was with the University of Maryland at College Park as a visiting associate professor and the Deputy Director of the Center for Energetic Concepts Development (CECD), and was also the Distinguished IWS Chair Professor in the Systems Engineering Department of the US Naval Academy in Annapolis. He spent over eleven years in total as a program officer at the Office of Naval Research (ONR) managing basic and applied research and advanced technology programs. From 2012 to 2014 he served as the Deputy Director of Research of ONR, overseeing the Discovery and Invention (D&I) portfolio of over $900 million annually of basic and applied research.

Dr. Han has authored or coauthored over 60 peer-reviewed papers, including 4 book chapters. He has taught at Johns Hopkins University, the University of Maryland Baltimore County, and Korea University. His research interests include image/speech processing and recognition, machine learning, and human-robot interaction.

Jun
11
Thu
Fijs W.B. van Leeuwen: Image guidance technologies as an add-on to robotic surgery @ 320 Hackerman Hall
Jun 11 @ 12:00 pm – 1:00 pm

Abstract:

Image-guided interventions are gaining increasing interest from the surgical and radiological disciplines. In addition to radiological imaging technologies, radionuclear imaging and optical imaging have the potential to visualize molecular features of disease. To fully exploit the potential of these technologies, it is essential to understand the advantages and shortcomings that come with them. With this knowledge, the clinically applied guidance approaches and well-known surgical planning modalities such as US, CT, MRI, SPECT, and PET can be placed in perspective. The approaches used for the radionuclear and optical modalities are often complementary, a feature that can be exploited further with the use of hybrid tracers, whose molecular parameters can be detected both at depth (using radionuclear imaging modalities) and superficially (using optical imaging modalities).

The (da Vinci) robot provides an ideal platform for effectively integrating these image guidance technologies into clinical routine, and it provides a solid basis for the international dissemination of successful technologies.

In his talk, Dr. van Leeuwen will illustrate the clinical implementation of a range of radionuclear and optical guidance technologies during robot-assisted laparoscopic prostatectomy (RALP) combined with sentinel lymph node dissection.

Bio:

Fijs completed his master’s in Chemistry in the Bioinorganic and Coordination Chemistry group (prof. dr. Jan Reedijk; Leiden Institute of Chemistry), followed by a PhD at the MESA+ Institute for Nanotechnology (University of Twente) in the former Supramolecular Chemistry and Technology group headed by prof. dr. David Reinhoudt. During this period he also performed research at the Irradiation & Development business unit (dr. Ronald Schram) of the Dutch Nuclear Research and Consultancy Group (NRG) in Petten. After obtaining his PhD he shifted to biomedical research, pursuing a postdoctoral fellowship in the Chemical Biology group (dr. Huib Ovaa) at the department of Cellular Biochemistry of the Netherlands Cancer Institute – Antoni van Leeuwenhoek Hospital (NKI-AvL). After being awarded a personal VENI grant from the Dutch Research Council he moved, within the NKI-AvL, to the clinical departments of Radiology and Nuclear Medicine, where he became a senior postdoctoral fellow in the medical image processing group of dr. Kenneth Gilhuijs. Under the guidance of the diagnostic oncology division heads, initially prof. dr. Marc van de Vijver and later prof. dr. Laura van ‘t Veer, he started to set up his own molecular imaging research line. In 2009 he obtained a personal cancer career award from the Dutch Cancer Society (KWF) for the development of multimodal imaging agents and was appointed associate staff scientist at the NKI-AvL. In 2010 he obtained a VIDI grant from the Dutch Research Council for the development of imaging agents for surgical guidance. Soon afterwards he moved to the Leiden University Medical Center (LUMC) to become an associate professor in the department of Radiology (2011). There he received an ERC Starting Grant (2012) for the illumination of peripheral nerve structures. At the LUMC he heads the highly multidisciplinary Interventional Molecular Imaging laboratory, in which the “from molecule to man” principle is actively pursued.

Jun
15
Mon
Nicolas Padoy: Radiation Exposure Monitoring in the Hybrid Operating Room using a Multi-camera RGBD System @ B17 Hackerman Hall
Jun 15 @ 10:00 am – 11:00 am

Abstract:

The growing use of image-guided minimally invasive surgical procedures is confronting clinicians and surgical staff with new radiation exposure risks from X-ray imaging devices. Furthermore, the current surgical practice of wearing a single dosimeter at chest level to measure radiation exposure does not provide a sufficiently accurate estimate of radiation absorption throughout the body. Our aim is therefore to develop a global radiation awareness system that can more accurately estimate intra-operative radiation exposure, thereby increasing staff awareness of radiation exposure risks and enabling the implementation of well-adapted safety measures.

In this talk, I will present our work towards the development of such a system. I will first present a computer vision approach that combines data from wireless dosimeters with a simulation of radiation propagation in order to compute a global radiation risk map of the area near the X-ray device. A multi-camera RGBD system is used to estimate the layout of the room and to display the estimated risk map using augmented reality. By using real-time wireless dosimeters in our system, we can both calibrate the simulation and validate its accuracy at specific locations in real time.

I will then describe our recent work on human pose estimation and activity recognition using RGBD data recorded during real X-ray guided interventions. Among other applications, estimating the poses of the persons present in the room will allow the computation of radiation exposure per body part over time, and recognizing surgical activities will permit correlating those activities with the radiation risk they pose to staff and clinicians.
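As a loose illustration of the calibration step mentioned above, here is a minimal sketch assuming a simulated 2-D dose map over a room grid and a handful of wireless dosimeter readings (all names hypothetical, not the actual system): fit a single gain by least squares and keep the per-dosimeter residuals for validation.

    import numpy as np

    def calibrate_dose_map(simulated_map, probe_cells, readings):
        """Scale a simulated dose map so it best matches sparse dosimeter readings.
        Hypothetical sketch, not the system described in the talk."""
        sim = np.array([simulated_map[r, c] for r, c in probe_cells], dtype=float)
        meas = np.asarray(readings, dtype=float)
        alpha = (sim @ meas) / (sim @ sim)  # gain minimizing ||alpha*sim - meas||^2
        residuals = meas - alpha * sim      # per-dosimeter validation error
        return alpha * simulated_map, residuals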

Bio:

Nicolas Padoy is an Assistant Professor at the University of Strasbourg, holding a Chair of Excellence in medical robotics within the ICube laboratory. He leads the research group CAMMA on Computational Analysis and Modeling of Medical Activities, which focuses on computer vision, activity recognition and the applications thereof to surgical workflow analysis and human-machine cooperation during surgery. He graduated with a Maîtrise in Computer Science from the Ecole Normale Supérieure de Lyon in 2003 and with a Diploma in Computer Science from the Technische Universität München (TUM), Munich, in 2005. He completed his PhD jointly between the Chair for Computer Aided Medical Procedures at TUM and the INRIA group MAGRIT in Nancy. Subsequently, he was a postdoctoral researcher and later an Assistant Research Professor in the Laboratory for Computational Sensing and Robotics at the Johns Hopkins University, USA.

Jul
31
Fri
Special Seminar: Yunhui Liu: Towards Fusion of Vision with Robot Motion @ 320 Hackerman Hall
Jul 31 @ 12:00 pm – 1:00 pm

Abstract

Humans rely heavily on visual feedback from their eyes to control their motion. To develop a robotic vision system that functions like human eyes, one of the crucial and difficult problems is how to effectively incorporate visual information into the motion control of a robot whose dynamics are highly nonlinear. This talk presents our recent efforts and latest results on vision-based control of robotic systems. The controllers developed embed feedback from visual sensors into the low-level loop of robot motion control. It will be demonstrated that, through an innovative and simple design of the visual feedback, we can solve several difficult problems in visual servoing, such as uncalibrated dynamic visual servoing, trajectory tracking of nonholonomic mobile robots without position measurement, visual odometry, and model-free manipulation of deformable objects like soft tissues. Applications of these visual servoing approaches in robotic surgery will also be introduced.
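For background, the classical image-based visual servoing law that such controllers relate to (the talk's uncalibrated methods specifically avoid requiring an exactly known interaction matrix) drives the feature error e = s − s* to zero with the camera velocity v = −λ L⁺ e. A minimal sketch, with all names illustrative:

    import numpy as np

    def ibvs_step(s, s_star, L, lam=0.5):
        """One step of textbook image-based visual servoing.
        s, s_star: current and desired image-feature vectors.
        L: interaction matrix (image Jacobian) mapping camera velocity to feature rates.
        Background sketch only -- not the uncalibrated controllers from the talk."""
        e = s - s_star
        return -lam * np.linalg.pinv(L) @ e  # camera twist giving exponential error decay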


Bio

Yunhui Liu received his B.Eng. degree in Applied Dynamics from the Beijing Institute of Technology, China, in 1985, his M.Eng. degree in Mechanical Engineering from Osaka University in 1989, and his Ph.D. degree in Mathematical Engineering and Information Physics from the University of Tokyo, Japan, in 1992. He worked at the Electrotechnical Laboratory, MITI, Japan, from 1992 to 1995 as a Research Scientist. He has been with the Department of Mechanical and Automation Engineering at The Chinese University of Hong Kong since 1995, and is currently a Professor, Director of the Networked Sensors and Robotics Laboratory, and Director of the Medical Robotics Laboratory. Professor Liu is interested in vision-based robot control, medical robotics, aerial robotics, multi-fingered grasping, and robot applications. His research has been widely funded by the Research Grants Council, the Innovation and Technology Fund, and the Quality Education Fund in Hong Kong, and by the national 863 and 973 programs in Mainland China. He has published over 200 papers in refereed professional journals and international conference proceedings, and has received a number of best paper awards from international journals and major international conferences in robotics. He is the Editor-in-Chief of Robotics and Biomimetics, an Editor of Advanced Robotics, and a former Associate Editor of IEEE Transactions on Robotics and Automation. He was listed among the Highly Cited Authors (Engineering) by Thomson Reuters in 2013. Professor Liu was the General Chair of the 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). He is a Fellow of the IEEE.

Aug
26
Wed
Special Seminar: Darius Burschka: Vision-Based Interaction in Dynamic Scenes @ B17 Hackerman Hall
Aug 26 @ 12:00 pm – 1:00 pm

Abstract

While perception and modelling of static environments has become a well-researched problem, independent motions in the scene are still difficult to acquire and represent for robotic tasks. The challenges range from the significantly higher sampling rate required for a correct representation of the motion to an appropriate representation of actions and behaviours in a knowledge database. For observation of human actions, the typical sampling rate of a standard video camera is not sufficient to capture the details of a transport action beyond registering the resulting position change of an object. A high-speed motion tracking system is necessary to analyse the intentions of the agent while an action is performed. At the same time, the dynamic change in the scene is often used not only for task analysis but also for the implementation of reactive behaviours on systems. An interesting aspect in this context is finding a robust representation for the information exchange that is insensitive to calibration errors in the visual and control parts of the system. Experiments show that exchange in three-dimensional Cartesian space is not optimal, although it is easier for the human operator to understand.

In my talk, I will present the newest research results from my group, which allow fast labelling and estimation of physical properties of dynamic objects in manipulation scenarios, and which also allow low-level reactive behaviours to be implemented on mobile and flying robots without exact camera calibration. The developed hybrid stereo system allows motion acquisition at up to 120 Hz, providing a better sampling of human behaviours. I will also present our work on motion representation for trajectory planning and collision avoidance on our robotic car platform RoboMobil.

Speaker Bio

Darius Burschka received his PhD degree in Electrical and Computer Engineering in 1998 from the Technical University Munich (TUM) in the field of vision-based navigation and map generation with binocular stereo systems. In 1999, he was a Postdoctoral Associate at Yale University, New Haven, Connecticut, where he worked on laser-based map generation and landmark selection from video images for vision-based navigation systems. From 1999 to 2003, he was an Associate Research Scientist at the Johns Hopkins University, and from 2003 to 2005 an Assistant Research Professor in Computer Science at JHU.
Currently, he is an Associate Professor in Computer Science at TUM in Germany, where he heads the computer vision and perception group, and he collaborates closely with the German Aerospace Center (DLR). He is a Co-Chair of the IEEE RAS Technical Committee for Computer and Robot Vision, Co-Chair of the Computer Vision and Perception Topic Group at euRobotics (EU Horizon 2020), and a Senior Member of the IEEE.

Sep
2
Wed
LCSR/ERC Seminar: Welcome/Welcome Back Town Hall
Sep 2 @ 12:00 pm – 1:00 pm

Abstract

This is the Fall 2015 Kick-Off Seminar, presenting an overview of LCSR, useful information, and an introduction to the faculty and labs.

