Past Seminars

Please join the robotics-announce mailing list to receive email notification of upcoming seminars.

Seminar Name/Date/Time/Location Description



Grace Xingxin Gao, PhD

University of Illinois at Urbana-Champaign

Thursday 3/16/2017 4:00-5:00pm 


FXB 1109 Boeing Lecture Hall

The ever-growing applications of Unmanned Aerial Vehicles (UAVs) require UAVs to navigate at low altitude, below 2,000 feet. Traditionally, a UAV is equipped with a single GPS receiver. When flying at low altitude, a single GPS receiver may receive signals from fewer than four GPS satellites in the partially visible sky, which is not sufficient to conduct trilateration. In such a situation, GPS coordinates become unavailable and the partial GPS information is discarded. A GPS receiver may also suffer from multipath errors, causing the navigation solution to be inaccurate and unreliable.

In this talk, we present our recent work on UAV navigation using not one but multiple GPS receivers, either on the same UAV or across different UAVs, fused with other navigational sensors such as IMUs and vision. We integrate and make use of the partial GPS information from peer GPS receivers and are able to dramatically improve GPS availability. We apply advanced filtering algorithms to multiple GPS measurements on the same UAV to mitigate multipath errors. Furthermore, multiple UAVs equipped with on-board communication capabilities can cooperate by forming a UAV network to further improve navigation accuracy, reliability and security.
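To see why fewer than four satellites breaks trilateration, note that each pseudorange constrains four unknowns: the receiver's position and its clock bias. A minimal sketch (not the speaker's system; satellite geometry and values are made up for illustration) solves the standard four-satellite case by Gauss-Newton:

```python
# Toy GPS trilateration: each pseudorange rho_i = ||s_i - x|| + b gives one
# equation in four unknowns (x, y, z, receiver clock bias b), so at least
# four satellites are needed. Synthetic data, for illustration only.
import numpy as np

def solve_position(sat_pos, pseudoranges, iters=10):
    """Gauss-Newton solve for receiver position and clock bias."""
    x = np.zeros(4)  # [x, y, z, clock_bias]
    for _ in range(iters):
        d = np.linalg.norm(sat_pos - x[:3], axis=1)
        residual = pseudoranges - (d + x[3])
        # Jacobian of the predicted pseudorange: unit line-of-sight vectors
        # from satellites to receiver, plus a constant clock-bias column.
        J = np.hstack([-(sat_pos - x[:3]) / d[:, None], np.ones((len(d), 1))])
        x += np.linalg.lstsq(J, residual, rcond=None)[0]
    return x

# Four hypothetical satellites; receiver at (1, 2, 3) with clock bias 0.5.
sats = np.array([[20.0, 0, 0], [0, 20, 0], [0, 0, 20], [15, 15, 15]])
truth = np.array([1.0, 2.0, 3.0])
rho = np.linalg.norm(sats - truth, axis=1) + 0.5
est = solve_position(sats, rho)
```

With only three satellites the Jacobian has more unknowns than rows and the solution is not unique, which is exactly the "partial information" the talk proposes to keep rather than discard.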

Brendan Englot, PhD

Stevens Institute of Technology

Friday 3/17/2017 12:00-1:00pm 


EECS 1008

I will discuss a three-tiered research effort to develop algorithms that will enable autonomous underwater robots to operate reliably in complex, cluttered 3D environments. The first tier provides a foundation for navigating in the absence of prior knowledge of the environment: 3D occupancy mapping with an underwater robot equipped with a scanning sonar. We apply Gaussian processes and other supervised learning techniques to build real-time predictive occupancy maps over sparse and noisy data. The middle tier of our effort addresses exploring an unknown environment while a map is being constructed, and the application of supervised learning to efficiently predict the information gain of candidate sensing actions. This is achieved with the aid of Bayesian optimization. Finally, when an accurate model of the environment is available, I will discuss approaches for motion planning under uncertainty that allow a robot to curb the growth of localization error under limited sensing resources. A carefully chosen metric to represent localization uncertainty allows the efficient propagation of uncertainty along a graph, and the search of the graph for paths that optimally curb goal-state uncertainty.
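The first tier can be illustrated with a small Gaussian-process regression over sparse "hit/miss" observations. This is a generic sketch, not the speaker's implementation; the kernel, length scale, and data are assumed for illustration:

```python
# GP occupancy sketch: treat sparse sonar returns as noisy labels
# (+1 occupied, -1 free) and regress a continuous occupancy field.
# The predictive variance flags unexplored regions, which is what makes
# GP maps useful for exploration, unlike plain occupancy grids.
import numpy as np

def rbf(A, B, ls=0.5):
    """Squared-exponential kernel between two point sets."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls**2)

def gp_predict(X_train, y_train, X_query, noise=0.1):
    K = rbf(X_train, X_train) + noise**2 * np.eye(len(X_train))
    Ks = rbf(X_query, X_train)
    mean = Ks @ np.linalg.solve(K, y_train)
    v = np.linalg.solve(K, Ks.T)
    var = rbf(X_query, X_query).diagonal() - np.einsum('ij,ji->i', Ks, v)
    return mean, var

# Sparse observations: an obstacle near (1, 1), free space near the origin.
X = np.array([[1.0, 1.0], [1.1, 0.9], [0.0, 0.0], [0.2, 0.1]])
y = np.array([1.0, 1.0, -1.0, -1.0])
q = np.array([[1.05, 0.95], [0.1, 0.05], [3.0, 3.0]])
mean, var = gp_predict(X, y, q)
```

Queries near data get confident occupied/free predictions; the far-away query reverts to the prior with high variance, which a Bayesian-optimization exploration strategy (the middle tier) can then target.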

Gregory A. Clark, Ph.D.

Thursday, September 29, 2016

9:00 – 10:00 am

133 Chrysler Center


“It Takes a Lot of Nerve: Restoring Sensorimotor Hand Function in Humans After Long-Term Amputation of the Hand”

Prof. Wolfram Burgard

Albert-Ludwigs-Universität Freiburg

Wednesday April 6th, 2016



Dow 1005


Deep Learning for Robot Navigation and Perception


Abstract: Autonomous robots are faced with a series of learning problems to optimize their behavior. In this presentation I will describe recent approaches developed in my group, based on deep learning architectures, for object recognition and body part segmentation from RGB(-D) images. In addition, I will present a terrain classification approach that utilizes sound. For all approaches I will describe extensive experiments quantifying how each algorithm extends the state of the art.

Prof. Dmitry Berenson

Worcester Polytechnic Institute

March 28, 2016 9-10:30am

1005 EECS

We envision a future where robots are integrated seamlessly into our factories, hospitals, and homes as autonomous agents that interact with the physical world as fluently and efficiently as humans do. While a great deal of work has investigated the manipulation of rigid objects in these settings, manipulation of deformable objects like cables, muscle tissue, and cloth remains extremely under-explored. The problem is indeed challenging, as these objects are not straightforward to model and have infinite-dimensional configuration spaces, making it difficult to apply established motion planning approaches. Our approach seeks to bypass these difficulties by representing deformable objects using simplified geometric models at both the global and local planning levels. Though we cannot predict the state of the object precisely, we can nevertheless perform tasks such as cable-routing, cloth folding, and surgical probe insertion in geometrically-complex environments. Building on this work, our new projects in this area aim to blend exploration of the model space with goal-directed manipulation of deformable objects and to generalize the methods we have developed to motion planning for soft robot arms, where we can exploit contact to mitigate the actuation uncertainty inherent in these systems. 

 Prof. Gaurav Sukhatme


Friday, February 5, 2016

3:30 to 4:30p.m

Room 1500 EECS Building

Underwater robotics is undergoing a transformation. Recent advances in AI and machine learning are enabling a new generation of underwater robots to make intelligent decisions (where to sample? how to navigate?) by reasoning about their environment (what is the shipping and water forecast?). At USC, we are engaged in a long-term effort to develop persistent, autonomous underwater robotic systems. In this talk, I will give an overview of some of our recent results focusing on two problems in adaptive sampling: underwater change detection and biological sampling. Time permitting, I will also present our recent work on hazard avoidance, allowing robots to operate in regions where there is substantial ship traffic.

Prof. Russ Tedrake


Jan 21st 2016


1005 EECS

The Bigger They Are, The Harder They Fall:

Optimization Methods for Robust Planning and Control of Humanoids and UAVs

Despite the incredible success stories from robotics in the last few years, many of our best planning and control algorithms are still far from transitioning out of the research lab and into the real world. Fielding a humanoid robot in a disaster environment, or flying a UAV at high speeds through a cluttered environment requires reliable online planning in novel environments, and robustness to uncertainty from perception, imperfect actuators, and model errors. For legged robots and robot manipulation, this is made even more challenging by the fact that contact with the (uncertain) environment plays a central role in the dynamics.

I believe that transitioning dynamic robots to the real world requires an explicit focus on robustness, which has natural formulations using optimization. Making these optimizations tractable requires exploiting sparsity and convexity in our robot equations, and making informed relaxations. In this talk, I will review our best attempts to date and give examples with fast vision-based UAV flight through clutter and MIT’s entry into the DARPA Robotics Challenge. 

Prof. Wesley McGee

Design Robotics

Friday Nov 13 12:00-1:00pm

Room 2000 Phoenix Memorial Lab.

In the past two decades architects have increasingly adopted CNC fabrication technologies common in other manufacturing sectors like the aerospace and automotive industries, and more recently this has rapidly expanded to include industrial robots. This talk will discuss how the growing accessibility of industrial robotic tools is changing the way designers and architects think about fabrication technologies and construction, as well as fueling new modes of collaboration between designers, architects and engineers. Several recent digital fabrication projects conducted in the Taubman College robotics lab will be discussed. 

Underwater 3D Reconstruction Based on Physical Models for Refraction and Underwater Light Propagation

Dr. Anne Jordt
GEOMAR Helmholtz Centre for Ocean Research Kiel

Friday Sep 25 12:00-1:00pm

 Location: GFL107  (Gorguze Family Laboratory)

Compared to aerial photogrammetry (image-based measurements), precise and reliable measurements with underwater cameras are complicated by several phenomena. First, while traveling through water, light is scattered and attenuated, depending on the actual composition of the water, the wavelength, etc. This distorts the “true” object colors or intensities depending on the object distance, and often leads to blueish or greenish images. Second, light rays can be refracted at the interfaces between water, glass and air when entering the camera housing. In particular with flat port cameras, this affects the geometric image formation process and needs to be considered when reasoning in 3D. In this talk I will discuss novel automated machine vision approaches for reconstructing the geometry and colors of a 3D scene from an underwater video or photo sequence. These approaches are based on physical models for refraction and light propagation and will be demonstrated on underwater footage from ROVs as well as from archaeology divers.
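The flat-port refraction effect mentioned above follows directly from Snell's law. A minimal sketch (illustrative only, not the speaker's code) shows the ray bending at each interface, and the standard result that for parallel interfaces a thin glass port is equivalent to a single air-water interface:

```python
# Flat-port refraction: a ray crossing air -> glass -> water bends at each
# interface per Snell's law (n1 sin t1 = n2 sin t2), so the usual pinhole
# camera model no longer holds for flat-port underwater housings.
import math

def refract_angle(theta_in, n_in, n_out):
    """Return the refracted angle from Snell's law."""
    s = n_in * math.sin(theta_in) / n_out
    if abs(s) > 1.0:
        raise ValueError("total internal reflection")
    return math.asin(s)

N_AIR, N_GLASS, N_WATER = 1.0, 1.5, 1.33  # typical refractive indices

theta_air = math.radians(30)
theta_glass = refract_angle(theta_air, N_AIR, N_GLASS)
theta_water = refract_angle(theta_glass, N_GLASS, N_WATER)

# For parallel interfaces the glass layer only offsets the ray laterally:
# the exit angle equals a direct air-water refraction.
theta_direct = refract_angle(theta_air, N_AIR, N_WATER)
```

Because the ray compresses toward the port normal (water is denser than air), the effective field of view shrinks and straight pixel rays no longer meet at a single center of projection, which is why the talk's reconstruction pipeline models refraction explicitly.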

From Global Properties to Local Rules in Multi-Agent Systems

Prof. Magnus Egerstedt, 

Schlumberger Professor in the School of Electrical and Computer Engineering at the Georgia Institute of Technology

April 24 10:00 – 11:00 a.m., Room 1200 EECS

The last few years have seen significant progress in our understanding of how one should structure multi-robot systems. New control, coordination, and communication strategies have emerged and, in this talk, we discuss some of these developments. In particular, we will show how one can go from global, geometric, team-level specifications to local coordination rules for achieving and maintaining formations, area coverage, and swarming behaviors. One aspect of this concerns how users can interact with networks of mobile robots in order to inject new, global information and objectives. We will also investigate what global objectives are fundamentally implementable in a distributed manner on a collection of spatially distributed and locally interacting agents.  

Compiling Global Behaviors into Local Controllers for Mobile Sensor Networks

Prof. Kevin Lynch,  Professor and Chair, Mechanical Engineering at Northwestern University

April 27  10:00am, Johnson Rooms, 3rd Floor, Lurie Bldg.

Mobile sensor networks can be deployed for tasks such as environmental monitoring or searching a collapsed building for survivors. Decentralized control algorithms allow the mobile sensors to adapt to a changing environment and to failure of individual sensors, without the need for a centralized controller.

I will describe the control theory we are developing to support “swarms” of mobile sensors. This work is based on the concept of “information diffusion” in ad hoc communication networks and motion control laws that drive the sensors to optimally acquire information.


A Long-term View of SLAM

 Friday, April 10th, 2015
 3:30 – 4:30 p.m.
Room 1500 EECS Building

Prof. John J. Leonard Professor MIT, Department of Mechanical and Ocean Engineering

This talk will provide a long-term view on the Simultaneous Localization and Mapping (SLAM) problem in Robotics. The first part of the talk will review the history of SLAM research and define some of the major challenges in SLAM, including choosing a map representation, developing algorithms for efficient state estimation, and solving for data association and loop closure. Next, we will give a snapshot of recent MIT research in SLAM based on joint work with the National University of Ireland, Maynooth. A major new trend in SLAM is the development of real-time dense mapping systems using RGB-D cameras. We will describe Kintinuous, a new SLAM system capable of producing high quality globally consistent surface reconstructions over hundreds of meters in real-time with only a cheap commodity RGB-D sensor. The approach is based on three key innovations in volumetric fusion-based SLAM: (1) using a GPU-based 3D cyclical buffer trick to extend dense volumetric fusion of depth maps to an unbounded spatial region; (2) combining both dense geometric and photometric camera pose constraints; and (3) efficiently applying loop closure constraints by the use of an as-rigid-as-possible space deformation. Experimental results will be presented for a wide variety of data sets to demonstrate the system’s performance. We will conclude the talk with a discussion of current and future research topics, including object-based and semantic mapping, lifelong learning, and advanced physical interaction with the world. We will also discuss potential implications of SLAM research on the development of self-driving cars.

Robust and Efficient Real-time Mapping for Autonomous Robots

Friday, March 27, 2015
3:30pm – 4:30pm
1500 EECS

Prof. Michael Kaess

Assistant Research Professor
Carnegie Mellon University

We are starting to see the emergence of autonomous robots that operate outside of controlled factory environments in various applications ranging from driverless cars, space and underwater exploration to service robots for businesses and homes. One of the very first challenges encountered on the way to autonomy is perception: obtaining information about the environment that allows the robot to efficiently navigate through, interact with and manipulate it. Moreover, in many such applications, models of the environment are either unavailable or outdated, thus necessitating real-time robotic mapping using onboard sensors. In this talk I will present my recent research on robust and efficient optimization techniques for real-time robotic mapping. I will focus on our recently developed incremental nonlinear least-squares solver, termed incremental smoothing and mapping (iSAM2). Based on our new probabilistic model called the Bayes tree, iSAM2 efficiently updates an existing solution to a nonlinear least-squares problem after new measurements are added. I will describe some of the key aspects of my work and also address robustness in optimization. Lastly, I will present applications enabled by iSAM2 including long-term visual mapping and Kintinuous — our recent work on dense mapping with RGB-D cameras.
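The nonlinear least-squares core of mapping systems like iSAM2 can be illustrated with a toy 1D pose graph. This batch sketch (the incremental Bayes-tree update that distinguishes iSAM2 is not shown; data and weights are assumed) demonstrates how a loop closure redistributes accumulated odometry error:

```python
# Toy pose-graph optimization: 1D poses linked by odometry edges plus one
# loop-closure edge. Gauss-Newton minimizes the sum of squared residuals;
# a prior anchors pose 0 so the system is well-posed.
import numpy as np

def solve_pose_graph(n, edges, iters=5):
    """edges: list of (i, j, measured_offset). Pose 0 is fixed at zero."""
    x = np.zeros(n)
    for _ in range(iters):
        J = np.zeros((len(edges) + 1, n))
        r = np.zeros(len(edges) + 1)
        for k, (i, j, z) in enumerate(edges):
            J[k, i], J[k, j] = -1.0, 1.0    # residual z - (x[j] - x[i])
            r[k] = z - (x[j] - x[i])
        J[-1, 0] = 1.0                       # prior on pose 0
        r[-1] = 0.0 - x[0]
        x += np.linalg.lstsq(J, r, rcond=None)[0]
    return x

# Odometry claims each step moves +1.0, but a loop closure measures pose 3
# only 2.7 from pose 0; least squares spreads the 0.3 discrepancy evenly.
edges = [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0), (0, 3, 2.7)]
x = solve_pose_graph(4, edges)
```

iSAM2's contribution, per the abstract, is updating this kind of solution incrementally as new measurements arrive, rather than re-solving the whole system each time.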

From Running Cockroaches to Foam Robots: why trying to learn from nature is hard, and we should do it anyway

Prof. Shai Revzen

Wednesday, Jan 21 4-5pm, Little building rm. 2548

Biologically inspired robotics is in vogue these days, with new videos of animal-like robots appearing almost daily. Yet despite superficial similarity, actually copying from nature’s playbook is a difficult task. In this talk I will discuss some of these difficulties, and show vignettes from biology, robotics and mathematics that illustrate one approach toward learning about the control of rapid legged locomotion in animals and how to instantiate similar controls in robots.

Motion Planning and Control for Robot and Human Manipulation Lecture

Dr. Kevin M. Lynch

Thursday, December 18, 2014

3:00 – 4:00PM

1200 EECS

The talk briefly describes progress on motion planning and control for two very different manipulation problems: (1) nonprehensile manipulation by robots and (2) control of neuroprosthetics for humans with spinal cord injuries. The first part focuses on graspless manipulation modes commonly used by humans and animals but mostly avoided by robots, such as rolling, pushing, pivoting, tapping, and throwing and catching. The second part will describe a recent project on control of a functional electrical stimulation neuroprosthetic for the human arm.

Control Seminar / Robotics Seminar – From Motion to Actor and Action Inference in Unconstrained Video

Speaker: Professor Jason Corso

Friday, November 7, 2014
3:30 – 4:30 p.m.
1500 EECS


Can a human fly? Can a baby run? Can a car walk? Emphatically the answer is no to all of these questions. Yet, the primary agenda in the action recognition literature has focused strictly on the action ignoring who or what is doing the acting. Concurrently, the object recognition community has focused strictly on identifying the humans, the babies, the cars, etc. without considering what these “actors” are doing in the video. Yet, the articulated motion induced from, say, a bird eating versus an adult human eating is significantly different. In this talk, I will describe our recent work to unify these two problems into joint actor-action recognition with structured inference in unconstrained video. I will present a sequence of increasingly more sophisticated models, from an independent naive Bayes model through a hierarchical graphical model for jointly capturing these two inferential goals. We have developed a new dataset with seven actor classes and nine action classes (including the null action), and will present and discuss results of all models on this challenging dataset.


Control Seminar / Robotics Seminar – Advances in Underwater Robotic Vehicles for Oceanographic Exploration in Extreme Environments

Speaker: Louis L. Whitcomb

Friday, November 14, 2014
3:30 – 4:30 p.m.
1500 EECS

This talk reports recent advances in underwater robotic vehicle research to enable novel oceanographic operations in extreme ocean environments, with focus on two recent novel vehicles developed by a team comprised of the speaker’s group and his collaborators at the Woods Hole Oceanographic Institution. First, the development and operation of the Nereus underwater robotic vehicle will be briefly described, including successful scientific observation and sampling dive operations at hadal depths of 10,903 m on a NSF sponsored expedition to the Challenger Deep of the Mariana Trench – the deepest place on Earth. Second, development and first sea trials of the new Nereid Under-Ice (NUI) underwater vehicle will be described. NUI is a novel remotely-controlled underwater robotic vehicle capable of being teleoperated under ice under remote real-time human supervision. The goal of NUI is to enable exploration and detailed examination of biological and physical environments including the ice-ocean interface in marginal ice zones, in the water column of ice-covered seas, at glacial ice-tongues, and ice-shelf margins, delivering real-time high-definition video in addition to survey data from on board acoustic, optical, chemical, and biological sensors. We report the results of NUI’s first under-ice deployments during a July 2014 expedition aboard R/V Polarstern at 83° N, 6° W in the Arctic Ocean – approximately 200 km NE of Greenland – in which we conducted 4 dives under the moving polar ice-pack to evaluate and develop NUI’s overall functioning and its individual engineered subsystems and under-ice scientific survey capabilities for biological oceanography and sea-ice physics.



AI-Seminar – Human-Centered Principles and Methods for Designing Robotic Technologies

Speaker: Prof. Bilge Mutlu

Tuesday, October 21, 2014
4:00 – 5:30 p.m.
3725 BBB

The emergence of robotic products that serve as automated tools, assistants, and collaborators promises tremendous benefits across a range of everyday settings from the home to manufacturing facilities. While these products promise interactions that can be far more complex than those with conventional products, their successful integration into the human environment requires these interactions to be also natural and intuitive. To achieve complex but intuitive interactions, designers and developers must simultaneously understand and address computational and human challenges. In this talk, I will present my group’s work on building human-centered guidelines, methods, and tools to address these challenges in order to facilitate the design of robotic technologies that are more effective, intuitive, acceptable, and even enjoyable. In particular, I will present a series of projects that demonstrate how a marrying of knowledge about people and computational methods can enable effective user interactions with social, assistive, and telepresence robots and the development of novel tools and methods that support complex design tasks across the key stages of analysis, synthesis, and evaluation in the design process. I will additionally present ongoing work that applies these guidelines to the development of real-world applications of robotic technology.


Control Seminar / Robotics Seminar – Unobtrusive Monitoring for Individuals Using Inertial Systems

Speaker: Lauro Ojeda

Friday, October 10, 2014
3:30 – 4:30 p.m.
1500 EECS

The use of inertial measurement units for tracking and monitoring applications is gaining in importance as the size, power consumption and cost of these sensors have all been decreased by their widespread use in smartphones and other consumer electronics. As such, inertial sensors are becoming the preferred sensing modality for a growing number of applications, including human and animal movement. In this presentation, I will first present an overview of how data from inertial sensors, in conjunction with the requisite algorithm development, enabled personal dead reckoning for 3D localization. Next, I will describe current work that uses these sensors and algorithms to help understand “how the person walks” instead of simply “where the person is”. Further, analysis of human movement is typically confined to specialized labs instrumented with expensive motion capture systems. To address this, we have demonstrated that inertial systems can be used not only to obtain comparable measurements at a much lower cost, but also to effectively monitor subjects in daily life conditions outside of the lab. We have collected week-long, real-world sensor data on subjects, and for the first time, we have been able to capture and reproduce losses of balance in older adults who self-report balance difficulty. Finally, work extending inertial systems to free-swimming marine mammals will be presented. Our preliminary results provide accurate estimates of mechanical work on swimming animals for the first time. The work presented here demonstrates the advantages of inertial systems for monitoring and quantifying health and well-being of both people and animals.
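The personal dead reckoning mentioned above hinges on controlling integration drift. A minimal 1D sketch (hypothetical, not the speaker's system; the step profile and bias are made up) shows the common zero-velocity-update idea for foot-mounted sensors:

```python
# Pedestrian dead reckoning sketch: double-integrating acceleration
# accumulates drift from sensor bias, so zero-velocity updates (ZUPTs)
# reset the velocity whenever the foot is detected at rest.
def dead_reckon(accels, stationary, dt=0.01):
    """accels: forward acceleration samples; stationary: at-rest flags."""
    v, x = 0.0, 0.0
    for a, at_rest in zip(accels, stationary):
        v += a * dt
        if at_rest:
            v = 0.0   # ZUPT: velocity must be zero during the stance phase
        x += v * dt
    return x

# One 'step': accelerate, decelerate, then a stance phase during which a
# small accelerometer bias (0.02) would otherwise integrate into drift.
accels = [1.0] * 50 + [-1.0] * 50 + [0.02] * 100
stationary = [False] * 100 + [True] * 100
pos = dead_reckon(accels, stationary)
pos_no_zupt = dead_reckon(accels, [False] * 200)
```

With the ZUPT the estimate stops moving during stance; without it, the bias keeps inflating velocity and the position estimate drifts, which is why rest detection is central to wearable inertial localization.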


Making Atlas Move: Dynamic Planning and Control for a Hydraulic Humanoid Robot

Speaker: Dr. Scott Kuindersma

Friday, September 26, 2014
12:00 – 1:00 p.m.
Room: 133 Chrysler

What does it take to make a 155kg, 1.8m tall humanoid perform useful work?  In this talk, I will describe our approach to achieving reliable dynamic balancing, walking, and manipulation with Atlas. Starting with an efficient optimization-based controller for whole-body locomotion, I will share our process of moving from simulation to the real robot, including system identification, state estimation, and joint-level control. I will also describe our approach to collision-free manipulation planning and highlight our current efforts toward achieving highly dynamic motions with Atlas.

Minimally Invasive Robotic Catheters: Addressing Challenges through Modeling, Control, and Design

Speaker: Dr. Michael Zinn

Tuesday, September 23, 2014
4:00 – 5:00 p.m.
Room: 1200 EECS

In recent years, minimally-invasive surgical systems based on flexible robotic manipulators have met with success. One major advantage of the flexible manipulator approach is its superior safety characteristics as compared to rigid manipulators – a critical characteristic in sensitive applications including cardio-vascular and neurosurgical procedures. However, their soft compliant structure, in combination with internal friction, results in poor position and force regulation, which has limited their use to simpler surgical procedures. To understand the underlying reasons for their performance limitations and potentially overcome them, we have undertaken a coordinated effort to develop improved modeling, controls, and device manipulation approaches. The modeling investigation has focused on developing improved models by which this behavior is explained and predicted. In this work, we explicitly incorporate internal device friction which, in combination with the flexible device structure and drive train, predicts behavior including motion hysteresis, whipping, and control-tendon aligned lobbing – behaviors which are not captured using standard linear-elastic descriptions. However, the underlying history-dependent behavior limits the ability to apply these modeling results towards improved device performance. As such, we have investigated the use of closed-loop control to mitigate the effect of both nonlinear disturbances, such as internal friction, and dynamic flexible body motions. In particular, the use of a hybrid tracking controller, where motions are decomposed and controlled in both modal and flexible-segment joint-space coordinates, has shown significant improvement in both the steady-state and dynamic response. While these improvements are notable, the interaction of the controller and uncontrolled flexible body modes limits the extent of the possible performance improvements.
Finally, in an attempt to address the limitations of flexible manipulators directly, we discuss a new approach to continuum robotic manipulation, referred to as interleaved continuum-rigid manipulation, which combines flexible, actively actuated continuum segments with small rigid-link actuators. In this approach the small rigid-link joints are interleaved between successive continuum segments and provide a redundant motion and error correction capability.