Robotic platforms now deliver vast amounts of sensor data from large unstructured environments. Deep learning has driven success in many computer vision tasks through the use of standardized datasets, but processing and interpreting field data poses unique challenges in bridging the gap between prerecorded datasets and deployed systems.
We focus on solutions to several novel problems that arise when deploying such techniques on fielded robotic systems. Two themes run through Michigan's research in this area: how can we integrate learned models with the traditional probabilistic tools well known in robotics, and can we avoid the labor-intensive human labeling that supervised learning requires? These questions give rise to several lines of research centered on dimensionality reduction, adversarial learning, and simulation. Researchers apply these techniques across many areas of robotics, including self-driving cars, self-flying planes, acoustic localization, and optical underwater reconstruction.
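As a minimal illustration of the dimensionality-reduction theme, the sketch below applies PCA (via SVD) to compress high-dimensional readings into a low-dimensional representation. The data here are random stand-ins for sensor measurements, and PCA is just one common choice of technique; nothing in this example is drawn from the specific research described above.

```python
import numpy as np

# Illustrative only: random "sensor" data standing in for real field measurements.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))  # 200 readings, 10 raw channels

# Center the data, then compute principal directions via SVD.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)

# Project each reading onto the top-k principal components.
k = 3
Z = Xc @ Vt[:k].T  # reduced k-dimensional representation

print(Z.shape)  # (200, 3)
```

In practice such a low-dimensional representation can then be fed into the probabilistic estimators (filters, factor graphs) that robotics already relies on, which is one way of connecting learned front ends to traditional tools.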