A New Framework to Guide the Processing of RGBD Video

Dr. Jason Corso and Dr. Brent Griffin are extending prior work in bottom-up video segmentation to include depth information from RGBD video, which allows them to better train for specific tasks and to adaptively update representations of objects in complex environments. For robotics applications, they are incorporating this into a framework that guides the processing of RGBD video using a kinematic description of a robot’s actions, increasing the quality of observations while reducing overall computational cost. Using kinematically guided RGBD video, the framework can provide real-time feedback to a robot to identify task failure, detect external objects or agents moving into the workspace, and develop a better understanding of objects while interacting with them.
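To make the idea of kinematic guidance concrete, here is a minimal, hypothetical sketch (not the authors' actual system) of how a known end-effector position from a robot's kinematics could focus RGBD processing on a small region of interest, reducing the pixels that downstream vision must handle. The function names, camera intrinsics, and window size are illustrative assumptions.

```python
import numpy as np

def project_point(p_cam, fx, fy, cx, cy):
    """Project a 3D point in camera coordinates to pixel coordinates
    using an assumed pinhole camera model."""
    x, y, z = p_cam
    return int(fx * x / z + cx), int(fy * y / z + cy)

def kinematic_roi(frame, p_cam, intrinsics, half_size=64):
    """Crop an RGBD frame to a window around the projected end-effector
    position (p_cam), clamped to the image bounds."""
    h, w = frame.shape[:2]
    u, v = project_point(p_cam, *intrinsics)
    u0, v0 = max(0, u - half_size), max(0, v - half_size)
    u1, v1 = min(w, u + half_size), min(h, v + half_size)
    return frame[v0:v1, u0:u1]

# Example: a 480x640 RGBD frame (4 channels: RGB + depth), with
# illustrative intrinsics and an end-effector 1 m in front of the camera.
frame = np.zeros((480, 640, 4), dtype=np.float32)
roi = kinematic_roi(frame, p_cam=(0.1, 0.0, 1.0),
                    intrinsics=(525.0, 525.0, 319.5, 239.5))
```

Processing only the cropped window instead of the full frame is one plausible way the framework's computational savings could be realized; the published work may use a different mechanism.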

U-M CSE Graduate Students present papers at ICAPS 2017

CSE graduate students Qi Zhang and Shun Zhang will present exciting research papers at ICAPS 2017, the 27th International Conference on Automated Planning and Scheduling, taking place this June at Carnegie Mellon University in Pittsburgh, PA.

Here are the papers:

Minimizing Maximum Regret in Commitment Constrained Sequential Decision Making

Approximately-Optimal Queries for Planning in Reward-Uncertain Markov Decision Processes


Robots beware: we will detect your anomalies!


Professor Necmiye Ozay has been awarded the NASA Early Career Faculty award, which enables Professor Ozay and her team to develop “Run-time anomaly detection and mitigation in information-rich cyber-physical systems.” The crowning application will be an exploration problem involving humans and robots.
