Using computer vision to track social distancing

April 15, 2020

With advanced computer vision models and live public street cam video, a University of Michigan startup is tracking social distancing behaviors in real time at some of the most visited places in the world.

Voxel51’s new tool shows—quite literally—an uptick in public gathering in Dublin on St. Patrick’s Day, for example, and at New Jersey’s Seaside Heights boardwalk during a recent weekend of unusually good weather.

Continue reading ⇒

A quicker eye for robotics to help in our cluttered, human environments

May 23, 2019
Chad Jenkins, seen here with a Fetch robot, leads the Laboratory for Progress, which aims to discover methods for computational reasoning and perception that will enable robots to effectively assist people in common human environments. Karthik Desingh, lead author on the paper, is a member of the lab. Photo: Joseph Xu/Michigan Engineering

In a step toward home-helper robots that can quickly navigate unpredictable and disordered spaces, University of Michigan researchers have developed an algorithm that lets machines perceive their environments orders of magnitude faster than similar previous approaches.

Continue reading ⇒

A New Framework to Guide the Processing of RGBD Video

August 30, 2017

Dr. Jason Corso and Dr. Brent Griffin are extending prior work in bottom-up video segmentation to include depth information from RGBD video, which allows us to better train for specific tasks and adaptively update representations of objects in complex environments. For robotics applications, we are incorporating this into a framework that guides the processing of RGBD video using a kinematic description of a robot’s actions, thereby increasing the quality of observations while reducing the overall computational cost. Using kinematically guided RGBD video, we are able to provide feedback to a robot in real time to identify task failure, detect external objects or agents moving into a workspace, and develop a better understanding of objects while interacting with them.
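
The sketch below is a highly simplified illustration of the kind of kinematic guidance described above, not the authors’ implementation: a forward-kinematics prediction of where the end effector should appear in the depth image is used to crop a small region of interest, and a depth mismatch inside that region is flagged as a possible task failure or an external object entering the workspace. The camera intrinsics, function names, and thresholds are all hypothetical.

```python
# Minimal sketch (assumptions, not the published framework): use a kinematic
# prediction of the end-effector position to focus RGBD processing on a small
# region and flag unexpected depth readings there.
import numpy as np

# Hypothetical pinhole intrinsics for the depth camera (placeholder values).
FX, FY, CX, CY = 525.0, 525.0, 319.5, 239.5

def project_to_pixel(point_cam):
    """Project a 3D point (meters, camera frame) to pixel coordinates."""
    x, y, z = point_cam
    return int(FX * x / z + CX), int(FY * y / z + CY)

def kinematically_guided_roi(depth, ee_point_cam, half_size=40):
    """Crop the depth image around the predicted end-effector location.

    depth        : (H, W) array of depth in meters
    ee_point_cam : end-effector position from forward kinematics,
                   already transformed into the camera frame
    """
    u, v = project_to_pixel(ee_point_cam)
    h, w = depth.shape
    u0, u1 = max(u - half_size, 0), min(u + half_size, w)
    v0, v1 = max(v - half_size, 0), min(v + half_size, h)
    return depth[v0:v1, u0:u1]

def check_for_anomalies(roi, expected_depth, tolerance=0.05):
    """Flag the region if observed depth departs from the kinematic prediction.

    A large anomalous fraction could indicate task failure or an external
    object moving into the workspace (the 0.2 threshold is a placeholder).
    """
    valid = roi > 0  # ignore missing depth readings
    anomalous = np.abs(roi - expected_depth) > tolerance
    frac = np.count_nonzero(anomalous & valid) / max(np.count_nonzero(valid), 1)
    return frac > 0.2, frac

if __name__ == "__main__":
    depth = np.full((480, 640), 1.0)   # synthetic flat scene 1 m away
    ee = np.array([0.0, 0.0, 0.8])     # end effector predicted at 0.8 m
    roi = kinematically_guided_roi(depth, ee)
    alarm, frac = check_for_anomalies(roi, expected_depth=0.8)
    print(f"anomalous fraction: {frac:.2f}, alarm: {alarm}")
```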

Chad Jenkins named Editor-in-Chief of the ACM Transactions on Human-Robot Interaction (THRI)

July 12, 2017

“We are thrilled to become part of the ACM family of journals,” explained THRI Co-Editor-in-Chief Odest Chadwicke Jenkins of the University of Michigan. “ACM’s reputation as a publisher of computing research is unparalleled. At the same time, the broad representation of computing disciplines in the ACM, the organization’s global reach, and platforms such as the Digital Library are a perfect complement to our own goals for THRI.”

Jenkins, along with Co-Editor-in-Chief Selma Šabanović of Indiana University, has set three primary goals for the journal in the coming years: 1) sustaining the intellectual growth of HRI as a field of study (both quantitatively and qualitatively), 2) enabling timely and productive feedback from readers, and 3) cultivating new and leading-edge ideas in both robotics and the human-centered sciences.

The inaugural issue of the rebranded ACM Transactions on Human-Robot Interaction (THRI) is planned for March 2018. Those seeking to submit to the journal, or who have questions for the editors, are encouraged to visit the current HRI Journal website.

Read the full article.

U-M Professor Corso Awarded NSF Robotics Grant

September 1, 2015

Professor Jason Corso was awarded a new grant from the National Robotics Initiative at the National Science Foundation. The project, a collaboration with Professor Jeffrey Siskind at Purdue University, is entitled “RobotSLANG: Simultaneous Localization, Mapping, and Language Acquisition.” This exciting new project seeks to address the challenge of natural communication between robots and humans for tasks involving spatial navigation. Language is routine for most humans, serving myriad purposes ranging from everyday conversation to cataloging international law; most relevant to this project are the rich linguistic elements describing the spatial environment, the objects and places within it, and the navigable paths through it. Yet language continues to evade robot systems: mobile robot platforms are adept at mapping and navigation, but they rely on metric representations of their environments. Humans and robots do not share a common language.

The project seeks to overcome this significant limitation by conjoining the well-understood problem of mapping, or more generally simultaneous localization and mapping (SLAM), with that of language acquisition, enabling a new symbiosis between mobile robots and humans in navigation tasks in novel environments.
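
As a rough, hypothetical illustration of what conjoining SLAM with language acquisition might look like at the data-structure level (this is not the RobotSLANG system), consider a metric map whose landmarks are paired with a word-to-landmark lexicon learned from simple co-occurrence counts, so that a spoken place name can be grounded to map coordinates:

```python
# Toy sketch under stated assumptions: pair a metric SLAM map with a learned
# word-to-landmark lexicon. Landmark ids, poses, and the co-occurrence rule
# are hypothetical placeholders for illustration only.
from collections import defaultdict

class LanguageGroundedMap:
    def __init__(self):
        self.landmarks = {}  # landmark id -> (x, y) pose in the metric map
        self.cooccur = defaultdict(lambda: defaultdict(int))  # word -> landmark id -> count

    def add_landmark(self, landmark_id, pose):
        """Insert a landmark estimated by the SLAM back end."""
        self.landmarks[landmark_id] = pose

    def observe_utterance(self, words, visible_landmarks):
        """Count co-occurrences between spoken words and currently visible landmarks."""
        for w in words:
            for lid in visible_landmarks:
                self.cooccur[w][lid] += 1

    def ground(self, word):
        """Return the landmark most often co-observed with the word, if any."""
        counts = self.cooccur.get(word)
        if not counts:
            return None
        lid = max(counts, key=counts.get)
        return lid, self.landmarks[lid]

if __name__ == "__main__":
    m = LanguageGroundedMap()
    m.add_landmark("L1", (2.0, 3.5))
    m.add_landmark("L2", (7.0, 1.0))
    m.observe_utterance(["the", "kitchen"], ["L1"])
    m.observe_utterance(["kitchen", "table"], ["L1", "L2"])
    print(m.ground("kitchen"))  # -> ('L1', (2.0, 3.5))
```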

http://nsf.gov/awardsearch/showAward?AWD_ID=1522904