Rubble-roving robots use hands and feet to navigate treacherous terrain

August 13, 2021

Humans are adept at using their hands to keep their balance, whether by grabbing a railing while climbing stairs, walking with help from a cane, or gripping a strap on the subway. Now, University of Michigan researchers have enabled humanoid robots to use their hands in a similar way, so the robots can better travel across rough terrain, such as disaster areas or construction sites.

“In a collapsed building or on very rough terrain, a robot won’t always be able to balance itself and move forward with just its feet,” said Dmitry Berenson, professor of electrical and computer engineering and core faculty in the Robotics Institute. 

“You need new algorithms to figure out where to put both feet and hands. You need to coordinate all these limbs together to maintain stability, and what that boils down to is a very difficult problem.”
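To make the quote concrete: a standard quasi-static stability test asks whether the robot's center of mass, projected onto the ground plane, falls inside the support polygon spanned by all of its contacts. The Python sketch below (with illustrative names and point contacts; the team's actual planner is far more sophisticated) shows how an added hand contact can enlarge that polygon enough to keep an otherwise-unstable pose balanced.

```python
import numpy as np
from scipy.spatial import Delaunay

def com_is_supported(contacts_xy, com_xy):
    """Quasi-static check: does the center of mass project inside the
    support polygon (convex hull of the ground-plane contact points)?
    Assumes point contacts; fewer than three non-collinear contacts give
    a degenerate polygon, conservatively treated as unsupported."""
    contacts = np.asarray(contacts_xy, dtype=float)
    if len(contacts) < 3:
        return False
    hull = Delaunay(contacts)                       # triangulate the polygon
    return hull.find_simplex(np.asarray(com_xy)) >= 0

# Two point feet alone cannot support a forward-leaning center of mass,
# but bracing a hand on rubble ahead widens the support polygon.
feet = [(0.0, 0.1), (0.0, -0.1)]
hand = [(0.6, 0.0)]
com = (0.2, 0.0)
print(com_is_supported(feet, com))          # False: COM is ahead of the feet
print(com_is_supported(feet + hand, com))   # True: hand contact adds support
```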

Continue reading ⇒

Helping robots learn what they can and can’t do in new situations

May 19, 2021

The models that robots use to do tasks work well in the structured environment of the laboratory. Outside the lab, however, even the most sophisticated models may prove inadequate in new situations or in difficult-to-model tasks, such as working with soft materials like rope and cloth.

To overcome this problem, University of Michigan researchers have created a way for robots to predict when they can't trust their models and to recover when a model proves unreliable.

“We’re trying to teach the robot to make do with what it has,” said Peter Mitrano, a robotics PhD student.
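One way to operationalize that idea is to learn, from past experience, where the dynamics model tends to be accurate. The sketch below (synthetic data and illustrative names, not the paper's exact method) trains a classifier that predicts whether the model's prediction error will stay within tolerance for a given transition, so a planner can reject transitions the classifier distrusts and fall back to a recovery behavior.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in data (illustrative only): each row is a flattened
# [state, action, model-predicted next state] transition, labeled 1 when
# the model's real-world prediction error stayed below tolerance.
rng = np.random.default_rng(0)
transitions = rng.normal(size=(500, 12))
errors = np.abs(transitions[:, 0]) + rng.normal(scale=0.1, size=500)
reliable = (errors < 1.0).astype(int)

classifier = LogisticRegression().fit(transitions, reliable)

def trust(transition, threshold=0.8):
    """Accept a planned transition only where the model has proven reliable."""
    prob_reliable = classifier.predict_proba(transition.reshape(1, -1))[0, 1]
    return prob_reliable > threshold

planned = rng.normal(size=12)
if not trust(planned):
    print("low confidence in the model here; switch to a recovery strategy")
```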

Continue reading ⇒

Using computer vision to track social distancing

April 15, 2020

With advanced computer vision models and live public street cam video, a University of Michigan startup is tracking social distancing behaviors in real time at some of the most visited places in the world.

Voxel51’s new tool shows—quite literally—an uptick in public gathering in Dublin on St. Patrick’s Day, for example, and at New Jersey’s Seaside Heights boardwalk during a recent weekend of unusually good weather.
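The article doesn't describe Voxel51's internals, but a core statistic behind any such tracker is how many detected pedestrians stand closer together than a distancing threshold. Below is a minimal sketch, assuming detections have already been projected from the street-cam image to ground-plane coordinates in meters (function and parameter names are illustrative, not Voxel51's API):

```python
import numpy as np
from itertools import combinations

def close_pairs(centroids_m, min_distance_m=2.0):
    """Count pedestrian pairs closer than the distancing threshold.

    centroids_m: ground-plane (x, y) positions in meters, e.g. obtained by
    projecting detector bounding boxes through a calibrated camera homography.
    """
    pairs = 0
    for a, b in combinations(np.asarray(centroids_m, dtype=float), 2):
        if np.linalg.norm(a - b) < min_distance_m:
            pairs += 1
    return pairs

# One frame's detections (hypothetical): three people, two of them close.
print(close_pairs([(0.0, 0.0), (1.2, 0.5), (8.0, 3.0)]))  # -> 1
```

Aggregating a statistic like this over frames and days is what would surface the kind of uptick the article describes.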

Continue reading ⇒

A quicker eye for robotics to help in our cluttered, human environments

May 23, 2019

Chad Jenkins, seen here with a Fetch robot, leads the Laboratory for Progress, which aims to discover methods for computational reasoning and perception that will enable robots to effectively assist people in common human environments. Karthik Desingh, lead author on the paper, is a member of the lab. Photo: Joseph Xu/Michigan Engineering

In a step toward home-helper robots that can quickly navigate unpredictable and disordered spaces, University of Michigan researchers have developed an algorithm that lets machines perceive their environments orders of magnitude faster than similar previous approaches.

Continue reading ⇒

A new framework to guide the processing of RGBD video

August 30, 2017

Dr. Jason Corso and Dr. Brent Griffin are extending prior work in bottom-up video segmentation to include depth information from RGBD video, which allows us to better train for specific tasks and adaptively update representations of objects in complex environments. For robotics applications, we are incorporating this into a framework that guides the processing of RGBD video using a kinematic description of a robot’s actions, thereby increasing the quality of observations while reducing overall computational cost. Using kinematically guided RGBD video, we can provide feedback to a robot in real time to identify task failure, detect external objects or agents moving into the workspace, and develop a better understanding of objects while interacting with them.
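A minimal sketch of the guidance idea, assuming we can render the robot's own depth at its commanded joint state via forward kinematics (all names below are illustrative, not the authors' API): pixels explained by the robot model are masked out, so changes in the remaining pixels can be cheaply flagged as external objects or agents.

```python
import numpy as np

def split_robot_pixels(depth_m, robot_render_m, tol_m=0.05):
    """Mask pixels explained by the robot's own body.

    depth_m:        observed depth frame (H x W), in meters
    robot_render_m: depth render of the robot mesh posed by forward
                    kinematics at the commanded joint state; np.inf where
                    the robot does not project into the image
    """
    return np.abs(depth_m - robot_render_m) < tol_m

def flag_intrusions(depth_m, background_m, robot_pixels, tol_m=0.05):
    """Non-robot pixels that deviate from the static background are
    candidate external objects or agents entering the workspace."""
    changed = np.abs(depth_m - background_m) > tol_m
    return changed & ~robot_pixels
```

Restricting per-frame processing to the flagged pixels is one way such a framework could cut computational cost, and a persistent mismatch between the observed depth and the kinematic render along the robot itself would be a natural signal of task failure.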