Robots who goof: Can we trust them again?

August 10, 2021
A mechanical arm in a virtual work setting with boxes and working statistics.
The human-like android robot used in the virtual experimental box-handling task.

When robots make mistakes—and they do from time to time—reestablishing trust with human co-workers depends on how the machines own up to the errors and how human-like they appear, according to University of Michigan research.

In a study that examined multiple trust repair strategies—apologies, denials, explanations, and promises—the researchers found that certain approaches directed at human co-workers work better than others, and that their effectiveness often depends on how the robots look.

“Robots are definitely a technology but their interactions with humans are social and we must account for these social interactions if we hope to have humans comfortably trust and rely on their robot co-workers,” said Lionel Robert, associate professor at the U-M School of Information and core faculty of the Robotics Institute.

Continue reading ⇒

An ultra-precise mind-controlled prosthetic

March 9, 2020

In a major advance in mind-controlled prosthetics for amputees, University of Michigan researchers have tapped faint, latent signals from arm nerves and amplified them to enable real-time, intuitive, finger-level control of a robotic hand.

To achieve this, the researchers developed a way to tame temperamental nerve endings, separate thick nerve bundles into smaller fibers that enable more precise control, and amplify the signals coming through those nerves. The approach involves tiny muscle grafts and machine learning algorithms borrowed from the brain-machine interface field. 
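The decoding step can be pictured as a regression problem: short windows of amplified nerve-signal features are mapped to intended finger positions. The sketch below is purely illustrative, using synthetic data and a simple ridge-regression decoder; it is not the researchers' actual pipeline, and every variable name here is hypothetical.

```python
# Illustrative sketch only: a minimal regression-style decoder of the kind used in
# brain-machine interface work, mapping nerve-signal features to finger positions.
# The data are synthetic; the U-M team's actual signal processing is not reproduced.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "recorded" features: rectified amplitude of several nerve channels over
# short time windows (stand-ins for the amplified signals from the muscle grafts).
n_windows, n_channels, n_fingers = 2000, 8, 5
true_mapping = rng.normal(size=(n_channels, n_fingers))
features = np.abs(rng.normal(size=(n_windows, n_channels)))
finger_pos = features @ true_mapping + 0.05 * rng.normal(size=(n_windows, n_fingers))

# Ridge regression decoder: closed-form weights W minimizing
# ||features @ W - finger_pos||^2 + lam * ||W||^2.
lam = 1.0
A = features.T @ features + lam * np.eye(n_channels)
W = np.linalg.solve(A, features.T @ finger_pos)

# "Real-time" use: decode finger positions from a new window of features.
new_window = np.abs(rng.normal(size=(1, n_channels)))
predicted_fingers = new_window @ W
print(predicted_fingers.round(3))
```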

Continue reading ⇒

How can pedestrians trust autonomous vehicles?

January 23, 2020

When at a crosswalk, humans can easily read a driver’s slightest nod. These gestures give us the confidence to step out into a road full of two-ton machines. With an automated vehicle, however, that human-to-human communication is unreliable: the driver may not be in control or even paying attention, leaving the pedestrian unsure whether they’ll be safe while crossing.

To inform future solutions, a team led by University of Michigan researchers observed how we act as pedestrians in a virtual reality city full of autonomous vehicles.

“Pedestrians are the most vulnerable road users,” said Suresh Kumaar Jayaraman, a PhD student in mechanical engineering. “If we want wide-scale adoption of autonomous vehicles, we need those who are inside and outside of the vehicles to be able to trust and be comfortable with a vehicle’s actions.” 

Continue reading ⇒

What humans want, in an automated vehicle

November 8, 2019
Professors Lionel Robert and X. Jessie Yang stand in front of an autonomous vehicle at Mcity, U-M’s testing ground for automated vehicles. Photo: Jeffrey M. Smith/School of Information.

Agreeable, conscientious, and stable. These are three human personality traits that, it turns out, we want to see in our driverless cars regardless of whether we possess them ourselves, according to a new study from the University of Michigan.

The researchers set out to examine how a person’s perception of safety in an autonomous vehicle was influenced by the degree to which the vehicle and the rider seemed to share certain “personality” traits.

Continue reading ⇒

Humans and robots: the emotional connection

July 22, 2019
YiBin Jiang, Medical School Research Technician, plays soccer with a robot. Photo: Joseph Xu.

Soldiers develop attachments to the robots that help them defuse bombs in the field. Despite numerous warnings about privacy, millions of us trust smart speakers like Alexa to listen in on our daily lives. Some of us name our cars and even shed tears when we trade them in for shiny new vehicles.

Research has shown that individually we develop emotional, trusting relationships with robotic technology, but until now little has been known about whether groups that work with robots develop such attachments and, if so, whether those emotions affect team performance.

Continue reading ⇒