Even robots make mistakes: How humans walk with imperfect exoskeletons

March 12, 2024

Researchers assessed the impact of exoskeleton accuracy on walking behavior, working towards co-adaptive exoskeleton algorithms

A person wearing sturdy boots and jeans walks on a treadmill wearing a metal brace from the ankle to just below the knee.
When a lower limb exoskeleton does not operate properly, some users recover their step quickly while others overcompensate with their hip or ankle. Credit: Brenda Ahearn, Michigan Engineering.

When lower limb exoskeletons—mechanical structures worn on the leg—do not operate properly, some people adjust quickly while others compensate with their ankle or hip, expending more energy than necessary, according to a new study by University of Michigan researchers. 

The results, published in IEEE Robotics and Automation Letters, inform the development of future co-adaptive exoskeleton algorithms—meaning both the person and exoskeleton learn from and adjust to one another.


How we can better link mind and machine

July 28, 2022
A user's legs walking with a powered ankle exoskeleton on a treadmill
A user demonstrates walking with a lower-body exoskeleton. In a new study, powered exoskeleton users had trouble incorporating instructional haptic feedback cues, informing how future human-machine interaction must be designed. Photo: Brenda Ahearn/University of Michigan, College of Engineering, Communications and Marketing

A team led by University of Michigan researchers recently tested how exoskeleton users responded to the task of matching haptic feedback cues to the timing of each footstep. The team found that the haptic cues added mental workload, reducing how effectively participants used the exoskeleton and highlighting hurdles for future human-machine interaction design.

“When we introduce haptic feedback while walking with an exoskeleton, we usually intend for the user to understand and maintain coordination with the exoskeleton,” said Man I (Maggie) Wu, a robotics PhD student.

“We discovered that the exoskeleton actually introduces a competing mental load. We really need to understand how this affects the user while they attempt to complete tasks.”


Robots who goof: Can we trust them again?

August 10, 2021
A mechanical arm in a virtual work setting with boxes and working statistics.
The human-like android robot used in the virtual box-handling experimental task.

When robots make mistakes—and they do from time to time—reestablishing trust with human co-workers depends on how the machines own up to the errors and how human-like they appear, according to University of Michigan research.

In a study that examined multiple trust repair strategies—apologies, denials, explanations or promises—the researchers found that some approaches directed at human co-workers work better than others, and that their effectiveness often depends on how the robot looks.

“Robots are definitely a technology but their interactions with humans are social and we must account for these social interactions if we hope to have humans comfortably trust and rely on their robot co-workers,” said Lionel Robert, associate professor at the U-M School of Information and core faculty of the Robotics Institute.


An ultra-precise mind-controlled prosthetic

March 9, 2020

In a major advance in mind-controlled prosthetics for amputees, University of Michigan researchers have tapped faint, latent signals from arm nerves and amplified them to enable real-time, intuitive, finger-level control of a robotic hand.

To achieve this, the researchers developed a way to tame temperamental nerve endings, separate thick nerve bundles into smaller fibers that enable more precise control, and amplify the signals coming through those nerves. The approach involves tiny muscle grafts and machine learning algorithms borrowed from the brain-machine interface field. 


How can pedestrians trust autonomous vehicles?

January 23, 2020

When at a crosswalk, humans can easily read a driver’s slightest nod. These gestures give us the confidence to step out into a road full of two-ton machines. With an automated vehicle, however, that human-to-human communication is unreliable: the driver may not be in control or even be paying attention, leaving the pedestrian unsure whether they’ll be safe while crossing.

To inform future solutions, a team led by University of Michigan researchers observed how people act as pedestrians in a virtual reality city full of autonomous vehicles.

“Pedestrians are the most vulnerable road users,” said Suresh Kumaar Jayaraman, a PhD student in mechanical engineering. “If we want wide-scale adoption of autonomous vehicles, we need those who are inside and outside of the vehicles to be able to trust and be comfortable with a vehicle’s actions.” 
