Focus Areas

Artificial Intelligence | Autonomous & Connected Vehicles (land, air, sea, space) | Deep Learning for Robotics | Human-Robot Interaction | Legged Locomotion and Exoskeletons | Manipulation | Manufacturing and Construction | Motion Planning | Rehabilitation Robotics and Brain-Machine Interfaces | Robot Perception & Computer Vision | Robot Teams and Swarms | Simultaneous Localization and Mapping (SLAM) | Validation and Verification of Robotic Systems and Safe Autonomy

 

Artificial Intelligence

The Artificial Intelligence (AI) program at the University of Michigan comprises a multidisciplinary group of researchers conducting theoretical, experimental, and applied investigations of intelligent systems. Current projects include research in rational decision making, distributed systems of multiple agents, machine learning, reinforcement learning, cognitive modeling, game theory, natural language processing, machine perception, healthcare computing, and robotics.

Research in Artificial Intelligence tends to be highly interdisciplinary, building on ideas from computer science, linguistics, psychology, economics, biology, controls, statistics, and philosophy. In pursuing this approach, faculty and students work closely with colleagues throughout the University. This collaborative environment, coupled with our diverse perspectives, leads to a valuable interchange of ideas within and across research groups.

Robotics Faculty doing Artificial Intelligence research include: Ben Kuipers, Chad Jenkins, Ed Olson, John Laird, Satinder Singh



Autonomous & Connected Vehicles (land, sea, air, space)

Mcity, a test site for autonomous vehicles that emulates the urban driving experience, is just half a mile from the robotics building site.

Driverless cars steal the headlines, but autonomous vehicles in general can do much more to improve the safety of humans and help protect the environment. Autonomous aerial vehicles can observe farm crops, fish populations, wildfires and droughts. Pollutants, temperature and other indicators of the ocean’s health can be monitored at a variety of depths through the use of autonomous underwater and surface drones or gliders.

Autonomous cars could reduce crashes, congestion and emissions. In anticipation of future demand, all of the major automobile companies now conduct and sponsor research and development of systems that either assist the driver or serve as chauffeur. With the Mcity autonomous vehicle test site on campus, U-M is a natural leader in the field. Michigan Robotics research addresses a variety of technical concerns to make autonomous operation safe and reliable. Other outstanding questions include how autonomous or semi-autonomous systems should interact with humans and how they may affect society. In answering these questions and developing solutions, core robotics faculty benefit from collaboration with the strong psychology, social science and public policy departments at U-M.

Robotics Faculty doing Autonomous & Connected Vehicles research include: Ella Atkins, Kira Barton, Jason Corso, Ed Durfee, Tulga Ersal, Ryan Eustice, Anouck Girard, Matt Johnson-Roberson, Vineet Kamat, Ed Olson, Necmiye Ozay, Dimitra Panagou, Huei Peng, Lionel Robert, Ram Vasudevan

Deep Learning for Robotics

Robotic platforms now deliver vast amounts of sensor data from large unstructured environments. Processing and interpreting these data poses unique challenges in bridging the gap between prerecorded datasets and the field. Deep learning has driven success in many computer vision tasks through the use of standardized datasets. We focus on solutions to several novel problems that arise when attempting to deploy such techniques on fielded robotic systems. Themes of Michigan's research in this area include: how can we integrate such learning techniques into the traditional probabilistic tools that are well known in robotics, and are there ways of avoiding the labor-intensive human labeling required for supervised learning? These questions give rise to several lines of research based on dimensionality reduction, adversarial learning, and simulation. Researchers apply these techniques in many robotics areas, including self-driving cars, acoustic localization, and optical underwater reconstruction.
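
As a flavor of the label-free dimensionality-reduction theme, the sketch below trains a toy autoencoder on stand-in sensor vectors. The network sizes and data are illustrative assumptions, not any specific U-M system.

    import torch
    from torch import nn

    # Toy autoencoder: the input itself is the training target, so no
    # human labels are needed. Sizes and data are placeholders.
    encoder = nn.Sequential(nn.Linear(64, 8), nn.ReLU())
    decoder = nn.Sequential(nn.Linear(8, 64))
    params = list(encoder.parameters()) + list(decoder.parameters())
    opt = torch.optim.Adam(params, lr=1e-3)

    x = torch.randn(256, 64)  # stand-in for a batch of raw sensor vectors
    for _ in range(100):
        opt.zero_grad()
        loss = nn.functional.mse_loss(decoder(encoder(x)), x)  # reconstruction error
        loss.backward()
        opt.step()
    # encoder(x) now yields 8-dimensional codes summarizing each 64-d reading.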

Robotics Faculty doing Deep Learning for Robotics research include: Jason Corso, Matt Johnson-Roberson, Honglak Lee



Human-Robot Interaction 

Autonomous robots are rarely purely autonomous. Robots will need a skill set that allows them to interact effectively with humans. This area of robotics benefits from collaboration with social scientists, who can elucidate how teams of humans operate and how a robot might understand these teams and its own role.

Human-robot interaction includes physical elements, such as tele-operation and safety, as well as intellectual and emotional engagement, such as ensuring that humans and robots can accurately predict one another's behaviors. It also explores how much autonomy should be given to a robot for a particular task, taking into account whether control could be passed to a human operator alert and aware enough to handle a potentially dangerous situation.

Robotics Faculty doing Human-Robot Interaction research include: Ella Atkins, Kira Barton, Dmitry Berenson, Cynthia Chestek, Jason Corso, Jia Deng, Brent Gillespie, Chad Jenkins, Ben Kuipers, Walter Lasecki, Gabor Orosz, Emily Mower Provost, Lionel Robert, Nadine Sarter, Xi Jessie Yang



Legged Locomotion and Exoskeletons

Michigan researchers are giving legged robots remarkable new capabilities. Human-sized bipedal robots are charging over unstructured terrain with unprecedented speed and agility, using advanced feedback control algorithms built on cutting-edge mathematics. Dog-sized quadrupedal robots are being designed to break through the energy-consumption barriers that have threatened to keep legged machines curiosities rather than ubiquitous tools. Another effort focuses on novel multi-pedal mechanical designs and associated algorithms that give legged machines a level of adaptability now seen only in the animal world.

This research directly tackles one of the greatest challenges for mobile robots: negotiating human environments full of furniture, fallen debris, catwalks, discrete footholds, and other obstacles inhospitable to wheeled vehicles. U-M Robotics faculty engaged in research on new mathematical algorithms to give legged robots the ability to move freely in challenging environments, and on the design of mechanisms to make legged robots more adept and efficient, include: Jessy Grizzle, C. David Remy, Shai Revzen, Elliott Rouse



Manipulation

Can you imagine a robot with the dexterity of a human? Even the dexterity of an “ordinary” human would be incredible, but what if it had the skills of a surgeon or a pianist? Michigan roboticists are laying the foundational blocks with the goal of first meeting and then exceeding the manual dexterity of an ordinary human.

Manipulation – the study of physical interaction between robots and objects – has long been one of the principal challenges of robotics. To be useful for tasks like caring for the elderly, cleaning up a home, stocking a warehouse, assembling electronics, or exploring space, robots must be able to understand actions that we as humans often take for granted, such as “picking-and-placing” objects, using tools, and operating devices. This interaction involves reasoning about the properties of objects, as well as the constraints of the task and uncertainty in the robot’s sensing and motion. The lack of structure in environments like homes means that robots cannot execute a fixed program but instead must constantly adapt to their changing environment, synthesizing new plans and strategies as needed. The presence of humans in the environment also requires the robot to understand human expectations and reason about what humans are likely to do so that it can operate safely and efficiently.

Robots with simple grippers could do amazing things if they had human-level manipulation skills.

U-M faculty are developing leading computational methods that allow robots to perform manipulation in human workspaces and building integrated systems that allow robots to perceive, grasp, and plan to manipulate household objects. Advanced research in manipulation includes artificial intelligence and machine learning methods to teach robots new manipulation tasks and skills, computer vision and scene estimation for taskable goal-directed manipulation, and robust motion planning for manipulation of soft materials like cloth, string, and tissue.

Robotics Faculty doing Manipulation research include: Dmitry Berenson, Chad Jenkins, Ben Kuipers and John Laird



Manufacturing and Construction

Michigan researchers in manufacturing and construction robotics are driving innovation to tackle important design challenges in sensing, actuation, and automation. U-M faculty are engaged in research on the use of advanced manufacturing tools and designs to fabricate morphable mechanisms and customizable sensors with unique materials and higher resolutions, and on hybrid modeling and agent-based reasoning and learning methodologies to enable more secure and efficient system-level manufacturing. U-M faculty are also creating collaborative autonomous robots for construction, assembly, inspection, and maintenance of civil infrastructure. Research specifically focuses on finding solutions to the robot localization, pose estimation, and scene understanding problems in unstructured and evolving environments.

Robotics Faculty doing Manufacturing and Construction research include: Kira Barton, Dmitry Berenson, Vineet Kamat, Wesley McGee, Chinedum Okwudire, C. David Remy, Kazuhiro Saitou, Dawn Tilbury, A. Galip Ulsoy



Motion Planning

Motion planning is a term used in robotics for the process of breaking down a desired movement task into discrete motions that satisfy movement constraints and possibly optimize some aspect of the movement.

For example, consider navigating a mobile robot inside a building to a distant waypoint. It should execute this task while avoiding walls and not falling down stairs. A motion planning algorithm would take a description of these tasks as input and produce the speed and turning commands sent to the robot's wheels. Motion planning algorithms might address robots with a larger number of joints (e.g., industrial manipulators), more complex tasks (e.g., manipulation of objects), different constraints (e.g., a car that can only drive forward), and uncertainty (e.g., imperfect models of the environment or robot).
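
To make the idea concrete, here is a minimal sketch of one classic planning approach, A* search on a 2-D occupancy grid. The grid, start, and goal are made-up inputs; a real planner would also handle the robot's kinematics and uncertainty.

    import heapq

    def astar(grid, start, goal):
        """Minimal A* on a 2-D occupancy grid (1 = obstacle, 0 = free)."""
        rows, cols = len(grid), len(grid[0])
        def h(c):  # Manhattan-distance heuristic to the goal
            return abs(c[0] - goal[0]) + abs(c[1] - goal[1])
        frontier = [(h(start), 0, start, [start])]  # (estimate, cost, cell, path)
        visited = set()
        while frontier:
            _, cost, cell, path = heapq.heappop(frontier)
            if cell == goal:
                return path
            if cell in visited:
                continue
            visited.add(cell)
            r, c = cell
            for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                        and grid[nxt[0]][nxt[1]] == 0):
                    heapq.heappush(frontier,
                                   (cost + 1 + h(nxt), cost + 1, nxt, path + [nxt]))
        return None  # goal unreachable

    grid = [[0, 0, 0],
            [1, 1, 0],
            [0, 0, 0]]
    print(astar(grid, (0, 0), (2, 0)))  # route around the blocked middle row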

Motion planning has several robotics applications, such as autonomy, automation, and robot design in CAD software, as well as applications in other fields, such as animating digital characters, video games, artificial intelligence, architectural design, robotic surgery, and the study of biological molecules.

Robotics Faculty doing Motion Planning research include: Ella Atkins, Dmitry Berenson, Dimitra Panagou



Rehabilitation Robotics and Brain-Machine Interfaces

The University of Michigan is a nationwide leader in the scientific effort to design, control, and evaluate robots and other devices that assist people with disabilities. Our research in the field of Rehabilitation Robotics aims at developing machines that directly and physically interact with humans. These robots deliver physical therapy to assist with recovery after musculoskeletal or neurological injury – or they replace functionality that was lost irreversibly. To this end, we study how the brain and body work, develop powered prostheses and exoskeletons, and investigate novel ways to interact with the human body.

Oxygen measurements inform the design of an exoskeleton.

There are a large number of exciting ongoing projects. U-M researchers work on brain-machine interfaces, so that users can control robots and other assistive devices with their minds; they design and control the next generation of active prostheses and exoskeletons to make them lighter, smarter, and more intuitive to control; they work on robotic exercise equipment that can challenge a patient in just the right way during therapy; and they investigate how the human brain learns and re-learns after neurological injury. Our robots help with all sorts of tasks, from walking to manipulation, and they come in all shapes and forms, from large wearable devices to tiny sensors and actuators.

Since Rehabilitation Robotics is a truly multi-disciplinary and collaborative effort, our team spans the entire College of Engineering, and we further collaborate with specialists from the School of Kinesiology and the Medical School. The complementary expertise of our faculty creates an unrivaled environment that accelerates the development, testing, and effective use of devices that improve mobility and function in individuals with physical disabilities and limitations.

Robotics Faculty doing Rehab Robotics and Brain-Machine Interfaces research include: Cynthia Chestek, Deanna Gates, Brent Gillespie, Jessy Grizzle, Chandramouli Krishnan, Ben Kuipers, C. David Remy, Elliott Rouse

Robot Perception & Computer Vision

Michigan researchers are endowing robots with exceptional capabilities for perceiving their environments and their place within them. Our systems integrate vision, sonar, and LIDAR with proprioceptive sensing such as encoders. Our mobile robotics programs have produced state-of-the-art technologies for simultaneously mapping environments and localizing robots, both indoors and outdoors. We are experts in air, land, and sea: our robots already inspect ship hulls automatically for damage, and they already handle wildly cluttered indoor settings so a wheelchair can navigate autonomously. Exciting developments in not only visual perception but also audio perception are paving the way for more interactive robots.

Perception research at Michigan directly targets foundational questions:

  • What is the relationship between sensed signals?
  • What are the semantics and the language that humans use to describe actions?
  • How can modeling and the use of both active and passive algorithms enrich a robot’s awareness of the world around it?
  • How can modeling and fusion over many complementary sensors converge into a single overall representation?
  • How can the progressive learning of robotic perceptive intelligence leverage modern machine learning and signal processing methodologies?

These questions, when answered, will lead to huge advances in robotics. Robotics Faculty engaged in Robot Perception & Computer Vision research include: Jason Corso, Jia Deng, Ryan Eustice, Chad Jenkins, Matt Johnson-Roberson, Ben Kuipers, Honglak Lee, Lauro Ojeda, Ed Olson

Robot Teams and Swarms

Michigan researchers are investigating the theory and engineering of robot teams and swarms, to enable applications as diverse as search-and-rescue, aerial surveillance and data gathering, space robotics, robotic assistants, and autonomous driving. Robots collaborating in teams can complete tasks more efficiently than a robot or person working alone. Faculty and students at Michigan focus on the theory and development of cutting-edge integrated systems for autonomous deployment in real-world environments. The research aims at highly capable and scalable teams of networked robots that can safely and securely interact with other robots and humans, despite environmental uncertainty and adversarial (faulty, compromised, or malicious) information within the network. Planning, navigation, and control of multiple interacting robots or vehicles demand expertise in a range of technical disciplines, including path and motion planning, network control and estimation, decision-making, queuing and logistics, distributed systems, and discrete event systems.
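
One classic building block of networked multi-robot control is a consensus protocol, sketched below for three hypothetical robots on a line. The network, gain, and positions are illustrative assumptions, not a specific U-M algorithm.

    import numpy as np

    # Toy consensus: each robot repeatedly nudges its state toward its
    # neighbors'. On a connected network, all states converge to a common
    # value (here, a rendezvous point on a line).
    positions = np.array([0.0, 4.0, 10.0])   # three robots on a line
    neighbors = {0: [1], 1: [0, 2], 2: [1]}  # a simple chain network
    gain = 0.3                               # step size (small enough for stability)

    for _ in range(50):
        updates = np.array([
            sum(positions[j] - positions[i] for j in neighbors[i])
            for i in range(len(positions))
        ])
        positions = positions + gain * updates

    print(positions)  # all three approach the average rendezvous point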

Robotics Faculty doing Robot Teams and Swarms research include: Anouck Girard, Necmiye Ozay, and Dimitra Panagou



Simultaneous Localization and Mapping (SLAM)

In robotic mapping and navigation, simultaneous localization and mapping (SLAM) is the computational problem of constructing or updating a map of an unknown environment while simultaneously keeping track of an agent's location within it. While this initially appears to be a chicken-and-egg problem, there are several algorithms known for solving it, at least approximately, in tractable time for certain environments. Popular approximate solution methods include the particle filter, extended Kalman filter, and GraphSLAM.
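
As a minimal illustration of the filtering approach, the sketch below runs one-dimensional SLAM with a single landmark. Because this toy model is linear, the extended Kalman filter reduces to a plain Kalman filter; all numbers are made up.

    import numpy as np

    # State = [robot position, landmark position]; jointly estimated.
    x = np.array([0.0, 0.0])      # initial guess
    P = np.diag([0.01, 100.0])    # landmark position starts almost unknown

    F = np.eye(2)                 # motion moves only the robot
    Q = np.diag([0.1, 0.0])       # motion noise
    H = np.array([[-1.0, 1.0]])   # measurement: z = landmark - robot
    R = np.array([[0.05]])        # range-sensor noise

    for u, z in [(1.0, 4.1), (1.0, 2.9), (1.0, 2.05)]:
        # Predict: the robot moves by commanded distance u.
        x = x + np.array([u, 0.0])
        P = F @ P @ F.T + Q
        # Update: fuse the measured distance to the landmark.
        y = z - (H @ x)                       # innovation
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
        x = x + (K @ y).ravel()
        P = (np.eye(2) - K @ H) @ P

    print(x)  # robot and landmark positions, estimated simultaneously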

SLAM algorithms are tailored to the available resources, and hence aim not at perfection but at operational compliance. Published approaches are employed in self-driving cars, unmanned aerial vehicles, autonomous underwater vehicles, planetary rovers, newer domestic robots, and even inside the human body.

Robotics Faculty doing Simultaneous Localization and Mapping (SLAM) research include: Ryan Eustice, Matt Johnson-Roberson, Ed Olson, Ram Vasudevan



Validation and Verification of Robotic Systems and Safe Autonomy

From robots working alongside humans every day to the one-of-a-kind rovers sent to Mars once in a while (at least for now!), it is crucial to ensure correct and safe operation of robotic systems. The need to integrate several software modules (e.g., for perception, high-level decision-making, and control) in complex robotic systems exacerbates the problem because, if it is not done with care, unexpected interactions can lead to system-level failures after integration. Testing each and every scenario and interaction the system can go through is not feasible; therefore, more systematic means of verifying safety and correctness are needed. The goal of V&V research is to provide certificates on the correct and safe operation of the system given a set of assumptions on the system's environment and the implementation platform. The problems Michigan faculty and students work on include:
* Specification languages and modeling formalisms: How to describe correct behavior of different types of robotic systems in a way amenable to analysis?
* Verification and formal methods for robotics: Algorithmic techniques to prove safety and correctness of the system or to find corner cases that can lead to problems.
* Fault-tolerance and monitoring: How to do health monitoring and prognostics to improve the self-awareness of robotic systems at run-time? How to locate and mitigate faults in a timely manner? (A toy runtime monitor is sketched after this list.)
* Correct-by-construction control synthesis: Algorithmic techniques to automate the design of control and decision-making modules (software designed by other software!) such that, given system models and requirements, the resulting software modules are guaranteed to enforce the specifications.
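
To give a flavor of run-time monitoring, here is a toy checker for the bounded-response property "every request is followed by a grant within k steps". The property, event names, and class are hypothetical, standing in for what a real specification language and monitoring tool would express.

    # Hypothetical runtime monitor for a bounded-response safety property.
    class BoundedResponseMonitor:
        def __init__(self, k):
            self.k = k
            self.pending = []          # steps remaining for each open request

        def step(self, events):
            """Feed one time step's events; returns False on a violation."""
            self.pending = [t - 1 for t in self.pending]
            if any(t < 0 for t in self.pending):
                return False           # a deadline passed without a grant
            if "grant" in events:
                self.pending.clear()   # a grant discharges all open requests
            if "request" in events:
                self.pending.append(self.k)
            return True

    mon = BoundedResponseMonitor(k=2)
    trace = [{"request"}, set(), set(), {"grant"}]
    print([mon.step(e) for e in trace])
    # [True, True, True, False]: the grant arrives one step after the deadline.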

Robotics Faculty doing V&V of Robotic Systems and Safe Autonomy research include: Ella Atkins, Necmiye Ozay
