Inside Robotics

Speaking like dolphins, a robot fleet takes on underwater tasks

robot in pool
An underwater autonomous robot built to inspect dams, bridges, and hulls of ships practices by inspecting the side of a pool. Courtesy of Joshua Mangelson.

In a Navy shipyard in San Diego, a new generation of underwater robots is learning to communicate and collaborate in order to inspect boats, bridges, pipelines, and other underwater structures. Developed by Joshua Mangelson, a University of Michigan doctoral student in Robotics, the autonomous vehicles overcome the many challenges posed by murky water by simplifying how they coordinate and communicate.

Water, while the basis of life for many, means death for wireless communication. “Below a meter or two of water, Wi-Fi cuts out completely,” Mangelson said. “Same with GPS signals. This is because water attenuates electromagnetic signals very quickly, which makes underwater exploration and mapping a very interesting problem.”

Instead of the radio-based wireless communications used above the surface, underwater wireless communications rely on either light or acoustic transmissions. Light-based underwater communication is still early in research, and it can require a clear line of sight between vehicles in often-cloudy water.

Mangelson launches robot
Joshua Mangelson, Robotics PhD student, launches an autonomous underwater inspection vehicle. Courtesy of Joshua Mangelson.

“Using sound and acoustics only allows communication of bits per second, however,” Mangelson said. “A previous labmate likened underwater communication to sending one tweet per minute, and that is a tweet with just characters, no photo.”

During an underwater inspection, the team of robots needs to let one another know where they are headed and where they have been. If the robots had the communication bandwidth of Wi-Fi or the location data of GPS, they could orient themselves using common objects captured by sensors, or even send exact coordinates. But transmitting such heavy data underwater is currently impossible.

Given the scarce bandwidth of underwater communication, Mangelson needed to simplify and condense what the robots say to each other.

“As the robots move through the underwater environment, they measure the bathymetry, or depth of the seafloor, and how it changes,” Mangelson explained. “Using our optimization framework, the robots can use the bathymetry data to align their trajectories.” Only able to whisper in acoustic pings, the robots send and receive this sparse depth data and, utilizing Mangelson’s solution, can use it to work together in mapping and inspecting underwater objects.

Important to Mangelson’s solution is that it does not rely on any initial location data, such as GPS coordinates. This is because the approach is based on convex optimization, which, unlike other types of optimization, does not require an initial estimate to find the optimal solution. Mangelson designed the problem so that there are no local optima the algorithm might mistake for the global solution, which also makes his approach more robust.
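To see why a convex formulation needs no initial guess, consider a deliberately tiny toy problem (not the paper’s actual linear program, which aligns full trajectories): estimating a constant depth offset between two robots’ bathymetry readings at shared waypoints by minimizing the sum of absolute errors. The function name and data below are hypothetical, purely for illustration.

```python
from statistics import median

def align_depth_bias(depths_a, depths_b):
    """Toy 1-D alignment: find the offset `bias` minimizing the convex
    L1 cost sum(|d_a[i] - (d_b[i] + bias)|).  Because the problem is
    convex, no initial estimate is needed: the exact minimizer is
    simply the median of the pairwise differences."""
    diffs = [a - b for a, b in zip(depths_a, depths_b)]
    return median(diffs)

# Robot B's readings sit 0.5 m shallower than robot A's, plus one
# corrupted sample that the L1 criterion shrugs off.
a = [10.0, 12.0, 11.5, 13.0, 12.5]
b = [9.5, 11.5, 11.0, 12.5, 9.0]   # last reading is an outlier
bias = align_depth_bias(a, b)      # -> 0.5
```

The same L1 objective generalizes to a linear program over full trajectories, which is the spirit of the paper’s formulation; the median trick only works in this one-variable toy case.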

The paper that outlines the work, “Communication Constrained Trajectory Alignment for Multi-Agent Inspection via Linear Programming,” won first place in the student poster competition at the IEEE OES/MTS OCEANS Conference.

Robots inspect ship hull
These robots work in tandem to inspect the hull of a ship. Courtesy of Joshua Mangelson.

Also authors on the paper are Ram Vasudevan, Assistant Professor of Mechanical Engineering, and Ryan Eustice, Associate Professor of Naval Architecture and Marine Engineering. Both are also Core Faculty of the Robotics Institute.

Mangelson continues to test and advance this research in experiments with the Navy in San Diego and Coast Guard in Boston, which allows him to encounter real world problems with the technology.

“During field trial experiments, you have to deal with the real environment, such as hardware leaking, in addition to developing your algorithms, so the algorithms end up more closely tied to reality. That’s one of the reasons I’m interested in field robotics, because you get to sit right on the boundary of the gap between theory and application.”

Mangelson hopes to expand the number of robots sailing in his fleet, and eventually enable quick, automated inspections of important vessels and infrastructure.

The research was supported by the Office of Naval Research under award N00014-16-1-2102.

Decentralized air traffic control for drone-laden skies

Anticipating skies crowded with crisscrossing autonomous vehicles, University of Michigan researchers have developed a future air traffic control system that allows any number of autonomous planes to safely route around each other to their final destinations.

Kunal Garg, a University of Michigan graduate student, designed the system to be utilized by autonomous fixed-wing aircraft, which require more time and space to turn than rotorcraft such as quadcopters. The work could also be extended to autonomous ground vehicles, which follow similar turning dynamics.

autonomous air traffic simulation
In this simulation, autonomous fixed-wing aircraft change goal points and flight modes to avoid collisions based on control laws developed by Kunal Garg.

“It doesn’t seem that complicated: given two points, design a safe trajectory,” Garg said. “But, adding the dynamics and constraints of fixed-wing aircraft, decentralizing the system to allow for unlimited scalability, and doing all this with a limited communication radius makes it complicated.”

In the paper, “Hybrid Planning and Control for Multiple Fixed-Wing Aircraft under Input Constraints,” Garg and his advisor Dimitra Panagou, Core Faculty in Robotics and Assistant Professor of Aerospace Engineering, show that for an arbitrarily large number of planes, the planes will always be able to resolve their conflicts, reach their goal location, and maintain safety.  The paper is a finalist in the 2019 American Institute of Aeronautics and Astronautics Scitech Conference Graduate Student Paper Competition.

In approaching the problem, Garg first simplified it with a realistic constraint. While planes can ascend or descend to avoid one another, there are likely to be altitude restrictions on autonomous aircraft, especially in urban areas. Given this, he removed altitude from the equation, and focused on creating safe flight paths in two dimensions.

With the simpler problem, Garg created the control laws.

“We wanted to exploit hybrid systems because designing a single controller for this complicated system would have been very difficult,” Garg said. “We divided the system into five modes and, depending on what the aircraft are doing, they will decide which mode to choose. Each mode has its own controller that will make sure it is always safe.”

“If the aircraft is alone, the controller will take it to its goal location. If it has one aircraft coming head on, it will do a roundabout maneuver. If there are more than two aircraft, depending on their location, the algorithm will decide the safe mode for all of those aircraft.”

Allowing for safe paths in this system is also possible because aircraft can be assigned temporary goals in place of their final destination in order to avoid collisions. And, because the aircraft are not solving any optimization problems, the algorithms can be implemented on low-cost microcontroller boards that are often already onboard flying autonomous vehicles.
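The mode-switching logic described above can be sketched as a simple decision rule: count the conflicting aircraft within communication range and pick a controller accordingly. This is a simplified three-mode illustration, not the paper’s five-mode design; the mode names, radius, and function are all hypothetical.

```python
from enum import Enum, auto
from math import hypot

class Mode(Enum):
    GO_TO_GOAL = auto()      # no conflicts: head to the goal location
    ROUNDABOUT = auto()      # one head-on aircraft: roundabout maneuver
    MULTI_CONFLICT = auto()  # several aircraft: coordinated safe mode

def select_mode(own_pos, others, comm_radius=50.0):
    """Pick a flight mode from the positions of other aircraft heard
    within the limited communication radius.  In a hybrid-systems
    design, each returned mode would map to its own control law."""
    nearby = [p for p in others
              if hypot(p[0] - own_pos[0], p[1] - own_pos[1]) < comm_radius]
    if not nearby:
        return Mode.GO_TO_GOAL
    if len(nearby) == 1:
        return Mode.ROUNDABOUT
    return Mode.MULTI_CONFLICT
```

Because the rule is a handful of comparisons rather than an optimization problem, it matches the article’s point that such logic can run on a low-cost microcontroller.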

“Instead of trying to solve a complete problem in one go, hybrid system theory allows us to solve individual parts, where the analysis becomes simpler, and then we can stitch them together into a complete solution,” said Garg.

While the modes were hand-designed by the researchers, they hope in the future to automate the design of such a system, or as Garg said, “How can we design an algorithm that will find an algorithm for safe trajectories?”

In addition, the team is working on adding other types of aircraft, such as quadcopters, to their system. They are also testing cases where there might be disturbances, such as malfunctioning sensors or wind.

The research is supported by NASA Grant NNX16AH81A and the Air Force Office of Scientific Research award number FA9550-17-1-0284.

Robo Sapiens Japanicus: U-M Professor Jennifer Robertson’s Recent Publications


Book: ROBO SAPIENS JAPANICUS: ROBOTS, GENDER, FAMILY AND THE JAPANESE NATION (University of California Press) was published in November 2017. A Japanese translation is forthcoming from Shueisha.

Robots: The Backstories


2018 “Gotai: Corporeal Aesthetics and Robotic Exoskeletons in Japan and Beyond.” Invited chapter, Designing Humans, Designing Robots. Cathrine Hasse and Dorte Marie Søndergaard, eds., London & New York: Routledge. In press.

2018 “Robot Reincarnation: Garbage, Artefacts, and Mortuary Rituals,” pp. 153-173. Consuming Post-Bubble Japan. Ewa Machotka and Katarzyna Cwiertka, eds., Amsterdam University Press. Also available online.

Outreach, Podcasts:

2018 Interview with Elizabeth Fernandez (recorded podcast) on Japanese robotics, robot gender, and social technologies, SparkDialog, 20 August.

PDFs of published work can be found on Prof. Robertson’s website.


Research on Small-scale and Micro-robotic Needs at U-M

Professor Kenn Oldham and his team’s most recent robotics work appears in the attached series of related articles on testing piezoelectric and polymer small-scale robots, along with battery modeling motivated by micro-robotic needs.

Article: Polymer Leg Mechanisms for Millimeter-scale Robotics

Article: Microrobot Locomotion

Article: Dynamic Structural and Contact Modeling for a Silicon Hexapod Microrobot


A New Framework to Guide the Processing of RGBD Video

Dr. Jason Corso and Dr. Brent Griffin are extending prior work in bottom-up video segmentation to include depth information from RGBD video, which allows them to better train for specific tasks and adaptively update representations of objects in complex environments. For robotics applications, they are incorporating this into a framework that guides the processing of RGBD video using a kinematic description of a robot’s actions, thereby increasing the quality of observations while reducing overall computational cost. Using kinematically guided RGBD video, the framework can provide real-time feedback to a robot to identify task failure, detect external objects or agents moving into a workspace, and develop a better understanding of objects while interacting with them.
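One piece of the idea above can be sketched very simply: if the robot’s kinematic model predicts which pixels its own body will occupy, those pixels can be masked out, and any remaining large depth change flagged as a possible external object entering the workspace. This is a toy sketch under that assumption, not the authors’ actual pipeline; the function and threshold are hypothetical.

```python
def detect_intrusion(depth_prev, depth_curr, robot_mask, thresh=0.1):
    """Toy kinematically-guided change detection on two depth frames
    (nested lists of meters).  Pixels where robot_mask is True are
    predicted by the kinematic model to contain the robot itself and
    are ignored; any other pixel whose depth changes by more than
    `thresh` is flagged as a possible intruding object or agent."""
    flagged = []
    for r, row in enumerate(depth_curr):
        for c, d in enumerate(row):
            if robot_mask[r][c]:
                continue  # expected robot motion, not an intrusion
            if abs(d - depth_prev[r][c]) > thresh:
                flagged.append((r, c))
    return flagged

prev = [[1.0, 1.0], [1.0, 1.0]]
curr = [[1.0, 0.5], [1.0, 1.0]]          # something moved in at (0, 1)
mask = [[False, False], [False, False]]  # robot occupies no pixels here
hits = detect_intrusion(prev, curr, mask)  # -> [(0, 1)]
```

In practice the mask would come from projecting the robot’s forward kinematics into the camera frame, and the per-pixel check would run on full RGBD images rather than toy grids.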

Chad Jenkins named Editor-in-Chief of the ACM Transactions on Human-Robot Interaction (THRI)

“We are thrilled to become part of the ACM family of journals,” explained THRI Co-Editor-in-Chief Odest Chadwicke Jenkins of the University of Michigan. “ACM’s reputation as a publisher of computing research is unparalleled. At the same time, the broad representation of computing disciplines in the ACM, the organization’s global reach, and platforms such as the Digital Library, are a perfect complement to our own goals for THRI.”

Jenkins, along with Co-Editor-in-Chief Selma Šabanović of Indiana University, has set three primary goals for the journal in the coming years: 1) sustaining the intellectual growth of HRI as a field of study (both quantitatively and qualitatively), 2) enabling timely and productive feedback from readers, and 3) cultivating new and leading-edge ideas in both robotics and the human-centered sciences.

The inaugural issue of the rebranded ACM Transactions on Human-Robot Interaction (THRI) is planned for March 2018. Those seeking to submit for the publication, or who have questions for the editors, are encouraged to visit the current HRI Journal website.


Yuxiao Chen’s Formal Design Process: Achieving Ever-Greater Balance Between Safety and Performance

The concept of “formal methods,” also known as “correct by construction,” is applied to trajectory planning for mobile robots and small autonomous vehicles. The main challenges are (i) the presence of multiple moving objects (pedestrians, other robots and vehicles), and (ii) plant uncertainties. We aim to address both in our research.

The example publications show a formal design process for dealing with multiple moving objects (without considering plant uncertainties). Our method achieves a better balance between safety (zero collisions) and performance (the robot does not keep stopping to avoid collisions) compared with other methods from the literature.

Mr. Yuxiao Chen is a U-M graduate student co-advised by Professors Huei Peng and Jessy Grizzle.

UM Roboticist Prof. Ed Olson Stands the Test of Time!

Michigan Robotics is proud to highlight that one of its founding members, Prof. Ed Olson, has made the inaugural list of the Google Scholar “Classic Papers That Have Stood The Test of Time.”  You can find Ed’s paper listed here, and general background for this list is given here.

If you check out Ed on his website, you’ll find that he is withstanding the test of time rather well himself.

Necmiye Ozay Wins Hybrid Systems Prize

For her outstanding work in hybrid systems, a theoretical area very important to robotics, Professor Necmiye Ozay has received a major best paper award. The details of her award are here. Necmiye’s work on Correct-by-Design Control Software Synthesis is aimed at breaking down the barriers that have prevented this field from tackling important industrial problems. In the paper, she and her co-author develop finite abstractions equipped with robustness margins, allowing sensing and model imperfections to be addressed in a formally correct manner. They apply the results to Adaptive Cruise Control, an important advanced driver-assistance system, and point out other important applications in robotics and autonomous vehicles.