Invited speakers present on a range of robotics topics
The Robotics Seminar Series invites robotics experts to present their work to the Michigan Robotics community.
The presentations are held every other Thursday from 3:30-4:30pm on the campus of the University of Michigan in 2300 FMCRB, and on Zoom when offered. Recordings will be posted when available.
Past speakers
Recording
Abstract
This research focuses on the integration of multi-sensor fusion, robotics/mechatronics, perception algorithms, human-robot and human-machine interfaces, high- and low-level robot controller design, robot learning, artificial intelligence, the study of human-robot interaction, and user research. The core theme of his research is the development of robot technology that can be easily used by non-experts. In the future, he envisions robots being effortlessly taught new tasks as required by a human user. Toward this long-term goal, his current agenda focuses on exploring demonstration learning for robots and effective strategies for teaching between humans and robots (the user experience of human-robot interaction), and on applying these techniques to real-world application scenarios such as healthcare, rehabilitation, service, personal, and manufacturing robotics.
Bio
Wing-Yue Geoffrey Louie obtained his B.A.Sc. in Mechanical Engineering and his Ph.D. in Robotics from the University of Toronto. He completed his Ph.D. in the Autonomous Systems & Biomechatronics Laboratory under his advisor, Professor Goldie Nejat. His work during this time was featured by the Globe and Mail, Toronto Star, Bloomberg TV, CBC News, and Toronto Sun.
Abstract
Biological animals excel at navigating complex and challenging environments, leveraging their hardware to perform dynamic motions and athletic skills that overcome diverse obstacles. Despite recent advancements, robotic systems still lack comparable locomotion capabilities. To push the boundaries of robot capabilities, unique hardware designs suited for diverse environments are essential, alongside models that understand hardware characteristics and control systems that effectively utilize these features. In this talk, I will present our lab's efforts to bridge this gap by developing robust hardware and effective control algorithms that enable both agility and robustness in legged robots. I will introduce our quadruped platforms: HOUND, designed for high-speed locomotion on complex terrains with custom electric actuators, and MARVEL, which employs specialized magnetic feet for versatile climbing. I will then discuss the control algorithms driving these robots, which leverage model predictive control and reinforcement learning techniques. Our results demonstrate that HOUND achieves speeds of up to 9.5 m/s while MARVEL traverses ceilings at 0.5 m/s and vertical walls at 0.7 m/s.
Bio
Prof. Hae-Won Park is the director of the Humanoid Robot Research Center and an Associate Professor of Mechanical Engineering at KAIST. He received his B.S. and M.S. from Yonsei University and his Ph.D. from the University of Michigan, Ann Arbor. Before joining KAIST, he was an Assistant Professor at the University of Illinois at Urbana-Champaign and a postdoctoral researcher at MIT. His research focuses on learning, model-based control, and robotic design, especially in legged and bio-inspired robots. Prof. Park has received several prestigious awards, including the NSF CAREER Award and the RSS Early-Career Spotlight Award, and serves on editorial boards for top robotics journals and conferences such as IJRR and IEEE ICRA.
Recording
Abstract
While miniaturization has been a goal in robotics for nearly 40 years, roboticists have struggled to access sub-millimeter dimensions without making sacrifices to on-board information processing. Consequently, microrobots often lack the key features that distinguish their macroscopic cousins from other machines, namely on-robot systems for decision making, sensing, feedback, and programmable computation. This talk is about bringing the power of information processing to robots too small to see by eye. I'll show how to fit circuits for memory, sensing, computing, propulsion, power, and communication into a single robot comparable in size to a paramecium. Using these circuits, microrobots can carry out useful, autonomous behaviors like temperature reporting and gradient climbing without additional instructions or supervision. Finally, I'll discuss some near-term applications of smart, inexpensive, programmable microrobots, including peripheral nerve repair and nanomanufacturing.
Bio
Marc Miskin is an assistant professor of Electrical and Systems Engineering at the University of Pennsylvania. He received a B.Sc. in Physics from Rensselaer Polytechnic Institute and a Ph.D. in Physics from the University of Chicago. Prior to joining the faculty at U. Penn, he was a Kavli Postdoctoral Fellow for Nanoscale Science at Cornell University. Currently, he is interested in the design and fabrication of microscopic robots. His work has won awards from the Air Force Office of Scientific Research and the Army Research Office, as well as a Sloan Research Fellowship and a Packard Fellowship, and has been featured in several media outlets, including the New York Times, the MIT Tech Review's 35 under 35 list, CNN, the BBC, and NPR. Outside of research, he is actively involved in science education, frequently presenting at science museums.
Recording
Abstract
Long-horizon planning is fundamental to our ability to solve complex physical problems, from using tools to cooking dinner. Despite recent progress in commonsense-rich foundation models, robots still lack this ability, particularly with learning-based approaches. In this talk, I will present a body of work that aims to transform Task and Motion Planning (TAMP)—one of the most powerful computational frameworks in robot planning—into a fully generative-model framework, enabling compositional generalization in a largely data-driven approach. I will explore how to chain together modular diffusion-based skills through iterative forward-backward denoising, how to formulate TAMP as a factor-graph problem with generative models serving as learned constraints for planning, and how to integrate task and motion planning within a single generative process. I'll conclude by discussing the "reasoning" paradigms of recent robot foundation models, the need to move beyond language-aided chain of thought, and the prospect of extracting planning representations from demonstrations.
Bio
Danfei Xu is an Assistant Professor in the School of Interactive Computing at Georgia Institute of Technology, where he directs the Robot Learning and Reasoning Lab (RL2). He is also a researcher at NVIDIA AI. He earned his Ph.D. in Computer Science from Stanford University in 2021. His research focuses on machine learning methods for robotics, particularly in manipulation planning and imitation learning. His work has received Best Paper nominations at the Conference on Robot Learning (CoRL) and IEEE Robotics and Automation Letters (RA-L). His research is funded by the National Science Foundation, Autodesk Research, and Meta Platforms.
Recording
Abstract
Recent advances in robot materials and algorithms have enabled new levels of adaptive and versatile behavior. In this talk I will describe my lab’s efforts to create robots, mechanisms, and control algorithms capable of adaptive behaviors in response to perturbations from the environment or body morphology. I will first describe how the modulation of material curvature can enable reconfigurable robot appendages and bodies, culminating in new modes of robot manipulation and locomotion. Next, I will describe our work on flapping wing actuation through bioinspired autonomous oscillators with adaptive and responsive dynamics. Lastly, I will describe how to design adaptive proprioceptive feedback laws which can enable robot groups to synchronize locomotion purely through contact interactions. In total this work illustrates how simple mechanisms and algorithms can give rise to a rich design space for dynamic and responsive robots.
Bio
Nick Gravish is an associate professor in the Mechanical & Aerospace Engineering department and a faculty member in the Contextual Robotics Institute at UC San Diego. Dr. Gravish received his PhD from Georgia Tech in 2013 and was a postdoctoral fellow at Harvard from 2013 to 2016, supported by a James S. McDonnell fellowship in complex systems science. His research focuses on bio-inspiration, biomechanics, and robotics, toward the development of new locomotion systems.
Abstract
Current robots are primarily rigid machines that exist in highly constrained or open environments such as factory floors, warehouses, or fields. There is an increasing demand for more adaptable, mobile, and flexible robots that can manipulate or move through complex environments. This problem is currently being addressed in two complementary ways: (i) learning and control algorithms that enable the robot to better sense and adapt to the surrounding environment and (ii) embedded intelligence in mechanical structures. My vision is to create robots that can mechanically conform to the environment or objects that they interact with to alleviate the need for high-speed, high-accuracy, and high-precision controllers. In this talk, I will give an overview of our key challenges and contributions to developing mechanically conformable robots, including compliant dexterous manipulation and heterogeneous collaborative robots.
Bio
Zeynep Temel is an Assistant Professor with the Robotics Institute at Carnegie Mellon University. Her research focuses on developing robots that can mechanically conform to the environment or objects that they interact with. Prior to joining RI, she was a postdoctoral fellow in the Microrobotics Lab at Harvard University, and she received her Ph.D. from Sabanci University, Turkey. In 2020, she was selected as one of 25 members of the World Economic Forum's Young Scientists Community.
Recording
Abstract
Foundation models, such as GPT-4 Vision, have marked significant achievements in the fields of natural language and vision, demonstrating exceptional abilities to adapt to new tasks and scenarios. However, physical interaction—such as cooking, cleaning, or caregiving—remains a frontier where foundation models and robotic systems have yet to achieve the desired level of adaptability and generalization. In this talk, I will discuss the opportunities for incorporating foundation models into classic robotic pipelines to endow robots with capabilities beyond those achievable with traditional robotic tools. The talk will focus on three key improvements: (1) task specification, (2) low-level scene modeling, and (3) high-level scene modeling. The central idea behind this research is to translate the commonsense knowledge embedded in foundation models into structural priors that can be integrated into robot learning systems. This approach leverages the strengths of different modules (e.g., VLM for task interpretation and constrained optimization for motion planning), achieving the best of both worlds. I will demonstrate how such integration enables robots to interpret instructions provided in free-form natural language, and how foundation models can be augmented with additional memory mechanisms, such as an action-conditioned scene graph, to handle a wide range of real-world manipulation tasks. Toward the end of the talk, I will discuss the limitations of current foundation models, challenges that still lie ahead, and potential avenues to address these challenges.
Bio
Yunzhu Li is an Assistant Professor of Computer Science at Columbia University. Before joining Columbia, he was an Assistant Professor at UIUC CS and spent time as a Postdoc at Stanford, collaborating with Fei-Fei Li and Jiajun Wu. Yunzhu earned his PhD from MIT under the guidance of Antonio Torralba and Russ Tedrake. His work lies at the intersection of robotics, computer vision, and machine learning, with the goal of helping robots perceive and interact with the physical world as dexterously and effectively as humans do. Yunzhu’s work has been recognized with the Best Paper Award at ICRA, the Best Systems Paper Award at CoRL, and the Best Paper Awards at multiple workshops. Yunzhu is also the recipient of the AAAI New Faculty Highlights, the Sony Faculty Innovation Award, the Adobe Research Fellowship, and was selected as the First Place Recipient of the Ernst A. Guillemin Master’s Thesis Award in AI and Decision Making at MIT. His research has been published in top journals and conferences, including Nature, Science, RSS, NeurIPS, and CVPR, and featured by major media outlets such as CNN, BBC, and The Wall Street Journal.
Recording
Abstract
In the last few decades, most robotics success stories have been limited to structured or controlled environments. A major challenge is to develop robot systems that can operate in complex or unstructured environments such as homes, dense traffic, outdoor terrains, and public places. In this talk, we give an overview of our ongoing work on developing robust planning and navigation technologies that use recent advances in computer vision, sensor technologies, machine learning, and motion planning algorithms. We present new methods that utilize multi-modal observations from an RGB camera, 3D LiDAR, and robot odometry for scene perception, along with deep reinforcement learning for reliable planning. The latter is also used to compute dynamically feasible and spatially aware velocities for a robot navigating among mobile obstacles and over uneven terrain. We have integrated these methods with wheeled robots, home robots, and legged platforms and highlight their performance in crowded indoor scenes, home environments, and dense outdoor terrains.
Bio
Dinesh Manocha is the Paul Chrisman-Iribe Chair in Computer Science & ECE and a Distinguished University Professor at the University of Maryland, College Park. His research interests include virtual environments, physically-based modeling, and robotics. His group has developed a number of software packages that are standard and have been licensed to 60+ commercial vendors. He has published more than 800 papers and supervised 53 PhD dissertations. He is a Fellow of AAAI, AAAS, ACM, IEEE, and NAI, a member of the ACM SIGGRAPH and IEEE VR Academies, and a recipient of the Bézier Award from the Solid Modeling Association. He received the Distinguished Alumni Award from IIT Delhi and the Distinguished Career in Computer Science Award from the Washington Academy of Sciences. He was a co-founder of Impulsonic, a developer of physics-based audio simulation technologies, which was acquired by Valve Inc. in November 2016.
Recording
Abstract
Scaling up data and computation is regarded today as the key to achieving unprecedented performance in many perception tasks. Biological perception, however, is characterized by principles of efficiency implemented through symmetry and efficient sensing. By respecting the symmetries of the problem at hand, models can generalize better, often requiring fewer parameters and less data to learn effectively. Moreover, they provide insights into the underlying structures and symmetries of the data, which can be invaluable in developing more robust and interpretable models. The incoming sensing bandwidth for vision in biological brains is remarkably low, while current computer vision systems are based on full video frames and many views of the world. We will present an active approach to view and touch selection based on information-theoretic principles. We will finish with a new sensor paradigm that senses only visual events rather than whole scenes and show how it can solve basic tasks fundamental to embodied intelligence.
Bio
Kostas Daniilidis is the Ruth Yalom Stone Professor of Computer and Information Science at the University of Pennsylvania, where he has been on the faculty since 1998. He is an IEEE Fellow. He was the director of the GRASP laboratory from 2008 to 2013. He obtained his undergraduate degree in Electrical Engineering from the National Technical University of Athens in 1986 and his PhD in Computer Science from the University of Karlsruhe in 1992. He received the Best Conference Paper Award at ICRA 2017. He co-chaired ECCV 2010 and 3DPVT 2006. His most cited works have been on event-based vision, equivariant learning, 3D human pose, and hand-eye calibration.