Invited speakers present on a range of robotics topics
The Robotics Seminar Series invites robotics experts to present their work to the Michigan Robotics community.
The presentations are held every other Thursday from 3:30-4:30pm on the University of Michigan campus in 2300 FMCRB and, when available, on Zoom. Recordings will be posted when available.
Upcoming speakers
Abstract
Current robots are primarily rigid machines that exist in highly constrained or open environments such as factory floors, warehouses, or fields. There is an increasing demand for more adaptable, mobile, and flexible robots that can manipulate or move through complex environments. This problem is currently being addressed in two complementary ways: (i) learning and control algorithms that enable the robot to better sense and adapt to the surrounding environment and (ii) intelligence embedded in mechanical structures. My vision is to create robots that can mechanically conform to the environment or objects that they interact with, alleviating the need for high-speed, high-accuracy, and high-precision controllers. In this talk, I will give an overview of our key challenges and contributions to developing mechanically conformable robots, including compliant dexterous manipulation and heterogeneous collaborative robots.
Bio
Zeynep Temel is an Assistant Professor at the Robotics Institute at Carnegie Mellon University. Her research focuses on developing robots that can mechanically conform to the environment or objects that they interact with. Prior to joining RI, she was a postdoctoral fellow in the Microrobotics Lab at Harvard University, and she received her Ph.D. from Sabanci University, Turkey. In 2020, she was selected as one of 25 members of the Young Scientists Community of the World Economic Forum.
Abstract
Information to come.
Bio
Information to come.
Past speakers
Recording
Abstract
Foundation models, such as GPT-4 Vision, have marked significant achievements in the fields of natural language and vision, demonstrating exceptional abilities to adapt to new tasks and scenarios. However, physical interaction—such as cooking, cleaning, or caregiving—remains a frontier where foundation models and robotic systems have yet to achieve the desired level of adaptability and generalization. In this talk, I will discuss the opportunities for incorporating foundation models into classic robotic pipelines to endow robots with capabilities beyond those achievable with traditional robotic tools. The talk will focus on three key improvements in (1) task specification, (2) low-level, and (3) high-level scene modeling. The central idea behind this research is to translate the commonsense knowledge embedded in foundation models into structural priors that can be integrated into robot learning systems. This approach leverages the strengths of different modules (e.g., VLM for task interpretation and constrained optimization for motion planning), achieving the best of both worlds. I will demonstrate how such integration enables robots to interpret instructions provided in free-form natural language, and how foundation models can be augmented with additional memory mechanisms, such as an action-conditioned scene graph, to handle a wide range of real-world manipulation tasks. Toward the end of the talk, I will discuss the limitations of the current foundation models, challenges that still lie ahead, and potential avenues to address these challenges.
Bio
Yunzhu Li is an Assistant Professor of Computer Science at Columbia University. Before joining Columbia, he was an Assistant Professor at UIUC CS and spent time as a Postdoc at Stanford, collaborating with Fei-Fei Li and Jiajun Wu. Yunzhu earned his PhD from MIT under the guidance of Antonio Torralba and Russ Tedrake. His work lies at the intersection of robotics, computer vision, and machine learning, with the goal of helping robots perceive and interact with the physical world as dexterously and effectively as humans do. Yunzhu’s work has been recognized with the Best Paper Award at ICRA, the Best Systems Paper Award at CoRL, and the Best Paper Awards at multiple workshops. Yunzhu is also the recipient of the AAAI New Faculty Highlights, the Sony Faculty Innovation Award, the Adobe Research Fellowship, and was selected as the First Place Recipient of the Ernst A. Guillemin Master’s Thesis Award in AI and Decision Making at MIT. His research has been published in top journals and conferences, including Nature, Science, RSS, NeurIPS, and CVPR, and featured by major media outlets such as CNN, BBC, and The Wall Street Journal.
Recording
Abstract
In the last few decades, most robotics success stories have been limited to structured or controlled environments. A major challenge is to develop robot systems that can operate in complex or unstructured environments such as homes, dense traffic, outdoor terrains, and public places. In this talk, we give an overview of our ongoing work on developing robust planning and navigation technologies that use recent advances in computer vision, sensor technologies, machine learning, and motion planning algorithms. We present new methods that utilize multi-modal observations from an RGB camera, 3D LiDAR, and robot odometry for scene perception, along with deep reinforcement learning for reliable planning. The latter is also used to compute dynamically feasible and spatially aware velocities for a robot navigating among mobile obstacles and uneven terrains. We have integrated these methods with wheeled robots, home robots, and legged platforms and highlight their performance in crowded indoor scenes, home environments, and dense outdoor terrains.
Bio
Dinesh Manocha is the Paul Chrisman-Iribe Chair in Computer Science & ECE and Distinguished University Professor at the University of Maryland, College Park. His research interests include virtual environments, physically-based modeling, and robotics. His group has developed a number of software packages that are standard and licensed to 60+ commercial vendors. He has published more than 800 papers and supervised 53 PhD dissertations. He is a Fellow of AAAI, AAAS, ACM, IEEE, and NAI, a member of the ACM SIGGRAPH and IEEE VR Academies, and a recipient of the Bézier Award from the Solid Modeling Association. He received the Distinguished Alumni Award from IIT Delhi and the Distinguished Career in Computer Science Award from the Washington Academy of Sciences. He was a co-founder of Impulsonic, a developer of physics-based audio simulation technologies, which was acquired by Valve Inc in November 2016.
Recording
Abstract
Scaling up data and computation are regarded today as the key to achieving unprecedented performance in many perception tasks. Biological perception, however, is characterized by principles of efficiency realized through symmetry and efficient sensing. By respecting the symmetries of the problem at hand, models can generalize better, often requiring fewer parameters and less data to learn effectively. Moreover, they provide insights into the underlying structures and symmetries of the data, which can be invaluable in developing more robust and interpretable models. The incoming sensing bandwidth for vision in biological brains is remarkably low, while current computer vision systems are based on full video frames and many views of the world. We will present an active approach to view and touch selection based on information-theoretic principles. We will finish with a new sensor paradigm that senses only visual events rather than whole scenes and show how it can solve basic tasks fundamental to embodied intelligence.
Bio
Kostas Daniilidis is the Ruth Yalom Stone Professor of Computer and Information Science at the University of Pennsylvania, where he has been on the faculty since 1998. He is an IEEE Fellow. He was the director of the GRASP laboratory from 2008 to 2013. He obtained his undergraduate degree in Electrical Engineering from the National Technical University of Athens in 1986 and his PhD in Computer Science from the University of Karlsruhe in 1992. He received the Best Conference Paper Award at ICRA 2017. He co-chaired ECCV 2010 and 3DPVT 2006. His most cited works have been on event-based vision, equivariant learning, 3D human pose, and hand-eye calibration.