Invited speakers present on a range of robotics topics
The Robotics Seminar Series invites robotics experts to present their work to the Michigan Robotics community.
The presentations are held on Wednesdays from 4:00 to 5:30pm in 2300 FMCRB on the University of Michigan Ann Arbor campus, and on Zoom when available. Recordings will be posted when available. Light refreshments will be provided.
Upcoming speakers
Abstract
In recent years, nuclear energy has entered a renaissance, fueled by the growing demand for reliable, low-carbon power and the maturity of advanced reactor technologies that promise safer, more efficient, and more sustainable solutions. Yet, the nuclear environment remains one of the most dangerous, dirty, and dull workplaces—making it ideal for robotic intervention. To meet these challenges, new robot technologies and customized robotic systems are being developed. For example, counterintuitively, soft robots can be more radiation resistant than many traditional robots in some cases. In this talk, I will present our recent work on the development of soft robots, as well as several other customized robotic platforms, which are designed for nuclear and homeland security applications but also have potential for broader civilian applications.
Bio
Y Z is a Professor in the Department of Nuclear Engineering and Radiological Sciences, the Department of Electrical Engineering and Computer Science, the Department of Materials Science and Engineering, the Department of Robotics, and the Applied Physics Program at the University of Michigan. He is also the Chair of the Engineering Physics Program. He received his B.S. in Electrical Science and Technology from the University of Science and Technology of China in 2004 and his Ph.D. in Nuclear Science and Engineering from the Massachusetts Institute of Technology in 2010. He was a Clifford G. Shull Fellow at Oak Ridge National Laboratory (2010-2012) and a professor at the University of Illinois Urbana-Champaign (2012-2022). The Z Lab’s research can be summarized in two words: Matter and Machine. In the Matter domain, his group synergistically integrates statistical physics, molecular simulations, artificial intelligence, and neutron scattering experiments to extend our understanding of rare/extreme events and long-timescale phenomena in complex materials. Particular emphasis is given to the physics and chemistry of liquids, glasses, and complex fluids, especially at interfaces, under extreme conditions, or when driven away from equilibrium. Concurrently, on the Machine front, his group builds robots for extreme environments, including swarm robots, wheel-leg hybrid robots, and soft robots. These two research areas, spanning from fundamental to applied, serve as integral pillars in the lab’s overarching mission to foster a sustainable, resilient, and secure energy infrastructure.

Abstract
Information to come.
Bio
Information to come.
Abstract
Making use of visual display technology and human-robotic interfaces, many researchers have illustrated various opportunities to distort visual and physical realities. We have had success with interventions such as error augmentation, sensory crossover, and negative viscosity. Judicious application of these techniques leads to training situations that enhance the learning process and can restore movement ability after neural injury. I will trace out clinical studies that have employed such technologies to improve health and function, and share some leading-edge insights that include deceiving the patient, examining the clinical effectiveness of technology, and moving software "smarts" into the physical hardware, which we call "smushware."
Bio
James L. Patton received a BS in mechanical engineering and engineering science from the University of Michigan (1989), an MS in theoretical mechanics from Michigan State University (1993), and a PhD in biomedical engineering from Northwestern University (1998). He is the Richard and Loan Hill Professor of Biomedical Engineering at the University of Illinois Chicago and a research scientist at the Shirley Ryan AbilityLab. He worked in automotive manufacturing and nuclear medicine before discovering the control of human movement. His interests include robotic teaching, controls, haptics, modeling, human-machine interfaces, and technology-facilitated recovery from brain injury. Patton was vice president of conferences for the IEEE-EMB society and an Associate Editor of IEEE Transactions on Biomedical Engineering and IEEE Transactions on Medical Robotics and Bionics.

Abstract
Information to come.
Bio
Information to come.
Abstract
The decisions roboticists make when designing and deploying robots are shaped by our shared understanding of what robots are, what they should look like, and what they are for. That is, they are shaped by the robot imaginary. To understand why roboticists follow certain design patterns today, and to understand the impact of those design decisions, we need to understand the nature of the robot imaginary and the specific power dynamics that imaginary tends to reflect and reinforce. In this talk, based on my forthcoming book Degrees of Freedom: On Robotics and Social Justice (MIT Press, 2025), I will begin by describing the ways that the modern robotic imaginary originated in the aftermath of the US Civil War as a vision for how White men would maintain power after the end of slavery. Then, I will leverage my theory of Matrix-Guided Technology Power Analysis to analyze the ways that, as a result of this origin story, today's robots tend to reinforce White and patriarchal power in the cultural, disciplinary, and structural domains... and what roboticists can do to change this status quo to ensure their robots are used as forces for good. Overall, I will draw on computer science; history and politics; law, criminology, and sociology; feminist, ethnic, and Black studies; literary and media studies; and social, moral, and cognitive psychology to make a compelling call to action for a more socially just future of robotics.
Bio
Tom Williams is an Associate Professor of Computer Science at the Colorado School of Mines, Director of the MIRRORLab, and author of Degrees of Freedom: On Robotics and Social Justice (2025). Tom earned joint PhDs in Computer Science and Cognitive Science from Tufts University and is an internationally recognized scholar of Human-Robot Interaction whose work examines its cognitive, social, and moral dimensions. Tom is the winner of Early Career awards from the National Science Foundation, the Air Force Office of Scientific Research, and NASA.

Abstract
Information to come.
Bio
Information to come.
Abstract
Robotic-assisted surgery (RAS) systems incorporate highly dexterous tools, hand tremor filtering, and motion scaling to enable a minimally invasive surgical approach, reducing collateral damage and patient recovery times. However, current state-of-the-art telerobotic surgery requires a surgeon to control every motion of the robot, resulting in long procedure times and inconsistent results. The advantages of autonomous robotic functionality have been demonstrated in applications outside of medicine, such as manufacturing and aviation. A limited form of autonomous RAS with pre-planned functionality was introduced in orthopedic procedures, radiotherapy, and cochlear implants. Efforts in automating soft tissue surgeries have so far been limited to elemental tasks such as knot tying, needle insertion, and executing predefined motions. The fundamental problems in soft tissue surgery include unpredictable shape changes, tissue deformations, and perception challenges. My research goal is to transform current manual and teleoperated robotic soft tissue surgery into autonomous robotic surgery, improving patient outcomes by reducing the reliance on the operating surgeon, eliminating human errors, and increasing precision and speed. This presentation will discuss our novel strategies to overcome the challenges encountered in autonomous soft tissue surgery. Presentation topics will include a robotic system for supervised autonomous laparoscopic anastomosis, partial nephrectomy, and end-to-end imitation learning of surgical tasks and procedures.
Bio
Axel Krieger, PhD, joined the Department of Mechanical Engineering at Johns Hopkins University in July 2020. He is leading a team of students, scientists, and engineers in the research and development of robotic systems for surgery and interventions. Projects include the development of a surgical robot called the Smart Tissue Autonomous Robot (STAR) and the use of 3D printing for surgical planning and patient-specific implants. Professor Krieger is a recipient of the NSF CAREER award and an inventor on over thirty patents and patent applications. Licensees of his patents include medical device start-ups Activ Surgical and PeriCor as well as industry leaders such as Siemens, Philips, and Intuitive Surgical. Before joining Johns Hopkins, Professor Krieger was an Assistant Professor in Mechanical Engineering at the University of Maryland and an Assistant Research Professor and Program Lead for Smart Tools at the Sheikh Zayed Institute for Pediatric Surgical Innovation at Children’s National. He has several years of experience in private industry at Sentinelle Medical Inc and Hologic Inc, where he served as a Product Leader developing devices and software systems from concept to FDA approval and market introduction. Dr. Krieger completed his undergraduate and master’s degrees at the University of Karlsruhe in Germany and his doctorate at Johns Hopkins, where he pioneered an MRI-guided prostate biopsy robot used in over 50 patient procedures at three hospitals.
Past speakers
Abstract
People with visual impairment face challenges as pedestrians in complex environments. Numerous hand-held or wearable systems have been developed to provide assistance, but all have limitations. Existing XR systems are large, power-hungry, and not designed specifically as assistive technology. Smartphone apps are not hands-free and typically help with a single task. We are building a wearable system to serve as an Adaptive Vision Assistant (AVA). AVA.glass is a head-worn user interface. A camera (embedded in AVA.glass) will obtain environmental information for apps that rely on real-time video. AVA.glass is controlled by AVA.app, a smartphone app that serves as an intelligent assistant for the user. AVA.app integrates information from multiple sources and automatically launches single-task apps when needed. This system will be hands-free and will relieve the user from opening and closing multiple apps while also navigating, thus addressing two deficiencies of current assistive technology. To increase user confidence, our system will provide multi-modal input, with information provided via vibrotactile and auditory channels. AVA will adapt its operation to environmental conditions and learn user behavior during real-world operation.
Bio
James Weiland received his B.S. in Electrical Engineering from the University of Michigan in 1988. After 4 years in industry with Pratt & Whitney Aircraft Engines, he returned to Michigan for graduate school, earning degrees in Biomedical Engineering (M.S. 1993, Ph.D. 1997) and Electrical Engineering (M.S. 1995). He joined the Wilmer Ophthalmological Institute at Johns Hopkins University in 1997 as a postdoctoral fellow and, in 1999, was appointed an assistant professor of ophthalmology at Johns Hopkins. Dr. Weiland was appointed assistant professor at the Doheny Eye Institute-University of Southern California in 2001, and was promoted to Professor of Ophthalmology and Biomedical Engineering in 2013. In 2017, Dr. Weiland was appointed as Professor of Biomedical Engineering (Medical School) and Ophthalmology & Visual Sciences at the University of Michigan. He is a Fellow of the American Institute of Medical and Biological Engineering and a Fellow of the IEEE. Dr. Weiland’s research interests include retinal prostheses, neural prostheses, electrode technology, visual evoked responses, implantable electrical systems, and wearable visual aids for the blind.
Recording
Abstract
This research focuses on the integration of multi-sensor fusion, robotics/mechatronics, the development of perception algorithms, human-robot/human-machine interfaces, high- and low-level robot controller design, general robot learning, artificial intelligence, the study of human-robot interactions, and user research. The core theme of his research is the development of robot technology that can be easily utilized by non-experts. In the future, he envisions robots being effortlessly taught new tasks as required by a human user. To address this long-term goal, his current agenda focuses on exploring demonstration learning for robots and effective strategies for teaching between humans and robots (user experience in human-robot interactions), and on applying these techniques to real-world application scenarios such as healthcare, rehabilitation, service, personal, and manufacturing robotics.
Bio
Wing-Yue Geoffrey Louie obtained his B.A.Sc in Mechanical Engineering and Ph.D. in robotics from the University of Toronto. His Ph.D. was completed in the Autonomous Systems & Biomechatronics Laboratory with his advisor, Professor Goldie Nejat. His work during this time was featured by the Globe and Mail, Toronto Star, Bloomberg TV, CBC News, and Toronto Sun.
Abstract
Biological animals excel at navigating complex and challenging environments, leveraging their hardware to perform dynamic motions and athletic skills that overcome diverse obstacles. Despite recent advancements, robotic systems still lack comparable locomotion capabilities. To push the boundaries of robot capabilities, unique hardware designs suited for diverse environments are essential, alongside models that understand hardware characteristics and control systems that effectively utilize these features. In this talk, I will present our lab's efforts to bridge this gap by developing robust hardware and effective control algorithms that enable both agility and robustness in legged robots. I will introduce our quadruped platforms: HOUND, designed for high-speed locomotion on complex terrains with custom electric actuators, and MARVEL, which employs specialized magnetic feet for versatile climbing. I will then discuss the control algorithms driving these robots, which leverage model predictive control and reinforcement learning techniques. Our results demonstrate that HOUND achieves speeds of up to 9.5 m/s while MARVEL traverses ceilings at 0.5 m/s and vertical walls at 0.7 m/s.
Bio
Prof. Hae-Won Park is the director of the Humanoid Robot Research Center and an Associate Professor of Mechanical Engineering at KAIST. He received his B.S. and M.S. from Yonsei University and his Ph.D. from the University of Michigan, Ann Arbor. Before joining KAIST, he was an Assistant Professor at the University of Illinois at Urbana-Champaign and a postdoctoral researcher at MIT. His research focuses on learning, model-based control, and robotic design, especially in legged and bio-inspired robots. Prof. Park has received several prestigious awards, including the NSF CAREER Award and the RSS Early-Career Spotlight Award, and serves on editorial boards for top robotics journals and conferences such as IJRR and IEEE ICRA.
Recording
Abstract
While miniaturization has been a goal in robotics for nearly 40 years, roboticists have struggled to access sub-millimeter dimensions without making sacrifices to on-board information processing. Consequently, microrobots often lack the key features that distinguish their macroscopic cousins from other machines, namely on-robot systems for decision making, sensing, feedback, and programmable computation. This talk is about bringing the power of information processing to robots too small to see by eye. I’ll show how to fit circuits for memory, sensing, computing, propulsion, power, and communication into a single robot comparable in size to a paramecium. Using these circuits, microrobots can carry out useful, autonomous behaviors like temperature reporting and gradient climbing without additional instructions or supervision. Finally, I’ll discuss some near-term applications of smart, inexpensive, programmable microrobots, including peripheral nerve repair and nanomanufacturing.
Bio
Marc Miskin is an assistant professor of Electrical and Systems Engineering at the University of Pennsylvania. He received a BSc in Physics from Rensselaer Polytechnic Institute and a PhD in Physics from the University of Chicago. Prior to joining the faculty at Penn, he was a Kavli Postdoctoral Fellow for Nanoscale Science at Cornell University. Currently, he is interested in the design and fabrication of microscopic robots. His work has won awards from the Air Force Office of Scientific Research and the Army Research Office, as well as a Sloan Research Fellowship and a Packard Fellowship, and has been featured in several media outlets, including the New York Times, the MIT Tech Review’s 35 under 35 list, CNN, the BBC, and NPR. Outside of research, he is actively involved in science education, frequently presenting at science museums.
Recording
Abstract
Long-horizon planning is fundamental to our ability to solve complex physical problems, from using tools to cooking dinner. Despite recent progress in commonsense-rich foundation models, the ability to do the same is still lacking in robots, particularly with learning-based approaches. In this talk, I will present a body of work that aims to transform Task and Motion Planning—one of the most powerful computational frameworks in robot planning—into a fully generative model framework, enabling compositional generalization in a largely data-driven approach. I will explore how to chain together modular diffusion-based skills through iterative forward-backward denoising, how to formulate TAMP as a factor graph problem with generative models serving as learned constraints for planning, and how to integrate task and motion planning within a single generative process. I'll conclude by discussing the “reasoning” paradigms of recent robot foundation models, the need to move beyond language-aided chain of thought, and the prospect of extracting planning representations from demonstrations.
Bio
Danfei Xu is an Assistant Professor in the School of Interactive Computing at Georgia Institute of Technology, where he directs the Robot Learning and Reasoning Lab (RL2). He is also a researcher at NVIDIA AI. He earned his Ph.D. in Computer Science from Stanford University in 2021. His research focuses on machine learning methods for robotics, particularly in manipulation planning and imitation learning. His work has received Best Paper nominations at the Conference on Robot Learning (CoRL) and IEEE Robotics and Automation Letters (RA-L). His research is funded by the National Science Foundation, Autodesk Research, and Meta Platforms.
Recording
Abstract
Recent advances in robot materials and algorithms have enabled new levels of adaptive and versatile behavior. In this talk I will describe my lab’s efforts to create robots, mechanisms, and control algorithms capable of adaptive behaviors in response to perturbations from the environment or body morphology. I will first describe how the modulation of material curvature can enable reconfigurable robot appendages and bodies, culminating in new modes of robot manipulation and locomotion. Next, I will describe our work on flapping wing actuation through bioinspired autonomous oscillators with adaptive and responsive dynamics. Lastly, I will describe how to design adaptive proprioceptive feedback laws which can enable robot groups to synchronize locomotion purely through contact interactions. In total this work illustrates how simple mechanisms and algorithms can give rise to a rich design space for dynamic and responsive robots.
Bio
Nick Gravish is an associate professor in the Mechanical & Aerospace Engineering department and a faculty member in the Contextual Robotics Institute. Dr. Gravish received his PhD from Georgia Tech in 2013 and was a postdoctoral fellow at Harvard from 2013 to 2016, supported by a James S. McDonnell fellowship in complex systems science. His research focuses on bio-inspiration, biomechanics, and robotics, toward the development of new locomotion systems.
Abstract
Current robots are primarily rigid machines that exist in highly constrained or open environments such as factory floors, warehouses, or fields. There is an increasing demand for more adaptable, mobile, and flexible robots that can manipulate or move through complex environments. This problem is currently being addressed in two complementary ways: (i) learning and control algorithms to enable the robot to better sense and adapt to the surrounding environment and (ii) embedded intelligence in mechanical structures. My vision is to create robots that can mechanically conform to the environment or objects that they interact with to alleviate the need for high-speed, high-accuracy, and high-precision controllers. In this talk, I will give an overview of our key challenges and contributions to developing mechanically conformable robots, including compliant dexterous manipulation and heterogeneous collaborative robots.
Bio
Zeynep Temel is an Assistant Professor with the Robotics Institute at Carnegie Mellon University. Her research focuses on developing robots that can mechanically conform to the environment or objects that they interact with. Prior to joining RI, she was a postdoctoral fellow in the Microrobotics Lab at Harvard University, and she received her Ph.D. from Sabanci University, Turkey. In 2020, she was selected as one of 25 members of the Young Scientists Community of the World Economic Forum.
Recording
Abstract
Foundation models, such as GPT-4 Vision, have marked significant achievements in the fields of natural language and vision, demonstrating exceptional abilities to adapt to new tasks and scenarios. However, physical interaction—such as cooking, cleaning, or caregiving—remains a frontier where foundation models and robotic systems have yet to achieve the desired level of adaptability and generalization. In this talk, I will discuss the opportunities for incorporating foundation models into classic robotic pipelines to endow robots with capabilities beyond those achievable with traditional robotic tools. The talk will focus on three key improvements in (1) task specification, (2) low-level, and (3) high-level scene modeling. The central idea behind this research is to translate the commonsense knowledge embedded in foundation models into structural priors that can be integrated into robot learning systems. This approach leverages the strengths of different modules (e.g., VLM for task interpretation and constrained optimization for motion planning), achieving the best of both worlds. I will demonstrate how such integration enables robots to interpret instructions provided in free-form natural language, and how foundation models can be augmented with additional memory mechanisms, such as an action-conditioned scene graph, to handle a wide range of real-world manipulation tasks. Toward the end of the talk, I will discuss the limitations of the current foundation models, challenges that still lie ahead, and potential avenues to address these challenges.
Bio
Yunzhu Li is an Assistant Professor of Computer Science at Columbia University. Before joining Columbia, he was an Assistant Professor at UIUC CS and spent time as a Postdoc at Stanford, collaborating with Fei-Fei Li and Jiajun Wu. Yunzhu earned his PhD from MIT under the guidance of Antonio Torralba and Russ Tedrake. His work lies at the intersection of robotics, computer vision, and machine learning, with the goal of helping robots perceive and interact with the physical world as dexterously and effectively as humans do. Yunzhu’s work has been recognized with the Best Paper Award at ICRA, the Best Systems Paper Award at CoRL, and the Best Paper Awards at multiple workshops. Yunzhu is also the recipient of the AAAI New Faculty Highlights, the Sony Faculty Innovation Award, the Adobe Research Fellowship, and was selected as the First Place Recipient of the Ernst A. Guillemin Master’s Thesis Award in AI and Decision Making at MIT. His research has been published in top journals and conferences, including Nature, Science, RSS, NeurIPS, and CVPR, and featured by major media outlets such as CNN, BBC, and The Wall Street Journal.
Recording
Abstract
In the last few decades, most robotics success stories have been limited to structured or controlled environments. A major challenge is to develop robot systems that can operate in complex or unstructured environments corresponding to homes, dense traffic, outdoor terrains, public places, etc. In this talk, we give an overview of our ongoing work on developing robust planning and navigation technologies that use recent advances in computer vision, sensor technologies, machine learning, and motion planning algorithms. We present new methods that utilize multi-modal observations from an RGB camera, 3D LiDAR, and robot odometry for scene perception, along with deep reinforcement learning for reliable planning. The latter is also used to compute dynamically feasible and spatially aware velocities for a robot navigating among mobile obstacles and uneven terrains. We have integrated these methods with wheeled robots, home robots, and legged platforms and highlight their performance in crowded indoor scenes, home environments, and dense outdoor terrains.
Bio
Dinesh Manocha is the Paul Chrisman-Iribe Chair in Computer Science & ECE and a Distinguished University Professor at the University of Maryland, College Park. His research interests include virtual environments, physically-based modeling, and robotics. His group has developed a number of software packages that are standard and licensed to 60+ commercial vendors. He has published more than 800 papers and supervised 53 PhD dissertations. He is a Fellow of AAAI, AAAS, ACM, IEEE, and NAI, a member of the ACM SIGGRAPH and IEEE VR Academies, and a recipient of the Bézier Award from the Solid Modeling Association. He received the Distinguished Alumni Award from IIT Delhi and the Distinguished Career in Computer Science Award from the Washington Academy of Sciences. He was a co-founder of Impulsonic, a developer of physics-based audio simulation technologies, which was acquired by Valve Inc in November 2016.
Recording
Abstract
Scaling up data and computation are regarded today as the key to achieving unprecedented performance in many perception tasks. Biological perception is characterized, though, by principles of efficiency implemented through symmetry and efficient sensing. By respecting the symmetries of the problem at hand, models can generalize better, often requiring fewer parameters and less data to learn effectively. Moreover, they provide insights into the underlying structures and symmetries of the data, which can be invaluable in developing more robust and interpretable models. The incoming sensing bandwidth is remarkably low for vision in biological brains, while current computer vision systems are based on full video frames and many views of the world. We will present an active approach to view and touch selection based on information-theoretic principles. We will finish with a new sensor paradigm that senses only visual events rather than whole scenes and show how it can solve basic tasks fundamental to embodied intelligence.
Bio
Kostas Daniilidis is the Ruth Yalom Stone Professor of Computer and Information Science at the University of Pennsylvania, where he has been on the faculty since 1998. He is an IEEE Fellow. He was the director of the GRASP laboratory from 2008 to 2013. He obtained his undergraduate degree in Electrical Engineering from the National Technical University of Athens in 1986 and his PhD in Computer Science from the University of Karlsruhe in 1992. He received the Best Conference Paper Award at ICRA 2017. He co-chaired ECCV 2010 and 3DPVT 2006. His most cited works have been on event-based vision, equivariant learning, 3D human pose, and hand-eye calibration.