Summer Undergraduate Research in Engineering (SURE) offers summer research internships to outstanding current U-M undergraduate students who have completed their sophomore or junior year by the time of their internship (preference is given to those who have completed three years of study). Participants conduct 10-12 weeks of full-time summer research with some of the country’s leading faculty in a wide range of engineering disciplines. The program gives students the opportunity to assess their interest and potential in pursuing research at the master’s or Ph.D. level in graduate school.
All participants must apply online through the SURE website. Accepted applicants receive guidance from a faculty advisor in a College of Engineering research facility and a $6,000 stipend; they also attend regular meetings and take part in activities that help prepare them for graduate school.
Important Dates
- Application Opens: November 14, 2025
- Application Due: January 9, 2026
- Review Period: January - February
- Offers Begin: Late February - March
- Projects Begin: May 2026
Selection Process
Eligible applications are reviewed by department staff and then provided to the Faculty Mentors, who review the application materials to make their selections. A mentor may reach out to you directly for further information. You do not need to do anything else or contact the Faculty Mentors, but if you have specific questions about a SURE project, you are welcome to reach out to the listed Faculty Mentor. Any offer notification will be sent by the SURE Manager. Learn more: https://sure.engin.umich.edu/
Frequently Asked Questions
Should I contact the Faculty Mentor before applying?
You’re welcome to, but it’s not a requirement. It’s a good idea to reach out if you have specific questions about the project or its expectations, or simply wish to begin a connection. Any Alternate Contacts listed below are typically Ph.D. students you could be working closely with, and they may also be a good source of information.
Can I take summer classes while participating in SURE?
Yes, as long as they don’t interfere with your SURE commitment. Taking one class could work, but any more would be difficult to manage while working 30-40 hours a week in the lab. It is also advised that you discuss this with your faculty mentor.
Can students who are not current undergraduates participate?
No, unfortunately. The SURE program is only available to current undergraduate students.
Additional SURE FAQs can be found here.
The 2026 SURE Application opens November 14, 2025.
Robotics SURE Projects
Faculty Mentor: Yulun Tian, yulunt@umich.edu
Prerequisites: Strong programming background (EECS 281 or equivalent). Experience with SLAM (ROB 330) and familiarity with ROS environments preferred.
Project Description: This project focuses on integrated software and hardware development for large-scale robotic perception. The goal is to enable real-time, robust simultaneous localization and mapping (SLAM) in unstructured environments using a combination of 3D vision and optimization-based techniques. The student will gain experience with RGB-D cameras, LiDAR, inertial measurement units (IMUs), and embedded computing platforms such as the NVIDIA Jetson Orin.
The project includes several milestones. The first is collecting data with our in-house mobile sensor rig and quadruped robot. The second is benchmarking state-of-the-art SLAM algorithms to evaluate their performance. Finally, we will deploy and improve the SLAM system for real-time operation onboard the robot, integrating the perception, mapping, and state-estimation modules into a unified software stack.
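As a purely illustrative example of the benchmarking milestone: a standard metric for comparing an estimated SLAM trajectory against ground truth is absolute trajectory error (ATE). The sketch below, which is not the lab’s evaluation code, rigidly aligns the two trajectories with a Kabsch/Umeyama-style least-squares fit and reports the RMSE; full evaluation suites also handle timestamp association and rotational error.

```python
import numpy as np

def ate_rmse(gt: np.ndarray, est: np.ndarray) -> float:
    """Absolute trajectory error (RMSE) after rigid alignment.

    gt, est: (N, 3) arrays of time-synchronized positions.
    Minimal sketch; real tools (e.g., the TUM scripts) also handle
    timestamps, scale, and rotation error.
    """
    # Center both trajectories.
    mu_gt, mu_est = gt.mean(axis=0), est.mean(axis=0)
    G, E = gt - mu_gt, est - mu_est
    # Optimal rotation via SVD of the cross-covariance (Kabsch).
    U, _, Vt = np.linalg.svd(E.T @ G)
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])
    R = (U @ S @ Vt).T  # rotation taking est into the gt frame
    aligned = (R @ E.T).T + mu_gt
    return float(np.sqrt(np.mean(np.sum((aligned - gt) ** 2, axis=1))))

# Synthetic example: a noisy, rotated copy of a ground-truth path.
t = np.linspace(0, 2 * np.pi, 200)
gt = np.stack([np.cos(t), np.sin(t), 0.1 * t], axis=1)
theta = 0.3
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1]])
est = gt @ Rz.T + np.random.default_rng(0).normal(0, 0.02, gt.shape)
print(f"ATE RMSE: {ate_rmse(gt, est):.3f} m")
```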
Research Mode: In Lab
Faculty Mentor: Dimitra Panagou, dpanagou@umich.edu
Alternative Contact: Taekyung Kim, taekyung@umich.edu
Prerequisites: Proficiency in Python; basic understanding of robotics or control systems; familiarity with ROS2, OpenCV; interest in computer vision and/or machine learning
Project Description: This project involves deploying and experimentally validating advanced planning and control algorithms for safe social navigation on real hardware. The student will work with mobile robots to implement a social navigation stack, which combines a reinforcement learning (RL) policy with control barrier functions (CBFs) to ensure safety around other agents. Experiments will be conducted in a dynamic environment that emulates a realistic social setting, such as a supermarket or hospital hallway. Another key component of this project will be setting up an external vision system (e.g., a ceiling-mounted camera) and using computer vision algorithms (such as those in OpenCV) to track the real-time poses of the ego-robot, other robots, and human participants.
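As a hedged sketch of how an RL policy and a CBF can be combined (generic single-integrator dynamics and made-up gains, not the lab’s navigation stack): the policy proposes a nominal velocity, and the CBF filter minimally adjusts it so a distance constraint to another agent stays satisfied. With a single affine constraint, the usual CBF quadratic program has a closed-form projection.

```python
import numpy as np

def cbf_filter(x, p_obs, u_nom, d_safe=0.5, alpha=1.0):
    """Minimally modify a nominal command to satisfy one CBF constraint.

    Single-integrator model (x_dot = u) with barrier
    h(x) = ||x - p_obs||^2 - d_safe^2; safety requires
    h_dot + alpha * h >= 0, i.e. 2 (x - p_obs) . u + alpha * h >= 0.
    With one affine constraint the QP projection is closed-form.
    """
    a = 2.0 * (x - p_obs)          # gradient term multiplying u
    b = alpha * (np.dot(x - p_obs, x - p_obs) - d_safe ** 2)
    slack = a @ u_nom + b
    if slack >= 0.0:               # nominal command is already safe
        return u_nom
    return u_nom - (slack / (a @ a)) * a   # project onto constraint boundary

# Toy usage: robot heading straight at a pedestrian.
x = np.array([0.0, 0.0])
pedestrian = np.array([1.0, 0.0])
u_rl = np.array([1.0, 0.0])        # stand-in for an RL policy output
print(cbf_filter(x, pedestrian, u_rl))   # command is slowed to stay safe
```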
Research Mode: In Lab
Faculty Mentor: Steven Ceron, sceron@umich.edu
Alternative Contact: Xinyue Xu, xxinyue@umich.edu
Prerequisites: Python is a must. Experience building electronics is a plus.
Project Description: Our lab is developing algorithms and hardware that enable robots to come together and dock onto each other’s perimeters to form desired structures that can change over time depending on how the task at hand changes. For example, we aim to have robots that can dock onto each other’s perimeters such that they can encapsulate an object and collectively transport it around obstacles toward some desired location. The student working on this project will further develop a simulation framework from our lab to study swarm reconfiguration algorithms that can then be tested on real hardware we have been building in the lab. The student will also get to work with and potentially iterate on the modular robots being built in the lab to realize some of the simulated behaviors. The student will get to join for all lab activities and meetings.
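To give a flavor of what such a simulation framework computes (the dynamics, gains, and tolerances below are all hypothetical, not the lab’s implementation): each robot steers toward a docking ring around the object, is pushed apart from neighbors it would overlap, and holds position once docked.

```python
import numpy as np

def swarm_step(pos, target, r_robot=0.1, r_target=0.5, dt=0.05,
               k_attract=1.0, k_repel=0.3, dock_tol=0.02):
    """One update of a toy encapsulation behavior (not the lab's framework).

    pos: (N, 2) robot positions; target: (2,) object center.
    Robots are attracted to the target's perimeter, repelled from
    neighbors closer than two robot radii, and freeze once docked.
    """
    new_pos = pos.copy()
    for i in range(len(pos)):
        to_target = target - pos[i]
        dist = np.linalg.norm(to_target)
        radial_err = dist - (r_target + r_robot)   # distance to docking ring
        if abs(radial_err) < dock_tol:
            continue                               # docked: hold position
        v = k_attract * radial_err * to_target / (dist + 1e-9)
        for j in range(len(pos)):
            if j == i:
                continue
            d = pos[i] - pos[j]
            gap = np.linalg.norm(d)
            if gap < 2 * r_robot:                  # too close: push apart
                v += k_repel * d / (gap ** 2 + 1e-9)
        new_pos[i] = pos[i] + dt * v
    return new_pos

rng = np.random.default_rng(1)
pos = rng.uniform(-1.5, 1.5, size=(8, 2))
for _ in range(400):
    pos = swarm_step(pos, target=np.zeros(2))
print(np.linalg.norm(pos, axis=1))  # radii should cluster near 0.6
```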
Research Mode: In Lab
Faculty Mentor: Steven Ceron, sceron@umich.edu
Alternative Contact: Tianyi Hu, tianyihu@umich.edu
Prerequisites: Python is a must. Experience designing and building mechanical and electronic components is also required.
Project Description: This project is in the field of autonomous collective construction, where multiple robots collaborate to build a desired structure. Our lab is interested in exploring distributed and centralized coordination schemes that enable robots to work together to build complex structures; we are approaching this through simulation and physical experiments. The student working on this project will be in charge of continuing the development and testing of a simulation framework used to explore how a group of robots needs to move about a structure it is collectively building, while taking into consideration limits on pairwise communication, knowledge of the surroundings, and estimation of neighbors’ states. The student will also be in charge of working with and further developing the real hardware to begin implementing the simulated collective construction behaviors on the physical platform.
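One ingredient such a simulation needs is the pairwise-communication limit itself. The sketch below (a generic illustration, not the lab’s framework) builds a radius-limited communication graph and shows what state each robot can actually know about its neighbors.

```python
import numpy as np

def comm_graph(pos: np.ndarray, comm_radius: float) -> list[set[int]]:
    """Adjacency sets for robots that can talk to each other.

    pos: (N, 2) robot positions. A pair communicates only when their
    separation is within comm_radius -- a stand-in for the pairwise
    communication limits the project description mentions.
    """
    n = len(pos)
    nbrs = [set() for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(pos[i] - pos[j]) <= comm_radius:
                nbrs[i].add(j)
                nbrs[j].add(i)
    return nbrs

def local_estimate(pos, nbrs, i):
    """What robot i can know: its own state plus states heard from neighbors."""
    known = {i: pos[i]}
    known.update({j: pos[j] for j in nbrs[i]})
    return known

pos = np.array([[0.0, 0.0], [0.8, 0.0], [2.5, 0.0], [2.6, 0.4]])
nbrs = comm_graph(pos, comm_radius=1.0)
print(nbrs)                      # [{1}, {0}, {3}, {2}]: two disconnected pairs
print(local_estimate(pos, nbrs, 0))
```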
Research Mode: In Lab
Faculty Mentor: Leia Stirling, leias@umich.edu
Prerequisites: N/A
Project Description: In this interdisciplinary research group, we bring together methods from human factors, biomechanics, and robotics. We strive to understand the physical and cognitive interactions for goal-oriented human task performance and support operational decision making that relies on manual task performance. These goals may include reducing musculoskeletal injury risks, supporting telehealth, and improving technology usability.
There are different projects students may support. In your application, please note which project(s) you are interested in joining.
Upper Extremity Exoskeletons for Industrial Applications: Exoskeletons are currently being evaluated for many different applications. In this project, the student may support the development of a study and/or data analysis related to a powered elbow exoskeleton designed to support activities of daily living.
Aerospace Human Factors: In this project, the student may support the development of a study and/or data analysis related to studying a simulated lunar/Martian extravehicular activity.
Home-based Social Robots: In this project, students will support software development for a social robot that aids in practicing mental health activities.
Research Mode: In Lab
Faculty Mentor: Robert Gregg, rdgregg@umich.edu
Prerequisites: N/A
Project Description: The student will assist PhD students in conducting experiments with powered lower-limb prostheses and exoskeletons used by human participants, including individuals with above-knee amputations and older adults. This project will involve operating robot control systems, conducting human-subjects research, and analyzing data.
Research Mode: In Lab
Faculty Mentor: Jason Corso, jjcorso@umich.edu
Student Mentor: Filipos Bellos
Prerequisites: Experience in programming with Python required; familiarity with concepts in deep learning helpful.
Project Description: This project focuses on applying state-of-the-art ML techniques to understand surgical videos. The student will identify publicly available YouTube videos depicting surgical procedures and use our automated pipeline, which combines Whisper for audio transcription with a large language model (LLM) for text-based reasoning, to segment each video into distinct surgical phases. After the automated mapping, the student will manually verify the accuracy of both the temporal segmentation and the correspondence between video segments and actual procedural steps. Through this project, the student will gain hands-on experience with advanced AI tools for multimodal analysis, including speech recognition, language understanding, and video understanding, while contributing to the creation of a valuable multimodal dataset.
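A minimal sketch of the pipeline’s shape, assuming the open-source openai-whisper package for transcription and leaving the LLM as a stub, since the actual model, prompt, and phase taxonomy are not specified here:

```python
import json
import whisper  # pip install openai-whisper (requires ffmpeg)

def transcribe(video_path: str) -> list[dict]:
    """Transcribe a surgical video's audio into timestamped segments."""
    model = whisper.load_model("base")          # small model for a sketch
    result = model.transcribe(video_path)
    return [{"start": s["start"], "end": s["end"], "text": s["text"]}
            for s in result["segments"]]

def segment_phases(transcript: list[dict], llm) -> list[dict]:
    """Ask an LLM to map the transcript to surgical phases.

    `llm` is a placeholder callable (prompt -> JSON string); the lab's
    actual model, prompt, and phase taxonomy are not specified here,
    so this stub only shows the data flow.
    """
    prompt = (
        "Segment this timestamped surgical transcript into distinct "
        "surgical phases. Return JSON: [{start, end, phase}].\n\n"
        + json.dumps(transcript)
    )
    return json.loads(llm(prompt))

# Usage sketch (the student then manually verifies each boundary):
# phases = segment_phases(transcribe("surgery_demo.mp4"), llm=my_llm_call)
```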
Research Mode: In Lab
Faculty Mentor: Jason Corso, jjcorso@umich.edu
Student Mentor: Filipos Bellos
Prerequisites: Experience in programming with Python required; familiarity with concepts in deep learning helpful.
Project Description: This project aims to construct a multimodal reasoning dataset designed to improve the alignment and interpretability of vision–language models. The student will experiment with advanced multimodal models to generate structured, step-by-step explanations for visual question–answer pairs drawn from existing benchmarks (MMMU, CLEVR, GeoQA+, ScienceQA, and CLEVR-Math). Building on prior reasoning datasets, the project will explore methods for curating and refining high-quality reasoning traces that capture intermediate thought processes. The resulting dataset will serve as a foundation for training and evaluating models such as InternVL3, QVQ, and Qwen-VL.
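As an illustration of what one curated record might look like (the field names below are assumptions, not the project’s actual schema), keeping the intermediate steps separate from the final answer makes both independently checkable:

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class ReasoningTrace:
    """One curated example for a multimodal reasoning dataset (illustrative)."""
    source_benchmark: str          # e.g., "ScienceQA", "CLEVR-Math"
    image_path: str
    question: str
    choices: list[str]
    steps: list[str] = field(default_factory=list)  # step-by-step rationale
    answer: str = ""
    generator_model: str = ""      # which VLM produced the draft trace
    verified: bool = False         # set True after human/automatic checks

trace = ReasoningTrace(
    source_benchmark="CLEVR",
    image_path="images/clevr_00042.png",
    question="How many cubes are the same color as the large sphere?",
    choices=["0", "1", "2", "3"],
    steps=["The large sphere is red.",
           "There are two cubes; one is red, one is blue.",
           "Exactly one cube matches the sphere's color."],
    answer="1",
)
print(json.dumps(asdict(trace), indent=2))
```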
Research Mode: In Lab
Faculty Mentor: Jason Corso, jjcorso@umich.edu
Student Mentor: Colin Fuelberth
Preferred Skills: Experience with Python, robotics simulators, and CAD
Project Description: This project focuses on turning existing 3D models into a dataset that is actually usable in physics-based simulators such as Isaac Sim. The student will start from publicly available or lab-provided 3D assets and annotate them with the physical information simulators need, such as mass, density, friction, restitution, and material type. A key part of the project is to research and employ low-cost, efficient strategies for estimating these physical parameters from available object information (for example, simple geometric measurements, material assumptions, or reference lookups), and then to confirm that the resulting values produce realistic behavior in simulation. The student will record the parameters in a consistent schema, check that the models import correctly, and document any assumptions. Through this project, the student will gain experience with physics-oriented asset preparation, validation of simulator realism, and building structured datasets that support future robotics and perception research.
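For example, the “material assumption plus geometric measurement” strategy could look like the following sketch, which uses the trimesh library to compute a mesh volume and a hypothetical per-material density table to derive mass; the density, friction, and restitution values are placeholders, not validated references.

```python
import trimesh  # pip install trimesh

# Illustrative density lookup (kg/m^3); real values would come from
# material references, and the material label itself is an assumption
# made per asset during annotation.
DENSITY = {"wood": 700.0, "plastic": 1000.0, "aluminum": 2700.0}

def estimate_physics(mesh_path: str, material: str, scale: float = 1.0) -> dict:
    """Estimate simulator-ready parameters for one 3D asset (sketch).

    Mass comes from (mesh volume x assumed density); friction and
    restitution are placeholder per-material defaults. A watertight
    mesh is required for a meaningful volume.
    """
    mesh = trimesh.load(mesh_path, force="mesh")
    mesh.apply_scale(scale)                      # asset units -> meters
    if not mesh.is_watertight:
        raise ValueError("volume is unreliable for non-watertight meshes")
    volume = float(mesh.volume)                  # m^3
    return {
        "material": material,
        "volume_m3": volume,
        "mass_kg": volume * DENSITY[material],
        "friction": {"wood": 0.5, "plastic": 0.4, "aluminum": 0.3}[material],
        "restitution": 0.1,                      # placeholder default
    }

# params = estimate_physics("assets/crate.obj", material="wood", scale=0.01)
```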
Research Mode: In Lab
Faculty Mentor: Jason Corso, jjcorso@umich.edu
Student Mentor: Andrew Krikorian
Prerequisites: Experience in programming with Python required; familiarity with concepts in deep learning helpful.
Project Description: Our lab is building an intelligent platform (VIGIL) to support physically-grounded AI agents in rural healthcare delivery. A central component of this platform is the VIGIL Agent, a conversational assistant designed to help healthcare generalists operate effectively in low-resource settings. The agent must be capable of calling external tools and resources dynamically, guided by the context of the ongoing conversation, while grounded to what’s happening in the physical environment. To make this possible, we are exploring new approaches to agentic model design that move beyond traditional prompt engineering and prompt tuning. Instead of relying on handcrafted prompts, our research focuses on training tool-calling behavior directly into a model’s prior weights or developing a dedicated tool-calling head that integrates seamlessly with a base model. The student will contribute by benchmarking agent performance, collecting and curating datasets, and bringing creative ideas to develop and evaluate novel approaches to tool-use learning.
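As a hedged sketch of the “dedicated tool-calling head” idea (the dimensions, tool names, and design below are assumptions; the actual VIGIL architecture may differ substantially): a small module on top of the base model’s hidden state decides whether to call a tool and, if so, which one.

```python
import torch
import torch.nn as nn

class ToolCallingHead(nn.Module):
    """Minimal sketch of a tool-calling head on a frozen base model.

    Takes the base model's final hidden state and predicts (a) whether
    to call a tool at this step and (b) which tool. The training signal
    and integration details are left out of this illustration.
    """
    def __init__(self, hidden_dim: int, num_tools: int):
        super().__init__()
        self.call_gate = nn.Linear(hidden_dim, 1)        # call vs. respond
        self.tool_logits = nn.Linear(hidden_dim, num_tools)

    def forward(self, h: torch.Tensor):
        # h: (batch, hidden_dim) -- e.g., last-token hidden state.
        p_call = torch.sigmoid(self.call_gate(h)).squeeze(-1)
        return p_call, self.tool_logits(h)

TOOLS = ["lookup_dosage", "check_drug_interactions", "schedule_followup"]  # hypothetical
head = ToolCallingHead(hidden_dim=768, num_tools=len(TOOLS))
h = torch.randn(2, 768)                                  # stand-in features
p_call, logits = head(h)
print(p_call.shape, logits.argmax(dim=-1))               # which tool per example
```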
Research Mode: In Lab
Faculty Mentor: Jason Corso, jjcorso@umich.edu
Student Mentor: Andrew Krikorian
Prerequisites: Experience in programming with Python required; familiarity with concepts in deep learning helpful.
Project Description: Our lab is conducting independent research on improving Vision-Language Models (VLMs) through novel attention boosting mechanisms. These mechanisms are designed to help models attend more effectively to visual features and tokens, leading to stronger alignment between visual and textual representations. By enhancing the model’s ability to focus on the most relevant visual information, we aim to improve both accuracy and interpretability across a range of multimodal tasks. The undergraduate assistant will contribute by testing and benchmarking these new attention mechanisms on datasets such as MMMU using various VLMs (InternVL3, QVQ, and Llava R1), assisting with data collection, analysis, and visualization, and contributing creative ideas for evaluating and refining model performance.
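One generic form such a mechanism could take (an illustration, not the lab’s method) is a fixed additive bias on attention logits at visual-token positions before the softmax, which raises the attention mass placed on visual features.

```python
import torch

def boosted_attention(q, k, v, visual_mask, boost=1.0):
    """Scaled dot-product attention with a bias toward visual tokens.

    visual_mask: (seq,) bool, True where the key is a visual token.
    Adding `boost` to those logits before softmax raises the mass on
    visual features -- one generic form of "attention boosting"; the
    lab's actual mechanisms may be learned or more structured.
    """
    d = q.shape[-1]
    scores = q @ k.transpose(-2, -1) / d ** 0.5       # (..., q_len, k_len)
    scores = scores + boost * visual_mask.to(scores.dtype)
    attn = torch.softmax(scores, dim=-1)
    return attn @ v, attn

q = torch.randn(1, 4, 32)          # 4 query tokens
k = v = torch.randn(1, 10, 32)     # 10 keys: first 6 visual, last 4 text
visual_mask = torch.tensor([True] * 6 + [False] * 4)
out, attn = boosted_attention(q, k, v, visual_mask, boost=2.0)
print(attn[0, 0, :6].sum(), attn[0, 0, 6:].sum())  # visual mass dominates
```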
Research Mode: In Lab
Faculty Mentor: Jason Corso, jjcorso@umich.edu
Student Mentor: Aidan Dempster
Prerequisites: Experience in programming with Python required; experience with Isaac Sim or other physics simulators helpful.
Project Description: To effectively collaborate with humans, AI agents must understand when a command is trustworthy and clear enough to execute safely. Ambiguous natural language instructions, such as “get the box,” can cause a robot to perform unexpected or incorrect actions. This project aims to bridge the gap between vision-language understanding and embodied AI by creating a system where a robot proactively asks clarifying questions to resolve ambiguity in a manipulation task. Using a simulated environment, a robotic arm will be tasked with manipulating objects based on these vague commands. The system will leverage a pre-existing language model to generate clarifying questions (e.g., “Do you mean the small box on the left?”) and then use existing manipulation algorithms to execute the now-unambiguous command. The student will be responsible for integrating the language model with the robot controller in simulation and evaluating the system’s task success rate across a variety of ambiguous scenarios.
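The core control flow might look like the following sketch, where grounding is reduced to keyword matching purely for illustration (a real system would ground referents with a vision-language model and phrase questions with an LLM):

```python
from dataclasses import dataclass

@dataclass
class SceneObject:
    name: str          # category, e.g., "box"
    size: str          # "small" / "large"
    position: str      # "left" / "right"

def matching_objects(command: str, scene: list[SceneObject]) -> list[SceneObject]:
    """Toy grounding: keep objects whose attributes appear in the command."""
    words = command.lower().split()
    hits = [o for o in scene if o.name in words]
    for attr in ("size", "position"):
        narrowed = [o for o in hits if getattr(o, attr) in words]
        if narrowed:
            hits = narrowed
    return hits

def respond(command: str, scene: list[SceneObject]) -> str:
    hits = matching_objects(command, scene)
    if len(hits) == 1:
        o = hits[0]
        return f"EXECUTE: pick up the {o.size} {o.name} on the {o.position}"
    options = " or ".join(f"the {o.size} {o.name} on the {o.position}" for o in hits)
    return f"CLARIFY: Do you mean {options}?"   # an LLM would phrase this

scene = [SceneObject("box", "small", "left"), SceneObject("box", "large", "right")]
print(respond("get the box", scene))           # ambiguous -> clarifying question
print(respond("get the small box", scene))     # unambiguous -> execute
```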
Research Mode: In Lab
Faculty Mentor: Jason Corso, jjcorso@umich.edu
Student Mentor: Audrey Douglas
Prerequisites: Experience in programming with Python required; familiarity with concepts in deep learning helpful.
Project Description: This project will focus on developing a system that takes video footage from one or more fixed cameras positioned around a controlled environment, such as a classroom or athletic field. The goal is to process this footage to produce a time-indexed, overhead (bird’s eye) map of activities, allowing users to interact with and analyze movements and events within an intuitive interface. The student will leverage geometric techniques, including homographies to relate camera perspectives, and combine these with state-of-the-art deep learning methods for object detection, matching, and tracking. The final outcome will be a manipulable visualization platform that makes complex spatial and temporal data easy to explore for end users.
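The central geometric step is the ground-plane homography. The sketch below uses OpenCV’s standard homography APIs, but the point correspondences and map dimensions are made up for illustration; trackers typically map the bottom-center of each detection box, since the homography is only valid for points on the ground plane.

```python
import numpy as np
import cv2

# Four ground-plane landmarks (e.g., field corners) clicked in the image,
# paired with their overhead-map coordinates in meters. These numbers
# are placeholders, not a real calibration.
img_pts = np.array([[102, 540], [1180, 560], [900, 210], [320, 200]], dtype=np.float32)
map_pts = np.array([[0, 0], [15, 0], [15, 10], [0, 10]], dtype=np.float32)

H, _ = cv2.findHomography(img_pts, map_pts)

def to_overhead(pixel_xy: tuple[float, float]) -> np.ndarray:
    """Map an image point (e.g., a tracked person's feet) to map meters."""
    pt = np.array([[pixel_xy]], dtype=np.float32)     # shape (1, 1, 2)
    return cv2.perspectiveTransform(pt, H)[0, 0]

print(to_overhead((640, 400)))   # a detection's foot point -> map coordinates
```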
Research Mode: In Lab