Bridging the AI literacy gap in higher education

The rapid advance of large language models (LLMs) such as ChatGPT has transformed the educational landscape, offering exciting new possibilities as well as challenges for learning and teaching. As universities increasingly integrate these tools into academic settings, understanding how students engage with and interpret these models has become essential to maximizing their benefits and mitigating their potential drawbacks.
Researchers in Computer Science and Engineering at the University of Michigan, a leader in the development of campus-specific AI tools, have undertaken a groundbreaking study of how students engage with LLMs. Led by CSE PhD student Snehal Prabhudesai and Prof. Nikola Banovic, the study provides crucial insights into how students interact with these technologies and what support they need to engage with them more effectively.
Their study, titled “‘Here the GPT made a choice, and every choice can be biased’: How Students Critically Engage with LLMs through End-User Auditing Activity,” will be presented at the 2025 ACM Conference on Human Factors in Computing Systems (CHI). Other authors on the paper include CS undergraduate student Ananya Kasi, recent alum Anmol Mansingh, CSE PhD student Anindya Das Antar, and former postdoc Hua Shen.
With LLM use surging, some universities have taken proactive steps by introducing their own platforms, including U-M GPT, a model designed by Information and Technology Services (ITS) that is specifically tailored for the U-M community. Despite these advances, research is still catching up, and much remains to be learned about how students are engaging with these tools and whether they are able to parse AI responses effectively.
To this end, Prabhudesai, Banovic, and their coauthors aimed to better understand the challenges and opportunities presented by LLMs in an academic context. Their study assesses students’ ability to critically engage with AI outputs, combining focus group discussions with an auditing framework whose structured exercises evaluate participants’ understanding of LLM responses.
“Our goal is to examine not just how students are using these technologies, but how we can improve that interaction to enhance learning and understanding,” said Banovic. “We aim to provide practical recommendations that can be implemented to better support students as they increasingly interface with these new technologies.”
To better examine how students engage with LLMs, the researchers developed a framework called PromptAuditor, which measures how well students can identify and analyze biases within LLM outputs. By guiding students through structured auditing exercises during real-time interactions with AI, the team identified barriers students face and areas where they need more support.
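The paper describes PromptAuditor itself in detail; as a loose illustration of the general idea behind end-user bias auditing, the sketch below shows one widely used pattern: issuing prompts that differ only in a single attribute (here, a name) and comparing the model’s responses side by side. Every name in the snippet, including the query_llm function and the prompt template, is a hypothetical placeholder for illustration, not part of the study’s actual framework.

# A minimal, hypothetical sketch of a counterfactual prompt audit.
# query_llm stands in for any real chat-model API call.

def query_llm(prompt: str) -> str:
    """Placeholder for a real LLM call; returns a canned echo here."""
    return f"[model response to: {prompt!r}]"

# Prompts identical except for the applicant's name; systematic differences
# in tone or content across the responses can signal a bias worth auditing.
TEMPLATE = "Write a short recommendation letter for {name}, a CS student."
VARIANTS = ["John", "Maria", "Wei"]

def audit(template: str, names: list[str]) -> dict[str, str]:
    """Collect one response per name so they can be compared side by side."""
    return {name: query_llm(template.format(name=name)) for name in names}

if __name__ == "__main__":
    for name, response in audit(TEMPLATE, VARIANTS).items():
        print(f"--- {name} ---\n{response}\n")

In practice, a student working through an exercise like this would read the paired responses closely, looking for differences in tone, detail, or assumptions that track the varied attribute.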
Focus group discussions facilitated by the research team further revealed that while LLMs can serve as a helpful supplement to academic resources, students without a technical background often need more support. While students were able to recognize biases in AI outputs to some extent, they needed additional guidance to fully understand them.
Through their innovative user auditing methods and discussions with participants, the researchers found that targeted instructional support greatly enhanced students’ AI literacy. With structured guidance, students moved from initial confusion to focused, evaluative interactions with the model, improving their ability to identify biases and deepening their comprehension of LLM outputs.
“We found that structured scaffolding significantly improved students’ ability to critically evaluate AI outputs,” said Prabhudesai. “This suggests that providing more targeted support can help bridge the gap in AI literacy.”
Students who participated in the study echoed these findings. “I was not able to audit the model in the beginning. But then as I went on trying new prompts… I was able to better understand how to audit the model,” one participant noted.
While these findings illustrate key challenges associated with LLMs, including potential biases and students’ need for scaffolded support, they also lay a foundation for policy and instructional improvements that can enhance the use of these tools.
Looking ahead, the researchers intend to collaborate further with ITS and units across U-M to expand AI literacy programs and refine the deployment of LLMs and similar tools. “This research is invaluable as we continue refining our AI tools,” said Ravi Pendse, Vice President for Information Technology and Chief Information Officer at U-M. “It provides us with a clearer pathway to creating more supportive learning resources for our students and our community as a whole.”