Faculty Interviewer: “How attractive, as a student, is it if all of these readings you get in a class you could, yourself, just put them into a podcast and listen to a five or ten minute podcast rather than reading the papers?”

Undergraduate Student: “Oh, it’s very attractive. But the thing is, this brings up a concern of mine. I feel there’s more to these papers…so much time has been…I don’t know, sometimes I feel some ethical issues.”

Interviewer: “How big a concern is this? Big enough to get you to change any habits, or do you just feel bad about listening to the podcast?”

Student: [laughing nervously] “I just feel bad.”

– May 2025 interview on the Modem Futura podcast, in which Andrew Maynard, a faculty researcher at Arizona State University, and Bella F., a student, discuss the use of Google NotebookLM to listen to AI-generated audio summaries of assigned texts rather than reading them.


As the student interview excerpt above suggests, even students who genuinely value their learning and want to discuss it openly with instructors and other authority figures can find themselves taken in by the conveniences of AI tools. This can happen even when they suspect the shortcut is costing them something, whether or not they can fully articulate what is being lost. For a local example, we can look at Josie Barboriak’s May 2024 article in The Chicago Maroon describing a conversation with an anonymous student who reported using ChatGPT for help writing discussion posts on dense readings, understanding neuroscience concepts, and organizing his thoughts. Far from being naive about the tendency of LLMs to produce inaccurate outputs, the student emphasized the need to fact-check. Furthermore, the student could tell that the tool interrupted his learning process at times, and he even expressed regret about using ChatGPT:

“I’m trying not to use it at all this week…For having high-level thoughts, I feel like my soul has to be attached to that…ChatGPT is an easy cop-out, and there’s a part to learning and critical analysis that I’ve missed. It detaches me from what I write about. …I know this is hypocritical, given how much I’ve used it, but I wholeheartedly think it should be banned. I really regret using it so heavily in my first year. And if you go to the A-level [of the Regenstein Library] now, you’ll see so many screens with ChatGPT!”

In this situation, a ban offers a tempting open-and-shut solution. However, as discussed in the first post of this series and as the opening excerpts of this entry demonstrate, increasingly available technologies like generative AI that can automate the performance of intellectual tasks can blur lines for all users, even students who come to class to learn in good faith, as most students do. Students can find themselves pulled in different directions by convenience, good intentions, and pressure to perform. This blog series puts forth strategies meant to address the long-term challenges presented by these technologies and to help strengthen learners’ discernment when it comes to AI use.

In this post, you’ll find resources to help your class reflect on the role that convenient-seeming, easy-to-access automation technologies should play in their learning, and to make thoughtful choices in collaboration with you. These include:

  • Discussion questions for meaningful conversations with students about GAI tools
  • Ideas to address learning goals, GAI tools, and how students perceive group norms about both
  • Reading suggestions to support critical thinking about AI

How Can a Conversation Help with This?

In the new book The Opposite of Cheating, academic integrity experts Tricia Bertram Gallant and David A. Rettinger explain that while a clear policy laying out your non-negotiable boundaries is necessary, it can be even more impactful to invite your students to co-create ethical standards with you and their peers. While co-creating norms may sound overwhelming within the constraints of a short and busy quarter, Gallant and Rettinger have found that even a conversation as short as 20 to 30 minutes of class time (supported by some independent reading) can be extremely beneficial. Here’s how they describe the benefits of these conversations as a supplement to your policy:

“The syllabus can cover the basics of academic integrity–the behaviors you’re not willing to compromise on and the values you think are important. Then, facilitate students talking to one another about integrity. This strategy will take more class time and more facilitative effort on your part than other options. However, it’s not only entirely doable, but hearing your students talking about values and how they want to uphold them in the class can be really rewarding. It will remind you that most of your students want the same thing as you do–an honest environment in which everyone can learn.” (Gallant and Rettinger, 54-55)

Gallant and Rettinger describe a few major ways that the collaborative work of shaping standards can help:

  • Informational: Bringing consensus and clarity about concrete definitions of ethical lines
  • Attitudinal: Helping shape attitudes about what we collectively find to be acceptable in the classroom community
  • Social: Leveraging research showing that social norms within a group can impact decision-making more than the risk of punishment

The authors provide a couple of starter questions for these discussions:

  1. “How do you ethically use GenAI tools to assist you with your learning and academic work?”
  2. “What are you worried about when it comes to the use of GenAI by me or other students?” (Gallant and Rettinger 55)

Meanwhile, former ATS Instructional Technology Fellow Tessa Webb has also written an excellent blog post with additional questions (and polling tool suggestions) you can use to continue that conversation:

  1. Do you utilize AI for your schoolwork? If so, which AI tool(s) do you utilize?
  2. Is the AI tool you use free or subscription-based?
  3. Are there specific AI tools or technologies you believe should or should not be allowed for use in academic assignments?
  4. Do you feel a need to acquire proficiency in using AI tools to be prepared for your future workplace?
  5. What is the primary purpose for which you use AI if you use it?
  6. Which of these actions would you consider “cheating” or “plagiarism”?
  7. What level of AI usage would you be comfortable with other students in your class employing?

Read Together to Build Shared Understanding of GAI

You may, however, be wondering what else can support you in this conversation. Cate Denial, history professor and author of the recent book A Pedagogy of Kindness, provides an excellent example of how the principles described by Gallant and Rettinger can be put into practice. To help her students think critically about generative AI, she has them read articles and discuss topics related to AI, many of which you may already be thinking about, though perhaps in different ways than your students are. Based on Denial’s original curation, along with some contributions of our own, here is a list of reading topics you might browse and select from to ground these important conversations. (You may also want to look for readings related to your specific discipline.)

  • How AI Works: Demystifying and Misperception Busting
  • How AI Can Help Us Learn or Work Against Our Goals
  • Labor Practices Related to AI and Their Consequences
  • Environmental Concerns and Costs Related to AI
  • How AI Serves or Disserves Various Communities
  • Copyright and Data Privacy Issues

After this collaborative reading, Denial leads her students to reason through and collaboratively articulate their own norms, a step that aligns with her overarching value of showing her students she trusts them. The process culminates in an AI use and citation policy that she holds them to for the rest of the term. In much the way Gallant and Rettinger suggest, this work of actively shaping policy can contribute to a feeling of autonomy among students.

In Closing

The work of promoting positive decision-making around AI requires time and planning, but there are many resources available to instructors looking to do this in service of their material and their students. As Peter D. Hershock argues in Buddhism and Intelligent Technology, “In the context of today’s recursively evolving human-technology-world relationship, the difference between being implicated in a system for intelligent and perhaps liberating self-discipline is ultimately not a technical matter of design. It is a profoundly ethical matter of attentive quality and values.” (Hershock, 12)

The next post in this series will continue this work by showing how you can create instructive shared experiences using AI in order to demonstrate these tools’ strengths and limitations relative to the goal of learning. That post will provide context and prompts for hands-on AI exercises, which can create opportunities to reflect on the cognitive impacts of these tools. In the meantime, if you need additional context on AI for yourself or your students, you may want to check out these Canvas courses:

  • Teaching in the Generative AI Landscape: ATS, the CCTL, and the Library have joined forces to create a Canvas course to help faculty and instructors as they think through the teaching and learning implications of generative artificial intelligence. The course is asynchronous and allows self-enrollment.
  • Getting Started with AI: Academic Technology Solutions and the Library have collaborated to provide students with guidance on learning and AI, with information on tools that are available from the university, how to use those tools, and what it means to learn and work in a world where AI tools are available. While this course is directed toward students, you may find it helpful to review its messaging on learning and even test out some of the prompts within, which give users the opportunity to assess AI outputs for accuracy and for short-circuited learning.

Subscribe to ATS’ blog and newsletter for updates on new resources to support you in using these tools. For individual assistance, you can visit our office hours, book a consultation with an instructional designer, or email academictech@uchicago.edu. For a list of upcoming ATS workshops, please visit our workshop schedule.

Header photo by Nahrizul Kadri on Unsplash