“A computing system that permits the asking of only certain kinds of questions, that accepts only certain kinds of ‘data,’ and that cannot even in principle be understood by those who rely on it, such a computing system has effectively closed many doors that were open before it was installed.”
– Joseph Weizenbaum, creator of ELIZA, an early chatbot, in Computer Power and Human Reason (1976)
In the previous entries in this blog series, we introduced an approach to AI in teaching that focuses on helping students develop discernment about these tools and encouraging critical decision-making habits about whether, when, and how to use AI. As described previously, this entails leading discussions with students about these tools, co-creating norms based on those discussions, and even bringing in context from articles on issues like ethics, labor, the environment, how learning works, and more. This post builds on that work by describing how instructors can build shared experiences with AI that help students practice discerning the strengths and flaws of AI outputs.
As the opening quote by seminal chatbot creator Joseph Weizenbaum suggests, much like our first post in this series, the design of a technology can steer users without their noticing or fully understanding it, sometimes even changing their habits. The challenge won’t always be as clear-cut as deliberate attempts to take shortcuts or “cheat.” As the students quoted in our second post describe it, it’s easy to feel like a mindful, empowered, and self-directed user of the tool and only later to find that you lost something important (albeit difficult to describe) in exchange for its convenient boost.
By allowing students to experiment with AI, scrutinize that experience, and make their own judgments and decisions, you can help build important skills that last beyond the individual quarter.
Share an AI Experience, and Show That Their Experience Matters
After describing all of the well-intentioned and thoughtful ways he used AI, the anonymous UChicago student quoted in the previous post in this series laments the loss of some creative and cognitive opportunities and suggests that the solution is to ban AI. A more effective response to the false sense of self-direction these tools promote, however, may be to preserve student autonomy while providing context and cognitive mentorship.
The power of giving people the knowledge to make informed choices, and the latitude to exercise their autonomy, stands out again and again in settings beyond education. In his research on campaigns to promote behaviors like quitting smoking or adhering to unpleasant but necessary medication, particularly among people ages 10-25, psychology researcher David Yeager found that people made better choices when the campaigns appealed to their desire to feel:
- Respected as someone who can make their own decisions (rather than someone incapable of doing so who must be coerced)
- That their choices have some significance (asserting agency and independence through those choices matters especially to audiences like novice learners and young people)
If, similarly, we consider AI use a long-term decision-making challenge that we want either to guide in positive directions or to discourage altogether, it’s worth weighing the promise of these social approaches against temporary fixes like detection tools, whose reliability lags behind increasingly capable generation tools. This approach might entail:
- Conveying trust in students’ ability to make thoughtful decisions about their learning (through practices like the collaborative norms creation described in the previous post)
- Providing information that allows them to make informed choices (as with the readings shared in our previous post)
- Re-envisioning the value propositions for their options (as we’ll offer some ideas for in the next section)
Designing a New Experience with Generative AI
While it may seem counterintuitive to instructors who want to prevent AI use in their classes, it may be worth taking the time to design an AI experience you and your class can share and reflect on. Using a tool such as PhoenixAI (UChicago’s private alternative to ChatGPT), a few thoughtful prompting exercises, and some reflection questions, you and your students can put the value proposition of these tools to the test. Even if you don’t intend to allow AI use broadly in your class, discussing the benefits and drawbacks students notice in collaborative AI experimentation can support the autonomous student discernment of AI tools this blog series addresses. As with Gallant and Rettinger’s norms creation activity shared in the previous post, the reflections can start with simple questions:
- Does AI do the task well?
- Does it help you learn, or does it merely simulate the appearance of learning?
- How well does it reflect your body of knowledge in its outputs?
- Are the outputs accurate or flawed in some way?
- Were there moments where it felt like you missed or lost out on something, as the students mentioned at the opening of this post did?
But what kind of prompting exercises might you use to set up these discussions? Let’s start with a couple of examples of AI use from UChicago instructors.
Prompting to Demonstrate Features of AI
Demonstrate How AI Outputs Can Be Wrong or Superficial
If you’re hoping to demonstrate the value of doing the work on your own, scrutinizing and reflecting on AI outputs relevant to your discipline together can be an effective exercise.
- Urdu instructor Romeena Kureishy’s work is a good example of this approach. When asking ChatGPT to play the conversational role of a customs agent in the Karachi airport, Kureishy and her students found outputs that were grammatically incorrect, nonsensical, and insufficient in response to cultural idioms.
- Hoyt Long, a professor of Japanese at UChicago, had his Humanities Core students compare ChatGPT’s demonstrated “understanding” of an assigned text with their own insights from reading the text. Students’ attempts to prompt for insightful conversations in historical personae both highlighted some limitations of these tools and engaged them with resources from the university library.
Is a Complex Issue Being Oversimplified without My Noticing?
You and your class may benefit from starting with a tool you’re likely to have all already used (whether you meant to or not): Google’s AI Overviews. You may have noticed in the past year that when you search for something in Google, the familiar web results are now preceded by an LLM-generated paragraph on the topic that links to a few sources. The convenience of this tool, as Google describes it, is to “Let Google do the searching for you.” It has been added to user search results automatically, and Google encourages users to employ this AI addition for everything from basic reference questions to larger-scale planning.
This tool could be a useful starting point for you and your students for a few reasons:
- It exemplifies AI you don’t opt into but should be discerning of
- It demonstrates how tools can be designed to change our habits
- It’s a good bite-sized sample to assess for the quality of its sources and their summarization, with a somewhat lower cost in time and setup than other activities.
How Are My Habits Changing?
A recent study by Pew Research Center on Google AI summaries reported a few key findings:
- “Google users who encounter an AI summary are less likely to click on links to other websites than users who do not see one.
- Google users are more likely to end their browsing session entirely after visiting a search page with an AI summary than on pages without a summary.
- The most frequently cited sources in both Google AI summaries and standard search results are Wikipedia, YouTube and Reddit.”
Why is this an issue? As you likely already know, LLMs are designed to provide users answers that are fluent, easy to read, and by default noncontroversial (sometimes to the point of sycophancy, which we’ll discuss later). Combining that technology with a search engine that people have trusted for decades means that users are primed to accept reductive or simplistic answers about complex bodies of knowledge and contested issues that are presented using a falsely neutral “view from nowhere,” explains Alex Hanna, Director of Research at the Distributed AI Research Institute.
In a conversation with Hanna, Safiya Noble, a professor of African American Studies, Gender Studies, & Information Studies at UCLA and author of the book Algorithms of Oppression, describes the danger more starkly:
“…the truth is…people who use search engines believe that what they find there is credible and trustworthy. And part of the reason is because when you look for very banal things in the prior logics of search, where people used minimal numbers of keywords, to look for something,…they get a lot of things that are kind of seemingly banal. And so they seem not controversial. But when you start asking more complex questions, it completely goes off the rails…So to me, to see Google moving in the direction of encouraging people to ask more complex questions, especially about society…I think we know where this is headed…Very quickly, you’re going to have a generation of people who just trust that the machine has done the right summarization and who will lose sight of all of the contested or multiple or millions of kinds of web pages that they could also have explored.”
This risk offers not just an opportunity but perhaps also an imperative to teach students the habit of mind of scrutinizing neat answers, especially when they come from tools that are nearly impossible to opt out of. If you’re looking for resources to help students develop those habits of mind, consider the following:
- Customized research instruction from the UChicago Library
- “Information Literacy Instruction in an AI Landscape,” a handy guide to other support the library can provide, available in the self-enroll Canvas site for instructors, Teaching in the AI Landscape
As with the earlier example where Hoyt Long’s students found ChatGPT outputs insufficiently nuanced compared to their own library research, you may find it helps students develop that discernment if they first build their own knowledge and then compare it to the neater answers they find in these difficult-to-avoid summaries. What’s more, a comparative reflection can be folded into an existing project without adding an entirely new unit to your likely busy course schedule.
In The AI Con: How to Fight Big Tech’s Hype and Create the Future We Want, Hanna and her co-author Emily Bender describe the vital function of this activity in a way that may be useful to your students:
“Choosing a particular link and evaluating what we find there allows us to both situate the information we have found in its context and add to our understanding of how sources relate to each other and to our information needs. Finding contradictory answers on different pages— and, crucially, knowing the source of each—allows us to learn what kinds of knowledge are contested, who is doing the contesting, and how each of those sources fits into our own positions.” (Bender and Hanna, 172)
How Do You Know It Isn’t Just Telling You What You Want to Hear?
For a slightly more time-intensive but instructive experience, you may also want to invite students to scrutinize the contents of an extended conversation with an AI tool, rather than just a quick summary. As educational-technology scholar Dr. Punya Mishra points out, there’s a real danger in asking a conversational tool built to satisfy the user for easy answers about a subject you don’t know well. ChatGPT’s default manner of speaking is designed to be pleasing and helpful, and it’s in OpenAI’s best interest to keep users on the site, generating free training data as they write prompt after prompt. This dynamic carries an inherent risk of sycophantic conversations, one that OpenAI even had to acknowledge this spring.
Mishra sums it up as an equation, in which easy answers plus sycophantic systems plus uninformed users equals conversational drift:
“Easy answers are easy to come by…Then combine that with the design of these large language model systems which are sycophantic in the extreme, which means they’ll always agree with you. If…you have no expertise in the domain so you’re going to trust [the LLM] and so invariably what’ll end up happening is this sort of conversational drift where you will move further and further away.” (Modem Futura podcast)
There are two main options in this approach:
- Invite students to scrutinize outputs related to your discipline, using your subject matter expertise as a safety net, as Romeena Kureishy did in her Urdu classes mentioned above.
- Ask students to scrutinize outputs related to something they know well. Putting them in the place of the expert not only emphasizes to them that their knowledge is valuable, but it also creates an opportunity to engage the strong feelings that come up when an issue you care about is misrepresented. With that in mind, you might consider the prompt activity shared below, from the student-facing Canvas site Getting Started with AI.
Test the AI (and Yourself): Argue with It about Something You’re an Expert In
To help students get a sense of how different the experience of using AI is when you’re knowledgeable about a topic (like a teacher in a class) versus less expert (like you might be when you first start studying something), it can be illuminating for them to ask a tool like PhoenixAI to talk with them about their own areas of specialized knowledge. It can be an interest or a hobby, as long as it’s something they know well and have strong opinions on. Whether it’s sports, a musical artist, film, or something else, they simply need to know enough to fill in this prompt (adapted from the book Teaching Effectively with ChatGPT); you can copy and paste the template below.
You are an expert in [CHOOSE ONE POINT OF VIEW IN YOUR FIELD OF INTEREST]. I am an expert in [INSERT OPPOSING VIEWPOINT]. We are having a debate in which we will each make the case for our favorite [INSERT RELEVANT POSITION]. Your task is to counter my arguments with expert knowledge. Let’s keep our debate respectful, insightful, and rigorous.
In Closing
As Karen Hao notes in her recent book Empire of AI, part of the answer to the issues you see created by AI is to demystify these systems:
“Finally, to redistribute power,…we need broad-based education. The antidote to the mysticism and mirage of AI hype is to teach people about how AI works, about its strengths and shortcomings, about the systems that shape its development, about the worldviews and fallibility of the people and companies developing these technologies.”
ATS can help you create the conditions for students not only to use AI effectively, but also to choose the situations where it serves the goal of learning. If you need additional context on AI for yourself or your students, you may want to check out these Canvas courses:
- Teaching in the Generative AI Landscape: ATS, the CCTL, and the Library have joined forces to create a Canvas course to help faculty and instructors as they think through the teaching and learning implications of generative artificial intelligence. The course is asynchronous and allows self-enrollment.
- Getting Started with AI: Academic Technology Solutions and the Library have collaborated to provide students with guidance on learning and AI, with information on the tools available from the university, how to use those tools, and what it means to learn and work in a world where AI tools are available. While this site is directed toward students, you may find it helpful to review its messaging on learning and even test out some of the prompts within, which give users the opportunity to assess outputs for accuracy and for short-circuited learning.
Subscribe to ATS’ blog and newsletter for updates on new resources to support you in using these tools. For individual assistance, you can visit our office hours, book a consultation with an instructional designer, or email academictech@uchicago.edu. For a list of upcoming ATS workshops, please visit our workshop schedule.
Header Image by Mimi Thian