“Empirical evidence supplied by several researchers indicates that a decision to adopt or reject a new idea is often not the terminal stage in the innovation-decision process. …At the confirmation stage the individual (or other decision-making unit) seeks reinforcement for the innovation-decision already made, and may reverse this decision if exposed to conflicting messages about the innovation.
At the confirmation stage, the individual seeks to avoid a state of dissonance or to reduce it if it occurs.” (189)
– Everett M. Rogers, Diffusion of Innovations, 5th ed., 2003
As previous posts in this series have noted, our attempts to address AI’s impact on learning benefit from understanding how human behavior changes in reaction to technology. Why does a person choose to start using a technology, and in what context does that happen? This post offers insight into technology adoption behavior at both the individual and the systemic level, as well as ways that instructors can disrupt or influence that process toward the goal of learning, which is often not the purpose of tools that prioritize productivity.
In education, we often discuss the adoption of AI in terms of shortcuts, lost learning, or cheating. However, rather than understanding use in the context of rules and infractions, our efforts to promote or protect learning may benefit from the well-established body of research on how and why people have adopted or rejected new ideas across decades (and even centuries). Since its coalescence into a field of research in the 1960s, the study of the diffusion of innovations has concerned the ways in which people change their behavior in reaction to novel ideas and technologies, from consumer technologies like seat belts and cell phones to public health initiatives like boiling drinking water and quitting smoking. By understanding the process of how people adopt ideas and technologies, we may also find ways to interrupt user behavior and influence student adoption in meaningful and positive ways.
The stages of these decision-making processes, based on examples across many contexts, communities, and technologies, offer us a framework to understand this technology’s adoption and organize our efforts. Whether the goal is to shape AI use in a specific learning-supportive way or to diminish AI use altogether, adoption decisions (as Rogers notes above) are changeable, and analyzing them can be a useful first step.
How Technology Adoption Works (or Doesn’t Work)
Based on research spanning farming, public health, consumer technology, and more, Everett Rogers, one of the seminal researchers in innovation studies, described five stages of adoption:
- Knowledge: Learning that an innovation exists and getting a sense of how it works
- Persuasion: Developing a positive or negative attitude about the innovation
- Decision: Engaging in activities that help you choose to adopt or reject
- Implementation: Starting to use a tool and/or adapting it to your own needs
- Confirmation: Looking for support of your decision to adopt OR reversing that decision based on new information (Rogers, 20)
It’s important to keep in mind that the adoption of a technology like AI differs so greatly between instructors and students (in pace, concerns, impressions of usefulness, etc.) because their circumstances are so different in terms of professional experience, disciplinary expertise, life stages, goals, and more. Consider the adoption stages above and whether any of your experiences thinking about AI in the past few years fall into these categories. (You can also look back at your own adoption of technologies like smartphones, the internet, or social media for comparison.) Those adoption processes may look different based on your age and circumstances, and that frame of reference can help you examine the student adoption process in a new light. Students face a complex set of influences that drive their decision-making, including but not limited to:
- High-pressure situations that incentivize shortcuts
- The paths of least resistance offered by technology
- The standards created within peer networks
- Seemingly straightforward bargains that “assist with” cognitive work while shaping their thinking and habits in subtle ways
We can look at each stage of adoption, ask questions, and find opportunities where the work of education can apply. It may also help to ask how that process has looked for us in the case of AI or some other technology, like mobile phones, the internet, or a new medicine.
Analyzing the Student Adoption Process
In this context, we can ask questions within those same categories to understand the problems and opportunities posed by student AI use in manageable stages.
Questions:
- Knowledge: Right now, who is supplying students’ knowledge about AI? Is it the companies that make these tools? Peers? Instructors or other intellectual mentors? All of the above? (And how are students weighing these different sources?)
- Persuasion: What factors influence negative and positive attitudes toward these tools?
- Decision: What experiences with these tools lead people to keep using them or to reject the technology? Is there an intentionally designed activity that shapes adoption or rejection decisions?
- Implementation: What is the implementation and adaptation process like? What do people decide to do with AI, and what novel or creative uses do they find for these tools after becoming familiar with them?
- Confirmation: Who provides affirmation or other feedback on students’ use? What encouraging or discouraging information or feedback is meaningful to the user? What outcomes influence their future choices?
You may already have answers to some of these questions; for others, you may need insight from students themselves. It can make a big difference to zoom in from the larger discourse about AI and focus on what can be learned from and in collaboration with students.
As UChicago computer science instructor Chelsea Troy explains, her decisions on AI policy in her class are based in part on the importance of having data about student use:
“…suppose I tell students “don’t use this,” knowing full well that in the vast majority of cases I could not build a solid case for discerning whether they used it. Then I ask them in surveys how they used it. What’s a rational actor going to say? They’ll say they didn’t use it. Then I get no data about how they used it. Again: these tools are new. We do not know how they integrate with an academic context yet. I have no interest in incentivizing my students to withhold that information from me.”
What It’s Looked Like So Far
The stages Rogers describes, born out of studies of how farmers adopt new pesticides or how the laptop became a normal household item, might seem irrelevant given their decades of distance from today’s newest technology. However, we can look back at a couple of examples of student use from a previous post in this series to see how these frameworks may be useful. In these reflections we can see the phenomenon of feeling in charge: prompting AI tools strategically and employing their outputs with learning as the intention. The students describe perceived agency and intentionality, but they also go on to express nagging doubts about something they’ve lost in the process of using AI. One anonymous UChicago second-year student described very thoughtful practices for using AI to get more done while still learning, but, looking back, resolved to use AI less:
“I’m trying not to use it at all this week…For having high-level thoughts, I feel like my soul has to be attached to that…ChatGPT is an easy cop-out, and there’s a part to learning and critical analysis that I’ve missed. It detaches me from what I write about.…
I know this is hypocritical, given how much I’ve used it, but I wholeheartedly think it should be banned,” he said. “I really regret using it so heavily in my first year. And if you go to the A-level [of the Regenstein Library] now, you’ll see so many screens with ChatGPT!” (Barboriak, “ChatGPT vs. the UChicago Core”)
Similarly, an undergraduate at Arizona State University explained in a podcast interview that while she feels ethical reservations about using an AI tool to get a summary of class readings, the concern is not enough to change her choices as a user:
Faculty Interviewer: “How attractive, as a student, is it if all of these readings you get in a class you could, yourself, just put them into a podcast and listen to a five or ten minute podcast rather than reading the papers?”
Undergraduate Student: “Oh, it’s very attractive. But the thing is, this brings up a concern of mine. I feel there’s more to these papers…so much time has been…I don’t know, sometimes I feel some ethical issues.”
Interviewer: “How big a concern is this? Big enough to get you to change any habits, or do you just feel bad about listening to the podcast?”
Student: [laughing nervously] “I just feel bad.” (Modem Futura)
You can see something encouraging rather than discouraging in these two excerpts: the students are reflecting on their adoption practices. They are questioning their previous decisions, attempting to reverse them, and thinking a little more about how AI helps or doesn’t help them. These potential rejections or changes in adoption behavior, unmotivated by punishment or policy, demonstrate that people are changed by the practice of using a technology and that (as Rogers explains in the excerpt that opens this blog post) the decision to use a technology can be reversed.
Can this reflective process be intentionally supported?
How do we make the adoption process more intentional? If the excerpts above demonstrate that students adopt AI tools in good faith with learning in mind, and that they reconsider their adoption decisions in a way that aligns at least somewhat with established research on technology adoption, can instructors help students in those reflective processes?
A good example of how one might do this comes from a scholar who combines critical literacy practices that predate AI with elaborate AI experimentation and customization.
Preserving the Journey and Discovery of Learning
Mike Caulfield, creator of the widely used SIFT Method for evaluating sources online and co-author of Verified: How to Think Straight, Get Duped Less, and Make Better Decisions about What to Believe Online, has been exploring how to combat misinformation in the age of AI by helping users develop better practices with these tools. To that end, he’s developed a “superprompt” called Deep Background that helps users investigate claims and ask more questions, rather than simply receiving answers. It’s a superprompt because it’s about ten pages of detailed, multi-step instructions that any user can paste into their AI tool before investigating a claim. The AI tool then returns a step-by-step report that a student could use to think through biases and search strategies. (Feel free to view a transcript of a Deep Background session in PhoenixAI.)
In response to a test claim, “I have heard a claim that the moon landing was faked and that Stanley Kubrick filmed it,” Caulfield’s directions elicit a response with the following sections:
- Overarching Claims (including a moderate and an extreme iteration of the claim)
- Preview Four Possible Searches & Critique for Bias
- Conduct Four Real Searches to Offset Bias
- Errors and Corrections Table
- Corrections Summary
- Potential Leads
- Assessment of Source Usefulness
- Revised Summary (Corrected & Contextualized)
- Notes on the Information Environment
- Tip Suggestion
The report that results demands attention from the user rather than delivering a quick answer; the user would have to read attentively even to find something to copy and paste into an assignment. The way this prompt works reflects that the person who created it is an expert in fact-checking; the outputs bear out an opinionated design from someone with learning in mind. Take, for example, the previewed searches section, which flags potential bias in searches that a user might otherwise simply type into Google:
“Previewed Searches:
- “Evidence moon landing was faked”
  Bias Risk: May surface conspiracy sites biased toward hoax explanations.
- “Stanley Kubrick moon landing theory debunked”
  Bias Risk: Leans toward mainstream science, may minimize any contradictory interpretations or why this theory persists.
- “Evidence Apollo 11 landed on the moon”
  Bias Risk: Likely retrieves only pro-landing sources, might miss reported anomalies or fringe claims.
- “Stanley Kubrick admits to faking moon landing”
  Bias Risk: May pull up misattributed or satirical sources mistaken as real.”
In a recent interview on the Teaching in Higher Ed podcast, Caulfield describes the importance of preserving the ways in which learning is a journey: the learner discovers new sources and even finds themselves surprised at what they discover. With that in mind, he designed his tool not just to generate a fact-checking report, but to invite the user to undertake multiple rounds and dig deeper into the initial report, at times revealing contradictions or deeper nuances in the original results.
“…a piece of what we’re missing with AI is the journey, right? It’s not just that the AI comes back with this answer. It’s not even just that we’re offloading things to AI, but searching for information is a journey, and we experience it as a journey and we process it as a journey…These investigations we do, we’re set up to process them as an intellectual journey, and we short circuit that if we get something back that seems fully formed from the mouth of Zeus….” (Teaching in Higher Ed)
One of the risks for users of AI, as Caulfield describes, is the definitive feel of the answers, a polish that can lull the user’s inquisitive impulses. With that in mind, his superprompt invites users to ask an additional question or simply type “Another Round” to have the AI tool run an analysis of its original output and flag new concerns, perhaps creating more interest in the complexity of the issue or helping the user feel a little more empowered and motivated to scrutinize AI outputs. The design choice Caulfield describes harks back to the student realizations quoted earlier in this post:
“…this is a very specific thing that I think is addressable in the way that we teach students to interact with these things—and the way that we teach them to react if suddenly they come up with something and the AI comes back with something that kind of contradicts what it said before. To see that as…”great, that’s discovery” and make sure we preserve that feeling of discovery in the way that they interact with this technology.” (Teaching in Higher Ed)
While this use case may not be universal across disciplines, it shows that subject experts can design experiences that both scaffold appropriate skills and give students an opportunity to think about whether and how AI was an appropriate tool for the task. (You can see similar disciplinary expert design choices in Selma Yildirim’s work creating chatbots for math.)
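For readers curious how this kind of multi-round interaction might be reproduced outside a chat window, here is a minimal, hypothetical sketch of the underlying loop: a long instruction prompt is sent ahead of the user’s claim, and an “Another Round” follow-up asks the model to re-examine its own report. This is not Caulfield’s implementation; the function ask_model and the constant DEEP_BACKGROUND_INSTRUCTIONS are placeholders for whatever chat interface and superprompt text you actually use.

```python
# Hypothetical sketch of the "superprompt + Another Round" pattern described above.
# DEEP_BACKGROUND_INSTRUCTIONS stands in for the full text of a superprompt
# (e.g., roughly ten pages of multi-step fact-checking directions).
# ask_model() is a placeholder for whatever chat tool or API you actually use.

DEEP_BACKGROUND_INSTRUCTIONS = "...paste the full superprompt text here..."


def ask_model(messages: list[dict]) -> str:
    """Placeholder: send the running conversation to a chat model and return its reply."""
    raise NotImplementedError("Connect this to the AI tool or API you actually use.")


def investigate(claim: str, rounds: int = 2) -> list[str]:
    """Get an initial structured report on a claim, then request further rounds
    in which the model re-examines and deepens its own previous output."""
    messages = [
        {"role": "system", "content": DEEP_BACKGROUND_INSTRUCTIONS},
        {"role": "user", "content": claim},
    ]
    reports = []
    for _ in range(rounds):
        reply = ask_model(messages)
        reports.append(reply)
        # Keep the model's report in the conversation, then ask it to dig deeper,
        # flag new concerns, and surface contradictions with its earlier answer.
        messages.append({"role": "assistant", "content": reply})
        messages.append({"role": "user", "content": "Another Round"})
    return reports


# Example (hypothetical):
# reports = investigate("I have heard a claim that the moon landing was faked "
#                       "and that Stanley Kubrick filmed it.")
```

The point of the sketch is the design choice rather than the code itself: the model’s previous report stays in the conversation so each new round can surface contradictions or nuances in its own earlier answer, mirroring the iterative digging Caulfield describes.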
A new set of practices to support better adoption-decision processes
Questions:
- Knowledge: What information are students missing about your discipline and AI?
- Under the Read Together heading in the second post in this series, you can find some resources on different topics related to AI.
- Persuasion: Can you learn about students’ current attitudes toward these tools?
- The second post in this series gives some questions you might use to start talking with students about their attitudes toward and uses of AI.
- Decision: What activity would help students explore the usefulness or drawbacks of using AI in your discipline?
- It may be beneficial to design your own task for students to undertake using AI, drawing attention to how these tools meet (or don’t meet) the goals of your course. Such a task can be as elaborate as Caulfield’s work or as simple as scrutinizing the Google AI Overview that automatically comes with a search (for better or worse).
- Implementation: Give your students an opportunity to adapt their use of AI to serve the goals of the class.
- Without requiring AI use, you can invite interested students to complete a supplemental activity that demonstrates what they see as the value of these tools in their work. In line with Troy’s thinking, that data can be very useful.
- Confirmation: Prompt students to reflect on how that use supported their learning. Give them feedback based on your disciplinary expertise.
- Think back to the student from the Maroon article quoted earlier in this post. Is there a way to bring students to reflect on their use and make an assessment earlier, rather than feeling troubled after most of an academic year has passed?
- Marc Watkins has written a useful document with reflection questions that might help you think about guiding your own students in reflection. You may even consider an in-class task completed without AI as a point of comparison to AI-assisted work for students to reflect on.
In Closing
The next post in this series will build on this understanding of technology adoption by exploring how technology design in recent decades has strategically short-circuited people’s adoption processes, and what we can do about it using strategies like metacognitive exercises that can be done with or without AI.
In the meantime, ATS can help you create the conditions for students not only to use AI effectively, but also to choose the right situations in which to use it in service of learning. If you need additional context on AI for yourself or your students, you may want to check out these Canvas courses:
- Teaching in the Generative AI Landscape: ATS, the CCTL, and the Library have joined forces to create a Canvas course to help faculty and instructors as they think through the teaching and learning implications of generative artificial intelligence. The course is asynchronous and allows self-enrollment.
- Getting Started with AI: Academic Technology Solutions and the Library have collaborated to provide students with guidance on learning and AI, with information on tools that are available from the university, how to use those tools, and what it means to learn and work in a world where AI tools are available. While this course is directed toward students, you may find it helpful to review its messaging on learning and even test out some of the prompts within, which give users the opportunity to assess AI outputs for accuracy and for short-circuited learning.
Subscribe to ATS’ blog and newsletter for updates on new resources to support you in using these tools. For individual assistance, you can visit our office hours, book a consultation with an instructional designer, or email academictech@uchicago.edu. For a list of upcoming ATS workshops, please visit our workshop schedule to find events that fit your calendar.
Join Our Exploratory Teaching Group to Go Deeper
If you’re interested in acquiring new context for addressing the way AI has impacted academic life, discussing with colleagues, and designing experiences that prioritize learning, consider joining the new ETG focused on AI in teaching and learning. Led by Selma Yildirim, Associate Instructional Professor of Mathematics, and Michael Hernandez, Instructional Designer with ATS, this year-long, open enrollment group aims to bridge instructor and student understanding about the appropriate use of AI tools in the learning process and the establishment of norms in the classroom community. In the first quarter, the group will meet in a low-commitment reading format, inviting faculty and instructors to join in discussing research and planning for Winter and Spring. Learn more about this year’s ETGs and complete this interest form to get involved. Our first meeting will be Thursday, October 16 from 1:30 to 2:45 PM, but there will be continuing opportunities to participate!
Header Photo by Lachlan Donald on Unsplash