“Eventually, surveillance capitalists discovered that the most-predictive behavioral data come from intervening in the state of play in order to nudge, coax, tune, and herd behavior toward profitable outcomes. Competitive pressures produced this shift, in which automated machine processes not only know our behavior but also shape our behavior at scale. With this reorientation from knowledge to power, it is no longer enough to automate information flow about us; the goal is now to automate us…Instrumentarian power knows and shapes behavior toward others’ ends. Instead of armaments and armies, it works its will through the automated medium of an increasingly ubiquitous computational architecture of “smart” networked devices, things, and spaces.” (8)

– Shoshana Zuboff, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power


The previous post in this series dug into seminal research in technology adoption to contextualize student AI use today. Building on that, this post explores the large-scale manipulations that short-circuit thoughtful technology adoption now, and how learning design might address them. As social psychologist Shoshana Zuboff explains, the project of accumulating user data has in recent decades evolved into a project of shaping the behaviors that create that data. (For an example of how this kind of influence supports profit, note two recent announcements from OpenAI: the new integration of Walmart shopping into ChatGPT and the release of a ChatGPT-integrated web browser that collects alarming amounts of user data.) There is value in considering AI use not just from the perspective of cheating or deception but from the perspective of technology adoption, because the use of digital technology today is in large part a result of marketing, user experience design, and data-driven manipulation of users. Very familiar technology companies figure in Zuboff’s analysis of modern corporate influence over individuals, and that analysis is a valuable supplement to our historical frameworks for how individuals, including students, decide to use or not use a tool like AI.

Short-Circuited Adoption as Precursor to Short-Circuited Learning

When was the last time you read the terms of service in full for a tool you use? Zuboff explains that since at least 2008, companies have relied on “clickwrap,” an approach that strategically overwhelms and ultimately numbs us into accepting invasive and extractive terms, terms that courts are also unlikely to challenge (Zuboff 48-50). Knowing that we generally don’t read these agreements (and that, in a practical sense, we can’t), the purveyors of software and online services have been deliberately short-circuiting the more mindful parts of the adoption process for years. In the 2008 study “The Cost of Reading Privacy Policies,” Aleecia M. McDonald and Lorrie Faith Cranor made the following estimate:

“…if all American Internet users were to annually read the online privacy policies word-for-word each time they visited a new site, the nation would spend about 54 billion hours reading privacy policies.”

They then estimate that individuals would spend 244 hours per year reading these agreements, an average of about 40 minutes a day (McDonald and Cranor). Since 2008, two important things worth keeping in mind have happened:

  1. The percentage of Americans who own a smartphone has gone from 35% in 2011 to 91% in 2024, according to Pew Research.
  2. Nearly the entire eighteen- or nineteen-year lifetime of a conventionally aged incoming undergraduate has elapsed.

Taken together, these suggest that most of us have spent vastly more time using services whose privacy policies we have never read, and that this pattern of technology adoption has become standard for the average person. For many of us, the moment of decision may not register at all.
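Those figures are worth pausing on. As a quick back-of-envelope check (a rough sketch using only the numbers quoted from the study, plus the assumption of a 365-day year), the per-person and national estimates line up:

```python
# Back-of-envelope check of McDonald and Cranor's 2008 estimates,
# using only the figures quoted above (assumes a 365-day year).
hours_per_person_per_year = 244
minutes_per_day = hours_per_person_per_year * 60 / 365
print(f"Per person: about {minutes_per_day:.0f} minutes per day")         # ~40 minutes

national_hours = 54e9  # the "about 54 billion hours" national estimate
implied_users = national_hours / hours_per_person_per_year
print(f"Implied Internet users: about {implied_users / 1e6:.0f} million")  # ~221 million
```

In other words, the two estimates are mutually consistent and describe an Internet-using population of a little over 220 million people, each facing roughly six standard workweeks of reading per year, which they understandably decline to do.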

If you’re looking for a way to make students more aware of the tradeoffs of using these tools, there are multiple angles from which to approach that:

  • The legal tradeoffs previewed above
  • How algorithmic bias can shape learning in subtle ways (see the fourth post in this series for examples)
  • The long-term effectiveness of learning compromised for short-term conveniences
  • The original thinking closed off by algorithmic predictability

Making the Agreement More Conscious

One place to start in making students more aware of these tradeoffs is the legal agreement a person makes in using AI tools. The AI Pedagogy Project at Harvard shares excellent activities for instructors to use, including “Close Reading the Terms of Service,” by Autumm Caines at the University of Michigan. This assignment, which can be set up in Canvas using Hypothesis, is designed to help students “become more familiar with the data and privacy impacts of creating an account with OpenAI, and gain experience with legal and technical texts along the way.”

You can break one of these privacy agreements into manageable chunks for small groups to cover, and ask students to reflect on the trade-offs we agree to every day, what is surprising, what they’re already comfortable with based on other technology, and even what invasions of privacy they take for granted.

Image: Screenshot of a Hypothesis assignment in Canvas. The OpenAI terms of service appear in the main pane with multiple passages highlighted, and the right pane shows the comments that go with those highlights.

Reflecting on What We Value in a Learning Experience

While legal tradeoffs are deliberately opaque, students’ own comments on AI use show an awareness of the exchange that AI in education entails, especially when those insights are built on time, experimentation, and reflection. In contrast to the somewhat sensational and triumphant account of AI use in an article like “I’m a Student. You Have No Idea How Much We’re Using ChatGPT,” there are also examples of students weighing real factors:

  • The pressures inherent to a high stakes academic environment
  • Perceived efficiency in completing tasks
  • The extent to which they feel they’re still learning when using AI
  • Longer-term comparisons of what they thought they’d gain versus what they later felt was missing when using AI in academic work

It’s been quoted repeatedly in this series, but an anonymous student interview with the Chicago Maroon last year shows the intention and effort that go into a mindful adoption process, and it bears re-examining in this context. The student describes using AI extensively in his first year as a UChicago undergraduate, in classes as disparate as Humanities and Neuroscience and for applications such as:

  • Getting a starting point for a discussion post based on dense reading
  • Structuring an argument
  • Helping find textual evidence in support of an argument
  • Organizing his thoughts for class writing
  • Understanding diagrams

He reported that the practice of writing AI prompts helped him get better at asking questions in class and that he found it important to fact-check LLM outputs. Ultimately, however, he also expressed a desire to stop using AI as much: in the long run, he found himself detached from the writing process and missing some of the critical process involved in reading. You can see in these reflections (and in others like them quoted in the previous post in this series) the complex set of factors listed above.

Is it possible to help a student like the UChicago undergraduate described above come to value working without AI sooner? While he suggests banning these tools, stopping AI use may not be as simple as prohibition, or even as simple as warning students that they’ll regret it. (See our previous post on lessons from public health campaigns for more on that.) Rather, it may be more effective to help students reflect on the many factors that go into the often too-easy decision to use AI and to look back at what came of it.

How Can I Promote Reflection?

It may help to target a particular habit, like the ones described in Carter Moulton’s Analog Inspiration project. Moulton provides a list of forty-seven reflection prompts and activities that invite students and instructors to reflect on the skills and experiences they value most, even when AI offers to do the work for them. The themes in these include concepts like:

  • Critical thinking
  • Dialogue
  • Discernment

The reflection for discernment, for example, goes as follows:

  • “What am I trying to avoid by using AI right now?” Ask yourself this question, and encourage students to do the same.
    • Is it… confusion?
    • Perfectionism?
    • Boredom?
  • Do I really need to use AI right now?
    • Encourage students to jot down these observations. Even a 30-second pause can lead to more intentional engagement with AI.

You could help people reflect on these questions in small groups, in anonymous polls using Poll Everywhere, or in a written reflection they submit to you privately in Canvas. By wrestling with their own motives and having an opportunity to evaluate the results, students can build toward insight about what they value in their time in your class and in their education at UChicago more broadly.

Reconciling Gaps in What We Value and What We Do

As referenced above, students face a high degree of pressure to perform, and AI tools are built to make users feel in control even as they remove opportunities for the productive friction that makes learning durable. As fourth-year undergraduate Camille Cypher describes in the Chicago Maroon, students face a conflict between the rewards of reading and writing in the Core and perceived efficiency in a competitive environment:

“…at a school like UChicago, where students learn at a breakneck pace, RSOs are a career necessity, and efficiency is the mindset of an economics-dominated campus, students might fear falling behind as they see their peers generate A-minus-level papers in seconds. AI poses an easy off-ramp for overexerted students and for incoming freshmen who must quickly adapt to UChicago’s academic rigor.” (Cypher)

Against these pressures, there’s room for significant dissonance: on one hand, students value a unique learning experience they have access to for only a few short years; on the other, that education is a means to an end amid many material pressures. The learning students value can easily come into conflict with the convenience of AI tools. Long before the advent of clickwrap, innovation scholar Everett Rogers described this phenomenon based on his study of the spread of everything from public health initiatives to cell phones:

“…in many cases attitudes and actions may be disparate…This attitude-use discrepancy is commonly called the “KAP-gap” (KAP refers to “knowledge, attitudes, practice”). So the formation of a favorable or unfavorable attitude toward an innovation does not always lead directly or indirectly to an adoption or rejection decision.” (Rogers, 177)

With that in mind, it may be much more helpful to approach this real problem with the goal of making adoption conscious and of interrupting a process that has largely been scripted for users, one taking place both inside and outside educational institutions around the world.

Can This Discrepancy Be an Opportunity?

It’s important to resist the temptation to find fault in this contradiction between stated value and behavior. Examples like the Maroon interview show that there is room for us in education to better understand the persuasion phase, and that people can revise their adoption decisions, their attitudes, and their practices when it comes to AI.

If we’re talking about the “conventionally aged” undergraduate student, it’s vital to keep in mind how early and pervasive automatic adoption of new technologies has likely been. The adoption process has often not risen to the deliberate level Rogers described among farmers or large corporations; those decisions have likely been shaped by peer networks and marketing at early and impressionable ages, or made entirely for students by adults. Even for older students, the adoption of digital technology has been sped up, truncated, and made somewhat unconscious for many of us for some time.

Looking Back to Make Future Use More Conscious

In a recent interview on the Tea for Teaching podcast, Emily Pitts Donahoe emphasizes the importance of thoughtfully developing attitudes toward emergent technologies, rather than focusing exclusively on students’ use of them. Donahoe, Associate Director of Instructional Support and a writing instructor at the University of Mississippi, gives a compelling example of how adoption attitudes can differ between students using this technology in what seems to be the same way:

“A couple of years ago, I surveyed students at the end of this semester in my writing class about their uses of AI, and I had one student say ‘I used AI to correct my grammar, and so I learned a lot from that.’

And then I had another student say ‘I only used AI to correct my grammar, so I didn’t learn very much.’

And so the difference is not about the usage. They were both using AI in the same ways, but it was about how they were approaching that use and what they were trying to get out of it. One student was approaching it with the idea that AI could help them learn something here, and the other one was approaching it with the orientation of ‘AI can kind of correct a product that I’ve created.’

So I think that the key is that when students approach AI, they need to be approaching it with a learning orientation, or they’re not gonna get much out of it.” (Donahoe)

What kinds of opportunities for reflection can you create in your discipline? Stay tuned for more ideas and resources curated by ATS from thoughtful instructors and technology research.

In Closing

In the meantime, ATS can help you create the conditions for students not only to use AI effectively but also to choose the situations where it serves the goal of learning. If you need additional context on AI for yourself or your students, you may want to check out these Canvas courses and our upcoming event, Byte-Sized AI.

  • Teaching in the Generative AI Landscape: ATS, the CCTL, and the Library have joined forces to create a Canvas course to help faculty and instructors as they think through the teaching and learning implications of generative artificial intelligence. The course is asynchronous and allows self-enrollment.
  • Getting Started with AI: Academic Technology Solutions and the Library have collaborated to provide students with guidance on learning and AI, including information on the tools available from the university, how to use those tools, and what it means to learn and work in a world where AI tools are available. While this course is directed toward students, you may find it helpful to review its messaging on learning and even to test out some of the prompts within, which give users the opportunity to assess for accuracy and short-circuited learning.
  • Byte-Sized AI, A Sample of AI Tips, Tools, and Traps: Students, instructors, and staff are welcome to drop in on November 12 from 1:00-4:00 PM at the Regenstein Library. Attendees will be able to sample bite-sized explainers and activities that prompt reflection on a variety of topics related to AI. As a collaboration between the Library, Academic Technology Solutions, and the Chicago Center for Teaching and Learning, this “tasting” is sure to have something for you!

Subscribe to ATS’ blog and newsletter for updates on new resources to support you in using these tools. For individual assistance, you can visit our office hours, book a consultation with an instructional designer, or email academictech@uchicago.edu. For upcoming ATS workshops, please visit our workshop schedule to find events that fit your calendar.

Catch Up on The AI-Discerning Learners Series

  1. Part One: Developing Judgment for Technology Use
    • The first post lays out the rationale for this approach, ranging from theoretical influences, like approaches to technology informed by Buddhism, to practical reflections from a UChicago computer science instructor.
  2. Part Two: Start with a Conversation
    • Our second post in the series offers readings and suggestions for co-creating group norms with your students on AI use in class, reflecting on ethics and coming to some agreements, especially as some research shows that perceptions of community norms may influence behavior more than fear of punishment or even the severity of that punishment.
  3. Part Three: Experiment with (and Scrutinize) AI Together
    • The third focuses on sharing AI experiences in order to guide students’ reflection on the agency they experience (or lose) in using AI. AI tools have been known to oversimplify contested issues and to change users’ critical habits over time, even when the outputs are not technically or obviously incorrect. This post draws some inspiration from self-determination theory to propose experiences that can influence students’ behavior more deeply than simple prohibition.
  4. Part Four: Demystifying Experiences to Mitigate AI
    • The fourth builds on AI ethicist Shannon Vallor’s work to help students examine the flawed mirrors created by the data sets that AI uses. Here you can find comparisons of several AI tools’ responses to the same seemingly simple question, which can help draw students’ attention to this, perhaps in a way that can be adapted to your own work.
  5. Part Five: Understand Tool Adoption to Prioritize Learning
    • The most recent post draws on decades of research in technology adoption to help reflect on the different factors that influence students to use or not use AI. It also uses a longstanding framework of technology adoption behavior to suggest ways instructors can actively join students’ decision-making.