Regardless of your stated approach to AI in your class, trends and attitudes in other classes, or in the world outside higher education, stand to affect the environment you want to cultivate. Norms around AI from other courses, and even counterproductive habits of use, can follow students into your classroom and run up against your teaching goals. With that in mind, you may find it useful to situate your teaching style or approach within the larger landscape of attitudes and practices learners may be bringing to the table, particularly with regard to AI tools.
To that end, this series of posts presents a selection of frameworks to help you better contextualize your teaching and even consider new ideas to incorporate into your work. Generative AI in teaching can be a fuzzy mess of issues, but with the help of the right lens, some of the lines you want to draw between helpful and harmful uses can acquire greater definition.
Framework One: “Teaching Against, Teaching Around, Teaching With”
The first, and perhaps the simplest, framework organizes the broad spectrum of approaches, concerns, and pedagogical dispositions toward AI in higher education teaching into the following three categories:
- Teaching Against: a range of approaches including, but not limited to, explicitly banning AI tools in the interest of protecting learning outcomes and promoting academic integrity
- Teaching Around: a mindset that includes mitigating the intrusion of these tools by adjusting activities and assignments to be both more engaging to students and less vulnerable to completion with AI
- Teaching With: a set of approaches that allows use of these tools for specific activities, modeling uses that keep the student learning through strategic prompting
While slightly reductive, this framework can give you a useful reference point to react to, even when you find the categories imperfect. You might also find that your approach changes based on the goals of the course or assignment. To review detailed examples of these categories from real instructors, take a look at the handout from the Exploratory Teaching Group session that inspired this post.
Teaching Against
Banning the use of these tools outright, creating assignments that are harder to complete using AI, and discussing this situation in a direct and open manner with students, with particular attention to the skills that a given activity is meant to build. As explained so compellingly by Russell P. Johnson of the UChicago Divinity School, “Writing bakes our half-baked ideas. It makes the mind more transparent to itself…If you use ChatGPT for your writing assignments, you deprive yourself of the benefits of coming to terms with what you actually believe.” Explaining to your students this rationale for working without AI assistance is an important part of an effective “Teaching Against” approach.
In this case, it’s especially important to set expectations and clarify learning goals, or as one event attendee this Winter described it, “making the case for learning.” There are some excellent suggestions on how to combat academic dishonesty in the context of AI tools in ATS’ blog post “Authentic Assessments and the Challenge of AI” by Thomas Keith.
If you’re interested in exploring policy guidance and samples to help you shape your communications, check out the excellent resources shared by both the Chicago Center for Teaching and Learning (CCTL) and The Sentient Syllabus Project. Finally, we encourage you to review the very promising communication strategies laid out by ATS Digital Pedagogy Fellow Tessa Webb in the post “Navigate the AI Conversation: Talking To Your Students About AI.”
Teaching Around
Mitigating the intrusion of these tools by emphasizing the learning outcomes the work serves, assessing authentically, and assessing more frequently with lower stakes. Create learning activities that promote authentic engagement, ones that may be harder to complete with AI-generated writing. You can find some ideas for this kind of learning design in ATS’ blog post on “AI-Resistant Assignments.”
One example that reflects this approach is Derek Bruff’s “Assignment Makeovers in the AI Age,” in which he proposes the following six questions to help reconsider a given assignment with its learning goals in mind, assessing the risk AI poses to learning while also seeking ways AI can enhance or support that work:
- Why does this assignment make sense for this course?
- What are specific learning objectives for this assignment?
- How might students use AI tools while working on this assignment?
- How might AI undercut the goals of this assignment? How could you mitigate this?
- How might AI enhance the assignment? Where would students need help figuring that out?
- Focus on the process. How could you make the assignment more meaningful for students or support them more in the work?
Teaching With
Specifying stages of the composition process in which generative tools are a valid aid (brainstorming, revision, etc.), specifying guidelines for disclosure and attribution in your course, and collaboratively analyzing responses created by AI tools for their strengths and weaknesses.
For a couple of examples of instructors who embrace AI tools actively in service of learning, you might consider Cynthia Alby’s “training wheels” theory or Ethan Mollick’s “Seven Ways of Using AI in Class”.
Alby, a professor of secondary education, trains future educators in tasks like creating lesson plans and writing learning objectives, documents that are important but also rely on boilerplate and highly rigid formatting. She provided prompts to generate items like learning objectives, which can be hard for new education students to write, and had students critique and revise the output and then reprompt. She argues that through this cycle of “self-AI-self,” students can build the same skills as before while achieving higher-order skills with more guidance. Reflecting on that work, Alby found that actively engaging students with AI tools actually decreased their dependence on them.
Mollick, in his AI-required course, emphasizes the “chat” element sometimes lost in discussions about ChatGPT. Not only do better outcomes result from iterative AI use, but these tools may also be useful as a conversational partner. With thoughtful structured prompts, students and teachers can design AI personae to sharpen ideas against. He presents seven general roles that can be designed, refined, and iterated to the teacher’s purpose:
- Mentor: providing feedback
- Coach: prompting metacognition
- Tutor: direct instruction
- Teammate: increasing team performance
- Student: receiving explanations from the user
- Simulator: deliberate practice to aid knowledge transfer
- Tool: accomplishing tasks
If you’re interested in these approaches, you may also want to review ATS’ detailed post on prompts for learning.
In Part Two
In the next installment of this series, we will offer additional frameworks for your consideration, including:
- A revisitation of Bloom’s Taxonomy (itself one of the most influential frameworks in the world of teaching)
- Categories for AI Use in Learning Design (and the Professional World in General) developed by writers from Stanford and EDUCAUSE
In Closing
For more ideas on this topic, please see our previous blog posts about generative AI. For individual assistance, you can visit our office hours, book a consultation with an instructional designer, or email academictech@uchicago.edu. For a list of upcoming ATS workshops, please visit our workshop schedule to find events that fit your calendar. For support with training students in research skills, see our colleagues at the UChicago Library.