“AI is like a free puppy; knowing when to say yes and when to skip it will be important.”
– José Antonio Bowen and C. Edward Watson, *Teaching with AI: A Practical Guide to a New Era of Human Learning*
With a new academic year underway and the University’s private Generative AI tool, PhoenixAI, now broadly available, you may be considering how new developments in AI impact the learning environment in your class. In this new environment:
- All students have access to a high-quality generative AI tool regardless of ability to pay a monthly fee
- Unlike other commercially available generative AI, this tool does not send user inputs back to the parent company to use for additional training data, making user data more private
As such, you may be reconsidering whether there’s some role for AI in your students’ learning process or in your own work. You may even find that the work of mitigating unwelcome use requires some interaction with the tools. As Marc Watkins suggests, the work may be less about adoption versus prohibition and more about “ways to negate the most disruptive parts of uncritical adoption.” With this in mind, I have organized insights from instructor experimentation at UChicago and across higher education to present you with a framework to assess opportunities, risks, and resources as you consider new approaches to AI in your class.
This post follows up on a couple of previous posts rounding up other writers and educators’ thinking about when and how to bring GAI into their work, from a set of categories for professionals to a reconsideration of Bloom’s Taxonomy that highlights what AI can’t do.
What Benefit Do I Want this Tool to Provide?
Several common types of approach stand out among educators who have adopted GAI tools in some way, and it may help to consider them in terms of the intended benefit.
| Potential Benefits | Approaches |
|---|---|
| Promote future critical AI use by students | Analyze, fact-check, and correct AI outputs (using library and disciplinary resources). Hallucinations offer an opportunity for students to become keener evaluators of all “facts” presented to them–not just AI. This skill serves not only multiple disciplines, but also everyday decision making. As author Mike Caulfield noted on the Teaching in Higher Ed Podcast, “in a world where anything can seem authoritative, provenance matters more. Knowing where it came from is going to matter a lot more than knowing whether it looks credible.” |
| Complement disciplinary knowledge (or help students interact with it) | |
| Use AI functions for specific tasks to promote equity | Relieve demands that are not relevant to the course or assessment at hand. For example, if a student’s skills with English for academic writing are not germane to a given task, can they use a tool to help with that–or even submit their work in another modality? (The concept of “construct relevance” from Universal Design for Learning may help here.) |
| Support students to meet their Zone of Proximal Development | Scaffold for new, relevant skills using a “training wheels” approach to writing in formats with highly specific generic constraints. (For example: Cynthia Alby’s work with secondary education students learning genres like lesson plans.) |
Two Major Decisions: Centrality and Guidance
The spectrum of responses to generative AI in higher education, and the many approaches to folding it thoughtfully into the work of learning, can be overwhelming. This framework tries to make the process of integrating or allowing AI use more manageable by dividing it into four quadrants across two fairly straightforward axes. (Note that while prohibiting AI use is a valid response in some contexts, this matrix focuses specifically on the decision to allow AI in order to uncover nuances in that particular area. For support outside of this matrix, see Thomas Keith’s blog series on academic integrity.)
GAI in Learning Design Matrix
- X-Axis: How central should this tool be to the work of my course?
- This concerns AI use in terms of centrality, or how necessary it is to accomplish the work of the class. On one end we have Optional, and at the other Integral to the completion of the work.
- Y-Axis: How much will I guide my students’ use of the tool?
- This addresses guidance, or how much direction or modeling in specific methods for using AI the instructor will give. Unguided use can become a question of equity when some students are more skilled at using AI than others, and many instructors are rightly wary of AI distracting from the real purpose of their classes. The two ends of this axis are Instructor-Guided and Self-Guided.
What combination of guidance and centrality makes sense for me?
Below you can find a brief illustration of each quadrant, with opportunities for learning, risks to your learning goals, and an example.
Top Right: Instructor-Guided x Integral
Opportunity: Instructors can directly model critical use and draw attention to potential for misinformation.
Risk: The activity may take time and attention from other disciplinary work in class.
Example: UChicago Urdu instructor Romeena Kureishy engaged her students in critiquing AI outputs as a class. Students conversed with AI in Urdu under Kureishy’s supervision, learning about the language by seeing the errors the tool produced.
Bottom Right: Self-Guided x Integral
Opportunity: Students can learn about AI through trial and error and engage more deeply with disciplinary knowledge to achieve the desired outcome.
Risk: If novice learners don’t have the knowledge to assess the output accurately (or guidance in assessing it), there is potential for missed learning or misinformation.
Example: Japanese literature professor Hoyt Long worked with students in the Humanities core to use AI to play historical personae. Students researched historical figures using library resources, designed prompts using that information, and shared reflections with Long afterward.
Top Left: Instructor-Guided x Optional
Opportunity: Students can use the tools in sanctioned ways to achieve a task, benefit from instructor guidance in using AI, and reflect on the work they do both with and without AI assistance.
Risk: If the work can be done with or without AI, the instructor may need to assess student work differently in each case.
Example: Toward the goal of integrating reflective AI use in student writing, Lisa Rosen, Associate Senior Instructional Professor and Associate Director of the Committee on Education, allowed optional GAI use in her course, but provided both resources and guidelines for use that included:
- notes on what uses she saw as more and less helpful for learning
- some best practices and sample prompts
- a required reflection for both AI users and non-users
- clear requirements for documentation of their AI use
Bottom Left: Self-Guided x Optional
Opportunity: Students may learn about these tools through trial and error. Students who experiment with these tools may find themselves more engaged with the material.
Risk: In its most extreme form, this approach would introduce potential for inequitable use. The combination of unsupervised and uneven use creates an unfair advantage for students who may already know how to use these tools well and a disadvantage for those who may not have been educated about misinformation and other risks. There is also a real risk of “short-circuited learning,” or completion of the wrong tasks using AI without truly engaging with the material.
Example: Currently, we do not have an example of this kind of implementation to share, as it has high potential for the “disruptive, uncritical adoption” that Watkins refers to.
What new knowledge, structure, or resources do I need to support successful AI use in class?
Is the acquisition of that knowledge, development of that skill, or building of that resource prohibitive? What resources exist to help with that?
| Prerequisite for Implementation | Resources |
|---|---|
| Instructor AI Knowledge | |
| Guidance for Student AI Use | Guidelines for productive and reflective AI use created by ATS in collaboration with UChicago instructor Lisa Rosen. |
| Rules for Citation/Disclosure | Templates and examples of syllabus statements from UChicago instructors, shared by the Center for Teaching and Learning. |
Please keep watching our blog for more resources to help instructors respond to AI in their teaching. If you’re a UChicago instructor trying something new with AI, we’re interested in hearing about it! Email michaelhernandez@uchicago.edu.
Further Resources
For more ideas on this topic, please see our previous blog posts about generative AI. For individual assistance, you can visit our office hours, book a consultation with an instructional designer, or email academictech@uchicago.edu. For a list of our upcoming ATS workshops, please visit our workshop schedule.
Header Image by Gerd Altmann from Pixabay