As we prepare for a new academic year, you may find yourself thinking about how to move forward in a teaching and learning environment that’s been altered and complicated by the widespread availability of generative AI tools like ChatGPT. To aid instructors in this work, a panel of colleagues from across the disciplines at UChicago gathered to reflect on the major questions and contexts related to teaching in the age of AI.

This panel discussion, presented in April 2023, was convened by an AI Tools working group of faculty and staff brought together by the Chicago Center for Teaching and Learning, with members from the UChicago Library, the College, and Academic Technology Solutions. We invite you to review the panelists’ insights summarized and excerpted below and to watch for additional resources coming soon from this working group.

Panelists

  • Allyson Ettinger, Assistant Professor, Linguistics
      • Bringing her expertise from the fields of both computer science and linguistics, Prof. Ettinger established context for the panel’s reflections by explaining the technology behind tools like ChatGPT, and offered some cautions about the risks and limitations involved in using these technologies.
  • Navneet Bhasin, Associate Sr. Instructional Professor, Biological Sciences Collegiate Division
      • Prof. Bhasin, who teaches biology for both majors and non-majors, discussed how she and her colleagues at BSCD are currently thinking about the impact of these tools on student learning. This includes the situations in which use of generative AI tools may be appropriate, the specific classroom contexts and specialized texts that these tools cannot appropriately address, and the enduring importance of teaching critical analysis and research skills.
  • Patrick Jagoda, William Rainey Harper Professor, English, Cinema & Media Studies, and Obstetrics & Gynecology
      • Prof. Jagoda placed recent developments and discourse about tools like ChatGPT in the larger context of their many antecedents in culture and humanistic thought throughout history. He situated this technology within longstanding cultural debates over the nature of creativity and provided some applications in which it may be a useful aid to teaching and learning.
  • Lisa Rosen, Associate Sr. Instructional Professor and Associate Director, Committee on Education
      • Reflecting on her work in the first two quarters since the release of ChatGPT, Prof. Rosen shared how the challenge of the moment had prompted her to double down on the learning goals that have always been central to teaching. Through thoughtful conversations and assignment design, Professor Rosen has made explicit to her students the goal of using their own writing in order to sharpen their thinking.
  • Borja Sotomayor, Associate Sr. Instructional Professor, Computer Science, and Director of the Master’s Program in Computer Science
      • In collaboration with his colleagues in computer science, Prof. Sotomayor has found it important to balance the authentic software development situations in which students will be expected to employ generative AI tools with the learning situations in which restricting the use of these tools serves an important skill-building purpose. As such, instructors may need to employ different policies at different times in the learning process.

Allyson Ettinger

Bringing her expertise from the fields of both computer science and linguistics, Prof. Ettinger established context for the panel’s reflections by explaining three concepts important to this conversation: deep learning, language models, and, of course, ChatGPT.

Deep learning, Ettinger explained, refers to large neural network models that are taught, through an iterative training process, to map certain types of inputs to certain types of outputs. Language models, Ettinger further explained, are models “specifically being trained to predict words or sub-word tokens on the basis of context.” One example setting involves inputting sentences with certain words masked and training the language model to predict the missing words based on the context provided by the other words in the sentence. Another involves training the language model to predict the next word in a sentence given the previous words. Trained on these kinds of simple prediction tasks over massive amounts of data, language models can learn to produce surprisingly complex and impressive behaviors.
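
To make the two training setups concrete, here is a minimal sketch, not from the panel, using the open-source Hugging Face transformers library; the models (bert-base-uncased and gpt2) and the example sentence are illustrative assumptions.

```python
# Illustrative sketch (assumed setup): the two prediction tasks Ettinger
# describes, run with publicly available models via Hugging Face transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

# 1. Masked-word prediction: fill in a blanked-out word from the
#    surrounding context (the BERT-style setup).
fill_mask = pipeline("fill-mask", model="bert-base-uncased")
for guess in fill_mask("The students handed in their final [MASK]."):
    print(f"{guess['token_str']:>12}  p={guess['score']:.3f}")

# 2. Next-word prediction: score candidates for the word that follows a
#    prefix (the GPT-style setup that underlies ChatGPT).
tok = AutoTokenizer.from_pretrained("gpt2")
lm = AutoModelForCausalLM.from_pretrained("gpt2")
ids = tok("The students handed in their final", return_tensors="pt")
with torch.no_grad():
    logits = lm(**ids).logits          # shape: (1, seq_len, vocab_size)
top = torch.topk(logits[0, -1], 5)     # five most probable next tokens
print([tok.decode(int(i)).strip() for i in top.indices])
```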

Ettinger went on to give one example of this kind of sophisticated language model behavior with GPT-2, a predecessor to ChatGPT, which was able to take a prompt about unicorns and generate a detailed and fluent passage continuing that prompt, purely as a result of its ability to predict probable next words in a sequence. These kinds of outputs are often good examples of fluent language, but Ettinger cautioned that they can contain illogical components upon closer inspection. Turning to ChatGPT, Ettinger explained that while the details behind this new version are more opaque, its training process involved Reinforcement Learning from Human Feedback (RLHF), an additional layer of training that helps ChatGPT provide responses that align more closely with what a human writer would produce. Of course, ChatGPT has also been trained on a massive quantity of data to produce responses at the level of quality and detail that have brought it to such prominence. Remaining flaws include the persistence of illogical responses masked by fluent language, as well as assertions that seem likely or plausible but are actually fabricated and potentially harmful, sometimes referred to as “hallucinations.”
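
The generation behavior Ettinger demonstrated can be reproduced in a few lines; the sketch below is our illustration, not the panel’s, with the prompt abridged from OpenAI’s widely circulated GPT-2 unicorn demo and the sampling settings chosen arbitrarily.

```python
# Prompt continuation with GPT-2: fluent text produced one probable next
# token at a time (illustrative setup, not from the panel).
from transformers import pipeline, set_seed

set_seed(42)  # fix the random sampling so the continuation is reproducible
generator = pipeline("text-generation", model="gpt2")

prompt = ("In a shocking finding, scientists discovered a herd of unicorns "
          "living in a remote valley.")
out = generator(prompt, max_new_tokens=60, do_sample=True, temperature=0.9)
print(out[0]["generated_text"])
```

As Ettinger noted, fluency is no guarantee of logic: read any such continuation closely, and contradictions or fabricated details often surface.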

Having demystified the technology behind these tools, Ettinger offered a closing warning about trusting these models too readily, given that the fluency of their responses can outpace their veracity.

“… these models should be used with caution. There are lots of things that they can potentially do that are not desirable. They can say things that are false, both because they may see false things on the Internet, and because they can just make things up that sound plausible and probable based on their training distributions. They also have the risk of just seeming so human-like, that people start treating them in ways that give them that trust in them more than they should, or are more influenced by the models than they should be, when interacting with an agent that is so fluent like this. Folks tend not to have experienced that other than when interacting with other humans. But it can be very risky to do.”

Navneet Bhasin

After discussing the impact of these tools with her colleagues, Prof. Bhasin came to a conclusion for her teaching that’s very much in line with Prof. Ettinger’s words of caution for using tools like ChatGPT: “When we give them assignments, we can say as a first draft, students could potentially use generative AI tools to streamline their thought processes, but then make sure that their write ups check out and align with what’s known in literature.”

Furthermore, she found that the AI technology is not currently sufficient to meet the demands of highly specific lab work that builds on a robust and constantly growing body of research, such as that in biotechnology. As a result, Prof. Bhasin found that, currently, “if we, as instructors, can stay current with the developments in the field, and give prompts that are more current, I think the issue of students using these AI tools to do their assignments is already taken care of.”

Regarding the potential advance of these tools past their current limitations, Prof. Bhasin expects that instructors will need to revisit and reassess the situation, supporting learning with more specific prompts, frequent feedback, and scaffolding. Prof. Bhasin notes that the purpose of learning is still being served “as long as we can make sure that our students are learning how to critically think, analyze information, take their results and plug them into the continuum of research and verify results obtained from generative AI tools.”

Patrick Jagoda

Bringing a humanities perspective to this conversation, Prof. Jagoda situated this technology within longstanding cultural debates over the nature of creativity and provided some applications in which it may be a useful aid to teaching and learning.

“In many ways, as for many scholarly fields, a tool like ChatGPT is new for the humanities, but it’s also picking up on categories that have been central to us, or ways of thinking that have been widespread for centuries.”

Professor Jagoda drew connections to structural myths like Pandora’s Box and Galatea in the Pygmalion myth, as well as technological phenomena like de Vaucanson’s automaton of a digesting duck, as historical context for our propensity to think of these new technologies as intelligence rather than algorithms, harkening back to Prof. Ettinger’s cautionary remarks about placing too much trust in the output of these tools.

Furthermore, Jagoda noted ethical questions regarding the environmental impact of ChatGPT, such as the amount of water required to support its processing power, as well as the labor implications raised by visual AI outputs derived from human-created art used without any financial compensation. Those questions, as Jagoda explained, “are the kinds of questions that I think the humanities can help us with in terms of contextualizing, not fighting against what is happening in computer science necessarily, but understanding these developments in a broader historical and cultural context.”

In terms of practical applications for the tool that take into account its limitations and strengths, Professor Jagoda noted its usefulness for providing creative writing prompts, brainstorming, helping non-native speakers practice language skills, generating examples, and, surprisingly, generating syllabi. The last example, as Jagoda noted, requires human screening for reading assignments that are not real publications, but these examples speak to the usefulness of a tool that’s designed to produce plausible textual responses to human prompts.

Additionally, Prof. Jagoda described philosophical opportunities offered to us by these tools, prompted by the attempt to use AI tools to make art with, about, and even for AIs. AI art “raises all kinds of questions about otherness, for instance…It raises questions of what creativity even is. Fundamentally, it’s not saying ‘Humans no longer need to be creative, because ChatGPT can do this for us.’ It’s asking ‘What was creativity in the first place? How did we misapprehend it? How can these tools help us produce better definitions?’”

Lisa Rosen

Through thoughtful conversations and assignment design, Prof. Rosen has leveraged her course learning goals to make it explicit to her students that the goal of using one’s own writing is to sharpen one’s thinking.

In those conversations with her students, she communicated a general stance that these tools can be used to support their work, but not as a substitute for doing the work. Rosen noted that as the tools will soon be “widespread, virtually undetectable, and ubiquitous,” her view is that “we have to help students understand what it will mean to use these tools responsibly in particular disciplinary contexts.”

For example, in her writing-intensive seminar courses, she explicitly communicates to her students that the goal is for them to “clarify and sharpen” their ideas through the process of organizing them in writing. Prof. Rosen has included a policy about the use of generative AI in her syllabi this year and has found that revising it is both necessary and expected, based on conversations with students about how they actually use these tools.

While a well-communicated and regularly revised policy is a start, Prof. Rosen also noted that realizing how easily students can use tools like ChatGPT to avoid doing their own work has caused her to focus even more on her relationship with her students and the learning goals her assignments serve. Based on that thoughtful engagement with both the issue and her students, Professor Rosen has revised not only her policy but also her assignments. She explained that one of her major writing assignments has been restructured to be completed in installments, with opportunities for feedback and revision before the final submission. Similarly to Prof. Bhasin, Prof. Rosen noted that the goal of supporting learning can still be served by the longstanding pedagogical practice of scaffolding assignments with more structure and multiple drafts.

As Rosen explained, “…if I want students to actually do the work I’m asking of them, I need to do a couple of important things. I need to appeal to their desire to learn. I need to demonstrate that I care about their learning. I want them to succeed, and I need to structure assessments that instantiate these commitments by providing opportunities to iterate. And I need to be explicit about all of these motivations about why I’ve designed the assignment the way that I have.”

Borja Sotomayor

Prof. Sotomayor described a similar emphasis on clear communication about learning goals in his panel remarks.

Sotomayor and his colleagues in computer science have found that some of the major challenges posed by AI tools are:

  • They are not dissimilar from other legitimate aids a developer might use, like asking a colleague for help or looking up a solution on a site like Stack Overflow.
  • They are productivity tools that students will probably be expected to use as part of their work when they enter the field.

As such, Sotomayor and his colleagues in computer science have found it important to balance two important truths about learning in this field:

  • Authentic software development situations will require and encourage students to rely on colleagues, tools like Stack Overflow, and (in this new landscape) generative AI tools.
  • Learning situations restricting the use of these tools have an important skill-building purpose. Even tasks that AI could do for students offer the benefit of developing the skills to resolve the more complicated challenges they’ll face in the field.

Sotomayor describes his approach to this challenge as follows:

“We ask them to work on short individual exercises and we have to be very clear to connect that to the learning goals of these classes, and to help them understand the reason why we’re asking them to do these problems and why we have constraints like not working with another person, not using someone else’s code, etc. It’s not because we want to be draconian academic honesty enforcers. It’s because it serves a legitimate learning purpose: if you just take these exercises, solve them with ChatGPT, and just submit that and don’t actually write the code yourself, that is going to affect you later on, when you actually do have to work on these much more complex systems.”

In Closing

Based on the insights shared by these instructors who have actively engaged with the new challenges and opportunities AI tools have created in pedagogy, we can offer the following brief takeaways.

  • Communicate openly: Discuss with your students the learning goals the work in your class serves, the way AI tools might support or impede those goals, and what your expectations will be in that context. (The idea of “construct relevance” in Universal Design for Learning may be a helpful guide to those looking to narrow down the most important purpose any given learning activity serves.)
  • Maintain critical awareness: While these tools may be useful to organize thoughts, generate ideas, or refine writing, it is important not to be misled by the fluency of the text these tools generate, which can outpace its veracity.
  • Balance utility with a desirable level of challenge: There may be situations in students’ professional futures in which they are allowed, encouraged, or even expected to be fluent in using AI tools to complete their work. However, you can also communicate to learners the skills and expertise they stand to build by completing work themselves that may be done more quickly using AI tools, whether it’s the basics required to solve complex coding problems or the refinement of one’s ideas that happens between drafts.

For more resources and future events on teaching in the context of new generative AI technology, please continue to follow the ATS Blog, where you can also read previous blog posts about generative AI. For a list of ATS office hours and upcoming workshops, please visit our workshop schedule. For individual consultations, please send an email to academictech@uchicago.edu.

Image by Claudia from Pixabay