It’s the elephant in the classroom. According to a recent survey of 1,000 college students conducted by the online magazine Intelligent, 30% of college students have used ChatGPT on written assignments. 

Interestingly, the study found that three-quarters of students who have used ChatGPT for homework consider it ‘somewhat’ (46%) or ‘definitely’ (29%) cheating. Clearly, there is dissonance and confusion around the ethics of appropriate usage.

The US Department of Education advocates for the adoption of the “human in the loop” approach to AI, including generative AI. This approach underscores the crucial role of educators as instructors and decision-makers in the educational process. They argue it is essential for educators to critically analyze the increasing influence of AI in these processes and take necessary actions to maintain the primacy of human judgment in educational systems.

The University of Chicago’s academic integrity statement makes it clear that usage of other people’s ideas needs to be identified, and ITS provides guidance on issues including the protection of university data. However, as a faculty member or instructor, you ultimately decide how to address the influx of generative AI tools, including whether and how best to adapt your teaching. 

A year on from the release of ChatGPT 3.5, many professors are taking a proactive yet measured approach by outlining their expectations in the course syllabus. Policies range from strictly prohibiting any use of generative AI to allowing limited AI-powered tools for specific purposes like research, citation checks, or grammar correction. The University of Chicago’s Center for Teaching and Learning has resources for crafting syllabus statements on the use of AI tools. 

The Challenge of AI Detection in Education: A Call for Prevention and Pedagogical Innovation

Despite an arms race between detection services and AI’s rapidly improving ability to generate human-like content, there are currently no foolproof methods for accurately detecting AI usage. Even the most advanced anti-cheating software exhibits high rates of false positives and false negatives when identifying work generated by AI. 

This particularly affects students for whom English is a second language. AI detectors are typically programmed to flag writing as AI-generated when the word choice is predictable, a pattern common among those writing in a non-native language. 

A more effective and fair approach may be to focus on prevention rather than detection. As an instructor, your pedagogical approach can play a pivotal role in discouraging students from resorting to AI. For practical guidance on crafting assignments resistant to AI manipulation, refer to our comprehensive guide.

AI usage by students may be inevitable or even unintentional, particularly with the integration of generative AI into word processing programs like Microsoft Word. It is crucial for classroom rules to be fair and communicated clearly, as interpretations may vary among individuals. Initiating an open dialogue at the start of the semester can set the foundation for a successful academic term and, in itself, serve as a valuable lesson for students.

Gain Insight Through Open and Honest Dialogue

Discussions about AI use are an opportunity to remind students of their role in building a strong academic culture and community, and to ensure students are familiar with the University’s academic integrity policy.

On the first day of class, consider evaluating students’ perspectives on the possible integration or guidelines for the use of AI in the learning experience through an open and transparent discussion. While this may consume some valuable learning time, the long-term benefits for both students and instructors make it a worthwhile investment. 

The University of Chicago academic integrity policy specifically states that “One of the functions of teaching is to educate students in the norms and ethics of scholarly work, as well as in the substance of the field.” Engaging in this discussion helps students think critically about when and where these tools might be appropriate, how they could be leveraged for out-of-class learning, and how to correctly cite AI tools following disciplinary conventions: most style guides now specifically address this issue, such as in this FAQ by the University of Chicago Press. 

Use Poll Everywhere to Open a Discussion

Conversations regarding AI should ideally be student-centered, transparent, and non-accusatory. Consider using Poll Everywhere, a tool that enables anonymous responses, to facilitate this discussion. Here are some suggested questions and tips:

  • Do you use AI for your schoolwork? If so, which AI tool(s) do you use? 

Given that most of your students are digital natives, they may be using tools you are not yet aware of. Ask this as an open-ended question. 

  • Is the AI tool you use free or subscription-based? 

AI tools for academic use often come with a subscription fee, placing them behind a paywall. Mathway, for instance, is a widely used application that charges $9.99 per month, and ChatGPT now offers a paid tier, ChatGPT Plus, with additional capabilities such as data visualization. This can contribute to educational inequality for students who lack the financial means to afford such services.

  • Are there specific AI tools or technologies you believe should or should not be allowed for use in academic assignments? 

Again, this could help you become aware of types of AI you may not be familiar with, and gather input from students on their stances on AI. 

  • Do you feel a need to acquire proficiency in using AI tools to be prepared for your future workplace?

This question could be helpful in guiding decisions about integrating AI usage into the curriculum to ensure students are well-prepared for their professional paths, especially in industries where AI serves as a “force multiplier.”

  • What is the primary purpose for which you use AI if you use it?

Leave this question open-ended, or provide options if it is only applicable in limited situations in your class that are specific to your subject. 

  • Which of these actions would you consider “cheating” or “plagiarism”? 

Create a scale like the one below, developed by Matt Miller of Ditch That Textbook, and customize it to the content and assessment style of your course. 

  1. The student plugs the prompt into the AI, copies the response, and submits it to the teacher.
  2. AI creates a response. The student reads, edits, adjusts, and submits it.
  3. The student creates multiple AI responses, uses the best parts, edits, and submits.
  4. The student writes the main ideas. AI generates a draft and offers feedback to improve.
  5. The student consults the internet/AI for ideas, then writes and submits the assignment, citing the AI. 
  6. The student writes all assignment content without consulting AI or the internet.

  • What level of AI usage would you be comfortable with other students in your class employing? 

Give this open-ended prompt to find a consensus; you could use Poll Everywhere’s settings that allow respondents to vote options up and down to see if others agree. 

In order to gain more insight from the responses to these questions and encourage honest answers, ensure that the poll is anonymous. You can enable this mode by selecting the activity, clicking on Configure, then Audience restriction & identity. Under the heading “How do you want to identify participants?” check Completely anonymous.

By gaining insight into how your students use and think about AI, you can navigate this increasingly relevant ethical and pedagogical issue with more confidence throughout the quarter. Because the field advances rapidly and the free and paid AI tools accessible to students continue to evolve, it is crucial to keep your guidelines up to date, so consider repeating this exercise with each class. 

Further Resources

(Cover photo by Mojahid Mottakin on Unsplash)