5 reasons educators need to have “the talk” with students about using AI for homework

Seven weeks after its launch, Turnitin's AI detector flagged millions of submissions for containing AI-generated content, but there's no reason to panic just yet.

As schools prepare for summer break, some leaders might see this as the perfect time to revamp their policies on classroom use of AI tools like ChatGPT. Students and teachers are already using these tools to streamline learning and work, but new data suggests students are also using them to complete their assignments. Still, the issue may not be getting out of hand just yet.

Seven weeks ago, Turnitin launched a preview of its AI writing detection tool. As of May 14, the company has processed at least 38.5 million submissions, and, to no one’s surprise, it is uncovering AI-written text, according to a recent blog post from Turnitin’s Chief Product Officer Annie Chechitelli.

According to the data, 9.6% of those submissions contained more than 20% AI writing, and 3.5% contained between 80% and 100%.

“It’s important to consider that these statistics also include assignments in which educators may have authorized or assigned the use of AI tools, but we do not distinguish that in these numbers,” Chechitelli wrote. “We are not ready to editorialize these metrics as ‘good’ or ‘bad’; the data is the data.”

She also stresses that the data is imperfect. As with any plagiarism or AI detector, there’s a chance the tool will mistakenly flag a student’s assignment.

“As a result of this additional testing, we’ve determined that in cases where we detect less than 20% of AI writing in a document, there is a higher incidence of false positives,” she wrote. “This is inconsistent behavior, and we will continue to test to understand the root cause.”

Such mistakes could also leave educators unsure how to handle suspected cheating. Based on feedback from teachers using Turnitin’s AI detector, Chechitelli notes that many simply don’t know how to approach students after their assignments are flagged for AI-written text.

Fortunately, the company has published several resources educators and district leaders should take advantage of when considering AI’s capabilities for enhancing student learning—when used ethically—in the classroom. Here’s a look at all five:

  • How to approach a student misusing AI: This guide helps educators learn about how to approach this conversation with a student, starting with collecting “clear and definitive documentation.”
  • Discussion starters for tough conversations about AI: Discussions surrounding the issue should support honest, open dialogue. Start by addressing the student’s strengths demonstrated in the assignment, then their weaknesses, and finally their apparent misuse of AI.
  • How to handle false positive flags: While false positive rates are small, it’s important that educators know how to begin the conversation when a false flag occurs.
  • Handling false positives as a student: Before submitting assignments, students should make sure they know the rules regarding AI use and what is and isn’t acceptable.
  • Ethical AI use checklist for students: Educators encouraging the use of AI in and out of the classroom should take steps to ensure students are upholding academic integrity by following these guidelines.
Micah Ward
Micah Ward is a District Administration staff writer. He recently earned his master’s degree in Journalism at the University of Alabama. He spent his time during graduate school working on his master’s thesis. He’s also a self-taught guitarist who loves playing folk-style music.