
100% AI Turnitin result

As an IB Extended Essay Coordinator at our school, one of my practices is to run student essays through Turnitin at the Second Draft stage in order to get an early feel for where the student’s work stands in regard to traditional source documentation—and, more recently, AI use.

I have been using Turnitin for several years now and am fairly experienced with the reports it generates. Last week, however, we had a result that I am at a loss to explain, and I was hoping for some insight. The essay in question was written by a very strong student who, I can personally confirm in my capacity as the Extended Essay class teacher, created their own research space and source annotations. Likewise, the student's Extended Essay supervisor can confirm that the essay which grew out of that research space is the student's own authentic work.

Last week I ran about 25 student essays through the Turnitin scan, and the results more or less conformed to my expectations, with the one notable exception of this student, whose work came up as 100% generated by AI. I had never seen a 100% report in the context of a research essay, and I was especially struck by the result because of the student in question. While I have no doubt this is authentic work written by this student (my colleagues concur in this), my concern is that this particular task is externally assessed by the IB, which could conceivably run the same scan.

In my efforts to better understand the results of this particular scan, I spoke with the student and shared my concerns. After explaining that even programs not normally associated with generative AI tools can trigger a positive scan result, I asked them to describe any practice that could conceivably have produced one. The student explained that, first, they are in the habit of rigorously structuring their arguments, even at the paragraph level, to conform to argumentative structures taught in class. They also reported using Grammarly to review sections of their essay. As they did not have access to the paid Pro version, they would simply reword and rework flagged areas of their work until no highlighted words remained in the Grammarly results (the free version of Grammarly signals words or passages it interprets as grammatically incorrect or awkward without specifying the problem or the suggested changes). So while it is true that in doing so the student stripped their writing of individuality, both in word choice and structure, no tools were used to assist in the writing itself.

I’m aware that AI detection is based on predictive models of word choice, and that these behaviors on the part of the student might cause a percentage of the essay to read as AI-assisted when in fact it was not. It is the 100% result, however, that left me needing some insight into what might have happened. Otherwise I’m not sure how to regard the AI-positive scans (albeit with lower percentages) of other students in this cohort.
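As I understand it, the intuition behind "predictive models of word choice" can be sketched in a few lines. This is a toy unigram illustration only, not Turnitin's actual (proprietary) detector, and the reference word frequencies below are invented for the example:

```python
# Toy sketch of perplexity-style AI detection. NOT Turnitin's model:
# a hypothetical unigram illustration of the general idea that text
# built from consistently high-probability word choices reads as
# "predictable", and predictable text is what such detectors flag.
import math
from collections import Counter

def predictability(text: str, counts: Counter) -> float:
    """Average per-word log-probability under a unigram model with
    Laplace smoothing. Values closer to 0 mean more predictable text."""
    total = sum(counts.values())
    vocab = len(counts) + 1  # +1 slot for unseen words
    words = text.lower().split()
    logp = sum(math.log((counts[w] + 1) / (total + vocab)) for w in words)
    return logp / max(len(words), 1)

# Invented frequencies standing in for a trained language model
reference = Counter({"the": 50, "of": 30, "analysis": 5, "shows": 5,
                     "results": 5, "data": 5, "perspicacious": 1})

generic = predictability("the analysis of the data shows the results", reference)
idiosyncratic = predictability("perspicacious data analysis", reference)
# generic phrasing scores as more predictable (closer to 0)
```

If this picture is roughly right, then reworking every Grammarly-flagged phrase toward safe, common wording would push the prose in exactly the "predictable" direction, which might be consistent with an elevated score, though it would not by itself seem to explain a flat 100%.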

I would appreciate any assistance you can give me in better understanding and interpreting the results of the AI scan, particularly in the case of a 100% result. Thanks very much!

1 reply

    • Online Community Manager
    • kat_turnitin
    • 2 wk ago
    • Official response

    Hello there  👋

    We’d love for you to join our conversation about AI in teaching and learning! Check out our #AskTurnitin: Month-Long Q&A with the Turnitin Team on Navigating AI in Teaching and Learning event and share your thoughts or questions in the thread. Our team members, Patti and Gailene, are excited to dive into this discussion with you.

    Please post your question or comment directly in the thread so we can continue the conversation there! 💙

    - Kat, Turnitin Team 

Content aside

  • Status: Answered
  • Last active: 2 wk ago
  • Replies: 1
  • Views: 39
  • Following: 2