
#AskTurnitin: Month-Long Q&A with the Turnitin Team on Navigating AI in Teaching and Learning

Thu Nov 20 - Sat Dec 20
Event by Turnitin Official

Are you curious about how AI detection can support trust, integrity, and student authorship in your classroom? Or maybe you want to explore practical strategies for navigating AI responsibly in teaching?

Join #AskTurnitin with Turnitin team members Patti West-Smith and Gailene Nelson as they discuss how educators can approach AI in the classroom with balance and insight.

Explore how thoughtful use of AI detection and Turnitin tools can support academic integrity, empower educator judgment, and enhance the learning experience.

Meet our team:

  • Patti West-Smith – Senior Director of Global Customer Engagement at Turnitin
  • Gailene Nelson – Senior Director of Product Management at Turnitin

How it works:

#AskTurnitin will be open in TEN for 30 days, giving you plenty of time to post your questions and join the discussion. Patti and Gailene will be checking in regularly to respond and share their insights.

Ask about:

  • How to discuss AI and authorship with students
  • When AI detection is most helpful—or most challenging
  • Balancing innovation and integrity in AI-enabled learning
  • How to interpret AI detection results ethically
  • What support or resources would make AI detection more meaningful for your context

#AskTurnitin Guidelines:

  1. Be respectful: Treat all participants with kindness and professionalism.
  2. Stay on topic: Questions should relate to AI detection, teaching strategies, and classroom experiences.
  3. No product support requests: Technical or account issues should be directed to Turnitin Support.
  4. Avoid sensitive personal info: Do not share personally identifiable information about yourself, your institution, or students.
  5. Engage constructively: Share insights, ask thoughtful questions, and build on others’ contributions.

Helpful resources to support your participation:

Start the conversation:

Reply to this post with your questions, and Patti and Gailene will jump in with their insights. Let’s connect, share experiences, and learn from each other as we explore the role of AI in education!

25 replies

    • Patti_WestSmith
    • 2 wk ago
    • Official response

     Hi everyone, Patti here—welcome to our month-long #AskTurnitin conversation!

    To kick things off, I’d love to highlight a thoughtful piece on LinkedIn written by our CPO, Annie Chechitelli: “AI Detection Is Imperfect—And Should Be.” If you haven’t seen it yet, it’s a great read and sets the stage for why this discussion matters so much right now. Annie reminds us that detection isn’t about perfection; it’s about giving educators insight, context, and confidence as they navigate authorship in an AI era.

    With that in mind, Gailene and I are here all month to talk openly about how you’re approaching AI with your students, what challenges you’re facing, and how tools like AI detection can support integrity and student learning without getting in the way of the teaching moment.

    We can't wait to hear from you! Drop your thoughts or questions below—big or small. We’re excited to learn from you and support your conversations about responsible, balanced AI use over the next 30 days.

    • Online Community Manager
    • kat_turnitin
    • 2 wk ago
    • Official response

    Hello TEN Community 👋

    Welcome to our very first #AskTurnitin. If you’ve got questions for  and about AI in education, this is the place to ask! Can’t wait to see what you’re curious about! 

    • Online Community Manager
    • kat_turnitin
    • 2 wk ago
    • Official response

    I also want to give a big shoutout to   who earned the Top Contributor badge🏆

    Your thoughtful contributions truly make our TEN community better, and we’d love for you to take part in this conversation 💙 

      • Online Community Manager
      • kat_turnitin
      • 5 days ago
      • Official response

      I want to give another big shoutout to   for recently earning the Top Contributor badge!  🏆

      We’re thrilled to welcome you to our very first #AskTurnitin event and can’t wait for you to join this engaging discussion about AI in the classroom!   

    • Aboli_Karnik
    • 2 wk ago

    This year, even when students write original work, the AI report still shows around 50%. However, it is possible to obtain a false positive. How can we demonstrate that the work is the student's original work? The educator is the best judge. It is the educator's call, because they know their students, but how can we justify giving the same answer to all students?

      • Gailene_Nelson
      • 2 wk ago
      • Official response

       You are correct - the educator is the best judge. Your knowledge of the student's voice, writing habits and context of the assignment is the most reliable form of assessment.

      AI tools primarily look for patterns, sentence structure, and vocabulary choices common to large language models (LLMs), and a student's original work can sometimes mimic the output of an LLM. Additionally, built-in AI-powered features within Grammarly, MS Word, Google Docs, etc. that "refine" or "improve" text are also widely used by students during the editing process. These tools can introduce AI-generated text in ways that may not be obvious to users. Considering these points, detectors should be used to provide instructors with data signals, but they should never override professional judgment. 
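
      To make "data signals" concrete, here is a small, purely illustrative Python sketch of one kind of signal a detector could compute: how predictable a passage looks to a language model. To be clear, this is not Turnitin's detector or anything close to it; the model (public GPT-2) and the libraries (Hugging Face transformers, PyTorch) are assumptions chosen only for the demonstration.

      # Toy illustration only: NOT Turnitin's detector or algorithm. It computes
      # one hypothetical "data signal" (how predictable a passage is to a small
      # public language model). Assumes the `transformers` and `torch` packages.
      import torch
      from transformers import GPT2LMHeadModel, GPT2TokenizerFast

      tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
      model = GPT2LMHeadModel.from_pretrained("gpt2")
      model.eval()

      def mean_token_surprise(text: str) -> float:
          """Average negative log-likelihood per token; lower = more predictable."""
          enc = tokenizer(text, return_tensors="pt")
          with torch.no_grad():
              out = model(**enc, labels=enc["input_ids"])
          return out.loss.item()

      # Formulaic, highly predictable prose tends to score lower than
      # idiosyncratic human writing, but a heavily edited human draft can look
      # similar, which is why a number like this is a signal, never a verdict.
      print(mean_token_surprise("The quick brown fox jumps over the lazy dog."))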

      • Patti_WestSmith
      • 13 days ago
      • Official response

       I want to add my thoughts to Gailene's response. As a former educator myself, one of the things we're proud of at Turnitin is that we have a whole swath of former educators across many different functions at the company, including on our AI team. I point that out because one of our grounding philosophies is to respect educators for their professional judgment and discretion. It is never our intention to supplant the educator; it is only ever our intent to supplement with the data points and insights our tools can provide. 

      Another area of your original question worth considering is HOW educators and students can demonstrate that the work is the student's original work. As a former English teacher, I believe that some of the core tenets of writing pedagogy remain true, and one of those is around the writing process. Using a process approach to writing assignments helps to demonstrate how the work takes shape over time, where educators have some visibility into the choices the student writer has made along the way, including whether they did/did not use generative AI and how they used it if they did. My team of pedagogical experts has been saying this since even before Turnitin's AI detector launched, and we're still saying it. Putting that kind of visibility together with a detector is a way of bringing together multiple data points.

      With the release of Turnitin Clarity in 2025, there's now even more visibility possible as Turnitin Clarity can make the writing process even more transparent. We often talk about how Turnitin Clarity helps educators, but in this case, it helps students as well. Students told us that they feared not being able to prove or document their integrity, and tools like Turnitin Clarity give them an actual record of how their writing took shape. We like to say that its use can help to rebuild any erosion of trust between educator and student. 

    • Gailene_Nelson
    • 2 wk ago
    • Official response

      Hi everyone! Gailene here—jumping in with a product perspective as we continue our #AskTurnitin conversation.

    One thing I see often in my role is how educators are trying to balance trust, student agency, and the realities of AI, all while interpreting detection insights in a way that supports learning rather than policing. Something Annie’s article touches on (and that we think about constantly on the product team) is this idea: AI detection isn’t meant to be a verdict—it’s meant to be a signal. A starting point for a conversation, not the end of one.

    With that in mind, I’d love to hear from you all! Your feedback genuinely influences what we prioritize, so anything you share helps us make these tools more meaningful and supportive for real teaching moments.

    Looking forward to hearing your thoughts!

    • Mary_Beth_Kwase
    • 11 days ago

    Hello,

    I have started using Clarity and have activated the grammar, citation, and AI assist.  I was disappointed to find that when a student uses the allowed AI assist, Turnitin still flags the essay as AI generated.  One essay came back as 100 percent AI generated.  I do not think this student cheated in any way, so I am not concerned about accusing him wrongly, but I really would hope that in the future, you could create a tool that would flag AI usage outside the AI assist.  I know this may be challenging, but since the system knows what it suggested (and presumably the system didn't actually write the paper or offer wording), the system could discount the suggestions as AI generated.  It's a sophisticated bit of work, but AI can probably help you with it.  :)  

    The other part of Clarity I would recommend improving is the grading tools; I would like to see more of the grading features offered in the program our college uses, Canvas.  If it were not for the AI assist, I would not use Clarity because the grading is clunky.  I miss being able to highlight student passages to comment on them, for example. The rubric is also very basic.  I am going to try a weighted rubric to see if I can get more flexibility in awarding points.  

    I am an early adopter at my college for the Clarity system, and I look forward to growing with you as new capabilities and tools become available.  I would enjoy hearing your feedback on the topics I have raised above. 

    Thank you!

      • Gailene_Nelson
      • 11 days ago
      • Official response

       Hi Mary Beth! Thank you for your detailed feedback! There's a lot to unpack, but we're looking into each of your points. It may take us a few days to get back to you, with the U.S. holidays, but I wanted you to know we're looking!

      • Gailene_Nelson
      • 5 days ago
      • Official response

      Hi  

      We really appreciate your early adoption of Clarity, and look forward to working with you and other educators as we build out new functionality that supports a more transparent and formative learning process.

      I believe your first set of feedback is related to the AI Writing report, but if that’s not the view you are referring to, please let me know. In this case, our team is actively investigating how we can distinguish the AI content contributed by the Clarity Assistant from other AI content. We are looking to solve for this in 2026.

      It’s also important to note that many common tools for grammar and spelling - even auto-correction - are powered more and more by AI now, so using those tools can also contribute to the detection score. That said, I’d be remiss if I didn’t emphasize that our detection tools do not make any determination of misconduct. We want to provide you with data that contributes to how you make your decisions about academic integrity. Detection is just one data point in your process. 

      I’d like to dive into the second part a little more. Are you looking for more grading tools to use as part of the writing process (pre-submission), or as part of the Clarity Writing Report? Apologies if you already have this information, but we do have advanced grading tools as part of the final paper Similarity Report view (more details here).

      Aside from being able to highlight and add comments, what other tools are you using via your Canvas integration that you would like to see when grading the students’ Clarity Writing Report? I’m also curious to understand how we might improve the grading and feedback discovery and flow between the various reports.

      Thank you again for your patience with our response, and for sharing your experience with Clarity. I know this is a busy time of year, so I’ll keep an eye on this thread for any new responses.

       

      Best -

      Gailene

      • Mary_Beth_Kwase
      • 4 days ago

        Hello, I would be happy to meet with someone via Zoom to go over my grading suggestions.  I have kept a list of a few examples so I can show someone.  Perhaps, some of the issues I am dealing with are because I am trying to teach myself how to use this.  I really like the ability to see how the document is created.

      Is there any way for me to know if the students used the grammar check or citation checker?  

      Is there anyone who could work with me to be sure I am using all the tools correctly?

      I teach composition, and unfortunately, I will be turning off the AI assist for spring semester - and not recommending this to my colleagues until you improve it.  I allowed the AI assist for a few assignments this fall, and I found the AI gave students answers and even reworded/created sentences for them to use in their papers.

      One recent assignment was about the civil rights era and the students had to explain how three acts of non-violent resistance were effective.  I played with the AI, giving it little direction, and it essentially gave me the answer.  The information below is what the students should have been coming up with themselves from the materials we have been studying. 

      From the AI in Clarity:

      Non-Violent Resistance in the Montgomery Bus Boycott

      Here are some powerful moments that illustrate non-violent resistance:

      • Peaceful Protest: African American community organized a mass transportation boycott
      • Economic Strategy: Refusing to ride buses cut into transit system's revenue
      • Personal Sacrifice: Participants walked miles instead of using segregated buses
      • Moral Courage: Maintaining dignity while facing harassment and potential violence

      Key Quotes from MLK to Consider

      • Resistance was about "winning friendship and understanding"
      • The goal was to oppose the system of segregation, not harm individuals
      • Protesters refused to hate their opponents, showing "agape" love

      Then - it GAVE me a topic sentence: 

      • "Through strategic economic pressure and unwavering dignity, African American community members transformed the Montgomery Bus Boycott into a powerful example of non-violent resistance."

      For me, this is far too much "help."  Once you at Turnitin create the ability to allow the instructor to limit how the AI is used, I will try it again.  I would accept "Hint Only Mode" and "Rewrite Restriction" - but not what it's doing now.  

      I would like to talk with someone about the grading features I intend to continue using - to offer you suggestions and to be sure I am using the tools correctly. 

      • Gailene_Nelson
      • 3 days ago
      • Official response

       I appreciate your persistence! It's really helpful to understand how you would like to use these tools, and how you are finding your way. I would love to connect you with one of our Product team members so you can walk them through your experience.

      If I understand what you are saying about the AI Assistant, it's giving you way more content than you would have expected. I suspect that different levels of complexity with different prompts will have some variability, but our goal has never been to write the work for the student. 

      As you know, we just launched Clarity this year, and the team is actively working to address enhancements and updates as we hear back from our early adopters, like you. We are in the early stages of defining AI customization features that will allow instructors to configure the type of assistance they want to enable (or restrict) within the AI Chat for students. I've passed your feedback along to the team so they can take this into consideration. There's definitely room to grow and improve our user experience, so feedback with context like yours really helps - thank you!

    • Michael_Augello
    • 6 days ago

    Hello - 

    While I recognize that there are merits to using AI to support writing in certain ways, because I teach English courses to juniors and seniors that center around composition (word choice, organizational choices, making arguments that should be unique, etc.) I avoid it in nearly all my formal writing assignments. 

    I recognize that the new Clarity program exists (a discussion for another day), I am very aware of how TurnItIn’s AI detection evaluates writing, and I definitely know it isn’t perfect. I totally understand and appreciate the way TurnItIn labeled this tool as the start of the conversation rather than the, for lack of a better term, smoking gun that points to this form of plagiarism that is becoming increasingly common.

    All that to say... I see a lot here about professional judgment, but even though I am in my 12th year in the classroom, I am losing confidence in my judgment every day. I’ve been having a lot of these AI conversations with students of late. I’ve had one or two students that logged 96% or 100% that there really isn’t much of a conversation about. But I’ve also had quite a few who logged the “*,” showing that there’s AI, but not enough to be confident, and more who have logged between 20 and 50%. 

    My question: the majority of students I’ve spoken to are shocked that the AI score they have is as high as it is. I am not naive - I know I might have a few being dishonest. At this point I ask them to tell me ANYTHING they might have done differently and the most common answer is… they accepted choices recommended by the Grammarly extension or in other cases, passed it through a program and said “check my grammar.” Is this enough to set off the alarms? Can you explain to me how this works so I can explain it to them? I have bounced around some other forums where this has been discussed, but I figured I’d come right to the source. 

    Also I guess while we're here... can you define "false positive" as it is seen by the program? Can these come from honest writing that even educators with a clear tone have written?

    This is a cool forum! 

      • Patti_WestSmith
      • 5 days ago
      • Official response

       

      Hi Michael — really appreciate you joining our chat and bringing such a meaty question. I want to say up front: you’re asking the exact questions I hear from so many educators right now. Before I stepped into my role here as Sr Director of Customer Engagement at Turnitin, I taught for years too, and the feeling you described — trusting your professional judgment but also second-guessing more than you ever expected in year 12 — is so real. You’re not alone!

      Let's unpack a couple things from your post:

      First: the uptick in AI scores.
      You’re not imagining it. We updated our AI detector recently to better identify content from the newest large language models (think GPT-5, Gemini 2.5 Flash, etc.) and AI bypassers. Anytime generative AI advances, we have to move just as quickly to keep detection accurate. So the higher scores you’re seeing lately? That tracks with what we’d expect after an update.

      We test extensively before we release anything — more than 700,000 academic papers written before ChatGPT even existed, plus nearly another one million human-written documents based on customer feedback. That’s how we maintain a false positive rate below 1%; in practice, that means fewer than 1 in 100 fully human-written documents would be flagged as containing AI writing. In our latest testing, even among documents showing more than 20% AI writing AFTER the update, that false positive rate stayed under 1%.

      All that said, the AI score is still just one data point. It should never be the sole basis for an academic integrity decision, and you’re already doing exactly what we encourage educators to do: talking with students, reviewing drafts, and leaning on your knowledge of your students and their writing.

      On Grammarly, Gemini, and other tools:
      You hit the nail on the head. Students often don’t realize that when they accept extensive revision suggestions — not just punctuation fixes, but full-sentence replacements or paraphrasing — those changes DO introduce AI-generated content. And that can absolutely raise the AI score. It doesn’t automatically mean it’s misuse; it just means their writing now contains patterns common in AI-generated text, which you, as the educator, can now identify and decide how to approach instructionally.

      That can be helpful framing for students:
      “Your score may be higher not because you did something wrong intentionally, but because the tool changed your voice more than you realized. Now, what do you want to do about it?”

      Your initial post is so packed with ideas that I want to bring in another voice from our team. I’m tagging in another former educator at Turnitin, who helps create many of our pedagogical resources, to address false positives.  Your turn! 

    • Senior Teaching & Learning Innovations Specialist
    • Karen_Turnitin
    • 5 days ago
    • Official response

    Hi 

    Always happy to talk Turnitin! 

    A “false positive” for us means human-written text flagged as AI-generated. Because we keep that rate below 1%, it’s rare, but even honest student writing can sometimes get flagged, especially if it’s very polished, very formulaic, or unusually consistent for that writer. If you ever have a submission you truly believe is misclassified, please share it with us. We can’t always give individual responses due to volume, but we do review them and use those cases to keep improving accuracy.

    In the meantime, I'm thinking of a few resources that may be useful to you as we navigate AI and false positives: (in no particular order, I should add)

     

    1. Discussion starters for tough conversations about AI
    2. How to interpret Turnitin's AI writing score and dialogue with students
    3. AI conversations: Handling false positives for educators
    4. Approaching a student regarding potential AI misuse

    There are actually more on this general topic (AI bypassers, for example) on this page: Academic integrity in the age of AI

    I’m delighted you brought this topic here as this is exactly the kind of nuanced, real-world conversation we hoped this forum would spark. If you want to talk through a specific example or walk through what you’re seeing in more detail, I’m happy to dig in with you.

    • Online Community Manager
    • kat_turnitin
    • 5 days ago
    • Official response

    #AskTurnitin Conversation Starters

    Hi TEN Community! We’ve seen some educators in TEN noticing that AI scores seem to be rising, sometimes more than expected. Let’s have this conversation here! 

    Some context:
    - Our newest model detects bypassers more accurately
    - It identifies ChatGPT-5–level writing better
    - This means that higher scores may reflect improved detection, not changes in students’ writing

    Question: How are these shifts showing up in your classrooms? What questions or challenges are you facing when interpreting AI scores?

        and fellow educators, we’d love to hear your tips, strategies, or experiences navigating these changes.
     

    Resources:
    In a world of AI, why citation and referencing still matter (Maybe now more than ever!)  
    Taking a deeper dive into AI writing at Turnitin

      • Senior Teaching & Learning Innovations Specialist
      • Karen_Turnitin
      • 5 days ago
      • Official response

       I'm glad you asked! 

      I know that time is a valuable commodity, and things tend to get added to educators' to-do lists far more often than taken away, but one thing I'd like to mention is a point that  brought up: each AI score is a starting point. 

      1️⃣ It needn't be lengthy, but a student reflection on how they used AI or using something like this student checklist (created specifically with Turnitin Clarity in mind, but there are others!) can give you the first insights needed to start making a determination. Is it a lack in skills development or an attempt to plagiarize?

      Why a reflection? Intent is important, but often hard to determine simply by reading a student's work. Make this a regular part of the writing process so no one feels as if they are being accused simply by being asked to fill this out.

      2️⃣ Use this information to formulate a plan for a conversation with the student. It is often presented as an either-or choice, but it is possible to determine from a student reflection whether they have intentionally plagiarized or not. BUT, I cannot emphasize enough, even if you are required to turn it over to an integrity council, teaching the skills that they need to complete the assignment is worth the time.

      Again, I am not so long out of the classroom that I have forgotten the many demands on instructors' time, but I do believe it's a valuable investment of time.

    • Patti_WestSmith
    • 5 days ago
    • Official response

     provided some great tips and resources! In a previous response I suggested that a discussion with a student (or group of students) like this could be helpful framing: “Your score may be higher not because you did something wrong intentionally, but because the tool changed your voice more than you realized. Now, what do you want to do about it?”

    AI has become so prevalent in so many tools that the odds are quite high that students don't even realize they're using AI or that it is shaping their work. A question such as the one above helps to open the discussion to what it means to have a distinct voice and style and to be intentional about if/how AI may be impacting that. Helping students to "interrogate the work" is really building their ability to think critically; as educators, we would want them to do that about any information they're consuming so we should also want them to apply that same lens when looking at their own work, especially when a tool begins to change it from what they may have originally intended. 

    Like  and , I'm mindful of how much time some of this could take, so one suggestion might be to use this as a whole-class or small-group activity rather than trying to use a 1:1 conference approach. I suspect that more students than not could use this kind of discussion and practice, and it will certainly make it more feasible for instructors. 

      • Michael_Augello
      • 5 days ago

       

      Thanks for getting back to me, everyone. All of the ideas certainly help (though they get my mind going in a thousand different directions for what I can do/say and how I can best assess next). A few ideas stick out to me. When evaluating AI scores: "detection doesn't always mean misuse." When discussing with students: "Your score may be higher not because you did something wrong intentionally, but because the tool changed your voice more than you realized." 

      I will say that those conversations are very easy on paper, but the students are human (and young!) so that call over to my desk and "Let's talk about this assignment" can lead to the red flush, quivery lip, etc. and I want to have the most confidence I can in those conversations, but I'm not really at that point right now.

      I'm doing my best to keep AI use transparent: "use it to do ____ and ____" and "Absolutely no AI when you're doing ______." But in what I deem the Wild West of AI where information runs free, that's going to take a lot of practice.

      I have colleagues who believe they have the ultimate AI defense... pen and paper. That may prevent misuse, but I don't think that it's practical for the students to avoid digital tools they'll likely be using when they're out of school. 

      • Senior Teaching & Learning Innovations Specialist
      • Karen_Turnitin
      • 4 days ago

       I can relate to all that you've said. Throughout most of my teaching days, I worked with 8th and 9th graders, and in spite of their apparent toughness, they can be quite sensitive. 

      Also, I really appreciate the points you're making. Yes, the language and parameters all need to make a shift. And it will definitely take time, quite possibly a lot of it. I will say it's helpful if all the teachers in your institution are working in similar ways. Paper and pen may be appropriate for some assignments, but it won't give the same insights into student writing and can be less helpful if that's all that's happening. In any case, I sense we're on the same page!

      I considered adding another resource to my previous post but didn't; I realize now that maybe I should have!

      1. My first suggestion is to use a checklist as a guide for students in conferences. If both of you agree on using that as a (starting) script, it will hopefully take away some of the nerves and potential defensiveness--and tears! I linked the student-reflection in my previous post, but the Ethical AI checklist is one that covers before, during, and after "checkpoints."  
      2. My second is a guide for students: AI conversations: Handling false positives for students. This is similar to the guide for you. I'm curious as to how this would work with students? I'm thinking a role play with them? Maybe you take on the role of student in the "Hot Seat" and demonstrate the less effective ways of handling these conversations (My students used to laugh as I really played it up although obviously with different content) and more effective, productive ways. And practice makes perfect!
      3. I maybe should have listed this one first: Research planning worksheet. This is for note-taking as you go so that the information is there to pull together for drafting, citations, etc. A comparison by the student of their notes and draft can reveal a lot.  One sheet/source if you print front and back, or even better if shared electronically!

      Finally, we've been working on a plan for what rolling out AI might look like, both for you and students. It's full of resources like those we've been listing and may be helpful. Funnily enough, I just posted about it in here this morning! Take a look and let me know what you think :)

    • Claire_Eaton
    • 4 days ago

    Hello, I am from the UK so I apologise if I use different terminology. I would like to say this forum is a fantastic idea and I am really enjoying reading the questions and the advice provided. What I am understanding a lot more now is that AI detection isn't about accusing students of misuse but about opening up a conversation about how AI has been used. AI is not going anywhere and the education sector needs to embrace it, but this is quite difficult. I keep having more conversations now about how early transparency is going to be key, and about educating students on what counts as appropriate use and what counts as misuse.

    I have many questions, but what I would like to start off with is the technical aspect of the AI detection. I understand from this forum that it works by identifying 'signals'. What are these signals? What is it that shows up in students' work that flags it may be AI writing? I often use my own indicators to have conversations with learners; for example, Americanised language is a good one in the UK, as we can identify when 'z' is used in words and discuss it. Also, if the work is generic and has no personalised examples in there, or if the tone changes. I have heard about the 'rule of 3' but I do not fully understand it, or know how accurate it is, to comfortably have a conversation about AI use. So what are the signals that flag AI may have been involved? I feel if I understand the technical aspect a bit more it could shape my conversations better. I must say though that I am not in IT so I do not understand a lot of terminology.

    Thank you in advance

      • Digital Customer Experience Manager
      • Audrey_turnitin
      • 3 days ago

       

      Thank you so much for this thoughtful question, and for sharing how you’re already approaching conversations with your learners. As a former educator myself, I really appreciate the lens you’re bringing: curiosity, transparency, and a focus on teaching rather than accusing. That mindset is exactly where so many institutions are heading, and it’s wonderful to see it reflected here. ❤️

      I can speak first from the teaching perspective, and then I’ll share how we’re following up on the more technical side.

      From an educator lens:

      The “rule of 3” in the context of AI-written work isn’t an official Turnitin principle, but it’s a shorthand some educators have adopted to help themselves think through patterns rather than one-off quirks when evaluating whether AI may have been involved. In other words, AI use is rarely identifiable from a single strange sentence, but three consistent patterns together may warrant a conversation.

      What educators usually mean by the “rule of 3”:

      It’s the idea that if you notice three separate, meaningful indicators that feel inconsistent with the student’s typical writing, it might be time to check in with them. These indicators can vary, but commonly include a lot of what you mentioned in your post and more, including:

      • A sudden shift in voice or tone (e.g., a student who normally writes plainly suddenly submits highly polished, abstract, or oddly formal writing)
      • Content that is generic or lacks personal detail (AI often produces safe, middle-of-the-road language without specific examples or lived experience)
      • Vocabulary or syntax that doesn’t match the student’s previous work (e.g., advanced or overly consistent sentence structures)
      • Spelling or dialect mismatches (like you said, American spelling from a UK learner who normally writes in British English)
      • Highly structured, formulaic paragraphs that feel unnaturally even or “smooth”
      • Patches of writing that feel disconnected from the rest (e.g., one paragraph is far more polished, academic, or on-topic than others)

      Why the rule exists

      It’s not a detection method, but really a teaching practice. The rule encourages educators to:

      • Avoid making assumptions based on a single clue
      • Look for patterns instead of “gotchas”
      • Use the presence of multiple indicators as a conversation starter, not an accusation

      On the technical side:
      Turnitin’s detector doesn’t use the rule of 3 internally. Instead, it looks for patterns in the writing that are statistically more common in AI-generated text than in human text. These are what we refer to as “signals” — things like predictability, rhythm, and linguistic patterns that large language models tend to produce. They’re not visible in the way a spelling choice might be; they’re detected through computational analysis.
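
      For anyone who likes to see the idea in code, here is a purely illustrative Python sketch (emphatically not Turnitin's method; the "evenness" heuristic is just a toy assumption for demonstration) of one simple pattern: how evenly sentence lengths are spread. Much machine-generated prose reads unnaturally even, while human drafts tend to mix short and long sentences.

      # Toy illustration only: not how Turnitin's detector works.
      # One hypothetical pattern-based signal: the spread of sentence lengths.
      import re
      import statistics

      def sentence_length_profile(text: str) -> tuple[float, float]:
          """Return (mean, spread) of sentence lengths, measured in words."""
          sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
          lengths = [len(s.split()) for s in sentences]
          return statistics.mean(lengths), statistics.pstdev(lengths)

      sample = ("The boycott lasted over a year. It reshaped the movement. "
                "Participants walked for miles, faced harassment, and kept going anyway.")
      print(sentence_length_profile(sample))  # uneven lengths -> larger spread

      Like any single number, a statistic of this sort says nothing on its own about misconduct; it only becomes useful alongside context and educator judgment.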

      That said, the rule of 3 is still useful pedagogically, because even when a detection score is present, contextual cues from an educator’s own observations can help frame a meaningful, supportive discussion with a learner.

      Since the technical questions you’re asking deserve the most accurate explanation possible,    is currently in touch with our AI team and will follow up in this thread with additional detail on what these signals are, more context for the “rule of 3”, and how educators can use detection results responsibly and confidently. We want to make sure you have clarity without needing an IT background, just practical language you can bring into your learner conversations.

      Thank you again for raising such an important question. Your focus on transparency and student understanding is exactly what helps AI become a tool for learning instead of a barrier.

      Looking forward to the continued discussion! 🎉

    • Digital Customer Experience Manager
    • Audrey_turnitin
    • 3 days ago
    • Official response

    Hi TEN Community! 👋

    This entire thread has been an amazing deep dive into what educators are experiencing in the classroom. Following the information shared, we have been inspired to update one of our more popular educator resources: “AI-Generated Text: What Educators Are Saying.”

    We released the original version two years ago (a lifetime in AI years!), and so much has changed, as clearly expressed in this thread: your curriculum, your tools, your conversations with students, and the role AI now plays in teaching and learning. That’s why we’d love to feature your experiences in the refreshed edition.

    We’re looking for short reflections around:

    ✨ Your challenges interpreting or talking about AI use
    ✨ Your successes teaching with or about AI
    ✨ Your wonderings — What still feels unclear? What are you grappling with?
    ✨ Your observations about how students are using (or misusing) AI
    ✨ Your evolving relationship with AI detection and trust

    We’ll be selecting a handful of quotes for the updated publication and will anonymize everything, of course. 🙂

    If you’re willing to share, please drop your thoughts right here in the thread. Even a few sentences can help fellow educators around the world feel less alone in navigating this fast-moving space. (And for those educators who have already posted such thoughtful insights, we hope to reach out to you separately to see if we can include your perspective in our resource:  and ).

    📘 Current resource for reference:
    https://www.turnitin.com/papers/ai-generated-text-what-educators-are-saying

    Thank you for helping shape the conversation — and for everything you do to support integrity, trust, and learning. 🙌

    • Peter_Pollack
    • 2 days ago

    Hi, I hope I'm asking this in the right place, and apologize if this has been covered as I just joined:  I teach a 100% online course, and my course requires a research paper of 1,500-1,800 words.  I'm receiving such a large number of papers with high detection (80-100% range) of AI 'generative text' (blue); I routinely plug the same text into other AI detection sites with wildly varying results.  Students all say they only use AI for grammar and/or organization but not to 'generate' their paper, which is my real concern: that they're simply using a prompt and having AI actually write their paper.  Do you feel confident in TurnItIn's accuracy?  Am I misunderstanding 'generative text'?

Stats

  • 25 Replies
  • 357 Views