
#AskTurnitin: Month-Long Q&A with the Turnitin Team on Navigating AI in Teaching and Learning

Thu Nov 20 - Sat Dec 20
Event by Turnitin Official

Are you curious about how AI detection can support trust, integrity, and student authorship in your classroom? Or maybe you want to explore practical strategies for navigating AI responsibly in teaching?

Join #AskTurnitin with Turnitin team members Patti West-Smith and Gailene Nelson as they discuss how educators can approach AI in the classroom with balance and insight.

Explore how thoughtful use of AI detection and Turnitin tools can support academic integrity, empower educator judgment, and enhance the learning experience.

Meet our team:

  • Patti West-Smith  – Senior Director of Global Customer Engagement at Turnitin
  • Gailene Nelson  – Senior Director of Product Management at Turnitin

How it works:

#AskTurnitin will be open in TEN for 30 days, giving you plenty of time to post your questions and join the discussion. Patti and Gailene will be checking in regularly to respond and share their insights.

Ask about:

  • How to discuss AI and authorship with students
  • When AI detection is most helpful—or most challenging
  • Balancing innovation and integrity in AI-enabled learning
  • How to interpret AI detection results ethically
  • What support or resources would make AI detection more meaningful for your context

#AskTurnitin Guidelines:

  1. Be respectful: Treat all participants with kindness and professionalism.
  2. Stay on topic: Questions should relate to AI detection, teaching strategies, and classroom experiences.
  3. No product support requests: Technical or account issues should be directed to Turnitin Support.
  4. Avoid sensitive personal info: Do not share personally identifiable information about yourself, your institution, or students.
  5. Engage constructively: Share insights, ask thoughtful questions, and build on others’ contributions.

Helpful resources to support your participation:

Start the conversation:

Reply to this post with your questions, and Patti and Gailene will jump in with their insights. Let’s connect, share experiences, and learn from each other as we explore the role of AI in education!

56 replies

    • Claire_Eaton
    • 3 wk ago

    Hello, I am from the UK so I apologise if I use different terminology. I would like to say this forum is a fantastic idea and I am really enjoying reading the questions and the advice provided. What I am understanding a lot more now is that AI detection isn't about accusing students of misuse but about opening up a conversation about how AI has been used. AI is not going anywhere and the education sector needs to embrace it, but this is quite difficult. I keep having conversations about how early transparency is going to be key, and about educating students on what counts as appropriate use and what counts as misuse.

    I have many questions, but what I would like to start off with is the technical aspect of the AI detection. I understand from this forum that it works by identifying 'signals'. What are these signals? What is it that shows up in students' work that flags it may be AI writing? I often use my own indicators to have conversations with learners; for example, Americanised language is a good one in the UK, as we can identify when 'z' is used in words and discuss it. Also, if the work is generic and has no personalised examples in there, or if the tone changes. I have heard about the 'rule of 3' but I do not fully understand it or know how accurate it is, so I am not sure I could comfortably have a conversation about AI use based on it. So what are the signals that flag AI may have been involved? I feel that if I understand the technical aspect a bit more it could shape my conversations better. I must say though that I am not in IT, so I do not understand a lot of terminology.

    Thank you in advance

      • Digital Customer Experience Manager
      • Audrey_turnitin
      • 5 days ago
      • Official response


      I wanted to follow up with additional information for you here. David Adamson, our Distinguished Machine Learning Scientist here at Turnitin, wanted to offer some more insights for you:

      If you were to look at tens of thousands of examples of AI-generated text, or at text that has passed through an AI "humanizer" or "paraphraser", there are distinctive cues that begin to emerge. Many of these aren't just predictable patterns in next-word selection. Some are contextual, like how similar one sentence is to its neighbor, or metaphors that are awkwardly mixed. Others are structural -- some sentence starters and clause structures (or patterns of these across several sentences) are more common, either because of a bias in training data, an emphasized example in an AI prompt, or a rewriting template in a hand-coded "humanizer."

      The model is a little opaque -- it uses an internal representation that doesn't lend itself to explainability. So we can't *directly* tell you "this is why the model highlighted this particular sentence." We can instead describe these patterns after the fact, by comparing the chunks of text that were (and weren't) predicted as AI writing, and describing their characteristics (sentence structure, word choice, etc), leading to observations like the ones above.

      Any one of these little signals isn't evidence for AI writing or rewriting on its own. At detection time, if bunches of these overlapping, contextually-linked patterns show up together in a single chunk of text, the combined weight of these signals tips the scale towards predicting the text as AI.

      We hope this helps! 
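
      To make the "combined weight of signals" idea a little more concrete, here is a minimal toy sketch. It is an illustration only, not Turnitin's actual model; every cue name, weight and threshold in it is hypothetical.

      ```python
      # Toy "weight of evidence" combiner. Each weak cue contributes a small
      # score; only when several cues co-occur in the same chunk of text does
      # the total cross the decision threshold.
      WEAK_CUES = {
          "high_neighbor_sentence_similarity": 0.8,  # hypothetical weights
          "uniform_sentence_starters": 0.6,
          "templated_clause_structure": 0.7,
          "awkwardly_mixed_metaphor": 0.5,
      }
      THRESHOLD = 1.5  # no single cue is enough on its own

      def classify_chunk(cues_present: set[str]) -> str:
          """Label a chunk 'likely AI' only when overlapping cues pile up."""
          score = sum(WEAK_CUES[c] for c in cues_present if c in WEAK_CUES)
          return "likely AI" if score >= THRESHOLD else "likely human"

      # One cue alone stays below the threshold...
      print(classify_chunk({"uniform_sentence_starters"}))   # -> likely human
      # ...but several contextually linked cues together tip the scale.
      print(classify_chunk({"uniform_sentence_starters",
                            "high_neighbor_sentence_similarity",
                            "templated_clause_structure"}))  # -> likely AI
      ```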

      • Patti_WestSmith
      • 5 days ago
      • Official response

       After nearly 10 years of working with David, I still learn something new every time he shares! 

      • Senior Teaching & Learning Innovations Specialist
      • Karen_Turnitin
      • 4 days ago
      • Official response

       always insightful and explains things so well!

      • Gailene_Nelson
      • 4 days ago
      • Official response

       Helps that he was a high school teacher in a previous life! David is the best!

    • Digital Customer Experience Manager
    • Audrey_turnitin
    • 2 wk ago
    • Official response

    Hi TEN Community! 👋

    This entire thread has been an amazing deep dive into what educators are experiencing in the classroom. Following the information shared, we have been inspired to update one of our more popular educator resources: “AI-Generated Text: What Educators Are Saying.”

    We released the original version two years ago (a lifetime in AI years!), and so much has changed, as clearly expressed in this thread: your curriculum, your tools, your conversations with students, and the role AI now plays in teaching and learning. That’s why we’d love to feature your experiences in the refreshed edition.

    We’re looking for short reflections around:

    ✨ Your challenges interpreting or talking about AI use
    ✨ Your successes teaching with or about AI
    ✨ Your wonderings — What still feels unclear? What are you grappling with?
    ✨ Your observations about how students are using (or misusing) AI
    ✨ Your evolving relationship with AI detection and trust

    We’ll be selecting a handful of quotes for the updated publication and will anonymize everything, of course. 🙂

    If you’re willing to share, please drop your thoughts right here in the thread. Even a few sentences can help fellow educators around the world feel less alone in navigating this fast-moving space. (And for those educators who have already posted such thoughtful insights, we hope to reach out to you separately to see if we can include your perspective in our resource.)

    📘 Current resource for reference:
    https://www.turnitin.com/papers/ai-generated-text-what-educators-are-saying

    Thank you for helping shape the conversation — and for everything you do to support integrity, trust, and learning. 🙌

    • Peter_Pollack
    • 2 wk ago

    Hi, I hope I'm asking this in the right place, and I apologize if this has been covered, as I just joined. I teach a 100% online course that requires a research paper of 1,500-1,800 words. I'm receiving a large number of papers with high AI 'generative text' detection (in the 80-100% range, shown in blue); I routinely plug the same text into other AI detection sites with wildly varying results. Students all say they only use AI for grammar and/or organization but not to 'generate' their paper, which is my real concern: that they're simply using a prompt and having AI actually write their paper. Do you feel confident in Turnitin's accuracy? Am I misunderstanding 'generative text'?

      • Gailene_Nelson
      • 2 wk ago
      • Official response

       

      Thanks for your question! This is a common concern. I don't think you are misunderstanding generative text, but it may help to expand your definition of what could be contributing to the AI-generated content.

      We are seeing more and more general-purpose tools like grammar checkers, auto-complete and other "acceptable use" tools being powered by AI. Small changes may actually be introducing more AI into the text than your students realize. Some of these tools offer rewrite suggestions that are likely powered by AI. Our latest update has made our model more sensitive to rewrites and modifications that go beyond simple grammar edits.

      We continuously monitor and run evaluation test sets against our detection tools, and those results confirm their reliability and accuracy.
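
      For readers curious about what running evaluation test sets typically looks like, here is a generic sketch. It is an illustration under assumptions, not Turnitin's actual pipeline; `detector` and the labeled data are hypothetical stand-ins.

      ```python
      # Measure false positive rate and recall on documents with known ground
      # truth. Assumes labeled_docs contains both human-written and AI examples.
      def evaluate(detector, labeled_docs):
          """labeled_docs: iterable of (text, is_ai) pairs; detector(text) -> bool."""
          flagged_human = caught_ai = human_total = ai_total = 0
          for text, is_ai in labeled_docs:
              predicted_ai = detector(text)
              if is_ai:
                  ai_total += 1
                  caught_ai += predicted_ai
              else:
                  human_total += 1
                  flagged_human += predicted_ai
          return {
              "false_positive_rate": flagged_human / human_total,  # human docs wrongly flagged
              "recall": caught_ai / ai_total,                      # AI docs correctly caught
          }
      ```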

    • Digital Customer Experience Manager
    • Audrey_turnitin
    • 2 wk ago
    • Official response

    Thank you for your question! In addition to what you posed above, we’ve noticed that other educators in TEN have shared moments where an AI score raised questions, and it’s not always clear what the next step should be.

    AI detection is a signal, not a decision. We’d love to hear from everyone in our TEN community:

    • What’s your first move when you see a surprising score?

    • What questions do you ask students with regards to a particular assignment?

    • How do you avoid over-interpreting a single result?

    • What does your workflow look like — even if it’s still a work in progress?

    We'd also love insights from the Turnitin team's perspective. And for members of the community who post, your experiences and tips can help everyone build shared clarity and practical strategies for handling these situations. What are you seeing with your students?

    Resources:
    👉  The “Show Your Work” approach to student writing
    👉  Getting started with AI writing at Turnitin

      • Gailene_Nelson
      • 2 wk ago
      • Official response

      Thanks for the question. I think we're all aligned on the need to step back from the score as a definitive basis for judgement and instead see the technology as a tool to open a dialogue with students. We've mentioned it in prior posts, but we intentionally optimized our detection tool to minimize false positives as a guardrail to protect students from being falsely accused of AI use.

      These are opportunities for all of us - educators, students and companies like Turnitin - to learn what using AI responsibly means, and how it can translate back to the learning process. We understand the importance of maintaining trust between educators and students. With that lens, we are exploring how we might bring more insight and interpretability to the AI detection report, enabling educators to have more fruitful conversations with students and defuse the potential for negative dialogue and mistrust.

      We look forward to working with you as we continue to evolve our tools to better support your needs!

      • Online Community Manager
      • kat_turnitin
      • 2 wk ago
      • Official response

      I’d like to add to the discussion a question from a TEN member.

      He shared in an earlier post:

      “Hi, I have some questions about the AI detection feature in Turnitin. In several reports for my students, I noticed that the system identifies a high percentage of AI-generated content. However, when I check with the students, they confirm that they did not use any AI tools to write those sections. In some cases, Turnitin even flags properly cited content taken from websites as AI-generated, despite students having correctly referenced their sources. Therefore, I would like to understand the reason behind this issue and how best to respond to students in such cases. Also, what are the most effective strategies to help students avoid using AI tools in their writing tasks?”

      This is a great example of the kinds of questions we hear frequently. And one resource we feel addresses these types of questions effectively is a recent blog post: “How the ‘show your work’ approach is redefining student writing.”

      In it, the author frames what’s happening in classrooms right now and why many educators are rethinking the structure of their assignments. She shares that “US schools are responding to AI in student writing by redesigning their assessments to reduce the risk of student misconduct and missteps.

      Designing assignments that require process documentation and iterative development – ‘show your work’ – is one approach. Other educators have found themselves reverting to more traditional assignments, using handwritten work and oral presentations.” 

      Check out the blog above to learn more. We feel like it really shows how this conversation is constantly evolving. There’s no one-size-fits-all approach, and educators everywhere are still figuring out what works best for their classrooms. At Turnitin, we’re constantly developing and improving our tools like Turnitin Clarity to better support these challenges, and feedback like this is incredibly valuable.

      Does anyone else have insights to share?

      We truly appreciate hearing experiences from different parts of the world, and we encourage everyone to keep sharing 💙

      • Gailene_Nelson
      • 13 days ago
      • Official response

      Hi 

      Thank you for sharing this experience. While we continuously monitor our tool's efficacy to keep our False Positive Rate (FPR) below 1%, the tool can make mistakes. Some AI tools are better than others at producing text that overlaps with human writing, and when we train our model to learn from these tools, there is a chance we'll incorrectly flag that overlapping text. When content is less complex, very uniform in structure, and written in perfectly grammatical, if somewhat robotic, sentences, these signals can indicate potential AI content.

      It's also possible that the cited text was short enough for us to find some signals, but they may be weaker than others in the paper because of the length of the sentences. All predictions come with a spectrum of confidence, and we are researching ways we can improve our detection report to make it easier to interpret how we are detecting the content. We're excited to deliver more of these usability updates in the coming months!

      Another area we're hearing more about is students over-editing to try and "perfect" their writing; they can unintentionally reduce or eliminate their unique, natural voice, making the text read as more synthetic. The effort is admirable, but it risks the writing no longer coming across as the student's own.

      Finally, even general tools that are acceptable to use in academia, like grammar and citation checkers, are more and more often powered by AI, leaving fingerprints behind for detectors like ours to pick up.

      All these scenarios reinforce for me that no detector should be used as a sole decision point. They are entry points to conversations about how a student embarked on their writing process. Did the student use any tools to help them format the citations? If not, were they editing and polishing their document excessively? Our Chief Product Officer, Annie Chechitelli, recently posted an article on LinkedIn that resonates with me: she mentions tools like health screenings, metal detectors and weather predictions as guides that help you decide your next course of action. That is how we see our detection tool: a valuable guide for educators. Hopefully you find the article helpful as well!
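
      As a rough sketch of what tuning a detector for a sub-1% false positive rate can involve, consider the snippet below. It is an assumed approach for illustration only, not Turnitin's actual calibration procedure.

      ```python
      # Choose a decision threshold from scores the detector assigns to documents
      # known to be human-written, so that at most `max_fpr` of them would be
      # flagged. Assumes a non-empty score list and 0 <= max_fpr < 1.
      def calibrate_threshold(human_doc_scores, max_fpr=0.01):
          scores = sorted(human_doc_scores)          # detector confidence per human-written doc
          allowed_flags = int(max_fpr * len(scores))
          # Only scores strictly above the threshold are flagged; by construction
          # that is at most `allowed_flags` of the human-written documents.
          return scores[len(scores) - allowed_flags - 1]

      # Example: with 10,000 human-written reference documents and max_fpr=0.01,
      # the threshold sits at the 9,900th-lowest score, so at most 100 are flagged.
      ```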

    • Online Community Manager
    • kat_turnitin
    • 13 days ago
    • Official response

    #AskTurnitin Conversation Starters: Scenarios, Edge Cases & Bypassers

    Hi all! We’ve seen several educators in TEN share moments where an AI score, especially one influenced by bypassers, left them scratching their heads.

    Even with the updated model detecting bypassers more effectively, these situations can still feel confusing or unexpected. So let’s build a shared “AI Scenario Playbook” together.

    If you’ve had a moment where you thought, “I have NO idea how to handle this,” share it (anonymized!) below. The Turnitin team will help break it down, and your example could help another educator next week.

    For instance, one member recently asked:

    “Has Turnitin come up with a remedy to detect the use of a Humanizer on a student's paper? I am seeing the use of humanizers on students' papers, and Turnitin doesn't seem to be detecting any AI use with these papers. Has anyone come up with a remedy to detect humanizer use?”

    What scenario would you most like guidance on? Comment below, reply to this post, or tag us in your examples. 

    Resource:
    Leveraging Turnitin Clarity: A student guide to AI prompting

      • Gailene_Nelson
      • 13 days ago
      • Official response

      I'd love to hear more about what you are seeing, especially in the past few months. In August we released an update to our AI detection tool to include humanizers, so any submission made after that release is automatically checked for generative AI, whether it comes from an LLM, an AI paraphraser or an AI bypasser (humanizer). However, we don't differentiate between likely AI-generated text and likely AI-bypassed text in our AI report.

       

      How recent were these submissions? What types of "signals" are you seeing that indicate the use of humanizers? While no detection solution is perfect, we can certainly learn from educators like you if there are indicators you are picking up on that we might be missing. If you have a suspicious document that was submitted after our Bypasser release and are willing to share it so we can analyze it further, let me know. I'll reach out to you separately so we're not posting sensitive information in this forum.

      More and more tools are coming out with ways to make AI-generated text sound more human, offering features like "contextual word choice suggestions" which may seem innocuous to a student but end up inserting AI-generated text into their documents. It's certainly an arms race to keep up with these technologies, and we appreciate your questions and feedback.

      • Online Community Manager
      • kat_turnitin
      • 7 days ago
      • Official response

      Adding to this discussion a post from another member, who writes:

      “I’d like to report a possible issue with the AI writing detection results in Turnitin.
      We have two versions of the same Extended Essay submitted by the same student. The first version was not flagged for AI writing, while the second version — which includes only minor edits (formatting adjustments, small grammar fixes, and simplified references) — received a 73% AI writing score.
      Since both versions are almost identical in content and structure, this discrepancy seems unusual.”

      I want to echo Gailene’s thoughts here. It’s definitely an ongoing arms race to keep up with these technologies, and we really appreciate you sharing your experiences. Highlighting specific scenarios like this helps us as we continue improving our AI Writing Detection model. If you’d like to see the progress we’ve made so far, here’s the link.

      We still have time until December 20th before we wrap up this Q&A, so please keep sharing your examples and experiences!  

      We know the year-end is busy with the holiday break approaching, and educators around the world are wrapping up the semester or school year, so your contributions are especially appreciated. 

    • Peter_Pollack
    • 13 days ago

    Thank you so much Gailene, Audrey, and Kat!  This information is very helpful!

      • Gailene_Nelson
      • 13 days ago
      • Official response

      Thank YOU!

    • Peter_Pollack
    • 10 days ago

    Hi Gailene - I would like to share a few examples with you - can you reach out to me separately?

      • Digital Customer Experience Manager
      • Audrey_turnitin
      • 7 days ago
      • Official response

      Gailene should be reaching out shortly (if she hasn't already!).

      Thanks again for your thoughtful participation in this conversation. We are so grateful for your insights and feedback.

      • Peter_Pollack
      • 7 days ago

      Thank you, Audrey, so very much for the opportunity!

    • Paula_da_Igreja_Rembisch
    • 7 days ago

    I’ve noticed that some essays flagged by AI Detection tend to have a more descriptive and technical tone, particularly in experimental Extended Essays. For example, one Chemistry EE I received showed consistently high AI scores. Even after revision, the score decreased only slightly—from 69% to 63%.

    In contrast, I had an English EE that initially scored around 30%. For review, I asked the student to strengthen their personal voice, especially in the introduction and conclusion. In this case, the revision was effective, and the new version was no longer flagged.

    These experiences make me wonder how disciplinary writing conventions—especially highly standardized, technical, and descriptive scientific writing—may be influencing AI detection results, and how reliably these scores distinguish between legitimate student authorship and AI-generated text.

      • Gailene_Nelson
      • 6 days ago
      • Official response

      Hi  

      Thank you for the question, and for sharing these specific examples. Your experience illustrates one of the challenges in AI detection: the style gap between disciplines. 

      We train our model on large, diverse datasets, including multidisciplinary academic writing from subject areas like anthropology, geology and sociology, to minimize bias. But when student writing is highly structured or consistently uses precise terminology, the detector can misinterpret it as AI-generated text. Writing in English subjects more often has variation in sentence length, unique metaphors and personal voice, which is most often correctly identified as human.

      We monitor our model health regularly to look for irregularities, but this level of detail hasn’t surfaced. We appreciate you bringing this disciplinary nuance to our attention, as it helps us understand the real-world application of our tools. 

      Were the revisions to the Chemistry paper done to add more of the student’s unique evaluative voice in the conclusion? That can help balance the standardized tone of the methodology by bringing in the ‘human’ element you encouraged in the English EE - this is the strongest indicator of authorship.

      We continue to refine our detection to better distinguish between authentic academic writing and AI-generated text. Please keep sharing these insights with us as you encounter them - they are crucial for our improvement.

    • Paul_Curran
    • 6 days ago

    Thank you so much for this, it's a very interesting thread. I'm sorry I've only joined in so close to the end. My role is a support role in Higher Education. I work with educators and advise them about AI use/misuse (amongst other things!). Often they come to me looking for reassurance about an AI score.

    Sometimes they trust it and just want me to confirm their view, sometimes they don't think it fits their impression of the student. I always remind them about the AI report being something to bring to a conversation and not a "smoking gun" in and of itself. But I find they still want reassurance of some sort. As others have mentioned above they are trusting their own instincts less.

    I've tried to use Authorship reports to find other indicators to give confidence to an opinion, one way or another. I had been using data points such as a high editing time and a high revision count as indicators of authentic effort from students, and a corresponding high AI score might be due to unintended misuse of Grammarly, for example. But low editing time could easily be because a student downloads a .docx file from Word online, after spending hours writing it, just before uploading it. I've started to disregard editing times of 0-2 minutes completely.

    I've also tried to get insight from writing style, but I'm less and less confident about saying anything based on that data. Maybe consistent writing is because of consistent use of AI and corresponding AI reports that were low in the past but high now are simply evidence of the detector improving? Maybe a spike in writing style is because of the nature of an assignment being different or maybe there was an update to the student's preferred AI model? The more I look and test it myself the less useful insight I feel I have to pass on.
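
    One way to picture the kind of triage heuristic described above is the sketch below. The thresholds are hypothetical, and it is not a Turnitin feature or a decision rule, just a way of sorting cases into conversation starters.

    ```python
    # Sort cases into next steps; the output is a conversation starter, never a verdict.
    def triage(ai_score, editing_minutes, revision_count):
        if editing_minutes <= 2:
            # Likely drafted elsewhere and uploaded as a fresh file, so the
            # editing time tells us nothing either way.
            return "editing time uninformative - ask about the writing process"
        if ai_score >= 70 and editing_minutes >= 60 and revision_count >= 20:
            # A high AI score alongside sustained effort often points to AI-powered
            # grammar/rewrite suggestions rather than wholesale generation.
            return "possible unintended tool use - ask about grammar/rewrite tools"
        if ai_score >= 70:
            return "ask the student to walk through their drafting process"
        return "no obvious concern - routine feedback"
    ```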

    Are there resources or training around this?

      • Senior Teaching & Learning Innovations Specialist
      • Karen_Turnitin
      • 5 days ago
      • Official response

      Hello - thank you so much for your insightful question. First, I want to say that your question already contains many of the same actions we would have suggested in our initial response!

      One action I would add is to ask students regularly to write a reflection after each assignment. The benefit is twofold: one, it gives both of you a starting point for a conversation (a "map" of sorts), and two, if it is done regularly as part of each writing assignment, asking for a reflection isn't going to scare a student into thinking they are in trouble. I do believe this is what allows an instructor to regain some of their confidence in their own judgement.

      We do have a reflection guide for students. Although it is labeled for Turnitin Clarity, it can be used without Clarity with a few minor adaptations. Page 1 focuses on AI use; if AI use isn't allowed for that assignment, we recommend Page 2, which focuses on the decisions and challenges students faced and what they learned from them. I'd recommend it as a "blueprint" of sorts for a conversation with the instructor, and, as stated above, not only for cases where misuse may be suspected.

      We have a couple of resources for instructors that I'd recommend to guide those conversations:

      1. Approaching a student regarding potential AI misuse
      2. Conversation starters to discuss AI bypassers with students -- Don't let the name fool you! There is a menu of topics on p. 3 that can be used for conversations that are not specifically focused on AI bypassers. Pick and choose what fits the assignment and student work.
      3. How to interpret Turnitin's AI writing score and dialogue with students - this is a bit of a dense read, but provides some context and possible explanations for why you're seeing what you're seeing!

      I really hope that I have given you something to help with your questions. Please feel free to respond if these aren't exactly helpful -- or even if they are!

      • Gailene_Nelson
      • 4 days ago
      • Official response

      Hi  

      I wanted to follow up from the Authorship side. As a tool to help gather evidence, Authorship does have a broad range of data signals that help investigators look deeper and analyze a student's writing patterns over time. Your insight related to editing time is really interesting. It could very well be that the primary writing happens on a different platform or tool. Do you find this is a common pattern (more often than not, the editing time is 0-2 minutes, for example)? Does it help to see the student's past writing to compare against their current papers? Are there other language or stylistic signals you are looking for that you don't currently see in Authorship?

      We're building out our 2026 roadmap, and if you are interested, we'd love to follow up with you on your use of Authorship, and how we might improve it.

      • Paul_Curran
      • 4 days ago

      Hi Gailene. Thanks for following up. And thanks to Karen too.

      I have noticed that more and more recently yes, and to be honest it maps to how I like to work myself. I work in the cloud then download it, double check it and resave.

      One thing I've looked for recently has been signs that a student may have been inadvertently accepting corrections from their personal Grammarly account (or similar) without realising AI features are enabled by default (we offer institutional licences to select students but have these features disabled).

      We regard this practice as much less serious than large scale copy and pasting directly from an LLM. But it can lead to a similar AI report (interestingly in cases where I strongly suspect this is what happened the text is flagged by Turnitin as AI generated not AI paraphrased).

      One indicator I use to give reassurance to an educator that this is indeed what happened is a high editing time and high revision count. I take those as signs of authentic student effort. And this would inform the conversation that follows. But sometimes now I have nothing to go off if those data points are extremely low.

      The best thing about Authorship for me is tracking consistent patterns over time. False positives become much less likely that way.

      I saw that the AI score is coming to Authorship, that's very welcome. I'd be delighted to chat more about it.
