#AskTurnitin: Month-Long Q&A with the Turnitin Team on Navigating AI in Teaching and Learning

Thu Nov 20 - Sat Dec 20
Event by Turnitin Official

Are you curious about how AI detection can support trust, integrity, and student authorship in your classroom? Or maybe you want to explore practical strategies for navigating AI responsibly in teaching?

Join #AskTurnitin with Turnitin team members Patti West-Smith and Gailene Nelson as they discuss how educators can approach AI in the classroom with balance and insight.

Explore how thoughtful use of AI detection and Turnitin tools can support academic integrity, empower educator judgment, and enhance the learning experience.

Meet our team:

  • Patti West-Smith – Senior Director of Global Customer Engagement at Turnitin
  • Gailene Nelson – Senior Director of Product Management at Turnitin

How it works:

#AskTurnitin will be open in TEN for 30 days, giving you plenty of time to post your questions and join the discussion. Patti and Gailene will be checking in regularly to respond and share their insights.

Ask about:

  • How to discuss AI and authorship with students
  • When AI detection is most helpful—or most challenging
  • Balancing innovation and integrity in AI-enabled learning
  • How to interpret AI detection results ethically
  • What support or resources would make AI detection more meaningful for your context

#AskTurnitin Guidelines:

  1. Be respectful: Treat all participants with kindness and professionalism.
  2. Stay on topic: Questions should relate to AI detection, teaching strategies, and classroom experiences.
  3. No product support requests: Technical or account issues should be directed to Turnitin Support.
  4. Avoid sensitive personal info: Do not share personally identifiable information about yourself, your institution, or students.
  5. Engage constructively: Share insights, ask thoughtful questions, and build on others’ contributions.

Start the conversation:

Reply to this post with your questions, and Patti and Gailene will jump in with their insights. Let’s connect, share experiences, and learn from each other as we explore the role of AI in education!

37 replies

    • Digital Customer Experience Manager
    • Audrey_turnitin
    • 9 days ago
    • Official response

    Hi TEN Community! 👋

    This entire thread has been an amazing deep dive into what educators are experiencing in the classroom. The insights shared here have inspired us to update one of our most popular educator resources: “AI-Generated Text: What Educators Are Saying.”

    We released the original version two years ago (a lifetime in AI years!), and so much has changed, as clearly expressed in this thread: your curriculum, your tools, your conversations with students, and the role AI now plays in teaching and learning. That’s why we’d love to feature your experiences in the refreshed edition.

    We’re looking for short reflections around:

    ✨ Your challenges interpreting or talking about AI use
    ✨ Your successes teaching with or about AI
    ✨ Your wonderings — What still feels unclear? What are you grappling with?
    ✨ Your observations about how students are using (or misusing) AI
    ✨ Your evolving relationship with AI detection and trust

    We’ll be selecting a handful of quotes for the updated publication and will anonymize everything, of course. 🙂

    If you’re willing to share, please drop your thoughts right here in the thread. Even a few sentences can help fellow educators around the world feel less alone in navigating this fast-moving space. (And for those educators who have already posted such thoughtful insights, we hope to reach out to you separately to see if we can include your perspective in our resource.)

    📘 Current resource for reference:
    https://www.turnitin.com/papers/ai-generated-text-what-educators-are-saying

    Thank you for helping shape the conversation — and for everything you do to support integrity, trust, and learning. 🙌

    • Peter_Pollack
    • 8 days ago

    Hi, I hope I'm asking this in the right place, and I apologize if this has been covered, as I just joined. I teach a 100% online course that requires a research paper of 1,500-1,800 words. I'm receiving a large number of papers with high detection (80-100% range) of AI 'generative text' (blue); I routinely plug the same text into other AI detection sites with wildly varying results. Students all say they only use AI for grammar and/or organization but not to 'generate' their paper, which is my real concern: that they're simply using a prompt and having AI actually write their paper. Do you feel confident in Turnitin's accuracy? Am I misunderstanding 'generative text'?

      • Gailene_Nelson
      • 5 days ago
      • Official response

      Thanks for your question! This is a common concern. I don't think you are misunderstanding generative text, but it may help to expand your definition of what could be contributing to the AI-generated content.

      We are seeing more and more general-purpose tools like grammar checking, auto-complete, and other "acceptable use" tools being powered by AI. Small changes may actually be introducing more AI into the text than your students realize. Some of these tools offer rewrite suggestions that are likely powered by AI. With our latest update, our model is more sensitive to rewrites and modifications that go beyond simple grammar edits.

      We continuously run evaluation test sets against our detection tools, and those results confirm their reliability and accuracy.

    • Digital Customer Experience Manager
    • Audrey_turnitin
    • 5 days ago
    • Official response

    Thank you for your question, Peter! In addition to what you posed (above), we’ve noticed that other educators in TEN have shared moments where an AI score raised questions, and it’s not always clear what the next step should be.

    AI detection is a signal, not a decision. We’d love to hear from everyone in our TEN community:

    • What’s your first move when you see a surprising score?

    • What questions do you ask students with regard to a particular assignment?

    • How do you avoid over-interpreting a single result?

    • What does your workflow look like — even if it’s still a work in progress?

    To our Turnitin colleagues: we'd love your insights from the Turnitin perspective. And for members of the community who post, your experiences and tips can help everyone build shared clarity and practical strategies for handling these situations. To the educators following along: what are you seeing with your students?

    Resources:
    👉 The “Show Your Work” approach to student writing
    👉 Getting started with AI writing at Turnitin

      • Gailene_Nelson
      • 4 days ago
      • Official response

      Thanks for the question. I think we're all aligned on the need to step back from treating the score as a definitive basis for judgment and instead see the technology as a tool to open a dialogue with students. We've mentioned it in prior posts, but we intentionally optimized our detection tool to minimize false positives as a guardrail to protect students from being falsely accused of AI use.

      These are opportunities for all of us - educators, students, and companies like Turnitin - to learn what using AI responsibly means, and how it can translate back to the learning process. We understand the importance of maintaining trust between educators and students. With that lens, we are exploring how we might bring more insight and interpretability to the AI detection report, enabling educators to have more fruitful conversations with students and defuse the potential for negative dialogue and mistrust.

      We look forward to working with you as we continue to evolve our tools to better support your needs!

      • Online Community Manager
      • kat_turnitin
      • 3 days ago
      • Official response

      I’d like to add a question from a TEN member to the discussion.

      He shared in an earlier post:

      “Hi, I have some questions about the AI detection feature in Turnitin. In several reports for my students, I noticed that the system identifies a high percentage of AI-generated content. However, when I check with the students, they confirm that they did not use any AI tools to write those sections. In some cases, Turnitin even flags properly cited content taken from websites as AI-generated, despite students having correctly referenced their sources. Therefore, I would like to understand the reason behind this issue and how best to respond to students in such cases. Also, what are the most effective strategies to help students avoid using AI tools in their writing tasks?”

      This is a great example of the kinds of questions we hear frequently. One resource we feel addresses these types of questions effectively is a recent blog post: “How the ‘show your work’ approach is redefining student writing.”

      In it, one of our own frames what’s happening in classrooms right now and why many educators are rethinking the structure of their assignments. She shares that “US schools are responding to AI in student writing by redesigning their assessments to reduce the risk of student misconduct and missteps.

      Designing assignments that require process documentation and iterative development – ‘show your work’ – is one approach. Other educators have found themselves reverting to more traditional assignments, using handwritten work and oral presentations.” 

      Check out the blog above to learn more. We feel like it really shows how this conversation is constantly evolving. There’s no one-size-fits-all approach, and educators everywhere are still figuring out what works best for their classrooms. At Turnitin, we’re constantly developing and improving our tools like Turnitin Clarity to better support these challenges, and feedback like this is incredibly valuable.

      Does anyone else have insights to share with this member?

      We truly appreciate hearing experiences from different parts of the world, and we encourage everyone to keep sharing 💙

      • Gailene_Nelson
      • 2 days ago
      • Official response

      Hi,

      Thank you for sharing this experience. While we continuously monitor our tool's efficacy to keep our False Positive Rate (FPR) below 1%, the tool can make some mistakes. Some AI tools generate content that overlaps with human writing more than others, and when we train our model to learn from these tools, there is a chance that we'll incorrectly flag that overlapping text. When content is less complex, very uniform in structure, and has perfectly grammatical, if not robotic, sentences, these signals can indicate potential AI content.
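
      To make the sub-1% FPR concrete, here is a minimal, illustrative Python sketch; the FPR bound comes from this post, while the 500-paper count is purely an assumed number for illustration:

      # Illustrative only: expected false flags at a given false positive rate.
      # The <1% FPR bound is quoted above; the paper count is an assumption.
      fpr = 0.01            # upper bound on the false positive rate (<1%)
      honest_papers = 500   # assumed: fully human-written papers in one term
      expected_false_flags = fpr * honest_papers   # = 5.0
      print(f"Up to ~{expected_false_flags:.0f} of {honest_papers} honest papers could be flagged")
      # Even a small rate, applied at scale, can flag a few honest writers,
      # which is one reason a score should open a conversation rather than decide an outcome.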

      It's also possible that the cited text was short enough for us to find some signals, but those signals may be weaker than others in the paper because of the sentence length. All predictions come with a spectrum of confidence, and we are researching ways to improve our detection report so it is easier to interpret how we are detecting the content. We're excited to deliver more of these usability updates in the coming months!

      Another area we're hearing more about is over-editing: when students try to "perfect" their writing, they can unintentionally reduce or eliminate their unique, natural voice, making the text read as more synthetic. The intent is admirable, but it risks making the student's writing come across as less original.

      Finally, even general-purpose tools that are acceptable to use in academia, like grammar and citation checkers, are increasingly powered by AI, leaving fingerprints behind for detectors like ours to pick up.

      All these scenarios reinforce for me that no detector should be used as a sole decision point. They are entry points to conversations about how a student approached their writing process. Did the student use any tools to help them format the citations? If not, were they editing and polishing their document excessively? Our Chief Product Officer, Annie Chechitelli, recently posted an article on LinkedIn that resonates with me. She compares tools like health screenings, metal detectors, and weather predictions to guides that help you decide your next course of action. This is how we see our detection tool: as a valuable guide for educators. Hopefully you find the article helpful as well!

    • Online Community Manager
    • kat_turnitin
    • 3 days ago
    • Official response

    #AskTurnitin Conversation Starters: Scenarios, Edge Cases & Bypassers

    Hi all! We’ve seen several educators in TEN share moments where an AI score, especially one influenced by bypassers, left them scratching their heads.

    Even with the updated model detecting bypassers more effectively, these situations can still feel confusing or unexpected. So let’s build a shared “AI Scenario Playbook” together.

    If you’ve had a moment where you thought, “I have NO idea how to handle this,” share it (anonymized!) below. Our team will help break it down, and your example could help another educator next week.

    For instance, one TEN member recently asked:

    “Has Turnitin come up with a remedy to detect the use of a Humanizer on a student's paper? I am seeing the use of humanizers on students' papers, and Turnitin doesn't seem to be detecting any AI use with these papers. Has anyone come up with a remedy to detect humanizer use?”

    What scenario would you most like guidance on? Comment below, reply to this post, or tag us in your examples. 

    Resource:
    Leveraging Turnitin Clarity: A student guide to AI prompting

      • Gailene_Nelson
      • 2 days ago
      • Official response

      I'd love to hear more about what you are seeing, especially in the past few months. In August we released an update to our AI detection tool to include humanizers, so any submission made after that release is automatically checked for generative AI, whether it comes from an LLM, an AI paraphraser, or an AI bypasser (humanizer). However, we don't differentiate between the likely AI-generated text and the likely AI-bypassed text in our AI report.

      How recent were these submissions? What types of "signals" are you seeing that indicate the use of humanizers? While no detection solution is perfect, we can certainly learn from educators like you if there are indicators you are picking up on that we might be missing. If you have a suspicious document that was submitted after our bypasser release and are willing to share it so we can analyze it further, let me know. I'll reach out to you separately so we're not posting sensitive information in this forum.

      More and more tools are coming out with ways to make AI-generated text sound more human, offering features like "contextual word choice suggestions" that may seem innocuous to a student but end up inserting AI-generated text into their documents. It's certainly an arms race to keep up with these technologies, and we appreciate your questions and feedback.

    • Peter_Pollack
    • 3 days ago

    Thank you so much, Gailene, Audrey, and Kat! This information is very helpful!

      • Gailene_Nelson
      • 2 days ago
      • Official response

      Thank YOU, Peter!

    • Peter_Pollack
    • 6 hrs ago

    Hi Gailene - I would like to share a few examples with you - can you reach out to me separately?
