#AskTurnitin: Month-Long Q&A with the Turnitin Team on Navigating AI in Teaching and Learning

Thu Nov 20 - Sat Dec 20
Event by Turnitin Official

Are you curious about how AI detection can support trust, integrity, and student authorship in your classroom? Or maybe you want to explore practical strategies for navigating AI responsibly in teaching?

Join #AskTurnitin with Turnitin team members Patti West-Smith and Gailene Nelson as they discuss how educators can approach AI in the classroom with balance and insight.

Explore how thoughtful use of AI detection and Turnitin tools can support academic integrity, empower educator judgment, and enhance the learning experience.

Meet our team:

  • Patti West-Smith – Senior Director of Global Customer Engagement at Turnitin
  • Gailene Nelson – Senior Director of Product Management at Turnitin

How it works:

#AskTurnitin will be open in TEN for 30 days, giving you plenty of time to post your questions and join the discussion. Patti and Gailene will be checking in regularly to respond and share their insights.

Ask about:

  • How to discuss AI and authorship with students
  • When AI detection is most helpful—or most challenging
  • Balancing innovation and integrity in AI-enabled learning
  • How to interpret AI detection results ethically
  • What support or resources would make AI detection more meaningful for your context

#AskTurnitin Guidelines:

  1. Be respectful: Treat all participants with kindness and professionalism.
  2. Stay on topic: Questions should relate to AI detection, teaching strategies, and classroom experiences.
  3. No product support requests: Technical or account issues should be directed to Turnitin Support.
  4. Avoid sensitive personal info: Do not share personally identifiable information about yourself, your institution, or students.
  5. Engage constructively: Share insights, ask thoughtful questions, and build on others’ contributions.

Start the conversation:

Reply to this post with your questions, and Patti and Gailene will jump in with their insights. Let’s connect, share experiences, and learn from each other as we explore the role of AI in education!

32 replies

    • Peter_Pollack
    • 5 days ago

    Hi, I hope I'm asking this in the right place, and I apologize if this has been covered, as I just joined. I teach a 100% online course that requires a research paper of 1,500-1,800 words. I'm receiving a large number of papers with high AI "generative text" detection (in the 80-100% range, shown in blue), yet when I routinely plug the same text into other AI detection sites, I get wildly varying results. Students all say they only use AI for grammar and/or organization, not to "generate" their paper, which is my real concern: that they're simply using a prompt and having AI actually write their paper. Do you feel confident in Turnitin's accuracy? Am I misunderstanding "generative text"?

      • Gailene_Nelson
      • 2 days ago
      • Official response

      Thanks for your question! This is a common concern. I don't think you are misunderstanding generative text, but perhaps you should expand your definition of what could be contributing to the AI-generated content.

      We are seeing more and more general-purpose tools like grammar checking, auto-complete, and other "acceptable use" tools being powered by AI. Small changes may actually be introducing more AI into the text than your students realize, and some of these tools offer rewrite suggestions that are likely powered by AI. Our latest update has made our model more sensitive to rewrites and modifications that go beyond simple grammar edits.

      We continuously monitor and run evaluation test sets against our detection tools, and those results confirm the reliability and accuracy of our detection tool.  

    • Digital Customer Experience Manager
    • Audrey_turnitin
    • 2 days ago
    • Official response

    Thank you for your question! In addition to what you posed above, we’ve noticed that other educators in TEN have shared moments where an AI score raised questions, and it’s not always clear what the next step should be.

    AI detection is a signal, not a decision. We’d love to hear from everyone in our TEN community:

    • What’s your first move when you see a surprising score?

    • What questions do you ask students with regards to a particular assignment?

    • How do you avoid over-interpreting a single result?

    • What does your workflow look like — even if it’s still a work in progress?

    We'd also love insights from the Turnitin team's perspective. And for members of the community who post, your experiences and tips can help everyone build shared clarity and practical strategies for handling these situations. What are you seeing with your students?

    Resources:
    👉  The “Show Your Work” approach to student writing
    👉  Getting started with AI writing at Turnitin

      • Gailene_Nelson
      • yesterday
      • Official response

      Thanks for the question. I think we're all aligned on the need to step back from treating the score as a definitive basis for judgment and instead see the technology as a tool to open a dialogue with students. We've mentioned it in prior posts, but we intentionally optimized our detection tool to minimize false positives, as a guardrail to protect students from being falsely accused of using AI.

      These are opportunities for all of us - educators, students, and companies like Turnitin - to learn what using AI responsibly means and how it can translate back to the learning process. We understand the importance of maintaining trust between educators and students. With that lens, we are exploring how we might bring more insight and interpretability to the AI detection report, enabling educators to have more fruitful conversations with students and defuse the potential for negative dialogue and mistrust.

      We look forward to working with you as we continue to evolve our tools to better support your needs!

      • Online Community Manager
      • kat_turnitin
      • 15 hrs ago
      • Official response

      I’d like to add a question to the discussion from a TEN member.

      He shared in an earlier post:

      “Hi, I have some questions about the AI detection feature in Turnitin. In several reports for my students, I noticed that the system identifies a high percentage of AI-generated content. However, when I check with the students, they confirm that they did not use any AI tools to write those sections. In some cases, Turnitin even flags properly cited content taken from websites as AI-generated, despite students having correctly referenced their sources. Therefore, I would like to understand the reason behind this issue and how best to respond to students in such cases. Also, what are the most effective strategies to help students avoid using AI tools in their writing tasks?”

      This is a great example of the kinds of questions we hear frequently. One resource we feel addresses these types of questions effectively is a recent blog post: “How the ‘show your work’ approach is redefining student writing.”

      In it, one of our colleagues frames what’s happening in classrooms right now and why many educators are rethinking the structure of their assignments. She shares that “US schools are responding to AI in student writing by redesigning their assessments to reduce the risk of student misconduct and missteps.

      Designing assignments that require process documentation and iterative development – ‘show your work’ – is one approach. Other educators have found themselves reverting to more traditional assignments, using handwritten work and oral presentations.” 

      Check out the blog above to learn more. We feel like it really shows how this conversation is constantly evolving. There’s no one-size-fits-all approach, and educators everywhere are still figuring out what works best for their classrooms. At Turnitin, we’re constantly developing and improving our tools like Turnitin Clarity to better support these challenges, and feedback like this is incredibly valuable.

      Does anyone else have insights to share?

      We truly appreciate hearing experiences from different parts of the world, and we encourage everyone to keep sharing 💙

    • Online Community Manager
    • kat_turnitin
    • 10 hrs ago
    • Official response

    #AskTurnitin Conversation Starters: Scenarios, Edge Cases & Bypassers

    Hi all! We’ve seen several educators in TEN share moments where an AI score, especially one influenced by bypassers, left them scratching their heads.

    With the updated model detecting bypassers more effectively, these situations can still feel confusing or unexpected. So let’s build a shared “AI Scenario Playbook” together.

    If you’ve had a moment where you thought, “I have NO idea how to handle this,” share it (anonymized!) below. The Turnitin team will help break it down, and your example could help another educator next week.

    For instance, one member recently asked:

    “Has Turnitin come up with a remedy to detect the use of a Humanizer on a student's paper? I am seeing the use of humanizers on students' papers, and Turnitin doesn't seem to be detecting any AI use with these papers. Has anyone come up with a remedy to detect humanizer use?”

    What scenario would you most like guidance on? Comment below, reply to this post, or tag us in your examples. 

    Resource:
    Leveraging Turnitin Clarity: A student guide to AI prompting

    • Peter_Pollack
    • 10 hrs ago

    Thank you so much Gailene, Audrey, and Kat! This information is very helpful!

Stats

  • 32 Replies
  • 413 Views