#AskTurnitin: Month-Long Q&A with the Turnitin Team on Navigating AI in Teaching and Learning
Are you curious about how AI detection can support trust, integrity, and student authorship in your classroom? Or maybe you want to explore practical strategies for navigating AI responsibly in teaching?
Join #AskTurnitin with Turnitin team members Patti West-Smith and Gailene Nelson as they discuss how educators can approach AI in the classroom with balance and insight.
Explore how thoughtful use of AI detection and Turnitin tools can support academic integrity, empower educator judgment, and enhance the learning experience.
Meet our team:
- Patti West-Smith – Senior Director of Global Customer Engagement at Turnitin
- Gailene Nelson – Senior Director of Product Management at Turnitin
How it works:
#AskTurnitin will be open in TEN for 30 days, giving you plenty of time to post your questions and join the discussion. Patti and Gailene will be checking in regularly to respond and share their insights.
Ask about:
- How to discuss AI and authorship with students
- When AI detection is most helpful—or most challenging
- Balancing innovation and integrity in AI-enabled learning
- How to interpret AI detection results ethically
- What support or resources would make AI detection more meaningful for your context
#AskTurnitin Guidelines:
- Be respectful: Treat all participants with kindness and professionalism.
- Stay on topic: Questions should relate to AI detection, teaching strategies, and classroom experiences.
- No product support requests: Technical or account issues should be directed to Turnitin Support.
- Avoid sensitive personal info: Do not share personally identifiable information about yourself, your institution, or students.
- Engage constructively: Share insights, ask thoughtful questions, and build on others’ contributions.
Helpful resources to support your participation:
- AI is here to stay in the classroom, so why do we need AI detectors? | Turnitin Blog
- In a world of AI, why citation and referencing still matter | Turnitin Blog
- Bridging the AI divide: Teaching writing and building trust | Turnitin Blog
- How the ‘show your work’ approach is redefining student writing | Turnitin Blog
Start the conversation:
Reply to this post with your questions, and Patti and Gailene will jump in with their insights. Let’s connect, share experiences, and learn from each other as we explore the role of AI in education!
-
Hello, I am from the UK, so I apologise if I use different terminology. I would like to say this forum is a fantastic idea and I am really enjoying reading the questions and the advice provided. What I understand much better now is that AI detection isn't about accusing students of misuse but about opening up a conversation about how AI has been used. AI is not going anywhere and the education sector needs to embrace it, but this is quite difficult. I keep having conversations about how early transparency is going to be key, and about educating students on what counts as appropriate use and what counts as misuse.
I have many questions, but what I would like to start with is the technical aspect of the AI detection. I understand from this forum that it works by identifying 'signals'. What are these signals? What is it that shows up in students' work that flags it may be AI writing? I often use my own indicators to have conversations with learners; for example, Americanised language is a good one in the UK, as we can identify when 'z' is used in words and discuss it. Also, if the work is generic and has no personalised examples in there, or if the tone changes. I have heard about the 'rule of 3', but I do not fully understand it or know how accurate it is to comfortably have a conversation about AI use. So what are the signals that flag AI may have been involved? I feel that if I understood the technical aspect a bit more, it could shape my conversations better. I must say, though, that I am not in IT, so I do not understand a lot of the terminology.
Thank you in advance
-
Hi TEN Community!
This entire thread has been an amazing deep dive into what educators are experiencing in the classroom. Following the information shared, we have been inspired to update one of our more popular educator resources: “AI-Generated Text: What Educators Are Saying.”
We released the original version two years ago (a lifetime in AI years!), and so much has changed, as clearly expressed in this thread: your curriculum, your tools, your conversations with students, and the role AI now plays in teaching and learning. That’s why we’d love to feature your experiences in the refreshed edition.
We’re looking for short reflections around:
- Your challenges interpreting or talking about AI use
- Your successes teaching with or about AI
- Your wonderings — What still feels unclear? What are you grappling with?
- Your observations about how students are using (or misusing) AI
- Your evolving relationship with AI detection and trust
We’ll be selecting a handful of quotes for the updated publication and will anonymize everything, of course.
If you’re willing to share, please drop your thoughts right here in the thread. Even a few sentences can help fellow educators around the world feel less alone in navigating this fast-moving space. (And for those educators who have already posted such thoughtful insights, we hope to reach out to you separately to see if we can include your perspective in our resource.)
Current resource for reference:
https://www.turnitin.com/papers/ai-generated-text-what-educators-are-saying
Thank you for helping shape the conversation — and for everything you do to support integrity, trust, and learning.
-
Hi, I hope I'm asking this in the right place and apologize if this has been covered, as I just joined: I teach a 100% online course, and my course requires a research paper of 1,500-1,800 words. I'm receiving a large number of papers with high detection (80-100% range) of AI 'generative text' (blue), and I routinely plug the same text into other AI detection sites with wildly varying results. Students all say they only use AI for grammar and/or organization but not to 'generate' their paper, which is my real concern: that they're simply using a prompt and having AI actually write their paper. Do you feel confident in Turnitin's accuracy? Am I misunderstanding 'generative text'?
-
Thank you for your question! In addition to what you posed above, we’ve noticed that other educators in TEN have shared moments where an AI score raised questions, and it’s not always clear what the next step should be.
AI detection is a signal, not a decision. We’d love to hear from everyone in our TEN community:
- What’s your first move when you see a surprising score?
- What questions do you ask students with regard to a particular assignment?
- How do you avoid over-interpreting a single result?
- What does your workflow look like — even if it’s still a work in progress?
We'd love your insights coming from the Turnitin perspective. And for members of the community who post, your experiences and tips can help everyone build shared clarity and practical strategies for handling these situations. What are you seeing with your students?
Resources:
The “Show Your Work” approach to student writing
Getting started with AI writing at Turnitin
-
#AskTurnitin Conversation Starters: Scenarios, Edge Cases & Bypassers
Hi all! We’ve seen several educators in TEN share moments where an AI score, especially one influenced by bypassers, left them scratching their heads.
Even with the updated model detecting bypassers more effectively, these situations can still feel confusing or unexpected. So let’s build a shared “AI Scenario Playbook” together.
If you’ve had a moment where you thought, “I have NO idea how to handle this,” share it (anonymized!) below. We’ll help break it down, and your example could help another educator next week.
For instance, one educator recently asked:
“Has Turnitin come up with a remedy to detect the use of a Humanizer on a student's paper? I am seeing the use of humanizers on students' papers, and Turnitin doesn't seem to be detecting any AI use with these papers. Has anyone come up with a remedy to detect humanizer use?”
What scenario would you most like guidance on? Comment below, reply to this post, or tag us in your examples.
Resource:
Leveraging Turnitin Clarity: A student guide to AI prompting
-
Thank you so much, Gailene, Audrey, and Kat! This information is very helpful!
-
Hi Gailene - I would like to share a few examples with you - can you reach out to me separately?
-
I’ve noticed that some essays flagged by AI Detection tend to have a more descriptive and technical tone, particularly in experimental Extended Essays. For example, one Chemistry EE I received showed consistently high AI scores. Even after revision, the score decreased only slightly—from 69% to 63%.
In contrast, I had an English EE that initially scored around 30%. For review, I asked the student to strengthen their personal voice, especially in the introduction and conclusion. In this case, the revision was effective, and the new version was no longer flagged.
These experiences make me wonder how disciplinary writing conventions—especially highly standardized, technical, and descriptive scientific writing—may be influencing AI detection results, and how reliably these scores distinguish between legitimate student authorship and AI-generated text.
-
Thank you so much for this; it's a very interesting thread. I'm sorry I've only joined in so close to the end. My role is a support role in Higher Education. I work with educators and advise them about AI use/misuse (amongst other things!). Often they come to me looking for reassurance about an AI score.
Sometimes they trust it and just want me to confirm their view; sometimes they don't think it fits their impression of the student. I always remind them that the AI report is something to bring to a conversation, not a "smoking gun" in and of itself. But I find they still want reassurance of some sort. As others have mentioned above, they are trusting their own instincts less.
I've tried to use Authorship reports to find other indicators to give confidence to an opinion, one way or another. I had been using data points such as a high editing time and a high revision count as indicators of authentic effort from students, reasoning that a corresponding high AI score might be due to unintended misuse of Grammarly, for example. But a low editing time could easily be because a student downloaded a .docx file from Word Online just before uploading it, after spending hours writing it there. I've started to disregard editing times of 0-2 minutes completely.
I've also tried to get insight from writing style, but I'm less and less confident about saying anything based on that data. Maybe consistent writing is because of consistent use of AI, and AI reports that were low in the past but high now are simply evidence of the detector improving? Maybe a spike in writing style is because the nature of an assignment is different, or maybe there was an update to the student's preferred AI model? The more I look and test it myself, the less useful insight I feel I have to pass on.
Are there resources or training around this?
