#AskTurnitin: Month-Long Q&A with the Turnitin Team on Navigating AI in Teaching and Learning
Are you curious about how AI detection can support trust, integrity, and student authorship in your classroom? Or maybe you want to explore practical strategies for navigating AI responsibly in teaching?
Join #AskTurnitin with Turnitin team members Patti West-Smith and Gailene Nelson as they discuss how educators can approach AI in the classroom with balance and insight.
Explore how thoughtful use of AI detection and Turnitin tools can support academic integrity, empower educator judgment, and enhance the learning experience.
Meet our team:
- Patti West-Smith – Senior Director of Global Customer Engagement at Turnitin
- Gailene Nelson – Senior Director of Product Management at Turnitin
How it works:
#AskTurnitin will be open in TEN for 30 days, giving you plenty of time to post your questions and join the discussion. Patti and Gailene will be checking in regularly to respond and share their insights.
Ask about:
- How to discuss AI and authorship with students
- When AI detection is most helpful—or most challenging
- Balancing innovation and integrity in AI-enabled learning
- How to interpret AI detection results ethically
- What support or resources would make AI detection more meaningful for your context
#AskTurnitin Guidelines:
- Be respectful: Treat all participants with kindness and professionalism.
- Stay on topic: Questions should relate to AI detection, teaching strategies, and classroom experiences.
- No product support requests: Technical or account issues should be directed to Turnitin Support.
- Avoid sensitive personal info: Do not share personally identifiable information about yourself, your institution, or students.
- Engage constructively: Share insights, ask thoughtful questions, and build on others’ contributions.
Helpful resources to support your participation:
- AI is here to stay in the classroom, so why do we need AI detectors? | Turnitin Blog
- In a world of AI, why citation and referencing still matter | Turnitin Blog
- Bridging the AI divide: Teaching writing and building trust | Turnitin Blog
- How the ‘show your work’ approach is redefining student writing | Turnitin Blog
Start the conversation:
Reply to this post with your questions, and Patti and Gailene will jump in with their insights. Let’s connect, share experiences, and learn from each other as we explore the role of AI in education!
-
Hi everyone, Patti here—welcome to our month-long #AskTurnitin conversation!
To kick things off, I’d love to highlight a thoughtful piece on LinkedIn written by our CPO, Annie Chechitelli: “AI Detection Is Imperfect—And Should Be.” If you haven’t seen it yet, it’s a great read and sets the stage for why this discussion matters so much right now. Annie reminds us that detection isn’t about perfection; it’s about giving educators insight, context, and confidence as they navigate authorship in an AI era.
With that in mind, Gailene and I are here all month to talk openly about how you’re approaching AI with your students, what challenges you’re facing, and how tools like AI detection can support integrity and student learning without getting in the way of the teaching moment.
We can't wait to hear from you! Drop your thoughts or questions below—big or small. We’re excited to learn from you and support your conversations about responsible, balanced AI use over the next 30 days.
-
Hello TEN Community
Welcome to our very first #AskTurnitin. If you’ve got questions for and about AI in education, this is the place to ask! Can’t wait to see what you’re curious about!
-
I also want to give a big shoutout to the member who earned the Top Contributor badge!
Your thoughtful contributions truly make our TEN community better, and we’d love for you to take part in this conversation.
-
This year, even when students write original work, the AI detector still shows scores around 50%. However, it is possible to obtain a false positive. How can we demonstrate that the work is the student's original work? The educator is the best judge; it comes down to the educator's discretion, because they know their students, but how can we justify giving the same answer to all students?
-

Hi everyone! Gailene here—jumping in with a product perspective as we continue our #AskTurnitin conversation.
One thing I see often in my role is how educators are trying to balance trust, student agency, and the realities of AI, all while interpreting detection insights in a way that supports learning rather than policing. Something Annie’s article touches on (and that we think about constantly on the product team) is this idea: AI detection isn’t meant to be a verdict—it’s meant to be a signal. A starting point for a conversation, not the end of one.
With that in mind, I’d love to hear from you all! Your feedback genuinely influences what we prioritize, so anything you share helps us make these tools more meaningful and supportive for real teaching moments.
Looking forward to hearing your thoughts!
-
Hello,
I have started using Clarity and have activated the grammar, citation, and AI assist. I was disappointed to find that when a student uses the allowed AI assist, Turnitin still flags the essay as AI-generated. One essay came back as 100 percent AI-generated. I do not think this student cheated in any way, so I am not concerned about accusing him wrongly, but I really would hope that in the future you could create a tool that would flag AI usage outside the AI assist. I know this may be challenging, but since the system knows what it suggested (and presumably the system didn't actually write the paper or offer wording), the system could discount those suggestions as AI-generated. It's a sophisticated bit of work, but AI can probably help you with it. :)
The other part of Clarity I would recommend improving is the grading: I would like to see more grading tools, like those in Canvas, the program our college uses. If it were not for the AI assist, I would not use Clarity, because the grading is clunky. I miss being able to highlight student passages to comment on them, for example. The rubric is also very basic. I am going to try a weighted rubric to see if I can get more flexibility in awarding points.
I am an early adopter at my college for the Clarity system, and I look forward to growing with you as new capabilities and tools become available. I would enjoy hearing your feedback on the topics I have raised above.
Thank you!
-
Hello -
While I recognize that there are merits to using AI to support writing in certain ways, I teach English courses to juniors and seniors that center on composition (word choice, organizational choices, making arguments that should be unique, etc.), so I avoid it in nearly all my formal writing assignments.
I recognize that the new Clarity program exists (a discussion for another day), and I am very aware of how Turnitin’s AI detection evaluates writing; I definitely know it isn’t perfect. I totally understand and appreciate the way Turnitin labeled this tool as the start of the conversation rather than the, for lack of a better term, smoking gun that points to this form of plagiarism that is becoming increasingly common.
All that to say... I see a lot here about professional judgment, but even though I am in my 12th year in the classroom, I am losing confidence in my judgment every day. I’ve been having a lot of these AI conversations with students of late. I’ve had one or two students who logged 96% or 100%, where there really isn’t much of a conversation to have. But I’ve also had quite a few who logged the “*”, showing that there’s AI present but not enough to be confident, and more who have logged between 20 and 50%.
My question: the majority of students I’ve spoken to are shocked that their AI score is as high as it is. I am not naive; I know I might have a few being dishonest. At this point I ask them to tell me ANYTHING they might have done differently, and the most common answer is that they accepted choices recommended by the Grammarly extension or, in other cases, passed the paper through a program and said “check my grammar.” Is this enough to set off the alarms? Can you explain to me how this works so I can explain it to them? I have bounced around some other forums where this has been discussed, but I figured I’d come right to the source.
Also, I guess while we’re here... can you define “false positive” as the program sees it? Can these come from honest writing, even writing by educators with a clear, consistent tone?
This is a cool forum!
-
Hi
Always happy to talk Turnitin!
A “false positive” for us means human-written text flagged as AI-generated. Because we keep that rate below 1%, it’s rare, but even honest student writing can sometimes get flagged, especially if it’s very polished, very formulaic, or unusually consistent for that writer. If you ever have a submission you truly believe is misclassified, please share it with us. We can’t always give individual responses due to volume, but we do review them and use those cases to keep improving accuracy.
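To make that “below 1%” figure a bit more concrete, here is a rough back-of-the-envelope sketch. The paper count is hypothetical and purely illustrative, not Turnitin data; it simply shows what a sub-1% document-level false positive rate implies at classroom scale:

```python
# Illustrative arithmetic only; the rate is an upper bound for "below 1%"
# and the paper count is a made-up example, not Turnitin data.
false_positive_rate = 0.01      # upper bound for "below 1%"
human_written_papers = 200      # hypothetical: genuinely human-written essays in a term

expected_false_flags = false_positive_rate * human_written_papers
print(f"Expected false flags out of {human_written_papers} papers: fewer than {expected_false_flags:.0f}")
# -> Expected false flags out of 200 papers: fewer than 2
```

In other words, a false flag is rare but not impossible at classroom scale, which is exactly why we frame the score as a signal to investigate rather than a verdict.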
In the meantime, I'm thinking of a few resources that may be useful to you as we navigate AI and false positives (in no particular order, I should add):
- Discussion starters for tough conversations about AI
- How to interpret Turnitin's AI writing score and dialogue with students
- AI conversations: Handling false positives for educators
- Approaching a student regarding potential AI misuse
There are actually more on this general topic (AI bypassers, for example) on this page: Academic integrity in the age of AI.
I’m delighted you brought this topic here, as this is exactly the kind of nuanced, real-world conversation we hoped this forum would spark. If you want to talk through a specific example or walk through what you’re seeing in more detail, I’m happy to dig in with you.
-
#AskTurnitin Conversation Starters
Hi TEN Community! We’ve seen some educators in TEN noticing that AI scores seem to be rising, sometimes more than expected. Let’s have this conversation here!
Some context:
- Our newest model detects bypassers more accurately
- It identifies ChatGPT-5–level writing better
- This means that higher scores may reflect improved detection, not changes in students’ writing
Question: How are these shifts showing up in your classrooms? What questions or challenges are you facing when interpreting AI scores?
Fellow educators, we’d love to hear your tips, strategies, or experiences navigating these changes.
Resources:
- In a world of AI, why citation and referencing still matter (Maybe now more than ever!)
- Taking a deeper dive into AI writing at Turnitin
-
Some great tips and resources have been shared above! In a previous response, I suggested that framing like this could be helpful in a discussion with a student (or group of students): “Your score may be higher not because you did something wrong intentionally, but because the tool changed your voice more than you realized. Now, what do you want to do about it?”
AI has become so prevalent in so many tools that the odds are quite high that students don't even realize they're using AI or that it is shaping their work. A question such as the one above helps open the discussion to what it means to have a distinct voice and style, and to be intentional about if and how AI may be impacting that. Helping students to "interrogate the work" builds their ability to think critically; as educators, we want them to do that with any information they're consuming, so we should also want them to apply that same lens when looking at their own work, especially when a tool begins to change it from what they may have originally intended.
Like others here, I'm mindful of how much time some of this could take, so one suggestion might be to use this as a whole-class or small-group activity rather than trying to use a 1:1 conference approach. I suspect that more students than not could use this kind of discussion and practice, and it will certainly make it more feasible for instructors.
-
Hello, I am from the UK, so I apologise if I use different terminology. I would like to say this forum is a fantastic idea and I am really enjoying reading the questions and the advice provided. What I am understanding a lot more now is that AI detection isn't about accusing students of misuse but about opening up a conversation about how AI has been used. AI is not going anywhere and the education sector needs to embrace it, but this is quite difficult. I keep having conversations about how early transparency is going to be key, and about educating students on what is appropriate use and what is misuse.
I have many questions, but what I would like to start off with is the technical aspect of the AI detection. I understand from this forum that it works by identifying 'signals'. What are these signals? What is it that shows up in students' work that flags it as possibly AI writing? I often use my own indicators to have conversations with learners; for example, Americanised language is a good one in the UK, as we can identify when 'z' is used in words and discuss it. Other indicators are when the work is generic and has no personalised examples in there, or when the tone changes. I have heard about the 'rule of 3', but I do not fully understand it or know how accurate it is to comfortably have a conversation about AI use. So what are the signals that flag AI may have been involved? I feel that if I understand the technical aspect a bit more, it could shape my conversations better. I must say, though, that I am not in IT, so I do not understand a lot of terminology.
Thank you in advance
-
Hi TEN Community!
This entire thread has been an amazing deep dive into what educators are experiencing in the classroom. Following the information shared, we have been inspired to update one of our more popular educator resources: “AI-Generated Text: What Educators Are Saying.”
We released the original version two years ago (a lifetime in AI years!), and so much has changed, as clearly expressed in this thread: your curriculum, your tools, your conversations with students, and the role AI now plays in teaching and learning. That’s why we’d love to feature your experiences in the refreshed edition.
We’re looking for short reflections around:
- Your challenges interpreting or talking about AI use
- Your successes teaching with or about AI
- Your wonderings: What still feels unclear? What are you grappling with?
- Your observations about how students are using (or misusing) AI
- Your evolving relationship with AI detection and trust
We’ll be selecting a handful of quotes for the updated publication and will anonymize everything, of course.
If you’re willing to share, please drop your thoughts right here in the thread. Even a few sentences can help fellow educators around the world feel less alone in navigating this fast-moving space. (And for those educators who have already posted such thoughtful insights, we hope to reach out to you separately to see if we can include your perspective in our resource.)
Current resource for reference:
https://www.turnitin.com/papers/ai-generated-text-what-educators-are-saying
Thank you for helping shape the conversation, and for everything you do to support integrity, trust, and learning.
-
Hi, I hope I'm asking this in the right place, and I apologize if this has been covered, as I just joined: I teach a 100% online course, and my course requires a research paper of 1,500-1,800 words. I'm receiving a large number of papers with high detection (in the 80-100% range) of AI 'generative text' (blue); I routinely plug the same text into other AI detection sites and get wildly varying results. Students all say they only use AI for grammar and/or organization, but not to 'generate' their paper, which is my real concern: that they're simply using a prompt and having AI actually write their paper. Do you feel confident in Turnitin's accuracy? Am I misunderstanding 'generative text'?
