
When Your Hard Work Gets Mislabelled as AI: A Halloween Horror Story

  • psychbyshalom
  • Mar 12
  • 5 min read

It was Halloween morning, but instead of costumes and candy, I faced a blank document and a dreadful deadline.


At 8:00 AM, I sat down to write a hypothetical psychology case example due by 5:00 PM. Like many students, I had procrastinated. But unlike some, I felt confident. Between my coursework and my job as a behavioral technician, the material was familiar. I believed a few focused hours would be enough to produce a solid paper. The topic at hand was Autism Spectrum Disorder (ASD). I was excited to write this paper and flex all the knowledge I had gathered from work.


What happened next was something I never expected: my hard work was flagged as mostly AI-generated. This is the story of how a computer wrongly accused me of cheating and what it taught me about trust, technology, and academic integrity.


Procrastination is a common student experience. I wasn’t careless; I simply trusted my understanding of the subject. The case study involved analyzing behaviors, and to make the scenario flow naturally, I used the name of my boyfriend’s brother, Riley (RJ). It was a small detail that personalized it even more.


After about an hour, I finished the paper. I reread it, made some edits, and submitted it through Canvas. I felt proud. That mix of relief and accomplishment after last-minute work is familiar to many students.


The Unexpected AI Flag


Later that day, I received a message from my professor, Dr. Barry: my paper had been flagged by Turnitin's AI detection software. The system claimed 92% of my work was AI-generated. I stared at the screen in disbelief; my stomach dropped. I had written every word myself, without any AI tools or shortcuts.


The real RJ himself watched me write it.


This moment was unsettling. Academic integrity is not just a rule; it's a student's entire reputation in the academic world. For me, it mattered deeply. My dad has a PhD and spent years in academia. Growing up, I learned the value of honest work and the consequences of dishonesty. His countless theses and research papers were written well before AI existed, so the pressure was on.


But instead of spiraling, I decided to approach it head-on.

When I sat down with Dr. Barry (my awesome professor at the time) and walked through everything, the conversation shifted quickly.

Instead of feeling like an accusation, it became a conversation about how these automated systems work and how often they produce misleading results.

Then I looked at the actual Turnitin similarity report on my end.

The similarity score?



A whopping 3%.


And the only phrases it highlighted were things like:

“Is a 4-year-old boy” and “Autism Spectrum Disorder (ASD).”

In other words, phrases that appear in thousands of psychology papers.

Nothing about the structure or ideas of my work suggested AI generation. I gathered anything that could help explain where my knowledge was actually coming from.

I brought:


• A visual schedule I had created that same day for one of the children I work with (redacted)

• A sample data collection sheet from my job (with all identifying information removed)

• A photo of a PECS communication book used in sessions, and my explanation of the behavioral frameworks I referenced in the paper


Some of the things I wrote about in the assignment come directly from my work experience. Concepts like reinforcement systems and prompting strategies are part of my daily environment. But as a professor, how can you know that a student actually held that job and lived those experiences? There's no way to be sure.


Why AI Detection Software Can Be Wrong


AI detection tools are designed to identify text that seems generated by artificial intelligence. They analyze patterns, phrasing, and other markers. But these tools are not perfect. They can misinterpret writing styles, especially when:


  • The writing is clear and well-structured

  • The vocabulary is consistent and formal

  • The text lacks typical human errors or informal language


In my case, my familiarity with the subject and careful writing made the paper appear “too perfect” to the software. This is a known issue with AI detectors: they sometimes flag genuine work because it doesn’t fit their expectations of human writing.
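To make the "too perfect" problem concrete, here is a deliberately simplified sketch of one signal detectors are often said to use: how much sentence length varies across a text (sometimes called "burstiness"). This is my own toy illustration, not Turnitin's actual algorithm; real detectors use far more sophisticated language-model-based measures.

```python
import statistics

def sentence_length_burstiness(text: str) -> float:
    """Standard deviation of sentence lengths (in words): a crude proxy
    for the sentence-to-sentence variation that some detectors associate
    with human writing. Low variation can read as 'machine-like'."""
    normalized = text.replace("?", ".").replace("!", ".")
    sentences = [s.strip() for s in normalized.split(".") if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths)

# Evenly paced, polished prose...
uniform = "The boy plays alone. The boy avoids eye contact. The boy lines up toys."
# ...versus prose with more rhythm and variation.
varied = ("RJ plays alone. When the classroom gets loud, he covers his ears "
          "and retreats to the corner. Toys, always in rows.")

# The uniform sample scores lower than the varied one, which hints at why
# clear, consistent writing can be misread as AI-generated.
```

Under this toy metric, careful, well-structured writing (the very thing instructors ask for) scores as less "human", which is exactly the failure mode described above.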


The Impact of False Accusations


Being wrongly accused of cheating can have serious consequences:


  • Emotional stress: Feeling unfairly judged can cause anxiety and self-doubt.

  • Academic risk: Investigations can delay grades or lead to penalties and academic integrity reports.

  • Damage to reputation: Even a false accusation can affect how professors and peers view you.


For students who rely on their integrity and hard work, this experience can feel like a personal attack.


What Students Can Do to Protect Themselves


If you face a similar situation, here are some practical steps:


  • Keep drafts and notes: Save your drafts and version histories so you can demonstrate your writing process and prove originality.

  • Communicate clearly: Explain your writing process to your instructor if flagged.

  • Request a review: Ask for a manual check rather than relying solely on software.

  • Use plagiarism checkers: Before submitting, run your paper through trusted tools to catch issues.

  • Stay informed: Understand how AI detection works and its limitations.


These actions can help you defend your work and maintain trust with your educators.


What Educators and Institutions Should Consider


The rise of AI detection software raises important questions for schools:


  • Balance technology with human judgment: Software should support, not replace, instructor review.

  • Educate students about AI tools: Teach how to use AI ethically and how detection works.

  • Develop clear policies: Define how flagged cases are handled to protect students’ rights.

  • Encourage transparency: Share detection results and allow students to respond.


A fair system respects both academic integrity and the challenges of new technology.


Reflecting on the Experience


My Halloween nightmare morning turned into a lesson about trust in technology and the value of honest work. Despite the scare, I stood by my paper and my process. The experience showed me how important it is to:


  • Stay confident in your abilities

  • Document your work carefully

  • Advocate for yourself when technology makes mistakes


Technology can help education, but it cannot replace the human element of learning and fairness.

After a few weeks had passed, one of my peers went through the same thing, so I wasn't the only one. In her situation, the case was escalated to the Academic Integrity Board, but thankfully, she managed to beat it.


The Assignment

For context, the paper referenced in this post was a case study assignment for my psychology course. The goal was to describe a hypothetical child or adolescent with Autism Spectrum Disorder (ASD), outline their developmental level and behaviors of concern, and then explain how those concerns might be addressed through appropriate intervention strategies.

The assignment was designed to help students review how ASD may present in clinical or classroom settings and practice applying treatment approaches discussed in class.

For transparency, I’ve included the original essay below so readers can see the assignment.




 
 
 
