
AI and Academic Integrity: A Growing Crisis

April 29, 2026

By: Ben Winter


Within the past few years, generative artificial intelligence (AI) tools such as ChatGPT have transformed the academic landscape at colleges and universities across the United States and abroad. Initially welcomed as a way to boost productivity, AI has become a widespread and deeply controversial presence in higher education. It should come as no surprise that academic misconduct allegations continue to surge as AI grows more sophisticated. The methodologies and procedures used to flag and investigate student AI use, however, have not kept pace, resulting in a fast-developing legal and ethical crisis affecting universities, colleges, and students across the State of Ohio and the country.

How AI Detection Works — and Where It Falls Short

AI tools are now commonly used by students for brainstorming, drafting, editing, and even completing entire assignments. As AI usage has increased, so too have accusations of academic dishonesty. Professors and administrators concerned about preserving academic integrity have turned to automated detection tools to identify suspected misuse.

Universities often use tools such as Turnitin to flag academic dishonesty involving AI, and this has led to a sharp increase in misconduct cases. Reports indicate that AI-related plagiarism disputes now make up a significant portion of academic integrity investigations in higher education.[1] Yet the surge in accusations has not necessarily been matched by reliable evidence. Instead, many cases hinge on algorithmic assessments that are, at best, probabilistic guesses rather than definitive proof. In practice, a university will use software to detect a student’s use of AI, but the software itself will, ironically, use AI to detect the use of AI. Universities also look for the supposed “hallmarks” of AI use, such as straight quotation marks (" ") appearing in a Word document in place of Word’s curly “smart quotes,” which is taken as a sign that the text was copied from an outside source and pasted in rather than typed.
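To illustrate how superficial such “hallmark” signals can be, here is a minimal, purely hypothetical sketch of a straight-quotes check. This is not any vendor’s actual code; real detectors are proprietary and undisclosed.

```python
# Hypothetical sketch of a "straight quotes" hallmark check.
# Straight quotes only indicate that text was pasted rather than
# typed with Word's autocorrect on -- they prove nothing about
# whether AI wrote the text.

CURLY = {"\u201c", "\u201d", "\u2018", "\u2019"}   # curly “ ” ‘ ’
STRAIGHT = {'"', "'"}

def has_straight_quotes_only(text: str) -> bool:
    """True if the text uses straight quotes and no curly ones."""
    chars = set(text)
    return bool(chars & STRAIGHT) and not (chars & CURLY)

print(has_straight_quotes_only('He said "hello" to the class.'))        # True
print(has_straight_quotes_only("He said \u201chello\u201d to the class."))  # False
```

Note that pasting any human-written text from a plain-text editor would trigger the same signal, which is exactly why this kind of heuristic cannot establish misconduct.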

The Problem With AI Detection Tools

Turnitin, for example, is fraught with known issues producing false positives. Annie Chechitelli, Turnitin’s Chief Product Officer, admits that the tool has a 4% false positive rate.[2] For a university like The Ohio State University, with approximately 66,901 students, that rate implies roughly 2,676 students could be wrongly flagged if each submitted a single human-written paper. Given the stakes, that number is far too high. This is precisely why Vanderbilt University discontinued its use of Turnitin: the reality of innocent students being falsely accused of using AI, an issue that has been widely reported at other universities as well.[3]
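The arithmetic behind that estimate is straightforward, and writing it out also shows the assumption it rests on: a false positive rate applies per submitted document, so the per-student figure assumes one human-written submission each.

```python
# Expected false flags from a detector's stated false positive rate.
# Assumes (hypothetically) one human-written submission per student.

false_positive_rate = 0.04   # Turnitin's stated 4% rate
students = 66_901            # approximate Ohio State enrollment

expected_false_flags = students * false_positive_rate
print(round(expected_false_flags))  # 2676
```

With multiple graded submissions per student each term, the expected number of false flags would be correspondingly higher.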

Turnitin has been known to flag innocent students as using AI, as reported by the Washington Post.[4]

According to Vanderbilt University’s Brightspace guidance: “Additionally, there is a larger question of how Turnitin detects AI writing and if that is even possible. To date, Turnitin gives no detailed information as to how it determines if a piece of writing is AI-generated or not. The most they have said is that their tool looks for patterns common in AI writing, but they do not explain or define what those patterns are.”[5]

Even Turnitin advises users that “our AI writing detection model may not always be accurate (it may misidentify human-written, AI-generated, and AI-generated and AI-paraphrased text) [sic], so it should not be used as the sole basis for adverse actions against a student.”[6]

Furthermore, OpenAI has stated that when it comes to AI detectors and whether such detectors work, “our research into detectors didn’t show them to be reliable enough given that educators could be making judgments about students with potentially lasting consequences.”[7]

In short, these systems are not designed to provide definitive proof of misconduct—yet they are often treated as such in disciplinary proceedings.

What to Expect: The Academic Disciplinary Process

If you are a student flagged for using AI, you can expect the following disciplinary process, which may vary slightly between different academic institutions:

  1. Initial Flag: A professor or automated system flags a paper as potentially AI-generated.
  2. Faculty Review: The instructor reviews the flagged content and may compare it to the student’s prior work or request drafts.
  3. Referral to Academic Integrity Office: If suspicion remains, the case is referred to a formal disciplinary body.
  4. Notice to the Student: The student is notified of the allegation and given an opportunity to respond. The university or college may offer an informal resolution at this point in exchange for an admission of guilt. This typically results in some form of negative notation on the student’s academic transcript and/or restrictions on the student’s ability to participate in university or college activities, such as sports teams.
  5. Hearing or Administrative Review: The student may present evidence (drafts, notes, version history). Some schools conduct formal hearings; others rely on administrative decisions.
  6. Determination and Sanctions: A finding is issued, often based on a “preponderance of the evidence” standard rather than the proof-beyond-a-reasonable-doubt standard to which criminal defendants are entitled.

The Stakes: Potential Consequences for Students

The consequences of being found responsible for AI-related cheating can be severe and long-lasting. They may include, but are not limited to, failing grades on assignments or entire courses, academic probation or suspension, expulsion, and permanent notations on the student’s disciplinary record. Any of these consequences can impair a student’s ability to attend graduate school and, in turn, severely damage the student’s career prospects. In some cases, students may even lose scholarships.

Even if accusations are later dismissed, the process itself can be stressful, time-consuming, and damaging to a student’s academic standing and mental health.

Why Legal Representation Matters

If you are accused of cheating by using AI, it is important to seek legal representation. An attorney can ensure that the university follows its own procedures and does not improperly rely on flawed evidence. An attorney can also help piece together the facts and challenge the scientific validity and reliability of AI detection tools. Even if the allegation is accurate, legal counsel can negotiate on your behalf to mitigate potentially lasting consequences and can appeal your case in court if necessary. Because university disciplinary processes lack the safeguards of courts, students may be at a significant disadvantage without professional representation.

Furthermore, an attorney can help move your case through the college or university’s often bureaucratic review process.

I have experience successfully representing students accused of AI-related academic misconduct and academic misconduct in general. If you are accused (whether rightfully or wrongfully), call me to discuss your legal options, and I, along with my fellow attorneys at Strauss Troy, will zealously do everything possible to defend you and protect your academic reputation.

Sources:

[1] Business Insider, AI Cheating in Colleges: Plagiarism and ChatGPT (2024).

[2] Turnitin, Annie Chechitelli, Understanding the False Positive Rate for Our AI Writing Detection Capability.

[3] Vanderbilt University Brightspace, Guidance on AI Detection and Why We’re Disabling Turnitin’s AI Detector (Aug. 16, 2023).

[4] Washington Post, ChatGPT Cheating Detection: Turnitin (Apr. 1, 2023).

[5] Vanderbilt University Brightspace, Guidance on AI Detection and Why We’re Disabling Turnitin’s AI Detector (Aug. 16, 2023).

[6] Turnitin Help Center, AI Writing Detection in the Classic Report View.

[7] OpenAI Help Center, How Can Educators Respond to Students Presenting AI-Generated Content as Their Own?