AI Detection tools DON’T WORK!

Srilal S. Siriwardhane
4 min readFeb 27, 2024


Before discussing the “why”, let’s have a little bit of background. A few months back, I saw a few people in academia saying that Turnitin can now detect AI-generated content, and some even saying they will punish students who get caught submitting GenAI content. Recently, I saw some students get flagged by Turnitin because it detected their work as AI-generated. The problem is that no one can prove whether they used GenAI or not. (From here on out, GenAI will generally refer to LLMs.)

Now, in a paper (Link) published by a team of computer scientists and mathematicians at the University of Maryland, they show that even with the LLM watermarking method, these AI detectors are no more than 50% accurate at best. To quote Dr. Soheil Feizi, “It will be very close to a random coin flip in terms of detecting AI-generated text or human-generated text”.

So, the code below can predict whether a text was written by GenAI with the same accuracy as all the so-called AI detection tools.

// Pick a random integer from 0 to 100; it is exactly as informative as a detector's score.
const randomNumber: number = Math.floor(Math.random() * 101);
console.log(`GenAI Probability: ${randomNumber}%`);

You would get better results asking ChatGPT whether it wrote the text! It doesn’t matter if the tool says it’s 0% AI-generated or 100% AI-generated; in the end, the number is meaningless.

But why can’t we build something that can detect GenAI content? Simply because modern Large Language Models (LLMs) resemble good human writing more closely than the average human does. Think of how an average human writes; half the population writes worse than that. If you were to place LLMs on the bell curve of human writing ability, modern LLMs would sit on the far right side!

This is why OpenAI has abandoned its efforts to create an AI detection tool. In a blog post (Link), OpenAI states that “…Our research into detectors didn’t show them to be reliable enough given that educators could be making judgments about students with potentially lasting consequences.”

In its marketing, Turnitin claims a false positive/negative rate of only 1%, but as studies have shown, it is much higher than that (the tool is only accurate about 50% of the time). For example, if you paste in a paragraph from a famous book, such as the Bible, it will say it is AI-generated with high probability. That shows how inaccurate these things are. These so-called AI detection tools exist only to extort money from gullible academic institutions, and since Turnitin is already used by many of them, selling this feature is just a bit more marketing on their side.

Let’s say an institution with 1,000 students uses a so-called AI detection tool to check whether students submit AI-generated content. Even if this tool really had only a 1% false positive rate, about 10 students would be falsely accused of submitting AI-generated content.
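
Here is a minimal sketch of that back-of-the-envelope calculation, assuming every submission is actually human-written and using the student count and 1% rate from the scenario above:

// Assumed figures from the scenario above: 1,000 students, a 1% false positive rate,
// and every submission is genuinely human-written.
const totalStudents: number = 1000;
const falsePositiveRate: number = 0.01;
const expectedFalseAccusations: number = totalStudents * falsePositiveRate;
console.log(`Expected falsely accused students: ${expectedFalseAccusations}`); // 10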

But how can they prove that a student used AI? How do they prove that a given text is AI-generated?

Well, obviously they can’t! These AI detection tools cannot provide any actual evidence other than some arbitrary percentage number pulled from their algorithmic posteriors!

After all, since these tools are themselves AIs, they are black boxes; you won’t get any reason why they think a text is AI-generated. In some cases, changing a single letter will drastically change the detected probability. Since most academic writing is similar in style, the chances of a false positive increase, and for someone whose English is a second language, the chances of false positives, or even false negatives, increase further.

Some universities have even stopped using Turnitin and other so-called AI detection tools altogether because of this unreliability. This article (Link) from Vanderbilt University in Tennessee, USA, explains the reasoning behind their decision, and Bloomberg reports (Link) that some universities have decided to stop using AI writing detection tools to evaluate students’ work.

Instead of trying to fight an already lost war on AI detection, the best course of action would be to adopt GenAI in academia, and some universities have already started this process.

Northwestern University in Illinois, USA, has provided some guidelines (Link) on how to adopt GenAI in the classroom. In their reasoning, they also state a very important fact; to quote them directly, “GAI is integrated deeply into many tools and will become increasingly difficult to opt out of.” (The linked Northwestern University resource is a very interesting and helpful course on using GenAI; they have written a set of articles that will be essential for anyone who uses GenAI.)

As more and more of the tools we use, such as Microsoft Office, start to integrate GenAI, it will be difficult to avoid using GenAI even if the writer wanted to. Many citation formats have already introduced guidance on how to cite GenAI.

Finally, if anyone falsely accuses you of using GenAI, ask for proof. You can also simply take a piece of text written by the accuser and run it through multiple so-called AI detection tools; unsurprisingly, each tool will have a different opinion and will report the text as partly AI-generated with arbitrary probability numbers that cannot be proven.
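
If you want to see roughly what that disagreement looks like without leaving your terminal, here is a tongue-in-cheek sketch in the spirit of the snippet earlier in this post: the detector names are made up, and each “verdict” is just the arbitrary-percentage generator from before.

// Hypothetical detector names; each verdict is just the coin-flip generator from earlier.
const detectors: string[] = ["Detector A", "Detector B", "Detector C"];
for (const name of detectors) {
  const verdict: number = Math.floor(Math.random() * 101);
  console.log(`${name} says the accuser's text is ${verdict}% AI-generated`);
}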
