The makers of arguably the world’s most popular chatbot proposed a solution to help people differentiate between human- and robot-generated text.
OpenAI, the company behind ChatGPT and text-to-image generator DALL-E, said it’s “trained a classifier to distinguish between text written by a human and text written by AIs,” in a blog post yesterday (Jan. 31). You can try it here.
But there’s a catch: It’s not entirely reliable yet. In testing so far, only 26% of AI-written texts were correctly flagged as “likely AI-written,” while human-written text was incorrectly labeled as AI-written 9% of the time. The tool proved more effective on chunks of text longer than 1,000 words, but even then the results were quite iffy.
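To see what those two rates mean in practice, here is a minimal back-of-the-envelope sketch. The 50/50 split between AI and human essays is our assumption for illustration, not a figure from OpenAI; only the 26% and 9% rates come from the article.

```python
# Illustrative arithmetic on OpenAI's reported classifier rates.
# Assumption (ours): a batch of 1,000 essays, half AI-written, half human-written.
ai_essays, human_essays = 500, 500

true_positive_rate = 0.26   # share of AI text correctly flagged "likely AI-written"
false_positive_rate = 0.09  # share of human text incorrectly flagged

flagged_ai = ai_essays * true_positive_rate        # correct flags
flagged_human = human_essays * false_positive_rate # false accusations

precision = flagged_ai / (flagged_ai + flagged_human)
print(f"Essays flagged: {flagged_ai + flagged_human:.0f}")
print(f"Share of flags that are actually AI text: {precision:.0%}")
```

Under this assumed mix, roughly one in four flagged essays would be a false accusation against a human writer, which is why OpenAI warns against using the tool for high-stakes decisions.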
OpenAI defended the tool’s flaws as part of the process, saying they released it at this stage of development “to get feedback on whether imperfect tools like this one are useful.”
What is ChatGPT?
Made by OpenAI, which also made the text-to-image generator DALL-E, chatbot ChatGPT has become a talking point since it was launched as a prototype in November 2022. Microsoft, which has pumped billions into the company, could reportedly use it to advance its Bing search engine.
Applications of the tool stand to benefit various professions and endeavors. The dialogue-based artificial intelligence (AI) tool helps real estate agents create online listings, students churn out essays, and developers write code, among other things.
OpenAI’s classifier can’t be used to prevent cheating in schools
One of the biggest concerns about ChatGPT is its application for cheating in school exams. Some institutions have started blocking ChatGPT on their devices and networks. OpenAI released its AI-identifying tool to partially address those issues.
“We recognise that many school districts and higher education institutions do not currently account for generative AI in their policies on academic dishonesty. We also understand that many students have used these tools for assignments without disclosing their use of AI,” the company has acknowledged.
Unfortunately for these institutions, the AI text classifier is “far from foolproof” and can’t be used to detect plagiarism, OpenAI warned. Not only can it misclassify AI text as human writing and vice versa, but students could also learn to dodge the system by modifying some words or clauses in the generated content.
For now, educators have to encourage students to be more honest and transparent about their use of the chatbot.
A non-exhaustive list of OpenAI’s text classifier’s limitations
🗣 It works reliably only on English text. On other languages, and on code, it is even less accurate.
💬 Predictable text, such as a list of prime ministers, which would read largely the same whether a human or a bot wrote it, cannot be reliably identified by the classifier.
⏳ The detection may be ephemeral given that AI-written text can be edited to evade the classifier.
❌ For inputs that are very different from the text in its training set, which ends in 2021, the classifier can confidently give the wrong answer.
Tool of interest: GPTZero
A 22-year-old developer, Edward Tian, wrote an app to sniff out text generated by ChatGPT and launched it on Jan. 3. The Princeton University student, who is months away from graduating, based his detection system on analysing two factors: perplexity, which refers to randomness in the text, and burstiness, which refers to variations in sentence formulations.
Taking in feedback from educators, Tian added more nuance to the tool, which can now identify a mix of AI and human text, and highlights portions of text that are most likely to be AI generated. The team of four engineers working on the system also built a pipeline to handle file batch uploads in PDF, Word, and .txt format so educators can run multiple files through GPTZero at once.
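The two signals Tian describes can be illustrated with a toy sketch. This is our simplified stand-in, not GPTZero's actual code: true perplexity requires scoring the text with a language model, so the sketch only implements burstiness, measured here as the spread of sentence lengths (human writing tends to mix short and long sentences; AI output is often more uniform).

```python
# Toy illustration of "burstiness" (one of GPTZero's two signals).
# Simplification (ours): burstiness ≈ standard deviation of sentence
# lengths in words. GPTZero's real implementation is more sophisticated.
import statistics


def burstiness(text: str) -> float:
    """Population standard deviation of sentence lengths, in words."""
    normalized = text.replace("!", ".").replace("?", ".")
    sentences = [s for s in normalized.split(".") if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    return statistics.pstdev(lengths) if len(lengths) > 1 else 0.0


uniform = "The cat sat here. The dog sat here. The bird sat here."
varied = ("Stop. The old lighthouse keeper had not slept in three days, "
          "and the storm was getting worse.")

print(burstiness(uniform))  # identical sentence lengths -> 0.0
print(burstiness(varied))   # mix of short and long sentences -> high value
```

Text with near-zero burstiness would be one hint, among others, that a bot produced it; a human-written passage with varied sentence rhythm scores higher.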
🎨 The best examples of DALL-E 2’s strange, beautiful AI art
💸 Microsoft makes its third multi-billion dollar investment in ChatGPT creator OpenAI
🌐 Microsoft is expanding access to its AI toolkit, including ChatGPT