
OpenAI to Shut Down its Inaccurate AI Detector

08 August 2023 10:07 AM

Citing poor performance and accuracy, OpenAI has deactivated its AI Classifier, a tool launched in January to distinguish human-written text from AI-generated text. The Classifier often misjudged human-written text as AI-authored, particularly with content under 1,000 characters.

While the company is researching more effective text provenance techniques, the closure has sparked debate within the educational sector, where the tool was seen as a deterrent against students using AI services like ChatGPT to write essays. OpenAI pledges to continue refining its AI-detection strategies.

In January, OpenAI unveiled a tool designed to determine whether a piece of text had been produced by a generative AI system. The technology promised to save, if not the world, at least the sanity of teachers and academics. Half a year later, the tool has been retired, having failed to fulfill its purpose. The announcement from OpenAI says:

“As of July 20, 2023, the AI classifier is no longer available due to its low rate of accuracy. We are working to incorporate feedback and are currently researching more effective provenance techniques for text. We have committed to developing and deploying mechanisms that enable users to understand if audio or visual content is AI-generated.”

OpenAI, the firm behind ChatGPT, quietly switched off the AI Classifier last week, attributing the decision to the tool's "low rate of accuracy." The explanation appears as a note appended to the blog post that originally announced the tool, which no longer links to the classifier.

New AI technologies appear almost daily, making ever more sophisticated generated content possible. A small industry of AI detectors has grown up in response.

When OpenAI announced the release of its AI Classifier, the company asserted that it could differentiate between text written by a human and text written by an AI tool, while also labeling the classifier "not fully reliable." In OpenAI's own evaluation on a "challenge set" of texts, the tool correctly classified only 26% of AI-written material as "likely AI-written," and it misidentified 9% of human-written material as AI-written.

According to OpenAI, the AI Classifier had several shortcomings: it was unreliable on text shorter than 1,000 characters, it performed poorly on text outside its training data, and it sometimes misclassified human-written content as AI-generated.

The education sector has a particular interest in accurately detecting AI-generated text. Since the chatbot's introduction in November 2022, educators have raised concerns about students using ChatGPT to write essays.

"We recognize that identifying AI-written text has been an important point of discussion among educators, and equally important is recognizing the limits and impacts of AI-generated text classifiers in the classroom," according to OpenAI, the business will keep expanding its reach as it gains knowledge.

Decrypt has contacted OpenAI for comment but has not yet received a response.