OpenAI shutters AI detector due to low accuracy

OpenAI has quietly pulled the plug on its AI Classifier, which aimed to help teachers, professors and others distinguish between human- and AI-written text.

By Tom Mitchelhill

Artificial intelligence powerhouse OpenAI has discreetly pulled the pin on its AI-detection software, citing a low rate of accuracy.
The OpenAI-developed AI classifier was first launched on Jan. 31 and aimed to aid users, such as teachers and professors, in distinguishing human-written text from AI-generated text.
However, per the original blog post that announced the launch of the tool, the AI classifier has been shut down as of July 20:

“As of July 20, 2023, the AI classifier is no longer available due to its low rate of accuracy.”
The link to the tool is no longer functional, and the note offered only a brief explanation for the shutdown. However, the company said it was exploring new, more effective ways of identifying AI-generated content.
“We are working to incorporate feedback and are currently researching more effective provenance techniques for text, and have made a commitment to develop and deploy mechanisms that enable users to understand if audio or visual content is AI-generated,” the note read.

OpenAI’s former AI classifier in action. Source: OriginalityAI
From the get-go, OpenAI made it clear the detection tool was prone to errors and could not be considered “fully reliable.”
The company said the tool’s limitations included being “very inaccurate” at verifying text of fewer than 1,000 characters and that it could “confidently” label text written by humans as AI-generated.
Related:Apple has its own GPT AI system but no stated plans for public release: Report
The classifier is the latest of OpenAI’s products to come under scrutiny.
On July 18, researchers from Stanford and UC Berkeley published a study indicating that OpenAI’s flagship product ChatGPT was getting significantly worse with age.

We evaluated #ChatGPT's behavior over time and found substantial diffs in its responses to the *same questions* between the June version of GPT4 and GPT3.5 and the March versions. The newer versions got worse on some tasks. w/ Lingjiao Chen @matei_zaharia https://t.co/TGeN4T18Fd https://t.co/36mjnejERy pic.twitter.com/FEiqrUVbg6

— James Zou (@james_y_zou) July 19, 2023
The researchers found that over the past few months, ChatGPT-4’s ability to accurately identify prime numbers had plummeted from 97.6% to just 2.4%. Both ChatGPT-3.5 and ChatGPT-4 also saw a significant decline in their ability to generate new lines of code.