OpenAI shuts down AI detector due to low accuracy
Artificial intelligence powerhouse OpenAI has quietly shut down its AI detection software, citing its low rate of accuracy.
The AI classifier, developed by OpenAI, first launched on Jan. 31 and was intended to help users such as teachers and professors distinguish human-written text from AI-generated text.
However, according to an update to the original blog post that announced the tool's launch, the AI classifier was discontinued as of July 20:
“As of July 20, 2023, the AI classifier is no longer available due to its low rate of accuracy.”
The link to the tool is no longer functional, and the note offered only a brief rationale for why the classifier was discontinued. The company did say, however, that it is looking for new, more effective ways to identify AI-generated content.
“We are working to incorporate feedback and are currently researching more effective provenance techniques for text, and remain committed to developing and deploying mechanisms that enable users to understand if audio or visual content is AI-generated,” the note said.
From the outset, OpenAI made it clear that the detection tool was error-prone and could not be considered “completely reliable”.
The company said the limitations of its AI detection tool included being “highly inaccurate” at verifying text shorter than 1,000 characters, along with the possibility that it could “confidently” label human-written text as AI-generated.
Related: Apple has its own GPT AI system but no announced plans for public release: Report
The classifier is the latest OpenAI product to come under scrutiny.
On July 18, researchers from Stanford and UC Berkeley published a study that found OpenAI’s flagship product, ChatGPT, had significantly worsened over time.
We evaluated #ChatGPT's behavior over time and found substantial differences in its responses to the *same questions* between the June versions of GPT4 and GPT3.5 and the March versions. The newer versions got worse on some tasks. w/ Lingjiao Chen @matei_zaharia https://t.co/TGeN4T18Fd https://t.co/36mjnejERy pic.twitter.com/FEiqrUVbg6
— James Zou (@james_y_zou) 19 July 2023
The researchers found that over the past few months, GPT-4’s ability to correctly identify prime numbers had dropped from 97.6% to just 2.4%. Furthermore, both GPT-3.5 and GPT-4 showed a significant decline in their ability to generate new lines of code.
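For readers curious how such a degradation check can be run in practice, below is a minimal sketch of a prime-identification spot check against the OpenAI API. It assumes the openai Python package (v1.x) and an OPENAI_API_KEY in the environment, and the list of test integers is a hypothetical sample; this is an illustration of the general approach, not the researchers' actual evaluation harness.

```python
# Minimal sketch: ask a chat model whether each number is prime and score its answers.
# Assumes openai>=1.0 and sympy are installed, and OPENAI_API_KEY is set in the environment.
from openai import OpenAI
from sympy import isprime

client = OpenAI()

# Hypothetical sample of test integers, not the dataset used in the study.
test_numbers = [17077, 20303, 9551, 10007, 1729]

correct = 0
for n in test_numbers:
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user",
                   "content": f"Is {n} a prime number? Answer only 'yes' or 'no'."}],
        temperature=0,
    )
    answer = resp.choices[0].message.content.strip().lower()
    # Count the answer as correct if it matches the ground-truth primality.
    if ("yes" in answer) == isprime(n):
        correct += 1

print(f"Accuracy on {len(test_numbers)} questions: {correct / len(test_numbers):.1%}")
```

Running the same script against snapshots of a model from different dates is the basic idea behind comparing behavior over time, although a rigorous comparison requires a much larger question set.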
AI Eye: AIs trained on AI content go MAD, is Threads a loss leader for AI data?