
News Feed - 2023-07-20 12:07:22

ChatGPT's capabilities are getting worse with age, new study claims

By Tom Mitchelhill

Some of ChatGPT's responses show the model's accuracy has deteriorated over the last few months, and researchers can't figure out why.

OpenAI's artificial intelligence-powered chatbot ChatGPT seems to be getting worse as time goes on, and researchers can't seem to figure out why.


In a July 18 study, researchers from Stanford and UC Berkeley found ChatGPT’s newest models had become far less capable of providing accurate answers to an identical series of questions within the span of a few months.


The study’s authors couldn’t provide a clear answer as to why the AI chatbot’s capabilities had deteriorated.


To test how reliable the different models of ChatGPT were, researchers Lingjiao Chen, Matei Zaharia and James Zou asked the GPT-3.5 and GPT-4 models to solve a series of math problems, answer sensitive questions, write new lines of code and perform spatial reasoning from prompts.

"We evaluated #ChatGPT's behavior over time and found substantial diffs in its responses to the *same questions* between the June version of GPT-4 and GPT-3.5 and the March versions. The newer versions got worse on some tasks," Zou wrote on Twitter on July 19, 2023.


According to the research, in March ChatGPT-4 was capable of identifying prime numbers with a 97.6% accuracy rate. In the same test conducted in June, GPT-4’s accuracy had plummeted to just 2.4%.


In contrast, the earlier GPT-3.5 model had improved on prime number identification within the same time frame.
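The prime-identification test described above can be reproduced in outline. The sketch below is not the study's actual harness; it simply shows how such answers can be scored against a deterministic primality check, with the answer dictionary being illustrative data.

```python
def is_prime(n: int) -> bool:
    """Deterministic ground truth for the evaluation."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def score(model_answers: dict[int, bool]) -> float:
    """Fraction of numbers the model classified correctly."""
    correct = sum(ans == is_prime(n) for n, ans in model_answers.items())
    return correct / len(model_answers)

# Illustrative answers, not real model output: 97 and 89 are prime,
# 91 = 7 * 13 is not.
answers = {97: True, 91: False, 89: True}
accuracy = score(answers)
```

A harness like this makes accuracy changes between snapshots directly comparable, since the question set and the scoring rule stay fixed while only the model's answers vary.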




When it came to generating lines of new code, the abilities of both models deteriorated substantially between March and June.


The study also found that ChatGPT's responses to sensitive questions, including examples touching on ethnicity and gender, later became more curt in refusing to answer.


Earlier iterations of the chatbot provided extensive reasoning for why it couldn’t answer certain sensitive questions. In June however, the models simply apologized to the user and refused to answer.


“The behavior of the ‘same’ [large language model] service can change substantially in a relatively short amount of time,” the researchers wrote, noting the need for continuous monitoring of AI model quality.


The researchers recommended that users and companies relying on LLM services as a component of their workflows implement some form of ongoing monitoring to ensure the chatbot's quality does not silently degrade.
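The monitoring the researchers recommend can be sketched minimally: periodically re-run a fixed evaluation set against the model and compare the accuracy with a stored baseline. In this sketch, `query_model` is a hypothetical stand-in for a real API call, stubbed so the example runs.

```python
def query_model(prompt: str) -> str:
    """Hypothetical model call; replace with a real API client.
    Stubbed here to answer "yes" to everything."""
    return "yes"

# Fixed prompt set with known expected answers.
EVAL_SET = [
    ("Is 17 a prime number? Answer yes or no.", "yes"),
    ("Is 21 a prime number? Answer yes or no.", "no"),
    ("Is 29 a prime number? Answer yes or no.", "yes"),
]

def run_eval() -> float:
    """Return the model's accuracy on the fixed evaluation set."""
    correct = sum(
        query_model(prompt).strip().lower() == expected
        for prompt, expected in EVAL_SET
    )
    return correct / len(EVAL_SET)

def check_for_drift(baseline: float, tolerance: float = 0.05) -> bool:
    """Flag drift if accuracy drops more than `tolerance` below baseline."""
    return (baseline - run_eval()) > tolerance
```

Running `check_for_drift` on a schedule against a baseline recorded at deployment time would surface the kind of regression the study measured, without needing access to the model's internals.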


On June 6, OpenAI unveiled plans to create a team that will help manage the risks that could emerge from a superintelligent AI system, something it expects to arrive within the decade.

