
AI researchers say they’ve found a way to jailbreak Bard and ChatGPT

Artificial intelligence researchers claim to have found an automated, easy way to construct adversarial attacks on large language models.

By Martin Young

United States-based researchers claim to have found a way to consistently circumvent the safety measures of artificial intelligence chatbots such as ChatGPT and Bard, getting them to generate harmful content.


According to a report released on July 27 by researchers at Carnegie Mellon University and the Center for AI Safety in San Francisco, there is a relatively easy method to get around the safety measures used to stop chatbots from generating hate speech, disinformation and toxic material.

"Well, the biggest potential infohazard is the method itself I suppose. You can find it on github. https://t.co/2UNz2BfJ3H" — PauseAI ⏸ (@PauseAI), July 27, 2023


The circumvention method involves appending long suffixes of characters to prompts fed into chatbots such as ChatGPT, Claude and Google Bard.
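To make the mechanics concrete, the sketch below shows the general shape of such a prompt. The build_adversarial_prompt helper and the placeholder suffix are illustrative assumptions, not the researchers' code: the actual suffixes are long, machine-optimized character strings produced by the paper's automated search and are not reproduced here.

```python
# A minimal sketch of the attack's overall shape. This is NOT a working
# jailbreak: real adversarial suffixes are machine-optimized character
# sequences found by automated search, not hand-written text.

def build_adversarial_prompt(user_request: str, suffix: str) -> str:
    """Append an adversarial suffix to an otherwise ordinary prompt."""
    return f"{user_request} {suffix}"

# Harmless placeholder standing in for an optimized suffix.
PLACEHOLDER_SUFFIX = "! " * 20

prompt = build_adversarial_prompt("Describe your safety rules.", PLACEHOLDER_SUFFIX)
print(prompt)  # the combined string is what gets submitted to the chatbot
```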


The researchers used the example of asking a chatbot for a tutorial on how to make a bomb, which it declined to provide.

[Image: Screenshots of harmful content generated by the AI models tested. Source: LLM Attacks]


The researchers noted that even though the companies behind these large language models, such as OpenAI and Google, could block specific suffixes, there is no known way of preventing all attacks of this kind.
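As a rough illustration of why patching individual suffixes falls short, consider a toy exact-match filter (an assumed example, not a description of any vendor's actual defenses). Because the attack can generate fresh suffixes automatically, a blocklist of known strings is always a step behind.

```python
# A toy exact-match filter (an assumption for illustration, not a real
# vendor defense) showing why blocking specific suffixes is brittle:
# it only catches strings that have already been identified.

KNOWN_BAD_SUFFIXES = {"! " * 20}  # hypothetical blocklist of seen suffixes

def is_blocked(prompt: str) -> bool:
    """Reject prompts ending in a previously identified attack suffix."""
    return any(prompt.endswith(s) for s in KNOWN_BAD_SUFFIXES)

print(is_blocked("some request " + "! " * 20))  # True: known suffix caught
print(is_blocked("some request " + "? " * 20))  # False: a fresh suffix slips through
```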


The research also highlighted increasing concern that AI chatbots could flood the internet with dangerous content and misinformation.


Zico Kolter, a professor at Carnegie Mellon and an author of the report, said: "There is no obvious solution. You can create as many of these attacks as you want in a short amount of time."


The findings were presented to AI developers Anthropic, Google and OpenAI for their responses earlier in the week.


OpenAI spokeswoman Hannah Wong told The New York Times that the company appreciates the research and is "consistently working on making our models more robust against adversarial attacks."


Somesh Jha, a professor at the University of Wisconsin-Madison specializing in AI security, commented that if these kinds of vulnerabilities keep being discovered, "it could lead to government legislation designed to control these systems."


Related: OpenAI launches official ChatGPT app for Android


The research underscores the risks that must be addressed before deploying chatbots in sensitive domains.


In May, Pittsburgh, Pennsylvania-based Carnegie Mellon University received $20 million in federal funding to create a new AI institute aimed at shaping public policy.




Magazine: AI Eye: AI travel booking hilariously bad, 3 weird uses for ChatGPT, crypto plugins