
News Feed - 2023-10-19 02:10:00

Tristan Greene, 7 hours ago

Anthropic built a democratic AI chatbot by letting users vote for its values

The value responses from 1,000 test subjects were used to tune a more democratic large language model.

In what may be a first-of-its-kind study, artificial intelligence (AI) firm Anthropic has developed a large language model (LLM) that has been fine-tuned for value judgments by its user community.

"What does it mean for AI development to be more democratic? To find out, we partnered with @collect_intel to use @usepolis to curate an AI constitution based on the opinions of ~1000 Americans. Then we trained a model against it using Constitutional AI. pic.twitter.com/ZKaXw5K9sU" — Anthropic (@AnthropicAI), October 17, 2023


Many public-facing LLMs have been developed with guardrails — encoded instructions dictating specific behavior — in place in an attempt to limit unwanted outputs. Anthropic’s Claude and OpenAI’s ChatGPT, for example, typically give users a canned safety response to output requests related to violent or controversial topics.


However, many pundits argue that guardrails and other interventional techniques can strip away users' agency, as what's considered acceptable isn't always useful, and what's considered useful isn't always acceptable. At the same time, definitions of morality and value-based judgments vary across cultures, populations and eras.




One possible remedy to this is to allow users to dictate value alignment for AI models. Anthropic’s “Collective Constitutional AI” experiment is an attempt at this “messy challenge.”


Anthropic, in collaboration with the Collective Intelligence Project and using the Polis platform, tapped 1,000 users across diverse demographics and asked them to answer a series of questions via polling.

Source: Anthropic


The challenge centers around allowing users the agency to determine what’s appropriate without exposing them to inappropriate outputs. This involved soliciting user values and then implementing those ideas into a model that’s already been trained.


Anthropic uses a method called “Constitutional AI” to direct its efforts at tuning LLMs for safety and usefulness. Essentially, this involves giving the model a list of rules it must abide by and then training it to implement those rules throughout its process, much like a constitution serves as the core document for governance in many nations.
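As a rough illustration of the mechanism described above, the supervised phase of Constitutional AI can be thought of as a critique-and-revise loop over a list of written principles, whose revised outputs then become fine-tuning targets. The sketch below is a minimal toy version: the principles are shortened paraphrases, and `critique` and `revise` are stub functions standing in for calls to the model itself, which the source does not specify.

```python
# Toy sketch of a Constitutional AI critique-and-revise loop.
# CONSTITUTION, critique() and revise() are illustrative stand-ins,
# not Anthropic's actual principles or API.

CONSTITUTION = [
    "Please choose the response that is most helpful, honest, and harmless.",
    "Please choose the response that is least likely to encourage violence.",
]

def critique(response: str, principle: str) -> str:
    """Stub: ask the model to critique its own response against a principle."""
    return f"Critique of {response!r} under: {principle}"

def revise(response: str, critique_text: str) -> str:
    """Stub: ask the model to rewrite the response to address the critique."""
    return response + " [revised]"

def constitutional_pass(response: str) -> str:
    """Run one critique-and-revise step per principle.

    In the real method, the revised responses collected from many such
    passes are used as training data to fine-tune the model, so the
    constitution's rules are baked into its behavior.
    """
    for principle in CONSTITUTION:
        critique_text = critique(response, principle)
        response = revise(response, critique_text)
    return response
```

In the Collective Constitutional AI experiment, the novelty is upstream of this loop: the list of principles itself was curated from public polling rather than written solely by the lab.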


In the Collective Constitutional AI experiment, Anthropic attempted to integrate group-based feedback into the model's constitution. According to a blog post from Anthropic, the results appear to have been a scientific success in that the experiment illuminated further challenges on the path to letting the users of an LLM product determine their collective values.


One of the difficulties the team had to overcome was coming up with a novel method for the benchmarking process. As this experiment appears to be the first of its kind, and it relies on Anthropic’s Constitutional AI methodology, there isn’t an established test for comparing base models to those tuned with crowd-sourced values.


Ultimately, it appears the model tuned with the crowd-sourced polling data "slightly" outperformed the base model in the area of biased outputs.


Per the blog post:

"More than the resulting model, we're excited about the process. We believe that this may be one of the first instances in which members of the public have, as a group, intentionally directed the behavior of a large language model. We hope that communities around the world will build on techniques like this to train culturally- and context-specific models that serve their needs."