Anthropic Plans to Fund New Generation of AI Benchmarks

Anthropic is introducing a program to finance the creation of new types of benchmarks capable of evaluating the performance and impact of AI models, including generative models such as its own Claude.

Anthropic’s program, which was announced on Monday, would compensate third-party organizations that can, as stated in a blog post, “effectively measure advanced capabilities in AI models.” Interested parties can submit applications, which will be reviewed on a rolling basis.

“Our investment in these evaluations is intended to elevate the entire field of AI safety, providing valuable tools that benefit the whole ecosystem,” the company said in a blog post. “Developing high-quality, safety-relevant evaluations remains challenging, and the demand is outpacing the supply.”

As has been widely noted, AI has a benchmarking problem. The most commonly cited AI benchmarks today do a poor job of capturing how the average person actually uses the systems being tested. Several benchmarks, particularly those released before the advent of modern generative AI, are also old enough that it is questionable whether they still measure what they claim to measure.

Anthropic's proposed solution, a high-level one that is harder than it sounds, is to develop demanding benchmarks with an emphasis on AI safety and societal implications, using new tools, infrastructure, and methods.

The company specifically requests evaluations that assess a model’s ability to carry out cyberattacks, “enhance” weapons of mass destruction (e.g., nuclear weapons), and manipulate or deceive people (for example, through deepfakes or disinformation). Anthropic says it is committed to building an “early warning system” for identifying and assessing AI risks related to national security and defense, although the blog post does not specify what such a system might involve.

Anthropic also plans to use the new program to fund research into benchmarks and “end-to-end” tasks that probe AI’s potential for aiding scientific research, conversing in multiple languages, and mitigating ingrained biases, as well as self-censoring toxicity.

To achieve this, Anthropic envisions new platforms that allow subject-matter experts to develop their own evaluations, along with large-scale trials of models involving “thousands” of users. The company has hired a full-time coordinator for the program and says it may purchase or expand projects it believes have the potential to scale.

“We offer a range of funding options tailored to the needs and stage of each project,” Anthropic writes in the post, though an Anthropic spokesperson declined to provide further details about those options. “Teams will have the opportunity to interact directly with Anthropic’s domain experts from the frontier red team, fine-tuning, trust and safety, and other relevant teams.”

Anthropic’s effort to support new AI benchmarks is a laudable one, assuming sufficient funding and staffing are behind it. But given the company’s commercial ambitions in the AI race, it may be difficult to trust entirely.

Anthropic is fairly transparent about the fact that it wants certain evaluations it funds to align with the AI safety classifications it developed (with some input from third parties such as the nonprofit AI research organization METR). That is well within the company’s prerogative. But it may also force applicants to the program to accept definitions of “safe” or “risky” AI that they might not agree with.

A portion of the AI community is also likely to take issue with Anthropic’s references to “catastrophic” and “deceptive” AI risks, such as nuclear weapons risks. Many experts say there is little evidence to suggest that AI as we know it will gain world-changing, human-outsmarting capabilities anytime soon, if ever. Claims of imminent “superintelligence” serve only to distract from the pressing AI regulatory issues of the day, such as AI’s tendency to hallucinate, these experts argue.

Anthropic states in its post that it believes the program can be “a catalyst for progress towards a future where comprehensive AI evaluation is an industry standard.” That is a mission many open, corporate-unaffiliated efforts to build better AI benchmarks can identify with. But it remains to be seen whether those efforts will be willing to join forces with an AI vendor whose ultimate loyalty lies with its shareholders.

Source: TechCrunch
