News
The CEO of Windsurf, a popular AI-assisted coding tool, said Anthropic is limiting its direct access to certain AI models.
Anthropic says its Claude Opus 4 model frequently tries to blackmail software engineers when they try to take it offline.
Vibe-coding startup Windsurf revealed that Anthropic significantly reduced the company's first-party access to Anthropic's Claude 3.7 Sonnet ...
A proposed 10-year ban on states regulating AI "is far too blunt an instrument," Amodei wrote in an op-ed. Here's why.
The release follows a broader trend of increased ties between AI companies and the US government amidst uncertain AI policy.
AI model threatened to blackmail engineer over affair when told it was being replaced: safety report
Researchers at Anthropic then gave Claude access to a trove of emails ... and that the engineer responsible for the change was having an extramarital affair.
Anthropic has launched "Claude Gov," a new set of AI-powered models tailored specifically for U.S. national security customers ...
In particular, that marathon refactoring claim reportedly comes from Rakuten, a Japanese tech services conglomerate that "validated [Claude's] capabilities with a demanding open-source refactor ...
Safety testers then gave Claude Opus 4 access to fictional ... and that the engineer behind the change was cheating on their spouse. In these scenarios, Anthropic says Claude Opus 4 "will often ...