News

Anthropic retired its Claude 3 Sonnet model. Several days later, a post on X invited people to celebrate it: "if you're ...
Anthropic launches automated AI security tools for Claude Code that scan code for vulnerabilities and suggest fixes, ...
AI is a relatively new tool, and despite its rapid deployment in nearly every aspect of our lives, researchers are still ...
Anthropic partners with the U.S. government to offer AI tools like Claude for as little as $1, enhancing national security ...
In one instance, Claude is said to have solved 11 of 20 progressively harder problems in just 10 minutes, and after another ...
Claude Opus 4.1 scores 74.5% on the SWE-bench Verified benchmark, indicating major improvements in real-world programming, bug detection, and agent-like problem solving.
Anthropic found that pushing AI to "evil" traits during training can help prevent bad behavior later — like giving it a ...
Artificial intelligence (AI) models from OpenAI, Google and Anthropic have been added to a government purchasing system, ...
Anthropic is using AI agents to audit its AI models like Opus 4 for hidden flaws such as misinformation bias and malicious ...
Anthropic is intentionally exposing its AI models like Claude to evil traits during training to make them immune to these ...
The US government’s central purchasing arm is adding OpenAI, Alphabet Inc.’s Google and Anthropic to a list of approved ...
Act now and Uncle Sam will throw in ChatGPT Enterprise for your agency for just $1. It's just become a lot easier for US ...