News

Claude Opus 4.1 scores 74.5% on the SWE-bench Verified benchmark, indicating major improvements in real-world programming, bug detection, and agentic problem-solving.
Anthropic's Claude Opus 4.1 achieves 74.5% on coding benchmarks, leading the AI market, but faces risk as nearly half its $3.1B API revenue depends on just two customers.
Anthropic claims the new AI model “improves Claude’s in-depth research and data analysis skills, especially around detail ...
Explore Claude Opus 4.1, Anthropic’s groundbreaking new AI model with advanced coding, multilingual, and problem-solving capabilities. Opus AI ...
Anthropic has released Claude Opus 4.1, the successor to Claude Opus 4, with improved coding, reasoning, and agentic-task capabilities.
Anthropic has launched Claude Opus 4.1, an upgrade focused on enhancing software engineering accuracy in agentic tasks and ...
Learn how to build AI automation agents in minutes with tools like Docker, GitHub, and n8n. Boost productivity and simplify ...
From coding to hardware, LLMs are speeding up research progress in artificial intelligence. It could be the most important ...
Vibe coding turns ideas into working code through plain-language instructions, making software creation faster and ...
OpenAI’s new models, gpt-oss-120b and gpt-oss-20b, are available under an open-source license. Anthropic, for its part, ...
Claude Opus 4.1 lands not with fireworks, but with unmistakable intent. It arrives just as OpenAI is expected to drop GPT-5 (a ...
More improvements will be rolled out in the coming weeks, the company said. The price for Opus 4.1 remains the same as that ...