Claude Opus 4.7: Anthropic's Most Capable Model with xhigh Effort and Adaptive Thinking
Anthropic released Claude Opus 4.7, its most capable generally available model to date. Opus 4.7 reclaims the lead on SWE-bench Verified with 87.6% and scores 64.3% on SWE-bench Pro, the hardest publicly available real-world coding benchmark. The model ships with a new xhigh effort tier, adaptive thinking that replaces fixed reasoning budgets, high-resolution image support jumping to 3.75 megapixels (3× the prior ceiling), and significantly improved long-horizon agentic reliability. Pricing stays at $5 / $25 per million input / output tokens. Opus 4.7 is available today across the Claude API, Amazon Bedrock, Google Vertex AI, Claude Code, GitHub Copilot, Bolt, and other downstream tools.
Key Takeaways
- Opus 4.7 reclaims the frontier: 87.6% on SWE-bench Verified and 64.3% on SWE-bench Pro, the highest scores of any generally available model at launch, beating GPT-5.4 on the hardest coding benchmarks.
- A ~13% jump over Opus 4.6 on internal coding evals and a 12-point CursorBench improvement means the upgrade is felt in everyday IDE work, not just benchmark leaderboards.
- New xhigh effort tier sits between high and max and becomes the recommended default for agentic work, the sweet spot between reasoning depth and latency/cost.
- Fixed thinking budgets are gone: adaptive thinking lets the model decide dynamically when to reason longer, making it faster on simple queries and more deliberate on hard ones with zero configuration.
- Image support tripled to 3.75 megapixels, unlocking much richer multimodal workflows (design review, screenshot debugging, chart interpretation).
- Long-horizon reliability is the headline real-world gain: Bolt's evaluation had Opus 4.7 work coherently for 7.5 hours on a single task, with 3–4 context compressions and no plan drift.
- Pricing stays at $5 / $25 per million tokens, making this a pure capability upgrade rather than a premium tier, available day one across the API, Bedrock, Vertex AI, Claude Code, GitHub Copilot, Bolt, Codex, and more.
Sources & Mentions
5 external resources covering this update
Anthropic releases Claude Opus 4.7, narrowly retaking lead for most powerful generally available LLM
VentureBeat
Anthropic rolls out Claude Opus 4.7, an AI model that is less risky than Mythos
CNBC
Anthropic releases Claude Opus 4.7, concedes it trails unreleased Mythos
Axios
Claude Opus 4.7
Hacker News
Claude Opus 4.7 arrives with better vision, memory, and instruction-following
The New Stack
Claude Opus 4.7: Anthropic Reclaims the Frontier
On April 16, 2026, Anthropic released Claude Opus 4.7, its most capable generally available model to date. The release arrives with a clear positioning: Opus 4.7 is Anthropic's answer to the past few months of frontier-model competition, posting headline benchmark results and, more importantly, delivering measurable real-world gains in the kind of long-horizon, tool-using, agentic coding tasks that the Claude family has become known for.
Benchmark Results
Opus 4.7 posts strong results across the benchmarks that matter for developer work:
- SWE-bench Verified: 87.6%, the highest score of any generally available model at launch
- SWE-bench Pro: 64.3% on the hardest publicly available real-world coding benchmark, where Opus 4.7 outperforms OpenAI's GPT-5.4 and every other major frontier model
- CursorBench: 70% (vs. 58% for Opus 4.6), a 12-point jump on a benchmark calibrated to real IDE tasks
- General coding improvements: Anthropic reports a ~13% improvement over Opus 4.6 on internal coding evaluations
The gap over Opus 4.6 is not subtle. On coding-heavy benchmarks the improvement is roughly one full generational step, and Anthropic has been careful to note that the model was designed specifically for long-horizon agentic work, the category where previous models tended to lose context or compound errors.
What's New in the Model
A New Effort Tier: xhigh
Opus 4.7 introduces a new xhigh effort level that sits between high and max. It is designed as the new sweet spot for agentic work: substantially more reasoning than high, with meaningfully lower latency and cost than max. Anthropic recommends xhigh as the default for most coding and agentic use cases and reserves max for genuinely hard problems where the additional cycles justify the overhead.
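As a rough sketch of how a caller might select the new tier, assuming the effort level is exposed as a top-level request field (the `effort` field name, the `claude-opus-4-7` model string, and the allowed tier names are illustrative assumptions, not a confirmed API shape):

```python
# Sketch: building a Messages-style request payload that selects the
# hypothetical xhigh effort tier. Field names and the model id are
# assumptions based on the announcement, not official documentation.

def build_request(prompt: str, effort: str = "xhigh") -> dict:
    """Build a hypothetical API payload with an explicit effort tier."""
    allowed = {"low", "medium", "high", "xhigh", "max"}
    if effort not in allowed:
        raise ValueError(f"unknown effort tier: {effort}")
    return {
        "model": "claude-opus-4-7",
        "effort": effort,  # xhigh: recommended default for agentic work
        "max_tokens": 4096,
        "messages": [{"role": "user", "content": prompt}],
    }

payload = build_request("Refactor the billing module to remove the cyclic import.")
```

Under this sketch, max would be reserved for the hard cases by passing `effort="max"` explicitly rather than as the default.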
Adaptive Thinking Replaces Fixed Budgets
Fixed thinking budgets are gone. Opus 4.7 uses adaptive thinking, deciding dynamically at each step whether extended reasoning adds value. The model is faster on simple queries and more deliberate on complex ones, without requiring any configuration from the developer. Anthropic notes it is also less prone to overthinking than Opus 4.6, which previously introduced unnecessary latency and cost in agentic chains.
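A sketch of what this removes from client code, assuming earlier requests carried an explicit thinking budget in the style of the existing extended-thinking configuration (the exact field names here are illustrative):

```python
# Before (fixed budget): the caller had to pick a thinking budget up front,
# overpaying on easy queries and capping the model on hard ones.
legacy_request = {
    "model": "claude-opus-4-6",
    "thinking": {"type": "enabled", "budget_tokens": 16_000},
    "messages": [{"role": "user", "content": "Prove the invariant holds."}],
}

# After (adaptive): no thinking configuration at all; the model decides
# per step how long to reason.
adaptive_request = {
    "model": "claude-opus-4-7",
    "messages": [{"role": "user", "content": "Prove the invariant holds."}],
}
```

The practical upshot is one less tuning knob: the same request shape serves both quick lookups and hour-long reasoning blocks.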
High-Resolution Image Support Tripled
The image support ceiling jumps from 1,568 pixels to 2,576 pixels, roughly 3.75 megapixels and more than three times the prior limit. This is a meaningful unlock for multimodal workflows: design review, screenshot-driven debugging, chart interpretation, and UI analysis all benefit directly.
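A minimal sketch of a pre-upload check against the new ceiling, assuming the 2,576-pixel figure applies to an image's longest edge (the announcement does not spell out which dimension is capped, so treat that as an assumption):

```python
# Sketch: checking whether an image fits under a hypothetical 2,576 px
# longest-edge ceiling, and how much to downscale if it does not.

CEILING_PX = 2576  # assumed longest-edge limit for Opus 4.7

def fits_image_ceiling(width: int, height: int, ceiling: int = CEILING_PX) -> bool:
    """True if the image's longest edge is within the ceiling."""
    return max(width, height) <= ceiling

def downscale_factor(width: int, height: int, ceiling: int = CEILING_PX) -> float:
    """Scale factor (<= 1.0) that brings the longest edge under the ceiling."""
    longest = max(width, height)
    return 1.0 if longest <= ceiling else ceiling / longest
```

For example, a 1,920×1,080 screenshot now fits without resizing, while a 4,000×3,000 photo would be scaled by 2576/4000 before upload.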
Long-Horizon Agentic Reliability
Bolt's evaluation reports that Opus 4.7 worked on a previously AI-resistant coding problem for 7.5 hours, with individual reasoning blocks exceeding one hour, and arrived at a legitimate solution without looping or regressing. Context compression is another area of meaningful progress: Opus 4.7 compressed its context three to four times on a single task and continued executing against the original plan each time, a scenario where prior models typically drifted.
Reduced Tool and Subagent Usage by Default
Opus 4.7 reasons more internally and spawns parallel subagents more conservatively than Opus 4.6. Teams that relied on aggressive tool invocations or parallel delegation will need to prompt for that behavior explicitly.
Cybersecurity Safeguards
The release ships with automated cybersecurity safeguard checks built in. Anthropic has also expanded its evaluation suite around misuse resistance, and Opus 4.7 is positioned as a lower-risk model than the unreleased "Mythos" system referenced internally.
Availability and Pricing
Opus 4.7 is available on day one across every major distribution surface:
- Claude API: direct access for developers
- Amazon Bedrock and Google Vertex AI: enterprise cloud integrations
- Claude Code: now the default model, shipped in v2.1.111
- GitHub Copilot: replacing Opus 4.5 and 4.6 in the model picker for Pro+ subscribers
- Bolt, Codex, Mistral Vibe, Cursor: integrated via downstream tools
Pricing remains unchanged from Opus 4.6: $5 per million input tokens and $25 per million output tokens. Given the meaningful capability gains and stable pricing, Anthropic is clearly positioning this as a pure upgrade rather than a premium tier.
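The published rates make per-request cost a one-line calculation; here is a small helper using those numbers (a back-of-the-envelope sketch, not an official SDK utility):

```python
# Sketch: estimating Opus 4.7 API cost at the published rates of
# $5 per million input tokens and $25 per million output tokens.

INPUT_USD_PER_MTOK = 5.00
OUTPUT_USD_PER_MTOK = 25.00

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost for one request's token usage."""
    return (
        input_tokens * INPUT_USD_PER_MTOK
        + output_tokens * OUTPUT_USD_PER_MTOK
    ) / 1_000_000

# e.g. a 200k-token context with an 8k-token response:
# 200k * $5/M + 8k * $25/M = $1.00 + $0.20 = $1.20
cost = estimate_cost(200_000, 8_000)
```

Because the rates match Opus 4.6, any existing cost model carries over unchanged.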
Business Context
The release coincides with Anthropic reaching an annual revenue run-rate of $30 billion, driven heavily by enterprise adoption and the rapid growth of Claude Code. Opus 4.7 is Anthropic's most visible move to reclaim the frontier narrative heading into the next release cycle, and the simultaneous availability across every major developer surface underscores how central the coding use case has become to Anthropic's strategy.