1M Context Window Now Generally Available for Claude Opus & Sonnet 4.6
Hey everyone! Big news from the AI front: Anthropic has just announced that the groundbreaking 1M token context window for both Claude Opus 4.6 and Sonnet 4.6 is now generally available. This isn't just a minor upgrade; it's a massive leap forward, giving these powerful models the ability to process an unprecedented amount of information, all at standard pricing. No more paying extra for those super-long contexts! This update opens up a world of possibilities for developers and businesses looking to build more sophisticated and capable AI applications.
What It's For
So, what exactly does a 1M context window mean for you? Imagine feeding an AI model an entire book, thousands of pages of legal documents, or a full codebase – and having it understand, reason, and respond based on all that information simultaneously. That's precisely what 1M tokens allows. It's roughly equivalent to processing over 750,000 words, or around 3,000 standard book pages, in a single go.
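The word and page figures above fall out of a common rule of thumb, roughly 0.75 English words per token. This is a heuristic, not an official tokenizer measurement, and the 250-words-per-page figure below is an assumption; real counts vary by content:

```python
# Back-of-envelope conversions for a 1M-token context window.
# WORDS_PER_TOKEN is a common heuristic for English prose, not an
# exact tokenizer ratio; WORDS_PER_PAGE assumes a "standard" book page.

WORDS_PER_TOKEN = 0.75
WORDS_PER_PAGE = 250

def tokens_to_words(tokens: int) -> int:
    """Approximate how many English words fit in a token budget."""
    return int(tokens * WORDS_PER_TOKEN)

def tokens_to_pages(tokens: int) -> int:
    """Approximate how many book pages fit in a token budget."""
    return tokens_to_words(tokens) // WORDS_PER_PAGE

print(tokens_to_words(1_000_000))  # 750000 words
print(tokens_to_pages(1_000_000))  # 3000 pages
```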
Beyond just text, this general availability significantly boosts media processing capabilities. You can now send up to 600 images or PDF pages per request – a substantial 6x increase from the previous limit of 100. This is a game-changer for tasks involving extensive visual data, detailed reports, or large multi-modal datasets, enabling Claude to retain the fidelity of crucial work without needing to compact context prematurely.
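To make the per-request limit concrete, here is a minimal sketch of how a multi-image user message could be assembled in the Messages API content-block shape. It only builds the payload dict, nothing is sent; the 600-attachment cap comes from this announcement, and the helper name is hypothetical:

```python
import base64

MAX_ATTACHMENTS = 600  # per-request image/PDF-page limit stated in this post

def build_image_message(prompt: str, png_blobs: list[bytes]) -> dict:
    """Assemble a Messages-API-style user message with base64 image blocks."""
    if len(png_blobs) > MAX_ATTACHMENTS:
        raise ValueError(f"at most {MAX_ATTACHMENTS} images/PDF pages per request")
    content = [
        {
            "type": "image",
            "source": {
                "type": "base64",
                "media_type": "image/png",
                "data": base64.b64encode(blob).decode("ascii"),
            },
        }
        for blob in png_blobs
    ]
    content.append({"type": "text", "text": prompt})
    return {"role": "user", "content": content}

# Three images plus one text block:
msg = build_image_message("Summarize these charts.", [b"\x89PNG..."] * 3)
```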
Why It Matters
This isn't just about handling more data; it's about handling it better and more affordably. A key highlight of this release is that standard pricing now applies across the full 1M window for both Opus 4.6 and Sonnet 4.6, meaning there's no long-context premium. Opus 4.6 is priced at an accessible $5 per million input tokens and $25 per million output tokens, while Sonnet 4.6 comes in at $3 per million input tokens and $15 per million output tokens. You can find more details on Claude API Pricing directly.
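With no long-context premium, cost estimation becomes simple linear arithmetic over the quoted rates. A quick sketch using the prices stated above (check the pricing page for current rates):

```python
# Per-million-token USD prices quoted in this post.
PRICING = {
    "opus-4.6":   {"input": 5.00,  "output": 25.00},
    "sonnet-4.6": {"input": 3.00,  "output": 15.00},
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimate request cost in USD at the flat (no-premium) rates."""
    p = PRICING[model]
    return (input_tokens / 1_000_000) * p["input"] \
         + (output_tokens / 1_000_000) * p["output"]

# A full 1M-token input with a 4K-token reply on Sonnet 4.6:
print(round(estimate_cost("sonnet-4.6", 1_000_000, 4_000), 2))  # 3.06
```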
But capacity isn't enough without performance. Anthropic notes that Opus 4.6 truly shines here, scoring an impressive 78.3% on MRCR v2 at 1M context, making it the highest-performing frontier model at this context length. The model not only remembers the details but can also accurately reason across them. This means less engineering work for you: the need for lossy summarization or constant context clearing is dramatically reduced, allowing the full conversation or document to remain intact.
How Developers and Teams Can Use It
The implications of this expanded context window are huge across use cases. For developers, especially those using Claude Code, the 1M context removes a constant pain point. For Max, Team, and Enterprise users with Opus 4.6, it's now included, resulting in fewer compactions and more intact conversations during complex debugging or code review sessions. Imagine feeding an entire large diff and getting higher-quality reviews without breaking it into chunks.
In legal tech, attorneys can now cross-reference 400-page deposition transcripts or analyze entire case files in a single pass, enabling agents to surface key connections and deliver materially higher-quality answers. For scientific research, this means agentic systems can synthesize hundreds of papers, mathematical frameworks, and codebases simultaneously, accelerating fundamental and applied physics research. Even for in-house lawyers, analyzing five turns of a 100-page partnership agreement in one session becomes possible, providing a full view of a negotiation without losing track of changes. For more technical guidance, check out the Claude Context Windows Documentation.
Where You Get It
Ready to dive in? The 1M context window is available today! You can access it natively on the Claude Platform, through Microsoft Azure Foundry, and on Google Cloud’s Vertex AI. If you've previously been using a beta header for longer contexts, don't worry – it's now ignored, so no code changes are required. Requests over 200K tokens will simply work automatically. This seamless integration means you can start leveraging the full power of a 1M token context window without any additional setup headaches.
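If your client code previously set a long-context beta header, it can simply be dropped. A sketch of the request kwargs you would pass to the SDK's message-creation call, showing that nothing special is needed past 200K tokens (the model identifier here is an assumption for illustration; the payload is built but not sent):

```python
# Before GA, long-context calls required an "anthropic-beta" header;
# per this announcement it is now ignored, so plain kwargs suffice.

def long_context_request(model: str, document: str, question: str) -> dict:
    """Build kwargs for a messages-create call; no beta flag required."""
    return {
        "model": model,
        "max_tokens": 4096,
        "messages": [
            {"role": "user", "content": f"{document}\n\nQuestion: {question}"}
        ],
        # Note: no extra_headers / beta flag, even for >200K-token inputs.
    }

kwargs = long_context_request(
    "claude-opus-4-6",          # hypothetical model id for illustration
    "<entire codebase here>",
    "Where is authentication handled?",
)
```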
This announcement truly empowers users to tackle previously impossible tasks with AI, from complex financial analyses to deep scientific discovery and sophisticated legal reviews. Stay tuned to the Product Announcements Blog Category for more updates.
Read more: Context Windows Documentation for full details.