Anthropic’s Claude Sonnet 4 now supports a 1 million-token context window and memory feature

Anthropic has expanded Claude Sonnet 4’s context window from 200,000 to 1 million tokens, surpassing OpenAI’s most recent GPT-5 model, which has a 192,000-token limit, and moving closer to other large-context leaders such as Google’s Gemini 2.5 Pro (up to 2 million tokens) and Meta’s Llama 4 Scout (up to 10 million tokens).

This fivefold increase enables developers to send entire codebases of more than 75,000 lines in a single API request, supporting more complex problem-solving workflows. The long-context option is currently in public beta via the Anthropic API and Amazon Bedrock, with Google Cloud Vertex AI support coming soon. For now it is limited to Tier 4 developers with custom rate limits, with a broader rollout planned in the coming weeks. Pricing increases for prompts above 200,000 tokens, but batch processing and prompt caching can help lower costs.
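
As a rough illustration of what that workflow could look like, the sketch below sends a concatenated codebase to Claude Sonnet 4 through the Anthropic Python SDK and caches the large prompt prefix to reduce repeat-request costs. The model id, the beta header value for the long-context option, and the repository path are assumptions for illustration, not confirmed values from the announcement.

```python
# Hypothetical sketch: analyzing a large codebase with Claude Sonnet 4
# via the Anthropic Python SDK, using prompt caching on the big prefix.
import pathlib
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Concatenate a (hypothetical) repository's source files into one large prompt.
codebase = "\n\n".join(
    f"### {path}\n{path.read_text(errors='ignore')}"
    for path in pathlib.Path("my_repo").rglob("*.py")
)

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # assumed Sonnet 4 model id
    max_tokens=4096,
    # Long-context public beta flag; the exact header value is an assumption.
    extra_headers={"anthropic-beta": "context-1m-2025-08-07"},
    system=[
        {
            "type": "text",
            "text": "You are reviewing the following codebase:\n" + codebase,
            # Cache the large, unchanging prefix so follow-up requests
            # are billed at the cheaper cached-input rate.
            "cache_control": {"type": "ephemeral"},
        }
    ],
    messages=[
        {
            "role": "user",
            "content": "Summarize the main modules and flag any obvious bugs.",
        }
    ],
)
print(response.content[0].text)
```

For offline workloads that do not need immediate responses, submitting many such requests through the Message Batches API is another way to bring per-token costs down.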

Anthropic has also introduced a memory feature that lets Claude reference information from previous conversations. Enabled via a setting, it is currently available only to Enterprise, Team, and Max subscribers, with plans to extend it to other tiers.

by Mauricio B. Holguin


Claude is a next-gen AI assistant built on Anthropic's research into creating helpful, honest, and harmless AI systems. Accessible via a chat interface and API, it handles a wide range of conversational and text-processing tasks. Rated 3.5, its key features include AI-powered responses and chatbot functionality.
