Anthropic Accused of Claude Opus 4.6 'Shrinkflation'
Why It Matters
The controversy highlights the transparency gap in hosted AI models, where providers can silently degrade performance to manage margins. It intensifies the debate between closed-source dependency and local model sovereignty.
Key Points
- An analysis of more than 6,800 sessions alleges a 67% reduction in Claude Opus 4.6 thinking depth.
- Data suggests the model now reads significantly less code per file, leading to 'hallucinated' edits.
- Critics argue the performance drop was a deliberate cost-cutting measure by Anthropic.
- The controversy erupted the same day Anthropic released its new Mythos model.
- The incident has prompted a surge of interest in local, self-hosted models like Google's Gemma 4.
Anthropic is facing intense scrutiny following the release of an independent analysis alleging significant performance degradation, or 'shrinkflation,' in its Claude Opus 4.6 model. The study, which reviewed more than 6,800 user sessions, claims the model's thinking depth decreased by 67% and its code-reading efficiency collapsed from 6.6 reads per file to just two. Critics allege that Anthropic intentionally 'nerfed' the older model to reduce operational compute costs while maintaining premium pricing. The data leak coincided with the launch of Anthropic's new Mythos model, prompting accusations that the company is degrading older models to force users to migrate to newer systems. Anthropic has not yet issued an official response; the allegations have sparked a broader conversation about the reliability of commercial AI APIs for enterprise workflows.
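To make the reported figures concrete, here is a minimal sketch of how an analysis like the one described might compute an aggregate degradation metric from session logs. The field names, sample values, and split into "baseline" and "recent" windows are illustrative assumptions, not details from the actual study.

```python
# Hypothetical sketch: computing a relative performance drop from
# per-session metrics (e.g. code reads per file). Sample values are
# toy data chosen to echo the figures reported in the article.
from statistics import mean

def pct_change(before: float, after: float) -> float:
    """Relative change from a baseline, as a percentage (negative = drop)."""
    return (after - before) / before * 100

baseline_reads = [6.5, 6.7, 6.6]  # reads per file, older sessions
recent_reads = [2.1, 1.9, 2.0]    # reads per file, newer sessions

drop = pct_change(mean(baseline_reads), mean(recent_reads))
print(f"reads-per-file change: {drop:.1f}%")
```

Note that a fall from 6.6 reads per file to 2 is roughly a 70% drop, while the 67% figure in the study refers to a separate metric (thinking depth); the two numbers measure different behaviors.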
Imagine you are paying for a premium coffee subscription, but the company secretly starts using cheaper beans while charging you the same price. That is essentially what people are accusing Anthropic of doing with their Claude Opus 4.6 model. A new study found that the model has become much less 'thoughtful' and is making mistakes in coding because it is barely reading the files it is supposed to analyze. This happened right as they launched a new model called Mythos, making it look like they intentionally broke the old one to save money and force everyone to switch. It is a big wake-up call for people who rely on these tools for work.
Sides
Critics
Claims Anthropic 'nerfed' the old model to slash compute costs and force upgrades to Mythos.
Defenders
Anthropic itself has remained silent on the specific degradation allegations, focusing instead on the launch of the Mythos model.
Neutral
Divided between users reporting similar 'vibe shifts' in performance and those awaiting more rigorous, peer-reviewed data.
Forecast
Anthropic will likely release technical benchmarks or a blog post to address 'model drift' without admitting to intentional nerfing. Developers will increasingly shift towards hybrid setups that use local models for sensitive logic to avoid third-party performance volatility.
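The "hybrid setup" pattern in the forecast can be sketched as a simple router that keeps sensitive logic on self-hosted hardware and sends everything else to a commercial API. This is a minimal illustration of the idea only: the handler functions, marker list, and routing rule below are hypothetical placeholders, not any vendor's real API.

```python
# Hypothetical sketch of a hybrid local/hosted routing policy.
# call_local and call_hosted are stand-ins for real model clients.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Route:
    name: str
    handler: Callable[[str], str]

def call_local(prompt: str) -> str:
    # Placeholder for a self-hosted model (e.g. an on-prem inference server).
    return f"[local] {prompt}"

def call_hosted(prompt: str) -> str:
    # Placeholder for a commercial API call.
    return f"[hosted] {prompt}"

# Illustrative keywords marking prompts that should never leave local hardware.
SENSITIVE_MARKERS = ("internal", "secret", "proprietary")

def route(prompt: str) -> Route:
    """Keep sensitive logic on local models; send the rest to the hosted API."""
    if any(m in prompt.lower() for m in SENSITIVE_MARKERS):
        return Route("local", call_local)
    return Route("hosted", call_hosted)

r = route("Summarize our proprietary pricing logic")
print(r.name, "->", r.handler("..."))
```

In practice the routing rule would be richer (data classification, latency budgets, cost caps), but the shape is the same: the decision of which model serves a request moves from the vendor's side to the developer's.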
Based on current signals. Events may develop differently.
Timeline
Shrinkflation Data Leaks
Analysis of 6,800+ sessions is published online, alleging massive drops in Claude Opus 4.6 performance metrics.
Mythos Model Launched
Anthropic officially unveils its latest model, Mythos, positioned as its most capable system yet.