Is Anthropic holding back Mythos to protect the internet — or itself?
Anthropic is limiting the release of Mythos, raising questions about AI safety, the model's impact on the wider internet, and whether the move protects users or the company itself.
Anthropic said this week that it has restricted the release of its latest AI model, Mythos, citing concerns about the model's ability to identify security vulnerabilities in widely used software systems.
Rather than making Mythos broadly available, the company plans to provide access only to a select group of large organisations responsible for critical digital infrastructure, including Amazon Web Services and JPMorgan Chase.
OpenAI is reportedly exploring a similar approach for its own upcoming cybersecurity-focused tools. The stated goal is to allow major enterprises to prepare for and defend against potential threats before such capabilities become accessible to malicious actors. However, questions remain about whether cybersecurity concerns are the only factor behind this strategy.
Dan Lahav, CEO of the AI cybersecurity firm Irregular, previously pointed out that while AI systems can identify vulnerabilities, the real-world impact of those discoveries depends on how they can be exploited, either individually or in combination with other weaknesses.
Anthropic has claimed that Mythos is significantly more capable of identifying and exploiting vulnerabilities than its earlier model, Opus. Still, it is unclear whether Mythos represents a decisive breakthrough in cybersecurity capabilities.
Some in the industry have challenged the notion that such models are uniquely powerful. For example, the AI security startup Aisle said it could replicate many of the results attributed to Mythos with smaller, open-weight models. According to Aisle, cybersecurity outcomes often depend on combining multiple models rather than relying on a single system.
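To make that concrete, here is a hypothetical sketch of what such a multi-model pipeline can look like: one model casts a wide net for candidate vulnerabilities, and a second, independent model verifies each finding before it is reported. The model names and the `ask` stub are illustrative assumptions, not Aisle's actual tooling.

```python
# Hypothetical sketch of a multi-model vulnerability triage pipeline:
# one model proposes candidate findings, a second independently verifies
# them. Model calls are stubbed with canned answers; in practice each
# `ask` would hit a different open-weight model. All names are illustrative.
from dataclasses import dataclass

@dataclass
class Finding:
    location: str
    description: str

def ask(model: str, prompt: str) -> str:
    # Stub standing in for a real model client; replace with actual calls.
    canned = {
        "scanner": "auth.py: SQL query built with string concatenation",
        "verifier": "yes",
    }
    return canned[model]

def propose(source: str) -> list[Finding]:
    # A broad "scanner" model casts a wide net over the code.
    loc, desc = ask("scanner", f"List vulnerabilities in:\n{source}").split(": ", 1)
    return [Finding(loc, desc)]

def verify(f: Finding, source: str) -> bool:
    # An independent "verifier" model re-checks each candidate, filtering
    # the false positives that any single model tends to produce.
    reply = ask("verifier", f"Is this exploitable? {f.description}\n{source}")
    return reply.strip().lower().startswith("yes")

def triage(source: str) -> list[Finding]:
    return [f for f in propose(source) if verify(f, source)]

print(triage("def login(user): ..."))
```

The value of the second pass is statistical: two imperfect models with uncorrelated failure modes produce fewer false positives together than either does alone.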
Beyond security considerations, there may be additional motivations behind limiting access. Restricting advanced models to enterprise customers can strengthen relationships with large organisations while also making it more difficult for competitors to replicate capabilities via distillation, in which smaller models are trained on outputs from more advanced systems.
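Distillation itself is simple to sketch. The classic logit-matching form, shown below in a minimal PyTorch example, trains a small "student" model to imitate the softened output distribution of a larger "teacher"; when a competitor only has API access, the student is instead fine-tuned on the teacher's sampled text, but the principle is the same. The toy model sizes and the single training step here are placeholders, not a description of any lab's actual systems.

```python
# Minimal sketch of knowledge distillation (logit-matching form).
# Both "models" are toy stand-ins; in practice the teacher would be a
# large proprietary model and the student a much smaller one.
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB = 1000                 # toy vocabulary size
teacher = nn.Sequential(nn.Embedding(VOCAB, 256), nn.Linear(256, VOCAB))
student = nn.Sequential(nn.Embedding(VOCAB, 64), nn.Linear(64, VOCAB))

optimizer = torch.optim.Adam(student.parameters(), lr=1e-4)
T = 2.0                      # temperature: softer targets carry more signal

tokens = torch.randint(0, VOCAB, (8, 32))   # dummy batch of token ids

with torch.no_grad():        # the teacher is only queried, never trained
    teacher_logits = teacher(tokens)

student_logits = student(tokens)

# KL divergence between the softened teacher and student distributions:
# the student learns to reproduce the teacher's output behaviour.
loss = F.kl_div(
    F.log_softmax(student_logits / T, dim=-1),
    F.softmax(teacher_logits / T, dim=-1),
    reduction="batchmean",
) * (T * T)

optimizer.zero_grad()
loss.backward()
optimizer.step()
```

Because the teacher is only ever queried for its outputs, imitation of exactly this kind is what restricting access to enterprise customers makes harder.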
David Crawshaw, CEO of the startup exe.dev, suggested in a social media post that such restrictions could help keep cutting-edge models within enterprise environments. According to this perspective, by the time broader access is granted, newer, more advanced versions may already be reserved for enterprise use, reinforcing a cycle that prioritises high-value contracts.
This dynamic reflects a broader trend within the AI industry. Frontier labs are racing to build increasingly powerful models, while other companies compete by combining models or turning to open-source alternatives, often developed at far lower cost. In response, leading AI companies, including Anthropic, Google, and OpenAI, have taken a firmer stance against distillation, and reports indicate that these firms are collaborating to detect and block attempts to replicate their models' behaviour.
Distillation poses a significant challenge to these labs' business models, as it can erode the competitive advantage gained from large-scale investment in AI development. Limiting access to advanced systems, therefore, not only addresses security concerns but also helps protect intellectual property and maintain market positioning.
Whether Mythos genuinely represents a risk to global cybersecurity remains uncertain. However, a controlled rollout to select organisations may help manage potential risks while testing the model's capabilities in real-world environments.
Anthropic did not respond to questions about whether concerns over distillation also influence its release strategy. Still, the approach suggests that the company may be balancing two objectives at once: safeguarding digital infrastructure and protecting its competitive edge.