Anthropic’s ‘Mythos’ AI Sparks Security Fears
Anthropic, a leading artificial intelligence company, has developed a new AI model called ‘Mythos’ that shows incredible coding abilities. However, the company is hesitant to release it publicly due to serious security concerns: the model’s ability to find software flaws could be misused for hacking and cyberattacks.
AI Achieves Unprecedented Coding Skills
The ‘Mythos’ model has reportedly demonstrated remarkable performance on several AI benchmarks. It scored 13 points higher than current leading models on SWE-bench and more than 20 points higher on SWE-bench Pro. It also achieved a 20-point improvement on Terminal-Bench. These scores suggest ‘Mythos’ is currently the most capable AI model available for coding tasks.
To understand these scores, think of benchmarks as tests for AI. The higher the score, the better the AI is at performing a specific task — in this case, writing and understanding computer code. For example, SWE-bench measures how well an AI can resolve real issues drawn from open-source software projects.
Why Anthropic is Holding Back the Model
Despite its impressive capabilities, Anthropic plans to keep ‘Mythos’ under wraps for now. Boris Cherny, the head of Claude Code at Anthropic, explained in a tweet that the model is so capable the company cannot release it widely. The concern is that its advanced coding skills could be used for malicious purposes, such as hacking or finding security holes in software.
Anthropic’s ‘Project Glass Wing’ was formed specifically in response to the observed abilities of this new frontier model. The company believes the model could significantly change the field of cybersecurity: AI models have reached a point where they can outperform even highly skilled humans at identifying and exploiting software weaknesses.
‘Mythos’ Finds Thousands of High-Severity Vulnerabilities
Early tests of ‘Mythos’ have been startling. The preview version alone has already uncovered thousands of serious security vulnerabilities. These include flaws found in every major operating system and web browser. This highlights the model’s advanced ability to analyze complex code and discover hidden problems.
For instance, the ‘Mythos’ preview found a vulnerability in OpenBSD that had been present for 27 years. It also identified a 16-year-old flaw in FFmpeg, a popular media-processing library. In a more complex demonstration, the AI autonomously found and chained together several vulnerabilities within the Linux kernel, the core of the Linux operating system.
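To make the kind of flaw at stake concrete, here is a minimal, purely illustrative C sketch of a classic memory-safety bug that an AI code auditor might flag. This is not one of the vulnerabilities the preview reportedly found; the function names are invented for this example.

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Illustrative only: a textbook buffer overflow and its fix.
   Not an actual 'Mythos' finding. */

/* Vulnerable: strcpy performs no bounds check, so any src longer
   than 15 characters writes past the end of the 16-byte dst. */
void copy_unsafe(char dst[16], const char *src) {
    strcpy(dst, src);
}

/* Fixed: snprintf bounds the write to 16 bytes (15 characters
   plus a terminating NUL) and always NUL-terminates. */
void copy_safe(char dst[16], const char *src) {
    snprintf(dst, 16, "%s", src);
}
```

Decades-old bugs like the OpenBSD and FFmpeg examples are often exactly this shape: an unchecked copy buried in rarely exercised code paths that humans overlooked but exhaustive automated analysis can surface.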
The Broader Implications for AI Safety
The development of ‘Mythos’ raises important questions about the speed of AI progress and its potential risks. Anthropic emphasizes its commitment to AI safety and security. They view closed-source development as a way to manage potential dangers.
However, the capabilities of ‘Mythos’ represent a significant leap. The company fears that if such a powerful tool were released publicly, it could be used by bad actors to cause widespread harm. The potential fallout for economies, public safety, and national security could be severe.
Restricted Access and Future Plans
Given these risks, Anthropic’s initial plan is to provide access to ‘Mythos’ only to governmental security organizations. This approach aims to ensure the model’s power is used responsibly and for defensive purposes. The company also faces technical challenges: the model is massive and likely requires immense computing power, and Anthropic is already experiencing a GPU (graphics processing unit) shortage that could make serving the model to the public difficult even if it chose to do so.
Why This Matters
The ‘Mythos’ situation shows the double-edged nature of advanced AI. On one hand, AI models like ‘Mythos’ can help us find and fix bugs in software much faster than humans ever could, making our digital world safer. On the other hand, their ability to find vulnerabilities also means they could be used by hackers to break into systems. Anthropic’s decision highlights the growing challenge of balancing AI innovation with public safety. As AI gets smarter, companies and governments will need to work together to create safeguards against potential misuse.
Source: Earth SHATTERING New AI Model (YouTube)