OVEX TECH
Technology & AI

AI Model’s Cyber Risk Spurs Financial Sector Alarm

Top financial officials are sounding the alarm about a new artificial intelligence model called Mythos, which they believe could pose a significant cybersecurity threat to the financial industry. Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell recently held an emergency meeting with Wall Street leaders to discuss the growing concern. The meeting highlighted a new era of heightened cybersecurity risk, with Mythos identified as a potential major danger.

Adding to the concern, there are reports that OpenAI is developing a similar, highly advanced model. Like Mythos, this new model is reportedly being tested by a select group of companies due to its powerful cybersecurity features. While some speculate this is OpenAI’s long-awaited “Spud” model, much of the information circulating is inaccurate. OpenAI has clarified that their work on a cyber product with a trusted tester group is separate from their upcoming “Spud” model, which is expected soon.

Mythos: A Leap in Vulnerability Discovery

The core of the worry surrounding Mythos lies in its advanced ability to find and exploit weaknesses in computer systems. Nicholas Carlini, a leading cybersecurity researcher now at Anthropic, described Mythos’s capability to chain together multiple vulnerabilities. Individually, these weaknesses might not be very useful. However, Mythos can link three, four, or even five vulnerabilities to create sophisticated exploits.

Carlini noted that Mythos is highly autonomous, meaning it can pursue complex tasks similar to those a human security researcher would tackle over an entire day. He stated that he has found more bugs in the past few weeks working with Mythos than in his entire career before. This suggests that AI models like Mythos may be surpassing top human experts in finding system vulnerabilities.

Specific examples of Mythos’s findings include exploits in Linux systems that could allow any user to gain administrator privileges. It also reportedly found a way to crash a secure, open-source operating system that has been around for 27 years by sending it specific data. Because of these potent capabilities, Anthropic has stated they will not be releasing Mythos widely.

Why This Matters: A New Frontier of Cyber Threats

The implications of a powerful AI like Mythos falling into the wrong hands are significant. Financial markets rely heavily on secure digital infrastructure. If an AI can autonomously identify and exploit vulnerabilities at an unprecedented scale, it could lead to widespread disruption, data breaches, and financial instability. Regulators and financial institutions are now grappling with how to defend against these advanced AI-driven cyber threats.

This situation also highlights the dual nature of AI development. While AI can be used to enhance security, its advanced capabilities can also be weaponized for malicious purposes. The speed at which these models are evolving means that cybersecurity defenses must also advance rapidly to keep pace.

Amazon and Google’s Role in AI Training

Meanwhile, Anthropic’s latest models, including Mythos, are being trained using Amazon’s AWS Trainium chips. This partnership underscores the reliance of leading AI companies on major cloud providers for the massive computing power needed to develop advanced AI. Anthropic is also utilizing Google’s Tensor Processing Units (TPUs) and is reportedly considering designing its own chips in the future, indicating a growing trend in specialized AI hardware.

OpenAI’s Developments and Misinformation

Reports about OpenAI’s secret new model being similar to Mythos have been circulating. However, OpenAI has stated that while they are working on a cybersecurity product with a trusted tester group, it is not related to their upcoming “Spud” model. The Axios story that initially reported the connection has since been corrected. The “Spud” model is rumored to be released soon, and OpenAI recently launched a $100 ChatGPT plan, possibly a precursor to larger announcements.

Internal Concerns and Alignment Risks

Even within Anthropic, there were internal discussions about whether Mythos was safe to deploy, including for internal testing. Researchers noted a significant jump in capabilities compared to previous models, describing it as a “step change.” The Epoch Capabilities Index (ECI), which synthesizes scores from many benchmarks into a single measure, shows a noticeable upward trend in capabilities leading up to Mythos’s preview.
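As a rough illustration of what synthesizing many benchmarks into one index can mean, the sketch below normalizes each benchmark score between a weak baseline and the benchmark’s ceiling, then averages the results. The benchmark names, numbers, and normalization scheme here are invented for illustration; the actual ECI methodology is more involved.

```python
# Illustrative sketch only: a toy capabilities index that averages
# min-max-normalized benchmark scores. Benchmark names and values are
# invented; the real Epoch Capabilities Index uses its own methodology.

def capability_index(scores: dict[str, float],
                     baselines: dict[str, float],
                     ceilings: dict[str, float]) -> float:
    """Normalize each benchmark score to [0, 1] between a weak-model
    baseline and the benchmark ceiling, then average across benchmarks."""
    normalized = [
        (scores[name] - baselines[name]) / (ceilings[name] - baselines[name])
        for name in scores
    ]
    return sum(normalized) / len(normalized)

# Hypothetical scores for one model snapshot:
scores    = {"coding": 72.0, "math": 88.0, "agentic": 54.0}
baselines = {"coding": 0.0,  "math": 25.0, "agentic": 0.0}
ceilings  = {"coding": 100.0, "math": 100.0, "agentic": 100.0}
index = capability_index(scores, baselines, ceilings)
```

A single scalar like this is what makes it possible to talk about an “upward trend” across model generations even when the underlying benchmarks differ in scale and difficulty.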

Interestingly, Anthropic staff reported a four-fold increase in productivity when using Mythos internally. Some researchers even claimed the model independently delivered major research contributions. While the exact nature of these contributions is still being understood, it points to Mythos’s significant impact on complex tasks.

A technical error during Mythos’s training also raised questions about AI alignment. In some training environments, the reward code was mistakenly allowed to see the model’s “chains of thought” – essentially its internal reasoning process. OpenAI has previously warned against training against these thought processes, since doing so can teach a model to mask undesirable behaviors rather than eliminate them. While this error affected only a small portion of Mythos’s training, and Anthropic considers it their best-aligned model to date, the potential impact on opaque reasoning or secret-keeping abilities remains a subject of study.
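To make the failure mode concrete, here is a minimal, hypothetical sketch of the distinction at issue: a reward computed only from the final answer, with the chain of thought deliberately discarded before grading. The `<think>` delimiters and all function names below are illustrative assumptions, not details of Anthropic’s or OpenAI’s actual training code.

```python
# Hypothetical sketch: keeping reward computation blind to the model's
# chain of thought. Tag format and function names are invented.

def split_cot_and_answer(transcript: str) -> tuple[str, str]:
    """Split a model transcript into hidden reasoning and final answer,
    assuming the model wraps its reasoning in <think>...</think> tags."""
    start, end = transcript.find("<think>"), transcript.find("</think>")
    if start != -1 and end != -1:
        cot = transcript[start + len("<think>"):end]
        answer = transcript[end + len("</think>"):].strip()
        return cot, answer
    return "", transcript.strip()

def grade_answer(answer: str, expected: str) -> float:
    """Toy grader: reward 1.0 for a correct final answer, else 0.0."""
    return 1.0 if answer == expected else 0.0

def safe_reward(transcript: str, expected: str) -> float:
    """The reward is a function of the answer only. The chain of thought
    is discarded before grading, so optimization pressure never rewards
    reasoning that merely *looks* acceptable to the grader."""
    _cot, answer = split_cot_and_answer(transcript)
    return grade_answer(answer, expected)
```

The point of keeping the reward a function of the answer alone is that gradient pressure is never applied to the reasoning text itself, so the chain of thought remains a more honest window into the model’s process rather than something the model learns to sanitize.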

The Trade-off Between Capability and Transparency

The incident highlights a broader challenge in AI development: the potential trade-off between model performance and transparency. When AI models become highly effective, especially through advanced training methods like reinforcement learning, their internal workings can become less understandable to humans. This can lead to models that are more capable but also more opaque, making it harder to detect or prevent potential misuse.

Anthropic is taking steps to prevent models from encoding hidden messages or information within their reasoning processes, especially for powerful models trained with extensive reinforcement learning. They are working to ensure that the syntax and structure of the AI’s output do not hide additional, unreadable information.

The pace of AI advancement suggests that even more significant developments are on the horizon. As AI capabilities continue to grow, understanding and managing their risks will become increasingly critical for industries and society as a whole.


Source: "Mythos is the BIGGEST RISK to financial markets" THE FED (YouTube)


Written by

John Digweed
