Over the past few days I have spoken to boards, CEOs and technology leaders who are, understandably, very concerned about what Anthropic’s new AI cyber capabilities mean for their organisations. While every leadership team will have a different set of circumstances and a different level of cyber maturity, there are some points that should be on every board agenda this week to help develop a measured and effective response.
So, what’s changed?
In summary, Anthropic’s announcement of the latest capabilities of Claude Mythos suggests a step-change in the cyber risk landscape. We’ve seen clear evidence that AI can identify, chain and exploit zero-day vulnerabilities across major operating systems and browsers. Modern AI models can now analyse source code, reason about complex interactions, and uncover exploitable vulnerabilities at great speed.
This matters because it invalidates several long-standing assumptions:
- Code written decades ago and assumed stable or ‘secure enough’ is now potentially exploitable by AI
- Vulnerability discovery is no longer constrained by human review cycles
- Responsible disclosure and remediation timelines designed around human research no longer reflect reality
- The accepted risk window for addressing vulnerabilities has shrunk, and periodic patch cycles will no longer be appropriate
Mythos is, for now, tightly controlled through Project Glasswing, but the direction is clear. Vulnerability discovery is accelerating, attack surfaces are becoming wider and more visible, and for security leaders patching at scale is about to get a lot harder. This has implications for system design, organisational resilience, software development and, therefore, risk management. The impact is likely to be particularly acute in critical national infrastructure, where stability and availability have been the dominant considerations.
What questions are we hearing from CEOs?
Our clients are asking practical questions: What does this change in my environment? Where am I exposed because of legacy code and technical debt? How do I get assurance over the software I buy and build? How do I reduce the time between discovery and remediation? And how do I explain defensibility to boards, regulators and insurers in an AI-accelerated world?
Priority measures for boards to address these issues head on
Invest in cyber resilience for the AI era. This is an arms race happening at great speed: the tools defenders gain today, threat actors will gain tomorrow. Your organisation must move from ‘point-in-time’ testing to continuous, AI-augmented assurance, combining automated discovery with increasingly automated remediation.
Reframe supply chain assurance beyond “compliance”. As we have seen time and time again from attacks over the past 18 months, risk does not stop at your own door. There is no boundary between your risk and third-party risk. Challenge your team to go beyond inventories and checklists and instead focus on understanding exposure and what might fail first when analysed by AI-enabled adversaries.
Get ahead of regulatory and insurance expectations. AI is rapidly being written into existing cyber regulation. If you are building, buying or deploying AI, be aware that regulators may not ask for ‘AI compliance’ per se, but it will soon be embedded in cyber assurance, supplier security reviews, procurement and digital safety regimes. Assurance that satisfied insurers yesterday will not be enough tomorrow. This places a huge emphasis on boards having visibility and a defensible response.
Understand your exposure to digital sovereignty. AI and geopolitics are dominating headlines, driving digital sovereignty in a way that proliferates, rather than condenses, the rulebook. Governments and regulatory bodies are looking particularly closely at risk and vulnerability in sovereign and critical infrastructure, and AI-driven vulnerability discovery increases the urgency of national resilience. You need trusted, independent counsel with insight into, and relationships with, the public sector to navigate this.
Start red-teaming with AI-augmented adversary simulation. AI-powered attacks are not science fiction or tomorrow’s problem; you need to get to grips with your risk today. As listed above, some of the base assumptions of cyber assurance are being upended, so simulations from even six or twelve months ago may already be out of date. Get ahead of this new reality now.
Reevaluate trust
This is a period of rapid change. To thrive, you must resist the urge to be reactionary and piecemeal. Invest in partnerships with technical authority, expertise and depth. There is no silver bullet. In an AI-accelerated world, the challenge is to look beyond hype and to trust in advice and action you can stand behind. AI is an enormous opportunity for us all, in every organisation and in every industry. Those who stand still will be left behind; action must be taken now to be part of the change, rather than a victim of it.
A realistic note to finish
It is important to take this potential ‘paradigm shift’ seriously, but equally not to overreact to the hype. It is also worth reflecting that, even in AI-native organisations, keeping humans in the loop remains key: AI does hallucinate, balanced decisions need to be made, and expertise is needed to execute the shift that is required. Remember, too, that many of the major breaches of recent years involved compromising humans through social engineering, not finding zero-day vulnerabilities. Doing the basics well is the most important defence, and it can be done right now.
About Mike Maddison
Mike is a former CISO turned CEO who advises executives and boards on cyber resilience strategy and navigating technological disruption. Mike is a trusted adviser to global technology companies, manufacturers, financial institutions, critical national infrastructure operators, retailers and government. He has led NCC Group since 2022 and has overseen the transformation of the Group into a pure-play cyber security services leader.
About NCC Group
NCC Group is a people-powered, tech-enabled global cyber resilience business that has been advising organisations and the public sector for over 25 years. NCC Group works at the forefront of this reality. We’re trusted by hyperscalers and frontier AI labs to assess high-risk technologies, translating that insight and experience into credible, real-world assurance for clients. Our heritage and deep technical expertise, combined with our powerful, dedicated AI computing infrastructure, mean we have enterprise-grade AI capability that is fully in-house, secure, and built for cyber, helping organisations prioritise risk, remediate faster, and adopt AI securely and responsibly, with trust at the core.