Technology

AI Will Not Fix Poor Governance; It Will Expose It

Andy Lynch, Head of Technical Services at Cased Dimensions

An MSSP view on shadow IT, shadow AI, and why the cracks are getting harder to hide

AI is changing the cyber landscape fast. But it’s not a silver bullet, and it’s definitely not a reset button. More than anything, AI acts as an accelerant. It takes whatever is already there, good or bad, and turns the volume up.

In well‑governed environments, that means faster detection, better visibility, and more resilience. In environments with weak controls, unclear ownership, and a lot of unmanaged sprawl, it means something else entirely: problems surface quicker, and they’re much easier to exploit.

From an MSSP perspective, especially when working with organisations across the UK and EU under regulatory and operational‑resilience regimes, the pattern is consistent: uncontrolled IT, shadow AI, and poor governance no longer sit quietly in the background. They’re now exposed, reachable, and exploitable at speed.

For boards and senior leaders, this matters because AI shortens the distance between known weaknesses and real incidents. Under frameworks like DORA, NIS2, and existing UK expectations, the room to carry unmanaged risk is shrinking quickly.

Uncontrolled IT Becomes an AI‑Scaled Problem

Shadow IT isn’t new. What is new is how easy it’s become to exploit.

In the past, finding and chaining weaknesses took time and skill. That friction slowed attackers down. AI removes much of it.

Recent testing of advanced frontier models, including Anthropic’s Claude Mythos Preview, shows how quickly AI can identify weaknesses and link them together across complex environments. Evaluations by the UK’s AI Security Institute demonstrated that models like this can carry out multi‑step attack paths that used to take skilled humans many hours or days to complete.

In estates with weak governance, this usually looks familiar:

  • Systems nobody clearly owns
  • Patching that’s inconsistent or assumed rather than verified
  • Too many identities with too much standing access
  • Legacy applications that exist “because they always have”

In regulated or cross‑border environments, these issues often sit just outside individual audit findings. Taken together, though, they directly undermine operational resilience, and AI has no problem connecting those dots.

AI doesn’t need full visibility. It just needs enough inconsistency to get started.
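
To make that concrete, below is a minimal sketch of the kind of cross‑check that surfaces those inconsistencies before anyone else finds them. It is illustrative only: the file names and column layouts are assumptions, not any particular product’s export format.

    # Minimal sketch: flag estate inconsistencies that give an attacker (or an
    # AI) a starting point. File names and fields are illustrative assumptions.
    import csv

    def load(path, key):
        with open(path, newline="") as f:
            return {row[key]: row for row in csv.DictReader(f)}

    assets = load("asset_inventory.csv", "hostname")      # what exists
    owners = load("ownership_register.csv", "hostname")   # who owns it
    patches = load("patch_report.csv", "hostname")        # verified patch state

    for host in assets:
        if host not in owners:
            print(f"{host}: no registered owner")
        if host not in patches:
            print(f"{host}: patch status assumed, never verified")
        elif patches[host]["status"] != "compliant":
            print(f"{host}: patching out of date ({patches[host]['status']})")

Nothing here is sophisticated, and that is the point: if a twenty‑line script can find these gaps, an AI‑assisted attacker certainly can.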

Shadow AI: The Next Iteration of Shadow IT

Shadow AI isn’t coming – it’s already here.

Users are routinely copying sensitive data, credentials, architectural details, and internal context into consumer AI tools that sit completely outside organisational control. Often with good intent. Rarely with a clear understanding of the risk.

Compared to traditional shadow IT, shadow AI introduces risks that linger and compound:

  • Prompts and data may be retained or reused elsewhere
  • Context leakage can reveal internal structures or control gaps
  • AI‑generated output can reinforce insecure shortcuts

For organisations dealing with GDPR, ISO 27001, Cyber Essentials, NIS2, or DORA, this raises awkward questions. Risk is no longer just about where data is stored – it’s also about where decisions and reasoning are taking place.

Microsoft and others have been clear on this point: shadow AI can’t be solved by blanket bans. It needs clear policy, identity‑based controls, and visibility over how AI is actually being used.
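
That visibility can start simply. The sketch below, which assumes web‑proxy logs with user and destination columns (both the log format and the domain list are illustrative assumptions), summarises who is sending traffic to consumer AI services and how often:

    # Minimal sketch: surface shadow-AI usage from web-proxy logs.
    import csv
    from collections import Counter

    # Illustrative list only; maintain your own from proxy categories or threat intel.
    CONSUMER_AI_DOMAINS = {"chat.openai.com", "claude.ai", "gemini.google.com"}

    hits = Counter()
    with open("proxy_log.csv", newline="") as f:
        for row in csv.DictReader(f):      # expects 'user' and 'host' columns
            if row["host"] in CONSUMER_AI_DOMAINS:
                hits[(row["user"], row["host"])] += 1

    for (user, host), count in hits.most_common():
        print(f"{user} -> {host}: {count} requests")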

Without that, organisations don’t just expose themselves to attackers; they also make life harder when regulators come asking questions.

Why Anthropic’s Claude Mythos Matters (and Why It’s a Problem)

Claude Mythos isn’t risky because it’s “AI”. It’s risky because of what it’s now proven capable of doing at scale.

Independent evaluations of Claude Mythos Preview showed something security teams have worried about for a long time: once you remove human time and effort from the equation, the speed of weakness discovery and exploitation changes completely. Tasks that used to require experienced people, coordination, and patience can now be compressed into much shorter cycles.

What makes Mythos particularly concerning isn’t a single breakthrough, but the combination of capabilities:

  • It can identify subtle vulnerabilities that automated tools and human review have missed for years
  • It can chain weaknesses together across systems, identities, and network boundaries
  • It operates consistently, without fatigue, and at a pace defenders struggle to match

In other words, it doesn’t just find bugs. It maps environments the way attackers need them mapped.
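
To make “chaining” concrete, here is a toy illustration – not anything from the evaluations themselves – that treats an estate as a graph of individually minor weaknesses and searches for a route from an internet‑facing foothold to a sensitive system:

    # Toy sketch: weakness chaining as graph search. The estate model is
    # entirely illustrative; real attack-path tooling is far richer.
    from collections import deque

    # Edges: (from, to, weakness that makes the hop possible)
    EDGES = [
        ("internet", "legacy-app", "unpatched CVE"),
        ("legacy-app", "svc-account", "credentials in a config file"),
        ("svc-account", "file-server", "excessive standing access"),
        ("file-server", "finance-db", "flat network, no segmentation"),
    ]

    def attack_path(start, target):
        graph = {}
        for src, dst, how in EDGES:
            graph.setdefault(src, []).append((dst, how))
        queue, seen = deque([(start, [])]), {start}
        while queue:
            node, path = queue.popleft()
            if node == target:
                return path
            for dst, how in graph.get(node, []):
                if dst not in seen:
                    seen.add(dst)
                    queue.append((dst, path + [f"{node} -> {dst} ({how})"]))
        return None

    for hop in attack_path("internet", "finance-db") or []:
        print(hop)

No single edge in that graph would fail an audit on its own; the path to the database exists only because all four weaknesses do.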

The fact that Anthropic has chosen not to release Mythos publicly, instead restricting access to a small group of trusted partners, should tell us something important. This isn’t about hype control. It’s an acknowledgement that, in the wrong environment, models like this dramatically lower the effort required to turn governance gaps into working attack paths.

For organisations with weak ownership, undocumented dependencies, or informal controls, this is where the real risk lies. AI like Mythos doesn’t care that something fell between teams, outside audit scope, or into “we’ll deal with that later”. It simply treats those gaps as usable terrain.

That’s why efforts like Project Glasswing exist: to try to use these capabilities defensively before equivalent tools are widely available to attackers.

The takeaway is simple. Claude Mythos doesn’t create new weaknesses; it makes existing ones much harder to ignore.

AI Doesn’t Hide Governance Gaps; It Brings Them Forward

Models like Claude Mythos Preview are being tightly controlled for a reason. Not because they’re theoretical, but because they work.

Research shows they can uncover vulnerabilities that survived years, or even decades, of human review.

The key point isn’t that every attacker has this level of capability today. It’s that the direction of travel is obvious.

If your environment relies on undocumented knowledge, informal controls, or “we know who owns that,” AI will strip away those assumptions very quickly.

In heavily regulated sectors, and especially where operational resilience and third‑party risk matter, governance gaps are no longer slow‑burn audit issues. They’re becoming clear, high‑confidence entry points.

Why the MSSP Role Has to Change

This shift changes what it means to be an MSSP.

Static baselines, annual reviews, and point‑in‑time assessments don’t hold up well in an AI‑accelerated threat model.

A modern MSSP has to focus on:

  • Ongoing visibility into real exposure, not just compliance status
  • Detecting drift across identity, cloud, endpoints, and AI usage
  • Turning emerging research into practical control changes
  • Reducing complexity rather than adding yet another tool

Increasingly, the role is less about monitoring and response alone, and more about stewardship: helping organisations keep ownership, intent, and control intact as their environments evolve.
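
Drift detection is the most mechanical of those duties, and a useful place to start. Here is a minimal sketch that compares a current configuration snapshot against a signed‑off baseline; the flat setting‑to‑value JSON format is an assumption for illustration:

    # Minimal sketch: report configuration drift against an approved baseline.
    import json

    def drift(baseline: dict, current: dict) -> list[str]:
        findings = []
        for key, expected in baseline.items():
            actual = current.get(key, "<missing>")
            if actual != expected:
                findings.append(f"{key}: expected {expected!r}, found {actual!r}")
        for key in current.keys() - baseline.keys():
            findings.append(f"{key}: present but never approved")
        return findings

    with open("baseline.json") as f, open("snapshot_today.json") as g:
        for finding in drift(json.load(f), json.load(g)):
            print(finding)

Run on a schedule rather than at audit time, the same comparison turns an annual finding into a same‑day alert.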

Microsoft’s Security Response Center has made this point clearly: AI lets defenders operate faster and at greater scale, but only when it sits on top of strong processes, clear ownership, and disciplined governance.

Without that foundation, AI ends up helping attackers first, and regulators tend to arrive shortly after.

Governance Still Matters, Now More Than Ever

Governance often gets framed as bureaucracy. In reality, it’s protection.

Good governance looks like:

  • Clear ownership of systems, data, and identities
  • AI usage rules that are defined, enforced, and reviewed
  • Continuous visibility, not periodic snapshots
  • Fixing issues early, not after incidents or audits

Done well, governance doesn’t slow innovation; it makes it safe to scale, especially in highly regulated and multi‑jurisdictional environments.
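
Ownership, in particular, becomes far more durable when it is enforced as a failing check rather than recorded in a policy document. A minimal sketch, assuming a cloud inventory export in which each resource carries a tags map (the tag names and file format are illustrative):

    # Minimal sketch: treat missing ownership metadata as a reportable failure.
    import json

    REQUIRED_TAGS = {"owner", "data_classification"}

    with open("resource_export.json") as f:   # e.g. a cloud inventory export
        resources = json.load(f)

    for res in resources:
        missing = REQUIRED_TAGS - set(res.get("tags", {}))
        if missing:
            print(f"{res['id']}: missing tags {sorted(missing)}")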

From an MSSP point of view, the job isn’t just to respond when things go wrong. It’s to help organisations stay structurally defensible as technology, regulation, and attacker capability all accelerate.

AI will keep improving. Attackers will keep adopting it. Regulators will keep raising the bar.

The organisations that cope best won’t be the ones with the most tools; they’ll be the ones with the clearest ownership, the fewest grey areas, and governance that actually reflects how the environment works.

If AI took a close look at your environment today, would it see discipline or disorder?