SIEM, AI, and the Mythical ‘Solved SOC’

I’ve spent 25 years in information security. Long enough to have watched SIEM rise out of the old audit requirement for “aggregated, network-based logging”, through the birth of the “correlation” buzzword, and, as that faded, the rise of the “normalisation” buzzword. I’ve built SOCs, tuned them, fought alert noise, and tried to control the spiralling cost that comes with doing security badly at scale.

And through all of that, one principle has never changed.

Know your environment. Know your security strategy. Understand your threat model. Build a picture of normal. Alert on what is truly abnormal—and truly risky.
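That last step (“build a picture of normal, alert on what is truly abnormal”) can be sketched in a few lines. Everything below is invented for illustration: the events, the usernames, and the two-hour slack window are placeholders for judgement calls you would make per environment, not a production detector.

```python
from collections import defaultdict

# Hypothetical login events as (user, hour_of_day); in practice these
# would come from your SIEM's authentication logs.
history = [
    ("alice", 9), ("alice", 10), ("alice", 11),   # day worker
    ("bob", 22), ("bob", 23), ("bob", 1),         # night shift
]

def build_baseline(events, slack=2):
    """Record, per user, the set of hours at which activity is 'normal'.

    The slack window is a tuning decision: how far from observed
    behaviour still counts as normal is an environment-specific call.
    """
    baseline = defaultdict(set)
    for user, hour in events:
        for h in range(hour - slack, hour + slack + 1):
            baseline[user].add(h % 24)
    return baseline

def is_abnormal(baseline, user, hour):
    """Unknown users and out-of-pattern hours are both abnormal."""
    return hour not in baseline.get(user, set())

baseline = build_baseline(history)
print(is_abnormal(baseline, "alice", 3))   # True: 03:00 for a day worker
print(is_abnormal(baseline, "bob", 23))    # False: usual hour for nights
```

The interesting part is not the code, it is the two parameters nobody can choose for you: what history to baseline on, and how much slack counts as normal.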

The Sacred Cows of the SOC

If you follow that approach, something interesting happens. A number of “must-haves” in modern SOC conversations start to look… negotiable:

  • SOAR is not always mandatory
  • Threat hunting is cool, but not essential
  • More analysts do not equal better outcomes
  • Expensive threat intelligence feeds are not mandatory

These are not heretical views—they’re just uncomfortable ones. Because they challenge a model where complexity and cost are often mistaken for maturity.

In reality, a well-understood environment with sharply defined risk tolerance will outperform a bloated SOC every time.

Enter AI: The New Buzzword Cycle

Now we have a new layer: AI. Read the current wave of content and you’ll notice a pattern: the most confident “AI success stories” tend to avoid talking about SIEM at all.

Instead, we get familiar phrases:

  • “Enriching data feeds”
  • “Augmenting analysts”

Let’s pause on one of those.

“Enrichment” of Data

“Enrichment” has quietly joined the long list of SIEM buzzwords. But what does it actually mean? Better data? According to whom? Data quality is not universal. It is entirely dependent on your environment, your systems, and your risks. An event that is critical in one organisation is meaningless in another.

You can train an AI model to process data. But can you teach it what matters in a specific, messy, evolving environment? That’s not just a data problem. That’s a context problem.
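The context problem can be made concrete. In the sketch below, the same raw event is enriched against two different (entirely hypothetical) asset inventories and comes out critical in one organisation and near-noise in the other; the hostnames, tags, and weights are all invented for illustration.

```python
# One raw event; without local context it is just bytes.
event = {"host": "db-07", "action": "outbound_smb"}

# Two hypothetical organisations' views of the same hostname.
org_a_assets = {"db-07": {"criticality": "crown-jewel"}}
org_b_assets = {"db-07": {"criticality": "test-lab"}}

# Placeholder weights: each organisation sets its own.
WEIGHTS = {"crown-jewel": 90, "production": 60, "test-lab": 10}

def enrich(event, asset_inventory):
    """Attach local context and score the event accordingly."""
    asset = asset_inventory.get(event["host"], {})
    enriched = dict(event)
    enriched["criticality"] = asset.get("criticality", "unknown")
    enriched["score"] = WEIGHTS.get(enriched["criticality"], 50)
    return enriched

print(enrich(event, org_a_assets)["score"])  # 90: critical in org A
print(enrich(event, org_b_assets)["score"])  # 10: noise in org B
```

The model doing the enrichment is trivial; the asset inventory and the weights are the hard part, and they only exist inside your organisation.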

The Analyst Productivity Argument

Another popular claim:

“AI won’t replace analysts, but it will make them more productive.”

It sounds reasonable. It’s also mostly unproven in the SIEM context. Take “noise reduction”—a classic SOC problem. Who defines noise?

That decision requires:

  • Knowledge of the environment
  • Understanding of business risk
  • Familiarity with attacker behaviour

That’s not magic. But it does require experience and judgement. Can an AI learn this? Possibly, in constrained scenarios. Can it generalise this across real-world environments without introducing blind spots? That’s much harder to believe.
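Those three inputs can be made concrete as a toy triage score. Every threshold and lookup table below is a placeholder; each one encodes local judgement, which is exactly the point.

```python
def triage_score(alert, known_admin_hosts, business_risk, attacker_ttps):
    """Combine the three judgement inputs into a single score."""
    score = 0
    # Knowledge of the environment: is this expected admin activity?
    if alert["host"] not in known_admin_hosts:
        score += 30
    # Understanding of business risk: how much does this asset matter to us?
    score += business_risk.get(alert["asset_class"], 0)
    # Familiarity with attacker behaviour: a technique we actually face?
    if alert["technique"] in attacker_ttps:
        score += 40
    return score

# All values hypothetical; T1021 is MITRE ATT&CK "Remote Services".
alert = {"host": "ws-112", "asset_class": "finance", "technique": "T1021"}
score = triage_score(
    alert,
    known_admin_hosts={"admin-01"},
    business_risk={"finance": 30, "kiosk": 0},
    attacker_ttps={"T1021", "T1078"},
)
print(score)  # 100: worth an analyst's time in this environment
```

Change any one of the three tables and the same alert becomes noise. That is the part an AI model has to learn per environment, not once.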

The Missing Layer: Technical Depth

What’s consistently absent from “AI transformed my SOC” narratives is technical depth.

Where are the discussions about:

  • Operating systems
  • Network behaviour
  • Application logic

Even in fully SaaS environments, you are still dealing with operating systems, identity layers, protocols, and execution paths.

Can AI Fix a Broken SOC?

AI can easily flog the dead horse that is a cost-sinkhole SOC (there are plenty of those on every continent; the species deserves a collective noun). But can it make a SOC rise phoenix-like from the ashes of its own history?

I remain open-minded, but it will take technically grounded discussion to change that position. The SIEM community has long conversations for the long nights of winter, e.g. “do I want Sysmon if I already have a CIS-benchmark-compliant audit policy?”. That is the level at which the AI conversation needs to happen, though perhaps not at the same length.

AI can absolutely make those SOCs more efficient at being inefficient. Can it transform a fundamentally flawed SOC into an effective one? That is as close to a “no” as you’ll get without it actually being a “no”. Not because AI is weak, but because the problem is structural.

If you don’t understand your environment, your risk, and your attack surface, no amount of automation will fix that.

The Hard Problem: Teaching an Attack Mindset

At its core, effective detection relies on an attack mindset.

That comes from:

  • Understanding how systems really behave
  • Knowing how they break
  • Seeing how attackers chain small weaknesses into real impact

We’ve seen early attempts to automate parts of this—especially in areas like network path analysis and automated penetration testing.

But anyone who has participated in real-world red teaming or unrestricted penetration testing knows the truth: This is not a linear process. It involves intuition, creativity, and adapting to incomplete information. Teaching that to an AI agent is not impossible—but it is a very hard problem.

Final Thought

I’m not anti-AI. Far from it. But I am sceptical of narratives that skip over the hard parts.

Can AI replace the fundamentals?

  • Understanding your environment
  • Defining risk properly
  • Thinking like an attacker

Until AI can operate meaningfully at that level, it remains a tool of sorts, and hopefully not a very expensive one.