
November 13th 2025

The new age of hacking

README

overview

For most of cybersecurity history, hacking was a human bottleneck. It required time, expertise, and coordination, all of which were scarce. That is no longer the case.

Over the last year, reports have confirmed that state-aligned hacking groups have begun using LLMs as operational tools. Not just for writing malware or touching up phishing emails, but to assist across nearly every stage of the attack lifecycle: reconnaissance, exploit reasoning, payload refinement, and decision-making.

Chinese state-linked groups, the same ecosystem that produced Volt Typhoon, Salt Typhoon, and other long-running espionage campaigns, have reportedly experimented with AI systems like Anthropic’s Claude as an operator. Claude was used to chain tasks together, iterate on exploits, and support sustained campaigns with minimal human input.

observed patterns

If you've followed groups like Volt Typhoon, the patterns are familiar: long dwell times, quiet lateral movement, targeting of critical infrastructure, and an emphasis on access, not spectacle.

Salt Typhoon pushed this further: telecoms, data aggregation, slow extraction. These operations are bureaucratic, procedural, and patient, which makes AI a perfect tool for them.

why ai works here

LLMs are good at exactly the kinds of tasks that bog humans down (a rough sketch follows the list):

  • parsing messy documentation
  • correlating partial information
  • iterating on scripts
  • remembering context across steps
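
To make that concrete, here is a minimal sketch of the loop those tasks add up to. It is not any vendor's API: askModel and runScript are hypothetical stand-ins for an LLM call and a sandboxed executor, and the only point is that the feedback loop carries its own context forward.

// sketch only: askModel and runScript are hypothetical stubs, not a real agent framework

type StepResult = { ok: boolean; output: string }

// stand-in for an LLM call; a real agent would hit a model API here
async function askModel(prompt: string): Promise<string> {
    return `echo "draft for: ${prompt.slice(0, 40)}..."`
}

// stand-in for a sandboxed executor that reports success or failure
async function runScript(code: string): Promise<StepResult> {
    return { ok: false, output: `ran: ${code}` }
}

// the loop itself: draft, run, feed the failure back in, retry
async function iterateOnScript(goal: string, maxAttempts = 5): Promise<string | null> {
    const history: string[] = []                    // context carried across steps
    let script = await askModel(`Write a script to: ${goal}`)

    for (let attempt = 0; attempt < maxAttempts; attempt++) {
        const result = await runScript(script)
        if (result.ok) return script                // good enough, stop iterating

        history.push(`attempt ${attempt}: ${result.output}`)
        script = await askModel(
            `Goal: ${goal}\nPrevious attempts:\n${history.join("\n")}\nRevise the script:\n${script}`
        )
    }
    return null                                     // gave up after maxAttempts
}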

ethics as logic control

// naive implementation

if (ai == hacking) {
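    // no gate: if the goal is hacking, the script simply runs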
    run(script)
} else {
    ai(train)
}

// constrained implementation

if (ai == hacking) {
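    // same goal, but execution is gated behind a check on the script first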
    if (checkFunctionality(script)) {
        run(script)
    }
}

This is not about whether AI can hack. It is about who defines the guardrails, who audits them, and who is accountable when they fail. Most AI ethics discussions stop at intent. But these systems do not operate on intent; they operate on capability and access. If a model can reason about systems, generate code, and adapt to feedback, the ethical question is not whether it should do this. It is who is responsible when it does.
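
To see what guardrails plus auditing look like at the smallest scale, here is a rough sketch. The names (checkPolicy, auditLog, runWithGuardrail) are made up for illustration; the structure is the point: every execution decision records what approved it, so there is something to audit after the fact.

// illustrative only: a guardrail is a gate plus a record of who or what opened it

type AuditEntry = {
    action: string
    approvedBy: string      // the operator, policy version, or model id that allowed it
    allowed: boolean
    timestamp: string
}

const auditLog: AuditEntry[] = []

// hypothetical policy gate; in practice, defining this is the contested part
function checkPolicy(action: string): { allowed: boolean; approvedBy: string } {
    const blockedTerms = ["exploit", "exfiltrate"]
    const allowed = !blockedTerms.some(term => action.includes(term))
    return { allowed, approvedBy: "policy:v1" }
}

// every decision is logged, whether or not the action runs
function runWithGuardrail(action: string, execute: () => void): void {
    const decision = checkPolicy(action)
    auditLog.push({
        action,
        approvedBy: decision.approvedBy,
        allowed: decision.allowed,
        timestamp: new Date().toISOString(),
    })
    if (decision.allowed) execute()
}

The gate is the easy part. The harder questions remain the ones above: someone has to own checkPolicy, and someone has to actually read auditLog.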

automated plausible deniability

One of the quiet advantages AI gives these state actors is distance from attribution and individual responsibility. When an AI agent drafts, refines, and executes technical steps under vague human supervision, accountability drops out of the picture. It is unclear whether responsibility lies with the operator, the model, the platform, or the policy team that signed off on it.

That ambiguity is not a bug; it is strategically useful, especially for groups like these. It fits neatly into how modern cyber-espionage already works: layered, deniable, slow enough to avoid panic but effective enough to matter.

governance and preparation

AI-enabled espionage is not something you can realistically stop. But it is something governments are responsible for preparing against. In practice, that responsibility falls to the civil service.

To serve at the pleasure should still mean serving with purpose, regardless of whether service comes in short stints or long careers. If tools can scale harm faster than oversight can react, then AI needs to reflect the same structure of checks and balances.

my two cents

Whether this arrives as autonomous superintelligence or merely scalable competence, it does not change the nature of espionage. It changes the speed, scale, and surface area of failure.

Hackers will continue adapting, so the real question is whether accountability evolves at the same pace. Right now, it does not, but hopefully in the near future it will.