Framework Paper

Toward AI-Assisted Cyber Attribution™: A Framework for Traceability in Complex Threat Environments
Author: Kevin McGuinness
Affiliation: cyberattribution.ai™
June 14, 2025


Executive Summary
Attribution—tracing who’s behind a cyberattack—is critical to accountability, deterrence, and defense. But right now, most attribution is slow, siloed, and reserved for government actors or elite security teams.

This paper proposes a new framework: AI-assisted attribution agents that help human analysts correlate evidence across digital domains (network traffic, malware behavior, deepfakes, and more). The system does not make final decisions in isolation—it surfaces evidence and scoring in a transparent, auditable way that accelerates expert judgment while preserving accountability.

We focus on the hardest areas:

  • Industrial control systems (OT)
  • AI-generated threats (deepfakes, synthetic media)
  • Supply chain attacks
  • Underserved organizations (SMBs)

This is not a black-box AI. It is a transparent, explainable system designed to scale the analyst’s reach and allow for audit, override, and shared review—especially in future environments where machine-led analysis becomes necessary.


1. Introduction: Why Attribution Needs a Rethink
Consider this: you’re the lead security analyst at a mid-sized utility company. You’ve just seen unusual traffic coming from a remote terminal. Your OT logs are patchy. There’s no malware sample yet. Your team is two people. Now ask yourself: How do you figure out who’s behind it?

That’s the modern attribution problem.

Attribution isn’t just technical—it’s political, operational, and social. And right now, it’s slow, opaque, and inaccessible to the people who need it most. Large governments and private firms guard their intelligence. The rest of the world guesses, often late.

We believe that’s solvable.

This paper introduces a modular, AI-assisted attribution framework that helps analysts gather, correlate, and visualize evidence across domains. It is built for traceability and analyst control—but flexible enough to support future systems that must operate at machine speed.


2. What’s Broken in Current Attribution
Most attribution today relies on:

  • Infrastructure analysis (IP tracing, DNS logs)
  • Payload analysis (reverse engineering, malware signatures)
  • Behavioral inference (timing, targeting patterns, geopolitical context)

These are all useful—but:

  • Infrastructure is easily faked or proxied
  • Malware evolves faster than analysts can reverse it
  • Behavioral inference is often too subjective to act on

Even in elite environments, attribution is siloed, manual, and rarely auditable. For smaller orgs, it’s effectively impossible. There’s no clear place to start, no shared scoring system, and no open-source tools that say:
“Here’s what we see, here’s why we think it matters, and here’s how confident we are.”

We’re not proposing to fix attribution overnight. But we are proposing a way to make it faster, clearer, and usable by real analysts under pressure—with help from machine-scale tools that stay explainable and overrideable.


3. The Framework: Analyst-Guided Attribution Agents
We propose a system of semi-autonomous software agents that assist attribution at scale without sacrificing transparency or expert override.

The system has four layers:

  1. Signal Ingestion Layer
    Collects logs, telemetry, alerts, open-source intelligence, and even deepfake content samples. In OT environments, this extends to SCADA systems, PLCs, and other cyber-physical devices.
  2. Entity Extraction + Profiling
    Groups data into meaningful actors: IPs, hashes, accounts, devices. Builds evolving behavior profiles across time.
  3. Attribution Agents
    Run correlations, build hypotheses, assign confidence scores, and show their reasoning (e.g., “This IP is linked to prior APT campaigns via X and Y”).
  4. Analyst Review Interface
    Visualizes the evidence graph, confidence scoring, and alternate hypotheses. Allows human analysts to audit, override, or approve findings. Designed for traceability and structured disagreement.

⚠️ This is not blame-by-AI. It’s a co-pilot that accelerates human expertise without hiding its logic.


4. Use Cases and Threat Surfaces

🔧 4.1 Operational Technology (OT)
Supports attribution in energy, manufacturing, and other critical infrastructure by correlating firmware changes, access logs, and malware signatures across cyber-physical systems.

🎭 4.2 AI-Generated Threats
Flags signs of synthetic media: GAN fingerprints, voice clone artifacts, repeated visual noise. Correlates these with deployment patterns and user behavior.
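As one narrow illustration of the "repeated visual noise" signal, the sketch below flags exact duplicate pixel blocks in raw image data, a crude proxy for the tiled patches some generative upsamplers leave behind. Everything here is an assumption for illustration: real detectors operate in the frequency domain and on learned features, and a single block-level statistic would only ever be one weak signal among many.

```python
import hashlib
from collections import Counter

def repeated_block_ratio(pixel_bytes: bytes, block_size: int = 64) -> float:
    """Fraction of fixed-size pixel blocks that occur more than once.

    A deliberately crude stand-in for repeated-noise detection: exact
    duplicate blocks in raw pixel data can hint at tiled synthesis
    artifacts. Illustrative only; not a production detector.
    """
    blocks = [pixel_bytes[i:i + block_size]
              for i in range(0, len(pixel_bytes) - block_size + 1, block_size)]
    if not blocks:
        return 0.0
    counts = Counter(hashlib.sha256(b).digest() for b in blocks)
    repeated = sum(c for c in counts.values() if c > 1)
    return repeated / len(blocks)

# A synthetic buffer with an obviously tiled region scores high:
tiled = bytes(range(64)) * 10        # one 64-byte block repeated 10 times
print(repeated_block_ratio(tiled))   # every block is a duplicate
```

In the framework, a score like this would enter the signal-ingestion layer as one weighted evidence item, to be correlated with deployment patterns and user behavior rather than acted on alone.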

🔗 4.3 Supply Chain Compromise
Detects shared code, reused certificates, and suspicious vendor updates across clients—even before the vendor issues an alert.
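The certificate-reuse check can be sketched as a simple cross-client grouping: collect the signing-certificate fingerprint seen on each vendor update, then flag any fingerprint that appears under more than one vendor. The tuple schema, vendor names, and fingerprints below are hypothetical placeholders for whatever metadata the ingestion layer actually collects.

```python
from collections import defaultdict

def flag_shared_certificates(updates):
    """Flag signing-certificate fingerprints observed under multiple vendors.

    `updates` is a list of (vendor, update_id, cert_fingerprint) tuples.
    Legitimate cross-vendor reuse exists (resellers, shared build
    services), so a match is a lead for an analyst, not a verdict.
    """
    by_cert = defaultdict(set)
    for vendor, update_id, fingerprint in updates:
        by_cert[fingerprint].add(vendor)
    return {fp: sorted(vendors)
            for fp, vendors in by_cert.items() if len(vendors) > 1}

observed = [
    ("acme-hvac", "v2.1", "aa:bb:cc"),    # hypothetical fingerprints
    ("globex-erp", "v9.0", "aa:bb:cc"),   # same cert, different vendor
    ("initech-crm", "v1.4", "dd:ee:ff"),
]
print(flag_shared_certificates(observed))
```

Because the grouping runs across clients, a reused certificate can surface before any single vendor issues an alert, which is exactly the window this use case targets.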

🧩 4.4 SMB Attribution-as-a-Service
Lets small orgs submit suspicious logs or files and get back structured, explainable threat maps. Think “Shodan meets Chainalysis—but for accountability.”


5. Challenges and the AGI Horizon

Data + Access
Attribution is only as good as your signals. The hardest part is often getting clean, timely, cross-domain data.

Ethics and Transparency
Attribution can lead to retaliation or sanctions. Any system that suggests “who did this” must be transparent and auditable. Ours is.

Interface Design
Analysts need tools, not riddles. Confidence scoring, override buttons, and explanation-first design are core.
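One way to make "override buttons" concrete is an append-only review log: every analyst action on an agent finding is recorded alongside the agent's original output, so disagreement stays visible instead of being silently rewritten. The field names and action vocabulary below are assumptions for illustration, not a fixed schema.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class ReviewAction:
    """An append-only record of what an analyst did with an agent finding."""
    hypothesis_id: str
    action: str       # e.g. "approve", "override", "request-more-evidence"
    analyst: str
    rationale: str
    timestamp: float

def record(action: ReviewAction, log: list) -> None:
    # Append-only: overrides never delete the agent's original finding,
    # preserving the structured disagreement the interface is built for.
    log.append(json.dumps(asdict(action), sort_keys=True))

audit_log = []
record(ReviewAction("hyp-042", "override", "analyst-1",
                    "Infrastructure overlap explained by shared hosting", 0.0),
       audit_log)
print(audit_log[0])
```

Serializing each action to a plain JSON line keeps the trail portable for shared review across teams.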

What Happens When AGI Arrives?
If attackers use AGI, attribution becomes more critical, not less. You’ll need trusted systems to:

  • Detect deception
  • Reconstruct evidence trails
  • Validate blame without bias

This framework is built to meet that moment: modular, explainable, and designed for human-auditable attribution in both human-paced and machine-paced environments.


6. Conclusion
Attribution is broken—and fixing it requires more than better malware signatures.

We need systems that can ingest cross-domain signals, reason with transparency, and put power back in the hands of analysts, not just governments or vendors.

The framework we propose is early-stage—but necessary. It gives defenders new tools, makes attribution faster and fairer, and opens the door to trust at scale.

We’re building it now. And we welcome partners, critics, and contributors who believe attribution should be something you can see, understand, and use—even in an era where the machines are moving first.


📚 Related Work

  • Rid, T. (2020). Active Measures. Farrar, Straus and Giroux.
  • Buchanan, B. (2017). The Cybersecurity Dilemma. Oxford University Press.
  • Buchanan, B. (2022). AI and the Security Dilemma. Belfer Center.
  • Schneier, B. (2021). Public-Interest Technology and Security.
  • Strom, B. et al. (2018). MITRE ATT&CK: Design and Philosophy. MITRE.
  • DARPA (2016). Enhanced Attribution Program.