Beyond Borders: How Threat Intelligence Provenance Can Save Global Cybersecurity From Geopolitical Fragmentation
In mid-January 2026, the Chinese government reportedly announced a sweeping ban on cybersecurity software from more than a dozen U.S. and Israeli firms, including industry giants like Palo Alto Networks, CrowdStrike, and Check Point. The stated reason: concerns that foreign software could collect and transmit confidential information abroad.
This move represents more than just another salvo in ongoing tech tensions between the two governments. It threatens to fracture a foundational practice of internet cybersecurity: the global threat intelligence ecosystem that allows defenders worldwide to collect, analyze, and share information about emerging attacks and responses to cyber threats that know no borders.
But there’s a way forward. Recent research from Georgia Tech reveals both the problem and a potential solution: provenance could allow threat intelligence to remain global even as geopolitical tensions push nations toward digital isolation.
A Competitive Ecosystem Under Threat
Threat intelligence is produced in a complex cybersecurity institutional landscape, governed at the organizational, national, and transnational levels. Most cybersecurity practitioners understand that threat intelligence—information about malware, malicious infrastructure, and attacker tactics, techniques, and procedures (TTPs)—flows through a complex network of actors, can be difficult to integrate, and is of uneven value.
In a DARPA-funded study to be presented at the 2026 Network and Distributed System Security Symposium, GT computer scientists developed a novel method to trace the propagation of threat intelligence through this ecosystem, which consists of vendors (TI platforms, antivirus, sandboxes), researchers, and operators. By embedding unique watermarks in benign test files and tracking them as they moved between actors, they uncovered several findings:
- While 67% of vendors perform dynamic malware analysis, only 17% share the intelligence they extract.
- Network indicators (like malicious domains or URLs) are shared 20 times more frequently than the actual malware binaries—meaning defenders often get conclusions without the evidence needed to validate them.
- Most vendors consume information, and a handful of “nexus vendors” like VirusTotal act as central aggregation points, creating potential points of failure.
- Delays of hours to days in sharing the data slow coordinated responses.
- Adversaries are actively exploiting predictable sandbox environments and fingerprinting techniques to evade detection. The researchers found hundreds of malware samples actively using publicly available blocklists of sandbox IP addresses to avoid analysis—a technique that reduces the number of vendors receiving intelligence by 25%.
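The watermark-tracing methodology can be sketched roughly as follows. All names, the embedding scheme, and the feed format are illustrative; the study's actual tooling is not described here.

```python
import os
import hashlib

def make_watermarked_sample(benign_payload: bytes) -> tuple[bytes, str]:
    """Embed a unique random token in a benign test file so its
    propagation through the TI ecosystem can later be traced."""
    token = os.urandom(16).hex()                  # unique per submission
    sample = benign_payload + token.encode()      # naive embedding, for illustration
    sample_hash = hashlib.sha256(sample).hexdigest()
    return sample, sample_hash

def trace_propagation(feed_entries: list[dict], sample_hash: str) -> list[str]:
    """Return vendors whose feeds re-shared the watermarked sample,
    identified by its hash (a typical network/file indicator)."""
    return [e["vendor"] for e in feed_entries if e.get("sha256") == sample_hash]

sample, digest = make_watermarked_sample(b"MZ...benign test binary...")
feed = [
    {"vendor": "VendorA", "sha256": digest},
    {"vendor": "VendorB", "sha256": "0" * 64},
]
print(trace_propagation(feed, digest))  # ['VendorA']
```

Because each submitted sample carries a distinct token, any later appearance of that sample (or its hash) in another actor's feed reveals a sharing path, along with its timing.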
The researchers characterize these findings as troubling and make several recommendations: that vendors perform recursive analysis of malware to uncover full attack chains, diversify the IP space used in analysis infrastructure to blunt adversary countermeasures, and use watermarked binaries to allow auditing of data sharing. The study has limitations—it does not examine temporal fingerprint data or freely available analysis environments—but its technical mechanisms for improving the quality of TI data are well developed. It leaves open, however, the question of how to coordinate adoption of these recommendations across a thriving industry sector driven by incentives that are sometimes aligned and sometimes competing.
The Geopolitical Fracture
Now overlay geopolitical tensions. China’s ban isn’t happening in isolation. The United States has previously banned Russian antivirus firm Kaspersky, and sought to ban the TikTok app based on the premise that it could provide data about the American population. There have also been efforts to discredit threat intel research by Chinese and American cybersecurity firms. Similar dynamics have played out with telecom infrastructure, semiconductors, artificial intelligence applications, and other digital technologies deemed “strategic.”
The core tension is this: producing cybersecurity requires global visibility to be effective. Malware developed in one country can attack targets worldwide within minutes. Botnets span continents. Phishing campaigns exploit infrastructure in dozens of jurisdictions. Yet the tools and practices to defend against these global threats are increasingly being carved up along national lines.
When China bans Western cybersecurity vendors, Chinese network operators lose access to threat intelligence from those sources. When the U.S. bans Russian tools, American defenders become blind to threats that Russian vendors might detect first. Each ban reduces global visibility and risks hindering collective response.
Network Operators Hold the Key
Here’s the crucial insight: while states make bans and set public policies, it is network operators—the security teams at corporations, universities, service providers, and government agencies—who make the actual decisions about what threat intelligence to use and how to act on it. These operators face a difficult choice. Follow geopolitically-driven bans and lose access to potentially valuable threat intelligence? Or find workarounds that might violate regulations?
But what if there were a third option? What if operators could use threat intelligence regardless of origin, as long as it met certain verifiable quality and process standards?
Secure Provenance Incentives: From “Who” to “How”
This is where secure provenance systems, which store ownership and process history of data objects and can ensure confidentiality, integrity, and availability, come in. Instead of focusing on “who produced this threat intelligence?”, such a system would allow defenders to ask “how was this intelligence produced and validated?” It creates a trustworthy, auditable trail documenting the entire lifecycle of a piece of threat intelligence:
- Where and when was it first observed?
- How was it analyzed (static analysis, sandbox execution, manual review)?
- How deep was the analysis (did analysts examine dropped files and network connections)?
- Which independent parties validated it?
- How long did each step take?
The Georgia Tech research demonstrates that answering these questions is technically feasible. Their watermarking system tracked threat intelligence through multiple vendors, distinguishing between binary and network indicator sharing, and timing each stage of propagation. A secure provenance system could formalize and standardize this tracking. With it, network operators could accept or filter policy-compliant threat intelligence without relying on country of origin as a proxy for trust. Consider these scenarios:
- Prohibited from using certain vendors’ software, a firm could accept threat intelligence where provenance shows it was independently re-analyzed by approved methods or domestic vendors.
- Firms subject to restrictions can contribute to global threat intelligence where provenance metadata is sanitized to protect operational details (e.g., IoCs designated TLP:AMBER or PAP:RED), analysis occurs through neutral intermediaries, and compliance audit trails can be generated.
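An acceptance policy like the first scenario could be sketched as follows. The field names, vendor names, and policy logic are hypothetical, not drawn from any existing standard.

```python
from dataclasses import dataclass, field

@dataclass
class ProvenanceRecord:
    """Illustrative provenance metadata for one piece of threat intelligence."""
    first_observed: str                   # where and when it was first seen
    analysis_methods: list[str]           # e.g. ["static", "sandbox", "manual"]
    analysis_depth: int                   # levels of dropped files / C2 followed
    validators: list[str] = field(default_factory=list)   # independent re-analyzers
    stage_latencies_hours: dict[str, float] = field(default_factory=dict)

def policy_compliant(rec: ProvenanceRecord,
                     approved_validators: set[str],
                     required_methods: set[str]) -> bool:
    """Accept TI when provenance shows independent re-analysis by an
    approved party using required methods, regardless of original producer."""
    return (required_methods.issubset(rec.analysis_methods)
            and any(v in approved_validators for v in rec.validators))

rec = ProvenanceRecord(
    first_observed="2026-01-10T08:00Z/honeypot-eu",
    analysis_methods=["static", "sandbox"],
    analysis_depth=2,
    validators=["DomesticVendorX"],
)
print(policy_compliant(rec, {"DomesticVendorX"}, {"sandbox"}))  # True
```

The key point is that the policy tests *how* the intelligence was produced and validated, so the same record can satisfy different operators' regulatory constraints.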
Beyond enabling compliance with conflicting domestic regulations, secure provenance addresses concrete operational challenges revealed by the research. The study found that some vendors delay sharing by hours to days, slowing disruption of attacks by 20%. Provenance makes delays visible, allowing operators to avoid bottlenecks or request parallel analysis. Similarly, when 85% of antivirus vendors and 57% of sandboxes failed to execute packed malware, potentially valuable intelligence was lost. Provenance incentivizes deeper analysis by making quality visible and valuable in the marketplace. Relatedly, many vendors reshare IoCs and detection labels, creating the “illusion of consensus”. Provenance reveals actual analytical independence, helping operators set appropriate thresholds for intelligence use, and makes the diversity of the analysis environment visible and verifiable.
Building Blocks
What is required for a secure provenance system? While formal definitions and threat modeling are needed, an LLM-grounded analysis (Claude Code, Opus 4.6) of the sources reviewed here suggests the system needs to combine cryptographic chaining for local data integrity with anchoring in a global, decentralized trust layer, allowing multiple mutually untrusting organizations to independently verify the complete lineage of any piece of threat intelligence. Architectural components include:
Layer 1: Collection — Trusted Hardware & Kernel Modules
At the edge where TI is generated, integrity begins with kernel-level collectors that automatically capture metadata during analysis, backed by hardware attestation to ensure the collection environment itself hasn’t been tampered with.
Layer 2: Data Model — Provenance Record Graphs & Chains
Each action on TI data produces a provenance record containing who holds the data and what was done to it. Records can be linked into a graph structure that captures complex, multi-actor lineage, with cryptographic chaining that makes reordering or deleting historical records detectable.
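A minimal sketch of hash-chained provenance records, assuming JSON-serializable records (illustrative only; a real system would chain over a canonical serialization and a full graph structure, not a simple list):

```python
import hashlib
import json

def chain_record(prev_hash: str, record: dict) -> tuple[dict, str]:
    """Link a provenance record to its predecessor: each entry embeds the
    previous hash, so reordering or deleting history changes every later hash."""
    entry = {"prev": prev_hash, "record": record}
    digest = hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()
    return entry, digest

def verify_chain(entries: list[dict], genesis: str) -> bool:
    """Recompute the chain from the genesis hash; any tampering breaks a link."""
    expected = genesis
    for entry in entries:
        if entry["prev"] != expected:
            return False
        expected = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
    return True

genesis = "0" * 64
e1, h1 = chain_record(genesis, {"actor": "VendorA", "action": "sandbox-analysis"})
e2, h2 = chain_record(h1, {"actor": "VendorB", "action": "reshare"})
print(verify_chain([e1, e2], genesis))  # True
print(verify_chain([e2, e1], genesis))  # False: reordering is detected
```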
Layer 3: Identity & Trust — PKI and Digital Signatures
Every actor signs the provenance records it creates using managed key pairs, ensuring authenticity and non-repudiation. Records are cryptographically bound to participants’ identities, preventing selective removal of records from the chain.
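Signing and verification can be sketched as follows. Note the hedge: this uses HMAC with a shared key purely as a stand-in, since a real deployment would use asymmetric signatures (e.g., Ed25519 keys certified by a PKI) so that verifiers need only the signer's public key.

```python
import hmac
import hashlib

# Stand-in for PKI signatures: HMAC over the serialized record. In a real
# system the signer would use a private key and verifiers a certified
# public key; the structure of sign/verify is the same.
def sign_record(actor_key: bytes, record_bytes: bytes) -> str:
    return hmac.new(actor_key, record_bytes, hashlib.sha256).hexdigest()

def verify_record(actor_key: bytes, record_bytes: bytes, signature: str) -> bool:
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(sign_record(actor_key, record_bytes), signature)

key = b"vendor-a-demo-key"
rec = b'{"actor": "VendorA", "action": "sandbox-analysis"}'
sig = sign_record(key, rec)
print(verify_record(key, rec, sig))         # True
print(verify_record(key, rec + b"x", sig))  # False: tampering detected
```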
Layer 4: Storage & Verification — Distributed Ledger
Provenance record hashes can be anchored to a distributed ledger, providing tamper-evident, immutable storage without a central authority. Automated contracts enforce validation and access rules when actors submit or query TI lineage.
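Anchoring can be batched so that only one digest per batch is written to the ledger. A minimal Merkle-tree sketch (illustrative; production systems would also retain inclusion proofs per record):

```python
import hashlib

def merkle_root(leaf_hashes: list[str]) -> str:
    """Fold a batch of provenance-record hashes into a single root. Any
    change to any record changes the root, so anchoring just the root to a
    ledger makes tampering evident without storing records on-chain."""
    level = list(leaf_hashes)
    if not level:
        return hashlib.sha256(b"").hexdigest()
    while len(level) > 1:
        if len(level) % 2:                # duplicate last node on odd levels
            level.append(level[-1])
        level = [hashlib.sha256((a + b).encode()).hexdigest()
                 for a, b in zip(level[0::2], level[1::2])]
    return level[0]

batch = [hashlib.sha256(f"record-{i}".encode()).hexdigest() for i in range(4)]
anchor = merkle_root(batch)   # this single digest is what goes on the ledger
```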
Layer 5: Privacy & Access Control
Encryption with selective disclosure allows owners to reveal specific chain segments to auditors without exposing sensitive details. Policy-based encryption embeds access rules directly into the data, and conditional privacy mechanisms protect actor identities while preserving accountability.
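Selective disclosure can be approximated with per-segment hash commitments, sketched below in simplified form. Real systems would salt commitments (or use proper cryptographic commitment schemes) to prevent brute-force guessing of segment contents; the names and segment format here are illustrative.

```python
import hashlib

def commit_segments(segments: list[bytes]) -> list[str]:
    """Publish one hash commitment per chain segment. The owner can later
    reveal any subset, and an auditor can check the revealed segments
    against the commitments without seeing the undisclosed ones."""
    return [hashlib.sha256(s).hexdigest() for s in segments]

def verify_disclosure(commitments: list[str],
                      disclosed: dict[int, bytes]) -> bool:
    """Check each revealed segment against its published commitment."""
    return all(hashlib.sha256(seg).hexdigest() == commitments[i]
               for i, seg in disclosed.items())

segments = [b"collected-by:sensor-17", b"analyzed-by:VendorA", b"shared-to:ISAC"]
commitments = commit_segments(segments)
# Reveal only the analysis step to an auditor:
print(verify_disclosure(commitments, {1: segments[1]}))  # True
```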
Figure 1. Provenance operational flow
Binary submitted
  → Vendor analyzes, generates signed provenance record
  → Record linked into provenance graph via cryptographic chaining
  → Hash anchored to distributed ledger
  → TI migrates across organizational boundaries
  → Receiver validates sender’s signature via PKI
  → Auditor verifies full chain using public keys + ledger anchors
Granted, even this rough brainstorming raises sticky questions. A secure provenance system needs apolitical, transnational governance structure(s), infrastructure distributed across multiple jurisdictions, and verifiable information and reporting that does not expose sensitive capabilities. This is the unsolved institutional design problem that future work must consider.
Conclusion: Institutional Economics of Global TI Provenance
Peer production has a long history in cybersecurity—but it also has limits in environments hostile to openness. In response, sub-groups of vetted actors emerged, voluntarily collaborating to produce and share specialized, relevant threat intelligence in trusted environments. But after a decade of growing geopolitical tensions, the main threat to collaborative threat intelligence now comes from states. What’s needed are governance structures that allow operators, vendors, and researchers to continue cooperating globally while adhering to various governments’ incompatible notions of jurisdictionally-bound identity, sovereignty, and compliance.
Implementing secure provenance for TI data objects could be a step toward that. Club good production of threat intelligence already exists; organizations like FS-ISAC, the Cyber Threat Alliance, and FIRST operate as membership-based sharing communities, alongside a thriving private market. But exclusion in these existing clubs is based on organizational identity and trust relationships — precisely the attributes targeted by geopolitical bans. Provenance can create an excludability mechanism that transforms high-quality global threat intelligence from an underprovided public good into a sustainable club good: participation in the verification chain becomes both the “credential” for access and the incentive for contribution, solving free-rider problems that the GT study documented without requiring a central authority to enforce sharing norms. Provenance shifts excludability from who produced the intelligence to how it was produced and verified, making the club resilient to national identity and sovereignty-based restrictions while preserving the quality assurance that excludability provides.
Chinese, American, and other participants (both public and private) will have incentives to use the same provenance system, not out of altruism, but because exclusion from the verifiable pool of TI is operationally costly in a threat environment that remains stubbornly global. Universality and flexibility of applying different usage policies at the operator level mean provenance can accommodate divergent regulatory regimes without fragmenting the underlying intelligence. Existing guidance, standards and protocols, and certificate authorities could be leveraged to begin building such a system. But the harder challenge is institutional: secure provenance requires transnational governance structure(s) perceived as legitimate by participants operating under conflicting state mandates — without which threat intelligence risks becoming a zero-sum geopolitical competition.
References and Further Reading
Galloway et al. (2026). “Actively Understanding the Dynamics and Risks of the Threat Intelligence Ecosystem.” Network and Distributed System Security Symposium. https://tillsongalloway.com/ti-ecosystem-ndss.pdf
Hasan, R., Sion, R., & Winslett, M. (2007, October). Introducing secure provenance: problems and challenges. In Proceedings of the 2007 ACM workshop on Storage security and survivability (pp. 13-18).
Pan, B., Stakhanova, N., & Ray, S. (2023). Data provenance in security and privacy. ACM Computing Surveys, 55(14s), 1-35.
Reuters (January 14, 2026). “Exclusive: Beijing tells Chinese firms to stop using US and Israeli cybersecurity software.” https://www.reuters.com/world/china/beijing-tells-chinese-firms-stop-using-us-israeli-cybersecurity-software-sources-2026-01-14/
Wang, X., Zeng, K., Govindan, K., & Mohapatra, P. (2012, October). Chaining for securing data provenance in distributed information networks. In MILCOM 2012-2012 IEEE Military Communications Conference (pp. 1-6). IEEE.
For more on STIX (Structured Threat Information Expression) and existing threat intelligence sharing standards, see: https://oasis-open.github.io/cti-documentation/
