Hardware-Level Governance of AI Compute: A Feasibility Taxonomy for Regulatory Compliance and Treaty Verification

📅 2026-04-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current AI governance policies often rely on compute controls yet lack a systematic evaluation of the engineering feasibility of hardware-level mechanisms. This work introduces a taxonomy of twenty hardware-based governance mechanisms, organized by monitoring, verification, and enforcement functions, and evaluates their technical feasibility across four governance scenarios through a layered adversarial threat model. It proposes “tamper-evident verifiability” as a more practical security standard than absolute tamper-proofing, identifies a structural mismatch in which treaty verification, the most demanding scenario, depends on the least mature mechanisms, and argues that the window of opportunity created by semiconductor manufacturing concentration is rapidly closing. These insights yield a concrete, verifiable roadmap for international AI governance agreements grounded in near-term technical realizability.
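
As a rough, non-authoritative sketch of how the taxonomy's structure could be encoded, the snippet below pairs each mechanism with its function, feasibility rating, scenario mapping, and adversary tiers. The specific entries, feasibility assignments, and the two intermediate scale labels are assumptions for illustration, not the paper's actual ratings.

```python
from dataclasses import dataclass
from enum import Enum


class Function(Enum):
    MONITORING = "monitoring"
    VERIFICATION = "verification"
    ENFORCEMENT = "enforcement"


class Feasibility(Enum):
    # Four-point scale from the abstract; the two middle labels are assumed.
    CURRENTLY_DEPLOYABLE = 1
    NEAR_TERM = 2
    REQUIRES_RND = 3
    SPECULATIVE = 4


class Scenario(Enum):
    DOMESTIC_REGULATION = "domestic regulation"
    BILATERAL_AGREEMENT = "bilateral agreement"
    TREATY_VERIFICATION = "multilateral treaty verification"
    INDUSTRY_SELF_REGULATION = "industry self-regulation"


@dataclass
class Mechanism:
    name: str
    function: Function
    feasibility: Feasibility
    scenarios: list[Scenario]
    adversary_tiers: list[str]  # "commercial", "non-state", "nation-state"


# Hypothetical entries; ratings and scenario mappings are illustrative only.
taxonomy = [
    Mechanism("on-chip compute metering", Function.MONITORING,
              Feasibility.REQUIRES_RND,
              [Scenario.TREATY_VERIFICATION, Scenario.BILATERAL_AGREEMENT],
              ["commercial", "non-state", "nation-state"]),
    Mechanism("cryptographic proof-of-training", Function.VERIFICATION,
              Feasibility.SPECULATIVE,
              [Scenario.TREATY_VERIFICATION],
              ["nation-state"]),
]


def least_mature_for(scenario: Scenario) -> Feasibility:
    """Surface the structural mismatch: the maturity floor a scenario depends on."""
    needed = [m for m in taxonomy if scenario in m.scenarios]
    return max((m.feasibility for m in needed), key=lambda f: f.value)


print(least_mature_for(Scenario.TREATY_VERIFICATION))  # Feasibility.SPECULATIVE
```

The point of such an encoding is only to make the mechanism-to-scenario mapping queryable: asking which scenario depends on the least mature mechanisms reproduces the paper's headline mismatch for treaty verification.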
📝 Abstract
The governance of frontier AI increasingly relies on controlling access to computational resources, yet the hardware-level mechanisms invoked by policy proposals remain largely unexamined from an engineering perspective. This paper bridges the gap between AI governance and computer engineering by proposing a taxonomy of 20 hardware-level governance mechanisms, organised by function (monitoring, verification, enforcement) and assessed for technical feasibility on a four-point scale from currently deployable to speculative. For each mechanism, we provide a technical description, a feasibility rating, and an identification of adversarial vulnerabilities. We map the taxonomy onto four governance scenarios: domestic regulation, bilateral agreements, multilateral treaty verification, and industry self-regulation. Our analysis reveals a structural mismatch: the mechanisms most needed for treaty verification, including on-chip compute metering, cryptographic proof-of-training, and hardware-embedded enforcement, are also the least mature. We assess principal threats to compute-based governance, including algorithmic efficiency gains, distributed training methods, and sovereignty concerns. We identify a temporal constraint: the window during which semiconductor manufacturing concentration makes hardware-level governance implementable is narrowing, while R&D timelines for critical mechanisms span years. We present an adversary-tiered threat analysis distinguishing commercial, non-state, and nation-state actors, arguing the appropriate security standard is tamper-evident assurance analogous to IAEA verification rather than absolute tamper-proofing. The taxonomy, feasibility classification, and mechanism-to-scenario mapping provide a technical foundation for policymakers and identify the R&D investments required before hardware-level governance can support verifiable international agreements.
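
To make the "tamper-evident rather than tamper-proof" standard concrete, here is a minimal, generic sketch of one familiar building block: a hash-chained usage log in which retroactive edits cannot be prevented, but they break the chain and are therefore detectable on audit, loosely analogous to seals in IAEA-style verification. This is not the paper's proposed design; the record fields and values are assumptions for illustration.

```python
import hashlib
import json


def append_record(log, record):
    """Append a usage record, chaining it to the hash of the previous entry."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    body = {"prev_hash": prev_hash, **record}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    log.append({**body, "hash": digest})
    return log


def verify_chain(log):
    """Recompute every hash; any retroactive edit to earlier entries is evident."""
    prev_hash = "0" * 64
    for entry in log:
        body = {k: v for k, v in entry.items() if k != "hash"}
        if body["prev_hash"] != prev_hash:
            return False
        if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != entry["hash"]:
            return False
        prev_hash = entry["hash"]
    return True


# Hypothetical metering records (field names and values are assumptions).
log = []
append_record(log, {"period": "2026-04-01", "chip_id": "ACC-001", "flop_estimate": 3.2e21})
append_record(log, {"period": "2026-04-02", "chip_id": "ACC-001", "flop_estimate": 2.9e21})

assert verify_chain(log)
log[0]["flop_estimate"] = 1.0e18   # retroactive under-reporting...
assert not verify_chain(log)       # ...is detected on audit, though not prevented
```

The design choice mirrors the abstract's argument: the mechanism does not stop a determined adversary from altering records, but it guarantees that alteration leaves detectable evidence, which is the weaker but more achievable assurance standard the paper advocates.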
Problem

Research questions and friction points this paper is trying to address.

AI governance
hardware-level control
compute regulation
treaty verification
technical feasibility
Innovation

Methods, ideas, or system contributions that make the work stand out.

hardware-level governance
feasibility taxonomy
compute metering
cryptographic proof-of-training
tamper-evident assurance