🤖 AI Summary
This study examines a critical gap: how AI vendors' vulnerability disclosure policies handle AI-specific risks, such as jailbreaking, hallucination, model extraction, and training-data leakage, despite the growing security implications of these threats.
Method: We conduct the first large-scale, systematic evaluation of disclosure policies from 264 AI vendors using a mixed-methods approach: quantitative coding of policy texts, qualitative thematic analysis of vendor stances, and lag analysis against 1,130 real-world AI security incidents and 359 academic publications.
Contribution/Results: Only 18% of vendors explicitly reference AI-specific threats; 36% provide no formal disclosure channel; and most policies exclude emerging risks and lack AI-tailored incentives. We propose a tripartite typology of vendor postures (proactive clarification, silence, and restrictive) and demonstrate that current policies significantly lag behind both academic advances and the evolving threat landscape. This work establishes the first empirical benchmark for AI disclosure governance and offers actionable pathways for policy refinement and regulatory alignment.
📝 Abstract
As AI is increasingly integrated into products and critical systems, researchers are paying greater attention to identifying related vulnerabilities. Effective remediation depends on whether vendors are willing to accept and respond to AI vulnerability reports. In this paper, we examine the disclosure policies of 264 AI vendors. Using a mixed-methods approach, our quantitative analysis finds that 36% of vendors provide no disclosure channel, and only 18% explicitly mention AI-related risks. Vulnerabilities involving data access, authorization, and model extraction are generally considered in-scope, while jailbreaking and hallucination are frequently excluded. Through qualitative analysis, we further identify three vendor postures toward AI vulnerabilities: proactive clarification (n = 46, comprising active supporters, AI integrationists, and back channels), silence (n = 115, comprising self-hosted and hosted vendors), and restrictive (n = 103). Finally, by comparing vendor policies against 1,130 AI incidents and 359 academic publications, we show that bug bounty policy evolution has lagged behind both academic research and real-world events.