🤖 AI Summary
This study addresses regulatory demands for transparency in AI risk disclosure by systematically examining the practice and quality of AI-related risk disclosures in U.S. public companies’ SEC Form 10-K filings.
Method: Analysing over 30,000 10-K filings from more than 7,000 firms (2020–2024), we combine natural language processing, keyword-based retrieval, quantitative statistics, and manual coding to conduct the first large-scale, longitudinal empirical analysis of corporate AI risk disclosure.
Contribution/Results: We find that disclosure prevalence surged from 4% in 2020 to over 43% in 2024. Legal and competitive AI risks are mentioned most frequently, and attention to societal AI risks (e.g., cyberattacks, fraud, and technical limitations of AI systems) is growing, an emerging trend previously undocumented. However, many disclosures remain generic and lack concrete mitigation strategies. To support reproducibility and policy development, we publicly release our keyword extraction and analytical toolkit. This work provides foundational empirical evidence and methodological infrastructure for AI governance frameworks and regulatory policymaking.
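To make the keyword-based retrieval step concrete, below is a minimal Python sketch of how AI-related sentences could be filtered from a filing's risk-factor text. The keyword list, regular expressions, and function name here are illustrative assumptions, not the actual lexicon or API of the released toolkit.

```python
import re

# Hypothetical keyword list for illustration only; the released toolkit
# defines its own lexicon, which may differ.
AI_KEYWORDS = [
    r"artificial intelligence",
    r"machine learning",
    r"generative AI",
    r"large language model",
    r"\bAI\b",  # word-bounded; IGNORECASE is a simplification that also matches "ai"
]
AI_PATTERN = re.compile("|".join(AI_KEYWORDS), re.IGNORECASE)


def extract_ai_sentences(risk_factor_text: str) -> list[str]:
    """Return sentences from 10-K risk-factor text that match an AI keyword."""
    # Naive split on terminal punctuation; a production pipeline would use
    # a proper NLP sentence tokenizer.
    sentences = re.split(r"(?<=[.!?])\s+", risk_factor_text)
    return [s for s in sentences if AI_PATTERN.search(s)]


if __name__ == "__main__":
    sample = (
        "Our business faces intense competition. The use of artificial "
        "intelligence in our products may expose us to legal and regulatory "
        "risks. We also depend on third-party suppliers."
    )
    for sentence in extract_ai_sentences(sample):
        print(sentence)
```

Retrieving at the sentence level keeps enough surrounding context for subsequent manual coding, while simple regex alternation keeps the retrieval step transparent and easy to reproduce.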
📝 Abstract
As Artificial Intelligence becomes increasingly central to corporate strategies, concerns over its risks are growing too. In response, regulators are pushing for greater transparency in how companies identify, report and mitigate AI-related risks. In the US, the Securities and Exchange Commission (SEC) has repeatedly warned companies to provide their investors with more accurate disclosures of AI-related risks; recent enforcement actions and litigation against companies' misleading AI claims reinforce these warnings. In the EU, new laws, such as the AI Act and the Digital Services Act, have introduced additional rules on AI risk reporting and mitigation. Given these developments, it is essential to examine whether and how companies report AI-related risks to the public. This study presents the first large-scale systematic analysis of AI risk disclosures in SEC 10-K filings, which require public companies to report material risks to their business. We analyse over 30,000 filings from more than 7,000 companies over the past five years, combining quantitative and qualitative analysis. Our findings reveal a sharp increase in the share of companies that mention AI risk, up from 4% in 2020 to over 43% in the most recent 2024 filings. While legal and competitive AI risks are the most frequently mentioned, we also find growing attention to societal AI risks, such as cyberattacks, fraud, and technical limitations of AI systems. However, many disclosures remain generic or lack details on mitigation strategies, echoing concerns recently raised by the SEC about the quality of AI-related risk reporting. To support future research, we publicly release a web-based tool for easily extracting and analysing keyword-based disclosures across SEC filings.