🤖 AI Summary
This paper takes a first step toward a formal study of AI supply chains: the networks of AI actors that contribute models, datasets, and services to the development of AI products. The authors present a historical perspective on the rise of AI supply chains, then model them as directed graphs, connecting examples of AI issues to graph properties. Two detailed case studies provide theoretical and empirical results: (1) information passing along the chain (specifically, of explanations) is imperfect, producing misunderstandings with real-world implications; and (2) upstream design choices, such as those made by base model providers, have downstream consequences for AI products fine-tuned on those models. Together, the findings motivate further study of the social, economic, regulatory, and technical implications of AI supply chains, with relevance to both AI development and regulation.
📝 Abstract
The widespread adoption of AI in recent years has led to the emergence of AI supply chains: complex networks of AI actors contributing models, datasets, and more to the development of AI products and services. AI supply chains have many implications yet are poorly understood. In this work, we take a first step toward a formal study of AI supply chains and their implications, providing two illustrative case studies indicating that both AI development and regulation are complicated in the presence of supply chains. We begin by presenting a brief historical perspective on AI supply chains, discussing how their rise reflects a longstanding shift towards specialization and outsourcing that signals the healthy growth of the AI industry. We then model AI supply chains as directed graphs and demonstrate the power of this abstraction by connecting examples of AI issues to graph properties. Finally, we examine two case studies in detail, providing theoretical and empirical results in both. In the first, we show that information passing (specifically, of explanations) along the AI supply chain is imperfect, which can result in misunderstandings that have real-world implications. In the second, we show that upstream design choices (e.g., by base model providers) have downstream consequences (e.g., on AI products fine-tuned on the base model). Together, our findings motivate further study of AI supply chains and their increasingly salient social, economic, regulatory, and technical implications.
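The directed-graph abstraction described in the abstract can be sketched minimally. The node names and dependency edges below are illustrative assumptions, not taken from the paper; the sketch only shows how modeling actors as a directed graph makes upstream-to-downstream propagation (the second case study's concern) a simple reachability question:

```python
# Minimal sketch: an AI supply chain as a directed graph.
# Nodes are AI actors/artifacts; an edge u -> v means v depends on u.
# All names below are hypothetical examples, not from the paper.
supply_chain = {
    "training_data": ["base_model"],
    "base_model": ["fine_tuned_model"],
    "fine_tuned_model": ["ai_product"],
    "eval_service": ["ai_product"],
}

def downstream_of(graph, node):
    """Return every node reachable from `node`: the artifacts that an
    upstream design choice at `node` can affect."""
    seen, stack = set(), [node]
    while stack:
        u = stack.pop()
        for v in graph.get(u, []):
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return seen

# A design choice in the base model propagates to everything
# fine-tuned or built on it:
print(sorted(downstream_of(supply_chain, "base_model")))
# → ['ai_product', 'fine_tuned_model']
```

Under this abstraction, graph properties map naturally onto supply-chain questions: reachability captures whose choices affect whom, and path length along the chain suggests how far information (such as explanations) must travel, and potentially degrade, between actors.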