🤖 AI Summary
Legal compliance of machine learning models cannot be directly encoded; instead, abstract legal obligations must be “indirectly operationalized” into verifiable model design choices. Existing approaches either focus narrowly on software-level compliance or overlook legal complexity, failing to address two core challenges: the multiplicity of legal interpretations and the unpredictability of performance–compliance trade-offs. Method: We propose a five-stage interdisciplinary framework introducing the first legal–ML co-modeling paradigm, embedding legal reasoning throughout the ML development lifecycle. It features a legally adaptable operationalization mechanism and a multi-objective trade-off evaluation system. Contribution/Results: Evaluated in an anti-money laundering use case, the framework identifies an optimal configuration achieving both high detection accuracy (12% F1-score improvement) and legal defensibility, demonstrating its systematic capacity to jointly optimize predictive performance and legal legitimacy.
📝 Abstract
Organizations developing machine learning-based (ML) technologies face the complex challenge of achieving high predictive performance while respecting the law. This intersection between ML and the law creates new complexities. As ML model behavior is inferred from training data, legal obligations cannot be operationalized in source code directly. Rather, legal obligations require "indirect" operationalization. However, choosing context-appropriate operationalizations presents two compounding challenges: (1) laws often permit multiple valid operationalizations for a given legal obligation, each with varying degrees of legal adequacy; and (2) each operationalization creates unpredictable trade-offs among the different legal obligations and with predictive performance. Evaluating these trade-offs requires metrics (or heuristics), which are in turn difficult to validate against legal obligations. Current methodologies fail to fully address these interwoven challenges, as they either focus on legal compliance for traditional software or on ML model development without adequately considering legal complexities. In response, we introduce a five-stage interdisciplinary framework that integrates legal and ML-technical analysis during ML model development. This framework facilitates designing ML models in a legally aligned way and identifying high-performing models that are legally justifiable. Legal reasoning guides the choice of operationalizations and evaluation metrics, while ML experts ensure technical feasibility, performance optimization, and an accurate interpretation of metric values. The framework thereby bridges the gap between the conceptual analysis of law and ML models' need for deterministic specifications. We illustrate its application with a case study in the context of anti-money laundering.
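The multi-objective trade-off evaluation described above can be sketched as a Pareto-front selection over candidate operationalizations. The code below is a minimal illustration, not the paper's method: the configuration names and the `f1` / `legal_adequacy` scores are entirely hypothetical, standing in for measured predictive performance and legally assessed adequacy of each operationalization.

```python
# Hypothetical sketch: selecting legally justifiable, high-performing
# model configurations via Pareto dominance on two objectives.
# All names and scores are illustrative, not from the paper's case study.

def pareto_front(configs):
    """Return configurations not dominated on (f1, legal_adequacy).

    A config dominates another if it scores >= on both objectives
    and strictly > on at least one.
    """
    front = []
    for c in configs:
        dominated = any(
            o["f1"] >= c["f1"]
            and o["legal_adequacy"] >= c["legal_adequacy"]
            and (o["f1"] > c["f1"] or o["legal_adequacy"] > c["legal_adequacy"])
            for o in configs
        )
        if not dominated:
            front.append(c)
    return front


# Each candidate pairs one operationalization choice with its
# (hypothetical) measured trade-off between performance and legality.
candidates = [
    {"name": "strict-feature-exclusion", "f1": 0.71, "legal_adequacy": 0.95},
    {"name": "fairness-constrained",     "f1": 0.78, "legal_adequacy": 0.90},
    {"name": "unconstrained-baseline",   "f1": 0.80, "legal_adequacy": 0.55},
    {"name": "post-hoc-filtering",       "f1": 0.74, "legal_adequacy": 0.80},
]

front = pareto_front(candidates)
# "post-hoc-filtering" is dominated by "fairness-constrained"
# (lower F1 and lower legal adequacy), so it drops out of the front.
```

Legal reasoning would then pick from the resulting front, e.g. rejecting the unconstrained baseline despite its top F1 because its legal adequacy is indefensible.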