🤖 AI Summary
In high-stakes deployment scenarios, machine learning models must simultaneously satisfy regulatory compliance, fairness, and computational constraints, yet existing post-hoc or fine-tuning methods often degrade performance, incur excessive latency, or prove infeasible. To address this, we propose a lightweight model editing framework based on graph metanetworks. Our approach formulates model editing as an end-to-end learnable meta-operation, enabling plug-and-play, single-inference modification of pre-trained models without retraining. It unifies diverse compliance tasks, including data minimization, bias mitigation, and weight pruning, under a single architecture. Experiments demonstrate that our method reduces editing latency by over 90%, incurs ≤1.2% accuracy degradation on original tasks, and achieves a 94.7% compliance satisfaction rate, substantially outperforming conventional approaches.
📝 Abstract
As machine learning models are increasingly deployed in high-stakes settings, e.g. as decision support systems in various societal sectors or in critical infrastructure, designers and auditors face the need to ensure that models satisfy a wide variety of requirements beyond performance (e.g. compliance with regulations, fairness, computational constraints). Although most such requirements are the subject of ongoing study, typical approaches face critical challenges: post-processing methods tend to compromise performance, which is often counteracted by fine-tuning or, worse, training from scratch, a strategy that is time-consuming or even unavailable. This raises the following question: "Can we efficiently edit models to satisfy requirements without sacrificing their utility?" In this work, we approach this question with a unifying, data-driven framework: we learn to edit neural networks (NNs), where the editor is an NN itself (a graph metanetwork) and editing amounts to a single inference step. In particular, the metanetwork is trained on NN populations to minimise an objective consisting of two terms: the requirement to be enforced and the preservation of the NN's utility. We experiment with diverse tasks (the data minimisation principle, bias mitigation and weight pruning), improving the trade-offs between performance, requirement satisfaction and time efficiency compared to popular post-processing or re-training alternatives.
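The two-term objective described in the abstract can be sketched on a toy example. Everything below is an illustrative assumption, not the paper's method: the "model" is a single linear layer, the "editor" is a plain linear map on flattened weights (a stand-in for the graph metanetwork), and the requirement term is an L1 sparsity penalty serving as a proxy for weight pruning.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a pre-trained network: one linear layer's weights
# (hypothetical; the paper edits full NNs via a graph metanetwork).
W = rng.normal(size=(4, 8))    # original weights
X = rng.normal(size=(32, 8))   # held-out inputs for measuring utility preservation


def edit(editor, weights):
    """Apply the editor in a single inference step.

    Here the editor is just a linear map on the flattened weights,
    a toy stand-in for the learned metanetwork.
    """
    return (editor @ weights.ravel()).reshape(weights.shape)


def objective(w_edited, w_orig, lam=1.0):
    """Two-term editing objective: requirement enforcement + utility preservation."""
    # Requirement term: L1 sparsity penalty (a proxy for weight pruning).
    requirement = np.abs(w_edited).mean()
    # Utility term: keep the edited model's outputs close to the original's.
    utility = np.mean((X @ w_edited.T - X @ w_orig.T) ** 2)
    return requirement + lam * utility


identity = np.eye(W.size)        # editor that leaves the model unchanged
shrink = 0.5 * np.eye(W.size)    # editor that halves every weight

W_id = edit(identity, W)
W_sh = edit(shrink, W)
```

In the paper's setup, the editor's parameters would be trained over a population of NNs to minimise this objective; the identity and shrinking editors above merely illustrate the trade-off the objective encodes (the shrinking editor lowers the requirement term but pays a nonzero utility-preservation cost).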