🤖 AI Summary
In high-stakes decision-making, employing asymmetric loss functions during training to align with human preferences—such as differential costs of false positives versus false negatives—can be counterproductive: while it calibrates final decisions, it undermines the model’s incentive to learn discriminative features, causing systematic misalignment between human and algorithmic objectives. This paper introduces a novel “training–calibration separation” paradigm grounded in incentive theory: symmetric loss is used during training to maximize learning incentives, and human preference alignment is achieved post hoc via threshold adjustment at deployment. We provide theoretical analysis proving that this approach restores optimal learning incentives. Empirical evaluation on real-world disease screening and credit approval tasks demonstrates that our method significantly improves both classification performance and alignment with human objectives, yielding higher overall system utility than conventional end-to-end asymmetric training.
📝 Abstract
The cost of error in many high-stakes settings is asymmetric: misdiagnosing pneumonia when absent is an inconvenience, but failing to detect it when present can be life-threatening. Because of this, artificial intelligence (AI) models used to assist such decisions are frequently trained with asymmetric loss functions that incorporate human decision-makers' trade-offs between false positives and false negatives. In two focal applications, we show that this standard alignment practice can backfire. In both cases, it would be better to train the machine learning model with a loss function that ignores the human's objective and then adjust predictions ex post according to that objective. We rationalize this result using an economic model of incentive design with endogenous information acquisition. The key insight from our theoretical framework is that machine classifiers perform not one but two incentivized tasks: choosing how to classify and learning how to classify. We show that while the adjustments engineers use correctly incentivize choosing, they can simultaneously reduce the incentives to learn. Our formal treatment of the problem reveals that methods embraced for their intuitive appeal can in fact misalign human and machine objectives in predictable ways.
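The train-then-adjust recipe the abstract describes can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: it assumes scikit-learn, a synthetic dataset, and hypothetical error costs `C_FP` and `C_FN`. The model is trained with ordinary symmetric log loss, and the human's asymmetric objective enters only through the decision threshold, which for calibrated probabilities is the standard cost-ratio cutoff.

```python
# Sketch of "training-calibration separation": symmetric training,
# post-hoc threshold adjustment. Costs below are illustrative only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

C_FP, C_FN = 1.0, 10.0  # hypothetical costs: a false negative is 10x worse

X, y = make_classification(n_samples=2000, weights=[0.85], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Step 1: train with plain (symmetric) log loss -- no class weights,
# so the model's incentive to learn discriminative features is undistorted.
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
p = clf.predict_proba(X_te)[:, 1]

# Step 2: align with the human objective ex post. If p is calibrated,
# predicting positive is optimal when p * C_FN > (1 - p) * C_FP,
# i.e. when p exceeds C_FP / (C_FP + C_FN).
tau = C_FP / (C_FP + C_FN)
y_hat = (p > tau).astype(int)

fp = int(np.sum((y_hat == 1) & (y_te == 0)))
fn = int(np.sum((y_hat == 0) & (y_te == 1)))
total_cost = C_FP * fp + C_FN * fn
print(f"threshold={tau:.3f}, total cost={total_cost:.1f}")
```

The conventional alternative the paper critiques would instead bake the costs into training (e.g. `class_weight={0: C_FP, 1: C_FN}`) and classify at the default 0.5 threshold; the paper's argument is that this distorts what the model learns, not just what it decides.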