🤖 AI Summary
This study addresses the challenges of cross-project aging-related defect prediction, where distribution discrepancies between the source and target domains and severe class imbalance hinder performance, and where existing approaches often overlook dependencies among metrics. To tackle these issues, the work proposes the ARFT-Transformer framework, which introduces a metric-level multi-head attention mechanism, the first applied to this task, to explicitly model dependencies among software metrics. Focal Loss is additionally integrated to strengthen learning from hard-to-classify samples. Extensive experiments on three large-scale open-source projects show that ARFT-Transformer significantly outperforms state-of-the-art methods, improving Balance scores by 29.54% in the single-source scenario and by 19.92% in the multi-source scenario.
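To make the Focal Loss idea concrete, here is a minimal pure-Python sketch of the binary focal loss from Lin et al., which down-weights easy examples so training focuses on hard-to-classify (e.g. rare aging-related defect) samples. The `alpha` and `gamma` defaults below are the common values from that paper, not necessarily the hyperparameters used by ARFT-Transformer.

```python
import math

def focal_loss(p, y, alpha=0.25, gamma=2.0):
    """Binary focal loss for a single prediction.

    p: predicted probability of the positive (defect-prone) class.
    y: true label, 1 for defect-prone, 0 for clean.
    alpha: class-balancing weight for the positive class.
    gamma: focusing parameter; larger values suppress easy examples more.
    """
    p_t = p if y == 1 else 1.0 - p            # probability of the true class
    alpha_t = alpha if y == 1 else 1.0 - alpha
    # The (1 - p_t)^gamma factor shrinks the loss of well-classified
    # samples, leaving hard, misclassified samples to dominate training.
    return -alpha_t * (1.0 - p_t) ** gamma * math.log(p_t)

easy = focal_loss(0.95, 1)   # confidently correct positive: tiny loss
hard = focal_loss(0.10, 1)   # badly misclassified positive: large loss
```

With `gamma = 0` the expression reduces to ordinary alpha-weighted cross-entropy, which is one way to sanity-check an implementation.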