🤖 AI Summary
To address the lack of end-to-end traceability, interpretability, and verifiability for trustworthy AI in 6G networks, this paper proposes REASON, a holistic framework integrating AI orchestration (AIO), cognitive evaluation and explainability (COG), and AI monitoring (AIM), augmented with digital twin technology to enable dynamic verification and closed-loop feedback. REASON supports privacy-aware real-time performance assessment, online eXplainable AI (XAI) interpretation, and trustworthy xAPP-level deployment. Demonstrated on an AI-enabled xAPP use case, the framework enhances the transparency, robustness, and regulatory compliance of AI services. It establishes a systematic, lifecycle-oriented pathway for trustworthy AI management in critical infrastructure, spanning development, deployment, operation, and governance, thereby advancing foundational trustworthiness guarantees for AI-native 6G systems.
📝 Abstract
Artificial intelligence (AI) is expected to play a key role in 6G networks, including optimizing system management, operation, and evolution. This requires systematic lifecycle management of AI models, ensuring their impact on services and stakeholders is continuously monitored. While current 6G initiatives introduce AI, they often fall short in addressing end-to-end intelligence and crucial aspects like trust, transparency, privacy, and verifiability. Trustworthy AI is vital, especially for critical infrastructures like 6G. This article introduces the REASON approach for holistically addressing AI's native integration and trustworthiness in future 6G networks. The approach comprises AI orchestration (AIO) for model lifecycle management, cognition (COG) for performance evaluation and explanation, and AI monitoring (AIM) for tracking and feedback. Digital twin (DT) technology is leveraged to facilitate real-time monitoring and scenario testing, which are essential for AIO, COG, and AIM. We demonstrate this approach through an AI-enabled xAPP use case, leveraging a DT platform to validate, explain, and deploy trustworthy AI models.