Explainable AI for Sentiment Analysis of Human Metapneumovirus (HMPV) Using XLNet

📅 2025-02-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
Public emotional responses to human metapneumovirus (HMPV) outbreaks are difficult to model with interpretability using conventional approaches. Method: We propose the first explainable sentiment analysis framework integrating XLNet—a Transformer-based language model—with SHAP (SHapley Additive exPlanations) for fine-grained analysis of HMPV-related social media texts. Our approach employs an XLNet classifier for high-accuracy sentiment prediction and leverages SHAP to generate user-level, theoretically grounded feature attributions. Contribution/Results: This work pioneers the synergistic application of XLNet and SHAP to respiratory virus-related public opinion analysis, overcoming the opacity of traditional black-box models. Evaluated on a real-world HMPV social media dataset, our framework achieves 93.50% accuracy—significantly outperforming BERT and LSTM baselines—while delivering intuitive, trustworthy, instance-level explanations. The framework thus enables transparent, evidence-informed public health surveillance and intervention decision-making.

📝 Abstract
In 2024, the outbreak of Human Metapneumovirus (HMPV) in China, which later spread to the UK and other countries, raised significant public concern. While HMPV typically causes mild symptoms, its effects on vulnerable individuals prompted health authorities to emphasize preventive measures. This paper explores how sentiment analysis can enhance our understanding of public reactions to HMPV by analyzing social media data. We apply transformer models, particularly XLNet, achieving 93.50% accuracy in sentiment classification. Additionally, we use explainable AI (XAI) through SHAP to improve model transparency.
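At prediction time, a Transformer classifier such as XLNet maps each post to per-class logits, which a softmax head converts into sentiment probabilities. A minimal sketch of that final step, assuming a three-way label set and example logit values that are purely illustrative (the paper reports only the 93.50% accuracy, not its label scheme or logits):

```python
import math

LABELS = ["negative", "neutral", "positive"]  # assumed label set for illustration

def softmax(logits):
    """Convert raw classifier logits into a probability distribution."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def predict(logits):
    """Return the most probable sentiment label and the full distribution."""
    probs = softmax(logits)
    idx = max(range(len(probs)), key=probs.__getitem__)
    return LABELS[idx], probs

# Example logits for one post (invented values, not from the paper)
label, probs = predict([0.2, -1.1, 2.4])
print(label, probs)
```

The same head applies regardless of the encoder; XLNet differs from BERT upstream of this step, in how its permutation language modeling pretrains the contextual representations.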
Problem

Research questions and friction points this paper is trying to address.

Analyze public sentiment on HMPV outbreak
Apply XLNet for sentiment classification
Enhance model transparency using explainable AI
Innovation

Methods, ideas, or system contributions that make the work stand out.

XLNet for sentiment analysis
Explainable AI with SHAP
93.50% accuracy achieved
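SHAP's attributions are Shapley values from cooperative game theory: each feature's marginal contribution to the model output, averaged over all possible orderings of the features. The self-contained sketch below computes exact Shapley values for a toy word-level sentiment scorer; the words, weights, and interaction term are invented for illustration (the paper applies SHAP to its XLNet model, not to a hand-built scorer):

```python
from itertools import combinations
from math import factorial

# Toy per-word sentiment weights (assumed for illustration only)
WEIGHTS = {"outbreak": -0.8, "mild": 0.5, "worried": -1.2, "recover": 0.9}

def model(present):
    """Score a post given the subset of feature words it contains."""
    score = 0.1  # assumed baseline sentiment
    score += sum(WEIGHTS[w] for w in present)
    # Toy interaction: "mild" softens "outbreak" when both occur
    if "mild" in present and "outbreak" in present:
        score += 0.3
    return score

def shapley_values(features, f):
    """Exact Shapley values by enumerating all subsets (feasible for few features)."""
    n = len(features)
    phi = {w: 0.0 for w in features}
    for w in features:
        others = [x for x in features if x != w]
        for r in range(len(others) + 1):
            for subset in combinations(others, r):
                s = len(subset)
                weight = factorial(s) * factorial(n - s - 1) / factorial(n)
                phi[w] += weight * (f(set(subset) | {w}) - f(set(subset)))
    return phi

phi = shapley_values(list(WEIGHTS), model)
print(phi)
# Efficiency property: attributions sum to f(all features) - f(no features)
print(sum(phi.values()), model(set(WEIGHTS)) - model(set()))
```

Note the symmetric interaction is split equally between "mild" and "outbreak" (+0.15 each), while purely additive words receive exactly their weights. The SHAP library approximates these values efficiently for large models where exhaustive enumeration is intractable.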
Md. Shahriar Hossain Apu
Department of IoT and Robotics Engineering, Bangabandhu Sheikh Mujibur Rahman Digital University, Bangladesh
Md Saiful Islam
Department of Educational Technology and Engineering, Bangabandhu Sheikh Mujibur Rahman Digital University, Bangladesh
Tanjim Taharat Aurpa
Gazipur Digital University
Natural Language Processing · Social Media Analysis · Machine Learning · Deep Learning · Computer Vision