🤖 AI Summary
In multi-label sentiment classification, fine-grained emotions such as fear, joy, and sadness suffer from high ambiguity, semantic overlap, and strong label correlation, which degrade both discriminability and interpretability.
Method: We propose an explainability-enhanced framework that explicitly integrates semantically rich, Llama-3-generated explanatory text into the classification pipeline. The explanation clarifies ambiguous emotional expressions by providing contextual grounding; it is jointly encoded with the original input by a fine-tuned RoBERTa encoder and optimized end-to-end under a multi-label classification loss.
Contribution/Results: Evaluated on SemEval-2025 Task 11, our method achieves significant gains in macro-F1, particularly for highly ambiguous classes (e.g., fear, joy, sadness), demonstrating both improved predictive accuracy and enhanced model transparency. By bridging generative explanation with discriminative classification, the framework establishes a novel paradigm for interpretable multi-label sentiment analysis.
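The core of the pipeline above, pairing the input with its generated explanation for joint encoding and training against a per-label binary objective, can be sketched minimally as follows. This is an illustrative sketch, not the paper's implementation: the `</s></s>` pair separator is the conventional RoBERTa sentence-pair delimiter (an assumption here), and the loss is written out in pure Python in place of a framework's built-in binary cross-entropy.

```python
import math

# RoBERTa-style separator between the two segments of a sentence pair
# (assumed formatting; a tokenizer would normally insert this).
SEP = " </s></s> "

def build_joint_input(text: str, explanation: str) -> str:
    """Concatenate the original input with its generated explanation so
    both are encoded together in a single forward pass."""
    return text + SEP + explanation

def multilabel_bce(logits, targets):
    """Binary cross-entropy averaged over independent emotion labels:
    one sigmoid per label, the standard multi-label objective."""
    total = 0.0
    for z, y in zip(logits, targets):
        p = 1.0 / (1.0 + math.exp(-z))  # sigmoid of the label logit
        total += -(y * math.log(p) + (1 - y) * math.log(1 - p))
    return total / len(logits)

# Hypothetical example: text plus a clarifying explanation, then the
# loss for a 3-label case (e.g. fear=1, joy=0, sadness=1).
joint = build_joint_input(
    "My hands would not stop shaking.",
    "The trembling suggests the speaker is afraid, not merely cold.",
)
loss = multilabel_bce([2.0, -1.0, 0.5], [1, 0, 1])
```

In a full system, `joint` would be tokenized and passed through the fine-tuned RoBERTa encoder, whose pooled representation feeds a linear head producing one logit per emotion label; `multilabel_bce` is then the training signal.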
📝 Abstract
This paper presents a novel approach to multi-label emotion detection in which Llama-3 generates explanatory content that clarifies ambiguous emotional expressions, thereby improving RoBERTa's emotion classification performance. By incorporating this explanatory context, our method raises F1-scores, particularly for emotions such as fear, joy, and sadness, and outperforms text-only models. The added explanations help resolve ambiguity, address challenges such as overlapping emotional cues, and strengthen multi-label classification, marking a significant advancement in emotion detection tasks.