🤖 AI Summary
Cross-lingual aspect-based sentiment analysis (ABSA) for low-resource languages often relies on external translation, limiting robustness and scalability—especially for complex linguistic phenomena (e.g., nested aspects, coreference).
Method: This paper proposes a translation-free sequence-to-sequence framework leveraging multilingual large language models (mLLMs), introducing constrained decoding for the first time to enforce fine-grained generation control and directly output cross-lingual sentiment triplets (aspect, opinion, sentiment).
Contribution/Results: The approach enables unified multilingual modeling without translation intervention and natively handles complex ABSA structures. Experiments across multiple low-resource languages show improvements of up to 10% over prevailing translate-then-predict paradigms. Fine-tuned multilingual LLMs achieve performance on par with state-of-the-art methods, whereas English-centric LLMs underperform significantly, demonstrating both the efficacy and practicality of the approach for low-resource cross-lingual ABSA.
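To illustrate the core idea of constrained decoding for triplet generation, here is a minimal, hedged sketch: at each decoding step, tokens outside a legal candidate set are masked before the best-scoring token is chosen (aspect and opinion terms restricted to tokens from the input sentence, sentiment restricted to a fixed label set). The vocabulary, scoring dicts, and function name below are illustrative assumptions, not the paper's actual implementation.

```python
# Toy constrained decoding for (aspect, opinion, sentiment) triplets.
# All names and data here are hypothetical, for illustration only.

SENTIMENTS = {"positive", "negative", "neutral"}

def constrained_decode(scores_per_step, sentence_tokens):
    """Greedy decoding where each slot of the triplet is restricted
    to a legal candidate set:
      - aspect and opinion must be tokens copied from the input sentence,
      - sentiment must be one of the fixed polarity labels.
    `scores_per_step` is a list of {token: score} dicts, one per slot.
    """
    allowed_sets = [set(sentence_tokens), set(sentence_tokens), SENTIMENTS]
    triplet = []
    for scores, allowed in zip(scores_per_step, allowed_sets):
        # Mask out every token the constraint forbids, then pick the best.
        legal = {tok: s for tok, s in scores.items() if tok in allowed}
        triplet.append(max(legal, key=legal.get))
    return tuple(triplet)

sentence = ["the", "battery", "lasts", "long"]
steps = [
    {"battery": 0.9, "screen": 1.2},    # "screen" is not in the sentence
    {"long": 0.8, "short": 0.1},        # "short" is not in the sentence
    {"positive": 0.7, "amazing": 2.0},  # "amazing" is not a valid label
]
print(constrained_decode(steps, sentence))  # → ('battery', 'long', 'positive')
```

Note how the constraint overrides raw model scores: "screen" and "amazing" score higher but are filtered out because they violate the triplet schema, which is what guarantees well-formed output without post-hoc parsing.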
📝 Abstract
Aspect-based sentiment analysis (ABSA) has made significant strides, yet challenges remain for low-resource languages due to the predominant focus on English. Current cross-lingual ABSA studies often centre on simpler tasks and rely heavily on external translation tools. In this paper, we present a novel sequence-to-sequence method for compound ABSA tasks that eliminates the need for such tools. Our approach, which uses constrained decoding, improves cross-lingual ABSA performance by up to 10%. This method broadens the scope of cross-lingual ABSA, enabling it to handle more complex tasks and providing a practical, efficient alternative to translation-dependent techniques. Furthermore, we compare our approach with large language models (LLMs) and show that while fine-tuned multilingual LLMs can achieve comparable results, English-centric LLMs struggle with these tasks.