🤖 AI Summary
Low-resource languages are underrepresented in cross-lingual aspect-based sentiment analysis (ABSA), and existing approaches rely heavily on external translation tools, which hinders robust modeling of complex, structured ABSA tasks. Method: We propose a unified sequence-to-sequence generation framework with constrained decoding that eliminates translation intermediaries and jointly models multiple ABSA subtasks (aspect term extraction, sentiment classification, and their joint prediction) while enforcing syntactically and semantically valid structured outputs through explicit decoding constraints. Contribution/Results: Evaluated across seven languages and six ABSA tasks, our method achieves an average 5% improvement over strong baselines, with gains exceeding 10% in multi-task settings, and establishes the first benchmarks for several previously unexplored ABSA tasks. A comprehensive evaluation under zero-shot, few-shot, and fine-tuning paradigms demonstrates strong cross-lingual generalization and practical applicability.
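The core idea of constrained decoding is that, at each generation step, the decoder only considers tokens that keep the partial output consistent with the required structured format, so the model can never emit a malformed tuple. The following is a minimal, self-contained sketch of this mechanism using a toy vocabulary, grammar, and scoring function (all hypothetical, not the paper's actual decoder or label set):

```python
# Toy illustration of constrained decoding for structured ABSA output.
# The target format is a single "(ASPECT , POLARITY)" tuple followed by <eos>;
# invalid tokens are masked out at every step, so even a badly calibrated
# scorer cannot produce a malformed sequence.

ASPECTS = {"food", "service"}        # hypothetical aspect terms
POLARITIES = {"positive", "negative"}  # hypothetical sentiment labels

def allowed_next(prefix):
    """Return the set of tokens valid after `prefix` under the grammar
    S -> "(" ASPECT "," POLARITY ")" <eos>."""
    n = len(prefix)
    if n == 0:
        return {"("}
    if n == 1:
        return ASPECTS
    if n == 2:
        return {","}
    if n == 3:
        return POLARITIES
    if n == 4:
        return {")"}
    return {"<eos>"}

def constrained_greedy_decode(score_fn, max_len=6):
    """Greedy decoding restricted to grammar-valid tokens at each step."""
    prefix = []
    while len(prefix) < max_len:
        valid = allowed_next(prefix)
        # Choose the highest-scoring token among the valid candidates only.
        token = max(valid, key=lambda t: score_fn(prefix, t))
        prefix.append(token)
        if token == "<eos>":
            break
    return prefix

def mock_scores(prefix, token):
    """A deliberately miscalibrated scorer that prefers ")" everywhere;
    the constraints still force a well-formed tuple."""
    base = {")": 5.0, "food": 2.0, "positive": 1.5}
    return base.get(token, 0.0)

print(constrained_greedy_decode(mock_scores))
# -> ['(', 'food', ',', 'positive', ')', '<eos>']
```

In practice the same masking idea is applied on top of a sequence-to-sequence model's output distribution (for example, via a per-step allowed-token callback during generation), but the grammar-driven filtering shown here is the essential ingredient.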
📝 Abstract
While aspect-based sentiment analysis (ABSA) has made substantial progress, challenges remain for low-resource languages, which are often overlooked in favour of English. Current cross-lingual ABSA approaches focus on limited, less complex tasks and often rely on external translation tools. This paper introduces a novel approach using constrained decoding with sequence-to-sequence models, eliminating the need for unreliable translation tools and improving cross-lingual performance by 5% on average for the most complex task. The proposed method also supports multi-tasking, which enables solving multiple ABSA tasks with a single model, with constrained decoding boosting results by more than 10%.
We evaluate our approach across seven languages and six ABSA tasks, surpassing state-of-the-art methods and setting new benchmarks for previously unexplored tasks. Additionally, we assess large language models (LLMs) in zero-shot, few-shot, and fine-tuning scenarios. While LLMs perform poorly in zero-shot and few-shot settings, fine-tuning achieves competitive results compared to smaller multilingual models, albeit at the cost of longer training and inference times.
We provide practical recommendations for real-world applications, enhancing the understanding of cross-lingual ABSA methodologies. This study offers valuable insights into the strengths and limitations of cross-lingual ABSA approaches, advancing the state-of-the-art in this challenging research domain.