🤖 AI Summary
This study systematically evaluates the zero-shot cross-lingual aspect-based sentiment analysis (ABSA) capabilities of large language models (LLMs), benchmarking nine prominent models under diverse prompting strategies, including chain-of-thought, self-improvement, self-debate, and self-consistency. Unlike prior work that relies on fine-tuning, this is the first large-scale empirical comparison of zero-shot prompting for multilingual ABSA. Results show that while LLMs exhibit non-negligible cross-lingual ABSA potential, their overall performance remains below that of task-specific fine-tuned models. Notably, simple prompts outperform complex reasoning strategies, and this advantage is most pronounced in high-resource languages such as English, suggesting that the benefit of elaborate prompting diminishes as language resources increase. The core contributions are: (1) a unified multilingual sequence-labeling evaluation framework for ABSA; and (2) the empirical finding of a "simplicity-over-complexity" principle in prompting, offering methodological guidance for efficient ABSA in low-resource settings.
📝 Abstract
Aspect-based sentiment analysis (ABSA), a sequence labeling task, has attracted increasing attention in multilingual contexts. While previous research has focused largely on fine-tuning or training models specifically for ABSA, we evaluate large language models (LLMs) under zero-shot conditions to explore their potential to tackle this challenge with minimal task-specific adaptation. We conduct a comprehensive empirical evaluation of a series of LLMs on multilingual ABSA tasks, investigating various prompting strategies, including vanilla zero-shot, chain-of-thought (CoT), self-improvement, self-debate, and self-consistency, across nine different models. Results indicate that while LLMs show promise in handling multilingual ABSA, they generally fall short of fine-tuned, task-specific models. Notably, simpler zero-shot prompts often outperform more complex strategies, especially in high-resource languages like English. These findings underscore the need for further refinement of LLM-based approaches to effectively address the ABSA task across diverse languages.
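To make the "vanilla zero-shot" setup concrete, here is a minimal sketch of what such a prompt and answer-parsing step could look like. The prompt wording, the JSON output format, and the function names are illustrative assumptions, not the paper's actual templates; the mock reply stands in for a real LLM call.

```python
# Illustrative sketch of a vanilla zero-shot ABSA prompt (assumed template,
# not the paper's exact wording) and a tolerant parser for the model's reply.
import json

def build_zero_shot_prompt(sentence: str, language: str) -> str:
    """Compose a simple zero-shot ABSA prompt for one input sentence."""
    return (
        f"Extract all aspect terms and their sentiment polarity "
        f"(positive, negative, or neutral) from the following {language} "
        f"sentence. Answer with a JSON list of "
        f'{{"aspect": ..., "polarity": ...}} objects.\n'
        f"Sentence: {sentence}"
    )

def parse_reply(reply: str) -> list:
    """Parse the model's JSON reply; return [] if it is malformed."""
    try:
        pairs = json.loads(reply)
        return [p for p in pairs
                if isinstance(p, dict) and {"aspect", "polarity"} <= p.keys()]
    except (json.JSONDecodeError, TypeError):
        return []

# Example with a hand-written mock reply (no API call is made here):
prompt = build_zero_shot_prompt(
    "The battery life is great but the screen is dim.", "English")
mock_reply = ('[{"aspect": "battery life", "polarity": "positive"}, '
              '{"aspect": "screen", "polarity": "negative"}]')
extracted = parse_reply(mock_reply)
```

The more elaborate strategies the paper evaluates (CoT, self-debate, self-consistency) would wrap this same core call, e.g. by sampling several replies and keeping the majority answer for self-consistency.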