🤖 AI Summary
This paper addresses the challenge of high-quality aspect-level labeling for multilingual, cross-domain online reviews, a task hampered by the scarcity of large-scale annotated data. We propose the first fully unsupervised multilingual cross-domain aspect-level label generation framework. Methodologically, it integrates clustering-based aspect candidate discovery, negative-sampling-driven aspect-aware text embedding, and collaborative fine-tuning of multilingual pretrained models (mBERT/XLM-R), supporting both Korean and English across multiple industry domains. Our key contributions are threefold: (1) the first unsupervised, multilingual, cross-domain framework unifying aspect extraction and semantic modeling; (2) automatically generated labels that match human-annotation quality while offering superior consistency and scalability; and (3) empirical validation on downstream tasks demonstrating significant improvements over state-of-the-art LLMs in accuracy, robustness, and generalization.
📝 Abstract
Effectively analyzing online review data is essential across industries. However, many existing studies are limited to specific domains and languages or depend on supervised learning approaches that require large-scale labeled datasets. To address these limitations, we propose a multilingual, scalable, and unsupervised framework for cross-domain aspect detection, designed for multi-aspect labeling of review data spanning multiple languages and domains. In this study, we apply automatic labeling to Korean and English review datasets from various domains and assess the quality of the generated labels through extensive experiments. Aspect category candidates are first extracted through clustering, and each review is then represented as an aspect-aware embedding vector using negative sampling. To evaluate the framework, we conduct multi-aspect labeling and fine-tune several pretrained language models to measure the effectiveness of the automatically generated labels. The resulting models achieve high performance, demonstrating that the labels are suitable for training. Furthermore, comparisons with publicly available large language models highlight the framework's superior consistency and scalability when processing large-scale data. A human evaluation also confirms that the quality of the automatic labels is comparable to those created manually. This study demonstrates the potential of a robust multi-aspect labeling approach that overcomes the limitations of supervised methods and adapts to multilingual, multi-domain environments. Future research will explore automatic review summarization and the integration of artificial intelligence agents to further improve the efficiency and depth of review analysis.
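The two unsupervised steps named in the abstract, clustering for aspect-category candidates and a negative-sampling objective for aspect-aware embeddings, can be sketched on toy data. Everything below is illustrative: the 2-D vectors, the cluster count, and the exact loss form are assumptions for the sketch, whereas the paper derives its embeddings from multilingual pretrained models (mBERT/XLM-R).

```python
import math

def dist2(a, b):
    """Squared Euclidean distance between two equal-length vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def kmeans(points, k, iters=20):
    """Plain k-means for aspect-candidate discovery; the first k points
    seed the centroids so the toy run is deterministic."""
    centroids = [list(p) for p in points[:k]]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda c: dist2(p, centroids[c]))
            clusters[i].append(p)
        for i, members in enumerate(clusters):
            if members:  # keep the old centroid if a cluster empties
                centroids[i] = [sum(d) / len(members) for d in zip(*members)]
    return centroids

def ns_loss(review, pos_aspect, neg_aspects):
    """Negative-sampling-style objective: low loss when the review embedding
    aligns with its aspect centroid and not with the sampled negatives."""
    sigmoid = lambda z: 1.0 / (1.0 + math.exp(-z))
    loss = -math.log(sigmoid(dot(review, pos_aspect)))
    for neg in neg_aspects:
        loss -= math.log(sigmoid(-dot(review, neg)))
    return loss

# Six toy "review embeddings" forming two aspect clusters (say, price vs. service).
reviews = [[0.9, 0.1], [1.0, 0.0], [0.8, 0.2],
           [0.1, 0.9], [0.0, 1.0], [0.2, 0.8]]
a0, a1 = kmeans(reviews, k=2)

# A review should incur a lower loss when its own cluster is the positive.
pos, neg = sorted((a0, a1), key=lambda c: dist2(reviews[0], c))
loss_matched = ns_loss(reviews[0], pos, [neg])
loss_swapped = ns_loss(reviews[0], neg, [pos])
```

In the full framework, the loss would be minimized over model parameters so that each review's embedding moves toward its discovered aspect and away from randomly sampled negative aspects; the sketch only evaluates the objective once to show its shape.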