🤖 AI Summary
This study addresses star-rating classification for telecom customer reviews, systematically evaluating how word embedding methods (including BERT, Word2Vec, and Doc2Vec) affect both classification performance and computational energy consumption, while examining the critical roles of feature engineering and dimensionality reduction. We propose a novel word-vector fusion strategy based on the first principal component (PCA), replacing conventional averaging, and introduce the first unified framework jointly optimizing classification accuracy and energy efficiency. Experimental results demonstrate that the BERT+PCA approach achieves the highest precision, recall, and F1-score among all configurations. Our PCA-based fusion improves the F1-score by up to 4.2% while substantially reducing computational energy consumption. These findings provide a reproducible, energy-aware methodology for efficient text classification in resource-constrained environments.
📝 Abstract
Telecom services are central to everyday life in modern societies. The availability of numerous online forums and discussion platforms enables telecom providers to improve their services by exploring the views of their customers and learning about the common issues those customers face. Natural Language Processing (NLP) tools can be used to process the free text collected. One way of working with such data is to represent text as numerical vectors using one of many word embedding models based on neural networks. This research uses a novel dataset of telecom customer reviews to perform an extensive study showing how different word embedding algorithms affect the text classification process. Several state-of-the-art word embedding techniques are considered, including BERT, Word2Vec, and Doc2Vec, coupled with several classification algorithms. The important issue of feature engineering and dimensionality reduction is addressed, and several PCA-based approaches are explored. In addition, the energy consumption of the different word embeddings is investigated. The findings show that some word embedding models lead to consistently better text classifiers in terms of precision, recall, and F1-score. In particular, for the more challenging classification tasks, BERT combined with PCA achieved the highest performance metrics. Moreover, our proposed PCA approach of combining word vectors using the first principal component shows clear advantages in performance over the traditional approach of taking their average.
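The core idea of the proposed fusion, replacing the per-document average of word vectors with the first principal component of the word-vector matrix, can be sketched as follows. This is a minimal illustration using NumPy and scikit-learn under our own assumptions (the function names and toy vectors are ours, not the paper's implementation):

```python
import numpy as np
from sklearn.decomposition import PCA

def fuse_average(word_vectors: np.ndarray) -> np.ndarray:
    """Traditional baseline: the document vector is the mean of its word vectors."""
    return word_vectors.mean(axis=0)

def fuse_first_pc(word_vectors: np.ndarray) -> np.ndarray:
    """PCA-based fusion (as we understand it from the abstract):
    treat each word vector as a sample and take the first principal
    component of the (n_words, dim) matrix as the document vector.
    The component lies in the same embedding space as the words."""
    pca = PCA(n_components=1)
    pca.fit(word_vectors)
    return pca.components_[0]  # unit-norm vector of shape (dim,)

# Toy example: 5 hypothetical "word vectors" of dimension 8
rng = np.random.default_rng(0)
vecs = rng.normal(size=(5, 8))

doc_avg = fuse_average(vecs)   # shape (8,)
doc_pc1 = fuse_first_pc(vecs)  # shape (8,), unit length
```

Either fused vector can then be fed to a downstream classifier; the paper's finding is that the first-principal-component variant yields better precision, recall, and F1-score than the average.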