🤖 AI Summary
In sentiment analysis (SA), explicit syntactic information from dependency parsing improves accuracy and interpretability, but conventional parsers are slow enough to be a practical deployment bottleneck. This paper proposes the SEquence Labeling Syntactic Parser (SELSP), which recasts dependency parsing as a lightweight sequence labeling task so that syntax can be injected into SA efficiently. SELSP is trained and evaluated on ternary polarity classification, and several sentiment dictionaries are compared on top of it. Against the conventional parser Stanza and the rule-based VADER, SELSP is both faster and more accurate at polarity prediction; against Transformer-based models trained on a 5-label classification task, it remains competitive on ternary classification while being considerably faster. Key contributions: (i) treating dependency parsing as sequence labeling makes syntax-based SA fast enough for practical use; (ii) sentiment dictionaries that capture polarity judgment variation outperform those that ignore it; and (iii) SELSP balances inference speed, predictive accuracy, and model interpretability.
📝 Abstract
Sentiment Analysis (SA) is a crucial aspect of Natural Language Processing (NLP), addressing subjective assessments in textual content. Syntactic parsing is useful in SA because explicit syntactic information improves accuracy while providing explainability, but it tends to be a computational bottleneck in practice because parsing algorithms are slow. This paper addresses that bottleneck by using a SEquence Labeling Syntactic Parser (SELSP) to inject syntax into SA. By treating dependency parsing as a sequence labeling problem, we greatly increase the speed of syntax-based SA. SELSP is trained and evaluated on a ternary polarity classification task, where it is both faster and more accurate at polarity prediction than conventional parsers like Stanza and heuristic approaches that use shallow syntactic rules for SA, like VADER. This combination of speed and accuracy makes SELSP particularly appealing to SA practitioners in both research and industry. In addition, we test several sentiment dictionaries with SELSP to see which one improves performance on polarity prediction; the results show that dictionaries that capture polarity judgment variation outperform dictionaries that ignore it. Finally, we compare SELSP with Transformer-based models trained on a 5-label classification task and show that SELSP is considerably faster at polarity prediction.
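The dictionary-driven ternary setup can be sketched as follows. The tiny lexicon, negation rule, and threshold below are illustrative assumptions, not the paper's actual dictionaries or scoring rules.

```python
# Minimal sketch of lexicon-based ternary polarity prediction.
# LEXICON scores and NEGATORS are hypothetical stand-ins for a real
# sentiment dictionary such as those compared in the paper.
LEXICON = {"good": 1.5, "great": 2.0, "bad": -1.5, "terrible": -2.5}
NEGATORS = {"not", "never", "no"}

def polarity(tokens, threshold=0.5):
    """Map a token list to 'positive', 'negative', or 'neutral'."""
    score, negate = 0.0, False
    for tok in tokens:
        t = tok.lower()
        if t in NEGATORS:
            negate = True       # flip the polarity of the next hit
            continue
        if t in LEXICON:
            score += -LEXICON[t] if negate else LEXICON[t]
            negate = False
    if score > threshold:
        return "positive"
    if score < -threshold:
        return "negative"
    return "neutral"

print(polarity("This movie was not good".split()))  # -> negative
```

A syntax-aware system like SELSP can scope such negation and modification decisions by dependency structure rather than by token adjacency, which is where the parse pays off.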