Enhancing Hate Speech Detection on Social Media: A Comparative Analysis of Machine Learning Models and Text Transformation Approaches

📅 2026-02-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the urgent need to detect and neutralize hate speech proliferating on social media. It systematically evaluates CNN, LSTM, BERT, and their variants for hate speech identification, and proposes a text transformation method that automatically rewrites harmful content into neutral expressions while preserving semantics. A hybrid model integrating the strengths of multiple architectures is also developed and significantly improves detection accuracy in specific scenarios. Experimental results show that BERT-based models achieve superior performance owing to their deep contextual understanding, while the proposed text transformation strategy effectively mitigates the adverse impact of toxic content. The findings validate the feasibility and efficacy of a framework that jointly performs detection and neutralization.
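The summary does not spell out the transformation mechanism (the paper's approach is presumably learned, e.g. BERT-based rewriting). As an illustration only, a minimal lexicon-based sketch conveys the idea of rewriting offensive wording while leaving the rest of the sentence intact; the `NEUTRAL_MAP` entries and `neutralize` function below are hypothetical stand-ins, not the paper's method:

```python
import re

# Hypothetical lexicon mapping offensive terms to neutral paraphrases.
# A real system would use a learned rewriter rather than a fixed word list.
NEUTRAL_MAP = {
    "idiots": "people",
    "hate": "dislike",
    "stupid": "questionable",
}

def neutralize(text: str) -> str:
    """Replace mapped offensive tokens with neutral ones, case-insensitively,
    preserving the surrounding sentence (a rough stand-in for
    semantics-preserving rewriting)."""
    def swap(match: re.Match) -> str:
        word = match.group(0)
        repl = NEUTRAL_MAP[word.lower()]
        return repl.capitalize() if word[0].isupper() else repl

    pattern = re.compile(
        r"\b(" + "|".join(map(re.escape, NEUTRAL_MAP)) + r")\b",
        re.IGNORECASE,
    )
    return pattern.sub(swap, text)

print(neutralize("I hate these stupid posts"))
# → I dislike these questionable posts
```

A lexicon swap like this cannot handle context-dependent toxicity (sarcasm, slurs embedded in otherwise neutral phrasing), which is precisely the gap the paper's contextual models are meant to close.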

📝 Abstract
The proliferation of hate speech on social media platforms has necessitated the development of effective detection and moderation tools. This study evaluates the efficacy of various machine learning models in identifying hate speech and offensive language, and investigates the potential of text transformation techniques to neutralize such content. We compare traditional models such as CNNs and LSTMs with advanced neural models such as BERT and its derivatives, and explore hybrid models that combine features of different architectures. Our results indicate that while advanced models like BERT achieve superior accuracy owing to their deep contextual understanding, hybrid models perform better in certain scenarios. Furthermore, we introduce text transformation approaches that convert negative expressions into neutral ones, thereby potentially mitigating the impact of harmful content. We discuss the implications of these findings, highlighting the strengths and limitations of current technologies and proposing future directions for more robust hate speech detection systems.
Problem

Research questions and friction points this paper is trying to address.

hate speech
social media
content moderation
offensive language
online toxicity
Innovation

Methods, ideas, or system contributions that make the work stand out.

hate speech detection
text transformation
hybrid neural models
BERT
content neutralization