🤖 AI Summary
Social media recommendation algorithms often inadvertently expose users to harmful content. Method: This paper proposes a dynamic re-ranking framework leveraging large language models (LLMs) in zero-shot and few-shot settings, eliminating reliance on large-scale human-annotated data. It introduces two novel exposure-aware evaluation metrics designed specifically to measure how well re-ranking mitigates harmful content exposure, bypassing traditional classifiers' dependence on labeled data and fixed thresholds to improve scalability and real-time adaptability. Results: Evaluated across three public datasets, three baseline recommendation models, and three configurations, the approach significantly reduces harmful content exposure compared with existing proprietary moderation systems, while preserving recommendation relevance and ranking quality. Key contributions include: (1) an exposure-aware re-ranking paradigm; (2) a zero-shot/few-shot dynamic intervention mechanism; and (3) a generalizable safety–utility co-optimization framework that jointly balances content safety and recommendation effectiveness.
📝 Abstract
Social media platforms utilize Machine Learning (ML) and Artificial Intelligence (AI) powered recommendation algorithms to maximize user engagement, which can result in inadvertent exposure to harmful content. Current moderation efforts, reliant on classifiers trained with extensive human-annotated data, struggle with scalability and with adapting to new forms of harm. To address these challenges, we propose a novel re-ranking approach using Large Language Models (LLMs) in zero-shot and few-shot settings. Our method dynamically assesses and re-ranks content sequences, effectively mitigating harmful content exposure without requiring extensive labeled data. Alongside traditional ranking metrics, we also introduce two new metrics to evaluate the effectiveness of re-ranking in reducing exposure to harmful content. Through experiments across three datasets, three models, and three configurations, we demonstrate that our LLM-based approach significantly outperforms existing proprietary moderation approaches, offering a scalable and adaptable solution for harm mitigation.
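To make the core idea concrete, here is a minimal sketch of what an exposure-aware metric and a safety-aware re-ranking step could look like. This is an illustrative assumption, not the paper's actual formulation: the function names (`harmful_exposure`, `rerank`), the DCG-style position discount, and the demote-to-tail strategy are all hypothetical choices; in the paper, harm judgments come from zero-shot/few-shot LLM prompts rather than a precomputed set.

```python
from math import log2

def harmful_exposure(ranking, harmful, k=10):
    """Position-discounted exposure of harmful items in the top-k.

    `ranking` is an ordered list of item IDs; `harmful` is the set of
    item IDs judged harmful (e.g., by a zero-shot LLM prompt, assumed
    precomputed here). Items ranked higher contribute more exposure,
    via a DCG-style 1/log2(rank + 2) discount.
    """
    return sum(
        1.0 / log2(rank + 2)  # rank 0 -> 1.0, rank 1 -> ~0.63, ...
        for rank, item in enumerate(ranking[:k])
        if item in harmful
    )

def rerank(ranking, harmful):
    """Simple safety-aware re-ranking: demote harmful items to the tail
    while preserving the relative order of all remaining items."""
    safe = [item for item in ranking if item not in harmful]
    risky = [item for item in ranking if item in harmful]
    return safe + risky
```

Under this sketch, demoting a harmful item from rank 1 to the tail strictly lowers the exposure score while leaving the order of safe items (and thus much of the ranking quality) intact, which is the safety-utility trade-off the paper's metrics are meant to capture.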