🤖 AI Summary
Current social media content moderation suffers from low efficiency, limited accuracy, poor interpretability, reliance on noisy labels, and misalignment with human review policies. To address these challenges, we propose Hi-Guard, a novel multimodal moderation framework introducing *policy-aligned decision-making*: it constructs a hierarchical taxonomy with path-based classification; combines rule-guided prompt injection with a lightweight-to-strong model cascade; and applies Group Relative Policy Optimization (GRPO) with a multi-level soft-margin reward to explicitly align reasoning traces with moderation rules. Extensive experiments and real-world deployment demonstrate that Hi-Guard significantly improves classification accuracy, cross-domain generalization, and explanation quality while maintaining high throughput, thereby enhancing decision transparency and trustworthiness. This work establishes a new paradigm for scalable, compliant, and auditable content safety systems.
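The cascade described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the model calls are stubbed, and all names (`moderate`, `is_risky`, `classify`) are hypothetical.

```python
# Hypothetical sketch of a two-stage moderation cascade: a lightweight
# binary filter screens content, and only suspect items reach a stronger
# fine-grained classifier. Stubs stand in for real models.
from typing import Callable, List, Optional

def moderate(content: str,
             binary_filter: Callable[[str], bool],
             fine_grained: Callable[[str], List[str]]) -> Optional[List[str]]:
    """Return a taxonomy path for risky content, or None for safe content."""
    if not binary_filter(content):   # stage 1: cheap safe/unsafe screen
        return None                  # most traffic exits here
    return fine_grained(content)     # stage 2: path-based classification

# Stub models for illustration only.
is_risky = lambda text: "attack" in text
classify = lambda text: ["risk", "violence", "threat"]

print(moderate("nice weather today", is_risky, classify))  # → None
print(moderate("plan the attack", is_risky, classify))     # → ['risk', 'violence', 'threat']
```

The design point is that the expensive model only runs on the small fraction of content the cheap filter flags, which is how the cascade keeps throughput high.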
📝 Abstract
Social platforms have revolutionized information sharing, but also accelerated the dissemination of harmful and policy-violating content. To ensure safety and compliance at scale, moderation systems must go beyond efficiency and offer accuracy and interpretability. However, current approaches largely rely on noisy, label-driven learning, lack alignment with moderation rules, and produce opaque decisions that hinder human review. Therefore, we propose Hierarchical Guard (Hi-Guard), a multimodal moderation framework that introduces a new policy-aligned decision paradigm. The term "Hierarchical" reflects two key aspects of our system design: (1) a hierarchical moderation pipeline, where a lightweight binary model first filters safe content and a stronger model handles fine-grained risk classification; and (2) a hierarchical taxonomy in the second stage, where the model performs path-based classification over categories ranging from coarse to fine-grained. To ensure alignment with evolving moderation policies, Hi-Guard directly incorporates rule definitions into the model prompt. To further enhance structured prediction and reasoning, we introduce a multi-level soft-margin reward and optimize with Group Relative Policy Optimization (GRPO), penalizing semantically adjacent misclassifications and improving explanation quality. Extensive experiments and real-world deployment demonstrate that Hi-Guard achieves superior classification accuracy, generalization, and interpretability, paving the way toward scalable, transparent, and trustworthy content safety systems. Code is available at: https://github.com/lianqi1008/Hi-Guard.
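The multi-level soft-margin idea can be illustrated with a toy reward. The paper's exact formula is not reproduced here; the margin value, category names, and the prefix-based scoring below are illustrative assumptions, shown only to convey that errors sharing a taxonomy prefix (semantically adjacent mistakes) are penalized less than distant ones.

```python
# Hypothetical multi-level soft-margin reward over taxonomy paths
# (illustrative only; not the paper's actual reward function).

def path_reward(pred_path, gold_path, margin=0.2):
    """Score a predicted taxonomy path against the gold path.

    Exact matches score 1.0; partial prefix matches earn proportional
    credit minus a soft margin for each wrong level, so adjacent
    misclassifications are penalized less than distant ones.
    """
    depth = max(len(pred_path), len(gold_path))
    shared = 0  # levels matching from the root downward
    for p, g in zip(pred_path, gold_path):
        if p != g:
            break
        shared += 1
    if shared == depth:
        return 1.0  # fully correct path
    return shared / depth - margin * (depth - shared) / depth

# Illustrative categories, not the paper's taxonomy.
gold = ["risk", "violence", "graphic"]
print(path_reward(["risk", "violence", "graphic"], gold))  # exact match
print(path_reward(["risk", "violence", "threat"], gold))   # adjacent sibling
print(path_reward(["safe"], gold))                         # distant error
```

Under GRPO, a group of sampled responses would be scored with a reward of this shape, so the policy gradient favors predictions closer to the correct path even when they are not exactly right.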