Towards Trustworthy Multimodal Moderation via Policy-Aligned Reasoning and Hierarchical Labeling

📅 2025-08-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current social media content moderation suffers from low efficiency, suboptimal accuracy, poor interpretability, reliance on noisy labels, and misalignment with human review policies. To address these challenges, the authors propose Hi-Guard, a multimodal moderation framework built around *policy-aligned decision-making*: it constructs a hierarchical taxonomy with path-based classification; combines rule-guided prompt injection with a lightweight-to-strong model cascade; and adopts Group Relative Policy Optimization (GRPO) with a multi-level soft-margin reward to explicitly align reasoning traces with review rules. Extensive experiments and real-world deployment show that Hi-Guard significantly improves classification accuracy, cross-domain generalization, and explanation quality while maintaining high throughput, enhancing decision transparency and trustworthiness. This work points toward scalable, compliant, and auditable content safety systems.
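The lightweight-to-strong cascade the summary describes can be sketched as follows. This is a minimal illustrative mock-up, not the paper's implementation: `lightweight_risk_score`, `strong_fine_grained_label`, and the routing `threshold` are hypothetical stand-ins for the actual binary filter and fine-grained classifier.

```python
def lightweight_risk_score(post: dict) -> float:
    """Stand-in for the lightweight binary safety model (stage 1)."""
    risky_terms = {"scam", "violence", "weapon"}
    words = set(post["text"].lower().split())
    return len(words & risky_terms) / max(len(words), 1)

def strong_fine_grained_label(post: dict) -> str:
    """Stand-in for the stronger path-based classifier (stage 2),
    returning a root-to-leaf taxonomy path."""
    text = post["text"].lower()
    if "scam" in text:
        return "harm/fraud/scam"
    if "weapon" in text:
        return "harm/violence/weapons"
    return "harm/other"

def moderate(post: dict, threshold: float = 0.1) -> str:
    """Stage 1 filters clearly safe content; only the remainder is
    routed to the more expensive fine-grained model."""
    if lightweight_risk_score(post) < threshold:
        return "safe"
    return strong_fine_grained_label(post)

print(moderate({"text": "cute cat video"}))        # -> safe
print(moderate({"text": "crypto scam giveaway"}))  # -> harm/fraud/scam
```

Routing most traffic through the cheap first stage is what lets the system keep high throughput while reserving the stronger model for genuinely risky content.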

📝 Abstract
Social platforms have revolutionized information sharing, but also accelerated the dissemination of harmful and policy-violating content. To ensure safety and compliance at scale, moderation systems must go beyond efficiency and offer accuracy and interpretability. However, current approaches largely rely on noisy, label-driven learning, lacking alignment with moderation rules and producing opaque decisions that hinder human review. Therefore, we propose Hierarchical Guard (Hi-Guard), a multimodal moderation framework that introduces a new policy-aligned decision paradigm. The term "Hierarchical" reflects two key aspects of our system design: (1) a hierarchical moderation pipeline, where a lightweight binary model first filters safe content and a stronger model handles fine-grained risk classification; and (2) a hierarchical taxonomy in the second stage, where the model performs path-based classification over a hierarchical taxonomy ranging from coarse to fine-grained levels. To ensure alignment with evolving moderation policies, Hi-Guard directly incorporates rule definitions into the model prompt. To further enhance structured prediction and reasoning, we introduce a multi-level soft-margin reward and optimize with Group Relative Policy Optimization (GRPO), penalizing semantically adjacent misclassifications and improving explanation quality. Extensive experiments and real-world deployment demonstrate that Hi-Guard achieves superior classification accuracy, generalization, and interpretability, paving the way toward scalable, transparent, and trustworthy content safety systems. Code is available at: https://github.com/lianqi1008/Hi-Guard.
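The multi-level soft-margin reward over taxonomy paths can be illustrated with a small sketch: errors that diverge deeper in the hierarchy (semantically adjacent misclassifications) keep the credit earned at shallower levels and incur only a small penalty, while root-level errors earn nothing. The level weights and the margin value here are illustrative assumptions, not the paper's actual hyperparameters.

```python
def path_reward(pred: list, gold: list,
                level_weights=(0.5, 0.3, 0.2),
                margin: float = 0.1) -> float:
    """Accumulate weight for each taxonomy level matched from the root;
    subtract a soft margin at the first level that diverges."""
    reward = 0.0
    for w, p, g in zip(level_weights, pred, gold):
        if p == g:
            reward += w
        else:
            reward -= margin  # adjacent error: small penalty, keep earlier credit
            break
    return reward

gold = ["harm", "fraud", "scam"]
print(path_reward(["harm", "fraud", "scam"], gold))      # full match, reward 1.0
print(path_reward(["harm", "fraud", "phishing"], gold))  # leaf-level error, ~0.7
print(path_reward(["safe"], gold))                       # root-level error, -0.1
```

The graded signal is what makes this a useful reinforcement-learning reward: a leaf-level mistake is still ranked well above a root-level one, so optimization pushes predictions toward the correct branch even before they reach the correct leaf.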
Problem

Research questions and friction points this paper is trying to address.

Ensuring accurate and interpretable content moderation on social platforms
Aligning moderation systems with evolving policy rules while keeping decisions transparent
Improving multimodal moderation via hierarchical classification and policy-aligned reasoning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Two-stage cascade: lightweight binary filtering followed by fine-grained risk classification
Policy-aligned rule integration in model prompts
Multi-level soft-margin reward with GRPO optimization
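GRPO's core step, computing each sampled response's advantage relative to its own group rather than via a learned value function, can be sketched like this. The reward values are made-up numbers and `eps` is an illustrative stabilizer; this is a sketch of the general GRPO advantage computation, not the paper's training code.

```python
from statistics import mean, pstdev

def group_relative_advantages(rewards: list, eps: float = 1e-8) -> list:
    """Standardize each sampled response's reward against its group:
    advantage_i = (r_i - mean) / (std + eps).
    eps avoids division by zero when all rewards in the group tie."""
    mu = mean(rewards)
    sigma = pstdev(rewards)
    return [(r - mu) / (sigma + eps) for r in rewards]

# Four responses sampled for one input, each scored by the path reward:
advantages = group_relative_advantages([1.0, 0.7, -0.1, 1.0])
print(advantages)  # positive for above-average responses, negative below
```

Because advantages are centered within each group, responses whose reasoning and predicted path beat their siblings are reinforced and below-average ones are suppressed, without training a separate critic.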