Revealing Positive and Negative Role Models to Help People Make Good Decisions

📅 2026-03-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the challenge that individuals in social networks may emulate negative role models due to uncertainty about their true labels, thereby undermining social welfare. To mitigate this, the authors investigate how a social planner can strategically disclose role-model labels under a limited disclosure budget, steering users toward positive exemplars and maximizing overall welfare. They propose a proxy welfare function that preserves submodularity, overcoming the issue that revealing negative labels typically breaks submodularity, and design a constant-factor approximation algorithm. Novel mechanisms are introduced to ensure fairness, prioritize intervention for high-risk individuals, and expand coverage radius. The theoretical guarantees include a constant approximation ratio when each agent has at most a constant number of negative neighbors, as well as equitable welfare gains across diverse population groups; experiments on four real-world datasets support these results.

📝 Abstract
We consider a setting where agents take action by following their role models in a social network, and study strategies for a social planner to help agents by revealing whether the role models are positive or negative. Specifically, agents observe a local neighborhood of possible role models they can emulate, but do not know their true labels. Revealing a positive label encourages emulation, while revealing a negative one redirects agents toward alternative options. The social planner observes all labels, but operates under a limited disclosure budget that it selectively allocates to maximize social welfare (the expected number of agents who emulate adjacent positive role models). We consider both algorithms and hardness results for welfare maximization, and provide a sample-complexity guarantee when the planner observes a sampled subset of agents. We also consider fairness guarantees when agents belong to different groups. It is a technical challenge that the ability to reveal negative role models breaks submodularity. We thus introduce a proxy welfare function that remains submodular even when revealed targets include negative ones. When each agent has at most a constant number of negative target neighbors, we use this proxy to achieve a constant-factor approximation to the true optimal welfare gain. When agents belong to different groups, we also show that each group's welfare gain is within a constant factor of the optimum achievable if the full budget were allocated to that group. Beyond this basic model, we also propose an intervention model that directly connects high-risk agents to positive role models, and a coverage radius model that expands the visibility of selected positive role models. Lastly, we conduct extensive experiments on four real-world datasets to support our theoretical results and assess the effectiveness of the proposed algorithms.
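The abstract's core mechanism is a budgeted greedy selection over a submodular proxy welfare function. The sketch below is not the paper's algorithm; it is a minimal illustration under simplifying assumptions, where the proxy welfare is plain coverage (the number of agents with at least one revealed positive role-model neighbor), which is monotone submodular. All names (`greedy_reveal`, `coverage`, the toy graph) are hypothetical.

```python
def greedy_reveal(candidates, proxy_welfare, budget):
    """Budgeted greedy selection for a monotone submodular proxy_welfare.

    candidates: set of role-model ids whose labels may be revealed
    proxy_welfare: callable set -> number, assumed monotone submodular
    budget: maximum number of labels the planner may disclose
    """
    revealed = set()
    remaining = set(candidates)
    for _ in range(budget):
        best, best_gain = None, 0.0
        for c in remaining:
            # Marginal welfare gain of additionally revealing c.
            gain = proxy_welfare(revealed | {c}) - proxy_welfare(revealed)
            if gain > best_gain:
                best, best_gain = c, gain
        if best is None:  # no candidate adds positive marginal welfare
            break
        revealed.add(best)
        remaining.remove(best)
    return revealed

# Toy instance: each agent observes some positive role models; the proxy
# counts agents with at least one revealed positive neighbor (coverage).
neighbors = {           # agent -> positive role models it can see
    "a1": {"r1", "r2"},
    "a2": {"r2"},
    "a3": {"r3"},
}

def coverage(revealed):
    return sum(1 for nbrs in neighbors.values() if nbrs & revealed)

picked = greedy_reveal({"r1", "r2", "r3"}, coverage, budget=2)
# picked == {"r2", "r3"}: revealing r2 covers a1 and a2, r3 covers a3.
```

For monotone submodular objectives, this greedy rule attains the classic (1 - 1/e) guarantee; the paper's contribution is constructing a proxy that stays submodular even when negative labels are revealed, which this coverage toy does not model.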
Problem

Research questions and friction points this paper is trying to address.

social networks
role models
information disclosure
social welfare
fairness
Innovation

Methods, ideas, or system contributions that make the work stand out.

submodular optimization
role model intervention
social welfare maximization
fairness in networks
negative influence mitigation