Large Language Model-Informed Feature Discovery Improves Prediction and Interpretation of Credibility Perceptions of Visual Content

📅 2025-04-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
Predicting and explaining the credibility of visual content on social media remains challenging due to the lack of interpretable, quantifiable features grounded in human perception and social science principles. Method: The paper proposes an LLM-guided, interpretable feature discovery paradigm: GPT-4o is queried with targeted prompt engineering to generate visual-semantic explanations of credibility, which are then formalized into reusable, quantifiable credibility-driving factors (e.g., “information concreteness”, “image format”) and integrated into a supervised regression model. Contribution/Results: The approach bridges the representational power of multimodal large language models with domain-specific interpretability requirements, offering a systematic translation of LLM reasoning into structured, measurable features. Evaluated on 4,191 cross-domain visual posts rated by 5,355 crowd workers, the model outperforms zero-shot GPT-based prediction by 13% in R², demonstrating both the effectiveness and generalizability of interpretable features for visual credibility modeling.
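
To make the pipeline concrete, below is a minimal sketch of the feature-extraction step, assuming the OpenAI Python SDK; the feature list, prompt wording, and the extract_feature_scores helper are hypothetical illustrations, not the paper's released prompts or taxonomy.

```python
# Minimal sketch of the LLM-informed feature-extraction step (assumptions:
# OpenAI Python SDK, hypothetical prompt and feature list; the paper's
# actual prompts and feature definitions may differ).
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical credibility-driving factors to score for each post.
FEATURES = ["information_concreteness", "image_format_is_screenshot", "source_cues"]

def extract_feature_scores(image_url: str, caption: str) -> dict:
    """Ask GPT-4o to rate each interpretable factor on a 1-5 scale."""
    prompt = (
        "Rate this social media post from 1 (low) to 5 (high) on each factor "
        f"and return a JSON object with exactly these keys: {FEATURES}. "
        f"Post caption: {caption!r}"
    )
    resp = client.chat.completions.create(
        model="gpt-4o",
        response_format={"type": "json_object"},  # force parseable JSON output
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": prompt},
                {"type": "image_url", "image_url": {"url": image_url}},
            ],
        }],
    )
    return json.loads(resp.choices[0].message.content)
```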

📝 Abstract
In today's visually dominated social media landscape, predicting the perceived credibility of visual content and understanding what drives human judgment are crucial for countering misinformation. However, these tasks are challenging due to the diversity and richness of visual features. We introduce a Large Language Model (LLM)-informed feature discovery framework that leverages multimodal LLMs, such as GPT-4o, to evaluate content credibility and explain its reasoning. We extract and quantify interpretable features using targeted prompts and integrate them into machine learning models to improve credibility predictions. We tested this approach on 4,191 visual social media posts across eight topics in science, health, and politics, using credibility ratings from 5,355 crowdsourced workers. Our method outperformed zero-shot GPT-based predictions by 13 percent in R², and revealed key features like information concreteness and image format. We discuss the implications for misinformation mitigation, visual credibility, and the role of LLMs in social science.
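
As a companion sketch for the supervised modeling step the abstract describes, the snippet below fits a regression on quantified feature scores and evaluates held-out R². The synthetic data, the Ridge model choice, and the train/test split are illustrative assumptions, not the paper's exact experimental setup.

```python
# Sketch of the downstream modeling step: fit a supervised regressor on
# LLM-derived feature scores and report held-out R² against crowd ratings.
# Data, model, and split are illustrative assumptions.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Toy stand-ins: X holds one row of quantified feature scores per post
# (e.g., the dicts returned by extract_feature_scores above); y holds the
# mean crowdsourced credibility rating for each post.
X = rng.integers(1, 6, size=(200, 3)).astype(float)
y = 0.6 * X[:, 0] + 0.3 * X[:, 1] - 0.2 * X[:, 2] + rng.normal(0.0, 0.3, size=200)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

model = Ridge(alpha=1.0).fit(X_train, y_train)
print("Held-out R²:", round(r2_score(y_test, model.predict(X_test)), 3))
```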
Problem

Research questions and friction points this paper is trying to address.

Predicting credibility of visual social media content
Understanding features driving human credibility judgments
Improving misinformation mitigation using LLM-informed features
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM-informed feature discovery framework
Multimodal LLMs for credibility evaluation
Interpretable features improve prediction accuracy