🤖 AI Summary
This study investigates how generative AI (GenAI) can effectively support local communities in co-constructing cultural heritage narratives, addressing critical challenges in image generation—including cultural feature distortion, loss of contextual detail, and implicit bias. Employing Stable Diffusion–based image synthesis, human-AI collaborative workshops, and qualitative narrative analysis, the research identifies three culturally grounded narrative strategies: “illuminating,” “amplifying,” and “reinterpreting.” A Cultural Feature Bias Assessment Framework is developed and empirically applied, revealing that while GenAI enhances narrative agency, it suffers from weak cultural accuracy, low controllability, and insufficient bias mitigation. The study proposes three complementary intervention pathways: prompt engineering optimization, lightweight model fine-tuning, and construction of culturally sensitive training datasets. These contributions advance both theoretical understanding and practical methodologies for fostering trustworthy, equitable human-AI collaboration in cultural heritage contexts.
📝 Abstract
Visitors to cultural heritage sites often encounter official information, while local people's unofficial stories remain invisible. To explore the expression of local narratives, we conducted a workshop with 20 participants utilizing Generative AI (GenAI) to support visual narratives, asking them to use Stable Diffusion to create images of familiar cultural heritage sites, as well as images of unfamiliar ones for comparison. The results revealed three narrative strategies and highlighted GenAI's strengths in illuminating, amplifying, and reinterpreting personal narratives. However, GenAI showed limitations in meeting detailed requirements, portraying cultural features, and avoiding bias, which were particularly pronounced with unfamiliar sites due to participants' lack of local knowledge. To address these challenges, we recommend: providing detailed explanations, prompt engineering, and fine-tuning AI models to reduce uncertainties; using objective references to mitigate inaccuracies arising from participants' inability to recognize errors or misconceptions; and curating datasets to train AI models capable of accurately portraying cultural features.
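The prompt-engineering recommendation above can be sketched as follows. This is an illustrative example only, not the study's actual workshop materials: the helper functions, field names, and the example site are all hypothetical, and the commented-out generation step assumes the Hugging Face `diffusers` library.

```python
# Illustrative sketch of structured prompt engineering for heritage imagery.
# All names below (build_heritage_prompt, the example site and features) are
# hypothetical; they are not taken from the study.

def build_heritage_prompt(site, cultural_features, narrative,
                          style="documentary photograph"):
    """Compose a detailed prompt that foregrounds specific local cultural
    features, in line with the recommendation to give the model detailed
    explanations rather than generic site names."""
    features = ", ".join(cultural_features)
    return f"{style} of {site}, featuring {features}, {narrative}, highly detailed"

def build_negative_prompt():
    # Steer the model away from generic or stereotyped renderings.
    return "generic architecture, stereotyped clothing, watermark, low detail"

prompt = build_heritage_prompt(
    site="a riverside ancestral hall",  # hypothetical example site
    cultural_features=["carved wooden eaves", "stone lion guardians"],
    narrative="as remembered by a local resident at dusk",
)
print(prompt)

# The prompt could then be passed to Stable Diffusion, e.g. via `diffusers`
# (requires a model download and a GPU, so shown here only as a comment):
# from diffusers import StableDiffusionPipeline
# pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
# image = pipe(prompt, negative_prompt=build_negative_prompt()).images[0]
```

Structuring the prompt from explicit cultural features (rather than a bare site name) is one simple way to address the loss of contextual detail the study observed.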