🤖 AI Summary
This study addresses the challenge of modeling visual memorability without large-scale human annotations by proposing an unsupervised learning paradigm grounded in natural-language “tip-of-the-tongue” (ToT) recall descriptions. Methodologically, we construct the first large-scale ToT dataset—comprising 82K videos paired with open-ended recall texts scraped from platforms such as Reddit—and formulate a multimodal ToT retrieval task. Leveraging vision-language foundation models, we jointly optimize recall generation and cross-modal retrieval via contrastive learning and fine-tuning on online ToT queries. Our contributions are threefold: (1) the first large-scale, unsupervised visual memorability dataset; (2) the first formalization and modeling of the fine-grained memory signals embedded in natural-language recall; and (3) the first model capable of multimodal ToT retrieval, which surpasses strong baselines—including GPT-4o—in recall generation and significantly improves memorability prediction.
📝 Abstract
Visual content memorability has intrigued the scientific community for decades, with applications ranging from understanding nuanced aspects of human memory to enhancing content design. A significant challenge in progressing the field lies in the expensive process of collecting memorability annotations from humans, which limits the diversity and scalability of datasets for modeling visual content memorability. Most existing datasets collect only aggregate memorability scores for visual content, failing to capture the nuanced memorability signals present in natural, open-ended recall descriptions. In this work, we introduce the first large-scale unsupervised dataset designed explicitly for modeling visual memorability signals, containing over 82,000 videos accompanied by descriptive recall data. We leverage tip-of-the-tongue (ToT) retrieval queries from online platforms such as Reddit. We demonstrate that our unsupervised dataset provides rich signals for two memorability-related tasks: recall generation and ToT retrieval. Large vision-language models fine-tuned on our dataset outperform state-of-the-art models such as GPT-4o in generating open-ended memorability descriptions for visual content. We also employ a contrastive training strategy to create the first model capable of performing multimodal ToT retrieval. Our dataset and models present a novel direction, facilitating progress in visual content memorability research.
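The contrastive training strategy mentioned above pairs each ToT recall text with its target video and treats other in-batch videos as negatives. A minimal sketch of this idea, assuming a standard symmetric InfoNCE objective over text and video embeddings (the paper's actual loss, encoders, and hyperparameters are not specified here and may differ):

```python
import numpy as np

def info_nce_loss(text_emb, video_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired embeddings.

    text_emb, video_emb: (batch, dim) arrays; row i of each is a matched
    (ToT query, target video) pair. Matched pairs are positives; every
    other in-batch pairing serves as a negative.
    """
    # L2-normalize so dot products are cosine similarities
    t = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    v = video_emb / np.linalg.norm(video_emb, axis=1, keepdims=True)
    logits = t @ v.T / temperature  # (batch, batch) similarity matrix
    labels = np.arange(logits.shape[0])  # positives lie on the diagonal

    def cross_entropy(lg, lb):
        # numerically stable log-softmax cross-entropy per row
        shifted = lg - lg.max(axis=1, keepdims=True)
        log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
        return -log_probs[np.arange(lg.shape[0]), lb].mean()

    # average the text-to-video and video-to-text directions
    return 0.5 * (cross_entropy(logits, labels) + cross_entropy(logits.T, labels))
```

At retrieval time, the same similarity matrix is used directly: a ToT query is embedded and the videos are ranked by cosine similarity to it.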