Future You: Designing and Evaluating Multimodal AI-generated Digital Twins for Strengthening Future Self-Continuity

📅 2025-12-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates how multimodal AI-generated digital twins of one’s future self—comprising text, synthetic speech, and photorealistic avatars—affect psychological connectedness to the future self, emotional well-being, and intertemporal decision-making. We constructed personalized digital twins using voice cloning, age-progression facial rendering, autobiographical narrative generation, and large language models (with Claude 4 as the core). A controlled experimental design compared the effects of three modalities. Results show that all modalities significantly enhance future self-continuity, positive affect, and behavioral motivation; no significant overall difference emerged across modalities, but interaction quality—particularly perceived realism and persuasive efficacy—served as a critical mediating factor. Crucially, Claude 4 outperformed baseline LLMs in amplifying these psychological effects. This work provides the first systematic empirical validation of multimodal digital twins for temporal self-intervention and reveals that interaction quality, rather than modality per se, is the primary driver of efficacy.

📝 Abstract
What if users could meet their future selves today? AI-generated future selves simulate meaningful encounters with a digital twin decades in the future. As AI systems advance, combining cloned voices, age-progressed facial rendering, and autobiographical narratives, a central question emerges: Does the modality of these future selves alter their psychological and affective impact? How might a text-based chatbot, a voice-only system, or a photorealistic avatar shape present-day decisions and our feeling of connection to the future? We report a randomized controlled study (N=92) evaluating three modalities of AI-generated future selves (text, voice, avatar) against a neutral control condition. We also report a systematic model evaluation between Claude 4 and three other Large Language Models (LLMs), assessing Claude 4 across psychological and interaction dimensions and establishing conversational AI quality as a critical determinant of intervention effectiveness. All personalized modalities strengthened Future Self-Continuity (FSC), emotional well-being, and motivation compared to control, with the avatar producing the largest vividness gains, yet no significant differences emerged between formats. Interaction quality metrics, particularly persuasiveness, realism, and user engagement, emerged as robust predictors of psychological and affective outcomes, indicating that how compelling the interaction feels matters more than the form it takes. Content analysis found thematic patterns: text emphasized career planning, while voice and avatar facilitated personal reflection. Claude 4 outperformed ChatGPT 3.5, Llama 4, and Qwen 3 in enhancing psychological, affective, and FSC outcomes.
Problem

Research questions and friction points this paper is trying to address.

Evaluates how different AI modalities affect the psychological impact of future-self digital twins
Assesses interaction quality as a key predictor of intervention effectiveness across modalities
Compares the performance of Claude 4 against other LLMs in enhancing future self-continuity
Innovation

Methods, ideas, or system contributions that make the work stand out.

AI-generated digital twins with multimodal interactions
Randomized controlled study comparing text, voice, avatar modalities
Claude 4 outperforms other LLMs in psychological outcomes
Constanze Albrecht
MIT Media Lab, Cambridge, MA, USA
Chayapatr Archiwaranguprok
MIT Media Lab, Cambridge, MA, USA
Rachel Poonsiriwong
Harvard University, Cambridge, MA, USA
Awu Chen
MIT Media Lab, Cambridge, MA, USA
Peggy Yin
Stanford University, Stanford, CA, USA
Monchai Lertsutthiwong
KASIKORN Labs, Nonthaburi, Thailand
Kavin Winson
KASIKORN Labs, Nonthaburi, Thailand
Hal Hershfield
UCLA, Los Angeles, CA, USA
Pattie Maes
Professor of Media Arts and Sciences, MIT
human computer interaction, artificial intelligence, digital health
Pat Pataranutaporn
PhD Researcher at MIT Media Lab, Massachusetts Institute of Technology
Human-AI Interaction, Cyborg Psychology, Human Flourishing, Bio-Digital Interfaces