🤖 AI Summary
This study investigates how individuals decide between generative AI and human therapists when seeking psychological support. Grounded in an extended Health Belief Model (HBM) adapted to multi-modality mental health contexts, it analyzes large-scale survey data from a university student sample and a nationally representative adult sample. Repeated-measures ANOVA and LASSO regression identify the belief factors most predictive of intention to use each option. Results indicate that emotional connection and personalization constitute the core advantages of human therapists, whereas accessibility and low cost drive AI adoption; privacy concerns and reliability doubts, however, substantially inhibit acceptance. The study's key contribution lies in the first theoretical adaptation of the HBM to human–AI collaborative mental health service contexts, revealing a critical divergence in adoption logic: students predominantly perceive AI as a supplementary tool, whereas adults are more likely to view it as a substitute for human providers.
📝 Abstract
As generative artificial intelligence (GAI) enters the mental health landscape, questions arise about how individuals weigh AI tools against human therapists. Drawing on the Health Belief Model (HBM), this study examined belief-based predictors of intention to use GAI and human therapists across two populations: a university sample (N = 1,155) and a nationally representative adult sample (N = 651). Using repeated-measures ANOVA and LASSO regression, we found that therapists were consistently valued for emotional, relational, and personalization benefits, while GAI was favored for accessibility and affordability. Yet structural advantages alone did not predict adoption; emotional benefit and personalization emerged as the decisive factors. Adoption patterns diverged across groups: students treated GAI as a complement to human care, whereas adults in the national sample approached it as a substitute. Concerns about privacy and reliability constrained GAI use in both groups. These findings extend the HBM to multi-modality contexts and highlight design implications for trustworthy, emotionally resonant digital mental health tools.