🤖 AI Summary
This study addresses a significant perceptual misalignment between freelancers and clients regarding the disclosure of AI use in freelance work. Freelancers commonly rely on “passive disclosure,” assuming clients can discern AI-assisted outputs, whereas clients expect explicit, active disclosure and struggle to accurately assess the extent of AI involvement. Through qualitative interviews and two rounds of quantitative surveys, this research systematically identifies the “passive disclosure” phenomenon for the first time and highlights the trust risks it engenders. The findings reveal that the absence of clear AI disclosure policies on freelance platforms is a primary source of this misunderstanding. The study calls for standardized disclosure guidelines to inform platform governance, offering both theoretical grounding and a practical framework for mitigating disclosure-related ambiguities in AI-mediated freelance labor.
📝 Abstract
The growing use of AI applications among freelance workers is reshaping trust and relationships with clients. This paper investigates how both workers and clients perceive AI use and disclosure in the freelance economy through a three-stage study: interviews with workers and two survey studies with workers and clients. Findings first reveal a key expectation gap around disclosure: workers often adopt passive disclosure practices, revealing AI use only when asked, as they assume clients can already detect it. Clients, however, are far less confident in recognizing AI-assisted work and prefer proactive disclosure. A second finding highlights the role of unclear or absent client AI policies, which leave workers consistently misinterpreting clients' expectations for AI use and disclosure. Together, these gaps point to the need for clearer guidelines and practices for AI disclosure. The insights extend beyond freelancing, offering implications for trust, accountability, and policy design in other AI-mediated work domains.