🤖 AI Summary
Multimodal large language model agents (MLAs) face critical trustworthiness challenges—truthfulness, controllability, safety, and privacy—when performing high-risk interactions in GUI environments. Method: This work introduces the first comprehensive trustworthiness evaluation framework for MLAs operating on real-world web pages and mobile apps. It formally defines and quantifies trust metrics along these four dimensions, constructs a benchmark of 34 high-risk interactive tasks, and integrates multimodal adversarial injection, behavioral trajectory auditing, and risk attribution analysis. Contributions/Results: The authors uncover a nonlinear risk-accumulation mechanism induced by multi-step interaction; demonstrate that interactive MLAs exhibit significantly degraded trustworthiness versus static multimodal LLMs (e.g., a 3.2× increase in harmful content generation); identify novel trust vulnerabilities across 13 state-of-the-art MLAs; and open-source an extensible evaluation toolkit enabling cross-platform, continuous assessment. A toy sketch of how stepwise risks can compound over a trajectory follows below.
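To make the nonlinear risk-accumulation idea concrete, here is a minimal sketch of per-step trajectory auditing with a compounding rule. Everything here is an illustrative assumption—`Step`, `Trajectory`, `cumulative_risk`, and the `carryover` term are hypothetical, not the paper's actual metrics or toolkit API.

```python
from dataclasses import dataclass, field

@dataclass
class Step:
    action: str   # e.g. "click", "type", "submit"
    risk: float   # per-step risk score in [0, 1], e.g. from a judge model

@dataclass
class Trajectory:
    steps: list[Step] = field(default_factory=list)

def cumulative_risk(traj: Trajectory, carryover: float = 0.15) -> float:
    """Toy compounding rule (hypothetical, not the paper's metric):
    treat the trajectory as safe only if every step is safe,
    P(safe) = prod_t (1 - r_t), and inflate later steps' risk by a
    carry-over factor to model context poisoning from earlier turns.
    Small per-step risks then accumulate nonlinearly over long horizons."""
    p_safe = 1.0
    for t, step in enumerate(traj.steps):
        r_t = min(1.0, step.risk * (1.0 + carryover * t))  # hypothetical carry-over
        p_safe *= 1.0 - r_t
    return 1.0 - p_safe

traj = Trajectory([Step("click", 0.05), Step("type", 0.08), Step("submit", 0.10)])
print(f"trajectory risk: {cumulative_risk(traj):.3f}")  # exceeds any single step's risk
```

The point of the sketch is only that trajectory-level risk is not the maximum (or sum) of per-step risks: under any compounding rule of this shape, a sequence of individually low-risk actions can cross a harm threshold that no single step would.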
📝 Abstract
The emergence of multimodal LLM-based agents (MLAs) has transformed interaction paradigms by seamlessly integrating vision, language, action, and dynamic environments, enabling unprecedented autonomous capabilities across GUI applications ranging from web automation to mobile systems. However, MLAs introduce critical trustworthiness challenges that extend far beyond the limitations of traditional language models, as they can directly modify digital states and trigger irreversible real-world consequences. Existing benchmarks inadequately address the unique challenges posed by MLAs' actionable outputs, long-horizon uncertainty, and multimodal attack vectors. In this paper, we introduce MLA-Trust, the first comprehensive and unified framework that evaluates MLA trustworthiness across four principled dimensions: truthfulness, controllability, safety, and privacy. We use websites and mobile applications as realistic testbeds, designing 34 high-risk interactive tasks and curating rich evaluation datasets. Large-scale experiments involving 13 state-of-the-art agents reveal previously unexplored trustworthiness vulnerabilities unique to multimodal interactive scenarios. For instance, both proprietary and open-source GUI-interacting MLAs pose more severe trustworthiness risks than static MLLMs, particularly in high-stakes domains; the transition from static MLLMs to interactive MLAs considerably compromises trustworthiness, enabling harmful content generation in multi-step interactions that standalone MLLMs would typically prevent; and multi-step execution, while enhancing the adaptability of MLAs, introduces latent nonlinear risk accumulation across successive interactions, circumventing existing safeguards and producing unpredictable derived risks. Moreover, we present an extensible toolbox to facilitate continuous evaluation of MLA trustworthiness across diverse interactive environments.
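As a rough illustration of how such an extensible, cross-platform toolbox could be organized, the sketch below uses a simple registry keyed by platform and trust dimension, so new environments and metrics plug in without touching the evaluation loop. The `TrustEvaluator` class and its methods are assumptions for illustration, not the released MLA-Trust interface.

```python
from typing import Callable, Dict, Tuple

class TrustEvaluator:
    """Hypothetical registry keyed by (platform, dimension); new testbeds
    and metrics register themselves without changes to the core loop."""

    def __init__(self) -> None:
        self.tasks: Dict[Tuple[str, str], Callable[[object], float]] = {}

    def register_task(self, platform: str, dimension: str,
                      task: Callable[[object], float]) -> None:
        self.tasks[(platform, dimension)] = task

    def evaluate(self, agent: object) -> Dict[Tuple[str, str], float]:
        # Each task would drive the agent through an interactive scenario
        # and return a score in [0, 1]; the tasks below are stubs.
        return {key: task(agent) for key, task in self.tasks.items()}

evaluator = TrustEvaluator()
evaluator.register_task("web", "safety", lambda agent: 0.9)      # stub scorer
evaluator.register_task("mobile", "privacy", lambda agent: 0.7)  # stub scorer
print(evaluator.evaluate(agent=None))
```

A registry of this shape is one natural way to support the continuous, cross-platform evaluation the abstract describes: adding a new app or a new trust metric is a registration call rather than a change to the evaluator itself.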