🤖 AI Summary
To address performance degradation in multimodal large language models (MLLMs) caused by inter-modal data conflicts when jointly processing 2D GUI and 3D embodied tasks, this paper proposes a layer-heterogeneous Mixture-of-Experts (MoE) architecture: shallow layers share parameters to model cross-modal synergy, while deep layers employ modality-specific parameters to suppress interference. The authors also introduce a unified action space and jointly train the model on large-scale GUI and embodied interaction datasets. Inspired by functional specialization in the human brain, this design achieves both modality compatibility and task decoupling. Experiments demonstrate that the proposed agent outperforms unimodal specialized models on both GUI and embodied benchmarks, excelling particularly in 2D interface manipulation tasks, and exhibits strong cross-task generalization.
📝 Abstract
Multimodal large language models are evolving toward multimodal agents capable of proactively executing tasks. Most agent research focuses on GUI or embodied scenarios, which correspond to agents interacting with 2D virtual worlds or 3D real worlds, respectively. However, many complex tasks require agents to interact with these two types of environments in an interleaved manner. We initially train on a mixture of GUI and embodied data, but observe performance degradation caused by conflict between the two data types. Further analysis reveals that GUI and embodied data exhibit synergy at shallow layers and conflict at deep layers, resembling the cerebrum-cerebellum mechanism of the human brain. To this end, we propose OmniActor, a high-performance generalist agent designed from both structural and data perspectives. First, we propose Layer-heterogeneity MoE, which eliminates the conflict between GUI and embodied data by separating deep-layer parameters, while leveraging their synergy by sharing shallow-layer parameters. By exploiting the synergy and eliminating the conflict, OmniActor outperforms agents trained only on GUI or embodied data on the corresponding tasks. Furthermore, we unify the action spaces of GUI and embodied tasks, and collect large-scale GUI and embodied data from diverse sources for training. This significantly improves OmniActor across scenarios, especially in GUI tasks. The code will be publicly available.
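The routing idea in Layer-heterogeneity MoE (shared shallow layers for cross-modal synergy, modality-specific deep parameters to avoid conflict) can be sketched as follows. This is a minimal illustration under our own assumptions, not the paper's implementation: the layer counts, hidden size, and all names (`forward`, `shared`, `experts`) are invented, and real MoE layers would replace the plain matrix multiplies.

```python
import numpy as np

rng = np.random.default_rng(0)
D, N_SHALLOW, N_DEEP = 8, 2, 2  # hidden size and layer counts, chosen for illustration

# Shallow layers: one shared parameter set models cross-modal synergy.
shared = [rng.standard_normal((D, D)) * 0.1 for _ in range(N_SHALLOW)]

# Deep layers: separate parameters per modality suppress data conflict.
experts = {m: [rng.standard_normal((D, D)) * 0.1 for _ in range(N_DEEP)]
           for m in ("gui", "embodied")}

def forward(x, modality):
    """Pass through shared shallow layers, then the modality-specific deep layers."""
    for w in shared:          # identical computation for both modalities
        x = np.tanh(x @ w)
    for w in experts[modality]:  # diverges here, per modality
        x = np.tanh(x @ w)
    return x

x = rng.standard_normal(D)
out_gui = forward(x, "gui")
out_embodied = forward(x, "embodied")
```

Both paths share every shallow-layer gradient update, so GUI and embodied data reinforce each other early in the network, while their deep-layer parameters are trained independently.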