🤖 AI Summary
V2X cooperative perception faces two key challenges: multimodal heterogeneity across agents, which makes cross-agent feature alignment difficult, and poor scalability, since full-parameter fine-tuning is infeasible for plug-and-play integration of new agents. To address these, we propose HeatV2X, a framework built upon a heterogeneous graph attention backbone that enables efficient inter-agent feature alignment and collaborative perception. We introduce two novel lightweight adapters: the Hetero-Aware Adapter, which mitigates heterogeneity-induced representation loss, and the Multi-Cognitive Adapter, which explicitly models cognitive discrepancies across diverse sensor modalities and agent configurations. Coupled with a hybrid fine-tuning strategy (local heterogeneous adaptation followed by global collaborative optimization), our approach drastically reduces training overhead. Extensive experiments on OPV2V-H and DAIR-V2X demonstrate state-of-the-art performance with significantly fewer trainable parameters, validating both effectiveness and scalability.
📝 Abstract
Vehicle-to-Everything (V2X) collaborative perception extends sensing beyond the limits of a single vehicle by sharing information among agents. However, as more agents participate, existing frameworks face two key challenges: (1) the participating agents are inherently multi-modal and heterogeneous, and (2) the collaborative framework must be scalable to accommodate new agents. The former requires effective cross-agent feature alignment to mitigate heterogeneity-induced representation loss, while the latter renders full-parameter training impractical, highlighting the importance of scalable adaptation. To address these issues, we propose Heterogeneous Adaptation (HeatV2X), a scalable collaborative framework. We first train a high-performance agent based on heterogeneous graph attention as the foundation for collaborative learning. Then, we design Local Heterogeneous Fine-Tuning and Global Collaborative Fine-Tuning to achieve effective alignment and interaction among heterogeneous agents. The former efficiently extracts modality-specific differences using Hetero-Aware Adapters, while the latter employs the Multi-Cognitive Adapter to enhance cross-agent collaboration and fully exploit the potential of feature fusion. These designs enable substantial performance improvements for the collaborative framework at minimal training cost. We evaluate our approach on the OPV2V-H and DAIR-V2X datasets. Experimental results demonstrate that our method achieves superior perception performance with significantly reduced training overhead, outperforming existing state-of-the-art approaches. Our implementation will be released soon.
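The abstract does not specify the internals of the Hetero-Aware or Multi-Cognitive Adapters, but lightweight adapters of this kind typically take the form of a residual bottleneck inserted into a frozen backbone: only the small down/up projections are trained, which is what keeps the fine-tuning cost low. A minimal sketch under that assumption (dimensions, names, and zero-initialization are illustrative, not from the paper):

```python
import numpy as np

def bottleneck_adapter(x, W_down, W_up):
    """Residual bottleneck adapter: down-project per-agent features to a
    small rank, apply a nonlinearity, up-project, and add the input back.
    Only W_down and W_up would be trained; the backbone stays frozen."""
    h = np.maximum(x @ W_down, 0.0)  # down-projection + ReLU
    return x + h @ W_up              # up-projection + residual connection

rng = np.random.default_rng(0)
d, r = 256, 16                        # feature dim and bottleneck rank (hypothetical)
W_down = rng.normal(scale=0.02, size=(d, r))
W_up = np.zeros((r, d))               # zero-init: adapter starts as the identity map
x = rng.normal(size=(4, d))           # a batch of 4 agent feature vectors
out = bottleneck_adapter(x, W_down, W_up)
```

With `W_up` initialized to zero the adapter is an exact identity at the start of fine-tuning, so inserting it cannot degrade the pretrained backbone; the trainable parameter count is roughly `2 * d * r` per adapter, a small fraction of a full model.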