HeatV2X: Scalable Heterogeneous Collaborative Perception via Efficient Alignment and Interaction

📅 2025-11-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
V2X cooperative perception faces two key challenges: multimodal heterogeneity across agents, which makes cross-agent feature alignment difficult, and poor scalability, since full-parameter fine-tuning is infeasible for the plug-and-play integration of new agents. To address these, we propose HeatV2X, a framework built upon a heterogeneous graph attention backbone that enables efficient inter-agent feature alignment and collaborative perception. We introduce two lightweight adapters: the Hetero-Aware Adapter, which mitigates heterogeneity-induced representation loss, and the Multi-Cognitive Adapter, which explicitly models cognitive discrepancies across diverse sensor modalities and agent configurations. Coupled with a hybrid fine-tuning strategy of local heterogeneous adaptation followed by global collaborative optimization, this approach drastically reduces training overhead. Extensive experiments on OPV2V-H and DAIR-V2X demonstrate state-of-the-art performance with significantly fewer trainable parameters, validating both effectiveness and scalability.
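The lightweight-adapter idea described in the summary can be illustrated with a minimal residual bottleneck module: features are projected down to a small bottleneck, passed through a nonlinearity, projected back up, and added to the input. This is only a sketch under assumed dimensions; `BottleneckAdapter` and its sizes are hypothetical, not the paper's actual architecture.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

class BottleneckAdapter:
    """Residual bottleneck adapter: only the two small projection
    matrices are trained, while the backbone stays frozen."""
    def __init__(self, dim, bottleneck, rng):
        self.w_down = rng.normal(0.0, 0.02, (dim, bottleneck))
        # Zero-initialized up-projection: the adapter starts as an
        # identity map, so inserting it does not perturb the backbone.
        self.w_up = np.zeros((bottleneck, dim))

    def __call__(self, x):
        return x + relu(x @ self.w_down) @ self.w_up

rng = np.random.default_rng(0)
adapter = BottleneckAdapter(dim=256, bottleneck=16, rng=rng)
feats = rng.normal(size=(4, 256))   # e.g. pooled features from 4 agents
out = adapter(feats)
print(out.shape)                    # (4, 256)
print(np.allclose(out, feats))      # True: identity before any training
```

The zero-initialized up-projection is a common trick in adapter-based fine-tuning: the module is a no-op at insertion time and learns a small correction on top of the frozen features.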

📝 Abstract
Vehicle-to-Everything (V2X) collaborative perception extends sensing beyond the limits of a single vehicle by sharing perception information among agents. However, as more agents participate, existing frameworks face two key challenges: (1) the participating agents are inherently multi-modal and heterogeneous, and (2) the collaborative framework must be scalable to accommodate new agents. The former requires effective cross-agent feature alignment to mitigate heterogeneity loss, while the latter renders full-parameter training impractical, highlighting the importance of scalable adaptation. To address these issues, we propose Heterogeneous Adaptation (HeatV2X), a scalable collaborative framework. We first train a high-performance agent based on heterogeneous graph attention as the foundation for collaborative learning. Then, we design Local Heterogeneous Fine-Tuning and Global Collaborative Fine-Tuning to achieve effective alignment and interaction among heterogeneous agents. The former efficiently extracts modality-specific differences using Hetero-Aware Adapters, while the latter employs the Multi-Cognitive Adapter to enhance cross-agent collaboration and fully exploit the fusion potential. These designs enable substantial performance improvement of the collaborative framework at minimal training cost. We evaluate our approach on the OPV2V-H and DAIR-V2X datasets. Experimental results demonstrate that our method achieves superior perception performance with significantly reduced training overhead, outperforming existing state-of-the-art approaches. Our implementation will be released soon.
Problem

Research questions and friction points this paper is trying to address.

Addresses multi-modal heterogeneous agent collaboration in V2X perception systems
Solves scalability challenges when accommodating new agents in collaborative frameworks
Mitigates heterogeneity loss through efficient cross-agent feature alignment methods
Innovation

Methods, ideas, or system contributions that make the work stand out.

Heterogeneous graph attention backbone for training the base agent
Local Heterogeneous Fine-Tuning with Hetero-Aware Adapters
Global Collaborative Fine-Tuning with the Multi-Cognitive Adapter
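The two-stage fine-tuning above trains only small adapter matrices while the shared backbone stays frozen. The bookkeeping below illustrates why this keeps training cost low; all shapes and counts are invented for illustration and do not come from the paper.

```python
import numpy as np

def n_params(shapes):
    """Total parameter count for a list of weight shapes."""
    return sum(int(np.prod(s)) for s in shapes)

# Hypothetical shapes: a frozen shared backbone, per-agent-type
# Hetero-Aware Adapters (stage 1), and one shared Multi-Cognitive
# Adapter (stage 2). Only the adapters are trainable.
backbone = [(256, 256)] * 24
hetero_adapters = [(256, 16), (16, 256)] * 4   # 4 agent types
cognitive_adapter = [(256, 32), (32, 256)]

total = n_params(backbone) + n_params(hetero_adapters) + n_params(cognitive_adapter)
trainable = n_params(hetero_adapters) + n_params(cognitive_adapter)
print(f"trainable fraction: {trainable / total:.3f}")   # 0.030
```

Even with these toy numbers, only about 3% of the parameters are updated, which is the kind of saving that makes plug-and-play integration of a new agent tractable: adding an agent type adds one small adapter rather than a full retraining pass.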
Yueran Zhao
Beijing Institute of Technology, Beijing, China
Zhang Zhang
Beijing Institute of Technology, Beijing, China
Chao Sun
Beijing Institute of Technology, Beijing, China
Tianze Wang
AI/ML Researcher, Microsoft (ABK)
distributed deep learning, systems for ML, ML systems, AutoML
Chao Yue
Beijing Institute of Technology, Beijing, China
Nuoran Li
Beijing Institute of Technology, Beijing, China