Dual-Domain Representation Alignment: Bridging 2D and 3D Vision via Geometry-Aware Architecture Search

Date: 2026-03-19
Citations: 0 | Influential citations: 0
AI Summary
Deploying large vision models at the edge is hindered by high inference costs and the challenges of multi-objective optimization, particularly the expensive evaluation and inconsistent ranking of candidate architectures. To address these issues, this work proposes EvoNAS, a framework that constructs a hybrid supernetwork integrating Vision State Space Models and Vision Transformers. It introduces Cross-Architecture Dual-Domain Knowledge Distillation (CA-DDKD) to enhance the supernetworkโ€™s representational capacity and ranking consistency, and designs a Distributed Multi-Model Parallel Evaluation (DMMPE) mechanism to substantially reduce search overhead. Evaluated on benchmarks including COCO, ADE20K, KITTI, and NYU-Depth v2, the discovered EvoNets achieve lower latency and higher throughput than prevailing CNN, ViT, and Mamba-based models while maintaining strong generalization capabilities.
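The paper does not spell out the CA-DDKD loss here; as a rough illustration only, the following is a minimal sketch of a generic dual-domain distillation objective, under the assumption that the two domains are spatial features and their frequency (FFT magnitude) representation. The function name `dual_domain_distill_loss` and the weighting scheme are hypothetical, not taken from the paper.

```python
import numpy as np

def dual_domain_distill_loss(student_feat, teacher_feat, alpha=0.5):
    """Hypothetical dual-domain distillation loss: mean-squared error
    between student and teacher features in the spatial domain, plus
    MSE between their 2-D FFT magnitudes (frequency domain)."""
    # Spatial-domain alignment: direct feature-map matching.
    spatial = np.mean((student_feat - teacher_feat) ** 2)
    # Frequency-domain alignment: match the FFT magnitude spectra.
    s_freq = np.abs(np.fft.fft2(student_feat))
    t_freq = np.abs(np.fft.fft2(teacher_feat))
    freq = np.mean((s_freq - t_freq) ** 2)
    # alpha balances the two domains (assumed hyperparameter).
    return alpha * spatial + (1 - alpha) * freq
```

When the student exactly matches the teacher, both terms vanish and the loss is zero; any mismatch in either domain raises it.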

๐Ÿ“ Abstract
Modern computer vision requires balancing predictive accuracy with real-time efficiency, yet the high inference cost of large vision models (LVMs) limits deployment on resource-constrained edge devices. Although Evolutionary Neural Architecture Search (ENAS) is well suited for multi-objective optimization, its practical use is hindered by two issues: expensive candidate evaluation and ranking inconsistency among subnetworks. To address them, we propose EvoNAS, an efficient distributed framework for multi-objective evolutionary architecture search. We build a hybrid supernet that integrates Vision State Space and Vision Transformer (VSS-ViT) modules, and optimize it with a Cross-Architecture Dual-Domain Knowledge Distillation (CA-DDKD) strategy. By coupling the computational efficiency of VSS blocks with the semantic expressiveness of ViT modules, CA-DDKD improves the representational capacity of the shared supernet and enhances ranking consistency, enabling reliable fitness estimation during evolution without extra fine-tuning. To reduce the cost of large-scale validation, we further introduce a Distributed Multi-Model Parallel Evaluation (DMMPE) framework based on GPU resource pooling and asynchronous scheduling. Compared with conventional data-parallel evaluation, DMMPE improves efficiency by over 70% through concurrent multi-GPU, multi-model execution. Experiments on COCO, ADE20K, KITTI, and NYU-Depth v2 show that the searched architectures, termed EvoNets, consistently achieve Pareto-optimal trade-offs between accuracy and efficiency. Compared with representative CNN-, ViT-, and Mamba-based models, EvoNets deliver lower inference latency and higher throughput under strict computational budgets while maintaining strong generalization on downstream tasks such as novel view synthesis. Code is available at https://github.com/EMI-Group/evonas
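The abstract describes DMMPE as GPU resource pooling with asynchronous scheduling of many candidate models at once. The implementation is not given here; the sketch below shows the general pattern under stated assumptions: a thread pool of workers, a queue acting as the GPU pool, and a user-supplied `eval_fn(candidate, gpu_id)` whose name and signature are hypothetical.

```python
import queue
from concurrent.futures import ThreadPoolExecutor

def evaluate_candidates(candidates, gpu_ids, eval_fn):
    """Asynchronously evaluate candidate architectures over a shared
    pool of GPUs. Each worker checks out a free GPU id, runs eval_fn
    on it, and returns the GPU to the pool when done."""
    pool = queue.Queue()
    for g in gpu_ids:
        pool.put(g)

    def worker(cand):
        gpu = pool.get()              # block until a GPU is free
        try:
            return cand, eval_fn(cand, gpu)
        finally:
            pool.put(gpu)             # release the GPU back to the pool

    # One worker per GPU keeps every device busy with a different model,
    # rather than splitting one model's data across all devices.
    with ThreadPoolExecutor(max_workers=len(gpu_ids)) as ex:
        results = list(ex.map(worker, candidates))
    return dict(results)
```

This differs from conventional data-parallel evaluation, where all GPUs jointly evaluate one model at a time: here distinct candidates occupy distinct GPUs concurrently, which is the source of the throughput gain the abstract reports.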
Problem

Research questions and friction points this paper is trying to address.

Neural Architecture Search
Multi-objective Optimization
Efficient Inference
Edge Deployment
Ranking Consistency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Neural Architecture Search
Vision State Space
Vision Transformer
Knowledge Distillation
Distributed Evaluation
Haoyu Zhang
Hangzhou Normal University, Hangzhou, P.R. China
Zhihao Yu
Hangzhou Normal University, Hangzhou, P.R. China
Rui Wang
China University of Geosciences
Yaochu Jin
Trustworthy and General AI Lab, School of Engineering, Westlake University, Hangzhou, P.R. China
Qiqi Liu
Trustworthy and General AI Lab, School of Engineering, Westlake University, Hangzhou, P.R. China
Ran Cheng
Department of Data Science and Artificial Intelligence, and the Department of Computing, The Hong Kong Polytechnic University, Hong Kong SAR, China; also with The Hong Kong Polytechnic University Shenzhen Research Institute, Shenzhen, China