🤖 AI Summary
Deploying large vision models at the edge is hindered by high inference costs and the challenges of multi-objective optimization, particularly the expensive evaluation and inconsistent ranking of candidate architectures. To address these issues, this work proposes EvoNAS, a framework that constructs a hybrid supernetwork integrating Vision State Space Models and Vision Transformers. It introduces Cross-Architecture Dual-Domain Knowledge Distillation (CA-DDKD) to enhance the supernetwork's representational capacity and ranking consistency, and designs a Distributed Multi-Model Parallel Evaluation (DMMPE) mechanism to substantially reduce search overhead. Evaluated on benchmarks including COCO, ADE20K, KITTI, and NYU-Depth v2, the discovered EvoNets achieve lower latency and higher throughput than prevailing CNN, ViT, and Mamba-based models while maintaining strong generalization capabilities.
📄 Abstract
Modern computer vision requires balancing predictive accuracy with real-time efficiency, yet the high inference cost of large vision models (LVMs) limits deployment on resource-constrained edge devices. Although Evolutionary Neural Architecture Search (ENAS) is well suited for multi-objective optimization, its practical use is hindered by two issues: expensive candidate evaluation and ranking inconsistency among subnetworks. To address them, we propose EvoNAS, an efficient distributed framework for multi-objective evolutionary architecture search. We build a hybrid supernet that integrates Vision State Space and Vision Transformer (VSS-ViT) modules, and optimize it with a Cross-Architecture Dual-Domain Knowledge Distillation (CA-DDKD) strategy. By coupling the computational efficiency of VSS blocks with the semantic expressiveness of ViT modules, CA-DDKD improves the representational capacity of the shared supernet and enhances ranking consistency, enabling reliable fitness estimation during evolution without extra fine-tuning. To reduce the cost of large-scale validation, we further introduce a Distributed Multi-Model Parallel Evaluation (DMMPE) framework based on GPU resource pooling and asynchronous scheduling. Compared with conventional data-parallel evaluation, DMMPE improves efficiency by over 70% through concurrent multi-GPU, multi-model execution. Experiments on COCO, ADE20K, KITTI, and NYU-Depth v2 show that the searched architectures, termed EvoNets, consistently achieve Pareto-optimal trade-offs between accuracy and efficiency. Compared with representative CNN-, ViT-, and Mamba-based models, EvoNets deliver lower inference latency and higher throughput under strict computational budgets while maintaining strong generalization on downstream tasks such as novel view synthesis. Code is available at https://github.com/EMI-Group/evonas.
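To make the "dual-domain" distillation idea concrete, the sketch below shows one plausible form of such an objective: a temperature-scaled KL term in the logit domain plus a frequency-domain match on intermediate features. This is a minimal NumPy illustration, not the paper's actual CA-DDKD loss; the function name, the choice of an FFT-magnitude feature match, and the `tau`/`alpha` weighting are all assumptions for exposition.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    z = x - x.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def dual_domain_kd_loss(student_logits, teacher_logits,
                        student_feat, teacher_feat,
                        tau=2.0, alpha=0.5):
    """Hypothetical dual-domain KD objective (illustrative only):
    logit-domain KL divergence plus a frequency-domain feature match."""
    # Logit domain: temperature-scaled KL(teacher || student).
    p_t = softmax(teacher_logits / tau)
    p_s = softmax(student_logits / tau)
    kl = np.sum(p_t * (np.log(p_t + 1e-9) - np.log(p_s + 1e-9)), axis=-1).mean()

    # Frequency domain: match FFT magnitudes of intermediate features,
    # one way to transfer structure the teacher encodes beyond logits.
    f_s = np.abs(np.fft.rfft(student_feat, axis=-1))
    f_t = np.abs(np.fft.rfft(teacher_feat, axis=-1))
    freq = np.mean((f_s - f_t) ** 2)

    # tau**2 rescaling is the standard correction for temperature-scaled KD.
    return alpha * (tau ** 2) * kl + (1 - alpha) * freq
```

The loss is zero when student and teacher agree in both domains and grows with either mismatch, which is the property a supernet-training signal of this kind would rely on.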
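The DMMPE idea of GPU pooling with asynchronous scheduling can be sketched as a shared work queue: each GPU worker pulls the next un-evaluated candidate as soon as it finishes its current one, so fast-evaluating models never block behind slow ones (unlike lock-step data-parallel evaluation). This is a simplified single-process sketch with threads standing in for GPU workers; `evaluate_fn` and the function name are placeholders, not the paper's API.

```python
import queue
import threading

def evaluate_population(candidates, gpu_ids, evaluate_fn):
    """Sketch of asynchronous pooled evaluation (DMMPE-style, hypothetical):
    one worker per GPU drains a shared queue of candidate architectures."""
    tasks = queue.Queue()
    for idx, cand in enumerate(candidates):
        tasks.put((idx, cand))
    results = [None] * len(candidates)

    def worker(gpu_id):
        while True:
            try:
                idx, cand = tasks.get_nowait()
            except queue.Empty:
                return  # pool drained; this worker is done
            # In a real system this would launch the model on `gpu_id`
            # and return fitness metrics (e.g. accuracy, latency).
            results[idx] = evaluate_fn(cand, gpu_id)
            tasks.task_done()

    threads = [threading.Thread(target=worker, args=(g,)) for g in gpu_ids]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results
```

Because results are written back by original index, the evolutionary loop receives fitness values in population order regardless of which worker finished first.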