🤖 AI Summary
One-shot neural architecture search (NAS) methods such as DARTS suffer from high GPU memory consumption and low search efficiency because of their large candidate search spaces. To address this, the paper proposes an automatic pre-pruning strategy for the search space based on zero-shot NAS: it first employs zero-shot proxies (computationally efficient, training-free metrics) to identify and eliminate low-performing sub-architectures, then performs a differentiable one-shot search over the pruned space. This is the first work to leverage zero-shot NAS for search-space pre-pruning. On the DARTS benchmark, the method reduces GPU memory usage by 81%, significantly accelerates the search, and shrinks the search space by 50%, while matching the final accuracy of a full-space search. The approach thus balances search efficiency, memory footprint, and model performance.
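The two-stage pipeline above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: `proxy_score` is a hypothetical stand-in for a real zero-shot proxy (which would score an untrained network with, e.g., a single forward/backward pass), and the candidate space is a toy enumeration of operation pairs.

```python
# Sketch: zero-shot pre-pruning of a search space before a One-Shot (DARTS-style) search.
# `proxy_score` is a hypothetical placeholder for a real training-free proxy.

def proxy_score(arch):
    """Hypothetical training-free proxy returning a cheap score for `arch`.

    A real zero-shot proxy would evaluate an *untrained* network (e.g. via
    one forward/backward pass); here we use a deterministic stand-in.
    """
    return sum(hash((op, i)) % 1000 for i, op in enumerate(arch)) / 1000.0

def prune_search_space(candidates, keep_ratio=0.5):
    """Drop the lowest-scoring candidates, keeping `keep_ratio` of them."""
    scored = sorted(candidates, key=proxy_score, reverse=True)
    return scored[: max(1, int(len(scored) * keep_ratio))]

# Toy candidate space: each architecture is a pair of DARTS-style operations.
ops = ["sep_conv_3x3", "dil_conv_5x5", "max_pool_3x3", "skip_connect", "none"]
space = [(a, b) for a in ops for b in ops]

# Keep the top 50%; a One-Shot search would then run only over `pruned`,
# halving the space the supernet must represent in GPU memory.
pruned = prune_search_space(space, keep_ratio=0.5)
```

Because the proxy requires no training, this pre-pruning step is cheap relative to the one-shot search it precedes; the paper's reported memory savings come from the one-shot supernet only needing to cover the surviving candidates.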
📝 Abstract
Neural Architecture Search (NAS) is a powerful tool for automating architecture design. One-Shot NAS techniques, such as DARTS, have gained substantial popularity due to their combination of search efficiency with simplicity of implementation. By design, One-Shot methods have high GPU memory requirements during the search. To mitigate this issue, we propose to prune the search space in an efficient automatic manner to reduce memory consumption and search time while preserving the search accuracy. Specifically, we utilise Zero-Shot NAS to efficiently remove low-performing architectures from the search space before applying One-Shot NAS to the pruned search space. Experimental results on the DARTS search space show that our approach reduces memory consumption by 81% compared to the baseline One-Shot setup while achieving the same level of accuracy.