🤖 AI Summary
Existing neural architecture search (NAS) methods often prioritise accuracy over efficiency, limiting their applicability to resource-constrained edge devices. To address this, we propose the M-factor, a unified metric that jointly quantifies model accuracy and model size. Evaluating a ResNet search space of 19,683 architectures on CIFAR-10, we systematically compare Policy-Based Reinforcement Learning (M-factor = 0.84), Regularised Evolution (0.82), Multi-trial Random Search (0.75), and the Tree-structured Parzen Estimator (0.67). Under M-factor guidance, simple random search performs comparably to more sophisticated NAS methods in some cases, while Regularised Evolution optimises within 20 trials and Policy-Based Reinforcement Learning exhibits performance changes after 39. This work demonstrates that efficiency-aware evaluation can substantially reduce NAS complexity, offering a practical benchmark for lightweight NAS.
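The multi-trial random search baseline can be sketched as a simple sample-and-evaluate loop. The sketch below is illustrative only: the nine three-way knobs are a hypothetical encoding chosen to match the paper's 3^9 = 19,683-configuration space, and `evaluate` is a placeholder standing in for training a sampled ResNet and scoring it with the M-factor.

```python
import random

# Hypothetical discrete search space: 9 knobs with 3 options each
# gives 3**9 = 19,683 configurations, matching the paper's space size.
SPACE = {f"block_{i}": [1, 2, 3] for i in range(9)}

def sample(space):
    """Draw one random configuration from the search space."""
    return {k: random.choice(v) for k, v in space.items()}

def evaluate(cfg):
    # Placeholder for training the sampled architecture and computing
    # its M-factor score; here just a deterministic toy score in (0, 1].
    return sum(cfg.values()) / 27

random.seed(0)
best_cfg, best_score = None, float("-inf")
for _ in range(40):  # multi-trial budget, on the order of the paper's trial counts
    cfg = sample(SPACE)
    score = evaluate(cfg)
    if score > best_score:
        best_cfg, best_score = cfg, score
```

Because each trial is independent, this baseline is trivially parallelisable, which is part of why it remains competitive when the scoring metric already balances accuracy and size.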
📝 Abstract
Neural Architecture Search (NAS) aims to automate the design of deep neural networks. However, existing NAS techniques often focus on maximising accuracy, neglecting model efficiency. This limitation restricts their use in resource-constrained environments like mobile devices and edge computing systems. Moreover, current evaluation metrics prioritise performance over efficiency, lacking a balanced approach for assessing architectures suitable for constrained scenarios. To address these challenges, this paper introduces the M-factor, a novel metric combining model accuracy and size. Four diverse NAS techniques are compared: Policy-Based Reinforcement Learning, Regularised Evolution, Tree-structured Parzen Estimator (TPE), and Multi-trial Random Search. These techniques represent different NAS paradigms, providing a comprehensive evaluation of the M-factor. The study analyses ResNet configurations on the CIFAR-10 dataset, with a search space of 19,683 configurations. Experiments reveal that Policy-Based Reinforcement Learning and Regularised Evolution achieved M-factor values of 0.84 and 0.82, respectively, while Multi-trial Random Search attained 0.75, and TPE reached 0.67. Policy-Based Reinforcement Learning exhibited performance changes after 39 trials, while Regularised Evolution optimised within 20 trials. The research investigates the optimisation dynamics and trade-offs between accuracy and model size for each strategy. Findings indicate that, in some cases, random search performed comparably to more complex algorithms when assessed using the M-factor. These results highlight how the M-factor addresses the limitations of existing metrics by guiding NAS towards balanced architectures, offering valuable insights for selecting strategies in scenarios requiring both performance and efficiency.
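To make the idea of a combined accuracy/size score concrete, here is a minimal sketch of an M-factor-style metric. The exact formula is not given in this abstract, so the weighted harmonic-mean form, the `max_params` normaliser, and the example parameter counts below are illustrative assumptions, not the authors' definition.

```python
def m_factor(accuracy: float, n_params: int, max_params: int = 25_000_000) -> float:
    """Blend accuracy with a size score in [0, 1]; higher is better.

    NOTE: hypothetical form; the paper's actual M-factor may differ.
    """
    size_score = 1.0 - min(n_params / max_params, 1.0)  # smaller model -> higher score
    if accuracy + size_score == 0:
        return 0.0
    # Harmonic mean penalises architectures that are strong on only one axis.
    return 2 * accuracy * size_score / (accuracy + size_score)

# Ranking two hypothetical ResNet variants: a slightly less accurate but
# much smaller model can win under an efficiency-aware metric.
candidates = [
    {"name": "resnet_large", "accuracy": 0.93, "params": 11_000_000},
    {"name": "resnet_small", "accuracy": 0.91, "params": 2_500_000},
]
best = max(candidates, key=lambda c: m_factor(c["accuracy"], c["params"]))
```

Under a plain accuracy metric the larger model would rank first; a combined score of this kind is what lets the search prefer architectures suited to constrained deployments.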