The M-factor: A Novel Metric for Evaluating Neural Architecture Search in Resource-Constrained Environments

📅 2025-01-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing neural architecture search (NAS) methods often prioritize accuracy over efficiency, limiting their applicability to resource-constrained edge devices. To address this, the paper proposes the M-factor, a unified metric that jointly quantifies model accuracy and model size. Across a ResNet search space of 19,683 architectures on CIFAR-10, it compares Policy-Based Reinforcement Learning (M-factor = 0.84), Regularized Evolution (0.82), Multi-trial Random Search (0.75), and the Tree-structured Parzen Estimator (0.67). Policy-Based Reinforcement Learning exhibited performance changes after 39 trials, while Regularized Evolution optimized within 20, and under M-factor guidance simple random search performed comparably to more complex algorithms. The results suggest that efficiency-aware evaluation can guide NAS toward balanced architectures, offering a practical benchmark for lightweight NAS.

📝 Abstract
Neural Architecture Search (NAS) aims to automate the design of deep neural networks. However, existing NAS techniques often focus on maximising accuracy, neglecting model efficiency. This limitation restricts their use in resource-constrained environments like mobile devices and edge computing systems. Moreover, current evaluation metrics prioritise performance over efficiency, lacking a balanced approach for assessing architectures suitable for constrained scenarios. To address these challenges, this paper introduces the M-factor, a novel metric combining model accuracy and size. Four diverse NAS techniques are compared: Policy-Based Reinforcement Learning, Regularised Evolution, Tree-structured Parzen Estimator (TPE), and Multi-trial Random Search. These techniques represent different NAS paradigms, providing a comprehensive evaluation of the M-factor. The study analyses ResNet configurations on the CIFAR-10 dataset, with a search space of 19,683 configurations. Experiments reveal that Policy-Based Reinforcement Learning and Regularised Evolution achieved M-factor values of 0.84 and 0.82, respectively, while Multi-trial Random Search attained 0.75, and TPE reached 0.67. Policy-Based Reinforcement Learning exhibited performance changes after 39 trials, while Regularised Evolution optimised within 20 trials. The research investigates the optimisation dynamics and trade-offs between accuracy and model size for each strategy. Findings indicate that, in some cases, random search performed comparably to more complex algorithms when assessed using the M-factor. These results highlight how the M-factor addresses the limitations of existing metrics by guiding NAS towards balanced architectures, offering valuable insights for selecting strategies in scenarios requiring both performance and efficiency.
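The page does not reproduce the paper's M-factor formula, so the sketch below is purely illustrative: the equal weighting of accuracy against a normalized size score, the per-stage width options, and the toy `evaluate` stub are all assumptions, not the authors' method. It only shows how a combined accuracy/size metric of this kind could guide a multi-trial random search over a discrete 3^9 = 19,683-configuration space like the one described in the abstract.

```python
import itertools
import random

# Hypothetical stand-in for the paper's M-factor (exact formula not given
# here): equally weight normalized accuracy against a normalized size score.
def m_factor(accuracy, n_params, max_params, alpha=0.5):
    size_score = 1.0 - n_params / max_params  # smaller models score higher
    return alpha * accuracy + (1.0 - alpha) * size_score

# The abstract's ResNet search space has 19,683 = 3**9 configurations,
# i.e. nine discrete design choices with three options each. The widths
# below are placeholder values, not the paper's actual choices.
SEARCH_SPACE = list(itertools.product((16, 32, 64), repeat=9))
MAX_PARAMS = 9 * 64 * 64  # upper bound for the toy parameter count below

def evaluate(config):
    """Stub evaluation; a real NAS run would train each network on CIFAR-10."""
    n_params = sum(w * w for w in config)          # toy parameter count
    accuracy = 0.6 + 0.3 * sum(config) / (9 * 64)  # toy accuracy proxy
    return accuracy, n_params

def random_search(n_trials=100, seed=0):
    """Multi-trial random search guided by the combined metric."""
    rng = random.Random(seed)
    best_score, best_config = float("-inf"), None
    for _ in range(n_trials):
        config = rng.choice(SEARCH_SPACE)
        accuracy, n_params = evaluate(config)
        score = m_factor(accuracy, n_params, MAX_PARAMS)
        if score > best_score:
            best_score, best_config = score, config
    return best_config, best_score

best_config, best_score = random_search()
print(len(SEARCH_SPACE))  # 19683 candidate configurations
```

Because the toy metric rewards small models, the search tends toward narrow configurations; with a real trained-accuracy signal, the same loop would trade accuracy against size as the M-factor intends.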
Problem

Research questions and friction points this paper is trying to address.

Neural Architecture Search (NAS)
Resource Efficiency
Accuracy-Resource Trade-off
Innovation

Methods, ideas, or system contributions that make the work stand out.

M-Factor
Neural Architecture Search (NAS)
Efficiency-Performance Tradeoff
Authors

Srikanth Thudumu
Applied Artificial Intelligence Institute (A2I2), Deakin University, Geelong VIC 3216, Australia

Hy Nguyen
Applied Artificial Intelligence Institute (A2I2), Deakin University, Geelong VIC 3216, Australia

Hung Du
Applied Artificial Intelligence Institute, Deakin University
Deep Reinforcement Learning, Multi-agent Systems, Context-aware Systems, Translational Research

Nhat Duong
Applied Artificial Intelligence Institute (A2I2), Deakin University, Geelong VIC 3216, Australia

Zafaryab Rasool
Applied Artificial Intelligence Institute (A2I2), Deakin University, Geelong VIC 3216, Australia

Rena Logothetis
Applied Artificial Intelligence Institute (A2I2), Deakin University, Geelong VIC 3216, Australia

Scott Barnett
Applied Artificial Intelligence Institute (A2I2), Deakin University, Geelong VIC 3216, Australia

Rajesh Vasa
Head of Translational Research, Applied Artificial Intelligence Institute, Deakin University
Artificial Intelligence, Software Evolution, Automated Software Engineering, Tools

K. Mouzakis
Applied Artificial Intelligence Institute (A2I2), Deakin University, Geelong VIC 3216, Australia