🤖 AI Summary
To address the challenge of jointly optimizing accuracy, fairness, robustness, and generalization in edge-deployed deep neural networks (DNNs), this paper proposes the first multi-objective neural architecture search (NAS) framework to integrate Mixture-of-Experts (MoE) structures into the NAS search space. Using reinforcement learning–driven NAS, the authors jointly model these four dimensions through a lightweight multi-objective reward function, achieving synergistic improvements with negligible parameter overhead (+0.4M parameters). On the FACET benchmark, the method outperforms state-of-the-art edge DNNs by +4.02% accuracy, reduces inter-skin-tone accuracy disparity from 14.09% to 5.60%, improves adversarial robustness by 3.80%, and attains a low overfitting rate of 0.21%. To the authors' knowledge, this is the first work to systematically tackle the joint optimization of multidimensional trustworthiness attributes—accuracy, fairness, robustness, and generalization—in edge DNNs.
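The summary mentions a lightweight multi-objective reward function that scalarizes the four trustworthiness objectives for the RL search controller. The paper's exact formulation is not given here, so the sketch below is a hypothetical weighted-sum reward: fairness is taken as one minus the inter-group accuracy disparity, and generalization as one minus the train/validation gap. The weights and helper names are illustrative assumptions, not the authors' definitions.

```python
def multi_objective_reward(accuracy: float,
                           disparity: float,
                           robust_accuracy: float,
                           train_accuracy: float,
                           val_accuracy: float,
                           weights=(0.4, 0.2, 0.2, 0.2)) -> float:
    """Hypothetical scalarized reward for an RL-based NAS controller.

    All inputs are in [0, 1]:
      accuracy        - clean validation accuracy
      disparity       - max accuracy gap across skin-tone groups (lower is better)
      robust_accuracy - accuracy under adversarial perturbation
      train_accuracy / val_accuracy - used to penalize overfitting
    """
    fairness = 1.0 - disparity                      # small disparity -> high fairness
    overfit_gap = max(train_accuracy - val_accuracy, 0.0)
    generalization = 1.0 - overfit_gap              # small gap -> high generalization
    w_acc, w_fair, w_rob, w_gen = weights
    return (w_acc * accuracy
            + w_fair * fairness
            + w_rob * robust_accuracy
            + w_gen * generalization)
```

A weighted sum is the simplest scalarization; the controller then samples MoE-augmented architectures and updates its policy to maximize this single scalar, trading off the four objectives according to the chosen weights.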
📝 Abstract
There has been a surge in optimizing edge Deep Neural Networks (DNNs) for accuracy and efficiency, using traditional optimization techniques such as pruning and, more recently, automatic design methodologies. However, these design techniques have largely overlooked critical metrics such as fairness, robustness, and generalization. As a result, when evaluating SOTA edge DNNs on image classification using the FACET dataset, we found that they exhibit significant accuracy disparities (14.09%) across 10 different skin tones, alongside poor robustness and generalizability. In response to these observations, we introduce Mixture-of-Experts-based Neural Architecture Search (MoENAS), an automatic design technique that navigates a space of mixtures of experts to discover accurate, fair, robust, and general edge DNNs. MoENAS improves accuracy by 4.02% over SOTA edge DNNs and reduces the skin-tone accuracy disparity from 14.09% to 5.60%, while enhancing robustness by 3.80% and minimizing overfitting to 0.21%, all while keeping model size close to the average size of state-of-the-art models (+0.4M). With these improvements, MoENAS establishes a new benchmark for edge DNN design, paving the way for more inclusive and robust edge DNNs.