FAQNAS: FLOPs-aware Hybrid Quantum Neural Architecture Search using Genetic Algorithm

📅 2025-11-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the accuracy–computational complexity (FLOPs) trade-off in training hybrid quantum neural networks (HQNNs) on classical simulators during the NISQ era, this paper proposes FAQNAS—the first FLOPs-aware HQNN neural architecture search (NAS) framework. Built upon a multi-objective genetic algorithm, FAQNAS jointly optimizes the parameterized quantum circuit topology, classical layer configuration, and total FLOPs, explicitly incorporating FLOPs as a primary optimization objective for the first time. Experiments across five benchmark datasets demonstrate that Pareto-optimal solutions identified by FAQNAS achieve, on average, a 37% reduction in FLOPs while maintaining or surpassing the accuracy of baseline HQNN models. This significantly enhances training efficiency and scalability of HQNNs under realistic resource constraints.
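The core of the search described above is a multi-objective selection step: among candidate HQNN architectures scored by accuracy and FLOPs, keep only the non-dominated (Pareto-optimal) ones. The following is a minimal, hypothetical sketch of that step; the names, data, and dict layout are illustrative and not the paper's actual implementation.

```python
# Hypothetical sketch of the Pareto selection step in a multi-objective
# search like FAQNAS: keep architectures that are non-dominated with
# respect to (accuracy, FLOPs). Names and numbers are illustrative.

def pareto_front(candidates):
    """Return the candidates not dominated by any other.

    A candidate `a` is dominated by `b` if `b` is at least as accurate
    and at least as cheap, and strictly better on one of the two.
    """
    front = []
    for a in candidates:
        dominated = any(
            b["acc"] >= a["acc"] and b["flops"] <= a["flops"]
            and (b["acc"] > a["acc"] or b["flops"] < a["flops"])
            for b in candidates
        )
        if not dominated:
            front.append(a)
    return front

# Toy population: each dict stands for one evaluated HQNN architecture.
population = [
    {"name": "A", "acc": 0.97, "flops": 9e6},
    {"name": "B", "acc": 0.95, "flops": 4e6},  # cheaper, slightly less accurate
    {"name": "C", "acc": 0.94, "flops": 8e6},  # dominated by B
    {"name": "D", "acc": 0.90, "flops": 2e6},
]

print([c["name"] for c in pareto_front(population)])  # → ['A', 'B', 'D']
```

In a full genetic algorithm (e.g. an NSGA-II-style loop), this front would guide which architectures survive to be crossed over and mutated in the next generation.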

📝 Abstract
Hybrid Quantum Neural Networks (HQNNs), which combine parameterized quantum circuits with classical neural layers, are emerging as promising models in the noisy intermediate-scale quantum (NISQ) era. While quantum circuits are not naturally measured in floating point operations (FLOPs), most HQNNs in the NISQ era are still trained on classical simulators, where FLOPs directly dictate runtime and scalability. Hence, FLOPs represent a practical and viable metric for measuring the computational complexity of HQNNs. In this work, we introduce FAQNAS, a FLOPs-aware neural architecture search (NAS) framework that formulates HQNN design as a multi-objective optimization problem balancing accuracy and FLOPs. Unlike traditional approaches, FAQNAS explicitly incorporates FLOPs into the optimization objective, enabling the discovery of architectures that achieve strong performance while minimizing computational cost. Experiments on five benchmark datasets (MNIST, Digits, Wine, Breast Cancer, and Iris) show that quantum FLOPs dominate accuracy improvements, while classical FLOPs remain largely fixed. Pareto-optimal solutions reveal that competitive accuracy can often be achieved with significantly reduced computational cost compared to FLOPs-agnostic baselines. Our results establish FLOPs-awareness as a practical criterion for HQNN design in the NISQ era and as a scalable principle for future HQNN systems.
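The abstract's observation that quantum FLOPs dominate while classical FLOPs stay roughly fixed follows from how statevector simulation scales. A back-of-envelope model, under the assumption that applying one gate updates all 2^n amplitudes at a small constant cost per amplitude (the constants here are rough assumptions, not the paper's accounting):

```python
# Rough FLOPs model showing why, on a classical statevector simulator,
# the quantum circuit's cost dominates a hybrid model's total.
# ops_per_amplitude is an assumed constant, not a measured value.

def quantum_sim_flops(n_qubits, n_gates, ops_per_amplitude=8):
    # Each gate application touches all 2^n statevector amplitudes,
    # at a handful of complex multiply/adds per amplitude.
    return n_gates * ops_per_amplitude * (2 ** n_qubits)

def dense_layer_flops(n_in, n_out):
    # One multiply and one add per weight of a classical dense layer.
    return 2 * n_in * n_out

# A small hybrid model: a 4-qubit, 20-gate circuit feeding a 4 -> 10
# classical head.
print(quantum_sim_flops(4, 20))   # → 2560
print(dense_layer_flops(4, 10))   # → 80

# The classical head is unchanged by circuit size, but the quantum
# term grows exponentially with qubit count:
print(quantum_sim_flops(8, 20))   # → 40960
```

This exponential gap is why a FLOPs-aware search pressures the quantum circuit topology in particular, while the classical layers contribute a comparatively fixed cost.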
Problem

Research questions and friction points this paper is trying to address.

Optimizing hybrid quantum neural networks for accuracy and computational efficiency
Introducing FLOPs-aware neural architecture search for quantum-classical models
Balancing quantum circuit complexity with classical computational costs
Innovation

Methods, ideas, or system contributions that make the work stand out.

FLOPs-aware neural architecture search for quantum networks
Multi-objective optimization balancing accuracy and computational cost
Genetic algorithm discovers Pareto-optimal hybrid quantum architectures