Quick-CapsNet (QCN): A fast alternative to Capsule Networks

📅 2025-10-08
🤖 AI Summary
Capsule Networks (CapsNets) suffer from slow inference speeds, hindering their deployment in real-time applications. To address this, we propose Quick-CapsNet (QCN), a computationally efficient CapsNet variant that significantly accelerates inference while preserving classification accuracy. QCN achieves this by drastically reducing the number of capsules per layer and introducing a lightweight yet robust decoder, all while retaining the core dynamic routing mechanism between capsules. Crucially, QCN optimizes computation-intensive components—such as matrix multiplication and iterative routing—without compromising structural interpretability or representational fidelity. Extensive experiments on MNIST, Fashion-MNIST, SVHN, and CIFAR-10 demonstrate that QCN attains up to a 5× speedup in inference latency over the original CapsNet, with negligible accuracy degradation (<0.3% absolute drop). This work establishes a new paradigm for practical, high-efficiency capsule network deployment, bridging the gap between theoretical interpretability and real-world applicability.
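The dynamic routing mechanism that QCN retains can be sketched as follows. This is a minimal NumPy sketch of routing-by-agreement as described in the original CapsNet formulation, not the authors' implementation; the capsule counts, dimensions, and iteration count are illustrative:

```python
import numpy as np

def squash(v, axis=-1, eps=1e-8):
    """CapsNet's squashing nonlinearity: shrinks a vector's length
    into [0, 1) while preserving its direction."""
    sq_norm = np.sum(v ** 2, axis=axis, keepdims=True)
    return (sq_norm / (1.0 + sq_norm)) * v / np.sqrt(sq_norm + eps)

def dynamic_routing(u_hat, n_iters=3):
    """Routing-by-agreement between two capsule layers.
    u_hat: prediction vectors, shape (n_in, n_out, dim_out)."""
    n_in, n_out, _ = u_hat.shape
    b = np.zeros((n_in, n_out))  # routing logits, updated each iteration
    for _ in range(n_iters):
        # coupling coefficients: softmax over output capsules
        c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)
        s = (c[..., None] * u_hat).sum(axis=0)   # weighted sum of predictions
        v = squash(s)                            # output capsule vectors
        b = b + (u_hat * v[None]).sum(axis=-1)   # agreement update
    return v

# Illustrative sizes: 8 input capsules routed to 4 output capsules of dim 16.
u_hat = np.random.default_rng(0).normal(size=(8, 4, 16))
v = dynamic_routing(u_hat)
```

Because the routing loop iterates over every (input, output) capsule pair, its cost scales with the number of capsules per layer, which is exactly the quantity QCN reduces.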

📝 Abstract
The basic computational unit in a Capsule Network (CapsNet) is a capsule (vs. neurons in Convolutional Neural Networks (CNNs)). A capsule is a set of neurons that together form a vector. CapsNet is used for supervised classification and has achieved state-of-the-art accuracy on the MNIST digit recognition dataset, outperforming conventional CNNs in detecting overlapping digits. Moreover, CapsNet shows higher robustness to affine transformations than CNNs on MNIST. One of CapsNet's drawbacks, however, is slow training and testing, which can be a bottleneck for applications that require a fast network, especially during inference. In this work, we introduce Quick-CapsNet (QCN) as a fast alternative to CapsNet and a starting point for developing CapsNet for fast real-time applications. QCN builds on producing fewer capsules, which results in a faster network at the cost of a marginal loss in accuracy. Inference is 5x faster on the MNIST, F-MNIST, SVHN, and CIFAR-10 datasets. We further enhance QCN by replacing the default decoder with a more powerful one.
Problem

Research questions and friction points this paper is trying to address.

Addressing slow training and testing in Capsule Networks
Reducing computational bottleneck for real-time applications
Maintaining accuracy while significantly accelerating inference speed
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reduced capsule count for faster processing
Marginal accuracy trade-off for speed enhancement
Enhanced decoder for improved network performance
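A back-of-the-envelope cost model shows why reducing the capsule count speeds up inference: both the transformation matrices and the routing iterations scale linearly with the number of input capsules. The sizes below mirror the original CapsNet's 1152 primary capsules on MNIST, but the 8x reduction factor is purely illustrative, not the paper's exact QCN configuration:

```python
def routing_flops(n_in, n_out, dim_in, dim_out, n_iters=3):
    """Rough FLOP estimate for one capsule layer: the per-pair
    transformation matrices dominate, plus per-iteration routing updates."""
    transform = n_in * n_out * dim_in * dim_out * 2  # W_ij @ u_i for every pair
    per_iter = n_in * n_out * dim_out * 4            # weighting, sums, agreement
    return transform + n_iters * per_iter

# Original CapsNet-like layer vs. one with 8x fewer input capsules (illustrative).
base = routing_flops(n_in=1152, n_out=10, dim_in=8, dim_out=16)
quick = routing_flops(n_in=1152 // 8, n_out=10, dim_in=8, dim_out=16)
print(f"estimated speedup: {base / quick:.1f}x")  # prints: estimated speedup: 8.0x
```

Since every term is linear in `n_in`, cutting the capsule count by some factor cuts the layer's cost by the same factor, which is the intuition behind QCN's reported inference speedup.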
Pouya Shiri
University of Saskatchewan
Medical Imaging · Diffusion Models · Generative AI

Ramin Sharifi
Department of Electrical and Computer Engineering, University of Victoria, 3800 Finnerty Rd, Victoria, BC, V8P 5J2 Canada

Amirali Baniasadi
University of Victoria
Computer Architecture