DL-CapsNet: A Deep and Light Capsule Network

πŸ“… 2025-11-23
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
Capsule Networks (CapsNets) outperform CNNs at classifying overlapping digits and recognizing affine-transformed images, but they suffer from an excessive parameter count, high computational complexity, and poor scalability to multi-class tasks. To address these limitations, we propose the Deep and Light Capsule Network (DL-CapsNet): it employs a deep stack of capsule layers to enhance representational capacity, introduces a novel Capsule Summarization layer to replace redundant fully connected capsule layers, and integrates dynamic-routing optimization with parameter-compression techniques. DL-CapsNet reduces model parameters by approximately 62% relative to the baseline CapsNet, significantly accelerating both training and inference. Crucially, it matches or even surpasses the original CapsNet’s classification accuracy on challenging benchmarks, including Multi-MNIST and SVHN. To our knowledge, DL-CapsNet is the first capsule-based architecture to jointly achieve high efficiency and competitive accuracy in high-cardinality classification scenarios.
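The summary above mentions dynamic routing, the routing-by-agreement procedure from the original CapsNet that DL-CapsNet builds on. The paper's own optimized variant is not spelled out here; the following is a minimal numpy sketch of the standard algorithm (Sabour et al., 2017), with shapes and iteration count chosen for illustration:

```python
import numpy as np

def squash(s, axis=-1, eps=1e-8):
    # Non-linearity that scales a vector's norm into [0, 1) while
    # preserving its direction, so the norm can act as a probability.
    sq_norm = np.sum(s ** 2, axis=axis, keepdims=True)
    return (sq_norm / (1.0 + sq_norm)) * s / np.sqrt(sq_norm + eps)

def dynamic_routing(u_hat, num_iters=3):
    """Routing-by-agreement between two capsule layers.

    u_hat: prediction vectors from lower-level capsules,
           shape (num_in, num_out, dim_out).
    Returns the output capsule vectors, shape (num_out, dim_out).
    """
    num_in, num_out, _ = u_hat.shape
    b = np.zeros((num_in, num_out))  # routing logits, start uniform
    for _ in range(num_iters):
        # Softmax over output capsules (stabilized by subtracting the max).
        e = np.exp(b - b.max(axis=1, keepdims=True))
        c = e / e.sum(axis=1, keepdims=True)
        s = np.einsum('ij,ijd->jd', c, u_hat)  # coupling-weighted sum
        v = squash(s)                          # candidate output capsules
        # Increase logits where predictions agree with the output.
        b = b + np.einsum('ijd,jd->ij', u_hat, v)
    return v
```

Each routed connection needs a transformation matrix per (input capsule, output capsule) pair, which is where the parameter count explodes and why DL-CapsNet targets the fully connected capsule layers.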


πŸ“ Abstract
Capsule Network (CapsNet) is among the most promising classifiers and a possible successor to classifiers built on Convolutional Neural Networks (CNNs). CapsNet is more accurate than CNNs at detecting images with overlapping categories and images under affine transformations. In this work, we propose a deep variant of CapsNet consisting of several capsule layers. In addition, we design the Capsule Summarization layer, which reduces complexity by cutting the number of parameters. DL-CapsNet, while being highly accurate, employs a small number of parameters and delivers faster training and inference. DL-CapsNet can process complex datasets with a high number of categories.
Problem

Research questions and friction points this paper is trying to address.

Develops a deep capsule network for improved image classification accuracy
Reduces model complexity and parameters for faster training and inference
Enhances capability to handle complex datasets with overlapping categories
Innovation

Methods, ideas, or system contributions that make the work stand out.

Deep variant of CapsNet with multiple capsule layers
Capsule Summarization layer reduces parameters and complexity
Fewer parameters enable fast training while preserving high accuracy
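The summary does not give the Capsule Summarization layer's equations. One plausible reading, sketched below purely as an illustration (the pooling strategy and names are assumptions, not from the paper), is that groups of capsule vectors are pooled into fewer "summary" capsules before routing, so the next routed layer needs proportionally fewer transformation matrices:

```python
import numpy as np

def capsule_summarize(capsules, group_size):
    """Hypothetical summarization: average groups of capsule vectors.

    capsules: shape (num_caps, dim). Consecutive groups of `group_size`
    capsules are averaged, shrinking the capsule count by that factor
    and thereby the parameter count of any following routed layer.
    """
    num_caps, dim = capsules.shape
    assert num_caps % group_size == 0, "capsule count must divide evenly"
    grouped = capsules.reshape(num_caps // group_size, group_size, dim)
    return grouped.mean(axis=1)
```

Under this reading, a layer routing from `num_caps / group_size` summary capsules instead of `num_caps` raw capsules uses `group_size` times fewer per-pair transformation matrices, which is consistent with the reported parameter reduction, though the paper's actual mechanism may differ.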