🤖 AI Summary
To address feature redundancy and gradient degradation in deep convolutional networks, this paper proposes an enhanced Inception architecture that integrates residual connections with multi-scale feature reuse. The core innovation is a dynamic cross-layer feature-reuse mechanism: within each Inception branch, features cached from earlier layers are iteratively re-activated and reweighted, guided by channel-wise attention and a gradient-gated unit, to improve representational continuity and parameter efficiency. Experiments on ImageNet show that the proposed method achieves a 1.3% absolute improvement in Top-1 accuracy, reduces model parameters by 18%, speeds up inference by 22%, and markedly improves generalization in few-shot settings.
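The summary does not spell out the paper's exact formulation, but the reuse step it describes — channel-wise attention computed from the current branch, used to reweight cached historical features before a residual combination — can be sketched minimally in NumPy. Everything below is an assumption for illustration: the function names (`channel_attention`, `reuse_features`), the squeeze-and-excitation-style bottleneck MLP, and the scalar `gate` standing in for the paper's gradient-gated unit are all hypothetical simplifications, not the authors' implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(x, w1, w2):
    """Hypothetical channel-wise attention: squeeze-and-excitation style.

    x: feature map of shape (C, H, W).
    Returns per-channel weights in (0, 1), shape (C,).
    """
    s = x.mean(axis=(1, 2))                      # squeeze: global average pool -> (C,)
    return sigmoid(w2 @ np.maximum(w1 @ s, 0.0)) # excite: bottleneck MLP + sigmoid gate

def reuse_features(current, historical, w1, w2, gate):
    """Reweight cached historical features and fold them back residually.

    `gate` is a scalar stand-in for the paper's gradient-gated unit
    (a deliberate simplification for this sketch).
    """
    a = channel_attention(current, w1, w2)       # (C,) attention weights
    reweighted = historical * a[:, None, None]   # broadcast over H, W
    return current + gate * reweighted           # residual combination

# Toy shapes, purely illustrative.
rng = np.random.default_rng(0)
C, H, W, r = 8, 4, 4, 2                          # r: bottleneck reduction ratio
current = rng.standard_normal((C, H, W))         # current branch output
historical = rng.standard_normal((C, H, W))      # cached earlier-layer features
w1 = rng.standard_normal((C // r, C))            # squeeze C -> C/r
w2 = rng.standard_normal((C, C // r))            # expand C/r -> C
out = reuse_features(current, historical, w1, w2, gate=0.5)
print(out.shape)  # (8, 4, 4)
```

The design choice worth noting is that the attention weights come from the *current* features but are applied to the *historical* ones, so reuse is conditioned on what the present branch needs, which is one plausible reading of "dynamic" reuse in the summary.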