AI Summary
Existing deformable registration methods face efficiency and accuracy bottlenecks in jointly modeling coarse-grained anatomical structures and fine-grained local deformations. To address this, we propose FF-PNet, a dual-path pyramid network for brain image registration that concurrently processes feature representations and deformation fields. FF-PNet introduces, for the first time, a Residual Feature Fusion Module (RFFM) and a Residual Deformation Field Fusion Module (RDFFM), enabling efficient multi-scale feature disentanglement and cascaded deformation optimization within a pure CNN encoder architecture. By deliberately omitting attention mechanisms and MLPs, FF-PNet significantly reduces computational overhead while enhancing representational capacity. Evaluated on the LPBA40 and OASIS-3 datasets, FF-PNet achieves state-of-the-art Dice scores across all anatomical regions among unsupervised methods, demonstrating superior accuracy, computational efficiency, and generalizability.
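The cascaded deformation optimization mentioned above, where a coarse field is refined by finer ones, can be illustrated with a minimal sketch: sequentially warping an image with per-level displacement fields. This is not FF-PNet's actual implementation (the RDFFM internals are not reproduced here); it only demonstrates how pull-based warps compose, using nearest-neighbour sampling and constant toy fields for simplicity.

```python
import numpy as np

def warp_nn(image, dy, dx):
    """Pull-based warp: output[y, x] = image[y + dy[y, x], x + dx[y, x]],
    with nearest-neighbour rounding and border clamping."""
    h, w = image.shape
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    src_y = np.clip(np.rint(ys + dy).astype(int), 0, h - 1)
    src_x = np.clip(np.rint(xs + dx).astype(int), 0, w - 1)
    return image[src_y, src_x]

# A 5x5 image with a single bright pixel at (1, 1).
img = np.zeros((5, 5))
img[1, 1] = 1.0

# Two hypothetical pyramid levels, each contributing a constant -1 shift
# in y (a stand-in for a coarse field plus a finer residual field).
coarse_dy = -np.ones((5, 5))
fine_dy = -np.ones((5, 5))
zero = np.zeros((5, 5))

# Cascaded warping: apply the coarse field, then the fine one.
out = warp_nn(warp_nn(img, coarse_dy, zero), fine_dy, zero)
# The bright pixel moves from row 1 to row 3.
```

Real registration networks use differentiable (e.g. trilinear) sampling rather than nearest-neighbour lookup, but the composition order is the same.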
Abstract
In recent years, deformable medical image registration techniques have made significant progress. However, existing models still lack efficiency in the parallel extraction of coarse- and fine-grained features. To address this, we construct a new pyramid registration network based on features and deformation fields (FF-PNet). For coarse-grained feature extraction, we design a Residual Feature Fusion Module (RFFM); for fine-grained image deformation, we propose a Residual Deformation Field Fusion Module (RDFFM). Through the parallel operation of these two modules, the model can effectively handle complex image deformations. It is worth emphasizing that the encoding stage of FF-PNet employs only traditional convolutional neural networks, without any attention mechanisms or multilayer perceptrons, yet it still achieves remarkable improvements in registration accuracy, fully demonstrating the superior feature decoding capabilities of RFFM and RDFFM. We conducted extensive experiments on the LPBA and OASIS datasets. The results show that our network consistently outperforms popular methods on metrics such as the Dice Similarity Coefficient.
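Since the Dice Similarity Coefficient is the headline metric in the evaluation above, a small reference implementation may be useful. This is the standard DSC for binary segmentation masks, not code from the paper:

```python
import numpy as np

def dice_score(a, b, eps=1e-8):
    """Dice Similarity Coefficient for two binary masks:
    DSC = 2 * |A intersect B| / (|A| + |B|)."""
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    intersection = np.logical_and(a, b).sum()
    return 2.0 * intersection / (a.sum() + b.sum() + eps)

# Two toy masks, each with two foreground voxels, overlapping in one.
m1 = np.array([1, 1, 0, 0])
m2 = np.array([1, 0, 1, 0])
score = dice_score(m1, m2)  # 2*1 / (2+2), approximately 0.5
```

In registration benchmarks, the score is typically computed per anatomical label on the warped moving segmentation versus the fixed segmentation, then averaged across labels.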