π€ AI Summary
Manual annotation of anatomical landmarks in medical images is time-consuming and labor-intensive, while existing supervised methods rely heavily on large-scale, high-quality labeled data and generalize poorly across multi-contrast MRI. To address this, we propose the first unsupervised 3D brain landmark detection framework, requiring only a single reference image to achieve high-precision localization on unlabeled T1w/T2w and multi-field-strength MRI. Our approach introduces a contrast-agnostic detection paradigm, integrating inter-subject consistency regularization with deformable registration joint loss, 3D convolutional contrastive enhancement, and an adaptive hybrid loss scheduling mechanism. Evaluated on four multimodal datasets, our method significantly outperforms state-of-the-art approaches: it reduces mean radial error by 19.3% and improves successful detection rate by 12.7%, demonstrating strong generalizability and clinical applicability.
π Abstract
Anatomical landmark detection in medical images is essential for various clinical and research applications, including disease diagnosis and surgical planning. However, manual landmark annotation is time-consuming and requires significant expertise. Existing deep learning (DL) methods often require large amounts of well-annotated data, which are costly to acquire. In this paper, we introduce CABLD, a novel self-supervised DL framework for 3D brain landmark detection in unlabeled scans with varying contrasts by using only a single reference example. To achieve this, we employed an inter-subject landmark consistency loss with an image registration loss while introducing a 3D convolution-based contrast augmentation strategy to promote model generalization to new contrasts. Additionally, we utilize an adaptive mixed loss function to schedule the contributions of different sub-tasks for optimal outcomes. We demonstrate the proposed method with the intricate task of MRI-based 3D brain landmark detection. With comprehensive experiments on four diverse clinical and public datasets, including both T1w and T2w MRI scans at different MRI field strengths, we demonstrate that CABLD outperforms the state-of-the-art methods in terms of mean radial errors (MREs) and success detection rates (SDRs). Our framework provides a robust and accurate solution for anatomical landmark detection, reducing the need for extensively annotated datasets and generalizing well across different imaging contrasts. Our code will be publicly available at: https://github.com/HealthX-Lab/CABLD.