🤖 AI Summary
This work addresses the challenge of accurate liver segmentation in cone-beam computed tomography (CBCT) images for interventional radiology, where the scarcity of annotated data hinders supervised learning. To overcome this limitation, the authors propose an unsupervised domain adaptation framework based on an improved Margin Disparity Discrepancy (MDD) approach, leveraging labeled CT data from a source domain and unlabeled CBCT data from a target domain for cross-modal liver segmentation. By reformulating the MDD optimization objective to align domains using only target-domain samples, the method effectively mitigates modality-induced distribution shifts. The proposed approach achieves substantial performance gains in both fully unsupervised and few-shot settings, establishing a new state of the art among unsupervised domain adaptation methods for the CT-to-CBCT liver segmentation task.
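For orientation, below is a minimal sketch of the original MDD minimax objective of Zhang et al. (2019), on which the reformulation builds. The notation (feature extractor ψ, main hypothesis f, auxiliary hypothesis f′, margin γ, trade-off η) follows that paper; the exact reformulated objective is not given in this summary and is not reproduced here.

```latex
% Original MDD objective (Zhang et al., 2019): source error plus a
% worst-case margin-disparity gap between source and target features.
% Per the summary above, the paper reformulates the bracketed term so
% that it is estimated from target-domain samples only; the exact
% reformulated expression is an open detail of the paper itself.
\min_{f,\,\psi}\; \varepsilon_{S}(f)
  \;+\; \eta \max_{f'} \Big[
      \operatorname{disp}^{(\gamma)}_{\psi(T)}\!\big(f', f\big)
      \;-\; \operatorname{disp}^{(\gamma)}_{\psi(S)}\!\big(f', f\big)
  \Big]
```

Here ε_S(f) is the supervised loss on labeled source (CT) data, and disp^(γ) is the margin disparity between the main hypothesis f and an adversarially trained auxiliary hypothesis f′, evaluated on target features ψ(T) and source features ψ(S) respectively.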
📝 Abstract
In interventional radiology, Cone-Beam Computed Tomography (CBCT) is a valuable imaging modality that provides guidance to practitioners during minimally invasive procedures. CBCT differs from conventional Computed Tomography (CT) in its limited reconstructed field of view, modality-specific artefacts, and the intra-arterial administration of contrast medium. While CT benefits from abundant publicly available annotated datasets, interventional CBCT data remain scarce and largely unannotated, with existing datasets focused primarily on radiotherapy applications. To address this limitation, we leverage a proprietary collection of unannotated interventional CBCT scans in conjunction with annotated CT data, employing domain adaptation techniques to bridge the modality gap and enhance liver segmentation performance on CBCT. We propose a novel unsupervised domain adaptation (UDA) framework based on the formalism of Margin Disparity Discrepancy (MDD), which improves target-domain performance through a reformulation of the original MDD optimization framework. Experimental results on CT and CBCT liver segmentation datasets demonstrate that our method achieves state-of-the-art performance in UDA, as well as in the few-shot setting.
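To make the training dynamics concrete, the sketch below shows how a standard MDD transfer term is typically combined with a supervised segmentation loss in PyTorch. This is not the authors' implementation: the per-pixel adaptation to segmentation, all function and module names, and the hyperparameter values are assumptions based on the common MDD recipe.

```python
# Hypothetical per-pixel MDD training loss for segmentation (PyTorch).
# NOT the paper's code: names, the gradient-reversal layer, and the
# per-pixel extension of MDD are assumptions based on the standard
# recipe of Zhang et al. (2019).
import torch
import torch.nn.functional as F


class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; negated gradient on the backward pass."""

    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -grad_output


def mdd_loss(feats_src, feats_tgt, labels_src, seg_head, adv_head,
             margin_weight=4.0, eta=1.0):
    """Supervised CE on source + adversarial MDD transfer term.

    feats_*: encoder features (B, C_feat, H, W); labels_src: (B, H, W).
    seg_head / adv_head map features to per-pixel class logits.
    """
    logits_src = seg_head(feats_src)
    logits_tgt = seg_head(feats_tgt)

    # Supervised segmentation loss on annotated CT (source domain).
    sup_loss = F.cross_entropy(logits_src, labels_src)

    # The auxiliary head sees gradient-reversed features: it maximizes
    # the disparity gap, while the encoder, receiving reversed
    # gradients, learns features that minimize it.
    adv_src = adv_head(GradReverse.apply(feats_src))
    adv_tgt = adv_head(GradReverse.apply(feats_tgt))

    # Pseudo-labels: the main head's own hard per-pixel predictions.
    pseudo_src = logits_src.argmax(dim=1)
    pseudo_tgt = logits_tgt.argmax(dim=1)

    # Source term: pushes the auxiliary head to agree with the main head.
    adv_loss_src = F.cross_entropy(adv_src, pseudo_src)

    # Target term: surrogate rewarding disagreement with the main head
    # (minimizing -log(1 - p_y) drives p_y toward zero).
    prob_tgt = F.softmax(adv_tgt, dim=1).clamp(max=1.0 - 1e-6)
    adv_loss_tgt = F.nll_loss(torch.log(1.0 - prob_tgt), pseudo_tgt)

    transfer_loss = margin_weight * adv_loss_src + adv_loss_tgt
    return sup_loss + eta * transfer_loss
```

In this standard form the transfer term draws on both source and target batches; the target-only reformulation described above would modify how that term is estimated.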