🤖 AI Summary
Foundation medical segmentation models (e.g., MedSAM) exhibit limited performance on complex lesions and are highly sensitive to bounding-box prompt perturbations. Existing test-time adaptation (TTA) methods suffer from weak update signals, catastrophic forgetting, and prohibitive computational overhead. To address these issues, we propose a **parameter-free TTA paradigm**: we theoretically establish that optimizing image embeddings can achieve the same effect as fine-tuning model parameters; on top of this, we design a joint objective combining a distribution-approximated latent conditional random field loss with entropy minimization to achieve robust embedding-space adaptation. This approach avoids catastrophic forgetting and substantially reduces computational complexity. Evaluated on three medical segmentation benchmarks, our method achieves an average Dice score improvement of about 3% while cutting adaptation overhead by over 7×, striking an effective balance between efficiency and accuracy.
📝 Abstract
Foundation medical segmentation models, with MedSAM being the most popular, have achieved promising performance across organs and lesions. However, MedSAM still suffers from compromised performance on lesions with intricate structures and appearance, as well as from perturbations induced by bounding-box prompts. Although current test-time adaptation (TTA) methods for medical image segmentation may tackle this issue, partial (e.g., batch normalization) or whole parametric updates restrict their effectiveness due to limited update signals or catastrophic forgetting in large models. Meanwhile, these approaches ignore the computational cost of adaptation, which is particularly significant for modern foundation models. To this end, our theoretical analysis reveals that, under the MedSAM architecture, directly refining image embeddings can achieve the same goal as parametric updates, enabling high computational efficiency and segmentation performance without the risk of catastrophic forgetting. Under this framework, we propose to maximize the factorized conditional probabilities of the posterior prediction via a distribution-approximated latent conditional random field loss combined with an entropy minimization loss. Experiments show that we achieve Dice score improvements of about 3% across three datasets while reducing computational complexity by over 7 times.
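To make the embedding-space adaptation idea concrete, here is a minimal toy sketch of the parameter-free TTA loop using only the entropy-minimization term. All names are illustrative assumptions: a frozen linear map `W` stands in for MedSAM's mask decoder, and the distribution-approximated latent CRF loss from the paper is omitted for brevity. The key point is that gradients flow only into the image embedding `z`, never into the (frozen) decoder weights.

```python
import numpy as np

rng = np.random.default_rng(0)
D, N = 16, 32                 # embedding dim, number of pixels (toy sizes)
W = rng.normal(size=(N, D))   # frozen stand-in for the mask decoder (never updated)
z = rng.normal(size=D)        # image embedding -- the ONLY quantity we adapt

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def mean_entropy(p, eps=1e-8):
    """Mean binary entropy of the per-pixel foreground probabilities."""
    return -(p * np.log(p + eps) + (1 - p) * np.log(1 - p + eps)).mean()

lr = 0.1
h_before = mean_entropy(sigmoid(W @ z))
for _ in range(100):
    logits = W @ z
    p = sigmoid(logits)
    # Analytic gradient of mean entropy w.r.t. logits: dH/dl = -l * p * (1 - p) / N
    grad_logits = -logits * p * (1 - p) / N
    # Chain rule through the frozen decoder; update the embedding, not parameters
    z -= lr * (W.T @ grad_logits)
h_after = mean_entropy(sigmoid(W @ z))
print(h_after < h_before)     # prediction entropy decreases after adaptation
```

Because the model parameters are untouched, adaptation on one test image cannot degrade the pretrained model for later images, and the optimization problem is only `D`-dimensional rather than the size of the full network, which is where the efficiency gain comes from.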