🤖 AI Summary
Current medical image segmentation methods face two critical bottlenecks: (1) adapting to a new task requires retraining or fine-tuning, which demands substantial annotated data and machine-learning expertise and hinders clinical deployment; and (2) deterministic outputs fail to capture the inherent uncertainty reflected in inter-observer annotation variability. Tyche addresses both with a context-driven stochastic segmentation framework that operates without task-specific retraining. It introduces two key mechanisms: a novel convolution block that lets candidate predictions interact with one another, and in-context test-time augmentation (in-context TTA), which injects controlled stochasticity into predictions. Combined with appropriate model design and loss functions, Tyche generates plausible, diverse segmentation ensembles for previously unseen tasks across multiple medical imaging benchmarks, improving uncertainty quantification and clinical interpretability.
📝 Abstract
Existing learning-based solutions to medical image segmentation have two important shortcomings. First, for most new segmentation tasks, a new model has to be trained or fine-tuned. This requires extensive resources and machine-learning expertise, and is therefore often infeasible for medical researchers and clinicians. Second, most existing segmentation methods produce a single deterministic segmentation mask for a given image. In practice, however, there is often considerable uncertainty about what constitutes the correct segmentation, and different expert annotators will often segment the same image differently. We tackle both of these problems with Tyche, a framework that uses a context set to generate stochastic predictions for previously unseen tasks without the need to retrain. Tyche differs from other in-context segmentation methods in two important ways. (1) We introduce a novel convolution block architecture that enables interactions among predictions. (2) We introduce in-context test-time augmentation, a new mechanism to provide prediction stochasticity. When combined with appropriate model design and loss functions, Tyche can predict a set of plausible, diverse segmentation candidates for new or unseen medical images and segmentation tasks, again without retraining. The Tyche code is available at: https://tyche.csail.mit.edu/
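To make the in-context test-time augmentation idea concrete, here is a minimal sketch of the general pattern: each candidate segmentation is produced by applying a random augmentation to both the target image and the context set, running an in-context segmentation model, and inverting the augmentation on the prediction. The `model` interface and the `toy_model` stand-in are hypothetical illustrations, not Tyche's actual architecture, and the augmentations here are limited to 90-degree rotations for simplicity.

```python
import numpy as np

def in_context_tta(model, target, context_images, context_masks,
                   n_candidates=4, rng=None):
    """Sketch of in-context TTA: each candidate comes from a differently
    augmented (target, context) pair; predictions are mapped back to the
    original frame, yielding a set of diverse segmentation candidates."""
    rng = np.random.default_rng(rng)
    candidates = []
    for _ in range(n_candidates):
        k = int(rng.integers(0, 4))  # random 90-degree rotation (toy augmentation)
        t_aug = np.rot90(target, k)
        ctx_imgs = [np.rot90(img, k) for img in context_images]
        ctx_msks = [np.rot90(msk, k) for msk in context_masks]
        pred = model(t_aug, ctx_imgs, ctx_msks)
        candidates.append(np.rot90(pred, -k))  # invert the augmentation
    return np.stack(candidates)

def toy_model(target, context_images, context_masks):
    """Hypothetical stand-in for an in-context segmenter: threshold the
    target at the mean intensity of foreground pixels in the context set."""
    thr = np.mean([img[msk > 0].mean()
                   for img, msk in zip(context_images, context_masks)])
    return (target >= thr).astype(np.uint8)
```

Because the model sees a differently transformed context set on every pass, the resulting candidates can disagree in ambiguous regions, which is precisely the inter-observer variability the framework aims to expose.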