🤖 AI Summary
This study investigates the generalization capabilities of vision foundation models (VFMs) on heterogeneous electron microscopy (EM) images, with a focus on mitochondrial segmentation. We evaluate DINOv2, DINOv3, and OpenCLIP on the Lucchi++ and VNC EM datasets using a frozen backbone paired with a lightweight segmentation head, as well as parameter-efficient fine-tuning via LoRA. Representation spaces are analyzed through PCA, Fréchet DINOv2 distance, and linear probing. Results show strong performance when models are trained on a single dataset, with LoRA further enhancing in-domain accuracy. However, joint training across multiple datasets leads to significant performance degradation, revealing that current VFMs lack cross-domain robustness under implicit domain shifts in EM data. Moreover, existing parameter-efficient fine-tuning strategies fail to mitigate this domain mismatch—an issue systematically uncovered for the first time in this work.
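The summary mentions parameter-efficient fine-tuning via LoRA, which adapts a frozen backbone by adding a trainable low-rank correction to selected weight matrices. The paper does not specify its implementation, so the following is only a minimal numpy sketch of the core LoRA update (all names and shapes are illustrative, not taken from the paper):

```python
import numpy as np

def lora_forward(x, W, A, B, alpha, r):
    """Linear layer with a LoRA update.

    The frozen pretrained weight W (d_out x d_in) is augmented by the
    low-rank correction (alpha / r) * B @ A; only A and B are trained.
    """
    return x @ (W + (alpha / r) * (B @ A)).T

rng = np.random.default_rng(0)
d_in, d_out, r = 8, 6, 2
W = rng.normal(size=(d_out, d_in))      # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                # trainable up-projection, zero-initialized
x = rng.normal(size=(4, d_in))

# With B zero-initialized, LoRA starts as an exact identity on the base model,
# so fine-tuning begins from the pretrained behavior.
assert np.allclose(lora_forward(x, W, A, B, alpha=16, r=r), x @ W.T)
```

In practice such updates are applied to the attention projections of the ViT backbone; the zero initialization of `B` is what makes the adapted model coincide with the frozen one at the start of training.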
📝 Abstract
Although vision foundation models (VFMs) are increasingly reused for biomedical image analysis, it remains unclear whether the latent representations they provide are general enough to support effective transfer and reuse across heterogeneous microscopy image datasets. Here, we study this question for the problem of mitochondria segmentation in electron microscopy (EM) images, using two popular public EM datasets (Lucchi++ and VNC) and three recent representative VFMs (DINOv2, DINOv3, and OpenCLIP). We evaluate two practical model adaptation regimes: a frozen-backbone setting in which only a lightweight segmentation head is trained on top of the VFM, and parameter-efficient fine-tuning (PEFT) via Low-Rank Adaptation (LoRA) in which the VFM is fine-tuned in a targeted manner to a specific dataset. Across all backbones, we observe that training on a single EM dataset yields good segmentation performance (quantified as foreground Intersection-over-Union), and that LoRA consistently improves in-domain performance. In contrast, training on multiple EM datasets leads to severe performance degradation for all models considered, with only marginal gains from PEFT. Exploration of the latent representation space through various techniques (PCA, Fréchet DINOv2 distance, and linear probes) reveals a pronounced and persistent domain mismatch between the two considered EM datasets in spite of their visual similarity, which is consistent with the observed failure of joint training on both datasets. These results suggest that, while VFMs can deliver competitive results for EM segmentation within a single domain under lightweight adaptation, current PEFT strategies are insufficient to obtain a single robust model across heterogeneous EM datasets without additional domain-alignment mechanisms.
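The abstract quantifies segmentation quality as foreground Intersection-over-Union (IoU), i.e. the overlap between predicted and ground-truth mitochondria masks divided by their union. As a minimal sketch (the paper's actual evaluation code is not given), foreground IoU on binary masks can be computed as:

```python
import numpy as np

def foreground_iou(pred, target):
    """Foreground Intersection-over-Union for binary segmentation masks.

    pred, target: boolean arrays of equal shape, True marking foreground
    (mitochondria) pixels. Returns 1.0 when both masks are empty.
    """
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return inter / union if union > 0 else 1.0

pred   = np.array([[1, 1, 0], [0, 1, 0]], dtype=bool)
target = np.array([[1, 0, 0], [0, 1, 1]], dtype=bool)
print(foreground_iou(pred, target))  # 2 pixels intersect, 4 in union -> 0.5
```

Reporting the foreground class only (rather than a mean over foreground and background) avoids inflating scores on EM slices where background dominates the image area.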