🤖 AI Summary
Fluorescence microscopy videos are often compromised by noise, temporal variability, and signal oscillations, hindering accurate analysis of dynamic biological processes. This work proposes an interpretable, end-to-end computational framework that, for the first time, integrates multi-temporal image registration, feature alignment, and interpretable computer vision algorithms drawn from different application domains to efficiently compress dynamic video sequences into a single high-quality image while preserving critical biological structures. Experiments on a complex dataset of cardiomyocyte monolayers show that the proposed method increases the average number of detected cells by 44% compared with existing approaches, significantly enhancing both image quality and downstream segmentation performance.
📝 Abstract
Fluorescence microscopy is widely employed for the analysis of living biological samples; however, the utility of the resulting recordings is frequently constrained by noise, temporal variability, and inconsistent visualisation of signals that oscillate over time. We present a unique computational framework that integrates information from multiple time-resolved frames into a single high-quality image, while preserving the underlying biological content of the original video. We evaluate the proposed method across an extensive set of configurations (n = 111) and on a challenging dataset comprising dynamic, heterogeneous, and morphologically complex 2D monolayers of cardiac cells. Results show that our framework, which combines explainable techniques from different computer vision application fields, is capable of generating composite images that preserve and enhance the quality and information of individual microscopy frames, yielding a 44% average increase in cell count compared to previous methods. The proposed pipeline is applicable to other imaging domains that require the fusion of multi-temporal image stacks into high-quality 2D images, thereby facilitating annotation and downstream segmentation.
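To make the register-then-fuse idea concrete, the sketch below shows a minimal version of this kind of pipeline, not the paper's actual framework: each frame of a (T, H, W) fluorescence stack is rigidly aligned to the first frame via phase cross-correlation and the aligned stack is fused with a maximum-intensity projection. The specific registration method, fusion rule, and the `fuse_stack` helper are illustrative assumptions; the paper instead combines several explainable computer vision techniques whose details are not reproduced here.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift
from skimage.registration import phase_cross_correlation


def fuse_stack(frames: np.ndarray) -> np.ndarray:
    """Register a (T, H, W) fluorescence stack to its first frame and fuse it.

    Illustrative sketch only: translation-only registration and a max projection
    stand in for the paper's multi-step, explainable pipeline.
    """
    reference = frames[0].astype(np.float32)
    registered = [reference]
    for frame in frames[1:]:
        frame = frame.astype(np.float32)
        # Estimate the translational offset of this frame relative to the reference.
        offset, _, _ = phase_cross_correlation(reference, frame, upsample_factor=10)
        registered.append(nd_shift(frame, offset))
    stack = np.stack(registered)
    # Fuse the aligned frames; a max projection retains transient oscillating signals.
    return stack.max(axis=0)


if __name__ == "__main__":
    # Example: fuse a synthetic 20-frame noisy stack into one 2D composite image.
    rng = np.random.default_rng(0)
    demo = rng.poisson(5.0, size=(20, 128, 128)).astype(np.float32)
    composite = fuse_stack(demo)
    print(composite.shape)  # (128, 128)
```

In practice the fusion rule (max, mean, or a learned/weighted combination) determines whether oscillating signals are accumulated or averaged out, which is one of the trade-offs the proposed framework is designed to address.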