🤖 AI Summary
Existing medical vision-language models (VLMs) predominantly rely on single-frame ultrasound images, which limits their ability to capture cardiac dynamics and view-dependent diagnostic cues and thus constrains echocardiographic video understanding. To address this, we propose the first cross-modal understanding model specifically designed for multi-view echocardiographic videos, integrating full video sequences from five standard anatomical views with the corresponding clinical reports in a view-aware temporal semantic alignment framework. Methodologically, we adapt the CLIP architecture to a video-text contrastive learning objective, jointly leverage 3D CNNs and spatiotemporal Transformers for multi-view video representation learning, and introduce a novel cross-view, cross-modal alignment loss. Evaluated on 60,747 real-world clinical cases, the model achieves a 4.2–7.8% absolute improvement in diagnostic accuracy over single-view video and single-frame baselines, with particularly notable gains in detecting valvular motion abnormalities and assessing systolic function.
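For reference, the CLIP-style symmetric video-text contrastive objective that such an alignment builds on can be written as below, where $v_i$ is the fused multi-view video embedding and $t_i$ the report embedding of case $i$ in a batch of $N$ pairs, $s_{ij}$ their cosine similarity, and $\tau$ a temperature. This is the generic formulation only; the paper's cross-view, cross-modal alignment loss presumably adds further view-level terms not shown here.

$$
\mathcal{L}_{\text{contrastive}} = -\frac{1}{2N}\sum_{i=1}^{N}\left[
\log\frac{\exp(s_{ii}/\tau)}{\sum_{j=1}^{N}\exp(s_{ij}/\tau)}
+ \log\frac{\exp(s_{ii}/\tau)}{\sum_{j=1}^{N}\exp(s_{ji}/\tau)}
\right],
\qquad
s_{ij} = \frac{v_i^{\top} t_j}{\lVert v_i \rVert \,\lVert t_j \rVert}
$$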
📝 Abstract
Echocardiography involves recording videos of the heart with ultrasound, enabling clinicians to evaluate its condition. Recent advances in large-scale vision-language models (VLMs) have drawn attention to automating the interpretation of echocardiographic videos. However, most VLMs proposed for medical interpretation so far rely on single-frame (i.e., image) inputs. Consequently, these image-based models often exhibit lower diagnostic accuracy for conditions that are identifiable through cardiac motion. Moreover, echocardiographic videos are recorded from various views, depending on the direction of ultrasound emission, and certain views are more suitable than others for interpreting specific conditions, so incorporating multiple views could yield further gains in accuracy. In this study, we developed a video-language model that takes full video sequences from five different views as input and trained it on pairs of echocardiographic videos and clinical reports from 60,747 cases. Our experiments demonstrate that this expanded approach achieves higher interpretation accuracy than models trained only on single-view videos or still images.
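To make the multi-view video-text setup concrete, here is a minimal, illustrative PyTorch sketch of how clips from five views might be encoded into a single view-aware video embedding and aligned with a report embedding via a CLIP-style loss. All names (`ViewAwareVideoEncoder`, `clip_style_loss`), dimensions, and the toy backbone are assumptions for illustration and do not reflect the authors' actual implementation.

```python
# Illustrative sketch (not the authors' code) of multi-view video-text
# contrastive alignment for echocardiography. Shapes, view count, and
# module design are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_VIEWS = 5   # five standard views are assumed, per the paper's setup
EMBED_DIM = 512

class ViewAwareVideoEncoder(nn.Module):
    """Toy stand-in for the 3D-CNN + spatiotemporal-Transformer video tower."""
    def __init__(self, embed_dim=EMBED_DIM):
        super().__init__()
        # A single 3D conv + global pooling stands in for the real backbone.
        self.backbone = nn.Sequential(
            nn.Conv3d(3, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.proj = nn.Linear(64, embed_dim)
        # Learned per-view embeddings make the representation view-aware.
        self.view_embed = nn.Embedding(NUM_VIEWS, embed_dim)

    def forward(self, clips, view_ids):
        # clips: (B, V, C, T, H, W); view_ids: (B, V) with values in [0, V).
        b, v = clips.shape[:2]
        feats = self.backbone(clips.flatten(0, 1)).flatten(1)   # (B*V, 64)
        feats = self.proj(feats).view(b, v, -1)                 # (B, V, D)
        feats = feats + self.view_embed(view_ids)               # add view info
        return feats.mean(dim=1)                                # fuse views

def clip_style_loss(video_emb, text_emb, temperature=0.07):
    """Symmetric video<->report InfoNCE loss, as in CLIP."""
    video_emb = F.normalize(video_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    logits = video_emb @ text_emb.t() / temperature
    targets = torch.arange(logits.size(0), device=logits.device)
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2

if __name__ == "__main__":
    B, T, H, W = 2, 16, 112, 112
    clips = torch.randn(B, NUM_VIEWS, 3, T, H, W)
    view_ids = torch.arange(NUM_VIEWS).expand(B, -1)
    text_emb = torch.randn(B, EMBED_DIM)   # stand-in for a report text encoder
    video_emb = ViewAwareVideoEncoder()(clips, view_ids)
    print(clip_style_loss(video_emb, text_emb).item())
```

Averaging the per-view features is only the simplest possible fusion; a spatiotemporal Transformer over view and frame tokens, as described in the summary above, would allow richer cross-view interactions.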