AI Summary
Predicting nanoparticle (NP) distribution within the tumor microenvironment (TME) remains challenging due to inherent heterogeneity and distribution divergence among multi-modal TME components, which can degrade joint modeling performance. Method: We propose DAMM-Diffusion, a Divergence-Aware Multi-Modal Diffusion model built on a U-Net-based diffusion architecture. Its multi-modal branch introduces a Multi-Modal Fusion Module (MMFM) for cross-modal feature fusion and an Uncertainty-Aware Fusion Module (UAFM) for uncertainty-guided cross-attention, while a Divergence-Aware Multi-Modal Predictor (DAMMP) adaptively selects between the uni-modal and multi-modal branches, enabling their cooperation within a unified network. Contribution/Results: DAMM-Diffusion achieves state-of-the-art performance on NP distribution prediction. Furthermore, it demonstrates strong generalizability and robustness on multi-modal brain image synthesis, validating its efficacy beyond oncology applications. This work establishes a divergence-aware multi-modal learning paradigm for modeling TME heterogeneity.
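To make the adaptive branch selection concrete, here is a minimal sketch of one way such a divergence-aware gate could work: fall back to the uni-modal prediction for samples whose multi-modal uncertainty map is high on average. The gating rule, the threshold, and all tensor shapes are illustrative assumptions; the summary does not specify the actual DAMMP criterion.

```python
# Hypothetical divergence-aware gate between uni-modal and multi-modal
# predictions. This is a sketch of the idea, not the paper's implementation.
import torch


def divergence_aware_select(pred_uni: torch.Tensor,
                            pred_multi: torch.Tensor,
                            uncertainty: torch.Tensor,
                            threshold: float = 0.5) -> torch.Tensor:
    """Use the uni-modal prediction for samples whose multi-modal
    uncertainty map is high on average (assumed decision rule)."""
    mean_u = uncertainty.flatten(1).mean(dim=1)                # (B,)
    use_uni = (mean_u > threshold).float().view(-1, 1, 1, 1)   # per-sample gate
    return use_uni * pred_uni + (1.0 - use_uni) * pred_multi


# Toy usage: sample 0 has inconsistent modalities (high uncertainty),
# sample 1 has consistent modalities (low uncertainty).
pred_uni = torch.zeros(2, 1, 8, 8)
pred_multi = torch.ones(2, 1, 8, 8)
uncertainty = torch.stack([torch.full((1, 8, 8), 0.9),
                           torch.full((1, 8, 8), 0.1)])
out = divergence_aware_select(pred_uni, pred_multi, uncertainty)
print(out[0].mean().item(), out[1].mean().item())  # 0.0 (uni-modal), 1.0 (multi-modal)
```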
Abstract
Predicting the distribution of nanoparticles (NPs) is crucial for the diagnosis and treatment of tumors. Recent studies indicate that the heterogeneity of the tumor microenvironment (TME) strongly affects the distribution of NPs across tumors. Hence, generating the NP distribution with the aid of multi-modal TME components has become a research hotspot. However, the distribution divergence among multi-modal TME components may cause side effects, i.e., the best uni-modal model may outperform the joint generative model. To address this issue, we propose a Divergence-Aware Multi-Modal Diffusion model (i.e., DAMM-Diffusion) to adaptively generate the prediction results from uni-modal and multi-modal branches in a unified network. In detail, the uni-modal branch adopts a U-Net architecture, while the multi-modal branch extends it with two novel fusion modules, i.e., the Multi-Modal Fusion Module (MMFM) and the Uncertainty-Aware Fusion Module (UAFM). Specifically, the MMFM fuses features from multiple modalities, while the UAFM learns the uncertainty map for cross-attention computation. Given the individual prediction results from each branch, the Divergence-Aware Multi-Modal Predictor (DAMMP) module assesses the consistency of the multi-modal data with the uncertainty map and determines whether the final prediction comes from the multi-modal or the uni-modal branch. We predict the NP distribution given the TME components of tumor vessels and cell nuclei, and the experimental results show that DAMM-Diffusion generates the distribution of NPs with higher accuracy than competing methods. Additional results on the multi-modal brain image synthesis task further validate the effectiveness of the proposed method.
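To illustrate how the two fusion modules could interact, below is a minimal PyTorch sketch. The module names (MMFM, UAFM) come from the abstract, but the layer choices, tensor shapes, and the exact way the learned uncertainty map enters the cross-attention are assumptions for illustration, not the authors' released implementation.

```python
# Minimal sketch of the two fusion modules named in the abstract. All
# architectural details here are assumptions; only the module names and the
# idea of an uncertainty map modulating cross-attention come from the paper.
import torch
import torch.nn as nn


class MMFM(nn.Module):
    """Assumed fusion: concatenate modality features, mix with 1x1 convs."""
    def __init__(self, channels: int, num_modalities: int = 2):
        super().__init__()
        self.mix = nn.Sequential(
            nn.Conv2d(channels * num_modalities, channels, 1),
            nn.GELU(),
            nn.Conv2d(channels, channels, 1),
        )

    def forward(self, feats):
        return self.mix(torch.cat(feats, dim=1))


class UAFM(nn.Module):
    """Assumed uncertainty-aware cross-attention: queries come from the fused
    features, keys/values from a uni-modal stream, and a learned per-pixel
    uncertainty map down-weights attention logits at unreliable positions."""
    def __init__(self, channels: int):
        super().__init__()
        self.q = nn.Conv2d(channels, channels, 1)
        self.k = nn.Conv2d(channels, channels, 1)
        self.v = nn.Conv2d(channels, channels, 1)
        self.uncertainty_head = nn.Conv2d(channels, 1, 1)

    def forward(self, fused, unimodal):
        B, C, H, W = fused.shape
        u = torch.sigmoid(self.uncertainty_head(fused))        # (B,1,H,W); higher = less reliable
        q = self.q(fused).flatten(2).transpose(1, 2)           # (B, HW, C)
        k = self.k(unimodal).flatten(2)                        # (B, C, HW)
        v = self.v(unimodal).flatten(2).transpose(1, 2)        # (B, HW, C)
        logits = q @ k / C ** 0.5                              # (B, HW, HW)
        # Penalize queries at uncertain positions (hypothetical formulation).
        logits = logits * (1.0 - u.flatten(2).transpose(1, 2))
        out = (logits.softmax(dim=-1) @ v).transpose(1, 2).reshape(B, C, H, W)
        return fused + out, u                                  # residual fusion + uncertainty map


# Toy usage with hypothetical tumor-vessel and cell-nuclei feature maps.
B, C, H, W = 2, 16, 8, 8
vessel, nuclei = torch.randn(B, C, H, W), torch.randn(B, C, H, W)
fused = MMFM(C)([vessel, nuclei])
out, u_map = UAFM(C)(fused, vessel)
print(out.shape, u_map.shape)  # torch.Size([2, 16, 8, 8]) torch.Size([2, 1, 8, 8])
```

The returned uncertainty map is what a DAMMP-style gate (as sketched after the AI summary above) would consume to decide between the uni-modal and multi-modal predictions.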