Biomedical Foundation Model: A Survey

📅 2025-03-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
Biomedical foundation models face challenges in cross-modal and multi-task modeling due to the absence of unified evaluation and adaptation paradigms. Method: We conduct a systematic review across five domains—computational biology, drug discovery, clinical informatics, medical imaging, and public health—and propose the first comprehensive technical analysis framework covering large language models (LLMs), vision-language models (VLMs), multimodal pretraining, self-supervised learning, and domain-adaptive fine-tuning, synthesizing over 100 representative studies. Contribution/Results: We introduce a standardized evaluation framework and a forward-looking challenges map, revealing the synergistic impact of model scale, data quality, and domain alignment on downstream performance. Our work delivers a systematic roadmap for trustworthy deployment and sustained innovation of foundation models in health sciences.

📝 Abstract
Foundation models, first introduced in 2021, are large-scale pre-trained models (e.g., large language models (LLMs) and vision-language models (VLMs)) that learn from extensive unlabeled datasets through unsupervised methods, enabling them to excel in diverse downstream tasks. These models, like GPT, can be adapted to various applications such as question answering and visual understanding, outperforming task-specific AI models and earning their name due to broad applicability across fields. The development of biomedical foundation models marks a significant milestone in leveraging artificial intelligence (AI) to understand complex biological phenomena and advance medical research and practice. This survey explores the potential of foundation models across diverse domains within biomedical fields, including computational biology, drug discovery and development, clinical informatics, medical imaging, and public health. The purpose of this survey is to inspire ongoing research in the application of foundation models to health science.
Problem

Research questions and friction points this paper is trying to address.

Explores biomedical foundation models' potential in health science.
Investigates applications in computational biology and drug discovery.
Surveys impact on medical imaging and clinical informatics.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Large-scale pre-trained models for biomedical applications
Unsupervised learning from extensive unlabeled datasets
Adaptable to diverse tasks like drug discovery
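The "unsupervised learning from extensive unlabeled datasets" point above typically refers to self-supervised objectives such as masked-token prediction, where the training signal is manufactured from the raw data itself. The sketch below illustrates only the masking step of that recipe; the function name, mask rate, and toy sequence are illustrative assumptions, not details taken from the survey.

```python
import random

def mask_tokens(tokens, mask_rate=0.15, mask_symbol="[MASK]", seed=0):
    """BERT-style masking sketch: hide a fraction of tokens so a model
    can be trained to reconstruct them from context, with no human
    labels required. Illustrative only; not the pipeline of any
    specific biomedical foundation model."""
    rng = random.Random(seed)
    masked, targets = [], {}
    for i, tok in enumerate(tokens):
        if rng.random() < mask_rate:
            targets[i] = tok          # ground-truth the model must predict
            masked.append(mask_symbol)
        else:
            masked.append(tok)        # visible context left unchanged
    return masked, targets

# Toy example: treat a protein-like string as a token stream.
sequence = list("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ")
masked, targets = mask_tokens(sequence, mask_rate=0.15, seed=42)
```

The same masking idea applies across modalities surveyed here (image patches, SMILES strings, EHR event streams); only the tokenizer changes.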
👥 Authors
Xiangrui Liu (Arizona State University, Tempe, AZ, USA)
Yuanyuan Zhang (Purdue University, West Lafayette, IN, USA)
Yingzhou Lu (Stanford University, Stanford, CA, USA)
Changchang Yin (The Ohio State University)
Xiaolin Hu (Massachusetts General Hospital, Boston, MA, USA; Harvard Medical School, Boston, MA, USA)
Xiaoou Liu (Arizona State University)
Lulu Chen (Virginia Tech)
Sheng Wang (University of Washington, Seattle, WA, USA)
Alexander Rodriguez (University of Michigan, Ann Arbor, MI, USA)
Huaxiu Yao (UNC Chapel Hill)
Yezhou Yang (Arizona State University, Tempe, AZ, USA)
Ping Zhang (The Ohio State University, Columbus, OH, USA)
Jintai Chen (HKUST(GZ))
Tianfan Fu (Nanjing University)
Xiao Wang (Virginia Polytechnic Institute, Blacksburg, VA, USA)