Can Large Language Models Replace Data Scientists in Biomedical Research?

📅 2024-10-28
🤖 AI Summary
This study investigates whether large language models (LLMs) can replace data scientists in real-world biomedical data analysis. Method: We construct a 293-task data science coding benchmark (128 in Python, 165 in R) derived from the analyses of 39 published studies and grounded in real-world TCGA-type genomic and clinical data. We identify three critical capability gaps of LLMs in biomedical programming (following input instructions, understanding target data, and adhering to standard analysis practices) and evaluate two effective adaptation methods: chain-of-thought prompting and self-reflective iterative debugging, which improve code accuracy by 21 and 11 percentage points, respectively. We further design a lightweight LLM assistance platform integrated into expert workflows. Contribution/Results: In a user study with five medical professionals, 80% of submitted code solutions incorporated LLM-generated code, with up to 96% reuse in some cases, significantly accelerating analysis workflows; human oversight remained essential for final decision-making. Our work provides a reproducible evaluation framework and a practical augmentation paradigm for LLM-assisted biomedical research.

📝 Abstract
Data science plays a critical role in biomedical research, but it requires professionals with expertise in coding and medical data analysis. Large language models (LLMs) have shown great potential in supporting medical tasks and performing well in general coding tests. However, existing evaluations fail to assess their capability in biomedical data science, particularly in handling diverse data types such as genomics and clinical datasets. To address this gap, we developed a benchmark of data science coding tasks derived from the analyses of 39 published studies. This benchmark comprises 293 coding tasks (128 in Python and 165 in R) performed on real-world TCGA-type genomics and clinical data. Our findings reveal that vanilla prompting of LLMs yields suboptimal performance due to drawbacks in following input instructions, understanding target data, and adhering to standard analysis practices. Next, we benchmarked six cutting-edge LLMs and advanced adaptation methods, finding two methods to be particularly effective: chain-of-thought prompting, which provides a step-by-step plan for data analysis and led to a 21% code accuracy improvement (56.6% versus 35.3%); and self-reflection, which enables LLMs to refine buggy code iteratively, yielding an 11% code accuracy improvement (45.5% versus 34.3%). Building on these insights, we developed a platform that integrates LLMs into the data science workflow for medical professionals. In a user study with five medical professionals, we found that while LLMs cannot fully automate programming tasks, they significantly streamline the programming process: 80% of their submitted code solutions incorporated LLM-generated code, with up to 96% reuse in some cases. Our analysis highlights the potential of LLMs to enhance data science efficiency in biomedical research when integrated into expert workflows.
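The self-reflection method described above (execute the generated code, feed the error back to the model, and retry) can be sketched as a simple loop. This is an illustrative toy, not the authors' implementation; `self_reflective_debug`, `run_python`, and `toy_llm` are hypothetical names, and the toy stands in for a real LLM call.

```python
def run_python(code):
    """Execute candidate code in a fresh namespace; return (ok, error_message)."""
    try:
        exec(code, {})
        return True, ""
    except Exception as exc:
        return False, f"{type(exc).__name__}: {exc}"

def self_reflective_debug(generate, task, max_rounds=3):
    """Generate code, then feed execution errors back to the model for repair."""
    code = generate(task, feedback=None)
    for _ in range(max_rounds):
        ok, error = run_python(code)
        if ok:
            return code, True
        code = generate(task, feedback=error)  # reflect on the error and retry
    return code, run_python(code)[0]

# Toy stand-in for an LLM: the first draft raises a NameError,
# and the "reflection" round returns a repaired version.
def toy_llm(task, feedback=None):
    if feedback is None:
        return "result = total + 1"            # buggy: `total` is undefined
    return "total = 0\nresult = total + 1"     # fixed after seeing the error

code, ok = self_reflective_debug(toy_llm, "increment a counter")
```

In the paper's setting, `generate` would be an LLM API call and `run_python` a sandboxed execution of the analysis script against the TCGA-type data.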
Problem

Research questions and friction points this paper is trying to address.

Assessing LLMs' capability in biomedical data science tasks
Evaluating LLMs on diverse data types like genomics and clinical datasets
Improving LLM performance in biomedical coding tasks via advanced methods
Innovation

Methods, ideas, or system contributions that make the work stand out.

Benchmarking LLMs on biomedical data science tasks
Using chain-of-thought prompting for accuracy improvement
Integrating self-reflection for iterative code refinement
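The chain-of-thought prompting idea listed above (give the model a step-by-step analysis plan before asking for code) can be illustrated with a minimal prompt builder. The function name, plan wording, and column names here are hypothetical examples, not taken from the paper.

```python
def build_cot_prompt(task, data_description, steps):
    """Assemble a chain-of-thought style prompt: data, task, numbered plan, then code."""
    plan = "\n".join(f"{i}. {step}" for i, step in enumerate(steps, 1))
    return (
        f"Data: {data_description}\n"
        f"Task: {task}\n"
        "Follow this analysis plan step by step, then write the code:\n"
        f"{plan}\n"
        "Code:"
    )

prompt = build_cot_prompt(
    task="Compare survival between mutated and wild-type groups",
    data_description=(
        "TCGA-style clinical table with columns: "
        "patient_id, mutation_status, survival_months, event"
    ),
    steps=[
        "Load the clinical table and drop rows with missing survival data",
        "Split patients by mutation_status",
        "Fit Kaplan-Meier curves per group and run a log-rank test",
    ],
)
```

The reported gain came from supplying such an explicit plan rather than the task description alone, so the plan itself carries most of the prompt's value.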
Authors
Zifeng Wang, Department of Computer Science, University of Illinois Urbana-Champaign
Benjamin P. Danek, Department of Computer Science, University of Illinois Urbana-Champaign
Ziwei Yang, Bioinformatics Center, Institute for Chemical Research, Kyoto University (Bioinformatics, Machine Learning, Computational Biology, Biomedical Data Science)
Zheng Chen, Institute of Scientific and Industrial Research, Osaka University
Jimeng Sun, Professor at University of Illinois Urbana-Champaign (AI for healthcare, Machine learning for healthcare, Deep learning for healthcare)