🤖 AI Summary
This study investigates how far large language models (LLMs) can go in carrying out the data science work of real-world biomedical research, tasks that normally require trained data scientists. Method: We construct a benchmark of 293 data science coding tasks (128 in Python and 165 in R) grounded in real-world TCGA genomic and clinical data and derived from the analyses of 39 published studies. We identify three capability gaps of vanilla-prompted LLMs in biomedical programming (following input instructions, understanding the target data, and adhering to standard analysis practices) and evaluate two effective adaptation methods: chain-of-thought prompting, which improves code accuracy by 21%, and self-reflective iterative debugging, which improves it by 11%. We further design a lightweight LLM assistance platform integrated into expert workflows. Contribution/Results: In a user study with five medical professionals, 80% of submitted code solutions incorporated LLM-generated code, with up to 96% reuse in some cases, substantially accelerating analysis workflows; human oversight remained essential for final decision-making. Our work establishes a reproducible evaluation framework and a practical augmentation paradigm for LLM-assisted precision medicine research.
📝 Abstract
Data science plays a critical role in biomedical research, but it requires professionals with expertise in coding and medical data analysis. Large language models (LLMs) have shown great potential in supporting medical tasks and perform well in general coding tests. However, existing evaluations fail to assess their capability in biomedical data science, particularly in handling diverse data types such as genomics and clinical datasets. To address this gap, we developed a benchmark of data science coding tasks derived from the analyses of 39 published studies. The benchmark comprises 293 coding tasks (128 in Python and 165 in R) performed on real-world TCGA genomics and clinical data. Our findings reveal that vanilla prompting of LLMs yields suboptimal performance because of failures in following input instructions, understanding the target data, and adhering to standard analysis practices. We then benchmarked six cutting-edge LLMs and advanced adaptation methods, finding two methods to be particularly effective: chain-of-thought prompting, which provides a step-by-step plan for data analysis and improved code accuracy by 21% (56.6% versus 35.3%); and self-reflection, which lets LLMs iteratively refine buggy code and improved code accuracy by 11% (45.5% versus 34.3%). Building on these insights, we developed a platform that integrates LLMs into the data science workflow for medical professionals. In a user study with five medical professionals, we found that although LLMs cannot fully automate programming tasks, they significantly streamline the programming process: 80% of the submitted code solutions incorporated LLM-generated code, with up to 96% reuse in some cases. Our analysis highlights the potential of LLMs to enhance data science efficiency in biomedical research when integrated into expert workflows.
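The two adaptation methods highlighted above, chain-of-thought planning and self-reflective debugging, can be pictured with a short sketch. The snippet below is an illustrative outline, not the paper's implementation: `ask_llm` is a hypothetical placeholder for whichever LLM client is used, and the prompts, round limit, and subprocess-based execution are assumptions.

```python
import subprocess
import tempfile

def ask_llm(prompt: str) -> str:
    """Hypothetical placeholder: swap in any code-generating LLM client."""
    raise NotImplementedError("plug in your LLM client here")

def run_python(code: str) -> tuple[bool, str]:
    """Execute candidate code in a subprocess and capture stderr for feedback."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    proc = subprocess.run(["python", path], capture_output=True, text=True, timeout=300)
    return proc.returncode == 0, proc.stderr

def solve_task(task: str, data_description: str, max_rounds: int = 3) -> str:
    # Chain-of-thought prompting: request an explicit step-by-step analysis plan,
    # then ask for code that implements that plan.
    plan = ask_llm(
        f"Task: {task}\nData: {data_description}\n"
        "Write a numbered, step-by-step analysis plan before any code."
    )
    code = ask_llm(f"Plan:\n{plan}\n\nWrite Python code implementing each step.")

    # Self-reflection: run the code, feed any error back, and let the model revise.
    for _ in range(max_rounds):
        ok, stderr = run_python(code)
        if ok:
            break
        code = ask_llm(
            f"The following code failed:\n{code}\n\n"
            f"Error:\n{stderr}\n\nReturn a corrected version of the full script."
        )
    return code
```

In this sketch the plan-then-code split mirrors chain-of-thought prompting, and the error-feedback loop mirrors self-reflection; the reported gains (21% and 11%) come from the paper's own experiments, not from this illustration.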