Can AI Scientist Agents Learn from Lab-in-the-Loop Feedback? Evidence from Iterative Perturbation Discovery

📅 2026-03-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates whether AI scientist agents can learn in context from real experimental feedback during scientific experimental design. Leveraging 800 independent Cell Painting high-content screening experiments, we compare large language models (Claude Sonnet 4.5/4.6) with and without access to real feedback, using randomized-label controls to verify that the observed improvements depend on the structure of the feedback. Real feedback increases the average number of discoveries per feature by 53.4% (p = 0.003). Furthermore, upgrading the model reduces gene hallucination rates to 3–9% and converts a non-significant in-context learning effect into a large and significant one (+11.0 hits, p = 0.003), highlighting the critical role of model capability thresholds in enabling effective learning from feedback.

📝 Abstract
Recent work has questioned whether large language models (LLMs) can perform genuine in-context learning (ICL) for scientific experimental design, with prior studies suggesting that LLM-based agents exhibit no sensitivity to experimental feedback. We shed new light on this question by carrying out 800 independently replicated experiments on iterative perturbation discovery in Cell Painting high-content screening. We compare an LLM agent that iteratively updates its hypotheses using experimental feedback to a zero-shot baseline that relies solely on pretraining knowledge retrieval. Access to feedback yields a $+53.4\%$ increase in discoveries per feature on average ($p = 0.003$). To test whether this improvement arises from genuine feedback-driven learning rather than prompt-induced recall of pretraining knowledge, we introduce a random feedback control in which hit/miss labels are permuted. Under this control, the performance gain disappears, indicating that the observed improvement depends on the structure of the feedback signal ($+13.0$ hits, $p = 0.003$). We further examine how model capability affects feedback utilization. Upgrading from Claude Sonnet 4.5 to 4.6 reduces gene hallucination rates from ${\sim}33\%$--$45\%$ to ${\sim}3$--$9\%$, converting a non-significant ICL effect ($+0.8$, $p = 0.32$) into a large and highly significant improvement ($+11.0$, $p=0.003$) for the best ICL strategy. These results suggest that effective in-context learning from experimental feedback emerges only once models reach a sufficient capability threshold.
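The random feedback control described in the abstract, in which hit/miss labels are permuted so that the overall hit rate is preserved but the gene-to-label pairing is destroyed, can be sketched as below. This is a minimal illustration under assumed data shapes (a list of gene/label pairs); the function name, example genes, and representation are hypothetical, not the authors' implementation.

```python
import random

def permuted_feedback(feedback, rng):
    """Shuffle hit/miss labels across the tested genes.

    Preserves the marginal hit rate but breaks the structure of the
    feedback signal, so any agent improvement that survives this
    control cannot be attributed to genuine feedback-driven learning.
    """
    genes = [gene for gene, _ in feedback]
    labels = [label for _, label in feedback]
    rng.shuffle(labels)  # in-place permutation of the labels only
    return list(zip(genes, labels))

# Illustrative feedback from one round (gene names are placeholders).
real = [("TP53", "hit"), ("BRCA1", "miss"), ("MYC", "hit"), ("EGFR", "miss")]
control = permuted_feedback(real, random.Random(0))

# The control keeps the same genes and the same number of hits;
# only the pairing between genes and outcomes is randomized.
assert sum(label == "hit" for _, label in control) == 2
assert sorted(g for g, _ in control) == sorted(g for g, _ in real)
```

The agent in the control condition would receive `control` instead of `real` in its prompt, so both conditions see identically formatted feedback of identical size.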
Problem

Research questions and friction points this paper is trying to address.

in-context learning
scientific discovery
experimental feedback
LLM agents
iterative perturbation
Innovation

Methods, ideas, or system contributions that make the work stand out.

in-context learning
experimental feedback
LLM agent
random feedback control
model capability threshold
Gilles Wainrib
OWKIN
neural networks, machine learning, mathematical biology, drug discovery, precision medicine
Barbara Bodinier
Owkin, former Imperial College London
Biostatistics
Haitem Dakhli
Owkin Inc.
Josep Monserrat
Owkin Inc.
Almudena Espin Perez
Owkin Inc.
Sabrina Carpentier
Owkin Inc.
Roberta Codato
Owkin Inc.
John Klein
Carnegie Mellon Software Engineering Institute