🤖 AI Summary
This study systematically evaluates the zero-shot generalization capabilities of large language models (LLMs) on public health text classification and information extraction tasks. To address core scenarios—including disease burden assessment, epidemiological risk factor identification, and public health intervention analysis—the authors construct a unified benchmark for multidimensional public health tasks, integrating six publicly available and seven newly annotated datasets (13 total) that cover text classification, named entity recognition, and relation extraction. They conduct a comprehensive zero-shot comparison across 11 open-weight LLMs (7–123 billion parameters), with GPT-4 and GPT-4o series models additionally evaluated on a subset of 11 tasks, using micro-F1 as the primary evaluation metric. Results show that Llama-3.3-70B-Instruct achieves top performance on 8 of 16 subtasks (with peak micro-F1 > 80%), yet all open-weight models score below 60% micro-F1 on fine-grained challenges such as Contact Classification, revealing a bottleneck in current LLMs' semantic understanding of nuanced public health concepts.
📝 Abstract
Advances in Large Language Models (LLMs) have led to significant interest in their potential to support human experts across a range of domains, including public health. In this work, we present automated evaluations of LLMs for public health tasks involving the classification and extraction of free text. We combine six externally annotated datasets with seven new internally annotated datasets to evaluate LLMs on text related to health burden, epidemiological risk factors, and public health interventions. We evaluate eleven open-weight LLMs (7–123 billion parameters) across all tasks using zero-shot in-context learning. We find that Llama-3.3-70B-Instruct is the highest-performing model, achieving the best results on 8/16 tasks (by micro-F1 score). We see significant variation across tasks: all open-weight LLMs score below 60% micro-F1 on some challenging tasks, such as Contact Classification, while all LLMs achieve greater than 80% micro-F1 on others, such as GI Illness Classification. For a subset of 11 tasks, we also evaluate three GPT-4 and GPT-4o series models and find results comparable to Llama-3.3-70B-Instruct. Overall, based on these initial results, we find promising signs that LLMs may be useful tools for public health experts to extract information from a wide variety of free text sources, and to support public health surveillance, research, and interventions.
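The abstract reports micro-F1 as the primary metric across classification and extraction tasks. As an illustration only (not the authors' evaluation code), here is a minimal sketch of micro-averaged F1 computed over per-document sets of predicted items (e.g., labels or entity spans), assuming gold annotations and model predictions are available as Python sets; the function name and data layout are hypothetical.

```python
def micro_f1(gold_sets, pred_sets):
    """Micro-averaged F1: pool true positives, false positives, and
    false negatives across all documents before computing F1."""
    tp = fp = fn = 0
    for gold, pred in zip(gold_sets, pred_sets):
        tp += len(gold & pred)   # items predicted and annotated
        fp += len(pred - gold)   # predicted but not annotated
        fn += len(gold - pred)   # annotated but missed
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0.0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Toy example: two documents with gold vs. predicted item sets.
gold = [{"fever", "cough"}, {"contact"}]
pred = [{"fever"}, {"contact", "travel"}]
print(micro_f1(gold, pred))  # 2 TP, 1 FP, 1 FN -> P = R = 2/3, F1 = 2/3
```

Unlike macro-averaging, micro-averaging weights every predicted item equally, so frequent labels dominate the score.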