🤖 AI Summary
Existing health literacy screening tools are difficult to document in structured electronic health record fields, and no annotated data exist for automatically identifying health literacy in unstructured clinical notes. To address this gap, this study introduces HEALIX, the first publicly available health literacy annotation dataset derived from real-world clinical notes, comprising 589 samples spanning nine categories of social work notes and three health literacy labels. Annotation was performed efficiently using a hybrid approach that combines keyword filtering with a large language model (LLM)-driven active learning strategy. The utility and validity of HEALIX were demonstrated by evaluating four open-source LLMs under zero-shot and few-shot settings, confirming its effectiveness for automated health literacy identification in clinical text.
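The keyword-filtering stage of the hybrid annotation pipeline can be sketched roughly as follows. This is an illustrative example, not the authors' code: the keyword list `LITERACY_KEYWORDS` is hypothetical, and the real pipeline additionally uses LLM-driven active learning to select notes for annotation.

```python
# Hypothetical keyword list; the actual HEALIX keywords are not specified here.
LITERACY_KEYWORDS = [
    "health literacy",
    "understand",
    "comprehension",
    "teach-back",
    "education level",
]

def keyword_filter(notes: list[str]) -> list[str]:
    """Keep only notes that mention at least one literacy-related keyword.

    In the HEALIX pipeline, a filter like this narrows the pool of social
    work notes before the LLM-based active learning step selects which
    candidates to annotate.
    """
    return [
        note for note in notes
        if any(keyword in note.lower() for keyword in LITERACY_KEYWORDS)
    ]
```

A filtered note would then be passed to the active learning loop for labeling, rather than being labeled directly.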
📝 Abstract
Health literacy is a critical determinant of patient outcomes, yet current screening tools are not always feasible to administer and differ considerably in item count, question format, and the dimensions of health literacy they capture, making documentation in structured electronic health records difficult. Automated detection from unstructured clinical notes offers a promising alternative, since these notes often contain richer, more contextual health literacy information, but progress has been limited by the lack of annotated resources. We introduce HEALIX, the first publicly available annotated health literacy dataset derived from real clinical notes, curated through a combination of social worker note sampling, keyword-based filtering, and LLM-based active learning. HEALIX contains 589 notes across nine note types, annotated with three health literacy labels: low, normal, and high. To demonstrate its utility, we benchmark zero-shot and few-shot prompting strategies across four open-source large language models (LLMs).
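The zero-shot and few-shot prompting strategies used in the benchmark can be sketched as prompt templates like the ones below. This is a minimal illustration under assumptions, not the paper's actual prompts; the wording, demonstration format, and label verbalization are all hypothetical.

```python
# The three HEALIX health literacy labels, as defined in the dataset.
LABELS = ["low", "normal", "high"]

def zero_shot_prompt(note: str) -> str:
    """Ask the model to classify a note with no labeled demonstrations."""
    return (
        "Classify the patient's health literacy in the clinical note below "
        f"as one of: {', '.join(LABELS)}.\n\n"
        f"Note: {note}\n"
        "Label:"
    )

def few_shot_prompt(note: str, examples: list[tuple[str, str]]) -> str:
    """Prepend labeled (note, label) demonstrations before the query note."""
    demos = "\n\n".join(f"Note: {n}\nLabel: {label}" for n, label in examples)
    return (
        "Classify the patient's health literacy "
        f"as one of: {', '.join(LABELS)}.\n\n"
        f"{demos}\n\n"
        f"Note: {note}\n"
        "Label:"
    )
```

In the few-shot setting, the demonstrations would be drawn from annotated HEALIX notes; the prompt string is then sent to each of the four LLMs and the generated label is compared against the gold annotation.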