🤖 AI Summary
This study systematically investigates how tokenization strategies affect Transformer performance in clinical time-series modeling. Using the MIMIC-IV dataset, we conduct controlled ablation experiments comparing explicit temporal encoding schemes, numerical embedding approaches, and encoder training paradigms across four clinical prediction tasks. Key findings: (1) the raw clinical code sequence alone carries sufficient predictive signal, and explicit temporal encoding yields no statistically significant improvement; (2) freezing pretrained clinical encoders (such as Med-PaLM or ClinicalBERT embeddings) substantially outperforms end-to-end training while requiring fewer trainable parameters and enabling faster inference; (3) larger frozen encoders consistently improve performance. Collectively, these results show that lightweight, frozen tokenization strategies combine computational efficiency with strong generalization, establishing a simple yet effective paradigm for clinical time-series modeling.
📝 Abstract
Tokenization strategies shape how models process electronic health records, yet fair comparisons of their effectiveness remain limited. We present a systematic evaluation of tokenization approaches for clinical time-series modeling with Transformer-based architectures, revealing task-dependent and sometimes counterintuitive findings about the importance of temporal and value features. Through controlled ablations across four clinical prediction tasks on MIMIC-IV, we show that explicit time encodings provide no consistent, statistically significant benefit on the evaluated downstream tasks. Value features matter in a task-dependent way: they affect mortality prediction but not readmission, suggesting that code sequences alone can carry sufficient predictive signal. We further show that frozen pretrained code encoders substantially outperform their trainable counterparts while requiring far fewer trainable parameters, and that larger clinical encoders provide consistent gains across tasks, with frozen embeddings adding no training-time overhead. Our controlled evaluation enables fairer tokenization comparisons and demonstrates that simpler, parameter-efficient approaches can often achieve strong performance, though the optimal tokenization strategy remains task-dependent.
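The frozen-encoder setup contrasted above can be sketched in PyTorch. Everything below is an illustrative assumption, not the paper's actual architecture: the class name, dimensions, and pooling choice are hypothetical, and the random `pretrained` matrix stands in for real clinical-code embeddings (e.g., from ClinicalBERT).

```python
import torch
import torch.nn as nn

class FrozenEncoderClassifier(nn.Module):
    """Hypothetical sketch: Transformer over a frozen pretrained code encoder."""

    def __init__(self, pretrained_embeddings: torch.Tensor, num_classes: int = 2):
        super().__init__()
        # freeze=True sets requires_grad=False, so the pretrained code
        # embeddings receive no gradient updates during training.
        self.encoder = nn.Embedding.from_pretrained(pretrained_embeddings, freeze=True)
        d_model = pretrained_embeddings.size(1)
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, num_classes)

    def forward(self, code_ids: torch.Tensor) -> torch.Tensor:
        x = self.encoder(code_ids)       # (batch, seq_len, d_model)
        x = self.transformer(x)          # contextualize the code sequence
        return self.head(x.mean(dim=1))  # mean-pool over time, then classify

# Toy usage: vocabulary of 1000 clinical codes, 64-dim pretrained vectors.
pretrained = torch.randn(1000, 64)
model = FrozenEncoderClassifier(pretrained)
logits = model(torch.randint(0, 1000, (8, 32)))  # 8 stays of 32 codes each
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
```

Because the embedding table is excluded from the trainable parameter count, scaling up the pretrained encoder enlarges `total` but not `trainable`, which is the parameter-efficiency argument the abstract makes for frozen encoders.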