Starting Off on the Wrong Foot: Pitfalls in Data Preparation

πŸ“… 2026-03-18
πŸ€– AI Summary
This study addresses the limitations of conventional random splitting in highly imbalanced insurance loss data, which often compromises model statistical validity and stability. To overcome this, we propose a novel data preparation framework that integrates distribution-consistent train-test partitioning via support point sampling, nonparametric feature selection using the Chatterjee correlation coefficient, and unified missing value handlingβ€”all embedded within our custom InsurAutoML pipeline. This work represents the first systematic incorporation of support points and the Chatterjee correlation coefficient into the preprocessing phase of insurance modeling. Empirical evaluations on both synthetic and real-world datasets demonstrate that the proposed approach significantly enhances model robustness and interpretability while simultaneously reducing computational overhead.
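The Chatterjee correlation coefficient mentioned above is a rank-based dependence measure that, unlike Pearson or Spearman correlation, detects arbitrary (including non-monotone) functional relationships. The paper's own implementation is not shown here; below is a minimal sketch of the coefficient using its standard no-ties formula, ΞΎ = 1 βˆ’ 3 Ξ£|r_{i+1} βˆ’ r_i| / (nΒ² βˆ’ 1), where the r_i are ranks of y after sorting the pairs by x. Data with tied values would need the more general formula.

```python
import numpy as np

def chatterjee_xi(x, y):
    """Chatterjee's rank correlation xi (no-ties formula).

    Near 1 when y is a (noiseless) function of x, near 0 under
    independence. Not symmetric in x and y by construction.
    """
    x = np.asarray(x)
    y = np.asarray(y)
    n = len(x)
    order = np.argsort(x, kind="stable")      # sort the pairs by x
    y_sorted = y[order]
    # r_i: 1-based rank of y_i within the x-sorted sequence
    r = np.argsort(np.argsort(y_sorted)) + 1
    return 1.0 - 3.0 * np.abs(np.diff(r)).sum() / (n**2 - 1)
```

Note that for a perfectly monotone relationship the coefficient equals 1 βˆ’ 3/(n + 1), approaching 1 only as the sample grows; this is expected behavior, not a bug.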

πŸ“ Abstract
When working with real-world insurance data, practitioners often encounter challenges during the data preparation stage that can undermine the statistical validity and reliability of downstream modeling. This study illustrates that conventional data preparation procedures, such as random train-test partitioning, often yield unreliable and unstable results when confronted with highly imbalanced insurance loss data. To mitigate these limitations, we propose a novel data preparation framework leveraging two recent statistical advancements: support points for representative data splitting, which ensure distributional consistency across partitions, and the Chatterjee correlation coefficient for initial, nonparametric feature screening that captures feature relevance and dependence structure. We further integrate these theoretical advances into a unified, efficient framework that also incorporates missing-data handling, and embed this framework within our custom InsurAutoML pipeline. The performance of the proposed approach is evaluated on both simulated datasets and datasets frequently cited in the academic literature. Our findings demonstrate that incorporating statistically rigorous data preparation methods not only significantly enhances model robustness and interpretability but also substantially reduces computational resource requirements across diverse insurance loss modeling tasks. This work provides a methodological upgrade for achieving reliable results in high-stakes insurance applications.
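To make the splitting idea concrete: support points choose a subset whose empirical distribution minimizes the energy distance to the full sample, so a heavy-tailed loss distribution is represented in both partitions rather than left to the luck of a random draw. The sketch below is not the authors' method; it is a simplified greedy stand-in that picks test points one at a time to minimize the (constant-terms-dropped) energy distance between the test subset and the full sample. The function name and O(nΒ²) distance matrix are illustrative choices suitable only for small data.

```python
import numpy as np

def energy_representative_split(X, test_frac=0.2):
    """Greedy energy-distance split (simplified stand-in for support points).

    Selects a test subset S minimizing 2*E||s - x|| - E||s - s'||,
    i.e. the energy distance to the full sample up to a constant,
    so the test partition tracks the full distribution.
    """
    X = np.asarray(X, dtype=float)
    if X.ndim == 1:
        X = X[:, None]
    n = len(X)
    m = max(1, int(round(test_frac * n)))
    # Full pairwise Euclidean distance matrix (fine for small n)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    chosen, remaining = [], set(range(n))
    for _ in range(m):
        best_j, best_score = None, np.inf
        for j in sorted(remaining):          # deterministic tie-breaking
            S = chosen + [j]
            cross = 2.0 * D[S].mean()        # 2 * mean dist subset <-> all
            within = D[np.ix_(S, S)].mean()  # mean dist within subset
            score = cross - within
            if score < best_score:
                best_j, best_score = j, score
        chosen.append(best_j)
        remaining.discard(best_j)
    return np.array(sorted(remaining)), np.array(chosen)  # train, test
```

With skewed lognormal losses, this tends to place some large-loss observations in the test partition, whereas a random split of a small sample can easily miss the tail entirely.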
Problem

Research questions and friction points this paper is trying to address.

data preparation
imbalanced insurance data
train-test splitting
statistical validity
insurance loss modeling
Innovation

Methods, ideas, or system contributions that make the work stand out.

support points
Chatterjee correlation coefficient
data preparation
imbalanced insurance data
InsurAutoML
Jiayi Guo
PhD student, Tsinghua University
computer vision Β· machine learning Β· generative models
Panyi Dong
Actuarial and Risk Management Sciences, University of Illinois Urbana-Champaign, 1409 W. Green Street (MC-382), Urbana, IL, 61801, USA
Zhiyu Quan
Actuarial and Risk Management Sciences, University of Illinois Urbana-Champaign, 1409 W. Green Street (MC-382), Urbana, IL, 61801, USA