🤖 AI Summary
Real-world tabular data often suffer from missing values due to measurement errors, privacy constraints, or sensor failures, which severely degrade downstream modeling performance. This paper proposes an efficient tabular imputation framework based on implicit neural representations (INRs), the first to achieve fine-tuning-free, instance-adaptive completion. By jointly learning row and feature embeddings within an auto-decoder-style INR architecture, the method maps tabular data to a continuous neural function, enabling direct inference of missing entries from partial observations. The approach preserves data distribution consistency, accommodates diverse missingness patterns (MCAR, MAR, MNAR), achieves fast inference, and exhibits robustness across dataset scales. Extensive experiments on 12 real-world benchmarks demonstrate consistent superiority over classical methods—including KNN, MICE, and MissForest—and deep learning baselines—such as GAIN and ReMasker—particularly in high-dimensional settings.
📝 Abstract
Tabular data forms the basis for a wide range of applications, yet real-world datasets are frequently incomplete due to collection errors, privacy restrictions, or sensor failures. Missing values degrade the performance or hinder the applicability of downstream models, while simple imputation strategies tend to introduce bias or distort the underlying data distribution; we therefore require imputers that provide high-quality imputations, are robust across dataset sizes, and yield fast inference. To this end, we introduce TabINR, an auto-decoder-based Implicit Neural Representation (INR) framework that models tables as neural functions. Building on recent advances in generalizable INRs, we introduce learnable row and feature embeddings that effectively handle the discrete structure of tabular data and can be inferred from partial observations, enabling instance-adaptive imputation without modifying the trained model. We evaluate our framework on twelve diverse real-world datasets and multiple missingness mechanisms, demonstrating consistently strong imputation accuracy that mostly matches or outperforms classical methods (KNN, MICE, MissForest) and deep-learning-based models (GAIN, ReMasker), with the clearest gains on high-dimensional datasets.
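To make the auto-decoder idea concrete, here is a minimal NumPy sketch of instance-adaptive inference from partial observations. It is not the paper's implementation: the decoder is a plain dot product between a row embedding and per-feature embeddings (a linear stand-in for TabINR's neural function), the table is synthesized to be exactly low-rank so the toy problem is solvable, and the row embedding for an incomplete row is fitted in closed form via least squares (the general auto-decoder setting would optimize it by gradient descent). All names (`impute_row`, `feat_emb`, etc.) are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_rows, n_feats, dim = 100, 8, 3

# Synthesize an exactly low-rank table so the toy problem has an exact answer.
row_emb = rng.normal(size=(n_rows, dim))    # per-row embeddings
feat_emb = rng.normal(size=(n_feats, dim))  # per-feature embeddings ("trained")
table = row_emb @ feat_emb.T

def impute_row(values, observed, feat_emb):
    """Auto-decoder-style inference for one incomplete row: fit a fresh row
    embedding to the observed entries only (closed-form least squares works
    for this linear decoder; a neural decoder would use gradient descent),
    then decode every feature, including the missing ones."""
    e, *_ = np.linalg.lstsq(feat_emb[observed], values[observed], rcond=None)
    return feat_emb @ e

row = table[0]
observed = np.arange(n_feats) < 5           # first 5 of 8 features observed
filled = impute_row(row, observed, feat_emb)
print(np.abs(filled - row).max())           # near-zero reconstruction error
```

The key property mirrored here is that the trained model (`feat_emb` and the decoder) is never modified at imputation time; only a new row embedding is inferred per incomplete instance, which is what enables fine-tuning-free, instance-adaptive completion.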