🤖 AI Summary
Evaluating statistical methods in digital soil mapping (DSM) is hindered by closed, single-source datasets, limiting the generalizability of conclusions. Method: We introduce LimeSoDa, an open, multi-source, standardized benchmark collection for DSM comprising 31 field- and farm-scale datasets from diverse countries, each providing three target variables (soil organic matter or carbon, clay content, pH) alongside dataset-specific features from optical spectroscopy and proximal and remote soil sensing. Heterogeneous soil sensing data are harmonized into ready-to-use tabular formats, and the collection's use is demonstrated by benchmarking four learning algorithms across all datasets. Contribution/Results: The benchmark experiments show that model performance depends strongly on feature dimensionality and data provenance: MLR and SVR excel on high-dimensional spectral data, whereas CatBoost and RF perform considerably better on datasets with a moderate number of features (<20). LimeSoDa thus establishes a reproducible, extensible resource for rigorous comparative assessment of DSM methodologies.
📝 Abstract
Digital soil mapping (DSM) relies on a broad pool of statistical methods, yet determining the optimal method for a given context remains challenging and contentious. Benchmarking studies across multiple datasets are needed to reveal the strengths and limitations of commonly used methods. Existing DSM studies usually rely on a single dataset with restricted access, leading to incomplete and potentially misleading conclusions. To address these issues, we introduce an open-access dataset collection called Precision Liming Soil Datasets (LimeSoDa). LimeSoDa consists of 31 field- and farm-scale datasets from various countries. Each dataset has three target soil properties: (1) soil organic matter or soil organic carbon, (2) clay content and (3) pH, alongside a set of features. Features are dataset-specific and were obtained by optical spectroscopy and proximal and remote soil sensing. All datasets were aligned to a tabular format and are ready to use for modeling. We demonstrated the use of LimeSoDa for benchmarking by comparing the predictive performance of four learning algorithms across all datasets: multiple linear regression (MLR), support vector regression (SVR), categorical boosting (CatBoost) and random forest (RF). The results showed that although no single algorithm was universally superior, certain algorithms performed better in specific contexts. MLR and SVR performed better on high-dimensional spectral datasets, likely due to better compatibility with principal components. In contrast, CatBoost and RF performed considerably better on datasets with a moderate number (<20) of features. These benchmarking results illustrate that the performance of a method is highly context-dependent. LimeSoDa therefore provides an important resource for improving the development and evaluation of statistical methods in DSM.
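The benchmarking workflow described above can be sketched in a few lines of scikit-learn. This is an illustrative sketch, not the authors' exact pipeline: it uses a synthetic stand-in for a LimeSoDa-style tabular dataset (real datasets would be loaded from the LimeSoDa collection), omits CatBoost to avoid a non-standard dependency, and the PCA component count and model settings are assumptions for demonstration only.

```python
# Illustrative benchmark sketch (assumptions: synthetic data stands in for
# a LimeSoDa dataset; hyperparameters are placeholders, not the paper's).
from sklearn.datasets import make_regression
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Synthetic stand-in: 100 samples, 300 "spectral" features, one target
# (e.g. clay content). Real LimeSoDa datasets also provide SOM/SOC and pH.
X, y = make_regression(n_samples=100, n_features=300, n_informative=20,
                       noise=10.0, random_state=0)

models = {
    # High-dimensional spectra are commonly compressed with PCA before
    # linear models, matching the MLR/SVR observation in the abstract.
    "MLR+PCA": make_pipeline(StandardScaler(), PCA(n_components=20),
                             LinearRegression()),
    "SVR+PCA": make_pipeline(StandardScaler(), PCA(n_components=20),
                             SVR()),
    "RF": RandomForestRegressor(n_estimators=100, random_state=0),
}

# Mean 5-fold cross-validated R^2 per algorithm on this one dataset;
# the paper's benchmark repeats such comparisons across all 31 datasets.
scores = {name: cross_val_score(m, X, y, cv=5, scoring="r2").mean()
          for name, m in models.items()}
for name, r2 in scores.items():
    print(f"{name}: mean CV R^2 = {r2:.2f}")
```

Looping this comparison over many datasets, rather than reporting results from a single one, is exactly the kind of multi-dataset evidence the abstract argues for.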