🤖 AI Summary
This study examines how cognitive biases affect human judgment in data cleaning, where they can reduce data quality and produce inconsistent outcomes. Through a controlled user experiment grounded in census-data scenarios, the research identifies and validates the mechanisms of four key cognitive biases (framing effects, anchoring-and-adjustment, the representativeness heuristic, and omission bias) across error detection, repair, missing-value imputation, and entity matching tasks. Findings reveal that superficial differences in data formatting trigger false positives, expert-provided hints disproportionately dominate user decisions, atypical yet valid entries are frequently misclassified as errors, and users strongly prefer leaving values missing over making reasonable imputations. This work provides the first empirical evidence that cognitive biases in data cleaning are prevalent and non-technical in origin, offering both theoretical grounding and practical guidance for designing human-in-the-loop data cleaning systems.
📝 Abstract
Data cleaning is often framed as a technical preprocessing step, yet in practice it relies heavily on human judgment. We report results from a controlled survey study in which participants performed error detection, data repair and imputation, and entity matching tasks on census-inspired scenarios with known semantic validity. We find systematic evidence for several cognitive bias mechanisms in data cleaning. Framing effects arise when surface-level formatting differences (e.g., capitalization or numeric presentation) increase false-positive error flags despite unchanged semantics. Anchoring-and-adjustment bias appears when expert cues pull participants' decisions away from parity, consistent with salience and availability effects. We also observe the representativeness heuristic: atypical but valid attribute combinations are frequently flagged as erroneous, and in entity matching tasks, surface similarity produces a substantial false-positive rate at high stated confidence. In data repair, participants show a robust preference for leaving values missing rather than imputing plausible values, consistent with omission bias. In contrast, automation-aligned switching under strong contradiction does not exceed a conservative rare-error tolerance threshold at the population level, indicating that deference to automated recommendations is limited in this setting. Across scenarios, bias patterns persist among technically experienced participants and across diverse workflow practices, suggesting that bias in data cleaning reflects general cognitive tendencies rather than a lack of expertise. These findings motivate human-in-the-loop cleaning systems that clearly separate representation from semantics, present expert or algorithmic recommendations non-prescriptively, and support reflective evaluation of atypical but valid cases.
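As a hypothetical illustration of the final design recommendation (separating representation from semantics so that formatting differences do not trigger false error flags), a cleaning pipeline might canonicalize surface form before comparing values. The function names below are assumptions for illustration, not part of the study:

```python
def canonicalize(value: str) -> str:
    """Reduce a raw cell value to a canonical semantic form, so that
    surface formatting differences (letter case, thousands separators,
    trailing zeros) do not trigger spurious mismatch flags."""
    v = value.strip()
    # Try numeric normalization first: "1,000.50" and "1000.5" compare equal.
    try:
        return repr(float(v.replace(",", "")))
    except ValueError:
        pass
    # Fall back to case-insensitive text comparison for non-numeric values.
    return v.casefold()


def same_semantics(a: str, b: str) -> bool:
    """True when two raw values share a canonical form."""
    return canonicalize(a) == canonicalize(b)


# Values differing only in presentation should not be flagged as conflicts:
assert same_semantics("New York", "NEW YORK")
assert same_semantics("1,000.50", "1000.5")
# Genuinely different values should still be caught:
assert not same_semantics("1000", "1001")
```

A tool built this way would surface only the semantic comparison to the user, countering the framing effect the study documents, in which presentation-level differences inflate false-positive error reports.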