🤖 AI Summary
Existing research on diffusion models for data augmentation lacks standardized protocols for task formulation, model selection, and experimental procedure, hindering fair comparison. This work proposes UniDiffDA, a unified framework that systematically decomposes diffusion-based data augmentation into three core components: model fine-tuning, sample generation, and sample utilization. We establish a consistent evaluation protocol and a reproducible benchmark to enable rigorous assessment. Through comprehensive experiments and ablation studies across multiple low-data image classification tasks, we elucidate the strengths and limitations of various strategies, clarify critical design choices, and offer practical guidance for deployment. We release our codebase and benchmark results to foster standardized, reproducible research in this emerging area.
📝 Abstract
Diffusion-based data augmentation (DiffDA) has emerged as a promising approach for improving classification performance under data scarcity. However, existing works vary significantly in task configurations, model choices, and experimental pipelines, making it difficult to compare methods fairly or to assess their effectiveness across different scenarios. Moreover, a systematic understanding of the full DiffDA workflow is still lacking. In this work, we introduce UniDiffDA, a unified analytical framework that decomposes DiffDA methods into three core components: model fine-tuning, sample generation, and sample utilization. This perspective enables us to identify key differences among existing methods and to clarify the overall design space. Building on this framework, we develop a comprehensive and fair evaluation protocol, benchmarking representative DiffDA methods across diverse low-data classification tasks. Extensive experiments reveal the relative strengths and limitations of different DiffDA strategies and offer practical insights into method design and deployment. All methods are re-implemented within a unified codebase, with full release of code and configurations to ensure reproducibility and facilitate future research.
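The three-component decomposition described in the abstract can be pictured as a simple staged pipeline. The sketch below is illustrative only: the `DiffDAPipeline` class, its stage names, and the toy stand-in functions are our own assumptions, not the paper's actual API, and no real diffusion model is involved.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class DiffDAPipeline:
    # Hypothetical decomposition into the three components named in the abstract.
    fine_tune: Callable[[list], dict]       # adapt a generative model to the few real samples
    generate: Callable[[dict, int], list]   # draw synthetic samples from the adapted model
    utilize: Callable[[list, list], list]   # combine real and synthetic data into a training set

    def run(self, real_samples: list, n_synth: int) -> list:
        model = self.fine_tune(real_samples)
        synthetic = self.generate(model, n_synth)
        return self.utilize(real_samples, synthetic)

# Toy stand-ins for each stage, purely to show the data flow:
pipeline = DiffDAPipeline(
    fine_tune=lambda real: {"proto": sum(real) / len(real)},
    generate=lambda model, n: [model["proto"] + i * 0.1 for i in range(n)],
    utilize=lambda real, synth: real + synth,
)

train_set = pipeline.run(real_samples=[1.0, 2.0, 3.0], n_synth=2)
```

Framing each method as a choice of these three callables is what lets the benchmark swap one component at a time while holding the others fixed.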