🤖 AI Summary
This paper identifies a pervasive data-redundancy problem in network traffic classification (TC): over 50% of flow samples in mainstream datasets are duplicates, and common train/test splitting practices let identical samples leak across the split, inflating reported model performance; when identical flows carry conflicting labels, the duplicates also lower the theoretical accuracy ceiling. The authors demonstrate the effect by systematically evaluating a k-NN baseline built on packet-sequence metadata (packet sizes, inter-arrival times, and directions) across 12 datasets and 15 classification tasks, where it matches or surpasses state-of-the-art deep learning models. This constitutes the first empirical demonstration that redundancy systematically biases TC evaluation, and it reveals a fundamental misalignment in directly adopting NLP/CV evaluation paradigms. Key contributions include: (i) a quantitative characterization of the impact of redundancy and label conflicts; (ii) a traffic-aware redefinition of classification tasks and evaluation protocols; and (iii) a rigorous, empirically grounded benchmarking framework to advance methodological soundness in TC research.
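To make the baseline concrete, here is a minimal, self-contained sketch (not the authors' code) of a nearest-neighbor classifier over packet-sequence metadata: each flow is reduced to the (size, inter-arrival time, direction) triplets of its first few packets, flattened into a fixed-length vector. The flow data and the `featurize`/`knn_predict` helpers are invented for illustration.

```python
import math

def featurize(packets, n=3):
    """Flatten the first n (size, iat, direction) triplets; zero-pad short flows."""
    padded = list(packets[:n]) + [(0, 0.0, 0)] * max(0, n - len(packets))
    return [value for pkt in padded for value in pkt]

def knn_predict(query, train, k=1):
    """Return the majority label among the k nearest training flows (Euclidean)."""
    dists = sorted(
        (math.dist(featurize(flow), featurize(query)), label)
        for flow, label in train
    )
    top = [label for _, label in dists[:k]]
    return max(set(top), key=top.count)

# Toy flows: (size in bytes, inter-arrival time in s, direction +1/-1)
train = [
    ([(1500, 0.01, 1), (60, 0.02, -1)], "video"),
    ([(80, 0.50, 1), (80, 0.55, -1)], "chat"),
]
query = [(1400, 0.01, 1), (60, 0.03, -1)]
print(knn_predict(query, train))  # prints "video"
```

Despite its simplicity, this kind of metadata-only nearest-neighbor rule is exactly what the paper reports as competitive with deep models once dataset redundancy is taken into account.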
📝 Abstract
Machine learning has been applied to network traffic classification (TC) for over two decades. While early efforts used shallow models, the late 2010s saw a shift toward complex neural networks, often reporting near-perfect accuracy. However, it was recently shown that a simple k-NN baseline using packet-sequence metadata (sizes, times, and directions) can match or even outperform more complex methods. In this paper, we evaluate this baseline across 12 datasets and 15 TC tasks and investigate why it performs so well. Our analysis shows that most datasets contain over 50% redundant samples (identical packet sequences), which frequently appear in both training and test sets under common splitting practices. This redundancy can lead to overestimated model performance, and it reduces the theoretical maximum accuracy when identical flows carry conflicting labels. Given TC's distinct characteristics, we further argue that standard machine learning practices adopted from domains such as NLP and computer vision may be ill-suited to it. Finally, we propose new directions for task formulation and evaluation to address these challenges and help realign the field.
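The two failure modes the abstract describes can be reproduced on a toy dataset: a random split lets duplicated packet sequences leak from training into testing, and identical sequences with conflicting labels cap the best achievable accuracy at the per-sequence majority rate. This is a hypothetical sketch with invented flows, not the paper's measurement code.

```python
import random
from collections import defaultdict, Counter

# Toy dataset of (packet-size sequence, label); duplicates are intentional.
flows = (
    [((1500, 60, 1500), "video")] * 4
    + [((80, 80), "chat")] * 3
    + [((1500, 60, 1500), "voip")]  # identical sequence, conflicting label
)

random.seed(0)
random.shuffle(flows)
split = int(0.75 * len(flows))
train, test = flows[:split], flows[split:]

# Leakage: share of test flows whose exact sequence also appears in training.
train_seqs = {seq for seq, _ in train}
leakage = sum(seq in train_seqs for seq, _ in test) / len(test)

# Accuracy ceiling: with conflicting labels on identical sequences, a
# classifier can at best predict the majority label of each sequence.
groups = defaultdict(Counter)
for seq, label in flows:
    groups[seq][label] += 1
ceiling = sum(c.most_common(1)[0][1] for c in groups.values()) / len(flows)

print(f"train/test leakage: {leakage:.0%}, accuracy ceiling: {ceiling:.1%}")
```

On this toy data every test sequence also occurs in training (100% leakage), and the single conflicting label drops the attainable accuracy to 87.5%, illustrating why deduplication-aware (e.g. group-based) splits matter for TC evaluation.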