🤖 AI Summary
To address the lack of interpretability in malware detection, this paper proposes AutoMalDesc, the first framework for automatically generating natural language descriptions of threat behaviors across multiple scripting languages. Methodologically, it integrates static code analysis, self-paced iterative self-training, and LLM-assisted evaluation, enabling continuous refinement via synthetic data starting from a small set of labeled examples. Crucially, it jointly optimizes technical accuracy and linguistic fluency. Trained on over 100,000 script samples and evaluated on 3,600 test samples spanning five scripting languages, AutoMalDesc achieves consistent improvements across iterations: +2.1 BLEU in description quality and +3.7% in classification accuracy. The authors publicly release the complete dataset, including annotated seed examples and test sets, to foster reproducible and scalable research in automated threat explanation.
📝 Abstract
Generating thorough natural language explanations for threat detections remains an open problem in cybersecurity research, despite significant advances in automated malware detection systems. In this work, we present AutoMalDesc, an automated static analysis summarization framework that, following initial training on a small set of expert-curated examples, operates independently at scale. This approach leverages an iterative self-paced learning pipeline that progressively enhances output quality through synthetic data generation and validation cycles, eliminating the need for extensive manual data annotation. Evaluation across 3,600 diverse samples in five scripting languages demonstrates statistically significant improvements between iterations, with consistent gains in both summary quality and classification accuracy. Our comprehensive validation approach combines quantitative metrics based on established malware labels with qualitative assessment from both human experts and LLM-based judges, confirming both the technical precision and the linguistic coherence of the generated summaries. To facilitate reproducibility and advance research in this domain, we publish our complete dataset of more than 100K script samples, including annotated seed (0.9K) and test (3.6K) datasets, along with our methodology and evaluation framework.
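The self-paced iterative self-training loop described in the abstract can be sketched as follows. This is a minimal illustration only: the function and variable names (`score_candidate`, `self_paced_self_training`, `generate`) are hypothetical stand-ins, the scorer is a toy heuristic standing in for the paper's LLM-assisted evaluation, and none of this reflects AutoMalDesc's actual implementation.

```python
def score_candidate(text: str) -> float:
    """Toy confidence scorer; the paper instead uses LLM-assisted evaluation."""
    # Heuristic: descriptions mentioning concrete behaviors score higher.
    keywords = {"obfuscated", "downloads", "registry", "payload"}
    hits = sum(word in text.lower() for word in keywords)
    length_bonus = 0.1 * min(len(text.split()), 5) / 5
    return min(1.0, 0.2 * hits + length_bonus)

def self_paced_self_training(seed, unlabeled, generate,
                             iterations=3, start_threshold=0.8, step=0.1):
    """Each round: generate synthetic descriptions for unlabeled scripts,
    keep only those scoring above a confidence threshold, fold them into
    the training set, then relax the threshold (easy-to-hard curriculum)."""
    train_set = list(seed)          # start from the small expert-curated seed
    threshold = start_threshold
    for _ in range(iterations):
        remaining = []
        for sample in unlabeled:
            description = generate(sample, train_set)
            if score_candidate(description) >= threshold:
                train_set.append((sample, description))  # accept as synthetic data
            else:
                remaining.append(sample)                 # retry in a later round
        unlabeled = remaining
        threshold = max(0.0, threshold - step)  # admit harder samples next round
    return train_set
```

A caller would plug in a real `generate` function (e.g. a fine-tuned model producing a description from a script and the current training set); the loop then grows the training set from the seed without further manual annotation.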