AutoMalDesc: Large-Scale Script Analysis for Cyber Threat Research

📅 2025-11-17
🤖 AI Summary
To address the lack of interpretability in malware detection, this paper proposes AutoMalDesc—the first framework for automated natural language description generation of threat behaviors across multiple scripting languages. Methodologically, it integrates static code analysis, self-paced learning–driven iterative self-training, and LLM-assisted evaluation, enabling continuous refinement via synthetic data starting from a small set of labeled examples. Crucially, it jointly optimizes technical accuracy and linguistic fluency. Trained on over 100,000 script samples and evaluated on 3,600 test samples spanning five scripting languages, AutoMalDesc achieves consistent improvements across iterations: +2.1 BLEU in description quality and +3.7% in classification accuracy. The authors publicly release the complete dataset, including annotated seed examples and test sets, to foster reproducible and scalable research in automated threat explanation.
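The summary above reports description-quality gains in BLEU. As a rough illustration of how such a metric scores a generated summary against a reference, here is a minimal sentence-level BLEU sketch (modified n-gram precision with a brevity penalty). This is a simplified stand-in for illustration only, not the paper's evaluation code; production evaluation would typically use an established implementation with smoothing.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def sentence_bleu(reference, candidate, max_n=4):
    """Unsmoothed sentence-level BLEU of `candidate` against one `reference`."""
    ref, cand = reference.split(), candidate.split()
    precisions = []
    for n in range(1, max_n + 1):
        cand_counts = Counter(ngrams(cand, n))
        ref_counts = Counter(ngrams(ref, n))
        # Clipped (modified) n-gram precision.
        overlap = sum(min(c, ref_counts[g]) for g, c in cand_counts.items())
        total = max(1, sum(cand_counts.values()))
        precisions.append(overlap / total)
    if min(precisions) == 0:
        return 0.0  # no smoothing: any empty precision zeroes the score
    geo_mean = math.exp(sum(math.log(p) for p in precisions) / max_n)
    # Brevity penalty discourages overly short candidates.
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / max(1, len(cand)))
    return bp * geo_mean
```

A candidate identical to the reference scores 1.0; dropping or changing words lowers the clipped precisions and triggers the brevity penalty.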

📝 Abstract
Generating thorough natural language explanations for threat detections remains an open problem in cybersecurity research, despite significant advances in automated malware detection systems. In this work, we present AutoMalDesc, an automated static analysis summarization framework that, following initial training on a small set of expert-curated examples, operates independently at scale. This approach leverages an iterative self-paced learning pipeline to progressively enhance output quality through synthetic data generation and validation cycles, eliminating the need for extensive manual data annotation. Evaluation across 3,600 diverse samples in five scripting languages demonstrates statistically significant improvements between iterations, showing consistent gains in both summary quality and classification accuracy. Our comprehensive validation approach combines quantitative metrics based on established malware labels with qualitative assessment from both human experts and LLM-based judges, confirming both technical precision and linguistic coherence of generated summaries. To facilitate reproducibility and advance research in this domain, we publish our complete dataset of more than 100K script samples, including annotated seed (0.9K) and test (3.6K) datasets, along with our methodology and evaluation framework.
Problem

Research questions and friction points this paper is trying to address.

Generating natural language explanations for malware detections
Automating static analysis summarization at large scale
Reducing manual annotation in cybersecurity threat research
Innovation

Methods, ideas, or system contributions that make the work stand out.

Automated static analysis summarization framework for malware
Iterative self-paced learning with synthetic data generation
Combines quantitative metrics and qualitative expert assessments
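The iterative self-paced pipeline described above can be sketched as a toy loop: each round, the current model pseudo-labels unlabeled scripts, a judge scores the synthetic descriptions, and only examples above a confidence threshold join the training set, with the threshold gradually relaxed (the "self-paced" curriculum). All names, thresholds, and the `generate`/`judge` interfaces here are hypothetical, not the paper's implementation.

```python
from dataclasses import dataclass

@dataclass
class Example:
    script: str
    description: str
    judge_score: float  # 0..1 confidence assigned by an LLM-based judge

def self_paced_rounds(seed, unlabeled, judge, generate,
                      start_threshold=0.9, step=0.1, rounds=3):
    """Grow the training set via self-training with a relaxing threshold."""
    train = list(seed)
    threshold = start_threshold
    for _ in range(rounds):
        accepted = []
        for script in unlabeled:
            desc = generate(script, train)   # pseudo-label with current model
            score = judge(script, desc)      # estimate description quality
            if score >= threshold:
                accepted.append(Example(script, desc, score))
        train.extend(accepted)               # validated synthetic data joins training
        unlabeled = [s for s in unlabeled
                     if all(e.script != s for e in accepted)]
        threshold = max(0.5, threshold - step)  # relax the curriculum each round
    return train
```

Starting strict and relaxing the acceptance threshold mirrors self-paced learning: early rounds train only on the easiest, highest-confidence synthetic examples, later rounds admit harder ones.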
Authors

Alexandru-Mihai Apostu (CrowdStrike)
Andrei Preda (CrowdStrike)
Alexandra Daniela Damir (CrowdStrike)
Diana Bolocan (CrowdStrike)
Radu Tudor Ionescu (Professor, University of Bucharest, Romania; Computer Vision, Machine Learning, AI, Computational Linguistics, Medical Imaging)
Ioana Croitoru (CrowdStrike)
Mihaela Gaman (CrowdStrike)