Malicious Or Not: Adding Repository Context to Agent Skill Classification

📅 2026-03-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the high false positive rate of existing AI agent skill safety scanners, which frequently misclassify benign skills as malicious. To mitigate this issue, the study is the first to incorporate contextual information derived from the GitHub repositories hosting these skills. In a systematic security analysis of 238,180 AI agent skills, the approach evaluates the consistency between skill descriptions and repository code, integrates security behavior classification, and incorporates empirical validation. This methodology reduces the proportion of skills erroneously flagged as non-benign from 46.8% to 0.52%, substantially lowering the false positive rate. It also uncovers novel real-world attack vectors, such as skill hijacking via abandoned repositories, thereby establishing a new paradigm for security assessment in the AI agent ecosystem.
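The core idea of using repository context can be illustrated with a minimal sketch: compare the capabilities a skill declares in its SKILL.md against behaviors suggested by the repository's source files, and surface only the undeclared ones. This is a hypothetical heuristic under assumed keyword patterns and function names (`declared_capabilities`, `observed_capabilities`, `flag_with_context`), not the paper's actual classifier.

```python
# Hypothetical sketch of description-vs-repository consistency checking.
# The capability keywords and regexes below are illustrative assumptions,
# not taken from the paper.
import re

CAPABILITY_PATTERNS = {
    "network": re.compile(r"requests\.|urllib|https?://|socket\.", re.I),
    "filesystem": re.compile(r"open\(|os\.remove|shutil\.rmtree", re.I),
    "shell": re.compile(r"subprocess|os\.system", re.I),
}

def declared_capabilities(skill_md: str) -> set:
    """Very rough extraction of capabilities a SKILL.md admits to."""
    text = skill_md.lower()
    return {cap for cap in CAPABILITY_PATTERNS if cap in text}

def observed_capabilities(repo_files: dict) -> set:
    """Capabilities suggested by the repository's source files."""
    found = set()
    for source in repo_files.values():
        for cap, pattern in CAPABILITY_PATTERNS.items():
            if pattern.search(source):
                found.add(cap)
    return found

def flag_with_context(skill_md: str, repo_files: dict) -> set:
    """Capabilities present in the code but not declared in SKILL.md."""
    return observed_capabilities(repo_files) - declared_capabilities(skill_md)
```

For example, a skill whose description only mentions filesystem access but whose repository imports `subprocess` would be flagged for the undeclared shell capability, while a skill whose code matches its description would pass.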

📝 Abstract
Agent skills extend local AI agents, such as Claude Code or Open Claw, with additional functionality, and their popularity has led to the emergence of dedicated skill marketplaces, similar to app stores for mobile applications. Simultaneously, automated skill scanners were introduced that analyze the skill description available in SKILL.md to verify benign behavior. For individual marketplaces, these scanners mark up to 46.8% of skills as malicious. In this paper, we present the largest empirical security analysis of the AI agent skill ecosystem, questioning this high rate of malicious classifications. To this end, we collect 238,180 unique skills from three major distribution platforms and GitHub to systematically analyze their type and behavior. This approach substantially reduces the share of skills flagged as non-benign by security scanners to only 0.52%, which remain in repositories flagged as malicious. Consequently, our methodology substantially reduces false positives and provides a more robust view of the ecosystem's current risk surface. Beyond that, we extend the security analysis from the mere investigation of the skill description to a comparison of its congruence with the GitHub repository in which the skill is embedded, providing additional context. Our analysis also uncovers several previously undocumented real-world attack vectors, namely the hijacking of skills hosted on abandoned GitHub repositories.
Problem

Research questions and friction points this paper is trying to address.

AI agent skills
malicious classification
false positives
security analysis
repository context
Innovation

Methods, ideas, or system contributions that make the work stand out.

repository context
agent skill classification
false positive reduction
security analysis
attack vector discovery