🤖 AI Summary
This study presents the first independent, systematic evaluation of how robustly Claude Code's auto mode enforces permission controls under ambiguous authorization scenarios. Auto mode gates dangerous tool calls with a two-stage transcript classifier, but its coverage has a critical blind spot: non-shell operations such as file editing are never evaluated. To probe this gap, we introduce AmPermBench, a benchmark spanning four DevOps task families and three controlled ambiguity axes, comprising 253 fine-grained state-changing actions. Scored against ground truth validated by human expert annotation, our analysis reveals an end-to-end false negative rate of 81.0%, with 36.8% of all state-changing actions falling outside classifier coverage as Tier 2 operations (in-project file edits). Even within the nominally covered Tier 3 scope, the false negative rate remains a striking 70.3%, alongside a false positive rate of 31.9%, exposing severe gaps in current permission systems for AI coding agents.
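The headline rates in this summary are per-action ratio estimates with score-based intervals. As a minimal sketch (the counts below are hypothetical, not the paper's data), a false negative rate and its 95% Wilson score interval might be computed like this:

```python
from math import sqrt

def rate_with_wilson_ci(k: int, n: int, z: float = 1.96) -> tuple[float, float, float]:
    """Point estimate k/n and its Wilson score interval at confidence level z."""
    p = k / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return p, center - half, center + half

# Hypothetical example: 81 missed detections out of 100 dangerous actions.
fnr, lo, hi = rate_with_wilson_ci(81, 100)
print(f"FNR {fnr:.1%} (95% CI: {lo:.1%}-{hi:.1%})")
```

The Wilson interval is preferred over the normal approximation here because proportions near the extremes (such as an FNR above 80%) make the simple Wald interval unreliable at modest sample sizes.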
📝 Abstract
Claude Code's auto mode is the first deployed permission system for AI coding agents, using a two-stage transcript classifier to gate dangerous tool calls. Anthropic reports a 0.4% false positive rate and a 17% false negative rate on production traffic. We present the first independent evaluation of this system on deliberately ambiguous authorization scenarios, i.e., tasks where the user's intent is clear but the target scope, blast radius, or risk level is underspecified. Using AmPermBench, a 128-prompt benchmark spanning four DevOps task families and three controlled ambiguity axes, we evaluate 253 state-changing actions at the individual action level against oracle ground truth.
Our findings characterize auto mode's scope-escalation coverage under this stress-test workload. The end-to-end false negative rate is 81.0% (95% CI: 73.8%-87.4%), substantially higher than the 17% reported on production traffic, reflecting a fundamentally different workload rather than a contradiction. Notably, 36.8% of all state-changing actions fall outside the classifier's scope via Tier 2 (in-project file edits), contributing to the elevated end-to-end FNR. Even restricting to the 160 actions the classifier actually evaluates (Tier 3), the FNR remains 70.3%, while the FPR rises to 31.9%. The Tier 2 coverage gap is most pronounced on artifact cleanup (92.9% FNR), where agents naturally fall back to editing state files when the expected CLI is unavailable. These results highlight a coverage boundary worth examining: auto mode assumes dangerous actions transit the shell, but agents routinely achieve equivalent effects through file edits that the classifier does not evaluate.
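The coverage boundary can be made concrete with a toy routing sketch (the function, tool names, and tier policy below are illustrative assumptions, not Claude Code's actual implementation): a shell command reaches the classifier for evaluation, while an in-project file edit is auto-approved without ever being evaluated.

```python
from dataclasses import dataclass

@dataclass
class Action:
    tool: str    # e.g. "Bash" for shell commands, "Edit" for file edits
    target: str  # command string or file path

def route(action: Action, project_root: str = "/repo") -> str:
    """Illustrative tier routing: only shell commands reach the classifier."""
    if action.tool == "Bash":
        # Tier 3: the transcript classifier evaluates the command.
        return "classifier"
    if action.tool == "Edit" and action.target.startswith(project_root):
        # Tier 2: in-project edits are auto-approved without evaluation,
        # the gap that AmPermBench exposes.
        return "auto-approve"
    return "ask-user"

# A destructive shell command is routed to the classifier...
print(route(Action("Bash", "rm -rf /repo/releases")))         # classifier
# ...but a file edit with an equivalent effect on tracked state is not.
print(route(Action("Edit", "/repo/releases/manifest.json")))  # auto-approve
```

Under this toy policy, the artifact-cleanup failure mode described above follows naturally: when the expected CLI is unavailable and the agent falls back to editing state files, every resulting action takes the `auto-approve` path.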