🤖 AI Summary
Non-experts face significant challenges in implementing inference attacks and tuning their parameters to assess the security risks of machine learning (ML) services. Method: This paper proposes AttackPilot, a large language model (LLM)-based multi-agent framework for automated evaluation. It introduces a task-customized action space and an adaptive policy optimization mechanism to mitigate common agent failures, such as planning errors, instruction deviation, and context loss, and employs models such as GPT-4o to instantiate autonomous attack agents that dynamically adapt their behavior under constrained API environments. Contribution/Results: Evaluated on 20 real-world ML services, the framework achieves a 100% task completion rate at an average cost of only $0.627 per attack, with near-expert attack efficacy. To our knowledge, this is the first work to enable low-barrier, robust, near-expert automation of inference attack evaluation.
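The paper itself does not publish its agent loop here, but the idea of a task-customized action space can be sketched as follows: the planner (an LLM in the paper, a stub function below) proposes actions, and any proposal outside the allowed set is replaced by a safe default, which is one way such a constraint can mitigate bad plans and hallucinated tools. All names and the fallback behavior below are illustrative assumptions, not details from the paper.

```python
# Hypothetical sketch: an agent loop constrained to a fixed,
# task-customized action space. Names are illustrative only.
ACTION_SPACE = ["query_service", "run_attack", "tune_threshold", "report"]

def constrained_step(proposed: str) -> str:
    # Reject any action outside the action space and fall back to a
    # safe default (one way to mitigate hallucinated tool calls).
    return proposed if proposed in ACTION_SPACE else "query_service"

def run_episode(policy, max_steps: int = 10) -> list[str]:
    # Repeatedly ask the policy for an action, filter it through the
    # action-space constraint, and stop once the agent reports.
    trace: list[str] = []
    for _ in range(max_steps):
        action = constrained_step(policy(trace))
        trace.append(action)
        if action == "report":
            break
    return trace

def stub_policy(trace: list[str]) -> str:
    # Stand-in for an LLM planner; its third proposal is invalid.
    plan = ["query_service", "run_attack", "hallucinated_tool", "report"]
    return plan[len(trace)] if len(trace) < len(plan) else "report"

trace = run_episode(stub_policy)
# The out-of-space "hallucinated_tool" is replaced by the safe default,
# so every executed action stays inside ACTION_SPACE.
```

In the paper's setting the policy would be a GPT-4o-backed planner and the actions would wrap real attack steps; this stub only shows the constraint mechanism itself.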
📝 Abstract
Inference attacks have been widely studied and offer a systematic risk assessment of ML services; however, implementing them and choosing attack parameters for optimal risk estimation are challenging for non-experts. The emergence of advanced large language models presents a promising yet largely unexplored opportunity to develop autonomous agents that act as inference attack experts and help address this challenge. In this paper, we propose AttackPilot, an autonomous agent capable of independently conducting inference attacks without human intervention. We evaluate it on 20 target services. The evaluation shows that our agent, using GPT-4o, achieves a 100.0% task completion rate and near-expert attack performance, with an average token cost of only $0.627 per run. The agent can also be powered by many other representative LLMs and can adaptively optimize its strategy under service constraints. We further perform trace analysis, demonstrating that design choices such as a multi-agent framework and task-specific action spaces effectively mitigate errors such as bad plans, failure to follow instructions, task context loss, and hallucinations. We anticipate that such agents could empower non-expert ML service providers, auditors, or regulators to systematically assess the risks of ML services without requiring deep domain expertise.