MicroRemed: Benchmarking LLMs in Microservices Remediation

📅 2025-11-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing LLM-based approaches to microservice fault remediation rely heavily on manual prompting and struggle to generate executable Ansible playbooks directly from diagnostic reports. Method: We propose MicroRemed, the first end-to-end evaluation benchmark for microservice remediation, and ThinkRemed, a multi-agent reasoning framework that emulates the perception-reflection-decision process of human SREs to enable prompt-free, iterative, automated remediation. ThinkRemed tightly couples large language models with structured, agent-driven reasoning, departing from conventional prompt-engineering paradigms. Contribution/Results: Experiments show that state-of-the-art LLMs perform poorly on MicroRemed, while ThinkRemed significantly improves end-to-end remediation success rate, executable-script correctness, and functional recovery. The work establishes a reproducible paradigm for AI-driven autonomous operations (AIOps), advancing both benchmarking rigor and practical remediation capability.

📝 Abstract
Large Language Models (LLMs) integrated with agent-based reasoning frameworks have recently shown strong potential for autonomous decision-making and system-level operations. One promising yet underexplored direction is microservice remediation, where the goal is to automatically recover faulty microservice systems. Existing approaches, however, still rely on human-crafted prompts from Site Reliability Engineers (SREs), with LLMs merely converting textual instructions into executable code. To advance research in this area, we introduce MicroRemed, the first benchmark for evaluating LLMs in end-to-end microservice remediation, where models must directly generate executable Ansible playbooks from diagnosis reports to restore system functionality. We further propose ThinkRemed, a multi-agent framework that emulates the reflective and perceptive reasoning of SREs. Experimental results show that MicroRemed presents substantial challenges to current LLMs, while ThinkRemed improves end-to-end remediation performance through iterative reasoning and system reflection. The benchmark is available at https://github.com/LLM4AIOps/MicroRemed.
Problem

Research questions and friction points this paper is trying to address.

Benchmarking LLMs for autonomous microservice system recovery
Generating executable remediation code from diagnostic reports
Addressing limitations of human-crafted prompts in fault remediation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Generates Ansible playbooks from diagnosis reports
Uses multi-agent framework for reflective reasoning
Emulates SRE reasoning through iterative system reflection
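To make the innovation bullets concrete, here is a minimal, hypothetical sketch of such a perception-reflection-decision loop: a diagnosis is rendered into an Ansible playbook, the playbook is applied, and the system is re-probed until it recovers. All names (`Diagnosis`, `render_playbook`, `remediate`, the health and apply callbacks) are illustrative assumptions, not the MicroRemed or ThinkRemed APIs.

```python
from dataclasses import dataclass

@dataclass
class Diagnosis:
    service: str   # faulty microservice name
    fault: str     # e.g. "container crash-loop"

def render_playbook(d: Diagnosis) -> str:
    """Turn a diagnosis into a minimal Ansible playbook (YAML string).
    A real system would have an LLM generate this from the diagnosis report."""
    return (
        "- hosts: all\n"
        "  tasks:\n"
        f"    - name: restart {d.service}\n"
        "      ansible.builtin.systemd:\n"
        f"        name: {d.service}\n"
        "        state: restarted\n"
    )

def remediate(d: Diagnosis, healthy, apply, max_rounds: int = 3) -> bool:
    """Iterate: decide (generate and apply a playbook), perceive (probe
    system health), and reflect (retry if the system is still faulty)."""
    for _ in range(max_rounds):
        apply(render_playbook(d))   # decision: execute the generated playbook
        if healthy(d.service):      # perception: check whether the fault cleared
            return True
        # reflection: a full framework would revise the playbook here
    return False
```

The loop terminates either on functional recovery (the benchmark's success criterion) or after a fixed reflection budget, mirroring the iterative, prompt-free remediation the paper describes.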