Grounded AI for Code Review: Resource-Efficient Large-Model Serving in Enterprise Pipelines

📅 2025-10-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
In high-compliance settings, static analysis tools suffer from poor interpretability, while large language models (LLMs) exhibit hallucination risks and high computational costs. Method: This paper proposes an AST-guided, decoupled automated code review framework that integrates static analysis with AST-context extraction, incorporates quantized open-source LLMs, and employs a multi-level caching and on-demand service architecture. The system enables independent deployment of inference-enhancement and service layers, delivering low-latency, auditable defect localization, root-cause attribution, and repair suggestions via PR-native feedback and a lightweight real-time inference stack. Results: Evaluated against C/C++ security standards, the framework achieves a median first-feedback latency of <60 seconds and outperforms mainstream proprietary models in violation detection rate. Internal assessments demonstrate significant reductions in manual review iterations and triage effort. The core contribution is a novel, compliance-oriented code review paradigm that jointly ensures trustworthiness, efficiency, and reproducibility.
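The multi-level caching the summary describes can be sketched as a two-tier lookup keyed on the static-analysis finding plus its AST context, so identical findings in identical code regions reuse a prior LLM response instead of re-invoking the model. The class and tier policies below are illustrative assumptions, not the paper's implementation; the warm tier stands in for a disk or shared store.

```python
import hashlib

class TieredCache:
    """Illustrative two-tier cache for LLM review responses: a small
    hot in-memory tier backed by a larger warm tier (a stand-in for a
    disk or shared cache). Hypothetical sketch, not the paper's code."""

    def __init__(self, hot_capacity: int = 128):
        self.hot = {}            # tier 1: fast exact-hit lookups
        self.warm = {}           # tier 2: larger, slower store
        self.hot_capacity = hot_capacity

    @staticmethod
    def key(finding_id: str, ast_context: str) -> str:
        # The key combines the finding with its AST-derived context, so a
        # cached explanation is only reused when the surrounding code matches.
        blob = f"{finding_id}\x00{ast_context}".encode()
        return hashlib.sha256(blob).hexdigest()

    def get(self, k: str):
        if k in self.hot:
            return self.hot[k]
        if k in self.warm:
            self.hot[k] = self.warm[k]   # promote on warm hit
            self._evict()
            return self.hot[k]
        return None                      # miss: caller invokes the LLM

    def put(self, k: str, response: str) -> None:
        self.hot[k] = response
        self.warm[k] = response
        self._evict()

    def _evict(self) -> None:
        # FIFO eviction for brevity; a real tier would likely use LRU.
        while len(self.hot) > self.hot_capacity:
            self.hot.pop(next(iter(self.hot)))
```

A hit at either tier skips model inference entirely, which is where the cost and latency savings in an on-demand serving stack would come from.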

📝 Abstract
Automated code review adoption lags in compliance-heavy settings, where static analyzers produce high-volume, low-rationale outputs and naive LLM use risks hallucination and incurs cost overhead. We present a production system for grounded, PR-native review that pairs static-analysis findings with AST-guided context extraction and a single-GPU, on-demand serving stack (quantized open-weight model, multi-tier caching) to deliver concise explanations and remediation guidance. Evaluated on safety-oriented C/C++ standards, the approach achieves sub-minute median first feedback (offline p50 build+LLM of 59.8 s) while maintaining competitive violation reduction and lower violation rates than larger proprietary models. The architecture is decoupled: teams can adopt the grounding/prompting layer or the serving layer independently. A small internal survey (n=8) provides directional signals of reduced triage effort and moderate perceived grounding, with participants reporting fewer human review iterations. We outline operational lessons and limitations, emphasizing reproducibility, auditability, and pathways to broader standards coverage and assisted patching.
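The PR-native feedback described in the abstract amounts to formatting each grounded finding as an inline review comment anchored to the flagged diff line. The sketch below builds a GitHub-style review-comment payload as an illustrative assumption; the paper does not specify the code host, and the field names follow GitHub's pull-request review-comment API shape.

```python
def build_pr_comment(finding: dict) -> dict:
    """Format a grounded finding as an inline PR review-comment payload.
    GitHub-style fields are an illustrative assumption. The body keeps
    the feedback auditable: rule ID, location, root cause, and suggested
    fix all trace back to the static-analysis finding."""
    body = (
        f"**{finding['rule_id']}** at `{finding['path']}:{finding['line']}`\n\n"
        f"Root cause: {finding['root_cause']}\n\n"
        f"Suggested fix: {finding['suggestion']}"
    )
    return {
        "body": body,
        "path": finding["path"],
        "line": finding["line"],
        "side": "RIGHT",  # anchor on the new (post-change) side of the diff
    }
```

Keeping the rule ID and location in the comment body is what makes each LLM-generated explanation auditable against the underlying analyzer output.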
Problem

Research questions and friction points this paper is trying to address.

Addresses inefficient static analyzers with high-volume, low-rationale outputs
Reduces LLM hallucination risks and cost overhead in code review
Enables resource-efficient AI code review for compliance-heavy enterprise pipelines
Innovation

Methods, ideas, or system contributions that make the work stand out.

Static analysis paired with AST-guided context extraction
Single-GPU serving stack with quantized open-weight model
Decoupled architecture enabling independent adoption of components
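AST-guided context extraction, as named in the innovations above, can be illustrated as: given a static-analysis finding's line number, walk the AST and hand the LLM only the innermost enclosing function rather than the whole file. The paper targets C/C++; Python's `ast` module is used here purely as a self-contained stand-in for the same idea.

```python
import ast

def enclosing_function_context(source: str, flagged_line: int):
    """Illustrative AST-guided context extraction (Python `ast` as a
    stand-in; the paper's system targets C/C++). Returns the source of
    the innermost function enclosing the flagged line, so the model sees
    a focused, grounded snippet instead of the full file."""
    tree = ast.parse(source)
    best = None
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            if node.lineno <= flagged_line <= node.end_lineno:
                # Prefer the innermost (latest-starting) enclosing def.
                if best is None or node.lineno > best.lineno:
                    best = node
    return ast.get_source_segment(source, best) if best else None
```

Bounding the prompt to the enclosing function is one plausible way such a system keeps the LLM's input grounded in code that is actually relevant to the finding, which limits both hallucination surface and token cost.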