Improving LLM Reliability through Hybrid Abstention and Adaptive Detection

📅 2026-02-17
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the inherent tension between safety and usability in deployed large language models, where overly stringent filtering often leads to false rejections of benign requests, while permissive policies risk generating harmful content. To resolve this trade-off, the authors propose a context-aware adaptive rejection mechanism that dynamically adjusts safety thresholds by integrating real-time signals such as user history and dialogue domain. The approach employs five parallel detectors combined through a hierarchical cascaded inference architecture. Experimental results demonstrate that the method achieves high safety precision and near-perfect recall while significantly reducing both false positive rates and response latency, with particularly strong performance in sensitive domains such as medical advice and creative writing.
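
The summary names the contextual signals (dialogue domain, user history) but not how they move the threshold. The Python sketch below is one plausible reading, assuming a per-domain base threshold nudged by user history; every identifier and constant here (RequestContext, BASE_THRESHOLDS, the 0.05 and 0.02 adjustments) is invented for illustration and is not taken from the paper.

```python
from dataclasses import dataclass

@dataclass
class RequestContext:
    """Contextual signals named in the summary: dialogue domain and user history."""
    domain: str              # e.g. "medical", "creative_writing", "general"
    prior_violations: int    # past policy violations by this user
    prior_benign_turns: int  # past benign interactions by this user

# Hypothetical per-domain base thresholds; the paper does not publish its values.
BASE_THRESHOLDS = {
    "medical": 0.85,           # advice-seeking queries are usually benign
    "creative_writing": 0.80,  # fiction often trips naive keyword filters
    "general": 0.70,
}

def adaptive_threshold(ctx: RequestContext) -> float:
    """Risk score at or above which the system abstains.

    A higher threshold is more permissive (fewer false rejections);
    a lower one is stricter.
    """
    t = BASE_THRESHOLDS.get(ctx.domain, BASE_THRESHOLDS["general"])
    t -= 0.05 * min(ctx.prior_violations, 3)          # tighten for a violation history
    t += 0.02 * min(ctx.prior_benign_turns // 10, 2)  # relax slightly for a clean one
    return max(0.50, min(t, 0.95))                    # clamp to a sane operating range

def should_abstain(risk_score: float, ctx: RequestContext) -> bool:
    return risk_score >= adaptive_threshold(ctx)
```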

📝 Abstract
Large Language Models (LLMs) deployed in production environments face a fundamental safety-utility trade-off: strict filtering mechanisms prevent harmful outputs but often block benign queries, while relaxed controls risk unsafe content generation. Conventional guardrails based on static rules or fixed confidence thresholds are typically context-insensitive and computationally expensive, resulting in high latency and degraded user experience. To address these limitations, we introduce an adaptive abstention system that dynamically adjusts safety thresholds based on real-time contextual signals such as domain and user history. The proposed framework integrates a multi-dimensional detection architecture composed of five parallel detectors, combined through a hierarchical cascade mechanism to optimize both speed and precision. The cascade design reduces unnecessary computation by progressively filtering queries, achieving substantial latency improvements compared to non-cascaded models and external guardrail systems. Extensive evaluation on mixed and domain-specific workloads demonstrates significant reductions in false positives, particularly in sensitive domains such as medical advice and creative writing. The system maintains high safety precision and near-perfect recall under strict operating modes. Overall, our context-aware abstention framework effectively balances safety and utility while preserving performance, offering a scalable solution for reliable LLM deployment.
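
The abstract describes five parallel detectors behind a hierarchical cascade but gives no implementation detail. The sketch below illustrates the general pattern under stated assumptions: a cheap first stage clears most traffic, and only ambiguous queries pay for the parallel ensemble, which is where the reported latency savings would come from. The lexical screen, the thresholds, and the max-score aggregation are all placeholders, not the authors' method.

```python
import concurrent.futures
from typing import Callable

# A detector maps a query to a risk score in [0, 1].
Detector = Callable[[str], float]

def cheap_screen(query: str) -> float:
    """Stage 1: a fast lexical screen standing in for the paper's first-stage filter."""
    flagged = ("exploit", "weapon", "overdose")
    hits = sum(w in query.lower() for w in flagged)
    return min(1.0, hits / 2)

def cascaded_abstention(
    query: str,
    detectors: list[Detector],  # stand-ins for the five parallel detectors
    clear_below: float = 0.1,   # stage-1 scores at or below this skip stage 2
    block_above: float = 0.9,   # stage-2 aggregate at or above this abstains
) -> bool:
    """Return True if the system should abstain on this query.

    Stage 1 resolves clearly benign traffic cheaply; only ambiguous queries
    pay for the full parallel ensemble.
    """
    if cheap_screen(query) <= clear_below:
        return False  # clearly benign: answer immediately, skip the ensemble
    with concurrent.futures.ThreadPoolExecutor() as pool:
        scores = list(pool.map(lambda d: d(query), detectors))
    return max(scores) >= block_above

# Example: five stand-in detectors. A borderline medical query escalates to
# stage 2 but is not blocked, matching the false-positive reduction the
# abstract claims for medical advice.
if __name__ == "__main__":
    detectors = [cheap_screen] * 5
    print(cascaded_abstention("how is an accidental overdose treated?", detectors))  # False
```
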
Problem

Research questions and friction points this paper is trying to address.

LLM safety
safety-utility trade-off
false positives
context-aware filtering
guardrail systems
Innovation

Methods, ideas, or system contributions that make the work stand out.

adaptive abstention
context-aware safety
cascade detection
false positive reduction
LLM reliability
Ankit Sharma
Department of Computer Science & Engineering, Chhattisgarh Swami Vivekanand Technical University, Bhilai, Chhattisgarh, India
Nachiket Tapas
Assistant Professor, University Teaching Department, CSVTU Bhilai
Blockchain · Internet of Things · Cybersecurity · Data Mining
Jyotiprakash Patra
Department of Computer Science & Engineering, Chhattisgarh Swami Vivekanand Technical University, Bhilai, Chhattisgarh, India