🤖 AI Summary
This study investigates the discrepancy between the rich clinical knowledge encoded in a language model's internal representations and its much weaker output performance, a "knowledge-action" gap, by systematically evaluating four mechanistic interpretability methods (concept bottleneck steering, sparse autoencoder feature steering, logit lens with activation patching, and truthfulness separator vector steering) for correcting false-negative errors in clinical triage. In an empirical analysis of 400 physician-adjudicated cases, linear probes achieved an AUROC of 98.2% while the model's output sensitivity remained at only 45.1%. The interpretability interventions corrected up to 24% of missed hazards, but they consistently degraded previously correct predictions. These findings challenge the prevailing assumption in AI safety that interpretability reliably enables controllability: mechanistic interpretability does not readily translate into effective error correction.
📝 Abstract
Language models encode task-relevant knowledge in their internal representations that far exceeds what their outputs express, but whether mechanistic interpretability methods can bridge this knowledge-action gap has not been systematically tested. We compared four mechanistic interpretability methods -- concept bottleneck steering (Steerling-8B), sparse autoencoder (SAE) feature steering, logit lens with activation patching, and linear probing with truthfulness separator vector (TSV) steering (Qwen 2.5 7B Instruct) -- for correcting false-negative triage errors on 400 physician-adjudicated clinical vignettes (144 hazardous, 256 benign). Linear probes discriminated hazardous from benign cases with 98.2% AUROC, yet the model's output sensitivity was only 45.1%, a 53-percentage-point knowledge-action gap. Concept bottleneck steering corrected 20% of missed hazards but disrupted 53% of correct detections, a trade-off indistinguishable from random perturbation (p=0.84). SAE feature steering produced no effect despite 3,695 statistically significant features. TSV steering at high strength corrected 24% of missed hazards while disrupting 6% of correct detections, but left 76% of errors uncorrected. Current mechanistic interpretability methods cannot reliably translate internal knowledge into corrected outputs, with implications for AI safety frameworks that assume interpretability enables effective error correction.
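The two core operations the abstract contrasts -- reading knowledge out with a linear probe versus writing it back in with a steering vector -- can be sketched on synthetic activations. This is a minimal illustration, not the paper's implementation: the dimensions, the Gaussian data, the difference-of-means direction, and the steering strength `alpha` are all assumptions for the sake of the example.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 64, 400                       # hidden size and case count (illustrative)

# Synthetic "hidden states": hazard cases are shifted along one latent direction.
latent = rng.normal(size=d)
latent /= np.linalg.norm(latent)
labels = rng.integers(0, 2, size=n)  # 1 = hazardous, 0 = benign
acts = rng.normal(size=(n, d)) + 2.0 * labels[:, None] * latent

# 1) Linear probe: a difference-of-means direction; the probe score is the
#    projection of each activation onto that direction.
steer = acts[labels == 1].mean(axis=0) - acts[labels == 0].mean(axis=0)
scores = acts @ steer

# AUROC computed directly via the Mann-Whitney rank statistic:
# the probability that a random hazard case scores above a random benign case.
pos, neg = scores[labels == 1], scores[labels == 0]
auroc = (pos[:, None] > neg[None, :]).mean()

# 2) Steering intervention: add the same direction, scaled by a strength
#    alpha, to every activation (in a real model, to the hidden states at
#    some layer during the forward pass).
alpha = 1.0
steered = acts + alpha * steer
```

The sketch makes the paper's tension concrete: the same vector that separates the classes almost perfectly as a read-out is, as a write-in, applied uniformly to every case, which is why an intervention can fix some missed hazards while simultaneously perturbing cases the model already handled correctly.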