Interpretability without actionability: mechanistic methods cannot correct language model errors despite near-perfect internal representations

📅 2026-03-18
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study investigates the gap between the rich clinical knowledge embedded in language models' internal representations and their weaker output performance (a "knowledge-action gap") by systematically evaluating four mechanistic interpretability methods (concept bottlenecks, sparse autoencoders, logit lens, and truthfulness separator vector steering) for correcting false-negative errors in clinical triage. In an empirical analysis of 400 physician-adjudicated cases, the work demonstrates that while linear probes achieve an AUROC of 98.2%, the model's output sensitivity remains as low as 45.1%. Although the interpretability interventions corrected up to 24% of missed cases, they consistently degraded previously correct predictions. These findings challenge the prevailing assumption in AI safety that interpretability reliably enables controllability, showing that mechanistic insight does not readily translate into effective error correction.

📝 Abstract
Language models encode task-relevant knowledge in internal representations that far exceeds their output performance, but whether mechanistic interpretability methods can bridge this knowledge-action gap has not been systematically tested. We compared four mechanistic interpretability methods -- concept bottleneck steering (Steerling-8B), sparse autoencoder feature steering, logit lens with activation patching, and linear probing with truthfulness separator vector steering (Qwen 2.5 7B Instruct) -- for correcting false-negative triage errors using 400 physician-adjudicated clinical vignettes (144 hazards, 256 benign). Linear probes discriminated hazardous from benign cases with 98.2% AUROC, yet the model's output sensitivity was only 45.1%, a 53-percentage-point knowledge-action gap. Concept bottleneck steering corrected 20% of missed hazards but disrupted 53% of correct detections, indistinguishable from random perturbation (p=0.84). SAE feature steering produced zero effect despite 3,695 significant features. TSV steering at high strength corrected 24% of missed hazards while disrupting 6% of correct detections, but left 76% of errors uncorrected. Current mechanistic interpretability methods cannot reliably translate internal knowledge into corrected outputs, with implications for AI safety frameworks that assume interpretability enables effective error correction.
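Two of the techniques the abstract compares, linear probing of hidden states and truthfulness-separator-vector steering, can be illustrated with a minimal sketch on synthetic data. This is not the paper's implementation: the dimensions, steering strength `alpha`, and all variable names below are illustrative assumptions, and the "hidden states" are random vectors planted with a concept direction rather than real model activations.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 64, 400                         # toy hidden-state size and case count
concept = rng.normal(size=d)           # latent "hazard" direction (synthetic)

y = rng.integers(0, 2, size=n)         # 1 = hazardous, 0 = benign (synthetic labels)
H = rng.normal(size=(n, d)) + np.outer(y, concept)  # hidden states encode the label

# (1) Linear probe: a least-squares readout of the label from frozen hidden states.
w, *_ = np.linalg.lstsq(H, y.astype(float), rcond=None)
scores = H @ w

def auroc(s, labels):
    """Rank-based AUROC: fraction of (hazard, benign) pairs ranked correctly."""
    pos, neg = s[labels == 1], s[labels == 0]
    return float((pos[:, None] > neg[None, :]).mean())

probe_auroc = auroc(scores, y)

# (2) Steering vector: difference of class-conditional means, normalized,
# then added back to an activation at a chosen strength alpha.
v = H[y == 1].mean(axis=0) - H[y == 0].mean(axis=0)
v /= np.linalg.norm(v)

def steer(h, alpha=4.0):
    """Shift a hidden state along the separator direction."""
    return h + alpha * v

h0 = H[y == 0][0]                      # a benign-looking activation
before, after = float(h0 @ w), float(steer(h0) @ w)
```

On this planted data the probe separates the classes almost perfectly and steering moves the probe's score toward "hazard"; the paper's point is that this clean picture does not carry over to correcting a real model's outputs.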
Problem

Research questions and friction points this paper is trying to address.

mechanistic interpretability
knowledge-action gap
language model errors
error correction
internal representations
Innovation

Methods, ideas, or system contributions that make the work stand out.

mechanistic interpretability
knowledge-action gap
language model error correction
linear probing
concept bottleneck
Sanjay Basu
University of California San Francisco, San Francisco, CA, USA
Sadiq Y. Patel
Waymark, San Francisco, CA, USA
Parth Sheth
University of Pennsylvania
Machine learning, Data Science
Bhairavi Muralidharan
Waymark, San Francisco, CA, USA
Namrata Elamaran
Waymark, San Francisco, CA, USA
Aakriti Kinra
Waymark, San Francisco, CA, USA
John Morgan
Professor of Business Administration, UC Berkeley
Economics of the internet, communications, voting, experiments
Rajaie Batniji
Waymark, San Francisco, CA, USA