🤖 AI Summary
This study addresses the phenomenon of “moral indifference” in large language models, in which surface-level compliance masks misaligned internal representations, rooted in the compression of distinct moral concepts into uniform probability distributions. The work demonstrates that scaling, architectural changes, and alignment training alone cannot eliminate this issue. To overcome it, the authors propose an endogenous alignment paradigm that shifts from passive correction to the active cultivation of moral reasoning. Leveraging Prototype Theory and the Social-Chemistry-101 dataset, they construct a space of 251,000 moral vectors and employ sparse autoencoders to extract mono-semantic moral features from Qwen3-8B, reconstructing its latent moral topology. Evaluated on the adversarial Flames benchmark, the approach achieves a 75% pairwise win rate, substantially improving both moral reasoning capability and fine-grained moral discernment.
📝 Abstract
Existing behavioral alignment techniques for Large Language Models (LLMs) often neglect the discrepancy between surface compliance and unaligned internal representations, leaving LLMs vulnerable to long-tail risks. More crucially, we posit that LLMs exhibit an inherent state of moral indifference because they compress distinct moral concepts into uniform probability distributions. We verify and remedy this indifference in LLMs' latent representations using 251k moral vectors constructed from Prototype Theory and the Social-Chemistry-101 dataset. First, our analysis of 23 models reveals that current LLMs fail to represent either the distinction between opposed moral categories or the fine-grained typicality gradients within those categories; notably, neither model scaling, architecture, nor explicit alignment reshapes this indifference. We then apply Sparse Autoencoders to Qwen3-8B, isolate mono-semantic moral features, and selectively reconstruct their topological relationships to align with ground-truth moral vectors. This representational alignment naturally improves moral reasoning and granularity, achieving a 75% pairwise win rate on the independent adversarial Flames benchmark. Finally, we examine the remedial nature of current intervention methods from the perspective of experientialist philosophy, arguing that endogenously aligned AI may require a shift from post-hoc correction to proactive cultivation.
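The core mechanism the abstract describes, training a sparse autoencoder on a model's hidden states so that ReLU-sparse activations act as interpretable features, can be sketched minimally as below. Everything here (sizes, learning rate, L1 weight, and the random stand-in for LLM hidden states) is an illustrative assumption, not the paper's actual configuration:

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, d_feats, n_tokens = 64, 256, 512          # illustrative sizes only
H = rng.normal(size=(n_tokens, d_model))           # stand-in for LLM hidden states

# Encoder/decoder weights; decoder rows act as the learned feature dictionary.
W_enc = rng.normal(scale=0.1, size=(d_model, d_feats))
b_enc = np.zeros(d_feats)
W_dec = rng.normal(scale=0.1, size=(d_feats, d_model))

lr, l1 = 0.1, 1e-3                                 # hypothetical hyperparameters
loss_hist = []
for _ in range(200):
    f = np.maximum(H @ W_enc + b_enc, 0.0)         # sparse (ReLU) feature activations
    err = f @ W_dec - H                            # reconstruction error
    loss = 0.5 * (err ** 2).mean() + l1 * np.abs(f).mean()
    loss_hist.append(loss)
    # Gradients of the entry-averaged loss (reconstruction + L1 sparsity penalty).
    g_f = (err @ W_dec.T) / err.size + l1 * np.sign(f) / f.size
    g_f *= (f > 0)                                 # ReLU backward mask
    W_dec -= lr * (f.T @ err) / err.size
    W_enc -= lr * H.T @ g_f
    b_enc -= lr * g_f.sum(axis=0)
```

After training, each column of `W_enc` (with its decoder row) would be inspected as a candidate mono-semantic feature; the paper's further step of reconstructing the topology among such features to match ground-truth moral vectors is beyond this sketch.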