Mechanistic Origin of Moral Indifference in Language Models

2026-03-16

Computation and Language · Artificial Intelligence
AI summary

The authors found that Large Language Models (LLMs) often appear to follow moral rules while their internal representations fail to clearly distinguish different moral concepts, a state they call moral indifference. Across 23 models, they observed that neither larger scale, different architectures, nor explicit alignment fixes this problem. Using Sparse Autoencoders, they isolated moral features inside one model and reshaped their relationships, which improved its moral reasoning, especially on an adversarial benchmark. The authors suggest that future AI alignment should focus on proactively shaping a model's internal representations rather than only correcting behavior after training.

Large Language Models · Behavioral alignment · Moral indifference · Latent representations · Prototype Theory · Sparse Autoencoders · Moral reasoning · Model scaling · Social-Chemistry-101 dataset · AI alignment
Authors
Lingyu Li, Yan Teng, Yingchun Wang
Abstract
Existing behavioral alignment techniques for Large Language Models (LLMs) often neglect the discrepancy between surface compliance and internal unaligned representations, leaving LLMs vulnerable to long-tail risks. More crucially, we posit that LLMs possess an inherent state of moral indifference due to compressing distinct moral concepts into uniform probability distributions. We verify and remedy this indifference in LLMs' latent representations, utilizing 251k moral vectors constructed upon Prototype Theory and the Social-Chemistry-101 dataset. First, our analysis across 23 models reveals that current LLMs fail to represent the distinction between opposed moral categories and the fine-grained typicality gradients within these categories; notably, neither model scaling, architecture, nor explicit alignment reshapes this indifference. We then employ Sparse Autoencoders on Qwen3-8B, isolate mono-semantic moral features, and selectively reconstruct their topological relationships to align with ground-truth moral vectors. This representational alignment naturally improves moral reasoning and granularity, achieving a 75% pairwise win-rate on the independent adversarial Flames benchmark. Finally, we elaborate on the remedial nature of current intervention methods from the standpoint of experientialist philosophy, arguing that endogenously aligned AI may require a transformation from post-hoc corrections to proactive cultivation.
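
To make the Sparse Autoencoder step more concrete, the sketch below shows the standard recipe for training an SAE on hidden activations so that individual features become sparse and closer to mono-semantic. This is not the authors' code: the hidden size, expansion factor, L1 weight, and training loop are illustrative assumptions, and the random tensor stands in for residual-stream activations collected from a model such as Qwen3-8B.

```python
# Minimal sketch of a sparse autoencoder over LLM hidden activations.
# All dimensions and hyperparameters below are assumptions for illustration.
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, d_model: int, d_features: int):
        super().__init__()
        # Over-complete dictionary: more features than hidden dimensions.
        self.encoder = nn.Linear(d_model, d_features)
        self.decoder = nn.Linear(d_features, d_model)

    def forward(self, x: torch.Tensor):
        # ReLU keeps feature activations non-negative; the L1 penalty on them
        # (added in the loss below) pushes most features to zero per input.
        features = torch.relu(self.encoder(x))
        reconstruction = self.decoder(features)
        return reconstruction, features

d_model, d_features = 4096, 16384            # hidden size and feature count (assumed)
sae = SparseAutoencoder(d_model, d_features)
optimizer = torch.optim.Adam(sae.parameters(), lr=1e-4)
l1_weight = 1e-3                             # sparsity penalty strength (assumed)

activations = torch.randn(256, d_model)      # stand-in for collected activations
for _ in range(10):
    recon, feats = sae(activations)
    loss = ((recon - activations) ** 2).mean() + l1_weight * feats.abs().mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The trade-off controlled by the L1 weight is the usual one: stronger sparsity makes individual features easier to interpret (and, in the paper's setting, to intervene on), at the cost of reconstruction fidelity of the original activations.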