Density-Guided Response Optimization: Community-Grounded Alignment via Implicit Acceptance Signals

2026-03-03

Artificial Intelligence · Computation and Language
AI summary

The authors study how language models can adapt to the unique rules and preferences of different online communities without needing direct human feedback. They found that communities show their preferences indirectly through which responses they accept or reject, and that these preferences create recognizable patterns in the model's representation space. The authors use these patterns to develop a new method, DGRO, that aligns language model responses with community norms even when no explicit labels or supervision exist. Experiments show that DGRO generalizes across diverse communities, producing answers that human annotators and domain experts prefer over those of competing approaches.

Language models, Model alignment, Community norms, Implicit feedback, Preference supervision, Representation space, Density-guided optimization, Online communities, Annotation scarcity, Human preference modeling
Authors
Patrick Gerard, Svitlana Volkova
Abstract
Language models deployed in online communities must adapt to norms that vary across social, cultural, and domain-specific contexts. Prior alignment approaches rely on explicit preference supervision or predefined principles, which are effective for well-resourced settings but exclude most online communities -- particularly those without institutional backing or annotation infrastructure, or those organized around sensitive topics -- where preference elicitation is costly, ethically fraught, or culturally misaligned. We observe that communities already express preferences implicitly through what content they accept, engage with, and allow to persist. We show that this acceptance behavior induces measurable geometric structure in representation space: accepted responses occupy coherent, high-density regions that reflect community-specific norms, while rejected content falls in sparser or misaligned areas. We operationalize this structure as an implicit preference signal for alignment and introduce density-guided response optimization (DGRO), a method that aligns language models to community norms without requiring explicit preference labels. Using labeled preference data, we demonstrate that local density recovers pairwise community judgments, indicating that geometric structure encodes meaningful preference signal. We then apply DGRO in annotation-scarce settings across diverse communities spanning platform, topic, and language. DGRO-aligned models consistently produce responses preferred by human annotators, domain experts, and model-based judges over supervised and prompt-based baselines. We position DGRO as a practical alignment alternative for communities where explicit preference supervision is unavailable or misaligned with situated practices, and discuss the implications and risks of learning from emergent acceptance behavior.
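The abstract's core claim -- that accepted responses sit in high-density regions of representation space while rejected content falls in sparser areas -- can be illustrated with a minimal sketch. The paper does not specify its density estimator here, so the snippet below assumes a simple k-nearest-neighbor density over synthetic embeddings (random vectors standing in for a real encoder); it is one plausible instantiation of "local density", not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: embeddings of community-accepted responses cluster
# around a shared "norm" direction; off-norm content is scattered.
d, n_accepted, k = 32, 200, 10
norm_center = rng.normal(size=d)
accepted = norm_center + 0.3 * rng.normal(size=(n_accepted, d))


def density_score(candidate, reference, k=10):
    """kNN density proxy: negative mean distance from a candidate embedding
    to its k nearest neighbors among accepted responses.
    Higher score = denser region = closer to community norms."""
    dists = np.linalg.norm(reference - candidate, axis=1)
    return -np.sort(dists)[:k].mean()


# A norm-conforming candidate lands near the accepted cluster;
# a norm-violating one is drawn with much larger spread.
on_norm = norm_center + 0.3 * rng.normal(size=d)
off_norm = norm_center + 3.0 * rng.normal(size=d)

assert density_score(on_norm, accepted, k) > density_score(off_norm, accepted, k)
```

Under this toy model, the density score ranks the norm-conforming candidate above the norm-violating one, mirroring how DGRO is described as using geometric structure as an implicit pairwise preference signal.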