Rhetorical Questions in LLM Representations: A Linear Probing Study

2026-04-15 · Computation and Language

Computation and Language · Artificial Intelligence · Machine Learning
AI summary

The authors studied how large language models (LLMs) represent rhetorical questions, which are asked to persuade rather than to obtain information. They found that LLMs can distinguish rhetorical questions from ordinary information-seeking questions fairly well, especially in the representation of the final token. However, probes trained to detect rhetorical questions on different datasets focus on different features, showing that there is no single way these questions are represented. Instead, rhetorical questions appear in LLMs through multiple patterns tied to different cues, such as discourse context or sentence structure.

rhetorical questions, large language models, linear probes, AUROC, cross-dataset transfer, discourse context, representations, syntax, argumentation, classification
Authors
Louie Hong Yao, Vishesh Anand, Yuan Zhuang, Tianyu Jiang
Abstract
Rhetorical questions are asked not to seek information but to persuade or signal stance. How large language models internally represent them remains unclear. We analyze rhetorical questions in LLM representations using linear probes on two social-media datasets with different discourse contexts, and find that rhetorical signals emerge early and are most stably captured by last-token representations. Rhetorical questions are linearly separable from information-seeking questions within datasets, and remain detectable under cross-dataset transfer, reaching AUROC around 0.7-0.8. However, we demonstrate that transferability does not simply imply a shared representation. Probes trained on different datasets produce different rankings when applied to the same target corpus, with overlap among the top-ranked instances often below 0.2. Qualitative analysis shows that these divergences correspond to distinct rhetorical phenomena: some probes capture discourse-level rhetorical stance embedded in extended argumentation, while others emphasize localized, syntax-driven interrogative acts. Together, these findings suggest that rhetorical questions in LLM representations are encoded by multiple linear directions emphasizing different cues, rather than a single shared direction.
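The setup the abstract describes can be illustrated with a small sketch: fit a linear probe (logistic regression) on last-token representations from one dataset, evaluate it on another via AUROC, and compare the top-k rankings of two probes on a shared target corpus. Everything below is illustrative, with synthetic data standing in for real hidden states; the dataset construction, dimensionality, and `k` are assumptions, not the authors' actual pipeline.

```python
# Hypothetical sketch of linear probing with cross-dataset transfer and
# top-k ranking overlap. Synthetic vectors stand in for last-token
# hidden states; all sizes and names are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
d = 64  # toy hidden size

def make_dataset(direction, n=400):
    """Toy 'last-token representations': rhetorical examples (y=1)
    are shifted along a dataset-specific linear direction."""
    y = rng.integers(0, 2, size=n)
    X = rng.normal(size=(n, d)) + np.outer(y, direction)
    return X, y

# Two datasets whose rhetorical cue lies along partially different
# directions, mimicking multiple linear encodings of the same category.
dir_a = rng.normal(size=d); dir_a /= np.linalg.norm(dir_a)
dir_b = rng.normal(size=d); dir_b /= np.linalg.norm(dir_b)
dir_b = 0.6 * dir_a + 0.8 * dir_b  # partially shared, partially distinct

Xa, ya = make_dataset(dir_a)
Xb, yb = make_dataset(dir_b)

probe_a = LogisticRegression(max_iter=1000).fit(Xa, ya)
probe_b = LogisticRegression(max_iter=1000).fit(Xb, yb)

# Cross-dataset transfer: probe trained on A, scored on B.
auroc_transfer = roc_auc_score(yb, probe_a.decision_function(Xb))

# Overlap of the top-k instances each probe ranks highest on a
# shared target corpus: transfer can succeed while rankings diverge.
Xt, _ = make_dataset(dir_a, n=500)
k = 50
top_a = set(np.argsort(-probe_a.decision_function(Xt))[:k])
top_b = set(np.argsort(-probe_b.decision_function(Xt))[:k])
overlap = len(top_a & top_b) / k
print(f"transfer AUROC: {auroc_transfer:.2f}, top-{k} overlap: {overlap:.2f}")
```

The point of the sketch is the abstract's central distinction: the transfer AUROC stays well above chance even though the two probes point along different directions, so their top-k sets on the same corpus need not coincide.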