AI summary
The authors studied how people use evasive language known as Algospeak to avoid automated content moderation, which is increasingly mediated by large language models (LLMs). They found that as this evasive modulation increases, the text becomes harder for both machines and most people to understand. The authors introduced the concept of Majority Understandable Modulation (MUM): the modulation level at which making Algospeak more evasive still helps avoid detection but leaves the message no longer understandable to the majority of readers. They created a dataset of COVID-19 misinformation rewritten at different levels of Algospeak and tested how well language models could understand and detect it, identifying key trade-offs between clarity and evasion. Their work provides tools and data for analyzing these linguistic evasion dynamics.
Keywords
Large Language Models, Algospeak, Linguistic Evasion, Detection, Understandability, Modulation, COVID-19 Disinformation, Meaning Recovery, Classification, Majority Understandable Modulation
Authors
Jan Fillies, Ronald E. Robertson, Jeffrey Hancock
Abstract
As large language models (LLMs) increasingly mediate both content generation and moderation, linguistic evasion strategies known as Algospeak have intensified the coevolution between evaders and detectors. This research formalizes the underlying dynamics, grounded in a joint action model: as Algospeak increases, detectability and understandability decrease. Further, the concept of Majority Understandable Modulation (MUM) is introduced and defined as the modulation level at which additional evasive alteration increases detector evasion but renders the message incomprehensible to the majority of recipients. To empirically probe this trade-off, we introduce a reproducible framework for creating meaning-preserving, Algospeak-style variants, based on an existing taxonomy and with tunable modulation levels. Using COVID-19 disinformation as a first proof-by-example setting, we construct a reference dataset of 700 modulated items, drawn from twenty base sentences across five modulation levels and seven strategies. We then run two linked evaluations with seven different language models: one testing interpretation through meaning recovery and one testing disinformation detection through classification. Curve fitting over modulation levels yields an estimate of the Majority Understandable Modulation threshold and enables sensitivity analyses across strategies and models (see Figure 1). Results reveal the characteristic relationships between understandability and modulation. This study lays the groundwork for understanding the dynamics behind Algospeak and provides the accompanying framework, dataset, and experimental setups.
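As a rough illustration of the curve-fitting step described in the abstract, the sketch below fits a decreasing logistic curve to hypothetical understandability scores across the five modulation levels and reads off the curve's midpoint as a MUM estimate under a majority (>50%) criterion. All data values, the logistic functional form, and the function names are assumptions for illustration only; they are not the paper's actual data, models, or fitting procedure.

```python
# Minimal sketch: estimate a MUM-style threshold by curve fitting over
# modulation levels. Illustrative only; not the authors' implementation.
import numpy as np
from scipy.optimize import curve_fit

# The reference dataset spans 20 base sentences x 5 levels x 7 strategies = 700 items.
# Assumed toy data: mean meaning-recovery accuracy (0-1) at each modulation level,
# averaged over items and models. These numbers are made up for illustration.
levels = np.array([1, 2, 3, 4, 5], dtype=float)
understandability = np.array([0.95, 0.88, 0.62, 0.35, 0.18])

def logistic_decay(x, k, x0):
    """Decreasing logistic curve: high understandability at low modulation,
    dropping off around the midpoint x0."""
    return 1.0 / (1.0 + np.exp(k * (x - x0)))

# Fit the curve over modulation levels.
(k, x0), _ = curve_fit(logistic_decay, levels, understandability, p0=[1.0, 3.0])

# With this parameterization, the fitted midpoint x0 is the modulation level at
# which understandability crosses 0.5 -- one way to operationalize the MUM
# threshold under a majority criterion.
print(f"Estimated MUM threshold (modulation level): {x0:.2f}")
```

In this toy setup, the same fit could be repeated per strategy or per model to support the sensitivity analyses mentioned in the abstract.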