Interpretable Stylistic Variation in Human and LLM Writing Across Genres, Models, and Decoding Strategies
2026-04-15 • Computation and Language
AI summary
The authors studied how human-written text differs in style from text generated by 11 large language models (LLMs) across multiple genres and generation settings. They found that these stylistic differences depend mostly on which model is used and on the genre of the text, rather than on the decoding strategy or the prompt. Additionally, chat-tuned variants of the models tend to cluster together stylistically. This work clarifies what shapes the style of machine-generated text, which could support more deliberate and responsible use of these models.
Large Language Models (LLMs) · Stylistic Features · Genre · Decoding Strategies · Prompting · Lexicogrammatical Features · Biber's Features · Human vs Machine Text · Chat Models
Authors
Swati Rallapalli, Shannon Gallagher, Ronald Yurko, Tyler Brooks, Chuck Loughin, Michele Sezgin, Violet Turri
Abstract
Large Language Models (LLMs) are now capable of generating highly fluent, human-like text. They enable many applications but also raise concerns such as large-scale spam, phishing, and academic misuse. While much work has focused on detecting LLM-generated text, comparatively little has gone into understanding the stylistic differences between human-written and machine-generated text. In this work, we perform a large-scale analysis of stylistic variation across human-written text and outputs from 11 LLMs, spanning 8 genres and 4 decoding strategies, using Douglas Biber's set of lexicogrammatical and functional features. Our findings reveal insights that can guide intentional LLM usage. First, the key linguistic differentiators of LLM-generated text appear robust to generation conditions (e.g., prompts that nudge models toward human-like text, or the availability of human-written text whose style they can continue); second, genre exerts a stronger influence on stylistic features than the source itself; third, chat variants of the models generally cluster together in stylistic space; and finally, the model has a larger effect on style than the decoding strategy, with some exceptions. These results highlight the relative importance of model and genre over prompting and decoding strategies in shaping the stylistic behavior of machine-generated text.
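To make the kind of analysis described above concrete, the sketch below computes a few toy lexicogrammatical-style counts of the general sort found in Biber's feature set (e.g., nominalization rate, first-person pronoun rate). This is an illustrative stand-in only: the paper uses Biber's full feature inventory, typically extracted with a dedicated tagger, and the feature names and suffix heuristics here are my own simplifications, not the authors' pipeline.

```python
import re
from collections import Counter

def simple_style_features(text: str) -> dict:
    """Toy per-text stylistic counts, loosely inspired by Biber-style
    lexicogrammatical features. Purely illustrative, not the paper's method."""
    tokens = re.findall(r"[A-Za-z']+", text.lower())
    n = len(tokens)
    if n == 0:
        return {k: 0.0 for k in ("type_token_ratio", "nominalization_rate",
                                 "first_person_rate", "mean_word_length")}
    counts = Counter(tokens)
    return {
        # lexical diversity: distinct word types over total tokens
        "type_token_ratio": len(counts) / n,
        # rough nominalization proxy via common derivational suffixes
        "nominalization_rate": sum(
            1 for t in tokens if t.endswith(("tion", "ment", "ness", "ity"))) / n,
        # first-person pronouns, a classic marker of involved style
        "first_person_rate": sum(
            counts[p] for p in ("i", "we", "me", "us", "my", "our")) / n,
        # longer words tend to mark more informational, less involved prose
        "mean_word_length": sum(map(len, tokens)) / n,
    }
```

Feature vectors like these, computed per document, are what make it possible to ask whether texts separate in stylistic space by source model, by genre, or by decoding strategy.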