Text Style Transfer with Parameter-efficient LLM Finetuning and Round-trip Translation

2026-02-16

Computation and Language
AI summary

The authors developed a new way to change the style of text by fine-tuning large language models in a parameter-efficient way. They created training data by translating sentences into another language and back, which strips away style and lets the model learn from paired examples. Their method outperformed commonly used prompting approaches across several test domains. They also improved the system's stylistic consistency and its handling of names and terminology by adding a retrieval step.

Text Style Transfer, Large Language Models, Parameter-efficient Fine-tuning, Roundtrip Translation, Parallel Corpus, Zero-shot Prompting, Few-shot In-context Learning, BLEU Score, Retrieval-augmented Generation, Style Accuracy
Authors
Ruoxi Liu, Philipp Koehn
Abstract
This paper proposes a novel method for Text Style Transfer (TST) based on parameter-efficient fine-tuning of Large Language Models (LLMs). To address the scarcity of parallel corpora that map between styles, the study employs roundtrip translation to synthesize such parallel datasets from monolingual corpora. This approach creates 'neutralized' text devoid of stylistic attributes, establishing a shared input style at training time and inference time. Experimental results demonstrate that this method consistently outperforms zero-shot prompting and few-shot in-context learning (ICL), as measured by BLEU and style accuracy scores across four investigated domains. Furthermore, the integration of retrieval-augmented generation (RAG) for terminology and name knowledge enhances robustness and stylistic consistency.
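
The data-synthesis step can be illustrated with a short sketch. Below is a minimal example of round-trip translation for building a pseudo-parallel corpus, assuming English source text, German as the pivot language, and Helsinki-NLP Marian translation models from Hugging Face; the paper's actual pivot languages, models, and preprocessing are not given here and may differ.

```python
# Sketch: round-trip translation to build a pseudo-parallel corpus for style
# transfer. Styled text is translated to a pivot language and back, which
# tends to strip stylistic markers ("neutralized" text); each neutralized
# sentence is paired with the original styled sentence as the target.
# Assumptions (not from the paper): German pivot, Helsinki-NLP Marian models.

from transformers import pipeline

en_to_de = pipeline("translation", model="Helsinki-NLP/opus-mt-en-de")
de_to_en = pipeline("translation", model="Helsinki-NLP/opus-mt-de-en")


def neutralize(sentences, batch_size=16):
    """Round-trip each styled sentence through the pivot language."""
    pivot = en_to_de(sentences, batch_size=batch_size)
    back = de_to_en([p["translation_text"] for p in pivot], batch_size=batch_size)
    return [b["translation_text"] for b in back]


# Styled monolingual corpus (e.g., Shakespearean English). The neutralized
# round-trip output becomes the model input; the original text is the target.
styled = [
    "Thou art as wise as thou art beautiful.",
    "I prithee, speak no more of this matter.",
]
parallel = list(zip(neutralize(styled), styled))
for src, tgt in parallel:
    print(f"input : {src}\ntarget: {tgt}\n")
```

The resulting (neutralized, styled) pairs would then serve as input/target examples for parameter-efficient fine-tuning of the LLM, so that at inference time any plain input can be mapped into the target style.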