Transition-Matrix Regularization for Next Dialogue Act Prediction in Counselling Conversations

2026-04-20

Computation and Language, Artificial Intelligence
AI summary

The authors explored how to improve the prediction of the next dialogue act in conversations by using real dialogue flow patterns from data. They introduced a method that encourages the predicted dialogue acts to match these observed patterns. Testing on a German counselling dialogue dataset showed meaningful improvements, especially for models that initially performed worse. Their approach also worked well when tested on a different dataset in another language, suggesting it can help in various dialogue settings with limited data.

Next Dialogue Act Prediction, KL regularization, dialogue flow, macro-F1 score, pretrained encoders, cross-validation, dialogue taxonomy, transfer learning, fine-grained classification, data sparsity
Authors
Eric Rudolph, Philipp Steigerwald, Jens Albrecht
Abstract
This paper studies how empirical dialogue-flow statistics can be incorporated into Next Dialogue Act Prediction (NDAP). A KL regularization term is proposed that aligns predicted act distributions with corpus-derived transition patterns. Evaluated on a 60-class German counselling taxonomy using 5-fold cross-validation, the regularizer improves macro-F1 by 9–42% relative, depending on the encoder, and substantially improves dialogue-flow alignment. Cross-dataset validation on HOPE suggests that the improvements transfer across languages and counselling domains. Systematic ablations across pretrained encoders and architectures indicate that transition regularization provides consistent gains and disproportionately benefits weaker baseline models. The results suggest that lightweight discourse-flow priors complement pretrained encoders, especially in fine-grained, data-sparse dialogue tasks.
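The core idea described in the abstract can be sketched in a few lines: estimate a row-stochastic transition matrix P(next act | previous act) from training dialogues, then add a KL term that pulls each predicted distribution toward the transition row of the preceding act. This is a minimal illustration, not the paper's implementation; the function names, the smoothing constant, and the weight `lam` are assumptions, and the paper's exact loss formulation may differ.

```python
import numpy as np

def transition_matrix(dialogues, n_acts, smoothing=1e-3):
    # Count act-to-act transitions across all dialogues (with additive
    # smoothing so unseen transitions keep nonzero mass), then
    # row-normalize into an empirical P(next act | previous act).
    counts = np.full((n_acts, n_acts), smoothing)
    for acts in dialogues:
        for prev, nxt in zip(acts, acts[1:]):
            counts[prev, nxt] += 1.0
    return counts / counts.sum(axis=1, keepdims=True)

def kl_regularized_loss(probs, targets, prev_acts, trans, lam=0.5, eps=1e-9):
    # Cross-entropy on the gold next act, plus a KL term that aligns the
    # predicted distribution with the transition row of the previous act.
    ce = -np.log(probs[np.arange(len(targets)), targets] + eps).mean()
    prior = trans[prev_acts]  # one transition row per example
    kl = (probs * np.log((probs + eps) / (prior + eps))).sum(axis=1).mean()
    return ce + lam * kl

# Toy usage with 3 dialogue-act classes and two short dialogues.
dialogues = [[0, 1, 2, 1], [0, 2, 1]]
T = transition_matrix(dialogues, n_acts=3)
probs = np.array([[0.1, 0.8, 0.1],   # model's predicted next-act dists
                  [0.2, 0.2, 0.6]])
loss = kl_regularized_loss(probs, targets=np.array([1, 2]),
                           prev_acts=np.array([0, 1]), trans=T)
```

Setting `lam=0` recovers plain cross-entropy, so the regularization strength can be tuned as a single hyperparameter per encoder.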