CLSGen: A Dual-Head Fine-Tuning Framework for Joint Probabilistic Classification and Verbalized Explanation
2026-04-13 • Computation and Language
AI summary
The authors address a problem where large language models (LLMs) can't reliably give numbers representing how sure they are about their decisions, which makes them less useful for real-world tasks. Normally, fine-tuning these models to give probabilities causes them to forget how to explain their answers well. They created a new method called CLSGen that allows LLMs to both provide accurate probability estimates and keep their ability to explain decisions clearly. Tests showed their method works better for classification tasks and produces explanations that match the predicted labels and are easy to read.
Large Language Models, Fine-tuning, Probability Estimation, Binary Classification, Catastrophic Forgetting, Model Interpretability, AUROC, F1-score, Explanation Generation
Authors
WonJin Yoon, Kangyu Zhu, Ian Bulovic, Autumn Sehy, Yanjun Gao, Dmitriy Dligach, Majid Afshar, Timothy A. Miller
Abstract
With the recent progress of Large Language Models (LLMs), there is growing interest in applying these models to complex and challenging problems. Modern LLMs, capable of processing long contexts and generating verbalized explanations, offer significant potential for real-world applications. However, a critical hurdle in deploying LLMs for practical decision-making is their inability to provide reliable, quantitative probabilities. While task-specific fine-tuning of LLMs with traditional discriminative objectives (similar to encoder-only models) can yield probability estimates, it often leads to catastrophic forgetting and linguistic collapse. Consequently, the model loses its ability to generate explanations, severely undermining its interpretability and usability. To address this challenge, we propose CLSGen, a novel LLM fine-tuning framework designed for binary classification tasks. CLSGen combines a new model architecture, training methodology, and data construction strategy to enable robust probability estimation without sacrificing the model's inherent explanation-generation capabilities. Experimental results across multiple benchmark datasets demonstrate that models fine-tuned with CLSGen outperform existing baselines on classification metrics (AUROC and F1-score). Regarding explanations, the results show strong alignment between predicted labels and generated justifications, as well as high readability.
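The dual-head idea the abstract describes can be sketched in miniature: a shared backbone representation feeds both a binary classification head (sigmoid probability) and a language-modeling head (next-token distribution for explanations), trained with a weighted joint loss. This is a hypothetical illustration, not the authors' actual CLSGen architecture; all dimensions, weights, and the mixing weight `alpha` are made-up placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions for illustration only.
HIDDEN, VOCAB = 16, 32

# Shared backbone output: final hidden state of the last token.
h = rng.standard_normal(HIDDEN)

# Head 1: binary classification head -> a quantitative probability.
w_cls = rng.standard_normal(HIDDEN)
p_pos = 1.0 / (1.0 + np.exp(-(w_cls @ h)))  # sigmoid

# Head 2: language-modeling head -> next-token distribution,
# preserved so the model can still generate verbalized explanations.
W_lm = rng.standard_normal((VOCAB, HIDDEN))
logits = W_lm @ h
probs = np.exp(logits - logits.max())
probs /= probs.sum()  # softmax

# Joint training objective: weighted sum of the two losses.
y = 1          # gold binary label
tok = 7        # gold next-token id in the explanation
bce = -(y * np.log(p_pos) + (1 - y) * np.log(1.0 - p_pos))
ce = -np.log(probs[tok])
alpha = 0.5    # hypothetical mixing weight between the two objectives
loss = alpha * bce + (1.0 - alpha) * ce

print(p_pos, loss)
```

In a real fine-tuning setup both losses would be backpropagated through the shared backbone, which is what forces the model to retain its generative ability while learning calibrated classification.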