Calibrate-Then-Act: Cost-Aware Exploration in LLM Agents
2026-02-18 • Computation and Language • Artificial Intelligence
AI summary
The authors study how large language models (LLMs) can better decide when to keep exploring for more information and when to stop and commit to an answer, especially when exploration carries a cost, such as writing tests or running searches. They introduce a method called Calibrate-Then-Act (CTA) that gives LLMs explicit context about uncertainties and costs so they can make better-informed choices on tasks like coding and question answering. Their experiments show that this advantage persists even when both the baseline and the CTA agent are trained with reinforcement learning. Overall, the authors show that making the tradeoff between cost and uncertainty explicit leads to improved problem-solving strategies.
Large Language Models • Exploration-Exploitation Tradeoff • Sequential Decision-Making • Cost-Uncertainty Tradeoff • Reinforcement Learning • Information Retrieval • Coding Tasks • Calibrate-Then-Act Framework • Latent Environment State • Prior Knowledge
Authors
Wenxuan Ding, Nicholas Tomlin, Greg Durrett
Abstract
LLMs are increasingly being used for complex problems that are not necessarily resolved in a single response, but instead require interacting with an environment to acquire information. In these scenarios, LLMs must reason about inherent cost-uncertainty tradeoffs when deciding whether to stop exploring and commit to an answer. For instance, on a programming task, an LLM should test a generated code snippet if it is uncertain about that code's correctness; the cost of writing a test is nonzero, but typically lower than the cost of making a mistake. In this work, we show that we can induce LLMs to explicitly reason about balancing these cost-uncertainty tradeoffs and, as a result, explore their environment more effectively. We formalize multiple tasks, including information retrieval and coding, as sequential decision-making problems under uncertainty. Each problem has a latent environment state that can be reasoned about via a prior, which is passed to the LLM agent. We introduce a framework called Calibrate-Then-Act (CTA), in which we feed the LLM this additional context to enable it to act more optimally. This improvement is preserved even under RL training of both the baseline and CTA. Our results on information-seeking QA and on a simplified coding task show that making cost-benefit tradeoffs explicit with CTA can help agents discover more effective decision-making strategies.
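To make the tradeoff in the abstract concrete, here is a minimal sketch (not the authors' implementation; all function names, prompt wording, and numbers are hypothetical) of the expected-cost comparison behind the explore-or-commit decision, and of how prior and cost information might be serialized as extra prompt context in the CTA spirit:

```python
# Hypothetical sketch of a cost-aware explore-or-commit rule and of
# CTA-style context injection. Not the paper's code.

def should_explore(p_correct: float, cost_explore: float, cost_error: float) -> bool:
    """Explore (e.g., run a test) when its cost is lower than the
    expected cost of committing to a possibly wrong answer."""
    expected_cost_of_committing = (1.0 - p_correct) * cost_error
    return cost_explore < expected_cost_of_committing

def cta_context(p_correct: float, cost_explore: float, cost_error: float) -> str:
    """Render the prior and costs as additional context for the agent,
    so it can reason explicitly about the tradeoff before acting."""
    return (
        f"Estimated probability your current answer is correct: {p_correct:.2f}. "
        f"Cost of one more exploration step (e.g., a test): {cost_explore}. "
        f"Cost of committing to a wrong answer: {cost_error}. "
        "Decide whether to explore further or commit."
    )

# Example: 70% confident, testing costs 1, a wrong answer costs 10.
# Expected cost of committing = 0.3 * 10 = 3 > 1, so testing is worthwhile.
print(should_explore(0.7, 1.0, 10.0))   # True
print(cta_context(0.7, 1.0, 10.0))
```

Under a rule of this form, an agent keeps exploring exactly as long as the cost of the next exploration step stays below the expected cost of committing under its current belief.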