Process Reward Agents for Steering Knowledge-Intensive Reasoning

2026-04-10 | Artificial Intelligence

AI summary

The authors address the problem of making step-by-step reasoning more reliable in complex fields like medicine, where checking each step requires extensive external knowledge. They propose a method called Process Reward Agents (PRA) that gives ongoing feedback during reasoning, helping select better answers as they are being generated. Unlike previous methods that only evaluate completed reasoning traces after the fact, PRA works in real time without changing the original reasoning model and significantly improves accuracy on medical benchmarks. The approach works across model sizes and suggests that reasoning can be improved by attaching reward modules without retraining the main reasoning system.

knowledge-intensive reasoning, process reward models, retrieval-augmentation, step-wise rewards, search-based decoding, frozen policy, MedQA benchmark, dynamic inference, reasoning trace, model generalization
Authors
Jiwoong Sohn, Tomasz Sternal, Kenneth Styppa, Torsten Hoefler, Michael Moor
Abstract
Reasoning in knowledge-intensive domains remains challenging as intermediate steps are often not locally verifiable: unlike math or code, evaluating step correctness may require synthesizing clues across large external knowledge sources. As a result, subtle errors can propagate through reasoning traces, potentially never to be detected. Prior work has proposed process reward models (PRMs), including retrieval-augmented variants, but these methods operate post hoc, scoring completed trajectories, which prevents their integration into dynamic inference procedures. Here, we introduce Process Reward Agents (PRA), a test-time method for providing domain-grounded, online, step-wise rewards to a frozen policy. In contrast to prior retrieval-augmented PRMs, PRA enables search-based decoding to rank and prune candidate trajectories at every generation step. Experiments on multiple medical reasoning benchmarks demonstrate that PRA consistently outperforms strong baselines, achieving 80.8% accuracy on MedQA with Qwen3-4B, a new state of the art at the 4B scale. Importantly, PRA generalizes to unseen frozen policy models ranging from 0.5B to 8B parameters, improving their accuracy by up to 25.7% without any policy model updates. More broadly, PRA suggests a paradigm in which frozen reasoners are decoupled from domain-specific reward modules, allowing the deployment of new backbones in complex domains without retraining.
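The core mechanism described in the abstract, step-wise rewards from an external agent steering search-based decoding over a frozen policy, can be sketched as a reward-guided beam search. The sketch below is a minimal illustration under stated assumptions: `propose_steps` stands in for the frozen policy's candidate next steps, and `score_step` stands in for the process reward agent, which in the paper's setting would consult external domain knowledge online. Neither function reflects the authors' actual implementation.

```python
import heapq

def propose_steps(trace):
    """Stand-in for the frozen policy: propose candidate next
    reasoning steps extending the current partial trace."""
    return [trace + [f"step{len(trace)}-{i}"] for i in range(3)]

def score_step(trace):
    """Stand-in for the process reward agent: score the latest step
    of a partial trace. In PRA this scoring would be grounded in
    external domain knowledge (e.g., medical literature)."""
    # Toy heuristic: slightly prefer candidates ending in "-0".
    return -len(trace[-1]) + (1.0 if trace[-1].endswith("-0") else 0.0)

def reward_guided_decode(beam_width=2, max_steps=3):
    """Rank and prune candidate trajectories at every generation step
    using cumulative step-wise rewards; the policy is never updated."""
    beams = [([], 0.0)]  # (partial trace, cumulative reward)
    for _ in range(max_steps):
        candidates = []
        for trace, reward in beams:
            for new_trace in propose_steps(trace):
                candidates.append((new_trace, reward + score_step(new_trace)))
        # Keep only the top-scoring trajectories for the next step.
        beams = heapq.nlargest(beam_width, candidates, key=lambda c: c[1])
    # Return the highest-reward completed trace.
    return max(beams, key=lambda b: b[1])[0]
```

Because pruning happens at every step rather than on completed trajectories, low-reward branches are cut early, which is what distinguishes this online setup from post-hoc PRM scoring.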