Resources for Automated Evaluation of Assistive RAG Systems that Help Readers with News Trustworthiness Assessment

2026-02-27

Information Retrieval · Artificial Intelligence
AI summary

The authors organized a competition called the TREC 2025 DRAGUN Track to help create computer programs that support people in judging how trustworthy online news is. They set two tasks: one to come up with important questions to investigate news stories, and another to write short, fact-based reports about news articles. They had human experts create detailed guides to evaluate these tasks and then developed an automated system to judge new program outputs without human help. Their automated system closely matched human judgments, making it easier to test and improve tools that assist readers in evaluating news reliability.

TREC · DRAGUN Track · RAG systems · Question Generation · Report Generation · MS MARCO Corpus · trustworthiness assessment · automated evaluation · Kendall's tau · news misinformation
Authors
Dake Zhang, Mark D. Smucker, Charles L. A. Clarke
Abstract
Many readers today struggle to assess the trustworthiness of online news because reliable reporting coexists with misinformation. The TREC 2025 DRAGUN (Detection, Retrieval, and Augmented Generation for Understanding News) Track provided a venue for researchers to develop and evaluate assistive RAG systems that support readers' news trustworthiness assessment by producing reader-oriented, well-attributed reports. As the organizers of the DRAGUN track, we describe the resources that we have newly developed to allow for the reuse of the track's tasks. The track had two tasks: (Task 1) Question Generation, producing 10 ranked investigative questions; and (Task 2, the main task) Report Generation, producing a 250-word report grounded in the MS MARCO V2.1 Segmented Corpus. As part of the track's evaluation, we had TREC assessors create importance-weighted rubrics of questions with expected short answers for 30 different news articles. These rubrics represent the information that assessors believe is important for readers to assess an article's trustworthiness. The assessors then used their rubrics to manually judge the participating teams' submitted runs. To make these tasks and their rubrics reusable, we have created an automated process to judge runs that were not part of the original assessment. We show that our AutoJudge ranks existing runs well compared to the TREC human-assessed evaluation (Kendall's $\tau = 0.678$ for Task 1 and $\tau = 0.872$ for Task 2). These resources enable both the evaluation of RAG systems for assistive news trustworthiness assessment and, with the human evaluation as a benchmark, research on improving automated RAG evaluation.
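To make the evaluation setup concrete, the sketch below shows one way a rubric-based automated judge could score a generated report and how such scores can be validated against human rankings via Kendall's tau. It is a minimal illustration under stated assumptions, not the track's actual AutoJudge: the rubric layout, the `RubricItem`, `covers`, and `rubric_score` names, and the naive substring matcher are all hypothetical stand-ins (a real judge would use an LLM or trained answer matcher); only `scipy.stats.kendalltau` is a real library call.

```python
# Minimal sketch of a rubric-weighted scorer and its validation against
# human judgments. Assumptions: each rubric is a list of (question,
# expected short answer, importance weight) items, and a report "covers"
# an item if it answers that question.
from dataclasses import dataclass
from scipy.stats import kendalltau


@dataclass
class RubricItem:
    question: str
    expected_answer: str
    weight: float  # importance weight assigned by the assessor


def covers(report: str, item: RubricItem) -> bool:
    """Hypothetical answer-matching step (naive substring check for illustration)."""
    return item.expected_answer.lower() in report.lower()


def rubric_score(report: str, rubric: list[RubricItem]) -> float:
    """Importance-weighted fraction of rubric items the report covers."""
    total = sum(item.weight for item in rubric)
    hit = sum(item.weight for item in rubric if covers(report, item))
    return hit / total if total else 0.0


def rank_agreement(auto_scores: dict[str, float],
                   human_scores: dict[str, float]) -> float:
    """Kendall's tau between automated and human scores over the same runs."""
    runs = sorted(auto_scores)
    tau, _ = kendalltau([auto_scores[r] for r in runs],
                        [human_scores[r] for r in runs])
    return tau
```

In this framing, the reported tau values (0.678 for Task 1, 0.872 for Task 2) correspond to the output of `rank_agreement` when the automated scores are compared against the TREC human-assessed scores for the originally submitted runs.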