A Benchmark for Deep Information Synthesis
2026-02-24 • Artificial Intelligence
Artificial Intelligence • Computation and Language • Information Retrieval • Machine Learning
AI summary
The authors introduce DEEPSYNTH, a new benchmark that tests how well language-model-based agents handle complex, real-world tasks requiring them to gather and combine information from many sources. The benchmark contains 120 hard problems spanning multiple domains and countries, built by human annotators who collected official data and designed questions with verifiable answers. When 11 state-of-the-art agents were evaluated on these tasks, their scores were very low, showing how challenging the benchmark is. The authors find that current agents often hallucinate facts and struggle to reason over large amounts of information, and they argue that DEEPSYNTH is an important benchmark for guiding future AI research.
large language models, tool use, benchmark, information synthesis, structured reasoning, hallucinations, F1 score, data analysis, task evaluation, manual annotation
Authors
Debjit Paul, Daniel Murphy, Milan Gritta, Ronald Cardenas, Victor Prokhorov, Lena Sophia Bolliger, Aysim Toker, Roy Miles, Andreea-Maria Oncescu, Jasivan Alex Sivakumar, Philipp Borchert, Ismail Elezi, Meiru Zhang, Ka Yiu Lee, Guchun Zhang, Jun Wang, Gerasimos Lampouras
Abstract
Large language model (LLM)-based agents are increasingly used to solve complex tasks involving tool use, such as web browsing, code execution, and data analysis. However, current evaluation benchmarks do not adequately assess their ability to solve real-world tasks that require synthesizing information from multiple sources and inferring insights beyond simple fact retrieval. To address this, we introduce DEEPSYNTH, a novel benchmark designed to evaluate agents on realistic, time-consuming problems that combine information gathering, synthesis, and structured reasoning to produce insights. DEEPSYNTH contains 120 tasks collected across 7 domains, with data sources covering 67 countries. DEEPSYNTH is constructed using a multi-stage data collection pipeline that requires annotators to collect official data sources, create hypotheses, perform manual analysis, and design tasks with verifiable answers. When evaluated on DEEPSYNTH, 11 state-of-the-art LLMs and deep research agents achieve a maximum F1 score of 8.97 and a maximum LLM-judge score of 17.5, underscoring the difficulty of the benchmark. Our analysis reveals that current agents struggle with hallucinations and reasoning over large information spaces, highlighting DEEPSYNTH as a crucial benchmark for guiding future research.
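To give a sense of what an F1 score over benchmark answers can look like, here is a minimal sketch assuming a token-overlap F1 between a predicted answer and a gold answer, averaged over tasks and reported in percent. The `f1_score` function, the normalization, and the sample tasks are illustrative assumptions, not the exact matching scheme used by DEEPSYNTH.

```python
# Sketch: token-overlap F1 between predicted and gold answers (assumed scheme,
# not necessarily the DEEPSYNTH evaluation protocol).
from collections import Counter


def f1_score(prediction: str, gold: str) -> float:
    """F1 over bags of lowercased whitespace tokens."""
    pred_tokens = Counter(prediction.lower().split())
    gold_tokens = Counter(gold.lower().split())
    overlap = sum((pred_tokens & gold_tokens).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(pred_tokens.values())
    recall = overlap / sum(gold_tokens.values())
    return 2 * precision * recall / (precision + recall)


# Hypothetical (prediction, gold) pairs; the benchmark-level score is the mean
# per-task F1, expressed as a percentage.
tasks = [
    ("GDP grew by 3.2 percent in 2021", "GDP grew 3.2 percent in 2021"),
    ("no significant change observed", "exports rose sharply after 2019"),
]
scores = [100 * f1_score(pred, gold) for pred, gold in tasks]
print(f"average F1: {sum(scores) / len(scores):.2f}")
```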