General365: Benchmarking General Reasoning in Large Language Models Across Diverse and Challenging Tasks
2026-04-13 • Computation and Language • Artificial Intelligence
AI summary
The authors created General365, a benchmark for testing how well large language models (LLMs) can solve general reasoning problems without relying on expert knowledge. Questions are kept at a K-12 difficulty level to target general thinking skills rather than specialized knowledge. After testing 26 popular LLMs, they found that even the best model answered only about 63% of questions correctly, far below these models' performance on specialized tasks like math or physics. This suggests that current LLMs reason better within specific subjects than across broad, everyday situations. The authors hope General365 will help improve LLMs' ability to reason in more general, real-world settings.
Keywords: large language models, general reasoning, domain-specific reasoning, benchmark, K-12 knowledge level, nested logical branches, semantic interference, model evaluation, accuracy, real-world reasoning
Authors
Junlin Liu, Shengnan An, Shuang Zhou, Dan Ma, Shixiong Luo, Ying Xie, Yuan Zhang, Wenling Yuan, Yifan Zhou, Xiaoyu Li, Ziwen Wang, Xuezhi Cao, Xunliang Cai
Abstract
Contemporary large language models (LLMs) have demonstrated remarkable reasoning capabilities, particularly in specialized domains like mathematics and physics. However, their ability to generalize these reasoning skills to broader contexts, often termed general reasoning, remains under-explored. Unlike domain-specific reasoning, general reasoning relies less on expert knowledge but still presents formidable challenges, such as complex constraints, nested logical branches, and semantic interference. To address this gap, we introduce General365, a benchmark specifically designed to assess general reasoning in LLMs. By restricting background knowledge to a K-12 level, General365 explicitly decouples reasoning from specialized expertise. The benchmark comprises 365 seed problems and 1,095 variant problems across eight categories, ensuring both high difficulty and diversity. Evaluations across 26 leading LLMs reveal that even the top-performing model achieves only 62.8% accuracy, in stark contrast to the near-perfect performance of LLMs on math and physics benchmarks. These results suggest that the reasoning abilities of current LLMs are heavily domain-dependent, leaving significant room for improvement in broader applications. We envision General365 as a catalyst for advancing LLM reasoning beyond domain-specific tasks toward robust, general-purpose real-world scenarios. Code, Dataset, and Leaderboard: https://general365.github.io