VIRAASAT: Traversing Novel Paths for Indian Cultural Reasoning
2026-02-20 • Computation and Language • Information Retrieval
AI summary
The authors created VIRAASAT, a new question-answer dataset focused on Indian culture that involves multiple steps of reasoning using a large knowledge graph of cultural artifacts. They found that current large language models struggle with these complex cultural questions, especially when fine-tuned with typical reasoning methods. To improve this, the authors developed a new training method called Symbolic Chain-of-Manipulation, which helps models better navigate the knowledge graph structure. Their approach showed better performance than previous techniques. They also released the dataset to help future research on culturally aware AI.
Large Language Models • Multi-hop Question Answering • Knowledge Graph • Indian Culture • Chain-of-Thought • Symbolic Chain-of-Manipulation • Supervised Fine-Tuning • Cultural Reasoning • Dataset Generation • AI Benchmarking
Authors
Harshul Raj Surana, Arijit Maji, Aryan Vats, Akash Ghosh, Sriparna Saha, Amit Sheth
Abstract
Large Language Models (LLMs) have made significant progress in reasoning tasks across domains such as mathematics and coding. However, their performance deteriorates on tasks requiring rich socio-cultural knowledge and diverse local contexts, particularly those involving Indian culture. Existing cultural benchmarks are (i) manually crafted, (ii) limited to single-hop questions that test factual recall, and (iii) prohibitively costly to scale, leaving this deficiency largely unmeasured. To address this, we introduce VIRAASAT, a novel, semi-automated approach for generating culture-specific multi-hop Question-Answering datasets for Indian culture. VIRAASAT leverages a Knowledge Graph comprising more than 700 expert-curated cultural artifacts covering 13 key attributes of Indian culture (history, festivals, etc.). VIRAASAT spans all 28 states and 8 Union Territories, yielding more than 3,200 multi-hop questions that necessitate chained cultural reasoning. We evaluate current state-of-the-art (SOTA) LLMs on VIRAASAT and identify key reasoning limitations: fine-tuning on Chain-of-Thought (CoT) traces fails to ground and synthesize low-probability facts. To bridge this gap, we propose a novel framework named Symbolic Chain-of-Manipulation (SCoM). Adapting the Chain-of-Manipulation paradigm, we train the model to simulate atomic Knowledge Graph manipulations internally; SCoM thereby teaches the model to reliably traverse the topological structure of the graph. Experiments with Supervised Fine-Tuning (SFT) demonstrate that SCoM outperforms standard CoT baselines by up to 20%. We release the VIRAASAT dataset along with our findings, laying a strong foundation for building culturally aware reasoning models.
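To make the multi-hop construction concrete, the sketch below shows how a question requiring chained reasoning can be generated by traversing edges of a small knowledge graph. This is a minimal toy illustration, not the authors' VIRAASAT pipeline: the graph schema, entity names, relations, and question template are all invented for demonstration.

```python
# Toy cultural knowledge graph: entity -> {relation: target entity}.
# All entries and relation names here are illustrative assumptions.
KG = {
    "Pongal": {"celebrated_in": "Tamil Nadu", "type": "harvest festival"},
    "Tamil Nadu": {"classical_dance": "Bharatanatyam"},
    "Bharatanatyam": {"origin": "ancient Tamil tradition"},
}

def traverse(start, relations):
    """Follow a chain of relations from `start`; return (hops, final entity) or None."""
    hops, entity = [], start
    for rel in relations:
        nxt = KG.get(entity, {}).get(rel)
        if nxt is None:
            return None  # the requested path does not exist in the graph
        hops.append((entity, rel, nxt))
        entity = nxt
    return hops, entity

def make_question(start, relations):
    """Render a multi-hop path as one question whose intermediate entity is hidden."""
    result = traverse(start, relations)
    if result is None:
        return None
    _, answer = result
    # A deliberately simple template: answering it forces the model to
    # resolve each hop in order without ever seeing the intermediate node.
    chain = "' then '".join(relations)
    question = f"Starting from {start}, follow '{chain}': which entity do you reach?"
    return question, answer

q, a = make_question("Pongal", ["celebrated_in", "classical_dance"])
print(q)
print(a)  # -> Bharatanatyam
```

Because the answer lies at the end of a relation chain rather than a single fact lookup, a model must ground the intermediate entity (here, the state) to succeed, which is the property that distinguishes multi-hop items from single-hop factual recall.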