KNIGHT: Knowledge Graph-Driven Multiple-Choice Question Generation with Adaptive Hardness Calibration
2026-02-23 • Computation and Language
Computation and Language • Artificial Intelligence • Information Retrieval
AI summary
The authors created KNIGHT, a system that uses large language models and knowledge graphs to generate multiple-choice questions from text sources. KNIGHT builds a compact knowledge graph that summarizes important facts, which can be reused to make questions with different difficulty levels without starting from scratch each time. They tested KNIGHT using Wikipedia data to make question sets in subjects like History and Biology and found that the questions were clear, relevant, and reliable. Their approach makes creating and evaluating question datasets more efficient and cost-effective.
Large Language Models • Knowledge Graph • Multiple-Choice Questions • Retrieval-Augmented Generation • Wikipedia • Question Generation • Difficulty Control • Dataset Evaluation • Hallucination in AI • Token Efficiency
Authors
Mohammad Amanlou, Erfan Shafiee Moghaddam, Yasaman Amou Jafari, Mahdi Noori, Farhan Farsi, Behnam Bahrak
Abstract
Large language models (LLMs) have become instrumental in applications such as Retrieval-Augmented Generation (RAG), yet evaluating these systems remains bottlenecked by the time and cost of building specialized assessment datasets. We introduce KNIGHT, an LLM-based, knowledge-graph-driven framework for generating multiple-choice question (MCQ) datasets from external sources. KNIGHT constructs a topic-specific knowledge graph, a structured and parsimonious summary of entities and relations that can be reused to generate questions at instructor-controlled difficulty levels, including multi-hop questions, without repeatedly re-feeding the full source text. This knowledge graph acts as a compressed, reusable state, making question generation a cheap read over the graph. We instantiate KNIGHT on Wikipedia/Wikidata while keeping the framework domain- and ontology-agnostic. As a case study, KNIGHT produces six MCQ datasets in History, Biology, and Mathematics. We evaluate quality on five criteria: fluency, unambiguity (a single correct answer), topic relevance, option uniqueness, and answerability given the provided sources (as a proxy for hallucination). Results show that KNIGHT enables token- and cost-efficient generation from a reusable graph representation, achieves high quality across these criteria, and yields model rankings aligned with MMLU-style benchmarks, while supporting topic-specific and difficulty-controlled evaluation.
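The core idea in the abstract — build a compact graph of entities and relations once, then treat each question as a cheap read over that graph — can be illustrated with a minimal sketch. This is not the authors' implementation; the `TripleKG` class, the `one_hop_mcq` helper, and the sample triples are all hypothetical, standing in for whatever graph store and LLM-backed generation KNIGHT actually uses.

```python
import random

class TripleKG:
    """Toy knowledge graph: a list of (subject, relation, object) triples,
    built once from source text and then queried repeatedly."""

    def __init__(self):
        self.triples = []

    def add(self, subject, relation, obj):
        self.triples.append((subject, relation, obj))

    def objects(self, subject, relation):
        # Cheap read: no re-feeding of the full source text is needed.
        return [o for (s, r, o) in self.triples
                if s == subject and r == relation]

def one_hop_mcq(kg, subject, relation, distractor_pool, rng=None):
    """Assemble one MCQ: the correct answer comes from a graph triple,
    distractors from a pool of same-type entities (hypothetical scheme)."""
    rng = rng or random.Random(0)
    answers = kg.objects(subject, relation)
    if not answers:
        return None
    correct = answers[0]
    distractors = [d for d in distractor_pool if d != correct]
    options = rng.sample(distractors, 3) + [correct]
    rng.shuffle(options)
    return {"question": f"What is the {relation} of {subject}?",
            "options": options,
            "answer": correct}

# Build the graph once, then generate questions from it.
kg = TripleKG()
kg.add("Mitochondrion", "function", "ATP synthesis")
kg.add("Ribosome", "function", "protein synthesis")
pool = ["ATP synthesis", "protein synthesis",
        "lipid storage", "DNA repair", "photosynthesis"]
q = one_hop_mcq(kg, "Mitochondrion", "function", pool)
```

Difficulty control would then amount to choosing how the graph is read, e.g. chaining two `objects` lookups for a multi-hop question, rather than reprocessing the source documents.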