Characterising LLM-Generated Competency Questions: A Cross-Domain Empirical Study Using Open and Closed Models
2026-04-17 • Artificial Intelligence
AI summary
The authors study how large language models (LLMs) can automatically generate Competency Questions (CQs), natural language questions used to define requirements when building ontologies. They compare different LLMs, both open and closed models, to see how readable, relevant, and complex the generated questions are across various use cases. Their work introduces a way to measure these qualities systematically, finding that different models produce distinct types of questions depending on the scenario. This clarifies how generative AI can support requirement elicitation in ontology engineering.
Competency Questions · Ontology Engineering · Requirement Elicitation · Large Language Models (LLMs) · Generative AI · Readability · Structural Complexity · Domain Specialisation · Cross-domain Analysis · Natural Language Processing
Authors
Reham Alharbi, Valentina Tamma, Terry R. Payne, Jacopo de Berardinis
Abstract
Competency Questions (CQs) are a cornerstone of requirement elicitation in ontology engineering. CQs represent requirements as a set of natural language questions that an ontology should satisfy; they are traditionally modelled by ontology engineers together with domain experts as part of a human-centred, manual elicitation process. The use of Generative AI automates CQ creation at scale, thereby democratising the generation process, widening stakeholder engagement, and ultimately broadening access to ontology engineering. However, given the large and heterogeneous landscape of LLMs, which vary in dimensions such as parameter scale, task and domain specialisation, and accessibility, it is crucial to characterise and understand the intrinsic, observable properties of the CQs they produce (e.g., readability, structural complexity) through a systematic, cross-domain analysis. This paper introduces a set of quantitative measures for the systematic comparison of CQs across multiple dimensions. Using CQs generated from well-defined use cases and scenarios, we identify their salient properties, including readability, relevance with respect to the input text, and structural complexity of the generated questions. We conduct our experiments over a set of use cases and requirements using a range of LLMs, including both open (Kimi K2-1T, Llama 3.1-8B, Llama 3.2-3B) and closed models (Gemini 2.5 Pro, GPT-4.1). Our analysis demonstrates that LLMs exhibit distinct generation profiles shaped by the use case.
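To make the three dimensions named in the abstract concrete, the sketch below shows one way such intrinsic properties of a generated CQ could be approximated. It is a minimal, standard-library Python illustration under stated assumptions: the metric choices (Flesch Reading Ease for readability, lexical Jaccard overlap for relevance, surface cues for structural complexity) and all function names are illustrative assumptions, not the measures defined in the paper.

```python
"""Illustrative sketch (not the paper's actual measures): simple
approximations of readability, relevance to the input scenario,
and structural complexity for a generated Competency Question."""
import re


def _count_syllables(word: str) -> int:
    # Crude heuristic: count groups of consecutive vowels.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))


def flesch_reading_ease(question: str) -> float:
    # Flesch Reading Ease for a single-sentence question:
    # 206.835 - 1.015 * (words/sentences) - 84.6 * (syllables/words)
    words = re.findall(r"[A-Za-z']+", question)
    if not words:
        return 0.0
    syllables = sum(_count_syllables(w) for w in words)
    return 206.835 - 1.015 * len(words) - 84.6 * (syllables / len(words))


def lexical_relevance(question: str, scenario: str) -> float:
    # Jaccard overlap between question and use-case text, a cheap
    # stand-in for relevance with respect to the input text.
    q = set(re.findall(r"[a-z']+", question.lower()))
    s = set(re.findall(r"[a-z']+", scenario.lower()))
    return len(q & s) / len(q | s) if q and s else 0.0


def structural_complexity(question: str) -> dict:
    # Surface cues only: length, wh-words, and connectives that
    # suggest nested or compound constraints in the question.
    words = [w.lower() for w in re.findall(r"[A-Za-z']+", question)]
    wh = {"what", "which", "who", "whom", "whose", "when", "where", "why", "how"}
    connectives = {"and", "or", "that", "with", "of", "for", "between"}
    return {
        "num_words": len(words),
        "num_wh_words": sum(w in wh for w in words),
        "num_connectives": sum(w in connectives for w in words),
    }


if __name__ == "__main__":
    scenario = "An ontology describing musical works, their composers and performances."
    cq = "Which composers wrote works that were performed between 1900 and 1950?"
    print(round(flesch_reading_ease(cq), 1))
    print(round(lexical_relevance(cq, scenario), 2))
    print(structural_complexity(cq))
```

In practice, heuristics like these would be computed per model and per use case, allowing the resulting score distributions to be compared across open and closed LLMs; the paper's own measures should be consulted for the definitive formulation.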