AI summary
The authors studied how students ask questions when using educational chatbots powered by large language models (LLMs) in two learning contexts: self-study and assessed coursework. They analyzed over 6,000 student messages and found that most questions concerned procedural steps, especially when students were preparing for exams. They tested whether LLMs could reliably classify student questions and found that LLMs performed as well as, or better than, human raters. However, the existing ways of categorizing questions were limited and did not fully capture the complexity of real conversations with chatbots. The authors suggest using more detailed conversation analysis methods in future research to better understand how chatbots help students.
Large Language Models, Educational Chatbots, Student Questions, Procedural Questions, Formative Assessment, Summative Assessment, Inter-rater Reliability, Conversation Analysis, Discursive Psychology
Authors
Alexandra Neagu, Marcus Messer, Peter Johnson, Rhodri Nelson
Abstract
Providing scaffolding through educational chatbots built on Large Language Models (LLMs) carries potential risks and benefits that remain an open area of research. When students navigate impasses, they ask for help by formulating impasse-driven questions. In interactions with LLM chatbots, such questions shape the user prompts and drive the pedagogical effectiveness of the chatbot's response. This paper focuses on such student questions from two datasets of distinct learning contexts: formative self-study and summatively assessed coursework. We analysed 6,113 messages from both learning contexts, using 11 different LLMs and three human raters to classify student questions against four existing schemas. On the feasibility of using LLMs as raters, results showed moderate-to-good inter-rater reliability, with higher consistency than the human raters. The data showed that 'procedural' questions predominated in both learning contexts, but more so when students were preparing for summative assessment. These results provide a basis on which to use LLMs for classifying student questions. However, we identify clear limitations in both the ability to classify with schemas and the value of doing so: the schemas struggle to accommodate the semantic richness of composite prompts, offering only a partial understanding of the wider risks and benefits of chatbot integration. In future work, we recommend an analysis approach that captures the nuanced, multi-turn nature of conversation, for example by applying methods from conversation analysis in discursive psychology.
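To make the inter-rater reliability claim concrete, the sketch below shows one common chance-corrected agreement statistic for two raters, Cohen's kappa. This is an illustration only: the paper does not specify which reliability statistic was used, and the category labels here are hypothetical stand-ins for the schemas' question types.

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa: chance-corrected agreement between two raters
    over the same items (here, question-category labels)."""
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)
    # Observed agreement: fraction of items both raters labelled identically.
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected chance agreement from each rater's marginal label frequencies.
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    p_e = sum((freq_a[c] / n) * (freq_b[c] / n)
              for c in freq_a.keys() & freq_b.keys())
    return (p_o - p_e) / (1 - p_e)

# Hypothetical labels from an LLM rater and a human rater:
llm   = ["procedural", "conceptual", "procedural", "factual"]
human = ["procedural", "conceptual", "factual",    "factual"]
print(round(cohens_kappa(llm, human), 3))  # prints 0.636
```

With more than two raters (11 LLMs and three humans, as in the study), a multi-rater statistic such as Fleiss' kappa or Krippendorff's alpha would be the natural extension.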