Codesigning Ripplet: an LLM-Assisted Assessment Authoring System Grounded in a Conceptual Model of Teachers' Workflows
2026-02-25 • Human-Computer Interaction
AI summary
The authors worked with 13 teachers over seven months to understand how teachers create tests and quizzes, finding that making assessments and figuring out what they need happen at the same time. They built a tool called Ripplet to help teachers easily create and reuse parts of assessments. Teachers using Ripplet made new kinds of tests, focused more on choosing good questions, and thought more about how good their tests were. When 15 more teachers tried Ripplet, they felt it was worth their time and that it helped improve their tests compared to what they usually do.
Keywords
assessment authoring, formative assessment, codesign process, educational technology, iterative design, assessment quality, teacher practices, web-based tool
Authors
Yuan Cui, Annabel Goldman, Jovy Zhou, Xiaolin Liu, Clarissa Shieh, Joshua Yao, Mia Cheng, Matthew Kay, Fumeng Yang
Abstract
Assessments are critical in education, but creating them can be difficult. To address this challenge in a grounded way, we partnered with 13 teachers in a seven-month codesign process. We developed a conceptual model that characterizes the iterative dual process where teachers develop assessments while simultaneously refining requirements. To enact this model in practice, we built Ripplet, a web-based tool with multilevel reusable interactions to support assessment authoring. The extended codesign revealed that Ripplet enabled teachers to create formative assessments they would not have otherwise made, shifted their practices from generation to curation, and helped them reflect more on assessment quality. In a user study with 15 additional teachers, compared to their current practices, teachers felt the results were more worth their effort and that assessment quality improved.