Code for All: Educational Applications of the "Vibe Coding" Hackathon in Programming Education across All Skill Levels
2026-04-24 • Software Engineering
AI summary
The authors studied how people learn programming using "vibe coding," where users tell an AI what they want and it writes the code. They ran a month-long online hackathon with participants ranging from beginners to experts, divided into three levels of project complexity. Participants had to rely entirely on AI-generated code without manually editing it. The authors evaluated the projects and surveyed participants to understand how learning happens with this approach, how it changes as tasks get harder, and how the no-edit rule affects problem-solving. Their work helps show how AI tools might be used in programming education and competitions.
large language models, vibe coding, AI-assisted programming, prompt engineering, frontend development, backend development, web application deployment, educational technology, hackathon, code readability
Authors
Ashley J. Chen, Yijia Cao, Minghao Shao, Ramesh Karri, Muhammad Shafique
Abstract
The emergence of large language models has enabled vibe coding, a natural language approach to programming in which users describe intent and AI generates or revises code, potentially broadening access to programming while preserving meaningful learning outcomes. We investigate its educational value through a month-long online hackathon that welcomed participants from multiple countries, ranging from complete beginners to experienced developers. The hackathon offered three tracks with increasing technical demands. Spark emphasized basic frontend functionality and dynamic features such as buttons, forms, and API calls. Build required backend or database integration. Launch targeted production-ready web applications, including deployment. Participants were required to develop projects using only LLM-generated code without manual edits and submitted complete chat histories, source code, demo videos, and functionality reports. We assessed educational effectiveness with a mixed-methods design that combined standardized project evaluations across functionality, user interface and user experience design, impact, prompt quality, and code readability with post-hackathon surveys of perceived learning outcomes and thematic analysis of open-ended feedback. Our findings describe how participants with different backgrounds engage with vibe coding as task complexity increases, how the no-manual-editing constraint shapes prompting and debugging practices, and what these patterns imply for integrating AI-assisted development into programming education and competitive learning environments.