Learning to Present: Inverse Specification Rewards for Agentic Slide Generation
2026-03-17 • Artificial Intelligence
AI summary
The authors created a system where AI models learn to make professional slide presentations by researching topics and designing slides using reinforcement learning. They developed a special reward system that checks if slides look good, follow the structure, have quality content, and match their intended purpose by having another AI guess the original instructions. Using this method, they improved a smaller AI model to nearly match the performance of a much larger one. They also found that how well the AI follows instructions and uses tools matters more than just its size. The authors provide an open-source dataset and code for others to explore.
reinforcement learning · large language models · presentation generation · reward system · inverse specification reward · Qwen2.5-Coder-7B · GRPO · instruction adherence · tool use · multi-turn rollout
Authors
Karthik Ragunath Ananda Kumar, Subrahmanyam Arunachalam
Abstract
Automated presentation generation remains a challenging task requiring coherent content creation, visual design, and audience-aware communication. This work proposes an OpenEnv-compatible reinforcement learning environment where LLM agents learn to research topics, plan content, and generate professional HTML slide presentations through tool use. We introduce a multi-component reward system combining structural validation, render quality assessment, LLM-based aesthetic scoring, content quality metrics, and an inverse specification reward that measures how faithfully generated slides convey their intended purpose. The inverse specification reward, an "inverse task" in which an LLM attempts to recover the original specification from the generated slides, provides a holistic quality signal. Our approach fine-tunes Qwen2.5-Coder-7B via GRPO, training only 0.5% of parameters on prompts derived from expert demonstrations collected using Claude Opus 4.6. Experiments on 48 diverse business briefs across six models demonstrate that our fine-tuned 7B model achieves 91.2% of Claude Opus 4.6's quality while improving 33.1% over the base model. The six-model comparison reveals that instruction adherence and tool-use compliance, rather than raw parameter count, determine agentic task performance. We contribute SlideRL, an open-source dataset of 288 multi-turn rollout trajectories across all six models.
Dataset: https://huggingface.co/datasets/KarthikRagunathAnandaKumar/sliderl-multi-turn-rollouts
Code: https://github.com/pushing-the-frontier/slide-forge-llm
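To make the reward structure concrete, here is a minimal sketch of how the abstract's multi-component reward might be combined. The component weights and the token-overlap similarity used for the inverse specification reward are illustrative assumptions, not the authors' actual implementation (the paper's inverse task has an LLM reconstruct the original brief from the slides, which would typically be scored by an LLM judge rather than word overlap).

```python
def inverse_spec_reward(original_spec: str, recovered_spec: str) -> float:
    """Score how closely a recovered specification matches the original brief.

    Stand-in metric: token-level Jaccard similarity. This is an assumption --
    the paper's system compares the original spec against what an LLM
    recovers from the rendered slides.
    """
    a = set(original_spec.lower().split())
    b = set(recovered_spec.lower().split())
    return len(a & b) / len(a | b) if a | b else 0.0


def total_reward(structure: float, render: float, aesthetics: float,
                 content: float, inverse_spec: float) -> float:
    """Weighted sum of the five reward components named in the abstract.

    Uniform weights are hypothetical; the paper does not publish its
    weighting in the abstract.
    """
    components = [structure, render, aesthetics, content, inverse_spec]
    weights = [0.2, 0.2, 0.2, 0.2, 0.2]
    return sum(w * c for w, c in zip(weights, components))


# Example: slides that convey their brief well earn a high inverse-spec signal.
brief = "quarterly sales review for the board"
recovered = "a quarterly sales review aimed at the board"
score = inverse_spec_reward(brief, recovered)
```

Each component would be normalized to [0, 1] before weighting, so the total reward stays in [0, 1] as well.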