Wildfire Spread Scenarios: Increasing Sample Diversity of Segmentation Diffusion Models with Training-Free Methods
2026-03-20 • Computer Vision and Pattern Recognition
AI summary
The authors study how to better predict multiple possible outcomes in uncertain situations like wildfire spread by using diffusion models. They point out that simply sampling many times is slow and inefficient, so they test several methods that encourage more diverse and meaningful predictions without extra training. They adapted existing techniques and created a new one that clusters outputs, showing these methods improve prediction quality on medical, city, and wildfire datasets. Their work suggests it’s possible to get varied results from diffusion models more efficiently.
diffusion models, segmentation, sampling methods, multi-modal distributions, wildfire spread simulation, particle guidance, SPELL, LIDC dataset, Cityscapes dataset, diversity in predictions
Authors
Sebastian Gerard, Josephine Sullivan
Abstract
Predicting future states in uncertain environments, such as wildfire spread, medical diagnosis, or autonomous driving, requires models that can consider multiple plausible outcomes. While diffusion models can effectively learn such multi-modal distributions, naively sampling from these models is computationally inefficient, potentially requiring hundreds of samples to find low-probability modes that may still be operationally relevant. In this work, we address the challenge of sample-efficient ambiguous segmentation by evaluating several training-free sampling methods that encourage diverse predictions. We adapt two techniques, particle guidance and SPELL, originally designed for the generation of diverse natural images, to discrete segmentation tasks, and additionally propose a simple clustering-based technique. We validate these approaches on the LIDC medical dataset, a modified version of the Cityscapes dataset, and MMFire, a new simulation-based wildfire spread dataset introduced in this paper. Compared to naive sampling, these approaches increase the HM IoU* metric by up to 7.5% on MMFire and 16.4% on Cityscapes, demonstrating that training-free methods can be used to efficiently increase the sample diversity of segmentation diffusion models with little cost to image quality and runtime. Code and dataset: https://github.com/SebastianGer/wildfire-spread-scenarios
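To make the "clustering-based technique" concrete, the sketch below shows one plausible (hypothetical, not the paper's exact algorithm) way such a selection step could work: oversample the diffusion model, flatten the sampled masks, cluster them with k-means (farthest-first initialisation for determinism), and keep the actual sample nearest each centroid as a diverse representative set. All function and variable names here are illustrative assumptions.

```python
import numpy as np

def diverse_subset(masks, k, iters=10):
    """Select k diverse representatives from N sampled segmentation masks.

    Illustrative sketch, not the paper's method: flatten the masks, run
    k-means with farthest-first initialisation, then return the index of
    the sample closest to each cluster centroid (the cluster medoid).
    """
    X = masks.reshape(len(masks), -1).astype(float)  # (N, H*W)

    # Farthest-first initialisation: deterministic and well spread out.
    centers = [X[0]]
    for _ in range(k - 1):
        d = np.min([((X - c) ** 2).sum(1) for c in centers], axis=0)
        centers.append(X[int(d.argmax())])
    centers = np.stack(centers)  # (k, H*W)

    # Standard Lloyd iterations.
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)  # (N, k)
        labels = d.argmin(1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(0)

    # Medoid per cluster: the real sample nearest its centroid.
    d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    reps = []
    for j in range(k):
        members = np.where(labels == j)[0]
        if len(members):
            reps.append(int(members[d[members, j].argmin()]))
    return reps
```

Returning medoids rather than centroids keeps each representative a valid hard segmentation mask instead of a blurry average, which matters for discrete labels.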