SCOPE: Scene-Contextualized Incremental Few-Shot 3D Segmentation

2026-03-06 · Computer Vision and Pattern Recognition · Machine Learning
AI summary

The authors study how to teach a 3D computer vision system to recognize new object categories from only a few examples without forgetting what it already learned. They notice that new objects often appear as unlabeled background in earlier training data. Their method, SCOPE, uses these background regions to create helpful "prototype" examples that improve learning new categories without changing the main model or adding complexity. Tests on popular datasets show their approach performs better in recognizing new objects and keeps past knowledge intact.

Incremental Few-Shot Segmentation · 3D Point Clouds · Prototype Learning · Catastrophic Forgetting · Pseudo-Instance · Class-Agnostic Segmentation · ScanNet · S3DIS · Mean IoU · Novel Class Learning
Authors
Vishal Thengane, Zhaochong An, Tianjin Huang, Son Lam Phung, Abdesselam Bouzerdoum, Lu Yin, Na Zhao, Xiatian Zhu
Abstract
Incremental Few-Shot (IFS) segmentation aims to learn new categories over time from only a few annotations. Although widely studied in 2D, it remains underexplored for 3D point clouds. Existing methods suffer from catastrophic forgetting or fail to learn discriminative prototypes under sparse supervision, and often overlook a key cue: novel categories frequently appear as unlabelled background in base-training scenes. We introduce SCOPE (Scene-COntextualised Prototype Enrichment), a plug-and-play background-guided prototype enrichment framework that integrates with any prototype-based 3D segmentation method. After base training, a class-agnostic segmentation model extracts high-confidence pseudo-instances from background regions to build a prototype pool. When novel classes arrive with few labelled samples, relevant background prototypes are retrieved and fused with few-shot prototypes to form enriched representations, without retraining the backbone or adding parameters. Experiments on ScanNet and S3DIS show that SCOPE achieves state-of-the-art performance, improving novel-class IoU by up to 6.98% and 3.61%, and mean IoU by 2.25% and 1.70%, respectively, while maintaining low forgetting. Code is available at https://github.com/Surrey-UP-Lab/SCOPE.
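The two steps the abstract describes — building a prototype pool from high-confidence background pseudo-instances, then retrieving and fusing relevant pool entries with few-shot prototypes — can be sketched as follows. This is a minimal illustration only: the function names, the cosine-similarity retrieval, the top-k selection, and the convex-combination fusion rule with weight `alpha` are assumptions, not the paper's exact formulation.

```python
# Illustrative sketch of a background-guided prototype enrichment step.
# All names and the fusion rule are assumptions; see the SCOPE paper/repo
# for the actual method.
import numpy as np

def build_prototype_pool(pseudo_instance_feats, confidences, threshold=0.9):
    """Mean-pool features of high-confidence pseudo-instances into a pool.

    pseudo_instance_feats: list of (n_points_i, d) arrays, one per
    pseudo-instance extracted from background regions.
    confidences: per-instance confidence scores (threshold is assumed).
    """
    pool = [feats.mean(axis=0)
            for feats, conf in zip(pseudo_instance_feats, confidences)
            if conf >= threshold]
    return np.stack(pool)

def enrich_prototype(few_shot_proto, pool, top_k=3, alpha=0.7):
    """Fuse a few-shot prototype with its most similar pool prototypes."""
    # Cosine similarity between the few-shot prototype and each pool entry.
    p = few_shot_proto / np.linalg.norm(few_shot_proto)
    q = pool / np.linalg.norm(pool, axis=1, keepdims=True)
    sims = q @ p
    # Retrieve the top-k most similar background prototypes.
    top = np.argsort(sims)[-top_k:]
    background_proto = pool[top].mean(axis=0)
    # Convex combination keeps the few-shot prototype dominant
    # (alpha is an assumed hyperparameter).
    return alpha * few_shot_proto + (1 - alpha) * background_proto
```

Because enrichment only mixes feature vectors at inference time, it adds no trainable parameters and leaves the backbone untouched, matching the plug-and-play claim.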