Advancing Vision Transformer with Enhanced Spatial Priors

2026-04-20
Computer Vision and Pattern Recognition

AI summary

The authors identified that the Vision Transformer (ViT) struggles to capture spatial information and carries a high computational cost. They first introduced RMT, which adds spatial awareness through a Manhattan-distance decay and splits attention into horizontal and vertical parts. They then improved on this with EVT, which uses Euclidean distance for more faithful spatial modeling and replaces the decomposed attention with a simpler token-grouping scheme. Experiments show that EVT performs strongly on several vision tasks without extra training data. This work helps transformers better model spatial structure in images while remaining flexible.
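The core idea of the distance-based spatial prior can be illustrated with a small sketch. The exact decay form in RMT/EVT is not given in this page, so the snippet below assumes a common formulation: a bias mask `gamma ** d(i, j)` over the token grid, where `d` is the Manhattan distance (RMT) or the Euclidean distance (EVT), which down-weights attention between spatially distant tokens. Function and parameter names are illustrative, not the authors' API.

```python
import numpy as np

def decay_mask(h, w, gamma=0.9, metric="manhattan"):
    """Spatial decay mask for an h x w token grid (illustrative sketch).

    Returns a (h*w, h*w) matrix M with M[i, j] = gamma ** d(i, j),
    where d is the chosen distance between grid positions of tokens i and j.
    Such a mask can modulate attention scores so that nearby tokens
    interact more strongly than distant ones.
    """
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    coords = np.stack([ys.ravel(), xs.ravel()], axis=1).astype(float)  # (N, 2)
    diff = coords[:, None, :] - coords[None, :, :]                     # (N, N, 2)
    if metric == "manhattan":
        d = np.abs(diff).sum(-1)          # |dy| + |dx|, RMT-style prior
    elif metric == "euclidean":
        d = np.sqrt((diff ** 2).sum(-1))  # sqrt(dy^2 + dx^2), EVT-style prior
    else:
        raise ValueError(f"unknown metric: {metric}")
    return gamma ** d                     # decays toward 0 for distant pairs
```

Because the Euclidean distance between two grid points is never larger than the Manhattan distance, the Euclidean mask decays more gently, i.e. it preserves relatively more weight along diagonal directions, which is one plausible reading of "more accurate representation of spatial relationships".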

Keywords
Vision Transformer (ViT), Self-Attention, Spatial Priors, Manhattan Distance, Euclidean Distance, Image Classification, Object Detection, Instance Segmentation, Semantic Segmentation, Token Grouping
Authors
Qihang Fan, Huaibo Huang, Mingrui Chen, Hongmin Liu, Ran He
Abstract
In recent years, the Vision Transformer (ViT) has garnered significant attention within the computer vision community. However, the core component of ViT, Self-Attention, lacks explicit spatial priors and suffers from quadratic computational complexity, limiting its applicability. To address these issues, we proposed RMT, a robust general-purpose vision backbone with explicit spatial priors. RMT utilizes Manhattan distance decay to introduce spatial information and employs a horizontal and vertical decomposition of attention to model global information. Building on the strengths of RMT, the Euclidean enhanced Vision Transformer (EVT) is an expanded version that incorporates several key improvements. First, EVT uses a more reasonable Euclidean distance decay to enhance spatial modeling, representing spatial relationships more accurately than the Manhattan distance used in RMT. Second, EVT abandons the decomposed attention mechanism of RMT and instead adopts a simpler spatially-independent grouping approach, giving the model greater flexibility in controlling the number of tokens within each group. Through these modifications, EVT offers a more sophisticated and adaptable way to incorporate spatial priors into the Self-Attention mechanism, overcoming some of the limitations of RMT and further broadening its applicability to various computer vision tasks. Extensive experiments on Image Classification, Object Detection, Instance Segmentation, and Semantic Segmentation demonstrate that EVT exhibits exceptional performance. Without additional training data, EVT achieves 86.6% top-1 accuracy on ImageNet-1k.
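The "spatially-independent grouping approach" is described only at a high level here. A minimal sketch of one plausible reading is shown below: tokens are partitioned into fixed-size groups by index, independent of their spatial position, and plain softmax attention runs within each group, so the group size `g` directly controls the number of tokens per group. This is an illustrative toy, not EVT's actual mechanism; all names are hypothetical.

```python
import numpy as np

def grouped_attention(x, g):
    """Toy spatially-independent grouped attention (a sketch, not EVT's method).

    x: (N, C) array of token features, with N divisible by g.
    The N tokens are split into N // g groups of g consecutive tokens,
    and standard softmax self-attention runs independently inside each group,
    reducing cost from O(N^2) to O(N * g).
    """
    n, c = x.shape
    assert n % g == 0, "token count must be divisible by group size"
    groups = x.reshape(n // g, g, c)                          # (G, g, C)
    scores = groups @ groups.transpose(0, 2, 1) / np.sqrt(c)  # (G, g, g)
    scores -= scores.max(-1, keepdims=True)                   # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(-1, keepdims=True)                       # row-wise softmax
    return (attn @ groups).reshape(n, c)                      # back to (N, C)
```

With `g = 1` each token attends only to itself, so the operation reduces to the identity; larger `g` trades more global mixing for more computation, which is the flexibility the abstract attributes to the grouping scheme.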