Mixed Magnification Aggregation for Generalizable Region-Level Representations in Computational Pathology

2026-02-25

Computer Vision and Pattern Recognition
AI summary

The authors studied how computer models analyze large microscope images of tissue by breaking them into small pieces at one zoom level. They noticed that important details sometimes require looking at different zoom levels, just as a pathologist does. To fix this, they created a method that combines image pieces taken at different magnifications into one model. Their tests on cancer data showed better predictions when mixing zoom levels, suggesting that understanding context at multiple scales is helpful.

computational pathology, whole slide images, foundation models, magnification, image tiles, masked embedding modeling, biomarker prediction, multi-resolution features, cancer prediction, transfer learning
Authors
Eric Zimmermann, Julian Viret, Michal Zelechowski, James Brian Hall, Neil Tenenholtz, Adam Casson, George Shaikovski, Eugene Vorontsov, Siqi Liu, Kristen A Severson
Abstract
In recent years, a standard computational pathology workflow has emerged in which whole slide images are cropped into tiles, the tiles are processed using a foundation model, and task-specific models are built on the resulting representations. At least 15 different foundation models have been proposed, and the vast majority are trained exclusively on tiles at 20$\times$ magnification. However, it is well known that certain histologic features can only be discerned with larger context windows and require a pathologist to zoom in and out when analyzing a whole slide image. Furthermore, creating 224$\times$224 pixel crops at 20$\times$ leads to a large number of tiles per slide, as slides can be gigapixels in size. To more accurately capture multi-resolution features and investigate the possibility of reducing the number of representations per slide, we propose a region-level mixing encoder. Our approach jointly fuses image tile representations from a mixed-magnification foundation model using a masked embedding modeling pretraining step. We explore a design space for pretraining the proposed mixed-magnification region aggregators and evaluate our models on transfer to biomarker prediction tasks representing various cancer types. Results demonstrate cancer-dependent improvements in predictive performance, highlighting the importance of spatial context.
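The abstract's two core ingredients are (1) pooling tile embeddings from multiple magnifications into one token sequence and (2) a masked embedding modeling objective, where a subset of tokens is hidden and predicted from the rest. The sketch below is purely illustrative and not the authors' implementation: the embedding dimension, the one-hot magnification tag, and the masking ratio are assumptions chosen for a minimal runnable example (the paper's fusion is learned by an aggregator network).

```python
import numpy as np

rng = np.random.default_rng(0)

def build_region_tokens(tiles_by_mag, n_mags=4):
    """Concatenate tile embeddings from several magnifications into one
    token sequence, appending a one-hot magnification tag to each token.
    (Illustrative only; the paper fuses representations with a learned
    mixed-magnification aggregator.)"""
    tokens = []
    for i, mag in enumerate(sorted(tiles_by_mag)):
        emb = tiles_by_mag[mag]                        # (n_tiles, d)
        tag = np.zeros((emb.shape[0], n_mags))
        tag[:, i] = 1.0                                # mark the magnification
        tokens.append(np.concatenate([emb, tag], axis=1))
    return np.concatenate(tokens, axis=0)              # (total_tiles, d + n_mags)

def mask_tokens(tokens, mask_ratio=0.5, rng=rng):
    """Masked-embedding-modeling split: hide a random subset of tokens;
    the hidden tokens become the regression targets the aggregator must
    reconstruct from the visible ones during pretraining."""
    n = tokens.shape[0]
    n_mask = int(round(n * mask_ratio))
    idx = rng.permutation(n)
    return tokens[idx[n_mask:]], tokens[idx[:n_mask]]  # visible, targets

# Toy example: 4 tiles at 20x and 2 tiles at 5x, 8-dim embeddings.
tiles = {20: rng.normal(size=(4, 8)), 5: rng.normal(size=(2, 8))}
tokens = build_region_tokens(tiles)
visible, targets = mask_tokens(tokens, mask_ratio=0.5)
print(tokens.shape, visible.shape, targets.shape)  # (6, 12) (3, 12) (3, 12)
```

Mixing magnifications this way also shrinks the sequence length per region: each 5$\times$ tile covers the area of sixteen 20$\times$ tiles, which is the slide-level tile-count reduction the abstract alludes to.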