DySkew: Dynamic Data Redistribution for Skew-Resilient Snowpark UDF Execution

2026-04-14

Distributed, Parallel, and Cluster Computing; Databases
Snowflake, Snowpark, User-Defined Function (UDF), data skew, data partitioning, adaptive data distribution, runtime adaptation, elastic architecture, row size model, performance optimization
Authors
Chenwei Xie, Urjeet Shrestha, Corbin McElhanney, Lukas Lorimer, Gopal V, Zihao Ye, Yi Pan, Nic Crouch, Elliott Brossard, Florian Funke, Yuxiong He
Abstract
Snowflake revolutionized data warehousing with an elastic architecture that decouples compute and storage, enabling scalable solutions for diverse data analytics needs. Building on this foundation, Snowflake has advanced its AI Data Cloud vision by introducing Snowpark, a managed turnkey solution that supports data engineering and AI/ML workloads using Python and other programming languages. While Snowpark's User-Defined Function (UDF) execution model offers high throughput, it is highly vulnerable to performance degradation from data skew, where uneven data partitioning causes straggler tasks and unpredictable latency. The non-uniform computational cost of arbitrary user code further exacerbates this classic challenge. This paper presents DySkew, a novel, data-skew-aware execution strategy for Snowpark UDFs. DySkew is built upon Snowflake's new generalized skew-handling solution, an adaptive data distribution mechanism that utilizes per-link state machines, and addresses the unique challenges of user-defined logic with three goals: fine-grained per-row mitigation, dynamic runtime adaptation, and low-overhead, cost-aware redistribution. Specifically, for Snowpark, we introduce crucial optimizations, including an eager redistribution strategy and a Row Size Model that dynamically manages overhead for extremely large rows. This dynamic approach overcomes the limitations of the previous static round-robin method. We detail the architecture of this framework and showcase its effectiveness through performance evaluations and real-world case studies, demonstrating significant improvements in execution time and resource utilization for large-scale Snowpark UDF workloads.
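The straggler problem the abstract describes can be illustrated with a small sketch. This is not Snowflake's implementation; it is a hypothetical toy contrasting static round-robin partitioning with a greedy, cost-aware assignment that uses per-row size as a proxy for UDF processing cost, in the spirit of DySkew's Row Size Model. All function names here are invented for illustration.

```python
# Hypothetical sketch: how uneven row sizes create stragglers under static
# round-robin partitioning, and how a cost-aware assignment mitigates them.
# Row size is used as a stand-in for per-row UDF processing cost.

def round_robin(row_sizes, n_workers):
    """Static assignment: row i always goes to worker i % n_workers,
    regardless of how expensive the row is."""
    loads = [0] * n_workers
    for i, size in enumerate(row_sizes):
        loads[i % n_workers] += size
    return loads

def cost_aware(row_sizes, n_workers):
    """Greedy least-loaded assignment: each row goes to the worker with
    the smallest accumulated cost so far."""
    loads = [0] * n_workers
    for size in row_sizes:
        loads[loads.index(min(loads))] += size
    return loads

if __name__ == "__main__":
    # A skewed batch: one very large row followed by many small ones.
    rows = [10] + [1] * 9
    rr = round_robin(rows, 2)   # [14, 5] -> completion time 14
    ca = cost_aware(rows, 2)    # [10, 9] -> completion time 10
    # The slowest (most loaded) worker determines overall latency.
    print(max(rr), max(ca))
```

With two workers, round-robin sends the large row and half the small rows to the same worker (load 14 vs. 5), while the cost-aware policy balances loads to 10 vs. 9, shortening the critical path. DySkew's actual mechanism additionally adapts at runtime via per-link state machines, which this static sketch does not model.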