MIDST Challenge at SaTML 2025: Membership Inference over Diffusion-models-based Synthetic Tabular data

2026-03-19

Machine Learning
AI summary

The authors studied how well synthetic data created by diffusion models can protect privacy, particularly for complex tabular data. They focused on testing whether these models can resist membership inference attacks, which try to guess whether a person's data was used to train the model that produced the synthetic dataset. To do this, they explored existing attack methods and developed new ones tailored to diffusion models working with various types of tabular data. Their work helps measure how safe synthetic data generated by these models is from a privacy standpoint.

synthetic data, diffusion models, data anonymization, privacy-preserving, tabular data, membership inference attacks, generative models, black-box attacks, white-box attacks, multi-relational tables
Authors
Masoumeh Shafieinejad, Xi He, Mahshid Alinoori, John Jewell, Sana Ayromlou, Wei Pang, Veronica Chatrath, Garui Sharma, Deval Pandya
Abstract
Synthetic data is often perceived as a silver-bullet solution for data anonymization and privacy-preserving data publishing. Drawn from generative models such as diffusion models, synthetic data is expected to preserve the statistical properties of the original dataset while remaining resilient to privacy attacks. Recent diffusion models have proven effective on a wide range of data types, but their privacy resilience, particularly for tabular formats, remains largely unexplored. The MIDST challenge sought a quantitative evaluation of the privacy gain of synthetic tabular data generated by diffusion models, with a specific focus on its resistance to membership inference attacks (MIAs). Given the heterogeneity and complexity of tabular data, multiple target models were explored for MIAs, including diffusion models for single tables of mixed data types and for multi-relational tables with interconnected constraints. As a key outcome, MIDST inspired the development of novel black-box and white-box MIAs tailored to these target diffusion models, enabling a comprehensive evaluation of their privacy efficacy. The MIDST GitHub repository is available at https://github.com/VectorInstitute/MIDST.
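To make the threat model concrete, a common black-box MIA baseline against synthetic data is the distance-to-closest-record (DCR) attack: a candidate record that lies unusually close to some synthetic row is more likely to have been in the training set. The sketch below is purely illustrative and is not one of the challenge's attacks; the function names and the fixed-threshold decision rule are assumptions for this example.

```python
import numpy as np

def dcr_membership_scores(candidates: np.ndarray, synthetic: np.ndarray) -> np.ndarray:
    """For each candidate row, return the Euclidean distance to its
    nearest synthetic row. A small distance suggests the generator may
    have memorized the record (a possible training-set member)."""
    # Pairwise differences: shape (n_candidates, n_synthetic, n_features).
    diffs = candidates[:, None, :] - synthetic[None, :, :]
    dists = np.sqrt((diffs ** 2).sum(axis=-1))
    # Distance to the closest synthetic record for each candidate.
    return dists.min(axis=1)

def predict_members(candidates: np.ndarray, synthetic: np.ndarray,
                    threshold: float) -> np.ndarray:
    """Flag candidates whose DCR falls below a chosen threshold as members.
    The threshold is a hypothetical tuning parameter; in practice it would
    be calibrated on records known to be outside the training set."""
    return dcr_membership_scores(candidates, synthetic) < threshold

# Toy usage: the first candidate coincides with a synthetic row, the
# second is far from every synthetic row.
synthetic = np.array([[0.0, 0.0], [1.0, 1.0]])
candidates = np.array([[0.0, 0.0], [5.0, 5.0]])
print(predict_members(candidates, synthetic, threshold=0.5))
```

Real tabular MIAs must additionally handle mixed data types (e.g. encoding categorical columns before computing distances), which this numeric-only sketch omits.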