TableDiffusion is a deep learning algorithm I developed for training diffusion models on tabular data under differential privacy guarantees.
The goal is to enable the synthesis of data that maintains the statistical properties of the original dataset while ensuring the privacy of individuals' information. This is an extension of my Master's thesis work. The most notable model from my research is TableDiffusion, the first differentially-private diffusion model for tabular data, which outperformed the pre-existing GAN-based approaches.
Why synthesise tabular data?
ML is accelerating progress across fields and industries, but relies on accessible and high-quality training data. Some of the most important datasets are found in biomedical and financial domains in the form of spreadsheets and relational databases. But this tabular data is often sensitive in nature and made inaccessible by privacy regulations like GDPR, creating a major bottleneck to scientific research and real-world applications of AI.
Synthetic data generation is an emerging field that offers the potential to unlock sensitive data: by training ML models to represent the statistical structure of the dataset and synthesise new samples, without compromising individual privacy.
What’s the catch?
Generative models tend to memorise and regurgitate training data, which undermines the privacy goal.
To remedy this, researchers have incorporated the mathematical framework of Differential Privacy (DP) into the training process of neural networks, limiting memorisation and enforcing provable privacy guarantees. But this creates a tradeoff between the quality and privacy of the resulting data.
The GAN is dead, long live the diffusion model
Generative Adversarial Networks (GANs) were the leading paradigm for synthesising tabular data under Differential Privacy, but suffer from unstable adversarial training and mode collapse. These effects are compounded by the added privacy constraints. The tabular data modality exacerbates this further, as mixed data types, non-Gaussian distributions, high cardinality, and sparsity are all challenging for deep neural networks to represent and synthesise. Yet, it is precisely tabular data that is most likely to be sensitive and valuable to solving real-world problems.
My contributions
For my Master's thesis, I optimised the quality-privacy tradeoff of generative models, producing higher-quality tabular datasets with the same privacy guarantees.
- I first developed novel end-to-end models that leverage attention mechanisms to learn reversible representations of tabular data. Whilst effective at learning dense embeddings, the increased model size rapidly depletes the privacy budget.
- Next, I introduced TableDiffusion, the first differentially-private diffusion model for tabular data synthesis.
Awesome results
My experiments showed that TableDiffusion produces higher-fidelity synthetic datasets, avoids the mode collapse problem, and achieves state-of-the-art performance on privatised tabular data synthesis. By implementing TableDiffusion to predict the added noise, we enabled it to bypass the challenges of reconstructing mixed-type tabular data. Overall, the diffusion paradigm proves vastly more data- and privacy-efficient than the adversarial paradigm, due to augmented reuse of each data batch and a smoother iterative training process.
But why diffusion models?
Unlike traditional generative models (GANs and VAEs) that learn the sampling function end-to-end, DMs define the sampling function through a sequential denoising process, which divides the task into many smaller and simpler steps, resulting in smoother and more stable training and simpler neural networks. But diffusion models remain unexplored in the domain of tabular data.
Notably, DMs can be implemented to predict the added Gaussian noise instead of directly denoising the data. Outputting standard Gaussian noise is much easier for neural networks than outputting sparse, heterogeneous data with varying non-Gaussian distributions. This, along with the training robustness and parameter efficiency of diffusion models, makes them an ideal candidate for differentially-private tabular dataset synthesis.
I based my tabular diffusion models on the widely-validated and flexible formulation of DDPMs (Denoising Diffusion Probabilistic Models), which is built on two Markov chains:
- a forward chain that perturbs data to noise, and
- a reverse chain that converts noise back to data.
Forward process
In the forward process, we start with a batch of real data $\mathbf{x}$ and iteratively add increasing amounts of noise according to a scheduler. The simplest version of this would be sampling standard Gaussian noise $\boldsymbol{\xi} \sim \mathcal{N}(\boldsymbol{0}, \boldsymbol{I})$ and scaling it by $\sqrt{\beta_t}$ according to a linear schedule $\beta_t = \frac{t}{T}$, where $T$ is the total number of time steps and $t$ is the current step. In this case, the forward diffusion process can be expressed as:
$$\mathbf{x}_{t+1} = \mathbf{x}_{t} + \boldsymbol{z}_t$$where $\mathbf{x}_t$ is the batch of data at time step $t$ and $\boldsymbol{z}_t = \sqrt{\beta_t} \, \boldsymbol{\xi}$ is the scaled Gaussian noise.
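As a minimal sketch of this simple forward process (the `forward_diffusion` helper is illustrative, not the released code):

```python
import torch

def forward_diffusion(x, T):
    """Iteratively noise a data batch over T steps with a linear schedule.

    Returns [x_0, x_1, ..., x_T], where each step adds Gaussian noise
    scaled by sqrt(beta_t) with beta_t = t / T.
    """
    xs = [x]
    for t in range(1, T + 1):
        beta_t = t / T                              # linear schedule beta_t = t/T
        xi = torch.randn_like(x)                    # xi ~ N(0, I)
        xs.append(xs[-1] + (beta_t ** 0.5) * xi)    # x_{t+1} = x_t + z_t
    return xs
```

By the final step, the accumulated noise dominates and the batch is close to pure Gaussian noise.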
Reverse process
In the reverse process, we aim to denoise the data by going from the final time step $\mathbf{x}_T$, which is pure noise, back to the initial step $\mathbf{x}_0$, which approximates the real batch of data $\mathbf{x}$. We achieve this by training the parameters $\boldsymbol{\theta}$ of a neural network $M_{\boldsymbol{\theta}}$.
But we have three choices for designing the model's objective function: predict the original data from the noised data (denoising), predict the added noise from the noised data (noise prediction), or predict some score function of the noised data.
I implemented both denoising and noisepredicting DMs. The latter is more instructive, as it is unique to the diffusion paradigm. The model learns to predict the noise $\boldsymbol{z}_t$ added at each step:
$$\boldsymbol{z}_t \approx M_{\boldsymbol{\theta}}(\mathbf{x}_t)$$Given a prediction for $\boldsymbol{z}_t$, we can compute a prediction for $\mathbf{x}_{t-1}$:
$$\mathbf{x}_{t-1} \gets \mathbf{x}_t - M_{\boldsymbol{\theta}}(\mathbf{x}_t)$$By training the neural network in this way, we essentially learn to reverse the diffusion process and recover the original data from the noisy data. After training, the neural network can be used to generate new samples by applying the reverse process to a sample of pure noise.
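A minimal sketch of this reverse process, assuming any PyTorch callable `model` that maps a batch to a same-shaped noise prediction (the helper name and signature are illustrative, not the released API):

```python
import torch

@torch.no_grad()
def reverse_process(model, shape, T):
    """Start from pure noise x_T and repeatedly apply x_{t-1} = x_t - M(x_t)."""
    x = torch.randn(shape)          # x_T: pure Gaussian noise
    for _ in range(T):
        x = x - model(x)            # subtract the predicted noise at each step
    return x
```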
Applying diffusion to tabular data
The majority of work on diffusion models has been done in the context of image data. In those contexts, autoencoders are often used to learn a compressed latent representation of the images and diffusion is performed in the latent space. In applying the diffusion paradigm to sensitive tabular data, I departed from prior work in a few notable ways:
- Firstly, instead of compressing an image to a latent space using an encoder network, I used a custom preprocessing approach to convert the mixed-type tabular data into a homogeneous vector representation, and
- Secondly, I simplified both diffusion processes by using only a few diffusion steps and by omitting the addition of noise between diffusion steps in the reverse process.
These modifications adapt the diffusion paradigm to the lower-dimensional manifolds of tabular data and performed better in our initial experiments, compared to the standard diffusion approaches for the image modality. Many aspects of standard diffusion implementations are unnecessary when training under DP-SGD, as the gradient clipping and noising serve to regularise the model and prevent overfitting.
Tabular data is lower dimensional than images, and our diffusion models are much smaller and more efficient (because they are optimised for the privacypreserving context), so full sampling is fast. This allows us to simplify away many of these sophisticated techniques for our context.
The noise at each diffusion step $t$ is scaled by a variable $\beta_t$, which we draw from a cosine schedule:
$$\beta_t = \frac{1  \cos\left(\frac{\pi t}{T}\right)}{2},$$where $T$ is the total number of diffusion steps and $t \in \{1, 2, \ldots, T\}$ is the current step. The noise schedule, through the scaling variable $\beta_t$, determines the magnitude of noise that is added at each step. At each step $t$, the noised data $\tilde{\mathbf{x}}_t$ is generated by adding Gaussian noise $\boldsymbol{\xi} \sim \mathcal{N}(\boldsymbol{0}, \boldsymbol{I})$ to the real data $\mathbf{x}$. The noise is scaled by $\beta_t$ as determined by the noise schedule:
$$\tilde{\mathbf{x}}_t = \mathbf{x} + \sqrt{\beta_t} \cdot \boldsymbol{\xi}.$$Compared to earlier diffusion model work that used a linear noise schedule, adding noise more gradually through a cosine schedule reduces the number of forward process steps $T$ needed by an order of magnitude, whilst achieving similar sample quality.
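The cosine schedule itself is a one-liner; this sketch (hypothetical helper name) computes $\beta_t$ directly from the formula above:

```python
import math

def cosine_beta(t, T):
    """Cosine noise schedule: beta_t = (1 - cos(pi * t / T)) / 2, for t in 1..T."""
    return (1 - math.cos(math.pi * t / T)) / 2
```

Note how $\beta_t$ rises slowly for small $t$ and only approaches 1 at $t = T$, so the early steps perturb the data gently.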
We implemented two variants of the model, one which predicts the added noise and one which predicts the denoised data. This allowed for an ablation analysis of which diffusion properties affect synthetic data fidelity.
TableDiffusion model
The most notable model from my thesis research is TableDiffusion, the first differentially-private diffusion model for tabular data.
The training process operates under differentially-private stochastic gradient descent (DP-SGD), providing provable privacy guarantees, whilst allowing arbitrary quantities of data to be synthesised from the same distribution as the training data.
TableDiffusion algorithms for training and sampling
We can see that this algorithm is broadly similar to training a supervised neural network. We observe the same nested loops over epochs and batches of data (lines 2–3). Instead of sampling fixed-sized batches of data, we use Poisson sampling to get batches approximately $B$ in size (line 3). This sampling-based technique consumes the privacy budget slower than the deterministic batching commonly seen in gradient descent.
In the diffusion model, we also have an additional inner loop (line 4) over the steps $t \in \{1, \ldots, T\}$ in the forward diffusion process. This means that each batch of data is used $T$ times, augmented by varying levels of added noise. Unlike GAN-based and VAE-based models, this effectively gives the diffusion model $T$ times more data to learn from, without affecting the privacy budget. For each diffusion step $t$, the data is noised, the model predicts the noise, and the loss between the predicted and true noise is calculated (lines 9–11).
At the end of the diffusion loop, the losses for each step are aggregated (line 12). This is the point at which DPSGD is implemented, clipping the gradients of the aggregated loss and adding further noise (line 13).
Finally, the privatised gradients are used to update the model parameters with the Adam optimiser (line 14). This is repeated for each batch sample and each epoch, until either the epoch limit (line 2) or the target privacy value $\epsilon_{\text{target}}$ is reached (line 6).
Once trained, we can sample synthetic data $\hat{\mathbf{x}}$ by sampling pure noise from a standard multivariate Gaussian of the same shape as the transformed data, then iteratively feeding it through the model in the reverse diffusion process, subtracting the (scaled) predicted noise at each step $t$.
Training code example (with PyTorch)
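A simplified sketch of the training loop described above. This is not the released implementation: the helper names are illustrative, batch-level gradient clipping and noising stand in for true per-sample DP-SGD (which a real implementation would do with a library like Opacus), and a plain list of batches replaces Poisson sampling:

```python
import math

import torch
import torch.nn as nn

def train_table_diffusion(model, batches, T=5, epochs=1, lr=1e-3,
                          clip_norm=1.0, noise_mult=1.0):
    """Train a noise-predicting diffusion model (simplified sketch).

    NOTE: clipping/noising here is batch-level for brevity; real DP-SGD
    clips and noises *per-sample* gradients (e.g. via Opacus).
    """
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    mse = nn.MSELoss()
    for _ in range(epochs):
        for x in batches:
            opt.zero_grad()
            loss = 0.0
            for t in range(1, T + 1):            # reuse the batch at T noise levels
                beta_t = (1 - math.cos(math.pi * t / T)) / 2   # cosine schedule
                xi = torch.randn_like(x)         # xi ~ N(0, I)
                x_t = x + math.sqrt(beta_t) * xi # noised data
                loss = loss + mse(model(x_t), xi)  # predict the added noise
            loss.backward()                      # aggregated loss over all steps
            # Privatise gradients: clip, then add calibrated Gaussian noise
            torch.nn.utils.clip_grad_norm_(model.parameters(), clip_norm)
            for p in model.parameters():
                if p.grad is not None:
                    p.grad += (noise_mult * clip_norm / x.shape[0]) * torch.randn_like(p.grad)
            opt.step()                           # Adam update on privatised gradients
    return model
```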


Sampling code example (with PyTorch)
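A matching sampling sketch (again illustrative, not the released code): start from pure Gaussian noise of the same shape as the transformed data and subtract the scaled predicted noise at each reverse step:

```python
import math

import torch

@torch.no_grad()
def sample_synthetic(model, n_samples, n_features, T=5):
    """Generate synthetic rows by reversing the diffusion process (sketch)."""
    x = torch.randn(n_samples, n_features)          # x_T: pure Gaussian noise
    for t in range(T, 0, -1):                       # iterate t = T, T-1, ..., 1
        beta_t = (1 - math.cos(math.pi * t / T)) / 2  # cosine schedule
        x = x - math.sqrt(beta_t) * model(x)        # subtract scaled predicted noise
    return x
```

Because the trained model only sees noised data through its gradients, any number of synthetic rows can be drawn this way without further privacy cost.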


More information
 Read the full paper: Generating tabular datasets under differential privacy.
 See the source code: github.com/gianlucatruda/TableDiffusion
 Watch the YouTube explainer: TableDiffusion: Generative AI for private tabular data
Citing this work
Truda, Gianluca. “Generating tabular datasets under differential privacy.” arXiv preprint arXiv:2308.14784 (2023).

