Generative Models

Learning Generative Models

Taxonomy of Generative Models

Objectives

A good generative model will create a diverse set of outputs that resemble the training data without being exact copies.


Autoregressive Models - Explicit Density

Subparts

$$
\begin{aligned}
\mathbf{x} &= (x_1, x_2, x_3, \dots, x_T) \\
p(\mathbf{x}) &= p(x_1, x_2, x_3, \dots, x_T) \\
&= p(x_1) \, p(x_2 \mid x_1) \, p(x_3 \mid x_1, x_2) \cdots p(x_T \mid x_1, \dots, x_{T-1}) \\
&= \prod_{t=1}^{T} p(x_t \mid x_1, \dots, x_{t-1})
\end{aligned}
$$
$$
\mathbf{x}_t = f(\mathbf{x}_{t-1}, \mathbf{x}_{t-2}, \dots, \mathbf{x}_{t-k})
$$
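To make the factorization concrete, here is a minimal sketch (plain NumPy; the logistic conditional for $p(x_t = 1 \mid x_{t-k}, \dots, x_{t-1})$ is a hypothetical stand-in for a learned network) that evaluates $\log p(\mathbf{x})$ term by term via the chain rule:

```python
import numpy as np

def log_likelihood(x, weights, k):
    """Chain-rule log-likelihood of a binary sequence x under a toy
    order-k autoregressive model: p(x) = prod_t p(x_t | x_{t-k}..x_{t-1}).
    `weights` parameterizes a logistic conditional (a stand-in for a network)."""
    log_p = 0.0
    for t in range(len(x)):
        # Gather the last k values (zero-padded at the start of the sequence).
        context = np.array([x[t - i] if t - i >= 0 else 0 for i in range(1, k + 1)])
        p_one = 1.0 / (1.0 + np.exp(-weights @ context))  # p(x_t = 1 | context)
        log_p += np.log(p_one if x[t] == 1 else 1.0 - p_one)
    return log_p

x = np.array([1, 0, 1, 1, 0])
print(log_likelihood(x, weights=np.array([0.5, -0.3, 0.1]), k=3))
```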

Different inputs and outputs

$$
\hat{y}_t = f(\mathbf{x}_t, \mathbf{x}_{t-1}, \dots, \mathbf{x}_{t-k})
$$
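A sketch of this input/output setup (illustrative NumPy only; the linear $f$ and toy data are assumptions): fit $f$ over a sliding window of the last $k{+}1$ inputs by least squares and predict a separate target series.

```python
import numpy as np

def make_windows(x, y, k):
    """Stack sliding windows (x_{t-k}, ..., x_t) as rows, paired with targets y_t."""
    X = np.stack([x[t - k : t + 1] for t in range(k, len(x))])
    return X, y[k:]

rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = 0.8 * x + 0.3 * np.roll(x, 1)                # toy target: depends on current and previous input
X, targets = make_windows(x, y, k=2)
w, *_ = np.linalg.lstsq(X, targets, rcond=None)  # linear f fitted by least squares
y_hat = X @ w                                    # predictions \hat{y}_t = f(x_t, ..., x_{t-k})
print(np.mean((y_hat - targets) ** 2))           # mean squared prediction error
```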

Compare with RNNs

Multi-Layer Autoregressive Models

Classic Autoregressive Models

Pros and Cons

Practical


Variational Autoencoders

Latent Variable Models

$$
f_{\mathbf{w}} : \mathbf{x} \mapsto \mathbf{z} \qquad g_{\mathbf{w}} : \mathbf{z} \mapsto \hat{\mathbf{x}}
$$
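A minimal sketch of these two maps as a PyTorch autoencoder (the layer sizes and the MSE reconstruction loss are illustrative choices, not taken from the notes):

```python
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    """Deterministic latent variable model: f_w encodes x -> z, g_w decodes z -> x_hat."""
    def __init__(self, x_dim=784, z_dim=32):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(x_dim, 256), nn.ReLU(), nn.Linear(256, z_dim))
        self.g = nn.Sequential(nn.Linear(z_dim, 256), nn.ReLU(), nn.Linear(256, x_dim))

    def forward(self, x):
        z = self.f(x)       # f_w : x -> z
        x_hat = self.g(z)   # g_w : z -> x_hat
        return x_hat, z

model = Autoencoder()
x = torch.randn(8, 784)
x_hat, z = model(x)
loss = nn.functional.mse_loss(x_hat, x)  # reconstruction objective
```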

Autoencoders

Generative Models vs. Generative Latent Variable Models

Variational Autoencoders

So far, we have discussed deterministic latent variables. We will now take a probabilistic perspective on latent variable models with autoencoding properties.
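A minimal sketch of this probabilistic version (a Gaussian encoder with the reparameterization trick, trained on a negative-ELBO loss; the architecture sizes and the MSE reconstruction term are illustrative assumptions):

```python
import torch
import torch.nn as nn

class VAE(nn.Module):
    """Probabilistic autoencoder: the encoder outputs a distribution q(z|x) = N(mu, sigma^2)."""
    def __init__(self, x_dim=784, z_dim=32):
        super().__init__()
        self.enc = nn.Linear(x_dim, 256)
        self.mu, self.log_var = nn.Linear(256, z_dim), nn.Linear(256, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, 256), nn.ReLU(), nn.Linear(256, x_dim))

    def forward(self, x):
        h = torch.relu(self.enc(x))
        mu, log_var = self.mu(h), self.log_var(h)
        z = mu + torch.exp(0.5 * log_var) * torch.randn_like(mu)  # reparameterization trick
        return self.dec(z), mu, log_var

def neg_elbo(x, x_hat, mu, log_var):
    # Negative ELBO = reconstruction error + KL(q(z|x) || N(0, I)).
    rec = nn.functional.mse_loss(x_hat, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + log_var - mu.pow(2) - log_var.exp())
    return rec + kl

x = torch.randn(8, 784)
x_hat, mu, log_var = VAE()(x)
loss = neg_elbo(x, x_hat, mu, log_var)
```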



VAE+Autoregressive

Considering the pros and cons of both VAEs and autoregressive models, we would like to combine them and get the best of both worlds, as sketched below.
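One common way to combine them is a VAE whose decoder is autoregressive. The sketch below conditions an illustrative GRU decoder on the VAE latent through its initial hidden state; all names, sizes, and the teacher-forced setup are assumptions, not the notes' specific model.

```python
import torch
import torch.nn as nn

class ARDecoder(nn.Module):
    """Autoregressive decoder conditioned on a VAE latent z: each step
    predicts x_t from the previous observations and from z (via the initial state)."""
    def __init__(self, x_dim=1, z_dim=32, hidden=64):
        super().__init__()
        self.init = nn.Linear(z_dim, hidden)  # map latent z to the initial RNN state
        self.rnn = nn.GRU(x_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, x_dim)

    def forward(self, z, x_prev):
        h0 = torch.tanh(self.init(z)).unsqueeze(0)  # (1, batch, hidden)
        h, _ = self.rnn(x_prev, h0)                 # every step is conditioned on z through h0
        return self.out(h)                          # one-step-ahead predictions

z = torch.randn(8, 32)             # latent from a VAE encoder (not shown)
x_prev = torch.randn(8, 20, 1)     # teacher-forced previous observations
x_pred = ARDecoder()(z, x_prev)    # shape (8, 20, 1)
```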


Generative Adversarial Networks

Theory



Training

Theoretical Analysis

Empirical Analysis

Pros and Cons

Gradient tricks

Classic Models


Evaluation

With exact likelihood

Without exact likelihood


Normalizing Flows


Energy-based models


Score-based models


Distances between probability distributions

Evaluation of Generative Models