So far we have had little of the "neural" in our VI methods. Now it's time to fix that, as we're going to consider Variational Autoencoders (VAE), a paper by D. Kingma and M. Welling which made a lot of buzz in the ML community. It has two main contributions: a new approach (AEVB) to large-scale inference in non-conjugate models with continuous latent variables, and a probabilistic model of autoencoders as an example of this approach. We then discuss connections to Helmholtz machines, a predecessor of VAEs.

Auto-Encoding Variational Bayes

As noted in the introduction of the post, this approach, called Auto-Encoding Variational Bayes (AEVB), works only for some models with continuous latent variables. Recall from our discussion of Blackbox VI and Stochastic VI that we're interested in maximizing the ELBO:

$$
\mathcal{L}(\theta, \phi) = \mathbb{E}_{q_\phi(z|x)} \left[ \log p_\theta(x, z) - \log q_\phi(z|x) \right]
$$
It's not a problem to compute an estimate of the gradient of the ELBO w.r.t. the model parameters $\theta$, but estimating the gradient w.r.t. the approximation parameters $\phi$ is tricky, as these parameters influence the distribution the expectation is taken over, and, as we know from the post on Blackbox VI, the naive gradient estimator based on the score function exhibits high variance. It turns out that for some distributions we can make a change of variables: that is, for some distributions $q_\phi(z|x)$ the sample $z$ can be represented as a (differentiable) transformation $z = g_\phi(\varepsilon, x)$ of some auxiliary random variable $\varepsilon$ whose distribution does not depend on $\phi$. A well-known example of such a reparametrization is the Gaussian distribution: if $z \sim \mathcal{N}(\mu, \sigma^2)$ then $z$ can be represented as $z = \mu + \sigma \varepsilon$ for $\varepsilon \sim \mathcal{N}(0, 1)$. This transformation is called the reparametrization trick. After the reparametrization the ELBO becomes

$$
\mathcal{L}(\theta, \phi) = \mathbb{E}_{\varepsilon \sim p(\varepsilon)} \left[ \log p_\theta(x, g_\phi(\varepsilon, x)) - \log q_\phi(g_\phi(\varepsilon, x) \mid x) \right]
$$
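To make this concrete, here is a small sketch of the reparametrized estimator in PyTorch. The toy log-joint, the data point and all names are illustrative assumptions (not from the paper); the point is only that, once $z$ is written as $\mu + \sigma \varepsilon$, automatic differentiation gives gradients of a Monte Carlo ELBO estimate w.r.t. the variational parameters directly:

```python
import math

import torch

# Toy log-joint log p(x, z): a unit-variance Gaussian likelihood centred at z
# and a standard normal prior on z. Purely illustrative.
def log_joint(x, z):
    return -0.5 * (x - z) ** 2 - 0.5 * z ** 2

x = torch.tensor(1.5)

# Variational parameters of q(z | x) = N(mu, sigma^2).
mu = torch.tensor(0.0, requires_grad=True)
log_sigma = torch.tensor(0.0, requires_grad=True)

# Reparametrization: z = mu + sigma * eps with eps ~ N(0, 1), so z is a
# differentiable function of the variational parameters.
eps = torch.randn(1000)                    # auxiliary noise, parameter-free
z = mu + torch.exp(log_sigma) * eps

# Monte Carlo estimate of E_q[log p(x, z) - log q(z | x)].
log_q = (-0.5 * ((z - mu) / torch.exp(log_sigma)) ** 2
         - log_sigma - 0.5 * math.log(2 * math.pi))
elbo = (log_joint(x, z) - log_q).mean()

# Gradients w.r.t. mu and log_sigma flow through z itself.
elbo.backward()
print(mu.grad, log_sigma.grad)
```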
The reparametrized objective is much better, as we don't need to differentiate w.r.t. the distribution the expectation is taken over, which essentially puts the variational parameters $\phi$ into the same regime as the model parameters $\theta$. It's now sufficient to just take gradients of the ELBO's estimate and run any optimization algorithm like Adam. Oh, and if you wonder what Auto-Encoding in Auto-Encoding Variational Bayes means, there's an interesting interpretation of the ELBO in terms of autoencoding:

$$
\mathcal{L}(\theta, \phi) = \mathbb{E}_{q_\phi(z|x)} \left[ \log p_\theta(x \mid z) \right] - D_{KL}\left( q_\phi(z|x) \,\|\, p(z) \right)
$$
Here the first term can be treated as the expected reconstruction loss (of $x$ from the code $z$), while the second one is just a regularization term.

Variational Autoencoder

One particular application of the AEVB framework comes from using neural networks as the model $p_\theta(x|z)$ and the approximation $q_\phi(z|x)$. The model puts no requirements on $x$, which can be discrete or continuous (or mixed). $z$, however, has to be continuous. Moreover, we need to be able to apply the reparametrization trick. Therefore, in many practical applications $q_\phi(z|x)$ is set to be a Gaussian distribution $\mathcal{N}(z \mid \mu_\phi(x), \Sigma_\phi(x))$, where $\mu_\phi(x)$ and $\Sigma_\phi(x)$ are outputs of a neural network taking $x$ as input, and $\phi$ denotes the set of the neural network's weights — the parameters you optimize the ELBO with respect to (along with $\theta$). In order to make the reparametrization trick practical, you'd like to be able to compute $\Sigma_\phi(x)^{1/2}$ quickly: you don't want to actually compute this matrix square root, as it'd be too computationally expensive. Instead you might want to predict $\Sigma_\phi(x)^{1/2}$ by the neural network in the first place, or consider only diagonal covariance matrices (as it's done in the paper).

In the case of a Gaussian approximation and a Gaussian prior we can compute the KL-divergence analytically, see the formula at stats.stackexchange. This reduces the variance of the gradient estimator, though one can still train a VAE estimating the KL-divergence using Monte Carlo, just like the other part of the ELBO. We optimize both the model and the approximation by gradient ascent. This joint optimization pushes the approximation towards the model, and the model towards the approximation. This leads not only to efficient inference using the approximation, but the model is also encouraged to learn latent representations such that the true posterior is approximately factorial.

This model has generated a lot of buzz because it can be used as a generative model; essentially, a VAE is an autoencoder with a natural sampling procedure. Suppose you've trained the model and now want to generate new samples similar to those in the training set. To do so you first sample $z$ from the prior $p(z)$, and then generate $x$ using the model $p_\theta(x|z)$. Both operations are easy: the first one is sampling from some standard distribution (a Gaussian, for example), and the second one is just one feed-forward pass followed by another sampling from another standard distribution (a Bernoulli, for example, in case $x$ is a binary image). If you want to read more on Variational Autoencoders, I refer you to a great tutorial by Carl Doersch.
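What follows is a minimal sketch of such a model in PyTorch; the layer sizes, names and the random placeholder batch are illustrative assumptions, not details from the paper. The encoder predicts the mean and log-variance of a diagonal Gaussian $q_\phi(z|x)$, the KL term uses the closed-form expression mentioned above, and sampling works exactly as described: draw $z$ from the prior, then decode.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    """Minimal VAE with a diagonal-Gaussian encoder and a Bernoulli decoder."""

    def __init__(self, x_dim=784, h_dim=400, z_dim=20):
        super().__init__()
        # Recognition network q_phi(z | x): predicts mu and log sigma^2.
        self.enc = nn.Linear(x_dim, h_dim)
        self.enc_mu = nn.Linear(h_dim, z_dim)
        self.enc_logvar = nn.Linear(h_dim, z_dim)
        # Generative network p_theta(x | z): predicts Bernoulli logits.
        self.dec = nn.Linear(z_dim, h_dim)
        self.dec_logits = nn.Linear(h_dim, x_dim)

    def encode(self, x):
        h = torch.tanh(self.enc(x))
        return self.enc_mu(h), self.enc_logvar(h)

    def decode(self, z):
        return self.dec_logits(torch.tanh(self.dec(z)))

    def forward(self, x):
        mu, logvar = self.encode(x)
        # Reparametrization trick: z = mu + sigma * eps, eps ~ N(0, I).
        eps = torch.randn_like(mu)
        z = mu + torch.exp(0.5 * logvar) * eps
        return self.decode(z), mu, logvar

def negative_elbo(x, logits, mu, logvar):
    # Expected reconstruction loss, estimated with a single sample of z.
    recon = F.binary_cross_entropy_with_logits(logits, x, reduction='sum')
    # Analytic KL(q_phi(z|x) || N(0, I)) for a diagonal Gaussian.
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl

# One training step, with a random placeholder batch just to make this runnable.
model = VAE()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
batch = torch.rand(32, 784)
logits, mu, logvar = model(batch)
loss = negative_elbo(batch, logits, mu, logvar)
optimizer.zero_grad()
loss.backward()
optimizer.step()

# Sampling new x: draw z from the prior, decode, then sample from the Bernoulli.
with torch.no_grad():
    z = torch.randn(16, 20)
    x_new = torch.bernoulli(torch.sigmoid(model.decode(z)))
```

Since `negative_elbo` is just the negated ELBO, minimizing it with Adam maximizes the ELBO; a continuous $x$ would only change the reconstruction term (e.g. to a Gaussian log-likelihood).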
Helmholtz machines

In the end I'd like to add some historical perspective. The idea of two networks, one "encoding" an observation $x$ into some latent representation (code) $z$, and another "decoding" it back, is definitely not new. In fact, the whole idea is a special case of Helmholtz machines, introduced by Geoffrey Hinton 20 years ago. A Helmholtz machine can be thought of as a neural network of stochastic hidden layers. Namely, we now have $L$ stochastic hidden layers (latent variables) $z_1, \dots, z_L$ (with the deterministic $z_0 = x$), where the layer $z_{l-1}$ is stochastically produced by the layer $z_l$, that is, it is sampled from some distribution $p_\theta(z_{l-1} \mid z_l)$, which, as you might have guessed already, is parametrized in the same way as in usual VAEs. Actually, the VAE is a special case of a Helmholtz machine with just one stochastic layer (but each stochastic layer contains a neural network of arbitrarily many deterministic layers inside of it). This image shows an instance of a Helmholtz machine with 2 stochastic layers (blue cloudy nodes), each stochastic layer having 2 deterministic hidden layers (white rectangles). The joint model distribution is

$$
p_\theta(x, z_1, \dots, z_L) = p(z_L) \prod_{l=1}^{L} p_\theta(z_{l-1} \mid z_l), \qquad z_0 = x
$$

And the approximate posterior is the same, but in the inverse order:

$$
q_\phi(z_1, \dots, z_L \mid x) = \prod_{l=1}^{L} q_\phi(z_l \mid z_{l-1}), \qquad z_0 = x
$$
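As a concrete (if toy) illustration of these two factorizations, here is a sketch with $L = 2$ and made-up unit-variance Gaussian conditionals whose means are produced by linear layers; all dimensions and names are illustrative assumptions:

```python
import torch
import torch.nn as nn

z2_dim, z1_dim, x_dim = 10, 20, 784

# Generative direction z2 -> z1 -> x: each conditional is a unit-variance
# Gaussian whose mean comes from a small deterministic network.
gen_z1 = nn.Linear(z2_dim, z1_dim)   # parametrizes p(z1 | z2)
gen_x = nn.Linear(z1_dim, x_dim)     # parametrizes p(x | z1)

# Recognition direction runs in the opposite order: x -> z1 -> z2.
rec_z1 = nn.Linear(x_dim, z1_dim)    # parametrizes q(z1 | x)
rec_z2 = nn.Linear(z1_dim, z2_dim)   # parametrizes q(z2 | z1)

# Ancestral sampling from the joint p(z2) p(z1 | z2) p(x | z1).
z2 = torch.randn(1, z2_dim)                       # z2 ~ p(z2) = N(0, I)
z1 = gen_z1(z2) + torch.randn(1, z1_dim)          # z1 ~ p(z1 | z2)
x = gen_x(z1) + torch.randn(1, x_dim)             # x  ~ p(x | z1)

# Recognition pass q(z1 | x) q(z2 | z1), one stochastic layer at a time.
z1_hat = rec_z1(x) + torch.randn(1, z1_dim)       # z1 ~ q(z1 | x)
z2_hat = rec_z2(z1_hat) + torch.randn(1, z2_dim)  # z2 ~ q(z2 | z1)
```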
The distribution $p_\theta(x, z_1, \dots, z_L)$ is usually called a generative network (or model), as it allows one to generate samples from latent representation(s). The approximate posterior $q_\phi(z_1, \dots, z_L \mid x)$ in this framework is called a recognition network (or model); presumably, the name reflects the purpose of the network: to recognize the hidden structure of observations. So, if the VAE is a special case of Helmholtz machines, what's new then? The standard algorithm for learning Helmholtz machines, the Wake-Sleep algorithm, turns out to be optimizing a different objective. Thus, one of the significant contributions of Kingma and Welling is the application of the reparametrization trick to make optimization of the ELBO w.r.t. $\phi$ tractable.