As a member of a Bayesian methods research group I'm heavily interested in the Bayesian approach to machine learning. One of the strengths of this approach is the ability to work with hidden (unobserved) variables which are interpretable. This power, however, comes at the cost of generally intractable exact inference, which limits the scope of solvable problems. Another topic which has gained lots of momentum in machine learning recently is, of course, Deep Learning. With Deep Learning we can now build big and complex models that outperform most hand-engineered approaches, given lots of data and computational power. The fact that Deep Learning needs a considerable amount of data also requires these methods to be scalable, which is a really nice property for any algorithm to have, especially in the Big Data epoch. Given how appealing both topics are, it's not a surprise that there has been some recent work on marrying the two. In this series of blog posts I'd like to summarize these recent advances, particularly in variational inference. This is not meant to be an introductory discussion, as prior familiarity with classical topics (latent variable models, variational inference, the mean-field approximation) is assumed, though I'll go over these ideas anyway, just as a reminder and to set up the notation.

Latent Variable Models

Suppose you have a probabilistic model that's easy to describe using some auxiliary variables $z$ that you don't observe directly (and might even want to infer given the data). One classical example of this setup is the Gaussian Mixture Model: we have $K$ components in the mixture, and $z$ is a one-hot vector of dimensionality $K$ indicating which component an observation $x$ belongs to. Then, conditioned on $z_k = 1$, the distribution of $x$ is a usual Gaussian distribution:

$$p(x \mid z_k = 1, \Theta) = \mathcal{N}(x \mid \mu_k, \Sigma_k)$$

(here, whenever I refer to a distribution, you should read it as its density, at least a generalized one). Therefore the joint distribution of the model is

$$p(x, z \mid \Theta) = \prod_{k=1}^{K} \bigl( \pi_k \, \mathcal{N}(x \mid \mu_k, \Sigma_k) \bigr)^{z_k}$$

where $\pi$ is a probability distribution over $K$ outcomes, and $\Theta$ is the set of all the model's parameters ($\pi$, the $\mu_k$s and the $\Sigma_k$s).

We'd like to do two things with the model: first, we obviously need to learn the parameters $\Theta$, and second, we'd like to infer the latent variables to know which cluster an observation belongs to, that is, we need to calculate the distribution of $z$ conditioned on $x$: $p(z \mid x, \Theta)$.

We want to learn the parameters, as usual, by maximizing the log-likelihood. Unfortunately, we don't know the true assignments $z$, and marginalizing them out as in $\log p(x \mid \Theta) = \log \sum_z p(x, z \mid \Theta)$ is not a good idea, as the resulting optimization problem would lose its convexity. Instead we decompose the log-likelihood as follows, for an arbitrary distribution $q(z)$ over the latent variables:

$$\log p(x \mid \Theta) = \mathbb{E}_{q(z)} \left[ \log \frac{p(x, z \mid \Theta)}{q(z)} \right] + \text{KL}\bigl(q(z) \,\|\, p(z \mid x, \Theta)\bigr)$$

The second term is a Kullback-Leibler divergence, which is always non-negative and equals zero iff the two distributions are equal almost everywhere. Therefore putting $q(z) = p(z \mid x, \Theta)$ eliminates the second term, leaving us with

$$\log p(x \mid \Theta) = \mathbb{E}_{p(z \mid x, \Theta)} \left[ \log \frac{p(x, z \mid \Theta)}{p(z \mid x, \Theta)} \right]$$

Therefore all we need to be able to do is to calculate the posterior $p(z \mid x, \Theta)$ and maximize the expectation. This is how the EM algorithm is derived: at the E-step we calculate the posterior $p(z \mid x, \Theta^{\text{old}})$, and at the M-step we maximize the expectation with respect to $\Theta$ while keeping $q(z)$ fixed. Now, all we are left to do is to find the posterior, which is given by the following deceivingly simple formula known as Bayes' rule:

$$p(z \mid x, \Theta) = \frac{p(x \mid z, \Theta)\, p(z \mid \Theta)}{p(x \mid \Theta)} = \frac{p(x \mid z, \Theta)\, p(z \mid \Theta)}{\int p(x \mid z, \Theta)\, p(z \mid \Theta)\, dz}$$

Of course, there's no free lunch, and computing the denominator is intractable in the general case.
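To make the E-step/M-step structure concrete, here is a minimal NumPy/SciPy sketch of EM for a Gaussian mixture. It is an illustration under my own assumptions rather than a reference implementation: the function name `em_gmm`, the random initialization, the fixed number of iterations, and the small ridge added to the covariances for stability are all my choices.

```python
# A minimal EM sketch for a Gaussian mixture: the E-step is Bayes' rule,
# the M-step maximizes the expected complete-data log-likelihood in closed form.
import numpy as np
from scipy.stats import multivariate_normal

def em_gmm(X, K, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    N, D = X.shape
    # Initialize Theta = (pi, mu_k, Sigma_k): uniform weights, random means,
    # shared data covariance plus a tiny ridge.
    pi = np.full(K, 1.0 / K)
    mu = X[rng.choice(N, K, replace=False)]
    Sigma = np.stack([np.cov(X.T) + 1e-6 * np.eye(D)] * K)

    for _ in range(n_iter):
        # E-step: responsibilities p(z_k = 1 | x, Theta) via Bayes' rule,
        # i.e. pi_k * N(x | mu_k, Sigma_k) normalized over k.
        resp = np.stack([pi[k] * multivariate_normal.pdf(X, mu[k], Sigma[k])
                         for k in range(K)], axis=1)        # shape (N, K)
        resp /= resp.sum(axis=1, keepdims=True)

        # M-step: update pi, mu_k, Sigma_k with the responsibilities held fixed.
        Nk = resp.sum(axis=0)                                # shape (K,)
        pi = Nk / N
        mu = (resp.T @ X) / Nk[:, None]                      # shape (K, D)
        for k in range(K):
            diff = X - mu[k]
            Sigma[k] = (resp[:, k, None] * diff).T @ diff / Nk[k] + 1e-6 * np.eye(D)

    return pi, mu, Sigma, resp
```

On data sampled from a known mixture this should recover the mixing proportions and component means up to label permutation; the returned `resp` matrix is exactly the posterior $p(z \mid x, \Theta)$ computed at the final E-step.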
One can compute the posterior exactly when the prior and the likelihood are conjugate (that is, after multiplying the prior by the likelihood you get the same functional form in $z$ as in the prior, so the posterior comes from the same family as the prior, just with different parameters), but many models of practical interest don't have this property. This is where variational inference comes in.

Variational Inference and Mean-field

In the variational inference (VI) framework we approximate the true posterior $p(z \mid x, \Theta)$ with some other, simpler distribution $q(z \mid \Lambda)$, where $\Lambda$ is the set of (variational) parameters of the (variational) approximation (I may omit $\Theta$ and $\Lambda$ in a "given" clause when it's convenient; remember, they can always be placed there). One possibility is to divide the latent variables into groups and force the groups to be independent. In the extreme case each variable gets its own group, assuming independence among all variables:

$$q(z \mid \Lambda) = \prod_i q_i(z_i \mid \lambda_i)$$

If we then set about finding the best approximation to the true posterior in this fully factorized class, we will no longer have the optimal $q$ being the true posterior itself, as the true posterior is presumably too complicated to be dealt with in analytic form (which is exactly what we want from the approximation when we say "simpler distribution"). Therefore we find each optimal factor $q_j$ by minimizing the KL-divergence with the true posterior while keeping the other factors fixed, which gives ($\text{const}$ denotes terms that are constant w.r.t. $z_j$):

$$\log q_j(z_j) = \mathbb{E}_{\prod_{i \neq j} q_i(z_i)} \bigl[ \log p(x, z \mid \Theta) \bigr] + \text{const}$$

For many models it's possible to look into this expectation and immediately recognize the logarithm of an unnormalized density of some known distribution.

Another cornerstone of this framework is the notion of the Evidence Lower Bound (ELBO): recall the decomposition of the log-likelihood we derived above,

$$\log p(x \mid \Theta) = \mathbb{E}_{q(z \mid \Lambda)} \left[ \log \frac{p(x, z \mid \Theta)}{q(z \mid \Lambda)} \right] + \text{KL}\bigl(q(z \mid \Lambda) \,\|\, p(z \mid x, \Theta)\bigr)$$

In our current setting we cannot compute the right-hand side, as we cannot evaluate the true posterior $p(z \mid x, \Theta)$. However, note that the left-hand side (that is, the log-likelihood) does not depend on the variational distribution $q$. Therefore, maximizing the first term of the right-hand side w.r.t. the variational parameters $\Lambda$ amounts to minimizing the second term, the KL-divergence with the true posterior. This implies we can ditch the second term and maximize the first one, the ELBO, w.r.t. both the model parameters $\Theta$ and the variational parameters $\Lambda$:

$$\mathcal{L}(\Theta, \Lambda) = \mathbb{E}_{q(z \mid \Lambda)} \left[ \log \frac{p(x, z \mid \Theta)}{q(z \mid \Lambda)} \right] \to \max_{\Theta, \Lambda}$$
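To make the ELBO a bit more tangible, here is a small sketch that estimates $\mathcal{L}(\Theta, \Lambda) = \mathbb{E}_{q(z \mid \Lambda)}[\log p(x, z \mid \Theta) - \log q(z \mid \Lambda)]$ by plain Monte Carlo for a fully factorized Gaussian $q$. Everything here is an assumption of mine for illustration only: the names `log_joint` and `elbo_estimate`, and the toy model (a Bayesian linear regression where the latent $z$ plays the role of the weights, with unit-variance noise and a standard normal prior), are not from the post.

```python
# Monte Carlo estimate of the ELBO
#   L(Theta, Lambda) = E_{q(z|Lambda)}[ log p(data, z | Theta) - log q(z | Lambda) ]
# with a mean-field Gaussian q(z | Lambda) = prod_i N(z_i | m_i, s_i^2).
import numpy as np
from scipy.stats import norm

def log_joint(z, x, y):
    # log p(y, z | x) = log p(y | x, z) + log p(z): unit-variance Gaussian
    # likelihood around x @ z, standard normal prior on z.
    log_lik = norm.logpdf(y, loc=x @ z, scale=1.0).sum()
    log_prior = norm.logpdf(z, loc=0.0, scale=1.0).sum()
    return log_lik + log_prior

def elbo_estimate(m, log_s, x, y, n_samples=1000, seed=0):
    rng = np.random.default_rng(seed)
    s = np.exp(log_s)
    # Sample z ~ q(z | Lambda) with Lambda = (m, s).
    z = m + s * rng.standard_normal((n_samples, m.size))
    # log q(z | Lambda) factorizes across coordinates (mean-field assumption).
    log_q = norm.logpdf(z, loc=m, scale=s).sum(axis=1)
    log_p = np.array([log_joint(zi, x, y) for zi in z])
    return (log_p - log_q).mean()

# Toy data; the estimate lower-bounds the marginal log-likelihood of y given x.
rng = np.random.default_rng(1)
x = rng.standard_normal((50, 3))
y = x @ np.array([1.0, -2.0, 0.5]) + rng.standard_normal(50)
print(elbo_estimate(m=np.zeros(3), log_s=np.zeros(3), x=x, y=y))
```

For fixed model parameters, pushing this estimate up with respect to the variational parameters $(m, \log s)$ is precisely the "maximize the first term w.r.t. $\Lambda$" step described above.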
Okay, so this covers the basics, but before we get to the neural network-based methods we need to discuss some general approaches to VI and how to make it scalable. This is what the next blog post is all about.