Variational Bayes for Latent Dirichlet Allocation
In this post we will learn about a widely used topic model called Latent Dirichlet Allocation (LDA), proposed by Blei, Ng and Jordan in 2003. Although research in probabilistic topic modeling has a long history, approaching it as a newcomer can be quite challenging. There is also a large literature on the applications of topic models, especially LDA, across many disciplines; I would need to dedicate at least a series of ten posts to cover these applications reasonably. As such, I constrain myself to the following desiderata when writing this post:
- Explain what probabilistic topic modeling is, and what assumptions it makes.
- Recognize the observable and latent variables in a topic model, and specifically in LDA.
- Explain the generative process of LDA, and derive the complete probability.
- Explain what inference means in a mixture model, and why it is hard in LDA.
- Find the approximate posterior distribution of LDA using variational inference, and explain the procedure to find the optimal variational parameters.
- Explain what it means to “fit” an LDA model to a corpus, and describe how this procedure works.
- Be able to write code for an LDA model, including training and inference.
Introduction
Being able to describe a large collection of documents is an important task in many disciplines. This task is often called “describe the haystack,” and the idea is to find the common themes that appear in the documents. For example, given a corpus of abstracts from papers published in PNAS, can we find the common scientific topics—such as “cellular biology,” “genetics” or “evolution”—that are covered in these abstracts? Another example is when you collect many tweets from a specific period and want to find out what common topics people tweet about during this period, in the hope of predicting what topics will be trending in the near future. To help us approach this, there are three points worth discussing here.
First, identifying topics by manually reading a collection of documents is probably the best way to characterize its themes, but the sheer size of a corpus makes it impossible to do this; we are looking at tens of thousands of abstracts, hundreds of thousands of Reddit posts, millions of Wikipedia articles, and tens of millions of tweets. Coming up with a way in which a computer can help us automatically identify the topics is much more desirable.
Second, what do we mean by topics, or themes? Put simply, a topic is a probability distribution over the vocabulary. For example, a topic about natural language processing is a distribution with (much) higher probabilities for words such as “machine,” “token,” “vector” and “likelihood” than for words such as “mechanic,” “torts,” “cell” and “chemical.” Typically, we describe a topic by a list of its, say, 10 or 15 most likely words. A human can look at this list and give the topic a representative name if necessary.
Third, it is quite evident that a document is rarely about one topic exclusively. (Well, this depends on how fine-grained you define each topic to be, but note that the more fine-grained the topics, the harder it is to generalize.) In fact, we often associate a document with a mixture of topics, perhaps with higher weights for some than for others. For example, a research paper in machine learning can be a mixture of topics such as optimization, statistics, statistical physics, and so on, and a human reader can probably tell which topics are weighted more heavily after reading the paper. A solution to modeling this is to have a probability distribution over topics, given a document.
Probabilistic topic models
The two types of probability distribution described above are the main ingredients of probabilistic topic models such as LDA. If we are able to model them, we can do many useful things. First, the topic-word distributions allow us to characterize the topics present in a corpus, thereby summarizing it in a meaningful way. And the document-topic distributions allow us to draw inferences about the topics that a document is about, also helping with summarization. The applications of these models are quite boundless, which is why they are so popular in fields such as computational social science, psychology, cognitive science, and so on.
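As a quick illustration of how these two distributions surface in practice, here is a minimal scikit-learn sketch; the tiny corpus and the hyperparameter values are purely illustrative.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# A toy corpus, purely for illustration.
docs = [
    "gene expression in cellular biology",
    "neural networks maximize a likelihood with tokens and vectors",
    "evolutionary genetics of natural populations",
    "language models embed tokens as vectors",
]

X = CountVectorizer().fit_transform(docs)   # document-term count matrix

lda = LatentDirichletAllocation(n_components=2, learning_method="batch", random_state=0)
doc_topic = lda.fit_transform(X)            # document-topic weights (one row per document)
topic_word = lda.components_                # topic-word scores (one row per topic)
print(doc_topic.shape, topic_word.shape)    # (4, 2) and (2, vocabulary size)
```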
However, in order to use them correctly, and to weigh their pros and cons while modeling, one should not stop at calling sklearn.decomposition.LatentDirichletAllocation arbitrarily, but should understand the model, its assumptions, and how to tune its hyperparameters. To demonstrate this, let us dive into the details of the model.
Latent Dirichlet Allocation
A probabilistic topic model, LDA remains one of the most popular choices for topic modeling today. It is an example of a mixture model whose structure contains two types of random variables:
- The observable variables are the words you observe in each document.
- The latent variables are those you do not observe, but which describe some internal structure of your data, in particular, the “topics”.
You can readily see the assumption here, which is that there is some internal structure to your data, and our job is to model that structure using the latent variables.
Generative process
In specifying a mixture model like LDA, we need to describe how data can be generated from this model. Before we do that, let us set up the notation carefully. Note that in this blog post, I have chosen the notation used in Hoffman, Blei and Bach’s paper on online learning for LDA. This post follows the “batch variational Bayes” part of that paper, with some more detail to help you read it more easily.
Suppose we have a collection of $D$ documents, where each document $d$ is of length $N_d$. Also suppose that we have a fixed vocabulary of $W$ words. We wish to discover $K$ topics in this collection, where each topic $k$ is specified by a probability distribution $\beta_k$ over all words. The generative process works as follows. For each document $d$, sample a probability distribution $\theta_d$ over the topics $1, \ldots, K$. For each word position $i$ in document $d$, sample a topic $z_{di}$ from the distribution $\theta_d$. With the chosen topic $z_{di}$, sample a word $w_{di}$ from the probability distribution $\beta_{z_{di}}$. In other words (a code sketch of this process follows the list below),
- Draw a topic-word distribution $\beta_k \sim \text{Dir}(\eta)$ for $k = 1, \ldots, K$.
- For each document $d = 1, \ldots, D$:
- Draw document-topic distribution for document $d$: $\theta_d \sim \text{Dir}(\alpha)$.
- For each word $i$ in document $d$:
- Draw a topic $z_{di} \sim \theta_d$.
- Draw a word $w_{di} \sim \beta_{z_{di}}$.
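To make the generative story concrete, here is a minimal NumPy sketch of the process above; the corpus sizes and the hyperparameter values for $\alpha$ and $\eta$ are arbitrary toy choices.

```python
import numpy as np

rng = np.random.default_rng(0)

D, W, K = 5, 50, 3      # documents, vocabulary size, topics (toy values)
alpha, eta = 0.1, 0.01  # symmetric Dirichlet hyperparameters
N_d = 20                # words per document (kept constant for simplicity)

# Draw one word distribution per topic: beta_k ~ Dir(eta)
beta = rng.dirichlet(eta * np.ones(W), size=K)      # shape (K, W)

docs = []
for d in range(D):
    # Draw the document's topic proportions: theta_d ~ Dir(alpha)
    theta_d = rng.dirichlet(alpha * np.ones(K))     # shape (K,)
    words = []
    for i in range(N_d):
        z_di = rng.choice(K, p=theta_d)             # topic assignment for word i
        w_di = rng.choice(W, p=beta[z_di])          # word drawn from that topic
        words.append(w_di)
    docs.append(words)

print(docs[0])  # word indices of the first toy document
```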
The notation is summarized in the following table.
Notation | Dimensionality | Meaning | Notes |
---|---|---|---|
$D$ | Scalar | Number of documents | Positive integer |
$W$ | Scalar | Number of words in the vocabulary | Positive integer |
$K$ | Scalar | Number of topics | Positive integer, typically much smaller than $D$ |
$N_d$ | Scalar | Number of words in document $d$ | Positive integer |
$\beta_k$ | $W$ | Word distribution for topic $k$ | $\beta_k$ ($k = 1, \ldots, K)$ are mutually independent. Each $\beta_k$ is a non-negative vector and $\sum_{w=1}^{W} \beta_{kw} = 1$. |
$\eta$ | Scalar | Dirichlet prior parameter for $\beta_k$ | All $\beta_k$ share the same parameter $\eta$. |
$\theta_d$ | $K$ | Topic distribution for document $d$ | $\theta_d$ ($d = 1, \ldots, D$) are mutually independent. Each $\theta_d$ is a non-negative vector and $\sum_{k=1}^{K} \theta_{dk} = 1$. |
$\alpha$ | Scalar | Dirichlet prior parameter for $\theta_d$ | All $\theta_d$ share the same parameter $\alpha$. |
$w_{di}$ | Scalar | Word $i$ in document $d$ | $w_{di} \in \{1, 2, \ldots, W\}$ |
$z_{di}$ | Scalar | Topic assignment for word $w_{di}$ | $z_{di} \in \{1, 2, \ldots, K\}$ |
Complete model
The types of variables should be clear to us now. The only observables we have are $w$, the words in the documents. On the other hand, the latent variables are $z$, $\theta$ and $\beta$. The generative process allows us to specify the complete model—i.e., the joint distribution of both observable and latent variables—as follows
\[\begin{align} p(w, z, \theta, \beta \mid \alpha, \eta) & = p(\beta \mid \eta) \prod_{d=1}^{D} p(\theta_d \mid \alpha) p(z_d \mid \theta_d) p(w_d \mid \theta_d, z_d, \beta) \label{eq:joint_prob}\\ & = \prod_{k=1}^{K} p(\beta_k \mid \eta) \prod_{d=1}^{D} p(\theta_d \mid \alpha) \prod_{i=1}^{N_d} p(z_{di} \mid \theta_d) p(w_{di} \mid \theta_d, z_{di}, \beta). \nonumber \end{align}\]

Dirichlet and categorical distributions
Note that there are two probability distributions used in this process. The first is the Dirichlet used to sample $\beta_k$ and $\theta_d$. For example, the probability of the topic distribution for document $d$ is
\[p(\theta_d \mid \alpha) = \frac{\Gamma\left( K \alpha \right)}{\Gamma(\alpha)^K} \prod_{k=1}^K \theta_{dk}^{\alpha-1}.\]The second is the categorical distribution, used to sample $z_{di}$ and $w_{di}$. For example, to find the probability of the word $w_{di}$ given all other variables, we first need to know the value of $z_{di}$. Suppose $z_{di} = 2$. Then the distribution we need to use is $\beta_2$, the second topic. The probability that $w_{di}$ equals some $w$ is then
\[p(w_{di} | z_{di} = 2, \beta, \theta_d) = \beta_{2, w},\]that is, the $w$-th entry of $\beta_{2}$.
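As a small sanity check, both densities can be evaluated directly, for instance with SciPy; the particular values of $\theta_d$, $z_{di}$ and $w$ below are arbitrary, and note that the code uses 0-based topic indices.

```python
import numpy as np
from scipy.stats import dirichlet

K, W = 3, 50
alpha, eta = 0.1, 0.01
rng = np.random.default_rng(0)

# Dirichlet density of one particular topic-proportion vector theta_d.
theta_d = np.array([0.7, 0.2, 0.1])
p_theta = dirichlet.pdf(theta_d, alpha * np.ones(K))

# Categorical probability of a word given its topic assignment:
# if z_di picks out topic 2 (index 1 here), the probability of word w
# is simply the w-th entry of that topic's word distribution beta_2.
beta = rng.dirichlet(eta * np.ones(W), size=K)
z_di, w = 1, 7
p_word = beta[z_di, w]

print(p_theta, p_word)
```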
Inference, Approximate Inference and Parameter Estimation
Inference refers to the task of finding the probability of latent variables given observable variables. In our LDA example, the quantity we want to calculate is
\[\begin{align} p(z, \theta, \beta \mid w, \alpha, \eta) = \frac{p(w, z, \theta, \beta \mid \alpha, \eta)}{\int_{z, \theta, \beta} p(w, z, \theta, \beta \mid \alpha, \eta) dz d\theta d\beta}. \label{eq:bayes-infr} \end{align}\]What is this quantity? Imagine you see a new document: how do you know which topics it is about, and with what weights? The probability in $\eqref{eq:bayes-infr}$ helps us do just that: use Bayes’ theorem to find the posterior distribution over the latent variables, enabling us to draw inference on the structure of the document.
But there is a catch. The integral in the denominator of $\eqref{eq:bayes-infr}$, which is equal to $p(w \mid \alpha, \eta)$ and often called the evidence, is very hard to evaluate. This is mainly because of the coupling of the latent variables, and calculating it exactly takes exponential time. Instead, we will use a method called variational inference to approximate the posterior.
Variational inference (VI)
(To keep this blog post short enough, I will not explain the details of VI. You are encouraged to check out Chapter 10 in Kevin Murphy’s textbook on probabilistic machine learning for an introduction to VI.)
Basically, the goal of VI is to approximate the distribution $p(z, \theta, \beta \mid w, \alpha, \eta)$ using a simpler distribution $q(z, \theta, \beta)$ that is “the closest” to $p$. Here “closeness” is defined by the Kullback-Leibler divergence between $q$ and $p$. In other words, we aim to solve the following optimization problem:
\[\min_{q} \left\{ \text{KL}(q(z, \theta, \beta) \| p(z, \theta, \beta \mid w, \alpha, \eta)) = \mathbb{E}_q \left[ \log \frac{q(z, \theta, \beta)}{p(z, \theta, \beta \mid w, \alpha, \eta)} \right] \right\}.\]

Evidence lower bound (ELBO) and variational Bayes (VB)
Interestingly, minimizing this KL divergence is equivalent to maximizing the evidence lower bound (ELBO) of the data, where the ELBO $\mathcal{L}(w, \phi, \gamma, \lambda)$, written as a function of the variational parameters that we will introduce shortly, is defined as
\[\begin{align} \mathcal{L}(w, \phi, \gamma, \lambda) = \mathbb{E}_q\left[ \log p(w, z, \theta, \beta \mid \alpha, \eta) \right] - \mathbb{E}_q\left[ \log q(z, \theta, \beta) \right]. \label{eq:elbo:def} \end{align}\]As the name suggests, the ELBO is a lower bound on the log-evidence $\log p(w \mid \alpha, \eta)$ of our data, and its maximum over $q$ corresponds to the tightest bound and the “closest” approximation to the posterior. Check Section 10.1.2 in Murphy’s textbook for a full derivation.
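For completeness, here is the one-line argument behind this bound (see Murphy for the full treatment): for any choice of $q$,

\[\begin{align*} \log p(w \mid \alpha, \eta) &= \mathbb{E}_q\left[ \log \frac{p(w, z, \theta, \beta \mid \alpha, \eta)}{q(z, \theta, \beta)} \right] + \mathbb{E}_q\left[ \log \frac{q(z, \theta, \beta)}{p(z, \theta, \beta \mid w, \alpha, \eta)} \right] \\ &= \mathcal{L}(w, \phi, \gamma, \lambda) + \text{KL}(q(z, \theta, \beta) \,\|\, p(z, \theta, \beta \mid w, \alpha, \eta)) \geq \mathcal{L}(w, \phi, \gamma, \lambda), \end{align*}\]

since the KL divergence is non-negative. The gap between the evidence and the ELBO is exactly the KL divergence we want to minimize, which is why maximizing one is equivalent to minimizing the other.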
To “fit” the data in the Bayesian sense, we will aim to approximate the true posterior as well as possible. Applying VI to this task is called variational Bayes (VB).
Choosing variational parameters
We have mentioned the “simpler” distribution $q(z, \theta, \beta)$ above, but what exactly is it? In using VI for LDA inference, we assume that $q(z, \theta, \beta)$ factorizes into three marginal distributions:
- $q(z_{di} = k) = \phi_{d w_{di} k}$. The dimensionality of $\phi$ is $D \times W \times K$, and $\sum_{k=1}^{K} \phi_{d w k} = 1, \forall d, w$;
- $\theta_d \sim \text{Dir}(\gamma_d)$, where $\gamma_d$ is a vector of length $K$. Note that $\gamma_d$ is not symmetric;
- $\beta_k \sim \text{Dir}(\lambda_k)$, where $\lambda_k$ is a vector of length $W$. Similarly, $\lambda_k$ is not symmetric.
This is an application of the mean-field assumption, which says that variational distributions for each set of latent variables are mutually independent, allowing the joint to be factorized into marginals.
In summary,
\[\begin{align} q(z_d, \theta_d,\beta) = q(z_d) q(\theta_d)q(\beta), \label{eq:mean_field} \end{align}\]and we have three types of variational parameters: $\phi$ of size $D \times W \times K$; $\gamma_d$ of size $K$, for $d = 1, \ldots, D$; and $\lambda_k$ of size $W$, for $k = 1, \ldots, K$.
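In code, these variational parameters are just arrays of the shapes listed above. Here is a minimal initialization sketch; the sizes are illustrative, and the Gamma-based initialization of $\lambda$ is one common choice for breaking the symmetry between topics.

```python
import numpy as np

rng = np.random.default_rng(0)
D, W, K = 500, 5000, 20        # illustrative sizes

# phi: responsibilities phi_{dwk}; for each (d, w), the K entries sum to 1.
phi = np.full((D, W, K), 1.0 / K)

# gamma_d: Dirichlet parameter of q(theta_d), one row of length K per document.
gamma = np.ones((D, K))

# lambda_k: Dirichlet parameter of q(beta_k), one row of length W per topic.
lam = rng.gamma(100.0, 1.0 / 100.0, size=(K, W))
```

In practice, the dense $D \times W \times K$ array for $\phi$ is never stored: as we will see, $\phi$ can be recomputed per document from $\gamma$ and $\lambda$, so implementations only keep $\gamma$ and $\lambda$.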
Factorizing ELBO
Given the complete model in $\eqref{eq:joint_prob}$ and the variational distribution in $\eqref{eq:mean_field}$, we can decompose the ELBO as follows: \(\begin{align} \mathcal{L}(w, \phi, \gamma, \lambda) & = \sum_{d=1}^{D} \left\{ \mathbb{E}_q\left[ \log p(w_d \mid \theta_d, z_d, \beta) \right] + \mathbb{E}_q\left[ \log p(z_d \mid \theta_d) \right] + \mathbb{E}_q\left[ \log p(\theta_d \mid \alpha) \right] \right\} \nonumber \\ &~~~~ - \sum_{d=1}^{D} \left\{ \mathbb{E}_q\left[ \log q(z_d) \right] + \mathbb{E}_q\left[ \log q(\theta_d) \right] \right\} \nonumber \\ &~~~~ + \mathbb{E}_q\left[ \log p(\beta \mid \eta) \right] - \mathbb{E}_q\left[ \log q(\beta) \right] \nonumber \\ & = \sum_{d=1}^{D} \left\{ \mathbb{E}_q\left[ \log p(w_d \mid \theta_d, z_d, \beta) \right] + \mathbb{E}_q\left[ \log p(z_d \mid \theta_d) \right] - \mathbb{E}_q\left[ \log q(z_d) \right] \right. \nonumber\\ &\quad \quad \quad ~ +\left.\mathbb{E}_q\left[ \log p(\theta_d \mid \alpha) \right] - \mathbb{E}_q\left[ \log q(\theta_d) \right] \right\} \nonumber \\ & ~~~~ + \left(\mathbb{E}_q\left[ \log p(\beta \mid \eta) \right] - \mathbb{E}_q\left[ \log q(\beta) \right]\right). \label{eq:elbo} \end{align}\)
ELBO as a function of variational parameters
Let us analyze each term in the sum. \(\begin{align} \mathbb{E}_q\left[ \log p(w_d \mid \theta_d, z_d, \beta) \right] & = \sum_{i=1}^{N_d} \mathbb{E}_q\left[ \log p(w_{di} \mid \theta_d, z_{di}, \beta) \right] \nonumber \\ & = \sum_{i=1}^{N_d} \sum_{k=1}^{K} q(z_{di} = k) \mathbb{E}_q\left[ \log p(w_{di} \mid \theta_d, z_{di} = k, \beta) \right] \nonumber \\ & = \sum_{i=1}^{N_d} \sum_{k=1}^{K} \phi_{d w_{di} k} \mathbb{E}_q\left[ \log \beta_{k w_{di}} \right], \nonumber \end{align}\)
where the expectation on the last row is with respect to $q(\beta_k)$. We can see that in this formula, the contribution of each word $w$ to the term is $\sum_{k=1}^{K} \phi_{d w k} \mathbb{E} \left[ \log \beta_{k w} \right]$, which is the same regardless of the position of word $w$ in document $d$. Therefore, we can simply count the number of times $w$ appears in $d$ and multiply that count by this contribution to get the contribution of all occurrences of $w$. This gives us the equivalent expression: \(\begin{align} \mathbb{E}_q\left[ \log p(w_d \mid \theta_d, z_d, \beta) \right] = \sum_{w=1}^{W} n_{dw} \sum_{k=1}^{K} \phi_{d w k} \mathbb{E}_q\left[ \log \beta_{k w} \right], \label{eq:elbo:1} \end{align}\)
where $n_{dw}$ is the number of occurrences of word $w$ in document $d$. Using the same trick, we have \(\begin{align} \mathbb{E}_q\left[ \log p(z_d \mid \theta_d) \right] & = \sum_{w=1}^{W} n_{dw} \sum_{k=1}^{K} \phi_{d w k} \mathbb{E}_q\left[ \log \theta_{dk} \right], \text{and} \label{eq:elbo:2} \\ \mathbb{E}_q\left[ \log q(z_d) \right] & = \sum_{w=1}^{W} n_{dw} \sum_{k=1}^{K} \phi_{d w k} \log \phi_{d w k}. \label{eq:elbo:3} \end{align}\)
For the last two terms inside the sum, first note that $p(\theta_d \mid \alpha)$ is a Dirichlet distribution with symmetric parameter $\alpha$, i.e., $p(\theta_d \mid \alpha) = \frac{\Gamma(K \alpha)}{\Gamma(\alpha)^K} \prod_{k=1}^{K} \theta_{dk}^{\alpha-1}$. Therefore, \(\begin{align} \mathbb{E}_q\left[ \log p(\theta_d \mid \alpha) \right] = \log \Gamma(K \alpha) - K \log \Gamma(\alpha) + (\alpha - 1) \sum_{k=1}^{K} \mathbb{E}_q\left[ \log \theta_{dk} \right]. \label{eq:elbo:4} \end{align}\)
Similarly, because $q(\theta_d)$ is a Dirichlet distribution with (generally asymmetric) parameter $\gamma_d$, we have \(\begin{align} \mathbb{E}_q\left[ \log q(\theta_d) \right] = \log \Gamma\left(\sum_{k=1}^{K} \gamma_{dk} \right) - \sum_{k=1}^{K} \log \Gamma(\gamma_{dk}) + \sum_{k=1}^{K} (\gamma_{dk} - 1) \mathbb{E}_q\left[ \log \theta_{dk} \right]. \label{eq:elbo:5} \end{align}\)
Now for the two remaining terms outside the sum over documents, note that $p(\beta_k \mid \eta)$ is Dirichlet with symmetric parameter $\eta$. Therefore, \(\begin{align} \mathbb{E}_q\left[ \log p(\beta \mid \eta) \right] &= \sum_{k=1}^{K} \mathbb{E}_q\left[ \log p(\beta_k \mid \eta) \right] \nonumber \\ &= K [\log \Gamma(W \eta) - W \log \Gamma(\eta)] + \sum_{k=1}^{K} \sum_{w=1}^{W} (\eta - 1) \mathbb{E}_q\left[ \log \beta_{k w} \right]. \label{eq:elbo:6} \end{align}\)
Similarly, the final term is \(\begin{align} \mathbb{E}_q\left[ \log q(\beta) \right] &= \sum_{k=1}^{K} \mathbb{E}_q\left[ \log q(\beta_k) \right] \nonumber \\ &= \sum_{k=1}^{K} \left( \log \Gamma \left( \sum_{w=1}^{W} \lambda_{kw} \right) - \sum_{w=1}^{W} \log \Gamma(\lambda_{kw}) + \sum_{w=1}^{W} (\lambda_{kw} - 1) \mathbb{E}_q\left[ \log \beta_{k w} \right] \right). \label{eq:elbo:7} \end{align}\)
Plugging $\eqref{eq:elbo:1}, \eqref{eq:elbo:2}, \eqref{eq:elbo:3}, \eqref{eq:elbo:4}, \eqref{eq:elbo:5}, \eqref{eq:elbo:6}, \eqref{eq:elbo:7}$ into $\eqref{eq:elbo}$, we have the ELBO as a function of variational parameters:
\[\begin{align} \mathcal{L} &= \sum_{d=1}^{D} \left\{ \sum_{w=1}^{W} n_{dw} \sum_{k=1}^{K} \phi_{dwk} \left( \mathbb{E}_q\left[ \log \theta_{dk} \right] + \mathbb{E}_q\left[ \log \beta_{k w} \right] - \log \phi_{dwk} \right) \right. \nonumber\\ & \left. \quad \quad \quad ~ - \log \Gamma\left( \sum_{k=1}^{K} \gamma_{dk} \right) + \sum_{k=1}^{K}\left( \log \Gamma(\gamma_{dk}) + (\alpha - \gamma_{dk}) \mathbb{E}_q\left[ \log \theta_{dk} \right] \right) \right\} \nonumber \\ &~~~~ + \sum_{k=1}^{K} \left( - \log \Gamma\left( \sum_{w=1}^{W} \lambda_{kw} \right) + \sum_{w=1}^{W} \left( \log \Gamma(\lambda_{kw}) + (\eta - \lambda_{kw}) \mathbb{E}_q\left[ \log \beta_{k w} \right] \right) \right) \nonumber \\ &~~~~ + D [\log \Gamma(K \alpha) - K \log \Gamma(\alpha)] + K [\log \Gamma(W \eta) - W \log \Gamma(\eta)]. \label{eq:elbo:var} \end{align}\]

Variational Bayes for LDA
The main objective here is to maximize the ELBO $\mathcal{L}$ with respect to the variational parameters $\phi$, $\gamma$ and $\lambda$. To do so, we will use a procedure called coordinate ascent, in which we maximize $\mathcal{L}$ with respect to one set of parameters while keeping the others fixed, then move on to another set of parameters, and so on. In our LDA example, we first keep $\gamma$ and $\lambda$ fixed and maximize $\mathcal{L}$ as a function of $\phi$ only. Then we do the same for $\gamma$ and $\lambda$.
Maximizing with respect to $\phi$
Only keeping the terms involving $\phi_{dwk}$ in $\eqref{eq:elbo:var}$, and treating everything else as constants, we have the objective function w.r.t. $\phi_{dwk}$ as
\[\mathcal{L}_{[\phi_{dwk}]} = \phi_{dwk} \left( \mathbb{E}_q\left[ \log \theta_{dk} \right] + \mathbb{E}_q\left[ \log \beta_{k w} \right] - \log \phi_{dwk} \right) + \text{const},\]which gives the gradient:
\[\frac{\partial \mathcal{L}}{\partial \phi_{dwk}} = \mathbb{E}_q\left[ \log \theta_{dk} \right] + \mathbb{E}_q\left[ \log \beta_{k w} \right] - \log \phi_{dwk} - 1.\]Setting the gradient to zero and solving for $\phi_{dwk}$, we get the update rule for $\phi_{dwk}$:
\[\begin{align} \phi_{dwk} \propto \exp \left\{ \mathbb{E}_q\left[ \log \theta_{dk} \right] + \mathbb{E}_q\left[ \log \beta_{k w} \right] \right\}, \label{eq:update:phi} \end{align}\]where we have suppressed the multiplicative constant by using $\propto$. After this update, we simply rescale the $\phi_{dwk}$ so that $\sum_{k=1}^{K} \phi_{dwk} = 1$ for all $d, w$.
The final thing to handle is the expectations inside the $\exp$. How do we calculate them exactly? Luckily, both of them can be calculated using the digamma function $\Psi$—the first derivative of the logarithm of the gamma function—as follows:
\[\begin{align*} \mathbb{E}_q\left[ \log \theta_{dk} \right] & = \Psi(\gamma_{dk}) - \Psi\left(\sum_{i=1}^{K} \gamma_{di}\right), \\ \mathbb{E}_q\left[ \log \beta_{k w} \right] & = \Psi(\lambda_{kw}) - \Psi\left(\sum_{i=1}^{W} \lambda_{ki}\right). \end{align*}\]

Maximizing with respect to $\gamma$
Similarly, the objective function w.r.t. $\gamma_{dk}$ is
\[\begin{align*} \mathcal{L}_{[\gamma_{dk}]} & = \sum_{w=1}^{W} n_{dw} \phi_{dwk} \mathbb{E}_q \left[ \log \theta_{dk} \right] - \log \Gamma\left( \sum_{i=1}^{K} \gamma_{di} \right) \\ & ~~~~+ \log \Gamma(\gamma_{dk}) + (\alpha - \gamma_{dk}) \mathbb{E}_q \left[ \log \theta_{dk} \right] + \text{const} \\ & = \left( \alpha + \sum_{w=1}^{W} n_{dw} \phi_{dwk} - \gamma_{dk} \right) \left( \Psi(\gamma_{dk}) - \Psi\left(\sum_{i=1}^{K} \gamma_{di}\right) \right) \\ & ~~~~ - \log \Gamma\left( \sum_{i=1}^{K} \gamma_{di} \right) + \log \Gamma(\gamma_{dk}) + \text{const}, \end{align*}\]where we have used the digamma function $\Psi$ similarly to the previous section. A simple manipulation gives the gradient:
\[\begin{align*} \frac{\partial \mathcal{L}}{\partial \gamma_{dk}} = \left( \Psi'(\gamma_{dk}) - \Psi'\left(\sum_{i=1}^{K} \gamma_{di}\right) \right) \left( \alpha + \sum_{w=1}^{W} n_{dw} \phi_{dwk} - \gamma_{dk} \right). \end{align*}\]Setting this gradient to zero and solving for $\gamma_{dk}$, we get the update rule for $\gamma_{dk}$:
\[\begin{align} \gamma_{dk} = \alpha + \sum_{w=1}^{W} n_{dw} \phi_{dwk}. \label{eq:update:gamma} \end{align}\]The variational Bayes estimate of $\gamma_{dk}$ has an intuitive explanation: it is the Dirichlet prior $\alpha$ plus the expected number of words in document $d$ assigned to topic $k$, where each word $w$ contributes its count $n_{dw}$ weighted by the probability $\phi_{dwk}$ that word $w$ in document $d$ belongs to topic $k$.
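Combining the $\phi$ update in $\eqref{eq:update:phi}$, the digamma formulas for the expectations, and the $\gamma$ update in $\eqref{eq:update:gamma}$, the per-document coordinate ascent can be sketched as follows. Here `n_dw` is a dense count vector for document $d$, `Elogbeta` holds $\mathbb{E}_q[\log \beta_{kw}]$ and is assumed to be precomputed from $\lambda$, and the fixed iteration count stands in for a proper convergence check; the helper function itself is hypothetical, not part of any library.

```python
import numpy as np
from scipy.special import psi  # digamma function

def update_document(n_dw, Elogbeta, alpha, n_iters=50):
    """Coordinate ascent on phi_{dwk} and gamma_d for one document, lambda held fixed.

    n_dw:     (W,) word counts for document d
    Elogbeta: (K, W) array of E_q[log beta_{kw}]
    Returns gamma_d of shape (K,) and phi of shape (K, W), where phi[k, w] = phi_{dwk}.
    """
    K, _ = Elogbeta.shape
    gamma_d = np.ones(K)                                   # initialize q(theta_d)
    for _ in range(n_iters):
        Elogtheta = psi(gamma_d) - psi(gamma_d.sum())      # E_q[log theta_dk]
        # phi_{dwk} proportional to exp(E[log theta_dk] + E[log beta_kw])
        log_phi = Elogtheta[:, None] + Elogbeta
        log_phi -= log_phi.max(axis=0, keepdims=True)      # for numerical stability
        phi = np.exp(log_phi)
        phi /= phi.sum(axis=0, keepdims=True)              # normalize over topics k
        gamma_d = alpha + phi @ n_dw                       # gamma_dk = alpha + sum_w n_dw phi_dwk
    return gamma_d, phi
```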
Maximizing with respect to $\lambda$
Similar to $\gamma$, we can use the digamma function $\Psi$ in the objective function w.r.t. $\lambda_{kw}$ as follows
\[\begin{align*} \mathcal{L}_{[\lambda_{kw}]} & = \left( \eta + \sum_{d=1}^{D} n_{dw} \phi_{dwk} - \lambda_{kw} \right) \left( \Psi(\lambda_{kw}) - \Psi\left(\sum_{i=1}^{W} \lambda_{ki} \right) \right) \\ & ~~~~ - \log \Gamma\left(\sum_{i=1}^{W} \lambda_{ki} \right) + \log \Gamma(\lambda_{kw}) + \text{const}, \end{align*}\]which gives the gradient:
\[\begin{align*} \frac{\partial \mathcal{L}}{\partial \lambda_{kw}} = \left( \Psi'(\lambda_{kw}) - \Psi'\left(\sum_{i=1}^{W} \lambda_{ki} \right) \right) \left( \eta + \sum_{d=1}^{D} n_{dw} \phi_{dwk} - \lambda_{kw} \right). \end{align*}\]Setting the gradient to zero and solving for $\lambda_{kw}$, we get the update estimate:
\[\begin{align} \lambda_{kw} = \eta + \sum_{d=1}^{D} n_{dw} \phi_{dwk}. \label{eq:update:lambda} \end{align}\]Similar to $\gamma_{dk}$, the variational Bayes estimate of $\lambda_{kw}$ has an intuitive explanation: it is the Dirichlet prior $\eta$ plus the expected count of word $w$ in topic $k$, where each document $d$ contributes its count $n_{dw}$ weighted by the probability $\phi_{dwk}$ that word $w$ in document $d$ belongs to topic $k$.
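In code, the $\lambda$ update is a one-liner once the statistics $\sum_{d} n_{dw} \phi_{dwk}$ have been accumulated, for example during the per-document updates sketched above; the function below is again a hypothetical helper, not part of any library.

```python
import numpy as np

def update_lambda(sstats, eta):
    """lambda_kw = eta + sum_d n_dw * phi_{dwk}.

    sstats: (K, W) array of accumulated statistics sum_d n_dw * phi_{dwk},
            e.g. built up as sstats += phi * n_dw during the per-document updates.
    """
    return eta + sstats
```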
Putting everything together
We have shown the update rules for the variational parameters: $\phi_{dwk}$ in $\eqref{eq:update:phi}$, $\gamma_{dk}$ in $\eqref{eq:update:gamma}$, and $\lambda_{kw}$ in $\eqref{eq:update:lambda}$. With these, the variational Bayes algorithm is complete. There is one final thing to note, taken from Section 2.1 of the original paper.
We can actually partition these updates into two steps, analogous to the two steps in the EM algorithm. In the “E”-step, we keep updating $\gamma$ and $\phi$ until convergence, keeping $\lambda$ fixed. In the “M”-step, we update $\lambda$, holding $\gamma$ and $\phi$ fixed.
Now you can understand the paper’s Algorithm 1 fully and can start implementing it in your favorite language.
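To close, here is a minimal end-to-end sketch of batch variational Bayes for LDA along the lines of that algorithm. It assumes the corpus is given as a dense document-term count matrix of shape $(D, W)$ (real implementations use sparse counts and proper convergence checks), and the inner loop is the per-document update sketched earlier; the function name and default hyperparameters are illustrative choices.

```python
import numpy as np
from scipy.special import psi  # digamma function

def fit_lda(counts, K, alpha=0.1, eta=0.01, n_epochs=20, e_iters=50, seed=0):
    """Batch variational Bayes for LDA on a dense (D, W) count matrix."""
    rng = np.random.default_rng(seed)
    D, W = counts.shape
    lam = rng.gamma(100.0, 1.0 / 100.0, size=(K, W))   # q(beta_k) = Dir(lambda_k)
    gamma = np.ones((D, K))                            # q(theta_d) = Dir(gamma_d)

    for _ in range(n_epochs):
        Elogbeta = psi(lam) - psi(lam.sum(axis=1, keepdims=True))   # E_q[log beta_kw]
        sstats = np.zeros((K, W))
        # "E"-step: update gamma_d and phi_d per document, holding lambda fixed.
        for d in range(D):
            n_dw = counts[d]
            gamma_d = np.ones(K)
            for _ in range(e_iters):
                Elogtheta = psi(gamma_d) - psi(gamma_d.sum())
                log_phi = Elogtheta[:, None] + Elogbeta             # (K, W)
                log_phi -= log_phi.max(axis=0, keepdims=True)
                phi = np.exp(log_phi)
                phi /= phi.sum(axis=0, keepdims=True)
                gamma_d = alpha + phi @ n_dw
            gamma[d] = gamma_d
            sstats += phi * n_dw                                    # accumulate n_dw * phi_dwk
        # "M"-step: update lambda given the accumulated statistics.
        lam = eta + sstats
    return gamma, lam

# Example usage on a random toy corpus (counts would normally come from real documents).
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    toy_counts = rng.integers(0, 3, size=(10, 40)).astype(float)
    gamma, lam = fit_lda(toy_counts, K=3, n_epochs=5, e_iters=20)
    print(gamma.shape, lam.shape)           # (10, 3) and (3, 40)
```

The topic-word distributions can then be read off as the normalized rows of `lam`, and the normalized rows of `gamma` give each document’s topic weights.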