Suppose we have a vector $a \in \mathbb{R}^n$ and want to find another vector $x$ in an $\ell_1$-norm ball such that the distance between $a$ and $x$ is the smallest possible. In other words, we seek to solve the following problem

\[\begin{align} \begin{aligned} \min_{x} \quad & \left\{ f(x) = \frac{1}{2} \lVert x - a \rVert_2^2 \right\} \\ \text{such that} \quad & \lVert x \rVert_1 \leq \kappa. \end{aligned} \label{eq:primal} \end{align}\]The constraint $\lVert x \rVert_1 \leq \kappa$ denotes the $\ell_1$-norm ball of radius $\kappa > 0$ (that is, all the points whose $\ell_1$ norm is at most $\kappa$) and $f(x)$ is the primal objective function.

Does this problem have a closed-form solution? Yes, but not always. Like all projection problems, if $a$ is already in the convex set ($\lVert a \rVert_1 \leq \kappa$) then the solution is simply $x = a$, and we are done. Otherwise, we will need to approximate the solution.

In the above figure, we see an example in $n=2$ dimensions. The black square (containing all points on and within its boundaries) represents the $\ell_1$-norm ball of radius $\kappa = 1$. The center of the blue circle, in red, is the vector $a$ and the radius of the circle is the smallest distance from $a$ to the $\ell_1$-norm ball.

It is easy to see that $\eqref{eq:primal}$ is a convex optimization problem, as its objective and constraint are both convex. Note that $x = 0$ is a strictly feasible point, which means, by Slater's condition, strong duality holds. We therefore aim to solve $\eqref{eq:primal}$ by maximizing its dual function. Let $\gamma \geq 0$ be the dual variable; the Lagrangian is

\[\begin{align} L(x, \gamma) = \frac{1}{2} \lVert x - a \rVert_2^2 + \gamma (\lVert x \rVert_1 - \kappa). \label{eq:lagrangian} \end{align}\]Finding the dual objective function requires us to minimize $L(x, \gamma)$ with respect to $x$. Notice that we can rewrite the Lagrangian as

\[\begin{align*} L(x, \gamma) = - \kappa \gamma + \sum_{i=1}^{n} \left( \frac{1}{2} (x_i - a_i)^2 + \gamma \lvert x_i \rvert \right), \label{eq:lagrangian_as_sum} \end{align*}\]where the subscript $i$ denotes the $i$th element of a vector. If we let

\[\begin{align*} s_i(x_i, \gamma) = \frac{1}{2} (x_i - a_i)^2 + \gamma \lvert x_i \rvert, \label{eq:si} \end{align*}\]then minimizing $L(x, \gamma)$ with respect to $x$ means to minimize each $s_i(x_i, \gamma)$ with respect to $x_i$. Fortunately, the problem $\min_{x_i} s_i(x_i, \gamma)$ has a unique and closed-form solution:

\[\begin{align} x_i(\gamma) = \begin{cases} a_i - \gamma & \text{if} \quad a_i > \gamma \\ 0 & \text{if} \quad - \gamma \leq a_i \leq \gamma \\ a_i + \gamma & \text{if} \quad a_i < -\gamma. \end{cases} \label{eq:threshold_si} \end{align}\]This is called the *soft thresholding operator* for $\gamma$. Equation $\eqref{eq:threshold_si}$ shows us how to convert a dual solution $\gamma$ to a primal solution $x$. Now, if we let $s_i^*(\gamma) = \min_{x_i} s_i(x_i, \gamma) = s_i(x_i(\gamma), \gamma)$, the dual objective is

\[\begin{align*} g(\gamma) = - \kappa \gamma + \sum_{i=1}^{n} s_i^*(\gamma). \end{align*}\]

We know that $g$ is a concave function by design. Furthermore, since the solution to $\min_{x_i} s_i(x_i, \gamma)$ is unique for every $\gamma \geq 0$, by Danskin’s theorem, each $s_i^*$ is differentiable, which makes $g$ differentiable as well. We can easily verify that the derivative of $g$ is

\[\begin{align} g'(\gamma) = - \kappa + \sum_{i=1}^{n} \max(\lvert a_i \rvert - \gamma, 0). \label{eq:dual_derivative} \end{align}\]It only remains to be shown that $\frac{d}{d\gamma} s_i^*(\gamma) = \max(\lvert a_i \rvert - \gamma, 0)$. To see why, note that $$ \begin{align*} s_i^*(\gamma) = s_i(x_i(\gamma), \gamma) = \frac{1}{2} (x_i(\gamma) - a_i)^2 + \gamma \lvert x_i(\gamma) \rvert. \end{align*} $$ It is easy to show that, by $\eqref{eq:threshold_si}$, $$ \begin{align*} (x_i(\gamma) - a_i)^2 = \min(\lvert a_i \rvert, \gamma)^2, \end{align*} $$ and $$ \begin{align*} \lvert x_i(\gamma) \rvert = \max(\lvert a_i \rvert - \gamma, 0). \end{align*} $$ Now we consider two cases of $\gamma$. First, if $\gamma \leq |a_i|$, we have $s_i^*(\gamma) = \frac{1}{2} \gamma^2 + \gamma (|a_i| - \gamma) = - \frac{1}{2} \gamma^2 + |a_i| \gamma$, whose derivative is $|a_i| - \gamma$. Second, if $\gamma > |a_i|$, then $s_i^*(\gamma) = \frac{1}{2} a_i^2$, a constant, whose derivative is $0$. Either way, $\frac{d}{d\gamma} s_i^*(\gamma) = \max(|a_i| - \gamma, 0)$.
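As a quick sanity check of the closed-form derivative (my own addition, not part of the original derivation), we can compare it against a central finite difference of the dual objective. The soft thresholding step is written compactly as `np.sign(a) * np.maximum(np.abs(a) - gamma, 0)`:

```
import numpy as np

def dual_fn(gamma, kappa, a):
    # g(gamma) = -kappa * gamma + sum_i s_i(x_i(gamma), gamma)
    x = np.sign(a) * np.maximum(np.abs(a) - gamma, 0.0)  # soft thresholding
    return -kappa * gamma + np.sum(0.5 * (x - a) ** 2 + gamma * np.abs(x))

def dual_grad(gamma, kappa, a):
    return -kappa + np.sum(np.maximum(np.abs(a) - gamma, 0.0))

a = np.array([1.0, -2.0, 3.0])
kappa, gamma, h = 1.0, 0.7, 1e-6
fd = (dual_fn(gamma + h, kappa, a) - dual_fn(gamma - h, kappa, a)) / (2 * h)
print(abs(fd - dual_grad(gamma, kappa, a)) < 1e-5)  # True: the two derivatives agree
```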

So far we have been able to find the dual function $g(\gamma)$ and its derivative $g’(\gamma)$. Now we will explore a method to maximize $g(\gamma)$ and recover the primal optimal solution.

As a reminder, we will aim to solve the problem

\[\begin{align} \max_{\gamma} g(\gamma) \quad \text{such that} \quad \gamma \geq 0. \label{eq:dual} \end{align}\]Since $g$ is concave and differentiable, we can aim to maximize it by using a hill-climbing algorithm such as gradient ascent with backtracking line search. Below is an example dual function and its derivative at various values of $\gamma$.

In this post we will solve this problem using a different method called bisection. The aim here is to set the derivative to zero and solve for $\gamma$. In other words, we seek the solution to $g’(\gamma) = 0$. The bisection method requires us to have a range $[\gamma_{\min}, \gamma_{\max}]$ in which we are sure the optimal solution $\gamma^*$ lies.

First, since $\gamma^*$ must be feasible, we set $\gamma_{\min} = 0$. To find an upper bound, note that since the optimal objective value for Problem $\eqref{eq:primal}$ must be non-negative, and strong duality holds, the optimal value for Problem $\eqref{eq:dual}$ is also non-negative. This implies that

\[\begin{align*} - \kappa \gamma^* + \sum_{i=1}^{n} s_i(x_i(\gamma^*), \gamma^*) \geq 0. \end{align*}\]Therefore,

\[\begin{align*} \gamma^* & \leq \frac{1}{\kappa} \sum_{i=1}^{n} s_i(x_i(\gamma^*), \gamma^*) \leq \frac{1}{\kappa} \sum_{i=1}^{n} s_i(0, \gamma^*) = \frac{1}{\kappa} \sum_{i=1}^{n} \frac{a_i^2}{2} = \frac{1}{2 \kappa} \lVert a \rVert_2^2, \end{align*}\]where the second inequality is due to the fact that \(x_i(\gamma^*)\) is the minimizer of $s_i(x_i, \gamma^*)$. So an upper bound we can set for \(\gamma^*\) is \(\gamma_{\max} = \frac{1}{2 \kappa} \lVert a \rVert_2^2\).

Now that we know \(\gamma^*\) lies between $\gamma_{\min} = 0$ and \(\gamma_{\max} = \frac{1}{2 \kappa} \lVert a \rVert_2^2\), the bisection method works as follows. First, let $\gamma = (\gamma_{\min} + \gamma_{\max}) / 2$. If the sign of $g'(\gamma)$ is the same as that of $g'(\gamma_{\min})$, then $\gamma_{\min}$ is updated to $\gamma$. Otherwise, $\gamma_{\max}$ is updated to $\gamma$. It's as simple as that! The method is also guaranteed to converge, as each iteration halves the length of the interval $[\gamma_{\min}, \gamma_{\max}]$.

Another point to note is that \(g'\) is a monotonically non-increasing function. It achieves a minimum of \(-\kappa\) when \(\gamma \geq \max_i \left\{ \lvert a_i \rvert \right\}\) and a maximum of \(-\kappa + \lVert a \rVert_1\) at \(\gamma = 0\). When \(\lVert a \rVert_1 \leq \kappa\), \(g'\) is non-positive everywhere, so \(g'(\gamma) = 0\) may have no solution. In this special case, we can directly conclude that the solution is $x = a$ without having to solve anything else.

The figures above show the derivative of a dual function where $a = [1,2,3]^\top$. The dark green horizontal line depicts $y = 0$. We can see that $g'$ is non-increasing and piecewise linear. In the left plot, $\kappa$ is set to $2 < \lVert a \rVert_1 = 6$, which allows $g'$ to cross the $y = 0$ line, so a solution to $g'(\gamma) = 0$ exists. On the other hand, in the right plot, where $\kappa$ exceeds $\lVert a \rVert_1$, no solution exists. In this case one can directly output $a$ as the solution.

Here is a simple Python implementation of the bisection method for maximizing the dual objective. First we define a few functions.

```
import numpy as np

def primal_fn(x, a):
    return 0.5 * np.sum((x - a) ** 2)

# Vectorize the computation of s_i(x, gamma)
def s(x, gamma, a):
    return 0.5 * (x - a) ** 2 + gamma * np.abs(x)

# Soft thresholding: convert a dual solution gamma to a primal solution x
def x_gamma(gamma, a):
    sol = np.zeros_like(a)
    idx = a > gamma
    sol[idx] = a[idx] - gamma
    idx = a < -gamma
    sol[idx] = a[idx] + gamma
    return sol

def dual_fn(gamma, kappa, a):
    x = x_gamma(gamma, a)
    return -kappa * gamma + np.sum(s(x, gamma, a))

def dual_grad(gamma, kappa, a):
    return -kappa + np.sum(np.maximum(np.abs(a) - gamma, 0))
```

Then, the bisection method is straightforward. We can let the iterations run until the difference $\gamma_{\max} - \gamma_{\min}$ reaches below a pre-defined error $\varepsilon$, at which point the derivative should be close enough to $0$.

```
def bisection(a, kappa, eps=1e-5):
    gamma_min, gamma_max = 0, (1 / (2 * kappa)) * np.sum(a ** 2)
    # Run until gamma_max and gamma_min are within eps of each other
    while gamma_max - gamma_min > eps:
        gamma = (gamma_max + gamma_min) / 2
        grad = dual_grad(gamma, kappa, a)
        if grad < 0:
            gamma_max = gamma
        else:
            gamma_min = gamma
    return gamma
```

The first two plots in this post are produced using the following code.

```
# Point to be projected
a = np.array([1.1, 1.2])
# Radius of the ell_1 norm ball
kappa = 1
# Find approximate solution to the dual problem
dual_solution = bisection(a, kappa, eps=1e-5)
# Convert to the primal solution
primal_solution = x_gamma(dual_solution, a)
```
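As a quick sanity check (my addition, not part of the original code), the projection should land exactly on the boundary of the ball whenever $a$ lies outside it. The snippet below inlines equivalent versions of `x_gamma`, `dual_grad` and `bisection` so that it runs on its own:

```
import numpy as np

def x_gamma(gamma, a):
    # Equivalent compact form of the soft thresholding operator
    return np.sign(a) * np.maximum(np.abs(a) - gamma, 0.0)

def dual_grad(gamma, kappa, a):
    return -kappa + np.sum(np.maximum(np.abs(a) - gamma, 0.0))

def bisection(a, kappa, eps=1e-10):
    gamma_min, gamma_max = 0.0, np.sum(a ** 2) / (2 * kappa)
    while gamma_max - gamma_min > eps:
        gamma = (gamma_min + gamma_max) / 2
        if dual_grad(gamma, kappa, a) < 0:
            gamma_max = gamma
        else:
            gamma_min = gamma
    return gamma

a = np.array([1.1, 1.2])
kappa = 1.0
x = x_gamma(bisection(a, kappa), a)
print(np.isclose(np.abs(x).sum(), kappa))  # True: the solution is on the boundary
```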

In the figure at the top of this post, you probably have observed that as $a$ moves, there seems to be a “region” of $a$ in which the solution stays in a vertex of the square. Projection onto the $\ell_1$-norm ball has an interesting characteristic: in high dimensions, the optimal solution has a tendency to be *sparse*, which means most of its elements are driven to zero. To see how, let’s try an example.

```
>>> np.random.seed(100)
>>> a = np.random.randn(100)
>>> dual_solution = bisection(a, kappa=1, eps=1e-10)
>>> primal_solution = x_gamma(dual_solution, a)
>>> print(np.count_nonzero(primal_solution))
5
```

In this example, I generated a $100$-dimensional vector $a$ by independently sampling $100$ values from the standard normal distribution. After projection, only $5$ out of $100$ elements remain non-zero! (Be careful: I set $\varepsilon$ to be very small here, but it's generally better to compare floating-point numbers with `np.allclose` rather than exact equality.)

You may ask, “What’s the significance of this?” The tendency to drive most variables to zero is behind the success of the lasso method. Imagine you are performing a regression analysis with many, many variables. The lasso, or $\ell_1$ regularized, problem is

\[\begin{align} \min_{w} \frac{1}{2} \lVert Xw - y \rVert_2^2 + \lambda \lVert w \rVert_1, \label{eq:lasso} \end{align}\]where $X$ is the design matrix, $y$ is the vector of ground-truth labels and $\lambda$ is the regularization strength. Note that the lasso formulation is not exactly the problem discussed in this post: in lasso, the variable goes through a linear transformation first. However, a similar observation holds: the solution $w^*$ to this problem tends to be sparse, with most weights driven to zero.

While $\ell_2$ is a more popular regularizer, $\ell_1$ may be preferred if you want to assess feature importance: the few variables with non-zero coefficients tend to represent the most important features, which you may want to keep during feature selection.

In this post we explore the problem of projection onto an $\ell_1$-norm ball. We formalize the primal problem and see how the dual problem can be expressed and optimized. We also observe that the solution tends to be sparse in high dimensions.

Several things deserve mention in these concluding remarks. First, we have yet to talk about the *asymptotic complexity* of solving $\ell_1$ projection. That is, given some tolerance $\epsilon$, how much time do we need to achieve an approximate solution $x$ to Problem $\eqref{eq:primal}$ such that $f(x) - f(x^*) < \epsilon$? Second, you may be interested in a variant of $\ell_1$ projection called simplex projection, where the variable $x$ is also constrained to be non-negative. In this case the Lagrangian in $\eqref{eq:lagrangian}$ must involve another set of variables for the constraints $x_i \geq 0, i = 1, \ldots, n$, and a different optimization algorithm is needed. Third, minimizing the lasso objective as in $\eqref{eq:lasso}$ deserves some discussion, too. The resources below should offer some answers to these questions.

- Ryan Tibshirani’s lectures on convex optimization, specifically those on duality and proximal gradient descent.
- Convex Optimization by Boyd and Vandenberghe.
- Efficient Projections onto the $\ell_1$-Ball for Learning in High Dimensions by John Duchi, Shai Shalev-Shwartz, Yoram Singer and Tushar Chandra.

I think being able to implement backpropagation, at least in the simplest case, is quite important for its conceptual understanding. Hopefully this will benefit the students who stumble upon this page after a while of searching for “How to implement backprop.”

Below is a simple fully connected neural network.

Let’s decompose this architecture:

- The first layer has 5 neurons. This network accepts inputs that are 5-dimensional.
- The final layer has 1 neuron. It represents the loss function, which is a scalar.
- The second-last layer, which has 12 neurons, is what is typically called the last layer. If this layer is followed by a softmax, you can think of this network as a 12-class classifier.
- There are two hidden layers, one with 10 neurons and the other with 4.

Here’s the computation in a forward pass through this network:

- Start with the input, which is 5-dimensional.
- Compute the first hidden output:
  - Apply a linear transformation: $t_0 = W_0 x$
  - Apply a non-linear activation: $z_0 = \tanh(t_0)$, where we apply $\tanh$ to every element of $t_0$
- Compute the second hidden output:
  - Apply a linear transformation: $t_1 = W_1 z_0$
  - Apply a non-linear activation: $z_1 = \sigma(t_1)$, element-wise as well
- Compute the second-last layer (classification output):
  - Apply a linear transformation: $t_2 = W_2 z_1$
  - No activation: $z_2 = \text{Id}(t_2)$
- Compute the loss:
  - $\ell = \frac{1}{2} \lVert z_2 \rVert^2 = \frac{1}{2} \sum_{i} [z_2]_i^2$

Note that the dimensions of $W_0$, $W_1$ and $W_2$ are $10 \times 5$, $4 \times 10$ and $12 \times 4$, respectively.

There is nothing special about choosing $\tanh$ and $\sigma$ (sigmoid) as the activation functions in the two hidden layers; we could just as well choose others such as ReLU. Likewise, we could apply an activation function in the last layer as well.

Now our job is to find the gradient of $\ell$ with respect to the model parameters, that is, $\nabla_{W_0}\ell, \nabla_{W_1} \ell$ and $\nabla_{W_2}\ell$.

First, let's define our network in `numpy`. To make things a bit easier, we will define a few reusable classes.

The first class we define is a tensor. It is basically a `numpy` array together with an array of the same shape storing its gradient. The array is stored in `.data` and its gradient in `.grad`.

```
class Tensor:
    def __init__(self, arr, name=None):
        self.data = arr
        self.grad = None
        # Optionally store the name of this tensor
        self.name = name
```

Activation functions are functions that will be applied element-wise to tensors. For example, $z_0 = \tanh(t_0)$ means that $z_0$ and $t_0$ have the same dimensions, and every element in $z_0$ is the hyperbolic tangent transformation of the corresponding element in $t_0$.

We will have a base class called `Activation`, which implements two methods:

- `__call__` will apply the function to an input.
- `grad` will apply the gradient function to an input.

```
class Activation:
    def __call__(self, x):
        pass
    def grad(self, x):
        pass
```

Let's implement the $\tanh$ activation function. We can simply use `np.tanh` for the forward pass. The derivative of this function is

\[\begin{align*} \tanh'(x) = 1 - \tanh^2(x). \end{align*}\]

```
class Tanh(Activation):
    def __call__(self, x):
        return np.tanh(x)
    def grad(self, x):
        return 1 - np.tanh(x) ** 2
```

Similarly, we can implement the sigmoid function based on its formulas:

\[\begin{align*} \sigma(x) &= \frac{1}{1 + e^{-x}}\\ \sigma'(x) &= \sigma(x) (1 - \sigma(x)). \end{align*}\]

```
class Sigmoid(Activation):
    def __call__(self, x):
        return np.exp(x) / (1 + np.exp(x))
    def grad(self, x):
        sx = self(x)
        return sx * (1 - sx)
```

Another function we used above is the identity function, which simply returns the input. Its derivative is $1$.

```
class Identity(Activation):
    def __call__(self, x):
        return x
    def grad(self, x):
        return np.ones_like(x)
```
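Following the same pattern, here is what a ReLU activation, mentioned earlier as an alternative, could look like. This is my own sketch, not part of the original network, with the `Activation` base class repeated so the snippet is self-contained:

```
import numpy as np

class Activation:
    def __call__(self, x):
        pass
    def grad(self, x):
        pass

class ReLU(Activation):
    def __call__(self, x):
        return np.maximum(x, 0)
    def grad(self, x):
        # Subgradient of ReLU: 1 where x > 0, 0 elsewhere
        return (x > 0).astype(float)

relu = ReLU()
print(relu(np.array([-1.0, 2.0])))       # [0. 2.]
print(relu.grad(np.array([-1.0, 2.0])))  # [0. 1.]
```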

A loss function takes an input vector and returns a scalar. We will also implement the `grad` method for this class; `grad` should return a vector of the same shape as the input.

The example loss function above is (half) the squared norm: it squares every element of the input, sums them together, and divides the result by two. Calculus tells us that the gradient of such a function is the input itself.

```
class HalfSumSq:
    def __call__(self, x):
        return 0.5 * np.sum(x ** 2)
    def grad(self, x):
        return x
```
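A quick finite-difference check (my addition) confirms that the gradient of the half sum of squares is the input itself:

```
import numpy as np

x = np.array([1.0, -2.0, 0.5])
h = 1e-6
e0 = np.zeros_like(x)
e0[0] = h  # perturb only the first coordinate
# d/dx_0 of 0.5 * sum(x ** 2) should equal x_0
fd = (0.5 * np.sum((x + e0) ** 2) - 0.5 * np.sum((x - e0) ** 2)) / (2 * h)
print(np.isclose(fd, x[0]))  # True
```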

Now we are ready to put things together and create our neural net. For the sake of simplicity, we will only define one additional method for our class, `loss_and_grad`. It will (1) take an input $x$ and perform a forward pass to get the loss, and (2) perform a backward pass to calculate the gradient of the loss with respect to the parameters.

Having explained the forward pass above, we are able to define most of our network.

```
import numpy as np

class Net:
    def __init__(self):
        # Weight matrices. We will initialize them randomly
        self.weights = [Tensor(np.random.randn(output_dim, input_dim))
                        for input_dim, output_dim in [(5, 10), (10, 4), (4, 12)]]
        # Register t_0, t_1,... The default value (np.zeros) doesn't matter, as we
        # populate them in the forward pass later.
        self.linear_outputs = [Tensor(np.zeros(dim, dtype=float)) for dim in (10, 4, 12)]
        # Register z_0, z_1,... similarly
        self.nonlinear_outputs = [Tensor(np.zeros(dim, dtype=float)) for dim in (10, 4, 12)]
        # Activation and loss functions
        self.activations = [Tanh(), Sigmoid(), Identity()]
        self.loss = HalfSumSq()

    def loss_and_grad(self, x):
        curr_output = Tensor(x)
        # Forward prop
        for i in range(len(self.nonlinear_outputs)):
            # Linear transformation
            self.linear_outputs[i].data = self.weights[i].data @ curr_output.data
            curr_output = self.linear_outputs[i]
            # Activation function
            self.nonlinear_outputs[i].data = self.activations[i](curr_output.data)
            curr_output = self.nonlinear_outputs[i]
        # Loss function
        l = self.loss(curr_output.data)
        # We will implement backprop later
        # TODO: backprop
        return l
```

The forward propagation above creates a *computation graph*, which shows us the flow of signals from input to output. To find the gradients, we need to traverse this graph *backwards*, that is, from output to input, hence the name.

Recall that this is an application of the chain rule in multivariate calculus. Suppose we have a scalar function $h(v) = (f \circ g)(v) = f(g(v))$. To find the gradient of $h$ with respect to $v$, we follow the chain rule
\(\begin{align*}
J_{h}(v) = J_{f \circ g} (v) = J_{f}(g(v)) J_{g}(v),
\end{align*}\)
where $J$ denotes the *Jacobian*, which is a matrix of partial derivatives. Since $h$ is a scalar function, $J_{h}(v)$ is a row vector. Transposing it will give us the gradient with respect to $v$.

The computation of $h$ looks familiar. First, we have an input $v$. Then we transform $v$ into another (vector or scalar) value $g(v)$. Then we use $g(v)$ as the input to $f$. The chain rule says that to find the gradient with respect to $v$, we need to go backwards: differentiate $f$ with respect to $g(v)$ first, then differentiate $g$ with respect to $v$, then multiply them together.
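To make this concrete, here is a small numerical sketch (my own toy example): take $g(v) = \tanh(v)$ applied element-wise and $f(u) = \frac{1}{2}\lVert u \rVert^2$, and compare the chain-rule gradient of $h = f \circ g$ with a central finite difference.

```
import numpy as np

v = np.array([0.3, -1.0, 2.0])

def h(v):
    # h(v) = f(g(v)) with g = tanh (element-wise) and f(u) = 0.5 * ||u||^2
    return 0.5 * np.sum(np.tanh(v) ** 2)

# Chain rule: J_f(g(v)) = g(v)^T and J_g(v) = diag(1 - tanh(v)^2),
# so the gradient is the element-wise product below
grad = np.tanh(v) * (1 - np.tanh(v) ** 2)

# Finite-difference check of the first coordinate
eps = 1e-6
e0 = np.array([eps, 0.0, 0.0])
fd = (h(v + e0) - h(v - e0)) / (2 * eps)
print(np.isclose(fd, grad[0]))  # True
```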

Back to our example. As we have seen, the order of computation in a forward propagation is

\[\begin{align*} x \rightarrow t_0 \rightarrow z_0 \rightarrow t_1 \rightarrow z_1 \rightarrow t_2 \rightarrow z_2 \rightarrow \ell. \end{align*}\]It should be clear now that finding gradients means we have to traverse the network backwards. Start from the loss $\ell$. Differentiate it with respect to $z_2$. Then with respect to $t_2$. Then with respect to $z_1$. And so on.

These intermediate gradients are not what we ultimately want. What we actually want are the gradients with respect to $W_0, W_1$ and $W_2$, the matrices that transform a $z$ in one layer to a $t$ in the next layer. However, in calculating these gradients, the chain rule requires us to compute the intermediate gradients as well.

Below is a step-by-step procedure of backpropagation.

First, let’s start with $z_2$, the most immediate signal. Since we’re using the half sum of squares loss, the gradient is just $z_2$ itself:

\[\begin{align*} \nabla_{z_2} \ell = z_2. \end{align*}\]Now to $t_2$. Since $z_2$ is an element-wise identity transformation of $t_2$, using the chain rule we have

\[\begin{align*} \nabla_{t_2} \ell = \nabla_{z_2} \ell \odot \text{Id}'(t_2), \end{align*}\]where $\odot$ denotes element-wise multiplication. The reason why we have an element-wise multiplication here is that the Jacobian of $z_2$ with respect to $t_2$ is a diagonal matrix, $\text{diag}(\text{Id}’(t_2))$, and multiplying $J_\ell(z_2)$ with this matrix is the same as performing an element-wise product.
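Here is a tiny numerical sketch of this equivalence (my addition), using the sigmoid derivative instead of $\text{Id}'$ so the diagonal is non-trivial:

```
import numpy as np

t = np.array([0.5, -1.0, 2.0])
upstream = np.array([1.0, 2.0, 3.0])   # stands in for the gradient flowing back

sig = 1 / (1 + np.exp(-t))
deriv = sig * (1 - sig)                # sigma'(t), element-wise

# Multiplying by the diagonal Jacobian equals the element-wise product
via_jacobian = np.diag(deriv) @ upstream
via_hadamard = upstream * deriv
print(np.allclose(via_jacobian, via_hadamard))  # True
```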

Now let’s move back one layer. Recall that \(\begin{align*} t_2 = W_2 z_1. \end{align*}\)

We need to find the gradient for both $z_1$ and $W_2$. First, since this is a linear operation, differentiating $t_2$ with respect to $z_1$ will simply give us $W_2$. Using the chain rule again, we have

\[\begin{align*} \nabla_{z_1} \ell = W_2^\top (\nabla_{t_2} \ell). \end{align*}\]Now to $W_2$. Applying the chain rule, we have

\[\begin{align*} \nabla_{W_2} \ell = (\nabla_{t_2} \ell) z_1^\top. \end{align*}\]Note that this is an outer product.
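To see the outer-product formula in action, here is a finite-difference sketch (my own example, using a random matrix of the same shape as $W_2$):

```
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((12, 4))   # same shape as W_2
z1 = rng.standard_normal(4)

def loss(W):
    t2 = W @ z1                    # linear layer followed by the identity
    return 0.5 * np.sum(t2 ** 2)   # half sum of squares

# Analytic gradient: (nabla_{t_2} ell) z_1^T, and nabla_{t_2} ell = t_2 here
grad = np.outer(W @ z1, z1)

# Finite-difference check of one entry of W
eps = 1e-6
E = np.zeros_like(W)
E[3, 2] = eps
fd = (loss(W + E) - loss(W - E)) / (2 * eps)
print(np.isclose(fd, grad[3, 2]))  # True
```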

In both updates, of $z_1$ and of $W_2$, we used $\nabla_{t_2} \ell$ from the previous step. This is why the previous gradient signal needs to be stored during backpropagation, and why we need to calculate the gradients for variables we're not interested in (remember, we only need the gradients for the $W$'s).

Finally to $t_1$. Since $z_1$ is an element-wise sigmoid transformation of $t_1$, we apply the same formula as that for $t_2$, this time replacing $\text{Id}$ with $\sigma$:

\[\begin{align*} \nabla_{t_1} \ell = \nabla_{z_1} \ell \odot \sigma'(t_1). \end{align*}\]There is no need to repeat ourselves when finding the gradients for the rest of the variables, because the procedure for $(z_0, W_1, t_0)$ is identical to that for $(z_1, W_2, t_1)$. Once we have the gradient signal $\nabla_{t_1}\ell$, we're good to go.

One final note is that when we have traversed all the way to the beginning of the network, we only need to find the gradient with respect to $W_0$. This will require $z_{-1}$, which is just $x$. The gradient for $x$ (the input) is not used for anything.

We are now ready to fill in the TODO in the `loss_and_grad` method in `Net` above.

```
# Paste this code at the end of loss_and_grad
# Diff the loss w.r.t. final layer (This is nabla_{z2})
self.nonlinear_outputs[-1].grad = self.loss.grad(self.nonlinear_outputs[-1].data)
for i in range(len(self.nonlinear_outputs) - 1, -1, -1):
    # Gradient from z to t. The "*" below is the element-wise product
    self.linear_outputs[i].grad = \
        self.activations[i].grad(self.linear_outputs[i].data) * self.nonlinear_outputs[i].grad
    # Gradient w.r.t. weights matrix. This is nabla_{W}.
    prev_output = self.nonlinear_outputs[i-1].data if i > 0 else x
    self.weights[i].grad = np.outer(self.linear_outputs[i].grad, prev_output)
    # Check if we have traversed to the first layer
    if i > 0:
        # If not at the first layer, continue finding nabla_{z}
        self.nonlinear_outputs[i-1].grad = self.weights[i].data.T @ self.linear_outputs[i].grad
return l
```

Let's try an input $x$ and find the gradients of $\ell$ with respect to the parameters. After we call `loss_and_grad`, the gradients of all eligible tensors will be stored in their `.grad` attributes.

```
# For reproducibility
np.random.seed(100)
np_net = Net()
x = np.ones(5, dtype=float)
loss = np_net.loss_and_grad(x)
# Get the gradients for all parameters
np_grads = {"W" + str(i): g.grad for i, g in enumerate(np_net.weights)}
```

Now we are ready to take a gradient descent step!

To verify that our computation is correct, let's use `autograd` in PyTorch and find the gradients for the parameters.

```
import torch

pt_net = torch.nn.Sequential()
pt_net.add_module("W0", torch.nn.Linear(in_features=5, out_features=10, bias=False))
pt_net.add_module("A0", torch.nn.Tanh())
pt_net.add_module("W1", torch.nn.Linear(in_features=10, out_features=4, bias=False))
pt_net.add_module("A1", torch.nn.Sigmoid())
pt_net.add_module("W2", torch.nn.Linear(in_features=4, out_features=12, bias=False))
pt_net.add_module("A2", torch.nn.Identity())
# Copy the weights in our numpy network to this new network
for param, np_param in zip(pt_net.parameters(), np_net.weights):
    param.data = torch.tensor(np_param.data, dtype=float)
x = torch.ones(5, dtype=float)
output = pt_net(x)
loss = 0.5 * torch.sum(output ** 2)
print("Loss =", loss.detach().item())
loss.backward()
# Get the gradients for all parameters
pt_grads = {name.split(".")[0]: x.grad.numpy() for name, x in pt_net.named_parameters()}
pt_grads
```

Check that the gradients computed by both versions match.

```
for name in np_grads.keys():
    assert name in pt_grads
    print(name, "gradients match?", np.allclose(np_grads[name], pt_grads[name]))
```

```
W0 gradients match? True
W1 gradients match? True
W2 gradients match? True
```

We have learned how backpropagation works in a feed-forward neural network. Here are some things you can try on your own:

- Add more layers to the network.
- Try more activation functions, e.g., ReLU, leaky ReLU, GeLU, etc.
- Add bias to each Linear layer and find the gradient with respect to the bias.

The example we just went through is very simple. You may have seen other, more complicated, architectures in which the computation graph is not sequential. An example is ResNet with skip connections. The chain rule still applies, but backpropagation requires you to perform a topological sorting of the nodes in this graph, and traverse backwards. In fact, in our example, going back from output to input is basically this traversal, as our network is sequential.

Finally, you can download a Jupyter notebook version of this post here.

Consider the web as a collection of *pages*, some of which are connected to each other using *hyperlinks*. For example, the Wikipedia article on general relativity contains a hyperlink to another article on Albert Einstein. By clicking the link, we move from the former webpage to the latter.

We can model this as a graph \(G = (V, E)\), where the set of nodes (or vertices) \(V\) contains the webpages and the set of edges \(E\) contains binary relations \((v_i, v_j)\), indicating that the page \(v_i \in V\) contains a hyperlink to \(v_j \in V\). Since \(v_i\) may lead to \(v_j\) but not the other way around, the edge \((v_i, v_j)\) may be in \(E\) while \((v_j, v_i)\) may not. In this case we call the graph \(G\) a *directed* graph.

PageRank defines a score for each webpage where more “important” pages have high scores. This is particularly useful in *information retrieval*, where a system is asked to return pages relevant to a query. An assumption is that higher-ranked pages should be returned first, as they are more important and therefore have a higher chance of being what the user wants. Some other intuitions on building a ranking system are:

- If many pages have a hyperlink to page \(i\), then \(i\) should be important.
- If a highly ranked page links to page \(i\), then \(i\) should also be highly ranked.

Let \(r \in \mathbb{R}^n\) be the rank vector—that is, \(r_i\) is the numerical value denoting the importance of page \(i\). We will propose a method for finding \(r\) such that if \(r_i > r_j\), then page \(i\) is more important than page \(j\). Note that since the rankings are ordinal, we can just scale \(r\) by a positive number and the relative ordering of the pages based on importance will not change at all.

To find the importance of every page, we will need to exploit the structure of the graph, specifically the in- and out-links of every node. Suppose that page \(i\) with importance \(r_i\) has \(d_i\) out-neighbors—that is, pages that \(i\) links to. In graph theory, \(d_i\) is also called the *out-degree* of \(i\). Based on the intuitions above, we want these out-neighbors to enjoy \(i\)’s importance. To do so, we assume that each out-neighbor of \(i\) will get an equal amount of importance from \(i\). In other words, each out-neighbor will get an amount \(\frac{r_i}{d_i}\) of importance from \(i\).

In this setting, the importance of a page \(j\) will be the sum of all importance flowing into it from its in-neighbors:

\[\begin{align} \label{eq:importance_flow} r_j = \sum_{i \rightarrow j} \frac{r_i}{d_i}. \end{align}\]Notice that we have a recursive structure: every page passes importance to the pages it leads to, while its own importance flows from the pages leading to it.

Define a matrix \(A\), called the *importance matrix*, where \(A_{j, i} = \frac{1}{d_i}\) if page \(i\) leads to page \(j\), and \(A_{j, i} = 0\) otherwise. In other words, column \(i\) of \(A\) contains either \(0\) (where there is no out-going edge) or \(\frac{1}{d_i}\) (where there is). Since the out-degree of \(i\) is exactly \(d_i\), every column \(i\) must sum to \(1\).

The product \(A r\) gives us the importance flowing into every page. To see why, consider the \(j\)th component of this product:

\[\begin{align*} (A r)_j = \sum_{i=1}^{n} A_{j, i} r_{i} = \sum_{i \rightarrow j} \frac{r_i}{d_i}, \end{align*}\]where the last equality holds because \(A_{j, i}\) is non-zero (and equal to \(\frac{1}{d_i}\)) exactly when there is an edge from \(i\) to \(j\). This equation matches \(\eqref{eq:importance_flow}\) exactly.

One can think of \(A\) as an adjacency matrix of \(G\), but instead of \(A_{j, i} = 1\) when there is an edge from \(i\) to \(j\), we have \(A_{j, i} = \frac{1}{d_i}\). There is a nice interpretation of \(A\) called the random surfer model.

Suppose we have a web surfer who is currently on page \(i\). To visit a new page, the surfer will randomly choose one of the out-neighbors of \(i\). Since the out-degree of \(i\) is \(d_i\), if we assume that all out-neighbors are equally likely to be chosen, the probability of moving to any particular out-neighbor is \(\frac{1}{d_i}\). This is exactly what the matrix \(A\) captures.
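As a small sketch (the three-page graph is my own toy example), we can build \(A\) from a table of out-links and check that every column sums to \(1\):

```
import numpy as np

# Toy web: page 0 links to pages 1 and 2, page 1 links to page 2,
# and page 2 links back to page 0
out_links = {0: [1, 2], 1: [2], 2: [0]}
n = 3

A = np.zeros((n, n))
for i, neighbors in out_links.items():
    for j in neighbors:
        A[j, i] = 1.0 / len(neighbors)  # column i spreads r_i equally

print(A.sum(axis=0))  # [1. 1. 1.]: every column sums to 1
```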

Since \((A r)_j\) gives us the importance of page \(j\), which is also equal to \(r_j\), we have:

\[\begin{align} \label{eq:fixed_point} A r = r. \end{align}\]The solution \(r\) to this linear system is the vector containing the ranks of our webpages. Note that we can scale \(r\) by a positive number and it would still satisfy this equation, achieving our goal of preserving the order from positive scaling stated above.

Such an \(r\) satisfying \(\eqref{eq:fixed_point}\) is called a *fixed point* of \(A\), because applying \(A\) to \(r\) (that is, multiplying \(A\) by \(r\)) does not change \(r\) at all. I have another post on solving for a fixed point in the context of machine learning, which can be found here. In this post, we will revisit a method to solve for \(r\).

If we look again at equation \(\eqref{eq:fixed_point}\), we can recognize that this is an eigenvector problem. Specifically, if \(\eqref{eq:fixed_point}\) holds, then \(r\) must be an eigenvector of \(A\) corresponding to an eigenvalue of \(1\). There are two important questions to answer.

First, is it guaranteed that \(A\) has \(1\) as an eigenvalue? After all, \(A\) is just a non-negative matrix with each column summing to \(1\). It turns out that this is true, and we will see the proof below.

Second, given that \(1\) is an eigenvalue, we can solve \(A r = r\) using a row-reduction algorithm such as Gaussian elimination. Is that it? The answer is no, because Gaussian elimination has a time complexity of \(O(n^3)\), where \(n\) is the number of pages. This does not scale well with our page collection, as \(n\) could be in the billions, if not more. Therefore, we need to find another way to solve \(\eqref{eq:fixed_point}\).

To answer the first question above, notice that the matrix \(A\) is an example of a *stochastic matrix*, which is a square matrix with non-negative entries and having every column sum to 1. In the context of PageRank, \(A\) is also called the *stochastic adjacency matrix*.

What is interesting about a stochastic matrix is that it always has \(1\) as an eigenvalue, and all other eigenvalues (real or complex) of \(A\) are at most \(1\) in absolute value.

Since $A$ is a square matrix, $A$ and $A^\top$ share the same eigenvalues. We need to prove that $1$ is an eigenvalue of $A^\top$. Because every row of $A^\top$ sums to $1$, we have $A^\top \mathbf{1}_n = \mathbf{1}_n$, where $\mathbf{1}_n$ is a column vector of $n$ ones. So, $1$ is an eigenvalue of $A^\top$ and, therefore, of $A$.

To show why all other eigenvalues of $A$ are at most $1$ in absolute value, let $\lambda$ be an eigenvalue of $A$. Then $\lambda$ is also an eigenvalue of $A^\top$, associated with an eigenvector $x = [x_1,\ldots,x_n]^\top$. In other words, $A^\top x = \lambda x$. Let $j$ be the index of the largest element of $x$ in absolute value, that is, $|x_i| \leq |x_j| ~ \text{for all} ~ i=1,\ldots,n$. We have $$ \begin{align*} |\lambda| |x_j| = |\lambda x_j| = \left| \sum_{i=1}^{n} A_{i, j} x_i \right| \leq \sum_{i=1}^{n} A_{i, j} |x_j| = |x_j| \sum_{i=1}^{n} A_{i, j} = |x_j|, \end{align*} $$ where the first inequality uses the triangle inequality and the definition of $x_j$, and the last equality uses the fact that column $j$ of $A$ sums to 1. Since $x_j \neq 0$, this implies that $|\lambda| \leq 1$.
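Both facts are easy to check numerically on a small column-stochastic matrix (the matrix below is an arbitrary example):

```python
import numpy as np

# A small column-stochastic matrix: non-negative, each column sums to 1.
A = np.array([[0.0, 0.0, 1.0],
              [0.5, 0.0, 0.0],
              [0.5, 1.0, 0.0]])

eigvals = np.linalg.eigvals(A)

# 1 is an eigenvalue, and no eigenvalue exceeds 1 in absolute value.
assert np.any(np.isclose(eigvals, 1.0))
assert np.max(np.abs(eigvals)) <= 1.0 + 1e-12
```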

To answer the second question, we use the fact we just proved above: \(1\) is the largest eigenvalue of \(A\) in absolute value. In linear algebra, the largest absolute value of a matrix's eigenvalues is called its *spectral radius*. As an alternative to Gaussian elimination, a popular algorithm to find the spectral radius and its corresponding eigenvector is the power iteration.

Procedure: Power Iteration

Input: A diagonalizable $n \times n$ matrix $A$

Let $b_0$ be some non-zero vector

For $k = 0, \ldots, K-1$ do

Apply $A$ to $b_k$: $\tilde{b}_{k+1} = A b_{k}$

Normalize: $b_{k+1} = \frac{\tilde{b}_{k+1}}{\lVert \tilde{b}_{k+1} \rVert}$

Output: $b_{K}$


The sequence \(\left(\frac{\lVert A b_k \rVert}{\lVert b_k \rVert}\right)_k\) is guaranteed to converge to the spectral radius of \(A\) (which is \(1\) in our case), and the sequence \((b_k)_k\) converges to the corresponding eigenvector with unit norm.

How fast this sequence converges can be found here. In practice, one would run the power iteration until the difference between two iterates falls below some pre-defined tolerance \(\epsilon\). For example, we can run until \(\lVert b_{k+1} - b_{k} \rVert \leq 10^{-3}\).
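A minimal power iteration might look like the following sketch; the tolerance and the example matrix are illustrative choices:

```python
import numpy as np

def power_iteration(A, tol=1e-3, max_iter=1000, seed=0):
    """Approximate the dominant eigenvector of A by repeated multiplication."""
    rng = np.random.default_rng(seed)
    b = rng.random(A.shape[0])
    b /= np.linalg.norm(b)
    for _ in range(max_iter):
        b_next = A @ b
        b_next /= np.linalg.norm(b_next)       # normalize each iterate
        if np.linalg.norm(b_next - b) <= tol:  # stop when iterates stabilize
            return b_next
        b = b_next
    return b

# Column-stochastic example: the dominant eigenvalue is 1, so A r = r.
A = np.array([[0.0, 0.0, 1.0],
              [0.5, 0.0, 0.0],
              [0.5, 1.0, 0.0]])
r = power_iteration(A)
assert np.allclose(A @ r, r, atol=1e-2)   # r is (approximately) a fixed point
```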

We have learned how to find the importance scores of webpages in order to rank them. First, we construct the importance matrix from the structure of the graph. Then, we use the power iteration to solve for the fixed point of this matrix, which is the eigenvector corresponding to the largest eigenvalue in absolute value. This solution \(r\) now contains the importance of the pages, and we are ready to use \(r\) to rank them!

However, there are two potential problems with this approach. We will explore them and propose solutions below.

In the previous section, we learned to use the power iteration to solve for the importance vector \(r\). However, the power iteration works under the assumption that the matrix \(A\) is *diagonalizable*. This will not hold if a column of \(A\) contains all zeros, which happens when a webpage has no outgoing links. In other words, the page is a *dead end*.

How do we solve this? Let’s go back to the random surfer model above. If the surfer is at a dead end, meaning the page has no hyperlink for them to follow, we will assume that they will randomly jump to any other page in our collection. In addition, all pages are assumed to be equally likely to be chosen. So, if a page \(i\) is a dead end, we will replace the all-zeros column for \(i\) with a column of all \(\frac{1}{n}\)’s, where \(n\) is the number of pages in our collection.

Therefore, we can transform the matrix \(A\) into one without dead ends. Let us call this matrix \(A'\). Every column of \(A'\) now sums to 1.
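A sketch of this dead-end fix, assuming \(A\) is stored as a dense NumPy array:

```python
import numpy as np

def fix_dead_ends(A):
    """Replace all-zero columns (dead ends) with uniform 1/n columns."""
    A_prime = A.copy().astype(float)
    n = A.shape[0]
    dead = A_prime.sum(axis=0) == 0        # columns with no outgoing links
    A_prime[:, dead] = 1.0 / n
    return A_prime

# Page 2 is a dead end in this hypothetical 3-page graph.
A = np.array([[0.0, 0.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])
A_prime = fix_dead_ends(A)
assert np.allclose(A_prime.sum(axis=0), 1.0)   # every column now sums to 1
assert np.allclose(A_prime[:, 2], 1.0 / 3)     # the dead-end column is uniform
```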

The matrix \(A'\) is now guaranteed to be a stochastic matrix, and we are ready to use the power iteration to find its fixed point. However, the result might not be what we want. Consider the following scenario: in our web graph, there is a set of one or more nodes with no links going out of the set. There can be links between nodes within the set, but there are no links to any node outside it.

We call such a set of nodes a *spider trap*. But what is the problem? If we use the power iteration on a graph with a spider trap, all importance scores will end up captured by the nodes in the spider trap, and the rest of the nodes will have zero importance. Such pages can be created intentionally or unintentionally, but their existence causes PageRank to output an undesirable result.

So how do we deal with spider traps? Once the random surfer is in a spider trap, they will never be able to leave it by following links alone. Instead, we will assume that, when the surfer is at page \(i\), they will flip a coin. If the coin comes up heads, the surfer will follow a link at random, and the probability of choosing a page is found by looking up the \(i\)th column of \(A'\). If the coin comes up tails, the surfer will jump to a page in our collection uniformly at random. So, if page \(i\) is in a spider trap, the surfer has some chance of jumping outside the trap when the coin comes up tails.

To formalize this, let \(p\) be the probability of the coin coming up heads. The probability that the surfer, currently at page \(i\), will go to page \(j\) is

\[\begin{align*} p A'_{j, i} + (1 - p) \frac{1}{n}. \end{align*}\]In 1998, Larry Page and Sergey Brin, the founders of Google, proposed a matrix combining the solutions to these two problems. It is now widely called the *Google matrix*:

\[\begin{align*} \mathscr{G} = p A' + \frac{1 - p}{n} \mathbf{1}_n \mathbf{1}_n^\top, \end{align*}\]where \(\mathbf{1}_n \mathbf{1}_n^\top\) is the \(n \times n\) matrix of all ones, so that the \((j, i)\) entry of \(\mathscr{G}\) is exactly the transition probability above.

By using the power iteration on \(\mathscr{G}\), we can find the importance scores of the pages in our collection. This is the algorithm that Google uses to rank webpages.
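Putting the pieces together, here is a sketch of building such a matrix from \(A'\) and the transition probability above (with an illustrative \(p = 0.85\)), then iterating to a rank vector:

```python
import numpy as np

def google_matrix(A_prime, p=0.85):
    """Entry (j, i) is p * A'[j, i] + (1 - p) / n, per the surfer's coin flip."""
    n = A_prime.shape[0]
    return p * A_prime + (1 - p) / n * np.ones((n, n))

A_prime = np.array([[0.0, 0.0, 1.0],
                    [0.5, 0.0, 0.0],
                    [0.5, 1.0, 0.0]])
G = google_matrix(A_prime)
assert np.allclose(G.sum(axis=0), 1.0)   # G is still column-stochastic

# Power iteration on G: since columns sum to 1, G @ r keeps r summing to 1,
# so no renormalization is needed here.
r = np.full(3, 1.0 / 3)
for _ in range(200):
    r = G @ r
assert np.allclose(G @ r, r, atol=1e-8)  # r is the fixed point of G
```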

- Interactive Linear Algebra by Dan Margalit and Joseph Rabinoff. Specifically Chapter 5.
- Mining of Massive Datasets by Jure Leskovec, Anand Rajaraman, and Jeffrey D. Ullman. Specifically Chapter 5.
- CS224W - Machine Learning with Graphs by Jure Leskovec, Fall 2021 edition. Specifically Lecture 4.
- The Anatomy of a Large-Scale Hypertextual Web Search Engine by Sergey Brin and Lawrence Page.

The slides can be found here. The lecture recording is on YouTube.

- Explain what probabilistic topic modeling is, and what assumptions it makes.
- Recognize the observable and latent variables in a topic model, and specifically in LDA.
- Explain the generative process of LDA, and derive the complete probability.
- Explain what inference means in a mixture model, and why it is hard in LDA.
- Find the approximate posterior distribution of LDA using variational inference, and explain the procedure to find the optimal variational parameters.
- Explain what it means to “fit” an LDA model to a corpus, and describe how this procedure works.
- Be able to write code for an LDA model, including training and inference.

Being able to describe a large collection of documents is an important task in many disciplines. This task is often called “describe the haystack,” and the idea is to find the common *themes* that appear in the documents. For example, given a corpus of abstracts from papers published to PNAS, can we find the common scientific topics—such as “cellular biology,” “genetics” or “evolution”—that are covered in these abstracts? Another example is when you collect many tweets in a specific period, and want to find out what common topics people tweet about during this period, in the hope of predicting what topics will be trending in the near future. To help us approach this task, there are three points worth discussing here.

First, identifying topics by manually reading a collection of documents is probably the best way to characterize its themes, but the sheer size of a corpus makes this impossible; we are looking at tens of thousands of abstracts, hundreds of thousands of Reddit posts, millions of Wikipedia articles, and tens of millions of tweets. Coming up with a way in which a computer can help us *automatically* identify the topics is much more desirable.

Second, what do we mean by *topics*, or themes? Put simply, a topic is a probability distribution over the vocabulary. For example, a topic about natural language processing is a distribution, with (much) higher probabilities for words such as “machine,” “token,” “vector” and “likelihood” than for words such as “mechanic,” “torts,” “cell” and “chemical.” Typically, we describe a topic by a list of its, say, 10 or 15 most likely words. A human can look at this list and give the topic a representative name if necessary.

Third, it is quite evident that a document is rarely exclusively about one topic. (Well, this depends on how fine-grained you define each topic to be, but note that the more fine-grained, the harder it is to generalize.) In fact, we often associate a document with a *mixture* of topics, perhaps with a higher weight to some than others. For example, a research paper in machine learning can be a mixture of topics such as optimization, statistics, statistical physics, and so on, and a human reader can probably tell which topic is weighed higher than others after reading the paper. A solution to modeling this is to have a probability distribution over topics, given a document.

The two types of probability distributions described above are the main ingredients of probabilistic topic models such as LDA. If we are able to model them, we can do many useful things. First, using the topic-word distributions allows us to characterize the topics present in a corpus, thereby summarizing it in a meaningful way. Second, using the document-topic distributions allows us to draw inferences about the topics that a document is about, also helping with summarization. The applications of these models are quite boundless, which is why they are so popular in many fields such as computational social science, psychology, cognitive science, and so on.

However, in order to use these models correctly, and to weigh their pros and cons while making modeling decisions, one should not stop at simply calling `sklearn.decomposition.LatentDirichletAllocation`, but should understand the model, its assumptions, and how to tune its hyperparameters. To that end, let us dive into the details of the model.

A probabilistic topic model, LDA remains one of the most popular choices for topic modeling today. It is an example of a *mixture model* whose structure contains two types of random variables:

- The *observable variables* are the words you observe in each document.
- The *latent variables* are those you do not observe, but which describe some internal *structure* of your data, in particular, the “topics”.

You can readily see the assumption here, which is that there *is* some internal structure to your data, and our job is to model that structure using the latent variables.

In specifying a mixture model like LDA, we need to describe how data can be generated using this model. Before we do that, let us set up the notation carefully. Note that in this blog post, I have chosen the notation used in Hoffman, Blei and Bach’s paper on online learning for LDA. This post follows the “batch variational Bayes” part of the paper, with some more detail to help you read more easily.

Suppose we have a collection of $D$ documents, where each document $d$ is of length $N_d$. Also suppose that we have a fixed vocabulary of $W$ words. We wish to discover $K$ topics in this collection, where each topic $k$ is specified by a probability distribution $\beta_k$ over all words. The generative process works as follows. For document $d$, sample a probability distribution $\theta_d$ over the topics $1, \ldots, K$. For each word position $i$ in document $d$, sample a topic $z_{di}$ from the distribution $\theta_d$. With the chosen topic $z_{di}$, sample a word $w_{di}$ from the probability distribution $\beta_{z_{di}}$. In other words,

- Draw a topic-word distribution $\beta_k \sim \text{Dir}(\eta)$ for $k = 1, \ldots, K$.
- For each document $d = 1, \ldots, D$:
- Draw document-topic distribution for document $d$: $\theta_d \sim \text{Dir}(\alpha)$.
- For each word $i$ in document $d$:
- Draw a topic $z_{di} \sim \theta_d$.
- Draw a word $w_{di} \sim \beta_{z_{di}}$.
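The generative process above can be simulated directly; all sizes and hyperparameter values below are arbitrary toy choices:

```python
import numpy as np

rng = np.random.default_rng(0)
D, W, K, N = 5, 20, 3, 50   # toy sizes: documents, vocabulary, topics, doc length
alpha, eta = 0.5, 0.1       # symmetric Dirichlet priors (illustrative values)

# Draw topic-word distributions beta_k ~ Dir(eta), one row per topic.
beta = rng.dirichlet(np.full(W, eta), size=K)           # K x W

docs = []
for d in range(D):
    theta_d = rng.dirichlet(np.full(K, alpha))          # document-topic mixture
    z = rng.choice(K, size=N, p=theta_d)                # topic for each word slot
    words = np.array([rng.choice(W, p=beta[k]) for k in z])
    docs.append(words)

assert len(docs) == D and all(len(w) == N for w in docs)
assert np.allclose(beta.sum(axis=1), 1.0)   # each topic is a distribution
```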

The notation is summarized in the following table.

| Notation | Dimensionality | Meaning | Notes |
|---|---|---|---|
| $D$ | Scalar | Number of documents | Positive integer |
| $W$ | Scalar | Number of words in the vocabulary | Positive integer |
| $K$ | Scalar | Number of topics | Positive integer, typically much smaller than $D$ |
| $N_d$ | Scalar | Number of words in document $d$ | Positive integer |
| $\beta_k$ | $W$ | Word distribution for topic $k$ | $\beta_k$ ($k = 1, \ldots, K$) are mutually independent. Each $\beta_k$ is a non-negative vector and $\sum_{w=1}^{W} \beta_{kw} = 1$. |
| $\eta$ | Scalar | Dirichlet prior parameter for $\beta_k$ | All $\beta_k$ share the same parameter $\eta$. |
| $\theta_d$ | $K$ | Topic distribution for document $d$ | $\theta_d$ ($d = 1, \ldots, D$) are mutually independent. Each $\theta_d$ is a non-negative vector and $\sum_{k=1}^{K} \theta_{dk} = 1$. |
| $\alpha$ | Scalar | Dirichlet prior parameter for $\theta_d$ | All $\theta_d$ share the same parameter $\alpha$. |
| $w_{di}$ | Scalar | Word $i$ in document $d$ | $w_{di} \in \{1, 2, \ldots, W\}$ |
| $z_{di}$ | Scalar | Topic assignment for word $w_{di}$ | $z_{di} \in \{1, 2, \ldots, K\}$ |

The types of variables should be clear to us now. The only observables we have are $w$, the words in the documents. On the other hand, the latent variables are $z$, $\theta$ and $\beta$. The generative process allows us to specify the complete model—i.e., the joint distribution of both observable and latent variables—as follows

\[\begin{align} p(w, z, \theta, \beta \mid \alpha, \eta) & = p(\beta \mid \eta) \prod_{d=1}^{D} p(\theta_d \mid \alpha) p(z_d \mid \theta_d) p(w_d \mid \theta_d, z_d, \beta) \label{eq:joint_prob}\\ & = \prod_{k=1}^{K} p(\beta_k \mid \eta) \prod_{d=1}^{D} p(\theta_d \mid \alpha) \prod_{i=1}^{N_d} p(z_{di} \mid \theta_d) p(w_{di} \mid \theta_d, z_{di}, \beta). \nonumber \end{align}\]Note that there are two probability distributions used in this process. The first is the Dirichlet, used to sample $\beta_k$ and $\theta_d$. For example, the probability of the topic distribution for document $d$ is

\[p(\theta_d \mid \alpha) = \frac{\Gamma\left( K \alpha \right)}{\Gamma(\alpha)^K} \prod_{k=1}^K \theta_{dk}^{\alpha-1}.\]The second is the categorical distribution, used to sample $z_{di}$ and $w_{di}$. For example, to find the probability of the word $w_{di}$ given all other variables, we first need to look at the value of $z_{di}$. Suppose $z_{di} = 2$. Then the distribution we need to use is $\beta_2$, corresponding to the second topic, and the probability that $w_{di}$ equals some $w$ is

\[p(w_{di} | z_{di} = 2, \beta, \theta_d) = \beta_{2, w},\]that is, the $w$-th entry of $\beta_{2}$.

Inference refers to the task of finding the probability of latent variables given observable variables. In our LDA example, the quantity we want to calculate is

\[\begin{align} p(z, \theta, \beta \mid w, \alpha, \eta) = \frac{p(w, z, \theta, \beta \mid \alpha, \eta)}{\int_{z, \theta, \beta} p(w, z, \theta, \beta \mid \alpha, \eta) dz d\theta d\beta}. \label{eq:bayes-infr} \end{align}\]What is this quantity? Imagine you see a new document: how do you know what topics it belongs to, along with the topic weights? The probability in $\eqref{eq:bayes-infr}$ helps us do just that: use Bayes’ theorem to find the *posterior* distribution of the latent variables, enabling us to draw inferences about the structure of the document.

But there is a catch. The integral in the denominator of $\eqref{eq:bayes-infr}$, which is equal to $p(w \mid \alpha, \eta)$ and often called the *evidence*, is very hard to evaluate. This is mainly because of the coupling of the latent variables, and calculating it exactly takes exponential time. Instead, we will use a method called *variational inference* (VI) to approximate it.

(To keep this blog post short enough, I will not explain the details of VI. You are encouraged to check out Chapter 10 in Kevin Murphy’s textbook on probabilistic machine learning for an introduction to VI.)

Basically, the goal of VI is to approximate the distribution $p(z, \theta, \beta \mid w, \alpha, \eta)$ using a simpler distribution $q(z, \theta, \beta)$ that is “the closest” to $p$. Here “closeness” is defined by the Kullback-Leibler divergence between $q$ and $p$. In other words, we aim to solve the following optimization problem:

\[\min_{q} \left\{ \text{KL}(q(z, \theta, \beta) \| p(z, \theta, \beta \mid w, \alpha, \eta)) = \mathbb{E}_q \left[ \log \frac{q(z, \theta, \beta)}{p(z, \theta, \beta \mid w, \alpha, \eta)} \right] \right\}.\]Interestingly, minimizing this KL divergence is equivalent to maximizing the *evidence lower bound* (ELBO) of the data, where the ELBO $\mathcal{L}(w, z, \theta, \beta)$ is defined as

\[\mathcal{L}(w, z, \theta, \beta) = \mathbb{E}_q \left[ \log p(w, z, \theta, \beta \mid \alpha, \eta) \right] - \mathbb{E}_q \left[ \log q(z, \theta, \beta) \right].\]

As the name suggests, the ELBO is a lower bound on the log-likelihood of our data. The maximum ELBO gives us the “closest” approximation to the likelihood. Check Section 10.1.2 in Murphy’s textbook for a full derivation.

To “fit” the data in the Bayesian sense, we will aim to approximate the true posterior as well as possible. Applying VI to this task is called *variational Bayes* (VB).

We have mentioned the “simpler” distribution $q(z, \theta, \beta)$ above, but what exactly is it? In using VI for LDA inference, we assume that $q(z, \theta, \beta)$ factorizes into three marginal distributions:

- $q(z_{di} = k) = \phi_{d w_{di} k}$. The dimensionality of $\phi$ is $D \times W \times K$, and $\sum_{k=1}^{K} \phi_{d w k} = 1, \forall d, w$;
- $\theta_d \sim \text{Dir}(\gamma_d)$, where $\gamma_d$ is a vector of length $K$. Note that $\gamma_d$ is *not* symmetric;
- $\beta_k \sim \text{Dir}(\lambda_k)$, where $\lambda_k$ is a vector of length $W$. Similarly, $\lambda_k$ is *not* symmetric.

This is an application of the *mean-field assumption*, which says that variational distributions for each set of latent variables are mutually independent, allowing the joint to be factorized into marginals.

In summary,

\[\begin{align} q(z_d, \theta_d,\beta) = q(z_d) q(\theta_d)q(\beta), \label{eq:mean_field} \end{align}\]and we have three types of variational parameters: $\phi$ of size $D \times W \times K$; $\gamma_d$ of size $K$, for $d = 1, \ldots, D$; and $\lambda_k$ of size $W$, for $k = 1, \ldots, K$.

Given the complete model in $\eqref{eq:joint_prob}$ and the variational distribution in $\eqref{eq:mean_field}$, we can decompose the ELBO as follows: \(\begin{align} \mathcal{L}(w, \phi, \gamma, \lambda) & = \sum_{d=1}^{D} \left\{ \mathbb{E}_q\left[ \log p(w_d \mid \theta_d, z_d, \beta) \right] + \mathbb{E}_q\left[ \log p(z_d \mid \theta_d) \right] + \mathbb{E}_q\left[ \log p(\theta_d \mid \alpha) \right] \right\} \nonumber \\ &~~~~ - \sum_{d=1}^{D} \left\{ \mathbb{E}_q\left[ \log q(z_d) \right] + \mathbb{E}_q\left[ \log q(\theta_d) \right] \right\} \nonumber \\ &~~~~ + \mathbb{E}_q\left[ \log p(\beta \mid \eta) \right] - \mathbb{E}_q\left[ \log q(\beta) \right] \nonumber \\ & = \sum_{d=1}^{D} \left\{ \mathbb{E}_q\left[ \log p(w_d \mid \theta_d, z_d, \beta) \right] + \mathbb{E}_q\left[ \log p(z_d \mid \theta_d) \right] - \mathbb{E}_q\left[ \log q(z_d) \right] \right. \nonumber\\ &\quad \quad \quad ~ +\left.\mathbb{E}_q\left[ \log p(\theta_d \mid \alpha) \right] - \mathbb{E}_q\left[ \log q(\theta_d) \right] \right\} \nonumber \\ & ~~~~ + \mathbb{E}_q\left[ \log p(\beta \mid \eta) \right] - \mathbb{E}_q\left[ \log q(\beta) \right]. \label{eq:elbo} \end{align}\)

Let us analyze each term in the sum. \(\begin{align} \mathbb{E}_q\left[ \log p(w_d \mid \theta_d, z_d, \beta) \right] & = \sum_{i=1}^{N_d} \mathbb{E}_q\left[ \log p(w_{di} \mid \theta_d, z_{di}, \beta) \right] \nonumber \\ & = \sum_{i=1}^{N_d} \sum_{k=1}^{K} q(z_{di} = k) \mathbb{E}_q\left[ \log p(w_{di} \mid \theta_d, z_{di} = k, \beta) \right] \nonumber \\ & = \sum_{i=1}^{N_d} \sum_{k=1}^{K} \phi_{d w_{di} k} \mathbb{E}_q\left[ \log \beta_{k w_{di}} \right], \nonumber \end{align}\)

where the expectation on the last row is with respect to $q(\beta_k)$. We can see that in this formula, the contribution of each word $w$ to the term is $\sum_{k=1}^{K} \phi_{d w k} \mathbb{E} \left[ \log \beta_{k w} \right]$, which is the same regardless of the position of word $w$ in document $d$. Therefore, we can simply count the number of times $w$ appears in $d$, and then multiply that count with this contribution to get the contribution of all occurrences of $w$. This gives us the equivalent expression: \(\begin{align} \mathbb{E}_q\left[ \log p(w_d \mid \theta_d, z_d, \beta) \right] = \sum_{w=1}^{W} n_{dw} \sum_{k=1}^{K} \phi_{d w k} \mathbb{E}_q\left[ \log \beta_{k w} \right], \label{eq:elbo:1} \end{align}\)

where $n_{dw}$ is the number of occurrences of word $w$ in document $d$. Using the same trick, we have \(\begin{align} \mathbb{E}_q\left[ \log p(z_d \mid \theta_d) \right] & = \sum_{w=1}^{W} n_{dw} \sum_{k=1}^{K} \phi_{d w k} \mathbb{E}_q\left[ \log \theta_{dk} \right], \text{and} \label{eq:elbo:2} \\ \mathbb{E}_q\left[ \log q(z_d) \right] & = \sum_{w=1}^{W} n_{dw} \sum_{k=1}^{K} \phi_{d w k} \log \phi_{d w k}. \label{eq:elbo:3} \end{align}\)

For the last two terms inside the sum, first note that $p(\theta_d \mid \alpha)$ is a Dirichlet distribution with symmetric parameter $\alpha$, i.e., $p(\theta_d \mid \alpha) = \frac{\Gamma(K \alpha)}{\Gamma(\alpha)^K} \prod_{k=1}^{K} \theta_{dk}^{\alpha-1}$. Therefore, \(\begin{align} \mathbb{E}_q\left[ \log p(\theta_d \mid \alpha) \right] = \log \Gamma(K \alpha) - K \log \Gamma(\alpha) + (\alpha - 1) \sum_{k=1}^{K} \mathbb{E}_q\left[ \log \theta_{dk} \right]. \label{eq:elbo:4} \end{align}\)

Similarly, because $q(\theta_d)$ is a Dirichlet distribution with asymmetric parameter $\gamma_d$, we have \(\begin{align} \mathbb{E}_q\left[ \log q(\theta_d) \right] = \log \Gamma\left(\sum_{k=1}^{K} \gamma_{dk} \right) - \sum_{k=1}^{K} \log \Gamma(\gamma_{dk}) + \sum_{k=1}^{K} (\gamma_{dk} - 1) \mathbb{E}_q\left[ \log \theta_{dk} \right]. \label{eq:elbo:5} \end{align}\)

Now for the last two terms, also note that $p(\beta_k \mid \eta)$ is Dirichlet with symmetric $\eta$. Therefore, \(\begin{align} \mathbb{E}_q\left[ \log p(\beta \mid \eta) \right] &= \sum_{k=1}^{K} \mathbb{E}_q\left[ \log p(\beta_k \mid \eta) \right] \nonumber \\ &= K [\log \Gamma(W \eta) - W \log \Gamma(\eta)] + \sum_{k=1}^{K} \sum_{w=1}^{W} (\eta - 1) \mathbb{E}_q\left[ \log \beta_{k w} \right]. \label{eq:elbo:6} \end{align}\)

Similarly, the final term is \(\begin{align} \mathbb{E}_q\left[ \log q(\beta) \right] &= \sum_{k=1}^{K} \mathbb{E}_q\left[ \log q(\beta_k) \right] \nonumber \\ &= \sum_{k=1}^{K} \left( \log \Gamma \left( \sum_{w=1}^{W} \lambda_{kw} \right) - \sum_{w=1}^{W} \log \Gamma(\lambda_{kw}) + \sum_{w=1}^{W} (\lambda_{kw} - 1) \mathbb{E}_q\left[ \log \beta_{k w} \right] \right). \label{eq:elbo:7} \end{align}\)

Plugging $\eqref{eq:elbo:1}, \eqref{eq:elbo:2}, \eqref{eq:elbo:3}, \eqref{eq:elbo:4}, \eqref{eq:elbo:5}, \eqref{eq:elbo:6}, \eqref{eq:elbo:7}$ into $\eqref{eq:elbo}$, we have the ELBO as a function of variational parameters:

\[\begin{align} \mathcal{L} &= \sum_{d=1}^{D} \left\{ \sum_{w=1}^{W} n_{dw} \sum_{k=1}^{K} \phi_{dwk} \left( \mathbb{E}_q\left[ \log \theta_{dk} \right] + \mathbb{E}_q\left[ \log \beta_{k w} \right] - \log \phi_{dwk} \right) \right. \nonumber\\ & \left. \quad \quad \quad ~ - \log \Gamma\left( \sum_{k=1}^{K} \gamma_{dk} \right) + \sum_{k=1}^{K}\left( \log \Gamma(\gamma_{dk}) + (\alpha - \gamma_{dk}) \mathbb{E}_q\left[ \log \theta_{dk} \right] \right) \right\} \nonumber \\ &~~~~ + \sum_{k=1}^{K} \left( - \log \Gamma\left( \sum_{w}^{W} \lambda_{kw} \right) + \sum_{w=1}^{W} \left( \log \Gamma(\lambda_{kw}) + (\eta - \lambda_{kw}) \mathbb{E}_q\left[ \log \beta_{k w} \right] \right) \right) \nonumber \\ &~~~~ + D [\log \Gamma(K \alpha) - K \log \Gamma(\alpha)] + K [\log \Gamma(W \eta) - W \log \Gamma(\eta)]. \label{eq:elbo:var} \end{align}\]The main objective here is to maximize the ELBO $\mathcal{L}$ with respect to the variational parameters $\phi$, $\gamma$ and $\lambda$. To do so, we will use a procedure called *coordinate ascent*, in which we maximize $\mathcal{L}$ with respect to one set of parameters, keeping the others fixed. We will then alternate to another set of variables, keeping others fixed, and so on. In our LDA example, we first keep $\gamma$ and $\lambda$ fixed, and maximize $\mathcal{L}$ as a function of $\phi$ only. Then we do the same for $\gamma$ and $\lambda$.

Keeping only the terms involving $\phi_{dwk}$ in $\eqref{eq:elbo:var}$, and treating everything else as constant, we have the objective function w.r.t. $\phi_{dwk}$ as

\[\mathcal{L}_{[\phi_{dwk}]} = \phi_{dwk} \left( \mathbb{E}_q\left[ \log \theta_{dk} \right] + \mathbb{E}_q\left[ \log \beta_{k w} \right] - \log \phi_{dwk} \right) + \text{const},\]which gives the gradient:

\[\frac{\partial \mathcal{L}}{\partial \phi_{dwk}} = \mathbb{E}_q\left[ \log \theta_{dk} \right] + \mathbb{E}_q\left[ \log \beta_{k w} \right] - \log \phi_{dwk} - 1.\]Setting the gradient to zero and solving for $\phi_{dwk}$, we get the update rule for $\phi_{dwk}$:

\[\begin{align} \phi_{dwk} \propto \exp \left\{ \mathbb{E}_q\left[ \log \theta_{dk} \right] + \mathbb{E}_q\left[ \log \beta_{k w} \right] \right\}, \label{eq:update:phi} \end{align}\]where we have suppressed all multiplicative constants by using $\propto$. After this update for all $\phi_{dwk}$, we can simply rescale them so that $\sum_{k=1}^{K} \phi_{dwk} = 1, \forall d, w$.

The final thing to handle is the expectations inside the $\exp$. How do we calculate them exactly? Luckily, both of them can be calculated using the *digamma function* $\Psi$ (the first derivative of the logarithm of the gamma function) as follows:

\[\begin{align*} \mathbb{E}_q\left[ \log \theta_{dk} \right] = \Psi(\gamma_{dk}) - \Psi\left( \sum_{i=1}^{K} \gamma_{di} \right), \quad \text{and} \quad \mathbb{E}_q\left[ \log \beta_{kw} \right] = \Psi(\lambda_{kw}) - \Psi\left( \sum_{i=1}^{W} \lambda_{ki} \right). \end{align*}\]
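As a sketch, these Dirichlet expectations and the resulting $\phi$ update in $\eqref{eq:update:phi}$ can be computed with `scipy.special.psi` (the variational parameters below are random placeholders):

```python
import numpy as np
from scipy.special import psi   # the digamma function

def expect_log_dirichlet(param):
    """E[log x] under Dir(param): psi(param) - psi(sum of param)."""
    return psi(param) - psi(param.sum(axis=-1, keepdims=True))

rng = np.random.default_rng(0)
K, W = 3, 20
gamma_d = rng.random(K) + 1.0       # variational Dirichlet for theta_d
lam = rng.random((K, W)) + 1.0      # variational Dirichlet for each beta_k

E_log_theta = expect_log_dirichlet(gamma_d)        # length-K vector
E_log_beta = expect_log_dirichlet(lam)             # K x W matrix

# phi update for every word w in document d, then normalize over topics.
phi_d = np.exp(E_log_theta[:, None] + E_log_beta)  # K x W
phi_d /= phi_d.sum(axis=0, keepdims=True)
assert np.allclose(phi_d.sum(axis=0), 1.0)
```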

Similarly, the objective function w.r.t. $\gamma_{dk}$ is

\[\begin{align*} \mathcal{L}_{[\gamma_{dk}]} & = \sum_{w=1}^{W} n_{dw} \phi_{dwk} \mathbb{E}_q \left[ \log \theta_{dk} \right] - \log \Gamma\left( \sum_{i=1}^{K} \gamma_{di} \right) \\ & ~~~~+ \log \Gamma(\gamma_{dk}) + (\alpha - \gamma_{dk}) \mathbb{E}_q \left[ \log \theta_{dk} \right] + \text{const} \\ & = \left( \alpha + \sum_{w=1}^{W} n_{dw} \phi_{dwk} - \gamma_{dk} \right) \left( \Psi(\gamma_{dk}) - \Psi\left(\sum_{i=1}^{K} \gamma_{di}\right) \right) \\ & ~~~~ - \log \Gamma\left( \sum_{i=1}^{K} \gamma_{di} \right) + \log \Gamma(\gamma_{dk}) + \text{const}, \end{align*}\]where we have used the digamma function $\Psi$ similarly to the previous section. A simple manipulation gives the gradient:

\[\begin{align*} \frac{\partial \mathcal{L}}{\partial \gamma_{dk}} = \left( \Psi'(\gamma_{dk}) - \Psi'\left(\sum_{i=1}^{K} \gamma_{di}\right) \right) \left( \alpha + \sum_{w=1}^{W} n_{dw} \phi_{dwk} - \gamma_{dk} \right). \end{align*}\]Setting this gradient to zero and solving for $\gamma_{dk}$, we get the update rule for $\gamma_{dk}$:

\[\begin{align} \gamma_{dk} = \alpha + \sum_{w=1}^{W} n_{dw} \phi_{dwk}. \label{eq:update:gamma} \end{align}\]The variational Bayes estimate of $\gamma$ has an intuitive explanation. The weight that document $d$ gives to topic $k$ is the weighted count of words in $d$ assigned to topic $k$, where the weight $\phi_{dwk}$ is the probability that word $w$ in document $d$ belongs to topic $k$, plus the Dirichlet prior $\alpha$.

Similar to $\gamma$, we can use the digamma function $\Psi$ in the objective function w.r.t. $\lambda_{kw}$ as follows

\[\begin{align*} \mathcal{L}_{[\lambda_{kw}]} & = \left( \eta + \sum_{d=1}^{D} n_{dw} \phi_{dwk} - \lambda_{kw} \right) \left( \Psi(\lambda_{kw}) - \Psi\left(\sum_{i=1}^{W} \lambda_{ki} \right) \right) \\ & ~~~~ - \log \Gamma\left(\sum_{i=1}^{W} \lambda_{ki} \right) + \log \Gamma(\lambda_{kw}) + \text{const}, \end{align*}\]which gives the gradient:

\[\begin{align*} \frac{\partial \mathcal{L}}{\partial \lambda_{kw}} = \left( \Psi'(\lambda_{kw}) - \Psi'\left(\sum_{i=1}^{W} \lambda_{ki} \right) \right) \left( \eta + \sum_{d=1}^{D} n_{dw} \phi_{dwk} - \lambda_{kw} \right). \end{align*}\]Setting the gradient to zero and solving for $\lambda_{kw}$, we get the update estimate:

\[\begin{align} \lambda_{kw} = \eta + \sum_{d=1}^{D} n_{dw} \phi_{dwk}. \label{eq:update:lambda} \end{align}\]Similar to $\gamma_{dk}$, the variational Bayes estimate of $\lambda$ has an intuitive explanation. The count of word $w$ in topic $k$ is the weighted sum of the counts of $w$ in each document $d$, where the weight $\phi_{dwk}$ is the probability that word $w$ in document $d$ belongs to topic $k$, plus the Dirichlet prior $\eta$.

We have shown the update rules for the variational parameters: $\phi_{dwk}$ in $\eqref{eq:update:phi}$, $\gamma_{dk}$ in $\eqref{eq:update:gamma}$, and $\lambda_{kw}$ in $\eqref{eq:update:lambda}$. The variational Bayes algorithm is complete. There is one final thing to note, taken from Section 2.1 of the original paper.

We can actually partition these updates into two steps, analogous to the two steps of the EM algorithm. In the “E”-step, we repeatedly update $\gamma$ and $\phi$ until convergence, keeping $\lambda$ fixed. In the “M”-step, we update $\lambda$ while holding $\gamma$ and $\phi$ fixed.
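Putting the three update rules together, here is a compact, unoptimized sketch of the batch variational Bayes loop; the toy corpus and hyperparameter values are illustrative:

```python
import numpy as np
from scipy.special import psi   # digamma function

def e_log_dir(param):
    """E[log x] under a Dirichlet with the given parameter (row-wise)."""
    return psi(param) - psi(param.sum(axis=-1, keepdims=True))

def lda_vb(n_dw, K, alpha=0.1, eta=0.01, iters=50, seed=0):
    """Batch variational Bayes for LDA on a D x W count matrix n_dw (a sketch)."""
    rng = np.random.default_rng(seed)
    D, W = n_dw.shape
    lam = rng.gamma(100.0, 1.0 / 100.0, size=(K, W))  # q(beta) parameters
    gamma = np.ones((D, K))                           # q(theta) parameters
    for _ in range(iters):
        E_log_beta = e_log_dir(lam)                   # K x W
        lam_new = np.full((K, W), eta)
        for d in range(D):
            # "E"-step: alternate phi and gamma updates for this document.
            for _ in range(20):
                E_log_theta = e_log_dir(gamma[d])                 # length K
                phi = np.exp(E_log_theta[:, None] + E_log_beta)   # K x W
                phi /= phi.sum(axis=0, keepdims=True)
                gamma[d] = alpha + phi @ n_dw[d]      # update rule for gamma
            # Accumulate sufficient statistics for the "M"-step.
            lam_new += phi * n_dw[d]
        lam = lam_new                                 # update rule for lambda
    return gamma, lam

counts = np.array([[3, 0, 1, 0],
                   [0, 2, 0, 4]])   # D=2 documents, W=4 vocabulary words
gamma, lam = lda_vb(counts, K=2)
assert gamma.shape == (2, 2) and lam.shape == (2, 4)
# Each gamma_d sums to K * alpha plus the total word count of document d.
assert np.allclose(gamma.sum(axis=1), 2 * 0.1 + counts.sum(axis=1))
```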

Now you can understand the paper’s Algorithm 1 fully and can start implementing it in your favorite language.

Let $g: \mathbb{R}^d \rightarrow \mathbb{R}^d$ be an affine function of the form $g(x) = Ax + b$, where $A \in \mathbb{R}^{d \times d}$ and $b \in \mathbb{R}^d$. We would like to find a *fixed point* of $g$, which is a vector $x^\ast$ such that $g(x^\ast) = x^\ast$. The reason $x^\ast$ is called a fixed point is that applying $g$ to $x^\ast$ does not change it.

The analytical solution to this problem is $x^\ast = -(A - I)^{-1} b$, but there are several potential issues with it. First, $A - I$ may not be invertible, in which case we need to fall back to a least-squares solution via the pseudoinverse. Second, even if it is invertible, the cost of solving for $x^\ast$ is $O(d^3)$, where $d$ is the dimensionality, which is very costly in high dimensions.

The common numerical method to solve for a fixed point of $g$ is the *fixed-point iteration*. Start with a randomly chosen $x_0$ and iteratively apply $g$ to it:

\[x_{t+1} = g(x_t), \quad t = 0, 1, 2, \ldots,\]

until $\lVert g(x_{t+1}) - x_{t+1} \rVert < \epsilon$ for some predetermined precision $\epsilon$. In order for this to converge, we want to ensure that $g$ is a contraction mapping, that is, there exists an $L \in [0, 1)$ such that $\lVert g(x) - g(x') \rVert \leq L \lVert x - x' \rVert$ for all $x, x' \in \mathbb{R}^d$. This can be achieved when the *spectral radius* of $A$ is less than $1$.
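A minimal sketch of the fixed-point iteration for this affine $g$, on a toy contraction:

```python
import numpy as np

def fixed_point_iteration(A, b, eps=1e-8, max_iter=10000, seed=0):
    """Iterate x <- g(x) = Ax + b until the residual ||g(x) - x|| is below eps."""
    rng = np.random.default_rng(seed)
    x = rng.random(A.shape[0])
    for _ in range(max_iter):
        x_next = A @ x + b
        if np.linalg.norm(A @ x_next + b - x_next) < eps:
            return x_next
        x = x_next
    return x

# Spectral radius of A is 0.5 < 1, so g is a contraction and iteration converges.
A = np.array([[0.5, 0.1],
              [0.0, 0.3]])
b = np.array([1.0, 2.0])
x_star = fixed_point_iteration(A, b)
assert np.allclose(A @ x_star + b, x_star, atol=1e-6)
# Matches the analytical solution x* = -(A - I)^{-1} b.
assert np.allclose(x_star, -np.linalg.solve(A - np.eye(2), b), atol=1e-6)
```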

One can show that to achieve a precision of $\epsilon$, we need $O\left(\kappa \log \frac{1}{\epsilon} \right)$ iterations, where $\kappa$ is the *condition number* of $A$: the ratio between $A$’s largest and smallest singular values.
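The condition number can be read off the singular values directly:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
s = np.linalg.svd(A, compute_uv=False)       # singular values, in descending order
kappa = s[0] / s[-1]                         # largest / smallest singular value
print(np.isclose(kappa, np.linalg.cond(A)))  # matches NumPy's 2-norm condition number
```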

Fixed-point iteration can therefore converge very slowly when the condition number of $A$ is large. (In real datasets, $\kappa$ can exceed $10^6$.) Anderson acceleration (AA) can speed up convergence considerably. Here’s how it works.

Define $f_t = g(x_t) - x_t$ to be the *residual* at iteration $t$. To find $x_{t+1}$, consider the previous $m_t + 1$ iterates $\{x_{t - m_t}, x_{t - m_t + 1}, \ldots, x_t \}$, where $m_t$ is a *window size* of your choosing. We form a linear combination of these vectors,

\[\bar{x}_t = \sum_{i=0}^{m_t} \alpha_i^{(t)} x_{t - m_t + i}, \qquad \boldsymbol{1}^\top \alpha^{(t)} = 1,\]

and find $\alpha^{(t)} \in \mathbb{R}^{m_t + 1}$ such that \(\| g(\bar{x}_t) - \bar{x}_t \|\) is minimized. In other words, we use the previous iterates to better guide us toward the solution. You can check the paper for a full derivation, but the $\alpha^{(t)}$ we should choose is

\[\label{eqn:alpha} \alpha^{(t)} = \frac{(F_t^\top F_t)^{-1}\boldsymbol{1}}{\boldsymbol{1}^\top (F_t^\top F_t)^{-1} \boldsymbol{1}},\]where $F_t = \left[ f_{t- m_t},\ldots, f_{t} \right] \in \mathbb{R}^{d \times (m_t + 1)}$ is the matrix of residuals in the window and $\boldsymbol{1}$ is the $(m_t + 1)$-dimensional column vector of all ones.
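In code, this might look like the following sketch (solving a linear system rather than forming the inverse explicitly):

```python
import numpy as np

def anderson_alpha(F):
    """Weights alpha minimizing ||F alpha|| subject to 1^T alpha = 1."""
    ones = np.ones(F.shape[1])
    z = np.linalg.solve(F.T @ F, ones)  # z = (F^T F)^{-1} 1
    return z / (ones @ z)               # normalize so the weights sum to 1

rng = np.random.default_rng(0)
F = rng.standard_normal((100, 3))       # d = 100 residuals, window m_t = 2
alpha = anderson_alpha(F)
```

By construction the weights sum to one, and the combined residual $\lVert F_t \alpha^{(t)} \rVert$ is no larger than any single column's norm.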

After finding $\alpha^{(t)}$, we set the new iterate to

\[\label{eqn:extrapolate} x_{t+1} = \beta \sum_{i=0}^{m_t} \alpha_i^{(t)} g(x_{t - m_t + i}) + (1 - \beta) \sum_{i=0}^{m_t} \alpha_i^{(t)} x_{t - m_t + i},\]where $\beta \in [0, 1]$ is a predetermined *mixing parameter*.

In the paper’s Algorithm 1, we actually set $\alpha^{(t)}$ as

\[\label{eqn:alpha_reg} \alpha^{(t)} = \frac{(F_t^\top F_t + \lambda I)^{-1}\boldsymbol{1}}{\boldsymbol{1}^\top (F_t^\top F_t + \lambda I)^{-1} \boldsymbol{1}},\]which is slightly different from \eqref{eqn:alpha}. The reason is that we solve the regularized version of the problem

\[\underset{\alpha^{(t)}: \boldsymbol{1}^\top \alpha^{(t)} = 1}{\min} \| g(\bar{x}_t) - \bar{x}_t \|^2 + \lambda \| \alpha^{(t)} \|^2\]for stability (Section II). Without regularization ($\lambda = 0$), we recover \eqref{eqn:alpha}.

Anderson acceleration is very similar to vanilla fixed-point iteration: start with some $x_0$; in each iteration, find $\alpha^{(t)}$ as above and *extrapolate* from the $m_t + 1$ previous iterates to obtain the next iterate $x_{t+1}$. Concretely, in each iteration $t$:

- Calculate $g(x_t)$.
- Compute the residual: $f_t = g(x_t) - x_t$.
- Form the residual matrix: $F_t = \left[ f_{t- m_t},\ldots, f_{t} \right]$.
- Solve for $\alpha^{(t)}$ according to \eqref{eqn:alpha_reg}.
- Extrapolate from $m_t + 1$ previous iterates according to \eqref{eqn:extrapolate}.
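Under the assumption $g(x) = Ax + b$, the five steps above can be sketched as a single function. (`aa_step` here is an illustrative helper, not the repository's `AndersonAcceleration` class; it also recomputes $g$ over the whole window each step for simplicity, whereas a real implementation would cache $g(x_i)$.)

```python
import numpy as np

def aa_step(g, xs, reg=0.0, beta=1.0):
    """One AA step from the m_t + 1 most recent iterates xs (oldest first)."""
    gxs = [g(x) for x in xs]                                  # step 1: g(x_i)
    F = np.stack([gx - x for gx, x in zip(gxs, xs)], axis=1)  # steps 2-3: residuals F_t
    ones = np.ones(F.shape[1])
    z = np.linalg.solve(F.T @ F + reg * np.eye(F.shape[1]), ones)
    alpha = z / (ones @ z)                                    # step 4: regularized alpha
    # Step 5: extrapolate with mixing parameter beta.
    return (beta * np.stack(gxs, axis=1) @ alpha
            + (1 - beta) * np.stack(xs, axis=1) @ alpha)

rng = np.random.default_rng(0)
d = 20
A = 0.05 * rng.standard_normal((d, d))  # contraction: spectral radius well below 1
b = rng.standard_normal(d)
g = lambda v: A @ v + b

xs = [np.zeros(d)]
for _ in range(25):
    xs.append(aa_step(g, xs[-3:], reg=1e-8))  # window size m_t <= 2
x = xs[-1]
```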

You can find the implementation in the `aa.py` file. The `AndersonAcceleration` class should be instantiated with `window_size` ($m_t$, defaulting to $5$) and `reg` ($\lambda$, defaulting to $0$). Here’s an example.

```python
>>> import numpy as np
>>> from aa import AndersonAcceleration
>>> acc = AndersonAcceleration(window_size=2, reg=0)
>>> x = np.random.rand(100) # some iterate
>>> x_acc = acc.apply(x) # accelerated from x
```

You will need to apply $g$ to $x_t$ first; the result $g(x_t)$ should be the input to `acc.apply`, which will solve for $\alpha^{(t)}$ and extrapolate to find $x_{t+1}$. See the repository for more detail.

We will minimize a strictly convex quadratic objective; check `quadratic_example.ipynb` for more detail. The plot below shows the *optimality gap* between $f(x_t)$ and $f(x^\ast)$ over $t$. AA with a window size of $2$ converges much faster than vanilla gradient descent (GD).

We will minimize the $\ell_2$-regularized cross-entropy loss for logistic regression; check `logistic_regression_example.ipynb` for more detail. Again, AA is much more favorable than vanilla GD when optimizing this objective.
