Noah Golmant

Meta-Learning and Optimization

In this post, I’ll try to delve into meta-learning from an optimization perspective. I’ll be asking some small questions about an interesting meta-learning algorithm called Model-Agnostic Meta-Learning (MAML). The objective of MAML is to be able to adapt to new tasks from only a few examples. If I train a model to do well on a bunch of other tasks, I’d like to come up with something that can do well on a new task, given only a few steps of gradient descent to adapt the parameters a bit. MAML does this by finding parameters such that when you take a gradient descent step from those parameters, you end up close to the optimum for your task. In the end, we obtain some parameters called a meta-model. If I receive examples for a new task, I can run gradient descent to minimize that task’s loss, initializing the process from the meta-model. If this task is similar to the training tasks I used to produce the meta-model, the “fine-tuned” model I end up with should do pretty well.

I’m interested in studying what the meta-model really is because I think it could provide some interesting insights into how MAML implicitly gauges the similarity between tasks. I also think it could help us make some stronger theoretical statements about how good the meta-model is as a starting point for gradient descent. The only MAML theory I’m aware of focuses on probabilistic interpretations and universality (which are super cool!).

This post will basically be a bit of a case study for MAML where I look into what it does on very simple objectives that are quadratic in the model parameters. I’ll remove the stochasticity from play, focusing on vanilla gradient descent. Then, I’ll derive the fixed point for the MAML gradient descent update equation. There’s a kind of interesting interpretation of this fixed point as a curvature-weighted average of the optima for the objectives. Then I’ll calculate the loss of the fine-tuned models and prove that MAML really does its job. I’ll close with some thoughts about how this picture might relate to the more general case, e.g. when the objectives are strongly convex.

The MAML objective

Let’s start out by stating the MAML objective. Since I’m not thinking about stochasticity right now, a task will just consist of a loss function $f$ on a domain $\mathbb{R}^d$. So we’ll consider running MAML on two equally important objectives $f_1, f_2 : \mathbb{R}^d \to \mathbb{R}$. For any objective $f$ and step size $\alpha > 0$, I’ll call its gradient descent update $U(\theta) = \theta - \alpha \nabla f(\theta)$. Now we’re ready to state the MAML objective:

$$F(\theta) = \frac{1}{2}\left[ f_1(U_1(\theta)) + f_2(U_2(\theta)) \right]$$

We can minimize this by running gradient descent on $F$ with some step size $\beta$. The gradient is

$$\nabla F(\theta) = \frac{1}{2}\left[ \big(I - \alpha H_1(\theta)\big)\,\nabla f_1(U_1(\theta)) + \big(I - \alpha H_2(\theta)\big)\,\nabla f_2(U_2(\theta)) \right],$$

where $H_i(\theta)$ is the Hessian of $f_i$ at $\theta$. And, like usual, we’ll get an update equation $\theta_{t+1} = \theta_t - \beta\,\nabla F(\theta_t)$.
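To make the update concrete, here is a minimal NumPy sketch of the meta-gradient and the outer step for this two-task setup. This is my own illustration, not code from the original post: each task is assumed to supply callables for its gradient and Hessian, and the function names are mine.

```python
import numpy as np

def maml_grad(theta, tasks, alpha):
    """Gradient of F(theta) = (1/2) * sum_i f_i(U_i(theta)),
    where U_i(theta) = theta - alpha * grad_f_i(theta).
    `tasks` is a list of (grad_f, hess_f) pairs of callables."""
    d = theta.shape[0]
    total = np.zeros_like(theta)
    for grad_f, hess_f in tasks:
        u = theta - alpha * grad_f(theta)                 # inner gradient step U_i(theta)
        total += (np.eye(d) - alpha * hess_f(theta)) @ grad_f(u)
    return total / len(tasks)                             # average over tasks, matching the 1/2 above

def maml_step(theta, tasks, alpha, beta):
    """One outer (meta) gradient descent step with step size beta."""
    return theta - beta * maml_grad(theta, tasks, alpha)
```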

Quadratic Forms

Like I said, we’ll be considering some very simple functions here. Let $A, B \in \mathbb{R}^{d \times d}$ be symmetric, positive definite matrices, and let $\theta_1, \theta_2 \in \mathbb{R}^d$. Then we’ll set

$$f_1(\theta) = \frac{1}{2}(\theta - \theta_1)^\top A\,(\theta - \theta_1), \qquad f_2(\theta) = \frac{1}{2}(\theta - \theta_2)^\top B\,(\theta - \theta_2).$$

Clearly $f_1$ is minimized at $\theta_1$ while $f_2$ is minimized at $\theta_2$. The gradients are $\nabla f_1(\theta) = A(\theta - \theta_1)$ and $\nabla f_2(\theta) = B(\theta - \theta_2)$. The Hessians are $A$ and $B$, respectively. I’m pretty sure this is the simplest setup I could look at with any interesting behavior. For simplicity, let’s define $\tilde{A} = A(I - \alpha A)^2$ and $\tilde{B} = B(I - \alpha B)^2$. Plugging things in, we can calculate $\nabla_\theta f_1(U_1(\theta)) = (I - \alpha A)\,A\,(I - \alpha A)(\theta - \theta_1) = \tilde{A}(\theta - \theta_1)$, and similarly for $f_2$, so

$$\nabla F(\theta) = \frac{1}{2}\left[ \tilde{A}(\theta - \theta_1) + \tilde{B}(\theta - \theta_2) \right].$$
This is kind of interesting, because the gradient looks like the sum of the gradients of some modified quadratic objectives. By applying the spectral theorem, we can see that the map $A \mapsto \tilde{A} = A(I - \alpha A)^2$ sort of attenuates the eigenvalues of $A$ in a non-linear manner. The $i$th eigenvalue of $\tilde{A}$ is given by $\lambda_i(1 - \alpha\lambda_i)^2$. When you look at this as a quadratic function of the step size, it is decreasing from $\lambda_i$ to $0$ in the range $\alpha \in [0, 1/\lambda_i]$ (which is the largest step size we could go to due to regularity conditions anyways).
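As a quick sanity check on that attenuation claim, here is a small sketch of my own, with an arbitrary symmetric positive definite $A$ and a step size below $1/\lambda_{\max}$:

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.normal(size=(4, 4))
A = M @ M.T + np.eye(4)                      # symmetric positive definite
alpha = 0.9 / np.linalg.eigvalsh(A).max()    # stay below 1 / lambda_max

I = np.eye(4)
A_tilde = A @ (I - alpha * A) @ (I - alpha * A)
lam = np.linalg.eigvalsh(A)
# eigenvalues of A_tilde should be lambda_i * (1 - alpha * lambda_i)^2
print(np.allclose(np.sort(np.linalg.eigvalsh(A_tilde)),
                  np.sort(lam * (1 - alpha * lam) ** 2)))   # True
```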

We can solve for the fixed point of the MAML update by setting $\nabla F(\theta^*) = 0$. In the end, we get the clean solution

$$\theta^* = \left(\tilde{A} + \tilde{B}\right)^{-1}\left(\tilde{A}\,\theta_1 + \tilde{B}\,\theta_2\right).$$
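Here is a small numerical check of this fixed point (again my own sketch, with made-up random matrices and optima):

```python
import numpy as np

rng = np.random.default_rng(1)
d = 5

def random_spd(rng, d):
    M = rng.normal(size=(d, d))
    return M @ M.T + np.eye(d)

A, B = random_spd(rng, d), random_spd(rng, d)
theta1, theta2 = rng.normal(size=d), rng.normal(size=d)
alpha = 0.5 / max(np.linalg.eigvalsh(A).max(), np.linalg.eigvalsh(B).max())

I = np.eye(d)
A_t = A @ (I - alpha * A) @ (I - alpha * A)   # the "tilde" matrices
B_t = B @ (I - alpha * B) @ (I - alpha * B)

theta_star = np.linalg.solve(A_t + B_t, A_t @ theta1 + B_t @ theta2)

# the MAML gradient should vanish at the fixed point
grad_F = 0.5 * (A_t @ (theta_star - theta1) + B_t @ (theta_star - theta2))
print(np.linalg.norm(grad_F))   # numerically zero
```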
What is this fixed point?

This is kind of a matrix-weighted average of the optima $\theta_1$ and $\theta_2$. There are cool ways to look at the spectrum of $(\tilde{A} + \tilde{B})^{-1}$ to get an idea of what’s going on, but I think the simplest case to look at is when both $A$ and $B$ are diagonal, i.e. when their eigenbases “align” both with each other and with the coordinate system of $\theta_1$ and $\theta_2$. When this happens, $\theta^*$ is a coordinate-wise weighted average of $\theta_1$ and $\theta_2$, where the weights are simply given by the attenuated eigenvalues of $\tilde{A}$ and $\tilde{B}$. In the general case, this scaling happens with respect to the coordinates of the vectors in the eigenbasis of the matrices. So we will move closer to the space generated by the eigenvectors corresponding to the largest eigenvalues.
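To make the diagonal case concrete, here is a tiny worked example with made-up numbers. Take

$$A = \operatorname{diag}(10,\, 1), \qquad B = \operatorname{diag}(1,\, 10), \qquad \alpha = 0.05,$$

so that $\tilde{A} = \operatorname{diag}\!\big(10 \cdot (1 - 0.5)^2,\; 1 \cdot (1 - 0.05)^2\big) = \operatorname{diag}(2.5,\; 0.9025)$ and, by symmetry, $\tilde{B} = \operatorname{diag}(0.9025,\; 2.5)$. In the first coordinate, $\theta^*$ puts weight $2.5 / (2.5 + 0.9025) \approx 0.73$ on $\theta_1$, and in the second coordinate it puts the same weight on $\theta_2$: each task “wins” the coordinates where its curvature is largest.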

How good is this fixed point?

Now I’ll derive the MAML loss of $\theta^*$. We would expect that this is some function of the distance between the optima and the actual objectives, but what is MAML improving on? Let’s calculate the loss for a baseline approach. Why don’t we just take the average of the optima, $\bar{\theta} = \frac{1}{2}(\theta_1 + \theta_2)$? Plugging this into $F$:

$$F(\bar{\theta}) = \frac{1}{16}(\theta_1 - \theta_2)^\top\left(\tilde{A} + \tilde{B}\right)(\theta_1 - \theta_2)$$

This is actually a form of a Mahalanobis metric. The thing is, when I increase the distance between the optima, the loss can skyrocket, especially if I move them apart along the direction of one of the principal eigenvectors of $A$ or $B$. And, since we took $f_1$ and $f_2$ to have equal weight, a “low-risk” objective with smaller eigenvalues just hurts how well we do on a harder objective with higher eigenvalues, since we didn’t penalize that difference.

Knowing what we do about the loss function, we can extract out the fine-tuned models for the tasks given the midpoint as an initialization. For example, the fine-tuned model for $f_1$ is $U_1(\bar{\theta}) = \frac{1}{2}(I + \alpha A)\theta_1 + \frac{1}{2}(I - \alpha A)\theta_2$. So we increase the contribution of $\theta_1$ a bit in our weighted average, by an additive factor of about $\frac{\alpha}{2}A(\theta_1 - \theta_2)$. This is exactly a contraction towards $\theta_1$ in the Mahalanobis metric induced by $A$. To see this, note that since $A$ is positive definite, the eigenvalues of $I + \alpha A$ are all greater than one, while the eigenvalues of $I - \alpha A$ are all less than one. So this contraction is “pulling apart the fine-tuned model from the meta-model” by scaling the components of the $\theta_1$ and $\theta_2$ vectors away from each other in the eigenbasis of $A$.
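A quick check of that expression for the fine-tuned model (my own sketch, with an arbitrary positive definite $A$ and random optima):

```python
import numpy as np

rng = np.random.default_rng(2)
d = 4
M = rng.normal(size=(d, d))
A = M @ M.T + np.eye(d)
theta1, theta2 = rng.normal(size=d), rng.normal(size=d)
alpha = 0.5 / np.linalg.eigvalsh(A).max()
I = np.eye(d)

theta_bar = 0.5 * (theta1 + theta2)
fine_tuned = theta_bar - alpha * A @ (theta_bar - theta1)            # one GD step on f_1
mixture = 0.5 * (I + alpha * A) @ theta1 + 0.5 * (I - alpha * A) @ theta2
print(np.allclose(fine_tuned, mixture))   # True
```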

Now, let’s calculate the losses when we arrive at our fine-tuned models for the respective tasks, this time starting from the fixed point $\theta^*$. I’ll call $\delta = \theta_1 - \theta_2$ and $S = \tilde{A} + \tilde{B}$. We get

$$f_1(U_1(\theta^*)) = \frac{1}{2}\,\delta^\top \tilde{B}\, S^{-1}\, \tilde{A}\, S^{-1}\, \tilde{B}\,\delta$$

We get a similar thing for $f_2$. Hence, our overall loss using these fine-tuned models is

$$F(\theta^*) = \frac{1}{4}\,\delta^\top\left( \tilde{B}\, S^{-1}\, \tilde{A}\, S^{-1}\, \tilde{B} + \tilde{A}\, S^{-1}\, \tilde{B}\, S^{-1}\, \tilde{A} \right)\delta$$
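To see the gap over the midpoint baseline concretely, here is a self-contained numerical sketch of my own (same kind of random setup as the earlier snippets); the fixed point is the global minimizer of the convex quadratic $F$, so its loss should come out smaller:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 5

def random_spd(rng, d):
    M = rng.normal(size=(d, d))
    return M @ M.T + np.eye(d)

A, B = random_spd(rng, d), random_spd(rng, d)
theta1, theta2 = rng.normal(size=d), rng.normal(size=d)
alpha = 0.5 / max(np.linalg.eigvalsh(A).max(), np.linalg.eigvalsh(B).max())
I = np.eye(d)
A_t = A @ (I - alpha * A) @ (I - alpha * A)
B_t = B @ (I - alpha * B) @ (I - alpha * B)

def F(theta):
    """MAML loss: average task loss after one inner gradient step."""
    u1 = theta - alpha * A @ (theta - theta1)
    u2 = theta - alpha * B @ (theta - theta2)
    return 0.5 * ((u1 - theta1) @ A @ (u1 - theta1) / 2
                  + (u2 - theta2) @ B @ (u2 - theta2) / 2)

theta_bar = 0.5 * (theta1 + theta2)
theta_star = np.linalg.solve(A_t + B_t, A_t @ theta1 + B_t @ theta2)
print(F(theta_bar), F(theta_star))   # the fixed point has the smaller MAML loss
```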
Like before, we can extract out the fine-tuned models for each task starting from this initialization. This looks the same as before: we simply move away from the meta-model and towards $\theta_1$ by taking a “mixture” of the two components weighted by $(I - \alpha A)$ and $\alpha A$, i.e. $U_1(\theta^*) = (I - \alpha A)\theta^* + \alpha A\,\theta_1$. When is this doing better than starting from the midpoint $\bar{\theta}$? Precisely when $\theta^*$ lies closer to the subspace spanned by the eigenvectors corresponding to the largest eigenvalues of $A$. This is because multiplying by $(I - \alpha A)$ will scale the components corresponding to these eigenvectors down to zero in only a few iterations. And $\theta^*$ definitely does lie closer, since when we took the “weighted average” of $\theta_1$ and $\theta_2$, we gave more weight to those components.

Finishing up

In what sense is this “the best we could’ve done”? Well, MAML found the unique point that minimizes the expected loss of the “fine-tuned” models obtained by taking a gradient descent step from that point for the respective tasks. For quadratics, this minimizer involves an interesting “curvature-weighted average” of the optima of the respective tasks. This minimizer has a kind of “accelerated path” towards a particular task’s optimum, since we have already pushed it closer to the eigenspaces corresponding to the larger eigenvalues. In a sense, we pushed the midpoint to the “top of the cliff” of the objective’s loss surface, so that we could tumble down quickly with a little push.

For the next steps, I’d like to investigate convergence for smooth, strongly convex objectives. This basically requires checking that the function $F$ is convex and smooth. Convexity is easy. A nice result would be to bound the number of iterations needed to achieve some error for a task as a function of the distance between the two optima. Looking back on this quadratic stuff, I expect the distance to be with respect to something like a Mahalanobis metric based on the Hessians of the two tasks. This would give some insight into how MAML measures “task similarity” based on curvature information.

Edit: Convex, smooth objectives imply unique one-step fixed point

I thought that the case with smooth, convex objectives would end up being more interesting. Let’s investigate convexity properties of the one-step gradient descent updated version of $f$, namely $g(\theta) = f(U(\theta))$. First of all, if we assume $\alpha < 1/L$, where $L$ is the smoothness constant of $f$, then the only minimizer of $g$ is the original minimizer of $f$. Since $\nabla g(\theta) = (I - \alpha \nabla^2 f(\theta))\,\nabla f(U(\theta))$, we get $\nabla g(\theta) = 0$ exactly when $\nabla f(U(\theta)) = 0$. Moreover, you can show that the gradient update for $g$ is a contraction, which means gradient descent converges to the same minimum as before.
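To spell out why the stationary points coincide, here is a quick sketch under the stated assumptions (I additionally assume $f$ is twice differentiable with a unique minimizer, which I’ll call $\theta^\dagger$; the notation is mine). By the chain rule,

$$\nabla g(\theta) = \big(I - \alpha \nabla^2 f(\theta)\big)\,\nabla f\big(U(\theta)\big).$$

Convexity gives $0 \preceq \nabla^2 f(\theta)$ and $L$-smoothness gives $\nabla^2 f(\theta) \preceq L I$, so for $\alpha < 1/L$ the matrix $I - \alpha \nabla^2 f(\theta)$ is positive definite and hence invertible. Therefore $\nabla g(\theta) = 0$ exactly when $\nabla f(U(\theta)) = 0$, i.e. when $U(\theta) = \theta^\dagger$. And since $U$ is the gradient map of the strictly convex function $\tfrac{1}{2}\|\theta\|^2 - \alpha f(\theta)$, it is injective, so $U(\theta) = \theta^\dagger = U(\theta^\dagger)$ forces $\theta = \theta^\dagger$.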

I’d like to show that for a convex combination of smooth, convex objectives, the MAML objective is convex, too. I’m not sure if I have the right proof of that yet, though. But I do think that there could be some useful information in the theory of multi-objective optimization and Pareto optimality. Besides that, we at least know that MAML converges in this case. This is because if two functions converge under gradient descent, then any convex combination of these two functions also converges. However, if the two functions converge to their minimizers under gradient descent, I’m not sure if the convex combination has a unique minimizer. For example, if $g_1, g_2$ are convex with minima $m_1, m_2$, then we have $\inf_\theta \left[ t\, g_1(\theta) + (1 - t)\, g_2(\theta) \right] \ge t\, m_1 + (1 - t)\, m_2$ for $t \in [0, 1]$. But it does not imply that there exists some $\theta$ that achieves this minimum. This is different from the plain convex combination of convex functions, since gradient descent can converge to minimizers even without a convex objective.
