Rank-One Updates

Posted by Kristian M. Eschenburg on May 11, 2018 · 3 mins read

In this post, I’m going to go over some examples of rank-one updates of matrices. To compute rank-one updates, we rely on the Sherman-Morrison-Woodbury theorem. From the previous post on Blockwise Matrix Inversion, recall that, given a matrix and its inverse

$$M = \begin{bmatrix} A & B \\ C & D \end{bmatrix} \qquad M^{-1} = \begin{bmatrix} E & F \\ G & H \end{bmatrix}$$

we have that

$$E = (A - BD^{-1}C)^{-1} = A^{-1} + A^{-1}B(D - CA^{-1}B)^{-1}CA^{-1}$$

Expanding this further, the Woodbury formula proves the following identity:

$$(A + UCV)^{-1} = A^{-1} - A^{-1}U(C^{-1} + VA^{-1}U)^{-1}VA^{-1}$$

Given an initial matrix $A \in \mathbb{R}^{n \times n}$ and its inverse $A^{-1}$, and a new matrix $UCV$, we see that we can define the inverse of our new updated matrix, $(A + UCV)^{-1}$, in terms of the inverse of our original matrix and components of $UCV$. Importantly, we can perform rank-$k$ updates, where $k \leq n$.
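As a quick numerical sanity check (my own sketch, not part of the original derivation), the following NumPy snippet verifies the Woodbury identity for a random rank-$k$ update, using made-up matrices $A$, $U$, $C$, and $V$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 6, 2

# Made-up, well-conditioned matrices so all inverses exist
A = rng.standard_normal((n, n)) + n * np.eye(n)
U = rng.standard_normal((n, k))
C = rng.standard_normal((k, k)) + 3 * np.eye(k)
V = rng.standard_normal((k, n))

A_inv = np.linalg.inv(A)

# Woodbury: (A + UCV)^{-1} = A^{-1} - A^{-1} U (C^{-1} + V A^{-1} U)^{-1} V A^{-1}
inner = np.linalg.inv(np.linalg.inv(C) + V @ A_inv @ U)
woodbury = A_inv - A_inv @ U @ inner @ V @ A_inv

# Compare against inverting the updated matrix directly
direct = np.linalg.inv(A + U @ C @ V)
print(np.allclose(woodbury, direct))  # True
```

Note that the only new inverse on the Woodbury side is the $k \times k$ matrix $C^{-1} + VA^{-1}U$, so when $k \ll n$ the update is far cheaper than re-inverting the full $n \times n$ matrix.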

For example, if we want to update our matrix $A$ with a new vector, so that the update is $uv^{T}$ for $u, v \in \mathbb{R}^{n}$, we can rewrite the formula above as follows:

$$(A + uv^{T})^{-1} = A^{-1} - \frac{A^{-1}uv^{T}A^{-1}}{1 + v^{T}A^{-1}u}$$

where the updated inverse is defined so long as the quadratic form $1 + v^{T}A^{-1}u \neq 0$.
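Here is the same kind of check for the rank-one (Sherman-Morrison) case, again with arbitrary made-up data:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5

A = rng.standard_normal((n, n)) + n * np.eye(n)
u = rng.standard_normal((n, 1))
v = rng.standard_normal((n, 1))

A_inv = np.linalg.inv(A)

# The update is only defined when 1 + v^T A^{-1} u is nonzero
denom = 1.0 + (v.T @ A_inv @ u).item()

# Sherman-Morrison rank-one update of the inverse
updated_inv = A_inv - (A_inv @ u @ v.T @ A_inv) / denom

print(np.allclose(updated_inv, np.linalg.inv(A + u @ v.T)))  # True
```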


Rank-One Updates for Linear Models

Recall the Normal equations for linear models:

$$X^{T}X\hat{\beta} = X^{T}y$$

and

$$\hat{\beta} = (X^{T}X)^{-}X^{T}y$$

where $X$ is our design matrix, $y$ is our dependent variable, and $\hat{\beta}$ is a solution to the Normal equations, due to the fact that the Normal equations are consistent. $(X^{T}X)^{-}$ is the generalized inverse of $X^{T}X$, which is unique (i.e. $(X^{T}X)^{-} = (X^{T}X)^{-1}$) only if $X$ has full column rank. For our immediate purpose, we assume that $X$ has full column rank.
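As an aside, a minimal sketch of the batch fit, assuming some made-up data: NumPy’s `pinv` computes the Moore-Penrose generalized inverse, which coincides with the ordinary inverse exactly when $X$ has full column rank, so both paths below give the same solution.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((50, 3))  # full column rank almost surely
y = rng.standard_normal(50)

# Generalized-inverse solution vs. plain-inverse solution
beta_ginv = np.linalg.pinv(X.T @ X) @ X.T @ y
beta_inv = np.linalg.inv(X.T @ X) @ X.T @ y
print(np.allclose(beta_ginv, beta_inv))  # True
```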

Assume that we observe a set of observations $X \in \mathbb{R}^{n \times p}$ and a response variable $y \in \mathbb{R}^{n}$, and compute our coefficient estimates via the Normal equations above, using $(X^{T}X)^{-1}$. Now, given a new observation $(x_{n+1}, y_{n+1})$, with $x_{n+1} \in \mathbb{R}^{p}$, how can we update our coefficient estimates? We can append $x_{n+1}$ to $X$ as

$$X_{n+1} = \begin{bmatrix} X \\ x_{n+1}^{T} \end{bmatrix}$$

and directly compute $(X_{n+1}^{T}X_{n+1})^{-1}$, or we can use the Sherman-Morrison-Woodbury theorem:

$$(X_{n+1}^{T}X_{n+1})^{-1} = (X^{T}X + x_{n+1}x_{n+1}^{T})^{-1} = (X^{T}X)^{-1} - \frac{(X^{T}X)^{-1}x_{n+1}x_{n+1}^{T}(X^{T}X)^{-1}}{1 + x_{n+1}^{T}(X^{T}X)^{-1}x_{n+1}}$$

from which we can easily compute our new coefficient estimates with

$$\hat{\beta}_{n+1} = (X_{n+1}^{T}X_{n+1})^{-1}\left(X^{T}y + x_{n+1}y_{n+1}\right)$$
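To make the update concrete, here is a minimal NumPy sketch (with simulated data; the variable names are my own) that updates the coefficient estimates with a single new observation and checks the result against a full refit:

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 200, 4

# Initial batch fit via the Normal equations
X = rng.standard_normal((n, p))
y = rng.standard_normal(n)
XtX_inv = np.linalg.inv(X.T @ X)
Xty = X.T @ y
beta = XtX_inv @ Xty

# One new observation (x_new, y_new)
x_new = rng.standard_normal(p)
y_new = rng.standard_normal()

# Sherman-Morrison update of (X^T X)^{-1}; since X^T X is symmetric,
# the numerator is the outer product of A^{-1} x_new with itself
Ax = XtX_inv @ x_new
denom = 1.0 + x_new @ Ax
XtX_inv_new = XtX_inv - np.outer(Ax, Ax) / denom

# Updated coefficients, with no refit of the full model
beta_new = XtX_inv_new @ (Xty + x_new * y_new)

# Check against refitting from scratch on the augmented data
X_full = np.vstack([X, x_new])
y_full = np.append(y, y_new)
beta_refit = np.linalg.solve(X_full.T @ X_full, X_full.T @ y_full)
print(np.allclose(beta_new, beta_refit))  # True
```

Each such update costs $O(p^{2})$, versus roughly $O(np^{2} + p^{3})$ for refitting from scratch, which is what makes this attractive when observations arrive one at a time.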

Importantly, in the case of regression, this means that we can update our linear model via simple matrix calculations, rather than having to refit the model from scratch to incorporate new data. In the next few posts, I’ll go over an example implementation of rank-update methods that I’ve been using in the lab to study brain dynamics.