In my previous post on Dynamic Mode Decomposition, I discussed the foundations of DMD as a means for linearizing a dynamical system. In this post, I want to look at a way to use rank-one updates to incorporate new information into the spectral decomposition of our linear operator, $A$, in the event that we are generating online measurements from our dynamical system. If you want a more detailed overview of this topic, (Zhang, Rowley, Deem, & Cattafesta, 2017) developed the theory, along with open source code, for testing this method.
Recall that we are given an initial data matrix

$$X = \begin{bmatrix} x_1 & x_2 & \cdots & x_m \end{bmatrix},$$

which we can split into two matrices, shifted one unit in time apart:

$$X_1 = \begin{bmatrix} x_1 & x_2 & \cdots & x_{m-1} \end{bmatrix}, \qquad X_2 = \begin{bmatrix} x_2 & x_3 & \cdots & x_m \end{bmatrix},$$

and we are interested in solving for the linear operator $A$, such that

$$X_2 = A X_1.$$
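As a concrete illustration, here is a minimal NumPy sketch of this setup. The snapshot matrix `D` and its dimensions are hypothetical placeholders, not data from the paper:

```python
import numpy as np

# Hypothetical example: an n-dimensional state sampled at m points in time.
n, m = 3, 50
rng = np.random.default_rng(0)
D = rng.standard_normal((n, m))   # full data matrix (X in the text); columns are x_1, ..., x_m

X1 = D[:, :-1]   # x_1, ..., x_{m-1}
X2 = D[:, 1:]    # x_2, ..., x_m  (shifted one step forward in time)
# We are looking for A such that X2 ~ A @ X1.
```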
For simplicity, since we are no longer using the full matrix $X$, I'll just refer to $X_1$ as $X$ and $X_2$ as $Y$. In the previous post, we made the constraint that the number of snapshots was much smaller than the state dimension $n$, and that $X$ had full column rank. Here, however, we'll reverse this assumption: the number of snapshots is much larger than $n$, and rank($X$) $= n$, so that $X X^T$ is invertible. Multiplying both sides of $Y = AX$ on the right by $X^T \left( X X^T \right)^{-1}$, we have

$$A = Y X^T \left( X X^T \right)^{-1} = Q P,$$
where $Q = Y X^T$ and $P = \left( X X^T \right)^{-1}$. Now, let's say you observe some new data $x_{m+1}$, and you want to incorporate it into your $A$ matrix. As we saw in the previous post on Rank-One Updates, directly recomputing the inverse could potentially be costly, so we want to refrain from doing that if possible. Instead, we'll use the Sherman–Morrison–Woodbury theorem again to incorporate our new $x_{m+1}$ sample into our inverse matrix, just as before. Appending $x_{m+1}$ adds the column $x_m$ to $X$, so

$$P_{\text{new}} = \left( X X^T + x_m x_m^T \right)^{-1} = P - \frac{P x_m x_m^T P}{1 + x_m^T P x_m}.$$
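Continuing the sketch above (renaming `X1`, `X2` to match the text), this is roughly how the batch quantities and the Sherman–Morrison update of $P$ might look in NumPy. This is a hedged illustration with my own variable names, not the paper's reference implementation:

```python
X, Y = X1, X2                    # renamed, matching the text above

# Batch quantities for the current pair of data matrices.
Q = Y @ X.T                      # Q = Y X^T
P = np.linalg.inv(X @ X.T)       # P = (X X^T)^{-1}; well defined since rank(X) = n
A = Q @ P

# A new measurement x_{m+1} arrives: X gains the column x_m, Y gains x_{m+1}.
x = D[:, -1]                     # x_m, the column being appended to X
Px = P @ x
P_new = P - np.outer(Px, Px) / (1.0 + x @ Px)   # Sherman-Morrison update of the inverse
```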
Likewise, since we’re appending new data to our and matrices, we also have
such that

$$Q_{\text{new}} = Y_{\text{new}} X_{\text{new}}^T = Y X^T + x_{m+1} x_m^T = Q + x_{m+1} x_m^T,$$
which is simply the sum of our original matrix $Q$ plus a rank-one matrix, so the updated operator is $A_{\text{new}} = Q_{\text{new}} P_{\text{new}}$. (Zhang, Rowley, Deem, & Cattafesta, 2017) go on to describe some pretty cool “local” DMD schemes, incorporating time-dependent weights, as well as binary thresholds, into the computation of the linear operator $A$.
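Putting the two rank-one updates together, a single online update step could be sketched as below, continuing from the code above. The function name `online_dmd_update` and the example data are my own for illustration; the paper's accompanying code handles extras like weighting that this sketch omits:

```python
def online_dmd_update(Q, P, x, y):
    """Fold one new snapshot pair into Q and P and return the refreshed A.

    Here x is the column appended to X (i.e. x_m) and y is the column
    appended to Y (i.e. x_{m+1}), so that y ~ A x.
    """
    Px = P @ x
    P_new = P - np.outer(Px, Px) / (1.0 + x @ Px)   # Sherman-Morrison on (X X^T)^{-1}
    Q_new = Q + np.outer(y, x)                       # rank-one update of Y X^T
    return Q_new, P_new, Q_new @ P_new

# Example: a hypothetical new measurement x_{m+1} arrives.
x_m_plus_1 = rng.standard_normal(n)
Q, P, A = online_dmd_update(Q, P, D[:, -1], x_m_plus_1)
```

Each update costs only a few matrix-vector products on $n \times n$ arrays, rather than re-forming and re-inverting $X X^T$ from scratch every time a new snapshot comes in.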