Suppose we train an RBF network by minimizing the least-squares error on a set of training points, and we do it incrementally, in an online fashion: we update the QR factorization (e.g., with Givens rotations) whenever a new training example is presented.
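To make the setup concrete, here is a minimal NumPy sketch of what I mean by the row update: keeping only the triangular factor R and the rotated targets d, and rotating each new example (feature row, target) into them with Givens rotations. The function names are mine, just for illustration.

```python
import numpy as np

def givens(a, b):
    """Return (c, s) with [[c, s], [-s, c]] @ [a, b] = [r, 0]."""
    if b == 0.0:
        return 1.0, 0.0
    r = np.hypot(a, b)
    return a / r, b / r

def qr_update_row(R, d, phi, y):
    """Rotate one new example (feature row phi, target y) into the
    triangular factor R and the rotated targets d, in place.
    At any point the weights solve the least-squares problem R w = d,
    and no past examples need to be stored."""
    m = len(phi)
    phi = phi.astype(float)  # work on a copy of the new row
    y = float(y)
    for j in range(m):
        c, s = givens(R[j, j], phi[j])
        # apply the rotation to row j of R and to the incoming row
        Rj = R[j, j:].copy()
        R[j, j:] = c * Rj + s * phi[j:]
        phi[j:] = -s * Rj + c * phi[j:]
        dj = d[j]
        d[j] = c * dj + s * y
        y = -s * dj + c * y
    return R, d
```

After enough examples have been rotated in, `np.linalg.solve(R, d)` recovers the same weights as a batch least-squares fit on all the data seen so far.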
Suppose now that at some point, according to some criterion, we decide it is time to add a new center because we judge a new point to be a novelty. This amounts to adding a new feature to the regression problem (the RBF kernel centered at that point), and this can again be done by updating the QR factorization with Givens rotations.
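For reference, here is a sketch (again with names of my own) of the standard column-append update for a thin QR factorization. Note that it needs the orthogonal factor Q, whose number of rows equals the number of examples seen so far, and the new column v must be evaluated on all of those examples; this is precisely the storage problem I am asking about.

```python
import numpy as np

def qr_add_column(Q, R, v):
    """Append a new feature column v (length n) to a thin QR
    factorization A = Q R, where Q is n-by-m and R is m-by-m.
    Returns (Q_new, R_new) with [A, v] = Q_new R_new.
    Note: both Q and v have one entry per stored example, so this
    update requires keeping the past data (or Q) around."""
    r = Q.T @ v                  # projections onto the existing basis
    w = v - Q @ r                # residual orthogonal to range(Q)
    rho = np.linalg.norm(w)      # assumes v is not already in range(Q)
    q_new = w / rho
    m = R.shape[1]
    Q_new = np.column_stack([Q, q_new])
    R_new = np.zeros((m + 1, m + 1))
    R_new[:m, :m] = R
    R_new[:m, m] = r
    R_new[m, m] = rho
    return Q_new, R_new
```

(SciPy's `scipy.linalg.qr_insert` implements this kind of update, but it has the same requirement: the full Q factor, with one row per stored example.)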
The problem is that this would require computing the new feature on all of the previous examples in order to correctly update the reduced QR factorization. Is it possible to do this, presumably with some controlled error, without having to store all of the previous examples?