Adding new center in an RBF network without memorizing previous training examples

by BS.   Last Updated August 30, 2018 12:19 PM

Suppose we train an RBF network by minimizing the least-squares error (LSE) on a set of training points, incrementally in an online fashion: whenever a new training example is presented, we update the QR factorization of the design matrix using, e.g., Givens rotations.
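To make the setting concrete, here is a minimal numpy sketch of such a Givens-based row update; the function names and the bookkeeping of R together with Q^T b are my own illustration, not something specified in the question:

```python
import numpy as np

def givens(a, b):
    """Return c, s so that the rotation [[c, s], [-s, c]] maps (a, b) to (r, 0)."""
    r = np.hypot(a, b)
    if r == 0.0:
        return 1.0, 0.0
    return a / r, b / r

def qr_row_update(R, qtb, phi, y):
    """Fold one new example (feature row phi, target y) into the upper
    triangular factor R and the transformed right-hand side qtb = Q^T b,
    annihilating the new row with a sequence of Givens rotations."""
    n = R.shape[0]
    row = phi.astype(float).copy()
    rhs = float(y)
    for k in range(n):
        c, s = givens(R[k, k], row[k])
        # Rotate row k of R against the incoming row to zero out row[k].
        Rk = R[k, k:].copy()
        R[k, k:] = c * Rk + s * row[k:]
        row[k:] = -s * Rk + c * row[k:]
        # Apply the same rotation to the right-hand side.
        qk = qtb[k]
        qtb[k] = c * qk + s * rhs
        rhs = -s * qk + c * rhs
    return R, qtb
```

After each update, the current least-squares weights are recovered by back-substitution, `np.linalg.solve(R, qtb)`; note that only R and Q^T b are kept, not the past examples themselves.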

Now suppose that at some point, according to some novelty criterion, we decide it is time to add a new center because a new point is judged to be a novelty. This amounts to adding a new feature to the regression problem (the RBF kernel centered at that point), and it can again be done by updating the QR factorization with Givens rotations.
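The following sketch shows one standard way to append a column to a reduced QR factorization (a Gram–Schmidt-style append rather than the Givens route, chosen for brevity); the Gaussian kernel, `gamma`, and the function name are my own assumptions. It makes the difficulty explicit: the new column, and hence either the stored inputs or the thin Q factor (both of which grow with the number of examples), is needed:

```python
import numpy as np

def append_rbf_column(Q, R, X_past, center, gamma=1.0):
    """Extend a reduced QR factorization Phi = Q R with one new RBF
    feature column. The column must be evaluated at every stored input
    in X_past -- precisely the storage cost at issue in the question."""
    # New feature column: phi_c(x_i) = exp(-gamma * ||x_i - center||^2)
    a = np.exp(-gamma * np.sum((X_past - center) ** 2, axis=1))
    v = Q.T @ a                  # coefficients of a within range(Q)
    r = a - Q @ v                # component orthogonal to existing columns
    rho = np.linalg.norm(r)
    Q_new = np.column_stack([Q, r / rho])
    n = R.shape[1]
    R_new = np.zeros((n + 1, n + 1))
    R_new[:n, :n] = R
    R_new[:n, n] = v             # new column's projection onto old basis
    R_new[n, n] = rho
    return Q_new, R_new
```

Either variant (this one or the Givens-based column update) needs the new kernel values on all past inputs, which is what motivates the question below.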

The problem is that this requires evaluating the new feature on all of the previous examples in order to correctly update the reduced QR factorization. Is it possible to do this, presumably with some controlled error, without having to store all the previous examples?
