Adding a new center to an RBF network without memorizing previous training examples

by BS. Last Updated August 30, 2018 12:19 PM

Suppose we train an RBF network by minimizing the least-squares error on a set of training points, and we do so incrementally, in an online fashion. Whenever a new training example arrives, we update the QR factorization of the design matrix, e.g. with Givens rotations.
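For concreteness, here is a minimal sketch of that row update in Python. The function names and the convention of keeping only the triangular factor R and the rotated right-hand side z = Q^T y are my own illustration, not something stated in the question:

    import numpy as np

    def givens(a, b):
        # Return (c, s) with [[c, s], [-s, c]] @ [a, b] = [r, 0].
        r = np.hypot(a, b)
        return (1.0, 0.0) if r == 0.0 else (a / r, b / r)

    def qr_row_update(R, z, x_new, y_new):
        # Fold one observation (x_new, y_new) into the p x p upper-triangular
        # factor R and the rotated RHS z = Q^T y, without storing Q or the
        # past examples. x_new is the length-p feature (RBF activation) row.
        R, z = R.copy(), z.copy()
        row, rhs = x_new.astype(float).copy(), float(y_new)
        for j in range(R.shape[0]):
            c, s = givens(R[j, j], row[j])
            Rj = R[j, j:].copy()
            R[j, j:] = c * Rj + s * row[j:]   # rotate row j of R against
            row[j:] = -s * Rj + c * row[j:]   # the new row to zero row[j]
            z[j], rhs = c * z[j] + s * rhs, -s * z[j] + c * rhs
        return R, z  # rhs now only contributes to the residual norm

After each update the current weights are recovered by back-substitution, e.g. scipy.linalg.solve_triangular(R, z). Note that nothing about the past examples needs to be kept for this row update; that is what makes the column update below problematic by contrast.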

Now suppose that at some point, according to some novelty criterion, we decide it is time to add a new center because a new point has been judged a novelty. This amounts to adding a new feature to the regression problem (the RBF kernel centered at that point), and in principle this too can be done by updating the QR factorization with Givens rotations.
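To make the obstacle explicit, here is what the exact column append looks like when the past inputs are still available. This sketch assumes Gaussian RBFs and a stored thin factor Q together with the matrix X_seen of past inputs, which is exactly the storage one would like to avoid; the names rbf_features, add_center_exact, and sigma are mine, for illustration only:

    import numpy as np

    def rbf_features(X, centers, sigma):
        # Phi[i, j] = exp(-||x_i - c_j||^2 / (2 sigma^2))
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * sigma ** 2))

    def add_center_exact(Q, R, X_seen, c_new, sigma):
        # Gram-Schmidt-style column append to a thin QR factorization.
        # The new column a must be evaluated on ALL past inputs X_seen:
        # this is the storage requirement the question asks to relax.
        a = rbf_features(X_seen, c_new[None, :], sigma)[:, 0]
        w = Q.T @ a                    # projection onto existing columns
        r = a - Q @ w                  # component orthogonal to them
        rho = np.linalg.norm(r)        # ~0 if the new feature is redundant
        Q_new = np.hstack([Q, (r / rho)[:, None]])
        R_new = np.block([[R, w[:, None]],
                          [np.zeros((1, R.shape[1])), np.array([[rho]])]])
        return Q_new, R_new

Neither Q (which has one row per past example) nor the new column a can be formed from R and z alone, which is the crux of the question.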

The problem is that this requires evaluating the new feature on all of the previous examples in order to update the reduced QR factorization correctly. Is it possible to do this, presumably with some controlled error, without having to store all of the previous examples?


