Validate a model with posterior predictive checks, kernel density estimation and Kullback-Leibler divergence

by Hamall   Last Updated September 19, 2018 15:19

I'm wondering whether the following scheme is a sound way to validate a model (a code sketch follows the list):

  1. Generate new data $y_{new}$ from the posterior predictive distribution $p(y_{new} \mid y_{obs})$, given the observed data $y_{obs}$.
  2. Use kernel density estimation to obtain a density estimate for each of $y_{new}$ and $y_{obs}$.
  3. Use the Kullback-Leibler divergence to evaluate the difference between these two distributions.

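To make the scheme concrete, here is a minimal sketch in Python. The conjugate normal model, the default `scipy.stats.gaussian_kde` bandwidth, the integration grid, and the small `eps` guard are all illustrative assumptions on my part, not part of the question itself:

    import numpy as np
    from scipy.stats import gaussian_kde

    rng = np.random.default_rng(0)

    # Observed data (toy example)
    y_obs = rng.normal(loc=1.0, scale=2.0, size=200)

    # Step 1: draw y_new from the posterior predictive.
    # For a normal likelihood with known sigma and a flat prior on mu,
    # the posterior predictive is normal with inflated variance.
    sigma = 2.0
    n = len(y_obs)
    mu_post = y_obs.mean()
    pred_sd = np.sqrt(sigma**2 + sigma**2 / n)
    y_new = rng.normal(loc=mu_post, scale=pred_sd, size=5000)

    # Step 2: kernel density estimates for both samples
    kde_obs = gaussian_kde(y_obs)
    kde_new = gaussian_kde(y_new)

    # Step 3: KL(p_obs || p_new) by numerical integration on a grid
    grid = np.linspace(min(y_obs.min(), y_new.min()),
                       max(y_obs.max(), y_new.max()), 2000)
    p = kde_obs(grid)
    q = kde_new(grid)
    eps = 1e-12  # guard against log(0)
    kl = np.trapz(p * np.log((p + eps) / (q + eps)), grid)
    print(f"KL(obs || new) = {kl:.4f}")

Note that KL divergence is asymmetric, so $\mathrm{KL}(p_{obs} \| p_{new})$ and $\mathrm{KL}(p_{new} \| p_{obs})$ can differ substantially; which direction is appropriate, and how sensitive the result is to the KDE bandwidth in the tails, would be part of the question.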
