Quantifying the Price of Uncertainty in Bayesian Models
Milovan Krnjajic and David Draper (2013). Quantifying the Price of Uncertainty in Bayesian Models. Proceedings of AMSA 2013, Novosibirsk, Russia.
During the exploratory phase of a typical statistical analysis it is natural to look at the data in order to narrow the scope of the subsequent steps, mainly by selecting a set of families of candidate models (parametric ones, for example). Caution is needed, however, when the same data are used both to estimate the parameters of a specific model and to decide how to search the model space: failing to account for this second-order randomness involved in exploring the modelling space typically leads to underestimating the overall uncertainty. To rank models by their fit or predictive performance we use practical tools such as Bayes factors, log-scores and the deviance information criterion. The price of model uncertainty can be paid automatically by adopting a Bayesian nonparametric (BNP) specification, with weak priors on the (functional) space of possible models, or through a version of cross-validation in which only part of the observed sample is used to fit and validate the model, while the calibration of the overall modelling process is assessed on the as-yet unused part of the data set. It is then interesting to ask how much data must be set aside for calibration in order to obtain an assessment of uncertainty approximately equivalent to that of the BNP approach.
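The cross-validation variant described above can be illustrated with a minimal sketch. This is not the paper's procedure, only a toy version under stated assumptions: a Gaussian candidate model, simulated data, an arbitrary 150/75/75 split, the log-score as the ranking tool, and the empirical coverage of a nominal 90% predictive interval as a crude calibration check on the held-out portion.

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(loc=2.0, scale=1.5, size=300)  # simulated data (illustrative only)

# Three-way split: fit / validate (for model ranking) / calibrate (held out
# until the end, so it plays no role in choosing or fitting the model).
fit, validate, calibrate = y[:150], y[150:225], y[225:]

# Fit a simple Gaussian candidate model on the fitting subset.
mu, sigma = fit.mean(), fit.std(ddof=1)

def gauss_logpdf(x, mu, sigma):
    """Log density of N(mu, sigma^2)."""
    return -0.5 * np.log(2 * np.pi * sigma**2) - (x - mu) ** 2 / (2 * sigma**2)

# Log-score (mean predictive log density) on the validation subset,
# used to rank candidate models; higher is better.
log_score = gauss_logpdf(validate, mu, sigma).mean()

# Calibration check on the as-yet unused subset: the empirical coverage of a
# nominal 90% predictive interval should be near 0.90 if the overall
# modelling process is well calibrated.
z90 = 1.6449  # standard normal 95th percentile
lo, hi = mu - z90 * sigma, mu + z90 * sigma
coverage = np.mean((calibrate >= lo) & (calibrate <= hi))
print(round(log_score, 3), round(coverage, 3))
```

In this sketch the question the abstract poses corresponds to how large the `calibrate` slice must be before `coverage` estimates calibration as reliably as a BNP specification would.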