Learning and Model Validation
Authors: In-Koo Cho (University of Illinois) and Kenneth Kasa (Simon Fraser University)
This paper proposes a selection criterion for adaptive learning models. Instead of assuming that agents revise beliefs about a fixed model, we allow agents to test the specifications of their models and to select new ones that appear to strike a better balance between fit and complexity, as measured by an estimated Kullback-Leibler Information Criterion. We call this combined process of model revision and model selection validation dynamics. We prove that as the agent validates the model more frequently, the validation dynamics converge to a single model, which the agent uses almost always. We call this model the dominant recursive learning model, and characterize it, in the parlance of large deviations theory, as the recursive model with the largest rate function. We illustrate the concept of validation dynamics using examples from Sargent (1999) and Evans and Honkapohja (2001). Possible extensions of the analysis to robust testing and selection are also discussed.