Estimation and inference in adaptive learning models with slowly decreasing gains
Mayer, A
2022-01-01
Abstract
An asymptotic theory for estimation and inference in adaptive learning models with strong mixing regressors and martingale difference innovations is developed. The maintained polynomial gain specification provides a unified framework that permits slow convergence of agents' beliefs and contains recursive least squares as a prominent special case. Reminiscent of the classical literature on co-integration, an asymptotic equivalence between two approaches to estimation of the long-run equilibrium and short-run dynamics is established. Notwithstanding potential threats to inference arising from non-standard convergence rates and a singular variance-covariance matrix, hypotheses about single as well as joint restrictions remain testable. Monte Carlo evidence confirms the accuracy of the asymptotic theory in finite samples.
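To illustrate the polynomial gain specification the abstract refers to, the sketch below runs a scalar adaptive learning recursion with gain gamma_t = t^(-alpha). This is a minimal illustration under assumed notation (the function name, the scalar mean-learning setup, and the symbol alpha are choices made here, not taken from the paper); it is not the paper's estimator. Setting alpha = 1 recovers the recursive least squares (running sample mean) special case, while alpha in (0, 1) yields the slower belief convergence the framework permits.

```python
import numpy as np

def adaptive_learning(y, alpha):
    """Update a scalar belief about the mean of y with polynomial gain
    gamma_t = t**(-alpha). alpha = 1 reproduces the recursive least
    squares (running sample mean) update; smaller alpha decreases the
    gain more slowly, so beliefs converge more slowly."""
    b = np.zeros(len(y) + 1)  # b[0] is the initial belief (zero here)
    for t in range(1, len(y) + 1):
        gain = t ** (-alpha)
        # Standard stochastic-approximation form: move the belief
        # toward the latest observation by a fraction `gain`.
        b[t] = b[t - 1] + gain * (y[t - 1] - b[t - 1])
    return b

rng = np.random.default_rng(0)
y = 2.0 + rng.standard_normal(10_000)     # i.i.d. data with mean 2
beliefs = adaptive_learning(y, alpha=1.0)  # RLS special case
# With alpha = 1 the terminal belief equals the running sample mean.
```

With alpha = 1 the recursion b_t = b_{t-1} + (1/t)(y_t - b_{t-1}) telescopes exactly to the sample mean of y_1, ..., y_t, which is why recursive least squares sits inside the polynomial gain family as a boundary case.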