In this paper, I show how gradient-based optimization methods can be used to estimate stochastic dynamic models in economics. By extending the state space to include all model parameters, the model needs to be solved only once to perform structural estimation. Parameters are then estimated by minimizing the distance between key empirical moments and their model-implied counterparts. Unlike in the Simulated Method of Moments, the model-implied moments are never computed directly from simulations; instead, a neural network learns the mapping from parameters to moments using raw simulated observations. Once the network has learned this (differentiable) mapping, a Newton-Raphson routine coupled with simulated annealing finds the parameter vector that globally minimizes the objective function. I illustrate the algorithm by solving and estimating a benchmark macroeconomic model with stochastic volatility, endogenous labor supply, and irreversible investment.
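The estimation loop described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: every name is hypothetical, a least-squares surrogate with polynomial features stands in for the neural network, a crude random multistart stands in for simulated annealing, and a simple Gaussian simulator stands in for the structural model.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(theta, n=2000):
    # Toy "structural model": raw observations from N(mu, sigma),
    # with theta = (mu, log_sigma).  (Hypothetical stand-in model.)
    mu, log_sigma = theta
    return rng.normal(mu, np.exp(log_sigma), size=n)

def features(theta):
    # Quadratic polynomial features of the parameters; these play the
    # role of the neural network's learned representation in this sketch.
    mu, ls = theta
    return np.array([1.0, mu, ls, mu * mu, ls * ls, mu * ls])

# 1) Simulate once over a grid of parameter draws (the "extended state
#    space" idea) and fit a surrogate mapping parameters -> moments
#    directly from raw simulated observations.
thetas = rng.uniform([-1.0, -1.0], [1.0, 0.5], size=(200, 2))
sims = [simulate(t) for t in thetas]
X = np.array([features(t) for t in thetas])
Y = np.array([[s.mean(), s.var()] for s in sims])
W, *_ = np.linalg.lstsq(X, Y, rcond=None)  # least-squares fit (NN stand-in)

def implied_moments(theta):
    return features(theta) @ W  # smooth, differentiable parameter->moment map

# 2) "Empirical" moments from observed data.
data = rng.normal(0.3, 0.8, size=5000)
m_emp = np.array([data.mean(), data.var()])

def loss(theta):
    d = implied_moments(theta) - m_emp
    return d @ d

# 3) Newton-Raphson on the surrogate objective, restarted from random
#    points (a crude multistart in place of full simulated annealing).
def newton(theta, steps=50, h=1e-4):
    theta = np.array(theta, dtype=float)
    for _ in range(steps):
        g = np.zeros(2)
        H = np.zeros((2, 2))
        for i in range(2):
            e = np.zeros(2); e[i] = h
            g[i] = (loss(theta + e) - loss(theta - e)) / (2 * h)
            for j in range(2):
                f = np.zeros(2); f[j] = h
                H[i, j] = (loss(theta + e + f) - loss(theta + e - f)
                           - loss(theta - e + f) + loss(theta - e - f)) / (4 * h * h)
        step = np.linalg.solve(H + 1e-6 * np.eye(2), g)
        theta = np.clip(theta - step, [-2.0, -2.0], [2.0, 1.0])
    return theta

starts = [rng.uniform([-1.0, -1.0], [1.0, 0.5]) for _ in range(10)]
best = min((newton(s) for s in starts), key=loss)
print("estimate:", best, "objective:", loss(best))
```

Because the surrogate is cheap to evaluate and differentiable, the inner Newton-Raphson steps never touch the simulator again after the one-time training pass, which is the efficiency gain the abstract emphasizes.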
Original language: English (US)
State: Unpublished - 2018