The Solution and Estimation of Discrete Choice Dynamic Programming Models by Simulation and Interpolation: Monte Carlo Evidence

Series Issue Number
  • 181
Date Created
  • 1994-09
Abstract
  • Over the past decade, a substantial literature on the estimation of discrete choice dynamic programming (DC-DP) models of behavior has developed. However, this literature now faces major computational barriers. Specifically, in order to solve the dynamic programming (DP) problems that generate agents' decision rules in DC-DP models, high dimensional integrations must be performed at each point in the state space of the DP problem. In this paper we explore the performance of approximate solutions to DP problems. Our approximation method consists of: 1) using Monte Carlo integration to simulate the required multiple integrals at a subset of the state points, and 2) interpolating the non-simulated values using a regression function. The overall performance of this approximation method appears to be excellent, both in terms of the degree to which it mimics the exact solution, and in terms of the parameter estimates it generates when embedded in an estimation algorithm.
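The two-step approximation described in the abstract can be illustrated with a minimal sketch. This is not the paper's model: the toy two-choice payoffs, the Gumbel shocks, the state grid, and the cubic regression are all hypothetical stand-ins, chosen only to show the pattern of simulating the expected-maximum integral at a subset of state points and interpolating the rest by regression.

```python
# Illustrative sketch (hypothetical toy problem, not the paper's model):
# approximate the expected-maximum ("Emax") function of a two-choice
# problem by (1) Monte Carlo integration at a subset of state points and
# (2) regression-based interpolation at the non-simulated points.
import numpy as np

rng = np.random.default_rng(0)

# State space: a grid of scalar states s (made up for illustration).
states = np.linspace(0.0, 10.0, 200)

def emax_monte_carlo(s, n_draws=500):
    """Simulate E[max(v1, v2)] where each choice-specific value
    carries an additive shock; the payoffs are illustrative only."""
    eps = rng.gumbel(size=(n_draws, 2))
    v1 = 1.0 + 0.5 * s + eps[:, 0]   # payoff of choice 1
    v2 = 0.2 * s**1.5 + eps[:, 1]    # payoff of choice 2
    return np.maximum(v1, v2).mean()

# Step 1: simulate the integral only at a small subset of state points.
subset = rng.choice(len(states), size=30, replace=False)
emax_sim = np.array([emax_monte_carlo(states[i]) for i in subset])

# Step 2: fit a regression function on the simulated values ...
coefs = np.polyfit(states[subset], emax_sim, deg=3)

# ... and interpolate Emax at every non-simulated state point.
emax_all = np.polyval(coefs, states)
```

In a real DC-DP solution this simulate-and-interpolate step would be repeated at each stage of the backward recursion, with the interpolated Emax values feeding the value functions of the preceding period.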

Corporate Author
  • Federal Reserve Bank of Minneapolis. Research Department
  • Federal Reserve Bank of Minneapolis