Search results
Creator: Croushore, Dean Darrell, 1956- and Evans, Charles, 1958- Series: Joint committee on business and financial analysis Abstract:
Monetary policy research using time series methods has been criticized for using more information than the Federal Reserve had available in setting policy. To quantify the role of this criticism, we propose a method to estimate a VAR with real-time data while accounting for the latent nature of many economic variables, such as output. Our estimated monetary policy shocks are closely correlated with a typically estimated measure. The impulse response functions are broadly similar across the methods. Our evidence suggests that the use of revised data in VAR analyses of monetary policy shocks may not be a serious limitation.
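The abstract's core tool is a VAR whose residuals serve as estimated policy shocks. As a minimal illustration (not the authors' real-time procedure, and with made-up coefficients), a reduced-form VAR(1) can be estimated by OLS and its shocks recovered as residuals:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a bivariate VAR(1): y_t = A y_{t-1} + e_t (illustrative coefficients)
A_true = np.array([[0.5, 0.1],
                   [0.2, 0.4]])
T = 500
y = np.zeros((T, 2))
for t in range(1, T):
    y[t] = A_true @ y[t - 1] + rng.normal(scale=0.1, size=2)

# OLS: stack y_t on y_{t-1}; lstsq solves X B = Y, so B = A^T
X, Y = y[:-1], y[1:]
A_hat = np.linalg.lstsq(X, Y, rcond=None)[0].T

# The residuals are the estimated reduced-form shocks
shocks = Y - X @ A_hat.T
```

Identification of structural monetary policy shocks, and the paper's accounting for data revisions and latent variables, sit on top of this reduced-form step.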
Keywords: Monetary policy, Identification, VARs, Data revisions, Real-time data, and Shocks Subject: C82 - Data collection and data estimation methodology ; Computer programs - Methodology for collecting, estimating, and organizing macroeconomic data, C32 - Multiple or simultaneous equation models - Time-series models ; Dynamic quantile regressions, and E52 - Monetary policy, central banking, and the supply of money and credit - Monetary policy
Creator: Rich, Robert W., 1958- and Tracy, Joseph S., 1956- Series: Joint committee on business and financial analysis Abstract:
This paper examines data on point and probabilistic forecasts of inflation from the Survey of Professional Forecasters. We use these data to evaluate current strategies for the empirical modeling of forecast behavior. In particular, the analysis principally focuses on the relationship between ex post forecast errors and ex ante measures of uncertainty in order to assess the reliability of using proxies based on predictive accuracy to describe changes in predictive confidence. After we adjust the data to account for certain features in the conduct and construction of the survey, we find a significant and robust correlation between observed heteroskedasticity in the consensus forecast errors and forecast uncertainty. We also document that significant compositional effects are present in the data that are economically important in the case of forecast uncertainty, and may be related to differences in respondents' access to information.
Keywords: Forecasting, Inflation, Uncertainty, Disagreement, and Conditional heteroskedasticity Subject: C12 - Econometric and statistical methods : General - Hypothesis testing, C22 - Single equation models ; Single variables - Time-series models ; Dynamic quantile regressions, and E37 - Prices, business fluctuations, and cycles - Forecasting and simulation
Creator: Chari, V. V., Kehoe, Patrick J., and McGrattan, Ellen R. Series: Joint committee on business and financial analysis Abstract:
This paper proposes a simple method for guiding researchers in developing quantitative models of economic fluctuations. We show that a large class of models, including models with various frictions, is equivalent to a prototype growth model with time-varying wedges that, at face value, look like time-varying productivity, labor taxes, and capital income taxes. We label these time-varying wedges efficiency wedges, labor wedges, and investment wedges. We use data to measure these wedges and then feed them back into the prototype growth model. We then assess the fraction of fluctuations accounted for by these wedges during the great depressions of the 1930s in the United States, Germany, and Canada. We find that the efficiency and labor wedges in combination account for essentially all of the declines and subsequent recoveries. The investment wedge plays at best a minor role.
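The efficiency wedge described above corresponds to the Solow residual of the prototype growth model. A minimal sketch of measuring it, assuming a Cobb-Douglas technology y = A k^θ l^(1-θ) and using hypothetical aggregate series (the capital share θ = 0.35 and all data below are illustrative, not the paper's):

```python
import numpy as np

theta = 0.35  # capital share (illustrative value)

# Hypothetical aggregate series: output, capital stock, labor input
y = np.array([1.00, 0.95, 0.88, 0.90])
k = np.array([3.00, 2.98, 2.95, 2.93])
l = np.array([0.30, 0.28, 0.26, 0.27])

# Efficiency wedge: the Solow residual from y = A * k^theta * l^(1 - theta)
A = y / (k**theta * l**(1 - theta))
```

In the paper's accounting procedure, the measured wedges are then fed back into the prototype model to decompose observed fluctuations; the labor and investment wedges are measured analogously from the model's first-order conditions.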
Keywords: Business cycle, Cycle, Economic fluctuations, Fluctuation, and Growth Subject: O41 - One, Two, and Multisector Growth Models, O47 - Economic growth and aggregate productivity - Measurement of economic growth ; Aggregate productivity ; Cross-country output convergence, and E32 - Prices, business fluctuations, and cycles - Business fluctuations ; Cycles
Creator: Bartelsman, Eric J. and Beaulieu, J. Joseph Series: Joint committee on business and financial analysis Abstract:
This paper is the first of a series of explorations into the relative performance and sources of productivity growth of U.S. businesses across industries and legal structure. In order to assemble the disparate data from various sources to develop a coherent productivity database, we developed a general system to manage data. The paper describes this system and then applies it by building such a database. The paper presents updated estimates of gross output, intermediate input use and value added using the BEA's GPO data set. It supplements these data with estimates of missing data on intermediate input use and prices for the 1977-1986 period, and it concords these data, which are organized on a 1972 SIC basis, to the 1987 SIC in order to have consistent time series covering the last twenty-four years. It further refines these data by disaggregating them by legal form of organization. The paper also presents estimates of labor hours, investment, capital services and, consequently, multifactor productivity disaggregated by industry and legal form of organization, and it analyzes the contribution of various industries and business organizations to aggregate productivity. The paper also reconsiders these estimates in light of the surge in spending in advance of the century-date change.
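Concording data from one industry classification to another, as the abstract describes for the 1972-to-1987 SIC conversion, amounts to allocating each old-basis industry's totals across new-basis industries with a weight table. A toy sketch with hypothetical industry codes and weights (not the paper's actual concordance):

```python
# Hypothetical concordance: (sic72, sic87) -> share of the 1972 industry's
# output assigned to the 1987 industry (shares sum to 1 within each sic72)
concordance = {("A", "X"): 0.7, ("A", "Y"): 0.3, ("B", "Y"): 1.0}

# Hypothetical gross output on the 1972 SIC basis
output72 = {"A": 100.0, "B": 50.0}

# Reallocate to the 1987 SIC basis
output87 = {}
for (s72, s87), w in concordance.items():
    output87[s87] = output87.get(s87, 0.0) + w * output72[s72]
```

Here industry A's output splits 70/30 between X and Y, and B maps entirely to Y, so the 1987-basis totals are X = 70 and Y = 80.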
Keywords: Legal form of organization, Labor productivity, Industrial productivity, and Database design Subject: E23 - Macroeconomics : Consumption, saving, production, employment, and investment - Production and D24 - Production and organizations - Production ; Cost ; Capital and total factor productivity ; Capacity
Creator: Bullard, James and Duffy, John, 1964- Series: Joint committee on business and financial analysis Abstract:
Trend-cycle decomposition has been problematic in equilibrium business cycle research. Many models are fundamentally based on the concept of balanced growth, and so have clear predictions concerning the nature of the multivariate trend that should exist in the data if the model is correct. But the multivariate trend that is removed from the data in this literature is not the same one that is predicted by the model. This is understandable, because unexpected changes in trends are difficult to model under a rational expectations assumption. A learning assumption is more appropriate here. We include learning in a standard equilibrium business cycle model with explicit growth. We ask how the economy might react to the important trend-changing events of the postwar era in industrialized economies, such as the productivity slowdown, increased labor force participation by women, and the "new economy" of the 1990s. This tells us what the model says about the trend that should be taken out of the data before the business cycle analysis begins. Thus we use learning to address the trend-cycle decomposition problem that plagues equilibrium business cycle research. We argue that a model-consistent approach, such as the one we suggest here, is necessary if the goal is to obtain an accurate assessment of an equilibrium business cycle model.
Keywords: Learning, Productivity slowdown, New economy, Equilibrium business cycle theory, and Business cycle fluctuations Subject: E30 - Prices, business fluctuations, and cycles - General and E20 - Macroeconomics : Consumption, saving, production, employment, and investment - General
Creator: Fernandez-Villaverde, Jesus and Rubio-Ramírez, Juan Francisco Series: Joint committee on business and financial analysis Abstract:
This paper presents a method to perform likelihood-based inference in nonlinear dynamic equilibrium economies. Models of this type have become a standard tool in quantitative economics. So far, however, the existing literature has been forced to use moment procedures or linearization techniques to estimate these models. This situation is unsatisfactory: moment procedures suffer from severe small-sample biases, and linearization depends crucially on the shape of the true policy functions, possibly leading to erroneous answers. We propose the use of Sequential Monte Carlo methods to evaluate the likelihood function implied by the model. Then we can perform likelihood-based inference, either searching for a maximum (Quasi-Maximum Likelihood Estimation) or simulating the posterior using a Markov Chain Monte Carlo algorithm (Bayesian Estimation). We can also compare different models even if they are nonnested and misspecified. To perform classical model selection, we follow Vuong (1989) and use the Kullback-Leibler distance to build Likelihood Ratio Tests. To perform Bayesian model comparison, we build Bayes factors. As an application, we estimate the stochastic neoclassical growth model.
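The Sequential Monte Carlo step the abstract proposes evaluates the likelihood of a nonlinear state-space model by simulating a cloud of particles. A bootstrap particle filter on a toy nonlinear model (the model, noise scales, and particle count are all illustrative, not the paper's growth-model application):

```python
import numpy as np

def particle_filter_loglik(y_obs, n_particles=1000, seed=0):
    """Bootstrap particle filter log-likelihood for a toy nonlinear model:
    state:       x_t = 0.5 * x_{t-1} + sin(x_{t-1}) + w_t,  w_t ~ N(0, 1)
    observation: y_t = x_t + v_t,                           v_t ~ N(0, 0.5^2)
    """
    rng = np.random.default_rng(seed)
    sig_w, sig_v = 1.0, 0.5
    x = rng.normal(size=n_particles)  # initial particle cloud
    loglik = 0.0
    for y in y_obs:
        # Propagate each particle through the nonlinear state transition
        x = 0.5 * x + np.sin(x) + rng.normal(scale=sig_w, size=n_particles)
        # Weight particles by the observation density p(y_t | x_t)
        w = np.exp(-0.5 * ((y - x) / sig_v) ** 2) / (sig_v * np.sqrt(2 * np.pi))
        # The mean weight estimates the predictive density p(y_t | y_{1:t-1})
        loglik += np.log(w.mean())
        # Resample particles proportionally to their weights
        idx = rng.choice(n_particles, size=n_particles, p=w / w.sum())
        x = x[idx]
    return loglik
```

The resulting log-likelihood estimate can then be plugged into a maximizer or a Markov Chain Monte Carlo sampler, which is the inference step the abstract describes.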
Keywords: Sequential Monte Carlo methods, Nonlinear filtering, Dynamic equilibrium economies, and Likelihood-based inference Subject: C11 - Bayesian Analysis: General, C10 - Econometric and Statistical Methods and Methodology: General, C13 - Estimation: General, and C15 - Statistical Simulation Methods: General