Search Constraints
Filtering by: Publication Year 1994
In this paper we develop alternative ways to compare asset pricing models when it is understood that their implied stochastic discount factors do not price all portfolios correctly. Unlike comparisons based on chi-squared statistics associated with null hypotheses that models are correct, our measures of model performance do not reward variability of discount factor proxies. One of our measures is designed to exploit fully the implications of arbitrage-free pricing of derivative claims. We demonstrate empirically the usefulness of our methods in assessing some alternative stochastic factor models that have been proposed in the asset pricing literature.
According to previous studies, the demand-liability feature of national bank notes did not present a problem for note-issuing banks because the nonbank public treated notes and other currency as perfect substitutes. However, that view, when combined with the nonbindingness of the collateral restriction on note issue (itself implied by the fact that some eligible collateral was not used for that purpose), implies that the safe short-term interest rate is pegged at the tax rate on note circulation. Since evidence on short-term interest rates is inconsistent with such a peg, that view must be rejected.
We describe several methods for approximating the solution to a model in which inequality constraints occasionally bind, and we compare their performance. We apply the methods to a particular model economy that satisfies two criteria: it is similar to the type of model used in actual research applications, and it is sufficiently simple that we can compute what we presume is virtually the exact solution. We have two results. First, all the algorithms are reasonably accurate. Second, on the basis of speed, accuracy, and convenience of implementation, one algorithm dominates the rest. We show how to implement this algorithm in a general multidimensional setting, and we discuss the likelihood that the results based on our example economy generalize.
Applied general equilibrium models with imperfect competition and economies of scale have been extensively used for analyzing international trade and development policy issues. They offer a natural framework for testing the empirical relevance of propositions from the industrial organization and new trade theory literatures. This paper warns model builders and users that considerable caution is needed in interpreting the results and deriving strong policy conclusions from these models: in this generation of applied general equilibrium models, nonuniqueness of equilibria is not a theoretical curiosum but a potentially serious problem. Disregarding it may lead to dramatically wrong policy appraisals.
An economic experiment consists of the act of placing people in an environment desired by the experimenter, who then records the time paths of their economic behavior. Performing experiments that use actual people at the level of national economies is obviously not practical, but constructing a model economy and computing the economic behavior of the model's people is. We refer to such experiments as computational experiments because the economic behavior of the model's people is computed. In this essay, we specify the steps in designing a computational experiment to address some well-posed quantitative question. We emphasize that the computational experiment is an econometric tool used in the task of deriving the quantitative implications of theory.
The goal of this paper is to extend the analysis of strategic bargaining to nonstationary environments, where preferences or opportunities may be changing over time. We are mainly interested in equilibria where trade occurs immediately, once the agents start negotiating, but where the terms of trade depend on when the negotiations begin. We characterize equilibria in terms of simple dynamical systems, and we compare these outcomes with the myopic Nash bargaining solution. We illustrate the practicality of the approach with an application in monetary economics.
This article contends that the various measures of the contribution of technology shocks to business cycles calculated using the real business cycle modeling method are not corroborated. The article focuses on a different and much simpler method for calculating the contribution of technology shocks, which takes account of facts concerning the productivity/labor input correlation and the variability of labor input relative to output. Under several standard assumptions, the method predicts that the contribution of technology shocks must be large (at least 78 percent), that the labor supply elasticity need not be large to explain the observed fluctuation in labor input, and that the contribution of technology shocks can be estimated fairly precisely. The method also estimates that the contribution of technology shocks could be lower than 78 percent under alternative assumptions.
This article is a progress report on research that attempts to include one type of market incompleteness and frictions in macroeconomic models. The focus of the research is the absence of markets for insuring individual-specific risks. The article describes some areas where this type of research has been and promises to be particularly useful, including consumption and saving, wealth distribution, asset markets, business cycles, and fiscal policies. The article also describes work in each of these areas that was presented at a conference sponsored by the Federal Reserve Bank of Minneapolis in the fall of 1993.
This paper studies the outcome of fully insured random selections among multiple competitive equilibria. This defines an iterative procedure of reallocation which is Pareto improving at each step. The process converges to a unique Pareto optimal allocation in finitely many steps. The key requirement is that random selections be continuous, which is a generic condition for smooth exchange economies with strictly concave utility functions.
It is often argued that with a positively skewed income distribution (median less than mean), majority voting over proportional tax rates would result in higher tax rates than those that maximize average welfare, and would accordingly reduce aggregate savings. We reexamine this view in a capital accumulation model in which distorting redistributive taxes provide insurance against idiosyncratic shocks and income distributions evolve endogenously. We find small differences of either sign between the tax rates set by majority voting and by a utilitarian government, for reasonable parametric specifications. We show how these differences reflect a greater responsiveness of a utilitarian government to the average need for the insurance provided by the tax-redistribution scheme. These conclusions remain true despite the fact that the model simulations produce positively skewed distributions of total income across agents.