A pairwise random meeting model with money is used to study the nominal yield on pure-discount, default-free securities that are issued by the government. There is one steady state with matured securities at par and, for some parameters, another with them at a discount. In the former, exogenous rejection of unmatured securities by the government is necessary and sufficient for such a steady state to display a positive nominal yield on unmatured securities. In the latter, the post-maturity discount on securities induces a deeper pre-maturity discount even if there is no exogenous rejection of unmatured securities.
Entrepreneurs bear substantial risk, but empirical evidence shows no sign of a positive premium. This paper develops a theory of endogenous entrepreneurial risk taking that explains why self-financed entrepreneurs may find it optimal to invest in risky projects offering no risk premium. The model also has a number of implications for firm dynamics that are supported by empirical evidence, such as a positive correlation among survival, size, and firm age.
In this paper we examine the evidence for two competing views of how monetary and financial disturbances influenced the real economy during the national banking era, 1880-1914. According to the monetarist view, monetary disturbances affected the real economy through changes on the liability side of the banking system's balance sheet independent of the composition of bank portfolios. According to the credit rationing view, equilibrium credit rationing in a world of asymmetric information can explain short-run fluctuations in real output. Using structural VARs we incorporate monetary variables in credit models and credit variables in monetarist models, with inconclusive results. To resolve this ambiguity, we invoke the institutional features of the national banking era. Most of the variation in bank loans is accounted for by loans secured by stock, which in turn reflect volatility in the stock market. When account is taken of the stock market, the influence of credit in the VAR model is greatly reduced, while the influence of money remains robust. The breakdown of the composition of bank loans into stock market loans (traded in open asset markets) and other business loans (a possible setting for credit rationing) reveals that other business loans remained remarkably stable over the business cycle.
The possibility of exact maximum likelihood estimation of many observation-driven models remains an open question. Often only approximate maximum likelihood estimation is attempted, because the unconditional density needed for exact estimation is not known in closed form. Using simulation and nonparametric density estimation techniques that facilitate empirical likelihood evaluation, we develop an exact maximum likelihood procedure. We provide an illustrative application to the estimation of ARCH models, in which we compare the sampling properties of the exact estimator to those of several competitors. We find that, especially in situations of small samples and high persistence, efficiency gains are obtained.
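The idea above can be sketched in a few lines. This is a minimal illustration, not the paper's procedure: it evaluates an "exact" ARCH(1) log-likelihood by approximating the unknown unconditional density of the first observation with a simulated sample and a Gaussian kernel density estimate, then adding the closed-form conditional Gaussian densities for the remaining observations. All parameter values and function names here are assumptions made for the example.

```python
# Hypothetical sketch: exact ARCH(1) likelihood where the unconditional
# density of the first observation is approximated by simulation + KDE.
import numpy as np

rng = np.random.default_rng(0)

def simulate_arch1(omega, alpha, n, rng):
    """Simulate an ARCH(1) path: y_t = sqrt(omega + alpha*y_{t-1}^2) * e_t."""
    y = np.zeros(n)
    for t in range(1, n):
        y[t] = np.sqrt(omega + alpha * y[t - 1] ** 2) * rng.standard_normal()
    return y

def kde_logpdf(x, sample, bw):
    """Gaussian kernel estimate of log f(x) from a simulated sample."""
    z = (x - sample) / bw
    return np.log(np.mean(np.exp(-0.5 * z ** 2)) / (bw * np.sqrt(2 * np.pi)))

def exact_loglik(y, omega, alpha, rng, n_sim=50_000):
    # Unconditional density of y[0]: long simulated path, burn-in discarded
    sim = simulate_arch1(omega, alpha, n_sim + 500, rng)[500:]
    bw = 1.06 * sim.std() * n_sim ** (-1 / 5)   # Silverman's rule of thumb
    ll = kde_logpdf(y[0], sim, bw)
    # Conditional Gaussian densities for the remaining observations
    h = omega + alpha * y[:-1] ** 2
    ll += np.sum(-0.5 * (np.log(2 * np.pi * h) + y[1:] ** 2 / h))
    return ll

y = simulate_arch1(0.2, 0.5, 300, rng)
print(exact_loglik(y, 0.2, 0.5, rng))
```

In practice the KDE term would be embedded in a numerical optimizer over (omega, alpha); the sketch only shows how simulation makes the otherwise unavailable unconditional density computable.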
In this paper we study the relationship between wealth, income distribution and growth in a game-theoretic context in which property rights are not completely enforceable. We consider equilibrium paths of accumulation which yield players utilities that are at least as high as those that they could obtain by appropriating higher consumption in the present and suffering retaliation later on. We focus on those subgame perfect equilibria which are constrained Pareto-efficient (second best). In this set of equilibria we study how the level of wealth affects growth. In particular we consider cases which produce classical traps (with standard concave technologies): growth may not be possible from low levels of wealth because of incentive constraints, while policies (sometimes even first-best policies) that lead to growth are sustainable as equilibria from high levels of wealth. We also study cases which we classify as the "Mancur Olson" type: first-best policies are used at low levels of wealth along these constrained Pareto-efficient equilibria, but first-best policies are not sustainable at higher levels of wealth, where growth slows down. We also consider the unequal weighting of players to trace the subgame perfect equilibria on the constrained Pareto frontier. We explore the relation between sustainable growth rates and the level of inequality in the distribution of income.
This paper presents a method to perform likelihood-based inference in nonlinear dynamic equilibrium economies. Such models have become a standard tool in quantitative economics. However, the existing literature has so far been forced to use moment procedures or linearization techniques to estimate these models. This situation is unsatisfactory: moment procedures suffer from strong small-sample biases, and linearization depends crucially on the shape of the true policy functions, possibly leading to erroneous answers. We propose the use of Sequential Monte Carlo methods to evaluate the likelihood function implied by the model. Then we can perform likelihood-based inference, either searching for a maximum (Quasi-Maximum Likelihood Estimation) or simulating the posterior using a Markov Chain Monte Carlo algorithm (Bayesian Estimation). We can also compare different models even if they are nonnested and misspecified. To perform classical model selection, we follow Vuong (1989) and use the Kullback-Leibler distance to build Likelihood Ratio Tests. To perform Bayesian model comparison, we build Bayes factors. As an application, we estimate the stochastic neoclassical growth model.
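The Sequential Monte Carlo step can be illustrated with a bootstrap particle filter. This is a generic sketch, not the paper's code: the state-space model below (a standard nonlinear test model) and all parameter names are assumptions chosen only to show how particles propagate, weight, and resample to yield a log-likelihood approximation.

```python
# Hedged sketch: bootstrap particle filter approximating the log-likelihood
# of the nonlinear state-space model (names are assumptions):
#   x_t = rho*x_{t-1} + w_t,   y_t = x_t**2 / 20 + v_t
import numpy as np

rng = np.random.default_rng(1)

def particle_loglik(y, n_particles=2000, rho=0.9, sig_w=1.0, sig_v=1.0):
    x = rng.standard_normal(n_particles)                # initial particle swarm
    ll = 0.0
    for yt in y:
        x = rho * x + sig_w * rng.standard_normal(n_particles)   # propagate
        w = np.exp(-0.5 * ((yt - x ** 2 / 20) / sig_v) ** 2)     # measurement weights
        ll += np.log(w.mean() / (sig_v * np.sqrt(2 * np.pi)))    # incremental likelihood
        w /= w.sum()
        x = rng.choice(x, size=n_particles, p=w)                 # multinomial resampling
    return ll

# Simulate data from the same model, then evaluate the likelihood
T = 100
x_path = np.zeros(T)
for t in range(1, T):
    x_path[t] = 0.9 * x_path[t - 1] + rng.standard_normal()
y = x_path ** 2 / 20 + rng.standard_normal(T)
print(particle_loglik(y))
```

Wrapping `particle_loglik` in an optimizer gives the quasi-maximum-likelihood estimator; embedding it in a Metropolis-Hastings loop gives the Bayesian posterior simulation the abstract describes.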
I show that a simple sticky price model based on Rotemberg (1982) is consistent with a variety of facts concerning the correlation of prices, hours and output. In particular, I show that it is consistent with a negative correlation between the detrended levels of output and prices when the Beveridge-Nelson method is used to detrend both the price and output data. Such a correlation, i.e., a negative correlation between the predictable movements in output and the predictable movements in prices, is present (and very strong) in U.S. data. Consistent with the model, this correlation is stronger than correlations between prices and hours of work. I also study the size of the predictable price movements that are associated with predictable output movements, as well as the degree to which there are predictable movements in monetary aggregates associated with predictable movements in output. These facts are used to shed light on the degree to which the Federal Reserve has pursued a policy designed to stabilize expected inflation.
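The Beveridge-Nelson detrending mentioned above has a closed form in the simplest case. As a minimal sketch (not the paper's implementation), suppose the first difference of a series follows an AR(1); the BN permanent component is then the current level plus the sum of all forecasted future changes, and the cycle is the remainder. Parameter names below are assumptions for the example.

```python
# Minimal Beveridge-Nelson decomposition when dy_t - mu = phi*(dy_{t-1} - mu) + e_t.
# Then Trend_t = y_t + phi/(1-phi) * (dy_t - mu), Cycle_t = y_t - Trend_t.
import numpy as np

def bn_decompose_ar1(y, phi, mu):
    dy = np.diff(y)
    trend = y[1:] + (phi / (1 - phi)) * (dy - mu)   # level + expected future changes
    cycle = y[1:] - trend                            # transitory component
    return trend, cycle

# Simulate a series whose growth rate is AR(1), then decompose it
rng = np.random.default_rng(2)
T, phi, mu = 400, 0.4, 0.1
dy = np.zeros(T)
for t in range(1, T):
    dy[t] = mu + phi * (dy[t - 1] - mu) + rng.standard_normal()
y = np.cumsum(dy)
trend, cycle = bn_decompose_ar1(y, phi, mu)
print(cycle.std())
```

In applied work phi and mu would be estimated (often from a richer ARMA model) rather than known, and the same decomposition would be applied to both the price and output series before computing their correlation.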
This paper presents a model of growth through technical progress. The nature and scope of what is learned is derived from a set of axioms, and optimal search behavior by agents is then analyzed. Agents can search intensively or extensively. Intensive search explores a technology in greater depth, while extensive search yields new technologies. Agents alternate between these two modes of search. The economy grows forever and the growth rate is bounded away from zero. The growth rate is on average higher during periods of intensive search than during periods of extensive search. Epochs of higher growth are initiated by discoveries that call for further intensive exploration. This mechanism is reminiscent of the process described by Schumpeter as causing long-wave business cycles. Serial correlation properties of output and growth stem from the presence of intensive rather than extensive search. The two key parameters are technological opportunity and the cost of the extensive search.
In a world with two similar, developed economies, economic integration can cause a permanent increase in the worldwide rate of growth. Starting from a position of isolation, closer integration can be achieved by increasing trade in goods or by increasing flows of ideas. We consider two models with different specifications of the research and development sector that is the source of growth. Either form of integration can increase the long-run rate of growth if it encourages the worldwide exploitation of increasing returns to scale in the research and development sector.
This paper studies the small-sample properties of the indirect inference procedure, which has previously been studied only from an asymptotic point of view. First, we highlight the fact that the Andrews (1993) median-bias correction procedure for the autoregressive parameter of an AR(1) process is closely related to indirect inference; we prove that the counterpart of the median-bias correction for the indirect inference estimator is an exact bias correction in the sense of a generalized mean. Next, assuming that the auxiliary estimator admits an Edgeworth expansion, we prove that indirect inference automatically performs a second-order bias correction. The latter is a well-known property of the bootstrap estimator; we therefore provide a precise comparison between these two simulation-based estimators.
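The bias-correction mechanism can be made concrete with the AR(1) example the abstract refers to. This is an illustrative sketch under simplifying assumptions (grid search over the binding function rather than a proper optimizer; all names are hypothetical): the OLS estimate of the autoregressive coefficient is biased toward zero in small samples, and indirect inference picks the parameter whose average simulated OLS estimate matches the one observed in the data, undoing the bias automatically.

```python
# Hedged sketch of indirect inference for the AR(1) coefficient:
# choose theta so that the mean OLS estimate across simulated paths of the
# same length matches the OLS estimate on the data.
import numpy as np

rng = np.random.default_rng(3)

def ols_ar1(y):
    """OLS estimate of the AR(1) coefficient (no intercept)."""
    return np.sum(y[1:] * y[:-1]) / np.sum(y[:-1] ** 2)

def simulate_ar1(theta, T, rng):
    y = np.zeros(T)
    for t in range(1, T):
        y[t] = theta * y[t - 1] + rng.standard_normal()
    return y

def indirect_inference(beta_hat, T, n_sim=200, grid=np.linspace(0.5, 0.99, 50)):
    # Binding function b(theta) = E[OLS | theta], approximated by simulation
    b = np.array([np.mean([ols_ar1(simulate_ar1(th, T, rng))
                           for _ in range(n_sim)]) for th in grid])
    return grid[np.argmin(np.abs(b - beta_hat))]   # invert the binding function

T, theta0 = 60, 0.9
y = simulate_ar1(theta0, T, rng)
beta_hat = ols_ar1(y)              # biased toward zero in small samples
theta_ii = indirect_inference(beta_hat, T)
print(beta_hat, theta_ii)
```

The bootstrap corrects the same second-order bias by recentering the estimator; indirect inference achieves it implicitly by matching the auxiliary estimator across simulated and observed data, which is the comparison the paper formalizes.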