We study a variant of Diamond's one-sector neoclassical growth model in which capital investment must be credit financed and an adverse selection problem arises in loan markets. The result is that the unfettered operation of credit markets leads to a one-dimensional indeterminacy of equilibrium. Many equilibria display economic fluctuations that do not vanish asymptotically; such equilibria are characterized by transitions between a Walrasian regime, in which the adverse selection problem does not matter, and a regime of credit rationing, in which it does. Moreover, for some parameter configurations, all equilibria display such transitions. Transitions occur for two reasons. First, the banking system imposes ceilings on credit when the economy expands and floors when it contracts, because the quality of public information about the applicant pool of potential borrowers is negatively correlated with the demand for credit. Second, depositors believe that returns on bank deposits will be low (or high); these beliefs lead them to transfer savings out of (into) the banking system and into less (more) productive uses. The associated disintermediation (or its opposite) causes banks to contract (expand) credit. The result is a set of equilibrium loan interest rates that validates depositors' original beliefs. We investigate the existence of perfect foresight equilibria displaying periodic (possibly asymmetric) cycles that consist of m periods of expansion followed by n periods of contraction, and we propose an algorithm that detects all such cycles.
This paper proposes a simple method for guiding researchers in developing quantitative models of economic fluctuations. We show that a large class of models, including models with various frictions, are equivalent to a prototype growth model with time-varying wedges that, at least at face value, look like time-varying productivity, labor taxes, and capital income taxes. We label these wedges efficiency wedges, labor wedges, and investment wedges. We use data to measure the wedges and then feed them back into the prototype growth model. We then assess the fraction of fluctuations accounted for by these wedges during the Great Depressions of the 1930s in the United States, Germany, and Canada. We find that the efficiency and labor wedges in combination account for essentially all of the declines and subsequent recoveries. The investment wedge plays, at best, a minor role.
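The labor-wedge measurement step described above can be illustrated with a minimal sketch. Under the standard benchmark assumptions of this literature (log consumption and log leisure preferences, Cobb-Douglas production), the wedge is the gap between the consumption-leisure marginal rate of substitution and the marginal product of labor. The parameter values and data points below are illustrative stand-ins, not the paper's calibration or measurements.

```python
# Sketch of labor-wedge measurement under assumed benchmark functional
# forms: utility log(c) + PSI*log(1-l), production y = k^THETA * l^(1-THETA).
# Parameter values and "data" are illustrative, not from the paper.

THETA = 0.35   # capital share (assumed)
PSI = 2.24     # leisure weight in utility (assumed)

def labor_wedge(c, l, y):
    """Return tau_l implied by the intratemporal condition
    (1 - tau_l) * (1 - THETA) * y / l = PSI * c / (1 - l)."""
    mrs = PSI * c / (1.0 - l)       # marginal rate of substitution
    mpl = (1.0 - THETA) * y / l     # marginal product of labor
    return 1.0 - mrs / mpl

# Made-up detrended observations: (consumption, hours, output) per period.
data = [(0.55, 0.30, 1.00), (0.52, 0.27, 0.93), (0.50, 0.25, 0.88)]
wedges = [labor_wedge(c, l, y) for c, l, y in data]
```

In this artificial contraction the measured labor wedge rises period by period, which is the kind of movement the accounting procedure would then feed back into the prototype model.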
This paper reports some empirical evidence on the relation between the expected real interest rate and monetary aggregates in postwar U.S. data. We find some evidence against the hypothesis, implied by the real business cycle model of Litterman and Weiss (1985), that the expected real interest rate follows a univariate autoregressive process that is not Granger-caused by monetary aggregates. Our findings, however, are consistent with a more general bivariate model, suggested by what Barro (1987, Chapter 5) refers to as "the basic market-clearing model," in which the real rate depends on its own lagged values and on lagged output. Taking this bivariate model as our null hypothesis, we find no evidence that money-stock changes have a significant liquidity effect on the expected real interest rate.
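The Granger-causality question at the heart of this abstract, whether lagged values of one series improve forecasts of another beyond its own lags, can be sketched as a restricted-versus-unrestricted OLS comparison with an F-statistic. This is a minimal illustration on simulated series, not the paper's specification or data; the lag length and data-generating process are arbitrary.

```python
import numpy as np

# Minimal Granger-causality sketch: does lagged x help predict y beyond
# y's own lags? Compare restricted and unrestricted OLS via an F-statistic.
# Illustrative only; the paper's tests use postwar U.S. data.

def granger_F(y, x, p=2):
    T = len(y)
    rows = range(p, T)
    Y = np.array([y[t] for t in rows])
    # Restricted regression: constant plus p own lags of y.
    Xr = np.array([[1.0] + [y[t - i] for i in range(1, p + 1)] for t in rows])
    # Unrestricted regression: additionally p lags of x.
    Xu = np.array([[1.0] + [y[t - i] for i in range(1, p + 1)]
                         + [x[t - i] for i in range(1, p + 1)] for t in rows])
    ssr = lambda X: np.sum((Y - X @ np.linalg.lstsq(X, Y, rcond=None)[0]) ** 2)
    ssr_r, ssr_u = ssr(Xr), ssr(Xu)
    n, k = len(Y), Xu.shape[1]
    return ((ssr_r - ssr_u) / p) / (ssr_u / (n - k))

# Simulate y so that lagged x genuinely matters for it.
rng = np.random.default_rng(0)
x = rng.standard_normal(300)
y = np.zeros(300)
for t in range(1, 300):
    y[t] = 0.6 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.standard_normal()
F = granger_F(y, x)  # a large F rejects "x does not Granger-cause y"
```

Rejecting the restriction here is evidence against a purely univariate autoregressive null, which is the logic the abstract applies to the expected real rate and monetary aggregates.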
The new classical view that macroeconomic fluctuations can be modeled as an equilibrium system perturbed by transitory monetary disturbances has been challenged in recent years by another equilibrium view of fluctuations, the so-called real business cycle theory. In this latter framework, shocks to the production function induce both intertemporal substitution of labor supply and permanent shifts in the stochastic trend of output. Monetary shocks, by contrast, play only a minor role in this view of the cycle. Much of the empirical support for the real business cycle view of fluctuations rests on a re-examination of traditional methods for detrending economic time series. The issues raised by the real business cycle theorists are not new; indeed, they go back at least to the NBER's first business cycle studies. However, the real business cycle theorists attach a radical economic interpretation to what, on the surface, appears to be a purely technical question about the proper method for detrending economic data. This paper reviews the debate over stochastic trends, discusses the economic implications of the real business cycle interpretation of stochastic trend models, and weighs the time-series evidence for some of the stronger claims made by real business cycle theorists. We conclude that, while this literature raises real and useful questions about the interpretation of observed fluctuations, the new classical view of the cycle is not ruled out by the data.
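The detrending debate summarized above turns on whether a series is trend-stationary (fluctuations around a deterministic trend) or difference-stationary (a stochastic trend). A Dickey-Fuller-style regression is one common way to frame the distinction, sketched below on simulated data. This is only an illustration of the mechanics; proper inference requires the nonstandard critical values and the more careful specifications discussed in this literature.

```python
import numpy as np

# Dickey-Fuller-style sketch of the trend- vs difference-stationary
# distinction: regress dy_t on a constant, a linear trend, and y_{t-1},
# and inspect the t-statistic on y_{t-1}. Illustrative only.

def df_tstat(y):
    """t-statistic on y_{t-1} in: dy_t = a + b*t + rho*y_{t-1} + e_t."""
    dy = np.diff(y)
    T = len(dy)
    X = np.column_stack([np.ones(T), np.arange(T), y[:-1]])
    beta, *_ = np.linalg.lstsq(X, dy, rcond=None)
    resid = dy - X @ beta
    s2 = resid @ resid / (T - X.shape[1])
    cov = s2 * np.linalg.inv(X.T @ X)
    return beta[2] / np.sqrt(cov[2, 2])

rng = np.random.default_rng(1)
e = rng.standard_normal(500)
walk = np.cumsum(e)                # difference-stationary: a random walk
trend = 0.05 * np.arange(500) + e  # trend-stationary: noise around a trend

t_walk, t_trend = df_tstat(walk), df_tstat(trend)
# t_trend is strongly negative (mean reversion around the trend);
# t_walk stays near zero, consistent with a stochastic trend.
```

The economic stakes described in the abstract follow from this statistical distinction: under a stochastic trend, productivity shocks shift output permanently rather than transitorily.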