This paper presents a frequency-domain technique for estimating distributed lag coefficients (the impulse-response function) when observations are randomly missed. The technique treats stationary processes with randomly missed observations as amplitude-modulated processes and estimates the transfer function accordingly. Estimates of the lag coefficients are then obtained by taking the inverse transform of the estimated transfer function. Results with artificially created data show that the technique performs well even when the probability of an observation being missed is one-half and, in some cases, when the probability of retaining an observation is as low as one-fifth. The approximate asymptotic variance of the estimator is also calculated.
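The pipeline the abstract describes, modeling missed observations as amplitude modulation, estimating the transfer function, and inverting it, can be sketched numerically. Everything below (the three-tap filter, the white-noise input, the pair-count correction for the modulation) is an illustrative assumption of ours, not the paper's exact estimator:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 40000
b_true = np.array([0.5, 0.3, 0.2])           # hypothetical true lag coefficients

x = rng.standard_normal(n)                    # white-noise input (an assumption)
y = np.convolve(x, b_true)[:n] + 0.1 * rng.standard_normal(n)

# Randomly missed observations modeled as amplitude modulation:
# multiply both series by a Bernoulli(p) indicator series (0 = missed).
p = 0.5
m = rng.binomial(1, p, size=n).astype(float)
xa, ya = m * x, m * y

max_lag = 16
def cov(u, v):
    # Lag-k covariances corrected for the modulation: divide by the
    # number of pairs in which BOTH observations are present.
    return np.array([(u[k:] * v[:n - k]).sum() / (m[k:] * m[:n - k]).sum()
                     for k in range(max_lag + 1)])

g_yx = cov(ya, xa)                            # cross-covariances E[y_{t+k} x_t]
g_xx = cov(xa, xa)                            # autocovariances of the input

# Transfer function = cross-spectrum / input spectrum; with a causal filter
# and white input, the one-sided covariances suffice.  The inverse transform
# of the estimated transfer function recovers the lag coefficients.
nfft = 256
H = np.fft.rfft(g_yx, nfft) / np.fft.rfft(g_xx, nfft)
b_hat = np.fft.irfft(H, nfft)[:len(b_true)]
```

With half the observations missed, `b_hat` stays close to the true coefficients, which is the qualitative result the abstract reports for artificially created data.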
We explore the long-run demand for M1 based on a dataset comprising 38 countries and relatively long sample periods, extending in some cases to over a century. Overall, in spite of a few failures, we find very strong evidence of a long-run relationship between the ratio of M1 to GDP and a short-term interest rate. The standard log-log specification provides a very good characterization of the data, except in periods featuring very low interest rate values. This is because such a specification implies that, as the short rate tends to zero, real money balances become arbitrarily large, which is rejected by the data. A simple extension that imposes limits on the amount households can borrow yields a truncated log-log specification, in line with what we observe in the data. We estimate the interest rate elasticity to be between 0.3 and 0.6, a range that encompasses the well-known square-root specification of Baumol and Tobin.
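The argument can be stated compactly. In notation we introduce here ($m_t$ for the M1-to-GDP ratio, $r_t$ for the short rate, $\eta$ for the interest elasticity, $\bar m$ for a satiation level; the exact form of the truncation is our own illustrative assumption, not the paper's):

```latex
% Log-log specification: balances diverge as the short rate goes to zero
\ln m_t = \ln A - \eta \ln r_t
\;\Longleftrightarrow\;
m_t = A\, r_t^{-\eta},
\qquad m_t \to \infty \ \text{as } r_t \to 0 .

% Truncated log-log: a borrowing limit caps balances at a satiation level
m_t = \min\bigl\{ A\, r_t^{-\eta},\ \bar m \bigr\},
\qquad \eta = \tfrac{1}{2} \ \text{is the Baumol--Tobin square-root case.}
```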
In this paper we propose and test a new explanation of bank behavior during the Free Banking Era, 1837–63. Arguing against the view that free bank failures were due to fraud, we claim that they were caused by exposure to term structure risk. Testing this explanation with an extensive new body of data, we find strong support for it: periods of falling bond prices coincide with the periods containing most of the free bank failures. The new data do not support the view that fraud caused the failures.
The purpose of this paper is to begin a reevaluation of the Free Banking Era by developing and examining individual-bank information on the population of banks that operated under the free banking laws in four states. This information allows us to determine the number of free banks that failed and to estimate the resulting losses to their note holders. While the new evidence suggests there were problems with free banking, it poses a serious challenge to the prevailing view that free banking led to financial chaos.
The claim that bad money drives out good is one of the oldest and most cited in economics. Economists refer to this claim as Gresham’s law. Yet despite its seemingly universal acceptance, the claim does not warrant its status as a law: we find it has no convincing explanation and many overlooked exceptions. We propose an alternative hypothesis based on the costs of using a medium of exchange at a nonpar price: small-denomination currency undervalued at the mint tends to disappear from circulation, while large-denomination currency usually circulates at a premium. Examining a variety of historical episodes in which market and legal prices differed, we find that our “law” explains history much better than Gresham’s.
This paper explains why the risky notes of banks established during the Free Banking Era (1837–63) were demanded even when relatively safe specie (gold and silver coin) was an alternative. Free bank notes were demanded because they were priced to reflect the expected value of their backing. The empirical evidence supports this explanation. Specifically, in New York, Wisconsin, and Indiana the expected value of backing was sufficient for free bank notes to circulate at par, which they did. In Minnesota the backing for notes was very poor: they exchanged well below par, being treated as small-denomination securities.
Our study examines whether there is a systematic relationship between the monetary standard under which a country operates and the rate of inflation it experiences. It also explores whether other properties of inflation, money, and output differ between economies operating under a commodity standard and economies operating under a fiat standard. The basis for our study is price, money, and output data for 15 countries that have operated under both types of monetary standards. For each of these countries the data cover 80 years, and for most the data cover more than 100 years. With these data we establish several facts about the differences in inflation, money growth, and output growth across the two types of standards. Specifically, under commodity standards, compared with fiat standards: inflation, money growth, and output growth are all lower; growth rates of monetary aggregates are less highly correlated with each other and with inflation; and growth rates of monetary aggregates are more highly correlated with output growth.
This paper analyzes the variability of output under money supply and exchange rate rules in an open economy in which the slope of the aggregate supply curve depends on the variances of aggregate demand and of market-specific innovations. It demonstrates that results on the dominance of one rule over the other that hold when the slope of the aggregate supply curve is constant are reversed when that slope depends on the variances of the innovations and those variances are sufficiently large.
In this paper we present a consistent estimator for a linear filter (distributed lag) when the independent variable is subject to observational error. Unlike the standard errors-in-variables estimator which uses instrumental variables, our estimator works directly with observed data. It is based on the Hilbert transform relationship between the phase and the log gain of a minimum phase-lag linear filter. The results of using our method to estimate a known filter and to estimate the relationship between consumption and income demonstrate that the method performs quite well even when the noise-to-signal ratio for the observed independent variable is large. We also develop a criterion for determining whether an estimated phase function is minimum phase-lag.
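The Hilbert-transform relationship the estimator rests on, that a minimum phase-lag filter's phase is pinned down by its log gain, can be checked numerically. The two-tap filter below is a hypothetical example of ours, not one from the paper, and the cepstral folding is one standard way to implement the Hilbert transform:

```python
import numpy as np

# Hypothetical minimum phase-lag filter: zero at z = -0.5, inside the
# unit circle, so its phase is fully determined by its log gain.
h = np.array([1.0, 0.5])

nfft = 1024
H = np.fft.fft(h, nfft)
log_gain = np.log(np.abs(H))

# Hilbert-transform relationship via the real cepstrum: fold the even
# cepstrum of the log gain into a causal sequence; the imaginary part of
# its transform is the minimum phase-lag phase function.
c = np.fft.ifft(log_gain).real               # real (even) cepstrum
fold = np.zeros(nfft)
fold[0] = c[0]
fold[1:nfft // 2] = 2.0 * c[1:nfft // 2]
fold[nfft // 2] = c[nfft // 2]
phase_rec = np.fft.fft(fold).imag            # phase reconstructed from gain alone

phase_true = np.angle(H)                     # actual phase of the filter
```

For this filter the reconstructed phase matches the true phase to numerical precision; for a filter that is not minimum phase-lag the two would diverge, which is the idea behind the paper's diagnostic criterion.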