The purpose of this paper is to begin a reevaluation of the Free Banking Era by developing and examining individual-bank information on the population of banks that operated under the free banking laws of four states. These data allow us to determine the number of free banks that failed and to estimate the resulting losses to their note holders. While the new evidence suggests that free banking had problems, it poses a serious challenge to the prevailing view that free banking led to financial chaos.
In this paper we propose and test a new explanation of bank behavior during the Free Banking Era, 1837–63. Arguing against the view that free bank failures were due to fraud, we claim that they were caused by exposure to term-structure risk. Testing this explanation against a new and extensive body of data, we find strong support for it: periods of falling bond prices coincide with the periods in which most free bank failures occurred. The new data do not support the view that fraud caused the failures.
We explore the long-run demand for M1 using a dataset comprising 38 countries and relatively long sample periods, extending in some cases to over a century. Overall, we find very strong evidence of a long-run relationship between the ratio of M1 to GDP and a short-term interest rate, notwithstanding a few exceptions. The standard log-log specification characterizes the data very well, except in periods of very low interest rates. The reason is that such a specification implies that, as the short rate tends to zero, real money balances become arbitrarily large, which the data reject. A simple extension that imposes limits on how much households can borrow yields a truncated log-log specification, which is in line with what we observe in the data. We estimate the interest rate elasticity to lie between 0.3 and 0.6, a range that includes the value of one-half implied by the well-known square-root formula of Baumol and Tobin.
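The truncated log-log specification and its relation to the Baumol–Tobin square-root formula can be sketched numerically. The constant `A`, the satiation level `m_max`, and the brokerage-cost and income values below are illustrative assumptions, not estimates from the paper:

```python
import math

def money_demand_ratio(i, eta=0.5, A=0.3, m_max=1.2):
    """Truncated log-log money demand: M1/GDP = A * i**(-eta),
    capped at a satiation level m_max once the short rate is so low
    that borrowing limits bind (all parameter values illustrative)."""
    return min(A * i ** (-eta), m_max)

def baumol_tobin(i, b=0.02, Y=1.0):
    """Baumol-Tobin inventory model: M* = sqrt(b * Y / (2 * i)),
    whose implied interest rate elasticity is exactly one-half."""
    return math.sqrt(b * Y / (2.0 * i))
```

At moderate rates the log-log rule applies with constant elasticity `eta`; as the short rate tends to zero, the ratio is capped at `m_max` rather than diverging, which is the truncation the abstract motivates with borrowing limits.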
This paper presents a frequency-domain technique for estimating distributed lag coefficients (the impulse-response function) when observations are randomly missed. The technique treats stationary processes with randomly missed observations as amplitude-modulated processes and estimates the transfer function accordingly. Estimates of the lag coefficients are then obtained by taking the inverse transform of the estimated transfer function. Results with simulated data show that the technique performs well even when the probability of an observation being missed is one-half and, in some cases, when the probability of retaining an observation is as low as one-fifth. The paper also calculates the approximate asymptotic variance of the estimator.
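A minimal numerical sketch of the general approach: represent missed observations as zeros (amplitude modulation by a 0/1 indicator), estimate the transfer function from averaged cross- and auto-periodograms, and inverse-transform it to recover the lag coefficients. The white-noise input, the common miss indicator for both series, the lag weights, and the simple `1/p` scaling correction are all simplifying assumptions for illustration; the paper's estimator is more general:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 20000, 0.5                     # sample size; probability an observation is kept
b = np.array([0.0, 1.0, 0.5, 0.25])  # "true" lag coefficients (illustrative);
                                     # b[0] = 0 avoids the lag-0 modulation bias

x = rng.standard_normal(n)                                # white-noise input
y = np.convolve(x, b)[:n] + 0.1 * rng.standard_normal(n)  # distributed-lag output

# Amplitude modulation: a missed observation is recorded as zero.
keep = rng.random(n) < p
xm, ym = np.where(keep, x, 0.0), np.where(keep, y, 0.0)

# Average cross- and auto-periodograms over non-overlapping segments.
nseg = 100
seglen = n // nseg
Sxy = np.zeros(seglen, dtype=complex)
Sxx = np.zeros(seglen)
for i in range(nseg):
    X = np.fft.fft(xm[i * seglen:(i + 1) * seglen])
    Y = np.fft.fft(ym[i * seglen:(i + 1) * seglen])
    Sxy += Y * np.conj(X)
    Sxx += np.abs(X) ** 2

# With a white-noise input and a common miss indicator, the ratio of the
# modulated spectra is p times the true transfer function, so divide by p.
H = Sxy / Sxx
b_hat = np.fft.ifft(H).real / p
```

With this setup the first few entries of `b_hat` approximately recover the assumed weights even though half the observations are missing; a colored input or separate miss indicators would require the fuller correction the paper develops.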