Search
Search results

Creator: Boyd, John H., Daley, Lane A., 1953, and Runkle, David Edward Series: Working paper (Federal Reserve Bank of Minneapolis. Research Department) Number: 515 Abstract: This paper examines the seasonal pattern of accruals for loan-loss provisions and charge-offs chosen by bank managers. Using the existing literature on intra-year discretionary accruals, knowledge of the incentive systems used to evaluate bank managers' performance, and various regulatory characteristics, we predict that accruals for provisions and charge-offs will cluster in the fourth quarter of each year. We examine quarterly data for 105 large bank holding companies from the first quarter of 1980 through the fourth quarter of 1990. Our results indicate that: (1) provisions and charge-offs are clustered in the fourth quarter, (2) this clustering is not related to the level of business activity of the banks, (3) the proximity of a bank's actual capital to its regulatory capital requirement does not affect this clustering, and (4) current provisions are affected both by current charge-offs and by expectations about future charge-offs. To examine whether the systematic characteristics of these loan-loss provision and charge-off decisions are understood by users, we also estimate a quarterly equity valuation model in which quarterly provisions should be differentially weighted to reflect their seasonal characteristics. We find strong evidence to indicate that equity prices behave as if the market participants take these seasonal properties into account.
Keywords: Bank lending, Loan-loss provision, Seasonality, Loans, Loan losses, Charge-off, and Banks Subject: G14 – Information and Market Efficiency; Event Studies; Insider Trading and G21 – Banks; Depository Institutions; Micro Finance Institutions; Mortgages
On the Relation Between the Expected Value and the Volatility of the Nominal Excess Return on Stocks
Creator: Glosten, Lawrence R., Jagannathan, Ravi, and Runkle, David Edward Series: Working paper (Federal Reserve Bank of Minneapolis. Research Department) Number: 505 Abstract: Earlier researchers have found either no relation or a positive relation between the conditional expected return and the conditional variance of the monthly excess return on stocks when they used the standard GARCH-M model. This is in contrast to the negative relation found when other approaches were used to model conditional variance. We show that the difference in the estimated relation arises because the standard GARCH-M model is misspecified. When the standard model is modified to allow for (i) the presence of seasonal patterns in volatility, (ii) positive and negative innovations to returns to have different impacts on conditional volatility, and (iii) nominal interest rates to affect conditional variance, we once again find support for a negative relation. Using the modified GARCH-M model, we also show that there is little evidence to support the traditional view that conditional volatility is highly persistent. Also, positive unanticipated returns result in a downward revision of the conditional volatility whereas negative unanticipated returns result in an upward revision of conditional volatility of a similar magnitude. Hence the time series properties of the monthly excess return on stocks appear to be substantially different from those of the daily excess return on stocks.
Keywords: Stock market, Rate of return, Risk, Asset valuation, Return rate, and Stocks Subject: G12 – Asset Pricing; Trading Volume; Bond Interest Rates and G11 – Portfolio Choice; Investment Decisions
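The asymmetry described in this abstract (negative return innovations revising conditional volatility upward, positive ones downward) can be sketched as a GARCH(1,1) variance recursion with a leverage term. A minimal sketch; the parameter values below are illustrative, not estimates from the paper:

```python
def gjr_variance_path(shocks, omega=0.01, alpha=0.05, gamma=0.10, beta=0.85, h0=1.0):
    """Conditional variance recursion with a leverage term:
    h_t = omega + (alpha + gamma * 1[e_{t-1} < 0]) * e_{t-1}**2 + beta * h_{t-1}.
    With gamma > 0, a negative shock raises next-period variance more than
    a positive shock of the same size (parameter values are illustrative)."""
    h = [h0]
    for e in shocks:
        leverage = gamma * e * e if e < 0 else 0.0
        h.append(omega + alpha * e * e + leverage + beta * h[-1])
    return h

# equal-sized shocks of opposite sign, same starting variance
h_pos = gjr_variance_path([1.0])[-1]   # 0.01 + 0.05 + 0.85 = 0.91
h_neg = gjr_variance_path([-1.0])[-1]  # 0.01 + 0.05 + 0.10 + 0.85 = 1.01
```

The indicator term is what lets the two innovation signs have different impacts on conditional volatility, which is modification (ii) in the abstract.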
Creator: Geweke, John, Keane, Michael P., and Runkle, David Edward Series: Staff report (Federal Reserve Bank of Minneapolis. Research Department) Number: 177 Abstract: Statistical inference in multinomial multiperiod probit models has been hindered in the past by the high-dimensional numerical integrations necessary to form the likelihood functions, posterior distributions, or moment conditions in these models. We describe three alternative approaches to inference that circumvent the integration problem: Bayesian inference using Gibbs sampling and data augmentation to compute posterior moments, simulated maximum likelihood (SML) estimation using the GHK recursive probability simulator, and method of simulated moments (MSM) estimation using the GHK simulator. We perform a set of Monte Carlo experiments to compare the performance of these approaches. Although all the methods perform reasonably well, some important differences emerge. The root mean square errors (RMSEs) of the SML parameter estimates around the data-generating values exceed those of the MSM estimates by 21 percent on average, while the RMSEs of the MSM estimates exceed those of the posterior parameter means obtained via Gibbs sampling by 18 percent on average. While MSM produces good agreement between empirical RMSEs and asymptotic standard errors, the RMSEs of the SML estimates exceed the asymptotic standard errors by 28 percent on average. Also, the SML estimates of serial correlation parameters exhibit significant downward bias.
Keywords: Simulated maximum likelihood, Discrete choice, Panel data, Bayesian inference, Method of simulated moments, and Gibbs sampling Subject: C15 – Statistical Simulation Methods: General and C35 – Multiple or Simultaneous Equation Models: Discrete Regression and Qualitative Choice Models; Discrete Regressors; Proportions
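The GHK recursive probability simulator named in this abstract can be illustrated in its simplest setting: estimating a multivariate-normal orthant probability. The following is a minimal pure-Python sketch (the `ghk_orthant_prob` helper and the example correlation structure are constructed for this illustration, not taken from the paper):

```python
import math
import random
from statistics import NormalDist

N = NormalDist()

def ghk_orthant_prob(L, n_draws=20000, seed=0):
    """GHK simulator for P(Y_1 < 0, ..., Y_d < 0), where Y = L @ eta,
    eta ~ N(0, I) and L is the lower-triangular Cholesky factor of cov(Y).
    Each draw recursively samples eta_j from a univariate truncated normal
    and accumulates the truncation probabilities as an importance weight."""
    rng = random.Random(seed)
    d = len(L)
    total = 0.0
    for _ in range(n_draws):
        eta, weight = [], 1.0
        for j in range(d):
            mean = sum(L[j][k] * eta[k] for k in range(j))
            b = -mean / L[j][j]          # Y_j < 0  <=>  eta_j < b
            p = N.cdf(b)
            weight *= p
            q = min(max(rng.random() * p, 1e-12), 1.0 - 1e-12)
            eta.append(N.inv_cdf(q))     # draw from N(0,1) truncated to (-inf, b]
        total += weight
    return total / n_draws

# independent case: true orthant probability is 0.5 ** 3 = 0.125
L_id = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
# equicorrelated case (rho = 0.5): true probability is exactly 0.25
L_corr = [[1.0, 0.0, 0.0],
          [0.5, math.sqrt(0.75), 0.0],
          [0.5, 0.25 / math.sqrt(0.75), math.sqrt(2.0 / 3.0)]]
```

Replacing one d-dimensional integral with a product of one-dimensional truncated-normal probabilities is exactly how GHK circumvents the high-dimensional integration problem the abstract describes.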
Creator: McCabe, Kevin A., Mukherji, Arijit, and Runkle, David Edward Series: Staff report (Federal Reserve Bank of Minneapolis. Research Department) Number: 176 Abstract: We report on experiments that tested the predictions of competing theories of learning in games. Experimental subjects played a version of the three-person matching-pennies game. The unique mixed-strategy Nash equilibrium of this game is locally unstable under naive Bayesian learning. Sophisticated Bayesian learning predicts that expectations will converge to Nash equilibrium if players observe the entire history of play. Neither theory requires payoffs to be common knowledge. We develop maximum-likelihood tests for the independence conditions implied by the mixed-strategy Nash equilibrium. We find that perfect monitoring was sufficient and complete payoff information was unnecessary for average play to be consistent with the equilibrium (as is predicted by sophisticated Bayesian learning). When subjects had imperfect monitoring and incomplete payoff information, average play was inconsistent with the equilibrium.

Creator: Geweke, John, Keane, Michael P., and Runkle, David Edward Series: Staff report (Federal Reserve Bank of Minneapolis. Research Department) Number: 170 Abstract: This research compares several approaches to inference in the multinomial probit model, based on Monte Carlo results for a seven-choice model. The experiment compares the simulated maximum likelihood estimator using the GHK recursive probability simulator, the method of simulated moments estimator using the GHK recursive simulator and kernel-smoothed frequency simulators, and posterior means using a Gibbs sampling–data augmentation algorithm. Each estimator is applied in nine different models, which have from 1 to 40 free parameters. The performance of all estimators is found to be satisfactory. However, the results indicate that the method of simulated moments estimator with the kernel-smoothed frequency simulator does not perform quite as well as the other three methods. Among those three, the Gibbs sampling–data augmentation algorithm appears to have a slight overall edge, with the relative performance of MSM and SML based on the GHK simulator difficult to determine.
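The Gibbs sampling–data augmentation approach compared in these experiments can be illustrated on a much simpler relative: a one-coefficient binary probit with a flat prior, in the style of Albert–Chib augmentation. A toy sketch on hypothetical simulated data, not the multinomial multiperiod setup of the paper:

```python
import math
import random
from statistics import NormalDist

N = NormalDist()

def probit_gibbs(x, y, n_iter=1500, burn=500, seed=0):
    """Data augmentation for y_i = 1{beta * x_i + e_i > 0}, e_i ~ N(0, 1),
    flat prior on beta.  Alternates between drawing each latent z_i from a
    truncated normal given beta, and drawing beta from its normal conditional
    given the latents.  Returns the posterior mean of beta."""
    rng = random.Random(seed)
    sxx = sum(xi * xi for xi in x)
    beta, draws = 0.0, []
    for it in range(n_iter):
        z = []
        for xi, yi in zip(x, y):
            m = beta * xi
            p0 = N.cdf(-m)                      # P(z_i < 0 | beta)
            u = rng.random()
            q = p0 + u * (1.0 - p0) if yi == 1 else u * p0
            q = min(max(q, 1e-12), 1.0 - 1e-12)
            z.append(m + N.inv_cdf(q))          # truncated-normal draw
        mean = sum(xi * zi for xi, zi in zip(x, z)) / sxx
        beta = mean + rng.gauss(0.0, 1.0) / math.sqrt(sxx)
        if it >= burn:
            draws.append(beta)
    return sum(draws) / len(draws)

# hypothetical data with true beta = 1
data_rng = random.Random(42)
x = [data_rng.gauss(0.0, 1.0) for _ in range(300)]
y = [1 if xi + data_rng.gauss(0.0, 1.0) > 0 else 0 for xi in x]
```

The augmentation step is what makes the conditional for `beta` a simple normal draw, which is the same trick the Gibbs sampling–data augmentation algorithm exploits in the multinomial case.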

Creator: Runkle, David Edward Series: Staff report (Federal Reserve Bank of Minneapolis. Research Department) Number: 107 Abstract: The statistical significance of variance decompositions and impulse response functions for unrestricted vector autoregressions is questionable. Most previous studies are suspect because they have not provided confidence intervals for variance decompositions and impulse response functions. Here two methods of computing such intervals are developed, one using a normal approximation and the other using bootstrapped resampling. An example from Sims’ work illustrates the importance of computing these confidence intervals. In the example, the 95 percent confidence intervals for variance decompositions span up to 66 percentage points at the usual forecasting horizon.
Keywords: Bootstrapping, Time series, and Macroeconomics
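The bootstrapped-resampling approach to confidence intervals described in this abstract can be sketched in the simplest case: an AR(1), whose impulse response at horizon h is just phi**h. A hypothetical minimal illustration using residual resampling and a percentile interval, not the paper's VAR procedure:

```python
import random

def fit_ar1(y):
    """OLS slope of y_t on y_{t-1} (no intercept)."""
    num = sum(y[t] * y[t - 1] for t in range(1, len(y)))
    den = sum(y[t - 1] ** 2 for t in range(1, len(y)))
    return num / den

def bootstrap_irf_ci(y, horizon=4, n_boot=500, alpha=0.05, seed=0):
    """Percentile bootstrap interval for the impulse response phi**horizon:
    resample the fitted residuals with replacement, rebuild the series
    recursively, and re-estimate the impulse response on each replica."""
    rng = random.Random(seed)
    phi = fit_ar1(y)
    resid = [y[t] - phi * y[t - 1] for t in range(1, len(y))]
    irfs = []
    for _ in range(n_boot):
        yb = [y[0]]
        for _ in range(1, len(y)):
            yb.append(phi * yb[-1] + rng.choice(resid))
        irfs.append(fit_ar1(yb) ** horizon)
    irfs.sort()
    lo = irfs[int(n_boot * alpha / 2)]
    hi = irfs[int(n_boot * (1 - alpha / 2)) - 1]
    return phi ** horizon, lo, hi

# simulated AR(1) data with phi = 0.8
rng = random.Random(1)
y = [0.0]
for _ in range(200):
    y.append(0.8 * y[-1] + rng.gauss(0.0, 1.0))
```

Reporting the interval alongside the point estimate is the abstract's central recommendation: the point estimate alone says nothing about how wide the sampling uncertainty is.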
On the Relation Between the Expected Value and the Volatility of the Nominal Excess Return on Stocks
Creator: Glosten, Lawrence R., Jagannathan, Ravi, and Runkle, David Edward Series: Staff report (Federal Reserve Bank of Minneapolis. Research Department) Number: 157 Abstract: We find support for a negative relation between conditional expected monthly return and conditional variance of monthly return, using a GARCH-M model modified by allowing (i) seasonal patterns in volatility, (ii) positive and negative innovations to returns having different impacts on conditional volatility, and (iii) nominal interest rates to predict conditional variance. Using the modified GARCH-M model, we also show that monthly conditional volatility may not be as persistent as was thought. Positive unanticipated returns appear to result in a downward revision of the conditional volatility whereas negative unanticipated returns result in an upward revision of conditional volatility.