OXFORD BULLETIN OF ECONOMICS AND STATISTICS, 80, 2 (2018) 0305–9049
doi: 10.1111/obes.12217
Data-Driven Identification Constraints for DSGE Models*
Markku Lanne† and Jani Luoto
Department of Political and Economic Studies, and HECER, University of Helsinki,
Helsinki, Finland (e-mail: markku.lanne@helsinki.fi; jani.luoto@helsinki.fi)
JEL Classification numbers: C11, C32, C52, D58.
*We would like to thank Francesco Zanetti (the Editor) and an anonymous referee for useful comments. Financial support from the Academy of Finland (grants 268454 and 308628) is gratefully acknowledged. The first author also acknowledges financial support from CREATES (DNRF78), funded by the Danish National Research Foundation, while the second author is grateful for financial support from the Research Funds of the University of Helsinki. Part of this research was done while the second author was visiting the Bank of Finland, whose hospitality is gratefully acknowledged.
Abstract
We propose imposing data-driven identification constraints to alleviate the multimodality
problem arising in the estimation of poorly identified dynamic stochastic general equilib-
rium models under non-informative prior distributions. We also devise an iterative pro-
cedure based on the posterior density of the parameters for finding these constraints. An
empirical application to the Smets and Wouters (2007) model demonstrates the properties
of the estimation method, and shows how the problem of multimodal posterior distributions
caused by parameter redundancy is eliminated by identification constraints. Out-of-sample
forecast comparisons as well as Bayes factors lend support to the constrained model.
I. Introduction
Advances in Bayesian simulation methods have recently facilitated the estimation of rela-
tively large-scale dynamic stochastic general equilibrium (DSGE) models. However, when
using the commonly employed random walk Metropolis–Hastings (RWMH) algorithm,
typically relatively tight prior distributions have to be assumed to tackle flat and multi-
modal posterior distributions arising from weak identification in these models (see e.g.
Koop, Pesaran and Smith, 2013 and the references therein). This has the unfortunate con-
sequence that the resulting posterior distributions may say little about how well the
structural model fits the data; instead, the priors are likely to drive the results,
which precludes us from learning about the parameters of the model from the data.
Under less informative priors, one potential solution to the problem of weak identi-
fication is offered by the so-called data-driven identifiability constraints put forth in the
statistics literature (see Frühwirth-Schnatter, 2001) but, to the best of our knowledge, not
applied to DSGE models. Such constraints can be found by inspection of the output of
the posterior distribution. Subsequently, the restricted and unrestricted models may be
compared to check the validity of the constraints. For instance, if two parameters seem
to be weakly identified and always take an equal value with high probability, a model
where their equality is imposed might be preferable. In practice, the constraints are set in
an iterative procedure, where at each stage the posterior distribution of the parameters is
inspected to find additional constraints, whose validity is then assessed by means of, say,
Bayes factors and improvement in estimation accuracy. The iteration continues until no
further acceptable constraints or signs of weak identification can be found.
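
To make the first stage of this iteration concrete, the following sketch illustrates the kind of posterior inspection involved: it scans posterior draws for pairs of parameters that are nearly equal with high posterior probability and flags them as candidate equality constraints. This is a toy illustration rather than the authors' code; the simulated draws, parameter labels and thresholds are assumptions made here for exposition, and in the actual procedure the draws would come from the SMC estimation of the unrestricted DSGE model.

```python
# Toy sketch of the posterior-inspection step (not the authors' code): flag pairs of
# parameters whose draws are nearly equal with high posterior probability, as
# candidates for data-driven equality constraints.
import numpy as np

rng = np.random.default_rng(0)

# Placeholder "posterior draws": rows are draws, columns are parameters. In practice
# these would be the SMC output for the unrestricted DSGE model.
n_draws = 5000
theta = rng.normal(size=(n_draws, 4))
theta[:, 1] = theta[:, 0] + 0.02 * rng.normal(size=n_draws)  # a weakly separated pair

names = ["xi_p", "xi_w", "sigma_c", "sigma_l"]  # hypothetical parameter labels


def candidate_equality_constraints(draws, names, tol=0.1, prob=0.9):
    """Flag pairs (i, j) whose absolute difference, relative to the pooled posterior
    spread, is below `tol` with posterior probability at least `prob`."""
    candidates = []
    for i in range(draws.shape[1]):
        for j in range(i + 1, draws.shape[1]):
            scale = 0.5 * (draws[:, i].std() + draws[:, j].std())
            close = np.abs(draws[:, i] - draws[:, j]) < tol * scale
            if close.mean() >= prob:
                candidates.append((names[i], names[j], close.mean()))
    return candidates


for a, b, p in candidate_equality_constraints(theta, names):
    print(f"candidate constraint {a} = {b}: posterior probability of near-equality {p:.2f}")
```

Each flagged constraint would then be imposed, the restricted model re-estimated, and the constraint retained only if Bayes factors and estimation accuracy support it, after which the inspection is repeated.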
It is important to distinguish our approach from specification searches (see e.g. Leamer,
1978), where the goal is to find the ‘true’ model, or to simplify or improve the current model.
While our constraints may also lead to a simpler and more easily interpretable model, the
ultimate objective is to alleviate problems arising from lack of identifiability. In other
words, we take a DSGE model as given, but acknowledge the fact that these models tend
to be poorly identified and, therefore, try to find constraints respecting the geometry of
the posteriors to facilitate identification. In addition to improvements in estimation accu-
racy and probabilistic forecasts due to improved identification, data-driven identifiability
constraints are indeed also likely to facilitate interpretation.
Our approach calls for an efficient estimation method that is capable of handling multi-
modality likely to be encountered at least in the unrestricted DSGE model. Such a procedure
has recently been suggested by Herbst and Schorfheide (2014), who employed an adaptive
sequential Monte Carlo (SMC) algorithm to estimate the Smets and Wouters (2007) model
(SW model hereafter) based on relatively loose priors (see also Creal, 2007; Chib and
Ramamurthy, 2010). Once the final model has been obtained, it is important to ensure that
it has been accurately estimated (and in case of obvious inaccuracy, some of the constraints
may be relaxed and alternative constraints entertained). To that end, Herbst and Schorfheide
suggested running the SMC algorithm multiple times to obtain an approximation of the
asymptotic variances of the parameter estimates. This is, unfortunately, computationally
very costly in the case of a complex high-dimensional DSGE model, and, therefore, in
practice only a few runs (20 in Herbst and Schorfheide, 2014) are feasible, yielding a very
imprecise measure of estimation accuracy.
To facilitate assessment of estimation accuracy, we propose to augment the SMC
algorithm with a non-sequential importance sampling (IS) step, which has the advantage
that numerical standard errors can be readily calculated without burdensome simulations.
Moreover, convergence results for non-sequential IS are available (see Geweke, 2005,
Theorem 4.2.2), while the asymptotic properties of the adaptive SMC algorithm are not
necessarily known. Hence, in addition to being computationally feasible in assessing the
accuracy of the estimates, our procedure is theoretically well motivated. Of course, these
two approaches are not substitutes. In particular, it may be a good idea to run the SMC
algorithm a few times before the IS step to ensure that the SMC algorithm has visited the
entire posterior. This is important because the IS step relies on the SMC approximation.
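
As a rough illustration of such an IS step, the sketch below fits a heavier-tailed proposal to mock ‘SMC particles’, reweights fresh draws by a stand-in posterior, and reports the self-normalised IS estimate of a posterior mean together with its numerical standard error and effective sample size. It is a minimal example under stated assumptions, not the paper's implementation: log_posterior, the particle set and the Student-t proposal are all placeholders.

```python
# Minimal sketch of a non-sequential importance-sampling (IS) step on top of an SMC
# particle approximation, with a numerical standard error computed directly from the
# importance weights. Everything below is a toy stand-in, not the paper's code.
import numpy as np
from scipy import stats


def log_posterior(x):
    """Toy unnormalised log posterior (correlated bivariate Gaussian stand-in)."""
    cov = [[1.0, 0.8], [0.8, 1.0]]
    return stats.multivariate_normal(mean=[0.5, -0.3], cov=cov).logpdf(x)


# Mock "SMC particles"; in practice these come from the SMC run on the DSGE model.
particles = stats.multivariate_normal(mean=[0.5, -0.3],
                                      cov=[[1.0, 0.8], [0.8, 1.0]]).rvs(2000, random_state=2)

# Fit a moment-matched, heavier-tailed multivariate-t proposal to the particles.
mu, cov_hat, df = particles.mean(axis=0), np.cov(particles.T), 5
proposal = stats.multivariate_t(loc=mu, shape=cov_hat * (df - 2) / df, df=df)

# Non-sequential IS: draw from the fitted proposal and reweight towards the posterior.
n = 20000
draws = proposal.rvs(n, random_state=3)
log_w = log_posterior(draws) - proposal.logpdf(draws)
w = np.exp(log_w - log_w.max())
w /= w.sum()

# Self-normalised IS estimate of a posterior mean, its numerical standard error
# (delta-method formula with normalised weights), and the effective sample size.
g = draws[:, 0]                       # e.g. the first parameter
est = np.sum(w * g)
nse = np.sqrt(np.sum(w ** 2 * (g - est) ** 2))
ess = 1.0 / np.sum(w ** 2)
print(f"posterior mean {est:.3f}, numerical s.e. {nse:.4f}, effective sample size {ess:.0f}")
```

Once the weights are available, the numerical standard error comes from a single pass over the draws rather than from repeated SMC runs.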
We estimate the SW model on the same data set as Smets and Wouters (2007) and with
diffuse priors that are slightly different from those assumed by Herbst and Schorfheide
(2014). Our augmented SMC method yields very accurate estimates similar to those of
Herbst and Schorfheide (2014) based on both the RWMH and SMC methods and diffuse
priors, but very different from those of Smets and Wouters (2007) based on tight prior distributions.