Formative Versus Reflective Indicators in Organizational Measure Development: A Comparison and Empirical Illustration

Authors: Adamantios Diamantopoulos, Judy A. Siguaw
Published: 01 December 2006
DOI: http://doi.org/10.1111/j.1467-8551.2006.00500.x
Adamantios Diamantopoulos and Judy A. Siguaw*
Institute of Business Administration, University of Vienna, Bruenner Straße 72, A-1210 Vienna, Austria, and *Cornell-Nanyang Institute of Hospitality Management, Nanyang Technological University, S3-01B-49 Nanyang Avenue, Singapore 639792, Republic of Singapore
Email: adamantios.diamantopoulos@univie.ac.at [Diamantopoulos]; judysiguaw@ntu.edu.sg [Siguaw]
A comparison is undertaken between scale development and index construction procedures to trace the implications of adopting a reflective versus formative perspective when creating multi-item measures for organizational research. Focusing on export coordination as an illustrative construct of interest, the results show that the choice of measurement perspective impacts on the content, parsimony and criterion validity of the derived coordination measures. Implications for practising researchers seeking to develop multi-item measures of organizational constructs are considered.
Latent variables are widely utilized by organizational researchers in studies of intra- and inter-organizational relationships (James and James, 1989; Scandura and Williams, 2000; Stone-Romero, Weaver and Glenar, 1995). In nearly all cases, these latent variables are measured using reflective (effect) indicators (e.g. Hogan and Martell, 1987; James and Jones, 1980; Morrison, 2002; Ramamoorthy and Flood, 2004; Sarros et al., 2001; Schaubroeck and Lam, 2002; Subramani and Venkatraman, 2003; Tihanyi et al., 2003). Thus, according to prevailing convention, indicators are seen as functions of the latent variable, whereby changes in the latent variable are reflected (i.e. manifested) in changes in the observable indicators. However, as MacCallum and Browne point out, 'in many cases, indicators could be viewed as causing rather than being caused by the latent variable measured by the indicators' (MacCallum and Browne, 1993, p. 533). In these instances, the indicators are known as formative (or causal); it is changes in the indicators that determine changes in the value of the latent variable rather than the other way round (Jarvis, Mackenzie and Podsakoff, 2003).
Formally, if Z is a latent variable and x₁, x₂, ..., xₙ a set of observable indicators, the reflective specification implies that xᵢ = λᵢZ + εᵢ, where λᵢ is the expected effect of Z on xᵢ and εᵢ is the measurement error for the ith indicator (i = 1, 2, ..., n). It is assumed that COV(Z, εᵢ) = 0, COV(εᵢ, εⱼ) = 0 for i ≠ j, and E(εᵢ) = 0. In contrast, the formative specification implies that Z = γ₁x₁ + γ₂x₂ + ... + γₙxₙ + ζ, where γᵢ is the expected effect of xᵢ on Z and ζ is a disturbance term, with COV(xᵢ, ζ) = 0 and E(ζ) = 0. For more details, see Bollen and Lennox (1991), Fornell, Rhee and Yi (1991) and Fornell and Cha (1994).
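The practical difference between the two specifications can be made concrete with a small simulation. The sketch below is our own illustration, not taken from the paper; the sample size, loadings (λ) and weights (γ) are arbitrary illustrative values. It shows that reflective indicators, sharing Z as a common cause, are necessarily intercorrelated, whereas formative indicators, which jointly define Z, need not correlate with one another at all.

```python
import random

random.seed(42)
n_obs = 5000

def corr(a, b):
    """Pearson correlation of two equal-length lists."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b)) / len(a)
    sa = (sum((x - ma) ** 2 for x in a) / len(a)) ** 0.5
    sb = (sum((y - mb) ** 2 for y in b) / len(b)) ** 0.5
    return cov / (sa * sb)

# Reflective: x_i = lambda_i * Z + eps_i -- every indicator is an effect of Z.
Z = [random.gauss(0, 1) for _ in range(n_obs)]
lambdas = [0.9, 0.8, 0.7]                       # illustrative loadings
X_refl = [[l * z + random.gauss(0, 0.3) for z in Z] for l in lambdas]

# Formative: Z = gamma_1*x_1 + ... + gamma_n*x_n + zeta -- the indicators
# are exogenous causes of Z and here are generated independently.
X_form = [[random.gauss(0, 1) for _ in range(n_obs)] for _ in range(3)]
gammas = [0.5, 0.3, 0.2]                        # illustrative weights
Z_form = [sum(g * x[j] for g, x in zip(gammas, X_form)) + random.gauss(0, 0.3)
          for j in range(n_obs)]

r_refl = corr(X_refl[0], X_refl[1])   # high: indicators share a common cause
r_form = corr(X_form[0], X_form[1])   # near zero: no common cause imposed
```

As the measurement literature cited above notes, this is why dropping one of several highly correlated reflective indicators loses little information, whereas dropping a formative indicator removes part of the construct's meaning.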
With few exceptions (e.g. Law and Wong, 1999; Law, Wong and Mobley, 1998), formative measures have been a somewhat ignored topic within the area of organizational research. Indeed, nearly all of the work that exists in the area of formative measurement has stemmed from researchers housed in sociology or psychology (e.g. Bollen, 1984; Bollen and Lennox, 1991; Bollen and Ting, 2000; Fayers and Hand, 1997; Fayers et al., 1997; MacCallum and Browne, 1993), marketing (e.g. Diamantopoulos and Winklhofer, 2001; Fornell and Bookstein, 1982; Jarvis, Mackenzie and Podsakoff, 2003; Rossiter, 2002) and strategy (e.g. Fornell, Lorange and Roos, 1990; Hulland, 1999; Johansson and Yip, 1994; Venaik, Midgley and Devinney, 2004, 2005). This situation is unfortunate given that, in many cases, work utilizing formative measures may better inform organization theory, as illustrated herein.

British Journal of Management, Vol. 17, 263–282 (2006). © 2006 British Academy of Management
The current study seeks to extend previous methodological work by Bollen and Lennox (1991), Law and Wong (1999), and Diamantopoulos and Winklhofer (2001) by tracing the practical implications of adopting a formative versus reflective measurement perspective when developing a multi-item organizational measure from a pool of items.[1] More specifically, we explore whether conventional scale development procedures (e.g. see Churchill, 1979; DeVellis, 2003; Netemeyer, Bearden and Sharma, 2003; Spector, 1992) and index construction approaches (e.g. Diamantopoulos and Winklhofer, 2001; Law and Wong, 1999) as applied in organizational coordination research lead to materially different multi-item measures in terms of (a) content (as captured by the number/proportion of common items included in the measures), (b) parsimony (as captured by the total number of items comprising the respective measures), and (c) criterion validity (as captured by the ability of a reflective scale versus that of a formative index to predict an external criterion, i.e. some outcome variable).[2]
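A hedged sketch (our own, not the authors') of why the two purification routes can diverge in content and parsimony: reflective scale development drops items that depress internal consistency (Cronbach's alpha), whereas index construction may retain an item precisely because it taps a distinct, non-redundant facet of the construct. All data and parameter values below are simulated for illustration.

```python
import random

random.seed(7)
n = 2000

def var(a):
    m = sum(a) / len(a)
    return sum((x - m) ** 2 for x in a) / len(a)

def cronbach_alpha(items):
    # alpha = k/(k-1) * (1 - sum of item variances / variance of the sum score)
    k = len(items)
    total = [sum(row) for row in zip(*items)]
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(total))

Z = [random.gauss(0, 1) for _ in range(n)]
x1 = [0.9 * z + random.gauss(0, 0.4) for z in Z]
x2 = [0.8 * z + random.gauss(0, 0.4) for z in Z]
x3 = [0.7 * z + random.gauss(0, 0.4) for z in Z]
x4 = [random.gauss(0, 1) for _ in range(n)]   # unrelated to Z: a distinct facet

a_all = cronbach_alpha([x1, x2, x3, x4])
a_drop = cronbach_alpha([x1, x2, x3])          # alpha rises once x4 is dropped
```

Scale development would discard x4 because it lowers alpha; an index-construction approach could keep it if it carries construct-relevant information of its own, so the resulting measures can differ in both which items they contain and how many.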
None of these issues has been systematically addressed in previous methodological research. With regards to content, previous comparisons of reflective and formative measures have implicitly assumed that exactly the same set of indicators can be used to operationalize the construct involved (e.g. Bollen and Ting, 2000; Law and Wong, 1999; MacCallum and Browne, 1993). For example, the recent Monte Carlo simulation of measurement model misspecification in marketing by Jarvis, Mackenzie and Podsakoff (2003) was based on a simple reversal of the directionality of the paths between constructs and their indicators. This study assumes that the only difference resulting from applying a formative versus reflective measurement approach relates to the causal priority between the construct and its indicators. However, this is an untested and, most likely, unwarranted assumption, as it implies that despite their very different nature (see next section), scale development and index construction strategies will result in measures that contain an identical set of indicators. With regard to parsimony, a natural extension of the assumption made with regards to measure content is that formative indexes and reflective scales are equally parsimonious (if both types of measures are assumed to be comprised of exactly the same items, then the number of items must be the same in both cases). Again, this is a questionable assumption, as it implies that the measure purification procedures associated with scale development and index construction respectively will result in the exclusion (viz. inclusion) of exactly the same number of items (although the specific items dropped from the measures need not be the same). Lastly, with regards to criterion validity, no previous study has empirically examined whether multi-item measures generated by scale development (reflective) and index construction (formative) approaches respectively perform similarly in terms of their ability to predict some outcome variable. While considerations of validity have featured in previous discussions of measurement model specification, such discussions have been purely of a conceptual nature (e.g. Bagozzi and Fornell, 1982; Diamantopoulos, 1999).[3]
In the following section, we provide some conceptual background to the problem of developing multi-item measures in organizational research and contrast scale development and index construction procedures in the specific context of organizational coordination. Next, we apply these procedures to empirical data and
[1] Throughout this paper we use the (generic) term 'measure' to refer to a multi-item operationalization of a construct, and the terms 'index' and 'scale' to distinguish between measures comprised of formative and reflective items respectively. The terms 'items' and 'indicators' are used interchangeably.
[2] Criterion (or criterion-related) validity 'concerns the correlation between the measure and some criterion variable of interest' (Zeller and Carmines, 1980, p. 79). It is also known as 'empirical' (e.g. Nachmias and Nachmias, 1976), 'pragmatic' (Oppenheim, 1992) and 'predictive' validity (e.g. Nunnally and Bernstein, 1994).
[3] For a conceptual discussion of the nature of validity as well as the analytic tools that can be used to aid its assessment, see Carmines and Zeller (1979).
