Assessing instructional leadership: a longitudinal study of new principals

Pages: 753-772
DOI: https://doi.org/10.1108/09578231211264676
Published: 21 September 2012
Authors: Gavin T.L. Brown, Constance Chai
Subject: Education
Gavin T.L. Brown and Constance Chai
Faculty of Education, The University of Auckland, Auckland, New Zealand
Abstract
Purpose – The purpose of this paper is to evaluate the psychometric properties of the Self
Assessment of Leadership of Teaching and Learning (SALTAL) inventory, in conditions of repeated
administration.
Design/methodology/approach – In 2006 and 2007, nearly all of New Zealand’s newly-appointed
school principals participated in an 18 month induction program (First Time Principals). The
SALTAL self-report was administered in three waves (i.e. before FTP, after two residential courses,
and at the end of the FTP) to two cohorts. This voluntary survey was completed all three times
by 55 per cent (n = 86) and 44 per cent (n = 85) of 2006 and 2007 participants respectively. Multi-group
confirmatory factor analysis evaluated the stability of the SALTAL factor structure for each of
the six administrations. Longitudinal curve modeling evaluated the linear effect of time on SALTAL
responses.
Findings – Responses to SALTAL were found to be statistically equivalent across all six
administrations. The longitudinal model was statistically invariant between cohorts. Initial scores
were inversely correlated with changes over time. Increased time had a significant effect on SALTAL
scores.
Originality/value – The paper shows that the SALTAL has demonstrable stability in eliciting
responses under repeated administration and is useful for studying the impact of leadership
development programs.
Keywords New Zealand, Schools, Principals, Self-assessment, Instructional leadership,
First-time school leaders, Self-reported capacity, Longitudinal curve modelling
Paper type Research paper
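The growth pattern summarized in the abstract (initial scores inversely correlated with change over time, with a positive overall effect of time) can be illustrated with a minimal simulation. This is a sketch only: the sample sizes, scale range, and all parameter values below are assumptions for demonstration, not the SALTAL data, and per-person ordinary least squares over three waves stands in for the paper's latent growth curve modeling.

```python
import random

# Illustrative simulation (not the SALTAL data): 170 hypothetical
# respondents rate themselves at three waves (t = 0, 1, 2). Scale
# range and parameter values are assumptions for demonstration only.
random.seed(42)
N = 170
intercepts_true = [random.gauss(3.5, 0.5) for _ in range(N)]
# Build in the reported pattern: higher starting scores go with
# smaller gains over time (negative intercept-slope association).
slopes_true = [0.3 - 0.4 * (b0 - 3.5) + random.gauss(0, 0.05)
               for b0 in intercepts_true]
waves = [[b0 + s * t + random.gauss(0, 0.1) for t in (0, 1, 2)]
         for b0, s in zip(intercepts_true, slopes_true)]

# Per-person OLS over three equally spaced waves reduces to:
# slope = (y2 - y0) / 2, intercept = mean(y) - slope * mean(t).
est = []
for y in waves:
    slope = (y[2] - y[0]) / 2
    intercept = sum(y) / 3 - slope * 1.0
    est.append((intercept, slope))

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

r = pearson([e[0] for e in est], [e[1] for e in est])
mean_gain = sum(e[1] for e in est) / N
print(f"intercept-slope correlation: {r:.2f}, mean gain per wave: {mean_gain:.2f}")
```

With these assumed parameters, the estimated intercept-slope correlation comes out clearly negative and the average gain per wave positive, mirroring the two findings reported above.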
New research on how educational leaders make an impact on student outcomes provides
increasingly specific guidance about the relative impacts of different types of leadership
practice. The conclusions of several recent reviews of the evidence on the direct and indirect
effects of leadership on student outcomes all point to the importance of instructional
leadership (Blase and Blase, 2000; Goldring et al., 2009; Hallinger, 2011b; Leithwood et al.,
2004; Quinn, 2002; Robinson et al., 2008a). Robinson et al. (2008a, p. 2) concluded that “the
more leaders focus their relationships, their work, and their learning on the core business of
teaching and learning, the greater their influence on student outcomes.”
The term instructional, or learning-focussed, leadership embraces a number of
leadership practices, including setting and communicating clear instructional goals
and expectations; strategic resourcing of priority goals; overseeing and evaluating
teaching and teachers; promoting and participating in teacher learning and
development; and creating an orderly environment that is safe for and supportive of
both staff and students (Alig-Mielcarek and Hoy, 2005; Robinson et al., 2008a).
Received 8 July 2011
Revised 17 November 2011, 31 December 2011, 8 February 2012, 9 February 2012
Accepted 12 February 2012
Journal of Educational Administration
Vol. 50 No. 6, 2012, pp. 753-772
© Emerald Group Publishing Limited
0957-8234
DOI 10.1108/09578231211264676
The authors are grateful to Professor Viviane Robinson, The University of Auckland, for
providing access to the SALTAL data and for assistance with earlier drafts of the paper.
The recent evidence about the impact of instructional leadership, combined with
policy imperatives to have all students succeed on intellectually challenging curricula,
have resulted in a new emphasis on building the instructional leadership capacity of
both principals and of more widely distributed leadership teams (Elmore, 2004; Nelson
and Sassi, 2005; Pont et al., 2008; Spillane et al., 2003). Efforts to develop individuals
and teams are not sustainable, however, if the organizational and policy environments
in which educators work are not strongly aligned to this goal. Recent analyses suggest
that, in the USA at least, there is considerable misalignment. Adams and Copland
(2007) analyzed the principal licensing standards of 50 states in the USA and found
that, while a learning focus was included in about 34 states, it was emphasized in only
six, with considerably more emphasis typically being given to general organizational
skills and knowledge such as mentoring, communicating, and managing
constituencies. They concluded that “few states have taken the important step of
crafting licensing policies that reflect a coherent learning-focused school leadership
agenda” (Adams and Copland, 2007, p. 181).
In a similar analysis, a group of researchers at Vanderbilt University examined the
instructional leadership emphasis of 66 leadership assessment instruments used by
some or all the school districts in 17 states in the USA. They found a greater emphasis
on instructional leadership, with 52 percent of all items coded in this category, “as
compared with management (15%), relations with the external environment (9%), and
personal leadership (22%)” (Goldring et al., 2009, p. 24). Despite this apparent focus on
instructional leadership, the authors were critical of the superficial nature of many of
the assessments, describing them as treating the content to be assessed as “a mile wide
and an inch deep” (Goldring et al., 2009, p. 25).
The assessment of instructional leadership
If we are to monitor and evaluate the consequences of investment in the development of
instructional leadership, we need assessment tools that are technically sound and
strongly focussed on this type of leadership. Of the 66 leadership assessment tools
analyzed by the Vanderbilt team, the vast majority “have limited or no published
information concerning their reliability or validity” (Goldring et al., 2009, p. 26).
They concluded that their use for moderate- to high-stakes assessment decisions was in
violation of professional testing standards (American Educational Research Association
(AERA), American Psychological Association (APA), and National Council for
Measurement in Education (NCME), 1999).
While the tools used by states and school districts to assess school leaders fall short
of the criteria of being strongly focussed on instructional leadership and technically
sound, some more promising assessment tools can be found in the research literature
on instructional leadership. The most well known is the Principal Instructional
Management Rating Scale (PIMRS) which was developed by Hallinger in the early
1980s and has since been used in over 130 doctoral studies of instructional leadership
(Hallinger, 2011a). The scale comprises 71 items organized into 11 subscales (i.e. frame
school goals, communicate school goals, coordinate curriculum, supervise and
evaluate instruction, monitor student progress, protect instructional time, provide
incentives for teachers, provide incentives for learning, promote professional
development, and maintain high visibility). Each item describes a critical job-related
behavior. Raters of the principal’s behavior, who may be teachers, district superintendents,
or the principals themselves, are asked to indicate the frequency with which the principal
has demonstrated the specified behavior in the past year.
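Instruments of this kind are typically scored by averaging each rater's item-level frequency ratings within a subscale. The sketch below illustrates that aggregation step only; the item codes, the item-to-subscale mapping, and the ratings are invented for illustration and are not the actual PIMRS items.

```python
from statistics import mean

# Hypothetical illustration of PIMRS-style scoring: item codes and the
# item-to-subscale mapping below are invented for this sketch; the
# actual 71-item instrument is not reproduced here.
SUBSCALE_ITEMS = {
    "frame_school_goals": ["q1", "q2", "q3"],
    "communicate_school_goals": ["q4", "q5"],
    "monitor_student_progress": ["q6", "q7", "q8"],
}

# One rater's frequency ratings on an assumed 1 ("almost never")
# to 5 ("almost always") scale.
ratings = {"q1": 4, "q2": 5, "q3": 4, "q4": 3, "q5": 4,
           "q6": 2, "q7": 3, "q8": 3}

def subscale_scores(ratings, mapping):
    """Average the item ratings within each subscale."""
    return {name: round(mean(ratings[i] for i in items), 2)
            for name, items in mapping.items()}

scores = subscale_scores(ratings, SUBSCALE_ITEMS)
print(scores)
```

In practice, scores from multiple raters (teachers, superintendents, the principal) would then be compared or averaged per subscale; the mapping dictionary makes that extension straightforward.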
