Publication date: 6 July 2015
DOI: https://doi.org/10.1108/QAE-09-2014-0046
Author: Daniel W. Lang
Subjects: Education; Curriculum, instruction & assessment; Educational evaluation/assessment
Journal: Quality Assurance in Education, Vol. 23 No. 3, 2015, pp. 216-232
© Emerald Group Publishing Limited, ISSN 0968-4883
Received 16 September 2014; Revised 9 November 2014; Accepted 19 November 2014
Self-regulation with rules
Lessons learned from a new quality assurance process for Ontario
Daniel W. Lang
Theory and Policy Studies, University of Toronto, Toronto, Canada
Abstract
Purpose – The purpose of this paper is to discuss how the province over time has addressed problems
that are generic to many jurisdictions in assuring quality: level of aggregation, pooling, definition of
new and continuing programs, scope of jurisdiction, role of governors, performance indicators,
relationship to accreditation, programs versus credentials, benchmarking and isomorphism. The paper
will pay particular attention to the balance between institutional autonomy in promoting quality and
innovation in contrast to system-wide standards for assuring quality. The Province of Ontario has had
some form of quality assurance since 1969. For most of the period since then, there were separate forms
for undergraduate and graduate programs. Eligibility for public funding is based on the assurance of
quality by a buffer body. In 2010, after two years of work, a province-wide task force devised a new
framework.
Design/methodology/approach – The structure of the paper is a series of “problem/solution”
discussions, for example, aggregation, pooling, isomorphism and jurisdiction.
Findings – Some problems are generic, for example, how to define a “new” program. Assuring quality
and enhancing quality are fundamentally different in terms of process.
Research limitations/implications – Although many of the problems discussed are generic, the
paper is based on the experience of one jurisdiction.
Practical implications – The article will be useful to post-secondary systems seeking to balance
autonomy and innovation with central accountability and standardization. It is particularly applicable
to undifferentiated systems.
Social implications – Implications for public policy are mainly about locating the most effective
center of gravity between assuring quality and enhancing quality, and between promoting quality and
ensuring accountability.
Originality/value – The approach of the discussion and analysis is novel, and the results are portable.
Keywords Case studies, Quality assurance, Quality improvement, Benchmarking
Paper type Research paper
Setting the context
Until recently, quality assurance in Ontario could be described, to use van Vught’s (1994)
term, as “multidimensional”. Unlike van Vught’s concept, however, the practice in Ontario
was not the product of a deliberate design or system-wide plan. Of four parts, two were
formal and expressly about the assurance of quality:
(1) one for graduate programs (Ontario Council on Graduate Studies, 2007); and
(2) one for undergraduate programs (Ontario Council of Academic Vice Presidents,
2005).
The former was largely external through a “buffer” body that functioned between the 21
universities and the government. The latter was largely internal and coordinated by
institutional academic vice-presidents. All programs were funded, as they still are, under a
single enrolment-based funding formula (Ontario Ministry of Education and Training,
2009). The third part comprised “program reviews” that some but not all universities
conducted, usually as part of a turnover in departmental or faculty leadership. The fourth, for
a short time, was the deployment of “performance indicators” by government (Lang, 2005).
Accreditation might be seen as a fifth part, but Ontario and Canada have no history of
institutional accreditation (Clark et al., 2009; Skolnik, 2010), and do not regard accreditation
as assuring quality except in a consumer protection sense of meeting a minimum standard.
Many professional programs are accredited at the degree level, for example, the Association
of American Medical Colleges accredits most medical schools, and the Association of
Professional Engineers of Ontario accredits programs in applied engineering. These
examples of accreditation, however, are voluntary. They are not required by government. In
fact, governments from time to time have deliberately ignored recommendations of
accrediting agencies. Professional accreditation and internally defined quality assurance
processes have not been recognized by external quality assurance processes.
Graduate programs
In 1969, an Advisory Committee on Academic Planning (ACAP) was struck as a
standing committee of the Ontario Council on Graduate Studies (OCGS), which is a
subset of the Council of Ontario Universities (COU), which is a voluntary association of
all the province’s universities. Its stated purpose was to plan and to evaluate. It was in
some respects comparative or normative, and partly formative. Although
benchmarking was not part of the higher education planning lexicon then, that was a
basic idea behind ACAP. Discipline by discipline – notably not university by university –
comparisons were made to universities outside Ontario. The assessments and
comparisons were made by eminent scholars from outside the province. A separate team
was assembled for each discipline. Four or five different disciplines were appraised each
year. The external assessors’ recommendations were advisory to ACAP, which in turn
issued annual reports. The focus of each assessment was the graduate program of each
respective academic department and, in turn, the quality of each discipline
province-wide. Thus, in normative and comparative terms, the point of reference was
the province relative to the state of the discipline elsewhere. Undergraduate programs
were included only collaterally. In terms of quality assurance, what was being evaluated
was the capability of the department, not the quality of each graduate degree that it
offered. ACAP, perhaps with great foresight about the advent of instructional
technology, identified the format of instruction as a factor to be taken into account.
In 1974, the Appraisals Committee of the OCGS was established as a successor to
ACAP. It was the same as ACAP in three respects. Its focus was graduate education
only. Its role was evaluative. It was external. It was different in that, unlike ACAP, it did
not aim to create a plan or to assess quality collectively province-wide. Its aim, instead,
was to establish a permanent and continuing process for appraising quality. The
Appraisals Committee was more finely tuned than ACAP in that it evaluated separately
each graduate degree, as distinct from the overall program, that a department offered.
The Appraisals Committee maintained separate “one time” procedures for new
programs and periodic procedures for ongoing programs (Ontario Council on Graduate