SPECIAL REPORT

The story behind the Rankings

Hundreds of phone calls, dozens of meetings and the wisdom of a consulting statistician

VICTOR DWYER November 14 1994

The task sounds relatively straightforward: identify a common goal—the delivery of quality education; determine the most important factors that contribute to the achievement of that goal; compare the achievements of Canada’s various universities on those measures of performance. But the task, in fact, is utterly complex: frame 72 questions that, when combined, measure more than 20 different indicators of educational quality; put those questions to an army of university admissions officers, chief librarians, budget analysts, alumni directors and others; create a statistical program that combines the thousands of pieces of information into a substantial gauge of university performance. The task is the annual Maclean’s ranking of Canadian universities—the most comprehensive and accessible collection of data on universities anywhere in the country.

Examining six broad areas, from student body to finances, the ranking explores a world whose inner workings have long been next to impenetrable. Each year, through hundreds of letters, phone conversations and personal meetings, editors and university officials discuss the intricacies of the rankings. When counting unclassified bound serials in the library, are government document volumes included? Yes. Maps and slides? No. When tallying class size, do universities count one-on-one instruction in the performing arts? Yes. And this: when institutions measure the number of international students they draw, can they include Canadians who sent their applications from outside the country? Obviously not.

The undertaking is made all the more challenging by the dynamic nature of the modern university, an institution constantly reshaping itself. One area that demanded considerable attention this year was the library, a facet of university life undergoing enormous and rapid technological change. Officials at several Nova Scotia universities that share Novanet, a central computer catalogue and book-sharing agreement, recommended that schools be allowed to count each other’s books in their separate library tallies. However, many universities have similar arrangements. A student from York University, for instance, is free to use reciprocal privileges at the Robarts Library at the University of Toronto. In the end, Maclean’s decided against taking such agreements into account, swayed in part by David McCallum, executive director of the Canadian Association of Research Libraries. He argued that being able to call up the title of a book on a computer screen—and even being able to order a copy for overnight delivery—is no replacement for stumbling upon real volumes on real shelves, where tables of contents can be scanned, photos glimpsed, chapters perused.

In other areas, persistent sleuthing translated into significant changes. After much research and debate, editors revised one measure of the calibre of the professoriat. In past years, universities were asked to declare the proportion of all professors who hold a PhD or “the terminal degree in their field.” Some critics said that provision allowed for too much “wiggle room.” Many business professors, for example, have only a master’s degree—effectively making it, some argued, the terminal degree in that discipline. Other universities, meanwhile, insisted that standards are rising, and that a PhD is quickly becoming a requirement of teaching in many such fields. This year, Maclean’s counted PhDs only. But at the same time, to answer the concerns of universities offering a range of less traditional courses, the magazine expanded the list of academic disciplines that could be excluded from the overall count when calculating that particular performance measure to include, among others, the fine and performing arts, midwifery, architecture and journalism. Finally, Maclean’s once again modified—and expanded—the reputational survey, soliciting the opinions of a broader range of business leaders and academics, and sending out reputational questionnaires to high-school guidance counsellors across the country.

With the amendments complete, Maclean’s sent a 14-page questionnaire and accompanying 19-page user’s guide to university presidents in July. During the eight weeks that the schools had to complete the survey, Maclean’s editors fielded questions. In mid-September, consulting statistician Rose Anne Leonard—who has worked with the Economic Council of Canada and as a consultant to the strategic planning department at Ottawa’s Algonquin College—began the painstaking job of assembling and interpreting the data. First, she identified extreme changes from the 1993 survey, which were then double-checked with university officials, and corrected where errors were found. At the same time, Researcher Mary Dwyer began a thorough cross-checking of much of the data against publicly available sources such as Statistics Canada, the Canadian Association of University Business Officers and the three federal granting councils.
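
That screening step can be sketched in a few lines of code. Everything below is hypothetical: the 25-per-cent threshold, the school names and the figures are assumptions for illustration, not details of the Maclean’s process.

```python
# A sketch of a year-over-year consistency check: flag any response
# that moved sharply from the previous survey so it can be queried.
# All names, figures and the threshold are invented for illustration.

survey_1993 = {"School A": 1_150_000, "School B": 440_000, "School C": 980_000}
survey_1994 = {"School A": 1_200_000, "School B": 450_000, "School C": 2_300_000}

THRESHOLD = 0.25  # assumed: flag changes larger than 25 per cent

for school, new in survey_1994.items():
    old = survey_1993.get(school)
    if old and abs(new - old) / old > THRESHOLD:
        print(f"{school}: {old} -> {new}, confirm with university officials")
```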

Finally, Leonard began calculating the Maclean’s ranking. Her first step was to determine a point score for each university on each of the nearly two dozen indicators of excellence—scores that would later be combined to produce an overall ranking. Employing the percentile method of calculation, which awards points based on how each school performs relative to those in its category, she was able to take into account the actual differences between universities on each performance measure—and to assign points accordingly.
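
In rough terms, percentile scoring works like the sketch below. The magazine does not publish the formula itself, so this rank-based version is only one plausible reading; the school names and library figures are invented, and the sketch assumes a higher raw value is better on the indicator in question.

```python
# A minimal sketch of percentile scoring within one category;
# the data and function name are hypothetical.

def percentile_score(value: float, peers: list[float]) -> float:
    """Points for one school: the share of its category it meets or beats."""
    below = sum(1 for v in peers if v < value)
    ties = sum(1 for v in peers if v == value)
    # Ties split the difference, so identical schools earn identical points.
    return 100.0 * (below + 0.5 * ties) / len(peers)

# Hypothetical holdings for one indicator within one category.
holdings = {
    "School A": 1_200_000,
    "School B": 450_000,
    "School C": 2_300_000,
    "School D": 800_000,
}

peers = list(holdings.values())
for school, value in holdings.items():
    print(f"{school}: {percentile_score(value, peers):.1f} points")
```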

The Maclean’s ranking also takes into account the relative importance of each indicator—recognizing that not all contribute equally to overall quality. As a result, they have been weighted, much as the Organization for Economic Co-operation and Development, for example, does when tallying the factors that contribute to the quality of life in a group of countries. Failing to assign weights would be, says Leonard, “equivalent to giving equal weight to every indicator—a notion that defies common sense.”

With the points for each performance measure calculated and weighted, Leonard then arrived at the overall rankings. Ultimately, that step was not unlike a professor’s year-end task: to tally the composite scores of a student’s essays, lab marks and exam results and calculate a final grade.
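
The final roll-up resembles the sketch below, which continues the hypothetical example above. The area names are borrowed loosely from the article, but the weights and percentile scores are invented for illustration, not Maclean’s actual figures.

```python
# A sketch of the weighted roll-up: each area's points contribute
# a fixed share of the overall score. Weights and scores are invented.

weights = {
    "student_body": 0.20,
    "classes": 0.18,
    "faculty": 0.17,
    "finances": 0.12,
    "library": 0.12,
    "reputation": 0.21,
}  # hypothetical weights, chosen to sum to 1.0

scores = {
    "School A": {"student_body": 62.5, "classes": 87.5, "faculty": 37.5,
                 "finances": 12.5, "library": 87.5, "reputation": 62.5},
    "School B": {"student_body": 87.5, "classes": 12.5, "faculty": 62.5,
                 "finances": 87.5, "library": 12.5, "reputation": 37.5},
}

# Like tallying essays, labs and exams into a final grade: multiply
# each area's points by its weight, then sum.
overall = {
    school: sum(weights[area] * s[area] for area in weights)
    for school, s in scores.items()
}

for school, total in sorted(overall.items(), key=lambda kv: -kv[1]):
    print(f"{school}: {total:.1f}")
```

Each weighted score contributes its share of the total, just as an exam worth 40 per cent of a course counts for 40 per cent of the final mark.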

VICTOR DWYER