SPECIAL REPORT

A BATTLE FOR THE FACTS

Canadian universities took a test—but many declined to post their marks

CHRIS WOOD November 15 1993

Universities have long been accustomed to handing out grades to students: A on a top essay, B for a passable seminar presentation, F for a failed examination. But when Maclean’s published its first ranking of universities in 1991, the experience came as a rude shock for many schools. Several administrations challenged the magazine’s very right to assess their performance. Many more questioned the methods that Maclean’s used to reach its conclusions. Two years later, however, it is clear that Canadian universities, like their counterparts in other countries, can no longer escape a growing public demand for rigorous scrutiny of how well they live up to their own claims to excellence.

What is equally clear, however, is that grading the academic graders is an exercise fraught with conflict. Despite mounting pressure for more accountability from postsecondary institutions of all types for the almost $15 billion in public funds that they spend each year, universities have responded at a snail’s pace. “There is progress,” observes Stuart Smith, the author of a 1991 inquiry into university education in Canada, “but it is pretty minimal.” Part of the reason may be simple reluctance on the part of academic administrators to subject themselves to review by anyone other than fellow academics. But contributing to the slow progress is an unresolved debate about how universities should be marked: whether they should be graded against each other—or held to account solely against their own individual institutional goals; whether they should be ranked—or merely described.

The complexities in that debate became clear to the editors of Maclean’s when they sought to respond to criticism of the 22 indicators, ranging from class size to library budgets, that the magazine uses to grade institutions in its university rankings. “Many of the individual strengths of universities are not picked up in this ranking by Maclean’s,” complained Dennis Anderson, president of Manitoba’s Brandon University (which placed 13th among 18 in the category labelled Primarily Undergraduate universities in 1992). Anderson, like many other university administrators, urged the magazine to expand its criteria to what he called “output” measures of academic excellence, such as how satisfied graduates were with the education that they received.

As a result, when Maclean’s learned that Statistics Canada had put just such a question to about 53,000 members of the class of 1990 in a $2.5-million taxpayer-funded survey two years after their graduation, the magazine sought access to the results. At first glance, the data in the National Graduate Survey confirmed a perception first recorded in a survey that Maclean’s itself had conducted and reported last year: most graduates were either “satisfied” or “very satisfied” with their education overall.

But contained within that positive general assessment were intriguing variations. Satisfaction with how well their education had prepared graduates for a job ranged from a high of 85.3 per cent among New Brunswickers to a low of 69.4 per cent among Manitobans (see chart). Noted Maclean’s consulting statistician Georges Lemaître, of the Manitoba findings: “That is still nearly a third of respondents [in that province] who don’t like what it is they’re getting. That is not negligible.” Similarly, satisfaction with the success of different academic programs at preparing graduates for the work world varied widely, from 70.1 per cent for social science programs to 90.9 per cent for engineering and applied science. Graduates from the Maclean’s Primarily Undergraduate institutions, meanwhile, expressed sharply higher satisfaction with their access to faculty and class size than did respondents who graduated from Medical/Doctoral schools.

But when Maclean’s asked the federal agency to provide it with the results of the national survey for graduates of each Canadian university, the request encountered an unexpected roadblock. The agency insisted that the magazine first secure permission from at least 80 per cent of the universities in the country for the release of the information. When Maclean’s sought that permission, however, fewer than half of the universities involved agreed. For its part, the University of Toronto noted that only 379 of its 5,709 graduates in 1990 had been surveyed, and explained: “That level of response is too small to be taken seriously by anyone for any purpose.” Among the other universities that denied access were Victoria, McGill, Dalhousie, Laval and Waterloo. Speaking for the University of Victoria, associate vice-president John Schofield declared that the Statistics Canada survey “was designed to provide useful information at the national and provincial level. At the institutional level, the data are simply not reliable.”

That is debatable, at least. For his part, Ken Bennett, Statistics Canada’s assistant director for education, told Maclean’s: “There was no problem. The samples [of each university’s graduates] were sufficiently reliable that we would have felt comfortable releasing them.” It is a view that Lemaître, a former senior analyst with the federal agency, shares: he noted that monthly provincial unemployment rates are calculated on the basis of comparable percentages of the workforce.

Even among academics, the reluctance to disclose how graduates rated their satisfaction with their alma mater was far from universal. “That could be an extraordinarily useful tool,” declared Ken Snowdon, director of resource planning for Queen’s University. In addition to Queen’s, more than a dozen other institutions also gave permission for the release of the national survey data, among them Acadia, Sherbrooke, Wilfrid Laurier, Western, McMaster, Lethbridge, UBC and Simon Fraser. “We can’t go whining and moaning about not having the resources and not having the public support,” explains Simon Fraser’s president John Stubbs, “and then, when somebody turns the spotlight on us, say we don’t like it.”

But regardless of the reliability of Statistics Canada’s measurements, the debate over whether to release the information shed a telling light on the deep unease that many academics feel about any assessment of university performance. For his part, Smith, who is now the president of Ottawa-based RockCliffe Research and Technology Inc., is scathing in his condemnation of academic reluctance to be graded. “Fundamentally,” he says, “I think the simple fact is that they don’t want to be accountable.” That may partly explain Canadian universities’ slow march towards any close examination of how well they perform. But there are also real disputes over how any such examination should be performed—and over who should be able to see the results, and for what purpose.

Among academics with the most strongly held views is Joy Calkin, vice-president, academic, at the University of Calgary. An acknowledged expert in assessing the performance of hospitals, Calkin says of the Statistics Canada survey that its findings “are accurate; they’re just irrelevant.” She adds: “The real issue is, what is the mission of the university at which you’re looking?” Arguing that each university has a unique mandate, Calkin says that she supports attempts by administrators to measure how well they perform their distinct mission. But she regards such measures mainly as tools for university managers, and sees little purpose in disclosing the findings to the public. As for any effort to compare different universities to each other, she calls the attempt “really dumb.”

What Calkin, along with other critics of Maclean’s approach to ranking institutions, does endorse is an undertaking to compare individual academic programs. In her view, detailed comparisons of similar programs such as nursing, forestry or civil engineering, “would provide an inestimable service.” Among other benefits, she says, such comparisons would make it plain that within each university, individual disciplines may vary widely in quality. Citing her own alma mater as an example, Calkin declares: “University of Toronto has top-drawer programs in some areas. It has very mundane programs in others.” But other views vary widely. Queen’s University’s Snowdon notes provocatively that even comparisons of apparently similar programs can be misleading: every faculty may possess an English department, he says, “but is its focus on Canadian literature, or Elizabethan literature?” Meanwhile, attempts to compare the performance of different universities may be open to expert attack, Smith insists acerbically, “but these are academics. They spend their time finding out what’s wrong with any system of measurement. It doesn’t matter what you do.”

But scholarly hairsplitting is unlikely to fend off for long the mounting pressure for greater academic accountability. In the past three years, that goal, along with clear measures to establish university performance, has been a central recommendation of commissions and task forces on postsecondary education in Nova Scotia, New Brunswick, Ontario, Manitoba and Saskatchewan. For its part, the Maritime Provinces Higher Education Commission, which acts as an intermediary between that region’s 19 universities and its three provincial governments, has urged each of the institutions it oversees to develop their own indicators of how well they are achieving their different academic goals. Says the commission’s Fredericton-based chairman, Tim Andrew: “Students want to know these things as well as taxpayers.”

Indeed, the forces propelling universities to greater accountability are international in scope. A report by the Organization for Economic Co-operation and Development (OECD) earlier this year detailed growing pressure on universities in a dozen countries to justify their appetite for public funds. And in most, it noted, “controversies existed concerning the publication of [performance] indicators.”

In Canada, the drive to develop standard and comparable measures of academic performance has gone furthest in Ontario. Prompted by demands from that province’s legislative Public Accounts Committee, a task force on university accountability earlier this year recommended that Ontario’s universities be required to report annually on their performance according to no fewer than 36 different indicators, including several of the same measurements used by Maclean’s in its rankings. The results, says Daniel Lang, the U of T assistant vice-president of planning who chaired the committee that produced the proposals, would be made public. Although the annual report would not assign grades to the universities, says Lang, readers “could compile their own ranking. You could make whatever comparisons you need to make to evaluate the program you’re interested in.”

Lang’s proposals are now on the desk of Dave Cooke, Ontario’s minister of education and training, who has promised a response by mid-December. His Saskatchewan counterpart, Pat Atkinson, meanwhile, is reviewing proposals to strengthen the accountability of universities in that province. And next week, 150 university administrators meet in Winnipeg to discuss a range of related issues, including the expansion of Statistics Canada’s national survey to provide unequivocally reliable information about graduates’ satisfaction with their education, institution by institution. Clearly, whatever their apprehension about the questions they may be asked, Canadian universities increasingly find themselves in the same situation as many of their own students: the one option they do not have is to skip the test—although so far, they have managed to avoid posting the marks. □