The Chronicle of Higher Education, 9/8/2000

POINT OF VIEW

How Classifications Can Help Colleges

John V. Lombardi

Driven by the American enthusiasm for identifying the mythical Number One, college rankings and classifications are a growth industry. The National Research Council, consumer guidebooks like Barron's Profiles of American Colleges and the Princeton Review's Best 331 Colleges, and magazines such as U.S. News & World Report, Maclean's, and even Men's Health, among others, try to organize institutions and their programs into groups or hierarchies.

A more serious need for categorizing, however, comes from the complexity, diversity, and richness of our academic universe. We suffer from the peculiar triumph of a higher-education industry whose range and scope exceed our descriptive language. Lacking adequate words, we name everything either a "college" or a "university" -- despite the astounding heterogeneity of our institutions.

Because great power resides in the term "college" and even greater totemic value in the label "university," we don't challenge our nomenclature. Instead, we attempt to cluster the thousands of traditional institutions, as well as a growing number of commercial and other nontraditional institutions, into categories that will help us make some sense out of the bewildering variety. A satisfactory taxonomy, however, is difficult to design, as the Carnegie Foundation for the Advancement of Teaching's recent revision of its prominent classification system proves all too well.

The foundation created the system 30 years ago to serve as a research tool for scholars. It aimed to classify colleges according to academic mission, not to order them from "the best" on down, based on resources, quality, prestige, or other criteria. Yet its classifications have increasingly been used to rank colleges -- for instance, institutions have competed to be in the "top" list of Research I institutions, rather than in the Research II group, by striving to obtain more grants. Carnegie's main goals in revising its system have been to discourage that practice, as well as to emphasize the importance of teaching.

The changes have evoked a barrage of criticism -- perhaps inevitably, given that U.S. News uses the classification as a starting point for its annual college rankings, and institutions now view the Carnegie system as a key measure of their identity. Yet can any classification or ranking method systematically and fairly provide the clarity and order -- let alone the pecking order -- we relentlessly seek?

Most methodologies struggle with the same basic problems. Some group and rank institutions by reputation: we ask presumably expert observers to order institutions from best to worst. Unfortunately, those experts rarely have sufficient knowledge of more than a dozen or so institutions, and the resulting rankings are notoriously unreliable. They provide little insight into an institution's actual performance.

Another method is to measure some element of quality. However, quality is one of those things about which we agree in principle but disagree in practice. What, specifically, should we measure to determine quality? The amount of money spent on each student? The SAT scores of entering freshmen? The number or distinction of faculty publications?

The new Carnegie classifications ambitiously attempt to describe all of higher education in a single taxonomy. They collapse four previous categories of research and doctoral institutions into two: doctoral/research extensive and doctoral/research intensive. Instead of basing distinctions among those institutions on the amount of federal research money they draw annually, the new classifications divide colleges by the number of doctoral degrees they award across a given number of disciplines. Carnegie hopes those changes will discourage the system's use as a ranking tool and avoid giving research more weight than teaching and service.

Unfortunately, for all Carnegie's good intentions, the result is a number of categories so broad that they have little meaning -- except for those institutions caught at the boundaries. Combining institutions such as Boston College, Caltech, and the University of Idaho into the same classification reduces our ability to understand the particular strengths and different missions of each of these quite distinctive institutions. In addition, the lack of reliable comparative data hinders Carnegie's efforts to emphasize issues of teaching and learning.

Moreover, imagine an institution that awards 50 Ph.D. degrees, but in only 12 disciplines. Under the new Carnegie categories, it fails to make the cut as a doctoral/research extensive university, regardless of how much research its faculty performs. But thanks to the malleability of the discipline definitions Carnegie uses, the institution can, with minor bureaucratic effort, expand its 12 disciplines to 15 through academic mitosis within a year, and join the top group.

Given such drawbacks, I wonder if any of us would pay much attention to the new system if the brand name "Carnegie" weren't so high profile. But in a world that's full of classifications and rankings but bereft of good data, brand name is important.

Carnegie officials stress that their new classifications are just an interim step toward an overhaul of the entire system in 2005. They say that the next version will be much more flexible, to bring together the different dimensions of institutions. In the next five years, let's hope that Carnegie develops reliable and verifiable data on any characteristics it uses to determine institutional performance. It should also distinguish between data that measure the performance of institutions, such as federal research expenditures, and those that measure the productivity of individuals within institutions, such as average faculty research productivity.

For now, although the Carnegie groupings make the point that all colleges are not alike, they don't help institutions compete more effectively. For example, private liberal-arts colleges compete directly with one another for faculty and staff members, students, and resources far more than they compete with large public research universities. But they already know what the new Carnegie classifications tell them: that they are indeed liberal-arts colleges. To gauge their progress and how they might improve, they need more comparative data about the performance of other liberal-arts colleges in their competitive niche -- such as the number of students who participate in independent-study programs or go on to graduate school.

What complicates matters, however, is that quite dissimilar institutions compete against each other in certain aspects of their operations but not in others. While large research universities may not compete directly with some small liberal-arts colleges for medical-research grants, they compete for the best students.

Does this make comparative analysis of colleges and universities pointless? No, but it does recommend caution. The trick, of course, is to construct reasonable competitive contexts.

For example, the Johns Hopkins University is a major research university that competes with Harvard University, the University of California at Berkeley, and the University of Michigan at Ann Arbor for faculty members, grants, and postdoctoral students. It also competes with Pomona, Swarthmore, and Williams Colleges for undergraduate students.

Is it useful to compare Hopkins and Williams as institutions? Not really. But Hopkins should surely compare its undergraduate-student programs and recruitment efforts with those of Pomona, Swarthmore, and Williams to see whether it can improve in those specific areas. Similarly, Hopkins should compare its research performance with that of Harvard and Berkeley to see how it stands in the competition among major research universities.

In addition, certain measures like average faculty productivity are highly suspect if used to rank diverse institutions, because faculty perform different functions at different institutions. Judgments about faculty productivity require more extensive data about institutional focus and job assignments -- for example, whether certain faculty members are librarians, teach remedial courses, or provide public service. To improve performance, institutions need appropriately structured comparisons.

In other words, the classifications and rankings that serve higher education best are those that provide accurate, explicit data focused on specific institutional activities within certain defined areas. Such categorizations do not tell us which is the best university or college; they tell us how well each institution performs within a particular context.

At my research center at the University of Florida, we have attempted to build that sense of context into a new ranking system specifically for top American research universities. Our system ranks 82 public and private institutions separately in nine categories, and places them in tiers based on how many top-25 mentions they each receive. We've identified measurable indicators that can help a research university improve its overall performance in areas that are critical to its success. Reasonably reliable campus-specific data already exist: grants and contracts, numbers of postdoctoral appointees, endowment, annual giving, and similar elements that distinguish and define a research university. We plan to repeat the analysis annually to chart changes in institutional performance and to address different issues -- such as the impact of a medical school, or the relationship between a university's size and its performance.
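To make the counting rule concrete, here is a minimal sketch, in Python, of how a tiering scheme of this kind could be computed. The institutions, figures, and measure names below are hypothetical, and the code illustrates only the general logic of counting top-N appearances and grouping by that count; it is not TheCenter's actual method or data.

    # Illustrative sketch only: tier institutions by how many measures place
    # them in the national "top N." All names and numbers are hypothetical.

    # Hypothetical measures, loosely echoing those named in the article
    # (grants and contracts, postdoctoral appointees, endowment, annual giving).
    MEASURES = ["federal_research", "postdocs", "endowment", "annual_giving"]

    # Hypothetical data; a real analysis would cover 82 institutions and nine
    # measures drawn from verifiable, campus-specific sources.
    institutions = {
        "University A": {"federal_research": 410.0, "postdocs": 900,
                         "endowment": 2.1, "annual_giving": 180.0},
        "University B": {"federal_research": 150.0, "postdocs": 300,
                         "endowment": 0.8, "annual_giving": 300.0},
        "University C": {"federal_research": 620.0, "postdocs": 1200,
                         "endowment": 4.5, "annual_giving": 260.0},
    }

    def count_top_mentions(data, measures, top_n=25):
        """Count, for each institution, the measures on which it ranks in the top_n."""
        mentions = {name: 0 for name in data}
        for measure in measures:
            # Rank all institutions on this measure, highest value first.
            ranked = sorted(data, key=lambda name: data[name][measure], reverse=True)
            for name in ranked[:top_n]:
                mentions[name] += 1
        return mentions

    def assign_tiers(mentions):
        """Group institutions into tiers: equal counts share a tier, highest count first."""
        tiers = {}
        for name, count in mentions.items():
            tiers.setdefault(count, []).append(name)
        return dict(sorted(tiers.items(), reverse=True))

    if __name__ == "__main__":
        # The article counts top-25 mentions; a cutoff of 2 is used here only
        # so the three-institution toy data produces distinct tiers.
        mentions = count_top_mentions(institutions, MEASURES, top_n=2)
        for count, names in assign_tiers(mentions).items():
            print(f"{count} top-2 mentions: {', '.join(sorted(names))}")

Counting mentions rather than averaging raw scores keeps the emphasis on breadth of strength across measures, which is the sense of "tiers" the paragraph above describes.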

In short, all classification and ranking systems need to become more sophisticated. Given the difficulties involved in defining complex institutions within the vast universe of higher education, Carnegie should indeed substantially refine its system by 2005. If the foundation can identify and organize reliable comparative measures of undergraduate-program quality, it will make a major contribution to our understanding of American higher education.

Meanwhile, classifications and rankings are here to stay, whether we like them or not. We just need to recognize -- and try to communicate to the public -- that we can't determine the best college or best university. We can, however, single out high-performing research universities, highly selective private colleges, academically distinguished undergraduate institutions, and individually productive faculty members. And to do that, we have to identify reliable measures, define our universe, and deliver the data.

John V. Lombardi is a professor of history and the director of TheCenter at the University of Florida.

Section: The Chronicle Review
Page: B24