University Improvement: The Permanent Challenge

Prepared at the Request of President John Palms, University of South Carolina

John V. Lombardi for TheCenter Staff

February 2000

The American public research university is an enduring institution. Subject to the enthusiasm of the public and its elected representatives, competitive with the other institutions of its kind for scarce federal and private dollars, and responsive to the needs of its many constituencies, the American public research university nonetheless endures and for the most part prospers. Built into the core of their state’s public services infrastructure, these institutions in almost all cases can survive any challenge. They grow and prosper with the fortunes of their states and with the wisdom of their boards, their faculty, their administrators and their supporters. They falter when the state’s economy falters, they expand when the state expands, but always they endure.

This endurance provides an important context for the conversation about institutional improvement. American public research universities feel little pressure for substantial improvement. They can exist and continue with only modest change, flowing with the currents and fortunes of their state’s public enterprises, improving what is easy to improve, leaving alone what is hard. Over time, the university will become better than it was, but it will not become as good as it can be. Because the university does not have to improve to survive, it requires strong leadership and an effective plan to move the institution faster than the ordinary flow of events will take it. It takes strength and commitment for the university to find the direction for significant improvement and stay with it long enough to make the changes permanent.

Improvement and Measurement

Improvement and change have no meaning without measurement. Much university conversation in the public sector involves complex, uplifting, and even entertaining controversy about the values and academic directions of the institution. Much public conversation turns on elaborate discussions of accountability and governance. Most of this is charming, well intentioned, but ultimately ineffective because it does not start with the measurable things. Universities, like all enterprises, cannot manage improvement unless they can measure the improvement. Academic measurement is the simplest of concepts and the most difficult of enterprises. University people have an aversion to self-measurement. Experts in measuring nearly everything else in the universe, from attitudes and behavior to physical and cosmological quantities, academics resist the aggregate measurement of their own work. Committed to the fundamentally handicraft nature of research and teaching, individual enterprises with unique results, they suspect the motives of those who would measure their work in aggregate, and they fear that the act of measurement itself will change the unique character of their work into a form of commodity production. Yet, absent measurement, the institution cannot drive improvement.

Improvement and Money

A second requirement for improvement is money. Academics often feel uncomfortable speaking plainly about money (we call it resources in our embarrassment), but the investment of money makes improvement possible. If we have no extra money, we will find it difficult to improve. Some observers misunderstand this fundamental truth and imagine that we can have our faculty teach more and better without actually invoking the specter of money. If our faculty teach more and better, they work harder. If they work harder, they increase their productivity. If they increase their productivity, they have generated an internal savings because they do more with the same money. As a result, we have (hidden from view perhaps) saved money through increased productivity, and we can invest this extra money in improved quality. This kind of invisible transaction often occurs in universities, but because we do not measure it, we cannot direct it effectively and we cannot gain the maximum advantage from our innovations and improvements. Sometimes we improve productivity, but because we do not measure the improvement, the savings disappear into unplanned expenditures or increased inefficiencies elsewhere in our institution. Money matters to universities.

The relationship between measurement and money is the key dynamic driving university improvement. If an institution enjoys generous funding that increases faster than the inflation in the continuing costs of operating the university, then the need for measurement declines. Such a fortunate institution can spend the surplus (or margin) on projects that improve the university. It can hire superstar faculty, it can buy equipment, and it can build buildings. Universities have no lack of good ideas that will improve the quality of the place. Faculty have no lack of superior projects that will enhance the institution’s reputation. What the university has is a lack of funds to address these opportunities. If the budget increases and general funding pays the ongoing costs, pays for whatever inefficiencies are in the university’s systems, and produces a positive margin or surplus for investment in programs and enhancements, then the university can get better quickly without worrying too much about measuring things. This is the ideal world for the American research university. Only a few universities have enjoyed this luxury.

The classic case is the University of California. Thanks to an investment-minded state, a strong tax structure, and a political agreement on the division of mission among higher education institutions, the campuses of the UC system received exceptionally good funding that they put to good use in building multiple nationally competitive research institutions. No other state can compare to California in this commitment to quality in so many of its state research universities.

Most universities, however, do not live in the California mode. They live in states with much less commitment to investment in high quality education, many fewer tax dollars to spend on public services, and a highly charged political contest among state universities over the missions of the various public institutions. Universities in these states, if they want to improve faster than the general flow of events, need to engage the issues, measure their performance, and drive improvement.

The Competition

The top fifty or so public universities in America do not all have an equal chance for academic eminence. Those at the top (Michigan, Berkeley, UCLA, UNC Chapel Hill, Washington, Minnesota) have a commanding lead in every category. They have better students, better faculty, better research revenue, larger endowments, more annual giving. Some have more tuition and fees; some have excellent state support. Whatever the package, these universities not only are at the top, they will stay there barring some catastrophe. The size and bulk of a university have much to do with its success. These institutions have the base, the traditions of performance, and the faculty needed to continue to compete at the top level. They will get more than their share of the best students nationwide, of the federal dollars, of the private gifts. They set the scale of performance, and other public universities that aspire to compete must recognize these institutions as the competition.

This competition is tough, and because universities do not have to improve to survive and continue forward, many voices will speak against the need to compete nationally. One technique is to argue that because one university is smaller than another, virtue lies in its smallness. This perhaps is theoretically so, but practically it is not so. Size is an important determinant of success. The top public universities are large in number of people, large in budget, large in grants and contracts, large in the number of merit scholars, large in any dimension that defines quality in the university. Size is an advantage: it provides strength, and it offers a buffer against failure and mistakes. If a university wants to compete in this league, it must recognize that the competition is between universities. It is not between university systems; it is not between departments (although departments do compete); it is between universities. We measure it by the quantity of quality work produced.

Quantity of quality work produced is a key element. A good university may well have a superstar or two, a fine colleague of national stature, a prize-winning program. The great American public universities have many superstars, many colleagues of national stature, and many prize-winning programs. This is the difference between a good university and a great university.

Measuring and Rewarding Performance

If a good university chooses to improve, chooses to move its performance to higher levels in the competition among American public research universities, then it needs a program. That program, however the university presents it, requires two fundamental things: a focus on the money and a focus on measurements of productivity and quality. There is no escape from these imperatives if the university is to grow and improve faster than the general flow of events.

First, there is the money. University money comes into the institution in an endless variety of ways. From tuition and fees to grants and contracts, from state subsidies for instruction to state appropriations for capital, from gifts and endowment to patent and license fees, all these sources nourish the academic enterprise. While each source of funds has its restrictions and limitations on use, an improvement program starts from the assumption that it will measure the money and reward those units of the university that can increase the amount of money. If the university becomes lost in a complex conversation about good money and not so good money, about restricted and unrestricted money, about state and non-state money, it will miss the point. All money is good; that is the first premise. Then, the university can decide how it can increase the money and reward those who do it. If there is money for more enrollments, then the university should have a strong enrollment management plan to take maximum advantage of all the money available for enrollment. If there is money from gifts and endowment, the university should reward those units that can increase these private gifts. If there is money from grants and contracts, then the university should reward the units that increase grants and contracts. To do this the institution must measure the money, allocate the increases to the units, and manage a reward program.

While the university wants to get as much increase in dollars as possible, it also wants to reward the effective use of those dollars. This requires the measurement of performance in quality and productivity. Both are required. A small amount of superb quality at great expense does not make a great university. Neither does a large amount of poor quality at low expense. The goal is high quality and high productivity. The university must measure these things, but measure them in as simple a way as possible. The favorite method for avoiding responsibility in the university is to create complexity in the evaluation of performance. If we accept that every university product is unique, then each one requires a unique measurement and the cost of measurement will exceed any benefit it might produce. Instead, the university needs to pick some simple surrogates for performance, a task easier done for productivity than for quality. Nonetheless, the simple measures, if linked to effective rewards, produce remarkable changes in a very short time.

Management and Governance

Management is the final element in this conversation. Universities for the most part do not have management; they have governance. Governance is the political process that balances the various competing interests of the institution through a complicated and lengthy process. The characteristic of university governance is consensus. Consensus for a university normally results in modest and superficial change in the general operations of the institution, especially in terms of money and incentives. Universities are by nature exceedingly conservative; the faculty assume the status quo is better than whatever alternative might appear unless the alternative offers more money for less work. Universities that are already high performers benefit from this conservatism. They have a consensus for high performance and high standards that the conservative predisposition maintains. Universities that are merely good have a consensus for good standards, but not for high standards. The conservatism will keep them good, but they rarely will make the considerable and often unpopular effort required to increase their standards to match those of excellent universities.

To improve, the university must have management. It must have direction. The institution must consult, it must meet, it must listen, and it must respond to all the information, opinion, and advice from its many constituencies, but it must nonetheless act, and it often must act without complete consensus. It must choose a direction, it must discuss this direction with the institution’s many constituencies, and then, after making whatever changes emerge from the discussion, the university must act to manage the process of improvement. The process that leads to significant institutional improvement has some important characteristics. Management must drive performance based on clear, open, and explicit measurements of quality and productivity. Management must reward improvement with money. If either of these two elements is absent from the management structure, improvement will become very difficult to implement.

Finally, the management must have the support and commitment of its board. Improvement requires enhanced performance. Enhanced performance requires that people work harder and better than they did before. Some of those asked to work harder and better will not do so. Some of those asked, will. The university must reward those who perform at significantly higher levels of quality and productivity. This is true of groups (colleges and departments) and individuals (faculty and staff). Those who do not work harder or better will find endless reasons to resist a system that rewards those who do work harder and better. The university offers a wide range of mechanisms for such resistance. If the board does not support the management, then the system will fail and governance will overwhelm management. When governance replaces management, the improvement program will slow, falter, and quietly die.

Many times boards and administrations engage in a rhetoric that promotes productivity, quality, and performance, but when the inevitable resistance to a program that actually delivers these things appears, the board may discover that it has no appetite for the controversy that results. If improvement is significant, if rewards follow performance, then there will be controversy. The board needs to be sure it approves of the standards and processes, and then it needs to allow the management to drive the improvement process, providing no comfort to those for whom increased and measurable productivity and quality challenge the status quo.

A Reality Check on Performance

Measuring university performance is a challenging task. Every university has different strengths it may want to stress and weaknesses it may want to remedy. If the university’s goal is to become a more effective competitor among America’s best public research universities, then it must begin with some indicators of its relative position in this group. Attached to this presentation are data that provide a point of reference for understanding scale and performance. The context is a group of fifty universities identified as the top American public universities by virtue of their performance on nine measures. The Top Public Universities project takes the position that the exact placement of any university in a ranking scale is of relatively little value. More important is the grouping of universities, recognizing that the differences among the universities within each group are small.

The Top Publics presentation in the attached materials has a built-in emphasis on research. Of its nine measures, five reflect research directly, and most of the rest reflect research indirectly. This emphasis recognizes that the distinguishing characteristic of America’s top public universities resides in the research quality of the faculty and programs sustained by that university. Although we might want to believe that undergraduate instructional quality is an important element in this equation, in fact it is not. Universities with exceptional research productivity usually have, in addition, fine undergraduate programs, and universities with modest research performance have fine undergraduate programs. Small colleges with almost no research program have fine undergraduate programs. Since many universities and colleges deliver fine undergraduate instruction, the distinguishing characteristic of America’s great public universities is, in fact, research. Research is a generic term, however, that covers a number of different institutional characteristics.

The first two measures of research include both the total research and the federal research expenditures of each university. While these appear to be quantity measures, they are also quality measures, since most research, but not all, reflects a peer review process that judges the quality of work for which funding may be provided. The reason for including both total research and federal research is to provide a measure in the first number that includes state funding for research and a measure in the second number that clearly identifies the highly competitive federal research.

The second two items measure an institution’s private funding. Endowment is a measure of the institution’s base private funding that provides a permanent stream of income. Annual giving is a current measure of the institution’s ability to raise dollars for both today’s expenses and endowment growth. In addition, the second number identifies institutions that may not have as long a historical record of successful fundraising (reflected in a smaller endowment) but currently have a strong competitive fundraising program. Much private giving, of course, reflects the research programs of the university that attract endowments and other gifts in support of that research.

The next two indicators provide rough surrogate measures of the distinction of the faculty by counting the number of National Academy members and the number of Arts and Humanities award recipients among the faculty. These two also primarily recognize research achievements, and the second measure identifies strength in the humanities not so easily reflected in the grant and contract measures.

The indicators that show Ph.D.s granted and Post-Docs serve as measures of graduate intensiveness, another indicator of institutional strength and an indirect indication of faculty quality. The Ph.D. count includes, of course, humanities and social science doctorates in addition to science doctorates. Post-Docs provide yet another indirect but useful indicator of both the quality of the faculty and the research depth of the institution’s science programs.

The final measure is a surrogate for undergraduate quality. Students identified as National Merit and National Achievement scholars seek out institutions with exceptional reputations. The quality of the student body is one of the major attractions that distinguish the undergraduate programs of different universities; the number of Merit and Achievement scholars in the fall class each year gives a useful surrogate measure of the institution’s overall undergraduate quality.

Somewhat arbitrarily, we decided that the relevant group of top universities would include 25 institutions. We then counted the number of measures for which each university fell in the top 25. Those ranked in the top 25 on all nine measures belong to the top group of American public research universities. Those in the top 25 on 8 measures belong in a second group, and so on through those universities in the top 25 on 5 measures. These universities, with from 9 to 5 measures in the top 25, form the top tier of American public research universities. Institutions with 4 to 1 measures in the top 25 belong to a second tier. The total number of universities with at least one measure in the top 25 is about 50 institutions.
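The grouping logic itself is simple enough to sketch in a few lines of code. The fragment below, in Python, is illustrative only; the measure names, example ranks, and function names are hypothetical and are not drawn from TheCenter's actual data processing.

    # Illustrative sketch of the tier-grouping logic described above.
    # Measure names and the example ranks are hypothetical.
    TOP_N = 25  # a university counts on a measure if it ranks in the national top 25

    def count_top_measures(ranks):
        """Count how many of the nine measures place the university in the top 25.
        ranks: dict mapping measure name -> national rank (1 = best)."""
        return sum(1 for r in ranks.values() if r <= TOP_N)

    def tier(count):
        """Assign the tier from the number of top-25 measures."""
        if count >= 5:
            return "first tier"    # 5 to 9 measures in the top 25
        if count >= 1:
            return "second tier"   # 1 to 4 measures in the top 25
        return "unranked"          # no measure in the top 25

    # Hypothetical university ranked in the top 25 on six of the nine measures
    example = {
        "total_research": 18, "federal_research": 22, "endowment": 30,
        "annual_giving": 24, "national_academy_members": 40, "faculty_awards": 21,
        "phds_granted": 19, "postdocs": 28, "merit_scholars": 23,
    }
    print(count_top_measures(example), tier(count_top_measures(example)))  # 6 first tier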

The value of these categories is not so much that they give bragging rights to different universities but that they demonstrate the difference between institutions at the top and those in various stages of growth and improvement. The very best institutions compete successfully on all measures; they rank in the top 25 on all categories. They do not necessarily all rank first on every measure, but they are at least in the top 25 on every measure. This means they have competitive balance. Universities that fall farther down the structure usually have some excellent programs, but they do not have the breadth of strength visible in the top group. This provides a diagnostic, for it allows institutions the opportunity to recognize what the best performing universities do, and it permits any university to make some choices about what it wants to do to compete.

Some observers complain that these measurements unfairly benefit the large and wealthy universities. They are right. Not all large universities, however, do well; and not all wealthy universities do well. Nonetheless, the best universities are those who have the scale and the resources to perform well and who DO perform well. Other observers complain that these indicators do not show the relative productivity of the faculty. This is also true. The purpose of these data is to measure the full power and productivity of universities. Some universities have many faculty who do not produce research, and if we divided the amount of research, for example, by the number of faculty, the rankings would be somewhat different. Nonetheless, the measurement of faculty productivity is a different issue. The easiest demonstration of this, of course, is the hypothetical case of the university with one faculty member who has a million dollar grant. The faculty productivity of this institution is surely top of the scale, but no one would recognize such an institution as a major research university.

The Top Public Universities project serves not so much to rank institutions as to identify indicators of institutional performance and to focus attention on the characteristics that identify high performing universities. We can imagine other indicators that might do this task better, but for the most part such data do not exist. These data represent the best comparable data available, and for this reason the data offer the possibility of tracking institutional performance over time. TheCenter provides these data for three years (1998, 1999, 2000), although we have attached only the most recent version here. One of the useful features of this data set is its clear identification of single-university data. Many institutions report data for several campuses, a practice that makes useful comparisons difficult. TheCenter adjusts the data to reflect only the main research campus of the university (although it includes geographically separated entities when they function as part of the main campus). TheCenter’s website has a complete discussion of the data adjustments used for every university presented in the tables (http://thecenter.ufl.edu).

The final item available with this presentation is a copy of a document that reflects what an institution can do when it applies data-based performance criteria to the management of the institution. A Decade of Performance, 1990-1999 outlines the change made possible at the University of Florida by virtue of this form of incentive-driven, data-based management.

February 2000

Accompanying Materials