Is There Life After Rankings?

A report card from one college president, whose school now shuns the U.S. News ranking system—and has not only survived but thrived

Three years ago I experienced a form of liberation denied to most of my peers in higher education. I left the University of Pennsylvania, where, as dean of its law school, I had lived under the U.S. News & World Report ranking system for ten years, and assumed the presidency of Reed College, one of a handful of American institutions of higher education that refuse to cooperate with that system.

For ten years Reed has declined to fill out the annual peer evaluations and statistical surveys that U.S. News uses to compile its rankings. It has three primary reasons for doing so. First, one-size-fits-all ranking schemes undermine the institutional diversity that characterizes American higher education. The urge to improve one's ranking creates an irresistible pressure toward homogeneity, and schools that, like Reed, strive to be different are almost inevitably penalized. Second, the rankings reinforce a view of education as strictly instrumental to extrinsic goals such as prestige or wealth; this is antithetical to Reed's philosophy that higher education should produce intrinsic rewards such as liberation and self-realization. Third, rankings create powerful incentives to manipulate data and distort institutional behavior for the sole or primary purpose of inflating one's score. Because the rankings depend heavily on unaudited, self-reported data, there is no way to ensure either the accuracy of the information or the reliability of the resulting rankings.

When Reed's former president Steven Koblik decided to stop submitting data to U.S. News, he asked the magazine simply to omit Reed from its listings. Instead the editors arbitrarily assigned the lowest possible value to each of Reed's missing variables, with the result that Reed dropped in one year from the second quartile to the bottom quartile. After the predictable outcry, U.S. News purportedly began to rank Reed based on information available from other sources. In subsequent years that procedure usually placed the college somewhere in the middle of the second quartile, with a footnote stating that we "refused to fill out the U.S. News statistical survey," and claiming to base the ranking on data from published sources. But since much of the information needed to complete the magazine's ranking algorithm is unpublished, one can only guess how the editors arrive at a value.

Reed's experience has not gone unnoticed. In a recent conversation with me the president of a leading liberal arts college lamented the distortions and deceptions that the ranking process engenders. When I suggested that he follow our example, he replied, "We can't. They will just plug in their own data, and we'll drop ten places in the rankings!" Criticism of the rankings is nearly unanimous, but so is compliance with them. According to the latest statistics supplied by U.S. News, only five percent of surveyed colleges and universities fail to submit the statistical questionnaire. In the words of another of my fellow presidents, "The rankings are merely intolerable; unilateral disarmament is suicide."

Far from committing suicide, Reed College has survived. Indeed, it has thrived. Over the past ten years the number of applicants has increased by 27 percent, and the quality of entering students, as indicated both by conventional SAT and GPA measures and by Reed's internal "reader rating" system, has steadily increased—it is far higher than suggested by our nominal place in the U.S. News pecking order. More important, Reed continues to offer an academic program widely recognized for its uncommon rigor, intellectual structure, and theoretical depth. Its students continue to participate in faculty research and to earn competitive prizes and fellowships at unusually high rates. The college continues to set the pace in the percentage of its graduates who go on to earn a Ph.D.

At professional meetings my colleagues often ask, "What is life like outside the rankings rat race?" and "How has Reed survived?"

Not cooperating with the rankings affects my life and the life of the college in several ways. Some are relatively trivial; for instance, we are saved the trouble of filling out U.S. News's forms, which include a statistical survey that has gradually grown to 656 questions and a peer evaluation for which I'm asked to rank some 220 liberal arts schools nationwide into five tiers of quality. Contemplating the latter, I wonder how any human being could possess, in the words of the cover letter, "the broad experience and expertise needed to assess the academic quality" of more than a tiny handful of these institutions. Of course, I could check off "don't know" next to any institution, but if I did so honestly, I would end up ranking only the few schools with which Reed directly competes or about which I happen to know from personal experience. Most of what I may think I know about the others is based on badly outdated information, fragmentary impressions, or the relative place of a school in the rankings-validated and rankings-influenced pecking order.

A somewhat more important consequence of Reed's rebellious stance is the freedom from temptation to game the ratings formula (or, assuming that we would resist that temptation, from the nagging suspicion that we were competing in a rigged game). Since the mid-1990s numerous stories in the popular press have documented how various schools distort their standard operating procedures, creatively interpret survey instructions, or boldly misreport information in order to raise their rankings. Such practices have included failing to report low SAT scores from foreign students, "legacies," recruited athletes, or members of other "special admission" categories; exaggerating per capita instructional expenditures by misclassifying expenses for athletics, faculty research, and auxiliary enterprises; artificially driving up the number of applicants by counting as a completed application the first step of a "two-part" application process; and inflating the yield rate by rejecting or wait-listing the highest achievers in the applicant pool (who are least likely to come if admitted). Rumors of these practices and many others like them were rampant in education circles in the early years of formulaic ranking. Reading a recent New York Times article, however, I was struck by how the art of gaming has evolved in my former world of legal education, where ranking pressure is particularly intense. The Times reported that some law schools inflate their graduate-employment rates by hiring unemployed graduates for "short-term legal research positions." Some law schools have found that they can raise their "student selectivity" (based in part on LSAT scores and GPAs for entering students) by admitting fewer full-time first-year students and more part-time and transfer students (two categories for which data do not have to be reported). At least one creative law school reportedly inflated its "expenditures per student" by using an imputed "fair market value," rather than the actual rate, to calculate the cost of computerized research services (provided by LexisNexis and Westlaw). The "fair market value" (which a law firm would have paid) differed from what the law school actually paid (at the providers' educational rate) by a factor of eighty!

Gaming the peer evaluations is harder, but some survey responders are not above "dumping" their schools' closest peers into the bottom tier so as to undermine the competition. Perhaps the most common tactic is simple self-promotion. When I was a law-school dean, my mailbox would begin to fill up about a month before U.S. News's annual "beauty contest" questionnaire arrived—with glossy admissions brochures, alumni magazines, lists of faculty publications, and breathless announcements of new buildings and academic symposia, all accompanied by bland cover letters from my counterparts expressing the thought that I might find the enclosures interesting and illuminating. In my ten years as dean I only once received a cover letter that came right out and said what every other letter wanted to say: "When the U.S. News opinion survey comes out next week, please keep our law school in mind."

By far the most important consequence of sitting out the rankings game, however, is the freedom to pursue our own educational philosophy, not that of some newsmagazine. Consider, for example, the relative importance of standardized tests. The SAT or ACT scores of entering freshmen make up half of the important "student selectivity" score in the U.S. News formula. Although we at Reed find SAT and ACT scores useful, they receive a good deal less weight in our admissions process. We have found that high school performance (which we measure by a complex formula that weighs GPA, class rank, quality and difficulty of courses, quality of the high school, counselor evaluation, and so forth) is a much better predictor of performance at Reed. Likewise, we have found that the quality of a student's application essay and other "soft variables," such as character, involvement, and intellectual curiosity, are just as important as the "hard variables" that provide the sole basis for the U.S. News rankings. We are free to admit the students we think will thrive at Reed and contribute to its intellectual atmosphere, rather than those we think will elevate our standing on U.S. News's list.

U.S. News also gives very substantial weight (25 percent of its overall formula) to student-retention and graduation rates. But it is far from clear that high student retention is the unmixed blessing implied by that formula. Rewarding high retention and graduation rates encourages schools to focus on pleasing students rather than on pushing them. Pleasing students can mean superb educational programs precisely tailored to their needs; but it can also mean dumbing down graduation requirements, lessening educational rigor, inflating grades, and emphasizing nonacademic amenities. At Reed we have felt free to pursue an educational philosophy that maintains rigor and structure—including a strong core curriculum in the humanities, extensive distribution requirements, a junior qualifying examination in one's major, a required senior thesis, uninflated grades (not reported to students unless they request them), heavy workloads, and graduate-level standards in many courses. We have also felt free to resist pressure to provide an expensive and highly selective program of varsity athletics and other nonacademic enticements simply for their marketing advantages. Not surprisingly, our attrition rates, though declining steadily, are higher than those at the highest-ranked schools.

As a rankings holdout Reed is free to appoint talented young teacher-scholars, even if they are still completing their dissertations, without worrying about impairing the college's "proportion of professors with the highest degree in their fields" (a significant component of the U.S. News "faculty resources" index). We are also free to set academic policy without worrying about optimizing a "class size" ranking. (U.S. News gives positive weight to the percentage of classes with fewer than twenty students, and negative weight to the percentage with more than fifty.) Reed's average class size is, to be sure, very small (just below fourteen), reflecting agreement with the educational philosophy implicit in the U.S. News formula. But unlike many of our rankings-sensitive peers, we feel no pressure to use part-time adjunct faculty or teaching assistants as an inexpensive but educationally dubious technique for even further increasing the percentage of small classes. Conversely, we can embrace the educational benefits of combining large lectures with small laboratory sessions in some disciplines.

What lesson can be derived from the fact that Reed continues to thrive despite its refusal to cooperate with the U.S. News rankings? Some of my peers speculate that Reed's success has little application to their schools. Only a college as iconoclastic and distinctive as Reed, they argue, could pursue such a strategy and survive. I disagree. To me, our success says something important about the market for higher education as well as about Reed College. Participants in the higher-education marketplace are still looking primarily for academic integrity and quality, not the superficial prestige conferred by commercial rankings. They understand that higher education is not a mass-produced commodity but an artisan-produced, interactive, and individually tailored service of remarkable complexity. Trying to rank institutions of higher education is a little like trying to rank religions or philosophies. The entire enterprise is flawed, not only in detail but also in conception. This is not to say that schools should not be held accountable. Like its peers, Reed submits reams of data to the National Center for Education Statistics, to our accrediting agency, and to a consortium of commercial college guidebooks. The college publishes large amounts of information and descriptive material in its literature and on its Web site. Most important, it articulates its academic requirements in exquisite detail, and focuses on those measures of institutional performance that are most germane to its mission. At Reed these measures include the quality of senior theses, the amount of student research activity, the percentage of graduates earning Ph.D.s, and the number of competitive prizes and awards received by students and graduates.

Before I came to Reed, I thought I understood two things about college rankings: that they were terrible, and that they were irresistible. I have since learned that I was wrong about one of them.