Watch what will happen now that U.S. News & World Report has issued its latest annual rankings of undergraduate colleges and universities: indignant college presidents will continue to denounce the process as fatuous and devoid of legitimacy.
Institutions that fared well will issue press releases and place banners on their Web sites trumpeting the recognition.
Some institutions are doing something about it, by developing and encouraging other methods of assessment. Parents, students and complaining college administrators should do everything they can to spur that movement along.
Many higher education officials acknowledge that the rankings subtly and overtly influence the policies and practices of their colleges and universities.
Of the 241 college and university presidents who responded to our survey, more than half said their institutions had taken specific steps to improve their ranking.
After studying factors that influence the rankings, some institutions have dropped the SAT as an admissions requirement, accepted more students in early-decision programs or offered more financial aid on the basis of merit, usually at the expense of aid for needy students.
Such decisions generally enable colleges to enroll greater numbers of higher-achieving students, which may lead to a higher ranking but also can lead to questions about who gets in, why, and with how much financial aid.
The institutions that responded to our survey said they had focused mainly on student-retention rates, student-graduation rates and faculty compensation (significant factors in the rankings calculation), as well as on SAT scores of incoming freshmen.
Are institutions diverting money from classrooms to efforts to influence their ranking? (VCU officials say their plan may end up costing $400,000.) Do the rankings create a perverse incentive to spend more when it may make sense to hold the line on costs? For some institutions, but by no means all, the answer to these questions is yes.
Consider colleges’ efforts to influence the “peer-assessment survey.” Each year, U.S. News sends questionnaires to about 4,000 presidents, provosts and admissions directors to determine their perceptions of the undergraduate educational experience at “peer” institutions.
This survey, known until this year as the "academic-reputation survey," accounts for 25 percent of a college's score, the largest single criterion in the rankings. Because other indicators that U.S. News measures are difficult to change significantly, some officials believe their best shot at altering the algebra of the rankings is to influence the peer assessments.
To do so, they hire consultants who advise them to run costly, aggressive direct-mail campaigns aimed at officials of other academic institutions. No one can prove that such marketing efforts work, but the practice is fairly common.
In another example of questionable incentives, institutions seeking to attain or maintain a high ranking learn that controlling faculty salaries and per-student educational spending (perhaps the right prescription for a college trying to tame tuition increases) may jeopardize their ranking, and thus their ability to attract the best applicants.
Not everything U.S. News measures causes concern. Indeed, many in higher education acknowledge that the rankings incorporate some important indicators of institutional quality and performance. Nor are higher education officials wrong to pursue changes when the rankings reveal an institution's shortcomings. Improving student graduation and retention rates, for example, is a laudable objective, and increasing the number of alumni who donate money is crucial for most schools.
What most infuriates higher education officials is the idea of any formula that purports to measure overall institutional quality by combining disparate indicators into a ranking system, especially a ranking system in which schools’ positions may vary from year to year based on changes in how the survey is conducted.
There’s little question that the U.S. News rankings can be helpful to students and families who use them appropriately. But as they do so, students and parents also need to demand better ways of judging whether the schools they’re considering are delivering on the promise of providing high-quality teaching and a good learning experience.
Many colleges and universities are trying to figure out how to conduct such assessments themselves. National surveys and assessment instruments have existed for years, but only recently have a few emerged that were spurred by the U.S. News rankings.
Most prominent is the National Survey of Student Engagement, which last spring evaluated 367 institutions.
NSSE asks first-year students and seniors how they actually use the learning resources their schools provide. The surveys are built around five benchmarks of effective educational practice:
–Level of academic challenge
–Active and collaborative learning
–Student interaction with faculty
–Enriching educational experiences
–A supportive campus environment
The people who conduct NSSE are wrestling with several issues, including the reliability of student-reported data, the validity of the benchmarks and some schools' reluctance to participate.
In the end, gauging where one educational institution stands compared with its peers remains a worthy goal. The task for students, parents and educators alike is to insist on measures that reflect it honestly.