Demystifying Rankings and Ratings

By Steven C. Bahls, JD  //  Volume 27, Number 4  //  July/August 2019

Americans love rankings, and college and university rankings are considered important measures of institutional stature. Most board members have mixed feelings about them but are anxious to compare their institutions with others. And most board members wonder whether their institutions should do more to improve their rankings. What do board members need to know about these third-party evaluations, including when they add value and when they don’t?

With the proliferation of rankings and ratings, no single source holds the importance that it might once have held. Any informal sampling of college and university websites reveals that while many college leaders decry rankings, their own websites still boast of specific rankings—choosing, of course, those that place their institution in the best light.

UNDERSTANDING RANKINGS AND RATINGS

To evaluate the worth of these rankings and ratings—and how to respond to them—it’s important to understand the primary types of evaluations and their merit.

Rankings and ratings based on a blending of statistical factors. Respected rankings, such as those produced by U.S. News & World Report, Forbes, Money, and the Wall Street Journal, aggregate information about colleges and universities. These comparative rankings and ratings use published formulas to rank institutions, relying on publicly available information such as that produced by the US Department of Education’s National Center for Education Statistics, along with information submitted by the ranked institutions. Rankings use different factors and apply different weights to common factors. U.S. News, for example, weights outcomes at 35 percent of the total score, with various resource measures accounting for 45 percent and reputational opinion for 20 percent. In contrast, the Wall Street Journal’s college rankings weight outputs at 40 percent, inputs at 30 percent, engagement at 20 percent, and diversity at 10 percent. For Money’s rankings, affordability is an important factor, accounting for one-third of the ranking. The weights of the factors are arbitrary in that any weight could have been selected, but they are not capricious, since most factors are arguably related to institutional quality.
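For readers who want to see the arithmetic, the sketch below shows how a composite score of this kind is typically assembled: each component is normalized to a common scale, multiplied by its weight, and summed. The category weights are the U.S. News figures cited above; the component scores for the hypothetical “Example College” are invented for illustration, and real methodologies standardize and adjust the underlying data in ways this sketch ignores.

```python
# Illustrative only: assembling a weighted composite ranking score.
# Weights are the U.S. News category weights cited in this article;
# the normalized component scores below are hypothetical.

weights = {
    "outcomes": 0.35,     # graduation, retention, and related outcome measures
    "resources": 0.45,    # faculty, financial, and student-excellence inputs
    "reputation": 0.20,   # peer and counselor opinion surveys
}

# Hypothetical component scores, each already normalized to a 0-100 scale.
example_college = {
    "outcomes": 82.0,
    "resources": 67.0,
    "reputation": 58.0,
}

composite = sum(weights[k] * example_college[k] for k in weights)
print(f"Composite score: {composite:.2f}")  # 0.35*82 + 0.45*67 + 0.20*58 = 70.45
```

The same data run through the Wall Street Journal’s or Money’s weights would produce a different score, and often a different order, which is why the choice of weights matters as much as the underlying numbers.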

Rankings and ratings based on performance in specific areas. A growing number of rankings assess colleges in specific areas, such as commitment to environmentalism, beautiful campuses, physical fitness, and serving the public good. Most publish the factors assessed and the weights of those factors. Though many would argue that publishing weighted factors makes these rankings objective and unbiased, the truth is that there is often considerable subjectivity in how those doing the ranking assign numeric scores to many of the factors. And, of course, such factors as campus beauty and the fitness of students are really in the eye of the beholder.

Ratings from the American Council of Trustees and Alumni (ACTA). Among the most widely circulated ratings of colleges are the “What Will They Learn?” ratings published by ACTA. With substantial financing from conservative foundations, ACTA sends its rankings directly to board members. It rates each college on whether seven “core subjects”—composition, literature, foreign language, US history, economics, mathematics, and science—are taught in what many deem to be a traditional way. In ACTA’s view, for example, institutions do not offer sufficient instruction in history if students are allowed to take world history instead of US government or American history. Nor are colleges deemed to offer sufficient literature instruction unless they offer comprehensive literature surveys, as opposed to allowing students to focus on specific authors or genres. ACTA believes that students should take a regimen of courses in the areas it prescribes. Institutions large and small that do not prescribe a narrow curriculum—such as those with more flexible distribution requirements—do poorly in ACTA’s ranking. For example, both Harvard University and Williams College, usually top-ranked, are among the 36 percent of institutions that receive a letter grade of D or F.

Trustees considering information provided by ACTA might first address one central question: Who is in the best position to prescribe curricula that advance the institution’s mission—a partisan outside organization or the institution’s own faculty as overseen by its own provost and academic affairs committee? Is the narrow, more traditional curriculum prescribed by ACTA, for example, better than a curriculum in which broad general education requirements support students developing critical and creative problem-solving skills? Reasonable minds can differ.

US Department of Education College Scorecard. The College Scorecard provides an easy way to compare institutions on such key factors as cost, starting salaries, and graduation and retention rates. Users can compare schools with the knowledge that the information comes from federal sources such as the Integrated Postsecondary Education Data System (IPEDS).

Pay-to-play rankings. Sadly, there are also rankings that require a fee to use the ranking designation or are conditioned upon advertising in a publication promoting the rankings. I recently received a notice from a publisher that my college “after extensive efforts of our market research team” had been “shortlisted” as one of the “20 Most Valuable Online Colleges in America.” We could claim the distinction for a “nominal sponsorship.” But there was only one problem: My college does not offer online programs!

ARE RANKINGS AND RATINGS USEFUL?

Organizations that publish rankings make the argument that the rankings cut through the clutter of information about institutions of higher education and thereby provide a useful tool for students and others in comparing the relative performance of institutions. Others argue just the opposite—that third-party assessments often oversimplify the relative performance of the institution. They contend that institutional performance should be assessed against mission, not against the arbitrary factors and weights selected by those who profit from the college-rating business. Such inputs as average SAT scores and GPAs, they contend, are less relevant than whether the institution delivers the outcomes that its mission promises.

Challenge Success, a nonprofit organization affiliated with the Stanford University Graduate School of Education, makes a strong argument in its report A “Fit” Over Rankings, stating that “traditional college rankings measure a set of factors that are weighted arbitrarily, drawn from data that are most easily quantifiable and comparable, sometimes poorly documented, and not always relevant to undergraduate education.” The report goes on to cite studies that demonstrate that selectivity of a college (often an important factor in rankings) has no correlation to learning, job satisfaction, and well-being, and little correlation to long-term earnings. The factor that really predicts strong outcomes, Challenge Success concludes, is the degree to which students are engaged in their college experience. The study concluded that “most rankings tell you primarily how famous a school is.”

But to dismiss the value of rankings and ratings because different schools have different missions is probably overly simplistic. With the plethora of information available about institutions, some comparative aggregation of data can be helpful to prospective students. Rankings and ratings can be a good starting point for evaluating colleges, but they surely should not be the ending point. And rankings and ratings provide a level of transparency for the comparative performance of colleges and universities that is hard to find elsewhere. In addition, rankings do play an important part in the national debate about affordability, quality, and accountability because they provide easy-to-understand comparable data about institutions.

DO PROSPECTIVE STUDENTS RELY ON COLLEGE RANKINGS?

Though most board members understand the limitations of rankings, they wonder about the impact of rankings on the ability to recruit students. The 2017 Cooperative Institutional Research Program Freshman Survey reported that only 17.9 percent of first-year students considered college rankings very important in their college choice. But 65.6 percent of the same group reported that “academic reputation” is very important in deciding between colleges. Clearly, much more goes into a prospective student’s assessment of academic quality than rankings. Trustees should ask how well their institutions convey academic quality to prospective students. Many institutions point to strong employment and graduate school outcomes as better markers of academic quality, particularly since 55.7 percent of respondents to the same survey said that a school’s graduates securing good jobs affected their college decisions.

In a 2007 study, the Higher Education Research Institute conducted a comprehensive assessment of the impact of college rankings. It found that rankings may “serve as more of an additional confirmation of an institution’s academic reputation, social and academic offerings, and educational value rather than guiding students’ college choice in any meaningful way.”

Board members sometimes ask what can be done to improve rankings. It is true that actions can be taken to improve many of the ratings and rankings, but most of those require a considerable investment. The question boards should ask is whether that investment is the best use of funds, particularly when a relatively low percentage of prospective students consider these outside evaluations important in the college decision. Consider the U.S. News rankings as an example. A college’s U.S. News ranking can be improved by increasing faculty salaries (7 percent of the score), but board members might ask whether an investment in inputs like faculty salaries is likely to enhance the student experience. And what about increasing entering credentials of students (nearly 8 percent of the U.S. News score)? That can be done by allocating more financial aid to students with high ACT and SAT test scores, but what impact does this have on students needing more merit aid? On the other hand, boards could determine that investments that improve retention and graduation rates (20 percent of the U.S. News score) will both improve the value of the institution to its students and likely improve rankings.
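To make this trade-off concrete, one can compare how much a one-point improvement in a component would move the composite score, given the published weights. The sketch below uses the U.S. News component weights cited above; the one-point gain and the common normalized scale are simplifying assumptions, and the actual methodology standardizes the underlying data in more involved ways.

```python
# A rough way to compare ranking "levers," using the U.S. News component
# weights cited in this article. The one-point gain is hypothetical and
# assumes each component is scored on a common normalized scale.

levers = {
    "faculty salaries": 0.07,
    "entering student credentials": 0.08,
    "retention and graduation rates": 0.20,
}

improvement = 1.0  # hypothetical one-point gain in a component's normalized score

for name, weight in sorted(levers.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: +{weight * improvement:.2f} composite points")
```

On these weights alone, a point gained in retention and graduation rates is worth roughly three times a point gained in faculty salaries, which reinforces the board-level question above: invest where the ranking payoff and the value to students coincide.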

BETTER WAYS TO ASSESS AND COMPARE INSTITUTIONAL PERFORMANCE

There are better ways for boards to compare the relative performance of their colleges than simplistic numerical rankings and ratings. Boards and administrations should decide which factors are important for assessing the relative strengths of their institutions instead of relying on the factors found in the rankings. Boards should charge administrations and their institutional research offices with providing materials that compare the performance of their colleges with that of peer institutions on these factors. The two best ways to do this are well-designed institutional dashboards and evidence-based outside assessments of student performance.

Most institutions use dashboards. Strong dashboards have the following attributes (a simple illustration follows the list):

• Strong dashboards monitor those factors determined by the board to be the most important. Board members should ensure that the items on the dashboard have a relationship to the mission of the college and its strategic plan.

• Dashboards should report data for at least five years. Such factors as retention rates are difficult to evaluate without knowing whether the institution is improving them over time.

• The best dashboards are those that compare performance against the goals set by the administration and board.

• Dashboards should compare the performance of the institution against a group of peer schools. And the best dashboards compare performance of the institution against a group of aspirational schools. Board members should understand how peer and aspirational schools are selected.

• Institutions should be prepared to provide dashboard information with respect to various demographic groups, particularly regarding student performance. Do all demographic groups, for example, enjoy similar retention and graduation rates?

• Even though outcomes are sometimes more difficult to quantify than inputs, dashboards should assess outcomes beyond retention, graduation rates, student satisfaction, and employment rates. Institutions should be encouraged to develop outcome measures including student achievement of learning goals.
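As a simple illustration of the attributes above, the sketch below compares a hypothetical five-year retention trend against a board-set goal, a peer-group median, and demographic subgroups. Every institution name and figure is invented; an institutional research office would substitute actual IPEDS or internal data.

```python
# A minimal sketch of the dashboard comparisons described above.
# All names, retention figures, goals, and peer values are hypothetical.

from statistics import median

YEARS = [2014, 2015, 2016, 2017, 2018]

# First-year retention rate (%) over five years.
our_college = [81.0, 82.5, 83.0, 84.0, 85.5]
board_goal = 86.0

# Hypothetical peer group, same metric, most recent year.
peers = {"Peer A": 83.0, "Peer B": 87.5, "Peer C": 84.0, "Peer D": 88.0}

# Hypothetical breakdown by demographic group, most recent year.
by_group = {"First-generation": 82.0, "Pell-eligible": 83.5, "All students": 85.5}

latest = our_college[-1]
print(f"Retention trend {YEARS[0]}-{YEARS[-1]}: {our_college}")
print(f"Latest vs. board goal: {latest:.1f} vs. {board_goal:.1f} ({latest - board_goal:+.1f})")
print(f"Latest vs. peer median: {latest:.1f} vs. {median(peers.values()):.1f}")
for group, rate in by_group.items():
    print(f"  {group}: {rate:.1f} ({rate - latest:+.1f} vs. overall)")
```

The point is not the particular metric but the structure: a multiyear trend, a board goal, a peer comparison, and demographic disaggregation in a single view.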

In addition to data analysis provided by the institution, board members should have access to other reports comparing performance. Board members can review the IPEDS Data Feedback Report prepared for each institution by the National Center for Education Statistics. This report compares an institution’s performance on important factors (graduation rates, for example) to that of peer colleges selected by the institution. It’s a treasure trove of information often overlooked by boards.

For boards wishing to assess the quality of their institution’s academic program and student achievement compared to others, there are clearly better ways to do so than relying on the academic quality ratings of any single source. U.S. News, for instance, assesses academic quality by polling academics and high school counselors, many of whom may have little to no knowledge of the hundreds of colleges they are asked to rank. And ACTA ranks institutions by whether they have a narrowly prescribed curriculum.

By contrast, the National Survey of Student Engagement (NSSE) is based on more reliable, evidence-based measures of student learning, and compares an institution with a self-selected peer group of other institutions. The NSSE’s focus is the depth of engagement of students in their education and how the institution uses its resources and organizes educational opportunities to advance student learning. Another tool used to assess educational effectiveness is the Collegiate Learning Assessment. Students complete performance tasks that challenge them to demonstrate their ability to analyze and evaluate information, solve problems, and communicate effectively. Institutions are able to compare the performance of their students to those at other colleges.

College rankings and ratings are here to stay, if for no other reason than they sell magazines and drive traffic to websites. When boards discuss their institution’s rankings, they should understand the methodology behind each one and ask whether it provides useful and reliable information about the comparative performance of the institution. Most importantly, board members should develop more reliable, less arbitrary ways to assess the performance of their own institution and compare it with others.

AUTHOR: Steven C. Bahls, JD, is in his 17th year as the president of Augustana College.
