A New Way to Gauge Student Success Rates

By Terry W. Hartle    //    Volume 27,  Number 4   //    July/August 2019

As federal policy makers once again turn their attention to reauthorization of the Higher Education Act, there is no doubt that “accountability” will be a central point of discussion. Most policy makers define accountability as institutions providing meaningful evidence that students will be better off at the end of their studies. That, in turn, means institutions answering questions such as: How many of your students graduate? Did they acquire the tools they can use to get and keep a good job? Do they have a higher income because they attended college? We can, and should, look critically at whether reducing the benefits of postsecondary education to a simple—and perhaps simplistic—set of quantifiable measures makes sense and, if not, offer alternatives. But whether we like it or not, policy makers will continue to head in this direction.

Federal efforts to calculate outcome measures are not new, but neither are they sophisticated. At present, federal data provide just six numbers by which to measure institutional performance: retention, student loan default and repayment rates, gainful employment data, graduate earnings, and graduation rates. None are entirely accurate.

Retention, or the percentage of students who finish their first year and start a second, is a data point that federal officials have historically paid little attention to, in large part because the numbers are self-reported by institutions. Student loan default rates are another point of reference available to policy makers. Under federal law, institutions must keep their student loan default rate below a certain threshold to remain eligible to participate in the federal student loan and Pell Grant programs. This number is calculated by the federal government and has been widely used since its calculation was mandated in 1990.

The assumption was that a school with a high default rate was not providing its students with an adequate education to enable them to repay their loans. But the default “trigger” that would disqualify an institution from participating is set at a relatively high level and few schools lose eligibility based on default rates. Moreover, there is some evidence that unscrupulous schools have figured out ways to artificially lower their default rate, which led Congress to modify the cohort default rate calculation in 2008.

In addition, default rates are increasingly obsolete because many borrowers who are at risk of defaulting are now placed in an “income-driven repayment” plan under which they are expected to pay a modest percentage of their income to the US Department of Education. By definition, students in income-driven repayment should never default on their loans, which means the default indicator rests on a smaller and smaller share of borrowers.

Both the Obama and Trump administrations have sought to implement an outcome measure that would tie earnings to a student’s major field of study at the school the student attended. Referred to as “gainful employment,” it’s basically an effort to track how much money, relative to their student loan debt, graduates make in each major their institution offers.
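To see what such a metric measures, consider a minimal sketch of a program-level debt-to-earnings comparison. The figures, program names, and the 10 percent threshold here are hypothetical illustrations, not the Department of Education's actual formula, which is set by regulation.

```python
# Hypothetical sketch of a program-level debt-to-earnings comparison.
# All numbers and the 0.10 threshold are illustrative, not the regulatory formula.

def debt_to_earnings(median_annual_loan_payment, median_annual_earnings):
    """Share of a typical graduate's earnings consumed by student loan payments."""
    return median_annual_loan_payment / median_annual_earnings

# Illustrative programs: (median annual loan payment, median annual earnings)
programs = {
    "Culinary Arts Certificate": (3_600, 24_000),
    "BS Nursing":                (4_200, 62_000),
}

for name, (payment, earnings) in programs.items():
    ratio = debt_to_earnings(payment, earnings)
    flag = "worth a closer look" if ratio > 0.10 else "looks manageable"
    print(f"{name}: {ratio:.0%} of earnings go to loan payments -- {flag}")
```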

As you might expect, the two administrations went about it in different ways. The Obama administration opted to apply the metric only to for-profit schools and the certificate programs at degree-granting colleges and universities, but it also proposed making it a condition of program eligibility: Schools with individual programs showing a poor return for students would find those programs ineligible for federal student aid funds. The Trump administration withdrew the Obama administration’s proposed regulations and, as of this writing, has not put forward a replacement. It plans, however, to calculate the information for every major field of study at every institution of postsecondary education but to use it only to provide students with information; it will not carry consequences for institutional or programmatic eligibility.

Sound complicated? It is. A major research university could easily have numbers reflecting student earnings in hundreds of major fields of study. Because it will include only individuals who take out student loans, the calculation for disciplines that have just a few majors will be based on a small sample. And of course there is no evidence that students and families are clamoring for this data or will be willing to spend the time to parse it. The good news, I suppose, is that all of the calculations will be made by the Department of Education by matching information on student loan recipients with their earnings data at the IRS—so at least we are not talking about a major workload increase for campus officials. Small comfort, perhaps.

Which brings us to graduation rates. In the Student Right to Know and Campus Security Act of 1990, Congress mandated that the Department of Education calculate graduation rates for all institutions of higher education. The idea was that one reasonable outcome measure was the percentage of matriculated students who completed a degree at the institution where they began their studies. To make sure that the focus was on students who intended to earn a degree, the federal government mandated that the rate be based solely on “first-time, full-time” students who enrolled in a credit-bearing degree or certificate program. Students who did not enroll in such a program were excluded, as were any students who transferred into the institution, because they were not “first-time.” Any students who transferred to another institution were deemed dropouts, even if they completed a degree somewhere else. And students had to graduate within 150 percent of the “normal time” to complete a degree: three years at a community college and six years at a four-year institution.
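To make the cohort mechanics concrete, here is a minimal sketch in Python. The field names and the tiny sample are hypothetical, not the Department of Education's actual methodology, but the filtering mirrors the definition above: only first-time, full-time, degree-seeking entrants count, transfers out stay in the denominator as non-completers, and only degrees finished within 150 percent of normal time reach the numerator.

```python
# Hypothetical sketch of the federal graduation-rate calculation.
# Field names and records are illustrative, not an official specification.

def federal_graduation_rate(students, normal_time_years):
    """Percent of first-time, full-time entrants who earn a degree at
    the same institution within 150 percent of normal time."""
    window = 1.5 * normal_time_years  # e.g., 6 years at a 4-year school
    cohort = [s for s in students
              if s["first_time"] and s["full_time"] and s["degree_seeking"]]
    completers = [s for s in cohort
                  if s["graduated_here"] and s["years_to_degree"] <= window]
    # Students who transfer out and graduate elsewhere stay in the denominator
    # but never reach the numerator -- they count as dropouts.
    return 100 * len(completers) / len(cohort) if cohort else None

sample = [
    {"first_time": True,  "full_time": True, "degree_seeking": True,
     "graduated_here": True,  "years_to_degree": 5},     # counts as a graduate
    {"first_time": True,  "full_time": True, "degree_seeking": True,
     "graduated_here": False, "years_to_degree": None},  # transferred out: a "dropout"
    {"first_time": False, "full_time": True, "degree_seeking": True,
     "graduated_here": True,  "years_to_degree": 4},     # transferred in: excluded entirely
]

print(federal_graduation_rate(sample, normal_time_years=4))  # 50.0
```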

The limitations of the federal definition—which have been part of the law since it was enacted—are well illustrated by a simple example: According to the federal government, Donald Trump, Barack Obama, Sarah Palin, and John Boehner are all college dropouts. The first three transferred schools prior to graduation, while Boehner, the former Republican Speaker of the House, took longer than six years to earn his degree.

It actually took the Department of Education five years to publish regulations to implement this requirement, and roughly five more years passed before the first graduation rates were reported. In that intervening decade, of course, American higher education changed dramatically—far more nontraditional students began to enroll—and many took longer than the normal time to graduate. In addition, the number of transfer students increased significantly, and more and more students began to enroll in multiple institutions at the same time. Thanks to these changes in enrollment patterns, graduation rates at many schools are based on a small share of the student body: often less than 10 percent, especially at open-access institutions such as community colleges.

The upshot is that this number—even though it arguably makes sense as a way to assess how well schools are serving their students—is extremely flawed.

Calculating accurate graduation rates is complicated because Congress, in 2008, banned the Department of Education from implementing a “student unit record system.” Unlike the current system, which tracks the cohort of students who start college at the beginning of an academic year, a unit record system would contain information on every single student who enrolls in postsecondary education—even those who did not seek or receive federal student aid—and it would track them throughout their postsecondary education.

Such a system would make it possible to study the course-taking patterns, changes in majors or degree programs, and transfer decisions of individual students. And it would be possible to include virtually all students in graduation rate calculations no matter how long it takes individual students to finish their degrees.

Such a system would clearly and unambiguously produce accurate graduation rates. But it would also raise serious concerns about individual privacy. That concern was voiced mostly by conservatives, but even some groups at the opposite end of the political spectrum, such as the American Civil Liberties Union, expressed objections.

Ironically, given the controversy surrounding the creation of a federal unit record system, such a system already exists, and more than 98 percent of institutions participate in it. Knowledgeable observers have long noted that it is possible to produce completion data far better than the federal rate by using the information compiled by the National Student Clearinghouse. This organization operates a national—but not federal—unit record database that gives it the capability to calculate accurate graduation rates that account for transfer students and students who take longer than 150 percent of normal time to graduate.

The clearinghouse was established in 1993 to help campus student aid administrators keep track of student eligibility for federal loans. But the clearinghouse is not a federal organization, and the data it holds belong unambiguously to the institutions that provide them. Individually identifiable information is not shared without those institutions’ and their students’ explicit permission. While this small nonprofit organization is indispensable to the operation of the federal student aid programs, it is beyond the reach of federal officials.

But thanks to a project known as SAM—the Student Achievement Measure—we have some knowledge of how transfer students and those who are still enrolled past the “normal time” impact graduation rates. SAM is a joint project supported by a number of higher education associations and run by the Association of Public and Land-grant Universities in conjunction with the American Association of State Colleges and Universities. It has been in existence for five years.

The results of including students who transfer and graduate, or who are still enrolled after 150 percent of what is considered normal time, are eye-opening. For many institutions, the more complete information fundamentally changes how the public and policy makers see them.

SAM starts with the federal graduation rate as reported by the Department of Education but also includes four additional numbers. One shows how many students are still enrolled at the school after the arbitrary 150 percent deadline. A second shows the percentage of students who transferred to another institution and graduated within 150 percent of normal time. A third shows the percentage of students still enrolled in the school to which they transferred. The final number shows the percentage of students who cannot be accounted for.

Unfortunately, we can’t simply aggregate the SAM numbers and call the result a graduation rate, because it is not fair to allow a school to count as a graduate someone who completed a degree elsewhere. But neither is it fair to hold it against a school if a student transfers and completes a degree elsewhere, as happens under current law. Nor is it fair to count students who are still enrolled as dropouts. So rather than calling an aggregate SAM number a more complete and accurate “graduation rate,” we ought to think of it as a “student success rate,” because it is a far more complete and meaningful number.

Consider, for example, the University of Tennessee at Chattanooga. According to the official federal statistic, in 2017, UT Chattanooga had a graduation rate of 45 percent—less than half of the full-time entering students who planned to earn a degree had received one within six years of their initial enrollment. But, historically, a significant number of UT Chattanooga students transfer to other institutions. Indeed, 20 percent of its entering students transferred and received a degree from another institution within six years. Eight percent more were still enrolled at their new school, while another four percent were still enrolled at UT Chattanooga and working on their degree. This means that UT Chattanooga, with a federal graduation rate of 45 percent as of 2017, has a “student success rate” of 77 percent, a full 32 percentage points higher.
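Expressed as simple arithmetic, the UT Chattanooga figures combine as follows; this is a sketch that assumes the SAM components are reported as percentages of the same entering cohort.

```python
# UT Chattanooga components as reported above, as percentages of the entering cohort.
graduated_here           = 45  # federal graduation rate: finished at UT Chattanooga
graduated_elsewhere      = 20  # transferred and earned a degree at another institution
still_enrolled_elsewhere = 8   # transferred and still working toward a degree
still_enrolled_here      = 4   # still enrolled at UT Chattanooga past the 150% window

student_success_rate = (graduated_here + graduated_elsewhere
                        + still_enrolled_elsewhere + still_enrolled_here)
print(student_success_rate)  # 77 -- versus the 45 percent federal graduation rate
```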

Eastern Michigan University shows similar gains. The school reports a federal graduation rate of 40 percent. However, when students who have graduated from another school or who remain enrolled are included, it reports a student success rate of 69 percent. The University of Southern Maine goes from a graduation rate of 33 percent to a student success rate of 63 percent. Eastern Washington University has a graduation rate of 52 percent and a student success rate of 75 percent—a full 23 percentage points higher.

Relatively few private colleges participate in SAM, but those that do show similar results. DePaul University has a graduation rate of 69 percent and a student success rate of 86 percent. Fairleigh Dickinson University has a graduation rate of 54 percent but a student success rate that reaches 81 percent.

Selective institutions that participate in SAM also show gains, though not as dramatic, because they already start with very high graduation rates. So the University of Michigan goes from 88 to 93 percent; Northeastern University, a private university, climbs from 88 to 96 percent; the University of California at Berkeley increases from 91 to 95 percent; the University of Florida goes from 88 to 96 percent; and Saint Vincent College goes from 69 to 88 percent.

A review of Table 1 provides still more examples of how SAM calculations paint a very different picture of institutional success.

The bottom line is clear and unmistakable: All institutions that participate in SAM have a higher—often much higher—student success rate than their official federal graduation rate. Most schools show a gain in the range of 20 percentage points, but increases of 30 percentage points are fairly common. Because SAM has been in place for nearly six years, it is possible to review the performance of some institutions over a somewhat longer time period. What is notable is how stable the numbers are: there is little year-to-year fluctuation. Portland State University, for example, reports a student success rate of between 70 and 76 percent each year. The University of Texas at Dallas reports a success rate of between 84 and 86 percent.

We see an even more dramatic pattern when we look at community colleges. Community colleges enroll large numbers of adult students, often with jobs and families. They may take courses for a couple of terms and then drop out for a while, only to resume their education later. And many students who are unprepared for a four-year institution will start their postsecondary career at a community college, fully intending to transfer to a four-year school without necessarily completing a degree.

For several years, the American Association of Community Colleges has compiled a database for its member institutions known as the Voluntary Framework of Accountability (VFA), which functions like SAM. It calculates graduation rates over a longer period of time (VFA allows a six-year window rather than the three-year term specified by the federal government) and factors transfers and those still enrolled into the equation.

The results are especially dramatic for community colleges. Bunker Hill Community College (Massachusetts) has a federal graduation rate of 11 percent. But add in students who earned a degree within six years, those who are still enrolled, and those who transferred to other institutions, and the school has a success rate of 63 percent, an increase of more than 50 percentage points. Howard Community College (Maryland) has a federal graduation rate of just 15 percent—but a success rate of 68 percent, a gain of 53 percentage points. Mohawk Valley Community College (New York) increases from 23 percent to 58 percent—an increase of 35 points. Salt Lake City Community College (Utah) goes from 16 percent to 58 percent—a gain of 42 percentage points.

Even though they are inaccurate, federal graduation rates are too often used to demonstrate the quality of a school (or the lack thereof). Austin Community College (ACC) in Texas learned this the hard way in 2011, when the Texas Association of Business took out a billboard noting that the school had a 4 percent graduation rate and asking if this was a good use of taxpayer funds. In fact, ACC did have a three-year graduation rate in the single digits, but fewer than 5 percent of the school’s students qualified as first-time, full-time students. When we look at the school’s graduation rate over six years and include transfer students, Austin Community College shows a success rate of 57 percent.

The problem with single numbers is that they convey an immediate impression: Anyone looking at the number can quickly decide if it’s good, bad, or somewhere in between. Consider a thermometer reading of the temperature outside: one glance tells you all you think you need to know. It’s the same with graduation rates. One number on one day tells a story, even if it’s a badly misleading one. All institutions face that problem, sadly, with the federal government’s Student Right to Know graduation rate. We can complain about the rate, but the sad fact is that it’s readily available to anyone with a computer. Far better that institutions put forward a more accurate number. Thanks to SAM and VFA, individual institutions can now do this.

Accounting for students who transfer as well as those who take longer to graduate than the federal definition allows provides a much more complete and meaningful picture of institutional success. We should be careful not to present this as a panacea—students who transfer often lose academic credits and, as a result, may spend more time and money in school. The same holds true for students who take more time to graduate.

But in the 21st century colleges and universities educate a wider range of students who have very different enrollment patterns than their parents did in the 1980s and 1990s, let alone their grandparents in the 1960s and 1970s. A federal indicator created in 1990 and unchanged since then will imperfectly reflect what happens on college campuses today.

Institutions can simply continue to accept the federal graduation rate, or they can seek ways to present a more accurate and complete picture of their student success until the federal data improve. SAM and VFA have the advantage of being straightforward, easily understandable, and more accurately representative of the student population that higher education serves in the second decade of the 21st century. If your institution is not using SAM or VFA and shouting the results from the rooftop, it probably should be.

AUTHOR: Terry W. Hartle is the senior vice president of government and public affairs for the American Council on Education.

If your institution is interested in participating in SAM contact Bryan Cook, PhD, the vice president of data and policy analysis at the Association of Public & Land-grant Universities (bcook@aplu.org). If your institution is interested in participating in VFA contact Kent Phillippe, the assistant vice president for research and student success at the American Association of Community Colleges (kphillippe@aacc.nche.edu). 
