For many years, judgments about “quality” in higher education were determined almost solely by institutional reputation, productivity, and such factors as fiscal, physical, and human resources. Regional accreditors, charged with examining the adequacy of public and independent institutions alike, looked mostly at the overall level of institutional resources and at internal shared-governance processes. Over the past three decades, however, interest on the part of external stakeholders in the actual academic performance of colleges and universities has steadily risen.
There are a number of reasons for the increased concern:
- A growing atmosphere of accountability in higher education, with an emphasis on student learning outcomes;
- Increased competition in the higher education marketplace, an environment that puts a premium on visible evidence of academic performance; and
- The constrained fiscal conditions under which most colleges and universities operate today—a context that puts a premium on sound and evidence-based academic management practices as much as it does on fiscal discipline.
Such trends have created new and heightened responsibilities for board members of colleges and universities.
Rising Calls for Accountability
Beginning in the mid-1980s, state policy makers grew concerned about the outcomes of higher education in relation to its costs. Educational quality was already on the minds of governors and state legislators because of “A Nation at Risk,” the 1983 report commissioned by the Reagan administration that warned of declining learning standards in elementary and secondary schools. In that climate, the National Governors Association launched an initiative in 1986 titled “Time for Results,” extending a call to examine the quality of collegiate learning—a call that continues to reverberate today.
An important result of those changes was the birth of a nationwide assessment movement in higher education that has stimulated systematic investigations of student-learning outcomes at growing numbers of colleges and universities. Although the states were at first the main drivers of that effort, the push to remain internationally competitive has lately driven the federal government, acting through regional accrediting organizations, to step up its engagement with quality and pay greater attention to improving graduation rates in higher education.
In the early 2000s, the rising interest in the quality of higher education was joined by a growing national imperative, fueled by a concern that the United States was no longer unquestionably the world leader in higher education. According to figures compiled by the Organisation for Economic Co-operation and Development (OECD), America slipped to 15th place in the world in the proportion of young adults (aged 25–34) earning a postsecondary credential. The Obama administration’s recently announced goal of achieving a population postsecondary attainment rate of 60 percent by 2020 is a response to this condition. But the administration also recognizes that it does the United States no good to reach that rate if the degrees its colleges and universities grant are of substandard quality.
Clearly, improving the quality of learning outcomes is integral to achieving the United States’ goals. Operating in a global environment means that the nation needs to maintain competitiveness with respect to both the numbers and the quality of degrees produced.
The Role of the States
The explicit interest of states in higher education quality embraces more than just public institutions, and it is driven by three factors:
- First, state governments are “owner-operators” of public colleges and universities, which they directly fund and oversee. As such, state policy makers are fundamentally interested in cost-effectiveness and return on investment. With respect to outcomes, that means that policy makers want to be assured that graduates have reached acceptable levels of academic performance. But they are also concerned about such matters as student retention and the time it takes students to earn their degrees because those factors are assumed to be related to efficiency as well.
- Second, many states provide substantial scholarship support that allows students to attend independent as well as public institutions. Acting in this role, a state’s primary concern with respect to academic-quality assurance is that students obtain a credential of value—that is, one with which graduates are satisfied and that has a payoff in the marketplace for employment.
- Finally, in their roles as keepers of the public interest, states are concerned about issues including economic development, civic participation, and the overall quality of life of their citizens. Accordingly, they are interested in related dimensions of quality in higher education, such as college and university contributions to economic development in the form of well-prepared graduates, contributions to knowledge consistent with state need, and institutional responsiveness to regional and community needs. These elements of quality, of course, can be manifest in both public and independent institutions.
While these basic interests in academic accountability are common across the 50 states, their expression differs. For example, a few states require students at public institutions to pass standardized examinations, to participate in statewide surveys about what they learned and experienced, or both. But far more states require public institutions to report regularly on student outcomes using institution-defined criteria and assessment methods. And almost all states include graduation-rate data as part of their performance-indicator systems for public higher education. Some states are beginning to include independent institutions in these reports as well, using data on student progress collected through those institutions’ participation in state-funded student-aid programs.
Increasingly, however, institutional accrediting organizations are displacing states as the primary actors in quality assurance for higher education. Accreditation is a nominally voluntary process that began about a century ago as a means for colleges and universities to recognize and accept one another’s credits and credentials. To remain accredited, institutions undergo a comprehensive review every five to eight years that involves preparation of a self-study, one or more multiday visits by a team of peer reviewers, and a review-and-report process that notes institutional strengths and areas for improvement and renders a judgment on the institution’s continuing accreditation status.
The “teeth” in accreditation lie in the fact that institutions must remain accredited to continue to participate in the U.S. Department of Education’s extensive financial-aid programs, which include both need-based aid and low-interest student loans. Most colleges and universities participate heavily in both programs, which provide institutions with a significant amount of tuition revenue. This link between accreditation and federal funds exists because the federal government has essentially deputized accrediting organizations to review institutional quality on its behalf, in lieu of creating an extensive and expensive federal accountability process for higher education.
To ensure that they are doing what the federal government wants, accrediting organizations are themselves periodically reviewed by the Education Department and officially recognized as gatekeepers for federal funds. And since about 1990, one of the most prominent conditions for continuing recognition is a requirement that accrediting organizations emphasize the assessment of student learning in their reviews of institutions. The result has been increased attention to learning outcomes by accrediting teams when they visit campuses. This greater emphasis is one of the most important reasons why board members should be aware of their institution’s activities in assessing student-learning outcomes and of how faculty and staff are using assessment results to improve teaching and learning.
Accountability demands on colleges and universities have grown markedly over the past decade, and mounting evidence suggests that those demands will only continue to increase. In 2007, the U.S. Department of Education began pressing recognized accreditors to examine institutions more rigorously on the quality of their learning outcomes. That pressure came in the wake of recommendations by the Secretary’s Commission on the Future of Higher Education (commonly known as the “Spellings Commission”), issued in 2006. While the commission stopped short of suggesting that the federal government require accreditors to use standardized tests of student learning outcomes, it did encourage institutions to adopt assessment programs that would allow the competitive performance of their graduates to be determined, and it urged accreditors to require such an approach. As a result, accreditors will increasingly look not only at the adequacy of an institution’s assessment process, but also at what those assessments reveal about the actual levels of learning being achieved against available benchmarks.
These developments are harbingers of an institutional operating environment in which accountability for academic quality is playing an ever larger part. Board members need to know about the growing salience of accountability as a driver of demands for evidence of academic quality in order to ensure that their institutions are in a position to respond.
Increased Competition in the Higher Education Marketplace
For most public institutions, state funding has become a steadily diminishing share of revenues. Public institutions have been compensating for the shortfalls by raising tuition—a tactic that most independent institutions also depend on to sustain their revenues. This state of affairs means that maintaining enrollment is a critical concern for all colleges and universities today.
But most institutions are not interested in simply “maintaining enrollment”—rather, they want to attract a particular kind of student body. That desire has fostered an increasingly competitive environment, as growing numbers of public and independent institutions try to attract the best (or most suitable) students available in their recruitment pools, as well as those willing and able to pay the full cost of tuition. Indeed, for many institutions, the traditional distinctions between public and independent institutions have disappeared. Both tend to recruit from the same markets, and both increasingly use mechanisms such as institutional aid to shape their enrollments. In such an environment, evidence of an institution’s superior performance with respect to what its graduates know and can do is a valuable lever in attracting superior applicants.
A Shift Toward Evidence-Based Management
A final factor stimulating the focus on academic quality has been the rise of new approaches to managing the curriculum and the teaching and learning process. In adopting such techniques—derived, in part, from corporate practice—colleges and universities have been responding both to increased competition and to the recognition that improved management of academic resources can help them achieve more on a fixed resource base without sacrificing quality.
Institutions originally worked to improve their information resources to generate the statistics needed to respond to growing accountability demands. But much of the attraction of developing even more powerful information resources came from a different source: evidence-based management techniques, such as Total Quality Management (TQM) and Continuous Quality Improvement (CQI), which emerged in business and manufacturing in the 1980s and 1990s.
Many board members who work in the business and professional community are familiar with approaches like TQM and CQI and, more important, with the principles of evidence-based management that lie behind them. Where appropriate, those board members should ask administrators whether and how they apply such principles.
Colleges and universities first began applying quality-management techniques drawn from business in areas where they seemed most suitable, such as the operation and maintenance of the physical plant, personnel management, financial services, and procurement. Among the most commonly used techniques were “mapping” standardized processes (such as cutting a reimbursement check) for the purpose of streamlining them, statistical process control (in areas such as purchasing) to ensure reliable, consistent service, and outsourcing functions such as food services and computing.
More recently, however, some of those same techniques have been fruitfully applied to processes related to teaching and learning by focusing on achieving a better understanding of the academic “production function”—that is, how students flow through a set of courses in a particular curriculum, what they experience, and the outcomes they achieve. Of particular importance is improving course sequencing so that students are immediately able to apply what they have learned in the appropriate settings. The effectiveness of those connections can then be monitored by looking at how students perform in subsequent courses in relation to what they experienced and how they performed in previous (prerequisite) courses in the same sequence. Such techniques have enabled much more effective and coherent learning experiences and have proved to be especially applicable in fields such as mathematics and in remedial coursework, in which it is possible to specify and teach to concrete learning outcomes. They also have proved particularly important in the growing arena of technology-mediated instructional delivery.
Colleges and universities are, furthermore, using the principles of evidence-based management to make decisions about the overall shape of the institution’s academic offerings—that is, what kinds of programs to offer in what fields and at what levels. Increasingly, such decisions are based in part on careful market research and needs analysis to determine the nature and extent of demand and to establish pricing policies.
Evaluation of current patterns of enrollment, retention and completion, assessment results, and job or graduate-school placement data can reveal whether the institution’s current array of programs is optimal. Those evaluations may then, in turn, lead to evidence-based decisions about whether to expand capacity in a given program, scale it back, or eliminate it altogether. While academic administrators have always had to make such decisions, they now increasingly base them on concrete evidence, using a growing array of indicators of program need and performance.
It is important for board members to recognize that there are limits to the applicability of quality-management techniques in academic settings. Teaching and learning are not the same as making widgets. And just because board members are familiar with quality-management applications in business settings does not mean that they can commend them to academic leaders without qualification. At the same time, it is important for academic leaders to become aware of the appropriate potential applications of quality-management principles, and the board’s questions may be a good stimulus.
In sum, these three conditions of doing business in the academy today—escalating accountability demands, increased competitiveness in the market for students, and the development of evidence-based management techniques to help deal with fiscal realities—point collectively to a growing need for new kinds of information about academic quality. Board members should realize that these conditions are not going to recede any time soon, but are instead permanent features of the higher education landscape that are shaping institutional behavior in important ways. Consequently, boards need to advocate for having the right kinds of evidence-gathering and quality-management systems in place in the realm of teaching and learning, as much as they now advocate for new sources of revenues and greater operating efficiencies in the institution’s nonacademic functions.
To succeed in this new reality, colleges and universities can no more do without a systematic program of student-outcomes assessment than they could do without a development office. And boards, which have ultimate oversight responsibility, must ensure that such a systematic program is in place.
Do Boards Spend Enough Time on Student Learning?
Nothing is more central to higher education than student learning and academic quality. Yet according to an AGB survey, 62 percent of boards report that they do not spend enough time on student learning, while none report spending too much; competing priorities and a lack of time were the reasons cited. Moreover, one in five board members say they do not think oversight of educational quality should be the board’s role—a surprisingly high percentage, given the mission of institutions and the fiduciary role of boards.
Boards must be responsible for the oversight of student learning and educational quality, just as they are for other fiduciary matters, such as an institution’s financial health. Ideally, boards should look for evidence of student learning and for indications that students are being taught through pedagogies known to foster learning. It is important to note that boards should not overstep into the work of the faculty: the faculty is responsible for the curriculum, and the board’s role is to remind it of that responsibility.