Artificial Intelligence and the Future of Higher Education, Part 2

By David Tobenkin  //  Volume 32, Number 3  //  May/June 2024
Takeaways

  • Higher education institutions must embrace a posture toward AI that makes sense for them given their goals and resources, but they should consider stretching to do more.
  • At a minimum, there are AI risks that must be addressed immediately.
  • AI models can help higher ed institutions perform assessments of their AI readiness.
  • Presentations to boards on AI are urgently needed and should be carefully curated.
  • Strategic partnerships can help boards implement AI initiatives more quickly.

This is the second of a two-part series on the impact of artificial intelligence (AI) on colleges and universities. The first part of this series, “Artificial Intelligence and the Future of Higher Education,” published in the January/February 2024 issue of Trusteeship, examined the increasing prevalence of AI applications in higher education instruction, research, and institutional operations. It noted these AI impacts on higher ed institutions:

  • Generative AI tools that allow non-technical users to, for example, prepare essays
  • Machine learning AI tools that can help researchers harness the power of information technology for their research
  • The use of AI in a myriad of institutional functions, such as chatbots that communicate with current and prospective students, AI tools that vet prospective hires, and AI screening tools for medical imaging in university health systems

But it also noted a variety of risks posed by AI usage, such as inaccurate results, compromise of confidential and proprietary data, discrimination in data outcomes, and plagiarism. The story focused on institutions that are among the leaders in analyzing the effects of AI and supporting its use, such as the University of Michigan (UMich), the University of Southern California (USC), California State University, the University at Albany, the University of Toronto, and the Université de Montréal.

This story, Part 2, builds on Part 1 by discussing what boards should be doing to realize AI’s potential and address its challenges at their institutions.

Determining whether and how boards should intervene to help steer AI use at their institutions is proving to be no easy matter, say higher ed leaders, stakeholders, and consultants interviewed. In addition to the speed with which AI is evolving, the broad nature of AI impacts and challenges across university campuses poses a challenge to board governance models, says Angel Mendez, a long-time tech executive and board member, a trustee at Lafayette College since 2016, a former AGB trustee, and the current executive chairman of an AI-centered company.

“At Lafayette, we are going to move to institute some form of governance for AI,” Mendez says. “The question on the table is whether we do so as part of our current process for technology oversight or whether we create something separate. AI touches curricular matters, administrative opportunities, even student life; in fact, it will live in every element of the institution that the board of trustees has to mind. The question is, how do we govern AI effectively and with agility? We haven’t quite invented the next specific governance model yet, but we are looking at it very closely, starting through the eyes of the faculty, where a provost-led effort is already establishing guardrails. We will have to complement that effort through multifunctional additions to these initial working groups. More importantly, we have to look at AI as part of our strategic planning process, which is examining the opportunities that AI could address.”

Moving in a way that is effective yet inclusive on such a complex challenge is not easy. “I’m very cautious to simplify the governance challenge and say, ‘oh, yeah, govern AI on an IT committee,’ which is a common path among higher ed institutions,” Mendez says. “We all respect that there’s a lot we don’t know and [we] are clear on how change happens in a college like Lafayette. You need to bring a lot of people along before you declare specific do’s and don’ts and you have to respect shared governance, working through the complementary committees of the faculty and the board.”

And then there is the matter of the actions the board actually takes. Without great care, board actions on AI could, for example, inadvertently and adversely affect older forms of AI, such as machine learning, that have been used in research functions for decades, Mendez notes. “So I’m hoping that [the Lafayette board] can completely design and embed [an AI approach] into strategic planning and have a different governance model in place by May. It would be great if we could start the next academic year with those enhancements in place,” Mendez says.

David Harris, president of Union College in Schenectady, New York, says traditional governance models move too slowly to be useful for adjusting to AI. “We don’t have five years to have committees and task forces and more reports to figure out what to do with AI,” Harris says, “because if you do that, you’re forever going to be behind, given the exponential nature of how things change.”

Harris is “having conversations and working outside of traditional channels” with senior staff members and others on campus. Additionally, a presidential working group at Union College has been formed that is “examining what opportunities AI presents to students, faculty, staff, and to Union College as a whole.”

Questions Boards Should Ask About AI

  1. Given the current state of AI development, is any individual or group of individuals at our institution charged with thinking about its strategic applications?
  2. If no individual or group is currently charged with examining AI’s utility to the institution’s strategic priorities and current operations, what steps are we taking to identify and bring in that expertise?
  3. What level of execution relative to AI or advanced analytics is taking place at our institution?
  4. Based on a holistic view of AI’s utility to our institution, what specific areas or applications should be implemented first?
  5. As AI continues to evolve, should specific governing guardrails be put in place to protect the institution from potential risks? What might those be?
  6. What are the risks we see associated with AI, and which ones will be crucial when it comes to oversight?
  7. How confident are we in our institution’s ability to manage the transformative opportunities brought via AI and advanced analytics?

Stephen T. Golding is a senior consultant for AGB Consulting and the ambassador to AGB’s Council of Finance Committee Chairs.

The Need for Education

Just keeping track of AI’s impacts on higher ed institutions is an important first step for boards charting AI actions—and not an easy one. If there is a single common theme among the higher ed board members, campus stakeholders, and consultants interviewed, it is that boards need to educate themselves better on AI.

“Boards need to be aware of what is going to be driving change in their institution, and they also need to be aware that if they want to innovate in any way, they need to be looking at digital transformation because that’s the one thing that’s going to change their institution,” says Amy Hilbelink, AGB senior consultant and former campus president, Pittsburgh/Online, at South College. “One great way to do this is for the CIO to attend, and present to, the board so that the board really knows what tech is doing and knows that what they are doing is more than just keeping the computers running.”

It takes experience as a private sector board member to see just how slowly most of higher ed is moving with respect to AI, Mendez says. “In my corporate world, every board I serve has put itself through one or two major deep dives on AI just in the last six months,” Mendez says. “I’d say educate [a higher ed] board on AI a minimum of twice per year, and that’s with the assumption that there is also a specific board committee you’ve delegated to govern the space, just like you would with student life, for example.”

At Babson College in Wellesley, Massachusetts, a February presentation to the board by faculty and students included a demonstration of generative AI, a panel of undergraduate and graduate students discussing their use of AI in the classroom and in their ventures, and a role-playing game aimed at deciding whether board members preferred a moderate-use approach to AI or an all-in approach.

“What we wanted to do was create an opportunity to explore the topic in a way that was partly educational, so that as we go forward as trustees, not only do we have more information and more perspectives, but we also can think of it more in terms of our role as trustees, and what’s important and what’s relevant going forward,” says Jeffery Perry, chair of the Babson College Board of Trustees and a member of AGB’s Council of Board Chairs. Babson Senior Vice President and Chief Operating Officer Kelly Lynch says, “The board meeting really did provide an important organizing moment for the institution to take a closer look at all of the things that we are doing under the AI umbrella, and the advances that we’re making, and to aggregate them in a more comprehensive way. And it signaled to us the need to really think about the right structures and systems that will allow that kind of cohesive effort to come together to a greater extent and move forward with greater alignment.”

The AI presentation to the board featured Babson Information Technology and Management Professor Tom Davenport and Strategic Management Associate Professor Jonathan Sims providing a real-life, real-time simulation of how professors and Babson students are using generative AI, says Lynch.

In one demonstration, Sims fed all of Babson’s course listings and descriptions into a large language model and then, posing as a student interested in real estate and technology, asked which courses it would make sense to take and requested a sample schedule.

“That generated several hundred pages of course descriptions in a few seconds,” says Davenport. “That was quite impressive, I thought. There’s no way any human adviser, certainly not one that I ever had when I was in school, could do something like that.”
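The mechanics of such a demonstration are simple enough to sketch. Below is a minimal illustration in Python, assuming the OpenAI chat API; the file name, model name, and prompts are hypothetical placeholders, not details of the Babson demonstration.

```python
# Minimal sketch of an LLM-based course-advising demo along the lines of the
# one described above. Assumes the OpenAI Python SDK (pip install openai) and
# an OPENAI_API_KEY environment variable; the file name, model name, and
# prompts are hypothetical placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Load the full course catalog (titles and descriptions) as plain text.
with open("course_catalog.txt", encoding="utf-8") as f:
    catalog = f.read()

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {
            "role": "system",
            "content": "You are an academic adviser. Recommend courses using "
                       "only the catalog provided.",
        },
        {
            "role": "user",
            "content": f"Course catalog:\n{catalog}\n\n"
                       "I am a student interested in real estate and "
                       "technology. Which courses make sense for me, and what "
                       "would a sample semester schedule look like?",
        },
    ],
)
print(response.choices[0].message.content)
```

In practice, a full catalog could exceed the model’s context window, in which case only the most relevant course descriptions would typically be retrieved and supplied to the model first.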

AI Strategy: Are You an AI Leader, Follower, or Middle-of-the-Roader?

Boards will need to decide upon an AI strategy, many experts asserted. “Whenever I am asked what advice I have for trustees and board members who are interested in [generative AI], I tell them that they need to be asking their senior leadership, ‘What is our AI strategy? Do we have an AI strategy yet?’” says UMich Chief Information Officer Ravi Pendse, who has led the groundbreaking university-sponsored deployment of generative AI tools for its higher education community, its AI Services platform. “Trying to embrace something as transformative as AI without a well-thought-out strategy for how it will be implemented and supported on an ongoing basis simply will not work.” Institutions interviewed reported varying degrees of board involvement in their AI initiatives.

In some cases, AI initiatives may proceed below the level of boards. UMich executives did not seek the support of the regents for the AI Services platform initiative, UMich President Santa Ono says. “We certainly update them on the kinds of things that we do in terms of teaching and research, and they have great interest, but this was not something which they had to formally approve,” Ono says.

At USC, in contrast, the board has been substantially involved in many AI issues, says Ishwar K. Puri, USC’s senior vice president of research and innovation. “So I would say that our president has involved the Board of Trustees in all matters of computing,” Puri says. “So the board of trustees has different committees like the Risk and Audit Committee, which looks at cybersecurity, for instance, and it has an Academic Affairs Committee, which is very involved with some [AI topics].”

The optimal posture toward AI may play out differently for different institutions, those interviewed say.

Institutions that intend to drive AI change through major initiatives will have to be aware of the need to provide adequate resources to support ambitious AI and other data analytics efforts, Hilbelink says. “As an example, at one well-known institution, a [chief information officer] said he was told that they were going to hire 100 new faculty next year, which is a lot of new faculty, yet were not going to give a penny to increasing technology services, showing that they’re not necessarily taking into consideration the IT needs that would grow with that number of new faculty. So that’s a perfect example of a school not thinking towards the future.”

But many institutions are not, and will never be, on the leading edge of AI change by design, says Andrew Lounder, associate vice president of programs at AGB and a board member of Wheaton College in Massachusetts. Major initiatives to drive and steer AI on campus can represent enormous financial and reputational gambles that many universities and colleges simply cannot afford, he notes. For such institutions, it may be a perfectly legitimate approach to allow better-funded peers to be pioneers and to learn from their experiences. “Tech revolutions don’t always happen in sweeping fashion, the way that futurists predict,” Lounder says.

“It depends on the type of institution,” agrees Mendez. “What if I told you that AI can accelerate virtualizing classrooms? Well, if I’m a faculty member in a highly curated educational institution [like many small liberal arts colleges], I might say, ‘whoa, we did that in the pandemic and it sucked, do we really want to do that permanently?’ But if you’re [largely online giant] Southern New Hampshire University, you are far more interested in that capability because with the technology you create and optimize a larger audience.”

Institutions with limited resources for proactive AI endeavors can also focus those resources on investing in one or two AI initiatives that offer the biggest bang for the buck. “The vice chancellor of finance for one of our flagship institutions was discussing the need to explore AI, knowing it’s necessary, and quickly adding, ‘I don’t have the budget or staff time to do this broadly,’” says Christine Smith, a managing director at advisory CPA firm Baker Tilly, which consults with higher ed and other clients on AI implementation. “Instead, he proposed focusing on one function and assessing the costs and benefits. I found his choice to prioritize AI in facilities management and maintenance was particularly astute, as it wasn’t the most obvious area for AI applications, but it encompasses everything from energy cost savings to maintenance staff scheduling and exhibits the potential payback from capital investment.”

AI Risks to Address Now

But even for institutions that choose to go slow on AI, some AI-related imperatives must be addressed now. As noted earlier and described in detail in Part 1 of this series, AI usage poses a variety of risks, such as inaccurate results, compromise of confidential and proprietary data, discrimination in data outcomes, and plagiarism.

Of particular concern is the danger of IT bias and lack of oversight, says Shauna Ryder Diggs, a physician, former UMich board member, and AGB’s current board secretary and former board chair. As an advocate for AI in higher education, she says she also recognizes the inherent challenges. “Many people in technology—UMich leader Ravi Pendse not included, because he is exceptionally good—consistently minimize the potential problems,” Diggs says. “Many of the largest IT risks are human risks, which is a perspective that board members can lend to their institutions.”

“The board members should ask questions around the security risks posed by AI, because the risks side is reputational in the end,” Diggs says. “And for institutions—all institutions, but particularly institutions like Michigan—reputation is huge. It translates not only into the trust of parents sending their collegiates to campus but also people outside of the campus trusting the knowledge that we are discovering and presenting to the world, and the monetary value of reputation. The brand of Michigan has value, so it is very important for board members to be involved. I felt this way when I was on the board. There was usually alignment around the notion that board members ask tough questions that people inside the institution cannot always ask or do not think of as internal members of the team. It is good to have external voices at the table.”

Increasing AI equity is another enormous issue both on and off campus, Diggs says, and a first step is better AI coordination across campus: “We need everybody on board so that students can learn how to use AI properly, effectively, and responsibly. Boards can be instrumental in advocating for this coordination.”

And with respect to some areas of AI, regulation is not an option; it’s the law, with more AI regulatory requirements expected to emerge soon, notes Baker Tilly Director Jordan Anderson. Many states already have privacy regulations, AI use regulations, or both, with 17 states having enacted 29 bills “focused on regulating the design, development and use of artificial intelligence,” notes a December 2023 analysis by the Council of State Governments.1 Anderson also notes that the White House has already published an AI Bill of Rights2, which he says may serve as a framework for potential legislation and that the National Institute of Standards and Technology (NIST) has already posted an AI risk management framework that also may reflect “what future AI regulation will look like.”3 In March 2024 the European Union Parliament approved the Artificial Intelligence Act, “the world’s first major set of regulatory ground rules to govern the mediatized artificial intelligence at the forefront of tech investment,” noted CNBC.4

AI technology and its implications will also challenge boards to demonstrate that they can walk the walk as well as talk the talk when it comes to willingness to pivot and change, says Melissa Hortman, a Microsoft technology strategist for research who supports U.S. higher ed institutions and a former associate professor at an academic medical center. “Higher education is often very risk and change averse, and so a majority tend to be laggards in times of innovation,” Hortman says. “The new era of AI is all about taking risks and all about change. For example, if you develop an AI policy at your institution, that policy might need to change in two weeks because a new model or a new tool comes out. That can be very uncomfortable, but I think it will especially help if boards of trustees lead that change. To do that, they need to be aware of what’s coming, talk to their peers about what’s going on, and be willing to change. If a campus doesn’t have that sort of support for comfortability with consistent change, they’re not going to be successful in this new era of AI.” A corollary, Hortman says, is that when it comes to AI, it is important to allow for some failures and miscues.

“Higher education institution leadership needs to create space for failure,” says Hortman. “For example, if I try something out in my class using AI and I get really low scores for my teaching effectiveness, we need to make sure I am not penalized or have to go through remediation for my teaching efforts because I tried something new or different. Institutions need to build trust with faculty and space for failing fast and forward.”

Boards also may need to make room for dealing with such emerging issues and a steady AI draw upon their time, Babson’s Perry says. “From a board perspective, we fundamentally made a decision a few years ago to balance our role as a board of trustees in terms of our fiduciary roles, our strategic roles, and our generative roles,” Perry says. “Historically, if you look at most trustees and most boards, they focus a lot on the fiduciary roles and less on the strategic and generative issues. We decided that we wanted to really lean in in terms of helping the school in those areas by increasing the time we dedicate to them.”

The Case for Doing More

But beyond the imperative of addressing the risks just described, there is a strong case for higher ed institutions to stretch to do more, particularly when it comes to educating students for the jobs of the future to ensure student success, says Beverly Seay, AGB board chair, former board member at the University of Central Florida (UCF), and high-tech executive.

“AI is going to impact every single field and if our students aren’t learning how it’s impacting their field, they’re going to be behind students graduating from universities where the curriculum has incorporated AI,” Seay says. “Then industry will find that even though the students may have good skills, industry will have to teach them what they are missing and it will be a couple of years before they may be productive. So in my opinion, all students in all fields should be understanding data science, artificial intelligence, and cybersecurity. They all fit together, and they’re all impacting every aspect of our lives.”

A best-in-class approach that all institutions should be striving toward is embedding AI and data analytics education specific to the needs of students in different academic disciplines, Seay says. Even general exposure to AI will not be enough to equip students to be effective in many jobs of the future without carefully targeted and applied education, she says.

Seay calls out the AI Across the Curriculum initiative of the University of Florida (UF), which since 2020 has provided a particularly rich, diverse, and universal AI coursework experience across all 16 UF colleges, designed to reach new learners as well as upskill existing workers, with no need for a background in engineering or data science. “The university currently offers 200 AI and data science courses at the undergraduate, graduate, and professional levels, from arts to architecture, with others in developmental stages, including a nine-credit hour certificate in AI, MS degrees in applied data science and AI systems, and first- and second-year courses that introduce students to AI and how AI is used within their major,” noted a UF statement.

“UF has been building a comprehensive, inclusive model to reach new K–20 students as well as upskilling the current workforce via a micro-credential in AI. As the state’s premier land-grant institution, UF already had hundreds of existing faculty who were using AI in their teaching and research,” the statement notes. “This was expanded by hiring 100 new AI-focused faculty in 2020. Comprehensive AI research is supported with HiPerGator, one of the fastest supercomputers in higher education.…This resource is available to all faculty and students, as well as other universities and industries. In 2022, the university also established centralized leadership of AI academic efforts via the establishment of the Artificial Intelligence Academic Initiative (AI2) Center. The center coordinates the development of AI academic programs and certificates, identifies opportunities for faculty and students to engage with AI, organizes seminars and conferences, and partners with UF’s Career Connections Center, Florida’s colleges and universities, and private industry in collaborations to best train an AI-ready workforce that will contribute to our nation’s economy and security.”5

“We have hundreds of faculty who can teach students about AI at UF,” says David Reed, UF associate provost for strategic needs. “However, for faculty who want to start learning about AI, we have faculty-learning communities focused on AI. In addition, the AI2 Center has funded the development of new AI courses across campus. This support includes faculty incentives to build new courses and to support production costs.”

UCF is currently working on a similar initiative, Seay says. Priorities include helping faculty use AI in their teaching and research, and teaching students in every major about the use of AI tools in their future jobs, says UCF Provost Michael Johnson.

“Boards need to educate themselves on the current and future impacts of AI and should be asking questions like, How are students being introduced to how AI is impacting their fields of study? Are students learning enough about it so that they will be able to adapt when the tools change? Is there a center where faculty can learn how to teach students how AI will affect their field, like that of UF?” Seay says.

Assessing AI Readiness

There are models for general AI governance that can help boards examine the opportunities, risks, and needs of AI deployments in a more orderly fashion. In presentations at the AGB National Conference on Trusteeship in March and during the January meeting of the AGB Council of Finance Committee Chairs, several management consultants at Baker Tilly, including David Capitano, Christine Smith, Dave DuVarney, and Jordan Anderson, discussed AI governance. They referred to Baker Tilly’s recommended framework for AI governance for boards, its AI Readiness Framework, which has five dimensions:

  • Opportunity: an examination of the value case for why the organization will benefit from AI;
  • Data Platform: the characteristics and needs of the underlying data that train and power AI models and allow them to continue to evolve their capabilities and add knowledge, including ensuring that data are high quality, accurate, concise, consistent, understood, well-unified, and accessible;
  • IT Environment and Security: an examination of cybersecurity considerations, including how to ensure and govern the security and stability of AI models, the data used, human access, and the AI products created;
  • Risks, Privacy, and Governance: organizational risk, regulatory risk, and the compliance requirements evolving in the United States, such as state and federal AI rules and regulations on privacy standards and laws, including governance of the models and how they behave, as well as privacy concerns on campus; and
  • Change Management and Adoption: the human side of the equation, including users’ ability to use and trust AI; AI literacy, so that users understand how AI can improve their specific roles and the overall quality of the services their organizations deliver; and upskilling, with an eye to driving adoption.

“There are a lot of moving parts to make this successful and to continue to sustain and evolve AI,” says Anderson, who notes that Baker Tilly also performs assessments of client competencies in the five dimensions.
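One simple way a board or staff team could operationalize such an assessment is to score each dimension on a maturity scale and flag the weakest areas for attention. The sketch below is purely illustrative, using an assumed 1-to-5 scale and threshold of our own devising; it is not Baker Tilly’s actual assessment methodology.

```python
# Illustrative sketch: recording a self-assessment against the five
# AI-readiness dimensions described above. The 1-5 maturity scale and
# threshold are hypothetical assumptions, not Baker Tilly's methodology.
from dataclasses import dataclass

@dataclass
class Dimension:
    name: str
    score: int  # 1 = nascent ... 5 = mature (assumed scale)

assessment = [
    Dimension("Opportunity", 4),
    Dimension("Data Platform", 2),
    Dimension("IT Environment and Security", 3),
    Dimension("Risks, Privacy, and Governance", 2),
    Dimension("Change Management and Adoption", 3),
]

THRESHOLD = 3  # dimensions scoring below this get flagged for attention

for dim in assessment:
    flag = "  <-- needs attention" if dim.score < THRESHOLD else ""
    print(f"{dim.name}: {dim.score}/5{flag}")

overall = sum(d.score for d in assessment) / len(assessment)
print(f"Overall readiness: {overall:.1f}/5")
```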

AI Tools for Use by the Board

Will AI ever be good enough to help board members with the challenging strategic issues at the board level? Most queried said that remains years away, but some are convinced that AI may indeed eventually get there and cited a variety of possible board-level uses.

AI modeling sophistication may improve to the point where board members may be able to use AI to compare value propositions for different major courses of action being contemplated by the board, Hortman says. “Imagine using AI to create a table that could forecast the profits or losses of an institution for the next five years,” muses Hortman. “Microsoft’s AI tools offer a lot of potential for higher education to use in different ways to make more data-driven decisions.”
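A rudimentary version of that idea can already be sketched with ordinary data tools. The example below fits a simple linear trend, a basic machine-learning baseline, to invented historical operating results and projects five years forward; all figures are hypothetical, and a real institutional forecast would draw on far richer models and data.

```python
# Illustrative sketch of the kind of forecast imagined above: fitting a
# linear trend to an institution's past operating results and projecting
# five years ahead. All figures are invented placeholders.
import numpy as np

years = np.array([2019, 2020, 2021, 2022, 2023])
net_results_musd = np.array([12.0, -3.5, 4.2, 6.8, 9.1])  # hypothetical, $M

# Least-squares linear trend; polyfit returns [slope, intercept].
slope, intercept = np.polyfit(years, net_results_musd, 1)

print("Year  Projected net result ($M)")
for year in range(2024, 2029):
    print(f"{year}  {slope * year + intercept:10.1f}")
```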

David Morales, senior vice president of technology at online, nonprofit Western Governors University, notes that one area where AI could be useful is helping to manage projects and keep them within time, scope, and budget. “That will help [us] make decisions as to whether we need to inject more people or hire contractors to help us achieve what is needed,” says Morales. “In the future, it’s my hope that I’ll be able to talk to a data model and say, ‘I need to know, what is my success ratio for achieving this project based upon the number of people and scope?’”

Raising capital is an area “where AI could shine,” according to Mendez, whose institution (Lafayette) has 30,000 alumni. “It would help to better understand their financial capacity, so that the development office can be more effective and efficient at leveraging that capacity in support of our upcoming capital campaign,” says Mendez.

Diggs says she looks forward to AI uses that will help explore more human challenges on campus, such as improving board awareness of different thought processes in different departments throughout campus. “To me, the next step of AI is trying to help us figure out people and better assess them, because most things that go wrong, go wrong because of people,” she says.

The Need for Stakeholder Groups

It is important that boards listen to other AI stakeholders at institutions, including students. Hortman says one notable aspect about the digital progression in AI at higher ed institutions is how developments are proceeding from the ground up as well as from the top down. “Historically, digital transformation has been a top-down effort where leadership says, ‘here’s our five-year plan for the institution and the key performance indicators, now go do it,’” she says. “However, this new era of AI has kind of flipped that digital transformation on its head to a more groundswell effort where innovation is starting with faculty, staff, and students. We are seeing successful institutions leading from both the top down as well as the bottom up.”

University at Albany Provost Carol Kim says the institution’s AI journey has involved several years of conversations with stakeholders at every level of the university—from the president’s cabinet to academic department chairs to budget officers. “In the last few years, we’ve discussed what we really need to do to best prepare our students,” she says. “We were trying to build a campus-wide discussion. That’s not just on the academic side; the executive council includes our vice president for budget and finance, student affairs, information technology services, and, for my area, libraries. Everyone who we needed was brought into this discussion.”

Stakeholder involvement can also be facilitated by creating a public portal dedicated to AI, as was implemented at Union College, a small liberal arts institution in Schenectady, New York. To chronicle and share its commitment to AI, the college created a space on its website that highlights some of the ways AI technologies are informing its teaching and research and includes policies related to responsible use of generative AI chatbots by students and employees. It contains teaching and learning resources for faculty, examples of faculty and student research and instructional uses of AI, a policy on responsible and acceptable use of generative AI by Union College employees, and a Q&A for students on responsible use of generative AI tools.

Involving students and recent alumni can also be important. Nancy Fortin, who graduated from Davidson College in May with a bachelor’s degree in political science, served as a research coordinator of that institution’s College Crisis Initiative, an effort headed by education and public policy professor Chris Marsicano that deals with pressing policy issues, of which AI is one. She also was a part of an ongoing AI Community of Practice Group started in August. “I would not say that the group has the ability to say, ‘this is our policy’ so much as it is people from across the campus—people from different departments, organizations, and places on campus—coming together and bringing together their expertise under the awareness that AI is becoming increasingly used at higher education institutions by students, faculty, and staff alike,” Fortin says. “There just hadn’t been any centralized conversation around AI at Davidson prior to that. So this has been a nice way for all of us to come together [which the group has done once a month with meetings and through a Slack channel in which members send each other articles].”

“I can’t imagine a committee on AI that doesn’t have former students like Nancy right at the table with faculty who have 30 years in the field, and administrators,” Marsicano says.

Back to the Future

Putting AI into the context of past digital technological developments can also help, says Lounder. Many of the AI governance issues are similar to those faced during the internet boom of the mid-1990s, he notes. In fact, current board leaders may benefit from reaching out to former and emeritus trustees from that era (or key staff) who can share what the board experienced back then, Lounder says.

Diggs recalls her own experience as a high school student when the Internet boom launched and how its trajectory drove home the need for a combination of student engagement, tools, and a community of fellow new-technology acolytes. “At that time, I was a librarian at my high school, and they needed people who could help students shift from using the card catalog to using computers to do searches,” Diggs says. “[AI] is essentially the phone. We just had to teach people how to do a search. AI for me is, ‘how do you teach everyone to do a search and process trillions of bytes of information to get the specific results that you want?’ So to me, this is exactly the same thing. It’s about educating everyone on how to use it properly,” says Diggs. “I was lucky to have parents who were university professors who bought me a personal computer, making me one of the only students in my class to have one. But then I came to UMich and encountered computer tools vastly more powerful than the computer I had and, more importantly, students and faculty actually using these tools in all kinds of different ways.”

Strategic Partnerships in AI

Partnering with industry may also be a way of expediting AI sophistication at higher education institutions, says Steven Gonzales, chancellor of the Maricopa County Community College District (MCCCD) in the Phoenix–Tempe area, a hub for high-tech.

“I remember sitting in on a presentation by the CEO of Barrow Neurological Institute here in Phoenix,” Gonzales says. “At that time, they were using AI to decipher radiology film to look at X-rays and things. And they were able to rely on that with what I believe was around 99 percent reliability and accuracy—I’m not so sure that humans could reach that level of accuracy and be able to do it very quickly. I share that story because I think we’re going to need to work closely with industry. If you’re going to be a nurse, you’re going to need to know as a nurse, how is AI used in your daily workspace? If you’re a cinematographer, how is AI used in your workspace? If you’re someone who works on these new automobiles, what do you need to know about how AI affects that?”

MCCCD is pursuing ambitious AI educational ventures through partnerships with Intel and other tech leaders. That includes expanding existing AI certificate and associate programs and, in 2025, introducing an AI bachelor’s degree. That has forced the board to become more versed in AI issues and implications, says Marie Sullivan, president of the board. “Before, I think we were a little bit more in the dark and we were learning as a community about the potential impact and breadth of AI,” Sullivan says. “So we had our own learning that we have to do. And now we’re much more conscientious about AI generally.”

In a similar vein, Seay urges higher ed institutions to assemble stakeholders and resources that will allow them to think and act bigger on AI than might otherwise be the case. The UF initiative, for example, came about through a $25 million donation from UF alumnus and NVIDIA co-founder Chris Malachowsky; a $25 million contribution by NVIDIA of hardware, software, training and services; and a $20 million investment by UF.

Additionally, IT giants such as Microsoft are actively forging relationships with higher education institutions, which can help institutions deploy generative AI securely at scale. “Microsoft has many practitioners who collaborate closely with higher education to connect Microsoft solutions and end users,” Hortman says. “Microsoft has invested in OpenAI to accelerate AI breakthroughs, which has given Microsoft the opportunity to do two things. First, Microsoft is able to make all of the OpenAI models available in a secure way through Azure, Microsoft’s cloud. Customers can then leverage the models for research and developing their own applications. For example, you could create an admissions chatbot for any prospective students for whom English isn’t their first language. It could help new students find important information or understand the steps through the admissions process. Second, we implement the OpenAI models within the Microsoft products. Microsoft has built AI Copilots into every experience across the platform.”
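As a concrete illustration of the first point, here is a minimal sketch of how campus developers might build such an admissions chatbot against an Azure-hosted OpenAI model. The endpoint, deployment name, and API version are hypothetical placeholders; this is an assumed pattern, not Microsoft-provided code.

```python
# Minimal sketch of an admissions chatbot backed by an Azure-hosted OpenAI
# model, along the lines described above. Endpoint, deployment name, and API
# version are hypothetical placeholders. Assumes the OpenAI Python SDK
# (pip install openai) and an AZURE_OPENAI_API_KEY environment variable.
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://YOUR-RESOURCE.openai.azure.com",  # placeholder
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",  # placeholder API version
)

def answer_admissions_question(question: str) -> str:
    """Answer a prospective student's question in the language they used."""
    response = client.chat.completions.create(
        model="admissions-gpt",  # your Azure deployment name (placeholder)
        messages=[
            {
                "role": "system",
                "content": "You are an admissions assistant. Explain "
                           "application steps clearly, and reply in the same "
                           "language the student writes in.",
            },
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(answer_admissions_question(
        "¿Cuáles son los pasos para solicitar admisión?"))
```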

Whether higher education boards partner with funders to help bring AI to their institutions or work with the technology industry directly, involving all campus stakeholders—employees, faculty, students, and staff as well as the board and senior institutional leaders—will be crucial. Managing the innovations that AI can bring to higher education will be no small feat. But working together to find the best ways to employ AI effectively and ethically, while mitigating institutional risks, will be one of boards’ greatest responsibilities in the coming years, and one that will determine their institutions’ success with AI innovation.

David Tobenkin is an award-winning freelance business, education, and technology writer, editor, and speaker based in the greater Washington, D.C. area. He is an ECMC Foundation Higher Education Media Fellow and ECMC grant recipient who has been published in Wired, the Los Angeles Times, and numerous other publications. He was a featured speaker at the 2023 Annual Conference of the Council for Higher Education Accreditation at a session dedicated to his two-part series for Trusteeship, “The College of the Future.”


1. Rachel Wright, “Artificial Intelligence in the States: Emerging Legislation,” The Council of State Governments, December 6, 2023, https://www.csg.org/2023/12/06/artificial-intelligence-in-the-states-emerging-legislation/.

2. The White House, “Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People,” October 4, 2022, https://www.whitehouse.gov/ostp/ai-bill-of-rights/.

3. National Institute of Standards and Technology, “AI Risk Management Framework,” January 26, 2023, https://www.nist.gov/itl/ai-risk-management-framework.

4. Karen Gilchrist and Ruxandra Iordache, “World’s First Major Act to Regulate AI Passed by European Lawmakers,” CNBC, March 13, 2024, https://www.cnbc.com/2024/03/13/european-lawmakers-endorse-worlds-first-major-act-to-regulate-ai.html.

5. University of Florida, “Building an AI University,” https://ai.ufl.edu/about/.
