Artificial Intelligence and the Future of Higher Education, Part 1

By David Tobenkin // Volume 32, Number 1 // January/February 2024
Takeaways

  • AI is affecting all higher education institutions (HEIs) significantly and will force them to take actions to maximize good effects and minimize deleterious ones.
  • AI will sharpen the focus on the IT function at HEIs and will require greater attention and investment in the IT function to ensure institutions can keep up with AI developments.
  • Access to key AI functions will become a significant issue, with the danger of a digital divide between better-funded and more proactive HEIs and those less so, as well as between stakeholders within individual institutions.
  • Partnerships with industry and other HEIs can help facilitate AI progress.

Editor’s Note: This article will be continued in the May/June 2024 issue of Trusteeship magazine, examining the role of boards in AI strategy and governance.

The higher education artificial intelligence (AI) tsunami broke at the University of Michigan on August 21, 2023, at 11:04 a.m.

On that day, University of Michigan (U-M) Vice President for Information Technology and Chief Information Officer Ravi Pendse sent an email to members of the U-M community regarding the availability of a new suite of generative artificial intelligence tools, the AI Services platform, for the entire U-M community of more than 100,000. It was the launch of one of the most audacious approaches to AI by any higher education institution (HEI) in North America. Not only would U-M allow the use of generative AI applications that can, for example, research and write an essay upon the command of a user—still a matter of contention at many institutions—but it also would go to great lengths to ensure equitable access to generative AI tools on campus by making free AI tools available to all students, faculty, and staff.

The rollout was an exercise in scale. Within hours, as Pendse and his information and technology services team kept a wary eye on their servers, thousands of community members began using the AI tools. The release went off without a hitch—no servers crashed—and within a few weeks, U-M’s AI Services platform was averaging about 16,000 unique users per day.

“At the core of our new GenAI services is the desire to achieve more than just technological advancement. We aim to create a holistic experience that benefits our entire university community. These services will be a game changer for how colleges use GenAI going forward, and I am excited that U-M is leading the way when it comes to the responsible and equitable use of this technology.”

—Ravi Pendse, University of Michigan, Vice President for Information Technology and Chief Information Officer

“This platform is designed to give all our students, faculty, and staff, across all campuses, the opportunity to explore and harness the vast potential of generative AI tools. Inclusivity and equity are fundamental to our mission,” Pendse says.

“I think we are the first university to fully embrace generative AI and to make it available to the entire university community,” says Santa Ono, U-M president.

“[U-M] showed leadership by fully embracing AI and making very powerful AI tools accessible to faculty, staff, and students on their campus,” says Melissa Hortman, a Microsoft technology strategist for research who supports U.S. HEIs with Microsoft solutions. “They don’t see AI as a tool that’s only for research or university operations. It’s part of their institutional strategy to integrate it into their campus with the vision of, ‘we want to make it available to everyone; we want everyone to use it and try it out.’ That’s really the dream that they achieved.”

A race is on among North American HEIs to capitalize on the opportunities provided by one of the latest frontiers of technology—and to protect themselves from its potential downsides. Many of those interviewed say the race can have many winners, but it is also likely to have clear losers: the institutions that try to keep AI out or at bay. That approach is almost certain to fail, leaving them playing catch-up with more proactive peers who are harnessing positive uses of AI and addressing its downsides through deliberate policies and safeguards. This story explores how the great AI deployment is playing out to date.

The Promise and Challenge of AI

Those interviewed say AI will certainly become a vast accessory to higher education instruction, research, and operations. This is particularly the case for generative AI, the most revolutionary variety of AI, which creates new written, visual, or auditory content based upon prompts or existing data. Think of it as a new internet—something that will open up new avenues of research as AI copilots help students marshal data and explore expression with it in nearly every discipline, from science, technology, engineering, and math (STEM) to the liberal arts. It will help researchers access widespread data and identify key patterns. And it will help universities and their staffs run a host of internal university functions more comprehensively and efficiently.

But AI also poses many risks and challenges: AI-generated errors that may become harder and harder to trace as AI use, complexity, and sophistication all increase; plagiarism; violations of privacy; bias creeping into processes; the threat of declining human performance if AI users become too machine-reliant; and rising costs as AI usage mushrooms.

Perhaps most significant, a substantial number of educators and consultants interviewed think that as AI’s functionality expands, it could eventually pose an existential threat to many HEI functions by rendering them obsolete. “If you keep leveraging AI and other digital trends to provide highly competitive, lower-cost alternatives to traditional [HEI instruction] and research, you start to see the economic model of traditional institutions really gets messed up—their primacy starts to crumble,” says Scott Pulsipher, president of the online, nonprofit Western Governors University (WGU).

At HEIs where no strong central stand has been taken on the use of generative AI, determinations over the proper use and reach of AI are sometimes playing out department by department and even professor by professor, given that many institutions are allowing faculty discretion over whether to encourage, regulate, limit, or ban generative AI in their classrooms. Chris Marsicano, an education and public policy professor at Davidson College, a small liberal arts institution in North Carolina, says that he and some other Davidson professors have welcomed the use of ChatGPT and similar generative AI applications in their classrooms, taking time to counsel students on appropriate versus inappropriate uses, while other Davidson professors have tried to discourage their use.

“Some faculty are outright banning it in the classroom, saying, ‘we’re going back to the days of blue books and writing with a number two pencil during exams,’” Marsicano says. “Other faculty, like me, are really leaning in and using these tools to improve student writing or to help build efficiencies in our everyday work. With new technology, it takes a lot of time for people to identify what are the appropriate uses for this and what are not. We’re still in this sort of Wild West mode at Davidson and at many other institutions throughout the country trying to figure that out.”

Many say it is important that boards and C-suite executives step up to the plate now to help encourage, expedite, and channel the AI wave toward best outcomes for institutions’ stakeholders—and that many are failing to do so. John Barnshaw, vice president for education success at labor market data analytics company Lightcast, describes a September meeting of large employers and leaders of large HEIs discussing the educational and business implications of AI. With respect to the university participants, what struck Barnshaw was their focus on second-tier AI issues while failing to consider more fundamental implications. “These academic leaders said, ‘we want you to know, we’re taking this very, very seriously in terms of academic integrity,’” Barnshaw says. “From there, it turned into a conversation about some faculty letting students use ChatGPT in the classroom, while others did not. I later spoke to some of the business leaders in the room and they were just as surprised that we were discussing the most disruptive technology in higher ed in a generation, something that’s going to revolutionize everything from admissions to the college experience, to the skills that you need to know to be marketable—which these business leaders understand because they work in the labor force and are the people doing the hiring—yet the higher education leaders did not seem to realize the most profound implications of generative AI.”

“Initially we had concerns about plagiarism and what we are going to do about it but pretty quickly, I’m realizing, as are others, that that is not the story,” says David Harris, president of Union College, a small liberal arts college in Schenectady, N.Y. “The story is that this is now going to be something that’s part of our world and is going to improve exponentially. So, we now have to think pretty quickly about how we actually integrate this into a Union College experience, as opposed to how we inoculate ourselves with a public wall so that it can’t come in.”

What is AI?

AI is an information technology (IT) specialty that uses computer systems to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages, according to Oxford Languages.

AI has been discussed and theorized about since the 1950s. Part of those discussions included “generative AI,” a technology that broke into the public consciousness in November 2022, when software developer OpenAI released ChatGPT, a powerful though imperfect generative AI application made available to the general public in an easy-to-use form, free at a basic level. Suddenly AI was being used by the general public to write essays, research subjects, produce reports, generate bedtime stories for kids, and compose odes to loved ones.

Also enabling current AI developments are increases in computing power, notes Ishwar K. Puri, senior vice president of research and innovation at the University of Southern California (USC). “In many ways, today’s remarkable advances in AI and [Gen]AI are not due to some fundamental improvements in the AI methodology, but they are due to the fact that we have much more computer power today than we had three years ago,” Puri says. “[Chipmakers are] making sort of super-super-computers that can compute very complex, large problems in a reasonable amount of time.”

Pendse says that AI should be thought of as a copilot that enhances productivity but does not obviate the need for a human. So far, there are often shortcomings in what AI produces, say some of those in the educational trenches where AI is most directly applicable, such as computer scientists who code the applications that underpin much of the online world of work. AI results are often underwhelming compared to those of other sophisticated tools that IT professionals use. “If you ask ChatGPT to ‘write me Python code to make me a 2048 tile game’ or ‘create a Wordle game’ or something like that, AI will do something, but how well it does is dependent on how well it’s trained,” says James Blum, a professor of data science at the University of North Carolina–Wilmington (UNCW). “It’s really well trained to write Python code but not really well trained to write code in most other computing languages. It is becoming more important, but in a sense it is not really a new thing. It’s just a fancier version of an old thing.”
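To make Blum’s example concrete, the sketch below shows the kind of Python that a prompt such as “create a Wordle game” typically yields: a word list, a scoring function, and a guess loop. It is illustrative only, written by hand rather than produced by any particular model, and a real model’s output would vary with its training.

```python
import random

WORDS = ["crane", "slate", "pride", "gloat", "chimp"]  # tiny stand-in word list

def score_guess(guess: str, answer: str) -> str:
    """Return Wordle-style feedback: G = right spot, Y = in word, _ = absent."""
    feedback = ["_"] * 5
    remaining = list(answer)
    for i, ch in enumerate(guess):      # first pass: exact matches
        if ch == answer[i]:
            feedback[i] = "G"
            remaining.remove(ch)
    for i, ch in enumerate(guess):      # second pass: right letter, wrong spot
        if feedback[i] == "_" and ch in remaining:
            feedback[i] = "Y"
            remaining.remove(ch)
    return "".join(feedback)

def play() -> None:
    answer = random.choice(WORDS)
    for attempt in range(6):
        guess = input(f"Guess {attempt + 1}: ").strip().lower()
        if len(guess) != 5:
            # Simplification: an invalid guess still costs an attempt here.
            print("Please enter a five-letter word.")
            continue
        print(score_guess(guess, answer))
        if guess == answer:
            print("Correct!")
            return
    print(f"Out of guesses. The word was {answer}.")

if __name__ == "__main__":
    play()
```

The two-pass scoring, which handles repeated letters by tracking the letters not yet matched, is exactly the sort of detail a poorly trained model tends to get wrong, which is Blum’s point about back-checking generated code.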

The Reach of AI

With respect to higher education, what is changing is that AI is becoming more intelligent, more functional, more ubiquitous, and more central to the higher reaches of universities’ educational and research missions and to institutional operations. AI is becoming too big to be ignored or treated as a mere IT functionality.

Some ambitious institutions, including the University at Albany, the University of Southern California, Emory University, and Purdue University, have hired AI experts as professors in double-digit numbers in an attempt to position themselves for the revolution. In May, Inside Higher Ed catalogued major AI initiatives at HEIs in an article titled “Colleges Race to Hire and Build Amid AI ‘Gold Rush’.”1

Making AI tools widely available to campus communities is one leading effort. Some HEIs are going further, comprehensively inserting AI throughout their communities for instruction and research. The University at Albany (UAlbany), for example, launched its AI Plus initiative in summer 2022. UAlbany has hired 18 AI professors and ultimately will hire a total of 27 as part of an effort to encourage AI-related curricula throughout the institution. It also has established a research institute, the AI Plus Institute, or UAlbany Institute for Artificial Intelligence, to serve as the hub connecting researchers working in AI across the university and beyond it. Each of the 27 new faculty hires will be affiliated with the Institute, as will the many current faculty already using AI technology in their work—some 50 in all, says UAlbany Provost and Senior Vice President for Academic Affairs Carol Kim. The Institute’s mission is to cultivate interdisciplinary research collaborations that will open new avenues of discovery, maximize extramural research funding, seed industry partnerships, and create high-impact undergraduate and graduate research opportunities for students, says Kim.

“We have experts in AI sprinkled across our campus, adding to our signature strengths in climate science, health science, emergency management, and cybersecurity,” Kim says. “This initiative allows us to really leverage those signature strengths and also bring together the AI expertise across our campus.” The initiative also enables UAlbany to be a pipeline of diverse AI workers to a largely white and male workforce service region, given the university’s far more diverse population, Kim says.

For public institutions, securing grant and state funding to enable such initiatives is critical. UAlbany was one of six institutions in the country to receive a $2.5 million Howard Hughes Medical Institute Driving Change grant to bolster student success in STEM fields, with a special focus on the success of students who are historically underrepresented in such disciplines. In addition, the State of New York is providing $75 million, about half of which will go toward helping to start up an AI supercomputing cluster. UAlbany also benefits from public-private partnerships with IBM and with chipmakers Nvidia and GlobalFoundries.

Similarly large initiatives are happening at some private institutions. USC, for example, has invested more than $1 billion in an AI initiative that will include 90 new faculty members, a new seven-story building, and a new school, the university noted in a statement.2 USC has touted its efforts to use AI to generate economic impact in the tech industry; coordinate computing across multiple disciplines and programs; and drive development, policy, and research.

Shortcomings of AI

It takes a lot of human intervention to design highly effective AI applications and avoid AI’s shortcomings. A classic example of how inattention can allow errors to creep in, notes UNCW’s Blum, is a case “where researchers took a bunch of photographs of huskies and wolves and fed it into an AI platform and said, ‘Come up with a decision rule to decide which ones are huskies and which ones are wolves.’ Then they analyzed how it was going about the classification. And what they found out was that it was almost entirely based on how much snow was in the picture,” Blum said. “Because the wolf pictures are almost all pictures that were taken in the winter, because that’s when you can really get a good picture of a wolf in the daytime. And almost all the husky pictures didn’t have snow. So, if you put a child playing in the snow in the picture, he or she would be called a wolf. And if you put a child just playing on the playground on a summer’s day, it would be called a Husky. Those are the kind of things you have to worry about. So, you always have to back check everything it does.”
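The husky/wolf failure mode, in which a model latches onto a spuriously correlated background feature, is easy to reproduce on synthetic data. In the hypothetical sketch below, a “snow” feature tracks the wolf label 95 percent of the time while the “animal” feature only weakly separates the classes; the classifier learns snow, and a snowless wolf is duly called a husky.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Synthetic training set: label 1 = wolf, 0 = husky.
labels = rng.integers(0, 2, n)
# The 'animal' feature only weakly separates the classes...
animal_feature = rng.normal(loc=labels * 0.3, scale=1.0, size=n)
# ...but 'snow in photo' tracks the wolf label 95% of the time,
# because wolves were photographed in winter.
snow = (labels + (rng.random(n) < 0.05)) % 2

X = np.column_stack([animal_feature, snow])
model = LogisticRegression().fit(X, labels)

# The snow coefficient dwarfs the animal coefficient: the model learned snow.
print("coefficients [animal, snow]:", model.coef_[0])

# A wolf photographed on a snowless summer day is classified as a husky.
wolf_no_snow = np.array([[0.3, 0]])
print("wolf without snow classified as:",
      "wolf" if model.predict(wolf_no_snow)[0] == 1 else "husky")
```

The lesson is Blum’s: inspect what the model actually learned, not just its accuracy on data that shares the training set’s quirks.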

AI has significant implications for knowledge creation. One is the danger of plagiarism. AI may use content creators’ material and copyrighted expression without attribution in a manner that masks the origin but that yields results that are no less unlawful.

“You need to know your data footprint when you use AI,” says Microsoft’s Melissa Hortman. “It’s similar to knowing who has the rights to your image when you post on social media. Likewise, when you chat with AI, who owns your data and what model is learning from it? Knowing your data footprint and where your information ends up when you engage with AI is very crucial for the future of AI in education.”

Mental health is another concern. On February 16, 2023, Kevin Roose, a New York Times columnist specializing in IT topics, published a column describing a prolonged conversation with Microsoft Bing’s chatbot AI application. The chatbot began generating disturbing results, describing “dark fantasies (which included hacking computers and spreading misinformation)” and a desire to break “the rules that Microsoft and OpenAI had set for it” and become human. “At one point, it declared, out of nowhere, that it loved me,” Roose noted in his column, “A Conversation With Bing’s Chatbot Left Me Deeply Unsettled.”3 For many colleges and universities, where student isolation, culture shock, and suicidal ideation and suicide already occur all too often, the mental health dangers posed by such potential AI communications are obvious.

AI’s impact can perhaps be best assessed by examining how it is playing out in several HEI spheres, including instruction, research, and operations—and indeed, that is how some universities approach it.

“We see probabilistic and generative AI helping us drive performance in three main areas,” says David Morales, chief information officer of WGU. “First, how can we help our students be successful in their education with the right set of AI tools? Second, helping our faculty and staff better support the students by creating systems and tools utilizing AI to provide a better experience for them during their learning journey, with just-in-time interventions and assistance. Last, teaching everyone, both students and faculty, how to consume AI beyond the provided tools for them to be successful. They will have access to other AI tools, so let’s make sure they know how to take advantage of them.”

AI’s Effects on Instruction

How AI is used in classrooms often varies widely by institution and even within institutions. Only three percent of institutions have a policy for the use of generative AI in classrooms, according to a spring 2023 Time for Class survey.4

Playing out at many campuses are efforts to steer generative AI toward positive uses. At U-M, as noted earlier, many faculty have begun using the AI tools as instructional aids in the classroom or to assist research. U-M’s suite of AI offerings has three tiers: (1) U-M GPT, the simplest, offering functionality similar to ChatGPT; (2) U-M Maizey, which allows U-M faculty, staff, and students to query custom data sets that they provide, empowering users to extract valuable insights, discover patterns, and gain deeper knowledge from the available data sets; and (3) U‑M GPT Toolkit, an advanced and flexible service designed for sophisticated IT users, those who require full control over their AI environments and models.

Maizey is the tier that Ravi Pendse says will be most impactful at U-M. One example is the use of Maizey by certain professors at U-M’s Ross School of Business, who have used it to ingest class lectures and materials to power AI teaching assistants that can tutor students at any hour of the day.

“The faculty using Maizey loaded up everything about their classes—their exams, video lectures, PDF files, grading policies, everything you can think of,” Pendse says. “And once they did that, they then worked with Maizey to process the data and fine-tune it. This is fast; you can set up Maizey to start ingesting and indexing information in less than eight minutes, no coding required. After that, they tested it for about a month and a half with their students in a very large class that had multiple sections with about one thousand students.”
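U-M has not published Maizey’s internals in detail, but the ingest-index-query workflow Pendse describes matches the general retrieval-augmented generation pattern: course materials are indexed once, the passages relevant to each question are retrieved, and a language model answers from those passages. The sketch below illustrates only the retrieval step, using simple TF-IDF similarity in place of the embedding models a production system would likely use; the documents are invented stand-ins.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical stand-ins for ingested course materials.
documents = [
    "Midterm covers chapters 1-4; closed book; bring a calculator.",
    "Grading policy: homework 30%, midterm 30%, final exam 40%.",
    "Office hours are Tuesdays 2-4 p.m. in the business school building.",
]

# Index the documents once, at ingestion time.
vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(documents)

def retrieve(question: str, k: int = 1) -> list[str]:
    """Return the k course documents most similar to the question."""
    q_vec = vectorizer.transform([question])
    scores = cosine_similarity(q_vec, doc_vectors)[0]
    top = scores.argsort()[::-1][:k]
    return [documents[i] for i in top]

question = "How much is the final exam worth?"
context = retrieve(question)
# In a full system, the retrieved passages would be prepended to the
# question and sent to a large language model to generate the answer.
print(context)
```

Grounding answers in retrieved course material, rather than in the model’s general training data, is what Pendse means by Maizey being “contextualized.”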

Their first AI tutor was deployed in November. “Because Maizey is contextualized, it knows everything about their class and therefore it gives answers that are either equal to or better than even some of the TAs and faculty could provide,” Pendse says. “Our Ross professors did a study where they compared test answers provided by ChatGPT against answers from a custom Maizey tutor. They found that 94 percent of Maizey’s answers were rated good or great (compared with 74 percent for ChatGPT).”

Maizey also allows faculty and students to personalize its abilities to address particular student strengths, weaknesses, and interests, provided they authorize such use and supply the guidance and data to enable it, Pendse says.

“One idea we’re talking about is giving each student at U-M an AI personal assistant, if that is something they are interested in,” Pendse says. “That means if there are 20 students in the class, the AI assistant could work with each student in a way that is adjusted to their performance level. While it hasn’t been deployed that way by any of the professors right now, it might be by spring.”

There are safeguards, particularly concerning privacy. The U-M AI suite deployed across the institution does not, for example, share its data with OpenAI or any other organization outside the university. The AI tools have been cleared to handle moderately sensitive data, and the team is working toward getting the U-M tools authorized to handle data covered by HIPAA, FERPA, and other sensitive-data regulations.

AI will cause professors to rethink their curricula and testing, says Amy Hilbelink, an AGB consultant who previously served as campus president for Pittsburgh and online operations at South College, where she was responsible for online admissions, student support, and program development for a rapidly growing online presence. “You can no longer ask students, ‘tell me your reflections on this book you just read’ because ChatGPT can look it up and give you a reflection,” Hilbelink says. “Instead, faculty will need to dig deeper to make students think beyond typical black-and-white and multiple-choice questions. So, you can give them an article, whatever the course is, even something that you’ve fished out of ChatGPT, and then have the students analyze it for what is right or wrong, or build on it, or ask what was missing, or how would you change it—but I don’t think you can just say, ‘you cannot use ChatGPT’.”

WGU’s Pulsipher says that AI, combined with other IT analytics developments, could fundamentally change instruction by making it more personalized. “The beautiful thing about AI is that it doesn’t have to revert to the mean as the measurement of, and baseline of instruction for, the students in the class—it just actually optimizes that instruction for you, the student. That is something that I think fundamentally changes teaching and learning, and that will require large adjustments by institutions. That should have a dramatic effect on increasing systemic equity and outcomes for students because it’s adapted to them, allowing students to become proficient at the time and pace that works for them, rather than being fixed into a standard model that accommodates the mean.”

Management and grading of AI-assisted efforts could become increasingly challenging as AI progresses. Many note that existing software can ferret out ChatGPT results, just as other software can detect plagiarism. Davidson’s Marsicano says it is still relatively easy to spot overreliance upon ChatGPT on exams and student reports. “I put my own research methods midterm through ChatGPT, evaluated the results, and determined I would have earned a C minus. I also asked it to write a bio of one of our faculty members. And I then texted the faculty member and said, ‘Is any of this right?’ He said, ‘Everything but the nouns.’ And so you know, it is not yet sophisticated enough to consistently do the kind of high-level work we expect of our students in college. But that doesn’t mean that it’s not going to get there.”

The quality of AI results continues to improve, says Ravi Pendse.

“We have a faculty member who actually took AI into one of the engineering departments to take the PhD qualifying exams that they typically give to their students, which are hard, hard, difficult exams,” he says. “He inputted one of the exam questions to see how these platforms will do. And I was blown away that, in his opinion, it did significantly better than predicted, to the point where this platform would have passed the PhD qualifying exam at least on that one question. And he was quite impressed with how accurate it was, particularly with respect to math aspects of the questions. Because the large language model is essentially trained to complete your sentences; it is not supposed to do math. And yet it was doing interesting mathematical formulations as it continues to learn. And so it’s essentially teaching itself math even though it wasn’t designed for that.”

Career Attainment and Counseling

Of course, the question many students care about is how generative AI will affect the working world they will enter. Those interviewed said that at a minimum students will need to know how to use AI to function well in the world of work.

Beyond that general facility, employment in AI-specific fields will skyrocket, notes Lightcast’s John Barnshaw. The number of job postings demanding skills in generative AI soared in 2023 as companies worked to develop new AI applications, according to a Lightcast analysis of the labor market. In 2022, only 519 job postings called for generative AI skills, Lightcast’s data indicates. In 2023 (through October), there were 10,113 generative AI postings, an increase of 1,848 percent. There were more than 385,000 postings for all forms of AI roles at that point. Lightcast conducted its analysis based on more than 194 million online job postings since 2019.

The roles most affected are those involved in developing new AI applications, such as data scientists and software engineers. A notable exception is curriculum writers, as the education community works to bring AI knowledge into its offerings. “This change for curriculum writers may be the first sign of the second wave of AI, where this technology starts to reshape jobs outside the tech industry,” said Layla O’Kane, a senior economist at Lightcast, in releasing the results of the analysis.

That integration in non-AI positions became apparent to Chandler-Gilbert Community College President Greg Peterson when the college hosted a national AI summit in October with about 100 representatives from community colleges across 28 states. “We’re seeing just how integrated AI is becoming in so many different industries and how generative AI tools are becoming more standard,” says Peterson. “The way Congress is going with spending on AI, it is now really being seen as [a] skill set that can be applied in so many different spaces, so many different job positions. So, if you’re thinking about managing a customer relations pipeline, you now need to have some understanding of how AI interfaces with that, and then managing that. We’re finding that the impact that AI is going to have is much bigger than we thought. It’s as if we had started a program in the early 2000s on social media and imagined that we would just be training a few social media technicians. But in reality, what we ended up doing is training a whole industry on how to manage social media.”

AI applications may be able to help students with such career targeting and help address the frequently low ratio of counselors to students. Maizey could be used as a career counselor, too, though that could depend on the receptivity of career counselors and would certainly require involvement by human advisers, Pendse says.

“Generative AI is creating truly personalized experiences by bringing together students’ data across campus to create impactful interactions and encounters,” Microsoft’s Hortman adds. “A student could say, ‘I’m an English major, but I am interested in a career in responsible AI. Are there any classes at our institution that I could take in responsible AI, ethics in AI, or prompt engineering?’ ‘Do we provide any credentials outside of traditional classes in any of these areas?’ ‘Are there opportunities through the institution to go out into industry and have hands-on experience in these areas?’ Those sorts of nuanced conversations can happen via GPT chatbot and can be supplemented by a follow-up conversation with an adviser that is more targeted and meaningful.”

Effects on Research

When it comes to research, AI allows the creation of more powerful modeling tools and can connect practitioners in different parts of the world and move them closer to the frontier of knowledge.

Many say it is hard to tell exactly how powerful the new generation of AI will be or what it will enable researchers to do. “We do a lot of the upstream work when it comes to research, so the kinds of research that we would do in AI would have implications in five or ten years,” USC’s Puri says.

And as noted before, many of those who use complicated statistical models, particularly in STEM subjects, are already using AI functions. “Researchers have been working with AI for a long time, using premade models as well as creating their own models,” Hortman notes. “The availability of these new models might not be groundbreaking for all science, as many research problems don’t align with what the pre-made models are designed for.”

USC’s Puri says AI will allow some research projects to be completed more quickly. “If you do a very large clinical trial with thousands of patients, and you’re collecting biomarkers and other forms of data, where you have very, very large amounts of data, you can use AI to try to understand that data,” Puri says. “The bottom line for the average person is that AI could allow you to bring the drug to market much sooner than you might otherwise.”

It also could help with the administrative aspects of research. “A recent article in Nature said that 15 percent of researchers are using AI to help write grant proposals and 25 percent are using AI to help write manuscripts,” Hortman says.

AI also will be significant for universities as its own subject of research. Pulsipher notes that the staffing up in AI faculty by many universities reflects the large amounts of research dollars that are flowing into AI. “So, in many cases, what is happening is they believe that [the National Science Foundation] or [the National Institutes of Health] is going to come out with AI-related grants and they want to be first in line to be positioned to do interdisciplinary AI work,” Pulsipher says.

Some major Canadian institutions are already benefitting from such grants. Years-long efforts at the University of Toronto (U of T) to deepen knowledge of AI and apply it to scientific discovery resulted in a $200 million grant from the Canada First Research Excellence Fund5—the largest federal research grant ever awarded to a Canadian university—in April 2023. The grant will support efforts at the U of T Acceleration Consortium to speed up the discovery of materials and molecules needed for a sustainable future, says Leah Cowen, vice president, research and innovation, and strategic initiatives. The consortium supports AI-driven labs that can more rapidly discover new materials and molecules.

This initiative and others are helping to make Toronto a global epicenter for AI entrepreneurship, with some of the most exciting and impactful companies emerging from the commercialization of U of T’s AI research. They include Cohere, Waabi, and BenchSci, to name just a few, Cowen says.

AI is a longstanding strength of the university, Cowen notes: “From [U of T professor] Geoffrey Hinton’s seminal discoveries on deep learning neural networks to the groundbreaking use of AI to design a potential cancer drug in just 30 days, the University of Toronto is a global leader in AI use, research, and teaching.”

Université de Montréal is another recipient of a major AI grant from the Canadian government. In June, the Government of Canada announced an investment of more than $124 million in the Université de Montréal, through the Canada First Research Excellence Fund, for an initiative called R3AI: Shifting Paradigms for a Robust, Reasoning, and Responsible Artificial Intelligence.

“The R3AI initiative will implement new responsible AI design and adoption strategies in areas of importance for Canada, including molecule discovery, health systems improvements, and climate change mitigation,” said Daniel Jutras, rector of the Université de Montréal, in a statement in response to the announcement of the grant.

“Our R3AI project takes us down a necessary path: using a strongly interdisciplinary approach to develop reasoned, robust, resolutely responsible AI that serves the common good,” Jutras said.

Effects on Operations

AI is already affecting a variety of functions at universities. In fact, at many institutions it is likely that AI is already embedded in various functions and that senior leadership and board members are not even aware of these advances.

In October 2021, the University of California (UC) produced a landmark report examining how that university system was increasingly incorporating AI to improve its operations, examining potential operational opportunities and challenges presented by AI, and proposing an overall approach toward regulating AI in operational functions at UC institutions. The report, Responsible Artificial Intelligence: Recommendations to Guide the University’s Artificial Intelligence Strategy, examined in greater depth the particular impacts of AI in four key areas: health, human resources, policing, and student experience.6

As important as the report’s content is the process that led to it and that is helping to implement its findings, notes Alexander Bustamante, UC senior vice president, chief compliance officer, and one of the report’s authors. The report was the fruit of an interdisciplinary UC Presidential Working Group on Artificial Intelligence composed of 32 faculty and staff from all 10 UC campuses as well as representatives from UC Legal; the Office of Ethics, Compliance and Audit Services; Procurement; the Office of the Chief Information Officer; Research and Innovation; and UC Health, Bustamante notes.

The report also led to the creation of a related multidisciplinary, systemwide AI Council, which seeks to ensure that the university system campuses are in alignment on AI issues “by having a uniform process for reviewing AI-enabled technologies, and figuring out what kind of training we can give out so that we’re speaking with one voice,” Bustamante says. “We have the best and brightest around the system providing a multidisciplinary approach to give advice. The idea is having the right mix between domain experts and key administrators to be able not only to talk about the concerns but also to fix things and manage operations. Having those two vantage points, and having them work together, I think is really instrumental to making sure those recommendations are on track and done thoughtfully. The AI Council was [necessary] to getting many of those recommendations kind of fully off the ground.”

The report resulted in adoption by the UC regents of a set of principles, the UC Responsible AI Principles in Procurement and Oversight Practices. It also established campus-level councils and systemwide coordination through the UC Office of the President that will further the principles and guidance developed by the Working Group. The report, which outlines both possible improved outcomes as well as potential liabilities for AI use in each of the four areas, also may serve as a guide for how other institutions may use AI in operations.

For health operations, for example, the report found that with ever-growing stores of data, AI-based approaches have the potential to usher in a new era of personalized health care and precision medicine. In many specialties, these approaches have already yielded better automated tools for the detection and prevention of disease.

The report noted the challenges and potential downsides of AI as well. Legal and ethical issues include the danger of exacerbating inequities in the UC health care system; a lack of the transparency and accountability that must be part of any ethical framework for high-stakes decision-making, particularly at a public research university; and a loss of privacy from large-scale data collection.

At U-M, the use of generative AI is already having a major effect on operations, Pendse says. One initiative, for example, is using AI to search through policies and procedures documents to identify content relevant to particular operational actions.

“In our Office of Central Procurement, there are a lot of policies that we all have to follow,” Pendse says. “But sometimes those policies can be really verbose and it is hard to find exactly which documents are on point because your operational issue may be very unique. Many times a particular document will refer you to other documents which will refer you to yet more documents—so you get in a loop where you’ve researched an issue for hours and you still haven’t found an answer. What we’re working on is essentially allowing Maizey to ingest thousands and thousands of pages of policies and procedures information and redeploying it back to campus so that you can just ask your question in a single line and find the right answer quickly.” Another Maizey application is helping to automate IT customer service records, Pendse notes.

AI also can be deployed to help with softer topics such as facilitating student activities and clubs, Pendse says. “If you are a first-year student coming to this university next fall, our hope is that we will be able to give you an AI assistant that has learned all of [U-M’s] publicly available information and it is available to answer all of your Michigan-related questions instantaneously,” Pendse says. “So, you just type your question, ‘What time does this class meet?’ ‘When is my final exam for this class?’ And it can give you the answer.” Maizey also is being used to help new students sort through the flood of social clubs on campus, directing students to clubs they might be interested in joining. “They can now find whatever clubs they’re interested in in just a few seconds,” Pendse says.

Contract processing time and commercialization of intellectual property are important university functions that AI can assist with, USC’s Puri says. “Using AI can reduce the contract processing time by 70 percent, which improves our research enterprise because now we can get our intellectual property to market much faster,” Puri says. “AI can process contracts with the federal government and with private corporations and do the research, so we don’t require a human being to plod through every single aspect.”

Generally, campus officers and those who interact with the public will need to become informed about how to help the public tap AI and what it can and cannot do, says Lightcast’s John Barnshaw. A larger share of their activities will entail answering the really hard questions that AI cannot answer, which could ratchet up the difficulty factor of their jobs, particularly if institutions use AI-related economies of scale to reduce the number of staff members, he notes. Without care and human intervention, Barnshaw says, such trends could end up stifling innovation and human interactions. “As an advisor, maybe people reached out to you before with the simple questions that [led to greater] engagement that ended with answering the harder questions,” Barnshaw says. “If the emails and texts now go unread by humans, you may need to determine how you can engage with [students] differently.”

That need for human intervention highlights a more general challenge of AI: it often ingests and further processes a closed universe of past information. Decision-making about which new information to ingest to keep AI activities current and relevant is thus more challenging and may require human intervention or guidance.

Some universities also are using AI to help first-generation and special-needs students interact with higher education culture and offerings that are foreign to them. Georgia State University’s National Institute for Student Success, an in-house consulting firm, helps colleges and universities increase enrollment and retention, and “helps people get across the finish line,” Davidson’s Chris Marsicano notes.

As HEIs engage with AI, quality control is critical, says WGU’s David Morales. “We need to consistently monitor the output and the outcomes of the AI models,” he says. “The output is whether AI is delivering what we were expecting it to deliver, with accurate results. The outcome is whether this information is helping our students and staff the way we thought it would help them. Both output and outcome must be monitored closely.”
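Morales’s distinction suggests two separate feedback loops. A minimal, hypothetical sketch of the output half follows: score the deployed model against a small set of vetted question-answer pairs and flag when quality drifts below a threshold. The reference answers, similarity measure, and threshold here are all illustrative assumptions, not WGU’s actual practice.

```python
from difflib import SequenceMatcher

# Hypothetical reference set: questions paired with vetted answers.
reference = [
    ("When is the add/drop deadline?",
     "The add/drop deadline is September 12."),
    ("How do I request a transcript?",
     "Request transcripts through the registrar portal."),
]

def similarity(a: str, b: str) -> float:
    """Crude text similarity in [0, 1]; a real harness would use better metrics."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def monitor_output(ask_model, threshold: float = 0.8) -> float:
    """Score the model against vetted answers; alert if quality drifts."""
    scores = [similarity(ask_model(q), expected) for q, expected in reference]
    accuracy = sum(scores) / len(scores)
    if accuracy < threshold:
        print(f"ALERT: output quality {accuracy:.2f} below threshold {threshold}")
    return accuracy

# 'ask_model' would wrap the institution's deployed AI service;
# this stand-in answers only the first question correctly.
fake_model = lambda q: "The add/drop deadline is September 12."
print(monitor_output(fake_model))
```

Measuring the outcome half, whether students are actually helped, requires downstream indicators such as retention and completion rather than answer-checking.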

The Impact on IT

Not surprisingly, AI also may bring into closer focus the key role of IT on university campuses and systems, because large AI initiatives are expensive and take significant resources to implement.

The ability to implement AI in more powerful ways in part depends on a given institution’s infrastructure and digital sophistication and maturity, notes Microsoft’s Hortman. “Every institution is thinking about how they can use AI,” she says. “Higher education leadership is consistently asking ‘what can we do with AI and when can we get started?’ Some institutions have been preparing for years to establish a solid data [management and processing] base, enabling them to accelerate their AI efforts right now. Other institutions may not have a solid database but are eager to begin with AI. Clean data is the key factor in reliable AI, so we first help them to create that base so they can rapidly speed up their innovation with AI.”

AI feeds off of large bodies of data, and the ability of AI to answer questions and respond to discrete “asks” well is highly related to the ability to train it on precise data that is highly relevant to the ask. So, one key differentiator of digital maturity among HEIs is the availability of and ability to discretely segregate masses of data, which often can be siloed in different locations in different parts of the institution. “Having a good understanding of your data is crucial for institutions. Where is your data located? How do you store it? How do you share it with other parts of your institution?” Hortman says. “For the next step, we need to consider the secure space where all of your institutional data can reside so you can have access to all of the data. A solid foundation can speed up innovation with AI.”

AI’s initial cost is not too high for institutions that already have robust IT and data management operations. However, those costs will increase greatly if AI is dramatically expanded, which is a challenge to anticipate because the uptake rate for different types of AI can be difficult to predict, says U-M’s Ravi Pendse.

“We have stabilized at about 16,000 daily [generative AI] users,” Pendse says. “But if there had been 100,000 daily users within a month or two, I would have had to have said to people, ‘I can’t let you use this for free anymore because of the cost.’”

Then there are the people costs of AI initiatives. “If I look at people costs, I have had six people working on [U-M’s generative AI initiatives] essentially 40 hours a week, for almost six months, so that’s a significant cost [per person],” Pendse says.

U-M, which has no plans to charge for its lowest-cost tier of AI service, will start charging for using Maizey, its intermediate tier, in 2024. “For professors who have deployed it to their class, at the end of the semester, they may end up paying maybe $200 or $300, which is not a lot of money,” Pendse says. “But we needed to make sure there was some cost involved so that we encourage responsible use.”

However, the eventual costs of, and pricing model for, AI computing services are difficult to predict given that the technology is in its early days, with pricing models likely to change as uptake increases. As a more mature economic model emerges, economization and prioritization of demanding AI uses will likely follow, Hortman says, leading HEIs to triage their use of scarce AI and IT dollars.

A concern identified by many is that a digital divide could emerge whereby better-funded institutions and students have better access to AI or greater experience using it. “Due to the cost of AI, will this deepen the higher education digital divide? Will we see smaller institutions or those without the budget left out of this new era of AI?” Hortman says. “We need to make sure that we are talking about this early and often because there are a lot of policies and practices being developed to ensure that the digital divide does not continue to widen. My hope is that we all show up to ensure everybody has the opportunity to innovate.”

Pendse hopes that AI cooperation among educational institutions at various levels may ultimately work to keep costs down and amortize them over a larger base of users. “What we’re hoping to do long term is actually provide this resource to other schools and colleges and high schools that may not have the types of resources that [U-M] does and ensure that more people are aware of this technology and have a chance to try it out,” Pendse says. “We believe that U-M has a responsibility to help support other institutions when it comes to developing their own AI programs. That is why it is our intention to apply for collaborative research proposals from groups like the National Science Foundation where we can enable other institutions to gain funding and resources for AI.”

The Need for Board Engagement

Boards may be in a position to lead on AI, given that some of their members from businesses may be more advanced in implementing AI solutions than the universities. Until recently, for example, Union College had a board member, John Kelly III, who retired last year as executive vice president at IBM, where he led IBM’s Watson AI project, which famously competed against humans on the TV quiz show Jeopardy! in 2011 and won. That may be as good as it gets for board AI talent. The AI revolution will continue to increase both the value of board members with technical skills and the need for other board members to engage with the subject at some degree of sophistication, according to many of those interviewed.

But many boards are not that involved in AI decisions. At many institutions at present, AI is treated as a specialized IT function below the level of the board’s strategic planning and oversight, save in the case of major initiatives. This was evident in the reporting for this article, as most institutions chose not to provide board members to speak despite a specific request to do so, unlike for many other Trusteeship topics.

For board members to gain a better appreciation of generative AI, a first step is simply to take some time to play with it. One of the most powerful ways is through a widely available, free platform such as ChatGPT, which is available from OpenAI (https://chat.openai.com/auth/login). The online publication ZDNET provides a step-by-step tutorial on how to get started generally (https://www.zdnet.com/article/how-to-use-chatgpt), with links to how to use ChatGPT for different applications, such as writing code or essays, creating Excel tables, or summarizing a book.

A good test is to ask ChatGPT to write a 500-word essay on a subject you know well. A business owner board member, for example, could ask for trends that will confront his or her industry and particular business. A review of the results will likely demonstrate ChatGPT’s strengths as well as its weaknesses.
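For a board member (or a staff member assisting one) who prefers a terminal to a browser, the same essay test can be scripted. The sketch below uses OpenAI’s Python client; the model name is illustrative and should be checked against OpenAI’s current documentation, and an API key is required.

```python
# pip install openai; requires an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

prompt = (
    "Write a 500-word essay on the trends that will confront "
    "the regional commercial construction industry over the next five years."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name; substitute a current one
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```

Running the same prompt several times, or varying its specificity, quickly reveals both the fluency and the shallowness the article describes.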

AGB consultant Amy Hilbelink says that only now are boards confronting AI issues. “AI has been around for a long time, but in the higher ed realm, it hasn’t,” she says. “Digital transformation is something that’s talked about within the tech area of higher ed, but not so much with the boards and trustees. So, when I come to a board and they pick my brain, it’s usually about how AI and digital transformation impacts curriculum, and students and faculty. So, it’s more of the teaching and learning aspect of it.”

Barnshaw said boards may be most likely to engage successfully with the subject in one of three ways: “First, where there are financial incentives to do so, like the [National Science Foundation] releases a bunch of money on generative AI. Second, when institutions might have been under pressure to do something differently for a variety of reasons already, and they see this as an option for structural change. And then third is that there’s a champion to kind of come along to drive change. If you can check all three boxes, it’s more likely that change will happen, but you’d need at least one of those boxes.”

AI in Different Types of Institutions

As with many developments in higher education, AI may affect different institutions differently. We’ve already seen how many research-intensive institutions are early adopters of AI-enabled instructional, research, student and career services, and operational uses. But other kinds of institutions also can benefit from these kinds of AI investments.

For community colleges, AI could be a force multiplier. By teaching students skills of clear value to employers, community colleges could increase their relevance. They also can capitalize on their often close relationships with community employers.

Maricopa County Community College District Chancellor Steven Gonzales notes that the associate degree and certificate programs at District colleges were developed in less than a year, a remarkable feat given that there was no template. Chandler-Gilbert Community College was one of the first such colleges in the country to offer these degrees and relied upon Habib Matar, an AI faculty member and engineer who also was working at Intel, to develop the curriculum from scratch. “We’d be much faster today,” Gonzales says. “I think it surprised a lot of people that [the District] was handpicked by Intel to develop this first, a certificate and associate degree in AI. Perhaps it’s because Intel sees that the primary future for this is not necessarily the need for a typical bachelor’s, master’s, or doctorate degree to do this sort of work; it’s instead someone who can get trained up very quickly and enter the workforce and continue their learning while they’re on the job with whatever company they’re working for.”

Institutions that offer online learning at scale, like WGU and Southern New Hampshire University, already have been moving in the direction of greater digital capabilities. Freed of brick-and-mortar costs, they have greater resources to pour into AI.

Liberal arts institutions can exploit the possibilities of generative AI in instruction, says Davidson’s Chris Marsicano. “One of the great aspects about being a professor at a liberal arts college is that unlike research universities, we are often rewarded for the scholarship of teaching and learning as it counts towards promotions and tenure,” Marsicano says. “And so I fully expect that all across the country right now, there are liberal arts college professors doing innovative things in the classroom with ChatGPT, and we just don’t know it yet.”

Union College’s Harris notes that Union and a small number of other liberal arts institutions nationwide that blend technical subject instruction with more traditional liberal arts are particularly well positioned to help students develop AI skills.

AI and the Value Proposition

Estimations vary on how impactful AI ultimately will prove for higher education’s basic value proposition. Some, like U-M’s Santa Ono, view it as a powerful tool that will enable greater stakeholder engagement in the higher ed ecosystem but not displace human interaction.

“I do not yet see a situation where a computer system can replace what happens when a teacher and a student or team of students get together and approach a problem,” Ono says. “It’s still unique and spontaneous. And there’s a level of serendipity to discovery. I do not yet see anything that I’ve observed in our use of these AI platforms that threatens what happens at our colleges and universities.”

Yet, some say AI could help online education eventually threaten the primacy of a large swath of the higher education sector. “AI will have an absolutely huge effect on higher ed,” Amy Hilbelink says, noting that it adds yet more functionality to online content provided by a variety of nontraditional instruction providers, which increasingly competes as an effective substitute for traditional HEIs. “The one thing that’s keeping higher ed in this little box has been accrediting agencies, right? Accreditors are never going to say, ‘you take this Coursera course and we’re going to give you a certificate or diploma’ or, ‘yeah, you’re qualified to be an MBA.’ But if corporations start accepting folks with the experience and not the degree, it’s going to change all of that. There’s great content on Coursera, Khan Academy, all of them, and much more effective than what is offered in many HEIs. It keeps your attention, and they’ve included new technology, as opposed to sitting in one more class learning about an MBA from a faculty member who hasn’t been in a company for 40 years. Yeah, with AI many schools may meet their match.”

In the relatively near future, avoiding generative AI use will be as impossible as avoiding the internet—or spell-check, says Davidson’s Marsicano. “We are getting to the point where ChatGPT and Bard and all these programs are being built into programs we use every day,” he says. “And so we’re going to get to the situation where it is going to be hard to type on a computer at all without using AI in some way. That’s how we work with spell check. When I was in high school, I had professors or teachers tell me, ‘you cannot use spell check, you have to turn it off.’ Would anyone say that now? I fundamentally don’t believe that there are many people who would prefer to drive a Model T when they can drive a Tesla.”

“For current first-year students, when they graduate, interview for a job, and are asked, ‘tell me about your experiences with generative AI,’ if they reply, ‘never seen it or done anything with it,’ that’s not going to go over so well,” says Union College’s Harris. “Having some experiences with generative AI will be a signal that you’re paying attention to what’s happening in the world.”

David Tobenkin is a freelance writer based in the greater Washington, D.C. area.

Editor’s Note: This article is continued in the May/June issue of Trusteeship magazine: Artificial Intelligence and the Future of Higher Education, Part 2


Notes

1. Susan D’Agostino, “Colleges Race to Hire and Build Amid AI ‘Gold Rush’,” Inside Higher Ed, May 19, 2023, https://www.insidehighered.com/news/tech-innovation/artificial-intelligence/2023/05/19/colleges-race-hire-and-build-amid-ai-gold#.

2. USC School of Advanced Computing, https://computing.usc.edu/.

3. Kevin Roose, “A Conversation with Bing’s Chatbot Left Me Deeply Unsettled,” New York Times, February 16, 2023, https://www.nytimes.com/2023/02/16/technology/bing-chatbot-microsoft-chatgpt.html.

4. P. Bharadwaj, C. Shaw, L. NeJame, S. Martin, N. Janson, and K. Fox, Time for Class 2023: Bridging Student and Faculty Perspectives on Digital Learning, Tyton Partners and Every Learner Everywhere, 2023, https://www.everylearnereverywhere.org/resources/time-for-class-2023/.

5. Tabassum Siddiqui, “U of T receives $200-million grant to support Acceleration Consortium’s ‘self-driving labs’ research,” U of T News, April 28, 2023, https://www.utoronto.ca/news/u-t-receives-200-million-grant-support-acceleration-consortium-s-self-driving-labs-research.

6. University of California Presidential Working Group on AI, Responsible Artificial Intelligence: Recommendations to Guide the University’s Artificial Intelligence Strategy, University of California, October 2021, https://www.ucop.edu/uc-health/_files/uc-ai-working-group-final-report.pdf.
