Integrating Artificial Intelligence into Higher Education Governance

The Case for Large Language Models—Do You Need Your Own?

By Joshua Cameron | January 7, 2025 | Blog Post

Opinions expressed in AGB blogs are those of the authors and not necessarily those of the institutions that employ them or of AGB.

Artificial intelligence (AI) is rapidly transforming various sectors, including higher education. Colleges, universities, foundations, and systems are increasingly exploring ways to leverage AI to enhance teaching, research, and administrative processes. AI-powered large language models (LLMs) offer many potential benefits, and I want to outline some lessons, drawn from my own experience developing a customized tool, that might help board professionals.

The Power of LLMs

LLMs are a type of AI designed to process and understand natural language. They usually come in the form of a “chatbot” where a user asks questions, provides directions, and even converses with the AI. There are many popular LLMs.1 2 Each can be applied to a wide range of tasks, including:

  • Information Retrieval—Quickly accessing relevant information from vast data sets, such as research papers, financial reports, and industry trends.
  • Data Analysis—Identifying patterns and trends in complex data, helping trustees make data-driven decisions.
  • Predictive Modeling—Forecasting future outcomes based on historical data and current trends.
  • Summarization—Condensing lengthy documents into concise summaries, saving boards, presidents, foundation leaders, and board professionals time and effort (see the sketch after this list).
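
As an illustration of the summarization task, the short Python sketch below sends a document to a hosted chat model and asks for a bulleted summary. The OpenAI client and model name are assumptions chosen for illustration only; any comparable LLM service would work, and, as discussed later in this post, confidential board material should not be pasted into a public service.

```python
# Illustrative only: assumes the openai Python package and an OPENAI_API_KEY
# environment variable; the model name and file name are placeholders.
# Do not send confidential board material to a public service.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

report_text = open("public_annual_report.txt", encoding="utf-8").read()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; use whichever model your provider offers
    messages=[
        {"role": "system", "content": "You are a concise assistant for a university board office."},
        {"role": "user", "content": "Summarize the following report in five bullet points:\n\n" + report_text},
    ],
)
print(response.choices[0].message.content)
```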

LLMs can improve decision-making, efficiency, innovation, and governance. By providing access to relevant information and insights, LLMs can help board members make more informed and evidence-based decisions. They can also be used to explore new ideas and identify emerging trends, stimulating innovation and adaptability. AI can automate routine tasks, such as agenda preparation and meeting-minute generation, freeing up time to focus on strategic issues. AI can also help ensure transparency and accountability by providing a digital record of board discussions and decisions.

Concerns and Challenges with Public LLMs

The nature of the work for most higher education governing boards is confidential and, in many instances, proprietary. LLMs rely on vast amounts of data for training and learning, which are usually sourced from the internet and from user interactions. When a user interacts directly with these often-public LLMs, the data that are shared might be incorporated into the model's training without the user's awareness or consent.

Another concern with LLMs is their tendency to provide incorrect or inaccurate information. Relying on an LLM response without independent verification can create a significant risk. Additionally, the LLM is probably not familiar with institutional policies and processes unless they are in the public domain. This means that information from the LLM might be more prone to inaccuracies because the source information is not readily available for “associative learning.”

These are not the only concerns with LLMs, but they are the two that a private LLM helps to mitigate.

Why I Developed a Private LLM

Institutions that plan to implement an LLM can safeguard sensitive data (such as student, faculty, or donor records) by developing a private language model. Unlike public models hosted on external servers, private models can be deployed on secure internal infrastructure. This helps to ensure that data remains confidential and compliant with privacy regulations because it minimizes the risks associated with sharing proprietary information with third parties.

A recent accreditation visit charged our board with helping the university develop an evidence-based culture. I was tasked with helping to develop dashboards and other sources of evidence. One possible solution was a private language model that my board and university could use to gain valuable insights from their proprietary data. This would enable in-depth analysis of student feedback, faculty evaluations, research publications, institutional policies, accreditation studies, budgets, and other text-based information. These insights could then inform decision-making, improve educational outcomes, and foster institutional innovation.

Developing a PrivateGPT

I have a background in computer coding, so I took the initiative to develop an institution-focused private language model using PrivateGPT. My PrivateGPT is a customized language model trained on a specific dataset, allowing it to provide tailored insights and recommendations. In my case, I used Visual Studio (software), Python (a programming language), and an open-source PrivateGPT implementation.

Here is an overview of the process I followed and my advice to those interested in developing their own LLM:

  1. Data Collection and Preparation: Gather relevant data, such as board meeting minutes, strategic plans, university policies and handbooks, financial reports, and research publications. For quality results, clean and preprocess the data to ensure they are formatted correctly and consistently.
  2. Model Selection: Choose a suitable large language model architecture in consultation with qualified IT experts.
  3. Training (Ingestion): Train the model on the prepared dataset using the machine learning tools that are part of the LLM. This involves feeding the model the data collected in step one and adjusting its parameters to optimize performance. The process is time-consuming and resource-intensive because it involves significant trial and error while the LLM “learns” the data. (A simplified code sketch follows this list.)
  4. Fine-Tuning: This is an ongoing process of further customizing the model by providing additional training data or adjusting parameters to align it with your specific requirements.
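
To make step three more concrete, here is a minimal Python sketch of what ingestion and querying can look like in a local, retrieval-based setup. This is not my exact PrivateGPT configuration: the folder name, embedding model, and libraries (sentence-transformers and NumPy) are assumptions chosen for illustration, and actual PrivateGPT releases wrap comparable steps behind their own scripts.

```python
# Illustrative sketch of local document ingestion and retrieval, not the
# author's actual PrivateGPT setup. Folder name and model name are placeholders.
from pathlib import Path

import numpy as np
from sentence_transformers import SentenceTransformer

# Ingestion: read local documents and embed them entirely on your own machine.
docs = [p.read_text(encoding="utf-8") for p in Path("board_documents").glob("*.txt")]
embedder = SentenceTransformer("all-MiniLM-L6-v2")  # small local embedding model
doc_vectors = embedder.encode(docs, normalize_embeddings=True)

# Query: embed the question and find the most relevant documents.
question = "What does our conflict-of-interest policy require of trustees?"
q_vector = embedder.encode([question], normalize_embeddings=True)[0]
scores = doc_vectors @ q_vector  # cosine similarity, since vectors are normalized
top_docs = [docs[i] for i in np.argsort(scores)[::-1][:3]]

# The retrieved passages plus the question would then be passed to a locally
# hosted language model (for example, via llama-cpp-python) to generate an answer.
print(top_docs[0][:500])
```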

A Board Professional’s Experience with a PrivateGPT

Developing my own PrivateGPT has been a rollercoaster experience. The initial setup was not overly challenging for someone with programming skills. However, someone unfamiliar with developer platforms like GitHub or computer-programming languages is likely to struggle getting up and running without support from an IT expert.

The initial data training and query experience was fun and exciting. The model works! It had no problem identifying policy overlaps, spotting data patterns, or generating resolutions and other documents. For example, summarizing a group of documents into an executive summary worked well.

However, be prepared for some answers that make no sense or that simply do not come at all, requiring a critical eye and refined inquiry. Because the model was trained only on the information it was provided (without access to public datasets or additional users), context can be missing or misleading. This led to some unexpected and unusual responses that users needed to evaluate carefully. When the model did not respond, a simple resolution was to train it to provide a predefined “no answer” response, as sketched below.
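
In practice, one simple way to get that predictable behavior is to instruct the model, in the prompt that wraps each query, to fall back to a fixed response whenever the retrieved documents do not contain the answer. The snippet below is a hypothetical illustration rather than my actual configuration; the build_prompt helper and the fallback wording are placeholders.

```python
# Hypothetical prompt template for a retrieval-based assistant; the helper name
# and fallback wording are illustrative, not the author's actual configuration.
def build_prompt(context: str, question: str) -> str:
    return (
        "Answer the question using only the context below. "
        "If the context does not contain the answer, reply exactly: "
        "'I do not have that information in the board's documents.'\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\n"
        "Answer:"
    )
```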

Training the user is just as essential as training the learning model. Users need to understand what the LLM is (and is not) capable of and how to phrase their questions appropriately. The types and style of questions matter—this is also true for public LLMs. With a personalized system, these interactions became quite nuanced and rewarding despite times of frustration.

Initial deployment was just the beginning. Maintenance of the database, software updates, and hardware bottlenecks soon plagued usability. Our PrivateGPT runs on a local machine, so its speed and reliability depend on the quality of the hardware and the depth and breadth of material provided during training. Chat experiences seemed slow and archaic compared to polished public versions, with queries often taking minutes rather than seconds. These times improved, but they remained a significant hurdle to accomplishing meaningful work.

“Better” Alternatives to a PrivateGPT

Some institutions are developing their own LLMs for students, faculty, and staff that overcome many of the hurdles noted with my PrivateGPT, while simultaneously addressing many of the privacy concerns associated with public LLMs.3 4 Other private, nonprofit, and commercial entities are providing similar AI experiences for users, employees, and members. However, achieving this requires expert IT support and resources. As a board professional, do you need to develop an LLM for your institution by yourself? I do not think so, and there are others who agree.5

AI and LLMs are becoming increasingly common in our digital world, providing board professionals with AI-powered options that are quicker and cheaper than creating an LLM from scratch by themselves. For example, AGB offers AGB Board Bot™ as a resource for higher education governance topics. Board Bot helps to generate relevant content, not just find it. Given the prompt, “Please provide me with a brief template for a university advisory council description or bylaws,” the Board Bot not only generated a functional template but also provided a link to the book Advisory Councils in Higher Education, where one can find additional relevant information about advisory councils.

Even this article was drafted, in part, with the help of two AI tools—Copilot integrated with my institution’s Microsoft account and Gemini, Google’s free LLM chatbot. Copilot provided language suggestions when prompted as I drafted the document. I think of this as an enhanced version of spellcheck, autocomplete, thesaurus, and grammar tools. Gemini provided enhanced search results and was an additional resource. For example, Gemini summarized the rather complicated workflow for developing the PrivateGPT into the four steps shown above.

Selecting the best LLM for your needs will depend on several factors, including the LLM’s capabilities, price, integration with available institutional resources, ease of use, and reliability. Consider your own needs, resources, and organization’s mission.

Conclusion

AI-powered LLMs have the potential to revolutionize the way people interact with data, address questions, and synthesize information. By providing enhanced decision-making capabilities and improving efficiency, these models can help higher education governing boards navigate today’s complex challenges. Unless you are an IT expert, developing a PrivateGPT as an individual board professional is likely not a practical approach. Instead, seek opportunities to bring private models to individual campuses, learn how to use them, and teach board members, presidents, and other institution and foundation leaders their value. As AI continues to advance, it is crucial for institutions to embrace this technology responsibly and ethically.

Joshua Cameron, PhD, is the executive liaison officer in the Office of the President and an associate professor at Western University of Health Sciences.

Artificial intelligence tools assisted in the development of this blog post.


Notes

1. Grant Hickey, “The best Large Language Models (LLMs) of 2024,” TechRadar, July 5, 2024, https://www.techradar.com/computing/artificial-intelligence/best-llms.

2. Joseph Ours, “A Business Leader’s Guide To Choosing Between Top Enterprise-Ready LLMs,” Forbes, November 13, 2024, https://www.forbes.com/councils/forbestechcouncil/2024/11/13/a-business-leaders-guide-to-choosing-between-top-enterprise-ready-llms/.

3. Silvia Milano, Joshua A. McGrane, and Sabina Leonelli, “Large language models challenge the future of higher education,” Nature Machine Intelligence 5, no. 4 (2023): 333-334, http://dx.doi.org/10.1038/s42256-023-00644-2.

4. Lauren Coffey, “Universities Build Their Own ChatGPT-like Tools,” Inside Higher Ed, March 21, 2024, https://www.insidehighered.com/news/tech-innovation/artificial-intelligence/2024/03/21/universities-build-their-own-chatgpt-ai#.

5. Jason Ly, “Does your company need its own LLM? The reality is, it probably doesn’t!” TechNative, June 9, 2024, https://technative.io/does-your-company-need-its-own-llm-the-reality-is-it-probably-doesnt/.
