Balancing the Risks and Rewards of AI

By Kristin Hanson // Volume 33, Number 1 // January/February 2025
Takeaways

  • Since ChatGPT launched in November 2022, most college and university advancement offices and foundations have begun experimenting with AI. Common applications of AI in advancement include optimizing content creation and distribution and finding efficiencies in engagement processes such as lead qualification, campaign planning, and contact report collection.
  • Advancement teams are just starting to introduce autonomous fundraisers (also known as virtual engagement officers)—fully independent entities that can speak with alumni and donors and answer their questions. Some universities have already raised gifts through their autonomous fundraisers. Most advancement leaders view these tools as avenues that can provide more customized experiences to constituents who don’t meet the giving threshold to be assigned a major gift officer.
  • Few advancement leaders have instituted specific measures to guard against misuse of AI, although several organizations have guidelines in development. The most common AI issues leaders discussed were maintaining data security, addressing transparency and bias concerns, and facilitating staff adoption and training.
  • Although AI gives resource-strapped advancement teams lucrative new powers, all of the leaders agreed that the technology cannot—and likely will not—replace humans as the main driver of fundraising success.


Two years after the technology went mainstream, advancement leaders are still grappling with questions about AI’s promise—and its perils.

During the early days of the COVID-19 pandemic, Dan Frezza—then the vice president of strategic operations at William & Mary—considered how the “disconnected connectedness” of a remote-first world would affect fundraising.

“All these things swirled in my head about how this would change the work that we do,” said Frezza, now the chief advancement officer and executive director of the College of Charleston Foundation. In his pandemic-era conversations with Matthew Lambert, William & Mary’s senior vice president for university advancement, and other colleagues, Frezza wondered: “What are we learning about our profession through this weird world?”

As the lockdowns ebbed and life returned to “normal,” Frezza kept encouraging his team—and fellow advancement leaders—to embrace creativity. How could they provide more meaningful experiences to donors? And how could they find efficiencies in their processes to make those experiences happen?

Since ChatGPT burst onto the scene in November 2022, advancement teams have been exploring how it and other artificial intelligence (AI) tools can help them achieve those goals. Yet as they consider the various use cases for AI, they remain wary of its potential for harm. Developing reasonable guardrails for using AI and other advanced technologies is paramount, but those policies are still works-in-progress for most organizations.

How is the advancement field benefiting from—and protecting itself against—the rise of AI two years after the advent of ChatGPT? Frezza and several other leaders shared their thoughts for this article.

Experimenting with AI in Advancement Communications

Many advancement teams’ first forays into the wilds of generative AI have focused on finding efficiencies in content production. Tools like ChatGPT can create first drafts of email and direct mail appeals, as well as requests for donor meetings and thank-you notes. They can help overcome writer’s block when fundraisers need to create subject lines for emails or names for events to catch the attention of alumni, donors, and friends.

For small advancement teams and for communications teams within larger advancement enterprises, ChatGPT and its ilk serve as force multipliers.

When Cameron Hall joined the University of South Carolina as the executive director of annual giving in 2021, he was one of three full-time staff on the team, which was reduced to two full-time staff members in 2022. Despite those meager human resources, Hall still needed to generate unique appeals for 22 university units. To meet his goals while he grew his team, Hall leveraged ChatGPT and other AI tools. They came in particularly handy when Hall and his team prepared a “heavy blitz to all audiences” for South Carolina’s Giving Day in 2023.

“We generated mail, email, and the bulk of our social content through ChatGPT, and because of that, it streamlined our processes so staff could focus on other areas of our work,” Hall says. The campaign, he adds, produced the highest-grossing Giving Day South Carolina had seen in five years.

The efficiencies that ChatGPT and other AI platforms bring to content creation allow Hall’s team more time to forge the interpersonal connections that can, in turn, inform their AI work.

“When people ask me about generative AI, I answer: ‘What would you do with an additional four hours a week of your time?’” Hall says. “We’ve been able to spin those four hours a week into meetings across campus, talking with campus partners and understanding their work, and using what we learn in our prompt engineering to develop better content for constituents. Now, we are telling stories about the University of South Carolina that haven’t been told before.”

Advancement teams are also using AI to optimize content distribution. At the Kansas State University Foundation, Vice President of Communications Susan Berhow’s team is leveraging predictive AI tools embedded in their marketing platform to determine the best time to send their donor communications.

Previously, Berhow says, her team would dictate specific parameters when scheduling an email to be sent (e.g., “Send Email X to 100,000 people at Y o’clock on Z date”). Now, her team gives the platform a mailing list and a date but allows it to choose the best time to send to different segments based on the system’s internal machine-learning algorithms.

“Instead of sending everything all at once, [the system is] staggering it based on its send-time optimization logic,” Berhow says. Her team has used this capability for several months and has seen their target KPI—click-through rates—improve over time compared with previous years.
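The platform’s internal machine-learning logic is more sophisticated than anything a team would build by hand, but the core idea of send-time optimization can be sketched in a few lines: for each audience segment, pick the send hour with the best historical click-through rate. The data shape and segment names below are invented for illustration, not drawn from any vendor’s product.

```python
from collections import defaultdict

def best_send_hours(history):
    """For each audience segment, pick the send hour with the highest
    historical click-through rate. `history` is a list of
    (segment, hour, clicks, sends) tuples -- a stand-in for the
    engagement data a marketing platform tracks internally."""
    totals = defaultdict(lambda: [0, 0])  # (segment, hour) -> [clicks, sends]
    for segment, hour, clicks, sends in history:
        totals[(segment, hour)][0] += clicks
        totals[(segment, hour)][1] += sends

    best = {}  # segment -> (hour, rate)
    for (segment, hour), (clicks, sends) in totals.items():
        rate = clicks / sends if sends else 0.0
        if segment not in best or rate > best[segment][1]:
            best[segment] = (hour, rate)
    return {segment: hour for segment, (hour, _) in best.items()}

# Hypothetical engagement history for two segments.
history = [
    ("young_alumni", 8, 40, 1000),   # 4% click-through at 8 a.m.
    ("young_alumni", 19, 90, 1000),  # 9% at 7 p.m.
    ("retirees", 8, 120, 1000),      # 12% at 8 a.m.
    ("retirees", 19, 60, 1000),      # 6% at 7 p.m.
]
print(best_send_hours(history))  # {'young_alumni': 19, 'retirees': 8}
```

A real system would also account for recency, seasonality, and per-recipient behavior; this sketch only shows why staggered sends by segment can beat a single blast time.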

Using AI to Find Engagement Efficiencies

Andrea Britton, associate director of leadership giving at Dickinson College, carries a portfolio that’s nearly twice the size of those carried by other gift officers in her office. She must be strategic in choosing which prospects she will meet with. AI helps her make faster and more informed decisions about donor outreach.

“The payoff of time saved is hard to quantify,” she says.

Bruce Aird, William & Mary’s associate vice president for advancement services and innovation, agrees with that assessment.

“These [tools] could help us determine who we may want to approach for a potential gift or who may be a good volunteer leader for one of our affinity groups,” he says. “This would be really useful for future campaign planning: Who are the people who might be funneled into a specific interest area or goal associated with the campaign, and when’s the right time to ask them for a lead gift?”

Frezza recognizes AI’s potential to introduce front-end efficiencies for donor engagement, but he finds the best bang for AI’s buck on the back end: creating donor contact reports.

When he’s on the road, Frezza averages about five donor visits per day. After each visit, he speaks notes into his phone on a recording app, which feeds into the College of Charleston Foundation’s database. Although talking out his notes saved him time versus writing them longhand, the transcript in his database was often riddled with errors. He estimates he’d spend about three hours per night in his hotel fixing them.

With the addition of AI to the process, Frezza says his notes come through much cleaner. Now, when he returns to his hotel each night, he’s editing his notes instead of rewriting them wholesale.

“I’m spending 80 percent less time on this kind of work,” he says. “With that extra time, maybe I can get six visits in a day when I travel—and that ends up being more visits overall in a year.”

The Rise of the AI Fundraiser

One of advancement’s newest—and most controversial—AI applications is the autonomous fundraiser. Sometimes called virtual engagement officers, these independent entities are tasked with engaging alumni and donors and, at times, soliciting gifts from them.

“There are probably thousands of potential donors that we’re not reaching in our current solicitations or appeals that we can reach by leveraging a virtual engagement officer,” Aird says.

Unlike the form letters and emails institutions have traditionally deployed for mass engagement, autonomous fundraisers are interactive and can answer alumni’s and donors’ questions in real time. For example, an autonomous fundraiser could reply to a donor question such as, “How much did I give to the university on Giving Day in 2024?” It could also provide prompts for engagement, such as “We look forward to seeing you at Homecoming next week. Here is a list of the events you’ve signed up for—let me know if you have any questions.”

Advancement teams can dictate which questions may be outside a virtual engagement officer’s ability to answer and require the system to refer potential donors to its human counterparts. Most platforms allow advancement teams to upload information about point people. For example, if the donor said, “I’d like to make a planned gift to the university,” the virtual engagement officer could respond with the name, phone number, and email address of the appropriate staff member, or send a message to that staff member with the lead.
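The handoff behavior described above can be approximated with a simple routing table: if a donor’s message touches an out-of-scope topic, the system returns a human point person instead of answering itself. The topics, names, and addresses below are hypothetical, and commercial platforms use far richer intent detection than keyword matching; this is only a sketch of the escalation pattern.

```python
# Hypothetical routing table: out-of-scope topics -> (staff name, email).
ESCALATION_RULES = {
    "planned gift": ("Jane Doe", "giftplanning@example.edu"),
    "estate": ("Jane Doe", "giftplanning@example.edu"),
    "major gift": ("Sam Lee", "majorgifts@example.edu"),
}

def route(message):
    """Return an escalation contact if the donor's message mentions an
    out-of-scope topic; otherwise return None, meaning the virtual
    engagement officer may answer on its own."""
    text = message.lower()
    for keyword, contact in ESCALATION_RULES.items():
        if keyword in text:
            return contact
    return None

print(route("I'd like to make a planned gift to the university."))
# -> ('Jane Doe', 'giftplanning@example.edu')
print(route("What time does Homecoming start?"))
# -> None (safe for the virtual engagement officer to handle)
```

In production, the returned contact would feed either a reply to the donor or an automatic lead notification to the staff member, as the article describes.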

William & Mary and the College of Charleston are two of more than a dozen institutions that began piloting autonomous fundraisers in 2024, and they’re already seeing solid results.

In June, William & Mary conducted a small pilot test for fiscal year-end solicitations. The outreach to 45 past donors and board members by a virtual engagement officer yielded a handful of gifts. But the response was positive enough for William & Mary to greenlight a larger pilot of an autonomous fundraiser in November. The approximately 500 recipients of this new invitation interacted with a William & Mary-specific avatar set against a recognizable university background.

“Some of the fears that folks had [about autonomous fundraisers] are being softened by the responses we’re seeing from donors,” says Meghan Palombo, William & Mary’s associate vice president for annual giving and philanthropic engagement. Their short-term aim is to deploy the virtual engagement officer with donors who are not currently connected with major gift officers or volunteers.

“We hope this can be another channel to give a more concierge-style experience to donors who aren’t currently being worked,” Palombo says.

At the College of Charleston Foundation—where an early pilot of the autonomous fundraiser yielded a $1,000 gift—Frezza sees AI as an answer to another longstanding question: What will replace the phonathon? Phonathons, which once accounted for a substantial portion of an average university’s annual giving, have dwindled as cell phones supplanted landlines and donors stopped answering calls from unknown numbers.

“You have to find ways to replace that revenue,” he says. “AI is another channel that may be able to own this space. That’s where an autonomous fundraiser can be really useful.”

Mitigating Potential AI Risks

The possible advantages AI holds for advancement teams abound, and so do the technology’s threats. Yet few advancement leaders have established firm policies regarding AI use in their teams’ work.

Many advancement organizations, including the Kansas State University Foundation, have drafted, but not yet published, guidelines governing AI use. Some institutions have assigned staff to task forces and asked them to draft AI policies. But many are operating under a kind of “honor system”: Adhere to any university policies regarding AI and keep lines of communication about AI open between advancement and the institution’s leadership.

“I check in with my supervisor often about how I’m using AI,” says Dickinson’s Britton. “AI is an area where I don’t feel like it’s better to beg forgiveness than ask permission. I don’t want to put the college in a place where someone’s data or identity were compromised.”

Additional AI Applications for Advancement

  • Scaling alumni-student mentoring: Elon University has implemented Elon Q&A, an AI-enabled platform that connects students and alumni based on their questions and areas of expertise, respectively. The platform facilitates engagement with graduates who live far from campus or in an area that does not have an alumni chapter. “We launched [Elon Q&A] earlier this year with a targeted campaign toward students,” says Jill Stratton, director of strategic initiatives for university advancement. “We thought in the first few weeks we’d get maybe 100 questions. We hit that number within days.”
  • Interpreting complex topics: AI is great for summarizing dense materials in easy-to-understand ways. That makes the technology a perfect match for fundraisers and advancement communicators who must translate complex information into impact reports and proposals for donors. “Say I have to report on an endowed chair in cellular biology—and I’m not a cell bio expert,” says Lynne Wester, CEO of the Donor Relations Group. “The faculty member gives me three research papers on the topic, but I don’t have time to read 72 pages. Instead, I can load them into an AI engine and say, ‘Explain this to me like I’m in sixth grade,’ and it’ll help me translate it all into layman’s terms.”
  • Simplifying image searches: AI can help advancement teams better build and use their media libraries. On the front end, AI can tag photos automatically based on what it sees in an image. On the back end, when a user needs a specific image, they can find it in seconds via keyword search. “What if we need an image of a student on a bicycle one day?” asks Susan Berhow, vice president of communications at the Kansas State University Foundation. “We can use the digital asset management system to unearth photos of bicycles from our current image library, even if we haven’t manually tagged them. This feature saves us time—and could even save us the cost of a new photo shoot of students on bikes.”
  • Analyzing survey data: The University of South Carolina’s annual giving team uses AI to quickly analyze the results from surveys about Giving Day content and other donor relations communications. “We take out any of the personalized data so the models don’t have that to produce bias,” says Cameron Hall, executive director of annual giving. “This helps us understand the data and helps us make recommendations to enhance our programs’ performance.”
  • Accelerating trip planning: Many advancement leaders interviewed say they or their gift officers often use AI to help make their travel itineraries. Predictive AI tools can help gift officers decide which donors in a particular area are most likely to take a meeting or make a gift. Generative AI tools can help identify which hotels will be most convenient or affordable and which restaurants or coffee shops will work best for donor meetings or events.
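On the image-search application above, the tag-then-search workflow Berhow describes reduces, on the retrieval side, to a keyword lookup over machine-generated tags. A minimal sketch, with an invented library of tagged filenames standing in for a digital asset management index:

```python
def search_images(library, keyword):
    """Find photos whose AI-generated tags match a keyword.
    `library` maps filenames to tag lists -- a stand-in for a digital
    asset management index where each photo already carries
    machine-generated tags."""
    kw = keyword.lower()
    return [photo for photo, tags in library.items()
            if any(kw in tag.lower() for tag in tags)]

# Hypothetical auto-tagged media library.
library = {
    "IMG_0042.jpg": ["student", "bicycle", "campus quad"],
    "IMG_0107.jpg": ["stadium", "crowd"],
    "IMG_0311.jpg": ["bicycle rack", "library entrance"],
}
print(search_images(library, "bicycle"))
# -> ['IMG_0042.jpg', 'IMG_0311.jpg']
```

The hard part in practice is the tagging itself, which vendors handle with computer-vision models; once tags exist, the “find a student on a bicycle in seconds” search is this simple.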

Maintaining Data Security

Advancement organizations are understandably wary about any technology that connects directly to their constituents’ data. Many leaders are still feeling the effects of the 2020 Blackbaud data breach, which compromised the personal information of millions of people. Public tools like ChatGPT and NotebookLM explicitly warn users that any entered information may be used to train the providers’ large language models (LLMs). What’s unclear is how and where those tools store the data that’s entered and whether the data will become public record.

“The reward and risk are the two options here: The ability to enhance what you do for your mission versus where [AI] would expose you to risk you don’t want,” says Greg Willems, CEO of the Kansas State University Foundation. “You are trying to be intelligent about how you harness these new technologies in a way that gives your donors a greater experience and elevates the quality of what you deliver in your mission impact but doesn’t unnecessarily open you up to risking your reputation and ability to execute.”

Strategies that advancement organizations can pursue as they balance these interests include the following:

  • Working closely with institutional IT experts, data engineers, and data scientists to determine what AI capabilities and safety measures are available and feasible to implement;
  • Carefully selecting AI tools within an organization’s existing technology stack (e.g., Microsoft’s Copilot) that use constituent data but do not retain it to train their LLMs;
  • Establishing clear guidelines for what sets of data are and are not fair game for AI use; and
  • Ensuring that any AI platform used complies with the institution’s global data security policies.

Institutions can structure their thinking about AI data access by asking themselves: “What kind of mistakes are we OK with the technology making?”

Imagine two donors have a common name—John Smith. One of them says to an autonomous fundraiser, “Tell me my lifetime giving history to the university.” The avatar gives an answer—but it’s the lifetime giving history of the other John Smith. Giving such an answer is obviously a breach of the university’s confidentiality agreement with the donor. Is that a mistake that’s acceptable to the university, or not?

Right now, the answer is “no” at the College of Charleston Foundation, Frezza says.

“We’re more comfortable keeping things at a higher level,” Frezza says, adding that a question like “Can we invite you to homecoming?” is an example of what would be acceptable. “We want the worst mistake to be something like inviting someone to an event that’s in New York when the person lives in Washington, D.C. It’s embarrassing, but not a dealbreaker.”
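One conservative guardrail implied by the John Smith example is to answer identity-sensitive questions only when a name resolves to exactly one record, and otherwise hand off to a human. The sketch below illustrates that rule with invented donor records standing in for a CRM lookup; no platform discussed in this article is known to work exactly this way.

```python
def lifetime_giving(name, donors):
    """Answer a giving-history question only when the name matches
    exactly one donor record; otherwise decline and defer to staff.
    `donors` is a list of dicts standing in for CRM records."""
    matches = [d for d in donors if d["name"].lower() == name.lower()]
    if len(matches) != 1:
        # Ambiguous (or missing) record: the acceptable "mistake" here
        # is a non-answer, never another donor's confidential history.
        return ("I want to make sure I have the right record. "
                "Let me connect you with a staff member.")
    return f"Your lifetime giving total is ${matches[0]['total']:,}."

# Hypothetical records: two donors share a name.
donors = [
    {"name": "John Smith", "total": 5000},
    {"name": "John Smith", "total": 250},
    {"name": "Ada Woo", "total": 1200},
]
print(lifetime_giving("John Smith", donors))  # declines: two matches
print(lifetime_giving("Ada Woo", donors))     # unique match: answers
```

Real disambiguation would key on a verified ID rather than a name, but the design principle matches Frezza’s threshold: cap the worst-case error at an awkward non-answer, not a confidentiality breach.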

Accounting for Transparency and Bias

We live in an era where it’s increasingly difficult to prove whether something published online is “real.” Maintaining trust in this environment is crucial for advancement teams, so they must be upfront with donors about when they’re using AI technology.

“The most important component for me is that we are transparent, meaning that we are clear when we have outreach that is not a human being but instead an autonomous fundraiser,” says William & Mary’s Lambert.

For Dickinson’s Britton, transparency includes disclosing AI use internally, too.

“Even though my boss doesn’t use AI as much as I do, I always tell her when I’m using it,” she says. “If I created a document using ChatGPT or Fundwriter, it’s an obvious thing not to hide usage of AI.”

Advancement professionals must also recognize the potential for bias, particularly toward women and minority groups, in AI models. Any organizational guidelines for proper AI use must account for this propensity and recommend measures that can minimize bias as the tools learn over time. Here, advancement teams might look across campus for some help.

“What if an advancement shop went over to the computer science department, or whoever is studying AI, to ask: ‘Can you teach us applications of AI that would respect donor privacy and not include a ton of bias to help us in our daily work of advancement?’” suggests Lynne Wester, founder and CEO of the Donor Relations Group.

Advancement teams can also experiment with different ways of crafting prompts to avoid feeding the engine biased information. At the University of South Carolina, Hall’s team specifically excludes identifiers of gender, sex, race, and wealth when prompting its generative AI tools. Instead, they use generalizations of their target audiences.

“If I’m working on a communication for young alumni, I won’t give [the engine] hyper-specific information about those individuals, but I might say, ‘I’m speaking to people between the ages of X and Y who are interested in Z; craft this appeal in a way that appeals to this group,’” Hall says.
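Hall’s practice of prompting with generalized traits rather than personal identifiers can be codified so that protected fields never reach the model at all. The field names and audience below are hypothetical; this is a sketch of the pattern, not the University of South Carolina’s actual tooling.

```python
# Hypothetical fields that should never be sent to a generative model.
EXCLUDED_FIELDS = {"name", "gender", "sex", "race", "wealth_rating"}

def build_prompt(audience, goal):
    """Assemble a generative-AI prompt from generalized audience traits,
    dropping any protected or individually identifying fields first."""
    safe = {k: v for k, v in audience.items() if k not in EXCLUDED_FIELDS}
    traits = ", ".join(f"{k.replace('_', ' ')}: {v}" for k, v in safe.items())
    return (f"I'm speaking to people with these traits: {traits}. "
            f"Craft {goal} in a way that appeals to this group.")

# Hypothetical audience record mixing safe traits and excluded fields.
audience = {
    "age_range": "22-30",
    "interests": "athletics and student scholarships",
    "gender": "F",          # stripped before prompting
    "wealth_rating": "A1",  # stripped before prompting
}
print(build_prompt(audience, "a Giving Day email appeal"))
```

Filtering in code, rather than relying on each writer to remember the rule, makes the exclusion list auditable and easy to extend as guidelines evolve.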

Staff Adoption and Training

Ensuring advancement teams use AI safely while still realizing its benefits requires organizations to engage in change management. Implementation of AI tools and strategies will encounter resistance from staff who tend to recoil from any new technology—much less technology that carries as many unknowns as AI.

Add to that the fact that many fundraisers who tried out ChatGPT’s capabilities in its early days weren’t impressed with its generic outputs. Wester, for example, has worked with a variety of AI tools to write effective donor thank-you emails over the past two years and can attest that they are not silver bullets for success.

“If you have to engineer a prompt, then refine it so it doesn’t sound like every other donor thank-you out there and get something original and heartfelt, you often have to put a lot of time and effort into that,” she says.

For most AI tools, South Carolina’s Hall says, the outputs are only as good as the inputs. Developing and practicing prompting skills is time-consuming but essential.

“We had to learn very quickly how to become strong prompt engineers, because there’s still a writing element to this. If you’re not a strong writer, you’re not going to get a strong output,” he says.

Hall developed his skills—and his team’s—by providing and seeking out training opportunities, many of which are free online. Dickinson’s Britton, a self-taught prompt engineer, says she learned from TikToks and YouTube videos, and enrolled in any internal AI training courses the college offered.

Whether it’s a new platform or an emerging security threat, the nascent field of AI is evolving on a near-daily basis. (As Britton quips, “It feels a bit like the Wild West.”) Advancement, on the other hand, has historically responded to technological change at a glacial pace. Teams already are challenged to keep up with the latest developments and best practices.

To keep pace with AI’s changes on an industry scale, advancement leaders can attend higher education and tech conferences, engage with experts, and collaborate with peers at other institutions. Within their own organizations, leaders can establish internal working groups to explore AI applications, then ask those groups to present to their peers. In these ways, advancement leaders can build a culture of learning and innovation that acts as a rising tide to lift all boats.

The Role of the Board in AI Governance

Many of the AI governance issues today mirror those faced during previous technology booms, like the internet’s adoption in the 1990s. As before, senior leadership and governing boards must address questions around guidelines—in the form of policies—and appropriate guardrails for mitigating the most serious risks when using AI in fundraising and donor communications. And given the dynamism in AI technologies, these policies might need to be revisited more frequently than perhaps more static areas of board oversight.

Board members should focus on encouraging the benefits of innovations in operations (with a tolerance for occasional false starts or failures) while being mindful of reputational and other potential harms if the technology is deployed absent such considerations. Moreover, as with previous digital innovations, the budgetary resources required to adopt and support these technologies cannot be an afterthought. Like other strategic initiatives, the return on investment needs to be monitored over time and the annual outlays carefully tracked.

In practice, this means foundation and independent institution governing board members should ask questions regarding the security and privacy risks posed by the usage of new AI tools—because the risks are reputational. And when dealing with donors, reputation is everything. In addition, board members should be sensitive to the emerging state, federal, and international regulations around AI to make sure they remain in compliance.

“AI Isn’t Intended to Replace Staff”

Across the board, advancement leaders believe that for all of AI’s potential, it cannot—and will not—replace humans in fundraising. But they also know that fundraisers who embrace the technology responsibly will have a tremendous advantage.

“Instead of calling it ‘artificial intelligence,’ maybe we in advancement should look at it as ‘assistive intelligence,’” Wester says. “We should be asking ourselves, ‘What can it assist me to do better?’”

Answering that question will require advancement leaders to adapt to new technology far faster than they typically do. Fundraisers are generally late adopters, says South Carolina’s Hall, pointing to the sector’s recent wave of customer relationship management platform transformations as an example.

“Then we find we’re stuck in a place where we have to retrofit those tools on the back end. We don’t want AI to be in that space because it learns very quickly,” Hall says. “Teaching the models how to do things, learning how to be a strong prompt engineer, and using these tools to make your work more efficient—this all has to happen now.”

Although William & Mary’s Lambert acknowledges that AI comes with inherent risks, he says he hasn’t yet heard a worst-case scenario that would dissuade him from finding ways to incorporate the technology in his team’s operations.

“We’re not putting a man on the moon, and we’re not doing heart surgery,” he says, noting that advancement professionals were once hesitant to replace phone calls with text messaging in annual giving strategies, a practice that’s now fully accepted.

Ultimately, Lambert sees AI as “another way for those who want to engage with us to communicate with us. Some will be interested in this. Some won’t be,” he says. “But AI isn’t intended to replace staff. We can’t hire enough staff, and that’s a problem. AI is something that can help us expand our reach in other ways.”

Kristin Hanson is a freelance writer based in Baltimore, Maryland. She has written extensively on advancement and higher education, worked as a senior editor for CASE Currents magazine, and served as a senior associate director for development communications for Johns Hopkins University and Medicine.
