Equal parts skepticism and FOMO. That’s how most nonprofit professionals we’ve talked to this year feel about artificial intelligence (AI).
You can’t stick your head in the sand when it comes to AI, nor should you jump into it without thinking deeply about the potential impact on your organization and community.
As Sam Caplan writes, “If 2023 was the year of AI curiosity, 2024 is the year of AI intentionality.” This year, AI is starting to truly mature. But that doesn’t mean AI is completely free from trust issues, which even strong advocates for AI acknowledge.
Devi Thomas, global head of Nonprofit Community Capacity at Microsoft Philanthropies, brings a wealth of knowledge on integrating tech for social good. Beth Kanter, a seasoned nonprofit consultant and author, has spent years helping organizations leverage technology to amplify their impact. Together, their insights shed light on how nonprofits can embrace AI—thoughtfully and responsibly.
It all comes down to one concept: Trust, but verify. Here’s what that means in practice.
Always put a human in the pilot’s seat
Given how easy AI is to use, it can be tempting to let it take over and steer the ship. However, both Devi and Beth emphasize the importance of always keeping a human firmly in the pilot’s seat.
As Devi puts it, “We really look at AI and refer to it both internally and externally at Microsoft as your copilot. The human is the pilot.”
AI is an assistive technology, improving human output without replacing human judgment. What’s crucial is knowing which tasks to delegate to AI. In a Microsoft survey of over 400 nonprofit leaders, Devi noted, 25% said the most important skill to teach their teams is knowing when to use AI and when to rely on a human.
As Eric Lucht from Microsoft puts it, “Would you trust an intern with it? If not, don’t use AI for it.” In other words, if a task calls for the judgment of a highly paid professional with years of experience, it isn’t a task for AI.
Accountability is Microsoft’s number one principle of responsible AI. In practice, that means human oversight is non-negotiable, starting at the design phase. Keeping humans in control ensures that AI remains a tool to support and augment human efforts, rather than a poor substitute for them.
Understand AI’s limitations
From analyzing donor data sets to predict future disaster responses, to translating for displaced populations, the nonprofit use cases for AI seem nearly infinite. AI holds tremendous potential, but it’s not a magic bullet, and understanding its limitations is crucial for nonprofits that want to use it responsibly.
Recognize the learning curve
While natural language processing makes AI accessible to most, there’s still a learning curve involved in using it, as with any new technology. As Devi notes from Microsoft’s study, “58% of nonprofit leaders are worried about the steep learning curve to understand AI.”
Invest time in training and familiarizing your team with AI tools. Don’t assume everyone already knows how to use them correctly; that assumption is how important nuances get missed.
Beware of overconfidence
AI can often sound confident in its outputs, which can lead to misplaced trust. But AI is still a work in progress and capable of making mistakes, as the notorious launch of Google’s AI Overviews demonstrated when it recommended eating rocks and adding glue to pizza.
“All the critical thinking skills we were taught in using the internet are needed here,” Devi warns.
That’s the crux of “Trust, but verify”: Approach AI outputs with a critical eye and verify the results rather than taking them at face value.
Adopt a privacy-first approach
Devi shared that 63% of nonprofit leaders are worried about security risks associated with using AI. Responsible AI use involves understanding where your inputs are going, how they’re being used, and where your outputs are coming from.
The data that nonprofits handle is often sensitive, so any AI-powered system your organization uses must comply with data protection regulations and best practices. Make sure it encrypts data, stores data securely, and regularly updates its security protocols to safeguard information.
Perhaps most important of all is educating your staff and volunteers on how to protect themselves and their data when using AI. Incorporate privacy considerations from the outset, and be transparent with stakeholders about how the AI tools they interact with use and protect their data.
Know where your outputs are coming from
Transparency is another important principle of responsible AI, and key to building trust in AI systems. People need to know where the data used by AI comes from, and the processes involved in generating outputs. By being transparent, nonprofits can ensure that stakeholders understand and trust the AI systems in place.
For example, Copilot by Bing cites sources for each of its generated claims, with a hyperlink to an appropriate webpage. Devi encourages AI users to be inquisitive, and to question where AI gets its information from.
Start small, then reassess
Adopting AI can be a daunting task, especially for nonprofits with limited resources. That’s why it’s worth starting small, then reassessing whether AI makes sense for your nonprofit.
Devi recommends that nonprofits start small with AI, following a scalable, three-step process:
- Think about your use case for AI. Choose something specific and manageable where AI provides clear benefits, such as automating email responses or analyzing donor data. Focusing on a single use case makes outcomes easier to control and measure.
- Test that use case. Establish KPIs to measure the impact of AI on your chosen use case. Implement the AI tool on a small scale and monitor its performance closely.
- Evaluate and adjust. Determine what’s working and what isn’t by assessing the project’s outcomes against your KPIs, weighing the benefits, challenges, and any unintended consequences. As Devi puts it, it’s all about ongoing learning and improvement against your chosen use case.
From there, you can thoughtfully scale your use of AI up or down.
Shift your focus from full production to the last 20%
AI adoption requires a fundamental shift in how we approach tasks and projects. It’s no longer about starting with a blank slate. AI can lay the groundwork and people can build from there.
“Most human beings are used to creating a product from 0 to 80,” says Devi. “We now have to do a behavior shift where we take things from 80 to 100.”
This approach is particularly beneficial for nonprofits pressed for time. As one grantee we spoke to explained, “The biggest benefit of AI in organizations that really can’t afford the staffing levels of a corporate [organization] is in those basic, menial tasks.”
Beth elaborates on this concept, describing AI as a dividend of time. By taking over more tedious tasks, AI frees up nonprofit professionals to focus on more important issues.
“AI is going to have a profound impact on the way we do our work. It’s going to automate a lot of grunt work, a lot of what I call spreadsheet aerobics, cut and paste,” said Beth. “It’s also going to help free up time, maybe, to address things like the donor retention rate or help us think more creatively about strategy and innovation and have more impactful program delivery.”
In practice, this means embracing AI as a starting point to generate initial ideas, drafts, or analyses, then refining and enhancing those outputs with human insight and expertise. Taking an AI output from 80% decent to 100% quality requires the human touch: the organizational context and nuanced understanding that AI cannot provide.
Working with AI requires a growth mindset—a willingness to learn and adapt to using AI in your workflows, and an ability to bridge the gap between AI-generated outputs and high-quality, mission-aligned outcomes.
Increase buy-in from the top down
Leadership teams need to be both educated and able to educate others on how to use AI responsibly. They set the tone for their organizations and have massive influence on how effectively and responsibly AI is adopted and utilized.
“What I’ve seen and heard is that it starts at the top,” said Devi. “If you have a leadership team that’s deeply invested in the technology, it’s easier to create buy-in throughout the organization and with volunteers. Then, you can scale, making sure that employees and everyone who touches the mission knows how to use it responsibly and ethically.”
Leaders must communicate the value and potential of AI clearly and consistently, and ensure that AI projects are aligned with the organization’s mission. They should be able to articulate how AI enhances outcomes and operational efficiency, demonstrating the technology’s relevance to the nonprofit’s core work.
Leaders who buy into AI must also hold realistic expectations of what’s possible. They need to grasp the necessity of keeping humans in the driver’s seat, while giving employees the support they need to learn new skills. This means fostering a culture of exploration and innovation around AI use.
“Give teams the time and resources to learn and play with AI tools,” said Devi. “Encourage people to spend time with generative AI, go in and really get more and more specific with their prompts. Learn how to add more context to get the right kind of output.”
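For instance, rather than a generic prompt like “Write a donor thank-you email,” a more context-rich prompt (a hypothetical example; the details here are invented) might read: “Write a warm, two-paragraph thank-you email to a first-time donor who gave $50 to our food bank’s summer meal program, noting that their gift covers a week of lunches for one child.” The more relevant context you supply, the closer the first draft lands to what you actually need.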
“Leaders must create psychological safety around AI use,” Beth adds. “Normalize the beginner’s mind towards AI. We have to thoughtfully and responsibly go slow in adopting it.”
AI can level the nonprofit playing field
You might think that larger organizations are the ones adopting AI most enthusiastically, but as Devi noted from Microsoft’s study, it’s actually the smaller nonprofits that are getting the most mileage out of AI.
“Small nonprofits are adopting AI more often,” says Devi. “They’re motivated because this is very much a time saver, an opportunity to use resources to go further.”
AI can also help close the gap between inexperienced, emerging nonprofits and more established ones, especially in the grantmaking space, where articulating your mission is key to securing funding. As one grantee told us, “If I were to give AI a very detailed description of what my organization does and what [an application] is looking for, it would probably be able to assist me.”
AI offers incredible potential for nonprofits, but successful adoption requires a thoughtful approach. By following responsible AI principles, nonprofits can harness the power of AI to multiply their impact while staying true to their mission and values.
To get more involved in the conversation around AI for nonprofits, watch our on-demand webinar, AI for Foundations: A Primer for Pragmatists. Listen in as Anne Nies, Director of Science at Submittable, Alicia Tapia, Sr. Technology Strategist at Microsoft, and Sam Caplan, VP of Social Impact at Submittable, share what they’ve learned about AI and its ideal uses for foundations.