“Should we be using AI?” One of your board members has read an article, attended a conference, or talked with someone from another organization that just launched something new. And the board member wants to know where the organization stands.
It’s the right question. But it’s often framed too narrowly.
The more important question isn’t whether your organization should use AI. At this point, most organizations already are, even if it’s just staff using ChatGPT to draft an email or summarize a report. Your board should be asking what its role and responsibility is in how AI is implemented, and what thoughtful AI governance looks like for a nonprofit.
This post is the first in a four-part series exploring what a People-First approach to AI means for each of the four audiences at the heart of every nonprofit: the board, executive leadership, frontline staff, and the people you serve. Each group has a different role in this conversation, and each deserves an answer to the question of what this means for them specifically.
The Board’s Role is Not to Become an AI Expert
Let’s get this out of the way first. You don’t need to understand how AI works to govern it well. You don’t need to know the difference between a large language model and a machine learning algorithm any more than you need to understand accounting software to exercise responsible financial oversight.
What you do need is a framework for asking the right questions and the confidence to ask them.
The board’s role in AI governance looks a lot like the board’s role in everything else: ensuring the organization’s values are reflected in its decisions, that risks are identified and managed, and that the people who depend on your organization are not harmed in the process of pursuing its mission.
The Mission Alignment Question
Every AI decision your organization makes, every tool it adopts, every workflow it automates, and every system it builds should be traceable back to your mission. That sounds obvious. However, it can easily get lost in the enthusiasm of a good demo or the pressure to keep pace with peer organizations.
A People-First lens asks a specific question before any technology decision is made: does this make us more effective at serving the people we exist to serve?
Efficiency matters. Administrative burden is real and reducing it genuinely frees up capacity. But efficiency that comes at the cost of the relationships, trust, and human judgment your programs depend on is not progress. It is failure. Your board is in a unique position to hold that line, because you are not in the middle of the day-to-day pressure to just get things done.
The Questions Every Board Should Be Asking
If your organization is exploring or already implementing AI tools, here are the questions worth raising at the board level. The intent is not to slow things down, but to make sure the foundation is solid before you build on it.
Who owns the data? Any AI tool your organization uses will interact with your data in some way. Who controls it? Who can access it? If you ended your relationship with the vendor tomorrow, what would happen to your information? These aren’t hypothetical concerns. These are basic stewardship questions that belong in your governance conversation.
What are the privacy implications for the people we serve? Nonprofit organizations often work with individuals who are already in difficult and vulnerable situations. The idea that their personal information could be processed, stored, or used in ways they may not understand or consent to is a serious ethical concern. You very likely receive funding from foundations or government sources that may have their own policies around data privacy as well. Your board should expect a clear answer on this before any client-facing AI system goes live.
How are we protecting our staff in this process? A People-First approach means the people doing the work are centered in technology decisions, not afterthoughts. Are they being consulted? Are they being trained? Do they feel like this is being done with them or to them? Staff who feel overlooked in a technology transition often disengage from it, and when that happens, even the best system fails. The trust of your employees is one of your organization’s most valuable and fragile assets, and board members should want to know it is being protected.
What does success look like, and how will we measure it? Board members are accustomed to asking this question about programs. Ask it about AI as well. If you invest in a new system, what does a good outcome look like six months from now? A year from now? How will leadership report back to you on whether it is working?
The Takeaway
This is not a new category of board governance. It’s an extension of what you already do. You would already ask whether a major investment aligns with the mission. You would already hold leadership accountable for how resources are used. You already care about the wellbeing of the people your organization serves and the staff who serve them. People-First AI governance is simply applying that same lens to a new set of decisions.
Part 2 of this series, People-First AI for Leadership, will explore what this philosophy means for the people responsible for translating board values and direction into organizational reality. Coming soon on the NonprofitNext blog.
Larry is the founder and Principal Innovation Strategist at NonprofitNext, a consulting and training organization helping nonprofits implement technology with intention, strategy, and care. Learn more at www.nonprofitnext.ai.