During your board meeting last night, someone asked whether the organization is being responsible about using AI. You recently learned that one of your staff members was using AI to help write clear case notes, while another wondered aloud whether AI might affect their job in the future. Meanwhile, the people you serve expect you to make decisions that protect their information and privacy. And somewhere in the middle of all of that, you are trying to figure out what AI actually is, whether your organization is ready for it, and how to move forward without making an expensive mistake.
That is a lot to carry. And the pressure to either ignore it or just pick something and move forward is real.
This post is the second in a four-part series on what a People-First approach to AI means across the nonprofit organization. Part 1 addressed the board’s governance role. Here, we focus on what this philosophy means for the people responsible for translating values into operational reality: executive directors, CEOs, and senior leadership teams.
Start With Your People, Not the Technology
The biggest mistake nonprofit leaders can make when approaching AI is starting with the tool. They see a demo, or hear about what a peer organization is doing, and the conversation immediately becomes about whether to adopt that specific platform or product. This is not unique to AI; it happens with technology decisions of all kinds.
The better starting point is your team’s reality. Where are your staff spending time that isn’t mission-critical? What administrative tasks are eating hours that could be spent on direct service? Where are the roadblocks that have frustrated your team for years?
When you start with those questions, the technology conversation changes entirely. You stop asking "should we use AI?" and start asking "what problem are we actually trying to solve, and is AI the right tool for it?" That shift sounds small. In practice, it is the difference between a system your staff will actually use and one that quietly gets abandoned six months after launch. It is also the conversation that leads to a true understanding of your needs and a strategy for moving forward.
Your Staff Will Be Watching How You Do This
When we talk about People-First AI, it is more than an approach to technology. It is a philosophy about leadership. Your staff will draw conclusions about what kind of leader you are based on how you handle this transition.
Are they being consulted before decisions are made and invited to share their concerns and ideas, or are they being informed after the fact? Are they being given real training and the time needed to learn something new, or handed a single walkthrough presentation and a user manual? Do they feel like this is being built with them, or handed down to them?
Those distinctions matter more than the technology itself. Organizations that bring staff into the conversation early, that create space for questions and honest concerns, and that invest in training tend to see far stronger adoption. More importantly, they tend to come out of transitions with their team’s trust intact. That trust is hard to rebuild once it’s lost.
The Decisions That Belong at Your Level
Board members should be asking governance questions about AI. Frontline staff should be empowered to use it confidently. But there is a set of decisions that belong squarely with executive leadership, and it is worth naming them directly.
Where does our data live, and who controls it? This is not a question to delegate entirely to your technology staff or a vendor. Executive leaders need to understand the basics: what platforms your organization’s data passes through, whether client information is being retained or used by third parties, and whether your current tools comply with any funder or regulatory requirements around privacy. If you cannot answer these questions today, that is the place to start.
Are we building on infrastructure we already own? One of the most practical principles in People-First AI is building on platforms your organization already uses and controls, such as your Google or Microsoft environment, rather than introducing new vendors who hold your data and your access. This keeps costs down, reduces risk, and means that if a vendor relationship ends, you still own everything you built and control your data.
What does a responsible rollout actually look like? Moving fast might feel like progress. But a thoughtful rollout with proper planning, staff input, and real training will outperform a rushed one every time. Build the implementation timeline around your team's capacity, not around a product launch date.
Reframe the Opportunity
Here is the framing that I have found most useful for nonprofit leaders who are wrestling with where to begin: AI is not primarily a technology decision. It is a capacity decision.
The organizations that will benefit most from AI are the ones that use it to protect and expand the capacity of their employees and teams. That means more time for your program staff to focus on relationships and direct service. Better data for you to make decisions and demonstrate impact to funders. Sustainable workloads that help you retain the talented people you worked hard to hire and train.
That is what People-First AI looks like in practice for executive leadership. Not chasing the latest tool. Not automating for the sake of efficiency and keeping up with your peers. Instead, it means making deliberate, values-driven decisions about where technology can make your people more effective at the work that only they can do.
Part 3 of this series, People-First AI for Your Staff, will explore what this philosophy means for the people doing the daily work of your organization. Coming soon on the NonprofitNext blog.
Larry is the founder and Principal Innovation Strategist at NonprofitNext, a consulting and training organization helping nonprofits implement technology with intention, strategy, and care. Learn more at www.nonprofitnext.ai.