Over the past two years, we’ve tested dozens of AI and automation tools. Some have genuinely improved our workflows. Many haven’t; we learned the hard way that most AI tools don’t deliver what they promise. But each experiment taught us something valuable about what to look for and what to avoid.

We’ve also been talking with other leaders to learn how nonprofits, associations, and the vendors who serve them are approaching AI and automation. When our President, Bob Corlett, spoke at a recent AI event for sector leaders, the room was full of organizations grappling with the same questions: How do we evaluate these tools? When should we adopt? What are the real risks?

Through ongoing conversation and a bit of trial and error, we’ve developed a framework to evaluate whether a tool is worth the investment. Our approach reflects the realities of working in the mission-driven sector: limited resources, small teams, serious privacy concerns, and the need to maintain human connection in our work.

Our Non-Negotiable: No AI in Candidate Evaluation

Before diving into the framework, we want to be clear about a line we won’t cross: we don’t use AI to evaluate candidates. Ever. This goes beyond avoiding algorithmic bias (although that is a big concern). Executive search for mission-driven organizations requires human judgment that simply can’t be automated. AI can’t grasp the nuances of this work. For more about the tools we use and how we use them, read our technology and privacy practices.

Start With Privacy (Your Team Is Already Using AI)

Even if they aren’t talking about it, your staff is already experimenting with ChatGPT, Claude, or other tools (even Grammarly is powered by AI). It’s never too early to establish responsible use guidelines, even if you don’t have any formal initiatives on deck.

Define what data can and cannot be shared with different platforms. Create simple rules everyone can follow. For a quick breakdown of which platforms you can trust with your data, read this article from Greg Starling, Head of Emerging Technology at designDATA. He’s been an invaluable resource for our team.

People Before Tools

Start with your team, not the technology. This was the consensus among successful adopters at the AI event. You have to understand where your staff stands:

  • Who’s already using AI tools (officially or unofficially)?
  • Who’s excited about new technology versus anxious about job security?
  • What repetitive tasks frustrate them most?
  • Where do they see opportunities for automation?

When we introduced AI to the Staffing Advisors team, we positioned it as a force multiplier. Less keyboarding, more thinking. We didn’t mandate anything. Instead, we asked, “Is that the best use of your time? Could a tool help with that?” Those who were hesitant at first became champions when they saw how AI could enhance their work.

Keep staff needs and concerns at the forefront of these decisions. Whatever your goals are, be transparent and empathetic. This is new for a lot of us.

Business Value Over Hype

Nearly every vendor is pushing AI features. It’s hard not to be seduced by impressive demos. But we learned to focus on one question: “Does this solve a real problem we have today?” For our needs, simple solutions (often a combination of AI and tools we already use) outperform comprehensive platforms. It’s really about whatever supports your business goals.

Red flags we watch for:

  • Vendors who lead with technology instead of our needs.
  • Solutions that can’t integrate with existing systems (you often have to run a trial to figure this out).
  • Anything that promises to do our thinking for us.

If a tool seems gimmicky, doesn’t integrate well, or compromises the integrity of our process or data, we walk away.

Start Small and Prepare for the Messy Middle

If you’re just starting to think about how AI can benefit your organization, start with small improvements to existing workflows and build from there. Every implementation has a messy middle—missteps, frustrations, and the occasional aha moment. Your team needs support through this phase:

  • Document what you learn (successes and failures).
  • Provide multiple ways for teams to learn and test new processes.
  • Keep legacy workflows available during transitions.
  • Share experiences openly across teams.
  • Celebrate small wins to build momentum.

If bigger, broader AI implementation is on your strategic horizon, starting small first will help you see what tools can realistically do and get your staff comfortable with new ways of working.

Know When to Scale (and When to Move On)

Before you start any pilot, define success. What specific results do you need to see? By when? How will you know it’s time to expand (or end) the experiment? Are you seeing repeatable, measurable outcomes that align with your goals? Your next step depends on what you find:

  • Good results → Scale up and formalize the process.
  • Inconsistent results → Decide whether to refine, pause, or stop.
  • Poor results → End the experiment and redirect resources.

Maybe you’re ready to hit the ground running. Maybe the technology needs another generation of development. Maybe you need better integration with existing systems first. Set benchmarks and be clear about how you’ll decide to scale, modify, pause, or stop.

Putting It All Together: A Quick Checklist for Evaluating AI Tools

After you’ve worked through the ideas above, use this checklist as a guide to evaluate tools, structure team discussions, and push vendors for specifics.

Start with the Problem and Use Case

  • What specific problem are we solving?
  • How would our team use this day-to-day?
  • What are the expected benefits? Can we measure them?

Review Security, Privacy, and Compliance

  • Does it meet our compliance requirements?
  • Do we retain ownership of our data and outputs?
  • Is our data used to train their models?
  • Are there any ethical concerns?
  • Can we trust this tool with our sensitive information?

Evaluate Systems Fit and Cost

  • Does it replace, duplicate, or enhance existing tools?
  • How many users need access and at what pricing tier?
  • Does it integrate smoothly with current workflows?
  • What are the full costs (licenses, training, support, maintenance)?

Assess Readiness for Implementation

  • Is the interface intuitive for our team’s skill level?
  • How much training and support will we need?
  • Should we adopt now or wait for the technology to mature?
  • What will we need to implement this effectively?

Every Tool Should Support Your Mission, Including AI

Full disclosure: we’re still figuring this out too. Next month we’ll probably abandon something we’re testing right now. But that’s not failure; it’s just part of the process of determining which technology serves our clients best.

Here’s a low-risk way to experiment. Pick a repetitive task that frustrates your team. Find the simplest possible fix using an AI-driven or automated tool. Test it for a week or two. You’ll learn more from that small experiment than from reading twenty vendor white papers. And remember, AI is just a tool. Treat it like any other tool that can help move your mission forward.
