New Gallup Data Confirms: AI Adoption Isn't a Technology Problem. It's a Values Problem.

Gallup just dropped their January 2025 update on AI at work, and the headline number isn't the interesting part. Yes, "frequent" AI use keeps climbing. But here's what caught my attention: overall adoption has flatlined. https://www.gallup.com/workplace/701195/frequent-workplace-continued-rise.aspx

Translation: the people who've crossed the threshold are going deeper. Everyone else is standing at the edge, arms crossed.

Most organizations treat this as a training problem. Or a communications problem. Or a "we just need better tools" problem.

It's none of those things. It's a values problem. And until you understand which values are in play for different groups in your organization, your AI adoption strategy will keep sputtering.

The Numbers That Should Worry You

Gallup's indicator page shows daily AI use at 12% and frequent use at 26% across the workforce. But those numbers swing wildly by industry and role. https://www.gallup.com/699797/indicator-artificial-intelligence.aspx

Your finance team isn't resisting AI for the same reasons as your customer service reps. Your software developers aren't embracing it for the same reasons as your marketing people. And your frontline healthcare workers are asking completely different questions than your HR team.

Everyone is filtering AI through their own values. And when AI feels like a threat to what matters most, people don't adopt. They wait. They comply minimally. They find workarounds.

Three Hypothetical Values Clusters

When we profile the workforce of an organization or an industry, we often find that reluctance to adopt AI stems from very different sets of values. For example:

You might find a Security Group. These folks prioritize Employment Security, Financial Security, and Basic Needs. For them, AI isn't a shiny new tool. It's a question mark hovering over their job. They need to hear how AI protects their position, not how it "transforms the workplace."

You might find an Autonomy Group. This cluster cares deeply about Independence, Self-Expression, and Personal Responsibility. They're not afraid of AI. They're afraid of being reduced to button-pushers. They need to see how AI expands their judgment and creativity, not how it standardizes their work.

You might even find a Trust Group. These people value Trustworthiness, Respect, and Dependability. They're watching to see if leadership actually believes what they're saying. They need consistency between the AI messaging and how the organization actually behaves. One whiff of "we're not replacing anyone" followed by layoffs, and you've lost them for good.

Why Manager Encouragement Works (When It Does)

Gallup's research shows that manager encouragement dramatically increases adoption. https://www.gallup.com/workplace/701195/frequent-workplace-continued-rise.aspx

The obvious interpretation is that managers explain things better. The real reason is subtler: managers who encourage AI are signaling safety. They're telling their teams, "This aligns with what you care about. I see you in this future."

But here's the catch. That only works if the manager actually understands what matters most to their specific team. A manager who talks about efficiency gains to a team that values Belonging and Relationships will land with a thud. A manager who emphasizes competitive advantage to a team that prioritizes Harmony and Cooperation will create more anxiety, not less.

A Quick Values Audit for Your AI Rollout

Before your next all-hands meeting or pilot program, try this:

  1. Map the values landscape. Which of the 56 human values are most activated in each department or team? (Not sure? That's what values research is for. Start with the full list of all 56 values, which you can download at www.davidallisoninc.com/resources.)

  2. Identify the threat perception. For each group, ask: what does it feel like AI is threatening? Employment Security? Independence? Respect? Name it specifically.

  3. Reframe the message. Craft your AI communication to address the specific values at risk. Security-driven teams need reassurance. Autonomy-driven teams need agency. Trust-driven teams need proof.

  4. Check your managers. Are they equipped to have values-based conversations? Or are they just forwarding the corporate deck?

  5. Watch for mismatches. If your messaging promises Personal Growth but your implementation screams efficiency, people will notice. They always notice.

The Bottom Line

It’s really quite simple: AI adoption stalls because people fear what technology might do to what they value most.

And Gallup's data confirms what Valuegraphics research has shown for years: you can't communicate your way past a values conflict. You have to design for it.

Once you understand the values in play across your organization, AI adoption rises because anxiety drops. Resistance fades because people can finally see themselves in the future you're proposing.

And that shift won't happen in a training session. It will happen when leaders take the time to understand what actually matters to the people they're asking to change.

If you know what people value, you can change what happens next. Download free tools, data, and reports at www.davidallisoninc.com/resources


