Your AI Rollout Is Failing. Here's Why Nobody's Using It.

At CES 2026, the tech industry's message was deafening: AI in everything. Robotics, banking interfaces, employee platforms, health monitors, creator tools. If it plugs in or connects to the internet, it now runs on AI.

Billions are being spent. Features are being shipped, and in all honesty, the demos look incredible. This is full-on Jetsons-level whiz-bang cool. (And if you get that reference, I know how old you are.)

But… people still aren't buying in.

Banks roll out AI-powered customer service, and trust drops. Companies deploy AI tools for employees and watch them invent workarounds like mice in a maze, hacking their way to the treats at the end. Healthcare systems integrate AI diagnostics, and physicians find reasons to ignore the recommendations. The technology works, at least most of the time. The adoption, far too often, doesn't.

This isn't a tech problem. It's a values problem.

The Adoption Gap No One's Talking About

Here's what's happening: AI is being designed as if efficiency, speed, and productivity are what everyone wants.

But they're not what everyone wants. Far from it. 

Some people care most about Independence. Others need Security. Some want Trustworthiness. Others value Creativity, Equality, or simply getting Balance back in their day. The values that drive behavior aren't universal, and they definitely aren't the ones printed in your company's mission statement.

For the past decade, my company has maintained the world's largest database of human values—nearly a million surveys across 180 countries in 152 languages. We've identified exactly 56 core values that drive human behavior. And what the data shows is this: people adopt tools that give them more of what matters most to them, not what matters most to the organization rolling out the tool.

Until you know which values are in play for the people expected to use your AI, you're guessing. And guessing is a terrible adoption strategy.

The Values-First Framework for AI Adoption

Want to actually get people using the AI you're implementing? Start here:

1. Identify the values at stake
Don't assume. Survey the actual people who'll use this tool. Which of the 56 core human values matter most to them? Is it Personal Responsibility? Financial Security? Belonging? Creativity? Education? You can't design for alignment if you don't know what you're aligning with.

2. Map your AI to those values
Take your top 5-7 values and get specific. If Security ranks high, how does your AI tool make people feel safer? If it doesn't, that's your problem. If Creativity matters, does your AI feel like it's replacing their creative input or amplifying it? If Belonging is crucial, does this tool connect people or isolate them?

3. Communicate in values language
Stop selling features. Start talking about what people will get more of. "This AI saves you 3 hours a week" only works if Balance or Leisure are top values. For someone who values Relationships, frame it as "3 more hours to spend with the people who matter." Same tool, different values lens, completely different response.

4. Design the experience around values, not just functionality
If your audience values Trustworthiness, your AI needs to show its work. If they value Independence, it needs to feel like a tool they control, not one that controls them. If they value Equality, you'd better be ready to explain how it makes decisions. The user experience has to reflect what matters to them.

5. Measure adoption through a values lens
Track more than usage rates. Are people using the AI in ways that align with their values, or are they finding workarounds because it conflicts? Exit interviews and resistance patterns will tell you everything about values misalignment.

What Changes When You Get This Right

When AI aligns with human values, everything shifts.  

Adoption stops feeling like an uphill battle. Employees don't need endless training sessions because the tool just makes sense to them. Customers don't need convincing because they can see how it serves what they care about. We track what we call the Values Dividends: the measurable changes that occur when values are honored and people get more of what matters most to them. That's how we know trust, loyalty, and engagement will increase.

We've seen this play out across industries. Organizations using values data to guide technology adoption see engagement rates climb by 40%. Trust increases by 20%. ROI goes up by 12%. Not because the technology changed, but because the people using it finally saw themselves in it.

The companies that win the AI race won't be the ones with the most advanced models or the flashiest demos. They'll be the ones who figured out that AI adoption isn't a technology problem, it's a human one.

And humans are driven by values. That's the missing layer. That's what separates AI that transforms organizations from AI that collects dust in the corner of someone's messy desktop. 

So for me, the big question coming out of CES isn't about what AI can or can't do. It's about whether anyone will actually use it. And the answer to that question starts with understanding what matters most to people. In other words: what do people value most of all?

If you want to Change What Happens Next, start with what matters most. And that’s always going to be about what people value.

Values are the answer. It’s up to us to put them to work. 

Download free tools, data, and reports at www.davidallisoninc.com/resources


#values #keynotespeaker #leadership #ai #engagement #sales #valuesdriven #valuegraphics #humancentric #data


