Why Banks Are Struggling with AI Adoption, and What the Values Data Reveals
Financial services has spent more on AI than almost any other industry. Adoption rates are still stuck in the basement. There's a reason for this, and it has nothing to do with legacy systems.
Here's what might help: The FinServ AI Values Matrix. It maps the three biggest AI use cases in banking against the values most likely to create resistance. Once you see the collision points, you can start designing around them instead of slamming into them repeatedly.
The Spending-Adoption Paradox
The numbers look absurd when you put them side by side. Banks and other financial institutions are pouring billions into AI. Accenture estimates that generative AI could add up to $340 billion in value to the banking industry annually. The opportunity is real.
And yet. Frontline adoption is crawling. Advisors aren't using the tools. Customer service agents are working around the systems. Compliance officers are double-checking everything AI produces, essentially doing the work twice.
The standard explanation is regulation. FinServ is heavily regulated, so AI adoption is naturally slower.
That's partially true. But it's also insufficient. Because the resistance isn't coming from the compliance department. It's coming from the people on the floor who have tools available and aren't using them.
The Values Beneath the Resistance
The Valuegraphics Database has given us profiles of people in financial services roles: advisors, bankers, analysts, and service professionals. The patterns are clear, and they explain a lot.
Trustworthiness (ranked 19th globally but much higher in FinServ) is central to financial services identity. These are people who built careers on being the reliable one. The one clients trust. The one whose name goes on the recommendation.
AI threatens this in ways that feel existential. If I use AI to draft a client letter, and that letter contains an error, whose fault is it? Mine. But I didn't write it. The loss of control feels like a loss of trustworthiness, even if the output is fine.
Financial Security (ranked 3rd globally) is everywhere in this industry, for obvious reasons. People who work in finance tend to value financial security for themselves, not just for clients. AI represents uncertainty. Uncertainty threatens financial security. The math isn't hard.
Loyalty (ranked 7th) shows up in interesting ways. Many FinServ professionals feel loyalty to clients that goes beyond transactions. They worry AI will damage relationships they've spent years building. They resist tools that feel impersonal, even when those tools are efficient.
The FinServ AI Values Matrix
Here's the tool I promised. Match your AI use case against the values it touches.
Use Case: AI-generated client communications
- Threatens: Trustworthiness (my name is on this), Loyalty (this feels impersonal)
- Solution: Position AI as a first draft that the advisor always personalizes and approves. Never send anything AI-generated without human review and human sign-off. Make the human layer visible to clients.
Use Case: AI-driven investment recommendations
- Threatens: Trustworthiness (what if it's wrong?), Financial Security (what if I become obsolete?)
- Solution: Frame AI as research support, not decision-making. The advisor sees more options faster, but the judgment remains human. Track and celebrate instances where human judgment improved on AI recommendations.
Use Case: AI-powered customer service
- Threatens: Relationships (clients want people), Loyalty (we promised personal service)
- Solution: Use AI for the transactional queries that clients don't want to spend time on anyway. Flag anything relationship-relevant for human handling. Make the handoff seamless.
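If your team plans AI rollouts in code, the matrix is simple enough to encode as a lookup table. Here's a minimal sketch, in Python; the names VALUES_MATRIX and check_use_case are illustrative, not part of any existing Valuegraphics tool:

```python
# A minimal sketch of the FinServ AI Values Matrix as a lookup table.
# Names and structure here are hypothetical, chosen to mirror the
# matrix above, not taken from any real product or library.

VALUES_MATRIX = {
    "client_communications": {
        "threatens": ["Trustworthiness", "Loyalty"],
        "solution": "AI drafts; the advisor personalizes, approves, and visibly signs off.",
    },
    "investment_recommendations": {
        "threatens": ["Trustworthiness", "Financial Security"],
        "solution": "AI is research support; judgment stays human and gets celebrated.",
    },
    "customer_service": {
        "threatens": ["Relationships", "Loyalty"],
        "solution": "AI handles transactional queries; relationship issues go to humans.",
    },
}

def check_use_case(use_case: str) -> None:
    """Print the values a planned AI use case is likely to collide with."""
    entry = VALUES_MATRIX.get(use_case)
    if entry is None:
        print(f"No matrix entry for '{use_case}'; map its values before rollout.")
        return
    print(f"{use_case} threatens: {', '.join(entry['threatens'])}")
    print(f"Design around it: {entry['solution']}")

# Example: screen a planned rollout before launch messaging is written.
check_use_case("investment_recommendations")
```

The point of encoding it isn't automation. It's forcing every new AI initiative through the same question: which values does this collide with, and what's the design that routes around the collision?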
What Changes When You Design for Values
Banks that understand this dynamic do things differently.
They don't launch AI with efficiency messaging. They launch it with protection messaging. "This helps you serve clients better" beats "this saves you time" every time, because serving clients better aligns with Trustworthiness and Loyalty. Saving time sounds like a prelude to headcount reduction.
They don't measure success by adoption rates alone. They measure by advisor confidence. Are people comfortable with these tools? Do they feel like the tools support their professional identity rather than threatening it?
They don't roll out to everyone at once. They start with people whose values profiles make them natural early adopters (high Personal Growth, high Creativity) and let enthusiasm spread through the relationship networks that already exist.
This is the conversation I have with financial services organizations wrestling with AI adoption. The technology vendors can handle the systems integration. But someone needs to handle the human integration, and that requires understanding what the humans actually care about.
The spending is already happening. The technology is already there.
The missing piece is values alignment.
It's the only thing the budgets aren't buying.
Remember: if you know what people value, you can change what happens next.
Download free tools, data, and reports at www.davidallisoninc.com/resources
Want to know What Matters Most to the people you need to inspire?
Use the free Valueprint Finder to see how your values compare.
Find out why people call David “The Values Guy.”
Search the blog library for ways to put values to work for you.