- Speaker #0
You're listening to Gunnick's Digital Podcast, where we share curated insights on digital strategy, artificial intelligence, and the tools that drive performance.
- Speaker #1
Welcome to the Deep Dive. Today, we're jumping into something huge, a real paradox happening in companies everywhere. Why AI transformation projects are, well, often failing.
- Speaker #2
Yeah, it's fascinating. You see these massive announcements, these big numbers.
- Speaker #1
Exactly. Like, what was it? 73% of major companies are already implementing AI or have solid plans to. And McKinsey thinks we'll hit 70% adoption across industries by 2030. It's like a done deal.
- Speaker #2
It really does sound inevitable. But, you know, if you scratch just below the surface of those headlines, the picture changes quite a bit.
- Speaker #1
How so?
- Speaker #2
Well, the internal failure rate for these huge, expensive AI projects, it's stubbornly high, especially for those mid-market companies trying desperately to keep up.
- Speaker #1
And we heard some wild stories putting this together. I remember one source talking about a multi-billion-dollar company. They poured millions into a cutting-edge AI marketing platform. Worked like a dream in the lab, perfect demos. But then, 12 weeks after rollout, adoption was stuck. Like, really stuck. 15%.
- Speaker #3
15%.
- Speaker #1
Wow. Yeah. And get this. Employees were actually inventing manual workarounds, doing extra work, basically, just to avoid touching the new AI system.
- Speaker #2
And that story highlights the absolute core issue we need to unpack today. The shocking thing is, it's almost never about the technology itself.
- Speaker #1
Really? The AI usually works?
- Speaker #2
Usually, yes. The tools function as advertised, more or less. The real bottleneck, the thing that trips everyone up, it's human. It's us.
- Speaker #1
So the old ways of managing change, you know, send more emails, do more training PowerPoints, that stuff doesn't cut it here.
- Speaker #2
Not for AI, no. That old playbook might have worked okay for, say, rolling out a new email client or something relatively simple, but it completely falls apart when you're introducing intelligent automation. The game changes.
- Speaker #1
Okay, so our mission for this deep dive is clear then. We need to get under the hood of this new kind of AI resistance. Why is it different?
- Speaker #2
Exactly. Traditional change management just doesn't touch the deep-seated fears.
- Speaker #1
We need to figure out the frameworks, the approaches that can actually shift these mindsets, turn that resistance into genuine readiness.
- Speaker #2
That's the goal. Because this resistance feels fundamentally different, doesn't it? It's deeper than just, you know, grumbling about learning a new spreadsheet program.
- Speaker #1
It does feel different. Why is that? Why does AI feel so primal sometimes? If the goal is making work easier, being more productive, why are people actively fighting the tools?
- Speaker #2
Because the conversation immediately pivots. It's not just about usability anymore. You know, where's the button for X? It becomes about survival, about your long-term career security, your value. When you bring in a system that can learn and potentially make decisions that used to be yours, you're poking at some really fundamental psychological fears. Fears about a person's core identity.
- Speaker #1
Okay, so what are these big fears? The ones managers often seem to miss when they're focused on the tech specs.
- Speaker #2
Right, let's break them down. Fear factor number one, job replacement and maybe even worse, irrelevance.
- Speaker #1
The big one.
- Speaker #2
It is. And the research backs this up strongly. Over half of employees, 53% in one study, genuinely fear that just using AI will signal to their bosses that they're replaceable.
- Speaker #1
Wow. So using the tool makes them feel less secure.
- Speaker #2
Exactly. And that anxiety is powerful. Get this. 52% of employees actually feel uncomfortable telling their managers they're using AI for important tasks. They hide it.
- Speaker #1
That's completely upside down. Hiding a tool that could make them better at their job.
- Speaker #2
Yeah.
- Speaker #1
just to seem indispensable. The tech might work, but the social dynamic is broken.
- Speaker #2
Perfectly put. It's a broken social contract. Okay, so fear factor number two, cognitive overload and trust.
- Speaker #1
Explain that one.
- Speaker #2
Think about traditional software, a spreadsheet, for instance. It's transparent, right? You can click on a cell, see the formula, audit how it got the number.
- Speaker #1
Sure.
- Speaker #2
AI often isn't like that. Machine learning models can be black boxes. The decisions happen inside, hidden behind complex algorithms you can't easily inspect.
- Speaker #1
Okay, I see. Less transparency.
- Speaker #2
Right. And that lack of transparency makes people question the output. They feel like they're losing control. It can also feed into something called AI shame.
- Speaker #3
AI shame.
- Speaker #1
What's that?
- Speaker #2
It's the fear of being judged. Judged as lazy or maybe incompetent because you relied on the AI instead of doing all the legwork yourself.
- Speaker #1
Oh, OK. So if I use AI to whip up a market analysis, I might worry my team thinks I didn't really do the work myself, like I cheated.
- Speaker #2
Precisely. It becomes a crisis of intellectual ownership almost. And then there's the third factor, which is huge, especially for highly skilled professionals, identity threat.
- Speaker #1
Identity threat. How does AI threaten someone's identity?
- Speaker #2
Well, think about how professionals define themselves. Often it's by their unique expertise, their special skills. A top financial analyst prides themselves on spotting that subtle market trend no one else saw.
- Speaker #1
Or an experienced HR manager values their intuition, their ability to read people in an interview.
- Speaker #2
Exactly. Their unique feel for the situation. When an AI comes along and starts performing tasks that seem similar, tasks that were core to their professional identity and sense of uniqueness, it's a direct challenge.
- Speaker #1
That sounds like a serious status crisis. How do you manage that? You've got a highly valuable expert, someone with decades of experience, suddenly feeling like their hard-won knowledge is being turned into a commodity by an algorithm.
- Speaker #2
It's incredibly personal. That's why resistance isn't just some abstract organizational problem you can solve with a memo.
- Speaker #1
So we can't treat everyone the same.
- Speaker #2
Absolutely not. That's the crucial next step. We have to distinguish between resistance, which is active opposition. People may be spreading negative gossip or even subtly sabotaging the rollout.
- Speaker #1
Right. The active fighters.
- Speaker #2
Yes. And then there's reluctance, which is more passive. It's hesitation, uncertainty, maybe people just needing more proof or hand-holding before they commit.
- Speaker #1
Okay. So resistance versus reluctance, active versus passive. Reluctance sounds more manageable, maybe with better communication.
- Speaker #2
Generally, yes. Reluctance can often be addressed with clear information, good training, showcasing benefits. But resistance, that active pushback, that requires getting to the emotional core, addressing those fears we talked about directly.
- Speaker #1
And who are the groups most likely to show that active resistance? Where should managers be looking?
- Speaker #2
Well, interestingly, it's often middle management that shows the most active resistance.
- Speaker #1
Middle managers. Why them?
- Speaker #2
They're caught in a tough spot. They get pressure from the top to implement the AI, make it work, but they also face the resistance and anxieties from their teams below. Plus, they often worry about their own roles. Will AI automate the supervision, the decision making, the expertise that gave them authority?
- Speaker #1
The classic middle management squeeze amplified by AI makes sense. Who else?
- Speaker #2
Those highly skilled workers we mentioned earlier, the experts. They have more status, more specialized expertise that feels threatened. So they have more to lose psychologically. And because they often have high status, their negativity can really poison the well for the entire team.
- Speaker #1
So their attitude has an outsized impact.
- Speaker #2
Definitely. And beyond individual roles or status, you also see specific departmental anxieties pop up.
- Speaker #1
Like what? Different functions worry about different things.
- Speaker #2
Exactly. Sales teams, for example, might worry that AI will make customer interactions feel cold, impersonal, damaging relationships.
- Speaker #1
OK.
- Speaker #2
Finance teams, they're often terrified of errors. And given all the news about AI potentially enabling sophisticated fraud, that fear of a public mistake, maybe even regulatory trouble, is very real for them.
- Speaker #1
And operations.
- Speaker #2
Operations teams are typically focused on stability and efficiency. They worry, rightly so sometimes, about disrupting processes that, while maybe old, are proven and working. An AI glitch in operations could halt production or service delivery entirely. Big risk.
- Speaker #1
Okay, so the fears are deep, varied, and sometimes role-specific. The strategic response, then, it sounds like it has to start with changing the story, reframing AI.
- Speaker #2
That's absolutely central. You have to shift the narrative away from replacement and towards augmentation.
- Speaker #1
AI is a helper, not a replacement.
- Speaker #2
Precisely. And this is where language becomes incredibly powerful. You need to help people visualize a positive future role for themselves with AI. The analyst isn't just crunching data anymore. They become a trend strategist using AI insights to make higher level judgments.
- Speaker #1
And the HR manager.
- Speaker #2
Maybe they shift from screening resumes to becoming a people-experience designer, using AI data to understand employee needs better and create a better workplace. It's about elevating the human role.
- Speaker #1
That requires a real shift in how we talk about jobs internally. Are there companies doing this reframing well? Any examples?
- Speaker #2
Yeah, there are some good ones. Amazon, facing fears about warehouse automation, made a very public commitment to creating new human roles alongside the robots, things like robotics technicians. They addressed the fear head on.
- Speaker #1
OK, so they created a narrative of new opportunity.
- Speaker #2
Right. And Walmart did something similar. They rolled out a generative AI assistant to, I think it was, 50,000 corporate employees. But crucially, they framed it only as a productivity aid.
- Speaker #1
How so?
- Speaker #2
They focused on tasks like summarizing long documents or drafting emails and memos, things that help but don't replace core judgment. The messaging was clearly this tool is here to support you, not supplant you.
- Speaker #1
Got it. So the language really matters. Instead of saying AI is going to automate Task X, you should say something like.
- Speaker #2
Like, AI will handle the routine data entry for Task X, freeing you up to focus on the strategic analysis part. You connect it to a better personal outcome for that person.
- Speaker #1
You'll have more time for the interesting work, the meaningful stuff.
- Speaker #2
Exactly. Focus on the benefit to the person, not just the efficiency gain for the company. But here's the catch. Before you even start crafting those messages, there's a critical step most companies completely miss. They get excited about the tech, they work on the messaging, and they jump straight into implementation.
- Speaker #1
And they skip what, exactly? What's the foundational piece?
- Speaker #2
They skip checking the organization's vital signs first. They fail to assess the actual readiness of the organization. You have to conduct an AI readiness assessment. You need a baseline.
- Speaker #1
Like a checkup before running a marathon.
- Speaker #2
Perfect analogy. You need to know if the organization is culturally, operationally, and technically fit enough before you start this demanding AI transformation journey.
- Speaker #1
How is this different from, say, digital readiness? We've been talking about digital transformation for decades now.
- Speaker #2
It's related, but deeper. Digital readiness was often about mastering static tools. You learn Microsoft, you learn Salesforce, and those tools pretty much work the same way every time. AI readiness is different. It means getting comfortable working with systems that learn, that evolve, that might introduce ambiguity or change how they operate over time. It demands continuous learning from everyone and a higher tolerance for things not being perfectly predictable.
- Speaker #1
So it's about adaptability to dynamic systems, not just using fixed tools. OK, so this assessment, this checkup. What are you measuring? What are the vital signs for AI readiness?
- Speaker #2
We usually look at it through a five-dimension readiness model. Think of it like scoring the company on five critical areas. You really need decent scores across all five to have a good shot at success.
- Speaker #1
Okay, what's dimension one?
- Speaker #2
Number one is leadership commitment. And this isn't just about signing the checks. Are leaders actively championing this? Are they providing the resources, the time? And crucially, are they visibly using the AI tools themselves? Executive modeling is non-negotiable.
- Speaker #1
If the leaders aren't using it, why should anyone else? Got it. What's two?
- Speaker #2
Second, cultural openness. Does the company culture encourage experimentation? Is it safe to try things and maybe fail? Or is there a culture of fear around mistakes? AI implementation is a learning process. You need psychological safety.
- Speaker #1
Makes sense. Number three.
- Speaker #2
Third is workforce capability. Do your people have the foundational skills needed? Not just technical skills to operate the AI, but the adaptability skills to cope when processes change, which they inevitably will. Are there big skill gaps you need to address first?
- Speaker #1
Right. Capability, fourth.
- Speaker #2
Fourth, process flexibility. How rigid are your current workflows? AI often requires significant changes to how work gets done, how information flows. If your processes are set in stone, the AI project will likely break against them.
- Speaker #1
So you need agile processes. And the fifth.
- Speaker #2
Finally, number five is technology infrastructure. This is where the tech readiness comes in, but it's only one piece of the puzzle. Is your data clean, accessible, and relevant? Does the new AI integrate reasonably well with your existing systems? Is there adequate technical support?
- Speaker #1
Okay, leadership, culture, capability, processes, and infrastructure. If you score low on, say, culture or leadership, throwing money at the best AI platform isn't going to fix the underlying human resistance.
- Speaker #2
Exactly. You're setting yourself up for failure. The tech won't save a resistant or unprepared organization.
- Speaker #1
All right. This is making a lot of sense. We've diagnosed the problem, identified the fears, looked at the readiness factors. Let's shift now to intervention. For our listeners managing these changes, what are the practical tools? How do you start building that competence and trust?
- Speaker #2
OK, practical tools. We absolutely have to start with communication, but not just any communication. It needs to be structured specifically to defuse that AI anxiety we talked about. A useful framework here is the CLEAR communication method.
- Speaker #1
CLEAR. Okay, what does that stand for?
- Speaker #2
C is for clarify the purpose. And be specific and personal. Don't just say increase productivity. Say, this will help you spend one hour less per week on tedious data entry, freeing you up for client calls. Focus on the personal win.
- Speaker #1
Okay, clarify. L.
- Speaker #2
L is link to their world. Make the AI feel less alien. Compare it to familiar technology they already trust and use. Think of it like your smart calendar assistant, but for finding sales leads, or it's like the navigation app on your phone helping guide your decisions. Demystify it.
- Speaker #1
Okay, link. E.
- Speaker #2
E is explain what changes. Again, specifics matter hugely here. Clearly spell out which tasks the AI will handle and, just as importantly, which tasks the human retains control over and responsibility for. Ambiguity breeds fear.
- Speaker #1
Okay, explain. A.
- Speaker #2
A is address concerns proactively. Don't wait for the rumors and anxieties to bubble up. Acknowledge the potential discomfort directly. Talk about job security concerns openly. Explain the plan. Show you understand their perspective. And R is reinforce the partnership. Hammer home the message that AI is a tool to make people better, smarter, more effective. Emphasize that human judgment, human oversight, ethical considerations, these remain essential and irreplaceable. It's a human-AI team.
- Speaker #1
CLEAR: clarify, link, explain, address, reinforce. That framework definitely tackles the emotional side, the anxiety. But what about actually training people on these systems? They can be complex. How do you do that without triggering that AI shame or overwhelming them?
- Speaker #2
Great question. Because if the training itself is intimidating or confusing, people will just shut down and retreat. The key is gradual exposure in a safe environment. We use what's called a progressive exposure training model.
- Speaker #1
Progressive exposure. Okay, sounds like steps.
- Speaker #2
Exactly. Move through stages. You start with just awareness, short demos, maybe videos, focusing purely on the benefits, the results. Show them the why before the how. Keep it light.
- Speaker #1
Okay, awareness first, then.
- Speaker #2
Then you move to guided practice. This is hands-on but highly structured. Use sample exercises, step-by-step instructions in a controlled setting where they can't break anything important. Lots of support available.
- Speaker #1
Guided practice makes sense. Next stage.
- Speaker #2
Next is supported independence. This is where they start using the AI on their own real tasks. But, and this is crucial, with guaranteed support and regular check-ins scheduled. They're flying solo, but with a safety net right there.
- Speaker #1
OK, building confidence and the final stage.
- Speaker #2
Finally, full integration. They're using the tool confidently as part of their normal workflow, seeking support only when needed. They've internalized it.
- Speaker #1
That phased approach sounds much less daunting. You mentioned a safe environment for the guided practice. How important is that?
- Speaker #2
Oh, it's absolutely vital. You must create practice environments using dummy data, sandboxes where mistakes have zero real world consequences. People need to feel safe to click around, experiment, even mess up without worrying about embarrassing themselves or affecting actual work or customers.
- Speaker #1
That psychological safety is key to fighting off that AI shame and encouraging genuine learning.
- Speaker #2
Totally. It allows people to build competence without fear. And as competence grows, so does trust. It creates positive momentum.
- Speaker #1
So competence builds trust. What are the other signals, the tangible things employees watch for during a rollout that tell them, OK, I can trust this process, this technology?
- Speaker #2
They look for consistency and follow through. Big signals include seeing leadership actually using the tools, not just telling others to.
- Speaker #1
Walking the talk.
- Speaker #2
Exactly. They look for clear, easy-to-understand policies on data privacy related to the AI, and explicit statements about job security, written down, not just vague reassurances.
- Speaker #1
Clear policies. What else?
- Speaker #2
Responsive support is huge. When they hit a snag and they will, how quickly and effectively is help available? Slow or useless support kills trust fast. And finally, recognizing and even rewarding effort. Acknowledge people who are engaging, learning, adapting. Show that their effort is valued.
- Speaker #1
So, leadership modeling, clear policies, good support, and recognition. These are the trust signals that turn initial curiosity into sustained use.
- Speaker #2
You got it. Those signals build the foundation of trust needed for the long haul.
- Speaker #1
Okay. If we pull this all together, the core lesson seems crystal clear. AI transformation success isn't really about the coolest algorithm or the fanciest platform. It hinges almost entirely on the human element.
- Speaker #2
That's the whole story, really. It requires a much deeper, more psychological approach to change management than we're used to.
- Speaker #1
It's not just a tech project. It's a people project.
- Speaker #2
Absolutely. And it's not a one-off event. Real transformation needs a systematic, phased rollout, setting the foundation with readiness checks, careful pilot launches, then scaling up in waves. And critically, the long-term win isn't just about tracking how many people logged into the AI system.
- Speaker #1
What should we track then?
- Speaker #2
You need to continuously monitor behavioral change. Are people actually working differently? Are they leveraging the AI insights? Or are they just checking a box to satisfy a mandate, maybe still using their old workarounds on the side? That's the real measure.
- Speaker #1
Which means companies need to build these change management capabilities internally, right? This isn't something you can just outsource entirely.
- Speaker #2
I firmly believe that. The ultimate sign that your AI transformation is truly successful isn't just when the technology is working smoothly. It's when your organization has learned how to anticipate, manage, and adapt to these kinds of changes more independently.
- Speaker #1
So teaching your own leaders and champions to spot resistance patterns, to adjust the communication, to evolve the training continuously.
- Speaker #2
That's the sustainable advantage, building that internal muscle for managing change in the age of AI.
- Speaker #1
Which leaves us with a final thought for everyone listening. A provocative question, perhaps.
- Speaker #2
Go for it.
- Speaker #1
Given everything we've discussed, the fears, the need for readiness, the importance of trust, what internal capabilities do you need to start building in your organization today? What skills, what cultural shifts, what leadership behaviors are essential right now, so you're prepared to navigate not just this AI shift, but all the inevitable ones still to come?
- Speaker #0
Thanks for listening to GenX Digital Podcast. Follow us for more curated insights.