
Most AI adoption readiness strategies fail before they begin. Not because the people aren’t smart enough. But because organisations focus on rollout when they should be focused on people.

By Emma Bromet | Partner – Data, Mantel

Executive summary

Key takeaways for business leaders

  • Successful AI implementation requires focusing on people and culture as well as technology rollout.
  • Leaders must visibly demonstrate their own learning and normalise making mistakes with new tools.
  • Teams need a psychologically safe environment to experiment without fear of judgement.
  • Organisations should allocate dedicated, unstructured time for employees to practise using AI in their daily workflows.
  • Knowledge sharing among peers prevents information silos and accelerates team proficiency.
  • Employees must understand exactly how these tools benefit their specific daily tasks to fully embrace them.

This guide isn’t about which AI tools to buy or how to write a better prompt. It’s about something more human than that: creating the conditions where your team actually wants to learn. Where curiosity is rewarded, mistakes are welcomed, and no one feels left behind.

And the stakes have never been higher. Organisations are making bold, public commitments to AI; some to the tune of $100 million or more with vendors like OpenAI. Those investments deserve to land. But technology alone won’t deliver the return. Our experience shows that the organisations truly getting value from these commitments aren’t just the ones with the best tools; they’re the ones who’ve built the culture to use them well. That’s the gap this guide is designed to close.

The five signals in this piece are drawn from the frontlines of AI adoption across real teams and real organisations. Some of what you’ll read might prompt a moment of honest reflection, and that’s a good thing. The gap between where most organisations think they are and where they actually are is something worth seeing clearly, because once you see it, you can do something about it.

The encouraging news is that none of this requires a big budget, a new platform, or a company-wide mandate. It starts with small moments: a leader admitting they’re still figuring it out, a team laughing together over a prompt that went sideways, a junior employee teaching their manager something new.

Those moments are where culture actually changes.

Read this as a starting point, not a scorecard. Then pick one signal, one team, and one week. That’s enough.

What is AI adoption readiness?

AI adoption readiness refers to an organisation’s cultural and operational capacity to integrate artificial intelligence into daily workflows. It goes beyond acquiring technology to focus on employee psychological safety, dedicated practice time, and open knowledge sharing. A ready organisation empowers its team to experiment and learn collectively.

Signal 1

1. Leaders demonstrating AI adoption readiness openly

What this signal measures: Whether people in senior roles are visibly using AI tools – and talking openly about what’s working and what isn’t.

What healthy looks like: Your leaders aren’t just endorsing AI in all-hands meetings. They’re showing up to team standups saying “I used Claude to prep for this call, and it saved me 40 minutes.” They’re sharing a prompt that flopped. They’re asking junior team members to teach them something.

Visible learning from the top removes the career risk of being seen as someone who doesn’t know what they’re doing.

The gap to watch for: Beware the ‘Delegation Trap’ where leaders treat AI as a tool for their assistants or juniors to ‘figure out’ and report back on. This signals that AI is a clerical shortcut, not a strategic capability. When a leader delegates the learning, they lose the ability to lead the transformation.

We see roles like ML engineering adopt AI faster than everyone else, not because they’re smarter, but because uncertainty is already part of their job description. They’re used to being beginners. Most of your workforce has never had to be a beginner at work. If leaders don’t model it, no one else will either.

What to do this week:

→ Ask 3 leaders in your organisation: “What AI tool have you used in the last 7 days?”

→ If the answer is vague or defensive, that’s your signal.

→ Start a weekly Slack thread or team ritual where one leader shares a prompt they tried, good or bad.

→ Frame it as: “Here’s what I’m learning” – not “here’s what you should do.”

The goal isn’t to make leaders AI experts. It’s to make learning visible.

The Reverse Mentorship Loop: At Mantel, we see the best adoption when leaders lean into ‘Reverse Mentorship.’ Don’t just show your team a prompt; ask a developer or a junior engineer to ‘code-review’ your prompt. It flattens the hierarchy and turns AI into a shared craft rather than a top-down mandate.

Signal 2

2. Establishing guardrails to create a safe environment to fail

What this signal measures: Whether your team feels psychologically safe enough to try things that might not work.

What healthy looks like: People share half-baked AI experiments in team channels. Someone posts a prompt that gives a terrible output, and everyone laughs and troubleshoots together. Mistakes are treated as data, not evidence of incompetence.

The gap to watch for: ‘Shadow AI’ usage. When it isn’t safe to fail or to admit to using the tools, people use them in secret. This is a massive risk, not just for data security but because the best prompts and workflows stay locked in private tabs instead of becoming institutional knowledge.

This is where adoption dies. Silently. Politely.

What to do this week:

→ Run a 15-minute “AI fails” session in your next team meeting. Ask everyone to share one thing they tried that didn’t work.

→ Lead with your own failure first. This sets the tone immediately.

→ Watch who doesn’t share anything. That silence is information.

→ After the session, ask the group: “What made it hard to admit that?” The answers will tell you everything.

Psychological safety isn’t built in workshops. It’s built on small, repeated moments where vulnerability is met with curiosity rather than judgment.

Stress Testing Your Workflow: If an AI gives you a perfect answer, try to break it. Ask it to find the flaws in its own logic. Sharing these ‘near-misses’ with your team is how the team learns not just what the tool can do, but exactly where its guardrails end and what to be cautious of.

Signal 3

3. People have time to actually practise

What this signal measures: Whether your team has protected time to experiment with AI tools or whether learning is squeezed into the margins of an already full day.

What healthy looks like: There’s dedicated, guilt-free time in the week for people to explore, test, and practise. Not a training course. Not a workshop. Unstructured time to try things in the context of their actual work.

We have seen success building AI into existing processes and workflows, for example, as part of the project delivery lifecycle. We ask questions like “How could we do this differently now that we have these AI tools?” or “How could we deliver better or faster by leveraging AI?”

The gap to watch for: Organisations that frame AI adoption as something people should be doing “on top of” their existing workload. Learning becomes a guilt trip. People feel behind before they’ve even started.

What to do this week:

→ Block 30 minutes on your own calendar this week labelled “AI practice.” Treat it like a meeting you can’t move.

→ Give your team explicit permission to do the same. Say it out loud. In writing. More than once.

→ Pick one task you do every week and commit to doing it with AI assistance for the next 4 weeks. Not to save time immediately — to build the habit.

→ At the end of 4 weeks, compare your outputs. The improvement will be visible.

Practice needs permission before it becomes a habit.

Signal 4

4. Peer learning is happening naturally

What this signal measures: Whether knowledge about AI tools is flowing laterally across your team or sitting in silos with a handful of early adopters.

What healthy looks like: Someone figures out a prompt that saves them 2 hours a week and immediately shares it. There’s a Slack channel, a shared doc, or a standing agenda item – some infrastructure for “here’s what I learned this week.” The people who know more are teaching the people who know less, and it feels normal.

The teams making real progress aren’t full of individual power users. They’re learning in public with each other.

We’ve found that ‘Prompt Liquidity’ is a powerful mechanism. This means your best workflows aren’t trapped in private chats. Whether it’s a pinned Slack thread or a shared GitHub repo of ‘Golden Prompts,’ the goal is to make the internal ‘cost of curiosity’ as low as possible. If a colleague has already solved a problem with AI, you shouldn’t have to solve it again from scratch.

The gap to watch for: AI knowledge concentrated in a few people who are quietly becoming significantly more productive while everyone else falls further behind. This creates a two-speed team, and it’s already happening in most organisations.

McKinsey found the number of jobs explicitly requiring AI fluency grew from 1 million in 2023 to 7 million in 2025. The gap between those who share knowledge and those who hoard it is widening every quarter.

What to do this week:

→ Identify your top 2-3 AI users on your team. Ask them: “What’s one thing you’ve figured out that you haven’t shared yet?”

→ Create a dead-simple prompt library. A shared Google Doc is enough. Label it “Things that actually work.”

→ Run a 20-minute “show and tell” in your next team meeting. One person, one tool, one real use case.

→ Make sharing a habit before it becomes a job requirement.

The organisations that win on AI won’t be the ones with the best tools. They’ll be the ones who learned together fastest.

Signal 5

5. There’s a clear “Why this helps you” narrative for every role

What this signal measures: Whether your team understands how AI makes their specific job better, or whether they’re still waiting to find out if it’s coming for it.

What healthy looks like: Every person on your team can finish this sentence without hesitating: “AI helps me specifically by ___.” The narrative isn’t abstract or corporate. It’s personal. It’s about their actual Monday morning.

The reframe that changes everything: AI won’t take your job. But someone using AI will.

That’s not a threat. It’s an invitation. But most organisations are delivering it as a threat, and wondering why adoption is slow.

The gap to watch for: ‘Tool Overload.’ Don’t assume that giving people a login is the same as giving them a solution. If an employee can’t explain how AI makes their specific Tuesday afternoon easier, the tool is just another item on their ‘to-do’ list. Adoption lives or dies in the utility of the daily workflow.

What to do this week:

→ For every role on your team, write one sentence: “AI helps a [job title] by [specific task].” If you can’t do it, your team can’t either.

→ In your next 1:1s, ask: “What’s the most repetitive thing you do each week?” That’s your entry point.

→ Pair one person’s repetitive task with one AI tool. Let them own the experiment. Give them a week.

→ When they come back, ask them to share what happened with the team. That story, whether good, bad, or messy, is worth more than any training deck you’ll ever buy.

People don’t resist change. They resist change that feels like it’s happening to them.

Make it happen with them instead.

What’s Next: Going Deeper

This checklist gives you the signals. What it can’t do is build the culture for you.

If you’re leading AI adoption inside an organisation and you want a structured way to move from diagnosis to action, that’s exactly the work we’re doing at Mantel.

The organisations that figure this out in the next 12 months will have a compounding advantage that’s very hard to close later.

Start with the checklist. Then let’s build something that lasts.

AI Adoption Readiness Signals: Healthy Behaviours vs. Gaps

Signal     | What healthy looks like                                              | The gap to watch for
Leadership | Leaders share their own experiments and failures openly.             | Leaders delegate learning to junior staff.
Safety     | Mistakes are treated as data, and teams troubleshoot together.       | Secret usage or “Shadow AI” due to fear of failure.
Time       | Dedicated, unstructured time is provided to test tools.              | Learning is squeezed into margins on top of existing workloads.
Knowledge  | Workflows and prompts are shared laterally across the team.          | Knowledge becomes concentrated in a few early adopters.
Narrative  | Employees can explain exactly how the tools help their specific job. | Adoption feels like a mandate happening to employees.

Frequently asked questions:

We’ve already rolled out AI across the business. Is it too late to use this?

Rollout and adoption are different things. Most organisations have done the first and skipped the second. If people aren’t using the tools confidently and consistently, you’re still at the starting line, and that’s exactly where this guide picks up.

Do we need a big budget or a dedicated AI team to act on this?

No. None of the five signals require new platforms, large investments, or a company-wide mandate. Most of what’s suggested here can be started this week with what you already have.

What if our leaders aren’t using AI themselves?

That’s Signal 1, and it’s the most common gap we see. The fix isn’t a training course. It’s making visible learning the norm. Leaders don’t need to be experts. They need to be willing to be beginners in public.

How do we know if our team feels safe enough to experiment?

Watch what happens when something goes wrong. If mistakes are quietly buried, that’s your answer. A healthy team shares failed prompts and laughs about them. If no one’s admitting anything isn’t working, people are experimenting in secret, and that’s where adoption quietly dies.

We’re already short on time. How do we add AI practice on top of everything else?

You don’t add it on top. That’s the trap. Protected practice time needs to replace something, or be built into existing workflows. Even 30 minutes a week, done consistently, compounds quickly. The goal is habit, not heroics.

What if only a few people on the team are actually using AI?

That’s Signal 4. A two-speed team is already forming in most organisations. The fix is getting knowledge flowing laterally: shared prompt libraries, quick show-and-tells, a culture where figuring something out means sharing it, not hoarding it.

How do we get people to actually care about AI rather than just comply?

Make it personal. Every person on your team needs to be able to finish the sentence: “AI helps me specifically by ___.” Abstract or corporate narratives don’t move people. Their actual Tuesday afternoon does.

Where do we start if we’re feeling overwhelmed?

Pick one signal, one team, and one week. That’s it. This isn’t a scorecard. It’s a starting point.

Can Mantel help us go deeper on this?

Yes. The checklist diagnoses where you are. If you want a structured path from diagnosis to action, that’s the work Mantel does with organisations. Reach out to start that conversation.
