Most large organisations now have an "AI story".
There are board packs, pilot projects, AI roadmaps, steering groups and a growing stack of dashboards trumpeting impressive "hours saved" claims. Yet the measurable contribution to performance still lags the level of investment.
You can buy AI tools. You cannot buy adoption or redesigned work.
When you sit with the people actually doing the work, you still hear:
- "I am doing the same work, just with more windows open."
- "I do not really know when I am allowed to use AI and when I am not."
- "Apparently adoption is high, but my day does not feel any easier."
Across many organisations, the same pattern keeps showing up:
AI is being deployed as another tool, rather than designed around how work actually gets done.
Shallow AI, deep AI, and why this is not "just another rollout"
Most organisations are living through two waves of AI at the same time.
The first is shallow and broad:
- enterprise chat tools or AI assistants made available to everyone, often embedded in the tools people already use
- light guidance about prompts and "being more productive"
- rising usage numbers presented as evidence of success
This can be useful, but the impact on how work actually feels and flows is mixed.
The second is deep and focused:
- AI solutions, including agentic approaches, aimed at specific business problems
- for example, supporting complex case handling, preparing for client meetings, simplifying back office workflows
This deeper layer is where the real upside and the real risk sit. It is also where treating AI as "just another tool" becomes a problem.
On the surface, AI adoption looks like any other digital change: new capability, training, communications, usage tracking.
Underneath, it behaves differently:
- You are not just teaching people a new tool; you are asking them to work alongside something whose behaviour and logic they cannot fully see. Traditional enterprise platforms feel predictable and rule-based; AI assistants feel variable and opaque.
- Traditional tools mainly change how you navigate work. AI often changes the nature of the task itself. It shifts what counts as expertise and where judgement really sits.
- Traditional rollouts are relatively contained: a finance system for finance, a CRM for sales. General purpose AI shows up everywhere at once, often in ways central functions did not plan; once it is available it quickly becomes a background capability that people will try to apply anywhere they think it might help.
- Traditional digital change raises questions about skills and process. AI immediately raises questions about fairness, accountability, privacy and what is acceptable, even when the technology is "working".
All of this adds up to one critical difference:
AI adoption is fundamentally a trust challenge.
People will tolerate clunky systems they do not like. They will not willingly delegate judgement to a system they do not trust.
In most organisations, AI has to cross three trust gaps at once:
- trust in the technology itself - will it behave reliably enough, or fail in ways I cannot see coming?
- trust in the organisation - will my data and activity be used to support me, or to monitor and judge me?
- trust in myself - do I understand this well enough to use it safely without getting into trouble?
Underneath that sits a fourth, often under-acknowledged concern inside organisations: self-preservation.
Many people quietly believe that once AI is trusted and embedded, it will be used to justify role reductions, location moves or outsourcing. From that perspective, holding back on adoption is not irrational resistance. It is a rational response to a perceived threat.
Unless leaders are explicit about how AI-related efficiencies will be used - for example, to protect time for better service, innovation and learning, or to redeploy people into new roles - and then back that up with consistent behaviour, trust will remain thin.
Taken together, this is why the usual digital change playbook is not enough. If you accept that AI behaves differently and that trust is the real constraint, then AI at work only creates meaningful, sustainable value when three things line up:
Leaders, journeys and signals.
If any one of those is weak, you tend to get AI theatre rather than transformation.
Journeys - where AI value actually lives
Organisations are structured vertically. Work is experienced horizontally.
If you want to know whether AI is helping, "how much Copilot did we use?" is the wrong question. More useful questions sound like:
- What happened to the experience of "handle a complex complaint"?
- Did "close the month" become less brittle and frantic?
- Is "onboard into a new role" clearer, faster and less overwhelming?
- Has "book and manage a work trip abroad" become less painful?
These are employee journeys. They cross functions, tools and reporting lines. They are also where risk, friction and emotional load concentrate.
A credible AI agenda starts by naming a small number of priority journeys and saying, very clearly:
"These are the flows of work where we must be confident AI is helping, not harming."
From there, the conversation shifts from "where can we use this AI tool?" to "at which specific steps in this journey could AI genuinely help this group of workers?"
That might mean:
- summarising history
- drafting first versions of responses
- surfacing relevant policies or similar cases
- suggesting next actions while keeping decisions with humans
Design work becomes about moments in real journeys, not generic rollout.
This is also where a common concern appears: "If we design AI around specific journeys, will we not end up with a patchwork of one-off solutions?"
The way through is to hold two layers together:
- a platform layer - for example, a shared enterprise AI platform providing models, connectors, guardrails and telemetry
- an experience layer - the journey-specific flows, agents and interfaces where employees actually encounter AI in their work
Journey-first does not mean "build everything bespoke". It means using journeys to decide where to invest, and using a common platform so you only have to build the underlying capabilities once.
You can still give employees a single front door into AI at work. Behind that front door, different journeys and agents can be orchestrated on the same backbone.
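To make the two layers concrete, here is a minimal, illustrative sketch in Python. Every name in it (AIPlatform, JourneyAgent, FrontDoor, the example journeys) is hypothetical - it is not a reference to any particular product or API, just one way the "shared backbone, journey-specific front ends" idea could be expressed.

```python
from dataclasses import dataclass, field
from typing import Callable


@dataclass
class AIPlatform:
    """Shared platform layer: models, connectors, guardrails and telemetry."""
    guardrails: list[Callable[[str], str]] = field(default_factory=list)
    telemetry: list[dict] = field(default_factory=list)

    def complete(self, journey: str, prompt: str) -> str:
        # In a real system this would call an approved model through a connector.
        for check in self.guardrails:
            prompt = check(prompt)
        # Telemetry is recorded per journey, not per individual.
        self.telemetry.append({"journey": journey, "prompt_chars": len(prompt)})
        return f"[draft for '{journey}'] {prompt[:60]}"


@dataclass
class JourneyAgent:
    """Experience layer: one flow where employees actually meet AI."""
    journey: str
    platform: AIPlatform

    def assist(self, step: str, context: str) -> str:
        return self.platform.complete(self.journey, f"{step}: {context}")


class FrontDoor:
    """Single entry point that routes requests to journey-specific agents."""
    def __init__(self, agents: dict[str, JourneyAgent]):
        self.agents = agents

    def ask(self, journey: str, step: str, context: str) -> str:
        return self.agents[journey].assist(step, context)


platform = AIPlatform(guardrails=[str.strip])
front_door = FrontDoor({
    "complex-complaint": JourneyAgent("complex-complaint", platform),
    "close-the-month": JourneyAgent("close-the-month", platform),
})

print(front_door.ask("complex-complaint", "summarise history", "long-running billing dispute"))
```

The design choice the sketch illustrates is simple: guardrails and telemetry live once, in the platform layer, while each journey only defines the steps and context where AI shows up for its workers.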
Signals - instrumentation for decisions, not theatre
Many organisations now have some mix of:
- tooling to monitor the digital employee experience
- AI usage analytics
- collaboration and productivity telemetry
- employee experience (EX) and voice of the employee (VoE) listening
These tools can show which AI tools are being used, roughly how often and in which parts of the organisation, where "shadow AI" is emerging, and where digital friction is undermining experience.
Without a journey lens, the net effect of all this AI can be more friction, not less. Common patterns include:
- verification loops, where people spend extra time checking and correcting AI output
- tool overload, with multiple assistants and platforms that do not join up
- blurred accountability for "who owns the final answer" when AI drafts the first version
Used thoughtfully, these tools provide powerful signals. Used superficially, they become:
- surveillance - zooming in on individuals rather than patterns
- productivity theatre - big "hours saved" numbers with no felt change in work
- static reporting - dashboards that do not feature in any real decision forum
Signals become useful when they are brought onto the same horizontal plane as journeys and segments.
Instead of:
"Our AI usage is up 40 percent this quarter."
You ask:
"In this journey, for this group of people, what are we seeing in AI usage, performance and experience - and what should we do next"
Signals should serve judgement, not replace it.
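As an illustration of what "the same horizontal plane" could look like in data terms, here is a small Python sketch. The event fields, journey names and numbers are invented for the example; the point is only that raw usage and experience signals get rolled up by journey and segment, not by tool or individual.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical raw events from usage analytics and experience listening.
events = [
    {"journey": "complex-complaint", "segment": "frontline", "ai_used": True,  "cycle_hours": 6,  "experience": 3.2},
    {"journey": "complex-complaint", "segment": "frontline", "ai_used": False, "cycle_hours": 9,  "experience": 2.8},
    {"journey": "close-the-month",   "segment": "finance",   "ai_used": True,  "cycle_hours": 30, "experience": 3.9},
]

# Roll events up by (journey, segment) rather than by tool or person.
rollup = defaultdict(list)
for e in events:
    rollup[(e["journey"], e["segment"])].append(e)

for (journey, segment), rows in rollup.items():
    adoption = mean(1 if r["ai_used"] else 0 for r in rows)
    cycle = mean(r["cycle_hours"] for r in rows)
    experience = mean(r["experience"] for r in rows)
    # The output is a conversation starter for a decision forum, not a verdict.
    print(f"{journey} / {segment}: adoption={adoption:.0%}, "
          f"cycle={cycle:.1f}h, experience={experience:.1f}/5")
```

A rollup like this only answers the "what are we seeing" part of the question; the "what should we do next" part still belongs to the people in the room.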
Leadership: from AI evangelism to operating model ownership
The leadership job here is not to become AI evangelists or amateur data scientists. It is to own the conditions in which AI is designed, tested and governed in real work.
Practically, that means leaders who:
- Choose the journeys that matter
- Set boundaries and expectations for data and trust
- Create a rhythm where journeys, signals and decisions meet
- Protect space for small, reversible experiments
Over time, these habits do more than any single programme to shift AI from slideware into the fabric of how work gets done. They are also how you build real AI muscle in the organisation: not just awareness of the tools, but confidence in using them, questioning them and designing with them in the flow of real work.
If you have platforms, dashboards and a confident AI narrative but still struggle to point to a few critical journeys that now feel simpler, safer and more human, that is the gap you can no longer afford to ignore.
