The illusion of progress with AI
Most organisations believe they are making progress with AI.
They have:
- run training
- given access to tools
- seen employees experimenting
On the surface, this looks like adoption.
Underneath, very little has changed.
Workflows are the same.
Decisions are the same.
Outputs arrive marginally faster, but they are not materially better.
This creates a dangerous illusion. Activity is mistaken for progress.
The reality is that most AI initiatives fail long before they reach a point where success or failure would be visible.
The failure happens at the point of delegation
AI does not fail at the point of output.
It fails at the point where a human decides:
“This is a task I should use AI for.”
That decision is almost always made informally. There is no framework. No criteria. No shared standard across the organisation.
As a result, people default to one of two patterns.
They either:
- overuse AI on tasks where it adds little value or introduces risk
- underuse AI on tasks where it could materially improve speed or quality
Both look like “adoption”. Neither produces meaningful impact.
This is why organisations see uneven results. AI appears to work in isolated cases, but fails to scale across teams.
The issue is not the technology. It is inconsistent delegation.
Most tasks are never properly defined
Even when AI is used, the underlying task is rarely clear.
People tend to operate with implicit goals:
- “write something about this”
- “summarise this”
- “help me think through this”
These are not well-defined tasks. They are loose intentions.
AI responds accordingly. It produces outputs that are directionally relevant but lack precision, depth, or usefulness in context.
The user then compensates:
- editing heavily
- rewriting sections
- abandoning the output entirely
From the outside, it still looks like AI is being used.
In reality, the efficiency gain is marginal or negative.
The root issue is not output quality. It is the absence of clear task definition before AI is engaged.
Work is not decomposed, so AI is misapplied
A second, more structural issue sits beneath this.
Most work is treated as a single unit.
A report. A proposal. An analysis. A process.
AI is then applied to the whole task, rather than to specific components within it.
This is where things break down.
AI is highly effective on certain types of sub-tasks:
- structuring information
- generating drafts
- transforming formats
- identifying patterns
It is far less reliable on others:
- making high-stakes judgements
- handling ambiguous requirements
- integrating complex context
When a task is not broken down, these differences are ignored.
The result is predictable. Parts of the output are strong. Other parts are weak or unusable. The overall outcome is inconsistent.
People conclude that “AI is hit and miss”.
In reality, the application was poorly structured.
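To make decomposition concrete, here is a minimal sketch in Python, assuming a quarterly report as the deliverable. The sub-tasks, labels, and rationales are illustrative examples, not a fixed taxonomy:

```python
from dataclasses import dataclass
from enum import Enum

class Suitability(Enum):
    AI_LED = "AI executes, human reviews"
    AI_ASSISTED = "AI supports, human decides"
    HUMAN_LED = "human only"

@dataclass
class SubTask:
    name: str
    suitability: Suitability
    rationale: str

# A hypothetical quarterly report, decomposed instead of delegated whole.
report = [
    SubTask("structure the outline", Suitability.AI_LED,
            "structuring information is a strength"),
    SubTask("draft the background section", Suitability.AI_LED,
            "drafting from known inputs is a strength"),
    SubTask("interpret the variance in Q3 figures", Suitability.AI_ASSISTED,
            "integrating complex context is unreliable"),
    SubTask("recommend headcount changes", Suitability.HUMAN_LED,
            "high-stakes judgement stays with a human"),
]

for task in report:
    print(f"{task.name}: {task.suitability.value} ({task.rationale})")
```

Once the work is split this way, the strong and weak parts of an AI-assisted output stop being a surprise: they map directly onto which sub-tasks were suited to delegation.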
Mode confusion undermines performance
There is a further layer that most organisations have not articulated at all.
Different tasks require different modes of AI interaction:
- automation (AI executes a defined task)
- augmentation (AI supports human thinking)
- agency (AI operates within a configured workflow)
In practice, people rarely distinguish between these.
They use AI in a single default mode, regardless of task type.
This creates predictable issues.
Tasks that should be automated are handled manually, with unnecessary effort.
Tasks that require human judgement are over-delegated, creating risk.
Tasks that would benefit from structured workflows are repeated ad hoc, reducing efficiency.
Without clarity on mode, even well-chosen tasks are executed poorly.
Why this failure is invisible
This entire set of issues is difficult to see.
There is no clear moment where something “breaks”.
Instead, you see:
- inconsistent output quality
- limited time savings
- reliance on a small number of “power users”
- general scepticism about AI’s value
These are treated as cultural or behavioural issues.
They are not.
They are structural issues in how work is being delegated to AI.
Because the failure happens upstream — before the tool is meaningfully engaged — it does not show up in traditional metrics.
You cannot fix it by:
- adding more training
- improving prompts
- switching tools
Those interventions operate too late in the process.
What good looks like instead
Organisations that see real impact do something different.
They make delegation explicit.
Instead of leaving AI use to individual judgement, they define:
- what types of tasks are suitable for AI
- how tasks should be structured before AI is applied
- where human judgement must remain central
- how different modes of AI interaction should be used
This creates consistency.
It allows AI to be applied systematically, rather than opportunistically.
It also makes performance diagnosable. When something goes wrong, it is possible to identify whether the issue sits in:
- task selection
- task definition
- task decomposition
- mode selection
Without this structure, all failures collapse into “AI didn’t work”.
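As an illustration, this structure can be written down as a pre-delegation checklist, with one question per diagnosable failure point. The sketch below is a hypothetical example: the question wording, the Mode names, and the ready_to_delegate gate are assumptions, not an established standard.

```python
from enum import Enum

class Mode(Enum):
    AUTOMATION = "AI executes a defined task"
    AUGMENTATION = "AI supports human thinking"
    AGENCY = "AI operates within a configured workflow"

# A hypothetical pre-delegation checklist: one question per diagnosable
# failure point. The wording is illustrative, not a fixed standard.
CHECKLIST = {
    "task selection":     "Is this task type on the agreed list of AI-suitable work?",
    "task definition":    "Are the goal, inputs, and acceptance criteria written down?",
    "task decomposition": "Has the work been split into sub-tasks with clear owners?",
    "mode selection":     "Has a mode (automation, augmentation, agency) been chosen?",
}

def ready_to_delegate(answers):
    """Proceed only when every upstream question has a 'yes' answer."""
    missing = [step for step in CHECKLIST if not answers.get(step)]
    if missing:
        print("Not ready to delegate. Unresolved:", ", ".join(missing))
        return False
    return True

# Example: the task was selected, defined, and assigned a mode,
# but never decomposed -- the failure stays visible and nameable.
ready_to_delegate({
    "task selection": True,
    "task definition": True,
    "task decomposition": False,
    "mode selection": Mode.AUGMENTATION,
})
```

The value is not the code itself. It is that each failure point becomes a named, checkable step rather than an implicit individual judgement.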
The shift organisations need to make
Most AI strategies are built around access and enablement.
Give people tools. Train them. Encourage experimentation.
That is necessary, but not sufficient.
The shift required is from tool adoption to work design.
AI should not be layered onto existing workflows as an optional enhancement.
Work itself needs to be restructured around where AI can and cannot add value.
This is a design problem, not a training problem.
Until that shift is made, organisations will continue to see:
- pockets of success
- inconsistent results
- limited scale
All while believing they are further along than they actually are.
The bottom line
AI adoption does not fail at rollout.
It fails at the moment someone decides how to use it.
If that decision is:
- informal
- inconsistent
- unsupported
then everything that follows is constrained.
Most organisations have not addressed this layer.
That is why adoption appears to be happening, while impact remains limited.
