Most AI training is solving the wrong problem

There is no shortage of AI training right now.

Workshops. Webinars. Prompt libraries. “AI for everyone” sessions.

Almost all of it is built around the same underlying assumption:

If people learn the tools, adoption will follow.

That assumption is flawed.

What organisations actually see is a predictable pattern. Initial curiosity. Short-term experimentation. Then a steady decline into inconsistent, low-quality use. The tools remain available, but they are never embedded in how the work is performed.

This is not a motivation problem. It is not a change management problem.

It is a capability problem.

The training never addresses how decisions are made, how tasks are structured, or how outputs are judged. It focuses on the interface, not the work.

Tools are not the unit of capability

Tool-based training assumes that capability is built by learning how to operate a system. That logic holds in stable environments where the tool defines the work. It breaks down completely in the context of AI.

AI tools are not stable. Their interfaces change. Their capabilities evolve. Their limitations shift. Training people on a specific tool is effectively training them on a temporary snapshot of a moving system.

The work itself is far more stable.

People still need to:

  • define what they are trying to achieve
  • decide how to approach a task
  • determine whether automation, augmentation, or manual effort is appropriate
  • assess whether the output is usable

These are not tool-specific activities. They are forms of judgement.

When training is anchored to tools, it teaches people how to interact with a system.
When it is anchored to capability, it teaches people how to think about work.

Only one of those survives change.

The failure point is upstream of the tool

Most organisations misdiagnose why AI use fails.

They point to:

  • poor prompts
  • weak outputs
  • limitations in the model

Those are symptoms.

The actual failure point sits earlier, in how the work is defined before AI is even involved.

If a task is vague, AI produces vague outputs.
If a task is mis-scoped, AI produces misaligned outputs.
If a task is unsuitable, AI produces something that looks plausible but creates rework or risk.

In each case, the tool is functioning exactly as expected. It is responding to the input it was given.

The issue is that the input reflects poor judgement:

  • no clear goal
  • no defined constraints
  • no understanding of whether AI is appropriate

This is why organisations see activity without progress. People are using AI, but they are applying it to the wrong problems, in the wrong way.
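
To make that concrete, here is a small sketch contrasting a vague task with a scoped one. The task, audience, and wording are invented for illustration; nothing changes between the two except the definition of the work.

```python
# The same task, defined twice. Everything here is invented for
# illustration; only the definition of the work differs.

vague_task = "Write something about our Q3 results."

scoped_task = """Goal: a 150-word summary of Q3 results for the board pack.
Audience: non-executive directors with no finance background.
Constraints: cover revenue, margin, and headcount only; neutral tone;
do not speculate beyond the figures supplied below.
Success: a director can read it in under a minute with no
clarifying questions.

Figures: <paste the approved Q3 figures here>"""

# Same model, same interface, same person. The second version works
# because the judgement happened before the tool was opened.
```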

Training that focuses on tools never reaches this layer.

Prompting is not where the leverage sits

The current obsession with prompting reflects a deeper misunderstanding of where performance actually comes from.

A prompt is not a skill in isolation. It is an output of upstream thinking.

A well-constructed prompt assumes that the person has already:

  • clarified the objective
  • understood the task structure
  • identified relevant context
  • defined what a successful output looks like

Without those elements, prompting becomes guesswork. People iterate blindly, hoping to stumble into a better result.

With those elements in place, prompting becomes straightforward. The instruction is simply a structured expression of clear intent.

This is why prompt “tips” and templates have limited impact. They attempt to improve the surface layer without addressing the underlying capability.

The leverage sits earlier, in how work is framed and structured.
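
As an illustration of that framing, here is a minimal sketch in which the prompt is assembled from decisions that have already been made. The function and field names are invented for this example, not a standard API.

```python
# A minimal sketch: the prompt as an output of upstream thinking.
# build_prompt and its fields are invented for illustration.

def build_prompt(objective: str, context: str,
                 constraints: list[str], success_criteria: str) -> str:
    """Assemble a prompt from decisions made before any tool is opened."""
    constraint_lines = "\n".join(f"- {c}" for c in constraints)
    return (
        f"Objective: {objective}\n\n"
        f"Context: {context}\n\n"
        f"Constraints:\n{constraint_lines}\n\n"
        f"A successful output: {success_criteria}"
    )

print(build_prompt(
    objective="Draft an apology email for a missed delivery",
    context="Second missed delivery this quarter; the customer holds "
            "a premium support contract.",
    constraints=["under 120 words", "no discounts offered",
                 "acknowledge the previous miss"],
    success_criteria="the account manager could send it without edits",
))
```

If the four inputs are hard to fill in, no template will rescue the prompt. That gap is the capability gap.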

AI literacy is behavioural, not technical

AI literacy is often framed as knowledge.

What AI can do.
How models work.
Which tools to use.

That framing is insufficient.

In practice, AI literacy shows up in behaviour. It is visible in how people approach their work, not in what they can recall about a system.

It shows up in whether someone:

  • stops to assess if a task should involve AI at all
  • breaks work into components before delegating
  • provides enough context for a useful output
  • actively evaluates the quality of what is produced
  • takes responsibility for the final result

These are observable, trainable competencies. They can be assessed. They can be developed over time.

This is the basis of a capability model — not abstract knowledge, but consistent patterns of behaviour that drive performance.
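
As an illustration, those behaviours can be written down as a simple rubric and scored over time. The entries mirror the list above; the scale and scoring logic are invented for this sketch, not a published framework.

```python
# A sketch of behavioural AI literacy as something observable and
# scoreable. Wording and scale are invented for illustration.

RUBRIC = {
    "assesses suitability": "Stops to ask whether AI belongs in the task at all",
    "decomposes work": "Breaks work into components before delegating any of it",
    "supplies context": "Provides goal, audience, and constraints up front",
    "evaluates outputs": "Checks accuracy and fitness for purpose before use",
    "owns the result": "Takes responsibility for the final output",
}

def capability_score(observed: dict[str, int]) -> float:
    """Average behaviour across the rubric, rated 1 (rare) to 4 (habitual).
    Behaviours never observed default to 1."""
    return sum(observed.get(name, 1) for name in RUBRIC) / len(RUBRIC)

print(capability_score({"supplies context": 3, "evaluates outputs": 2}))  # 1.6
```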

What most training gets wrong

The gap becomes clear when you look at how most programmes are designed.

First, they start with the tool. The structure of the training follows the interface rather than the work. People are shown features before they understand where those features should be applied.

Second, they underweight judgement. There is little focus on evaluating outputs, challenging reasoning, or determining fitness for purpose. This creates a dynamic where people produce content quickly but lack the ability to assess whether it should be used.

Third, they treat AI as a one-step interaction. Ask a question, receive an answer, move on. In reality, effective AI use is iterative. It involves refinement, adjustment, and ongoing evaluation. Training that ignores this produces unrealistic expectations and brittle usage patterns.

Each of these design choices reinforces shallow capability.

What good looks like instead

A capability-based approach starts from a different premise.

The goal is not to teach people how to use a tool.
It is to change how they approach their work.

That requires building capability across four areas.

Delegation. The ability to decide whether and how AI should be involved in a task, based on complexity, risk, and value.

Description. The ability to communicate goals, context, and constraints clearly enough for AI to produce useful outputs.

Discernment. The ability to evaluate accuracy, reasoning, and relevance, rather than accepting outputs at face value.

Diligence. The ability to manage risk, handle data appropriately, and maintain accountability for outcomes.

These are not optional layers. They are the core of effective AI use.

Without them, tool usage remains superficial. With them, AI becomes embedded in how work is actually performed.
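
As a sketch of the first layer only, the delegation call can be expressed as simple triage logic over complexity, risk, and value. The thresholds and labels are invented; a real version would reflect the organisation's own risk and value criteria.

```python
# A sketch of the Delegation layer: whether and how AI should be
# involved in a task. Thresholds and labels are invented for
# illustration, not a prescribed policy.

def delegation_call(complexity: int, risk: int, value: int) -> str:
    """Rate each dimension 1 (low) to 5 (high); return a starting stance."""
    if risk >= 4:
        return "manual: a person owns it; AI may assist with research only"
    if complexity <= 2 and value <= 2:
        return "automate: low-stakes and well-bounded; review by sampling"
    return "augment: AI drafts; a person owns scoping and sign-off"

print(delegation_call(complexity=2, risk=1, value=2))  # automate
print(delegation_call(complexity=4, risk=5, value=3))  # manual
print(delegation_call(complexity=3, risk=2, value=4))  # augment
```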

The commercial reality

There is a reason most providers default to tool-based training.

It is easier to sell. Easier to deliver. Easier for buyers to understand.

A session on “how to use ChatGPT” has a clear shape. It feels tangible. It can be packaged and repeated.

Capability-based training is different. It requires more structure. It forces organisations to confront how work is currently done, and where it is poorly defined. It is less immediately comfortable.

But it produces a different outcome.

Tool training creates awareness.
Capability training creates change.

Buyers are starting to notice the difference.

The bottom line

AI is not a software rollout.

It is a shift in how work is defined, structured, and executed.

If you train people on tools, you get short-term usage and long-term inconsistency.

If you build capability, you get durable performance that transfers across tools, roles, and contexts.

Most organisations are still investing in the former.

That will not hold for long.