Most organisations don’t know if they’re getting better

Ask a simple question:

Are your people actually getting better at using AI?

Most organisations cannot answer it.

They can tell you:

  • how many people attended training
  • how many licences have been issued
  • how often tools are being used

They cannot tell you whether capability has improved.

This is a fundamental gap.

Because if you cannot measure capability, you cannot:

  • manage it
  • improve it
  • justify investment in it

Everything becomes activity without evidence of impact.

Usage is not capability

A common proxy for success is usage.

More prompts. More sessions. More tool engagement.

This is a weak signal.

High usage can mean:

  • effective adoption
  • inefficient workarounds
  • repeated trial and error
  • misuse of AI on inappropriate tasks

Without context, usage tells you very little about performance.

In some cases, increased usage is actually a sign of poor capability.

People rely on AI more, but not better.

Training completion is meaningless

Another proxy is completion.

People attended the session. They finished the course. They passed the quiz.

This says nothing about whether behaviour has changed.

AI capability is not knowledge recall.

It is the ability to:

  • make better decisions about work
  • structure tasks effectively
  • evaluate outputs with judgement

None of these are captured by completion metrics.

You can have 100% completion and 0% change in how work is actually done.

Capability is behavioural, so it must be observed

If AI capability is about how people work, it must be measured by observing how people behave.

That means looking at:

  • how tasks are defined before AI is used
  • how clearly instructions are structured
  • how outputs are evaluated
  • how decisions are made about whether to use AI at all

These are observable.

They can be assessed against a standard.

They can improve over time.

This is the basis of any serious capability framework.

Without this, organisations are guessing.

Most organisations lack a baseline

Before you can improve capability, you need to know where you are starting from.

Very few organisations establish a baseline.

They do not assess:

  • current behaviours
  • current decision-making patterns
  • current strengths and gaps

They move straight to training.

This creates a problem.

Without a baseline:

  • you cannot target interventions effectively
  • you cannot measure improvement
  • you cannot demonstrate ROI

You are operating without a reference point.

Improvement requires progression, not exposure

Capability does not improve through exposure alone.

Giving people access to AI and encouraging experimentation is necessary, but not sufficient.

Improvement requires progression.

That means defining levels such as:

  • basic → inconsistent use, limited judgement
  • intermediate → structured use, improving evaluation
  • advanced → deliberate, reliable application across tasks

Each level should be characterised by observable behaviours.

This allows organisations to:

  • assess where individuals sit
  • define what “better” looks like
  • track movement over time

Without progression, there is no clear path for development.
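The idea of levels defined by observable behaviours can be sketched in code. This is a minimal illustration only: the level names and example behaviours below are assumptions for demonstration, not a validated framework; a real rubric would derive them from actual work.

```python
# Illustrative sketch: a behavioural rubric mapping observed behaviours
# to capability levels. Level names and behaviours are assumed examples.

# Observable behaviours that characterise each level above "basic".
RUBRIC = {
    "intermediate": [
        "structures instructions before prompting",
        "checks outputs against a standard",
    ],
    "advanced": [
        "decides deliberately whether AI suits the task",
        "evaluates outputs reliably across task types",
    ],
}

def assess(observed: set[str]) -> str:
    """Return the highest level whose behaviours were all observed."""
    level = "basic"
    for name in ("intermediate", "advanced"):
        if all(behaviour in observed for behaviour in RUBRIC[name]):
            level = name
        else:
            break
    return level
```

Assessed this way, against observed behaviours rather than quiz scores, an individual's level, and their movement between levels, becomes something you can track over time.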

Measurement changes behaviour

What you measure shapes how people work.

If you measure:

  • attendance
  • usage
  • completion

you will get more of those things.

If you measure:

  • quality of task definition
  • clarity of instruction
  • rigour of evaluation

you shift focus to how work is actually performed.

This is where real gains come from.

Measurement is not just about reporting.

It is a lever for changing behaviour.

The commercial implication

For leaders, this raises a simple question.

What are you actually paying for?

If you invest in AI training but cannot:

  • define capability
  • measure it
  • track improvement

then you are buying activity, not outcomes.

This becomes increasingly difficult to justify.

As AI moves from experimentation to expectation, organisations will need to show:

  • productivity gains
  • quality improvements
  • risk reduction

None of these can be demonstrated without a clear view of capability.

What good looks like instead

Organisations that take this seriously do three things.

First, they define capability in behavioural terms.

Not abstract knowledge, but what people actually do in their work.

Second, they assess it.

They establish a baseline and identify gaps at an individual and team level.

Third, they track progression.

They measure how capability changes over time and link it to outcomes.

This creates a closed loop:

  • define → assess → develop → measure

Without this loop, improvement is assumed rather than demonstrated.

The bottom line

AI capability is not intangible.

It is not something that “just happens” as people use tools.

It can be defined. It can be observed. It can be measured.

If you are not doing that, you are not building capability.

You are hoping for it.

And hope is not a strategy.