Most definitions of AI literacy are not useful
AI literacy is often described in vague terms:
- understanding AI
- knowing how to use tools
- being “comfortable” with the technology
None of these are wrong. But none of them are operational.
They don’t tell you:
- what good looks like
- how to assess it
- how to develop it over time
Which means they don’t help organisations build anything.
The result is predictable. AI literacy becomes a talking point rather than a capability.
This framework defines AI literacy as behaviour
The AI Literacy Competency Framework takes a different approach.
It defines AI literacy as a set of observable, trainable workplace behaviours.
Not knowledge.
Not awareness.
Not tool familiarity.
Capability.
That means:
- you can see it in how someone works
- you can assess it consistently
- you can improve it through deliberate practice
This is what makes it usable in a real organisation.
It moves AI literacy from concept to system.
It is built on four domains of performance
At its core, the framework breaks AI capability into four domains.
These are not arbitrary categories. They reflect the actual lifecycle of working with AI.
Delegation
The ability to decide whether, when, and how to use AI.
This includes:
- assessing whether a task is suitable
- breaking work into components
- choosing the right mode (automation, augmentation, agency)
- defining clear goals before engaging AI
Most failures in AI use happen here.
If delegation is weak, everything downstream is compromised.
Description
The ability to communicate effectively with AI.
This is where prompting sits, but in a structured way.
It includes:
- constructing clear instructions
- providing relevant context
- specifying outputs properly
- iterating to improve results
This is not about “prompt tricks”.
It is about translating work into something AI can execute reliably.
Discernment
The ability to evaluate AI outputs.
This is the least developed capability in most organisations.
It involves:
- assessing accuracy
- evaluating reasoning
- determining relevance
- identifying what is missing
Without discernment, AI becomes a source of plausible but unverified outputs.
With it, AI becomes usable at scale.
Diligence
The ability to take responsibility for how AI is used.
This includes:
- managing risk
- handling data appropriately
- applying the right level of oversight
- owning the final outcome
This is where governance actually shows up in practice.
Not as policy, but as behaviour.
Each domain is broken into specific competencies
Within each domain, the framework defines detailed competencies.
For example, within Delegation:
- AI suitability assessment
- task decomposition
- capability awareness
- mode selection
- goal clarity
- oversight calibration
Each competency is clearly defined and tied to how work is actually performed.
This is not a high-level model.
It is granular enough to:
- assess individuals
- diagnose gaps
- target development
It uses progression, not pass/fail
Capability is not binary.
The framework uses five levels of proficiency:
- Novice
- Advanced Beginner
- Competent
- Proficient
- Expert
Each level describes how behaviour changes as capability develops:
- from trial and error
- to consistent application
- to confident, adaptable judgement
This allows organisations to:
- establish a baseline
- track improvement
- define what “good” looks like at each level
It also reflects reality. People do not become “AI literate” overnight. They progress.
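The structure described so far — domains containing named competencies, each rated on the five-level scale — lends itself to a simple data model. The sketch below is illustrative only, not part of any published implementation; the competency names come from the Delegation list above, and the `baseline` function is a hypothetical way an organisation might establish a baseline and flag gaps.

```python
# Hedged sketch: encoding framework assessments as data.
# Level names and Delegation competencies are taken from the article;
# the data model and scoring approach are illustrative assumptions.

LEVELS = ["Novice", "Advanced Beginner", "Competent", "Proficient", "Expert"]

# Delegation competencies as listed in the framework. The other three
# domains (Description, Discernment, Diligence) would be populated
# from their own competency lists in the same way.
DELEGATION = [
    "AI suitability assessment",
    "task decomposition",
    "capability awareness",
    "mode selection",
    "goal clarity",
    "oversight calibration",
]

def baseline(ratings: dict[str, str]) -> tuple[float, list[str]]:
    """Turn competency -> level-name ratings into a numeric baseline
    (0 = Novice .. 4 = Expert) and flag competencies below 'Competent'
    as candidates for targeted development."""
    scores = {comp: LEVELS.index(level) for comp, level in ratings.items()}
    average = sum(scores.values()) / len(scores)
    gaps = [c for c, s in scores.items() if s < LEVELS.index("Competent")]
    return average, gaps

avg, gaps = baseline({
    "task decomposition": "Competent",
    "goal clarity": "Novice",
})
# avg == 1.0; gaps == ["goal clarity"]
```

Representing capability as level indices rather than pass/fail flags is what makes tracking improvement over time possible: a repeat assessment produces a new baseline that can be compared directly against the old one.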
It is deliberately vendor-agnostic
The framework does not reference specific tools.
This is intentional.
Tools will change. Rapidly.
If capability is tied to a tool, it becomes obsolete as the tool evolves.
By focusing on:
- decisions
- behaviours
- patterns of work
the framework remains stable.
It applies whether someone is using:
- ChatGPT
- Copilot
- internal systems
- future tools that do not yet exist
This is what makes it durable.
It is designed to be used, not just understood
Most frameworks are descriptive.
They explain a concept but do not change behaviour.
This one is designed for application.
It can be used to:
- assess current capability across a workforce
- design targeted training and development
- embed standards into workflows and governance
- track improvement over time
In other words, it connects directly to how organisations operate.
Why this matters now
AI is moving from optional to expected.
The question is no longer:
“Do we use AI?”
It is:
“Are we using it well?”
Without a clear definition of capability, organisations default to:
- tool access
- generic training
- inconsistent usage
This creates uneven performance and unmanaged risk.
A structured framework provides:
- a shared standard
- a way to measure capability
- a path for development
It turns AI from experimentation into something that can be scaled.
The bottom line
AI literacy is not about knowing more.
It is about working differently.
If you cannot define what that looks like in behaviour, you cannot build it.
The AI Literacy Competency Framework does exactly that.
It defines:
- what good looks like
- how it develops
- how it can be measured
Without that, organisations are relying on assumption.
With it, they can build capability deliberately.