The rise of the wrong idea

“Prompt engineering” has become the default answer to building AI capability.

Courses. Playbooks. Templates. Cheat sheets.

The message is consistent:

If you learn how to prompt properly, you’ll get better results.

There is some truth in that.

But it has been taken too far, and in the wrong direction.

Prompting is being treated as the skill.

It isn’t.

It is a visible artefact of something deeper. And focusing on it directly is leading organisations to optimise the wrong layer of performance.

Prompts are downstream of thinking

A prompt is not where the work starts.

It is where the work becomes visible.

By the time someone writes a prompt, a number of decisions have already been made, whether consciously or not:

  • what the task actually is
  • what the objective looks like
  • what constraints matter
  • what level of quality is required

If those decisions are unclear, the prompt will reflect that.

It will be vague, open-ended, or misaligned with the real need.

AI will respond accordingly.

When people say “the prompt wasn’t good enough”, they are often describing a failure that occurred earlier, in how the task was understood.

Improving the wording without improving the thinking produces only marginal gains.

The prompt obsession masks capability gaps

The focus on prompting is attractive because it is tangible.

You can:

  • show examples
  • share templates
  • demonstrate improvements quickly

It feels like progress.

What it often does is mask underlying gaps.

If someone cannot:

  • define a clear outcome
  • break down a task
  • identify relevant context
  • assess whether AI is appropriate

then prompt techniques become a workaround, not a solution.

They might improve outputs in the short term, but they do not build transferable capability.

The moment the task changes, or the context shifts, performance drops again.

Good prompts are predictable

There is a misconception that high-quality prompts are creative, clever, or even slightly mysterious.

They are not.

They are structured expressions of four things:

  • a clear goal
  • relevant context
  • explicit constraints
  • defined output requirements

When those elements are present, the prompt is usually straightforward.

It does not require tricks or special phrasing.

It reads more like a well-written brief than a hack.

This is why experienced users appear to be “better at prompting”.

They are not guessing better words.
They are thinking more clearly about the task.
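The four elements above can be sketched as a simple brief-builder. This is a hypothetical illustration of the structure, not a prescribed format or a real tool; the function and field names are invented for the example.

```python
# Hypothetical sketch: a prompt as a structured brief.
# The four fields mirror the four elements above; none of the
# names come from any particular tool or framework.

def build_brief(goal, context, constraints, output_requirements):
    """Assemble a prompt from the four elements of a clear brief."""
    sections = [
        ("Goal", goal),
        ("Context", context),
        ("Constraints", constraints),
        ("Output requirements", output_requirements),
    ]
    return "\n\n".join(f"{heading}:\n{body}" for heading, body in sections)

prompt = build_brief(
    goal="Summarise the attached incident report for a non-technical audience.",
    context="Readers are senior managers with no engineering background.",
    constraints="No jargon. Do not speculate beyond the report.",
    output_requirements="Three short paragraphs, under 200 words total.",
)
```

The point is not the code; it is that nothing in the resulting prompt is clever. Each line answers a question the user had to think through first.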

Templates don’t generalise

Prompt templates are widely promoted as a way to scale capability.

They have limited value.

Templates work when:

  • the task is repeatable
  • the context is stable
  • the output format is consistent

Outside of that, they degrade quickly.

People either:

  • apply them rigidly and get poor results
  • modify them without understanding why, losing what made them effective

This creates a dependency on pre-built structures rather than developing the ability to construct instructions from first principles.

In environments where work is varied and context-dependent, this approach does not scale.

Iteration is where quality actually emerges

Another issue with prompt-centric thinking is that it reinforces a one-step model of AI use.

Write a better prompt → get a better answer.

In practice, high-quality outputs rarely emerge from a single interaction.

They are developed through iteration:

  • refining the task
  • adjusting constraints
  • probing gaps
  • correcting direction

Each step depends on the user’s ability to:

  • evaluate what was produced
  • identify what is missing or misaligned
  • issue targeted follow-up instructions

This is not prompt engineering.

It is judgement applied over a sequence of interactions.
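The loop described above can be sketched as follows. This is a toy illustration only: `ask_model` and `evaluate` are placeholder stand-ins for a model call and the user's own judgement, not real APIs.

```python
# Hypothetical sketch of iteration: quality comes from the loop,
# not from the first prompt. `ask_model` and `evaluate` are
# placeholder stand-ins, not real APIs.

def iterate(prompt, ask_model, evaluate, max_rounds=3):
    """Run a prompt, judge the output, and issue targeted follow-ups."""
    output = ask_model(prompt)
    for _ in range(max_rounds):
        gaps = evaluate(output)  # what is missing or misaligned?
        if not gaps:
            break
        follow_up = "Revise the previous answer. Address: " + "; ".join(gaps)
        output = ask_model(follow_up)
    return output

# Toy stand-ins to show the mechanics:
def toy_model(prompt):
    return "draft with numbers" if "numbers" in prompt else "draft"

def toy_evaluate(output):
    return [] if "numbers" in output else ["include supporting numbers"]

result = iterate("Summarise Q3 performance.", toy_model, toy_evaluate)
```

The hard part is not the loop; it is `evaluate` — the judgement that decides what is missing and what to ask for next.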

What capability actually looks like

If prompting is not the core skill, what is?

Capability in AI use rests on a set of behaviours.

People who consistently get strong results tend to:

  • define goals clearly before engaging AI
  • break work into components rather than delegating entire tasks
  • provide context that anchors outputs to real requirements
  • specify what a usable output looks like
  • evaluate outputs critically before using them

Prompting sits inside this.

It is the mechanism through which these behaviours are expressed.

But it is not the source of performance.

Why this distinction matters

This is not a semantic argument.

It has practical consequences for how organisations build capability.

If you treat prompting as the skill:

  • you invest in templates and tricks
  • you optimise for short-term output improvement
  • you create brittle capability that depends on specific patterns

If you treat prompting as a symptom:

  • you invest in how people think about work
  • you build transferable judgement
  • you improve performance across tasks, not just within them

One approach produces surface-level gains.
The other changes how work is actually done.

The bottom line

Prompting matters.

But not in the way it is currently being positioned.

It is not a standalone skill that can be taught in isolation and expected to generalise.

It is the visible output of:

  • clear thinking
  • structured tasks
  • sound judgement

If those are weak, no prompt will compensate.

If those are strong, prompting becomes straightforward.

The organisations that understand this will move beyond prompt engineering.

The ones that don’t will keep optimising the surface.