AI Capability Academy — Sprint Program
Program Guide

Sprint — What to Expect

A six-week structured capability program. What happens each week, how the session works, and how your skills build week by week across the AI Literacy Competency Framework.

6 Weeks · 4 Domains · 1 hr per Week · 3 Your Tasks

capabilityinstitute.com

One hour a week. Real work, every week.

The Sprint is not a course you attend. It is a structured capability program you apply. Every session follows the same fixed format — concept, practice, workplace challenge — and every week you apply what you learn to the same three real tasks from your job.

The tasks never change. That is deliberate. Each week introduces a new capability that you apply to the same work, progressively improving how those tasks get done. By Week 5 you have refined the same workflow through four different capability lenses. That is how skill actually develops.

Fixed Session Format

30 min concept, 20 min practice, 10 min workplace challenge. Every week. No variation.

Three Real Tasks

You choose three tasks from your actual job in Week 1. You apply every capability to those same tasks throughout the program.

Peer Learning

Fixed peer groups of 3-4 share workplace experiments each week. You learn as much from colleagues as from the session itself.

Measurable Progress

You start with a capability baseline assessment and finish with a reassessment. Your progression across all four domains is visible.

Tool Agnostic

Works with Claude, ChatGPT, Copilot, Gemini, or any other tool your team already uses. Capability transfers across tools.

Role Relevant

Every exercise and challenge is applied to your real role. No generic use cases. No abstract theory.

Six weeks across all four domains.

Each of the four Domains of the AI Literacy Competency Framework gets its own teaching week. Weeks 1 and 6 are bookend sessions — assessment, orientation, and reflection — with no workplace challenge.

|                     | Wk 1                 | Wk 2                  | Wk 3                  | Wk 4                  | Wk 5                  | Wk 6             |
|---------------------|----------------------|-----------------------|-----------------------|-----------------------|-----------------------|------------------|
| Domain              | Onboarding           | Delegation            | Description           | Discernment           | Diligence             | Conclusion       |
| Competencies        | Assessment           | 1.1 · 1.2 · 1.3 · 1.6 | 2.1 · 2.2 · 2.3 · 2.4 | 3.1 · 3.2 · 3.3 · 3.4 | 4.1 · 4.2 · 4.4 · 4.5 | Reassessment     |
| Session             | Baseline + Discovery | 30/20/10              | 30/20/10              | 30/20/10              | 30/20/10              | Reflect + Assess |
| Workplace Challenge |                      | Apply to 3 tasks      | Apply to 3 tasks      | Apply to 3 tasks      | Apply to 3 tasks      |                  |
| Peer Review         |                      | Group share           | Group share           | Group share           | Group share           |                  |
| Your 3 Tasks        | Chosen here          | Refined               | Refined               | Refined               | Refined               | Documented       |
Week by Week
Week 1: Onboarding, Scene Setting & Baseline Assessment (Bookend Session)

What Happens

  • Overview of the AI Literacy Competency Framework and the four domains
  • What AI capability actually means in a workplace context
  • How the academy works and what to expect each week
  • Baseline capability self-assessment across all four domains
  • Task Discovery Exercise — you choose your three tasks for the program
  • Participant introductions and peer group formation

What You Leave With

  • Your individual baseline capability profile across Delegation, Description, Discernment, and Diligence
  • Three real tasks from your job confirmed as your experimentation focus for the next four weeks
  • Your fixed peer group of 3-4 colleagues
  • A clear picture of how the program will build your capability week by week

The Three Task Rule. The tasks you choose today are the tasks you will use for the entire program. Every capability you learn will be applied to these same tasks. This is how skill compounds — not by trying AI on everything, but by improving the same workflows through every lens.

Week 2: Delegation — When and How to Involve AI (Domain 1)

Competencies Covered

1.1 AI Suitability Assessment
1.2 Task Decomposition for AI
1.3 AI Capability Awareness
1.6 Human Oversight Calibration

Session format: 30' Concept · 20' Practical · 10' Challenge

Session Focus

  • What AI is genuinely good at — and where it reliably fails
  • How to assess whether a task is appropriate for AI involvement
  • How to break complex work into AI-ready components
  • How much human review different AI tasks require

Practical Exercise

  • Analyse a set of example tasks and classify each as AI-suitable, maybe, or human-only
  • Apply the same suitability filter to your three chosen tasks

Workplace Challenge

  • Attempt to use AI for one of your three tasks this week
  • Break that task into stages — identify which parts AI should handle and which require human judgement
  • Document what worked and what did not before the next session

Why this week matters. Most people jump straight to prompting. Delegation week establishes the judgement layer first — asking whether AI should be involved at all, and in what way. Everything that follows builds on this foundation.

Week 3: Description — Communicating Effectively with AI (Domain 2)

Competencies Covered

2.1 Prompt Construction
2.2 Context Provision
2.3 Output Specification
2.4 Iterative Refinement

Session format: 30' Concept · 20' Practical · 10' Challenge

Session Focus

  • Why AI outputs are only as good as the instructions they receive
  • Structured prompt framework: Goal, Context, Constraints, Output format
  • Why context transforms generic outputs into relevant ones
  • How to specify exactly what the output should look like before prompting
  • Treating AI work as a collaborative process — not a single exchange
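The four-part structure above (Goal, Context, Constraints, Output format) can be sketched as a simple prompt template. This is a minimal illustration only — the helper name and example values are hypothetical, not part of the program materials:

```python
def build_prompt(goal: str, context: str, constraints: str, output_format: str) -> str:
    """Assemble a prompt using the Goal / Context / Constraints / Output-format structure."""
    return (
        f"Goal: {goal}\n"
        f"Context: {context}\n"
        f"Constraints: {constraints}\n"
        f"Output format: {output_format}"
    )

# Illustrative values only — in the Sprint you would fill these in
# from one of your own three tasks.
prompt = build_prompt(
    goal="Summarise this week's project status update",
    context="Audience is senior leadership; project is two weeks behind schedule",
    constraints="No jargon; do not speculate about causes",
    output_format="Three bullet points, each under 20 words",
)
print(prompt)
```

Making each of the four parts an explicit, named field is the point of the exercise: a missing field is immediately visible, which is what the "run the same task with and without context" comparison demonstrates.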

Practical Exercise

  • Rewrite a set of poorly performing prompts using the structured framework
  • Run the same task with and without context — compare the results

Workplace Challenge

  • Return to the same three tasks from Week 2 — but this time rewrite the prompts using structured instruction
  • Compare the new outputs against what you produced last week
  • Note the specific changes that made the biggest difference

The compound effect begins. Because you are applying Description to the same tasks from Week 2, you will see direct improvement. Better prompts on the same task make the cause-and-effect visible. This is where participants often have their first significant breakthrough.

Week 4: Discernment — Evaluating AI Outputs Critically (Domain 3)

Competencies Covered

3.1 Output Accuracy Evaluation
3.2 Relevance and Fit Assessment
3.3 Reasoning and Logic Scrutiny
3.4 Bias and Limitation Recognition

Session format: 30' Concept · 20' Practical · 10' Challenge

Session Focus

  • How AI hallucinations and confident-sounding errors occur
  • How to check AI outputs for factual accuracy and logical coherence
  • Why technically correct outputs can still be wrong for the context
  • How AI outputs can reflect bias or omit important perspectives
  • Knowing when an output is ready to use and when it should not be used at all

Practical Exercise

  • Audit a set of AI outputs — identify factual errors, reasoning gaps, and contextual mismatches
  • Evaluate outputs for audience fit and identify what is missing

Workplace Challenge

  • Run AI on your three tasks again — this time apply a critical evaluation before using any output
  • Check for accuracy, relevance, reasoning quality, and audience fit
  • Note where outputs needed adjustment and what the adjustment was

The shift from trust to judgement. This week changes how participants relate to AI output. The same tasks from previous weeks are now evaluated properly — not accepted at face value. This is often where people realise how much they were previously relying on unreviewed output.

Week 5: Diligence — Responsible AI Practice (Domain 4)

Competencies Covered

4.1 Transparency and Disclosure
4.2 Data Privacy and Security
4.4 Ethical Consequence Awareness
4.5 Accountability and Ownership

Session format: 30' Concept · 20' Practical · 10' Challenge

Session Focus

  • Who is responsible for AI-assisted work — the answer is always the human
  • When and how to disclose AI involvement in your work
  • What data should never be shared with AI tools and why
  • The ethical implications of using AI in tasks that affect other people
  • How to adjust your workflow to ensure safe, responsible AI use

Practical Exercise

  • Evaluate a set of AI use scenarios — identify responsible and irresponsible practice
  • Identify data risks within your own three task workflows

Workplace Challenge

  • Review the AI workflow you have built across your three tasks — apply a Diligence lens
  • Identify where disclosure is required, where data risks exist, and who is accountable for each output
  • Adjust your workflow to address any risks identified

Where the academy becomes serious. Diligence is what separates confident AI use from professional AI use. By applying this to the same three tasks you have been refining for four weeks, you close the loop — capability, quality, and responsibility all in one workflow.

Week 6: Conclusion, Reflection & Reassessment (Bookend Session)

What Happens

  • Repeat the baseline capability self-assessment across all four domains
  • Compare your Week 1 and Week 6 capability profiles side by side
  • Participants share their most improved workflow with the group
  • Peer groups reflect on what changed across the six weeks
  • Academy feedback and next steps discussion

What You Leave With

  • An updated capability profile showing progression across Delegation, Description, Discernment, and Diligence
  • Three refined, AI-assisted workflows for real tasks in your role
  • A documented prompt library built from your own workplace experiments
  • A personal AI practice plan — what you will continue to develop beyond the program
  • Academy completion recognition

The result of three consistent tasks. Because you applied every capability to the same three tasks, you leave with something most training never produces — a genuinely improved way of working, not just knowledge about AI.