Program Guide
AI Capability Academy
Scale — What to Expect
Fourteen weeks. All 24 competencies. The same three real tasks applied to every capability lens across the full AI Literacy Competency Framework.
14 Weeks · 24 Competencies · 4 Domains · 1 hr Per Week · 3 of Your Tasks
capabilityinstitute.com
How It Works

One hour a week. Capability that compounds.

Every teaching week follows the same fixed format — 30 minutes concept, 20 minutes practical, 10 minutes workplace challenge. The four domains unfold in sequence across 12 teaching weeks. Weeks 1 and 14 are bookend sessions for assessment and reflection.

The most important design choice: your three tasks never change. Every competency gets applied to the same real work from your job. Capability compounds through repetition, not variety. By Week 13 you have refined the same workflows through twelve distinct capability lenses.

Fixed Session Format

30' concept, 20' practical, 10' challenge. Every teaching week without variation.

Three Real Tasks

Chosen in Week 1. Applied across all 12 teaching weeks. Never changed.

2 Competencies Per Week

All 24 competencies covered across twelve teaching weeks.

Fixed Peer Groups

Groups of 3–4. Same people for all 14 weeks. Share experiments each session.

Measurable Progression

Baseline in Week 1. Reassessment in Week 14. Improvement documented.

Tool Agnostic

Works with Claude, ChatGPT, Copilot, Gemini — whatever your team already uses.

Program Timeline

Fourteen weeks. All four domains.

Two competencies per teaching week. Your three tasks progressively refined across every domain.

Week | Phase | Competencies | Your 3 Tasks
Wk 1 | Onboard | Baseline | Chosen
Wk 2 | Delegation — Domain 1 | 1.1 · 1.3 | 1st pass
Wk 3 | Delegation — Domain 1 | 1.2 · 1.5 | Refined
Wk 4 | Delegation — Domain 1 | 1.4 · 1.6 | Workflow
Wk 5 | Description — Domain 2 | 2.1 · 2.2 | Prompted
Wk 6 | Description — Domain 2 | 2.3 · 2.5 | Specified
Wk 7 | Description — Domain 2 | 2.4 · 2.6 | Iterated
Wk 8 | Discernment — Domain 3 | 3.1 · 3.3 | Audited
Wk 9 | Discernment — Domain 3 | 3.2 · 3.5 | Fitted
Wk 10 | Discernment — Domain 3 | 3.4 · 3.6 | Judged
Wk 11 | Diligence — Domain 4 | 4.1 · 4.5 | Disclosed
Wk 12 | Diligence — Domain 4 | 4.2 · 4.3 | Secured
Wk 13 | Diligence — Domain 4 | 4.4 · 4.6 | Owned
Wk 14 | Conclude | Reassess | Done
Every teaching week (Wks 2–13): 30' Concept → 20' Practical → 10' Workplace Challenge → Peer Group Share
Week by Week
Opening Bookend
Week 1 — Onboarding & Baseline Assessment (No challenge)

What Happens

Overview of the AI Literacy Competency Framework and all four domains. How the academy works. Baseline capability self-assessment. Task Discovery Exercise — choose your three tasks. Peer group formation.

What You Leave With

Baseline capability profile across all four domains. Three confirmed real tasks as your program focus. Fixed peer group for 14 weeks.

The Three Task Rule. Every competency across the 12 teaching weeks is applied to these same tasks. Capability compounds through repetition, not variety.

Domain 1 — Delegation
Week 2 — AI Suitability Assessment & AI Capability Awareness (1.1 · 1.3)

Competencies

1.1 AI Suitability Assessment
1.3 AI Capability Awareness

Session Focus

What AI can and cannot reliably do. How to assess whether a task should involve AI at all. Common AI failure modes in workplace settings.

Practical

Classify workplace tasks for AI suitability. Apply the same filter to your three tasks.

Workplace Challenge

Make your first AI attempt on one of your three tasks. Document what worked and what did not.

First contact. Judgment before prompting. Does AI belong here at all? Everything else builds on this question.

Week 3 — Task Decomposition & Goal Clarity Before Delegation (1.2 · 1.5)

Competencies

1.2 Task Decomposition for AI
1.5 Goal Clarity Before Delegation

Session Focus

Why AI fails on vague or whole tasks. Breaking work into AI-ready and human-required parts. Defining a clear outcome before engaging AI.

Practical

Break a project into human and AI steps. Write a goal statement for one task before running any AI.

Workplace Challenge

Revisit the Week 2 task — decomposed and goal-first. Compare the output to last week.

The compound effect begins. Same task, structured differently. Most participants have their first breakthrough this week.

Week 4 — Mode Selection & Human Oversight Calibration (1.4 · 1.6)

Competencies

1.4 Mode Selection and Switching
1.6 Human Oversight Calibration

Session Focus

Automation, augmentation, and agency — choosing the right mode. How much human review different tasks require. Designing review checkpoints into workflows.

Practical

Map your three tasks to the correct AI mode. Design a human-in-the-loop workflow.

Workplace Challenge

Apply AI to one real task. Deliberately choose the mode. Document the oversight level applied.

Delegation complete. From random experimentation to structured delegation. The foundation for Description is now in place.

Domain 2 — Description
Week 5 — Prompt Construction & Context Provision (2.1 · 2.2)

Competencies

2.1 Prompt Construction
2.2 Context Provision

Session Focus

Structured prompt framework: Goal, Context, Constraints, Output format. Why AI produces generic outputs without context.
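
As a concrete illustration, the four-part framework can be assembled programmatically. This is a minimal sketch; the function name and the example field contents are invented for illustration, not prescribed by the program:

```python
def build_prompt(goal, context, constraints, output_format):
    """Assemble a prompt from the four parts: Goal, Context, Constraints, Output format."""
    return "\n\n".join([
        f"Goal: {goal}",
        f"Context: {context}",
        f"Constraints: {constraints}",
        f"Output format: {output_format}",
    ])

# Hypothetical example values -- any real task would supply its own.
prompt = build_prompt(
    goal="Summarise the weekly status report for the leadership team",
    context="Audience: executives with two minutes to read; project is one sprint behind",
    constraints="Maximum 150 words; neutral tone; no technical jargon",
    output_format="Three bullet points followed by one recommended action",
)
print(prompt)
```

Running the same task with this structure and then without it, as the practical suggests, makes the effect of each part visible.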

Practical

Rewrite weak prompts. Run the same task with and without context — compare outputs.

Workplace Challenge

Rewrite the prompts for your three tasks using the structured framework. Add role, purpose, audience, and constraints.

First description layer. Significant quality jumps are common this week. Cause-and-effect between instruction and output becomes visible.

Week 6 — Output Specification & Example and Constraint Setting (2.3 · 2.5)

Competencies

2.3 Output Specification
2.5 Example and Constraint Setting

Session Focus

Defining format, tone, length, and structure upfront. How worked examples show AI what good looks like. How constraints keep outputs within acceptable parameters.

Practical

Generate an output with a fully specified format. Aim for zero reformatting needed.

Workplace Challenge

Add full output specification to your three task prompts. Create a reusable template for one recurring task.
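
One way to make "zero reformatting needed" checkable is a small, mechanical check of a returned output against the agreed specification. This is a sketch only; the spec values and section names below are invented examples, not part of the program:

```python
def check_spec(text, max_words=150, required_sections=("Status", "Risks", "Next steps")):
    """Return a list of spec violations; an empty list means the output meets the spec."""
    issues = []
    if len(text.split()) > max_words:
        issues.append(f"over {max_words} words")
    for section in required_sections:
        if section not in text:
            issues.append(f"missing section: {section}")
    return issues

draft = "Status: on track.\nRisks: none new this week.\nNext steps: finalise the report."
print(check_spec(draft))  # prints [] when the draft meets the spec
```

A check like this turns a vague "looks about right" review into a pass/fail step that can live inside a reusable template.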

Reducing rework. Participants who complete this week properly report that editing time drops sharply.

Week 7 — Iterative Refinement & Interaction Mode Adaptation (2.4 · 2.6)

Competencies

2.4 Iterative Refinement
2.6 Interaction Mode Adaptation

Session Focus

Good AI work is multi-step collaboration, not single-exchange prompting. How to diagnose gaps and issue targeted follow-up prompts. When to keep refining, and when to accept or reframe.

Practical

Improve a weak output across three deliberate iterations. Adapt communication style across different AI modes.

Workplace Challenge

Use iterative prompting on one of your three tasks. Plan the iterations before starting. Track what changed at each step.

Description complete. Three tasks with proper prompts, output specs, and iterative workflows. Discernment begins next.

Domain 3 — Discernment
Week 8 — Output Accuracy Evaluation & Reasoning and Logic Scrutiny (3.1 · 3.3)

Competencies

3.1 Output Accuracy Evaluation
3.3 Reasoning and Logic Scrutiny

Session Focus

How AI hallucinations occur and why they are hard to detect. Systematic accuracy review before acting on outputs. Evaluating reasoning quality and identifying unsupported conclusions.

Practical

Audit AI outputs — identify factual errors, fabricated references, and logical gaps.

Workplace Challenge

Run your three tasks through AI and apply a critical accuracy review before using any output this week.

The trust shift. Many people realise this week how much they were relying on unverified AI output in real work.

Week 9 — Relevance and Fit Assessment & Missing Context Identification (3.2 · 3.5)

Competencies

3.2 Relevance and Fit Assessment
3.5 Missing Context Identification

Session Focus

Why accurate outputs can still be wrong for the context. Evaluating against audience, purpose, and scope. Identifying what is missing — not just what is present.

Practical

Evaluate outputs for contextual fit. Identify omissions and improve one output for a specific audience.

Workplace Challenge

Evaluate your three task outputs for relevance and completeness. Improve one that was accurate but not fit for purpose.

Beyond accuracy. Not 'is this right?' but 'is this right for us, right now?' A fundamentally different question.

Week 10 — Bias and Limitation Recognition & Appropriate Use Judgement (3.4 · 3.6)

Competencies

3.4 Bias and Limitation Recognition
3.6 Appropriate Use Judgement

Session Focus

How AI outputs can reflect bias, limited perspectives, or systematic omissions. Making sound use/revise/discard decisions based on risk, audience, and context.

Practical

Identify bias and framing issues. Make use/revise/discard decisions across a set of risk scenarios.

Workplace Challenge

Review one output from your three tasks before sharing externally. Apply a bias check and document the use decision.

Discernment complete. Outputs evaluated across accuracy, fit, completeness, bias, and use. Diligence begins next.

Domain 4 — Diligence
Week 11 — Transparency and Disclosure & Accountability and Ownership (4.1 · 4.5)

Competencies

4.1 Transparency and Disclosure
4.5 Accountability and Ownership

Session Focus

When and how to disclose AI involvement. Professional accountability always rests with the human — not the tool.

Practical

Evaluate AI use scenarios for disclosure. Accountability case studies — who owns the output?

Workplace Challenge

For each of your three tasks, document where AI disclosure is required and confirm you hold full accountability for the outputs.

Professional responsibility. AI changes how you work. Not who is responsible for the result. That is always you.

Week 12 — Data Privacy and Security & Intellectual Property and Attribution (4.2 · 4.3)

Competencies

4.2 Data Privacy and Security in AI Use
4.3 Intellectual Property and Attribution

Session Focus

What data should never be shared with AI tools. How to handle sensitive information. IP and attribution considerations for AI-generated content.

Practical

Identify data risk in AI interaction scenarios. Evaluate IP considerations across common AI use cases.

Workplace Challenge

Review the workflows built around your three tasks — apply a data privacy check and adjust anything that needs to change.

Safe practice by design. Risk addressed inside the workflow — not bolted on as an afterthought.

Week 13 — Ethical Consequence Awareness & Continuous and Critical Reflection (4.4 · 4.6)

Competencies

4.4 Ethical Consequence Awareness
4.6 Continuous and Critical Reflection

Session Focus

Ethical implications of AI use in tasks that affect people. Why AI fluency requires ongoing self-assessment, not a fixed end state.

Practical

Analyse ethical AI scenarios. Structured reflection on how your AI use has evolved from Week 2 to Week 13.

Workplace Challenge

Write a short reflection on how your three tasks and AI approach have changed. Draft your personal AI practice plan for Week 14.

The full picture. Twelve capability layers on the same three tasks. That is what makes this a capability program, not a course.

Closing Bookend
Week 14 — Conclusion, Reflection & Reassessment (No challenge)

What Happens

Repeat the baseline capability assessment across all four domains. Compare Week 1 and Week 14 profiles. Each participant shares their most improved workflow. Peer group reflection. Academy completion recognition.

What You Leave With

Updated capability profile showing measurable progression. Three refined AI-assisted workflows. A documented prompt library from your own experiments. A personal AI practice plan for what comes next.

Twelve improvement cycles. One result. Workflows genuinely redesigned — not just discussed. That is the difference between a capability program and a course.

Capability Institute

capabilityinstitute.com

AI Capability Academy — Scale Program Guide