Every teaching week follows the same fixed format — 30 minutes concept, 20 minutes practical, 10 minutes workplace challenge. The four domains unfold in sequence across 12 teaching weeks. Weeks 1 and 14 are bookend sessions for assessment and reflection.
The most important design choice: your three tasks never change. Every competency gets applied to the same real work from your job. Capability compounds through repetition, not variety. By Week 13 you have refined the same workflows through twelve distinct capability lenses.
30' concept, 20' practical, 10' challenge. Every teaching week without variation.
Chosen in Week 1. Applied across all 12 teaching weeks. Never changed.
All 24 competencies covered across twelve teaching weeks.
Groups of 3–4. Same people for all 14 weeks. Share experiments each session.
Baseline in Week 1. Reassessment in Week 14. Improvement documented.
Works with Claude, ChatGPT, Copilot, Gemini — whatever your team already uses.
Two competencies per teaching week. Your three tasks progressively refined across every domain.
Overview of the AI Literacy Competency Framework and all four domains. How the academy works. Baseline capability self-assessment. Task Discovery Exercise — choose your three tasks. Peer group formation.
Baseline capability profile across all four domains. Three confirmed real tasks as your program focus. Fixed peer group for 14 weeks.
The Three Task Rule. Every competency from all 12 teaching weeks is applied to these same tasks. Capability compounds through repetition, not variety.
What AI can and cannot reliably do. How to assess whether a task should involve AI at all. Common AI failure modes in workplace settings.
Classify workplace tasks for AI suitability. Apply the same filter to your three tasks.
Make your first AI attempt on one of your three tasks. Document what worked and what did not.
First contact. Judgment before prompting. Does AI belong here at all? Everything else builds on this question.
Why AI fails on vague or whole tasks. Breaking work into AI-ready and human-required parts. Defining a clear outcome before engaging AI.
Break a project into human and AI steps. Write a goal statement for one task before running any AI.
Revisit the Week 2 task, this time decomposed and goal-first. Compare the output with last week's.
The compound effect begins. Same task, structured differently. Most participants have their first breakthrough this week.
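To make the decomposition idea concrete, here is a minimal Python sketch; the task, the individual steps, and the goal statement are invented purely for illustration and are not part of the programme materials.

```python
# Illustrative only: a hypothetical "monthly status report" task split into
# AI-ready and human-required steps, with the outcome defined up front.
goal_statement = (
    "Produce a two-page monthly status report for the leadership team, "
    "covering progress, risks, and next steps, in our standard template."
)

steps = [
    {"step": "Gather raw project updates from the team",          "owner": "human"},
    {"step": "Summarise updates into a first-draft narrative",    "owner": "ai"},
    {"step": "Check figures and dates against source systems",    "owner": "human"},
    {"step": "Reformat the checked draft into the report template", "owner": "ai"},
    {"step": "Final review and sign-off before sending",          "owner": "human"},
]

print(goal_statement)
for s in steps:
    print(f"[{s['owner'].upper():5}] {s['step']}")
```

The point of the split is simply to decide, before opening any AI tool, which steps are safe to hand over and which must stay with you.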
Automation, augmentation, and agency — choosing the right mode. How much human review different tasks require. Designing review checkpoints into workflows.
Map your three tasks to the correct AI mode. Design a human-in-the-loop workflow.
Apply AI to one real task. Deliberately choose the mode. Document the oversight level applied.
Delegation complete. From random experimentation to structured delegation. The foundation for Description is now in place.
Structured prompt framework: Goal, Context, Constraints, Output format. Why AI produces generic outputs without context.
Rewrite weak prompts. Run the same task with and without context — compare outputs.
Rewrite the prompts for your three tasks using the structured framework. Add role, purpose, audience, and constraints.
First description layer. Significant quality jumps are common this week. Cause-and-effect between instruction and output becomes visible.
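For readers who want to see the framework applied, the sketch below assembles a prompt from the four parts named above. It is a hedged, tool-agnostic illustration: the function name and all example values are hypothetical, and the same structure can be typed directly into Claude, ChatGPT, Copilot, or Gemini.

```python
def build_prompt(goal, context, constraints, output_format):
    """Assemble a single prompt from the four framework parts: Goal, Context, Constraints, Output format."""
    constraint_lines = "\n".join("- " + c for c in constraints)
    return (
        "Goal: " + goal + "\n\n"
        "Context: " + context + "\n\n"
        "Constraints:\n" + constraint_lines + "\n\n"
        "Output format: " + output_format
    )

# Example values are invented for illustration only.
print(build_prompt(
    goal="Draft an internal announcement about the new expense policy.",
    context="Audience is all staff; the policy takes effect on 1 July; tone is plain and practical.",
    constraints=["Maximum 200 words", "No jargon", "Leave a placeholder link to the full policy"],
    output_format="A subject line followed by three short paragraphs.",
))
```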
Defining format, tone, length, and structure upfront. How worked examples show AI what good looks like. How constraints keep outputs within acceptable parameters.
Generate an output with a fully specified format. Aim for zero reformatting needed.
Add full output specification to your three task prompts. Create a reusable template for one recurring task.
Reducing rework. Participants who complete this week's work properly report that editing time drops sharply.
Good AI work is multi-step collaboration, not single-exchange prompting. How to diagnose gaps and issue targeted follow-up. When to keep refining vs accept or reframe.
Improve a weak output across three deliberate iterations. Adapt communication style across different AI modes.
Use iterative prompting on one of your three tasks. Plan the iterations before starting. Track what changed at each step.
Description complete. Three tasks with proper prompts, output specs, and iterative workflows. Discernment begins next.
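A minimal sketch of what a planned, tracked iteration might look like if written down as a script. Everything here is hypothetical: ask_model is a stand-in for whichever assistant you actually use, and the three iteration steps are invented for illustration.

```python
# Sketch of a planned, tracked three-step iteration on one task.
def ask_model(prompt):
    return "<model response to: " + prompt[:40] + "...>"  # placeholder, not a real API call

planned_iterations = [
    "Draft the client summary using the structured Goal/Context/Constraints prompt.",
    "Tighten the draft: cut to 150 words and remove hedging language.",
    "Adjust the tone for a non-technical audience and end with one clear recommendation.",
]

draft = ""
change_log = []
for step, follow_up in enumerate(planned_iterations, start=1):
    prompt = follow_up if step == 1 else follow_up + "\n\nCurrent draft:\n" + draft
    draft = ask_model(prompt)
    change_log.append("Iteration " + str(step) + ": " + follow_up)

print("\n".join(change_log))
```

Planning the follow-ups before the first exchange, and logging what changed at each step, is the habit the week is building; the code is only one way to record it.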
How AI hallucinations occur and why they are hard to detect. Systematic accuracy review before acting on outputs. Evaluating reasoning quality and identifying unsupported conclusions.
Audit AI outputs — identify factual errors, fabricated references, and logical gaps.
Run your three tasks through AI and apply a critical accuracy review before using any output this week.
The trust shift. Many people realise this week how much they were relying on unverified AI output in real work.
Why accurate outputs can still be wrong for the context. Evaluating against audience, purpose, and scope. Identifying what is missing — not just what is present.
Evaluate outputs for contextual fit. Identify omissions and improve one output for a specific audience.
Evaluate your three task outputs for relevance and completeness. Improve one that was accurate but not fit for purpose.
Beyond accuracy. Not 'is this right?' but 'is this right for us, right now?' A fundamentally different question.
How AI outputs can reflect bias, limited perspectives, or systematic omissions. Making sound use/revise/discard decisions based on risk, audience, and context.
Identify bias and framing issues. Make use/revise/discard decisions across a set of risk scenarios.
Review one output from your three tasks before sharing externally. Apply a bias check and document the use decision.
Discernment complete. Outputs evaluated across accuracy, fit, completeness, bias, and use. Diligence begins next.
When and how to disclose AI involvement. Professional accountability always rests with the human — not the tool.
Evaluate AI use scenarios for disclosure. Accountability case studies — who owns the output?
For each of your three tasks, document where AI disclosure is required and confirm you hold full accountability for the outputs.
Professional responsibility. AI changes how you work. Not who is responsible for the result. That is always you.
What data should never be shared with AI tools. How to handle sensitive information. IP and attribution considerations for AI-generated content.
Identify data risk in AI interaction scenarios. Evaluate IP considerations across common AI use cases.
Review the workflows built around your three tasks — apply a data privacy check and adjust anything that needs to change.
Safe practice by design. Risk addressed inside the workflow — not bolted on as an afterthought.
Ethical implications of AI use in tasks that affect people. Why AI fluency requires ongoing self-assessment, not a fixed end state.
Analyse ethical AI scenarios. Structured reflection on your AI use evolution from Week 2 to Week 13.
Write a short reflection on how your three tasks and AI approach have changed. Draft your personal AI practice plan for Week 14.
The full picture. Twelve capability layers on the same three tasks. That is what makes this a capability program, not a course.
Repeat the baseline capability assessment across all four domains. Compare Week 1 and Week 14 profiles. Each participant shares their most improved workflow. Peer group reflection. Academy completion recognition.
Updated capability profile showing measurable progression. Three refined AI-assisted workflows. A documented prompt library from your own experiments. A personal AI practice plan for what comes next.
Twelve improvement cycles. One result. Workflows genuinely redesigned — not just discussed. That is the difference between a capability program and a course.