Welcome to AI
A practical, no-jargon introduction to artificial intelligence for secondary school teachers. Ten modules covering everything from what AI is, to using it confidently in your classroom every day.
What you will learn
This course covers ten modules. By the end, you will be able to:
- Explain what AI is and what it is not.
- Describe how large language models work in plain language.
- Understand what generative AI can and cannot do.
- Know how memory and context work in each of the four major tools.
- Write prompts that consistently produce useful, teacher-specific outputs.
- Choose the right AI tool for a given classroom task.
- Use AI for lesson planning, differentiation, feedback, and parent communications.
- Use AI responsibly, with data protection built in at every step.
How long does this take?
Each module takes five to ten minutes to read. The full course takes about 60 to 75 minutes, though you can pause and return at any time. Use the sidebar to jump to any module.
What Is AI?
Understanding the term that is on everyone's lips — and what it actually means for your work.
You have heard "AI" everywhere — in news headlines, product pitches, and staffroom conversations. But what does it actually mean?
AI refers to computer systems that can perform tasks that normally require human intelligence — such as understanding language, recognising patterns, making decisions, or generating content.
A calculator does arithmetic. AI thinks, reasons, and creates — to varying degrees and with important limitations.
AI is not new
AI has been developing for decades. What changed recently is that it became dramatically more capable and broadly accessible. The following timeline shows the key stages.
1950s to 1980s — The early days
Researchers built rule-based systems. Computers could follow instructions but not adapt or learn. Chess programs and early speech recognition were the main results.
1990s to 2010s — Machine learning takes over
Instead of writing every rule by hand, engineers fed computers large amounts of data and let them find patterns. Spam filters, Netflix recommendations, and Google Search all use this approach.
2017 to 2022 — The Transformer revolution
A new architecture called Transformers made AI dramatically better at understanding language. This led directly to tools like GPT-3 and DALL-E.
2022 to present — The accessible AI era
ChatGPT launched in November 2022 and reached 100 million users in 60 days. AI moved from research labs to everyday tools. This is where we are now.
Three types of AI you will encounter
Most of the AI you meet in the workplace falls into one of the following three categories.
Narrow AI
Designed to do one thing very well. Your spam filter, Spotify's recommendations, and facial recognition on your phone are all examples of narrow AI.
Generative AI
AI that can create new content — text, images, code, or audio. ChatGPT, Claude, and Gemini are generative AI tools. This is what this course focuses on.
AGI (theoretical)
Artificial General Intelligence — AI as capable as a human across every task. This does not exist yet. Current AI tools are far from this stage, despite how they can sometimes appear.
What AI is not
It is not a database you are querying. It is not always correct. It does not "know" things — it generates plausible responses based on patterns. It does not have opinions, feelings, or consciousness. These distinctions matter when you use it at work.
Knowledge check
Which best describes what "AI" means today in tools like ChatGPT or Gemini?
Large Language Models
The technology powering ChatGPT, Claude, Gemini, and Copilot — explained without requiring a technical background.
When people say "AI" today, they usually mean a type of system called a Large Language Model, or LLM. This is what powers ChatGPT, Claude, Gemini, and the AI inside Microsoft Copilot.
How does an LLM work?
Imagine reading every book, article, forum post, and website ever written — billions of documents. Your brain absorbs all the patterns of how words relate to each other, how sentences are structured, how arguments are made. When someone asks you a question, you generate an answer based on all those absorbed patterns. That is roughly what an LLM does.
It is a pattern-completion engine operating at enormous scale.
More precisely, an LLM is trained to predict: "given these words, what word is most likely to come next?" — millions of times across billions of examples. When this training is done at sufficient scale, the model learns to reason, argue, summarise, translate, write code, and create.
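The "predict the next word" idea can be seen in a toy sketch. This is not how a real LLM works internally (real models use neural networks with billions of parameters), but the counting version below captures the same objective: given these words, what usually comes next? The training text and predictions here are invented purely for illustration.

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count which word follows which in some training text.
training_text = (
    "the cat sat on the mat the cat ate the fish "
    "the dog sat on the rug"
).split()

follows = defaultdict(Counter)
for current, nxt in zip(training_text, training_text[1:]):
    follows[current][nxt] += 1

def predict_next(word):
    """Return the word most frequently observed after `word`."""
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))   # "cat" — seen after "the" more often than any other word
print(predict_next("sat"))   # "on"
```

Real models do this over trillions of words and with far richer context than a single preceding word, which is why they can produce whole coherent documents rather than one plausible word at a time.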
The three stages of training
Every major AI model goes through the following stages before you use it.
Stage 1 — Pre-training
The model reads enormous amounts of text from the internet, books, code, and encyclopaedias. It learns language patterns. A model like GPT-4 or Claude is trained on trillions of words.
Stage 2 — Fine-tuning
Human trainers rate responses as helpful or unhelpful. The model is adjusted to produce better, safer, more useful outputs. This process is called RLHF — Reinforcement Learning from Human Feedback.
Stage 3 — Deployment
The trained model is made available to users. You send it text. It sends text back. The intelligence is built in — the model does not automatically get smarter from your conversations unless the company specifically designs it to do so.
Two limitations that matter every day
The following limitations are important to keep in mind whenever you use AI tools at work.
LLMs can hallucinate
LLMs generate plausible-sounding text. They do not "know" facts — they predict words. This means they can confidently produce wrong information. Always verify important facts, especially names, numbers, dates, and citations.
Knowledge cutoff dates
LLMs have a training cutoff date. They do not know about events after they were trained, unless given tools to search the web. Always ask yourself: does this question require current information?
Knowledge check
If you ask an LLM about a news event from this week, what should you expect?
Generative AI
AI that creates — text, images, code, audio, video. What does that mean for teachers?
Generative AI is the branch of AI that creates new content. Rather than just classifying or predicting (for example, "is this email spam?"), it generates something that did not exist before — a paragraph, a piece of code, an image, or an email draft.
What can generative AI do?
In practice that means drafting a paragraph of text, producing an image from a description, writing working code, or generating audio and video, each created on demand rather than retrieved from a database.
What generative AI means for your role
Most teaching staff will interact with generative AI primarily through text. Here is what that looks like day to day.
Emails
Draft, respond to, or polish emails to parents and colleagues in seconds. Give the AI context about the situation and the recipient; it provides a professional first draft.
Documents
Summarise long reports, create meeting notes, draft proposals, and convert bullet points into polished prose.
Thinking partner
Work through a decision, get a contrary view, or generate options you had not considered. AI is a useful sounding board at any stage of a project.
Code
Generate, explain, debug, and review code. Developers can work significantly faster with AI as a pair programming partner.
Think of it as a brilliant but overconfident intern
Generative AI is fast, willing to attempt anything, and remarkably capable — but it needs clear direction, it can misunderstand context, and it sometimes confidently produces things that are wrong. Your job is to guide it, check its work, and use your own expertise to refine the output.
Knowledge check
A colleague asks: "Is AI making up the content it writes, or is it finding answers from a database?" What is the correct answer?
Memory and Context
Why the AI seems to "forget" you — and how each tool handles this differently.
One of the most confusing things about AI tools is memory. You might have a useful conversation with Claude on Monday, then come back on Tuesday and it has no idea who you are. This is not a bug — it is by design. Understanding how memory works will save you a lot of frustration.
The context window
Think of the AI's memory like a whiteboard. Each conversation starts with a blank whiteboard. As you chat, the conversation fills it up. The AI can see everything on the whiteboard — but when the session ends, the whiteboard is wiped. Nothing carries over to the next session.
The technical term for this whiteboard is the "context window" — the amount of text the model can see at once during a conversation. The diagram below shows how a single conversation fills the context window.
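The whiteboard analogy can be sketched in a few lines of code. Real tools measure the window in tokens rather than words, and the limit below is invented for illustration, but the behaviour is the same: once the window is full, the oldest messages simply fall out of view.

```python
WINDOW_WORDS = 20  # real context windows hold many thousands of tokens

def visible_history(messages, limit=WINDOW_WORDS):
    """Keep the most recent messages whose total word count fits the window."""
    kept, used = [], 0
    for message in reversed(messages):   # walk backwards from the newest
        words = len(message.split())
        if used + words > limit:
            break                        # older messages fall off the whiteboard
        kept.append(message)
        used += words
    return list(reversed(kept))          # restore chronological order

chat = [
    "Hi, I teach Year 9 science and need a starter activity on photosynthesis",
    "Make it shorter please",
    "Now add two quiz questions at the end",
]
print(visible_history(chat))  # the long opening message no longer fits
```

Notice that the first message, the one that explained who you are and what you wanted, is the first to disappear. That is why very long conversations can seem to "forget" their own beginning.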
How each tool handles memory
Each AI product has its own approach to persisting information beyond a single conversation. The following sections describe what each tool offers.
ChatGPT memory
ChatGPT (Plus and Team plans) includes a Memory feature. When switched on, ChatGPT notes facts like your job, preferences, and past context, then recalls them in future conversations.
Where to find it: Settings → Personalisation → Memory. You can view, delete, or switch off individual memories at any time.
Also useful: Custom Instructions, found under Settings → Personalisation. This is a text field where you describe yourself and how you want ChatGPT to respond. These instructions apply to every new conversation automatically.
Gemini memory and Gems
Gemini uses Gems — custom AI personas you configure with specific instructions and a defined purpose. Think of a Gem as a specialist version of Gemini for a particular task or role.
Where to find it: In Gemini at gemini.google.com, look for Gems in the left sidebar. You can create your own or use Google's pre-built Gems.
Gemini can also access your Google Workspace data — Gmail, Drive, and Calendar — through Extensions. This allows it to reference your actual emails and documents. Extensions must be enabled deliberately in settings.
Claude memory and Projects
Claude uses Projects to provide persistent memory within a defined workspace. A Project stores custom instructions, uploaded files, and a shared conversation history — all visible to Claude throughout every conversation in that Project.
Where to find it: In Claude.ai, select Projects from the left sidebar. Create a Project, add a custom system prompt, and upload relevant documents such as knowledge bases or style guides.
Claude also has a standalone Memory feature that notes facts about you across conversations, similar to ChatGPT's Memory.
Microsoft Copilot and organisational context
Microsoft 365 Copilot is deeply integrated with your organisation's Microsoft 365 data. It can access emails, Teams chats, SharePoint files, meeting recordings, and calendar entries — within your organisation's permissions boundary.
When you use Copilot in Teams, it knows the context of your current meeting. In Outlook, it knows the email thread. In Word, it knows your document. This contextual integration is Copilot's biggest differentiator from standalone AI tools.
Note for teachers: once your school has Microsoft 365 Copilot licences, Copilot will be able to search across SharePoint, OneDrive, and Teams meetings — making it ideal for finding information across the organisation.
Knowledge check
You want Claude to always know you work as a teacher and to write in a professional but warm tone. What is the best approach?
Writing Good Prompts
The single skill that makes the biggest difference to the quality of AI output.
A prompt is the text you type to an AI tool. The quality of your prompt determines the quality of the response. This is sometimes called prompt engineering — though "clear communication" is a more accurate description of what it actually involves.
Good prompting is like good briefing. A brilliant freelancer given a one-line brief will produce a generic result. Given a detailed brief with context, tone, audience, and examples, they will produce something excellent. AI works in exactly the same way.
Five elements that make a strong prompt
Use the following five elements to structure any prompt that matters.
Context
Tell the AI who you are, what you are working on, and relevant background. For example: "I am a secondary school teacher preparing materials for a mixed-ability Year 9 class..."
Length and format
Specify how long and what format you want. For example: "...in three short bullet points" or "as a 200-word professional email" or "as a bulleted list followed by a one-paragraph summary."
Examples
If you have a preferred style, share a sample. For example: "Here is an email I wrote last month that represents our tone well: [paste it here]"
Audience and tone
Specify who will read this and what tone is appropriate. For example: "The audience is a university IT director who is not technical. Tone: professional, warm, and jargon-free."
Role
Give the AI a role to adopt. For example: "You are a senior technical writer specialising in WCAG accessibility documentation."
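The five elements above can be combined into one reusable skeleton. The function and example values below are illustrative placeholders, not part of any tool's API; the point is simply that a strong prompt is assembled from the same named parts every time.

```python
# A sketch of the five-element prompt structure. Every value here is a
# placeholder to swap for your own context.
def build_prompt(role, context, task, audience_tone, length_format, example=""):
    parts = [
        f"You are {role}.",
        f"Context: {context}",
        f"Task: {task}",
        f"Audience and tone: {audience_tone}",
        f"Length and format: {length_format}",
    ]
    if example:
        parts.append(f"Here is a sample in the style I want:\n{example}")
    return "\n".join(parts)

prompt = build_prompt(
    role="an experienced secondary school teacher",
    context="I teach Year 10 geography and we have just covered rivers.",
    task="Write five retrieval practice questions with an answer key.",
    audience_tone="Year 10 students; clear and encouraging.",
    length_format="A numbered list of questions, then the answers below.",
)
print(prompt)
```

You would paste the resulting text into the chat box as-is; no tool requires prompts to be built this way, but thinking in these five slots keeps you from leaving one out.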
See the difference a good prompt makes
Compare a weak prompt with a strong one for the same task. Weak: "Write an email to a parent." Strong: "I am a Year 8 form tutor. A parent is concerned about their child's homework load. Draft a warm, professional reply of about 150 words that acknowledges the concern and offers a phone call." The strong version supplies context, role, audience, tone, and length; the weak one leaves all of that to chance.
Iteration is the normal way to work
You rarely get the perfect output on the first attempt — and that is expected. Treat AI interaction as a conversation. Follow-up instructions such as "make it shorter", "make it more formal", "add a section about pricing", or "the third paragraph is too stiff — try again with more warmth" all work well.
Prompting tips by role
For parent and carer emails: include the parent or carer's concern and the outcome you want. For marketing copy: include the target audience, word count, and a link to existing brand guidelines. For developers: include the programming language, the relevant code context, and what you have already tried.
Ready-to-use prompt templates
The following six patterns cover the majority of a teacher's everyday AI tasks. Copy, adapt, and save the ones you use most.
Rewrite
Use when you have a draft that needs improving.
Summarise
Use when you need to distil a long document quickly.
Explain
Use when you need to translate technical content for a non-technical audience.
Generate options
Use when you need a starting list of ideas.
Draft from notes
Use when you have rough notes and need a finished document.
Review and improve
Use when you want critical feedback on something you have written.
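One practical way to keep templates like these is as reusable skeletons with blanks you fill in each time. The wording below is a sketch of what two of them might look like, not the official template text from this course.

```python
# Hypothetical prompt skeletons with {placeholders} to fill in each time.
TEMPLATES = {
    "rewrite": (
        "Rewrite the following text to be {goal}. Keep the meaning the same. "
        "Audience: {audience}.\n\nText:\n{text}"
    ),
    "summarise": (
        "Summarise the following document in {length} for {audience}. "
        "Focus on decisions and action points.\n\nDocument:\n{text}"
    ),
}

prompt = TEMPLATES["rewrite"].format(
    goal="shorter and warmer",
    audience="a parent with no technical background",
    text="Please be advised that homework submission is mandatory.",
)
print(prompt)
```

Keeping templates somewhere central (a document, a note, or a Project's custom instructions) means you stop rebuilding the same brief from scratch every week.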
Knowledge check
You ask an AI to "write a support response" and receive something generic. What is the best next step?
Tools at a Glance
ChatGPT, Gemini, Claude, and Microsoft Copilot — what makes each one different?
There are now many AI tools available and the lines between them are blurring. But each has distinct strengths, origins, and design philosophies. The following overview will help you choose the right tool for the task.
ChatGPT
Made by OpenAI. The most widely used AI tool globally. Excellent general-purpose assistant with image generation and web search. A free tier is available; Plus costs $20 per month.
Gemini
Made by Google. Deeply integrated with Google Workspace — it can read your Gmail, Drive, and Calendar. Best if your team relies heavily on Google products.
Claude
Made by Anthropic. Particularly strong at long-document analysis, nuanced writing, and following complex instructions. Known for careful, detailed responses.
Microsoft Copilot
Made by Microsoft, powered by OpenAI. Lives inside Teams, Outlook, Word, and Excel. Best for searching your organisation's Microsoft 365 data.
Choosing the right tool for the task
The following sections list each tool's best use cases.
ChatGPT — best for
- General writing, drafting, brainstorming, and ideation.
- Generating images with DALL-E (Plus plan).
- Searching the web for current information when web search is enabled.
- Creating custom GPTs to automate specific workflows.
- Teams and Enterprise plans with data privacy protections for organisations.
At Brickfield: A good starting point for any team member new to AI. The free tier is capable for everyday tasks.
Gemini — best for
- Teams using Google Workspace — Gmail, Drive, Meet, and Calendar.
- Searching across your emails and documents using natural language.
- In-document assistance in Google Docs, Sheets, and Slides.
- Research with Google Search integration built in.
At Brickfield: If the team uses Google Workspace significantly, Gemini could be powerful for finding information across shared Drives and email threads.
Claude — best for
- Reading and analysing long documents — contracts, reports, research papers.
- Writing that requires nuance, care, and following detailed multi-step instructions.
- Complex technical writing and documentation.
- Developer tasks including code generation, debugging, and architecture discussion.
- Tasks requiring careful handling of sensitive or nuanced content.
At Brickfield: Currently the primary tool for Gavin and the CTO. Particularly suited to the technical depth required in accessibility documentation. Claude Projects align well with how Brickfield operates.
Microsoft Copilot — best for
- Searching across your organisation's SharePoint, OneDrive, and Teams data.
- Summarising meeting recordings and generating action items from Teams calls.
- Drafting emails and documents with awareness of prior Microsoft 365 context.
- Excel data analysis and PowerPoint slide creation.
At Brickfield: The most likely choice for customer success and sales teams once Microsoft 365 Copilot is licensed. It can surface customer conversation history, SharePoint files, and meeting notes in one place.
You do not have to pick just one
Different tools have different strengths. Many people use Copilot for internal Microsoft 365 data, Claude or ChatGPT for deep writing and analysis, and a third tool for specific tasks. This is normal and practical.
Which tool for which teaching task?
The following table maps the most common teaching tasks to the tool best suited for each. Ratings reflect each tool's current strengths — all four can handle all tasks, but some are meaningfully better for specific work.
| Task | ChatGPT | Claude | Gemini | Copilot |
|---|---|---|---|---|
| Drafting parent and carer emails | Excellent | Excellent | Good | Good |
| Summarising lesson observations | Excellent | Excellent | Good | Excellent |
| Reviewing curriculum documents | Good | Excellent | Good | Good |
| Writing lesson plans and schemes of work | Excellent | Excellent | Good | Good |
| Creating quiz questions and assessments | Excellent | Excellent | Good | Good |
| Writing differentiated resources | Good | Excellent | Good | Good |
| Writing subject explanations and starters | Excellent | Excellent | Good | Good |
| Searching school policy documents | Not applicable | Not applicable | Not applicable | Excellent |
| Turning rough notes into lesson plans | Excellent | Excellent | Good | Excellent |
| Research and current information | Good | Good | Excellent | Good |
Note: "Excellent" and "Good" reflect current relative strengths, not hard limits. All four tools are capable across all these tasks. For M365 document search, Copilot is the only tool with access to your organisation's internal data.
Writing and Revision
Practical AI workflows for marketing, customer success, sales, and anyone who writes anything.
Writing is where most people will get immediate, tangible value from AI. Drafting emails, polishing documents, repurposing content, responding to queries — all of these can be accelerated significantly with the right approach.
Parent and carer emails
Provide the AI with the parent or carer's concern, relevant background, and the tone you want. Ask it to draft a response. Review, adjust, and send. The following example shows a well-structured prompt for a parent email.
AI in different roles
The following examples show how different roles on a team are using AI today. Something close to your own work is probably reflected here.
Customer Success
Drafting accessibility remediation advice for clients — paste in a raw audit finding and ask AI to turn it into a clear, actionable recommendation in plain English. Also useful for summarising long support threads before escalating, and drafting renewal or at-risk emails where tone matters a great deal.
Sales
Turning rough discovery call notes into structured proposals within minutes. Paste your notes, describe the client's key pain points, and ask for a first-draft proposal structure. Also strong for competitive research summaries and preparing talking points before a demo.
Marketing
Generating accessibility blog post outlines, drafting LinkedIn post variations for A/B testing, and repurposing a single piece of content into multiple formats (email, social post, short article). Always add specific Brickfield data points and customer examples that AI cannot know.
Engineering
Explaining Moodle plugin code to non-developer stakeholders, generating docblock comments, and drafting release notes from a git commit list. See the Developers module for a full treatment of coding use cases.
Anyone working with accessibility documentation
Summarising WCAG updates into plain-language summaries for clients, drafting accessibility policy sections, and explaining audit findings to non-technical stakeholders. Always verify WCAG claims against the official W3C documentation — AI can misstate or simplify criteria.
Marketing copy
For LinkedIn posts, blog introductions, and newsletter copy: give the AI your key message, the intended audience, the tone, and a word count. Always add your own voice afterwards — AI gives you a strong starting draft, not a final product.
Revision and improvement
You do not need to generate content from scratch. Paste in your own draft and ask the AI to improve it. The following revision instructions all work well.
- "Make this more concise — aim for half the length."
- "Improve the flow — some sentences feel disconnected."
- "This reads like a template. Make it feel more personal and direct."
- "The third paragraph is unclear. Rewrite it so the main point is in the first sentence."
An effective revision workflow
The following five steps describe how to get consistently good results from AI-assisted writing.
- Write a rough draft yourself first. AI revision of your own words produces better results than pure generation from scratch.
- Paste your draft and tell the AI specifically what to improve: tone, length, clarity, or formality.
- Compare the AI version with yours. Take the best elements from each.
- Make a final edit with your own voice, personal knowledge, and any relationship context the AI cannot know.
- Send. You have saved 20 to 30 minutes and the output is stronger than either version alone.
Keep your voice
AI writing tends toward the generic. Your job is to inject the specific detail, the personal relationship awareness, and the Brickfield brand voice. Use AI for structure and speed; add your own knowledge for substance and authenticity.
Knowledge check
A team member says AI-written emails are obvious and feel fake. What is most likely happening?
AI in the Classroom
Practical workflows for lesson planning, differentiation, feedback, and student-facing tasks.
This module covers the teaching tasks where AI delivers the most consistent time saving. Each workflow below explains what to give the AI, what to expect back, and what you still need to do yourself.
Lesson planning and preparation
Lesson plan from scratch
Give the AI your subject, year group, topic, duration, and any specific requirements. Ask for objectives, a starter, main activity, plenary, and suggested questions. You review and adjust — the AI handles the first draft.
Retrieval practice questions
Give the AI a topic or set of lesson notes. Ask for questions at a specific difficulty level. Request an answer key alongside.
Differentiation
Paste in your main activity or explanation. Ask the AI to produce a scaffolded support version and a stretch extension alongside the original. Never include student names.
Assessment and feedback
Feedback comment bank
Give the AI your mark scheme or success criteria and the level of the work. Ask for a bank of feedback comments you can adapt. Never paste in student work with names attached.
Knowledge organiser
Give the AI your topic and year group. Ask for key vocabulary, key facts, and key people or dates organised ready to print.
Parent and carer communications
Parent email
Describe the situation in general terms. Ask for a draft in the appropriate tone. Fill in any real names only after copying the draft out of the AI tool — never enter them into the prompt itself.
You are still the teacher. AI produces the first draft in seconds — but you bring the knowledge of your students, the understanding of what they already know, and the professional judgement to edit the output until it is genuinely right for your class.
What AI cannot do in the classroom
Cannot replace
- Knowing your specific students and their history
- Making safeguarding or pastoral judgements
- Assessing spoken contributions or practical work
- Building relationships that make students feel safe to try
- Guaranteeing factual accuracy — always check subject content
Can help with
- First drafts of any written resource
- Generating question sets and quiz banks
- Rephrasing content at different reading levels
- Drafting parent and carer communications
- Producing structured feedback templates
Knowledge check
A colleague pastes a student's full essay including their name into an AI tool and asks for feedback. What is the main concern?
Responsible Use
Using AI ethically, safely, and with integrity — especially in an accessibility-focused company.
At Brickfield, we build tools that help make digital content more accessible. Our use of AI should reflect the same values. Responsible AI use is not just about avoiding mistakes — it is about using these tools in ways that uphold quality, honesty, and respect for the people we serve.
When NOT to use AI
Knowing when to step back is as important as knowing how to use AI well. The following situations require human judgement, not AI assistance.
- Accessibility compliance decisions. AI can help draft WCAG documentation, but it cannot determine whether something meets a success criterion. Always verify against official W3C documentation and use qualified human review.
- Legal or contractual advice. AI will produce confident-sounding but legally unreliable output. Never use AI to interpret contract terms, data processing agreements, or regulatory obligations.
- Security vulnerability assessment. AI can explain vulnerabilities in general terms but should not be used to assess whether a specific system is secure.
- Final product copy without review. No AI-generated content should go to a client, prospect, or public audience without a human read. This applies especially to accessibility-themed content where errors undermine Brickfield's credibility.
- Financial decisions or forecasts. AI-generated numbers, projections, or financial summaries are not reliable without verification against source data.
- Anything involving confidential personal data. If the task requires using a real customer's name, contract details, or personal information, do not use a free AI tool.
The WCAG line
This deserves its own callout. Brickfield's reputation rests on accessibility expertise. AI can make a confident, well-written claim about a WCAG criterion that is subtly or significantly wrong. Always treat AI-generated accessibility guidance as a starting point, never as a final authority. Check it against the source.
How to verify AI output
The course has mentioned verification throughout. The following checklist makes that concrete. Run through it before using any AI-generated output in a professional context.
Before using any AI output — check these
- Facts and figures. Did AI cite a statistic, date, name, or version number? Verify it independently. Do not assume it is correct.
- Sources. If AI cited a report, paper, or article, check that the source actually exists and says what AI claims it says.
- Tone and voice. Does this sound like Brickfield? AI defaults to a generic professional register. Add warmth, specificity, and your own relationship knowledge.
- Accuracy of the brief. Did AI actually answer what you asked? It sometimes answers a slightly different question and the output feels right until you read it carefully.
- Policy alignment. Does this content reflect Brickfield's current positioning, pricing, or product capabilities? AI does not know what changed last week.
- Accessibility of the output itself. If you are publishing this content, check it meets the same standards you would apply to any Brickfield deliverable.
For developers — additional checks
- Run the code. Do not deploy AI-generated code you have not executed and tested.
- Check for security issues — hard-coded credentials, unvalidated inputs, deprecated functions.
- Test edge cases. AI-generated code often handles the happy path well and fails on edge cases.
- Understand what the code does before committing it. If you cannot explain it, do not ship it.
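The edge-case point above is easy to see with a hypothetical example. The first function below is the kind of code an AI often produces: correct on typical input, broken on an empty list. Both functions are invented purely for illustration.

```python
# Hypothetical illustration of the "happy path" problem in AI-generated code.
def average_naive(scores):
    return sum(scores) / len(scores)   # raises ZeroDivisionError on []

def average_safe(scores):
    if not scores:                     # the edge case a review should catch
        return 0.0
    return sum(scores) / len(scores)

print(average_naive([70, 80, 90]))     # 80.0 — the happy path works
print(average_safe([]))                # 0.0 — the edge case is handled
try:
    average_naive([])
except ZeroDivisionError:
    print("naive version fails on an empty list")
```

A quick test with empty, single-item, and unusual inputs catches most of these failures before they reach production.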
What must never go into any AI tool
The following categories of information must not be entered into any AI tool that is not covered by an enterprise data privacy agreement.
- Customer names, contact details, or any personal data.
- Confidential client information or contractual details.
- Internal financial data or commercially sensitive strategies.
- Passwords, API keys, or security credentials.
- Proprietary code that gives Brickfield a competitive advantage.
Accuracy and hallucination
AI can produce confident, well-written, and completely wrong information. This is called hallucination. It is particularly risky in the following situations.
- Making claims about WCAG standards — always verify against official W3C documentation.
- Citing statistics or research — check that the source exists before sharing it.
- Writing legal, medical, or technical content — verify with qualified experts.
- Generating code for production systems — review and test it thoroughly before deploying.
Accessibility and AI output
As a company whose mission is accessibility, hold AI-generated content to the same standards you apply to client deliverables. AI-generated images may lack alt text. AI-generated documents may have poor heading structure. Review all AI output for accessibility before publishing, just as you would any other content.
Transparency and disclosure
There is no universal rule requiring disclosure of AI assistance — but good practice includes the following principles.
- Do not present AI-generated content as your own original research or expert opinion.
- Do not use AI to write content that impersonates a person's unique voice without their knowledge.
- Be honest if asked directly whether AI was used to create something.
- Do not submit AI-generated work in contexts where that is prohibited, such as certain academic or contract requirements.
Bias and fairness
LLMs trained on internet text inherit biases from that data. They may produce content that reflects gender, racial, or cultural stereotypes, or that represents some demographics better than others. Always review AI-generated content critically — especially anything involving people, communities, or social topics.
Mistakes people commonly make with AI
The following are the most frequent failure patterns across all roles. Recognising them saves weeks of frustration.
Asking vague questions
"Write me something about accessibility." Without context, audience, format, or length, the output will always be generic. The prompt is the brief.
Expecting perfection first time
AI output is a starting point, not a finished product. The first response is the opening of a conversation, not the end of one. Iterate.
Copying output without reading it
AI produces fluent, confident text — which makes it easy to miss errors. Always read what you are about to send or publish; output can look fine right up until someone notices the wrong product name.
Starting over instead of iterating
If the first response is not right, continue in the same conversation with a correction: "that is too formal — make it warmer" or "you missed the point about pricing." Do not restart.
Forgetting AI has no Brickfield context
AI does not know your customers, your current deals, your product roadmap, or last week's pricing change. You have to tell it. Set up Projects or Custom Instructions to avoid repeating this every session.
Treating confident output as correct output
AI writes with equal confidence whether it is right or wrong. Fluency is not accuracy. The output that sounds most authoritative is the one most worth checking.
Knowledge check
A sales team member wants to use the free tier of ChatGPT to draft a proposal. They paste in the client's full name, annual budget figures, and detailed requirements from the discovery call. Is this acceptable?
Cheat Sheet and What's Next
Everything you need on one page. Save it, print it, share it.
Course complete
You have covered what AI is, how LLMs work, what generative AI can do, how memory and context work across four tools, how to write effective prompts, how to use AI for writing and development, and how to use AI responsibly. That is a strong foundation.
The AI Cheat Sheet
The following is your one-page reference. Everything you need to use AI well as a teacher, condensed.
Use AI for
Drafting emails and routine responses
Summarising long documents and reports
Rewriting and polishing your own drafts
Turning meeting notes into proposals
Generating ideas and outlines
Explaining technical content in plain language
Writing and debugging code
Translating and adapting content for different audiences
Do not use AI for
Accessibility compliance decisions (always verify with W3C source)
Legal or contractual interpretation
Final copy without a human review
Financial forecasts or projections
Security vulnerability assessment
Anything requiring real personal data in a free tool
Always
Verify facts, numbers, and cited sources
Read the output before sending or publishing
Add your own voice, context, and relationship knowledge
Iterate — the first response is a draft, not the answer
Set up your tool with your teaching context so you stop repeating yourself
Pick the right tool
Claude — long docs, nuanced writing, coding
ChatGPT — general tasks, image gen, web search
Copilot — your Microsoft 365 data, Teams, Outlook
Gemini — Google Workspace, Gmail, Drive, research
Six prompt templates to keep
The following six prompts cover the tasks most people do every day. Adapt them for your context.
Rewrite
Summarise
Explain
Generate ideas
Draft from notes
Review
Before you go: three things to do this week
The following three actions will move you from course-complete to actually using AI well.
- Set up your context. Create a Claude Project or Gemini Gem with your subject, year groups, and teaching context. Upload a lesson plan or two. Do it once, benefit every session.
- Try AI on one real task this week. Pick a lesson starter, a parent email, or a set of quiz questions you would have written manually. The first real use is the most instructive.
- Share what works. When you find a prompt or workflow that genuinely saves you time, share it with a colleague. Everyone benefits from collective discovery.
Teachers and AI
As a professional working with young people, you are well placed to use these tools thoughtfully. Use AI to reduce workload — and always ensure that what you produce reflects your professional judgement and your school's values.