The questions teachers actually ask
Honest answers to the things people ask at conferences, in staffrooms, and in training sessions — including the questions people are sometimes afraid to ask out loud.
Select any question to read the answer. If something is not here that you think should be, get in touch — we add questions based on what we hear at events.
The big picture
Will AI replace teachers?
No. That answer is worth saying plainly, without hedging.
AI tools are very good at generating first drafts of text. They are not good at knowing your students, reading a room, noticing that something is wrong with a child, building the kind of relationship that makes a student try harder, or making the hundreds of professional judgements a teacher makes every day.
The more interesting question is what changes. AI shifts where your time goes — less time on first drafts of worksheets and emails, more time on the things that actually require you. That is a change in workflow, not a threat to the profession.
Teaching is fundamentally relational. That is not something a language model can replicate.
Is this just a fad I can wait out?
The short version: no, it is not a fad, and waiting indefinitely is itself a decision with consequences.
Generative AI has been adopted faster than almost any previous technology. Your students are already using it. Their parents are using it at work. Schools that have no position on it are not neutral — they are leaving teachers and students to navigate it without guidance.
That said, "wait and see" in the sense of not adopting every new tool immediately is sensible. Start with one task. See if it saves you time. Build from there. You do not need to be an expert. You need to be informed enough to make good decisions for your classroom.
Can I use these tools without any technical skills?
Yes. Modern AI tools work in plain language. There is no coding, no setup, and nothing to install. If you can write an email, you can use an AI tool.
The most important skill is knowing how to write a clear, specific instruction — which is something every teacher already does every day when they write a task brief for students.
The AI mini-course on this site covers how these tools work, in plain language, with no technical background assumed.
What is the official position from the DES, the SEC, and the NCCA?
The DES position is evolving. As of 2025–2026, there is no blanket ban on AI use in Irish schools, and the Department has acknowledged that generative AI is now part of the educational landscape.
The SEC has confirmed that submitting AI-generated content in assessed work without attribution is academic dishonesty — equivalent to plagiarism. This applies to CBAs, projects, and orals as well as written exams.
The NCCA has begun engaging with AI in the context of curriculum development and assessment design. Their guidance is worth following.
The most important thing is to check for updated circulars from the DES and guidance from your school's management body (CSP, ETBI, JMB, NAPD etc.) — this space is moving quickly and guidance is being updated regularly.
Our ethics and policy page covers this in more detail.
What if my school has no AI policy yet?
First: you are not alone. The majority of Irish schools do not yet have a formal AI policy. That does not mean "anything goes" — your existing data protection, safeguarding, and professional conduct obligations still apply in full.
In the absence of a policy, the safest approach is to use AI only for tasks that involve no personal data about students or families, and to treat every output as a draft you review before it goes anywhere.
If you want to move things forward at your school, the ethics and policy page includes a summary of what a good AI policy should contain. Bringing that to a staff meeting or to your principal is a reasonable starting point.
If you are genuinely unsure whether a specific use is acceptable, ask your line manager before you do it — not after.
Students and academic integrity
Is it cheating if students use AI?
It depends on what they are doing with it and in what context.
Using AI to understand something better — asking it to explain a concept, give an example, or simplify a text — is not cheating. It is a study tool, like a textbook or a YouTube explanation.
Using AI to produce work that is submitted as the student's own in an assessed context — a CBA, a project, an essay — is academic dishonesty. The SEC is clear on this.
The grey area is in the middle, and it is genuinely grey. Using AI to restructure a paragraph? To check an argument? To improve phrasing? Schools need to be explicit about where their line is, because students cannot follow rules that have not been communicated clearly.
The most useful framing is: if the assessed task is meant to show what the student knows and can do, and AI has done the knowing and doing, then the assessment is not showing what it is supposed to show. That is the problem — not AI itself.
Can AI detection tools prove that a student used AI?
No — not as evidence of misconduct.
AI detection tools such as GPTZero and Turnitin's AI writing detection have significant false-positive rates. They have wrongly flagged genuine student work as AI-generated, particularly work by students for whom English is an additional language, and they treat short sentences and formal writing styles as suspicious. Using these tools as evidence in an academic integrity process exposes your school to serious fairness and legal risks.
If you suspect a student has submitted AI-generated work, the right approach is a conversation. Ask them to explain their thinking, talk you through their choices, or expand on a point. Genuine understanding of work is almost always apparent in conversation.
The more effective long-term response is designing assessments that make AI shortcuts less useful — see the assessment in an AI world section on the ethics page.
Should schools be teaching students to use AI?
That is increasingly the view of curriculum developers and employers — yes. Not because AI use is always appropriate, but because students who graduate without understanding how these tools work and how to use them critically are at a disadvantage.
The more useful question might be: what does responsible, critical AI use look like for a student at this level, in this subject? Teaching students to use AI well — to ask better questions, to verify outputs, to recognise when AI is wrong, and to understand when using it crosses a line — is itself a valuable skill.
The student guide on this site is designed to be shared with classes and covers this directly.
Can students use AI for homework?
The honest answer is: it depends on the task and your school's current position.
If the task is designed to assess the student's own thinking — an essay, a project, a structured analysis — then AI should not be producing the content. The student can use it to understand something they are stuck on, or to get feedback on a draft they have written, but the thinking and the writing should be theirs.
If the task is practice — revision questions, reading, checking their own understanding — AI can be a useful study partner, used in the same way they might use a tutor or a study group.
Being explicit with students about this distinction — for each task — is more useful than a blanket rule in either direction. And it is worth having that conversation with the whole class, not waiting until someone asks.
Data protection and safety
Could I get into trouble for using AI?
Potentially, yes — in specific circumstances.
Data protection is the most serious risk. If you enter personal data about students, staff, or families into a free AI tool without appropriate legal cover, you may be in breach of GDPR and the Data Protection Acts. This is not hypothetical — the DPC has investigated schools for less. The fix is simple: never put personal data into an AI prompt.
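In practice that means describing rather than naming. A made-up illustration: instead of asking "Draft a note to Seán Murphy's parents about his attendance this term", ask "Draft a note to a parent about their child's attendance this term" and add the real name yourself once the draft comes back. The tool never needs the actual details to produce a usable draft.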
Copyright is a lesser but real concern. Uploading published textbooks, third-party curriculum resources, or other copyrighted material to an AI tool without permission may infringe copyright.
Professional conduct is a risk if you use AI output without reviewing it and something harmful, inaccurate, or inappropriate reaches students or families. You remain responsible for everything you share, regardless of where the draft came from.
Used carefully — anonymised data, your own materials, everything reviewed before use — the risks are manageable. Ignore the rules and the risks are real.
Can I upload my own slides and lesson plans to an AI tool?
Generally yes, provided you have checked two things first.
No personal data. If your slides or lesson plans contain student names, class photos, individual assessment results, or any other personally identifiable information, remove it before uploading. Strip headers and footers that name the school and class if the document contains anything sensitive.
You own the content. If your materials include substantial excerpts from published textbooks, publisher worksheets, or third-party resources, check whether uploading them is permitted. Your own original content is yours to use.
The free tiers of both Claude and Gemini may use your conversations to improve their systems — though both offer settings to opt out. If this concerns you, check the current privacy settings for each tool, or use a paid tier with stronger data protections.
The data protection guide covers how to anonymise documents before uploading.
Is Microsoft Copilot automatically covered by our school's Microsoft agreement?
Not automatically. This is a common and important misunderstanding.
Your school's Microsoft 365 agreement covers the core Microsoft 365 services. Microsoft Copilot is an add-on — typically a separate licence — with its own data terms. Whether school data processed through Copilot is covered by your organisation's data processing agreement depends on the specific licence your school has, the tenancy configuration, and how Copilot has been set up by your IT administrator.
The only way to be sure is to ask your IT administrator or data protection officer directly: "Is Microsoft Copilot covered by our school's data processing agreement, and what data are we permitted to use with it?"
Do not assume. The consequences of getting this wrong involve real people's personal data.
Does the AI remember what I tell it?
By default, AI tools do not carry memory between separate conversations. Each new conversation starts fresh — the AI has no knowledge of what you discussed yesterday or last week unless you tell it again in the current conversation.
Some tools offer optional memory features (Claude Projects, ChatGPT Memory, Gemini Gems) that allow the AI to retain information across conversations. These are opt-in. If you use them, be mindful of what you are choosing to store — especially in a professional context.
Within a single conversation, the AI can see everything you have written in that session. This is called the context window. Once you close the conversation and start a new one, it starts again from zero.
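A hypothetical example: if you paste your scheme of work into a conversation on Monday, a fresh conversation on Tuesday has no idea that scheme exists. You would need to paste it in again, or store it using one of the optional memory features above.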
Using AI in practice
I tried it and the output was useless. What am I doing wrong?
This is probably the most common experience. And it is almost always a prompt problem, not a tool problem.
A vague instruction produces a vague output. "Write me a lesson plan on the French Revolution" gives the AI almost nothing to work with — no year group, no prior knowledge, no duration, no exam context, no format preferences. The output will be generic, because the instruction was generic.
The same task with a specific prompt — year group, topic, duration, prior knowledge, what you want included, what format — produces something genuinely useful. The before and after demos on this site show exactly this difference side by side.
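As a rough illustration (the classroom details here are invented): compare "Write me a lesson plan on the French Revolution" with "Write a 40-minute lesson plan on the causes of the French Revolution for a 2nd Year class who have already studied the Enlightenment. Include a retrieval starter, one source-based task, and an exit ticket, and format it as a table." The second prompt supplies a year group, prior knowledge, a duration, required contents, and a format for the AI to work with.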
Try the Prompt Lab. It builds a detailed, specific prompt for you from a few fields. That is usually enough to turn "useless" into "useful first draft".
How much time does it actually save?
It depends heavily on the task and how specific your prompts are. But the honest answer for tasks like lesson planning, retrieval quiz creation, differentiated resources, and parent communications is: a lot, once you are past the learning curve.
A retrieval starter that would take 20 minutes to write from scratch takes about three minutes with AI — two to write the prompt, one to read and fix the output. A parent email that would take 15 minutes takes about five. A three-level differentiated task that might take 40 minutes takes about ten.
None of those are zero — you still need to read, check, and sometimes edit. But across a week of teaching, the time adds up quickly. Teachers who use it consistently report getting back an hour or more per week. See the worked examples for realistic timings from real tasks.
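To put rough numbers on that (an invented week, using the timings above): three retrieval starters saving about 17 minutes each, two parent emails saving about 10 minutes each, and one differentiated task saving about 30 minutes comes to roughly 100 minutes, and those figures already include the time spent reading and fixing each output.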
Which tool should I start with?
Start with whichever one you will actually use. That sounds flippant but it is the right answer.
If your school uses Google Workspace, start with Gemini — it integrates into the tools you already have open. If you use Microsoft 365, Copilot at copilot.microsoft.com is a natural starting point. If you have no particular ecosystem, Claude and ChatGPT both have capable free tiers that work in any browser with any email address.
The prompts on this site work with all four tools. You do not need to commit to one or become an expert before you start. The tools comparison page covers each one's strengths if you want more detail.
How can I trust what it produces?
You do not trust it — you verify it. That is the right relationship with AI output.
AI tools can produce incorrect information confidently and fluently. This is called "hallucination" and it happens across all tools. It is more common in areas that require precise factual recall — specific dates, statistics, quotations, legal details, scientific data — and less common in areas like writing structure, tone, and general explanations.
A practical rule: the more specific and verifiable a claim is, the more you should check it before using it. Quotes, statistics, named research, exam board specifications, and historical dates should all be verified against a reliable source before they go near students.
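For example, if a generated quiz includes a dated quotation or a precise statistic, find it in a source you trust before it goes on a worksheet. If you cannot find the source, cut the claim rather than hope it is right.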
Think of AI output the way you would think of a capable but occasionally unreliable colleague's first draft. Useful starting point. Not the final word.
Do I need to tell students or parents that I used AI to prepare materials?
There is no legal requirement to disclose this in most cases — a teacher producing materials using AI is similar to a teacher using any other tool or resource to prepare for class.
That said, there are good professional reasons to be transparent. If you are asking students to engage critically with AI in their own work, modelling honest disclosure is consistent with that. And if an AI-generated resource contains an error that affects students, the fact that it came from AI is not a defence — you reviewed it and used it.
Some schools are developing transparency norms around AI use. Until those norms are clearer, the most defensible position is: use AI to draft, review everything carefully, and do not use anything you cannot stand behind professionally.
Won't AI just make everything generic?
It can, if used badly. And it is a legitimate concern worth taking seriously.
If teachers use AI to produce generic resources without adapting them to their class, the materials will be generic. If students use AI to produce essays without engaging with the ideas, they will produce hollow essays. If schools adopt AI without thinking about what it means for assessment and learning, some things will get worse.
But the same is true of any tool. Photocopied worksheets from the same textbook year after year are also generic. The question is not whether AI makes things generic — it is whether the teacher using it is doing so thoughtfully.
Used well, AI does the opposite of making things generic. It makes it faster to differentiate, to produce materials tailored to a specific topic, class, and moment. The teacher's judgement about what their students need is still the thing that determines quality. AI just reduces the time it takes to produce the first draft of the resource that judgement is applied to.
Something not answered here?
These questions come from real conversations at events and in schools. If you have a question that is not covered here, get in touch — we update this page based on what we hear.
Contact Brickfield