Pathway 1 — Understanding

AI ethics and school policy

What Irish schools need to know about academic integrity, data protection, and using AI responsibly with students.

This page gives you practical guidance, not legal advice. If you have specific questions about GDPR, data processing, or contractual obligations, talk to your school's data protection officer or a legal adviser.

What the Irish education system actually says

Here is the honest answer to the question most teachers are really asking: can I use AI without getting into trouble? The short version is yes — with some clear limits.

The Department of Education and Skills (DES) has not banned AI in schools. Their position is that teachers remain the professional decision-makers in the classroom, and that AI should support that judgement, not replace it. That is a sensible position and it gives you room to work.

The State Examinations Commission (SEC) is more specific. They have confirmed that submitting AI-generated content as your own work in any assessed context, whether Classroom-Based Assessments (CBAs), projects, orals, or written work, is academic dishonesty. Same rules as plagiarism, same consequences. That applies to students, not to teachers preparing resources.

The National Council for Curriculum and Assessment (NCCA) is actively engaged with AI in curriculum and assessment design, and their guidance is worth following as it develops. This space is moving fast: check for updated DES circulars regularly, because guidance issued six months ago may already be out of date.

Academic integrity — where is the line?

This is the question teachers ask most often, and it is a fair one. The honest answer is that the line depends on context — what the task is for, what it is meant to demonstrate, and how the AI output is used.

Here is a practical way to think about it. If the assessed task is designed to show what a student knows, understands, and can do — and AI has done the knowing, understanding, and doing — then the assessment is not showing what it is supposed to show. That is the problem. Not AI itself.

  1. Generally fine: using AI to understand a concept, get feedback on a draft the student has written, brainstorm ideas before developing them, or practise for an exam. These are legitimate study uses.
  2. Fine for teachers: using AI to create lesson materials, rubrics, feedback templates, or differentiated resources — provided you read and verify the output before using it.
  3. Not acceptable: submitting AI-generated text as a student's own work in any assessed context — CBAs, portfolios, project work, extended writing.
  4. Not acceptable: using AI to complete tasks specifically designed to assess the student's own thinking, research, or creative output.
  5. Genuinely grey: using AI to restructure or rephrase a student's writing. Schools need an explicit position on this and need to communicate it clearly — students cannot follow rules that have not been stated.

GDPR — what it means in practice for your classroom

Irish schools are data controllers under GDPR and the Data Protection Acts 1988–2018. That is not new. What is new is that AI tools create a very easy way to accidentally breach those obligations — because it feels like you are just typing into a chat box, not processing personal data.

And this is key: the Data Protection Commission (DPC) does not distinguish between intentional and accidental breaches. If personal data goes into a free AI tool that your school has not approved, that is a compliance issue regardless of your intent.

  1. Student data stays out of free AI tools. Names, class lists, assessment results, medical or special educational needs (SEN) information, parent contact details, disciplinary records: none of it. Free tools process data on external servers with variable data handling terms.
  2. Anonymise before you prompt. Replace names with generic labels ("Student A", "a 3rd Year student"). Do not describe situations that would identify someone even without a name. A short script can handle the bulk substitution; see the sketch after this list.
  3. Check your school's data processing agreements. Some schools have enterprise agreements with Microsoft 365 or Google Workspace that provide stronger protections for staff-facing tools. Ask your DPO — do not assume.
  4. Special category data needs extra care. Health information, SEN details, religious beliefs, and ethnic origin carry higher protection requirements under GDPR. Do not put any of this into an AI prompt.
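
If you anonymise text regularly, a short script can do the bulk of the substitution before anything is pasted into an AI tool. Below is a minimal sketch in Python; the names, labels, and sample text are all hypothetical, it only catches names you list explicitly, and you must still read the output for anything else that could identify a person.

```python
import re

def anonymise(text: str, names: list[str]) -> str:
    """Replace each listed name with a generic label (Student A, B, ...)."""
    for i, name in enumerate(names):
        label = f"Student {chr(ord('A') + i)}"  # A, B, C ... (up to 26 names)
        # \b word boundaries stop the pattern matching inside longer words
        text = re.sub(rf"\b{re.escape(name)}\b", label, text)
    return text

# Hypothetical example -- no real student data
sample = "Aoife Murphy scored 42% on the CBA and Sean's attendance has dropped."
print(anonymise(sample, ["Aoife Murphy", "Sean"]))
# -> "Student A scored 42% on the CBA and Student B's attendance has dropped."
```

Treat this as a first pass, not a guarantee: nicknames, misspellings, and contextual details (a specific diagnosis, a one-of-a-kind family situation) will sail straight through a name filter.
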
The DPC publishes guidance for schools at dataprotection.ie. The data protection guide on this site has practical, step-by-step anonymisation help.

A student has submitted AI-generated work — what do you do?

First: do not reach for an AI detection tool. The honest verdict on those tools is that they are unreliable. GPTZero, Turnitin's AI detection, and others have significant false positive rates — they have flagged legitimate student work as AI-generated, including work by students for whom English is an additional language. Using them as evidence in an integrity process exposes your school to real fairness and legal risk.

So what does that mean in practice? Have a conversation. Ask the student to talk you through their thinking, explain a choice they made, or expand on a point. Genuine engagement with work is almost always visible in conversation. A student who cannot explain what they submitted almost certainly did not write it — and that becomes clear quickly without any detection software.

  1. Have the conversation first. Ask the student to explain their thinking, walk you through their choices, or expand on a point. This is more reliable than any software.
  2. Look at the wider picture. If the submitted work is significantly beyond what the student demonstrates in class or in other assessments, that is a professional concern worth following up — regardless of AI.
  3. Follow your school's academic integrity process. AI misuse should go through the same procedure as any other form of academic dishonesty. Document the concern and follow the agreed steps.
  4. Do not use AI detection software as evidence. It is not reliable enough, and its output is not proof.

Building a school AI policy — the essentials

Most Irish schools do not yet have a formal AI policy. That does not mean anything goes — your existing data protection, safeguarding, and professional conduct obligations still apply. But a clear, proportionate policy makes a real difference, because students and staff cannot follow rules that have not been stated.

Here is what a good policy needs to include.

  1. A clear statement of purpose. What is the school's overall position? For example: "AI tools may be used as learning aids but not as a substitute for student work in assessed contexts."
  2. Specific permitted and prohibited uses. Be concrete. "Brainstorming with AI is permitted; submitting AI-generated text as your own work in a CBA is not."
  3. Data protection requirements. Students and staff need to know not to enter personal information about themselves or others into any AI tool.
  4. Consequences of misuse. Align these with your existing academic integrity policy so there is no ambiguity.
  5. A review date. This area is moving fast. Build in an annual review; guidance issued today may be out of date within twelve months.

Bias, accuracy, and your professional responsibility

AI tools reflect the data they were trained on. That data contains cultural assumptions, gaps in representation, and gender bias. You cannot see those biases in a polished-looking output — which is exactly what makes them dangerous.

This is not an argument against using AI. It is an argument for always keeping yourself in the loop. Before using any AI-generated content with students, check it for factual accuracy, cultural or gender assumptions in the examples and language, and whether it is appropriate for the age group and context.

And here is the thing: you do not outsource your professional responsibility by using AI. If you use a resource containing an error, the responsibility remains with you — regardless of where the first draft came from.

Assessment in an AI world

AI does not make assessment impossible. But it does make some older assumptions less secure. If a polished written response can be generated in seconds outside the classroom, then assessment design needs to place more value on process, reasoning, decision-making and evidence of genuine understanding.

That does not mean every task must become a timed exam. It means assessment should be designed so that the student's thinking stays visible — through planning, drafting, explaining, speaking, justifying and revising, not just the final product.

Less secure evidence: unsupervised take-home writing, generic essays, and final products with no drafting trail are now easy to outsource.
Stronger evidence: in-class writing, oral explanation, live problem-solving, annotated drafts, source-based justification and process evidence are harder to fake and easier to trust.

How assessment can adapt

  1. Assess the process, not only the product. Ask students to submit plans, drafts, annotations or a spoken rationale alongside the final piece.
  2. Use more in-class evidence. Timed writing, live practical tasks, questioning and short viva-style follow-up confirm authenticity and understanding.
  3. Design tasks that require choices. When students must justify why they selected particular evidence or interpretations, their understanding becomes visible in a way that AI cannot replicate.
  4. Teach ethical AI use explicitly. Students need to know when AI support is permitted, when it is not, and how to acknowledge it where appropriate.

Draft and reflection: ask students to submit the work alongside a short reflection on what they changed, why, and where they still feel uncertain.
Oral defence: two or three verbal questions after a written piece. Takes three minutes per student and tells you far more than the written piece alone.
Checkpoint assessment: break larger tasks into planning, outline, draft and redraft stages. Each stage becomes evidence in itself.
Context-rich tasks: use class-specific sources or recent practical work that requires engagement with your actual teaching sequence, not just the topic in general.

A useful question to test any assessment task: if a student used AI on this, what evidence would still show me what they personally understand, can explain, and can do?