Module 9 — Using AI at Work

Responsible Use

Using AI ethically, safely, and with integrity — especially in an accessibility-focused company.

At Brickfield, we build tools that help make digital content more accessible. Our use of AI should reflect the same values. Responsible AI use is not just about avoiding mistakes — it is about using these tools in ways that uphold quality, honesty, and respect for the people we serve.

When NOT to use AI

Knowing when to step back is as important as knowing how to use AI well. The following situations require human judgement, not AI assistance.

  • Accessibility compliance decisions. AI can help draft WCAG documentation, but it cannot determine whether something meets a success criterion. Always verify against official W3C documentation and use qualified human review.
  • Legal or contractual advice. AI will produce confident-sounding but legally unreliable output. Never use AI to interpret contract terms, data processing agreements, or regulatory obligations.
  • Security vulnerability assessment. AI can explain vulnerabilities in general terms but should not be used to assess whether a specific system is secure.
  • Final product copy without review. No AI-generated content should go to a client, prospect, or public audience without a human read. This applies especially to accessibility-themed content where errors undermine Brickfield's credibility.
  • Financial decisions or forecasts. AI-generated numbers, projections, or financial summaries are not reliable without verification against source data.
  • Anything involving confidential personal data. If the task requires using a real customer's name, contract details, or personal information, do not use a free AI tool.

The WCAG line

This deserves its own callout. Brickfield's reputation rests on accessibility expertise. AI can make a confident, well-written claim about a WCAG criterion that is subtly or significantly wrong. Always treat AI-generated accessibility guidance as a starting point, never as a final authority. Check it against the source.

How to verify AI output

The course has mentioned verification throughout. The following checklist makes that concrete. Run through it before using any AI-generated output in a professional context.

Before using any AI output — check these

  • Facts and figures. Did AI cite a statistic, date, name, or version number? Verify it independently. Do not assume it is correct.
  • Sources. If AI cited a report, paper, or article, check that the source actually exists and says what AI claims it says.
  • Tone and voice. Does this sound like Brickfield? AI defaults to a generic professional register. Add warmth, specificity, and your own relationship knowledge.
  • Accuracy of the brief. Did AI actually answer what you asked? It sometimes answers a slightly different question and the output feels right until you read it carefully.
  • Policy alignment. Does this content reflect Brickfield's current positioning, pricing, or product capabilities? AI does not know what changed last week.
  • Accessibility of the output itself. If you are publishing this content, check it meets the same standards you would apply to any Brickfield deliverable.

For developers — additional checks

  • Run the code. Do not deploy AI-generated code you have not executed and tested.
  • Check for security issues — hard-coded credentials, unvalidated inputs, deprecated functions.
  • Test edge cases. AI-generated code often handles the happy path well and fails on edge cases.
  • Understand what the code does before committing it. If you cannot explain it, do not ship it.
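The edge-case point above can be made concrete with a small sketch. The function names here are invented for illustration; the pattern is what matters: AI-generated code often passes the obvious test and fails on an input nobody tried, and a human review step is what catches it.

```python
# Hypothetical example: an AI-generated helper that works on typical
# input but crashes on an empty list. Names are invented for this sketch.

def average_score(scores):
    """Unreviewed version: correct for non-empty input only."""
    return sum(scores) / len(scores)  # ZeroDivisionError on an empty list

def average_score_reviewed(scores):
    """Human-reviewed version: the empty-list edge case is handled."""
    if not scores:
        return 0.0
    return sum(scores) / len(scores)

# Happy path: both versions agree, so the bug is invisible here.
assert average_score([80, 90, 100]) == 90.0

# Edge case: the unreviewed version crashes; the reviewed one does not.
try:
    average_score([])
except ZeroDivisionError:
    print("unreviewed code failed on the edge case")

assert average_score_reviewed([]) == 0.0
```

This is exactly the kind of gap that "run the code" and "test edge cases" are meant to surface before anything ships.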

What must never go into any AI tool

The following categories of information must not be entered into any AI tool that is not covered by an enterprise data privacy agreement.

  • Customer names, contact details, or any personal data.
  • Confidential client information or contractual details.
  • Internal financial data or commercially sensitive strategies.
  • Passwords, API keys, or security credentials.
  • Proprietary code that gives Brickfield a competitive advantage.

Accuracy and hallucination

AI can produce confident, well-written, and completely wrong information. This is called hallucination. It is particularly risky in the following situations.

  • Making claims about WCAG standards — always verify against official W3C documentation.
  • Citing statistics or research — check that the source exists before sharing it.
  • Writing legal, medical, or technical content — verify with qualified experts.
  • Generating code for production systems — review and test it thoroughly before deploying.

Accessibility and AI output

As a company whose mission is accessibility, hold AI-generated content to the same standards you apply to client deliverables. AI-generated images may lack alt text. AI-generated documents may have poor heading structure. Review all AI output for accessibility before publishing, just as you would any other content.

Transparency and disclosure

There is no universal rule requiring disclosure of AI assistance — but good practice includes the following principles.

  • Do not present AI-generated content as your own original research or expert opinion.
  • Do not use AI to write content that impersonates a person's unique voice without their knowledge.
  • Be honest if asked directly whether AI was used to create something.
  • Do not submit AI-generated work in contexts where that is prohibited, such as certain academic or contract requirements.

Bias and fairness

LLMs trained on internet text inherit biases from that data. They may produce content that reflects gender, racial, or cultural stereotypes, or that represents some demographics better than others. Always review AI-generated content critically — especially anything involving people, communities, or social topics.

Mistakes people commonly make with AI

The following are the most frequent failure patterns across all roles. Recognising them saves weeks of frustration.

Asking vague questions

"Write me something about accessibility." Without context, audience, format, or length, the output will always be generic. The prompt is the brief.

Expecting perfection first time

AI output is a starting point, not a finished product. The first response is the opening of a conversation, not the end of one. Iterate.

Copying output without reading it

AI produces fluent, confident text, which makes errors easy to miss. Always read what you are about to send or publish: "it looked fine" is how the wrong product name reaches a client.

Starting over instead of iterating

If the first response is not right, continue in the same conversation with a correction: "that is too formal — make it warmer" or "you missed the point about pricing." Do not restart.

Forgetting AI has no Brickfield context

AI does not know your customers, your current deals, your product roadmap, or last week's pricing change. You have to tell it. Set up Projects or Custom Instructions to avoid repeating this every session.

Treating confident output as correct output

AI writes with equal confidence whether it is right or wrong. Fluency is not accuracy. The output that sounds most authoritative is the one most worth checking.

Knowledge check

A sales team member wants to use the free tier of ChatGPT to draft a proposal. They paste in the client's full name, annual budget figures, and detailed requirements from the discovery call. Is this acceptable?