
AI Policy Template

A starter AI usage policy you can customise and deploy across your organisation. Covers GDPR compliance, approved tools, quality standards, and responsible use.

If your team uses AI tools (and they probably do, whether you know about it or not), you need a policy. This template gives you a solid starting point. Copy it, customise the bracketed sections, and deploy it.

Copy the sections below into your own document. Replace everything in [brackets] with your organisation's details. Have your data protection officer or legal counsel review before deployment.

1. Purpose & Scope

Purpose

This policy establishes guidelines for the responsible use of artificial intelligence (AI) tools and systems within [Organisation Name]. It aims to maximise the benefits of AI while managing risks related to data protection, quality, and compliance.

Scope

This policy applies to all employees, contractors, and third parties who use AI tools in the course of their work for [Organisation Name]. It covers both organisation-approved AI tools and personal use of AI tools for work purposes.

2. Approved AI Tools

Approved tools list

Only the following AI tools are approved for use with [Organisation Name] data:

• [Tool 1 — e.g., ChatGPT Enterprise]
• [Tool 2 — e.g., Microsoft Copilot]
• [Tool 3 — e.g., Claude for Business]

Tools not on this list must be assessed and approved by [role/department] before use.

Requesting approval

To request approval for a new AI tool, submit a request to [role/department] including: the tool name, intended use case, data types involved, and the vendor's data processing terms.

3. Data Protection & GDPR

Personal data

Personal data (names, email addresses, phone numbers, national insurance numbers, financial information) must NOT be entered into any AI tool unless:

a) The tool is on the approved list, AND
b) The tool's data processing terms are GDPR-compliant, AND
c) A Data Protection Impact Assessment (DPIA) has been completed.

Confidential information

Trade secrets, financial projections, client lists, proprietary code, and internal strategy documents must not be shared with public AI tools. Use enterprise-tier tools with appropriate data agreements.

Data retention

Review the AI tool's data retention policy. Do not assume data entered is deleted after your session. Where possible, use tools that offer session-based data handling.

4. Quality & Accuracy

Human review

All AI-generated content, analysis, and recommendations must be reviewed by a human before being used in client deliverables, public communications, or decision-making. AI is a tool to assist, not replace, professional judgement.

Fact-checking

AI tools may generate plausible but incorrect information ("hallucinations"). Verify all facts, statistics, citations, and claims generated by AI before use.

Disclosure

Where appropriate, disclose to stakeholders (clients, partners, regulators) that AI tools were used in producing work product. Follow industry-specific disclosure requirements.

5. Security

Account security

Use organisation-managed accounts for AI tools, not personal accounts. Enable multi-factor authentication where available. Do not share AI tool credentials.

Prompt hygiene

Do not include passwords, API keys, access tokens, or other security credentials in AI prompts. Assume anything you type into a public AI tool could become public.
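Teams that route AI requests through an internal proxy or helper script sometimes add a pre-submission check for the credential patterns named above. The sketch below is a minimal, hypothetical example of such a check; the patterns and the `contains_credentials` helper are illustrative assumptions, not part of this policy, and pattern-matching is a safety net rather than a substitute for the rule itself.

```python
import re

# Hypothetical patterns -- tune to the credential formats your organisation uses.
CREDENTIAL_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{20,}"),    # OpenAI-style API keys
    re.compile(r"AKIA[0-9A-Z]{16}"),       # AWS access key IDs
    re.compile(r"(?i)\b(password|passwd|secret|token)\s*[:=]\s*\S+"),  # key=value secrets
]

def contains_credentials(prompt: str) -> bool:
    """Return True if the prompt appears to contain a security credential."""
    return any(p.search(prompt) for p in CREDENTIAL_PATTERNS)

# Block a prompt embedding an AWS access key ID; allow an ordinary request.
print(contains_credentials("Debug this: aws_key=AKIAIOSFODNN7EXAMPLE"))  # True
print(contains_credentials("Summarise this meeting agenda"))             # False
```

A check like this inevitably misses some secrets and flags some false positives, so treat any match as a prompt to stop and review rather than as proof either way.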

6. Ethical Use

Non-discrimination

Do not use AI tools in ways that could discriminate against individuals based on protected characteristics (age, disability, gender reassignment, marriage and civil partnership, pregnancy and maternity, race, religion or belief, sex, sexual orientation) under the Equality Act 2010.

Transparency

Do not misrepresent AI-generated work as solely human-created when this would be misleading. Do not use AI to deceive or manipulate.

7. Roles & Responsibilities

All staff

Follow this policy. Report any concerns or breaches to [role/department]. Use AI tools responsibly and exercise professional judgement.

Managers

Ensure team members are aware of this policy. Monitor AI tool usage within your team. Escalate concerns to [role/department].

[Data Protection / Compliance team]

Maintain the approved tools list. Conduct DPIAs for new AI tools. Handle policy breaches and data incidents. Review and update this policy annually.

8. Monitoring & Enforcement

Compliance monitoring

[Organisation Name] reserves the right to monitor the use of AI tools on company devices and networks to ensure compliance with this policy.

Breaches

Breaches of this policy will be handled in accordance with [Organisation Name]'s disciplinary procedure. Serious breaches, particularly those involving personal data, may constitute gross misconduct.

Customisation Notes

Replace all [brackets]

Every instance of [Organisation Name], [role/department], etc. should be replaced with your actual details.

Adjust the approved tools list

List only the tools you've vetted. Start with what you already use and add as you assess new tools.

Align with existing policies

Cross-reference your data protection, IT security, and acceptable use policies for consistency.

Get legal review

Have your data protection officer or legal counsel review before deployment, especially if you operate in regulated sectors.

Communicate and train

Don't just publish it. Walk your team through the policy and explain the reasoning behind each section.

Review annually

AI moves fast. Review and update this policy at least once a year, or whenever significant new tools are introduced.

This template provides general guidance only. It is not legal advice. Organisations in regulated sectors (financial services, healthcare, legal) should seek specialist compliance review.

Need Help Implementing AI Governance?

Our AI education programmes include policy development workshops. We help you create governance frameworks that are practical, not just paperwork.