Generative AI Usage Policy¶
This policy establishes guidelines for the use of generative AI tools. It is designed to ensure transparency, quality, and security as we adopt AI-assisted development practices across our teams.
This policy covers the use of generative AI for code generation or assistance, documentation generation, image or content generation, code review assistance, and any other AI-assisted output produced in the course of our work.
1. Disclose AI Use¶
All project assets that have been generated or assisted by AI must be disclosed. This applies to code, documentation, images, configuration, and any other content produced with AI assistance. Records must include the tool and model used. Disclosure is achieved by including trailers in the relevant Git commit messages, for example:
Co-Authored-By: Claude Opus 4.6 <noreply@cam.ac.uk>
Generated-By: GitHub Copilot 0.37.6
Model: claude-opus-4-6-20250219
Co-Authored-By: OpenAI ChatGPT 5.3 Codex <noreply@cam.ac.uk>
Generated-By: GitHub Copilot 0.37.6
Model: gpt-5.3-codex
To minimise the risk of omission, manual disclosure is discouraged. Instead, we will explore automated approaches for attaching the trailers (e.g. git-mob, Git hooks, Git trailers).
2. Use Only UIS Enterprise-Licensed Tools¶
Only AI tools covered by a UIS enterprise licence may be used. UIS holds enterprise licences for:
- Microsoft 365 (University tenancy only).
- Google Workspace (University tenancy only).
- GitHub Copilot (University tenancy only).
The current approved tools for project asset generation are:
- GitHub Copilot (University tenancy only).
GitHub Copilot's official tooling supports agentic mode and automatically adds project context to the prompt when generating a response. This results in fewer errors and hallucinations, and enables automatic validation of the generated assets, since the agent can execute commands.
When the generated assets do not require project context (e.g. image generation), other UIS enterprise-licensed tools may be used.
You must log in to these services with your University account to ensure the correct licence is applied.
The enterprise contract provides assurances that our code and data are not used for model training, and includes privacy and data-protection commitments.
The following are explicitly not allowed:
- local or self-hosted LLMs without explicit approval,
- AI tools or extensions not licensed by UIS, and
- free-tier or personal accounts for AI services.
Local models carry risks including potential data exfiltration through back-channels that may not be visible without network-level monitoring. Without an enterprise contract, there are no legal guarantees on how data is handled.
Note
The approved tool list will be reviewed on an ongoing basis. The market for AI development tools is evolving rapidly, and we will adopt better tools as they become available. Team members should be prepared to switch tools and models as the landscape changes.
3. Human Review Required¶
Human review of all AI-generated or AI-assisted assets is mandatory. This applies to code, documentation, images, configuration, and any other content produced with AI assistance.
Specifically:
- All AI-generated code must be reviewed before submitting a merge request.
- All AI-generated documentation, images, or other content must be reviewed before publishing or sharing.
- The submitter assumes full responsibility for the quality and correctness of AI-assisted output.
Submitting any AI-generated work implies that this human review has been completed.
4. Secrets and Sensitive Data¶
Never expose real secrets or sensitive data to AI agents. When providing repository context to an AI agent, ensure that:
- Secret environment files (.env, secrets files) are excluded from the agent context or contain only fake/placeholder values.
- Agents are not given access to 1Password, credential stores, or real API keys.
- Even with enterprise contracts, there is no guarantee about what an agent may do with secrets in context. Treat this as a non-negotiable requirement.
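One illustrative pre-flight check (a sketch, not an official tool) is to verify that secret files are covered by the repository's ignore rules before opening it in an agent session; note that whether an agent actually honours ignore rules depends on the tool's configuration, so this is a safety net rather than a guarantee:

```shell
# Sketch of a pre-flight check before starting an agent session: confirm
# that secret files and directories are covered by .gitignore.
# (Illustrative only; confirm separately that your agent honours ignore rules.)
rm -rf /tmp/secrets-demo
git init -q /tmp/secrets-demo
cd /tmp/secrets-demo
printf '.env\nsecrets/\n' > .gitignore
for path in .env secrets/credentials.json; do
  if git check-ignore -q "$path"; then
    echo "OK: $path is ignored"
  else
    echo "WARNING: $path is NOT ignored" >&2
  fi
done
```

A check like this could run in a wrapper script that launches the agent, refusing to start the session if any secret path is not ignored.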
Use only trusted MCP servers, i.e. those provided by a large organisation or open-source community able to offer some assurances, such as the official Playwright MCP or Google Chrome's MCP. Untrusted servers may return misleading or malicious content.
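For reference, a trusted server such as the official Playwright MCP is typically registered with a small client-side configuration entry along the following lines; the exact file location and schema depend on the MCP client in use, so treat this as an illustrative sketch:

```json
{
  "mcpServers": {
    "playwright": {
      "command": "npx",
      "args": ["@playwright/mcp@latest"]
    }
  }
}
```

Pinning a specific package version rather than `@latest` gives stronger supply-chain assurances, at the cost of manual updates.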
Policy Review¶
This policy will be reviewed and updated on an ongoing basis as the AI tooling landscape evolves.
Feedback and suggestions should be directed to the DevOps leadership team.