
Digital Pooling

Subject Moderation Interface (SMI)

This section gives an overview of the Subject Moderation Interface (SMI), describing its current status, where and how it's developed and deployed, and who is responsible for maintaining it.


This is a prototype service that is not fully supported. See the FAQ for Subject Moderation Interface alpha for more details.

Service Description

The Subject Moderation Interface (SMI) service provides a web application for moderating undergraduate applications as part of the admissions process.

Service Status

The SMI is currently in alpha.


Technical queries and support requests are picked up by a member of the team working on the service. To ensure that you receive a response, always raise requests through the service's support address rather than contacting team members directly.


Digital Pooling sync jobs and the SMI are currently deployed to the following environments:

| Environment | Purpose |
| --- | --- |
| Production | Live service |
| Integration | End-to-end testing, as close to production as possible |
| Staging | Manual testing before deployment to production and production-like environments |
| Development | Development playground |

Environment details:

| Environment | Purpose |
| --- | --- |
| Development | An environment to check development code changes without affecting end users. |
| Integration | An environment to test interfaces and the interactions between integrated components or systems. |
| Test | An environment to evaluate whether a component or system satisfies functional requirements. |
| Production | The live environment, where code changes run against real user traffic. |

The GCP console pages for managing the infrastructure of each component of the deployment are:

| Name | Main Application Hosting | Database | Synchronisation Job Application Hosting |
| --- | --- | --- | --- |
| Production | GCP Cloud Run | GCP Cloud SQL (Postgres) | GCP Cloud Scheduler |
| Integration | GCP Cloud Run | GCP Cloud SQL (Postgres) | GCP Cloud Scheduler |
| Staging | GCP Cloud Run | GCP Cloud SQL (Postgres) | GCP Cloud Scheduler |
| Development | GCP Cloud Run | GCP Cloud SQL (Postgres) | GCP Cloud Scheduler |

All environments share access to a set of secrets stored in the meta-project Secret Manager.

Source code

The source code for Digital Pooling is spread over the following repositories:

| Repository | Description |
| --- | --- |
| Main Application | The source code for the main application Docker image |
| Synchronisation Job Application | The source code for the synchronisation job application Docker image |
| Infrastructure Deployment | The Terraform infrastructure code for deploying the applications to GCP |

Pooling process scripts

These scripts are used to create the resources that enable the Pooling process: Google Drives and Poolside Meeting Outcome Spreadsheets (PMOS).

These were brought into a snippets directory in the Synchronisation Service to keep them under version control and to allow consolidation of common methods. The snippets in this directory are run manually at given points during the application cycle.

| Script | Purpose |
| --- | --- |
| Create college drives | Creates Google Drives for all colleges, across different environments. |
| Create PMOS sheets | Creates PMOSes for different courses, across different environments and for one or more Pools. |
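As an illustration, the PMOS-creation snippet fans out over every combination of environment, course and Pool. A minimal sketch of that iteration (the function name and naming scheme are hypothetical, not taken from the real snippets):

```python
from itertools import product

ENVIRONMENTS = ("development", "staging", "integration", "production")


def pmos_sheet_names(courses, pools, environments=ENVIRONMENTS):
    """Build one spreadsheet name per (environment, course, Pool)
    combination, mirroring how the snippet is run across environments.
    The naming scheme here is illustrative only."""
    return [
        f"{env}-{course}-pool{pool}-PMOS"
        for env, course, pool in product(environments, courses, pools)
    ]
```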

Technologies used

The following gives an overview of the technologies the SMI is built on.

| Category | Language | Framework(s) |
| --- | --- | --- |
| Web Application Backend | Python | Django |
| Web Application Frontend | JavaScript | React |
| Synchronisation Job Application | Python | Flask |
| Database | PostgreSQL | n/a |

SMI Operational documentation

The following gives an overview of how the SMI is deployed and maintained.

How and where the SMI is deployed

The database for undergraduate applicant data is a PostgreSQL database hosted on GCP Cloud SQL. The main web application is a Django backend with a React frontend, hosted on GCP Cloud Run. The synchronisation job application (which provides an API with endpoints for synchronising the SMI database with other services) uses the Flask library, is hosted on GCP Cloud Run and is invoked by GCP Cloud Scheduler.

The SMI infrastructure is deployed using Terraform, with releases of the main application and synchronisation job application deployed by the GitLab CD pipelines associated with the infrastructure deployment repository.


Digital Pooling follows the divisional deployment boilerplate standard. Container images are pushed to the meta project's Artifact Registry as part of every pipeline on the SMI and Synchronisation Service repositories.

The deployment repo (pools/deploy) is responsible for actually deploying services. The development and staging environments default to the `latest` tag on the master branch, i.e. the latest merged code. The integration and production environments have a specific tag (release) of the code deployed on them. Terraform locals are used to configure which version is deployed to a given environment.
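Expressed in Python for illustration (the real configuration lives in Terraform locals, not in application code), the tag-selection rule is roughly:

```python
def image_tag(environment, release_tags):
    """Pick the container image tag deployed to an environment.

    development and staging track 'latest' (the latest merged code);
    integration and production pin a specific release tag.
    `release_tags` is a hypothetical mapping such as
    {"production": "1.4.0", "integration": "1.5.0-rc1"}."""
    if environment in ("development", "staging"):
        return "latest"
    return release_tags[environment]
```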

Pipelines in the deployment repo run `terraform plan` for all environments. The job to run `terraform apply` is always enabled on pipelines for the master branch. For all other environments, the apply job only becomes available once the corresponding plan job has succeeded.

If the output of `terraform plan` becomes stale (because the environment's state changes) before `terraform apply` is run, the plan job must be re-run for that environment.
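The staleness check can be pictured in terms of Terraform's state serial number, which increments whenever the state changes. A sketch, not the pipeline's actual implementation:

```python
def plan_is_stale(serial_at_plan_time, current_serial):
    """A saved plan is only valid against the state it was computed
    from; if the environment's state serial has moved on since
    `terraform plan` ran, the plan job must be re-run before apply."""
    return current_serial != serial_at_plan_time
```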

Deploying a new release

To deploy a new release, a new issue is created in the current sprint following this template.

User access management

| Environment | User access management |
| --- | --- |
| Production | Google OAuth2 |
| Integration | Google OAuth2 |
| Staging | Google OAuth2 |
| Development | Google OAuth2 |

Although users sign in with their University Google account, permissions are initially granted through invitations to the service and are managed in the Django Admin console should adjustments be necessary.

In the production environment, all users are removed at the end of an admissions cycle, and invitations and permissions are re-issued at the start of a new one.
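The access model described above can be sketched as follows (a hypothetical illustration; the real logic lives in the Django application):

```python
class AccessRegister:
    """Sketch of SMI access control: signing in with a University
    Google account is necessary but not sufficient; an invitation
    must also have been issued."""

    def __init__(self):
        self.invited = set()

    def invite(self, user_id):
        self.invited.add(user_id)

    def is_authorised(self, user_id, google_authenticated):
        return google_authenticated and user_id in self.invited

    def end_of_cycle(self):
        # In production, all users are removed at the end of an
        # admissions cycle and must be re-invited for the next one.
        self.invited.clear()
```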



Each of the source code repositories contains files that provide information about debugging both local and deployed instances of the applications.


Applicant data is initially retrieved from CamSIS via a synchronisation job (managed by GCP Cloud Scheduler, which periodically calls the synchronisation job API). Additional annotations are later added by manually importing the subject master spreadsheet (SMS) and subject-specific variants, using the Django admin page for the appropriate environment.
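The manual import step amounts to merging annotation rows into the applicant records already synchronised from CamSIS. A simplified sketch (the field names are hypothetical):

```python
def merge_annotations(applicants, annotation_rows):
    """Attach spreadsheet annotations to applicant records, matching
    on a shared applicant identifier. Existing fields are preserved;
    annotation fields are added or overwritten."""
    by_id = {a["applicant_id"]: dict(a) for a in applicants}
    for row in annotation_rows:
        record = by_id.setdefault(row["applicant_id"], {})
        record.update(row)
    return list(by_id.values())
```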

Periodic synchronisation jobs ensure that each applicant has an associated folder on Google Drive (for storing additional documents). They also ensure that applicant data is consistent between the SMI and poolside meeting outcome spreadsheets (PMOSs), which are Google Sheets spreadsheets. A manually invoked process on CamSIS uses the SMI web application API to retrieve pooling decisions about applicants, and update the CamSIS database as necessary.
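The Drive-folder job is idempotent: each run only creates folders for applicants that do not already have one. The reconciliation step can be sketched as (names hypothetical):

```python
def folders_to_create(applicant_ids, existing_folder_owners):
    """Return the applicants still needing a Google Drive folder.
    Running the job repeatedly creates nothing new once every
    applicant has a folder."""
    return sorted(set(applicant_ids) - set(existing_folder_owners))
```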

The flow of applicant data to/from the SMI and other services is summarised by the diagram below.

(Diagram: flow of applicant data to/from the SMI)

CamSIS environments

The following relationship holds between Digital Pooling environments and CamSIS environments:

| Digital Pooling env | CamSIS env |
| --- | --- |
| development | intb_dat |
| test (staging) | intb_test |
| integration | intb_reg |
| production | intb_prod |

Service Management and tech lead

The service owner for Digital Pooling is Helen Reed.

The service manager for Digital Pooling is Signe Overgaard Jensen.

The tech lead for Digital Pooling is Brent Stewart.

The following engineers have operational experience with Digital Pooling and are able to respond to support requests or incidents: