
Raven SAML2

This page gives an overview of the Raven SAML2 service, describing its current status, where and how it's developed and deployed, and who is responsible for maintaining it.

Service Description

The Raven service provides self-service, web-based interactive sign-in for the University. It has several parts; Raven SAML2 provides a standard SAML 2.0 interface for sites around the University.

There is a dedicated documentation site for Raven SAML2.

Service Status

The Raven SAML2 service is currently live. There are no plans to decommission the service as we need to run a SAML2 service to operate within the UK Access Management Federation.


Technical queries and support should be directed to and will be picked up by a member of the team working on the service. To ensure that you receive a response, always direct requests to rather than reaching out to team members directly.

Issues discovered in the service or new feature requests should be opened as GitLab issues in the "ansible-shibboleth" project in GitLab (DevOps only).


We are in the process of re-platforming the Raven SAML2 service as a deployment within a k8s cluster. Issues relating to this new deployment should be opened as issues on an appropriate project within the shib-cloud group (DevOps only).


Raven SAML2 is currently deployed to the following environments:

Name         URL   Supporting VMs
Production         shib-live{1,2,3}
Staging      N/A¹  shib-next{1,2}
Development  N/A¹  shib-dev{1,2}

¹ Testing against staging or development is performed by locally modifying /etc/hosts. See the testing page in the operational documentation (DevOps only).
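Such an override can be scripted so that it is easy to apply and, importantly, easy to remove afterwards. The sketch below is illustrative only: the hostname, IP address and marker comment are hypothetical placeholders, not the real values (which are in the operational documentation), and `sed -i` assumes GNU sed.

```shell
#!/bin/sh
# Sketch: temporarily point a production hostname at a staging/development VM
# by appending an override line to a hosts file. Hostname, IP and marker are
# hypothetical placeholders.
HOSTS_FILE="${HOSTS_FILE:-/etc/hosts}"
OVERRIDE_IP="203.0.113.10"
OVERRIDE_HOST="shib.example.cam.ac.uk"
MARKER="# raven-saml2-testing"

add_override() {
    # Idempotent: only append if the marker is not already present.
    grep -qF "raven-saml2-testing" "$HOSTS_FILE" || \
        printf '%s %s %s\n' "$OVERRIDE_IP" "$OVERRIDE_HOST" "$MARKER" >> "$HOSTS_FILE"
}

remove_override() {
    # Delete any line carrying the marker (GNU sed).
    sed -i '/raven-saml2-testing/d' "$HOSTS_FILE"
}
```

Remember to run the removal step once testing is complete, so the machine does not silently keep talking to a non-production node.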

See also a list of VMs in the operational documentation (DevOps only) for VMs relating to shared databases and logging.

As part of the re-platforming work, the following instances have also been deployed on k8s clusters:

Name URL
Staging and

These deployments also have a certificate for installed and so can be used as an alternative to production Raven SAML2 by means of a change to /etc/hosts as documented in the testing page of the operational documentation (DevOps only).

This certificate is managed by Google. We're using this as a test to determine the pros and cons of Google-managed certificates for Raven more widely.


Public-facing documentation for testing Shib k8s can be found on the UIS webpage.

Source code

Source code for Raven SAML2 is spread over the following repositories:

Repository Description
Shibboleth External repository holding the Shibboleth source code itself
ansible-shibboleth¹ Ansible for on-premises installation
idp-frontend-container² Containerised Apache2 frontend which handles interactive authentication
shib4-idp-container² Containerised Shibboleth
docker-compose¹ Docker-compose configuration for local development
infrastructure¹ Terraform configuration for infrastructure and deployment

¹ DevOps only

² GitLab users only

Technologies used

The following gives an overview of the technologies that Raven SAML2 is built on.

Category Language Framework(s)
Shibboleth IdP Java, XML and JavaScript Many
On-premises deployment Ansible
K8s deployment Terraform

Operational documentation

There is a dedicated operational documentation wiki for the service (DevOps only). Of particular note is a section describing interaction with the UIS traffic manager and patching information (DevOps only).

The k8s deployment follows our standard deployment practice for Google Cloud, with the wrinkle that exact container versions are specified in the k8s deployment and so deployment follows a "GitOps" model.


There is additional deployment documentation available in the infrastructure project README (DevOps only). The below is a high-level summary.

Ordinarily deployment is driven by GitLab CI. Commits to non-master branches will trigger a job to perform a terraform lint and create manual approval jobs to terraform plan and terraform apply against the development environment.

When commits land in master, terraform plan will automatically be run against staging and a manual approval terraform apply job is created. Additionally manual approval jobs for running terraform plan and terraform apply are created for the production environment.

Generally, merge requests should be followed by checking the terraform plan job against staging and triggering the terraform apply job if all looks good. Assuming staging passes testing, the production terraform plan and terraform apply jobs may be triggered.

How and where Raven SAML2 is deployed


Due to its active development, the k8s deployment is not documented here.

Raven SAML2 is deployed via an Ansible playbook. We have a wrapper script which ensures that the correct version of Ansible is run and transparently uses the aperture jump host (UIS only) to connect to the servers.

Run via:

APERTURE_JUMP_HOST=1 ./ --inventory inventory playbook.yml --limit [NODES]

where [NODES] is one of shib-live, shib-next or shib-dev.

Deploying a new release


Due to its active development, the k8s deployment is not documented here.

Generally a new release is deployed using the Ansible playbook wrapper script noted above.

  1. Do an initial "smoke test" deploy to shib-dev nodes. Perform a test sign in (DevOps only) to various sites including: UIS intranet, The FT, The API Gateway and GitLab.
  2. Deploy to shib-next nodes and repeat testing.
  3. Deploy to shib-live1 node and tail -f the /opt/shibboleth-idp/logs/idp-process.log file to check that sign ins resume.
  4. Deploy to shib-live2 node and tail -f the /opt/shibboleth-idp/logs/idp-process.log file to check that sign ins resume.
  5. Deploy to shib-live3 node and tail -f the /opt/shibboleth-idp/logs/idp-process.log file to check that sign ins resume.
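The log check in steps 3–5 can be sketched as a small helper. The log path is taken from the steps above, but the search string is an assumption about what a successful SAML2 sign-in looks like in idp-process.log; confirm the actual message format against a real log before relying on it.

```shell
#!/bin/sh
# Sketch: after deploying to a shib-live node, confirm that sign-ins resume by
# looking for recent SSO activity in the IdP process log. The "SAML2" search
# string is an assumption about the log format.
LOG="${LOG:-/opt/shibboleth-idp/logs/idp-process.log}"

signins_resumed() {
    # Succeed if any SAML2-related line appears in the most recent log output.
    tail -n 200 "$LOG" | grep -q "SAML2"
}
```

In practice you would still `tail -f` the log interactively as the steps describe; a check like this is only useful as a quick yes/no probe between node deployments.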

Monitoring and alerting

Historically, metrics for the Raven SAML2 service could be viewed on the UIS Grafana instance (UIS Staff Network only) and alerts were sent via Nagios (UIS only).

We have transitioned to a monitoring and alerting system based on Cloud Monitoring. Alert policies and metrics can be viewed in the Raven Monitoring workspace (DevOps only).

Our standard alerts have been configured for Raven SAML2:

  • Service uptime check from various geographic regions.
  • SSL expiry checks.
  • Check for excessive k8s storage volume usage.
  • Check for excessive CPU, memory or disk pressure on nodes.
  • Check for excessive CPU, memory or storage use by pods.

In addition, the k8s deployment has the following monitoring:

  • Check that Falcon, University and UK Federation metadata sources are correctly imported according to their refresh schedule.
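A metadata freshness check of this kind can be sketched by comparing a file's modification time against the expected refresh window. The function below is a generic sketch, not the deployed check: the file name and window in the usage example are hypothetical, and `stat -c %Y` / `touch -d` assume GNU coreutils.

```shell
#!/bin/sh
# Sketch: verify that a metadata file has been refreshed within its expected
# window by comparing its mtime against a maximum age in seconds.
metadata_fresh() {
    file="$1"
    max_age_secs="$2"
    now=$(date +%s)
    mtime=$(stat -c %Y "$file")   # GNU stat; BSD would need `stat -f %m`
    [ $((now - mtime)) -le "$max_age_secs" ]
}
```

For example, `metadata_fresh ukfederation-metadata.xml 86400` would pass only if that (hypothetically named) file had been refreshed within the last day.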

Debugging

For k8s-deployed Raven SAML2, a full environment may be run locally (DevOps only). This allows configuration changes to be debugged locally without affecting any deployed service.

For on-premises Raven SAML2, the best approach is to claim one or both of the shib-dev{1,2} nodes and try deploying to them.

Service Management and tech lead

The service owner for Raven SAML2 is Vijay Samtani.

The service manager for Raven SAML2 is Rich Wareham (provisional).

The tech lead for Raven SAML2 is Rich Wareham.

The following engineers have operational experience with Raven SAML2 and are able to respond to support requests or incidents: