# How to perform a point-in-time recovery for a PostgreSQL Cloud SQL instance
This guide describes how to restore a Cloud SQL database using point-in-time recovery (PITR). The steps have been tested with a PostgreSQL database, but may also work for other database types.
## Overview
Point-in-time recovery of a Cloud SQL database isn't quite as simple as clicking the restore button on a snapshot in the Google Cloud console, because GCP restores the data into a brand-new database instance rather than back into the original one. Once we have created a clone from a specific point in time, we can run a few commands to export the data from the clone to a bucket and then import it back into the source database.
This whole process will likely take around 60 minutes, depending on the size of the database and whether you have completed the steps before. This is a disaster-recovery operation rather than a business-as-usual one, so consider whether restoring an overnight snapshot might be a better approach.
## Prerequisites

- You have a Cloud SQL database with point-in-time recovery enabled. If you are using our most recent deployment boilerplate, this will be the case unless you explicitly opted out.
- You have the `gcloud` command-line tool installed and configured.
- You have the necessary permissions to create and manage Cloud SQL instances and Cloud Storage buckets.
- You know the precise time to which you want to restore, and it is within the retention period of your database's log files. The default retention period is 7 days.
## Steps to restore a Cloud SQL database using point-in-time recovery
- Find a partner to be a second pair of eyes. This process carries some risk, so it's worthwhile having someone watch what you're doing.
- Stop further updates from being applied to the database, as these will be lost. A simple solution is to temporarily scale the webapp to zero, along with any other services that use the database, via manual scaling. The easiest way to do this is to use the GCP Console.
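If your services run on GKE rather than being scaled through the console, the same effect can be achieved from the command line. This is a sketch assuming a Kubernetes Deployment named `webapp` in the current namespace; the names are illustrative, so adjust them to your environment:

```shell
# Record the current replica count so we can restore it later
# (assumes a Deployment named "webapp"; adjust to your setup).
REPLICAS="$(kubectl get deployment webapp -o jsonpath='{.spec.replicas}')"

# Scale the webapp to zero so no further writes reach the database.
kubectl scale deployment webapp --replicas=0
```

Keep `${REPLICAS}` around — you'll need it to scale back up at the end.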
- Open a terminal session and set some variables to make your life easier:

    ```shell
    PROJECT="<the GCP project ID>"
    CLOUD_SQL_INSTANCE="<the ID of the original database instance, e.g. sql-604b9ff8>"
    TIMESTAMP="<the UTC timestamp you want to restore to, e.g. 2025-07-09T12:25:01.094Z>"
    DB_SERVICE_ACCOUNT="$(gcloud --project "${PROJECT}" sql instances describe \
        "${CLOUD_SQL_INSTANCE}" --format="value(serviceAccountEmailAddress)")"
    CLONE_CLOUD_SQL_INSTANCE="${CLOUD_SQL_INSTANCE}-restored"
    BUCKET="${CLOUD_SQL_INSTANCE}-restore-$(date +%s)"
    ```
Warning
Pay attention to the timezone of the timestamp you provide. The timestamp should be in UTC, which may cause confusion if you are currently in BST or another timezone.
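If you need to convert a local time to the required UTC format, GNU `date` can do the conversion for you (on macOS, install coreutils and use `gdate`, since BSD `date` lacks the `%N` specifier). A sketch, using an illustrative BST time:

```shell
# Current time in the UTC format Cloud SQL expects (milliseconds included).
TIMESTAMP="$(date -u +"%Y-%m-%dT%H:%M:%S.%3NZ")"

# Or convert a specific local time (here 13:25:01 London/BST) to UTC:
TIMESTAMP="$(date -u -d 'TZ="Europe/London" 2025-07-09 13:25:01' +"%Y-%m-%dT%H:%M:%S.000Z")"
echo "${TIMESTAMP}"  # prints 2025-07-09T12:25:01.000Z
```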
- Ensure `gcloud` is configured to use the correct project:

    ```shell
    gcloud config set project "${PROJECT}"
    ```
- Create a point-in-time clone of the database. This creates an entirely new database instance and will take some time to complete (likely at least 30 minutes):

    ```shell
    gcloud sql instances clone "${CLOUD_SQL_INSTANCE}" "${CLONE_CLOUD_SQL_INSTANCE}" \
        --point-in-time "${TIMESTAMP}" --async
    ```
    Info

    The `--async` flag means that the command will return immediately, and the clone operation will continue in the background. You can check the status of the clone operation in the web console, or by using:

    ```shell
    gcloud sql operations list --instance "${CLONE_CLOUD_SQL_INSTANCE}"
    ```

    Wait for the operation to complete before proceeding to the next step.
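Rather than polling manually, you can block until the clone finishes. A sketch using `gcloud sql operations wait` (the timeout value is illustrative):

```shell
# Grab the ID of the most recent operation on the clone instance...
OPERATION_ID="$(gcloud sql operations list --instance "${CLONE_CLOUD_SQL_INSTANCE}" \
    --limit=1 --format="value(name)")"

# ...and block until it completes (clones can take a while, so allow a long timeout).
gcloud sql operations wait "${OPERATION_ID}" --timeout=3600
```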
- Find the service account ID for the cloned database:

    ```shell
    CLONE_DB_SERVICE_ACCOUNT="$(gcloud sql instances describe "${CLONE_CLOUD_SQL_INSTANCE}" \
        --format="value(serviceAccountEmailAddress)")"
    ```
- Create a new bucket where we will store a SQL file containing all the data from the database:

    ```shell
    gcloud storage buckets create "gs://${BUCKET}"
    ```
- Grant the cloned database access to the bucket:

    ```shell
    gcloud storage buckets add-iam-policy-binding "gs://${BUCKET}" \
        --member="serviceAccount:${CLONE_DB_SERVICE_ACCOUNT}" --role=roles/storage.objectAdmin
    ```
- Get a list of databases to export:

    ```shell
    gcloud sql databases list --instance "${CLONE_CLOUD_SQL_INSTANCE}"
    ```

    You should see output similar to this:

    ```
    NAME      CHARSET  COLLATION
    webapp    UTF8     en_US.UTF8
    postgres  UTF8     en_US.UTF8
    ```
- For each database, export all the data to a SQL file:

    ```shell
    DB_NAME=<the name of the DB>
    gcloud sql export sql "${CLONE_CLOUD_SQL_INSTANCE}" "gs://${BUCKET}/export-${DB_NAME}.sql.gz" \
        --clean --if-exists --database "${DB_NAME}"
    ```
    Info

    The `--clean` flag instructs Postgres to add `DROP` statements for everything before attempting to load the data. This means we don't end up with duplicate data in our restored database. The `--if-exists` flag instructs Postgres to turn the `DROP` statements into `DROP ... IF EXISTS` statements, which means the export could also be imported into an empty database if necessary. This initially sounds risky, but we have PITR enabled and regular backups, so we can always restore to a point before the restore.
-
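If there are several databases, the export step can be wrapped in a loop rather than repeated by hand. A sketch that exports every database listed on the clone, using the same naming scheme as above:

```shell
# Export each database on the clone to its own compressed SQL file.
for DB_NAME in $(gcloud sql databases list --instance "${CLONE_CLOUD_SQL_INSTANCE}" \
    --format="value(name)"); do
  gcloud sql export sql "${CLONE_CLOUD_SQL_INSTANCE}" "gs://${BUCKET}/export-${DB_NAME}.sql.gz" \
      --clean --if-exists --database "${DB_NAME}"
done
```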
Grant your original database access to the bucket:

    ```shell
    gcloud storage buckets add-iam-policy-binding "gs://${BUCKET}" \
        --member="serviceAccount:${DB_SERVICE_ACCOUNT}" --role=roles/storage.objectAdmin
    ```
- Find which user created the tables in the first place (probably "webapp" if you're running a Django project):

    ```shell
    gcloud sql users list --instance "${CLOUD_SQL_INSTANCE}"
    ```

    You should see output similar to this:

    ```
    NAME      HOST  TYPE      PASSWORD_POLICY
    postgres        BUILT_IN
    webapp          BUILT_IN
    admin           BUILT_IN
    ```

    Then assign the user to a variable:

    ```shell
    DB_USER=<the database user>
    ```
- For each database, import the data from the export:

    ```shell
    DB_NAME=<the name of the DB>
    gcloud sql import sql "${CLOUD_SQL_INSTANCE}" "gs://${BUCKET}/export-${DB_NAME}.sql.gz" \
        --database="${DB_NAME}" --user="${DB_USER}"
    ```
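As with the export, the import can be looped over every file written to the bucket. A sketch, assuming the `export-<db>.sql.gz` naming used above (`--quiet` skips the interactive confirmation prompt):

```shell
# Import each exported file back into the original instance, deriving the
# database name from the object name (export-<db>.sql.gz).
for OBJECT in $(gcloud storage ls "gs://${BUCKET}/export-*.sql.gz"); do
  DB_NAME="$(basename "${OBJECT}" .sql.gz)"
  DB_NAME="${DB_NAME#export-}"
  gcloud sql import sql "${CLOUD_SQL_INSTANCE}" "${OBJECT}" \
      --database="${DB_NAME}" --user="${DB_USER}" --quiet
done
```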
- If you scaled your webapp to zero in step 2, scale it back up to its previous level.
- Once you're satisfied that everything was successful (you may be satisfied straight away, or you may wish to wait a few days), clean up the bucket and the cloned database:

    ```shell
    gcloud storage rm -r "gs://${BUCKET}"
    gcloud sql instances delete "${CLONE_CLOUD_SQL_INSTANCE}" --async
    ```
## Summary

In this how-to, we restored a Cloud SQL database to a specific point in time by cloning the instance at that point, exporting the clone's data to a Cloud Storage bucket, and importing it back into the original instance.