Docker image that periodically dumps a Postgres database and uploads the dump to an Amazon S3 bucket.

The image is configured with the following environment variables:
- `CRON_SCHEDULE`: The time schedule part of a crontab file (e.g. `15 3 * * *` for every night at 03:15)
- `DB_HOST`: Postgres hostname
- `DB_PASS`: Postgres password
- `DB_USER`: Postgres username
- `DB_NAME`: Name of database
- `S3_PATH`: Amazon S3 path in the format: `s3://bucket-name/some/path`
- `AWS_ACCESS_KEY_ID`
- `AWS_SECRET_ACCESS_KEY`
- `AWS_DEFAULT_REGION`
- `MAIL_TO`: If `MAIL_TO` and `MAIL_FROM` are specified, an e-mail will be sent, using Amazon SES, every time a backup is taken
- `MAIL_FROM`
- `WEBHOOK`: If specified, an HTTP request will be sent to this URL
- `WEBHOOK_METHOD`: By default the webhook's HTTP method is GET, but it can be changed using this variable
- `KEEP_BACKUP_DAYS`: The number of days to keep backups for when pruning old backups. Defaults to `7`.
- `FILENAME`: String that is passed into `strftime()` and used as the backup dump's filename. Defaults to `$DB_NAME_%Y-%m-%d`.
- `/data/backups` - The database is dumped into this directory (see the example `docker run` command below)
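As a minimal sketch of a scheduled backup run, assuming a hypothetical image name (`your-registry/postgres-backup-s3`) and placeholder credentials, hostname, and bucket:

```sh
# Hypothetical example: the image name, credentials, hostname, and bucket are placeholders.
docker run -d \
  --name pg-backup \
  -e CRON_SCHEDULE="15 3 * * *" \
  -e DB_HOST=db.example.internal \
  -e DB_USER=postgres \
  -e DB_PASS=secret \
  -e DB_NAME=mydb \
  -e S3_PATH=s3://my-bucket/backups/mydb \
  -e AWS_ACCESS_KEY_ID=AKIAEXAMPLE \
  -e AWS_SECRET_ACCESS_KEY=examplesecret \
  -e AWS_DEFAULT_REGION=eu-west-1 \
  -e KEEP_BACKUP_DAYS=7 \
  -v pg-backup-data:/data/backups \
  your-registry/postgres-backup-s3:13
```

With this configuration the container dumps `mydb` every night at 03:15, writes the dumps to the `pg-backup-data` volume mounted at `/data/backups`, uploads them to the given S3 path, and prunes backups older than 7 days.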
This image can also be run as a one-off task to restore one of the backups. To do this, run the container with the command `python -u /backup/restore.py [S3-filename]` (`S3-filename` should only be the name of the file; the directory is set through the `S3_PATH` env variable).
The following environment variables are required (see the example restore command after this list):
- `DB_HOST`: Postgres hostname
- `DB_PASS`: Postgres password
- `DB_USER`: Postgres username
- `DB_NAME`: Name of database to import into
- `S3_PATH`: Amazon S3 directory path in the format: `s3://bucket-name/some/path`
- `AWS_ACCESS_KEY_ID`
- `AWS_SECRET_ACCESS_KEY`
- `AWS_DEFAULT_REGION`
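A sketch of such a restore run, assuming the same hypothetical image name, a placeholder env file `backup.env` containing the variables above, and a placeholder dump file name:

```sh
# Hypothetical example: backup.env holds the required variables listed above,
# and the dump file name is a placeholder for a file that exists under S3_PATH.
docker run --rm --env-file backup.env \
  your-registry/postgres-backup-s3:13 \
  python -u /backup/restore.py mydb_2024-01-15
```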
To run a one-off backup job, e.g. to test that it works when setting it up for the first time, simply start the container with the docker run command set to `python -u backup.py` (with all the required environment variables set).
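For example, reusing the same placeholder env file as above:

```sh
# Hypothetical example: runs a single backup immediately instead of starting the cron schedule.
docker run --rm --env-file backup.env \
  your-registry/postgres-backup-s3:13 \
  python -u backup.py
```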
This image uses the Alpine versions of the official postgres image as its base image. The following Docker tags are available for this image, each based on the corresponding official postgres Alpine image:

- `13`, `latest`
- `12`
- `11`
- `10`