WARNING: this project probably does not support macOS/BSD unless you install the appropriate GNU utilities. If you'd like it to work with the default macOS/BSD tools, please submit a patch that detects them and uses different behaviour for macOS/BSD only, complete with tests. It should detect the type of tool, not the platform.
Backup scripts that manage incremental backups using GNU tar's incremental snapshot capability, with support for an s3 sync as well.
See changelog.md
Download the latest release tar.gz. If you just want to install directly to /usr/local/bin/, you can run the following...
# swap the -t for -x after you've confirmed the tar listing only puts cloud-tar in `/usr/local/bin/`
# (command substitution is used here because `read` at the end of a pipeline
# runs in a subshell in bash, so the variable would not survive it)
latest=$(curl -s https://api.github.com/repos/TrentonAdams/cloud-tar/releases/latest \
  | jq -r '.assets[0].browser_download_url')
curl -s -L "${latest}" \
  | sudo tar -tvz -C /usr/local/bin
Alternatively, clone the repo and install to `/usr/local/bin`:
git clone --recurse-submodules git@github.com:TrentonAdams/cloud-tar.git
cd cloud-tar/
make clean tests install
This script is based on simple concepts. The gory details can be found on the TAR Incremental Dumps Page.
- Start with a level 0 backup, meaning the initial backup. Its backup_index is 0, in the form `backup_name.0.backup??`
- Subsequent backups get a timestamp (milliseconds since 1970) as their backup_index value, in the form `backup_name.1622074049793.backup??`
- To start over, just delete `backup_name.sp`; tar will then automatically create a new level 0 backup with backup_index 0.
- When starting over, back up to a new bucket or folder, so that you don't get confused about which timestamped backups are relevant to your current level 0. The old incremental backups are useless if you overwrite the original level 0 anyhow.
- We support splitting backups at 4G by default, so if you wanted you could copy the pieces to a FAT32 USB drive (a sketch of the whole pipeline follows this list).
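For the curious, here is a minimal sketch of the mechanism these concepts describe: GNU tar's `--listed-incremental` snapshot drives the level 0 / incremental distinction, and the archive stream is encrypted and split. The file names are illustrative only; cloud-tar's actual internals may differ.

```bash
# level 0: home.sp does not exist yet, so tar archives everything
tar --create --gzip --listed-incremental=home.sp "/home/${USER}" \
  | gpg --encrypt -r me@example.com \
  | split -b 4G - home.0.backup       # pieces: home.0.backupaa, ab, ...

# incremental: home.sp now exists, so tar archives only what changed
ts=$(date +%s%3N)                     # milliseconds since 1970 (GNU date)
tar --create --gzip --listed-incremental=home.sp "/home/${USER}" \
  | gpg --encrypt -r me@example.com \
  | split -b 4G - "home.${ts}.backup"
```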
To back up...

- the user `${USER}` from your USER environment var
- to the folder `/media/backup/cloud-tar`
- with gpg encryption to the gpg recipient `me@example.com`
- with an s3 sync to the s3 bucket named `user-backup-home`
- using `~/backup/tar-excludes.txt` as the tar exclusion file
So let's clone, run the tests, and run a test backup:
cloud-tar backup \
-s /home/${USER} \
-d /media/backup/cloud-tar \
-n home \
-r me@example.com \
-e ~/backup/tar-excludes.txt \
-b user-backup-home;
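The `-b` option syncs the destination folder to the named s3 bucket. Conceptually that amounts to something like the following (assuming the AWS CLI performs the sync; the exact command cloud-tar runs may differ):

```bash
aws s3 sync /media/backup/cloud-tar s3://user-backup-home/
```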
An example `tar-excludes.txt` follows. Replace `username` with your `${USER}`. Tar exclude files differ from rsync exclude files: with tar excludes you have to use the full path that you're backing up, whereas with rsync excludes paths are relative to the last folder in the path you're backing up.
/home/username/.config/google-chrome
/home/username/.cache
/home/username/.config/Code/Cache
/home/username/.config/Code/CacheData
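To make that difference concrete, here is a hedged contrast; the tar invocation mirrors what a backup of `/home/${USER}` would match against, and the rsync line is only for comparison:

```bash
# tar: exclude patterns carry the full path as passed on the command line;
# /dev/null as the archive makes this an inexpensive dry run
tar --create --verbose --file=/dev/null \
    --exclude-from="${HOME}/backup/tar-excludes.txt" "/home/${USER}"

# rsync, for contrast: excludes are relative to the source folder
rsync -a --exclude='.cache' --exclude='.config/google-chrome' \
    "/home/${USER}/" /tmp/rsync-copy/
```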
For now, restores are manual. We'll be adding features to manage them later on. Here's an example of some backups, along with a restore.
Let's do a level 0 backup, then two incremental backups: one after adding files, and one after deleting files.
recipient=me@example.com
rm -rf files backup; mkdir -p files backup;
# create ten files and take the level 0 backup
for i in {1..10}; do echo "file${i}" > "files/file-${i}"; done;
cloud-tar backup \
  -s ./files/ \
  -d backup/ \
  -r "${recipient}" \
  -n test-backup;
# add five files and take the first incremental backup
for i in {11..15}; do echo "file${i}" > "files/file-${i}"; done;
cloud-tar backup \
  -s ./files/ \
  -d backup/ \
  -r "${recipient}" \
  -n test-backup;
# delete two files and take the second incremental backup
rm -f files/file-{9,10};
cloud-tar backup \
  -s ./files/ \
  -d backup/ \
  -r "${recipient}" \
  -n test-backup;
Let's take a look at what we have, and how to restore.
$ ls -ltr backup
total 28
-rw-rw-r-- 1 trenta trenta 685 Jun 1 00:26 test-backup.0.backupaa
-rw-rw-r-- 1 trenta trenta 431 Jun 1 00:26 test-backup.0.spb
-rw-rw-r-- 1 trenta trenta 442 Jun 1 00:26 test-backup.1622528789886.spb
-rw-rw-r-- 1 trenta trenta 610 Jun 1 00:26 test-backup.1622528789886.backupaa
-rw-rw-r-- 1 trenta trenta 191 Jun 1 00:26 test-backup.sp
-rw-rw-r-- 1 trenta trenta 437 Jun 1 00:26 test-backup.1622528789900.spb
-rw-rw-r-- 1 trenta trenta 488 Jun 1 00:26 test-backup.1622528789900.backupaa
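Based on the naming conventions described earlier (with the `.spb` role inferred from the TODO items below), these files break down as:

- `test-backup.0.backupaa` – the level 0 archive, split piece "aa"
- `test-backup.1622528789886.backupaa` – an incremental archive (ms timestamp index)
- `test-backup.sp` – the current tar snapshot file
- `test-backup.<backup_index>.spb` – a per-backup copy of the snapshot file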
Restoring is as simple as running the restore process:
cloud-tar restore \
-s backup \
-d restore \
-n test-backup
Take special note that, as part of the restore process, it properly deletes the files that were removed between backups...
restoring backup/test-backup.0.backupaa
gpg output info here
./files/
./files/file-1
./files/file-10
./files/file-2
./files/file-3
./files/file-4
./files/file-5
./files/file-6
./files/file-7
./files/file-8
./files/file-9
restoring backup/test-backup.1622528789886.backupaa
gpg output info here
./files/
./files/file-11
./files/file-12
./files/file-13
./files/file-14
./files/file-15
restoring backup/test-backup.1622528789900.backupaa
gpg output info here
./files/
tar: Deleting ‘./files/file-9’
tar: Deleting ‘./files/file-10’
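Under the hood, a restore is conceptually a loop over the backups in timestamp order: join each backup's split pieces, decrypt, and let GNU tar's incremental extraction handle the deletions. The following is only a sketch under the assumptions in this example (gpg-encrypted, gzip-compressed, split archives); it is not cloud-tar's literal implementation:

```bash
mkdir -p restore && cd restore
# glob order works here: "0" sorts before the (equal-length) ms timestamps
for first_piece in ../backup/test-backup.*.backupaa; do
  # join all split pieces of this backup, decrypt, and extract;
  # --listed-incremental=/dev/null makes tar honour incremental metadata
  # (including deletions) without writing a new snapshot file
  cat "${first_piece%aa}"?? \
    | gpg --decrypt \
    | tar --extract --verbose --gzip --listed-incremental=/dev/null
done
```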
- add integrity check (`tar -tvzg file.sp`)
- add a backup deletion script, where we can delete successive backups as far back as we need, and restore the snapshot file to that point (a hypothetical sketch follows this list)
    - Possibly `ls -1 backup-dir/name.*.spb | tail -2` to get the previous backup snapshot file
    - Decrypt the backup snapshot file to `backup-dir/name.sp`
    - Delete files for `ls -1 backup-dir/name.*.spb | tail -1`
    - Delete files for `ls -1 backup-dir/name.*.backup* | tail -1`
    - The previous won't quite work, as we need to account for split file names. So we need to grab the timestamp from the most recent backup file name, and then delete the wildcarded `name.*.backup*`
- Possibly
    - make "splitting" at 4G an option, not a requirement.
    - create an asciinema demo.
    - add an argument for a backup size warning
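And a hypothetical sketch of that deletion idea, accounting for split file names as noted above (the names and layout are taken from the examples in this README):

```bash
# roll back the most recent backup: take its timestamp from the newest
# .spb file, then delete that backup's split pieces and its snapshot copy
name=test-backup; dir=backup
latest=$(ls -1 "${dir}/${name}".*.spb | tail -1)
ts=$(basename "${latest}" | sed -E "s/^${name}\.([0-9]+)\.spb\$/\1/")
rm -f "${dir}/${name}.${ts}".backup?? "${dir}/${name}.${ts}.spb"
# restoring the previous .spb to ${name}.sp (decrypting it, per the TODO
# above) is left out of this sketch
```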