Clean Working Directory
Closed this issue · 3 comments
If I were to run the UPDATE command without any options specified, would Step L) delete the backup made in Step D)?

Fortunately I had a backup, because I ran BACKUP with a path specified first. After running the UPDATE, I checked both the directory where the script was located and `/tmp/docker_update` ... there was no backup in the script folder, and `/tmp/docker_update` didn't even exist.
Suggestions/Points:

- If the backup is being deleted at the end, then you should definitely change that. Making a backup as part of the workflow is pointless if it is just going to be deleted at the end, because you are left with nothing to restore back to.
- The heading for Workflow Step D) in the README.md file has a minor typo: `D) ackup current files`
- You should set the default `WORKING_DIR` as `dirname $0` (where the script is located) and have options which allow users to define an alternative path. Using `/tmp` makes sense for scripts that are run regularly, because you know the OS (or at least something) will do the housekeeping for that directory and your script doesn't need to cater for it. For this though, it makes more sense for everything to be relative to the script directory, especially since that is where the binaries get downloaded to.
- Related to the above point, you should have a `-p` (path) or `-d` (directory) option to allow the user to specify a working directory. `-b` (backup) would then be just the filename, which, if not specified, defaults to what is currently in the script.

I may be going a little over the top with this, so apologies for that. Geeky stuff like this is my idea of fun lol :D
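The suggested option handling can be sketched roughly as follows. This is a minimal illustration, not the script's actual code: the `resolve_paths` helper and the default backup filename are assumptions, while the `-p`/`-b` flag semantics follow the proposal above.

```shell
# Sketch only: default the working directory to the script's own location
# and let the user override it. resolve_paths and the default filename are
# assumptions for illustration, not the script's actual code.
resolve_paths() {
    # default to the directory containing the script, as an absolute path
    WORKING_DIR="$(cd "$(dirname "$0")" && pwd)"
    BACKUP_FILE="docker_backup.tgz"   # assumed default filename

    OPTIND=1   # reset so the function can be called more than once
    while getopts "p:b:" opt; do
        case "$opt" in
            p) WORKING_DIR="$OPTARG" ;;   # alternative working directory
            b) BACKUP_FILE="$OPTARG" ;;   # backup filename only, no path
        esac
    done

    echo "$WORKING_DIR/$BACKUP_FILE"
}
```

With this shape, `resolve_paths -p /volume1/backups` would yield `/volume1/backups/docker_backup.tgz`, while a plain `resolve_paths` keeps everything next to the script.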
Thanks a lot for your suggestions Hiren, I truly appreciate your feedback. I'm a fan of geeky stuff too. :-) Below are my comments on your suggestions:

- Fully agree; fortunately, the step `execute_clean()` only deletes the subfolder `docker` with the command `rm -rf "$WORKING_DIR/docker"`. I clarified the documentation directly in the master.
- Good catch, I fixed it directly in the master.
- I agree, right now the backups are stored in the temp directory, unless a fully qualified path for the backup is specified. It would be better to store them in the current directory by default. I guess the variable name `WORKING_DIR` is a bit confusing, as it refers to the temporary directory in reality.
- I'm not sure about this one yet. Right now, both `download` and `install` require an explicit `PATH` argument. The path for `backup` and `restore` is implicit and is derived from the filename in `-b`.
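A defensive sketch of the clean-up step described above can make that guarantee explicit. The guard against an empty `$WORKING_DIR` is an addition for illustration and is not necessarily present in the actual `execute_clean()`:

```shell
# Sketch of a defensive variant of the clean-up step; the empty-variable
# guard is an assumption for illustration, not confirmed project code.
execute_clean() {
    # with an empty WORKING_DIR the command would expand to rm -rf "/docker"
    if [ -z "$WORKING_DIR" ]; then
        echo "ERROR: WORKING_DIR is not set" >&2
        return 1
    fi
    # delete only the extracted 'docker' subfolder; files next to it survive
    rm -rf "$WORKING_DIR/docker"
}
```

Backups stored directly in `$WORKING_DIR` are untouched, which matches the behavior described above.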
My current proposed changes are:

- Change the default directory of backups to the current directory where the script is located (`dirname $0`)
- Rename `$WORKING_DIR` to `$TEMP_DIR` to avoid confusion
A new feature branch is ready, care to review?

```shell
git clone -b path https://github.com/markdumay/synology-docker.git
```
A summary of the key changes:

- The `$WORKING_DIR` is split into `$TEMP_DIR`, `$BACKUP_DIR`, and `$DOWNLOAD_DIR`. The `$TEMP_DIR` is now used for extracting files only and is recreated at the start of the script. The `update` command clears the directory when finished too. Both `download` and `install` use the `$DOWNLOAD_DIR` path, which is explicitly set when calling these commands.
- The `-p` option has been added, as it allows you to use auto-versioning for the backup names. However, specifying a path in the filename for the `-b` option implicitly works too.
- The directory for backup files defaults to `$PWD` instead of `dirname $0`. I thought this might work better when the script is kept in `/usr/local/bin/`, for instance.
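The `$PWD` versus `dirname $0` distinction can be seen in a short sketch; the example path in the comment is illustrative only:

```shell
# $PWD is wherever the user invokes the script from; dirname "$0" is where
# the script file itself lives (e.g. /usr/local/bin when installed there),
# which may be an entirely different directory.
script_dir="$(cd "$(dirname "$0")" && pwd)"
invocation_dir="$PWD"
echo "script location:     $script_dir"
echo "default backup path: $invocation_dir"
```

With the script in `/usr/local/bin/`, defaulting to `$PWD` keeps backups out of the system binary directory.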