markdumay/synology-docker

Clean Working Directory


If I were to run the UPDATE command without any options specified, would Step L) delete the backup made in Step D)?

Fortunately I had a backup because I ran BACKUP with a path specified first. After running the UPDATE, I checked both the directory where the script was located and /tmp/docker_update... there was no backup in the script folder, and /tmp/docker_update didn't even exist.

Suggestions/Points:

  1. If the backup is being deleted at the end, then you should definitely change that. Making a backup as part of the workflow is pointless if it is just going to be deleted at the end, because you are left with nothing to restore from.

  2. The heading for Workflow Step D) in the README.md file has a minor typo:

D) ackup current files

  3. You should set the default WORKING_DIR to dirname $0 (where the script is located) and have options that allow users to define an alternative path. Using /tmp makes sense for scripts that are run regularly, because you know the OS (or at least something) will do the housekeeping for that directory and your script doesn't need to cater for it. For this, though, it makes more sense for everything to be relative to the script directory, especially since that is where the binaries get downloaded to.
  4. Related to the above point, you should have a -p (path) or -d (directory) option to allow the user to specify a working directory. -b (backup) would then be just the filename, which, if not specified, defaults to what is currently in the script (see the sketch after this list).
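Something along these lines is what I have in mind. This is a minimal sketch, not the script's actual code: the default backup filename and the usage text are placeholders, and I'm assuming getopts-style option parsing.

```sh
#!/bin/sh
# Default the working directory to wherever the script lives,
# instead of a hard-coded /tmp path.
WORKING_DIR="$(dirname "$0")"
BACKUP_FILE="docker_backup.tgz"  # placeholder default

# -p overrides the working directory, -b overrides the backup filename.
while getopts "p:b:" opt; do
    case "$opt" in
        p) WORKING_DIR="$OPTARG" ;;
        b) BACKUP_FILE="$OPTARG" ;;
        *) echo "Usage: $0 [-p PATH] [-b BACKUP]" >&2; exit 1 ;;
    esac
done

echo "Backup target: $WORKING_DIR/$BACKUP_FILE"
```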

I may be going a little over the top with this, so apologies for that. Geeky stuff like this is my idea of fun lol :D

Thanks a lot for your suggestions, Hiren; I truly appreciate your feedback. I'm a fan of geeky stuff too. :-) Below are my comments on your suggestions:

  1. Fully agree. Fortunately, the step execute_clean() only deletes the subfolder docker with the command rm -rf "$WORKING_DIR/docker" (see the simplified sketch after this list). I clarified the documentation directly in the master.
  2. Good catch, I fixed it directly in the master.
  3. I agree. Right now the backups are stored in the temp directory, unless a fully qualified path for the backup is specified. It would be better to store them in the current directory by default. I guess the variable name WORKING_DIR is a bit confusing, as in reality it refers to the temporary directory.
  4. I'm not sure about this one yet. Right now, both download and install require an explicit PATH argument. The path for backup and restore is implicit and is derived from the filename in -b.
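For reference, the clean step boils down to the following (simplified for illustration; only the extracted subfolder is removed, so backup archives sitting next to it survive):

```sh
# Simplified view of the clean step: only the extracted 'docker'
# subfolder is removed; backup archives in the same directory are untouched.
execute_clean() {
    rm -rf "$WORKING_DIR/docker"
}
```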

My current proposed changes are:

  1. Change the default directory for backups to the directory where the script is located (dirname $0)
  2. Rename $WORKING_DIR to $TEMP_DIR to avoid confusion

A new feature branch is ready; care to review?

git clone -b path https://github.com/markdumay/synology-docker.git

A summary of the key changes:

  1. The $WORKING_DIR is split into $TEMP_DIR, $BACKUP_DIR, and $DOWNLOAD_DIR. The $TEMP_DIR is now used for extracting files only and is recreated at the start of the script; the update command also clears it when finished. Both download and install use the $DOWNLOAD_DIR path, which is explicitly set when calling these commands.
  2. The -p option has been added, as it allows you to use auto-versioning for the backup names. However, specifying a path as part of the filename in the -b option works implicitly too.
  3. The directory for backup files defaults to $PWD instead of dirname $0; I thought this might work better when the script is kept in /usr/local/bin/, for instance. A rough sketch of the new layout follows below.
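In shorthand, the new layout behaves roughly like this. This is a simplified sketch: the /tmp/docker_update path and the variable names come from the discussion above, while the EXIT trap is just a compact way to illustrate the cleanup, not necessarily how the branch implements it.

```sh
TEMP_DIR='/tmp/docker_update'       # extraction only; recreated each run
BACKUP_DIR="${BACKUP_DIR:-$PWD}"    # backups default to the invocation dir
DOWNLOAD_DIR=''                     # set explicitly by download/install

# Recreate the temp dir at the start of the script...
rm -rf "$TEMP_DIR" && mkdir -p "$TEMP_DIR"

# ...and clear it again once the update command finishes
# (shown here as an EXIT trap for brevity).
trap 'rm -rf "$TEMP_DIR"' EXIT
```

With this split, -p yields an auto-versioned backup name in the given directory, while -b pins the full path and filename explicitly.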