Moving docker: Operation not permitted
Opened this issue · 13 comments
Which version of the script did you use?
Was that all of the errors?
From the errors I can see it's okay: the script moved the files and folders but then failed to delete the now-empty folders.
Ok, good to know! I used the latest version. The errors ended up being around 3-4 times what's shown in the screenshot. I moved around 25 packages from one volume to another so I could remove a volume completely.
Is it expected that Container Manager and Web Station both have to be repaired each time DSM reboots? They run fine after the repair, but it's still odd that they ask to be repaired after every reboot.
When you say the latest version you mean v3.1.60 or the pre-release v4.0.66-RC ?
No, it's not normal for Container Manager and Web Station to need repairing after each DSM reboot.
Let me know the next time you reboot, and don't repair Container Manager and Web Station, so we can check why those packages need repairing.
Latest stable release, since I wasn't aware there was an RC. Would it have been better to use the RC?
I could easily reboot to reproduce the issue, since I have some time now. When it happened before I tried to manually start Container Manager, but unfortunately there weren't any details. Do you know which logs I'd have to inspect?
```
sudo synosystemctl start pkgctl-Docker
Fail to start [pkgctl-Docker].
```
And maybe some more context on what I did. I had a 3 TB SHR-1 storage pool in my DS216+ and replaced it with a 2 TB SHR-1 SSD storage pool. Since the 2 TB disks were not supported (I used these QNAP enclosures to fit 2 x 1 TB SSDs in RAID 0 each), I also ran the latest hdd_db script, which worked perfectly. In short, with the obvious steps like shutting down services and taking a backup omitted:
- Degraded original volume by removing one HDD
- Inserted SSD enclosure 1 and created a new storage pool and volume2
- Moved apps using this script and shared folders using DSM
- Removed second and last disk of storage pool 1 completely using Storage Manager
- Inserted SSD enclosure 2 and created a new storage pool 1 and volume1
- Moved apps and shared folders back from volume2 to volume1
- Removed storage pool 2 and expanded pool 1 with the SSD enclosure 1
Throughout the process everything seemed to work well, except for the error messages in the screenshot. I also had to forcefully remove the docker shared folder with `sudo synoshare --del TRUE docker`, since there was no way to move it and DSM wouldn't let me delete the storage pool while the folder was on it. I then created a new docker shared folder and restored the data from backup. I've done this a few times before on different DiskStations to be able to encrypt the docker folder.
Personally I would have used the latest v4 RC. It exports the docker JSON files for all your containers, including stopped ones, so if anything goes wrong you can download the images in Container Manager and then import the JSON files to restore your containers.
Often when a package needs repairing it's because a symlink, or a path in a config file, points to a non-existent folder. What's strange is that 2 of your packages need repairing after each reboot, even after you've already repaired them.
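To illustrate what that failure mode looks like (a minimal sketch with made-up paths, not DSM's real package layout): after a volume is removed, a symlink can keep pointing at a path that no longer exists, and the link then resolves to nothing:

```shell
#!/bin/bash
# Hypothetical illustration -- all paths here are made up, not DSM's layout.
tmp=$(mktemp -d)
ln -s "$tmp/volume2/@appdata/pkg" "$tmp/var"   # target is never created

target=$(readlink "$tmp/var")                  # the link itself still reads fine
if [[ -L "$tmp/var" && ! -e "$tmp/var" ]]; then
    state="broken"    # it is a symlink (-L) but its target is gone (! -e)
else
    state="ok"
fi
echo "$tmp/var --> $target ($state)"
rm -rf "$tmp"
```

A package whose `var` or `target` link is in that state will typically fail to start and ask to be repaired.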
That QNAP QDA-A2AR is nice. But I can't figure out how, with RAID 1, you would replace a failed drive and have the RAID rebuild.
Silly me, I don't know where I looked. But I seem to be in luck, since the stopped containers show up as well.
I just found out that I can't reboot for a while, since the storage pool is still expanding. But yes, there seem to be some leftovers or symlinks pointing to the old volume. I'll let you know if the issue persists after the next reboot.
Maybe I made a logical mistake, but the idea is to use two cheap 1 TB SSDs in RAID 0 in each enclosure, which makes each enclosure show up as a single 2 TB SSD. Both enclosures are in an SHR, so if one enclosure fails I can remove it, check on a PC which of the two SSDs failed, and replace it (I have a bunch of those cheap). Do you think I made a mistake?
edit: oh, by expanding I don't mean doubling the capacity but mirroring; maybe that's the misunderstanding.
So each QNAP QDA-A2AR has 2 x SSDs in RAID 0, and then DSM sees 2 x 2TB SSDs which you've set up as RAID 1? That's how I would have done it with 2 x QNAP QDA-A2AR.
Do you have a PC? I like to use WinSCP's UI to browse to /var/packages/ContainerManager, where you can see if any symlinks are broken and check where they point.
I'll have physical access to the DiskStation next week and can hook the drives up to a PC then. Or I'll try to check later today on the host using SSH and report back. Thanks for your help so far!
Don't connect the drives to a PC.
WinSCP is a Windows app that supports the FTP, FTPS, SCP, SFTP, WebDAV and S3 file transfer protocols. It also includes PuTTY.
I use WinSCP to connect to my Synology NAS using SCP (which uses SSH) and to launch PuTTY windows.
You can check whether Container Manager has any missing or broken symlinks with:
```shell
#!/bin/bash
pkg=ContainerManager
symlinks=("etc" "home" "share" "target" "tmp" "var")

check_symlink(){
    if [[ ! -L "$1" ]]; then
        echo -e "\e[41mMissing symlink\e[0m $1"
    elif [[ ! -e "$1" ]]; then
        # The symlink exists but its target does not
        echo -en "\e[41mBroken symlink\e[0m $1 --> "
        readlink "$1"
    else
        echo -n "$1 --> "
        readlink "$1"
    fi
}

for s in "${symlinks[@]}"; do
    check_symlink "/var/packages/$pkg/$s"
done
check_symlink "/var/packages/$pkg/var/docker"
```
I renamed one symlink so it would be missing and changed another to point to a non-existent volume, so you can see what to expect if any of Container Manager's symlinks are missing or broken.
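If the check does report a link still pointing at the old volume, it can be repointed with `ln -sfn`. A minimal sketch with made-up paths standing in for whatever the check actually reports (verify the new target exists before relinking anything under /var/packages):

```shell
#!/bin/bash
# Sketch with hypothetical paths: repoint a stale symlink at the new volume.
tmp=$(mktemp -d)
mkdir -p "$tmp/volume1/@appdata/pkg"            # the data's new, real location
ln -s "$tmp/volume2/@appdata/pkg" "$tmp/var"    # stale link to the old volume

# -f: replace the existing link; -n: treat the existing link as a plain
# file rather than following it
ln -sfn "$tmp/volume1/@appdata/pkg" "$tmp/var"

new_target=$(readlink "$tmp/var")
[[ -e "$tmp/var" ]] && state="ok" || state="broken"
echo "$tmp/var --> $new_target ($state)"
rm -rf "$tmp"
```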
So there's an update on my issue with Docker, and lately also with Web Station. Neither auto-starts since the recent migration from HDD to SSD.
I found out that the reason is actually very simple and has nothing to do with this script: I encrypted the shared folders 'docker' and 'web-packages', and neither is mounted yet when the packages try to start (I'm using the auto-mount feature).
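For anyone hitting the same thing: one possible workaround (a sketch under assumptions, not an official DSM feature) is a boot-triggered task that waits for the encrypted share to be mounted before starting the package. The share path and the commented `synosystemctl` call are assumptions to adapt; the wait loop itself is generic:

```shell
#!/bin/bash
# Sketch: wait (with a timeout) for a directory to appear, then start the
# dependent package. The share path and package name are assumptions.
wait_for_dir() {
    local dir=$1 timeout=$2 waited=0
    until [[ -d "$dir" ]]; do
        [[ "$waited" -ge "$timeout" ]] && return 1
        sleep 1
        waited=$((waited + 1))
    done
    return 0
}

# Self-contained demo: the "share" appears after 2 seconds.
demo="$(mktemp -d)/docker"
( sleep 2; mkdir -p "$demo" ) &
if wait_for_dir "$demo" 10; then
    result="mounted"
    # On DSM, this is where you'd start the package, e.g.:
    # synosystemctl start pkgctl-ContainerManager
else
    result="timed out"
fi
echo "$result"
rm -rf "$(dirname "$demo")"
```

Checking for a known file inside the share is more robust than the directory alone, since an unmounted encrypted share's mount point may still exist but be empty.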
Silly me! I thought something had gone wrong, and I spent many hours reinstalling both packages and configuring all the web services and docker containers from scratch.
So this issue can be closed. Many thanks for your input and of course for the scripts themselves!