fradelg/docker-mysql-cron-backup

Large backup files cannot be restored

kammiao opened this issue · 7 comments

My backup file in .sql.gz format exceeds 170 MB and cannot be restored to the database using the restore.sh script; the restore hangs after I execute the command.

I need more info. Can you at least paste the output of the restore.sh script?

Sorry, I can't give you the output of the restore.sh script because it doesn't produce any output, but I observed that the process gets stuck while executing gzip -d -c xxx.sql.gz.

Sorry, but there is no gzip in the restore.sh script (it uses gunzip), which is the script you should be calling. Can you paste the command and explain how you are restoring your .sql.gz file?

The command I use is "docker container exec mysql-backup /restore.sh /backup/latest.xxxx.sql.gz". When I execute this command, there is no output even after an hour.
So I entered the mysql-backup container to view the processes, and the output was:
PID USER TIME COMMAND
1 root 0:00 dockerize -wait tcp://...:**** -timeout 10s /run.sh
12 root 0:01 crond -f -l 8 -L /mysql_backup.log
16 root 0:43 tail -F /mysql_backup.log
17926 root 50:33 {restore.sh} /bin/bash /restore.sh /backup/latest.xxx.sql.gz
17932 root 0:13 gzip -d -c /backup/latest.xxx.sql.gz
17933 root 0:00 /bin/bash
17954 root 0:00 ps

The image I use is 'fradelg/mysql-cron-backup:latest'.
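For reference, this is roughly how the container can be inspected while the restore is running (a minimal sketch, assuming the container name mysql-backup from the command above):

docker container exec mysql-backup ps              # list the processes inside the running container
docker container exec -it mysql-backup /bin/bash   # or open an interactive shell to look around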

If you are logged into the container, then you can add an echo line just before and after this line:

SQL=$(gunzip -c "$1")

then run the restore.sh script within the container and check whether your process is stuck before or after this line.

You should get a message before the mysql command is executed, so I assume this is where you have the problem, but you can add more echoes to debug it.
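As a minimal sketch, the relevant part of restore.sh with the debug echoes added could look like this (only the SQL=$(gunzip -c "$1") line is from the script; the echo lines and their wording are additions for debugging):

echo "decompressing $1 ..."   # added: printed before the dump is decompressed
SQL=$(gunzip -c "$1")         # existing line: command substitution buffers the whole decompressed dump in the SQL variable
echo "restoring dump ..."     # added: printed only once the decompression has finished

If the second echo never shows up, the script is still inside the command substitution, i.e. decompressing the dump and buffering it in memory, which matches where the process appeared stuck in the ps output above.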

Thank you for your suggestion.