Docker container loses all data
ShadowVoyd opened this issue · 7 comments
Is this a BUG REPORT or FEATURE REQUEST?:
- BUG
What happened:
The Docker installation intermittently stops responding and dies. Restarting the container shows the installation page again, and all data is lost.
What did you expect to happen:
How to reproduce it (as minimally and precisely as possible):
Use docker compose with the default values. After a day or so, the container stops responding.
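Concretely, the sequence looks like this (the service name `trudesk` is an assumption based on the stock compose file; adjust it to whatever your docker-compose.yml defines):

```bash
# Bring the stack up with the repository's default docker-compose.yml
docker compose up -d

# ...after roughly a day the app stops responding...

# Restarting the app container brings it back, but the install wizard appears again
docker compose restart trudesk
```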
Anything else we need to know?:
Log information upon inspecting the unresponsive container:
2023-07-20T18:11:46: PM2 log: Launching in no daemon mode
2023-07-20T18:11:46: PM2 log: App [trudesk:0] starting in -fork mode-
2023-07-20T18:11:46: PM2 log: App [trudesk:0] online
2023-07-20T18:13:02: PM2 log: Stopping app:trudesk id:0
2023-07-20T18:13:02: PM2 log: App [trudesk:0] exited with code [0] via signal [SIGINT]
2023-07-20T18:13:02: PM2 log: pid=19 msg=process killed
2023-07-20T18:13:02: PM2 log: App [trudesk:0] starting in -fork mode-
2023-07-20T18:13:02: PM2 log: App [trudesk:0] online
2023-07-20T18:20:03: PM2 log: App [trudesk:0] exited with code [1] via signal [SIGINT]
2023-07-20T18:20:03: PM2 log: App [trudesk:0] starting in -fork mode-
2023-07-20T18:20:03: PM2 log: App [trudesk:0] online
2023-07-21T18:11:47: PM2 log: [PM2] This PM2 is not UP TO DATE
2023-07-21T18:11:47: PM2 log: [PM2] Upgrade to version 5.3.0
Environment:
- Trudesk Version: 1.2.8
- OS (e.g. from /etc/os-release): docker/linux
- Node.JS Version: 16.14.2
- MongoDB Version: 5.0.19
- Is this hosted on cloud.trudesk.io: no
Did you map the docker container to a volume?
So the docker-compose file lacks the data directory mounts it needs. Fascinating.
I did not say that at all. I asked if you mapped the docker container to a volume.
If you take a moment to read the docker-compose.yml file you will see that the docker file has volumes attached to the containers. I do not know what set-up/modifications you have made, so I simply asked if you mapped the docker container to a volume.
If you are using trudesk in production, you should use the manual docker deployment to map your own data directories instead of the internal docker volumes that the docker-compose file uses.
The Docker Deployment Documentation goes over how to achieve this. It also states that the docker-compose file is built as a quick and easy way to demo/test trudesk before making the plunge into a full production deployment.
Using the internal docker volumes makes it harder to troubleshoot the actual crash that is happening. You would need to see the actual trudesk server output.log file, which is stored at /usr/src/trudesk/log/output.log within the volume. Having a hard volume map allows you to easily access the container's volume files.
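For anyone following along, here is a rough sketch of a manual deployment with host-mapped directories. Only the log path comes from this thread; the image name, port, and uploads path are assumptions, so take the exact values from the Docker Deployment Documentation.

```bash
# Sketch only — image name, port, and the uploads path are assumptions;
# the log path is the one referenced above. Use the documented values.
docker run -d --name trudesk \
  -p 8118:8118 \
  -v /srv/trudesk/uploads:/usr/src/trudesk/public/uploads \
  -v /srv/trudesk/logs:/usr/src/trudesk/log \
  polonel/trudesk:latest
```

With host paths like these, output.log and uploaded files survive the container being recreated and can be read straight from the host.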
But you do know. I said I used the default values. And through testing I discovered it was wiping out its own data.
We encountered this issue as well.
Trudesk had been running for some days, then hung. Restarting the container lost all data.
I then restarted it a few more times. Most of the time it worked without losing data, but then it lost data again. (I think it was hanging again before that, but I'm not sure.)
Yeah, even deploying it using the instructions at https://docs.trudesk.io/v1.2/getting-started/deployment/docker-deployment/ resulted in trudesk wiping out its own data again.
I have been running Trudesk in a docker container for over a month since this issue was raised.
Week 1: I had it restart every day at 00:00
Week 2: No restarts performed
Week 3: Restart every other day.
Is data being removed from MongoDB or from the files trudesk stores? Again, what are the contents of /usr/src/trudesk/log/output.log?
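Both checks can be done from the host with docker exec; the container names and database name below are examples, so substitute whatever `docker ps` shows on your setup:

```bash
# Trudesk server log (path referenced above)
docker exec -it trudesk cat /usr/src/trudesk/log/output.log

# See whether the MongoDB data is still present via the Mongo shell
# (mongosh, or the legacy mongo shell on older images)
docker exec -it mongodb mongosh trudesk --eval 'db.getCollectionNames()'
```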
I have not seen it remove its data from MongoDB. I logged into the MongoDB docker container and could see the data directly through the Mongo CLI.
I finally checked the Kubernetes clusters running 1.2.9 in production and none have had their data removed from MongoDB.
Without seeing a MongoDB log from the time of the wipe, I cannot troubleshoot what the cause could be.
The code only wipes the database in one function, which is during a restore (line 79 in 34ff067).
I can only guess at this point, as your setup could vary. My guess is that trudesk somehow loses its connection to MongoDB, you end up running the install wizard again, and during that installation it creates a new database.
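If anyone can still reproduce this, the MongoDB container log from around the time of the wipe is what would help; something along these lines (the container name is a placeholder, and --since takes a duration or timestamp):

```bash
# Capture the MongoDB container log covering the window when the data disappeared
docker logs --timestamps --since 24h mongodb > mongodb-at-wipe.log
```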
Bottom line: without more information I cannot proceed, and I'm not able to reproduce.
Feel free to reopen with the requested log files.