High CPU usage and 10,000 requests every second in web UI, like #28
masterwishx opened this issue · 84 comments
I'm very sorry, but I'm having this issue again -- maybe you can reopen the old one:
#28
It ran normally for about 2 days, then the count started growing by about 10,000 per second.
If I don't run the container for a few days, it works fine again for about 2 days and then starts climbing again.
Very strange -- I don't have this issue with gregyankovoy/goaccess,
because I used his container before I moved the logs folder to /mnt/user/Logs/NginxProxyManager/
Where can I look to find the problem?
Just want to confirm that I'm also having this issue -- roughly 8,000 requests per second as reported by the web-ui.
Running the Unraid version of the image (v1.0.5) -- everything is flawless except for this one issue. Thanks for all your effort so far -- I discovered your work via a YouTube video that explained everything! https://youtu.be/-CQcEWVBjQU
Just want to confirm
Thanks, I thought I was the only one having this problem.
@masterwishx @Mission-Contro1
I deployed a new version to the develop branch. I'm now adding the files to the goaccess.conf file instead of the command line. Give it a shot.
docker pull xavierh/goaccess-for-nginxproxymanager:develop
Give it a shot
OK, thanks a lot, I will try.
I found that most requests come from Authelia and Nextcloud; I also have Uptime Kuma polling every 5 minutes. The container starts OK, but after some time it generates many more requests, so I need to stop it for some days.
This is a very strange issue -- I didn't have this problem with the old version from https://github.com/GregYankovoy
Sadly, still having the same issue. It will start just fine, but after a few seconds it will add roughly 8,000 requests every second.
@masterwishx You and I have a very similar setup! I have Nextcloud and Uptime Kuma running as well. I will try disabling them both in NPM to see if that narrows down the issue.
I tried stopping NPM but the GoAccess request counter is still going strong.
I tried stopping NPM
I don't think the problem is in them, because before I used another, similar Docker image with the old GoAccess 1.4 and there was no problem.
I still have it but am not using it for now. I checked both, and the old version does not have this issue.
Now I'm trying the dev tag of this image instead of latest, and I will post here if the problem is the same.
@Mission-Contro1 and @masterwishx so I just found an issue in the current develop build that actually caused the problem that you found. It might be related to what you were seeing in the past but just manifested itself differently this time.
Please pull the latest develop version:
docker pull xavierh/goaccess-for-nginxproxymanager:develop
Still no dice -- the behavior is unchanged. Once it first loads, the counter will remain static for a few seconds before it ticks up by 8,000 requests per second.
Also the Tx. Amount ticks up very rapidly as well.
@Mission-Contro1 can you add the debug flag if you haven't already ("DEBUG=True"), then browse to /goaccess_conf.html, search for the flag "#GOAN_PROXY_FILES", and provide me the list of files shown after it.
However these are the files that are listed at /goaccess-config/proxy_logs:
log-file /goaccess-config/access_archive.log
log-file /opt/log/proxy-host-1_access.log
log-file /opt/log/proxy-host-2_access.log
log-file /opt/log/proxy-host-3_access.log
Please pull the latest develop version
Since yesterday's dev version I haven't seen this issue so far, but I still get occasional spikes (not every second), and the total request count still seems high, I think. It may need more time to check; for the night I shut down the server.
I will update to the latest dev and check as well.
Sorry, but I checked just now and the problem still exists: every second, many requests and a lot of Tx. Amount.
can you add the debug flag if you haven't already ("DEBUG=True"), then browse to /goaccess_conf.html, search for the flag "#GOAN_PROXY_FILES", and provide me the list of files shown after it
Can you please explain where to add this flag? Do you mean adding a variable to the container?
Hmm... That screenshot looks good.
I added a variable for the container recently called DEBUG but you got me the information I needed.
- DEBUG=True
Can you try skipping the archived logs?
- SKIP_ARCHIVED_LOGS=True
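For reference, outside of the Unraid template the same variables would be passed to a plain docker run roughly like this (just a sketch -- the container name and the host log path are placeholders for your setup):
docker run -d --name goaccess-npm \
  -e DEBUG=True \
  -e SKIP_ARCHIVED_LOGS=True \
  -p 7880:7880 \
  -v /mnt/user/Logs/NginxProxyManager:/opt/log:ro \
  xavierh/goaccess-for-nginxproxymanager:latest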
OK, I will try.
Without the archived logs it should only cover about the last 3 days of logs...
I forgot to mention that I have another log file from the old GregYankovoy container, called proxy_logs.log.
It is still being generated in NPM by some of the proxy hosts, because I added these lines:
"access_log /config/logs/proxy_logs.log proxy;
error_log /config/log/proxy_errors.log warn;"
But your container reads all the "proxy-host-*_access.log" files, which is a much better way, so I don't think it causes the issue?!
That additional file can't be the issue. I can't figure out what might be causing the problem. Do you have any additional customizations to NPM?
No other customizations on my end -- 3 Proxy Hosts with 3 different certs through Let's Encrypt, that's all. I haven't been able to get any closer to figuring out what's wrong.
@Mission-Contro1 can you post a screenshot of all your files in that directory?
Do you have any additional customizations to NPM?
No
But I also use Cloudflare DNS proxy.
The file is still here because of the issue in this container.
I mean, I need it to check against the old container, but I don't have the same problem there.
I have about 5,000 requests a day in that container and in Cloudflare analytics.
But here I have 1,000,000 in 3 days.
I also don't understand what the problem is -- maybe it's in the 1.5.5 version.
Maybe the author of the program can help?
@masterwishx I read the thread and I think I'd recommend as well what allinurl said. If you can run goaccess directly against the logs and see if it happens that'd be best.
If you need help with that maybe I can lend a hand.
OK, I can try, but I don't know what "directly" means. Without Docker?
How can I do it?
If you can help with it, that would be awesome.
Sorry, I've been busy for a bit; I'll get to this by the end of the week.
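In the meantime, running it directly against one log file would look roughly like this (a sketch only -- COMBINED is just a placeholder; the real --log-format string has to match NPM's proxy log format, which this container normally sets up for you):
goaccess /mnt/user/Logs/NginxProxyManager/proxy-host-1_access.log \
  --log-format=COMBINED \
  -o /tmp/report.html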
OK, Thanks
I'm also facing the same issue.
I only have proxy_host-*.log
and proxy-host-*_access.log
files in the "Current" set (according to the container logs) (49 files)
and 80 "Archived".
Once I stop it from scanning the archived files, it seems to work fine.
With the archived files, the count keeps increasing and all the statistics go to the "yesterday" date.
If you want me to test anything, let me know
Did a very basic test: copied and unzipped all logs into a folder.
Ran grep "/json" * | wc -l
and got back 2231594.
But in the GUI... it's higher, and rising.
@stavros-k you have both versions, proxy_host-*.log and proxy-host-*_access.log?
Correct,
root@Prometheus:/mnt/user/appdata/NginxProxyManager/log# ls proxy*
proxy-host-13_access.log proxy-host-19_error.log.1.gz proxy-host-26_access.log.4.gz proxy-host-28_error.log.4.gz proxy-host-32_error.log.9.gz proxy-host-35_error.log proxy-host-3_error.log.6.gz proxy-host-6_access.log proxy_host-13.log proxy_host-19.log.2.gz proxy_host-23.log proxy_host-4.log.10.gz
proxy-host-13_access.log.1.gz proxy-host-19_error.log.10.gz proxy-host-26_error.log proxy-host-28_error.log.5.gz proxy-host-33_access.log proxy-host-35_error.log.1.gz proxy-host-3_error.log.7.gz proxy-host-6_access.log.1.gz proxy_host-13.log.1.gz proxy_host-19.log.3.gz proxy_host-24.log proxy_host-4.log.11.gz
proxy-host-13_access.log.2.gz proxy-host-19_error.log.2.gz proxy-host-26_error.log.1.gz proxy-host-28_error.log.6.gz proxy-host-33_access.log.1.gz proxy-host-35_error.log.2.gz proxy-host-3_error.log.8.gz proxy-host-6_access.log.2.gz proxy_host-13.log.10.gz proxy_host-19.log.4.gz proxy_host-24.log.1.gz proxy_host-4.log.12.gz
proxy-host-13_access.log.3.gz proxy-host-19_error.log.3.gz proxy-host-26_error.log.10.gz proxy-host-28_error.log.7.gz proxy-host-33_access.log.2.gz proxy-host-35_error.log.3.gz proxy-host-3_error.log.9.gz proxy-host-6_access.log.3.gz proxy_host-13.log.11.gz proxy_host-19.log.5.gz proxy_host-26.log proxy_host-4.log.13.gz
proxy-host-13_access.log.4.gz proxy-host-19_error.log.4.gz proxy-host-26_error.log.2.gz proxy-host-28_error.log.8.gz proxy-host-33_access.log.3.gz proxy-host-35_error.log.4.gz proxy-host-40_access.log proxy-host-6_access.log.4.gz proxy_host-13.log.12.gz proxy_host-19.log.6.gz proxy_host-26.log.1.gz proxy_host-4.log.14.gz
proxy-host-13_error.log proxy-host-19_error.log.5.gz proxy-host-26_error.log.3.gz proxy-host-28_error.log.9.gz proxy-host-33_access.log.4.gz proxy-host-35_error.log.5.gz proxy-host-40_error.log proxy-host-6_error.log proxy_host-13.log.13.gz proxy_host-19.log.7.gz proxy_host-26.log.2.gz proxy_host-4.log.2.gz
proxy-host-13_error.log.1.gz proxy-host-19_error.log.6.gz proxy-host-26_error.log.4.gz proxy-host-2_access.log proxy-host-33_error.log proxy-host-35_error.log.6.gz proxy-host-4_access.log proxy-host-6_error.log.1.gz proxy_host-13.log.14.gz proxy_host-19.log.8.gz proxy_host-26.log.3.gz proxy_host-4.log.3.gz
proxy-host-13_error.log.10.gz proxy-host-19_error.log.7.gz proxy-host-26_error.log.5.gz proxy-host-2_access.log.1.gz proxy-host-33_error.log.1.gz proxy-host-35_error.log.7.gz proxy-host-4_access.log.1.gz proxy-host-6_error.log.10.gz proxy_host-13.log.2.gz proxy_host-19.log.9.gz proxy_host-27.log proxy_host-4.log.4.gz
proxy-host-13_error.log.2.gz proxy-host-19_error.log.8.gz proxy-host-26_error.log.6.gz proxy-host-2_access.log.2.gz proxy-host-33_error.log.2.gz proxy-host-35_error.log.8.gz proxy-host-4_access.log.2.gz proxy-host-6_error.log.2.gz proxy_host-13.log.3.gz proxy_host-2.log proxy_host-27.log.1.gz proxy_host-4.log.5.gz
proxy-host-13_error.log.3.gz proxy-host-19_error.log.9.gz proxy-host-26_error.log.7.gz proxy-host-2_access.log.3.gz proxy-host-33_error.log.3.gz proxy-host-36_access.log proxy-host-4_access.log.3.gz proxy-host-6_error.log.3.gz proxy_host-13.log.4.gz proxy_host-2.log.1.gz proxy_host-28.log proxy_host-4.log.6.gz
proxy-host-13_error.log.4.gz proxy-host-20_access.log proxy-host-26_error.log.8.gz proxy-host-2_access.log.4.gz proxy-host-33_error.log.4.gz proxy-host-36_access.log.1.gz proxy-host-4_access.log.4.gz proxy-host-6_error.log.4.gz proxy_host-13.log.5.gz proxy_host-2.log.10.gz proxy_host-29.log proxy_host-4.log.7.gz
proxy-host-13_error.log.5.gz proxy-host-20_access.log.1.gz proxy-host-26_error.log.9.gz proxy-host-2_error.log proxy-host-33_error.log.5.gz proxy-host-36_access.log.2.gz proxy-host-4_error.log proxy-host-6_error.log.5.gz proxy_host-13.log.6.gz proxy_host-2.log.11.gz proxy_host-3.log proxy_host-4.log.8.gz
proxy-host-13_error.log.6.gz proxy-host-20_access.log.2.gz proxy-host-27_access.log proxy-host-2_error.log-2022011417.backup proxy-host-33_error.log.6.gz proxy-host-36_error.log proxy-host-4_error.log.1.gz proxy-host-6_error.log.6.gz proxy_host-13.log.7.gz proxy_host-2.log.12.gz proxy_host-3.log.1.gz proxy_host-4.log.9.gz
proxy-host-13_error.log.7.gz proxy-host-20_access.log.3.gz proxy-host-27_access.log.1.gz proxy-host-2_error.log.1.gz proxy-host-33_error.log.7.gz proxy-host-36_error.log.1.gz proxy-host-4_error.log.10.gz proxy-host-6_error.log.7.gz proxy_host-13.log.8.gz proxy_host-2.log.13.gz proxy_host-3.log.10.gz proxy_host-5.log
proxy-host-13_error.log.8.gz proxy-host-20_access.log.4.gz proxy-host-27_access.log.2.gz proxy-host-2_error.log.10.gz proxy-host-33_error.log.8.gz proxy-host-36_error.log.2.gz proxy-host-4_error.log.2.gz proxy-host-6_error.log.8.gz proxy_host-13.log.9.gz proxy_host-2.log.14.gz proxy_host-3.log.11.gz proxy_host-5.log.1.gz
proxy-host-13_error.log.9.gz proxy-host-20_error.log proxy-host-27_access.log.3.gz proxy-host-2_error.log.2.gz proxy-host-33_error.log.9.gz proxy-host-37_access.log proxy-host-4_error.log.3.gz proxy-host-7_access.log proxy_host-15.log proxy_host-2.log.2.gz proxy_host-3.log.12.gz proxy_host-5.log.10.gz
proxy-host-18_access.log proxy-host-20_error.log.1.gz proxy-host-27_access.log.4.gz proxy-host-2_error.log.3.gz proxy-host-34_access.log proxy-host-37_access.log.1.gz proxy-host-4_error.log.4.gz proxy-host-7_access.log.1.gz proxy_host-18.log proxy_host-2.log.3.gz proxy_host-3.log.13.gz proxy_host-5.log.11.gz
proxy-host-18_access.log.1.gz proxy-host-20_error.log.10.gz proxy-host-27_error.log proxy-host-2_error.log.4.gz proxy-host-34_access.log.1.gz proxy-host-37_error.log proxy-host-4_error.log.5.gz proxy-host-7_access.log.2.gz proxy_host-18.log.1.gz proxy_host-2.log.4.gz proxy_host-3.log.14.gz proxy_host-5.log.12.gz
proxy-host-18_access.log.2.gz proxy-host-20_error.log.2.gz proxy-host-27_error.log.1.gz proxy-host-2_error.log.5.gz proxy-host-34_access.log.2.gz proxy-host-37_error.log.1.gz proxy-host-4_error.log.6.gz proxy-host-7_access.log.3.gz proxy_host-18.log.10.gz proxy_host-2.log.5.gz proxy_host-3.log.2.gz proxy_host-5.log.13.gz
proxy-host-18_access.log.3.gz proxy-host-20_error.log.3.gz proxy-host-27_error.log.10.gz proxy-host-2_error.log.6.gz proxy-host-34_access.log.3.gz proxy-host-38_access.log proxy-host-4_error.log.7.gz proxy-host-7_access.log.4.gz proxy_host-18.log.11.gz proxy_host-2.log.6.gz proxy_host-3.log.3.gz proxy_host-5.log.14.gz
proxy-host-18_access.log.4.gz proxy-host-20_error.log.4.gz proxy-host-27_error.log.2.gz proxy-host-2_error.log.7.gz proxy-host-34_access.log.4.gz proxy-host-38_error.log proxy-host-4_error.log.8.gz proxy-host-7_error.log proxy_host-18.log.12.gz proxy_host-2.log.7.gz proxy_host-3.log.4.gz proxy_host-5.log.2.gz
proxy-host-18_error.log proxy-host-20_error.log.5.gz proxy-host-27_error.log.3.gz proxy-host-2_error.log.8.gz proxy-host-34_error.log proxy-host-39_access.log proxy-host-4_error.log.9.gz proxy-host-7_error.log.1.gz proxy_host-18.log.13.gz proxy_host-2.log.8.gz proxy_host-3.log.5.gz proxy_host-5.log.3.gz
proxy-host-18_error.log.1.gz proxy-host-20_error.log.6.gz proxy-host-27_error.log.4.gz proxy-host-2_error.log.9.gz proxy-host-34_error.log-2021100603.backup proxy-host-39_access.log.1.gz proxy-host-5_access.log proxy-host-7_error.log.2.gz proxy_host-18.log.14.gz proxy_host-2.log.9.gz proxy_host-3.log.6.gz proxy_host-5.log.4.gz
proxy-host-18_error.log.10.gz proxy-host-20_error.log.7.gz proxy-host-27_error.log.5.gz proxy-host-32_access.log proxy-host-34_error.log-2021101503.backup proxy-host-39_error.log proxy-host-5_access.log.1.gz proxy-host-9_access.log proxy_host-18.log.2.gz proxy_host-20.log proxy_host-3.log.7.gz proxy_host-5.log.5.gz
proxy-host-18_error.log.2.gz proxy-host-20_error.log.8.gz proxy-host-27_error.log.6.gz proxy-host-32_access.log.1.gz proxy-host-34_error.log-2021102223.backup proxy-host-39_error.log.1.gz proxy-host-5_access.log.2.gz proxy-host-9_access.log.1.gz proxy_host-18.log.3.gz proxy_host-20.log.1.gz proxy_host-3.log.8.gz proxy_host-5.log.6.gz
proxy-host-18_error.log.3.gz proxy-host-20_error.log.9.gz proxy-host-27_error.log.7.gz proxy-host-32_access.log.2.gz proxy-host-34_error.log.1.gz proxy-host-3_access.log proxy-host-5_access.log.3.gz proxy-host-9_access.log.2.gz proxy_host-18.log.4.gz proxy_host-20.log.10.gz proxy_host-3.log.9.gz proxy_host-5.log.7.gz
proxy-host-18_error.log.4.gz proxy-host-24_access.log proxy-host-27_error.log.8.gz proxy-host-32_access.log.3.gz proxy-host-34_error.log.2.gz proxy-host-3_access.log.1.gz proxy-host-5_access.log.4.gz proxy-host-9_access.log.3.gz proxy_host-18.log.5.gz proxy_host-20.log.11.gz proxy_host-30.log proxy_host-5.log.8.gz
proxy-host-18_error.log.5.gz proxy-host-24_access.log.1.gz proxy-host-27_error.log.9.gz proxy-host-32_access.log.4.gz proxy-host-34_error.log.3.gz proxy-host-3_access.log.2.gz proxy-host-5_error.log proxy-host-9_access.log.4.gz proxy_host-18.log.6.gz proxy_host-20.log.12.gz proxy_host-31.log proxy_host-5.log.9.gz
proxy-host-18_error.log.6.gz proxy-host-24_access.log.2.gz proxy-host-28_access.log proxy-host-32_error.log proxy-host-34_error.log.4.gz proxy-host-3_access.log.3.gz proxy-host-5_error.log.1.gz proxy-host-9_error.log proxy_host-18.log.7.gz proxy_host-20.log.13.gz proxy_host-32.log proxy_host-6.log
proxy-host-18_error.log.7.gz proxy-host-24_access.log.3.gz proxy-host-28_access.log.1.gz proxy-host-32_error.log.1.gz proxy-host-34_error.log.5.gz proxy-host-3_access.log.4.gz proxy-host-5_error.log.10.gz proxy-host-9_error.log.1.gz proxy_host-18.log.8.gz proxy_host-20.log.14.gz proxy_host-32.log.1.gz proxy_host-6.log.1.gz
proxy-host-18_error.log.8.gz proxy-host-24_access.log.4.gz proxy-host-28_access.log.2.gz proxy-host-32_error.log.10.gz proxy-host-34_error.log.6.gz proxy-host-3_error.log proxy-host-5_error.log.2.gz proxy-host-9_error.log.2.gz proxy_host-18.log.9.gz proxy_host-20.log.2.gz proxy_host-32.log.2.gz proxy_host-7.log
proxy-host-18_error.log.9.gz proxy-host-24_error.log proxy-host-28_access.log.3.gz proxy-host-32_error.log.2.gz proxy-host-34_error.log.7.gz proxy-host-3_error.log-2022030403.backup proxy-host-5_error.log.3.gz proxy-host-9_error.log.3.gz proxy_host-19.log proxy_host-20.log.3.gz proxy_host-32.log.3.gz proxy_host-9.log
proxy-host-19_access.log proxy-host-24_error.log.1.gz proxy-host-28_access.log.4.gz proxy-host-32_error.log.3.gz proxy-host-34_error.log.8.gz proxy-host-3_error.log.1.gz proxy-host-5_error.log.4.gz proxy-host-9_error.log.4.gz proxy_host-19.log.1.gz proxy_host-20.log.4.gz proxy_host-33.log proxy_host-9.log.1.gz
proxy-host-19_access.log.1.gz proxy-host-24_error.log.2.gz proxy-host-28_error.log proxy-host-32_error.log.4.gz proxy-host-35_access.log proxy-host-3_error.log.10.gz proxy-host-5_error.log.5.gz proxy-host-9_error.log.5.gz proxy_host-19.log.10.gz proxy_host-20.log.5.gz proxy_host-33.log.1.gz proxy_host-9.log.2.gz
proxy-host-19_access.log.2.gz proxy-host-26_access.log proxy-host-28_error.log.1.gz proxy-host-32_error.log.5.gz proxy-host-35_access.log.1.gz proxy-host-3_error.log.2.gz proxy-host-5_error.log.6.gz proxy-host-9_error.log.6.gz proxy_host-19.log.11.gz proxy_host-20.log.6.gz proxy_host-34.log
proxy-host-19_access.log.3.gz proxy-host-26_access.log.1.gz proxy-host-28_error.log.10.gz proxy-host-32_error.log.6.gz proxy-host-35_access.log.2.gz proxy-host-3_error.log.3.gz proxy-host-5_error.log.7.gz proxy-host-9_error.log.7.gz proxy_host-19.log.12.gz proxy_host-20.log.7.gz proxy_host-35.log
proxy-host-19_access.log.4.gz proxy-host-26_access.log.2.gz proxy-host-28_error.log.2.gz proxy-host-32_error.log.7.gz proxy-host-35_access.log.3.gz proxy-host-3_error.log.4.gz proxy-host-5_error.log.8.gz proxy_host-1.log proxy_host-19.log.13.gz proxy_host-20.log.8.gz proxy_host-4.log
proxy-host-19_error.log proxy-host-26_access.log.3.gz proxy-host-28_error.log.3.gz proxy-host-32_error.log.8.gz proxy-host-35_access.log.4.gz proxy-host-3_error.log.5.gz proxy-host-5_error.log.9.gz proxy_host-1.log.1.gz proxy_host-19.log.14.gz proxy_host-20.log.9.gz proxy_host-4.log.1.gz
I should mention that I use this image -> https://github.com/jlesage/docker-nginx-proxy-manager
@masterwishx I pushed a new version today v1.0.9 which contains the latest GoAccess v1.6 and a lot of code refactoring. Feel free to give it a try.
Sorry I haven't been able to solve your problem directly.
Thanks, I have auto-update in Unraid, so I checked and it also has this issue, but I really can't understand where the problem is, because
it starts OK and after some time starts adding requests for today and going back several days...
The author of GoAccess told me to try it without Docker to check, but I don't really know how to do that...
I will try to enable debug and check it...
What can I check with debug?
Thanks a lot for your work...
I'm also facing the same issue.
Did you fix the problem?
Sadly still having the same issue
Do you still have this problem?
@masterwishx can you provide a screenshot of your unraid container setup? Also, I assume you only have one nginx proxy manager running and feeding the logs?
I will post the screenshot. Yes, I have only one, but I moved the logs to /mnt/user/Logs because of the old GregYankovoy container... I still have old logs in the NPM folder, but I can see from your container's log that only the needed log files are loaded...
@masterwishx also copy and paste or screenshot the new log information displayed. Maybe that would help.
I also forgot to mention that I still have the following lines in some of the proxy hosts in NPM:
#NPM Proxy Logs to one file for goaccess monitoring
access_log /config/logs/proxy_logs.log proxy;
error_log /config/log/proxy_errors.log warn;
That was added for the old Greg container's logs, but these two log files are not used by your container...
I think I posted this info before, though...
@masterwishx yeah, it shouldn't read proxy_logs.log or proxy_errors.log, and yes, from the output it looks like those files are not being read. I think we've talked about this in the thread but I'm not sure if you actually did it: can you disable reading of the archived files for a few days and see if you can reproduce the error?
Yes, we talked before and I disabled the archived logs, but same issue... What can I check with debug enabled?
I am also having the same issue. The container has generated over 1 billion requests from just over 10K lines of log entries for one of my proxy hosts. Any updates on the issue?
I think in about a week I'll provide the ability to load your own configuration file. This should help you solve your issue by loading logs individually. I can't really do anything from my end to troubleshoot because I'm just setting up the config automatically. There isn't anywhere I can really debug.
There isn't anywhere I can really debug.
Thanks anyway, I hope it will fix the issue, but I can't understand where the problem is -- for now, looking at the loaded logs, I can't see any problem in them...
The author of GoAccess said the best way to find the problem is to run GoAccess without Docker and check whether the problem still exists, but I don't really know how to run it in Unraid without Docker.
And even if that turns out OK, I still want to run it in Docker -- what's cool about your container is that everything runs and catches all the logs automatically...
If I have DEBUG enabled, what can I check on my side?
@masterwishx the debug flag currently just provides feedback in the logs and also exposes the GoAccess config as an HTML file for you to view. Other than that, there isn't much it produces in a debug sense.
And yeah, there isn't a real way to run GoAccess via Docker, since it's a command-line tool.
Also have this issue on Docker running Nginx Proxy Manager. The GoAccess hit counter will not stop!
@masterwishx I'm not sure if I can read the lines in each log file and figure out if maybe a file is growing too large, or maybe you'll be able to tell on your end which file might be causing the problem. I'll look into it.
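On your end, something like this against the host log folder would at least show which access logs are the largest (the path is from your setup; the command itself is generic):
wc -l /mnt/user/Logs/NginxProxyManager/proxy-host-*_access.log | sort -n | tail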
I narrowed it down to it NOT being the data. The issue only occurs when data is being actively written to proxy_host-*.log by Nginx Proxy Manager and read by GoAccess at the same time. Somehow this is causing a recursive loop within GoAccess.
@tonyle8 were you able to confirm this via goaccess command line?
Nah, I simply STOPPED my Docker container running Nginx Proxy Manager after populating data in the log files and restarted GoAccess, so the files are not accessed simultaneously.
OK, good news! I was able to resolve it: it turns out I was correct in regard to the file access. This issue only happens when you are using a bind mount (connecting directly to a host drive). Switching to named volumes fixes the problem and works perfectly.
@tonyle8 can you provide an example of what you changed? Also, are you using the docker container directly (docker-compose), portainer, or through the unraid template?
See below for the Docker Compose I use in Portainer. My Nginx app's volume is set to the same "data" volume where it saves the logs.
goaccess:
  image: xavierh/goaccess-for-nginxproxymanager:latest
  container_name: goaccess
  restart: always
  environment:
    - PUID=0
    - PGID=0
    - TZ=America/New_York
    - SKIP_ARCHIVED_LOGS=True #optional
    - DEBUG=False #optional
    - BASIC_AUTH=False #optional
    - BASIC_AUTH_USERNAME=user #optional
    - BASIC_AUTH_PASSWORD=pass #optional
    - EXCLUDE_IPS=127.0.0.1 #optional - comma delimited
  ports:
    - '8206:7880'
  volumes:
    #- /C/Docker/NgixProxyManager/data:/opt/log #Bind mount does not work
    - data:/opt/log #Named Volume
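One note to add (an assumption about standard Compose behavior, not anything specific to this image): a named volume referenced like that also has to be declared at the top level of the compose file, shared with the NPM service, roughly like:
volumes:
  data:
    external: true # or drop "external" and let Compose create/manage the volume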
@tonyle8 - I'll add this to the README file as an option to try if you're getting this issue. Thanks!
I'm not sure if I can read the lines in each log file and figure out if maybe a file is growing too large, or maybe you'll be able to tell on your end which file might be causing the problem. I'll look into it.
After stopping the container for a week, I started it again and see the same issue: around 20,000 every second...
I can see hits from today and it goes back -- for now, 5 days back.
All hits are for the Authelia and Nextcloud hosts, I think, judging by the picture...
@masterwishx
Try my fix at the top and see if that works for you.
@masterwishx looks like you are using a bind mount with Unraid. Try switching to a named volume, if it supports it?
Try switching to named volume if it supports it
OK Thanks
I'll add this to the README file as an option to try if you're getting this issue
Can we use it in Unraid?
@masterwishx so, working off of @tonyle8's explanation/assumption, can you try changing the access mode for the host path? Try changing it to Read Only - Slave. If that doesn't work, then try Read Only - Shared.
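For reference (an assumption on my part about how Unraid maps these modes), in plain Docker terms those would be bind-mount propagation options on the volume flag, something like:
-v /mnt/user/Logs/NginxProxyManager:/opt/log:ro,rslave
-v /mnt/user/Logs/NginxProxyManager:/opt/log:ro,rshared   # shared propagation requires the host mount to be shared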
No dice with changing the access modes on the containers:
NPM Official: Read/Write - Slave
Goaccess for NPM: Read Only - Slave
Also doesn't work for shared mode. :(
Read Only - Slave. If that doesn't work then try Read Only - Shared.
I have Read Only now; I will try these...
If I remember correctly, Read Only - Slave is for Unassigned Devices, but I have the logs in /mnt/user/Logs/NginxProxyManager/.
I will try anyway, though...
Also doesn't work for shared mode. :(
Are you also still having this issue in Unraid?
Yes, still having this issue in Unraid. I'm going to try and create a volume in docker/unraid and have NPM write the logs to the volume and GoAccess read the logs from it.
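Roughly what I have in mind, expressed as Docker CLI commands (just a sketch -- the /data/logs path is an assumption based on the official NPM image, and I'd still have to translate this into Unraid's UI):
docker volume create npm-logs
# NPM writes its logs into the named volume instead of a host path
docker run -d --name npm -p 80:80 -p 81:81 -p 443:443 -v npm-logs:/data/logs jc21/nginx-proxy-manager:latest
# GoAccess reads the same volume read-only
docker run -d --name goaccess -p 7880:7880 -v npm-logs:/opt/log:ro xavierh/goaccess-for-nginxproxymanager:latest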
Yes, still having this issue in Unraid
Please write here if it works or not. Thanks
I wasn't able to get it to work -- combination of running low on time and there not being a way to do it via Unraid's UI easily.
I pointed NPM to a Docker volume I created but it failed to start because I had two mount points at the same location. Then the image uninstalled itself and I had to reinstall. Nasty bug.
I wasn't able to get it to work
I see, thanks for sharing this attempt.
@xavier-hernandez
Maybe the problem is because the container doesn't have a bind-mount config and uses only a named-volume mount, if the problem can be fixed by a named volume like tonyle8 said?!
@masterwishx I don't believe that is how unraid works. Maybe I can create a temporary build where the logs will be copied into the container itself and see if that fixes the issue. We can try that at least.
Sure, if you can, we can do it. Then the container would need to copy the logs on some cron schedule? Or how?
I also wanted to check the container again yesterday; it's stopped for now.
The problem starts after some time (hours).
I also wanted to debug with the browser dev tools, but I don't have a lot of knowledge there. Can you point me to where to look, maybe? There were also some errors... I can post them here if it will help.
@masterwishx - do you leave the website open for hours or do you close it and come back?
Yes, some sort of cron job or sleep timer. This would be just for you so maybe just a sleep timer. If it works then maybe I can make it a cron job.
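The idea would be something along these lines inside the container -- just a sketch of the concept, where /host-logs stands for a hypothetical read-only mount of the NPM log folder:
while true; do
  # copy the current access logs into the container's own folder so GoAccess never reads the live files
  cp /host-logs/proxy-host-*_access.log /opt/log/
  # repeat every 5 minutes
  sleep 300
done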
Yes, some sort of cron job
Thanks, I can help with whatever is needed, and do whatever I can to find the issue...
do you leave the website open
Tried both ways; the end result is the same :(
Last time I left it open because of the Chrome dev tools (I wanted to check if I could find something)...
I think the problem is solved?!
Tested for two days; before that it wasn't used for some time. I will check it more...
I mean no more increasing requests -- can someone else confirm?
Yes, I can confirm, it seems to run stable now.
First impressions are good -- it appears to be working! I'll leave it running for a few days.
What did you change this time around?
@Mission-Contro1 if this question is for me, a lot has changed. GoAccess was upgraded twice I believe and the way the processes were running has also changed. I was never able to reproduce this issue on my end so I'm not totally sure what could have fixed it if it is stable now.
What did you change this time around?
If you read above, for some people a named volume helped, even before this was fixed by the author...
It's also very interesting how it was fixed, but more important is that it's working fine now -- tested for 3 days and checking more...
On Unraid I have another problem, somehow on autostart (#86),
but if I start it manually afterwards it's OK.
Marking this ticket as closed since 3 users have confirmed this is no longer an issue. Please re-open or open a new issue if this reoccurs. Thanks.