Monitoring Snowflake proxy compiled from source
feloxxx opened this issue · 4 comments
Hello, and thank you very much for this interesting project. I use the Snowflake proxy as an installed version, so no Docker container. I would like to use the monitoring script. Unfortunately, I don't understand how to adapt the script tor-snowflake.sh from Docker usage to the version with journalctl. I write the log file to /var/log/snowflake.log. Could you help me here? Thank you so much!
Sure, let's see what we can figure out.
I don't understand how to adapt the script tor-snowflake.sh from Docker usage to the version with journalctl
The main reason that journalctl was suggested is that (like docker) it allows you to pass a --since argument to constrain the amount of logs fed into the plugin. There isn't a good way to do this if logs aren't going via the journal.
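For reference, the journal-based variant boils down to something like this (a sketch; snowflake-proxy is an assumed unit/container name, so substitute whatever yours is called):

# Pull only the last hour of logs out of the journal
journalctl -u snowflake-proxy --since "-1h" --no-pager
# The docker equivalent, for a container named snowflake-proxy
docker logs --since 1h snowflake-proxy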
But we can approximate it.
I've hit a weird bug with the version of date that I'm running, though.
I can add an hour:
ben@optimus:~/tmp$ date --date="2024-05-13 08:05 +1 hour"
Mon 13 May 09:05:00 BST 2024
But, if I try and subtract:
ben@optimus:~/tmp$ date --date="2024-05-13 08:05 -1 hour"
Mon 13 May 11:05:00 BST 2024
ben@optimus:~/tmp$ date --date="2024-05-13 08:05 -2 hour"
Mon 13 May 12:05:00 BST 2024
Will have to look into that. (At a guess, date is reading the -1 as a UTC-1 timezone offset and then treating hour as "add an hour", rather than subtracting one; that would explain why the results jump forwards.)
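As an aside, GNU date's relative "ago" syntax should avoid the ambiguous parse, if you ever need it:

# "1 hour ago" subtracts cleanly where "-1 hour" doesn't
date --date="2024-05-13 08:05 1 hour ago"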
We'll use epoch time for the adjustments instead, then:
filter_string=""
now=`date +'%s'`
for i in {0..4}
do
# Calculate a date string
new_d=$(( $now - ($i * 3600) ))
hr=`date -d@$new_d +'%Y/%m/%d %H'`
# Append to the filter
filter_string="${filter_string}${hr}|"
done
# Remove the trailing pipe
filter_string=${filter_string:0:-1}
# Grep the logfile
egrep -e "($filter_string)" /var/log/snowflake.log
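For reference, once the loop's done, filter_string expands to an alternation of the current hour and the four before it, something like

(2024/05/13 12|2024/05/13 11|2024/05/13 10|2024/05/13 09|2024/05/13 08)

which the grep then matches against the %Y/%m/%d %H timestamps in the log lines.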
I've patched that into the file below. There's a variable at the top, so you can override the logfile location by exporting the env var LOGFILE (though I've defaulted it to the path you mentioned).
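The override is just the usual shell default-value pattern; the line at the top is something like this (a sketch, the exact line in the attached file may differ slightly):

# Use $LOGFILE from the environment if set, otherwise fall back to your path
LOGFILE=${LOGFILE:-"/var/log/snowflake.log"}

So you can run it as, say, LOGFILE=/some/other/snowflake.log ./tor-snowflake_grep.sh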
You'll need to unzip the file first (GitHub doesn't support attaching plain bash scripts to issues):
gunzip tor-snowflake_grep.sh.gz
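Once it's unpacked, it gets wired into Telegraf via the exec input in the usual way, along these lines (a sketch; the path, interval, and timeout are assumptions to adjust for your install):

[[inputs.exec]]
  ## Run the collector hourly and parse its InfluxDB line-protocol output
  commands = ["/opt/tor-snowflake_grep/tor-snowflake_grep.sh"]
  interval = "1h"
  timeout = "30s"
  data_format = "influx"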
Hi, first of all, thank you very much for the support and the very detailed information. Really great!
The shell script seems to read the data cleanly:
/opt/tor-snowflake_grep/tor-snowflake_grep.sh
snowflake,timeperiod_s=3600 conns=24i,sent=208872000i,recv=23765000i 1715652602000000000
snowflake,timeperiod_s=3600 conns=28i,sent=272247000i,recv=27713000i 1715656202000000000
snowflake,timeperiod_s=3600 conns=38i,sent=4577402000i,recv=239738000i 1715659802000000000
Telegraf also transmits data to InfluxDB, but the data looks strange to me.
Do you have an idea what I could check? Can I send you more information? Thanks, and best regards!
Hmmm, the data (or some of it) must've made it in, given the schema is showing up, which suggests the issue is probably time-range related.
What time-range did you have selected in the top right? The data lags behind by about an hour, so if it was at a shorter period that might explain it.
Either way, do you want to hit the script editor toggle and try running this:
from(bucket: "snowflake_sync")
|> range(start: -2d)
|> filter(fn: (r) => r._measurement == "snowflake" )
and see whether it returns anything.