Hammering MQTT servers?
Closed this issue · 6 comments
Just realized this as I was looking into my Home Assistant MQTT logs.
I have the script running every 5 minutes via crontab, and polling is disabled. It seems the Python process never exits and hammers the MQTT server (IP and username obfuscated):
2024-01-21 22:04:14: New connection from IP_ADDRESS:42671 on port 1883.
2024-01-21 22:04:14: Client renogy-bt already connected, closing old connection.
2024-01-21 22:04:14: New client connected from IP_ADDRESS:42671 as renogy-bt (p2, c1, k60, u'USERNAME').
2024-01-21 22:04:14: Client renogy-bt already connected, closing old connection.
2024-01-21 22:04:14: New client connected from IP_ADDRESS:60953 as renogy-bt (p2, c1, k60, u'USERNAME').
2024-01-21 22:04:15: New connection from IP_ADDRESS:55569 on port 1883.
2024-01-21 22:04:15: Client renogy-bt already connected, closing old connection.
2024-01-21 22:04:15: New client connected from IP_ADDRESS:55569 as renogy-bt (p2, c1, k60, u'USERNAME').
2024-01-21 22:04:15: New connection from IP_ADDRESS:35631 on port 1883.
2024-01-21 22:04:15: Client renogy-bt already connected, closing old connection.
2024-01-21 22:04:15: New client connected from IP_ADDRESS:35631 as renogy-bt (p2, c1, k60, u'USERNAME').
2024-01-21 22:04:15: New connection from IP_ADDRESS:50761 on port 1883.
2024-01-21 22:04:15: New connection from IP_ADDRESS:33767 on port 1883.
2024-01-21 22:04:15: Client renogy-bt already connected, closing old connection.
2024-01-21 22:04:15: New client connected from IP_ADDRESS:33767 as renogy-bt (p2, c1, k60, u'USERNAME').
2024-01-21 22:04:15: Client renogy-bt already connected, closing old connection.
2024-01-21 22:04:15: New client connected from IP_ADDRESS:50761 as renogy-bt (p2, c1, k60, u'USERNAME').
2024-01-21 22:04:15: New connection from IP_ADDRESS:51817 on port 1883.
2024-01-21 22:04:15: Client renogy-bt already connected, closing old connection.
2024-01-21 22:04:15: New client connected from IP_ADDRESS:51817 as renogy-bt (p2, c1, k60, u'USERNAME').
2024-01-21 22:04:16: New connection from IP_ADDRESS:54751 on port 1883.
2024-01-21 22:04:16: Client renogy-bt already connected, closing old connection.
2024-01-21 22:04:16: New client connected from IP_ADDRESS:54751 as renogy-bt (p2, c1, k60, u'USERNAME').
2024-01-21 22:04:16: New connection from IP_ADDRESS:45733 on port 1883.
2024-01-21 22:04:16: Client renogy-bt already connected, closing old connection.
2024-01-21 22:04:16: New client connected from IP_ADDRESS:45733 as renogy-bt (p2, c1, k60, u'USERNAME').
2024-01-21 22:04:16: New connection from IP_ADDRESS:39235 on port 1883.
2024-01-21 22:04:16: New connection from IP_ADDRESS:34031 on port 1883.
2024-01-21 22:04:16: Client renogy-bt already connected, closing old connection.
2024-01-21 22:04:16: New client connected from IP_ADDRESS:39235 as renogy-bt (p2, c1, k60, u'USERNAME').
2024-01-21 22:04:16: Client renogy-bt already connected, closing old connection.
2024-01-21 22:04:16: New client connected from IP_ADDRESS:34031 as renogy-bt (p2, c1, k60, u'USERNAME').
2024-01-21 22:04:16: New connection from IP_ADDRESS:52739 on port 1883.
2024-01-21 22:04:16: Client renogy-bt already connected, closing old connection.
2024-01-21 22:04:16: New client connected from IP_ADDRESS:52739 as renogy-bt (p2, c1, k60, u'USERNAME').
If I reboot the Pi, there is a 5-minute delay between connections at first, but eventually it starts hammering the server again.
Is anyone else having this issue?
Maybe you can try to capture the renogy-bt logs too. How many Python processes do you see with the 'ps' command?
It would seem Python does not exit after running the job. I now have 4 python executables running, while I have only 2 crontab entries for renogybt (2 different config files, running every 5 and 6 minutes). This is on a Pi Zero whose only purpose is to run this.
Where would the renogy-bt logs be?
So do you have two Renogy devices, or just one? My bet is on the crossover that happens at the 30th minute, which ends up in callback hell.
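The crossover is easy to verify: a 5-minute and a 6-minute cron schedule both fire in the same minute whenever the minute is a common multiple of 5 and 6, i.e. at minutes 0 and 30 of every hour. A quick sketch of the arithmetic (plain Python, purely illustrative):

```python
# Minutes of the hour (0-59) at which each cron schedule fires.
every_5 = {m for m in range(60) if m % 5 == 0}  # */5 * * * *
every_6 = {m for m in range(60) if m % 6 == 0}  # */6 * * * *

# Both jobs start in the same minute at the common multiples of 5 and 6,
# so two instances of the script run (and connect) concurrently.
overlap = sorted(every_5 & every_6)
print(overlap)  # → [0, 30]
```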
Bravo! I'm very confident that's the problem, because it makes sense! Thanks!
This makes sense and must be the reason! I will change the cycle so that they never overlap and report back!
Did that resolve the issue?
Yes, you were absolutely correct. I changed my second cron job to 4-59/5 * * * * and it no longer conflicts!