Refactor required
There is an awful lot of duplicated code in this repo, and a refactor would be healthy. For example, update_api and get_bridges are duplicated in multiple files.
Do you have a proposed strategy? E.g., would it be best to return to a single sensors.py file, in which all the current binary_sensors, remotes and device_trackers are created? Might that be the easiest way to prevent duplicated code?
Hi @robmarkcole,
I'm doing the refactor, extracting a single data_manager for remotes and binary_sensors, and also sharing a 'base entity' for all. I will cc you when a PR is ready.
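Roughly, the shape I have in mind is something like the sketch below (the class and attribute names here are just placeholders for illustration, not the final API):

```python
from homeassistant.helpers.entity import Entity


class HueSensorData:
    """Single data manager: polls the bridge once and caches the parsed states."""

    def __init__(self, hass, async_update_method):
        self.hass = hass
        self._async_update_method = async_update_method
        self.data = {}  # parsed sensor/remote states, keyed by unique id

    async def async_update_from_bridge(self, now=None):
        """One shared pull from the bridge; every entity reads from self.data."""
        self.data = await self._async_update_method()


class HueSensorBaseEntity(Entity):
    """Base entity shared by the binary_sensors, remotes and device_trackers."""

    def __init__(self, data_manager, hue_id):
        self._data = data_manager
        self._hue_id = hue_id

    @property
    def should_poll(self):
        # The data manager drives the updates; entities just read the cache.
        return False

    @property
    def state(self):
        return self._data.data.get(self._hue_id, {}).get("state")
```

Each platform would then only subclass the base entity and expose its own device_class and attributes, so there is a single bridge call shared by everything.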
But I just realized something about the scan_interval wars, and, if I'm not missing anything, a scan_interval < 1 s is impossible with the current implementation relying on async_track_time_interval:
- That helper adds a listener for EVENT_TIME_CHANGED, which ticks at 1 Hz at most.
- With the old scan_interval of 0.1 seconds, if one update is called at t=0, the next one is scheduled for t=0.1, but it won't be called until t=1, and so on. So, in reality, the delayed response for the sensors and switches is always in the [0, 1] second range. The same goes for the new value of 0.5 seconds.
- To poll the bridge at a higher frequency would require a custom implementation of track_time_interval (one using asyncio.sleep instead of relying on time events, see the sketch after this list), but I know for sure that approach is discouraged if we want to merge with HA core eventually.
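For reference, the discouraged asyncio.sleep approach would look roughly like this (only a sketch to show why it bypasses HA's time machinery; poll_bridge_forever is a made-up name):

```python
import asyncio


async def poll_bridge_forever(async_update, interval=0.5):
    """Sub-second poller using asyncio.sleep instead of EVENT_TIME_CHANGED.

    A free-running loop like this ignores the 1 Hz time events and HA's
    scheduling helpers, which is exactly why it is discouraged for HA core.
    """
    while True:
        await async_update()
        await asyncio.sleep(interval)
```

It would give a true sub-second pull rate, but with a pattern that is discouraged for core integrations, so I'd rather not go that way.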
So my question is: am I missing something, or has this CC worked all this time with a real update rate of 1 Hz at most?
Hi @azogue, thanks for looking into the details, and I appreciate the PR!
At various points people have been adamant that the 0.1 rate was making an important difference vs the official integration, and it's possible that over time the underlying HA behaviour has changed. RE track_time_interval: I would like to see the remotes merged into HA at some point, so let's keep that goal in mind with our approach.
Cheers
At various points people have been adamant that the 0.1 rate was making an important difference vs the official integration
Yeah, I've read most of the issues :)
And reading those, I've noticed a pattern for the people with continuous errors (for whatever reason): they had one error each second :)
I could imagine some benefit of using a timedelta < 1s, though:
- The HA clock ticks each second, but when the machine is stressed this can happen with some delay. For each tick, HA core schedules the next one according to the current delay, and it only warns the user when the delay is > 2 sec, I think.
- In that context, using a strict rate of 1 sec could miss some of the ticks, and then you have a delay in the [0, 2] second range, which can be clearly noticeable sometimes.
- Using the 0.1 value ensures that an update is called on every tick, but it's a brute-force approach :). With 0.5 sec it's almost the same, but it is true that you could miss some ticks in stressful situations.
Ideally, we could make a special version of async_track_time_interval that operates on rounded seconds, instead of adding the timedelta to schedule the next point. That way we can program the update to be called on each HA tick, ensuring the maximum update rate (but still up to a second slower than a push API).
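Something along these lines, for example (only a sketch built directly on the EVENT_TIME_CHANGED listener; the name async_track_time_interval_aligned and the details are made up and would need polish before any core PR):

```python
from datetime import timedelta

from homeassistant.const import EVENT_TIME_CHANGED
from homeassistant.core import callback
from homeassistant.util import dt as dt_util


def async_track_time_interval_aligned(hass, action, interval: timedelta):
    """Like async_track_time_interval, but aligned to the 1 Hz HA clock ticks.

    Instead of scheduling `next = now + interval` (which can drift past ticks),
    it fires the action on the first EVENT_TIME_CHANGED tick after the interval
    has elapsed, so no tick is wasted and the max rate is exactly 1 Hz.
    """
    next_call = dt_util.utcnow()

    @callback
    def tick_listener(event):
        nonlocal next_call
        now = event.data.get("now", dt_util.utcnow())
        if now >= next_call:
            next_call = now + interval
            hass.async_run_job(action, now)

    # Returns the unsubscribe callable from the bus listener.
    return hass.bus.async_listen(EVENT_TIME_CHANGED, tick_listener)
```

With that, a 0.5 s interval simply means "fire on every tick", and a 2 s interval fires roughly on every other tick, without the drift of the `now + timedelta` scheduling.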
I would like to see the remotes merged into HA at some point so lets keep that goal in mind with our approach.
Me too. I think a 1 Hz update rate, with a shared call for all sensors and remotes, could be a good compromise to agree on with HA Core, especially if we make the remotes and the high 1 Hz rate optional, and document it accordingly.
RE async_track_time_interval: it could be an idea to implement that in a branch, and then, if it is successful, create a PR on HA to add the function. This would avoid adding a dependency we would potentially need to remove later, and also give us the best shot at having the function accepted into HA.