home-assistant/core

KNX entities switching constantly from available to unavailable

arcano81 opened this issue · 80 comments

The problem

After upgrading to the latest Home Assistant version (2021.11.1), the KNX entities keep switching from available to unavailable.
(screenshot attached)

What version of Home Assistant Core has the issue?

2021.11.1

What was the last working version of Home Assistant Core?

2021.10.9

What type of installation are you running?

Home Assistant OS

Integration causing the issue

KNX

Link to integration documentation on our website

https://www.home-assistant.io/integrations/knx/

Example YAML snippet

No response

Anything in the logs that might be useful for us?

2021-11-05 15:28:01 WARNING (MainThread) [xknx.log] Received DisconnectRequest from tunnelling sever.
2021-11-05 15:28:06 WARNING (MainThread) [xknx.log] Heartbeat to KNX bus failed. No active communication channel.

Additional information

No response

Hey there @Julius2342, @farmio, @marvin-w, mind taking a look at this issue as it has been labeled with an integration (knx) you are listed as a code owner for? Thanks!
(message by CodeOwnersMention)



Hi 👋!
Since this version, KNX entities actually reflect the connection state and turn "unavailable" when the connection to the bus is lost. This did not work before.

So I think the actual problem is that your connection is unstable. Please check your Home Assistant logs to confirm.

For general KNX integration troubleshooting, and to get even more detailed logs, see also https://www.home-assistant.io/integrations/knx/#troubleshooting--common-issues

Hey! :)

Could you please add more logs to this issue, as mentioned by @farmio? Otherwise it's quite hard to analyse the issue you are experiencing.

Thanks!

Here are some more logs to better illustrate the issue.
As you can see in the first two lines, something seems to be wrong with the connection to the KNX bus that didn't happen before the last HA update.
Thanks!

2021-11-06 01:56:15 WARNING (MainThread) [xknx.log] Received DisconnectRequest from tunnelling sever.
2021-11-06 01:56:17 WARNING (MainThread) [xknx.log] Heartbeat to KNX bus failed. No active communication channel.
2021-11-06 01:56:22 WARNING (MainThread) [xknx.log] Error: KNX bus did not respond in time (2.0 secs) to GroupValueRead request for: 4/2/1
2021-11-06 01:56:22 WARNING (MainThread) [xknx.log] Could not sync group address '4/2/1' (FinestraLavanderia - State)
2021-11-06 01:56:22 WARNING (MainThread) [xknx.log] Error: KNX bus did not respond in time (2.0 secs) to GroupValueRead request for: 4/2/10
2021-11-06 01:56:22 WARNING (MainThread) [xknx.log] Could not sync group address '4/2/10' (FinestraCameretta - State)
2021-11-06 01:56:23 WARNING (MainThread) [xknx.log] Received DisconnectRequest from tunnelling sever.
2021-11-06 01:56:29 WARNING (MainThread) [xknx.log] Error: KNX bus did not respond in time (2.0 secs) to GroupValueRead request for: 4/2/1
2021-11-06 01:56:29 WARNING (MainThread) [xknx.log] Could not sync group address '4/2/1' (FinestraLavanderia - State)
2021-11-06 01:56:29 WARNING (MainThread) [xknx.log] Error: KNX bus did not respond in time (2.0 secs) to GroupValueRead request for: 4/2/10
2021-11-06 01:56:29 WARNING (MainThread) [xknx.log] Could not sync group address '4/2/10' (FinestraCameretta - State)
2021-11-06 01:56:31 WARNING (MainThread) [xknx.log] Error: KNX bus did not respond in time (2.0 secs) to GroupValueRead request for: 4/2/11
2021-11-06 01:56:31 WARNING (MainThread) [xknx.log] Could not sync group address '4/2/11' (FinestraBagnoOspiti - State)
2021-11-06 01:56:31 WARNING (MainThread) [xknx.log] Error: KNX bus did not respond in time (2.0 secs) to GroupValueRead request for: 4/2/12
2021-11-06 01:56:31 WARNING (MainThread) [xknx.log] Could not sync group address '4/2/12' (FinestraCucina - State)
2021-11-06 01:56:33 WARNING (MainThread) [xknx.log] Error: KNX bus did not respond in time (2.0 secs) to GroupValueRead request for: 4/2/13
2021-11-06 01:56:33 WARNING (MainThread) [xknx.log] Could not sync group address '4/2/13' (FinestraSalaPranzo - State)
2021-11-06 01:56:33 WARNING (MainThread) [xknx.log] Error: KNX bus did not respond in time (2.0 secs) to GroupValueRead request for: 4/2/14
2021-11-06 01:56:33 WARNING (MainThread) [xknx.log] Could not sync group address '4/2/14' (PortaIngressoP1 - State)
2021-11-06 01:56:35 WARNING (MainThread) [xknx.log] Error: KNX bus did not respond in time (2.0 secs) to GroupValueRead request for: 4/2/15
2021-11-06 01:56:35 WARNING (MainThread) [xknx.log] Could not sync group address '4/2/15' (PortaIngressoLaterale - State)
2021-11-06 01:56:35 WARNING (MainThread) [xknx.log] Error: KNX bus did not respond in time (2.0 secs) to GroupValueRead request for: 4/2/16
2021-11-06 01:56:35 WARNING (MainThread) [xknx.log] Could not sync group address '4/2/16' (FinestraMatrimonialePT - State)
2021-11-06 01:56:37 WARNING (MainThread) [xknx.log] Error: KNX bus did not respond in time (2.0 secs) to GroupValueRead request for: 4/2/17
2021-11-06 01:56:37 WARNING (MainThread) [xknx.log] Could not sync group address '4/2/17' (FinestraSoggiornoPT - State)
2021-11-06 01:56:37 WARNING (MainThread) [xknx.log] Error: KNX bus did not respond in time (2.0 secs) to GroupValueRead request for: 4/2/18
2021-11-06 01:56:37 WARNING (MainThread) [xknx.log] Could not sync group address '4/2/18' (PortaIngressoPT - State)
2021-11-06 01:56:39 WARNING (MainThread) [xknx.log] Error: KNX bus did not respond in time (2.0 secs) to GroupValueRead request for: 4/2/19
2021-11-06 01:56:39 WARNING (MainThread) [xknx.log] Could not sync group address '4/2/19' (FinestraBagnoRosso - State)
2021-11-06 01:56:39 WARNING (MainThread) [xknx.log] Error: KNX bus did not respond in time (2.0 secs) to GroupValueRead request for: 4/2/2
2021-11-06 01:56:39 WARNING (MainThread) [xknx.log] Could not sync group address '4/2/2' (FinestraSalaHobby - State)
2021-11-06 01:56:41 WARNING (MainThread) [xknx.log] Error: KNX bus did not respond in time (2.0 secs) to GroupValueRead request for: 4/2/20
2021-11-06 01:56:41 WARNING (MainThread) [xknx.log] Could not sync group address '4/2/20' (FinestraCamerettaPT - State)
2021-11-06 01:56:41 WARNING (MainThread) [xknx.log] Error: KNX bus did not respond in time (2.0 secs) to GroupValueRead request for: 4/2/3
2021-11-06 01:56:41 WARNING (MainThread) [xknx.log] Could not sync group address '4/2/3' (PortaSalaHobby - State)
2021-11-06 01:56:43 WARNING (MainThread) [xknx.log] Error: KNX bus did not respond in time (2.0 secs) to GroupValueRead request for: 4/2/4
2021-11-06 01:56:43 WARNING (MainThread) [xknx.log] Could not sync group address '4/2/4' (FinestraSoggiorno - State)
2021-11-06 01:56:43 WARNING (MainThread) [xknx.log] Error: KNX bus did not respond in time (2.0 secs) to GroupValueRead request for: 4/2/5
2021-11-06 01:56:43 WARNING (MainThread) [xknx.log] Could not sync group address '4/2/5' (FinestraAlzante - State)
2021-11-06 01:56:45 WARNING (MainThread) [xknx.log] Error: KNX bus did not respond in time (2.0 secs) to GroupValueRead request for: 4/2/6
2021-11-06 01:56:45 WARNING (MainThread) [xknx.log] Could not sync group address '4/2/6' (FinestraStudio - State)
2021-11-06 01:56:45 WARNING (MainThread) [xknx.log] Error: KNX bus did not respond in time (2.0 secs) to GroupValueRead request for: 4/2/7
2021-11-06 01:56:45 WARNING (MainThread) [xknx.log] Could not sync group address '4/2/7' (FinestraBagnoGrigio - State)
2021-11-06 01:56:47 WARNING (MainThread) [xknx.log] Error: KNX bus did not respond in time (2.0 secs) to GroupValueRead request for: 4/2/8
2021-11-06 01:56:47 WARNING (MainThread) [xknx.log] Could not sync group address '4/2/8' (FinestraMatrimoniale - State)
2021-11-06 01:56:47 WARNING (MainThread) [xknx.log] Error: KNX bus did not respond in time (2.0 secs) to GroupValueRead request for: 4/2/9
2021-11-06 01:56:47 WARNING (MainThread) [xknx.log] Could not sync group address '4/2/9' (FinestraBagnoAzzurro - State)
2021-11-06 01:56:49 WARNING (MainThread) [xknx.log] Error: KNX bus did not respond in time (2.0 secs) to GroupValueRead request for: 4/2/21
2021-11-06 01:56:49 WARNING (MainThread) [xknx.log] Could not sync group address '4/2/21' (Tenda_BOX - State)
2021-11-06 01:56:49 WARNING (MainThread) [xknx.log] Error: KNX bus did not respond in time (2.0 secs) to GroupValueRead request for: 4/2/22
2021-11-06 01:56:49 WARNING (MainThread) [xknx.log] Could not sync group address '4/2/22' (Tenda_SOGGIORNO2 - State)
2021-11-06 01:56:51 WARNING (MainThread) [xknx.log] Error: KNX bus did not respond in time (2.0 secs) to GroupValueRead request for: 4/2/23
2021-11-06 01:56:51 WARNING (MainThread) [xknx.log] Could not sync group address '4/2/23' (Tenda_INGRESSO_SALAHOBBT - State)
2021-11-06 01:56:51 WARNING (MainThread) [xknx.log] Error: KNX bus did not respond in time (2.0 secs) to GroupValueRead request for: 4/2/24
2021-11-06 01:56:51 WARNING (MainThread) [xknx.log] Could not sync group address '4/2/24' (Tenda_MATRIMONIALE_PT - State)
2021-11-06 01:56:53 WARNING (MainThread) [xknx.log] Error: KNX bus did not respond in time (2.0 secs) to GroupValueRead request for: 4/2/25
2021-11-06 01:56:53 WARNING (MainThread) [xknx.log] Could not sync group address '4/2/25' (Tenda_CAMERETTA_PT - State)
2021-11-06 01:56:53 WARNING (MainThread) [xknx.log] Error: KNX bus did not respond in time (2.0 secs) to GroupValueRead request for: 4/2/26
2021-11-06 01:56:53 WARNING (MainThread) [xknx.log] Could not sync group address '4/2/26' (Tenda_SOGGIORNO_PT - State)
2021-11-06 01:56:55 WARNING (MainThread) [xknx.log] Error: KNX bus did not respond in time (2.0 secs) to GroupValueRead request for: 4/2/27
2021-11-06 01:56:55 WARNING (MainThread) [xknx.log] Could not sync group address '4/2/27' (Tenda_INGRESSO_PT - State)
2021-11-06 01:56:55 WARNING (MainThread) [xknx.log] Error: KNX bus did not respond in time (2.0 secs) to GroupValueRead request for: 4/2/28
2021-11-06 01:56:55 WARNING (MainThread) [xknx.log] Could not sync group address '4/2/28' (Tenda_SOGGIORNO1 - State)
2021-11-06 01:56:57 WARNING (MainThread) [xknx.log] Error: KNX bus did not respond in time (2.0 secs) to GroupValueRead request for: 4/2/29
2021-11-06 01:56:57 WARNING (MainThread) [xknx.log] Could not sync group address '4/2/29' (Tenda_STUDIO - State)
2021-11-06 01:56:57 WARNING (MainThread) [xknx.log] Error: KNX bus did not respond in time (2.0 secs) to GroupValueRead request for: 4/2/30
2021-11-06 01:56:57 WARNING (MainThread) [xknx.log] Could not sync group address '4/2/30' (Tenda_VETROLUNGO - State)
2021-11-06 01:56:59 WARNING (MainThread) [xknx.log] Error: KNX bus did not respond in time (2.0 secs) to GroupValueRead request for: 4/2/31
2021-11-06 01:56:59 WARNING (MainThread) [xknx.log] Could not sync group address '4/2/31' (Tenda_BAGNOGRIGIO - State)
2021-11-06 01:56:59 WARNING (MainThread) [xknx.log] Error: KNX bus did not respond in time (2.0 secs) to GroupValueRead request for: 4/2/32
2021-11-06 01:56:59 WARNING (MainThread) [xknx.log] Could not sync group address '4/2/32' (Tenda_MATRIMONIALE_P1 - State)
2021-11-06 01:57:01 WARNING (MainThread) [xknx.log] Error: KNX bus did not respond in time (2.0 secs) to GroupValueRead request for: 4/2/33
2021-11-06 01:57:01 WARNING (MainThread) [xknx.log] Could not sync group address '4/2/33' (Tenda_BAGNOAZZURRO - State)
2021-11-06 01:57:02 WARNING (MainThread) [xknx.log] Error: KNX bus did not respond in time (2.0 secs) to GroupValueRead request for: 4/2/34
2021-11-06 01:57:02 WARNING (MainThread) [xknx.log] Could not sync group address '4/2/34' (Tenda_CAMERETTA_P1 - State)
2021-11-06 01:57:03 WARNING (MainThread) [xknx.log] Error: KNX bus did not respond in time (2.0 secs) to GroupValueRead request for: 4/2/35
2021-11-06 01:57:03 WARNING (MainThread) [xknx.log] Could not sync group address '4/2/35' (Tenda_CUCINA - State)
2021-11-06 01:57:04 WARNING (MainThread) [xknx.log] Error: KNX bus did not respond in time (2.0 secs) to GroupValueRead request for: 4/2/36
2021-11-06 01:57:04 WARNING (MainThread) [xknx.log] Could not sync group address '4/2/36' (Tenda_ALZANTE - State)
2021-11-06 01:57:05 WARNING (MainThread) [xknx.log] Error: KNX bus did not respond in time (2.0 secs) to GroupValueRead request for: 4/2/37
2021-11-06 01:57:05 WARNING (MainThread) [xknx.log] Could not sync group address '4/2/37' (Tenda_INGRESSO_P1 - State)
2021-11-06 01:57:06 WARNING (MainThread) [xknx.log] Error: KNX bus did not respond in time (2.0 secs) to GroupValueRead request for: 4/2/38
2021-11-06 01:57:06 WARNING (MainThread) [xknx.log] Could not sync group address '4/2/38' (Tenda_BAGNOOSPITI - State)
2021-11-06 01:57:07 WARNING (MainThread) [xknx.log] Error: KNX bus did not respond in time (2.0 secs) to GroupValueRead request for: 4/2/39
2021-11-06 01:57:07 WARNING (MainThread) [xknx.log] Could not sync group address '4/2/39' (Radar_box - State)
2021-11-06 01:57:08 WARNING (MainThread) [xknx.log] Error: KNX bus did not respond in time (2.0 secs) to GroupValueRead request for: 4/2/40
2021-11-06 01:57:08 WARNING (MainThread) [xknx.log] Could not sync group address '4/2/40' (Radar_corr_PT - State)
2021-11-06 01:57:09 WARNING (MainThread) [xknx.log] Error: KNX bus did not respond in time (2.0 secs) to GroupValueRead request for: 4/2/41
2021-11-06 01:57:09 WARNING (MainThread) [xknx.log] Could not sync group address '4/2/41' (Radar_corr_P1 - State)
2021-11-06 01:57:10 WARNING (MainThread) [xknx.log] Error: KNX bus did not respond in time (2.0 secs) to GroupValueRead request for: 4/1/1
2021-11-06 01:57:10 WARNING (MainThread) [xknx.log] Could not sync group address '4/1/1' (FinestreP1 - State)
2021-11-06 01:57:11 WARNING (MainThread) [xknx.log] Error: KNX bus did not respond in time (2.0 secs) to GroupValueRead request for: 4/1/2
2021-11-06 01:57:11 WARNING (MainThread) [xknx.log] Could not sync group address '4/1/2' (TendaP1 - State)
2021-11-06 01:57:12 WARNING (MainThread) [xknx.log] Error: KNX bus did not respond in time (2.0 secs) to GroupValueRead request for: 4/1/3
2021-11-06 01:57:12 WARNING (MainThread) [xknx.log] Could not sync group address '4/1/3' (RadarP1 - State)
2021-11-06 01:57:13 WARNING (MainThread) [xknx.log] Error: KNX bus did not respond in time (2.0 secs) to GroupValueRead request for: 4/1/4
2021-11-06 01:57:14 WARNING (MainThread) [xknx.log] Could not sync group address '4/1/4' (FinestrePT - State)
2021-11-06 01:57:14 WARNING (MainThread) [xknx.log] Error: KNX bus did not respond in time (2.0 secs) to GroupValueRead request for: 4/1/5
2021-11-06 01:57:14 WARNING (MainThread) [xknx.log] Could not sync group address '4/1/5' (TendaPT - State)
2021-11-06 01:57:16 WARNING (MainThread) [xknx.log] Error: KNX bus did not respond in time (2.0 secs) to GroupValueRead request for: 4/1/6
2021-11-06 01:57:16 WARNING (MainThread) [xknx.log] Could not sync group address '4/1/6' (RadarPT - State)
2021-11-06 01:57:16 WARNING (MainThread) [xknx.log] Error: KNX bus did not respond in time (2.0 secs) to GroupValueRead request for: 4/1/7
2021-11-06 01:57:16 WARNING (MainThread) [xknx.log] Could not sync group address '4/1/7' (Box - State)
2021-11-06 01:57:18 WARNING (MainThread) [xknx.log] Error: KNX bus did not respond in time (2.0 secs) to GroupValueRead request for: 4/1/8
2021-11-06 01:57:18 WARNING (MainThread) [xknx.log] Could not sync group address '4/1/8' (TendaBox - State)
2021-11-06 01:57:21 WARNING (MainThread) [xknx.log] Error: KNX bus did not respond in time (2.0 secs) to GroupValueRead request for: 11/0/0
2021-11-06 01:57:21 WARNING (MainThread) [xknx.log] Could not sync group address '11/0/0' (Pedonale - State)

Ok. This line

[xknx.log] Received DisconnectRequest from tunnelling sever.

means that your IP interface closed the connection - unfortunately we don't get any more info when this happens. You could check whether your IP interface provides some kind of logs.
What device are you using? Are there multiple simultaneous connections - like from ETS, a second HA instance, or any other IP device or software?

[xknx.log] Could not sync group address '4/1/8' (TendaBox - State)
[xknx.log] Error: KNX bus did not respond in time (2.0 secs) to GroupValueRead request for: 4/1/8

this could be a result of the missing connection and HA trying to initialize states (these are less interesting for our debugging purposes).

Does HA reconnect after some time, or does it stay disconnected? There is no hint of a reconnection in your logs. Did you remove some log lines, or is this everything you get from xknx?

See my initial comment on how to get more logs by changing the log level of xknx.knx.
Please wrap logs in triple backticks so the markdown parser doesn't cut off information and it's more readable.

@marvin-w I wonder why not only the heartbeat task seems to keep running after the disconnect, but also the StateUpdater 🤔

I'd probably need debug logs in order to analyse this further, but agreed, it looks strange.

https://www.home-assistant.io/integrations/knx/#troubleshooting--common-issues

My KNX->IP interface is a Zennio KIPI 1.0 (https://www.zennio.com/products/system/kipi)
I can't find any kind of logs from the interface, neither via ETS.
There are 2 simultaneous connections to the KNX bus via this interface:

After disconnecting from the bus, HA reconnects in 4-5 seconds. The entities become "not available" and then, after 4-5 seconds, the current state is shown correctly again.

I did not remove any log lines from the KNX integration. This is all I get!
I had already changed the log level before posting, with the following settings in configuration.yaml:

logger:
  default: warning
  logs:
    #homeassistant.components.modbus: debug
    #pymodbus.client: debug
    knx: debug
    knx.log: debug
    knx.knx: debug
    knx.telegram: debug
    knx.raw_socket: debug
    knx.state_updater: debug

I'm adding some more log lines and an image below.
As you will notice, the disconnection from the tunnelling server occurs at 9:58:35 and the entities in HA become unavailable. 3 seconds later the entities become available again.
Let me know if you need any other information to better understand the issue.

Thanks!

(screenshot attached)

2021-11-07 09:58:35 WARNING (MainThread) [xknx.log] Received DisconnectRequest from tunnelling sever.
2021-11-07 09:58:35 WARNING (MainThread) [xknx.log] Error: KNX bus did not respond in time (2.0 secs) to GroupValueRead request for: 3/0/12
2021-11-07 09:58:35 WARNING (MainThread) [xknx.log] Could not sync group address '3/0/12' (TempCamerettaPT - Value)
2021-11-07 09:58:35 WARNING (MainThread) [xknx.log] Error: KNX bus did not respond in time (2.0 secs) to GroupValueRead request for: 3/0/11
2021-11-07 09:58:35 WARNING (MainThread) [xknx.log] Could not sync group address '3/0/11' (TempBagnoPT - Value)
2021-11-07 09:58:35 WARNING (MainThread) [xknx.log] Sending telegram failed. No active communication channel.
2021-11-07 09:58:40 WARNING (MainThread) [xknx.log] Error: KNX bus did not respond in time (2.0 secs) to GroupValueRead request for: 4/2/1
2021-11-07 09:58:40 WARNING (MainThread) [xknx.log] Could not sync group address '4/2/1' (FinestraLavanderia - State)
2021-11-07 09:58:40 WARNING (MainThread) [xknx.log] Error: KNX bus did not respond in time (2.0 secs) to GroupValueRead request for: 4/2/10
2021-11-07 09:58:40 WARNING (MainThread) [xknx.log] Could not sync group address '4/2/10' (FinestraCameretta - State)
2021-11-07 09:58:41 WARNING (MainThread) [homeassistant.helpers.entity] Update of sensor.cdz_soggiorno_inside_temperature is taking over 10 seconds
2021-11-07 09:58:43 WARNING (MainThread) [xknx.log] Error: KNX bus did not respond in time (2.0 secs) to GroupValueRead request for: 4/2/11
2021-11-07 09:58:43 WARNING (MainThread) [xknx.log] Could not sync group address '4/2/11' (FinestraBagnoOspiti - State)
2021-11-07 09:58:43 WARNING (MainThread) [xknx.log] Error: KNX bus did not respond in time (2.0 secs) to GroupValueRead request for: 4/2/12
2021-11-07 09:58:43 WARNING (MainThread) [xknx.log] Could not sync group address '4/2/12' (FinestraCucina - State)
2021-11-07 09:58:45 WARNING (MainThread) [xknx.log] Error: KNX bus did not respond in time (2.0 secs) to GroupValueRead request for: 4/2/13
2021-11-07 09:58:45 WARNING (MainThread) [xknx.log] Could not sync group address '4/2/13' (FinestraSalaPranzo - State)
2021-11-07 09:58:45 WARNING (MainThread) [xknx.log] Error: KNX bus did not respond in time (2.0 secs) to GroupValueRead request for: 4/2/14
2021-11-07 09:58:45 WARNING (MainThread) [xknx.log] Could not sync group address '4/2/14' (PortaIngressoP1 - State)
2021-11-07 09:58:47 WARNING (MainThread) [xknx.log] Error: KNX bus did not respond in time (2.0 secs) to GroupValueRead request for: 4/2/15
2021-11-07 09:58:47 WARNING (MainThread) [xknx.log] Could not sync group address '4/2/15' (PortaIngressoLaterale - State)
2021-11-07 09:58:47 WARNING (MainThread) [xknx.log] Error: KNX bus did not respond in time (2.0 secs) to GroupValueRead request for: 4/2/16
2021-11-07 09:58:47 WARNING (MainThread) [xknx.log] Could not sync group address '4/2/16' (FinestraMatrimonialePT - State)
2021-11-07 09:58:49 WARNING (MainThread) [xknx.log] Error: KNX bus did not respond in time (2.0 secs) to GroupValueRead request for: 4/2/17
2021-11-07 09:58:49 WARNING (MainThread) [xknx.log] Could not sync group address '4/2/17' (FinestraSoggiornoPT - State)
2021-11-07 09:58:49 WARNING (MainThread) [xknx.log] Error: KNX bus did not respond in time (2.0 secs) to GroupValueRead request for: 4/2/18
2021-11-07 09:58:49 WARNING (MainThread) [xknx.log] Could not sync group address '4/2/18' (PortaIngressoPT - State)
2021-11-07 09:58:51 WARNING (MainThread) [xknx.log] Error: KNX bus did not respond in time (2.0 secs) to GroupValueRead request for: 4/2/19
2021-11-07 09:58:51 WARNING (MainThread) [xknx.log] Could not sync group address '4/2/19' (FinestraBagnoRosso - State)
2021-11-07 09:58:51 WARNING (MainThread) [xknx.log] Error: KNX bus did not respond in time (2.0 secs) to GroupValueRead request for: 4/2/2
2021-11-07 09:58:51 WARNING (MainThread) [xknx.log] Could not sync group address '4/2/2' (FinestraSalaHobby - State)
2021-11-07 09:58:53 WARNING (MainThread) [xknx.log] Error: KNX bus did not respond in time (2.0 secs) to GroupValueRead request for: 4/2/20
2021-11-07 09:58:53 WARNING (MainThread) [xknx.log] Could not sync group address '4/2/20' (FinestraCamerettaPT - State)
2021-11-07 09:58:53 WARNING (MainThread) [xknx.log] Error: KNX bus did not respond in time (2.0 secs) to GroupValueRead request for: 4/2/3
2021-11-07 09:58:53 WARNING (MainThread) [xknx.log] Could not sync group address '4/2/3' (PortaSalaHobby - State)
2021-11-07 09:58:55 WARNING (MainThread) [xknx.log] Error: KNX bus did not respond in time (2.0 secs) to GroupValueRead request for: 4/2/4
2021-11-07 09:58:55 WARNING (MainThread) [xknx.log] Could not sync group address '4/2/4' (FinestraSoggiorno - State)
2021-11-07 09:58:55 WARNING (MainThread) [xknx.log] Error: KNX bus did not respond in time (2.0 secs) to GroupValueRead request for: 4/2/5
2021-11-07 09:58:55 WARNING (MainThread) [xknx.log] Could not sync group address '4/2/5' (FinestraAlzante - State)
2021-11-07 09:58:56 ERROR (MainThread) [custom_components.frigate.api] Error fetching information from http://ccab4aaf-frigate:5000/api/stats: Cannot connect to host ccab4aaf-frigate:5000 ssl:default [Try again]
2021-11-07 09:58:57 WARNING (MainThread) [xknx.log] Error: KNX bus did not respond in time (2.0 secs) to GroupValueRead request for: 4/2/6
2021-11-07 09:58:57 WARNING (MainThread) [xknx.log] Could not sync group address '4/2/6' (FinestraStudio - State)
2021-11-07 09:58:57 WARNING (MainThread) [xknx.log] Error: KNX bus did not respond in time (2.0 secs) to GroupValueRead request for: 4/2/7
2021-11-07 09:58:57 WARNING (MainThread) [xknx.log] Could not sync group address '4/2/7' (FinestraBagnoGrigio - State)
2021-11-07 09:58:59 WARNING (MainThread) [xknx.log] Error: KNX bus did not respond in time (2.0 secs) to GroupValueRead request for: 4/2/8
2021-11-07 09:58:59 WARNING (MainThread) [xknx.log] Could not sync group address '4/2/8' (FinestraMatrimoniale - State)
2021-11-07 09:58:59 WARNING (MainThread) [xknx.log] Error: KNX bus did not respond in time (2.0 secs) to GroupValueRead request for: 4/2/9
2021-11-07 09:58:59 WARNING (MainThread) [xknx.log] Could not sync group address '4/2/9' (FinestraBagnoAzzurro - State)
2021-11-07 09:59:01 WARNING (MainThread) [xknx.log] Error: KNX bus did not respond in time (2.0 secs) to GroupValueRead request for: 4/2/21
2021-11-07 09:59:01 WARNING (MainThread) [xknx.log] Could not sync group address '4/2/21' (Tenda_BOX - State)
2021-11-07 09:59:01 WARNING (MainThread) [xknx.log] Error: KNX bus did not respond in time (2.0 secs) to GroupValueRead request for: 4/2/22
2021-11-07 09:59:01 WARNING (MainThread) [xknx.log] Could not sync group address '4/2/22' (Tenda_SOGGIORNO2 - State)
2021-11-07 09:59:03 WARNING (MainThread) [xknx.log] Error: KNX bus did not respond in time (2.0 secs) to GroupValueRead request for: 4/2/23
2021-11-07 09:59:03 WARNING (MainThread) [xknx.log] Could not sync group address '4/2/23' (Tenda_INGRESSO_SALAHOBBT - State)
2021-11-07 09:59:03 WARNING (MainThread) [xknx.log] Error: KNX bus did not respond in time (2.0 secs) to GroupValueRead request for: 4/2/24
2021-11-07 09:59:03 WARNING (MainThread) [xknx.log] Could not sync group address '4/2/24' (Tenda_MATRIMONIALE_PT - State)
2021-11-07 09:59:05 WARNING (MainThread) [xknx.log] Error: KNX bus did not respond in time (2.0 secs) to GroupValueRead request for: 4/2/25
2021-11-07 09:59:05 WARNING (MainThread) [xknx.log] Could not sync group address '4/2/25' (Tenda_CAMERETTA_PT - State)
2021-11-07 09:59:05 WARNING (MainThread) [xknx.log] Error: KNX bus did not respond in time (2.0 secs) to GroupValueRead request for: 4/2/26
2021-11-07 09:59:05 WARNING (MainThread) [xknx.log] Could not sync group address '4/2/26' (Tenda_SOGGIORNO_PT - State)
2021-11-07 09:59:07 WARNING (MainThread) [xknx.log] Error: KNX bus did not respond in time (2.0 secs) to GroupValueRead request for: 4/2/27
2021-11-07 09:59:07 WARNING (MainThread) [xknx.log] Could not sync group address '4/2/27' (Tenda_INGRESSO_PT - State)
2021-11-07 09:59:07 WARNING (MainThread) [xknx.log] Error: KNX bus did not respond in time (2.0 secs) to GroupValueRead request for: 4/2/28
2021-11-07 09:59:07 WARNING (MainThread) [xknx.log] Could not sync group address '4/2/28' (Tenda_SOGGIORNO1 - State)
2021-11-07 09:59:09 WARNING (MainThread) [xknx.log] Error: KNX bus did not respond in time (2.0 secs) to GroupValueRead request for: 4/2/29
2021-11-07 09:59:09 WARNING (MainThread) [xknx.log] Could not sync group address '4/2/29' (Tenda_STUDIO - State)
2021-11-07 09:59:09 WARNING (MainThread) [xknx.log] Error: KNX bus did not respond in time (2.0 secs) to GroupValueRead request for: 4/2/30
2021-11-07 09:59:09 WARNING (MainThread) [xknx.log] Could not sync group address '4/2/30' (Tenda_VETROLUNGO - State)
2021-11-07 09:59:11 WARNING (MainThread) [xknx.log] Error: KNX bus did not respond in time (2.0 secs) to GroupValueRead request for: 4/2/31
2021-11-07 09:59:11 WARNING (MainThread) [xknx.log] Could not sync group address '4/2/31' (Tenda_BAGNOGRIGIO - State)
2021-11-07 09:59:11 WARNING (MainThread) [xknx.log] Error: KNX bus did not respond in time (2.0 secs) to GroupValueRead request for: 4/2/32
2021-11-07 09:59:11 WARNING (MainThread) [xknx.log] Could not sync group address '4/2/32' (Tenda_MATRIMONIALE_P1 - State)
2021-11-07 09:59:13 WARNING (MainThread) [xknx.log] Error: KNX bus did not respond in time (2.0 secs) to GroupValueRead request for: 4/2/33
2021-11-07 09:59:13 WARNING (MainThread) [xknx.log] Could not sync group address '4/2/33' (Tenda_BAGNOAZZURRO - State)
2021-11-07 09:59:13 WARNING (MainThread) [xknx.log] Error: KNX bus did not respond in time (2.0 secs) to GroupValueRead request for: 4/2/34
2021-11-07 09:59:13 WARNING (MainThread) [xknx.log] Could not sync group address '4/2/34' (Tenda_CAMERETTA_P1 - State)
2021-11-07 09:59:15 WARNING (MainThread) [xknx.log] Error: KNX bus did not respond in time (2.0 secs) to GroupValueRead request for: 4/2/35
2021-11-07 09:59:15 WARNING (MainThread) [xknx.log] Could not sync group address '4/2/35' (Tenda_CUCINA - State)
2021-11-07 09:59:15 WARNING (MainThread) [xknx.log] Error: KNX bus did not respond in time (2.0 secs) to GroupValueRead request for: 4/2/36
2021-11-07 09:59:15 WARNING (MainThread) [xknx.log] Could not sync group address '4/2/36' (Tenda_ALZANTE - State)
2021-11-07 09:59:17 WARNING (MainThread) [xknx.log] Error: KNX bus did not respond in time (2.0 secs) to GroupValueRead request for: 4/2/37
2021-11-07 09:59:17 WARNING (MainThread) [xknx.log] Could not sync group address '4/2/37' (Tenda_INGRESSO_P1 - State)
2021-11-07 09:59:17 WARNING (MainThread) [xknx.log] Error: KNX bus did not respond in time (2.0 secs) to GroupValueRead request for: 4/2/38
2021-11-07 09:59:17 WARNING (MainThread) [xknx.log] Could not sync group address '4/2/38' (Tenda_BAGNOOSPITI - State)
2021-11-07 09:59:19 WARNING (MainThread) [xknx.log] Error: KNX bus did not respond in time (2.0 secs) to GroupValueRead request for: 4/2/39
2021-11-07 09:59:19 WARNING (MainThread) [xknx.log] Could not sync group address '4/2/39' (Radar_box - State)
2021-11-07 09:59:19 WARNING (MainThread) [xknx.log] Error: KNX bus did not respond in time (2.0 secs) to GroupValueRead request for: 4/2/40
2021-11-07 09:59:19 WARNING (MainThread) [xknx.log] Could not sync group address '4/2/40' (Radar_corr_PT - State)
2021-11-07 09:59:21 WARNING (MainThread) [xknx.log] Error: KNX bus did not respond in time (2.0 secs) to GroupValueRead request for: 4/2/41
2021-11-07 09:59:21 WARNING (MainThread) [xknx.log] Could not sync group address '4/2/41' (Radar_corr_P1 - State)
2021-11-07 09:59:21 WARNING (MainThread) [xknx.log] Error: KNX bus did not respond in time (2.0 secs) to GroupValueRead request for: 4/1/1
2021-11-07 09:59:21 WARNING (MainThread) [xknx.log] Could not sync group address '4/1/1' (FinestreP1 - State)
2021-11-07 09:59:23 WARNING (MainThread) [xknx.log] Error: KNX bus did not respond in time (2.0 secs) to GroupValueRead request for: 4/1/2
2021-11-07 09:59:23 WARNING (MainThread) [xknx.log] Could not sync group address '4/1/2' (TendaP1 - State)
2021-11-07 09:59:23 WARNING (MainThread) [xknx.log] Error: KNX bus did not respond in time (2.0 secs) to GroupValueRead request for: 4/1/3
2021-11-07 09:59:23 WARNING (MainThread) [xknx.log] Could not sync group address '4/1/3' (RadarP1 - State)
2021-11-07 09:59:25 WARNING (MainThread) [xknx.log] Error: KNX bus did not respond in time (2.0 secs) to GroupValueRead request for: 4/1/4
2021-11-07 09:59:25 WARNING (MainThread) [xknx.log] Could not sync group address '4/1/4' (FinestrePT - State)
2021-11-07 09:59:25 WARNING (MainThread) [xknx.log] Error: KNX bus did not respond in time (2.0 secs) to GroupValueRead request for: 4/1/5
2021-11-07 09:59:25 WARNING (MainThread) [xknx.log] Could not sync group address '4/1/5' (TendaPT - State)
2021-11-07 09:59:27 WARNING (MainThread) [xknx.log] Error: KNX bus did not respond in time (2.0 secs) to GroupValueRead request for: 4/1/6
2021-11-07 09:59:27 WARNING (MainThread) [xknx.log] Could not sync group address '4/1/6' (RadarPT - State)
2021-11-07 09:59:27 WARNING (MainThread) [xknx.log] Error: KNX bus did not respond in time (2.0 secs) to GroupValueRead request for: 4/1/7
2021-11-07 09:59:27 WARNING (MainThread) [xknx.log] Could not sync group address '4/1/7' (Box - State)
2021-11-07 09:59:29 WARNING (MainThread) [xknx.log] Error: KNX bus did not respond in time (2.0 secs) to GroupValueRead request for: 4/1/8
2021-11-07 09:59:29 WARNING (MainThread) [xknx.log] Could not sync group address '4/1/8' (TendaBox - State)
2021-11-07 09:59:32 WARNING (MainThread) [xknx.log] Error: KNX bus did not respond in time (2.0 secs) to GroupValueRead request for: 11/0/0
2021-11-07 09:59:32 WARNING (MainThread) [xknx.log] Could not sync group address '11/0/0' (Pedonale - State)

I had already changed the log level before posting, by setting the following in configuration.yaml:

logger:
  default: warning
  logs:
    #homeassistant.components.modbus: debug
    #pymodbus.client: debug
    knx: debug
    knx.log: debug
    knx.knx: debug
    knx.telegram: debug
    knx.raw_socket: debug
    knx.state_updater: debug

It would need to be xknx* instead of knx* (xknx is the name of the Python library used by the knx Integration).
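For reference, a corrected snippet using the xknx logger names that actually show up in the logs above would look like this (a sketch; see the KNX integration troubleshooting docs for the full list):

```yaml
logger:
  default: warning
  logs:
    xknx: debug            # parent logger for the whole xknx library
    xknx.log: debug
    xknx.knx: debug
    xknx.telegram: debug
    xknx.raw_socket: debug
    xknx.state_updater: debug
```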

Sorry guys, I was so focused on debugging the problem that I didn't see the missing "X" !!
Here the logs:

2021-11-07 18:10:51 DEBUG (MainThread) [xknx.raw_socket] Received from ('192.168.1.225', 3671): 0610042000170452b2002900bcc01002481003008066ad
2021-11-07 18:10:51 DEBUG (MainThread) [xknx.knx] Received: <KNXIPFrame <KNXIPHeader HeaderLength="6" ProtocolVersion="16" KNXIPServiceType="TUNNELLING_REQUEST" Reserve="0" TotalLength="23" />
body="<TunnellingRequest communication_channel_id="82" sequence_counter="178" cemi="<CEMIFrame SourceAddress="IndividualAddress("1.0.2")" DestinationAddress="GroupAddress("9/0/16")" Flags="1011110011000000" payload="<GroupValueWrite value="<DPTArray value="[0x66,0xad]" />" />" />" />" />
2021-11-07 18:10:51 DEBUG (MainThread) [xknx.knx] Sending: <KNXIPFrame <KNXIPHeader HeaderLength="6" ProtocolVersion="16" KNXIPServiceType="TUNNELLING_ACK" Reserve="0" TotalLength="10" />
body="<TunnellingAck communication_channel_id="82" sequence_counter="178" status_code="ErrorCode.E_NO_ERROR" />" />
2021-11-07 18:10:51 DEBUG (MainThread) [xknx.telegram] <Telegram direction="Incoming" source_address="1.0.2" destination_address="9/0/16" payload="<GroupValueWrite value="<DPTArray value="[0x66,0xad]" />" />" />
2021-11-07 18:10:51 DEBUG (MainThread) [xknx.raw_socket] Received from ('192.168.1.225', 3671): 0610042000170452b2002900bcc01002481003008066ad
2021-11-07 18:10:51 DEBUG (MainThread) [xknx.knx] Received: <KNXIPFrame <KNXIPHeader HeaderLength="6" ProtocolVersion="16" KNXIPServiceType="TUNNELLING_REQUEST" Reserve="0" TotalLength="23" />
body="<TunnellingRequest communication_channel_id="82" sequence_counter="178" cemi="<CEMIFrame SourceAddress="IndividualAddress("1.0.2")" DestinationAddress="GroupAddress("9/0/16")" Flags="1011110011000000" payload="<GroupValueWrite value="<DPTArray value="[0x66,0xad]" />" />" />" />" />
2021-11-07 18:10:51 DEBUG (MainThread) [xknx.knx] Sending: <KNXIPFrame <KNXIPHeader HeaderLength="6" ProtocolVersion="16" KNXIPServiceType="TUNNELLING_ACK" Reserve="0" TotalLength="10" />
body="<TunnellingAck communication_channel_id="82" sequence_counter="178" status_code="ErrorCode.E_NO_ERROR" />" />
2021-11-07 18:10:51 DEBUG (MainThread) [xknx.telegram] <Telegram direction="Incoming" source_address="1.0.2" destination_address="9/0/16" payload="<GroupValueWrite value="<DPTArray value="[0x66,0xad]" />" />" />
2021-11-07 18:10:51 DEBUG (MainThread) [xknx.raw_socket] Received from ('192.168.1.225', 3671): 06100209001052000801c0a801e10e57
2021-11-07 18:10:51 DEBUG (MainThread) [xknx.knx] Received: <KNXIPFrame <KNXIPHeader HeaderLength="6" ProtocolVersion="16" KNXIPServiceType="DISCONNECT_REQUEST" Reserve="0" TotalLength="16" />
body="<DisconnectRequest CommunicationChannelID="82" control_endpoint="<HPAI 192.168.1.225:3671 />" />" />
2021-11-07 18:10:51 WARNING (MainThread) [xknx.log] Received DisconnectRequest from tunnelling sever.
2021-11-07 18:10:51 DEBUG (MainThread) [xknx.knx] Sending: <KNXIPFrame <KNXIPHeader HeaderLength="6" ProtocolVersion="16" KNXIPServiceType="DISCONNECT_RESPONSE" Reserve="0" TotalLength="8" />
body="<DisconnectResponse CommunicationChannelID="82" status_code="ErrorCode.E_NO_ERROR" />" />
2021-11-07 18:10:51 DEBUG (MainThread) [xknx.state_updater] StateUpdater stopping
2021-11-07 18:10:51 DEBUG (MainThread) [xknx.log] Closing transport.
2021-11-07 18:10:54 DEBUG (MainThread) [xknx.knx] Sending: <KNXIPFrame <KNXIPHeader HeaderLength="6" ProtocolVersion="16" KNXIPServiceType="CONNECT_REQUEST" Reserve="0" TotalLength="26" />
body="<ConnectRequest control_endpoint="<HPAI 192.168.1.221:50817 />" data_endpoint="<HPAI 192.168.1.221:50817 />" request_type="ConnectRequestType.TUNNEL_CONNECTION" />" />
2021-11-07 18:10:54 DEBUG (MainThread) [xknx.raw_socket] Received from ('192.168.1.225', 3671): 06100206001453000801c0a801e10e5704041006
2021-11-07 18:10:54 DEBUG (MainThread) [xknx.knx] Received: <KNXIPFrame <KNXIPHeader HeaderLength="6" ProtocolVersion="16" KNXIPServiceType="CONNECT_RESPONSE" Reserve="0" TotalLength="20" />
body="<ConnectResponse communication_channel="83" status_code="ErrorCode.E_NO_ERROR" control_endpoint="<HPAI 192.168.1.225:3671 />" request_type="ConnectRequestType.TUNNEL_CONNECTION" identifier="4102" />" />
2021-11-07 18:10:54 DEBUG (MainThread) [xknx.log] Tunnel established communication_channel=83, id=4102
2021-11-07 18:10:54 DEBUG (MainThread) [xknx.state_updater] StateUpdater initializing values
2021-11-07 18:10:54 DEBUG (MainThread) [xknx.state_updater] StateUpdater reading 4/2/1 for FinestraLavanderia - State
2021-11-07 18:10:54 DEBUG (MainThread) [xknx.state_updater] StateUpdater reading 4/2/10 for FinestraCameretta - State
2021-11-07 18:10:55 WARNING (MainThread) [homeassistant.helpers.template] Template variable warning: 'mappingproxy object' has no attribute 'is_hue_group' when rendering '{{ states.light | selectattr('state', 'eq', 'on') | rejectattr('attributes.is_hue_group') | list | count }}'
2021-11-07 18:10:55 WARNING (MainThread) [homeassistant.helpers.template] Template variable warning: 'mappingproxy object' has no attribute 'is_hue_group' when rendering '{{ states.light | selectattr('state', 'eq', 'on') | rejectattr('attributes.is_hue_group') | list | count }}'
2021-11-07 18:10:55 WARNING (MainThread) [homeassistant.helpers.template] Template variable warning: 'mappingproxy object' has no attribute 'is_hue_group' when rendering '{{ states.light | selectattr('state', 'eq', 'on') | rejectattr('attributes.is_hue_group') | list | count }}'
2021-11-07 18:10:55 WARNING (MainThread) [homeassistant.helpers.template] Template variable warning: 'mappingproxy object' has no attribute 'is_hue_group' when rendering '{{ states.light | selectattr('state', 'eq', 'on') | rejectattr('attributes.is_hue_group') | list | count }}'
2021-11-07 18:10:55 WARNING (MainThread) [homeassistant.helpers.template] Template variable warning: 'mappingproxy object' has no attribute 'is_hue_group' when rendering '{{ states.light | selectattr('state', 'eq', 'on') | rejectattr('attributes.is_hue_group') | list | count }}'
2021-11-07 18:10:55 WARNING (MainThread) [homeassistant.helpers.template] Template variable warning: 'mappingproxy object' has no attribute 'is_hue_group' when rendering '{{ states.light | selectattr('state', 'eq', 'on') | rejectattr('attributes.is_hue_group') | list | count }}'
2021-11-07 18:10:55 DEBUG (MainThread) [xknx.raw_socket] Received from ('192.168.1.225', 3671): 061004200017045300002900bce01105180c0300800c29
2021-11-07 18:10:55 DEBUG (MainThread) [xknx.knx] Received: <KNXIPFrame <KNXIPHeader HeaderLength="6" ProtocolVersion="16" KNXIPServiceType="TUNNELLING_REQUEST" Reserve="0" TotalLength="23" />
body="<TunnellingRequest communication_channel_id="83" sequence_counter="0" cemi="<CEMIFrame SourceAddress="IndividualAddress("1.1.5")" DestinationAddress="GroupAddress("3/0/12")" Flags="1011110011100000" payload="<GroupValueWrite value="<DPTArray value="[0xc,0x29]" />" />" />" />" />
2021-11-07 18:10:55 DEBUG (MainThread) [xknx.knx] Sending: <KNXIPFrame <KNXIPHeader HeaderLength="6" ProtocolVersion="16" KNXIPServiceType="TUNNELLING_ACK" Reserve="0" TotalLength="10" />
body="<TunnellingAck communication_channel_id="83" sequence_counter="0" status_code="ErrorCode.E_NO_ERROR" />" />
2021-11-07 18:10:55 DEBUG (MainThread) [xknx.telegram] <Telegram direction="Outgoing" source_address="0.0.0" destination_address="4/2/1" payload="<GroupValueRead />" />
2021-11-07 18:10:55 DEBUG (MainThread) [xknx.knx] Sending: <KNXIPFrame <KNXIPHeader HeaderLength="6" ProtocolVersion="16" KNXIPServiceType="TUNNELLING_REQUEST" Reserve="0" TotalLength="21" />
body="<TunnellingRequest communication_channel_id="83" sequence_counter="140" cemi="<CEMIFrame SourceAddress="IndividualAddress("1.0.6")" DestinationAddress="GroupAddress("4/2/1")" Flags="1011110011100000" payload="<GroupValueRead />" />" />" />
2021-11-07 18:10:55 DEBUG (MainThread) [xknx.telegram] <Telegram direction="Incoming" source_address="1.1.5" destination_address="3/0/12" payload="<GroupValueWrite value="<DPTArray value="[0xc,0x29]" />" />" />
2021-11-07 18:10:55 INFO (MainThread) [xknx.log] Successfully reconnected to KNX bus.

I don't know if it is the same problem, but after the upgrade to 2021.11 I have some issues with KNX that trigger automations even when they shouldn't.

I think the problem is in the communication & entity update.

@arcano81 the disconnect is issued from your tunnelling server. How often does it occur? You can try to turn off the second device connecting to it and see if this is causing the problems. Alternatively try to use the ETS bus monitor to get an idea of what is happening.

@sagitt without further information we can't help you. But your problems don't seem related to this issue (entities unavailable) so please open a new one providing as much information as possible.

@farmio The disconnection occurs about 200 times in 24 hours. Attached you can find a query from the HA DB that reports the states of a cover entity; you can see how often the disconnections occur.
I'll try to disconnect the second device, see what happens, and keep you updated.

Thanx
Entity State.csv

@farmio I disconnected the second device but the problem is still there.
It does not seem to depend on the second device connecting to the KNX/IP interface.
Any other ideas? Are there some logs or other info you need to better understand the issue?

So about every 8 minutes. OK, that is very often, but it makes it quite easy to log such an event.
As it happens from outside of HA, the only chance to see what's happening is logs from ETS rather than from xknx/Home Assistant. Preferably bus monitor logs (if your device supports these; if not, use the group monitor).

Is the KIPI device the only ip interface in your installation?

I analyzed the situation and the problem is caused by this change.

I see that sometimes my entities go to "unavailable" and just a few seconds later receive the correct state again. This triggers automations that check for entity status changes, like the ones I run when a cover goes to "0%". After going to unavailable and returning to 0%, all those automations are triggered. The same happens with my key-lock contact (see image): I receive an alert for lock closed/open without any real status change.

This behavior would not be a problem if everything worked fine, but I found this in the logs when it happens:

Schermata 2021-11-10 alle 20 21 36

Schermata 2021-11-10 alle 20 24 11
Schermata 2021-11-10 alle 20 32 39

This happens multiple times during the day. Is there a way to restore the old behavior?

Thanks
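As a stop-gap while the connection problem itself is investigated, automations like the cover one described above can be guarded against the unavailable bounce with a template condition on `trigger.from_state` (a sketch with hypothetical entity names):

```yaml
automation:
  - alias: "Cover closed alert (hypothetical example)"
    trigger:
      - platform: state
        entity_id: cover.living_room    # hypothetical entity
        to: "closed"
    condition:
      # skip transitions that merely recover from unavailable/unknown
      - condition: template
        value_template: >
          {{ trigger.from_state is not none
             and trigger.from_state.state not in ['unavailable', 'unknown'] }}
    action:
      - service: notify.notify
        data:
          message: "Cover really closed"
```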

We did not change the connection behavior itself. It has been like this for a long time. The only difference is that you are now able to see when the bus is not connected, by updating the available property of each given entity. I'm almost certain that you had these issues before, but they just weren't visually highlighted as they are now.

Of course, but this is still a problem, even if the new behavior is correct and the old one wasn't.

Would it be possible to add an option to disable this behavior, or a configurable timeout, so the entities only go to unavailable after a defined time has passed?

I will think about it and talk to @farmio. I'm personally not in favor of this, but we will see.

Thank you. This is effectively a breaking change and breaks lots of my automations.

I have a similar issue. Sometimes, but not always, I get this disconnect every 30 minutes exactly. It is always at exactly X:00 or X:30.

2021-12-01 14:00:02 WARNING (MainThread) [xknx.log] Received DisconnectRequest from tunnelling sever.
2021-12-01 14:00:02 WARNING (MainThread) [xknx.log] Sending telegram failed. No active communication channel.
2021-12-01 14:00:06 INFO (MainThread) [xknx.log] Successfully reconnected to KNX bus.

It always reconnects after a few seconds but of course it triggers some automations etc.
I have not yet been able to find the reason for this. There are no scheduled HA automations at :00/:30, and in ETS I cannot see any bus messages at :00/:30 at all.

The disconnect is issued from your tunnelling gateway (what manufacturer and type is it?). It's hard for us to say why. Your best bet would be to see if it provides some kind of logging (maybe it has a web interface etc.).
Other than that, you could try to do a network capture (e.g. Wireshark) and look into it (or have us look into it).
Debug logs for the knx integration may help too.

It is an MDT IP Interface SCN-IP000.03 which, unfortunately, does not provide any logs at all.

I do understand that the actual problem is not related to Home Assistant/xknx and most likely has always been there. However, it would be great if there were a (configurable) option to ignore such quick reconnects to the bus.

Some of the "unavailable" automation triggers can be worked around but some not, for example HomeKit triggers notifications if locks jump from the "unavailable" state.

@josiasmontag I'm more in the camp of finding and fixing the source of issues than of adding code and complexity to hide them.
Mostly it's just topology issues that can be fixed quite easily. If it is indeed an issue in xknx we can fix it for all affected users, as long as we can reproduce it and pin it down.

If you need help in finding out what is causing this feel free to join the xknx Discord server.

@arcano81 @josiasmontag @sagitt what Device does your HA instance run on?

HassOS in a VM using Proxmox on an Intel NUC, with a Weinzierl 731 KNX/IP interface connected directly to a managed switch, not passing through the router (KNX/IP & NUC -> switch).

Anyway i "fixed" this issue on my istance considering unavailable in every automation (like 200 automation, 2 weeks of work) and every template & custom sensor, climate... i already have a watchdog\hearbeat analysis system and i inhibite automations with this during a persistent communication fault.

Home Assistant OS on a Raspberry Pi 4 Model B with 4 GB RAM and a 64 GB microSD card.

Synology NAS / Docker / HA image.

Having the same issue here.
First I was using the docker version on a Raspberry Pi 4 model B.
Then I switched to HA OS on the RPI, to see whether it changed anything (it didn't).

I'm using a MDT IP Interface SCN-IP000.03

When taking a look with Wireshark, I have the impression that the MDT interface is either ignoring some requests or that my network is somehow dropping them.

image
tcpLog.zip

We are still kind of in the dark about this issue - most of all because none of the xknx developers is experiencing this and we can't possibly recreate it. At this point I'm not even sure if all these issues here are even related, or if we face multiple different problems.

Tcpdumps will surely help to shed some light on these issues, thank you very much!

If you could set up a HA Test instance running only default_config and knx (maybe just on a venv installation on a local machine - avoid fancy πŸ˜‰) I'd like to know if the knx connection is as unstable as in your production installation.

Also if anyone here is capable of doing some testing with an xknx working branch, I'd be very interested in your results.
XKNX/xknx#822

@hvandenbroeck Hi! I've had a look at your tcplog - this really does look like a networking problem and not so much like a KNX / xknx issue. Even ARP and ICMP packets are ignored.
I'd check for poor wifi signal, flaky cables, broken switches, power supplies etc.

@hvandenbroeck How does your network stack look like?
My MDT IP Interface is attached to an Unifi USW-24-PoE switch. HomeAssistant running on the Synology NAS is directly attached to this switch as well. The router is an Unifi UDM-Pro.

Just for reference, I'm running almost the same network stack without an issue. I just don't have a Synology NAS (well, I have one, but it's not running HA); HA instead runs on a dedicated server in the same rack connected to a Unifi 24 PoE switch, which is connected to my Unifi UDM Pro. My KNX IP device (Weinzierl IP BAOS 777) is connected to the PoE switch as well.

I don't think we will find anything here without proper debug logs and a wireshark dump (for your specific case).

Thanks all for the input.

Last days I tried switching cables, using wifi instead of a cable, updating the firmware of the MDT interface and using a clean install. None of those changed anything to the behavior.

When trying to ping the IP interface, it usually responded quite slowly and sometimes packets were dropped. More evidence that it had nothing to do with xknx or Home Assistant.

Going into ETS and doing a full download of the MDT IP interface solved my problem. Right now the program version shows 2.1; unfortunately I have no clue what it was before. Pinging is a lot faster, there are no dropped packets anymore and, most importantly, no sign of dropped connections in HA for the last 20 hours.

image

Hi everybody,
I’ll give you a brief update on my situation.
Yesterday I started a clean installation of Home Assistant (2021.12.3 with knx integration only) on a TS-453 Pro Qnap NAS using a container.
This new installation is running in parallel with the other Home Assistant running on a Raspberry
No modifications to the KNX setup or Ethernet connections.
The problem of disconnection of knx is not showing up in the Qnap installation but it is still present on the Raspberry installation. Could it be some kind of conflict between integrations on my Raspberry installation?
Any ideas on how to debug this?

thanx
Alessandro

@arcano81 a Pi4 with that amount of RAM should be able to handle a HA installation imho - so I guess it's not a hardware speed problem. But it could still be a defective Ethernet cable to the Pi or something.

And yes, I really believe it could also be some kind of "misbehaving" integration blocking the event loop xknx is running in.
Do you use any custom components?
Do you see any suspicious log messages? If not I guess disabling some integrations to encircle it could help. But after all it's just a gut feeling I have.

https://github.com/XKNX/xknx/releases/tag/0.18.14

So a new xknx version is contained in HA 2021.12.4, which was released tonight. This might solve some of the described issues; let's see.
I'm looking forward to your feedback.

I had a similar problem as described in this thread. It turned out that a third-party integration (Helios KWL/fan) caused the whole network stack of Home Assistant to pause for a few seconds every now and then. This forced the KNX connection to trigger a reconnect, and all entities were unavailable for 30-60 seconds. KNX disconnects because of the missing keep-alive message caused by the blocking problem in Home Assistant. Disabling the third-party integration solved the problem. I highly recommend trying this as well: disable and enable all integrations one by one and check the behavior.
I was too lazy to debug the faulty integration at the time to find out the cause.
Hope this helps.
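The class of bug described above can be demonstrated in isolation (this is not the actual Helios code, just an illustration): a blocking call made directly from a coroutine stalls every other task on the event loop, including connection keep-alives, whereas offloading it via `run_in_executor` keeps the loop responsive.

```python
import asyncio
import time


def blocking_io():
    # stands in for a synchronous socket call inside a third-party library
    time.sleep(0.2)
    return "data"


async def heartbeat():
    # stands in for the periodic KNX keep-alive: 5 ticks, 50 ms apart
    for _ in range(5):
        await asyncio.sleep(0.05)


async def bad():
    # calling the blocking function directly stalls the whole loop:
    # the heartbeat cannot run until blocking_io() returns
    start = time.monotonic()
    hb = asyncio.create_task(heartbeat())
    blocking_io()
    await hb
    return time.monotonic() - start


async def good():
    # offloading to a thread lets the heartbeat run on schedule
    start = time.monotonic()
    hb = asyncio.create_task(heartbeat())
    await asyncio.get_running_loop().run_in_executor(None, blocking_io)
    await hb
    return time.monotonic() - start
```

In the bad variant the 200 ms block and the 250 ms of heartbeats run back to back; in the good variant they overlap, so the total runtime is noticeably shorter and no keep-alive is missed.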

Hello all,

I have the same issue. The connection becomes unavailable, which means the entities' "last changed" timestamps get modified.

Installation HA: 2021.12.4

@farmio The problem is still there... I would even say it has worsened. This morning 3 of my shutters stayed closed :-(

image

image

Thank you for pointing this out. It was indeed the Helios integration for me as well.

Hi everybody,
here is some KNX/IP traffic captured with Wireshark. (192.168.1.221 is HA, 192.168.1.225 is the Zennio KNX/IP interface)
You can see the disconnection request @20:00:16
Hope this could help in understanding the problem.

KNX Disconnection Dump.zip

Deactivating third-party integrations, such as the Helios one, initially solved the disconnects, but it wasn't long before they returned.

I have now uninstalled all third-party add-ons, with the result that even more disconnects are taking place. Since yesterday evening there have been 781 disconnects to the KNX gateway.

At the same time, I have kept a connection open via the ETS monitor, which has been running continuously since yesterday evening.

So it is definitely not due to the network between HA & the KNX IP interface.

FYI: I installed a test system with only the KNX integration. Same problem - at some point it lost the connection.

As a next step I will set up a VLAN for KNX.

There are 7 (seven) packets in this dump... it doesn't provide much insight when everything that may influence something is filtered or cut away.
The only thing I can see here is that the disconnect is issued by the interface, and HA takes a very long time to respond, which is not good.

@farmio
Attached is a 40-minute dump of the KNX/IP traffic. I hope this is enough to better understand the issue.
Let me know if you need some more data.
KNX IP dump 02.zip

@arcano81 is this your production instance or the clean-knx-only-installation you mentioned before?

Yes, this is my production instance!

@arcano81 ok. Imho the issue is very likely to be a misbehaving integration that is not the knx Integration.
Something is blocking the event loop for 4-10 seconds.

You can try to investigate how the debugger works and see if it can reveal anything. https://www.home-assistant.io/integrations/debugpy/

Or just brute force disable one integration after the other (starting with custom components).

At this point I'm afraid I can't help you further.
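To complement the debugpy route: asyncio also has a built-in debug mode (`loop.set_debug(True)` or `PYTHONASYNCIODEBUG=1`) that logs callbacks running longer than `loop.slow_callback_duration` (0.1 s by default). A stall of the kind suspected here can also be measured directly with a tiny watchdog coroutine; a sketch, independent of HA and not any existing tooling:

```python
import asyncio
import time


async def measure_max_lag(duration, interval=0.01):
    """Return the worst observed wake-up delay of the event loop over
    `duration` seconds - a crude stall detector (illustrative only)."""
    worst = 0.0
    end = time.monotonic() + duration
    while time.monotonic() < end:
        before = time.monotonic()
        await asyncio.sleep(interval)
        # how much later than requested did the loop wake us up?
        worst = max(worst, time.monotonic() - before - interval)
    return worst
```

Running this as a background task in a suspect process would report stalls of the 4-10 second magnitude described above, without needing a debugger attached.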

Hi all, I'm the owner of the mentioned Helios integration. The "production" version of the integration blocks the event loop (there is thread locking inside, and the socket connection is also not based on asyncio in the underlying eazyctrl package). I have made a quick (just for testing) modification to make everything async (locking, connection, etc.). If you have the homeassistant-easycontrols integration installed and are hitting this #59170 issue, please try updating the integration from this branch: https://github.com/laszlojakab/homeassistant-easycontrols/tree/fix/network-stack-restart

Please give me a feedback if it solves the original problem. Thanks!

@farmio Today I ran my "clean KNX-only installation" again and the disconnection issue is present with this setup too. The issue is less frequent than in the production installation, but it is there.
Any further ideas, or information you need?

I have had a similar experience to @arcano81. I have a second HA instance which has only the KNX integration with 2 sensors. The KNX-only instance has disconnects as well, but less frequently and at different times.

I got a Wireshark capture of the production HA instance with 3 disconnects at:

2022-01-09 10:53:04
2022-01-10 00:17:11
2022-01-10 03:47:13

knx-production.pcapng.zip

Could you check if this is the stuck event loop issue as well @farmio? To me it looks like the HA instance is replying to packets in time, but I am not sure.

Next, I will try to get a capture of a disconnect of the KNX-only instance.

@josiasmontag I don't know what causes this, but every frame is listed twice in your capture. It's most probably a capturing configuration issue, so I'll ignore it πŸ€·β€β™‚οΈ

The first disconnect is different; the other 2 also seem to be caused by a blocked loop. This picture shows the 2nd disconnect.
Bildschirmfoto 2022-01-10 um 11 44 58
You can see that Telegram 33 was ACKed in 0.000696 seconds.
Telegram 34 didn't get an ACK in time (~1 second) and is repeated by the KNX interface.
After an additional second the KNX interface sends a DisconnectRequest - as it should.
After another 0.2 seconds xknx is unblocked again and works through its queue - it ACKs both Telegrams and answers the DisconnectRequest (both of which are pointless since we don't have a connection anymore anyway).

What happened between 33 and 34 remains a mystery 🀷
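The repeat-then-disconnect sequence described above can be sketched as a tiny state machine. This is purely illustrative (the real KNXnet/IP tunnelling logic lives in xknx); the ~1 s ACK deadline is shortened to 0.2 s so the sketch runs quickly:

```python
import asyncio

async def tunnelling_server(client_ack: asyncio.Event, log: list[str],
                            ack_timeout: float) -> None:
    # Simplified view of the interface behaviour seen in the capture:
    # send a telegram, wait for the ACK, repeat the telegram once,
    # then send a DisconnectRequest if the repeat is not ACKed either.
    log.append("send telegram")
    try:
        await asyncio.wait_for(client_ack.wait(), ack_timeout)
        log.append("ACK received")
        return
    except asyncio.TimeoutError:
        log.append("repeat telegram")
    try:
        await asyncio.wait_for(client_ack.wait(), ack_timeout)
        log.append("ACK received (repeat)")
    except asyncio.TimeoutError:
        log.append("DisconnectRequest")

async def main() -> list[str]:
    log: list[str] = []
    ack = asyncio.Event()  # never set: the client's event loop is "blocked"
    await tunnelling_server(ack, log, ack_timeout=0.2)
    return log

events = asyncio.run(main())
print(events)  # ['send telegram', 'repeat telegram', 'DisconnectRequest']
```

With a healthy client loop the ACK arrives within microseconds (as for Telegram 33) and the first branch returns immediately.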

I see the same problem - interestingly only with one device (kitchen window constantly switching from off to unavailable and back).

Just captured a disconnect with the KNX-only HA instance. It looks different:

image

knx-only-instance.pcapng.zip

You could try to check out and run #63766 and see if it logs anything. This should find integrations that block the event loop by using time.sleep (my assumption is that some integration's lib does this from a discovery flow or something like that - I could be very wrong though).
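The idea behind such a detector can be sketched by wrapping time.sleep so that calls made from the event-loop thread are recorded with their caller. This is only an illustration of the approach, not the actual implementation in #63766:

```python
import asyncio
import threading
import time
import traceback

_original_sleep = time.sleep
_loop_thread_ids: set[int] = set()
warnings: list[str] = []

def _guarded_sleep(seconds: float) -> None:
    # If time.sleep is called from a thread running an asyncio loop,
    # record where it came from instead of silently stalling everything.
    if threading.get_ident() in _loop_thread_ids:
        caller = traceback.extract_stack()[-2]
        warnings.append(
            f"time.sleep({seconds}) from {caller.filename}:{caller.lineno}"
        )
    _original_sleep(seconds)

async def bad_integration() -> None:
    time.sleep(0.01)  # the exact pattern the detector is meant to catch

async def main() -> None:
    _loop_thread_ids.add(threading.get_ident())
    await bad_integration()

time.sleep = _guarded_sleep
try:
    asyncio.run(main())
finally:
    time.sleep = _original_sleep  # always restore the original

print(warnings)
```

In Home Assistant this would surface as a warning naming the offending integration's file and line, which is exactly the hint needed to disable the right component.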

@josiasmontag Yes, it looks like the first disconnect of the other capture. I have no idea why this occurs.

I'd try to find out why you see all packets twice.

XKNX/xknx#847 would also be worth a try - it's a more or less experimental version of xknx which puts its connection (acking etc.) logic in a separate thread. It works fine on my system (but then again the current release version also works fine here 🀷).

> XKNX/xknx#847 would also be worth a try - it's a more or less experimental version of xknx which puts its connection (acking etc.) logic in a separate thread. It works fine on my system (but then again the current release version also works fine here 🀷).

This is great news! I would like to try it on my test and production systems.
@farmio Can you explain how to install the experimental version?

@arcano81 see https://xknx.io/home_assistant.html#running-ha-with-local-xknx-library
It should be enough to git clone, checkout the PR and set the environment variable.

It will probably only work with a venv installation - Container will probably be more complicated, and for HaOS I have no idea.

#63766 was just merged to dev. It's easier to just try the current HA dev branch and see if that logs any warnings.
See https://developers.home-assistant.io/docs/development_environment

Small update for the experimental xknx version: We decided to ship this, so we disabled threading by default again. If you want to try it, manually set threaded=True in your local clone here https://github.com/XKNX/xknx/blob/main/xknx/io/connection.py
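The design idea is that a dedicated thread runs its own event loop for the time-critical connection logic, so ACKs go out even while HA's main loop is blocked. A rough sketch of the pattern (illustrative only - not xknx's actual classes; asyncio.run_coroutine_threadsafe is the standard way to hand work to another thread's loop):

```python
import asyncio
import threading

class ThreadedConnection:
    """Run time-critical protocol I/O on a dedicated event loop in its
    own thread, so a blocked main loop cannot delay ACKs.
    (Illustrative sketch - not xknx's actual implementation.)"""

    def __init__(self) -> None:
        self._loop = asyncio.new_event_loop()
        self._thread = threading.Thread(
            target=self._loop.run_forever, daemon=True
        )
        self._thread.start()

    async def _send_ack(self, sequence: int) -> int:
        # Stand-in for building and sending the real ACK frame.
        await asyncio.sleep(0)
        return sequence

    def ack(self, sequence: int) -> int:
        # Callable from any thread; the work runs on the connection
        # thread's loop, and we block the caller only for the result.
        future = asyncio.run_coroutine_threadsafe(
            self._send_ack(sequence), self._loop
        )
        return future.result(timeout=1.0)

    def stop(self) -> None:
        self._loop.call_soon_threadsafe(self._loop.stop)
        self._thread.join()

conn = ThreadedConnection()
result = conn.ack(34)  # succeeds even while the caller's own loop is busy
conn.stop()
print(result)  # → 34
```

The trade-off is the usual one: cross-thread hand-offs add complexity and locking, which is presumably why threading was disabled by default again.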

I switched my production installation to dev and updated Core and the Supervisor.
HA Update
The system has been up and running for 10 hours and received 7 DisconnectRequests from the tunneling server, but there is no warning about time.sleep blocking the main event loop in the log.

Hm... it is not included in 20220111 - it was merged after that build - it should be in today's build though.

That said, there are other blocking calls besides time.sleep which are not detected by this - it's just picking low-hanging fruit.

Hello all,

since I upgraded the RAM in my NAS, the KNX interface has been running stably.

-> my NAS no longer needs virtual memory
-> when taking a snapshot of my VM I still lose the connection
-> if you have a NAS, do not use shared memory

Stable with HA 2021.12.9 and 2021.12.10

My problem disappeared when commenting out the DWD app. (It was blocking the event loop, causing disconnects which then made some entities flaky.)

So a new beta has started yesterday. Please try it and see if the connection problems with KNX disappear - or if there are any log messages regarding "timer_out_of_sync" (homeassistant.core debug) or similar.

Great news! Thank you.
How can I try this beta version?

That depends on your installation method. A quick search at Google or HA community forum will tell you.

I updated my production instance of HA to the latest beta version (2022.2.0b3).
The problem with KNX disconnections became even worse than before. With this beta version disconnections happen every 2-3 seconds.
I attach the log from HA here.
Log KNX beta version (2022.2.0b3).log

@arcano81: Can you please add the KNX diagnostics to the issue? You can download it on the integrations page after clicking on the three dots on the KNX integration.

Also, please send another log with debugpy: active in your configuration.yaml, and please also add debug logging for homeassistant.core.

@arcano81 thanks for these logs, we found a bug in the route_back logic. Try disabling route_back - you probably don't need it as your HA device and interface are on the same network anyway.

@farmio Disabling route_back solved the problem of disconnections every 2-3 seconds.
The usual behaviour with random disconnections is still present.
@marvin-w I attach here the KNX diagnostics
config_entry-knx-e0f52016e11797923387412866023651.zip
and here the HA log with debugpy and homeassistant.core. (The logs start from bootstrap of HA and ends 10 minutes after KNX tunneling disconnection). You can find a disconnection @ 2022-01-30 22:44:46
Ha Log (2022.2.0b3).zip

Last question: How do I use the individual address in the KNX configuration? Which address do I have to use?
image
Thanks

@arcano81

> How do I use the individual address in KNX configuration? Which address do I have to use?

You don't need to configure any address here. A tunnelling server should assign an address to a client at connection time. Some (rare) servers don't; yours does: "current_address": "1.0.6"
If you were using routing you'd need to set it.

What is this entity: entity_id=sensor.ntc_temperature_01 - which integration provides it, and do you have any automations or anything using it? I don't know if this is a coincidence, but it changes state near every timer_out_of_sync event or DisconnectRequest.

It doesn't seem debugpy was activated for these logs. It is not a logger but an integration; you'd need to add debugpy: to your configuration.yaml - see https://www.home-assistant.io/integrations/debugpy/

The entity_id=sensor.ntc_temperature_01 comes from the ESPHome integration.
It's a temperature sensor that acquires data from the hot water boiler. It's not used in any automation, just for showing its value in the dashboard.
I have now correctly added debugpy: in configuration.yaml and restarted...
I'll send you some new logs tomorrow!
Thank you

@farmio
2022-01-31 00:29:39 WARNING (MainThread) [homeassistant.util.async_] Detected blocking call inside the event loop. This is causing stability issues. Please report issue for debugpy doing blocking calls at homeassistant/components/debugpy/__init__.py, line 49: debugpy.listen((conf[CONF_HOST], conf[CONF_PORT]))
The log is showing this line, and HA has become unstable and very slow at bootstrap.
Try disabling debugpy: and see what happens.

This has just been fixed in beta 4 (#65252).
But I think it's expected that HA could run noticeably slower when the debugger is attached.

Upgraded to HA 2022.2.0b4.
Now the system boots correctly. It's a bit slower, but everything is up and running.
The KNX disconnection now happens again every 10 seconds, even though route_back is disabled.
Here you can find the logs from HA and the diagnostics from the KNX integration.
config_entry-knx-e0f52016e11797923387412866023651.json.zip
home-assistant (2022.2.0b4)+debugpy.zip

I will roll back to HA 2022.2.0b3, which has tolerable behaviour with the KNX disconnections, while waiting for a fix.
Thanks

@arcano81 to me it seems your HA instance is just a tick too big for the Raspberry Pi, but I could be totally wrong there. There are just so many timeouts and asyncio warnings in different parts of the system.

I'd like to try to enable threading for xknx on your HA to see if it fixes at least knx, if you are interested feel free to ping me at Discord farmio#5918 (or at xknx discord)

Dear Folks,

I have the same issue.

My environment:

HA 32bit Core Version: 2021.12.10 on Raspi 4 4 GB
LAN Connection
MDT SCN-IP000.03 KNX IP Interface
FW Version 3.03.
Application Programm Vers. 2.1

Till yesterday this error occurred nearly every 5-6 minutes.
After farmio gave me the hint, I investigated for a couple of hours.

I Found the following issues:

Node-Red
I use it as a workflow engine in Home Assistant.

  1. I found a couple of problems here; I'm not sure what impact each single issue had.

Normally I use the Ultimate KNX plugin, which creates its own KNX connection next to Home Assistant's.
Unfortunately, you cannot set a tunneling group address for it as you can for xknx.
  2. I had a misconfiguration in one node that sent an unexpected payload to a group address:
xknx.exceptions.exception.CouldNotParseTelegram: <CouldNotParseTelegram description="payload invalid" destination_address="3/1/5" device_name="LED WHZ Voute Außen" feature_name="State" payload="<GroupValueWrite value="<DPTArray value="[0x3]" />" />" source_address="1.1.12"/>

After that happened, the connection was lost.

2022-02-02 22:33:43 WARNING (MainThread) [xknx.log] Received DisconnectRequest from tunnelling sever.

  3. I had a second KNX integration that was not completely deleted; I deleted it.

Integrations

I had a couple of buggy integrations that kept trying to connect to a network endpoint and failing.

2022-02-02 22:33:46 WARNING (MainThread) [homeassistant.components.onvif] Couldn't connect to camera 'test', but will retry later. Error: All connection attempts failed
2022-02-02 22:33:51 ERROR (MainThread) [xknx.log] Unexpected xknx error while processing incoming telegram <Telegram direction="Incoming" source_address="1.1.12" destination_address="3/1/5" payload="<GroupValueWrite value="<DPTArray value="[0x3]" />" />" />
homeassistant.exceptions.HomeAssistantError: Error when calling async_set_default for bulb Yeelight Ceiling10 0x652d1fb at 192.168.1.33: {'code': -5000, 'message': 'general error'}

I fixed that or deactivated the integrations.

For now, it looks much better. The last disconnection was 15 hours ago

--

Does it make sense to add hysteresis to the function that reports the connection as aborted, so that it does not report every single micro-disconnect?
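For illustration, such hysteresis could look like the following debounce sketch (hypothetical code, not xknx): "unavailable" is only reported if the connection stays down longer than a grace period, so a micro-disconnect followed by an immediate reconnect never reaches the UI.

```python
import asyncio

class AvailabilityReporter:
    """Debounce availability reporting: only flip entities to
    'unavailable' when the connection stays down for longer than a
    grace period. (Hypothetical sketch, not xknx code.)"""

    def __init__(self, grace: float) -> None:
        self._grace = grace
        self._pending: asyncio.Task | None = None
        self.reported: list[str] = []

    def connection_lost(self) -> None:
        # Start the grace timer instead of reporting immediately.
        if self._pending is None:
            loop = asyncio.get_running_loop()
            self._pending = loop.create_task(self._delayed_report())

    def connection_restored(self) -> None:
        if self._pending is not None:
            # Reconnected within the grace period: swallow the blip.
            self._pending.cancel()
            self._pending = None
        else:
            self.reported.append("available")

    async def _delayed_report(self) -> None:
        await asyncio.sleep(self._grace)
        self.reported.append("unavailable")
        self._pending = None

async def main() -> list[str]:
    r = AvailabilityReporter(grace=0.1)
    # Micro-disconnect: restored before the grace period expires.
    r.connection_lost()
    await asyncio.sleep(0.01)
    r.connection_restored()
    # Real outage: grace period expires, so it gets reported.
    r.connection_lost()
    await asyncio.sleep(0.2)
    return r.reported

reported = asyncio.run(main())
print(reported)  # ['unavailable']
```

The downside of such a grace period is that entities briefly show stale state during a real outage, which is presumably why the integration chose to reflect the connection state immediately.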

@Blade0001

> xknx.exceptions.exception.CouldNotParseTelegram: <CouldNotParseTelegram description="payload invalid" destination_address="3/1/5" device_name="LED WHZ Voute Außen" feature_name="State" payload="<GroupValueWrite value="<DPTArray value="[0x3]" />" />" source_address="1.1.12"/>

This looks bad, but actually doesn't cause anything other than this log message:

2022-02-02 22:33:51 ERROR (MainThread) [xknx.log] Unexpected xknx error while processing incoming telegram <Telegram direction="Incoming" source_address="1.1.12" destination_address="3/1/5" payload="<GroupValueWrite value="<DPTArray value="[0x3]" />" />" />

Granted, it should be handled in a less alarming way, but other than that it is not the cause of a disconnect.
I'd recommend updating to 2022.02 and seeing if you get disconnections again.

> I had a second knx integration that was not completely deleted, I deleted it.

This however can really mess with your connection.

I have (had?!) the same problem, and after switching off support for "slow tunneling connections" in my MDT IP interface, I no longer see any disconnect errors in my log file.
Can anyone confirm this?

Edit: Yesterday I turned "slow tunneling connections" back on and got the disconnect errors again

@farmio: What is the recommendation for this?