Synology: Host name conflict
otherguy opened this issue · 68 comments
A fresh install (latest image, latest Synology DSM, latest iOS 11) of homebridge does not work for me. The logs show this:
Withdrawing address record for fe80::42:4cff:fea6:2530 on docker0.
Withdrawing address record for 172.17.0.1 on docker0.
Withdrawing address record for 172.16.1.200 on eth0.
Host name conflict, retrying with homebridge-48
Registering new address record for fe80::42:4cff:fea6:2530 on docker0.*.
Registering new address record for 172.17.0.1 on docker0.IPv4.
Registering new address record for 169.254.175.241 on eth1.IPv4.
Registering new address record for 172.16.1.200 on eth0.IPv4.
As you can see, it's already at `-48`, and this keeps going on forever.
The relevant log messages are:
Starting Avahi daemon
Found user 'avahi' (UID 86) and group 'avahi' (GID 86).
Successfully dropped root privileges.
avahi-daemon 0.6.32 starting up.
WARNING: No NSS support for mDNS detected, consider installing nss-mdns!
Loading service file /etc/avahi/services/sftp-ssh.service.
Loading service file /etc/avahi/services/ssh.service.
*** WARNING: Detected another IPv4 mDNS stack running on this host. This makes mDNS unreliable and is thus not recommended. ***
*** WARNING: Detected another IPv6 mDNS stack running on this host. This makes mDNS unreliable and is thus not recommended. ***
Joining mDNS multicast group on interface docker0.IPv6 with address fe80::42:4cff:fea6:2530.
New relevant interface docker0.IPv6 for mDNS.
Joining mDNS multicast group on interface docker0.IPv4 with address 172.17.0.1.
New relevant interface docker0.IPv4 for mDNS.
Joining mDNS multicast group on interface eth1.IPv4 with address 169.254.175.241.
New relevant interface eth1.IPv4 for mDNS.
Joining mDNS multicast group on interface eth0.IPv4 with address 172.16.1.200.
New relevant interface eth0.IPv4 for mDNS.
Network interface enumeration completed.
I have no other Docker images running on the Synology, but there is (native) Plex and a few other services. Is something else running an mDNS service that causes this?
My iOS 11 device is not able to find the homebridge. Entering the code manually just shows a loading spinner that never goes away.
I have tried to test this by installing Plex (which also uses Avahi) on DSM (6.1.3 Update 6) then launching the homebridge container, but haven't been able to replicate this error.
I also tried adding a second network interface but was still unable to replicate the issue.
A few things you could try:
1. Try disabling IPv6.
Control Panel -> Network -> Network Interface -> LAN -> Edit -> IPv6 -> Off
2. Try with Bonjour services both on and off.
Control Panel -> File Services -> Advanced -> Enable Bonjour File Discovery Service
Thank you, @oznu!
IPv6 was off from the beginning (yes, really). I tried it with IPv6 set to Auto, but the same thing happens. My router has IPv6 disabled as well and the Synology does not get a local IPv6 address. I believe the IPv6 address is applied by the Docker daemon.
Bonjour services were on. I tried it with Bonjour off, but again, the same thing happens.
When Plex is not running, again the same thing happens.
I'm a bit confused because your image was working fine! I merely reinstalled all packages because I migrated my volume to btrfs.
Just to try it, I ran it with an empty `startup.sh` and this `config.json`, no other files:
{
"bridge": {
"name": "Homebridge",
"username": "CC:22:3D:E3:CE:30",
"port": 51826,
"pin": "181-62-230",
"manufacturer": "@nfarina",
"model": "Homebridge",
"serialNumber": "0.4.31"
},
"description": "My Home",
"accessories": [
],
"platforms": [
]
}
I'm not sure if it will fix the problem, but I've pushed up a change to the Avahi config. You can test it out by downloading the latest image.
I also came across these:
- https://askubuntu.com/a/735977
- https://web.archive.org/web/20150228130344/http://avahi.org/wiki/AvahiAndUnicastDotLocal
Other people have experienced the same `Host name conflict` issue if the DNS servers on their local network are serving any `.local` domains.
Thank you, I will try. Interestingly enough, if I run the docker image manually, there are no errors:
docker run --rm -it --name homebridge-test -e PUID=1024 -e GUID=100 -v /volume1/docker/homebridge/:/homebridge --network=host oznu/homebridge
Here is the relevant output if I run your newly pushed image manually, using the command above:
Starting Avahi daemon
Found user 'avahi' (UID 86) and group 'avahi' (GID 86).
Successfully dropped root privileges.
avahi-daemon 0.6.32 starting up.
WARNING: No NSS support for mDNS detected, consider installing nss-mdns!
Loading service file /etc/avahi/services/sftp-ssh.service.
Loading service file /etc/avahi/services/ssh.service.
*** WARNING: Detected another IPv4 mDNS stack running on this host. This makes mDNS unreliable and is thus not recommended. ***
Joining mDNS multicast group on interface docker0.IPv4 with address 172.17.0.1.
New relevant interface docker0.IPv4 for mDNS.
Joining mDNS multicast group on interface eth1.IPv4 with address 169.254.175.241.
New relevant interface eth1.IPv4 for mDNS.
Joining mDNS multicast group on interface eth0.IPv4 with address 172.16.1.200.
New relevant interface eth0.IPv4 for mDNS.
Network interface enumeration completed.
Registering new address record for 172.17.0.1 on docker0.IPv4.
Registering new address record for 169.254.175.241 on eth1.IPv4.
Registering new address record for 2002:d52f:cbef:1:211:32ff:fe38:3b6d on eth0.*.
Registering new address record for 172.16.1.200 on eth0.IPv4.
Server startup complete. Host name is stark.local. Local service cookie is 890998352.
Service "stark" (/etc/avahi/services/ssh.service) successfully established.
Service "stark" (/etc/avahi/services/sftp-ssh.service) successfully established.
There are no errors.
I deleted the container created by the Docker UI in Synology and created it through the command line. Now it appears to work! I can stop and start the container through the UI as well.
Just for testing, I recreated the same container again through the UI and again it fails. I'm not sure what the UI is doing differently, but something seems to be messing up the configuration.
I have to wait until I'm home to pair the iOS device with homebridge. Will update in a couple of hours!
Thanks for testing this. Would you be able to provide the JSON output from a `docker inspect` on the container created by the Synology GUI and the container created from the CLI so I can compare the differences?
docker inspect <container id or name>
Sure, and I did that already as one of the first things. There are no notable differences as far as I can see:
Created through UI:
[
{
"Id": "ca178ef672c15c2e3a2162edb5e7263ba9846bd95b82dd75a6209059df64661d",
"Created": "2017-11-14T14:18:57.934229642Z",
"Path": "/init",
"Args": [],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 16433,
"ExitCode": 0,
"Error": "",
"StartedAt": "2017-11-14T14:19:24.802324475Z",
"FinishedAt": "0001-01-01T00:00:00Z",
"StartedTs": 1510669164,
"FinishedTs": -62135596800
},
"Image": "sha256:830673d6749e9dc4bf859c43f50247506e31799c3fc1a7938ee5a5c9e60a8acb",
"ResolvConfPath": "/volume1/@docker/containers/ca178ef672c15c2e3a2162edb5e7263ba9846bd95b82dd75a6209059df64661d/resolv.conf",
"HostnamePath": "/volume1/@docker/containers/ca178ef672c15c2e3a2162edb5e7263ba9846bd95b82dd75a6209059df64661d/hostname",
"HostsPath": "/volume1/@docker/containers/ca178ef672c15c2e3a2162edb5e7263ba9846bd95b82dd75a6209059df64661d/hosts",
"LogPath": "/volume1/@docker/containers/ca178ef672c15c2e3a2162edb5e7263ba9846bd95b82dd75a6209059df64661d/log.db",
"Name": "/homebridge-test",
"RestartCount": 0,
"Driver": "btrfs",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "docker-default",
"ExecIDs": null,
"HostConfig": {
"Binds": [],
"ContainerIDFile": "",
"LogConfig": {
"Type": "db",
"Config": {}
},
"NetworkMode": "host",
"PortBindings": null,
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"CapAdd": [],
"CapDrop": [],
"Dns": null,
"DnsOptions": null,
"DnsSearch": null,
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": false,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": null,
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"Env": [
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"ARCH=amd64",
"NPM_CONFIG_LOGLEVEL=info",
"NODE_VERSION=8.9.0",
"YARN_VERSION=1.2.1",
"HOMEBRIDGE_VERSION=0.4.31",
"S6_KEEP_ENV=1",
"PUID=1024",
"GUID=100"
],
"ConsoleSize": [
0,
0
],
"Isolation": "",
"CpuShares": 50,
"Memory": 0,
"NanoCpus": 0,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": null,
"BlkioDeviceReadBps": null,
"BlkioDeviceWriteBps": null,
"BlkioDeviceReadIOps": null,
"BlkioDeviceWriteIOps": null,
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DiskQuota": 0,
"KernelMemory": 0,
"MemoryReservation": 0,
"MemorySwap": -1,
"MemorySwappiness": -1,
"OomKillDisable": false,
"PidsLimit": 0,
"Ulimits": null,
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0
},
"GraphDriver": {
"Data": null,
"Name": "btrfs"
},
"SynoStatus": "running",
"Mounts": [
{
"Type": "volume",
"Name": "e2fbcca290011f69116bd85f5cc73b0e84151bfb31084065acd82839d9eea904",
"Source": "/volume1/@docker/volumes/e2fbcca290011f69116bd85f5cc73b0e84151bfb31084065acd82839d9eea904/_data",
"Destination": "/homebridge",
"Driver": "local",
"Mode": "",
"RW": true,
"Propagation": ""
}
],
"Config": {
"Hostname": "homebridge-test",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"Tty": true,
"OpenStdin": true,
"StdinOnce": false,
"Env": [
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"ARCH=amd64",
"NPM_CONFIG_LOGLEVEL=info",
"NODE_VERSION=8.9.0",
"YARN_VERSION=1.2.1",
"HOMEBRIDGE_VERSION=0.4.31",
"S6_KEEP_ENV=1",
"PUID=1024",
"GUID=100"
],
"Cmd": null,
"ArgsEscaped": true,
"Image": "oznu/homebridge",
"Volumes": {
"/homebridge": {}
},
"WorkingDir": "/homebridge",
"Entrypoint": [
"/init"
],
"OnBuild": null,
"Labels": {},
"DDSM": false
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "61f698d57085823b6b8affa3f26900209151073bcca89cc2077ec5eb9eb89afa",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {},
"SandboxKey": "/var/run/docker/netns/default",
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"host": {
"IPAMConfig": null,
"Links": null,
"Aliases": null,
"NetworkID": "344b9c826297c6049a17d076fada038cde076711de3fa5f7a37d56fd52c88ff2",
"EndpointID": "a5a1de3160ae1a14747303f0118e7a8547c5d2ee6799db536112d8d8fa61724f",
"Gateway": "",
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": ""
}
}
}
}
]
Created via CLI:
[
{
"Id": "08a9b420362ce322c0f075bfe86ee40c29a81e3b16a4ff355b01fa9565b09b18",
"Created": "2017-11-13T12:32:09.734074861Z",
"Path": "/init",
"Args": [],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 32183,
"ExitCode": 0,
"Error": "",
"StartedAt": "2017-11-13T13:48:02.695941573Z",
"FinishedAt": "2017-11-13T13:47:22.960600849Z",
"StartedTs": 1510580882,
"FinishedTs": 1510580842
},
"Image": "sha256:830673d6749e9dc4bf859c43f50247506e31799c3fc1a7938ee5a5c9e60a8acb",
"ResolvConfPath": "/volume1/@docker/containers/08a9b420362ce322c0f075bfe86ee40c29a81e3b16a4ff355b01fa9565b09b18/resolv.conf",
"HostnamePath": "/volume1/@docker/containers/08a9b420362ce322c0f075bfe86ee40c29a81e3b16a4ff355b01fa9565b09b18/hostname",
"HostsPath": "/volume1/@docker/containers/08a9b420362ce322c0f075bfe86ee40c29a81e3b16a4ff355b01fa9565b09b18/hosts",
"LogPath": "/volume1/@docker/containers/08a9b420362ce322c0f075bfe86ee40c29a81e3b16a4ff355b01fa9565b09b18/log.db",
"Name": "/homebridge",
"RestartCount": 0,
"Driver": "btrfs",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "docker-default",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/volume1/docker/homebridge/:/homebridge"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "db",
"Config": {}
},
"NetworkMode": "host",
"PortBindings": {},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"CapAdd": null,
"CapDrop": null,
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": false,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": null,
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"Env": [
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"ARCH=amd64",
"NPM_CONFIG_LOGLEVEL=info",
"NODE_VERSION=8.9.0",
"YARN_VERSION=1.2.1",
"HOMEBRIDGE_VERSION=0.4.31",
"S6_KEEP_ENV=1",
"PUID=1024",
"GUID=100"
],
"ConsoleSize": [
0,
0
],
"Isolation": "",
"CpuShares": 0,
"Memory": 0,
"NanoCpus": 0,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": null,
"BlkioDeviceReadBps": null,
"BlkioDeviceWriteBps": null,
"BlkioDeviceReadIOps": null,
"BlkioDeviceWriteIOps": null,
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DiskQuota": 0,
"KernelMemory": 0,
"MemoryReservation": 0,
"MemorySwap": 0,
"MemorySwappiness": -1,
"OomKillDisable": false,
"PidsLimit": 0,
"Ulimits": null,
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0
},
"GraphDriver": {
"Data": null,
"Name": "btrfs"
},
"SynoStatus": "running",
"Mounts": [
{
"Type": "bind",
"Source": "/volume1/docker/homebridge",
"Destination": "/homebridge",
"Mode": "",
"RW": true,
"Propagation": ""
}
],
"Config": {
"Hostname": "stark",
"Domainname": "",
"User": "",
"AttachStdin": true,
"AttachStdout": true,
"AttachStderr": true,
"Tty": true,
"OpenStdin": true,
"StdinOnce": true,
"Env": [
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"ARCH=amd64",
"NPM_CONFIG_LOGLEVEL=info",
"NODE_VERSION=8.9.0",
"YARN_VERSION=1.2.1",
"HOMEBRIDGE_VERSION=0.4.31",
"S6_KEEP_ENV=1",
"PUID=1024",
"GUID=100"
],
"Cmd": null,
"ArgsEscaped": true,
"Image": "oznu/homebridge",
"Volumes": {
"/homebridge": {}
},
"WorkingDir": "/homebridge",
"Entrypoint": [
"/init"
],
"OnBuild": null,
"Labels": {},
"DDSM": false
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "e43dada53caceb9ebf716e1724cbbcb05b2a507587efca7b36753f393d0c1065",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {},
"SandboxKey": "/var/run/docker/netns/default",
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"host": {
"IPAMConfig": null,
"Links": null,
"Aliases": null,
"NetworkID": "344b9c826297c6049a17d076fada038cde076711de3fa5f7a37d56fd52c88ff2",
"EndpointID": "975ebab85567088a6c5178857651086f0fa8d9a2b9c1380a147823f2264ef48b",
"Gateway": "",
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": ""
}
}
}
}
]
Binds are empty because I haven't mapped a config for the GUI-created test container. Other than that, I see no major differences!
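For reference, one quick way to compare two inspect dumps like these is to diff them. The container names below are the ones used earlier in this thread; the stand-in files only illustrate the idea, since the real dumps live on the NAS:

```shell
# On the NAS you would first save both outputs:
#   docker inspect homebridge-test > gui.json
#   docker inspect homebridge      > cli.json
# Demonstrated here with two stand-in files holding the key field:
printf '"Hostname": "homebridge-test"\n' > gui.json
printf '"Hostname": "stark"\n' > cli.json
diff gui.json cli.json || true   # diff exits non-zero when the files differ
```

The differing `Hostname` lines then stand out immediately instead of being buried in a few hundred lines of JSON.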
The only difference that I can see is the `Hostname`. When using `--net=host` the container will typically assume the same hostname as DSM. For some reason the hostname of the container created from the GUI is `homebridge-test`, while the hostname of the container created via the CLI is `stark`, which I guess is the name of your Synology NAS.
I tested this by creating a new container on the latest DSM (DSM 6.1.4-15217 Update 1) with the latest Docker package and checked the "Use same network as Docker host" box under the networking settings, and as expected the container had the same hostname as DSM.
I tried overriding the hostname using `--hostname=homebridge` while also using `--net=host`, and this had no effect: the hostname inside the container still matched DSM.
I'm not sure how the container you are creating via the GUI, with host networking, is picking up a different hostname.
Good catch! Yes, `stark` is the name of the Synology NAS. I'm on the newest DSM and newest Docker package as well.
Interestingly, the hostname also changes to `homebridge` or `homebridge-test` when I use the clone button in the Docker GUI to clone the settings from my CLI-created container!
So I'm guessing it's an issue with the Docker package, or my installation of said package, and not with the `docker-homebridge` image itself.
Thank you for helping me troubleshoot! Running it from the CLI is perfectly fine for me.
I also have the same issue after upgrading the image. Toggling IPv6 didn't change anything (I had turned it on just before the upgrade).
Good news, well sort of, I have been able to replicate the issue using the latest Docker package - this was not showing up a few days ago.
I'll try and figure out a solution.
When creating a container the Synology GUI is more or less sending the equivalent of this command:
docker run -d --name=homebridge --net=host --hostname=homebridge oznu/homebridge
Support for `--net=host` alongside `--hostname` was added in Docker 1.11.0, but didn't actually work until Docker 1.13.0. Before we all updated the Docker package we were running version 1.11.2; after the update we are running 17.05.0, so now we see this issue.
One potential solution, which I have tested and works, is to override the hostname in the Avahi config. This is set in `avahi-daemon.conf`:
[server]
host-name=dsm-hostname
...
I'll work on making it so the `host-name` variable in `avahi-daemon.conf` can be set using a Docker environment variable as a workaround.
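As a rough sketch (not the image's actual init script), the substitution could look like the following; the config file location, the sample value, and the handling of an unset variable are all assumptions:

```shell
# Minimal sketch: inject a DSM_HOSTNAME environment variable into
# avahi-daemon.conf. The real image's startup script may differ.
cat > avahi-daemon.conf <<'EOF'
[server]
#host-name=
use-ipv4=yes
EOF

DSM_HOSTNAME=stark   # example value; normally passed via -e DSM_HOSTNAME=...

if [ -n "$DSM_HOSTNAME" ]; then
  # Uncomment and set the host-name line with the supplied value
  sed -i "s/^#\{0,1\}host-name=.*/host-name=$DSM_HOSTNAME/" avahi-daemon.conf
fi

grep '^host-name=' avahi-daemon.conf   # host-name=stark
```

With `host-name` pinned to the DSM hostname, Avahi no longer tries to defend a conflicting name of its own.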
FYI, DSM 6.2 (rc) has Docker 17.09.0!
See:
- https://github.com/oznu/docker-homebridge#2-when-running-on-synology-dsm-set-the-dsm_hostname-environment-variable
- https://github.com/oznu/docker-homebridge/wiki/Homebridge-on-Synology#2-create-container
This hostname must match what DSM is using exactly or else it will still fail.
Let me know if this does not work for you.
Right now I'm battling another issue with `homebridge-hue` (ebaauw/homebridge-hue#217), but I will let you know as soon as this is resolved.
Sorry, I haven't pulled the latest image.
Worked after I disabled the Bonjour service…
Works, even with the Bonjour service enabled! Thank you! ❤️
In trying to solve this problem, and keep it simple to install Homebridge on a Synology, I have put together a Synology Package that will manage the oznu/homebridge container lifecycle. If you two would like to test it out it's available here:
https://github.com/oznu/homebridge-syno-spk
The package will pull down the latest oznu/homebridge docker image, configure the volume mount for you, and set the `PUID`, `PGID`, and `TZ` environment variables. Most importantly, it will start the container without using the `--hostname=homebridge` flag, which is what has been causing this issue.
Updating to the latest version of homebridge is also much easier: you just have to go into Package Center and execute the Stop then Run actions on the package.
Works for me with Bonjour now too...
@oznu alright, I will try it out!
How to build the spk from source?
Will the package also keep the Docker image updated? Or is that what you meant by "Updating to the latest version of homebridge is also much easier"?
Does it also mean I can remove the Docker container and image? Sorry, as you wrote, it manages the image.
It'd be nice if the manager asked for an existing homebridge config, not only the share, and allowed specifying the share and config folder.
Thanks for testing it out and providing feedback!
How to build the spk from source?
It's a bit of a pain to get set up for building from source. The Synology Docs have a mostly correct guide. I'm using Travis CI to build the SPK for each release; you can see the scripts in the `.ci` directory, which go through the entire process.
Will the package also keep the Docker image updated?
Yes. The current way of upgrading the image using the Synology Docker GUI is tedious and not very reliable; the SPK will check for updates each time it's started and, if an update is available, re-create the container with the same settings. I'm using the bundled `docker-compose` tool in the background, which makes all this easy.
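For illustration only, a compose file of the kind the SPK could generate might look like this; the exact file bundled with homebridge-syno-spk may differ, and the volume path, PUID/PGID, and TZ values below are placeholders:

```yaml
version: '2'
services:
  homebridge:
    image: oznu/homebridge
    restart: always
    network_mode: host      # equivalent to --net=host; no --hostname flag set
    volumes:
      - /volume1/docker/homebridge:/homebridge
    environment:
      - PUID=1024
      - PGID=100
      - TZ=Europe/London
```

Because the compose file never sets a container hostname, host networking leaves the container with the DSM hostname, sidestepping the conflict.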
Does it also mean I can remove the Docker container and image?
As you figured out, the container is just managed by the SPK, you can still view the logs, access the terminal, start and stop the container, and do everything else you normally would using the Synology Docker GUI.
It'd be nice if the manager asked for an existing homebridge config, not only the share, and allowed specifying the share and config folder.
I agree. I can't figure out a way to get a folder selection dialog box to appear in the install wizard; this might not even be possible and is certainly not mentioned in the Synology Docs. I'm thinking of having two options:
1. Simple: as it is now, you specify which share you want to use. I can make this a drop-down box with existing shares, defaulting to the `docker` share, which everyone who has the Docker package installed will have.
2. Advanced: specify the full file path where you want the config to be stored in a text box, e.g. `/volume1/my-share/homebridge`.
This should keep it simple for people who just want to get it up and running, but also satisfy those who want more control over where they store their config.
Hello, I find myself stuck and I can't locate the fault. I know this is off-topic, but can you help me? I followed the tutorial from A to Z, and in the container's terminal it shows me this:
@oznu, can you help me? I can't get it to work; Homebridge is not detected in the Home app, so I must have made a mistake somewhere.
I am on Docker 17.05.0-0349 and DSM 6.1.4-15217 Update 2.
Do you have any idea where the problem is coming from?
Thank you in advance.
Hi @oznu, thanks for your effort on this.
Today I first tried your normal Docker image, using the latest version. I set the `DSM_HOSTNAME` variable to the exact name of my DS and ended up with the host name conflict issue.
I deleted the container, deleted the image, and tried your SPK version (which is very convenient btw), but it is also spamming the log file with the host name issue.
The container, which is created by your SPK, does not have the `DSM_HOSTNAME` variable set. Is this correct?
I am running a DS415+, DSM 6.1.4-15217 Update 2, Docker 17.05.0-0349, and I need to have Synology Bonjour via SMB enabled because of my Time Machine backup.
When stopping your SPK created container, the following appears in the logs:
Is there anything I can help you with to narrow down the host name conflict issue?
@smoochy Thanks for adding more details. I updated to DSM 6.1.4-15217 Update 2 to test this.
The container, which is created by your SPK, does not have the `DSM_HOSTNAME` variable set. Is this correct?
Correct. It is not needed when the container is created via the SPK or the command line because the offending `--hostname=homebridge` flag is not set like it is when creating via the DSM GUI.
I can still reproduce the issue when creating the container from the DSM GUI and not setting the `DSM_HOSTNAME` variable, but can't get it to fail when that variable is set or when I use the SPK.
Here are my test scenarios:
- DSM GUI + `DSM_HOSTNAME` variable set = Working
- DSM GUI + `DSM_HOSTNAME` variable not set = Not Working
- Using the SPK = Working
- Using the CLI via SSH without `--hostname=homebridge` = Working
- Using the CLI via SSH with `--hostname=homebridge` = Not Working
All Bonjour options are enabled as I use Time Machine as well:
Could you try running this at the command line via ssh to see if you're able to get it working this way?
docker rm -f homebridge
docker run --net=host oznu/homebridge
Hi @oznu, thanks for your reply.
When I have the Bonjour service with SMB broadcast on DSM enabled, I get this result:
<- Screenshots removed ->
When I have Bonjour service with SMB Broadcast disabled, I get this result:
<- Screenshots removed ->
I'm wondering if this is just not working on Synology NAS units that have more than one ethernet interface active. In the case of every single person who has posted their log output on this issue, Avahi is trying to bind to both `eth0` and `eth1`, which suggests multiple interfaces are present. If this is the case, it would explain why it works for some and not for others, and would also be the reason why I'm unable to replicate it with my single-port NAS.
If anyone who is experiencing this issue is able to temporarily disconnect their secondary interfaces and see if this fixes the problem, it would be most appreciated. If it does solve the problem, I can push an update which will force Avahi to only broadcast on the primary interface.
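If that turns out to be the cause, Avahi itself can be restricted to a single interface in `avahi-daemon.conf`. A minimal sketch follows; the interface name `eth0` is an assumption and should be checked on the NAS (e.g. with `ip link`):

```ini
[server]
# Only publish on the primary interface; adjust eth0 to match the NAS.
allow-interfaces=eth0
```

This keeps Avahi off `docker0` and any disconnected secondary ports entirely, rather than withdrawing and re-registering records on each of them.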
Hi oznu,
I do have a 415+ with two Ethernet ports, correct. But I am only using the first port. There is nothing connected to port 2; it has a self-assigned IP address and its status is disconnected. But it does not seem possible via the GUI to completely disable port 2?
Same here. 415+, second port disconnected.
Thanks for reporting back. I think it's still worth testing only binding Avahi to the primary interface; my expectations of this actually working are pretty low though.
I've added a new Docker tag, `avahi-primary-interface`, that will tell Avahi to only publish on the primary interface when `DSM_HOSTNAME` is set.
docker run --net=host -e DSM_HOSTNAME=$(hostname) oznu/homebridge:avahi-primary-interface
I have no idea if this will actually fix the problem, but as mentioned earlier, every person who has posted log output on this issue has more than one ethernet interface, so I think it's worth a try.
When entering this command this happens:
<- Screenshots removed ->
I need to add: it went on until hostname 16 was reached, then it stopped.
Then I stopped the activated Bonjour service, and only after this did the last 3 lines (starting with the "Server startup complete" message) pop up.
Thanks for testing. One more thing to try, I set Avahi to debug mode which should tell us exactly what record it's conflicting with. This may or may not help, but we'll see.
docker pull oznu/homebridge:avahi-primary-interface
docker run --net=host -e DSM_HOSTNAME=$(hostname) oznu/homebridge:avahi-primary-interface
<- Screenshots removed ->
Only after I disabled the Bonjour service did the last 4 lines appear.
I still have this on a Synology docker.
events.js:183
throw er; // Unhandled 'error' event
^Error: dns service error: unknown
at Advertisement.on_service_registered (/usr/local/share/.config/yarn/global/node_modules/mdns/lib/advertisement.js:42:24)
at SocketWatcher.MDNSService.self.watcher.callback (/usr/local/share/.config/yarn/global/node_modules/mdns/lib/mdns_service.js:18:40)
I have a second "docker" network interface next to the single ethernet port of my DS216+II.
I tried everything in this thread, but nothing works: switching off IPv6, disabling Bonjour, trying to manually run the Docker image. Interestingly, the message appears only when shutting down Docker, not while it is running.
If I run the image manually, I don't get this error message, but it still does not work.
Hi,
I get this with the latest image on my DS918+. I have not seen the *** WARNING *** part before.
2018-01-03 18:46:47 | stdout | Host name conflict, retrying with DS918-10
2018-01-03 18:46:47 | stdout | Withdrawing address record for fe80::211:32ff:fe82:6a97 on eth0.
2018-01-03 18:46:47 | stdout | Withdrawing address record for 192.168.20.20 on ovs_bond0.
2018-01-03 18:46:46 | stdout | Registering new address record for fe80::211:32ff:fe82:6a97 on eth0.*.
2018-01-03 18:46:46 | stdout | Registering new address record for 192.168.20.20 on ovs_bond0.IPv4.
2018-01-03 18:46:46 | stdout | Registering new address record for 172.17.0.1 on docker0.IPv4.
2018-01-03 18:46:46 | stdout | Host name conflict, retrying with DS918-9
2018-01-03 18:46:46 | stdout | Withdrawing address record for fe80::211:32ff:fe82:6a97 on eth0.
2018-01-03 18:46:46 | stdout | Withdrawing address record for 192.168.20.20 on ovs_bond0.
2018-01-03 18:46:45 | stdout | *** WARNING *** For more information see <http://0pointer.de/avahi-compat?s=libdns_sd&e=node&f=DNSServiceRegister>
2018-01-03 18:46:45 | stdout | *** WARNING *** Please fix your application to use the native API of Avahi!
2018-01-03 18:46:45 | stdout | *** WARNING *** The program 'node' called 'DNSServiceRegister()' which is not supported (or only supported partially) in the Apple Bonjour compatibility layer of Avahi.
2018-01-03 18:46:45 | stdout | *** WARNING *** For more information see <http://0pointer.de/avahi-compat?s=libdns_sd&e=node>
2018-01-03 18:46:45 | stdout | *** WARNING *** Please fix your application to use the native API of Avahi!
2018-01-03 18:46:45 | stdout | *** WARNING *** The program 'node' uses the Apple Bonjour compatibility layer of Avahi.
2018-01-03 18:46:45 | stdout | Registering new address record for fe80::211:32ff:fe82:6a97 on eth0.*.
2018-01-03 18:46:45 | stdout | Registering new address record for 192.168.20.20 on ovs_bond0.IPv4.
2018-01-03 18:46:45 | stdout | Registering new address record for 172.17.0.1 on docker0.IPv4.
2018-01-03 18:46:45 | stdout | Host name conflict, retrying with DS918-8
2018-01-03 18:46:45 | stdout | Withdrawing address record for fe80::211:32ff:fe82:6a97 on eth0.
2018-01-03 18:46:45 | stdout | Withdrawing address record for 192.168.20.20 on ovs_bond0.
2018-01-03 18:46:44 | stdout | Registering new address record for fe80::211:32ff:fe82:6a97 on eth0.*.
2018-01-03 18:46:44 | stdout | Registering new address record for 192.168.20.20 on ovs_bond0.IPv4.
2018-01-03 18:46:44 | stdout | Registering new address record for 172.17.0.1 on docker0.IPv4.
Any updates on this? This drives me nuts.
I tried to install Homebridge directly and via the SPK. I have a DS with two ethernet ports but use only one.
Scratch that, it is working fine now.
I just updated the Synology Docker app via the app store to its newest version. Check that.
I am also trying to get this running on a DS 713+ with dual ethernet interfaces. I see the thread has been closed, does that mean there is a definitive answer or workaround?
@CRDunne - this issue is still open. I've tagged it as "Help Wanted" as I can't replicate this on my own Synology NAS and I'm out of ideas for what to get users to test.
For me it works perfectly now!
Hello everybody,
I'm new to these products and I want to try Homebridge on my Synology (1517+).
I followed the instructions, and after rebooting my container everything looked good.
I launched my iPhone and... nothing.
I have an LACP connection across my 4 ports (no IPv6). I tried breaking the LACP and disconnecting 3 of my 4 ports, and still nothing on my iPhone.
I added other sources (Fibaro HC2, MelCloud); still nothing.
I have a firewall between my WiFi and my internal network (Fortigate 60E with 2 internal interfaces and a WAN). I checked my rules and everything seems fine. (I added a WiFi router connected directly to my switch to rule out a firewall problem.)
I'm pasting the end of my log, but I don't know where to look.
Hi Everyone,
It seems like the best solution to this problem would be to remove Avahi from the container entirely; doing so will prevent any possible conflicts with other Avahi/Bonjour stacks running on either the Docker host or inside another container. Luckily, when the hard work done here https://github.com/KhaosT/HAP-NodeJS/pull/495 is merged in, we'll be able to do this.
In the meantime, I have created a test image which runs a customised version of Homebridge based on that pull request, so you can test running Homebridge without Avahi. Just use the no-avahi image tag:
# x64
docker run --net=host oznu/homebridge:no-avahi
# arm
docker run --net=host oznu/homebridge:no-avahi-raspberry-pi
For anyone who is still experiencing Host name conflict problems, it would be great to get your feedback on this setup to confirm it solves the problem.
Feedback from others who already have it working would be great as well. I was able to do a "drop in replacement" of this image and everything just worked - I didn't need to re-pair with my iPhone or reconfigure accessories.
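For reference, that drop-in swap might look something like the following sketch. The container name `homebridge` and the volume path `/volume1/docker/homebridge` are placeholders; adjust them to match your existing setup.

```shell
# Pull the no-avahi variant, then recreate the container reusing the same
# config volume, so pairing data and accessory config are preserved.
docker pull oznu/homebridge:no-avahi
docker stop homebridge
docker rm homebridge
docker run -d --net=host --name=homebridge \
  -v /volume1/docker/homebridge:/homebridge \
  oznu/homebridge:no-avahi
```

Because the persisted `/homebridge` volume is reused, re-pairing with the iPhone should not be necessary, matching the drop-in experience described above.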
Note 1: If you're running any of the following plugins, please be aware these depend on Avahi (mdns) to operate and will not work with the no-avahi image.
homebridge-controllerlink
homebridge-samsung-cast-tv
homebridge-alexa
homebridge-rfbridge
homebridge-yamaha
homebridge-plugin-chromecast
homebridge-tradfri
homebridge-mcuiot
homebridge-dacp
homebridge-wssensor
homebridge-plugin-chromecast-keith
homebridge-automation-chromecast
Note 2: If you run multiple instances of Homebridge, each instance will require a different name in the config.json to work, otherwise you're going to get a "Service name is already in use on the network" error.
Note 3: You can't install using the no-avahi tag on your Synology NAS using homebridge-syno-spk yet. If I get some confirmations that this setup works well, I might make it an option.
Just to say that I was seeing the errors when using the default image, but I just switched to the no-avahi image and it works great and fixes the issue for me. Thanks for putting that together!
Hey @oznu, just a quick question: Does this also address grover/homebridge-dacp#10 that people have been reporting in homebridge-dacp?
It doesn't look like it, but wanted to confirm. If not: Do you have an idea, what's causing getaddrinfo to fail inside the docker containers?
Note that I switched the latest version (not published yet) to make use of Bonjour, but still no dice.
@oznu As the developer behind several of those affected plugins:
homebridge-alexa
homebridge-yamaha - I'm running my own branch
homebridge-mcuiot
homebridge-wssensor
Any thoughts on how to resolve the issue?
In each I'm using MDNS device discovery to identify devices on the network.
PS: I don't have a Synology, but I have some users of my plugins reporting issues.
There are two issues. This issue is more about conflicts with other Bonjour/Avahi stacks that prevent Homebridge from running at all. The no-avahi variant of the image would have prevented your plugins from installing at all, so I don't think the issue you referenced is related to that comment.
The issue you referenced is because Alpine Linux lacks nss-mdns, so it can't resolve .local domains using the system DNS. I wrote a small library that resolves .local domains in pure JavaScript, which solved the problem: mdns-resolver.
The homebridge-dacp plugin author got it working using this: grover/homebridge-dacp@63ab4b8
Hi @NorthernMan54,
This library is a pure-JavaScript implementation of Bonjour which I would recommend; for just discovering devices, the master version should be fine:
https://www.npmjs.com/package/bonjour
A pending pull request for HAP-NodeJS uses this in place of the mdns library.
It works great on Linux, macOS and Windows (you don't even need to install Apple Bonjour).
The trick to making any of the mdns libraries work on Alpine Linux is to resolve the .local domain to an IP address in JavaScript.
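The missing-nss-mdns symptom described above can be checked from a shell inside the container. This is a sketch: `homebridge.local` is a placeholder hostname, and `avahi-resolve` is only available when the Avahi tools are installed and avahi-daemon is running (so it won't work in the no-avahi image).

```shell
# On Alpine without nss-mdns, the system resolver cannot handle .local names:
getent hosts homebridge.local || echo "system resolver cannot resolve .local"

# Avahi's own resolver bypasses the system resolver and queries mDNS directly:
avahi-resolve -n homebridge.local
```

This is why a pure-JavaScript resolver such as mdns-resolver works where the system DNS fails: it talks mDNS itself instead of going through the libc resolver.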
So I used the https://github.com/oznu/homebridge-syno-spk link and it worked perfectly for about a month.
I then turned on Bonjour to enable time machine backup.
The container will no longer start and I have documented such on the 'Issues' page of syno-spk. I've also disabled bonjour to see whether that would fix the problem. It did not.
It seems my issue may be related to this one, but I am unsure.
Homebridge 0.4.39 has been released and drops the mdns dependency. To ensure plugins that depend on mdns still work, I'll still be including Avahi in the default image.
The no-avahi version of this image will still be maintained with both Avahi and dbus disabled. People who are using the no-avahi version at the moment should download the latest version of that tag, which now includes the official Homebridge package instead of my customised version.
After updating the GUI Plugin for Homebridge today, I'm getting the same error as the OP.
Homebridge still works fine, I just can't access the GUI anymore.
I used the command inside the Docker image to update and restarted the container. Everything works fine again now.
Thanks a lot!
See:
- https://github.com/oznu/docker-homebridge#2-when-running-on-synology-dsm-set-the-dsm_hostname-environment-variable
- https://github.com/oznu/docker-homebridge/wiki/Homebridge-on-Synology#2-create-container
This hostname must match what DSM is using exactly or else it will still fail.
Let me know if this does not work for you.
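Assuming the DSM_HOSTNAME environment variable documented in the links above, creating the container from the command line might look like this sketch. `DiskStation` and the volume path are placeholders; the value is case-sensitive and must match the hostname in DSM exactly.

```shell
# DSM_HOSTNAME must exactly match the hostname configured in
# DSM Control Panel > Network (placeholder value: DiskStation).
docker run -d --net=host --name=homebridge \
  -e DSM_HOSTNAME=DiskStation \
  -v /volume1/docker/homebridge:/homebridge \
  oznu/homebridge
```

Setting this avoids the container's Avahi daemon fighting DSM's own mDNS stack over the same host name, which is what produces the endless `-48`, `-49`, ... retry suffixes in the logs.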
That fixed the hostname problem. :) Thank you!
Hi,
I have 7 oznu/homebridge containers that have been running for almost a year without problems.
Since a couple of days ago I have the same issue on my containers.
IPv6 is deactivated; changing the status of the Bonjour service had no success; setting DSM_HOSTNAME had no success.
What is strange is that it's trying to use the gateway IP address of my Docker bridge, 172.17.0.1.
My DSM_HOSTNAME=HOME-NAS
DS IP 192.168.1.50
Here are a couple of examples from different Homebridge logs, with DSM_HOSTNAME set and without.
Host name conflict, retrying with HOME-NAS-66
Registering new address record for fe80::bce7:faff:fe83:ccd3 on dockereab6db9.*.
Registering new address record for fe80::e849:a8ff:fe05:977b on dockercdfbee0.*.
Registering new address record for fe80::d419:66ff:fec6:44d2 on docker610261f.*.
Registering new address record for fe80::42:16ff:fecb:7444 on docker0.*.
Registering new address record for 172.17.0.1 on docker0.IPv4.
Registering new address record for 192.168.1.50 on eth0.IPv4.
crond[354]: USER root pid 431 cmd run-parts /etc/periodic/15min
Withdrawing address record for fe80::bce7:faff:fe83:ccd3 on dockereab6db9.
Withdrawing address record for fe80::e849:a8ff:fe05:977b on dockercdfbee0.
Withdrawing address record for fe80::d419:66ff:fec6:44d2 on docker610261f.
Withdrawing address record for fe80::42:16ff:fecb:7444 on docker0.
Withdrawing address record for 192.168.1.50 on eth0.
Host name conflict, retrying with HOME-NAS-67
Registering new address record for fe80::bce7:faff:fe83:ccd3 on dockereab6db9.*.
Registering new address record for fe80::e849:a8ff:fe05:977b on dockercdfbee0.*.
Registering new address record for fe80::d419:66ff:fec6:44d2 on docker610261f.*.
Registering new address record for fe80::42:16ff:fecb:7444 on docker0.*.
Registering new address record for 172.17.0.1 on docker0.IPv4.
Registering new address record for 192.168.1.50 on eth0.IPv4.
Without DSM_HOSTNAME set:
Host name conflict, retrying with hb-alexa-copy-2416
Registering new address record for fe80::bce7:faff:fe83:ccd3 on dockereab6db9.*.
Registering new address record for fe80::e849:a8ff:fe05:977b on dockercdfbee0.*.
Registering new address record for fe80::d419:66ff:fec6:44d2 on docker610261f.*.
Registering new address record for fe80::42:16ff:fecb:7444 on docker0.*.
Registering new address record for 172.17.0.1 on docker0.IPv4.
Registering new address record for 192.168.1.50 on eth0.IPv4.
Withdrawing address record for fe80::bce7:faff:fe83:ccd3 on dockereab6db9.
Withdrawing address record for fe80::e849:a8ff:fe05:977b on dockercdfbee0.
Withdrawing address record for fe80::d419:66ff:fec6:44d2 on docker610261f.
Withdrawing address record for fe80::42:16ff:fecb:7444 on docker0.
Withdrawing address record for 192.168.1.50 on eth0.
Host name conflict, retrying with hb-alexa-copy-2417
Registering new address record for fe80::bce7:faff:fe83:ccd3 on dockereab6db9.*.
Registering new address record for fe80::e849:a8ff:fe05:977b on dockercdfbee0.*.
Registering new address record for fe80::d419:66ff:fec6:44d2 on docker610261f.*.
Registering new address record for fe80::42:16ff:fecb:7444 on docker0.*.
Registering new address record for 172.17.0.1 on docker0.IPv4.
Registering new address record for 192.168.1.50 on eth0.IPv4.
Host name conflict, retrying with hb-shelly-copy-2419
Registering new address record for fe80::bce7:faff:fe83:ccd3 on dockereab6db9.*.
Registering new address record for fe80::e849:a8ff:fe05:977b on dockercdfbee0.*.
Registering new address record for fe80::d419:66ff:fec6:44d2 on docker610261f.*.
Registering new address record for fe80::42:16ff:fecb:7444 on docker0.*.
Registering new address record for 172.17.0.1 on docker0.IPv4.
Registering new address record for 192.168.1.50 on eth0.IPv4.
Withdrawing address record for fe80::bce7:faff:fe83:ccd3 on dockereab6db9.
Withdrawing address record for fe80::e849:a8ff:fe05:977b on dockercdfbee0.
Withdrawing address record for fe80::d419:66ff:fec6:44d2 on docker610261f.
Withdrawing address record for fe80::42:16ff:fecb:7444 on docker0.
Withdrawing address record for 192.168.1.50 on eth0.
Host name conflict, retrying with hb-shelly-copy-2420
Registering new address record for fe80::bce7:faff:fe83:ccd3 on dockereab6db9.*.
Registering new address record for fe80::e849:a8ff:fe05:977b on dockercdfbee0.*.
Registering new address record for fe80::d419:66ff:fec6:44d2 on docker610261f.*.
Registering new address record for fe80::42:16ff:fecb:7444 on docker0.*.
Registering new address record for 172.17.0.1 on docker0.IPv4.
Registering new address record for 192.168.1.50 on eth0.IPv4.
Hope you can help me.
Thank you
@giss69 swap to the no-avahi image tag:
https://github.com/oznu/docker-homebridge/wiki#image-variants
Sorry, I've opened a new issue.
Is this the right version?
DS 218+
no-avahi | amd64, arm32v6, arm64v8 | Alpine Linux
What is the easiest way to swap? Or should I delete the container and set up a new one?
Also been having this issue. I'm a new user, so I don't have any historical knowledge, but just chiming in as well.
@stephenjmcmahon this comment is still up-to-date:
Or use this package: https://github.com/oznu/homebridge-syno-spk
And make sure you select the default option (no-avahi) when prompted.
@oznu appreciate the quick reply! I've spun up no-avahi and will report back.
I also tried DSM_HOSTNAME with a matching network hostname, but no dice there.
Hi, same problem here with QNAP Container Station. The hostname is set to the name of the container.