net=host mode bug on Mac: container with default bridge mode cannot access ports open on the host
ziyouchutuwenwu opened this issue · 56 comments
Output of docker version:
(python2) mmcdeMacBook-Air:~ mmc$ docker version
Client:
Version: 1.12.0
API version: 1.24
Go version: go1.6.3
Git commit: 8eab29e
Built: Thu Jul 28 21:15:28 2016
OS/Arch: darwin/amd64
Server:
Version: 1.12.0
API version: 1.24
Go version: go1.6.3
Git commit: 8eab29e
Built: Thu Jul 28 21:15:28 2016
OS/Arch: linux/amd64
Output of docker info:
Containers: 3
Running: 2
Paused: 0
Stopped: 1
Images: 3
Server Version: 1.12.0
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 23
Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: host null bridge overlay
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Security Options: seccomp
Kernel Version: 4.4.15-moby
Operating System: Alpine Linux v3.4
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 1.954 GiB
Name: moby
ID: D3CE:BLOM:7POQ:5KMA:LIMF:PSVT:MEAQ:GNEV:4JNX:N5YW:6HCJ:QD3X
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): true
File Descriptors: 26
Goroutines: 40
System Time: 2016-07-29T16:15:59.152792247Z
EventsListeners: 1
Registry: https://index.docker.io/v1/
Insecure Registries:
127.0.0.0/8
Additional environment details (AWS, VirtualBox, physical, etc.):
virtualbox 5.1.2 r108956
Here is my host Mac network info.
issue 1
Steps to reproduce the issue:
- start a new container with --net=host mode
docker run -it --net=host edib/elixir-phoenix-dev /bin/bash
- run ip address in the container
Describe the results you received:
The host IP is 192.168.1.55, but the container IP is 192.168.65.xx; here is the screenshot.
Describe the results you expected:
I expected to see the same IP as the host network interface eth0.
issue 2
Steps to reproduce the issue:
- start a new container in default mode
docker run -it edib/elixir-phoenix-dev /bin/bash
- in the container, run
telnet 172.17.0.1 8888
Describe the results you received:
8888 is a port on my host, listening for HTTP requests (screenshot).
telnet to 172.17.0.1 from the container failed, but ping 172.17.0.1 works.
Describe the results you expected:
A running container should be able to telnet to the host port; this works under Docker on Linux.
Any progress on this issue? We are running into the same problem.
Hi, is there a timeline? I check every day, but nothing has happened.
I'm blocked by this issue as well.
I'm experiencing the first issue as well.
Client:
Version: 1.12.0
API version: 1.24
Go version: go1.6.3
Git commit: 8eab29e
Built: Thu Jul 28 21:04:48 2016
OS/Arch: darwin/amd64
Experimental: true
In this case, --net=host is working as expected... Its function, however, is not exactly intuitive on the Mac ;)
The ability to set the --net, --pid, --ipc namespaces to host refers to host in the context of where the container engine is. In this case that is a Linux VM, so for every use of host, consider it to mean vm.
With that said, it would help us to understand what sorts of applications need to run with --net=host so we can find a solution for this requirement.
You can always use port mapping as an alternative... For example, docker run -p 8888:8888 will allow you to access your service at localhost:8888 from the Mac.
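To make the port-mapping workaround concrete, here is a minimal sketch. The image name (nginx) and host port are illustrative assumptions, not from the original report:

```shell
# Publish container port 80 on the Mac's port 8888.
# The image (nginx) and ports are placeholders for illustration.
docker run -d --name web -p 8888:80 nginx

# From the Mac (not from inside the VM), the service is now reachable:
curl http://localhost:8888/
```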
It might be easier to split this into two issues, one of them to track being unable to access services running on the Mac via the container's default gateway IP address.
@dave-tucker that clarifies things. Then how does one allow the container to access the mac host's interfaces (what host should mean imho)? Or is that a Linux only thing?
@AlexGustafsson: thanks for the clarification. Allowing the container to be accessed via a specific Mac host interface sounds entirely reasonable to me. I had hoped something like docker run -p 10.0.0.1:80:80 would work, but unfortunately it fails because, for historical reasons, we try to bind to 10.0.0.1:80 on both the Mac and the VM, and the VM doesn't have the same interface as the Mac. I'll investigate fixing this.
@ziyouchutuwenwu to telnet to a host port, you can telnet to the default gateway (192.168.65.1). For example I can run:
$ docker run -it justincormack/debian bash
Unable to find image 'justincormack/debian:latest' locally
latest: Pulling from justincormack/debian
efd26ecc9548: Pull complete
a3ed95caeb02: Pull complete
2df06b6623ba: Pull complete
Digest: sha256:4c0acbaf234244e7a565b1ed0f3bbe87561d1c0440f5e9382941eb35bb8e518a
Status: Downloaded newer image for justincormack/debian:latest
root@9833dddf1ed1:/# ssh user@192.168.65.1
The authenticity of host '192.168.65.1 (192.168.65.1)' can't be established.
ECDSA key fingerprint is cc:ce:e0:0f:03:ae:1b:be:b3:28:8b:75:40:8c:ff:e3.
Are you sure you want to continue connecting (yes/no)?
Is this sufficient for your use-case?
@djs55, thank you for your help.
The actual gateway IP should be 192.168.1.1; the 192.168.65.1 address is confusing.
For users, displaying the same IP address as the host would be much better than the confusing 192.168.65.x addresses, I think.
Not resolved in Version 1.12.1 (build: 12133)
2d5b4d9c3daa089e3869e6355a47dd96dbf39856
docker info
Containers: 1
Running: 1
Paused: 0
Stopped: 0
Images: 1
Server Version: 1.12.1
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 10
Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: overlay bridge null host
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Security Options: seccomp
Kernel Version: 4.4.20-moby
Operating System: Alpine Linux v3.4
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 1.953 GiB
Name: moby
ID: 57YZ:VNCF:YHUO:DG2H:67D2:TCYI:5FY7:FIZN:F4O7:XFAD:EGNN:3E4I
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): true
File Descriptors: 22
Goroutines: 41
System Time: 2016-09-16T21:44:52.575686599Z
EventsListeners: 1
Registry: https://index.docker.io/v1/
Insecure Registries:
127.0.0.0/8
docker version
Client:
Version: 1.12.1
API version: 1.24
Go version: go1.7.1
Git commit: 6f9534c
Built: Thu Sep 8 10:31:18 2016
OS/Arch: darwin/amd64
Server:
Version: 1.12.1
API version: 1.24
Go version: go1.6.3
Git commit: 23cf638
Built: Thu Aug 18 17:52:38 2016
OS/Arch: linux/amd64
Mac version:
OS X El Capitan
10.11.6 (15G1004)
My Docker test command:
docker run -it --net=host edib/elixir-phoenix-dev /bin/bash
ip address | grep 192
The result is:
inet 192.168.65.2/28 brd 192.168.65.15 scope global eth0
My host IP address is:
192.168.1.55
@ziyouchutuwenwu the resolution discussed above was not being able to use --net=host, which as discussed is difficult to achieve, but being able to bind specifically to a particular host interface with port publishing. We may be able to make --net=host work one day, but there is currently no timetable for this; it needs significant engineering work.
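Binding a published port to a specific host interface looks like the sketch below. The interface IP reuses the 192.168.1.55 host address mentioned earlier in the thread; the image and ports are illustrative assumptions:

```shell
# Publish container port 80 only on one host interface, instead of 0.0.0.0.
# The image (nginx), interface IP, and ports are placeholder assumptions.
docker run -d -p 192.168.1.55:8080:80 nginx

# The service is then reachable only via that interface:
curl http://192.168.1.55:8080/
```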
@justincormack thank you very much, I will use port mapping instead.
@ziyouchutuwenwu, I'm curious why it's hard to bind to a host interface?
@justincormack thank you! I don't know why, but this is the only answer I found among all the bugs about --net=host not currently working on Docker for Mac.
I was very surprised to see this not working in the shiny new macOS version of Docker (working in the intuitive way, that is, not in the "yes, strictly speaking 'host' means 'vm' on macOS" way).
Is there a workaround when --net=host is required for the container to do e.g. uPnP discovery on the local network? E.g.: https://home-assistant.io/components/discovery/
With the old Docker I could create a bridged adapter in VirtualBox to achieve this, if I remember correctly.
Still doesn't work in version 1.12.6.
Any update on this issue? I tried the latest Docker and I'm still experiencing it.
It appears a standard xhyve install can allow access to the xhyve VM from the outside, as indicated by the blog post at http://mifo.sk/post/xhyve-for-development
I think this would be the first step toward properly supporting --net=host in Docker for Mac.
Can someone from the Docker team investigate this?
@justincormack you mentioned a while back that this has no timetable and needs significant engineering work... has there been any forward movement to put this on the timetable?
I have a similar issue: I get a 192.168.65.x IP as the manager node IP when running docker swarm init, which will not work for any worker node that wants to join the swarm. I need to provide --advertise-addr <my real host ip> to make the swarm work.
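The workaround described above looks like this sketch; the IP address is an illustrative placeholder for your real host IP:

```shell
# Initialize the swarm, advertising the Mac's real LAN IP instead of the
# VM-internal 192.168.65.x address. 192.168.1.55 is a placeholder.
docker swarm init --advertise-addr 192.168.1.55

# Workers then join using the address and token printed by the command above.
```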
If I understand correctly, I was blocked by this issue. It's confusing for a new Docker user like me when I try to run HelloWorld-level apps and Docker for Mac doesn't respond as expected.
So please at least add a note about 'host' mode to the documentation until this issue is solved.
Agree with kehao95; a note would help a lot and avoid frustration, especially in the documentation of the otherwise excellent native Mac Docker client.
I'm facing this issue as well. I'm using Emby server, but DLNA publishes on the internal subnet only. +1 for a timetable.
It's 2018 and this still doesn't work on Mac.
We need --network host to run a Selenium container that can access a non-Docker server on the host.
Still not working. Has anybody found a workaround?
Same issue here, still doesn't work.
This problem must be at least two years old, and there has been no movement from Docker. Sad.
Also need this working...
I think the reason is xhyve, as Docker on Mac runs all containers inside this little VM. I just ran a container with --net=host, and the network interfaces in the xhyve VM and in the container match, but not those of the original host, which is macOS. So it makes sense that --net=host does not work on Mac: the actual host is xhyve in this case, not the Mac.
To log into xhyve:
screen ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/tty
And here are the network interfaces in the xhyve VM:
docker0 Link encap:Ethernet HWaddr 02:42:FE:10:5C:44
inet addr:172.17.0.1 Bcast:172.17.255.255 Mask:255.255.0.0
inet6 addr: fe80::42:feff:fe10:5c44/64 Scope:Link
UP BROADCAST MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:9 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:718 (718.0 B)
docker_gwbridge Link encap:Ethernet HWaddr 02:42:D4:6B:46:79
inet addr:172.20.0.1 Bcast:172.20.255.255 Mask:255.255.0.0
inet6 addr: fe80::42:d4ff:fe6b:4679/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:20 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:1488 (1.4 KiB)
eth0 Link encap:Ethernet HWaddr 02:50:00:00:00:01
inet addr:192.168.65.3 Bcast:192.168.65.255 Mask:255.255.255.0
inet6 addr: fe80::3d54:c71b:fbef:5f7b/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:26825 errors:0 dropped:0 overruns:0 frame:0
TX packets:20154 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:19936409 (19.0 MiB) TX bytes:2268037 (2.1 MiB)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:10362 errors:0 dropped:0 overruns:0 frame:0
TX packets:10362 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1
RX bytes:945027 (922.8 KiB) TX bytes:945027 (922.8 KiB)
veth2f727a0 Link encap:Ethernet HWaddr 86:35:3E:23:15:16
inet6 addr: fe80::8435:3eff:fe23:1516/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:39 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:2906 (2.8 KiB)
And a container with --net=host:
$ docker run -it --net=host alpine ash
/ # ifconfig
br-47d654cdc4b8 Link encap:Ethernet HWaddr 02:42:E3:30:A4:78
inet addr:10.0.0.1 Bcast:10.0.0.255 Mask:255.255.255.0
UP BROADCAST MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
docker0 Link encap:Ethernet HWaddr 02:42:FE:10:5C:44
inet addr:172.17.0.1 Bcast:172.17.255.255 Mask:255.255.0.0
inet6 addr: fe80::42:feff:fe10:5c44/64 Scope:Link
UP BROADCAST MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:9 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:718 (718.0 B)
docker_gwbridge Link encap:Ethernet HWaddr 02:42:D4:6B:46:79
inet addr:172.20.0.1 Bcast:172.20.255.255 Mask:255.255.0.0
inet6 addr: fe80::42:d4ff:fe6b:4679/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:20 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:1488 (1.4 KiB)
eth0 Link encap:Ethernet HWaddr 02:50:00:00:00:01
inet addr:192.168.65.3 Bcast:192.168.65.255 Mask:255.255.255.0
inet6 addr: fe80::3d54:c71b:fbef:5f7b/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:26825 errors:0 dropped:0 overruns:0 frame:0
TX packets:20154 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:19936409 (19.0 MiB) TX bytes:2268037 (2.1 MiB)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:10658 errors:0 dropped:0 overruns:0 frame:0
TX packets:10658 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1
RX bytes:971963 (949.1 KiB) TX bytes:971963 (949.1 KiB)
veth2f727a0 Link encap:Ethernet HWaddr 86:35:3E:23:15:16
inet6 addr: fe80::8435:3eff:fe23:1516/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:39 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:2906 (2.8 KiB)
So the network interfaces match exactly: both xhyve and the container share the same network stack. The host (xhyve) IP and the container IP are both 192.168.65.3 in this case, which is totally different from my Mac's IP:
en0: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500
ether ac:bc:32:cf:78:51
inet6 fe80::10d2:890d:b005:e2e3%en0 prefixlen 64 secured scopeid 0x5
inet 192.168.0.29 netmask 0xffffff00 broadcast 192.168.0.255
inet6 2601:646:9601:20c0:18df:11e2:9f4f:be09 prefixlen 64 autoconf secured
inet6 2601:646:9601:20c0:65a6:b2e4:9c50:34dc prefixlen 64 autoconf temporary
nd6 options=201<PERFORMNUD,DAD>
media: autoselect
status: active
On Linux it works because containers run directly on the Linux kernel. I expect the same result on Windows, as containers run inside a hypervisor there as well.
A Linux container can't run directly under macOS or Windows because the kernels are different; it has to run on a Linux kernel, and a VM/hypervisor is the only solution for this. I don't think it can be supported natively unless we do some port forwarding from the hypervisor to the original host.
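Conceptually, the port forwarding mentioned above could be sketched with socat on the Mac. This is only a sketch of the idea, not something Docker for Mac does; 192.168.65.3 is the VM eth0 address from the ifconfig output above, the port is an illustrative assumption, and in practice the VM's 192.168.65.x subnet is not directly reachable from the Mac, which is why Docker for Mac implements its own forwarding internally:

```shell
# Relay TCP connections arriving on the Mac's port 8080 into the VM,
# where a container sharing the VM's network stack is listening.
# 192.168.65.3 is the VM's eth0 address shown above; 8080 is a placeholder.
socat TCP-LISTEN:8080,fork,reuseaddr TCP:192.168.65.3:8080
```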
Issues go stale after 90d of inactivity.
Mark the issue as fresh with a /remove-lifecycle stale comment.
Stale issues will be closed after an additional 30d of inactivity.
Prevent issues from auto-closing with a /lifecycle frozen comment.
If this issue is safe to close now please do so.
Send feedback to Docker Community Slack channels #docker-for-mac or #docker-for-windows.
/lifecycle stale
/remove-lifecycle stale
@RSamal the fact that Docker on macOS is implemented via an xhyve VM is an "implementation detail"; when I, as a user, pass --net host, I expect the same behavior as if I were running the docker client on a Linux host.
"unless we do some port forwarding from the hypervisor to the original host"
Exactly. This is the kind of magic that Docker should do in the background, to ensure that things work the same regardless of which platform Docker runs on.
/lifecycle frozen
/remove-lifecycle stale
+1
Same issue here. Has anybody found a workaround?
My workaround has been using the special address host.docker.internal, as mentioned here:
https://docs.docker.com/docker-for-mac/networking/#httphttps-proxy-support
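From inside a container, a service listening on the Mac can then be reached like this. The port is an illustrative assumption; host.docker.internal is a name resolved by Docker for Mac itself:

```shell
# Assuming a service listens on the Mac at port 8888 (placeholder),
# reach it from a container via the special DNS name instead of an IP:
docker run --rm alpine \
  wget -qO- http://host.docker.internal:8888/
```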
Any news? Is this issue still open?
Seems there is no solution yet?
In some cases we cannot use port mapping (e.g. device discovery via broadcast; I don't know the protocol details). Host mode is useful, at least, for debugging.
Any updates on this?
Four years and still no response/fix? We need a fix or workaround urgently.
Why hasn't this been fixed yet? It's been 4 years!
A temporary workaround is to install an Ubuntu VM through VirtualBox, attach it via bridge mode, and then run Docker inside it.
This response to a similar issue worked great for me. Moving from VirtualBox to xhyve was clearly a regression for networking in Docker:
Running HomeBridge on Docker without Host Network Mode
@ganttee have you tested this?
Hi everyone! I tried to follow the guide above.
However, I got lost at the generate_sevice.sh script, and I don't understand the avahi side of things. I think I'm missing a few steps.
I am trying to run it on a Mac with Apple M1 silicon. Any help would be appreciated, thanks!
Anyone know if macvlan interfaces are supported on Macs? (I assume not)
It's been almost 6 years now that this issue has been open. Isn't it doable for the Docker team to implement dynamic port forwarding between xhyve and the host so that --network host works as intended (from the user's perspective, not the dev's)?
Sort of baffled I just wasted hours debugging an issue that has existed for years...
Same here, but I still hope to find a workaround. I've read that some people succeeded by configuring a VM and using host network mode properly; I'm not sure how, though, so a tutorial would be more than welcome.
Use case: use a MacBook Air as a node in a cloud cluster.
Here's a Homebridge-specific tutorial (the VM's network adapter needs to be configured as "bridged", and perhaps promiscuous depending on your use case, in the VM's configuration on the host machine).
The workaround was previously boot2docker/docker-machine (or just a proper Linux VM). Guessing Rancher Desktop or Colima now.
Perhaps someone will correct me.
https://www.paolomainardi.com/posts/docker-performance-macos/
Now with the new release 4.14.0 of Docker Desktop on Mac, which utilizes the Apple Virtualization Framework, it should be possible to add another network adapter with bridged networking, aka network_mode = host?
Is this a big challenge? Can somebody point out the hypervisor configuration for the Apple Virtualization Framework here? :)
Also, thanks for putting it in the "Considering" section of the docker-roadmap!
Going to mark this as a duplicate of docker/roadmap#238 so we have one place to track this.
Closed issues are locked after 30 days of inactivity.
This helps our team focus on active issues.
If you have found a problem that seems similar to this, please open a new issue.
/lifecycle locked