docker/for-win

How to access containers by internal IPs 172.x.x.x

wclr opened this issue Β· 154 comments

wclr commented

How to access containers by internal IP 172.x.x.x from the dev machine (with Docker for Windows installed)? By default you cannot connect to containers.

I found out that it can be achieved by adding routes manually (you actually need to add a route for each sub-network; I usually do it for 172.17 to 172.25):

route /P add 172.17.0.0 MASK 255.255.0.0 10.0.75.2
route /P add 172.18.0.0 MASK 255.255.0.0 10.0.75.2
route /P add 172.19.0.0 MASK 255.255.0.0 10.0.75.2
...
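
To verify the routes took effect, something like the following should work from an elevated cmd prompt (a sketch; 172.17.0.2 is just an example container IP):

route print 172*
ping 172.17.0.2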

Is this a valid method? Shouldn't it be made possible by default?

rn commented

you should be able to access the containers via localhost. Does that not work?

wclr commented

you should be able to access the containers via localhost.

What do you mean by that? You mean port mapping or what?

I want to reach their IPs; in my case I then use dnsdock to get DNS discovery for containers and access them by pretty DNS names (without the need for port mapping)

So you should be able to access containers from your container host using the container IP. You can use docker inspect <container ID> to get your container's IP address.
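
For example, a format string prints just the IP (a sketch, assuming the container is attached to a single network):

docker inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}" <container ID>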

Does that answer your question?

wclr commented

@kallie-b ok, what should I do after I get the IP? I want to ping the container by IP, but it won't work from the dev machine. I'm asking how to do this.

Right, okay--yes, I'm happy to help.

So, can you provide the results that you get when you run docker inspect <container ID>? And let me know which IP address you're trying to use to ping the container--I want to confirm you're using the container's internal IP.

Also, I'm assuming your dev machine is the container host--is that correct? You're not running a VM on your dev machine as the container host, or anything like that?

Could you describe the steps you are taking more specifically (including where each step is executed--in the container, on the container host, or on another, external, host)? Wherever possible, also include any error messages.

wclr commented

I'm assuming your dev machine is the container host

My machine is not a container host; it is a Windows 10 dev machine with Docker for Windows installed. It has only the 10.0.75.x interface related to docker, no 172.x.x.x interface that would allow it to communicate with 172.x.x.x addresses directly. The host machine is Linux running on Hyper-V, called MobyLinuxVM.

As I've mentioned, this will solve the issue:

route /P add 172.0.0.0 MASK 255.0.0.0 10.0.75.2

If I were using Linux (I have never used it with docker), my dev machine would, I assume, also be the docker host, and I could access the docker internal network 172.x.x.x directly without any manually added routes in the routing table.

What I want is a comment on this issue from the docker team, and whether they are going to make the integration between a Windows 10 dev machine and docker internal networks deeper.

wclr commented

There seems to be a problem with the docker network when such a route:

route /P add 172.0.0.0 MASK 255.0.0.0 10.0.75.2

is added

The log is full of events and growing very fast (log.txt reaches up to 1GB in a few hours):

[15:48:00.469][VpnKit         ][Debug  ] com.docker.slirp.exe: Socket.Datagram.input udp:10.0.75.1:54882-172.26.234.194:51029: creating UDP NAT rule
[15:48:00.471][VpnKit         ][Debug  ] com.docker.slirp.exe: Socket.Datagram.input udp:10.0.75.1:54883-172.26.234.194:51029: creating UDP NAT rule
[15:48:00.473][VpnKit         ][Debug  ] com.docker.slirp.exe: Socket.Datagram.input udp:10.0.75.1:54884-172.26.234.194:51029: creating UDP NAT rule
[15:48:00.475][VpnKit         ][Debug  ] com.docker.slirp.exe: Socket.Datagram.input udp:10.0.75.1:54885-172.26.234.194:51029: creating UDP NAT rule
[15:48:00.476][VpnKit         ][Debug  ] com.docker.slirp.exe: Socket.Datagram.input udp:10.0.75.1:54886-172.26.234.194:51029: creating UDP NAT rule

Here is a log with this case:
https://gist.github.com/whitecolor/4940a8566f2b0211f6864cc11adb69be

This also affects the host: CPU usage goes up to 100% some time later.

Can you comment on this as well? What is causing those events in the log?

@whitecolor I'm not sure I understand what you are trying to achieve. Is it a Windows container or a Linux container you are trying to connect to?

wclr commented

@dgageot
I need to connect to running containers from the Windows dev machine where docker is installed.
This can currently be done by adding appropriate routes to the routing table via 10.0.75.2 (this is the IP of the docker Linux host running on Hyper-V, I believe).

wclr commented

Did I fail to explain my request in the OP?

  1. I'm running docker-for-windows on a windows machine.
  2. Containers that run on this platform have internal IPs like 172.18.x.x
  3. I want to reach (be able to ping) running containers directly from the windows machine (not using port mapping; I want to reach the container's IP)

By default one cannot just ping 172.18.x.x, but I found a solution: add a route to the route table:

route /P add 172.18.0.0 MASK 255.255.0.0 10.0.75.2

And now ping 172.18.x.x worked.

But after I installed the latest beta (build 9123), where the network stack was changed a lot, this routing-table method doesn't work anymore.

So can you elaborate on this? How can one reach (ping) 172.x... containers from the windows dev machine? Why did the routing-table method stop working, and how can it be fixed?

@whitecolor Thanks for the workaround!
I also faced this problem under windows; under linux I don't have such a problem...

I need to have access to the containers directly by the container's IP address, for example 172.18.0.3

wclr commented

@Hronom I wonder how it works on linux by default; which gateway routes 172.x addresses to containers?

@whitecolor On linux, if I type ifconfig in the console, I get the following network interfaces:

br-bc76575bc879 Link encap:Ethernet  HWaddr *:*:*:*:*:*  
          inet addr:172.19.0.1  Bcast:0.0.0.0  Mask:255.255.0.0
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

br-fccc8ee02778 Link encap:Ethernet  HWaddr *:*:*:*:*:*  
          inet addr:172.18.0.1  Bcast:0.0.0.0  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:343481 errors:0 dropped:0 overruns:0 frame:0
          TX packets:448723 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:93440945 (93.4 MB)  TX bytes:169198433 (169.1 MB)

docker0   Link encap:Ethernet  HWaddr *:*:*:*:*:*  
          inet addr:172.17.0.1  Bcast:0.0.0.0  Mask:255.255.0.0
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:66359 errors:0 dropped:0 overruns:0 frame:0
          TX packets:77517 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:3569440 (3.5 MB)  TX bytes:203222893 (203.2 MB)

So there is a network interface br-fccc8ee02778 with IP 172.18.0.1 and mask 255.255.0.0

wclr commented

So probably on the windows host such an interface with the proper address should be added too. But should there be an interface for each 172.x network?

If your Windows containers are connecting to the default nat network on the container host, there should be a host vNIC (e.g., vEthernet (nat)) with the NAT network's default gateway IP address assigned to this interface. Could you please verify this by running ipconfig /all?

If that's true, then both the internal NAT network prefix and the external network prefix should be "on-link" from the container host's perspective and routing should happen automatically without creating static routes.
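
For example (a sketch, assuming the default Windows containers nat network; run in an elevated PowerShell):

Get-NetNat
Get-NetIPAddress -InterfaceAlias "vEthernet (nat)"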

I've also created a PR (MicrosoftDocs/Virtualization-Documentation#513) to aid in container networking diagnostics, as well as a clean-up script.

wclr commented

@dgageot can you please comment on this? I believe it is quite an important and basic networking issue.

wclr commented

@Hronom
Can you confirm that the latest beta version doesn't work either (without routes added to the routing table)?
I just remembered that when I installed the latest I might not have checked it with a clean routing table. (I just don't want to install and then roll back again.)

@whitecolor sorry, I don't have a chance to test this under the beta version...

I can confirm that the route add method is not working with the latest beta (1.13.0-rc4-beta34 (9562)). 172.17.0.1 is reachable, but none of the containers are.

I can also confirm that the method is working with 1.12.3 (8488) and 1.12.5 (9503).

wclr commented

@pachkovsky so without the route (out of the box) it doesn't work either in the latest beta, I believe?

@rneugeba @dgageot
Not sure why there is no reaction from the team?

@whitecolor without the route it's not working in either 1.12.x or 1.13

rn commented

@whitecolor could you please provide exact steps to reproduce what you are trying to achieve, including the command line you use to start dnsdock.
thanks

wclr commented

@rneugeba
Well, dnsdock actually has nothing to do with this issue. The problem is with accessing containers by IP from the windows machine.

  • You just start any container (the container should be able to respond to pings).

  • Then you need to get its IP. Suppose it is on the default bridge network, so: docker network inspect bridge (usually something like 172.17.0.2)

  • Try to ping this IP from the windows machine: ping 172.17.0.2.

  • If you were on linux, ping would work out of the box.

  • On docker for windows it doesn't work out of the box.

  • I'm currently using 1.12.3-beta30 (8568) and a possible workaround works: route /P add 172.0.0.0 MASK 255.0.0.0 10.0.75.2

  • But on a later (the latest) beta even this workaround with the route doesn't work

  • Probably it should work out of the box, as it does on linux. What do you think? (A minimal reproduction is sketched below.)
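
A minimal reproduction (a sketch; the container name pingtest and the IP are examples):

docker run -d --name pingtest alpine sleep 3600
docker inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}" pingtest
ping 172.17.0.2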

rn commented

@whitecolor what's your use case? If you want to monitor a container there are a number of other options...

A bit of background: while access via 172.x.x.x (or similar) may work on a local Linux host, there are a number of additional considerations:

  • this is only the case for the default bridge networking mode. Other networking modes work differently, especially if you consider network plugins.
  • We can't really easily, in the general case, provide access to the Linux networking from the Windows host because:
    • the user can change/define different IP prefixes for the default bridge as well as custom bridges
    • some of the IP prefixes may be used on the networks the host is attached to, and messing with routing may therefore cause connectivity issues.

Because of this, it is unlikely to be supported in Docker for Windows (or Docker for Mac).

In general we recommend:

  • explicitly publish ports for services running in containers. This way they are exposed on localhost on the Windows host.
  • Use a container to connect to another container, e.g. docker run -it --rm alpine ping 172.17.0.2 where 172.17.0.2 is the IP of a different container on the same network.
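
For example, with an explicitly published port (a sketch using nginx; the host port 8080 is arbitrary), the service becomes reachable at http://localhost:8080 on the Windows host:

docker run -d -p 8080:80 nginx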

wclr commented

explicitly publish ports for services running in containers. This way they are exposed on localhost on the Windows host.

Use a container to connect to another container, e.g. docker run -it --rm alpine ping 172.17.0.2 where 172.17.0.2 is the IP of a different container on the same network.

Thanks, the second one is good advice for some (I believe rare) cases. But the much more important case is the developer's convenience in accessing running services by fixed names rather than fixed (or not fixed) ports. Having to deal with a different port number for each of multiple services is a very inflexible and clumsy way to go during development.

So here is the use case (why we need it): accessing multiple web services.

  • Often during the dev process there are a number of (web-)services running that one wants to access.
  • With port mapping I have to map each service to its own port. This is a very inconvenient way; I'm not sure why people use it (mostly because they don't want to think about it).
  • For web services a number of ad-hoc solutions can be invented to simplify access: for example using minhost, which can map a random port to a fixed DNS name; another solution is to implement a proxy routing container that routes requests to other containers depending on the name.
  • But if we have direct access from the dev machine, very interesting solutions appear for accessing running services in a very immutable and convenient way. For example, there is dnsdock, a container that runs a DNS service and watches the state of the docker host and other containers (using the docker API). When a container is up, it adds a specific DNS record that points to this container (it is convenient to assign the DNS name via a container label). So on the dev machine I set this DNS service as the default, and thus I'm able to access services in the containers by pretty DNS names from the windows dev machine. So in the browser I just type http://server.my-app.docker or http://client.my-app.docker (instead of some ugly port numbers). (A rough dnsdock invocation is sketched after this list.)
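
A rough dnsdock invocation, based on its README (a sketch; the image tag, flags, and the DNS names may differ between dnsdock versions and setups):

docker run -d --name dnsdock -v /var/run/docker.sock:/var/run/docker.sock -p 53:53/udp aacebedo/dnsdock
nslookup server.my-app.docker 127.0.0.1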

Another example is accessing DB servers with various management clients (for example MongoChef for mongodb), and in general accessing any kind of service with special software installed on the developer's machine.

We can't really easily, in the general case, provide access to the Linux networking from the Windows host because:

So, generally, if there were no access by default, maybe that is even more correct, to avoid conflicts. BUT there should be some way to make it available for those who need it; adding a manual route (route /P add 172.0.0.0 MASK 255.0.0.0 10.0.75.2) is an ok solution.

But we need to understand what happened to the MobyLinuxVM network configuration (I believe something was changed there) and why the method of adding a route doesn't work any more.

I hope the docker for win team will be able to help and resolve this.

rn commented

I had a closer look and I can confirm that the route add trick does work on 1.12.5-stable but does not work on the latest beta (1.13.0-rc4 beta34). Digging a bit deeper, I noticed that with 1.13.0-rc4 the ICMP echo request packets arrive on the interface but get dropped by the FORWARD iptables chain. In 1.13.0-rc4 the default policy for the FORWARD chain is set to DROP, while in 1.12.5 the policy is set to ACCEPT.

It appears that this PR changed the policy in response to this issue.

Basically, with 1.12.x and previous you could access containers from other hosts by default, while with 1.13 you can't anymore. Your route add trick basically allows you to access the Linux VM from "another host", i.e. the host system, and that has been disabled in the upstream docker engine.

However, you can disable the iptables configuration by specifying "iptables": false in the Advanced Daemon configuration. I verified that, after then adding a route on the host via route /P add 172.0.0.0 MASK 255.0.0.0 10.0.75.2, I can ping a container from the host.

Note, however, that we don't really recommend this approach and would suggest using the alternatives outlined in a previous comment.

wclr commented

However, you can disable the iptables configuration by specifying "iptables": false in the Advanced Daemon configuration

Could you elaborate on how to achieve this on the latest docker for windows, so we can install the latest version and apply the fix?

Note, however, that we don't really recommend this approach and would suggest using the alternatives outlined in a previous comment.

Why? What are the arguments behind your recommendation?

rn commented

Could you elaborate on how to achieve this on the latest docker for windows?

Whale systray right-click -> Settings -> Daemon. Then toggle Basic to Advanced and you get an editor window where you can add "iptables": false to the daemon configuration.
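
The resulting Advanced daemon configuration would then contain, for example:

{
  "iptables": false
}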

Why? What are the arguments behind your recommendation?

Accessing containers by IP may make your development flow easier, but for production it will certainly be better to expose ports and services properly.

wclr commented

Thanks. I will try with the latest beta.

Accessing containers by IP may make your development flow easier, but for production it will certainly be better to expose ports and services properly.

Yeah, it is obviously for enhancing the development workflow with docker & docker-compose.

rn commented

@whitecolor could you confirm whether the suggested workaround is working for you?
Thanks

Whale systray right-click -> Settings -> Daemon. Then toggle Basic to Advanced and you get an editor window where you can add "iptables": false to the daemon configuration.

@rneugeba works for me in 1.13.0-rc4

wclr commented

@rneugeba
After adding "iptables": false

  • Yes, I can now ping 172.x from the windows machine
  • docker is not publishing ports (on the windows machine)
  • containers cannot reach the windows dev machine or external hosts

https://docs.docker.com/engine/reference/commandline/dockerd/#linux-configuration-file
There are multiple warnings about disabling this option:

--iptables=false prevents the Docker daemon from adding iptables rules. If multiple daemons manage iptables rules, they may overwrite rules set by another daemon. Be aware that disabling this option requires you to manually add iptables rules to expose container ports. If you prevent Docker from adding iptables rules, Docker will also not add IP masquerading rules, even if you set --ip-masq to true. Without IP masquerading rules, Docker containers will not be able to connect to external hosts or the internet when using network other than default bridge.
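
For reference, the masquerading rule Docker normally adds for the default bridge looks roughly like this (a sketch; with "iptables": false you would have to maintain such rules yourself inside the VM):

iptables -t nat -A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE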

Considering this, there need to be more instructions on how to make docker function normally, or another workaround besides "iptables": false. What are your suggestions?

@rneugeba is there a way to get inside MobyLinuxVM? I can't find the way to do this; I want to explore what is inside, including the routing tables.

rn commented

Oops, you are right. In that case, I'm afraid, there is no easy way to achieve what you want.

I will discuss next week with some other folks but, unfortunately, this might be closed as won't fix

wclr commented

@rneugeba

I will discuss next week with some other folks but, unfortunately, this might be closed as won't fix

Surely it should not be; at the least there should be some, maybe hacky, way to achieve this (I don't know: maybe change this ip-forward policy manually in MobyLinuxVM).

I think this is an important feature, and if it won't be available it will be a very bad disservice to dev folks. Not many people use docker in development yet, because it is really hard to get a smooth dev workflow (not impossible, but still quite hard); surely docker wants to simplify things, not make them harder.

I'll repeat my question: is there a way to get inside MobyLinuxVM? I can't find the way to do this; I want to explore what is inside, including the routing tables.

rn commented

You can enter the root namespace via nsenter as with Linux containers. Something like

docker run --rm -ti --privileged --network=none --pid=host justincormack/nsenter1 /bin/sh

should do the trick.
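
Once inside, you can, for example, confirm the FORWARD policy discussed above; the first line of output shows the chain policy:

iptables -L FORWARD -n | head -3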

rn commented

I discussed this with my colleagues, and unfortunately there doesn't seem to be an easy/natural way to expose the container network on the host with Docker for Windows/Mac. Closing this as won't-fix.

wclr commented

@rneugeba that is really sad and disappointing to hear. Can you advise what can be manually changed in the MobyLinuxVM to make this available again (as it was when it worked)? Maybe something related to the iptables policies?

Don't you think that this is actually an important feature for developers? Maybe have some special setting that those who need it could enable in dev mode?

rn commented

As mentioned above, this is a change which was introduced with Docker engine 1.13 with this PR due to this issue. This was a genuine issue and it's good that it got fixed.

There is no easy workaround I can see. You could disable iptables and then manage the rules yourself, but that seems very error-prone and complex...

wclr commented

For those who need a solution:

  • Get inside MobyLinuxVM: docker run --rm -ti --privileged --network=none --pid=host justincormack/nsenter1 /bin/sh
  • Work around the DROP policy by appending an ACCEPT rule: iptables -A FORWARD -j ACCEPT

or with a single command:

  • docker run --rm -ti --privileged --network=none --pid=host justincormack/nsenter1 bin/sh -c "iptables -A FORWARD -j ACCEPT"

Seems to work.
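
To undo the change later, the same one-liner with -D (which deletes the first matching rule) should work:

docker run --rm -ti --privileged --network=none --pid=host justincormack/nsenter1 bin/sh -c "iptables -D FORWARD -j ACCEPT"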

wclr commented

@rneugeba maybe create an issue in docker to add a parameter to the docker daemon config, like ip-forward-policy?

There is also an issue with implementing the workaround: I wanted to add some on-boot script that would change the policy after the docker services started in the VM, but each time MobyLinuxVM restarts, changes made to the VM's file system disappear. Any ideas how this could be done?

TewWe commented

I would also bump this for a workaround. As @whitecolor suggests, it shouldn't be the default, but some option that would allow turning on something like a "developer mode" with the ip-forward-policy setting; for non-prod purposes it could still help a lot.

How do we use docker for development, and why would we need such a feature?
The environment we are developing for is a very data- and inter-dependent service network. Because of this we developed different docker container pools which are generated automatically, represent a subset of the service network, and are autonomous enough for a developer to test their flow. So we are generating these containers continuously, and developers can use multiple of them to simulate part of the real environment for data-dependent testing. As there can be dozens of services running in these pools and multiple developer machines should use them, exposing ports would be error-prone and would cause collisions if multiple pools are used on multiple machines. Not to mention the administrative part, where one has to know which port belongs to which service in a specific pool.

So we are really benefiting from the current possibility of directly accessing the containers, and would love to see this in the future as well!

Thanks.

wclr commented

I've created a proposal in the docker repo about allowing the IP forward policy to be overridden: moby/moby#30093

@TewWe

It is still possible to use the workaround I posted in my previous comment with the latest version.

rn commented

Note that this only works because we have two network interfaces in Windows and one is accessible from the host (the 10.0.75.1 one). This is only used for SMB file sharing, and if we find a different path for it, it may go away. Providing a more generic way of accessing the internal docker network which works across Windows and Mac is non-trivial. We have discussed some options internally, but none of them are really palatable.

wclr commented

Note that this only works because we have two network interfaces in Windows and one is accessible from the host (the 10.0.75.1 one). This is only used for SMB file sharing, and if we find a different path for it, it may go away.

Currently the route is added via 10.0.75.2; what is this IP for?

I believe it still should be possible to have access from a Windows/Mac PC to the docker internal network. It is all just IP, which provides inter-network routing. A native Linux installation provides access to the docker network, so why shouldn't Windows provide it? Besides, Windows and Mac installations are mostly used for development; I think this is a really important feature and it should be resolved by the docker team eventually.

I have an SMB share on the host to share local folders with docker, and now I want to run a couple of containers with SMB shares as well. How exactly would you redirect the ports to the host?? It's impossible! The only way is to access them directly, and you removed that option!

wclr commented

@g8tguy there is a workaround currently. I just hope the docker team will be smart/graceful enough not to remove this ability altogether, because that would really make things bad.

One comment regarding the workaround... the netmask should probably be tighter since 172 contains both public and private addresses. Shouldn't it be 172.16.0.0 and 255.240.0.0?

wclr commented

@mverrilli it may be; I just had some issues with the 255.0.0.0 version of the mask in older docker versions (the docker log was flooded with some messages). If it works for you then ok.

@whitecolor Well, if you include all of 172, then you are going to have problems accessing some sites. For example, here are some Google IP blocks. If you happen to get routed through these networks without using the netmask I provided, you won't be able to route to them.

172.253.0.0/16
172.217.30.0/24
172.217.28.0/24
172.217.24.0/24
172.217.16.0/24
172.217.0.0/24
172.217.0.0/16
172.102.9.0/24
172.102.8.0/24
172.102.8.0/21
172.102.14.0/23
172.102.12.0/23
172.102.11.0/24
172.102.10.0/24

Oh, and I just realized why you said that: you mentioned a different solution instead of the one mentioned at the top. I actually do use a route method, but that's because I've abandoned Docker for Windows in favor of running VMWare Photon (which has Docker). Then I just use the route I mentioned and the firewall entry. I had responded because I was watching this topic.

wclr commented

@mverrilli no, on windows you need both =) you need the routes and you need to modify MobyLinux as well. Does Photon support docker-compose workflows?

@whitecolor Ahh, then I'd suggest using the route I mentioned to avoid issues. :-) docker-compose is coming in the next Photon release, I think.

+1 for this issue, needed for proper development usage

@whitecolor

or with a single command:
docker run --rm -ti --privileged --network=none --pid=host justincormack/nsenter1 bin/sh -c "iptables -A FORWARD -j ACCEPT"

how to make this change permanent?

wclr commented

how to make this change permanent?

You cannot, because the VM image is started from scratch each time. Maybe there is a way to modify the original image; I'm not sure.

I use another method: my meta-framework runs a script that fixes docker each time any of my dev containers are started.

edyan commented

Hi, and thanks for this discussion, which helped me fix the same issue.

I have developed a tool that composes the docker-compose command, and this is how I made it "persistent" (even if it's not really; it only happens on start): https://github.com/edyan/stakkr/blob/master/stakkr/actions.py#L227
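
A minimal shell equivalent of that approach might look like this (a sketch; the iptables -C check avoids adding duplicate rules on repeated runs):

#!/bin/sh
# Re-apply the FORWARD fix inside the VM, then bring the stack up.
docker run --rm --privileged --network=none --pid=host justincormack/nsenter1 \
  /bin/sh -c "iptables -C FORWARD -j ACCEPT || iptables -A FORWARD -j ACCEPT"
docker-compose up -d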

But I have 2 questions:

  • Do you think the iptables + route workaround could also work on Mac? I know the route command is different, and I have no Mac to test...
  • The 10.0.75.2 could be different on another system; is there any docker command to get it? I searched and didn't find one...

Thanks!

wclr commented

@edyan

  • I think it should work on Mac; I made it work with docker toolbox using routes. The latest docker version just doesn't work on my Mac, but I think it should be the same issue.

  • You may try to determine 10.0.75.2 inside MobyLinux with ip addr show | grep hvint0

edyan commented

@whitecolor

Thanks :)

I did that and it works very well:
docker run --net=host --pid=host -it --privileged --rm alpine /bin/sh -c "ip addr show hvint0 | grep -oE '\b([0-9]{1,3}\.){3}[0-9]{1,3}\b'"

I'll wait until I find a Mac to be able to write the right route command!

wclr commented

For routing with docker toolbox on Mac I did:

sudo route add 172.16.0.0/15 192.168.99.100 > /dev/null

edyan commented

Thanks, I'll keep it like that and see later... if you have some time one day, you can try stakkr under Mac, then I'll know if it works or not.

By the way, it's very useful (and not just because I made it 😄; docs: http://stakkr.readthedocs.io/)

It's a "docker" alternative to vagrant and it avoids configuring the docker-compose config file manually ...

wrong..

edyan commented

@LeonSoft connect the containers together? If they are on the same network, that works by default. Use docker-compose or another tool (such as stakkr) to have your containers on the same network.

Otherwise you can create a network and attach all containers to it (see: https://docs.docker.com/engine/reference/commandline/network_connect/)
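
For example (a sketch; mynet and mycontainer are placeholder names):

docker network create mynet
docker network connect mynet mycontainer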

oops..

I work in this situation:

  • Windows + VMWare
  • docker is installed into a linux virtual machine
  • linux virtual machine has ip 192.168.2.25
  • docker bridge network is 172.17.0.0
  • have a container with internal ip address 172.17.0.2
  • my windows PC has ip 192.168.2.2
  • to be able to ping the docker container 172.17.0.2 from my windows PC, I executed this command in a windows prompt as Administrator:
    route ADD -p 172.17.0.0 MASK 255.255.0.0 192.168.2.25
    now ping 172.17.0.2 works correctly in the windows prompt

(Windows 7, Ubuntu 4.4, linux docker 17.06)

I think I was wrong. Everything works as expected on linux, but I was on windows. I.e., on linux you can reach container services via IP directly.

edyan commented

@lucnap @LeonSoft

Yes, that's the topic of the discussion: with docker installed on Linux everything works well and we can access containers from the host, but not on Mac and Windows, for which we need specific commands.

Koc commented

Is it possible to create a host-type connection inside Hyper-V and get access without adding routes? Like in VirtualBox, adding a host-only adapter.

Koc commented

after running docker run --rm -ti --privileged --network=none --pid=host justincormack/nsenter1 bin/sh -c "iptables -A FORWARD -j ACCEPT", new containers don't have access to the Internet. I have to restart docker.

edyan commented

Are you using Linux containers or Windows containers?

What's your version?

Koc commented

I am using docker for windows and linux containers.

Version: 17.06.2-ce-win27 (13194)
Channel: stable
Sha1: 428bd6ceae2994bd2fc2a72ec122507abe2cf526
Started on: 2017/09/12 07:11:21.287
Resources: C:\Program Files\Docker\Docker\Resources
OS: Windows 10 Enterprise
Edition: Enterprise
Id: 1703
Build: 15063
BuildLabName: 15063.0.amd64fre.rs2_release.170317-1834
File: C:\Users\Koc\AppData\Local\Docker\log.txt
CommandLine: "C:\Program Files\Docker\Docker\Docker for Windows.exe" 

edyan commented

With the same version I did the following and it works:

$ docker run --rm -d nginx
# Install iputils-ping, the ct name is "blissful_brahmagupta"
$ docker exec blissful_brahmagupta ping 8.8.8.8
# It works

$ docker run --rm -ti --privileged --network=none --pid=host justincormack/nsenter1 bin/sh -c "iptables -A FORWARD -j ACCEPT"
$ docker exec blissful_brahmagupta ping 8.8.8.8
# It works

$ docker run --rm -d nginx
# Install iputils-ping, the new ct name is "wonderful_borg"
$ docker exec wonderful_borg ping 8.8.8.8
# It works

Did I miss something?

Emmanuel

Koc commented

@edyan strange thing. Floating bug: it works for me now but didn't work before. I will try to find out how to reproduce it again.

Koc commented

@edyan I've created a separate issue for my problem with the internet connection: #1122

Koc commented

I've tried docker toolbox, which is based on VirtualBox. It creates two network adapters for the virtual machine: NAT + a host-only network. And I can access containers (172.x.x.x) directly without any manipulation of routes or iptables. Why can't docker for win use the same approach with two network adapters?

@Koc docker for win uses Hyper-V (which is a disgusting thing in which you can't even run a Windows 10 image). The Moby VM inside Hyper-V uses only the pre-created "DockerNAT" network interface, which looks like a NAT interface in VirtualBox. Maybe this is the reason. (Honestly, I'm also thinking about switching to docker toolbox.)

or with a single command:
docker run --rm -ti --privileged --network=none --pid=host justincormack/nsenter1 bin/sh -c "iptables -A FORWARD -j ACCEPT"

@whitecolor this workaround stopped working with the 17.10.0-ce version. I'm getting the error message
bin/sh: iptables: not found

wclr commented

@ondraondra81 I haven't installed 17.10.0-ce yet. Maybe try adding apk add iptables first.

I'm still seeing this issue with the latest version of Docker on Windows! Annoying!

wclr commented

@jmkni this is currently by-design behavior, unfortunately.

@whitecolor Ah, annoying, thanks!

docker run --rm -ti --privileged --network=none --pid=host justincormack/nsenter1 bin/sh -c "iptables -A FORWARD -j ACCEPT"

resolved it for me, you amazing person!

I have the same issue as @ondraondra81, with version 17.12.0-ce. I tried @whitecolor's solution and it didn't work:
/ # apk add iptables

ERROR: Unable to lock database: Read-only file system
ERROR: Failed to open apk database: Read-only file system

wclr commented

@ionghitun that is bad news. I'm still on 17.09.1-ce and didn't check the later versions. You should try/search for some tricks to work around the issue and unlock it; there should probably be some way.

I reinstalled 17.09.1 and it works; indeed it is bad news that it's not working on the latest version.

After upgrading to 17.12.0-ce (15048) I started getting the "bin/sh: iptables: not found" error too. The fix for me was:
docker run --rm -ti --privileged --network=none --pid=host docker4w/nsenter-dockerd bin/sh -c "iptables -A FORWARD -j ACCEPT"

@nkapashi I confirm it works, thanks!

I have docker version 17.09.0-ce-mac35 (19611).
The Docker service is running on a Mac (Docker for Mac).
I am not able to ping the container with IP address 172.17.0.1 from the host machine.
Is there any workaround for the ping to work on a Mac?

I have tried docker run --rm -ti --privileged --network=none --pid=host imagename bin/sh -c "iptables -A FORWARD -j ACCEPT" but am still not able to ping it.

Docker Version: 17.12.0-ce

  1. I did docker run --rm -ti --privileged --network=none --pid=host imagename bin/sh -c "iptables -A FORWARD -j ACCEPT"
  2. ifconfig gave the below inet address:
    lo Link encap:Local Loopback
    inet addr:127.0.0.1 Mask:255.0.0.0
    UP LOOPBACK RUNNING MTU:65536 Metric:1
    RX packets:279 errors:0 dropped:0 overruns:0 frame:0
    TX packets:279 errors:0 dropped:0 overruns:0 carrier:0
    collisions:0 txqueuelen:1
    RX bytes:69424 (67.7 KiB) TX bytes:69424 (67.7 KiB)

Now how do I access the service URL? localhost:8080 doesn't work.

wclr commented

@vidyas78 you should be able to access the 172.x.x.x addresses, which are internal docker network addresses (to get one for a container: docker inspect [container-name] | grep 172). You may also use https://github.com/aacebedo/dnsdock to access containers using DNS names.

Also, if you are not so familiar with basic docker networking and related stuff, just consider using host port mapping without involving the hacky solutions discussed in the current thread.

My application docker image is built on a Linux OS image. When running the container, I need the container to join the subnet the host is in.
On Linux I'm able to achieve this using docker run --net=host. But on Windows when I use
--network=host, Windows uses Hyper-V and the IP gets mapped to 10.X.X.X. In this mode, docker inspect [container-name] | grep 172 returns empty. I'm neither able to make it join the host network nor able to find out the internal IP of the container.

I was hunting for various options to solve this. Is there any way to achieve this? Would appreciate any inputs provided. Thanks!

With a Linux OS running as the host and docker installed, it all works flawlessly and you are able to ping 172.x.x.x (depending on your IP); Linux doesn't use virtualization since docker is installed natively on the computer. As for Windows, it uses Hyper-V, and 10.0.75.2 is the VM's IP address. As for Mac, it uses something whose name I forget, but when you install docker you can use http://localhost to access the VM. If you are using ports, then make sure that you use the right port to access whatever application you are using.

Windows and Mac are not fun to use with Docker, but they work flawlessly with ports; Linux is more flawless still :)

Is there any alternative to --net=host on Windows that's proven to produce the same behavior as on Linux?

@vidyas78

Most likely not, because Docker relies on Hyper-V when you are using Windows 10 Professional, and it uses NAT to communicate. I've already tried different ways but it doesn't seem to work. If users got the route add approach working before, it is probably because they were using an older version of Windows 10 Pro. Docker gets installed natively under Linux, so it behaves differently and containers can communicate with one another perfectly. I pretty much gave up, so I'm just using ports; if you have applications running you should be using 10.0.75.2:port.

I've found a workaround. I'm talking about Windows Hyper-V docker containers in the same subnet as the host. This long thread is a bit confusing.
The default network bridge is not the same as a user-defined network bridge (see https://docs.docker.com/network/bridge).
So create a new network bridge br0 with your parameters (powershell syntax):
docker network create `
  --driver=bridge `
  --subnet=172.28.0.0/16 `
  --ip-range=172.28.5.0/24 `
  --gateway=172.28.5.254 `
  br0

Then create a route in a cmd terminal: route add 172.28.0.0 mask 255.255.0.0 10.0.75.2 -p
If your Internal Virtual Switch\Subnet Address parameter in the docker settings is 10.0.75.0 (the default), you must use 10.0.75.2. Or check @whitecolor's command: docker run --net=host --pid=host -it --privileged --rm alpine /bin/sh -c "ip addr show hvint0 | grep -oE '\b([0-9]{1,3}\.){3}[0-9]{1,3}\b'"
Start all your docker containers with --network=br0, for example:
docker run --rm --network=br0 -it alpine /bin/sh
# ip a (gives you the container IP, i.e. 172.28.5.1)
You can now ping all the containers from the host (192.168.0.5 for example), and ping 172.28.5.1 is ok.

@rn commented on 29 Dec 2016 with a great answer, and he says "Note however, we don't really recommend this approach and would suggest to use..."
I don't touch iptables here, and I would like to know whether my approach is safer, and why?

@fabricek
I tried your solution and it doesn't even work. That seems to be the same as if I were to use compose to create the bridge, as below:

version: '2'
services:
  sandbox:
    image: wordpress:php7.1-apache
    container_name: sandbox_wordpress
    ports:
      - '80'
    environment:
      WORDPRESS_DB_NAME: sandbox
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_PASSWORD: sup1er2man3
      WORDPRESS_TABLE_PREFIX: wp_sandbox_
    volumes:
      - './public_html:/var/www/html'      
    networks:
      mynet:
        ipv4_address: 172.26.0.5

  themereview:
    image: wordpress:php7.1-apache
    container_name: themereview_wordpress
    ports:
      - '80'
    environment:
      WORDPRESS_DB_NAME: themereview
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_PASSWORD: sup1er2man3
      WORDPRESS_TABLE_PREFIX: wp_themereview_
    volumes:
      - './public_html:/var/www/html'
    networks:
      mynet:
        ipv4_address: 172.26.0.6

  mysql:
    image: mariadb
    container_name: sandbox_mysql
    environment:
      MYSQL_ROOT_PASSWORD: example
      MYSQL_DATABASE: sandbox
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: sup1er2man3
    volumes:
#      - db_data:/var/lib/mysql
      - './docker-entrypoint-initdb.d:/docker-entrypoint-initdb.d'
    networks:
      mynet:
        ipv4_address: 172.26.0.7
        aliases:
          - mysql
      
  phpmyadmin:
    image: phpmyadmin/phpmyadmin
    container_name: sandbox_phpmyadmin
    environment:
     - PMA_ARBITRARY=1
    ports:
     - '80'
    volumes:
     - /sessions
    networks:
        mynet:
            ipv4_address: 172.26.0.8

#volumes:
#    db_data:
    
networks:
  mynet:
    driver: bridge
    ipam:
      config:
        - subnet: 172.26.0.0/24

and add a route to it.

Review all my points; it should work. You didn't put all the options in your mynet config. Try creating the network manually and removing it from your compose file to test. What route rule are you using? Can you ping the container, or is it your wordpress that is not reachable? Is an alpine image put in your compose file with the same parameters pingable? Make some tests and tell us what's wrong.

Oh, I've missed something. I tried my example today and it doesn't work. I'll check it and tell you what is missing.

@fabricek

I'm going to assume that mynet is basically the same as if you were to create br0. Then I added a route with route add 172.26.0.0 mask 255.255.0.0 10.0.75.2 -p. I'm still not able to ping any of the 172.x.x.x IP addresses. I don't use alpine; I use wordpress:php7.1-apache instead.

@benlumia007
does the container respond to ping at all? Is ICMP traffic served by the container?