docker/for-win

Missing DockerNAT after upgrading to Docker Desktop 2.2.0 on Windows

Closed this issue · 69 comments

After updating my local Docker Desktop from 2.1.0.5 to 2.2.0, I was no longer able to use the IP 10.0.75.1.

After some investigation, I found that the entire "DockerNAT" definition had disappeared.

I searched Google for possible solutions but didn't find anything useful.

I tried uninstalling and reinstalling version 2.2.0, but without success.

Through the page https://docs.docker.com/docker-for-windows/release-notes/ I was able to retrieve a working copy of the previous release, 2.1.0.5; once installed, everything started to work as before. The output of the ipconfig command is:

Ethernet adapter vEthernet (DockerNAT):

   Connection-specific DNS Suffix  . :
   IPv4 Address. . . . . . . . . . . : 10.0.75.1
   Subnet Mask . . . . . . . . . . . : 255.255.255.240
   Default Gateway . . . . . . . . . :
This is the part that was missing from the new installation.

Yep. They did that. I have already updated and lost DockerNAT too.

We deliberately removed the DockerNAT in the latest version because it's no longer necessary due to our new filesharing implementation.

There are existing, much more suitable methods for doing everything this network adapter was used for. Can you perhaps describe why you want to use this IP, so I can point you to a good solution?

Thanks for the fast reply.

At the moment I am using the 10.0.75.1 IP address as a convenient way to address my machine both from the browser (running on the host) and from the internal services (running inside the Docker network).

I modified the "hosts" file to be able to use a DNS-like name, and since I am still in the development phase this is more than sufficient.

In the final production environment this problem will not arise (I will reorganize my services appropriately), but in development, where I have limited resources, it is very convenient to do so.

@mikeparker what are those solutions? The missing DockerNAT is an issue for us as well

@LeonSebastianCoimbra
Generally, implementing something that allows you to access the host from a container AND use the same mechanism for container-to-container communication is not a best practice, because at scale the system often needs multiple hosts across a cluster, so the hostname (and IP) will change. For container-to-container communication we have DNS names, provided by Docker Compose for example.

However, if you're looking for a solution that is 'developer only' (this won't necessarily work in production) and you are trying to standardise a mechanism for talking to containers either from a different container or on the host, you need to use a DNS name (not an IP address) as the DNS name will resolve to different IP addresses inside containers and on the host - we no longer have an IP address accessible by both the host and the container.

We technically provide a DNS name you can use for this purpose: kubernetes.docker.internal. This is currently used by, as you can guess, Kubernetes, allowing us to share the kube context on the host and inside containers. There is also host.docker.internal, which may work depending on what exactly you're trying to do.
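
As a quick illustration (not from the docs; the port below is just a placeholder for whatever you happen to run on the host), you can check that the name resolves from inside a Linux container:

# resolve the special name from inside a container
docker run --rm alpine nslookup host.docker.internal

# a service listening on the host, e.g. on port 8080, would then be reachable as
docker run --rm curlimages/curl http://host.docker.internal:8080/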

If this is something that is required maybe we need to look again at the feature and make it more of a standard feature - any more information on your use case would be useful.

@ChristopherMillon let me know what your use case is and whether it's the same as the above.

Yes. I use it in dev on a laptop: I add a route for the Docker bridge and start dns-proxy-server (http://mageddo.github.io/dns-proxy-server/latest/en/1-getting-started/running-it/) in a container. After that, DNS records are updated automatically for started containers, and I can access any container by DNS name.

I had a solution for accessing internal container IPs, https://www.bountysource.com/issues/39154772-how-to-access-containers-by-internal-ips-172-x-x-x, which disappeared after the 2.2 update. What is the solution now?

A native Linux installation provides access to the Docker network, so why shouldn't Windows provide it? We use a route for that and it works. In the settings we have:
Hyper-V subnet
10.0.75.0/28
default: 10.0.75.0/28

But in fact the script that starts the Hyper-V virtual machine no longer uses it after the update.

@mikeparker it's basically the same: we run docker inspect ... to get the IP address of the container and then communicate with it from the host for dev/testing purposes.

You recommend using DNS? How would this work exactly? Do we have to set explicit hostnames for our containers and prefix them with host.docker.internal or kubernetes.docker.internal?

@ChristopherMillon that doesn't sound the same. Leon is talking about accessing the host machine. You're talking about accessing a container.

If you want to go host -> container simply expose the port and use localhost.
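
For instance (a minimal sketch, with nginx only as a stand-in for your own service):

# publish container port 80 on host port 8080
docker run -d -p 8080:80 nginx

# then reach it from the host
curl http://localhost:8080/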

@straga @bondas83 I don't understand what you're saying.

@mikeparker binding onto localhost is an issue because in CI/CD (or even on a dev machine) those ports won't necessarily be available.
We're using docker-compose, and the ports are pre-defined in the docker-compose.yml.

Before, it worked with a route going through DockerNAT, which ensured we could run tests in parallel without worrying about which ports were in use.

@ChristopherMillon can you elaborate a bit more:
a) why don't you have control over which ports are in use on a CI/CD machine? Usually CI/CD environments are designed to be reproducible.
b) You say your ports are pre-defined in your docker-compose file but you also say you can't guarantee which ports you're using, can you clarify?
c) When you say tests in parallel are you talking about spinning up the same docker-compose set of services multiple times on the same host for different purposes?

Anything further you could tell me about what you've been doing in the past and why would be helpful, thanks! (including the automation of docker inspect and how that fits into your workflow)

@mikeparker For example, I have 10 services that all use port 8069.

  1. demo1_testdrop.app.dev.local + demo1_testdrop.pg11.db.local
  2. demo10_report.app.dev.local + demo10_report.app.pg11.db.local
  3. ....

If I use ports, I need 20 different ports, have to remember them all, and keep a list of which is which.
That does not look good.

@mikeparker
a) We can't guarantee a specific port is available on a testing node, as another test set might already be running and using the same port (we run sets of tests in parallel).
b) What I meant is that because we're using docker-compose, we can't say 'use any available port on the host', hence binding onto localhost could be unreliable in this case.
c) Multiple docker-compose sets are trying to spin up the same services, using the same ports, for different test sets.

After writing all this, I realised we could probably do fine with binding onto the host; sometimes we might have to manually docker rm in order to free the ports up, and give up test-set parallelism.

We have C# test assemblies calling docker-compose: a particular set of tests will spin up a particular docker-compose environment in order to run the tests, and we like to be able to run those sets of tests in parallel as it makes things faster.
The nice thing about having C# test assemblies doing all this is the debugging capability in our services.

Those test assemblies run docker-compose -f somefiles.yml up and get the associated IP address for each container; then we use those IP addresses (say I want the IP of the kafka1 container) to arrange the environment, act, and assert what happened.
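
A rough command-line sketch of that flow (an approximation, not our actual C# code; the compose file and the kafka1 name are just the examples mentioned above, and compose normally prefixes the real container name with the project name):

docker-compose -f somefiles.yml up -d

# grab the internal IP of a given container
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' kafka1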

@ChristopherMillon if I understand you correctly, you're running C# tests on the host, and the C# tests themselves trigger docker-compose to spin up containers necessary for the test, then the test runner on the host communicates with those containers? And you're running these tests in parallel, and some of them share docker-compose files to spin up infrastructure, which would naturally have the containers overlap port-wise?

Where does the DockerNAT come into this? You say you get the associated IP addresses for each container, but this doesn't mention the IP address of the host. Do you need to communicate from the container to the host, or purely from the host to the container? (or both)

I'm happy to go back and forth some more, but I think it's going to be quicker if you can provide a fuller reproduction of the problem you're seeing as it's hard to diagnose like this. Can you describe a set of steps that I can perform to see the exact problem you're having?

@mikeparker
However, if you're looking for a solution that is 'developer only' (this won't necessarily work in production) and you are trying to standardise a mechanism for talking to containers either from a different container or on the host, you need to use a DNS name (not an IP address) as the DNS name will resolve to different IP addresses inside containers and on the host - we no longer have an IP address accessible by both the host and the container.

Yes, only for development, and yes, by emulating it in the "hosts" file (in development I do not have access to a DNS server to make the change) I was using my own DNS name.

@mikeparker
We technically provide a DNS name you can use for this purpose: kubernetes.docker.internal. This is currently used by, as you can guess, Kubernetes, allowing us to share the kube context on the host and inside containers. There is also host.docker.internal, which may work depending on what exactly you're trying to do.

Thanks for the tip, I think this can solve my current problem for development.

@mikeparker
If this is something that is required maybe we need to look again at the feature and make it more of a standard feature - any more information on your use case would be useful.

NOTE: this won't solve the case where I need a specific DNS name (different from the one Docker provides by default), since the IP address pointed to by "host.docker.internal" (and similar) is dynamic and changes at each reboot. The convenient part of 10.0.75.1 was that it was static (always reliable) and gave me the ability to choose the DNS name. If you consider that SSL certificates can be involved, that is not a bad thing!!

Thanks to your clarifications I think I may be OK on my side... I'll let you close this issue as soon as the other participants are satisfied too.

On 2.0.x and 2.1.x I relied on DockerNAT to discover the host IP and let containers (more specifically Linux Containers) communicate with the Windows host. From a Powershell script on 2.0.x and 2.1.x I discovered the host IP using the following code:

$ip=(Get-NetIPConfiguration | Where-Object { $_.InterfaceAlias -eq 'vEthernet (DockerNAT)' }).IPV4Address.IPAddress

On 2.2.x I can confirm that DockerNAT is no longer available. The solution, inspired by this discussion, is to rely on docker.for.win.localhost. Basically I spawn a small Alpine container and ask it to resolve docker.for.win.localhost. So I changed the PowerShell line to the following:

$ip=(docker run --rm alpine sh -c 'getent hosts docker.for.win.localhost | awk ''{ print $1 }''')

Note: it returns a different IP (192.168.xxx.yyy) instead of the usual 10.0.xxx.yyy, but I verified that they are equivalent and container-to-host communication works.

Same here: as soon as I updated to 2.2.0, DockerNAT was gone. @mikeparker you say this was done on purpose? Then I'm really missing a piece now: how can I access a running Docker container by its IP address directly from the Windows machine?
Prior to 2.2.0 this was achieved via the DockerNAT network: I added a route to the routing table and a convenient DNS name in the hosts file, and accessed running web services (or a DB) inside my container...
Now, with DockerNAT gone, none of that works... am I missing something, or is there another way to achieve the same behaviour?

I logged the same issue with another bug #5560 (sorry, I've now closed that).

But as I say there, it would've been super nice to have a clear 'breaking changes' notification in this release, as it was removed on purpose 😄

... sorry to be that guy: https://xkcd.com/1172/

Sorry to bother you again, but I just found another reason to miss the fixed IP address 10.0.75.1.

It is again a development matter: I am using Google's OAuth2 authentication system to access GDrive in my application. In development, I defined (in the hosts file) a DNS name (say, for example, "goofy.mikeymouse.com") that pointed to 10.0.75.1, and I used it in the configuration of the Google application as an "allowed redirect URI".

As suggested, I switched to "host.docker.internal" (or gateway.docker.internal, and so on) instead of my DNS name, but when I changed the allowed redirect URI in the Google console I got an error: "Invalid redirect: must end with a top-level public domain (e.g. .com or .org)." ... ".internal" is not accepted.

Since the IP address behind the .internal pseudo-names is dynamic, I cannot create my own DNS name unless I modify the hosts file by hand every time I restart Docker... which is not acceptable.

How do I access a Docker container from Windows?
I have 10 development projects with MySQL, Redis, PHP-FPM, nginx... They all use the same ports and SSL. It is hard to remember localhost:443, localhost:444... (but in 2.2 it works).
It is much better to remember https://project1.local and https://project2.local (but in 2.2 I don't know of a solution; in 2.1 there was a workaround with "route /P add 172.0.0.0 MASK 255.0.0.0 10.0.75.2").

Maybe I can access project1.docker.internal? :D

Thanks everyone for all the details - keep it coming; it all helps us prioritise the features here as we understand more use cases. There are two separate issues here, so I'll address them separately: Container-to-Host and Host-to-Container.

Connecting Container-to-Host

Previously there was an unofficial workaround for this, using the DockerNAT IP address, but it was not a supported feature. Our docs do not mention it (https://docs.docker.com/docker-for-windows/networking/); there you can see a reference to the docker0 interface, which exists on Linux but not on Mac or Windows and is a similar thing.

Right now you can work around this using host.docker.internal, which maps to your current IP address and therefore changes when you change networks. You can also use kubernetes.docker.internal, which always maps to 127.0.0.1 on the host; internally we map it differently to ensure the traffic gets through. HOWEVER: these will not work on a Linux host! Don't deploy this to a Linux host and expect it to work.

We are currently trying to standardise the use of host.docker.internal across platforms; right now it only works on Windows and Mac, and getting it into Linux is harder. This may become fully supported at some later date, but be aware this solution does not scale. If you use an orchestrator (i.e. Swarm/Kubernetes) across multiple hosts (your Docker containers are split across 2 machines, for example), you can't guarantee which host you're actually talking to, as your container could be running on either host, so really your best bet is to put the workload from the host inside a container as well - this is using Docker 'as it's designed'.

@aldobongio I suspect docker.for.win.localhost no longer works, except for machines that already have this entry injected in the hosts file. I would change to host.docker.internal or kubernetes.docker.internal to be future proof.
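
For example, the PowerShell line above could presumably be adapted by just swapping the name (an untested sketch of the same approach):

$ip=(docker run --rm alpine sh -c 'getent hosts host.docker.internal | awk ''{ print $1 }''')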

Connecting Host-to-Container

Connecting host->container via IP address (and via a manually defined DNS name) is not a supported feature. Our documentation explicitly calls out that you cannot do this (although technically you could until now, with an undocumented hack and manual routing work): https://docs.docker.com/docker-for-windows/networking/

Per-container IP addressing is not possible
Our current recommendation is to publish a port, or to connect from another container. This is what you need to do even on Linux if the container is on an overlay network, not a bridge network, as these are not routed.

Similarly, we don't support directly addressing a specific container with a DNS name. This would mean intercepting all network traffic and redirecting certain requests, but it is something we could support in future if it's popular.

If you simply want to open a browser connecting to these containers, and you can't remember which port is which container, try using the new Dashboard feature released in 2.2.0.0 and open the browser through that. Another idea is to use browser bookmarks.

Ideally, the workflow to use if you want to connect automatically is to use ports or connect from another container. Whatever your host is doing, can you put it in a container? The second option is to use ports. You can give the container a port range rather than a specific port in order to avoid clashes. To know ahead of time which port it'll use, you can automate the setting of ports using scripting. There are different ways to do this in docker-compose; one method is to use variable substitution in the ports section:

https://docs.docker.com/compose/compose-file/#variable-substitution

ports:
  - "${MY_PORT}"
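
As a sketch of that idea (the service and image names below are placeholders; jwilder/whoami listens on port 8000), each test set can pick its own host port through an environment variable:

# docker-compose.yml
version: "3.7"
services:
  web:
    image: jwilder/whoami
    ports:
      - "${WEB_PORT:-8000}:8000"

Each parallel run then sets its own port and project name, e.g. WEB_PORT=8101 docker-compose -p testset1 up -d.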

As they say: give with one hand and take away with the other. You gave us volume speed but took away comfort :)
DNS is the solution, because you can set it in the router for all 100 developers. Ports are not compatible with some huge projects, because every link would have to include a port.

Maybe there is a workaround to manually create the good old working DockerNAT network? Or maybe:
route /P add 172.0.0.0 MASK 255.0.0.0 {gateway.docker.internal IP} or some other gateway IP?

Right now you can work around this using host.docker.internal, which maps to your current IP address and therefore changes when you change networks. You can also use kubernetes.docker.internal, which always maps to 127.0.0.1 on the host; internally we map it differently to ensure the traffic gets through. HOWEVER: these will not work on a Linux host! Don't deploy this to a Linux host and expect it to work.

Just so it isn't forgotten, I'll repeat what I wrote in my last post: there are situations in which using the .internal DNS names you provide is not possible (see the redirect URIs accepted by Google) and a custom DNS name is needed. For that, having a fixed IP address to refer to, or a way to specify a custom DNS name, would be a correct solution. It is only for development, but since development is part of the game I think it is important.

Is it not possible for us to manually re-add the DockerNAT in 2.2 after installing? Then we could use the latest Docker while we change our configs so it's no longer required.

@mikeparker I see your reasoning behind removing DockerNAT. The thing is, those unofficial 'workarounds' became the de facto way to work. Even applications such as DockStation use it more or less 'natively' in order to provide a convenient and easy way to make 'host-to-container' connections. And it was really convenient: run Docker, open your browser, connect to the running nginx or whatever using some meaningful DNS name, be happy. You can't deny the importance of having meaningful names (you have names for the containers).

Maybe there is a possibility to make it an option? At least temporarily, as @IanIsFluent says, in order to give some time to adapt workflows and to find new workarounds.

@bondas83

DNS is the solution, because you can set it in the router for all 100 developers. Ports are not compatible with some huge projects, because every link would have to include a port.

This sounds like you're using Docker Desktop on Windows to run a production system! Whilst it's great that the system seems stable enough to do this, it's not really what it was designed for; it's designed as a developer tool for use before pushing your containers to production. For example, there are no guarantees about the VM continuing to work if you log off. I'd be interested to hear why you aren't running a proper server. I suppose it's good that users consider it a good enough solution to use in this way; if this is a popular use case maybe we should support it more. But again, I appreciate the community has many creative ways to build workflows with the tools we have!

@Martyrer @IanIsFluent @bondas83 @LeonSebastianCoimbra

Maybe there is a workaround to manually create the good old working DockerNAT network?
Is it not possible for us to manually re-add the DockerNAT in 2.2 after installing

I will look into this. It should be possible, probably a case of running a few lines of PowerShell. This isn't a permanent solution, though: if the VM is recreated it'll lose the link to the custom NIC you create, so you'll need to re-run it every time (not an issue if you're just leaving Docker Desktop running forever).

If we want to properly support accessing containers by DNS name, I think we'd probably provide a fuller solution, like a proper VPN. Maybe we can integrate with remote hosts too in this regard. If we want to support this feature there's no reason to make users manually edit the routing table; for example, you should just be able to give a container a DNS name. I've raised this, but again it would be good to hear more use cases as to why users want this (is it because you're running a production server using Docker Desktop on Windows, for example, and want to share URLs with your team? Or purely for 1-person local development, and if so why aren't ports good enough?)

@LeonSebastianCoimbra I hear you and I read what you wrote. It's a bit weird for programs to disregard perfectly valid DNS entries; I don't know why Google is doing that. I'm unsure if there's a good case for allowing users to customise the DNS name of the host. Using the hostname host.docker.internal is already a non-cross-platform workaround, and customising it will break even more workflows across machines. Did you read what I wrote about moving host workloads into a container? Your best bet is to put the workload from the host inside a container. I know this isn't always possible and it's extra work, but I'd be interested in why you can't do this - what's your use case? I would be interested if there are other users in the same boat too (most replies here seem to be concerned with host-to-container DNS/IPs rather than container-to-host.)

I probably also should have mentioned that the DockerNAT was one of the biggest causes of crashes and broken behaviour in Docker Desktop. It was vulnerable to Windows updates, external VPNs and firewalls, and third-party software, and gave obscure error messages when it broke. Removing it has improved stability for many users and reduced the number of things that can go wrong. I know this doesn't help if it was working fine for you and you relied on it, but there's some context anyway.

Thanks, Mike. As well as the number of crashes it was causing, it's also worth mentioning that it can't work in the new WSL 2 implementation of Docker Desktop, which we expect will quickly be used by the majority of our Windows users once Microsoft releases WSL 2.

We can use Traefik and configure each container via labels.
You only need to manually add a record in the Windows hosts file.

NOTE: This is not intended to be a long-term solution; it's purely for users who need an immediate temporary fix whilst working out how to change over to a supported method of host<->container communication.

As a temporary workaround, you can open
C:\Program Files\Docker\Docker\resources\MobyLinux.ps1 and insert the following line between Line 175 and 176:

$SwitchName = "DockerNAT"

Your script should then look like this

[screenshot of the edited MobyLinux.ps1 showing the inserted $SwitchName = "DockerNAT" line]

This will set the script up to create the switch, as the PowerShell code to do it is still there; we just blanked the name to take it out for now.

Let me know if this works for you.

@mikeparker not exactly your version, but this worked for me. I added 'DockerNAT' as the value for the parameter, where it was before the update. I also added $SwitchName = 'DockerNAT' on line 806, and then it started to work exactly the same way as before. Thanks for the hint.

[screenshot of the edited MobyLinux.ps1 with $SwitchName = 'DockerNAT' added]

Leaving out DockerNAT broke countless developer setups where only an entry in the hosts file on the host was required to test domain resolution for docker/kubernetes services. Now a proper DNS setup is required for the same result. Although I agree that this is not best practice, it allowed developers to bypass DNS setup and work with a domain name of their choice without having to set up a DNS server.
If you remove functionality, at least provide us with a workable alternative. Hacking into MobyLinux.ps1 is no such thing.

@LeonSebastianCoimbra I hear you and I read what you wrote. It's a bit weird for programs to disregard perfectly valid DNS entries; I don't know why Google is doing that. I'm unsure if there's a good case for allowing users to customise the DNS name of the host. Using the hostname host.docker.internal is already a non-cross-platform workaround, and customising it will break even more workflows across machines. Did you read what I wrote about moving host workloads into a container? Your best bet is to put the workload from the host inside a container. I know this isn't always possible and it's extra work, but I'd be interested in why you can't do this - what's your use case? I would be interested if there are other users in the same boat too (most replies here seem to be concerned with host-to-container DNS/IPs rather than container-to-host.)

My use case is simple: I am developing web applications that need to access different external storage providers (OneDrive, GDrive, Dropbox). Each of them allows access to its private API through an OAuth2 authentication process. Based on the OAuth2 protocol, the process is as follows:

  1. the user tries to access one of my pages that uses GDrive (for instance); if I already have an authorization token I allow it, otherwise I send a redirect response to the browser to send it to the Google login page. Among the parameters, there is also a URL that will later be used to go back to my site
  2. the browser displays the Google authentication page, the user enters their username and password (my site is not involved in this... as per the OAuth2 design), and when access is approved, Google sends another redirect to the browser, asking it to display the URL specified in the original request
  3. the page specified in the URL gets the parameters from Google and completes the authorization process, finally obtaining a token usable to access the GDrive API

This is the standard OAuth2 flow, and it is implemented as-is by Google, Microsoft, Dropbox and so on.

To be able to use OAuth2, we must define inside each provider (Google, Microsoft, etc.) an "application"... a set of definitions that identifies our application. Among the possible parameters there is also a list of accepted redirect URIs: if the initial request ([1]) refers to one URL in the list it is accepted, otherwise it is refused.

This is the point where providers differ: Microsoft does not accept different domains in this list, Google accepts only valid top-level domains (i.e. .org, .com, .fr, but not .internal), and so on.

While I am in development I need the possibility to "hide" my local web application behind a fake DNS name... and to have less trouble I found it handy to point it to 10.0.75.1 instead of 127.0.0.1...

I also based other things (again in development... not in production) on 10.0.75.1... it quickly solved a couple of problems for me.

I agree with @bravecobra: hacking the MobyLinux.ps1 script can be a temporary workaround, but it is not the solution.

For the moment, I will continue to develop with the old, working version (2.1.0.5) since I cannot afford to lose more time on this matter. Based on this discussion, I don't think you will reintroduce DockerNAT (or something similar); one day I will evaluate how to adapt my development environment to the changes introduced in the new versions.

Does anyone know what Mac users do for this use case? DockerNAT has never been available on Mac, so it might give an idea for a different approach.

@stephen-turner as far as I know, they use localhost:{port}, and my Mac colleagues genuinely wonder why I would need this or why it's more convenient.

Hi, why did you remove the NAT interface? In my team we used it for access to containers. The localhost approach doesn't work! My scenario is this:

  • multiple Docker containers that publish themselves into ZooKeeper (which is also inside the Docker network)
  • all ports are exposed to the host.
  • for development reasons I need to query ZooKeeper by name to access our dockerized environment: ZK will return the internal IP, i.e. 172.19.0.3:11889, and my application will connect to that IP.
  • My debug application previously used the NAT interface to route the connection to that IP: now it doesn't work anymore!!

Could you please give me a workaround?

@marcosperanza a temporary workaround is given here: #5538 (comment)

@marcosperanza can you simply expose a port and use localhost:port to talk to zookeeper from your host? This is the recommended and supported method. If this doesn't work for your setup we'd be interested to know why.

@mikeparker Hi, here is my scenario:

[diagram: the host application queries ZooKeeper, which returns the container's internal IP, e.g. 172.19.0.2:9001]

After the ZK query my application tries to connect directly to 172.19.0.2:9001 (we are using gcpc), and Windows cannot route this request.

Of course, we could publish a name into ZK instead of the IP and map all the names to localhost on Windows, but for us this is a huge change and out of scope (too many changes to make in all components just for development reasons).

For this reason, I think the NAT interface can be very useful.

Thanks a lot.

@marcosperanza my first thought is to swap out the IP addresses inside your service discovery and replace them with host.docker.internal; then service discovery will map to the correct addresses and be usable both from within your containers and from the host.

service1 -> host.docker.internal:9001
service2 -> host.docker.internal:9002

Essentially this is swapping in a development configuration for service discovery.

My second thought is: do you need service discovery at all for local development? If your container ports are static, you can just replace the service discovery calls with an in-memory mapping of which ports are where:

service1 -> localhost:9001
service2 -> localhost:9002

You could still use ZK for container-to-container communication if needed.

Hi @mikeparker, thanks, but neither tip is feasible for me: the first one means we have to patch all components in order to put that string into ZK. The second one is the same: I don't want to patch my code for a workaround.

The Docker NAT was a great feature and, to be honest, I don't really understand why it was removed.

As a quick workaround, I think we will use the temporary fix that @Martyrer gave me.

If you have another solution please share it with us.

I can answer why it was removed. (All of this information is higher up in this thread, but let's collect it together).

  1. It was one of our highest causes of crashes
  2. It was one of our highest causes of broken behaviour with third-party tools
  3. It doesn't work in WSL 2, which is the future of Docker Desktop on Windows, so you would have found it disappearing soon anyway
  4. It doesn't work in Docker Desktop on Mac, and we prefer cross platform solutions
  5. It wasn't an advertised feature, but an implementation detail that people had discovered

Having said that, we understand that some people were finding it useful. But far more people were having problems caused by it. And in almost all cases there are other ways of achieving the same thing.

If we were to bring this back, we would do it as a properly supported feature on all platforms. We have added it to our wishlist, but we don't have a target date yet.

@mikeparker, my use case is pure development - I run Linux X11 apps from a container on a Windows notebook. Simply put, instead of installing an app on Windows, I build a Docker image and run it alongside native Windows apps. First, it helps me keep the Windows installation clean, and second, after a reinstall I have all my tools ready.
It took me more than half a year to find the right, stable connection from a container to the X11 server running on the Windows host, and that stable point was the DockerNAT IP 10.0.75.1. After the upgrade to 2.2.0.0, I switched to host.docker.internal, which works, but because it changes with the network and points to a dynamic IP, the X11 containers crash often. Usually it is enough to lock the PC and leave it for a few minutes and the X11 window is gone.
I understand this is a niche use case, but for me it is very convenient, and this change made it much less usable. It would be nice to keep some fixed local IP that we can use for internal communication between container and host.

For local dev, this is going to be an issue for us. We're trying to conceive of an easy way for the team to get away from DockerNAT, but hitting roadblocks.

We've got about 10 microservices + 5 webs in docker containers. One of those services/webs is IdentityServer, which uses Azure AD. Azure AD forces https unless it's http://localhost or an IP. We've been using the IP.

When doing local dev, we're using dotnet dev-certs https, which only creates certs for localhost. It looks like I'm going to have to come up with a way to create our own dev certs so we can use https://host.docker.internal in lieu of https://localhost.
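
One possible starting point (my assumption, not an official recipe) is to generate a self-signed certificate for that name with PowerShell and trust it locally, something like:

# create a self-signed cert for host.docker.internal in the machine store (sketch; run elevated)
$cert = New-SelfSignedCertificate -DnsName "host.docker.internal" -CertStoreLocation "Cert:\LocalMachine\My"

# export it and import it into the Trusted Root store so the browser accepts it
Export-Certificate -Cert $cert -FilePath "$env:TEMP\host-docker-internal.cer"
Import-Certificate -FilePath "$env:TEMP\host-docker-internal.cer" -CertStoreLocation "Cert:\LocalMachine\Root"

You would still need to configure Kestrel/IdentityServer to serve that certificate, which is the fiddly part.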

@snekcz can you try kubernetes.docker.internal instead of host.docker.internal? The code path is a little different but it may work for your use case. Let me know as this may play into our roadmap.

Unfortunately, it doesn't work at all. I get "Error: Can't open display: kubernetes.docker.internal:0.0".

@mikeparker, a new observation from today for my use case: when I'm connected to the network via cable, the X11 apps are quite stable. However, when connected via Wi-Fi, they crash frequently.

@stephen-turner I'll throw in here. In testing my Django app, I use a separate server to run the static files (normally a Docker container on localhost). When trying to show the app running on my computer to others, they now can't access my Docker container with the static files. I previously had enabled this checkbox, which is now gone:
[screenshot of the Docker Desktop settings checkbox that has been removed]

And it's really annoying to have such a big change happen in a minor update with no warning. I'm going back a version while people figure out a better solution than hacking about in the PowerShell guts.

Oh, and the documentation that talks about this networking issue links to a page that doesn't exist. Maybe no one has noticed that? It should probably link to here.

@mikeparker, @stephen-turner: Here's our use case; it's a bit different from the ones described above. Hopefully we'll remain able to do something like this in the future. For now I have applied the hack that restores DockerNAT, but we're worried about the eventual switch to WSL 2.

We use Docker only for testing, on the dev machines, never for deploying services. We build Jira add-ons, and we need to do all sorts of testing, both manual and automatic, with many versions of Jira and Confluence. (The two can be connected, and it’s important that we also test this.)

The current setup is that we have added many entries in the hosts file all pointing to the DockerNAT address (e.g. jira700, jira710, ..., jira850). We have scripts that can start a pair of containers (DB + Jira) running a specified version of Jira on a certain port, with a custom database snapshot. For example, I can start Jira 8.0.0 and 8.2.0, and we can address one as http://jira800:8800/ and the other as http://jira820:8820/. (All containers share a single bridge network.)

Those addresses also work from inside the containers, which is important because Jira needs to know its own address and must be able to connect to itself using it. That is a technical requirement that we cannot alter, since our clients rely on it, and thus our tests must happen in exactly that situation. The containers can also address each other with the same addresses, which is important because Jira and Confluence can be linked and we need to test that as well.

Note that each container having a distinct DNS name is important, because cookie authentication does not look at the port part of the URL, only the domain name. If we were using something like host.docker.internal we would not be able to be logged in to two different Jira containers at the same time in the same browser. (There are test cases where that is important, because Jira servers can delegate credentials between them.) Also, the exact DNS name and port used to access a container from the host (via the browser) needs to also work from inside containers (both container-to-self and container-to-other-container). We can't have a private connection between containers, because Jira will generate URLs based on its known "base URL" that need to work from the browser (for manual testing & debugging), from the container itself (for Jira itself), and from other containers (for connections between Jira and Confluence servers).

Using DockerNAT all the above worked automatically. We didn’t have to do any routing hacks, the only “custom” thing was adding the names to the hosts file. We didn’t even realize this was not an intentional feature until it disappeared. I’ve been searching since I updated Docker and I can’t figure out any way of doing our tests without this.

@bogdanb: if your machine has a fixed IP address you can try to use that in the hosts file instead of 10.0.75.1. That's what I'm doing now and at least for my use case it seems to be solid. If it is not (as it was for me before), then there is a problem ...

Same here: DockerNAT in combination with routing + DNS entries in the hosts file is used to host several micro- & web services + database instances. With DockerNAT, on each laptop or dev environment, we could use the same configs with the same local DNS entries and it just worked.

I get that WSL 2 will change a lot of how Docker works on Windows, but IMO this is a breaking change without any announcement. I suspect WSL 2 will open up access to the containers by their IP addresses again?

Some of the services we use locally for dev:

  • Authentication backend API (auth-api.local)
  • Authentication frontend (auth.local)
  • MSSQL db's (db.local / ip addresses)
  • Several other backend API & frontend containers; depending on which project we're working on. (all xxx(-api).local).

Switching this to localhost & port mapping would increase the difficulty of hosting a local test environment by a lot.

Docker on Windows 10.
I was using dperson/samba to share Docker volumes between the host and containers. I mapped the Samba shares using a 10.0.75.xxx IP address. When I upgraded and DockerNAT went away, I lost the ability to do this.

I cannot forward the ports when running docker run -d -it -p 139:139 -p 445:445 ...; it seems Windows is already using port 445. If I try something like docker run -d -it -p 139:139 -p 4445:445 ... the container runs, but I can't get the drive mapped in Windows. It appears that Windows requires Samba to be at port 445.
This is where I got the idea of using samba to share files.
https://www.guidodiepen.nl/2017/03/alternative-way-to-share-persistent-data-between-windows-host-and-containers/

What did I do wrong? What can I do to get this working again?

Does anyone have the dperson/samba container working with Docker for Windows since DockerNAT was removed in Docker 2.2.0? If so, what IP address does the Samba server show up as? Please give details of what you did to get it to work.

So, I also use the 10.0.75.2 IP to access the container from my Windows environment. I do not use localhost/10.0.0.75, since Docker has lots of unresolved issues (as far as I'm aware) with the VPNKit component, which crashes a lot and makes the container ports unavailable from localhost.

any solution for that?

We had the same problem, as we were using DockerNAT to keep multiple containers running with the same exposed ports (80/443) for our web development.

I solved the problem after I discovered that you could bind a docker port to a loopback adapter like: "127.0.0.1:80:80". This means you have the whole range from 127.0.0.1 up to 127.255.255.254 to attach to.

So in Project A, we now use 127.0.0.5:80:80, and in project B, we use 127.0.0.6:80:80 and so on.
In your hosts file, you can now still map 127.0.0.5 to example.local and 127.0.0.6 to another-example.local with the same effect you had before, when using dockerNAT.

The only downside is that you have to map each project to a static IP instead of the dynamic IP range we had before, but we can live with that.

I hope this helps someone :)

Neunerlei
"I solved the problem after I discovered that you could bind a docker port to a loopback adapter like: "127.0.0.1:80:80". This means you have the whole range from 127.0.0.1 up to 127.255.255.254 to attach to."
"In your hosts file, you can now still map 127.0.0.5 to example.local and 127.0.0.6 to another-example.local with the same effect you had before, when using dockerNAT."

Can you provide more detail on how to do this?

@KarlNeosem Of course, as you can see here, it is possible to bind a port to a loopback IP.

All you have to do now is to find a free loopback IP (I use 127.55.0.1 as minimum IP) and map that in your docker-compose.yml file.

version: "3.0"
services:
  test:
    image: jwilder/whoami
    ports:
      - 127.55.0.1:80:8000

Now add this to your hosts file:

127.55.0.1 example.org

And that's it. If you have multiple projects, use a new IP for each of them: 127.55.0.2, 127.55.0.3...

This only works when you call the container from the host machine, but for a development environment it works perfectly. Sadly, it does not work in Docker Toolbox, as far as I have tested.

Hi Team,

I am trying to host a build agent in a container on my desktop, passing values as arguments to download the agent from a particular URL.
I have this requirement for both Linux and Windows.
Linux: I have successfully built a container where the agent is downloaded via a startup script, and it works as expected.
Windows: Here the container starts, but it exits with the error "The remote name could not be resolved: URL".
Can anyone help me here? The same settings and URL to download the agent software are used, but it succeeds on Linux and not on Windows. I can see that network settings were missing on Windows. How can I resolve this?

Another one here, trying to reach a local Microsoft SQL Server instance from the containers. I am developing a microservices architecture locally.

"All you have to do now is to find a free loopback IP (I use 127.055.0.1 as minimum IP) and map that in your docker-compose.yml file."

I see what you are doing here. However, when I try to start my container at, let's say, 127.55.0.1:445:445, it complains that port 445 is in use. I have to use port 445 or Windows cannot map the Samba shares. I believe I need to start the container with its own IP accessible from the host.

With DockerNAT the container had its own IP address reachable from the host.
Does anyone know how to recreate the DockerNAT?

Or: is there a way to share Docker volumes with the host system?


Hi,

If I have correctly understood your problem, which looks the same as mine today - I was using 10.0.75.1 in my local Windows hosts file to resolve DNS names - I have replaced 10.0.75.1 in my hosts file with 192.168.86.225, which is the (fixed) IP of the Hyper-V Virtual Ethernet Adapter created by the Docker 2.2 installation.
Then everything works fine: I can browse http://mylocalname.com on my computer and work completely locally, even without a network. I worked that way for many years with DockerNAT, and it seems OK now.

My use case is similar to @snekcz. I use Docker as a container for my development tools - I code through VS Code Remote Containers and use X11 to forward windows from the container to Windows. As @snekcz mentioned, X11 is now broken. Additionally, I used CNTLM to authenticate behind a corporate proxy, and that has stopped working too since there is no longer a static IP it can listen on.

I updated to Docker Desktop 2.3, and there is no MobyLinux.ps1 anymore, where we could apply the workaround with domains.

Maybe there are new features or a new workaround?

Thanks for your comments, everyone. I'm going to close this issue now. DockerNAT itself will not be coming back for all the reasons I explained in #5538 (comment). However, we have heard the feature request of providing an IP address for each container, so I have added an item to our public roadmap at docker/roadmap#93 to request that functionality. (BTW if you haven't seen our roadmap before, please do feel free to browse it and suggest your top feature requests there). Thank you.

With version 2.1.0.5 I had services like MySQL and Mosquitto running on a dedicated bridged network.
Another bunch of services would run on the "host" network, and they could connect to MySQL and Mosquitto using 127.0.0.1:8883, localhost:3306 or 127.0.0.1:3306.

Now this is all gone and no longer works. The application was ported to Docker Desktop (it has all the connection strings hardcoded). It's quite an effort to update all the 127.0.0.1 and localhost strings to container names in order to make it work in the latest Docker Desktop version. Is there a way to mitigate this without having to change the application and all the hardcoded strings to match container names?

@stephen-turner Remember the time you tried making everyone install Docker Desktop from the Microsoft Store? This reminds me of that day. This is a breaking change, YET no one knew about it until everything broke. Cheers for that.

To me this is a gap in the Docker Windows configuration.
I found a solution that has been good for a few days now: at each reboot I just force the interface IP address and reuse the one I had set for my local host names.
For this, there are 2 choices:

  • change the IP address with the Windows 10 UI
  • the one I use: change the IP address with a script run at boot as administrator (see the sketch after this list). Just create a file initip.ps1 containing this line: netsh interface ip set address name="vEthernet (Default Switch)" static 172.17.191.1 255.255.255.0 (or any local IP of your choice; I don't need to change the gateway) and launch it whenever you need, even while Docker is running - no problem, no need to restart anything.
    Now at each reboot I have that IP set, so my local host name resolution is perfect: I use my Windows 10 with real hostnames but everything stays local.
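
A minimal initip.ps1 sketch of the above (the adapter name and IP are the ones from this comment; adjust them to your setup and run the script as administrator, e.g. from Task Scheduler at logon):

# initip.ps1 - pin the Hyper-V "Default Switch" adapter to a fixed local IP
# (values copied from the line above; change them to match your environment)
netsh interface ip set address name="vEthernet (Default Switch)" static 172.17.191.1 255.255.255.0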
