docker/roadmap

IP addresses for Docker Desktop containers

Opened this issue · 6 comments

Tell us about your request
Provide each container started from Docker Desktop with an IP address that can be used to address the container from the host.

Which service(s) is this request for?
Docker Desktop

Tell us about the problem you're trying to solve. What are you trying to do, and why is it hard?
Some users would like to have an IP address for each container. Many of the use cases for this seem to involve running many replicas of a container that all use the same port.

This used to be possible on Windows using DockerNAT. It was never a supported feature, only an implementation detail, but some people had discovered it and were relying on it until DockerNAT was removed. This functionality was never available on Mac.

Are you currently working around the issue?
One user docker/for-win#5538 (comment) has reported using compose ports syntax to achieve the same goal.
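The ports syntax in question publishes each container's ports on a distinct loopback address, so replicas can all keep the same host-side port. A minimal sketch (service and image names are made up for illustration):

```yaml
# docker-compose.yml sketch: each replica gets its own loopback IP on the host,
# so both can publish the same port (8080) without clashing.
services:
  web1:
    image: nginx              # placeholder image
    ports:
      - "127.0.0.2:8080:80"   # reach this replica at 127.0.0.2:8080
  web2:
    image: nginx
    ports:
      - "127.0.0.3:8080:80"   # reach this replica at 127.0.0.3:8080
```

On Windows and Linux the whole 127.0.0.0/8 range typically answers by default; on macOS the extra loopback addresses may need to be added first (e.g. sudo ifconfig lo0 alias 127.0.0.2).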

Additional context
docker/for-win#5538

Running Samba in a Docker container to share Docker volumes with the Windows host machine.

I cannot forward the ports when running docker run -d -it -p 139:139 -p 445:445 ........, because Windows is already using port 445. If I try something like docker run -d -it -p 139:139 -p 4445:445 ..... the container runs, but I can't get the drive mapped in Windows. It appears that Windows requires Samba to be on port 445.

I believe the container needs its own IP address on a network accessible by the host.
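A workaround along the loopback lines discussed later in this thread, assuming Windows permits a specific-address bind on 445 even while its own SMB service is listening (the image name is a placeholder):

```shell
# Publish Samba's ports on a dedicated loopback IP instead of all interfaces,
# so they do not collide with the host's own SMB listener on port 445.
docker run -d -it -p 127.0.0.2:139:139 -p 127.0.0.2:445:445 some/samba-image
# Then map the drive against \\127.0.0.2\share rather than \\localhost\share.
```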

From what I understand by listening to

  • Docker for Mac and Windows: The Insider's Guide by Justin Cormack (video, slides)
  • Unikernels and Docker: From Revolution to Evolution by Mindy Preston (video, slides)

the original motivation for VPNKit was to have a solution that gets containers online with a near-zero footprint in the networking stack of the host. VPNKit definitely achieves that, but at the cost of IP connectivity between host and container.

The lack of "IP addresses for containers" on Mac really makes it feel "second class". I think it should be a priority to bring it on par with Linux.

As an outsider, the obvious question is, why is there no additional interface that bridges between macOS and the Linux VM running in HyperKit? VPNKit could still be used as the default gateway, like it's done now, and connections to the host could be routed through the bridge.

As a Windows developer in a microservices environment running legacy AND containerized .NET Core apps, removing the NAT Gateway was a step backwards. For this reason we are still running the older 2.0.0.X version of Docker that allows it.

The main reason is that direct IP routing to containers allows us to

  1. Have a common docker-compose among all developers that spins up all required containers (external plus our own microservices) with known IP addresses
  2. Provide a network route so traffic from the host can reach those containers
  3. Maintain a common hosts file with known DNS names (eg microservice1.dev.internal) to reach services, which is much easier to remember than magic port numbers all on the same host (127.0.0.1:8086 .. now what was that one again???)
  4. Easily develop and debug a legacy .NET service with configuration to interact with all its dependencies deployed locally as containers, since each developer uses the same config (eg some_sql_serverdb.dev.internal and microservice2.dev.internal)
  5. Also easily develop and debug .NET Core services, for the same reasons as above
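The flow above can be sketched as a compose file with a user-defined subnet and fixed addresses; the subnet, service names, and images here are illustrative, not the poster's actual setup:

```yaml
# docker-compose.yml sketch: fixed container IPs on a known subnet,
# matched by hosts-file entries like "172.28.0.10 microservice1.dev.internal".
networks:
  dev:
    ipam:
      config:
        - subnet: 172.28.0.0/16
services:
  microservice1:
    image: example/microservice1     # placeholder
    networks:
      dev:
        ipv4_address: 172.28.0.10    # microservice1.dev.internal
  some_sql_serverdb:
    image: example/sql-server        # placeholder
    networks:
      dev:
        ipv4_address: 172.28.0.20    # some_sql_serverdb.dev.internal
```

With a route from the host into that subnet (as the old NAT gateway provided), every developer resolves the same names to the same addresses.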

This was a very fast and efficient flow that is completely broken now. The only options we are left with are

  1. Port forwarding all containers to the host. What a nightmare, since you no longer have easy context, such as an explicit DNS name defined in the hosts file, to tell you what you are connecting to
  2. Deploy a reverse proxy container, such as Traefik/Nginx, and then configure all the rules to route to containers. This would still allow you nice DNS names for HTTP services, since you can do host/path routing, but I'm not sure how socket-based services using raw TCP/UDP would fare
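The reverse-proxy option could look roughly like this with Traefik's Docker provider (service names, image, and port are illustrative):

```yaml
# docker-compose.yml sketch: one published port on the host, host-based
# routing to containers via Traefik labels.
services:
  traefik:
    image: traefik:v2.10
    command:
      - --providers.docker=true
      - --entrypoints.web.address=:80
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
  microservice1:
    image: example/microservice1   # placeholder
    labels:
      - traefik.http.routers.ms1.rule=Host(`microservice1.dev.internal`)
      - traefik.http.services.ms1.loadbalancer.server.port=8080
```

Raw TCP/UDP can be handled with Traefik's tcp/udp routers, but without TLS SNI there is no hostname to route on, which matches the reservation above.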

Don't try to force a single workflow on everyone; originally we actually had everything configured via port forwarding and it was a freaking nightmare. Since we moved to known static IPs for the locally deployed containers, with a network route for traffic flow from host -> docker, it has been an excellent local development cycle.

I just read the first comment about binding to loopback IPs... going to try that, because it will actually address all the reasons I posted in the comment above :-)

Nice, great! I second this, and would add a column to view the IPs easily in the console.