docker/for-linux

Support host.docker.internal DNS name to host

Mahoney opened this issue · 272 comments

  • This is a bug report
  • This is a feature request
  • I searched existing issues before opening this one

Expected behavior

As in docker-for-mac and docker-for-windows, inside a container, the DNS name host.docker.internal resolves to an IP address allowing network access to the host (roughly the output of ip -4 route list match 0/0 | cut -d' ' -f3 inside the same container).

Actual behavior

host.docker.internal resolves to nothing

Steps to reproduce the behavior

Execute docker run --rm alpine nslookup host.docker.internal

See it returns nslookup: can't resolve 'host.docker.internal': Name does not resolve

Output of docker version:

Client:
 Version:	18.03.0-ce
 API version:	1.37
 Go version:	go1.9.4
 Git commit:	0520e24
 Built:	Wed Mar 21 23:10:09 2018
 OS/Arch:	linux/amd64
 Experimental:	false
 Orchestrator:	swarm

Server:
 Engine:
  Version:	18.03.0-ce
  API version:	1.37 (minimum version 1.12)
  Go version:	go1.9.4
  Git commit:	0520e24
  Built:	Wed Mar 21 23:08:36 2018
  OS/Arch:	linux/amd64
  Experimental:	false

Output of docker info:

Containers: 0
 Running: 0
 Paused: 0
 Stopped: 0
Images: 1
Server Version: 18.03.0-ce
Storage Driver: overlay2
 Backing Filesystem: extfs
 Supports d_type: true
 Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
 Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: cfd04396dc68220d1cecbe686a6cc3aa5ce3667c
runc version: 4fc53a81fb7c994640722ac585fa9ca548971871
init version: 949e6fa
Security Options:
 apparmor
 seccomp
  Profile: default
Kernel Version: 4.13.0-37-generic
Operating System: Ubuntu 17.10
OSType: linux
Architecture: x86_64
CPUs: 1
Total Memory: 1.947GiB
Name: rob-VirtualBox
ID: 3L2C:BTV3:TQO2:4SAG:XVW5:744G:MPWQ:62FK:56DP:KH3Z:EQ7Z:TBR5
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false

WARNING: No swap limit support

Additional environment details (AWS, VirtualBox, physical, etc.)

Running Ubuntu in VirtualBox 5.2.8 on OS/X 10.13.4

Same situation. I want to use this in docker-compose.yml to replace remote_host:

    environment:
      XDEBUG_CONFIG: "remote_host=192.168.0.83 remote_port=9001 var_display_max_data=1024 var_display_max_depth=5"
$ docker-compose -f ~/projects/docker-yii2-app-advanced/docker-run/docker-compose.yml run --rm --entrypoint nslookup php "host.docker.internal"
Creating network "dockerrun_default" with the default driver
Creating dockerrun_mysql_1 ... done
Creating dockerrun_db_1    ... done
nslookup: can't resolve '(null)': Name does not resolve

nslookup: can't resolve 'host.docker.internal': Name does not resolve
...
Kernel Version: 4.4.0-116-generic
Operating System: Ubuntu 16.04.4 LTS

https://docs.docker.com/docker-for-mac/networking/#use-cases-and-workarounds

> I want to connect from a container to a service on the host
>
> The host has a changing IP address (or none if you have no network access). From 18.03 onwards our recommendation is to connect to the special DNS name host.docker.internal, which resolves to the internal IP address used by the host.
>
> The gateway is also reachable as gateway.docker.internal.

Same situation, when taking a closer look at /etc/hosts I notice the following:

172.17.0.2 f85e063d8c3e

Which suggests that it's just setting it to a random ID rather than host.docker.internal, which is what I need.

Zomis commented

When I tried to find out how to connect to host from Docker I found this question/answer on Stack Overflow: https://stackoverflow.com/a/43541732/1310566

I was not aware at the time that it only applied to macOS and Windows (it was just recently edited)

While this feature is not available on Linux, I use:

web:
  image: httpd:2.4
  volumes:
    - ......
  entrypoint: 
  - /bin/sh
  - -c 
  - ip -4 route list match 0/0 | awk '{print $$3" host.docker.internal"}' >> /etc/hosts && httpd-foreground 

@atolia
This looks like it works with docker-compose exec web but doesn't work with docker-compose run --rm --entrypoint /bin/bash web.

A similar effect can be achieved with:

echo -e "`/sbin/ip route|awk '/default/ { print $3 }'`\tdocker.host.internal" | sudo tee -a /etc/hosts > /dev/null

Note that @atolia already provided a similar solution, but it does not consider a non-privileged default USER. This one will work for non-root images with passwordless sudo available; for images where the default user is root, just remove the sudo part.

This command will make docker.host.internal available regardless of the Docker version or execution mode. I'm using this in entrypoint files.

Firstly - Docker is an amazing tool, so thank you to all who work tirelessly on it! As the leader of a large mixed team of Linux and Mac engineers, this has been one of the biggest "why did we leave Vagrant" questions I get hammered about since we called time on Vagrant. It is very frustrating that this exists on Mac and Windows and not Linux. We need connect-back for Xdebug, and for letting Selenium running in a container access local URLs for acceptance testing. This difference is bloating our build scripts with more and more fragility, so it would be great if this were standardised. Is it not a worry if the same version of the Docker engine on the three platforms can deviate in feature set?

docker.host.internal is still unavailable on my mac. And I can't connect to my host with 172.17.0.1.
My docker version:

Client:
 Version:      18.03.1-ce
 API version:  1.37
 Go version:   go1.9.5
 Git commit:   9ee9f40
 Built:        Thu Apr 26 07:13:02 2018
 OS/Arch:      darwin/amd64
 Experimental: false
 Orchestrator: swarm

Server:
 Engine:
  Version:      18.03.1-ce
  API version:  1.37 (minimum version 1.12)
  Go version:   go1.9.5
  Git commit:   9ee9f40
  Built:        Thu Apr 26 07:22:38 2018
  OS/Arch:      linux/amd64
  Experimental: true

In case you missed it: docker.for.mac.host.internal and docker.for.mac.localhost do work - but only on docker for mac...

> From 18.03 onwards our recommendation is to connect to the special DNS name host.docker.internal

Older aliases are deprecated in favor of this one. I tried them; they are not working.

Not working here either (unsurprisingly).

@JuxhinDB it's not a random number, it's the container id. Still not useful though.

I'm running a microsoft/dotnet-framework container on a Windows host (v18.03.1-ce-win65 17513), and host.docker.internal does not work.

Any idea when Linux support for the special DNS name will be added?

@kunalbhatia87 still waiting for this issue to be resolved? 😆

rfay commented

There are several comments (and workarounds) here that mistakenly use "docker.host.internal", which I don't think was ever supported. The hostname we want to be supported is the one that's supported in Docker for Windows and Docker for Mac, "host.docker.internal"

rfay commented

@Mahoney I think you should check "This is a bug report" in the OP. This is a bug. Docker team, please acknowledge it, thanks!

@rfay I'm not aware of this ever being a documented feature of Linux docker - as far as I can see it's only documented for docker-for-mac and docker-for-windows, and only as a recent change in each case. I couldn't find any discussion around the choice or anything to suggest it had been agreed as something all versions of Docker should implement, though it would make sense to me if it were.

So as far as I can see "feature request" rather than "bug" is the correct categorisation.

@Mahoney while I do agree that this is a feature request, it's an important one. In the end, you want your dev team to use a shared config file all across; the underlying OS should be indifferent.

For me, running Linux on a Mac-based team, it's very bad to have to either create and ignore the changes to a config file, or to have to create hosts entries in each VM to mimic Docker for Mac behavior. I think it's simpler to just have an additional entry on the Docker networking so that the host is always reachable using the same hostname.

@brunosaboia What I do while waiting for this to be resolved is create Linux-specific config files and mount them via docker-compose.override.yml.

Not the perfect workaround, but it is the best solution right now.
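For illustration, such an override could look like this (a sketch with hypothetical service and file names; the override file would be kept only on the Linux machines):

```yaml
# docker-compose.override.yml (hypothetical names), present only on Linux hosts.
# macOS/Windows teammates use the base docker-compose.yml unchanged.
version: "3"
services:
  php:
    volumes:
      # Linux-specific config that points at the docker0 bridge IP
      # instead of host.docker.internal:
      - ./xdebug-linux.ini:/usr/local/etc/php/conf.d/xdebug.ini
```

docker-compose merges docker-compose.override.yml over docker-compose.yml automatically, so no extra flags are needed.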

I'm running a windows container on a windows host:
microsoft/dotnet:2.1-aspnetcore-runtime-nanoserver-1709

host.docker.internal does not work to connect to a service on the host.

Issues with docker for windows should be raised in the docker/for-win repo after an appropriate search such as https://github.com/docker/for-win/issues?utf8=%E2%9C%93&q=is%3Aissue+is%3Aopen+host.docker.internal

Yeah, this definitely needs to be implemented so that we can have consistency.

Really, containers should always have had something like this anyway.

As a workaround, you can follow: https://dev.to/bufferings/access-host-from-a-docker-container-4099

Working fine for me

Sadly, I'm not going to take on the debt and overhead of an entrypoint script in all of my containers.

Definitely wouldn't want this "workaround" to become a permanent solution.

@FX-HAO If you have the same problem as me, try disabling custom DNS settings in here:

[screenshot of the Docker DNS settings dialog]

(It was defunct on my machine also, but removing the custom DNS settings made host.docker.internal work as intended.)

Note: This is not related to the OP's question, so sorry for wandering off-topic a bit here.

@perlun That fixed my problem, thanks.

This works for me in alpine, ubuntu and debian images on a Linux host:

    if grep "docker.host.internal" /etc/hosts; \
    then \
    echo -e "\n it already exists" ;\
    else \
    echo -e "`/sbin/ip route|awk '/default/ { print $3 }'`\tdocker.host.internal" >> /etc/hosts ;\
    fi

That's just a variation of an entrypoint script, as mentioned already. That's not a solution or relevant to this issue.

The idea here is that we need a uniform hostname to use on linux, docker for mac and docker for windows.

+1 on this, for consistency!

The problem disappeared after removing the DNS section in my daemon config

Thanks!

@mdelapenya - Best to qualify that your comment is off topic for this ticket. The issue referenced by the original description is not resolved.

Thanks @atrauzzi for your -1, but I suffered from this error and I just posted my workaround/solution.

I promise not to disturb you with my feedback 🔕

@seemethere @thaJeztah It would be interesting to hear your thoughts about this one. I would be willing to make an attempt at filing a PR about this, if you can point me in the right direction. (where is the Docker-internal DNS code retained, in what repo, and how can the new functionality be applied to Docker-for-Linux scenarios only?)

@mdelapenya - Really, it's as simple as editing your comment to note that it's off topic. Wouldn't be so bad then. The risk is that your remark offers so little context, it makes this issue sound close-able to someone reading at a glance.

Don't think you have any reason to be sour, I'm happy to remove the -1 and delete my clarification if you edit yours! 😄

Be considerate of other people in the thread!

@perlun Did you find a solution to make docker.host.internal work with the custom DNS settings?

Also curious to know whether there will be one DNS entry to rule them all!

😺 One ring to rule them all

hopping in here since I'm new to this and it's interesting. I'm so happy I found this thread.

@alexdashkov

> @perlun Did you find a solution to make docker.host.internal work with the custom DNS settings?

Unfortunately, no. There was no pressing reason for me to use the custom DNS so I could just go with whatever my DHCP server would hand me.

(Again: this is the off-topic part of the thread. I will refrain from writing more about this; the underlying problem is that host.docker.internal does not work at all on Docker for Linux, unlike on Windows and macOS.)

Too bad there's no one willing and/or able to make such an important consistency change for such a long time. And the platform that's most popular when it comes to Docker usage is the only one affected...

Feel free to retweet, like and reply to this: https://twitter.com/Omega_/status/1024360453668003841

I agree, there just seems to be no triage/oversight here and this is a fairly high profile issue. Someone at a product level at docker surely would give this some priority.

Ok, this is hard to see, but it's terrible: host.docker.internal is not available in Docker on Linux. It has worked on macOS since 18.03.

Still have doubts? Try it:

docker run --rm -it bash:latest ping host.docker.internal

The issue description mentions that it works on Windows (I did not test that, but it's probably fine). I suppose it's safer not to have this on a production server for security reasons, but for debugging it's especially useful, for example with PHP Xdebug.

Entrypoint fix for Docker on Linux

As shown in previous comments, here's what one can add to their entrypoint.sh (written in bash; I refactored #264 (comment) for readability).

#!/usr/bin/env bash
set -x
set -e

function fix_linux_internal_host() {
  DOCKER_INTERNAL_HOST="host.docker.internal"

  if ! grep $DOCKER_INTERNAL_HOST /etc/hosts > /dev/null ; then
    DOCKER_INTERNAL_IP=`/sbin/ip route | awk '/default/ { print $3 }' | awk '!seen[$0]++'`
    echo -e "$DOCKER_INTERNAL_IP\t$DOCKER_INTERNAL_HOST" | tee -a /etc/hosts > /dev/null
    echo "Added $DOCKER_INTERNAL_HOST to hosts /etc/hosts"
  fi
}

fix_linux_internal_host

# [...] some other things you may want in your entrypoint
# Make sure you use the same magic as the FROM php:whatever you are using such as:
# https://github.com/docker-library/php/blob/master/7.3-rc/stretch/apache/docker-php-entrypoint
# https://github.com/docker-library/php/blob/master/7.3-rc/stretch/cli/docker-php-entrypoint
exec "$@"

Edit 1: Updated to match @brunosaboia's comment
Edit 2: Added shebang, required entrypoint exec command and notes
Edit 3: Add example to show that host.docker.internal is not available on Linux

There are two comments describing how to fix this with the entrypoint so far:

Please note that some of them got the hostname wrong 🔥. The right value is

  • host.docker.internal
    not
  • docker.host.internal

✔️ It's spelled correctly in the fix_linux_internal_host above. This was very misleading.

Docker command line argument

Another way around (also mentioned in comments above) is to pass --add-host host.docker.internal:<some_hardcoded_ip>, where one must first find the host IP exposed to the container by running the following command (inside the container):

/sbin/ip route|awk '/default/ { print $3 }'
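As a sketch of how those pieces combine (the route line below is a hard-coded sample, since the real output varies per host):

```shell
# Sample `ip route` output from inside a container on the default bridge;
# hard-coded here for illustration, as the real output varies per host.
route_output='default via 172.17.0.1 dev eth0'

# The third field of the default route is the gateway, which on Linux
# is the host side of the docker0 bridge:
HOST_IP=$(printf '%s\n' "$route_output" | awk '/default/ { print $3 }')
echo "$HOST_IP"
# prints 172.17.0.1
```

The resulting address can then be fed into a run command along the lines of docker run --rm --add-host "host.docker.internal:$HOST_IP" ... (an untested sketch; adjust to your setup).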

@GabLeRoux , thanks for that.

I would change your ip route call to the following:

/sbin/ip route | awk '/default/ { print $3 }' | awk '!seen[$0]++'

Just to be sure you're removing duplicates, if any.

Because it has to be automatic.

Sorry, I meant it could take a similar approach by making sure dnsmasq's hostfile gets updated and dnsmasq gets restarted accordingly, whenever a docker bridged network comes up or goes down.

Using host.docker.internal in the extra_hosts config doesn't seem to work either:

services:
  apigateway:
    image: apigateway-img
    container_name: apigateway
    ports:
     - 80:80
    extra_hosts:
     - wdb:host.docker.internal

Running docker-compose gives:

ERROR: for apigateway  Cannot create container for service apigateway: b'invalid IP address in add-host: "host.docker.internal"'

Running this on Docker for Mac version:

Docker version 18.06.0-ce, build 0ffa825
docker-compose version 1.22.0, build f46880f

I tried docker.for.mac.host.internal as well, but it gives the same issue.

However, if I start a container manually and perform a ping to host.docker.internal, I get a connection:

root@f7bbd5f4702d:/# ping host.docker.internal
PING host.docker.internal (192.168.65.2) 56(84) bytes of data.
64 bytes from 192.168.65.2 (192.168.65.2): icmp_seq=1 ttl=37 time=0.881 ms
64 bytes from 192.168.65.2 (192.168.65.2): icmp_seq=2 ttl=37 time=0.543 ms
64 bytes from 192.168.65.2 (192.168.65.2): icmp_seq=3 ttl=37 time=0.330 ms
64 bytes from 192.168.65.2 (192.168.65.2): icmp_seq=4 ttl=37 time=0.300 ms
^C
--- host.docker.internal ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3076ms
rtt min/avg/max/mdev = 0.300/0.513/0.881/0.233 ms

It seems like host.docker.internal cannot be resolved during docker-compose setup, and then that container fails. To me it would be logical for DNS names in the extra_hosts config to be passed through as-is, without a lookup attempt at docker-compose setup time.

@rigolepe you cannot use a non-numeric IP address in extra_hosts.

Until host.docker.internal is working for every platform, you can use my container acting as a NAT gateway without any manual setup: https://github.com/qoomon/docker-host

Until host.docker.internal is working for every platform, I'll use a .env file so the containers stay unchanged.

Is there any work currently being done to bring host.docker.internal to docker for Linux?

Like others, I've wasted time on this one. This is what I ended up doing:

  1. Create dummy interface:
    NB: I already had dummy0 device on my system and wanted to leave it well alone, so created another one as dummy1:
    ip link add dummy1 type dummy
    ip addr add 169.254.5.5/32 dev dummy1
    ip link set dev dummy1 up
  2. Change /etc/docker/daemon.json, add an entry for DNS using above interface:
    "dns" : ["169.254.5.5"]
    Restart docker.
  3. Change /etc/dnsmasq.conf on the host, add a line:
    address=/host.docker.internal/169.254.5.5
    NB: If dnsmasq isn't listening on all addresses, you may have to add the above IP address as listen address.
    You can probably do this without dnsmasq, but it tends to be running on my machines, and I prefer not to add stuff to /etc/hosts if I can avoid it.

CAVEAT: I'm very new to Docker, there may be drawbacks to this approach. My main driver was that I didn't want to modify any compose or docker files. YMMV.

The way docker does DNS by default is fundamentally broken. It should always use dnsmasq and the host's DNS configuration by default (and not use 8.8.8.8). Each container should (by default) resolve DNS by querying the host, which should forward the request to its own resolver, and provide resolution for all of .docker.internal, including host.docker.internal. All other accessible containers should also resolve in .docker.internal

Totally agree.

The workaround I've used for now: use 172.17.0.1 as the "host IP address". This works unless people start changing the IP address of the docker0 interface, but it's clearly much less elegant than a host.docker.internal DNS name.

(Caveat: this uses the bridge default Docker network which is apparently deprecated and not recommended for production use according to this web page. However, for development environments it can be pretty fine.)
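In compose form, that workaround amounts to hard-coding the bridge IP via extra_hosts (a hypothetical sketch; the service name and image are placeholders):

```yaml
version: "3"
services:
  app:
    image: alpine
    extra_hosts:
      # Hard-codes the default docker0 bridge IP; breaks if the bridge
      # subnet is reconfigured (e.g. via "bip" in /etc/docker/daemon.json).
      - "host.docker.internal:172.17.0.1"
```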

@perlun that IP works for me in most environments... but sometimes it doesn't. Definitely not a solution for production :-(

@juanmirocks You can use ip -4 addr show docker0 | grep -oP '(?<=inet\s)\d+(\.\d+){3}' (the docker0 bridge should have an IP that connects to the host, which you can inspect with ip -4 addr show docker0).

Then you could pass the result value to some variable and use it.

For example, if you use docker-compose inside a container (https://hub.docker.com/r/docker/compose/), you can change its script to use the variable.

1) Download the run script:

sudo curl -L --fail https://github.com/docker/compose/releases/download/1.22.0/run.sh -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose

You can see the original script in the official repo: https://github.com/docker/compose/blob/1.22.0/script/run/run.sh

2) Change it to use the IP mentioned previously (basically these 2 lines of code instead of the last line in the original file):

DOCKER_HOST_IP=$(ip -4 addr show docker0 | grep -oP '(?<=inet\s)\d+(\.\d+){3}')
exec docker run --rm $DOCKER_RUN_OPTIONS $DOCKER_ADDR $COMPOSE_OPTIONS $VOLUMES -w "$(pwd)" --env DOCKER_HOST_IP=$DOCKER_HOST_IP $IMAGE "$@"

Full script: https://github.com/lucasbasquerotto/docker-scripts/blob/1.0.0/docker-compose.sh

3) Use it in any compose file:

services:
  myservice:
    image: alpine
    extra_hosts:
      - "host.docker.internal:$DOCKER_HOST_IP"

The above is a docker compose use case, but it can be adapted to other scenarios.

Especially for the docker-compose scenario, I really recommend this container to solve the problem: https://github.com/qoomon/docker-host

I'd use alpine so we're not depending on the host:
docker run --rm alpine ip route | awk 'NR==1 {print $3}'

Hi,
I'm using docker-maven-plugin in my Java Maven project, and I need to reach the host machine from inside the Docker container (I have a REST client calling a localhost:8081/... URL).
What's the option for me?
Thanks

@qoomon ok thanks. But how can I connect that container to mine, and what will the URL of my endpoint become, from localhost:8081... to ...?

@seth100 have a look at the README of https://github.com/qoomon/docker-host
your domain to access the docker host will look like dockerhost:8081

@seth100 I've improved the documentation with some better examples and description, hopefully this will answer all your questions. Feel free to open a support issue at https://github.com/qoomon/docker-host

@qoomon "Warning: The --link flag is a legacy feature of Docker. " https://docs.docker.com/network/links/

I've just installed Docker under Ubuntu and this bit me.
Is there any hope of getting this feature in anytime soon?
I think it really is valuable to be able to use a logical name instead of an IP.

@GreenAsJade I know; however, it is the easiest way so far. I added examples for the new officially suggested method via docker network -> https://github.com/qoomon/docker-host

Thumbs up for implementing this for Linux. I'm trying to run the same XDebug session that I've used on Windows and it doesn't work on Linux.

๐Ÿ‘ to adding this feature please to enable a platform-independent way of knowing the host ip address

@MatMercer @salqadri feel free to submit a pull request

Resolving 'gateway.' from a container on Linux gives the IP address of the host on the same bridge.
I cannot find this documented anywhere.

Docker version 18.09.1, build 4c52b90

These are all interesting workarounds, but it's kind of awkward that they are OS dependent...

> @MatMercer @salqadri feel free to submit a pull request

I tried to find the code that creates the "host.docker.internal" name. Even grepping the entire repository, I didn't find what code generates it. So, since I'm not a Go nor a Docker developer, I don't know how to implement it.

Here is a fix: moby/libnetwork#2348

> Here is a fix: docker/libnetwork#2348

AWESOME. I hope it gets merged soon!

Looks good, but I wish there was more effort on the other items here:

#264 (comment)

> Looks good, but I wish there was more effort on the other items here:
>
> #264 (comment)

That should be another issue entirely, not this one.

My Docker image doesn't have the ip command that others used as a workaround, so this is my workaround to get the host's IPv4 address so that I can use Xdebug for PHP.

# In /proc/net/route, field 2 is the destination and field 8 the mask;
# both are all zeros for the default route.
export HOST=$(printf "%d.%d.%d.%d" $(awk '$2 == "00000000" && $8 == "00000000" { for (i = 8; i >= 2; i=i-2) { print "0x" substr($3, i-1, 2) } }' /proc/net/route))
export XDEBUG_CONFIG="idekey=PHPSTORM remote_host=${HOST} remote_port=9000 remote_autostart=1 auto_trace=1 remote_enable=1"

This way, I don't have to add any additional dependencies. (Assuming you already have awk and a compatible shell. I tested it works in both Bash and Dash.)

You can always get rid of awk:

while read -r _ dest gw _ _ _ _ mask _; do if [ "$dest" == "00000000" ] && [ "$mask" == "00000000" ]; then printf "%d.%d.%d.%d" "0x${gw:6:2}" "0x${gw:4:2}" "0x${gw:2:2}" "0x${gw:0:2}"; break; fi; done < /proc/net/route

or marginally more readable:

# read each input line capturing dest, gw and mask as the main interesting bits
while read -r _ dest gw _ _ _ _ mask _; do
    # if both dest and mask are all 0s
    if [ "$dest" == "00000000" ] && [ "$mask" == "00000000" ]; then
        # reformat the gateway as a dotted quad ip
        printf "%d.%d.%d.%d" "0x${gw:6:2}" "0x${gw:4:2}" "0x${gw:2:2}" "0x${gw:0:2}"
        # don't process more of the loop (probably optional)
        break
    fi
done < /proc/net/route

But if we do it that way, then it doesn't work on dash anymore. :-(

Also, I think both ways, we are writing it in a way that depends on the host architecture being little-endian. If we wanted to be perfect, we would also have to find a way to determine the host byte order.
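For what it's worth, here is the decode applied to a fixed sample value (bash-only substring expansion, per the dash caveat above; the sample 0100A8C0 is hard-coded for illustration):

```shell
# On little-endian hosts, /proc/net/route prints the address bytes reversed,
# so the dotted quad comes out by reading the hex byte pairs right to left.
gw='0100A8C0'
printf '%d.%d.%d.%d\n' "0x${gw:6:2}" "0x${gw:4:2}" "0x${gw:2:2}" "0x${gw:0:2}"
# prints 192.168.0.1
```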

Any news?

In case anyone else is still waiting for this.
https://biancatamayo.me/blog/2017/11/03/docker-add-host-ip/

Any workaround?

With nginx and certbot containers, I use these lines; works fine with host.docker.internal:

command:
  - /bin/sh
  - -c
  - ip -4 route list match 0/0 | awk '{print $$3" host.docker.internal"}' >> /etc/hosts && while :; do sleep 6h & wait $${!}; nginx -s reload; done & nginx -g "daemon off;"

Sources:

Most thumbed-up issue in this project (open issues page, click Sort -> Most reactions -> thumbs-up)

shehi commented

Considering Docker is having financial problems, no wonder even the most thumbed-up issue sits here for ages.

jlk commented

Let's leave the financials out of github and try to support our friends and the community.

FYI, I was told that the workaround I posted above does not work for my colleagues who use Docker for Mac.
Does anyone know a cross-platform (with respect to the host's platform) workaround for this issue?

I've created a container acting as transparent proxy to access docker host without any setup -> https://github.com/qoomon/docker-host

Guys, really, stop posting your "brilliant" workarounds... containers, modifications of compose files, praying to all IT gods, crying etc. - all your solutions could be found on the first page of Google search results. But there is only one "true" way - it should be out of the box for all platforms...

msrd0 commented

@maoxuner I doubt it's always that IP address with every possible setup of Docker networks.

Yes, the IP changes on mine, and that's just the defaults.

For now, adding it in the entrypoint seems to work well, assuming you have passwordless sudo as suggested by @hernandev. However, it would be great if this were brought in to be consistent with Mac.

You can access the host IP with the option network_mode: host in your docker-compose file.

The reason people are asking for this is for docker environments that don't use host network_mode.

@0xbad0c0d3 - It's a workaround, but I haven't seen it posted anywhere else, and it's simple as hell to do and doesn't break other OSes. So it's a winning one-liner.

A temporary 'hack', as this is never going to be implemented.
It will do nothing on Mac/Windows hosts.
It will make host.docker.internal work on Linux hosts.

Add this text to a file called 'docker-entrypoint.sh'
ping -c1 -q host.docker.internal 2>&1 | grep "bad address" >/dev/null && echo "$(netstat -nr | grep '^0\.0\.0\.0' | awk '{print $2}') host.docker.internal" >> /etc/hosts && echo "Hosts File Entry Added for Linux!!!!!" || :

Add these lines to the Dockerfile for that container:

COPY docker-entrypoint.sh /docker/
RUN chmod +x /docker/docker-entrypoint.sh
ENTRYPOINT ["/docker/docker-entrypoint.sh"]

It must be an entrypoint script, as the hosts file gets rewritten by Docker very late in the build.

It first pings that hostname, which resolves on Windows/macOS Docker (but not on Linux), then redirects the error output to standard output so it can be piped. grep then looks for "bad address"; the matching line is sent to /dev/null (so it isn't printed) but still results in success, and therefore the command after the && runs too, adding the gateway IP to the hosts file.

The gateway IP only works on Linux, as there it is native communication with the host machine.

On Windows and Mac, the gateway IP my command grabs would actually refer to the mini VM that Docker runs to facilitate its container architecture (which the Linux version doesn't require), and as a result Xdebug (for example, which is why I worked this out) doesn't call out to the right place.
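Unpacking just the gateway-extraction part of that one-liner, with a hard-coded sample netstat -nr line (the real output varies per host):

```shell
# Sample default-route line from `netstat -nr`; hard-coded for illustration.
netstat_line='0.0.0.0         172.17.0.1      0.0.0.0         UG        0 0          0 eth0'

# The second column of the 0.0.0.0 route is the gateway, i.e. the host on Linux:
GW=$(printf '%s\n' "$netstat_line" | awk '/^0\.0\.0\.0/ { print $2 }')
echo "$GW host.docker.internal"
# prints 172.17.0.1 host.docker.internal
```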

@MrNickH There are some problems I see with your approach:

1) You need to update the dockerfile (or create a new one that extends another image).
2) That approach overrides the entrypoints of images that already have an entrypoint (you could try to make some workarounds, but probably it will need to be done on a per image basis, and will end up being a workaround of a workaround and probably causing problems later on).

The benefit I see with your approach is that you can include the entrypoint file in a versioned repository and, if you use docker-compose, just build a simple dockerfile with the RUN and ENTRYPOINT commands like you exposed, to make a thin wrapper and make it work in a reliable way across multiple machines with git clone ... && cd myrepo && docker-compose build && docker-compose up -d (this example is using docker-compose, but it should work with other tools in a similar way).

I think using an environment variable and passing it to the --add-host directive (like --add-host host.docker.internal:$ENV_HOST_IP), or extra_hosts when using docker-compose, is less intrusive and less likely to cause trouble. Furthermore, you don't have to change the dockerfile (making it easy to remove when this feature is implemented for Linux).

That said, if your approach is working for you and you aren't seeing any major drawbacks, then it's fine too.

as this is never going to be implemented.
:(

Having consistent features between platforms seems pretty important to me. But I guess docker & co have higher priority issues.

> as this is never going to be implemented.
> :(
>
> Having consistent features between platforms seems pretty important to me. But I guess docker & co have higher priority issues.

Quite.

> @MrNickH There are some problems I see with your approach:
>
> 1) You need to update the dockerfile (or create a new one that extends another image).
> 2) That approach overrides the entrypoints of images that already have an entrypoint (you could try to make some workarounds, but probably it will need to be done on a per image basis, and will end up being a workaround of a workaround and probably causing problems later on).
>
> The benefit I see with your approach is that you can include the entrypoint file in a versioned repository and, if you use docker-compose, just build a simple dockerfile with the RUN and ENTRYPOINT commands like you exposed, to make a thin wrapper and make it work in a reliable way across multiple machines with git clone ... && cd myrepo && docker-compose build && docker-compose up -d (this example is using docker-compose, but it should work with other tools in a similar way).
>
> I think using an environment variable and passing it to the --add-host directive (like --add-host host.docker.internal:$ENV_HOST_IP), or extra_hosts when using docker-compose, is less intrusive and less likely to cause trouble. Furthermore, you don't have to change the dockerfile (making it easy to remove when this feature is implemented for Linux).
>
> That said, if your approach is working for you and you aren't seeing any major drawbacks, then it's fine too.

Your examples exactly match the situation I'm in, and I suspect a good deal of others' situations too!
--add-host cannot be performed dynamically depending on the host environment, and will break Windows/Mac solutions.

To me it's a one-liner that keeps consistency, and hopefully it can be removed when figurative fingers are pulled out.
To me it's a one liner that keeps consistency, and hopefully can be removed when figurative fingers are pulled out.