kubenetworks/kubevpn

Is it possible to run inside docker

rucciva opened this issue · 54 comments

Hello, I have successfully run kubevpn directly from my notebook. Now I've tried to run it inside Docker, but it doesn't seem to work.

[screenshot attached]

I have created a custom Docker image based on bitnami/kubectl:

FROM bitnami/kubectl:1.27 AS base 
USER 0
RUN apt update -y && apt install -y curl git ca-certificates unzip
WORKDIR /tmp



FROM base AS kubectl-oidc_login
RUN export ARCH="$(uname -m | sed -e 's/x86_64/amd64/' -e 's/\(arm\)\(64\)\?.*/\1\2/' -e 's/aarch64$/arm64/')"; \
    export VERSION="v1.28.0"; \
    curl -sL -o kubelogin.zip "https://github.com/int128/kubelogin/releases/download/${VERSION}/kubelogin_linux_${ARCH}.zip"
RUN unzip kubelogin.zip \
    && mv kubelogin /usr/local/bin/kubectl-oidc_login



FROM base AS kubectl-kubevpn
RUN export ARCH="$(uname -m | sed -e 's/x86_64/amd64/' -e 's/\(arm\)\(64\)\?.*/\1\2/' -e 's/aarch64$/arm64/')"; \
    export VERSION="v2.2.3"; \
    curl -sL -o kubevpn.zip "https://github.com/kubenetworks/kubevpn/releases/download/${VERSION}/kubevpn_${VERSION}_linux_${ARCH}.zip" 
RUN unzip kubevpn.zip \
    && mv bin/kubevpn /usr/local/bin/kubectl-kubevpn
COPY --from=kubectl-oidc_login /usr/local/bin/kubectl-oidc_login /usr/local/bin/kubectl-oidc_login
RUN apt-get install -y wget dnsutils vim curl  \
    net-tools iptables iputils-ping lsof iproute2 tcpdump binutils traceroute conntrack socat iperf3 \
    apt-transport-https ca-certificates curl

I run it using docker-compose:

x-base-env: &base-env
  environment:
    KUBECONFIG: ${PWD:-/src}/.kube/kubeconfig.yaml
  volumes:
    - ${PWD:-.}:${PWD:-/src}
  working_dir: ${PWD:-/src}

x-base: &base
  <<: *base-env
  build: .

services:
  start-vpn.sh:
    <<: *base
    build:
      target: kubectl-kubevpn
    privileged: true
    sysctls:
      net.ipv6.conf.all.disable_ipv6: 0 
    entrypoint: [ ./start-vpn.sh ]

Yes, it supports running in Docker. I tested it and it looks fine.

Here is compose.yaml:

x-base-env: &base-env
  environment:
    KUBECONFIG: ${PWD:-/src}/.kube/kubeconfig.yaml
  volumes:
    - ${PWD:-.}:${PWD:-/src}
  working_dir: ${PWD:-/src}

x-base: &base
  <<: *base-env
  image: docker.io/naison/kubevpn:naisontest

services:
  start-vpn.sh:
    <<: *base
    build:
      target: kubectl-kubevpn
    privileged: true
    sysctls:
      net.ipv6.conf.all.disable_ipv6: 0
    entrypoint: [ ./start-vpn.sh ]

start-vpn.sh

#!/bin/bash

kubectl-kubevpn connect --foreground

But I found some info on Stack Overflow.

Hmm, got it. Maybe it's something with Rancher Desktop, I guess.
Thanks a lot, closing this.

Looking at the logs, I noticed that it failed at the adding route... step; all steps before adding route... work fine. Later I will try Rancher.

Thanks a lot @wencaiwulue.

Just to confirm one thing: kubevpn does not need any other executable to exist, right? Such as ifconfig and route?

Server (kubevpn-traffic-manager pod): only needs iptables.

Client:

  • Linux/Windows: does not need any other executable
  • macOS: needs the built-in route command to add routes (illustrated below)
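
For context, a route addition on macOS looks roughly like the following. This is only an illustration of the built-in command; the CIDR and utun device are hypothetical placeholders, not kubevpn's exact invocation:

# hypothetical: send the cluster CIDR through the tunnel device
route -n add -net 172.24.0.0/16 -interface utun4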

So if I run a Linux container inside macOS, the kubevpn inside the container doesn't need any other executable, right?

Yes, it doesn't need any other executable.

Cool, thanks.

Ah, I found the root cause:

it's --extra-cidr "<ip address range that also includes the ip of the kubernetes api server>"

Surprisingly, if I use this flag on macOS, it works.

Is this OS-related behavior?
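
For reference, the flag under discussion is passed like this; the CIDR below is a documentation placeholder, not the reporter's actual range:

# hypothetical CIDR; replace with the range you need routed through the cluster
kubectl kubevpn connect --extra-cidr 203.0.113.0/24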

Are you saying that:
On macOS: adding the flag --extra-cidr <k8s api server cidr> works fine,
On Linux: adding this flag causes a failure?

Yup.

The reason I need to add --extra-cidr <k8s api server cidr> is that both the service I plan to access over the VPN and the k8s API server are hosted behind a cloud firewall that shares the same IP addresses, and this is something I can't control.

Maybe you can use an SSH jump server to solve this issue? kubevpn supports an SSH jump function ~
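
A sketch of what that might look like, with a hypothetical jump host, user, and key; the flag names follow the kubevpn documentation as I recall it, so treat them as assumptions:

kubectl kubevpn connect \
  --ssh-addr jump.example.com:22 \
  --ssh-username root \
  --ssh-keyfile ~/.ssh/id_rsa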

Or maybe you can change the Docker CIDR settings? Change to another network CIDR?
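
For reference, the Docker daemon's default address pools can be changed in /etc/docker/daemon.json; the range below is illustrative, chosen to avoid the cluster and firewall CIDRs:

{
  "default-address-pools": [
    { "base": "172.27.0.0/16", "size": 24 }
  ]
}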

Yeah, an SSH jump server is an option, but I would still prefer not to add an additional machine on the path. The fewer things to maintain, the better.

Do you mean my local Docker CIDR? Yes, I could change that, but I don't think that's going to solve the problem, because it only happens when I add the CIDR that contains the public IP of the kubernetes API server, not the internal IP.

Btw, do you think there is a problem in my setup? I think it's kind of the same as routing all traffic, i.e. using a 0.0.0.0/0 CIDR, which is a common use case for a VPN.

Or is kubevpn meant only for accessing the internal kubernetes cluster?

Yes, the local Docker setting. A CIDR that contains the API server IP should not be added to --extra-cidr, because the kubevpn client connects to that API server IP directly.

It looks like your setup is not the recommended usage.

A VPN may use 0.0.0.0/0 as its CIDR, but in practice the VPN server's own IP is not routed by the VPN client; otherwise it would cause a loop.

You can add a CIDR which does not contain the API server; it should work fine.

Got it, so the macOS case was not intended, even though it's working.

And yeah, I can confirm that it works if I exclude the API server public IP. It's just that I need to access the same public IP address used by the kubernetes API server in order to reach the service. I guess I need to use dnsmasq to rewrite the resolved IP addresses of all my services to the internal IP of the kubernetes network.
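
For illustration, the dnsmasq override being considered would be a single address rule; the FQDN and ingress IP below are hypothetical values:

# /etc/dnsmasq.conf: answer service lookups with the in-cluster ingress IP
address=/service.domain.com/10.100.0.20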

Yes, not intended.

Why do you need to access the same public IP address used by the kubernetes API server in order to reach the service? If a public IP is used by the API server, this IP should not be allocated to another device (whether a router, switch, or anything else...).

Let's confirm a few things:
1. You can connect to the API server with kubevpn connect, and then you can ping a pod IP or service IP, right?
2. --extra-cidr means we need to reach those CIDRs via the k8s network, e.g. a resource that is reachable from a k8s pod; by adding this option we can reach that resource from our local PC.
3. The service you need to reach at the same public IP address as the kubernetes API server: is it a k8s service or not?

My topology is like this:

internet -> cloud WAF -> kubernetes API (public IP)
internet -> cloud WAF -> ingress (public IP) -> kubernetes services (only allow connections from whitelisted IPs)

These kubernetes services are configured to only allow connections originating from whitelisted IPs, which include the private IPs of the kubernetes cluster. I use kubevpn so that my notebook can access these kubernetes services, since my connection will have a cluster private IP as its source IP (at least this is what happens when I use macOS).

Both of the cloud WAFs I mentioned above are the same service, so they have the same CIDR.

Three questions:
1. Are the kubernetes API and the kubernetes services in the same k8s cluster or not?
2. Which cluster is the ingress (public IP) in?
3. Can the kubernetes API (public IP) access ingress (public IP) -> kubernetes services or not?

 internet -> cloud WAF -> kubernetes API (Public IP)
                                             ↓
                              ingress (Public IP) -> kubernetes services (only allow connection from whitelisted IP)
1. Are the kubernetes API and the kubernetes services in the same k8s cluster or not?

Same cluster, but different public IPs. They are exposed through the same cloud WAF CIDR, which is the only way to access both the kube API server and the kubernetes ingress from the internet.

2. Which cluster is the ingress (public IP) in?

The same cluster as the kubernetes API.

3. Can the kubernetes API (public IP) access ingress (public IP) -> kubernetes services or not?

Yes, it can.

Can we access the ingress (public IP) via the kubernetes pod network? (Can your kubernetes service be reached from another pod in the same cluster or not?)

Yes, I can.

But the problem is when the connection involves an FQDN of the service (redirects etc.), since all of those currently map to the cloud-WAF CIDR, not the ingress IP.

That's why I mentioned using dnsmasq earlier as a workaround: to rewrite the DNS answers for all the services to the ingress IP instead of the cloud-WAF CIDR.

Do you mean that an nslookup of the k8s service maps to the cloud-WAF CIDR? That's very weird.

No, I mean I need to access the services using their FQDNs, such as service.domain.com,
and those are all mapped to the cloud-WAF CNAME, not the ingress public IP.

So, service.domain.com -> cloud-WAF CNAME -> ingress public IP, finally pointing to the ingress IP, right?

Yes. Same with the kubernetes API server:

kube.domain.com -> cloud WAF -> kube API server

Sorry for the late reply (busy with work...). Maybe we can have an online meeting to diagnose the problem?

Sure we can, that would be really helpful.

Join Feishu event: https://www.larkoffice.com/calendar/share?token=e0c3aebbdba8e3557e1d6704707a5203
Subject: (No title)
When: Saturday, Mar 30, 14:00 - 14:30 (GMT+8)
Organizer: Caiwen Feng

Can you join this meeting?

Sure

Sure, thanks a lot.

Hi, sorry, it seems to require me to sign up. A moment.

I can't seem to sign up since I don't have a +86 number. Can we use Google Meet instead?

OK, let me have a try.

I'm restarting my notebook. A moment and I'll get back to you. Sorry.

got it

Hi @wencaiwulue, I've got another workaround: have another container run kubectl proxy --disable-filter --address 0.0.0.0 and have kubevpn connect to that container instead of the actual server, and it works.

There is just one little hiccup: kubevpn won't use the context I pass via the flag and keeps using the default context. For example, even though I start with kubectl kubevpn --context somecontext, it won't use somecontext.

Wow, congratulations~

Do you mean that in local Docker you start a container A with the command kubectl proxy, then start another container B with the command kubevpn connect, and they share the same Docker network?

Can you create another issue about the --context flag not working? I will fix it as soon as possible~ Thanks.

Do you mean that in local Docker you start a container A with the command kubectl proxy and then start another container B with the command kubevpn connect?

Yes, there will be two containers.

And they share the same Docker network?

They do live in the same CIDR, but they have different IPs.

Can you create another issue about the --context flag not working?

Done.

Just as an additional note on sharing the same Docker network: I've tried using the same IP/network namespace for the proxy and kubevpn containers, and the same problem resurfaced.

Hi @wencaiwulue, sorry to ping again.
I noticed that the container running kubevpn can no longer resolve any DNS names set by docker compose. It seems kubevpn overrides all DNS lookups, including the ones set by docker compose. Is that expected?


----------------------------------------------------------------------------------
    Warn: Use sudo to execute command kubevpn can not use user env KUBECONFIG.    
    Because of sudo user env and user env are different.    
    Current env KUBECONFIG value: /Users/rucciva/Development/vira/deployment/.kube/kubeconfig.yaml
----------------------------------------------------------------------------------

start to connect
got cidr from cache
get cidr successfully
update ref count successfully
traffic manager already exist, reuse it
port forward ready
tunnel connected
adding route...
dns service ok
+---------------------------------------------------------------------------+
|    Now you can access resources in the kubernetes cluster, enjoy it :)    |
+---------------------------------------------------------------------------+
Press any key to disconnect...
prepare to exit, cleaning up
failed to release ip to dhcp, err: failed to get cm DHCP server, err: Get "http://host.docker.internal:8001/api/v1/namespaces/default/configmaps/kubevpn-traffic-manager": dial tcp: lookup host.docker.internal on 10.250.0.10:53: no such host
update ref count error, increment: -1, error: update ref-count failed, increment: -1, error: Get "http://host.docker.internal:8001/api/v1/namespaces/default/configmaps/kubevpn-traffic-manager": dial tcp: lookup host.docker.internal on 10.250.0.10:53: no such host
can not update ref-count: update ref-count failed, increment: -1, error: Get "http://host.docker.internal:8001/api/v1/namespaces/default/configmaps/kubevpn-traffic-manager": dial tcp: lookup host.docker.internal on 10.250.0.10:53: no such host
leave proxy resources error: Get "http://host.docker.internal:8001/api/v1/namespaces/default/configmaps/kubevpn-traffic-manager": dial tcp: lookup host.docker.internal on 10.250.0.10:53: no such host
clean up successfully
prepare to exit, cleaning up
failed to release ip to dhcp, err: failed to get cm DHCP server, err: Get "http://host.docker.internal:8001/api/v1/namespaces/default/configmaps/kubevpn-traffic-manager": dial tcp: lookup host.docker.internal on 10.250.0.10:53: no such host
update ref count error, increment: -1, error: update ref-count failed, increment: -1, error: Get "http://host.docker.internal:8001/api/v1/namespaces/default/configmaps/kubevpn-traffic-manager": dial tcp: lookup host.docker.internal on 10.250.0.10:53: no such host
can not update ref-count: update ref-count failed, increment: -1, error: Get "http://host.docker.internal:8001/api/v1/namespaces/default/configmaps/kubevpn-traffic-manager": dial tcp: lookup host.docker.internal on 10.250.0.10:53: no such host
context canceled

Oh no... it is not expected, but I tested it and it looks like it works fine?

Hmm, I see. Got it, thanks.

Hey @wencaiwulue, just want to confirm something. It seems that when kubevpn establishes the VPN connection, it adds kube's DNS service as the first DNS resolver on the client machine. This is what happens to my /etc/resolv.conf before and after connecting with kubevpn:

# cat /etc/resolv.conf 
nameserver 192.168.5.3
# kubectl kubevpn connect

----------------------------------------------------------------------------------
    Warn: Use sudo to execute command kubevpn can not use user env KUBECONFIG.    
    Because of sudo user env and user env are different.    
    Current env KUBECONFIG value: .kube/kubeconfig.yaml
----------------------------------------------------------------------------------

start to connect
got cidr from cache
get cidr successfully
update ref count successfully
traffic manager already exist, reuse it
port forward ready
tunnel connected
adding route...
dns service ok
+---------------------------------------------------------------------------+
|    Now you can access resources in the kubernetes cluster, enjoy it :)    |
+---------------------------------------------------------------------------+
# cat /etc/resolv.conf 
search default.svc.cluster.local svc.cluster.local cluster.local openstacklocal
nameserver 10.250.0.10
nameserver 192.168.5.3
options ndots:5 attempts:2 timeout:2

IIRC, the machine will only contact the next DNS resolver if the current one is not responding. So in this case it will first ask the kubernetes DNS resolver to resolve a domain, and only ask the local DNS resolver if the kubernetes resolver cannot be contacted.

If my understanding is right, won't we become unable to use the local resolver, since the kubernetes DNS resolver will probably answer all the DNS requests? Hence we won't be able to resolve things such as host.docker.internal, since the kubernetes DNS does not know about that hostname.
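
One way to check this from inside the container, using the resolver addresses from the transcript above (dig comes from the dnsutils package already installed in the image):

# the cluster DNS returns NXDOMAIN for host.docker.internal, which the stub
# resolver treats as a final answer instead of trying the next nameserver
dig @10.250.0.10 host.docker.internal +short
dig @192.168.5.3 host.docker.internal +short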

Yes, it will add the k8s DNS server as the first line of /etc/resolv.conf.

Yes, your understanding is right: DNS asks the servers to resolve a domain one by one.

No, you should start your Docker container with the option --add-host=host.docker.internal:host-gateway, e.g.:

docker run -it --privileged --sysctl net.ipv6.conf.all.disable_ipv6=0 -v /var/run/docker.sock:/var/run/docker.sock -v /tmp:/tmp -v ~/.kube/config:/root/.kube/config --platform linux/amd64 --add-host=host.docker.internal:host-gateway debian

It works well because it adds an entry to /etc/hosts.
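
(Since the setup here uses docker-compose, the equivalent of that flag is the standard extra_hosts key:

services:
  start-vpn.sh:
    extra_hosts:
      - "host.docker.internal:host-gateway"
)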

For example:

root@57cd6f355394:/kubevpn/bin# nslookup
<jemalloc>: MADV_DONTNEED does not work (memset will be used instead)
<jemalloc>: (This is the expected behaviour if you are running under QEMU)
> host.docker.internal
;; Got recursion not available from 172.24.128.10, trying next server
;; Got recursion not available from 172.17.64.8, trying next server
;; communications error to 192.168.65.5#53: timed out
;; communications error to 192.168.65.5#53: timed out
;; no servers could be reached

> host.docker.internal
;; Got recursion not available from 172.24.128.10, trying next server
;; Got recursion not available from 172.17.64.8, trying next server
;; communications error to 192.168.65.5#53: timed out
;; communications error to 192.168.65.5#53: timed out
;; no servers could be reached

> exit

root@57cd6f355394:/kubevpn/bin# ping host.docker.internal
PING host.docker.internal (192.168.65.2) 56(84) bytes of data.
64 bytes from host.docker.internal (192.168.65.2): icmp_seq=1 ttl=37 time=0.425 ms
64 bytes from host.docker.internal (192.168.65.2): icmp_seq=2 ttl=37 time=1.88 ms
^C
--- host.docker.internal ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1007ms
rtt min/avg/max/mdev = 0.425/1.152/1.880/0.727 ms
root@57cd6f355394:/kubevpn/bin# cat /etc/hosts
127.0.0.1	localhost
::1	localhost ip6-localhost ip6-loopback
fe00::0	ip6-localnet
ff00::0	ip6-mcastprefix
ff02::1	ip6-allnodes
ff02::2	ip6-allrouters
192.168.65.2	host.docker.internal
172.17.0.6	57cd6f355394

172.24.155.93  reviews                  # Add by KubeVPN
172.24.139.146 ratings                  # Add by KubeVPN
172.24.189.12  productpage              # Add by KubeVPN
172.24.163.61  kubevpn-traffic-manager  # Add by KubeVPN
172.24.174.112 details                  # Add by KubeVPN
172.24.154.80  authors                  # Add by KubeVPN
root@57cd6f355394:/kubevpn/bin# ^C
root@57cd6f355394:/kubevpn/bin# exit

I see, so I still need to add --add-host=host.docker.internal:host-gateway.
Got it, thanks.

Hi @wencaiwulue, sorry for pinging you again.
Just out of curiosity, did anything change in how kubevpn adds routes to the machine in the latest version?
Now my use case no longer works on macOS either, consistent with the Linux container.

Never mind, the network was conflicting with local Docker. Sorry again for pinging.