thrnz/docker-wireguard-pia

HowTo: Enable cross docker host/device internet sharing of the WireGuard VPN tunnel, and forward the PIA port to an IP address (suggestion)

Closed this issue · 17 comments

A little addition to give other containers on a separate docker host, or other network devices, access to the WireGuard VPN tunnel, as well as port forwarding for torrents etc.

This is a suggestion, and I hope it helps others; it took me a while to figure this out, as I'm just starting out with Linux/Docker.

Connect to the WireGuard container shell and locate the /scripts/pf.sh file.

Find line 252 (echo "$(date): Press Ctrl+C to exit"), start a new line after it, and add the following:

echo "$(date): Enabling Internet Sharing..."
iptables -t nat -A POSTROUTING -o wg0 -j MASQUERADE
iptables -A FORWARD -i wg0 -o eth1 -m state --state RELATED,ESTABLISHED -j ACCEPT
iptables -A FORWARD -i eth1 -o wg0 -j ACCEPT
echo "$(date): Internet Sharing DONE!"

If you wish to enable port forwarding and set up the forward to an IP address, add the following on the next line:

echo "$(date): Forwarding port $pf_port to transmission container on second host (192.168.1.48)"
iptables -t nat -A PREROUTING -p tcp --dport $pf_port -j DNAT --to-destination 192.168.1.48:$pf_port
echo "$(date): Port Forwarding DONE!"

It would be great if we could add the above IPTABLES entries as Environment Variables.

If you have more than one docker host, and wish to allow all containers on other hosts access to the WireGuard internet connection, you will need to do the following (this worked for me):

  1. Create a macvlan network (or use an existing one) on the docker host running this wireguard container, so that the container has a known/valid local IP address.
  2. Create a new macvlan on the other host, using the IP address of the WireGuard container as the gateway.
  3. Move your containers onto this new macvlan network.
  4. Add the above iptables script to the relevant location in pf.sh.
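As a rough sketch, step 1 might look like this in the compose file on the WireGuard host. The parent interface and subnet below are illustrative assumptions, not values from this thread:

```yaml
# networks: section on the host running the wireguard container.
# Attaching the container to a macvlan gives it a known LAN address
# that containers on the second host can use as their gateway (step 2).
networks:
  lan_macvlan:
    driver: macvlan
    driver_opts:
      parent: eth0            # host NIC to bridge (assumption)
    ipam:
      config:
        - subnet: 192.168.1.0/24
```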

Obviously, if the container is upgraded to the latest version you will lose the changes to pf.sh, so re-add them.

thrnz commented

I could probably add this and stick it behind a couple of env vars if anyone else needs to use it.

Maybe something like FWD_IFACE=eth1 to set the interface involved, and then PF_DEST_IP=192.168.1.48 to set where the PIA forwarded port should be sent on to.

It might also be worth forwarding UDP traffic too, so we'd end up with something like this:

# NAT and forward traffic from a specific interface if requested
if [ -n "$FWD_IFACE" ]; then
  iptables -t nat -A POSTROUTING -o wg0 -j MASQUERADE
  iptables -A FORWARD -i wg0 -o "$FWD_IFACE" -m state --state RELATED,ESTABLISHED -j ACCEPT
  iptables -A FORWARD -i "$FWD_IFACE" -o wg0 -j ACCEPT
  echo "$(date): Forwarding traffic from $FWD_IFACE to VPN"
fi

# Set env var PF_DEST_IP to forward on to another address
if [ -n "$PF_DEST_IP" ]; then
  iptables -t nat -A PREROUTING -p tcp --dport "$pf_port" -j DNAT --to-destination "$PF_DEST_IP:$pf_port"
  iptables -t nat -A PREROUTING -p udp --dport "$pf_port" -j DNAT --to-destination "$PF_DEST_IP:$pf_port"
  echo "$(date): Forwarding incoming VPN traffic on port $pf_port to $PF_DEST_IP:$pf_port"
fi
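One shell nitpick worth noting: in POSIX sh, an unquoted [ -n $FWD_IFACE ] is true even when the variable is unset, because the expansion collapses the test to the one-argument form [ -n ], and a single non-empty argument always tests true. The variable should therefore be quoted in these tests. A quick demonstration:

```shell
# With FWD_IFACE unset, the unquoted test collapses to `[ -n ]`,
# a one-argument test that is always true ("-n" is a non-empty string).
unset FWD_IFACE

if [ -n $FWD_IFACE ]; then
  echo "unquoted test passes even though the variable is unset"
fi

if [ -n "$FWD_IFACE" ]; then
  echo "never reached"
else
  echo "quoted test correctly fails"
fi
```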

Would that likely do what you need it to? I don't really have the right setup to test it in atm.

@thrnz

Looks good, I'd happily test it out for you. Oh, could you also, if possible, include the text editor nano?

Query: would $FWD_IFACE not always be eth0?

If only I could figure out how to convert my docker-compose file to a docker run command, so I can use existing networks and not have to bring up a stack and then delete parts of it; but that's for me to work out, I think.

thrnz commented

I figured that the interface might change depending on how many networks the container is attached to. In most cases it'll probably just be the one, eth0, but it doesn't hurt to leave it as an option if needed.

Let me know if it misbehaves or needs any further tweaking.

I'm a bit reluctant to add any extra packages to the base image, but you can always add it yourself when needed using apk add nano. Alpine packages install pretty quickly.

OK, that works. I did find a caveat though: if you want to use port forwarding, you need to disable the firewall using the env var FIREWALL=0.

Here is my updated docker-compose.yml as an example of:

  1. Using an external IP for transmission service container
  2. Using port forwarding to that IP address
  3. Disabling the firewall within this container
  4. Using an existing network for this container
  5. Naming the container
  6. Setting the hostname of the container

docker-compose.yml example attached as docker-compose.txt

Thanks for the quick response on the docker image. :)

If I can help in the future, please do ask.

Also do not forget to update the frontpage documentation which lists all the environment variables. :)

thrnz commented

I don't suppose something like this would be enough to get port forwarding working with FIREWALL=1:

iptables -A FORWARD -i wg0 -o $FWD_IFACE -p tcp -d $PF_DEST_IP --dport $pf_port -j ACCEPT
iptables -A FORWARD -i wg0 -o $FWD_IFACE -p udp -d $PF_DEST_IP --dport $pf_port -j ACCEPT

Just tried these commands after re-deploying with the FIREWALL=1 env set, and they error out. I presume this is because $pf_port is not available at the command line. So I manually entered the following:

iptables -A FORWARD -i wg0 -o $FWD_IFACE -p tcp -d $PF_DEST_IP --dport 38188 -j ACCEPT
iptables -A FORWARD -i wg0 -o $FWD_IFACE -p udp -d $PF_DEST_IP --dport 38188 -j ACCEPT

Which worked!

Into which file, and where in that file, do I enter the iptables commands you wanted me to try, so they survive a reload of the container?

thrnz commented

Nice! I've updated the image so it should be up on Docker Hub shortly. Otherwise check out the most recent pf_forward.sh if you want to manually update it yourself.

Excellent! Many thanks, and I've pulled down the latest image and deployed, currently working as expected.

If Transmission is running as a Docker container on the same host, do I need to use either the FWD_IFACE or PF_DEST_IP variables to be able to use the PIA PF port for incoming connections in Transmission? Or is it enough to use network_mode: service:vpn and update the Transmission settings with the PF port manually? Hope it is OK to ask this here; it was not clear from the readme.

thrnz commented

Using network_mode: service:servicename would be enough. I've updated the readme to clarify.

If you're manually setting the forward port number in Transmission, I'd suggest using PORT_PERSIST=1 to try to keep the same port number forwarded across container restarts.

Thanks for the clarification! I am not able to get it to work for some reason. I've spun up the VPN container with the credentials and I can see from the logs that everything seems to be in order and I get a PF port number. I then build the Transmission container and check that it is using the VPN connection and that the GUI and downloading works. However, when entering the PF port it reports as closed. I guess I am missing something, but I am not sure what. I have added my docker-compose and the logs below. I would be grateful if you could have a look.

  vpn:
    image: thrnz/docker-wireguard-pia
    volumes:
      - pia:/pia
    cap_add:
      - NET_ADMIN
      - SYS_MODULE
    ports:
      - target: 9091
        published: 9091
    environment:
      - LOC=swiss
      - USER=<my un>
      - PASS=<my pw>
      - LOCAL_NETWORK=<lan2 ip>.0/25,<lan1 ip>.0/29,172.16.0.0/12
      - KEEPALIVE=25
      - PORT_FORWARDING=1
      - PORT_PERSIST=1
      - FIREWALL=1
    sysctls:
      - net.ipv4.conf.all.src_valid_mark=1
      - net.ipv6.conf.default.disable_ipv6=1
      - net.ipv6.conf.all.disable_ipv6=1
      - net.ipv6.conf.lo.disable_ipv6=1
    healthcheck:
      test: ping -c 1 www.google.com || exit 1
      interval: 30s
      timeout: 10s
      retries: 3

  transmission:
    image: ghcr.io/linuxserver/transmission
    container_name: transmission
    depends_on:
      - vpn
    network_mode: "service:vpn"
    environment:
      - PUID=${PUID}
      - PGID=${PGID}
      - TZ=${TZ}
      - TRANSMISSION_WEB_HOME=/transmission-web-control/
    volumes:
      - type: bind
        source: ${DIR_DOCKER}/transmission/config
        target: /config
      - type: bind
        source: ${DIR_DOWNLOADS}/torrents/watch
        target: /watch
      - type: bind
        source: ${DIR_DOWNLOADS}/torrents
        target: /downloads
    restart: unless-stopped

[log screenshots]

thrnz commented

I'm not familiar with Transmission, however when I tried with your docker-compose it appeared to work as expected. I did have to click 'save' and reopen the config window first though for the port test to succeed.

[transmission screenshot]

Don't you just love it when little stupid things like that are the reason you pulled your hair out for a couple of days.

Thanks a bunch!!

I am having trouble getting this set up. I can't get my transmission container to route. I've included the pertinent docker-compose configs. Any idea where I'm going wrong?

version: "3.7"
services:
  wireguard-pia:
    container_name: wireguard-pia
    image: thrnz/docker-wireguard-pia
    networks:
      vlan60:
        ipv4_address: 192.168.60.45
    volumes:
      - pia:/pia
      - pia-shared:/pia-shared
    cap_add:
      - NET_ADMIN
      - SYS_MODULE
    environment:
      - LOCAL_NETWORK=192.168.0.0/16
      - LOC=server
      - USER=user
      - PASS=pass
      - KEEPALIVE=25
      - VPNDNS=1.1.1.1,1.0.0.1,8.8.8.8
      - PORT_FORWARDING=1
      - PORT_PERSIST=1
      - PORT_SCRIPT=${USERDIR}/docker/transmission/port_script.sh
      - FIREWALL=0
      - WG_USERSPACE=0
      - FWD_IFACE=eth0
      - PF_DEST_IP=192.168.80.42
    sysctls:
      # wg-quick fails to set this without --privileged, so set it here instead
      - net.ipv4.conf.all.src_valid_mark=1
      - net.ipv4.ip_forward=1
      # May as well disable ipv6. Should be blocked anyway.
      - net.ipv6.conf.default.disable_ipv6=1
      - net.ipv6.conf.all.disable_ipv6=1
      - net.ipv6.conf.lo.disable_ipv6=1
    # The container has no recovery logic. Use a healthcheck to catch disconnects.
    healthcheck:
      test: ping -c 1 www.google.com || exit 1
      interval: 30s
      timeout: 10s
      retries: 3

  transmission:
    image: ghcr.io/linuxserver/transmission
    container_name: transmission
    networks:
      vlan80:
        ipv4_address: 192.168.80.42
    depends_on:
      - wireguard-pia

networks:
 vlan60:
    driver: macvlan
    driver_opts:
      parent: bond0.60
      enable_ipv6: "true"
    ipam:
      config:
        - subnet: 192.168.60.0/24
 vlan80:
    driver: macvlan
    driver_opts:
      parent: bond0.80
      enable_ipv6: "false"
    ipam:
      config:
        - subnet: 192.168.80.0/25
          gateway: 192.168.60.45

@noumenon272

Couple of notes.

  1. gateway in an ipam configuration does NOT set the default gateway of a "network". When you create a network, docker makes a bridge adapter on the host machine; gateway is merely the IP address assigned to the host-machine side of that bridge. You can see this with ip addr on the host. In your configuration you tried to assign vlan80 a gateway address that isn't even in that subnet. When I try that with macvlan, docker won't even start. When I try with the default network driver, docker starts the services, but my gateway assignment is simply ignored (since it's for the wrong subnet) and the first usable address in the subnet is used as the host-side IP on the bridge instead.

  2. Generally, in order for the transmission service to be able to talk to the wireguard-pia service, the wireguard-pia service would need access to the vlan80 network.

What works: don't create a network for the transmission service at all. On the transmission container, use network_mode: service:wireguard-pia (where wireguard-pia matches the name of your service, not the name of your container). If you want to put transmission in a different docker-compose stack on the same host, use network_mode: container:wireguard-pia (which matches the explicit container name you set).
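For the single-host case, that suggestion might look like the following minimal sketch (the service names are the ones used in the compose file quoted above; the omitted settings are unchanged):

```yaml
services:
  wireguard-pia:
    image: thrnz/docker-wireguard-pia
    # ...cap_add, environment, sysctls etc. as in the compose file above...

  transmission:
    image: ghcr.io/linuxserver/transmission
    depends_on:
      - wireguard-pia
    # Share the VPN service's network stack. Do not also set "networks:"
    # here; the two options are mutually exclusive.
    network_mode: service:wireguard-pia
```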


I haven't yet figured out how to get another container to route through the pia-vpn container when using the networks: syntax in docker-compose. I've gone so far as to set custom subnets with IPAM, assign a static IP in the pia-vpn container, and then in the application containers run ip route del default and ip route add default via {pia-vpn-container-ip} to send them to the pia-vpn container. Nothing. But from the host I can set up routes that use the IP of the pia-vpn container and that works fine... I just can't get other docker containers to use it without using network_mode: service:vpn.

A service can only have either network_mode: or networks: defined. But you can use networks: on the pia-vpn service and then use network_mode: for the other services in the same compose.yaml. Any service set to network_mode: service:wireguard-pia will inherit the full list of networks that the wireguard-pia service is using. So if you have a service with an internal: true network to talk to its database, you'd have to add that network to the pia service and set your app to use the same network stack with network_mode: service:... or network_mode: container:...
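A sketch of that network inheritance, with a hypothetical app and database (the image names and the db_net network name are placeholders, not from this thread):

```yaml
networks:
  db_net:
    internal: true            # no outside access

services:
  wireguard-pia:
    image: thrnz/docker-wireguard-pia
    networks:
      - db_net                # attached here so dependants inherit it

  app:
    image: example/app        # hypothetical
    # Inherits every network wireguard-pia is attached to, db_net included.
    network_mode: service:wireguard-pia

  db:
    image: postgres
    networks:
      - db_net
```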

I'm not entirely sure if my setup falls under what the OP was trying to attempt, but my basic setup for testing is the following:

The host 10.0.0.76 runs this container under docker, plus a non-dockerized application on the same host. I wanted to use this similarly to ngrok. I tried the following configuration:

docker-compose.yml
version: '3'
services:
    vpn:
        image: thrnz/docker-wireguard-pia
        volumes:
            - pia:/pia
            - pia-shared:/pia-shared
        cap_add:
            - NET_ADMIN
            - SYS_MODULE
        environment:
            - LOC=swiss
            - USER=<>
            - PASS=<>
            - LOCAL_NETWORK=10.0.0.0/24
            - KEEPALIVE=25
            - VPNDNS=8.8.8.8,8.8.4.4
            - PORT_FORWARDING=1
            - PF_DEST_IP=10.0.0.76
            - FIREWALL=0
            - PORT_PERSIST=1
            - FWD_IFACE=eth0
        sysctls:
            - net.ipv4.conf.all.src_valid_mark=1
            - net.ipv4.ip_forward=1
            - net.ipv6.conf.default.disable_ipv6=1
            - net.ipv6.conf.all.disable_ipv6=1
            - net.ipv6.conf.lo.disable_ipv6=1
        healthcheck:
            test: ping -c 1 www.google.com || exit 1
            interval: 30s
            timeout: 10s
            retries: 3

volumes:
    pia:
    pia-shared:

However, incoming connections could not reach the host application. I could spin up a local web server (inside the container) to test, and even a sidecar container with network_mode: "service:vpn", and everything worked fine, but the host application remained unreachable.

After some trial and error, I simply needed to execute the following inside the container, and everything works. I hope this saves someone time!

iptables -t nat -I POSTROUTING -o eth0 -j MASQUERADE