listening on 443(tcp) permission denied on AKS
Raksha-CPU opened this issue · 6 comments
Hello there,
I'm currently working with an eturnal server on AKS. However, I've encountered an error and I'm seeking assistance to figure out what might be the issue.
I've provided my deployment, configmap, and service files below.
Could you kindly help me understand what might be causing this problem?
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ $.Values.eturnalconfig.name }}
  namespace: {{ $.Values.namespace }}
data:
  eturnal.yml: |
    eturnal:
      # Shared secret for deriving temporary TURN credentials (default: $RANDOM):
      secret: "long-and-cryptic"
      # The server's public IPv4 address (default: autodetected):
      relay_ipv4_addr: "x.x.x.x"
      # The server's public IPv6 address (optional):
      #relay_ipv6_addr: "2001:db8::4"
      listen:
        # -
        #   ip: "::"
        #   port: 443
        #   transport: udp
        -
          ip: "::"
          port: 443
          transport: tcp
        #-
        #  ip: "::"
        #  port: 5349
        #  transport: tls
      # TLS certificate/key files (must be readable by 'eturnal' user!):
      #tls_crt_file: /etc/eturnal/tls/crt.pem
      #tls_key_file: /etc/eturnal/tls/key.pem
      # UDP relay port range (usually, several ports per A/V call are required):
      relay_min_port: 36000 # This is the default.
      relay_max_port: 46000 # This is the default.
      # Reject TURN relaying from/to the following addresses/networks:
      blacklist:          # This is the default blacklist.
        - "127.0.0.0/8"   # IPv4 loopback.
        - "::1"           # IPv6 loopback.
        #- recommended    # Expands to a number of networks recommended to be
                          # blocked, but includes private networks. Those
                          # would have to be 'whitelist'ed if eturnal serves
                          # local clients/peers within such networks.
      # If 'true', close established calls on expiry of temporary TURN credentials:
      strict_expiry: false # This is the default.
      # Logging configuration:
      log_level: info # critical | error | warning | notice | info | debug
      log_rotate_size: 10485760 # 10 MiB (default: unlimited, i.e., no rotation).
      log_rotate_count: 10 # Keep 10 rotated log files.
      #log_dir: stdout # Enable for logging to the terminal/journal.
      # See: https://eturnal.net/documentation/#Module_Configuration
      modules:
        mod_log_stun: {} # Log STUN queries (in addition to TURN sessions).
        #mod_stats_influx: {} # Log STUN/TURN events into InfluxDB.
        #mod_stats_prometheus: # Expose STUN/TURN and VM metrics to Prometheus.
        #  ip: any # This is the default: Listen on all interfaces.
        #  port: 8081 # This is the default.
        #  tls: false # This is the default.
        #  vm_metrics: true # This is the default.
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: {{ $.Values.namespace }}
  name: {{ $.Values.eturnal.name }}
  labels:
    app: eturnal
spec:
  replicas: 1
  selector:
    matchLabels:
      app: {{ $.Values.eturnal.name }}
  template:
    metadata:
      labels:
        app: {{ $.Values.eturnal.name }}
    spec:
      nodeSelector:
        node_pool: {{ $.Values.node_pool }}
      subdomain: eturnal
      hostNetwork: true
      securityContext:
        runAsUser: 9000
        runAsGroup: 9000
        fsGroup: 9000
      containers:
        - name: {{ $.Values.eturnal.name }}
          image: {{ $.Values.eturnal.image }}
          imagePullPolicy: {{ $.Values.eturnal.imagePullPolicy }}
          securityContext:
            allowPrivilegeEscalation: true
            readOnlyRootFilesystem: true
            runAsUser: 9000
            runAsGroup: 9000
            runAsNonRoot: true
            privileged: false
            capabilities:
              add: [CAP_NET_BIND_SERVICE]
          ports:
            - name: stunturn-udp
              containerPort: 443
              hostPort: 443
              protocol: UDP
            - name: stunturn-tcp
              containerPort: 443
              hostPort: 443
              protocol: TCP
          volumeMounts:
            - name: eturnal-config
              mountPath: /etc/eturnal.yml
              subPath: eturnal.yml
              readOnly: true
      volumes:
        - name: eturnal-config
          configMap:
            name: eturnal-config
            defaultMode: 0440
I get the following error, and my pod doesn't start:
$ kubectl logs pod/eturnal-56ffcc7bf-58btw -n eturnal
Cannot query stun.conversations.im:3478: network is unreachable
Exec: /opt/eturnal/erts-14.0.2/bin/erlexec -noinput +Bd -boot /opt/eturnal/releases/1.11.1/start -mode embedded -boot_var SYSTEM_LIB_DIR /opt/eturnal/lib -config /opt/eturnal/releases/1.11.1/sys.config -args_file /opt/eturnal/releases/1.11.1/vm.args -erl_epmd_port 3470 -start_epmd false -- foreground
Root: /opt/eturnal
/opt/eturnal
[error] crasher:
initial call: stun_acceptor:init/4
pid: <0.601.0>
registered_name: []
exception exit: eacces
in function stun_acceptor:init/4 (stun_acceptor.erl, line 92)
ancestors: [stun_acceptor_sup,stun_listener_sup,stun_sup,<0.580.0>]
message_queue_len: 0
messages: []
links: [<0.583.0>]
dictionary: []
trap_exit: false
status: running
heap_size: 376
stack_size: 28
reductions: 1065
neighbours:
[critical] Aborting: Cannot start listening on [::]:443 (tcp): permission denied
apiVersion: apps/v1
kind: Deployment
metadata:
[...]
          capabilities:
            add: [CAP_NET_BIND_SERVICE]
[...]
I am not 100% sure, but does it work if you use NET_BIND_SERVICE instead of CAP_NET_BIND_SERVICE?
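For reference, Kubernetes expects capability names without the CAP_ prefix used by capabilities(7), so the container's securityContext would look like this (a minimal sketch based on the deployment above):

```yaml
securityContext:
  capabilities:
    # Kubernetes drops the CAP_ prefix: NET_BIND_SERVICE, not CAP_NET_BIND_SERVICE.
    add: ["NET_BIND_SERVICE"]
```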
Hi,
I tried using NET_BIND_SERVICE, but unfortunately I'm still encountering the same problem.
I came across an article that mentions AKS doesn't support binding to privileged ports for non-root users:
https://learn.microsoft.com/en-us/azure/aks/developer-best-practices-pod-security#:~:text=When%20you%20run%20as%20a%20non%2Droot%20user%2C%20containers%20cannot%20bind%20to%20the%20privileged%20ports%20under%201024.%20In%20this%20scenario%2C%20Kubernetes%20Services%20can%20be%20used%20to%20disguise%20the%20fact%20that%20an%20app%20is%20running%20on%20a%20particular%20port.
It seems I need to build the Docker image with a root user instead of using the eturnal user. Could you please guide me on the necessary changes to make in order to run the image as a root user?
Here are the build instructions.
In the Dockerfile, comment out this line. The container will then run as root by default.
Afterwards, build the image and push it to your container image registry.
In that case, your deployment probably needs adjustments along these lines as well:
apiVersion: apps/v1
kind: Deployment
metadata:
[...]
          securityContext:
            readOnlyRootFilesystem: true
            runAsUser: 0
            runAsGroup: 0
            runAsNonRoot: false
            privileged: false
[...]
Before rebuilding the image, you may also try setting the net.ipv4.ip_unprivileged_port_start sysctl:
https://kubernetes.io/docs/tasks/administer-cluster/sysctl-cluster/#safe-and-unsafe-sysctls
https://kubernetes.io/docs/tasks/administer-cluster/sysctl-cluster/#setting-sysctls-for-a-pod
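As a minimal sketch, that sysctl could be set at the pod level like this (net.ipv4.ip_unprivileged_port_start has been on the safe sysctl list since Kubernetes v1.22, so it should not require extra kubelet flags on recent clusters):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
[...]
spec:
  template:
    spec:
      securityContext:
        sysctls:
          # Allow non-root processes to bind to ports below 1024:
          - name: net.ipv4.ip_unprivileged_port_start
            value: "0"
[...]
```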
Hello,
Great news! Adding the net.ipv4.ip_unprivileged_port_start sysctl did the trick, allowing the turnserver to function without needing root privileges. The process was straightforward to set up on both VM and AKS platforms. And most importantly, everything is working perfectly.
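For anyone finding this later: on a plain VM, the same setting can be made persistent with a sysctl drop-in file (the filename below is illustrative, not prescribed):

```
# /etc/sysctl.d/90-unprivileged-ports.conf
# Allow non-root processes to bind to ports below 1024:
net.ipv4.ip_unprivileged_port_start = 0
```

It is applied on boot, or immediately with `sysctl --system` (or `sysctl -w net.ipv4.ip_unprivileged_port_start=0` for a one-off change).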
Thank you once again for the remarkable turnserver.
Glad it works now. Thanks for the feedback. We will include this in the documentation as well.