Upgrade to 1.1.14 fails with: listen tcp 0.0.0.0:80: bind: permission denied
amity0 opened this issue · 6 comments
I'm trying to upgrade my flagr version to 1.1.14 (using Docker), but I'm getting this error:
2022/11/22 14:18:26 listen tcp 0.0.0.0:80: bind: permission denied
The host and port are 0.0.0.0:80. It works with 1.1.13. Has anything changed in the new version?
Have you tried this locally? Could you share a command to reproduce it?
I have tried:
docker run -it -p 80:18000 ghcr.io/openflagr/flagr:1.1.14
and
docker run -it --env PORT=80 -p 80:80 ghcr.io/openflagr/flagr:1.1.14
both worked fine locally.
If there is more context and a way to reproduce it, then I can investigate it further.
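One thing that might help narrow it down: on Linux, "bind: permission denied" for a port below 1024 usually means the process is not running as root and does not have CAP_NET_BIND_SERVICE. A quick, hedged way to check which user the image runs as (assuming id is available in the image):
docker run --rm --entrypoint id ghcr.io/openflagr/flagr:1.1.14
If that prints a non-root UID, then something in the cluster setup (for example a securityContext) could be the difference from the local runs.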
@marceloboeira Thanks for the quick reply. I'm unable to reproduce it locally. It only happens on my Kubernetes cluster (using GKE).
I'm not sure how to proceed with this one. Do you have any more ideas on how to debug the issue?
It would be interesting to understand what the task definition looks like, mostly which environment variables are set and how things are configured, so that we can reproduce it locally. Could you share the task definition from GKE? (Feel free to redact private things/secrets; we mainly care about the Docker configuration.)
Here is the redacted deployment yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: flagr
  labels:
    app: flagr
spec:
  replicas: 1
  selector:
    matchLabels:
      app: flagr
  template:
    metadata:
      labels:
        app: flagr
    spec:
      containers:
        - name: flagr
          image: ghcr.io/openflagr/flagr:1.1.14
          command: ["/bin/sh"]
          args: ["-c", "source /env/variables.original.export && ./flagr"]
          volumeMounts:
            - name: shared-data
              mountPath: /env
            - name: flagr-health-check
              mountPath: /probe
          env:
            - name: HOST
              value: "0.0.0.0"
            - name: PORT
              value: "80"
            - name: FLAGR_DB_DBDRIVER
              value: postgres
            - name: FLAGR_HEADER_AUTH_ENABLED
              value: "true"
            - name: FLAGR_HEADER_AUTH_USER_FIELD
              value: My-Header
            - name: FLAGR_PROMETHEUS_ENABLED
              value: "true"
            - name: FLAGR_PROMETHEUS_INCLUDE_LATENCY_HISTOGRAM
              value: "true"
          livenessProbe:
            httpGet:
              path: /api/v1/health
              port: 80
            initialDelaySeconds: 5
            timeoutSeconds: 10
            periodSeconds: 10
            failureThreshold: 5
          readinessProbe:
            exec:
              command:
                - sh
                - /probe/run_prob.sh
            initialDelaySeconds: 5
            periodSeconds: 10
            failureThreshold: 10
I'll try to run it on a different port and check if it happens again.
I've tried those options locally and I can't reproduce it either. I'm wondering if it is something related to port 80 and the host, or maybe something else already using port 80...
If you can test with another port and confirm whether that's the problem, it would be great.
@marceloboeira Running it on a different port (3000) works as expected. I guess it's related to k8s? Anyway, I'm closing the issue. Thanks for the help.
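For anyone landing here: one way to keep exposing port 80 externally while letting flagr bind an unprivileged port inside the pod is to map the port at the Service level. This is only a hedged sketch, not a confirmed fix for this issue; the Service name and selector are assumptions, and 18000 is flagr's default container port as used in the docker run example above.
apiVersion: v1
kind: Service
metadata:
  name: flagr
spec:
  selector:
    app: flagr
  ports:
    - name: http
      port: 80          # port exposed by the Service stays 80
      targetPort: 18000 # flagr binds its default, unprivileged port inside the container
The Deployment's PORT environment variable and the probe ports would need to be changed to 18000 to match.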