[BUG] K8s OPNSENSE_EXPORTER_DISABLE_WIREGUARD=true is not working
janisii opened this issue · 7 comments
{"caller":"collector.go:192","collector_name":"wireguard","component":"collector","err":"opnsense-client api call error: endpoint: api/wireguard/service/show; failed status code: 400; msg: {\"message\":\"controller OPNsense\\\\Core\\\\Api\\\\IndexController not found\",\"status\":400}","level":"error","msg":"failed to update","ts":"2024-04-04T14:51:40.675Z"}
Not sure if this is WireGuard-related. I tried to turn the collector off in K8s with OPNSENSE_EXPORTER_DISABLE_WIREGUARD=true, but it still fails here and the pod restarts.
Version
- OPNsense router: 23.7.12
- OPNsense exporter version: 0.0.4 (ghcr.io/athennamind/opnsense-exporter:latest)
This should be something on your side. Are you sure you pulled ghcr.io/athennamind/opnsense-exporter:latest and the image is not cached on your k8s node?
You should see these logs at exporter start:
{
  "caller":"main.go:34",
  "level":"info",
  "msg":"starting opnsense-exporter",
  "ts":"2024-04-06T22:36:24.107Z",
  "version":"v0.0.4"
},
{
  "caller":"main.go:83",
  "level":"info",
  "msg":"wireguard collector disabled",
  "ts":"2024-04-06T22:36:24.116Z"
}
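To rule out a stale image cached on the node, one option is to pin an explicit release tag and force a pull on every start. A minimal sketch of the relevant container fields (the container name here is illustrative):

containers:
  - name: opnsense-exporter                              # illustrative name
    image: ghcr.io/athennamind/opnsense-exporter:0.0.4   # pin a release tag instead of :latest
    imagePullPolicy: Always                              # re-pull the image on every pod start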
This is my docker compose:
version: '3'
services:
  opnsense-exporter:
    image: ghcr.io/athennamind/opnsense-exporter:0.0.4
    container_name: opensense-exporter
    restart: always
    command:
      - --opnsense.protocol=https
      - --opnsense.address=ops.local.athennamind
      - --exporter.instance-label=instance1
      - --web.listen-address=:8080
      - --log.format=json
    environment:
      OPNSENSE_EXPORTER_OPS_API_KEY: "xxxx"
      OPNSENSE_EXPORTER_OPS_API_SECRET: "xxxxx"
      OPNSENSE_EXPORTER_DISABLE_WIREGUARD: "true"
    ports:
      - "8080:8080"
Hello, I don't get this line in my logs. The same issue with tag 0.0.4. Here is my deployment config:
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: gw-opnsense-exporter
  namespace: monitoring
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: gw-opnsense-exporter
  template:
    metadata:
      labels:
        app.kubernetes.io/name: gw-opnsense-exporter
    spec:
      containers:
        - name: gw-opnsense-exporter
          image: ghcr.io/athennamind/opnsense-exporter:0.0.4
          imagePullPolicy: Always
          volumeMounts:
            - name: api-key-vol
              mountPath: /etc/opnsense-exporter/creds
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              drop:
                - ALL
            readOnlyRootFilesystem: true
            runAsNonRoot: true
            runAsUser: 65534
          ports:
            - name: metrics-http
              containerPort: 8080
          livenessProbe:
            httpGet:
              path: /metrics
              port: metrics-http
          readinessProbe:
            httpGet:
              path: /metrics
              port: metrics-http
          args:
            - "--log.level=info"
            - "--log.format=json"
          env:
            - name: OPNSENSE_EXPORTER_INSTANCE_LABEL
              value: "gw-opnsense-exporter"
            - name: OPNSENSE_EXPORTER_OPS_API
              valueFrom:
                secretKeyRef:
                  name: gw-opnsense-exporter-cfg
                  key: host
            - name: OPNSENSE_EXPORTER_OPS_PROTOCOL
              valueFrom:
                secretKeyRef:
                  name: gw-opnsense-exporter-cfg
                  key: protocol
            - name: OPS_API_KEY_FILE
              value: /etc/opnsense-exporter/creds/api-key
            - name: OPS_API_SECRET_FILE
              value: /etc/opnsense-exporter/creds/api-secret
            - name: OPNSENSE_EXPORTER_OPS_INSECURE
              value: "true"
            - name: OPNSENSE_EXPORTER_DISABLE_UNBOUND
              value: "true"
            - name: OPNSENSE_EXPORTER_DISABLE_WIREGUARD
              value: "true"
            - name: OPNSENSE_EXPORTER_DISABLE_CRON_TABLE
              value: "true"
          resources:
            requests:
              memory: 64Mi
              cpu: 100m
            limits:
              memory: 128Mi
              cpu: 500m
      volumes:
        - name: api-key-vol
          secret:
            secretName: gw-opnsense-exporter-cfg
            items:
              - key: key
                path: api-key
              - key: secret
                path: api-secret
Logs from pod start:
2024-04-07 13:01:14.230 {"caller":"main.go:34","level":"info","msg":"starting opnsense-exporter","ts":"2024-04-07T10:01:14.229Z","version":"v0.0.4"}
2024-04-07 13:01:14.272 {"caller":"main.go:80","level":"info","msg":"unbound collector disabled","ts":"2024-04-07T10:01:14.272Z"}
2024-04-07 13:01:14.276 {"address":"[::]:8080","caller":"tls_config.go:313","level":"info","msg":"Listening on","ts":"2024-04-07T10:01:14.275Z"}
2024-04-07 13:01:14.277 {"address":"[::]:8080","caller":"tls_config.go:316","http2":false,"level":"info","msg":"TLS is disabled.","ts":"2024-04-07T10:01:14.276Z"}
2024-04-07 13:01:16.447 {"caller":"collector.go:192","collector_name":"wireguard","component":"collector","err":"opnsense-client api call error: endpoint: api/wireguard/service/show; failed status code: 400; msg: {\"message\":\"controller OPNsense\\\\Core\\\\Api\\\\IndexController not found\",\"status\":400}","level":"error","msg":"failed to update","ts":"2024-04-07T10:01:16.447Z"}
2024-04-07 13:01:16.537 {"caller":"utils.go:47","level":"warn","msg":"parsing rtt: '~' to float64 failed. Pattern matching failed.","ts":"2024-04-07T10:01:16.529Z"}
2024-04-07 13:01:16.537 {"caller":"utils.go:47","level":"warn","msg":"parsing rttd: '~' to float64 failed. Pattern matching failed.","ts":"2024-04-07T10:01:16.529Z"}
2024-04-07 13:01:16.537 {"caller":"utils.go:47","level":"warn","msg":"parsing loss: '~' to float64 failed. Pattern matching failed.","ts":"2024-04-07T10:01:16.529Z"}
2024-04-07 13:01:16.537 {"caller":"utils.go:47","level":"warn","msg":"parsing rtt: '~' to float64 failed. Pattern matching failed.","ts":"2024-04-07T10:01:16.529Z"}
2024-04-07 13:01:16.537 {"caller":"utils.go:47","level":"warn","msg":"parsing rttd: '~' to float64 failed. Pattern matching failed.","ts":"2024-04-07T10:01:16.529Z"}
2024-04-07 13:01:16.537 {"caller":"utils.go:47","level":"warn","msg":"parsing loss: '~' to float64 failed. Pattern matching failed.","ts":"2024-04-07T10:01:16.529Z"}
As you can see, only "unbound collector disabled" is logged. Looking at main.go, there should also be messages for the other disabled collectors, because I have disabled the cron table and WireGuard as well.
Could this be K8s-related only?
For now I will just use a 5-second timeout on the liveness and readiness probes (as a poor workaround).
livenessProbe:
  httpGet:
    path: /metrics
    port: metrics-http
  timeoutSeconds: 5
readinessProbe:
  httpGet:
    path: /metrics
    port: metrics-http
  timeoutSeconds: 5
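If restarts keep happening while a collector is erroring, the standard probe fields allow a bit more slack than just the timeout. A minimal sketch with illustrative timings (plain Kubernetes probe fields, not settings recommended by the exporter project):

livenessProbe:
  httpGet:
    path: /metrics
    port: metrics-http
  initialDelaySeconds: 10   # let the first scrape of the OPNsense API finish
  periodSeconds: 30         # probe less aggressively than Prometheus scrapes
  timeoutSeconds: 5         # /metrics can be slow while an API call is failing
  failureThreshold: 3       # require several misses before a restart
readinessProbe:
  httpGet:
    path: /metrics
    port: metrics-http
  timeoutSeconds: 5
  failureThreshold: 3

The readiness probe only removes the pod from Service endpoints, so failing it does not trigger restarts the way the liveness probe does.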
Sorry for pointing the finger at your setup. It's not related to k8s or your environment. I found the problem: it only appears when more than one disable flag is passed.
Will be fixed in the next release.
Thank you for your report.
Stay healthy
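In the meantime, a possible mitigation (an untested sketch based on the statement above) is to pass only the single disable flag you actually need, e.g. keep just the WireGuard one in the env block:

env:
  - name: OPNSENSE_EXPORTER_DISABLE_WIREGUARD   # keep a single disable flag until the fix is released
    value: "true"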
Any progress on the issue?
This shouldn't be a problem anymore since the v0.0.5 release
Thanks, it works now!