external-secrets/kubernetes-external-secrets

Failed to get /openapi/v2 and /swagger.json: Created Component, but require templated one.

mycrEEpy opened this issue · 2 comments

For the past one to two weeks, our kubernetes-external-secrets pods have been crashing on startup in all of our clusters with the following error:

npm info it worked if it ends with ok
npm info using npm@6.14.6
npm info using node@v12.18.4
npm info lifecycle kubernetes-external-secrets@6.0.0~prestart: kubernetes-external-secrets@6.0.0
npm info lifecycle kubernetes-external-secrets@6.0.0~start: kubernetes-external-secrets@6.0.0
> kubernetes-external-secrets@6.0.0 start /app
> ./bin/daemon.js
{"level":30,"time":1605111151141,"pid":19,"hostname":"kubernetes-external-secrets-584dbd884c-jtzlt","msg":"loading kube specs"}
Error: Failed to get /openapi/v2 and /swagger.json: Created Component, but require templated one. This is a bug. Please report: https://github.com/silasbw/fluent-openapi/issues
    at /app/node_modules/kubernetes-client/lib/swagger-client.js:58:15
    at processTicksAndRejections (internal/process/task_queues.js:97:5)
    at async main (/app/bin/daemon.js:33:3)
npm info lifecycle kubernetes-external-secrets@6.0.0~start: Failed to exec start script
npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! kubernetes-external-secrets@6.0.0 start: `./bin/daemon.js`
npm ERR! Exit status 1
npm ERR! 
npm ERR! Failed at the kubernetes-external-secrets@6.0.0 start script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
npm timing npm Completed in 19309ms
npm ERR! A complete log of this run can be found in:
npm ERR!     /home/node/.npm/_logs/2020-11-11T16_12_33_438Z-debug.log
stream closed

We are on AWS EKS v1.18 with platform versions eks.1 and eks.2.
Everything was working fine both before and after the v1.18 upgrade; now the pods have suddenly started crashing.
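In case it helps narrow this down, the document the client fails to parse can be pulled straight from the API server and inspected. This is just a diagnostic sketch, not anything from the chart itself:

# fetch the aggregated OpenAPI spec the client tries to load on startup
kubectl get --raw /openapi/v2 > openapi.json

# check whether any aggregated APIService is unhealthy and possibly polluting that spec
kubectl get apiservices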

I am having the same issue on EKS 1.15. The same image (3.0.0) runs fine on a couple of other EKS 1.15 clusters, but it started crashing on one of the clusters out of nowhere.

Found the solution to our problem: we needed to revert the kube-metrics-adapter deployment from v0.1.9 to v0.1.8.
Before doing so, all kube-metrics-adapter resources need to be deleted from the cluster.
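For reference, the rollback looked roughly like this. It is only a sketch; the namespace, resource names, and manifest path are assumptions about how kube-metrics-adapter was deployed in our clusters, so adjust them to your own setup:

# remove the existing kube-metrics-adapter resources, including its APIService
# registrations, so the aggregated OpenAPI spec is rebuilt without them
kubectl delete apiservice v1beta1.custom.metrics.k8s.io v1beta1.external.metrics.k8s.io
kubectl -n kube-system delete deployment,service,serviceaccount kube-metrics-adapter

# redeploy the v0.1.8 manifests (hypothetical local path to those manifests)
kubectl apply -f kube-metrics-adapter-v0.1.8/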