0xPolygonID/issuer-node

K8s deployment issue for API UI after restarts

Closed this issue · 5 comments

I have set up a K8s deployment (2.2.0 release) and it's fine after the initial setup, but after restarting the pods it returns `level=ERROR msg="issuer DID must exist"` even though the DID is detected by the issuer. This happens only for issuer-node-api-ui-deployment.yaml; the rest of the pods work fine.


Is this supposed to happen?
A current workaround is to exec into the pod and rename or delete the did.txt file from the volume.
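For reference, the workaround above could be scripted with kubectl along these lines. This is only a sketch: the label selector, deployment name, and the path to did.txt are assumptions and will likely differ in your cluster.

```shell
# Hypothetical names and paths -- adjust to your cluster.
# Find the pod created by issuer-node-api-ui-deployment.yaml:
kubectl get pods -l app=issuer-node-api-ui

# Rename the cached DID file inside the pod (path is an assumption):
kubectl exec deploy/issuer-node-api-ui-deployment -- \
  mv /data/did.txt /data/did.txt.bak

# Restart the deployment so the pod re-resolves the DID:
kubectl rollout restart deployment issuer-node-api-ui-deployment
```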

Hi @markoftw, I will take a look, because this is not happening in my local cluster. Your workaround is fine for now.
Did you recreate the database when you restarted the pods? I mean, the issue is that the issuer node is reading a DID from a file that is not in the database...

Hi @martinsaporiti, I have deployed it on AWS EKS - then removed the deployments (which deleted the pods) and re-created the deployments, the other pods that look for the init-did-check started fine, only api-ui-issuer-node-deployment seems to have the issue. The database was not re-created in the process.

The problem is that the pod is loading a DID from a file that is not in the database, so the best way to solve that for now is mounting a new volume in issuer-node-pv.yaml (line 14). //cc: @martinsaporiti
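A minimal sketch of what a fresh volume claim for the API UI pod might look like. The resource names, size, and access mode below are assumptions for illustration, not the actual contents of issuer-node-pv.yaml:

```yaml
# Hypothetical PersistentVolumeClaim backing the DID cache of the
# API UI pod. Names and sizes are illustrative; adapt to your setup.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: issuer-node-api-ui-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```

Mounting a newly created volume (instead of reusing the old one) ensures the pod starts without a stale did.txt left over from a previous database.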


Well, that is because only the API UI pod checks the DID value against the database. I will try to reproduce it.

Closed due to inactivity. Feel free to reopen the issue or create a new one. Thanks.