gcsfusecsi-metrics-collector container getting OOM killed
I'm experiencing occasional OOM kills of the gcsfusecsi-metrics-collector container (part of the gcsfusecsi-node DaemonSet). This container has a somewhat low memory limit (30Mi). Is there a way to customize the memory limit of this container?
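For anyone else hitting this, a quick way to confirm that it really is the metrics collector being OOM killed is to check that container's last termination state in each node pod. A minimal sketch, assuming the DaemonSet runs in kube-system (the usual namespace for GKE-managed components) and its pods are named after gcsfusecsi-node:

```sh
# Print the last termination reason of the gcsfusecsi-metrics-collector
# container in each gcsfusecsi-node pod; "OOMKilled" confirms the limit was hit.
for pod in $(kubectl get pods -n kube-system -o name | grep gcsfusecsi-node); do
  kubectl get "$pod" -n kube-system \
    -o jsonpath='{.metadata.name}{": "}{.status.containerStatuses[?(@.name=="gcsfusecsi-metrics-collector")].lastState.terminated.reason}{"\n"}'
done
```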
Hi @pdfrod, this is interesting behavior. Can you provide the GKE cluster version? Can you provide the number of pods that you are running on each node? Can you also confirm whether this is causing issues in your workload? If so, I can provide the steps to disable metrics exporting.
Could you share the Cluster ID with me? You can get it by running `gcloud container clusters describe <cluster-name> --location <cluster-location> | grep id:`
Sure, here's the info you requested @hime.
- GKE cluster version: 1.31.1-gke.1678000
- the number of pods and nodes varies due to autoscaling. At the moment the busiest node has 18 pods (9 kube-system pods and 9 application pods). In total there are 98 pods (69 kube-system + 29 application) across 8 nodes.
- I haven't noticed any issues with my workloads, but the pods that are using GCS FUSE CSI driver only need to access the volume very rarely (maybe a couple of times a week), so I would be very unlikely to notice any problems.
- the Cluster ID is 121bfe79164042aa9d9011c96cc4c2166952fc6e990d4282b9d3be45c069f917.
I should probably mention that I don't remember seeing this problem when there were just a couple of deployments using this driver. Now that I have 12 deployments using the driver, I'm seeing OOM kills of the metrics collector every day.
If there's a way to disable the metrics collector container, that would be even better, as I'm currently not using those metrics.
Let me know if you need more info.
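For later readers: metrics exporting can be disabled per volume through the driver's volumeAttributes. The manifest below is only a sketch, not the exact steps shared in this thread; the disableMetrics attribute comes from the GKE driver documentation and should be verified against your GKE version, and the pod name and bucket name are placeholders (the Workload Identity setup the driver normally needs is omitted):

```sh
# Sketch: mount a bucket with per-volume metrics export turned off.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: gcsfuse-example            # hypothetical pod name
  annotations:
    gke-gcsfuse/volumes: "true"    # requests GCS FUSE sidecar injection
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "infinity"]
    volumeMounts:
    - name: gcs-volume
      mountPath: /data
  volumes:
  - name: gcs-volume
    csi:
      driver: gcsfuse.csi.storage.gke.io
      volumeAttributes:
        bucketName: my-bucket      # hypothetical bucket name
        disableMetrics: "true"     # disable per-volume metrics export
EOF
```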
Cool, I'll give that a try. Thanks!
Cool, thanks a lot!
Since I've disabled metrics on my cluster I haven't seen any OOM kills, so it's looking good so far.
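A simple way to keep an eye on it going forward (same pod-naming assumption as the sketch above) is to watch restart counts on the node pods:

```sh
# Restart counts per gcsfusecsi-node pod; steadily climbing counts would
# suggest the OOM kills have returned.
kubectl get pods -n kube-system \
  -o custom-columns='NAME:.metadata.name,RESTARTS:.status.containerStatuses[*].restartCount' \
  | grep -E 'NAME|gcsfusecsi-node'
```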