Bumping storage
Opened this issue · 3 comments
Is there a way to specify the storage available in the Helm chart? I have a few edge cases where pods are dying because they run out of disk space.
Are you thinking of local storage on the cluster nodes the pods are running on or are you thinking of mounting volumes on the pods?
For the latter, this might be useful. For the former, I suppose it depends on what disk you request when asking the cloud provider to set up the Kubernetes cluster, rather than specifying it in the Helm chart?
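For pod-local scratch space specifically, Kubernetes exposes `ephemeral-storage` requests and limits alongside CPU and memory. A minimal sketch of what that could look like in a pod spec (container name and sizes are placeholders, not from this thread):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: worker
spec:
  containers:
    - name: worker
      image: my-worker-image   # hypothetical image
      resources:
        requests:
          ephemeral-storage: "50Gi"   # scheduler only places the pod on nodes with this much free
        limits:
          ephemeral-storage: "100Gi"  # pod is evicted if it writes more than this
```

Note this only schedules against and caps the node's existing local storage; it does not make a node's disk any bigger.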
Thanks @paciorek, that's the problem that I'm having. Let's say I want to use a high-memory z1d AWS instance. The 6xlarge version comes with, supposedly, a 900GB NVMe drive. However, when I look at `df -h` on a pod, I see that the drive only has 74GB. It's not clear to me how the pod chooses its defaults.
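One likely explanation (an assumption, not confirmed in this thread): what `df -h` shows inside the pod is the node's root EBS volume, whose size is set when the node group is created, not the z1d's 900GB NVMe instance store, which is a separate, unformatted device that must be mounted explicitly. If the cluster was created with eksctl, the root volume size can be raised in the node group config; all names below (cluster, region, node group) are placeholders:

```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: my-cluster     # hypothetical cluster name
  region: us-west-2    # hypothetical region
nodeGroups:
  - name: highmem
    instanceType: z1d.6xlarge
    volumeSize: 500    # root EBS volume in GiB; this is what pods see by default
```

Alternatively, mounting the NVMe instance store on the node and pointing pods at it via a volume would expose the full 900GB, but that requires node-level setup beyond the Helm chart.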