aws-sigv4-proxy sidecar running OOM on EKS EC2
jrouly opened this issue · 2 comments
jrouly commented
I'm running an aws-sigv4-proxy sidecar alongside opencost.io to proxy Prometheus metrics from an AMP workspace endpoint. I've granted the aws-sigv4-proxy container an obscene level of resources, and it continues to burn through them and run OOM as though no GC is active.
I'm running this pod on EKS EC2 workers provisioned with Karpenter.
```yaml
apiVersion: v1
kind: Pod
metadata:
  namespace: opencost
  labels:
    app: opencost
spec:
  containers:
  - env:
    - name: CLUSTER_ID
      value: <cluster id>
    - name: PROMETHEUS_SERVER_ENDPOINT
      value: http://localhost:8080
    image: gcr.io/kubecost1/opencost
    imagePullPolicy: Always
    name: opencost
    resources:
      limits:
        cpu: "1"
        memory: 1Gi
  - args:
    - --verbose
    - --name
    - aps
    - --region
    - us-east-1
    - --host
    - localhost:8080
    - --sign-host
    - https://aps-workspaces.us-east-1.amazonaws.com/workspaces/<workspace id>/
    - --upstream-url-scheme
    - http
    image: public.ecr.aws/aws-observability/aws-sigv4-proxy
    imagePullPolicy: Always
    name: aws-sigv4-proxy
    resources:
      limits:
        cpu: "3"
        memory: 5Gi
      requests:
        cpu: "3"
        memory: 5Gi
  restartPolicy: Always
  serviceAccount: opencost
```
alvinlin123 commented
Thanks for reporting the issue. Would you be able to provide a Go Heap Profile?
jrouly commented
@alvinlin123 No, I no longer have the pod deployed, since it kept running OOM as described.
Do you have instructions, or a link to documentation, for gathering a Go heap profile?