openshift-pipelines/pipelines-as-code

GKE: pipelines-as-code immediately maxes out Persistent Disk SSD (GB) quota on Compute Engine API


I apply the following Kubernetes manifests on a default GKE cluster:

# Install Tekton Pipelines
kubectl apply --filename https://storage.googleapis.com/tekton-releases/pipeline/latest/release.yaml

# Install the Tekton Operator (manages Tekton configuration)
kubectl apply -f https://storage.googleapis.com/tekton-releases/operator/latest/release.yaml

# Install the Tekton Dashboard
kubectl apply --filename https://storage.googleapis.com/tekton-releases/dashboard/latest/release.yaml

# View the Tekton Dashboard
kubectl proxy

# Install Pipelines-as-Code on Kubernetes
kubectl apply -f https://raw.githubusercontent.com/openshift-pipelines/pipelines-as-code/stable/release.k8s.yaml
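To confirm the installation itself comes up cleanly, the pods can be listed first (a quick sketch; the namespace names assume the default release manifests above):

# Tekton Pipelines and Dashboard components
kubectl get pods --namespace tekton-pipelines

# Pipelines-as-Code components
kubectl get pods --namespace pipelines-as-code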

I also create the following Ingress:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  labels:
    pipelines-as-code/route: controller
  name: pipelines-as-code
  namespace: pipelines-as-code
  annotations:
    kubernetes.io/ingress.class: gce
spec:
  defaultBackend:
    service:
      name: pipelines-as-code-controller
      port:
        number: 8080
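To check that the GCE ingress controller has accepted this Ingress and assigned it an external address, something like the following can be used (a sketch; the resource names come from the manifest above):

kubectl get ingress pipelines-as-code --namespace pipelines-as-code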

My cluster then immediately hits the following quota:

[screenshot: Compute Engine API quota "Persistent Disk SSD (GB)" at its limit]

Why is this? What in Tekton is using so much Persistent Disk SSD storage? Nothing else is running on this project.
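One way to see what is actually consuming the quota is to list the Compute Engine disks in the project and any PersistentVolumeClaims inside the cluster (a diagnostic sketch only; no particular output is implied):

# List all disks provisioned in the project; check the TYPE and SIZE_GB columns for pd-ssd entries
gcloud compute disks list

# List volume claims and volumes created inside the cluster
kubectl get pvc --all-namespaces
kubectl get pv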

I am not sure what the issue is with GKE, but I am running this same setup under kind (with more services) and definitely don't reach that same size:

[screenshot: the same setup running under kind]

Maybe try contacting GKE support?

Can you try without pipelines-as-code? Does it take much less?
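For reference, removing just the pipelines-as-code components should only need the same manifest it was installed from (a sketch):

kubectl delete -f https://raw.githubusercontent.com/openshift-pipelines/pipelines-as-code/stable/release.k8s.yaml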

The pipelines-as-code images should not take up much space either:

[screenshot: pipelines-as-code image sizes]