pulumi/pulumi-yaml

Support calling resource methods

AaronFriel opened this issue · 4 comments

It is almost possible to call resource methods manually, as in this program, which calls the "getKubeconfig" method on a cluster:

variables:
  kubeconfig:
    Fn::Invoke:
      Function: google-native:container/v1:Cluster/getKubeconfig
      Arguments:
        __self__: ${someClusterResource}

However, this generates an Invoke RPC rather than a Call RPC. The syntax also leaves something to be desired: the __self__ parameter is an implementation detail that other Pulumi languages do not expose to users.
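
For comparison, here is roughly what the same call looks like from a typed language SDK. This is only a sketch: the module path and the result shape of getKubeconfig below are assumptions about the generated @pulumi/google-native package, but the point is that the receiver is passed implicitly and __self__ never appears in user code.

import * as google from "@pulumi/google-native";

const cluster = new google.container.v1.Cluster("cluster", {
    location: "us-west2",
    initialNodeCount: 1,
});

// A resource method is an ordinary instance method on the resource class; the SDK
// fills in the Call RPC's __self__ argument with the receiver automatically.
export const kubeconfig = cluster.getKubeconfig();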

The simplest "raw" option might be to expose the Call concept directly, much as we do for Invoke today:

variables:
  kubeconfig:
    Fn::Call:
      Function: google-native:container/v1:Cluster/getKubeconfig
      Self: ${someClusterResource}
      Arguments:
        …

Are there other more “sugar” options worth considering?

Just adding a note that we would also need to support this in docs gen: pulumi/pulumi-google-native#709

Maybe something like the following, where you only have to specify the resource instance and the method name?

variables:
  kubeconfig:
    Fn::Method:
      Resource: ${someClusterResource}
      Method: getKubeconfig
      Arguments:
        ...

Or, could we get fancy and grab the method out of an expression?

variables:
  kubeconfig:
    Fn::Method:
      Method: ${someClusterResource.getKubeconfig}
      Arguments:
        ...

For those who stumble across this, particularly for GKE, you can work around it by building a kubeconfig variable yourself and referencing that:

name: gke-yaml-cluster
runtime: yaml
description: A GKE cluster
resources:
  cluster:
    type: google-native:container/v1beta1:Cluster
    properties: 
      clusterTelemetry:
        type: ENABLED
      defaultMaxPodsConstraint:
        maxPodsPerNode: 100
      initialNodeCount: 1
      ipAllocationPolicy:
        clusterIpv4CidrBlock: /14
        servicesIpv4CidrBlock: /20
        useRoutes: false
      location: us-west2
      resourceLabels:
        env: lbriggs
  provider:
    type: pulumi:providers:kubernetes
    properties:
      kubeconfig: ${kubeconfig}
  nginx-ingress:
    type: kubernetes:helm.sh/v3:Release
    properties: # The arguments to resource properties.
      chart: "ingress-nginx"
      repositoryOpts:
        repo: https://kubernetes.github.io/ingress-nginx
      cleanupOnFail: true
      createNamespace: true
      description: "Main load balancer"
      lint: true
      name: "ingress-nginx"
      namespace: "ingress-nginx"
      version: "4.7.1"
      values:
        ingressClass: "internet"
    options:
      provider: ${provider}
variables:
  kubeconfig:
    fn::toJSON:
      apiVersion: v1
      clusters:
        - cluster:
            certificate-authority-data: ${cluster.masterAuth.clusterCaCertificate}
            server: https://${cluster.endpoint}
          name: ${cluster.name}
      contexts:
        - context:
            cluster: ${cluster.name}
            user: ${cluster.name}
          name: ${cluster.name}
      current-context: ${cluster.name}
      kind: Config
      users:
        - name: ${cluster.name}
          user:
            exec:
              apiVersion: client.authentication.k8s.io/v1beta1
              command: gke-gcloud-auth-plugin
              installHint: Install gke-gcloud-auth-plugin for use with kubectl by following https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke
              provideClusterInfo: true
outputs:
  kubeconfig: ${kubeconfig}
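
With that in place, the exported kubeconfig can also be retrieved with pulumi stack output kubeconfig and pointed at by kubectl, in addition to feeding the in-program Kubernetes provider as shown above.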