jenkinsci/google-oauth-plugin

Only compatible with legacy access scopes management and not IAM roles?

Closed this issue · 3 comments

f-ld commented

I am using a GKE cluster without the legacy access scopes but using a custom service account (as documented here: https://cloud.google.com/kubernetes-engine/docs/how-to/access-scopes)

When using this plugin to create a "Google Service Account from metadata" credential, it says that I only have access to the two following scopes:

Then I'm trying to use the Google Container Registry Auth plugin (see https://wiki.jenkins.io/display/JENKINS/Google+Container+Registry+Auth+Plugin), and I think this is not going to work because the "https://www.googleapis.com/auth/devstorage.read_write" scope is missing from the above list, right?
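For anyone checking the same thing: below is a minimal sketch of a pipeline stage that asks the GCE metadata server which scopes the node's service account actually has, assuming the Jenkins agent runs on a GKE node (the endpoint and header are the standard metadata server ones; the stage name is just an example):

node {
    stage("Check OAuth scopes on this node") {
        // Lists the scopes granted to the node's service account.
        // "devstorage.read_write" (or a broader scope such as "cloud-platform")
        // must appear for pushes to gcr.io to work.
        sh 'curl -s -H "Metadata-Flavor: Google" "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/scopes"'
    }
}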

More specifically, I am trying to use it from the Docker Pipeline plugin (see https://jenkins.io/doc/book/pipeline/docker/) to push an image to gcr.io, and this is not working (while I bet it should, since people created a PR to document exactly that, see jenkinsci/google-container-registry-auth-plugin#2) because it tells me that the credentials are not available. I bet this is all about the missing scopes above (i.e. the scope is missing, so the GCR Auth plugin cannot do its work and make credentials available to the Docker Pipeline plugin), right?

If all the above is right, is there any plan to make it work with IAM roles?

f-ld commented

Note: one alternative, if you can deal with a JSON key file (which I wanted to avoid because I do not want people to manipulate credential files), is to create a "Google Service Account from private key" credential with that private key file (JSON or P12; I used JSON), naming it after your GCP project (let's say for example "fantastic-foobar-123").

And if you get that key file from a service account having the "Storage Admin" role (note: do not confuse it with the "Storage Object Admin" role, which is not sufficient because it lacks the "storage.buckets.get" permission), then you are all good with this kind of code in your pipeline:

node {

    def scmVars = checkout scm
    def version = scmVars.GIT_COMMIT
    def tag = "gcr.io/<yourGCPproject>/<yourservice>"
    def app

    stage("Building Docker image ${tag}") {
      // Build the image first so that "app" is defined for the push stage below
      app = docker.build("${tag}")
    }

    stage("Pushing Docker image ${tag}") {
      echo "Pushing image to ${tag}:${version}"
      // The credential is the "Google Service Account from private key"
      // credential created above, prefixed with "gcr:" (see the note below)
      docker.withRegistry('https://gcr.io', "gcr:<yourGCPproject>") {
        app.push("${version}")
        app.push("latest")
      }
    }
}

Note: the credential used in docker.withRegistry() has to be prefixed with "gcr:".
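To make that mapping concrete, a minimal sketch assuming the credential created above was named "fantastic-foobar-123" as in the example:

    // The credential id "fantastic-foobar-123" is referenced as "gcr:fantastic-foobar-123"
    docker.withRegistry('https://gcr.io', 'gcr:fantastic-foobar-123') {
      app.push("${version}")
    }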

Hope this will help people lost in the documentation.

Closing this, as the GCR Auth plugin has not been actively maintained since 2015.
See our Jenkins Integration samples' usage of Kaniko here for an alternative: https://github.com/GoogleCloudPlatform/jenkins-integration-samples/tree/master/gke
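For reference, a minimal sketch of that alternative, assuming the Jenkins Kubernetes plugin is installed and the pod's service account (for example via Workload Identity or node scopes) is allowed to push to GCR; the image tag, stage name, and sleep duration here are only examples:

podTemplate(containers: [
    containerTemplate(name: 'kaniko',
                      image: 'gcr.io/kaniko-project/executor:debug',
                      command: 'sleep',
                      args: '9999999')
]) {
    node(POD_LABEL) {
        stage('Build and push with Kaniko') {
            checkout scm
            container('kaniko') {
                // Kaniko builds from the Dockerfile in the workspace and pushes
                // straight to the registry given in --destination, with no Docker
                // daemon and no docker.withRegistry() credential needed
                sh '/kaniko/executor --context `pwd` --destination gcr.io/<yourGCPproject>/<yourservice>:latest'
            }
        }
    }
}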

No plans currently to change the use of access scopes.