Frequent calls to docker-credential-desktop and osascript when doing a copy
evankanderson opened this issue · 8 comments
What steps did you take:
I was attempting to copy the image from these instructions: https://docs.vmware.com/en/VMware-Tanzu-Application-Platform/1.4/tap/install-aws.html
The copy was from a password-protected repository at registry.tanzu.vmware.com
to a non-password-protected docker container registry.
What happened:
This is 214 images and approx. 7 GB of data. I noticed that it took a long time (more than 2 minutes) to collect all the information for the copy, and while that was happening, the terminal title was flashing between `osascript`, `sw_version`, and a few other commands. A `ps` revealed the following:

```
28423 ttys011 0:01.87 imgpkg copy --concurrency=1 -b registry.tanzu.vmware.com/tanzu-application-platform/tap-packages:1.4.0 --to-repo registr
31366 ttys011 0:00.02 docker-credential-desktop get
31370 ttys011 0:00.09 osascript -e user locale of (get system info)
```
As a rough estimate from the PID delta, that is approx. (31370 - 28423) / 4 > 700 invocations of `docker-credential-desktop get`, which seems excessive. This continues: after 5 minutes, there seem to have been about 1200 invocations (approx. 2/second). Nothing else seems to be spawning processes on my machine (it is idle except for typing this bug report into Chrome). Overall, it took 8 minutes to collect the information for these 214 images, and then approx. 1 minute to verify that all the images had been copied to the destination registry (I'd previously copied most images, so this should have been a metadata-only comparison).
What did you expect:
One invocation of `docker-credential-desktop get` per registry, or perhaps one per minute.
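The expected behavior amounts to memoizing credential lookups per registry host. A minimal self-contained sketch of that idea is below; the three interfaces are simplified stand-ins for go-containerregistry's `authn.Resource`, `authn.Authenticator`, and `authn.Keychain`, and all of the concrete type names here are hypothetical, not imgpkg's actual types:

```go
package main

import (
	"fmt"
	"sync"
)

// Simplified stand-ins for go-containerregistry's authn interfaces.
type Resource interface{ RegistryStr() string }

type Authenticator interface{ Authorization() (string, error) }

type Keychain interface {
	Resolve(Resource) (Authenticator, error)
}

// staticAuth is a trivial Authenticator for the demo.
type staticAuth struct{ token string }

func (a staticAuth) Authorization() (string, error) { return a.token, nil }

// countingKeychain simulates an expensive keychain (e.g. one that
// shells out to docker-credential-desktop) and counts invocations.
type countingKeychain struct{ calls int }

func (k *countingKeychain) Resolve(r Resource) (Authenticator, error) {
	k.calls++ // each call here would spawn a credential-helper process
	return staticAuth{token: "Basic <redacted>"}, nil
}

// cachingKeychain memoizes Resolve results per registry host, so the
// underlying helper runs at most once per registry rather than once
// per image reference.
type cachingKeychain struct {
	inner Keychain
	mu    sync.Mutex
	cache map[string]Authenticator
}

func (k *cachingKeychain) Resolve(r Resource) (Authenticator, error) {
	k.mu.Lock()
	defer k.mu.Unlock()
	if auth, ok := k.cache[r.RegistryStr()]; ok {
		return auth, nil
	}
	auth, err := k.inner.Resolve(r)
	if err != nil {
		return nil, err
	}
	k.cache[r.RegistryStr()] = auth
	return auth, nil
}

// registry is a trivial Resource for the demo.
type registry string

func (r registry) RegistryStr() string { return string(r) }

func main() {
	inner := &countingKeychain{}
	kc := &cachingKeychain{inner: inner, cache: map[string]Authenticator{}}
	// Resolving 214 refs against the same registry hits the helper once.
	for i := 0; i < 214; i++ {
		kc.Resolve(registry("registry.tanzu.vmware.com"))
	}
	fmt.Println("helper invocations:", inner.calls) // prints 1, not 214
}
```

With this pattern, the 700+ helper invocations observed above would collapse to one per distinct registry in the copy.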
Anything else you would like to add:
This is running 0.35.0; looking through previous bugs, #334 looks related but clearly hasn't helped in this case. Running with `--debug` doesn't show a lot of requests to the `/v2` endpoint for authentication as described in #290.
Environment:
- imgpkg version (use `imgpkg --version`): imgpkg version v0.35.0
- Docker registry used (e.g. Docker HUB): self-install of the Docker `registry` image from https://github.com/evankanderson/k8s-private-local-registry
- OS (e.g. from `/etc/os-release`): Darwin Kernel Version 22.2.0: Fri Nov 11 02:08:47 PST 2022; root:xnu-8792.61.2~4/RELEASE_X86_64
(This is also adding 4+ minutes after the copy completes. I'm doing a repo-to-repo copy.)
The credential helper seems to be invoked about once per call to https://github.com/carvel-dev/imgpkg/blob/develop/pkg/imgpkg/registry/auth/custom_keychain.go#L62; still digging into why.
It appears that we're constructing a new `RoundTripper` every time, here: https://github.com/carvel-dev/imgpkg/blob/develop/pkg/imgpkg/registry/registry.go#L265
I'm guessing it's because the `ref.Context` object is different each time, and we should be using a string version of the `Registry` value.
Never mind, we are finding the correct `RoundTripper`, but it's not caching the credentials at all.
I'm currently looking at https://github.com/carvel-dev/imgpkg/blob/develop/pkg/imgpkg/registry/registry.go#L244, and wondering whether we should be caching the set of `[]regremote.Option`.
(I have to put this down for the evening, though.)
Looking in go-containerregistry, it looks like supplying a keychain (as we do) causes `o.keychain.Resolve(target)` to be called to get an `authn.Authenticator` each time. I'm going to see if we can cache this.
Ref:
https://github.com/google/go-containerregistry/tree/main/pkg/v1/remote/options.go#L140
It looks like augmenting `SimpleRegistry.transport` to cache authenticators solves this; I will try to put together a PR shortly.
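The fix direction described above could look something like the following sketch: an authenticator cache that a transport layer consults before hitting the keychain. This is an illustrative, self-contained simplification, not imgpkg's actual code; `authCache`, `basicAuth`, and the `resolve` callback are all hypothetical names:

```go
package main

import (
	"fmt"
	"sync"
)

// Authenticator mirrors the shape of authn.Authenticator.
type Authenticator interface{ Authorization() (string, error) }

// basicAuth is a trivial Authenticator for the demo.
type basicAuth struct{ user, pass string }

func (b basicAuth) Authorization() (string, error) {
	return b.user + ":" + b.pass, nil
}

// authCache memoizes resolved authenticators per registry host.
// The resolve callback is the slow path (in the real code, a
// keychain lookup that may shell out to docker-credential-desktop).
type authCache struct {
	resolve func(registry string) (Authenticator, error)
	entries sync.Map // registry string -> Authenticator
}

func (c *authCache) Get(registry string) (Authenticator, error) {
	if v, ok := c.entries.Load(registry); ok {
		return v.(Authenticator), nil
	}
	auth, err := c.resolve(registry)
	if err != nil {
		return nil, err
	}
	// LoadOrStore keeps the first stored value if two goroutines race.
	v, _ := c.entries.LoadOrStore(registry, auth)
	return v.(Authenticator), nil
}

func main() {
	resolves := 0
	cache := &authCache{resolve: func(reg string) (Authenticator, error) {
		resolves++ // stands in for a docker-credential-desktop invocation
		return basicAuth{user: "user", pass: "pass"}, nil
	}}
	for i := 0; i < 5; i++ {
		cache.Get("registry.tanzu.vmware.com")
	}
	fmt.Println("slow-path resolves:", resolves) // prints 1
}
```

One caveat for a real implementation: credentials from a helper can expire, so a production cache would likely want a TTL or re-resolve-on-401 path rather than caching forever.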