[Bug]: unknown media-type
zot version
v2.1.1
Describe the bug
We are using Zot Registry to host our private Docker registry, used mainly in our CI. We build images using Docker Buildx with the --cache-from and --cache-to parameters set to registry.
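Roughly, our invocation looks like this (the image and cache references below are placeholders, not our actual names):
```sh
docker buildx build \
  --cache-from=type=registry,ref=registry.example.com/myapp/build-cache \
  --cache-to=type=registry,ref=registry.example.com/myapp/build-cache,mode=max \
  -t registry.example.com/myapp:ci \
  --push .
```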
Docker Buildx pulls and pushes the build cache from/to the registry, and this breaks the UI. When I try to open an image that has cache in the registry, it doesn't open because the request fails with the same error you can see in the logs below.
Zot Registry also logs these lines in a loop:
zot | {"level":"warn","mediatype":"application/vnd.oci.image.layer.v1.tar+gzip","goroutine":667,"caller":"zotregistry.dev/zot/pkg/storage/common/common.go:520","time":"2024-10-15T19:00:19.649733557Z","message":"unknown media-type"}
zot | {"level":"warn","mediatype":"application/vnd.oci.image.layer.v1.tar+gzip","goroutine":667,"caller":"zotregistry.dev/zot/pkg/storage/common/common.go:520","time":"2024-10-15T19:00:19.649817397Z","message":"unknown media-type"}
zot | {"level":"warn","mediatype":"application/vnd.oci.image.layer.v1.tar+gzip","goroutine":667,"caller":"zotregistry.dev/zot/pkg/storage/common/common.go:520","time":"2024-10-15T19:00:19.649900038Z","message":"unknown media-type"}
zot | {"level":"warn","mediatype":"application/vnd.buildkit.cacheconfig.v0","goroutine":667,"caller":"zotregistry.dev/zot/pkg/storage/common/common.go:520","time":"2024-10-15T19:00:19.650578642Z","message":"unknown media-type"}
It seems these media types are not part of the OCI spec, but the spec also doesn't forbid them. Is there anything that can be done in Zot Registry to allow the Docker Buildx cache to work without the issues described above? Are you open to solving these issues?
Here are some discussions about Docker Buildx cache manifests:
moby/buildkit#2220
docker/buildx#173
opencontainers/distribution-spec#290
To reproduce
-
Configuration
Installed Docker with Docker Buildx.
Client tool used
Docker Buildx with --cache-from=type=registry,ref=$CI_REGISTRY_IMAGE_CACHE_TAG and --cache-to=type=registry,ref=$CI_REGISTRY_IMAGE_CACHE_TAG,mode=max
Seen error: Described above in the log output.
Expected behavior
I would expect the registry to allow these cache images, to show them in the UI, and to not repeat the log lines described above.
Screenshots
No response
Additional context
No response
I don't think this is related to the issues you mention. Docker Buildx uses OCI media types by default, and I am able to successfully push and pull images to/from the Zot registry.
From what I can tell, the issue is in the registry cache storage backend and the format it uses when pushing the cache to the registry, even though it is using OCI media types.
Here is the documentation for the registry cache storage backend from Docker Buildx:
@vojtad from what I understand from the issues you mention and the error messages, the root cause is that the index references layers, as opposed to manifests or other image indexes. This comment gives a good example for our case: docker/buildx#173 (comment)
Per the image spec:
Image indexes concerned with portability SHOULD use one of the above media types. Future versions of the spec MAY use a different mediatype (i.e. a new versioned format). An encountered mediaType that is unknown to the implementation MUST NOT generate an error.
@rchincha we need to decide what to do about these misplaced layers so that we do not generate errors.
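For anyone who wants to see this locally: as far as I understand, the raw descriptor list that buildx pushes for a cache reference can be dumped with something like the command below (the reference is a placeholder). For this cache format the returned index lists the tar+gzip layer blobs and the application/vnd.buildkit.cacheconfig.v0 blob directly under manifests, which is exactly what zot is warning about.
```sh
# Dump the raw manifest/index stored under the cache reference
# (replace the reference with the actual cache tag).
docker buildx imagetools inspect --raw registry.example.com/myapp/build-cache
```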
Thanks for the clarification. This makes sense.
I've found an option for the Docker Buildx registry cache backend that changes the cache image media type, forcing it to generate an image manifest instead of an image index. This might help work around the current issue, but I haven't had time to test it yet.
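Presumably this is the image-manifest option on the registry cache exporter; untested on our side, but the invocation would look roughly like this (references are placeholders):
```sh
# Untested: ask the registry cache exporter to emit an OCI image manifest
# instead of an image index for the pushed cache.
docker buildx build \
  --cache-to=type=registry,ref=registry.example.com/myapp/build-cache,image-manifest=true,mode=max \
  --cache-from=type=registry,ref=registry.example.com/myapp/build-cache \
  -t registry.example.com/myapp:ci --push .
```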
Actually, I looked into those warning messages. They are just warnings, not errors.
Are there other errors you are seeing in the zot log? Maybe you can share them.
We've had to stop our migration to zot. I should be able to get back to it in a week. I am not sure why, but zot actually stopped working at one point; all requests were timing out. It was spamming only these log lines (tens of them per second). I will share more information when I get back to it, and I should also have more time to debug potential issues.
It is possible storage was locked while GC was running (these messages are generated when GC walks through all blobs to check which of them are referenced by indexes/manifests), resulting in the API timing out until GC finishes.
If GC is taking too long, that is also something we should look at.
zot has built-in profiling support: https://zotregistry.dev/latest/articles/pprofiling/
So when in doubt, feel free to invoke it.
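If it helps, a CPU profile can be captured while the API is hanging. The exact endpoint is documented in the article above; assuming a default listener on localhost:8080, it would look roughly like this:
```sh
# Capture a 30-second CPU profile from a running zot instance.
# Host, port and endpoint path here are assumptions; see the pprofiling article.
go tool pprof "http://localhost:8080/v2/_zot/pprof/profile?seconds=30"
```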