Minio as storage for docker registry
krishnasrinivas opened this issue · 25 comments
Bradley Weston @bweston92
https://github.com/bweston92/minio-docker-repository
Just run ./test.sh and follow the logs on the Docker registry server; you'll see the error.
When I run the following I get the correct files in Minio storage, however I get "Retrying in 5 seconds" as if something went wrong.
Docker registry logs: https://gist.github.com/bweston92/03361ac1d7b79eb9fa9b3db5ed185356
docker-registry-config.yml

version: 0.1
log:
  level: debug
  formatter: text
  fields:
    service: registry
    environment: staging
loglevel: debug
storage:
  s3:
    accesskey: USWUXHGYZQYFYFFIT3RE
    secretkey: MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03
    region: us-east-1
    regionendpoint: http://minio:9000
    bucket: docker
    encrypt: false
    keyid: mykeyid
    secure: true
    v4auth: true
    chunksize: 5242880
    rootdirectory: /i
  delete:
    enabled: true
  maintenance:
    uploadpurging:
      enabled: true
      age: 168h
      interval: 24h
      dryrun: false
    readonly:
      enabled: false
http:
  addr: :5000
test.sh
#!/usr/bin/env bash
set -e
docker rm -f minio || true
docker rm -f test_registry || true
DOCKER_OPTS="$DOCKER_OPTS --insecure-registry 127.0.0.1:5000"
docker pull minio/minio:latest
docker pull registry:2
docker pull hello-world
docker run -d --name minio \
  -p 9000:9000 \
  -e "MINIO_ACCESS_KEY=USWUXHGYZQYFYFFIT3RE" \
  -e "MINIO_SECRET_KEY=MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03" \
  -v $(pwd)/export:/export \
  -v $(pwd)/config:/root/.minio \
  minio/minio:latest /export
docker run -d --name=test_registry \
  --link minio \
  -p 5000:5000 \
  -v $(pwd)/docker-registry-config.yml:/etc/docker/registry/config.yml \
  registry:2
mc config host add testing http://127.0.0.1:9000 USWUXHGYZQYFYFFIT3RE MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03 S3v4
mc --debug mb testing/docker
echo "Please open a new terminal and enter: "
echo "$ docker logs -f minio"
echo "$ docker logs -f test_registry"
read
docker tag hello-world 127.0.0.1:5000/hello-world
docker push 127.0.0.1:5000/hello-world
@bweston92 I tried with Minio as a process instead of a Docker container, and it worked fine. Checking with the Minio Docker image.
@bweston92 this change made it work:
regionendpoint: http://169.254.6.100:9000
# regionendpoint: http://minio:9000
169.254.6.100 is the IP address of docker0. So 'minio' was not reachable from the registry container. I used --name minio for the minio container.
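For anyone reproducing this, a quick way to find the address to substitute (a sketch; 'minio' is the container name used in the script above):

# IP of the minio container on the default bridge network
docker inspect -f '{{ .NetworkSettings.IPAddress }}' minio
# or the docker0 bridge address on the host
ip addr show docker0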
So no bug on the Minio end, @krishnasrinivas?
@harshavardhana no.
@bweston92 can you confirm that it works for you? If it does, will you have time to contribute a document on how to use minio as storage for docker registry?
Here's the link to our cookbook repo: https://github.com/minio/cookbook
I set up a link, so http://minio should have resolved to that IP address. I'll give it another go in the morning. It also doesn't explain why it created the objects in Minio (so it clearly accessed it) and just failed to push all the data (it kept retrying).
If the server works with the IP and not with the hostname, then perhaps the hostname configuration has an issue. @bweston92 - is this hostname config automated by Docker?
on gitter:
Oh now it just worked
bweston92 tried without minio inside docker (and as just a process) and it worked.
@bweston92 did you get it working with minio container?
Not yet. I think this is something to do with the registry, as Minio gets all the files even though it retries.
export KUBERNETES_PROVIDER=libvirt-coreos && export NUM_NODES=4
./cluster/kube-up.sh
# wait for etcd to settle
helmc install workflow-v2.5.0
# wait for kubernetes cluster to all be ready
deis pull 192.168.57.10:5000/aergo-server:1.0.0-SNAPSHOT -a aergo-server
Hostname-based access is a real problem even when an /etc/hosts entry is added. DNS resolution fails for some reason.
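A quick way to check name resolution from inside the registry container (a sketch; it assumes the registry:2 image ships busybox ping and cat, which the Alpine-based image does):

docker exec test_registry ping -c 1 minio
# and see what Docker wrote into the container's hosts file
docker exec test_registry cat /etc/hosts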
Digest: sha256:0256e8a36e2070f7bf2d0b0763dbabdd67798512411de4cdcf9431a1feb60fd9
Status: Image is up to date for hello-world:latest
9120d11ad97fa42fe272b318c7b9ec6f2f81c67ac36b870755d194163e02ec10
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
172.17.0.2 minio 9120d11ad97f
172.17.0.3 9e35bb00b520
e46973f510ef066e3edd61b01eb6cb7e52c07839149b6e3b457ee1174cf5de52
Added ‘testing’ successfully.
Please open a new terminal and enter:
$ docker logs -f minio
$ docker logs -f test_registry
The push refers to a repository [127.0.0.1:5000/hello-world-99219968-64a3-499d-8115-3d7152f4e26e]
a02596fdd012: Retrying in 1 second
At the end of retrying I see:
The push refers to a repository [127.0.0.1:5000/hello-world-dda35fd9-02d5-45bb-81fd-1e1202cefd63]
a02596fdd012: Pushing [==================================================>] 3.584 kB
dial tcp: lookup minio: no such host
This looks like a registry bug, and not the one reported by @rbellamy, which seems to be network-related and specific to the Deis infrastructure.
Some setup details
$ docker version
Client:
Version: 1.11.2
API version: 1.23
Go version: go1.6.2
Git commit: b9f10c9
Built: Thu, 16 Jun 2016 21:17:51 +1200
OS/Arch: linux/amd64
#!/usr/bin/env bash
set -e
docker rm -f minio || true
docker rm -f test_registry || true
DOCKER_OPTS="$DOCKER_OPTS --insecure-registry 127.0.0.1:5000"
docker pull minio/minio
docker pull registry:2
docker pull hello-world
docker run -d --name minio \
  -p 9000:9000 \
  -e "MINIO_ACCESS_KEY=USWUXHGYZQYFYFFIT3RE" \
  -e "MINIO_SECRET_KEY=MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03" \
  -v $(pwd)/export:/export \
  -v $(pwd)/config:/root/.minio \
  minio/minio /export
docker run -d --name=test_registry \
  --link minio \
  -p 5000:5000 \
  -v $(pwd)/docker-registry-config.yml:/etc/docker/registry/config.yml \
  registry:2
/home/harsha/mygo/bin/mc config host add testing http://127.0.0.1:9000 USWUXHGYZQYFYFFIT3RE MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03
echo "Please open a new terminal and enter: "
echo "$ docker logs -f minio"
echo "$ docker logs -f test_registry"
read
uuid=`uuidgen`
docker tag hello-world 127.0.0.1:5000/hello-world-${uuid}
docker push 127.0.0.1:5000/hello-world-${uuid}
Going through the entire codebase and checking error by error - all replies are pretty much the same in both cases. This issue seems to be happening higher up the stack, inside the registry.
Going through once more.
Okay, so now I ran the registry outside of a container and made the relevant /etc/hosts entries as well, and that seems to work all the time.
version: 0.1
log:
  # level: debug
  formatter: text
  fields:
    service: registry
    environment: staging
#loglevel: debug
storage:
  s3:
    accesskey: USWUXHGYZQYFYFFIT3RE
    secretkey: MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03
    region: us-east-1
    regionendpoint: http://minio
    bucket: docker-minio-test
    encrypt: false
    secure: false
    v4auth: true
    chunksize: 5242880
    rootdirectory: /i
  delete:
    enabled: true
  maintenance:
    uploadpurging:
      enabled: true
      age: 168h
      interval: 24h
      dryrun: false
    readonly:
      enabled: false
http:
  addr: :5000
I am betting now on the network layer here - moby/moby#26492
The issue is not with the container talking to external services; it seems to be related to internal network problems. I guess as a workaround right now you can use IP addresses.
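Another workaround worth noting (an assumption on my part, not something tested in this thread): containers on a user-defined bridge network get Docker's embedded DNS, which tends to be more reliable than the legacy --link mechanism on the default bridge. A minimal sketch, omitting the volume and environment flags from the scripts above:

docker network create registry-net
docker run -d --name minio --net registry-net -p 9000:9000 minio/minio /export
docker run -d --name test_registry --net registry-net -p 5000:5000 registry:2
# 'minio' now resolves inside test_registry via the embedded DNS server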
Closing this as a Docker issue for now.
Not directly related, but I'm getting an s3: The AWS Access Key Id you provided does not exist in our records error from the registry -- it looks like my registry is still connecting to AWS S3 and not the one defined in regionendpoint.
Any clues?
^- it's not connecting to AWS; verified by setting asdfasdf as the host.
Now I got it solved (I was running a 2.2 registry...). Next up: The Content-Md5 you specified did not match what we received - looks like #1603.
@cirocosta Yes, I was running the wrong version of the registry, if I recall correctly.
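If you hit the same thing, a quick way to check which image a running registry container was started from (a sketch; the container name and the pinned tag are illustrative):

docker inspect -f '{{ .Config.Image }}' test_registry
# pin a specific release instead of the floating :2 tag
docker pull registry:2.5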
When I run the following I get the correct files in Minio storage, however I get "Retrying in 5 seconds" as if something went wrong.
If you arrive here, as I did, trying to debug this symptom while using Minio as a Docker registry backend: in short, my case was that redirects were enabled, so the address of the storage backend hosted in Kubernetes (http://minio:9000, i.e. minio.default.svc.cluster.local on port 9000 with a search path) was being leaked to the end user, who could not resolve 'minio'.
See https://docs.docker.com/registry/configuration/#redirect
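For reference, redirects can be disabled in the registry config so clients always pull blobs through the registry rather than being redirected to the storage backend (snippet follows the linked docs; merge it into the storage section of the configs above):

storage:
  redirect:
    disable: true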
This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.