dind-cluster does not work anymore
cscetbon opened this issue · 28 comments
The latest image seems broken. A week ago I was able to set up a k8s cluster, but now every attempt fails with:
* Making sure DIND image is up to date
v1.11: Pulling from mirantis/kubeadm-dind-cluster
Digest: sha256:6901087e83e9b04469a4d723e6582aefa80589ddd3137dc54449e4824005488c
Status: Image is up to date for mirantis/kubeadm-dind-cluster:v1.11
* Removing container: 39db1a014f6e
39db1a014f6e
* Starting DIND container: kube-master
A dependency job for docker.service failed. See 'journalctl -xe' for details.
I use Docker for Mac CE (Edge channel). I also tried on a Linux box and got exactly the same issue. I checked with both v1.11 and v1.12.
When I manually start the docker service in the kube-master container I get
# systemctl status docker.service
WARNING: terminal is not fully functional
● docker.service - Docker Application Container Engine
Loaded: loaded (/lib/systemd/system/docker.service; disabled; vendor preset: enabled)
Active: inactive (dead)
Docs: https://docs.docker.com
Nov 23 01:18:16 kube-master systemd[1]: Starting Docker Application Container Engine...
Nov 23 01:18:16 kube-master systemd[1]: Dependency failed for Docker Application Container Engine.
Nov 23 01:18:16 kube-master systemd[1]: docker.service: Job docker.service/start failed with result 'dependency'.
Nov 23 01:18:16 kube-master systemd[1]: Stopped Docker Application Container Engine.
I'm seeing that too. Is it something to do with the changes in #254 ?
What is the procedure for building a mirantis/kubeadm-dind-cluster image as it was before #254 was merged? Or does anyone already have such an image that they can share?
To answer my own question: with code as at nelljerram@0cbdd83 (which is a few changes on top of 5debf38), I have done
sudo aptitude install liblz4-tool
build/build-local.sh
docker tag mirantis/kubeadm-dind-cluster:local calico/kubeadm-dind-cluster:v1.12
docker push calico/kubeadm-dind-cluster:v1.12
Then, with calico/kubeadm-dind-cluster:v1.12 instead of mirantis/kubeadm-dind-cluster:v1.12, I can bring up a cluster again.
Thanks @neiljerram, I'm going to try a similar technique until this issue is fixed. I'm surprised nobody saw this issue before releasing a new version of the image...
My code @tklrchain ?
Suddenly hitting the same error. Full log attached below
full log
+ APISERVER_PORT=8081 dind-cluster up
WARNING: API is accessible on http://0.0.0.0:2375 without encryption.
Access to the remote API is equivalent to root access on the host. Refer
to the 'Docker daemon attack surface' section in the documentation for
more information: https://docs.docker.com/engine/security/security/#docker-daemon-attack-surface
WARNING: No swap limit support
WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled
* Making sure DIND image is up to date
v1.11: Pulling from mirantis/kubeadm-dind-cluster
d2519f41f710: Pulling fs layer
[... 25 further layers; per-layer "Waiting", "Verifying Checksum", "Download complete" and "Pull complete" lines omitted ...]
99572acbe121: Pull complete
Digest: sha256:6901087e83e9b04469a4d723e6582aefa80589ddd3137dc54449e4824005488c
Status: Downloaded newer image for mirantis/kubeadm-dind-cluster:v1.11
/root/.kubeadm-dind-cluster/kubectl-v1.11.3: OK
* Starting DIND container: kube-master
time="2018-11-26T02:37:01.122644531Z" level=info msg="No non-localhost DNS nameservers are left in resolv.conf. Using default external servers: [nameserver 8.8.8.8 nameserver 8.8.4.4]"
time="2018-11-26T02:37:01.122685737Z" level=info msg="IPv6 enabled; Adding default IPv6 external servers: [nameserver 2001:4860:4860::8888 nameserver 2001:4860:4860::8844]"
time="2018-11-26T02:37:01Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/f25fca8e9a33af9b50b4820707ed6b9e2c71de0d851cca8377010b2c4cb68a3e/shim.sock" debug=false pid=713
time="2018-11-26T02:37:01Z" level=warning msg="Running modprobe nf_nat failed with message: `ip: can't find device 'nf_nat'\nnf_nat_masquerade_ipv4 16384 1 ipt_MASQUERADE\nnf_nat_ipv4 16384 1 iptable_nat\nnf_nat 32768 3 xt_nat,nf_nat_masquerade_ipv4,nf_nat_ipv4\nnf_conntrack 131072 8 xt_nat,ipt_MASQUERADE,nf_nat_masquerade_ipv4,nf_conntrack_netlink,nf_conntrack_ipv4,nf_nat_ipv4,xt_conntrack,nf_nat\nlibcrc32c 16384 3 nf_nat,nf_conntrack,raid456\nmodprobe: can't change directory to '4.15.0-39-generic': No such file or directory`, error: exit status 1"
time="2018-11-26T02:37:01Z" level=warning msg="Running modprobe xt_conntrack failed with message: `ip: can't find device 'xt_conntrack'\nxt_conntrack 16384 6 \nnf_conntrack 131072 8 xt_nat,ipt_MASQUERADE,nf_nat_masquerade_ipv4,nf_conntrack_netlink,nf_conntrack_ipv4,nf_nat_ipv4,xt_conntrack,nf_nat\nx_tables 40960 7 xt_nat,xt_tcpudp,ipt_MASQUERADE,xt_addrtype,iptable_filter,xt_conntrack,ip_tables\nmodprobe: can't change directory to '4.15.0-39-generic': No such file or directory`, error: exit status 1"
* Running kubeadm: init --config /etc/kubeadm.conf --ignore-preflight-errors=all
Initializing machine ID from random generator.
A dependency job for docker.service failed. See 'journalctl -xe' for details.
docker failed to start. Diagnostics below:
● docker.service - Docker Application Container Engine
Loaded: loaded (/lib/systemd/system/docker.service; disabled; vendor preset: enabled)
Active: inactive (dead)
Docs: https://docs.docker.com
Yes, this looks like an old dind-cluster...sh script.
My continuous-integration build starts from a clean state and downloads the latest script every time.
- curl https://cdn.rawgit.com/kubernetes-sigs/kubeadm-dind-cluster/master/fixed/dind-cluster-v1.11.sh -o /usr/local/bin/dind-cluster
- chmod +x /usr/local/bin/dind-cluster
- until docker info; do sleep 1; done
- ln -s ~/.kubeadm-dind-cluster/kubectl /usr/local/bin/
- APISERVER_PORT=8081 dind-cluster up
I have not changed the above script for a few weeks and only started hitting the error recently. (I have temporarily disabled production build testing due to this issue.)
You can try to use gitcdn instead of rawgit, e.g. https://gitcdn.link/repo/kubernetes-sigs/kubeadm-dind-cluster/master/fixed/dind-cluster-v1.12.sh
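As a sketch of that, the CI download step could also be pinned to a specific commit instead of master, so a later image/script mismatch cannot bite; the commit hash below is only an example, not a recommendation:

```shell
# Illustrative only: build a gitcdn URL pinned to one commit rather than "master".
COMMIT="30a2033581adf53161fe1cdc76f1550193927db4"   # example commit hash
URL="https://gitcdn.link/repo/kubernetes-sigs/kubeadm-dind-cluster/${COMMIT}/fixed/dind-cluster-v1.12.sh"
echo "${URL}"
# curl -L "${URL}" -o /usr/local/bin/dind-cluster
# chmod +x /usr/local/bin/dind-cluster
```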
I'll update the docs
TBH it didn't occur to me that there would be an interdependency between the image and the scripting...
Same here. I now better understand why nobody else saw the issue. Shouldn't the image's sha256 digest be pinned in the script, then? Or at least pinned for tagged versions?
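A minimal sketch of that idea: referring to the image by its immutable digest instead of a mutable tag guarantees the exact image. The digest below is the v1.11 one reported in the log above; treat it as an example, not a blessed value:

```shell
# Pull by digest rather than tag (sketch; digest taken from the v1.11 log above).
IMAGE="mirantis/kubeadm-dind-cluster@sha256:6901087e83e9b04469a4d723e6582aefa80589ddd3137dc54449e4824005488c"
echo "${IMAGE}"
# docker pull "${IMAGE}"   # always resolves to exactly this image
```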
I just tried on another box with the latest version of the branch and it's working as expected.
I tried https://gitcdn.link/cdn/kubernetes-sigs/kubeadm-dind-cluster/30a2033581adf53161fe1cdc76f1550193927db4/fixed/dind-cluster-v1.12.sh but still got the same error. Maybe somehow the bot is not downloading the corresponding docker image, or maybe my issue is something different?
Can anyone check whether the digest sha256:8e679951101f3f2030e77a1146cc514631f21f424027fcc003fc78a0337eb730 is correct for mirantis/kubeadm-dind-cluster:v1.12, or whether you are seeing the same error messages below?
full log
+ curl -L https://gitcdn.link/cdn/kubernetes-sigs/kubeadm-dind-cluster/30a2033581adf53161fe1cdc76f1550193927db4/fixed/dind-cluster-v1.12.sh -o /usr/local/bin/dind-cluster
+ chmod +x /usr/local/bin/dind-cluster
+ until docker info; do sleep 1; done
Containers: 0
Running: 0
Paused: 0
Stopped: 0
Images: 0
Server Version: 18.09.0
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 468a545b9edcd5932818eb9de8e72413e616e86e
runc version: 69663f0bd4b60df09991c08812a60108003fa340
init version: fec3683
Security Options:
apparmor
seccomp
Profile: default
Kernel Version: 4.15.0-39-generic
Operating System: Alpine Linux v3.8 (containerized)
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 3.759GiB
Name: 6f8ec7252501
ID: H3TJ:WK55:73GR:W5QX:6W2K:TR5K:VM2N:ZXEU:CJ4G:ECOD:33G4:35ZW
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
Product License: Community Engine
WARNING: API is accessible on http://0.0.0.0:2375 without encryption.
Access to the remote API is equivalent to root access on the host. Refer
to the 'Docker daemon attack surface' section in the documentation for
more information: https://docs.docker.com/engine/security/security/#docker-daemon-attack-surface
WARNING: No swap limit support
WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled
+ ln -s ~/.kubeadm-dind-cluster/kubectl /usr/local/bin/
+ APISERVER_PORT=8081 dind-cluster up
WARNING: API is accessible on http://0.0.0.0:2375 without encryption.
Access to the remote API is equivalent to root access on the host. Refer
to the 'Docker daemon attack surface' section in the documentation for
more information: https://docs.docker.com/engine/security/security/#docker-daemon-attack-surface
WARNING: No swap limit support
WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled
* Making sure DIND image is up to date
v1.12: Pulling from mirantis/kubeadm-dind-cluster
d2519f41f710: Pulling fs layer
[... 25 further layers; per-layer "Waiting", "Verifying Checksum", "Download complete" and "Pull complete" lines omitted ...]
9030e8bcf587: Pull complete
Digest: sha256:8e679951101f3f2030e77a1146cc514631f21f424027fcc003fc78a0337eb730
Status: Downloaded newer image for mirantis/kubeadm-dind-cluster:v1.12
/root/.kubeadm-dind-cluster/kubectl-v1.12.1: OK
* Starting DIND container: kube-master
time="2018-11-27T01:42:03.622794333Z" level=info msg="No non-localhost DNS nameservers are left in resolv.conf. Using default external servers: [nameserver 8.8.8.8 nameserver 8.8.4.4]"
time="2018-11-27T01:42:03.622834907Z" level=info msg="IPv6 enabled; Adding default IPv6 external servers: [nameserver 2001:4860:4860::8888 nameserver 2001:4860:4860::8844]"
time="2018-11-27T01:42:03Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/bb07e61e8fe8084cd75ccf03bd7e75dd144569f5998fc82e551012c1c6f5c94e/shim.sock" debug=false pid=736
time="2018-11-27T01:42:03Z" level=warning msg="Running modprobe nf_nat failed with message: `ip: can't find device 'nf_nat'\nnf_nat_masquerade_ipv4 16384 1 ipt_MASQUERADE\nnf_nat_ipv4 16384 1 iptable_nat\nnf_nat 32768 3 xt_nat,nf_nat_masquerade_ipv4,nf_nat_ipv4\nnf_conntrack 131072 8 xt_nat,ipt_MASQUERADE,nf_nat_masquerade_ipv4,nf_conntrack_netlink,nf_conntrack_ipv4,nf_nat_ipv4,xt_conntrack,nf_nat\nlibcrc32c 16384 3 nf_nat,nf_conntrack,raid456\nmodprobe: can't change directory to '4.15.0-39-generic': No such file or directory`, error: exit status 1"
time="2018-11-27T01:42:04Z" level=warning msg="Running modprobe xt_conntrack failed with message: `ip: can't find device 'xt_conntrack'\nxt_conntrack 16384 6 \nnf_conntrack 131072 8 xt_nat,ipt_MASQUERADE,nf_nat_masquerade_ipv4,nf_conntrack_netlink,nf_conntrack_ipv4,nf_nat_ipv4,xt_conntrack,nf_nat\nx_tables 40960 7 xt_nat,xt_tcpudp,ipt_MASQUERADE,xt_addrtype,iptable_filter,xt_conntrack,ip_tables\nmodprobe: can't change directory to '4.15.0-39-generic': No such file or directory`, error: exit status 1"
* Running kubeadm: init --config /etc/kubeadm.conf --ignore-preflight-errors=all
Initializing machine ID from random generator.
A dependency job for docker.service failed. See 'journalctl -xe' for details.
docker failed to start. Diagnostics below:
● docker.service - Docker Application Container Engine
Loaded: loaded (/lib/systemd/system/docker.service; disabled; vendor preset: enabled)
Active: inactive (dead)
Docs: https://docs.docker.com
Nov 27 01:42:05 kube-master systemd[1]: Starting Docker Application Container Engine...
Nov 27 01:42:05 kube-master systemd[1]: Dependency failed for Docker Application Container Engine.
Nov 27 01:42:05 kube-master systemd[1]: docker.service: Job docker.service/start failed with result 'dependency'.
Nov 27 01:42:05 kube-master systemd[1]: Stopped Docker Application Container Engine.
*** kubeadm failed
I just tried on the CI bot and my local machine again. Both pulled the same image (sha256:f8a53b72213ca5f310fe000dde45e09a5a14ef78d209d9d8e6a67e465b1263fa); it worked on my laptop but not on the bot :/ I guess I have a different problem that I have to debug.
Any response to my proposal #255 (comment) ?
+1 to #255 (comment). It's already widely accepted that pinning Docker images by hash digest is better practice in general. E.g., https://www.spinnaker.io/guides/user/kubernetes-v2/best-practices/
I'd do it this way: start making releases for the project and push images with tags kdc-version--k8s-version, e.g. mirantis/kubeadm-dind-cluster:1.0--v1.12, and include them in fixed scripts which will be published as part of the release. The "released" scripts will also include digests for the images. Users will be encouraged to download scripts from GitHub releases. I can also make the CI push images for each commit, e.g. mirantis/kubeadm-dind-cluster:f9f9823--v1.12, for the cases when the released scripts/images are not enough. Corresponding fixed scripts for "non-released" commits will be stored as artifacts in CircleCI (and will also include digests). The fixed directory will be kept for a while and updated from the latest release. WDYT?
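For reference, the proposed tag scheme could be composed like this (a sketch; the names follow the proposal above, not an existing convention):

```shell
# Sketch of the proposed <kdc-version>--<k8s-version> tag scheme.
KDC_VERSION="1.0"
K8S_VERSION="v1.12"
TAG="mirantis/kubeadm-dind-cluster:${KDC_VERSION}--${K8S_VERSION}"
echo "${TAG}"
```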
@ivan4th I think that will solve the current issue, so good for me. As long as anyone who downloads a version of the scripts can keep using it for as long as they want, that's a 👍
Still broken on my side with this problem
I'm on macOS and hitting this issue as well. Running into it on v1.11 and v1.12 when bringing the cluster up.
Hi, sorry for the inconvenience. After #273 is merged k-d-c will have releases with scripts pinned to the images built for them.
No worries, stuff happens. The project is awesome; thanks for your hard work!
Hey @ivan4th, I'm getting the same issue again. I tried using the scripts from release v0.1.0 but I get errors. See https://pastebin.com/raw/sfU4WweZ
@ivan4th I don't know if you have some time to look into it, but I've been able to find out that it fails only when DIND_INSECURE_REGISTRIES is set:
* + Setting up insecure-registries on kube-master
+ jq+=("{\"insecure-registries\": ${DIND_INSECURE_REGISTRIES}}")
+ got_changes=1
+ [[ -n 1 ]]
++ IFS=+
++ echo '{}+{"insecure-registries": ["docker.com"]}'
+ local 'json={}+{"insecure-registries": ["docker.com"]}'
+ docker exec -i kube-master /bin/sh -c 'mkdir -p /etc/docker && jq -n '\''{}+{"insecure-registries": ["docker.com"]}'\'' > /etc/docker/daemon.json'
+ docker exec kube-master systemctl daemon-reload
+ docker exec kube-master systemctl restart docker
A dependency job for docker.service failed. See 'journalctl -xe' for details.
+ dind::cleanup
+ '[' 0 -gt 0 ']'
If I unset it, I no longer have any issues. It does work with an older version of the script, though.
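For reference, a minimal reproduction (outside the container) of the string the script hands to jq when DIND_INSECURE_REGISTRIES is set, built exactly as the trace above shows; the jq call itself is left commented out in case jq is not installed:

```shell
# Rebuild the jq program string from the trace above. `jq -n` would then
# evaluate the object sum into the daemon.json contents.
DIND_INSECURE_REGISTRIES='["docker.com"]'
json="{}+{\"insecure-registries\": ${DIND_INSECURE_REGISTRIES}}"
echo "${json}"
# jq -n "${json}" > /etc/docker/daemon.json
```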
Hello, trying to PoC this but running into the same issue: docker.service: Job docker.service/start failed with result 'dependency'. Tried with v1.11, v1.12 and v1.13 (all clean dind runs). Any workarounds?