nfs-client example test-pod doesn't work on arm
sgielen opened this issue · 4 comments
I'm walking through the "Without helm" instructions for the nfs-client-provisioner on ARM64. Everything works up to a point: when I kubectl apply -f test-claim.yaml, a PV is created, the PVC is bound, and a directory appears on NFS.
But when I kubectl apply -f test-pod.yaml, the pod is created but never comes up:
$ kubectl get pods | grep test-pod
test-pod 0/1 Error 0 3m16s
$ kubectl describe pod test-pod | tail -n8
Events:
  Type    Reason     Age        From               Message
  ----    ------     ---        ----               -------
  Normal  Scheduled  <unknown>  default-scheduler  Successfully assigned default/test-pod to kathleen
  Normal  Pulling    3m28s      kubelet, kathleen  Pulling image "gcr.io/google_containers/busybox:1.24"
  Normal  Pulled     3m25s      kubelet, kathleen  Successfully pulled image "gcr.io/google_containers/busybox:1.24"
  Normal  Created    3m25s      kubelet, kathleen  Created container test-pod
  Normal  Started    3m25s      kubelet, kathleen  Started container test-pod
$ kubectl logs test-pod
standard_init_linux.go:211: exec user process caused "exec format error"
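The node itself is ARM64, which can be double-checked via the architecture the kubelet reports (kathleen is the node from the events above):
$ kubectl get node kathleen -o jsonpath='{.status.nodeInfo.architecture}'
arm64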
Most likely, the gcr.io/google_containers/busybox:1.24 image doesn't have an ARM64 variant, so the x86-64 variant is pulled instead, which causes the exec format error.
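This can be verified by inspecting the image's manifest list, e.g. with docker manifest inspect (assuming a Docker CLI recent enough to support the manifest subcommand):
$ docker manifest inspect busybox:latest | grep architecture
# a multi-arch image lists one "architecture" entry per platform (amd64, arm64, arm, ...);
# an image published only for amd64 would confirm the mismatch on ARM64 nodes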
Since the guide explicitly contains this line:
Note: To deploy to an ARM-based environment, use: deploy/deployment-arm.yaml instead, otherwise use deploy/deployment.yaml.
...perhaps the test-pod.yaml could be patched to support ARM as well. I've tested that it works properly like this:
diff --git a/nfs-client/deploy/test-pod.yaml b/nfs-client/deploy/test-pod.yaml
index e5e7b7fe..8196a048 100644
--- a/nfs-client/deploy/test-pod.yaml
+++ b/nfs-client/deploy/test-pod.yaml
@@ -5,7 +5,7 @@ metadata:
spec:
containers:
- name: test-pod
- image: gcr.io/google_containers/busybox:1.24
+ image: busybox:latest
command:
- "/bin/sh"
args:
With this patch, the container comes up normally and the SUCCESS file appears on NFS.
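For reference, the complete test-pod.yaml after this patch would look roughly as follows (a sketch reconstructed from the upstream example; the touch command and claim name follow the repo's defaults):
kind: Pod
apiVersion: v1
metadata:
  name: test-pod
spec:
  containers:
  - name: test-pod
    image: busybox:latest   # multi-arch image, so the ARM64 variant is pulled on ARM nodes
    command:
      - "/bin/sh"
    args:
      - "-c"
      - "touch /mnt/SUCCESS && exit 0 || exit 1"   # creates the SUCCESS marker on the NFS-backed volume
    volumeMounts:
      - name: nfs-pvc
        mountPath: "/mnt"
  restartPolicy: "Never"
  volumes:
    - name: nfs-pvc
      persistentVolumeClaim:
        claimName: test-claim   # the PVC created by test-claim.yaml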
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
@fejta-bot: Closing this issue.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.