nfs-provisioner arm64 support
omegazeng opened this issue · 6 comments
```
Failed to pull image "k8s.gcr.io/sig-storage/nfs-provisioner:v3.0.0": rpc error: code = Unknown desc = Error choosing image instance: no image found in manifest list for architecture arm64, variant "v8", OS linux
```
The Dockerfile at https://github.com/kubernetes-sigs/nfs-ganesha-server-and-external-provisioner/tree/v3.0.0/deploy/docker/arm/Dockerfile only covers arm32 (v7). I hope arm64 (v8) can be supported as well.
Thanks!
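For context, the platforms published in an image's manifest list can be checked with `docker manifest inspect`; a minimal sketch (the `grep` filter is just to trim the output):

```shell
# Print the platform entries of the manifest list; arm64 is absent for
# v3.0.0, which matches the pull error above.
docker manifest inspect k8s.gcr.io/sig-storage/nfs-provisioner:v3.0.0 \
  | grep -A 3 '"platform"'
```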
**build arm64**

```shell
GOOS=linux GOARCH=arm64 go build -o deploy/docker/nfs-provisioner ./cmd/nfs-provisioner
cd deploy/docker/
docker build . -t omegazeng/nfs-provisioner:latest
docker push omegazeng/nfs-provisioner:latest
```
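Note that plain `docker build` produces an image for whatever architecture the Docker host runs on, so the step above needs to run on an arm64 machine. A hedged sketch of an alternative that publishes a proper multi-arch manifest list with `docker buildx` (assumes a buildx builder with QEMU binfmt emulation is available, and that the binary copied into the image is rebuilt per target platform; the builder name `multiarch` is arbitrary):

```shell
# One-time: create and select a builder capable of multi-platform builds.
docker buildx create --use --name multiarch

# Build and push a manifest list covering amd64 and arm64 in one step.
docker buildx build deploy/docker \
  --platform linux/amd64,linux/arm64 \
  -t omegazeng/nfs-provisioner:latest \
  --push
```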
**install**

- nfs.yaml

```yaml
# kubectl create ns nfs --context=xxx
# helm repo add kvaps https://kvaps.github.io/charts
# helm install nfs -n nfs kvaps/nfs-server-provisioner -f nfs.yaml --kube-context=xxx
replicaCount: 1

image:
  repository: omegazeng/nfs-provisioner
  tag: latest
  pullPolicy: IfNotPresent

persistence:
  enabled: true
  storageClass: oci
  size: 100Gi

resources:
  limits:
    cpu: 200m
    memory: 256Mi
  requests:
    cpu: 100m
    memory: 128Mi
```

```shell
kubectl create ns nfs --context=xxx
helm repo add kvaps https://kvaps.github.io/charts
helm install nfs -n nfs kvaps/nfs-server-provisioner -f nfs.yaml --kube-context=xxx
```
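Before checking pods by hand, it can help to wait for the chart's StatefulSet to settle; a small hedged sketch (the StatefulSet name is inferred from the pod name below):

```shell
# Block until the single replica is rolled out and ready.
kubectl rollout status statefulset/nfs-nfs-server-provisioner -n nfs --context=xxx
```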
```console
$ k get po -n nfs
NAME                           READY   STATUS    RESTARTS   AGE
nfs-nfs-server-provisioner-0   1/1     Running   0          6m32s

$ k get sc
NAME            PROVISIONER                                RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
nfs             cluster.local/nfs-nfs-server-provisioner   Delete          Immediate              true                   175m
oci (default)   oracle.com/oci                             Delete          Immediate              false                  8d
oci-bv          blockvolume.csi.oraclecloud.com            Delete          WaitForFirstConsumer   false                  8d
```
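Since the whole point is arm64, it is worth confirming that the running container really is an arm64 build; a hedged check (assumes the image ships `uname`):

```shell
# Expect "aarch64" when the pod landed on an arm64 node.
k exec -n nfs nfs-nfs-server-provisioner-0 -- uname -m
```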
**test**

- test-dynamic-volume-claim.yaml

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-dynamic-volume-claim
spec:
  storageClassName: "nfs"
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
```
```console
$ k apply -f test-dynamic-volume-claim.yaml
$ k get pvc
NAME                        STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
test-dynamic-volume-claim   Bound    pvc-28d1e8e4-ebe7-4f12-a0b8-42e99d913ecc   100Mi      RWO            nfs            10m

$ k get pv
NAME                                                                                CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                   STORAGECLASS   REASON   AGE
ocid1.volume.oc1.phx.abyhqljsv43hgrbhwibnblhtg7myrf46gfd7yp4j5ialzxzftbw6i3g7qtba   100Gi      RWO            Delete           Bound    nfs/data-nfs-nfs-server-provisioner-0   oci                     178m
pvc-28d1e8e4-ebe7-4f12-a0b8-42e99d913ecc                                            100Mi      RWO            Delete           Bound    default/test-dynamic-volume-claim       nfs                     8m35s
```
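A `Bound` status only proves provisioning worked; to exercise the volume end to end, a throwaway pod can mount the claim and write to it. A minimal sketch (pod name, image, and file path are arbitrary choices, not from the chart):

```yaml
# test-nfs-pod.yaml: mount the freshly bound claim and write a file to it.
apiVersion: v1
kind: Pod
metadata:
  name: test-nfs-pod
spec:
  containers:
    - name: writer
      image: busybox
      command: ["sh", "-c", "echo hello > /data/hello.txt && sleep 3600"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: test-dynamic-volume-claim
```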
So far, so good.
I will keep this issue open until arm64 is officially supported.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue or PR as fresh with `/remove-lifecycle stale`
- Mark this issue or PR as rotten with `/lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue or PR as fresh with `/remove-lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Reopen this issue or PR with `/reopen`
- Mark this issue or PR as fresh with `/remove-lifecycle rotten`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close
@k8s-triage-robot: Closing this issue.

In response to this:

> The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
>
> This bot triages issues and PRs according to the following rules:
>
> - After 90d of inactivity, `lifecycle/stale` is applied
> - After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
> - After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
>
> You can:
>
> - Reopen this issue or PR with `/reopen`
> - Mark this issue or PR as fresh with `/remove-lifecycle rotten`
> - Offer to help out with Issue Triage
>
> Please send feedback to sig-contributor-experience at kubernetes/community.
>
> /close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.