add clarity on the GPU model format for the `price_target_gpu_mappings` config
Closed this issue · 0 comments
andy108369 commented
I've noticed one provider set `4090` instead of `rtx4090` in his `price_target_gpu_mappings` (in `provider.yaml`).
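For reference, a minimal sketch of what the corrected mapping might look like (assuming the usual comma-separated `<model>=<price>` format; the prices and the `a100` entry are made up for illustration):

```yaml
# provider.yaml (sketch, illustrative prices only)
price_target_gpu_mappings: "rtx4090=150,a100=900"   # correct: model name is "rtx4090"
# price_target_gpu_mappings: "4090=150,a100=900"    # wrong: "4090" does not match the operator-inventory name
```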
We need to clarify in the docs that the model name must match the name set by operator-inventory based on the `gpus.json` file. It can also be viewed by running the following commands:
```
kubectl get nodes --show-labels
```

Example:

```
$ kubectl get nodes --show-labels
NAME    STATUS   ROLES           AGE   VERSION   LABELS
node1   Ready    control-plane   29h   v1.28.6   akash.network/capabilities.gpu.vendor.nvidia.model.rtx4090.interface.pcie=8,akash.network/capabilities.gpu.vendor.nvidia.model.rtx4090.ram.24Gi=8,akash.network/capabilities.gpu.vendor.nvidia.model.rtx4090=8,akash.network/capabilities.storage.class.beta3=1,akash.network=true,beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node1,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node.kubernetes.io/exclude-from-external-load-balancers=,nvidia.com/gpu.present=true
node2   Ready    control-plane   29h   v1.28.6   akash.network/capabilities.gpu.vendor.nvidia.model.rtx4090.interface.pcie=8,akash.network/capabilities.gpu.vendor.nvidia.model.rtx4090.ram.24Gi=8,akash.network/capabilities.gpu.vendor.nvidia.model.rtx4090=8,akash.network/capabilities.storage.class.beta3=1,akash.network=true,beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node2,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node.kubernetes.io/exclude-from-external-load-balancers=,nvidia.com/gpu.present=true
node3   Ready    <none>          29h   v1.28.6   akash.network/capabilities.gpu.vendor.nvidia.model.rtx4090.interface.pcie=8,akash.network/capabilities.gpu.vendor.nvidia.model.rtx4090.ram.24Gi=8,akash.network/capabilities.gpu.vendor.nvidia.model.rtx4090=8,akash.network/capabilities.storage.class.beta3=1,akash.network=true,beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node3,kubernetes.io/os=linux,nvidia.com/gpu.present=true
```
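Since the full label output is dense, a small helper (a sketch; it assumes `jq` is installed and keys on the `akash.network/capabilities.gpu.vendor` label prefix shown above) can list only the GPU model labels per node:

```shell
# Print only the akash.network GPU capability labels for each node (sketch, requires jq)
kubectl get nodes -o json \
  | jq -r '.items[]
           | .metadata.name + ": "
             + (.metadata.labels | keys[] | select(startswith("akash.network/capabilities.gpu.vendor")))'
```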
```
$ kubectl -n akash-services get pod akash-provider-0 -o yaml | grep -A1 AKASH_FROM
    - name: AKASH_FROM
      value: akash1cnzkdynwd4u6j7s8z5j0fg76h3g6yhsggmuqta
...
```
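As a shortcut, the address can be pulled straight out of the pod spec and passed to the query below in one go (a sketch; it assumes the provider container is the first container of `akash-provider-0`):

```shell
# Hypothetical one-liner: read AKASH_FROM from the pod env, then query the on-chain provider record
ADDR=$(kubectl -n akash-services get pod akash-provider-0 \
  -o jsonpath='{.spec.containers[0].env[?(@.name=="AKASH_FROM")].value}')
provider-services query provider get "$ADDR" -o text
```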
```
$ provider-services query provider get akash1cnzkdynwd4u6j7s8z5j0fg76h3g6yhsggmuqta -o text
attributes:
...
...
- key: capabilities/gpu/vendor/nvidia/model/rtx4090
  value: "true"
- key: capabilities/gpu/vendor/nvidia/model/rtx4090/ram/24Gi
  value: "true"
- key: capabilities/gpu/vendor/nvidia/model/rtx4090/ram/24Gi/interface/pcie
  value: "true"
- key: capabilities/gpu/vendor/nvidia/model/rtx4090/interface/pcie
  value: "true"
...
```
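To compare against what is set in `price_target_gpu_mappings`, the model names alone can be extracted from that output (a sketch using grep/sed; the pattern assumes lowercase alphanumeric model names, as above):

```shell
# List just the GPU model names advertised by the provider (hypothetical helper)
provider-services query provider get akash1cnzkdynwd4u6j7s8z5j0fg76h3g6yhsggmuqta -o text \
  | grep -oE 'model/[a-z0-9]+' | sed 's|model/||' | sort -u
```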
As you can see, the model name is `rtx4090`, not `4090`.