kubernetes-sigs/cluster-api-provider-openstack

Add OpenStack compute labels to k8s nodes

cunningr opened this issue · 13 comments

/kind feature

Describe the solution you'd like

When creating k8s worker nodes, we would like to see a node label indicating (or representing) the underlying physical compute node, so that apps can be deployed using topology spread constraints and avoid being scheduled onto the same physical node.

In a topology where we host multiple k8s worker instances on the same physical compute host, this is an important requirement for some applications.
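For illustration, a workload could then spread across physical hosts with a constraint like the one below. This is only a sketch: the label key example.openstack.org/compute-host is hypothetical, standing in for whatever label ends up carrying the host identity.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      topologySpreadConstraints:
        # Spread replicas across distinct values of the (hypothetical)
        # physical-host label, i.e. across physical compute hosts.
        - maxSkew: 1
          topologyKey: example.openstack.org/compute-host
          whenUnsatisfiable: DoNotSchedule
          labelSelector:
            matchLabels:
              app: my-app
      containers:
        - name: app
          image: registry.k8s.io/pause:3.9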


Thank you. Possibly, although I am missing how the label gets onto the machine. I am aware that we are running quite an old version so maybe this has already been solved, but when I check the available labels on the machine, I only see:

  labels:
    cluster.x-k8s.io/cluster-name: clusterABC123
    machine-template-hash: "916525549"
    custom.label1/cluster: clusterABC123
    custom.label2/component: worker
    custom.label3/node-group: md-0

The OSM (OpenStackMachine) has the same labels as above. Do more recent versions of CAPO add labels for the instance's compute host?

This is a CAPI feature introduced in CAPI 1.4.0 (IIRC); basically, it allows you to propagate labels/annotations. The CAPI docs explain how it works and which labels/annotations are propagated from which resources to which.

For example, here we are adding labels to .spec.template.metadata.labels, which get propagated to the node.

In our case we use that to set the role of the nodes:

Code:

# MachineDeployment excerpt: labels under .spec.template.metadata.labels
# are propagated to the Machines and on to the corresponding Nodes.
spec:
...
  template:
    metadata:
      labels:
        node-role.kubernetes.io/worker: worker

and then the nodes in the cluster:

โฏ kubectl get node                                                                                                                                                                                                                        
NAME                                 STATUS   ROLES           AGE   VERSION
capi-dev-control-plane-5f8hr-64qf7   Ready    control-plane   8d    v1.26.4
capi-dev-control-plane-5f8hr-p25bj   Ready    control-plane   8d    v1.26.4
capi-dev-control-plane-5f8hr-xsnmb   Ready    control-plane   8d    v1.26.4
capi-dev-md-0-infra-nnh88-wtcw2      Ready    worker          8d    v1.26.4
capi-dev-md-0-infra-nnh88-wz48g      Ready    worker          8d    v1.26.4
capi-dev-md-0-infra-nnh88-z29qt      Ready    worker          8d    v1.26.4

These labels/annotations are updated in-place, and you don't need to roll out new machines.

I think we are already doing this with some of the labels we apply in the CAPI Cluster object. It's good to know that we can also apply labels in a similar way via the MachineTemplate, but I was wondering whether this metadata could be gleaned by the capo-controller-manager at instance deploy time (as it creates the OSM) and added to the Machine labels. From what you are describing, today we would need an additional function to glean the instance metadata we want and add it to the Machine objects, and then CAPI would sync it to the workload cluster nodes?
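To sketch the flow being described here (everything below is hypothetical; CAPO does not set such a label today): capo-controller-manager would stamp the Machine with a host label at create time, and, provided the label sits in a domain CAPI syncs to nodes (IIRC node.cluster.x-k8s.io is one of them), CAPI would propagate it to the workload cluster node in-place.

labels:
  cluster.x-k8s.io/cluster-name: clusterABC123
  # Hypothetical label written by capo-controller-manager from the
  # instance's compute host at deploy time; the node.cluster.x-k8s.io
  # domain is used so CAPI's Machine-to-Node label sync would pick it up.
  node.cluster.x-k8s.io/compute-host: <host-identifier>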

dulek commented

This is a valid use case from my perspective; however, the OpenStack API at the tenant level only gives you the hostId property, in the form of e.g. d934b1bca83fc5ec7d4d6e7a525dbf75c43dfffcad22a5ee5163bb8c. Would a label with that work for you?

From my understanding of how topology spread constraints work in k8s, we just need a unique representation of the host so that workloads can be scheduled on different hosts. So yes, I think it would, but I'm going to have another read 👍
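As a minimal sketch of why that suffices (the label key is hypothetical again; nothing sets it today): the hostId only needs to land as a node label value, and the label key then serves as the topologyKey in a spread constraint like the one earlier in this issue.

metadata:
  labels:
    # Hypothetical node label whose value is the instance's hostId.
    example.openstack.org/compute-host: d934b1bca83fc5ec7d4d6e7a525dbf75c43dfffcad22a5ee5163bb8c

Conveniently, the 56-character hostId also fits within Kubernetes' 63-character limit on label values.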

I wonder if this would fit better in cloud-provider-openstack, which IIRC is already adding the other topology labels?

I opened kubernetes/cloud-provider#67 to discuss adding this capability to the cloud provider.

We could also do this in CAPO, but we'd have to add a custom label to the machine after it has been created.

I wonder if this would fit better in cloud-provider-openstack, which IIRC is already adding the other topology labels?

CPO's CSI driver has this, but OCCM doesn't.

I opened kubernetes/cloud-provider#67 to discuss adding this capability to the cloud provider.

I saw this issue, but you mentioned it was rejected before. Do we have any link for it so we can check the history?


I can't find where I read the discussion, but from memory what was rejected was defining well-known topology labels beyond 'region' and 'zone'. IIRC the concern was that there is such a wide variety of potential topologies that it would quickly pollute the set of well-known labels.

I don't believe the concept of a hypervisor topology label was rejected, and there were certainly a lot of users asking for it. There was just no desire to define a well-known label for it, hence the label would be specific to CPO.