kubernetes-sigs/image-builder

Improve code for vSphere metadata setting

erkanerol opened this issue · 4 comments

Intro:

We discussed the metadata in the OVF file for Flatcar in a Slack thread, and I then opened PR #1337 to fix the Flatcar case.

That PR is still under discussion because the desired output is not clear. Independently of that, the implementation itself is problematic. This issue is about the implementation, not about the values/inputs/outputs.

Current Status:

In the code base there are two different inputs: guest_os_type and vsphere_guest_os_type. guest_os_type takes values like centos8-64 or windows2019Server-64, whereas vsphere_guest_os_type takes values like windows2019srv_64Guest or otherLinux64Guest.

There are re-assignments in the codebase, like the one below, but they do not make sense: the two variables refer to different kinds of values.

# https://github.com/kubernetes-sigs/image-builder/blob/7cb76c78e013db59783b93815caf710c5ad34bef/images/capi/packer/ova/packer-node.json#L155
 "guest_os_type": "{{user `vsphere_guest_os_type`}}",

There is an OVF template in the code base that is filled in by the image-build-ova.py script. The script takes guest_os_type as input and uses a static, hard-coded map to translate guest_os_type into vsphere_guest_os_type.
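As a rough sketch of the current approach, the lookup can be thought of as a plain dictionary keyed by guest_os_type. The function name and the map entries below are illustrative, not copied from image-build-ova.py:

```python
# Illustrative sketch of the current static-map approach in image-build-ova.py.
# GUEST_OS_MAP and resolve_vsphere_guest_os_type are hypothetical names;
# the entries are examples, not the script's actual table.

GUEST_OS_MAP = {
    "centos8-64": "centos8_64Guest",
    "windows2019Server-64": "windows2019srv_64Guest",
}

def resolve_vsphere_guest_os_type(guest_os_type):
    # Only values present in the static map are supported; anything
    # else raises KeyError, which is why the map is so limiting.
    return GUEST_OS_MAP[guest_os_type]
```

Any OS not listed in the map cannot be built without patching the script.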

What should be improved:

  • The re-assignments are incorrect and should be removed.
  • The script should also respect vsphere_guest_os_type when it is set, allowing users to override the value. The static, hard-coded map is very limiting.
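The override behaviour proposed above could look roughly like the sketch below. The names are hypothetical and the fallback value is only an assumption of what a sensible default might be:

```python
# Hypothetical sketch of the proposed behaviour: an explicit
# vsphere_guest_os_type wins, otherwise fall back to the static map,
# with a generic identifier for anything unknown.

GUEST_OS_MAP = {  # illustrative entries, not the script's actual table
    "centos8-64": "centos8_64Guest",
    "windows2019Server-64": "windows2019srv_64Guest",
}

def resolve_vsphere_guest_os_type(guest_os_type, vsphere_guest_os_type=None):
    if vsphere_guest_os_type:
        # User-supplied value overrides the map entirely.
        return vsphere_guest_os_type
    # Fall back to the map; default to a generic identifier so that
    # unmapped distros no longer fail outright.
    return GUEST_OS_MAP.get(guest_os_type, "otherLinux64Guest")
```

With this, users building an OS missing from the map can still set vsphere_guest_os_type directly instead of being blocked.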

/kind bug

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:


/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.