kubernetes/kubernetes

Ignore and potentially prevent reporting container status for non-existent containers

SergeyKanzhelev opened this issue · 2 comments

What happened?

Some third-party controllers may report a container status for containers that are not defined in the pod spec. This can lead to inconsistencies in the codebase and ideally should be blocked.

We saw this with admiraltyio/admiralty#206, but there may be more examples like it, since Kubernetes has never checked statuses for consistency with specs.

What did you expect to happen?

As mentioned in #124906 (review), usages of container statuses need to be reviewed, and in most places we should start ignoring statuses for non-existent containers.
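
A minimal sketch of what that filtering could look like (a hypothetical helper, not existing kubelet code): keep only statuses whose container name is declared somewhere in the pod spec, and drop the rest.

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

// filterStatuses is a hypothetical helper: it keeps only statuses whose
// container name is declared in the pod spec (regular, init, or
// ephemeral containers) and silently drops everything else.
func filterStatuses(pod *v1.Pod, statuses []v1.ContainerStatus) []v1.ContainerStatus {
	known := make(map[string]bool)
	for _, c := range pod.Spec.Containers {
		known[c.Name] = true
	}
	for _, c := range pod.Spec.InitContainers {
		known[c.Name] = true
	}
	for _, c := range pod.Spec.EphemeralContainers {
		known[c.Name] = true
	}
	filtered := make([]v1.ContainerStatus, 0, len(statuses))
	for _, s := range statuses {
		if known[s.Name] {
			filtered = append(filtered, s)
		}
	}
	return filtered
}

func main() {
	// Mirrors the reproduction below: the spec declares "tester" and
	// "init-proxy", but a status arrives for "not-existing-container".
	pod := &v1.Pod{Spec: v1.PodSpec{
		InitContainers: []v1.Container{{Name: "tester"}, {Name: "init-proxy"}},
	}}
	statuses := []v1.ContainerStatus{
		{Name: "not-existing-container"},
		{Name: "tester"},
	}
	fmt.Println(filterStatuses(pod, statuses)) // only "tester" survives
}
```

Where exactly such filtering should live (the kubelet status manager, API validation, or both) is part of what this issue asks to be reviewed.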

How can we reproduce it (as minimally and precisely as possible)?

Update the pod status with a container status for a container that doesn't exist:

"apiVersion": "v1",
"kind": "Pod",
"metadata": {
            .....
},
"spec": {
  "containers": [],
  "initContainers": [
    {
      ....
      "name": "tester",
      ....
    },
    {
        ....
      "name": "init-proxy",
      ....
    }
  ],
  "nodeName": "foo",
},
"status": {
  "containerStatuses": [
    ....
  ],
  "initContainerStatuses": [
    {
      ....
      "imageID": "",
      "lastState": {},
      "name": "not-existing-container",
      "ready": false,
      "restartCount": 0,
      "state": {
        "waiting": {
          "reason": "PodInitializing"
        }
      }
    },
    {
        ......
      "imageID": "",
      "lastState": {},
      "name": "tester",
      "ready": false,
      "restartCount": 0,
      "state": {
        "waiting": {
          "reason": "PodInitializing"
        }
      }
    },
    {
      ....
      "imageID": "",
      "lastState": {},
      "name": "init-proxy",
      "ready": false,
      "restartCount": 0,
      "state": {
        "waiting": {
          "reason": "PodInitializing"
        }
      }
    }
  ],
  "phase": "Pending",
  "qosClass": "Burstable",
  "startTime": "2024-05-07T23:55:23Z"
}
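
One way to write such a status is a client-go sketch like the following (the pod name "foo", namespace "default", and kubeconfig path are placeholders): the status subresource accepts the bogus entry because the API server does not cross-check status container names against the spec. Note that the manifest above binds the pod to `"nodeName": "foo"`, presumably so that no real kubelet overwrites the injected status.

```go
package main

import (
	"context"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the local kubeconfig (path is a placeholder).
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	pods := kubernetes.NewForConfigOrDie(config).CoreV1().Pods("default")

	ctx := context.Background()
	pod, err := pods.Get(ctx, "foo", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// Report a status for a container that is not declared in the spec.
	pod.Status.InitContainerStatuses = append(pod.Status.InitContainerStatuses, v1.ContainerStatus{
		Name:  "not-existing-container",
		State: v1.ContainerState{Waiting: &v1.ContainerStateWaiting{Reason: "PodInitializing"}},
	})

	// UpdateStatus writes only the status subresource; the write succeeds
	// even though "not-existing-container" is absent from the pod spec.
	if _, err := pods.UpdateStatus(ctx, pod, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}
```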

Anything else we need to know?

/sig node

Kubernetes version

```console
$ kubectl version
# paste output here
```

Cloud provider

N/A

OS version

N/A

Install tools

N/A

Container runtime (CRI) and version (if applicable)

Any

Related plugins (CNI, CSI, ...) and versions (if applicable)

N/A

This issue is currently awaiting triage.

If a SIG or subproject determines this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.